Diffstat (limited to 'doc/rtd')
-rw-r--r--  doc/rtd/topics/datasources/altcloud.rst      | 23
-rw-r--r--  doc/rtd/topics/datasources/azure.rst         |  3
-rw-r--r--  doc/rtd/topics/datasources/cloudstack.rst    |  2
-rw-r--r--  doc/rtd/topics/datasources/configdrive.rst   | 16
-rw-r--r--  doc/rtd/topics/datasources/digitalocean.rst  |  6
-rw-r--r--  doc/rtd/topics/datasources/ec2.rst           | 11
-rw-r--r--  doc/rtd/topics/datasources/exoscale.rst      | 12
-rw-r--r--  doc/rtd/topics/datasources/nocloud.rst       | 16
-rw-r--r--  doc/rtd/topics/datasources/opennebula.rst    | 30
-rw-r--r--  doc/rtd/topics/datasources/openstack.rst     |  3
-rw-r--r--  doc/rtd/topics/datasources/smartos.rst       | 12
-rw-r--r--  doc/rtd/topics/debugging.rst                 | 14
-rw-r--r--  doc/rtd/topics/dir_layout.rst                | 39
-rw-r--r--  doc/rtd/topics/examples.rst                  |  2
-rw-r--r--  doc/rtd/topics/format.rst                    | 81
-rw-r--r--  doc/rtd/topics/merging.rst                   | 18
-rw-r--r--  doc/rtd/topics/network-config-format-v2.rst  | 18
17 files changed, 175 insertions, 131 deletions
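Several of the format.rst hunks below reflow the description of the mime multi-part archive, in which each part of the user-data carries a content-type such as ``text/x-shellscript`` or ``text/cloud-config``. As an illustrative sketch only (standard-library code, not cloud-init's own tooling), such an archive can be assembled with Python's ``email`` package:

```python
# Sketch: build a mime multi-part user-data archive as described in
# doc/rtd/topics/format.rst. The filenames and part contents here are
# hypothetical examples, not values defined by cloud-init.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_user_data(parts):
    """parts: list of (content, mime-subtype, filename) tuples."""
    combined = MIMEMultipart()
    for content, subtype, filename in parts:
        # MIMEText(..., "x-shellscript") yields Content-Type:
        # text/x-shellscript, one of the supported content-types.
        part = MIMEText(content, subtype)
        part.add_header("Content-Disposition", "attachment",
                        filename=filename)
        combined.attach(part)
    return combined

msg = build_user_data([
    ("#!/bin/sh\necho hello\n", "x-shellscript", "script.sh"),
    ("#cloud-config\npackages: []\n", "cloud-config", "config.yaml"),
])
```

cloud-init then applies its per-part format rules to each attachment, so a shell script and a cloud-config document can travel in one user-data blob.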
diff --git a/doc/rtd/topics/datasources/altcloud.rst b/doc/rtd/topics/datasources/altcloud.rst
index eeb197f2..9d7e3de1 100644
--- a/doc/rtd/topics/datasources/altcloud.rst
+++ b/doc/rtd/topics/datasources/altcloud.rst
@@ -3,24 +3,25 @@
 Alt Cloud
 =========
 
-The datasource altcloud will be used to pick up user data on `RHEVm`_ and `vSphere`_.
+The datasource altcloud will be used to pick up user data on `RHEVm`_ and
+`vSphere`_.
 
 RHEVm
 -----
 
 For `RHEVm`_ v3.0 the userdata is injected into the VM using floppy
-injection via the `RHEVm`_ dashboard "Custom Properties". 
+injection via the `RHEVm`_ dashboard "Custom Properties".
 
 The format of the Custom Properties entry must be:
 
 ::
-    
+
     floppyinject=user-data.txt:<base64 encoded data>
 
 For example to pass a simple bash script:
 
 .. sourcecode:: sh
-    
+
     % cat simple_script.bash
     #!/bin/bash
     echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
@@ -38,7 +39,7 @@ set the "Custom Properties" when creating the RHEVm v3.0 VM to:
 **NOTE:** The prefix with file name must be: ``floppyinject=user-data.txt:``
 
 It is also possible to launch a `RHEVm`_ v3.0 VM and pass optional user
-data to it using the Delta Cloud. 
+data to it using the Delta Cloud.
 
 For more information on Delta Cloud see: http://deltacloud.apache.org
 
@@ -46,12 +47,12 @@ vSphere
 -------
 
 For VMWare's `vSphere`_ the userdata is injected into the VM as an ISO
-via the cdrom. This can be done using the `vSphere`_ dashboard 
+via the cdrom. This can be done using the `vSphere`_ dashboard
 by connecting an ISO image to the CD/DVD drive.
 
 To pass this example script to cloud-init running in a `vSphere`_ VM
 set the CD/DVD drive when creating the vSphere VM to point to an
-ISO on the data store. 
+ISO on the data store.
 
 **Note:** The ISO must contain the user data.
 
@@ -61,13 +62,13 @@ Create the ISO
 ^^^^^^^^^^^^^^
 
 .. sourcecode:: sh
-    
+
     % mkdir my-iso
 
 NOTE: The file name on the ISO must be: ``user-data.txt``
 
 .. sourcecode:: sh
-    
+
     % cp simple_script.bash my-iso/user-data.txt
     % genisoimage -o user-data.iso -r my-iso
 
@@ -75,7 +76,7 @@ Verify the ISO
 ^^^^^^^^^^^^^^
 
 .. sourcecode:: sh
-    
+
     % sudo mkdir /media/vsphere_iso
     % sudo mount -o loop user-data.iso /media/vsphere_iso
     % cat /media/vsphere_iso/user-data.txt
@@ -84,7 +85,7 @@ Verify the ISO
 
 Then, launch the `vSphere`_ VM with the ISO user-data.iso attached as a CDROM.
 
 It is also possible to launch a `vSphere`_ VM and pass optional user
-data to it using the Delta Cloud. 
+data to it using the Delta Cloud.
 
 For more information on Delta Cloud see: http://deltacloud.apache.org
 
diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
index b41cddd9..8328dfad 100644
--- a/doc/rtd/topics/datasources/azure.rst
+++ b/doc/rtd/topics/datasources/azure.rst
@@ -82,7 +82,8 @@ The settings that may be configured are:
    provided command to obtain metadata.
  * **apply_network_config**: Boolean set to True to use network configuration
    described by Azure's IMDS endpoint instead of fallback network config of
-   dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is False.
+   dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is
+   False.
  * **data_dir**: Path used to read metadata files and write crawled data.
  * **dhclient_lease_file**: The fallback lease file to source when looking for
    custom DHCP option 245 from Azure fabric.
diff --git a/doc/rtd/topics/datasources/cloudstack.rst b/doc/rtd/topics/datasources/cloudstack.rst
index a3101ed7..95b95874 100644
--- a/doc/rtd/topics/datasources/cloudstack.rst
+++ b/doc/rtd/topics/datasources/cloudstack.rst
@@ -7,7 +7,7 @@ CloudStack
 sshkey thru the Virtual-Router. The datasource obtains the VR address via
 dhcp lease information given to the instance.
 For more details on meta-data and user-data,
-refer the `CloudStack Administrator Guide`_. 
+refer the `CloudStack Administrator Guide`_.
 
 URLs to access user-data and meta-data from the Virtual Machine.
 Here 10.1.1.1 is the Virtual Router IP:
 
diff --git a/doc/rtd/topics/datasources/configdrive.rst b/doc/rtd/topics/datasources/configdrive.rst
index f1a488a2..f4c5a34a 100644
--- a/doc/rtd/topics/datasources/configdrive.rst
+++ b/doc/rtd/topics/datasources/configdrive.rst
@@ -64,7 +64,7 @@ The following criteria are required to as a config drive:
 ::
 
   openstack/
-    - 2012-08-10/ or latest/ 
+    - 2012-08-10/ or latest/
       - meta_data.json
       - user_data (not mandatory)
       - content/
@@ -83,7 +83,7 @@ only) file in the following ways.
 
 ::
 
-  dsmode: 
+  dsmode:
     values: local, net, pass
     default: pass
 
@@ -97,10 +97,10 @@ The difference between 'local' and 'net' is that local will not require
 networking to be up before user-data actions (or boothooks) are run.
 
 ::
-    
+
   instance-id:
     default: iid-dsconfigdrive
-    
+
 This is utilized as the metadata's instance-id. It should generally
 be unique, as it is what is used to determine "is this a new instance".
 
@@ -108,18 +108,18 @@ be unique, as it is what is used to determine "is this a new instance".
 
   public-keys:
     default: None
-    
+
 If present, these keys will be used as the public keys for the
 instance. This value overrides the content in authorized_keys.
 
 Note: it is likely preferable to provide keys via user-data
 
 ::
-    
+
   user-data:
     default: None
-    
-This provides cloud-init user-data. See :ref:`examples <yaml_examples>` for 
+
+This provides cloud-init user-data. See :ref:`examples <yaml_examples>` for
 what all can be present here.
 
 .. _OpenStack: http://www.openstack.org/
diff --git a/doc/rtd/topics/datasources/digitalocean.rst b/doc/rtd/topics/datasources/digitalocean.rst
index 938ede89..88f1e5f5 100644
--- a/doc/rtd/topics/datasources/digitalocean.rst
+++ b/doc/rtd/topics/datasources/digitalocean.rst
@@ -20,8 +20,10 @@ DigitalOcean's datasource can be configured as follows:
       retries: 3
       timeout: 2
 
-- *retries*: Determines the number of times to attempt to connect to the metadata service
-- *timeout*: Determines the timeout in seconds to wait for a response from the metadata service
+- *retries*: Determines the number of times to attempt to connect to the
+  metadata service
+- *timeout*: Determines the timeout in seconds to wait for a response from the
+  metadata service
 
 .. _DigitalOcean: http://digitalocean.com/
 .. _metadata service: https://developers.digitalocean.com/metadata/
diff --git a/doc/rtd/topics/datasources/ec2.rst b/doc/rtd/topics/datasources/ec2.rst
index 76beca92..a90f3779 100644
--- a/doc/rtd/topics/datasources/ec2.rst
+++ b/doc/rtd/topics/datasources/ec2.rst
@@ -13,7 +13,7 @@ instance metadata.
 Metadata is accessible via the following URL:
 
 ::
-    
+
     GET http://169.254.169.254/2009-04-04/meta-data/
     ami-id
     ami-launch-index
@@ -34,19 +34,20 @@ Metadata is accessible via the following URL:
 Userdata is accessible via the following URL:
 
 ::
-    
+
     GET http://169.254.169.254/2009-04-04/user-data
     1234,fred,reboot,true | 4512,jimbo, | 173,,,
 
 Note that there are multiple versions of this data provided, cloud-init
 by default uses **2009-04-04** but newer versions can be supported with
 relative ease (newer versions have more data exposed, while maintaining
-backward compatibility with the previous versions). 
+backward compatibility with the previous versions).
 
-To see which versions are supported from your cloud provider use the following URL:
+To see which versions are supported from your cloud provider use the following
+URL:
 
 ::
-    
+
     GET http://169.254.169.254/
     1.0
     2007-01-19
diff --git a/doc/rtd/topics/datasources/exoscale.rst b/doc/rtd/topics/datasources/exoscale.rst
index 27aec9cd..9074edc6 100644
--- a/doc/rtd/topics/datasources/exoscale.rst
+++ b/doc/rtd/topics/datasources/exoscale.rst
@@ -26,8 +26,8 @@ In the password server case, the following rules apply in order to enable the
 "restore instance password" functionality:
 
 * If a password is returned by the password server, it is then marked "saved"
-  by the cloud-init datasource. Subsequent boots will skip setting the password
-  (the password server will return "saved_password").
+  by the cloud-init datasource. Subsequent boots will skip setting the
+  password (the password server will return "saved_password").
 * When the instance password is reset (via the Exoscale UI), the password
   server will return the non-empty password at next boot, therefore causing
   cloud-init to reset the instance's password.
@@ -38,15 +38,15 @@ Configuration
 Users of this datasource are discouraged from changing the default settings
 unless instructed to by Exoscale support.
 
-The following settings are available and can be set for the datasource in system
-configuration (in `/etc/cloud/cloud.cfg.d/`).
+The following settings are available and can be set for the datasource in
+system configuration (in `/etc/cloud/cloud.cfg.d/`).
 
 The settings available are:
 
  * **metadata_url**: The URL for the metadata service (defaults to
   ``http://169.254.169.254``)
- * **api_version**: The API version path on which to query the instance metadata
-  (defaults to ``1.0``)
+ * **api_version**: The API version path on which to query the instance
+  metadata (defaults to ``1.0``)
 * **password_server_port**: The port (on the metadata server) on which the
   password server listens (defaults to ``8080``).
 * **timeout**: the timeout value provided to urlopen for each individual http
diff --git a/doc/rtd/topics/datasources/nocloud.rst b/doc/rtd/topics/datasources/nocloud.rst
index 1c5cf961..bc96f7fe 100644
--- a/doc/rtd/topics/datasources/nocloud.rst
+++ b/doc/rtd/topics/datasources/nocloud.rst
@@ -57,24 +57,24 @@ Given a disk ubuntu 12.04 cloud image in 'disk.img', you can create a
 sufficient disk by following the example below.
 
 ::
-    
+
     ## create user-data and meta-data files that will be used
     ## to modify image on first boot
     $ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
-    
+
     $ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
-    
+
     ## create a disk to attach with some user-data and meta-data
     $ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
-    
+
     ## alternatively, create a vfat filesystem with same files
     ## $ truncate --size 2M seed.img
     ## $ mkfs.vfat -n cidata seed.img
     ## $ mcopy -oi seed.img user-data meta-data
 
 ::
-    
+
     ## create a new qcow image to boot, backed by your original image
     $ qemu-img create -f qcow2 -b disk.img boot-disk.img
-    
+
     ## boot the image and login as 'ubuntu' with password 'passw0rd'
     ## note, passw0rd was set as password through the user-data above,
     ## there is no password set on these images.
@@ -88,12 +88,12 @@ to determine if this is "first boot". So if you are making updates to
 user-data you will also have to change that, or start the disk fresh.
 
 Also, you can inject an ``/etc/network/interfaces`` file by providing the
-content for that file in the ``network-interfaces`` field of metadata. 
+content for that file in the ``network-interfaces`` field of metadata.
 
 Example metadata:
 
 ::
-    
+
     instance-id: iid-abcdefg
     network-interfaces: |
       iface eth0 inet static
diff --git a/doc/rtd/topics/datasources/opennebula.rst b/doc/rtd/topics/datasources/opennebula.rst
index 7c0367c4..8e7c2558 100644
--- a/doc/rtd/topics/datasources/opennebula.rst
+++ b/doc/rtd/topics/datasources/opennebula.rst
@@ -21,7 +21,7 @@ Datasource configuration
 Datasource accepts following configuration options.
 
 ::
-    
+
     dsmode:
       values: local, net, disabled
       default: net
@@ -30,7 +30,7 @@ Tells if this datasource will be processed in 'local' (pre-networking) or
 'net' (post-networking) stage or even completely 'disabled'.
 
 ::
-    
+
     parseuser:
       default: nobody
 
@@ -46,7 +46,7 @@ The following criteria are required:
    or have a *filesystem* label of **CONTEXT** or **CDROM**
 2. Must contain file *context.sh* with contextualization variables.
    File is generated by OpenNebula, it has a KEY='VALUE' format and
-   can be easily read by bash 
+   can be easily read by bash
 
 Contextualization variables
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -57,7 +57,7 @@ the OpenNebula documentation. Where multiple similar variables are
 specified, only first found is taken.
 
 ::
-    
+
     DSMODE
 
 Datasource mode configuration override. Values: local, net, disabled.
@@ -75,30 +75,30 @@ Datasource mode configuration override. Values: local, net, disabled.
 Static `network configuration`_.
 
 ::
-    
+
     HOSTNAME
 
 Instance hostname.
 
 ::
-    
+
     PUBLIC_IP
     IP_PUBLIC
     ETH0_IP
 
 If no hostname has been specified, cloud-init will try to create hostname
-from instance's IP address in 'local' dsmode. In 'net' dsmode, cloud-init 
+from instance's IP address in 'local' dsmode. In 'net' dsmode, cloud-init
 tries to resolve one of its IP addresses to get hostname.
 
 ::
-    
+
     SSH_KEY
     SSH_PUBLIC_KEY
 
 One or multiple SSH keys (separated by newlines) can be specified.
 
 ::
-    
+
     USER_DATA
     USERDATA
 
@@ -111,7 +111,7 @@ This example cloud-init configuration (*cloud.cfg*) enables
 OpenNebula datasource only in 'net' mode.
 
 ::
-    
+
     disable_ec2_metadata: True
     datasource_list: ['OpenNebula']
     datasource:
@@ -123,17 +123,17 @@ Example VM's context section
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ::
-    
+
     CONTEXT=[
       PUBLIC_IP="$NIC[IP]",
-      SSH_KEY="$USER[SSH_KEY] 
-    $USER[SSH_KEY1] 
+      SSH_KEY="$USER[SSH_KEY]
+    $USER[SSH_KEY1]
     $USER[SSH_KEY2] ",
       USER_DATA="#cloud-config
     # see https://help.ubuntu.com/community/CloudInit
-    
+
     packages: []
-    
+
     mounts:
     - [vdc,none,swap,sw,0,0]
     runcmd:
diff --git a/doc/rtd/topics/datasources/openstack.rst b/doc/rtd/topics/datasources/openstack.rst
index 421da08f..8ce2a53d 100644
--- a/doc/rtd/topics/datasources/openstack.rst
+++ b/doc/rtd/topics/datasources/openstack.rst
@@ -78,6 +78,7 @@ upgrade packages and install ``htop`` on all instances:
 
   {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"}
 
 For more general information about how cloud-init handles vendor data,
-including how it can be disabled by users on instances, see :doc:`/topics/vendordata`.
+including how it can be disabled by users on instances, see
+:doc:`/topics/vendordata`.
 
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/smartos.rst b/doc/rtd/topics/datasources/smartos.rst
index cb9a128e..be11dfbb 100644
--- a/doc/rtd/topics/datasources/smartos.rst
+++ b/doc/rtd/topics/datasources/smartos.rst
@@ -15,7 +15,8 @@ second serial console. On Linux, this is /dev/ttyS1. The data is provided via
 a simple protocol: something queries for the data, the console responds
 with the status and if "SUCCESS" returns until a single ".\n".
 
-New versions of the SmartOS tooling will include support for base64 encoded data.
+New versions of the SmartOS tooling will include support for base64 encoded
+data.
 
 Meta-data channels
 ------------------
 
@@ -27,7 +28,7 @@ channels of SmartOS.
   - per the spec, user-data is for consumption by the end-user, not
     provisioning tools
-  - cloud-init entirely ignores this channel other than writting it to disk
+  - cloud-init entirely ignores this channel other than writing it to disk
   - removal of the meta-data key means that /var/db/user-data gets removed
   - a backup of previous meta-data is maintained as
     /var/db/user-data.<timestamp>. <timestamp> is the epoch time when
@@ -42,8 +43,9 @@ channels of SmartOS.
   - <timestamp> is the epoch time when cloud-init ran.
   - when the 'user-script' meta-data key goes missing, the user-script is
     removed from the file system, although a backup is maintained.
-  - if the script is not shebanged (i.e. starts with #!<executable>), then
-    or is not an executable, cloud-init will add a shebang of "#!/bin/bash"
+  - if the script does not start with a shebang (i.e. #!<executable>)
+    or is not executable, cloud-init will add a shebang of "#!/bin/bash"
 
 * cloud-init:user-data is treated like on other Clouds.
 
@@ -133,7 +135,7 @@ or not to base64 decode something:
  * base64_all: Except for excluded keys, attempt to base64 decode
    the values. If the value fails to decode properly, it will be
    returned in its text
- * base64_keys: A comma deliminated list of which keys are base64 encoded.
+ * base64_keys: A comma delimited list of which keys are base64 encoded.
  * b64-<key>:
    for any key, if there exists an entry in the metadata for 'b64-<key>'
    Then 'b64-<key>' is expected to be a plaintext boolean indicating whether
diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst
index e13d9151..afcf2679 100644
--- a/doc/rtd/topics/debugging.rst
+++ b/doc/rtd/topics/debugging.rst
@@ -68,18 +68,18 @@ subcommands default to reading /var/log/cloud-init.log.
          00.00100s (modules-final/config-rightscale_userdata)
          ...
 
-* ``analyze boot`` Make subprocess calls to the kernel in order to get relevant 
+* ``analyze boot`` Make subprocess calls to the kernel in order to get relevant
   pre-cloud-init timestamps, such as the kernel start, kernel finish boot, and
   cloud-init start.
 
 .. code-block:: shell-session
 
-    $ cloud-init analyze boot 
+    $ cloud-init analyze boot
     -- Most Recent Boot Record --
-        Kernel Started at: 2019-06-13 15:59:55.809385 
-        Kernel ended boot at: 2019-06-13 16:00:00.944740 
-        Kernel time to boot (seconds): 5.135355 
-        Cloud-init start: 2019-06-13 16:00:05.738396 
-        Time between Kernel boot and Cloud-init start (seconds): 4.793656 
+        Kernel Started at: 2019-06-13 15:59:55.809385
+        Kernel ended boot at: 2019-06-13 16:00:00.944740
+        Kernel time to boot (seconds): 5.135355
+        Cloud-init start: 2019-06-13 16:00:05.738396
+        Time between Kernel boot and Cloud-init start (seconds): 4.793656
 
 
 Analyze quickstart - LXC
diff --git a/doc/rtd/topics/dir_layout.rst b/doc/rtd/topics/dir_layout.rst
index 7a6265eb..ebd63ae7 100644
--- a/doc/rtd/topics/dir_layout.rst
+++ b/doc/rtd/topics/dir_layout.rst
@@ -2,11 +2,12 @@
 Directory layout
 ****************
 
-Cloudinits's directory structure is somewhat different from a regular application::
+Cloud-init's directory structure is somewhat different from a regular
+application::
 
   /var/lib/cloud/
     - data/
-       - instance-id 
+       - instance-id
        - previous-instance-id
        - datasource
        - previous-datasource
@@ -35,38 +36,41 @@ Cloudinits's directory structure is somewhat different from a regular applicatio
 
   The main directory containing the cloud-init specific subdirectories.
   It is typically located at ``/var/lib`` but there are certain configuration
-  scenarios where this can be altered. 
+  scenarios where this can be altered.
 
   TBD, describe this overriding more.
 
 ``data/``
 
-  Contains information related to instance ids, datasources and hostnames of the previous
-  and current instance if they are different. These can be examined as needed to
-  determine any information related to a previous boot (if applicable).
+  Contains information related to instance ids, datasources and hostnames of
+  the previous and current instance if they are different. These can be
+  examined as needed to determine any information related to a previous boot
+  (if applicable).
 
 ``handlers/``
 
-  Custom ``part-handlers`` code is written out here. Files that end up here are written
-  out with in the scheme of ``part-handler-XYZ`` where ``XYZ`` is the handler number (the
-  first handler found starts at 0).
+  Custom ``part-handlers`` code is written out here. Files that end up here
+  are written out in the scheme of ``part-handler-XYZ`` where ``XYZ`` is the
+  handler number (the first handler found starts at 0).
 
 ``instance``
 
-  A symlink to the current ``instances/`` subdirectory that points to the currently
-  active instance (which is active is dependent on the datasource loaded).
+  A symlink to the current ``instances/`` subdirectory that points to the
+  currently active instance (which is active is dependent on the datasource
+  loaded).
 
 ``instances/``
 
-  All instances that were created using this image end up with instance identifier
-  subdirectories (and corresponding data for each instance). The currently active
-  instance will be symlinked the ``instance`` symlink file defined previously.
+  All instances that were created using this image end up with instance
+  identifier subdirectories (and corresponding data for each instance). The
+  currently active instance will be symlinked to by the ``instance`` symlink
+  file defined previously.
 
 ``scripts/``
 
-  Scripts that are downloaded/created by the corresponding ``part-handler`` will end up
-  in one of these subdirectories.
+  Scripts that are downloaded/created by the corresponding ``part-handler``
+  will end up in one of these subdirectories.
 
 ``seed/``
 
@@ -77,6 +81,7 @@ Cloudinits's directory structure is somewhat different from a regular applicatio
   Cloud-init has a concept of a module semaphore, which basically consists of
   the module name and its frequency. These files are used to ensure a module
   is only run `per-once`, `per-instance`, `per-always`. This folder contains
-  semaphore `files` which are only supposed to run `per-once` (not tied to the instance id).
+  semaphore `files` which are only supposed to run `per-once` (not tied to the
+  instance id).
 
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/examples.rst b/doc/rtd/topics/examples.rst
index c30d2263..62b8ee49 100644
--- a/doc/rtd/topics/examples.rst
+++ b/doc/rtd/topics/examples.rst
@@ -134,7 +134,7 @@ Configure instances ssh-keys
 .. literalinclude:: ../../examples/cloud-config-ssh-keys.txt
    :language: yaml
    :linenos:
-    
+
 
 Additional apt configuration
 ============================
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
index 74d1fee9..76050402 100644
--- a/doc/rtd/topics/format.rst
+++ b/doc/rtd/topics/format.rst
@@ -4,22 +4,24 @@
 User-Data Formats
 *****************
 
-User data that will be acted upon by cloud-init must be in one of the following types.
+User data that will be acted upon by cloud-init must be in one of the following
+types.
 
 Gzip Compressed Content
 =======================
 
 Content found to be gzip compressed will be uncompressed.
-The uncompressed data will then be used as if it were not compressed. 
+The uncompressed data will then be used as if it were not compressed.
 This is typically useful because user-data is limited to ~16384 [#]_ bytes.
 
 Mime Multi Part Archive
 =======================
 
-This list of rules is applied to each part of this multi-part file. 
+This list of rules is applied to each part of this multi-part file.
 Using a mime-multi part file, the user can specify more than one type of data.
 
-For example, both a user data script and a cloud-config type could be specified.
+For example, both a user data script and a cloud-config type could be
+specified.
 
 Supported content-types:
 
@@ -66,7 +68,8 @@ User-Data Script
 
 Typically used by those who just want to execute a shell script.
 
-Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME archive.
+Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME
+archive.
 
 .. note::
    New in cloud-init v. 18.4: User-data scripts can also render cloud instance
@@ -83,25 +86,27 @@ Example
 
    #!/bin/sh
    echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
 
-   $ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9 
+   $ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9
 
 Include File
 ============
 
 This content is a ``include`` file.
 
-The file contains a list of urls, one per line.
-Each of the URLs will be read, and their content will be passed through this same set of rules.
-Ie, the content read from the URL can be gzipped, mime-multi-part, or plain text.
-If an error occurs reading a file the remaining files will not be read.
+The file contains a list of urls, one per line. Each of the URLs will be read,
+and their content will be passed through this same set of rules. Ie, the
+content read from the URL can be gzipped, mime-multi-part, or plain text. If
+an error occurs reading a file the remaining files will not be read.
 
-Begins with: ``#include`` or ``Content-Type: text/x-include-url`` when using a MIME archive.
+Begins with: ``#include`` or ``Content-Type: text/x-include-url`` when using
+a MIME archive.
 
 Cloud Config Data
 =================
 
-Cloud-config is the simplest way to accomplish some things
-via user-data. Using cloud-config syntax, the user can specify certain things in a human friendly format.
+Cloud-config is the simplest way to accomplish some things via user-data. Using
+cloud-config syntax, the user can specify certain things in a human friendly
+format.
 
 These things include:
 
@@ -114,9 +119,11 @@ These things include:
 .. note::
    This file must be valid yaml syntax.
 
-See the :ref:`yaml_examples` section for a commented set of examples of supported cloud config formats.
+See the :ref:`yaml_examples` section for a commented set of examples of
+supported cloud config formats.
 
-Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using a MIME archive.
+Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when
+using a MIME archive.
 
 .. note::
    New in cloud-init v. 18.4: Cloud config data can also render cloud instance
@@ -126,25 +133,41 @@ Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using
 
 Upstart Job
 ===========
 
-Content is placed into a file in ``/etc/init``, and will be consumed by upstart as any other upstart job.
+Content is placed into a file in ``/etc/init``, and will be consumed by upstart
+as any other upstart job.
 
-Begins with: ``#upstart-job`` or ``Content-Type: text/upstart-job`` when using a MIME archive.
+Begins with: ``#upstart-job`` or ``Content-Type: text/upstart-job`` when using
+a MIME archive.
 
 Cloud Boothook
 ==============
 
-This content is ``boothook`` data. It is stored in a file under ``/var/lib/cloud`` and then executed immediately.
-This is the earliest ``hook`` available. Note, that there is no mechanism provided for running only once. The boothook must take care of this itself.
-It is provided with the instance id in the environment variable ``INSTANCE_ID``. This could be made use of to provide a 'once-per-instance' type of functionality.
+This content is ``boothook`` data. It is stored in a file under
+``/var/lib/cloud`` and then executed immediately. This is the earliest ``hook``
+available. Note, that there is no mechanism provided for running only once. The
+boothook must take care of this itself.
 
-Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when using a MIME archive.
+It is provided with the instance id in the environment variable
+``INSTANCE_ID``. This could be made use of to provide a 'once-per-instance'
+type of functionality.
+
+Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when
+using a MIME archive.
 
 Part Handler
 ============
 
-This is a ``part-handler``: It contains custom code for either supporting new mime-types in multi-part user data, or overriding the existing handlers for supported mime-types. It will be written to a file in ``/var/lib/cloud/data`` based on its filename (which is generated).
-This must be python code that contains a ``list_types`` function and a ``handle_part`` function.
-Once the section is read the ``list_types`` method will be called. It must return a list of mime-types that this part-handler handles. Because mime parts are processed in order, a ``part-handler`` part must precede any parts with mime-types it is expected to handle in the same user data.
+This is a ``part-handler``: It contains custom code for either supporting new
+mime-types in multi-part user data, or overriding the existing handlers for
+supported mime-types. It will be written to a file in ``/var/lib/cloud/data``
+based on its filename (which is generated).
+
+This must be python code that contains a ``list_types`` function and a
+``handle_part`` function. Once the section is read the ``list_types`` method
+will be called. It must return a list of mime-types that this part-handler
+handles. Because mime parts are processed in order, a ``part-handler`` part
+must precede any parts with mime-types it is expected to handle in the same
+user data.
 
 The ``handle_part`` function must be defined like:
 
@@ -156,11 +179,13 @@ The ``handle_part`` function must be defined like:
     # filename = the filename of the part (or a generated filename if none is present in mime data)
     # payload = the parts' content
 
-Cloud-init will then call the ``handle_part`` function once before it handles any parts, once per part received, and once after all parts have been handled.
-The ``'__begin__'`` and ``'__end__'`` sentinels allow the part handler to do initialization or teardown before or after
-receiving any parts.
+Cloud-init will then call the ``handle_part`` function once before it handles
+any parts, once per part received, and once after all parts have been handled.
+The ``'__begin__'`` and ``'__end__'`` sentinels allow the part handler to do
+initialization or teardown before or after receiving any parts.
 
-Begins with: ``#part-handler`` or ``Content-Type: text/part-handler`` when using a MIME archive.
+Begins with: ``#part-handler`` or ``Content-Type: text/part-handler`` when
+using a MIME archive.
 
 Example
 -------
diff --git a/doc/rtd/topics/merging.rst b/doc/rtd/topics/merging.rst
index 5f7ca18d..2b5e5dad 100644
--- a/doc/rtd/topics/merging.rst
+++ b/doc/rtd/topics/merging.rst
@@ -68,8 +68,10 @@ Cloud-init provides merging for the following built-in types:
 
 The ``Dict`` merger has the following options which control what is done with
 values contained within the config.
 
-- ``allow_delete``: Existing values not present in the new value can be deleted, defaults to False
-- ``no_replace``: Do not replace an existing value if one is already present, enabled by default.
+- ``allow_delete``: Existing values not present in the new value can be
+  deleted, defaults to False
+- ``no_replace``: Do not replace an existing value if one is already present,
+  enabled by default.
 - ``replace``: Overwrite existing values with new ones.
 
 The ``List`` merger has the following options which control what is done with
@@ -77,7 +79,8 @@ the values contained within the config.
 
 - ``append``: Add new value to the end of the list, defaults to False.
 - ``prepend``: Add new values to the start of the list, defaults to False.
-- ``no_replace``: Do not replace an existing value if one is already present, enabled by default.
+- ``no_replace``: Do not replace an existing value if one is already present,
+  enabled by default.
 - ``replace``: Overwrite existing values with new ones.
 
 The ``Str`` merger has the following options which control what is done with
@@ -88,10 +91,13 @@ the values contained within the config.
 
 Common options for all merge types which control how recursive merging is
 done on other types.
 
-- ``recurse_dict``: If True merge the new values of the dictionary, defaults to True.
-- ``recurse_list``: If True merge the new values of the list, defaults to False.
+- ``recurse_dict``: If True merge the new values of the dictionary, defaults to
+  True.
+- ``recurse_list``: If True merge the new values of the list, defaults to
+  False.
 - ``recurse_array``: Alias for ``recurse_list``.
-- ``recurse_str``: If True merge the new values of the string, defaults to False.
+- ``recurse_str``: If True merge the new values of the string, defaults to
+  False.
 
 
 Customizability
diff --git a/doc/rtd/topics/network-config-format-v2.rst b/doc/rtd/topics/network-config-format-v2.rst
index 50f5fa61..7f857550 100644
--- a/doc/rtd/topics/network-config-format-v2.rst
+++ b/doc/rtd/topics/network-config-format-v2.rst
@@ -54,11 +54,11 @@ Physical devices
 
 : (Examples: ethernet, wifi) These can dynamically come and go between
   reboots and even during runtime (hotplugging). In the generic case, they
-  can be selected by ``match:`` rules on desired properties, such as name/name
-  pattern, MAC address, driver, or device paths. In general these will match
-  any number of devices (unless they refer to properties which are unique
-  such as the full path or MAC address), so without further knowledge about
-  the hardware these will always be considered as a group.
+  can be selected by ``match:`` rules on desired properties, such as
+  name/name pattern, MAC address, driver, or device paths. In general these
+  will match any number of devices (unless they refer to properties which are
+  unique such as the full path or MAC address), so without further knowledge
+  about the hardware these will always be considered as a group.
 
   It is valid to specify no match rules at all, in which case the ID field is
   simply the interface name to be matched. This is mostly useful if you want
@@ -228,8 +228,8 @@ Example: ::
 
 **parameters**: *<(mapping)>*
 
-Customization parameters for special bonding options. Time values are specified
-in seconds unless otherwise specified.
+Customization parameters for special bonding options. Time values are
+specified in seconds unless otherwise specified.
 
 **mode**: *<(scalar)>*
 
@@ -367,8 +367,8 @@ Example: ::
 
 **parameters**: <*(mapping)>*
 
-Customization parameters for special bridging options. Time values are specified
-in seconds unless otherwise specified.
+Customization parameters for special bridging options. Time values are
+specified in seconds unless otherwise specified.
 
 **ageing-time**: <*(scalar)>*
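The format.rst hunks above spell out the ``part-handler`` contract: python code providing a ``list_types`` function and a ``handle_part`` function that is called once with ``'__begin__'``, once per matching part, and once with ``'__end__'``. A minimal sketch of that call sequence (the mime-type and filename below are hypothetical examples, not types cloud-init defines):

```python
# Sketch of the part-handler contract from doc/rtd/topics/format.rst.
received = []

def list_types():
    # Return the mime-types this part-handler handles; this subtype is
    # an illustrative placeholder.
    return ["text/x-example-config"]

def handle_part(data, ctype, filename, payload):
    # data     = the cloudinit object
    # ctype    = '__begin__', '__end__', or the mime-type of the part
    # filename = the filename of the part (or a generated filename)
    # payload  = the part's content
    if ctype == "__begin__":
        received.clear()   # initialization before any parts arrive
        return
    if ctype == "__end__":
        return             # teardown after all parts are handled
    received.append((filename, payload))

# Simulate the sequence of calls cloud-init makes:
handle_part(None, "__begin__", None, None)
handle_part(None, "text/x-example-config", "part-001", "hello")
handle_part(None, "__end__", None, None)
```

Because mime parts are processed in order, the ``#part-handler`` part must appear before any parts it is expected to handle in the same user data.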