Diffstat (limited to 'doc/rtd')
-rw-r--r--  doc/rtd/index.rst                           |  47
-rw-r--r--  doc/rtd/topics/availability.rst             |   7
-rw-r--r--  doc/rtd/topics/capabilities.rst             |  14
-rw-r--r--  doc/rtd/topics/datasources.rst              | 178
-rw-r--r--  doc/rtd/topics/datasources/altcloud.rst     |  91
-rw-r--r--  doc/rtd/topics/datasources/azure.rst        | 155
-rw-r--r--  doc/rtd/topics/datasources/cloudsigma.rst   |  40
-rw-r--r--  doc/rtd/topics/datasources/cloudstack.rst   |  34
-rw-r--r--  doc/rtd/topics/datasources/configdrive.rst  | 129
-rw-r--r--  doc/rtd/topics/datasources/digitalocean.rst |  28
-rw-r--r--  doc/rtd/topics/datasources/ec2.rst          |  61
-rw-r--r--  doc/rtd/topics/datasources/fallback.rst     |  16
-rw-r--r--  doc/rtd/topics/datasources/maas.rst         |   8
-rw-r--r--  doc/rtd/topics/datasources/nocloud.rst      |  75
-rw-r--r--  doc/rtd/topics/datasources/opennebula.rst   | 146
-rw-r--r--  doc/rtd/topics/datasources/openstack.rst    |  28
-rw-r--r--  doc/rtd/topics/datasources/ovf.rst          |  12
-rw-r--r--  doc/rtd/topics/datasources/smartos.rst      | 164
-rw-r--r--  doc/rtd/topics/dir_layout.rst               |   5
-rw-r--r--  doc/rtd/topics/examples.rst                 |  43
-rw-r--r--  doc/rtd/topics/format.rst                   |  27
-rw-r--r--  doc/rtd/topics/hacking.rst                  |   1
-rw-r--r--  doc/rtd/topics/logging.rst                  |  37
-rw-r--r--  doc/rtd/topics/merging.rst                  |   5
-rw-r--r--  doc/rtd/topics/modules.rst                  |   5
-rw-r--r--  doc/rtd/topics/moreinfo.rst                 |   7
-rw-r--r--  doc/rtd/topics/vendordata.rst               |  71
27 files changed, 1207 insertions(+), 227 deletions(-)
diff --git a/doc/rtd/index.rst b/doc/rtd/index.rst
index f8ff3c9f..90defade 100644
--- a/doc/rtd/index.rst
+++ b/doc/rtd/index.rst
@@ -1,32 +1,45 @@
.. _index:
-=====================
+.. http://thomas-cokelaer.info/tutorials/sphinx/rest_syntax.html
+.. As suggested at link above for headings use:
+.. # with overline, for parts
+.. * with overline, for chapters
+.. =, for sections
+.. -, for subsections
+.. ^, for subsubsections
+.. ", for paragraphs
+
+#############
Documentation
-=====================
+#############
-.. rubric:: Everything about cloud-init, a set of **python** scripts and utilities to make your cloud images be all they can be!
+.. rubric:: Everything about cloud-init, a set of **python** scripts and
+ utilities to make your cloud images be all they can be!
+*******
Summary
------------------
-
-`Cloud-init`_ is the *defacto* multi-distribution package that handles early initialization of a cloud instance.
+*******
+`Cloud-init`_ is the *de facto* multi-distribution package that handles early
+initialization of a cloud instance.
----
.. toctree::
:maxdepth: 2
- topics/capabilities
- topics/availability
- topics/format
- topics/dir_layout
- topics/examples
- topics/datasources
- topics/logging
- topics/modules
- topics/merging
- topics/moreinfo
- topics/hacking
+ topics/capabilities.rst
+ topics/availability.rst
+ topics/format.rst
+ topics/dir_layout.rst
+ topics/examples.rst
+ topics/datasources.rst
+ topics/logging.rst
+ topics/modules.rst
+ topics/merging.rst
+ topics/vendordata.rst
+ topics/moreinfo.rst
+ topics/hacking.rst
.. _Cloud-init: https://launchpad.net/cloud-init
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/availability.rst b/doc/rtd/topics/availability.rst
index 2d58f808..ef5ae7bf 100644
--- a/doc/rtd/topics/availability.rst
+++ b/doc/rtd/topics/availability.rst
@@ -1,8 +1,8 @@
-============
+************
Availability
-============
+************
-It is currently installed in the `Ubuntu Cloud Images`_ and also in the official `Ubuntu`_ images available on EC2.
+It is currently installed in the `Ubuntu Cloud Images`_ and also in the official `Ubuntu`_ images available on EC2, Azure, GCE and many other clouds.
Versions for other systems can be (or have been) created for the following distributions:
@@ -18,3 +18,4 @@ So ask your distribution provider where you can obtain an image with it built-in
.. _Ubuntu Cloud Images: http://cloud-images.ubuntu.com/
.. _Ubuntu: http://www.ubuntu.com/
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
index 63b34270..be0802c5 100644
--- a/doc/rtd/topics/capabilities.rst
+++ b/doc/rtd/topics/capabilities.rst
@@ -1,6 +1,6 @@
-=====================
+************
Capabilities
-=====================
+************
- Setting a default locale
- Setting an instance hostname
@@ -9,16 +9,18 @@ Capabilities
- Setting up ephemeral mount points
User configurability
---------------------
+====================
`Cloud-init`_ 's behavior can be configured via user-data.
User-data can be given by the user at instance launch time.
-This is done via the ``--user-data`` or ``--user-data-file`` argument to ec2-run-instances for example.
+This is done via the ``--user-data`` or ``--user-data-file`` argument to
+ec2-run-instances for example.
-* Check your local clients documentation for how to provide a `user-data` string
- or `user-data` file for usage by cloud-init on instance creation.
+* Check your local client's documentation for how to provide a `user-data`
+ string or `user-data` file for usage by cloud-init on instance creation.
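+
+A minimal illustrative sketch (the key name, user-data file name and AMI id
+below are only placeholders) of passing a user-data file at launch time with
+euca2ools::
+
+  % euca-run-instances --key mykey --user-data-file my-user-data.txt ami-a07d95c9
+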
.. _Cloud-init: https://launchpad.net/cloud-init
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 3a9c808c..9acecc53 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -1,20 +1,21 @@
.. _datasources:
-===========
+***********
Datasources
-===========
-----------------------
- What is a datasource?
-----------------------
+***********
-Datasources are sources of configuration data for cloud-init that typically come
-from the user (aka userdata) or come from the stack that created the configuration
-drive (aka metadata). Typical userdata would include files, yaml, and shell scripts
-while typical metadata would include server name, instance id, display name and other
-cloud specific details. Since there are multiple ways to provide this data (each cloud
-solution seems to prefer its own way) internally a datasource abstract class was
-created to allow for a single way to access the different cloud systems methods
-to provide this data through the typical usage of subclasses.
+What is a datasource?
+=====================
+
+Datasources are sources of configuration data for cloud-init that typically
+come from the user (aka userdata) or come from the stack that created the
+configuration drive (aka metadata). Typical userdata would include files,
+yaml, and shell scripts while typical metadata would include server name,
+instance id, display name and other cloud specific details. Since there are
+multiple ways to provide this data (each cloud solution seems to prefer its
+own way) internally a datasource abstract class was created to allow for a
+single way to access the different cloud systems' methods to provide this data
+through the typical usage of subclasses.
The current interface that a datasource object must provide is the following:
@@ -70,131 +71,28 @@ The current interface that a datasource object must provide is the following:
def get_package_mirror_info(self)
----
-EC2
----
-
-The EC2 datasource is the oldest and most widely used datasource that cloud-init
-supports. This datasource interacts with a *magic* ip that is provided to the
-instance by the cloud provider. Typically this ip is ``169.254.169.254`` of which
-at this ip a http server is provided to the instance so that the instance can make
-calls to get instance userdata and instance metadata.
-
-Metadata is accessible via the following URL:
-
-::
-
- GET http://169.254.169.254/2009-04-04/meta-data/
- ami-id
- ami-launch-index
- ami-manifest-path
- block-device-mapping/
- hostname
- instance-id
- instance-type
- local-hostname
- local-ipv4
- placement/
- public-hostname
- public-ipv4
- public-keys/
- reservation-id
- security-groups
-
-Userdata is accessible via the following URL:
-
-::
-
- GET http://169.254.169.254/2009-04-04/user-data
- 1234,fred,reboot,true | 4512,jimbo, | 173,,,
-
-Note that there are multiple versions of this data provided, cloud-init
-by default uses **2009-04-04** but newer versions can be supported with
-relative ease (newer versions have more data exposed, while maintaining
-backward compatibility with the previous versions).
-
-To see which versions are supported from your cloud provider use the following URL:
-
-::
-
- GET http://169.254.169.254/
- 1.0
- 2007-01-19
- 2007-03-01
- 2007-08-29
- 2007-10-10
- 2007-12-15
- 2008-02-01
- 2008-09-01
- 2009-04-04
- ...
- latest
-
-------------
-Config Drive
-------------
-
-.. include:: ../../sources/configdrive/README.rst
-
-----------
-OpenNebula
-----------
-
-.. include:: ../../sources/opennebula/README.rst
-
----------
-Alt cloud
----------
-
-.. include:: ../../sources/altcloud/README.rst
-
---------
-No cloud
---------
-
-.. include:: ../../sources/nocloud/README.rst
-
-----
-MAAS
-----
-
-*TODO*
-
-For now see: http://maas.ubuntu.com/
-
-----------
-CloudStack
-----------
-
-.. include:: ../../sources/cloudstack/README.rst
-
----
-OVF
----
-
-*TODO*
-
-For now see: https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/sources/ovf/
-
----------
-OpenStack
----------
-
-.. include:: ../../sources/openstack/README.rst
-
--------------
-Fallback/None
--------------
-
-This is the fallback datasource when no other datasource can be selected. It is
-the equivalent of a *empty* datasource in that it provides a empty string as userdata
-and a empty dictionary as metadata. It is useful for testing as well as for when
-you do not have a need to have an actual datasource to meet your instance
-requirements (ie you just want to run modules that are not concerned with any
-external data). It is typically put at the end of the datasource search list
-so that if all other datasources are not matched, then this one will be so that
-the user is not left with an inaccessible instance.
-
-**Note:** the instance id that this datasource provides is ``iid-datasource-none``.
-.. _boto: http://docs.pythonboto.org/en/latest/
+Datasource Documentation
+========================
+The following is a list of the implemented datasources.
+Follow the links below for more information on each.
+
+.. toctree::
+ :maxdepth: 2
+
+ datasources/altcloud.rst
+ datasources/azure.rst
+ datasources/cloudsigma.rst
+ datasources/cloudstack.rst
+ datasources/configdrive.rst
+ datasources/digitalocean.rst
+ datasources/ec2.rst
+ datasources/maas.rst
+ datasources/nocloud.rst
+ datasources/opennebula.rst
+ datasources/openstack.rst
+ datasources/ovf.rst
+ datasources/smartos.rst
+ datasources/fallback.rst
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/altcloud.rst b/doc/rtd/topics/datasources/altcloud.rst
new file mode 100644
index 00000000..8646e77e
--- /dev/null
+++ b/doc/rtd/topics/datasources/altcloud.rst
@@ -0,0 +1,91 @@
+Alt Cloud
+=========
+
+The datasource altcloud will be used to pick up user data on `RHEVm`_ and `vSphere`_.
+
+RHEVm
+-----
+
+For `RHEVm`_ v3.0 the userdata is injected into the VM using floppy
+injection via the `RHEVm`_ dashboard "Custom Properties".
+
+The format of the Custom Properties entry must be:
+
+::
+
+ floppyinject=user-data.txt:<base64 encoded data>
+
+For example to pass a simple bash script:
+
+.. sourcecode:: sh
+
+ % cat simple_script.bash
+ #!/bin/bash
+ echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
+
+ % base64 < simple_script.bash
+ IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
+
+To pass this example script to cloud-init running in a `RHEVm`_ v3.0 VM
+set the "Custom Properties" when creating the RHEMv v3.0 VM to:
+
+::
+
+ floppyinject=user-data.txt:IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
+
+**NOTE:** The prefix with file name must be: ``floppyinject=user-data.txt:``
+
+It is also possible to launch a `RHEVm`_ v3.0 VM and pass optional user
+data to it using the Delta Cloud.
+
+For more information on Delta Cloud see: http://deltacloud.apache.org
+
+vSphere
+-------
+
+For VMWare's `vSphere`_ the userdata is injected into the VM as an ISO
+via the cdrom. This can be done using the `vSphere`_ dashboard
+by connecting an ISO image to the CD/DVD drive.
+
+To pass this example script to cloud-init running in a `vSphere`_ VM
+set the CD/DVD drive when creating the vSphere VM to point to an
+ISO on the data store.
+
+**Note:** The ISO must contain the user data.
+
+For example, to pass the same ``simple_script.bash`` to vSphere:
+
+Create the ISO
+^^^^^^^^^^^^^^
+
+.. sourcecode:: sh
+
+ % mkdir my-iso
+
+NOTE: The file name on the ISO must be: ``user-data.txt``
+
+.. sourcecode:: sh
+
+ % cp simple_script.bash my-iso/user-data.txt
+ % genisoimage -o user-data.iso -r my-iso
+
+Verify the ISO
+^^^^^^^^^^^^^^
+
+.. sourcecode:: sh
+
+ % sudo mkdir /media/vsphere_iso
+ % sudo mount -o loop JoeV_CI_02.iso /media/vsphere_iso
+ % cat /media/vsphere_iso/user-data.txt
+ % sudo umount /media/vsphere_iso
+
+Then, launch the `vSphere`_ VM with the ISO user-data.iso attached as a CDROM.
+
+It is also possible to launch a `vSphere`_ VM and pass optional user
+data to it using the Delta Cloud.
+
+For more information on Delta Cloud see: http://deltacloud.apache.org
+
+.. _RHEVm: https://www.redhat.com/virtualization/rhev/desktop/rhevm/
+.. _vSphere: https://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
new file mode 100644
index 00000000..18d7c506
--- /dev/null
+++ b/doc/rtd/topics/datasources/azure.rst
@@ -0,0 +1,155 @@
+Azure
+=====
+
+This datasource finds metadata and user-data from the Azure cloud platform.
+
+Azure Platform
+--------------
+The azure cloud-platform provides initial data to an instance via an attached
+CD formatted in UDF. That CD contains an 'ovf-env.xml' file that provides some
+information. Additional information is obtained via interaction with the
+"endpoint".
+
+To find the endpoint, we now leverage the dhcp client's ability to log its
+known values on exit. The endpoint server is provided in DHCP option 245.
+Depending on your networking stack, this can be done
+by calling a script in /etc/dhcp/dhclient-exit-hooks or a file in
+/etc/NetworkManager/dispatcher.d. Both of these call a sub-command
+'dhclient_hook' of cloud-init itself. This sub-command will write the client
+information in json format to /run/cloud-init/dhclient.hook/<interface>.json.
+
+In order for cloud-init to leverage this method to find the endpoint, the
+cloud.cfg file must contain:
+
+.. code:: yaml
+
+    datasource:
+      Azure:
+        set_hostname: False
+        agent_command: __builtin__
+
+If those files are not available, the fallback is to check the leases file
+for the endpoint server (again option 245).
+
+You can define the path to the lease file with the 'dhclient_lease_file'
+configuration. The default value is /var/lib/dhcp/dhclient.eth0.leases. ::
+
+    dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
+
+walinuxagent
+------------
+In order to operate correctly, cloud-init needs walinuxagent to provide much
+of the interaction with azure. In addition to "provisioning" code, the agent
+is a long running daemon that handles the following things:
+
+- generate an x509 certificate and send that to the endpoint
+
+waagent.conf config
+^^^^^^^^^^^^^^^^^^^
+In order to use waagent.conf with cloud-init, the following settings are
+recommended. Other values can be changed or set to the defaults.
+
+ ::
+
+ # disabling provisioning turns off all 'Provisioning.*' function
+ Provisioning.Enabled=n
+ # this is currently not handled by cloud-init, so let walinuxagent do it.
+ ResourceDisk.Format=y
+ ResourceDisk.MountPoint=/mnt
+
+
+Userdata
+--------
+Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init
+expects that user-data will be provided as a base64 encoded value inside the
+text child of an element named ``UserData`` or ``CustomData``, which is a
+direct child of the ``LinuxProvisioningConfigurationSet`` (a sibling to
+``UserName``). If both ``UserData`` and ``CustomData`` are provided, behavior
+is undefined as to which will be selected.
+
+In the example below, user-data provided is 'this is my userdata', and the
+datasource config provided is ``{"agent_command": ["start", "walinuxagent"]}``.
+That agent command will take effect as if it were specified in system config.
+
+Example:
+
+.. sourcecode:: xml
+
+ <wa:ProvisioningSection>
+ <wa:Version>1.0</wa:Version>
+ <LinuxProvisioningConfigurationSet
+ xmlns="http://schemas.microsoft.com/windowsazure"
+ xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
+ <ConfigurationSetType>LinuxProvisioningConfiguration</ConfigurationSetType>
+ <HostName>myHost</HostName>
+ <UserName>myuser</UserName>
+ <UserPassword/>
+ <CustomData>dGhpcyBpcyBteSB1c2VyZGF0YQ==</CustomData>
+ <dscfg>eyJhZ2VudF9jb21tYW5kIjogWyJzdGFydCIsICJ3YWxpbnV4YWdlbnQiXX0=</dscfg>
+ <DisableSshPasswordAuthentication>true</DisableSshPasswordAuthentication>
+ <SSH>
+ <PublicKeys>
+ <PublicKey>
+ <Fingerprint>6BE7A7C3C8A8F4B123CCA5D0C2F1BE4CA7B63ED7</Fingerprint>
+ <Path>this-value-unused</Path>
+ </PublicKey>
+ </PublicKeys>
+ </SSH>
+ </LinuxProvisioningConfigurationSet>
+ </wa:ProvisioningSection>
+
+Configuration
+-------------
+Configuration for the datasource can be read from the system config or set
+via the `dscfg` entry in the `LinuxProvisioningConfigurationSet`. Content in
+the dscfg node is expected to be base64 encoded yaml content, and it will be
+merged into the 'datasource: Azure' entry.
+
+The '``hostname_bounce: command``' entry can be either the literal string
+'builtin' or a command to execute. The command will be invoked after the
+hostname is set, and will have the 'interface' in its environment. If
+``set_hostname`` is not true, then ``hostname_bounce`` will be ignored.
+
+An example might be::
+
+  command: ["sh", "-c", "killall dhclient; dhclient $interface"]
+
+.. code:: yaml
+
+ datasource:
+ Azure:
+ agent_command: [service, walinuxagent, start]
+ set_hostname: True
+ hostname_bounce:
+ # the name of the interface to bounce
+ interface: eth0
+ # policy can be 'on', 'off' or 'force'
+ policy: on
+ # the method 'bounce' command.
+ command: "builtin"
+ hostname_command: "hostname"
+
+hostname
+--------
+When the user launches an instance, they provide a hostname for that instance.
+The hostname is provided to the instance in the ovf-env.xml file as
+``HostName``.
+
+Whatever value the instance provides in its dhcp request will resolve in the
+domain returned in the 'search' request.
+
+The interesting issue is that a generic image will already have a hostname
+configured. The ubuntu cloud images have 'ubuntu' as the hostname of the
+system, and the initial dhcp request on eth0 is not guaranteed to occur after
+the datasource code has been run. So, on first boot, that initial value will
+be sent in the dhcp request and *that* value will resolve.
+
+In order to make the ``HostName`` provided in the ovf-env.xml resolve, a
+dhcp request must be made with the new value. Walinuxagent (in its current
+version) handles this by polling the state of hostname and bouncing ('``ifdown
+eth0; ifup eth0``') the network interface if it sees that a change has been
+made.
+
+cloud-init handles this by setting the hostname in the DataSource's 'get_data'
+method via '``hostname $HostName``', and then bouncing the interface. This
+behavior can be configured or disabled in the datasource config. See
+'Configuration' above.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/cloudsigma.rst b/doc/rtd/topics/datasources/cloudsigma.rst
new file mode 100644
index 00000000..54963f61
--- /dev/null
+++ b/doc/rtd/topics/datasources/cloudsigma.rst
@@ -0,0 +1,40 @@
+CloudSigma
+==========
+
+This datasource finds metadata and user-data from the `CloudSigma`_ cloud
+platform. Data transfer occurs through a virtual serial port of the
+`CloudSigma`_'s VM and the presence of a network adapter is **NOT** a
+requirement. See `server context`_ in the public documentation for more
+information.
+
+
+Setting a hostname
+------------------
+By default the name of the server will be applied as a hostname on the first
+boot.
+
+
+Providing user-data
+-------------------
+
+You can provide user-data to the VM using the dedicated `meta field`_ in the
+`server context`_ ``cloudinit-user-data``. By default *cloud-config* format is
+expected there and the ``#cloud-config`` header could be omitted. However
+since this is a raw-text field you could provide any of the valid `config
+formats`_.
+
+You have the option to encode your user-data using Base64. In order to do that
+you have to add the ``cloudinit-user-data`` field to the ``base64_fields``.
+The latter is a comma-separated field listing all the meta fields with
+base64 encoded values.
+
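+A hypothetical sketch of preparing base64 encoded user-data for the
+``cloudinit-user-data`` meta field (the file name is illustrative)::
+
+  % base64 < user-data.yaml
+  # paste the output into the 'cloudinit-user-data' meta field and list the
+  # field name in 'base64_fields', e.g. base64_fields: cloudinit-user-data
+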
+If your user-data does not need an internet connection you can create a `meta
+field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as
+value. If this field does not exist the default value is "net".
+
+
+.. _CloudSigma: http://cloudsigma.com/
+.. _server context: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
+.. _meta field: http://cloudsigma-docs.readthedocs.org/en/latest/meta.html
+.. _config formats: http://cloudinit.readthedocs.org/en/latest/topics/format.html
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/cloudstack.rst b/doc/rtd/topics/datasources/cloudstack.rst
new file mode 100644
index 00000000..04603d9c
--- /dev/null
+++ b/doc/rtd/topics/datasources/cloudstack.rst
@@ -0,0 +1,34 @@
+CloudStack
+==========
+
+`Apache CloudStack`_ exposes user-data, meta-data, user password and account
+sshkey through the Virtual-Router. For more details on meta-data and
+user-data, refer to the `CloudStack Administrator Guide`_.
+
+The following URLs provide access to user-data and meta-data from the Virtual
+Machine. Here 10.1.1.1 is the Virtual Router IP:
+
+.. code:: bash
+
+ http://10.1.1.1/latest/user-data
+ http://10.1.1.1/latest/meta-data
+ http://10.1.1.1/latest/meta-data/{metadata type}
+
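+For example, from inside the VM the data can be fetched with a standard HTTP
+client; 10.1.1.1 and {metadata type} are the same placeholders as above:
+
+.. code:: bash
+
+  curl http://10.1.1.1/latest/user-data
+  curl http://10.1.1.1/latest/meta-data/{metadata type}
+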
+Configuration
+-------------
+
+Apache CloudStack datasource can be configured as follows:
+
+.. code:: yaml
+
+ datasource:
+ CloudStack: {}
+ None: {}
+ datasource_list:
+ - CloudStack
+
+
+.. _Apache CloudStack: http://cloudstack.apache.org/
+.. _CloudStack Administrator Guide: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/configdrive.rst b/doc/rtd/topics/datasources/configdrive.rst
new file mode 100644
index 00000000..acdab6a2
--- /dev/null
+++ b/doc/rtd/topics/datasources/configdrive.rst
@@ -0,0 +1,129 @@
+Config Drive
+============
+
+The configuration drive datasource supports the `OpenStack`_ configuration
+drive disk.
+
+ See `the config drive extension`_ and `introduction`_ in the public
+ documentation for more information.
+
+By default, cloud-init does *not* consider this source to be a full-fledged
+datasource. Instead, the typical behavior is to assume it is really only
+present to provide networking information. Cloud-init will copy off the
+network information, apply it to the system, and then continue on. The "full"
+datasource could then be found in the EC2 metadata service. If this is not the
+case then the files contained on the located drive must provide equivalents to
+what the EC2 metadata service would provide (which is typical of the version 2
+support listed below).
+
+Version 1
+---------
+
+The following criteria are required for a disk to be used as a config drive:
+
+1. Must be formatted with `vfat`_ filesystem
+2. Must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
+3. Must contain *one* of the following files
+
+::
+
+ /etc/network/interfaces
+ /root/.ssh/authorized_keys
+ /meta.js
+
+``/etc/network/interfaces``
+
+ This file is laid down by nova in order to pass static networking
+ information to the guest. Cloud-init will copy it off of the config-drive
+ and into /etc/network/interfaces (or convert it to RH format) as soon as
+ it can, and then attempt to bring up all network interfaces.
+
+``/root/.ssh/authorized_keys``
+
+ This file is laid down by nova, and contains the ssh keys that were
+ provided to nova on instance creation (nova boot --key ...).
+
+``/meta.js``
+
+ meta.js is populated on the config-drive in response to the user passing
+ "meta flags" (nova boot --meta key=value ...). It is expected to be json
+ formatted.
+
+Version 2
+---------
+
+The following criteria are required for a disk to be used as a config drive:
+
+1. Must be formatted with `vfat`_ or `iso9660`_ filesystem
+ or have a *filesystem* label of **config-2**
+2. Must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
+3. The files that will typically be present in the config drive are:
+
+::
+
+ openstack/
+ - 2012-08-10/ or latest/
+ - meta_data.json
+ - user_data (not mandatory)
+ - content/
+ - 0000 (referenced content files)
+ - 0001
+ - ....
+ ec2
+ - latest/
+ - meta-data.json (not mandatory)
+
+Keys and values
+---------------
+
+Cloud-init's behavior can be modified by keys found in the meta.js (version 1
+only) file in the following ways.
+
+::
+
+ dsmode:
+ values: local, net, pass
+ default: pass
+
+
+This is what indicates if configdrive is a final data source or not.
+By default it is 'pass', meaning this datasource should not be read.
+Set it to 'local' or 'net' to stop cloud-init from continuing on to
+search for other data sources after network config.
+
+The difference between 'local' and 'net' is that local will not require
+networking to be up before user-data actions (or boothooks) are run.
+
+::
+
+ instance-id:
+ default: iid-dsconfigdrive
+
+This is utilized as the metadata's instance-id. It should generally
+be unique, as it is what is used to determine "is this a new instance".
+
+::
+
+ public-keys:
+ default: None
+
+If present, these keys will be used as the public keys for the
+instance. This value overrides the content in authorized_keys.
+
+Note: it is likely preferable to provide keys via user-data
+
+::
+
+ user-data:
+ default: None
+
+This provides cloud-init user-data. See :ref:`examples <yaml_examples>` for
+what can be present here.
+
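+As a purely hypothetical illustration, a version 1 ``meta.js`` combining the
+keys described above might look like:
+
+::
+
+  {
+    "dsmode": "local",
+    "instance-id": "iid-example01"
+  }
+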
+.. _OpenStack: http://www.openstack.org/
+.. _introduction: http://docs.openstack.org/trunk/openstack-compute/admin/content/config-drive.html
+.. _python-novaclient: https://github.com/openstack/python-novaclient
+.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
+.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
+.. _the config drive extension: http://docs.openstack.org/user-guide/content/config-drive.html
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/digitalocean.rst b/doc/rtd/topics/datasources/digitalocean.rst
new file mode 100644
index 00000000..c6f5bc74
--- /dev/null
+++ b/doc/rtd/topics/datasources/digitalocean.rst
@@ -0,0 +1,28 @@
+Digital Ocean
+=============
+
+The `DigitalOcean`_ datasource consumes the content served from DigitalOcean's
+`metadata service`_. This metadata service serves information about the
+running droplet via HTTP over the link local address 169.254.169.254. The
+metadata API endpoints are fully described at
+`https://developers.digitalocean.com/metadata/
+<https://developers.digitalocean.com/metadata/>`_.
+
+Configuration
+-------------
+
+DigitalOcean's datasource can be configured as follows::
+
+ datasource:
+ DigitalOcean:
+ retries: 3
+ timeout: 2
+
+- *retries*: Determines the number of times to attempt to connect to the metadata service
+- *timeout*: Determines the timeout in seconds to wait for a response from the metadata service
+
+.. _DigitalOcean: http://digitalocean.com/
+.. _metadata service: https://developers.digitalocean.com/metadata/
+.. _Full documentation: https://developers.digitalocean.com/metadata/
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/ec2.rst b/doc/rtd/topics/datasources/ec2.rst
new file mode 100644
index 00000000..4810c984
--- /dev/null
+++ b/doc/rtd/topics/datasources/ec2.rst
@@ -0,0 +1,61 @@
+Amazon EC2
+==========
+
+The EC2 datasource is the oldest and most widely used datasource that
+cloud-init supports. This datasource interacts with a *magic* ip that is
+provided to the instance by the cloud provider. Typically this ip is
+``169.254.169.254``; at this ip an http server is provided to the
+instance so that the instance can make calls to get instance userdata and
+instance metadata.
+
+Metadata is accessible via the following URL:
+
+::
+
+ GET http://169.254.169.254/2009-04-04/meta-data/
+ ami-id
+ ami-launch-index
+ ami-manifest-path
+ block-device-mapping/
+ hostname
+ instance-id
+ instance-type
+ local-hostname
+ local-ipv4
+ placement/
+ public-hostname
+ public-ipv4
+ public-keys/
+ reservation-id
+ security-groups
+
+Userdata is accessible via the following URL:
+
+::
+
+ GET http://169.254.169.254/2009-04-04/user-data
+ 1234,fred,reboot,true | 4512,jimbo, | 173,,,
+
+Note that there are multiple versions of this data provided, cloud-init
+by default uses **2009-04-04** but newer versions can be supported with
+relative ease (newer versions have more data exposed, while maintaining
+backward compatibility with the previous versions).
+
+To see which versions are supported by your cloud provider, use the following URL:
+
+::
+
+ GET http://169.254.169.254/
+ 1.0
+ 2007-01-19
+ 2007-03-01
+ 2007-08-29
+ 2007-10-10
+ 2007-12-15
+ 2008-02-01
+ 2008-09-01
+ 2009-04-04
+ ...
+ latest
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/fallback.rst b/doc/rtd/topics/datasources/fallback.rst
new file mode 100644
index 00000000..1eb01dd0
--- /dev/null
+++ b/doc/rtd/topics/datasources/fallback.rst
@@ -0,0 +1,16 @@
+Fallback/None
+=============
+
+This is the fallback datasource when no other datasource can be selected. It
+is the equivalent of an empty datasource in that it provides an empty string
+as userdata and an empty dictionary as metadata. It is useful for testing as
+well as for when you do not have a need to have an actual datasource to meet
+your instance requirements (i.e. you just want to run modules that are not
+concerned with any external data). It is typically put at the end of the
+datasource search list so that, if all other datasources are not matched,
+this one will be used and the user is not left with an inaccessible instance.
+
+**Note:** the instance id that this datasource provides is
+``iid-datasource-none``.
+
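+A minimal sketch of a ``datasource_list`` (for example in a file under
+/etc/cloud/cloud.cfg.d) that keeps this datasource as the last resort; the
+other entries are only illustrative::
+
+  datasource_list: [ NoCloud, ConfigDrive, Ec2, None ]
+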
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/maas.rst b/doc/rtd/topics/datasources/maas.rst
new file mode 100644
index 00000000..f495dd4b
--- /dev/null
+++ b/doc/rtd/topics/datasources/maas.rst
@@ -0,0 +1,8 @@
+MAAS
+====
+
+*TODO*
+
+For now see: http://maas.ubuntu.com/
+
+
diff --git a/doc/rtd/topics/datasources/nocloud.rst b/doc/rtd/topics/datasources/nocloud.rst
new file mode 100644
index 00000000..b9ab5f11
--- /dev/null
+++ b/doc/rtd/topics/datasources/nocloud.rst
@@ -0,0 +1,75 @@
+NoCloud
+=======
+
+The data source ``NoCloud`` allows the user to provide user-data and meta-data
+to the instance without running a network service (or even without having a
+network at all).
+
+You can provide meta-data and user-data to a local vm boot via files on a
+`vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be
+``cidata``.
+
+These user-data and meta-data files are expected to be in the following format.
+
+::
+
+ /user-data
+ /meta-data
+
+Basically, user-data is simply user-data and meta-data is a yaml formatted file
+representing what you'd find in the EC2 metadata service.
+
+Given an Ubuntu 12.04 cloud image in 'disk.img', you can create a
+sufficient disk by following the example below.
+
+::
+
+ ## create user-data and meta-data files that will be used
+ ## to modify image on first boot
+ $ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
+
+ $ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
+
+ ## create a disk to attach with some user-data and meta-data
+ $ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
+
+ ## alternatively, create a vfat filesystem with same files
+ ## $ truncate --size 2M seed.img
+ ## $ mkfs.vfat -n cidata seed.img
+ ## $ mcopy -oi seed.img user-data meta-data ::
+
+ ## create a new qcow image to boot, backed by your original image
+ $ qemu-img create -f qcow2 -b disk.img boot-disk.img
+
+ ## boot the image and login as 'ubuntu' with password 'passw0rd'
+ ## note, passw0rd was set as password through the user-data above,
+ ## there is no password set on these images.
+ $ kvm -m 256 \
+ -net nic -net user,hostfwd=tcp::2222-:22 \
+ -drive file=boot-disk.img,if=virtio \
+ -drive file=seed.iso,if=virtio
+
+**Note:** the instance-id provided (``iid-local01`` above) is what is used
+to determine if this is "first boot". So if you are making updates to
+user-data you will also have to change that, or start the disk fresh.
+
+Also, you can inject an ``/etc/network/interfaces`` file by providing the
+content for that file in the ``network-interfaces`` field of metadata.
+
+Example metadata:
+
+::
+
+ instance-id: iid-abcdefg
+ network-interfaces: |
+ iface eth0 inet static
+ address 192.168.1.10
+ network 192.168.1.0
+ netmask 255.255.255.0
+ broadcast 192.168.1.255
+ gateway 192.168.1.254
+ hostname: myhost
+
+.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
+.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/opennebula.rst b/doc/rtd/topics/datasources/opennebula.rst
new file mode 100644
index 00000000..1b90a9c7
--- /dev/null
+++ b/doc/rtd/topics/datasources/opennebula.rst
@@ -0,0 +1,146 @@
+OpenNebula
+==========
+
+The `OpenNebula`_ (ON) datasource supports the contextualization disk.
+
+ See `contextualization overview`_, `contextualizing VMs`_ and
+ `network configuration`_ in the public documentation for
+ more information.
+
+OpenNebula's virtual machines are contextualized (parametrized) by a CD-ROM
+image, which contains a shell script *context.sh* with custom variables
+defined on virtual machine start. There are no fixed contextualization
+variables, but the datasource accepts many of those used and recommended
+across the documentation.
+
+Datasource configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The datasource accepts the following configuration options.
+
+::
+
+ dsmode:
+ values: local, net, disabled
+ default: net
+
+Tells if this datasource will be processed in 'local' (pre-networking) or
+'net' (post-networking) stage or even completely 'disabled'.
+
+::
+
+ parseuser:
+ default: nobody
+
+Unprivileged system user used for contextualization script
+processing.
+
+Contextualization disk
+~~~~~~~~~~~~~~~~~~~~~~
+
+The following criteria are required:
+
+1. Must be formatted with `iso9660`_ filesystem
+ or have a *filesystem* label of **CONTEXT** or **CDROM**
+2. Must contain file *context.sh* with contextualization variables.
+ The file is generated by OpenNebula; it has a KEY='VALUE' format and
+ can be easily read by bash (see the sketch below).
+
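+A hypothetical *context.sh*, using variables documented in the next section,
+might contain::
+
+  HOSTNAME='myhost'
+  ETH0_IP='192.168.1.10'
+  SSH_PUBLIC_KEY='ssh-rsa AAAA... user@host'
+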
+Contextualization variables
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are no fixed contextualization variables in OpenNebula, no standard.
+The following variables were found in various places and revisions of
+the OpenNebula documentation. Where multiple similar variables are
+specified, only the first one found is taken.
+
+::
+
+ DSMODE
+
+Datasource mode configuration override. Values: local, net, disabled.
+
+::
+
+ DNS
+ ETH<x>_IP
+ ETH<x>_NETWORK
+ ETH<x>_MASK
+ ETH<x>_GATEWAY
+ ETH<x>_DOMAIN
+ ETH<x>_DNS
+
+Static `network configuration`_.
+
+::
+
+ HOSTNAME
+
+Instance hostname.
+
+::
+
+ PUBLIC_IP
+ IP_PUBLIC
+ ETH0_IP
+
+If no hostname has been specified, cloud-init will try to create a hostname
+from the instance's IP address in 'local' dsmode. In 'net' dsmode, cloud-init
+tries to resolve one of its IP addresses to get the hostname.
+
+::
+
+ SSH_KEY
+ SSH_PUBLIC_KEY
+
+One or multiple SSH keys (separated by newlines) can be specified.
+
+::
+
+ USER_DATA
+ USERDATA
+
+cloud-init user data.
+
+Example configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+This example cloud-init configuration (*cloud.cfg*) enables the
+OpenNebula datasource only in 'net' mode.
+
+::
+
+ disable_ec2_metadata: True
+ datasource_list: ['OpenNebula']
+ datasource:
+ OpenNebula:
+ dsmode: net
+ parseuser: nobody
+
+Example VM's context section
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ CONTEXT=[
+ PUBLIC_IP="$NIC[IP]",
+ SSH_KEY="$USER[SSH_KEY]
+ $USER[SSH_KEY1]
+ $USER[SSH_KEY2] ",
+ USER_DATA="#cloud-config
+ # see https://help.ubuntu.com/community/CloudInit
+
+ packages: []
+
+ mounts:
+ - [vdc,none,swap,sw,0,0]
+ runcmd:
+ - echo 'Instance has been configured by cloud-init.' | wall
+ " ]
+
+.. _OpenNebula: http://opennebula.org/
+.. _contextualization overview: http://opennebula.org/documentation:documentation:context_overview
+.. _contextualizing VMs: http://opennebula.org/documentation:documentation:cong
+.. _network configuration: http://opennebula.org/documentation:documentation:cong#network_configuration
+.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/openstack.rst b/doc/rtd/topics/datasources/openstack.rst
new file mode 100644
index 00000000..ea47ea85
--- /dev/null
+++ b/doc/rtd/topics/datasources/openstack.rst
@@ -0,0 +1,28 @@
+OpenStack
+=========
+
+*TODO*
+
+Vendor Data
+-----------
+
+The OpenStack metadata server can be configured to serve up vendor data
+which is available to all instances for consumption. OpenStack vendor
+data is, generally, a JSON object.
+
+cloud-init will look for configuration in the ``cloud-init`` attribute
+of the vendor data JSON object. cloud-init processes this configuration
+using the same handlers as user data, so any formats that work for user
+data should work for vendor data.
+
+For example, configuring the following as vendor data in OpenStack would
+upgrade packages and install ``htop`` on all instances:
+
+.. sourcecode:: json
+
+ {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"}
+
+For more general information about how cloud-init handles vendor data,
+including how it can be disabled by users on instances, see `Vendor Data`_.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/ovf.rst b/doc/rtd/topics/datasources/ovf.rst
new file mode 100644
index 00000000..a0770332
--- /dev/null
+++ b/doc/rtd/topics/datasources/ovf.rst
@@ -0,0 +1,12 @@
+OVF
+===
+
+The OVF Datasource provides a datasource for reading data from
+an `Open Virtualization Format
+<https://en.wikipedia.org/wiki/Open_Virtualization_Format>`_ ISO
+transport.
+
+For further information see a full working example in cloud-init's
+source code tree in doc/sources/ovf
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/smartos.rst b/doc/rtd/topics/datasources/smartos.rst
new file mode 100644
index 00000000..a1e1542b
--- /dev/null
+++ b/doc/rtd/topics/datasources/smartos.rst
@@ -0,0 +1,164 @@
+SmartOS Datasource
+==================
+
+This datasource finds metadata and user-data from the SmartOS virtualization
+platform (i.e. Joyent).
+
+Please see http://smartos.org/ for information about SmartOS.
+
+SmartOS Platform
+----------------
+The SmartOS virtualization platform provides meta-data to the instance via
+the second serial console. On Linux, this is /dev/ttyS1. The data is provided
+via a simple protocol: something queries for the data, the console responds
+with the status and, if "SUCCESS", returns data until a single ".\n".
+
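+An illustrative exchange over /dev/ttyS1 (only a sketch of the
+query/status/terminator flow described above; the exact wire format is
+defined by the SmartOS tooling)::
+
+  GET user-data
+  SUCCESS
+  #!/bin/bash
+  echo 'hello from user-data'
+  .
+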
+New versions of the SmartOS tooling will include support for base64 encoded data.
+
+Meta-data channels
+------------------
+
+Cloud-init supports three modes of delivering user/meta-data via the flexible
+channels of SmartOS.
+
+* user-data is written to /var/db/user-data
+
+ - per the spec, user-data is for consumption by the end-user, not
+ provisioning tools
+ - cloud-init entirely ignores this channel other than writing it to disk
+ - removal of the meta-data key means that /var/db/user-data gets removed
+ - a backup of previous meta-data is maintained as
+ /var/db/user-data.<timestamp>. <timestamp> is the epoch time when
+ cloud-init ran
+
+* user-script is written to /var/lib/cloud/scripts/per-boot/99_user_data
+
+ - this is executed each boot
+ - a link is created to /var/db/user-script
+ - previous versions of the user-script are written to
+ /var/lib/cloud/scripts/per-boot.backup/99_user_script.<timestamp>.
+ - <timestamp> is the epoch time when cloud-init ran.
+ - when the 'user-script' meta-data key goes missing, the user-script is
+ removed from the file system, although a backup is maintained.
+ - if the script does not start with a shebang (i.e. #!<executable>) or is
+ not executable, cloud-init will add a shebang of "#!/bin/bash"
+
+* cloud-init:user-data is treated like on other Clouds.
+
+ - this channel is used for delivering _all_ cloud-init instructions
+ - scripts delivered over this channel must be well formed (i.e. must have
+ a shebang)
+
+Cloud-init supports reading the traditional meta-data fields supported by the
+SmartOS tools. These are:
+
+ * root_authorized_keys
+ * hostname
+ * enable_motd_sys_info
+ * iptables_disable
+
+Note: At this time iptables_disable and enable_motd_sys_info are read but
+ are not actioned.
+
+Disabling user-script
+---------------------
+
+Cloud-init uses the per-boot script functionality to handle the execution
+of the user-script. If you want to prevent this, use a cloud-config of:
+
+.. code:: yaml
+
+ #cloud-config
+ cloud_final_modules:
+ - scripts-per-once
+ - scripts-per-instance
+ - scripts-user
+ - ssh-authkey-fingerprints
+ - keys-to-console
+ - phone-home
+ - final-message
+ - power-state-change
+
+Alternatively you can use the json patch method
+
+.. code:: yaml
+
+ #cloud-config-jsonp
+ [
+ { "op": "replace",
+ "path": "/cloud_final_modules",
+ "value": ["scripts-per-once",
+ "scripts-per-instance",
+ "scripts-user",
+ "ssh-authkey-fingerprints",
+ "keys-to-console",
+ "phone-home",
+ "final-message",
+ "power-state-change"]
+ }
+ ]
+
+The default cloud-config includes "scripts-per-boot". When you disable the
+per-boot script handling, cloud-init will still ingest and write the
+user-data but will not execute it.
+
+Note: Unless you have an explicit use-case, it is recommended that you not
+ disable the per-boot script execution, especially if you are using
+ any of the life-cycle management features of SmartOS.
+
+The cloud-config needs to be delivered over the cloud-init:user-data channel
+in order for cloud-init to ingest it.
+
+base64
+------
+
+The following are exempt from base64 encoding, owing to the fact that they
+are provided by SmartOS:
+
+ * root_authorized_keys
+ * enable_motd_sys_info
+ * iptables_disable
+ * user-data
+ * user-script
+
+This list can be changed through the system config variable 'no_base64_decode'.
+
+This means that user-script and user-data as well as other values can be
+base64 encoded. Since Cloud-init can only guess as to whether or not something
+is truly base64 encoded, the following meta-data keys are hints as to whether
+or not to base64 decode something:
+
+ * base64_all: Except for excluded keys, attempt to base64 decode
+ the values. If the value fails to decode properly, it will be
+ returned in its original text form.
+ * base64_keys: A comma delimited list of which keys are base64 encoded.
+ * b64-<key>:
+ for any key, if there exists an entry in the metadata for 'b64-<key>',
+ then 'b64-<key>' is expected to be a plaintext boolean indicating whether
+ or not its value is encoded (see the example below).
+ * no_base64_decode: This is a configuration setting
+ (i.e. /etc/cloud/cloud.cfg.d) that sets which values should not be
+ base64 decoded.
+
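+As a hypothetical illustration, metadata containing::
+
+  base64_keys: user-script,foo
+  b64-bar: false
+
+would tell cloud-init to base64 decode the values of 'user-script' and 'foo',
+while treating the value of 'bar' as plain text.
+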
+disk_aliases and ephemeral disk
+-------------------------------
+By default, SmartOS only supports a single ephemeral disk. That disk is
+completely empty (un-partitioned with no filesystem).
+
+The SmartOS datasource has built-in cloud-config which instructs the
+'disk_setup' module to partition and format the ephemeral disk.
+
+You can then control the disk_setup in two ways:
+ 1. through the datasource config, you can change the 'alias' of
+ ephemeral0 to reference another device. The default is:
+
+ 'disk_aliases': {'ephemeral0': '/dev/vdb'},
+
+ which means that anywhere disk_setup sees a device named 'ephemeral0',
+ /dev/vdb will be substituted.
+ 2. you can provide disk_setup or fs_setup data in user-data to overwrite
+ the datasource's built-in values.
+
+See doc/examples/cloud-config-disk-setup.txt for information on disk_setup.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/dir_layout.rst b/doc/rtd/topics/dir_layout.rst
index 6dcb22ce..3f5aa205 100644
--- a/doc/rtd/topics/dir_layout.rst
+++ b/doc/rtd/topics/dir_layout.rst
@@ -1,6 +1,6 @@
-================
+****************
Directory layout
-================
+****************
Cloud-init's directory structure is somewhat different from a regular application::
@@ -79,3 +79,4 @@ Cloudinits's directory structure is somewhat different from a regular applicatio
is only run `per-once`, `per-instance`, `per-always`. This folder contains
semaphore `files` which are only supposed to run `per-once` (not tied to the instance id).
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/examples.rst b/doc/rtd/topics/examples.rst
index 2e6cfa1e..a110721c 100644
--- a/doc/rtd/topics/examples.rst
+++ b/doc/rtd/topics/examples.rst
@@ -1,11 +1,11 @@
.. _yaml_examples:
-=====================
+*********************
Cloud config examples
-=====================
+*********************
Including users and groups
---------------------------
+==========================
.. literalinclude:: ../../examples/cloud-config-user-groups.txt
:language: yaml
@@ -13,7 +13,7 @@ Including users and groups
Writing out arbitrary files
----------------------------
+===========================
.. literalinclude:: ../../examples/cloud-config-write-files.txt
:language: yaml
@@ -21,21 +21,21 @@ Writing out arbitrary files
Adding a yum repository
------------------------
+=======================
.. literalinclude:: ../../examples/cloud-config-yum-repo.txt
:language: yaml
:linenos:
Configure an instances trusted CA certificates
-----------------------------------------------
+==============================================
.. literalinclude:: ../../examples/cloud-config-ca-certs.txt
:language: yaml
:linenos:
Configure an instances resolv.conf
-----------------------------------
+==================================
*Note:* when using a config drive and a RHEL like system resolv.conf
will also be managed 'automatically' due to the available information
@@ -47,28 +47,28 @@ that wish to have different settings use this module.
:linenos:
Install and run `chef`_ recipes
--------------------------------
+===============================
.. literalinclude:: ../../examples/cloud-config-chef.txt
:language: yaml
:linenos:
Setup and run `puppet`_
------------------------
+=======================
.. literalinclude:: ../../examples/cloud-config-puppet.txt
:language: yaml
:linenos:
Add apt repositories
---------------------
+====================
.. literalinclude:: ../../examples/cloud-config-add-apt-repos.txt
:language: yaml
:linenos:
Run commands on first boot
---------------------------
+==========================
.. literalinclude:: ../../examples/cloud-config-boot-cmds.txt
:language: yaml
@@ -80,70 +80,70 @@ Run commands on first boot
Alter the completion message
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-final-message.txt
:language: yaml
:linenos:
Install arbitrary packages
---------------------------
+==========================
.. literalinclude:: ../../examples/cloud-config-install-packages.txt
:language: yaml
:linenos:
Run apt or yum upgrade
-----------------------
+======================
.. literalinclude:: ../../examples/cloud-config-update-packages.txt
:language: yaml
:linenos:
Adjust mount points mounted
----------------------------
+===========================
.. literalinclude:: ../../examples/cloud-config-mount-points.txt
:language: yaml
:linenos:
Call a url when finished
-------------------------
+========================
.. literalinclude:: ../../examples/cloud-config-phone-home.txt
:language: yaml
:linenos:
Reboot/poweroff when finished
------------------------------
+=============================
.. literalinclude:: ../../examples/cloud-config-power-state.txt
:language: yaml
:linenos:
Configure instances ssh-keys
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-ssh-keys.txt
:language: yaml
:linenos:
Additional apt configuration
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-apt.txt
:language: yaml
:linenos:
Disk setup
-----------
+==========
.. literalinclude:: ../../examples/cloud-config-disk-setup.txt
:language: yaml
:linenos:
Register RedHat Subscription
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-rh_subscription.txt
:language: yaml
@@ -151,3 +151,4 @@ Register RedHat Subscription
.. _chef: http://www.opscode.com/chef/
.. _puppet: http://puppetlabs.com/
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
index 1dd92309..5898d17a 100644
--- a/doc/rtd/topics/format.rst
+++ b/doc/rtd/topics/format.rst
@@ -1,18 +1,18 @@
-=======
+*******
Formats
-=======
+*******
User data that will be acted upon by cloud-init must be in one of the following types.
Gzip Compressed Content
------------------------
+=======================
Content found to be gzip compressed will be uncompressed.
The uncompressed data will then be used as if it were not compressed.
This is typically useful because user-data is limited to ~16384 [#]_ bytes.
Mime Multi Part Archive
------------------------
+=======================
This list of rules is applied to each part of this multi-part file.
Using a mime-multi part file, the user can specify more than one type of data.
@@ -31,7 +31,7 @@ Supported content-types:
- text/cloud-boothook
Helper script to generate mime messages
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------
.. code-block:: python
@@ -59,14 +59,14 @@ Helper script to generate mime messages
User-Data Script
-----------------
+================
Typically used by those who just want to execute a shell script.
Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME archive.
Example
-~~~~~~~
+-------
::
@@ -78,7 +78,7 @@ Example
$ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9
Include File
-------------
+============
This content is an ``include`` file.
@@ -89,7 +89,7 @@ Ie, the content read from the URL can be gzipped, mime-multi-part, or plain text
Begins with: ``#include`` or ``Content-Type: text/x-include-url`` when using a MIME archive.
Cloud Config Data
------------------
+=================
Cloud-config is the simplest way to accomplish some things
via user-data. Using cloud-config syntax, the user can specify certain things in a human friendly format.
@@ -109,14 +109,14 @@ See the :ref:`yaml_examples` section for a commented set of examples of supporte
Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using a MIME archive.
Upstart Job
------------
+===========
Content is placed into a file in ``/etc/init``, and will be consumed by upstart as any other upstart job.
Begins with: ``#upstart-job`` or ``Content-Type: text/upstart-job`` when using a MIME archive.
Cloud Boothook
---------------
+==============
This content is ``boothook`` data. It is stored in a file under ``/var/lib/cloud`` and then executed immediately.
This is the earliest ``hook`` available. Note, that there is no mechanism provided for running only once. The boothook must take care of this itself.
@@ -125,7 +125,7 @@ It is provided with the instance id in the environment variable ``INSTANCE_I``.
Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when using a MIME archive.
Part Handler
-------------
+============
This is a ``part-handler``. It will be written to a file in ``/var/lib/cloud/data`` based on its filename (which is generated).
This must be python code that contains a ``list_types`` method and a ``handle_type`` method.
@@ -147,7 +147,7 @@ The ``begin`` and ``end`` calls are to allow the part handler to do initializati
Begins with: ``#part-handler`` or ``Content-Type: text/part-handler`` when using a MIME archive.
Example
-~~~~~~~
+-------
.. literalinclude:: ../../examples/part-handler.txt
:language: python
@@ -157,3 +157,4 @@ Also this `blog`_ post offers another example for more advanced usage.
.. [#] See your cloud provider for applicable user-data size limitations...
.. _blog: http://foss-boss.blogspot.com/2011/01/advanced-cloud-init-custom-handlers.html
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/hacking.rst b/doc/rtd/topics/hacking.rst
index 96ab88ef..5ec25bfb 100644
--- a/doc/rtd/topics/hacking.rst
+++ b/doc/rtd/topics/hacking.rst
@@ -1 +1,2 @@
.. include:: ../../../HACKING.rst
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/logging.rst b/doc/rtd/topics/logging.rst
index b010aa96..c6afca16 100644
--- a/doc/rtd/topics/logging.rst
+++ b/doc/rtd/topics/logging.rst
@@ -1,15 +1,15 @@
-=======
+*******
Logging
-=======
+*******
Cloud-init supports both local and remote logging configurable through python's
built-in logging configuration and through the cloud-init rsyslog module.
Command Output
---------------
+==============
Cloud-init can redirect its stdout and stderr based on config given under the
``output`` config key. The output of any commands run by cloud-init and any
-user or vendor scripts provided will also be included here. The ``output``
-key accepts a dictionary for configuration. Output files may be specified
+user or vendor scripts provided will also be included here. The ``output`` key
+accepts a dictionary for configuration. Output files may be specified
individually for each stage (``init``, ``config``, and ``final``), or a single
key ``all`` may be used to specify output for all stages.
@@ -31,9 +31,9 @@ stdout and stderr from all cloud-init stages to
For a more complex example, the following configuration would output the init
stage to ``/var/log/cloud-init.out`` and ``/var/log/cloud-init.err``, for
stdout and stderr respectively, replacing anything that was previously there.
-For the config stage, it would pipe both stdout and stderr through
-``tee -a /var/log/cloud-config.log``. For the final stage it would append the
-output of stdout and stderr to ``/var/log/cloud-final.out`` and
+For the config stage, it would pipe both stdout and stderr through ``tee -a
+/var/log/cloud-config.log``. For the final stage it would append the output of
+stdout and stderr to ``/var/log/cloud-final.out`` and
``/var/log/cloud-final.err`` respectively. ::
output:
@@ -48,8 +48,8 @@ output of stdout and stderr to ``/var/log/cloud-final.out`` and
Python Logging
--------------
Cloud-init uses the python logging module, and can accept config for this
-module using the standard python fileConfig format. Cloud-init looks for config
-for the logging module under the ``logcfg`` key.
+module using the standard python fileConfig format. Cloud-init looks for
+config for the logging module under the ``logcfg`` key.
.. note::
the logging configuration is not yaml, it is python ``fileConfig`` format,
@@ -67,9 +67,9 @@ Python's fileConfig format consists of sections with headings in the format
logging must contain the sections ``[loggers]``, ``[handlers]``, and
``[formatters]``, which name the entities of their respective types that will
be defined. The section name for each defined logger, handler and formatter
-will start with its type, followed by an underscore (``_``) and the name of the
-entity. For example, if a logger was specified with the name ``log01``, config
-for the logger would be in the section ``[logger_log01]``.
+will start with its type, followed by an underscore (``_``) and the name of
+the entity. For example, if a logger was specified with the name ``log01``,
+config for the logger would be in the section ``[logger_log01]``.
Logger config entries contain basic logging set up. They may specify a list of
handlers to send logging events to as well as the lowest priority level of
@@ -80,13 +80,13 @@ handlers. A level entry can be any of the following: ``DEBUG``, ``INFO``,
the ``NOTSET`` option will allow all logging events to be recorded.
Each configured handler must specify a class under the python's ``logging``
-package namespace. A handler may specify a message formatter to use, a priority
-level, and arguments for the handler class. Common handlers are
+package namespace. A handler may specify a message formatter to use, a
+priority level, and arguments for the handler class. Common handlers are
``StreamHandler``, which handles stream redirects (i.e. logging to stderr),
and ``FileHandler`` which outputs to a log file. The logging module also
-supports logging over net sockets, over http, via smtp, and additional
-complex configurations. For full details about the handlers available for
-python logging, please see the documentation for `python logging handlers`_.
+supports logging over net sockets, over http, via smtp, and additional complex
+configurations. For full details about the handlers available for python
+logging, please see the documentation for `python logging handlers`_.
Log messages are formatted using the ``logging.Formatter`` class, which is
configured using ``formatter`` config entities. A default format of
@@ -173,3 +173,4 @@ For more information on rsyslog configuration, see :ref:`cc_rsyslog`.
.. _python logging config: https://docs.python.org/3/library/logging.config.html#configuration-file-format
.. _python logging handlers: https://docs.python.org/3/library/logging.handlers.html
.. _python logging formatters: https://docs.python.org/3/library/logging.html#formatter-objects
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/merging.rst b/doc/rtd/topics/merging.rst
index 2bd87b16..eca118f5 100644
--- a/doc/rtd/topics/merging.rst
+++ b/doc/rtd/topics/merging.rst
@@ -1,5 +1,6 @@
-==========================
+**************************
Merging User-Data Sections
-==========================
+**************************
.. include:: ../../merging.rst
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/modules.rst b/doc/rtd/topics/modules.rst
index 57892f2d..a3ead4f1 100644
--- a/doc/rtd/topics/modules.rst
+++ b/doc/rtd/topics/modules.rst
@@ -1,6 +1,6 @@
-=======
+*******
Modules
-=======
+*******
.. automodule:: cloudinit.config.cc_apt_configure
.. automodule:: cloudinit.config.cc_apt_pipelining
.. automodule:: cloudinit.config.cc_bootcmd
@@ -55,3 +55,4 @@ Modules
.. automodule:: cloudinit.config.cc_users_groups
.. automodule:: cloudinit.config.cc_write_files
.. automodule:: cloudinit.config.cc_yum_add_repo
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/moreinfo.rst b/doc/rtd/topics/moreinfo.rst
index b34cb7dc..9c3b7fba 100644
--- a/doc/rtd/topics/moreinfo.rst
+++ b/doc/rtd/topics/moreinfo.rst
@@ -1,12 +1,13 @@
-================
+****************
More information
-================
+****************
Useful external references
---------------------------
+==========================
- `The beauty of cloudinit`_
- `Introduction to cloud-init`_ (video)
.. _Introduction to cloud-init: http://www.youtube.com/watch?v=-zL3BdbKyGY
.. _The beauty of cloudinit: http://brandon.fuller.name/archives/2011/05/02/06.40.57/
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/vendordata.rst b/doc/rtd/topics/vendordata.rst
new file mode 100644
index 00000000..2a94318e
--- /dev/null
+++ b/doc/rtd/topics/vendordata.rst
@@ -0,0 +1,71 @@
+***********
+Vendor Data
+***********
+
+Overview
+========
+
+Vendordata is data provided by the entity that launches an instance
+(for example, the cloud provider). This data can be used to
+customize the image to fit into the particular environment it is
+being run in.
+
+Vendordata follows the same rules as user-data, with the following
+caveats:
+
+ 1. Users have ultimate control over vendordata. They can disable its
+ execution or disable handling of specific parts of multipart input.
+ 2. By default it only runs on first boot
+ 3. Vendordata can be disabled by the user. If the use of vendordata is
+ required for the instance to run, then vendordata should not be used.
+ 4. User supplied cloud-config is merged over cloud-config from vendordata.
+
+Users providing cloud-config data can use the '#cloud-config-jsonp' method to
+more finely control their modifications to the vendor supplied cloud-config.
+For example, if both vendor and user have provided 'runcmd' then the default
+merge handler will cause the user's runcmd to override the one provided by the
+vendor. To append to 'runcmd', the user could instead provide multipart input
+with a cloud-config-jsonp part like:
+
+.. code:: yaml
+
+ #cloud-config-jsonp
+ [{ "op": "add", "path": "/runcmd", "value": ["my", "command", "here"]}]
+
+Further, we strongly advise vendors to not 'be evil'. By evil, we
+mean any action that could compromise a system. Since users trust
+you, please take care to make sure that any vendordata is safe,
+atomic, idempotent and does not put your users at risk.
+
+Input Formats
+=============
+
+cloud-init will download and cache to the filesystem any vendor-data that it
+finds. Vendordata is handled exactly like user-data. That means that the
+vendor can supply multipart input and have those parts acted on in the same
+way as user-data.
+
+The only differences are:
+
+ * scripts delivered in vendor-data are stored in a different location than
+ user-scripts (to avoid namespace collision)
+ * user can disable part handlers by cloud-config settings.
+ For example, to disable handling of 'part-handlers' in vendor-data,
+ the user could provide user-data like this:
+
+ .. code:: yaml
+
+ #cloud-config
+ vendordata: {excluded: 'text/part-handler'}
+
+Examples
+========
+There are examples in the examples subdirectory.
+
+Additionally, the 'tools' directory contains 'write-mime-multipart',
+which can be used to easily generate mime-multi-part files from a list
+of input files. That data can then be given to an instance.
+
+See 'write-mime-multipart --help' for usage.
+
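+An illustrative sketch (the exact flags and the file names are assumptions;
+check --help for the options shipped with your version)::
+
+  % ./tools/write-mime-multipart --output=combined-userdata.txt \
+      config.yaml:text/cloud-config boothook.sh:text/cloud-boothook
+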
+.. vi: textwidth=78