authorScott Moser <smoser@brickies.net>2016-11-10 16:42:43 -0500
committerScott Moser <smoser@brickies.net>2016-11-10 16:48:58 -0500
commit127f0f5076bf8e5c53dd538899199455ebc08fbc (patch)
treeb749345cefd3c003a8e7960af32d50ea98fa2de3 /doc
parent25c218e5659445ecf64febe03c08c6fd9ca016e6 (diff)
downloadvyos-cloud-init-127f0f5076bf8e5c53dd538899199455ebc08fbc.tar.gz
vyos-cloud-init-127f0f5076bf8e5c53dd538899199455ebc08fbc.zip
doc: make the RST files consistently formatted and other improvements.

The biggest things here are:

 * move doc/sources/*/README.rst to doc/rtd/topics/datasources
   This gives each datasource a page in the rtd docs, which makes it
   easier to read.

 * consistently use the same header style throughout. As suggested at
   http://thomas-cokelaer.info/tutorials/sphinx/rest_syntax.html use:

     # with overline, for parts
     * with overline, for chapters
     =, for sections
     -, for subsections
     ^, for subsubsections
     ", for paragraphs

Also, move and re-format vendor-data documentation to rtd.
Diffstat (limited to 'doc')
-rw-r--r--doc/merging.rst77
-rw-r--r--doc/rtd/index.rst47
-rw-r--r--doc/rtd/topics/availability.rst7
-rw-r--r--doc/rtd/topics/capabilities.rst14
-rw-r--r--doc/rtd/topics/datasources.rst178
-rw-r--r--doc/rtd/topics/datasources/altcloud.rst (renamed from doc/sources/altcloud/README.rst)12
-rw-r--r--doc/rtd/topics/datasources/azure.rst (renamed from doc/sources/azure/README.rst)14
-rw-r--r--doc/rtd/topics/datasources/cloudsigma.rst40
-rw-r--r--doc/rtd/topics/datasources/cloudstack.rst (renamed from doc/sources/cloudstack/README.rst)9
-rw-r--r--doc/rtd/topics/datasources/configdrive.rst (renamed from doc/sources/configdrive/README.rst)30
-rw-r--r--doc/rtd/topics/datasources/digitalocean.rst (renamed from doc/sources/digitalocean/README.rst)17
-rw-r--r--doc/rtd/topics/datasources/ec2.rst61
-rw-r--r--doc/rtd/topics/datasources/fallback.rst16
-rw-r--r--doc/rtd/topics/datasources/maas.rst8
-rw-r--r--doc/rtd/topics/datasources/nocloud.rst (renamed from doc/sources/nocloud/README.rst)32
-rw-r--r--doc/rtd/topics/datasources/opennebula.rst (renamed from doc/sources/opennebula/README.rst)4
-rw-r--r--doc/rtd/topics/datasources/openstack.rst (renamed from doc/sources/openstack/README.rst)12
-rw-r--r--doc/rtd/topics/datasources/ovf.rst12
-rw-r--r--doc/rtd/topics/datasources/smartos.rst (renamed from doc/sources/smartos/README.rst)87
-rw-r--r--doc/rtd/topics/dir_layout.rst5
-rw-r--r--doc/rtd/topics/examples.rst43
-rw-r--r--doc/rtd/topics/format.rst27
-rw-r--r--doc/rtd/topics/hacking.rst1
-rw-r--r--doc/rtd/topics/logging.rst37
-rw-r--r--doc/rtd/topics/merging.rst5
-rw-r--r--doc/rtd/topics/modules.rst5
-rw-r--r--doc/rtd/topics/moreinfo.rst7
-rw-r--r--doc/rtd/topics/vendordata.rst (renamed from doc/vendordata.txt)60
-rw-r--r--doc/sources/cloudsigma/README.rst38
29 files changed, 500 insertions, 405 deletions
diff --git a/doc/merging.rst b/doc/merging.rst
index afe1a6dd..bf49b909 100644
--- a/doc/merging.rst
+++ b/doc/merging.rst
@@ -1,5 +1,5 @@
Overview
---------
+========
This was implemented because it has been a common feature request that there be
a way to specify how cloud-config yaml "dictionaries" provided as user-data are
@@ -52,7 +52,7 @@ into a more useful list, thus reducing duplication that would have had to
occur in the previous method to accomplish the same result.
Customizability
----------------
+===============
Since the above merging algorithm may not always be the desired merging
algorithm (like how the previous merging algorithm was not always the preferred
@@ -96,41 +96,45 @@ An example of one of these merging classes is the following:
merged[k] = v
return merged
-As you can see there is a '_on_dict' method here that will be given a source value
-and a value to merge with. The result will be the merged object. This code itself
-is called by another merging class which 'directs' the merging to happen by
-analyzing the types of the objects to merge and attempting to find a know object
-that will merge that type. I will avoid pasting that here, but it can be found
-in the `mergers/__init__.py` file (see `LookupMerger` and `UnknownMerger`).
-
-So following the typical cloud-init way of allowing source code to be downloaded
-and used dynamically, it is possible for users to inject there own merging files
-to handle specific types of merging as they choose (the basic ones included will
-handle lists, dicts, and strings). Note how each merge can have options associated
-with it which affect how the merging is performed, for example a dictionary merger
-can be told to overwrite instead of attempt to merge, or a string merger can be
-told to append strings instead of discarding other strings to merge with.
+As you can see there is a '_on_dict' method here that will be given a source
+value and a value to merge with. The result will be the merged object. This
+code itself is called by another merging class which 'directs' the merging to
+happen by analyzing the types of the objects to merge and attempting to find a
+known object that will merge that type. I will avoid pasting that here, but it
+can be found in the `mergers/__init__.py` file (see `LookupMerger` and
+`UnknownMerger`).
+
+So following the typical cloud-init way of allowing source code to be
+downloaded and used dynamically, it is possible for users to inject their own
+merging files to handle specific types of merging as they choose (the basic
+ones included will handle lists, dicts, and strings). Note how each merge can
+have options associated with it which affect how the merging is performed, for
+example a dictionary merger can be told to overwrite instead of attempt to
+merge, or a string merger can be told to append strings instead of discarding
+other strings to merge with.
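The merger behaviour described above can be sketched as a small toy class; the class name and the ``overwrite`` option here are invented for illustration and are not cloud-init's actual mergers:

```python
# A minimal sketch of a dictionary merger with a per-merger option,
# mirroring the _on_dict convention shown earlier. Illustrative only.

class DictMerger:
    def __init__(self, overwrite=False):
        # When overwrite is set, colliding keys are replaced outright
        # instead of being merged recursively.
        self.overwrite = overwrite

    def _on_dict(self, value, merge_with):
        merged = dict(value)
        for k, v in merge_with.items():
            if self.overwrite or k not in merged:
                merged[k] = v
            elif isinstance(merged[k], dict) and isinstance(v, dict):
                merged[k] = self._on_dict(merged[k], v)
            else:
                merged[k] = v
        return merged
```

With ``overwrite=False`` nested dictionaries are merged key by key; with ``overwrite=True`` a colliding key simply takes the new value.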
How to activate
----------------
+===============
There are a few ways to activate the merging algorithms, and to customize them
for your own usage.
1. The first way involves the usage of MIME messages in cloud-init to specify
- multipart documents (this is one way in which multiple cloud-config is joined
- together into a single cloud-config). Two new headers are looked for, both
- of which can define the way merging is done (the first header to exist wins).
- These new headers (in lookup order) are 'Merge-Type' and 'X-Merge-Type'. The value
- should be a string which will satisfy the new merging format defintion (see
- below for this format).
+ multipart documents (this is one way in which multiple cloud-config is
+ joined together into a single cloud-config). Two new headers are looked
+ for, both of which can define the way merging is done (the first header to
+ exist wins). These new headers (in lookup order) are 'Merge-Type' and
+ 'X-Merge-Type'. The value should be a string which will satisfy the new
+   merging format definition (see below for this format).
+
2. The second way is actually specifying the merge-type in the body of the
- cloud-config dictionary. There are 2 ways to specify this, either as a string
- or as a dictionary (see format below). The keys that are looked up for this
- definition are the following (in order), 'merge_how', 'merge_type'.
+ cloud-config dictionary. There are 2 ways to specify this, either as a
+ string or as a dictionary (see format below). The keys that are looked up
+ for this definition are the following (in order), 'merge_how',
+ 'merge_type'.
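Option 1 above can be sketched with the standard library's email tooling; a hedged example showing only where the 'Merge-Type' header lives (the attached part content is a placeholder):

```python
# A sketch of a multipart user-data message carrying a 'Merge-Type'
# header. The merge string used is the documented default.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg['Merge-Type'] = 'list()+dict()+str()'

# Each attached part is one user-data document; 'cloud-config' is the
# MIME subtype (text/cloud-config) used for cloud-config documents.
part = MIMEText('#cloud-config\npackages: [git]\n', 'cloud-config')
msg.attach(part)

print(msg['Merge-Type'])  # list()+dict()+str()
```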
String format
-*************
+-------------
The string format that is expected is the following.
@@ -142,14 +146,15 @@ The class name there will be connected to class names used when looking for the
class that can be used to merge and options provided will be given to the class
on construction of that class.
-For example, the default string that is used when none is provided is the following:
+For example, the default string that is used when none is provided is the
+following:
::
list()+dict()+str()
Dictionary format
-*****************
+-----------------
In cases where a dictionary can be used to specify the same information as the
string format (ie option #2 of above) it can be used, for example.
@@ -164,7 +169,7 @@ This would be the equivalent format for default string format but in dictionary
form instead of string form.
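The equivalence between the two formats can be illustrated with a rough parser for the string form; this is a sketch only, not cloud-init's implementation:

```python
# Parse a merge-type string like "list(extend)+dict()+str(append)" into
# (name, options) pairs. Illustrative only.
import re

def parse_merge_how(s):
    pairs = []
    for chunk in s.split('+'):
        m = re.match(r'^(\w+)\((.*)\)$', chunk.strip())
        if not m:
            raise ValueError("bad merger spec: %r" % chunk)
        name, opts = m.group(1), m.group(2)
        options = [o.strip() for o in opts.split(',') if o.strip()]
        pairs.append((name, options))
    return pairs

print(parse_merge_how("list(extend)+dict()+str(append)"))
# [('list', ['extend']), ('dict', []), ('str', ['append'])]
```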
Specifying multiple types and its effect
-----------------------------------------
+========================================
Now you may be asking yourself, if I specify a merge-type header or dictionary
for every cloud-config that I provide, what exactly happens?
@@ -174,13 +179,13 @@ first one on that stack is the default merging classes, this set of mergers
will be used when the first cloud-config is merged with the initial empty
cloud-config dictionary. If the cloud-config that was just merged provided a
set of merging classes (via the above formats) then those merging classes will
-be pushed onto the stack. Now if there is a second cloud-config to be merged then
-the merging classes from the cloud-config before the first will be used (not the
-default) and so on. This way a cloud-config can decide how it will merge with a
-cloud-config dictionary coming after it.
+be pushed onto the stack. Now if there is a second cloud-config to be merged
+then the merging classes from the cloud-config before the first will be used
+(not the default) and so on. This way a cloud-config can decide how it will
+merge with a cloud-config dictionary coming after it.
Other uses
-----------
+==========
In addition to being used for merging user-data sections, the default merging
algorithm for merging 'conf.d' yaml files (which form an initial yaml config
@@ -192,3 +197,5 @@ merging, for example).
Note, however, that merge algorithms are not used *across* types of
configuration. As was the case before merging was implemented,
user-data will overwrite conf.d configuration without merging.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/index.rst b/doc/rtd/index.rst
index f8ff3c9f..90defade 100644
--- a/doc/rtd/index.rst
+++ b/doc/rtd/index.rst
@@ -1,32 +1,45 @@
.. _index:
-=====================
+.. http://thomas-cokelaer.info/tutorials/sphinx/rest_syntax.html
+.. As suggested at link above for headings use:
+.. # with overline, for parts
+.. * with overline, for chapters
+.. =, for sections
+.. -, for subsections
+.. ^, for subsubsections
+.. ", for paragraphs
+
+#############
Documentation
-=====================
+#############
-.. rubric:: Everything about cloud-init, a set of **python** scripts and utilities to make your cloud images be all they can be!
+.. rubric:: Everything about cloud-init, a set of **python** scripts and
+ utilities to make your cloud images be all they can be!
+*******
Summary
------------------
-
-`Cloud-init`_ is the *defacto* multi-distribution package that handles early initialization of a cloud instance.
+*******
+`Cloud-init`_ is the *de facto* multi-distribution package that handles early
+initialization of a cloud instance.
----
.. toctree::
:maxdepth: 2
- topics/capabilities
- topics/availability
- topics/format
- topics/dir_layout
- topics/examples
- topics/datasources
- topics/logging
- topics/modules
- topics/merging
- topics/moreinfo
- topics/hacking
+ topics/capabilities.rst
+ topics/availability.rst
+ topics/format.rst
+ topics/dir_layout.rst
+ topics/examples.rst
+ topics/datasources.rst
+ topics/logging.rst
+ topics/modules.rst
+ topics/merging.rst
+ topics/vendordata.rst
+ topics/moreinfo.rst
+ topics/hacking.rst
.. _Cloud-init: https://launchpad.net/cloud-init
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/availability.rst b/doc/rtd/topics/availability.rst
index 2d58f808..ef5ae7bf 100644
--- a/doc/rtd/topics/availability.rst
+++ b/doc/rtd/topics/availability.rst
@@ -1,8 +1,8 @@
-============
+************
Availability
-============
+************
-It is currently installed in the `Ubuntu Cloud Images`_ and also in the official `Ubuntu`_ images available on EC2.
+It is currently installed in the `Ubuntu Cloud Images`_ and also in the official `Ubuntu`_ images available on EC2, Azure, GCE and many other clouds.
Versions for other systems can be (or have been) created for the following distributions:
@@ -18,3 +18,4 @@ So ask your distribution provider where you can obtain an image with it built-in
.. _Ubuntu Cloud Images: http://cloud-images.ubuntu.com/
.. _Ubuntu: http://www.ubuntu.com/
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
index 63b34270..be0802c5 100644
--- a/doc/rtd/topics/capabilities.rst
+++ b/doc/rtd/topics/capabilities.rst
@@ -1,6 +1,6 @@
-=====================
+************
Capabilities
-=====================
+************
- Setting a default locale
- Setting a instance hostname
@@ -9,16 +9,18 @@ Capabilities
- Setting up ephemeral mount points
User configurability
---------------------
+====================
`Cloud-init`_ 's behavior can be configured via user-data.
User-data can be given by the user at instance launch time.
-This is done via the ``--user-data`` or ``--user-data-file`` argument to ec2-run-instances for example.
+This is done via the ``--user-data`` or ``--user-data-file`` argument to
+ec2-run-instances for example.
-* Check your local clients documentation for how to provide a `user-data` string
- or `user-data` file for usage by cloud-init on instance creation.
+* Check your local client's documentation for how to provide a `user-data`
+  string or `user-data` file for usage by cloud-init on instance creation.
.. _Cloud-init: https://launchpad.net/cloud-init
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 3a9c808c..9acecc53 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -1,20 +1,21 @@
.. _datasources:
-===========
+***********
Datasources
-===========
-----------------------
- What is a datasource?
-----------------------
+***********
-Datasources are sources of configuration data for cloud-init that typically come
-from the user (aka userdata) or come from the stack that created the configuration
-drive (aka metadata). Typical userdata would include files, yaml, and shell scripts
-while typical metadata would include server name, instance id, display name and other
-cloud specific details. Since there are multiple ways to provide this data (each cloud
-solution seems to prefer its own way) internally a datasource abstract class was
-created to allow for a single way to access the different cloud systems methods
-to provide this data through the typical usage of subclasses.
+What is a datasource?
+=====================
+
+Datasources are sources of configuration data for cloud-init that typically
+come from the user (aka userdata) or come from the stack that created the
+configuration drive (aka metadata). Typical userdata would include files,
+yaml, and shell scripts while typical metadata would include server name,
+instance id, display name and other cloud specific details. Since there are
+multiple ways to provide this data (each cloud solution seems to prefer its
+own way) internally a datasource abstract class was created to allow for a
+single way to access the different cloud systems methods to provide this data
+through the typical usage of subclasses.
The current interface that a datasource object must provide is the following:
@@ -70,131 +71,28 @@ The current interface that a datasource object must provide is the following:
def get_package_mirror_info(self)
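The subclassing approach described above can be sketched as a skeleton; the class name, attribute names and hard-coded values are assumptions for illustration only, not cloud-init's exact API:

```python
# An illustrative datasource skeleton built around the abstract
# interface sketched above.

class DataSourceExample:
    def __init__(self):
        self.userdata_raw = None
        self.metadata = {}

    def get_data(self):
        # A real datasource would probe its platform here (an http
        # endpoint, an attached disk, a serial port, ...) and return
        # False when that platform is not present.
        self.userdata_raw = "#cloud-config\n"
        self.metadata = {'instance-id': 'i-example',
                         'local-hostname': 'example'}
        return True

    def get_package_mirror_info(self):
        # The one method visible in the excerpt above; a real
        # implementation would derive mirrors from the platform/region.
        return None
```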
----
-EC2
----
-
-The EC2 datasource is the oldest and most widely used datasource that cloud-init
-supports. This datasource interacts with a *magic* ip that is provided to the
-instance by the cloud provider. Typically this ip is ``169.254.169.254`` of which
-at this ip a http server is provided to the instance so that the instance can make
-calls to get instance userdata and instance metadata.
-
-Metadata is accessible via the following URL:
-
-::
-
- GET http://169.254.169.254/2009-04-04/meta-data/
- ami-id
- ami-launch-index
- ami-manifest-path
- block-device-mapping/
- hostname
- instance-id
- instance-type
- local-hostname
- local-ipv4
- placement/
- public-hostname
- public-ipv4
- public-keys/
- reservation-id
- security-groups
-
-Userdata is accessible via the following URL:
-
-::
-
- GET http://169.254.169.254/2009-04-04/user-data
- 1234,fred,reboot,true | 4512,jimbo, | 173,,,
-
-Note that there are multiple versions of this data provided, cloud-init
-by default uses **2009-04-04** but newer versions can be supported with
-relative ease (newer versions have more data exposed, while maintaining
-backward compatibility with the previous versions).
-
-To see which versions are supported from your cloud provider use the following URL:
-
-::
-
- GET http://169.254.169.254/
- 1.0
- 2007-01-19
- 2007-03-01
- 2007-08-29
- 2007-10-10
- 2007-12-15
- 2008-02-01
- 2008-09-01
- 2009-04-04
- ...
- latest
-
-------------
-Config Drive
-------------
-
-.. include:: ../../sources/configdrive/README.rst
-
-----------
-OpenNebula
-----------
-
-.. include:: ../../sources/opennebula/README.rst
-
----------
-Alt cloud
----------
-
-.. include:: ../../sources/altcloud/README.rst
-
---------
-No cloud
---------
-
-.. include:: ../../sources/nocloud/README.rst
-
-----
-MAAS
-----
-
-*TODO*
-
-For now see: http://maas.ubuntu.com/
-
-----------
-CloudStack
-----------
-
-.. include:: ../../sources/cloudstack/README.rst
-
----
-OVF
----
-
-*TODO*
-
-For now see: https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/sources/ovf/
-
----------
-OpenStack
----------
-
-.. include:: ../../sources/openstack/README.rst
-
--------------
-Fallback/None
--------------
-
-This is the fallback datasource when no other datasource can be selected. It is
-the equivalent of a *empty* datasource in that it provides a empty string as userdata
-and a empty dictionary as metadata. It is useful for testing as well as for when
-you do not have a need to have an actual datasource to meet your instance
-requirements (ie you just want to run modules that are not concerned with any
-external data). It is typically put at the end of the datasource search list
-so that if all other datasources are not matched, then this one will be so that
-the user is not left with an inaccessible instance.
-
-**Note:** the instance id that this datasource provides is ``iid-datasource-none``.
-.. _boto: http://docs.pythonboto.org/en/latest/
+Datasource Documentation
+========================
+The following is a list of the implemented datasources.
+Follow the links for more information.
+
+.. toctree::
+ :maxdepth: 2
+
+ datasources/altcloud.rst
+ datasources/azure.rst
+ datasources/cloudsigma.rst
+ datasources/cloudstack.rst
+ datasources/configdrive.rst
+ datasources/digitalocean.rst
+ datasources/ec2.rst
+ datasources/maas.rst
+ datasources/nocloud.rst
+ datasources/opennebula.rst
+ datasources/openstack.rst
+ datasources/ovf.rst
+ datasources/smartos.rst
+ datasources/fallback.rst
+
+.. vi: textwidth=78
diff --git a/doc/sources/altcloud/README.rst b/doc/rtd/topics/datasources/altcloud.rst
index 0a54fda1..8646e77e 100644
--- a/doc/sources/altcloud/README.rst
+++ b/doc/rtd/topics/datasources/altcloud.rst
@@ -1,7 +1,10 @@
+Alt Cloud
+=========
+
The datasource altcloud will be used to pick up user data on `RHEVm`_ and `vSphere`_.
RHEVm
-~~~~~~
+-----
For `RHEVm`_ v3.0 the userdata is injected into the VM using floppy
injection via the `RHEVm`_ dashboard "Custom Properties".
@@ -38,7 +41,7 @@ data to it using the Delta Cloud.
For more information on Delta Cloud see: http://deltacloud.apache.org
vSphere
-~~~~~~~~
+-------
For VMWare's `vSphere`_ the userdata is injected into the VM as an ISO
via the cdrom. This can be done using the `vSphere`_ dashboard
@@ -53,7 +56,7 @@ ISO on the data store.
For example, to pass the same ``simple_script.bash`` to vSphere:
Create the ISO
------------------
+^^^^^^^^^^^^^^
.. sourcecode:: sh
@@ -67,7 +70,7 @@ NOTE: The file name on the ISO must be: ``user-data.txt``
% genisoimage -o user-data.iso -r my-iso
Verify the ISO
------------------
+^^^^^^^^^^^^^^
.. sourcecode:: sh
@@ -85,3 +88,4 @@ For more information on Delta Cloud see: http://deltacloud.apache.org
.. _RHEVm: https://www.redhat.com/virtualization/rhev/desktop/rhevm/
.. _vSphere: https://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html
+.. vi: textwidth=78
diff --git a/doc/sources/azure/README.rst b/doc/rtd/topics/datasources/azure.rst
index ec7d9e84..18d7c506 100644
--- a/doc/sources/azure/README.rst
+++ b/doc/rtd/topics/datasources/azure.rst
@@ -1,6 +1,5 @@
-================
-Azure Datasource
-================
+Azure
+=====
This datasource finds metadata and user-data from the Azure cloud platform.
@@ -44,7 +43,7 @@ following things:
- generate a x509 certificate and send that to the endpoint
waagent.conf config
-~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^
In order to use waagent.conf with cloud-init, the following settings are recommended. Other values can be changed or set to the defaults.
::
@@ -71,7 +70,7 @@ That agent command will take effect as if it were specified in system config.
Example:
-.. code::
+.. sourcecode:: xml
<wa:ProvisioningSection>
<wa:Version>1.0</wa:Version>
@@ -111,7 +110,7 @@ hostname is set, and will have the 'interface' in its environment. If
An example might be:
command: ["sh", "-c", "killall dhclient; dhclient $interface"]
-.. code::
+.. code:: yaml
datasource:
agent_command
@@ -126,7 +125,6 @@ An example might be:
# the method 'bounce' command.
command: "builtin"
hostname_command: "hostname"
- }
hostname
--------
@@ -153,3 +151,5 @@ cloud-init handles this by setting the hostname in the DataSource's 'get_data'
method via '``hostname $HostName``', and then bouncing the interface. This
behavior can be configured or disabled in the datasource config. See
'Configuration' above.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/cloudsigma.rst b/doc/rtd/topics/datasources/cloudsigma.rst
new file mode 100644
index 00000000..54963f61
--- /dev/null
+++ b/doc/rtd/topics/datasources/cloudsigma.rst
@@ -0,0 +1,40 @@
+CloudSigma
+==========
+
+This datasource finds metadata and user-data from the `CloudSigma`_ cloud
+platform. Data transfer occurs through a virtual serial port of the
+`CloudSigma`_'s VM and the presence of a network adapter is **NOT** a
+requirement. See `server context`_ in the public documentation for more
+information.
+
+
+Setting a hostname
+------------------
+By default the name of the server will be applied as a hostname on the first
+boot.
+
+
+Providing user-data
+-------------------
+
+You can provide user-data to the VM using the dedicated `meta field`_ in the
+`server context`_ ``cloudinit-user-data``. By default the *cloud-config*
+format is expected there and the ``#cloud-config`` header may be omitted.
+However, since this is a raw-text field you could provide any of the valid
+`config formats`_.
+
+You have the option to encode your user-data using Base64. In order to do that
+you have to add the ``cloudinit-user-data`` field to the ``base64_fields``.
+The latter is a comma-separated field with all the meta fields with base64
+encoded values.
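The base64 handling described above can be sketched in a few lines; the field names follow the CloudSigma documentation quoted here, and the payload is a placeholder:

```python
# Encode the user-data and declare the field in 'base64_fields' so the
# datasource knows to decode it.
import base64

user_data = "#cloud-config\nhostname: demo\n"
meta = {
    'cloudinit-user-data': base64.b64encode(user_data.encode()).decode(),
    # comma-separated list of meta fields whose values are base64 encoded
    'base64_fields': 'cloudinit-user-data',
}
```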
+
+If your user-data does not need an internet connection you can create a `meta
+field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as its
+value. If this field does not exist the default value is "net".
+
+
+.. _CloudSigma: http://cloudsigma.com/
+.. _server context: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
+.. _meta field: http://cloudsigma-docs.readthedocs.org/en/latest/meta.html
+.. _config formats: http://cloudinit.readthedocs.org/en/latest/topics/format.html
+.. vi: textwidth=78
diff --git a/doc/sources/cloudstack/README.rst b/doc/rtd/topics/datasources/cloudstack.rst
index eba1cd7e..04603d9c 100644
--- a/doc/sources/cloudstack/README.rst
+++ b/doc/rtd/topics/datasources/cloudstack.rst
@@ -1,3 +1,6 @@
+CloudStack
+==========
+
`Apache CloudStack`_ exposes user-data, meta-data, user password and account
sshkey through the Virtual-Router. For more details on meta-data and
user-data, refer to the `CloudStack Administrator Guide`_.
@@ -12,7 +15,7 @@ is the Virtual Router IP:
http://10.1.1.1/latest/meta-data/{metadata type}
Configuration
-~~~~~~~~~~~~~
+-------------
Apache CloudStack datasource can be configured as follows:
@@ -26,4 +29,6 @@ Apache CloudStack datasource can be configured as follows:
.. _Apache CloudStack: http://cloudstack.apache.org/
-.. _CloudStack Administrator Guide: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data \ No newline at end of file
+.. _CloudStack Administrator Guide: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data
+
+.. vi: textwidth=78
diff --git a/doc/sources/configdrive/README.rst b/doc/rtd/topics/datasources/configdrive.rst
index 8c40735f..acdab6a2 100644
--- a/doc/sources/configdrive/README.rst
+++ b/doc/rtd/topics/datasources/configdrive.rst
@@ -1,4 +1,8 @@
-The configuration drive datasource supports the `OpenStack`_ configuration drive disk.
+Config Drive
+============
+
+The configuration drive datasource supports the `OpenStack`_ configuration
+drive disk.
See `the config drive extension`_ and `introduction`_ in the public
documentation for more information.
@@ -6,14 +10,14 @@ The configuration drive datasource supports the `OpenStack`_ configuration drive
By default, cloud-init does *not* always consider this source to be a full-fledged
datasource. Instead, the typical behavior is to assume it is really only
present to provide networking information. Cloud-init will copy off the
-network information, apply it to the system, and then continue on. The
-"full" datasource could then be found in the EC2 metadata service. If this is
-not the case then the files contained on the located drive must provide equivalents
-to what the EC2 metadata service would provide (which is typical of the version
-2 support listed below)
+network information, apply it to the system, and then continue on. The "full"
+datasource could then be found in the EC2 metadata service. If this is not the
+case then the files contained on the located drive must provide equivalents to
+what the EC2 metadata service would provide (which is typical of the version 2
+support listed below)
Version 1
-~~~~~~~~~
+---------
The following criteria are required for a drive to be considered a config drive:
@@ -31,8 +35,8 @@ The following criteria are required to as a config drive:
This file is laid down by nova in order to pass static networking
information to the guest. Cloud-init will copy it off of the config-drive
- and into /etc/network/interfaces (or convert it to RH format) as soon as it can,
- and then attempt to bring up all network interfaces.
+ and into /etc/network/interfaces (or convert it to RH format) as soon as
+ it can, and then attempt to bring up all network interfaces.
``/root/.ssh/authorized_keys``
@@ -46,7 +50,7 @@ The following criteria are required to as a config drive:
formatted.
Version 2
-~~~~~~~~~
+---------
The following criteria are required for a drive to be considered a config drive:
@@ -70,9 +74,10 @@ The following criteria are required to as a config drive:
- meta-data.json (not mandatory)
Keys and values
-~~~~~~~~~~~~~~~
+---------------
-Cloud-init's behavior can be modified by keys found in the meta.js (version 1 only) file in the following ways.
+Cloud-init's behavior can be modified by keys found in the meta.js (version 1
+only) file in the following ways.
::
@@ -121,3 +126,4 @@ what all can be present here.
.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
.. _the config drive extension: http://docs.openstack.org/user-guide/content/config-drive.html
+.. vi: textwidth=78
diff --git a/doc/sources/digitalocean/README.rst b/doc/rtd/topics/datasources/digitalocean.rst
index 1bb89fe1..c6f5bc74 100644
--- a/doc/sources/digitalocean/README.rst
+++ b/doc/rtd/topics/datasources/digitalocean.rst
@@ -1,10 +1,15 @@
- The `DigitalOcean`_ datasource consumes the content served from DigitalOcean's `metadata service`_. This
-metadata service serves information about the running droplet via HTTP over the link local address
-169.254.169.254. The metadata API endpoints are fully described at
-`https://developers.digitalocean.com/metadata/ <https://developers.digitalocean.com/metadata/>`_.
+Digital Ocean
+=============
+
+The `DigitalOcean`_ datasource consumes the content served from DigitalOcean's
+`metadata service`_. This metadata service serves information about the
+running droplet via HTTP over the link local address 169.254.169.254. The
+metadata API endpoints are fully described at
+`https://developers.digitalocean.com/metadata/
+<https://developers.digitalocean.com/metadata/>`_.
Configuration
-~~~~~~~~~~~~~
+-------------
DigitalOcean's datasource can be configured as follows:
@@ -19,3 +24,5 @@ DigitalOcean's datasource can be configured as follows:
.. _DigitalOcean: http://digitalocean.com/
.. _metadata service: https://developers.digitalocean.com/metadata/
.. _Full documentation: https://developers.digitalocean.com/metadata/
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/ec2.rst b/doc/rtd/topics/datasources/ec2.rst
new file mode 100644
index 00000000..4810c984
--- /dev/null
+++ b/doc/rtd/topics/datasources/ec2.rst
@@ -0,0 +1,61 @@
+Amazon EC2
+==========
+
+The EC2 datasource is the oldest and most widely used datasource that
+cloud-init supports. This datasource interacts with a *magic* ip that is
+provided to the instance by the cloud provider. Typically this ip is
+``169.254.169.254``; an http server at this ip lets the instance make calls
+to get instance userdata and instance metadata.
+
+Metadata is accessible via the following URL:
+
+::
+
+ GET http://169.254.169.254/2009-04-04/meta-data/
+ ami-id
+ ami-launch-index
+ ami-manifest-path
+ block-device-mapping/
+ hostname
+ instance-id
+ instance-type
+ local-hostname
+ local-ipv4
+ placement/
+ public-hostname
+ public-ipv4
+ public-keys/
+ reservation-id
+ security-groups
+
+Userdata is accessible via the following URL:
+
+::
+
+ GET http://169.254.169.254/2009-04-04/user-data
+ 1234,fred,reboot,true | 4512,jimbo, | 173,,,
+
+Note that there are multiple versions of this data provided, cloud-init
+by default uses **2009-04-04** but newer versions can be supported with
+relative ease (newer versions have more data exposed, while maintaining
+backward compatibility with the previous versions).
+
+To see which versions are supported by your cloud provider use the following URL:
+
+::
+
+ GET http://169.254.169.254/
+ 1.0
+ 2007-01-19
+ 2007-03-01
+ 2007-08-29
+ 2007-10-10
+ 2007-12-15
+ 2008-02-01
+ 2008-09-01
+ 2009-04-04
+ ...
+ latest
+
+.. vi: textwidth=78
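The version list returned by ``GET http://169.254.169.254/`` (shown above) can be used to pick the newest dated API version; a sketch using pure string handling, with no network access attempted here:

```python
# Select the newest YYYY-MM-DD version from a metadata version listing.
import re

def newest_dated_version(versions):
    dated = [v for v in versions if re.match(r'^\d{4}-\d{2}-\d{2}$', v)]
    # ISO-style dates sort correctly as plain strings.
    return max(dated) if dated else None

print(newest_dated_version(
    ['1.0', '2007-01-19', '2008-09-01', '2009-04-04', 'latest']))
# 2009-04-04
```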
diff --git a/doc/rtd/topics/datasources/fallback.rst b/doc/rtd/topics/datasources/fallback.rst
new file mode 100644
index 00000000..1eb01dd0
--- /dev/null
+++ b/doc/rtd/topics/datasources/fallback.rst
@@ -0,0 +1,16 @@
+Fallback/None
+=============
+
+This is the fallback datasource when no other datasource can be selected. It
+is the equivalent of an empty datasource in that it provides an empty string
+as userdata and an empty dictionary as metadata. It is useful for testing as
+well as for when you do not have a need to have an actual datasource to meet
+your instance requirements (ie you just want to run modules that are not
+concerned with any external data). It is typically put at the end of the
+datasource search list so that if all other datasources are not matched, then
+this one will be used, so that the user is not left with an inaccessible
+instance.
+
+**Note:** the instance id that this datasource provides is
+``iid-datasource-none``.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/maas.rst b/doc/rtd/topics/datasources/maas.rst
new file mode 100644
index 00000000..f495dd4b
--- /dev/null
+++ b/doc/rtd/topics/datasources/maas.rst
@@ -0,0 +1,8 @@
+MAAS
+====
+
+*TODO*
+
+For now see: http://maas.ubuntu.com/
+
+.. vi: textwidth=78
diff --git a/doc/sources/nocloud/README.rst b/doc/rtd/topics/datasources/nocloud.rst
index 08a39377..b9ab5f11 100644
--- a/doc/sources/nocloud/README.rst
+++ b/doc/rtd/topics/datasources/nocloud.rst
@@ -1,12 +1,15 @@
-The data source ``NoCloud`` and ``NoCloudNet`` allow the user to provide user-data
-and meta-data to the instance without running a network service (or even without
-having a network at all).
+NoCloud
+=======
-You can provide meta-data and user-data to a local vm boot via files on a `vfat`_
-or `iso9660`_ filesystem. The filesystem volume label must be ``cidata``.
+The data source ``NoCloud`` allows the user to provide user-data and meta-data
+to the instance without running a network service (or even without having a
+network at all).
-These user-data and meta-data files are expected to be
-in the following format.
+You can provide meta-data and user-data to a local vm boot via files on a
+`vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be
+``cidata``.
+
+These user-data and meta-data files are expected to be in the following format.
::
@@ -16,8 +19,8 @@ in the following format.
Basically, user-data is simply user-data and meta-data is a yaml formatted file
representing what you'd find in the EC2 metadata service.
-Given a disk ubuntu 12.04 cloud image in 'disk.img', you can create a sufficient disk
-by following the example below.
+Given an Ubuntu 12.04 cloud image in 'disk.img', you can create a
+sufficient disk by following the example below.
::
@@ -46,12 +49,12 @@ by following the example below.
-drive file=boot-disk.img,if=virtio \
-drive file=seed.iso,if=virtio
-**Note:** that the instance-id provided (``iid-local01`` above) is what is used to
-determine if this is "first boot". So if you are making updates to user-data
-you will also have to change that, or start the disk fresh.
+**Note:** the instance-id provided (``iid-local01`` above) is what is used
+to determine if this is "first boot". So if you are making updates to
+user-data you will also have to change that, or start the disk fresh.
-Also, you can inject an ``/etc/network/interfaces`` file by providing the content
-for that file in the ``network-interfaces`` field of metadata.
+Also, you can inject an ``/etc/network/interfaces`` file by providing the
+content for that file in the ``network-interfaces`` field of metadata.
Example metadata:
@@ -69,3 +72,4 @@ Example metadata:
.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
+.. vi: textwidth=78
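The seed construction described above can be scripted. This is a sketch: the
file names and the ``cidata`` volume label come from the text, the
``genisoimage`` invocation mirrors the kind of command shown in the example,
and the hostname and password values are illustrative.

```python
# Sketch: write the NoCloud seed files and build the seed.iso command.
# The filesystem volume label must be "cidata" for NoCloud to find it.
import os


def write_seed(directory, instance_id="iid-local01", hostname="cloudimg",
               user_data="#cloud-config\npassword: passw0rd\n"):
    """Write meta-data and user-data, return a seed.iso build command."""
    meta = "instance-id: %s\nlocal-hostname: %s\n" % (instance_id, hostname)
    with open(os.path.join(directory, "meta-data"), "w") as f:
        f.write(meta)
    with open(os.path.join(directory, "user-data"), "w") as f:
        f.write(user_data)
    # "cidata" is the volume label the datasource looks for.
    return ["genisoimage", "-output", "seed.iso", "-volid", "cidata",
            "-joliet", "-rock",
            os.path.join(directory, "user-data"),
            os.path.join(directory, "meta-data")]
```

Remember from the note above that changing user-data without changing the
``instance-id`` (or starting the disk fresh) will not re-trigger "first boot".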
diff --git a/doc/sources/opennebula/README.rst b/doc/rtd/topics/datasources/opennebula.rst
index 4d7de27a..1b90a9c7 100644
--- a/doc/sources/opennebula/README.rst
+++ b/doc/rtd/topics/datasources/opennebula.rst
@@ -1,3 +1,6 @@
+OpenNebula
+==========
+
The `OpenNebula`_ (ON) datasource supports the contextualization disk.
See `contextualization overview`_, `contextualizing VMs`_ and
@@ -140,3 +143,4 @@ Example VM's context section
.. _contextualizing VMs: http://opennebula.org/documentation:documentation:cong
.. _network configuration: http://opennebula.org/documentation:documentation:cong#network_configuration
.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
+.. vi: textwidth=78
diff --git a/doc/sources/openstack/README.rst b/doc/rtd/topics/datasources/openstack.rst
index 8102597e..ea47ea85 100644
--- a/doc/sources/openstack/README.rst
+++ b/doc/rtd/topics/datasources/openstack.rst
@@ -1,7 +1,10 @@
+OpenStack
+=========
+
*TODO*
Vendor Data
-~~~~~~~~~~~
+-----------
The OpenStack metadata server can be configured to serve up vendor data
which is available to all instances for consumption. OpenStack vendor
@@ -17,8 +20,9 @@ upgrade packages and install ``htop`` on all instances:
.. sourcecode:: json
- {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"}
+ {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"}
For more general information about how cloud-init handles vendor data,
-including how it can be disabled by users on instances, see
-https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/vendordata.txt
+including how it can be disabled by users on instances, see `Vendor Data`_.
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/ovf.rst b/doc/rtd/topics/datasources/ovf.rst
new file mode 100644
index 00000000..a0770332
--- /dev/null
+++ b/doc/rtd/topics/datasources/ovf.rst
@@ -0,0 +1,12 @@
+OVF
+===
+
+The OVF Datasource provides a datasource for reading data from an
+`Open Virtualization Format
+<https://en.wikipedia.org/wiki/Open_Virtualization_Format>`_ ISO
+transport.
+
+For further information see a full working example in cloud-init's
+source code tree in doc/sources/ovf.
+
+.. vi: textwidth=78
diff --git a/doc/sources/smartos/README.rst b/doc/rtd/topics/datasources/smartos.rst
index e63f311f..a1e1542b 100644
--- a/doc/sources/smartos/README.rst
+++ b/doc/rtd/topics/datasources/smartos.rst
@@ -1,4 +1,3 @@
-==================
SmartOS Datasource
==================
@@ -23,14 +22,17 @@ Cloud-init supports three modes of delivering user/meta-data via the flexible
channels of SmartOS.
* user-data is written to /var/db/user-data
- - per the spec, user-data is for consumption by the end-user, not provisioning
- tools
+
+ - per the spec, user-data is for consumption by the end-user, not
+ provisioning tools
   - cloud-init entirely ignores this channel other than writing it to disk
- removal of the meta-data key means that /var/db/user-data gets removed
- - a backup of previous meta-data is maintained as /var/db/user-data.<timestamp>
- - <timestamp> is the epoch time when cloud-init ran
+ - a backup of previous meta-data is maintained as
+ /var/db/user-data.<timestamp>. <timestamp> is the epoch time when
+ cloud-init ran
* user-script is written to /var/lib/cloud/scripts/per-boot/99_user_data
+
- this is executed each boot
- a link is created to /var/db/user-script
   - previous versions of the user-script are written to
@@ -42,12 +44,14 @@ channels of SmartOS.
or is not an executable, cloud-init will add a shebang of "#!/bin/bash"
* cloud-init:user-data is treated like on other Clouds.
+
- this channel is used for delivering _all_ cloud-init instructions
- scripts delivered over this channel must be well formed (i.e. must have
a shebang)
Cloud-init supports reading the traditional meta-data fields supported by the
SmartOS tools. These are:
+
* root_authorized_keys
* hostname
* enable_motd_sys_info
@@ -56,38 +60,43 @@ SmartOS tools. These are:
Note: At this time iptables_disable and enable_motd_sys_info are read but
are not actioned.
-disabling user-script
+Disabling user-script
---------------------
Cloud-init uses the per-boot script functionality to handle the execution
of the user-script. If you want to prevent this, use a cloud-config of:
-#cloud-config
-cloud_final_modules:
- - scripts-per-once
- - scripts-per-instance
- - scripts-user
- - ssh-authkey-fingerprints
- - keys-to-console
- - phone-home
- - final-message
- - power-state-change
+.. code:: yaml
+
+ #cloud-config
+ cloud_final_modules:
+ - scripts-per-once
+ - scripts-per-instance
+ - scripts-user
+ - ssh-authkey-fingerprints
+ - keys-to-console
+ - phone-home
+ - final-message
+ - power-state-change
Alternatively you can use the json patch method
-#cloud-config-jsonp
-[
- { "op": "replace",
- "path": "/cloud_final_modules",
- "value": ["scripts-per-once",
- "scripts-per-instance",
- "scripts-user",
- "ssh-authkey-fingerprints",
- "keys-to-console",
- "phone-home",
- "final-message",
- "power-state-change"]
- }
-]
+
+.. code:: yaml
+
+ #cloud-config-jsonp
+ [
+ { "op": "replace",
+ "path": "/cloud_final_modules",
+ "value": ["scripts-per-once",
+ "scripts-per-instance",
+ "scripts-user",
+ "ssh-authkey-fingerprints",
+ "keys-to-console",
+ "phone-home",
+ "final-message",
+ "power-state-change"]
+ }
+ ]
The default cloud-config includes "scripts-per-boot". Cloud-init will still
ingest and write the user-data but will not execute it when you disable
@@ -105,6 +114,7 @@ base64
The following are exempt from base64 encoding, owing to the fact that they
are provided by SmartOS:
+
* root_authorized_keys
* enable_motd_sys_info
* iptables_disable
@@ -117,20 +127,21 @@ This means that user-script and user-data as well as other values can be
base64 encoded. Since Cloud-init can only guess as to whether or not something
is truly base64 encoded, the following meta-data keys are hints as to whether
or not to base64 decode something:
+
* base64_all: Except for excluded keys, attempt to base64 decode
- the values. If the value fails to decode properly, it will be
- returned in its text
+     the values. If the value fails to decode properly, it will be
+     returned as text.
 * base64_keys: A comma delimited list of which keys are base64 encoded.
* b64-<key>:
   for any key, if there exists an entry in the metadata for 'b64-<key>',
   then 'b64-<key>' is expected to be a plaintext boolean indicating whether
or not its value is encoded.
* no_base64_decode: This is a configuration setting
- (i.e. /etc/cloud/cloud.cfg.d) that sets which values should not be
- base64 decoded.
+ (i.e. /etc/cloud/cloud.cfg.d) that sets which values should not be
+ base64 decoded.
-disk_aliases and ephemeral disk:
----------------
+disk_aliases and ephemeral disk
+-------------------------------
By default, SmartOS only supports a single ephemeral disk. That disk is
completely empty (un-partitioned with no filesystem).
@@ -140,10 +151,14 @@ The SmartOS datasource has built-in cloud-config which instructs the
You can control the disk_setup then in 2 ways:
1. through the datasource config, you can change the 'alias' of
   ephemeral0 to reference another device. The default is:
+
'disk_aliases': {'ephemeral0': '/dev/vdb'},
+
Which means anywhere disk_setup sees a device named 'ephemeral0'
then /dev/vdb will be substituted.
2. you can provide disk_setup or fs_setup data in user-data to overwrite
the datasource's built-in values.
See doc/examples/cloud-config-disk-setup.txt for information on disk_setup.
+
+.. vi: textwidth=78
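The base64 decode hints described above (``base64_all``, ``base64_keys``,
``b64-<key>``, and the exempt SmartOS-provided keys) can be sketched roughly
as follows. This helper is illustrative only; it is not cloud-init's actual
implementation.

```python
# Illustrative sketch of the SmartOS base64 hint logic; not cloud-init code.
import base64
import binascii

# Keys exempt from base64 decoding because SmartOS itself provides them.
EXEMPT = {"root_authorized_keys", "enable_motd_sys_info", "iptables_disable"}


def maybe_decode(key, value, metadata):
    """Decode `value` if the metadata hint keys say it is base64 encoded."""
    if key in EXEMPT:
        return value
    listed = [k.strip() for k in metadata.get("base64_keys", "").split(",")]
    hinted = (
        metadata.get("base64_all") in (True, "true")
        or key in listed
        or metadata.get("b64-%s" % key) in (True, "true")
    )
    if not hinted:
        return value
    try:
        return base64.b64decode(value).decode("utf-8")
    except (binascii.Error, UnicodeDecodeError, ValueError):
        # Per the text above: a value that fails to decode is returned as-is.
        return value
```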
diff --git a/doc/rtd/topics/dir_layout.rst b/doc/rtd/topics/dir_layout.rst
index 6dcb22ce..3f5aa205 100644
--- a/doc/rtd/topics/dir_layout.rst
+++ b/doc/rtd/topics/dir_layout.rst
@@ -1,6 +1,6 @@
-================
+****************
Directory layout
-================
+****************
Cloud-init's directory structure is somewhat different from a regular application::
@@ -79,3 +79,4 @@ Cloudinits's directory structure is somewhat different from a regular applicatio
   is only run `per-once`, `per-instance`, `per-always`. This folder contains
   semaphore `files` which are only supposed to run `per-once` (not tied to the instance id).
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/examples.rst b/doc/rtd/topics/examples.rst
index 2e6cfa1e..a110721c 100644
--- a/doc/rtd/topics/examples.rst
+++ b/doc/rtd/topics/examples.rst
@@ -1,11 +1,11 @@
.. _yaml_examples:
-=====================
+*********************
Cloud config examples
-=====================
+*********************
Including users and groups
---------------------------
+==========================
.. literalinclude:: ../../examples/cloud-config-user-groups.txt
:language: yaml
@@ -13,7 +13,7 @@ Including users and groups
Writing out arbitrary files
----------------------------
+===========================
.. literalinclude:: ../../examples/cloud-config-write-files.txt
:language: yaml
@@ -21,21 +21,21 @@ Writing out arbitrary files
Adding a yum repository
------------------------
+=======================
.. literalinclude:: ../../examples/cloud-config-yum-repo.txt
:language: yaml
:linenos:
Configure an instances trusted CA certificates
-----------------------------------------------
+==============================================
.. literalinclude:: ../../examples/cloud-config-ca-certs.txt
:language: yaml
:linenos:
Configure an instances resolv.conf
-----------------------------------
+==================================
*Note:* when using a config drive and a RHEL like system resolv.conf
will also be managed 'automatically' due to the available information
@@ -47,28 +47,28 @@ that wish to have different settings use this module.
:linenos:
Install and run `chef`_ recipes
--------------------------------
+===============================
.. literalinclude:: ../../examples/cloud-config-chef.txt
:language: yaml
:linenos:
Setup and run `puppet`_
------------------------
+=======================
.. literalinclude:: ../../examples/cloud-config-puppet.txt
:language: yaml
:linenos:
Add apt repositories
---------------------
+====================
.. literalinclude:: ../../examples/cloud-config-add-apt-repos.txt
:language: yaml
:linenos:
Run commands on first boot
---------------------------
+==========================
.. literalinclude:: ../../examples/cloud-config-boot-cmds.txt
:language: yaml
@@ -80,70 +80,70 @@ Run commands on first boot
Alter the completion message
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-final-message.txt
:language: yaml
:linenos:
Install arbitrary packages
---------------------------
+==========================
.. literalinclude:: ../../examples/cloud-config-install-packages.txt
:language: yaml
:linenos:
Run apt or yum upgrade
-----------------------
+======================
.. literalinclude:: ../../examples/cloud-config-update-packages.txt
:language: yaml
:linenos:
Adjust mount points mounted
----------------------------
+===========================
.. literalinclude:: ../../examples/cloud-config-mount-points.txt
:language: yaml
:linenos:
Call a url when finished
-------------------------
+========================
.. literalinclude:: ../../examples/cloud-config-phone-home.txt
:language: yaml
:linenos:
Reboot/poweroff when finished
------------------------------
+=============================
.. literalinclude:: ../../examples/cloud-config-power-state.txt
:language: yaml
:linenos:
Configure instances ssh-keys
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-ssh-keys.txt
:language: yaml
:linenos:
Additional apt configuration
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-apt.txt
:language: yaml
:linenos:
Disk setup
-----------
+==========
.. literalinclude:: ../../examples/cloud-config-disk-setup.txt
:language: yaml
:linenos:
Register RedHat Subscription
-----------------------------
+============================
.. literalinclude:: ../../examples/cloud-config-rh_subscription.txt
:language: yaml
@@ -151,3 +151,4 @@ Register RedHat Subscription
.. _chef: http://www.opscode.com/chef/
.. _puppet: http://puppetlabs.com/
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
index 1dd92309..5898d17a 100644
--- a/doc/rtd/topics/format.rst
+++ b/doc/rtd/topics/format.rst
@@ -1,18 +1,18 @@
-=======
+*******
Formats
-=======
+*******
User data that will be acted upon by cloud-init must be in one of the following types.
Gzip Compressed Content
------------------------
+=======================
Content found to be gzip compressed will be uncompressed.
The uncompressed data will then be used as if it were not compressed.
This is typically useful because user-data is limited to ~16384 [#]_ bytes.
Mime Multi Part Archive
------------------------
+=======================
This list of rules is applied to each part of this multi-part file.
Using a mime-multi part file, the user can specify more than one type of data.
@@ -31,7 +31,7 @@ Supported content-types:
- text/cloud-boothook
Helper script to generate mime messages
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------
.. code-block:: python
@@ -59,14 +59,14 @@ Helper script to generate mime messages
User-Data Script
-----------------
+================
Typically used by those who just want to execute a shell script.
Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME archive.
Example
-~~~~~~~
+-------
::
@@ -78,7 +78,7 @@ Example
$ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9
Include File
-------------
+============
This content is an ``include`` file.
@@ -89,7 +89,7 @@ Ie, the content read from the URL can be gzipped, mime-multi-part, or plain text
Begins with: ``#include`` or ``Content-Type: text/x-include-url`` when using a MIME archive.
Cloud Config Data
------------------
+=================
Cloud-config is the simplest way to accomplish some things
via user-data. Using cloud-config syntax, the user can specify certain things in a human friendly format.
@@ -109,14 +109,14 @@ See the :ref:`yaml_examples` section for a commented set of examples of supporte
Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using a MIME archive.
Upstart Job
------------
+===========
Content is placed into a file in ``/etc/init``, and will be consumed by upstart as any other upstart job.
Begins with: ``#upstart-job`` or ``Content-Type: text/upstart-job`` when using a MIME archive.
Cloud Boothook
---------------
+==============
This content is ``boothook`` data. It is stored in a file under ``/var/lib/cloud`` and then executed immediately.
This is the earliest ``hook`` available. Note that there is no mechanism provided for running only once. The boothook must take care of this itself.
@@ -125,7 +125,7 @@ It is provided with the instance id in the environment variable ``INSTANCE_I``.
Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when using a MIME archive.
Part Handler
-------------
+============
This is a ``part-handler``. It will be written to a file in ``/var/lib/cloud/data`` based on its filename (which is generated).
This must be python code that contains a ``list_types`` method and a ``handle_type`` method.
@@ -147,7 +147,7 @@ The ``begin`` and ``end`` calls are to allow the part handler to do initializati
Begins with: ``#part-handler`` or ``Content-Type: text/part-handler`` when using a MIME archive.
Example
-~~~~~~~
+-------
.. literalinclude:: ../../examples/part-handler.txt
:language: python
@@ -157,3 +157,4 @@ Also this `blog`_ post offers another example for more advanced usage.
.. [#] See your cloud provider for applicable user-data size limitations...
.. _blog: http://foss-boss.blogspot.com/2011/01/advanced-cloud-init-custom-handlers.html
+.. vi: textwidth=78
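A minimal part-handler of the shape described above can be sketched as
follows. The ``list_types``/``handle_part`` names and four-argument signature
mirror cloud-init's shipped example (doc/examples/part-handler.txt); the
content-type here is made up for illustration.

```python
#part-handler
# Minimal sketch of a part handler; signatures mirror cloud-init's example
# part-handler, and the content-type below is illustrative.


def list_types():
    # Content-types this handler wants to receive.
    return ["text/x-my-part"]


def handle_part(data, ctype, filename, payload):
    # cloud-init calls this once with ctype "__begin__", once per matching
    # part, and once with "__end__" (the begin/end calls mentioned above),
    # allowing initialization and teardown.
    if ctype == "__begin__":
        print("part handler: starting")
        return
    if ctype == "__end__":
        print("part handler: finished")
        return
    print("part handler: got %s part %s (%d bytes)"
          % (ctype, filename, len(payload)))
```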
diff --git a/doc/rtd/topics/hacking.rst b/doc/rtd/topics/hacking.rst
index 96ab88ef..5ec25bfb 100644
--- a/doc/rtd/topics/hacking.rst
+++ b/doc/rtd/topics/hacking.rst
@@ -1 +1,2 @@
.. include:: ../../../HACKING.rst
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/logging.rst b/doc/rtd/topics/logging.rst
index b010aa96..c6afca16 100644
--- a/doc/rtd/topics/logging.rst
+++ b/doc/rtd/topics/logging.rst
@@ -1,15 +1,15 @@
-=======
+*******
Logging
-=======
+*******
Cloud-init supports both local and remote logging configurable through python's
built-in logging configuration and through the cloud-init rsyslog module.
Command Output
---------------
+==============
Cloud-init can redirect its stdout and stderr based on config given under the
``output`` config key. The output of any commands run by cloud-init and any
-user or vendor scripts provided will also be included here. The ``output``
-key accepts a dictionary for configuration. Output files may be specified
+user or vendor scripts provided will also be included here. The ``output`` key
+accepts a dictionary for configuration. Output files may be specified
individually for each stage (``init``, ``config``, and ``final``), or a single
key ``all`` may be used to specify output for all stages.
@@ -31,9 +31,9 @@ stdout and stderr from all cloud-init stages to
For a more complex example, the following configuration would output the init
stage to ``/var/log/cloud-init.out`` and ``/var/log/cloud-init.err``, for
stdout and stderr respectively, replacing anything that was previously there.
-For the config stage, it would pipe both stdout and stderr through
-``tee -a /var/log/cloud-config.log``. For the final stage it would append the
-output of stdout and stderr to ``/var/log/cloud-final.out`` and
+For the config stage, it would pipe both stdout and stderr through ``tee -a
+/var/log/cloud-config.log``. For the final stage it would append the output of
+stdout and stderr to ``/var/log/cloud-final.out`` and
``/var/log/cloud-final.err`` respectively. ::
output:
@@ -48,8 +48,8 @@ output of stdout and stderr to ``/var/log/cloud-final.out`` and
Python Logging
--------------
Cloud-init uses the python logging module, and can accept config for this
-module using the standard python fileConfig format. Cloud-init looks for config
-for the logging module under the ``logcfg`` key.
+module using the standard python fileConfig format. Cloud-init looks for
+config for the logging module under the ``logcfg`` key.
.. note::
the logging configuration is not yaml, it is python ``fileConfig`` format,
@@ -67,9 +67,9 @@ Python's fileConfig format consists of sections with headings in the format
logging must contain the sections ``[loggers]``, ``[handlers]``, and
``[formatters]``, which name the entities of their respective types that will
be defined. The section name for each defined logger, handler and formatter
-will start with its type, followed by an underscore (``_``) and the name of the
-entity. For example, if a logger was specified with the name ``log01``, config
-for the logger would be in the section ``[logger_log01]``.
+will start with its type, followed by an underscore (``_``) and the name of
+the entity. For example, if a logger was specified with the name ``log01``,
+config for the logger would be in the section ``[logger_log01]``.
Logger config entries contain basic logging set up. They may specify a list of
handlers to send logging events to as well as the lowest priority level of
@@ -80,13 +80,13 @@ handlers. A level entry can be any of the following: ``DEBUG``, ``INFO``,
the ``NOTSET`` option will allow all logging events to be recorded.
Each configured handler must specify a class under the python's ``logging``
-package namespace. A handler may specify a message formatter to use, a priority
-level, and arguments for the handler class. Common handlers are
+package namespace. A handler may specify a message formatter to use, a
+priority level, and arguments for the handler class. Common handlers are
``StreamHandler``, which handles stream redirects (i.e. logging to stderr),
and ``FileHandler`` which outputs to a log file. The logging module also
-supports logging over net sockets, over http, via smtp, and additional
-complex configurations. For full details about the handlers available for
-python logging, please see the documentation for `python logging handlers`_.
+supports logging over net sockets, over http, via smtp, and additional complex
+configurations. For full details about the handlers available for python
+logging, please see the documentation for `python logging handlers`_.
Log messages are formatted using the ``logging.Formatter`` class, which is
configured using ``formatter`` config entities. A default format of
@@ -173,3 +173,4 @@ For more information on rsyslog configuration, see :ref:`cc_rsyslog`.
.. _python logging config: https://docs.python.org/3/library/logging.config.html#configuration-file-format
.. _python logging handlers: https://docs.python.org/3/library/logging.handlers.html
.. _python logging formatters: https://docs.python.org/3/library/logging.html#formatter-objects
+.. vi: textwidth=78
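The fileConfig layout described above (the ``[loggers]``, ``[handlers]``, and
``[formatters]`` sections plus ``[logger_log01]``-style entity sections) can
be exercised directly with Python's ``logging.config.fileConfig``. The
concrete names (``log01``, ``hand01``, ``form01``) and the format string are
illustrative.

```python
# Sketch: a minimal fileConfig in the format described above, loaded with
# Python's logging module. Entity names and format are illustrative.
import io
import logging
import logging.config

CONFIG = """\
[loggers]
keys=root,log01

[handlers]
keys=hand01

[formatters]
keys=form01

[logger_root]
level=NOTSET
handlers=hand01

[logger_log01]
level=DEBUG
handlers=hand01
qualname=log01
propagate=0

[handler_hand01]
class=StreamHandler
level=DEBUG
formatter=form01
args=(sys.stderr,)

[formatter_form01]
format=%(asctime)s - %(levelname)s - %(message)s
"""

# fileConfig accepts a file-like object as well as a filename.
logging.config.fileConfig(io.StringIO(CONFIG))
logging.getLogger("log01").debug("fileConfig loaded")
```

Note how the ``[logger_log01]`` section name is the entity type, an
underscore, then the name given in ``[loggers]``, exactly as described above.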
diff --git a/doc/rtd/topics/merging.rst b/doc/rtd/topics/merging.rst
index 2bd87b16..eca118f5 100644
--- a/doc/rtd/topics/merging.rst
+++ b/doc/rtd/topics/merging.rst
@@ -1,5 +1,6 @@
-==========================
+**************************
Merging User-Data Sections
-==========================
+**************************
.. include:: ../../merging.rst
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/modules.rst b/doc/rtd/topics/modules.rst
index 57892f2d..a3ead4f1 100644
--- a/doc/rtd/topics/modules.rst
+++ b/doc/rtd/topics/modules.rst
@@ -1,6 +1,6 @@
-=======
+*******
Modules
-=======
+*******
.. automodule:: cloudinit.config.cc_apt_configure
.. automodule:: cloudinit.config.cc_apt_pipelining
.. automodule:: cloudinit.config.cc_bootcmd
@@ -55,3 +55,4 @@ Modules
.. automodule:: cloudinit.config.cc_users_groups
.. automodule:: cloudinit.config.cc_write_files
.. automodule:: cloudinit.config.cc_yum_add_repo
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/moreinfo.rst b/doc/rtd/topics/moreinfo.rst
index b34cb7dc..9c3b7fba 100644
--- a/doc/rtd/topics/moreinfo.rst
+++ b/doc/rtd/topics/moreinfo.rst
@@ -1,12 +1,13 @@
-================
+****************
More information
-================
+****************
Useful external references
---------------------------
+==========================
- `The beauty of cloudinit`_
- `Introduction to cloud-init`_ (video)
.. _Introduction to cloud-init: http://www.youtube.com/watch?v=-zL3BdbKyGY
.. _The beauty of cloudinit: http://brandon.fuller.name/archives/2011/05/02/06.40.57/
+.. vi: textwidth=78
diff --git a/doc/vendordata.txt b/doc/rtd/topics/vendordata.rst
index 9acbe41c..2a94318e 100644
--- a/doc/vendordata.txt
+++ b/doc/rtd/topics/vendordata.rst
@@ -1,4 +1,10 @@
-=== Overview ===
+***********
+Vendor Data
+***********
+
+Overview
+========
+
Vendordata is data provided by the entity that launches an instance
(for example, the cloud provider). This data can be used to
customize the image to fit into the particular environment it is
@@ -6,21 +12,23 @@ being run in.
Vendordata follows the same rules as user-data, with the following
caveats:
- 1. Users have ultimate control over vendordata. They can disable its
- execution or disable handling of specific parts of multipart input.
- 2. By default it only runs on first boot
- 3. Vendordata can be disabled by the user. If the use of vendordata is
- required for the instance to run, then vendordata should not be
- used.
- 4. user supplied cloud-config is merged over cloud-config from
- vendordata.
-
-Users providing cloud-config data can use the '#cloud-config-jsonp' method
-to more finely control their modifications to the vendor supplied
-cloud-config. For example, if both vendor and user have provided
-'runcnmd' then the default merge handler will cause the user's runcmd to
-override the one provided by the vendor. To append to 'runcmd', the user
-could better provide multipart input with a cloud-config-jsonp part like:
+
+ 1. Users have ultimate control over vendordata. They can disable its
+ execution or disable handling of specific parts of multipart input.
+ 2. By default it only runs on first boot.
+ 3. Vendordata can be disabled by the user. If the use of vendordata is
+ required for the instance to run, then vendordata should not be used.
+ 4. User-supplied cloud-config is merged over cloud-config from vendordata.
+
+Users providing cloud-config data can use the '#cloud-config-jsonp' method to
+more finely control their modifications to the vendor supplied cloud-config.
+For example, if both vendor and user have provided 'runcmd' then the default
+merge handler will cause the user's runcmd to override the one provided by the
+vendor. To append to 'runcmd', the user could better provide multipart input
+with a cloud-config-jsonp part like:
+
+.. code:: yaml
+
#cloud-config-jsonp
[{ "op": "add", "path": "/runcmd", "value": ["my", "command", "here"]}]
@@ -29,25 +37,35 @@ mean any action that could compromise a system. Since users trust
you, please take care to make sure that any vendordata is safe,
atomic, idempotent and does not put your users at risk.
-=== Input Formats ===
+Input Formats
+=============
+
cloud-init will download and cache to filesystem any vendor-data that it
-finds. Vendordata is handled exactly like user-data. That means that
-the vendor can supply multipart input and have those parts acted on
-in the same way as user-data.
+finds. Vendordata is handled exactly like user-data. That means that the
+vendor can supply multipart input and have those parts acted on in the same
+way as user-data.
The only differences are:
+
 * vendor-scripts are stored in a different location than user-scripts (to
avoid namespace collision)
* user can disable part handlers by cloud-config settings.
For example, to disable handling of 'part-handlers' in vendor-data,
the user could provide user-data like this:
+
+ .. code:: yaml
+
#cloud-config
vendordata: {excluded: 'text/part-handler'}
-=== Examples ===
+Examples
+========
There are examples in the examples subdirectory.
+
Additionally, the 'tools' directory contains 'write-mime-multipart',
which can be used to easily generate mime-multi-part files from a list
of input files. That data can then be given to an instance.
See 'write-mime-multipart --help' for usage.
+
+.. vi: textwidth=78
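The jsonp "add" shown above can be illustrated with a toy helper. This mimics
the append-to-``runcmd`` effect the text describes; it is not cloud-init's
merge implementation, and a full RFC 6902 "add" has more cases than this.

```python
# Toy sketch of the '#cloud-config-jsonp' "add" op described above: append
# to an existing list value, otherwise set the key. Illustrative only.


def apply_add(config, path, value):
    """Apply a minimal 'add' of `value` at JSON-pointer-style `path`."""
    keys = [k for k in path.split("/") if k]
    target = config
    for key in keys[:-1]:
        target = target[key]
    leaf = keys[-1]
    if isinstance(target.get(leaf), list):
        # Append, so the vendor-supplied commands are kept.
        target[leaf].append(value)
    else:
        target[leaf] = value
    return config


vendor_cfg = {"runcmd": [["echo", "from-vendor"]]}
apply_add(vendor_cfg, "/runcmd", ["my", "command", "here"])
# vendor_cfg["runcmd"] now holds the vendor command plus the appended one.
```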
diff --git a/doc/sources/cloudsigma/README.rst b/doc/sources/cloudsigma/README.rst
deleted file mode 100644
index 6509b585..00000000
--- a/doc/sources/cloudsigma/README.rst
+++ /dev/null
@@ -1,38 +0,0 @@
-=====================
-CloudSigma Datasource
-=====================
-
-This datasource finds metadata and user-data from the `CloudSigma`_ cloud platform.
-Data transfer occurs through a virtual serial port of the `CloudSigma`_'s VM and the
-presence of network adapter is **NOT** a requirement,
-
- See `server context`_ in the public documentation for more information.
-
-
-Setting a hostname
-~~~~~~~~~~~~~~~~~~
-
-By default the name of the server will be applied as a hostname on the first boot.
-
-
-Providing user-data
-~~~~~~~~~~~~~~~~~~~
-
-You can provide user-data to the VM using the dedicated `meta field`_ in the `server context`_
-``cloudinit-user-data``. By default *cloud-config* format is expected there and the ``#cloud-config``
-header could be omitted. However since this is a raw-text field you could provide any of the valid
-`config formats`_.
-
-You have the option to encode your user-data using Base64. In order to do that you have to add the
-``cloudinit-user-data`` field to the ``base64_fields``. The latter is a comma-separated field with
-all the meta fields whit base64 encoded values.
-
-If your user-data does not need an internet connection you can create a
-`meta field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as value.
-If this field does not exist the default value is "net".
-
-
-.. _CloudSigma: http://cloudsigma.com/
-.. _server context: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
-.. _meta field: http://cloudsigma-docs.readthedocs.org/en/latest/meta.html
-.. _config formats: http://cloudinit.readthedocs.org/en/latest/topics/format.html