Age | Commit message | Author |
|
importing, constant usage.
1. Move all datasources to a new sources directory
2. Rename some files to be more consistent with Python file/module naming.
|
|
|
|
What this does is provide a second DataSource that can use the
kernel command line 'url=' parameter. For example:
  ro root=/dev/vda url=http://example.com/i-abcdefg/
http://example.com/i-abcdefg/ would contain:
  datasource:
    NoCloud:
      # default seedfrom is None
      # if found, then it should contain a url with:
      #   <url>/user-data and <url>/meta-data
      # seedfrom: http://my.example.com/i-abcde
      seedfrom: http://example.com/i-abcdefg/
Then, the NoCloudNet DataSource would find that seedfrom config
and consume data at
  http://example.com/i-abcdefg/user-data
and
  http://example.com/i-abcdefg/meta-data
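As a rough illustration of that flow, a minimal sketch in Python (the
function name, the use of urllib2, and the omission of error handling are
all assumptions here, not the actual DataSourceNoCloudNet code):
    import urllib2
    import yaml

    def read_seedfrom(kernel_url):
        # The url= target holds a small config snippet that names the real
        # seed location under datasource/NoCloud/seedfrom.
        cfg = yaml.safe_load(urllib2.urlopen(kernel_url).read())
        seedfrom = cfg["datasource"]["NoCloud"]["seedfrom"]
        # The seed location is then expected to serve these two files.
        user_data = urllib2.urlopen(seedfrom + "user-data").read()
        meta_data = urllib2.urlopen(seedfrom + "meta-data").read()
        return meta_data, user_data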
|
|
The purely local, non-device (vfat/iso9660) sources were broken by the
last set of changes here. This restores them to a functional state.
If the seed is from a device, then the default behavior is 'net' mode.
For seed via cmdline, the user can specify 'ds=nocloud-net', and for
seed via a filesystem seed dir, they can just populate the other directory.
To make it easier, when attaching a seed device, the user does not need
to specify a 'dsmode' of 'net' in the metadata file. They still can, but 'net'
is now the default, which seems more likely to be what is desired.
LP: #942695
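For illustration, a meta-data file on such a seed device can now be as
small as the following (the instance-id and hostname values are made up;
the dsmode line is shown only to note that it is no longer required):
    instance-id: iid-local01
    local-hostname: myhost
    # dsmode: net   (optional; 'net' is now the default for device seeds)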
|
|
document usage of DataSourceNoCloud from vfat or iso disk.
|
|
This allows you to attach a disk, in ISO9660 or vfat filesystem format and
labeled 'cidata', with 'user-data' and 'meta-data' on it.
It provides a much easier way to interact with cloud-init in nocloud
than mounting the image or using the OVF method.
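One hypothetical way to build such a disk, sketched in Python (this assumes
the external genisoimage tool is available; it is not part of cloud-init):
    import subprocess

    def build_seed_iso(user_data_path, meta_data_path, out="seed.iso"):
        # Create an ISO9660 filesystem labeled 'cidata' holding exactly the
        # two files the NoCloud source looks for: user-data and meta-data.
        subprocess.check_call([
            "genisoimage", "-output", out, "-volid", "cidata",
            "-joliet", "-rock", user_data_path, meta_data_path,
        ])

    build_seed_iso("user-data", "meta-data")
The resulting image can then be attached to the guest as a CD or disk.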
|
|
This copyright change reflects previous changes that Juerg made for pylint and
pep8 cleanups.
From: Juerg Haefliger <juerg.haefliger@hp.com>
Date: Mon, 16 Jan 2012 10:45:12 +0100
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
This pulls in the named patch for LP: #914739 with a few other changes.
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
single line)
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
If there is no local-hostname, then the base DataSource will
attempt to resolve it. Having a default here meant that the
default would be taken as truth.
|
|
In the case where a seedfrom value was given on the command line or in the
config file, we were timing out after 2 seconds on the connection. That timeout
was put in place to support "probing" for sources, but here seedfrom is
explicitly given.
So, in that case, do a urllib open without a timeout value. Looking at the
source code, the default timeout is 'socket._GLOBAL_DEFAULT_TIMEOUT', but
rather than importing that and using it, I will call without a timeout value.
LP: #812646
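A minimal sketch of the distinction being made (using urllib2; the helper
name and the probe timeout value here are illustrative only):
    import urllib2

    PROBE_TIMEOUT = 2  # short timeout used only when probing for sources

    def read_seed_url(url, probing=False):
        if probing:
            # fail fast while probing for possible data sources
            return urllib2.urlopen(url, timeout=PROBE_TIMEOUT).read()
        # seedfrom was given explicitly: call without a timeout so the
        # socket module's global default applies instead
        return urllib2.urlopen(url).read()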
|
|
Previous logging was getting 'None' set in the DataSource collections.
Thus, 'log.debug' would throw an error. I think it is proper to pull in
the base cloudinit's log.
|
|
After adding the 'log' element to the DataSource class, pickling would
fail with
  TypeError: can't pickle file objects
Instead of having each object hold a log reference, use the one from
'DataSource.log' and have that set by cloudinit.
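A rough sketch of that pattern (the names below are illustrative, not the
exact cloud-init code):
    import logging
    import pickle

    class DataSource(object):
        # Class-level logger, set by cloudinit. Because it lives on the
        # class rather than in an instance's __dict__, it is not captured
        # when an instance is pickled.
        log = logging.getLogger("cloudinit")

        def __init__(self, seed=None):
            self.seed = seed  # instance state that pickles cleanly

        def get_data(self):
            self.log.debug("reading seed from %s", self.seed)
            return self.seed is not None

    blob = pickle.dumps(DataSource(seed="/var/lib/cloud/seed"))  # no file objects pickled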
|
|
|
|
The DataSources that are loaded are now controlled entirely via the
configuration file value 'datasource_list', like:
  datasource_list: [ "NoCloud", "OVF", "Ec2" ]
Each item in that list is a "DataSourceCollection". For each item
in the list, cloudinit will attempt to load:
  cloudinit.DataSource<item>
and, failing that,
  DataSource<item>
The module is required to have a method named 'get_datasource_list'
in it that takes a single list of "dependencies" and returns
a list of Python classes inside the collection that can run needing
only those dependencies.
The dependencies are defined in DataSource.py. Currently:
  DEP_FILESYSTEM = "FILESYSTEM"
  DEP_NETWORK = "NETWORK"
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM], then DataSourceOVF returns a single-item list with a
reference to the 'DataSourceOVF' class.
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM, DEP_NETWORK], it will return a single-item list
with a reference to 'DataSourceOVFNet'.
cloudinit will then instantiate the class and call its 'get_data' method.
If the get_data method returns True, then cloudinit selects that class as
the chosen DataSource.
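A sketch of what a datasource module's hook could look like under this
scheme (the module and class names below are placeholders, and the exact
structure is only an approximation of the described contract):
    # hypothetical module: DataSourceExample.py
    from DataSource import DataSource, DEP_FILESYSTEM, DEP_NETWORK

    class DataSourceExample(DataSource):
        def get_data(self):
            return False  # would read its config from the local filesystem

    class DataSourceExampleNet(DataSourceExample):
        def get_data(self):
            return False  # may additionally use the network

    # (class, dependencies it needs) pairs in this collection
    datasources = [
        (DataSourceExample, (DEP_FILESYSTEM,)),
        (DataSourceExampleNet, (DEP_FILESYSTEM, DEP_NETWORK)),
    ]

    def get_datasource_list(depends):
        # return the classes whose needs are all satisfied by 'depends'
        return [cls for (cls, needs) in datasources
                if all(d in depends for d in needs)]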
|
|
- cloud_config and scripts now live in the instance directory
- cachedir is now more correctly named 'seeddir'
|
|
This allows the user to specify portions of the cloud-config
system config on the kernel command line. Values found on the
kernel command line take precedence over those in the system config.
The format is:
  cc:[ ]<yaml content here> [end_cc]
Where:
  'cc:' indicates the beginning of cloud config syntax
  [ ] is optional whitespace following it (which will be trimmed)
  <yaml content here>:
    this content is passed untouched to yaml
  end_cc:
    this is optional. If no 'end_cc' tag is found, all data from
    the begin tag to the end of the command line is consumed.
Multiple occurrences of cc:<data>end_cc will be joined with a
carriage return before being passed to yaml.
Any literal '\n' (backslash followed by a lowercase 'n') is converted
to a carriage return.
The following are examples:
cc: ssh_import_id: [smoser, kirkland]
cc: ssh_import_id: [smoser, bob]\\nruncmd: [ [ ls, -l ], echo hi ] end_cc
cc:ssh_import_id: [smoser] end_cc cc:runcmd: [ [ ls, -l ] ] end_cc
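A rough sketch of that parsing rule (this illustrates the behavior described
above, not the actual cloud-init parser; the join uses a newline rather than
a carriage return for simplicity):
    def cmdline_cloud_config(cmdline):
        # Collect every cc:<data>[end_cc] chunk, join the chunks, and expand
        # literal '\n' sequences, yielding text that can be handed to yaml.
        chunks = []
        rest = cmdline
        while 'cc:' in rest:
            rest = rest.split('cc:', 1)[1]
            if 'end_cc' in rest:
                chunk, rest = rest.split('end_cc', 1)
            else:
                chunk, rest = rest, ''
            chunks.append(chunk.strip())
        return '\n'.join(chunks).replace('\\n', '\n')

    print(cmdline_cloud_config(
        'root=/dev/vda cc:ssh_import_id: [smoser] end_cc cc:runcmd: [ [ ls, -l ] ] end_cc'))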
|
|
LP: #617400
|
|
This just records in 'self.seedfrom' each of the locations that
seed data was read from.
|
|
get_data was returning True before it set self.user_data_raw and
self.user_data.
|
|
Use read_optional_seed in DataSourceEc2 and DataSourceNoCloud.
Change parse_cmdline_data to fill a dictionary that is supplied by the
caller. It then returns strictly True or False based on whether
or not it was specified on the cmdline.
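A sketch of the calling convention described here (the 'ds=<name>;key=value'
syntax and the key names below are assumptions for illustration, not a copy
of the real function):
    def parse_cmdline_data(ds_id, fill, cmdline):
        # Populate the caller-supplied 'fill' dict from any key=value pairs
        # attached to "ds=<ds_id>" and return strictly True or False based
        # on whether this datasource was named on the command line.
        for tok in cmdline.split():
            if tok == "ds=%s" % ds_id or tok.startswith("ds=%s;" % ds_id):
                for kv in tok.split(";")[1:]:
                    if "=" in kv:
                        key, val = kv.split("=", 1)
                        fill[key] = val
                return True
        return False

    md = {}
    if parse_cmdline_data("nocloud-net", md,
                          "ro root=/dev/vda ds=nocloud-net;seedfrom=http://example.com/i-abcdefg/"):
        print(md.get("seedfrom"))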
|
|
The new classes 'DataSourceNoCloud' and 'DataSourceNoCloudNet'
implement a way to get data from the filesystem, or (very minimal)
data from the kernel command line. This allows the user to seed data to
these sources.
There are now 2 "cloud-init" jobs: 'cloud-init-local', which runs on
  mounted MOUNTPOINT=/
and 'cloud-init', which runs on
  start on (mounted MOUNTPOINT=/ and net-device-up IFACE=eth0 and
            stopped cloud-init-local)
The idea is that cloud-init-local can actually function without the network.
The last thing in this commit is "uncloud-init".
This tool can be invoked as 'init=/usr/lib/cloud-init/uncloud-init'.
It will "uncloudify" things in the image, generally making it easier
to use in a simpler environment, and then it will exec /sbin/init.
|