Age | Commit message | Author |
|
|
TODO: maybe just move them to utils?
|
|
2. Clean up main __init__ file: shell additions, constants usage, os.path usage.
|
|
The reason for moving this from cloudinit/__init__.py was that it
was running too late there.
The cloudinit.parsed_cfgs variable was already filled by cloud-init.py's
reading of cloud config; I'm sure I had done this so that it would not have to
re-parse configs.
I think the right way to handle this is to move that logic back to
cloudinit/__init__.py and add a function like 'reread_configs()'
that would re-read all relevant configs and re-set up logging.
That seemed too error prone to attempt at the moment, with limited time.
|
|
Instead of 'MaaS' or 'Maas', use 'MAAS' consistently.
The only remaining non-'MAAS' spellings are all lower case.
|
|
If user-data is supplied that is not multipart and is unhandled, then
log a warning. A warning will, by default, get to the console, so the user
can see it even if they cannot get into the instance. If they don't see
it there, it will still be available in the cloud-init log.
|
|
most types
|
|
This branch also adds tests for part-handler registration and part-handler
handling.
|
|
This copyright change reflects previous changes that Juerg made for pylint and
pep8 cleanups.
From: Juerg Haefliger <juerg.haefliger@hp.com>
Date: Mon, 16 Jan 2012 10:45:12 +0100
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
This pulls in the named patch for LP: #914739 with a few other changes.
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
single line)
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
outside __init__)
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
Replace the superseded builtin functions 'filter' and 'map' with
list comprehensions.
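For illustration, one shape such a rewrite takes (a made-up example, not a
line from the patch):
  contents = " a \n\n b \n"
  # old style: lines = filter(None, map(str.strip, contents.splitlines()))
  # equivalent list comprehension:
  lines = [part.strip() for part in contents.splitlines() if part.strip()]
  # lines == ['a', 'b']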
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
argument)
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
consume_userdata should really run always, rather than once per instance.
The documentation says that boothooks were on their own for per-instance
handling, but since this routine was only being called once, they would only
get called once.
This modifies the behavior to be:
user_script: per_always
cloud_config: per_always
upstart_job: per_instance
cloud_boothook: per_always
In order not to break existing part handlers, which expect to be called only
once per instance, this adds a 'handler_version' item in a handler that can
indicate its version (currently 1 or 2). If it is 2, then the handler will be
passed the frequency (per-instance or per-always) with which it is being run.
That way the handler can differentiate between the two.
This also makes 'bootcmd' run every boot. That should be changeable in
cloud-config, though, so users who don't like the behavior can modify it.
LP: #819507
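As a rough illustration of the interface described above (a sketch, not code
from this branch; the module name and mime type are made up):
  handler_version = 2  # declare that this handler accepts the frequency argument

  def list_types():
      # mime types this handler wants to handle (hypothetical example type)
      return ["text/x-example-part"]

  def handle_part(data, ctype, filename, payload, frequency):
      # '__begin__' and '__end__' mark the start and end of processing
      if ctype in ("__begin__", "__end__"):
          return
      # with handler_version = 2 the frequency (per-instance or per-always)
      # is passed in, so the handler can tell the two apart
      print("handling %s with frequency %s" % (filename, frequency))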
|
|
Thanks to Adam Gandalman and Marc Cluet for this fix.
LP: #812539
|
|
Previously, when cloud-config was ready, cloud-init would emit an
upstart event with:
initctl emit cloud-config
Now, that command is configurable via the 'cc_ready_cmd' value in
cloud.cfg or user data. The default behavior is not changed.
LP: #785551
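For example, user data or cloud.cfg might carry something along these lines
(a sketch of the shape only; the actual default and accepted forms live in
cloud.cfg):
  cc_ready_cmd: [ initctl, emit, cloud-config ]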
|
|
If user input is consumed as a user-script, a boothook, or an upstart
job and appears to be dos-formatted, then change it to unix format.
LP: #744965
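A minimal sketch of that conversion (a hypothetical helper, not the actual
cloud-init code):
  def dos2unix(contents):
      # consider the input dos-formatted if its first line ends in CRLF
      first_line = contents.split('\n', 1)[0]
      if not first_line.endswith('\r'):
          return contents
      return contents.replace('\r\n', '\n')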
|
|
LP: #739694
|
|
This option allows the user to specify manual cleaning of the
/var/lib/cloud/instance/ link, for a data source that might not be present on
every boot.
|
|
Previously, the 'log' in the DataSource collections was getting set to
'None', so 'log.debug' would throw an error. I think it is proper to pull in
the base cloudinit's log.
|
|
A bug caused user scripts to get stored in
/var/lib/cloud/instance/scripts/<instance-id>/,
which meant they would not get run by 'run-user-scripts'.
LP: #711480
|
|
After adding the 'log' element to the DataSource class, pickling would
fail with
TypeError: can't pickle file objects
Instead of having each instance carry its own log reference, use the
class-level 'DataSource.log' and have that set by cloudinit.
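A rough sketch of that pattern (simplified; not the actual class):
  import logging

  class DataSource(object):
      # class-level logger, set once by cloudinit; instances carry no
      # file-backed logger of their own, so pickling them works again
      log = None

  # cloudinit assigns the shared logger at startup, e.g.:
  DataSource.log = logging.getLogger("cloudinit")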
|
|
add 'datasource' file to instance dir
|
|
Change /var/lib/cloud/instance/user-data-raw.txt.i to user-data.txt.i
|
|
The DataSources that are loaded are now controlled entirely via the
'datasource_list' configuration value, like:
datasource_list: [ "NoCloud", "OVF", "Ec2" ]
Each item in that list is a "DataSourceCollection". For each item
in the list, cloudinit will attempt to load:
cloudinit.DataSource<item>
and, failing that,
DataSource<item>
The module is required to have a function named 'get_datasource_list'
that takes a single list of "dependencies" and returns a list of Python
classes inside the collection that can run needing only those dependencies.
The dependencies are defined in DataSource.py. Currently:
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM], then DataSourceOVF returns a single-item list with a
reference to the 'DataSourceOVF' class.
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM, DEP_NETWORK], it will return a single-item list with a
reference to 'DataSourceOVFNet'.
cloudinit will then instantiate the class and call its 'get_data' method.
If get_data returns 'True', then cloudinit selects that class as the
DataSource.
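Put together, a DataSource module following this contract might contain
something like the following (a simplified sketch based on the description
above, not the actual module):
  DEP_FILESYSTEM = "FILESYSTEM"
  DEP_NETWORK = "NETWORK"

  class DataSourceOVF(object):
      def get_data(self):
          # return True when this source found usable data
          return False

  class DataSourceOVFNet(DataSourceOVF):
      pass

  # each class paired with the dependencies it needs in order to run
  datasources = [
      (DataSourceOVF, (DEP_FILESYSTEM,)),
      (DataSourceOVFNet, (DEP_FILESYSTEM, DEP_NETWORK)),
  ]

  def get_datasource_list(depends):
      # return the classes whose dependencies match exactly what the
      # caller says is available
      return [cls for (cls, deps) in datasources if set(deps) == set(depends)]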
|