Remove apply_filter from get_vendordata. I can't think of a good
reason to filter vendor-data per instance-id.
Remove has_vendordata and consume_vendordata.
has_vendordata would always be "true"; whether or not there is something
to operate on is determined by:
  if ds.vendordata_raw()
consume_vendordata is based entirely on config.
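A minimal sketch of the simplified flow, in Python; the class, function, and
config key names below are illustrative assumptions, not the actual
cloud-init API:

    # "has vendordata" reduces to a truthiness check on vendordata_raw(),
    # while consumption is decided entirely by configuration.
    class FakeDataSource:
        vendordata_raw = "#!/bin/sh\necho hello from the vendor\n"

    def should_consume_vendordata(ds, cfg):
        if not ds.vendordata_raw:   # nothing to operate on
            return False
        # no per-instance filtering; config alone decides (key name assumed)
        return cfg.get("vendor_data", {}).get("enabled", False)

    print(should_consume_vendordata(FakeDataSource(),
                                    {"vendor_data": {"enabled": True}}))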
|
Add vendordata.
Vendordata is a datasource-provided, userdata-like blob that is parsed
similarly to userdata, except at the user's pleasure.
cloudinit/config/cc_scripts_vendor.py: added vendor script cloud config
cloudinit/config/cc_vendor_scripts_per_boot.py: added vendor per-boot
   cloud config
cloudinit/config/cc_vendor_scripts_per_instance.py: added vendor
   per-instance cloud config
cloudinit/config/cc_vendor_scripts_per_once.py: added vendor per-once
   cloud config
doc/examples/cloud-config-vendor-data.txt: documentation of vendor-data
examples
doc/vendordata.txt: documentation of vendordata for vendors
(RENAMED) tests/unittests/test_userdata.py => tests/unittests/test_data.py:
   userdata test cases are now expanded to confirm that userdata takes
   precedence over vendor data.
bin/cloud-init: change instances of 'consume_userdata' to 'consume_data'
cloudinit/handlers/cloud_config.py: Added vendor script handling to default
cloud-config modules
cloudinit/handlers/shell_script.py: Added ability to change the path key to
   support vendor-provided 'vendor-scripts'. Defaults to 'script'.
cloudinit/helpers.py:
- Changed ConfigMerger to include handling of vendordata.
- Changed helpers to include paths for vendordata.
cloudinit/sources/__init__.py: Added helper functions for vendordata
 - get_vendordata_raw(): returns unprocessed vendordata
 - get_vendordata(): returns vendordata run through the userdata processor
 - has_vendordata(): indicates whether vendordata is present
 - consume_vendordata(): datasource directive indicating explicit
      user approval of vendordata consumption. Defaults to 'false'.
cloudinit/stages.py: Re-jiggered for handling of vendordata
- _initial_subdirs(): added vendor script definition
- update(): added self._store_vendordata()
- [ADDED] _store_vendordata(): store vendordata
- _get_default_handlers(): modified to allow for filtering
which handlers will run against vendordata
- [ADDED] _do_handlers(): moved logic from consume_userdata
to _do_handlers(). This allows _consume_vendordata() and
_consume_userdata() to use the same code path.
- [RENAMED] consume_userdata() to _consume_userdata()
- [ADDED] _consume_vendordata() for handling vendordata
- run after userdata to get user cloud-config
- uses ConfigMerger to get the configuration from the
instance perspective about whether or not to use
vendordata
- [ADDED] consume_data() to call _consume_{user,vendor}data
cloudinit/util.py:
- [ADDED] get_nested_option_as_list() used by cc_vendor* for
     getting a nested value from a dict, returned as a list
- runparts(): added 'exe_prefix' for running exe with a prefix,
used by cc_vendor*
config/cloud.cfg: Added vendor script execution as default
tests/unittests/test_runs/test_merge_run.py: changed consume_userdata() to
consume_data()
tests/unittests/test_runs/test_simple_run.py: changed consume_userdata() to
consume_data()
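A condensed sketch of the consume_data() split described above; the method
names follow this message, but the bodies and the 'vendor_data: enabled'
config key are simplified stand-ins:

    class Init:
        def __init__(self, userdata, vendordata, cfg):
            self.userdata, self.vendordata, self.cfg = userdata, vendordata, cfg

        def _do_handlers(self, data, kind):
            # shared code path for both user and vendor data
            print("handling %s: %r" % (kind, data))

        def _consume_userdata(self):
            self._do_handlers(self.userdata, "userdata")

        def _consume_vendordata(self):
            # runs after userdata; merged instance config gates consumption
            if self.vendordata and self.cfg.get("vendor_data", {}).get("enabled"):
                self._do_handlers(self.vendordata, "vendordata")

        def consume_data(self):
            self._consume_userdata()
            self._consume_vendordata()

    Init("#cloud-config\n", "#!/bin/sh\n",
         {"vendor_data": {"enabled": True}}).consume_data()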
|
In its current form it's still missing some modules, though.
Supported:
 - SSH keys
 - growpart
 - growfs
 - adduser
 - powerstate
|
When the base DataSource class set 'ds_cfg' from the specific
datasource's config, it would fail for DataSources named
'DataSourceFooNet' when we wanted to set configuration under 'Foo'.
For example, both DataSourceOpenNebula and DataSourceOpenNebulaNet want to
read datasource config from:
  sources:
    OpenNebula:
      foo: bar
But without this change, 'ds_cfg' would not be set up properly for
OpenNebulaNet.
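One way to implement the lookup (a hedged sketch; the helper name is
illustrative): strip the class-name noise so DataSourceOpenNebula and
DataSourceOpenNebulaNet both read config from sources["OpenNebula"].

    def ds_cfg_name(cls_name):
        name = cls_name
        if name.startswith("DataSource"):
            name = name[len("DataSource"):]
        if name.endswith("Net"):
            name = name[:-len("Net")]
        return name

    sources = {"OpenNebula": {"foo": "bar"}}
    for cls in ("DataSourceOpenNebula", "DataSourceOpenNebulaNet"):
        print(cls, "->", sources.get(ds_cfg_name(cls)))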
|
This was noticed when trying to use the
'nova.clouds.archive.ubuntu.com' mirror selection.
Because config-drive-v2 has a metadata entry of 'availability_zone', it
didn't get found by the availability_zone property in
cloudinit/sources/__init__.py.
LP: #1190431
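A sketch of the fallback described above (key names follow the message;
the property shape is an assumption): prefer the EC2-style spelling but
also accept config-drive-v2's 'availability_zone'.

    class DataSource:
        def __init__(self, metadata):
            self.metadata = metadata

        @property
        def availability_zone(self):
            return self.metadata.get("availability-zone",
                                     self.metadata.get("availability_zone"))

    print(DataSource({"availability_zone": "nova"}).availability_zone)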
|
This does a couple of things:
* separates out the 'normalize_public_keys' from the DataSource's get_public_ssh_keys
* uses that from config-drive datasource
* supports config drive v1 or v2 public-keys
* adds a test.
LP: #1077700
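A hedged sketch of the separated normalization step; the function name
follows this message, while the exact v1/v2 shapes handled here are
assumptions (v1 as a dict of name -> key, v2 as a list of keys or dicts):

    def normalize_public_keys(pubkey_data):
        keys = []
        if not pubkey_data:
            return keys
        if isinstance(pubkey_data, str):
            return pubkey_data.splitlines()
        if isinstance(pubkey_data, dict):      # config-drive v1 style
            pubkey_data = list(pubkey_data.values())
        for entry in pubkey_data:              # config-drive v2 style
            if isinstance(entry, dict):
                entry = entry.get("public_key", "")
            if entry:
                keys.append(entry.strip())
        return keys

    print(normalize_public_keys({"default": "ssh-rsa AAAA... user@host"}))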
|
1. Remove the usage of the path.join function
   now that all code should be going through
   the util file methods (and they can be
   mocked out as needed).
2. Adjust all occurrences of the above join
   function to either not use it or replace
   it with the standard os.path.join (which
   can also be mocked out as needed).
3. Fix pylint complaining about the
   tests folder 'helpers.py' not being found.
4. Add a pylintrc file that is used instead
   of the options hidden in the 'run_pylint'
   tool.
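The payoff of routing file access through the util methods is a single seam
to patch in tests. A minimal sketch using the standard library's mock; the
stand-in util object and load_file usage here are illustrative:

    from unittest import mock
    import types

    # a stand-in for cloudinit.util; the real module exposes load_file() etc.
    util = types.SimpleNamespace(load_file=lambda path: open(path).read())

    def read_hostname():
        return util.load_file("/etc/hostname").strip()

    with mock.patch.object(util, "load_file", return_value="testhost\n"):
        print(read_hostname())   # -> "testhost", no filesystem access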
|
Translate the device name to an actual device using
logic that will try the EC2 metadata (if available) or
'blkid' to find a corresponding label.
LP: #1062540
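A hedged sketch of that translation order; the helper name and the mapping
shape are illustrative, though the blkid invocation is standard:

    import subprocess

    def translate_device(name, ec2_mapping=None):
        # 1) EC2 metadata's block-device-mapping, if available,
        #    e.g. {"ephemeral0": "sdb"}
        if ec2_mapping and name in ec2_mapping:
            return "/dev/" + ec2_mapping[name]
        # 2) fall back to a filesystem-label lookup via blkid
        try:
            out = subprocess.check_output(
                ["blkid", "-t", "LABEL=%s" % name, "-o", "device"], text=True)
            return out.strip() or None
        except (OSError, subprocess.CalledProcessError):
            return None

    print(translate_device("ephemeral0", {"ephemeral0": "sdb"}))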
|
can contain filters that serve this purpose only, add in
the initial launch-index filter, and replace the code in
the datasource class that previously did this.
|
variable has a little more meaning and by default looks in
metadata for 'launch-index', with EC2 instead looking for
a different variable (thus allowing more datasources to just work).
|
before we look into the payload as well as make the skip
test a function that the datasource module can also use.
|
Filter userdata based on a launch-index (or leave userdata
alone if none is provided by the datasource). This
works by doing the following:
1. Adjust the userdata processor to attempt to
   inject a "Launch-Index" header into the message
   headers (by either taking a header that already exists
   or by looking into the payload to see if it exists
   there).
2. Adjust the get_userdata ds function to apply a filter
   on the returned userdata (defaulting to false) that
   will now use the datasource's get_launch_index value
   to restrict the 'final' message used in consuming
   user data (behavior is unchanged if no launch index
   exists).
3. Further down the line, processes that use the resultant
   userdata will now only see the parts for their own launch
   index (i.e. cloud-config will be restricted automatically,
   and so on) and are otherwise unaffected (although they can now
   ask the cloud object or the datasource for the launch index
   via the new ds method above).
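A sketch of the filtering idea on MIME parts: keep only parts whose
Launch-Index header matches this instance's index, leaving unmarked parts
alone. The header name follows this message; the rest is illustrative.

    from email.message import Message

    def filter_by_launch_index(parts, my_index):
        kept = []
        for part in parts:
            idx = part.get("Launch-Index")
            if idx is None or int(idx) == my_index:
                kept.append(part)
        return kept

    p1, p2 = Message(), Message()
    p1["Launch-Index"] = "0"
    p2["Launch-Index"] = "1"
    print(len(filter_by_launch_index([p1, p2], 0)))  # -> 1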
|
There are several changes here.
* Datasource now has an 'availability_zone' getter.
* Datasource has a convenience 'get_package_mirror_info' that calls
  the configured distro and passes it the availability-zone.
* distro has a get_package_mirror_info method.
* get_package_mirror_info returns a dict of name:mirror;
  this is to facilitate use of 'security' and 'primary' archives.
* this supports searching based on templates. Any template
  that references undefined values is skipped. These templates
  can contain 'availability_zone' (LP: #1037727).
* distro's mirrors can be arch-specific (LP: #1028501).
* rename_apt_lists supports the "mirror_info" rather than a single mirror.
* generate_sources_list supports mirror_info; as a result, the
  ubuntu mirrors reference '$security' rather than security (LP: #1006963).
* remove the DataSourceEc2-specific mirror selection, and instead
  rely on the above filtering and the fact that 'ec2_region' is only
  defined if the availability_zone looks like an EC2 AZ.
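An illustrative sketch of the template-based search: render each candidate
and skip any template that references an undefined value. The template
strings and substitution keys below are assumptions for demonstration.

    def search_mirrors(templates, subst):
        for tmpl in templates:
            try:
                yield tmpl % subst
            except KeyError:   # template references an undefined value: skip
                continue

    templates = ["http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/",
                 "http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/"]
    subst = {"availability_zone": "nova"}  # not an EC2 AZ: no ec2_region
    print(next(search_mirrors(templates, subst)))
    # -> http://nova.clouds.archive.ubuntu.com/ubuntu/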
|
1. This will allow a basically empty datasource to be
   activated (as the last datasource) when no other
   datasources work. This lets modules still run (if they
   can; a new function is added to the datasource so modules
   can check whether cloud-init is in this 'disconnected'
   state).
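A hypothetical minimal shape for such an empty datasource; the class and
predicate names here are illustrative, not the actual implementation:

    class DataSourceNone:
        userdata_raw = None
        metadata = {}

        def get_data(self):
            return True        # always "succeeds" so modules can still run

        def is_disconnected(self):
            return True        # modules can check for this state

    ds = DataSourceNone()
    print(ds.get_data(), ds.is_disconnected())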
|
2. Adjust comment on sources list from depends.
3. When 'writing' /etc/timezone, add a header that says it was created
   by cloud-init.
|
search module 'prefixes',
each with a potential set of required attributes.
2. Use this new importer to find the distro class, the userdata handler
   modules, the config modules, and the datasource modules; if none can
   be found, error out accordingly.
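A hedged sketch of the prefix-searching importer using importlib; the
function name and arguments are illustrative:

    import importlib

    def find_module(base_name, prefixes, required_attrs):
        found = []
        for prefix in prefixes:
            full = prefix + base_name if prefix else base_name
            try:
                mod = importlib.import_module(full)
            except ImportError:
                continue
            # accept only modules that expose the required attributes
            if all(hasattr(mod, a) for a in required_attrs):
                found.append(full)
        return found

    # e.g. locate the json module, requiring it to expose loads/dumps
    print(find_module("json", ["", "notreal."], ["loads", "dumps"]))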
|
2. Fixed up importing of modules to handle the failure case better.
   a. Also realized that with the importer class we don't have to
      reimport a module via getattr, so removed that.
|
This could and should be useful for unit testing.
|
appropriate annotations + metaclasses.
|
translation to Python 3.
|
Some of the cleanups were the following:
1.  Using standard (logged) utility functions for subprocess work, writing,
    reading files, and other filesystem/operating-system operations.
2.  Having distributions implement their own subclasses to handle system
    specifics (if applicable).
3.  Having a cloud wrapper that provides just the functionality we want to
    expose (cloud.py).
4.  Using a path class instead of globals for all cloud-init paths (it is
    configured via config).
5.  Removal of as much shared global state as possible (there should be
    none, minus a set of constants).
6.  Other various cleanups that remove transforms/handlers/modules from
    reading/writing/chmoding their own files.
    a. They should be using util functions to take advantage of the logging
       now enabled in those util functions (very useful for debugging).
7.  Urls being read and checked from a single module that serves this and
    only this purpose (+1 for code organization).
8.  Updates to log whenever a transform decides not to run.
9.  Ensure that whenever an exception is thrown (and possibly captured)
    the util.logexc function is called.
    a. For debugging and tracing, it is important not to just drop them on
       the floor.
10. Code shuffling into utils.py where it makes sense (and where it could
    benefit other code now or in the future).
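A sketch of the logexc() convention from item 9: captured exceptions are
logged with their traceback instead of being dropped. This is a minimal
stand-in for the real util.logexc helper, not its exact implementation.

    import logging

    LOG = logging.getLogger(__name__)

    def logexc(log, msg, *args):
        log.warning(msg, *args)
        log.debug(msg, *args, exc_info=True)  # full traceback at debug level

    try:
        raise OSError("disk full")
    except OSError:
        logexc(LOG, "Failed writing %s", "/tmp/file")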
|
here, as well as the other find-datasource function.
|
2. Clean up the main __init__ file: shell additions, constants usage,
   os.path usage.