to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
---
This fixes up many long lines to be < 80 chars and some other
pylint issues. pylint 1.1 (in trusty) is now complaining about
the lazy logging, so I'll clean that up when I touch things.
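For reference, a small illustration of the lazy-logging style that pylint's logging checker wants (the logger name and message here are made up):

    import logging

    LOG = logging.getLogger(__name__)
    count = 3

    # Lazy: arguments are passed through and only interpolated if the
    # record is actually emitted.
    LOG.debug("found %s datasources", count)

    # Eager: the string is formatted unconditionally; pylint 1.1's
    # logging checker flags this style.
    LOG.debug("found %s datasources" % count)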
---
vendordata.
Vendordata is a datasource-provided, userdata-like blob that is parsed
similarly to userdata, except at the user's discretion.
cloudinit/config/cc_scripts_vendor.py: added vendor script cloud config
cloudinit/config/cc_vendor_scripts_per_boot.py: added vendor per-boot
    cloud config
cloudinit/config/cc_vendor_scripts_per_instance.py: added vendor
    per-instance cloud config
cloudinit/config/cc_vendor_scripts_per_once.py: added vendor per-once
    cloud config script
doc/examples/cloud-config-vendor-data.txt: documentation of vendor-data
    examples
doc/vendordata.txt: documentation of vendordata for vendors
(RENAMED) tests/unittests/test_userdata.py => tests/unittests/test_data.py:
    userdata test cases are now expanded to confirm precedence over vendor
    data.
bin/cloud-init: change instances of 'consume_userdata' to 'consume_data'
cloudinit/handlers/cloud_config.py: Added vendor script handling to default
cloud-config modules
cloudinit/handlers/shell_script.py: added the ability to change the path
    key to support vendor-provided 'vendor-scripts'; defaults to 'script'.
cloudinit/helpers.py:
- Changed ConfigMerger to include handling of vendordata.
- Changed helpers to include paths for vendordata.
cloudinit/sources/__init__.py: added helper functions for vendordata
    - get_vendordata_raw(): returns vendordata unprocessed
    - get_vendordata(): returns vendordata through the userdata processor
    - has_vendordata(): indicates whether vendordata is present
    - consume_vendordata(): datasource directive for indicating explicit
      user approval of vendordata consumption; defaults to 'false'
      (see the sketch after this entry)
cloudinit/stages.py: Re-jiggered for handling of vendordata
- _initial_subdirs(): added vendor script definition
- update(): added self._store_vendordata()
- [ADDED] _store_vendordata(): store vendordata
- _get_default_handlers(): modified to allow for filtering
which handlers will run against vendordata
- [ADDED] _do_handlers(): moved logic from consume_userdata
to _do_handlers(). This allows _consume_vendordata() and
_consume_userdata() to use the same code path.
- [RENAMED] consume_userdata() to _consume_userdata()
- [ADDED] _consume_vendordata() for handling vendordata
- run after userdata to get user cloud-config
- uses ConfigMerger to get the configuration from the
instance perspective about whether or not to use
vendordata
- [ADDED] consume_data() to call _consume_{user,vendor}data
cloudinit/util.py:
- [ADDED] get_nested_option_as_list(): used by cc_vendor* to get a
    nested value from a dict, returned as a list
- runparts(): added 'exe_prefix' for running executables with a prefix;
    used by cc_vendor*
config/cloud.cfg: Added vendor script execution as default
tests/unittests/test_runs/test_merge_run.py: changed consume_userdata() to
consume_data()
tests/unittests/test_runs/test_simple_run.py: changed consume_userdata() to
consume_data()
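A minimal, self-contained sketch of the consume_data() flow described
above. The method names follow this entry, but the stand-in class and the
'vendor_data'/'enabled' config keys are illustrative assumptions, not the
project's actual code:

    class Init:
        def __init__(self, userdata, vendordata, instance_cfg):
            self.userdata = userdata
            self.vendordata = vendordata
            self.cfg = instance_cfg

        def _do_handlers(self, data, kind):
            # Shared code path: user and vendor data are walked by the
            # same handler machinery (filtered per-kind in the real code).
            print("running handlers for %s: %r" % (kind, data))

        def _consume_userdata(self):
            self._do_handlers(self.userdata, "userdata")

        def _consume_vendordata(self):
            # Vendordata defaults to off; the instance must explicitly
            # opt in via its merged configuration.
            vcfg = self.cfg.get("vendor_data", {})
            if not vcfg.get("enabled", False):
                return
            self._do_handlers(self.vendordata, "vendordata")

        def consume_data(self):
            # Userdata runs first so the user's cloud-config (including
            # any vendordata opt-in) is seen before vendordata is used.
            self._consume_userdata()
            self._consume_vendordata()

    init = Init("#cloud-config\n", "#!/bin/sh\necho vendor\n",
                {"vendor_data": {"enabled": True}})
    init.consume_data()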
---
Instead of being restricted to gzip-compressing only the overall MIME
segment or individually included segments, allow each MIME segment to
be gzip-compressed.
LP: #1203203
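A sketch of what per-segment decompression can look like (the helper name
is made up; only the gzip magic-byte check is standard): when walking MIME
parts, gunzip any payload that carries the gzip magic bytes:

    import gzip
    import io

    def maybe_gunzip(part):
        # Decode the MIME part's payload, then gunzip it if (and only
        # if) it starts with the gzip magic number.
        payload = part.get_payload(decode=True) or b""
        if payload[:2] == b"\x1f\x8b":
            payload = gzip.GzipFile(fileobj=io.BytesIO(payload)).read()
        return payload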
---
LP: #1065116
---
its original content type said it is, make sure we set the new value;
also unsure whether the old top-level message should have the same
header (which will flip-flop).
---
indexes (since they will be handled beforehand) and fix
the types being checked on the root of the archive format
to be a tuple instead of a list (which oddly causes complaints).
---
variable has a little more meaning; by default look in metadata for
'launch-index' and have ec2 instead look for a different variable
(thus allowing more datasources to just work).
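A sketch of that approach under stated assumptions (the class and
attribute names are made up; 'ami-launch-index' is EC2's metadata field
name):

    class DataSource:
        # Default metadata key; subclasses may point at their own field.
        launch_index_key = "launch-index"

        def __init__(self, metadata):
            self.metadata = metadata

        @property
        def launch_index(self):
            return self.metadata.get(self.launch_index_key)

    class DataSourceEc2(DataSource):
        launch_index_key = "ami-launch-index"  # EC2's own field

    print(DataSourceEc2({"ami-launch-index": "0"}).launch_index)  # 0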
---
'launch-index' key, we copy that key over to the right header (which
is then used later when assigning the 'real' header as the message is
attached).
---
before we look into the payload as well as make the skip
test a function that the datasource module can also use.
---
userdata based on a launch-index (or leave userdata alone if none is
provided by the datasource). This works by doing the following
(a sketch follows this list):
1. Adjust the userdata processor to attempt to inject a "Launch-Index"
   header into the message's headers (by either taking a header that
   already exists or by looking into the payload to see if it exists
   there).
2. Adjust the get_userdata ds function to apply a filter on the
   returned userdata (defaulting to false) that will now use the
   datasource's get_launch_index value to restrict the 'final' message
   used in consuming user data (the same behavior as before if none
   exists).
3. Further down the line, processes that use the 'resultant' userdata
   will now only see the parts for their own launch index (i.e.
   cloud-config will be restricted automatically and so on) and are
   otherwise unaffected (although they can now ask the cloud object or
   the datasource for its launch index via the new ds method above).
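An illustrative sketch of step 2's filtering (the function name and the
implementation are assumptions; the 'Launch-Index' header follows this
entry):

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    def filter_by_launch_index(root, launch_index):
        # No datasource-provided index means no filtering at all.
        if launch_index is None:
            return root
        filtered = MIMEMultipart()
        for part in root.get_payload():
            idx = part.get("Launch-Index")
            # Unmarked parts are kept; marked parts must match our index.
            if idx is None or int(idx) == int(launch_index):
                filtered.attach(part)
        return filtered

    root = MIMEMultipart()
    for i in (0, 1):
        part = MIMEText("echo instance %d" % i, "x-shellscript")
        part["Launch-Index"] = str(i)
        root.attach(part)
    print(len(filter_by_launch_index(root, 1).get_payload()))  # -> 1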
---
make pep8 is now silent on precise's pep8 (0.6.1-2ubuntu2).
---
other code to have user/group parsing in util instead of in stages.py;
renames decomp_str to decomp_gzip since that name is more meaningful
(gzip is all it can decompress).
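A sketch of a decomp_gzip-style helper consistent with that rename (the
pass-through fallback for non-gzip input is an assumption):

    import gzip
    import io

    def decomp_gzip(data):
        try:
            return gzip.GzipFile(fileobj=io.BytesIO(data)).read()
        except (IOError, OSError):
            # Not gzip data; return it unchanged.
            return data

    print(decomp_gzip(gzip.compress(b"hello")))  # b'hello'
    print(decomp_gzip(b"hello"))                 # b'hello' (unchanged)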
---
2. Fix a pylint warning in userdata about an unused variable.
---
easily
---
turn it back off if one of those is encountered.
---
a. This allows more properties to be added as needed in the future,
   instead of being very restrictive.
2. Fix up all uses of url reading to use this new response object.
3. Also fix up user data inclusion so that if no actual response
   occurs, the url content is not processed further.
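A minimal sketch of the response-object idea (the UrlResponse class and
its fields are assumptions, not the project's actual API):

    import urllib.request

    class UrlResponse:
        # Bundle status, contents and headers so callers can check the
        # result before processing it.
        def __init__(self, status, contents, headers):
            self.status = status
            self.contents = contents
            self.headers = headers

        def ok(self):
            return 200 <= self.status < 300

    def read_url(url):
        with urllib.request.urlopen(url) as resp:
            return UrlResponse(resp.status, resp.read(),
                               dict(resp.headers))

    # Only process content when a real response occurred, e.g.:
    #   resp = read_url("http://example.com/user-data")
    #   if resp.ok():
    #       process(resp.contents)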
---
for case-insensitive include statements to be used.
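For illustration, a case-insensitive match for include directives
('#include' and '#include-once' are cloud-init's directive names; the
regex itself is a sketch):

    import re

    INCLUDE_RE = re.compile(r"^#(include|include-once)\b", re.IGNORECASE)

    print(bool(INCLUDE_RE.match("#INCLUDE http://example.com/part")))  # True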
---
being in here.
This class will now contain just the user data parsing, leaving
handler execution to happen elsewhere.
---
appropriate annotations + metaclasses.
---
2. Add logging of unknown content types and separate the payload-logging
detail message into its own function.
---
for now it seems to make organizational sense to put it here.
---
Now there exists a class which processes the user data down to a MIME
message, plus some small utility methods to walk it and determine types.
A large amount of content-type cleanup and constant creation.
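A small sketch of such walk-and-determine-types helpers (the function
name is made up):

    from email import message_from_string

    def walk_parts(userdata):
        # Parse user data into a MIME message and yield each leaf
        # part's content type with its decoded payload.
        msg = message_from_string(userdata)
        for part in msg.walk():
            if part.is_multipart():
                continue
            yield part.get_content_type(), part.get_payload(decode=True)

    raw = "Content-Type: text/x-shellscript\n\n#!/bin/sh\necho hi\n"
    for ctype, payload in walk_parts(raw):
        print(ctype, payload)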
---
importing, constant usage.
1. Move all datasources to a new sources directory.
2. Rename some files to be more consistent with python file/module
   naming.