Age | Commit message | Author |
|
dmidecode and /sys/class/dmi/id/* use different names for the same
information. This modifies the logic in util.read_dmi_data to map from
dmidecode names to sysfs names before looking in sysfs.
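A minimal sketch of that mapping idea, assuming a small illustrative key table
(the function name and the subset of keys below are not the real ones):

    import os

    # Illustrative subset: dmidecode-style key -> /sys/class/dmi/id file name.
    DMIDECODE_TO_SYSFS = {
        'system-uuid': 'product_uuid',
        'system-serial-number': 'product_serial',
        'system-product-name': 'product_name',
    }

    def read_dmi_data_via_sysfs(key):
        """Read a dmidecode-style key by mapping it to its sysfs name first."""
        sysfs_name = DMIDECODE_TO_SYSFS.get(key)
        if sysfs_name is None:
            return None
        path = os.path.join('/sys/class/dmi/id', sysfs_name)
        if not os.path.isfile(path):
            return None
        with open(path, 'r') as fp:
            return fp.read().strip()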
|
|
get_cmdline_url was passing a string to response.contents.startswith(),
where response.contents is now bytes.
This changes it to convert the input to text, and also to default to text.
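The Python 3 pitfall is that bytes.startswith() needs a bytes prefix; a rough
sketch of the decode-to-text approach (the function and parameter names here
are illustrative, not the actual signature):

    def starts_with_marker(contents, marker="#cloud-config"):
        """Check a fetched payload against a text marker.

        'contents' may be bytes (readurl now returns bytes); decode to
        text first so the str marker comparison works on Python 3.
        """
        if isinstance(contents, bytes):
            contents = contents.decode('utf-8', 'replace')
        return contents.startswith(marker)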
|
|
|
|
UrlResponse: biggest change... make readurl return bytes, leaving the
caller to decide what to do with it.
util: add load_tfile_or_url for loading a text file or url,
as read_file_or_url now returns bytes.
ec2_utils: all meta-data is text, remove non-obvious string translations.
DigitalOcean: adjust for ec2_utils.
DataSourceGCE, DataSourceMAAS: user-data is binary, other fields are text.
openstack.py: read paths without decoding to text. This is ok, as paths
other than user-data are json and load_json will handle them.
load_file still returns text, and that is what most things use.
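A sketch of what such a text-loading wrapper looks like; decode_binary is an
assumed helper, and read_file_or_url is taken from the surrounding module and
is assumed to return a response whose .contents attribute is bytes:

    def decode_binary(blob, encoding='utf-8'):
        """Return text, decoding 'blob' only if it is bytes."""
        if isinstance(blob, bytes):
            return blob.decode(encoding)
        return blob

    def load_tfile_or_url(*args, **kwargs):
        """Like read_file_or_url, but always return text (str)."""
        return decode_binary(read_file_or_url(*args, **kwargs).contents)

Callers that want text use the wrapper; callers that care about raw bytes
(user-data) keep using read_file_or_url directly.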
|
|
|
|
It is admittedly not clear, but 'exc' should be defined if
mountpoint is not.
|
|
them).
|
|
|
|
|
|
|
|
- Refactor "fully" decoding the payload of a text/* part. In Python 3,
decode=True only means to decode according to Content-Transfer-Encoding, not
according to any charset in the Content-Type header. So do that.
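A hedged sketch of what "fully" decoding means here, using the standard
email.message API (the helper name and the fallback behavior are illustrative):

    def fully_decoded_payload(part):
        """Return the payload of a MIME part, decoded all the way to text.

        get_payload(decode=True) only undoes the Content-Transfer-Encoding
        (base64, quoted-printable, ...) and hands back bytes on Python 3.
        For text/* parts, also decode those bytes using the charset from
        the Content-Type header, falling back to utf-8.
        """
        cte_payload = part.get_payload(decode=True)
        if (part.get_content_maintype() == 'text'
                and isinstance(cte_payload, bytes)):
            charset = part.get_content_charset() or 'utf-8'
            return cte_payload.decode(charset, errors='replace')
        return cte_payload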
|
|
- Refactor both the base64 encoding and decoding into utility functions.
Also:
- Mechanically fix some other broken untested code.
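For the base64 refactor, a sketch of bytes/text-safe encode and decode helpers
(the names b64e/b64d are assumed for illustration):

    import base64

    def b64e(source):
        """base64-encode 'source' (text or bytes) and return text."""
        if not isinstance(source, bytes):
            source = source.encode('utf-8')
        return base64.b64encode(source).decode('utf-8')

    def b64d(source):
        """base64-decode 'source'; return text if it decodes cleanly,
        otherwise return the raw bytes."""
        decoded = base64.b64decode(source)
        try:
            return decoded.decode('utf-8')
        except UnicodeDecodeError:
            return decoded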
|
|
|
|
|
|
flight.
|
|
* In Py3, pass universal_newlines to subprocess.Popen()
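Without universal_newlines=True, Popen pipes hand back bytes on Python 3;
with it, communicate() returns text on both Python 2 and 3. A minimal sketch:

    import subprocess

    # universal_newlines=True makes communicate() return str, not bytes.
    proc = subprocess.Popen(['uname', '-r'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    out, err = proc.communicate()
    print(out.strip())  # text, not b'...'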
|
|
|
|
to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
|
|
|
|
|
|
|
|
--ignore was being called with ',E121,E...' rather than
'E121,E...'.
That resulted in odd behavior, missing the pep8 errors that are fixed
here.
|
|
Previously the usage of the yaml_dumps module was causing
any python unicode key and value to show up as:
'item': !!python/unicode "some string"
This was not very pretty...
Fix this by using safe_dumps (which is also a good thing to
use) with allow_unicode=True. Also create a tiny helper function
in the cc_debug module that does not include the yaml start and
end markers (since this module has its own header and footer).
Also includes a basic sanity test for this module.
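A sketch of the kind of header-free dump helper described here, assuming
PyYAML's safe_dump (the helper name is hypothetical):

    import yaml

    def _dump_without_markers(data):
        # safe_dump avoids python-specific tags such as !!python/unicode,
        # allow_unicode=True keeps unicode strings readable, and leaving
        # explicit_start/explicit_end off omits the '---'/'...' document
        # markers, since cc_debug writes its own header and footer.
        return yaml.safe_dump(data,
                              default_flow_style=False,
                              allow_unicode=True,
                              explicit_start=False,
                              explicit_end=False)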
|
|
Add the following adjustments to the chef template and module:
- Make it so that the chef directories can be provided (defaults
to the existing directories)
- Make the params much more configurable, and if a parameter is
provided in the chef configuration it will override existing template
parameters.
- Make the template skip lines if the values are None in the configuration
so that template lines can be removed if/when this is desirable.
- Allow the firstboot json path to be configurable (defaults to the
existing location).
- Add a basic set of tests to ensure that good things are happening.
- Add a helper function to tell if chef is already installed.
- Have the install routine not run chef after installing; instead have it
return a result telling the caller to run chef once installation completes.
- Use the generated_by() utility function to give the ruby template a
better header comment.
- Set special parameters after selecting the basic chef parameters.
- Allow whether chef runs after install, and the run arguments, to be
configured.
- Allow the omnibus url fetching retries to be configurable.
- Move the running of chef to its own helper function.
- Add module docs.
|
|
This busted logic causes the 'output' setting to be ignored,
and thus output is not written to /var/log/cloud-init-output.log.
LP: #1387340
|
|
Previously the usage of the yaml_dumps module was causing
any python unicode key and value to show up as:
'item': !!python/unicode "some string"
This was not very pretty...
Fix this by using safe_dumps (which is also a good thing to
use) with allow_unicode=True. Also create a tiny helper function
in the cc_debug module that does not include the yaml start and
end markers (since this module has its own header and footer).
Also includes a basic sanity test for this module.
|
|
|
|
|
|
|
|
Add the following adjustments to the chef template and module:
- Make it so that the chef directories can be provided (defaults
to the existing directories)
- Make the params much more configurable, and if a parameter is
provided in the chef configuration it will override existing template
parameters.
- Make the template skip lines if the values are None in the configuration
so that template lines can be removed if/when this is desirable.
- Allow the firstboot json path to be configurable (defaults to the
existing location).
- Add a basic set of tests to ensure that good things are happening.
|
|
|
|
|
|
|
|
|
|
This supports a list of inputs, and cleans up that list
for the platform-specific mount types. Basically,
mtype = None
means 'mount -t auto' or the equivalent for the platform,
and 'iso9660' means "iso type".
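A rough sketch of that cleanup (the function name is made up, and platform
handling is simplified to the 'auto' case):

    def sanitize_mtypes(mtype=None):
        # Accept None, a single string, or a list of strings, and return a
        # list of mount types to try.  None maps to 'auto', i.e. let mount
        # pick the type ('mount -t auto' or the platform equivalent).
        if mtype is None:
            return ['auto']
        if isinstance(mtype, (list, tuple)):
            return list(mtype)
        if isinstance(mtype, str):
            return [mtype]
        raise TypeError('unsupported mtype: %r' % (mtype,))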
|
|
|
|
|
|
|
|
util.log_time()'s return value was what was being sent to fork_cb. This meant
the resize did not run in parallel, and the call to fork_cb threw a traceback
(trying to call NoneType).
By permitting fork_cb to take kwargs, and using the correct method syntax,
this now forks and resizes in the background as appropriate.
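A self-contained illustration of the corrected pattern; the function and
device names below are made up for the example:

    import os
    import time

    def fork_cb(child_cb, *args, **kwargs):
        """Fork, and run child_cb(*args, **kwargs) in the child process."""
        fid = os.fork()
        if fid == 0:
            try:
                child_cb(*args, **kwargs)
                os._exit(0)
            except Exception:
                os._exit(1)

    def slow_resize(device, delay=2):
        # Stand-in for the real filesystem resize call.
        time.sleep(delay)
        print('resized %s' % device)

    # Correct: the callable and its kwargs are handed to fork_cb, which calls
    # it in the child.  The broken form, fork_cb(slow_resize('/dev/vda1')),
    # would run the resize in the foreground and pass its return value (None)
    # to fork_cb, which then fails trying to call it.
    fork_cb(slow_resize, device='/dev/vda1')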
|
|
This just removes '# pylint:' comments and other code
remnants of pylint.
|
|
Fixed all complaints from running "make pep8". Also version locked
pep8 in test-requirements.txt to ensure that pep8 requirements don't
change without an explicit commit.
|
|
Safer for cloud-init to not use lazy mode for unmount
|
|
|
|
|
|
The previous commit occurred because the selinux test was failing
in a schroot where there was no /etc/hosts.
Now, fix that test more correctly, and fix some bad assumptions in
the SeLinuxGuard.
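For illustration only, a guard of the sort described, which skips the selinux
work when the bindings are unavailable or the path does not exist (as with
/etc/hosts in a minimal schroot); this is a sketch, not the actual SeLinuxGuard:

    import os

    class SeLinuxGuardSketch(object):
        """Only touch selinux if it is enabled and the path exists."""

        def __init__(self, path, recursive=False):
            self.path = path
            self.recursive = recursive
            try:
                import selinux  # python bindings may not be installed
                self.selinux = selinux if selinux.is_selinux_enabled() else None
            except ImportError:
                self.selinux = None

        def __enter__(self):
            return self.selinux is not None

        def __exit__(self, exc_type, exc_value, traceback):
            if self.selinux is None:
                return
            # Previously this ran unconditionally; a missing path blew up.
            if os.path.lexists(self.path):
                self.selinux.restorecon(self.path, recursive=self.recursive)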
|
|
|
|
|
|
OpenStack has a unique derivative datasource
that is gaining usage. Previously the config
drive datasource provided part of this functionality,
as did the ec2 datasource, but since new
functionality is being added to openstack it
seems beneficial to combine the shared parts into
one datasource made just for handling openstack
deployments.
This patch factors out the common logic shared
between the config drive and the openstack
metadata datasource, places that in a shared
helper file, and then creates a new openstack
datasource that reads from the openstack metadata
service and refactors the config drive datasource
to use this common logic.
|
|
|
|
|