Defaulting to only trying once.

The DigitalOcean metadata service is an AWS-style service available
over HTTP via the link-local address 169.254.169.254. The specifics
of the API are documented at:
https://developers.digitalocean.com/metadata/
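
As a rough illustration of the kind of lookup such a datasource performs,
here is a minimal Python 3 sketch; the /metadata/v1 path and the "hostname"
item are taken from the documentation linked above, and the helper name is
invented here rather than cloud-init's actual reader code.

    import urllib.request

    MD_BASE = "http://169.254.169.254/metadata/v1"

    def read_metadata_item(path, timeout=5):
        # Single attempt; the caller decides whether and how to retry.
        url = "%s/%s" % (MD_BASE, path)
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8")

    # Only works from inside a droplet, e.g.:
    #   read_metadata_item("hostname")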

This supports a list of inputs and cleans up that list for the
platform-specific mount types. Basically, mtype = None means
'mount -t auto' (or the equivalent for the platform), and 'iso9660'
means "iso type".
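
A hypothetical sketch of that kind of normalization follows; the helper
name, the platform strings, and the FreeBSD special case are illustrative,
not cloud-init's actual mount code.

    def sanitize_mount_types(mtypes, platform="linux"):
        # Normalize requested filesystem types into what the platform's
        # mount command understands.  None means "let mount auto-detect",
        # i.e. 'mount -t auto' on Linux.
        cleaned = []
        for mtype in mtypes:
            if mtype in (None, "", "auto"):
                cleaned.append("auto" if platform == "linux" else None)
            elif mtype == "iso9660" and platform == "freebsd":
                # FreeBSD names the ISO 9660 filesystem 'cd9660'
                cleaned.append("cd9660")
            else:
                cleaned.append(mtype)
        return cleaned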

For now, this vendor-data handling is just added to openstack.
However, in an effort to allow sane handling of multi-part vendor-data
that is namespaced, we add openstack.convert_vendordata_json. It takes
whatever was loaded from vendordata and, if that is a dict, takes its
'cloud-init' key. This way the author can namespace cloud-init,
basically telling it to ignore everything else.
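
A minimal sketch of that rule is below; it only captures the namespacing
behaviour described here and is not the exact cloud-init function.

    def convert_vendordata_json(data):
        # If the loaded vendor-data is a dict, hand on only its
        # 'cloud-init' entry and ignore everything else; other types
        # pass through unchanged.
        if isinstance(data, dict):
            return data.get("cloud-init")
        return data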

We were checking for the presence of meta_data.json for each supported
metadata version. Instead, just check that /openstack is there. This
reduces the time to check on EC2 or any other cloud.
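
Roughly, the short-circuit looks like the sketch below; the function name
is invented and a seed layout of <dir>/openstack/<version>/meta_data.json
is assumed.

    import os

    def looks_like_openstack_seed(source_dir):
        # Bail out early if the top-level 'openstack' directory is
        # missing, instead of probing <version>/meta_data.json for
        # every supported version.
        return os.path.isdir(os.path.join(source_dir, "openstack"))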

Instead of taking a version that they should look for, the readers now
just select the highest supported version. There is definitely a use
case later for having version=, but nothing is using it now.
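
The selection amounts to something like the sketch below, assuming the
supported versions are kept in chronological order (as the OS_VERSIONS
comment mentioned later asks for); the version strings and names here are
illustrative.

    # Assumed to be ordered oldest-to-newest.
    SUPPORTED_VERSIONS = ("2012-08-10", "2013-04-04", "2013-10-17")

    def select_version(available):
        # Pick the newest version both sides understand.
        for version in reversed(SUPPORTED_VERSIONS):
            if version in available:
                return version
        return "latest"  # fall back to the provider's 'latest' alias

With available = {"2013-04-04", "latest"}, this picks "2013-04-04".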

Using a tuple for _versions was just not necessary. Fix the reference
to the undefined os_versions.

If something is broken, such as a built-in config or the code itself,
then logging a warning during the search for metadata is OK.

make pyflakes now passes.

This data will be treated the same as vendordata from other sources.

Updated read_config_drive: removed the unused version kwarg and used
the OS_VERSIONS tuple from the openstack helper to avoid hardcoding
versions. Added a comment to the tuple in helpers/openstack.py asking
for it to be kept in chronological order.

Instead of using this log message (which really isn't a failure), we
should just return the looked-up locations; then, if there really is
an error, the caller can handle the use of the looked-up locations as
they see fit.

vendor_data is guaranteed to be a dict if it exists; if it doesn't
exist, represent it as an empty dict to avoid having to check whether
it's a dict.
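
In code, that guarantee is essentially the following sketch; the helper
name is illustrative.

    def normalize_vendor_data(vendor_data):
        # Always return a dict so later code never re-checks the type.
        return vendor_data if isinstance(vendor_data, dict) else {}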

- Also utilizing the constants defined in
  cloudinit/sources/helpers/openstack.py for configdrive versions

This just removes '# pylint:' comments and other code remnants of
pylint.

- Upgrade configdrive to use 2013-10-17
- Fix issue with vendor_data.json parsing
Co-Authored-By: Paul Querna <pquerna@apache.org>

Fixed all complaints from running "make pep8". Also version-locked
pep8 in test-requirements.txt to ensure that pep8 requirements don't
change without an explicit commit.

This seems cleaner, as it avoids adding a duplicate '/'.

LP: #1316597

On systems with a ttyS1 and nothing attached, the read attempts that
the CloudSigma datasource would make would block. Also, add timeouts
for reading from and writing to the serial console.
LP: #1316475
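
With pyserial as the transport (an assumption here), adding those timeouts
looks roughly like this; the port name and the 30-second value are
illustrative.

    import serial  # pyserial

    def open_console(port="/dev/ttyS1", timeout=30):
        # Read and write timeouts keep a ttyS1 with nothing attached
        # from blocking forever.  write_timeout is the pyserial 3.x
        # spelling; older releases call it writeTimeout.
        return serial.Serial(port=port, timeout=timeout,
                             write_timeout=timeout)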

* do not run dmidecode on arm.
* line length
* comment that the 60-second timeout is expected
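
That guard amounts to something like this sketch; the function name and
the exact dmidecode key are illustrative, not cloud-init's actual check.

    import platform
    import subprocess

    def read_dmi_serial():
        # dmidecode reads SMBIOS/DMI tables, which ARM systems of that
        # era generally did not expose, so skip the call there.
        if platform.machine().lower().startswith("arm"):
            return None
        try:
            out = subprocess.check_output(
                ["dmidecode", "-s", "system-serial-number"])
            return out.decode("utf-8").strip()
        except (OSError, subprocess.CalledProcessError):
            return None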

Instead of just trying to see if userdata decodes and treating that as
the indication that it was encoded, the user must explicitly set this.
The "just try it" approach will fail in the case where the user had
another use for user-data and wanted a blob of data to pass through
unrecognized by cloud-init. In cases where automatic behavior can be
mistaken, and some users may be relying on the old behavior, it's best
to just require explicit use.
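
Conceptually the change is from "decode if it happens to decode" to
"decode only when told to", roughly as sketched below; the flag name is
illustrative, not the datasource's actual option.

    import base64

    def maybe_decode(userdata, b64_flag):
        # Decode only when the user explicitly asked for it, never
        # based on whether decoding happens to succeed.
        if not b64_flag:
            return userdata  # pass the blob through untouched
        return base64.b64decode(userdata)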

This was broken in the VendorData add.
LP: #1295223