|
Changes:
tox: bump the pylint version to 2.6.0 in the default run
Fix pylint 2.6.0 W0707 warnings (raise-missing-from)
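For context, W0707 flags a re-raise that discards the original exception;
the fix is explicit chaining with 'raise ... from ...'. A minimal
illustration (names invented):

    def load_port(raw):
        try:
            return int(raw)
        except ValueError as e:
            # W0707 fix: chain the original exception so its
            # traceback and context are preserved
            raise RuntimeError("invalid port: %s" % raw) from e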
|
|
We live in the future now.
|
|
* ec2: Add support for AWS IMDS v2 (session-oriented)
AWS now supports a new version of fetching Instance Metadata[1].
Update cloud-init's ec2 utility functions and update ec2 derived
datasources accordingly. For DataSourceEc2 (versus ec2-look-alikes)
cloud-init will issue the PUT request to obtain an API token for
the maximum lifetime and then all subsequent interactions with the
IMDS will include the token in the header.
If the API token endpoint is unreachable on the Ec2 platform, log a
warning and fall back to using IMDSv1, which does not use session
tokens when communicating with the Instance Metadata Service.
We handle read errors, typically seen if the IMDS is beyond one
network hop (IMDSv2 responses have a ttl=1), by setting the api token
to a disabled value and then using IMDSv1 paths.
To support token-based headers, ec2_utils functions were updated
to support custom headers_cb and exception_cb callback functions
so Ec2 could store or refresh API tokens in the event of a token
becoming stale.
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instance-metadata-v2-how-it-works
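For illustration, the session-oriented flow looks roughly like this
(a sketch against the documented IMDS endpoints, not cloud-init's
actual implementation):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # PUT obtains a session token for the maximum lifetime
    # (21600 seconds, i.e. 6 hours).
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        method="PUT")
    token = urllib.request.urlopen(req).read().decode("utf-8")

    # All subsequent IMDS interactions include the token header.
    req = urllib.request.Request(
        IMDS + "/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token})
    instance_id = urllib.request.urlopen(req).read().decode("utf-8")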
|
|
Add the following instance-data.json standardized keys (a hypothetical excerpt follows the list):
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
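A hypothetical 'v1' excerpt of instance-data.json, shown as the
equivalent Python dict (all values invented for illustration):

    v1 = {
        "_beta_keys": ["subplatform"],
        "public_ssh_keys": ["ssh-rsa AAAA... admin@example"],
        "platform": "ec2",
        "subplatform": "metadata (http://169.254.169.254)",
    }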
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DataSourceEc2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation; it persists no attributes
on the datasource instance and returns a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
|
|
The result of read_file_or_url on a file and on a url would differ
in behavior.
str(UrlResponse) would return UrlResponse.contents.decode('utf-8')
while
str(FileResponse) would return str(FileResponse.contents)
The difference being "b'foo'" versus "foo".
As part of the general goal of cleaning util, move read_file_or_url
into url_helper.
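The difference in isolation:

    contents = b'foo'
    contents.decode('utf-8')   # 'foo'     (UrlResponse behavior)
    str(contents)              # "b'foo'"  (FileResponse behavior)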
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
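A typical W0612 fix looks like this (illustrative):

    def parse_key(line):
        # Before: 'rest' was assigned but never used (W0612):
        #   key, rest = line.split(":", 1)
        # After: name the intentionally unused value '_'
        key, _ = line.split(":", 1)
        return key.strip()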
|
|
This stores a hash of the OAuth tokens as an 'id' for the maas
datasource. Since new instances get new tokens created, and those tokens
are written by curtin into the datasource system config, this provides
a way to identify a new "instance" (install).
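A minimal sketch of the idea, assuming the id is a hash over the
configured OAuth credentials (the exact fields and hash are
illustrative):

    import hashlib

    def maas_instance_id(consumer_key, token_key, token_secret):
        # New instances get new tokens, so this value changes
        # exactly when the machine is re-installed.
        blob = "|".join([consumer_key, token_key, token_secret])
        return hashlib.sha256(blob.encode("utf-8")).hexdigest()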
LP: #1712680
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive, the file is only readable by root.
Generally most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64-encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
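The subclass contract, sketched in minimal form (illustrative, not
the real class bodies):

    class DataSource(object):
        dsname = None  # subclasses must define an explicit name

        def _get_data(self):
            raise NotImplementedError(
                'Subclasses of DataSource must implement _get_data')

    class DataSourceExample(DataSource):
        dsname = 'Example'

        def _get_data(self):
            # crawl platform metadata, set self.metadata etc.
            return True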
|
|
This will change all instances of LOG.warn to LOG.warning as warn
is now a deprecated method. It will also make sure any logging
uses lazy logging by passing string format arguments as function
parameters.
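For example:

    import logging

    LOG = logging.getLogger(__name__)
    path = "/etc/hosts"  # illustrative

    # Before: deprecated method, and the string is formatted eagerly
    LOG.warn("failed to read %s" % path)

    # After: warning() with lazy formatting; the message is only
    # rendered if this log level is actually emitted
    LOG.warning("failed to read %s", path)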
|
|
This has been a recurring ask and we had initially just made the change to
the cloud-init 2.0 codebase. As the current thinking is we'll just
continue to enhance the current codebase, it's desirable to relicense to
match what we'd intended as part of the 2.0 plan here.
- put a brief description of license in LICENSE file
- put full license versions in LICENSE-GPLv3 and LICENSE-Apache2.0
- simplify the per-file header to reference LICENSE
- tox: ignore H102 (Apache License Header check)
Add license header to files that ship.
Reformat headers, make sure everything has vi: at end of file.
Non-shipping files do not need the copyright header,
but at the moment tests/ have it.
|
|
This just looks in one other maas related path for a config file.
The file '91_kernel_cmdline_url' is written by cloud-init when it
gets a cloud-config-url parameter.
Also, the config is now read even if a url is specified, to potentially
fill in credentials.
|
|
Add vendor-data support to maas, which will behave like openstack
vendor-data does. Data returned from maas must be yaml loadable.
Also update the main in DataSourceMAAS to "just work" on a maas
deployed system.
LP: #1612313
|
|
Update make check target to run pep8 and run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes
|
|
|
This would cause problems in the event that we actually had a bad
clock. We also add a retry in the main (for testing), to ensure that
the oauth timestamp fix takes effect.
LP: #1499869
|
|
The previous version was broken. The vital fixes here are:
* adding parsedate and oauth1 imports to url_helper
* fixing a skew_data usage that was intended to be self.skew_data
Additionally:
* reorder imports in url_helper
* fixes to python3 -m cloudinit.sources.DataSourceMaas
LP: #1488507
|
|
Oddly enough, the timestamp you pass into oauthlib must be None
or a string. If not, it raises a ValueError:
Only unicode objects are escapable. Got 1426021488 of type <class 'int'>
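A quick demonstration, assuming oauthlib's Client accepts a
timestamp override:

    import time
    import oauthlib.oauth1 as oauth1

    # timestamp must be None or a string; an int such as
    # int(time.time()) triggers the ValueError quoted above.
    client = oauth1.Client("x", timestamp=str(int(time.time())))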
|
|
clock skew
This functionality was introduced to fix LP: #978127, but was lost
while migrating cloud-init to python3.
|
|
In both python2 and python3,
this throws "'module' object has no attribute 'oauth1'":
$ python3 -c 'import oauthlib; oauthlib.oauth1.Client("x")'
While this works fine:
$ python3 -c 'import oauthlib.oauth1 as oauth1; oauth1.Client("x")'
|
|
UrlResponse: biggest change... make readurl return bytes, making the
user decide what to do with it.
util: add load_tfile_or_url for loading text file or url
as read_file_or_url now returns bytes
ec2_utils: all meta-data is text, remove non-obvious string translations
DigitalOcean: adjust for ec2_utils
DataSourceGCE, DataSourceMAAS: user-data is binary other fields are text.
openstack.py: read paths without decoding to text. This is ok, as paths
other than user-data are json and load_json will handle that.
load_file still returns text, and that is what most things use.
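For example, a caller that knows the payload is text now decodes
explicitly (a sketch; module layout as referenced in this log):

    from cloudinit.url_helper import readurl

    resp = readurl("http://169.254.169.254/latest/meta-data/")
    # readurl now returns bytes in resp.contents; decode where
    # text is expected:
    text = resp.contents.decode("utf-8")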
|
|
to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
|
|
A couple of things here (sketched below):
* do not re-try on user-data (404 means 'not here')
* re-generate headers on retry requests
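The shape of the fix, roughly (an illustrative sketch, not the
exact cloud-init code):

    import time
    import urllib.error
    import urllib.request

    def fetch(url, headers_cb, retries=3):
        # headers_cb re-generates headers (e.g. fresh oauth
        # timestamps) for every attempt, not just the first.
        for attempt in range(retries + 1):
            try:
                req = urllib.request.Request(url, headers=headers_cb(url))
                return urllib.request.urlopen(req).read()
            except urllib.error.HTTPError as e:
                if e.code == 404:
                    return None  # 404 means 'not here': do not re-try
                if attempt < retries:
                    time.sleep(1)
        return None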
LP: #1172742
|
|
Use only util methods for reading/loading/appending/peeking
at files, since it is likely that we will soon add a new
way of adjusting the root of files read; it is also useful
for debugging to track what is being read/written in a central
fashion.
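For example (load_file/write_file are cloud-init util helpers;
exact signatures may differ):

    from cloudinit import util

    # Centralized, logged file access instead of bare open()/read()
    contents = util.load_file("/etc/hostname")
    util.write_file("/tmp/hostname.copy", contents)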
|
|
This fixes (tested) bug 978127. The server was actually returning a 401,
not a 403. As such, the fix here was insufficient. This will now accept
either of those two error codes. I've also tested it by changing the clock
in the cloud-init upstart job with a stanza like the one below, and verifying
that we do see the problem and that it then resolves itself:
pre-start script
offset="10 minutes ago"
past=$(date -R --date "$offset")
date --set "$past" &&
echo ===== "set date to $past [$offset]" ===== ||
echo ===== "failed to set date to $past [$offset]" ====
end script
LP: #978127
|
|
In the event of a 403 (Forbidden) in oauth, try to set an 'oauth_clockskew'
variable. In future headers, use a time created by 'time.time() +
self.oauth_clockskew'. The idea here is that if the local time is bad (or even
if the server time is bad) we will essentially use something that should be
similar to the remote clock.
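A sketch of the skew computation, assuming the server's HTTP Date
header is used as the remote time reference:

    import time
    from email.utils import parsedate

    def compute_skew(response_date_header):
        # Difference between the server clock and ours; oauth
        # timestamps then use int(time.time()) + skew to
        # approximate the remote clock.
        remote = time.mktime(parsedate(response_date_header))
        return int(remote - time.time())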
This fixes LP: #978127.
LP: #978127
|
|
Use only util methods for reading/loading/appending/peeking
at files, since it is likely that we will soon add a new
way of adjusting the root of files read; it is also useful
for debugging to track what is being read/written in a central
fashion.
|
|
The main function, which was usable for debugging maas, was dropped during
the rework branch. I'm adding it back here as it is very useful. It could
possibly be implemented some other way, but this is good enough.
|
|
2. Adjust comment on sources list from depends
3. For the /etc/timezone 'writing', add a header that says created by cloud-init
|
|
a. This allows for more properties to be added as needed in the future, instead of being very restrictive.
2. Fix up all uses of the url reading to now use this new response object.
3. Also fix up user-data including, such that if no actual response occurs the url content is not further processed.
|
|
Some of the cleanups were the following:
1. Using standard (logged) utility functions for sub-process work, writing, reading files, and other file system/operating system operations
2. Having distributions implement their own subclasses to handle system specifics (if applicable)
3. Having a cloud wrapper that provides just the functionality we want to expose (cloud.py)
4. Using a path class instead of globals for all cloud init paths (it is configured via config)
5. Removal of as much shared global state as possible (there should be none, minus a set of constants)
6. Other various cleanups that remove transforms/handlers/modules from reading/writing/chmoding their own files.
a. They should be using util functions to take advantage of the logging that is now enabled in those util functions (very useful for debugging)
7. Urls being read and checked from a single module that serves this and only this purpose (+1 for code organization)
8. Updates to log whenever a transform decides not to run
9. Ensure whenever an exception is thrown (and possibly captured) that the util.logexc function is called (see the sketch after this list)
a. For debugging and tracing, it is important to not just drop exceptions on the floor.
10. Code shuffling into utils.py where it makes sense (and where it could serve a benefit for other code now or in the future)
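Item 9 in practice, as a minimal sketch (util.logexc as in
cloud-init's util module; LOG and the arguments are illustrative):

    import logging
    from cloudinit import util

    LOG = logging.getLogger(__name__)

    def write_config(path, contents):
        try:
            util.write_file(path, contents)
        except Exception:
            # Log with traceback instead of dropping the exception
            # on the floor, then re-raise for the caller.
            util.logexc(LOG, "Failed writing %s", path)
            raise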
|
|