This attempts to standardize unit test file location under tests/unittests/
such that any source file located at cloudinit/path/to/file.py may have a
corresponding unit test file at tests/unittests/path/to/test_file.py.
Noteworthy Comments:
====================
Four different duplicate test files existed:
test_{gpg,util,cc_mounts,cc_resolv_conf}.py
Each of these duplicate file pairs has been merged together. This is a
break in git history for these files.
The test suite appears to have a dependency on test order. Changing test
order causes some tests to fail. This should be rectified, but for now
some tests have been modified in
tests/unittests/config/test_set_passwords.py.
A helper class name starts with "Test", which causes pytest to try
executing it as a test case and emit warnings about the class having an
__init__(). Silenced by renaming the class.
# helpers.py is imported in many test files, import paths change
cloudinit/tests/helpers.py -> tests/unittests/helpers.py
# Move directories:
cloudinit/distros/tests -> tests/unittests/distros
cloudinit/cmd/devel/tests -> tests/unittests/cmd/devel
cloudinit/cmd/tests -> tests/unittests/cmd/
cloudinit/sources/helpers/tests -> tests/unittests/sources/helpers
cloudinit/sources/tests -> tests/unittests/sources
cloudinit/net/tests -> tests/unittests/net
cloudinit/config/tests -> tests/unittests/config
cloudinit/analyze/tests/ -> tests/unittests/analyze/
# Standardize tests already in tests/unittests/
test_datasource -> sources
test_distros -> distros
test_vmware -> sources/vmware
test_handler -> config # this contains cloudconfig module tests
test_runs -> runs
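For example, a test module that used the old helper location changes its
import like this (illustrative; CiTestCase is one of the helpers affected
by the helpers.py move):

    # Old import path (pre-move):
    #   from cloudinit.tests.helpers import CiTestCase
    # New import path (post-move):
    from tests.unittests.helpers import CiTestCase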
|
|
This introduces a way to log the dhclient error stream, and uses it for the Azure datasource (where we have a specific requirement for this data to be logged).
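A minimal sketch of the idea, using a plain subprocess call rather than
cloud-init's own process helpers (function and logger names here are
illustrative):

    import logging
    import subprocess

    LOG = logging.getLogger(__name__)

    def run_dhclient(cmd):
        # Capture dhclient's error stream and log it, so the data lands
        # in cloud-init.log instead of being discarded.
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.stderr:
            LOG.debug("dhclient error stream: %s", proc.stderr)
        return proc.returncode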
|
|
This fixes issues with closing brackets not matching the opening
bracket's line and continuation lines under-indented for hanging indent.
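For illustration, the two complaints look like this in practice (a
made-up call; the second form is the fix):

    def configure(name, enabled):
        return {name: enabled}

    # Flagged: continuation line under-indented for hanging indent, and
    # closing bracket does not match the opening bracket's line:
    # cfg = configure("dhcp4",
    #    True,
    #        )
    # Fixed:
    cfg = configure(
        "dhcp4",
        True,
    )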
|
|
This removes the use of variables named 'l', 'O', or 'I'. Generally
these were used in list comprehensions to iterate over lines.
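A hypothetical instance of the pattern being cleaned up:

    content = "a=1\nb=2"
    # Before: parts = [l.strip() for l in content.splitlines()]
    # 'l' is easily mistaken for '1' (flake8 flags this as E741).
    parts = [line.strip() for line in content.splitlines()]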
|
|
We want to set route-metrics such that NICs are configured with the priority that they are given in the network metadata that we receive from the IMDS. (This switches away from using MAC ordering.)
This also required the following test changes:
* reverse the sort order of the MACs in test data (so that they would trigger the bug being fixed)
* fix up the key names in `NIC2_MD` (which were under_scored instead of dash-separated)
* use a full interface dict (rather than a minimal one) for `TestConvertEc2MetadataNetworkConfig`
LP: #1876312
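A minimal sketch of the intended result over a simplified list of MACs
already in IMDS order (helper name hypothetical; the v2 keys are the
netplan ones):

    def v2_config_from_imds_order(macs):
        # NICs earlier in the IMDS metadata get a lower (higher-priority)
        # route metric, instead of relying on MAC sort order.
        ethernets = {}
        for index, mac in enumerate(macs):
            ethernets["eth%d" % index] = {
                "match": {"macaddress": mac},
                "dhcp4": True,
                "dhcp4-overrides": {"route-metric": 100 + index * 100},
            }
        return {"version": 2, "ethernets": ethernets}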
|
|
Add support for rendering secondary static IPv4/IPv6 addresses on
any NIC attached to the machine. In order to see secondary IP
addresses in Ec2 IMDS network config, cloud-init now reads metadata
version 2018-09-24. Instances on metadata services which do not support
this Ec2 API version will not get secondary IP addresses configured.
In order to discover secondary IP address config, cloud-init now parses
the local-ipv4s, ipv6s, subnet-ipv4-cidr-block and subnet-ipv6-cidr-block
metadata keys to determine the additional IPs and the appropriate subnet
prefix to set for a nic.
Also add the datasource config option apply_full_imds_network_config
which defaults to true to allow cloud-init to automatically configure
secondary IP addresses. Setting this option to false will tell
cloud-init to avoid setting up secondary IP addresses.
Also in this branch:
- Shift Ec2 datasource to emit network config v2 instead of v1.
LP: #1866930
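A minimal sketch of the prefix derivation, assuming the metadata values
have already been fetched (helper name hypothetical):

    import ipaddress

    def secondary_ips(local_ipv4s, subnet_ipv4_cidr_block):
        # Every address after the first in local-ipv4s is a secondary IP;
        # the prefix length comes from subnet-ipv4-cidr-block.
        prefix = ipaddress.ip_network(subnet_ipv4_cidr_block).prefixlen
        return ["%s/%d" % (ip, prefix) for ip in local_ipv4s[1:]]

    # secondary_ips(["10.0.0.10", "10.0.0.11"], "10.0.0.0/24")
    # -> ["10.0.0.11/24"]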
|
|
The EC2 Data Source needs to handle 3 states of the Instance
Metadata Service configured for a given instance:
1. HttpTokens : optional & HttpEndpoint : enabled
Either IMDSv2 or IMDSv1 can be used.
2. HttpTokens : required & HttpEndpoint : enabled
Calls to IMDS without a valid token (IMDSv1 or IMDSv2 with expired token)
will return a 401 error.
3. HttpEndpoint : disabled
The IMDS http endpoint will return a 403 error.
Previous work to support IMDSv2 in cloud-init handled case 1 and case 2.
This commit handles case 3 by bypassing the retry block when the IMDS
returns an HTTP status code >= 400 on the official AWS cloud platform.
It shaves 2 minutes when rebooting an instance that has its IMDS http token endpoint
disabled but creates some inconsistencies. An instance that doesn't set
"manual_cache_clean" to "True" will have its /var/lib/cloud/instance symlink
removed altogether after it has failed to find a datasource.
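A sketch of the bypass, shaped as a readurl-style exception callback (the
callback signature and exception attribute are illustrative):

    def imds_exception_cb(msg, exception):
        # On AWS proper a 4xx from the IMDS is authoritative (401: token
        # required; 403: endpoint disabled), so re-raise immediately
        # instead of retrying for the full timeout window.
        code = getattr(exception, "code", None)
        if code is not None and code >= 400:
            raise exception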
|
|
Instead of logging the token values used, log the headers and replace the
actual values with the string 'REDACTED'. This allows users to examine
cloud-init.log and see that the IMDSv2 token header is being used while
avoiding leaving the value itself in the log file.
LP: #1863943
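A sketch of the redaction, assuming a helper applied to headers just
before logging (helper name hypothetical; the header name is the one AWS
documents for IMDSv2):

    REDACTED = "REDACTED"

    def redact_headers(headers, sensitive=("X-aws-ec2-metadata-token",)):
        # Return a log-safe copy: the token header stays visible so users
        # can confirm IMDSv2 is in use, but its value is masked.
        return {
            key: REDACTED if key in sensitive else value
            for key, value in headers.items()
        }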
|
|
* cloudinit: replace "import mock" with "from unittest import mock"
* test-requirements.txt: drop mock
Co-authored-by: Chad Smith <chad.smith@canonical.com>
|
|
* ec2: Add support for AWS IMDS v2 (session-oriented)
AWS now supports a new version of fetching Instance Metadata[1].
Update cloud-init's ec2 utility functions and update ec2 derived
datasources accordingly. For DataSourceEc2 (versus ec2-look-alikes)
cloud-init will issue the PUT request to obtain an API token for
the maximum lifetime and then all subsequent interactions with the
IMDS will include the token in the header.
If the API token endpoint is unreachable on the Ec2 platform, log a
warning and fall back to using IMDSv1, which does not use session tokens
when communicating with the instance metadata service.
We handle read errors, typically seen if the IMDS is beyond one network
hop (IMDSv2 responses have a ttl=1), by setting the api token to a
disabled value and then using IMDSv1 paths.
To support token-based headers, ec2_utils functions were updated
to support custom headers_cb and exception_cb callback functions
so Ec2 could store or refresh API tokens in the event of a token
becoming stale.
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instance-metadata-v2-how-it-works
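A self-contained sketch of the session-token flow using only the stdlib,
not cloud-init's ec2_utils (endpoint and header names per the AWS docs;
21600 seconds is the documented maximum token lifetime):

    from urllib.request import Request, urlopen

    IMDS = "http://169.254.169.254"

    def fetch_with_token(path, ttl=21600):
        # PUT to the token endpoint first, then send the token with
        # every subsequent IMDS read.
        token_req = Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        token = urlopen(token_req, timeout=5).read().decode()
        data_req = Request(
            IMDS + path,
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urlopen(data_req, timeout=5).read().decode()

    # e.g. fetch_with_token("/latest/meta-data/instance-id")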
|
|
e24cloud provides an EC2 compatible datasource.
This just identifies their platform based on the dmi 'system-vendor'
field containing 'e24cloud'. https://www.e24cloud.com/en/
Also fixes a 'chassis' typo in a zstack unit test docstring.
LP: #1696476
|
|
Zstack platform provides an AWS Ec2 metadata service, and
identifies their platform to the guest by setting the 'chassis asset tag'
to a string that ends with '.zstack.io'.
LP: #1841181
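A sketch of the check, assuming a read_dmi_data-style helper is passed in
(cloud-init exposes one; the suffix is the one named above):

    def is_zstack(read_dmi_data):
        # Zstack guests carry a chassis asset tag ending in '.zstack.io'.
        asset_tag = read_dmi_data("chassis-asset-tag") or ""
        return asset_tag.strip().endswith(".zstack.io")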
|
|
The EphemeralDHCP context manager did not parse or handle
rfc3442 classless static routes which prevented reading
datasource metadata in some clouds. This branch adds support
for extracting the field from the leases output, parsing the
format and then adding the required iproute2 ip commands to
apply (and teardown) the static routes.
LP: #1821102
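A sketch of the parsing step, assuming the rfc3442 option value has
already been extracted from the leases output as a comma-separated octet
string (function name hypothetical):

    def parse_rfc3442(option):
        # rfc3442 encodes each route as: prefix length, the significant
        # destination octets, then the 4 gateway octets.
        octets = [int(o) for o in option.split(",")]
        routes = []
        while octets:
            plen = octets.pop(0)
            dest_len = (plen + 7) // 8
            dest = octets[:dest_len] + [0] * (4 - dest_len)
            gateway = octets[dest_len:dest_len + 4]
            octets = octets[dest_len + 4:]
            routes.append((
                "%s/%d" % (".".join(map(str, dest)), plen),
                ".".join(map(str, gateway)),
            ))
        return routes

    # parse_rfc3442("24,10,0,0,10,0,0,1") -> [("10.0.0.0/24", "10.0.0.1")]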
|
|
AWS EC2 instances' networking comes in two basic flavors: Classic and VPC
(Virtual Private Cloud). The former has an interesting behavior of having
its MAC address changed whenever the instance is stopped/restarted. This
behavior is not observed in VPC instances.
In Ubuntu 18.04 (Bionic) the network "management" changed from ENI-style
(/etc/network/interfaces) to netplan, and when using netplan we observe
the following block present in /etc/netplan/50-cloud-init.yaml:
match:
macaddress: aa:bb:cc:dd:ee:ff
Jani Ollikainen noticed in Launchpad bug #1802073 that the EC2 Classic
instances were booting without network access in Bionic after stop/restart
procedure, due to their MAC address change behavior. It was narrowed down
to the netplan MAC match block, that kept the old MAC address after
stopping and restarting an instance, since the network configuration
writing happens by default only once in EC2 instances, in the first boot.
This patch changes the network configuration write to every boot in EC2
Classic instances, by checking against the "vpc-id" metadata information
provided only in the VPC instances - if we don't have this metadata value,
cloud-init will rewrite the network configuration file in every boot.
This was tested in an EC2 Classic instance and proved to fix the issue;
unit tests were also added for the new method is_classic_instance().
LP: #1802073
Reported-by: Jani Ollikainen <jani.ollikainen@ik.fi>
Suggested-by: Ryan Harper <ryan.harper@canonical.com>
Co-developed-by: Chad Smith <chad.smith@canonical.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
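A sketch of the new check over a simplified metadata dict (the real
method lives on the Ec2 datasource):

    def is_classic_instance(metadata):
        # VPC instances expose a 'vpc-id' key per NIC; EC2 Classic
        # instances do not, so its absence means the network config
        # must be rewritten on every boot.
        macs = (
            metadata.get("network", {}).get("interfaces", {}).get("macs", {})
        )
        return not any("vpc-id" in nic_md for nic_md in macs.values())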
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname, and is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance, and returns a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
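For example, a root-only consumer might read the standardized keys like
this (path as used by cloud-init; the printed values depend on the
datasource):

    import json

    # /run/cloud-init/instance-data.json is readable by root only.
    with open("/run/cloud-init/instance-data.json") as stream:
        instance_data = json.load(stream)

    v1 = instance_data.get("v1", {})
    print(v1.get("platform"))        # e.g. 'ec2'
    print(v1.get("subplatform"))     # e.g. 'metadata (http://169.254.169.254)'
    print(v1.get("public_ssh_keys"))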
|
|
On OpenSuSE 42.3, we would get errors running
tests/unittests/test_handler/test_handler_chef.py
- test_myhttps_nonet raises an UnmockedError
No mocking was registered, and real connections are not allowed
- test_myhttps_net raises SSLError
("bad handshake: SysCallError(32, 'EPIPE')",)
This fixes the errors by just using http instead of https.
It also modifies the HttprettyTestCase to do the httpretty activate and
deactivate itself in setUp and tearDown, so we don't have to decorate
individual test_ methods. We also set
httpretty.HTTPretty.allow_net_connect = False, as test cases here should
not reach out to a network resource.
LP: #1771659
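A sketch of the reworked base class (simplified from cloud-init's test
helpers):

    import unittest

    import httpretty

    class HttprettyTestCase(unittest.TestCase):
        def setUp(self):
            # Forbid real connections, then activate httpretty once per
            # test instead of decorating every test_ method.
            httpretty.HTTPretty.allow_net_connect = False
            httpretty.reset()
            httpretty.enable()
            super(HttprettyTestCase, self).setUp()

        def tearDown(self):
            httpretty.disable()
            httpretty.reset()
            super(HttprettyTestCase, self).tearDown()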
|
|
Fix an issue in EC2 where the datasource.identity had not been
initialized before being used when restoring datasource from pickle.
This is exposed in the upgrade and reboot path.
LP: #1748354
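One common shape for this kind of fix, shown as an illustrative guard
rather than the exact change: default the attribute before any code path
restored from an old pickle dereferences it.

    class DataSourceEc2Stub:
        # Stand-in for the real datasource class.

        def check_instance_id(self):
            # Objects pickled by an older cloud-init predate .identity;
            # give it a default before use.
            if not hasattr(self, "identity"):
                self.identity = None
            return self.identity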
|
|
This change will enable Azure VMs to report provisioning has completed
twice: first to tell the fabric it has completed, then a second time to
enable customer settings. The datasource for the second provisioning is
the Instance Metadata Service (IMDS), and the VM will poll indefinitely
for the new ovf-env.xml from IMDS.
This branch introduces EphemeralDHCPv4 which encapsulates common logic
used by both DataSourceEc2 and DataSourceAzure for temporary DHCP
interactions without side-effects.
LP: #1734991
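Usage of the new context manager looks roughly like this (class name from
the commit, import path as in cloud-init's net package; the crawl
callable is illustrative):

    from cloudinit.net.dhcp import EphemeralDHCPv4

    def crawl_with_temporary_network(crawl_metadata):
        # Networking exists only inside the with-block; the leased
        # address and routes are torn down on exit, leaving no
        # side-effects on the system.
        with EphemeralDHCPv4():
            return crawl_metadata()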
|
|
The motivation for this is that
a.) 1.7.1 runs with python 3.6 (bionic)
b.) we want to run pylint on tests/ and tools for the same reasons
that we want to run it on cloudinit/
The changes are described below.
- Update tox.ini to invoke pylint v1.7.1.
- Modify .pylintrc generated-members to ignore mocked object members (m_.*)
- Replace "dangerous" params defaulting to {} (see the sketch after this
list)
- Fix up cloud_tests use of platforms
- Cast some instance objects to dicts with dict()
- Handle python2.7 vs 3+ ConfigParser use of readfp (deprecated)
- Update use of assertEqual(<boolean>, value) to assert<Boolean>(value)
- Replace deprecated assertRegexp -> assertRegex
- Remove useless test-class calls to super class
- Assign class property accessors a result and use it
- Fix missing class member in CepkoResultTests
- Fix Cheetah test import
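The "dangerous" default fix from the list above, in miniature:

    # Before: def load(options={}):  -- one dict object is shared by
    # every call, so mutations leak between callers.
    def load(options=None):
        options = {} if options is None else options
        return options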
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive the file is only readable by root.
Generally most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64-encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
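A sketch of the resulting subclass contract (base class import as in
cloud-init; the subclass body is illustrative):

    from cloudinit.sources import DataSource

    class DataSourceExample(DataSource):
        # dsname keys this datasource's config in cloud-config.
        dsname = "Example"

        def _get_data(self):
            # Required override; the base class raises
            # NotImplementedError if a subclass fails to define it.
            self.metadata = {"instance-id": "i-example"}
            return True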
|
|
If a user upgraded to a new cloud-init and attempted to run 'cloud-init init'
without rebooting, cloud-init restores the datasource object from pickle.
The older version pickled datasource object had no value for
_network_config or fallback_nic. This caused the Ec2 datasource to attempt
to reconfigure networking with a None fallback_nic. The pickled object
also cached an older version of ec2 metadata which didn't contain network
information.
This branch does two things:
- Add a fallback_interface property to DatasourceEC2 to support reading the
old .fallback_nic attribute if it was set. New versions will
call net.find_fallback_nic() if one has not been found.
- Re-crawl metadata if we are on Ec2 and don't have a 'network' key in
metadata
LP: #1732917
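A sketch of the compatibility property (find_fallback_nic is the real
helper in cloudinit.net; the class is trimmed to the relevant parts):

    from cloudinit import net

    class DataSourceEc2Sketch:
        _fallback_interface = None
        fallback_nic = None  # may arrive pre-set from an old pickle

        @property
        def fallback_interface(self):
            if self._fallback_interface is None:
                # Honor the attribute an older pickled object carried,
                # otherwise discover a fallback NIC fresh.
                self._fallback_interface = (
                    self.fallback_nic or net.find_fallback_nic()
                )
            return self._fallback_interface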
|
|
VPC instances have the option to specify local-only IPv4 addresses. Allow
Ec2Datasource to enable dhcp4 on instances even if local-ipv4s is
configured on an instance.
Also limit network_configuration to only the primary (fallback) nic.
LP: #1728152
|
|
This change makes the DataSourceEc2Local do nothing unless it is on an
actual AWS platform. The motivation is twofold:
a.) It is generally safer to only make this function available to Ec2
clones that explicitly identify themselves to the guest. (It also
gives them a reason to supply identification code to cloud-init.)
b.) On non-Intel OpenStack platforms ds-identify would enable both the Ec2
and OpenStack sources, because there is no good data (such as dmi) to
positively identify the platform. Previously that would be fine, as
OpenStack would run first and be successful. The change to add Ec2Local
meant that Ec2 now runs first.
The best case for 'b' would be a slow down as attempts at the Ec2 metadata
service time out. The discovered case was worse.
Additionally we add a simple check of the datatype of the 'network' key
in the metadata before attempting to read it.
LP: #1715128
|
|
This moves the base test case classes into cloudinit/tests and
updates all the corresponding imports.
|
|
DataSourceEc2 behavior changed to first check a minimum acceptable
metadata version uri http://169.254.169.254/<min_version>/instance-id,
retrying on 404, until the metadata service is available. After the
metadata service is up, the datasource inspects preferred
extended_metadata_versions for availability. Unit tests only mocked the
preferred extended_metadata_version, so all Ec2 tests were retrying
attempts against
http://169.254.169.254/meta-data/<min-version>/instance-id, adding a lot
of time cost to the unit test runs.
This branch uses httpretty to properly mock the following:
- 404s from metadata on undesired extended_metadata_version test routes
- http://169.254.169.254/meta-data/2016-09-02/instance-id
- full metadata dictionary represented on min_metadata_version
- http://169.254.169.254/meta-data/2016-09-02/*
The branch also tightens httpretty to raise a MockError for any URL which
isn't mocked, via httpretty.HTTPretty.allow_net_connect=False.
LP: #1714117
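A sketch of the registrations (address per the commit; version strings
and the instance-id body are illustrative):

    import httpretty

    httpretty.HTTPretty.allow_net_connect = False  # unmocked URLs now error

    # An undesired extended metadata version answers 404, so the
    # datasource falls back instead of retrying:
    httpretty.register_uri(
        httpretty.GET,
        "http://169.254.169.254/2016-09-02/meta-data/instance-id",
        status=404,
    )
    # The min_metadata_version serves real content:
    httpretty.register_uri(
        httpretty.GET,
        "http://169.254.169.254/2009-04-04/meta-data/instance-id",
        body="i-0123456789abcdef0",
    )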
|
|
DataSourceEc2 now parses the metadata for each nic to determine if
configured for ipv6 and/or ipv4 addresses. In AWS for metadata version
2016-09-02, nics configured for ipv4 or ipv6 addresses will have non-zero
values stored in metadata at network/interfaces/macs/<MAC>/public-ipv4 or
ipv6s respectively. Those metadata files are only non-zero when an ipv4 or
ipv6 ip is associated to the specific nic. A new
DataSourceEc2.network_config property is added which parses the metadata
and renders a network version 1 dictionary representing both dhcp4 and
dhcp6 configuration for associated nics.
The network configuration returned from the datasource will also 'pin' the
nic name to the name presented on the instance for each nic.
LP: #1639030
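A sketch of the rendering over a simplified macs-metadata dict (helper
name hypothetical; keys mirror network/interfaces/macs/<MAC>/):

    def network_config_v1(macs_metadata, macs_to_nics):
        # macs_to_nics maps each MAC to the nic name seen on the instance.
        config = []
        for mac, nic_name in macs_to_nics.items():
            nic_md = macs_metadata.get(mac, {})
            subnets = []
            if nic_md.get("public-ipv4"):  # non-zero when ipv4 associated
                subnets.append({"type": "dhcp4"})
            if nic_md.get("ipv6s"):        # non-zero when ipv6 associated
                subnets.append({"type": "dhcp6"})
            config.append({
                "type": "physical",
                "name": nic_name,  # pin the nic name on the instance
                "mac_address": mac,
                "subnets": subnets,
            })
        return {"version": 1, "config": config}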
|
|
This branch is a prerequisite for IPv6 support in AWS by allowing Ec2
datasource to query the metadata source version 2016-09-02 about whether
or not it needs to configure IPv6 on interfaces. If version 2016-09-02
is not present, fall back to the min_metadata_version of 2009-04-04.
DataSourceEc2Local does not run on FreeBSD because dhclient there doesn't
support the -sf flag that allows us to run dhclient without filesystem
side-effects.
To query AWS' metadata address @ 169.254.169.254, the instance must have
a dhcp-allocated address configured. Configuring IPv4 link-local
addresses results in timeouts from the metadata service. We introduced a
DataSourceEc2Local subclass which performs a sandboxed dhclient discovery
that obtains an authorized IP address on eth0 and crawls metadata about
the full instance network configuration.
Since ec2 IPv6 metadata is not sufficient in itself to tell us all the
ipv6 knowledge we need, it is only used as a boolean to tell us which
nics need IPv6. Cloud-init will then configure desired interfaces for
DHCPv6 versus DHCPv4.
Performance side note: Shifting the dhcp work into init-local for Ec2
actually gets us 1 second faster deployments by skipping init-network
phase of alternate datasource checks, because Ec2Local is configured in
an earlier boot stage. In 3 test runs prior to this change cloud-init
runs were 5.5 seconds; with the change we now average 4.6 seconds.
This efficiency could be improved even further if we avoided dhcp
discovery and talked to the metadata service from an AWS-authorized dhcp
address, if there were some way to advertise the dhcp configuration via
DMI/SMBIOS or system environment variables.
Inspecting the time costs of the dhclient setup/teardown in 3 live runs,
the cost of the dhcp setup round trip on AWS is:
test 1: 76 milliseconds
dhcp discovery + metadata: 0.347 seconds
metadata alone: 0.271 seconds
test 2: 88 milliseconds
dhcp discovery + metadata: 0.388 seconds
metadata alone: 0.300 seconds
test 3: 75 milliseconds
dhcp discovery + metadata: 0.366 seconds
metadata alone: 0.291 seconds
LP: #1709772
|
|
EC2 was the original datasource, but this adds some initial unit tests
for it. Also updates a docstring for an internal method.
|