Applied Black and isort, fixed any linting issues, updated tox.ini
and CI.
|
|
Adds a udev script which will invoke a hotplug hook script on all net
add events. The script will write some udev arguments to a systemd FIFO
socket (to ensure we have only one instance of cloud-init running at a
time), which is then read by a new service that calls a new 'cloud-init
devel hotplug-hook' command to handle the new event.
This hotplug-hook command will:
- Fetch the pickled datasource
- Verify that the hotplug event is supported/enabled
- Update the metadata for the datasource
- Ensure the hotplugged device exists within the datasource
- Apply the config change on the datasource metadata
- Bring up the new interface (or apply global network configuration)
- Save the updated metadata back to the pickle cache
Also scattered in some unrelated type annotations where helpful.
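As a rough sketch of the hook's flow (names and the cache path are
illustrative assumptions, not the shipped implementation):

    import pickle

    PICKLE_PATH = "/var/lib/cloud/instance/obj.pkl"  # assumed cache location

    def handle_hotplug(devpath, udevaction):
        # Fetch the pickled datasource
        with open(PICKLE_PATH, "rb") as f:
            ds = pickle.load(f)
        # Verify that the hotplug event is supported/enabled
        if not getattr(ds, "supports_hotplug", False):  # illustrative attribute
            return
        # Update metadata, apply config, bring up the interface
        ds.update_metadata()
        ds.apply_network_config()
        # Save the updated metadata back to the pickle cache
        with open(PICKLE_PATH, "wb") as f:
            pickle.dump(ds, f)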
|
|
Control is currently limited to boot events, though this should
allow us to more easily incorporate HOTPLUG support. Disabling
'instance-first-boot' is not supported as we apply networking config
too early in boot to have processed userdata (along with the fact
that this would be a pretty big foot-gun).
The concept of update events on datasource has been split into
supported update events and default update events. Defaults will be
used if there are no user-defined update events, but user-defined
events won't be applied if they aren't supported.
When applying the networking config, we now check to see if the event
is supported by the datasource as well as if it is enabled.
Configuration looks like:
updates:
  network:
    when: ['boot']
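A minimal sketch of the supported/default/user-defined merge this
describes (structures simplified; the real code's event names and types
are not shown here):

    def events_to_apply(supported, default, user_defined=None):
        requested = user_defined if user_defined is not None else default
        # user-defined events are only honored when the datasource supports them
        return {scope: set(events) & supported.get(scope, set())
                for scope, events in requested.items()}

    # events_to_apply({'network': {'boot', 'hotplug'}}, {'network': {'boot'}},
    #                 user_defined={'network': {'boot', 'hotplug'}})
    # -> {'network': {'boot', 'hotplug'}}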
|
|
#342 (70dbccbb) introduced the ability to determine route-metrics based on
the `device-number` provided by the EC2 IMDS. Not all datasources that
subclass EC2 will have this attribute, so allow the old behavior if
`device-number` is not present.
LP: #1917875
|
|
This just separates the reading of dmi values into its own file.
Some things of note:
* left import of util in dmi.py only for 'is_container'
  (it'd be good if is_container was not in util)
* moved the use of 'util.is_x86' into dmi.py
* open() is used directly rather than load_file.
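A minimal sketch of the direct-read approach (standard Linux sysfs
paths; error handling simplified):

    from typing import Optional

    def read_dmi_sysfs(name: str) -> Optional[str]:
        """Read one DMI value, e.g. name='sys_vendor' or 'product_name'."""
        try:
            # open() used directly rather than util.load_file
            with open("/sys/class/dmi/id/" + name, "rb") as f:
                return f.read().decode("utf-8", errors="replace").strip()
        except OSError:
            return None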
|
|
Changes:
tox: bump the pylint version to 2.6.0 in the default run
Fix pylint 2.6.0 W0707 warnings (raise-missing-from)
|
|
We want to set route-metrics such that NICs are configured with the priority that they are given in the network metadata that we receive from the IMDS. (This switches away from using MAC ordering.)
This also required the following test changes:
* reverse the sort order of the MACs in test data (so that they would trigger the bug being fixed)
* fix up the key names in `NIC2_MD` (which were under_scored instead of dash-separated)
* use a full interface dict (rather than a minimal one) for `TestConvertEc2MetadataNetworkConfig`
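A sketch of the intended ordering (metric offset is illustrative):

    def nic_metrics(macs_metadata):
        """Map each MAC to a route-metric following IMDS 'device-number' order."""
        ordered = sorted(macs_metadata.items(),
                         key=lambda item: int(item[1]["device-number"]))
        # the primary NIC (device-number 0) gets the lowest metric
        return {mac: 100 + index for index, (mac, _md) in enumerate(ordered)}

    # nic_metrics({'0a:00:00:00:00:02': {'device-number': '1'},
    #              '0a:00:00:00:00:01': {'device-number': '0'}})
    # -> {'0a:00:00:00:00:01': 100, '0a:00:00:00:00:02': 101}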
LP: #1876312
|
|
Add support for rendering secondary static IPv4/IPv6 addresses on
any NIC attached to the machine. In order to see secondary IP
addresses in Ec2 IMDS network config, cloud-init now reads metadata
version 2018-09-24. Metadata services which do not support the Ec2
API version will not get secondary IP addresses configured.
In order to discover secondary IP address config, cloud-init now
parses the local-ipv4s, ipv6s, subnet-ipv4-cidr-block and
subnet-ipv6-cidr-block metadata keys to determine additional IPs and
the appropriate subnet prefix to set for a nic.
Also add the datasource config option apply_full_imds_network_config
which defaults to true to allow cloud-init to automatically configure
secondary IP addresses. Setting this option to false will tell
cloud-init to avoid setting up secondary IP addresses.
Also in this branch:
- Shift Ec2 datasource to emit network config v2 instead of v1.
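A simplified sketch of the parse (key names are the IMDS keys listed
above; the string handling and prefix fallback are assumptions):

    def secondary_ipv4(nic_md):
        ips = nic_md.get("local-ipv4s") or []
        if isinstance(ips, str):  # IMDS may return a single value as a bare string
            ips = ips.splitlines()
        cidr = nic_md.get("subnet-ipv4-cidr-block", "")  # e.g. '172.31.0.0/20'
        prefix = cidr.split("/")[1] if "/" in cidr else "24"
        # the first address is the primary; the rest are secondary statics
        return ["%s/%s" % (ip, prefix) for ip in ips[1:]]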
LP: #1866930
|
|
The EC2 Data Source needs to handle 3 states of the Instance
Metadata Service configured for a given instance:
1. HttpTokens : optional & HttpEndpoint : enabled
Either IMDSv2 or IMDSv1 can be used.
2. HttpTokens : required & HttpEndpoint : enabled
Calls to IMDS without a valid token (IMDSv1 or IMDSv2 with expired token)
will return a 401 error.
3. HttpEndpoint : disabled
The IMDS http endpoint will return a 403 error.
Previous work to support IMDSv2 in cloud-init handled case 1 and case 2.
This commit handles case 3 by bypassing the retry block when the IMDS returns an
HTTP status code >= 400 on the official AWS cloud platform.
This shaves 2 minutes off rebooting an instance that has its IMDS http endpoint
disabled, but creates some inconsistencies. An instance that doesn't set
"manual_cache_clean" to "True" will have its /var/lib/cloud/instance symlink
removed altogether after it has failed to find a datasource.
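A sketch of the bypass (the callback shape is an illustration of the
ec2_utils exception_cb hook, not the exact code):

    def imds_exception_cb(msg, exception):
        """Abort the retry loop as soon as IMDS answers with a 4xx/5xx."""
        status = getattr(exception, "code", None)
        if status is not None and status >= 400:
            raise exception  # re-raising stops further retries immediately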
|
|
Instead of logging the token values used, log the headers and replace the actual
values with the string 'REDACTED'. This allows users to examine cloud-init.log
and see that the IMDSv2 token header is being used but avoids leaving the value
used in the log file itself.
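A sketch of the masking (the header name is AWS's real IMDSv2 token
header; the helper itself is illustrative):

    AWS_TOKEN_HEADER = "X-aws-ec2-metadata-token"

    def redacted(headers):
        safe = dict(headers)  # never mutate the headers actually sent
        if AWS_TOKEN_HEADER in safe:
            safe[AWS_TOKEN_HEADER] = "REDACTED"
        return safe

    # LOG.debug('Reading %s with headers: %s', url, redacted(headers))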
LP: #1863943
|
|
* ec2: Add support for AWS IMDS v2 (session-oriented)
AWS now supports a new version of fetching Instance Metadata[1].
Update cloud-init's ec2 utility functions and update ec2 derived
datasources accordingly. For DataSourceEc2 (versus ec2-look-alikes)
cloud-init will issue the PUT request to obtain an API token for
the maximum lifetime and then all subsequent interactions with the
IMDS will include the token in the header.
If the API token endpoint is unreachable on the Ec2 platform, log a
warning and fall back to using IMDS v1, which does not use
session tokens when communicating with the Instance metadata
service.
We handle read errors, typically seen if the IMDS is beyond one
network hop (IMDSv2 responses have a ttl=1), by setting the api token
to a disabled value and then using IMDSv1 paths.
To support token-based headers, ec2_utils functions were updated
to support custom headers_cb and exception_cb callback functions
so Ec2 could store or refresh API tokens in the event of a token
becoming stale.
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instance-metadata-v2-how-it-works
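The handshake itself, per the AWS docs in [1] (endpoint, header name
and the 21600-second maximum TTL are documented values; this standalone
sketch uses urllib rather than cloud-init's url_helper):

    import urllib.request

    def fetch_imds_token(ttl_seconds=21600):  # 21600s is the maximum lifetime
        req = urllib.request.Request(
            "http://169.254.169.254/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    # subsequent IMDS reads then send: {'X-aws-ec2-metadata-token': token}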
|
|
e24cloud provides an EC2 compatible datasource.
This just identifies their platform based on dmi 'system-vendor'
having 'e24cloud'. https://www.e24cloud.com/en/.
Updated chassis typo in zstack unit test docstring.
LP: #1696476
|
|
Zstack platform provides an AWS Ec2 metadata service, and
identifies their platform to the guest by setting the 'chassis asset tag'
to a string that ends with '.zstack.io'.
LP: #1841181
|
|
The detection for brightbox in both ds-identify and in
identify_brightbox would incorrectly match the domain 'bobrightbox',
which is not a brightbox platform. The fix here is to restrict
matching to '*.brightbox.com' rather than '*brightbox.com'.
Also, while here, remove a URL referencing bug 1661693, which added the
knowledge of brightbox.
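The fix reduces to an anchored suffix check, e.g.:

    def is_brightbox(product_serial):
        # '.brightbox.com' (not 'brightbox.com'), so 'bobrightbox' can't match
        return product_serial.endswith(".brightbox.com")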
|
|
Our previous understanding of the upgrade issue was incomplete; it turns
out the only change we need is the one now outlined.
|
|
AWS EC2 instances' networking comes in 2 basic flavors: Classic and VPC
(Virtual Private Cloud). The former has an interesting behavior of having
its MAC address changed whenever the instance is stopped/restarted. This
behavior is not observed in VPC instances.
In Ubuntu 18.04 (Bionic) the network "management" changed from ENI-style
(/etc/network/interfaces) to netplan, and when using netplan we observe
the following block present in /etc/netplan/50-cloud-init.yaml:
match:
  macaddress: aa:bb:cc:dd:ee:ff
Jani Ollikainen noticed in Launchpad bug #1802073 that the EC2 Classic
instances were booting without network access in Bionic after stop/restart
procedure, due to their MAC address change behavior. It was narrowed down
to the netplan MAC match block, that kept the old MAC address after
stopping and restarting an instance, since the network configuration
writing happens by default only once in EC2 instances, in the first boot.
This patch changes the network configuration write to every boot in EC2
Classic instances, by checking against the "vpc-id" metadata information
provided only in the VPC instances - if we don't have this metadata value,
cloud-init will rewrite the network configuration file in every boot.
This was tested in an EC2 Classic instance and proved to fix the issue;
unit tests were also added for the new method is_classic_instance().
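A simplified sketch of the new check (metadata layout abbreviated):

    def is_classic_instance(metadata):
        """True when no NIC carries a 'vpc-id' key in the EC2 metadata."""
        macs = metadata.get("network", {}).get("interfaces", {}).get("macs", {})
        for nic_md in macs.values():
            if "vpc-id" in nic_md:
                return False  # VPC instance: MAC is stable, write config once
        return True  # Classic instance: rewrite network config on every boot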
LP: #1802073
Reported-by: Jani Ollikainen <jani.ollikainen@ik.fi>
Suggested-by: Ryan Harper <ryan.harper@canonical.com>
Co-developed-by: Chad Smith <chad.smith@canonical.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
|
|
Fixes:
- flake8: use ==/!= to compare str, bytes, and int literals
- pycodestyle: E117 over-indented
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This attribute is overridden in NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance and returning a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
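An illustrative override of the kind described (the slug/detail format
is assumed from the description above):

    class DataSourceSketch:
        dsname = "Sketch"
        metadata_address = "http://169.254.169.254"

        @property
        def platform(self):
            return self.dsname.lower()  # default: lowercase datasource name

        def _get_subplatform(self):
            # '<slug> (<detail>)', e.g. 'metadata (http://169.254.169.254)'
            return "metadata (%s)" % self.metadata_address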
|
|
Network has not yet been configured in the init-local stage so the
openstack datasource will use dhcp-client to temporarily obtain an ipv4
address and query the metadata service at http://169.254.169.254 to get
network_data.json configuration. If present, the datasource will return
network_config version 1 config based on that network_data.json content.
Previously the OpenStack datasource only set up dhcp on the fallback
interface, so this represents a change in behavior to react to the full
config provided by openstack.
Also significant to OpenStack is the separation of a _crawl_data operation
from get_data(). crawl_data walks the available metadata services and
returns a dict of discovered content. get_data consumes the crawled_data,
caches it in the datasource and reacts to that data.
/run/cloud-init/instance-data.json now publishes the network_data.json or
ec2_metadata key if that data is present on any datasource.
The main reasons for the separation of crawl from get_data:
* Enable performance metrics of cloud-init's metadata crawls on each boot
* Enable cloud-init modules and scripts to query and consume metadata
content which may have updated/changed after cloud-init's initial cache
during instance boot. (Think hotplug)
Also generalize common logic to base DataSource class/module:
* Move the common UNSET variable up into the base datasource module; fix
EC2, ConfigDrive, OpenStack, SmartOS to use the global.
* Drop get_url_settings from Ec2, CloudStack and OpenStack and generalize
DataSource.get_url_params(). Allow subclasses to override url_max_wait,
url_timeout and url_retries params.
* Rename get_network_metadata bool to perform_dhcp_setup as it designates
whether EphemeralDHCPv4 setup is required before crawling metadata.
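A simplified sketch of the generalized parameters (attribute names
follow the description above; config handling abbreviated):

    class UrlParamsSketch:
        url_max_wait = 120  # total seconds to wait; subclasses may override
        url_timeout = 10    # per-request timeout
        url_retries = 5
        ds_cfg = {}

        def get_url_params(self):
            # datasource config, when present, overrides the class defaults
            return (self.ds_cfg.get("max_wait", self.url_max_wait),
                    self.ds_cfg.get("timeout", self.url_timeout),
                    self.ds_cfg.get("retries", self.url_retries))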
LP: #1749717
|
|
Fix an issue in EC2 where the datasource.identity had not been
initialized before being used when restoring datasource from pickle.
This is exposed in the upgrade and reboot paths.
LP: #1748354
|
|
This change will enable azure vms to report provisioning has completed
twice, first to tell the fabric it has completed then a second time to
enable customer settings. The datasource for the second provisioning is
the Instance Metadata Service (IMDS), and the VM will poll indefinitely for
the new ovf-env.xml from IMDS.
This branch introduces EphemeralDHCPv4, which encapsulates common logic
used by both DataSourceEc2 and DataSourceAzure for temporary DHCP
interactions without side-effects.
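A stand-in showing the pattern EphemeralDHCPv4 encapsulates (not the
actual class):

    import contextlib

    @contextlib.contextmanager
    def ephemeral_dhcpv4(nic):
        lease = {"interface": nic, "fixed-address": "169.254.0.2"}  # via dhclient in reality
        try:
            yield lease  # caller polls the metadata service while the lease is up
        finally:
            pass  # address/routes torn down here, leaving no side-effects

    # with ephemeral_dhcpv4('eth0') as lease:
    #     poll_imds()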
LP: #1734991
|
|
The instance identity document is a better source for region information,
partly because region isn't actually in meta-data at all, only
availability-zone, which happens to be named similarly.
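For illustration, the identity document is a documented IMDS endpoint
carrying a real 'region' field:

    import json
    import urllib.request

    def get_region():
        url = "http://169.254.169.254/latest/dynamic/instance-identity/document"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)["region"]

    # versus the old approximation of trimming the availability zone,
    # e.g. 'us-east-1a'[:-1] -> 'us-east-1'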
Reviewed-by: Ethan Faust <efaust@amazon.com>
Reviewed-by: Cyle Riggs <cyler@amazon.com>
Reviewed-by: Tom Kirchner <tjk@amazon.com>
Reviewed-by: Matt Nierzwicki <nierzwic@amazon.com>
[ajorgens@amazon.com: rebase onto 0.7.9]
[ajorgens@amazon.com: changes per merge proposal discussions]
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive the file is only readable by root.
Generally most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64-encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
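A minimal sketch of the subclass contract (base class simplified):

    class DataSource:
        dsname = "_undef"  # subclasses must define, e.g. 'Ec2'

        def get_data(self):
            # base class wraps caching and instance-data.json writing
            return self._get_data()

        def _get_data(self):
            raise NotImplementedError("Subclass must implement _get_data.")

    class DataSourceExample(DataSource):
        dsname = "Example"

        def _get_data(self):
            self.metadata = {"instance-id": "i-example"}
            return True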
|
|
If user upgraded to new cloud-init and attempted to run 'cloud-init init'
without rebooting, cloud-init restores the datasource object from pickle.
The older version pickled datasource object had no value for
_network_config or fallback_nic. This caused the Ec2 datasource to attempt
to reconfigure networking with a None fallback_nic. The pickled object
also cached an older version of ec2 metadata which didn't contain network
information.
This branch does two things:
- Add a fallback_interface property to DatasourceEC2 to support reading the
old .fallback_nic attribute if it was set. New versions will
call net.find_fallback_nic() if there has not been one found.
- Re-crawl metadata if we are on Ec2 and don't have a 'network' key in
metadata
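A sketch of the compatibility property (find_fallback_nic() here is a
stand-in for cloudinit.net.find_fallback_nic):

    def find_fallback_nic():
        return "eth0"  # stand-in for cloudinit.net.find_fallback_nic()

    class Ec2Sketch:
        fallback_nic = None  # old pickled objects may carry this, or None
        _fallback_interface = None

        @property
        def fallback_interface(self):
            if self._fallback_interface is None:
                # honor the old attribute when present, else discover fresh
                self._fallback_interface = self.fallback_nic or find_fallback_nic()
            return self._fallback_interface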
LP: #1732917
|
|
VPC instances have the option to specify local-only IPv4 addresses. Allow
Ec2Datasource to enable dhcp4 on instances even if local-ipv4s is
configured on an instance.
Also limit network_configuration to only the primary (fallback) nic.
LP: #1728152
|
|
This change makes the DataSourceEc2Local do nothing unless it is on
actual AWS platform. The motivation is twofold:
a.) It is generally safer to only make this function available to Ec2
clones that explicitly identify themselves to the guest. (It also
gives them a reason to supply identification code to cloud-init.)
b.) On non-intel OpenStack platforms ds-identify would enable both the Ec2
and OpenStack sources. That is because there is no good data (such as
dmi) to positively identify the platform. Previously that would be fine
as OpenStack would run first and be successful. The change to add Ec2Local
meant that an Ec2 now runs first.
The best case for 'b' would be a slow down as attempts at the Ec2 metadata
service time out. The discovered case was worse.
Additionally we add a simple check for datatype of 'network' in the
metadata before attempting to read it.
LP: #1715128
|
|
DataSourceEc2 now parses the metadata for each nic to determine if
configured for ipv6 and/or ipv4 addresses. In AWS for metadata version
2016-09-02, nics configured for ipv4 or ipv6 addresses will have non-zero
values stored in metadata at network/interfaces/macs/<MAC>/public-ipv4 or
ipv6s respectively. Those metadata files are only non-zero when an ipv4 or
ipv6 ip is associated to the specific nic. A new
DataSourceEc2.network_config property is added which parses the metadata
and renders a network version 1 dictionary representing both dhcp4 and
dhcp6 configuration for associated nics.
The network configuration returned from the datasource will also 'pin' the
nic name to the name presented on the instance for each nic.
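A simplified sketch of the per-NIC parse into network config v1:

    def nic_to_v1(name, mac, nic_md):
        subnets = []
        if nic_md.get("public-ipv4"):  # non-empty means an ipv4 is associated
            subnets.append({"type": "dhcp4"})
        if nic_md.get("ipv6s"):
            subnets.append({"type": "dhcp6"})
        # including name + mac_address 'pins' the nic name on the instance
        return {"type": "physical", "name": name,
                "mac_address": mac, "subnets": subnets}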
LP: #1639030
|
|
This branch is a prerequisite for IPv6 support in AWS by allowing Ec2
datasource to query the metadata source version 2016-09-02 about whether
or not it needs to configure IPv6 on interfaces. If version 2016-09-02
is not present, fall back to the min_metadata_version of 2009-04-04.
DataSourceEc2Local will not run on FreeBSD because dhclient there doesn't
support the -sf flag, which allows us to run dhclient without filesystem
side-effects.
To query AWS' metadata address @ 169.254.169.254, the instance must have
a dhcp-allocated address configured. Configuring IPv4 link-local
addresses results in timeouts from the metadata service. We introduced a
DataSourceEc2Local subclass which will perform a sandboxed dhclient
discovery which obtains an authorized IP address on eth0 and crawls
metadata about the full instance network configuration.
Since ec2 IPv6 metadata is not sufficient in itself to tell us all the
ipv6 knowledge we need, it will only be used as a boolean to tell us which
nics need IPv6. Cloud-init will then configure desired interfaces for
DHCPv6 versus DHCPv4.
Performance side note: Shifting the dhcp work into init-local for Ec2
actually gets us 1 second faster deployments by skipping init-network
phase of alternate datasource checks because Ec2Local is configured in
an earlier boot stage. In 3 test runs prior to this change, cloud-init
runs were 5.5 seconds; with the change we now average 4.6 seconds.
This efficiency could be improved even further if we avoided dhcp
discovery and talked to the metadata service from an AWS-authorized
dhcp address, if there were some way to advertise the dhcp
configuration via DMI/SMBIOS or system environment variables.
Inspecting time costs of the dhclient setup/teardown in 3 live runs, the
time cost for the dhcp setup round trip on AWS is:
test 1: 76 milliseconds
  dhcp discovery + metadata: 0.347 seconds
  metadata alone: 0.271 seconds
test 2: 88 milliseconds
  dhcp discovery + metadata: 0.388 seconds
  metadata alone: 0.300 seconds
test 3: 75 milliseconds
  dhcp discovery + metadata: 0.366 seconds
  metadata alone: 0.291 seconds
LP: #1709772
|
|
EC2 was the original datasource, but this adds some initial tests for it.
Also updates a docstring for an internal method.
|
|
AliYun cloud platform is now identifying itself by setting the dmi
product id to the well known value "Alibaba Cloud ECS". The changes here
identify that properly in tools/ds-identify and in the DataSourceAliYun.
Since the 'get_data' for AliYun now identifies itself correctly, we can
enable AliYun by default.
LP: #1638931
|
|
This will change all instances of LOG.warn to LOG.warning as warn
is now a deprecated method. It will also make sure any logging
uses lazy logging by passing string format arguments as function
parameters.
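For example:

    import logging

    LOG = logging.getLogger(__name__)
    device = "/dev/xvdf"

    # before: LOG.warn("failed to mount %s" % device)   # deprecated alias + eager
    LOG.warning("failed to mount %s", device)  # lazy: formatted only if emitted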
|
|
This moves the warning code that was added specifically for
EC2 into a generic path at cloudinit/warnings.py.
It also adds support for writing warning files into the
warnings directory to be shown by Z99-cloudinit-warnings.sh.
|
|
Brightbox will identify their platform to the guest by setting the
product serial to a string that ends with 'brightbox.com'.
LP: #1661693
|
|
Based on the setting Datasource/Ec2/strict_id, the datasource
will now warn once per instance.
|
|
This has been a recurring ask and we had initially just made the change to
the cloud-init 2.0 codebase. As the current thinking is we'll just
continue to enhance the current codebase, it's desirable to relicense to
match what we'd intended as part of the 2.0 plan here.
- put a brief description of license in LICENSE file
- put full license versions in LICENSE-GPLv3 and LICENSE-Apache2.0
- simplify the per-file header to reference LICENSE
- tox: ignore H102 (Apache License Header check)
Add license header to files that ship.
Reformat headers, make sure everything has vi: at end of file.
Non-shipping files do not need the copyright header,
but at the moment tests/ have it.
|
|
pycodestyle 2.1.0 is in Ubuntu zesty, and complained about the
changes made here. Simple style changes. This makes 'make pep8'
pass again when built in a zesty build system with proposed enabled.
|
|
Simply fix a commit that should not have been pushed.
|
|
Oracle public cloud has the string 'unavailable' in its metadata
service for 'block-device-mapping'. The change here is to return
None in device_name_to_device if that is the case.
|
|
Support AliYun (Ali-Cloud ECS). This datasource inherits from EC2,
the main difference is the meta-server address is changed to
100.100.100.200.
The datasource behaves similarly to EC2 and relies on network polling.
As such, it is not enabled by default.
|
|
if the Datasource does not have an entry in config, then
set it to be an empty dictionary rather than None.
Also remove places that did this elsewhere.
|
|
Update make check target to run pep8 and run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes
|
|
|
Also implement DataSource.region for EC2 and GCE data sources.
|
|
to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
|
|
get_url_settings should return a pair of
max_wait and timeout, not False. Fix this
bug by checking max_wait <= 0 in the calling
function and returning correctly from there
instead.
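A sketch of the corrected shape (polling loop stubbed out):

    def get_url_settings(ds_cfg):
        # always return the pair; never a bare False
        return ds_cfg.get("max_wait", 120), ds_cfg.get("timeout", 50)

    def wait_for_metadata_service(ds_cfg):
        max_wait, timeout = get_url_settings(ds_cfg)
        if max_wait <= 0:
            return False  # the caller, not the getter, short-circuits
        return poll_urls(max_wait, timeout)

    def poll_urls(max_wait, timeout):
        return True  # stand-in for the real URL polling loop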
|