|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform API supporting the
datasource; for example, 'ec2' covers the aws, aliyun and brightbox
cloud names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance and returning a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
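A minimal sketch of the new hooks (the override shown is illustrative,
not the exact upstream code; the seed_dir attribute is assumed):

    from cloudinit import sources

    class DataSourceNoCloud(sources.DataSource):
        dsname = 'NoCloud'   # platform defaults to this, lowercased

        def _get_subplatform(self):
            # slug plus detail, e.g. "seed-dir (/var/lib/cloud/seed/nocloud)"
            return 'seed-dir (%s)' % self.seed_dir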
|
|
Fix a bug where the mac address set on a bond device was
ignored when provided in OpenStack network_config.json.
LP: #1682064
|
|
Mark some newer versions of OpenStack metadata as supported for reading:
2016-06-30 : Newton one
2016-10-06 : Newton two
2017-02-22 : Ocata
2018-08-27 : Rocky
|
|
Cloud-init was reading a list of versions from the OpenStack metadata
service (http://169.254.169.254/openstack/) and attempting to select the
newest known supported version. The problem was that the list
of versions was not being decoded, so we were comparing a list of
bytes (found versions) to a list of strings (known versions).
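An illustration of the mismatch and the fix (values made up):

    # the body returned by the metadata service is bytes and must be
    # decoded before comparison with the known (str) versions
    content = b'2016-10-06\n2017-02-22\n'
    found = [line.decode('utf-8') for line in content.splitlines()]
    assert '2017-02-22' in found   # fails without the decode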
LP: #1792157
|
|
In many cases, cloud-init uses 'util.subp' to run a subprocess.
This is not really desirable in our unit tests as it makes the tests
dependent upon the existence of those utilities.
The change here is to modify the base test case class (CiTestCase) to
raise an exception any time subp is called. Then, fix all callers.
For cases where subp is necessary or actually desired, we can use it
via
a.) the context manager CiTestCase.allow_subp(value)
b.) the class-level attribute self.allowed_subp = value
In both cases the value is a list of acceptable executable names that
will be called (essentially argv[0]).
Some cleanups in AltCloud were done as the code was being updated.
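A usage sketch, with import paths assumed and command names illustrative:

    from cloudinit import util
    from cloudinit.tests.helpers import CiTestCase

    class TestTool(CiTestCase):
        allowed_subp = ['whoami']          # b.) class-level allowance

        def test_whoami(self):
            util.subp(['whoami'])

        def test_date(self):
            # a.) per-test context manager
            with self.allow_subp(['date']):
                util.subp(['date'])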
|
|
Multiple distros use sysconfig format but have different content
and paths to certain files. Update distros to specify these
template paths in their renderer_configs dictionary.
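A hedged sketch of what a sysconfig renderer_configs entry might look
like (paths and key names illustrative):

    renderer_configs = {
        'sysconfig': {
            'control': 'etc/sysconfig/network',
            'iface_templates': '%(base)s/network-scripts/ifcfg-%(name)s',
            'route_templates': {
                'ipv4': '%(base)s/network-scripts/route-%(name)s',
                'ipv6': '%(base)s/network-scripts/route6-%(name)s',
            },
        },
    }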
|
|
This adds an Oracle-specific datasource that functions with OCI.
It is a simplified version of the OpenStack metadata server
with support for vendor-data.
It does not support the OCI-C (classic) platform.
Also move BrokenMetadata up to the common 'sources' module, as this
was the third occurrence of that class.
|
|
Azure datasource now queries IMDS metadata service for network
configuration at link local address
http://169.254.169.254/metadata/instance?api-version=2017-12-01. The
azure metadata service presents a list of macs and allocated ip addresses
associated with this instance. Azure will now also regenerate network
configuration on every boot because it subscribes to EventType.BOOT
maintenance events as well as the 'first boot'
EventType.BOOT_NEW_INSTANCE.
For testing, add an azure-imds --kind option to the cloud-init devel
net_convert tool for debugging IMDS metadata.
Also refactor _get_data into 3 discrete methods:
- is_platform_viable: check quickly whether the datasource is
potentially compatible with the platform on which it is running
- crawl_metadata: walk all potential metadata candidates, returning a
structured dict of all metadata and userdata. Raise InvalidMetaData on
error.
- _get_data: call crawl_metadata and process results or error. Cache
instance data on class attributes: metadata, userdata_raw etc.
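A structural sketch of the refactor (simplified; the exception class
name is assumed):

    def _get_data(self):
        if not self.is_platform_viable():
            return False
        try:
            crawled = self.crawl_metadata()   # read-only; sets no attrs
        except sources.InvalidMetaDataException:
            return False
        # cache results on instance attributes
        self.metadata = crawled['metadata']
        self.userdata_raw = crawled.get('userdata_raw')
        return True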
|
|
DEP_NETWORK is removed since network configuration must now
run at each boot. The new EventType.BOOT event is used
for that.
Network is brought up early to fetch the metadata which
is required to configure the network (ipv4 and/or v6).
Adds unit tests for the above and fixes test_common for the
LOCAL and NETWORK datasource sets.
|
|
The OpenNebula data source generates an invalid netplan yaml file
if the IPv6 gateway is not defined in context.sh.
LP: #1768547
|
|
The OpenStack datasource in 18.3 changed to detect data in the
init-local stage instead of init-network and attempted to redetect
OpenStackLocal datasource on Oracle across reboots. The function
detect_openstack was added to quickly detect whether a platform is
OpenStack based on dmi product_name or chassis_asset_tag and it was
a bit too strict for Oracle in checking for 'OpenStack Nova'/'Compute'
DMI product_name.
Oracle's DMI product_name reports 'Standard PC (i440FX + PIIX, 1996)'
and DMI chassis_asset_tag is 'OracleCloud.com'.
The detect_openstack function now adds 'OracleCloud.com' to the list of
valid chassis-asset-tags for the OpenStack datasource.
LP: #1784685
|
|
A recent commit added get_linux_distro to replace the deprecated python
platform.dist() behavior before it is dropped from python. It added
behavior that was correct on OpenSuSE and SLES, returning
(<distro_name>, <distro_version>, <cpu-arch>).
Fix get_linux_distro to behave more like the specific distribution's
platform.dist on ubuntu, centos and debian, which will return the
distribution release codename as the third element instead of <cpu-arch>.
SLES and OpenSUSE will retain their current behavior.
Examples follow:
('sles', '15', 'x86_64')
('opensuse', '42.3', 'x86_64')
('debian', '9', 'stretch')
('ubuntu', '16.04', 'xenial')
('centos', '7', 'Core')
LP: #1780481
|
|
OpenStack datasource is now discovered in init-local stage. In order to
probe whether OpenStack metadata is present, it performs a costly
sandboxed dhclient setup and metadata probe against http://169.254.169.254
for openstack data.
Cloud-init properly detects non-OpenStack on EC2, but it spends precious
time probing the metadata service, which also results in a confusing
WARNING log about 'metadata not present'. To avoid the wasted cycles and
the confusing warning, get_data will call a detect_openstack function to
quickly determine whether the platform looks like OpenStack before trying
to set up networking to probe and crawl the metadata service.
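A hedged sketch of the fast-path check (helper and constant names
assumed, values illustrative):

    from cloudinit import util

    VALID_DMI_PRODUCT_NAMES = ['OpenStack Nova', 'OpenStack Compute']
    VALID_DMI_ASSET_TAGS = ['OpenTelekomCloud']

    def detect_openstack():
        # cheap local checks only: no network setup, no metadata crawl
        if util.read_dmi_data('system-product-name') in VALID_DMI_PRODUCT_NAMES:
            return True
        return util.read_dmi_data('chassis-asset-tag') in VALID_DMI_ASSET_TAGS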
LP: #1776701
|
|
- Updated datadict reference URL
- Store sdc:routes metadata in DatasourceSmartOS
- Map sdc:routes values to per-interface subnet configuration
- Added unittest
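A hedged sketch of the sdc:routes mapping (entry field names assumed
from SmartOS's sdc:routes documentation):

    import json

    def routes_from_sdc(md):
        # translate sdc:routes entries into route dicts for the
        # matching interface's subnet configuration
        routes = []
        for r in json.loads(md.get('sdc:routes', '[]')):
            routes.append({'network': r['dst'], 'gateway': r['gateway']})
        return routes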
Co-authored-by: Mike Gerdts <mike.gerdts@joyent.com>
LP: #1763512
|
|
Network has not yet been configured in the init-local stage so the
openstack datasource will use dhcp-client to temporarily obtain an ipv4
address and query the metadata service at http://169.254.169.254 to get
network_data.json configuration. If present, the datasource will return
network_config version 1 config based on that network_data.json content.
Previously OpenStack datasource only setup dhcp on the fallback interface
so this represents a change in behavior to react to the full config
provided by openstack.
Also significant to OpenStack is the separation of a _crawl_data operation
from get_data(). crawl_data walks the available metadata services and
returns a dict of discovered content. get_data consumes the crawled_data,
caches it in the datasource and reacts to that data.
/run/cloud-init/instance-data.json now publishes the network_data.json or
ec2_metadata key if that data is present on any datasource.
The main reasons for the separation of crawl from get_data:
* Enable performance metrics of cloud-init's metadata crawls on each boot.
* Enable cloud-init modules and scripts to query and consume metadata
content which may have updated/changed after cloud-init's initial cache
during instance boot. (Think hotplug)
Also generalize common logic to base DataSource class/module:
* Move the common UNSET variable up into the base datasource module; fix
EC2, ConfigDrive, OpenStack and SmartOS to use that global.
* Drop get_url_settings from Ec2, CloudStack and OpenStack and generalize
DataSource.get_url_params(). Allow subclasses to override url_max_wait,
url_timeout and url_retries params.
* Rename get_network_metadata bool to perform_dhcp_setup as it designates
whether EphemeralDHCPv4 setup is required before crawling metadata.
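A sketch of the generalized handling (class and values illustrative):

    from cloudinit import sources

    class DataSourceExample(sources.DataSource):
        # subclasses now override only these; the base class's
        # get_url_params() merges them with any ds config overrides
        url_max_wait = 120   # total seconds to wait for the service
        url_timeout = 50     # per-request timeout
        url_retries = 10     # retries per request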
LP: #1749717
|
|
On OpenSuSE 42.3, we would get errors running
tests/unittests/test_handler/test_handler_chef.py
- test_myhttps_nonet raises an UnmockedError
No mocking was registered, and real connections are not allowed
- test_myhttps_net raises SSLError
("bad handshake: SysCallError(32, 'EPIPE')",)
This fixes the errors by just using http instead of https.
It also modifies the HttprettyTestCase to do the httpretty activate
and deactivate itself in setUp and tearDown, so we don't have to
decorate individual test_ methods. Also, we set
httpretty.HTTPretty.allow_net_connect = False
Test cases here should not reach out to a network resource.
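A sketch of the modified test case, using the stock httpretty API:

    import httpretty

    class HttprettyTestCase(CiTestCase):
        def setUp(self):
            super(HttprettyTestCase, self).setUp()
            httpretty.HTTPretty.allow_net_connect = False
            httpretty.reset()
            httpretty.enable()

        def tearDown(self):
            httpretty.disable()
            httpretty.reset()
            super(HttprettyTestCase, self).tearDown()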
LP: #1771659
|
|
The Azure data source provides a method to check whether a NTFS partition
on the ephemeral disk is safe for reformatting to ext4. The method checks
to see if there are customer data files on the disk. However, mounting
the partition fails on systems that do not have the capability of
mounting NTFS. Note that in this case, it is also very unlikely that the
NTFS partition would have been used by the system (since it can't mount
it). The only case would be where an update to the system removed the
capability to mount NTFS, the likelihood of which is also very small.
This change allows the reformatting of the ephemeral disk to ext4 on
systems where mounting NTFS is not supported.
|
|
The results of read_file_or_url on a file and on a url differed
in behavior.
str(UrlResponse) would return UrlResponse.contents.decode('utf-8')
while
str(FileResponse) would return str(FileResponse.contents)
The difference being "b'foo'" versus "foo".
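Concretely:

    contents = b'foo'
    str(contents)             # "b'foo'"  -- the FileResponse behavior
    contents.decode('utf-8')  # 'foo'     -- the UrlResponse behavior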
As part of the general goal of cleaning util, move read_file_or_url
into url_helper.
|
|
This change is for Azure VM Preprovisioning. A bug was found where, after
an azure VM reports ready the first time and while it is polling
indefinitely for the new ovf-env.xml from the Instance Metadata Service
(IMDS), a reboot causes another report ready signal to be sent to the
fabric, which deletes the reprovisioning data on the node.
A marker file is now used to fix this issue so that we will only send a
report ready signal to the fabric when no marker file is present. Then,
create a marker file so that when a reboot does occur, we check if a
marker file has been created and decide whether we would like to send the
report ready signal.
LP: #1765214
|
|
When images are deployed from template in a production environment
the artifacts of the provisioning stage (provisioningConfiguration.cfg)
that cloud-init referenced are cleaned up. However, when provisioned
in "debug" mode (internal to IBM) the artifacts are left.
This changes the 'is_ibm_provisioning' implementations in both
ds-identify and in the IBM datasource to identify the provisioning
stage more correctly. The change is to consider provisioning only
if the provisioning file existed and there was no log file, or
the log file was older than this boot.
LP: #1767166
|
|
cloud-init and mdata-get each have their own implementation of the
SmartOS metadata protocol. If cloud-init and other services that call
mdata-get are run concurrently, crosstalk on the serial port can cause
them both to become confused.
This change makes it so that cloud-init uses the same cooperative
locking scheme that's used by mdata-get, thus preventing cross-talk
between mdata-get and cloud-init.
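The exact mechanism isn't spelled out above; a plausible sketch,
assuming an advisory flock(2) on the serial device like mdata-get takes:

    import fcntl

    def lock_serial(fd):
        # block until no other metadata client (e.g. mdata-get)
        # holds the serial device
        fcntl.flock(fd, fcntl.LOCK_EX)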
For testing, a VM running on a SmartOS host and pyserial are required.
If the tests are run on a platform other than SmartOS, those that use a
real serial port are skipped. pyserial remains commented in
requirements.txt because most testers will not be running atop SmartOS.
LP: #1746605
|
|
There are three potential sources of the hostname, one of which is
documented in SmartOS's vmadm(1M) via the hostname property. That
property's value is retrieved via the sdc:hostname key. The other
two sources for the hostname are a hostname key in customer_metadata
and the VM's uuid (sdc:uuid). Of these three, the sdc:hostname value
is not used in a meaningful way by DataSourceSmartOS.
This fix changes the fallback mechanism when hostname is not
specified in customer_metadata. The order of precedence for setting
the hostname is now 1) hostname in customer_metadata,
2) sdc:hostname, then 3) sdc:uuid.
LP: #1765085
|
|
If customer_metadata has no keys, the KEYS request returns an empty
string. Callers of the list() method expect a list to be returned and
will give a stack trace if this expectation is not met.
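A minimal sketch of the fix (the method and request-helper shapes are
assumed):

    def list(self):
        # an empty KEYS reply must yield [], not ['']
        keys = self.request('KEYS')   # hypothetical request helper
        if not keys:
            return []
        return keys.split('\n')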
LP: #1763480
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
If the metadata service in the host is down while a guest that uses
DataSourceSmartOS is booting, the request from the guest falls into the
bit bucket. When the metadata service is eventually started, the guest
has no awareness of this and does not resend the request. This results in
cloud-init hanging forever with a guest reboot as the only recovery
option.
This fix updates the metadata protocol to implement the initialization
phase, just as is implemented by mdata-get and related utilities. The
initialization phase includes draining all pending data from the serial
port, writing an empty command and getting an expected error message in
reply. If the initialization phase times out, it is retried every five
seconds. Each timeout results in a warning message: "Timeout while
initializing metadata client. Is the host metadata service running?" By
default, warning messages are logged to the console, thus the reason for a
hung boot is readily apparent.
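A hedged sketch of the negotiation using pyserial (the expected error
banner text is assumed):

    def negotiate(ser):
        # ser is a serial.Serial; drain stale bytes, send an empty
        # command, and expect the service's error reply as proof of life
        ser.timeout = 0.1
        while ser.read(1):
            pass
        ser.timeout = 5
        ser.write(b'\n')
        if b'invalid command' not in ser.readline():
            raise IOError('metadata service not responding')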
LP: #1667735
|
|
This takes the same basic check that is in ds-identify. If the
DMI system manufacturer (aka sys_vendor) is not 'Hetzner', then exit
out of the datasource's get_data quickly.
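A sketch of the check, assuming cloud-init's DMI helper:

    from cloudinit import util

    def is_hetzner():
        # sys_vendor is exposed via DMI as 'system-manufacturer'
        return util.read_dmi_data('system-manufacturer') == 'Hetzner'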
|
|
This just got missed in the IBMCloud datasource addition.
Add it to the builtin list of datasources.
|
|
This adds a specific IBM Cloud datasource.
IBM Cloud is identified by:
a.) running on xen
b.) having either a LABEL=METADATA disk or a LABEL=config-2 disk with
UUID=9796-932E
The datasource contains its own config-drive reader that reads
only the currently supported portion of config-drive needed for
ibm cloud.
During the provisioning boot, cloud-init is disabled.
See the docstring in DataSourceIBMCloud.py for more information.
|
|
Reduce the timeout to 1 second, as IMDS responds within a handful
of milliseconds. Also get rid of max_retries to prevent exiting
the polling loop early due to an IMDS outage / upgrade.
Reduce Azure PreProvisioning HTTP timeouts during polling to
avoid waiting an extra minute.
LP: #1752977
|
|
OpenNebulaNetwork.gen_conf() was previously returning ENI format.
This is updated to return netplan/v2 config.
The changes here also add support for IPv6 configuration distributed
from OpenNebula and fix some issues with nameserver information.
|
|
The Hetzner Cloud metadata service is an AWS-style service available
over HTTP via the link local address 169.254.169.254.
https://hetzner.com/cloud
https://docs.hetzner.cloud/
|
|
LP: #1754495
|
|
The last set of changes to the GCE datasource broke reading of user-data
unless the user had base64 encoded their user-data and also set
user-data-encoding to 'base64'.
This fixes the issue.
LP: #1752711
|
|
Fix an issue in EC2 where the datasource.identity had not been
initialized before being used when restoring datasource from pickle.
This was exposed in the upgrade and reboot path.
LP: #1748354
|
|
This change will enable azure vms to report provisioning has completed
twice, first to tell the fabric it has completed then a second time to
enable customer settings. The datasource for the second provisioning is
the Instance Metadata Service (IMDS), and the VM will poll indefinitely for
the new ovf-env.xml from IMDS.
This branch introduces EphemeralDHCPv4 which encapsulates common logic
used by both DataSourceEc2 and DataSourceAzure for temporary DHCP
interactions without side-effects.
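A usage sketch of the helper (import path assumed):

    from cloudinit.net.dhcp import EphemeralDHCPv4

    with EphemeralDHCPv4() as lease:
        # networking exists only inside this block; the sandboxed
        # dhclient state is torn down on exit
        poll_imds(lease)   # hypothetical consumer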
LP: #1734991
|
|
Network configuration in OpenNebula would only work if the host correctly
guessed the names of the devices in the guest. OpenNebula provided data
in its context.sh like 'ETH0_NETWORK', but if the guest named devices
differently then results were not predictable. This would occur with
Predictable Network Interface Names. To address this,
newer versions of OpenNebula provide the mac address via ETH0_MAC.
This feature is present in 4.14 and documented officially in the 5.0 docs.
This provides support for reading the mac addresses from the context.sh.
It also fixes cases where context.sh provided a field (ETH0_NETWORK
or ETH0_MASK) with an empty string. Previously the empty string would
be used rather than falling back to the default.
LP: #1719157, #1716397, #1736750
|
|
The previous commit added a test that would attempt to create and use
/run/cloud-init/. This just modifies it to use a temp dir instead.
|
|
The behavior changes and improvements include:
- Only import keys into the default user that contain the name of the
default user ('ubuntu', or 'centos') or that contain 'cloudinit'.
- Use instance or project level keys based on GCE convention.
- Respect expiration time when keys are set.
Do not import expired keys.
- Support ssh-keys in project level metadata (the GCE default).
As part of this change, we also update the request header when talking
to the metadata server based on the documentation:
https://cloud.google.com/compute/docs/storing-retrieving-metadata#querying
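A hedged sketch of the expiration handling (the JSON comment format on
GCE keys and the timestamp layout are assumed here):

    import datetime
    import json

    def key_expired(comment):
        # GCE key comments may carry JSON metadata with an 'expireOn'
        # timestamp; skip keys whose expiry has passed
        meta = json.loads(comment)
        expire_on = datetime.datetime.strptime(
            meta['expireOn'], '%Y-%m-%dT%H:%M:%S+0000')
        return expire_on < datetime.datetime.utcnow()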
LP: #1670456, #1707033, #1707037, #1707039
|
|
New mkfs.vfat and fatlabel tools included in the dosfsutils package no
longer support creating vfat disks with lowercase labels. They silently
default to an all-uppercase label, e.g. CONFIG-2 instead of config-2. This
change makes cloud-init handle either upper or lower case.
LP: #1598783
|
|
This stores a hash of the OAuth tokens as an 'id' for the maas
datasource. Since new instances get new tokens created, and those tokens
are written by curtin into the datasource system config, this provides
a way to identify a new "instance" (install).
LP: #1712680
|
|
This fixes a traceback when attempting to bounce the network after
hostname resets.
In artful and bionic, the ifupdown package is no longer installed in
default cloud images. As such, Azure can't use those tools to bounce the
network to inform DDNS about hostname changes. This doesn't affect DDNS
updates, though, because systemd-networkd now watches hostname deltas
and, with its default SendHostname=True behavior over dhcp, sends all
hostname updates, which publishes DDNS for us.
LP: #1722668
|
|
The instance identity document is a better source for region information,
partly because region isn't actually in meta-data at all, only
availability-zone, which happens to be named similarly.
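A hedged sketch of the preference described (variable names and key
layout illustrative):

    # prefer region from the instance identity document; fall back to
    # trimming the zone letter from availability-zone ('us-east-1a')
    region = identity_doc.get('region')
    if not region:
        az = metadata['placement']['availability-zone']
        region = az[:-1]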
Reviewed-by: Ethan Faust <efaust@amazon.com>
Reviewed-by: Cyle Riggs <cyler@amazon.com>
Reviewed-by: Tom Kirchner <tjk@amazon.com>
Reviewed-by: Matt Nierzwicki <nierzwic@amazon.com>
[ajorgens@amazon.com: rebase onto 0.7.9]
[ajorgens@amazon.com: changes per merge proposal discussions]
|
|
Make sure that some temporary files used by the config drive tests get
cleaned up properly.
|
|
In the VMware customization workflow, we have some options for the user
to upload scripts for additional customization. Based on user request,
those custom scripts can be run either before regular customization or
after. For post-customization scripts, we decide whether to run the
scripts immediately after customization or after a system reboot.
|
|
The motivation for this is that
a.) 1.7.1 runs with python 3.6 (bionic)
b.) we want to run pylint on tests/ and tools for the same reasons
that we want to run it on cloudinit/
The changes are described below.
- Update tox.ini to invoke pylint v1.7.1.
- Modify .pylintrc generated-members to ignore mocked object members (m_.*)
- Replace "dangerous" params defaulting to {}
- Fix up cloud_tests use of platforms
- Cast some instance objects to dicts with dict()
- Handle python2.7 vs 3+ ConfigParser use of readfp (deprecated)
- Update use of assertEqual(<boolean>, value) to assert<Boolean>(value)
- Replace deprecated assertRegexp -> assertRegex
- Remove useless test-class calls to super class
- Assign class property accessors a result and use it
- Fix missing class member in CepkoResultTests
- Fix Cheetah test import
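For instance, the "dangerous" default-parameter fix looks like this:

    # before: the dict is created once at def time and shared across calls
    def merge_cfg(cfg={}):
        return cfg

    # after: a fresh dict per call
    def merge_cfg(cfg=None):
        cfg = {} if cfg is None else cfg
        return cfg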
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive the file is only readable by root.
Generally most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
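A minimal sketch of the formalized subclass contract (names per the
text; the body is illustrative):

    from cloudinit import sources

    class DataSourceExample(sources.DataSource):
        dsname = 'Example'   # required: keys datasource config lookup

        def _get_data(self):
            # required override; the base class raises
            # NotImplementedError otherwise
            self.metadata = {'instance-id': 'i-example'}
            return True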
|
|
There is a race condition where our sandboxed dhclient properly writes a
lease file but has not yet written a pid file. If the sandbox temporary
directory is torn down before the dhclient subprocess writes a pidfile
DataSourceEc2Local gets a traceback and the instance will fallback to
DataSourceEc2 in the init-network stage. This wastes boot cycles we'd
rather not spend.
Fix handling of sandboxed dhclient to wait for both pidfile and leasefile
before proceeding. If either file doesn't show up within 5 seconds, log a
warning and return empty lease results {}.
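A sketch of the wait (helper and variable names assumed):

    # wait up to 5 seconds for dhclient to write both files
    missing = util.wait_for_files([pid_file, lease_file], maxwait=5)
    if missing:
        LOG.warning('dhclient did not produce expected files: %s',
                    ', '.join(missing))
        return {}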
LP: #1735331
|
|
If user upgraded to new cloud-init and attempted to run 'cloud-init init'
without rebooting, cloud-init restores the datasource object from pickle.
The older version pickled datasource object had no value for
_network_config or fallback_nic. This caused the Ec2 datasource to attempt
to reconfigure networking with a None fallback_nic. The pickled object
also cached an older version of ec2 metadata which didn't contain network
information.
This branch does two things:
- Add a fallback_interface property to DatasourceEC2 to support reading the
old .fallback_nic attribute if it was set. New versions will
call net.find_fallback_nic() if there has not been one found.
- Re-crawl metadata if we are on Ec2 and don't have a 'network' key in
metadata
LP: #1732917
|
|
VPC instances have the option to specify local-only IPv4 addresses. Allow
Ec2Datasource to enable dhcp4 on instances even if local-ipv4s is
configured on an instance.
Also limit network_configuration to only the primary (fallback) nic.
LP: #1728152
|
|
Systems that used systemd-networkd's dhcp client would not be able to get
information on the Azure endpoint (placed in Option 245) or the CloudStack
server (in 'server_address').
The change here supports reading the lease files in /run/systemd/netif/leases.
The files declare that "This is private data. Do not parse.", but at this
point we do not have another option.
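A hedged sketch of reading such a lease file (format assumed to be
simple KEY=value lines):

    def parse_netif_lease(path):
        # the files warn "This is private data. Do not parse." but we
        # have no alternative; keys of interest include the Azure
        # Option 245 value and the CloudStack server address
        lease = {}
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith('#') and '=' in line:
                    key, _, val = line.partition('=')
                    lease[key] = val
        return lease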
LP: #1718029
|