Age | Commit message | Author |
|
This just separates the reading of dmi values into its own file.
Some things of note:
* left the import of util in dmi.py only for 'is_container'.
It'd be good if is_container were not in util.
* moved just the use of 'util.is_x86' to dmi.py
* open() is used directly rather than load_file.
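As a minimal, hedged sketch of the approach (not the actual dmi.py code; the helper name and error handling are illustrative), reading a DMI value directly from sysfs with open() looks like this:
```
import os

def read_dmi_sysfs(key):
    """Illustrative helper: return the DMI value for 'key', or None."""
    path = os.path.join("/sys/class/dmi/id", key)
    if not os.path.isfile(path):
        # e.g. inside a container, or on hardware without DMI tables
        return None
    try:
        with open(path, "rb") as fh:
            return fh.read().strip().decode("utf-8", errors="replace")
    except PermissionError:
        # some keys (product_serial, product_uuid) are root-only
        return None

# read_dmi_sysfs("sys_vendor") -> e.g. "QEMU"
```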
|
|
|
|
* v2 of the API is now the default, with fallback to v1.
* Refactored the Oracle datasource to fetch version, instance, and vnic metadata simultaneously.
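A hedged sketch of the fallback flow (URLs, the header and error handling are simplified assumptions based on Oracle's IMDS documentation, not the datasource code):
```
import urllib.error
import urllib.request

IMDS_URL = "http://169.254.169.254/opc/v{ver}/instance/"

def fetch_instance_metadata():
    # Try v2 first (documented as requiring the Bearer header), then v1.
    for ver, headers in ((2, {"Authorization": "Bearer Oracle"}), (1, {})):
        req = urllib.request.Request(IMDS_URL.format(ver=ver), headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return ver, resp.read().decode("utf-8")
        except urllib.error.URLError:
            continue  # fall back to the older API version
    raise RuntimeError("no IMDS version responded")
```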
|
|
The /opc/v1/ metadata endpoints[0] are universally available in Oracle
Cloud Infrastructure and the OpenStack endpoints are considered
deprecated, so we can refactor the data source to use the OPC endpoints
exclusively. This simplifies the datasource code substantially, and
enables use of OPC-specific attributes in future.
[0] https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/gettingmetadata.htm
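A hedged sketch of what "OPC endpoints exclusively" means in practice (the endpoint paths come from the linked documentation; the fetch helper is illustrative):
```
import json
import urllib.request

IMDS = "http://169.254.169.254/opc/v1/"

def fetch_json(path):
    with urllib.request.urlopen(IMDS + path, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

instance = fetch_json("instance/")  # shape, region, instance metadata
vnics = fetch_json("vnics/")        # per-VNIC addressing for network config
```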
|
|
|
|
These libraries provide backports of Python 3's stdlib components to Python 2. As we only support Python 3, we can simply use the stdlib now. This pull request does the following:
* removes some unneeded compatibility code for the old spelling of `assertRaisesRegex`
* replaces invocations of the Python 2-only `assertItemsEqual` with its new name, `assertCountEqual`
* replaces all usage of `unittest2` with `unittest`
* replaces all usage of `contextlib2` with `contextlib`
* drops `unittest2` and `contextlib2` from requirements files and tox.ini
It also rewrites some `test_azure` helpers to use bare asserts. We were seeing a strange error in xenial builds of this branch which appeared to stem from the AssertionError that pytest produces being _different_ from the standard AssertionError. This meant that the modified helpers weren't behaving correctly, because they weren't catching AssertionErrors as one would expect. (I believe this is related, in some way, to https://github.com/pytest-dev/pytest/issues/645, but the only version of pytest where we're affected is so far in the past that it's not worth pursuing any further, as we have a workaround.)
|
|
We currently have a test system where get_interfaces_by_mac raises an
exception, which is causing these tests to fail as they aren't mocking
get_interfaces_by_mac out.
LP: #1873910
|
|
|
|
* test_oracle: sort imports
* DataSourceOracle: sort imports
|
|
These classes don't use `self.logs` anywhere in their body, so we can
remove the `with_logs = True` setting from them.
These instances were found using astpath[0], with the following
invocation:
astpath "//Name[@id='with_logs' and not(ancestor::ClassDef//Attribute[@attr='logs'])]"
[0] https://github.com/hchasestevens/astpath
|
|
Cloud-config userdata provided as jinja templates is now distro,
platform and merged cloud config aware. The cloud-init query command
will also surface this config data.
Now users can selectively render portions of cloud-config based on:
* distro name, version, release
* python version
* merged cloud config values
* machine platform
* kernel
To support template handling of this config, add new top-level
keys to /run/cloud-init/instance-data.json.
The new 'merged_cfg' key represents merged cloud config from
/etc/cloud/cloud.cfg and /etc/cloud/cloud.cfg.d/*.
The new 'sys_info' key captures distro and platform
info from cloudinit.util.system_info.
Cloud config userdata templates can render conditional content
based on these additional environmental checks such as the following
simple example:
```
## template: jinja
#cloud-config
runcmd:
{% if distro == 'opensuse' %}
- sh /custom-setup-sles
{% elif distro == 'centos' %}
- sh /custom-setup-centos
{% elif distro == 'debian' %}
- sh /custom-setup-debian
{% endif %}
```
To see all values: sudo cloud-init query --all
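Individual keys can also be queried or rendered directly; illustrative invocations (key names follow the standardized v1 keys listed below):
cloud-init query v1.cloud_name
cloud-init query --format '{{ v1.distro }} {{ v1.distro_version }}'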
Any keys added to the standardized v1 keys are guaranteed not to
change or be dropped in future releases of cloud-init. 'v1' keys will be
retained for backward compatibility even if a new standardized 'v2' set
of keys is introduced.
The following standardized v1 keys are added:
* distro, distro_release, distro_version, kernel_version, machine,
python_version, system_platform, variant
LP: #1865969
|
|
When cloud-init persisted instance metadata to instance-data.json,
it failed to redact sensitive values. Currently, the only sensitive
key 'security-credentials' is omitted as cloud-init does not fetch
this value from IMDS.
Fix this by properly redacting the content from the public
instance-metadata.json file while retaining the value in the root-only
instance-data-sensitive.json file.
LP: #1865947
|
|
* cloudinit: replace "import mock" with "from unittest import mock"
* test-requirements.txt: drop mock
Co-authored-by: Chad Smith <chad.smith@canonical.com>
|
|
|
|
Since python 2.7 doesn't handle UnicodeDecodeErrors with the default
handler
LP: #1801364
|
|
Add support for detecting the netfailover[1] device 3-tuple in the
networking layer. In the Oracle datasource, ensure that if a network
config (either the fallback config or a provided config) includes a
netfailover master, any MAC address value is removed from it, as a
configured MAC can break under 3-netdev because the other two devices
share the same MAC.
1. https://www.kernel.org/doc/html/latest/networking/net_failover.html
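A hedged sketch of the MAC-stripping step (the helper name, the passed-in callable and the config shape are illustrative assumptions, not the datasource code):
```
def strip_netfailover_macs(netcfg_v1, is_netfail_master):
    """Drop MAC matches from physical entries that are netfailover masters.

    Under 3-netdev the primary and standby slave devices share the master's
    MAC, so matching or binding by MAC would be ambiguous.
    """
    for entry in netcfg_v1.get("config", []):
        if entry.get("type") != "physical":
            continue
        name = entry.get("name")
        if name and is_netfail_master(name):
            entry.pop("mac_address", None)
    return netcfg_v1
```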
|
|
When rendering secondary vnic configuration from IMDS, emit
configuration for the IP and MTU values only. Add support to mutate
either a v1 or a v2 network_config input.
|
|
The Oracle platform provides networking configuration from two sources:
* the primary interface configuration comes from the initramfs, because
all Oracle instances boot from iSCSI
* secondary interface configuration comes from an IMDS accessed over
HTTP
As we need to combine these two sources of network configuration, the
default "prefer initramfs config over data source config" behaviour
isn't appropriate; we would never get the IMDS interfaces via that
route. Instead, the Oracle data source has code to combine these two
sources, so we prefer its network configuration over the initramfs
configuration.
(This is not appropriate default behaviour, because _in general_ data
sources won't know how to merge initramfs-provided configuration into
their provided configuration, so switching this order for all data
sources would result in initramfs configuration being discarded on any
data source that implements network_config.)
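A hedged sketch of how that preference can be expressed (attribute and source names here are illustrative, not necessarily the exact cloud-init API):
```
class DataSourceOracleSketch:
    # Highest priority first: explicit cmdline config still wins, but the
    # datasource's merged config is preferred over initramfs config.
    network_config_sources = ("cmdline", "ds", "initramfs")
```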
|
|
Oracle Cloud Infrastructure's Instance Metadata Service provides network
configuration information for non-primary NICs. This commit introduces
support, on Virtual Machines[0], for fetching that network metadata,
converting it to v1 network-config[1] and combining it into the network
configuration generated for the primary interface.
By default, this behaviour is not enabled. Setting
`configure_secondary_nics` in the Oracle datasource configuration enables it:
datasource:
  Oracle:
    configure_secondary_nics: true
Failures to fetch and generate secondary NIC configuration will log a
warning, but otherwise will not affect boot.
[0] The expected use of the IMDS-provided network configuration is
substantially different on Bare Metal Machines, so support for that
will be addressed separately.
[1] This is v1 config, because cloudinit.net.cmdline generates v1 config
and we need to integrate the secondary NICs into that configuration.
|
|
Previously "cmdline" network configuration could be either
user-specified network-config=... configuration data, or
initramfs-provided configuration data. Before data sources could modify
the order in which network config sources were considered, this
conflation didn't matter (and, indeed, in the default data source
configuration it will continue to not matter).
However, it _is_ desirable for a data source to be able to specify that
its network configuration should be preferred over the
initramfs-provided network configuration but still allow explicit
network-config=... configuration passed to the kernel cmdline to
continue to override both of those sources.
(This also modifies the Oracle data source to use read_initramfs_config
directly, which is effectively what it was using
read_kernel_cmdline_config for previously.)
|
|
Moving update_events from a class attribute to an instance attribute
means that it doesn't exist on DataSource objects that are unpickled,
causing tracebacks on cloud-init upgrade.
As this change is only required for cloud-init installations which don't
utilise ds-identify, we're backing it out to be reintroduced once the
upgrade path bug has been addressed.
This reverts commit f2fd6eac4407e60d0e98826ab03847dda4cde138.
|
|
Currently, DataSourceAzure updates self.update_events in __init__. As
update_events is a class attribute on DataSource, this updates it for
all instances of classes derived from DataSource including those for
other clouds. This means that if DataSourceAzure is even instantiated,
its behaviour is applied to whichever data source ends up being used for
boot.
To address this, update_events is moved from a class attribute to an
instance attribute (that is therefore populated at instantiation time).
This retains the defaults for all DataSource sub-class instances, but
avoids them being able to mutate the state in instances of other
DataSource sub-classes.
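The underlying Python behaviour, as a minimal self-contained illustration (not cloud-init code):
```
class Base:
    shared_events = {"network": set()}             # class attribute: shared

    def __init__(self):
        self.instance_events = {"network": set()}  # instance attribute: per-object

class Azure(Base):
    def __init__(self):
        super().__init__()
        self.shared_events["network"].add("boot")    # leaks into Base and siblings
        self.instance_events["network"].add("boot")  # affects only this instance

azure, other = Azure(), Base()
assert "boot" in other.shared_events["network"]        # mutated for everyone
assert "boot" not in other.instance_events["network"]  # isolated per instance
```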
update_events is only ever referenced on an instance of DataSource (or a
sub-class); no code relies on it being a class attribute. (In fact,
it's only used within methods on DataSource or its sub-classes, so it
doesn't even _need_ to remain public, though I think it's appropriate
for it to be public.)
DataSourceScaleway is also updated to move update_events from a
class attribute to an instance attribute, as the class attribute would
now be masked by the DataSource instance attribute.
LP: #1819913
|
|
Add a quick cloud lookup utility in order to more easily determine
the cloud on which an instance is running.
The utility parses standardized attributes from
/run/cloud-init/instance-data.json to print the canonical cloud-id
for the instance. It uses known region maps if necessary to determine
on which specific cloud the instance is running.
Examples:
aws, aws-gov, aws-china, rackspace, azure-china, lxd, openstack, unknown
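For example (illustrative output), running the utility on an EC2 instance:
cloud-id
prints 'aws', while an unrecognized platform prints 'unknown'.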
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance and returns a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
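A hedged sketch of the platform/subplatform pattern described above (an illustrative datasource, not real cloud-init code):
```
class DataSourceExample:
    dsname = "Example"  # 'platform' defaults to the lowercased dsname

    def _get_subplatform(self):
        # slug plus the detail of where metadata actually came from, e.g.
        # 'metadata (http://169.254.169.254)' or 'seed-dir (/var/lib/...)'
        return "metadata (%s)" % "http://169.254.169.254"
```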
|
|
|
|
Cloud-init caches any cloud metadata crawled during boot in the file
/run/cloud-init/instance-data.json. Cloud-init also standardizes some of
that metadata across all clouds. The command 'cloud-init query' surfaces a
simple CLI to query or format any cached instance metadata so that scripts
or end-users do not have to write tools to crawl metadata themselves.
Since 'cloud-init query' is runnable by non-root users, redact any
sensitive data from instance-data.json and provide a root-readable
unredacted instance-data-sensitive.json. Datasources can now define a
sensitive_metadata_keys tuple which will redact any matching keys
which could contain passwords or credentials from instance-data.json.
Also add the following standardized 'v1' instance-data.json keys:
- user_data: The base64encoded user-data provided at instance launch
- vendor_data: Any vendor_data provided to the instance at launch
- underscore_delimited versions of existing hyphenated keys:
instance_id, local_hostname, availability_zone, cloud_name
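A hedged sketch of the new redaction hook (key names below are illustrative; the actual defaults live on the datasource classes):
```
class DataSourceExample:
    # Values under these keys are redacted in the world-readable
    # instance-data.json and kept only in instance-data-sensitive.json.
    sensitive_metadata_keys = ("security-credentials", "password")
```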
|
|
Allow users to provide '## template: jinja' as the first line of their
#cloud-config or custom script user-data parts. When this header exists,
the cloud-config or script will be rendered as a jinja template.
All instance metadata keys and values present in
/run/cloud-init/instance-data.json will be available as jinja variables
for the template. This means any cloud-config module or script can
reference any standardized instance data in templates and scripts.
Additionally, any standardized instance-data.json keys scoped below a
'<v#>' key will be promoted as a top-level key for ease of reference in
templates. This means that '{{ local_hostname }}' is the same as using the
latest '{{ v#.local_hostname }}'.
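For example, a custom script user-data part can use the same header; this is an illustrative sketch (key availability depends on the cloud):
```
## template: jinja
#!/bin/sh
# Rendered as a jinja template because of the first-line header.
echo "host={{ local_hostname }} cloud={{ v1.cloud_name }}" > /run/jinja-example
```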
Since instance-data is written to /run/cloud-init/instance-data.json, make
sure it is persisted across reboots when the cached datasource object is
reloaded.
LP: #1791781
|
|
This adds an Oracle-specific datasource that functions with OCI.
It is a simplified version of the OpenStack metadata server
with support for vendor-data.
It does not support the OCI-C (classic) platform.
Also, BrokenMetadata is moved to the common 'sources' module,
as this was the third occurrence of that class.
|
|
The comment in update_metadata() that explains how a datasource should
enable network reconfig on every boot presumes that
EventType.BOOT_NEW_INSTANCE is a subset of EventType.BOOT. That's not
the case, and as such a datasource that needs to configure networking
when it is a new instance and every boot needs to include both event
types.
To make the situation above easier to debug, update_metadata() now
logs when it returns false.
So that datasources do not need to test for membership before appending
to update_events['network'], it is changed from a list to a set.
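A hedged, self-contained illustration of that subscription (the real EventType lives in cloud-init's event module; this stub mirrors only the two names used above):
```
from enum import Enum

class EventType(Enum):
    BOOT = "boot"
    BOOT_NEW_INSTANCE = "boot-new-instance"

# A datasource that must re-render network config on every boot subscribes
# to both events; BOOT_NEW_INSTANCE alone only covers the first boot.
update_events = {"network": {EventType.BOOT, EventType.BOOT_NEW_INSTANCE}}
```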
test_update_metadata_only_acts_on_supported_update_events is updated
to allow datasources to support EventType.BOOT.
Author: Mike Gerdts <mike.gerdts@joyent.com>
|
|
Very basic type definitions are now defined to distinguish 'boot'
events from 'new instance (first boot)'. Event types will now be handed
to a datasource.update_metadata method which can determine whether
to refresh its metadata and re-render configuration based on that
source event.
A datasource can 'subscribe' to an event by setting up the update_events
attribute on the datasource class, which describes what config scope is
updated by a list of matching events. By default, datasources will have
the following update_events: {'network': [EventType.BOOT_NEW_INSTANCE]}
This setting says the datasource will re-write network configuration only
on first boot of a new instance or when the instance id changes.
New methods are now present on the datasource:
- clear_cached_attrs: Resets cached datasource attributes to values
listed in datasource.cached_attr_defaults. This is performed prior to
processing fresh metadata to avoid keeping old/invalid
cached data around.
- update_metadata: accepts source_event_types to determine if the
metadata should be crawled again and processed
|
|
Network has not yet been configured in the init-local stage so the
openstack datasource will use dhcp-client to temporarily obtain an ipv4
address and query the metadata service at http://169.254.169.254 to get
network_data.json configuration. If present, the datasource will return
network_config version 1 config based on that network_data.json content.
Previously the OpenStack datasource only set up dhcp on the fallback
interface, so this represents a change in behavior: cloud-init now reacts
to the full config provided by openstack.
Also significant to OpenStack is the separation of a _crawl_data operation
from get_data(). crawl_data walks the available metadata services and
returns a dict of discovered content. get_data consumes the crawled_data,
caches it in the datasource and reacts to that data.
/run/cloud-init/instance-data.json now publishes the network_data.json or
ec2_metadata keys if that data is present on any datasource.
The main reasons for the separation of crawl from get_data:
* Enable performance metrics of cloud-init's metadata crawls on each boot.
* Enable cloud-init modules and scripts to query and consume metadata
content which may have updated/changed after cloud-init's initial cache
during instance boot. (Think hotplug)
Also generalize common logic to base DataSource class/module:
* Move a common UNSET variable up into the base datasource module; fix EC2,
ConfigDrive, OpenStack and SmartOS to use the global.
* Drop get_url_settings from Ec2, CloudStack and OpenStack and generalize
DataSource.get_url_params(). Allow subclasses to override the url_max_wait,
url_timeout and url_retries params (see the sketch below).
* Rename get_network_metadata bool to perform_dhcp_setup as it designates
whether EphemeralDHCPv4 setup is required before crawling metadata.
LP: #1749717
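A hedged sketch of that override pattern (values are illustrative):
```
class DataSourceExample:
    url_max_wait = 120  # total seconds to keep retrying the metadata URL
    url_timeout = 10    # per-request timeout in seconds
    url_retries = 5     # attempts per candidate URL
```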
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
DataSource.get_hostname call signature changed to allow for metadata_only
parameter. The metadata_only=True parameter is passed to get_hostname
during init-local stage in order to set the system hostname if present in
metadata prior to initial network bring up.
Fix subclasses of DataSource which have overridden get_hostname to allow
for metadata_only param.
LP: #1757176
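A hedged sketch of the adjusted override (parameters other than metadata_only are assumptions):
```
class DataSourceExample:
    metadata = {"local-hostname": "example-host"}

    def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
        hostname = self.metadata.get("local-hostname")
        if hostname or metadata_only:
            return hostname  # None if metadata_only and metadata has nothing
        return "distro-fallback"  # placeholder for the non-metadata path
```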
|
|
When instance meta-data provides hostname information, run
cc_set_hostname in the init-local or init-net stage before network
comes up.
Prevent an initial DHCP request which leaks the stock cloud-image default
hostname before the meta-data provided hostname is processed.
A leaked cloud-image hostname adversely affects Dynamic DNS which
would reallocate 'ubuntu' hostname in DNS to every instance brought up by
cloud-init. These instances would only update DNS to the cloud-init
configured hostname upon DHCP lease renewal.
This branch extends the get_hostname methods in datasource, cloud and
util to limit results to metadata_only to avoid extra cost of querying
the distro for hostname information if metadata does not provide that
information.
LP: #1746455
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive, the file is only readable by root.
Generally most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
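A hedged sketch of the subclass contract described above (illustrative, not a real datasource):
```
class DataSourceExample:
    dsname = "Example"  # locates config under 'datasource: {Example: ...}'

    def _get_data(self):
        # Crawl the platform's metadata here; return True on success.
        self.metadata = {"instance-id": "i-example"}
        self.userdata_raw = "#cloud-config\n{}"
        return True
```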
|