Age | Commit message | Author |
|
Transport functions (transport_iso9660 and transport_vmware_guestinfo)
would return a tuple of 3 values, but only the first was ever used
outside of tests. The other values (device and filename) were simply
ignored.
This simplifies the transport functions to return the content
(as a string) or None, indicating that the transport was not found.
|
|
This adds support for reading OVF information over the
'com.vmware.guestInfo' transport. The current implementation requires
that vmware-rpctool be installed on the system.
LP: #1807466
|
|
It is possible to have a metric value in a per-subnet route.
This is currently missing in all renderers. Update each
renderer to emit the correct metric value from the config.
LP: #1805871
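As a sketch (illustrative names and addresses), the kind of per-subnet
route this covers in v1 network-config looks like:
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          address: 192.168.23.14/24
          routes:
            - network: 10.0.0.0
              netmask: 255.0.0.0
              gateway: 192.168.23.1
              metric: 100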
|
|
Add 'append: true' to write_files entries to append 'content' to the file
specified by the 'path' key. This modifies the file open mode to append.
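For example (illustrative path and content):
#cloud-config
write_files:
  - path: /etc/motd
    content: |
      line appended by cloud-init
    append: true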
|
|
I noticed a bug in dhclient_hook on the 'down' event: it used the 'is'
operator rather than '==' (if self.net_action is 'down').
This refactors/simplifies the code a bit for easier testing and adds
tests. The reason for the rename of 'action' to 'event' is to just
be internally consistent. The word and Namespace 'action' is used
by cloud-init main, so it was not really usable here.
Also adds a main which can easily be debugged with:
CI_DHCP_HOOK_DATA_D=./my.d python -m cloudinit.dhclient_hook up eth0
|
|
NoCloud's 'network-config' file was originally expected to contain
network configuration without the top level 'network' key. This was
because the file was named 'network-config' so specifying 'network'
seemed redundant.
However, Juju is currently providing a top-level 'network' config when
it tries to disable networking ({"network": {"config": "disabled"}}).
Other users have also been surprised/confused by the fact that
a network config in /etc/cloud/cloud.cfg.d/network.cfg differed from
what was expected in 'network-config'.
LP: #1798117
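With this change, the 'network-config' file may carry the top-level
'network' key; a minimal illustrative example:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true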
|
|
Move routes under the nic's subnet rather than use top-level
("global") route config ensuring all net renderers will provide the
configured route.
Also updated cloudinit/cmd/devel/net_convert.py:
- Add input type 'vmware-imc' for OVF customization config files
- Fix bug when output-type was netplan which invoked netplan
generate/apply and attempted to write to
/etc/netplan/50-cloud-init.yaml instead of joining with the
output directory.
LP: #1806103
|
|
Replace Azure pre-provision polling on IMDS with a blocking call
which watches for netlink link state change messages. The media
change event happens when a pre-provisioned VM has been activated
and is connected to the user's virtual network, and cloud-init can
then resume operation to complete image instantiation.
|
|
The order of parameters to test_handle_zfs_root did not match
the order of the mocks applied.
Thanks to Jason Zions for pointing this out.
|
|
When deploying an OVA, at least some versions of VMware
attach a cdrom with an ISO9660 filesystem label of 'OVF ENV'.
This was seen on VMware vCenter Server 6.0.0, build 2776510.
Because that label contains a space, the content of the
DI_ISO9660_DEVS variable was changed to be comma delimited rather
than space delimited.
|
|
Upon URL timeout, _poll_imds is expected to re-dhcp to get updated
IP configuration. We don't want to retry indefinitely because the
instance's current IP configuration is likely invalid.
LP: #1803598
|
|
In some environments, like FreeBSD, gpart can return the device basename
instead of the full path. If this discovered devpath does not exist and
is missing the '/dev/' prefix, add that prefix in an attempt to find the
device.
|
|
There is an infrequent race when the booting instance can hit the IMDS
service before it is fully available. This results in a
requests.ConnectTimeout being raised.
Azure's retry_callback logic now retries on either 404s or Timeouts.
LP: #1800223
|
|
If Azure detects an ntfs filesystem type during a mount attempt, it should
still report the resource device as reformattable. There are slight
differences in error message format on Red Hat and SUSE. This patch
simplifies the expected error match to work on both distributions.
LP: #1799338
|
|
In commit 9073951 the Azure datasource tried to leverage stale DHCP
information obtained from the EphemeralDHCPv4 context manager to report
updated provisioning status to the fabric earlier in the boot process.
Unfortunately, the stale ephemeral network configuration had already been
torn down in preparation for bringing up the IMDS network config, so the
report attempt failed with a timeout.
This branch introduces obtain_lease and clean_network public methods on
EphemeralDHCPv4 to allow for setup and teardown of ephemeral network
configuration without using a context manager. Azure datasource now uses
this to persist ephemeral network configuration across multiple contexts
during provisioning to avoid multiple DHCP roundtrips.
|
|
When reusing a preprovisioned VM, report ready to Azure fabric as soon as
we get the reprovision data and the goal state so that we are not delayed
by the cloud-init stage switch, saving 2-3 seconds. Also reduce logging
when polling IMDS for reprovision data.
LP: #1799594
|
|
Emit a permissions error instead of "Missing instance-data.json" when a
non-root user doesn't have read permission on
/run/cloud-init/instance-data.json
|
|
Azure generates network configuration from the IMDS service and removes
any preexisting hotplug network scripts which exist in Azure cloud images.
Add a datasource configuration option which allows for writing a default
network configuration which sets up dhcp on eth0 and leaves the hotplug
handling to the cloud-image scripts.
To disable network-config from Azure IMDS, add the following to
/etc/cloud/cloud.cfg.d/99-azure-no-imds-network.cfg:
datasource:
  Azure:
    apply_network_config: False
LP: #1798424
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance, and returning a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
|
|
Previously we explicitly excluded mac address '00:00:00:00:00:00'.
But then some nics (tunl0 and sit0) ended up having a mac address like
'00:00:00:00'.
The change here just ignores all 00[:00[:00...]].
LP: #1796917
|
|
Relax the expectation on the path to lxc and lxd. The deb packages still
install them in /usr/bin/, but requiring that exact path is overly pedantic.
Add a 'lxd waitready' call (present since lxd 0.5) to wait until lxd
is ready before operating on it.
|
|
OpenStack ironic references Infiniband interfaces via a 6 byte 'MAC
address' formed from bytes 13-15 and 18-20 of the interface's hardware
address. This address is used as the ethernet_mac_address of Infiniband
links in network_data.json in configdrives generated by OpenStack nova.
We can use this address to map links in network_data.json to their
corresponding interface names.
When generating interface configuration files, we need to use the
interface's full hardware address as the HWADDR, rather than the 6 byte
MAC address provided by network_data.json.
This change allows IB interfaces to be referenced in this dual mode - by
MAC address and hardware address, depending on the context.
Support TYPE=InfiniBand for sysconfig configuration of IB interfaces.
|
|
Any distro that has a '_write_network_config' method should no
longer have its _write_network called at all. So let's drop
that code and raise a RuntimeError any time it is reached.
Replace the one caller of 'apply_network' (legacy openstack path)
with a call to apply_network_config after converting the ENI to
network config.
|
|
If a DataSource provides a network configuration in version 2 and runs
on a distro which does not have a network renderer class in use, then
the conversion of V2 to eni results in static ip configurations with
subnet prefix-length (192.168.23.1/24) rather than explicit netmask
value.
When sending such a config to net_util.translate_network the resulting
dictionary is missing the 'netmask' key for static configured addresses
breaking network configurations on multiple distributions.
This patch detects static ip configurations using prefix-length and
converts the format into the previous 'address' and 'netmask' parts
to keep compatibility for these distributions until they move to
the v2 network configuration.
LP: #1792454
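A minimal sketch of the kind of v2 static config affected (illustrative
addresses); the prefix-length form below is what now gets converted back
into separate 'address' and 'netmask' values on the eni path:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.23.14/24
      gateway4: 192.168.23.1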
|
|
At present the host network settings bleed into the test environment
causing the test test_handler_apt_source_v3 to fail if the host has a
domain setting other than localdomain.
LP: #1792799
|
|
Fix a bug where setting of mac address on a bond device was
ignored when provided in OpenStack network_config.json.
LP: #1682064
|
|
Mark as supported for reading some newer versions of OpenStack metadata:
2016-06-30 : Newton one
2016-10-06 : Newton two
2017-02-22 : Ocata
2018-08-27 : Rocky
|
|
Cloud-init was reading a list of versions from the OpenStack metadata
service (http://169.254.169.254/openstack/) and attempting to select the
newest known supported version. The problem was that the list
of versions was not being decoded, so we were comparing a list of
bytes (found versions) to a list of strings (known versions).
LP: #1792157
|
|
Allow users to provide '## template: jinja' as the first line of their
#cloud-config or custom script user-data parts. When this header exists,
the cloud-config or script will be rendered as a jinja template.
All instance metadata keys and values present in
/run/cloud-init/instance-data.json will be available as jinja variables
for the template. This means any cloud-config module or script can
reference any standardized instance data in templates and scripts.
Additionally, any standardized instance-data.json keys scoped below a
'<v#>' key will be promoted as a top-level key for ease of reference in
templates. This means that '{{ local_hostname }}' is the same as using the
latest '{{ v#.local_hostname }}'.
Since instance-data is written to /run/cloud-init/instance-data.json, make
sure it is persisted across reboots when the cached datasource object is
reloaded.
LP: #1791781
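A minimal illustrative sketch of such a part (assuming the standard v1
keys are present in instance-data.json):
## template: jinja
#cloud-config
runcmd:
  - echo 'cloud {{ v1.cloud_name }} host {{ local_hostname }}' > /tmp/instance-info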
|
|
Cloud config can now disable ssh access to non-root users.
When defining the 'users' list in cloud-configuration, a boolean
'ssh_redirect_user: true' can be provided to disable ssh logins for
that user. Any ssh 'public-keys' defined in cloud meta-data will be added
as disabled entries in .ssh/authorized_keys. Any attempts to ssh as this user
using acceptable ssh keys will be presented with a message like the
following:
Please login as the user "ubuntu" rather than the user "youruser".
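For example (illustrative; assumes the image's default user is "ubuntu"):
#cloud-config
users:
  - default
  - name: youruser
    ssh_redirect_user: true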
|
|
In many cases, cloud-init uses 'util.subp' to run a subprocess.
This is not really desirable in our unit tests as it makes the tests
dependent upon the existence of those utilities.
The change here is to modify the base test case class (CiTestCase) to
raise an exception any time subp is called. Then, fix all callers.
For cases where subp is necessary or actually desired, we can use it via
a.) the context manager CiTestCase.allow_subp(value)
b.) the class-level attribute self.allowed_subp = value
In both cases the value is a list of acceptable executable names that
will be called (essentially argv[0]).
Some cleanups in AltCloud were done as the code was being updated.
|
|
Multiple distros use sysconfig format but have different content
and paths to certain files. Update distros to specify these
template paths in their renderer_configs dictionary.
|
|
The issue is that when customizing a VM with a static IPv4 address and no
gateway, the code would still extend the route list, looping over a gateways
list which is None. This fix makes sure that when no gateway is present,
the route list is not extended.
LP: #1766538
|
|
Linux guests can provide information to Hyper-V hosts via KVP.
KVP allows the guests to provide any string key-value-pairs back to the
host's registry. On linux, kvp communication pools are presented as pool
files in /var/lib/hyperv/.kvp_pool_#.
The following reporting configuration can enable this kvp reporting in
addition to default logging if the pool files exist:
reporting:
  logging:
    type: log
  telemetry:
    type: hyperv
|
|
This adds an Oracle-specific datasource that functions with OCI.
It is a simplified version of the OpenStack metadata server
with support for vendor-data.
It does not support the OCI-C (classic) platform.
Also move BrokenMetadata to the common 'sources' module,
as this was the third occurrence of that class.
|
|
Azure datasource now queries IMDS metadata service for network
configuration at link local address
http://169.254.169.254/metadata/instance?api-version=2017-12-01. The
azure metadata service presents a list of macs and allocated ip addresses
associated with this instance. Azure will now also regenerate network
configuration on every boot because it subscribes to EventType.BOOT
maintenance events as well as the 'first boot'
EventType.BOOT_NEW_INSTANCE.
For testing, add an 'azure-imds' --kind to the cloud-init devel net_convert
tool for debugging IMDS metadata.
Also refactor _get_data into 3 discrete methods:
- is_platform_viable: check quickly whether the datasource is
potentially compatible with the platform on which it is running
- crawl_metadata: walk all potential metadata candidates, returning a
structured dict of all metadata and userdata. Raise InvalidMetaData on
error.
- _get_data: call crawl_metadata and process results or error. Cache
instance data on class attributes: metadata, userdata_raw etc.
|
|
DEP_NETWORK is removed since the network_config must
run at each boot. The new EventType.BOOT event is used
for that.
Network is brought up early to fetch the metadata, which
is required to configure the network (ipv4 and/or v6).
Adds unit tests for the above and fixes test_common for the
LOCAL and NETWORK sets.
|
|
When converting network config v1 to netplan, we were not correctly
rendering the 'macaddress' key on a bond. Note that the difference
in spelling between v1 'mac_address' and v2 'macaddress' is intentional.
Also fixed here is rendering of the macaddress for bridges.
LP: #1784699
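A v1 bond sketch that exercises this path (illustrative values); its
rendered netplan should now carry the v2 'macaddress' key:
network:
  version: 1
  config:
    - type: bond
      name: bond0
      mac_address: "aa:bb:cc:dd:ee:ff"
      bond_interfaces: [eth0, eth1]
      params:
        bond-mode: active-backup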
|
|
Move the tools/net-convert.py to be exposed as part of 'cloud-init devel'
subcommands.
It can now be called like:
$ cloud-init devel net-convert
Or, if you have just checked out the source (and have no CLI executable):
$ python3 -m cloudinit.cmd.devel.net_convert
or
$ python3 -m cloudinit.cmd.main devel net-convert
|
|
The OpenNebula data source generates an invalid netplan yaml file
if the IPv6 gateway is not defined in context.sh.
LP: #1768547
|
|
The OpenStack datasource in 18.3 changed to detect data in the
init-local stage instead of init-network and attempted to redetect
OpenStackLocal datasource on Oracle across reboots. The function
detect_openstack was added to quickly detect whether a platform is
OpenStack based on dmi product_name or chassis_asset_tag and it was
a bit too strict for Oracle in checking for 'OpenStack Nova'/'Compute'
DMI product_name.
Oracle's DMI product_name reports 'Standard PC (i440FX + PIIX, 1996)'
and its DMI chassis_asset_tag is 'OracleCloud.com'.
The detect_openstack function now adds 'OracleCloud.com' to the valid
chassis-asset-tags for the OpenStack datasource.
LP: #1784685
|
|
Pylint 2.0.0 was recently released and complains more about
logging-not-lazy than it used to. I've fixed those warnings here.
The changes in rh_subscription are more extensive. pylint may be
complaining incorrectly there, but the tests were not correctly undoing
all of their mock/patching. This cleans those up and makes pylint happy.
|
|
A recent commit added get_linux_distro to replace the deprecated python
platform.dist module behavior before it is dropped from python. It added
behavior that was compliant on OpenSuSE and SLES, by returning
(<distro_name>, <distro_version>, <cpu-arch>).
Fix get_linux_distro to behave more like the specific distribution's
platform.dist on ubuntu, centos and debian, which will return the
distribution release codename as the third element instead of <cpu-arch>.
SLES and OpenSUSE will retain their current behavior.
Examples follow:
('sles', '15', 'x86_64')
('opensuse', '42.3', 'x86_64')
('debian', '9', 'stretch')
('ubuntu', '16.04', 'xenial')
('centos', '7', 'Core')
LP: #1780481
|
|
To deny a user elevated access, you can omit the `sudo` key from the
`users` dictionary. This works fine, however it is implicit, relying on
cloud-init's defaults. If the project were to move to allowing `sudo`
access for all users by default (quite unlikely, but still possible),
this would catch a few people out.
This introduces the ability to define an explicit `sudo: False` in the
`users` dictionary and it will prevent `sudo` access. The behaviour is
identical to omitting the key.
LP: #1771468
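For example (hypothetical user name):
#cloud-config
users:
  - default
  - name: nosudo
    sudo: False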
|
|
Newer versions (3.0.1+) of lxd create the 'lxdbr0' network when
'lxd init --auto' is invoked.
When cloud-init is given a network configuration to pass on to
lxc, and that config either specified no name or specified 'lxdbr0',
cloud-init would fail to create the network because it already exists.
Similarly, we need to remove the device from the default profile
so that the attach code can work.
Also, add a _lxc method and use it to make sure we're getting the
--force-local flag everywhere.
LP: #1776958
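An illustrative cloud-config that hits this path, where the requested
bridge name matches the 'lxdbr0' that 'lxd init --auto' already created
(addresses are examples only):
#cloud-config
lxd:
  init:
    storage_backend: dir
  bridge:
    mode: new
    name: lxdbr0
    ipv4_address: 10.0.8.1
    ipv4_netmask: 24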
|
|
OpenStack datasource is now discovered in init-local stage. In order to
probe whether OpenStack metadata is present, it performs a costly
sandboxed dhclient setup and metadata probe against http://169.254.169.254
for openstack data.
Cloud-init properly detects non-OpenStack on EC2, but it spends precious
time probing the metadata service also resulting in a confusing WARNING
log about 'metadata not present'. To avoid the wasted cycles, and
confusing warning, get_data will call a detect_openstack function to
quickly determine whether the platform looks like OpenStack before trying
to setup network to probe and crawl the metadata service.
LP: #1776701
|
|
In /etc/cloud/cloud.cfg, users and images can configure which modules run
during a specific cloud-init stage by modifying one of the following
lists: cloud_init_modules, cloud_config_modules, cloud_final_modules.
If any of the configured module lists are absent or empty, cloud-init will
emit the same message it already does for lists that contain only
modules which are unsupported on that platform:
No 'config' modules to run under section 'cloud_config_modules'
LP: #1770462
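For example, emptying one stage's list in a drop-in (hypothetical file
name) now just yields that message:
# /etc/cloud/cloud.cfg.d/99-no-config-modules.cfg
cloud_config_modules: []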
|
|
When creating the multipart mime message that is written as
user-data.txt.i, cloud-init was losing data when converting some
parts to strings.
LP: #1768600
Author: Scott Moser <smoser@ubuntu.com>
Co-Authored-By: Chad Smith <chad.smith@canonical.com>
|
|
There is no requirement that the environment of a process contains
only utf-8 data. This modifies get_proc_env to read the data as
binary, decoding it if provided with an encoding.
The default case is now that we do:
contents.decode('utf-8', 'replace')
rather than
contents.decode('utf-8', 'strict')
LP: #1775371
|