Age | Commit message | Author |
|
With a few exceptions, Azure VM deployments receive provisioning
metadata through the provisioning ISO presented as a CD-ROM device
(/dev/sr0). The existing code attempts to find this device by calling
blkid to list all devices of type iso9660 or udf, which can be very
expensive if the VM has many disks. This commit attempts to mount the
default ISO location first and only falls back to blkid to locate the
ISO if mounting the default location fails.
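Roughly, the lookup order is as follows (a minimal sketch, not the actual DataSourceAzure code; the helper names and mountpoint handling are illustrative):

    import subprocess
    import tempfile

    DEFAULT_ISO_DEV = "/dev/sr0"  # default provisioning ISO device on Azure

    def mounts_ok(device):
        """Return True if the device can be mounted read-only."""
        mountpoint = tempfile.mkdtemp()
        return subprocess.call(["mount", "-o", "ro", device, mountpoint]) == 0

    def find_provisioning_iso():
        # Cheap path first: the well-known CD-ROM device.
        if mounts_ok(DEFAULT_ISO_DEV):
            return DEFAULT_ISO_DEV
        # Expensive fallback: ask blkid for every iso9660/udf filesystem.
        devices = []
        for fstype in ("iso9660", "udf"):
            result = subprocess.run(
                ["blkid", "-o", "device", "-t", "TYPE=" + fstype],
                capture_output=True, text=True,
            )
            devices.extend(result.stdout.split())
        return devices[0] if devices else None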
|
|
Adds a udev script which will invoke a hotplug hook script on all net
add events. The script will write some udev arguments to a systemd FIFO
socket (to ensure we have only one instance of cloud-init running at a
time), which is then read by a new service that calls a new 'cloud-init
devel hotplug-hook' command to handle the new event.
This hotplug-hook command will:
- Fetch the pickled datasource
- Verify that the hotplug event is supported/enabled
- Update the metadata for the datasource
- Ensure the hotplugged device exists within the datasource
- Apply the config change on the datasource metadata
- Bring up the new interface (or apply global network configuration)
- Save the updated metadata back to the pickle cache
Also scatters in some unrelated type annotations where helpful.
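A rough, self-contained sketch of that flow (the class and helper names below are illustrative stand-ins, not the real cloud-init API):

    # Illustrative only: the event flow for 'cloud-init devel hotplug-hook',
    # with the datasource reduced to a tiny stand-in object.

    class FakeDatasource:
        """Stand-in for a real cloud-init datasource pickle."""
        supported_update_events = {"network": {"hotplug"}}

        def update_metadata(self):
            print("refreshing metadata from the cloud platform")

        def device_in_metadata(self, devpath):
            return True  # the real code waits/retries until the NIC shows up

    def handle_hotplug(datasource, devpath, subsystem, udevaction):
        # 1. Verify that hotplug is supported/enabled for this subsystem.
        if "hotplug" not in datasource.supported_update_events.get(subsystem, set()):
            return
        # 2. Update the metadata for the datasource.
        datasource.update_metadata()
        # 3. Ensure the hotplugged device exists within the datasource.
        if not datasource.device_in_metadata(devpath):
            return
        # 4. Apply the config change / bring up the new interface, then
        #    save the updated datasource back to the pickle cache (elided).
        print("bringing up", devpath, "after", udevaction)

    handle_hotplug(FakeDatasource(), "/sys/class/net/eth1", "network", "add")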
|
|
Python 3.6 added a new `policy` attribute to `MIMEMultipart`.
MIMEMultipart may be part of the cached object pickle of a datasource.
Upgrading from an old version of Python to 3.6+ will cause the
datasource to be invalid after pickle load.
This commit uses the upgrade framework to attempt to access the MIME
message and fail early (thus discarding the cache) if we cannot.
Commit 78e89b03 should fix this issue more generally.
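Conceptually, the cache check amounts to something like this (a sketch; the accessor name is hypothetical):

    import pickle

    def cache_is_usable(pickle_path):
        """Return False (discard the cache) if the cached datasource is unusable."""
        try:
            with open(pickle_path, "rb") as fh:
                obj = pickle.load(fh)
            # Accessing the MIME message is what trips over a MIMEMultipart
            # pickled under an older Python and loaded on 3.6+ (missing `policy`).
            str(obj.get_userdata_raw())  # hypothetical accessor, for the sketch
            return True
        except Exception:
            return False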
|
|
defined in AuthorizedKeysFile (#937)
This patch aims to fix LP: #1911680 by analyzing the files provided
in sshd_config and merging all keys into a user-specific file. It also
introduces additional tests to cover this specific case.
The file is picked by analyzing the paths given in AuthorizedKeysFile.
If a path points inside the current user's folder (path is /home/user/*),
it is a user-specific file, so we can copy all user keys there.
If it contains %u or %h, there will be a specific authorized_keys file
for each user, so we can copy all user keys there as well.
If no path points to a user-specific file, for example when only
/etc/ssh/authorized_keys is given, default to ~/.ssh/authorized_keys.
Note that if there is more than a single user-specific file, the last
one will be picked.
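The selection logic is roughly the following (simplified sketch; the real implementation also handles permissions and per-user parsing):

    import os

    def pick_user_keyfile(authorized_keys_entries, username, home):
        """Pick the file that user-specific keys should be merged into."""
        chosen = None
        for path in authorized_keys_entries:
            # Expand the per-user tokens sshd_config allows.
            expanded = path.replace("%u", username).replace("%h", home)
            if not expanded.startswith("/"):
                expanded = os.path.join(home, expanded)
            # A path under the user's home (or one using %u/%h) is
            # user-specific; the last such entry wins.
            if expanded.startswith(home + "/"):
                chosen = expanded
        # Fall back to ~/.ssh/authorized_keys when every entry is global,
        # e.g. when only /etc/ssh/authorized_keys is configured.
        return chosen or os.path.join(home, ".ssh", "authorized_keys")

    print(pick_user_keyfile(
        ["/etc/ssh/authorized_keys", "%h/.ssh/authorized_keys2"],
        "ubuntu", "/home/ubuntu"))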
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Co-authored-by: James Falcon <therealfalcon@gmail.com>
LP: #1911680
RHBZ:1862967
|
|
Add a new switch, allow_raw_data, to control the raw data feature and
update the documentation. Also fix bugs related to max_wait.
|
|
test_upgrade.py was emitting a large amount of output that had to be
manually collected and verified. This commit adds more assertions to the test
and outputs directly to the logs rather than separate files.
|
|
We read the MTU from the subnet entries. With the v1 format, the MTU can
be set at the root level of the interface entry in the `config` section.
Limitation: we won't set the MTU if the interface uses DHCP. This
would require a bit of refactoring.
Also simplify/clarify how we pass the target variable in `cloudinit.net.bsd`.
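For reference, a v1 config carrying the MTU on the interface entry itself looks roughly like this (values are made up):

    # Hypothetical v1 network config: `mtu` on the interface entry itself.
    network_config_v1 = {
        "version": 1,
        "config": [
            {
                "type": "physical",
                "name": "vtnet0",
                "mac_address": "52:54:00:12:34:56",
                "mtu": 9000,  # read from the root of the interface entry
                "subnets": [{"type": "static", "address": "192.0.2.10/24"}],
            }
        ],
    }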
See: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=256309
Reported-by: Andrey Fesenko
|
|
Currently _bring_up_interfaces() is a no-op for any distro using
renderers. We need to be able to support bringing up a single
interface, a list of interfaces, and all interfaces. This should be
independent of the renderers, as the network config is often
generated independent of the mechanism used to apply it.
Additionally, I included a refactor to remove
"_supported_write_network_config". We had a confusing call chain of
apply_network_config->_write_network_config->_supported_write_network_config.
The last two have been combined.
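The resulting distro-level shape is roughly the following (an illustrative sketch, not the actual method signatures):

    import subprocess

    class Distro:
        """Illustrative shape of the distro-level API after the refactor."""

        def _activator(self):
            # The real code picks a mechanism based on the renderer/OS;
            # here we just shell out to `ip` as a stand-in.
            return lambda name: subprocess.call(["ip", "link", "set", name, "up"])

        def bring_up_interface(self, device_name):
            return self.bring_up_interfaces([device_name])

        def bring_up_interfaces(self, device_names):
            activate = self._activator()
            return all(activate(name) == 0 for name in device_names)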
|
|
summary: Clear cache when a Python version change is detected
When a distribution gets updated, it is possible that the Python version
changes. Python makes no guarantee that pickle is consistent across
versions; as such, we need to purge the cache and start over.
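Conceptually: record the interpreter version next to the cache and compare it on load (a simplified sketch; the path is illustrative):

    import os
    import sys

    VERSION_FILE = "/var/lib/cloud/data/python-version"  # illustrative path

    def current_version():
        return "%d.%d" % sys.version_info[:2]

    def cache_still_valid():
        if not os.path.exists(VERSION_FILE):
            return True  # nothing recorded yet; nothing to compare
        with open(VERSION_FILE) as fh:
            return fh.read().strip() == current_version()

    def record_version():
        with open(VERSION_FILE, "w") as fh:
            fh.write(current_version())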
Co-authored-by: James Falcon <therealfalcon@gmail.com>
|
|
Commit f5a2449 introduced Impish but left the release name set to
'hirsute'.
|
|
Minor fixes in networkd renderer & fixed corresponding tests
Removed datasource_list for Photon from cloud.cfg.tmpl and added a comment
in cloud.cfg.tmpl about not using a multiline array for datasource_list.
Signed-off-by: Shreenidhi Shedi <sshedi@vmware.com>
|
|
Also added a new (currently experimental) systemd-networkd renderer,
and included a small refactor of cc_resolv_conf.py to support the
resolved.conf used by systemd-resolved.
|
|
|
|
Also new jenkins tox definition
|
|
- small documentation update for ReportEventStack explaining the
post_files parameter
- small unit test for test_reporting demonstrating the close of an
event with optional post_files list
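Usage is along these lines (treat the exact keyword arguments as approximate):

    from cloudinit.reporting import events

    # Close an event with an optional list of files to post alongside it.
    with events.ReportEventStack(
            name="run-module",
            description="running an example module",
            post_files=["/var/log/cloud-init-output.log"]):
        pass  # the work being reported on happens here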
|
|
|
|
- Mostly based on FreeBSD; the main exception is that
`find_devs_with_on_freebsd` does not work.
- Since we cannot get the CDROM or the partition labels,
`find_devs_with_on_dragonflybsd()` has a more naive approach and
returns all the block devices.
|
|
instance-data.json redacts sensitive data for non-root users. Since user
data is consumed as root, we should be consuming the non-redacted data
instead.
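In other words, a root consumer should prefer the sensitive copy, roughly:

    import json
    import os

    REDACTED = "/run/cloud-init/instance-data.json"
    SENSITIVE = "/run/cloud-init/instance-data-sensitive.json"  # root-only

    def load_instance_data():
        # Root can (and should) read the unredacted copy; everyone else
        # gets the redacted one.
        use_sensitive = os.getuid() == 0 and os.path.exists(SENSITIVE)
        with open(SENSITIVE if use_sensitive else REDACTED) as fh:
            return json.load(fh)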
LP: #1931392
|
|
Rocky Linux is a RHEL-compatible distribution so all changes that have
been made should be trivial.
|
|
Ensure no Traceback when 'chef_license' is set
|
|
Presently, mirror keys cannot be associated with primary/security
mirrors. Unfortunately, this prevents use of Landscape-managed
package mirrors as the mirror key for the Landscape-hosted repository
cannot be provided.
This patch allows the same key-related fields usable on "sources"
entries to be used on the "primary" and "security" entries as well.
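Expressed as the equivalent configuration data (shown as a Python dict; URIs and key values are placeholders):

    apt_config = {
        "apt": {
            "primary": [{
                "arches": ["default"],
                "uri": "http://mirror.example.com/ubuntu",
                # Key-related fields previously only honoured on "sources":
                "keyid": "0123456789ABCDEF",           # placeholder key id
                "keyserver": "keyserver.example.com",  # optional
            }],
            "security": [{
                "arches": ["default"],
                "uri": "http://mirror.example.com/ubuntu",
                "key": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...",
            }],
        }
    }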
LP: #1925395
|
|
In #856 we added the ability to use partprobe instead of blockdev for
reading partitions. Test that partprobe succeeds where blockdev fails.
Also add a mechanism to our integration tests to allow a callable to be
called between `lxc init` and `lxc start`.
|
|
Control is currently limited to boot events, though this should
allow us to more easily incorporate HOTPLUG support. Disabling
'instance-first-boot' is not supported as we apply networking config
too early in boot to have processed userdata (along with the fact
that this would be a pretty big foot-gun).
The concept of update events on datasource has been split into
supported update events and default update events. Defaults will be
used if there are no user-defined update events, but user-defined
events won't be supplied if they aren't supported.
When applying the networking config, we now check to see if the event
is supported by the datasource as well as if it is enabled.
Configuration looks like:
updates:
  network:
    when: ['boot']
|
|
AlmaLinux OS is RHEL-compatible so all the changes needed are trivial.
|
|
See https://bugs.launchpad.net/cloud-init/+bug/1910835
|
|
This reverts commit 74fa008bfcd3263eb691cc0b3f7a055b17569f8b.
During pre-release testing, we discovered two issues with this commit.
Firstly, there's a typo in the udevadm command that causes a TypeError
for _all_ growpart executions. Secondly, the LVM resizing does not
appear to successfully resize everything up to the LV, though some
things do get resized.
We certainly want this change, so we'll be happy to review and land it
alongside an integration test which confirms that it is working as
expected.
LP: #1922742
|
|
|
|
This allows us to use it when validating packages from -proposed (and
PPAs etc.).
|
|
When network interfaces are hot-attached to the VM, attempting to get
network metadata might return 410 (or 500, 503, etc.) because the info
is not yet available. In those cases, we retry getting the metadata
before giving up. The only case where we move on to wait for more NIC
attach events is if the call times out despite retries, which means the
interface is likely not a primary interface.
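The retry behaviour is roughly the following (a sketch; the status-code set and limits are illustrative, not the exact Azure datasource values):

    import time
    import requests

    RETRYABLE = {410, 429, 500, 503}  # illustrative set of codes to retry

    def get_nic_metadata(url, attempts=10, timeout=2, sleep=1):
        timed_out = False
        for _ in range(attempts):
            try:
                resp = requests.get(url, headers={"Metadata": "true"},
                                    timeout=timeout)
            except requests.Timeout:
                timed_out = True
                continue
            if resp.status_code in RETRYABLE:
                time.sleep(sleep)  # metadata not ready yet; retry
                continue
            resp.raise_for_status()
            return resp.json()
        if timed_out:
            # Timing out despite retries suggests this is not the primary
            # NIC; the caller should wait for further nic-attach events.
            return None
        raise RuntimeError("IMDS kept returning retryable errors for %s" % url)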
|
|
This change allows us to retrieve the username and hostname from
IMDS instead of having to rely on the mounted OVF.
|
|
Due to Hyper-V implementation details, ISO ejection is more efficient if
performed from within the guest. The code will attempt a best-effort
ejection. Failure during ejection will not prevent reporting ready from
happening. If ISO ejection is successful, later ISO ejection from the
platform will be a no-op. In the event that ISO ejection from the guest
fails, ejection will still happen at the platform level.
|
|
In #777, we added 'vendordata2' and 'vendordata2_raw' attributes to
the DataSource class, but didn't use the upgrade framework to deal
with an unpickle after upgrade. This commit adds the necessary
upgrade code.
Additionally, added a smaller-scope upgrade test to our integration
tests that will be run on every CI run so we catch these issues
immediately in the future.
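The upgrade hook boils down to backfilling the attributes when they are missing on the unpickled object (simplified sketch):

    class DataSource:
        def _unpickle(self, ci_pkl_version):
            # Attributes added in newer releases will not exist on objects
            # pickled by an older cloud-init; backfill them here.
            for attr in ("vendordata2", "vendordata2_raw"):
                if not hasattr(self, attr):
                    setattr(self, attr, None)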
LP: #1922739
|
|
The above option allows the user to control the distro's hostname
selection behavior when both a short hostname and an FQDN are supplied.
If `prefer_fqdn_over_hostname` is true, the FQDN will be selected as the
hostname; if false, the short hostname will be selected.
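The selection reduces to something like this (sketch):

    def select_hostname(hostname, fqdn, prefer_fqdn_over_hostname=False):
        # When both a short hostname and an FQDN are supplied, the flag decides.
        if fqdn and prefer_fqdn_over_hostname:
            return fqdn
        return hostname or fqdn

    print(select_hostname("node1", "node1.example.com", True))   # node1.example.com
    print(select_hostname("node1", "node1.example.com", False))  # node1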
LP: #1921004
|
|
The current method of running a background sleep until Travis is
finished is causing integration test runs to pass even when they should
be failing.
Instead, update the code to emit dots itself.
|
|
Invoking walinuxagent from within cloud-init is no longer
supported/necessary
|
|
This PR adds in support so that cloud-init can run on instances
deployed on Vultr cloud. This was originally brought up in #628.
Co-authored-by: Eric Benner <ebenner@vultr.com>
|
|
On the datasource class, we require the use of paths.run_dir to
perform some operations. On older cloud-init versions, the
Paths class does not have the run_dir attribute. To fix that,
we now manually add that attribute to the Paths object during the
unpickle operation if it doesn't exist.
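Conceptually the fix is a backfill along these lines (sketch; the real change happens in the datasource's unpickle handling):

    class Paths:
        def __init__(self, run_dir="/run/cloud-init"):
            self.run_dir = run_dir

    def upgrade_paths(paths):
        """Backfill run_dir on a Paths object unpickled from an old release."""
        if not hasattr(paths, "run_dir"):
            paths.run_dir = "/run/cloud-init"  # default run directory
        return paths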
LP: #1899299
|
|
Update sysconfig configuration to use BONDING_MODULE_OPTS instead of
BONDING_OPTS when on a SUSE system. The sysconfig support requires use
of BONDING_MODULE_OPTS, whereas the initscript support that RHEL uses
requires BONDING_OPTS.
|
|
This patch adds support to resize a single partition of a VM if it is
using LVM underneath. The patch detects LVM by checking whether the given
block device is a device-mapper node (e.g. `/dev/dm-1`) and whether it has
slave devices under it in sysfs. After that, the syspath is updated to the
real block device and growpart is called to resize it (and automatically
its Physical Volume).
The Volume Group will be updated automatically and a final call to
extend the rootfs to the remaining space available will be made.
Using the same growpart configuration, the user can specify only one
device to be resized when using LVM and growpart; otherwise, cloud-init
won't know which one should be resized and will fail.
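The detection is essentially "is this a device-mapper node with slaves in sysfs?" (sketch):

    import os

    def get_dm_slaves(blockdev):
        """Return the underlying devices of a device-mapper node, if any."""
        name = os.path.basename(os.path.realpath(blockdev))  # e.g. 'dm-1'
        slaves_dir = "/sys/class/block/%s/slaves" % name
        if not name.startswith("dm-") or not os.path.isdir(slaves_dir):
            return []
        return ["/dev/" + dev for dev in os.listdir(slaves_dir)]

    # If slaves exist, growpart is pointed at the real partition backing the
    # Physical Volume (e.g. /dev/sda3); the VG and LV are extended afterwards.
    print(get_dm_slaves("/dev/dm-0"))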
rhbz: #1810878
LP: #1799953
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Scott Moser <smoser@brickies.net>
|
|
The klibc initramfs in Debian allows the 'iscsi_target_ip=' cmdline
parameter to specify an iSCSI device attachment. This can
cause cloud-init to mis-detect the cmdline parameter as
networking config.
LP: #1919188
|
|
When output of SSH host keys and/or SSH fingerprints is disabled for
all keys, do not display headers and footers.
Prevent risk of message text being interpreted as "logger" option by
appending "--" to logger options.
Correct syslog output that was tagged with "ec2" regardless of DataSource
in use. Now use "cloud-init" tag instead.
Various "shellcheck" corrections.
Add testcase for disabled output of SSH host keys.
|
|
Ensure that the Azure helper's http handler sleeps a fixed duration
between retry failure attempts. The http handler will sleep a fixed
duration between failed attempts regardless of whether the attempt
failed due to (1) request timing out or (2) instant failure (no
timeout).
Due to certain platform issues, the http request to the Azure endpoint
may instantly fail without reaching the http timeout duration. Without
sleeping a fixed duration in between retry attempts, the http handler
will loop through the max retry attempts quickly. This causes the
communication between cloud-init and the Azure platform to be less
resilient due to the short total duration if there is no sleep in
between retries.
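The essential change is that the sleep happens between attempts regardless of how the attempt failed (sketch; retry counts and durations are illustrative):

    import time

    def http_with_retries(do_request, attempts=10, sleep_between=5):
        last_error = None
        for attempt in range(attempts):
            try:
                return do_request()
            except Exception as err:
                last_error = err
            # Sleep whether the attempt timed out or failed instantly, so an
            # instantly-failing endpoint can't burn through all retries at once.
            if attempt < attempts - 1:
                time.sleep(sleep_between)
        raise last_error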
|
|
Prior to this commit, when a user specified configuration which would
generate random passwords for users, cloud-init would cause those
passwords to be written to the serial console by emitting them on
stderr. In the default configuration, any stdout or stderr emitted by
cloud-init is also written to `/var/log/cloud-init-output.log`. This
file is world-readable, meaning that those randomly-generated passwords
were available to be read by any user with access to the system. This
presents an obvious security issue.
This commit responds to this issue in two ways:
* We address the direct issue by moving from writing the passwords to
sys.stderr to writing them directly to /dev/console (via
util.multi_log); this means that the passwords will never end up in
cloud-init-output.log
* To avoid future issues like this, we also modify the logging code so
that any files created in a log sink subprocess will only be
owner/group readable and, if it exists, will be owned by the adm
group (see the sketch below). This results in `/var/log/cloud-init-output.log` no longer
being world-readable, meaning that if there are other parts of the
codebase that are emitting sensitive data intended for the serial
console, that data is no longer available to all users of the system.
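For the log-sink change, file creation now looks roughly like this (sketch; mode and group as described above; the function name is illustrative):

    import grp
    import os

    def open_log_sink(path="/var/log/cloud-init-output.log"):
        # Create the file owner/group readable only (0o640 rather than 0o644).
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o640)
        try:
            # Hand the file to the adm group if it exists on this system.
            os.fchown(fd, -1, grp.getgrnam("adm").gr_gid)
        except KeyError:
            pass  # no adm group; leave ownership alone
        return os.fdopen(fd, "a")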
LP: #1918303
|
|
The apt default test wasn't ported over from cloud-tests correctly.
A uri should be specified in the test, but it was not, so the test
failed on OpenStack (and likely other platforms) because, without
a specified uri, the default uri will vary by platform. I separated
this uri test out into a separate test function.
Also add an OpenStack-specific test for apt configuration with no uri.
Other platform-specific tests should be added here over time.
|
|
The latest pycloudlib now launches official Ubuntu cloud images for
xenial, meaning that `lxc exec` no longer works against them. This
commit includes handling for tests which are affected by this change;
further details and reasoning in the included comment.
|
|
The locale wasn't persisted correctly, nor set.
LP: #1402406
|
|
Newer versions of /etc/sudoers prefer @includedir over
#includedir. Ensure we handle that properly and don't include an
additional #includedir when one isn't warranted.
|
|
This mounts the full directories that we install into systems over their
corresponding paths within the system under test, getting us slightly
closer to testing what a package would install.
|
|
#342 (70dbccbb) introduced the ability to determine route-metrics based on
the `device-number` provided by the EC2 IMDS. Not all datasources that
subclass EC2 will have this attribute, so allow the old behavior if
`device-number` is not present.
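In other words, the metric calculation degrades gracefully (sketch; the base constant is illustrative):

    BASE_METRIC = 100  # illustrative base

    def route_metric(nic_metadata, fallback_order):
        device_number = nic_metadata.get("device-number")
        if device_number is not None:
            return BASE_METRIC + int(device_number)
        # Datasources that subclass EC2 but lack device-number fall back to
        # the old ordering-based behaviour.
        return BASE_METRIC + fallback_order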
LP: #1917875
|
|
`get_interfaces` is used in two ways, broadly: firstly, to determine
the available interfaces when converting cloud network configuration
formats to cloud-init's network configuration formats; and, secondly, to
ensure that any interfaces which are specified in network configuration
are (a) available, and (b) named correctly. The first of these is
unaffected by this commit, as no clouds support Open vSwitch
configuration in their network configuration formats.
For the second, we check that MAC addresses of physical devices are
unique. In some OVS configurations, there are OVS-created devices which
have duplicate MAC addresses, either with each other or with physical
devices. As these interfaces are created by OVS, we can be confident
that (a) they will be available when appropriate, and (b) that OVS will
name them correctly. As such, this commit excludes any OVS-internal
interfaces from the set of interfaces returned by `get_interfaces`.
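The exclusion can be done by asking OVS which interfaces it owns and filtering them out (sketch; the exact ovs-vsctl invocation used by cloud-init may differ):

    import shutil
    import subprocess

    def get_ovs_internal_interfaces():
        # If Open vSwitch isn't installed there is nothing to exclude.
        if shutil.which("ovs-vsctl") is None:
            return []
        out = subprocess.check_output(
            ["ovs-vsctl", "--format", "csv", "--no-headings", "--timeout", "10",
             "--columns", "name", "find", "interface", "type=internal"],
            text=True)
        return [line.strip().strip('"') for line in out.splitlines() if line.strip()]

    def filter_ovs_internal(interface_names):
        ovs_internal = set(get_ovs_internal_interfaces())
        return [name for name in interface_names if name not in ovs_internal]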
LP: #1912844
|