Age | Commit message | Author |
|
Add code so that specifying "wakeonlan: true" actually results in the relevant
configuration entry appearing in /etc/network/interfaces, Netplan, and
sysconfig for RHEL and openSUSE.
Add testcases for the above.
|
|
FreeBSD lets us read out kernel parameters with kenv(1), a user-space
utility that's shipped in "base". We can use it in place of dmidecode(8),
thus removing the dependency on sysutils/dmidecode and the restrictions
to i386 and x86_64 architectures that this utility imposes on FreeBSD.
Co-authored-by: Scott Moser <smoser@brickies.net>
|
|
This allows the cloud-init log to be pushed multiple times during boot,
with the latest lines being pushed each time.
|
|
FreeBSD doesn't have blkid, so we want to use geom to list devices and
their fstypes and labels.
This PR also adds `jail` to the environments recognized by is_container().
And we now also properly cache geom and blkid output!
A test is added to verify the new behaviour by correctly identifying
NoCloud on FreeBSD.
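A hedged sketch of the two FreeBSD-specific ideas, jail detection via the `security.jail.jailed` sysctl and label listing via geom(8); the helper names are illustrative, not the actual cloud-init functions:
```
# Illustrative sketch only -- not the actual cloud-init implementation.
import subprocess

def running_in_jail():
    """FreeBSD: the security.jail.jailed sysctl reads 1 inside a jail."""
    try:
        out = subprocess.run(
            ["sysctl", "-n", "security.jail.jailed"],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return False
    return out.stdout.strip() == "1"

def geom_label_status():
    """Return raw `geom label status -s` output, which maps labels to providers."""
    out = subprocess.run(
        ["geom", "label", "status", "-s"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout
```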
Co-authored-by: Scott Moser <smoser@brickies.net>
|
|
This just separates the reading of DMI values into its own file.
Some things of note:
* the import of util is left in dmi.py only for 'is_container';
it'd be good if is_container was not in util.
* only the use of 'util.is_x86' moved into dmi.py
* open() is used directly rather than load_file.
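A rough sketch of the shape described above: reading one DMI value from sysfs with plain open(); the path and key names are standard Linux conventions, not copied from the new dmi.py:
```
# Rough sketch of the approach described above (not the real dmi.py).
import os

DMI_SYSFS_DIR = "/sys/class/dmi/id"

def read_dmi_sysfs(key):
    """Read one DMI value (e.g. 'product_uuid') directly from sysfs."""
    path = os.path.join(DMI_SYSFS_DIR, key)
    if not os.path.isfile(path):
        return None
    # open() is used directly rather than a load_file helper.
    with open(path, "r") as fh:
        return fh.read().strip()

print(read_dmi_sysfs("sys_vendor"))
```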
|
|
Fixes an erroneous string/int comparison introduced in 1431c8a.
metadata['instance-id'] is an integer but the value read from SMBIOS is
a string; the comparison would cause a TypeError.
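A minimal sketch of the fix idea, normalising both values to str before comparing; the function name is illustrative, not the actual DataSourceHetzner code:
```
def instance_id_matches(metadata_instance_id, smbios_serial):
    # Cast both sides so an int from metadata compares cleanly with the
    # string read from SMBIOS.
    return str(metadata_instance_id) == str(smbios_serial)

assert instance_id_matches(1234567, "1234567")
```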
|
|
Hetzner Cloud also provides the instance ID in the SMBIOS information. Use
it for a local check_instance_id and to compare with the instance_id from
the metadata service.
LP: #1885527
|
|
The static and static6 subnet types for network_data.json were
being ignored by the OpenStack handler, which would cause the code to
break and not function properly.
As of today, if a static6 configuration is chosen, the interface will
still eventually be available to receive router advertisements, or will be
set by NetworkManager to wait for them and cycle the interface in the
negative case.
It is safe to assume that if the interface is manually configured with a
static IPv6 address, there's no need to wait for router advertisements.
This patch automatically sets both IPV6_AUTOCONF and IPV6_FORCE_ACCEPT_RA
to "no" in this case.
This patch fixes the specific behavior only for the RHEL flavor and the
sysconfig renderer. It also introduces new unit tests for this specific
case and adjusts some existing tests to be compatible with the
new options. This patch also addresses the problem by assigning the
appropriate subnet type for each case in the OpenStack handler.
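A hedged sketch of the rendering idea for the sysconfig renderer: a statically configured IPv6 subnet turns both keys off. The subnet type names and dict-based rendering are illustrative; the real renderer uses its own classes:
```
# Illustrative only: how a static IPv6 subnet could translate into ifcfg keys.
def ipv6_static_keys(subnet_type):
    cfg = {}
    if subnet_type in ("static6", "ipv6"):  # assumed type names
        # No router advertisements are needed for a manually configured address.
        cfg["IPV6_AUTOCONF"] = "no"
        cfg["IPV6_FORCE_ACCEPT_RA"] = "no"
    return cfg

print(ipv6_static_keys("static6"))
# {'IPV6_AUTOCONF': 'no', 'IPV6_FORCE_ACCEPT_RA': 'no'}
```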
rhbz: #1889635
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
|
|
Reliable Scalable Cluster Technology (RSCT) is a set of software
components that together provide a comprehensive clustering
environment (RAS features) for IBM PowerVM-based virtual machines. RSCT
includes the Resource Monitoring and Control (RMC) subsystem. RMC is a
generalized framework used for managing, monitoring, and manipulating
resources. RMC runs as a daemon process on individual machines and needs
a unique node id to be created, and the daemon restarted, during VM boot.
LP: #1895979
Co-authored-by: Scott Moser <smoser@brickies.net>
|
|
Also update MAC addresses used in testcases to remove quotes where not
required and add single quotes where quotes are required.
|
|
Gentoo's hostname file format, instead of being just the host name,
is `hostname="thename"`. The old code works fine when the file has no
comments, but if there is a comment the line
```
gentoo_hostname_config = 'hostname="%s"' % conf
```
can render an invalid hostname file that looks similar to
```
hostname="#This is the host namehello"
```
The fix inserts the hostname in a Gentoo-friendly way so that it gets
handled by HostnameConf as a whole, and comments are handled and preserved.
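A standalone illustration of the desired behaviour (the real fix goes through cloud-init's HostnameConf parser instead): comments stay on their own lines and the `hostname="..."` assignment is written separately:
```
def render_gentoo_hostname(existing_text, name):
    out = []
    replaced = False
    for line in existing_text.splitlines():
        if line.strip().startswith("hostname="):
            out.append('hostname="%s"' % name)
            replaced = True
        else:
            out.append(line)  # comments and other lines are preserved as-is
    if not replaced:
        out.append('hostname="%s"' % name)
    return "\n".join(out) + "\n"

print(render_gentoo_hostname("# This is the host name\n", "hello"))
# -> the comment stays on its own line, followed by hostname="hello"
```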
|
|
update_resolve_conf_file is no longer used. The last reference
to it was removed in c3680475f9c970, which was itself a "remove dead
code" commit.
|
|
The following commit merged all ssh keys into the default user file
`~/.ssh/authorized_keys` when sshd_config had multiple files configured for
AuthorizedKeysFile:
commit f1094b1a539044c0193165a41501480de0f8df14
Author: Eduardo Otubo <otubo@redhat.com>
Date: Thu Dec 5 17:37:35 2019 +0100
Multiple file fix for AuthorizedKeysFile config (#60)
This commit ignored the case where sshd_config has a single, but
non-default, file for AuthorizedKeysFile, for example
`~/.ssh/authorized_keys_foobar`. In this case cloud-init would grab all keys
from this file and write them to a new one, the default
`~/.ssh/authorized_keys`, causing the bug.
rhbz: #1862967
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
|
|
DataSourceAzure previously wrote the preprovisioning
reported-ready marker file before it went through the
report-ready workflow. On certain VM instances, the
marker file is successfully written but then reporting
ready fails.
Upon rare VM reboots by the platform, cloud-init sees
that the report-ready marker file already exists.
The existence of this marker file tells cloud-init
not to report ready again (because it mistakenly
assumes that it already reported ready in
preprovisioning).
In this scenario, cloud-init erroneously takes the
reprovisioning workflow instead of reporting ready again.
|
|
Consider valid product names as valid chassis asset tags when detecting
the OpenStack platform, before crawling for OpenStack metadata.
As the `ds-identify` tool treats product names as valid chassis asset tags,
replicate that behaviour in the OpenStack platform detection too.
This change should be backwards compatible and is a temporary fix for the
current limitations of the OpenStack platform detection.
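A hedged sketch of the detection idea: accept a known product name in either the product-name or asset-tag DMI field. The constant lists are examples only; the actual sets used by cloud-init may be longer:
```
# Illustrative detection sketch; the real lists in cloud-init may differ.
VALID_PRODUCT_NAMES = ("OpenStack Nova", "OpenStack Compute")
VALID_ASSET_TAGS = ("OpenTelekomCloud",)  # example entry only

def looks_like_openstack(product_name, asset_tag):
    # Product names are treated as valid chassis asset tags too.
    return (
        asset_tag in VALID_ASSET_TAGS
        or product_name in VALID_PRODUCT_NAMES
        or asset_tag in VALID_PRODUCT_NAMES
    )
```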
LP: #1895976
|
|
This moves logging into `report_diagnostic_event`, to clean up its callsites.
|
|
enumeration of physical network devices (#591)
|
|
fails (#549)
Azure datasource's `parse_network_config` throws a fatal uncaught exception when an exception is raised during generation of network config from IMDS metadata. This happens when IMDS metadata is invalid/corrupted (such as when it is missing network or interface metadata). This causes the rest of provisioning to fail.
This changes `parse_network_config` to be a non-fatal implementation. Additionally, when generating network config from IMDS metadata fails, fall back on generating fallback network config (`_generate_network_config_from_fallback_config`).
This also changes fallback network config generation (`_generate_network_config_from_fallback_config`) to blacklist an additional driver: `mlx5_core`.
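A hedged sketch of the fall-back shape: try to build network config from IMDS, and on any failure fall back to a generated config that excludes the accelerated-networking drivers. Names and control flow are illustrative, not the actual DataSourceAzure code:
```
import logging

LOG = logging.getLogger(__name__)
BLACKLISTED_DRIVERS = ["mlx4_core", "mlx5_core"]  # mlx5_core newly excluded

def build_network_config(imds_metadata, from_imds, fallback):
    try:
        return from_imds(imds_metadata)
    except Exception as e:  # deliberately non-fatal here
        LOG.warning("Generating config from IMDS failed: %s; using fallback", e)
        return fallback(blacklist_drivers=BLACKLISTED_DRIVERS)
```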
|
|
|
|
Under FreeBSD, we want to use "shutdown -p" for poweroff.
Alpine Linux also has some peculiarities of its own.
We choose to define a method that returns the shutdown command line to
use, rather than a method that actually does the shutdown. This makes it
easier to have the tests in test_handler_power_state do their
verifications.
Two tests are added for the special behaviours that are known so far.
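A hedged sketch of the pattern described: a method that returns the shutdown command line (so tests can assert on it) rather than executing it. The FreeBSD `shutdown -p` spelling is from the commit text; the Linux flags are the common systemd ones, and the Alpine variant is omitted because its exact flags differ:
```
def shutdown_command(mode, delay, message=None, freebsd=False):
    """mode: 'poweroff'|'reboot'|'halt'; delay: 'now' or '+<minutes>'."""
    if freebsd:
        # FreeBSD spells power-off as `shutdown -p`.
        flag = {"poweroff": "-p", "reboot": "-r", "halt": "-h"}[mode]
    else:
        flag = {"poweroff": "-P", "reboot": "-r", "halt": "-H"}[mode]
    args = ["shutdown", flag, delay]
    if message:
        args.append(message)
    return args

assert shutdown_command("poweroff", "now", freebsd=True)[:2] == ["shutdown", "-p"]
```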
|
|
Prior to this change, vlans were rendered in sysconfig with
'TYPE=Ethernet', and incorrectly rendered the PHYSDEV based on
the name of the vlan device rather than the 'link' provided
in the network config.
The change here:
* fixes the rendering of TYPE=Ethernet for a vlan
* adds a warning if the configured device name is not supported
per the RHEL 7 docs "11.5. Naming Scheme for VLAN Interfaces"
LP: #1788915
LP: #1826608
RHBZ: #1861871
|
|
* pull ssh keys from imds first and fall back to ovf if unavailable
* refactor log and diagnostic messages
* refactor the OpenSSLManager instantiation and certificate usage
* fix unit test where exception was being silenced for generate cert
* fix tests now that certificate is not always generated
* add documentation for ssh key retrieval
* add ability to check if http client has security enabled
* refactor certificate logic to GoalState
|
|
* LXD: detach network from profile before deleting it
When cleaning up the bridge network created by default by LXD as part
of the `lxd init` process, detach the network from its profile before
deleting it. LXD will otherwise refuse to delete it with the error:
Error: The network is currently in use.
Discussion with LXD upstream: https://github.com/lxc/lxd/issues/7804.
LP: #1776958
* LXD bridge deletion: fail if bridge exists but can't be deleted
* LXD bridge deletion: remove useless failure logging
|
|
This fixes a long delay during boot of some instances. For Azure instance types using SR-IOV via the Hyper-V netvsc network driver, two network interfaces are created that share the same MAC, but only the virtual device should be configured and used. Updating the netplan configuration to filter on the hv_netvsc driver prevents netplan from trying to configure both devices.
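A hedged sketch of the resulting netplan-style (network config v2) data, expressed as a Python dict: matching on both the MAC and the hv_netvsc driver so only the synthetic interface is configured. The MAC and interface name are placeholders:
```
netplan_cfg = {
    "version": 2,
    "ethernets": {
        "eth0": {
            "dhcp4": True,
            "match": {
                "macaddress": "00:11:22:33:44:55",  # placeholder value
                "driver": "hv_netvsc",  # excludes the SR-IOV VF sharing this MAC
            },
            "set-name": "eth0",
        }
    },
}
```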
LP: #1830740
|
|
Update ssh_util.py with the latest list of keys (from openssh-8.3p1/sshkey.c).
Added keys:
sk-ecdsa-sha2-nistp256-cert-v01@openssh.com
sk-ecdsa-sha2-nistp256@openssh.com
sk-ssh-ed25519-cert-v01@openssh.com
sk-ssh-ed25519@openssh.com
ssh-xmss-cert-v01@openssh.com
ssh-xmss@openssh.com
LP: #1877869
|
|
Push the cloud-init.log file (up to 500KB at once) to the KVP before reporting ready to the Azure platform.
Based on the analysis done on a large sample of cloud-init.log files, here are the statistics collected on the log file size:
P50 P90 P95 P99 P99.9 P99.99
137K 423K 537K 3.5MB 6MB 16MB
This change limits the size of cloud-init.log file data that gets dumped to KVP to 500KB. So for ~95% of the cases, the whole log file will be dumped, and for the remaining ~5%, we will get the last 500KB of the cloud-init.log file.
To assess the performance of the 500KB limit, 250 VMs were deployed with a 500KB cloud-init.log file, and the time taken to compress, encode and dump the entries to KVP was measured. Here are the times in milliseconds, by percentile:
P50 P99 P999
75.705 232.701 1169.636
Another 250 VMs were deployed with this logic dumping their normal cloud-init.log file to KVP, and the same timing was measured as above. Here are the times in milliseconds, by percentile:
P50 P99 P999
1.88 5.277 6.992
Added excluded_handlers to the report_event function to be able to opt out of reporting the events of the compressed cloud-init.log file back to the cloud-init.log file.
The KVP break_down logic had a bug where it reused the same key for all the split KVP chunks, which resulted in the split KVPs being overwritten by the last one when consumed by Hyper-V. I added the split-chunk index to the KVP key as a differentiator.
Hyper-V consumes the KVPs from the KVP file as chunks whose key is 512KB and value is 2048KB, but the Azure platform expects the value to be at most 1024KB, so I introduced the Azure value limit.
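A hedged sketch of the key-indexing fix: each split chunk gets its index appended to the key so Hyper-V does not overwrite earlier chunks. The constant and function names are illustrative:
```
AZURE_KVP_VALUE_LIMIT = 1024 * 1024  # Azure expects values of at most 1024KB

def split_into_kvps(base_key, data, limit=AZURE_KVP_VALUE_LIMIT):
    kvps = []
    for index, start in enumerate(range(0, len(data), limit)):
        chunk = data[start:start + limit]
        # The index differentiates the keys; previously every chunk reused
        # base_key, so Hyper-V kept only the last chunk.
        kvps.append(("%s|%d" % (base_key, index), chunk))
    return kvps

print([k for k, _ in split_into_kvps("CLOUD_INIT_LOG", "x" * 2500, limit=1024)])
# ['CLOUD_INIT_LOG|0', 'CLOUD_INIT_LOG|1', 'CLOUD_INIT_LOG|2']
```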
|
|
Add new module cc_apk_configure for creating Alpine /etc/apk/repositories file.
Modify cc_ca_certs, cc_ntp, cc_power_state_change, and cc_resolv_conf for Alpine.
Add Alpine template files for Chrony and Busybox NTP support.
Add Alpine template file for /etc/hosts.
|
|
According to the man page (`man 8 swapon`), "Preallocated swap files are
supported on XFS since Linux 4.18". This patch checks the kernel version
before attempting to create the swapfile, using dd on XFS only for kernel
versions <= 4.18, and always for btrfs.
Add a new func util.kernel_version which returns a tuple of ints (major, minor).
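A minimal sketch of a kernel_version helper of the described shape (a (major, minor) tuple of ints); the actual util.kernel_version may differ in details:
```
import os

def kernel_version():
    """Return the running kernel version as a (major, minor) tuple of ints."""
    release = os.uname().release          # e.g. '5.4.0-42-generic'
    major, minor = release.split(".")[:2]
    return int(major), int(minor)

# XFS gained preallocated-swapfile support in 4.18, so dd is only needed below that.
use_dd = kernel_version() < (4, 18)
```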
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
|
|
This PR refactors Azure report ready code to include more robust tests and telemetry.
|
|
|
|
Update DataSourceNoCloud and ds-identify to recognize LABEL_FATBOOT labels from blkid.
Also updated associated tests.
LP: #1841466
|
|
DataSourceAzure: Gracefully handle the case of set hostname failure during provisioning
|
|
Add support for VMware's vCD configuration setting DEFAULT-RUN-POST-CUST-SCRIPT.
When set to true, VMs will by default run post-customization scripts if the VM has not been configured in VMware Tools with "enable-custom-scripts" set to false.
Add datasource documentation with a bit more context about this interaction on VMware products.
With this fix, the behavior will be:
* If the VM administrator doesn't want others to execute a script on this VM, VMware Tools can set "enable-custom-scripts" to false via the utility "vmware-toolbox-cmd".
* If the VM administrator doesn't set a value for "enable-custom-scripts", then by default the script is disabled for security purposes.
* For VMware's vCD product, the preference is to enable the script if "enable-custom-scripts" is not set. vCD will generate a configuration file with "DEFAULT-RUN-POST-CUST-SCRIPT" set to true. This flag works for both the VMware customization engine and cloud-init.
|
|
(#483)
Problem: when the cc_ca_certs configuration has both "remove-defaults: true"
and also specifies one or more new trusted CAs to add, the resultant
/etc/ca-certificates.conf file's first line is blank. As noted in comments
in the existing cc_ca_certs.py code, blank lines in this file cause problems.
Fix: before adding the cloud-init CA filename to this file, first check the
size of the file; if it is empty (as all existing CAs have been deleted),
then write only the cloud-init CA filename to the file rather than appending
it to the file.
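A hedged sketch of the described fix, not the exact cc_ca_certs code; the config path and CA filename below are taken from the commit's context but may not match the module's constants exactly:
```
import os

CA_CERT_CONFIG = "/etc/ca-certificates.conf"
CLOUD_INIT_CA_CERT = "cloud-init-ca-certs.crt"  # assumed filename

def add_ca_cert_entry():
    if os.stat(CA_CERT_CONFIG).st_size == 0:
        # All default CAs were removed: write the entry as the whole file,
        # so the first line is not left blank.
        with open(CA_CERT_CONFIG, "w") as fh:
            fh.write(CLOUD_INIT_CA_CERT + "\n")
    else:
        with open(CA_CERT_CONFIG, "a") as fh:
            fh.write(CLOUD_INIT_CA_CERT + "\n")
```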
|
|
* cloudinit: remove global disable of pylint W0107 and fix errors
This includes removing a test class which contained no tests but wasn't
detected as empty because of an errant pass statement.
* .pylintrc: update disable comment to match arguments
|
|
This includes a fix to a test that had a string concatenation issue, and
so was only testing a prefix of what was intended.
|
|
|
|
I've been seeing intermittent failures of this test, and I tracked it
down to something to do with `test_features.py`: running this test after
`test_features.py` causes the failure, but the inverse does not.
This patch ensures that the test will pass regardless of ordering.
|
|
Do not fail if /etc/fstab is not present. Some images, like container
rootfs images, may not include this file by default.
LP: #1886531
|
|
This is an improvement over indirect parameterisation for a few reasons:
* The test code is much easier to read, the mark names are much more
intuitive than the indirect parameterisation invocation, and there's
less boilerplate to boot
* The fixture no longer has to overload the single parameter that
fixtures can take with multiple meanings
|
|
For versions before 20.2, we allowed the use of ec2 mirrors if the datasource availability_zone matched one of the ec2 regions. We are now updating that behavior to allow the use of ec2 mirrors on ec2 instances, or if the user directly passes an ec2 mirror url through #cloud-config apt directives.
LP: #1456277
|
|
As the first refactor PR, this also includes the initial structure for tests.
LP: #1884619
|
|
Changes are made that simplify code and aim to properly support FreeBSD:
- use `util.find_devs_with` instead of calling `blkid` directly, because blkid is not well supported on FreeBSD and `util.find_devs_with` has a FreeBSD-specific solution for that
- introduce an additional name for the FAT file system, which is used on FreeBSD
- drop the explicit shell so the default value is used, because FreeBSD does not have `/bin/bash` by default
|
|
Create a schema object for the chef module and validate this schema in the handle function of the module.
Some of the config keys lacked a description, so I looked at the code and the Chef documentation to provide information to the user. However, I don't know if I have the best description for all fields. For example, for the key show_time I could not find an accurate description of what it did, so I used what was in our code base to infer what it should do.
LP: #1858888
|
|
Hetzner cloud only supports user-data as a string (presumably utf-8).
In order to allow users on Hetzner to provide binary data to cloud-init,
we will attempt to base64decode the userdata.
The change here adds a 'maybe_b64decode' function that will decode data
if and only if it is base64 encoded.
The reason for not using util.b64d is that we do not want the return value
decoded to a string, and util.b64d will do that if it can. Additionally,
we call decode with validate=True, which oddly is not the default.
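A minimal sketch matching the description above: return decoded bytes only when the input is valid base64, otherwise return the original bytes unchanged; validate=True makes non-alphabet characters fail cleanly instead of being ignored:
```
import base64
import binascii

def maybe_b64decode(data: bytes) -> bytes:
    """Decode data if and only if it is valid base64; never decode to str."""
    try:
        return base64.b64decode(data, validate=True)
    except (binascii.Error, ValueError):
        return data

assert maybe_b64decode(b"aGVsbG8=") == b"hello"
assert maybe_b64decode(b"not base64!") == b"not base64!"
```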
LP: #1884071
|
|
This allows us to disable the `ensure_dir` call when it isn't
appropriate.
|
|
This introduces a way to log the dhclient error stream, and uses it for the Azure datasource (where we have a specific requirement for this data to be logged).
|
|
When updating the docstring to include it, I realised that the current
name is somewhat misleading; this makes it a little easier to
understand, I think.
|
|
|
|
Reason: commit ded1ec8 introduced a regression whereby a bridge with no "parameters:" setting caused a KeyError exception.
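A minimal sketch of the defensive lookup implied here: treat a missing "parameters:" mapping as empty instead of indexing it directly (the function name is illustrative):
```
def bridge_params(bridge_cfg: dict) -> dict:
    # Indexing bridge_cfg["parameters"] raised KeyError when the key was absent.
    return bridge_cfg.get("parameters") or {}

assert bridge_params({"interfaces": ["eth0"]}) == {}
```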
LP: #1879673
|