Age | Commit message | Author |
|
In 2c52e6e88b19f5db8d55eb7280ee27703e05d75f, the order of
reading network config was changed for Oracle due to initramfs
needing to take lower precedence than the datasource. However,
this also bumped system_cfg to a lower precedence than ds, which
means that any network configuration specified in /etc/cloud will not
be applied. system_cfg should instead be moved above ds so network
configuration in /etc/cloud takes precedence.
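For example (an illustrative sketch, not part of the original change), a
network definition dropped into /etc/cloud/cloud.cfg.d/ is the kind of
system_cfg configuration that should again win over datasource-provided
config:
# /etc/cloud/cloud.cfg.d/99-custom-networking.cfg (hypothetical file name)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true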
LP: #1956788
|
|
distutils is getting deprecated soon. Let's replace it with the
alternatives suggested in:
https://www.python.org/dev/peps/pep-0632/
Remove `requests` version check and related code from url_helper.py
as the versions specified are old enough to no longer be relevant.
Signed-off-by: Shreenidhi Shedi <sshedi@vmware.com>
|
|
Adds a new module to allow setting keyboard layout,
for use-cases in which cloud-init is used to configure
OS images meant for physical computers instead
of the cloud.
This initial release only implements support
for Linux distributions that allow layout to be
set through systemd's localectl.
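A minimal #cloud-config sketch of the new setting (key names assumed from
the module schema; values are illustrative):
#cloud-config
keyboard:
  layout: de
  model: pc105          # optional
  variant: nodeadkeys   # optional
  options: compose:rwin # optional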
LP: #1951593
|
|
Includes:
- Update tox.ini and .travis.yml accordingly
- Cleanup tox.ini with new tox syntax and cloud-init dependencies
- Update documentation accordingly
- Replace/remove xenial references where additional testing isn't required
- Remove xenial checks in integration tests
- Replace yield_fixture with fixture in pytest tests
Sections of code commented with lines like "Remove when Xenial is no
longer supported" still exist, as they require additional testing.
|
|
Applied Black and isort, fixed any linting issues, updated tox.ini
and CI.
|
|
(#1123)
Allow #cloud-config and cloud-init query to use underscore-delimited
"jinja-safe" key aliases for any instance-data.json keys
containing jinja operator characters.
This provides a means to use Jinja's dot-notation instead of square brackets
and quoting to reference "unsafe" attribute names.
Support for these aliased keys is available to both #cloud-config user-data and
`cloud-init query`.
For example #cloud-config alias access can look like:
{{ ds.config.user_network_config }}
- instead of -
{{ ds.config["user.network-config"] }}
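A fuller, hypothetical #cloud-config sketch using an aliased key inside a
jinja user-data template (the ds.config key and command are illustrative):
## template: jinja
#cloud-config
runcmd:
  # user_user_data is the jinja-safe alias for the "user.user-data" key
  - echo "{{ ds.config.user_user_data }}" > /var/tmp/rendered-alias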
|
|
GCE currently fetches metadata after the network has come up. There's no
reason we can't fetch at init-local time, so update GCE to do so, making
it more performant and consistent with other datasources.
|
|
When cloud-init is configured to show SSH user key fingerprints during
boot, the same message appears twice for each user. This appears to be
because the util.multi_log call defaults to sending output both to the
console directly and to stderr (which also goes to the console).
This change sends them only to the console directly.
|
|
Also simplify a path and fix a spelling error while in the file
|
|
Add growpart integration test and associated unit tests
Additionally, add a small runcmd check for a commented line.
|
|
Move more tests into test_combined.py and remove the CI mark from module
tests that aren't updated often or don't represent core functionality.
|
|
- Added to the list of expected warnings on Oracle when the opc user has
  no ssh key
- Added retries to tests that read from syslog, as messages can sometimes
  take time to show up in the log
- Updated test_apt.py to move proxy info into its own test, as it can
  cause failures in updating, which will immediately traceback out of
  the module and prevent us from running further class tests
- Updated test_apt.py to use a more up-to-date PPA in test_keyserver
- Added basic rsyslog test to test_combined.py
- Added basic puppet test as test_puppet.py
|
|
On Bionic and Xenial, pycloudlib sets user.vendor-data config in lxd
to ensure that lxd-agent is set up on those images.
Adapt the lxd_discovery integration test to assert the appropriate
user.vendor-data config key exists if we are on xenial or bionic.
Also add assertions that /var/lib/cloud/seed/nocloud-net/meta-data still
exists in the images, because we want NoCloud to be a viable fallback
datasource if the LXD config security.devlxd = false or LXD datasource
discovery encounters an unexpected error.
|
|
Integration test runs get unique log directories at
/tmp/cloud_init_test_logs/$DATE_TIME. Make
/tmp/cloud_init_test_logs/last always point to the most recent
integration test directory.
|
|
Chef tests attempt to reach out to test URLs, which will get blocked
on our OpenStack installs.
|
|
In our integration tests, a few tests were modifying the environment and
then calling 'install_new_cloud_init'. This is problematic because it
updates the environment for all future tests.
Other instances of 'install_new_cloud_init' aren't problematic because
they aren't modifying the underlying environment.
|
|
Add DataSourceLXD which knows how to talk to the dev-lxd socket to
obtain all instance metadata API:
https://linuxcontainers.org/lxd/docs/master/dev-lxd.
This first branch is to deliver feature parity with the existing
NoCloud datasource, which is currently used to initialize LXC instances
on first boot.
Introduce a SocketConnectionPool and LXDSocketAdapter to support
performing HTTP GETs on the following routes which are surfaced by the
LXD host to all containers:
http://unix.socket/1.0/meta-data
http://unix.socket/1.0/config/user.user-data
http://unix.socket/1.0/config/user.network-config
http://unix.socket/1.0/config/user.vendor-data
These 4 routes minimally replace the static content provided in the
following nocloud-net seed files:
/var/lib/cloud/nocloud-net/{meta-data,vendor-data,user-data,network-config}
The intent of this commit is to set a foundation for LXD socket
communication that will allow us to build network hot-plug features
by eventually consuming LXD's websocket upgrade route 1.0/events to
react to network, meta-data and user-data config changes over time.
In the event that no custom network-config is provided, default to the
same network-config definition provided by LXD to the NoCloud
network-config seed file.
Supplemental features beyond the NoCloud datasource: surface all custom
instance-data config keys via `cloud-init query ds`, which aids in
discoverability of features/tags/labels as well as conditional
#cloud-config jinja template operations based on custom config options.
TBD: better cloud-init query support for dot-delimited keys
|
|
Also, add the "signed-by" option to source definitions. This enables
users to limit the scope of trust for individual keys.
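A hedged #cloud-config sketch of a source definition using signed-by
(repository URL and key id are placeholders; substitution variables as
documented for apt configuration):
#cloud-config
apt:
  sources:
    example-repo:
      keyid: 0123456789ABCDEF          # placeholder key id
      keyserver: keyserver.ubuntu.com
      # $KEY_FILE expands to the path of the key written for this source
      source: "deb [signed-by=$KEY_FILE] http://repo.example.com/ubuntu $RELEASE main"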
LP: #1836336
|
|
This commit removes automatically installing udev rules for hotplug
and adds a module to install them instead.
Automatically including the udev rules and checking if hotplug was
enabled consumed too many resources in certain circumstances. Moving the
rules to a module ensures we don't spend extra cycles on hotplug if
hotplug functionality isn't desired.
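With the rules shipped by a module, hotplug support is only installed when
requested. A minimal #cloud-config sketch of opting in (keys per the
update-events configuration):
#cloud-config
updates:
  network:
    when: ['hotplug']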
LP: #1946003
|
|
The main idea is to introduce a second module that takes care of
writing files, but in the 'final' stage.
While the introduction of a second module would allow for choosing
the appropriate place within the order of modules (and stages),
no additional top-level directive is being added to the cloud
configuration schema. Instead, the 'write-files' schema is being extended
to include a 'defer' attribute used only by the 'write-deferred-files'
module.
The new module 'write-deferred-files' reuses as much as
possible of the 'write-files' functionality.
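A minimal #cloud-config sketch of deferring a write to the final stage
('defer' is the new attribute; path and content are illustrative):
#cloud-config
write_files:
  - path: /etc/example/app.conf   # illustrative path
    content: |
      setting = value
    defer: true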
|
|
When self.failed_desired_api_version was added to DataSourceAzure, the
attribute was never added to the _unpickle method using the upgrade
framework. This commit adds the attribute.
LP: #1946644
|
|
In #919 (81299de), we refactored some of the code used to bring up
networks across distros. Previously, the call to bring up network
interfaces during 'init' stage unintentionally resulted in a no-op
such that network interfaces were NEVER brought up by cloud-init, even
if new network interfaces were found after crawling the metadata.
The code was altered to bring up these discovered network interfaces.
On ubuntu, this results in a 'netplan apply' call during 'init' stage
for any ubuntu-based distro on a datasource that has a NETWORK
dependency. On GCE, this additional 'netplan apply' conflicts with the
google-guest-agent service, resulting in an instance that cannot
be connected to.
This commit adds a 'disable_network_activation' option that can be
enabled in /etc/cloud/cloud.cfg to disable the activation of network
interfaces in the 'init' stage.
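A hedged sketch of the new option as it would appear in system
configuration (the drop-in file name is hypothetical):
# /etc/cloud/cloud.cfg.d/99-disable-network-activation.cfg (hypothetical)
disable_network_activation: true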
LP: #1938299
|
|
In #1006, we set Azure to apply networking config every
BOOT_NEW_INSTANCE because the BOOT_LEGACY option was causing problems
applying networking the second time per boot. However,
BOOT_NEW_INSTANCE is also wrong as Azure needs to apply networking
once per boot, during init-local phase.
|
|
* Update test_combined.py to allow either valid LXD subplatform
* Split jinja templated tests into separate module as they can be more
fragile
* Move checks for warnings and tracebacks into dedicated utility
function. This allows us to work around persistent and expected
tracebacks/warnings on particular clouds.
* Update test_upgrade.py to allow either valid Azure datasource.
/var/lib/waagent or a mounted device are both valid.
* Add specificity to test_ntp_servers.py
Clouds will often specify their own ntp servers in the ntp
configuration files, so make the tests manually specify their own.
* Account for additional keys on system in test_ssh_keysfiles.py
* Update tests to account for invalid cache
test_user_events.py and test_version_change.py both have tests that
assume we will have valid ds cache when rebooting.
In test_user_events.py, subsequent boots should block applying
networking config on boot if the boot event is denied. However, if the
cache is invalid, it is valid to apply networking config on that boot.
In test_version_change.py, 'no cache found' won't trigger the expected
debug log. Additionally, the pickle used for that test on an older
release triggered an unexpected issue that took a different error
path.
* Ignore bionic in hotplug tests (LP: #1942247)
On Bionic, we traceback when attempting to detect the hotplugged
device in the updated metadata. This is because Bionic is
specifically configured not to provide network metadata.
See LP: #1942247 for more details.
* Fix date used in test_final_message.
In test_final_message, we ensured the variable substitution works as
expected. For $timestamp, we compared against the current date. It's
possible for the host date to be massively different from the client
date, so obtain date on client rather than host.
* Remove module success from lp1813396 test. Module may fail
unrelatedly (in this case apt-get update is failing), but the test
should still pass.
* Skip testing events if network is disabled
* Ensure we install expected version of cloud-init
As part of test setup, we can install cloud-init from various
sources, including PROPOSED, PPAs, etc. We were never checking that
this install completes successfully, and on OCI, it wasn't
completing successfully because of apt locking issues. Code has
been updated to retry, and then fail loudly if we can't complete the
install.
* Remove ubuntu-azure-fips metapkg which mandates FIPS-flavour kernel
In test_lp1835584.py
* Update test_user_events.py to account for Azure behavior
since Azure has a separate service to clear the pickled metadata
every boot
* Change failure to warning in test_upgrade.py if initial boot errors
If there's already a pre-existing cause for warnings or tracebacks,
that shouldn't cause the new version to fail.
* Add retry to test_random_passwords_emitted_to_serial_console
It's possible we haven't retrieved the entire log when the call returns,
so retry a few times if the output isn't empty.
|
|
Use flake8 in place of pyflakes
Renamed run-pyflakes -> run-flake8
Changed target name to flake8 in Makefile
With pyflakes we can't suppress warnings/errors in a few required places.
flake8 is flexible in that regard, so it seems to be the better choice
here. flake8 does the job of pep8 anyway, so the pep8 target was removed
from the Makefile along with the tools/run-pep8 script.
Included setup.py in flake8 checks
|
|
Home directory permissions changed in hirsute. The integration test
assumed permissions from earlier releases. The test was fixed to take
both sets of permissions into account.
|
|
Fix home permissions modified by ssh module
In #956, we updated the file and directory permissions for keys not in
the user's home directory. We also unintentionally modified the
permissions within the home directory itself. These should not change,
and this commit reverts that behavior.
LP: #1940233
|
|
Ensure jinja templates work for both instance-data.json and
instance-data-sensitive.json. Test for LP: #1931392
Also removed test_runcmd.py as it's made redundant by this change.
|
|
The issues we see on Bionic VMs don't appear anywhere else, including
when invoking kvm directly. It likely has to do with the extra
LXD agent setup happening on bionic. Given that we still have Bionic
covered on all other platforms, the risk of skipping bionic for LXD VM
tests seems low.
|
|
Alters the hotplug hook to add a query mechanism for checking whether the
functionality is enabled. This allows us to avoid using the hotplug
socket and service when hotplug is disabled.
|
|
(SC-191) (#955)
This should enable us to remove the cloud-tests entirely.
|
|
Implement missing device_aliases feature
The device_aliases key has been documented as part of disk_setup for
years, but the feature was never implemented. This implements the
feature as documented, allowing user config (rather than datasource
config) to create a mapping of device names.
This is not to be confused with disk_aliases, a very similar map but
existing solely for use by datasources.
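A hedged #cloud-config sketch of device_aliases alongside disk_setup and
fs_setup (device path and label are illustrative):
#cloud-config
device_aliases:
  my_alias: /dev/sdb   # illustrative device
disk_setup:
  my_alias:
    table_type: gpt
    layout: true
    overwrite: false
fs_setup:
  - label: data
    filesystem: ext4
    device: my_alias.1   # first partition on the aliased device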
LP: #1867532
|
|
test_ssh_import_id.py occasionally fails because cloud-init finishes
before the keys have been fully imported. A retry has been added to the
test.
|
|
Adds a udev script which will invoke a hotplug hook script on all net
add events. The script will write some udev arguments to a systemd FIFO
socket (to ensure we have only one instance of cloud-init running at a
time), which is then read by a new service that calls a new 'cloud-init
devel hotplug-hook' command to handle the new event.
This hotplug-hook command will:
- Fetch the pickled datasource
- Verify that the hotplug event is supported/enabled
- Update the metadata for the datasource
- Ensure the hotplugged device exists within the datasource
- Apply the config change on the datasource metadata
- Bring up the new interface (or apply global network configuration)
- Save the updated metadata back to the pickle cache
Also scattered in some unrelated typing where helpful
|
|
Python 3.6 added a new `policy` attribute to `MIMEMultipart`.
MIMEMultipart may be part of the cached object pickle of a datasource.
Upgrading from an old version of python to 3.6+ will cause the
datasource to be invalid after pickle load.
This commit uses the upgrade framework to attempt to access the mime
message and fail early (thus discarding the cache) if we cannot.
Commit 78e89b03 should fix this issue more generally.
|
|
defined in AuthorizedKeysFile (#937)
This patch aims to fix LP1911680 by analyzing the files provided
in sshd_config and merging all keys into a user-specific file. Also
introduces additional tests to cover this specific case.
The file is picked by analyzing the path given in AuthorizedKeysFile.
If it points inside the current user's folder (the path is /home/user/*),
it is a user-specific file, so we can copy all user keys there.
If it contains %u or %h, there will be a specific authorized_keys file
for each user, so we can copy all user keys there.
If no path points to a user-specific file, for example when only
/etc/ssh/authorized_keys is given, default to ~/.ssh/authorized_keys.
Note that if there is more than one user-specific file, the last
one will be picked.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Co-authored-by: James Falcon <therealfalcon@gmail.com>
LP: #1911680
RHBZ:1862967
|
|
test_upgrade.py was outputting a ton of stuff that had to be manually
collected and verified. This commit adds more assertions to the test
and outputs directly to the logs rather than separate files.
|
|
summary: Clear cache when a Python version change is detected
When a distribution gets updated, it is possible that the Python version
changes. Python makes no guarantee that pickle is consistent across
versions; as such, we need to purge the cache and start over.
Co-authored-by: James Falcon <therealfalcon@gmail.com>
|
|
Also new jenkins tox definition
|
|
|
Ensure no Traceback when 'chef_license' is set
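A hedged #cloud-config sketch of the setting involved ('accept' is one of
the documented license values):
#cloud-config
chef:
  chef_license: accept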
|
|
In #856 we added the ability to use partprobe instead of blockdev for
reading partitions. Test that partprobe succeeds where blockdev fails.
Also add a mechanism to our integration tests to allow a callable to be
called between `lxc init` and `lxc start`
|
|
Control is currently limited to boot events, though this should
allow us to more easily incorporate HOTPLUG support. Disabling
'instance-first-boot' is not supported as we apply networking config
too early in boot to have processed userdata (along with the fact
that this would be a pretty big foot-gun).
The concept of update events on a datasource has been split into
supported update events and default update events. Defaults will be
used if there are no user-defined update events, but user-defined
events won't be supplied if they aren't supported.
When applying the networking config, we now check to see if the event
is supported by the datasource as well as if it is enabled.
Configuration looks like:
updates:
network:
when: ['boot']
|
|
This allows us to use it when validating packages from -proposed (and
PPAs etc.).
|
|
In #777, we added 'vendordata2' and 'vendordata2_raw' attributes to
the DataSource class, but didn't use the upgrade framework to deal
with an unpickle after upgrade. This commit adds the necessary
upgrade code.
Additionally, added a smaller-scope upgrade test to our integration
tests that will be run on every CI run so we catch these issues
immediately in the future.
LP: #1922739
|
|
The above option allows the user to control a distro's hostname
selection when both a short hostname and an FQDN are supplied.
If `prefer_fqdn_over_hostname` is true, the FQDN will be selected as
the hostname; if false, the short hostname will be selected.
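A hedged #cloud-config sketch (hostname and fqdn values are illustrative):
#cloud-config
prefer_fqdn_over_hostname: true
hostname: myhost
fqdn: myhost.example.com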
LP: #1921004
|
|
The current method of running a background sleep until travis is
finished is causing integration test runs to pass even when they should
be failing.
Instead, update the code to emit dots itself.
|
|
When output of SSH host keys and/or SSH fingerprints is disabled for
all keys, do not display headers and footers.
Prevent the risk of message text being interpreted as a "logger" option by
appending "--" to the logger options.
Correct syslog output that was tagged with "ec2" regardless of DataSource
in use. Now use "cloud-init" tag instead.
Various "shellcheck" corrections.
Add a test case for disabled output of SSH host keys.
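A hedged #cloud-config sketch of suppressing this console output by
blacklisting key types (option names per the keys-to-console handling;
the key-type list is illustrative):
#cloud-config
ssh_fp_console_blacklist: [ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519]
ssh_key_console_blacklist: [ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519]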
|