Age | Commit message | Author |
|
Add a test helper to get top level directory
Many tests need to get the location of files & dirs within the cloud-init
project directory.
Tests implement this in various ways, and those ways often depend on
the current working directory of the pytest invocation.
Create helper functions (and tests) that get the path of the top-level
directory or any subdirectory under it. These helpers do not depend on
the environment.
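A minimal sketch of what such a helper can look like, assuming it lives in a
helpers module two directories below the project root (names and depth here
are illustrative):
    from pathlib import Path

    def get_top_level_dir() -> Path:
        # Resolve relative to this file, not the current working directory,
        # so the result does not depend on where pytest was invoked from.
        return Path(__file__).parents[2].resolve()

    def cloud_init_project_dir(sub_path: str) -> str:
        # Path of any sub directory under the top-level project directory.
        return str(get_top_level_dir() / sub_path)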
|
|
Parametrized pytest tests get named based on their parameters. If a name
contains random characters, it can break test collection when using
pytest-xdist. Replace the random names with deterministic ones.
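For example, supplying explicit ids via pytest.param keeps the generated test
names stable (a generic illustration, not the exact test that changed):
    import pytest

    @pytest.mark.parametrize(
        "payload",
        [
            # Without an explicit id, pytest derives the test name from the value;
            # a randomly generated value breaks collection under pytest-xdist.
            pytest.param("fixed-example-payload", id="simple_payload"),
        ],
    )
    def test_handles_payload(payload):
        assert payload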
|
|
|
|
|
|
Improve schema validation.
This adds strict validation of config module definitions at testing
time, with plumbing included for future runtime validation. This
eliminates a class of bugs caused by incorrect schema definitions that
jsonschema interprets as "additionalProperties" and therefore silently
ignores.
- Add strict meta-schema for jsonschema unit test validation
- Separate schema from module metadata structure
- Improve type annotations for various functions and data types
Cleanup:
- Remove unused jsonschema "required" elements
- Eliminate manual memoization in schema.py:get_schema(),
reference module.__doc__ directly
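Roughly, the strict meta-schema added here validates each module's schema
against a meta-schema that forbids unknown keywords, so a misspelled keyword
fails at test time instead of being silently ignored. A simplified sketch,
not the exact meta-schema shipped in schema.py:
    from jsonschema import Draft4Validator, validate

    # Reject any top-level schema keyword jsonschema does not know about,
    # instead of letting it pass as an ignorable extra property.
    STRICT_METASCHEMA = dict(Draft4Validator.META_SCHEMA, additionalProperties=False)

    def assert_valid_module_schema(module_schema: dict) -> None:
        # Raises jsonschema.exceptions.ValidationError on unknown/misspelled keywords.
        validate(instance=module_schema, schema=STRICT_METASCHEMA)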
|
|
If the DHCP server hands out a lease like this:
$ cat /var/tmp/cloud-init/cloud-init-dhcp-f0rie5tm/dhcp.leases
lease {
...
option classless-static-routes 31.169.254.169.254 0.0.0.0,31.169.254.169.254
10.112.143.127,22.10.112.140 0.0.0.0,0 10.112.140.1;
...
}
cloud-init fails to configure the routes via 'ip route add' because there are
two different routes for 169.254.169.254:
$ ip -4 route add 192.168.1.1/32 via 0.0.0.0 dev eth0
$ ip -4 route add 192.168.1.1/32 via 10.112.140.248 dev eth0
But NetworkManager handles such a scenario successfully because it uses "ip route append".
So change cloud-init to also use "ip route append" to fix the issue:
$ ip -4 route append 192.168.1.1/32 via 0.0.0.0 dev eth0
$ ip -4 route append 192.168.1.1/32 via 10.112.140.248 dev eth0
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
RHBZ: #2003231
|
|
This attempts to standardize unit test file location under tests/unittests/
such that any source file located at cloudinit/path/to/file.py may have a
corresponding unit test file at tests/unittests/path/to/test_file.py.
Noteworthy Comments:
====================
Four different duplicate test files existed:
test_{gpg,util,cc_mounts,cc_resolv_conf}.py
Each of these duplicate file pairs has been merged together. This is a
break in git history for these files.
The test suite appears to have a dependency on test order. Changing test
order causes some tests to fail. This should be rectified, but for now
some tests have been modified in
tests/unittests/config/test_set_passwords.py.
A helper class name started with "Test", which caused pytest to try
executing it as a test case and emit warnings about the class having an
__init__(). Silence this by renaming the class.
# helpers.py is imported in many test files, import paths change
cloudinit/tests/helpers.py -> tests/unittests/helpers.py
# Move directories:
cloudinit/distros/tests -> tests/unittests/distros
cloudinit/cmd/devel/tests -> tests/unittests/cmd/devel
cloudinit/cmd/tests -> tests/unittests/cmd/
cloudinit/sources/helpers/tests -> tests/unittests/sources/helpers
cloudinit/sources/tests -> tests/unittests/sources
cloudinit/net/tests -> tests/unittests/net
cloudinit/config/tests -> tests/unittests/config
cloudinit/analyze/tests/ -> tests/unittests/analyze/
# Standardize tests already in tests/unittests/
test_datasource -> sources
test_distros -> distros
test_vmware -> sources/vmware
test_handler -> config # this contains cloudconfig module tests
test_runs -> runs
|
|
Given that there are additional network management tools that we don't
yet support with activators, we should log a warning and continue
without network activation here, especially since this was a no-op for
years.
LP: #1948681
|
|
(#1123)
Allow #cloud-config and cloud-init query to use underscore-delimited
"jinja-safe" key aliases for any instance-data.json keys
containing jinja operator characters.
This provides a means to use Jinja's dot-notation instead of square brackets
and quoting to reference "unsafe" attribute names.
Support for these aliased keys is available to both #cloud-config user-data and
`cloud-init query`.
For example #cloud-config alias access can look like:
{{ ds.config.user_network_config }}
- instead of -
{{ ds.config["user.network-config"] }}
|
|
GCE currently fetches metadata after the network has come up. There's no
reason we can't fetch at init-local time, so update GCE to fetch at
init-local time to be more performant and consistent with other
datasources.
|
|
Some references were missed in the removal of the agent command
in PR #799. This simply removes the remaining references.
Signed-off-by: Chris Patterson <cpatterson@microsoft.com>
|
|
testing: monkeypatch system_info call in unit tests
system_info can make calls that read from or write to the filesystem, which
should require special mocking. It is also decorated with `lru_cache`,
which means test authors often don't realize they need to mock it.
Also, we don't actually want the results from the user's local
machine, so monkeypatching it across all tests should be reasonable.
Additionally, moved some of `system_info` into a helper function to
reduce the surface area of the monkeypatch, added tests for the new
function (and fixed a bug as a result), and removed related mocks that
should be no longer needed.
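A minimal sketch of such a test-wide monkeypatch in a conftest.py; the fixture
name and canned return value are illustrative:
    import pytest

    FAKE_SYSTEM_INFO = {
        "system": "Linux",
        "release": "5.4.0-fake",
        "python": "3.8.10",
        "uname": [],
        "dist": ("ubuntu", "20.04", "focal"),
        "variant": "ubuntu",
    }

    @pytest.fixture(autouse=True)
    def fake_system_info(monkeypatch):
        # Avoid reading the test machine's real platform details, and avoid the
        # lru_cache on system_info leaking host-specific results between tests.
        monkeypatch.setattr("cloudinit.util.system_info", lambda: FAKE_SYSTEM_INFO)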
|
|
Add growpart integration test and associated unit tests
Additionally, add a small runcmd check for a commented line.
|
|
Currently any attempt to run an apt command while another process holds
an apt lock will fail. We should instead wait to acquire the apt lock.
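Conceptually this amounts to retrying while another process holds the lock
instead of failing on the first attempt. A generic, hedged sketch of such a
wait loop (not the exact cloud-init implementation):
    import subprocess
    import time

    def run_apt_command(cmd, timeout=300, interval=5):
        """Run an apt command, waiting for the apt/dpkg lock to become free."""
        deadline = time.monotonic() + timeout
        while True:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return result
            if "Could not get lock" not in result.stderr or time.monotonic() > deadline:
                raise RuntimeError("apt command failed: " + result.stderr)
            time.sleep(interval)  # lock is held by another process; wait and retry

    # run_apt_command(["apt-get", "-y", "install", "htop"])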
LP: #1944611
|
|
Without UDF support, DS Azure cannot mount the provisioning ISO,
which contains platform metadata necessary to support
pre-provisioning. The required metadata is made available in IMDS
starting with API version 2021-08-01. This change leverages IMDS
to obtain the required metadata to support pre-provisioning when the
provisioning ISO is not available.
|
|
Add DataSourceLXD, which knows how to talk to the dev-lxd socket to
obtain all instance meta-data via the API documented at:
https://linuxcontainers.org/lxd/docs/master/dev-lxd.
This first branch is to deliver feature parity with the existing
NoCloud datasource which is currently used to initialize LXC instances
on first boot.
Introduce a SocketConnectionPool and LXDSocketAdapter to support
performing HTTP GETs on the following routes which are surfaced by the
LXD host to all containers:
http://unix.socket/1.0/meta-data
http://unix.socket/1.0/config/user.user-data
http://unix.socket/1.0/config/user.network-config
http://unix.socket/1.0/config/user.vendor-data
These 4 routes minimally replace the static content provided in the
following nocloud-net seed files:
/var/lib/cloud/nocloud-net/{meta-data,vendor-data,user-data,network-config}
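For illustration, one of these routes can be fetched over the LXD unix socket
with a plain http.client connection; this is only a sketch, not the
SocketConnectionPool/LXDSocketAdapter code added by this branch:
    import http.client
    import socket

    LXD_SOCKET = "/dev/lxd/sock"  # dev-lxd socket surfaced inside every instance

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a unix domain socket instead of TCP."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection(LXD_SOCKET)
    conn.request("GET", "/1.0/meta-data")
    print(conn.getresponse().read().decode())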
The intent of this commit is to set a foundation for LXD socket
communication that will allow us to build network hot-plug features
by eventually consuming LXD's websocket upgrade route 1.0/events to
react to network, meta-data and user-data config changes over time.
In the event that no custom network-config is provided, default to the
same network-config definition provided by LXD to the NoCloud
network-config seed file.
Supplemental features over the NoCloud datasource:
surface all custom instance-data config keys via `cloud-init query ds`,
which aids in discoverability of features/tags/labels as well as
conditional #cloud-config jinja template operations based on custom
config options.
TBD: better cloud-init query support for dot-delimited keys
|
|
When we added the install hotplug module, we forgot to update the
redhat/cloud-init.spec.in file and allow for execution from /usr/libexec.
This PR adds that functionality.
|
|
|
|
Also, add the "signed-by" option to source definitions. This enables
users to limit the scope of trust for individual keys.
LP: #1836336
|
|
This commit removes automatically installing udev rules for hotplug
and adds a module to install them instead.
Automatically including the udev rules and checking if hotplug was
enabled consumed too many resources in certain circumstances. Moving the
rules to a module ensures we don't spend extra cycles on hotplug
if hotplug functionality isn't desired.
LP: #1946003
|
|
The main idea is to introduce a second module that takes care of
writing files, but in the 'final' stage.
While the introduction of a second module allows for choosing the
appropriate place within the order of modules (and stages), no
additional top-level directive is added to the cloud configuration
schema. Instead, the 'write-files' schema is extended to include a
'defer' attribute used only by the 'write-deferred-files' module.
The new module 'write-deferred-files' reuses as much as
possible of the 'write-files' functionality.
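Conceptually the new module handles the same 'write_files' entries, just
filtered on the new 'defer' flag and executed in the final stage. A rough
sketch of that filtering (function name is illustrative):
    def filter_files(cfg: dict, deferred: bool) -> list:
        # Entries with defer: true are skipped by the early write-files module
        # and written only by write-deferred-files in the 'final' stage.
        return [
            entry
            for entry in cfg.get("write_files", [])
            if bool(entry.get("defer", False)) is deferred
        ]

    cfg = {"write_files": [
        {"path": "/etc/early.conf", "content": "a"},
        {"path": "/home/user/.secret", "content": "b", "defer": True},
    ]}
    assert [f["path"] for f in filter_files(cfg, deferred=True)] == ["/home/user/.secret"]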
|
|
In jsonschema 4, hostname validation was changed to have an optional
dependency on the fqdn package. Since we don't have this dependency
in cloud-init, attempting this validation will no longer fail for
a string that isn't a valid hostname.
|
|
Various modules restart services, and they all have logic to try to
detect whether they are running on a system that needs 'systemctl' or
'service', and then have code to decide in which order the arguments
need to be passed, etc. On top of that, not all modules do this in the same way.
The duplication and different approaches are not ideal, but they also
make it hard to add support for a new distribution that does not use
either 'systemctl' or 'service'.
This change adds a new manage_service() method to the distro class
and updates several modules to use it.
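A hedged sketch of what such a distro-level helper can look like; the real
method's signature and error handling may differ:
    import subprocess

    def manage_service(action: str, service: str, uses_systemd: bool = True):
        """Run a service action ('start', 'stop', 'restart', 'enable', ...)
        with the argument order the init system expects."""
        if uses_systemd:
            cmd = ["systemctl", action, service]   # e.g. systemctl restart ssh
        else:
            cmd = ["service", service, action]     # e.g. service ssh restart
        return subprocess.run(cmd, check=True)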
|
|
There is no reason for the ISO to be missing this functionality.
As discussed in https://github.com/canonical/cloud-init/pull/947/files#r707338489
|
|
|
|
PyYAML upgraded from 5.4.1 to 6.0.0. 6.0.0 always requires a `Loader`
arg to `yaml.load()`
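For example:
    import yaml

    data = "key: value"
    # yaml.load(data)  # TypeError under PyYAML 6.0.0: missing required 'Loader' arg
    parsed = yaml.load(data, Loader=yaml.SafeLoader)
    # or equivalently:
    parsed = yaml.safe_load(data)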
|
|
Also added supporting distro/datasource classes and updated tests
that have a `get_cloud` call.
|
|
Due to multiarch, libdeployPkgPlugin.so is deployed into the directory
/usr/lib/<multiarch name>/open-vm-tools, so we need to add this path
to search_paths.
LP: #1944946
|
|
Allow comments in runcmd and report failed commands correctly
A `runcmd` script may fail to parse properly, but does not mark
`runcmd` as failed when that occurs. Additionally `shellify()` fails
to correctly parse scripts that contain a comment line.
Rectify both issues and add unit tests to verify correct behavior.
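For reference, the intended shellify() behavior is roughly: list entries
become quoted commands while string entries (including '# comment' lines)
pass through verbatim. A simplified sketch, not the actual util.shellify
implementation:
    import shlex

    def shellify(cmdlist) -> str:
        """Turn a runcmd-style list into a shell script body."""
        lines = ["#!/bin/sh"]
        for entry in cmdlist:
            if isinstance(entry, (list, tuple)):
                lines.append(" ".join(shlex.quote(str(arg)) for arg in entry))
            elif isinstance(entry, str):
                # Plain strings, including comment lines, are emitted as-is.
                lines.append(entry)
            else:
                raise TypeError("Unsupported runcmd entry: %r" % (entry,))
        return "\n".join(lines) + "\n"

    print(shellify([["echo", "hello"], "# just a comment", "echo done"]))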
LP: #1853146
|
|
OpenNebula 6.1.80 (current dev. version) is introducing a new IPv6 gateway
contextualization variable, ETHx_IP6_GATEWAY, which mimics the existing
variable ETHx_GATEWAY6. The ETHx_GATEWAY6 variable used until now will
be deprecated in a future release (ETA spring 2022).
See:
- new variable - https://github.com/OpenNebula/one/commit/e4d2cc11b9f3c6d01b53774b831f48d9d089c1cc
- deprecation tracking issue - https://github.com/OpenNebula/one/issues/5536
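Conceptually the datasource can simply prefer the new variable and fall back
to the deprecated one, roughly:
    def get_ip6_gateway(context: dict, nic: str = "ETH0") -> str:
        # Prefer the new ETHx_IP6_GATEWAY; fall back to the deprecated ETHx_GATEWAY6.
        return context.get(nic + "_IP6_GATEWAY") or context.get(nic + "_GATEWAY6", "")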
Also, added support for the SET_HOSTNAME context variable, which is
currently a widely used variable to configure the guest VM hostname. See
https://docs.opennebula.io/6.0/management_and_operations/references/template.html#context-section
|
|
Cloud tests have been replaced with integration tests
|
|
Offload Vultr's vendordata assembly to the backend, correct vendordata
storage and parsing, allow passing critical data via the user agent, and
improve networking configuration for additional interfaces.
|
|
tox: bump the pinned flake8 and pylint version
* pylint: fix W1406 (redundant-u-string-prefix)
The u prefix for strings is no longer necessary in Python >=3.0.
* pylint: disable W1514 (unspecified-encoding)
From https://www.python.org/dev/peps/pep-0597/ (Python 3.10):
The new warning stems from https://www.python.org/dev/peps/pep-0597,
which says:
Developers using macOS or Linux may forget that the default encoding
is not always UTF-8. [...] Even Python experts may assume that the
default encoding is UTF-8. This creates bugs that only happen on Windows.
The warning could be fixed by always specifying encoding='utf-8',
however we should be careful to not break environments which are not
utf-8 (or explicitly state that only utf-8 is supported). Let's silence
the warning for now.
* _quick_read_instance_id: cover the case where load_yaml() returns None
Spotted by pylint:
- E1135 (unsupported-membership-test)
- E1136 (unsubscriptable-object)
LP: #1944414
|
|
|
|
openEuler Homepage: https://www.openeuler.org/en/
|
|
As specified in LP: #1845552,
In cloudinit/ssh_util.py, in parse_ssh_config_lines(), we attempt to
parse each line of sshd_config. This function expects each line to
be one of the following forms:
# comment
key value
key=value
However, options like DenyGroups and DenyUsers are specified to
*optionally* accept values in sshd_config.
Cloud-init should comply with this and skip the option if a value
is not provided.
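A hedged sketch of tolerating value-less options during parsing; the real
logic lives in cloudinit/ssh_util.py and differs in detail:
    def parse_ssh_config_line(line: str):
        """Return (key, value) for an sshd_config line, or None to skip it."""
        line = line.strip()
        if not line or line.startswith("#"):
            return None
        # Accept both "key value" and "key=value" forms.
        if "=" in line and " " not in line.split("=", 1)[0]:
            key, _, val = line.partition("=")
        else:
            key, _, val = line.partition(" ")
        key, val = key.strip(), val.strip()
        if not val:
            # DenyUsers/DenyGroups may legitimately carry no value; skip them
            # instead of raising a parse error.
            return None
        return key.lower(), val

    assert parse_ssh_config_line("DenyUsers") is None
    assert parse_ssh_config_line("AuthorizedKeysFile .ssh/authorized_keys") == (
        "authorizedkeysfile", ".ssh/authorized_keys")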
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
|
|
https://www.cloudlinux.com/
|
|
The current code starts the puppet agent and also sets autostart
in all cases. This conflicts with a common pattern where puppet
itself manages the agent and autostart state.
For example, in my deployment puppet disables the puppet agent
and replaces it with a cron job. This causes various races
both within this cloud-init unit and within puppet itself
while cloud-init and puppet fight over whether or not
to enable the service.
|
|
Also fix search path in networkd
|
|
In the nic attach path, we skip doing dhcp since we already did it
when bringing the interface up. However, when polling for
reprovisiondata, it is possible for the request to time out due to
platform issues. In those cases we still need to do dhcp and try again
since we tear down the context. We can only skip the first dhcp
attempt.
|
|
before rebinding again (#990)
Add 10 second polling loop in wait_for_link_up after performing
an unbind and re-bind of primary NIC in hv_netvsc driver.
Also reduce cloud-init logging levels to debug for these operations.
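Roughly, the change polls the link state for a bounded time before attempting
another unbind/re-bind; a generic sketch, not the exact Azure code:
    import time

    def wait_for_link_up(is_up, timeout=10.0, interval=1.0) -> bool:
        """Poll the is_up() callable for up to `timeout` seconds."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if is_up():
                return True
            time.sleep(interval)
        return False  # caller may then retry the unbind/re-bind of the NIC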
|
|
|
|
Add tests for cc_resolv_conf handler
|
|
* Replace broken httpretty tests with mock
Certain versions of python/httpretty don't work correctly using https
URIs. #960 recently added httpretty tests using https. This commit
replaces the httpretty tests that were failing on https with mocks of
readurl instead.
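For example, a test can patch readurl directly instead of registering an
httpretty URI (a sketch; the real tests patch the import path appropriate to
the module under test):
    from unittest import mock

    @mock.patch("cloudinit.url_helper.readurl")
    def test_fetches_metadata(m_readurl):
        # Return a canned response object instead of performing a real HTTPS call.
        m_readurl.return_value = mock.Mock(contents=b'{"instance-id": "i-123"}', code=200)
        ...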
|
|
When bringing an interface up by unbinding and then binding the hv_netvsc
driver, it might take a short delay after binding for the link to be
up. So before trying unbind/bind again after sleep, check if the link
is up. This is a corner case when a preprovisioned VM is reused and
the NICs are hot-attached.
|
|
|
|
- update the puppet module to support AIO installations by setting
`install_type` to `aio`
- make the install collection configurable through the `collection`
parameter; by default the rolling `puppet` collection will be used,
which installs the latest version
- when `install_type` is `aio`, puppetlabs repos will be purged after
installation; set `cleanup` to `False` to prevent this
- AIO installations are performed by downloading and executing a shell
script; the URL for this script can be overridden using the
`aio_install_url` parameter
- make it possible to run puppet agent after installation/configuration
via the `exec` key
- by default, puppet agent will run with the `--test` argument; this can
be overridden via the `exec_args` key
|
|
This patch finally introduces the Cloud-Init Datasource for VMware
GuestInfo as a part of cloud-init proper. This datasource has existed
since 2018, and rapidly became the de facto datasource for developers
working with Packer and Terraform, for projects like kube-image-builder,
and the de jure datasource for Photon OS.
The major change to the datasource from its previous incarnation is
the name. Now named DatasourceVMware, this new version of the
datasource will allow multiple transport types in addition to
GuestInfo keys.
This datasource includes several unique features developed to address
real-world situations:
* Support for reading any key (metadata, userdata, vendordata) both
from the guestinfo table when running on a VM in vSphere as well as
from an environment variable when running inside of a container,
useful for rapid dev/test.
* Allows booting with DHCP while still providing full participation
in Cloud-Init instance data and Jinja queries. The netifaces library
provides the ability to inspect the network after it is online,
and the runtime network configuration is then merged into the
existing metadata and persisted to disk.
* Advertises the local_ipv4 and local_ipv6 addresses via guestinfo
as well. This is useful as Guest Tools is not always able to
identify what would be considered the local address.
The primary author and current steward of this datasource spoke at
Cloud-Init Con 2020 where there was interest in contributing this datasource
to the Cloud-Init codebase.
The datasource currently lives in its own GitHub repository at
https://github.com/vmware/cloud-init-vmware-guestinfo. Once the datasource
is merged into Cloud-Init, the old repository will be deprecated.
|
|
|
|
In /etc/ssh/sshd_config, it is possible to define a custom
authorized_keys file that will contain the keys allowed to access the
machine via the AuthorizedKeysFile option. Cloudinit is able to add
user-specific keys to the existing ones, but we need to be careful about
which of the listed authorized_keys files to pick.
Choosing a file that is shared by all users would cause security
issues, because the owner of that key could then also access other users.
We therefore pick an authorized_keys file only if it satisfies the
following conditions:
1. it is not a "global" file, i.e. it must be defined in
AuthorizedKeysFile with %u or %h, or be in /home/<user>. This avoids
security issues.
2. it must comply with ssh permission requirements, otherwise the ssh
agent won't use that file.
If it doesn't meet either of those conditions, write to
~/.ssh/authorized_keys
We also need to consider the case when the chosen authorized_keys file
does not exist. In this case, the existing behavior of cloud-init is
to create the new file. We therefore need to be sure that the file
complies with ssh permissions too, by setting:
- the actual file to permission 600, and owned by the user
- the directories in the path that do not exist must be root-owned and
have permission 755.
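A condensed sketch of the selection rule described above (illustrative only;
permission checks are omitted and the real logic in ssh_util.py is more
involved):
    import os

    def pick_authorized_keys_file(sshd_paths, username, home_dir):
        """Pick a per-user authorized_keys file, else fall back to ~/.ssh/authorized_keys."""
        for path in sshd_paths:  # values listed in the AuthorizedKeysFile option
            per_user = "%u" in path or "%h" in path or path.startswith(home_dir)
            if not per_user:
                continue  # a file shared by all users would be a security issue
            resolved = path.replace("%u", username).replace("%h", home_dir)
            if not os.path.isabs(resolved):
                resolved = os.path.join(home_dir, resolved)
            return resolved
        return os.path.join(home_dir, ".ssh", "authorized_keys")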
|