The subnet type 'manual' was used as a way to declare a device
and set an MTU for it but not assign network addresses.
This updates the manual example config to handle that case and
provides expected rendered output for sysconfig, eni, and netplan.
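As an illustration only, a 'manual' subnet in a v1 network config, expressed
as the Python dict a renderer might receive (device name, MAC address, and
MTU value here are invented), could look like:

# Hypothetical v1 network config: the device gets an MTU but no
# addresses, because its only subnet is of type 'manual'.
network_config = {
    'version': 1,
    'config': [{
        'type': 'physical',
        'name': 'eth0',                       # example device name
        'mac_address': 'aa:bb:cc:dd:ee:ff',   # example MAC
        'mtu': 1480,                          # example MTU
        'subnets': [{'type': 'manual'}],
    }],
}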
|
|
If you ran tools/run-centos without an argument, it would fail, due
to 'set -u', with an error like:
./tools/run-centos: line 266: 1: unbound variable
|
|
Making lots of random invalid DNS queries interferes with the ability
of security teams to identify malicious or anomalous behavior from DNS
logs. The same goal should be achievable with a consistent query for a
name that is disallowed.
LP: #1088611
|
|
Previously, sysconfig rendered HWADDR for all interface types, but
that value is only used to identify physical devices. Instead use
MACADDR to configure the MAC on virtual devices, like bonds and
bridges.
- Sort bond slave list to ensure consistent ordering in sysconfig
rendered files.
- Add unittests for sysconfig rendering of bonds/bridges with
mac_address
LP: #1701417
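A rough sketch of the key-selection rule described above, with an
illustrative helper name (the real sysconfig renderer differs in detail):

def mac_key_for(iface_type):
    # Physical NICs are identified by HWADDR; virtual devices such as
    # bonds and bridges get their MAC set via MACADDR instead.
    if iface_type in ('bond', 'bridge'):
        return 'MACADDR'
    return 'HWADDR'

# e.g. a bond's ifcfg file would carry MACADDR=<mac>, while a physical
# eth0 keeps HWADDR=<mac>.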
|
|
In some network configurations a network value of '::' and a
netmask value of '::' were used to indicate a default IPV6 gateway.
Commit d00da2d5 removed ipv6 'netmask' attributes and calculated a
prefix length value instead. The eni route rendering was not updated
to treat a prefix value of 0 as indicating the presence of an IPV6
default route.
A broken ipv6 default route rendered like:
post-up route add -net :: netmask :: gw 2001:4800:78ff:1b::1 || true
And with this patch, it now renders like:
post-up route add -A inet6 default gw 2001:4800:78ff:1b::1 || true
LP: #1701097
|
|
Render MTU values if present in subnet and route configurations
for v4 and v6.
LP: #1702513
|
|
Implement manual control for sysconfig by using ONBOOT=N. This
allows an interface to be configured but not brought up.
Note that ONBOOT is per-interface, not per-address.
LP: #1687725
|
|
Currently only the subnet is checked for the 'ipv6' setting; however,
the routes array may include a mix of v4 and v6 configurations. In
particular, the gateway in a route may be ipv6, and if so, the value
should be exported via IPV6_DEFAULTGW in the ifcfg-XXXX file.
Additionally, if the route is v6, it should render a routes6-XXXX file;
this is present but missing the 'dev <interface>' scoping.
LP: #1694801
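A sketch of the per-route check this implies, using the stdlib ipaddress
module (the helper and route structure are illustrative, not the actual
renderer code):

import ipaddress

def default_gateway_key(route):
    # Choose the sysconfig key from the gateway's address family,
    # not from the subnet's family.
    gw = ipaddress.ip_address(route['gateway'])
    return 'IPV6_DEFAULTGW' if gw.version == 6 else 'GATEWAY'

print(default_gateway_key({'gateway': '2001:db8::1'}))  # IPV6_DEFAULTGW
print(default_gateway_key({'gateway': '192.0.2.1'}))    # GATEWAY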
|
|
Previously, virtual types (bond, bridge, vlan) were almost completely
broken. They would not get any network configuration (ip addresses or
dhcp config) or routes rendered. This fixes those issues.
For bonds we now correctly render BONDING_SLAVE entries.
Also add tests for simple bond, bridge and vlan.
LP: #1695092
|
|
Under el7, cloud-init systemd files need some unit tweaks to ensure
they run at the right time. Pull in current el7 downstream systemd unit
changes.
|
|
With this change, entries in IPV6ADDR and IPV6ADDR_SECONDARIES will now
always be in the format addr/prefix. When a subnet has a gateway, it
will be written. If the gateway is ipv6, the key IPV6_DEFAULTGW is used
rather than GATEWAY.
LP: #1704872
|
|
The network device renaming code previously required the case of
the mac address input to match that of the data read from the system.
For example, if user-provided network config contained a mac address
in upper case, then cloud-init would not rename the device correctly,
as /sys/class/net/<device>/address stores lower case values.
The fix here is to always compare lower case mac addresses.
LP: #1705147
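The fix boils down to normalizing case on both sides of the comparison,
roughly:

def macs_match(configured_mac, sysfs_mac):
    # /sys/class/net/<device>/address reports lower case, so compare
    # lower-cased values on both sides.
    return configured_mac.lower() == sysfs_mac.lower()

assert macs_match('AA:BB:CC:DD:EE:FF', 'aa:bb:cc:dd:ee:ff')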
|
|
This includes a few fixes found when testing with python 3.6.
- fix eni renderer when target is None
This just uses the util.target_path() in the event that target is None.
- change test cases to not rely on the cached result of
util.get_cmdline() and other cached globals. Update the base TestCase
to unset that cache.
- mock calls to system_is_snappy from the create_users test cases.
- drop unused _pp_root in test_simple_run.py
LP: #1703697
|
|
Render the GATEWAY= value in interface files which have a gateway in the
subnet configuration.
LP: #1686856
|
|
Here we add and enable by default a datasource for Scaleway cloud.
The datasource quickly exits unless one of three things is true:
a.) 'Scaleway' found as the system vendor
b.) 'scaleway' found on the kernel command line.
c.) the directory /var/run/scaleway exists (this is currently created
by the scaleway initramfs module).
One interesting bit of this particular datasource is that it requires
the source port of the http request to be < 1024.
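Binding the client to a privileged source port can be done with the stdlib
by passing source_address; a minimal sketch (the host and port below are
placeholders, not Scaleway's actual endpoint):

import http.client

# Bind the client side of the connection to a port below 1024;
# this needs root, which cloud-init normally has at this stage.
conn = http.client.HTTPConnection(
    'metadata.example', 80, source_address=('', 1023))
conn.request('GET', '/')
print(conn.getresponse().status)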
|
|
load_shell_content previously did not allow shell comment characters
in the content being parsed. Unless comments=True was passed, an
exception would be raised, because a comment line is not guaranteed to
have an '=' in it.
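The intended behavior is roughly the following standalone sketch (not the
actual util.load_shell_content implementation): comment lines are tolerated
only when comments=True.

def parse_shell_vars(content, comments=False):
    data = {}
    for line in content.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('#'):
            if comments:
                continue          # skip shell comments when allowed
            raise ValueError("unparseable line: %s" % line)
        key, _, value = line.partition('=')
        data[key] = value.strip("'\"")
    return data

print(parse_shell_vars("# a comment\nNAME=eth0\n", comments=True))
# {'NAME': 'eth0'}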
|
|
There is a circular dependency in cloudinitlocal, which caused it
to fail. As a result, cloud-init failed to find the data source on Azure.
|
|
This fixes the disk setup example doc, which specified that the only
currently supported table_type option is 'mbr', by adding the 'gpt'
option, which has been supported since 0.7.7.
LP: #1703789
|
|
We should be expecting IndexError instead of KeyError because we are
using a list (key_ids) and not a dictionary. Also, thanks to Emmanuel
Kasper for pointing out the wrong response code.
LP: #1701527
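The distinction being fixed, in miniature (key_ids is just an example list):

key_ids = ['0123ABCD']

try:
    key_ids[5]           # lists raise IndexError for a bad index...
except IndexError:
    pass

try:
    {}['missing']        # ...while dicts raise KeyError for a bad key.
except KeyError:
    pass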
|
|
The usage of mock in this test was simply invalid and only worked by
happenstance.
|
|
The mock of platform_reports_gce is created with a True return value in
tests/unittests/test_datasource/test_gce.py:TestDataSourceGCE.setUp().
The final test_get_data_returns_false_if_not_on_gce incorrectly
attempts to override that mocked return_value of True by setting
self.m_platform_gce.return_value = False; but since the mock is already
initialized, the updated False is not honored. Instead we should use the
patch decorator on the specific unit test to override the return_value
of DataSourceGCE.platform_reports_gce to False.
A False from platform_reports_gce allows DataSourceGCE.get_data to
immediately return False instead of trying to contact
metadata.google.internal, as described in the related bug.
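A runnable sketch of that pattern with a stand-in class rather than the real
GCE datasource: the decorator scopes the False return value to this one test
instead of mutating a mock created in setUp().

from unittest import TestCase, mock

class FakePlatform:
    @staticmethod
    def platform_reports_gce():
        return True

class TestNotOnGCE(TestCase):

    # Override the return value for this test only via the decorator.
    @mock.patch.object(FakePlatform, 'platform_reports_gce',
                       return_value=False)
    def test_returns_false_when_not_on_gce(self, m_reports):
        self.assertFalse(FakePlatform.platform_reports_gce())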
|
|
The test is currently importing the incorrect keyid. It specifies
the curtin developers ppa, rather than the cloud-init ppa. On
Artful this causes failures as a check is made to verify the
correct key is imported for the ppa, whereas on previous releases
only a warning was issued.
Also, change to use a full key fingerprint.
LP: #1702717
|
|
With the upgrade to lxd 2.15, pylxd version 2.2.3 broke.
Upgrading to version 2.2.4 fixes issues with missing attributes.
|
|
Instead of passing around a 'log' reference to functions, just import
logging and use that. This is the pattern that is now more common in
cloud-init.
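The pattern referred to is the usual module-level logger, for example:

import logging

LOG = logging.getLogger(__name__)

def do_something():
    # No 'log' argument threaded through call sites; the module
    # logger is used directly.
    LOG.debug("doing something")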
|
|
Add permitted keys to documentation on seeding NoCloud.
|
|
This fixes a stacktrace and warning message that would be printed
to the log if running inside a container and read_dmi_data tried
to access a key that was not present.
In a container, the /sys/class/dmi/id data is not relevant to the
container but to the host. Additionally, an unprivileged container
might see strange behavior:
# cd /sys/class/dmi/id/
# id -u
0
# ls -l chassis_serial
-r-------- 1 nobody nogroup 4096 Jun 29 16:49 chassis_serial
# cat chassis_serial
cat: /sys/class/dmi/id/chassis_serial: Permission denied
The solution here is to just always return None when running in a
container.
LP: #1701325
|
|
The 'jsonschema' line had trailing white space. Remove it.
|
|
On systems with network devices with duplicate mac addresses, cloud-init
will fail to rename the devices according to the specified network
configuration. Refactor net layer to search by device driver and device
id if available. Azure systems may have duplicate mac addresses by
design.
Update Azure datasource to run at init-local time and let Azure datasource
generate a fallback networking config to handle advanced networking
configurations.
Lastly, add a 'setup' method to the datasources that is called before
userdata/vendordata is processed but after networking is up. That is
used here on Azure to interact with the 'fabric'.
|
|
I want to be able to add additional SSH keys to my account, therefore I
should not limit these tests to look for one specific key. Instead
we confirm that the comments in authorized_users include the specified
users.
|
|
A recent change to ntp in artful added the sntp package whenever
ntp is installed. The tests, rather poorly, did a dpkg -l instead
of checking with `which`. This fixes the ntp tests to all use
`which` rather than expecting a certain number of lines from dpkg,
and as a result makes the tests OS independent.
|
|
- Simplify the logic of 'variant' in util.system_info; much of the
  data comes from
  https://github.com/hpcugent/easybuild/wiki/OS_flavor_name_version
- fix get_resource_disk_on_freebsd when running on a system without
  an Azure resource disk.
- fix tools/build-on-freebsd to replace oauth with oauthlib and add
  bash, which is a dependency for tests.
- update a few places that were checking for freebsd but not using
  util.is_FreeBSD()
|
|
The previous commit caused a test failure.
This separates out _check_freebsd_cdrom and mocks it in a test
rather than patching open.
|
|
Fix the issue caused by different commands on Linux and FreeBSD. On
Linux, we used ifup and ifdown to enable and disable a NIC, but on
FreeBSD, the counterparts are "ifconfig up" and "ifconfig down".
LP: #1697815
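The shape of the fix is a platform branch when building the command; an
illustrative helper (not the actual cloud-init code):

import platform
import subprocess

def set_interface_state(devname, up=True):
    # Linux uses ifup/ifdown; FreeBSD uses "ifconfig <dev> up|down".
    if platform.system() == 'FreeBSD':
        cmd = ['ifconfig', devname, 'up' if up else 'down']
    else:
        cmd = ['ifup' if up else 'ifdown', devname]
    subprocess.check_call(cmd)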
|
|
The current method is to attempt to mount the cdrom (/dev/cd0); if that
succeeds, /dev/cd0 is considered configured, otherwise it is not. The
problem is that it does not check whether the mount destination
directory exists. As a result, the mount attempt fails even if the
cdrom is ready.
LP: #1696295
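The missing step amounts to creating the mount point before attempting the
mount; a sketch under the assumption of a FreeBSD cd9660 filesystem and an
invented mount point:

import os
import subprocess

def try_mount_cdrom(device='/dev/cd0', mnt='/mnt/cdrom'):
    # Ensure the destination directory exists first; otherwise the
    # mount fails even when the cdrom itself is ready.
    os.makedirs(mnt, exist_ok=True)
    try:
        subprocess.check_call(
            ['mount', '-o', 'ro', '-t', 'cd9660', device, mnt])
        return True
    except subprocess.CalledProcessError:
        return False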
|
|
Some versions of Cheetah returned everything as unicode by default (not
utf-8 or ascii) and some varieties of syslog would choke on unicode.
Jinja2 is probably fine, but Python's format() is perfectly adequate for
a short message like the welcome message.
Reviewed-by: Tom Kirchner <tjk@amazon.com>
Reviewed-by: Ben Cressey <bcressey@amazon.com>
|
|
We have started adding jsonschema definitions for cloudconfig modules
(cc_ntp). This branch allows us to render sphinx docs using the
module's schema definition instead of using the module's docstring.
This allows us to avoid duplicating schema documentation in the
module-level docstring and schema definition. The corresponding module
documentation is extended a bit to differentiate between config schema and
potential examples.
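For context, the idea is that a module carries a schema dict whose title,
description, and examples are rendered into the docs; a trimmed, made-up
example of the shape (the real definition for cc_ntp lives in the module
itself):

schema = {
    'id': 'cc_example',
    'name': 'Example',
    'title': 'Configure the example service',
    'description': 'Longer prose that ends up in the rendered docs.',
    'distros': ['ubuntu', 'centos'],
    'examples': ["example:\n  enabled: true\n"],
    'frequency': 'once-per-instance',
    'type': 'object',
    'properties': {
        'example': {
            'type': 'object',
            'properties': {'enabled': {'type': 'boolean'}},
        },
    },
}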
|
|
The comments in the debian template file of /etc/hosts still pointed
to a general template file instead of the debian one.
LP: #1606406
|
|
This just adds an entry for hostname and fqdn to 127.0.0.1 in
templates/hosts.suse.tmpl.
|
|
Unix file modes are usually represented as octal, but they were being
printed as decimal; for example, 0o644 would be printed as '420'.
Reviewed-by: Tom Kirchner <tjk@amazon.com>
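Concretely, the fix is about formatting the mode as octal when printing it:

mode = 0o644
print(mode)          # 420  (decimal rendering of the same value)
print(oct(mode))     # 0o644
print('%o' % mode)   # 644  (octal, as file modes are usually shown)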
|
|
read-dependencies now takes --test-distro param to indicate we want to install
all system package dependencies to allow for testing and building for our
continuous integration environment. It allows us to install all needed deps on
a fresh system with:
python3 ./tools/read-dependencies --distro ubuntu --test-distro [--dry-run].
Additionally, read-dependencies now looks at what version of python is running
the script (py2 vs py3) and opts to install python 2 or 3 system deps
respectively. This behavior can still be overridden with
python3 ./tools/read-dependencies ... --python-version 2.
There are also some distro-specific packaging and test dependencies, like
devscripts, tox and libssl-dev on debian or ubuntu. Those pkg dependencies
have now been broken out from common pkg deps to avoid trying to install them
on centos/redhat/suse.
|
|
These changes are all in an effort to get tools/run-centos using
read-dependencies rather than the 'setup-centos' script with a separate
set of dependencies listed.
- tools/read-dependencies: support taking multiple --requirements
options. This allows run-centos to get both test and build
dependencies. Ultimately, I think it might be nicer for
read-dependencies to take a list of "goals" (build, test, run or
test-tox) rather than having the caller need to know to provide
multiple --requirements.
- packages/pkg-deps.json: drop the version on the sudo package.
centos 6 has a newer version (1.8.6p3) than listed, so it's not a problem.
- test_handler_disk_setup.py: a test case here was using assertLogs
which is not present in the version of unittest2 that is available in
centos 6 epel. We just adjust it to use with_logs = True.
- tools/run-centos:
- improve usage with example
- add 'inside_as_cd' to provide the dir to cd into first.
- avoid the intermediate tarball on disk in the container.
- add 'prep' subcommand and use it to install pre-dependencies.
- use read-dependencies.
|
|
This change adds a couple of makefile targets for ci environments to
install all necessary dependencies for package builds and test runs.
It adds a number of arguments to ./tools/read-dependencies to facilitate
reading pip dependencies, translating pip deps to system package names and
optionally installing needed system-package dependencies on the local
system. This relocates all package dependency and translation logic into
./tools/read-dependencies instead of duplication found in packages/brpm
and packages/bddeb.
In this branch, we also define buildrequires as including all runtime
requires when rendering cloud-init.spec.in and debian/control files
because our package build infrastructure will also be running all unit
tests during the package build process, so we need runtime deps at build
time.
Additionally, this branch converts
packages/(redhat|suse)/cloud-init.spec.in from cheetah templates to jinja
to allow building python3 envs.
|
|
This changes all cloud-init systemd units to run 'Before' the apt processes
that run daily and may cause a lock on the apt database.
apt-daily-upgrade.service contains 'After=apt-daily.service'.
Thus the following order is enforced, so we can just be 'Before' the first:
apt-daily.service
apt-daily-upgrade.service
Note that this means only that apt-daily* will not run until
cloud-init has entirely finished. Any other processes running apt-get
operations are still affected by the global lock.
LP: #1693361
|
|
On some systems with python-libselinux, a bug[1] related to recursive
restorecon causes failures, but the distro release does not yet include
an update. This change will accept the error and log a warning.
1. https://bugzilla.redhat.com/show_bug.cgi?id=1406520
LP: #1686751
|
|
On systems with selinux enabled, some networking commands that execute
successfully do not return 0. Allow these commands to return 1 since the
output is valid.
Ultimately we need to get this information in some way so that we can
display it correctly. For now, work around the stack trace when selinux
does not allow us to collect it.
LP: #1686751
|
|
In cases where the config json specifies nameserver entries,
if there are interfaces configured to use dhcp, NetworkManager,
if enabled, will clobber the /etc/resolv.conf that cloud-init
has produced, which can break dns. If there are no interfaces
configured to use dhcp, NetworkManager could clobber
/etc/resolv.conf with an empty file.
This patch adds a mechanism for dropping additional configuration
into /etc/NetworkManager/conf.d/ and disables management of
/etc/resolv.conf by NetworkManager when nameserver information is
provided in the config.
LP: #1693251
Signed-off-by: Ryan McCabe <rmccabe@redhat.com>
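A sketch of the kind of drop-in this describes, written from Python; the
file name is invented, and 'dns=none' is the NetworkManager setting that
stops it from managing /etc/resolv.conf:

NM_CONF_DIR = '/etc/NetworkManager/conf.d'
NO_DNS_CONF = """\
[main]
dns=none
"""

def write_nm_dns_dropin(path=NM_CONF_DIR + '/99-cloud-init-no-dns.conf'):
    # Hypothetical file name; the content tells NetworkManager to
    # leave /etc/resolv.conf alone when cloud-init supplies the
    # nameservers itself.
    with open(path, 'w') as fh:
        fh.write(NO_DNS_CONF)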
|
|
The typical rpm build process will examine the spec file to determine
which packages should be installed in the boot root. This requires
the specfile to declare that it needs systemd. Provide this information
by checking which version the rpm is being built for and exporting
requirements for systemd.
|
|
The network_state object's network and route keys would have different
information depending upon how the network_state object was populated.
This change cleans that up. Now:
* address will always contain an IP address.
* prefix will always include an integer value that is the
network_prefix for the address.
* netmask will be present only if the address is ipv4, and its
value will always correlate to the 'prefix'.
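For the ipv4 case, the netmask/prefix relationship in the last bullet can be
expressed with the stdlib, for example:

import ipaddress

# A normalized subnet entry: 'netmask' is derived from 'prefix'.
subnet = {'address': '192.0.2.10', 'prefix': 24}
subnet['netmask'] = str(
    ipaddress.ip_network('0.0.0.0/%d' % subnet['prefix']).netmask)
print(subnet)
# {'address': '192.0.2.10', 'prefix': 24, 'netmask': '255.255.255.0'}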
|
|
Massive update to clean up and greatly enhance the integration testing
framework developed by Wesley Wiedenmeier.
- Updated tox environment to run integration test 'citest' to utilize
pylxd 2.2.3
- Add support for distro feature flags
- add framework for feature flags to release config with feature groups
and overrides allowed in any release conf override level
- add support for feature flags in platform and config handling
- during collect, skip testcases that require features not supported by
the image with a warning message
- Enable additional distros (i.e. centos, debian)
- Add 'bddeb' command to build a deb from the current working tree
cleanly in a container, so deps do not have to be installed on host
- Adds a command line option '--preserve-data' that ensures that
collected data will be left after tests run. This also allows the
directory to store collected data in during the run command to be
specified using '--data-dir'.
- Updated Read the Docs testing page and doc strings for PEP 257
compliance
|
|
- Updated to standard chef.io url
- Removed port 4000, as it has been deprecated
- Added a note about the run_list not being required
Signed-off-by: JJ Asghar <jj@chef.io>
|