Age | Commit message | Author |
|
In Ubuntu, the salt-minion package version 2017.7.4+dfsg1-1 or later
automatically moves any seed keys from /etc/salt/pki/minion/ to
/var/lib/salt/pki/minion/. Fix the integration tests to collect
files from either /etc/salt/pki/minion/ or
/var/lib/salt/pki/minion/.
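A minimal sketch of the either-or collection, using the two paths named
above (the helper name is hypothetical):
    import os.path

    PKI_DIRS = ('/etc/salt/pki/minion', '/var/lib/salt/pki/minion')

    def find_minion_pki_dir():
        # return whichever pki directory this salt-minion version uses
        for pki_dir in PKI_DIRS:
            if os.path.isdir(pki_dir):
                return pki_dir
        raise AssertionError('no salt minion pki dir in %s' % (PKI_DIRS,))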
|
|
Integration tests will now provide a brief summary of test failures
listed by platform and distribution. The failure summary consists only
of the failed test name and the assertion error message.
Drop the verbose dictionary of all integration test output, because that
content is unreadable given the large number of integration test results
it contains.
|
|
A fix for chrony support per LP: #1589780 is not expected in Artful or
older series. Skip the chrony suite of tests when running in a container
on an Ubuntu series that is artful or older, as errors are expected.
|
|
By default, integration tests destroy the test instances after each
test run. To aid debugging and development of integration tests, support a
--preserve-instance argument which will leave the modified test instance
in a stopped state for further debugging.
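A sketch of how such a flag is typically wired into the test runner's
argument parser (only the flag name comes from this commit; the rest is
illustrative):
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--preserve-instance', action='store_true', default=False,
        help='do not destroy the test instance; leave it stopped '
             'for debugging')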
|
|
When network configuration for any interface defines a maximum transmission
unit (MTU), the netplan, eni and sysconfig renderers will take into account
any device-level or subnet-level mtu value.
When network configuration has conflicting device-level and ipv4 subnet
mtu values, the subnet-specific value is honored and a warning is
logged about the ignored device-level setting.
LP: #1774666
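For illustration, a version-1 network config (shown as the parsed dict
cloud-init operates on) with conflicting mtu values; the device name and
address are made up:
    network_config = {
        'version': 1,
        'config': [{
            'type': 'physical',
            'name': 'eth0',
            'mtu': 1500,  # device-level MTU
            'subnets': [{
                'type': 'static',
                'address': '192.168.0.10/24',
                # conflicting subnet-level MTU: honored, with a warning
                'mtu': 9000,
            }],
        }],
    }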
|
|
pylxd upstream provided a fix for the issue we were seeing, so we
can take that fix now rather than keeping our workarounds that order pip
installs.
The test is that this continues to work:
rm -Rf .tox/citest
tox -c tox.ini --recreate --notest -e citest
|
|
This adds a 'combine_capture' argument as was present in curtin's
subp. It is useful to get interleaved output of a command. I noticed
a need for it when looking at user_data_rhevm in DataSourceAltCloud,
which runs a subcommand, logging its stdout but swallowing its stderr.
Another candidate for this is udevadm_settle, which
currently just returns the subp() call.
Also, add the docstring copied from curtin's subp.
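A sketch of the intended behavior, assuming curtin's semantics carry
over (stderr is redirected into stdout):
    from cloudinit import util

    # stdout and stderr come back interleaved in 'out'; 'err' is empty
    out, err = util.subp(
        ['sh', '-c', 'echo to-stdout; echo to-stderr 1>&2'],
        combine_capture=True)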
|
|
The pylxd project has a setup.py which defines install dependencies.
Those sub-dependencies include pbr and requests, which in turn have
package version conflicts. Since tox doesn't order the dependencies it
installs, serially install urllib3 pinned at 1.22, which supports both
the pbr and requests dependencies of pylxd.
|
|
Allow the user to set the distribution with the --distro argument to
setup.py. The fallback is to read /etc/os-release. The final fallback is
the platform.dist() Python function. The platform.dist() function is
deprecated and will be removed in Python 3.7.
LP: #1745235
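A sketch of the fallback chain described above (the os-release parsing
details are illustrative):
    import platform

    def get_distro(distro_arg=None):
        if distro_arg:                      # 1) honor --distro
            return distro_arg
        try:                                # 2) read /etc/os-release
            with open('/etc/os-release') as fp:
                for line in fp:
                    if line.startswith('ID='):
                        return line.split('=', 1)[1].strip().strip('"')
        except IOError:
            pass
        return platform.dist()[0]           # 3) deprecated final fallback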
|
|
A newer version of pyflakes (2.0.0) was released.
It identified some unused variables that version 1.6.0 did not.
The change here merely fixes those unused variables.
|
|
|
|
Also document instance-data.json on the top-level datasource topic page.
|
|
- Updated datadict reference URL
- Store sdc:routes metadata in DatasourceSmartOS
- Map sdc:routes values to per-interface subnet configuration
- Added unittest
Co-authored-by: Mike Gerdts <mike.gerdts@joyent.com>
LP: #1763512
|
|
Network has not yet been configured in the init-local stage so the
openstack datasource will use dhcp-client to temporarily obtain an ipv4
address and query the metadata service at http://169.254.169.254 to get
network_data.json configuration. If present, the datasource will return
network_config version 1 config based on that network_data.json content.
Previously the OpenStack datasource only set up dhcp on the fallback
interface, so this represents a change in behavior: cloud-init now reacts
to the full config provided by openstack.
Also significant to OpenStack is the separation of a _crawl_data operation
from get_data(). crawl_data walks the available metadata services and
returns a dict of discovered content; get_data consumes the crawled_data,
caches it in the datasource and reacts to that data.
/run/cloud-init/instance-data.json now publishes the network_data.json or
ec2_metadata key if that data is present on any datasource.
The main reasons for the separation of crawl from get_data:
* Enable performance metrics of cloud-init's metadata crawls on each boot.
* Enable cloud-init modules and scripts to query and consume metadata
content which may have updated/changed after cloud-init's initial cache
during instance boot. (Think hotplug)
Also generalize common logic to base DataSource class/module:
* Move the common UNSET variable up into the base datasource module; fix
EC2, ConfigDrive, OpenStack and SmartOS to use the global.
* Drop get_url_settings from Ec2, CloudStack and OpenStack and generalize
DataSource.get_url_params(). Allow subclasses to override url_max_wait,
url_timeout and url_retries params.
* Rename the get_network_metadata bool to perform_dhcp_setup, as it designates
whether EphemeralDHCPv4 setup is required before crawling metadata.
LP: #1749717
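The separation, in broad strokes (a sketch; the method names beyond
_crawl_metadata/get_data and the caching details are simplified):
    class DataSource(object):
        def get_data(self):
            crawled_data = self._crawl_metadata()  # walk available services
            if not crawled_data:
                return False
            self._crawled_data = crawled_data      # cache on the datasource
            self.metadata = crawled_data.get('metadata')
            return True

        def _crawl_metadata(self):
            # subclasses return a dict of discovered metadata content
            raise NotImplementedError()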
|
|
On OpenSuSE 42.3, we would get errors running
tests/unittests/test_handler/test_handler_chef.py
- test_myhttps_nonet raises a UnmockedError
No mocking was registered, and real connections are not allowed
- test_myhttps_net raises SSLError
("bad handshake: SysCallError(32, 'EPIPE')",)
This fixes the errors by just using http instead of https.
It also modifies HttprettyTestCase to do the httpretty activate
and deactivate itself in setUp and tearDown, so we don't have to
decorate individual test_ methods. Also, we set
httpretty.HTTPretty.allow_net_connect = False
since test cases here should not reach out to a network resource.
LP: #1771659
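Roughly what the setUp/tearDown handling looks like (a sketch; the real
class may differ in details):
    import unittest

    import httpretty

    class HttprettyTestCase(unittest.TestCase):
        def setUp(self):
            httpretty.HTTPretty.allow_net_connect = False
            httpretty.reset()
            httpretty.enable()
            super(HttprettyTestCase, self).setUp()

        def tearDown(self):
            httpretty.disable()
            httpretty.reset()
            super(HttprettyTestCase, self).tearDown()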
|
|
Yaml tracebacks are generally hard to read for average users. Add a bit of
logic to util.yaml_load and schema validation to look for the
YAMLError.context_mark or problem_mark line and column counts.
No longer log the full exception traceback from the yaml_load error;
instead just LOG.warning the specific error and point to the offending
line and column where the problem exists.
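The gist of the mark handling (a sketch; PyYAML attaches context_mark and
problem_mark to MarkedYAMLError, each with 0-based line/column):
    import logging

    import yaml

    LOG = logging.getLogger(__name__)

    def load_yaml(blob):
        try:
            return yaml.safe_load(blob)
        except yaml.YAMLError as e:
            mark = getattr(e, 'context_mark', None) or getattr(
                e, 'problem_mark', None)
            if mark:
                LOG.warning('Invalid YAML at line %d, column %d',
                            mark.line + 1, mark.column + 1)
            return None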
|
|
The Azure data source provides a method to check whether a NTFS partition
on the ephemeral disk is safe for reformatting to ext4. The method checks
to see if there are customer data files on the disk. However, mounting
the partition fails on systems that do not have the capability of
mounting NTFS. Note that in this case, it is also very unlikely that the
NTFS partition would have been used by the system (since it can't mount
it). The only case would be where an update to the system removed the
capability to mount NTFS, the likelihood of which is also very small.
This change allows the reformatting of the ephemeral disk to ext4 on
systems where mounting NTFS is not supported.
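A sketch of the change in spirit: treat a mount failure caused by missing
NTFS support as safe to reformat. The exception type and message check
come from cloud-init's util.mount_cb; count_files is a hypothetical
helper:
    import os

    from cloudinit import util
    from cloudinit.util import MountFailedError

    def count_files(mountpoint):
        # hypothetical: count customer data files under mountpoint
        return len(os.listdir(mountpoint))

    def can_dev_be_reformatted(devpath):
        try:
            file_count = util.mount_cb(devpath, count_files, mtype='ntfs')
        except MountFailedError as e:
            if 'unknown filesystem type' in str(e):
                # no NTFS support, so the partition was almost certainly
                # never used by this system: safe to reformat
                return True, 'cannot mount NTFS, assuming not written to'
            raise
        return file_count == 0, 'mounted and counted %d files' % file_count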
|
|
When invoked with '--distro=suse', the packages that would be
attempted for installation would be from redhat. We just were not
passing args.distro through. That is fixed here.
|
|
This makes the necessary changes to patch the full packaged version into
the trunk-maintained redhat and suse spec files.
|
|
tools/run-container is like tools/run-centos, but currently supports
the following images from lxc-images
opensuse/42.3
centos/6
centos/7
ubuntu/16.04
debian/10
debian/sid
Also included here is a change to make installation via zypper in
tools/read-dependencies not prompt the user.
|
|
This modifies version.version_string to support having the package
build write the *packaged* version in with an easy replace.
Then, when cloud-init reports its version, it will include the full
packaged version.
Also modified here are upstream package build files to get that done.
Note that part of the trickery in packages/debian/rules.in was to avoid
the 'basic' templater consuming the '$variable' variable names.
LP: #1770712
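The mechanism is essentially a sentinel string that the package build
replaces; a sketch (version numbers illustrative):
    # version.py: the package build sed-replaces the sentinel below
    __VERSION__ = '18.2'
    _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'

    def version_string():
        """Return the packaged version if the build filled it in."""
        if not _PACKAGED_VERSION.startswith('@@'):
            return _PACKAGED_VERSION
        return __VERSION__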
|
|
Do not add new entries to /etc/fstab for devices that already have an
existing fstab entry.
Resolves: rhbz#1542578
|
|
SuSE builds were not getting a PATH set in the generator's environment.
This may seem like mis-configuration on the system, but it caused
ds-identify to fail to find blkid (or any other program).
The change here just ensures that we get /sbin, /usr/sbin, /bin and
/usr/bin into the PATH when main is run.
LP: #1771382
|
|
|
|
The result of a read_file_or_url on a file and on a url would differ
in behavior:
str(UrlResponse) would return UrlResponse.contents.decode('utf-8')
while
str(FileResponse) would return str(FileResponse.contents).
The difference being "foo" versus "b'foo'".
As part of the general goal of cleaning util, move read_file_or_url
into url_helper.
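The difference in a nutshell:
    contents = b'foo'
    print(str(contents.decode('utf-8')))  # foo      (UrlResponse behavior)
    print(str(contents))                  # b'foo'   (old FileResponse behavior)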
|
|
pylint missed finding a typo in the lxd platform because it could not
determine that the variable being used was a string. The variable
was set by loading a yaml file, so pylint couldn't know it would be a
string. In these cases, we can be more explicit.
|
|
|
|
This also makes some of the messages more consistent.
|
|
The SSH function was retrying and waiting for SSH for over an
hour when an SSH connection was failing to be established. This
reduces the number of retries and the time between each retry to
prevent tests from running for hours.
Also restructures how waiting for the system works: the system
will attempt to SSH up to the boot timeout by catching
SSH connection failures and retrying until the timeout is
reached. If the limit is reached, an exception is now thrown
to abort the test.
Drive-by: this also fixes printing of the instance name when
collecting the console log, rather than showing a Python object
address.
Fixes LP: #1758409
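A sketch of the restructured wait loop (function names, the exception
caught and the retry delay are illustrative):
    import time

    def wait_for_ssh(connect, boot_timeout, retry_delay=5.0):
        """Retry SSH until boot_timeout seconds elapse, then abort."""
        deadline = time.monotonic() + boot_timeout
        while True:
            try:
                return connect()
            except OSError:
                if time.monotonic() >= deadline:
                    raise RuntimeError(
                        'SSH not available within boot timeout')
                time.sleep(retry_delay)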
|
|
In playing with a SmartOS container I found that ds-identify did
not identify the container there as a container; systemd-detect-virt
identifies it as 'container-other'.
Also here are tests for ds-identify's SmartOS platform
identification, and some indentation fixes in ds-identify.
|
|
This makes cloud-config.service (and as a result cloud-final.service)
run After snap.seeded.service. This is required to ensure that
pre-seeded snaps can be used by cloud-init or user-data input.
The snap.seeded.service was added to snapd at:
https://github.com/snapcore/snapd/pull/5124
Note that the following would be a workaround:
snap:
commands:
00: snap wait system seed.loaded
LP: #1767131
|
|
Make test_net.TestGenerateFallbackConfig.test_unstable_names mock
the value of /proc/cmdline in the same way as the existing
test_unstable_names_disabled test.
LP: #1769952
|
|
We had two calls to is_ds_enabled, and the debug message looked
something like this:
is_ds_enabled returned 1: ConfigDrive NoCloud
Now instead we have just one call, and the debug message like:
is_ds_enabled(IBMCloud) = true
|
|
When attempting to apply network configuration for SmartOS's container
platform, cloud-init would not identify nics. The nics provided
in this container service do not have 'addr_assign_type'. That
was being interpreted as a "stolen" mac, and those nics would be
filtered out by get_interfaces.
|
|
package_update_upgrade_install was failing as htop is now included in
Bionic images. Switch this test to install 'sl' instead.
The ca_certs integration test fails on the cert_count test because
update-ca-certificates on bionic generates fewer symlinks for a given cert.
Integration tests now collect dpkg-query --show output on every instance.
Add a new assertPackageInstalled helper method which finds the package or
package version installed on the instance.
Adapt existing byobu, package_update_upgrade_install, ntp and salt_minion
tests to use assertPackageInstalled method.
LP: #1769985
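A plausible shape for the helper, assuming the collected dpkg-query --show
output is exposed to the test case via a get_data_file accessor (names
and the base class are assumptions):
    import unittest

    class CloudTestCase(unittest.TestCase):  # assumed base class name
        def assertPackageInstalled(self, name, version=None):
            """Assert 'name' (optionally at 'version') is installed."""
            # dpkg-query --show prints 'package\tversion' per line;
            # get_data_file() (assumed) returns that collected text
            pkgs = dict(
                line.split('\t', 1)
                for line in self.get_data_file(
                    'package-versions').splitlines()
                if line)
            self.assertIn(name, pkgs)
            if version:
                self.assertEqual(version, pkgs[name])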
|
|
This fixes warnings reported by shellcheck at 0.4.6.
The complaints that we are ignoring globally (top of the file) are:
2015: Note that A && B || C is not if-then-else. C may run if A is true.
2039: In POSIX sh, 'local' is undefined.
2162: read without -r will mangle backslashes.
2166: Prefer [ p ] && [ q ] as [ p -a q ] is not well defined.
Most of the complaints were just noise, but a few unused variables
were reported and fixed.
Related shellcheck issues opened:
- https://github.com/koalaman/shellcheck/issues/1191
- https://github.com/koalaman/shellcheck/issues/1192
- https://github.com/koalaman/shellcheck/issues/1193
- https://github.com/koalaman/shellcheck/issues/1194
|
|
Fix remaining pycodestyle warnings related to invalid string literals
introduced in more recent pycodestyle versions
(https://bugs.python.org/issue27364).
Also stop using flake8 in tox, as it is incompatible with newer versions of
pyflakes. Instead we now add tox environments for pycodestyle and pyflakes
individually.
Set the versions in both pycodestyle and pyflakes to the currently
available versions.
|
|
This change is for Azure VM preprovisioning. A bug was found where, after
an Azure VM reports ready the first time and while it is polling
indefinitely for the new ovf-env.xml from the Instance Metadata Service
(IMDS), a reboot causes us to send another report-ready signal to the
fabric, which deletes the reprovisioning data on the node.
A marker file is used to fix this issue: we only send a report-ready
signal to the fabric when no marker file is present, and then create the
marker file so that, when a reboot does occur, we can check whether the
marker file exists and decide whether to send the report-ready signal.
LP: #1765214
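The marker logic, roughly (the marker path is an assumption):
    import os

    # assumed location for the marker; the actual path may differ
    REPORTED_READY_MARKER_FILE = '/var/lib/cloud/data/reported_ready'

    def maybe_report_ready(report_ready):
        if os.path.exists(REPORTED_READY_MARKER_FILE):
            return  # already reported ready before this reboot
        report_ready()
        with open(REPORTED_READY_MARKER_FILE, 'w') as fp:
            fp.write('ready')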
|
|
bddeb already supported passing in a '--release' and that would get
into the changelog line.
If you used bddeb to build packages for a PPA, and built multiple
releases, then you would get the same version for each release, and
launchpad would reject your upload.
The change here means we get a ~16.04.1 (for xenial) suffix on the
dpkg version. If the distro-info-data package is not installed,
or the release is not known (such as the default "UNRELEASED"),
then you get no suffix.
|
|
By default, FreeBSD's growfs runs interactively asking a question
which can be mitigated using the '-y' command line option. The fix
here is simply to pass -y to growfs to avoid the prompt.
LP: #1404745
|
|
If you built packages with 'bddeb', each time it would create a new
tarball with make-tarball. If you then tried to upload two different
tarballs to launchpad (to a PPA), it would reject the second as the
orig tarball already existed.
This just supports looking in some places for an orig tarball and
re-using it if it is found.
|
|
The last set of changes to netdev_pformat ended up dropping the output
of devices that were not up. This adds back the 'down' interfaces to the
rendered output.
LP: #1766302
|
|
With no output at all from collect-logs, users have been confused
about where the output is. By default now, write to stderr what that
file is.
Also:
* add '-v' to increase verbosity. With a single -v flag, mention
what file/info is being collected.
* limit the 'journalctl' collection to this boot (--boot=0);
collecting the entire journal seems unnecessary and can be huge.
* do not fail when collecting files or directories that are not there.
LP: #1766335
|
|
Ubuntu images on IBMCloud for 16.04 have some seed data in
/var/lib/cloud/data/seed/nocloud-net. In order to have systems with
IBMCloud enabled, we modified ds-identify detection to skip that seed
if the system was on IBMCloud. That change did not consider the
fact that IBMCloud might not be in the datasource list.
There was similar logic for the ConfigDrive datasource in both ds-identify
and the datasource itself.
ConfigDrive is now updated to only check for and avoid IBMCloud if IBMCloud
is enabled. The check in ds-identify for nocloud was dropped. If a
user provides a nocloud seed on IBMCloud, then that can be used.
This means that systems running Xenial will continue to get their
old datasources.
LP: #1766401
|
|
In looking at some boot times for Xenial, Artful and Bionic, we noticed
some long stretches of time that appeared to be part of the DataSource but
were related to resolving URLs.
In Artful and Bionic, there was an issue (bug: #1739672) that resulted in
slow getaddrinfo() calls when systemd-resolved was in use. This patch adds
two events that track time for datasource.setup_datasource() and
datasource.activate_datasource(). Additionally, use log_time() to wrap
util.is_resolvable_url(), which
leaves info in cloud-init.log about how much time was spent.
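Usage looks roughly like this, assuming util.log_time's existing
logfunc/msg/func/args form (the URL is illustrative):
    import logging

    from cloudinit import util

    LOG = logging.getLogger(__name__)

    url = 'http://169.254.169.254/latest/meta-data/'
    util.log_time(
        logfunc=LOG.debug,
        msg='Resolving URL: %s' % url,
        func=util.is_resolvable_url, args=(url,))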
|
|
When images are deployed from template in a production environment,
the artifacts of the provisioning stage (provisioningConfiguration.cfg)
that cloud-init referenced are cleaned up. However, when provisioned
in "debug" mode (internal to IBM), the artifacts are left.
This changes the 'is_ibm_provisioning' implementations in both
ds-identify and the IBM datasource to identify the provisioning
stage more correctly. The change is to consider the system to be
provisioning only if the provisioning file exists and there is either
no log file or the log file is older than this boot.
LP: #1767166
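A sketch of the new check (the file paths and the boot-time reference are
assumptions):
    import os

    PROV_CFG = '/root/provisioningConfiguration.cfg'  # assumed path
    INST_LOG = '/root/swinstall.log'                  # assumed path
    BOOT_REF = '/proc/1/environ'                      # mtime ~ boot time

    def is_ibm_provisioning():
        if not os.path.exists(PROV_CFG):
            return False
        if not os.path.exists(INST_LOG):
            return True  # config present, no log yet: still provisioning
        # otherwise, provisioning only if the log predates this boot
        return os.path.getmtime(INST_LOG) < os.path.getmtime(BOOT_REF)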
|
|
The cloud-init-local.service expects that any network device name changes
have already been completed by the kernel or udev daemon.
In some situations we've found that the renaming of interfaces from kernel
names (eth0, eth1, etc) to their persistent names (eno1, ens3, enp0s1,
etc) may happen after cloud-init-local has started where it reads values
from sysfs about what network devices are present, and which device to use
as a fallback nic.
Subsequently, cloud-init-local would write out network configuration for a
kernel device name which would no longer be present by the time that
networking services start to bring up the devices. The result is that the
instance does not get networking configured. Prior to use of
systemd-networkd, the Ubuntu 'networking.service' unit included a call to
udevadm settle which is why this race is not seen on a Xenial system.
This change adds the ability to detect whether an interface has a stable
name; if we find one without a stable name, and stable names have not been
disabled (net.ifnames=0 in /proc/cmdline), then cloud-init will invoke
udevadm settle.
LP: #1766287
|
|
This adds information to the IBMCloud datasource describing the
6 different scenarios that it is expected to handle.
|
|
BOOTPROTO=dhcp in sysconfig enables DHCPv4, and we should not do this
implicitly when a 'dhcp6' subnet is specified. In case both DHCPv4 and
DHCPv6 are needed, users should specify both:
subnets:
  - type: dhcp6
  - type: dhcp
Fix the current code and add a dhcpv6-only test.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
|
|
Ubuntu minimal images do not have iproute2, so correctly identify
our dependency on it.
LP: #1766711
|