|
This update to tox-venv allows you to do:
./tools/tox-venv py3 - tests/unittests/test_util.py
|
|
There was a typo in the doc string at the top of ds-identify
(disable -> disabled). That is fixed here, along with some better
examples of content that can go in /etc/cloud/ds-identify.cfg.
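For illustration, ds-identify.cfg takes simple 'key: value' lines; the
values below are examples only, not necessarily ones added by this commit:
  # /etc/cloud/ds-identify.cfg
  # force a single datasource, skipping detection:
  datasource: NoCloud
  # or tune the overall policy ds-identify applies:
  policy: search,found=all,maybe=all,notfound=disabled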
|
|
The error message when read-version fails is not very useful and does not help
the end-user know how to overcome the issue. This adds a short message
explaining that the user does not have the latest upstream tags and how
to get those tags.
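For instance, the missing tags can usually be pulled with a plain git
fetch (the remote name here is illustrative):
  git fetch upstream --tags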
|
|
This adds an Oracle-specific datasource that functions with OCI.
It is a simplified version of the OpenStack metadata server
with support for vendor-data.
It does not support the OCI-C (classic) platform.
Also here is a move of BrokenMetadata to the common 'sources' module,
as this was the third occurrence of that class.
|
|
Move tools/net-convert.py to be exposed as one of the 'cloud-init devel'
subcommands.
It can now be called like:
$ cloud-init devel net-convert
Or, if you just have checked out source (and no cli executable):
$ python3 -m cloudinit.cmd.devel.net_convert
or
$ python3 -m cloudinit.cmd.main devel net-convert
|
|
Bash and most other "bourne-like" shells allow declaring function-local
variables via 'local'. ksh does not. Instead of always using 'local',
use 'typeset' when the KSH_VERSION variable is present in the
environment.
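A minimal sketch of the pattern (not necessarily the exact change made
here): under ksh, alias 'local' to 'typeset', which serves the same
purpose there.
  if [ -n "${KSH_VERSION:-}" ]; then
      # ksh does not provide 'local'; 'typeset' fills that role
      alias local=typeset
  fi
  example_func() {
      local name="value"   # builtin in bash/dash; 'typeset' via the alias in ksh
      echo "$name"
  }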
LP: #1784713
|
|
In order to see some of the WARNING messages added by bug 1774666
I wanted logging output from tools/net-convert. This does:
a.) add '--debug' and make it print the network state and read yaml only
if --debug is provided.
b.) set up basic logging so warnings go to the console by default and
debug goes to the console if --debug is provided.
|
|
If run-container was called without --package or --binary-package, then
it would still try to copy out artifacts and would fail doing so as
there were no artifacts to collect.
Also fix a bug when only --source-package was given without --package.
|
|
tools/run-container is like tools/run-centos, but currently supports
the following images from lxc-images
opensuse/42.3
centos/6
centos/7
ubuntu/16.04
debian/10
debian/sid
Also here: make installation via zypper in tools/read-dependencies
non-interactive so it does not prompt the user.
|
|
SuSE builds were not getting a PATH set in the generator's environment.
This may seem like a misconfiguration on the system, but it caused
ds-identify to fail to find blkid (or any other program).
The change here just ensures that we get /sbin /usr/sbin /bin /usr/bin
into the PATH when main is run.
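A sketch of the kind of fix (illustrative, not the exact code): append
any of the standard directories missing from PATH before calling main.
  ensure_path() {
      # add each standard directory to PATH if it is not already present
      for d in /sbin /usr/sbin /bin /usr/bin; do
          case ":${PATH:-}:" in
              *":${d}:"*) ;;
              *) PATH="${PATH:+${PATH}:}${d}";;
          esac
      done
      export PATH
  }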
LP: #1771382
|
|
In playing with a SmartOS container I found that ds-identify did
not identify the container there as a container. Systemd-detect-virt
identifies it as 'container-other'.
Also here are tests for ds-identify for the SmartOS platform
identification, and some indentation fixes in ds-identify.
|
|
We had two calls to is_ds_enabled, and the debug message looked
something like this:
is_ds_enabled returned 1: ConfigDrive NoCloud
Now instead we have just one call, and the debug message like:
is_ds_enabled(IBMCloud) = true
|
|
This fixes warnings reported by shellcheck at 0.4.6.
The complaints that we are ignoring globally (top of the file) are:
2015: Note that A && B || C is not if-then-else. C may run if A is true.
2039: In POSIX sh, 'local' is undefined.
2162: read without -r will mangle backslashes.
2166: Prefer [ p ] && [ q ] as [ p -a q ] is not well defined.
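For reference, ignoring checks like these for the whole file is done
with a shellcheck directive near the top of the script, along these
lines:
  #!/bin/sh
  # shellcheck disable=SC2015,SC2039,SC2162,SC2166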
Most of the complaints were just noise, but a few unused variables
were reported and fixed.
Related shellcheck issues opened:
- https://github.com/koalaman/shellcheck/issues/1191
- https://github.com/koalaman/shellcheck/issues/1192
- https://github.com/koalaman/shellcheck/issues/1193
- https://github.com/koalaman/shellcheck/issues/1194
|
|
Ubuntu images on IBMCloud for 16.04 have some seed data in
/var/lib/cloud/data/seed/nocloud-net. In order to have systems with
IBMCloud enabled, we modified ds-identify detection to skip that seed
if the system was on IBMCloud. That change did not consider the
fact that IBMCloud might not be in the datasource list.
There was similar logic for ConfigDrive in both ds-identify
and the datasource itself.
Config drive is now updated to only check and avoid IBMCloud if IBMCloud
is enabled. The check in ds-identify for nocloud was dropped. If a
user provides a nocloud seed on IBMCloud, then that can be used.
This means that systems running Xenial will continue to get their
old datasources.
LP: #1766401
|
|
When images are deployed from template in a production environment
the artifacts of the provisioning stage (provisioningConfiguration.cfg)
that cloud-init referenced are cleaned up. However, when provisioned
in "debug" mode (internal to IBM) the artifacts are left.
This changes the 'is_ibm_provisioning' implementations in both
ds-identify and the IBM datasource to identify the provisioning
stage more correctly. The change is to consider provisioning only
if the provisioning file existed and there was either no log file
or the log file was newer than this boot.
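A sketch of that logic in shell (the file paths and the use of
/proc/1/environ as a boot-time reference are illustrative assumptions,
not necessarily the exact implementation):
  is_ibm_provisioning() {
      local cfg="/root/provisioningConfiguration.cfg"
      local logf="/root/swinstall.log"
      [ -f "$cfg" ] || return 1    # no provisioning config: not provisioning
      [ -f "$logf" ] || return 0   # config but no log yet: still provisioning
      # both present: provisioning only if the log was written during this boot
      [ "$logf" -nt /proc/1/environ ]
  }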
LP: #1767166
|
|
This tool is used to assist during the creation of Ubuntu packages for
release testing. Address the following on the command-line:
* --help option now prints usage
* Add --orig-tarball which creates named output file
cloud-init_<release-version>.orig.tar.gz
* drop unused --verbose option
|
|
This adds a specific IBM Cloud datasource.
IBM Cloud is identified by:
a.) running on xen
b.) one of a LABEL=METADATA disk or a LABEL=config-2 disk with
UUID=9796-932E
The datasource contains its own config-drive reader that reads
only the currently supported portion of config-drive needed for
IBM Cloud.
During the provisioning boot, cloud-init is disabled.
See the docstring in DataSourceIBMCloud.py for more information.
|
|
Ubuntu 16.04 (xenial) does not have jsonschema installed by default. As
it is listed in requirements, the tox environment will always have it
installed.
Add the helper tools/pipremove that removes pip packages. Then use that
to remove jsonschema without the noise of always running and ignoring a
'pip uninstall jsonschema'.
|
|
Open Telekom Cloud gen1 (Xen) hosts do not provide Nova product
names in DMI, only 'Xen HVM domU'. They can, however, be safely identified
by the OpenTelekomCloud chassis asset tag. OpenTelekomCloud does
use the network OpenStack DataSource, so we had better detect it.
LP: #1756471
|
|
The Hetzner Cloud metadata service is an AWS-style service available
over HTTP via the link local address 169.254.169.254.
https://hetzner.com/cloud
https://docs.hetzner.cloud/
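For illustration, a query against the service looks roughly like the
following (the /hetzner/v1/metadata path is an assumption here; see the
docs above for the authoritative paths):
  curl http://169.254.169.254/hetzner/v1/metadata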
|
|
On some 64-bit platforms, the open-vm-tools package is installed at
/usr/lib64/. The DataSourceOVF is changed to also look there for the
'customization plugin'.
|
|
This fixes a bug in parsing of 'blkid -o export' output. The result
of the bug meant that DI_ISO9660_DEVS did not get set correctly and
is_cdrom_ovf would not identify devices in most cases.
The tests are improved to demonstrate both multiple iso devices
and also a cdrom that doesn't sort "last" in blkid output.
The code change is to use DEVNAME as the record separator when
parsing blkid -o export rather than relying on being able to read
the empty line.
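A sketch of parsing keyed on DEVNAME (illustrative, not the exact
ds-identify code); 'blkid -o export' emits KEY=VALUE blocks per device,
each beginning with a DEVNAME= line:
  blkid -o export | while read -r line; do
      case "$line" in
          DEVNAME=*) dev=${line#DEVNAME=};;
          TYPE=iso9660) echo "iso9660 filesystem on $dev";;
      esac
  done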
LP: #1749980
|
|
Ubuntu Core seeds information to nocloud via a bind-mount of
/writable/system-data/var/lib/cloud over /var/lib/cloud.
When ds-identify runs as a systemd generator that mount is not
guaranteed to have been done. It is guaranteed at
cloud-init-local.service time, but not generator time.
Images built with 'ubuntu-image --cloud-init=user-data-file'
would have cloud-init disabled.
The fix here is just to consider the seed dir under /writable/system-data.
LP: #1747070
|
|
When we moved some tests to live under cloudinit/ we inadvertently
failed to change all things that would run nose to include that
directory.
This changes all the 'nose' invocations to consistently run with
tests/unittests and cloudinit/.
Also, it works around, more correctly this time, a python2.6-ism with
the following code:
with assertRaises(SystemExit) as cm:
    sys.exit(2)
|
|
This changes tools/run-centos to collect up your git working directory
via 'git' commands rather than just collecting the whole directory.
The reason for this is that even a clean tree that has had tox run
on it might have up to 400M of data in it.
It adds a '--dirty' flag to run-centos to collect up local changes.
|
|
Fujitsu Cloud Service attaches an OVF ISO transport with a label
'OVFENV'. This seems to be a reasonable value as a label.
While the fix for bug 1731868 would likely fix cloud-init on Fujitsu
cloud, this change will find it faster.
LP: #1698669
|
|
read-version --json would report bad data when working in a worktree.
This is just because in a worktree, .git is not a directory, but
rather a metadata file that points to another path.
$ git worktree add ../mytree
$ cat ../mytree/.git
gitdir: /path/to/cloud-init/.git/worktrees/mytree
$ rm -Rf ../mytree; git worktree prune
|
|
New mkfs.vfat and fatlabel tools included in the dosfstools package no
longer support creating vfat disks with lowercase labels. They silently
default to an all-uppercase label, e.g. CONFIG-2 instead of config-2. This
change makes cloud-init handle either upper or lower case.
LP: #1598783
|
|
The previous OVF datasource change added a debug message that referenced
an unused variable. The failure path would be triggered if an image was
booted with an iso9660 filesystem attached to a device that was not a
cdrom.
A unit test is added for the specific failure found.
Additional safety to avoid 'cidata' labels is also added to the OVF
checker.
LP: #1737704
|
|
Previously the OVF transport would not be identified except for when
config files set 'ovf_vmware_guest_customization'. It would also
return DS_MAYBE almost always.
The change here is to add support to ds-identify for storing the
iso9660 filesystems that it finds (ISO9660_DEVS). Then the OVF check
will check that the iso9660 filesystem has ovf-env.xml on it. The least
wonderful part of this is that the check is done by a case-insensitive
'grep' for ovf-env.xml.
Future improvement would be to identify VMware's OVF by label or UUID
so we could avoid the grep.
LP: #1731868
|
|
The motivation for this is that
a.) 1.7.1 runs with python 3.6 (bionic)
b.) we want to run pylint on tests/ and tools for the same reasons
that we want to run it on cloudinit/
The changes are described below.
- Update tox.ini to invoke pylint v1.7.1.
- Modify .pylintrc generated-members to ignore mocked object members (m_.*)
- Replace "dangerous" params defaulting to {}
- Fix up cloud_tests use of platforms
- Cast some instance objects to dicts with dict()
- Handle python2.7 vs 3+ ConfigParser use of readfp (deprecated)
- Update use of assertEqual(<boolean>, value) to assert<Boolean>(value)
- replace deprecated assertRegexp -> assertRegex
- Remove useless test-class calls to super class
- Assign class property accessors a result and use it
- Fix missing class member in CepkoResultTests
- Fix Cheetah test import
|
|
During continuous integration tests, we're seeing quite a lot of
unreliability when running 'yum install'. The change here is to move to
re-trying a run of 'yum install --downloadonly' for 10 times or until
it succeeds. Then afterwards, running yum install from the cache.
This seems safer in general than just re-trying an install operation,
since we are specifically affected by the download phase failing.
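A sketch of that retry pattern (flag spellings and counts here are
illustrative):
  n=0
  while ! yum install --downloadonly --assumeyes "$@"; do
      n=$((n+1))
      [ "$n" -ge 10 ] && { echo "download failed after $n tries" >&2; exit 1; }
      sleep 10
  done
  # packages are now in the local cache; install from there
  yum install --cacheonly --assumeyes "$@"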
Also present are some flake8 fixes to tools/read-dependencies.
|
|
Per CentOS documentation, using the fastestmirror plugin is effective at
finding the fastest mirror, unless you are behind a proxy. In that case
you should disable it. Therefore, in our tests if we are setting the proxy
we should also disable the fastestmirror plugin.
|
|
The tools that use "git describe" were just assuming a consistent
number of characters in the hash. It seems Ubuntu 16.04 would use 7
and later versions use 8. To avoid that discrepancy in developer
environments, set it to 8.
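Pinning the abbreviation length is just a flag to git describe, e.g.:
  git describe --abbrev=8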
|
|
The first revision of this rendered tables with less decoration but there
was a desire upstream to avoid possibly breaking some parsing someone
might be doing, so it has been revised to render the same as prettytable
for the cases cloud-init actually uses.
|
|
Things done here:
- identify 'suse' as a variant in util.system_info and
also tools/render-cloudcfg.
- update systemd and cloud.cfg templates for suse specific changes.
LP: #1718640
|
|
The xkvm script will be utilized by pending NoCloud qemu testing.
If this turns out to not be the case, then we will drop it.
|
|
OpenStack Nova identifies itself only to Intel guests.
Make ds-identify return 'MAYBE' for OpenStack on non-intel arches.
An unnecessary change here is to rename the 'policy_nodmi' kwarg
to 'policy_no_dmi' in the related unit tests.
LP: #1715241
|
|
If you ran tools/run-centos without an argument it would fail due
to 'set -u' like:
./tools/run-centos: line 266: 1: unbound variable
|
|
Here we add and enable by default a datasource for Scaleway cloud.
The datasource quickly exits unless one of three things is true:
a.) 'Scaleway' found as the system vendor
b.) 'scaleway' found on the kernel command line.
c.) the directory /var/run/scaleway exists (this is currently created
by the scaleway initramfs module).
One interesting bit of this particular datasource is that it requires
the source port of the http request to be < 1024.
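To illustrate the source-port requirement with curl (the address and
path are those used by the Scaleway datasource, stated here as an
assumption; binding a port below 1024 requires root):
  curl --local-port 1-1023 http://169.254.42.42/conf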
|
|
We should be expecting IndexError instead of KeyError because we are
using a list (key_ids) and not a dictionary. Also, thanks to Emmanuel
Kasper for pointing out the wrong response code.
LP: #1701527
|
|
- Simplify the logic of 'variant' in util.system_info;
much of the data comes from
https://github.com/hpcugent/easybuild/wiki/OS_flavor_name_version
- fix get_resource_disk_on_freebsd when running on a system without
an Azure resource disk.
- fix tools/build-on-freebsd to replace oauth with oauthlib and add
bash which is a dependency for tests.
- update a few places that were checking for freebsd but not using
util.is_FreeBSD()
|
|
read-dependencies now takes a --test-distro param to indicate we want to install
all system package dependencies to allow for testing and building for our
continuous integration environment. It allows us to install all needed deps on
a fresh system with:
python3 ./tools/read-dependencies --distro ubuntu --test-distro [--dry-run].
Additionally read-dependencies now looks at what version of python is running
the script (py2 vs py3) and opts to install python 2 or 3 system deps
respectively. This behavior can still be overridden with
python3 ./tools/read-dependencies ... --python-version 2.
There are also some distro-specific packaging and test dependencies, like
devscripts, tox and libssl-dev on debian or ubuntu. Those pkg dependencies
have now been broken out from common pkg deps to avoid trying to install them
on centos/redhat/suse.
|
|
These changes are all in an effort to get tools/run-centos using
read-dependencies rather than the 'setup-centos' script with a separate
set of dependencies listed.
- tools/read-dependencies: support taking multiple --requirements
options. This allows run-centos to get both test and build
dependencies. Ultimately, I think it might be nicer for
read-dependencies to take a list of "goals" (build, test, run or
test-tox) rather than having the caller need to know to provide
multiple --requirements.
- packages/pkg-deps.json: drop the version on the sudo package.
centos 6 has a newer (1.8.6p3) version than listed, so it's not a problem.
- test_handler_disk_setup.py: a test case here was using assertLogs
which is not present in the version of unittest2 that is available in
centos 6 epel. We just adjust it to use with_logs = True.
- tools/run-centos:
- improve usage with example
- add 'inside_as_cd' to provide the dir you want to cd to first.
- avoid the intermediate tarball on disk in the container.
- add 'prep' subcommand and use it to install pre-dependencies.
- use read-dependencies.
|
|
This change adds a couple of makefile targets for ci environments to
install all necessary dependencies for package builds and test runs.
It adds a number of arguments to ./tools/read-dependencies to facilitate
reading pip dependencies, translating pip deps to system package names and
optionally installing needed system-package dependencies on the local
system. This relocates all package dependency and translation logic into
./tools/read-dependencies instead of duplication found in packages/brpm
and packages/bddeb.
In this branch, we also define buildrequires as including all runtime
requires when rendering cloud-init.spec.in and debian/control files
because our package build infrastructure will also be running all unit
test during the package build process so we need runtime deps at build
time.
Additionally, this branch converts
packages/(redhat|suse)/cloud-init.spec.in from cheetah templates to jinja
to allow building python3 envs.
|
|
The added 'run-centos' does:
- Creates centos 6 or 7 lxd container
* Sets http_proxy variable for yum if set locally
* Creates centos user
- Push local tree
* Tars up working directory
* Pushes to container and untars
- Installs pip and yum dependencies
- As the centos user it can then, based on flags:
* runs unittests
* run ./packages/brpm
* run ./packages/brpm --srpm
* collect the built *.rpm artifacts
|
|
Here we move the config/cloud.cfg to be rendered as a template.
That allows us to maintain deltas between distros in one place.
Currently we use the 'variant' variable to make decisions.
A tools/render-cloudcfg is provided to render the file.
There were changes to setup.py, MANIFEST.in to allow us to put all
files into a virtual env installation and to render the cloud-config
file in 'install' or 'bdist' targets.
We have also included some config changes that were found in the
redhat distro spec.
The rendered cloud.cfg has some differences.
Ubuntu: white space and comment changes only.
Freebsd:
- whitespace changes and comment changes
- datasource_list definition moved to be closer to 'datasource'.
- enable modules: migrator, write_files
- move package-update-upgrade-install to final.
The initial work was done by Josh Harlow.
|
|
This allows the user to seed NoCloud in a trivial way from qemu/libvirt,
by using a stock image and passing a single command line flag. No custom
command line, no filesystem modification, no bootstrap disk image.
This is particularly handy now that Ec2 backend is discouraged from use
under bug 1660385.
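As an illustration (mechanism per the NoCloud documentation; the URL is
a stand-in), with qemu the seed location can be passed through the SMBIOS
system serial:
  qemu-system-x86_64 -m 1024 -drive file=disk.img,if=virtio \
      -smbios "type=1,serial=ds=nocloud-net;s=http://10.0.2.2:8000/"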
LP: #1691772
|
|
Azure sets a known chassis asset tag to 7783-7084-3265-9085-8269-3286-77.
We can inspect this in both ds-identify and DataSource.get_data to
determine whether we are on Azure.
Added unit tests to cover these changes
and some minor tweaks to Exception error message content to give more
context on malformed or missing ovf-env.xml files.
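The tag can be inspected from a shell via sysfs, which is where DMI data
is typically read on a running system:
  cat /sys/class/dmi/id/chassis_asset_tag
  # expected on Azure: 7783-7084-3265-9085-8269-3286-77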
LP: #1693939
|
|
Older cloud-init versions have a bug in the signature of the
render_network_state method for netplan (bug 1685944).
The old signature was:
render_network_state(target, network_state)
The fix was to change netplan's renderer so it had the correct signature:
render_network_state(network_state, target)
This just changes our caller to use kwargs style when invoking that
method so that it works with either the broken form or correct form.
|