This was painful, but it finishes a TODO from cloudinit/subp.py.
It moves the following from util to subp:
ProcessExecutionError
subp
which
target_path
I moved subp_blob_in_tempfile into cc_chef, which is its only caller.
That saved us from having to deal with it using write_file
and temp_utils from subp (which does not import any cloudinit things now).
It is arguable that 'target_path' could be moved to a 'path_utils' or
something, but in order to use it from subp and also from util,
we had to get it out of util.
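Roughly, call sites change along these lines (an illustrative sketch, not
a hunk from the diff):
```
from cloudinit import subp

# ProcessExecutionError, subp and which now live in cloudinit.subp
# instead of cloudinit.util.
if subp.which('chef-client'):
    try:
        out, err = subp.subp(['chef-client', '--version'])
    except subp.ProcessExecutionError as e:
        print('chef-client failed: %s' % e)
```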
|
|
This removes the use of variables named 'l', 'O', or 'I'. Generally
these were used in list comprehensions to iterate over lines.
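For example (an invented snippet; the actual renames follow this spirit):
```
lines = ['one', 'two', 'three']

# Before: 'l' is easily confused with '1' or 'I' (pycodestyle E741):
#   lengths = [len(l) for l in lines]
# After: a descriptive name removes the ambiguity.
lengths = [len(line) for line in lines]
```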
|
|
|
|
This ensures that Travis will not kill our tests if fetching images is
taking a long time.
In implementation terms, this introduces a context manager which will
spin up a multiprocessing.Process in the background and print a dot to
stdout every 10 seconds. The process is terminated when the context
manager exits.
This also drops the use of travis_wait, which was being used to work
around this issue.
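A minimal sketch of such a context manager (names invented; the real
implementation may differ in detail):
```
import multiprocessing
import sys
import time
from contextlib import contextmanager


def _print_dots(interval):
    # Emit a dot periodically so Travis sees output and does not
    # kill the job for inactivity.
    while True:
        time.sleep(interval)
        sys.stdout.write('.')
        sys.stdout.flush()


@contextmanager
def emit_dots(interval=10):
    proc = multiprocessing.Process(target=_print_dots, args=(interval,))
    proc.start()
    try:
        yield
    finally:
        proc.terminate()
```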
|
|
Create a schema object for the `apt_configure` module and
validate this schema in the `handle` function of the module.
There are some considerations regarding this PR:
* The `primary` and `security` keys have exactly the same properties. I
tried to eliminate this redundancy by moving their properties to a
common place and then just referencing it for both security and
primary, similar to what is documented at
https://json-schema.org/understanding-json-schema/structuring.html
under the `Reuse` paragraph. However, this approach does not work,
because the `#` pointer resolves from the root of a JSON document,
while our schema lives in a Python module rather than a JSON file, so
the pointer cannot find the correct definition. What I did instead was
create a separate dict for the mirror config and reuse it for primary
and security (see the sketch after this list), but maybe there are
better approaches to do that.
* There was no documentation for the config `debconf_selections`. I
tried to infer what it is supposed to do by looking at the code and
the `debconf-set-selections` manpage, but my description may not be
accurate or complete.
* Add a _parse_description function to schema.py to render multi-line
preformatted content instead of squashing all whitespace
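The shared-dict approach from the first bullet looks roughly like this (a
simplified sketch, not the module's exact schema):
```
# Define the mirror properties once as a plain Python dict...
mirror_property = {
    'type': 'array',
    'items': {
        'type': 'object',
        'properties': {
            'arches': {'type': 'array', 'items': {'type': 'string'}},
            'uri': {'type': 'string', 'format': 'uri'},
        },
    },
}

schema = {
    'properties': {
        'apt': {
            'type': 'object',
            'properties': {
                # ...and reference it from both keys, sidestepping the
                # '#/definitions' pointer that cannot resolve inside a
                # Python module.
                'primary': mirror_property,
                'security': mirror_property,
            },
        },
    },
}
```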
LP: #1858884
|
|
These libraries provide backports of Python 3's stdlib components to Python 2. As we only support Python 3, we can simply use the stdlib now. This pull request does the following:
* removes some unneeded compatibility code for the old spelling of `assertRaisesRegex`
* replaces invocations of the Python 2-only `assertItemsEqual` with its new name, `assertCountEqual`
* replaces all usage of `unittest2` with `unittest`
* replaces all usage of `contextlib2` with `contextlib`
* drops `unittest2` and `contextlib2` from requirements files and tox.ini
It also rewrites some `test_azure` helpers to use bare asserts. We were seeing a strange error in xenial builds of this branch which appeared to stem from the AssertionError that pytest produces being _different_ from the standard AssertionError. This meant that the modified helpers weren't behaving correctly, because they weren't catching AssertionErrors as one would expect. (I believe this is related, in some way, to https://github.com/pytest-dev/pytest/issues/645, but the only version of pytest where we're affected is so far in the past that it's not worth pursuing it any further, as we have a workaround.)
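The helper change amounts to something like this (invented helper name,
for illustration only):
```
import unittest


class AzureHelperExample(unittest.TestCase):
    def assert_reported_old(self, report):
        # Before: unittest-style assertion inside a helper.
        self.assertIn('ready', report)

    def assert_reported(self, report):
        # After: a bare assert, which interacts predictably with
        # pytest's assertion handling.
        assert 'ready' in report
```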
|
|
The 'merged_cfg' key introduced in 71af48d was incorrectly referenced
as 'ci_cfg' in the json validation test for ec2.
|
|
Quote the Ubuntu version numbers in releases.yaml to make sure they're
treated as strings and not as floats.
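The underlying hazard, sketched with PyYAML:
```
import yaml

# Unquoted version numbers parse as floats, which mangles some releases:
print(yaml.safe_load('release: 18.10'))    # {'release': 18.1} - zero lost
print(yaml.safe_load("release: '18.10'"))  # {'release': '18.10'}
```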
|
|
Cloud-config userdata provided as jinja templates are now distro,
platform and merged cloud config aware. The cloud-init query command
will also surface this config data.
Now users can selectively render portions of cloud-config based on:
* distro name, version, release
* python version
* merged cloud config values
* machine platform
* kernel
To support template handling of this config, add new top-level
keys to /run/cloud-init/instance-data.json.
The new 'merged_cfg' key represents merged cloud config from
/etc/cloud/cloud.cfg and /etc/cloud/cloud.cfg.d/*.
The new 'sys_info' key captures distro and platform
info from cloudinit.util.system_info.
Cloud config userdata templates can render conditional content
based on these additional environment values, as in the following
simple example:
```
## template: jinja
#cloud-config
runcmd:
{% if distro == 'opensuse' %}
- sh /custom-setup-sles
{% elif distro == 'centos' %}
- sh /custom-setup-centos
{% elif distro == 'debian' %}
- sh /custom-setup-debian
{% endif %}
```
To see all values: sudo cloud-init query --all
Any keys added to the standardized v1 keys are guaranteed not to
change or be dropped in future releases of cloud-init. 'v1' keys will be
retained for backward compatibility even if a new standardized 'v2' set of
keys is introduced.
The following standardized v1 keys are added:
* distro, distro_release, distro_version, kernel_version, machine,
python_version, system_platform, variant
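A sketch of consuming these keys directly (assuming the paths and key
names described above):
```
import json

# Read the cached instance data that 'cloud-init query' surfaces.
with open('/run/cloud-init/instance-data.json') as f:
    instance_data = json.load(f)

v1 = instance_data['v1']
print(v1['distro'], v1['distro_version'], v1['kernel_version'])
print(instance_data['sys_info'])    # from cloudinit.util.system_info
print(instance_data['merged_cfg'])  # merged cloud config
```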
LP: #1865969
|
|
|
|
The azurecloud platform did not always start instances
during collect runs. This was a result of two issues. First,
the image class's _instance method did not invoke the start()
method, which allowed the collect stage to attempt to run
scripts without an endpoint. Second, azurecloud used the
image_id as both an instance handle (which is typically
vmName in the azure api) and an image handle (for image
capture). Resolve this by adding a .vm_name property to
AzureCloudInstance and referencing this property in
AzureCloudImage.
Also in this branch:
- Fix error encoding user-data when the value is None
- Add additional logging in the AzureCloud platform
- Update the logging format to print pathname, funcName and line number,
which greatly eases debugging.
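That format change is along these lines (a sketch; the exact format
string in the branch may differ):
```
import logging

# Include pathname, funcName and line number so each log line points
# straight at the originating code.
logging.basicConfig(
    format='%(asctime)s %(pathname)s:%(funcName)s:%(lineno)d %(message)s',
    level=logging.DEBUG,
)
logging.getLogger(__name__).debug('starting instance')
```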
LP: #1861921
|
|
This makes it clearer that we should only use this in code paths that
will definitely have dpkg available to them.
- Rename get_architecture -> get_dpkg_architecture
- Add docstring to get_dpkg_architecture
|
|
Adapt the test to the new capitalization introduced in
8116493950e7c47af0ce66fc1bb5d799ce5e477a.
|
|
cloud-init has moved to the cc_snap module and a top-level
config key 'snap'. cc_snap_config was deprecated in
cloud-init version 18.2.
Co-authored-by: Daniel Watkins <daniel@daniel-watkins.co.uk>
|
|
* cc_snappy: remove deprecated module
* cloud_tests: remove cc_snappy tests (and references)
This module was deprecated in favor of cc_snap in cloud-init v.18.2
|
|
Add Azure to cloud tests to support upstream integration testing.
Implement the inherited platform classes, add Azure configurations
to release/platform, and add docs on how to run Azure CI.
|
|
LXD integration tests fail sometimes due to failure to delete the
container, usually related to ZFS backend. This is a transient
issue unrelated to the test itself. Teach LXD platform to retry
this a few times before returning an error.
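The retry amounts to something like this (a hedged sketch; the real
pylxd calls and backoff differ in detail):
```
import time


def delete_container(container, attempts=3, delay=5):
    # ZFS-backed deletes can fail transiently, so retry a few times
    # before treating the failure as real.
    for attempt in range(1, attempts + 1):
        try:
            container.delete(wait=True)  # pylxd container API
            return
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
```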
|
|
- Update paramiko and cryptography module versions (2.4.2) to
address issues with algo and deprecation warnings.
- Modify ssh keypair generation to work with updated paramiko
- tools/xkvm sync with newer version from curtin
- Update NoCloudKvm instance.py to work with updated xkvm
- pass -name to instance, useful for debugging on shared host
- Add cache_mode platform config; default to cache=none,aio=native
- Switch to yaml.safe_load() in platforms.py
|
|
|
|
The apt_pipelining test-cases were broken, but until cloud-init
changed its default behavior to not disable pipelining, these silently
passed, as both only ever checked whether pipelining was disabled.
First, the tests used the 'apt' namespace, which is not for
configuring pipelining; that requires the 'apt_pipelining'
namespace.
Second, the 'os' variant needs to check that cloud-init does not
write a configuration file; it was a copy-and-paste error from the
disable test-case.
This branch fixes the config and collection to validate both
scenarios.
|
|
When integration test verification fails, the returned object
contains 'error' and 'traceback' keys. Each key can contain empty
strings. If the simplified 'error' message is empty, fall back to using
the more verbose full 'traceback' text in the failure summary.
|
|
|
|
LP: #1797231
|
|
Make the integration test more flexible by using a regexp, in case the disk changes.
LP: #1797199
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
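For example, an override might look roughly like this (a simplified
illustration of the pattern, not any datasource's exact code):
```
class DataSourceExample:
    # Illustrative only: how a datasource reports its subplatform.
    metadata_address = 'http://169.254.169.254'

    def _get_subplatform(self):
        # Slug plus detail, e.g. 'metadata (http://169.254.169.254)'.
        return 'metadata (%s)' % self.metadata_address
```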
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance and returning a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
|
|
Commit d3e803ad316e6796e5d83e7e8f8f4f7224b92df9 added deb-src comments to
the cloud-init apt templates. This doubled the number of matching entries
seen in /etc/apt/sources.list in the apt_configure_primary integration
test. This test was really asserting that GaTech urls were present in
/etc/apt/sources.list instead of archive.ubuntu.com. Fix the test to be a
bit more flexible in case cloud-init changes its base apt template again.
|
|
An individual skipTest, or SkipTest raised in setUp, will still launch
the instance. This allows us to stop the instance from running, so we
don't waste cycles or boot systems that are known to fail.
Also replace remaining unittest usage in tests/cloud_tests/
with unittest2.
|
|
Skip lxd tests on cosmic for two reasons:
a.) bug 1795036 - 'lxd init' fails on the cosmic kernel.
b.) 'apt install lxd' installs via snap, which can be slow
as it will download the core snap and lxd.
|
|
Git commitish fc4b966ba928b30b1c586407e752e0b51b1031e8 changed integration
test dependencies from unittest to unittest2. Use unittest2.SkipTest in
test_chrony to avoid causing tracebacks.
|
|
Relax expectation on path to lxc and lxd. The deb path still does
install them in /usr/bin/ but that is overly pedantic.
Add a 'lxd waitready' (present since lxd 0.5) to wait until lxd
is ready before operating on it.
|
|
Commitish c7555762f3a30190ce7726b4d013bc3e83c7e4b6 changed the variable
names in instance-data.json from hyphenated to underscore delimited. In
the shuffle, meta-data -> meta_data was missed.
|
|
Cloud-init caches any cloud metadata crawled during boot in the file
/run/cloud-init/instance-data.json. Cloud-init also standardizes some of
that metadata across all clouds. The command 'cloud-init query' surfaces a
simple CLI to query or format any cached instance metadata so that scripts
or end-users do not have to write tools to crawl metadata themselves.
Since 'cloud-init query' is runnable by non-root users, redact any
sensitive data from instance-data.json and provide a root-readable
unredacted instance-data-sensitive.json. Datasources can now define a
sensitive_metadata_keys tuple which will redact any matching keys
which could contain passwords or credentials from instance-data.json.
Also add the following standardized 'v1' instance-data.json keys:
- user_data: The base64-encoded user-data provided at instance launch
- vendor_data: Any vendor_data provided to the instance at launch
- underscore_delimited versions of existing hyphenated keys:
instance_id, local_hostname, availability_zone, cloud_name
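A datasource opts keys into redaction roughly like this (a sketch; actual
key names vary by datasource):
```
class DataSourceExample:
    # Values under these keys are redacted from the world-readable
    # instance-data.json; the full data lands in the root-readable
    # instance-data-sensitive.json.
    sensitive_metadata_keys = ('security-credentials',)
```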
|
|
Allow users to provide '## template: jinja' as the first line of their
#cloud-config or custom script user-data parts. When this header exists,
the cloud-config or script will be rendered as a jinja template.
All instance metadata keys and values present in
/run/cloud-init/instance-data.json will be available as jinja variables
for the template. This means any cloud-config module or script can
reference any standardized instance data in templates and scripts.
Additionally, any standardized instance-data.json keys scoped below a
'<v#>' key will be promoted as a top-level key for ease of reference in
templates. This means that '{{ local_hostname }}' is the same as using the
latest '{{ v#.local_hostname }}'.
Since instance-data is written to /run/cloud-init/instance-data.json, make
sure it is persisted across reboots when the cached datasource object is
reloaded.
LP: #1791781
|
|
The snap test requires access to a proxy and currently the integration
tests do not handle this scenario. I am disabling the test until I can
loop back around and fix this.
|
|
The snap test requires access to a proxy and currently the integration
tests do not handle this scenario. I am disabling the test until I can
loop back around and fix this.
The write_files test, specifically the binary test, is failing on cosmic
because the "binary" file we were writing was not a complete ELF
executable, but we expected 'file' to identify it as such.
The change here is to simply use 24 bytes of random, non-UTF-8 data
and check that the file was written correctly via an expected checksum.
|
|
As described in bug 1783198, we have seen some transient failures when
using the pylxd -> lxd api.
This does the following:
* adds a str() representation of LXDInstance
* checks the value of the pylxd_container object on instantiation
* sets the pylxd_container object to None on deletion
* adds retry logic to shutdown()
|
|
This adds a script to always fetch the /etc/cloud/build.info file
if it exists, and a hook when preparing the image to log the information
if it is available:
INFO - setting up ubuntu-cosmic (build_name=server serial=20180718)
This is useful for debugging and reproducing issues.
|
|
The salt minion integration test as we had it did not do a whole lot
more than the unit tests on that module did. Additionally, it caused
some transient failures at least in Ubuntu 18.04.
At a future date we may choose to add an integration test that installs
salt-minion and salt server and configures it to be a better test.
LP: #1778737
|
|
In Ubuntu, the salt-minion package version 2017.7.4+dfsg1-1 or later
automatically moves any seed keys from /etc/salt/pki/minion/ to
/var/lib/salt/pki/minion/. Fix integration tests to collect
files from either /etc/salt/pki/minion/ or
/var/lib/salt/pki/minion/.
|
|
Integration tests will now provide a brief summary for test failures
listed by platform and distribution. The failure summary will only consist
of failed test name and assert error message.
Drop the verbose dictionary of all integration test output because this
content is unreadable given the large number of integration test results
listed within this dictionary.
|
|
A fix for chrony support per LP: #1589780 is not expected in Artful or
older series. Skip the chrony suite of tests when running in a container
whose Ubuntu series is <= artful, as errors are expected.
|
|
By default, integration tests destroy the test instances after each
test run. To aid debugging and development of integration tests, support a
--preserve-instance argument which will leave the modified test instance
in a stopped state for further debug.
|
|
|
|
pylint missed a typo in the lxd platform because it could not
determine that the variable being used was a string. The variable
was set by loading a YAML file, so pylint could not know that it
would be a string. In these cases, we can be more explicit.
|
|
The SSH function was retrying and waiting for SSH for over an
hour when an SSH connection was failing to be established. This
reduces the number of retries and the time between each retry to
prevent tests from running for hours.
This also restructures how waiting for the system works: the system
will attempt to SSH up to the boot timeout by catching
SSH connection failures and retrying until the timeout is
reached. If the limit is reached, an exception is now raised
to abort the test.
Drive by - this also fixes printing of the instance name when
collecting the console log, rather than showing a Python object
address.
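In outline, the new wait behaves like this (a sketch with invented names):
```
import time


def wait_for_ssh(connect, boot_timeout, retry_delay=5):
    # Keep attempting SSH until boot_timeout seconds have elapsed,
    # then raise to abort the test.
    deadline = time.time() + boot_timeout
    while True:
        try:
            return connect()
        except OSError:  # connection refused / unreachable, etc.
            if time.time() >= deadline:
                raise RuntimeError(
                    'SSH not available within %s seconds' % boot_timeout)
            time.sleep(retry_delay)
```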
Fixes LP: #1758409
|
|
package_update_upgrade_install was failing as htop is now included in
Bionic images. Switch this test to install 'sl' instead.
The ca_certs integration test fails on the cert_count test because
update-ca-certificates on bionic generates fewer symlinks for a given cert.
Integration tests now collect dpkg-query --show output on every instance.
Add a new assertPackageInstalled helper method which finds the package or
package version installed on the instance.
Adapt existing byobu, package_update_upgrade_install, ntp and salt_minion
tests to use assertPackageInstalled method.
LP: #1769985
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
Python has deprecated these invalid string literals
(https://bugs.python.org/issue27364)
and pycodestyle is identifying them with a W605 warning
(https://github.com/PyCQA/pycodestyle/pull/676).
So basically, any use of \ not followed by one of [\'"abfnrtv],
\ooo (octal), \xhh (hex) or a newline is invalid. This is most
commonly seen for us in regexes. To solve, you either:
a.) use a raw string r'...', or
b.) correctly escape the \ that was not intended to be interpreted.
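For instance:
```
import re

# '\d' is not a recognized escape, so this now draws a W605 warning
# (and a DeprecationWarning from Python):
#   re.compile('\d+')
digits = re.compile(r'\d+')      # a) raw string
digits_alt = re.compile('\\d+')  # b) explicitly escaped backslash
```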
|
|
Add a base NTP client configuration dictionary and allow Distro-
specific changes to be merged. Add a select client function which
implements logic to prefer installed clients over clients which
need to be installed. Also allow distributions to override the
cloud-init defaults.
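In outline, the selection works like this (invented data and names; the
real tables and merge logic in cc_ntp differ):
```
import copy
from shutil import which

BASE_NTP_CLIENTS = {
    'chrony': {'check_exe': 'chronyd', 'confpath': '/etc/chrony.conf'},
    'ntp': {'check_exe': 'ntpd', 'confpath': '/etc/ntp.conf'},
}


def select_ntp_client(preferred, distro_overrides=None):
    # Merge distro-specific overrides onto the base dictionary.
    clients = copy.deepcopy(BASE_NTP_CLIENTS)
    for name, override in (distro_overrides or {}).items():
        clients.setdefault(name, {}).update(override)
    # Prefer a client that is already installed over one that would
    # need installing.
    for name in preferred:
        cfg = clients.get(name)
        if cfg and which(cfg['check_exe']):
            return name, cfg
    name = preferred[0]
    return name, clients[name]
```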
LP: #1749722
|
|
Fix integration test logic for ec2 to look for network and
availability-zone data under the key path
'ds'=>'meta-data' instead of just 'ds' when parsing instance-data.json.
|