I noticed a bug in dhclient_hook on the 'down' event: it used the 'is'
operator rather than '==' (if self.net_action is 'down').
This refactors/simplifies the code a bit for easier testing and adds
tests. The reason for the rename of 'action' to 'event' is to just
be internally consistent. The word and Namespace 'action' is used
by cloud-init main, so it was not really usable here.
Also adds a main which can easily be debugged with:
CI_DHCP_HOOK_DATA_D=./my.d python -m cloudinit.dhclient_hook up eth0
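A minimal illustration of the bug class noted above (not the cloud-init
code itself): 'is' compares object identity, not value, so a string
comparison written with 'is' can evaluate False even when the values match.
    net_action = ''.join(['do', 'wn'])   # value is 'down', but a distinct object
    print(net_action == 'down')          # True: value comparison, the intended check
    print(net_action is 'down')          # typically False: identity comparison, the bug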
|
|
There is an infrequent race when the booting instance can hit the IMDS
service before it is fully available. This results in a
requests.ConnectTimeout being raised.
Azure's retry_callback logic now retries on either 404s or Timeouts.
LP: #1800223
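A hedged sketch of what such a retry callback might look like (the name and
signature are illustrative, not necessarily the actual Azure datasource code):
    import requests

    def retry_on_url_exc(msg, exc):
        """Return True when a failed IMDS request should be retried."""
        if getattr(exc, 'code', None) == 404:
            return True      # IMDS answered but is not ready yet
        if isinstance(getattr(exc, 'cause', exc), requests.Timeout):
            return True      # transient connect/read timeout
        return False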
|
|
openSUSE has changed the way the distribution is identified in
os-release. Add support for detecting openSUSE Leap 42.3, Leap 15
and Tumbleweed.
Reference: boo#1111427
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This method is overridden in NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
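A hedged sketch of how a subclass might provide these attributes (the class
name and seed path below are illustrative, not one of the real datasources):
    class DataSourceExample(DataSource):
        dsname = 'Example'   # the default platform is dsname lower-cased
        seed_dir = '/var/lib/cloud/seed/example'

        def _get_subplatform(self):
            # a slug ('seed-dir', 'metadata', 'config-disk', ...) plus the
            # uri, device or directory the configuration was consumed from
            return 'seed-dir (%s)' % self.seed_dir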
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance and returns a dictionary of consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
|
|
Allow users to provide '## template: jinja' as the first line of their
#cloud-config or custom script user-data parts. When this header exists,
the cloud-config or script will be rendered as a jinja template.
All instance metadata keys and values present in
/run/cloud-init/instance-data.json will be available as jinja variables
for the template. This means any cloud-config module or script can
reference any standardized instance data in templates and scripts.
Additionally, any standardized instance-data.json keys scoped below a
'<v#>' key will be promoted as a top-level key for ease of reference in
templates. This means that '{{ local_hostname }}' is the same as using the
latest '{{ v#.local_hostname }}'.
Since instance-data is written to /run/cloud-init/instance-data.json, make
sure it is persisted across reboots when the cached datasource object is
reloaded.
LP: #1791781
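An illustrative user-data part using the new header; the keys referenced are
examples of standardized instance-data keys and their promoted top-level
aliases:
    ## template: jinja
    #cloud-config
    runcmd:
      - echo 'hello from {{ v1.local_hostname }}' > /tmp/hello
      - echo 'same value, promoted key: {{ local_hostname }}' >> /tmp/hello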
|
|
In many cases, cloud-init uses 'util.subp' to run a subprocess.
This is not really desirable in our unit tests as it makes the tests
dependent upon the existence of those utilities.
The change here is to modify the base test case class (CiTestCase) to
raise an exception any time subp is called. Then, fix all callers.
For cases where subp is necessary or actually desired, we can use it
via
a.) the context handler CiTestCase.allow_subp(value)
b.) the class-level attribute self.allowed_subp = value
In both cases the value is a list of acceptable executable names that
will be called (essentially argv[0]).
Some cleanups in AltCloud were done as the code was being updated.
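A hedged sketch of how a test might opt back in to real subprocess calls
under this scheme (the test class and commands are illustrative):
    from cloudinit import util
    from cloudinit.tests.helpers import CiTestCase

    class TestNeedsRealShell(CiTestCase):
        allowed_subp = ['sh']                # b.) class-level allow list (argv[0])

        def test_echo(self):
            out, _err = util.subp(['sh', '-c', 'echo hi'])
            self.assertEqual('hi\n', out)

        def test_scoped_allow(self):
            with self.allow_subp(['sh']):    # a.) context handler form
                util.subp(['sh', '-c', 'true'])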
|
|
Multiple distros use sysconfig format but have different content
and paths to certain files. Update distros to specify these
template paths in their renderer_configs dictionary.
|
|
These tests focus on the apply_credentials method and the ssh setup for
root and a distro default user.
|
|
This version uses unittest2 skipIf which is present in our python 2.6
environment.
|
|
Add examples and tests for RHEL values of redhat-release and os-release.
These examples were collected from IBMCloud images.
On RHEL systems 'platform.dist()' returns 'redhat' rather than 'rhel',
so we have adjusted the response to align with that.
|
|
An empty /etc/os-release exists on some redhat images, most notably
the COPR build images of centos6 and rawhide. On platforms missing
/etc/os-release or having an empty /etc/os-release file, use
_parse_redhat_release on rhel-based images to obtain distribution and
release codename information.
LP: #1781229
|
|
A recent commit added get_linux_distro to replace the deprecated python
platform.dist module behavior before it is dropped from python. It added
behavior that was compliant on OpenSuSE and SLES, by returning
(<distro_name>, <distro_version>, <cpu-arch>).
Fix get_linux_distro to behave more like the specific distribution's
platform.dist on ubuntu, centos and debian, which will return the
distribution release codename as the third element instead of <cpu-arch>.
SLES and OpenSUSE will retain their current behavior.
Examples follow:
('sles', '15', 'x86_64')
('opensuse', '42.3', 'x86_64')
('debian', '9', 'stretch')
('ubuntu', '16.04', 'xenial')
('centos', '7', 'Core')
LP: #1780481
|
|
Very basic type definitions are now defined to distinguish 'boot'
events from 'new instance (first boot)'. Event types will now be handed
to a datasource.update_metadata method which can determine whether
to refresh its metadata and re-render configuration based on that
source event.
A datasource can 'subscribe' to an event by setting up the update_events
attribute on the datasource class, which describes which config scope is
updated by a list of matching events. By default datasources will have
the following update_events: {'network': [EventType.BOOT_NEW_INSTANCE]}
This setting says the datasource will re-write network configuration only
on first boot of a new instance or when the instance id changes.
New methods are now present on the datasource:
- clear_cached_attrs: Resets cached datasource attributes to values
listed in datasource.cached_attr_defaults. This is performed prior to
processing freshly crawled metadata to avoid keeping old/invalid
cached data around.
- update_metadata: accepts source_event_types to determine if the
metadata should be crawled again and processed
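A hedged sketch of the subscription and caller sides described above (the
helper named apply_network_config is hypothetical):
    from cloudinit.event import EventType
    from cloudinit.sources import DataSource

    class DataSourceExample(DataSource):
        # Re-render network config on every boot, not only on the first
        # boot of a new instance.
        update_events = {'network': [EventType.BOOT_NEW_INSTANCE,
                                     EventType.BOOT]}

    def maybe_refresh(datasource):
        # Hand the source event to the datasource; only re-apply config
        # when it reports that its metadata was re-crawled.
        if datasource.update_metadata([EventType.BOOT]):
            apply_network_config(datasource)   # hypothetical helper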
|
|
When cloud-init tries to read a key from a keyserver, it will now
retry twice with 1 second in between each attempt.
Retries of import are done by default because keyservers can be
unreliable. Additionally, there is no way to determine the difference
between a non-existent key and a failure. In both cases gpg (at least
2.2.4) exits with status 2 and stderr: "keyserver receive failed: No data"
It is assumed that a key provided to cloud-init exists on the keyserver so
re-trying makes better sense than failing.
Examples of things that made receiving keys particularly unreliable:
https://bitbucket.org/skskeyserver/sks-keyserver/issues/57
https://bitbucket.org/skskeyserver/sks-keyserver/issues/60
There is also a change here from 'gpg --recv' to the longer
'gpg --recv-keys'. That option is functional and working back to
centos 6 (gpg 2.0.14) and ubuntu 14.04 (gpg 1.4.16).
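A hedged sketch of the retry behavior (argument names and defaults here are
illustrative, not the actual cloudinit/gpg.py code):
    import time
    from cloudinit import util

    def recv_key_with_retries(key, keyserver, naplens=(1, 1)):
        """Run gpg --recv-keys, sleeping and retrying on failure."""
        for naplen in list(naplens) + [None]:
            try:
                util.subp(['gpg', '--keyserver=%s' % keyserver,
                           '--recv-keys', key], capture=True)
                return
            except util.ProcessExecutionError:
                if naplen is None:
                    raise          # out of retries; give up
                time.sleep(naplen)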
|
|
Allow the user to set the distribution with --distro argument to setup.py.
The fallback is to read /etc/os-release. The final backup is to use the
platform.dist() Python function. The platform.dist() function is
deprecated and will be removed in Python 3.7.
LP: #1745235
|
|
On OpenSuSE 42.3, we would get errors running
tests/unittests/test_handler/test_handler_chef.py
- test_myhttps_nonet raises a UnmockedError
No mocking was registered, and real connections are not allowed
- test_myhttps_net raises SSLError
("bad handshake: SysCallError(32, 'EPIPE')",)
This fixes the errors by just using http instead of https.
Also it modifies the HttprettyTestCase to do the httpretty activate
and deactivate itself in setUp and tearDown. Then we don't have to
decorate individual test_ methods. Also, we set
httpretty.HTTPretty.allow_net_connect = False
Test cases here should not reach out to a network resource.
LP: #1771659
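A sketch of the setUp/tearDown pattern this describes (assuming the stock
httpretty enable/disable/reset API):
    import httpretty
    import unittest

    class HttprettyTestCase(unittest.TestCase):
        def setUp(self):
            httpretty.HTTPretty.allow_net_connect = False   # no real sockets
            httpretty.reset()
            httpretty.enable()
            super(HttprettyTestCase, self).setUp()

        def tearDown(self):
            httpretty.disable()
            httpretty.reset()
            super(HttprettyTestCase, self).tearDown()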
|
|
This modifies version.version_string to support having the package
build write the *packaged* version in with an easy replace.
Then, when cloud-init reports its version it will include the full
packaged version.
Also modified here are upstream package build files to get that done.
Note part of the trickery in packages/debian/rules.in was to avoid
the 'basic' templater consuming the '$variable' variable names.
LP: #1770712
|
|
Do not add new entries to /etc/fstab for devices that already have an
existing fstab entry.
Resolves: rhbz#1542578
|
|
The result of read_file_or_url on a file and on a URL would differ
in behavior.
str(UrlResponse) would return UrlResponse.contents.decode('utf-8')
while
str(FileResponse) would return str(FileResponse.contents)
The difference being "b'foo'" versus "foo".
As part of the general goal of cleaning util, move read_file_or_url
into url_helper.
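An illustrative expectation after the move, assuming the unified response
decodes its contents for str():
    from cloudinit import url_helper

    # A file:// path and an http URL should now stringify the same way:
    # decoded text ('foo'), not the bytes repr ("b'foo'").
    resp = url_helper.read_file_or_url('file:///etc/hostname')
    print(str(resp))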
|
|
The last set of changes to netdev_pformat ended up dropping the output
of devices that were not up. This adds back the 'down' interfaces to the
rendered output.
LP: #1766302
|
|
When images are deployed from template in a production environment
the artifacts of the provisioning stage (provisioningConfiguration.cfg)
that cloud-init referenced are cleaned up. However, when provisioned
in "debug" mode (internal to IBM) the artifacts are left.
This changes the 'is_ibm_provisioning' implementations in both
ds-identify and in the IBM datasource to identify the provisioning
stage more correctly. The change is to consider it provisioning only
if the provisioning file existed and either there was no log file or
the log file was older than this boot.
LP: #1767166
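A hedged sketch of the new check (the paths and the boot-time reference are
parameters here; the real code picks its own):
    import os

    def is_ibm_provisioning(prov_cfg, inst_log, boot_ref):
        """Provisioning only if prov_cfg exists and the install log is
        missing or older than this boot (mtime of boot_ref)."""
        if not os.path.exists(prov_cfg):
            return False
        if not os.path.exists(inst_log):
            return True
        return os.path.getmtime(inst_log) < os.path.getmtime(boot_ref)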
|
|
The cloud-init-local.service expects that any network device name changes
have already been completed by the kernel or udev daemon.
In some situations we've found that the renaming of interfaces from kernel
names (eth0, eth1, etc) to their persistent names (eno1, ens3, enp0s1,
etc) may happen after cloud-init-local has started where it reads values
from sysfs about what network devices are present, and which device to use
as a fallback nic.
Subsequently, cloud-init-local would write out network configuration for a
kernel device name which would no longer be present by the time that
networking services start to bring up the devices. The result is that the
instance does not get networking configured. Prior to use of
systemd-networkd, the Ubuntu 'networking.service' unit included a call to
udevadm settle which is why this race is not seen on a Xenial system.
This change adds the ability to detect whether an interface has a stable
name; if we find one without a stable name and stable names have not been
disabled (net.ifnames=0 in /proc/cmdline), then cloud-init will invoke
udevadm settle.
LP: #1766287
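A hedged sketch of the decision (the helper name and arguments are
illustrative):
    from cloudinit import util

    def maybe_settle(found_unstable_name, cmdline):
        # Only wait on udev if a rename is still possible: an interface still
        # carries a kernel name and predictable names were not disabled.
        if found_unstable_name and 'net.ifnames=0' not in cmdline:
            util.subp(['udevadm', 'settle'])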
|
|
validate_cloudconfig_schema with strict=True would not actually validate
if there was no jsonschema available. That seems kind of strange.
The change here is to make it raise an exception if strict was passed in.
And then to fix the one test that needed a skipIfJsonSchema wrapper.
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
The net-tools package is deprecated and will eventually be dropped. Use
"ip route", "link" or "address" instead of "ifconfig" or "route" calls.
Cloud-init can now run in an environment that no longer has net-tools.
This affects the network and route printing emitted to
cloud-config-output.log as well as the cc_disable_ec2_metadata module.
Additional changes:
- separate readResource and resourceLocation into standalone test
functions
- Fix ipv4 address rows to report scopes represented by ip addr show
- Formatted route/address output now handles multiple ipv4 and ipv6
addresses on a single interface
Co-authored-by: James Hogarth <james.hogarth@gmail.com>
Co-authored-by: Robert Schweikert <rjschwei@suse.com>
|
|
This adds a specific IBM Cloud datasource.
IBM Cloud is identified by:
a.) running on xen
b.) one of a LABEL=METADATA disk or a LABEL=config-2 disk with
UUID=9796-932E
The datasource contains its own config-drive reader that reads
only the currently supported portion of config-drive needed for
ibm cloud.
During the provisioning boot, cloud-init is disabled.
See the docstring in DataSourceIBMCloud.py for more information.
|
|
ubuntu-advantage-tools is a package for enabling and disabling extended
support services such as Extended Security Maintenance (ESM), Canonical
Livepatch and FIPS certified PPAs. Simplify Ubuntu Advantage setup on
machines by allowing users to provide a list of ubuntu-advantage commands
in cloud-config.
|
|
When 'ip=' or 'ip6=' is found on the kernel command line,
cloud-init will consider reading network config from /run/net-*.conf files.
There are some iscsi-root scenarios where initramfs configures networking
but the ip= parameter is not present. 2 such cases are:
a.) static config in /etc/iscsi/iscsi.initramfs (copied into the
initramfs)
b.) iBFT
This changes cloud-init to consider initramfs provided networking
information if:
* there are /run/net-* files and
* (ip= or ip6= is on the command line) or an open-iscsi.interface file
  exists.
LP: #1752391
|
|
Older unittest2.TestCase (as seen in CentOS 6) do not have an
assertRaisesRegex method. They only have the now-deprecated
assertRaisesRegexp.
We need our unit tests to work there and on newer python (3.6).
Simply making assertRaisesRegex = assertRaisesRegexp makes pylint
complain as described in https://github.com/PyCQA/pylint/issues/1946 .
What was here before this commit was actually broken. This commit
makes assertRaisesRegex functional in CentOS 6 and works around
the invalid Deprecated warning from pylint.
To prove this, we use assertRaisesRegex in a unit test which will
be executed in py27, py3 and py26.
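A sketch of the shim, placed on the shared test case class; the pylint
pragma works around the bogus deprecation warning:
    import unittest2

    class CiTestCase(unittest2.TestCase):
        # CentOS 6's unittest2 only has the old spelling; alias it so tests
        # can use assertRaisesRegex everywhere.
        if not hasattr(unittest2.TestCase, 'assertRaisesRegex'):
            assertRaisesRegex = unittest2.TestCase.assertRaisesRegexp  # pylint: disable=deprecated-method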
|
|
When instance meta-data provides hostname information, run
cc_set_hostname in the init-local or init-net stage before network
comes up.
This prevents an initial DHCP request from leaking the stock cloud-image
default hostname before the meta-data provided hostname has been processed.
A leaked cloud-image hostname adversely affects Dynamic DNS which
would reallocate 'ubuntu' hostname in DNS to every instance brought up by
cloud-init. These instances would only update DNS to the cloud-init
configured hostname upon DHCP lease renewal.
This branch extends the get_hostname methods in datasource, cloud and
util to limit results to metadata_only, to avoid the extra cost of querying
the distro for hostname information if metadata does not provide that
information.
LP: #1746455
|
|
This just centralizes a hunk of duplicated code and uses it from the
new location.
|
|
This makes 2 changes to shellify's behavior:
a.) raise a TypeError rather than a RuntimeError.
b.) raise a TypeError if input is not a list or tuple.
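A hedged sketch of the resulting contract (simplified relative to the real
helper):
    def shellify(cmdlist, add_header=True):
        if not isinstance(cmdlist, (tuple, list)):
            raise TypeError(
                "Input to shellify was '%s'; expected list or tuple" %
                type(cmdlist).__name__)
        lines = ['#!/bin/sh'] if add_header else []
        for args in cmdlist:
            if isinstance(args, (list, tuple)):
                lines.append(' '.join("'%s'" % a for a in args))
            elif isinstance(args, str):
                lines.append(args)
            else:
                raise TypeError(
                    "Unable to shellify type '%s'" % type(args).__name__)
        return '\n'.join(lines) + '\n'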
|
|
When we moved some tests to live under cloudinit/ we inadvertently
failed to change all things that would run nose to include that
directory.
This changes all the 'nose' invocations to consistently run with
tests/unittests and cloudinit/.
Also, it works around, more correctly this time, a python2.6-ism with
the following code:
    with assertRaises(SystemExit) as cm:
        sys.exit(2)
|
|
Resize of btrfs fails if the mount point for the filesystem we are trying
to resize, i.e. the root of the filesystem, is read-only. With this change
we use a known (currently snapper-specific) rw location to work around a
flaw that blocks resizing of the ro filesystem.
LP: #1734787
|
|
The motivation for this is that
a.) 1.7.1 runs with python 3.6 (bionic)
b.) we want to run pylint on tests/ and tools for the same reasons
that we want to run it on cloudinit/
The changes are described below.
- Update tox.ini to invoke pylint v1.7.1.
- Modify .pylintrc generated-members to ignore mocked object members (m_.*)
- Replace "dangerous" params defaulting to {}
- Fix up cloud_tests use of platforms
- Cast some instance objects with dict()
- Handle python2.7 vs 3+ ConfigParser use of readfp (deprecated)
- Update use of assertEqual(<boolean>, value) to assert<Boolean>(value)
- Replace deprecated assertRegexp -> assertRegex
- Remove useless test-class calls to super class
- Assign class property accessors a result and use it
- Fix missing class member in CepkoResultTests
- Fix Cheetah test import
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive the file is only readable by root.
Generally most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64-encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
|
|
Output in cloud-init-output.log contained only the string representation
of a SimpleTable object instead of the table formatted content. This bug
also affected ssh_authkey_fingerprints.
LP: #1722566
|
|
DataSourceOVF attempts to find iso files via walking os.listdir('/dev/')
which is far too wide. This approach is too invasive and can sometimes
race with systemd attempting to fsck and mount devices.
Instead, utilize cloudinit.util.find_devs_with to filter devices by
criteria (which uses blkid under the covers). This results in fewer
attempts to mount block devices which do not contain iso filesystems.
Unittest changes include:
- cloudinit.tests.helpers; introduce add_patch() helper
- Add unittest coverage for DataSourceOVF use of transport_iso9660
LP: #1718287
|
|
/run/cloud-init/tmp is on a filesystem mounted noexec, so running
dhclient in Ec2Local during discovery breaks with 'Permission denied'.
This branch allows us to run from a different tmp dir so we have exec
rights.
LP: #1717627
|
|
This moves the base test case classes into cloudinit/tests and
updates all the corresponding imports.
|
|
oauth_headers is the only function which requires oauthlib, so move the
import and ImportError handling inside this function to only attempt
loading at runtime when it is called. This will allow us to build on platforms
that don't have python-oauthlib installed by default. Add simple unittests
around the missing oauthlib dependencies to make sure the function
performs as intended and raises a NotImplementedError if oauthlib can't
be imported.
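A sketch of the lazy-import pattern (close to, but not necessarily identical
to, the real helper):
    def oauth_headers(url, consumer_key, token_key, token_secret,
                      consumer_secret):
        try:
            import oauthlib.oauth1 as oauth1
        except ImportError:
            # Fails only at call time, so the module imports without oauthlib.
            raise NotImplementedError('oauth support is not available')
        client = oauth1.Client(
            consumer_key,
            client_secret=consumer_secret,
            resource_owner_key=token_key,
            resource_owner_secret=token_secret,
            signature_method=oauth1.SIGNATURE_PLAINTEXT)
        _uri, signed_headers, _body = client.sign(url)
        return signed_headers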
|