|
The variable DI_EC2_STRICT_ID_DEFAULT was not being set in unit tests,
so when 16.04 was built (which changes that setting in its packaging
patches) the tests would unexpectedly fail.
|
|
/run/cloud-init/tmp is on a filesystem mounted noexec, so running
dhclient in Ec2Local during discovery breaks with 'Permission denied'.
This branch allows us to run from a different tmp dir so we have exec
rights.
LP: #1717627
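A minimal sketch of picking an exec-capable tmp dir (the helper name and candidate paths are illustrative, not the actual implementation):
    import os

    def exec_capable_tmpdir(candidates=('/run/cloud-init/tmp', '/var/tmp')):
        # Return the first candidate whose filesystem is not mounted noexec,
        # so binaries copied there (e.g. dhclient) can actually be run.
        for path in candidates:
            try:
                flags = os.statvfs(path).f_flag
            except OSError:
                continue
            if not flags & os.ST_NOEXEC:
                return path
        raise RuntimeError('no exec-capable tmp dir found')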
|
|
Currently the Azure data source waits up to 60 seconds. This has proven
not to be sufficient to provide resiliency to unrelated transient failures
in other parts of the infrastructure. Azure already has logic outside of
the VM to abort hung provisioning. This change lengthens the timeout to
15 minutes.
LP: #1717611
|
|
This regressed in the rework of GCE datasource to have a main.
The fix really just stores the user-data that was read in
self.userdata_raw, rather than self.userdata. That is consistent
with other datasources and ultimately how it was before the refactor.
The main is updated to address the fact that user-data is binary and
may not be printable.
LP: #1717598
|
|
Add a new collect-logs sub command to the cloud-init CLI. This script
will collect all logs pertinent to a cloud-init run and store them in a
compressed tar-gzipped file. This tarfile can be attached to any
cloud-init bug filed in order to aid in bug triage and resolution.
A cloudinit.apport module is also added that allows apport interaction.
Here is an example bug filed via ubuntu-bug cloud-init: LP: #1716975.
Once the apport launcher is packaged in cloud-init, bugs can be filed
against cloud-init with the following command:
ubuntu-bug cloud-init
LP: #1607345
|
|
A regression in 'get_latest_lease' made it ignore files starting with
'dhclient-' rather than just 'dhclient.'. The fix here is to allow those
files to be considered.
There is a lot more we could do here to better ensure that we pick the
most recent lease, but this change fixes the regression.
LP: #1717147
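Roughly the kind of selection involved, using mtime as the tie-breaker (the directory and helper name are illustrative; the real function may differ):
    import glob
    import os

    def get_latest_lease(lease_dir='/var/lib/dhcp'):
        # Consider both 'dhclient.' and 'dhclient-' prefixed lease files and
        # return the most recently modified one, or None if there are none.
        leases = [f for f in glob.glob(os.path.join(lease_dir, 'dhclient*'))
                  if os.path.basename(f).startswith(('dhclient.', 'dhclient-'))]
        return max(leases, key=os.path.getmtime) if leases else None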
|
|
As the root user, os.access(<path>, os.W_OK) always returns True, so that
code path will never be executed. Also avoid a warning if the root
filesystem is overlayroot, which is the common case on a MAAS booted
'ephemeral' system.
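A hypothetical snippet illustrating the pitfall (not the module's code):
    import os

    # Running as root, os.access() reports writability regardless of the mode
    # bits, so a guard like this never fires and the branch below is dead code:
    if not os.access('/etc/hostname', os.W_OK):
        print('would have handled the non-writable case here')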
|
|
Revert "centos: do not package systemd-fsck drop-in."
Revert "systemd: make systemd-fsck run after cloud-init.service"
The systemd-fsck drop-in caused regressions by introducing ordering
issues. This change reverts the original commit that added the
systemd-fsck drop-in and another commit that had removed it from the
centos packaging:
1f5489c258a26f4e26261c40786537951d67df1e
8a5296c41db45be3a172862f324ad44e732a2250
The result is to no longer provide the systemd-fsck drop-in.
LP: #1717477
|
|
The NoCloud KVM platform includes:
* Downloading daily Ubuntu images using streams and storing them in
  /srv/images
* Image customization, if required, done using mount-image-callback;
  otherwise the image is untouched
* Launching KVM via the xkvm script, a wrapper around qemu-system,
  with a custom port set for SSH
* Generating and injecting an SSH (RSA 4096) key pair to use for
  communication with the guest to collect test artifacts
* A method to produce safe shell strings by base64 encoding the
  command
Additional Changes:
* Set the default backend to LXD
* Verify the script is not running as root, to prevent images from
  becoming owned by root
* Removed extra quotes that were added when collecting the cloud-init
  version from the image
* Added info about each release; previously the lxd backend could
  query that information from pylxd image info, but other backends
  cannot obtain the same information as easily
|
|
Supposedly it was never a feature to be able to pass a path to a block
device to xfs_growfs and have it grow the filesystem. The behavior changed
upstream recently: only the mount point of a mounted XFS filesystem may be
passed. This causes breakage in cloud-init.
Upstream xfs change was commit b97815a0321072a7154ecab63e297af84066fc78.
https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/commit/?id=b97815a0321
rhbz: https://bugzilla.redhat.com/show_bug.cgi?id=1490505
Signed-off-by: Dusty Mabe <dusty@dustymabe.com>
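A hedged sketch of passing the mount point rather than the block device (the /proc/mounts lookup is illustrative; cloud-init's actual code path differs):
    import subprocess

    def mount_point_for(devpath):
        # Find where a block device is mounted by scanning /proc/mounts.
        with open('/proc/mounts') as fp:
            for line in fp:
                dev, mnt = line.split()[:2]
                if dev == devpath:
                    return mnt
        return None

    def grow_xfs(devpath):
        mnt = mount_point_for(devpath)
        if mnt is None:
            raise RuntimeError('%s is not mounted' % devpath)
        # xfs_growfs only supports operating on a mounted filesystem's mount point.
        subprocess.check_call(['xfs_growfs', mnt])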
|
|
The network devices should be enabled before sending the 'SUCCESS' event
to the underlying hypervisor.
|
|
Modules can optionally define a list of supported distros on which they can run
by declaring a distros attribute in the cc_*.py module. This branch fixes
handling of cloudinit.stages.Modules.run_section. The behavior of run_section
is now the following:
- always run a module if the module doesn't declare a distros attribute
- always run a module if the module declares distros = [ALL_DISTROS]
- skip a module if the distribution on which we run isn't in module.distros
- force a run of a skipped module if unverified_modules configuration contains
the module name
LP: #1715738
LP: #1715690
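A minimal sketch of those rules (ALL_DISTROS and the surrounding structure are illustrative; only the decision order mirrors the list above):
    ALL_DISTROS = 'all'

    def should_run(module, current_distro, unverified_modules):
        worked_distros = getattr(module, 'distros', None)
        # No distros attribute, or distros = [ALL_DISTROS]: always run.
        if not worked_distros or ALL_DISTROS in worked_distros:
            return True
        # Listed distro: run. Unlisted: skip, unless the user forces it via
        # the unverified_modules configuration.
        if current_distro in worked_distros:
            return True
        return module.__name__ in unverified_modules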
|
|
Most users of chef will want to pin the version that is installed.
Typically new versions of chef have to be evaluated for breakage etc.
This change proposes a new optional `omnibus_version` field to the chef
configuration. The changeset also adds documentation referencing the new
field.
LP: #1462693
|
|
If a string is passed to execute, then invoke 'bash', '-c',
'string'. That allows the less verbose execution of simple
commands:
image.execute("ls /run")
compared to the more explicit but longer-winded:
image.execute(["ls", "/run"])
If 'env' was ever modified in execute or a method that it called,
then the next invocation's default value would be changed. Instead
use None and then set to a new empty dict in the method.
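Roughly what the two changes look like together (a sketch, not the platform code verbatim):
    import subprocess

    class Image:
        def execute(self, command, env=None):
            # Default to None and create a fresh dict per call so one
            # invocation cannot leak env changes into the next call's default.
            if env is None:
                env = {}
            # A plain string is wrapped in a shell invocation for convenience:
            # image.execute("ls /run") == image.execute(["bash", "-c", "ls /run"])
            if isinstance(command, str):
                command = ['bash', '-c', command]
            return subprocess.run(command, env=env, capture_output=True)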
|
|
Add schema definitions to both cc_resizefs and cc_bootcmd modules. Extend
schema.py to parse and document enumerated json types. Schema definitions
are used to generate module documentation and log warnings for schema
infractions.
This branch also does the following:
- drops vestigial 'resize_rootfs_tmp' option from cc_resizefs. That
option only created the specified directory and didn't make use of
that directory for any resize operations.
- Drop yaml.dumps calls from schema documentation generation to avoid
yaml import costs on module load
- Add __doc__ = get_schema_doc(schema) definitions in each module to
supplement python help() calls for cc_runcmd, cc_bootcmd, cc_ntp and
cc_resizefs
- Add a SCHEMA_EXAMPLES_SPACER_TEMPLATE string to docs for modules which
contain more than one example
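For illustration, a trimmed-down, hypothetical module schema of this style (key names follow the pattern used by cc_ntp and cc_runcmd; the content here is made up):
    from cloudinit.config.schema import get_schema_doc

    schema = {
        'id': 'cc_example',
        'name': 'Example',
        'title': 'Illustrative module schema',
        'description': 'Shows the rough shape of a cc_* schema definition.',
        'distros': ['all'],
        'examples': ['example_mode: auto'],
        'frequency': 'once per instance',
        'type': 'object',
        'properties': {
            'example_mode': {
                'type': 'string',
                'enum': ['auto', 'noblock', 'off'],
                'description': 'Enumerated values are rendered into the docs.',
            },
        },
        'additionalProperties': False,
    }

    __doc__ = get_schema_doc(schema)  # supplements python help() output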
|
|
The xkvm script will be utilized by pending NoCloud qemu testing.
If this turns out to not be the case, then we will drop it.
|
|
For customizing machines hosted on the 'VMware' hypervisor, the datasource
should return the 'network config' data in 'curtin' format.
This branch also fixes /etc/network/interfaces by restoring the line
"source /etc/network/interfaces.d/*.cfg", which is incorrectly removed
when VMware's Perl Customization Engine writes /etc/network/interfaces.
Modify the code to read the customization configuration and return the
converted data.
Added a few tests.
LP: #1675063
|
|
This change makes DataSourceEc2Local do nothing unless it is on an
actual AWS platform. The motivation is twofold:
a.) It is generally safer to only make this function available to Ec2
clones that explicitly identify themselves to the guest. (It also
gives them a reason to supply identification code to cloud-init.)
b.) On non-intel OpenStack platforms ds-identify would enable both the Ec2
and OpenStack sources. That is because there is no good data (such as
DMI) to positively identify the platform. Previously that was fine
because OpenStack would run first and be successful. The change to add
Ec2Local meant that Ec2 now runs first.
The best case for 'b' would be a slowdown as attempts at the Ec2 metadata
service time out. The discovered case was worse.
Additionally we add a simple check of the data type of 'network' in the
metadata before attempting to read it.
LP: #1715128
|
|
During boot, the usage of /tmp is not safe. In systemd systems,
systemd-tmpfiles-clean may run at any point and clear out a temp file
while cloud-init is using it. The solution here is to use
/run/cloud-init/tmp.
LP: #1707222
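A small sketch of the idea using the standard tempfile module (the directory constant mirrors the text above):
    import os
    import tempfile

    CI_TMPDIR = '/run/cloud-init/tmp'

    def boot_safe_mkdtemp():
        # Create temp dirs under /run/cloud-init/tmp during boot so that
        # systemd-tmpfiles-clean cannot remove them out from under us.
        os.makedirs(CI_TMPDIR, exist_ok=True)
        return tempfile.mkdtemp(dir=CI_TMPDIR)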
|
|
OpenStack Nova identifies itself only to Intel guests.
Make ds-identify return 'MAYBE' for OpenStack on non-intel arches.
An unnecessary change here is to rename the 'policy_nodmi' kwarg
to 'policy_no_dmi' in the related unit tests.
LP: #1715241
|
|
This missed mock in test_openstack resulted in a costly unit test timeout.
LP: #1714376
|
|
This moves the base test case classes into cloudinit/tests and
updates all the corresponding imports.
|
|
This adds the output of the nose timer plugin to the py3 environment in
tox. This will print out the 10 longest-running tests and automatically
turn tests longer than 1 second "red" after the coverage output.
|
|
The ubuntu-init-switch module allowed the user to launch an instance that
was booted with upstart and have it switch its init system to systemd and
then reboot itself. It was only useful for the time period when Ubuntu was
transitioning to systemd but only produced images using upstart.
Also, do not run setup with --init-system=upstart. This means that by
default, debian packages built with packages/bddeb will not have upstart
unit files included. No other removal is done here.
|
|
DataSourceEc2 behavior changed to first check a minimum acceptable
metadata version uri http://169.254.169.254/<min_version>/instance-id,
retrying on 404, until the metadata service is available. After the
metadata service is up, the datasource inspects preferred
extended_metadata_versions for availability. Unit tests only mocked the
preferred extended_metadata_version so all Ec2 tests were retrying
attempts against
http://169.254.169.254/meta-data/<min-version>/instance-id adding a lot of
time cost to the unit test runs.
This branch uses httpretty to properly mock the following:
- 404s from metadata on undesired extended_metadata_version test routes
- https://169.254.169.254/meta-data/2016-09-02/instance-id
- full metadata dictionary represented on min_metadata_version
- https://169.254.169.254/meta-data/2016-09-02/*
The branch also tightens httpretty to raise a MockError for any URL which
isn't mocked via httpretty.HTTPretty.allow_net_connect=False.
LP: #1714117
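Roughly what such a test setup looks like with httpretty (the URL layout and body here are illustrative, not the exact test code):
    import httpretty
    from urllib.request import urlopen

    @httpretty.activate
    def test_instance_id_read_from_mocked_metadata():
        # Refuse any request that is not explicitly registered below.
        httpretty.HTTPretty.allow_net_connect = False
        httpretty.register_uri(
            httpretty.GET,
            'http://169.254.169.254/2016-09-02/meta-data/instance-id',
            body='i-0123456789abcdef0')
        body = urlopen(
            'http://169.254.169.254/2016-09-02/meta-data/instance-id').read()
        assert body == b'i-0123456789abcdef0'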
|
|
Currently the cloud-init default locale (en_US.UTF-8) is set by
the base datasource class. This patch allows a distro to override
the fallback value with one that's available in the distro, but continues
to respect an image which has preconfigured a locale.
- The Distro object now has a get_locale method which returns any
preconfigured locale setting found in the distro's locale system
configuration file. If not set or not present, it returns the default
locale of en_US.UTF-8, which retains the behavior of all previous
cloud-init releases.
- apply_locale now handles regenerating locales or system configuration
files as needed.
- Adjust apply_locale logic to skip locale-regen if the specified LANG
value is C.UTF-8, C, or POSIX; they do not require regeneration.
- Further add unittests to exercise the default paths for Ubuntu and
non-Ubuntu distros to validate they get the expected LANG.
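A condensed sketch of the fallback and regen-skip logic described above (the helper is stubbed and hypothetical; only the decisions mirror the list):
    class Distro:
        default_locale = 'en_US.UTF-8'

        def _read_system_locale(self):
            # hypothetical helper: parse the distro's locale configuration
            # file and return e.g. 'fr_FR.UTF-8', or None if nothing is set
            return None

        def get_locale(self):
            # Prefer a locale preconfigured in the image; otherwise fall back
            # to cloud-init's historical default.
            return self._read_system_locale() or self.default_locale

        def apply_locale(self, locale_name):
            if locale_name.upper() in ('C', 'C.UTF-8', 'POSIX'):
                # Always available; write system config only, skip locale-gen.
                return
            # ... otherwise regenerate the locale and update system config ...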
|
|
test_set_locale_sles and test_set_locale_sles_default were incorrectly
testing for truth of <distro_object>.uses_systemd rather than calling
that function and checking its result.
The error was only seen if the system running the tests was not using
systemd.
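The pitfall in miniature: a bound method object is always truthy, so the first form passes regardless of what the method would return.
    class FakeDistro:
        def uses_systemd(self):
            return False

    distro = FakeDistro()
    assert bool(distro.uses_systemd) is True   # method object itself: always truthy
    assert distro.uses_systemd() is False      # calling it gives the real answer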
|
|
oauth_headers is the only function which requires oauthlib, so move the
import and ImportError handling inside this function to only attempt
loading at runtime if it is called. This will allow us to build on
platforms that don't have python-oauthlib installed by default. Add
simple unittests around the missing oauthlib dependency to make sure the
function performs as intended and raises a NotImplementedError if
oauthlib can't be imported.
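A sketch of the runtime-import pattern (the signature and body are paraphrased; oauthlib's oauth1.Client is the only real API used):
    def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret):
        try:
            import oauthlib.oauth1 as oauth1
        except ImportError:
            # oauthlib is optional at build/install time; only fail if used.
            raise NotImplementedError('oauth support is not available')
        client = oauth1.Client(
            consumer_key,
            client_secret=consumer_secret,
            resource_owner_key=token_key,
            resource_owner_secret=token_secret,
            signature_method=oauth1.SIGNATURE_PLAINTEXT)
        _uri, signed_headers, _body = client.sign(url)
        return signed_headers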
|
|
The pinned versions of python packages in xenial do not work with
python3.6. Currently, the failure can be seen with:
$ tox -e xenial tests/unittests/test_merging.py
which ends up failing in /usr/lib/python3.6/inspect.py with:
ValueError: Function has keyword-only parameters or annotations, use
getfullargspec() API which can support them
Instead of setting 'basepython' to python3.5 for the 'xenial' environment,
we just update
the one package that does not run correctly with python3.6. That allows
the developer to have either python3.5 or python3.6 installed and have
tox work as expected.
|
|
This gets initial opensuse and SLES support back to a working state.
Still missing is more complete network file writing and unit tests.
|
|
This just adds a main to the GCE datasource so that it is easily
callable: python3 -m cloudinit.sources.DataSourceGCE
It also adds a log of the time it took to crawl.
|
|
DataSourceEc2 now parses the metadata for each nic to determine if
configured for ipv6 and/or ipv4 addresses. In AWS for metadata version
2016-09-02, nics configured for ipv4 or ipv6 addresses will have non-zero
values stored in metadata at network/interfaces/macs/<MAC>/public-ipv4 or
ipv6s respectively. Those metadata files are only non-zero when an ipv4 or
ipv6 ip is associated to the specific nic. A new
DataSourceEc2.network_config property is added which parses the metadata
and renders a network version 1 dictionary representing both dhcp4 and
dhcp6 configuration for associated nics.
The network configuration returned from the datasource will also 'pin' the
nic name to the name presented on the instance for each nic.
LP: #1639030
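A rough sketch of turning that metadata into a version 1 network config (the input shapes follow the metadata paths named above; the helper itself is illustrative):
    def ec2_network_config(macs_metadata, macs_to_nics):
        # macs_metadata: {mac: {'public-ipv4': ..., 'ipv6s': ...}, ...}
        # macs_to_nics:  {mac: nic_name} as seen on the instance.
        config = []
        for mac, nic_name in macs_to_nics.items():
            md = macs_metadata.get(mac, {})
            subnets = []
            if md.get('public-ipv4'):
                subnets.append({'type': 'dhcp4'})
            if md.get('ipv6s'):
                subnets.append({'type': 'dhcp6'})
            if subnets:
                # 'mac_address' pins the config to the nic name on the instance.
                config.append({'type': 'physical', 'name': nic_name,
                               'mac_address': mac, 'subnets': subnets})
        return {'version': 1, 'config': config}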
|
|
We are unable to ship python-oauthlib in RHEL. This commit allows
imports of url_helper to succeed even when oauthlib is unavailable
and OauthUrlHelper.oauth_headers to raise a NotImplementedError
when called.
LP: #1713760
|
|
Some Python 3 exception names crept into the cloud-init analyze code. This
patches those back out at a cost of catching less specific parents of the
desired exceptions.
|
|
Currently the python logging module will default to local time, which may
include a TZ offset in the values it produces, but the logged time format
does not contain the offset. Switching to UTC time for logging produces
consistent values in the cloud-init.log file and avoids issues when the
timezone is changed during boot.
LP: #1713158
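The standard-library mechanism behind this is a one-liner; cloud-init wires it into its own logging setup, but in isolation it looks like:
    import logging
    import time

    # Render every asctime in UTC so log lines stay comparable across reboots
    # and timezone changes.
    logging.Formatter.converter = time.gmtime
    logging.basicConfig(format='%(asctime)s - %(message)s')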
|
|
A patch to allow scripts missing a #! to run by using shell=True was
proposed but rejected. Instead we emit a log message to help the user
understand what went wrong.
|
|
In an effort to save file load cost during system boot, certain
subcommands, analyze and devel, do not get loaded unless the subcommand is
specified on the commandline. Because the setup.py entry point for the
cloud-init script doesn't specify the sysv_args parameter when calling the
CLI's main(), we need main to read sys.argv into sysv_args so our
subparser loading continues to work.
LP: #1712676
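The shape of the fix, paraphrased:
    import sys

    def main(sysv_args=None):
        if not sysv_args:
            # The setup.py console_script entry point calls main() with no
            # arguments, so read the process arguments ourselves.
            sysv_args = sys.argv
        prog = sysv_args[0]
        subcommand_args = sysv_args[1:]
        # ... conditional subparser loading keys off subcommand_args ...
        return prog, subcommand_args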
|
|
Update the user data 'include file' format documentation to explain the
behavior when an error occurs while reading a file.
|
|
Both landscape and puppet modules had issues with the way they wrote
/etc/landscape/client.conf or /etc/puppet/puppet.conf in either python3 or
python2. This branch adds initial unit tests for both modules which will
get better exercise under both python2 and python3.
The unit tests shed light on a few issues:
- In the cc_landscape module py3 can't provide six.StringIO content to
ConfigParser.write, so we need to use six.BytesIO instead
- In the cc_puppet module, python <= 2.7 doesn't support using
six.StringIO as a context manager, so we drop the context manager
fanciness and directly set outputstream = StringIO().
- The docstring in cc_puppet is fixed to document the 'conf'
sub-key requiring valid puppet section names for each
key-value list.
LP: #1699282
LP: #1710932
|
|
This branch does a few things:
- Add 'schema' subcommand to cloud-init CLI for validating
cloud-config files against strict module jsonschema definitions
- Add --annotate parameter to 'cloud-init schema' to annotate
existing cloud-config file content with validation errors
- Add jsonschema definition to cc_runcmd
- Add unit test coverage for cc_runcmd
- Update CLI capabilities documentation
This branch only imports development (and analyze) subparsers when the
specific subcommand is provided on the CLI to avoid adding costly unused
file imports during cloud-init system boot.
The schema command allows a person to quickly validate a cloud-config text
file against cloud-init's known module schemas to avoid costly roundtrips
deploying instances in their cloud of choice. As of this branch, only
cc_ntp and cc_runcmd cloud-config modules define schemas. Schema
validation will ignore all undefined config keys until all modules define
a strict schema.
To perform validation of runcmd and ntp sections of a cloud-config file:
$ cat > cloud.cfg <<EOF
runcmd: bogus
EOF
$ python -m cloudinit.cmd.main schema --config-file cloud.cfg
$ python -m cloudinit.cmd.main schema --config-file cloud.cfg \
--annotate
Once jsonschema is defined for all ~55 cc modules, we will move this
schema subcommand up as a proper subcommand of the cloud-init CLI.
|
|
The Debian GNU/Linux distribution doesn't officially come with the
non-free repositories enabled. Therefore, we want to disable those in
the cloud-init template.
LP: #1700091
|
|
This branch adds cloudinit-analyze into cloud-init proper. It adds an
"analyze" subcommand to the cloud-init command line utility for quick
performance assessment of cloud-init stages and events.
On a cloud-init configured instance, running "cloud-init analyze blame"
will now report which cloud-init events cost the most wall time. This
allows for quick assessment of the most costly stages of cloud-init.
This functionality is pulled from Ryan Harper's analyze work.
The cloudinit-analyze main script itself has been refactored a bit for
inclusion as a subcommand of cloud-init CLI. There will be a followup
branch at some point which will optionally instrument detailed strace
profiling, but that approach needs a bit more discussion first.
This branch also adds:
* additional debugging topic to the sphinx-generated docs describing
cloud-init analyze, dump and show as well as cloud-init single usage.
* Updates the Makefile unittests target to include cloudinit directory
because we now have unittests within that package.
LP: #1709761
|
|
If the network-config sent to cloud-init is in version: 2 format then
when rendering netplan, we can pass the content through and avoid
consuming network_state elements. This removes the need to map many v2
features onto network_state that other renderers wouldn't be able to use
anyway (for example, match parameters for multi-interface configuration
and wifi configuration support).
Additionally ensure we retain bond/bridge v2 configuration in network
state so when rendering to eni or sysconfig we don't lose the configuration
- Drop the NotImplemented wifi exception, log a warning that it works for
netplan only
- Adjust unittests to new code path and output
- Fix issue with v2 macaddress values getting dropped
- Add unittests for consuming/validating v2 configurations
LP: #1709180
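Conceptually the renderer split looks something like this (a sketch with the render steps passed in as callables; not the actual cloudinit.net code):
    def render_network_config(netcfg, renderer, render_v2_passthrough, render_via_state):
        # Version 2 config is already netplan syntax, so the netplan renderer
        # can pass it through without first converting to network_state.
        if netcfg.get('version') == 2 and renderer == 'netplan':
            return render_v2_passthrough(netcfg)
        # eni and sysconfig (and v1 netplan) still go through network_state.
        return render_via_state(netcfg, renderer)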
|
|
example
|
|
This feature enables the following VMware VCloud Director functionality:
1. Setting the admin password
2. Expiring the password
3. Setting the admin password and expiring it
Password configuration is triggered only as part of a full
recustomization, which happens either on first power-on or when
"poweron and full recustomization" is selected. The full customization
flow is determined by marker files. Unique marker ids are generated when
full recustomization is requested, and marker files based on these ids
help determine whether the above configuration needs to be executed.
|
|
This branch is a prerequisite for IPv6 support in AWS, allowing the Ec2
datasource to query metadata source version 2016-09-02 about whether or
not it needs to configure IPv6 on interfaces. If version 2016-09-02
is not present, fall back to the min_metadata_version of 2009-04-04.
DataSourceEc2Local does not run on FreeBSD because its dhclient doesn't
support the -sf flag, which lets us run dhclient without filesystem
side-effects.
To query AWS' metadata address @ 169.254.169.254, the instance must have
a dhcp-allocated address configured. Configuring IPv4 link-local
addresses results in timeouts from the metadata service. We introduced a
DataSourceEc2Local subclass which performs a sandboxed dhclient
discovery that obtains an authorized IP address on eth0 and crawls
metadata about the full instance network configuration.
Since EC2 IPv6 metadata is not sufficient in itself to give us all the
IPv6 knowledge we need, it is only used as a boolean to tell us which
nics need IPv6. Cloud-init will then configure the desired interfaces for
DHCPv6 versus DHCPv4.
Performance side note: Shifting the dhcp work into init-local for Ec2
actually gets us 1 second faster deployments by skipping the init-network
phase of alternate datasource checks, because Ec2Local is configured in
an earlier boot stage. In 3 test runs prior to this change cloud-init
runs were 5.5 seconds; with the change we now average 4.6 seconds.
This efficiency could be improved even further if we avoided dhcp
discovery in order to talk to the metadata service from an AWS-authorized
dhcp address, if there were some way to advertise the dhcp configuration
via DMI/SMBIOS or system environment variables.
Inspecting the dhclient setup/teardown in 3 live runs, the time cost of
the dhcp setup round trip on AWS is:
test 1: 76 milliseconds
dhcp discovery + metadata: 0.347 seconds
metadata alone: 0.271 seconds
test 2: 88 milliseconds
dhcp discovery + metadata: 0.388 seconds
metadata alone: 0.300 seconds
test 3: 75 milliseconds
dhcp discovery + metadata: 0.366 seconds
metadata alone: 0.291 seconds
LP: #1709772
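A hedged sketch of a sandboxed dhclient run of the kind described (the flags follow the -sf description above; paths are illustrative):
    import os
    import subprocess

    def sandboxed_dhcp_discovery(interface='eth0',
                                 workdir='/run/cloud-init/tmp/dhcp'):
        # Run dhclient with -sf /bin/true so no real configuration script runs,
        # keeping the lease and pid files inside our own working directory.
        os.makedirs(workdir, exist_ok=True)
        lease = os.path.join(workdir, 'dhcp.leases')
        pidfile = os.path.join(workdir, 'dhclient.pid')
        subprocess.check_call(
            ['dhclient', '-1', '-v', '-lf', lease, '-pf', pidfile,
             '-sf', '/bin/true', interface])
        return lease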
|
|
Some systems like Ubuntu-Core do not provide an ntp package for
installation but do include systemd-timesyncd (an ntp client).
On such systems cloud-init will generate a timesyncd configuration
using the 'servers' and 'pools' values as ntp hosts for timesyncd to use.
LP: #1686485
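A minimal sketch of generating such a configuration (the drop-in path and template are assumptions, not necessarily what cloud-init writes):
    import os

    def write_timesyncd_config(servers, pools,
                               path='/etc/systemd/timesyncd.conf.d/cloud-init.conf'):
        # systemd-timesyncd reads NTP= (and FallbackNTP=) from a [Time] section.
        os.makedirs(os.path.dirname(path), exist_ok=True)
        content = '[Time]\nNTP=%s\n' % ' '.join(list(servers) + list(pools))
        with open(path, 'w') as fp:
            fp.write(content)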
|
|
get_interfaces_by_mac and get_interfaces just looked much alike.
This makes get_interfaces_by_mac call get_interfaces.
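In outline (the tuple shape returned by get_interfaces is an assumption for illustration):
    def get_interfaces():
        # hypothetical shape: [(name, mac), ...] for the system's nics;
        # the real helper walks sysfs, stubbed out here
        return []

    def get_interfaces_by_mac():
        # Build the mac -> name mapping on top of get_interfaces() instead of
        # re-walking sysfs with near-duplicate code.
        return {mac: name for name, mac in get_interfaces()}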
|
|
The build deb command was no longer working because it had
assumed that you were in the root of the cloud-init directory.
This changes where the deb is built and how the dependencies are
determined, using the built-in tools for determining build dependencies.
|
|
The sysconfig renderer duplicates the cloud-init header string
when rendering the resolv.conf file. This leads to the resolv.conf file
growing with every reboot of the system. Fix this by checking for
the header when loading content from the existing file.
Update one of the sysconfig unittests with multiple render calls
to simulate the reboot to check that we don't repeat the header.
LP: #1701420
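A sketch of the dedup check (the header string and helpers are illustrative):
    import os

    HEADER = '# Created by cloud-init; do not edit.\n'

    def load_without_header(path):
        # Drop a previously written header so re-rendering doesn't stack copies.
        if not os.path.exists(path):
            return []
        with open(path) as fp:
            return [line for line in fp if line != HEADER]

    def render_resolv_conf(path, nameservers):
        lines = load_without_header(path)
        lines += ['nameserver %s\n' % ns for ns in nameservers
                  if 'nameserver %s\n' % ns not in lines]
        with open(path, 'w') as fp:
            fp.write(HEADER)
            fp.writelines(lines)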
|