|
The OpenNebula data source generates an invalid netplan yaml file
if the IPv6 gateway is not defined in context.sh.
LP: #1768547
|
|
The OpenStack datasource in 18.3 changed to detect data in the
init-local stage instead of init-network and attempted to redetect
OpenStackLocal datasource on Oracle across reboots. The function
detect_openstack was added to quickly detect whether a platform is
OpenStack based on dmi product_name or chassis_asset_tag and it was
a bit too strict for Oracle in checking for 'OpenStack Nova'/'Compute'
DMI product_name.
Oracle's DMI product_name reports 'Standard PC (i440FX + PIIX, 1996)'
and DMI chassis_asset_tag is 'OracleCloud.com'.
The detect_openstack function now adds 'OracleCloud.com' as a supported
value in the list of valid chassis-asset-tags for the OpenStack
datasource.
LP: #1784685
|
|
The comment in update_metadata() that explains how a datasource should
enable network reconfig on every boot presumes that
EventType.BOOT_NEW_INSTANCE is a subset of EventType.BOOT. That's not
the case, and as such a datasource that needs to configure networking both
when it is a new instance and on every boot needs to include both event
types.
To make the situation above easier to debug, update_metadata() now
logs when it returns false.
To make it so that datasources do not need to test before appending to
the update_events['network'], it is changed from a list to a set.
test_update_metadata_only_acts_on_supported_update_events is updated
to allow datasources to support EventType.BOOT.
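For example, a datasource that wants network configuration re-rendered on
every boot would declare something like this (class name hypothetical):

    from cloudinit import sources
    from cloudinit.event import EventType

    class DataSourceExample(sources.DataSource):
        # Subscribe the 'network' config scope to both event types.
        update_events = {'network': {EventType.BOOT_NEW_INSTANCE,
                                     EventType.BOOT}}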
Author: Mike Gerdts <mike.gerdts@joyent.com>
|
|
Pylint 2.0.0 was recently released and complains more about
logging-not-lazy than it used to. I've fixed those warnings, here.
The changes in rh_subscription are more extensive. pylint may be
complaining incorrectly there, but the tests were not correctly undoing
all of their mock/patching. This cleans those up and makes pylint happy.
|
|
Add examples and tests for RHEL values of redhat-release and os-release.
These examples were collected from IBMCloud images.
On RHEL systems 'platform.dist()' returns 'redhat' rather than 'rhel',
so we have adjusted the expected response to match.
|
|
An empty /etc/os-release exists on some redhat images, most notably
the COPR build images of centos6 and rawhide. On platforms missing
/etc/os-release or having an empty /etc/os-release file, use
_parse_redhat_release on rhel-based images to obtain distribution and
release codename information.
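A sketch of that fallback parsing (regex illustrative, not the exact one):

    import re

    def _parse_redhat_release(release_file='/etc/redhat-release'):
        # e.g. 'CentOS Linux release 7.5.1804 (Core)'
        with open(release_file) as fh:
            content = fh.read().strip()
        match = re.match(
            r'(?P<name>.+?) release (?P<version>[\d.]+) \((?P<codename>[^)]+)\)',
            content)
        return match.groupdict() if match else {}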
LP: #1781229
|
|
LP: #1727876
|
|
A recent commit added get_linux_distro to replace the deprecated Python
platform.dist() behavior before it is dropped from Python. It added
behavior that was compliant on OpenSuSE and SLES, by returning
(<distro_name>, <distro_version>, <cpu-arch>).
Fix get_linux_distro to behave more like the specific distribution's
platform.dist on ubuntu, centos and debian, which will return the
distribution release codename as the third element instead of <cpu-arch>.
SLES and OpenSUSE will retain their current behavior.
Examples follow:
('sles', '15', 'x86_64')
('opensuse', '42.3', 'x86_64')
('debian', '9', 'stretch')
('ubuntu', '16.04', 'xenial')
('centos', '7', 'Core')
LP: #1780481
|
|
Very basic type definitions are now defined to distinguish 'boot'
events from 'new instance (first boot)'. Event types will now be handed
to a datasource.update_metadata method which can determine whether
to refresh its metadata and re-render configuration based on that
source event.
A datasource can 'subscribe' to an event by setting up the update_events
attribute on the datasource class, which describes which config scope is
updated by a list of matching events. By default datasources will have
the following update_events: {'network': [EventType.BOOT_NEW_INSTANCE]}
This setting says the datasource will re-write network configuration only
on first boot of a new instance or when the instance id changes.
New methods are now present on the datasource:
- clear_cached_attrs: Resets cached datasource attributes to values
listed in datasource.cached_attr_defaults. This is performed prior to
processing fresh metadata to avoid keeping old/invalid
cached data around.
- update_metadata: accepts source_event_types to determine if the
metadata should be crawled again and processed
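Condensed sketch of how the two methods interact (LOG is the module
logger; details simplified):

    def update_metadata(self, source_event_types):
        supported = self.update_events.get('network', [])
        if not any(ev in supported for ev in source_event_types):
            LOG.debug('Datasource %s not updated for events: %s',
                      self, source_event_types)
            return False
        self.clear_cached_attrs()  # drop stale cached values first
        return self._get_data()    # re-crawl and re-process metadata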
|
|
When cloud-init tries to read a key from a keyserver, it will now
retry twice with 1 second in between each.
Retries of import are done by default because keyservers can be
unreliable. Additionally, there is no way to determine the difference
between a non-existent key and a failure. In both cases gpg (at least
2.2.4) exits with status 2 and stderr: "keyserver receive failed: No data"
It is assumed that a key provided to cloud-init exists on the keyserver so
re-trying makes better sense than failing.
Examples of things that have made key retrieval particularly unreliable:
https://bitbucket.org/skskeyserver/sks-keyserver/issues/57
https://bitbucket.org/skskeyserver/sks-keyserver/issues/60
There is also a change here from 'gpg --recv' to the longer
'gpg --recv-keys'. That option is functional and working back to
centos 6 (gpg 2.0.14) and ubuntu 14.04 (gpg 1.4.16).
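The retry loop is roughly this sketch (nap lengths illustrative):

    import logging
    import time

    from cloudinit import util

    LOG = logging.getLogger(__name__)

    def recv_key(key, keyserver, retries=(1, 1)):
        """Try gpg --recv-keys, napping between failed attempts."""
        trynum = 0
        while True:
            trynum += 1
            try:
                util.subp(['gpg', '--keyserver=%s' % keyserver,
                           '--recv-keys', key], capture=True)
                return
            except util.ProcessExecutionError as e:
                if trynum > len(retries):
                    raise ValueError(
                        'Failed to import key %s from %s' % (key, keyserver))
                naplen = retries[trynum - 1]
                LOG.debug('gpg exited %s; retrying in %ss',
                          e.exit_code, naplen)
                time.sleep(naplen)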
|
|
standargs -> standards.
|
|
Bump the version in cloudinit/version.py to be 18.3 and update ChangeLog.
LP: #1777743
|
|
To deny a user elevated access, you can omit the `sudo` key from the
`users` dictionary. This works fine; however, it is implicitly defined
based on the defaults of `cloud-init`. If the project moves to have `sudo`
access allowed for all by default (quite unlikely but still possible)
this will catch a few people out.
This introduces the ability to define an explicit `sudo: False` in the
`users` dictionary and it will prevent `sudo` access. The behaviour is
identical to omitting the key.
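For example (user name hypothetical):

    #cloud-config
    users:
      - name: exampleuser
        sudo: False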
LP: #1771468
|
|
Newer versions (3.0.1+) of lxd create the 'lxdbr0' network when
'lxd init --auto' is invoked.
When cloud-init is given a network configuration to pass on to
lxc and that config has no name specified, or names 'lxdbr0', cloud-init
would fail to create the network because it already exists.
Similarly, we need to remove the device from the default profile
so that the attach code can work.
Also, add a _lxc method and use it to make sure we're getting the
--force-local flag everywhere.
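One plausible shape for that wrapper (signature assumed):

    from cloudinit import util

    def _lxc(cmd):
        """Run a lxc command, always scoped to the local remote."""
        return util.subp(['lxc'] + list(cmd) + ['--force-local'])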
LP: #1776958
|
|
OpenStack datasource is now discovered in init-local stage. In order to
probe whether OpenStack metadata is present, it performs a costly
sandboxed dhclient setup and metadata probe against http://169.254.169.254
for openstack data.
Cloud-init properly detects non-OpenStack on EC2, but it spends precious
time probing the metadata service, also resulting in a confusing WARNING
log about 'metadata not present'. To avoid the wasted cycles, and
confusing warning, get_data will call a detect_openstack function to
quickly determine whether the platform looks like OpenStack before trying
to setup network to probe and crawl the metadata service.
LP: #1776701
|
|
In /etc/cloud/cloud.cfg, users and images can configure which modules run
during a specific cloud-init stage by modifying one of the following
lists: cloud_init_modules, cloud_config_modules, cloud_final_modules.
If any of the configured module lists are absent or empty, cloud-init will
emit the same message it already does for existing lists that only contain
modules which are unsupported on that platform:
No 'config' modules to run under section 'cloud_config_modules'
LP: #1770462
|
|
When creating the multipart mime message that is written as
user-data.txt.i, cloud-init was losing data when converting some payloads
to strings.
LP: #1768600
Author: Scott Moser <smoser@ubuntu.com>
Co-Authored-By: Chad Smith <chad.smith@canonical.com>
|
|
There is no requirement that the environment of a process contains
only utf-8 data. This modifies get_proc_env to support it reading
data as binary and decoding if provided with an encoding.
The default case is now that we do:
contents.decode('utf-8', 'replace')
rather than
contents.decode('utf-8', 'strict')
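A sketch of the binary-safe read (decode parameters as described above):

    from cloudinit import util

    def get_proc_env(pid, encoding='utf-8', errors='replace'):
        """Return the environment of <pid> as a dict, decoding if asked."""
        contents = util.load_file('/proc/%s/environ' % pid, decode=False)
        env = {}
        for tok in contents.split(b'\x00'):
            if not tok:
                continue
            name, _, val = tok.partition(b'=')
            if encoding:
                name = name.decode(encoding, errors)
                val = val.decode(encoding, errors)
            env[name] = val
        return env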
LP: #1775371
|
|
When network configuration for any interface defines maximum transmission
unit (MTU) values, the netplan, eni and sysconfig renderers will take into
account any device-level or subnet-level mtu values.
When network configuration has conflicting device-level and ipv4 subnet
mtu values, the subnet-specific value is honored and a warning will be
logged about any ignored device-level setting.
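For example, in this version 1 sketch the ipv4 subnet's mtu of 9000 wins
over the device-level 1500 and a warning is logged (addresses invented):

    network:
      version: 1
      config:
        - type: physical
          name: eth0
          mtu: 1500
          subnets:
            - type: static
              address: 192.168.0.10/24
              mtu: 9000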
LP: #1774666
|
|
This adds a 'combine_capture' argument, as was present in curtin's
subp. It is useful to get interleaved output of a command. I noticed
a need for it when looking at user_data_rhevm in DataSourceAltCloud.
That will run a subcommand, logging its stdout but swallowing its stderr.
Another thing to change to use this would be in udevadm_settle which
currently just returns the subp() call.
Also, add the docstring copied from curtin's subp.
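Intended use looks like this (command hypothetical):

    from cloudinit import util

    # out contains stdout and stderr interleaved; the second element is empty.
    out, _err = util.subp(['some-command'], combine_capture=True)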
|
|
Allow the user to set the distribution with the --distro argument to
setup.py. The fallback is to read /etc/os-release; the final fallback is
the platform.dist() Python function. The platform.dist() function is
deprecated and will be removed in Python 3.7.
LP: #1745235
|
|
A newer version of pyflakes (2.0.0) was released.
It identified some unused variables that version 1.6.0 did not identify.
The change here merely fixes those unused variables.
|
|
- Updated datadict reference URL
- Store sdc:routes metadata in DatasourceSmartOS
- Map sdc:routes values to per-interface subnet configuration
- Added unittest
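The per-interface result is ordinary version 1 subnet config, roughly
(addresses invented):

    subnets:
      - type: static
        address: 10.210.1.217/24
        routes:
          - network: 172.16.1.0
            netmask: 255.255.255.0
            gateway: 10.210.1.1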
Co-authored-by: Mike Gerdts <mike.gerdts@joyent.com>
LP: #1763512
|
|
Network has not yet been configured in the init-local stage so the
openstack datasource will use dhcp-client to temporarily obtain an ipv4
address and query the metadata service at http://169.254.169.254 to get
network_data.json configuration. If present, the datasource will return
network_config version 1 config based on that network_data.json content.
Previously the OpenStack datasource only set up dhcp on the fallback
interface, so this represents a change in behavior to react to the full config
provided by openstack.
Also significant to OpenStack is the separation of a _crawl_data operation
from get_data(). crawl_data walks the available metadata services and
returns a dict of discovered content. get_data consumes the crawled_data,
caches it in the datasource and reacts to that data.
/run/cloud-init/instance-data.json now publishes the network_data.json or
ec2_metadata key if that data is present on any datasource.
The main reasons for the separation of crawl from get_data:
* Enable performance metrics of cloud-init's metadata crawls on each boot
* Enable cloud-init modules and scripts to query and consume metadata
content which may have updated/changed after cloud-init's initial cache
during instance boot. (Think hotplug)
Also generalize common logic to base DataSource class/module:
* Move the common UNSET variable up into the base datasource module; fix
  EC2, ConfigDrive, OpenStack and SmartOS to use the global.
* Drop get_url_settings from Ec2, CloudStack and OpenStack and generalize
DataSource.get_url_params(). Allow subclasses to override url_max_wait,
url_timeout and url_retries params.
* Rename get_network_metadata bool to perform_dhcp_setup as it designates
whether EphemeralDHCPv4 setup is required before crawling metadata.
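A sketch of those knobs on a subclass (class name and values illustrative):

    from cloudinit import sources

    class DataSourceExample(sources.DataSource):
        # Read by DataSource.get_url_params(); subclasses and datasource
        # config can override any of them.
        url_max_wait = 120  # total seconds to wait for a metadata url
        url_timeout = 10    # seconds per request
        url_retries = 5     # request attempts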
LP: #1749717
|
|
On OpenSuSE 42.3, we would get errors running
tests/unittests/test_handler/test_handler_chef.py
- test_myhttps_nonet raises an UnmockedError
No mocking was registered, and real connections are not allowed
- test_myhttps_net raises SSLError
("bad handshake: SysCallError(32, 'EPIPE')",)
This fixes the errors by just using http instead of https.
Also it modifies the HttprettyTestCase to do the httpretty activate
and deactivate itself in setUp and tearDown. Then we don't have to
decorate individual test_ methods. Also, we set
httpretty.HTTPretty.allow_net_connect = False
Test cases here should not reach out to a network resource.
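The base-class change, roughly (assuming the stock httpretty API):

    import httpretty

    from cloudinit.tests import helpers

    class HttprettyTestCase(helpers.CiTestCase):
        def setUp(self):
            httpretty.HTTPretty.allow_net_connect = False
            httpretty.reset()
            httpretty.enable()
            super(HttprettyTestCase, self).setUp()

        def tearDown(self):
            httpretty.disable()
            httpretty.reset()
            super(HttprettyTestCase, self).tearDown()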
LP: #1771659
|
|
YAML tracebacks are generally hard to read for average users. Add a bit of
logic to util.yaml_load and schema validation to look for
YAMLError.context_mark or problem_mark line and column counts.
No longer log the full exception traceback from the yaml_load error;
instead just LOG.warning for the specific error and point to the offending
line and column where the problem exists.
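Sketch of pulling out the mark (PyYAML exposes context_mark/problem_mark
with zero-based line/column):

    import logging

    import yaml

    LOG = logging.getLogger(__name__)

    def load_yaml(blob):
        try:
            return yaml.safe_load(blob)
        except yaml.YAMLError as e:
            mark = getattr(e, 'context_mark', None) or getattr(
                e, 'problem_mark', None)
            if mark:
                LOG.warning('Invalid YAML at line %d column %d',
                            mark.line + 1, mark.column + 1)
            else:
                LOG.warning('Invalid YAML: %s', e)
            return None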
|
|
The Azure data source provides a method to check whether an NTFS partition
on the ephemeral disk is safe for reformatting to ext4. The method checks
to see if there are customer data files on the disk. However, mounting
the partition fails on systems that do not have the capability of
mounting NTFS. Note that in this case, it is also very unlikely that the
NTFS partition would have been used by the system (since it can't mount
it). The only case would be where an update to the system removed the
capability to mount NTFS, the likelihood of which is also very small.
This change allows the reformatting of the ephemeral disk to ext4 on
systems where mounting NTFS is not supported.
|
|
This modifies version.version_string to support having the package
build write the *packaged* version in with an easy replace.
Then, when cloud-init reports its version it will include the full
packaged version.
Also modified here are upstream package build files to get that done.
Note part of the trickery in packages/debian/rules.in was to avoid
the 'basic' templater consuming the '$variable' variable names.
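The shape of the trick in cloudinit/version.py, roughly (placeholder text
illustrative):

    __VERSION__ = '18.3'
    _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'  # replaced at package build

    def version_string():
        """Return the packaged version if the build substituted one in."""
        if not _PACKAGED_VERSION.startswith('@@'):
            return _PACKAGED_VERSION
        return __VERSION__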
LP: #1770712
|
|
Do not add new entries to /etc/fstab for devices that already have an
existing fstab entry.
Resolves: rhbz#1542578
|
|
The result of a read_file_or_url on a file and on a url would differ
in behavior.
str(UrlResponse) would return UrlResponse.contents.decode('utf-8')
while
str(FileResponse) would return str(FileResponse.contents)
The difference being "b'foo'" versus "foo".
As part of the general goal of cleaning util, move read_file_or_url
into url_helper.
|
|
This also makes some of the messages more consistent.
|
|
In playing with a SmartOS container I found that ds-identify did
not identify the container there as a container. Systemd-detect-virt
identifies it as 'container-other'.
Also here are tests for ds-identify for the SmartOS platform
identification, and some indentation fixes in ds-identify.
|
|
When attempting to apply network configuration for SmartOS's container
platform, cloud-init would not identify nics. The nics provided
in this container service do not have 'addr_assign_type'. That
was being interpreted as a "stolen" mac, and they would be filtered
out by get_interfaces.
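The relaxed filter, sketched (helper name illustrative; 2 is the kernel's
NET_ADDR_STOLEN):

    import os

    def is_stolen_mac(devname):
        path = '/sys/class/net/%s/addr_assign_type' % devname
        if not os.path.exists(path):
            # SmartOS containers do not expose addr_assign_type at all;
            # absence should not disqualify the nic.
            return False
        with open(path) as fh:
            return fh.read().strip() == '2'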
|
|
Fix remaining pycodestyle warnings related to invalid string literals
introduced in more recent pycodestyle versions
https://bugs.python.org/issue27364 .
Also stop using flake8 in tox as it is incompatible with newer versions of
pyflakes. Instead we now add tox environments for pycodestyle and pyflakes
individually.
Set the versions in both pycodestyle and pyflakes to the currently
available versions.
|
|
This change is for Azure VM Preprovisioning. A bug was found where, after
an Azure VM reports ready the first time, while the VM is polling
indefinitely for the new ovf-env.xml from the Instance Metadata Service
(IMDS), a reboot causes us to send another report ready signal to the
fabric, which deletes the reprovisioning data on the node.
A marker file is used to fix this issue so that we will only send a
report ready signal to the fabric when no marker file is present, then
create the marker file so that when a reboot does occur, we check whether
the marker file has been created and decide whether we should send the
report ready signal.
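Sketched flow (marker path hypothetical):

    import os

    from cloudinit import util

    REPORTED_READY_MARKER = '/var/lib/cloud/data/reported_ready'

    def maybe_report_ready(report_ready_cb):
        if os.path.exists(REPORTED_READY_MARKER):
            # Already reported before a reboot; signaling again would make
            # the fabric delete the reprovisioning data.
            return
        report_ready_cb()
        util.write_file(REPORTED_READY_MARKER, 'reported\n')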
LP: #1765214
|
|
By default, FreeBSD's growfs runs interactively asking a question
which can be mitigated using the '-y' command line option. The fix
here is simply to pass -y to growfs to avoid the prompt.
LP: #1404745
|
|
The last set of changes to netdev_pformat ended up dropping the output
of devices that were not up. This adds back the 'down' interfaces to the
rendered output.
LP: #1766302
|
|
With no output at all from collect-logs, users have been confused
about where the output is. By default now, write to stderr what that
file is.
Also
* add '-v' to increase verbosity. With a single -v flag, mention
what file/info is being collected.
* limit the 'journalctl' collection to this boot (--boot=0).
collecting the entire journal seems unnecessary and can be huge.
* do not fail when collecting files or directories that are not there.
LP: #1766335
|
|
Ubuntu images on IBMCloud for 16.04 have some seed data in
/var/lib/cloud/data/seed/nocloud-net. In order to have systems with
IBMCloud enabled, we modified ds-identify detection to skip that seed
if the system was on IBMCloud. That change did not consider the
fact that IBMCloud might not be in the datasource list.
There was similar logic in the ConfigDrive datasource in ds-identify
and the datasource itself.
Config drive is now updated to only check and avoid IBMCloud if IBMCloud
is enabled. The check in ds-identify for nocloud was dropped. If a
user provides a nocloud seed on IBMCloud, then that can be used.
This means that systems running Xenial will continue to get their
old datasources.
LP: #1766401
|
|
In looking at some boot times for Xenial, Artful and Bionic, we noticed
some long amounts of time that appeared to be part of the DataSource but
were related to resolving URLs. In Artful and Bionic, there was an issue
(bug: #1739672) that resulted in slow getaddrinfo() calls when
systemd-resolved was in use. This patch adds two events that track time
for datasource.setup_datasource() and datasource.activate_datasource().
Additionally, use log_time() to wrap util.is_resolvable_url(), which
leaves info in cloud-init.log about how much time was spent.
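The idea of the wrapper, as a minimal sketch (cloud-init's util.log_time
takes more options):

    import logging
    import time

    LOG = logging.getLogger(__name__)

    def log_time(msg, func, *args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            LOG.debug('%s took %.3f seconds', msg, time.time() - start)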
|
|
When images are deployed from template in a production environment
the artifacts of the provisioning stage (provisioningConfiguration.cfg)
that cloud-init referenced are cleaned up. However, when provisioned
in "debug" mode (internal to IBM) the artifacts are left.
This changes the 'is_ibm_provisioning' implementations in both
ds-identify and in the IBM datasource to identify the provisioning
stage more correctly. The change is to consider provisioning only
if the provisioning file exists and there is no log file, or if
the log file is older than this boot.
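A sketch of that comparison (log path and boot-time reference are
assumptions):

    import os

    def is_ibm_provisioning(
            prov_cfg='/root/provisioningConfiguration.cfg',
            inst_log='/root/swinstall.log',   # hypothetical install log
            boot_ref='/proc/1/environ'):      # mtime approximates boot time
        if not os.path.exists(prov_cfg):
            return False
        if not os.path.exists(inst_log):
            return True
        # Still provisioning only if the log predates this boot.
        return os.path.getmtime(inst_log) < os.path.getmtime(boot_ref)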
LP: #1767166
|
|
The cloud-init-local.service expects that any network device name changes
have already been completed by the kernel or udev daemon.
In some situations we've found that the renaming of interfaces from kernel
names (eth0, eth1, etc) to their persistent names (eno1, ens3, enp0s1,
etc) may happen after cloud-init-local has started where it reads values
from sysfs about what network devices are present, and which device to use
as a fallback nic.
Subsequently, cloud-init-local would write out network configuration for a
kernel device name which would no longer be present by the time that
networking services start to bring up the devices. The result is that the
instance does not get networking configured. Prior to use of
systemd-networkd, the Ubuntu 'networking.service' unit included a call to
udevadm settle which is why this race is not seen on a Xenial system.
This change adds the ability to detect if an interface has a stable name;
if we find one without a stable name and stable names have not been
disabled (net.ifnames=0 in /proc/cmdline), then cloud-init will invoke
udevadm settle.
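A rough sketch of the check (kernel-name heuristic simplified):

    import os

    from cloudinit import util

    def maybe_settle_names():
        if 'net.ifnames=0' in util.load_file('/proc/cmdline'):
            return  # stable names disabled; kernel names are final
        if any(dev.startswith('eth') for dev in os.listdir('/sys/class/net')):
            # udev may still be renaming devices; wait for it to finish.
            util.udevadm_settle()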
LP: #1766287
|
|
This adds information to the IBMCloud datasource describing the
6 different scenarios that it is expected to handle.
|
|
BOOTPROTO=dhcp in sysconfig enables DHCPv4 and we should not do this
implicitly when 'dhcp6' subnet is specified. In case both dhcpv4 and
dhcpv6 are needed users should specify both:
subnets:
- type: dhcp6
- type: dhcp
Fix the current code and add a dhcpv6 only test.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
|
|
cloud-init and mdata-get each have their own implementation of the
SmartOS metadata protocol. If cloud-init and other services that call
mdata-get are run concurrently, crosstalk on the serial port can cause
them both to become confused.
This change makes it so that cloud-init uses the same cooperative
locking scheme that's used by mdata-get, thus preventing cross-talk
between mdata-get and cloud-init.
For testing, a VM running on a SmartOS host and pyserial are required.
If the tests are run on a platform other than SmartOS, those that use a
real serial port are skipped. pyserial remains commented in
requirements.txt because most testers will not be running atop SmartOS.
LP: #1746605
|
|
There are three potential sources of the hostname, one of which is
documented in SmartOS's vmadm(1M) via the hostname property. That
property's value is retrieved via the sdc:hostname key. The other
two sources for the hostname are a hostname key in customer_metadata
and the VM's uuid (sdc:uuid). Of these three, the sdc:hostname value
is not used in a meaningful way by DataSourceSmartOS.
This fix changes the fallback mechanism when hostname is not
specified in customer_metadata. The order of precedence for setting
the hostname is now 1) hostname in customer_metadata,
2) sdc:hostname, then 3) sdc:uuid.
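As a one-look sketch (metadata accessor assumed):

    def get_hostname(md):
        """1) hostname in customer_metadata, 2) sdc:hostname, 3) sdc:uuid."""
        return (md.get('hostname')
                or md.get('sdc:hostname')
                or md.get('sdc:uuid'))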
LP: #1765085
|
|
If customer_metadata has no keys, the KEYS request returns an empty
string. Callers of the list() method expect a list to be returned and
will give a stack trace if this expectation is not met.
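Sketch of the fix (request helper name assumed):

    def list(self):
        result = self.request('KEYS')
        if not result:
            return []  # an empty string would otherwise leak out to callers
        return result.split('\n')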
LP: #1763480
|
|
validate_cloudconfig_schema with strict=True would not actually validate
if there was no jsonschema available. That seems kind of strange.
The change here is to make it raise an exception if strict was passed in.
And then to fix the one test that needed a skipIfJsonSchema wrapper.
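Sketch of the strict-mode guard (logger assumed):

    import logging

    LOG = logging.getLogger(__name__)

    def validate_cloudconfig_schema(config, schema, strict=False):
        try:
            from jsonschema import Draft4Validator
        except ImportError:
            if strict:
                # The caller explicitly asked for validation; hard fail.
                raise RuntimeError(
                    'Cannot validate config. No jsonschema present')
            LOG.debug('Ignoring schema validation. jsonschema is not present')
            return
        for error in Draft4Validator(schema).iter_errors(config):
            LOG.warning('Invalid config: %s', error.message)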
|