Age | Commit message | Author |
|
Using flake8 in place of pyflakes
Renamed run-pyflakes -> run-flake8
Changed target name to flake8 in Makefile
With pyflakes we can't suppress warnings/errors in a few required places.
flake8 is flexible in that regard, so it is the better choice here.
flake8 does the job of pep8 anyway, so the pep8 target was removed from
the Makefile along with the tools/run-pep8 script.
Included setup.py in flake8 checks
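For illustration only (not code from the tree), flake8 honors per-line and
per-config suppressions that pyflakes has no mechanism for:

    # Illustrative Python snippet: silence a single finding in place.
    import os  # noqa: F401  (unused import, deliberately suppressed)

    # Checks can also be relaxed project-wide, e.g. in setup.cfg:
    # [flake8]
    # ignore = E731, W503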
|
|
Also fix search path in networkd
|
|
In the NIC attach path, we skip doing DHCP since we already did it
when bringing the interface up. However, when polling for
reprovisiondata, it is possible for the request to time out due to
platform issues. In those cases we still need to do DHCP and try again,
since we tear down the context. We can only skip the first DHCP
attempt.
|
|
Bump the version in cloudinit/version.py to 21.3 and update ChangeLog.
LP: #1940839
|
|
before rebinding again (#990)
Add a 10-second polling loop in wait_for_link_up after performing
an unbind and re-bind of the primary NIC in the hv_netvsc driver.
Also reduce cloud-init logging levels to debug for these operations.
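A minimal sketch of such a polling loop (illustrative only, not the actual
DataSourceAzure code; the 10-second budget comes from the description above):

    import time

    def wait_for_link_up(ifname, timeout=10.0, interval=1.0):
        """Poll sysfs until the link reports 'up' or the timeout expires."""
        deadline = time.monotonic() + timeout
        operstate = "/sys/class/net/%s/operstate" % ifname
        while time.monotonic() < deadline:
            try:
                with open(operstate) as f:
                    if f.read().strip() == "up":
                        return True
            except OSError:
                pass  # the device may briefly disappear during unbind/rebind
            time.sleep(interval)
        return False  # caller falls back to another unbind/rebind cycle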
|
|
Fix home permissions modified by ssh module
In #956, we updated the file and directory permissions for keys not in
the user's home directory. We also unintentionally modified the
permissions within the home directory. These should not change,
and this commit reverts that.
LP: #1940233
|
|
Update "cloud-init collect-logs" to ignore
/run/cloud-init/hook-hotplug-cmd, as including it raises the error
"/run/cloud-init/hook-hotplug-cmd is a named pipe".
Also updated logs.py to continue writing the tarball if it fails to
collect a file rather than letting the exception bubble up.
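A hedged sketch of both behaviors (the helper name is hypothetical, not the
actual logs.py code):

    import logging
    import os
    import shutil
    import stat

    LOG = logging.getLogger(__name__)

    def collect_file_safely(path, out_dir):
        """Skip named pipes; keep going if a single file cannot be copied."""
        try:
            if stat.S_ISFIFO(os.stat(path).st_mode):
                LOG.debug("Skipping named pipe: %s", path)
                return
            shutil.copy(path, os.path.join(out_dir, os.path.basename(path)))
        except Exception as e:
            # Do not let one unreadable file abort the whole tarball.
            LOG.warning("Could not collect %s: %s", path, e)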
LP: #1940235
|
|
Alters the hotplug hook to have a query mechanism that checks whether
the functionality is enabled. This allows us to avoid using the hotplug
socket and service when hotplug is disabled.
|
|
Add tests for cc_resolv_conf handler
|
|
When bringing an interface up by unbinding and then binding the hv_netvsc
driver, there may be a short delay after binding before the link is
up. So before trying unbind/bind again after sleeping, check whether the
link is up. This is a corner case hit when a preprovisioned VM is reused and
the NICs are hot-attached.
|
|
|
|
|
|
This patch fixes some indentation in a comment that broke an
attempt to run the Black formatter (https://github.com/psf/black)
against the cloud-init codebase:
$ find cloudinit -name '*.py' -type f | xargs black -l 79 --check
...
Oh no! 💥 💔 💥
262 files would be reformatted, 19 files would be left unchanged, 1 file would fail to reformat.
The one file that fails to format is cloudinit/net/__init__.py.
With this fix in place, the black command can successfully parse the
file into an AST and back again:
$ black -l 79 --check cloudinit/net/__init__.py
would reformat cloudinit/net/__init__.py
Oh no! 💥 💔 💥
1 file would be reformatted.
Normally this patch would be part of such an overall effort, but since
this is the only location that interrupted running the black command,
this author felt it was worth addressing this discrepancy sooner rather
than later, in case there is a subsequent desire to use a standard
formatting tool such as Black.
|
|
- update the puppet module to support AIO installations by setting
`install_type` to `aio`
- make the install collection configurable through the `collection`
parameter; by default the rolling `puppet` collection will be used,
which installs the latest version
- when `install_type` is `aio`, puppetlabs repos will be purged after
installation; set `cleanup` to `False` to prevent this
- AIO installations are performed by downloading and executing a shell
script; the URL for this script can be overridden using the
`aio_install_url` parameter
- make it possible to run puppet agent after installation/configuration
via the `exec` key
- by default, puppet agent will run with the `--test` argument; this can
be overridden via the `exec_args` key (a combined example follows below)
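Taken together, the options above might look like the following cloud-config,
shown as the parsed Python dict a cc_puppet unit test would receive (values
are illustrative):

    cfg = {
        "puppet": {
            "install_type": "aio",      # use the AIO installer
            "collection": "puppet7",    # illustrative; the rolling "puppet"
                                        # collection is the default
            "cleanup": False,           # keep puppetlabs repos after install
            "aio_install_url": "https://example.com/install.sh",  # override
            "exec": True,               # run puppet agent afterwards
            "exec_args": ["--test"],    # default shown explicitly
        }
    }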
|
|
This patch finally introduces the Cloud-Init Datasource for VMware
GuestInfo as a part of cloud-init proper. This datasource has existed
since 2018, and rapidly became the de facto datasource for developers
working with Packer and Terraform, for projects like kube-image-builder,
and the de jure datasource for Photon OS.
The major change to the datasource from its previous incarnation is
the name. Now named DatasourceVMware, this new version of the
datasource will allow multiple transport types in addition to
GuestInfo keys.
This datasource includes several unique features developed to address
real-world situations:
* Support for reading any key (metadata, userdata, vendordata) both
from the guestinfo table when running on a VM in vSphere as well as
from an environment variable when running inside of a container,
useful for rapid dev/test (see the sketch after this list).
* Allows booting with DHCP while still providing full participation
in Cloud-Init instance data and Jinja queries. The netifaces library
provides the ability to inspect the network after it is online,
and the runtime network configuration is then merged into the
existing metadata and persisted to disk.
* Advertises the local_ipv4 and local_ipv6 addresses via guestinfo
as well. This is useful as Guest Tools is not always able to
identify what would be considered the local address.
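A rough sketch of that dual lookup, under assumed names (the environment
variable naming and the use of vmware-rpctool here are illustrative, not
necessarily what the datasource actually does):

    import os
    import subprocess

    def get_guestinfo(key):
        """Return e.g. the 'userdata' value from the env or from guestinfo."""
        # Container/dev-test path: hypothetical variable name such as
        # VMX_GUESTINFO_USERDATA.
        env_name = "VMX_GUESTINFO_" + key.upper()
        if env_name in os.environ:
            return os.environ[env_name]
        # In-guest path: query the guestinfo table via VMware Tools.
        try:
            out = subprocess.run(
                ["vmware-rpctool", "info-get guestinfo." + key],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.strip() or None
        except (OSError, subprocess.CalledProcessError):
            return None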
The primary author and current steward of this datasource spoke at
Cloud-Init Con 2020 where there was interest in contributing this datasource
to the Cloud-Init codebase.
The datasource currently lives in its own GitHub repository at
https://github.com/vmware/cloud-init-vmware-guestinfo. Once the datasource
is merged into Cloud-Init, the old repository will be deprecated.
|
|
|
|
In /etc/ssh/sshd_config, it is possible to define a custom
authorized_keys file that will contain the keys allowed to access the
machine via the AuthorizedKeysFile option. Cloud-init is able to add
user-specific keys to the existing ones, but we need to be careful about
which of the listed authorized_keys files to pick.
Choosing a file that is shared by all users would cause security
issues, because the owner of that key could then also access other
users' accounts.
We therefore pick an authorized_keys file only if it satisfies the
following conditions:
1. it is not a "global" file, i.e. it must be defined in
AuthorizedKeysFile with %u or %h, or be located in /home/<user>. This
avoids security issues.
2. it must comply with ssh permission requirements, otherwise the ssh
daemon won't use that file.
If it doesn't meet either of those conditions, write to
~/.ssh/authorized_keys
We also need to consider the case when the chosen authorized_keys file
does not exist. In this case, the existing behavior of cloud-init is
to create the new file. We therefore need to be sure that the file
complies with ssh permissions too, by ensuring that:
- the file itself has permission 600 and is owned by the user
- any directories in the path that have to be created are owned by root
  and have permission 755.
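A condensed sketch of the selection and permission rules above (the helper
names are hypothetical, not the actual cloud-init implementation):

    import os
    import pwd

    def pick_authorized_keys_file(candidates, username):
        """Pick a per-user file from AuthorizedKeysFile, else ~/.ssh/authorized_keys."""
        home = pwd.getpwnam(username).pw_dir
        for path in candidates:
            if "%u" in path or "%h" in path or path.startswith(home + "/"):
                return path.replace("%u", username).replace("%h", home)
        return os.path.join(home, ".ssh", "authorized_keys")

    def ensure_ssh_perms(path, uid, gid):
        """File 0600 and user-owned; newly created parent dirs root-owned 0755."""
        parent = os.path.dirname(path)
        if not os.path.isdir(parent):
            os.makedirs(parent, mode=0o755)  # created as root -> root-owned
        if not os.path.exists(path):
            open(path, "a").close()
        os.chown(path, uid, gid)
        os.chmod(path, 0o600)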
|
|
Azure Linux Agent (WaLinuxAgent) waits for the ovf-env.xml file
to be written by cloud-init when cloud-init provisions the VM. This
file is written whenever cloud-init reads its contents from the
provisioning ISO.
With this change, when there is no provisioning ISO,
DataSourceAzure will generate the ovf-env.xml file based on the
metadata obtained from Azure IMDS.
|
|
|
|
Implement missing device_aliases feature
The device_aliases key has been documented as part of disk_setup for
years; however, the feature was never implemented. This implements the
feature as documented, allowing usercfg (rather than dsconfig) to create
a mapping of device names.
This is not to be confused with disk_aliases, a very similar map but
existing solely for use by datasources.
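As parsed cloud-config (values illustrative), the documented shape is roughly:
an alias is declared once and then referenced from disk_setup by name:

    cfg = {
        "device_aliases": {"my_alias": "/dev/sdb"},  # alias -> real device
        "disk_setup": {
            "my_alias": {             # refer to the device via its alias
                "table_type": "gpt",
                "layout": True,
                "overwrite": False,
            }
        },
    }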
LP: #1867532
|
|
Currently cloud-init generates fallback network config in various
scenarios.
For example:
1. When no DS is found.
2. When there is no 'network' info given in the DS metadata.
3. When a DS gives a network config once but doesn't give any network
info upon reboot; previously set network data will then be
overridden.
A newly introduced key in cloud.cfg.tmpl can be used to control this
behavior on PhotonOS.
Also, if the OS comes with a set of default network files (configs), as
PhotonOS does, cloud-init should not overwrite them by default.
This change also includes some minor reorganization of a few
config variables.
Signed-off-by: Shreenidhi Shedi <sshedi@vmware.com>
|
|
Virtuozzo Linux is a distro based on CentOS 8, similar to AlmaLinux and Rocky Linux.
|
|
Details:
1. Support setting the guest network config through guestinfo.ovfEnv using OVF
2. The 'network-config' property is optional
3. The 'network-config' property's value has to be base64 encoded
Added unit tests and updated the ovf-env.xml example
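Since the property value has to be base64 encoded, preparing it is a small
step; a minimal sketch (the network-config content itself is only an example):

    import base64

    network_config = "version: 2\nethernets:\n  eth0:\n    dhcp4: true\n"
    # Value to place in the OVF 'network-config' property in guestinfo.ovfEnv:
    encoded = base64.b64encode(network_config.encode("utf-8")).decode("ascii")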
|
|
In CI, run against pylint 2.9.3 and fix occurrences of:
- W0237 (arguments-renamed)
- W0402 (deprecated-module)
The W0402 deprecated-module was about module `imp`:
cloudinit/patcher.py:9: [W0402(deprecated-module), ]
Uses of a deprecated module 'imp'
The imp module is deprecated and replaced by importlib, which according
to the documentation has no replacement for acquire_lock() and
release_lock(), which are the only reason why `imp` is imported.
Nothing about the code using this lock actually requires it.
Let's remove the locking code and the import altogether.
Dropping the locking makes patcher.patch() an empty wrapper around
_patch_logging(). Rename _patch_logging() to patch_logging() and
call it directly instead. Drop patch().
|
|
With a few exceptions, Azure VM deployments receive provisioning
metadata through the provisioning ISO presented as a CD-ROM device
(/dev/sr0). The existing code attempts to find this device by calling
blkid to find all devices that have either type iso9660 or udf. This
can be very expensive if the VM has a lot of disks. This commit
attempts to mount the default ISO location first and only uses
blkid to locate the ISO if mounting the default location fails.
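The try-the-default-first idea, as a hedged sketch (generic code, not the
actual DataSourceAzure helpers):

    import os
    import subprocess

    DEFAULT_ISO_DEV = "/dev/sr0"

    def find_provisioning_iso(mountpoint="/mnt/azure-iso"):
        """Prefer the default cdrom device; only fall back to a blkid scan."""
        os.makedirs(mountpoint, exist_ok=True)
        if _try_mount(DEFAULT_ISO_DEV, mountpoint):
            return DEFAULT_ISO_DEV
        # Expensive path, reached only when the default location fails;
        # the real code also considers udf in addition to iso9660.
        out = subprocess.run(
            ["blkid", "-o", "device", "-t", "TYPE=iso9660"],
            capture_output=True, text=True,
        )
        for dev in out.stdout.split():
            if _try_mount(dev, mountpoint):
                return dev
        return None

    def _try_mount(dev, mountpoint):
        res = subprocess.run(["mount", "-o", "ro", dev, mountpoint],
                             capture_output=True)
        return res.returncode == 0  # caller unmounts after reading ovf-env.xml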
|
|
Adds a udev script which will invoke a hotplug hook script on all net
add events. The script will write some udev arguments to a systemd FIFO
socket (to ensure we have only one instance of cloud-init running at a
time), which is then read by a new service that calls a new 'cloud-init
devel hotplug-hook' command to handle the new event.
This hotplug-hook command will:
- Fetch the pickled datasource
- Verify that the hotplug event is supported/enabled
- Update the metadata for the datasource
- Ensure the hotplugged device exists within the datasource
- Apply the config change on the datasource metadata
- Bring up the new interface (or apply global network configuration)
- Save the updated metadata back to the pickle cache
Also scattered in some unrelated type annotations where helpful
|
|
Python 3.6 added a new `policy` attribute to `MIMEMultipart`.
MIMEMultipart may be part of the cached object pickle of a datasource.
Upgrading from an old version of Python to 3.6+ will cause the
datasource to be invalid after pickle load.
This commit uses the upgrade framework to attempt to access the MIME
message and fail early (thus discarding the cache) if we cannot.
Commit 78e89b03 should fix this issue more generally.
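A minimal sketch of that early check (the function name is hypothetical;
the actual hook lives in cloud-init's upgrade framework):

    import email.mime.multipart

    def mime_message_is_usable(msg):
        """Return False when a stale pickled MIMEMultipart lacks `policy`."""
        try:
            if isinstance(msg, email.mime.multipart.MIMEMultipart):
                _ = msg.policy  # raises AttributeError on pre-3.6 pickles
            return True
        except AttributeError:
            return False  # caller discards the cached datasource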
|
|
defined in AuthorizedKeysFile (#937)
This patch aims to fix LP: #1911680 by analyzing the files provided
in sshd_config and merging all keys into a user-specific file. It also
introduces additional tests to cover this specific case.
The file is picked by analyzing the path given in AuthorizedKeysFile.
If it points inside the current user folder (path is /home/user/*), it
means it is a user-specific file, so we can copy all user keys there.
If it contains a %u or %h, it means that there will be a specific
authorized_keys file for each user, so we can copy all user keys there.
If no path points to a user-specific file, for example when only
/etc/ssh/authorized_keys is given, default to ~/.ssh/authorized_keys.
Note that if there is more than a single user-specific file, the last
one will be picked.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Co-authored-by: James Falcon <therealfalcon@gmail.com>
LP: #1911680
RHBZ:1862967
|
|
Add a new switch, allow_raw_data, to control the raw data feature and
update the documentation. Fix bugs related to max_wait.
|
|
We read the MTU from the subnet entries. With the v1 format, the MTU can
be set at the root level of the interface entry in the `config` section.
Limitation: we won't set the MTU if the interface uses DHCP. This
would require a bit of refactoring.
Also simplify/clarify how we pass the target variable in `cloudinit.net.bsd`.
See: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=256309
Reported-by: Andrey Fesenko
|
|
Currently _bring_up_interfaces() is a no-op for any distro using
renderers. We need to be able to support bringing up a single
interface, a list of interfaces, and all interfaces. This should be
independent of the renderers, as the network config is often
generated independently of the mechanism used to apply it.
Additionally, I included a refactor to remove
"_supported_write_network_config". We had a confusing call chain of
apply_network_config->_write_network_config->_supported_write_network_config.
The last two have been combined.
|
|
summary: Clear cache when a Python version change is detected
When a distribution gets updated, it is possible that the Python version
changes. Python makes no guarantee that pickle is consistent across
versions; as such, we need to purge the cache and start over.
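A sketch of the detection (the marker-file path and helper name are
illustrative, not necessarily the actual implementation):

    import os
    import sys

    def python_version_changed(marker="/var/lib/cloud/data/python-version"):
        """Record major.minor; report True when it differs from last boot."""
        current = "%d.%d" % sys.version_info[:2]
        previous = None
        if os.path.exists(marker):
            with open(marker) as f:
                previous = f.read().strip()
        with open(marker, "w") as f:
            f.write(current)
        return previous is not None and previous != current

    # if python_version_changed(): discard the pickled datasource cache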
Co-authored-by: James Falcon <therealfalcon@gmail.com>
|
|
Minor fixes in networkd renderer & fixed corresponding tests
Removed datasource_list for Photon from cloud.cfg.tmpl & added a comment
in cloud.cfg.tmpl about not using a multiline array for datasource_list.
Signed-off-by: Shreenidhi Shedi <sshedi@vmware.com>
|
|
Also adds a new (currently experimental) systemd-networkd renderer,
and includes a small refactor of cc_resolv_conf.py to support the
resolved.conf used by systemd-resolved.
|
|
v1 network config currently has no concept of interface-specific DNS,
which is required for certain renderers. To fix this, add an
optional 'interface' key on the v1 nameserver definition. If
specified, it makes the DNS settings specific to that interface.
Otherwise, they will be defined as global DNS, as they always have been.
Additionally, DNS for v2 wasn't being recognized correctly. For DNS
defined on a particular interface, these settings now also go into the
global DNS settings as they were intended.
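For example (parsed v1 config, values illustrative), the new key scopes a
nameserver entry to one interface, while omitting it keeps the existing
global behavior:

    network_v1 = {
        "version": 1,
        "config": [
            {"type": "physical", "name": "eth0",
             "subnets": [{"type": "dhcp"}]},
            {"type": "nameserver",
             "address": ["192.168.1.1"],
             "search": ["example.internal"],
             "interface": "eth0"},  # new optional key: DNS only for eth0
        ],
    }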
|
|
The name "DigitalOcean" doesn't have a space in it; it's a single
compound word written in Pascal case (upper camel case).
|
|
- small document update for ReportEventStack explaining post_files
parameter
- small unit test for test_reporting demonstrating the close of an
event with optional post_files list
|
|
LP: #1932048
|
|
- Mostly based on FreeBSD; the main exception is that
`find_devs_with_on_freebsd` does not work.
- Since we cannot get the CDROM or the partition labels,
`find_devs_with_on_dragonflybsd()` has a more naive approach and
returns all the block devices.
|
|
instance-data.json redacts sensitive data for non-root users. Since user
data is consumed as root, we should be consuming the non-redacted data
instead.
LP: #1931392
|
|
Security scanners are often simple-minded and complain about arbitrary
settings such as file permissions. /var/log/* having world-read
permission is one of these cases.
|
|
dhclient output that contains brackets for PXE variables will break
the dhclient line-parsing regex. This fix retains the current
functionality while fixing this particular issue.
|
|
Ensure we've got a clean environment before we restart the network.
In some cases, running `sh /etc/netstart` is not enough to restart the
network. A previous default route remains in the route table and,
as a result, the network is broken.
Also, `sh /etc/netstart` does not kill `dhclient`.
The problem happens, for instance, with OVH OpenStack SBG3.
|
|
Rocky Linux is a RHEL-compatible distribution so all changes that have
been made should be trivial.
|
|
Presently, mirror keys cannot be associated with primary/security
mirrors. Unfortunately, this prevents the use of Landscape-managed
package mirrors, as the mirror key for the Landscape-hosted repository
cannot be provided.
This patch allows the same key-related fields usable on "sources"
entries to be used on the "primary" and "security" entries as well.
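As parsed cloud-config (values illustrative), the same key fields already
accepted on "sources" entries can now appear on "primary" and "security":

    apt_cfg = {
        "apt": {
            "primary": [{
                "arches": ["default"],
                "uri": "http://mirror.internal/ubuntu",
                "keyid": "F430BBA5",  # key-related field, now allowed here
            }],
            "security": [{
                "arches": ["default"],
                "uri": "http://mirror.internal/ubuntu-security",
                "key": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...",
            }],
        }
    }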
LP: #1925395
|
|
In the case of a static network, we now set the MTU according to the
meta-data.
|
|
httpretty now logs all requests by default, which gets mixed up with our
logging tests. Also, we were incorrectly setting a logging level to
'None', which now also causes issues with the new httpretty version.
See https://github.com/gabrielfalcao/HTTPretty/pull/419
|
|
Control is currently limited to boot events, though this should
allow us to more easily incorporate HOTPLUG support. Disabling
'instance-first-boot' is not supported as we apply networking config
too early in boot to have processed userdata (along with the fact
that this would be a pretty big foot-gun).
The concept of update events on a datasource has been split into
supported update events and default update events. Defaults will be
used if there are no user-defined update events, but user-defined
events won't be supplied if they aren't supported.
When applying the networking config, we now check to see if the event
is supported by the datasource as well as if it is enabled.
Configuration looks like:
updates:
  network:
    when: ['boot']
|
|
In newer versions of Python, when using urllib.parse, lines containing
newline or tab characters now get sanitized. This caused a unit test to
fail.
See https://bugs.python.org/issue43882
|
|
UDEVADM_CMD is defined but not actually used in cc_disk_setup.py
so remove it.
Also modify the comment at the top of the read_parttbl function to remove the
reference to udevadm, which implies it is used to scan the partition table.
|