|
According to the man page `man 8 swapon`, "Preallocated swap files are
supported on XFS since Linux 4.18". This patch checks the kernel version
before attempting to create a swapfile, using dd only on XFS with kernel
versions <= 4.18, or on btrfs.
Add a new function util.kernel_version which returns a tuple of ints (major, minor).
Signed-off-by: Eduardo Otubo otubo@redhat.com
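For reference, a minimal sketch of what such a helper could look like (the real util.kernel_version implementation may differ in detail):

```python
# A minimal sketch, assuming the kernel release string is "major.minor.…";
# the actual cloud-init implementation may differ.
import os

def kernel_version():
    """Return the running kernel version as a tuple of ints (major, minor)."""
    release = os.uname().release  # e.g. "5.4.0-42-generic"
    major, minor = release.split(".")[:2]
    return (int(major), int(minor))
```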
|
|
* cli: add devel make-mime subcommand
Cloud-init documents an in-source-tree tool, make-mime.py, used to
help users create multi-part MIME user-data. This tool is not shipped
in the cloud-init install and is unavailable at runtime. This patch
takes tools/make-mime.py and makes the functionality available via
the devel subcommand.
The primary interface of --attach file:content-type is still present.
The cli now adds:
-l, --list-types Print out a list of supported content-types
-f, --force Ignore errors for unsupported content-types
The tool will now raise a RuntimeError if the supplied content-type
is not supported (or, more likely, is a typo:
x-shell-script vs. x-shellscript).
* make-mime: write to stderr and exit 1 instead of raising RuntimeError
* Update example to match docs
* Update docs for make-mime subcommand
* Remove tools/make-mime.py; replaced by cloud-init devel make-mime
Co-authored-by: Rick Harding <rharding@mitechie.com>
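As an illustration only (the filenames, payloads, and content types below are made up, and this is not the tool's exact code), the kind of multi-part MIME user-data that make-mime assembles can be built with the standard library:

```python
# Illustrative sketch of multi-part MIME user-data; the real
# `cloud-init devel make-mime` implementation may differ.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

parts = [
    ("#cloud-config\npackage_update: true\n", "cloud-config", "config.yaml"),
    ("#!/bin/sh\necho hello\n", "x-shellscript", "setup.sh"),
]

combined = MIMEMultipart()
for content, subtype, filename in parts:
    msg = MIMEText(content, subtype, "utf-8")  # Content-Type: text/<subtype>
    msg.add_header("Content-Disposition", 'attachment; filename="%s"' % filename)
    combined.attach(msg)

print(combined)
```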
|
|
Commit d00126c167fc06d913d99cfc184bf3402cb8cf53 regressed cloud-init
handling in multipart MIME user-data. Specifically, cloud-init would
examine the payload of the MIME part to determine what the content
type and, subsequently, which handler to use. This meant that user-data
parts with shellscript payloads (those starting with #!) were always
handled as shellscripts rather than as their declared MIME type, which
affected when the payload was handled.
One failing scenario was a MIME part with text/cloud-boothook type
declared and a shellscript payload. This was run at shellscript
processing time rather than boothook time, resulting in a change in
behavior from previous cloud-init releases.
To continue to support known scenarios where clouds have specified
a MIME type of text/x-shellscript but provided a payload that is not
a shellscript, we're changing the lookup logic to inspect the payload
only for parts declared as one of the TYPES_NEEDED (text/plain,
text/x-not-multipart) or as text/x-shellscript.
It is safe to inspect text/x-shellscript parts, as all shellscripts must
include the #! marker and will be detected as text/x-shellscript types.
If the content is missing the #! marker, it will not be executed. If
the content is detected as something cloud-init supports, such as
#cloud-config, the appropriate cloud-init handler will be used.
This change fixes handling for parts which were shellscripts but
ran with the wrong handler because the provided MIME type was ignored.
LP: #1888822
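A rough sketch of the lookup rule described above (constant and function names are illustrative, not the exact cloud-init implementation):

```python
# Only re-inspect the payload for generic or shellscript-declared parts;
# every other declared type (e.g. text/cloud-boothook) is trusted as-is.
TYPES_NEEDED = ("text/plain", "text/x-not-multipart")

def detect_type_from_payload(payload):
    # Simplified stand-in for cloud-init's payload sniffing.
    if payload.startswith("#!"):
        return "text/x-shellscript"
    if payload.startswith("#cloud-config"):
        return "text/cloud-config"
    return None

def effective_content_type(declared_type, payload):
    if declared_type in TYPES_NEEDED or declared_type == "text/x-shellscript":
        return detect_type_from_payload(payload) or declared_type
    return declared_type
```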
|
|
|
|
This PR refactors Azure report ready code to include more robust tests and telemetry.
|
|
* v2 of the API is now the default, with fallback to v1.
* Refactored the Oracle datasource to fetch version, instance, and vnic metadata simultaneously.
|
|
A few of the 'Users and Groups' configurations in cloud-config have no effect on
already existing users. This was not documented earlier.
This change set adds that information to the documentation.
Signed-off-by: Shreenidhi Shedi <sshedi@vmware.com>
|
|
This aligns their docstrings more closely with their actual behaviour.
|
|
The /opc/v1/ metadata endpoints[0] are universally available in Oracle
Cloud Infrastructure and the OpenStack endpoints are considered
deprecated, so we can refactor the data source to use the OPC endpoints
exclusively. This simplifies the datasource code substantially, and
enables use of OPC-specific attributes in future.
[0] https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/gettingmetadata.htm
|
|
* Fix a typo in apt pipelining module
Changed `whcih` to `which`.
* Update .github-cla-signers
I have signed the CLA on Canonical's site, adding my username to the list of CLA signers.
* Update .github-cla-signers
I need to sort the list alphabetically.
|
|
Update DataSourceNoCloud and ds-identify to recognize LABEL_FATBOOT labels from blkid.
Also updated associated tests.
LP: #1841466
|
|
Add "sle_hpc" to list of values which are variant 'suse'.
|
|
DataSourceAzure: Gracefully handle the case of set hostname failure during provisioning
|
|
Add support for VMware's vCD configuration setting DEFAULT-RUN-POST-CUST-SCRIPT.
When set to true, it will default VMs to run post-customization scripts if the VM has not been configured in VMware Tools with "enable-custom-scripts" set to false.
Add datasource documentation with a bit more context about this interaction on VMware products.
With this fix, the behavior will be:
* If the VM administrator doesn't want others to execute a script on this VM, "enable-custom-scripts" can be set to false from the utility "vmware-toolbox-cmd".
* If the VM administrator doesn't set a value for "enable-custom-scripts", then by default custom scripts are disabled for security purposes.
* For VMware's vCD product, the preference is to enable the script if "enable-custom-scripts" is not set. vCD will generate a configuration file with "DEFAULT-RUN-POST-CUST-SCRIPT" set to true. This flag works for both the VMware customization engine and cloud-init.
|
|
JSONDecodeError is only available in Python 3.5+. When it isn't available (i.e. on Python 3.4, which cloud-init still supports), use the more generic ValueError.
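A minimal sketch of this compatibility pattern (JSONDecodeError is a subclass of ValueError on 3.5+, so both spellings catch invalid JSON):

```python
import json

try:
    from json import JSONDecodeError
except ImportError:  # Python 3.4
    JSONDecodeError = ValueError

def load_json(blob):
    try:
        return json.loads(blob)
    except JSONDecodeError:
        return None
```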
|
|
(#483)
Problem: When cc_ca_certs configuration has both "remove-defaults: true"
and also specifies one or more new trusted CAs to add, then the resultant
/etc/ca-certificates.conf file's first line is blank. As noted in comments
in the existing cc_ca_certs.py code, blank lines in this file cause problems.
Fix: Before adding the cloud-init CA filename to this file, first check the
size of the file; if it is empty (as all existing CAs have been deleted),
then write only the cloud-init CA filename to the file rather than appending
it.
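A rough sketch of the fix (the constant names and exact helper used in cc_ca_certs.py are assumptions):

```python
import os

CA_CERT_CONFIG = "/etc/ca-certificates.conf"
CA_CERT_FILENAME = "cloud-init-ca-certs.crt"

def add_ca_cert_to_config():
    # If all default CAs were removed, the config file is empty: write the
    # cloud-init CA filename as the only line instead of appending, so the
    # file does not start with a blank line.
    if os.stat(CA_CERT_CONFIG).st_size == 0:
        out, mode = CA_CERT_FILENAME + "\n", "w"
    else:
        out, mode = "\n" + CA_CERT_FILENAME + "\n", "a"
    with open(CA_CERT_CONFIG, mode) as fp:
        fp.write(out)
```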
|
|
It is confusing for scripts, where a disabled user has been specified,
that ssh exits with a zero status by default without any indication
that something failed.
I think exiting with a non-zero status would make it clearer in scripts
and automated setups that something failed, making the issue easier to
notice and debug.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Signed-off-by: Aleksandar Kostadinov <akostadi@redhat.com>
LP: #1170059
|
|
* cloudinit: remove global disable of pylint W0107 and fix errors
This includes removing a test class which contained no tests but wasn't
detected as empty because of an errant pass statement.
* .pylintrc: update disable comment to match arguments
|
|
* Refactor `cloudinit.net.wait_for_physdevs` to `cloudinit.distros.networking.Networking.wait_for_physdevs`
* Split the Linux-specific `udevadm_settle` call out to a separate abstract `Networking.settle` method; implement it on `LinuxNetworking` and add a `NotImplementedError` implementation to `BSDNetworking`
* Modify the one call site of `wait_for_physdevs` to use the new location
LP: #1884626
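A minimal sketch of the resulting shape (method bodies and signatures here are assumptions; the real classes live in cloudinit.distros.networking):

```python
import abc
import subprocess

class Networking(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def settle(self, *, exists=None) -> None:
        """Wait for pending device events to be processed."""

class LinuxNetworking(Networking):
    def settle(self, *, exists=None) -> None:
        # On Linux this is a thin wrapper around `udevadm settle`.
        cmd = ["udevadm", "settle"]
        if exists is not None:
            cmd.append("--exit-if-exists=%s" % exists)
        subprocess.check_call(cmd)

class BSDNetworking(Networking):
    def settle(self, *, exists=None) -> None:
        raise NotImplementedError("settle is not implemented on BSD")
```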
|
|
This includes a fix to a test that had a string concatenation issue, and
so was only testing a prefix of what was intended.
|
|
|
|
Do not fail if /etc/fstab is not present. Some images, like a container
rootfs, may not include this file by default.
LP: #1886531
|
|
Specifically:
* disable E1102 in cloudinit/sources/helpers/openstack.py for reasons
described in a comment, and
* refactor `abs_join` to require at least one positional argument; this
matches os.path.join's signature, and that mismatch is what was
causing pylint to emit a warning
* bump to pylint 2.4.2
|
|
This is an improvement over indirect parameterisation for a few reasons:
* The test code is much easier to read, the mark names are much more
intuitive than the indirect parameterisation invocation, and there's
less boilerplate to boot
* The fixture no longer has to overload the single parameter that
fixtures can take with multiple meanings
|
|
For versions before 20.2, we allowed the use of ec2 mirrors if the datasource availability_zone matched one of the ec2 regions. We are now updating that behavior to allow the use of ec2 mirrors on ec2 instances, or if the user directly passes an ec2 mirror url through #cloud-config apt directives.
LP: #1456277
|
|
As the first refactor PR, this also includes the initial structure for tests.
LP: #1884619
|
|
test using it (#461)
caplog is only available in pytest itself from 3.0 onwards. In xenial, we only have pytest 2.8.7. However, in xenial we do have pytest-catchlog available (as python3-pytest-catchlog), so we use that where appropriate.
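Test code itself is unchanged either way; a caplog-using test looks the same whether the fixture comes from pytest >= 3.0 or from pytest-catchlog on xenial (this example is illustrative, not taken from the cloud-init tree):

```python
import logging

LOG = logging.getLogger(__name__)

def emit_warning():
    LOG.warning("something happened")

def test_warning_is_logged(caplog):
    emit_warning()
    assert any("something happened" in r.getMessage() for r in caplog.records)
```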
|
|
Changes are made that simplify the code and aim to properly support FreeBSD:
- use `util.find_devs_with` instead of calling `blkid` directly, because blkid is not well supported on FreeBSD and `util.find_devs_with` already has a FreeBSD solution for that
- introduce an additional name for the FAT file system, which is used on FreeBSD
- drop the shell setting so the default value is used, because FreeBSD does not have `/bin/bash` by default
|
|
Create a schema object for the chef module and validate this schema in the module's handle function.
Some of the config keys were missing a description, so I tried looking at the code and the Chef documentation to provide information to the user. However, I don't know if I have the best description for all fields. For example, for the key show_time I could not find an accurate description of what it did, so I used what was in our code base to infer what it should do.
LP: #1858888
|
|
|
|
Namely, is_connected, is_wireless and is_present. None of these are
used in the cloud-init codebase, so remove the dead code (instead of
refactoring it).
|
|
This commit introduces the initial structure for the "cloudinit.net -> cloudinit.distros.networking Hierarchy" refactor, as detailed in [0]. It also updates that section with some changes driven by this initial implementation, as well as adding a lot more specifics to it.
[0] https://cloudinit.readthedocs.io/en/latest/topics/hacking.html#cloudinit-net-cloudinit-distros-networking-hierarchy
|
|
Hetzner cloud only supports user-data as a string (presumably utf-8).
In order to allow users on Hetzner to provide binary data to cloud-init,
we will attempt to base64-decode the user-data.
The change here adds a 'maybe_b64decode' function that will decode data
if and only if it is base64 encoded.
The reason for not using util.b64d is that we do not want the return value
decoded to a string, and util.b64d will do that if it can. Additionally,
we call decode with validate=True, which oddly is not the default.
LP: #1884071
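A minimal sketch of such a helper, assuming it lives alongside the other helpers in cloudinit.util (details may differ from the real implementation):

```python
import base64
import binascii

def maybe_b64decode(data: bytes) -> bytes:
    """Return base64-decoded bytes if data is valid base64, else data unchanged."""
    try:
        return base64.b64decode(data, validate=True)
    except binascii.Error:
        return data
```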
|
|
If the instance symlink doesn't exist, then we shouldn't create a
directory in its place, because that breaks future boots.
LP: #1883903
|
|
This allows us to disable the `ensure_dir` call when it isn't
appropriate.
|
|
This introduces a way to log the dhclient error stream, and uses it for the Azure datasource (where we have a specific requirement for this data to be logged).
|
|
When updating the docstring to include it, I realised that the current
name is somewhat misleading; this makes it a little easier to
understand, I think.
|
|
|
|
Reason: commit ded1ec8 introduced a regression whereby a bridge with no "parameters:" setting caused a KeyError exception.
LP: #1879673
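The general shape of such a fix, illustratively (key and helper names are assumptions):

```python
def bridge_params(bridge_config: dict) -> dict:
    # bridge_config["parameters"] raised KeyError when "parameters:" was
    # absent; using .get() with a default avoids that.
    return bridge_config.get("parameters") or {}
```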
|
|
deployPkg enable-custom-scripts", the return code will be EX_UNAVAILABLE(69); in this case it should not be treated as an error. (#413)
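Illustratively, the handling could take this shape (the exact command spelling is reconstructed from the truncated title above, and function names are assumptions):

```python
import subprocess

EX_UNAVAILABLE = 69

def get_custom_script_enablement():
    proc = subprocess.run(
        ["vmware-toolbox-cmd", "config", "get", "deployPkg",
         "enable-custom-scripts"],
        capture_output=True, text=True)
    if proc.returncode == EX_UNAVAILABLE:
        return None  # setting not configured; caller applies its default
    proc.check_returncode()  # any other non-zero status is a real error
    return proc.stdout.strip()
```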
|
|
* test_opennebula: convert TestParseShellConfig to a pytest test
And allow it to run bash.
(We aren't aiming to convert TestCase tests to pytest tests as a rule.
In this case, I needed to change its implementation to limit subp usage,
and I chose pytest over CiTestCase.)
* test: move conftest.py to top-level, to cover tests/ also
This gives us a single conftest.py which is shared by all tests in the
project.
|
|
This was brought up in review of #416.
Makes sense to remove the local copy of "is this executable file".
|
|
runparts (run a directory of scripts) seems to fit well in the subp
module. The request to move it there was raised in #416.
Replace the use of logexc with LOG.debug, as logexc comes from util.
|
|
This was painful, but it finishes a TODO from cloudinit/subp.py.
It moves the following from util to subp:
ProcessExecutionError
subp
which
target_path
I moved subp_blob_in_tempfile into cc_chef, which is its only caller.
That saved us from having to deal with it using write_file
and temp_utils from subp (which does not import any cloudinit things now).
It is arguable that 'target_path' could be moved to a 'path_utils' or
something, but in order to use it from subp and also from utils,
we had to get it out of utils.
|
|
Build time feature flags are now defined in cloudinit/features.py.
Feature flags can be added to toggle configuration options or
deprecated features. Feature flag overrides can be placed in
cloudinit/feature_overrides.py. Further documentation can be found in
HACKING.rst.
Additionally, updated default behavior to exit with an exception
if #include can't retrieve resources as expected. This behavior
can be toggled with a feature flag.
LP: #1734939
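A minimal sketch of how such a flag is defined and consumed, assuming a hypothetical flag name (see cloudinit/features.py and HACKING.rst for the real flags):

```python
# --- cloudinit/features.py ---
ERROR_ON_MISSING_INCLUDE = True  # hypothetical build-time feature flag

try:
    # Downstream builds may ship cloudinit/feature_overrides.py to flip defaults.
    from cloudinit.feature_overrides import *  # noqa: F401,F403
except ImportError:
    pass

# --- consumer code ---
from cloudinit import features

def handle_include(url, content):
    if content is None and features.ERROR_ON_MISSING_INCLUDE:
        raise RuntimeError("#include could not retrieve %s" % url)
    return content
```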
|
|
Improve the debuggability of this code path by logging the thrown exception details for non-404 exceptions.
Retry IMDS on HTTP errors 404 and 410; re-run DHCP on other exceptions.
|
|
This fixes issues with closing brackets not matching the opening
bracket's line, and continuation lines under-indented for hanging indent.
|
|
Remove extra spaces after a ','
|
|
Replace the hardcoded list of devices with a more robust way of determining
the device which grub is installed to.
We use grub-probe to fetch the underlying disk the /boot directory is
located on, and attempt to match the disk with its /dev/disk/by-id value.
If no such /dev/disk/by-id/ value exists, we fall back to the plain disk
name.
The changes are robust to unstable kernel device names and ordering, and use
/dev/disk/by-id values to populate grub-pc/install_devices where possible.
LP: #1877491
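A rough sketch of that approach (helper names are assumptions and error handling is elided):

```python
import os
import subprocess

def boot_disk_by_id():
    # Ask grub-probe which disk holds /boot.
    disk = subprocess.check_output(
        ["grub-probe", "--target=disk", "/boot"], text=True).strip()
    # Prefer a stable /dev/disk/by-id alias for that disk, if one exists.
    by_id_dir = "/dev/disk/by-id"
    for name in sorted(os.listdir(by_id_dir)):
        path = os.path.join(by_id_dir, name)
        if os.path.realpath(path) == os.path.realpath(disk):
            return path
    return disk  # fall back to the plain disk name
```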
|
|
This removes the use of variables named 'l', 'O', or 'I'. Generally
these were used in list comprehensions to iterate over lines.
|