Age | Commit message | Author |
|
(#251)
This is a follow-up to #144 which fixed the rendering behaviour.
While writing the tests, CI failed due to dict iteration differences across Python versions, so this also sorts the output to produce identical results across Python versions.
|
|
|
|
Since 94838def772349387e16cc642b3642020e22deda, cloud-init supports NetBSD
too.
|
|
Add support for the NetBSD Operating System.
Features in this branch:
* Add BSD distro parent class from which NetBSD and FreeBSD can
specialize
* Add *bsd util functions to cloudinit.net and cloudinit.net.bsd_utils
* subclass cloudinit.distro.freebsd.Distro from bsd.Distro
* Add new cloudinit.distro.netbsd and cloudinit.net.renderer for
netbsd
* Add an lru_cached util.is_NetBSD function
* Add NetBSD detection for ConfigDrive and NoCloud datasources
This branch has been tested with:
- NoCloud and OpenStack (with and without config-drive)
- NetBSD 8.1 and 9.0
- FreeBSD 11.2 and 12.1
- Python 3.7 only, because of the dependency on crypt.METHOD_BLOWFISH.
This version is available in NetBSD 7, 8 and 9 anyway.
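A self-contained sketch of the kind of layout described above (class names, method names and package-manager details here are illustrative, not the actual cloudinit.distros code):
```
# Illustrative sketch only -- not the real cloud-init modules.
import functools
import platform


@functools.lru_cache()
def is_NetBSD():
    # Cached OS check, in the spirit of the lru_cached util helper above.
    return platform.system() == "NetBSD"


class BSDDistro:
    """Shared *BSD behaviour that FreeBSD/NetBSD classes specialize."""
    hostname_conf_fn = "/etc/rc.conf"

    def install_packages_cmd(self, pkglist):
        raise NotImplementedError


class FreeBSDDistro(BSDDistro):
    def install_packages_cmd(self, pkglist):
        return ["pkg", "install", "-y"] + list(pkglist)


class NetBSDDistro(BSDDistro):
    def install_packages_cmd(self, pkglist):
        return ["pkgin", "-y", "install"] + list(pkglist)


if __name__ == "__main__":
    print(is_NetBSD(), NetBSDDistro().install_packages_cmd(["htop"]))
```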
|
|
See the added comment for details.
|
|
|
|
The list so far is partial.
|
|
Instead of using the username that triggered the action (which, in the
case of a committer merging master into a PR branch will be the
committer), always use the username of the submitter of the pull
request.
|
|
Now that we can distinguish between CI xenial dependencies and
needed-to-run-on-dev-machine xenial dependencies, we can return to
testing with the correct jsonpatch version.
|
|
The GitHub API doesn't allow read-write access to labels or comments when
running from a pull_request fork during CI.
This restriction results in an API error
message: "Resource not accessible by integration"
If we want to run this action per pull_request, we need to convert the
action to fail the PR status check and emit the required steps to sign the
CLA to the console on the PR's failed status tab.
|
|
pyflakes versions older than 2.1.0 are incompatible with Python 3.8
(which is the Python version in the current Ubuntu development release).
See https://github.com/PyCQA/pyflakes/issues/367 for details.
2.1.1 is the latest version ATM, so bump to that.
|
|
As the nose docs[0] themselves note, it has been in maintenance mode for the past several years. pytest is an actively developed, featureful and popular alternative that the nose docs themselves recommend. See [1] for more details about the thinking here.
(This PR also removes stale tox definitions, instead of modifying them.)
[0] https://nose.readthedocs.io/en/latest/
[1] https://lists.launchpad.net/cloud-init/msg00245.html
|
|
Cloud-config userdata provided as jinja templates are now distro,
platform and merged cloud config aware. The cloud-init query command
will also surface this config data.
Now users can selectively render portions of cloud-config based on:
* distro name, version, release
* python version
* merged cloud config values
* machine platform
* kernel
To support template handling of this config, add new top-level
keys to /run/cloud-init/instance-data.json.
The new 'merged_cfg' key represents merged cloud config from
/etc/cloud/cloud.cfg and /etc/cloud/cloud.cfg.d/*.
The new 'sys_info' key captures distro and platform
info from cloudinit.util.system_info.
Cloud config userdata templates can render conditional content
based on these additional environment values, as in the following
simple example:
```
## template: jinja
#cloud-config
runcmd:
{% if distro == 'opensuse' %}
- sh /custom-setup-sles
{% elif distro == 'centos' %}
- sh /custom-setup-centos
{% elif distro == 'debian' %}
- sh /custom-setup-debian
{% endif %}
```
To see all values: sudo cloud-init query --all
Any keys added to the standardized v1 keys are guaranteed to not
change or drop on future releases of cloud-init. 'v1' keys will be retained
for backward-compatibility even if a new standardized 'v2' set of keys
is introduced.
The following standardized v1 keys are added:
* distro, distro_release, distro_version, kernel_version, machine,
python_version, system_platform, variant
LP: #1865969
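Outside of templates, the same data can be read directly; a minimal sketch, assuming the standard /run/cloud-init/instance-data.json path (for interactive use, `cloud-init query` is the supported interface):
```
# Sketch: reading the standardized v1 keys added by this change.
import json

with open("/run/cloud-init/instance-data.json") as fh:
    data = json.load(fh)

v1 = data.get("v1", {})
print("distro:", v1.get("distro"), v1.get("distro_version"))
print("kernel:", v1.get("kernel_version"))
print("platform:", v1.get("system_platform"))
# 'merged_cfg' and 'sys_info' are the new top-level keys; merged_cfg carries
# sensitive config, so the full value lives in the root-only sensitive file.
```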
|
|
The EC2 Data Source needs to handle 3 states of the Instance
Metadata Service configured for a given instance:
1. HttpTokens : optional & HttpEndpoint : enabled
Either IMDSv2 or IMDSv1 can be used.
2. HttpTokens : required & HttpEndpoint : enabled
Calls to IMDS without a valid token (IMDSv1 or IMDSv2 with expired token)
will return a 401 error.
3. HttpEndpoint : disabled
The IMDS http endpoint will return a 403 error.
Previous work to support IMDSv2 in cloud-init handled case 1 and case 2.
This commit handles case 3 by bypassing the retry block when IMDS returns an HTTP
status code >= 400 on the official AWS cloud platform.
This shaves 2 minutes off reboot for an instance that has its IMDS http token endpoint
disabled, but it creates some inconsistencies: an instance that doesn't set
"manual_cache_clean" to "True" will have its /var/lib/cloud/instance symlink
removed altogether after it fails to find a datasource.
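In outline, the fast-fail behaviour looks like the following sketch (hypothetical helper, not the actual DataSourceEc2 code):
```
# Sketch: retry transient errors, but fail fast on definitive HTTP errors.
import urllib.error
import urllib.request


def fetch_imds(url, retries=10, timeout=2):
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code >= 400:
                # e.g. 403 when the endpoint is disabled, 401 without a valid
                # token: retrying cannot succeed, so don't burn 2 minutes.
                return None
            continue
        except urllib.error.URLError:
            continue  # transient network problem: retry
    return None
```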
|
|
When cloud-init persisted instance metadata to instance-data.json,
it failed to redact the sensitive value. Currently, the only sensitive
key, 'security-credentials', is omitted as cloud-init does not fetch
this value from IMDS.
Fix this by properly redacting the content from the public
instance-metadata.json file while retaining the value in the root-only
instance-data-sensitive.json file.
LP: #1865947
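The shape of the fix is to write a redacted copy to the world-readable file while keeping the full data in the root-only file; a small sketch (the helper and the redaction string are illustrative):
```
# Sketch: redact sensitive keys in the copy written to the public file.
import copy

SENSITIVE_KEYS = ("security-credentials",)
REDACTED = "redacted for non-root user"


def redact_sensitive(metadata):
    public = copy.deepcopy(metadata)

    def _walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key in SENSITIVE_KEYS:
                    node[key] = REDACTED
                else:
                    _walk(value)
        elif isinstance(node, list):
            for item in node:
                _walk(item)

    _walk(public)
    return public


md = {"iam": {"security-credentials": {"SecretAccessKey": "abc"}}}
# public copy -> instance-data.json, original -> instance-data-sensitive.json
print(redact_sensitive(md))
```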
|
|
Allow disabling cloud-init's network configuration via a plain-text kernel cmdline
Cloud-init docs indicate that users can disable cloud-init networking via kernel
command line parameter 'network-config=<YAML>'. This does not work unless
the <YAML> payload is base64-encoded. Document the base64 encoding
requirement and add a plain-text value for disabling cloud-init network config:
network-config=disabled
Also:
- Log an error and ignore any plain-text network-config payloads that are
not specifically 'network-config=disabled'.
- Log a warning if the network-config kernel param is invalid YAML but do not
raise an exception, allowing boot to continue and use fallback networking.
LP: #1862702
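A rough sketch of the cmdline handling described above (helper name and exact log levels are illustrative):
```
# Sketch: parse 'network-config=' from the kernel cmdline.
import base64
import binascii
import logging

import yaml  # PyYAML

LOG = logging.getLogger(__name__)


def netcfg_from_cmdline(cmdline):
    """Return parsed network config, the string 'disabled', or None."""
    for tok in cmdline.split():
        if not tok.startswith("network-config="):
            continue
        value = tok.partition("=")[2]
        if value == "disabled":
            return "disabled"
        try:
            payload = base64.b64decode(value, validate=True)
        except (binascii.Error, ValueError):
            LOG.error("network-config on cmdline is not base64 or 'disabled'")
            return None
        try:
            return yaml.safe_load(payload)
        except yaml.YAMLError:
            LOG.warning("Invalid YAML in network-config; using fallback config")
            return None
    return None
```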
|
|
Our header redact logic was redacting both logged request headers and
the actual source request. This results in DataSourceEc2 sending the
invalid header "X-aws-ec2-metadata-token-ttl-seconds: REDACTED" which
gets an HTTP status response of 400.
Cloud-init retries this failed token request for 2 minutes before
falling back to IMDSv1.
LP: #1865882
|
|
|
|
one line doc fix
|
|
In cloud-init 19.2, we added the ability for cloud-init to detect
OpenStack platforms by checking for "OpenStack Compute" or "OpenStack
Nova" in the chassis asset tag. However, this was never reflected
in the documentation. This patch updates the datasources documentation
for OpenStack to reflect the possibility of using the chassis asset tag.
LP: #1669875
|
|
* Add physical network type 'cascading' to openstack helpers
* Add a new helpers test checking that all openstack KNOWN_PHYSICAL_TYPES get type 'physical'.
|
|
|
|
Bump the version in cloudinit/version.py to 20.1 and
update ChangeLog.
LP: #1863954
|
|
* tools/read-version: don't enforce version parity in release branch CI
We have a bootstrapping problem with new releases, currently. To take
the example of 20.1: the branch that bumps the version fails CI because
there is no 20.1 tag for it to use in read-version. Previously, this
was solved by creating a tag and pushing it to the cloud-init repo
before the commit landed. However, we have GitHub branch protection
enabled, so the commit that needs to be tagged is not created until the
pull request lands in master.
This works around this problem by introducing a very specific check: if
we are performing CI for an upstream release branch, we skip the
read-version checking that we know will fail.
* tools/make-tarball: add --version parameter
When using make-tarball as part of a CI build of a new upstream release,
the version it determines is inconsistent with the version that other
tools determine. Instead of encoding the logic here (as well as in
Python elsewhere), we add a parameter to allow us to set it from outside
the script.
* packages/bddeb: handle missing version_long in new version CI
If we're running in CI for a new upstream release, we have to use
`version` instead of `version_long` (because we don't yet have the tag
required to generate `version_long`).
|
|
Instead of logging the token values used, log the headers and replace the actual
values with the string 'REDACTED'. This allows users to examine cloud-init.log
and see that the IMDSv2 token header is being used but avoids leaving the value
used in the log file itself.
LP: #1863943
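In effect, only the copy of the headers used for the log line gets redacted; a tiny sketch (not the actual url_helper code):
```
# Sketch: log headers with sensitive values replaced, send the real ones.
import logging

LOG = logging.getLogger(__name__)
REDACT_HEADERS = ("X-aws-ec2-metadata-token",)


def log_request(url, headers):
    logged = {
        k: ("REDACTED" if k in REDACT_HEADERS else v) for k, v in headers.items()
    }
    LOG.debug("request to %s with headers %s", url, logged)
    # The original 'headers' dict is untouched and is what the request sends.
```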
|
|
As noticed by Seth Arnold, non-deterministic SystemRandom should be
used when creating security-sensitive random strings.
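For reference, the stdlib pattern looks like this (sketch; not necessarily the exact cloud-init helper):
```
# SystemRandom draws from os.urandom instead of the deterministic
# Mersenne Twister, so it is suitable for security-sensitive strings.
import random
import string

_sr = random.SystemRandom()


def rand_str(length=20, select_from=string.ascii_letters + string.digits):
    return "".join(_sr.choice(select_from) for _ in range(length))


print(rand_str())
```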
|
|
|
|
The azurecloud platform did not always start instances
during collect runs. This was a result of two issues. First,
the image class _instance method did not invoke the start()
method, which then allowed the collect stage to attempt to run
scripts without an endpoint. Second, azurecloud used the
image_id as both an instance handle (which is typically
vmName in the azure api) and as an image handle (for image
capture). Resolve this by adding a .vm_name property to
the AzureCloudInstance and referencing this property in
AzureCloudImage.
Also in this branch:
- Fix error encoding user-data when value is None
- Add additional logging in AzureCloud platform
- Update logging format to print pathname, funcName and line number;
  this greatly eases debugging.
LP: #1861921
|
|
|
|
|
|
bugzilla ref: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224361
svn rev: https://svnweb.freebsd.org/ports?view=revision&revision=457768
|
|
fixes typo at doc/examples/cloud-config-disk-setup.txt; Cavaut => Caveat
|
|
- Introduce the "flavor" configuration option for the sysconfig renderer;
  this is necessary to account for differences in the handling of the
  BOOTPROTO setting between distributions (lp#1858808)
+ Thanks to Petr Pavlu for the idea
- Network config clean up for sysconfig renderer
+ The introduction of the "flavor" renderer configuration allows us
to only write values that are pertinent for the given distro
- Set the DHCPv6 client mode on SUSE (lp#1800854)
Co-authored-by: Chad Smith <chad.smith@canonical.com>
LP: #1800854
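To illustrate the "flavor" idea with the BOOTPROTO case it was introduced for (example values; the real renderer logic is more involved):
```
# Sketch: distro flavor decides which BOOTPROTO value to write.
def bootproto_for(flavor, dhcp=False):
    if dhcp:
        return "dhcp"
    # RHEL-style sysconfig expects BOOTPROTO=none for static addressing,
    # while SUSE's sysconfig uses BOOTPROTO=static.
    return "static" if flavor == "suse" else "none"


assert bootproto_for("rhel") == "none"
assert bootproto_for("suse") == "static"
assert bootproto_for("rhel", dhcp=True) == "dhcp"
```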
|
|
|
|
Fixes shellcheck warning SC2236.
|
|
The proto is 'none', not 'static', which was mistakenly implemented in
initramfs-tools/cloud-init in the past but was never the case in the
klibc ipconfig state file output.
LP: #1861412
|
|
* cloudinit: replace "import mock" with "from unittest import mock"
* test-requirements.txt: drop mock
Co-authored-by: Chad Smith <chad.smith@canonical.com>
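That is, the standard-library form, which needs no extra dependency on Python 3:
```
# Before: import mock  (third-party package)
# After: use the stdlib module directly.
import os
from unittest import mock

with mock.patch("os.path.exists", return_value=True):
    assert os.path.exists("/definitely/not/there")
```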
|
|
|
|
Make sure network_config is created when self._network_config is unset.
Co-authored-by: Scott Moser <smoser@brickies.net>
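The pattern being fixed is a lazily-built, cached property; a generic sketch (not the datasource's actual code or sentinel value):
```
# Sketch: build network_config on first access if the cache is unset.
class DataSourceSketch:
    def __init__(self):
        self._network_config = None

    @property
    def network_config(self):
        if self._network_config is None:
            self._network_config = self._generate_network_config()
        return self._network_config

    def _generate_network_config(self):
        return {"version": 2, "ethernets": {"eth0": {"dhcp4": True}}}


print(DataSourceSketch().network_config)
```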
|
|
|
|
Drop support for specifying a Python interpreter different from python3
from tools/run-container.
|
|
LP: #1860789
|
|
* packages/brpm: switch to python3
No changes needed other than changing the shebang interpreter.
* pkg-deps.json: bump the redhat build dependencies to python36
CentOS 7 (our standard target for the COPR test builds) uses python3.6
as its default python3 interpreter, so let's bump the build dependencies
accordingly.
|
|
Increasing the bits of security from 52 to 115.
LP: #1860795
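The numbers follow from length times log2(alphabet size); assuming an alphabet of roughly 54 usable characters (the exact character set isn't stated here), 52 bits corresponds to about a 9-character string and 115 bits to about 20 characters:
```
# Sketch: entropy of a random string = length * log2(alphabet size).
import math


def entropy_bits(length, alphabet_size=54):
    return length * math.log2(alphabet_size)


print(round(entropy_bits(9)))   # ~52 bits
print(round(entropy_bits(20)))  # ~115 bits
```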
|
|
|
|
We only run on Python 3 now, so we can unambiguously expect
unittest.mock to exist.
|
|
When creating a swap file on an xfs filesystem, fallocate cannot be used.
Doing so results in failure of swapon and a message like:
swapon: swapfile has holes
The solution here is to maintain a list (currently containing only XFS)
of filesystems where fallocate cannot be used. Then, on those filesystems,
use the slower but functional 'dd' method.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Co-authored-by: Adam Dobrawy <naczelnik@jawnosc.tk>
Co-authored-by: Scott Moser <smoser@brickies.net>
Co-authored-by: Daniel Watkins <daniel@daniel-watkins.co.uk>
LP: #1781781
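In outline, the fix checks the filesystem first and only uses dd where fallocate is known not to work; a simplified sketch (helper name illustrative):
```
# Sketch: create a swap file, avoiding fallocate on filesystems where
# the resulting file has holes (which makes swapon fail).
import subprocess

FALLOCATE_UNSUPPORTED_FS = ("xfs",)


def create_swapfile(path, size_mb, fstype):
    if fstype.lower() in FALLOCATE_UNSUPPORTED_FS:
        subprocess.run(
            ["dd", "if=/dev/zero", "of=%s" % path, "bs=1M",
             "count=%d" % size_mb],
            check=True,
        )
    else:
        subprocess.run(["fallocate", "-l", "%dM" % size_mb, path], check=True)
    subprocess.run(["chmod", "0600", path], check=True)
    subprocess.run(["mkswap", path], check=True)
    subprocess.run(["swapon", path], check=True)
```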
|
|
This should ensure that all its dependencies are installed during doc
generation.
LP: #1860450
|
|
(#180)
doc-requirements.txt: pin Sphinx at version used by RTD
Introduce a configuration file containing our existing web-based configuration.
|
|
|