cloud-init query tries to directly load and decode
raw user-data from /var/lib/cloud/instance/user-data.txt.
This results in UnicodeDecodeErrors on some platforms which
provide compressed content.
Avoid UnicodeDecodeErrors when parsing compressed user-data at
/var/lib/cloud/instance/user-data.txt.
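A minimal sketch of the decompression-aware read (assuming gzip-compressed
content; the helper name load_userdata is illustrative, not cloud-init's
actual API):
    import gzip

    def load_userdata(path):
        # Read bytes first; decoding directly raises UnicodeDecodeError
        # when the payload is gzip-compressed.
        with open(path, 'rb') as f:
            blob = f.read()
        if blob[:2] == b'\x1f\x8b':  # gzip magic bytes
            blob = gzip.decompress(blob)
        return blob.decode('utf-8')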
LP: #1889938
|
|
* cli: add devel make-mime subcommand
Cloud-init documents an in-source-tree tool, make-mime.py, used to
help users create multi-part MIME user-data. This tool is not shipped
in the cloud-init install and is unavailable at runtime. This patch
takes tools/make-mime.py and makes the functionality available via
the devel subcommand.
The primary interface of --attach file:content-type is still present.
The cli now adds:
-l, --list-types Print out a list of supported content-types
-f, --force Ignore errors for unsupported content-types
The tool will now raise a RuntimeError if the supplied content-type
is not supported (or, more likely, a typo:
x-shell-script vs. x-shellscript).
* make-mime: write to stderr and exit 1 instead of raising RuntimeError
* Update example to match docs
* Update docs for make-mime subcommand
* Remove tools/make-mime.py; replaced by cloud-init devel make-mime
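A rough sketch of the multi-part assembly the subcommand performs, using
only the standard library (file names and content-types below are
illustrative):
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    combined = MIMEMultipart()
    for path, subtype in [('config.yaml', 'cloud-config'),
                          ('setup.sh', 'x-shellscript')]:
        with open(path) as f:
            part = MIMEText(f.read(), subtype)
        # Keep the original file name so consumers can tell the parts apart.
        part.add_header('Content-Disposition',
                        'attachment; filename="%s"' % path)
        combined.attach(part)
    print(combined.as_string())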
Co-authored-by: Rick Harding <rharding@mitechie.com>
|
|
This was painful, but it finishes a TODO from cloudinit/subp.py.
It moves the following from util to subp:
ProcessExecutionError
subp
which
target_path
I moved subp_blob_in_tempfile into cc_chef, which is its only caller.
That saved us from having to deal with it using write_file
and temp_utils from subp (which does not import any cloudinit things now).
It is arguable that 'target_path' could be moved to a 'path_utils' or
something, but in order to use it from subp and also from util,
we had to get it out of util.
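For illustration, callers now import from cloudinit.subp rather than
cloudinit.util; a minimal sketch (the (out, err) tuple shape reflects the
historical interface):
    from cloudinit import subp

    try:
        # Capture stdout/stderr; a non-zero exit raises ProcessExecutionError.
        out, err = subp.subp(['systemctl', 'is-enabled', 'cloud-init'])
    except subp.ProcessExecutionError as e:
        print('command failed: %s' % e)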
|
|
Remove extra spaces after a ','
|
|
This shim was required to support Python 2.6, so we no longer need it.
|
|
These classes don't use `self.logs` anywhere in their body, so we can
remove the `with_logs = True` setting from them.
These instances were found using astpath[0], with the following
invocation:
astpath "//Name[@id='with_logs' and not(ancestor::ClassDef//Attribute[@attr='logs'])]"
[0] https://github.com/hchasestevens/astpath
|
|
|
* url_helper: drop six
* url_helper: sort imports
* log: drop six
* log: sort imports
* handlers/__init__: drop six
* handlers/__init__: sort imports
* user_data: drop six
* user_data: sort imports
* sources/__init__: drop six
* sources/__init__: sort imports
* DataSourceOVF: drop six
* DataSourceOVF: sort imports
* sources/helpers/openstack: drop six
* sources/helpers/openstack: sort imports
* mergers/m_str: drop six
This also allowed simplification of the logic, as we will never
encounter a non-string text type.
* type_utils: drop six
* mergers/m_dict: drop six
* mergers/m_list: drop six
* cmd/query: drop six
* mergers/__init__: drop six
* net/cmdline: drop six
* reporting/handlers: drop six
* reporting/handlers: sort imports
|
|
The query command checks the user's uid when running and takes two
different code paths. As a normal user, it returns fake data, which these
tests were expecting. As a root user, the actual user and vendor data
files are read.
LP: #1856096
|
|
netplan introduced an 'info' subcommand which emits YAML describing
implemented features that indicate new or changed fields and values
in the YAML that it accepts. Previously, cloud-init emitted the key
'mtu6' for IPv6 MTU values. This is not correct and netplan will
fail to parse these values. Netplan as of 0.98 supports both the
info subcommand and the ipv6-mtu key.
This branch modifies the netplan renderer to collect the netplan
info output into a 'features' property which is a list of available
feature flags which the renderer can use to modify its output. If
the command is not available, no feature flags are set and
cloud-init will render IPv6 MTU values just as MTU for the subnet.
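A rough sketch of that feature detection (not the renderer's exact code;
the 'netplan info' output shape is assumed from the description above):
    import subprocess
    import yaml

    def netplan_features():
        # Return netplan's advertised feature flags, or [] if 'info' is absent.
        try:
            out = subprocess.check_output(['netplan', 'info'],
                                          universal_newlines=True)
        except (OSError, subprocess.CalledProcessError):
            return []
        info = yaml.safe_load(out) or {}
        return info.get('netplan.io', {}).get('features', [])

    # Emit 'ipv6-mtu' only when netplan advertises support for it;
    # otherwise render the value as the subnet's plain MTU.
    mtu_key = 'ipv6-mtu' if 'ipv6-mtu' in netplan_features() else 'mtu'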
|
|
Here we replace direct uses of the pyyaml module with functions
provided by cloudinit.safeyaml. Also, change/move
cloudinit.util.yaml_dumps
to
cloudinit.safeyaml.dumps
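For example, a sketch of the intended call sites (the load() round trip
below assumes safeyaml also exposes a load helper):
    from cloudinit import safeyaml

    cfg = {'ntp': {'enabled': True}}
    # Previously cloudinit.util.yaml_dumps(cfg); now:
    text = safeyaml.dumps(cfg)
    assert safeyaml.load(text) == cfg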
LP: #1849640
|
|
Cloud-init's main.py will fail when presented with a new
stage name 'modules-init' when upgrading from an older cloud-init.
Fix this by initializing unknown stage names before accessing them.
LP: #1815109
|
|
Previously, init.paths.cloud_dir had a trailing slash, which meant that
"/var/lib/cloud//seed" was being compared to "/var/lib/cloud/seed" and
(of course) never matching.
In this commit, switch to using os.path.join to avoid this case (and
update the tests to catch it in future).
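A minimal illustration of the mismatch and the fix (paths follow the
example above):
    import os.path

    cloud_dir = '/var/lib/cloud/'            # trailing slash
    print(cloud_dir + '/seed')               # '/var/lib/cloud//seed', never equal
                                             # to '/var/lib/cloud/seed'
    print(os.path.join(cloud_dir, 'seed'))   # '/var/lib/cloud/seed'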
LP: #1818571
|
|
Avoid a traceback when 'cloud-init clean' is run from within a deleted
directory under /var/lib/cloud/.
LP: #1795508
|
|
I noticed a bug in dhclient_hook on the 'down' event: it used the 'is'
operator rather than '==' (if self.net_action is 'down').
This refactors/simplifies the code a bit for easier testing and adds
tests. The reason for the rename of 'action' to 'event' is to just
be internally consistent. The word and Namespace 'action' is used
by cloud-init main, so it was not really usable here.
Also adds a main which can easily be debugged with:
CI_DHCP_HOOK_DATA_D=./my.d python -m cloudinit.dhclient_hook up eth0
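As a minimal illustration of the bug class (string identity vs. equality;
not the module's actual code):
    down = 'down'
    event = ''.join(['do', 'wn'])   # built at runtime, e.g. from CLI arguments
    print(event is down)            # False: 'is' tests object identity
    print(event == down)            # True: '==' tests string equality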
|
|
Move routes under the NIC's subnet rather than using top-level
("global") route config, ensuring all net renderers will provide the
configured route.
Also updated cloudinit/cmd/devel/net_convert.py:
- Add input type 'vmware-imc' for OVF customization config files
- Fix bug when output-type was netplan which invoked netplan
generate/apply and attempted to write to
/etc/netplan/50-cloud-init.yaml instead of joining with the
output directory.
LP: #1806103
|
|
Since /run/cloud-init/instance-data-sensitive.json is root read-only,
ignore this file if a non-root user runs collect-logs.
If --include-userdata is provided on the command line, exit with an error
if a non-root user attempts this operation.
Lastly, update the __main__ to exit based on return value of main.
LP: #1805201
|
|
Emit a permissions error instead of "Missing instance-data.json" when a
non-root user doesn't have read permission on
/run/cloud-init/instance-data.json.
|
|
On the cloud-init upgrade path from 18.3 to 18.4, cloud-init changed how
instance-data is written. Cloud-init changed instance-data.json from root
read-only to redacted world-readable content, and provided a separate
unredacted instance-data-sensitive.json which is read-only root.
Since instance-data is only rewritten from cache on
reboot, the query and render tools need to fall back to the 'old'
instance-data.json if the new sensitive file isn't yet present.
This avoids error messages from tools about an absent
/run/cloud-init/instance-data-sensitive.json file.
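A rough sketch of the fallback (illustrative only; the real logic also
honors redaction for non-root callers):
    import os

    run_dir = '/run/cloud-init'
    sensitive = os.path.join(run_dir, 'instance-data-sensitive.json')
    redacted = os.path.join(run_dir, 'instance-data.json')

    # Prefer the unredacted 18.4 file when running as root and it exists;
    # otherwise fall back to the instance-data.json written before upgrade.
    if os.getuid() == 0 and os.path.exists(sensitive):
        instance_data_path = sensitive
    else:
        instance_data_path = redacted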
LP: #1798189
|
|
Add a quick cloud lookup utility in order to more easily determine
the cloud on which an instance is running.
The utility parses standardized attributes from
/run/cloud-init/instance-data.json to print the canonical cloud-id
for the instance. It uses known region maps if necessary to determine
on which specific cloud the instance is running.
Examples:
aws, aws-gov, aws-china, rackspace, azure-china, lxd, openstack, unknown
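A simplified sketch of the lookup (the region map below is a stand-in for
the utility's actual tables):
    import json

    REGION_MAP = {'us-gov-west-1': 'aws-gov', 'cn-north-1': 'aws-china'}

    with open('/run/cloud-init/instance-data.json') as f:
        v1 = json.load(f).get('v1', {})

    # Fall back to the standardized cloud_name when no region map applies.
    print(REGION_MAP.get(v1.get('region'), v1.get('cloud_name', 'unknown')))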
|
|
Cloud-init caches any cloud metadata crawled during boot in the file
/run/cloud-init/instance-data.json. Cloud-init also standardizes some of
that metadata across all clouds. The command 'cloud-init query' surfaces a
simple CLI to query or format any cached instance metadata so that scripts
or end-users do not have to write tools to crawl metadata themselves.
Since 'cloud-init query' is runnable by non-root users, redact any
sensitive data from instance-data.json and provide a root-readable
unredacted instance-data-sensitive.json. Datasources can now define a
sensitive_metadata_keys tuple which will redact any matching keys
which could contain passwords or credentials from instance-data.json.
Also add the following standardized 'v1' instance-data.json keys:
- user_data: The base64-encoded user-data provided at instance launch
- vendor_data: Any vendor_data provided to the instance at launch
- underscore_delimited versions of existing hyphenated keys:
instance_id, local_hostname, availability_zone, cloud_name
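A minimal sketch of the redaction step (key names and the replacement
string are illustrative):
    sensitive_metadata_keys = ('security-credentials', 'userdata', 'vendordata')

    def redact(data, replacement='redacted for non-root user'):
        # Recursively replace values of sensitive keys before writing the
        # world-readable instance-data.json.
        if not isinstance(data, dict):
            return data
        return {key: replacement if key in sensitive_metadata_keys
                else redact(value, replacement)
                for key, value in data.items()}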
|
|
If the user has removed the default configuration file or does
not set the syslog_fix_perms config option, the user still ends
up with a warning on SUSE distributions. Add root:root to the
default builtin config.
|
|
Allow users to provide '## template: jinja' as the first line of their
#cloud-config or custom script user-data parts. When this header exists,
the cloud-config or script will be rendered as a jinja template.
All instance metadata keys and values present in
/run/cloud-init/instance-data.json will be available as jinja variables
for the template. This means any cloud-config module or script can
reference any standardized instance data in templates and scripts.
Additionally, any standardized instance-data.json keys scoped below a
'<v#>' key will be promoted as a top-level key for ease of reference in
templates. This means that '{{ local_hostname }}' is the same as using the
latest '{{ v#.local_hostname }}'.
Since instance-data is written to /run/cloud-init/instance-data.json, make
sure it is persisted across reboots when the cached datasource object is
reloaded.
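A sketch of the rendering step using the jinja2 library directly
(cloud-init's own renderer differs in detail; the 'v1' promotion shown
matches the description above):
    import json
    from jinja2 import Template

    with open('/run/cloud-init/instance-data.json') as f:
        instance_data = json.load(f)

    user_data = '\n'.join([
        '## template: jinja',
        '#cloud-config',
        'hostname: {{ local_hostname }}',
    ])
    # Strip the header line, then expose v1 keys both nested and top-level.
    body = user_data.split('\n', 1)[1]
    variables = dict(instance_data, **instance_data.get('v1', {}))
    print(Template(body).render(**variables))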
LP: #1791781
|
|
In many cases, cloud-init uses 'util.subp' to run a subprocess.
This is not really desirable in our unit tests as it makes the tests
dependent upon the existence of those utilities.
The change here is to modify the base test case class (CiTestCase) to
raise an exception any time subp is called. Then, fix all callers.
For cases where subp is necessary or actually desired, we can use it
via
a.) the context handler CiTestCase.allow_subp(value)
b.) the class-level attribute self.allowed_subp = value
In both cases the value is a list of acceptable executable names that
will be called (essentially argv[0]).
Some cleanups in AltCloud were done as the code was being updated.
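For illustration, a test that genuinely needs a subprocess opts in roughly
like this (a sketch based on the description above; import paths are
illustrative):
    from cloudinit import subp
    from cloudinit.tests.helpers import CiTestCase

    class TestNeedsSubp(CiTestCase):
        allowed_subp = ['whoami']      # class-level allow list (argv[0] names)

        def test_whoami_runs(self):
            out, _err = subp.subp(['whoami'])
            self.assertTrue(out.strip())

        def test_scoped_allow(self):
            # Context-handler form, scoped to a single block.
            with self.allow_subp(['hostname']):
                subp.subp(['hostname'])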
|
|
Multiple distros use sysconfig format but have different content
and paths to certain files. Update distros to specify these
template paths in their renderer_configs dictionary.
|
|
Switch the implementation to a daemon thread which uses a
blocking get from the Queue. No additional locking or flag checking
is needed since the Queue itself handles acquiring the lock as needed.
cloud-init only has a single producer (the main thread calling publish)
and the consumer will read all events in the queue and write them out.
Using the daemon mode of the thread handles flushing the queue on
main exit in python3; in python2.7 we handle the EOFError that results
when the call to get() fails, indicating the main thread
has exited.
The result is that the handler no longer spawns a thread on each
publish event but rather creates a single thread when we start up
the reporter, and we remove any additional use of separate locks and
flags as we only have a single Queue object and we're only calling
queue.put() from the main thread and queue.get() from the consuming thread.
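A stripped-down sketch of the pattern described (standard library only;
not the handler's exact code):
    import queue
    import threading

    events = queue.Queue()

    def consume():
        # Blocking get; the Queue does its own locking, so no extra flags.
        while True:
            event = events.get()
            print('writing event: %s' % event)
            events.task_done()

    worker = threading.Thread(target=consume)
    worker.daemon = True     # does not block interpreter shutdown
    worker.start()

    def publish(event):
        # Called only from the main thread; the worker is the sole consumer.
        events.put(event)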
|
|
Linux guests can provide information to Hyper-V hosts via KVP.
KVP allows the guests to provide any string key-value-pairs back to the
host's registry. On Linux, KVP communication pools are presented as pool
files in /var/lib/hyperv/.kvp_pool_#.
The following reporting configuration can enable this kvp reporting in
addition to default logging if the pool files exist:
reporting:
logging:
type: log
telemetry:
type: hyperv
|
|
Azure datasource now queries IMDS metadata service for network
configuration at link local address
http://169.254.169.254/metadata/instance?api-version=2017-12-01. The
azure metadata service presents a list of macs and allocated ip addresses
associated with this instance. Azure will now also regenerate network
configuration on every boot because it subscribes to EventType.BOOT
maintenance events as well as the 'first boot'
EventType.BOOT_NEW_INSTANCE.
For testing, add 'azure-imds' as a --kind option to the cloud-init devel
net_convert tool for debugging IMDS metadata.
Also refactor _get_data into 3 discrete methods:
- is_platform_viable: check quickly whether the datasource is
potentially compatible with the platform on which it is running
- crawl_metadata: walk all potential metadata candidates, returning a
structured dict of all metadata and userdata. Raise InvalidMetaData on
error.
- _get_data: call crawl_metadata and process results or error. Cache
instance data on class attributes: metadata, userdata_raw etc.
|
|
Move the tools/net-convert.py to be exposed as part of 'cloud-init devel'
subcommands.
It can now be called like:
$ cloud-init devel net-convert
Or, if you have just checked out the source (and have no cli executable):
$ python3 -m cloudinit.cmd.devel.net_convert
or
$ python3 -m cloudinit.cmd.main devel net-convert
|
|
The result of a read_file_or_url on a file and on a url would differ
in behavior.
str(UrlResponse) would return UrlResponse.contents.decode('utf-8')
while
str(FileResponse) would return str(FileResponse.contents)
The difference being "b'foo'" versus "foo".
As part of the general goal of cleaning util, move read_file_or_url
into url_helper.
|
|
With no output at all from collect-logs, users have been confused
about where the output is. By default now, write to stderr what that
file is.
Also
* add '-v' to increase verbosity. With a single -v flag, mention
what file/info is being collected.
* limit the 'journalctl' collection to this boot (--boot=0).
Collecting the entire journal seems unnecessary and can be huge.
* do not fail when collecting files or directories that are not there.
LP: #1766335
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
When instance meta-data provides hostname information, run
cc_set_hostname in the init-local or init-net stage before network
comes up.
This prevents an initial DHCP request from leaking the stock cloud-image
default hostname before the meta-data-provided hostname is processed.
A leaked cloud-image hostname adversely affects Dynamic DNS, which
would reallocate the 'ubuntu' hostname in DNS to every instance brought up
by cloud-init. These instances would only update DNS to the cloud-init
configured hostname upon DHCP lease renewal.
This branch extends the get_hostname methods in datasource, cloud and
util to limit results to metadata_only to avoid extra cost of querying
the distro for hostname information if metadata does not provide that
information.
LP: #1746455
|
|
This will provide a small performance improvement and shorter code.
|
|
When we moved some tests to live under cloudinit/ we inadvertently
failed to change all things that would run nose to include that
directory.
This changes all the 'nose' invocations to consistently run with
tests/unittests and cloudinit/.
Also, it works around, more correctly this time, a python2.6-ism with
the following code:
with assertRaises(SystemExit) as cm:
sys.exit(2)
|
|
Fix various corner cases for the cloud-init status subcommand. Report
'running' under the following conditions:
- No /run/cloud-init/result.json file exists
- Any stage in status.json is unfinished
- status.json reports a non-null stage it is in progress on
LP: #1747965
|
|
While addressing undeclared variable in 'cloud-init status', I also fixed
the errors raised by automated code reviews against cloud-init master at
https://lgtm.com/projects/g/cloud-init/cloud-init/alerts
The following items are addressed:
* Fix 'cloud-init status':
* Only report 'running' state when any stage in
/run/cloud-init/status.json has a start time but no finished time.
Default start time to 0 if null.
* undeclared variable 'reason' now reports 'Cloud-init enabled by
systemd cloud-init-generator' when systemd enables cloud-init
* cc_rh_subscription.py: util.subp return values aren't set if an
exception is raised; use ProcessExecutionError as e instead.
* distros/freebsd.py:
* Drop repetitive looping over ipv4 and ipv6 nic lists.
* Initialize bsddev to 'NOTFOUND' in the event that no devs are
discovered
* declare nics_with_addresses = set() in broader scope outside
check_downable conditional
* cloudinit/util.py: Raise TypeError if mtype parameter isn't string,
iterable or None.
LP: #1744796
|
|
This issue was first identified when manual_cache_clean was set, as
ds-identify would write /run/cloud-init/cloud.cfg with
# manual_cache_clean
that would generate a warning as cloud-init expected to load a dict.
Any other "empty" config would also log such a warning.
Also fix reading of di_report to allow it to be None, as ds-identify
would write:
di_report:
# manual_cache_clean
which reads as 'di_report: None' rather than di_report: {}.
LP: #1742479
|
|
Fix cloud-init clean subcommand to unlink symlinks instead of calling
del_dir.
LP: #1741093
|
|
The cli help docs and argument parser allow the 'init' mode value
which caused a traceback.
Fix the cli to support 'init', 'config' and 'final' modes for the
cloud-init modules subcommand.
Add a check in the cli to raise a ValueError if a new
subcommand ends up allowing an unsupported/unimplemented mode.
Drive-by unit test additions for a bit better coverage of error
handling.
LP: #1736600
|
|
The motivation for this is that
a.) pylint 1.7.1 runs with python 3.6 (bionic)
b.) we want to run pylint on tests/ and tools for the same reasons
that we want to run it on cloudinit/
The changes are described below.
- Update tox.ini to invoke pylint v1.7.1.
- Modify .pylintrc generated-members ignore mocked object members (m_.*)
- Replace "dangerous" params defaulting to {}
- Fix up cloud_tests use of platforms
- Cast some instance objects with dict()
- Handle python2.7 vs 3+ ConfigParser use of readfp (deprecated)
- Update use of assertEqual(<boolean>, value) to assert<Boolean>(value)
- replace deprecated assertRegexp -> assertRegex
- Remove useless test-class calls to super class
- Assign class property accessors a result and use it
- Fix missing class member in CepkoResultTests
- Fix Cheetah test import
|
|
The 'cloud-init clean' command allows a user or script to clear cloud-init
artifacts from the system so that cloud-init sees the system as
unconfigured upon reboot. Optional parameters can be provided to remove
cloud-init logs and reboot after clean.
The 'cloud-init status' command allows the user or script to check whether
cloud-init has finished all configuration stages and whether errors
occurred. An optional --wait argument will poll on a 0.25 second interval
until cloud-init configuration is complete. The benefit here is scripts
can block on cloud-init completion before performing post-config tasks.
|
|
Add a new collect-logs sub command to the cloud-init CLI. This script
will collect all logs pertinent to a cloud-init run and store them in a
compressed tar-gzipped file. This tarfile can be attached to any
cloud-init bug filed in order to aid in bug triage and resolution.
A cloudinit.apport module is also added that allows apport interaction.
Here is an example bug filed via ubuntu-bug cloud-init: LP: #1716975.
Once the apport launcher is packaged in cloud-init, bugs can be filed
against cloud-init with the following command:
ubuntu-bug cloud-init
LP: #1607345
|
|
In an effort to save file load cost during system boot, certain
subcommands, analyze and devel, do not get loaded unless the subcommand is
specified on the command line. Because the setup.py entry point for the
cloud-init script doesn't specify the sysv_args parameter when calling the
CLI's main(), we need main to read sys.argv into sysv_args so our subparser
loading continues to work.
LP: #1712676
|
|
This branch does a few things:
- Add 'schema' subcommand to cloud-init CLI for validating
cloud-config files against strict module jsonschema definitions
- Add --annotate parameter to 'cloud-init schema' to annotate
existing cloud-config file content with validation errors
- Add jsonschema definition to cc_runcmd
- Add unit test coverage for cc_runcmd
- Update CLI capabilities documentation
This branch only imports development (and analyze) subparsers when the
specific subcommand is provided on the CLI to avoid adding costly unused
file imports during cloud-init system boot.
The schema command allows a person to quickly validate a cloud-config text
file against cloud-init's known module schemas to avoid costly roundtrips
deploying instances in their cloud of choice. As of this branch, only
cc_ntp and cc_runcmd cloud-config modules define schemas. Schema
validation will ignore all undefined config keys until all modules define
a strict schema.
To perform validation of runcmd and ntp sections of a cloud-config file:
$ cat > cloud.cfg <<EOF
runcmd: bogus
EOF
$ python -m cloudinit.cmd.main schema --config-file cloud.cfg
$ python -m cloudinit.cmd.main schema --config-file cloud.cfg \
--annotate
Once jsonschema is defined for all ~55 cc modules, we will move this
schema subcommand up as a proper subcommand of the cloud-init CLI.
|
|
This branch adds cloudinit-analyze into cloud-init proper. It adds an
"analyze" subcommand to the cloud-init command line utility for quick
performance assessment of cloud-init stages and events.
On a cloud-init configured instance, running "cloud-init analyze blame"
will now report which cloud-init events cost the most wall time. This
allows for quick assessment of the most costly stages of cloud-init.
This functionality is pulled from Ryan Harper's analyze work.
The cloudinit-analyze main script itself has been refactored a bit for
inclusion as a subcommand of cloud-init CLI. There will be a followup
branch at some point which will optionally instrument detailed strace
profiling, but that approach needs a bit more discussion first.
This branch also adds:
* additional debugging topic to the sphinx-generated docs describing
cloud-init analyze, dump and show as well as cloud-init single usage.
* Updates the Makefile unittests target to include cloudinit directory
because we now have unittests within that package.
LP: #1709761
|
|
On systems with network devices with duplicate mac addresses, cloud-init
will fail to rename the devices according to the specified network
configuration. Refactor net layer to search by device driver and device
id if available. Azure systems may have duplicate mac addresses by
design.
Update Azure datasource to run at init-local time and let Azure datasource
generate a fallback networking config to handle advanced networking
configurations.
Lastly, add a 'setup' method to the datasources that is called before
userdata/vendordata is processed but after networking is up. That is
used here on Azure to interact with the 'fabric'.
|