|
Add the basics of docs that can be extracted from the code itself (also
imposing an initial format that will be useful for further modules to
follow). In this initial addition, cc_debug.py and
cc_ubuntu_init_switch.py are modified to use this new style.
LP: #1383510
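A minimal sketch of what such an extractable module docstring could look
like (the field names here are illustrative assumptions, not taken from
the commit):

    """
    **Summary:** one-line description of what the module does.

    **Description:** longer prose that documentation tooling can pull
    straight out of the module source.

    **Internal name:** ``cc_example``

    **Module frequency:** per instance

    **Supported distros:** all
    """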
|
|
|
|
|
|
This adds a DataSource for DigitalOcean's metadata service. The service is
documented at https://developers.digitalocean.com/metadata/.
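A minimal sketch of reading that metadata service; the link-local address
and path below are assumptions based on the documentation link, not taken
from the commit:

    import json
    import urllib2  # cloud-init of this era targeted Python 2

    MD_URL = "http://169.254.169.254/metadata/v1.json"  # assumed endpoint

    def read_do_metadata(url=MD_URL, timeout=5):
        # Fetch the whole metadata document as a single JSON blob.
        return json.loads(urllib2.urlopen(url, timeout=timeout).read())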
|
|
|
|
Users can now configure the creation of a swap file.
Only unencrypted swap is supported for now.
swap:
  filename: /swap.img
  size: "auto" or size in bytes
  maxsize: size in bytes
Also adds two utility functions:
read_meminfo: return how much memory is on the system.
human2bytes: convert human-readable sizes (e.g. 8G) to bytes.
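A rough sketch of what the two helpers could look like (a best-guess
illustration, not the actual implementation):

    def human2bytes(size):
        """Convert a human-readable size such as '8G' into bytes."""
        size = str(size).strip()
        mults = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}
        suffix = size[-1].upper()
        if suffix in mults:
            return int(float(size[:-1]) * mults[suffix])
        return int(size)

    def read_meminfo():
        """Return total system memory in bytes, from /proc/meminfo."""
        with open('/proc/meminfo') as fp:
            for line in fp:
                if line.startswith('MemTotal:'):
                    return int(line.split()[1]) * 1024  # value is in kB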
|
|
Add support for FreeBSD reading a config drive. The primary work is
refactoring mount_cb to not be so Linux-specific.
Other changes:
* declare PATH in freebsd initscripts
* list dependency on e2fsprogs (for blkid)
* enable ConfigDrive in freebsd config
* hosts.freebsd.tmpl added
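The core of the mount_cb refactor, sketched in a platform-neutral way
(simplified; the real helper carries more options than this):

    import shutil
    import subprocess
    import tempfile

    def mount_cb(device, callback, mtype=None):
        """Mount device, call callback(mountpoint), always unmount again."""
        mp = tempfile.mkdtemp()
        cmd = ['mount'] + (['-t', mtype] if mtype else []) + [device, mp]
        try:
            subprocess.check_call(cmd)
            try:
                return callback(mp)
            finally:
                subprocess.check_call(['umount', mp])
        finally:
            shutil.rmtree(mp)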
|
|
HVM instances on EC2 have grub on /dev/xvda.
The bug here resulted in a prompt on grub update.
LP: #1336855
|
|
Add kwargs to fork_cb, and use that to call log_time, passing the
provided args through to resize_cmd.
LP: #1338614
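A minimal sketch of fork_cb gaining keyword-argument support (illustrative,
simplified from the real utility):

    import os

    def fork_cb(child_cb, *args, **kwargs):
        """Fork, then run child_cb(*args, **kwargs) in the child."""
        if os.fork() == 0:
            try:
                child_cb(*args, **kwargs)
            finally:
                os._exit(0)

    # With kwargs available, a caller can now pass log_time's keyword
    # arguments straight through, e.g.:
    #   fork_cb(log_time, logfunc=log.debug, msg='backgrounded resize',
    #           func=do_resize, args=(resize_cmd, devpth))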
|
|
This makes the DataSourceConfigDrive support vendor-data in the same
way the metadata service reader does. There are still some things to
fix here, but the two are now similar.
This also drops the ability to specify a version (as in YYYY-MM-DD) to
look for. Nothing was using it, but it may be useful to add back in the
future and expose as a datasource config option.
|
|
In a container the device nodes may exist but not be writable.
I'm seeing this on a trusty host with trusty containers, where the root
device ends up looking like it is /dev/loop0.
LP: #1366891
|
|
|
|
|
|
This set of changes generally produces a functional cloud-init on FreeBSD.
|
|
The OpenStack implementation of the metadata service reader would end up
fetching URLs more than once, as _path_exists performed a GET.
Now, instead, fetch the things you expect to be there directly.
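Roughly, the change means a single GET per expected path instead of a
probe-then-fetch pair; a hedged sketch of that shape:

    def fetch_expected(base_url, paths, reader):
        # 'reader' is assumed to GET a URL and raise on a missing path
        # (names here are illustrative, not the module's real API).
        found = {}
        for path in paths:
            try:
                found[path] = reader(base_url + '/' + path)  # one GET each
            except IOError:
                pass  # optional item not present; just skip it
        return found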
|
|
pep8: passes on pep8 1.5.7 (and 1.5.6 in utopic).
The intent is for that to be the target for future changes.
pylint: removed, as it is more hassle than it's worth.
The intent is to move to pyflakes at some point.
|
|
The module was intended to allow disabling via configuration, but that was broken.
This now allows:
no_ssh_fingerprints = True
LP: #1340903
|
|
LP: #1313114
|
|
This makes some changes to cc_resolv_conf to make its
generate_resolv_conf method more easily callable (for future tests).
It also ensures that 'options' is always defined when the template
is rendered.
LP: #1328953
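A sketch of the shape this aims for, so a test can call the render step
directly (the signature and the render helper here are assumptions):

    def generate_resolv_conf(template_fn, params, target_fname='/etc/resolv.conf'):
        # Always give the template an 'options' mapping so it renders
        # cleanly even when the user supplied none.
        params.setdefault('options', {})
        contents = render_template(template_fn, params)  # hypothetical helper
        with open(target_fname, 'w') as fp:
            fp.write(contents)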
|
|
LP: #1329583
|
|
LP: #1333920
|
|
|
|
The module is useful primarily for testing during Ubuntu's transition to systemd.
It should be harmless elsewhere, as it defaults to doing nothing,
and will only run if configured with the 'ubuntu' distro *and* 'dpkg' is available.
|
|
comments in /etc/timezone are not expected, and can cause problems
if another tool tries to read it.
LP: #1341710
|
|
The previous commit occurred because the SELinux test was failing
in a schroot where there was no /etc/hosts.
Now, fix that test more correctly, and fix some bad assumptions in
the SeLinuxGuard.
|
|
Use pybuild and drop cdbs.
This also now runs the tests during the build, and therefore requires
the build dependencies.
|
|
|
|
This drops the hard requirement on Cheetah.
Jinja is a Python 2.4 -> 3.x compatible templating engine; allow its
optional use (until we can deprecate Cheetah) by supporting a
template file header that defines which template engine to use.
If the template file header does not specify a renderer, then assume
it is Cheetah. If Cheetah is not available, then use a limited
built-in renderer on a best-effort basis, and log a warning.
LP: #1219223
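A hedged sketch of the renderer selection described above, keyed off a
first-line header (the '## template: jinja' syntax is shown as an
illustration of such a header):

    def detect_template(text):
        """Return (renderer_name, body) from an optional first-line header."""
        lines = text.splitlines()
        first = lines[0].strip() if lines else ''
        if first.startswith('## template:'):
            renderer = first.split(':', 1)[1].strip()
            return renderer, '\n'.join(lines[1:])
        # No header: fall back to cheetah, per the behaviour above.
        return 'cheetah', text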
|
|
|
|
LP: #1327065
|
|
LP: #1316597
|
|
On systems with a ttyS1 and nothing attached, the read attempts
that the CloudSigma datasource would make would block.
Also, add timeouts for reading from and writing to the serial console.
LP: #1316475
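With pyserial, the timeouts look roughly like this (port name and values
are illustrative):

    import serial

    # Bound both reads and writes so an unattached ttyS1 cannot block
    # the datasource forever.
    port = serial.Serial('/dev/ttyS1', baudrate=115200,
                         timeout=10, writeTimeout=10)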
|
|
LP: #1303986
|
|
|
|
This change adds the possibility of having base64-encoded user data in
the OpenNebula datasource.
OpenNebula uses a text file with shell variables for storing the
configuration variables (including user provided data). Some user data may
not be renderable into this format, so using base64 encoding alleviates
the problem.
The change here allows the user to provide a second variable
USERDATA_ENCODING (or USER_DATA_ENCODING) and set that value to 'base64'
to indicate that USERDATA is base64 encoded.
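The decode step, roughly, under the assumptions stated above:

    import base64

    def decode_userdata(context):
        # 'context' is the dict of shell variables from the OpenNebula
        # context file.
        userdata = context.get('USERDATA', context.get('USER_DATA'))
        encoding = context.get('USERDATA_ENCODING',
                               context.get('USER_DATA_ENCODING'))
        if encoding == 'base64' and userdata is not None:
            return base64.b64decode(userdata)
        return userdata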
|
|
On Azure, the ephemeral disk may be destroyed and replaced with a fresh
ephemeral disk on any reboot or stop-and-start cycle.
This makes the datasource able to detect that by the presence of an
unformatted, specifically labeled NTFS filesystem with no files on it.
LP: #1292648
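A hedged sketch of the detection idea: only treat the ephemeral disk as
re-provisioned (and therefore safe to re-format) when its filesystem is
empty. The helper below assumes the filesystem is already mounted
somewhere:

    import os

    def safe_to_reformat(mountpoint):
        # An NTFS filesystem with no files on it is taken to be the
        # freshly provisioned ephemeral disk.
        for _dirpath, _dirnames, filenames in os.walk(mountpoint):
            if filenames:
                return False
        return True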
|
|
This populates and maintains status.json and result.json with
JSON-formatted data about cloud-init's errors and datasource.
It is intended to be consumed by other programs that want to
wait until cloud-init is done, or to know whether it succeeded.
LP: #1284439
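A small sketch of how another program might consume result.json to wait
for cloud-init; the path and JSON layout shown are assumptions based on
the description above:

    import json
    import os
    import time

    def wait_for_cloudinit(path='/run/cloud-init/result.json', timeout=600):
        deadline = time.time() + timeout
        while not os.path.exists(path):
            if time.time() > deadline:
                raise RuntimeError('cloud-init did not finish in time')
            time.sleep(1)
        with open(path) as fp:
            result = json.load(fp)
        # An empty error list means cloud-init considers itself successful.
        return result.get('v1', {}).get('errors', [])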
|
|
This extends the 'random_seed' top-level entry to include a 'command'
entry, which then has the opportunity to seed the random number generator.
Example config:
#cloud-config
random_seed:
  command: ['dd', 'if=/dev/zero', 'of=/dev/random', 'bs=1M', 'count=10']
LP: #1286316
|
|
As with the SmartOS change earlier, running dmidecode on arm will crash kvm.
So instead of doing that, just return UNKNOWN, which will cause this
datasource to not activate.
LP: #1285686
|
|
See LP: #1243287 for more information, but the easiest thing to do
here is just to not run SmartOS on arm.
LP: #1243287
|
|
OpenStack has a unique derivative datasource that is gaining usage.
Previously the config drive datasource, as well as the EC2 datasource,
provided part of this functionality, but since new functionality is being
added to OpenStack's special datasource it seems beneficial to combine the
shared parts into a new datasource made just for handling OpenStack
deployments that use the OpenStack metadata service (possibly in
combination with the EC2 metadata service).
This patch factors out the common logic shared between the config drive
and the OpenStack metadata datasources, places it in a shared helper
file, creates a new OpenStack datasource that reads from the OpenStack
metadata service, and refactors the config drive datasource to
use this common logic.
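The rough shape of that split, as a sketch (class and module names are an
approximation of the structure described, not guaranteed to match the
code):

    # cloudinit/sources/helpers/openstack.py (approximate layout)
    class BaseReader(object):
        """Common logic for walking an OpenStack-style metadata tree."""
        def read_v2(self):
            # fetch meta_data.json, user_data, vendor_data, ...
            raise NotImplementedError

    class ConfigDriveReader(BaseReader):
        """Reads the tree from a mounted config drive."""

    class MetadataReader(BaseReader):
        """Reads the same tree over HTTP from the metadata service."""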
|
|
There are some rough edges here and it's missing some tests, but
I want to get this pulled in.
|
|
|
|
|
|
Config modules are able to declare distros that they were verified
to run on by setting 'distros' as a list in the config module.
Previously, if a module was configured to run and the running distro was not
listed as supported, it would run anyway, and a warning would be written.
Now, we change the behavior to skip those modules.
The distro (or user) can specify that a given list of modules should run anyway
by declaring the 'unverified_modules' config variable.
run_once modules will be run without this filter (i.e., expecting that the user
explicitly wanted to run them).
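A hedged sketch of that filtering behaviour:

    def skip_unverified(modules, current_distro, cfg):
        """Drop modules not verified for this distro unless explicitly allowed."""
        allowed = cfg.get('unverified_modules', [])
        to_run = []
        for name, mod in modules:
            supported = getattr(mod, 'distros', None)
            if not supported or current_distro in supported:
                to_run.append((name, mod))
            elif name in allowed:
                to_run.append((name, mod))  # user asked for it explicitly
            # otherwise: skipped (previously it ran with only a warning)
        return to_run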
|
|
If a datasource was found somewhere other than /var/lib/waagent, and /var/lib/waagent
contained all the files necessary for 'wait_for_files' (most likely
'SharedConfig.xml'), then cloud-init would continue on without properly waiting.
To address this, if the ovf-env.xml came from somewhere other than
/var/lib/waagent, and it differs from the file in /var/lib/waagent, then
we clean up some files that we expect to be provided by 'wait_for_files'.
Also some minor changes to the tests here.
LP: #1269626
|
|
Here we add the ability to read vendor-data from a file named
vendor-data at the same location as the user-data and meta-data files.
At the moment, vendor-data is not read at all from 'seedfrom'.
|
|
Due to a bug in the function "cloudinit.util.is_ipv4", an IPv4 address with a zero (0)
in any component was not recognized as an IPv4 address.
E.g.: with a local datasource having 192.168.0.1 in meta-data/local-hostname, the
correct behaviour would be to generate the hostname ip-192-168-0-1. With this bug,
the value (an IPv4 address) was treated as an FQDN (with no IPv4 inside) and just the
first component (supposed to be the hostname) was taken, generating the hostname
"192".
|
|
Fixes for the SmartOS datasource:
1. fixed conflation of user-data and cloud-init user-data. Cloud-init
user-data is now namespaced as 'cloud-init:user-data'.
2. user-scripts (not user-data) are now fetched from the meta-data service
each boot and executed as if they were in the scripts directory
3. datacenter name is now namespaced as sdc:datacenter
4. user-scripts will now have '#!/bin/bash' magically prepended
if 'file' thinks it is plain text and it does not start with '#!'
|
|
read_file_or_url: raise UrlError with 404 on ENOENT.
This makes it easier to call read_file_or_url and handle file or URL
errors uniformly. Now read_file_or_url will raise a UrlError in either
case on errors.
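The described behaviour, sketched; UrlError and readurl stand in for
cloud-init's url_helper types, and the details are an assumption:

    import errno

    def read_file_or_url(url, **kwargs):
        if url.startswith('/') or url.startswith('file://'):
            path = url.replace('file://', '', 1)
            try:
                with open(path, 'rb') as fp:
                    return fp.read()
            except IOError as e:
                if e.errno == errno.ENOENT:
                    # A missing file looks like a 404, so callers handle
                    # files and urls the same way.
                    raise UrlError(e, code=404, headers=None)
                raise
        return readurl(url, **kwargs)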
|
|
This gets initial support for FreeBSD.
|
|
This change stops filtering partitions out of the potential ConfigDrive
sources if the LABEL of the partition is set to "config-2".
This is useful for a bare-metal device, which may not have a separate device
for the ConfigDrive, but instead have a ConfigDrive available on a partition.
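In effect, candidate lookup now also accepts labelled partitions; roughly
(find_devs_with is used here as an assumed label/type lookup helper):

    def find_candidate_devs():
        # Partitions or whole devices labelled "config-2" are acceptable
        # ConfigDrive sources; vfat and iso9660 remain the usual types.
        by_label = find_devs_with('LABEL=config-2')
        by_fstype = (find_devs_with('TYPE=vfat') +
                     find_devs_with('TYPE=iso9660'))
        candidates = []
        for dev in by_label + by_fstype:
            if dev not in candidates:
                candidates.append(dev)
        return candidates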
|