The module was intended to allow disabling by configuration, but that was broken.
Now this allows:
no_ssh_fingerprints = True
LP: #1340903
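As a rough illustration, in cloud-config form (assuming the usual YAML spelling of this option):
#cloud-config
no_ssh_fingerprints: true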
|
|
LP: #1313114
|
|
This makes some changes to cc_resolv_conf to make its
generate_resolv_conf method more easily callable (for future tests).
It also ensures that 'options' is always defined when the template
is rendered.
LP: #1328953
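For reference, a minimal cloud-config sketch of the kind of input this module renders (field names assumed; not exhaustive):
#cloud-config
manage_resolv_conf: true
resolv_conf:
  nameservers: ['8.8.8.8']
  options:
    rotate: true
    timeout: 1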
|
|
LP: #1329583
|
|
LP: #1333920
|
|
|
|
The module is useful primarily for testing in Ubuntu's transition to systemd.
It should be very harmless elsewhere as it defaults to doing nothing,
and will only run if configured as 'ubuntu' distro *and* 'dpkg' is available.
|
|
comments in /etc/timezone are not expected, and can cause problems
if another tool tries to read it.
LP: #1341710
|
|
The previous commit was made because the selinux test was failing
in a schroot where there was no /etc/hosts.
Now, fix that test more correctly, and fix some bad assumptions in
the SeLinuxGuard.
|
|
Use pybuild and drop cdbs.
This also now runs the tests during the build, and therefore requires
the build dependencies.
|
|
|
|
This drops the hard requirement on Cheetah.
Jinja is a templating engine compatible with Python 2.4 through 3.x. Allow
its optional use (until we can deprecate Cheetah) by supporting a template
file header that defines which template engine to use.
If the template file header does not specify a renderer, then assume
Cheetah. If Cheetah is not available, fall back to a limited builtin
renderer on a best-effort basis and log a warning.
LP: #1219223
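For example, a template can opt in to Jinja with a first-line header along these lines (header syntax assumed):
## template: jinja
hostname: {{ hostname }}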
|
|
|
|
LP: #1327065
|
|
LP: #1316597
|
|
On systems with a ttyS1 and nothing attached, the read attempts
that the CloudSigma datasource makes would block.
Also, add timeouts for reading from and writing to the serial console.
LP: #1316475
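A rough sketch of the idea using pyserial (timeout values and the writeTimeout parameter name are assumptions, not the datasource's actual code):
import serial

# Open the guest serial console with read and write timeouts so a silent
# or missing ttyS1 can no longer block boot indefinitely.
con = serial.Serial('/dev/ttyS1', timeout=30, writeTimeout=30)
line = con.readline()  # returns an empty string after the timeout instead of hanging
con.close()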
|
|
LP: #1303986
|
|
|
|
This change adds support for base64-encoded user data in the
OpenNebula datasource.
OpenNebula uses a text file with shell variables for storing the
configuration variables (including user-provided data). Some user data may
not be renderable into this format, so using base64 encoding alleviates
the problem.
The change here allows the user to provide a second variable
USERDATA_ENCODING (or USER_DATA_ENCODING) and set its value to 'base64'
to indicate that USERDATA is base64 encoded.
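An illustrative context snippet (the value shown is simply "#cloud-config\n" base64-encoded):
USERDATA_ENCODING="base64"
USERDATA="I2Nsb3VkLWNvbmZpZwo="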
|
|
On Azure, the ephemeral disk may be destroyed and replaced with a fresh
ephemeral disk on any reboot or stop and start cycle.
This makes the datasource able to detect that by the presence of an
unformatted and specifically labeled NTFS filesystem with no files on it.
LP: #1292648
|
|
This populates and maintains status.json and result.json with
JSON-formatted data about cloud-init's errors and datasource.
It is intended to be consumed by other programs that want to
wait until cloud-init is done, or to know whether it succeeded.
LP: #1284439
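A rough sketch of an external consumer, in Python (the file path and JSON layout are assumptions):
import json
import os
import time

# Poll until cloud-init writes result.json, then check it for errors.
result_path = "/run/cloud-init/result.json"
while not os.path.exists(result_path):
    time.sleep(1)
with open(result_path) as fp:
    result = json.load(fp)
errors = result.get("v1", {}).get("errors", [])
print("cloud-init finished; errors: %s" % errors)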
|
|
This extends the 'random_seed' top-level entry to include a 'command'
entry, which then has the opportunity to seed the random number generator.
Example config:
#cloud-config
random_seed:
  command: ['dd', 'if=/dev/zero', 'of=/dev/random', 'bs=1M', 'count=10']
LP: #1286316
|
|
As with the SmartOS change earlier, running dmidecode on arm will crash kvm.
So instead of doing that, just return UNKNOWN, which will cause this
datasource not to activate.
LP: #1285686
|
|
See LP: #1243287 for more information, but the easiest thing to do
here is to just not run SmartOS on arm.
LP: #1243287
|
|
OpenStack has a unique derivative datasource that is gaining usage.
Previously the config drive datasource provided part of this functionality,
as did the EC2 datasource, but since new functionality is being added
to OpenStack's special datasource it seems beneficial to combine the shared
parts into a new datasource made just for handling OpenStack deployments
that use the OpenStack metadata service (possibly in combination with the
EC2 metadata service).
This patch factors out the common logic shared between the config drive
and the OpenStack metadata datasources, places it in a shared helper
file, creates a new OpenStack datasource that reads from the
OpenStack metadata service, and refactors the config drive datasource to
use this common logic.
|
|
There are some rough edges here and it's missing some tests, but
I want to get this pulled in.
|
|
|
|
|
|
Config modules are able to declare distros that they were verified
to run on by setting 'distros' as a list in the config module.
Previously, if a module was configured to run and the running distro was not
listed as supported, it would run anyway, and a warning would be written.
Now, we change the behavior to skip those modules.
The distro (or user) can specify that a given list of modules should run anyway
by declaring the 'unverified_modules' config variable.
run_once modules will be run without this filter (i.e., expecting that the user
explicitly wanted to run them).
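For example (the module name is purely illustrative):
#cloud-config
unverified_modules: ['byobu']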
|
|
If a datasource was found other than in /var/lib/waagent, and /var/lib/waagent
contained all the files necessary for 'wait_for_files' (most likely
'SharedConfig.xml'), then cloud-init would continue on before looking properly.
To address this, if the ovf-env.xml came from somewhere other than
/var/lib/waagent, and it differs from the file in /var/lib/waagent, then
we clean up some files that we expect to be provided by 'wait_for_files'.
Also some minor changes to the tests here.
LP: #1269626
|
|
Here we add the ability to read vendor-data from a file named
vendor-data at the same location as the user-data and meta-data files.
At the moment, vendor-data is not read at all from 'seedfrom'.
|
|
Due to a bug in the function "cloudinit.util.is_ipv4", an IPv4 address with a
zero (0) in any component was not evaluated as an IPv4 address.
E.g., with a local datasource having 192.168.0.1 in meta-data/local-hostname,
the correct behaviour would be to generate the hostname ip-192-168-0-1. With
this bug, the hostname (with the IPv4 address) was treated as an FQDN (no IPv4
inside) and only the first component (assumed to be the hostname) was taken,
generating the hostname "192".
|
|
Fixes for the SmartOS datasource:
1. fixed conflation of user-data and cloud-init user-data. Cloud-init
user-data is now namespaced as 'cloud-init:user-data'.
2. user-scripts (not user-data) are now fetched from the meta-data service
on each boot and executed as in the scripts directory
3. the datacenter name is now namespaced as sdc:datacenter
4. user-scripts will now have '#!/bin/bash' magically prepended
if 'file' thinks the script is plain text and it does not start with '#!'
|
|
read_file_or_url: raise UrlError with 404 on ENOENT
This makes it easier to call read_file_or_url and handle file or URL
errors. Now read_file_or_url will raise a UrlError in either case
on errors.
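A sketch of a caller that can now treat a missing local file and an HTTP 404 the same way (module paths and the seed path are assumptions):
from cloudinit import url_helper, util

try:
    resp = util.read_file_or_url("/var/lib/cloud/seed/nocloud/vendor-data")
except url_helper.UrlError as e:
    if e.code == 404:
        resp = None  # optional data; not being present is fine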
|
|
This gets initial support for freebsd.
|
|
This change stops filtering out partitions as potential ConfigDrive
sources if the partition's LABEL is set to "config-2".
This is useful for a bare metal device, which may not have a separate device
for the ConfigDrive but may instead have a ConfigDrive available on a partition.
|
|
If mount_info says that the root filesystem is on /dev/root and
/dev/root does not exist, then we'll try to glean that information
from the linux kernel cmdline.
This situation occurs at least when you boot without an initramfs
for the current ppc64el cloud images:
qemu-system-ppc64 ... -kernel my.kernel -append 'root=/dev/sda'
When doing that, /proc/1/mountinfo will say '/dev/root' for '/'.
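A rough sketch of the fallback (the actual helper in cloud-init may differ):
def rootdev_from_cmdline(cmdline):
    # Find root=... on the kernel command line (plain device names only).
    for tok in cmdline.split():
        if not tok.startswith("root="):
            continue
        dev = tok[len("root="):]
        if dev.startswith("/dev/"):
            return dev
        if dev.startswith("UUID=") or dev.startswith("LABEL="):
            return None  # not handled in this sketch
        return "/dev/" + dev
    return None

with open("/proc/cmdline") as fp:
    print(rootdev_from_cmdline(fp.read()))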
|
|
|
|
We had a requirement on boto only in order to use
boto.utils.get_instance_metadata(). That had actually caused some pain in
the past. This removes a Requires, and one that was not Python 3 compatible
at that.
|
|
This adds the ability for a datasource to provide "vendordata".
The difference here is that vendordata is from the vendor (cloud provider),
whereas user-data is from the user. By enabling this channel, the vendor
can have input on how the instance is set up without modifying, or needing
to understand, the user-data.
vendordata is generally consumed exactly like user-data, but the user
has the ability to disable its consumption.
The only datasource supporting this at the moment is SmartOS.
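Consumption can be disabled from user-data along these lines (key names assumed):
#cloud-config
vendor_data:
  enabled: false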
|
|
This was previously broken anyway. It doesn't seem like there
was an easy way to actually support it, so for now I'm removing
it entirely. growpart works well enough.
|
|
This has been "best practice" for quite some time, and it's a common
request: "where is the output of my user-data programs?"
http://askubuntu.com/questions/345344/where-are-the-logs-for-my-user-data-script-cloud-init
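The sort of default this enables looks roughly like the following (log path assumed):
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}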
|
|
We were passing a unicode string to 'runcmd' in the path to the .crt file.
That is because the keyname was coming from the ovf file as unicode.
I.e.:
u'/var/lib/waagent/6BE7A7C3C8A8F4B123CCA5D0C2F1BE4CA7B63ED7.crt'
Also, logging was extending, not appending, errors.
|
|
Before passing a path into selinux.matchpathcon, it needs to be cast
to a string, since the path could be unicode and selinux.matchpathcon
does not support unicode.
LP: #1260072
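A minimal sketch of the fix (call shape assumed from the python-selinux bindings):
import os
import stat
import selinux

path = u'/etc/hosts'  # may arrive as a unicode object
mode = os.lstat(path)[stat.ST_MODE]
# matchpathcon() does not accept unicode, so coerce to str first.
selinux.matchpathcon(str(path), mode)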
|
|
This allows a general config option to prefix apt-get commands via
'apt_get_wrapper'. By default, the command is set to 'eatmydata' and the
mode to 'auto'. That means that if eatmydata is available (via which), it
will be used.
The 'command' can be either an array or a string.
LP: #1236531
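Roughly (exact key names assumed):
#cloud-config
apt_get_wrapper:
  enabled: auto
  command: eatmydata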
|
|
This adds a debug module for printing debug output. It is not enabled
by default (by putting it in cloud_config_modules or elsewhere).
That's fine, as it is still quite useful for the user to run:
sudo cloud-init single --frequency=always --name=debug ci-debug.txt
|
|
Since an import failure can be an expected failure, do not log it
at WARNING level but at DEBUG level. This
will help developers and cloud-init users figure out why imports fail
(if they ever do). Previously it was hard to know if a module failed to
import for a valid reason (it does not exist) or an invalid reason (the
module exists but has a dependency which is not satisfied).
|
|
|
|
0.6.4 was never released, but had entries in the ChangeLog.
The lack of a tag for 0.6.4 caused problems with 'make rpm'
LP: #1241834
|
|
This removes the requirement for /proc/PID/mountinfo, which was added in Linux
kernel 2.6.26. We could potentially revisit this and read /proc/mounts rather
than /proc/mtab, but mtab proves effective in testing.
LP: #1248625
|
|
|
|
The SmartOS host changed the name of 'region' to 'datacenter_name'.
LP: #1244355
|