|
Specifically, this is to support Azure's G-series VMs (which come with
disks up to 6500GB).
|
|
This fixes the last set of WARN messages in my testing.
* open /dev/console in text mode
* move the final message to be a jinja template by default, to avoid
a warning about the lack of cheetah.
* write and read pickle'd contents in binary (see the sketch after this list)
* some logging tests
Also:
* add tool tox-venv for simple things like:
tox-venv py34 /bin/bash
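For the pickle item above, a minimal sketch of the binary-mode pattern (the
file name and payload are only illustrative):

  import pickle

  # write pickled state in binary mode so the same file works on python 2 and 3
  with open('/tmp/obj.pkl', 'wb') as fp:
      pickle.dump({'instance-id': 'i-abcdefg'}, fp)

  # read it back, also in binary mode
  with open('/tmp/obj.pkl', 'rb') as fp:
      obj = pickle.load(fp)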
|
|
This gives us functional python3 support. There are likely
still bugs, but instance boot on openstack is functional now.
LP: #1247132
|
|
On Linux we can read DMI information from /sys rather than using the
dmidecode binary, thus removing a dependency.
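A minimal sketch of the /sys-based approach; the field name passed in is just
one of the flat files exposed under /sys/class/dmi/id:

  import os

  def read_dmi_sys(field):
      # DMI fields are exposed as flat files, e.g. /sys/class/dmi/id/product_name
      path = os.path.join('/sys/class/dmi/id', field)
      if not os.path.isfile(path):
          return None
      with open(path) as fp:
          return fp.read().strip()

  print(read_dmi_sys('sys_vendor'))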
|
|
On RHEL, we were writing the fqdn to persistent configuration, but
invoking 'hostname' on first boot with just the shortname. After a reboot,
the hostname would then differ.
Now, whatever value we write is also what we invoke 'hostname' with.
Also remove some duplicate code.
LP: #1246485
|
|
This fixes a race condition that can cause cloud-init output to be spit out
over the login prompt on the console when booting under systemd.
|
|
Google Compute Engine fqdn hostnames are usually longer than 64 characters.
This causes issues with many tools (often Java based).
Note that per gethostname(2):
POSIX.1-2001 guarantees that "Host names (not including the terminating null
byte) are limited to HOST_NAME_MAX bytes". On Linux, HOST_NAME_MAX is defined
with the value 64.
LP: #1383794
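A minimal sketch of the idea: keep only the first label of the fqdn, which
stays well under HOST_NAME_MAX (the example name is illustrative, not
necessarily what the datasource does verbatim):

  def short_hostname(fqdn):
      # 'instance-1.c.my-project.internal' -> 'instance-1'
      return fqdn.split('.')[0]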
|
|
Enable user-data encoding support for GCE. Extended and updated tests to
support checking the user-data encoding.
Users can now pass in user-data encoded in base64 and indicate they've
done so by adding a tag 'user-data-encoding' with value 'base64'.
LP: #1404311
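A sketch of the decode step, assuming the instance attributes have already
been fetched into a dict; the key names follow the commit message, the helper
name is hypothetical:

  import base64

  def decode_user_data(attrs):
      # attrs: dict of instance attributes already read from the metadata server
      ud = attrs.get('user-data')
      if ud is not None and attrs.get('user-data-encoding') == 'base64':
          ud = base64.b64decode(ud)
      return ud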
|
|
This is a minor change which uses the new Chef (company) top
level domain for grabbing the Omnibus installation shell script.
|
|
Instead of only accepting a list, tuple, or set type,
allow a string or dict to be passed in for 'ssh_authorized_keys',
and add a log message for the case where some other type that
cannot be correctly processed is used.
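A rough sketch of the normalization described above; how a dict is flattened
here (its values) and the log wording are assumptions, not necessarily what
the module does:

  import logging

  LOG = logging.getLogger(__name__)

  def normalize_authorized_keys(value):
      # accept a single key as a string, a dict of keys, or any list/tuple/set
      if isinstance(value, str):
          return [value]
      if isinstance(value, dict):
          return list(value.values())
      if isinstance(value, (list, tuple, set)):
          return list(value)
      LOG.warning("Unknown type %s for ssh_authorized_keys; ignoring", type(value))
      return []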
|
|
This fix handles '=' as a delimiter in SSH config and
adds appropriate test methods to ensure this functionality
continues to work correctly.
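A sketch of parsing both accepted forms ('Key Value' and 'Key=Value',
optionally with spaces around the '='); this illustrates the delimiter
handling, not the module's actual parser:

  import re

  def parse_ssh_config_line(line):
      # sshd_config accepts 'Port 22' as well as 'Port=22' and 'Port = 22'
      line = line.strip()
      if not line or line.startswith('#'):
          return None
      match = re.match(r'^(\S+?)\s*(?:=|\s)\s*(.*)$', line)
      if not match:
          return line, ''
      return match.group(1), match.group(2)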
|
|
Add the following adjustments to the chef template and module:
- Make it so that the chef directories can be provided (defaults
to the existing directories)
- Make the params much more configurable, and if a parameter is
provided in the chef configuration it will override existing template
parameters.
- Make the template skip lines if the values are None in the configuration
so that template lines can be removed if/when this is desirable.
- Allow the firstboot json path to be configurable (defaults to the
existing location).
- Adds a basic set of tests to ensure that good things are happening.
- Make a helper function to tell if chef is already installed.
- Have the install routine not run chef after installation; instead have it
  return a result telling the caller to run the chef program once completed.
- Use the generated_by() utility function to give the ruby template a
better header comment.
- Set special parameters after selecting the basic chef parameters.
- Allow whether to run after install, and the run arguments, to be configured.
- Allow the omnibus url fetching retries to be configurable.
- Move the chef running to its own helper function
- Add module docs
|
|
The DigitalOcean datasource test is using assertIs, which
only exists on py2.7 and later, so for the older py2.6
we need to add similar logic so that the test works correctly
there.
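One possible shape of such a shim (a sketch only, not necessarily the helper
actually added):

  import unittest

  class TestCase(unittest.TestCase):
      # python 2.6's unittest lacks assertIs, so provide a minimal fallback
      if not hasattr(unittest.TestCase, 'assertIs'):
          def assertIs(self, obj1, obj2, msg=None):
              self.assertTrue(obj1 is obj2, msg)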
|
|
To make it so that cloud-init is installable in a virtualenv
where it can be tested in an isolated scenario we need to avoid
using and including datafiles (which won't be written into the
virtualenv) and also avoid using our initsys helper class, which
adds its own files when we are being run from a virtualenv.
|
|
sources.list was where this showed itself, but all rendered files
would have their newline stripped.
LP: #1355343
|
|
Add the basics of docs that can be extracted from the code itself (also
impose an initial format that will be useful for further modules to
follow). In this initial addition, modify cc_debug.py and
cc_ubuntu_init_switch.py to use this new style.
LP: #1383510
|
|
|
|
|
|
This adds a DataSource for DigitalOcean's metadata service. The service is
documented at https://developers.digitalocean.com/metadata/ .
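A minimal sketch of talking to that service; the link-local address and
v1.json path follow DigitalOcean's documentation, and error handling/retries
are omitted:

  import json
  try:
      from urllib.request import urlopen   # python 3
  except ImportError:
      from urllib2 import urlopen          # python 2

  def fetch_do_metadata(url='http://169.254.169.254/metadata/v1.json', timeout=5):
      # the service returns a single json document describing the droplet
      return json.loads(urlopen(url, timeout=timeout).read().decode('utf-8'))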
|
|
|
|
Users can now configure a swap file.
Only supports unencrypted swap for now.
swap:
  filename: /swap.img
  size: "auto" or size in bytes
  maxsize: size in bytes
Also adds 2 util functions:
read_meminfo: return how much memory is on the system.
human2bytes: convert human numbers (8G) to bytes.
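A sketch of what these two utilities can look like; the exact suffix handling
and return shapes are assumptions:

  def human2bytes(size):
      # accept plain integers ("1048576") or a suffixed value like "8G"
      size = str(size).strip()
      multipliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}
      suffix = size[-1].upper()
      if suffix in multipliers:
          return int(float(size[:-1]) * multipliers[suffix])
      return int(size)

  def read_meminfo():
      # tiny reader for MemTotal from /proc/meminfo, returned in bytes
      with open('/proc/meminfo') as fp:
          for line in fp:
              if line.startswith('MemTotal:'):
                  return int(line.split()[1]) * 1024   # /proc value is in kB
      return None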
|
|
Add support for freebsd reading config drive. Primary work is
related to re-factoring mount_cb to not be so linux specific.
Other changes:
* declare PATH in freebsd initscripts
* list dependency on e2fsprogs (for blkid)
* enable ConfigDrive in freebsd config
* hosts.freebsd.tmpl added
|
|
HVM instances on EC2 have grub on /dev/xvda.
The bug here resulted in a prompt on grub update.
LP: #1336855
|
|
add kwargs to fork_cb, and utilize that to call log_time and pass through
the provided args to resize_cmd.
LP: #1338614
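A sketch of what a fork_cb that forwards kwargs can look like (a simplified
stand-in for the real utility, which also logs failures):

  import os
  import sys

  def fork_cb(child_cb, *args, **kwargs):
      # run child_cb(*args, **kwargs) in a forked child; the parent returns immediately
      fid = os.fork()
      if fid != 0:
          return
      try:
          child_cb(*args, **kwargs)
          os._exit(0)
      except Exception:
          sys.stderr.write("failed forked callback\n")
          os._exit(1)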
|
|
This makes the DataSourceConfigDrive support vendor-data in the same
way the metadata service reader does. There are still some things to
fix here, but now we're similar between these two.
Also drops the ability to specify a version (as in YYYY-MM-DD) that you want to
look for. Nothing was using this, but it may be useful to add back in
in the future and expose as a datasource config option.
|
|
In a container the device nodes may exist but not be writable.
I'm seeing this on a trusty host with trusty containers, where the root
device ends up looking like it is /dev/loop0.
LP: #1366891
|
|
|
|
|
|
This set of changes generally produces a functional cloud-init on FreeBSD.
|
|
The OpenStack metadata service implementation would end up fetching
URLs more than once, as _path_exists would end up doing a GET.
Now, instead, just get the things you expect to be there.
|
|
pep8: passes on pylint 1.5.7 (and 1.5.6 on utopic);
the intent is for that to be the target for future changes.
pylint: removed, as it is more hassle than it's worth.
The intent is to move to pyflakes at some point.
|
|
The module was intended to allow disabling via configuration, but that was broken.
Now this allows:
no_ssh_fingerprints = True
LP: #1340903
|
|
LP: #1313114
|
|
This makes some changes to cc_resolv_conf to make its
generate_resolv_conf method more easily callable (for future tests).
Also sets it up so that 'options' is always defined when the template
is rendered.
LP: #1328953
|
|
LP: #1329583
|
|
LP: #1333920
|
|
|
|
The module is useful primarily for testing in Ubuntu's transition to systemd.
It should be very harmless elsewhere as it defaults to doing nothing,
and will only run if configured as 'ubuntu' distro *and* 'dpkg' is available.
|
|
comments in /etc/timezone are not expected, and can cause problems
if another tool tries to read it.
LP: #1341710
|
|
previous commit occurred because the selinux test was failing
in a schroot where there was no /etc/hosts.
Now, fix that test more correctly, and fix some bad assumptions in
the SeLinuxGuard.
|
|
Use pybuild and drop cdbs.
This also now runs the tests during the build, and therefore requires the
build dependencies.
|
|
|
|
This drops the hard requirement on Cheetah.
Jinja is a python 2.4->3.x compatible templating engine; allow its
optional usage (until we can deprecate cheetah) by allowing a
template file header to specify which template engine to use.
If the template file header does not specify a renderer, then assume
it is cheetah. If cheetah is not available, then use a limited
builtin renderer on a best-effort basis, and log a warning.
LP: #1219223
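The header is a first-line comment naming the renderer (cloud-init's templates
use the '## template: <name>' form). A sketch of how such a header can be
detected, with the fallback default described above:

  def detect_template_renderer(content):
      # a first line like '## template: jinja' selects the engine;
      # with no header, fall back to the historical default (cheetah)
      first_line = content.splitlines()[0] if content else ''
      if first_line.startswith('## template:'):
          return first_line.split(':', 1)[1].strip().lower()
      return 'cheetah'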
|
|
|
|
LP: #1327065
|
|
LP: #1316597
|
|
On systems with a ttyS1 and nothing attached, the read attempts
that the CloudSigma datasource would make could block.
Also, add timeouts for reading from and writing to the serial console.
LP: #1316475
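A sketch of the kind of guarded serial access involved, using pyserial; the
port name, timeout values, and helper are illustrative rather than the
datasource's actual code:

  import serial  # pyserial

  def query_serial(request, port='/dev/ttyS1', timeout=30):
      # set both a read and a write timeout so a dead serial line cannot
      # block boot (newer pyserial spells the second argument write_timeout)
      conn = serial.Serial(port, timeout=timeout, writeTimeout=timeout)
      try:
          conn.write(request)
          return conn.readline()
      finally:
          conn.close()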
|
|
LP: #1303986
|
|
|
|
This change adds the possibility to have base64-encoded userdata in
the OpenNebula datasource.
OpenNebula uses a text file with shell variables for storing the
configuration variables (including user-provided data). Some user data may
not be representable in this format, so using base64 encoding alleviates
the problem.
The change here allows the user to provide a second variable
USERDATA_ENCODING (or USER_DATA_ENCODING) and set that value to 'base64'
to indicate that USERDATA is base64 encoded.
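A sketch of the decode step, given the context variables already parsed into
a dict (variable names follow the commit message):

  import base64

  def get_user_data(context):
      # context holds the shell-style variables from the OpenNebula context file
      ud = context.get('USERDATA', context.get('USER_DATA'))
      encoding = context.get('USERDATA_ENCODING', context.get('USER_DATA_ENCODING'))
      if ud is not None and encoding == 'base64':
          ud = base64.b64decode(ud)
      return ud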
|