Age | Commit message | Author |
|
In general, block device mappings should be to block devices, not
partitions.
|
Also:
* cloudinit/sources/DataSourceAzure.py: invalid XML in a file called
  'ovfenv.xml' should raise BrokenAzureDatasource rather than
  NonAzureDataSource (see the sketch below).
* cloudinit/sources/DataSourceSmartOS.py,
  cloudinit/sources/DataSourceAzure.py:
  use 'ephemeral0' as the device name in the builtin fs_setup.
* tests/unittests/test_datasource/test_azure.py:
  * always patch 'list_possible_azure_ds_devs', as it calls find_devs_with,
    which calls blkid and was dramatically slowing down tests on my system.
  * test_user_cfg_set_agent_command_plain:
    fix this test to not depend on the specific format of yaml.dumps().
  * test_userdata_arrives: add a test that user-data makes it through.
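For the first item, a minimal sketch of the intended behaviour, using the
exception names as written above; the real parsing code in
DataSourceAzure.py differs in detail:

    from xml.dom import minidom

    class BrokenAzureDatasource(Exception):
        pass

    class NonAzureDataSource(Exception):
        # raised elsewhere when the environment is not Azure at all
        pass

    def read_azure_ovf(contents):
        try:
            return minidom.parseString(contents)
        except Exception as e:
            # invalid XML in ovfenv.xml means we are on Azure but the data
            # is unusable: report a broken datasource, not a non-Azure one
            raise BrokenAzureDatasource("invalid xml: %s" % e)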
|
Previously we had an 'ephemeral_disk' entry in the datasource config
for Azure, and then we also copied some entries from the datasource
config into the .cfg for that datasource.
I.e., datasource['Azure']['disk_setup'] would be oddly copied into the
.cfg object returned by 'get_config_obj'.
Now, instead, we have a BUILTIN_CLOUD_CONFIG which has those same
values in it.
The other change here is that 'ephemeral_disk' no longer has any
meaning. Instead, we add a populated-by-default 'disk_aliases' entry to
BUILTIN_DS_CFG, and 'device_name_to_device' simply returns entries
from it.
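A rough sketch of the shapes this describes; the alias value shown is an
assumption, and the real BUILTIN_DS_CFG in DataSourceAzure.py may carry
more entries:

    BUILTIN_DS_CFG = {
        # populated-by-default aliases; '/dev/sdb' here is illustrative
        'disk_aliases': {'ephemeral0': '/dev/sdb'},
    }

    def device_name_to_device(name):
        # nothing is copied into the .cfg object returned by
        # get_config_obj any more; just answer from disk_aliases
        return BUILTIN_DS_CFG['disk_aliases'].get(name)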
|
Saucy split cloud-utils into cloud-guest-utils and cloud-image-utils.
The former is in the cloud image, the latter is not; what we actually
need is growpart, which is in the former.
|
Some containers lack /dev/console, so when multi_log attempts to open
that device and write to it directly things can start going haywire.
Here we address this problem by sending console-bound output to stdout
and letting init take care of getting it to the console instead.
We already configure upstart with "console output", so we need only
change systemd to use "journal+console".
The one reason that 'console output' might not be sufficient is if the
user redirected output with 'output', e.g.:

  output:
    init: "> /var/log/my-cloud-init.log"

which would then mean all output goes there, and anything that
*needed* to go to the console (and was explicitly using multi_log for
that purpose) would not get there.
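A rough sketch of the fallback being described (not the exact cloud-init
code): write console-bound text to stdout when /dev/console is missing
and rely on init, via upstart's 'console output' or systemd's
StandardOutput=journal+console, to deliver it.

    import os
    import sys

    def multi_log(text, console=True, stderr=True):
        if stderr:
            sys.stderr.write(text)
        if console:
            if os.path.exists('/dev/console'):
                with open('/dev/console', 'w') as wfh:
                    wfh.write(text)
                    wfh.flush()
            else:
                # no /dev/console (e.g. some containers): let init
                # route stdout to the console for us
                sys.stdout.write(text)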
|
When libselinux-python is installed, but selinux is disabled on the
instance, calls to restorecon blow up. This fixes it by checking what
is_selinux_enabled() returns first.
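A minimal sketch of that guard, assuming the standard 'selinux' Python
bindings shipped by libselinux-python:

    try:
        import selinux
    except ImportError:
        selinux = None

    def restorecon_if_enabled(path):
        # only call restorecon when selinux is actually enabled, so an
        # instance booted with selinux=0 does not blow up
        if selinux and selinux.is_selinux_enabled():
            selinux.restorecon(path)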
|
If the disks that are attached on boot do not have a filesystem
on them, then this module is useful to set that up.
LP: #1218506
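As illustration, a cloud-config for this module might look roughly like
the following, shown as the parsed Python dict; the key names are from
my reading of the module and should be checked against doc/examples:

    example_cfg = {
        'disk_setup': {
            '/dev/sdb': {'table_type': 'mbr', 'layout': True,
                         'overwrite': False},
        },
        'fs_setup': [
            {'label': 'data', 'filesystem': 'ext4',
             'device': '/dev/sdb', 'partition': 'auto'},
        ],
    }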
|
Calls to restorecon can raise exceptions, causing nasty things to
happen on instances that boot with selinux=0. The fix is easy: simply
consult is_selinux_enabled() first.
|
Some containers lack /dev/console, so when multi_log attempts to open
that device and write to it directly, things can start going haywire.
Here we address this problem by sending console-bound output to stdout
and letting init take care of getting it to the console instead.
We already configure upstart with "console output", so we need only
change systemd to use "journal+console".
|
Changed cc_disk_setup to handle the filesystems as a label, no longer
passing "log" around.
Tidied up the documentation to reflect the changes, fixed grammar and
spelling, and improved the content a little.
Added disk_setup to the default modules list.
|
This reads the context disk from OpenNebula.
|
* use util.subp from inside parse_shell_config,
  and adjust exception handling accordingly.
* add 'switch_user_cmd' as a callback function passed to
  parse_shell_config, which allows us to mock it to avoid 'sudo' when
  running test cases. Basically the test cases just return '[]' here
  (see the sketch below).
* fix some pylint warnings.
* handle empty 'content' in parse_shell_config and remove the
  protection that was present.
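A hypothetical sketch of how such a callback keeps 'sudo' out of the
tests; names beyond those mentioned above are assumptions, and the real
function in DataSourceOpenNebula.py differs:

    from cloudinit import util

    def switch_user_cmd(user):
        # production path: run the parsing shell as 'user'
        return ['sudo', '-u', user, '--']

    def parse_shell_config(content, asuser=None,
                           switch_user_cb=switch_user_cmd):
        if not content.strip():
            return {}
        cmd = ['bash', '-e']
        if asuser:
            # tests pass a callback that returns [], so no sudo runs
            cmd = switch_user_cb(asuser) + cmd
        output, _err = util.subp(cmd, data=content)
        # ... turn 'output' into a dict of context variables ...
        return {}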
|
When the base DataSource class set 'ds_cfg' for a specific datasource's
config, it would fail for DataSources named like 'DataSourceFooNet'
where we want to read configuration from 'Foo'.
For example, both DataSourceOpenNebula and DataSourceOpenNebulaNet want
to read datasource config from:

  sources:
    OpenNebula:
      foo: bar

But without this change, 'ds_cfg' would not be set up properly for
OpenNebulaNet.
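One way to read the fix (the helper name here is hypothetical; the
actual lookup lives with the DataSource base class):

    def ds_cfg_name(ds_class_name):
        # DataSourceOpenNebulaNet and DataSourceOpenNebula both map to
        # the 'OpenNebula' section of the datasource config
        name = ds_class_name
        if name.startswith('DataSource'):
            name = name[len('DataSource'):]
        if name.endswith('Net'):
            name = name[:-len('Net')]
        return name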
|
Eat shell parser error output. Add a few tests for get_data.
|
A new field has emerged in the metadata on the OpenStack config drive,
one that provides a way to seed the Linux random number generator.
This adds a 'random_seed' config module that writes that data to
/dev/urandom. Also added is support for reading that data on Azure via
the Hyper-V ACPI table data.
In the config drive datasource, it rewrites parts of the on_boot code
to use a little helper class.
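A rough sketch of what the seeding side does, assuming the seed arrives
as bytes under an assumed metadata key:

    def handle_random_seed(metadata, seed_path='/dev/urandom'):
        seed = metadata.get('random_seed')
        if not seed:
            return
        if isinstance(seed, str):
            seed = seed.encode('utf-8')
        # appending to /dev/urandom mixes the seed into the kernel pool
        with open(seed_path, 'ab') as fp:
            fp.write(seed)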
|
Azure provides a random bit of data at '/sys/firmware/acpi/tables/OEM0'.
The walinux agent calls this "Entropy in ACPI table provided by Hyper-V".
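For illustration, pulling that entropy is just a file read; a sketch,
not the datasource's exact code:

    def read_azure_seed(path='/sys/firmware/acpi/tables/OEM0'):
        # returns None when the table is absent (non-Azure or older hosts)
        try:
            with open(path, 'rb') as fp:
                return fp.read()
        except (IOError, OSError):
            return None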
|
|
context variable names.
|
Instead of having one function to register default handlers and another
to register custom handlers, just use the same function for both, with
a parameter that controls whether previously existing content-types are
overwritten (default handlers use it so they do not overwrite custom
ones).
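A minimal sketch of that idea with assumed names; the real registration
code in cloud-init's handler setup differs:

    def register_handler(registry, handler, content_types, overwrite=True):
        for ctype in content_types:
            if not overwrite and ctype in registry:
                # default handlers pass overwrite=False so they never
                # clobber a custom handler already claiming this type
                continue
            registry[ctype] = handler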
|
It appears that udelta could previously have been left undefined, or
defined as the string "N/A", and then put through a float formatter.
Fix that by ensuring it is set to a default, and by checking that it
really is a float before using float formatting.
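A small sketch of the guard being described; the variable and the "N/A"
fallback come from the message, the rest is assumed:

    def format_udelta(udelta="N/A"):
        # only apply float formatting when udelta really is a number
        if isinstance(udelta, (int, float)):
            return "%.3f" % udelta
        return str(udelta)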
|
There are just some cleanups here: simply use 'sed' rather than grep
and cut. The motivation is to support running with a non-GNU 'grep'
that doesn't have -P.
|
It appears that udelta could previously have been left undefined, or
defined as the string "N/A", and then put through a float formatter.
Fix that by ensuring it is set to a default, and by checking that it
really is a float before using float formatting.
|