Age | Commit message | Author |
|
I plan to re-use these methods later. They stand alone even if they
don't end up getting used, though.
|
|
This adds a check in cloud-init to see if the existing (cached)
datasource is still valid. It relies on support from the Datasource
to implement 'check_instance_id'. That method should quickly determine
(if possible) if the instance id found in the datasource is still valid.
This means that we can still notice new instance ids without
depending on a network datasource on every boot.
I've also implemented check_instance_id for the superclass and for
3 classes:
DataSourceAzure (check dmi data)
DataSourceOpenstack (check dmi data)
DataSourceNocloud (check the seeded data or kernel command line)
LP: #1553815
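As a rough sketch, such a check can compare the cached instance id against
something readable locally on every boot, e.g. the DMI product UUID (the
helper below and its use of /sys/class/dmi/id/product_uuid are illustrative,
not the actual implementation):

    def check_instance_id(cached_metadata):
        # Compare the instance id the datasource cached against the system
        # UUID exposed via DMI; this is local and fast, so it needs no
        # network datasource on boot.
        try:
            with open('/sys/class/dmi/id/product_uuid') as f:
                current = f.read().strip().lower()
        except OSError:
            return False  # cannot verify quickly; treat the cache as stale
        return current == str(cached_metadata.get('instance-id', '')).lower()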
|
|
Added code to customize the timezone.
Added a few utility functions to send events to the VMware hypervisor.
Refactored the code a little bit.
Added code to send a SUCCESS event when customization succeeds.
Added code to send a FAILED event if any error occurs during customization.
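For illustration, reporting such an event could be as simple as shelling out
to vmware-rpctool from open-vm-tools; the event text below is a placeholder,
not the real protocol message:

    import subprocess

    # Sketch (assumed mechanism): send a message to the hypervisor's log
    # channel via the guest RPC interface.
    def send_event(message):
        subprocess.check_call(['vmware-rpctool', 'log ' + message])

    send_event('customization SUCCEEDED')  # placeholder event text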
|
|
Parsing the command line parameters returned a dictionary,
but _merge_new_seed was expecting a string to be YAML-loaded.
The change is to make _merge_new_seed take either a string or a dict.
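Roughly, the accept-either behavior looks like this (a sketch; the real
function merges more carefully):

    import yaml

    def _merge_new_seed(cfg, seeded):
        # Accept either a YAML string or an already-parsed dict.
        if isinstance(seeded, str):
            seeded = yaml.safe_load(seeded)
        merged = dict(cfg)
        merged.update(seeded)  # simplistic merge, for illustration only
        return merged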
|
|
Previously we returned a string of "." characters the same length as the
DMI field. That is confusing to the user, as "." would seem like a valid
response when in fact this value should not be considered valid.
So now, in this case, return an empty string.
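In other words (illustrative sketch only):

    def sanitize_dmi_value(value):
        # An all-'.' value only means the field was undecodable 0xff
        # filler, so report it as unset rather than as a real string.
        if value and set(value) == {'.'}:
            return ''
        return value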
|
|
- Changed the really long 'from ... import ...' statements.
|
|
functional
|
|
- Now my branch is identical to trunk.dist
|
|
This adds the consumption of 'network-config' to DataSourceNoCloud.
There is an implementation of the network rendering in distros/debian
that is untested.
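A rough sketch of picking up that file from the seed directory (the file
name and layout are assumed from the NoCloud convention of meta-data and
user-data files):

    import os
    import yaml

    def load_network_config(seed_dir):
        # 'network-config' sits alongside meta-data and user-data; it is
        # optional, so a missing file simply means no network config.
        path = os.path.join(seed_dir, 'network-config')
        if not os.path.isfile(path):
            return None
        with open(path) as f:
            return yaml.safe_load(f)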
|
|
it is not uncommon to find dmi data in /sys full of 'ff'. utf-8
decoding of those would fail, causing a warning and a stacktrace.
Return '.' instead of \xff. This is what dmidecode would return.
$ dmidecode --string system-product-name
|
|
at this point, this works:
python -m cloudinit.net.network_state examples/network-all.yaml
|
|
just add curtin/net as cloudinit/net
and then copy curtin/udev.py as cloudinit/net/udev.py
|
|
it is not uncommon to find dmi data in /sys full of 'ff'. utf-8
decoding of those would fail, causing a warning and a stacktrace.
Return '.' instead of \xff. This maps to what dmidecode would return:
$ dmidecode --string system-product-name
.................................
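A sketch of that decoding fallback (illustrative; the real helper may
differ):

    def decode_dmi_bytes(data):
        # DMI fields read from /sys are bytes; 0xff filler is not valid
        # utf-8, so fall back to dmidecode's convention of printing '.'.
        try:
            return data.decode('utf-8')
        except UnicodeDecodeError:
            return data.replace(b'\xff', b'.').decode('utf-8', errors='replace')

    # decode_dmi_bytes(b'\xff' * 8) -> '........'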
|
|
This adds 'lxd' to the list of groups that the default user is added to.
It also changes behavior to create any necessary groups that are listed
for the user rather than failing to add the user.
There's also a fix for a usage of logexc that I found along the way.
LP: #1539317
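The create-missing-groups behavior, roughly (a sketch using standard
tooling, not cloud-init's own distro helpers):

    import grp
    import subprocess

    def ensure_groups(groups):
        # Create any group that does not exist yet, so the later user
        # creation does not fail on an unknown group name.
        for group in groups:
            try:
                grp.getgrnam(group)
            except KeyError:
                subprocess.check_call(['groupadd', group])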
|
|
- Added a new utility method to send an RPC for enabling NICs
- Modified DataSourceOVF.py to enable NICs.
- Executed ./tools/run-pep8 and no issues were reported.
|
|
- Added a few utility functions to report events to the underlying
VMware virtualization platform
- Refactored the code a little bit.
- Executed ./tools/run-pep8 and no pep8 errors were reported.
|
|
The user can still choose to run pollinate here to seed their
random data. And in an environment with a network datasource, that
would be expected to work. However, we do not want to run it any
more from cloud-init because:
a.) pollinate's own init system jobs should get it run before ssh,
which is the primary purpose of wanting cloud-init to run it.
b.) with a local datasource, there is no network guarantee when
init_modules run, so 'pollinate -q' would often cause issues then.
c.) cloud-init would run pollinate and log the failure, causing
many cloud-init-specific failures that it could do nothing about.
Additionally, add documentation for the seed_random config module.
|
|
The user can still choose to run pollinate here to seed their
random data. And in an environment with a network datasource, that
would be expected to work. However, we do not want to run it any
more from cloud-init because:
a.) pollinate's own init system jobs should get it run before ssh,
which is the primary purpose of wanting cloud-init to run it.
b.) with a local datasource, there is no network guarantee when
init_modules run, so 'pollinate -q' would often cause issues then.
c.) cloud-init would run pollinate and log the failure, causing
many cloud-init-specific failures that it could do nothing about.
LP: #1554152
|
|
caught exception.
|
|
make check fails in a trusty sbuild due to different rules on older pep8.
Fix formatting to pass in older and newer pep8.
|
|
Update make check target to run pep8 and run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes
|
|
Executed ./tools/run-pep8 cloudinit/sources/DataSourceOVF.py and no errors
were reported.
|
|
Update make check target to use pep8, pyflakes, pyflakes3.
|
|
Now we can run make check to assess pep8 and pyflakes for python2 or 3,
and execute unittests via nosetests (2 and 3).
|
|
This makes 'make' run pyflakes, so failures there will stop a build.
Also adds it to tox.
|
|
If the cloud-config does not contain an lxd dictionary then we should not
attempt to install the package. Change the latter half of the check to
negate the dictionary type check. This fix prevents us from always installing
lxd, rather than only installing when we have a config.
Fix the pyflakes check on the init_cfg dict error message.
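The corrected guard, roughly (a sketch of the negated type check described
above):

    def has_lxd_config(cfg):
        # Proceed with installation only when the cloud-config actually
        # carries an 'lxd' dictionary.
        return isinstance(cfg.get('lxd'), dict)

With this, an absent or non-dict 'lxd' entry no longer falls through to the
install path.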
|
|
Use the already implemented functionality of changing the password with a hashed string, which wasn't used anywhere.
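For illustration, applying an already-hashed password amounts to handing the
hash straight to the system (a sketch using usermod; not necessarily the
code path cloud-init uses):

    import subprocess

    def set_hashed_password(user, hashed):
        # The hash is passed through unchanged; no hashing happens here.
        subprocess.check_call(['usermod', '--password', hashed, user])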
|