distros base, and apply_fallback_network to distros to call
_write_network_fallback. Note that since _write_network_fallback is only
implemented for debian and ubuntu, a check is needed to ensure that it does
not break behaviour for other distros.
Added a function to util for disabling .cfg files, since it may be useful
elsewhere.
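A minimal sketch of that guard, assuming the method names above
(apply_fallback_network on the base class, _write_network_fallback only on
the subclasses that support it):

    import logging

    LOG = logging.getLogger(__name__)

    class Distro(object):
        def apply_fallback_network(self, settings):
            # _write_network_fallback exists only on debian/ubuntu
            # subclasses; other distros skip quietly, unchanged.
            render = getattr(self, '_write_network_fallback', None)
            if render is None:
                LOG.debug("fallback network config not supported here")
                return
            render(settings)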
This adds a check in cloud-init to see if the existing (cached)
datasource is still valid. It relies on support from the DataSource
to implement 'check_instance_id'. That method should quickly determine
(if possible) whether the instance id found in the datasource is still
valid. This means that we can still notice new instance ids without
depending on a network datasource on every boot.
I've also implemented check_instance_id for the superclass and for
3 classes:
  DataSourceAzure (check DMI data)
  DataSourceOpenStack (check DMI data)
  DataSourceNoCloud (check the seeded data or kernel command line)
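A DMI-based check might look roughly like this (util.read_dmi_data is an
existing cloud-init helper; the exact comparison shown is illustrative):

    from cloudinit import sources, util

    class DataSourceAzure(sources.DataSource):
        def check_instance_id(self):
            # compare the cached instance-id with current DMI data; no
            # network access is needed, so this is cheap on every boot
            return self.metadata.get('instance-id') == \
                util.read_dmi_data('system-uuid')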
LP: #1553815
on in the event that no network configuration was provided to cloud-init
- Devices in /sys/class/net aside from loopback devices are scanned
- Each device is tested to determine if it has a carrier using
/sys/class/net/DEV/carrier; devices which do are preferred, as they are
most likely connected to the outside world (see the sketch after this list)
- Devices which do not have a carrier, but which might still be connected
due to being in a dormant or down state, are used as fallbacks in case no
devices with a carrier are found
- A network state dictionary is generated to be passed to
render_network_state to write ENI
- A systemd link file is generated that will rename the chosen device to eth0
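A minimal sketch of that selection logic (the function name is
illustrative, not the actual implementation):

    import os

    def find_fallback_nic(sys_net='/sys/class/net'):
        connected, possible = [], []
        for dev in sorted(os.listdir(sys_net)):
            if dev == 'lo':  # skip loopback devices
                continue
            try:
                with open(os.path.join(sys_net, dev, 'carrier')) as f:
                    has_carrier = f.read().strip() == '1'
            except OSError:
                # reading 'carrier' on a down interface raises EINVAL
                has_carrier = False
            (connected if has_carrier else possible).append(dev)
        # prefer devices with a carrier, fall back to dormant/down ones
        candidates = connected or possible
        return candidates[0] if candidates else None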
- Modified the code to look for the customization specification file in
the /var/run/vmware-imc/ directory instead of /tmp
- Fixed the 'seed file' issue: a regression in DataSourceOVF.py.
lp:cloud-initramfs-tools/dyn-netconf/scripts/init-bottom/cloud-initramfs-dyn-netconf
broadcast, netmask, gateway and hostname if present
kernel's cmdline during network configuration parsing.
- Search for .conf files in /run with names starting with 'net', as these are
created during early boot if the ip parameter is present
- If any are present and valid they are merged with network configuration
from the current data source
- If the devices affected by the 'ip' parameter are already present in
network configuration, then a subnet entry will be added to the device's
configuration unless an identical entry is already present
- If any of the devices affected are not present, then a mostly blank
configuration will be generated for the device and the appropriate subnet
specified; a sketch of reading these .conf files follows
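Assuming klibc's shell-style KEY='value' format for those files (the
function name here is illustrative):

    import glob

    def read_cmdline_net_cfgs(run_dir='/run'):
        cfgs = []
        for path in sorted(glob.glob(run_dir + '/net*.conf')):
            entry = {}
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith('#') or '=' not in line:
                        continue
                    key, _, val = line.partition('=')
                    entry[key] = val.strip("'")
            if entry.get('DEVICE'):
                cfgs.append(entry)
        return cfgs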
Added code to customize the timezone.
Added a few utility functions to send events to the VMware hypervisor.
Refactored the code a bit.
Added code to send a SUCCESS event when customization succeeds.
Added code to send a FAILED event if any error occurs during customization.
parsing the command line parameters returned a dictionary,
but _merge_new_seed was expecting a string to be yaml loaded.
The change is to make _merge_new_seed take either a string or a dict.
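A sketch of the accommodating signature; util.load_yaml and
util.mergemanydict are existing cloud-init helpers, but the merge shown
here is simplified:

    from cloudinit import util

    def _merge_new_seed(cur, seeded):
        ret = cur.copy()
        # accept either an already-parsed dict or a yaml string
        newmd = seeded if isinstance(seeded, dict) else util.load_yaml(seeded)
        ret['meta-data'] = util.mergemanydict([cur['meta-data'], newmd])
        return ret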
from any ip= parameters passed on the kernel cmdline are merged into network
state
Previously we returned a string of "." the same length as the DMI field.
That seems confusing to the user, as "." would seem like a valid response
when in fact this value should not be considered valid.
So now, in this case, return an empty string.
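In sketch form (the function name is hypothetical; the all-0xff origin of
the "." strings comes from the related commits further down):

    def filter_unset_dmi(raw_bytes):
        # fields the firmware never set read back as all 0xff bytes; an
        # all-'.' string looks like real data, so return '' instead
        if raw_bytes and all(b == 0xff for b in raw_bytes):
            return ''
        return raw_bytes.decode('utf-8')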
- Changed the really long 'from ... import ...' statements.
functional
- Now my branch is identical to trunk.dist
This adds the consumption of 'network-config' to DataSourceNoCloud.
There is an implementation of the network rendering that is untested
in distros/debian.
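A sketch of how the datasource might expose it, assuming the usual
DataSource property pattern; the body is illustrative:

    from cloudinit import sources

    class DataSourceNoCloud(sources.DataSource):
        @property
        def network_config(self):
            # hand any seed-provided 'network-config' to the distro to render
            return self._network_config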
It is not uncommon to find DMI data in /sys full of 'ff'. utf-8
decoding of those would fail, causing a warning and stacktrace.
Return '.' instead of \xff. This is what dmidecode would return:
$ dmidecode --string system-product-name
At this point, this works:
python -m cloudinit.net.network_state examples/network-all.yaml
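A sketch of roughly what that command exercises (parse_net_config_data is
assumed to be the parser entry point, as in curtin; the 'network' top-level
key is also an assumption about the example file):

    import sys

    import yaml

    from cloudinit.net import network_state

    with open(sys.argv[1]) as f:
        cfg = yaml.safe_load(f)
    state = network_state.parse_net_config_data(cfg.get('network', cfg))
    print(state)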
Just add curtin/net as cloudinit/net
and then copy curtin/udev.py as cloudinit/net/udev.py
It is not uncommon to find DMI data in /sys full of 'ff'. utf-8
decoding of those would fail, causing a warning and stacktrace.
Return '.' instead of \xff. This maps to what dmidecode would return:
$ dmidecode --string system-product-name
.................................
This adds 'lxd' to the list of groups that the default user is added to.
It also changes behavior to create any necessary groups that are listed
for the user rather than failing to add the user.
There's also a fix for the usage of logexc that I found along the way.
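A sketch of the group handling (grp is stdlib and create_group is an
existing distro method; the wrapper name is illustrative):

    import grp

    def ensure_groups(distro, group_names):
        # create any listed group that does not exist yet, instead of
        # failing later when the user is added to it
        for name in group_names:
            try:
                grp.getgrnam(name)
            except KeyError:
                distro.create_group(name)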
LP: #1539317
- Added a new utility method to send an RPC for enabling NICs
- Modified DataSourceOVF.py to enable NICs.
- Executed ./tools/run-pep8 and no issues were reported.
- Added a few utility functions to report events to the underlying
VMware virtualization platform
- Refactored the code a bit.
- Executed ./tools/run-pep8 and no pep8 errors were reported.
The user can still choose to run pollinate here to seed their
random data. And in an environment with a network datasource, that
would be expected to work. However, we no longer want to run it
from cloud-init because:
a.) pollinate's own init system jobs should get it run before ssh,
which is the primary purpose of wanting cloud-init to run it.
b.) with a local datasource, there is no network guarantee when
init_modules run, so pollinate -q would often cause issues then.
c.) cloud-init would run pollinate and log the failure, causing
many cloud-init-specific failures that it could do nothing about.
Additionally, add documentation for the seed_random config module.
The user can still choose to run pollinate here to seed their
random data. And in an environment with a network datasource, that
would be expected to work. However, we no longer want to run it
from cloud-init because:
a.) pollinate's own init system jobs should get it run before ssh,
which is the primary purpose of wanting cloud-init to run it.
b.) with a local datasource, there is no network guarantee when
init_modules run, so pollinate -q would often cause issues then.
c.) cloud-init would run pollinate and log the failure, causing
many cloud-init-specific failures that it could do nothing about.
LP: #1554152
caught exception.
make check fails in a trusty sbuild due to different rules in older pep8.
Fix formatting to pass with both older and newer pep8.
Update make check target to run pep8 and run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes environment.
Executed ./tools/run-pep8 cloudinit/sources/DataSourceOVF.py and no errors
were reported.
Update make check target to use pep8, pyflakes, pyflakes3.
Now we can run make check to assess pep8 and pyflakes for python2 or 3,
and execute unittests via nosetests (2 and 3).