Age | Commit message | Author |
|
The CloudSigma datasource was not loaded during the cloud-init local stage.
Thus, when the network-stage datasource was removed, *no* CloudSigma
datasource would be loaded at all.
LP: #1648380
|
|
Cloud-init has for some time relied on walinuxagent to do some bits
of work necessary for instance initialization. That reliance has
not been needed for a while, but we have still defaulted to it.
This change makes the "builtin" path that Daniel Watkins added
some time ago the default, and adjusts tests that assumed the
non-__builtin__ Azure agent_command.
LP: #1538522
|
|
This adds a call to 'activate_datasource'. It will be called
during the init stage (or init-local in the event of a 'local' dsmode)
so that the datasource can perform any platform-specific operations
that may be necessary. It is passed the fully rendered cloud-config
and whether or not the instance is a new instance.
The Azure datasource uses this to address formatting of the ephemeral
devices. It does so by:
a.) waiting for the device to come online
b.) removing the marker files for the disk_setup and mounts modules
if it finds that the ephemeral device has been reset.
LP: #1611074
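A minimal sketch of such a hook; the method name, arguments and marker paths below are assumptions for illustration, not necessarily cloud-init's actual code:

    import os

    class DataSource(object):
        def activate(self, cfg, is_new_instance):
            # cfg: the fully rendered cloud-config dictionary
            # is_new_instance: True only on the first boot of this instance
            pass  # the base class does no platform-specific work

    class DataSourceAzure(DataSource):
        # Illustrative paths, mirroring the per-instance module semaphores.
        MARKER_FILES = [
            "/var/lib/cloud/instance/sem/config_disk_setup",
            "/var/lib/cloud/instance/sem/config_mounts",
        ]

        def activate(self, cfg, is_new_instance):
            # If the ephemeral disk looks freshly reset, drop the markers so
            # the disk_setup and mounts modules run again on this boot.
            if self._ephemeral_disk_was_reset():
                for path in self.MARKER_FILES:
                    if os.path.exists(path):
                        os.unlink(path)

        def _ephemeral_disk_was_reset(self):
            # Placeholder for the real check against the ephemeral device.
            return False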
|
|
An obvious fix for an issue raised by pyflakes 1.3.
|
|
Support AliYun (Ali-Cloud ECS). This datasource inherits from EC2;
the main difference is that the metadata server address is changed to
100.100.100.200.
The datasource behaves similarly to EC2 and relies on network polling.
As such, it is not enabled by default.
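A sketch of what that inheritance could look like; the module layout and attribute name are assumptions about cloud-init's EC2 datasource:

    from cloudinit.sources import DataSourceEc2

    class DataSourceAliYun(DataSourceEc2.DataSourceEc2):
        # Only the metadata service address changes; the polling and
        # crawling behaviour is inherited from the EC2 datasource.
        metadata_urls = ["http://100.100.100.200"]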
|
|
Replace the parsing of 'ip' output to get a link and MAC address list
in OpenNebula's datasource with use of cloudinit.net.
This makes the test cases there not depend on the availability of 'ip'
and also uses common code.
|
|
Some of the new DigitalOcean unit tests were written to use 'assert_called',
which is only available in mock 2.0 and later. Because of this, the failure
would only occur on releases earlier than yakkety and not under 'tox'.
Add a 'xenial' entry to tox.ini with the package versions from xenial.
|
|
On DigitalOcean, network information is provided via meta-data.
This changes the datasource to be a local datasource, meaning it
will run before fallback networking is configured.
The advantage is that, before networking is configured, it can bring
up a network device with an IPv4 link-local address and query the
metadata service at 169.254.169.254 to find its network
configuration. It then takes down the link-local address and lets
cloud-init configure networking.
Configuring a network device to go looking for a metadata
service is gated by a check of SMBIOS data. This guarantees
that the code will not run on another system.
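An illustrative sketch of that flow; the SMBIOS file, the chosen link-local address and the function name are assumptions, not the datasource's actual code:

    import json
    import subprocess
    from urllib.request import urlopen

    def read_droplet_metadata(nic="eth0"):
        # SMBIOS gate: never bring up a probe address on non-DigitalOcean systems.
        with open("/sys/class/dmi/id/sys_vendor") as f:
            if f.read().strip() != "DigitalOcean":
                return None
        addr = "169.254.169.252/16"  # arbitrary IPv4 link-local address
        subprocess.check_call(["ip", "addr", "add", addr, "dev", nic])
        subprocess.check_call(["ip", "link", "set", "dev", nic, "up"])
        try:
            url = "http://169.254.169.254/metadata/v1.json"
            return json.load(urlopen(url, timeout=10))
        finally:
            # Take the link-local address down again so cloud-init can
            # configure networking normally afterwards.
            subprocess.check_call(["ip", "addr", "del", addr, "dev", nic])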
|
|
Treat a null link type as yet another physical type, as seen in a
real-world OpenStack cloud.
Also, support the case where network_data.json provides MAC addresses
in upper case. Rackspace public cloud currently does that.
LP: #1621968
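A sketch of the normalization this implies; the constant and function names are assumptions:

    KNOWN_PHYSICAL_TYPES = (None, "ethernet", "phy", "vif")  # a null type maps to None

    def normalize_link(link):
        # Lower-case the MAC (Rackspace supplies upper case) and treat a
        # null 'type' like any other physical device type.
        mac = link.get("ethernet_mac_address")
        if mac:
            link["ethernet_mac_address"] = mac.lower()
        if link.get("type") in KNOWN_PHYSICAL_TYPES:
            link["type"] = "physical"
        return link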
|
|
When user-data was provided in the OVF environment, python3 would call
base64.decodestring() with a string rather than bytes and an exception
would occur.
This fixes the broken path and adds a unit test. It also changes the code
to return None rather than an empty string when there is no user-data,
and to return the user-data as bytes instead of a string when there is.
LP: #1619394
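A sketch of the corrected behaviour; the function name is illustrative, not the one used in the OVF datasource:

    import base64

    def get_ovf_userdata(blob):
        if not blob:
            return None                  # no user-data: return None, not ""
        if isinstance(blob, str):
            blob = blob.encode("ascii")  # py3 decoders want bytes, not str
        return base64.b64decode(blob)    # always hand back bytes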
|
|
The OpenStack network_data.json does not provide a name for bond links.
This change generates a dummy name and uses it instead, to satisfy
cloud-init, which does require one.
In order to write the correct underlying 'link' names
for the bonds, we maintain a mapping of link info by id so we can easily
get the right device name.
Also:
* add a vlan test case that similarly references an id rather than a name.
* make bond interfaces auto
LP: #1605749
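A sketch of the id-to-name bookkeeping described above; variable names are assumptions, the 'bond_links' key comes from the network_data.json format:

    def name_links(links):
        # Give unnamed (bond) links a generated name and remember every
        # link's name by id so bond members can be resolved later.
        link_id_to_name = {}
        bond_count = 0
        for link in links:
            name = link.get("name")
            if not name:
                name = "bond%d" % bond_count
                bond_count += 1
                link["name"] = name
            link_id_to_name[link["id"]] = name
        return link_id_to_name

    # Usage: member names for a bond that lists its underlying links by id.
    # members = [link_id_to_name[i] for i in bond_link.get("bond_links", [])]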
|
|
It is more efficient and cross-distribution safe to use the hooks mechanism
of dhclient to obtain the Azure endpoint server (DHCP option 245).
This is done by providing shell scripts that are called by the hooks
infrastructure of both dhclient and NetworkManager. The hooks then
invoke 'cloud-init dhclient-hook', which maintains json data
with the dhclient options in
/run/cloud-init/dhclient.hooks/<interface>.json .
The Azure helper then pulls the value from those file(s). If the file does
not exist or the value is not present, it falls back to the
original method of scraping the dhcp client lease file.
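A sketch of that lookup order; the JSON key used for option 245 and the fallback helper here are assumptions:

    import glob
    import json
    import os

    HOOK_DIR = "/run/cloud-init/dhclient.hooks"

    def scrape_lease_files():
        # Placeholder for the original fallback: parse dhclient lease files.
        return None

    def find_azure_endpoint():
        for path in sorted(glob.glob(os.path.join(HOOK_DIR, "*.json"))):
            with open(path) as fp:
                options = json.load(fp)
            value = options.get("unknown_245")  # DHCP option 245, assumed key name
            if value:
                return value
        return scrape_lease_files()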
|
|
Per [1], DigitalOcean provides the metadata in multiple formats. The JSON
document is the preferred endpoint.
Changes:
- Switch to the v1.json metadata endpoint
- Identify the droplet identity from SMBIOS
- Only poll for metadata when the instance is confirmed to be a droplet
- Remove hard-coded mirrors
Additionally, centralize the gating of 'dmidecode' on ARM architectures,
and update the tests to match.
[1] https://developers.digitalocean.com/documentation/metadata/
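A sketch of what a centralized dmidecode gate could look like; the helper name and the exact architecture check are assumptions:

    import os
    import subprocess

    def read_dmi_data(key):
        # dmidecode is not useful on ARM systems, so short-circuit there
        # instead of letting every caller repeat the architecture check.
        if os.uname().machine.startswith("arm"):
            return None
        try:
            out = subprocess.check_output(["dmidecode", "--string", key])
            return out.decode().strip()
        except (OSError, subprocess.CalledProcessError):
            return None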
|
|
Add vendor-data support to MAAS, behaving like OpenStack
vendor-data does. Data returned from MAAS must be YAML loadable.
Also update the main in DataSourceMAAS so it "just works" on a
MAAS-deployed system.
LP: #1612313
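A sketch of the "must be YAML loadable" requirement; the function name is assumed:

    import yaml

    def load_maas_vendordata(blob):
        if not blob:
            return None
        # Anything MAAS returns as vendor-data has to parse as YAML;
        # a parse error here means the vendor-data is rejected.
        return yaml.safe_load(blob)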
|
|
This just adds 'tap' to the list of types that are understood to
be physical or virtual network devices. OpenStack basically passes
the type of the host-side device straight through.
LP: #1610784
|
|
This fixes an issue with the NoCloud datasource where it would not
recognize the 'network-interfaces' key provided in meta-data.
LP: #1577982
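An illustrative NoCloud meta-data snippet using that key; the instance-id, device name and addresses are placeholders:

    instance-id: nocloud-01
    network-interfaces: |
      auto eth0
      iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1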
|
|
This improves SmartOS network configuration:
- fix the SocketClient, which was previously completely broken.
- add support for configuring DNS servers and DNS search (based on
  sdc:dns_domain).
- support 'sdc:gateways' information from the datasource for configuring
  default routes.
- add converted network information to the output when the module is run
  as a main.
This does not support 'sdc:routes' as described at
http://eng.joyent.com/mdata/datadict.html
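A sketch of deriving those extra settings; only the 'sdc:dns_domain' and 'sdc:gateways' keys come from the text above, the output shape is assumed:

    def extra_net_settings(metadata):
        cfg = {}
        domain = metadata.get("sdc:dns_domain")
        if domain:
            cfg["dns_search"] = [domain]
        gateways = metadata.get("sdc:gateways") or []
        if gateways:
            # One default route per advertised gateway.
            cfg["routes"] = [{"to": "0.0.0.0/0", "via": gw} for gw in gateways]
        return cfg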
|
|
This commit includes the content of that commit, plus a fix for the tests
(provided by Phil).
|
|
Splits off distro-specific code into separate files so that
other kinds of networking configuration can be written by the
various distros that cloud-init supports.
It also isolates some of the cloudinit.net code so that it can
be more easily used on its own (and incorporated into other
projects such as curtin).
During this process it adds tests so that the net code and its
format-conversion processes can be tested (to some level)
going forward.
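A sketch of the kind of interface such a split suggests; the class and method names are assumptions, not cloud-init's actual renderer API:

    import abc

    class Renderer(abc.ABC):
        """Each distro backend turns generic network state into its own files."""

        @abc.abstractmethod
        def render_network_state(self, network_state, target="/"):
            """Write distro-specific configuration beneath 'target'."""

    class EniRenderer(Renderer):
        def render_network_state(self, network_state, target="/"):
            # A Debian/Ubuntu backend would emit /etc/network/interfaces here.
            pass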
|
|
The 'id' of a link in the openstack spec should be a "Generic, generated ID".
The current implementation was to use the host's name for the host-side
nic, which provided names like 'tap-adfasdffd'.
We do not want to name devices like that, as it is quite unexpected
and not user friendly. So here we use the system name for any
nic that is present, but then require that the nics found also
be present at the time of rendering.
The end result is that if the system boots with net.ifnames=0
it will get 'eth0'-like names, and if it boots without that parameter
it will get 'enp0s1'-like names.
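A sketch of the "must be present at render time" requirement, resolving a configured MAC to whatever name the running system gave the device; the sysfs layout is standard Linux, the function name is assumed:

    import os

    def current_name_for_mac(mac):
        for dev in os.listdir("/sys/class/net"):
            try:
                with open("/sys/class/net/%s/address" % dev) as f:
                    if f.read().strip().lower() == mac.lower():
                        return dev
            except OSError:
                continue
        raise RuntimeError("no device with mac %s present at render time" % mac)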
|
|
Timeouts and retries were triggering, so change the tests to not use
the typical timeouts and retries, letting them finish faster.
|
|
Instead of aborting all tests that use serial, create a serial
module in cloudinit that provides a fake, broken serial class
when pyserial is not actually installed.
This allows the datasource and its tests to be used in
a more functional and tested manner (even when pyserial is not
found).
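A sketch of that fallback; the error text is illustrative:

    try:
        from serial import Serial        # pyserial, when it is installed
    except ImportError:
        class Serial(object):
            # Importable stand-in that only fails when actually used.
            def __init__(self, *args, **kwargs):
                raise OSError("pyserial is not available; serial console "
                              "access cannot be used")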
|
|
Introduced a new path in configdrive, openstack/2015-10-15/; bogus data
needed to be added in that path as well to ensure the config reader did
not find good data when testing that an exception is thrown.
|
|
'make check' fails in a trusty sbuild due to different rules in older pep8.
Fix the formatting so it passes with both older and newer pep8.
|
|
Update the 'make check' target to run pep8, and to run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have 'make check' run the 'yaml' test.
tox: have tox run pep8 in the pyflakes
|
|
Now 'make check' can be run to assess pep8 and pyflakes for python2 or
python3, and to execute unit tests via nosetests (2 and 3).
|