Fix an issue in EC2 where datasource.identity had not been initialized
before being used when restoring the datasource from a pickle. This is
exposed in the upgrade-and-reboot path.
LP: #1748354
|
|
This change enables Azure VMs to report that provisioning has completed
twice: first to tell the fabric it has completed, then a second time to
enable customer settings. The datasource for the second provisioning is
the Instance Metadata Service (IMDS), and the VM will poll indefinitely
for the new ovf-env.xml from IMDS.
This branch introduces EphemeralDHCPv4, which encapsulates common logic
used by both DataSourceEc2 and DataSourceAzure for temporary DHCP
interactions without side effects.
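A minimal usage sketch of the context-manager pattern described above
(the module path and lease keys shown here are assumptions, not
verbatim from this branch):

    from cloudinit.net.dhcp import EphemeralDHCPv4

    # Bring up a throwaway DHCPv4 lease, use it, and tear it down so no
    # network configuration lingers after the block exits.
    with EphemeralDHCPv4() as lease:
        # 'lease' is assumed to describe the obtained address.
        print(lease.get('interface'), lease.get('fixed-address'))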
LP: #1734991
|
|
The instance identity document is a better source for region information,
partly because region isn't actually in meta-data at all; only
availability-zone is, and it happens to be named similarly.
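For reference, a sketch of reading the region from the identity
document (this uses the standard EC2 dynamic-data endpoint; error
handling is omitted):

    import json
    from urllib.request import urlopen

    # Unlike meta-data/, the identity document carries a real 'region'
    # field.
    URL = 'http://169.254.169.254/latest/dynamic/instance-identity/document'
    doc = json.load(urlopen(URL, timeout=5))
    print(doc['region'])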
Reviewed-by: Ethan Faust <efaust@amazon.com>
Reviewed-by: Cyle Riggs <cyler@amazon.com>
Reviewed-by: Tom Kirchner <tjk@amazon.com>
Reviewed-by: Matt Nierzwicki <nierzwic@amazon.com>
[ajorgens@amazon.com: rebase onto 0.7.9]
[ajorgens@amazon.com: changes per merge proposal discussions]
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
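A hypothetical subclass illustrating the contract (everything except
dsname and _get_data is invented for the example):

    from cloudinit.sources import DataSource

    class DataSourceExample(DataSource):
        # dsname keys this datasource's section in cloud-config.
        dsname = 'Example'

        def _get_data(self):
            # Populate metadata here and return True on success; without
            # this override the base class raises NotImplementedError.
            self.metadata = {'instance-id': 'i-example'}
            return True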
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a JSON blob which other utilities with root access
could make use of. Because some meta-data or user-data is potentially
sensitive, the file is only readable by root.
Generally most metadata content should be JSON-serializable. If specific
keys or values are not serializable, those values will be base64-encoded
and the key path listed under the top-level key 'base64-encoded-keys' in
instance-data.json. If JSON writing fails due to other TypeErrors or
UnicodeDecodeErrors, a warning is logged to /var/log/cloud-init.log and
no instance-data.json is created.
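A sketch of consuming the file from a root-run utility (the path and
top-level key come from the description above):

    import json

    # Root-only file written during boot with merged meta-data,
    # user-data and vendor-data plus standardized keys.
    with open('/run/cloud-init/instance-data.json') as f:
        data = json.load(f)
    # Key paths whose values had to be base64-encoded are recorded here.
    print(data.get('base64-encoded-keys', []))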
|
|
If a user upgraded to a new cloud-init and ran 'cloud-init init' without
rebooting, cloud-init restores the datasource object from a pickle. The
datasource object pickled by the older version had no value for
_network_config or fallback_nic, which caused the Ec2 datasource to
attempt to reconfigure networking with a None fallback_nic. The pickled
object also cached an older version of ec2 metadata which didn't contain
network information.
This branch does two things:
- Add a fallback_interface property to DataSourceEc2 that reads the old
  .fallback_nic attribute if it was set; new versions will call
  net.find_fallback_nic() if none has been found yet.
- Re-crawl metadata if we are on Ec2 and don't have a 'network' key in
  metadata
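A sketch of the property described above (simplified; the caching
attribute name is an assumption):

    from cloudinit import net

    # Inside the DataSourceEc2 class:
    @property
    def fallback_interface(self):
        if self._fallback_interface is None:
            # Prefer the attribute carried over from an older pickle.
            if getattr(self, 'fallback_nic', None):
                self._fallback_interface = self.fallback_nic
            else:
                self._fallback_interface = net.find_fallback_nic()
        return self._fallback_interface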
LP: #1732917
|
|
VPC instances have the option to specify local-only IPv4 addresses. Allow
the Ec2 datasource to enable dhcp4 on instances even if local-ipv4s is
configured on an instance.
Also limit network_configuration to only the primary (fallback) nic.
LP: #1728152
|
|
This change makes DataSourceEc2Local do nothing unless it is on an
actual AWS platform. The motivation is twofold:
a.) It is generally safer to only make this function available to Ec2
clones that explicitly identify themselves to the guest. (It also
gives them a reason to supply identification code to cloud-init.)
b.) On non-Intel OpenStack platforms, ds-identify would enable both the
Ec2 and OpenStack sources because there is no good data (such as DMI)
to positively identify the platform. Previously that was fine, as
OpenStack would run first and succeed; the change to add Ec2Local meant
that an Ec2 check now runs first.
The best case for 'b' would be a slowdown as attempts at the Ec2 metadata
service time out. The discovered case was worse.
Additionally, we add a simple check of the datatype of 'network' in the
metadata before attempting to read it.
LP: #1715128
|
|
DataSourceEc2 now parses the metadata for each nic to determine if it is
configured for ipv6 and/or ipv4 addresses. In AWS, for metadata version
2016-09-02, nics configured for ipv4 or ipv6 addresses have non-zero
values stored in metadata at network/interfaces/macs/<MAC>/public-ipv4 or
ipv6s respectively. Those metadata files are only non-zero when an ipv4
or ipv6 address is associated with the specific nic. A new
DataSourceEc2.network_config property is added which parses the metadata
and renders a network version 1 dictionary representing both dhcp4 and
dhcp6 configuration for associated nics.
The network configuration returned from the datasource will also 'pin' the
nic name to the name presented on the instance for each nic.
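The rendered configuration is roughly of this shape (a hand-written
illustration of a version 1 network config; the name and MAC are made
up):

    network_config = {
        'version': 1,
        'config': [{
            'type': 'physical',
            'name': 'eth0',  # pinned to the name presented on the instance
            'mac_address': '0a:00:00:00:00:01',
            'subnets': [
                {'type': 'dhcp4'},  # nic had a non-zero public-ipv4
                {'type': 'dhcp6'},  # nic had entries under ipv6s
            ],
        }],
    }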
LP: #1639030
|
|
This branch is a prerequisite for IPv6 support in AWS: it allows the Ec2
datasource to query metadata source version 2016-09-02 about whether or
not it needs to configure IPv6 on interfaces. If version 2016-09-02 is
not present, fall back to the min_metadata_version of 2009-04-04.
DataSourceEc2Local does not run on FreeBSD because dhclient there
doesn't support the -sf flag that allows us to run dhclient without
filesystem side-effects.
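A sketch of the version negotiation described above (the URL layout is
the standard EC2 one; the helper and constants are invented for the
example):

    from urllib.request import urlopen
    from urllib.error import URLError

    MIN_METADATA_VERSION = '2009-04-04'
    EXTENDED_METADATA_VERSIONS = ['2016-09-02']

    def pick_api_version(base='http://169.254.169.254'):
        # Prefer the newest API version the service answers for; fall
        # back to the minimum version every EC2-alike is expected to
        # serve.
        for version in EXTENDED_METADATA_VERSIONS:
            url = '%s/%s/meta-data/instance-id' % (base, version)
            try:
                urlopen(url, timeout=2)
                return version
            except URLError:
                continue
        return MIN_METADATA_VERSION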
To query AWS' metadata address at 169.254.169.254, the instance must
have a dhcp-allocated address configured; configuring IPv4 link-local
addresses results in timeouts from the metadata service. We introduced a
DataSourceEc2Local subclass which performs a sandboxed dhclient discovery
to obtain an authorized IP address on eth0 and crawls metadata about the
full instance network configuration.
Since ec2 IPv6 metadata is not sufficient in itself to tell us all the
IPv6 knowledge we need, it is only used as a boolean to tell us which
nics need IPv6. Cloud-init will then configure desired interfaces for
DHCPv6 versus DHCPv4.
Performance side note: shifting the dhcp work into init-local for Ec2
actually gets us 1-second-faster deployments by skipping the
init-network phase of alternate datasource checks, because Ec2Local runs
in an earlier boot stage. In 3 test runs prior to this change, cloud-init
runs took 5.5 seconds; with the change we now average 4.6 seconds. This
could be improved even further if we could avoid dhcp discovery and talk
to the metadata service from an AWS-authorized dhcp address, were there
some way to advertise the dhcp configuration via DMI/SMBIOS or system
environment variables.
Inspecting the time costs of the dhclient setup/teardown in 3 live runs,
the dhcp setup round trip on AWS costs:
test 1: 76 milliseconds (dhcp discovery + metadata: 0.347 seconds;
metadata alone: 0.271 seconds)
test 2: 88 milliseconds (dhcp discovery + metadata: 0.388 seconds;
metadata alone: 0.300 seconds)
test 3: 75 milliseconds (dhcp discovery + metadata: 0.366 seconds;
metadata alone: 0.291 seconds)
LP: #1709772
|
|
EC2 was the original datasource, but this adds some initial tests for it.
Also updates a docstring for an internal method.
|
|
The AliYun cloud platform now identifies itself by setting the DMI
product id to the well-known value "Alibaba Cloud ECS". The changes here
identify that properly in tools/ds-identify and in DataSourceAliYun.
Since the 'get_data' for AliYun now identifies itself correctly, we can
enable AliYun by default.
LP: #1638931
|
|
This changes all instances of LOG.warn to LOG.warning, as warn is now a
deprecated method. It also makes sure any logging uses lazy logging by
passing string-format arguments as function parameters.
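For illustration, the pattern being applied (a generic example, not a
specific call site from the tree):

    import logging

    LOG = logging.getLogger(__name__)
    path = '/etc/example'  # stand-in value for the illustration

    # Before: deprecated method, eager %-formatting on every call.
    LOG.warn("failed to read %s" % path)

    # After: supported method, formatting deferred to the logging module
    # and performed only if the record is actually emitted.
    LOG.warning("failed to read %s", path)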
|
|
This moves the warning code that was added specifically for
EC2 into a generic path at cloudinit/warnings.py.
It also adds support for writing warning files into the
warnings directory to be shown by Z99-cloudinit-warnings.sh.
|
|
Brightbox will identify their platform to the guest by setting the
product serial to a string that ends with 'brightbox.com'.
LP: #1661693
|
|
Based on the setting Datasource/Ec2/strict_id, the datasource
will now warn once per instance.
|
|
This has been a recurring ask, and we had initially just made the change
to the cloud-init 2.0 codebase. As the current thinking is that we'll
just continue to enhance the current codebase, it's desirable to
relicense to match what we'd intended as part of the 2.0 plan here.
- put a brief description of license in LICENSE file
- put full license versions in LICENSE-GPLv3 and LICENSE-Apache2.0
- simplify the per-file header to reference LICENSE
- tox: ignore H102 (Apache License Header check)
Add license header to files that ship.
Reformat headers, make sure everything has vi: at end of file.
Non-shipping files do not need the copyright header,
but at the moment tests/ have it.
|
|
pycodestyle 2.1.0 is in Ubuntu zesty, and complained about the
changes made here. Simple style changes. This makes 'make pep8'
pass again when built in a zesty build system with proposed enabled.
|
|
Simply fix a commit that should not have been pushed.
|
|
Oracle public cloud has the string 'unavailable' in its metadata
service for 'block-device-mapping'. The change here is to return
None in device_name_to_device if that is the case.
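A sketch of the guard (the method name is from the message; the
surrounding lookup is simplified):

    # Inside the datasource class:
    def device_name_to_device(self, name):
        # Oracle's EC2-compatible metadata reports the literal string
        # 'unavailable' instead of a device path; treat that as no
        # mapping.
        entry = self.metadata.get('block-device-mapping', {}).get(name)
        if entry == 'unavailable':
            return None
        return entry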
|
|
Support AliYun (Ali-Cloud ECS). This datasource inherits from EC2; the
main difference is that the metadata server address is changed to
100.100.100.200.
The datasource behaves similarly to EC2 and relies on network polling.
As such, it is not enabled by default.
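The override involved is roughly this small (a sketch; the import path
follows cloud-init's module layout, and only the address attribute is
shown):

    from cloudinit.sources.DataSourceEc2 import DataSourceEc2

    class DataSourceAliYun(DataSourceEc2):
        # Same EC2-style metadata protocol, different well-known address.
        metadata_urls = ['http://100.100.100.200']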
|
|
If the Datasource does not have an entry in config, then set it to an
empty dictionary rather than None.
Also remove the places that did this defaulting elsewhere.
|
|
Update make check target to run pep8 and run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes
|
|
Also implement DataSource.region for EC2 and GCE data sources.
|
|
to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
|
|
get_url_settings should return a pair of max_wait and timeout, not
False. Fix this bug by checking max_wait <= 0 in the calling function
and returning correctly from there instead.
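A sketch of the corrected shape (names follow the message; defaults and
surrounding datasource code are simplified):

    # Inside the datasource class:
    def get_url_settings(self):
        # Always return the pair; never a bare False.
        max_wait = self.ds_cfg.get('max_wait', 120)
        timeout = self.ds_cfg.get('timeout', 50)
        return max_wait, timeout

    def wait_for_metadata_service(self):
        max_wait, timeout = self.get_url_settings()
        if max_wait <= 0:
            # The caller now decides to skip waiting, instead of
            # get_url_settings signalling that with a False return.
            return False
        # ... poll the metadata service using max_wait and timeout.
        return True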
|
|
are used that lazily load the metadata from the
ec2 metadata service.
1. Add an ec2_utils module that checks which version of boto is being
used and, under the right versions, expands the metadata dictionary.
2. Use this new ec2_utils module in the cloudstack and ec2 datasources
as their entrypoint into boto.
|
|
translate the device name to an actual device using logic that will try
the ec2 metadata (if available) or will try using 'blkid' to find a
corresponding label.
LP: #1062540
|
|
EC2 and OpenStack provide 'launch_index' in their metadata. This allows
the user to specify cloud-config or multipart MIME data that includes a
'Launch-Index' header.
If a launch index is available in the metadata service, then:
* any part that contains a launch index other than the current launch
index of this instance will be ignored.
* any part that does not contain a launch index will be considered to be
for this instance.
If there is no such header, or launch_index is not available in the
metadata service, then no such filtering will be done.
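For illustration, a user-data part carrying the header (a hand-written
example built with the standard library):

    from email.mime.text import MIMEText

    part = MIMEText('#cloud-config\nhostname: host-00', 'cloud-config')
    # Only an instance whose metadata launch index equals 0 consumes
    # this part; parts without the header apply to every launch index.
    part['Launch-Index'] = '0'
    print(part.as_string())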
LP: #1023177
|
|
In searching for the metadata service, require 'instance-data' to be at the top
level domain. Previously any misconfigured 'search' in /etc/resolv.conf could
result in unintended use of a metadata server.
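One way to require the name at the top level is to resolve it as an
absolute DNS name; a sketch of that idea (an illustration of the
principle, not the exact code added):

    import socket

    def metadata_host_resolvable():
        # The trailing dot makes 'instance-data.' an absolute name, so a
        # misconfigured 'search' entry in /etc/resolv.conf cannot expand
        # it into some unrelated host.
        try:
            socket.getaddrinfo('instance-data.', None)
            return True
        except socket.gaierror:
            return False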
LP: #1040200
|
|
variable has a little more meaning and by default look in
metadata for 'launch-index' and have ec2 instead look for
a different variable (thus allowing more datasources to just work).
|
|
userdata based on a launch-index (or leave userdata alone if none is
provided by the datasource). This works by doing the following:
1. Adjusting the userdata processor to attempt to inject a
"Launch-Index" header into each message's headers (by either taking a
header that already exists or by looking into the payload to see if it
exists there).
2. Adjusting the get_userdata ds function to apply a filter on the
returned userdata (defaulting to false) that will now use the
datasource's get_launch_index value to restrict the 'final' message used
in consuming user data (with the same behavior if none exists).
3. Further down the line, processes that use the 'resultant' userdata
will now only see the parts for their own launch index (i.e.
cloud-config will be restricted automatically and so on) and are
otherwise unaffected (although they can now ask the cloud object or the
datasource for its launch index via the above new ds method).
|
|
There are several changes here.
* Datasource now has an 'availability_zone' getter.
* Datasource has a convenience 'get_package_mirror_info' that calls the
configured distro and passes it the availability-zone.
* distro has a get_package_mirror_info method.
* get_package_mirror_info returns a dict of name:mirror; this is to
facilitate use of the 'security' and 'primary' archives.
* this supports searching based on templates. Any template that
references undefined values is skipped. These templates can contain
'availability_zone' (LP: #1037727).
* distro's mirrors can be arch specific (LP: #1028501).
* rename_apt_lists supports the "mirror_info" rather than a single
mirror.
* generate_sources_list supports mirror_info, and as a result the
ubuntu mirrors reference '$security' rather than security (LP: #1006963).
* remove the DataSourceEc2-specific mirror selection and instead rely on
the above filtering and the fact that 'ec2_region' is only defined if
the availability_zone looks like an ec2 az.
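A hand-written illustration of the shapes involved (mirror URLs and the
template are examples consistent with the description, not copied from
the tree):

    # What get_package_mirror_info returns: one mirror per archive name.
    mirror_info = {
        'primary': 'http://us-east-1.ec2.archive.ubuntu.com/ubuntu/',
        'security': 'http://security.ubuntu.com/ubuntu/',
    }

    # A search template referencing availability-zone-derived values;
    # any template whose referenced values are undefined is skipped.
    primary_template = 'http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/'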
|
|
On my system (quantal), 'make pylint' does not complain now.
|
|
This returns the check for an archive mirror in DataSourceEc2 to
checking only by DNS resolution. The 'rework' branch had made the check
wait and time out on attempts to reach the mirror, which resulted in 120
seconds of waiting before failure.
For now, just go back to the old situation of checking by DNS.
|
|
2. Adjust comment on sources list from depends
3. For the /etc/timezone 'writing', add a header that says created by cloud-init
|
|
translation to python 3.
|
|
new structure, using unified util functions, logging and eliminating code and calls.
|
|
|