|
Brightbox will identify their platform to the guest by setting the
product serial to a string that ends with 'brightbox.com'.
LP: #1661693
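A minimal sketch of what that detection can look like, assuming the product serial is readable from sysfs (the path and helper names here are illustrative, not the datasource's actual code):

    def _read_dmi_serial(path='/sys/class/dmi/id/product_serial'):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    def is_brightbox():
        # Brightbox guests carry a product serial ending in 'brightbox.com'.
        serial = _read_dmi_serial()
        return bool(serial) and serial.endswith('brightbox.com')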
|
|
Based on the setting Datasource/Ec2/strict_id, the datasource
will now warn once per instance.
|
|
This has been a recurring ask, and we had initially just made the change in
the cloud-init 2.0 codebase. As the current thinking is that we'll just
continue to enhance the current codebase, it's desirable to relicense it to
match what we'd intended as part of the 2.0 plan.
- put a brief description of license in LICENSE file
- put full license versions in LICENSE-GPLv3 and LICENSE-Apache2.0
- simplify the per-file header to reference LICENSE
- tox: ignore H102 (Apache License Header check)
Add license header to files that ship.
Reformat headers, make sure everything has vi: at end of file.
Non-shipping files do not need the copyright header,
but at the moment the files under tests/ still have it.
|
|
pycodestyle 2.1.0 is in Ubuntu zesty and complained about the code
changed here. These are simple style changes that make 'make pep8'
pass again when built in a zesty build system with proposed enabled.
|
|
Simply fix a commit that should not have been pushed.
|
|
Oracle public cloud has the string 'unavailable' in its metadata
service for 'block-device-mapping'. The change here is to have
device_name_to_device return None in that case.
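A minimal sketch of the check, assuming the surrounding lookup logic (the real method lives on the EC2 datasource):

    def device_name_to_device(metadata, name):
        # Oracle's metadata reports 'unavailable' for entries in
        # 'block-device-mapping'; treat that the same as no entry.
        bdm = (metadata or {}).get('block-device-mapping') or {}
        entry = bdm.get(name)
        if entry in (None, 'unavailable'):
            return None
        return entry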
|
|
Support AliYun (Ali-Cloud ECS). This datasource inherits from EC2;
the main difference is that the metadata server address is
100.100.100.200.
The datasource behaves like EC2 and relies on network polling.
As such, it is not enabled by default.
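A sketch of the inheritance described above; the attribute name for the endpoint is an assumption about the datasource's internals:

    from cloudinit.sources import DataSourceEc2

    class DataSourceAliYun(DataSourceEc2.DataSourceEc2):
        # Same EC2-style metadata crawl; only the endpoint differs.
        metadata_urls = ['http://100.100.100.200']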
|
|
If the datasource does not have an entry in the config, set it to
an empty dictionary rather than None.
Also remove code elsewhere that did this same defaulting.
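A one-line illustration of the defaulting, assuming a dict-shaped system config:

    def ds_config(sys_cfg, ds_name):
        # Fall back to {} rather than None so callers can .get() safely.
        return (sys_cfg.get('datasource') or {}).get(ds_name) or {}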
|
|
Update the make check target to run pep8, and to run pyflakes or
pyflakes3 depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes environment.
|
|
Also implement DataSource.region for EC2 and GCE data sources.
|
|
to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
|
|
get_url_settings should return a pair of
max_wait and timeout, not False. Fix this
bug by checking max_wait <= 0 in the calling
function and returning from there instead.
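A sketch of the corrected flow; function and config names are illustrative:

    def get_url_settings(ds_cfg):
        # Always return the (max_wait, timeout) pair, never False.
        max_wait = int(ds_cfg.get('max_wait', 120))
        timeout = int(ds_cfg.get('timeout', 50))
        return max_wait, timeout

    def wait_for_metadata_service(ds_cfg):
        max_wait, timeout = get_url_settings(ds_cfg)
        if max_wait <= 0:
            # Polling disabled: the caller decides, not get_url_settings.
            return False
        # ... poll the service using max_wait and timeout ...
        return True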
|
|
are used that lazily load the metadata from the
EC2 metadata service.
1. Add an ec2_utils module that checks which version
of boto is being used and, under the right versions,
expands the metadata dictionary.
2. Use this new ec2_utils module in the cloudstack and ec2
datasources as their entry points into boto.
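A sketch of the "lazily load" idea only; the boto version gate is elided and the names are assumptions:

    class LazyMetadata(dict):
        def __init__(self, fetcher, keys):
            super().__init__()
            self._fetcher = fetcher   # callable hitting the metadata service
            self._keys = set(keys)

        def __missing__(self, key):
            # Fetch a leaf on first access instead of crawling everything.
            if key not in self._keys:
                raise KeyError(key)
            self[key] = value = self._fetcher(key)
            return value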
|
|
Translate the device name to an actual device using
logic that tries the EC2 metadata (if available) or
falls back to 'blkid' to find a device with a
corresponding label.
LP: #1062540
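A sketch of that fallback chain, assuming the metadata dict shape; the blkid invocation is real, the helper itself is illustrative:

    import subprocess

    def device_name_to_device(name, metadata):
        # First choice: the EC2-style block-device-mapping, if available.
        bdm = (metadata or {}).get('block-device-mapping') or {}
        if name in bdm:
            return bdm[name]
        # Fallback: ask blkid for a device carrying a matching label.
        try:
            out = subprocess.check_output(
                ['blkid', '-t', 'LABEL=%s' % name, '-o', 'device'])
            return out.decode().strip() or None
        except (OSError, subprocess.CalledProcessError):
            return None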
|
|
EC2 and OpenStack provide 'launch_index' in their metadata. This allows
the user to specify cloud-config or multipart MIME data that includes the
'Launch-Index' header.
If a launch index is available in the metadata service, then:
* any part that contains a launch index other than the current launch index
of this instance will be ignored.
* any part that does not contain a launch index will be considered as being
for this instance.
If there is no such header, or launch_index is not available in the metadata
service, then no filtering is done.
LP: #1023177
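The rule above amounts to a small predicate; a sketch assuming parts parsed with the stdlib email package:

    def part_wanted(part, instance_launch_index):
        if instance_launch_index is None:
            return True   # no launch index in metadata: no filtering
        idx = part.get('Launch-Index')
        if idx is None:
            return True   # untagged parts apply to this instance
        return int(idx) == int(instance_launch_index)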
|
|
In searching for the metadata service, require 'instance-data' to be at the
top-level domain. Previously, any misconfigured 'search' entry in
/etc/resolv.conf could result in unintended use of a metadata server.
LP: #1040200
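A sketch of the stricter lookup: resolving the name as fully qualified (note the trailing dot) keeps resolv.conf 'search' suffixes from being appended:

    import socket

    def metadata_server_reachable():
        try:
            socket.getaddrinfo('instance-data.', None)
            return True
        except socket.gaierror:
            return False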
|
|
variable has a little more meaning: by default, look in
metadata for 'launch-index', and have EC2 look for a
different variable instead (thus allowing more datasources to just work).
|
|
userdata based on a launch-index (or leave userdata
alone if none is provided by the datasource). This
works by doing the following:
1. Adjust the userdata processor to attempt to
inject a 'Launch-Index' header into the message
headers (by either taking a header that already exists
or by looking into the payload to see if one exists
there).
2. Adjust the get_userdata datasource function to apply a filter
on the returned userdata (off by default) that uses the
datasource's get_launch_index value to restrict the 'final'
message used in consuming user data (the behavior is unchanged
when no index is present); see the sketch below.
3. Further down the line, processes that use the resultant
userdata will only see the parts for their own launch
index (i.e. cloud-config will be restricted automatically,
and so on) and are otherwise unaffected, although they can now
ask the cloud object or the datasource for its launch index
via the new datasource method.
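A sketch of step 2, assuming an email.message.Message result and illustrative method names on the datasource:

    def get_userdata(ds, apply_filter=False):
        msg = ds.process_userdata()      # assumed processing entry point
        launch_index = ds.get_launch_index()
        if not apply_filter or launch_index is None or not msg.is_multipart():
            return msg
        kept = [part for part in msg.get_payload()
                if part.get('Launch-Index') in (None, str(launch_index))]
        msg.set_payload(kept)
        return msg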
|
|
There are several changes here.
* Datasource now has an 'availability_zone' getter.
* Datasource gains a convenience 'get_package_mirror_info' that calls
the configured distro and passes it the availability zone.
* distro has a get_package_mirror_info method.
* get_package_mirror_info returns a dict of name:mirror;
this is to facilitate use of 'security' and 'primary' archives.
* this supports searching based on templates; any template
that references undefined values is skipped (see the sketch below).
These templates can contain 'availability_zone' (LP: #1037727).
* distro mirrors can be arch specific (LP: #1028501).
* rename_apt_lists supports the 'mirror_info' rather than a single mirror.
* generate_sources_list supports mirror_info, and as a result the
Ubuntu mirrors reference '$security' rather than 'security' (LP: #1006963).
* remove the DataSourceEc2-specific mirror selection and instead
rely on the above filtering, plus the fact that 'ec2_region' is only
defined if the availability zone looks like an EC2 AZ.
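A sketch of the template search, using string.Template as a stand-in for the real templating:

    from string import Template

    def search_mirrors(templates, subst):
        for tmpl in templates:
            try:
                yield Template(tmpl).substitute(subst)
            except KeyError:
                # Template referenced an undefined value; skip it.
                continue

    # e.g. subst={'ec2_region': 'us-east-1'} skips any template that
    # also needs 'availability_zone'.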
|
|
On my system (quantal), 'make pylint' no longer complains.
|
|
This returns the check for an archive mirror in DataSourceEc2 to
checking only by DNS resolution. The 'rework' branch had made the check
wait and time out on attempts to reach the mirror, which resulted
in 120 seconds of waiting before failure.
For now, just go back to the old behavior of checking by DNS.
|
|
2. Adjust the comment on the sources list from depends.
3. When writing /etc/timezone, add a header that says it was created by cloud-init.
|
|
translation to python 3.
|
|
new structure, using unified util functions and logging, and eliminating code and calls.
|
|