Age | Commit message | Author
|
importing, constant usage.
1. Move all datasources to a new sources directory.
2. Rename some files to be more consistent with Python file/module naming.
|
|
Also, add in the headers_cb, which will be required for OAuth.
|
|
This copyright change reflects previous changes that Juerg made for pylint and
pep8 cleanups.
From: Juerg Haefliger <juerg.haefliger@hp.com>
Date: Mon, 16 Jan 2012 10:45:12 +0100
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
This pulls in the named patch for LP: #914739 with a few other changes.
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
single line)
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
From: Juerg Haefliger <juerg.haefliger@hp.com>
|
|
This increases the timeout for a metadata request to something that should
be easily satisfiable (50 seconds), while hopefully still keeping
the no-metadata-service case in mind.
Previously, there was a small timeout, and many retries (30) would be done.
Now:
- larger timeout (50 seconds) by default
- retry until a given 'max_wait' is reached (120 seconds by default)
The end result is that if we're hitting the timeout, only a couple of
attempts will end up being made. But if the requests are coming back
quickly, then we'll still make several attempts.
There is one EC2DataSource config change: 'retries' is no longer used;
instead, 'max_wait' indicates generally how long it should try to find a
metadata service.
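A minimal sketch of that wait loop (hypothetical helper name and plain
urllib2 usage; the real code differs):

    import time
    import urllib2  # Python 2, as cloud-init used at the time

    def wait_for_metadata(url, max_wait=120, timeout=50):
        """Try 'url' repeatedly until it answers or 'max_wait' elapses."""
        start = time.time()
        while time.time() - start < max_wait:
            try:
                return urllib2.urlopen(url, timeout=timeout).read()
            except Exception:
                time.sleep(1)  # short pause, then try again
        return None  # gave up: no metadata service within max_wait

With a 50-second timeout, a hung service yields only two or three attempts
inside the 120-second window, while fast failures still get many retries.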
|
|
In addition to catching a URL timeout, we also need to catch and
retry on a socket timeout. Apparently urllib2 doesn't catch this and
re-raise it as a URLError.
LP: #869492
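Roughly, the distinction looks like this (an illustrative sketch, not the
actual retry code):

    import socket
    import urllib2

    def fetch_once(url, timeout=50):
        try:
            return urllib2.urlopen(url, timeout=timeout).read()
        except urllib2.URLError:
            return None  # urllib2's own wrapper for many network failures
        except socket.timeout:
            return None  # can escape bare on read timeouts; catch it too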
|
|
LP: #855965
|
|
t1.micro instances do not have an ephemeral0 disk, but the metadata service
will have an entry for one.
i386 t1.micro:
  'block-device-mapping': {'ami': '/dev/sda1',
                           'ephemeral0': '/dev/sda2',
                           'root': '/dev/sda1'},
amd64 t1.micro:
  'block-device-mapping': {'ami': '/dev/sda1',
                           'ephemeral0': '/dev/sdb',
                           'root': '/dev/sda1'},
LP: #744019
|
|
Now, if a Eucalyptus install is in STATIC or SYSTEM mode,
the metadata service can still be used. In order to do that,
the user must configure their DNS so that 'instance-data'
resolves to the cloud controller.
Thanks to Kieran Evans.
LP: #761847
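For example, the DNS requirement could be satisfied with an /etc/hosts
entry like this (the controller address 192.0.2.10 is a placeholder):

    # make 'instance-data' resolve to the Eucalyptus cloud controller
    192.0.2.10   instance-data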
|
|
just to avoid unnecessary changes (and confusion in 'annotate')
|
|
removed extra args from string format
|
|
both http://169.254.169.254 and http://instance-data:8773 for the metadata service.
LP: #761847
|
|
both http://169.254.169.254 and http://instance-data:8773 for the metadata service.
LP: #761847
|
|
This lowers the default retries from 100 to 30 (1050 seconds to 105 seconds)
|
|
Previously, logging was getting 'None' set in the DataSource collections.
Thus, 'log.debug' would throw an error. I think it is proper to pull in
the base cloudinit's log.
|
|
After adding the 'log' element to the DataSource class, pickling would
fail with:
  TypeError: can't pickle file objects
Instead of giving each object its own log reference, use a shared
'DataSource.log' and have that set by cloudinit.
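A minimal sketch of the pattern (illustrative; the real class carries more
state):

    import pickle

    class DataSource(object):
        log = None  # shared class attribute; set once by cloudinit

        def __init__(self, metadata):
            self.metadata = metadata  # only instance state gets pickled

    # cloudinit would do something like:
    #   DataSource.log = logging.getLogger('cloudinit')
    ds = DataSource({'instance-id': 'i-abcdef'})
    pickle.dumps(ds)  # succeeds: the logger is not in the instance __dict__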
|
|
The DataSources that are loaded are now controlled entirely via the
'datasource_list' configuration entry, like:
  datasource_list: [ "NoCloud", "OVF", "Ec2" ]
Each item in that list is a "DataSourceCollection". For each item
in the list, cloudinit will attempt to load:
  cloudinit.DataSource<item>
and, failing that,
  DataSource<item>
The module is required to have a method named 'get_datasource_list'
that takes a single list of "dependencies" and returns
a list of Python classes inside the collection that can run needing
only those dependencies.
The dependencies are defined in DataSource.py. Currently:
  DEP_FILESYSTEM = "FILESYSTEM"
  DEP_NETWORK = "NETWORK"
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM], DataSourceOVF returns a single-item list with a
reference to the 'DataSourceOVF' class.
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM, DEP_NETWORK], it returns a single-item list
with a reference to 'DataSourceOVFNet'.
cloudinit will then instantiate the class and call its 'get_data' method.
If 'get_data' returns True, that class is selected as the datasource.
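A sketch of how a collection module might satisfy that contract (the
'_CANDIDATES' table and stub classes are illustrative):

    DEP_FILESYSTEM = "FILESYSTEM"
    DEP_NETWORK = "NETWORK"

    class DataSourceOVF(object):
        def get_data(self):
            return False  # stub; the real class reads the OVF data

    class DataSourceOVFNet(DataSourceOVF):
        pass

    # each entry: (class, dependencies that class needs to run)
    _CANDIDATES = [
        (DataSourceOVF, (DEP_FILESYSTEM,)),
        (DataSourceOVFNet, (DEP_FILESYSTEM, DEP_NETWORK)),
    ]

    def get_datasource_list(depends):
        """Return the classes runnable with exactly these dependencies."""
        return [cls for cls, deps in _CANDIDATES
                if set(deps) == set(depends)]

Per the description above, the match is against the given dependency set,
so [DEP_FILESYSTEM] yields DataSourceOVF and [DEP_FILESYSTEM, DEP_NETWORK]
yields DataSourceOVFNet.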
|
|
Everywhere that there occurred:
  except Exception, e:
it was changed to:
  except Exception as e:
|
|
- cloud_config and scripts now live in the instance directory
- cachedir is now more correctly named 'seeddir'
|
|
Previously, the 'get_locale()' method of DataSourceEc2 would select
a default locale based on the availability zone the instance was
running in.
I generally don't like that, as:
a.) there are loads of locales other than en_US and en_GB (the only two
    that were being used)
b.) either one is almost certainly not the user's preferred locale.
Just because I launch an instance in eu-west-1 doesn't mean I prefer en_GB.
|
|
VPC instances cannot reach other hosts in EC2 (such as the archives).
In this case, use the default mirror instead.
LP: #615545
|
|
The logic behind returning a device even if it is not present is that
it *could* be present later, or after a stop and restart. Additionally,
this gives the caller more information to differentiate between
"device did not exist" and "device was not present in metadata service".
|
|
using read_optional_seed in DataSourceEc2 and DataSourceNoCloud.
Change parse_cmdline_data to fill a dictionary supplied by the
caller. It now returns strictly True or False based on whether
or not the datasource was specified on the kernel command line.
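A sketch of that calling convention (the token format parsed here is
invented for illustration; only the fill-the-supplied-dict, return-
True/False shape comes from the text above):

    def parse_cmdline_data(ds_id, fill, cmdline):
        """Fill 'fill' with seed data found for 'ds_id' in 'cmdline'.

        Returns True only if 'ds_id' was actually specified on the
        kernel command line, False otherwise.
        """
        for tok in cmdline.split():
            if tok == ds_id or tok.startswith(ds_id + ';'):
                for part in tok.split(';')[1:]:
                    if '=' in part:
                        key, _, val = part.partition('=')
                        fill[key] = val
                return True
        return False

    seeded = {}
    cmdline = 'root=/dev/sda ds=nocloud;seedfrom=/media/'
    if parse_cmdline_data('ds=nocloud', seeded, cmdline):
        print(seeded)  # {'seedfrom': '/media/'}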
|
|
The new classes 'DataSourceNoCloud' and 'DataSourceNoCloudNet'
implement a way to get data from the filesystem, or (very minimal)
data from the kernel command line. This allows the user to seed data to
these sources.
There are now two cloud-init jobs: 'cloud-init-local', which runs on
  mounted MOUNTPOINT=/
and 'cloud-init', which runs on
  start on (mounted MOUNTPOINT=/ and net-device-up IFACE=eth0 and
            stopped cloud-init-local)
The idea is that cloud-init-local can actually function without network
(a sketch of the second job follows this entry).
The last thing in this commit is 'uncloud-init'.
This tool can be invoked as 'init=/usr/lib/cloud-init/uncloud-init'.
It will "uncloudify" things in the image, generally making it easier
to use for a simpler environment, and then it will exec /sbin/init.
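For illustration, the second job expressed as an upstart file (the path
and exec line are assumptions; only the start condition comes from this
commit):

    # /etc/init/cloud-init.conf (illustrative)
    description "cloud-init"
    start on (mounted MOUNTPOINT=/ and net-device-up IFACE=eth0
              and stopped cloud-init-local)
    task
    # exact command is an assumption
    exec /usr/bin/cloud-init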
|
|
Device names presented in the metadata service may not be what the kernel
has named them. This can happen for more than one reason, for example:
- the device is virtio, but the metadata names it 'sd'
- the device is xvdX, but the metadata names it sd
Those are the two situations covered here. More complex, and not covered,
is possibly:
- the metadata service named the device 'sda1', but it should actually be 'vdb1'
LP: #611137
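A sketch of the remapping idea (hypothetical helper name; the actual
commit's logic may differ):

    import os

    def map_metadata_device(name):
        """Map a metadata device name like 'sdb' to the kernel's name."""
        if os.path.exists('/dev/' + name):
            return '/dev/' + name
        if name.startswith('sd'):
            # kernel may have the device as virtio ('vd') or xen ('xvd')
            for prefix in ('vd', 'xvd'):
                candidate = '/dev/' + prefix + name[2:]
                if os.path.exists(candidate):
                    return candidate
        return None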
|
|
If user data is of type text/cloud-boothook, or begins with
#cloud-boothook, then it is assumed to be code to be executed.
Boothooks are a very simple format: a one-line header
('#cloud-boothook\n') followed by an executable payload.
The executable payload is written to a file, and that file is executed
at the time it is read. The file is left in
  /var/lib/cloud/data/boothooks
There is no "first-time-only" protection. If running only once is
desired, the boothook must handle that itself.
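An illustrative boothook payload (the log path in the echo line is an
example):

    #cloud-boothook
    #!/bin/sh
    # executed on every boot when the user data is read; no run-once guard
    echo "boothook ran at $(date)" >> /var/log/boothook.log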
|
|
The logging infrastructure in cloudinit:
- uses Python logging
- allows user-supplied config in logging.config.fileConfig format to be
  supplied in /etc/cloud/cloud.cfg or in cloud_config via user data
- by default tries to use syslog; if that is not available, writes directly
  to /var/log/cloud-init.log (syslog will not be available yet when
  cloud-init runs)
- when using syslog, the doc/21-cloudinit.conf file provides an rsyslogd
  config to be placed in /etc/rsyslog.d/ that will file [CLOUDINIT] messages
  to /var/log/cloud-init.log
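For reference, a minimal config in logging.config.fileConfig format that
routes everything to /var/log/cloud-init.log (illustrative values, not the
shipped defaults):

    [loggers]
    keys=root

    [handlers]
    keys=file

    [formatters]
    keys=simple

    [logger_root]
    level=DEBUG
    handlers=file

    [handler_file]
    class=FileHandler
    level=DEBUG
    formatter=simple
    args=('/var/log/cloud-init.log',)

    [formatter_simple]
    format=[CLOUDINIT] %(asctime)s %(levelname)s %(message)s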
|
|