Adding the apt helper routines to CloudConfig. Then, from cc_puppet and
cc_apt_update_upgrade, make use of:
    update_package_sources()
    install_packages(pkglist)
I'm not really terribly happy with this location for them. Their presence
here is really only because of apt-update's use of
'run-once-per-instance'.
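A rough sketch of what these helpers could look like on CloudConfig (the
method names come from the message above; the exact apt-get invocations and
environment handling are my assumptions, not the actual implementation):

    import os
    import subprocess

    # Non-interactive apt environment (assumed; the real helpers may differ).
    APT_ENV = dict(os.environ, DEBIAN_FRONTEND='noninteractive')


    class CloudConfig(object):
        def update_package_sources(self):
            # Refresh the apt package lists, roughly 'apt-get update'.
            subprocess.check_call(['apt-get', '--quiet', 'update'],
                                  env=APT_ENV)

        def install_packages(self, pkglist):
            # Install the requested packages without prompting.
            subprocess.check_call(
                ['apt-get', '--quiet', '--assume-yes', 'install'] + list(pkglist),
                env=APT_ENV)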
|
|
This method allows the caller to easily run something
"once per instance". Its location in CloudConfig rather than
'util' is really only because it needs access to cloudinit.get_ipath_cur
to get the 'data' path.
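A minimal sketch of such a helper, assuming it drops a marker file under the
per-instance 'data' path returned by cloudinit.get_ipath_cur (the helper and
marker names here are illustrative, not the actual implementation):

    import os

    import cloudinit


    def once_per_instance(name, func, *args):
        # Run func(*args) only if no marker for 'name' exists yet for
        # this instance; otherwise skip it.
        marker = os.path.join(cloudinit.get_ipath_cur('data'), "%s.once" % name)
        if os.path.exists(marker):
            return False
        func(*args)
        # Record that this has already run for the current instance.
        open(marker, "w").close()
        return True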
|
|
This lowers the default retries from 100 to 30 (1050 seconds to 105 seconds)
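Those totals are consistent with a back-off in which the sleep grows by one
second every five tries; that schedule is my assumption, but it reproduces
the quoted numbers (30 tries wait at most 105 seconds, 100 tries at most
1050 seconds):

    import time


    def wait_for(check, retries=30):
        # Sleep (attempt // 5) + 1 seconds between attempts; summed over
        # 30 attempts that is 105 seconds, and over 100 attempts 1050.
        for attempt in range(retries):
            if check():
                return True
            time.sleep(attempt // 5 + 1)
        return False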
|
|
In order to be able to configure a DataSource via system config
(i.e., what is in /etc/cloud/cloud.cfg), we pass that config into the
DataSource class.
The DataSource parent class will set up the 'ds_cfg' member based
on the subclass name. So, DataSourceEc2 will get:
    self.ds_cfg = sys_cfg['datasource']['Ec2']
populated for it.
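A sketch of how the parent class might populate 'ds_cfg' from the subclass
name (the lookup shown matches the example above; the helper details are
assumptions):

    class DataSource(object):
        def __init__(self, sys_cfg):
            self.sys_cfg = sys_cfg
            # Strip the 'DataSource' prefix, so DataSourceEc2 looks up
            # sys_cfg['datasource']['Ec2'].
            name = self.__class__.__name__
            if name.startswith("DataSource"):
                name = name[len("DataSource"):]
            self.ds_cfg = sys_cfg.get('datasource', {}).get(name, {})


    class DataSourceEc2(DataSource):
        pass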
|
|
This option allows the user to specify manual cleaning of the
/var/lib/cloud/instance/ link, for a data source that might not be present on
every boot.
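The message does not name the option, so the config key below
('manual_clean') is purely a placeholder; the sketch only illustrates the
described behaviour of leaving the link for the administrator to remove by
hand:

    import os

    INSTANCE_LINK = "/var/lib/cloud/instance"


    def maybe_clean_instance_link(cfg):
        # 'manual_clean' is a placeholder, not the real option name.
        if cfg.get('manual_clean', False):
            # Leave the existing link alone; it gets cleaned manually
            # when the data source really changes.
            return
        if os.path.islink(INSTANCE_LINK):
            os.unlink(INSTANCE_LINK)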
|
|
Previously, the log was getting set to 'None' in the DataSource collections,
so 'log.debug' would throw an error. I think it is proper to pull in
the base cloudinit's log.
|
|
passing '-c /dev/null' (no cache file) seems to work fine.
|
|
mount was taking 18 seconds when there was no media in the drive on a kvm
guest. A simple read should be about the quickest way we can fail. The only
other thing to try would be to use cdrom.h and an ioctl for CDROM_DRIVE_STATUS.
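A sketch of the quicker check, assuming we simply try to read the first block
of the device before attempting to mount it (the CDROM_DRIVE_STATUS ioctl
mentioned above would be the more precise alternative):

    def device_has_media(device):
        # With no media present this read fails almost immediately,
        # instead of waiting ~18 seconds for mount to give up.
        try:
            with open(device, "rb") as fp:
                fp.read(512)
            return True
        except IOError:
            return False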
|
|
A bug caused user scripts to get stored in
/var/lib/cloud/instance/scripts/<instance-id>/
which meant they would not get run by 'run-user-scripts'.
LP: #711480
|
|
LP: #709946
|
|
After adding the 'log' element to the DataSource class, pickling would
fail with:
    TypeError: can't pickle file objects
Instead of having each object hold its own log reference, use the class-level
'DataSource.log' and have that set by cloudinit.
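A sketch of that arrangement, assuming the logger is a class attribute (so it
is not part of any pickled instance state) and cloudinit assigns it once
logging is configured:

    class DataSource(object):
        # Class-level logger, set by cloudinit after logging is set up;
        # instances carry no file-backed log object and stay picklable.
        log = None


    # in cloudinit, once logging is configured (illustrative):
    #     DataSource.log = logging.getLogger("cloudinit")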
|
|
Note: by default, nothing is done. No users will have passwords
set, nor will sshd's configuration be changed unless cloud-config
is modified. Additionally, by default, users whose passwords are
set have their password expired, forcing a change.
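For the expiry part, one straightforward approach is passwd's expire flag; a
minimal sketch (the helper name and the use of 'passwd -e' are my
assumptions, not necessarily what the module does):

    import subprocess


    def expire_password(user):
        # Mark the password expired so the user must change it at next login.
        subprocess.check_call(['passwd', '-e', user])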
|
|
add 'datasource' file to instance dir
|
|
LP: #709946
|
|
Change /var/lib/cloud/instance/user-data-raw.txt.i
to user-data.txt.i
|
|
Given rsa_private_key, rsa_public_key is not needed in the 'ssh'
element of cloud-config. Instead, it can be generated with 'ssh-keygen -yf'.
LP: #648905
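A sketch of deriving the public key once the private key has been written out
(the function name is illustrative; 'ssh-keygen -y -f <file>' prints the
matching public key on stdout):

    import subprocess


    def public_from_private(private_key_file):
        # ssh-keygen -y -f <keyfile> emits the public key for that private key.
        proc = subprocess.Popen(['ssh-keygen', '-y', '-f', private_key_file],
                                stdout=subprocess.PIPE)
        out, _err = proc.communicate()
        return out.strip()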
|
|
LP: #645458
|
|
Rework of DataSource loading.
The DataSources that are loaded are now controlled entirely via the
'datasource_list' configuration entry, like:
    datasource_list: [ "NoCloud", "OVF", "Ec2" ]
Each item in that list is a "DataSourceCollection". For each item
in the list, cloudinit will attempt to load:
    cloudinit.DataSource<item>
and, failing that,
    DataSource<item>
The module is required to have a method named 'get_datasource_list'
that takes a single list of "dependencies" and returns a list of
Python classes inside the collection that can run needing only those
dependencies.
The dependencies are defined in DataSource.py. Currently:
    DEP_FILESYSTEM = "FILESYSTEM"
    DEP_NETWORK = "NETWORK"
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM], DataSourceOVF returns a single-item list with a
reference to the 'DataSourceOVF' class.
When 'get_datasource_list' is called for the DataSourceOVF module with
[DEP_FILESYSTEM, DEP_NETWORK], it returns a single-item list with a
reference to 'DataSourceOVFNet'.
cloudinit will then instantiate the class and call its 'get_data' method.
If get_data returns 'True', that class is selected as the DataSource.
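A sketch of a DataSource module following this contract, using the OVF
example from the message above (class bodies elided; the exact-match
selection mirrors the behaviour described):

    # DataSourceOVF.py (sketch)
    from DataSource import DataSource, DEP_FILESYSTEM, DEP_NETWORK


    class DataSourceOVF(DataSource):
        def get_data(self):
            return False  # placeholder: look for OVF data on the local filesystem


    class DataSourceOVFNet(DataSourceOVF):
        def get_data(self):
            return False  # placeholder: look for OVF data over the network


    # each entry: (class, dependencies that class needs in order to run)
    datasources = [
        (DataSourceOVF, (DEP_FILESYSTEM,)),
        (DataSourceOVFNet, (DEP_FILESYSTEM, DEP_NETWORK)),
    ]


    def get_datasource_list(depends):
        # Return classes whose dependency set matches 'depends':
        #   [DEP_FILESYSTEM]              -> [DataSourceOVF]
        #   [DEP_FILESYSTEM, DEP_NETWORK] -> [DataSourceOVFNet]
        return [cls for (cls, deps) in datasources
                if set(deps) == set(depends)]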
|
|
Every occurrence of
    except Exception, e:
was changed to
    except Exception as e: