|
This allows the test_cli test to be more sane.
|
|
currently does not work in lxc
https://github.com/lxc/lxd/issues/2063
|
|
bigger things here:
* fix the checking for stop_files.
the check for no-net actually checked the size of the file,
while the implementation was just touching it, so the empty file
never would have been found. no-net is valid only in upstart anyway.
do not stop early on presence of the obj_pkl but check it.
this is required since we write the obj_pkl on exit when
local mode finds a datasource that will be run in network mode.
* use 'mode' rather than checking args.local (sketched below).
set mode to be sources.DSMODE_NETWORK or sources.DSMODE_LOCAL
for easier / more consistent checking.
* log exit paths.
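A minimal sketch of the 'mode' handling, assuming the DSMODE_* constants in
cloudinit.sources and a stand-in for the --local flag (illustrative only, not
the actual main.py change):

    from cloudinit import sources

    def desired_mode(local):
        # 'local' stands in for the value previously read from args.local
        return sources.DSMODE_LOCAL if local else sources.DSMODE_NETWORK

    mode = desired_mode(local=False)
    assert mode == sources.DSMODE_NETWORK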
|
|
|
|
|
|
== background ==
DataSource Mode (dsmode) is present in many datasources in cloud-init.
dsmode was originally added to cloud-init to specify when this datasource
should be 'realized'.
cloud-init has 4 stages of boot.
a.) cloud-init --local. network is guaranteed not present.
b.) cloud-init (--network). network is guaranteed present.
c.) cloud-config
d.) cloud-init final
'init_modules' [1] are run "as early as possible" and, as such, are executed
in either 'a' or 'b' based on the datasource. However, executing them means
that user-data has been fully consumed. User-data and vendor-data may have
'#include http://...' which then rely on the network being present. boothooks
are an example of the things run in init_modules.
The 'dsmode' was a way for a user to indicate that init_modules
should run at 'a' (dsmode=local) or 'b' (dsmode=net) directly.
Things were further confused when a datasource could provide networking
configuration. Then, we needed to apply the networking config at 'a'
but if the user had provided boothooks that expected networking, then the
init_modules would need to be executed at 'b'. The config drive datasource
hacks its way through this and applies networking if *it* detects it is
a new instance.
== Suggested Change ==
The plan is to
1. incorporate 'dsmode' into DataSource superclass
2. make all existing datasources default to network
3. apply any networking configuration from a datasource on first boot only.
apply_networking will always rename network devices when it runs
(for bug 1579130).
4. run init_modules in the cloud-init (network) time frame unless the
datasource is 'local'.
5. Datasources can provide a 'first_boot' method that will be called when
a new instance_id is found. This will allow the config drive's write_files
to be applied once.
Overall, this will very much simplify things. We'll no longer have
two sources like DataSourceNoCloud and DataSourceNoCloudNet, but just
one source with a dsmode (a minimal sketch of points 1 and 2 follows).
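A minimal sketch of points 1 and 2, assuming a dsmode class attribute on the
DataSource superclass (illustrative only, not the actual cloud-init code):

    DSMODE_LOCAL = "local"
    DSMODE_NETWORK = "net"

    class DataSource(object):
        # point 2: existing datasources default to network
        dsmode = DSMODE_NETWORK

    class DataSourceNoCloud(DataSource):
        # a single source; a seed or user setting may flip dsmode to local
        pass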
== Concerns ==
Some things have odd reliance on dsmode. For example, OpenNebula's get_hostname
uses it to determine if it should do a lookup of an ip address.
== Bugs to fix here ==
http://pad.lv/1577982 ConfigDrive: cloud-init fails to configure network from network_data.json
http://pad.lv/1579130 need to support systemd.link renaming of devices in container
http://pad.lv/1577844 Drop unnecessary blocking of all net udev rules
|
|
|
|
This attempts to only apply the networking once per instance
by doing so only if the datasource was restored from disk.
This will work by default for datasources with a functioning
check_instance_id or if the user has set manual_cache_clean to true.
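A minimal sketch of the 'once per instance' decision, assuming a boolean that
is true only on the first boot of an instance (not the actual stages code):

    def should_apply_network_config(is_new_instance):
        # apply datasource-provided networking only on the first boot of an
        # instance; later boots reuse the configuration already written
        return is_new_instance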
|
|
3 things here:
a.) do not raise an exception, only warn, when trying to apply a network
config for a distro that does not have an implementation (see the
sketch after this list). This is important since debian/ubuntu is the
only one *with* an implementation at the moment
b.) apply network config in 'cloud-init --local' even if there is
no datasource found.
c.) do not write 70-persistent-net.rules.
the code was writing both 70-persistent-net.rules and
/etc/systemd/network/50-cloud-init-*.link files,
which would just be confusing.
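A hypothetical sketch of (a); the distro method name, exception type, and
logging details here are assumptions, not the exact change:

    import logging

    LOG = logging.getLogger(__name__)

    def try_apply_network_config(distro, netcfg):
        # warn rather than raise for distros without an implementation
        try:
            distro.apply_network_config(netcfg)
        except NotImplementedError:
            LOG.warning("distro %s does not implement network config; "
                        "network config was not applied", distro)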
|
|
This fixes a bug where modules mode was not passing an 'existing'
flag to fetch. fetch's 'existing' argument defaults to 'check'.
The DataSourceNoCloud, when fed with data from a disk, will return
False to check() as it is not a guaranteed hit.
That caused fetch to go looking for a new datasource.
That would have actually worked, but modules and single create
the Init with deps=[]. So it went looking for Datasources
that matched those deps, and only found DataSourceNone.
I'm going to keep having modules and single specify deps=[]
as that will prevent them from going looking for a DS and
making things worse.
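A minimal sketch of the resulting call pattern in modules/single mode, assuming
the cloudinit.stages names mentioned above:

    from cloudinit import stages

    # keep deps=[] so no new datasource search happens, and trust the
    # datasource that was cached by the earlier init stages
    init = stages.Init(ds_deps=[])
    init.fetch(existing="trust")  # instead of the default existing="check"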
|
|
there is no data source that has a populated network_config()
so at this point this doesn't do anything.
|
|
|
|
This adds a check in cloud-init to see if the existing (cached)
datasource is still valid. It relies on support from the Datasource
to implement 'check_instance_id'. That method should quickly determine
(if possible) if the instance id found in the datasource is still valid.
This means that we can still notice new instance ids without
depending on a network datasource on every boot.
I've also implemented check_instance_id for the superclass and for
3 classes:
DataSourceAzure (check dmi data)
DataSourceOpenstack (check dmi data)
DataSourceNocloud (check the seeded data or kernel command line)
LP: #1553815
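A hypothetical example of such a check for a DMI-backed datasource (a sketch,
not the exact Azure/OpenStack code; util.read_dmi_data is assumed available):

    from cloudinit import sources, util

    class DataSourceExample(sources.DataSource):
        def check_instance_id(self, sys_cfg):
            # quick, local-only check: does the platform-reported DMI uuid
            # still match the instance-id cached in this datasource?
            return util.read_dmi_data('system-uuid') == \
                self.metadata.get('instance-id')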
|
|
|
|
atomic_write_file just does less and is easily utilized
for the same purpose that atomic_write_json served.
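For illustration, a minimal atomic write of the kind described (write to a
temporary file beside the target, then rename); not necessarily the exact
helper:

    import os
    import tempfile

    def atomic_write_file(path, content, mode="w"):
        tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path) or ".",
                                         delete=False, mode=mode)
        try:
            tf.write(content)
            tf.close()
            os.rename(tf.name, path)  # atomic within a filesystem on POSIX
        except Exception:
            os.unlink(tf.name)
            raise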
|
|
|
|
|
|
|
|
|
|
|
|
|
|
change ReportStack to ReportEventStack
change the default result of a ReportEventStack to status.SUCCESS instead of None
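A hypothetical usage example; the import path and keyword arguments are
assumptions about the reporting module:

    from cloudinit.reporting import events

    # a start event is emitted on enter; on exit the stack reports its
    # result, which now defaults to status.SUCCESS unless changed inside
    with events.ReportEventStack(name="init-local",
                                 description="searching for local datasource"):
        pass  # the work being reported on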
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
LP: #1424277
|
|
|
|
Fixes bug 1424277.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
vendordata.
Vendordata is a datasource-provided userdata-like blob that is parsed
similarly to userdata, except at the user's pleasure.
cloudinit/config/cc_scripts_vendor.py: added vendor script cloud config
cloudinit/config/cc_vendor_scripts_per_boot.py: added vendor per boot
cloud config
cloudinit/config/cc_vendor_scripts_per_instance.py: added vendor per
instance cloud config
cloudinit/config/cc_vendor_scripts_per_once.py: added per once vendor
cloud config script
doc/examples/cloud-config-vendor-data.txt: documentation of vendor-data
examples
doc/vendordata.txt: documentation of vendordata for vendors
(RENAMED) tests/unittests/test_userdata.py => tests/unittests/test_data.py:
userdata test cases are now expanded to confirm superiority over vendor
data.
bin/cloud-init: change instances of 'consume_userdata' to 'consume_data'
cloudinit/handlers/cloud_config.py: Added vendor script handling to default
cloud-config modules
cloudinit/handlers/shell_script.py: Added ability to change the path key to
support vendor provided 'vendor-scripts'. Defaults to 'script'.
cloudinit/helpers.py:
- Changed ConfigMerger to include handling of vendordata.
- Changed helpers to include paths for vendordata.
cloudinit/sources/__init__.py: Added functions for helping vendordata
- get_vendordata_raw(): returns vendordata unprocessed
- get_vendordata(): returns vendordata through userdata processor
- has_vendordata(): indicator if vendordata is present
- consume_vendordata(): datasource directive for indicating explicit
user approval of vendordata consumption. Defaults to 'false'
cloudinit/stages.py: Re-jiggered for handling of vendordata
- _initial_subdirs(): added vendor script definition
- update(): added self._store_vendordata()
- [ADDED] _store_vendordata(): store vendordata
- _get_default_handlers(): modified to allow for filtering
which handlers will run against vendordata
- [ADDED] _do_handlers(): moved logic from consume_userdata
to _do_handlers(). This allows _consume_vendordata() and
_consume_userdata() to use the same code path.
- [RENAMED] consume_userdata() to _consume_userdata()
- [ADDED] _consume_vendordata() for handling vendordata
- run after userdata to get user cloud-config
- uses ConfigMerger to get the configuration from the
instance perspective about whether or not to use
vendordata
- [ADDED] consume_data() to call _consume_{user,vendor}data
cloudinit/util.py:
- [ADDED] get_nested_option_as_list() used by cc_vendor* for
getting a nested value from a dict and returning it as a list
- runparts(): added 'exe_prefix' for running exe with a prefix,
used by cc_vendor*
config/cloud.cfg: Added vendor script execution as default
tests/unittests/test_runs/test_merge_run.py: changed consume_userdata() to
consume_data()
tests/unittests/test_runs/test_simple_run.py: changed consume_userdata() to
consume_data()
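A minimal sketch of the consume_data() split described above (illustrative,
not the actual stages.py):

    class Init(object):
        # sketch only; the real class carries configuration, paths, etc.
        def _consume_userdata(self, frequency):
            pass  # run the handlers over user-data

        def _consume_vendordata(self, frequency):
            pass  # run the (filtered) handlers over vendor-data, if allowed

        def consume_data(self, frequency):
            # user-data first, so its cloud-config can decide whether
            # vendor-data should be consumed at all
            self._consume_userdata(frequency)
            self._consume_vendordata(frequency)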
|
|
reading /proc/uptime is going to be slower, and there is no reason to do it
for most things. Better to only do it when you suspect a need for it.
|
|
The reason for this is that, for more and more things, I wanted to be able
to see how long they took. This puts that time logic into a single place. It
also supports (by default) reading from /proc/uptime as the timing mechanism.
While that is almost certainly slower than time.time(), it does give
millisecond granularity and is not affected by 'ntpdate' having
run in between the two events.
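A minimal sketch of the timing source described (first field of /proc/uptime,
falling back to time.time()); not the exact util code:

    import time

    def uptime():
        # /proc/uptime gives seconds since boot with sub-second resolution
        # and is unaffected by wall-clock jumps (e.g. ntpdate) between reads
        try:
            with open("/proc/uptime") as f:
                return float(f.read().split()[0])
        except (IOError, OSError, ValueError):
            return time.time()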
|
|
This most commonly occurs if a user-data script does '/sbin/poweroff'
where syslog was being used. Once poweroff is invoked, syslog gets killed
and logging would start to show stack traces.
This change generally tries to continue working instead, but logs to stderr.
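As an illustration only (not the actual change), a handler that falls back to
stderr instead of printing a stack trace once syslog has gone away:

    import logging.handlers
    import sys

    class FallbackSysLogHandler(logging.handlers.SysLogHandler):
        def handleError(self, record):
            # hypothetical: syslog died (e.g. killed by /sbin/poweroff), so
            # write the record to stderr instead of dumping a traceback
            try:
                sys.stderr.write(self.format(record) + "\n")
            except Exception:
                pass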
|
|
is to patch the functionality before it gets reimported.
|
|
handle those signals more gracefully and
with better messaging than what comes built in.
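A minimal sketch of such signal handling (names are illustrative, not the
actual signal handler module):

    import signal
    import sys

    def _handle_signal(signum, frame):
        # emit a clear message instead of the terse default, then exit
        sys.stderr.write("received signal %s, exiting\n" % signum)
        sys.exit(1)

    for sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(sig, _handle_signal)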
|