Reading /proc/uptime is going to be slower, and there is no reason to do it for most
things. Better to only do it when you suspect a need for it.
|
|
The reason for this is that there were more and more things that I wanted to be
able to time. This puts that timing logic into a single place. It
also supports (by default) reading from /proc/uptime as the timing mechanism.
While that is almost certainly slower than time.time(), it does give
millisecond granularity and is not affected by 'ntpdate' having
run in between the two events.
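
As a minimal sketch of the idea (illustrative, not cloud-init's actual helper), an
elapsed-time measurement based on /proc/uptime looks roughly like this; a clock step
from 'ntpdate' between the two reads does not affect it, and it falls back to
time.time() when /proc/uptime cannot be read:

    import time

    def _uptime():
        # /proc/uptime holds "<seconds_up> <seconds_idle>"; the first field
        # gives sub-second resolution and is unaffected by wall-clock steps.
        try:
            with open("/proc/uptime") as fh:
                return float(fh.read().split()[0])
        except (IOError, OSError, ValueError):
            # Fall back to wall-clock time if /proc/uptime is unavailable.
            return time.time()

    start = _uptime()
    time.sleep(0.25)   # stand-in for the action being timed
    print("took %.3f seconds" % (_uptime() - start))
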
|
|
This most commonly occurs if a user-data script runs '/sbin/poweroff'
while syslog was being used. Once poweroff is invoked, syslog gets killed
and logging would start to show stack traces.
This change generally tries to continue working instead, logging to stderr.
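
A minimal sketch of that fallback, assuming a handler along these lines (the class
is illustrative, not the project's actual code): once syslog has gone away, write
the record to stderr instead of printing a stack trace.

    import logging
    import logging.handlers
    import sys

    class FallbackSysLogHandler(logging.handlers.SysLogHandler):
        def handleError(self, record):
            # Called when emit() fails, e.g. after syslog was killed by
            # '/sbin/poweroff'. Keep working and send the formatted record
            # to stderr rather than showing a scary traceback.
            sys.stderr.write(self.format(record) + "\n")
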
|
|
is to patch the functionality before it gets reimported.
|
|
handle those signals more gracefully and
with better messaging than what comes built in.
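
For illustration only (the handler and wording are assumptions, not the project's
exact code), handling the usual termination signals with a clearer message could
look like:

    import signal
    import sys

    def _on_signal(signum, frame):
        names = {signal.SIGINT: "SIGINT", signal.SIGTERM: "SIGTERM"}
        sys.stderr.write("Exiting on signal %s\n" % names.get(signum, signum))
        sys.exit(1)

    for _sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(_sig, _on_signal)
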
|
|
a config module and make it more generic, in that it can take in a list
of event names to emit as arguments. Add a YAML example to replace the
functionality removed from the main binary.
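
A rough sketch of the shape of such a module, assuming cloud-init's handle() entry
point; the default event name and the use of 'initctl emit' are illustrative:

    import subprocess

    def handle(name, cfg, cloud, log, args):
        # Event names may be passed in as module arguments; fall back to a
        # single 'cloud-config' event when none are given.
        event_names = args or ["cloud-config"]
        for event in event_names:
            log.debug("Emitting upstart event %s", event)
            subprocess.call(["initctl", "emit", event])
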
|
|
The merge of 0.7.0 dropped the cloud-config initctl emission.
I've added it back here, but done so in a way that doesn't force
non-Ubuntu (or non-upstart) distros to provide this config setting
to disable it.
LP: #1028674
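
One way to read "doesn't force non-upstart distros to provide this config setting"
is to skip the emission when upstart is simply not present; a sketch of such a
guard (the initctl paths are an assumption):

    import os

    def upstart_available():
        # Only emit upstart events when an initctl binary actually exists,
        # so non-upstart distros need no extra configuration to opt out.
        return any(os.path.isfile(p)
                   for p in ("/sbin/initctl", "/usr/sbin/initctl"))
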
|
|
when running in local mode vs non-local mode, which
is useful when tracking what is happening in the
console and in the logs that are written out later.
|
|
If the user has provided logging configuration in user-data cloud-config,
we want to set up logging to accept it after the datasource
has been read.
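
A minimal sketch of that ordering, assuming the merged configuration carries a
fileConfig-style string under a key such as 'logcfg' (the key name and shape are
assumptions here):

    import io
    import logging.config

    def apply_userdata_logging(merged_cfg):
        log_cfg = merged_cfg.get("logcfg")
        if not log_cfg:
            return
        # Re-apply logging using the configuration supplied via user-data,
        # now that the datasource (and thus the user-data) has been read.
        logging.config.fileConfig(io.StringIO(log_cfg),
                                  disable_existing_loggers=False)
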
|
|
In the 'cloud-init init' stages, we want the welcome message to get to the
correct output as specified by the system's configuration. I.e., if the
local /etc/cloud/cloud.cfg.d had 'output' or 'log_cfg' settings, we want those
to be able to affect the welcome message also.
In normal operation, nothing else will go to stdout or stderr before this,
and likely/hopefully nothing terribly important to the logs.
|
|
Instead of a warning, only log this message at debug level. Warnings go to the
console and look scary to users.
|
|
At this point there is a mixture of "double hash" Cheetah comments and '#*'
Cheetah comments.
|
|
module
2. Fix the usage of multi_log to log to only one of the places (for now); see the sketch below
3. Update the comment about multi_log and why write_file isn't used in this case
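
For context, an illustrative multi_log-style helper (the signature is an assumption
and may not match the project's exact utility); the point is that the caller picks
only the destinations it needs:

    import logging
    import sys

    def multi_log(text, console=True, stderr=False, log=None,
                  log_level=logging.DEBUG):
        if console:
            try:
                with open("/dev/console", "w") as conp:
                    conp.write(text)
            except (IOError, OSError):
                sys.stdout.write(text)   # no usable console, use stdout
        if stderr:
            sys.stderr.write(text)
        if log:
            log.log(log_level, text.rstrip("\n"))
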
|
|
useful in certain cases
|
|
that occur to stderr.
|
|
datasource is found.
|
|
very obvious.
|
|
comments into the files)
2. Rename consume() to consume_userdata() as it helps in figuring out what this does.
3. Fix up the tests due to #2
|
|
handles this.
|
|
2. Use that list in the main binary & adjust related comparisons
|
|
2. Adjust comment on sources list from depends
3. For the /etc/timezone writing, add a header that says it was created by cloud-init
|
|
2. For a single module, if it doesn't run, print a warning and exit with a return code of 1
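
A small sketch of that behaviour; 'run_single' and the module name below are
hypothetical stand-ins for the real runner:

    import sys

    def run_one(mods, name):
        which_ran = mods.run_single(name)   # hypothetical runner call
        if not which_ran:
            sys.stderr.write("Warning: single module %s did not run\n" % name)
            return 1
        return 0

    # sys.exit(run_one(mods, "cc_final_message"))  # 'mods' assumed to exist
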
|
|
2. Reflect the move back to config 'modules' in the other CLI options
3. Have the single mode not need to look up the module, but use the general import path
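
A sketch of resolving a single module through the general import path rather than a
lookup table; the 'cloudinit.config.cc_' prefix is assumed for illustration:

    import importlib

    def import_config_module(name):
        # e.g. 'final_message' -> cloudinit.config.cc_final_message
        return importlib.import_module("cloudinit.config.cc_" + name)
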
|
|