|
Instead of having one function to register a default handler and another to
register a custom handler, use the same function for both, with a parameter
that controls whether a previously registered content-type is overwritten
(default handlers use it to avoid overwriting custom ones).
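A minimal sketch of the idea (the function and registry names here are illustrative, not the actual cloud-init API):

# Illustrative sketch only; names are hypothetical, not cloud-init's real API.
_handlers = {}

def register_handler(content_type, handler, overwrite=True):
    # Default handlers pass overwrite=False so they never clobber a custom
    # handler that was registered earlier for the same content-type.
    if not overwrite and content_type in _handlers:
        return False
    _handlers[content_type] = handler
    return True

register_handler("text/x-my-part", lambda payload: payload)
# A later default registration is a no-op because the key already exists:
register_handler("text/x-my-part", lambda payload: None, overwrite=False)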
|
|
It appears that udelta could previously have been left undefined, or defined
as the string "N/A", and then passed through a float formatter.
Fix that by ensuring it is set to a default and by checking that it is a
float before applying float formatting.
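A sketch of the described handling (variable names illustrative):

udelta = None  # may also arrive as the string "N/A" from earlier code paths
if udelta is None:
    udelta = 0.0
if isinstance(udelta, (int, float)):
    msg = "%.3f seconds" % udelta
else:
    msg = str(udelta)  # e.g. "N/A" is shown as-is instead of breaking the formatter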
|
|
These are just some cleanups, plus using a single 'sed' rather than
grep and cut. The motivation is to support running with a non-GNU
'grep' that doesn't have -P.
|
|
It appears that udelta could previously
have been left undefined, or defined
as the string "N/A", and then
passed through a float formatter.
Fix that by ensuring it is set to a default
and by checking that it is a float
before applying float formatting.
|
|
The big benefit of this is that the user can now put arbitrary
data into the user-data or user-script keys with no concern
about the data being read incorrectly.
Previously, if the data contained '\n.\n', there was no way to differentiate
that from an end-of-message marker in the serial communication format.
It is recommended that anyone using user-data on SmartOS base64-encode
that data and specify a key of 'b64-user-data' with a value of 'true'.
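For illustration, encoding the payload on the user's side could look like this (a sketch; only the 'b64-user-data' key comes from the message above, the rest is made up):

import base64

user_data = "#!/bin/sh\necho 'payload that may contain\n.\non its own line'\n"
metadata = {
    # Encoded data can never contain the serial protocol's '\n.\n'
    # end-of-message marker, so it is always read back intact.
    "user-data": base64.b64encode(user_data.encode()).decode(),
    "b64-user-data": "true",
}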
|
|
The most likely end-user operation (or at least a valid one) for base64
encoding would be to encode the user-data but leave all other values
as plaintext.
To facilitate that, the user can simply add:
b64-user-data=true
to indicate that user-data is base64 encoded.
Other changes here adjust the cloud-config and metadata key names that are
used:
base64_all = boolean(True)
base64_keys = [list, of, keys]
Fixed up tests to accommodate.
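A rough sketch of how a datasource might decide which keys to decode given those two settings (simplified, not the exact cloud-init implementation):

import base64

def maybe_decode(key, value, base64_all=False, base64_keys=None):
    # Decode only when everything is declared encoded, or this key is listed.
    if base64_all or key in (base64_keys or []):
        return base64.b64decode(value).decode()
    return value

print(maybe_decode("user-data", "aGVsbG8=", base64_keys=["user-data"]))  # hello
print(maybe_decode("hostname", "myhost", base64_keys=["user-data"]))     # myhost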
|
|
|
|
This simply invokes subp correctly through util.log_time.
The argument to subp is named 'args', not 'command'.
LP: #1214541
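The corrected call looks roughly like this (a sketch; apart from 'args', which the message confirms, the keyword names of log_time are assumed):

import logging
from cloudinit import util

LOG = logging.getLogger(__name__)

# The command is forwarded through the 'args' tuple, since subp's
# parameter is named 'args', not 'command'.
out, err = util.log_time(logfunc=LOG.debug, msg="resizing filesystem",
                         func=util.subp,
                         args=(["resize2fs", "/dev/sda1"],))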
|
|
If the Azure OVF data specified a password, pass that password through
to useradd. Also update the test case to verify that the value was
encrypted correctly.
LP: #1212723
|
|
|
|
'password' was the wrong key. It should have been setting the default
user's "plain_text_password".
Instead of doing that, though, we encrypt the value and put it in
'passwd', which is then passed on to useradd. The key benefit of doing
this is that the plain-text password is not stored in obj.pkl
(admittedly it is still in plain text in the ovf-env.xml file).
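A sketch of that encryption step using the standard library (the real code may build its salt differently):

import crypt

def encrypt_pass(password):
    # SHA-512 crypt ("$6$..."); only this hash, never the plain text, ends up
    # in 'passwd' and is handed on to useradd.
    return crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512))

print(encrypt_pass("example-password")[:3])  # "$6$"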
|
|
The resizepart code was not functional.
We will re-favor it later under bug 1212492.
For now, we'll just favor the 'growpart' resizer.
Both will be found in Ubuntu cloud images.
LP: #1212444
|
|
The resizepart code was not functional.
We will re-favor it later under bug 1212492.
For now, we'll just favor the 'growpart' resizer.
Both will be found in Ubuntu cloud images.
LP: #1212444
|
|
This adds the ability to explicitly set an http, https, and ftp proxy for
apt. It also generically adds the ability to provide an apt config.
An apt config could be done via write_files, but this is more specific to it.
LP: #1057195
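For illustration, the proxy settings boil down to rendering an apt configuration fragment along these lines (helper name and target path are hypothetical):

def render_apt_proxy(http=None, https=None, ftp=None):
    # One Acquire::<scheme>::Proxy line per configured proxy.
    lines = ['Acquire::%s::Proxy "%s";' % (scheme, url)
             for scheme, url in (("http", http), ("https", https), ("ftp", ftp))
             if url]
    return "\n".join(lines) + "\n"

# Would typically be written somewhere under /etc/apt/apt.conf.d/.
print(render_apt_proxy(http="http://proxy.example.com:3128"))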
|
|
LP: #1057195
|
|
Remove duplicates of some code.
|
|
|
|
Added documentation on SmartOS datasource.
|
|
The reason for this is that, for more and more things, I was wanting to be
able to see how long they took. This puts that timing logic in a single
place. It also supports reading from /proc/uptime as the timing mechanism.
While reading /proc/uptime is almost certainly slower than time.time(), it
does give millisecond granularity and is not affected by 'ntpdate' having
run in between the two events.
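Reading the delta from /proc/uptime amounts to something like this (simplified sketch):

def read_uptime():
    # The first field of /proc/uptime is fractional seconds since boot and
    # is unaffected by wall-clock jumps such as an ntpdate step.
    with open("/proc/uptime") as fp:
        return float(fp.read().split()[0])

start = read_uptime()
# ... the timed work happens here ...
delta = read_uptime() - start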
|
|
Reading /proc/uptime is going to be slower, and there is no reason to do it
for most things. Better to do it only when you suspect it might be needed.
|
|
The reason for this is that, for more and more things, I was wanting to be
able to see how long they took. This puts that timing logic in a single
place. It also supports (by default) reading from /proc/uptime as the timing
mechanism. While that is almost certainly slower than time.time(), it does
give millisecond granularity and is not affected by 'ntpdate' having
run in between the two events.
|
|
As shown in the comments of bug 1202758 and the filing of ntp bug 1206164,
waiting for the output of this command causes us to wait for ntpdate to
fully finish.
Ideally I think we would disable ntpdate from running here, but
that is not trivially possible.
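In subprocess terms, the difference is between reading the command's output (which blocks until every process holding the pipe, ntpdate included, has exited) and discarding it; a rough standard-library sketch rather than cloud-init's actual subp call:

import subprocess

# Not capturing output means we only wait for the direct child; daemons it
# spawned (such as ntpdate started from an if-up hook) cannot hold us up
# by keeping an inherited pipe open.
subprocess.check_call(["ifup", "eth0"],
                      stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)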
|
|
LP: #1205720
|
|
The environment that was set up to include 'interface' was not actually
being passed on to 'subp', so when the command ran, it wasn't available.
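The fix amounts to actually handing the prepared environment to the child process, along these lines (sketched with the standard library; the command is a stand-in):

import os
import subprocess

env = os.environ.copy()
env["interface"] = "eth0"
# Without env=..., the child only sees the unmodified parent environment
# and $interface would be empty.
subprocess.check_call(["sh", "-c", "echo bouncing $interface"], env=env)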
|
|
These are Debian's init scripts, taken from their SVN trunk
as of today. Thanks, Juerg.
|
|
This way you can now do ./package/bddeb --init-system=sysvinit_deb
|
|
See the added doc/sources/azure/README.rst for why this is necessary.
Essentially, we now do the following in the get_data() method of the
Azure datasource to publish this NewHostname:
hostname NewHostName
ifdown eth0;
ifup eth0
LP: #1202758
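Sketched in terms of cloud-init's subp helper (the real code assembles the bounce command from configuration, so treat this as illustrative):

from cloudinit import util

# Set the new hostname, then bounce the interface so the fresh DHCP
# exchange publishes it to the DDNS server.
util.subp(["hostname", "NewHostName"])
util.subp(["ifdown", "eth0"])
util.subp(["ifup", "eth0"])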
|
|
Move long lines out of the test_util.py file and into tests/data.
No pep8 or pylint errors now.
|
|
|
|
Fix the incorrect usage of the prefix-removal array action
by using the new util function that performs these
actions correctly.
Add a couple of unit tests to verify that the jsonp merging
and usage work as expected.
|
|
This adds a very well-defined and well-understood mechanism for applying
changes to the cloud-config. Had we seen this previously, we might not
have done the merge-types work.
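The mechanism referred to appears to be the jsonp merging mentioned above, i.e. JSON-Patch-style application of an operation list to the existing config; for illustration, using the third-party jsonpatch module (assumed to expose apply_patch):

import jsonpatch  # third-party implementation of RFC 6902 JSON Patch

config = {"runcmd": ["echo hello"], "locale": "en_US.UTF-8"}
patch = [
    {"op": "add", "path": "/runcmd/1", "value": "echo world"},
    {"op": "replace", "path": "/locale", "value": "en_GB.UTF-8"},
]
print(jsonpatch.apply_patch(config, patch))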
|
|
This adds a datasource designed to work on the Joyent cloud (SmartOS).
|