-rw-r--r--  ChangeLog  12
-rw-r--r--  Makefile  6
-rw-r--r--  Requires  6
-rw-r--r--  TODO  9
-rwxr-xr-x  bin/cloud-init  68
-rw-r--r--  cloudinit/config/cc_apt_update_upgrade.py  3
-rw-r--r--  cloudinit/config/cc_emit_upstart.py  48
-rw-r--r--  cloudinit/config/cc_final_message.py  17
-rw-r--r--  cloudinit/config/cc_resizefs.py  4
-rw-r--r--  cloudinit/config/cc_rightscale_userdata.py  2
-rw-r--r--  cloudinit/config/cc_update_etc_hosts.py  6
-rw-r--r--  cloudinit/config/cc_write_files.py  102
-rw-r--r--  cloudinit/handlers/__init__.py  8
-rw-r--r--  cloudinit/helpers.py  3
-rw-r--r--  cloudinit/log.py  37
-rw-r--r--  cloudinit/sources/DataSourceEc2.py  2
-rw-r--r--  cloudinit/sources/DataSourceMAAS.py  91
-rw-r--r--  cloudinit/sources/DataSourceOVF.py  2
-rw-r--r--  cloudinit/stages.py  13
-rw-r--r--  cloudinit/templater.py  11
-rw-r--r--  cloudinit/url_helper.py  2
-rw-r--r--  cloudinit/user_data.py  2
-rw-r--r--  cloudinit/util.py  103
-rw-r--r--  config/cloud.cfg  4
-rw-r--r--  doc/altcloud/README  65
-rw-r--r--  doc/examples/cloud-config-datasources.txt  67
-rw-r--r--  doc/examples/cloud-config-write-files.txt  33
-rw-r--r--  doc/examples/cloud-config.txt  9
-rwxr-xr-x  packages/bddeb  157
-rwxr-xr-x  packages/brpm  108
-rw-r--r--  packages/debian/changelog  5
-rw-r--r--  packages/debian/changelog.in  6
-rw-r--r--  packages/debian/control.in (renamed from packages/debian/control)  20
-rwxr-xr-x  packages/debian/rules  3
-rwxr-xr-x  packages/make-tarball  89
-rw-r--r--  packages/redhat/cloud-init.spec.in (renamed from packages/redhat/cloud-init.spec)  103
-rwxr-xr-x  setup.py  38
-rwxr-xr-x  sysvinit/cloud-config  2
-rwxr-xr-x  sysvinit/cloud-final  2
-rwxr-xr-x  sysvinit/cloud-init  2
-rwxr-xr-x  sysvinit/cloud-init-local  2
-rw-r--r--  templates/chef_client.rb.tmpl  18
-rw-r--r--  templates/hosts.redhat.tmpl  13
-rw-r--r--  templates/hosts.ubuntu.tmpl  15
-rw-r--r--  templates/sources.list.tmpl  101
-rw-r--r--  tests/unittests/test__init__.py  21
-rw-r--r--  tests/unittests/test_builtin_handlers.py  1
-rw-r--r--  tests/unittests/test_datasource/test_maas.py  2
-rw-r--r--  tests/unittests/test_handler/test_handler_ca_certs.py  14
-rw-r--r--  tests/unittests/test_userdata.py  4
-rw-r--r--  tests/unittests/test_util.py  4
-rwxr-xr-x  tools/hacking.py  7
-rwxr-xr-x  tools/make-dist-tarball (renamed from packages/make-dist-tarball)  0
-rwxr-xr-x  tools/make-tarball  35
-rwxr-xr-x  tools/mock-meta.py  65
-rwxr-xr-x  tools/read-dependencies  80
-rwxr-xr-x  tools/read-version  101
-rw-r--r--  upstart/cloud-config.conf  1
-rw-r--r--  upstart/cloud-log-shutdown.conf  19
59 files changed, 1060 insertions, 713 deletions
diff --git a/ChangeLog b/ChangeLog
index c3f71b9c..fc45ff2d 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,4 +1,12 @@
0.7.0:
+ - Added RHEVm and vSphere support as source AltCloud [Joseph VLcek]
+ - add write-files module (LP: #1012854)
+ - Add setuptools + cheetah to debian package build dependencies (LP: #1022101)
+ - Adjust the sysvinit local script to provide 'cloud-init-local' and have
+ the cloud-config script depend on that as well.
+ - Add the 'bzr' name to all packages built
+ - Reduce logging levels for certain non-critical cases to DEBUG instead of the
+ previous level of WARNING
- unified binary that activates the various stages
- Now using argparse + subcommands to specify the various CLI options
- a stage module that clearly separates the stages of the different
@@ -69,10 +77,6 @@
level actions to go through standard set of util functions, this greatly
helps in debugging and determining exactly which system actions cloud-init is
performing
- - switching out the templating engine cheetah for tempita since tempita has
- no external dependencies (minus python) while cheetah has many dependencies
- which makes it more difficult to adopt cloud-init in distros that may not
- have those dependencies
- adjust url fetching and url trying to go through a single function that
reads urls in the new 'url helper' file, this helps in tracing, debugging
and knowing which urls are being called and/or posted to from with-in
diff --git a/Makefile b/Makefile
index a96d6b5b..49324ca0 100644
--- a/Makefile
+++ b/Makefile
@@ -1,5 +1,5 @@
CWD=$(shell pwd)
-PY_FILES=$(shell find cloudinit bin -name "*.py")
+PY_FILES=$(shell find cloudinit bin tests tools -name "*.py")
PY_FILES+="bin/cloud-init"
all: test
@@ -24,10 +24,10 @@ clean:
/var/lib/cloud/
rpm:
- cd packages && ./brpm
+ ./packages/brpm
deb:
- cd packages && ./bddeb
+ ./packages/bddeb
.PHONY: test pylint pyflakes 2to3 clean pep8 rpm deb
diff --git a/Requires b/Requires
index 10be0155..4f9311d5 100644
--- a/Requires
+++ b/Requires
@@ -1,9 +1,7 @@
# Pypi requirements for cloud-init to work
-# Used for templating any files or strings that are considered
-# to be templates, not cheetah since it pulls in alot of extra libs.
-# This one is pretty dinky and does want we want (var substituion)
-Tempita
+# Used for untemplating any files or strings with parameters.
+cheetah
# This is used for any pretty printing of tabular data.
PrettyTable
diff --git a/TODO b/TODO
index 1725db00..792bc63d 100644
--- a/TODO
+++ b/TODO
@@ -35,3 +35,12 @@
something to remove later, and just recommend using 'chroot' instead (or the X
different other options which are similar to 'chroot'), which might be more
natural and less confusing...
+- Instead of just warning when a module is being run on an 'unknown' distribution
+ perhaps we should not run that module in that case? Or we might want to start
+ reworking those modules so they will run on all distributions? Or if that is
+ not the case, then maybe we want to allow fully specified python paths for
+ modules and start encouraging packages of 'ubuntu' modules, packages of 'rhel'
+ specific modules that people can add instead of having them all under the
+ cloud-init 'root' tree? This might encourage more development of other modules
+ instead of having to go edit the cloud-init code to accomplish this.
+
diff --git a/bin/cloud-init b/bin/cloud-init
index c7863db1..1f017475 100755
--- a/bin/cloud-init
+++ b/bin/cloud-init
@@ -45,9 +45,9 @@ from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
CLOUD_CONFIG)
-# Pretty little welcome message template
-WELCOME_MSG_TPL = ("Cloud-init v. {{version}} running '{{action}}' at "
- "{{timestamp}}. Up {{uptime}} seconds.")
+# Pretty little cheetah formatted welcome message template
+WELCOME_MSG_TPL = ("Cloud-init v. ${version} running '${action}' at "
+ "${timestamp}. Up ${uptime} seconds.")
# Module section template
MOD_SECTION_TPL = "cloud_%s_modules"
@@ -82,16 +82,22 @@ def print_exc(msg=''):
sys.stderr.write("\n")
-def welcome(action):
+def welcome(action, msg=None):
+ if not msg:
+ msg = welcome_format(action)
+ util.multi_log("%s\n" % (msg),
+ console=False, stderr=True, log=LOG)
+ return msg
+
+
+def welcome_format(action):
tpl_params = {
'version': version.version_string(),
'uptime': util.uptime(),
'timestamp': util.time_rfc2822(),
'action': action,
}
- tpl_msg = templater.render_string(WELCOME_MSG_TPL, tpl_params)
- util.multi_log("%s\n" % (tpl_msg),
- console=False, stderr=True)
+ return templater.render_string(WELCOME_MSG_TPL, tpl_params)
def extract_fns(args):
@@ -150,11 +156,14 @@ def main_init(name, args):
# 6. Connect to the current instance location + update the cache
# 7. Consume the userdata (handlers get activated here)
# 8. Construct the modules object
- # 9. Adjust any subsequent logging/output redirections using
- # the modules objects configuration
+ # 9. Adjust any subsequent logging/output redirections using the modules
+ # objects config as it may be different from init object
# 10. Run the modules for the 'init' stage
# 11. Done!
- welcome(name)
+ if not args.local:
+ w_msg = welcome_format(name)
+ else:
+ w_msg = welcome_format("%s-local" % (name))
init = stages.Init(deps)
# Stage 1
init.read_cfg(extract_fns(args))
@@ -174,6 +183,12 @@ def main_init(name, args):
" longer be active shortly"))
logging.resetLogging()
logging.setupLogging(init.cfg)
+
+ # Any log usage prior to setupLogging above did not have local user log
+ # config applied. We send the welcome message now, as stderr/out have
+ # been redirected and log now configured.
+ welcome(name, msg=w_msg)
+
# Stage 3
try:
init.initialize()
@@ -219,13 +234,15 @@ def main_init(name, args):
try:
init.fetch()
except sources.DataSourceNotFoundException:
- util.logexc(LOG, ("No instance datasource found!"
- " Likely bad things to come!"))
- # In the case of cloud-init (net mode) it is a bit
- # more likely that the user would consider it
- # failure if nothing was found. When using
- # upstart it will also mentions job failure
+ # In the case of 'cloud-init init' without '--local' it is a bit
+ # more likely that the user would consider it a failure if nothing was
+ # found. When using upstart it will also mention job failure
# in console log if exit code is != 0.
+ if args.local:
+ LOG.debug("No local datasource found")
+ else:
+ util.logexc(LOG, ("No instance datasource found!"
+ " Likely bad things to come!"))
if not args.force:
if args.local:
return 0
@@ -254,9 +271,10 @@ def main_init(name, args):
except Exception:
util.logexc(LOG, "Consuming user data failed!")
return 1
- # Stage 8 - TODO - do we really need to re-extract our configs?
+
+ # Stage 8 - re-read and apply relevant cloud-config to include user-data
mods = stages.Modules(init, extract_fns(args))
- # Stage 9 - TODO is this really needed??
+ # Stage 9
try:
outfmt_orig = outfmt
errfmt_orig = errfmt
@@ -266,6 +284,8 @@ def main_init(name, args):
(outfmt, errfmt) = util.fixup_output(mods.cfg, name)
except:
util.logexc(LOG, "Failed to re-adjust output redirection!")
+ logging.setupLogging(mods.cfg)
+
# Stage 10
return run_module_section(mods, name, name)
@@ -282,7 +302,7 @@ def main_modules(action_name, args):
# the modules objects configuration
# 5. Run the modules for the given stage name
# 6. Done!
- welcome("%s:%s" % (action_name, name))
+ w_msg = welcome_format("%s:%s" % (action_name, name))
init = stages.Init(ds_deps=[])
# Stage 1
init.read_cfg(extract_fns(args))
@@ -314,6 +334,10 @@ def main_modules(action_name, args):
" longer be active shortly"))
logging.resetLogging()
logging.setupLogging(mods.cfg)
+
+ # now that logging is setup and stdout redirected, send welcome
+ welcome(name, msg=w_msg)
+
# Stage 5
return run_module_section(mods, name, name)
@@ -333,7 +357,7 @@ def main_single(name, args):
# 5. Run the single module
# 6. Done!
mod_name = args.name
- welcome("%s:%s" % (name, mod_name))
+ w_msg = welcome_format(name)
init = stages.Init(ds_deps=[])
# Stage 1
init.read_cfg(extract_fns(args))
@@ -372,6 +396,10 @@ def main_single(name, args):
" longer be active shortly"))
logging.resetLogging()
logging.setupLogging(mods.cfg)
+
+ # now that logging is setup and stdout redirected, send welcome
+ welcome(name, msg=w_msg)
+
# Stage 5
(which_ran, failures) = mods.run_single(mod_name,
mod_args,
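The welcome-message hunk above does two things: it swaps Tempita's `{{var}}` placeholders for Cheetah's `${var}` form, and it splits formatting (`welcome_format`) from emission (`welcome`) so the message can be built early but logged only after stderr/stdout redirection is in place. A minimal sketch of the substitution step, using the stdlib `string.Template` (which happens to share the `${name}` syntax) rather than Cheetah itself, with illustrative values in place of the real `version`/`util` helpers:

```python
from string import Template

# The template exactly as bin/cloud-init defines it after this change.
WELCOME_MSG_TPL = ("Cloud-init v. ${version} running '${action}' at "
                   "${timestamp}. Up ${uptime} seconds.")


def welcome_format(action, version, timestamp, uptime):
    # Build the parameter dict the way bin/cloud-init does, then render.
    # cloud-init renders via cloudinit.templater (Cheetah); string.Template
    # stands in here only because simple ${name} substitution is identical.
    tpl_params = {
        'version': version,
        'uptime': uptime,
        'timestamp': timestamp,
        'action': action,
    }
    return Template(WELCOME_MSG_TPL).substitute(tpl_params)


print(welcome_format('init-local', '0.7.0',
                     'Mon, 16 Jul 2012 00:00:00 +0000', '3.02'))
```

In `main_init`, the rendered string is held in `w_msg` and only passed to `welcome()` after `setupLogging`, so the banner lands in the redirected log rather than the pre-redirection console.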
diff --git a/cloudinit/config/cc_apt_update_upgrade.py b/cloudinit/config/cc_apt_update_upgrade.py
index 5c5e510c..1bffa47d 100644
--- a/cloudinit/config/cc_apt_update_upgrade.py
+++ b/cloudinit/config/cc_apt_update_upgrade.py
@@ -255,7 +255,8 @@ def find_apt_mirror(cloud, cfg):
if mydom:
doms.append(".%s" % mydom)
- if not mirror:
+ if (not mirror and
+ util.get_cfg_option_bool(cfg, "apt_mirror_search_dns", False)):
doms.extend((".localdomain", "",))
mirror_list = []
diff --git a/cloudinit/config/cc_emit_upstart.py b/cloudinit/config/cc_emit_upstart.py
new file mode 100644
index 00000000..68b86ff6
--- /dev/null
+++ b/cloudinit/config/cc_emit_upstart.py
@@ -0,0 +1,48 @@
+# vi: ts=4 expandtab
+#
+# Copyright (C) 2009-2011 Canonical Ltd.
+# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
+#
+# Author: Scott Moser <scott.moser@canonical.com>
+# Author: Juerg Haefliger <juerg.haefliger@hp.com>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 3, as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import os
+
+from cloudinit import util
+from cloudinit.settings import PER_ALWAYS
+
+frequency = PER_ALWAYS
+
+distros = ['ubuntu', 'debian']
+
+
+def handle(name, _cfg, cloud, log, args):
+ event_names = args
+ if not event_names:
+ # Default to the 'cloud-config'
+ # event for backwards compat.
+ event_names = ['cloud-config']
+ if not os.path.isfile("/sbin/initctl"):
+ log.debug(("Skipping module named %s,"
+ " no /sbin/initctl located"), name)
+ return
+ cfgpath = cloud.paths.get_ipath_cur("cloud_config")
+ for n in event_names:
+ cmd = ['initctl', 'emit', str(n), 'CLOUD_CFG=%s' % cfgpath]
+ try:
+ util.subp(cmd)
+ except Exception as e:
+ # TODO, use log exception from utils??
+ log.warn("Emission of upstart event %s failed due to: %s", n, e)
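The new module's logic is small: default to the 'cloud-config' event, skip entirely when `/sbin/initctl` is absent, and otherwise emit one upstart event per name with the rendered cloud-config path in `CLOUD_CFG`. A sketch of just the command construction, separated out so it can be inspected without running initctl (the helper name `build_emit_commands` is mine, not the module's):

```python
def build_emit_commands(event_names, cfgpath):
    # Mirror cc_emit_upstart.handle(): one 'initctl emit' invocation per
    # event, each carrying the cloud-config path as the CLOUD_CFG variable.
    if not event_names:
        # Default to the 'cloud-config' event for backwards compatibility.
        event_names = ['cloud-config']
    return [['initctl', 'emit', str(n), 'CLOUD_CFG=%s' % cfgpath]
            for n in event_names]


cmds = build_emit_commands([], '/var/lib/cloud/instance/cloud-config.txt')
print(cmds)
```

In the module each command list is handed to `util.subp(cmd)`, with failures logged as warnings rather than raised.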
diff --git a/cloudinit/config/cc_final_message.py b/cloudinit/config/cc_final_message.py
index b1caca47..aff03c4e 100644
--- a/cloudinit/config/cc_final_message.py
+++ b/cloudinit/config/cc_final_message.py
@@ -26,23 +26,20 @@ from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS
-FINAL_MESSAGE_DEF = ("Cloud-init v. {{version}} finished at {{timestamp}}."
- " Up {{uptime}} seconds.")
+# Cheetah formatted default message
+FINAL_MESSAGE_DEF = ("Cloud-init v. ${version} finished at ${timestamp}."
+ " Up ${uptime} seconds.")
def handle(_name, cfg, cloud, log, args):
- msg_in = None
+ msg_in = ''
if len(args) != 0:
- msg_in = args[0]
+ msg_in = str(args[0])
else:
- msg_in = util.get_cfg_option_str(cfg, "final_message")
-
- if not msg_in:
- template_fn = cloud.get_template_filename('final_message')
- if template_fn:
- msg_in = util.load_file(template_fn)
+ msg_in = util.get_cfg_option_str(cfg, "final_message", "")
+ msg_in = msg_in.strip()
if not msg_in:
msg_in = FINAL_MESSAGE_DEF
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 69cd8872..256a194f 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -134,7 +134,7 @@ def do_resize(resize_cmd, log):
except util.ProcessExecutionError:
util.logexc(log, "Failed to resize filesystem (cmd=%s)", resize_cmd)
raise
- tot_time = int(time.time() - start)
- log.debug("Resizing took %s seconds", tot_time)
+ tot_time = time.time() - start
+ log.debug("Resizing took %.3f seconds", tot_time)
# TODO: Should we add a fsck check after this to make
# sure we didn't corrupt anything?
diff --git a/cloudinit/config/cc_rightscale_userdata.py b/cloudinit/config/cc_rightscale_userdata.py
index 7a134569..45d41b3f 100644
--- a/cloudinit/config/cc_rightscale_userdata.py
+++ b/cloudinit/config/cc_rightscale_userdata.py
@@ -53,7 +53,7 @@ def handle(name, _cfg, cloud, log, _args):
try:
ud = cloud.get_userdata_raw()
except:
- log.warn("Failed to get raw userdata in module %s", name)
+ log.debug("Failed to get raw userdata in module %s", name)
return
try:
diff --git a/cloudinit/config/cc_update_etc_hosts.py b/cloudinit/config/cc_update_etc_hosts.py
index c148b12e..38108da7 100644
--- a/cloudinit/config/cc_update_etc_hosts.py
+++ b/cloudinit/config/cc_update_etc_hosts.py
@@ -36,11 +36,11 @@ def handle(name, cfg, cloud, log, _args):
return
# Render from a template file
- distro_n = cloud.distro.name
- tpl_fn_name = cloud.get_template_filename("hosts.%s" % (distro_n))
+ tpl_fn_name = cloud.get_template_filename("hosts.%s" %
+ (cloud.distro.name))
if not tpl_fn_name:
raise RuntimeError(("No hosts template could be"
- " found for distro %s") % (distro_n))
+ " found for distro %s") % (cloud.distro.name))
out_fn = cloud.paths.join(False, '/etc/hosts')
templater.render_to_file(tpl_fn_name, out_fn,
diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py
new file mode 100644
index 00000000..1bfa4c25
--- /dev/null
+++ b/cloudinit/config/cc_write_files.py
@@ -0,0 +1,102 @@
+# vi: ts=4 expandtab
+#
+# Copyright (C) 2012 Yahoo! Inc.
+#
+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 3, as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import base64
+import os
+
+from cloudinit import util
+from cloudinit.settings import PER_INSTANCE
+
+frequency = PER_INSTANCE
+
+DEFAULT_OWNER = "root:root"
+DEFAULT_PERMS = 0644
+UNKNOWN_ENC = 'text/plain'
+
+
+def handle(name, cfg, _cloud, log, _args):
+ files = cfg.get('write_files')
+ if not files:
+ log.debug(("Skipping module named %s,"
+ " no/empty 'write_files' key in configuration"), name)
+ return
+ write_files(name, files, log)
+
+
+def canonicalize_extraction(encoding_type, log):
+ if not encoding_type:
+ encoding_type = ''
+ encoding_type = encoding_type.lower().strip()
+ if encoding_type in ['gz', 'gzip']:
+ return ['application/x-gzip']
+ if encoding_type in ['gz+base64', 'gzip+base64', 'gz+b64', 'gzip+b64']:
+ return ['application/base64', 'application/x-gzip']
+ # Yaml already encodes binary data as base64 if it is given to the
+ # yaml file as binary, so those will be automatically decoded for you.
+ # But the above b64 is just for people that are more 'comfortable'
+ # specifying it manually (which might be a possibility)
+ if encoding_type in ['b64', 'base64']:
+ return ['application/base64']
+ if encoding_type:
+ log.warn("Unknown encoding type %s, assuming %s",
+ encoding_type, UNKNOWN_ENC)
+ return [UNKNOWN_ENC]
+
+
+def write_files(name, files, log):
+ if not files:
+ return
+
+ for (i, f_info) in enumerate(files):
+ path = f_info.get('path')
+ if not path:
+ log.warn("No path provided to write for entry %s in module %s",
+ i + 1, name)
+ continue
+ path = os.path.abspath(path)
+ extractions = canonicalize_extraction(f_info.get('encoding'), log)
+ contents = extract_contents(f_info.get('content', ''), extractions)
+ (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER))
+ perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS, log)
+ util.write_file(path, contents, mode=perms)
+ util.chownbyname(path, u, g)
+
+
+def decode_perms(perm, default, log):
+ try:
+ if isinstance(perm, (int, long, float)):
+ # Just 'downcast' it (if a float)
+ return int(perm)
+ else:
+ # Force to string and try octal conversion
+ return int(str(perm), 8)
+ except (TypeError, ValueError):
+ log.warn("Undecodable permissions %s, assuming %s", perm, default)
+ return default
+
+
+def extract_contents(contents, extraction_types):
+ result = str(contents)
+ for t in extraction_types:
+ if t == 'application/x-gzip':
+ result = util.decomp_gzip(result, quiet=False)
+ elif t == 'application/base64':
+ result = base64.b64decode(result)
+ elif t == UNKNOWN_ENC:
+ pass
+ return result
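The interesting part of the new module is how encodings chain: `canonicalize_extraction` turns a shorthand like 'gz+b64' into an ordered list of steps — base64 first, then gzip — and `extract_contents` applies them in that order. A self-contained Python 3 sketch of the same pipeline (stdlib only; the module's own gzip step routes through `util.decomp_gzip` and works on Python 2 strings):

```python
import base64
import gzip

# Decoding steps are applied left to right, as in canonicalize_extraction's
# return value for 'gz+b64': ['application/base64', 'application/x-gzip'].
def extract_contents(contents, extraction_types):
    result = contents
    for t in extraction_types:
        if t == 'application/base64':
            result = base64.b64decode(result)
        elif t == 'application/x-gzip':
            result = gzip.decompress(result)
    return result


# A 'gz+b64' payload: gzipped content wrapped in base64 for safe YAML transport.
payload = base64.b64encode(gzip.compress(b'hello cloud-init\n'))
out = extract_contents(payload, ['application/base64', 'application/x-gzip'])
print(out)
```

After decoding, the module writes the result with `util.write_file` using the octal permissions from `decode_perms` and the `user:group` pair from `util.extract_usergroup`.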
diff --git a/cloudinit/handlers/__init__.py b/cloudinit/handlers/__init__.py
index dce2abef..6d1502f4 100644
--- a/cloudinit/handlers/__init__.py
+++ b/cloudinit/handlers/__init__.py
@@ -165,7 +165,10 @@ def walker_callback(pdata, ctype, filename, payload):
walker_handle_handler(pdata, ctype, filename, payload)
return
handlers = pdata['handlers']
- if ctype not in pdata['handlers'] and payload:
+ if ctype in pdata['handlers']:
+ run_part(handlers[ctype], pdata['data'], ctype, filename,
+ payload, pdata['frequency'])
+ elif payload:
# Extract the first line or 24 bytes for displaying in the log
start = _extract_first_or_bytes(payload, 24)
details = "'%s...'" % (start.encode("string-escape"))
@@ -176,8 +179,7 @@ def walker_callback(pdata, ctype, filename, payload):
LOG.warning("Unhandled unknown content-type (%s) userdata: %s",
ctype, details)
else:
- run_part(handlers[ctype], pdata['data'], ctype, filename,
- payload, pdata['frequency'])
+ LOG.debug("empty payload of type %s" % ctype)
# Callback is a function that will be called with
diff --git a/cloudinit/helpers.py b/cloudinit/helpers.py
index 15036a50..a4b20208 100644
--- a/cloudinit/helpers.py
+++ b/cloudinit/helpers.py
@@ -67,6 +67,9 @@ class FileLock(object):
def __init__(self, fn):
self.fn = fn
+ def __str__(self):
+ return "<%s using file %r>" % (util.obj_name(self), self.fn)
+
class FileSemaphores(object):
def __init__(self, sem_path):
diff --git a/cloudinit/log.py b/cloudinit/log.py
index fc1428a2..819c85b6 100644
--- a/cloudinit/log.py
+++ b/cloudinit/log.py
@@ -24,6 +24,7 @@ import logging
import logging.handlers
import logging.config
+import collections
import os
import sys
@@ -63,9 +64,11 @@ def setupLogging(cfg=None):
# If there is a 'logcfg' entry in the config,
# respect it, it is the old keyname
log_cfgs.append(str(log_cfg))
- elif "log_cfgs" in cfg and isinstance(cfg['log_cfgs'], (set, list)):
+ elif "log_cfgs" in cfg:
for a_cfg in cfg['log_cfgs']:
- if isinstance(a_cfg, (list, set, dict)):
+ if isinstance(a_cfg, (basestring, str)):
+ log_cfgs.append(a_cfg)
+ elif isinstance(a_cfg, (collections.Iterable)):
cfg_str = [str(c) for c in a_cfg]
log_cfgs.append('\n'.join(cfg_str))
else:
@@ -73,30 +76,36 @@ def setupLogging(cfg=None):
# See if any of them actually load...
am_tried = 0
- am_worked = 0
- for i, log_cfg in enumerate(log_cfgs):
+ for log_cfg in log_cfgs:
try:
am_tried += 1
# Assume its just a string if not a filename
if log_cfg.startswith("/") and os.path.isfile(log_cfg):
+ # Leave it as a file and do not make it look like
+ # something that is a file (but is really a buffer that
+ # is acting as a file)
pass
else:
log_cfg = StringIO(log_cfg)
# Attempt to load its config
logging.config.fileConfig(log_cfg)
- am_worked += 1
- except Exception as e:
- sys.stderr.write(("WARN: Setup of logging config %s"
- " failed due to: %s\n") % (i + 1, e))
+ # The first one to work wins!
+ return
+ except Exception:
+ # We do not write any logs of this here, because the default
+ # configuration includes an attempt at using /dev/log, followed
+ # up by writing to a file. /dev/log will not exist in very early
+ # boot, so an exception on that is expected.
+ pass
# If it didn't work, at least setup a basic logger (if desired)
basic_enabled = cfg.get('log_basic', True)
- if not am_worked:
- sys.stderr.write(("WARN: no logging configured!"
- " (tried %s configs)\n") % (am_tried))
- if basic_enabled:
- sys.stderr.write("Setting up basic logging...\n")
- setupBasicLogging()
+
+ sys.stderr.write(("WARN: no logging configured!"
+ " (tried %s configs)\n") % (am_tried))
+ if basic_enabled:
+ sys.stderr.write("Setting up basic logging...\n")
+ setupBasicLogging()
def getLogger(name='cloudinit'):
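The rewritten setupLogging loop is "first config wins": each candidate blob is fed to `logging.config.fileConfig` (wrapped in a StringIO when it is not a file path), and the first one that loads ends the search; only if all fail does the basic logger get set up. A hedged sketch of that loop (the helper name `setup_first_working` and the sample config are mine):

```python
import logging
import logging.config
from io import StringIO

# A minimal, valid fileConfig blob standing in for a cloud-config log_cfgs
# entry; cloud-init's real defaults try /dev/log first, then a file.
LOG_CFG = """
[loggers]
keys=root
[handlers]
keys=consoleHandler
[formatters]
keys=simple
[logger_root]
level=DEBUG
handlers=consoleHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)
[formatter_simple]
format=%(levelname)s %(message)s
"""


def setup_first_working(cfg_blobs):
    # As in cloudinit.log.setupLogging: try each candidate in order and
    # stop at the first one fileConfig accepts. Failures are silent since
    # e.g. /dev/log is expected to be missing in very early boot.
    tried = 0
    for blob in cfg_blobs:
        tried += 1
        try:
            logging.config.fileConfig(StringIO(blob))
            return tried  # the first one to work wins
        except Exception:
            continue
    return None  # caller falls back to basic logging


print(setup_first_working(["not a config", LOG_CFG]))
```

This also explains why the old per-config warning was dropped: an early failure is the normal case, not an error worth writing to stderr.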
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index cde73de3..d9eb8f17 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -230,7 +230,7 @@ class DataSourceEc2(sources.DataSource):
remapped = self._remap_device(os.path.basename(found))
if remapped:
- LOG.debug("Remapped device name %s => %s", (found, remapped))
+ LOG.debug("Remapped device name %s => %s", found, remapped)
return remapped
# On t1.micro, ephemeral0 will appear in block-device-mapping from
diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
index f16d5c21..c568d365 100644
--- a/cloudinit/sources/DataSourceMAAS.py
+++ b/cloudinit/sources/DataSourceMAAS.py
@@ -262,3 +262,94 @@ datasources = [
# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
return sources.list_from_depends(depends, datasources)
+
+
+if __name__ == "__main__":
+ def main():
+ """
+ Call with single argument of directory or http or https url.
+ If url is given additional arguments are allowed, which will be
+ interpreted as consumer_key, token_key, token_secret, consumer_secret
+ """
+ import argparse
+ import pprint
+
+ parser = argparse.ArgumentParser(description='Interact with MAAS DS')
+ parser.add_argument("--config", metavar="file",
+ help="specify DS config file", default=None)
+ parser.add_argument("--ckey", metavar="key",
+ help="the consumer key to auth with", default=None)
+ parser.add_argument("--tkey", metavar="key",
+ help="the token key to auth with", default=None)
+ parser.add_argument("--csec", metavar="secret",
+ help="the consumer secret (likely '')", default="")
+ parser.add_argument("--tsec", metavar="secret",
+ help="the token secret to auth with", default=None)
+ parser.add_argument("--apiver", metavar="version",
+ help="the apiver to use ("" can be used)", default=MD_VERSION)
+
+ subcmds = parser.add_subparsers(title="subcommands", dest="subcmd")
+ subcmds.add_parser('crawl', help="crawl the datasource")
+ subcmds.add_parser('get', help="do a single GET of provided url")
+ subcmds.add_parser('check-seed', help="read and verify seed at url")
+
+ parser.add_argument("url", help="the data source to query")
+
+ args = parser.parse_args()
+
+ creds = {'consumer_key': args.ckey, 'token_key': args.tkey,
+ 'token_secret': args.tsec, 'consumer_secret': args.csec}
+
+ if args.config:
+ import yaml
+ with open(args.config) as fp:
+ cfg = yaml.safe_load(fp)
+ if 'datasource' in cfg:
+ cfg = cfg['datasource']['MAAS']
+ for key in creds.keys():
+ if key in cfg and creds[key] is None:
+ creds[key] = cfg[key]
+
+ def geturl(url, headers_cb):
+ req = urllib2.Request(url, data=None, headers=headers_cb(url))
+ return(urllib2.urlopen(req).read())
+
+ def printurl(url, headers_cb):
+ print "== %s ==\n%s\n" % (url, geturl(url, headers_cb))
+
+ def crawl(url, headers_cb=None):
+ if url.endswith("/"):
+ for line in geturl(url, headers_cb).splitlines():
+ if line.endswith("/"):
+ crawl("%s%s" % (url, line), headers_cb)
+ else:
+ printurl("%s%s" % (url, line), headers_cb)
+ else:
+ printurl(url, headers_cb)
+
+ def my_headers(url):
+ headers = {}
+ if creds.get('consumer_key', None) is not None:
+ headers = oauth_headers(url, **creds)
+ return headers
+
+ if args.subcmd == "check-seed":
+ if args.url.startswith("http"):
+ (userdata, metadata) = read_maas_seed_url(args.url,
+ header_cb=my_headers, version=args.apiver)
+ else:
+ (userdata, metadata) = read_maas_seed_url(args.url)
+ print "=== userdata ==="
+ print userdata
+ print "=== metadata ==="
+ pprint.pprint(metadata)
+
+ elif args.subcmd == "get":
+ printurl(args.url, my_headers)
+
+ elif args.subcmd == "crawl":
+ if not args.url.endswith("/"):
+ args.url = "%s/" % args.url
+ crawl(args.url, my_headers)
+
+ main()
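The debug crawler added here relies on a MAAS listing convention: index URLs end in '/', and each line of an index names either a sub-index (also ending in '/') or a leaf to fetch. A self-contained Python 3 sketch of the same recursion over a hypothetical in-memory tree standing in for the metadata server:

```python
def crawl(url, fetch, visit):
    # Mirror of the MAAS debug crawler: recurse into entries ending in '/',
    # fetch and report everything else.
    if url.endswith('/'):
        for line in fetch(url).splitlines():
            if line.endswith('/'):
                crawl(url + line, fetch, visit)
            else:
                visit(url + line, fetch(url + line))
    else:
        visit(url, fetch(url))


# Hypothetical metadata tree; the real crawler fetches over HTTP with
# OAuth headers supplied by a headers_cb callback.
tree = {
    'md/': 'hostname\nkeys/',
    'md/hostname': 'node-1',
    'md/keys/': 'ssh.pub',
    'md/keys/ssh.pub': 'ssh-rsa AAAA...',
}
seen = {}
crawl('md/', tree.__getitem__, seen.__setitem__)
print(sorted(seen))
```

In the module, `visit` is `printurl` and `fetch` is `geturl` with the OAuth `my_headers` callback, so the same few lines double as an authenticated metadata dumper.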
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index 7728b36f..771e64eb 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -213,7 +213,7 @@ def transport_iso9660(require_iso=True):
(fname, contents) = util.mount_cb(fullp,
get_ovf_env, mtype="iso9660")
except util.MountFailedError:
- util.logexc(LOG, "Failed mounting %s", fullp)
+ LOG.debug("%s not mountable as iso9660" % fullp)
continue
if contents is not False:
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index 8fd6aa5d..2f6a566c 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -133,12 +133,13 @@ class Init(object):
if log_file:
util.ensure_file(log_file)
if perms:
- (u, g) = perms.split(':', 1)
- if u == "-1" or u == "None":
- u = None
- if g == "-1" or g == "None":
- g = None
- util.chownbyname(log_file, u, g)
+ u, g = util.extract_usergroup(perms)
+ try:
+ util.chownbyname(log_file, u, g)
+ except OSError:
+ util.logexc(LOG, ("Unable to change the ownership"
+ " of %s to user %s, group %s"),
+ log_file, u, g)
def read_cfg(self, extra_fns=None):
# None check so that we don't keep on re-loading if empty
diff --git a/cloudinit/templater.py b/cloudinit/templater.py
index c4259fa0..77af1270 100644
--- a/cloudinit/templater.py
+++ b/cloudinit/templater.py
@@ -20,13 +20,13 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-from tempita import Template
+from Cheetah.Template import Template
from cloudinit import util
def render_from_file(fn, params):
- return render_string(util.load_file(fn), params, name=fn)
+ return render_string(util.load_file(fn), params)
def render_to_file(fn, outfn, params, mode=0644):
@@ -34,8 +34,7 @@ def render_to_file(fn, outfn, params, mode=0644):
util.write_file(outfn, contents, mode=mode)
-def render_string(content, params, name=None):
- tpl = Template(content, name=name)
+def render_string(content, params):
if not params:
- params = dict()
- return tpl.substitute(params)
+ params = {}
+ return Template(content, searchList=[params]).respond()
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index dbf72392..732d6aec 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -127,7 +127,7 @@ def readurl(url, data=None, timeout=None,
time.sleep(sec_between)
# Didn't work out
- LOG.warn("Failed reading from %s after %s attempts", url, attempts)
+ LOG.debug("Failed reading from %s after %s attempts", url, attempts)
# It must have errored at least once for code
# to get here so re-raise the last error
diff --git a/cloudinit/user_data.py b/cloudinit/user_data.py
index 0842594d..f5d01818 100644
--- a/cloudinit/user_data.py
+++ b/cloudinit/user_data.py
@@ -227,7 +227,7 @@ def convert_string(raw_data, headers=None):
raw_data = ''
if not headers:
headers = {}
- data = util.decomp_str(raw_data)
+ data = util.decomp_gzip(raw_data)
if "mime-version:" in data[0:4096].lower():
msg = email.message_from_string(data)
for (key, val) in headers.iteritems():
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 44ce9770..a8c0cceb 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -55,6 +55,7 @@ from cloudinit import url_helper as uhelp
from cloudinit.settings import (CFG_BUILTIN)
+_DNS_REDIRECT_IP = None
LOG = logging.getLogger(__name__)
# Helps cleanup filenames to ensure they aren't FS incompatible
@@ -159,6 +160,10 @@ class MountFailedError(Exception):
pass
+class DecompressionError(Exception):
+ pass
+
+
def ExtendedTemporaryFile(**kwargs):
fh = tempfile.NamedTemporaryFile(**kwargs)
# Replace its unlink with a quiet version
@@ -256,13 +261,32 @@ def clean_filename(fn):
return fn
-def decomp_str(data):
+def decomp_gzip(data, quiet=True):
try:
buf = StringIO(str(data))
with contextlib.closing(gzip.GzipFile(None, "rb", 1, buf)) as gh:
return gh.read()
- except:
- return data
+ except Exception as e:
+ if quiet:
+ return data
+ else:
+ raise DecompressionError(str(e))
+
+
+def extract_usergroup(ug_pair):
+ if not ug_pair:
+ return (None, None)
+ ug_parted = ug_pair.split(':', 1)
+ u = ug_parted[0].strip()
+ if len(ug_parted) == 2:
+ g = ug_parted[1].strip()
+ else:
+ g = None
+ if not u or u == "-1" or u.lower() == "none":
+ u = None
+ if not g or g == "-1" or g.lower() == "none":
+ g = None
+ return (u, g)
def find_modules(root_dir):
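The new `util.extract_usergroup` replaces the ad-hoc `perms.split(':', 1)` handling that stages.py used for log-file ownership, and is also what cc_write_files uses for its `owner` field. Restated here in Python 3 so its normalization rules are easy to check: a missing group, '-1', and any capitalization of 'none' all become None:

```python
def extract_usergroup(ug_pair):
    # Parse a "user:group" pair; either half may be absent or a
    # sentinel ('', '-1', 'none' in any case), which maps to None.
    if not ug_pair:
        return (None, None)
    parts = ug_pair.split(':', 1)
    u = parts[0].strip()
    g = parts[1].strip() if len(parts) == 2 else None
    if not u or u == "-1" or u.lower() == "none":
        u = None
    if not g or g == "-1" or g.lower() == "none":
        g = None
    return (u, g)


print(extract_usergroup("root:root"))  # ('root', 'root')
print(extract_usergroup("syslog"))     # ('syslog', None)
print(extract_usergroup("-1:None"))    # (None, None)
```

Callers then hand the `(u, g)` tuple straight to `util.chownbyname`, which treats None as "leave unchanged".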
@@ -288,8 +312,10 @@ def multi_log(text, console=True, stderr=True,
wfh.write(text)
wfh.flush()
if log:
- log.log(log_level, text)
-
+ if text[-1] == "\n":
+ log.log(log_level, text[:-1])
+ else:
+ log.log(log_level, text)
def is_ipv4(instr):
""" determine if input string is a ipv4 address. return boolean"""
@@ -381,7 +407,16 @@ def fixup_output(cfg, mode):
#
# with a '|', arguments are passed to shell, so one level of
# shell escape is required.
+#
+# if _CLOUD_INIT_SAVE_STDOUT is set in the environment to a non-empty
+# true value then output will not be redirected (useful for debugging).
+#
def redirect_output(outfmt, errfmt, o_out=None, o_err=None):
+
+ if is_true(os.environ.get("_CLOUD_INIT_SAVE_STDOUT")):
+ LOG.debug("Not redirecting output due to _CLOUD_INIT_SAVE_STDOUT")
+ return
+
if not o_out:
o_out = sys.stdout
if not o_err:
@@ -535,7 +570,7 @@ def runparts(dirp, skip_no_exist=True):
if os.path.isfile(exe_path) and os.access(exe_path, os.X_OK):
attempted.append(exe_path)
try:
- subp([exe_path])
+ subp([exe_path], capture=False)
except ProcessExecutionError as e:
logexc(LOG, "Failed running %s [%s]", exe_path, e.exit_code)
failed.append(e)
@@ -584,7 +619,10 @@ def load_yaml(blob, default=None, allowed=(dict,)):
(allowed, obj_name(converted)))
loaded = converted
except (yaml.YAMLError, TypeError, ValueError):
- logexc(LOG, "Failed loading yaml blob")
+ if len(blob) == 0:
+ LOG.debug("load_yaml given empty string, returning default")
+ else:
+ logexc(LOG, "Failed loading yaml blob")
return loaded
@@ -788,9 +826,43 @@ def get_cmdline_url(names=('cloud-config-url', 'url'),
def is_resolvable(name):
- """ determine if a url is resolvable, return a boolean """
+ """ determine if a url is resolvable, return a boolean
+ This also attempts to be resilient against DNS redirection.
+
+ Note that normal nsswitch resolution is used here, so in order
+ to avoid any use of 'search' entries in /etc/resolv.conf
+ we have to append '.'.
+
+ The top-level 'invalid' domain is invalid per RFC, and example.com
+ should also not exist. The random entry will be resolved inside
+ the search list.
+ """
+ global _DNS_REDIRECT_IP # pylint: disable=W0603
+ if _DNS_REDIRECT_IP is None:
+ badips = set()
+ badnames = ("does-not-exist.example.com.", "example.invalid.",
+ rand_str())
+ badresults = {}
+ for iname in badnames:
+ try:
+ result = socket.getaddrinfo(iname, None, 0, 0,
+ socket.SOCK_STREAM, socket.AI_CANONNAME)
+ badresults[iname] = []
+ for (_fam, _stype, _proto, cname, sockaddr) in result:
+ badresults[iname].append("%s: %s" % (cname, sockaddr[0]))
+ badips.add(sockaddr[0])
+ except socket.gaierror:
+ pass
+ _DNS_REDIRECT_IP = badips
+ if badresults:
+ LOG.debug("detected dns redirection: %s" % badresults)
+
try:
- socket.getaddrinfo(name, None)
+ result = socket.getaddrinfo(name, None)
+ # check first result's sockaddr field
+ addr = result[0][4][0]
+ if addr in _DNS_REDIRECT_IP:
+ return False
return True
except socket.gaierror:
return False
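
The redirect-detection logic in this hunk can be sketched without touching the network by injecting a resolver function in place of socket.getaddrinfo. The names and addresses below are made up for illustration; the helper names are not part of cloud-init:

```python
# Pure-logic sketch of the dns-redirect check above: names that should
# not resolve are looked up first; any address they return is treated
# as a redirect IP, and later lookups landing on it are rejected.
def build_redirect_ips(resolve, badnames):
    badips = set()
    for name in badnames:
        for addr in resolve(name):
            badips.add(addr)
    return badips


def is_resolvable(resolve, name, redirect_ips):
    addrs = resolve(name)
    if not addrs:
        return False
    # Mirror the patch: check the first result against the bad set.
    return addrs[0] not in redirect_ips


# Fake resolver simulating an ISP that redirects NXDOMAIN to 10.0.0.99.
table = {"real-host.example.": ["192.0.2.10"]}
fake = lambda name: table.get(name, ["10.0.0.99"])

redirect_ips = build_redirect_ips(
    fake, ["does-not-exist.example.com.", "example.invalid."])
print(is_resolvable(fake, "real-host.example.", redirect_ips))   # True
print(is_resolvable(fake, "bogus-host.example.", redirect_ips))  # False
```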
@@ -825,10 +897,10 @@ def close_stdin():
reopen stdin as /dev/null so even subprocesses or other os level things get
/dev/null as input.
- if _CLOUD_INIT_SAVE_STDIN is set in environment to a non empty or '0' value
- then input will not be closed (only useful potentially for debugging).
+ if _CLOUD_INIT_SAVE_STDIN is set in the environment to a non-empty, true
+ value then input will not be closed (useful for debugging).
"""
- if os.environ.get("_CLOUD_INIT_SAVE_STDIN") in ("", "0", 'False'):
+ if is_true(os.environ.get("_CLOUD_INIT_SAVE_STDIN")):
return
with open(os.devnull) as fp:
os.dup2(fp.fileno(), sys.stdin.fileno())
@@ -937,12 +1009,9 @@ def chownbyname(fname, user=None, group=None):
uid = pwd.getpwnam(user).pw_uid
if group:
gid = grp.getgrnam(group).gr_gid
- except KeyError:
- logexc(LOG, ("Failed changing the ownership of %s using username %s "
- "and groupname %s (do they exist?)"), fname, user, group)
- return False
+ except KeyError as e:
+ raise OSError("Unknown user or group: %s" % (e))
chownbyid(fname, uid, gid)
- return True
# Always returns well formated values
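
The extract_usergroup helper added in this util.py hunk has a few non-obvious conventions: an empty field, "-1", and "none" all mean "unset". A standalone copy of its logic shows the behavior:

```python
# Standalone copy of the extract_usergroup logic from the hunk above:
# split a "user:group" pair, mapping empty, "-1" and "none" to None.
def extract_usergroup(ug_pair):
    if not ug_pair:
        return (None, None)
    ug_parted = ug_pair.split(':', 1)
    u = ug_parted[0].strip()
    g = ug_parted[1].strip() if len(ug_parted) == 2 else None
    if not u or u == "-1" or u.lower() == "none":
        u = None
    if not g or g == "-1" or g.lower() == "none":
        g = None
    return (u, g)


print(extract_usergroup("root:root"))  # ('root', 'root')
print(extract_usergroup("ubuntu"))     # ('ubuntu', None)
print(extract_usergroup("-1:none"))    # (None, None)
```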
diff --git a/config/cloud.cfg b/config/cloud.cfg
index cb51d061..2b4d9e63 100644
--- a/config/cloud.cfg
+++ b/config/cloud.cfg
@@ -21,6 +21,7 @@ preserve_hostname: false
# The modules that run in the 'init' stage
cloud_init_modules:
- bootcmd
+ - write-files
- resizefs
- set_hostname
- update_hostname
@@ -31,6 +32,9 @@ cloud_init_modules:
# The modules that run in the 'config' stage
cloud_config_modules:
+# Emit the cloud-config ready event;
+# this can be used by upstart jobs for 'start on cloud-config'.
+ - emit_upstart
- mounts
- ssh-import-id
- locale
diff --git a/doc/altcloud/README b/doc/altcloud/README
new file mode 100644
index 00000000..87d7949a
--- /dev/null
+++ b/doc/altcloud/README
@@ -0,0 +1,65 @@
+Data source AltCloud will be used to pick up user data on
+RHEVm and vSphere.
+
+RHEVm:
+======
+For RHEVm v3.0 the userdata is injected into the VM using floppy
+injection via the RHEVm dashboard "Custom Properties". The format
+of the Custom Properties entry must be:
+"floppyinject=user-data.txt:<base64 encoded data>"
+
+e.g.: To pass a simple bash script
+
+% cat simple_script.bash
+#!/bin/bash
+echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
+
+% base64 < simple_script.bash
+IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
+
+To pass this example script to cloud-init running in a RHEVm v3.0 VM
+set the "Custom Properties" when creating the RHEVm v3.0 VM to:
+floppyinject=user-data.txt:IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
+
+NOTE: The prefix with file name must be: "floppyinject=user-data.txt:"
+
+It is also possible to launch a RHEVm v3.0 VM and pass optional user
+data to it using the Delta Cloud.
+For more information on Delta Cloud see: http://deltacloud.apache.org
+
+vSphere:
+========
+For VMware's vSphere the userdata is injected into the VM as an ISO
+via the CD-ROM. This can be done using the vSphere dashboard
+by connecting an ISO image to the CD/DVD drive.
+
+To pass this example script to cloud-init running in a vSphere VM
+set the CD/DVD drive when creating the vSphere VM to point to an
+ISO on the data store.
+
+The ISO must contain the user data:
+
+For example, to pass the same simple_script.bash to vSphere:
+
+Create the ISO:
+===============
+% mkdir my-iso
+
+NOTE: The file name on the ISO must be: "user-data.txt"
+% cp simple_script.bash my-iso/user-data.txt
+
+% genisoimage -o user-data.iso -r my-iso
+
+Verify the ISO:
+===============
+% sudo mkdir /media/vsphere_iso
+% sudo mount -o loop JoeV_CI_02.iso /media/vsphere_iso
+% cat /media/vsphere_iso/user-data.txt
+% sudo umount /media/vsphere_iso
+
+Then, launch the vSphere VM with the ISO user-data.iso attached as a CD-ROM.
+
+It is also possible to launch a vSphere VM and pass optional user
+data to it using the Delta Cloud.
+
+For more information on Delta Cloud see: http://deltacloud.apache.org
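
The base64 step in the RHEVm instructions above can be scripted. A minimal Python sketch, reusing the simple_script.bash contents from this README, that builds the full Custom Properties value:

```python
import base64

# Build the RHEVm "Custom Properties" value for floppy injection,
# following the floppyinject=user-data.txt:<base64> format above.
script = b'#!/bin/bash\necho "Hello Joe!" >> /tmp/JJV_Joe_out.txt\n'
encoded = base64.b64encode(script).decode('ascii')
prop = "floppyinject=user-data.txt:" + encoded
print(prop)
```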
diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
index 102c3dd7..d10dde05 100644
--- a/doc/examples/cloud-config-datasources.txt
+++ b/doc/examples/cloud-config-datasources.txt
@@ -14,73 +14,6 @@ datasource:
- http://169.254.169.254:80
- http://instance-data:8773
- AltCloud:
- Data souce AltCloud will be used to pick up user data on
- RHEVm and vSphere.
-
- RHEVm:
- ======
- For REHVm v3.0 the userdata is injected into the VM using floppy
- injection via the RHEVm dashboard "Custom Properties". The format
- of the Custom Properties entry must be:
- "floppyinject=user-data.txt:<base64 encoded data>"
-
- e.g.: To pass a simple bash script
-
- % cat simple_script.bash
- #!/bin/bash
- echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
-
- % cat simple_script.bash | base64
- IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
-
- To pass this example script to cloud-init running in a RHEVm v3.0 VM
- set the "Custom Properties" when creating the RHEMv v3.0 VM to:
- floppyinject=user-data.txt:IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
-
- NOTE: The prefix with file name must be: "floppyinject=user-data.txt:"
-
- It is also possible to launch a RHEVm v3.0 VM and pass optional user
- data to it using the Delta Cloud.
- For more inforation on Delta Cloud see: http://deltacloud.apache.org
-
- vSphere:
- ========
- For VMWare's vSphere the userdata is injected into the VM an ISO
- via the cdrom. This can be done using the vSphere dashboard
- by connecting an ISO image to the CD/DVD drive.
-
- To pass this example script to cloud-init running in a vSphere VM
- set the CD/DVD drive when creating the vSphere VM to point to an
- ISO on the data store.
-
- The ISO must contain the user data:
-
- For example, to pass the same simple_script.bash to vSphere:
-
- Create the ISO:
- ===============
- % mkdir my-iso
-
- NOTE: The file name on the ISO must be: "user-data.txt"
- % cp simple_scirpt.bash my-iso/user-data.txt
-
- % genisoimage -o user-data.iso -r my-iso
-
- Verify the ISO:
- ===============
- % sudo mkdir /media/vsphere_iso
- % sudo mount -o loop JoeV_CI_02.iso /media/vsphere_iso
- % cat /media/vsphere_iso/user-data.txt
- % sudo umount /media/vsphere_iso
-
- Then, launch the vSphere VM the ISO user-data.iso attached as a CDrom.
-
- It is also possible to launch a vSphere VM and pass optional user
- data to it using the Delta Cloud.
-
- For more inforation on Delta Cloud see: http://deltacloud.apache.org
-
MAAS:
timeout : 50
max_wait : 120
diff --git a/doc/examples/cloud-config-write-files.txt b/doc/examples/cloud-config-write-files.txt
new file mode 100644
index 00000000..9c4e3998
--- /dev/null
+++ b/doc/examples/cloud-config-write-files.txt
@@ -0,0 +1,33 @@
+#cloud-config
+# vim: syntax=yaml
+#
+# This is the configuration syntax that the write_files module
+# will know how to understand. The encoding can be b64, gzip, or (gz+b64).
+# The content will be decoded accordingly and then written to the path that is
+# provided.
+#
+# Note: Content strings here are truncated for example purposes.
+write_files:
+- encoding: b64
+ content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4...
+ owner: root:root
+ path: /etc/sysconfig/selinux
+ perms: '0644'
+- content: |
+ # My new /etc/sysconfig/samba file
+
+ SMBDOPTIONS="-D"
+ path: /etc/sysconfig/samba
+- content: !!binary |
+ f0VMRgIBAQAAAAAAAAAAAAIAPgABAAAAwARAAAAAAABAAAAAAAAAAJAVAAAAAAAAAAAAAEAAOAAI
+ AEAAHgAdAAYAAAAFAAAAQAAAAAAAAABAAEAAAAAAAEAAQAAAAAAAwAEAAAAAAADAAQAAAAAAAAgA
+ AAAAAAAAAwAAAAQAAAAAAgAAAAAAAAACQAAAAAAAAAJAAAAAAAAcAAAAAAAAABwAAAAAAAAAAQAA
+ ....
+ path: /bin/arch
+ perms: '0555'
+- encoding: gzip
+ content: !!binary |
+ H4sIAIDb/U8C/1NW1E/KzNMvzuBKTc7IV8hIzcnJVyjPL8pJ4QIA6N+MVxsAAAA=
+ path: /usr/bin/hello
+ perms: '0755'
+
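
The (gz+b64) encoding named in the example above is, as the module description suggests, gzip compression followed by base64. A sketch of producing and decoding such content; the helper names are illustrative, not part of cloud-init:

```python
import base64
import gzip
import io


def encode_gz_b64(text):
    # Produce write_files content for 'encoding: gz+b64':
    # gzip-compress the payload, then base64-encode it.
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(text.encode('utf-8'))
    return base64.b64encode(buf.getvalue()).decode('ascii')


def decode_gz_b64(blob):
    # Reverse of the above, mirroring what happens on the instance.
    raw = base64.b64decode(blob)
    with gzip.GzipFile(fileobj=io.BytesIO(raw), mode='rb') as gz:
        return gz.read().decode('utf-8')


payload = '# My new /etc/sysconfig/samba file\n\nSMBDOPTIONS="-D"\n'
blob = encode_gz_b64(payload)
print(decode_gz_b64(blob) == payload)  # True: the roundtrip is lossless
```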
diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
index 82055d09..1e6628d2 100644
--- a/doc/examples/cloud-config.txt
+++ b/doc/examples/cloud-config.txt
@@ -28,11 +28,14 @@ apt_upgrade: true
# then use the mirror provided by the DataSource found.
# In EC2, that means using <region>.ec2.archive.ubuntu.com
#
-# if no mirror is provided by the DataSource, then search
-# for dns names '<distro>-mirror' in each of
+# if no mirror is provided by the DataSource, and 'apt_mirror_search_dns' is
+# true, then search for dns names '<distro>-mirror' in each of
# - fqdn of this host per cloud metadata
# - localdomain
# - no domain (which would search domains listed in /etc/resolv.conf)
+# If there is a dns entry for <distro>-mirror, then it is assumed that there
+# is a distro mirror at http://<distro>-mirror.<domain>/<distro>
+#
# That gives the cloud provider the opportunity to set mirrors of a distro
# up and expose them only by creating dns entries.
#
@@ -42,6 +45,8 @@ apt_mirror_search:
- http://local-mirror.mydomain
- http://archive.ubuntu.com
+apt_mirror_search_dns: False
+
# apt_proxy (configure Acquire::HTTP::Proxy)
apt_proxy: http://my.apt.proxy:3128
diff --git a/packages/bddeb b/packages/bddeb
index 10ad08b3..2cfddb99 100755
--- a/packages/bddeb
+++ b/packages/bddeb
@@ -3,13 +3,21 @@
import os
import shutil
import sys
-import glob
+
+
+def find_root():
+ # expected path is in <top_dir>/packages/
+ top_dir = os.environ.get("CLOUD_INIT_TOP_D", None)
+ if top_dir is None:
+ top_dir = os.path.dirname(
+ os.path.dirname(os.path.abspath(sys.argv[0])))
+ if os.path.isfile(os.path.join(top_dir, 'setup.py')):
+ return os.path.abspath(top_dir)
+ raise OSError(("Unable to determine where your cloud-init topdir is."
+ " set CLOUD_INIT_TOP_D?"))
# Use the util functions from cloudinit
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(
- sys.argv[0]), os.pardir, os.pardir))
-if os.path.exists(os.path.join(possible_topdir, "cloudinit", "__init__.py")):
- sys.path.insert(0, possible_topdir)
+sys.path.insert(0, find_root())
from cloudinit import templater
from cloudinit import util
@@ -17,24 +25,27 @@ from cloudinit import util
import argparse
# Package names that will showup in requires to what we can actually
-# use in our debian 'control' file
+# use in our debian 'control' file; this is a translation of the pypi
+# package names in the 'requires' file to debian/ubuntu package names.
PKG_MP = {
- 'tempita': 'python-tempita',
'boto': 'python-boto',
'configobj': 'python-configobj',
'oauth': 'python-oauth',
- 'yaml': 'python-yaml',
+ 'pyyaml': 'python-yaml',
'prettytable': 'python-prettytable',
'argparse': 'python-argparse',
+ 'cheetah': 'python-cheetah',
}
+DEBUILD_ARGS = ["-us", "-S", "-uc"]
-def write_debian_folder(root, version, revno, init_sys):
+def write_debian_folder(root, version, revno):
deb_dir = util.abs_join(root, 'debian')
os.makedirs(deb_dir)
-
+
# Fill in the change log template
- templater.render_to_file(util.abs_join('debian', 'changelog'),
+ templater.render_to_file(util.abs_join(find_root(),
+ 'packages', 'debian', 'changelog.in'),
util.abs_join(deb_dir, 'changelog'),
params={
'version': version,
@@ -42,56 +53,45 @@ def write_debian_folder(root, version, revno, init_sys):
})
# Write out the control file template
- cmd = [sys.executable,
- util.abs_join(os.pardir, 'tools', 'read-dependencies')]
+ cmd = [util.abs_join(find_root(), 'tools', 'read-dependencies')]
(stdout, _stderr) = util.subp(cmd)
-
- # Map to known packages
pkgs = [p.lower().strip() for p in stdout.splitlines()]
+
+ # Map to known packages
requires = []
for p in pkgs:
- tgt_pkg = None
- for name in PKG_MP.keys():
- if p.find(name) != -1:
- tgt_pkg = PKG_MP.get(name)
- break
+ tgt_pkg = PKG_MP.get(p)
if not tgt_pkg:
- raise RuntimeError(("Do not know how to translate %s to "
- " a known package") % (p))
+ raise RuntimeError(("Do not know how to translate pypi dependency"
+ " %r to a known package") % (p))
else:
requires.append(tgt_pkg)
- templater.render_to_file(util.abs_join('debian', 'control'),
+ templater.render_to_file(util.abs_join(find_root(),
+ 'packages', 'debian', 'control.in'),
util.abs_join(deb_dir, 'control'),
params={'requires': requires})
-
- templater.render_to_file(util.abs_join('debian', 'rules'),
- util.abs_join(deb_dir, 'rules'),
- params={'init_sys': init_sys})
-
+
# Just copy the following directly
- for base_fn in ['dirs', 'copyright', 'compat', 'pycompat']:
- shutil.copy(util.abs_join('debian', base_fn),
+ for base_fn in ['dirs', 'copyright', 'compat', 'pycompat', 'rules']:
+ shutil.copy(util.abs_join(find_root(),
+ 'packages', 'debian', base_fn),
util.abs_join(deb_dir, base_fn))
def main():
parser = argparse.ArgumentParser()
- parser.add_argument("-n", "--no-sign", dest="sign",
- help=("attempt to sign "
- "the package (default: %(default)s)"),
- default=True,
- action='store_false')
parser.add_argument("-v", "--verbose", dest="verbose",
help=("run verbosely"
" (default: %(default)s)"),
default=False,
action='store_true')
- parser.add_argument("-b", "--boot", dest="boot",
- help="select boot type (default: %(default)s)",
- metavar="TYPE", default='upstart',
- choices=('upstart', 'upstart-local'))
+
+ for ent in DEBUILD_ARGS:
+ parser.add_argument(ent, dest="debuild_args", action='append_const',
+ const=ent, help=("pass through '%s' to debuild" % ent))
+
args = parser.parse_args()
capture = True
@@ -100,21 +100,19 @@ def main():
with util.tempdir() as tdir:
- cmd = [sys.executable,
- util.abs_join(os.pardir, 'tools', 'read-version')]
+ cmd = [util.abs_join(find_root(), 'tools', 'read-version')]
(sysout, _stderr) = util.subp(cmd)
version = sysout.strip()
cmd = ['bzr', 'revno']
(sysout, _stderr) = util.subp(cmd)
revno = sysout.strip()
-
+
# This is really only a temporary archive
# since we will extract it then add in the debian
# folder, then re-archive it for debian happiness
print("Creating a temporary tarball using the 'make-tarball' helper")
- cmd = [sys.executable,
- util.abs_join(os.getcwd(), 'make-tarball')]
+ cmd = [util.abs_join(find_root(), 'tools', 'make-tarball')]
(sysout, _stderr) = util.subp(cmd)
arch_fn = sysout.strip()
tmp_arch_fn = util.abs_join(tdir, os.path.basename(arch_fn))
@@ -123,47 +121,58 @@ def main():
print("Extracting temporary tarball %r" % (tmp_arch_fn))
cmd = ['tar', '-xvzf', tmp_arch_fn, '-C', tdir]
util.subp(cmd, capture=capture)
- base_name = os.path.basename(arch_fn)[:-len(".tar.gz")]
- shutil.move(util.abs_join(tdir, base_name),
- util.abs_join(tdir, 'cloud-init'))
+ extracted_name = tmp_arch_fn[:-len('.tar.gz')]
+ os.remove(tmp_arch_fn)
+
+ xdir = util.abs_join(tdir, 'cloud-init')
+ shutil.move(extracted_name, xdir)
- print("Creating a debian/ folder in %r" %
- (util.abs_join(tdir, 'cloud-init')))
- write_debian_folder(util.abs_join(tdir, 'cloud-init'),
- version, revno, args.boot)
+ print("Creating a debian/ folder in %r" % (xdir))
+ write_debian_folder(xdir, version, revno)
# The naming here seems to follow some debian standard
# so it will whine if it is changed...
- tar_fn = "cloud-init_%s~%s.orig.tar.gz" % (version, revno)
- print("Archiving that new folder into %r" % (tar_fn))
- cmd = ['tar', '-czvf',
- util.abs_join(tdir, tar_fn),
- '-C', util.abs_join(tdir, 'cloud-init')]
- cmd.extend(os.listdir(util.abs_join(tdir, 'cloud-init')))
+ tar_fn = "cloud-init_%s~bzr%s.orig.tar.gz" % (version, revno)
+ print("Archiving the adjusted source into %r" %
+ (util.abs_join(tdir, tar_fn)))
+ cmd = ['tar', '-czvf',
+ util.abs_join(tdir, tar_fn),
+ '-C', xdir]
+ cmd.extend(os.listdir(xdir))
util.subp(cmd, capture=capture)
- shutil.copy(util.abs_join(tdir, tar_fn), tar_fn)
- print("Wrote out archive %r" % (util.abs_join(tar_fn)))
-
- print("Running 'debuild' in %r" % (util.abs_join(tdir, 'cloud-init')))
- with util.chdir(util.abs_join(tdir, 'cloud-init')):
- cmd = ['debuild']
- if not args.sign:
- cmd.extend(['-us', '-uc'])
+
+ # Copy it locally for reference
+ shutil.copy(util.abs_join(tdir, tar_fn),
+ util.abs_join(os.getcwd(), tar_fn))
+ print("Copied that archive to %r for local usage (if desired)." %
+ (util.abs_join(os.getcwd(), tar_fn)))
+
+ print("Running 'debuild' in %r" % (xdir))
+ with util.chdir(xdir):
+ cmd = ['debuild', '--preserve-envvar', 'INIT_SYSTEM']
+ if args.debuild_args:
+ cmd.extend(args.debuild_args)
util.subp(cmd, capture=capture)
- globs = []
- globs.extend(glob.glob("%s/*.deb" %
- (os.path.join(tdir))))
link_fn = os.path.join(os.getcwd(), 'cloud-init_all.deb')
- for fn in globs:
- base_fn = os.path.basename(fn)
- shutil.move(fn, base_fn)
- print("Wrote out debian package %r" % (base_fn))
- if fn.endswith('_all.deb'):
+ link_dsc = os.path.join(os.getcwd(), 'cloud-init.dsc')
+ for base_fn in os.listdir(os.path.join(tdir)):
+ full_fn = os.path.join(tdir, base_fn)
+ if not os.path.isfile(full_fn):
+ continue
+ shutil.move(full_fn, base_fn)
+ print("Wrote %r" % (base_fn))
+ if base_fn.endswith('_all.deb'):
# Add in the local link
util.del_file(link_fn)
os.symlink(base_fn, link_fn)
- print("Linked %r to %r" % (base_fn, link_fn))
+ print("Linked %r to %r" % (base_fn,
+ os.path.basename(link_fn)))
+ if base_fn.endswith('.dsc'):
+ util.del_file(link_dsc)
+ os.symlink(base_fn, link_dsc)
+ print("Linked %r to %r" % (base_fn,
+ os.path.basename(link_dsc)))
return 0
diff --git a/packages/brpm b/packages/brpm
index 1d05bd2a..77de0cf2 100755
--- a/packages/brpm
+++ b/packages/brpm
@@ -1,5 +1,6 @@
#!/usr/bin/python
+import argparse
import contextlib
import glob
import os
@@ -9,31 +10,42 @@ import sys
import tempfile
import re
-import argparse
+from datetime import datetime
+
+
+def find_root():
+ # expected path is in <top_dir>/packages/
+ top_dir = os.environ.get("CLOUD_INIT_TOP_D", None)
+ if top_dir is None:
+ top_dir = os.path.dirname(os.path.dirname(os.path.abspath(sys.argv[0])))
+ if os.path.isfile(os.path.join(top_dir, 'setup.py')):
+ return os.path.abspath(top_dir)
+ raise OSError(("Unable to determine where your cloud-init topdir is."
+ " set CLOUD_INIT_TOP_D?"))
+
# Use the util functions from cloudinit
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(
- sys.argv[0]), os.pardir, os.pardir))
-if os.path.exists(os.path.join(possible_topdir, "cloudinit", "__init__.py")):
- sys.path.insert(0, possible_topdir)
+sys.path.insert(0, find_root())
from cloudinit import templater
from cloudinit import util
-from datetime import datetime
-
-
# Mapping of expected packages to there full name...
+# this is a translation of the pypi package names in the 'requires'
+# file to redhat/fedora package names.
PKG_MP = {
'boto': 'python-boto',
- 'tempita': 'python-tempita',
+ 'cheetah': 'python-cheetah',
'prettytable': 'python-prettytable',
'oauth': 'python-oauth',
'configobj': 'python-configobj',
- 'yaml': 'PyYAML',
- 'argparse': 'python-argparse'
+ 'pyyaml': 'PyYAML',
+ 'argparse': 'python-argparse',
}
+# Subdirectories of the ~/rpmbuild dir
+RPM_BUILD_SUBDIRS = ['BUILD', 'RPMS', 'SOURCES', 'SPECS', 'SRPMS']
+
def get_log_header(version):
# Try to find the version in the tags output
@@ -79,11 +91,10 @@ def format_change_line(ds, who, comment=None):
return "* %s" % (d)
-def generate_spec_contents(args, tmpl_fn):
+def generate_spec_contents(args, tmpl_fn, arc_fn):
# Figure out the version and revno
- cmd = [sys.executable,
- util.abs_join(os.pardir, 'tools', 'read-version')]
+ cmd = [util.abs_join(find_root(), 'tools', 'read-version')]
(stdout, _stderr) = util.subp(cmd)
version = stdout.strip()
@@ -95,34 +106,26 @@ def generate_spec_contents(args, tmpl_fn):
subs = {}
subs['version'] = version
subs['revno'] = revno
- subs['release'] = revno
- subs['archive_name'] = '%{name}-%{version}-' + revno + '.tar.gz'
- subs['bd_requires'] = ['python-devel', 'python-setuptools']
+ subs['release'] = "bzr%s" % (revno)
+ subs['archive_name'] = arc_fn
- cmd = [sys.executable,
- util.abs_join(os.pardir, 'tools', 'read-dependencies')]
+ cmd = [util.abs_join(find_root(), 'tools', 'read-dependencies')]
(stdout, _stderr) = util.subp(cmd)
-
- # Map to known packages
pkgs = [p.lower().strip() for p in stdout.splitlines()]
# Map to known packages
requires = []
for p in pkgs:
- tgt_pkg = None
- for name in PKG_MP.keys():
- if p.find(name) != -1:
- tgt_pkg = PKG_MP.get(name)
- break
+ tgt_pkg = PKG_MP.get(p)
if not tgt_pkg:
- raise RuntimeError(("Do not know how to translate %s to "
- " a known package") % (p))
+ raise RuntimeError(("Do not know how to translate pypi dependency"
+ " %r to a known package") % (p))
else:
requires.append(tgt_pkg)
subs['requires'] = requires
# Format a nice changelog (as best as we can)
- changelog = util.load_file(util.abs_join(os.pardir, 'ChangeLog'))
+ changelog = util.load_file(util.abs_join(find_root(), 'ChangeLog'))
changelog_lines = []
for line in changelog.splitlines():
if not line.strip():
@@ -135,15 +138,10 @@ def generate_spec_contents(args, tmpl_fn):
changelog_lines.append(line)
subs['changelog'] = "\n".join(changelog_lines)
- if args.boot == 'initd':
- subs['init_d'] = True
- subs['init_d_local'] = False
- elif args.boot == 'initd-local':
- subs['init_d'] = True
- subs['init_d_local'] = True
+ if args.boot == 'sysvinit':
+ subs['sysvinit'] = True
else:
- subs['init_d'] = False
- subs['init_d_local'] = False
+ subs['sysvinit'] = False
if args.boot == 'systemd':
subs['systemd'] = True
@@ -159,8 +157,8 @@ def main():
parser = argparse.ArgumentParser()
parser.add_argument("-b", "--boot", dest="boot",
help="select boot type (default: %(default)s)",
- metavar="TYPE", default='initd',
- choices=('initd', 'systemd', 'initd-local'))
+ metavar="TYPE", default='sysvinit',
+ choices=('sysvinit', 'systemd'))
parser.add_argument("-v", "--verbose", dest="verbose",
help=("run verbosely"
" (default: %(default)s)"),
@@ -175,39 +173,49 @@ def main():
root_dir = os.path.expanduser("~/rpmbuild")
if os.path.isdir(root_dir):
shutil.rmtree(root_dir)
+
arc_dir = util.abs_join(root_dir, 'SOURCES')
- util.ensure_dirs([root_dir, arc_dir])
+ build_dirs = [root_dir, arc_dir]
+ for dname in RPM_BUILD_SUBDIRS:
+ build_dirs.append(util.abs_join(root_dir, dname))
+ build_dirs.sort()
+ util.ensure_dirs(build_dirs)
# Archive the code
- cmd = [sys.executable,
- util.abs_join(os.getcwd(), 'make-tarball')]
+ cmd = [util.abs_join(find_root(), 'tools', 'make-tarball')]
(stdout, _stderr) = util.subp(cmd)
archive_fn = stdout.strip()
real_archive_fn = os.path.join(arc_dir, os.path.basename(archive_fn))
shutil.move(archive_fn, real_archive_fn)
+ print("Archived the code in %r" % (real_archive_fn))
# Form the spec file to be used
- tmpl_fn = util.abs_join(os.getcwd(), 'redhat', 'cloud-init.spec')
- contents = generate_spec_contents(args, tmpl_fn)
- spec_fn = os.path.join(root_dir, 'cloud-init.spec')
+ tmpl_fn = util.abs_join(find_root(), 'packages',
+ 'redhat', 'cloud-init.spec.in')
+ contents = generate_spec_contents(args, tmpl_fn,
+ os.path.basename(archive_fn))
+ spec_fn = util.abs_join(root_dir, 'cloud-init.spec')
util.write_file(spec_fn, contents)
+ print("Created spec file at %r" % (spec_fn))
# Now build it!
- cmd = ['rpmbuild', '-ba', spec_fn]
+ print("Running 'rpmbuild' in %r" % (root_dir))
+ cmd = ['rpmbuild', '--clean',
+ '-ba', spec_fn]
util.subp(cmd, capture=capture)
# Copy the items built to our local dir
globs = []
globs.extend(glob.glob("%s/*.rpm" %
- (os.path.join(root_dir, 'RPMS', 'noarch'))))
+ (util.abs_join(root_dir, 'RPMS', 'noarch'))))
globs.extend(glob.glob("%s/*.rpm" %
- (os.path.join(root_dir, 'RPMS'))))
+ (util.abs_join(root_dir, 'RPMS'))))
globs.extend(glob.glob("%s/*.rpm" %
- (os.path.join(root_dir, 'SRPMS'))))
+ (util.abs_join(root_dir, 'SRPMS'))))
for rpm_fn in globs:
tgt_fn = util.abs_join(os.getcwd(), os.path.basename(rpm_fn))
shutil.move(rpm_fn, tgt_fn)
- print(tgt_fn)
+ print("Wrote out redhat package %r" % (tgt_fn))
return 0
diff --git a/packages/debian/changelog b/packages/debian/changelog
deleted file mode 100644
index ac5bcf98..00000000
--- a/packages/debian/changelog
+++ /dev/null
@@ -1,5 +0,0 @@
-cloud-init ({{version}}~{{revision}}-1) UNRELEASED; urgency=low
-
- * build
-
- -- Scott Moser <smoser@ubuntu.com> Fri, 16 Dec 2011 11:50:25 -0500
diff --git a/packages/debian/changelog.in b/packages/debian/changelog.in
new file mode 100644
index 00000000..e3e94f54
--- /dev/null
+++ b/packages/debian/changelog.in
@@ -0,0 +1,6 @@
+## This is a cheetah template
+cloud-init (${version}~bzr${revision}-1) UNRELEASED; urgency=low
+
+ * build
+
+ -- Scott Moser <smoser@ubuntu.com> Fri, 16 Dec 2011 11:50:25 -0500
diff --git a/packages/debian/control b/packages/debian/control.in
index e00901af..edb5aff5 100644
--- a/packages/debian/control
+++ b/packages/debian/control.in
@@ -1,14 +1,18 @@
+## This is a cheetah template
Source: cloud-init
Section: admin
Priority: extra
Maintainer: Scott Moser <smoser@ubuntu.com>
Build-Depends: cdbs,
- debhelper (>= 5.0.38),
+ debhelper (>= 5.0.38),
python (>= 2.6.6-3~),
python-nose,
pyflakes,
pylint,
+ python-setuptools,
+ python-cheetah,
python-mocker,
+ python-setuptools
XS-Python-Version: all
Standards-Version: 3.9.3
@@ -17,13 +21,13 @@ Architecture: all
Depends: cloud-utils,
procps,
python,
-{{for r in requires}}
- {{r}},
-{{endfor}}
- python-software-properties,
- ${misc:Depends},
- ${python:Depends}
-XB-Python-Version: ${python:Versions}
+#for $r in $requires
+ ${r},
+#end for
+ python-software-properties | software-properties-common,
+ \${misc:Depends},
+ \${python:Depends}
+XB-Python-Version: \${python:Versions}
Description: Init scripts for cloud instances
Cloud instances need special scripts to run during initialisation
to retrieve and install ssh keys and to let the user run various scripts.
diff --git a/packages/debian/rules b/packages/debian/rules
index 87cd6538..7623ac9d 100755
--- a/packages/debian/rules
+++ b/packages/debian/rules
@@ -1,13 +1,14 @@
#!/usr/bin/make -f
DEB_PYTHON2_MODULE_PACKAGES = cloud-init
+INIT_SYSTEM ?= upstart
binary-install/cloud-init::cloud-init-fixups
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
-DEB_PYTHON_INSTALL_ARGS_ALL += --init-system={{init_sys}}
+DEB_PYTHON_INSTALL_ARGS_ALL += --init-system=$(INIT_SYSTEM)
DEB_DH_INSTALL_SOURCEDIR := debian/tmp
diff --git a/packages/make-tarball b/packages/make-tarball
deleted file mode 100755
index 43a6fc33..00000000
--- a/packages/make-tarball
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/python
-
-import contextlib
-import os
-import shutil
-import subprocess
-import sys
-import tempfile
-
-import optparse
-
-
-# Use the util functions from cloudinit
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(
- sys.argv[0]), os.pardir, os.pardir))
-if os.path.exists(os.path.join(possible_topdir, "cloudinit", "__init__.py")):
- sys.path.insert(0, possible_topdir)
-
-from cloudinit import util
-
-
-def find_versioned_files():
- (stdout, _stderr) = util.subp(['bzr', 'ls', '--versioned', '--recursive'])
- fns = [fn for fn in stdout.splitlines()
- if fn and not fn.startswith('.')]
- fns.sort()
- return fns
-
-
-def copy(fn, where_to, verbose):
- if verbose:
- print("Copying %r --> %r" % (fn, where_to))
- if os.path.isfile(fn):
- shutil.copy(fn, where_to)
- elif os.path.isdir(fn) and not os.path.isdir(where_to):
- os.makedirs(where_to)
- else:
- raise RuntimeError("Do not know how to copy %s" % (fn))
-
-
-def main():
-
- parser = optparse.OptionParser()
- parser.add_option("-f", "--file", dest="filename",
- help="write archive to FILE", metavar="FILE")
- parser.add_option("-v", "--verbose",
- action="store_true", dest="verbose", default=False,
- help="show verbose messaging")
-
- (options, args) = parser.parse_args()
-
- base_fn = options.filename
- if not base_fn:
- (stdout, _stderr) = util.subp(['bzr', 'revno'])
- revno = stdout.strip()
- cmd = [sys.executable,
- util.abs_join(os.pardir, 'tools', 'read-version')]
- (stdout, _stderr) = util.subp(cmd)
- version = stdout.strip()
- base_fn = 'cloud-init-%s-%s' % (version, revno)
-
- with util.tempdir() as tdir:
- util.ensure_dir(util.abs_join(tdir, base_fn))
- arch_fn = '%s.tar.gz' % (base_fn)
-
- with util.chdir(os.pardir):
- fns = find_versioned_files()
- for fn in fns:
- copy(fn, util.abs_join(tdir, base_fn, fn),
- verbose=options.verbose)
-
- arch_full_fn = util.abs_join(tdir, arch_fn)
- cmd = ['tar', '-czvf', arch_full_fn, '-C', tdir, base_fn]
- if options.verbose:
- print("Creating an archive from directory %r to %r" %
- (util.abs_join(tdir, base_fn), arch_full_fn))
-
- util.subp(cmd, capture=(not options.verbose))
- shutil.move(util.abs_join(tdir, arch_fn),
- util.abs_join(os.getcwd(), arch_fn))
-
- print(os.path.abspath(arch_fn))
-
- return 0
-
-
-if __name__ == '__main__':
- sys.exit(main())
-
diff --git a/packages/redhat/cloud-init.spec b/packages/redhat/cloud-init.spec.in
index d0f83a4b..35b27beb 100644
--- a/packages/redhat/cloud-init.spec
+++ b/packages/redhat/cloud-init.spec.in
@@ -1,3 +1,4 @@
+## This is a cheetah template
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
# See: http://www.zarb.org/~jasonc/macros.php
@@ -5,20 +6,21 @@
# Or: http://www.rpm.org/max-rpm/ch-rpm-inside.html
Name: cloud-init
-Version: {{version}}
-Release: {{release}}%{?dist}
+Version: ${version}
+Release: ${release}%{?dist}
Summary: Cloud instance init scripts
Group: System Environment/Base
License: GPLv3
URL: http://launchpad.net/cloud-init
-Source0: {{archive_name}}
+Source0: ${archive_name}
BuildArch: noarch
BuildRoot: %{_tmppath}
BuildRequires: python-devel
BuildRequires: python-setuptools
+BuildRequires: python-cheetah
# System util packages needed
Requires: shadow-utils
@@ -30,23 +32,23 @@ Requires: procps
Requires: shadow-utils
# Install pypi 'dynamic' requirements
-{{for r in requires}}
-Requires: {{r}}
-{{endfor}}
+#for $r in $requires
+Requires: ${r}
+#end for
-{{if init_d}}
+#if $sysvinit
Requires(post): chkconfig
Requires(postun): initscripts
Requires(preun): chkconfig
Requires(preun): initscripts
-{{endif}}
+#end if
-{{if systemd}}
+#if $systemd
BuildRequires: systemd-units
Requires(post): systemd-units
Requires(postun): systemd-units
Requires(preun): systemd-units
-{{endif}}
+#end if
%description
Cloud-init is a set of init scripts for cloud instances. Cloud instances
@@ -54,104 +56,89 @@ need special scripts to run during initialization to retrieve and install
ssh keys and to let the user run various scripts.
%prep
-%setup -q -n %{name}-%{version}-{{revno}}
+%setup -q -n %{name}-%{version}~${release}
%build
%{__python} setup.py build
%install
-rm -rf $RPM_BUILD_ROOT
+rm -rf \$RPM_BUILD_ROOT
%{__python} setup.py install -O1 \
- --skip-build --root $RPM_BUILD_ROOT \
- --init-system={{init_sys}}
+ --skip-build --root \$RPM_BUILD_ROOT \
+ --init-system=${init_sys}
# Note that /etc/rsyslog.d didn't exist by default until F15.
# el6 request: https://bugzilla.redhat.com/show_bug.cgi?id=740420
-mkdir -p $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d
+mkdir -p \$RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d
cp -p tools/21-cloudinit.conf \
- $RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d/21-cloudinit.conf
+ \$RPM_BUILD_ROOT/%{_sysconfdir}/rsyslog.d/21-cloudinit.conf
%clean
-rm -rf $RPM_BUILD_ROOT
+rm -rf \$RPM_BUILD_ROOT
%post
-{{if systemd}}
-if [ $1 -eq 1 ]
+#if $systemd
+if [ \$1 -eq 1 ]
then
/bin/systemctl enable cloud-config.service >/dev/null 2>&1 || :
/bin/systemctl enable cloud-final.service >/dev/null 2>&1 || :
/bin/systemctl enable cloud-init.service >/dev/null 2>&1 || :
/bin/systemctl enable cloud-init-local.service >/dev/null 2>&1 || :
fi
-{{endif}}
+#end if
-{{if init_d_local}}
+#if $sysvinit
/sbin/chkconfig --add %{_initrddir}/cloud-init-local
-{{elif init_d}}
/sbin/chkconfig --add %{_initrddir}/cloud-init
-{{endif}}
-{{if init_d}}
/sbin/chkconfig --add %{_initrddir}/cloud-config
/sbin/chkconfig --add %{_initrddir}/cloud-final
-{{endif}}
+#end if
%preun
-{{if init_d_local}}
-if [ $1 -eq 0 ]
+#if $sysvinit
+if [ \$1 -eq 0 ]
then
- /sbin/service cloud-init-local stop >/dev/null 2>&1
- /sbin/chkconfig --del cloud-init-local
+ /sbin/service cloud-init stop >/dev/null 2>&1 || :
+ /sbin/chkconfig --del cloud-init || :
+ /sbin/service cloud-init-local stop >/dev/null 2>&1 || :
+ /sbin/chkconfig --del cloud-init-local || :
+ /sbin/service cloud-config stop >/dev/null 2>&1 || :
+ /sbin/chkconfig --del cloud-config || :
+ /sbin/service cloud-final stop >/dev/null 2>&1 || :
+ /sbin/chkconfig --del cloud-final || :
fi
-{{elif init_d}}
-if [ $1 -eq 0 ]
-then
- /sbin/service cloud-init stop >/dev/null 2>&1
- /sbin/chkconfig --del cloud-init
-fi
-{{endif}}
-{{if init_d}}
-if [ $1 -eq 0 ]
-then
- /sbin/service cloud-config stop >/dev/null 2>&1
- /sbin/chkconfig --del cloud-config
- /sbin/service cloud-final stop >/dev/null 2>&1
- /sbin/chkconfig --del cloud-final
-fi
-{{endif}}
+#end if
-{{if systemd}}
-if [ $1 -eq 0 ]
+#if $systemd
+if [ \$1 -eq 0 ]
then
/bin/systemctl --no-reload disable cloud-config.service >/dev/null 2>&1 || :
/bin/systemctl --no-reload disable cloud-final.service >/dev/null 2>&1 || :
/bin/systemctl --no-reload disable cloud-init.service >/dev/null 2>&1 || :
/bin/systemctl --no-reload disable cloud-init-local.service >/dev/null 2>&1 || :
fi
-{{endif}}
+#end if
%postun
-{{if systemd}}
+#if $systemd
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
-{{endif}}
+#end if
%files
-{{if init_d}}
+#if $sysvinit
%attr(0755, root, root) %{_initddir}/cloud-config
%attr(0755, root, root) %{_initddir}/cloud-final
-{{endif}}
-{{if init_d_local}}
%attr(0755, root, root) %{_initddir}/cloud-init-local
-{{elif init_d}}
%attr(0755, root, root) %{_initddir}/cloud-init
-{{endif}}
+#end if
-{{if systemd}}
+#if $systemd
%{_unitdir}/cloud-*
-{{endif}}
+#end if
# Program binaries
%{_bindir}/cloud-init*
@@ -180,4 +167,4 @@ fi
%changelog
-{{changelog}}
+${changelog}
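The spec-file hunks above convert the packaging template from the previous `{{...}}` brace placeholders to Cheetah syntax. As a quick reference, the mapping applied throughout this diff is roughly the following (Cheetah semantics as commonly documented; the backslash escape is what keeps RPM's own shell variables out of the template engine's hands):

```
{{version}}                          ->  ${version}
{{if systemd}} ... {{endif}}         ->  #if $systemd ... #end if
{{for r in requires}} ... {{endfor}} ->  #for $r in $requires ... #end for
$RPM_BUILD_ROOT                      ->  \$RPM_BUILD_ROOT  (literal $ in rendered output)
## This is a cheetah template            (template line comment, stripped when rendered)
```

Note that the old `init_d` / `init_d_local` conditionals are also collapsed into a single `$sysvinit` flag, which is why the `%post`/`%preun` scriptlets now handle all four services inside one conditional block.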
diff --git a/setup.py b/setup.py
index 06b897a5..24476681 100755
--- a/setup.py
+++ b/setup.py
@@ -23,12 +23,10 @@
from glob import glob
import os
-import re
import setuptools
from setuptools.command.install import install
-from distutils.command.install_data import install_data
from distutils.errors import DistutilsArgError
import subprocess
@@ -39,9 +37,9 @@ def is_f(p):
INITSYS_FILES = {
- 'sysvinit': filter((lambda x: is_f(x)), glob('sysvinit/*')),
- 'systemd': filter((lambda x: is_f(x)), glob('systemd/*')),
- 'upstart': filter((lambda x: is_f(x)), glob('upstart/*')),
+ 'sysvinit': [f for f in glob('sysvinit/*') if is_f(f)],
+ 'systemd': [f for f in glob('systemd/*') if is_f(f)],
+ 'upstart': [f for f in glob('upstart/*') if is_f(f)],
}
INITSYS_ROOTS = {
'sysvinit': '/etc/rc.d/init.d',
@@ -70,17 +68,18 @@ def tiny_p(cmd, capture=True):
def get_version():
cmd = ['tools/read-version']
(ver, _e) = tiny_p(cmd)
- return ver.strip()
+ return str(ver).strip()
def read_requires():
cmd = ['tools/read-dependencies']
(deps, _e) = tiny_p(cmd)
- return deps.splitlines()
+ return str(deps).splitlines()
# TODO: Is there a better way to do this??
class InitsysInstallData(install):
+ init_system = None
user_options = install.user_options + [
# This will magically show up in member variable 'init_sys'
('init-system=', None,
@@ -96,13 +95,12 @@ class InitsysInstallData(install):
def finalize_options(self):
install.finalize_options(self)
if self.init_system and self.init_system not in INITSYS_TYPES:
- raise DistutilsArgError(
- ("You must specify one of (%s) when"
- " specifying a init system!") % (", ".join(INITSYS_TYPES))
- )
+ raise DistutilsArgError(("You must specify one of (%s) when"
+                " specifying an init system!") % (", ".join(INITSYS_TYPES)))
elif self.init_system:
- self.distribution.data_files.append((INITSYS_ROOTS[self.init_system],
- INITSYS_FILES[self.init_system]))
+ self.distribution.data_files.append(
+ (INITSYS_ROOTS[self.init_system],
+ INITSYS_FILES[self.init_system]))
     # Force that command to reinitialize (with new file list)
self.distribution.reinitialize_command('install_data', True)
@@ -123,11 +121,15 @@ setuptools.setup(name='cloud-init',
('/etc/cloud/templates', glob('templates/*')),
('/usr/share/cloud-init', []),
('/usr/lib/cloud-init',
- ['tools/uncloud-init', 'tools/write-ssh-key-fingerprints']),
- ('/usr/share/doc/cloud-init', filter(is_f, glob('doc/*'))),
- ('/usr/share/doc/cloud-init/examples', filter(is_f, glob('doc/examples/*'))),
- ('/usr/share/doc/cloud-init/examples/seed', filter(is_f, glob('doc/examples/seed/*'))),
- ],
+ ['tools/uncloud-init',
+ 'tools/write-ssh-key-fingerprints']),
+ ('/usr/share/doc/cloud-init',
+ [f for f in glob('doc/*') if is_f(f)]),
+ ('/usr/share/doc/cloud-init/examples',
+ [f for f in glob('doc/examples/*') if is_f(f)]),
+ ('/usr/share/doc/cloud-init/examples/seed',
+ [f for f in glob('doc/examples/seed/*') if is_f(f)]),
+ ],
install_requires=read_requires(),
cmdclass = {
# Use a subclass for install that handles
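The `setup.py` hunk above swaps `filter((lambda x: is_f(x)), glob(...))` for list comprehensions. Beyond readability, this future-proofs the file lists: in Python 3, `filter()` returns a lazy iterator that can be consumed only once, while `data_files` entries are expected to behave like real lists. A minimal sketch of the pattern (directory and file names here are illustrative, not from the source tree):

```python
import os
import tempfile

# A list comprehension yields a real, reusable list on both Python 2 and 3,
# unlike Python 3's filter(), which returns a one-shot iterator.
def is_f(p):
    return os.path.isfile(p)

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "a.conf"), "w").close()   # a regular file
    os.mkdir(os.path.join(d, "sub"))               # a directory, to be filtered out
    paths = [os.path.join(d, n) for n in sorted(os.listdir(d))]

    files = [f for f in paths if is_f(f)]          # keeps only regular files
    print([os.path.basename(f) for f in files])    # -> ['a.conf']
```

The same comprehension shape is used for the `INITSYS_FILES` map and the three `doc/` glob entries later in the diff.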
diff --git a/sysvinit/cloud-config b/sysvinit/cloud-config
index dd0bca8b..e587446d 100755
--- a/sysvinit/cloud-config
+++ b/sysvinit/cloud-config
@@ -25,7 +25,7 @@
### BEGIN INIT INFO
# Provides: cloud-config
-# Required-Start: cloud-init
+# Required-Start: cloud-init cloud-init-local
# Should-Start: $time
# Required-Stop:
# Should-Stop:
diff --git a/sysvinit/cloud-final b/sysvinit/cloud-final
index 2e462c17..5deb8457 100755
--- a/sysvinit/cloud-final
+++ b/sysvinit/cloud-final
@@ -25,7 +25,7 @@
### BEGIN INIT INFO
# Provides: cloud-final
-# Required-Start: $all cloud-init cloud-config
+# Required-Start: $all cloud-config
# Should-Start: $time
# Required-Stop:
# Should-Stop:
diff --git a/sysvinit/cloud-init b/sysvinit/cloud-init
index 7726c452..4b44a615 100755
--- a/sysvinit/cloud-init
+++ b/sysvinit/cloud-init
@@ -26,7 +26,7 @@
### BEGIN INIT INFO
# Provides: cloud-init
# Required-Start: $local_fs $network $named $remote_fs
-# Should-Start: $time
+# Should-Start: $time cloud-init-local
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5
diff --git a/sysvinit/cloud-init-local b/sysvinit/cloud-init-local
index bf5d409a..0c63b9b0 100755
--- a/sysvinit/cloud-init-local
+++ b/sysvinit/cloud-init-local
@@ -24,7 +24,7 @@
# Also based on dhcpd in RHEL (for comparison)
### BEGIN INIT INFO
-# Provides: cloud-init
+# Provides: cloud-init-local
# Required-Start: $local_fs $remote_fs
# Should-Start: $time
# Required-Stop:
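Taken together, the four sysvinit header edits above (a corrected `Provides:` for cloud-init-local, plus the new `Required-Start`/`Should-Start` entries) express this boot ordering:

```
cloud-init-local   needs $local_fs $remote_fs       (Provides: cloud-init-local)
      |
cloud-init         needs $local_fs $network $named $remote_fs,
      |            Should-Start after cloud-init-local
cloud-config       Required-Start: cloud-init cloud-init-local
      |
cloud-final        Required-Start: $all cloud-config
```

The previous `Provides: cloud-init` line in cloud-init-local clashed with the cloud-init script itself, which would confuse LSB dependency resolution.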
diff --git a/templates/chef_client.rb.tmpl b/templates/chef_client.rb.tmpl
index 35123ced..7981cba7 100644
--- a/templates/chef_client.rb.tmpl
+++ b/templates/chef_client.rb.tmpl
@@ -1,12 +1,22 @@
+#*
+ This file is only utilized if the module 'cc_chef' is enabled in
+ cloud-config. Specifically, in order to enable it
+ you need to add the following to config:
+ chef:
+ validation_key: XYZ
+ validation_cert: XYZ
+ validation_name: XYZ
+ server_url: XYZ
+*#
log_level :info
log_location "/var/log/chef/client.log"
ssl_verify_mode :verify_none
-validation_client_name "{{validation_name}}"
+validation_client_name "$validation_name"
validation_key "/etc/chef/validation.pem"
client_key "/etc/chef/client.pem"
-chef_server_url "{{server_url}}"
-environment "{{environment}}"
-node_name "{{node_name}}"
+chef_server_url "$server_url"
+environment "$environment"
+node_name "$node_name"
json_attribs "/etc/chef/firstboot.json"
file_cache_path "/var/cache/chef"
file_backup_path "/var/backups/chef"
diff --git a/templates/hosts.redhat.tmpl b/templates/hosts.redhat.tmpl
index cfc40668..80459d95 100644
--- a/templates/hosts.redhat.tmpl
+++ b/templates/hosts.redhat.tmpl
@@ -1,22 +1,23 @@
-{{# This file /etc/cloud/templates/hosts.tmpl is only utilized
+#*
+ This file /etc/cloud/templates/hosts.redhat.tmpl is only utilized
if enabled in cloud-config. Specifically, in order to enable it
you need to add the following to config:
- manage_etc_hosts: True}}
-#
+ manage_etc_hosts: True
+*#
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
-# a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl
+# a.) make changes to the master file in /etc/cloud/templates/hosts.redhat.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
# The following lines are desirable for IPv4 capable hosts
-127.0.0.1 {{fqdn}} {{hostname}}
+127.0.0.1 ${fqdn} ${hostname}
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
-::1 {{fqdn}} {{hostname}}
+::1 ${fqdn} ${hostname}
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
diff --git a/templates/hosts.ubuntu.tmpl b/templates/hosts.ubuntu.tmpl
index 9eebe971..ae120b02 100644
--- a/templates/hosts.ubuntu.tmpl
+++ b/templates/hosts.ubuntu.tmpl
@@ -1,7 +1,9 @@
-{{# This file /etc/cloud/templates/hosts.tmpl is only utilized
- if enabled in cloud-config. Specifically, in order to enable it
- you need to add the following to config:
- manage_etc_hosts: True}}
+## This file (/etc/cloud/templates/hosts.tmpl) is only utilized
+## if enabled in cloud-config. Specifically, in order to enable it
+## you need to add the following to config:
+## manage_etc_hosts: True
+##
+## Note, double-hash commented lines will not appear in /etc/hosts
#
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
@@ -10,8 +12,8 @@
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
-# The following lines are desirable for IPv4 capable hosts
-127.0.1.1 {{fqdn}} {{hostname}}
+## The value '$hostname' will be replaced with the local-hostname
+127.0.1.1 $fqdn $hostname
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
@@ -21,4 +23,3 @@ ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
-
diff --git a/templates/sources.list.tmpl b/templates/sources.list.tmpl
index 8acbd7d5..f702025f 100644
--- a/templates/sources.list.tmpl
+++ b/templates/sources.list.tmpl
@@ -1,59 +1,60 @@
-# Note, this file is written by cloud-init on first boot of an instance
-# modifications made here will not survive a re-bundle.
-# if you wish to make changes you can:
-# a.) add 'apt_preserve_sources_list: true' to /etc/cloud/cloud.cfg
-# or do the same in user-data
-# b.) add sources in /etc/apt/sources.list.d
-# c.) make changes to template file /etc/cloud/templates/sources.list.tmpl
+\## Note, this file is written by cloud-init on first boot of an instance
+\## modifications made here will not survive a re-bundle.
+\## if you wish to make changes you can:
+\## a.) add 'apt_preserve_sources_list: true' to /etc/cloud/cloud.cfg
+\## or do the same in user-data
+\## b.) add sources in /etc/apt/sources.list.d
+\## c.) make changes to template file /etc/cloud/templates/sources.list.tmpl
+\###
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
-deb {{mirror}} {{codename}} main
-deb-src {{mirror}} {{codename}} main
+deb $mirror $codename main
+deb-src $mirror $codename main
-# Major bug fix updates produced after the final release of the
-# distribution.
-deb {{mirror}} {{codename}}-updates main
-deb-src {{mirror}} {{codename}}-updates main
+\## Major bug fix updates produced after the final release of the
+\## distribution.
+deb $mirror $codename-updates main
+deb-src $mirror $codename-updates main
-# N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
-# team. Also, please note that software in universe WILL NOT receive any
-# review or updates from the Ubuntu security team.
-deb {{mirror}} {{codename}} universe
-deb-src {{mirror}} {{codename}} universe
-deb {{mirror}} {{codename}}-updates universe
-deb-src {{mirror}} {{codename}}-updates universe
+\## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
+\## team. Also, please note that software in universe WILL NOT receive any
+\## review or updates from the Ubuntu security team.
+deb $mirror $codename universe
+deb-src $mirror $codename universe
+deb $mirror $codename-updates universe
+deb-src $mirror $codename-updates universe
-# N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
-# team, and may not be under a free licence. Please satisfy yourself as to
-# your rights to use the software. Also, please note that software in
-# multiverse WILL NOT receive any review or updates from the Ubuntu
-# security team.
-# deb {{mirror}} {{codename}} multiverse
-# deb-src {{mirror}} {{codename}} multiverse
-# deb {{mirror}} {{codename}}-updates multiverse
-# deb-src {{mirror}} {{codename}}-updates multiverse
+\## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
+\## team, and may not be under a free licence. Please satisfy yourself as to
+\## your rights to use the software. Also, please note that software in
+\## multiverse WILL NOT receive any review or updates from the Ubuntu
+\## security team.
+# deb $mirror $codename multiverse
+# deb-src $mirror $codename multiverse
+# deb $mirror $codename-updates multiverse
+# deb-src $mirror $codename-updates multiverse
-# Uncomment the following two lines to add software from the 'backports'
-# repository.
-# N.B. software from this repository may not have been tested as
-# extensively as that contained in the main release, although it includes
-# newer versions of some applications which may provide useful features.
-# Also, please note that software in backports WILL NOT receive any review
-# or updates from the Ubuntu security team.
-# deb {{mirror}} {{codename}}-backports main restricted universe multiverse
-# deb-src {{mirror}} {{codename}}-backports main restricted universe multiverse
+\## Uncomment the following two lines to add software from the 'backports'
+\## repository.
+\## N.B. software from this repository may not have been tested as
+\## extensively as that contained in the main release, although it includes
+\## newer versions of some applications which may provide useful features.
+\## Also, please note that software in backports WILL NOT receive any review
+\## or updates from the Ubuntu security team.
+# deb $mirror $codename-backports main restricted universe multiverse
+# deb-src $mirror $codename-backports main restricted universe multiverse
-# Uncomment the following two lines to add software from Canonical's
-# 'partner' repository.
-# This software is not part of Ubuntu, but is offered by Canonical and the
-# respective vendors as a service to Ubuntu users.
-# deb http://archive.canonical.com/ubuntu {{codename}} partner
-# deb-src http://archive.canonical.com/ubuntu {{codename}} partner
+\## Uncomment the following two lines to add software from Canonical's
+\## 'partner' repository.
+\## This software is not part of Ubuntu, but is offered by Canonical and the
+\## respective vendors as a service to Ubuntu users.
+# deb http://archive.canonical.com/ubuntu $codename partner
+# deb-src http://archive.canonical.com/ubuntu $codename partner
-deb http://security.ubuntu.com/ubuntu {{codename}}-security main
-deb-src http://security.ubuntu.com/ubuntu {{codename}}-security main
-deb http://security.ubuntu.com/ubuntu {{codename}}-security universe
-deb-src http://security.ubuntu.com/ubuntu {{codename}}-security universe
-# deb http://security.ubuntu.com/ubuntu {{codename}}-security multiverse
-# deb-src http://security.ubuntu.com/ubuntu {{codename}}-security multiverse
+deb http://security.ubuntu.com/ubuntu $codename-security main
+deb-src http://security.ubuntu.com/ubuntu $codename-security main
+deb http://security.ubuntu.com/ubuntu $codename-security universe
+deb-src http://security.ubuntu.com/ubuntu $codename-security universe
+# deb http://security.ubuntu.com/ubuntu $codename-security multiverse
+# deb-src http://security.ubuntu.com/ubuntu $codename-security multiverse
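The leading `\##` escapes in this template are deliberate: in Cheetah, a bare `##` opens a template comment that is stripped at render time (as the hosts.ubuntu.tmpl note above says), so comment lines that must survive into the generated `/etc/apt/sources.list` get a backslash to keep the hashes literal. Assuming standard Cheetah escaping and an illustrative mirror/codename, a line renders roughly like this:

```
\## Note, this file is written by cloud-init...  ->  kept as a comment in sources.list
## a plain double-hash line                      ->  stripped entirely at render time
deb $mirror $codename main                       ->  deb http://archive.ubuntu.com/ubuntu precise main
```

Lines beginning with a single `#` followed by a space (the commented-out multiverse/backports/partner entries) pass through unchanged, since they do not form a Cheetah directive.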
diff --git a/tests/unittests/test__init__.py b/tests/unittests/test__init__.py
index af18955d..464c8c2f 100644
--- a/tests/unittests/test__init__.py
+++ b/tests/unittests/test__init__.py
@@ -50,12 +50,14 @@ class TestWalkerHandleHandler(MockerTestCase):
self.payload = "dummy payload"
# Mock the write_file function
- write_file_mock = self.mocker.replace(util.write_file, passthrough=False)
+ write_file_mock = self.mocker.replace(util.write_file,
+ passthrough=False)
write_file_mock(expected_file_fullname, self.payload, 0600)
def test_no_errors(self):
"""Payload gets written to file and added to C{pdata}."""
- import_mock = self.mocker.replace(importer.import_module, passthrough=False)
+ import_mock = self.mocker.replace(importer.import_module,
+ passthrough=False)
import_mock(self.expected_module_name)
self.mocker.result(self.module_fake)
self.mocker.replay()
@@ -67,7 +69,8 @@ class TestWalkerHandleHandler(MockerTestCase):
def test_import_error(self):
"""Module import errors are logged. No handler added to C{pdata}"""
- import_mock = self.mocker.replace(importer.import_module, passthrough=False)
+ import_mock = self.mocker.replace(importer.import_module,
+ passthrough=False)
import_mock(self.expected_module_name)
self.mocker.throw(ImportError())
self.mocker.replay()
@@ -79,7 +82,8 @@ class TestWalkerHandleHandler(MockerTestCase):
def test_attribute_error(self):
"""Attribute errors are logged. No handler added to C{pdata}"""
- import_mock = self.mocker.replace(importer.import_module, passthrough=False)
+ import_mock = self.mocker.replace(importer.import_module,
+ passthrough=False)
import_mock(self.expected_module_name)
self.mocker.result(self.module_fake)
self.mocker.throw(AttributeError())
@@ -185,13 +189,15 @@ class TestCmdlineUrl(MockerTestCase):
payload = "0"
cmdline = "ro %s=%s bar=1" % (key, url)
- mock_readurl = self.mocker.replace(url_helper.readurl, passthrough=False)
+ mock_readurl = self.mocker.replace(url_helper.readurl,
+ passthrough=False)
mock_readurl(url)
self.mocker.result(url_helper.UrlResponse(200, payload))
self.mocker.replay()
self.assertEqual((key, url, None),
- util.get_cmdline_url(names=[key], starts="xxxxxx", cmdline=cmdline))
+ util.get_cmdline_url(names=[key], starts="xxxxxx",
+ cmdline=cmdline))
def test_valid_content(self):
url = "http://example.com/foo"
@@ -199,7 +205,8 @@ class TestCmdlineUrl(MockerTestCase):
payload = "xcloud-config\nmydata: foo\nbar: wark\n"
cmdline = "ro %s=%s bar=1" % (key, url)
- mock_readurl = self.mocker.replace(url_helper.readurl, passthrough=False)
+ mock_readurl = self.mocker.replace(url_helper.readurl,
+ passthrough=False)
mock_readurl(url)
self.mocker.result(url_helper.UrlResponse(200, payload))
self.mocker.replay()
diff --git a/tests/unittests/test_builtin_handlers.py b/tests/unittests/test_builtin_handlers.py
index 84d85d4d..5bba8bc9 100644
--- a/tests/unittests/test_builtin_handlers.py
+++ b/tests/unittests/test_builtin_handlers.py
@@ -6,7 +6,6 @@ from mocker import MockerTestCase
from cloudinit import handlers
from cloudinit import helpers
-from cloudinit import util
from cloudinit.handlers import upstart_job
diff --git a/tests/unittests/test_datasource/test_maas.py b/tests/unittests/test_datasource/test_maas.py
index 261c410a..8a155f39 100644
--- a/tests/unittests/test_datasource/test_maas.py
+++ b/tests/unittests/test_datasource/test_maas.py
@@ -1,8 +1,6 @@
import os
-from StringIO import StringIO
from copy import copy
-from cloudinit import util
from cloudinit import url_helper
from cloudinit.sources import DataSourceMAAS
diff --git a/tests/unittests/test_handler/test_handler_ca_certs.py b/tests/unittests/test_handler/test_handler_ca_certs.py
index 1f96e992..948de4c4 100644
--- a/tests/unittests/test_handler/test_handler_ca_certs.py
+++ b/tests/unittests/test_handler/test_handler_ca_certs.py
@@ -26,7 +26,8 @@ class TestNoConfig(MockerTestCase):
self.mocker.replace(cc_ca_certs.update_ca_certs, passthrough=False)
self.mocker.replay()
- cc_ca_certs.handle(self.name, config, self.cloud_init, self.log, self.args)
+ cc_ca_certs.handle(self.name, config, self.cloud_init, self.log,
+ self.args)
class TestConfig(MockerTestCase):
@@ -39,11 +40,12 @@ class TestConfig(MockerTestCase):
self.args = []
# Mock out the functions that actually modify the system
- self.mock_add = self.mocker.replace(cc_ca_certs.add_ca_certs, passthrough=False)
+ self.mock_add = self.mocker.replace(cc_ca_certs.add_ca_certs,
+ passthrough=False)
self.mock_update = self.mocker.replace(cc_ca_certs.update_ca_certs,
passthrough=False)
- self.mock_remove = self.mocker.replace(cc_ca_certs.remove_default_ca_certs,
- passthrough=False)
+ self.mock_remove = self.mocker.replace(
+ cc_ca_certs.remove_default_ca_certs, passthrough=False)
# Order must be correct
self.mocker.order()
@@ -183,8 +185,8 @@ class TestRemoveDefaultCaCerts(MockerTestCase):
})
def test_commands(self):
- mock_delete_dir_contents = self.mocker.replace(util.delete_dir_contents,
- passthrough=False)
+ mock_delete_dir_contents = self.mocker.replace(
+ util.delete_dir_contents, passthrough=False)
mock_write = self.mocker.replace(util.write_file, passthrough=False)
mock_subp = self.mocker.replace(util.subp,
passthrough=False)
diff --git a/tests/unittests/test_userdata.py b/tests/unittests/test_userdata.py
index 861642b6..fbbf07f2 100644
--- a/tests/unittests/test_userdata.py
+++ b/tests/unittests/test_userdata.py
@@ -4,18 +4,14 @@ import StringIO
import logging
import os
-import shutil
-import tempfile
from email.mime.base import MIMEBase
from mocker import MockerTestCase
-from cloudinit import helpers
from cloudinit import log
from cloudinit import sources
from cloudinit import stages
-from cloudinit import util
INSTANCE_ID = "i-testing"
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 93979f06..19f66cc4 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -14,7 +14,7 @@ class FakeSelinux(object):
self.match_what = match_what
self.restored = []
- def matchpathcon(self, path, mode):
+ def matchpathcon(self, path, mode): # pylint: disable=W0613
if path == self.match_what:
return
else:
@@ -23,7 +23,7 @@ class FakeSelinux(object):
def is_selinux_enabled(self):
return True
- def restorecon(self, path, recursive):
+ def restorecon(self, path, recursive): # pylint: disable=W0613
self.restored.append(path)
diff --git a/tools/hacking.py b/tools/hacking.py
index d0c27d25..a2e6e829 100755
--- a/tools/hacking.py
+++ b/tools/hacking.py
@@ -23,11 +23,8 @@ built on top of pep8.py
import inspect
import logging
-import os
import re
import sys
-import tokenize
-import warnings
import pep8
@@ -158,7 +155,7 @@ def add_cloud():
if not inspect.isfunction(function):
continue
if name.startswith("cloud_"):
- exec("pep8.%s = %s" % (name, name))
+ exec("pep8.%s = %s" % (name, name)) # pylint: disable=W0122
if __name__ == "__main__":
# NOVA based 'hacking.py' error codes start with an N
@@ -167,7 +164,7 @@ if __name__ == "__main__":
pep8.current_file = current_file
pep8.readlines = readlines
try:
- pep8._main()
+ pep8._main() # pylint: disable=W0212
finally:
if len(_missingImport) > 0:
print >> sys.stderr, ("%i imports missing in this test environment"
diff --git a/packages/make-dist-tarball b/tools/make-dist-tarball
index 622283bd..622283bd 100755
--- a/packages/make-dist-tarball
+++ b/tools/make-dist-tarball
diff --git a/tools/make-tarball b/tools/make-tarball
new file mode 100755
index 00000000..47979f5b
--- /dev/null
+++ b/tools/make-tarball
@@ -0,0 +1,35 @@
+#!/bin/sh
+set -e
+
+find_root() {
+ local topd
+ if [ -z "${CLOUD_INIT_TOP_D}" ]; then
+ topd=$(cd "$(dirname "${0}")" && cd .. && pwd)
+ else
+ topd=$(cd "${CLOUD_INIT_TOP_D}" && pwd)
+ fi
+ [ $? -eq 0 -a -f "${topd}/setup.py" ] || return
+ ROOT_DIR="$topd"
+}
+
+if ! find_root; then
+ echo "Unable to locate 'setup.py' file that should" \
+ "exist in the cloud-init root directory." 1>&2
+ exit 1;
+fi
+
+if [ ! -z "$1" ]; then
+ ARCHIVE_FN="$1"
+else
+ REVNO=$(bzr revno $ROOT_DIR)
+ VERSION=$($ROOT_DIR/tools/read-version)
+ ARCHIVE_FN="$PWD/cloud-init-$VERSION~bzr$REVNO.tar.gz"
+fi
+
+FILES=$(cd $ROOT_DIR && bzr ls --versioned --recursive)
+echo "$FILES" | tar czf $ARCHIVE_FN \
+ -C "$ROOT_DIR" \
+ --transform "s,^,cloud-init-$VERSION~bzr$REVNO/," \
+ --no-recursion --files-from -
+
+echo "$ARCHIVE_FN"
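The new `tools/make-tarball` relies on GNU tar's `--transform` to prefix every archive member with a versioned top-level directory, feeding the member list on stdin via `--files-from -`. A throwaway sketch of that one step, using a temp tree instead of `bzr ls` (version and revno values here are illustrative):

```shell
#!/bin/sh
set -e
demo=$(mktemp -d)
mkdir -p "$demo/src"
echo "print('hi')" > "$demo/src/setup.py"

# Prefix each member with cloud-init-<version>~bzr<revno>/, exactly as the
# script's --transform expression does.
printf 'setup.py\n' | tar czf "$demo/out.tar.gz" \
    -C "$demo/src" \
    --transform 's,^,cloud-init-0.7~bzr100/,' \
    --no-recursion --files-from -

tar tzf "$demo/out.tar.gz"   # -> cloud-init-0.7~bzr100/setup.py
rm -rf "$demo"
```

`--no-recursion` matters because `bzr ls --recursive` already emits directories and their contents separately; without it, tar would add directory contents twice.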
diff --git a/tools/mock-meta.py b/tools/mock-meta.py
index 4548e4ae..78838f64 100755
--- a/tools/mock-meta.py
+++ b/tools/mock-meta.py
@@ -1,15 +1,15 @@
#!/usr/bin/python
# Provides a somewhat random, somewhat compat, somewhat useful mock version of
-#
-# http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/AESDG-chapter-instancedata.html
+# http://docs.amazonwebservices.com
+# /AWSEC2/2007-08-29/DeveloperGuide/AESDG-chapter-instancedata.html
"""
To use this to mimic the EC2 metadata service entirely, run it like:
# Where 'eth0' is *some* interface.
sudo ifconfig eth0:0 169.254.169.254 netmask 255.255.255.255
- sudo ./mock-meta -a 169.254.169.254 -p 80
+ sudo ./mock-meta.py -a 169.254.169.254 -p 80
Then:
wget -q http://169.254.169.254/latest/meta-data/instance-id -O -; echo
@@ -23,7 +23,7 @@ import json
import logging
import os
import random
-import string
+import string # pylint: disable=W0402
import sys
import yaml
@@ -84,12 +84,12 @@ META_CAPABILITIES = [
PUB_KEYS = {
'brickies': [
('ssh-rsa '
- 'AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T'
- '7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78'
- 'hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtv'
- 'EONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz'
- '3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SC'
- 'mXp5Kt5/82cD/VN3NtHw== brickies'),
+ 'AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemN'
+ 'Sj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxz'
+ 'xtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJ'
+ 'tO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7'
+ 'u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN'
+ '+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== brickies'),
'',
],
}
@@ -234,7 +234,7 @@ class MetaDataHandler(object):
elif action == 'public-keys':
nparams = params[1:]
# This is a weird kludge, why amazon why!!!
- # public-keys is messed up, a list of /latest/meta-data/public-keys/
+ # public-keys is messed up, list of /latest/meta-data/public-keys/
# shows something like: '0=brickies'
# but a GET to /latest/meta-data/public-keys/0=brickies will fail
# you have to know to get '/latest/meta-data/public-keys/0', then
@@ -248,7 +248,8 @@ class MetaDataHandler(object):
key_id = int(mybe_key)
key_name = key_ids[key_id]
except:
- raise WebException(httplib.BAD_REQUEST, "Unknown key id %r" % mybe_key)
+ raise WebException(httplib.BAD_REQUEST,
+ "Unknown key id %r" % mybe_key)
# Extract the possible sub-params
result = traverse(nparams[1:], {
"openssh-key": "\n".join(avail_keys[key_name]),
@@ -303,7 +304,7 @@ class UserDataHandler(object):
blob = "\n".join(lines)
return blob.strip()
- def get_data(self, params, who, **kwargs):
+ def get_data(self, params, who, **kwargs): # pylint: disable=W0613
if not params:
return self._get_user_blob(who=who)
return NOT_IMPL_RESPONSE
@@ -323,14 +324,12 @@ class Ec2Handler(BaseHTTPRequestHandler):
versions = sorted(versions)
return "\n".join(versions)
- def log_message(self, format, *args):
- msg = "%s - %s" % (self.address_string(), format % (args))
+ def log_message(self, fmt, *args):
+ msg = "%s - %s" % (self.address_string(), fmt % (args))
log.info(msg)
def _find_method(self, path):
# Puke! (globals)
- global meta_fetcher
- global user_fetcher
func_mapping = {
'user-data': user_fetcher.get_data,
'meta-data': meta_fetcher.get_data,
@@ -341,12 +340,14 @@ class Ec2Handler(BaseHTTPRequestHandler):
return self._get_versions
date = segments[0].strip().lower()
if date not in self._get_versions():
- raise WebException(httplib.BAD_REQUEST, "Unknown version format %r" % date)
+ raise WebException(httplib.BAD_REQUEST,
+ "Unknown version format %r" % date)
if len(segments) < 2:
raise WebException(httplib.BAD_REQUEST, "No action provided")
look_name = segments[1].lower()
if look_name not in func_mapping:
- raise WebException(httplib.BAD_REQUEST, "Unknown requested data %r" % look_name)
+ raise WebException(httplib.BAD_REQUEST,
+ "Unknown requested data %r" % look_name)
base_func = func_mapping[look_name]
who = self.address_string()
ip_from = self.client_address[0]
@@ -371,7 +372,8 @@ class Ec2Handler(BaseHTTPRequestHandler):
self.send_response(httplib.OK)
self.send_header("Content-Type", "binary/octet-stream")
self.send_header("Content-Length", len(data))
- log.info("Sending data (len=%s):\n%s", len(data), format_text(data))
+ log.info("Sending data (len=%s):\n%s", len(data),
+ format_text(data))
self.end_headers()
self.wfile.write(data)
except RuntimeError as e:
@@ -389,22 +391,25 @@ class Ec2Handler(BaseHTTPRequestHandler):
self._do_response()
-def setup_logging(log_level, format='%(levelname)s: @%(name)s : %(message)s'):
+def setup_logging(log_level, fmt='%(levelname)s: @%(name)s : %(message)s'):
root_logger = logging.getLogger()
console_logger = logging.StreamHandler(sys.stdout)
- console_logger.setFormatter(logging.Formatter(format))
+ console_logger.setFormatter(logging.Formatter(fmt))
root_logger.addHandler(console_logger)
root_logger.setLevel(log_level)
def extract_opts():
parser = OptionParser()
- parser.add_option("-p", "--port", dest="port", action="store", type=int, default=80,
- help="port from which to serve traffic (default: %default)", metavar="PORT")
- parser.add_option("-a", "--addr", dest="address", action="store", type=str, default='0.0.0.0',
- help="address from which to serve traffic (default: %default)", metavar="ADDRESS")
- parser.add_option("-f", '--user-data-file', dest='user_data_file', action='store',
- help="user data filename to serve back to incoming requests", metavar='FILE')
+ parser.add_option("-p", "--port", dest="port", action="store", type=int,
+ default=80, metavar="PORT",
+ help="port from which to serve traffic (default: %default)")
+ parser.add_option("-a", "--addr", dest="address", action="store", type=str,
+ default='0.0.0.0', metavar="ADDRESS",
+ help="address from which to serve traffic (default: %default)")
+ parser.add_option("-f", '--user-data-file', dest='user_data_file',
+ action='store', metavar='FILE',
+ help="user data filename to serve back to incoming requests")
(options, args) = parser.parse_args()
out = dict()
out['extra'] = args
@@ -420,8 +425,8 @@ def extract_opts():
def setup_fetchers(opts):
- global meta_fetcher
- global user_fetcher
+ global meta_fetcher # pylint: disable=W0603
+ global user_fetcher # pylint: disable=W0603
meta_fetcher = MetaDataHandler(opts)
user_fetcher = UserDataHandler(opts)
diff --git a/tools/read-dependencies b/tools/read-dependencies
index 72e1e095..4c88aa87 100755
--- a/tools/read-dependencies
+++ b/tools/read-dependencies
@@ -1,45 +1,35 @@
-#!/usr/bin/python
-# vi: ts=4 expandtab
-
-import os
-import sys
-import re
-
-
-def parse_requires(fn):
- requires = []
- with open(fn, 'r') as fh:
- lines = fh.read().splitlines()
- for line in lines:
- line = line.strip()
- if not line or line[0] == '#':
- continue
- else:
- requires.append(line)
- return requires
-
-
-def find_requires(args):
- p_files = []
- if args:
- p_files.append(args[0])
- p_files.append(os.path.join(os.pardir, "Requires"))
- p_files.append(os.path.join(os.getcwd(), 'Requires'))
- found = None
- for fn in p_files:
- if os.path.isfile(fn):
- found = fn
- break
- return found
-
-
-if __name__ == '__main__':
- run_args = sys.argv[1:]
- fn = find_requires(run_args)
- if not fn:
- sys.stderr.write("'Requires' file not found!\n")
- sys.exit(1)
- else:
- deps = parse_requires(fn)
- for entry in deps:
- print entry
+#!/bin/sh
+
+set -e
+
+find_root() {
+ local topd
+ if [ -z "${CLOUD_INIT_TOP_D}" ]; then
+ topd=$(cd "$(dirname "${0}")" && cd .. && pwd)
+ else
+ topd=$(cd "${CLOUD_INIT_TOP_D}" && pwd)
+ fi
+ [ $? -eq 0 -a -f "${topd}/setup.py" ] || return
+ ROOT_DIR="$topd"
+}
+
+if ! find_root; then
+ echo "Unable to locate 'setup.py' file that should" \
+ "exist in the cloud-init root directory." 1>&2
+ exit 1;
+fi
+
+REQUIRES="$ROOT_DIR/Requires"
+
+if [ ! -e "$REQUIRES" ]
+then
+ echo "Unable to find 'Requires' file located at $REQUIRES"
+ exit 1
+fi
+
+# Filter out comments and empty lines
+DEPS=$(cat $REQUIRES | grep -Pv "^\s*#" | grep -Pv '^\s*$')
+echo "$DEPS" | sort -d -f
+
+
+
diff --git a/tools/read-version b/tools/read-version
index e6167a2c..323357fe 100755
--- a/tools/read-version
+++ b/tools/read-version
@@ -1,70 +1,31 @@
-#!/usr/bin/python
-# vi: ts=4 expandtab
-
-import os
-import sys
-import re
-
-from distutils import version as ver
-
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(
- sys.argv[0]), os.pardir, os.pardir))
-if os.path.exists(os.path.join(possible_topdir, "cloudinit", "__init__.py")):
- sys.path.insert(0, possible_topdir)
-
-from cloudinit import version as cver
-
-def parse_versions(fn):
- with open(fn, 'r') as fh:
- lines = fh.read().splitlines()
- versions = []
- for line in lines:
- line = line.strip()
- if line.startswith("-") or not line:
- continue
- if not re.match(r"[\d]", line):
- continue
- line = line.strip(":")
- if (re.match(r"^[\d+]\.[\d+]\.[\d+]$", line) or
- re.match(r"^[\d+]\.[\d+]$", line)):
- versions.append(line)
- return versions
-
-def find_changelog(args):
- p_files = []
- if args:
- p_files.append(args[0])
- p_files.append(os.path.join(os.pardir, "ChangeLog"))
- p_files.append(os.path.join(os.getcwd(), 'ChangeLog'))
- found = None
- for fn in p_files:
- if os.path.isfile(fn):
- found = fn
- break
- return found
-
-
-if __name__ == '__main__':
- run_args = sys.argv[1:]
- fn = find_changelog(run_args)
- if not fn:
- sys.stderr.write("'ChangeLog' file not found!\n")
- sys.exit(1)
- else:
- versions = parse_versions(fn)
- if not versions:
- sys.stderr.write("No versions found in %s!\n" % (fn))
- sys.exit(1)
- else:
- # Check that the code version is the same
- # as the version we found!
- ch_ver = versions[0].strip()
- code_ver = cver.version()
- ch_ver_obj = ver.StrictVersion(ch_ver)
- if ch_ver_obj != code_ver:
- sys.stderr.write(("Code version %s does not match"
- " changelog version %s\n") %
- (code_ver, ch_ver_obj))
- sys.exit(1)
- sys.stdout.write(ch_ver)
- sys.exit(0)
+#!/bin/sh
+
+set -e
+
+find_root() {
+ local topd
+ if [ -z "${CLOUD_INIT_TOP_D}" ]; then
+ topd=$(cd "$(dirname "${0}")" && cd .. && pwd)
+ else
+ topd=$(cd "${CLOUD_INIT_TOP_D}" && pwd)
+ fi
+ [ $? -eq 0 -a -f "${topd}/setup.py" ] || return
+ ROOT_DIR="$topd"
+}
+
+if ! find_root; then
+ echo "Unable to locate 'setup.py' file that should" \
+ "exist in the cloud-init root directory." 1>&2
+ exit 1;
+fi
+
+CHNG_LOG="$ROOT_DIR/ChangeLog"
+
+if [ ! -e "$CHNG_LOG" ]
+then
+ echo "Unable to find 'ChangeLog' file located at $CHNG_LOG"
+ exit 1
+fi
+
+VERSION=$(grep -P "\d+\.\d+\.\d+:" $CHNG_LOG | cut -f1 -d ":" | head -n 1)
+echo $VERSION
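The version extraction in the new read-version can likewise be checked in isolation; this sketch applies the same grep/cut/head pipeline to a fake ChangeLog (contents made up for illustration; dots escaped here so the pattern matches only literal version strings):

```shell
#!/bin/sh
# Sketch of how tools/read-version pulls the newest version header out
# of a ChangeLog: first "X.Y.Z:" line wins (file contents illustrative).
CHNG_LOG=$(mktemp)
cat > "$CHNG_LOG" <<'EOF'
0.7.0:
 - some recent change
0.6.3:
 - an older change
EOF
VERSION=$(grep -P '\d+\.\d+\.\d+:' "$CHNG_LOG" | cut -f1 -d ':' | head -n 1)
echo "$VERSION"
rm -f "$CHNG_LOG"
```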
diff --git a/upstart/cloud-config.conf b/upstart/cloud-config.conf
index 3ac113f3..2c3ef67b 100644
--- a/upstart/cloud-config.conf
+++ b/upstart/cloud-config.conf
@@ -1,5 +1,6 @@
# cloud-config - Handle applying the settings specified in cloud-config
description "Handle applying cloud-config"
+emits cloud-config
start on (filesystem and started rsyslog)
console output
diff --git a/upstart/cloud-log-shutdown.conf b/upstart/cloud-log-shutdown.conf
new file mode 100644
index 00000000..278b9c06
--- /dev/null
+++ b/upstart/cloud-log-shutdown.conf
@@ -0,0 +1,19 @@
+# log shutdowns and reboots to the console (/dev/console)
+# this is useful for correlating logs
+start on runlevel PREVLEVEL=2
+
+task
+console output
+
+script
+ # runlevel(7) says INIT_HALT will be set to HALT or POWEROFF
+ date=$(date --utc)
+ case "$RUNLEVEL:$INIT_HALT" in
+ 6:*) mode="reboot";;
+ 0:HALT) mode="halt";;
+ 0:POWEROFF) mode="poweroff";;
+ 0:*) mode="shutdown-unknown";;
+ esac
+ { read seconds idle < /proc/uptime; } 2>/dev/null || :
+ echo "$date: shutting down for $mode${seconds:+ [up ${seconds%.*}s]}."
+end script
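The RUNLEVEL/INIT_HALT mapping in the new cloud-log-shutdown job can be tested outside upstart by wrapping the same case statement in a function; this is a standalone sketch with hand-set inputs (the `classify` helper name is made up for illustration):

```shell
#!/bin/sh
# Standalone sketch of the runlevel classification done in
# upstart/cloud-log-shutdown.conf. runlevel(7) documents that
# INIT_HALT is set to HALT or POWEROFF when entering runlevel 0.
classify() {
    RUNLEVEL="$1"
    INIT_HALT="$2"
    case "$RUNLEVEL:$INIT_HALT" in
        6:*)        mode="reboot";;
        0:HALT)     mode="halt";;
        0:POWEROFF) mode="poweroff";;
        0:*)        mode="shutdown-unknown";;
    esac
    echo "$mode"
}
classify 6 ""        # prints "reboot"
classify 0 HALT      # prints "halt"
classify 0 POWEROFF  # prints "poweroff"
```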