|
Add DataSourceLXD which knows how to talk to the dev-lxd socket to
obtain all instance metadata from the API:
https://linuxcontainers.org/lxd/docs/master/dev-lxd.
This first branch is to deliver feature parity with the existing
NoCloud datasource, which is currently used to initialize LXC instances
on first boot.
Introduce a SocketConnectionPool and LXDSocketAdapter to support
performing HTTP GETs on the following routes which are surfaced by the
LXD host to all containers:
http://unix.socket/1.0/meta-data
http://unix.socket/1.0/config/user.user-data
http://unix.socket/1.0/config/user.network-config
http://unix.socket/1.0/config/user.vendor-data
These 4 routes minimally replace the static content provided in the
following nocloud-net seed files:
/var/lib/cloud/nocloud-net/{meta-data,vendor-data,user-data,network-config}
The intent of this commit is to set a foundation for LXD socket
communication that will allow us to build network hot-plug features
by eventually consuming LXD's websocket upgrade route 1.0/events to
react to network, meta-data and user-data config changes over time.
In the event that no custom network-config is provided, default to the
same network-config definition provided by LXD to the NoCloud
network-config seed file.
Supplemental features beyond the NoCloud datasource:
surface all custom instance-data config keys via 'cloud-init query ds',
which aids in discoverability of features/tags/labels as well as
conditional #cloud-config jinja template operations based on custom
config options.
TBD: better cloud-init query support for dot-delimited keys
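For illustration only, here is a minimal sketch (not the actual
SocketConnectionPool/LXDSocketAdapter implementation) of performing such a
GET over the dev-lxd unix socket from Python, assuming the socket is exposed
at /dev/lxd/sock inside the container:

    import http.client
    import socket

    class DevLXDConnection(http.client.HTTPConnection):
        """Hypothetical helper: HTTP over the dev-lxd unix socket."""

        def __init__(self, socket_path="/dev/lxd/sock"):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            # Replace the TCP connection with a unix-domain socket.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    def get_route(route):
        conn = DevLXDConnection()
        conn.request("GET", route)
        response = conn.getresponse()
        body = response.read().decode("utf-8")
        conn.close()
        return response.status, body

    # e.g. status, meta_data = get_route("/1.0/meta-data")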
|
|
This patch finally introduces the Cloud-Init Datasource for VMware
GuestInfo as a part of cloud-init proper. This datasource has existed
since 2018, and rapidly became the de facto datasource for developers
working with Packer and Terraform, for projects like kube-image-builder,
and the de jure datasource for Photon OS.
The major change to the datasource from its previous incarnation is
the name. Now named DataSourceVMware, this new version of the
datasource will allow multiple transport types in addition to
GuestInfo keys.
This datasource includes several unique features developed to address
real-world situations:
* Support for reading any key (metadata, userdata, vendordata) both
from the guestinfo table when running on a VM in vSphere and from
an environment variable when running inside a container, which is
useful for rapid dev/test.
* Allows booting with DHCP while still providing full participation
in Cloud-Init instance data and Jinja queries. The netifaces library
provides the ability to inspect the network after it is online,
and the runtime network configuration is then merged into the
existing metadata and persisted to disk.
* Advertises the local_ipv4 and local_ipv6 addresses via guestinfo
as well. This is useful as Guest Tools is not always able to
identify what would be considered the local address.
The primary author and current steward of this datasource spoke at
Cloud-Init Con 2020 where there was interest in contributing this datasource
to the Cloud-Init codebase.
The datasource currently lives in its own GitHub repository at
https://github.com/vmware/cloud-init-vmware-guestinfo. Once the datasource
is merged into Cloud-Init, the old repository will be deprecated.
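As a rough sketch of the transports described above (the env-var naming and
key names here are assumptions, not the datasource's actual code), reading a
guestinfo key on vSphere versus from the environment in a container might
look like:

    import os
    import subprocess

    def get_guestinfo(key):
        """Hypothetical helper: read guestinfo.<key>, falling back to an
        environment variable for rapid dev/test inside a container."""
        env_value = os.environ.get("VMX_GUESTINFO_" + key.upper().replace("-", "_"))
        if env_value is not None:
            return env_value
        try:
            # On a vSphere VM, Guest Tools' vmware-rpctool can read guestinfo keys.
            result = subprocess.run(
                ["vmware-rpctool", "info-get guestinfo." + key],
                capture_output=True,
                text=True,
                check=True,
            )
            return result.stdout.strip() or None
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None

    # e.g. userdata = get_guestinfo("userdata")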
|
|
This PR adds support so that cloud-init can run on instances
deployed on the Vultr cloud. This was originally brought up in #628.
Co-authored-by: Eric Benner <ebenner@vultr.com>
|
|
New datasource utilizing the UpCloud metadata API, including relevant unit
tests and documentation.
|
|
* cc_ssh: fix capitalisation of SSH
* doc: fix capitalisation of SSH
* cc_keys_to_console: fix capitalisation of SSH
* ssh_util: fix capitalisation of SSH
* DataSourceIBMCloud: fix capitalisation of SSH
* DataSourceAzure: fix capitalisation of SSH
* cs_utils: fix capitalisation of SSH
* distros/__init__: fix capitalisation of SSH
* cc_set_passwords: fix capitalisation of SSH
* cc_ssh_import_id: fix capitalisation of SSH
* cc_users_groups: fix capitalisation of SSH
* cc_ssh_authkey_fingerprints: fix capitalisation of SSH
|
|
- Added RbxCloud
- Sorted alphabetically
|
|
e24cloud provides an EC2-compatible datasource.
This just identifies their platform based on the DMI 'system-vendor'
field containing 'e24cloud'. https://www.e24cloud.com/en/.
Also fixed a 'chassis' typo in the zstack unit test docstring.
LP: #1696476
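A rough illustration of that check (not the actual ds-identify/datasource
code), assuming the kernel exposes the field under /sys/class/dmi/id:

    def is_e24cloud():
        """Hypothetical check: does the DMI system-vendor mention e24cloud?"""
        try:
            with open("/sys/class/dmi/id/sys_vendor") as f:
                return "e24cloud" in f.read().strip().lower()
        except OSError:
            return False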
|
|
The Zstack platform provides an AWS EC2-compatible metadata service, and
identifies itself to the guest by setting the 'chassis asset tag'
to a string that ends with '.zstack.io'.
LP: #1841181
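A similarly hedged sketch of that detection (again, not the actual
implementation), reading the chassis asset tag from sysfs:

    def is_zstack():
        """Hypothetical check: does the chassis asset tag end with .zstack.io?"""
        try:
            with open("/sys/class/dmi/id/chassis_asset_tag") as f:
                return f.read().strip().endswith(".zstack.io")
        except OSError:
            return False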
|
|
- ds-identify switches to the new Exoscale datasource on a matching DMI name
- New Exoscale datasource added
Signed-off-by: Mathieu Corbin <mathieu.corbin@exoscale.ch>
|
|
This adds documentation intended for developers on how to add
a new datasource to cloud-init.
|
|
The change to datasources.rst here is an obvious typo fix.
The change to azure is to reduce the two 'Customization' sections
to a single one and to clean up some other duplicate text.
|
|
Cloud-init caches any cloud metadata crawled during boot in the file
/run/cloud-init/instance-data.json. Cloud-init also standardizes some of
that metadata across all clouds. The command 'cloud-init query' surfaces a
simple CLI to query or format any cached instance metadata so that scripts
or end-users do not have to write tools to crawl metadata themselves.
Since 'cloud-init query' is runnable by non-root users, redact any
sensitive data from instance-data.json and provide a root-readable
unredacted instance-data-sensitive.json. Datasources can now define a
sensitive_metadata_keys tuple which will redact any matching keys
which could contain passwords or credentials from instance-data.json.
Also add the following standardized 'v1' instance-data.json keys:
- user_data: The base64-encoded user-data provided at instance launch
- vendor_data: Any vendor_data provided to the instance at launch
- underscore_delimited versions of existing hyphenated keys:
instance_id, local_hostname, availability_zone, cloud_name
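For example, once booted, a script could read the redacted, world-readable
cache directly; a minimal sketch using only the standardized v1 keys listed
above:

    import json

    # Read the cached, redacted instance metadata that 'cloud-init query'
    # also exposes, and pick out a few standardized v1 keys.
    with open("/run/cloud-init/instance-data.json") as f:
        instance_data = json.load(f)

    v1 = instance_data["v1"]
    print(v1["cloud_name"], v1["instance_id"], v1["local_hostname"])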
|
|
Allow users to provide '## template: jinja' as the first line of their
#cloud-config or custom script user-data parts. When this header exists,
the cloud-config or script will be rendered as a jinja template.
All instance metadata keys and values present in
/run/cloud-init/instance-data.json will be available as jinja variables
for the template. This means any cloud-config module or script can
reference any standardized instance data in templates and scripts.
Additionally, any standardized instance-data.json keys scoped below a
'<v#>' key will be promoted as a top-level key for ease of reference in
templates. This means that '{{ local_hostname }}' is the same as using the
latest '{{ v#.local_hostname }}'.
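For illustration, a minimal user-data part using this header might look like
the following (the final_message text is just an example):

    ## template: jinja
    #cloud-config
    final_message: "cloud-init finished on {{ v1.cloud_name }} instance {{ v1.instance_id }}"

Because keys under v1 are promoted to the top level, '{{ cloud_name }}' would
render the same value as '{{ v1.cloud_name }}'.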
Since instance-data is written to /run/cloud-init/instance-data.json, make
sure it is persisted across reboots when the cached datasource object is
reloaded.
LP: #1791781
|
|
This adds an Oracle-specific datasource that functions with OCI.
It is a simplified version of the OpenStack metadata server
with support for vendor-data.
It does not support the OCI-C (classic) platform.
This also moves BrokenMetadata to the common 'sources' module,
as this was the third occurrence of that class.
|
|
Also document instance-data.json on the top-level datasource topic page.
|
|
Just add some documentation to readthedocs for AliYun.
|
|
Add some minimal documentation for GCE datasource.
|
|
The biggest things here are:
* move doc/sources/*/README.rst to doc/rtd/topics/datasources
This gives each datasource a page in the rtd docs, which makes
it easier to read.
* consistently use the same header style throughout.
As suggested at
http://thomas-cokelaer.info/tutorials/sphinx/rest_syntax.html
use:
# with overline, for parts
* with overline, for chapters
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs
Also, move and re-format vendor-data documentation to rtd.
|
|
This adds lots of config module documentation in a standard format.
It will greatly improve the content at readthedocs.
Additionally:
* Add a 'doc' env to tox.ini
* Changed default highlight language for sphinx conf from python to yaml,
since most examples in the documentation are yaml configs
* Updated datasource examples to highlight sh code properly
|
|
the Requires would get that string rendered into the package's
Depends/Requires (rather than BuildDepends/BuildRequires).
We should have BuildDepends/BuildRequires too, but since
trunk's package builds do not run 'make test', this isn't a big deal.
This also adds 'test-requires' for httpretty.
|
|
Add a base set for ec2 and datasource none.
|
|
Start moving the current README for
datasources to an RST format and include
those files in the rtd site.
LP: #1113650
|