|
Cloud-init will not operate properly if the instance-id value changes
on each boot. This is the source of a number of behavioral bugs filed
against cloud-init with the OVF datasource. Instead, use a static instance-id
value, iid-vmware-imc, similar to iid-dsovf.
|
|
VMware customization already has support to run a custom script during
the VM customization. Adding this option allows a VM administrator to
disable the execution of customization scripts. If set, the script
will not execute and the customization status is set to
GUESTCUST_ERROR_SCRIPT_DISABLED.
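A minimal sketch of how such a gate could look; the names below are illustrative, not the actual cloud-init implementation, and only the status string is taken from this message:

    def maybe_run_custom_script(script_path, allow_custom_scripts, run, report_status):
        # Honor the administrator's choice to disable uploaded scripts.
        if not allow_custom_scripts:
            report_status("GUESTCUST_ERROR_SCRIPT_DISABLED")
            return False
        run(script_path)
        return True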
|
|
cloud-init does not trigger reboots of a VM, so adding custom
scripts to rc.local does not execute the post-customization scripts. This patch
moves the post-customization scripts into the per-instance scripts directory and
has the cc_scripts module run them.
Also in this branch:
- Remove the sh interpreter and execute the customization script
directly.
- Update the unit test.
LP: #1833192
|
|
six already provides this for us, and we're already paying the cost to
determine it there; no need to do it twice.
|
|
Transport functions (transport_iso9660 and transport_vmware_guestinfo)
would return a tuple of 3 values, but only the first was ever used
outside of tests. The other values (device and filename) were simply
ignored.
This simplifies the transport functions to return the content
(as a string) or None, indicating that the transport was not found.
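A minimal sketch of the simplified contract, assuming hypothetical transport callables that either yield the OVF environment string or None:

    def find_ovf_env(transports):
        """Return (name, contents) for the first transport that yields content."""
        for name, transport in transports:
            contents = transport()
            if contents is not None:
                return name, contents
        return None, None

    # Example: a guestinfo probe that finds nothing, then an iso probe that does.
    probes = [("guestinfo", lambda: None), ("iso9660", lambda: "<Environment/>")]
    print(find_ovf_env(probes))  # -> ('iso9660', '<Environment/>')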
|
|
This adds support for reading OVF information over the
'com.vmware.guestInfo' transport. The current implementation requires
that vmware-rpctool be installed on the system.
LP: #1807466
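A hedged sketch of what reading the OVF environment over this transport could look like; the guestinfo key and the error handling below are assumptions, not the verbatim cloud-init code:

    import subprocess

    def transport_vmware_guestinfo_sketch():
        # Assumed guestinfo key; vmware-rpctool must be installed for this to work.
        cmd = ["vmware-rpctool", "info-get guestinfo.ovfEnv"]
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None  # tool missing or key unset: transport not found
        return out.stdout or None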
|
|
Add the following instance-data.json standardized keys:
* v1._beta_keys: List any v1 keys in beta development,
e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
instance.
* v1.platform: String representing the cloud platform api supporting the
datasource. For example: 'ec2' for aws, aliyun and brightbox cloud
names.
* v1.subplatform: String with more details about the source of the
metadata consumed. For example, metadata uri, config drive device path
or seed directory.
To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.
The subplatform attribute calls a _get_subplatform method which will
return a string containing a simple slug for subplatform type such as
metadata, seed-dir or config-drive followed by a detailed uri, device or
directory path where the datasource consumed its configuration.
As part of this work, DatasourceEC2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation, persisting no attributes on
the datasource instance, returning a dictionary of the consumed metadata.
- crawl_metadata now closely represents the raw structure of the ec2
metadata consumed, so that end-users can leverage public ec2 metadata
documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
ds.metadata to advertise what version of EC2's api was consumed by
cloud-init.
- _get_data now does all the processing of crawl_metadata and saves
datasource instance attributes userdata_raw, metadata etc.
Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
and make use of existing util and test_helpers functions.
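A hedged illustration of the standardized v1 keys described above; the values are invented placeholders, not output from a real instance:

    import json

    instance_data_v1 = {
        "v1": {
            "_beta_keys": ["subplatform"],
            "public_ssh_keys": ["ssh-rsa AAAA... user@host"],
            "platform": "ec2",
            "subplatform": "metadata (http://169.254.169.254)",
        }
    }
    print(json.dumps(instance_data_v1, indent=2))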
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
On some 64-bit platforms, the open-vm-tools package is installed under
/usr/lib64/. DataSourceOVF is changed to also look there for the
'customization plugin'.
|
|
In the VMware customization workflow, we have some options for the user
to upload scripts for additional customization. Based on the user's request,
those custom scripts can be run either before or after the regular
customization. For post-customization scripts, we decide whether to run the
scripts just after customization or after the system reboots.
|
|
Each DataSource subclass must define its own get_data method. This branch
formalizes our DataSource class to require that subclasses define an
explicit dsname for sourcing cloud-config datasource configuration.
Subclasses must also override the _get_data method or a
NotImplementedError is raised.
The branch also writes /run/cloud-init/instance-data.json. This file
contains all meta-data, user-data and vendor-data and a standardized set
of metadata keys in a json blob which other utilities with root-access
could make use of. Because some meta-data or user-data is potentially
sensitive, the file is only readable by root.
Generally, most metadata content types should be json serializable. If
specific keys or values are not serializable, those specific values will
be base64-encoded and the key path will be listed under the top-level key
'base64-encoded-keys' in instance-data.json. If json writing fails due to
other TypeErrors or UnicodeDecodeErrors, a warning log will be emitted to
/var/log/cloud-init.log and no instance-data.json will be created.
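A minimal sketch of the subclass contract described above; this is illustrative only, the real DataSource base class carries much more machinery:

    class DataSource:
        dsname = None  # subclasses must set an explicit datasource name

        def get_data(self):
            return self._get_data()

        def _get_data(self):
            raise NotImplementedError("subclasses must implement _get_data")

    class DataSourceExample(DataSource):
        dsname = "Example"  # used when sourcing datasource configuration

        def _get_data(self):
            self.metadata = {"instance-id": "iid-example"}
            return True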
|
|
DataSourceOVF attempts to find iso files by walking os.listdir('/dev/'),
which is far too broad. This approach is too invasive and can sometimes
race with systemd attempting to fsck and mount devices.
Instead, utilize cloudinit.util.find_devs_with to filter devices by
criteria (which uses blkid under the covers). This results in fewer
attempts to mount block devices which do not contain iso9660 filesystems.
Unittest changes include:
- cloudinit.tests.helpers; introduce add_patch() helper
- Add unittest coverage for DataSourceOVF use of transport_iso9660
LP: #1718287
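A hedged sketch of the narrower probe; cloudinit.util.find_devs_with wraps blkid, so a standalone approximation of the same filter looks roughly like this:

    import subprocess

    def find_iso9660_devs():
        # Only devices blkid reports as carrying an iso9660 filesystem.
        cmd = ["blkid", "-t", "TYPE=iso9660", "-o", "device"]
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=False)
        except FileNotFoundError:
            return []
        return [line.strip() for line in out.stdout.splitlines() if line.strip()]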
|
|
The network devices should be enabled before sending the 'SUCCESS' event
to the underlying hypervisor.
|
|
For customizing the machines hosted on the 'VMware' hypervisor, the
datasource should return the 'network config' data in 'curtin' format.
This branch also fixes /etc/network/interfaces by replacing the line
"source /etc/network/interfaces.d/*.cfg", which is incorrectly removed
when VMware's Perl Customization Engine writes /etc/network/interfaces.
Modify the code to read the customization configuration and return the
converted data.
Added a few tests.
LP: #1675063
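A hedged example of the 'curtin format' (version 1) network config such a datasource could return for a single statically addressed NIC; all values here are invented:

    network_config = {
        "version": 1,
        "config": [
            {
                "type": "physical",
                "name": "eth0",
                "mac_address": "00:50:56:aa:bb:cc",
                "subnets": [
                    {"type": "static", "address": "192.168.10.5",
                     "netmask": "255.255.255.0", "gateway": "192.168.10.1"},
                ],
            },
        ],
    }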
|
|
This feature enables the following VMware vCloud Director functionality:
1. Set the admin password.
2. Expire the password.
3. Set the admin password and expire it.
Password configuration is triggered only as part of a full
recustomization, which happens either on first power-on or when
"poweron and full recustomization" is selected. The full customization
flow is determined by marker files. Unique marker ids are
generated when full recustomization is requested, and marker files based
on these marker ids help determine whether we need to execute the above
configuration.
|
|
This will change all instances of LOG.warn to LOG.warning as warn
is now a deprecated method. It will also make sure any logging
uses lazy logging by passing string format arguments as function
parameters.
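A short before/after sketch of the change (an assumed example, not an excerpt from the diff):

    import logging

    LOG = logging.getLogger(__name__)

    def report_missing(devname):
        # before: LOG.warn("device %s not found" % devname)   # deprecated + eager
        # after: the format arguments are passed to the logger, which only
        # renders the string if the warning is actually emitted.
        LOG.warning("device %s not found", devname)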
|
|
- The staticIPV4 property can be either None or a valid array. We need to
check for None before accessing the IP address.
- Modified a few misc. log messages.
- Added a new log message while waiting for the customization config file.
- Added support to configure the maximum amount of time to wait for the
customization config file.
- VMware Customization support is provided only for the DataSourceOVF class
and not for any other child classes. Implemented a new variable
vmware_customization_supported to check whether 'VMware Customization'
support is available for a specific datasource or not.
- Renamed the function get_vmware_cust_settings to get_max_wait_from_cfg.
- Removed the code that does 'ifdown and ifup' in the NIC configurator.
|
|
This has been a recurring ask, and we had initially just made the change in
the cloud-init 2.0 codebase. As the current thinking is that we'll just
continue to enhance the current codebase, it's desirable to relicense to
match what we had intended as part of the 2.0 plan.
- put a brief description of license in LICENSE file
- put full license versions in LICENSE-GPLv3 and LICENSE-Apache2.0
- simplify the per-file header to reference LICENSE
- tox: ignore H102 (Apache License Header check)
Add license header to files that ship.
Reformat headers, make sure everything has vi: at end of file.
Non-shipping files do not need the copyright header,
but at the moment tests/ have it.
|
|
When user-data was provided in the OVF environment, python3 would call
base64.decodestring() with a string rather than bytes and an exception
would occur.
This fixes the broken path and adds a unit test. It also changes the code
to return None rather than an empty string when there is no user-data, and
to return the user-data as bytes instead of a string when it is present.
LP: #1619394
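A sketch of the Python 3 pitfall and the behaviour described above, using base64.b64decode in place of the deprecated decodestring (an assumed helper, not the actual patch):

    import base64

    def decode_user_data(text):
        # No user-data: return None rather than an empty string.
        if not text:
            return None
        # base64 decoding in python3 wants bytes, so encode the OVF string first;
        # the decoded user-data is returned as bytes, not str.
        return base64.b64decode(text.encode("ascii"))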
|
|
pylint --errors-only found several errors. Some of the changes
here fix real errors; others just adjust code that pylint did
not like.
|
|
Enabled NICs even in the failure case.
Used util.del_dir() instead of shutil.rmtree().
|
|
- Modified the code to look for the customization specification file in the
/var/run/vmware-imc/ directory instead of /tmp.
- Fixed the 'seed file' issue: there was a regression in the DataSourceOVF.py
file.
|
|
- Changed the really long 'from ... import ...' statements.
|
|
- Now my branch is identical to trunk.dist
|
|
- Added a new utility method to send an RPC for enabling NICs.
- Modified DataSourceOVF.py to enable NICs.
- Executed ./tools/run-pep8 and no issues were reported.
|
|
- Added a few utility functions to report events to the underlying
VMware virtualization platform.
- Refactored the code a little bit.
- Executed ./tools/run-pep8 and no pep8 errors were reported.
|
|
Update make check target to run pep8 and run pyflakes or pyflakes3
depending on the value of 'PYVER'. This way the python3 build
environment does not need python2 and vice versa.
Also have make check run the 'yaml' test.
tox: have tox run pep8 in the pyflakes environment.
|
|
Executed ./tools/run-pep8 cloudinit/sources/DataSourceOVF.py and no errors
were reported.
|
|
Update make check target to use pep8, pyflakes, pyflakes3.
|
|
this makes 'make' run pyflakes, so failures there will stop a build.
also adds it to tox.
|
|
Customization is disabled by default and is triggered only
when the option disable_vmware_customization is set to false in
/etc/cloud/cloud.cfg.
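A minimal sketch of the gate, assuming a parsed /etc/cloud/cloud.cfg dictionary; the helper name is hypothetical:

    def customization_enabled(sys_cfg):
        # Disabled unless cloud.cfg explicitly sets
        # disable_vmware_customization: false
        return not sys_cfg.get("disable_vmware_customization", True)

    print(customization_enabled({}))                                       # False
    print(customization_enabled({"disable_vmware_customization": False}))  # True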
|
|
- Fixed a few variable names.
- Used util.subp methods for process-related manipulations.
|
|
- Implemented the 'search_file' function using 'os.walk()' (sketched below).
- Fixed a few variable names.
- Removed the size() function in config_file.py.
- Updated test_config_file.py to use len() instead of .size().
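An illustrative sketch of a search_file helper built on os.walk, as the first item describes; the real helper may differ in its details:

    import os

    def search_file(dirpath, filename):
        """Return the full path of the first match under dirpath, else None."""
        if not dirpath or not filename:
            return None
        for root, _dirs, files in os.walk(dirpath):
            if filename in files:
                return os.path.join(root, filename)
        return None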
|
|
- Added the code to detect the VMware Virtual Platform and apply the
customization based on the 'Customization Specification File' pushed
into the guest VM.
|
|
to be behind trunk.
`tox -e py27` passes full test suite. Now to work on replacing mocker.
|
|
this supports a list of inputs, and cleans up that list
for the platform-specific mount types. Basically,
mtype = None
means 'mount -t auto' or the equivalent for the platform,
and 'iso9660' means "iso type".
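A hedged sketch of the cleanup: accept a single filesystem type, a list, or None, and normalize to the list of types handed to the platform's mount command (the function name and default are assumptions):

    def normalize_mtypes(mtype, platform_default=("auto",)):
        # None: let the platform pick, e.g. 'mount -t auto' on Linux.
        if mtype is None:
            return list(platform_default)
        if isinstance(mtype, str):
            return [mtype]
        if isinstance(mtype, (list, tuple)):
            return list(mtype)
        raise TypeError("invalid mtype: %r" % (mtype,))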
|
|
Fixed all complaints from running "make pep8". Also version locked
pep8 in test-requirements.txt to ensure that pep8 requirements don't
change without an explicit commit.
|
|
This is not really a problem, because nothing would call transport_iso9660
with 'require_iso' as False, but if it did, we would still have
required an iso9660 filesystem on the mount.
|
|
at files, since it is likely that we will soon add a new
way of adjusting the root of files read; it is also useful
for debugging to track what is being read/written in a central
fashion.
|
|
during DataSourceOVF's search for a transport, the failed
mounts were having exceptions logged (also triggering an unnecessary
warning).
|
|
|