|
Applied Black and isort, fixed any linting issues, updated tox.ini
and CI.
|
|
tox: bump the pinned flake8 and pylint version
* pylint: fix W1406 (redundant-u-string-prefix)
The u prefix for strings is no longer necessary in Python >=3.0.
* pylint: disable W1514 (unspecified-encoding)
The new warning stems from https://www.python.org/dev/peps/pep-0597
(Python 3.10), which says:
Developers using macOS or Linux may forget that the default encoding
is not always UTF-8. [...] Even Python experts may assume that the
default encoding is UTF-8. This creates bugs that only happen on Windows.
The warning could be fixed by always specifying encoding='utf-8';
however, we should be careful not to break environments which are not
UTF-8 (or explicitly state that only UTF-8 is supported). Let's silence
the warning for now.
* _quick_read_instance_id: cover the case where load_yaml() returns None
Spotted by pylint:
- E1135 (unsupported-membership-test)
- E1136 (unsubscriptable-object)
LP: #1944414
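A minimal sketch of the kind of guard that resolves E1135/E1136 when a
YAML loader can return None; this uses PyYAML directly and an illustrative
'instance-id' key rather than the exact cloud-init helper:

    import yaml

    def quick_read_instance_id(path):
        # yaml.safe_load() returns None for an empty document, so guard the
        # result before the membership test / subscription that pylint
        # flagged (E1135, E1136).
        try:
            with open(path, encoding="utf-8") as fh:
                data = yaml.safe_load(fh)
        except OSError:
            return None
        if data and "instance-id" in data:
            return data["instance-id"]
        return None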
|
|
- small documentation update for ReportEventStack explaining the post_files
parameter
- small unit test in test_reporting demonstrating closing an event with an
optional post_files list
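For context, a short sketch of how a caller can pass post_files to the
event stack; the wrapped do_install() and the log path are illustrative,
not from the actual test:

    from cloudinit.reporting import events

    def do_install():
        pass  # placeholder for the work being wrapped

    # post_files names files whose contents are posted when the event is
    # closed, whether it finishes in success or failure.
    with events.ReportEventStack(
            name="install",
            description="performing the install",
            post_files=["/var/log/cloud-init.log"]):
        do_install()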
|
|
Push the cloud-init.log file (up to 500KB at once) to the KVP before reporting ready to the Azure platform.
Based on the analysis done on a large sample of cloud-init.log files, here are the statistics collected on log file size:
P50 P90 P95 P99 P99.9 P99.99
137K 423K 537K 3.5MB 6MB 16MB
This change limits the size of cloud-init.log file data that gets dumped to KVP to 500KB. So for ~95% of the cases, the whole log file will be dumped and for the remaining ~5%, we will get the last 500KB of the cloud-init.log file.
To assess the performance of the 500KB limit, 250 VMs were deployed with a 500KB cloud-init.log file, and the time taken to compress, encode and dump the entries to KVP was measured. Here are the percentiles of that time in milliseconds:
P50 P99 P999
75.705 232.701 1169.636
Another 250 VMs were deployed with this logic dumping their normal cloud-init.log file to KVP, and the same timing was measured as above. Here are the percentiles of that time in milliseconds:
P50 P99 P999
1.88 5.277 6.992
Added excluded_handlers to the report_event function to allow opting out of reporting the compressed cloud-init.log events back into the cloud-init.log file itself.
The KVP break_down logic had a bug where it would reuse the same key for all the split chunks of a KVP, so the last chunk overwrote the earlier ones when consumed by Hyper-V. I added the split chunk index to the KVP key as a differentiator.
Hyper-V consumes the KVPs from the KVP file as records with a 512-byte key and a 2048-byte value, but the Azure platform expects the value to be at most 1024 bytes, so I introduced the Azure value limit.
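A hedged sketch of the key-indexing and value-limit ideas described above;
the constant and names are illustrative rather than the exact code in
cloudinit/reporting/handlers.py:

    AZURE_KVP_VALUE_LIMIT = 1024  # bytes; the Hyper-V value slot itself is 2048 bytes

    def break_down(key, value, limit=AZURE_KVP_VALUE_LIMIT):
        # Append the chunk index to the key so a consumer such as Hyper-V
        # does not overwrite earlier chunks with the last one.
        chunks = [value[i:i + limit] for i in range(0, len(value), limit)]
        return [("{}|{}".format(key, index), chunk)
                for index, chunk in enumerate(chunks)]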
|
|
* cloudinit: remove global disable of pylint W0107 and fix errors
This includes removing a test class which contained no tests but wasn't
detected as empty because of an errant pass statement.
* .pylintrc: update disable comment to match arguments
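The offending pattern looks roughly like this (class name illustrative):

    class TestSomething:
        # The stray 'pass' (pylint W0107, unnecessary-pass) made this class
        # look non-empty even though it defines no tests; removing it fixes
        # the warning.
        pass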
|
|
|
|
This fixes issues with closing brackets not matching the opening
bracket's line and continuation lines under-indented for hanging indent.
|
|
* url_helper: drop six
* url_helper: sort imports
* log: drop six
* log: sort imports
* handlers/__init__: drop six
* handlers/__init__: sort imports
* user_data: drop six
* user_data: sort imports
* sources/__init__: drop six
* sources/__init__: sort imports
* DataSourceOVF: drop six
* DataSourceOVF: sort imports
* sources/helpers/openstack: drop six
* sources/helpers/openstack: sort imports
* mergers/m_str: drop six
This also allowed simplification of the logic, as we will never
encounter a non-string text type.
* type_utils: drop six
* mergers/m_dict: drop six
* mergers/m_list: drop six
* cmd/query: drop six
* mergers/__init__: drop six
* net/cmdline: drop six
* reporting/handlers: drop six
* reporting/handlers: sort imports
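The general shape of these changes is replacing six helpers with their
direct Python 3 equivalents; an illustrative before/after sketch, not the
actual diffs:

    # Before (Python 2/3 via six):
    #   import six
    #   if isinstance(value, six.string_types):
    #       stream = six.StringIO(value)

    # After (Python 3 only):
    import io

    def to_stream(value):
        # six.string_types collapses to the built-in str on Python 3
        if isinstance(value, str):
            return io.StringIO(value)
        raise TypeError("expected a string")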
|
|
The KVPs currently being emitted to the .kvp_pool file can have
duplicate keys which is wrong since these keys should be unique.
The situation can occur if, for example, an Azure function is called
twice or more and that function reports telemetry through KVPs. Any
KVP consumer can get confused by the duplicate keys, and a race
condition can and has occurred.
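One way to guarantee uniqueness is to fold a per-process counter (or a
UUID) into each key; a sketch of the idea, not necessarily the exact fix:

    import itertools

    _counter = itertools.count()

    def unique_kvp_key(base_key):
        # Repeated telemetry calls with the same base key now yield distinct
        # keys, so a KVP consumer never sees duplicates.
        return "{}|{}".format(base_key, next(_counter))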
|
|
+ Truncate KVP Pool file to prevent stale entries from
being processed by the Hyper-V KVP reporter.
+ Drop filtering of KVPs as it is no longer needed.
+ Batch appending of existing KVP entries.
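A minimal sketch of the truncate-then-batch-write behaviour described
above; the path handling and entry encoding are illustrative:

    def rewrite_kvp_pool(path, entries):
        # Opening with "wb" truncates the pool file, so stale records from a
        # previous run are not re-processed by the Hyper-V KVP reporter; the
        # surviving entries are then written back in one batch.
        with open(path, "wb") as fh:
            fh.write(b"".join(entries))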
|
|
Switch the implementation to a daemon thread which uses a
blocking get from the Queue. No additional locking or flag checking
is needed since the Queue itself handles acquiring the lock as needed.
cloud-init only has a single producer (the main thread calling publish)
and the consumer will read all events in the queue and write them out.
Using the daemon mode of the thread handles flushing the queue on
main exit in Python 3; in Python 2.7 we handle the EOFError that results
when the publish thread's call to get() fails, indicating the main thread
has exited.
The result is that the handler no longer spawns a thread on each
publish event; instead it creates a single thread when the reporter
starts up, and we drop the additional locks and flags, since there is
only a single Queue object, with queue.put() called only from the main
thread and queue.get() only from the consuming thread.
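A minimal Python 3 sketch of the single daemon consumer pattern described
above; class and method names are illustrative, not the actual handler:

    import queue
    import threading

    class KvpPublisher:
        def __init__(self):
            self._q = queue.Queue()
            self._consumer = threading.Thread(target=self._drain, daemon=True)
            self._consumer.start()

        def publish(self, event):
            # Called only from the main thread; Queue does its own locking.
            self._q.put(event)

        def _drain(self):
            while True:
                event = self._q.get()  # blocking get, no polling or flags
                self._write_out(event)

        def _write_out(self, event):
            print("writing event:", event)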
|
|
Linux guests can provide information to Hyper-V hosts via KVP.
KVP allows the guests to provide arbitrary string key-value pairs back to
the host's registry. On Linux, KVP communication pools are presented as
pool files in /var/lib/hyperv/.kvp_pool_#.
The following reporting configuration can enable this KVP reporting in
addition to default logging if the pool files exist:
    reporting:
        logging:
            type: log
        telemetry:
            type: hyperv
|
|
This enables warnings produced by pylint for unused variables (W0612),
and fixes the existing errors.
|
|
This will change all instances of LOG.warn to LOG.warning as warn
is now a deprecated method. It will also make sure any logging
uses lazy logging by passing string format arguments as function
parameters.
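For illustration, the change amounts to this pattern (the function and
message here are made up):

    import logging

    LOG = logging.getLogger(__name__)

    def report(name, count):
        # Deprecated and eagerly formatted:
        #   LOG.warn("processed %s events for %s" % (count, name))
        # Preferred: warning() with lazy formatting, so the message is only
        # built if the WARNING level is enabled.
        LOG.warning("processed %s events for %s", count, name)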
|
|
This has been a recurring ask, and we had initially just made the change in
the cloud-init 2.0 codebase. As the current thinking is that we'll just
continue to enhance the current codebase, it's desirable to relicense it to
match what we'd intended as part of the 2.0 plan.
- put a brief description of license in LICENSE file
- put full license versions in LICENSE-GPLv3 and LICENSE-Apache2.0
- simplify the per-file header to reference LICENSE
- tox: ignore H102 (Apache License Header check)
Add license header to files that ship.
Reformat headers, make sure everything has vi: at end of file.
Non-shipping files do not need the copyright header,
but at the moment tests/ have it.
|
|
|
|
If no timestamp was passed into a ReportingEvent, then the default was
used. That default was 'time.time()' which was evaluated once only at
import time.
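This is the classic Python default-argument pitfall; a hedged sketch of
the bug and the shape of the fix (attribute names are illustrative):

    import time

    class ReportingEvent:
        # Buggy: the default is computed once, at import time, so every event
        # created without an explicit timestamp shares the same stale value.
        #   def __init__(self, name, timestamp=time.time()):

        # Fixed: defer the call until the event is actually created.
        def __init__(self, name, timestamp=None):
            self.name = name
            self.timestamp = timestamp if timestamp is not None else time.time()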
|
|
The handler was passing a dictionary to readurl,
which was then passing that on to requests.request as 'data'.
The requests library would urlencode that, but we want the
JSON data posted instead.
LP: #1496960
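The fix amounts to serializing the payload before handing it to readurl;
a sketch under that assumption (the endpoint and wrapper name are
illustrative, not the exact change):

    import json
    from cloudinit.url_helper import readurl

    def post_event(endpoint, event_as_dict):
        # Passing the dict straight through lets requests urlencode it as a
        # form; serializing to JSON first posts the body we actually want.
        return readurl(
            endpoint,
            data=json.dumps(event_as_dict),
            headers={"Content-Type": "application/json"},
        )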
|
|
this import was left over from before we moved oauthlib into url_helper
|
|
|
|
This adds 'timestamp' and 'origin' to events.
The timestamp is simply that, a floating point timestamp of when
the event occurred.
The origin indicates the source / reporter of this. It is useful
to have a single endpoint with multiple different things reporting
to it. For example, MAAS will configure cloud-init and curtin
to report to the same endpoint and then it can differentiate who
made the post. Admittedly, they could use multiple endpoints, but
this seems sane.
Also, add support for posting files at the close of an event.
This is utilized in curtin to post a log file when the install is
done. Files are posted on success or failure of the event.
|
|
this just separates events from other things that could conceivably
be reported.
|
|
change ReportStack to ReportEventStack
change the default ReportEventStack result to status.SUCCESS instead of None
|
|
|