Ansible 2.20 has deprecated the use of Ansible facts as variables. For
example, `ansible_distribution` is now deprecated in favor of
`ansible_facts["distribution"]`. This is because the default setting is
changing to `INJECT_FACTS_AS_VARS=false`. For now, this only produces WARNING
messages, but in Ansible 2.24 it will become an error.
See https://docs.ansible.com/projects/ansible/latest/porting_guides/porting_guide_core_2.20.html#inject-facts-as-vars
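For example, a task changes from the injected variable to the facts dictionary
like this (the debug task itself is just illustrative):
```
# Deprecated: relies on facts being injected as top-level variables
- name: Show the distribution (old style)
  debug:
    msg: "Running on {{ ansible_distribution }}"

# Preferred: read the value from the ansible_facts dictionary
- name: Show the distribution (new style)
  debug:
    msg: "Running on {{ ansible_facts['distribution'] }}"
```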
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
community.general version 12 has dropped support for py27 and py36 - ensure that
the roles do not install/use this version - see
https://github.com/ansible-collections/community.general/issues/582
By default, installation will get the latest 11.x version. The lower bound
`6.6.0` is an older version, but I don't want to force users of a particular
role onto `11.x` or later, so the bound stays loose enough to allow older
versions. Some roles like `rhc` explicitly require `6.6.0` or later - I think
this is a reasonable lower bound for all roles. If a role needs a different
version, the role can define its own `community_general_version` in the role's
`host_vars` file in `.github`.
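As a rough sketch, the default collection requirement ends up looking something
like this (the exact file name and pinning syntax may vary per role; the
`<12.0.0` upper bound is what keeps version 12 out):
```
# Illustrative requirements entry - exact file and bounds may differ per role
collections:
  - name: community.general
    version: ">=6.6.0,<12.0.0"
```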
Standardize file format across all roles for consistency and ease of updating
This update may remove the SPDX license information from the file - this is ok -
the role/project already has a license, this file is trivial, and many
requirements files do not have the license header anyway.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Previously, CI would download the standard-inventory-qcow2 script from pagure.
However, the pagure download URL is now protected by Anubis, which by default
checks the User-Agent header and denies requests from clients that look like
scrapers or hackers. Rather than getting into an arms race over request
headers, etc., just move this script to tox-lsr. If we really need to sync
with upstream development, we can do that manually.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
use versioned upload-artifact instead of master
bump codeql-action from v3 to v4
bump upload-artifact from v4 to v5
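In workflow terms this is just updating the action references, roughly:
```
# Illustrative workflow step references - actual workflow files and steps differ
- uses: github/codeql-action/analyze@v4   # was @v3
- uses: actions/upload-artifact@v5        # was @v4, previously @master
```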
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
On some versions of ansible/jinja, the YAML format does not work, so use
the JSON format to pass in `__bootc_validation`
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Now that https://github.com/teemtee/tmt/pull/3128 has merged
we can use the new epel feature to enable EPEL for testing farm
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
* Pass in a YAML true value as `__bootc_validation: true` using
the --extra-vars option to ensure that `__bootc_validation` is
treated as a boolean and not a string value.
`-e "__bootc_validation: true"`
You can also use JSON format:
`-e '{"__bootc_validation": true}'`
but YAML is simpler in this case.
* Use tox-lsr version 3.11.1
* Ensure the `citest bad` comment works when the test was cancelled, in
addition to the failure case.
* Update contributing.md documentation
* Update number of nodes to use in testing farm, if needed
* remove unnecessary ansible-lint skips
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Cause: The user is trying to specify the routing table to use by the name of
a built-in routing table defined in /usr/share/iproute2/rt_tables such as `main`.
Consequence: The network role gives an error:
"cannot find route table main in `/etc/iproute2/rt_tables` or `/etc/iproute2/rt_tables.d/`"
The workaround is for the user to specify the table by number instead of by
name, e.g. `table: 254` instead of `table: main`.
Fix: Look for table mappings in /usr/share/iproute2/rt_tables as well as the other
paths.
Result: The user can use built-in route table names.
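With the fix, a route can reference the built-in table by name, for example
(a sketch of the role's route settings; the addresses and connection details
are illustrative):
```
network_connections:
  - name: ethtest0
    type: ethernet
    ip:
      route:
        - network: 198.51.100.0
          prefix: 24
          gateway: 198.51.100.1
          table: main   # previously only a number such as 254 worked
```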
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
This is mainly needed on el7 - on el8 and later, NetworkManager is installed
by default or by some other part of the setup.
Additionally, if the NetworkManager-server-config package is installed, then
the secondary interface will not be active, so ensure it is active.
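Conceptually the setup amounts to something like this (the module choices and
the interface variable are illustrative, not the exact test code):
```
- name: Ensure NetworkManager is installed (mainly needed on el7)
  package:
    name: NetworkManager
    state: present

- name: Ensure the secondary test interface is active
  command: nmcli device connect {{ secondary_interface | default('eth1') }}
  changed_when: true
```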
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
https://issues.redhat.com/browse/RHEL-87511
The `system_ca_certs: true` flag in NM tells wpa_supplicant to load the
legacy single‑file CA bundle (historically at /etc/pki/tls/cert.pem).
Under CentOS Stream 9 (and RHEL 8), that path existed (either as a file
or a symlink to the bundle), so the default “system” loading worked.
On CentOS Stream 10 (RHEL 10), Red Hat switched to a hashed directory
trust store and removed `/etc/pki/tls/cert.pem` to optimize OpenSSL
performance as indicated in
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/10-beta/html/10.0_beta_release_notes/removed-features
and https://issues.redhat.com/browse/RHEL-50293. wpa_supplicant's
"system_ca_certs" code still tries the old cert.pem path, sees
"No such file or directory" and aborts the TLS setup:
```
OpenSSL: tls_connection_ca_cert - Failed to load root certificates - No such file or directory
EAP‑TLS: Failed to initialize SSL.
```
Hence `system_ca_certs: true` silently fails on Stream 10 because
there is no longer a single‑file CA bundle at that location.
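For reference, this is the kind of connection configuration that hits the
problem (a sketch; the 802.1x option names and certificate paths here are
illustrative):
```
network_connections:
  - name: eap-tls-test
    type: ethernet
    ieee802_1x:
      identity: myhost
      eap: tls
      private_key: /etc/pki/tls/client.key
      client_cert: /etc/pki/tls/client.pem
      system_ca_certs: true   # fails on Stream 10: no /etc/pki/tls/cert.pem bundle
```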
The new ansible-lint does not like variables in play names.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Implement the tests_ethernet FIXMEs for actually validating the `nmcli`
state and generated on-disk profiles. Do the latter separately in
anticipation of future support for offline (bootc build) mode.
This needs some conditionals, as NetworkManager before RHEL 9 uses the
initscripts config backend.
Signed-off-by: Martin Pitt <mpitt@redhat.com>
Simplify the cumbersome assertion.
Drop the ancient Fedora releases from the __NM_capath_ignored_NVRs list.
Signed-off-by: Martin Pitt <mpitt@redhat.com>
The big problem was trying to use `vars` with `import_playbook`.
We do not need to use `import_playbook` when `include_tasks` will
work. Perhaps the original author of these tests thought that
the play `roles` keyword was the only way to invoke roles, so
that had to be "called" using an `import_playbook`?
Use `include_tasks` instead of `import_playbook`, and move some
of those "tasks" playbooks to be tasks files in tests/tasks.
Use `include_role` instead of `import_role`.
Do not set variables using `set_fact` if they have already been
set at the appropriate scope using `vars`.
"Modernize" the code somewhat.
Improve formatting.
Work around an Ansible bug https://github.com/ansible/ansible/issues/85394
Fix ansible-lint and ansible-test issues related to newer versions of
those tools.
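The resulting `include_tasks`/`include_role` pattern is roughly as follows
(a simplified sketch; the file and role names are placeholders, not the actual
test files):
```
# Before: a "tasks" playbook pulled in just to run some tasks or a role
# - import_playbook: playbooks/setup_something.yml
#   vars:                     # using vars here was the problem
#     some_var: some_value
#
# After: plain task files and dynamic includes
- name: Run the setup tasks
  include_tasks: tasks/setup_something.yml
  vars:
    some_var: some_value

- name: Run the role
  include_role:
    name: linux-system-roles.network
```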
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
In some cases, the interface given in MAC_ADDR_MATCH_INTERFACE can be an
alias or altname. The test cannot use the altname; it must use the "real"
interface name.
For example, on some systems, if `MAC_ADDR_MATCH_INTERFACE=enX1`, the test
will fail because it is an altname for `ens4`:
```
+ ip addr show enX1
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname enx525400123457
altname enX1
```
The test will now parse the output of `ip addr show $name` to get the real interface name.
Also, improve the fallback method to look for common secondary interface names
such as eth1 and ens4 in case MAC_ADDR_MATCH_INTERFACE is not one of these.
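The actual test is a shell script; purely to illustrate the parsing idea (the
variable name is a placeholder), the equivalent logic looks something like:
```
# Illustrative only - shows how the real name is extracted from the first line
- name: Resolve the real interface name behind a possible altname
  shell: |
    ip addr show {{ mac_addr_match_interface }} | head -n1 | awk -F': ' '{print $2}'
  register: __real_interface
  changed_when: false
```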
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
NOTE: This also requires upgrading to tox-lsr 3.11.0
Ansible 2.19 will be released soon and has some changes which will
require fixes in system roles. This adds 2.19 to our testing matrix
on fedora 42 so that we can start addressing these issues.
python 3.13 is now being used on some platforms.
Using ansible-core 2.18 requires using py311 for pylint and other
python checkers.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
NOTE: This also requires upgrading to tox-lsr 3.11.0
Ansible 2.19 will be released soon and has some changes which will
require fixes in system roles. This adds 2.19 to our testing matrix
on fedora 42 so that we can start addressing these issues.
python 3.13 is now being used on some platforms.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
NOTE: This also requires upgrading to tox-lsr 3.10.0, and some
hacks to work around a podman issue in ubuntu.
These tests run the role during a bootc container image build, deploy
the container into a QEMU VM, boot that, and validate the expected
configuration there. They run in two different tox environments, and
thus have to be run in two steps (preparation in buildah, validation in
QEMU). The preparation is expected to output a qcow2 image in
`tests/tmp/TESTNAME/qcow2/disk.qcow2`, i.e. the output structure of
<https://github.com/osbuild/bootc-image-builder>.
There are two possibilities:
* Have separate bootc end-to-end tests. These are tagged with
`tests::bootc-e2` and are skipped in the normal qemu-* scenarios.
They run as part of the container-* ones.
* Modify an existing test: These need to build a qcow2 image exactly
*once* (via calling `bootc-buildah-qcow.sh`) and skip setup/cleanup
and role invocations in validation mode, i.e. when
`__bootc_validation` is true.
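For the second case, the guard amounts to something like the following (a
sketch; the task file names are placeholders):
```
- name: Set up the test interface and run the role (build step only)
  include_tasks: tasks/setup_and_run_role.yml
  when: not (__bootc_validation | d(false) | bool)

- name: Validate the resulting configuration (runs in both steps)
  include_tasks: tasks/assert_profile_present.yml
```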
In the container scenario, run the QEMU validation as a separate step in
the workflow.
See https://issues.redhat.com/browse/RHEL-88396
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Add Fedora 42 to testing farm test matrix, drop Fedora 40
Use tox-lsr 3.9.0 for the `--lsr-report-errors-url` argument.
Add the argument `--lsr-report-errors-url DEFAULT` to the qemu test so that
the errors will be written to the output log. This uses the output callback
https://github.com/linux-system-roles/auto-maintenance/blob/main/callback_plugins/lsr_report_errors.py
Use the check_logs.py script
https://github.com/linux-system-roles/auto-maintenance/blob/main/check_logs.py
with the `--github-action-format` argument to format the errors
in a github action friendly manner.
Rename the log files to end in `-FAIL.log` or `-SUCCESS.log` depending on status.
This is compatible with the way the testing farm log files are named, and
makes it easy to tell if a test passed or failed from the log file name.
Upload README.html as artifacts of the build_docs job for debugging
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
This will make the qemu/kvm tests run in either
ascending or descending ASCII order. This should give
us better test coverage of cleanup scenarios which may
fail depending on the order of the previous tests.
Rename the qemu/kvm tests so that the statuses are shorter
and more intuitive.
Improve qemu/kvm test failure error reporting.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
These tests are problematic in the github qemu tests, as is that
functionality (scsi, anyway) in the testing farm integration
tests.
Yes, we should have a way to provide tags on a per-role basis...
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
When running tests with a qemu managed node, the dhcp
used by qemu interferes with the dhcp used in the test, which
can cause the test to hang. Exclude the qemu interfaces from
using the test dhcp. Note that this only affects the qemu tests -
testing farm and other tests with "real" machines will have a
different mac address - the mac addresses used below are specific
to qemu virtual devices.
Also, just in case tests still time out, add a tests/ansible.cfg
with a 240 second task timeout to ensure any hung tasks are killed.
This will cause the playbook to exit with an error.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
tox-lsr 3.6.0 will guarantee order of qemu test execution, which should
help make tests reproducible and help debug test failures.
Improve qemu test logging - this will help debug the qemu test
failures.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Some systems do not use the `ethN` interface naming scheme, and
use `ensN` instead. The test wants to use `eth1` as the second
interface. If this does not exist, try `ens4` instead.
Some of our tests now run on an ubuntu control node (localhost)
and use `shell` to execute commands there. Ansible requires
the use of `pipefail`. The default shell on ubuntu is not
bash and does not support `pipefail`.
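The usual fix is to run such pipelines explicitly under bash with `pipefail`
set, for example (the command itself is a placeholder):
```
- name: Example pipeline on the control node (sketch)
  shell: |
    set -o pipefail
    some_command | grep some_pattern
  args:
    executable: /bin/bash
  delegate_to: localhost
  changed_when: false
```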
Signed-off-by: Rich Megginson <rmeggins@redhat.com>