Ansible 2.20 has deprecated the use of Ansible facts as variables. For
example, `ansible_distribution` is now deprecated in favor of
`ansible_facts["distribution"]`. This is because the default setting
is changing to `INJECT_FACTS_AS_VARS=false`. For now, this produces
WARNING messages, but in Ansible 2.24 it will be an error.
See https://docs.ansible.com/projects/ansible/latest/porting_guides/porting_guide_core_2.20.html#inject-facts-as-vars
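For example, a task that previously read the injected variable would change like this:
```yaml
# Deprecated: relies on INJECT_FACTS_AS_VARS=true
- name: Show the distribution (deprecated form)
  debug:
    msg: "{{ ansible_distribution }}"

# Preferred: read the fact from ansible_facts
- name: Show the distribution (preferred form)
  debug:
    msg: "{{ ansible_facts['distribution'] }}"
```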
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
As part of the public API, `network_state` must be defined in
defaults/main.yml with the correct type `dict`, so the correct
default value is `{}`, the empty dict.
All checking for `network_state` must check for a value of
`{}` to mean "network_state not set or empty".
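A minimal sketch of the default:
```yaml
# defaults/main.yml
network_state: {}
```
Every conditional in the role and tests then compares against the empty
dict, e.g. `when: network_state != {}`.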
Fix the test which looks for teaming configuration in EL10
to correctly look for the value in `network_state`.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
According to the Ansible team, support for listing platforms in
role `meta/main.yml` files is being removed.
Instead, they recommend using `galaxy_tags`
https://github.com/ansible/ansible/blob/stable-2.17/changelogs/CHANGELOG-v2.17.rst
"Remove the galaxy_info field platforms from the role templates"
https://github.com/ansible/ansible/issues/82453
Many roles already have tags such as "rhel", "redhat", "centos", and "fedora".
I propose that we ensure all of the system roles have these tags.
Some of our roles support SUSE, Debian, Ubuntu, and others.
We should add tags for those as well, e.g. the ssh role already has tags for "debian" and "ubuntu".
In addition - for each version listed under `platforms.EL` - add a tag like `elN`.
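For example, a role's meta/main.yml might end up with tags like this (the exact set varies per role):
```yaml
galaxy_info:
  galaxy_tags:
    - centos
    - el8
    - el9
    - el10
    - fedora
    - redhat
    - rhel
```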
Q: Why not use a delimiter between the platform and the version e.g. `el-10`?
This is not allowed by ansible-lint:
```
meta-no-tags: Tags must contain lowercase letters and digits only., invalid: 'el-10'
meta/main.yml:1
```
So we cannot use uppercase letters either.
Q: Why not use our own meta/main.yml field?
No other fields are allowed by ansible-lint:
```
syntax-check[specific]: 'myfield' is not a valid attribute for a RoleMetadata
```
Q: Why not use some other field?
There are no other applicable or suitable fields.
Q: What happens when we want to support versions like `N.M`?
Use the word "dot" instead of "." e.g. `el10dot3`.
Similarly - use "dash" instead of "-".
We do not need tags such as `fedoraall`.
The `fedora` tag implies that the role works on all supported versions of fedora.
Otherwise, use tags such as `fedora40` if the role only supports specific versions.
Teaming support is dropped in EL10. Provide an error to users who attempt
to use teaming and suggest that they use bonding instead. Skip teaming
tests on EL10.
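A sketch of the kind of guard this adds (task shape and conditions are illustrative; the real check also verifies the distribution):
```yaml
- name: Fail if a team connection is requested on EL10
  fail:
    msg: Team connections are not supported on EL10; use bonding instead.
  when:
    - item.type | d("") == "team"
    - ansible_facts["distribution_major_version"] | d("0") | int >= 10
  loop: "{{ network_connections | d([]) }}"
```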
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
If updates for network packages are available and wireless or team
connections are specified, NetworkManager must be restarted, and the
role requires the user's consent to restart it. Otherwise, there might
be property conflicts between the NetworkManager daemon and its plugin,
or the plugin might not take effect.
`update_cache` is enabled in the module tasks to check whether updates
for network packages are available when wireless or team interfaces are
specified; in that case, NetworkManager needs the user's explicit
consent to be restarted after the network package updates.
`state: latest` is used when checking for the network package updates
because we have to guarantee that NetworkManager and its plugin have
the same, most recent version for configuring the network connection
settings in the backend. It is worth mentioning that there are tasks
using both the dnf and yum modules for checking available updates,
because checking for package cache updates is not supported by the
Ansible package module: Fedora and RHEL 8+ use the DNF package manager
by default, while RHEL 7 uses yum.
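A sketch of the update check on DNF-based systems (task shape is illustrative):
```yaml
# Run in check mode so nothing is upgraded; "changed" means updates
# for the network packages are available
- name: Check for available updates of network packages
  dnf:
    name: "{{ network_packages }}"
    state: latest
    update_cache: true
  check_mode: true
  register: __network_package_updates
```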
This commit will address the situation that users forget to explicitly
specify `network_allow_restart: true` when specifying wireless or team
connections.
Signed-off-by: Wen Liang <liangwen12year@gmail.com>
Sometimes the rpm download returns a 403, which is likely caused by too
many parallel jobs attempting the download from the same controller in
too short a period of time, so the epel server throttles additional
downloads - use a retry here to mitigate.
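A sketch of the retry pattern (URL, counts, and variable names are illustrative):
```yaml
- name: Download the rpm, retrying to ride out server throttling
  get_url:
    url: "{{ __rpm_url }}"  # hypothetical variable
    dest: /tmp/package.rpm
  register: __download
  until: __download is success
  retries: 5
  delay: 10
```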
Signed-off-by: Wen Liang <liangwen12year@gmail.com>
The dependency on `ansible.utils.update_fact` is causing issues for
some users, who now must install that collection in order to run the
role even if they do not care about ostree.
The fix is to stop trying to set `ansible_facts.pkg_mgr`, and instead
force the use of the ostree package manager with the `package:` module
`use:` option. The strategy: on ostree systems, set the flag
`__$ROLENAME_is_ostree`; the flag will be either undefined or `false`
on non-ostree systems.
Then, change every invocation of the `package:` module like this:
```yaml
- name: Ensure required packages are present
  package:
    name: "{{ __$ROLENAME_packages }}"
    state: present
    use: "{{ (__$ROLENAME_is_ostree | d(false)) |
             ternary('ansible.posix.rhel_rpm_ostree', omit) }}"
```
This should ensure that the `use:` parameter is not used if the system
is non-ostree. The goal is to make the ostree support as unobtrusive
as possible for non-ostree systems.
The user can also set `__$ROLENAME_is_ostree: true` in the inventory or
play if the user knows that ostree is being used and wants to skip the
check. Alternatively, a user concerned about the performance hit of
ostree detection on non-ostree systems can set
`__$ROLENAME_is_ostree: false` to skip the check.
The flag `__$ROLENAME_is_ostree` can also be used in the role or tests to
include or exclude tasks from being run on ostree systems.
This fix also improves error reporting in the `get_ostree_data.sh` script
when included roles cannot be found.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
The only check we need to skip currently is the one requiring FQCN for
ansible builtin modules and plugins.
Add `kinds` - otherwise, ansible-lint thinks anything not in a
traditional role path is a plain YAML file, and we don't get the
additional checking.
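A sketch of the relevant `.ansible-lint` settings (rule names and paths are illustrative):
```yaml
# .ansible-lint (sketch)
skip_list:
  - fqcn-builtins  # allow short names for ansible builtin modules, plugins

kinds:
  # tell the linter these are playbooks/tasks, not plain YAML files
  - playbook: "tests/tests_*.yml"
  - tasks: "tests/tasks/*.yml"
```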
Ensure all plays are named.
Fix some other minor problems.
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
The users want to apply the nmstate network state configuration to the
interface directly through the role, which reduces the complexity of
the network configuration and allows partial configuration of the
network.
To ensure that users can apply the nmstate network state configuration,
add support for the `network_state` variable.
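A sketch of how a play might use the new variable (the nmstate content shown is abbreviated):
```yaml
- name: Apply the network state directly
  hosts: all
  roles:
    - linux-system-roles.network
  vars:
    network_state:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: true
```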
Signed-off-by: Wen Liang <liangwen12year@gmail.com>
When running CI tests, test performance can be improved by creating a
snapshot image to use for the test, pre-installed with packages used by
the role tests. The CI system can use tests/setup-snapshot.yml to
prepare the snapshot image. Rather than having a list of packages to
install in multiple places, the code which ensures that the facts and
variables are set is moved to a separate tasks/set_facts.yml so that the setup
playbook can use `tasks_from: set_facts.yml` to get the list of network
packages to install. NOTE: The network role developers should add
additional packages to setup-snapshot.yml for other packages installed
by other tests.
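A sketch of how the setup playbook can reuse the task file (play context omitted):
```yaml
- name: Get the list of network packages to pre-install
  include_role:
    name: linux-system-roles.network
    tasks_from: set_facts.yml
    public: true  # expose the role's variables to later tasks

- name: Pre-install the network packages into the snapshot
  package:
    name: "{{ network_packages }}"
    state: present
```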
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
Some users prefer to use `gather_facts: false` in their playbooks.
However, the network role requires certain ansible_facts to be set. If
the user wants to use the network role with `gather_facts: false`, the
role will gather the minimum subset of facts required. If the user does
not want the role to gather facts, the user can either not use the
network role, or ensure that all required facts are in the facts cache.
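A sketch of the now-supported pattern:
```yaml
- name: Configure networking without full fact gathering
  hosts: all
  gather_facts: false  # the role gathers the minimal subset it needs
  roles:
    - linux-system-roles.network
  vars:
    network_connections:
      - name: eth0
        type: ethernet
        state: up
```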
Signed-off-by: Rich Megginson <rmeggins@redhat.com>
bz#2044640
The network role creates an ifcfg file for initscripts. The file used
to contain the comment "# this file was created by ansible".
This patch replaces that proprietary string with the ansible
standard {{ ansible_managed }} to align with the other system
roles.
The implementation borrows the method from kernel_settings:
it gets the ansible managed comment using the get_ansible_managed.j2
template and passes the comment to network_connections, which adds it
to the ifcfg file.
When network_provider is nm, the comment is not added to the
ifcfg file, as the file is not managed by Ansible.
Note: the required parameter name to pass the ansible managed comment
to the network_connections module is "__header".
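A sketch of how the comment might be passed (assumed task shape; the lookup is illustrative):
```yaml
- name: Configure networking profiles
  network_connections:
    provider: initscripts
    connections: "{{ network_connections }}"
    __header: "{{ lookup('template', 'get_ansible_managed.j2') }}"
```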
Do not use get_ansible_managed.j2 in the test scripts, but use a
hardcoded ansible managed comment to simplify the tests.
tests/tasks/get_profile_stat.yml: replace the '=' style with the YAML
notation in set_fact.
Signed-off-by: Noriko Hosoi <nhosoi@redhat.com>
This avoids dumping all the parameters which can cause rather lengthy
output if multiple interfaces, routers, ... are configured.
The previous behaviour, where all the params etc. were displayed, can
still be achieved by adding `-v` to the `ansible-playbook` command.
Resolves: #394
Signed-off-by: Kristof Wevers <kristof.wevers@infura.eu>
WPA-PSK and WPA-EAP are supported. Uses existing 802.1x features of the role.
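A sketch of a WPA-PSK wireless profile (option names follow the role's wireless support; values are illustrative):
```yaml
network_connections:
  - name: wlan0
    type: wireless
    interface_name: wlan0
    wireless:
      ssid: example-ssid
      key_mgmt: wpa-psk
      password: example-password
```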
Added extra functionality to ArgValidatorStr to enforce a min and max length.
To avoid conflicts with other roles, it is recommended to prefix all
variables that are only used internally with '__' and the name of the
role ('__network_').
Logs are now separated by severity level. Warnings and failures are the
only logs that now appear in the output. All logs are saved into a new
JSON parameter called "stderr" that is later shown in a separate task.
In case of failure, all logs are shown in the output. Tests have been
created and modified to ensure that this feature works.
Signed-off-by: Elvira Garcia Ruiz <elviragr@riseup.net>
The initscripts network service requires /etc/sysconfig/network to be
present. The file might be missing in container images, for example
currently in CentOS 7. It seems to be created by anaconda usually.
Therefore just create it if necessary as it can be empty.
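A sketch of the fix (module choice is illustrative; an empty file suffices):
```yaml
- name: Ensure /etc/sysconfig/network exists for initscripts
  copy:
    content: ""
    dest: /etc/sysconfig/network
    force: false  # never overwrite an existing file
```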
References:
https://bugs.centos.org/view.php?id=16010
The network service from initscripts fails if there are network profiles
for unknown devices. Also it does not start an actual daemon but just
activates all profiles on disk. Therefore only enable it to ensure it
will come up after boot.
- Use initscripts as provider except when NetworkManager is running
- Rename network_provider_default to network_provider_os_default, since
it contains the default based on the OS
The role currently supports two providers: "nm" and "initscripts".
The provider is autodetected by loading one of the vars/*.yml files
(where the default is set via the internal "network_provider_default" variable).
The user can still overwrite the provider, by explicitly setting the
"network_provider" variable.
Depending on the provider, there is a list of packages that shall be
installed and a service to start. Selecting these was broken before.
This is now fixed and works as follows:
The variables "network_service_name" and "network_packages" can be
specified by the user as host variables. But usually the user wouldn't
want to do that. Instead, those settings depend on "network_provider".
The role looks into the internal "_network_provider_setup" dictionary,
which defaults to "network_service_name_nm", "network_service_name_initscripts",
"network_packages_nm", and "network_packages_initscripts".
These default variables are initialized in "defaults/main.yml" as well,
but they could be overwritten via "vars/*.yml" files, or via any other
mechanism.
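A sketch of how these pieces might fit together in defaults/main.yml (package and service names are illustrative):
```yaml
network_service_name_nm: NetworkManager
network_service_name_initscripts: network
network_packages_nm: [NetworkManager]
network_packages_initscripts: [initscripts]

_network_provider_setup:
  nm:
    service_name: "{{ network_service_name_nm }}"
    packages: "{{ network_packages_nm }}"
  initscripts:
    service_name: "{{ network_service_name_initscripts }}"
    packages: "{{ network_packages_initscripts }}"

network_service_name: "{{ _network_provider_setup[network_provider]['service_name'] }}"
network_packages: "{{ _network_provider_setup[network_provider]['packages'] }}"
```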
https://github.com/linux-system-roles/network/pull/14
https://bugzilla.redhat.com/show_bug.cgi?id=1485074
The states "up" and "down" previously would always change state. That
is, specifying them in the playbook will always invoke `ifup` (or
`ifdown`) or the corresponding of `nmcli connection up` (or `nmcli
connection down`).
That was intentional behavior, because the role doesn't really know which
profile is currently active. That is certainly the case for "initscripts",
where the role has almost no information about the current runtime
state. For "nm" provider, the role knows whether the connection is
already active. However, that alone also does not guarantee that the
current runtime state is identical to what would be the result of an
explicit `nmcli connection up`.
Hence, to be sure that the current state is always as expected, the role
would always explicitly issue the commands and report "changed=1".
That is quite harmful, because running the same role multiple times
should not report changes every time. Also, issuing `ifup` may behave
badly, if the interface is already configured.
Now, the role tries to determine whether the desired "up" or "down"
state is already reached and, if so, does nothing.
For "nm" provider that is easy and quite safe. There is still the
possibility to trick the role into thinking that the right configuration
is active, when it actually is not. For example via `nmcli device
modify` on the host. But in general, it should work just fine.
Especially, if the admin manually modifies the runtime state, it may be
just desired for "state: up" not to change anything.
For "initscripts" this is much more fragile. There isn't really much
that can be done about it, because the role doesn't know what is currently
configured on the system.
There is also a new option "force_state_change" to restore the previous
behavior.
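A sketch of restoring the previous behavior (shown as a per-profile setting, which is an assumption):
```yaml
network_connections:
  - name: eth0
    state: up
    force_state_change: true  # always issue the up command, as before
```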
https://bugzilla.redhat.com/show_bug.cgi?id=1476053
The role already supported a default variable ("network_provider") and
host variables ("network_provider_default", "network_service_name",
"network_packages").
Don't use nested variables under "network" like
```yaml
network:
  provider:
  ignore_error:
  connections:
```
instead promote them all to top-level variables like:
```yaml
network_provider:
network_ignore_error:
network_connections:
```
This seems more consistent (as we already have multiple top-level
variables), it seems to follow ansible style, and it makes it easier
to overload individual variables via conditional include files.
It's more idiomatic for ansible than "on_error".
'ignore_errors' can be specified as a module argument.
But it can also be specified on a per-profile level,
with the intuitive behavior that the per-profile setting
overwrites the per-module setting.
Instead of having the tasks call the "network_connections.py"
library for each connection profile individually (using with_items),
pass all profiles at once.
The advantage is:
- the module can validate the input arguments better as it has
access to all profiles. For example, when a slave connection
refers to another master profile from the same play. Previously,
each invocation of the module saw only the current profile and
could not verify whether the reference was valid.
- while configuring the network, the play might need to shortly
disconnect the control connection. In the previous way, after
tearing down the network the target host becomes unreachable for
ansible and the following steps cannot be executed anymore.
Now, all steps are done as a whole on the target host, via
one connection. If the host becomes unreachable for a short
time, that is not a problem as long as the connectivity is
restored at the end.
Ansible also supports switching the host IP (or SSH port). With this
new approach, the ansible play can apply a set of profiles
autonomously and can potentially handle a changing IP configuration.