- Use initscripts as provider except when NetworkManager is running
- Rename network_provider_default to network_provider_os_default, since
it contains the default based on the OS
The role currently supports two providers: "nm" and "initscripts".
The provider is autodetected by loading one of the vars/*.yml files
(where the default is set via the internal "network_provider_default" variable).
The user can still overwrite the provider by explicitly setting the
"network_provider" variable.
Depending on the provider, there is a list of packages to install
and a service to start. Selecting these was broken before.
This is now fixed and works as follows:
The variables "network_service_name" and "network_packages" can be
specified by the user as host variables. But usually the user wouldn't
want to do that. Instead, those settings depend on "network_provider".
The role looks into the internal "_network_provider_setup" dictionary,
whose entries default to the per-provider variables "network_service_name_nm",
"network_service_name_initscripts", "network_packages_nm", and "network_packages_initscripts".
These default variables are initialized in "defaults/main.yml" as well,
but they could be overwritten via "vars/*.yml" files, or via any other
mechanism.
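A rough sketch of how this lookup could be expressed in "defaults/main.yml"
(the concrete values and the dictionary layout are assumptions for
illustration; only the variable names come from the description above):

  # falls back to the autodetected default from vars/*.yml unless the
  # user sets "network_provider" explicitly
  network_provider: "{{ network_provider_default }}"

  # per-provider defaults (values are assumptions)
  network_service_name_nm: NetworkManager
  network_service_name_initscripts: network
  network_packages_nm: [NetworkManager]
  network_packages_initscripts: [initscripts]

  # internal lookup table keyed by provider
  _network_provider_setup:
    nm:
      service_name: "{{ network_service_name_nm }}"
      packages: "{{ network_packages_nm }}"
    initscripts:
      service_name: "{{ network_service_name_initscripts }}"
      packages: "{{ network_packages_initscripts }}"

  # what the role actually uses, unless the user overrides it
  network_service_name: "{{ _network_provider_setup[network_provider]['service_name'] }}"
  network_packages: "{{ _network_provider_setup[network_provider]['packages'] }}"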
https://github.com/linux-system-roles/network/pull/14
https://bugzilla.redhat.com/show_bug.cgi?id=1485074
The states "up" and "down" previously would always change state. That
is, specifying them in the playbook would always invoke `ifup` (or
`ifdown`), or the corresponding `nmcli connection up` (or `nmcli
connection down`).
That was intentional behavior, because the role doesn't really know which
profile is currently active. That is certainly the case for "initscripts",
where the role has almost no information about the current runtime
state. For the "nm" provider, the role knows whether the connection is
already active. However, that alone does not guarantee that the
current runtime state is identical to what would be the result of an
explicit `nmcli connection up`.
Hence, to be sure that the current state is always as expected, the role
would always explicitly issue the commands and report "changed=1".
That is quite harmful, because running the same role multiple times
should not report changes every time. Also, issuing `ifup` may behave
badly if the interface is already configured.
Now, the role tries to determine whether the desired "up" or "down" state
is already reached and, if so, does nothing.
For the "nm" provider that is easy and quite safe. There is still the
possibility to trick the role into thinking that the right configuration
is active when it actually is not, for example via `nmcli device
modify` on the host. But in general, it should work just fine.
In particular, if the admin manually modified the runtime state, it may be
exactly the desired behavior for "state: up" not to change anything.
For "initscripts" this is much more fragile. There isn't really much
that can be done about it, because the role doesn't know what is currently
configured on the system.
There is also a new option "force_state_change" to restore the previous
behavior.
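A playbook-variable sketch of the new behavior (the profile fields below
are illustrative, and the sketch assumes "force_state_change" can be set
per profile):

  network_connections:
    - name: eth0
      state: up
      # reports no change when the profile is already active
    - name: eth1
      state: up
      # assumption: opt back into the old always-reactivate behavior
      force_state_change: yes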
https://bugzilla.redhat.com/show_bug.cgi?id=1476053
The role already supported a default variable ("network_provider") and
host variables ("network_provider_default", "network_service_name",
"network_packages").
Don't use nested variables under "network" like

  network:
    provider:
    ignore_error:
    connections:

instead promote them all to top-level variables like:

  network_provider:
  network_ignore_error:
  network_connections:
This seems more consistent (as we already have multiple top-level
variables), it seems to follow ansible style, and it makes it easier
to overload individual variables via conditional include files.
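Since ansible by default replaces a variable wholesale rather than merging
dictionaries, overriding a nested "network.provider" from an include file
would mean repeating the entire "network" dictionary; with top-level
variables a conditional include can override just one of them. A tiny
sketch (the file name is illustrative):

  # vars/override-provider.yml (illustrative), included conditionally
  network_provider: initscripts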
It's more idiomatic for ansible than "on_error".
'ignore_errors' can be specified as a module argument.
But it can also be specified on a per-profile level,
with the intuitive behavior that the per-profile setting
overrides the per-module setting.
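A sketch of both levels (the surrounding argument names are assumptions
for illustration; only "ignore_errors" and its per-profile placement come
from the description above):

  - network_connections:
      ignore_errors: no          # module argument: default for all profiles
      connections:
        - name: eth0
          type: ethernet
          ignore_errors: yes     # per-profile: overrides the module argument
        - name: eth1
          type: ethernet         # falls back to the module-level setting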
Instead of having the tasks call the "network_connections.py"
library for each connection profile individually (using with_items),
pass all profiles at once.
The advantages are:
- the module can validate the input arguments better as it has
  access to all profiles, for example when a slave connection
  refers to a master profile from the same play. Previously,
  each invocation of the module only saw the current profile and
  could not verify whether the reference was valid.
- while configuring the network, the play might need to briefly
  disconnect the control connection. Previously, after
  tearing down the network the target host became unreachable for
  ansible and the following steps could not be executed anymore.
  Now, all steps are done as a whole on the target host, via
  one connection. If the host becomes unreachable for a short
  time, that is not a problem as long as the connectivity is
  restored at the end.
Ansible also supports switching the host IP (or SSH port). With
this new approach, the ansible play can apply a bunch of profiles
autonomously and potentially handle a changing IP configuration.
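Roughly, the task-level difference looks like this (the module argument
shapes are assumptions for illustration):

  # before: one module invocation per connection profile
  - network_connections:
      connections: ["{{ item }}"]
    with_items: "{{ network_connections }}"

  # now: a single invocation that receives the whole list of profiles
  - network_connections:
      connections: "{{ network_connections }}"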