The states "up" and "down" previously would always change state. That is, specifying them in the playbook would always invoke `ifup` (or `ifdown`) or the corresponding `nmcli connection up` (or `nmcli connection down`). That was intentional behavior, because the role doesn't really know which profile is currently active. That is certainly the case for the "initscripts" provider, where the role has almost no information about the current runtime state. For the "nm" provider, the role knows whether the connection is already active. However, that alone does not guarantee that the current runtime state is identical to what would result from an explicit `nmcli connection up`. Hence, to be sure that the current state is always as expected, the role would always explicitly issue the commands and report "changed=1". That is quite harmful, because running the same role multiple times should not report changes every time. Also, issuing `ifup` may behave badly if the interface is already configured.

Now the role tries to determine whether the desired "up" or "down" state is already reached and, if so, does nothing. For the "nm" provider that is easy and quite safe. It is still possible to trick the role into thinking that the right configuration is active when it actually is not, for example via `nmcli device modify` on the host. But in general it should work just fine. In particular, if the admin manually modifies the runtime state, it may be exactly what is desired that "state: up" not change anything. For "initscripts" this is much more fragile. There isn't really much that can be done about it, because the role doesn't know what is currently configured on the system.

There is also a new option "force_state_change" to restore the previous behavior (see the sketch below).

https://bugzilla.redhat.com/show_bug.cgi?id=1476053
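As an illustration, a minimal playbook sketch using the role. It assumes the role's usual `network_connections` list format and that `force_state_change` is set per connection alongside `state`; the role name `linux-system-roles.network` and the profile name `eth0` are placeholders for illustration.

```yaml
# Minimal sketch: bring a profile up and opt back into the old
# always-reactivate behavior via force_state_change.
- hosts: all
  roles:
    - role: linux-system-roles.network   # assumed Galaxy role name
      vars:
        network_connections:
          - name: eth0                   # hypothetical profile name
            type: ethernet
            state: up
            # Restore the previous behavior of always issuing the
            # activation command, even if the role believes the desired
            # state is already reached:
            force_state_change: true
```

Without `force_state_change`, the same play run twice should report no change the second time for the "nm" provider.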