- Migrate to ddclient 3.11.1 and enforce the debian/control dependency
- Add dual-stack support for additional protocols
- Restrict usage of the `porkbun` protocol, as the VyOS configuration
  structure isn't compatible with porkbun yet
- Improve and clean up error messages
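For the dual-stack case, ddclient 3.11 splits the address lookup into separate
`usev4`/`usev6` directives; a minimal ddclient.conf sketch (illustrative only —
the config VyOS renders, the hostname and the credentials are placeholders):

# take IPv4 and IPv6 from eth0 and update both records
protocol=dyndns2
usev4=ifv4, ifv4=eth0
usev6=ifv6, ifv6=eth0
login=myuser, password=mysecret
myhost.example.com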
T5577: Optimized PAM configs for RADIUS/TACACS+
T5261: Add AWS load-balancing tunnel handler
In the CLI we can choose the authentication logic:
- `mandatory` - if TACACS+ answers with `REJECT`, authentication is stopped
  and access is denied immediately.
- `optional` (default) - if TACACS+ answers with `REJECT`, authentication
  continues with the next module.
In `mandatory` mode authentication is stopped only if TACACS+ clearly answered
that access should be denied (no such user in the TACACS+ database, wrong
password, etc.). If TACACS+ is not available or other errors happen, it is
skipped and authentication continues with the next module, as in `optional`
mode.
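A minimal sketch of how `mandatory` mode could map onto a PAM control field,
assuming `pam_tacplus.so` from libpam-tacplus; the rule VyOS actually generates
may differ, and server/secret are placeholders:

# an explicit REJECT (default=die) aborts the stack immediately, while an
# unreachable server (authinfo_unavail=ignore) falls through to pam_unix
auth [success=done new_authtok_reqd=done authinfo_unavail=ignore default=die] pam_tacplus.so server=192.0.2.1 secret=testing123
auth required pam_unix.so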
In the CLI we can choose the authentication logic:
- `mandatory` - if RADIUS answers with `Access-Reject`, authentication is
  stopped and access is denied immediately.
- `optional` (default) - if RADIUS answers with `Access-Reject`,
  authentication continues with the next module.
In `mandatory` mode authentication is stopped only if RADIUS clearly answered
that access should be denied (no such user in the RADIUS database, wrong
password, etc.). If RADIUS is not available or other errors happen, it is
skipped and authentication continues with the next module, as in `optional`
mode.
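`optional` mode, in contrast, behaves like the classic `sufficient` keyword —
a sketch assuming `pam_radius_auth.so` from libpam-radius-auth, again not
necessarily the exact rule VyOS generates:

# success short-circuits the stack; Access-Reject, timeouts and any
# other result simply fall through to local authentication
auth sufficient pam_radius_auth.so
auth required pam_unix.so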
We need separate groups for the RADIUS and TACACS+ system users because they
need to be used in PAM rules independently.
results
Add AWS load-balancing tunnel handler
https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-integrate-linux-instances-with-aws-gateway-load-balancer/
set service aws glb script on-create '/config/scripts/tmp.sh'
set service aws glb script on-destroy '/config/scripts/tmp.sh'
set service aws glb status format 'simple'
set service aws glb status port '8282'
set service aws glb threads tunnel '4'
set service aws glb threads tunnel-affinity '1-2'
set service aws glb threads udp '4'
set service aws glb threads udp-affinity '0-3'
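The on-create/on-destroy scripts are user-supplied hooks run by the tunnel
handler. A hypothetical on-create hook follows; the argument layout is an
assumption — check the gwlbtun documentation for the real calling convention:

#!/bin/sh
# invoked by gwlbtun when a tunnel interface appears (assumed arguments)
OP="$1"      # CREATE or DESTROY
IFACE="$2"   # tunnel interface name
[ "$OP" = "CREATE" ] || exit 0
ip link set dev "$IFACE" up
# hairpin: send traffic received on the tunnel back out the same tunnel
ip rule add iif "$IFACE" lookup 100
ip route replace default dev "$IFACE" table 100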
There are two hooks called for bridge, ethernet and bond interfaces when the
link state changes up -> down or down -> up.
The helpers are:
* /etc/netplug/linkdown.d/dhclient
* /etc/netplug/linkup.d/dhclient
As those helpers are written in Perl and start/restart the dhclient process
directly, it's time to rewrite them. The first goal is to get rid of all Perl
code; the second is that we now have a proper Python library. Instead of
checking whether the process is running and then restarting it without systemd
even noticing (we might end up with two dhclient processes alive), we should:
* Add a Python helper that can be used for both up and down (see the FILES
  section of man 8 netplugd)
* Query the VyOS CLI config whether the interface in question has DHCP(v6)
  configured and is not disabled
* Add IPv6 (DHCPv6) support
MAN page: https://linux.die.net/man/8/netplugd
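A minimal sketch of such a helper, assuming the dispatcher passes the interface
name as argv[1] and that `cli-shell-api existsActive` is available to query the
running config (both worth verifying against the actual netplug setup):

#!/usr/bin/env python3
# Hypothetical helper, installed (e.g. symlinked) into both
# /etc/netplug/linkup.d and /etc/netplug/linkdown.d; the invocation
# path tells us the direction.
import subprocess
import sys

def has_dhcp(ifname):
    # cli-shell-api returns 0 when the node exists in the active config;
    # this sketch covers ethernet interfaces only
    return subprocess.run(['cli-shell-api', 'existsActive', 'interfaces',
                           'ethernet', ifname, 'address', 'dhcp']).returncode == 0

if __name__ == '__main__':
    ifname = sys.argv[1]
    if not has_dhcp(ifname):
        sys.exit(0)
    action = 'restart' if 'linkup' in sys.argv[0] else 'stop'
    # the unit name is an assumption about how dhclient is wrapped by systemd
    subprocess.run(['systemctl', action, f'dhclient@{ifname}.service'])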
Add service zabbix-agent
set service zabbix-agent directory '/config/zabbix/'
set service zabbix-agent limits buffer-flush-interval '8'
set service zabbix-agent limits buffer-size '120'
set service zabbix-agent log debug-level 'warning'
set service zabbix-agent log size '1'
set service zabbix-agent server '192.0.2.5'
set service zabbix-agent server-active 192.0.2.5 port '10051'
set service zabbix-agent server-active 2001:db8::123
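For reference, these options correspond roughly to the following agent
configuration keys (the mapping is an assumption; the exact file depends on the
template VyOS renders):

# zabbix agent configuration (illustrative)
Server=192.0.2.5
ServerActive=192.0.2.5:10051,2001:db8::123
BufferSend=8        # buffer-flush-interval, seconds
BufferSize=120
DebugLevel=3        # 3 = warnings
LogFileSize=1       # MB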
This reverts commit 9f7b51370732606611253e2e6a16692bf706659b.
netavark has a missing dependency on iptables. Debian marks it as optional,
but it should be a hard dependency: if iptables is not installed, a container
with an assigned network cannot be created. The rolling release is built with
the iptables package, so there is no bug there, but if users build the ISO on
their own, containers with an assigned network will not work.
T1797: Add initial vpp configuration
Add initial configuration mode for VPP (PoC)
set vpp cpu corelist-workers '2'
set vpp cpu main-core '1'
set vpp interface eth1 num-rx-desc '256'
set vpp interface eth1 num-rx-queues '512'
set vpp interface eth1 num-tx-desc '256'
set vpp interface eth1 num-tx-queues '512'
set vpp interface eth1 pci '0000:02:00.0'
set vpp interface eth1 rx-mode 'polling'
set vpp interface eth2 pci '0000:08:00.0'
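These options map naturally onto VPP's startup.conf syntax; an illustrative
excerpt based on upstream VPP conventions (the file VyOS actually renders may
differ):

# /etc/vpp/startup.conf excerpt (illustrative)
cpu {
    main-core 1
    corelist-workers 2
}
dpdk {
    dev 0000:02:00.0 {
        name eth1
        num-rx-queues 512
        num-tx-queues 512
        num-rx-desc 256
        num-tx-desc 256
    }
    dev 0000:08:00.0 {
        name eth2
    }
}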
Limitations:
- 'set vpp interface ethX pci auto' works only for the first commit; after
that the interface is detached from the default stack and a tun interface
'ethX' is created to communicate with the default stack. In this case we
can't get the PCI address via ethtool for 'tun' interfaces, but the PCI
address can still be set manually.
- Interface sync between the default stack and the VPP/DPDK stack: a change
in VPP doesn't trigger the corresponding iproute2 changes yet (this should
be written later). I.e. if we change something in VPP, each commit restarts
vpp.service, which then comes up with an empty interface config, because we
don't configure VPP directly - it should be configured via iproute2. But as
soon as we change anything on the interface (for example the description),
it gets its IP address, MTU, state, etc. back.
... this is a step towards a new and better implementation that will utilize
VPP.
cloud-init: T5190: Added Cloud-init pre-configurator
Added a new service that starts before Cloud-init, waits for all network
interfaces to initialize, and, if requested by config, checks which interfaces
can get configuration from a DHCP server and creates a corresponding Cloud-init
network configuration.
This protects against two situations:
* when Cloud-init tries to get meta-data via eth0 (the default and fallback
variant for any data source which depends on the network), but the real network
is connected to another interface
* when Cloud-init starts simultaneously with udev and initializes the first
interface to get meta-data before it is renamed to eth0 by udev
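The generated artifact would be a Cloud-init network configuration; a
hypothetical result in network-config v2 format, assuming DHCP answered only on
the second NIC (both the path and the format version are assumptions here):

# e.g. /run/cloud-init/network-config (path is an assumption)
version: 2
ethernets:
  eth1:
    dhcp4: true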
since it's required for match statements
and for op mode introspection
With commit 0ea3e1420 ("container: T5082: switch to netavark network stack")
moving us to a new network stack, we should also enable the new DNS plugin it
provides by default.
TODO: add CLI nodes to manually disable DNS and/or supply external DNS servers
to the container.