We must only search and replace on "vrf red" or "vrf green" - the regex
used in fact matched on all VRFs, which is wrong. This would remove all VRF VNI
configurations when only changing a single VRF.
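A minimal sketch of the idea (function and config layout are illustrative,
not the actual implementation):

import re

def remove_vrf_section(frr_config: str, vrf: str) -> str:
    # Anchor the pattern to the exact VRF name. A pattern like r'^vrf \S+'
    # would match every VRF section and thus delete all VRF VNI
    # configuration instead of just the one being changed.
    pattern = rf'^vrf {re.escape(vrf)}$.*?^exit-vrf$\n?'
    return re.sub(pattern, '', frr_config, flags=re.MULTILINE | re.DOTALL)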
|
|
With commit 0ea3e1420 ("container: T5082: switch to netavark network stack")
moving to a new network stack, we should also enable the new DNS plugin it
provides by default.
TODO: add CLI nodes to manually disable DNS and/or supply external DNS servers
to the container.
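With netavark, per-network DNS shows up as the 'dns_enabled' flag, e.g. in
'podman network inspect <name>' (field subset shown, values illustrative):
[
    {
        "name": "example",
        "driver": "bridge",
        "dns_enabled": true
    }
]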
|
|
If the length of the network name plus the "podman-" prefix exceeds the
maximum interface name length supported by netavark, we get an error:
Error: netavark: get bridge interface: Netlink error: Numerical result out of
range (os error 34)
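Background: Linux limits interface names to 15 characters (IFNAMSIZ is 16
including the terminating NUL). A minimal validation sketch, assuming the
bridge is named after the network with a "podman-" prefix:

# "podman-" (7 chars) plus the network name must fit into 15 characters,
# leaving at most 8 characters for the network name itself
MAX_IFNAME_LEN = 15
PREFIX = 'podman-'

def verify_network_name(name: str) -> None:
    if len(PREFIX + name) > MAX_IFNAME_LEN:
        raise ValueError(f'Container network name "{name}" is too long!')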
|
|
It is possible to install a route-map which filters the routes exchanged
between the routing daemons and the OS kernel (zebra).
As of now this can be done by e.g.:
* set protocols ospf route-map foo
* set protocols ospfv3 route-map foo
* set protocols bgp route-map foo
which in turn will install the following lines into FRR:
* ip protocol ospf route-map foo
* ipv6 protocol ospf6 route-map foo
* ip protocol bgp route-map foo
The current state of the VyOS CLI is incomplete, as there is no way to:
* Install a filter for BGP IPv6 routes
* Install a filter for static routes
* Install a filter for connected routes
Thus the CLI should be redesigned to closely match what FRR does, for both
the default and any other VRF:
* set system ip protocol ospf route-map foo
* set system ipv6 protocol ospfv3 route-map foo
* set system ip protocol bgp route-map foo
* set system ipv6 protocol bgp route-map foo
The configuration can be migrated accordingly. This commit does not come with
the migrator; it will be committed later.
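With the redesigned CLI, the missing cases could then be expressed as well,
e.g. (the exact CLI for static and connected routes is an assumption):
* set system ip protocol static route-map foo
* set system ip protocol connected route-map foo
which FRR accepts as:
* ip protocol static route-map foo
* ip protocol connected route-map foo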
|
|
Initially the option 'rate-limit' was implemented in the wrong place in the
CLI:
set vpn pptp remote-access authentication rate-limit <xxx>
It is expected under the 'radius' section:
set vpn pptp remote-access authentication radius rate-limit <xxx>
The 'rate-limit' configuration (Jinja2 template) never worked for PPTP; fix
it.
|
|
Fix: the Telegraf agent hostname is not fully qualified.
Try to get the hostname from the FQDN first, then fall back to the plain
hostname. It is used for metrics.
You may have more than one machine with the same hostname under different
domain names:
r1 domain-name foo.local, hostname myhost
r2 domain-name bar.local, hostname myhost
The FQDN helps to detect exactly which host a metric in InfluxDB2 came from.
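A minimal sketch of the lookup order:

from socket import getfqdn, gethostname

# Prefer the fully qualified name so hosts sharing a short hostname but
# living in different domains remain distinguishable in InfluxDB2.
hostname = getfqdn()
if not hostname or hostname == 'localhost':
    hostname = gethostname()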
|
|
We cannot use both 'port' and 'port-group' for the same direction in one rule
at the same time. Otherwise we generate broken rules that do not block
anything, because the two port matches are ANDed and can never both be true:
set P_pgrp {
type inet_service
flags interval
auto-merge
elements = { 101-105 }
}
chain NAME_foo {
tcp dport 22 tcp dport @P_pgrp counter drop comment "foo-10"
counter return comment "foo default-action accept"
}
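A minimal verify sketch rejecting the combination (the rule dictionary
layout is assumed for illustration):

from vyos import ConfigError

def verify_rule(rule: dict) -> None:
    for side in ('source', 'destination'):
        side_conf = rule.get(side, {})
        if 'port' in side_conf and 'port_group' in side_conf.get('group', {}):
            raise ConfigError(f'Cannot use "port" and "port-group" for {side} at the same time!')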
|
|
The Jinja2 template uses {{ plugin_dir }}, which it gets from the
'plugin_dir' variable in interface-openvpn.py, but the correct variable is
part of the 'openvpn' dictionary, i.e. openvpn['plugin_dir'].
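Illustration of the fix (plugin line is hypothetical):
broken: plugin {{ plugin_dir }}/example-plugin.so
fixed:  plugin {{ openvpn.plugin_dir }}/example-plugin.so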
|
|
Networks are only started once there is a consumer. If only a network is
created in the first place, there is no need to assign it to a VRF, as there
is no consumer yet.
|
|
maxsyslogins
maximum number of all logins on system; user is not
allowed to log-in if total number of all user logins is
greater than specified number (this limit does not apply
to user with uid=0)
set system login max-login-session 2
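The CLI option presumably renders a pam_limits entry along the lines of:
*   hard   maxsyslogins   2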
|
|
Container networks can now be bound to a specific VRF instance.
set vrf name <foo> table <xxx>
set container network <name> vrf <foo>
|
|
Commit fe82d86d ("container: T4959: add registry authentication option")
looked up the wrong config dict level when validating that both username and
password must be specified for registries that are in use.
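A minimal sketch of the corrected check (dictionary layout is assumed):

from vyos import ConfigError

for name, registry in container.get('registry', {}).items():
    auth = registry.get('authentication', {})
    # username and password must be given together
    if ('username' in auth) != ('password' in auth):
        raise ConfigError('Container registry requires both username and password!')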
|
|
We now support assigning discrete IPv6 addresses to a container.
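Example (network name and address are illustrative):
set container name <name> network <network> address '2001:db8::10'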
|
|
Commit 52e51ffb ("container: T5047: restart only containers that changed")
started to iterate over a NoneType which is invalid. This happened when a
network description was changed but no container was due for restart.
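A minimal sketch of the guard (names are illustrative):

# containers_to_restart may be None when only a network changed
for name in (containers_to_restart or []):
    restart_container(name)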
|
|
dns: T5115: Support custom port for name servers for forwarding zones
|
|
By default VyOS used to restart all containers it managed. This makes no
sense, as it is service disrupting. Instead, only restart the containers that
had changes made on the CLI.
|
|
As podman is going to use netavark as its new default, we must explicitly
select the old driver until we have migrated to netavark.
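The backend is presumably pinned via containers.conf, e.g.:
[network]
network_backend = "cni"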
|
|
This allows using custom ports for name servers operating on non-default
ports for forwarding zones.
This is a follow-up to T5113 for the sake of completeness and consistent
treatment of all name servers configured in the PowerDNS recursor.
Additionally, migrate `service dns forwarding domain example.com server`
to `service dns forwarding domain example.com name-server` for consistency
and reusability.
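With the change, a forwarding-zone server on a non-default port could be
expressed as (sketch):
set service dns forwarding domain example.com name-server 192.0.2.1 port 5353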
|
|
Support a custom port for name-server forwarders, enabling forwarding to
alternative name servers (unbound, stubby, dnscrypt-proxy, etc.) operating
on a non-default port.
This would also allow using DNS Over TLS in PowerDNS Recursor 4.6 onwards
(pdns doesn't support certificate check for validity yet) by enabling
'dot-to-port-853'. This is set by default if compiled in with DoT support.
See: https://doc.powerdns.com/recursor/settings.html#dot-to-port-853
This also partially implements T921, T2195 (DoT without certificate check).
Implementation details:
- In 'dns/forwarding' configuration, 'name-server' now allows optional
'port' (defaults to 53).
- Instead of modifying 'name-server-ipv4-ipv6.xml.i' to add optional
'port', a new file 'name-server-ipv4-ipv6-port.xml.i' has been used
to avoid impacting other places where it is reused because not all of
them honor ports (mostly VPN related).
- The `host:port` entries to be used by PowerDNS recursor config are
normalized eagerly at the point of loading VyOS `Config` instead of
doing them lazily while rendering the Jinja2 template to keep the
implementation less intrusive. The alternative would entail making
quite a bit of change in how 'vyos-hostsd' processes 'static'
'name_servers' entries or persists their runtime states.
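The resulting CLI presumably looks like:
set service dns forwarding name-server 192.0.2.1 port 5353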
|
|
T5099: IPoE-server add option next-pool for named ip pools
|
|
In cases with multiple named IP pools, the option 'next-pool' is required to
ensure that once the IP addresses in one pool are exhausted, allocation
continues from the next named pool.
accel-ppp requires a specific order, as a pool must be defined before it can
be referenced via the 'next' option:
set service ipoe-server client-ip-pool name first-pool subnet '192.0.2.0/25'
set service ipoe-server client-ip-pool name first-pool next-pool 'second-pool'
set service ipoe-server client-ip-pool name second-pool subnet '203.0.113.0/25'
[ip-pool]
203.0.113.0/25,name=second-pool
192.0.2.0/25,name=first-pool,next=second-pool
|
|
T5050: Firewall: Add log options
|
|
We drop the default value for 'port' but don't set it again per server.
Fix it.
|
|
T5091: IPoE-server verify RADIUS settings
|
|
As we don't have the global option 'gateway-address' for the ipoe-server, we
cannot use the generic configverify.verify_accel_ppp_base_service.
Add RADIUS verification for authentication mode 'radius':
RADIUS authentication requires at least one RADIUS server.
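A minimal sketch of the added check (dictionary layout assumed):

from vyos import ConfigError
from vyos.util import dict_search

if dict_search('authentication.mode', ipoe) == 'radius':
    if not dict_search('authentication.radius.server', ipoe):
        raise ConfigError('RADIUS authentication requires at least one server!')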
|
|
Add an sFlow feature based on hsflowd.
According to user reviews, it works more stably and performs better than
pmacct.
I haven't deleted the pmacct-based 'system flow-accounting sflow' yet; it
could be migrated or deprecated later.
set system sflow agent-address '192.0.2.14'
set system sflow interface 'eth0'
set system sflow interface 'eth1'
set system sflow polling '30'
set system sflow sampling-rate '100'
set system sflow server 192.0.2.1 port '6343'
set system sflow server 192.0.2.11 port '6343'
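The CLI above might render an hsflowd.conf along these lines (a sketch based
on the hsflowd documentation; the exact template output may differ):
sflow {
  agent.ip = 192.0.2.14
  polling = 30
  sampling = 100
  collector { ip=192.0.2.1 udpport=6343 }
  collector { ip=192.0.2.11 udpport=6343 }
  pcap { dev = eth0 }
  pcap { dev = eth1 }
}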
|
|
Add a template to generate the zebra line
"ipv6 protocol ospf6 route-map xxx"
|
|
Container registry CLI node changed from leafNode to tagNode with the same
defaults. In addition we can now configure an authentication option per
registry.
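Example (registry name and credentials are illustrative):
set container registry registry.example.com authentication username 'foo'
set container registry registry.example.com authentication password 'bar'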
|
|
This will check if mirror/redirect is present on a QoS interface and use `vyos.configdep` module to update the interface again after QoS is applied.
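A rough sketch of the idea (the exact vyos.configdep call signature and
dependency key are assumptions):

from vyos.configdep import set_dependents

# in the QoS script: schedule the interface script to run again
# after QoS has been applied
if 'mirror' in iface_config or 'redirect' in iface_config:
    set_dependents('interfaces', conf, ifname)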
|
|
The ipoe-server option 'interface ethX vlan xxx' (aka vlan-mon) must not be
used together with 'interface ethX client-subnet'.
Instead of a shared pool, accel-ppp uses the same pool for each dynamically
added VLAN:
eth1 client-subnet '192.0.2.0/24'
eth1 vlan '2000-2021'
This causes the following issue:
eth1.2000 range 192.0.2.0/24 (the first client gets address 192.0.2.2)
eth1.2001 range 192.0.2.0/24 (the first client gets address 192.0.2.2)
Only named pools must be used together with the vlan option.
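A minimal verify sketch (key names assumed):

from vyos import ConfigError

for iface, iface_config in ipoe.get('interface', {}).items():
    if 'vlan' in iface_config and 'client_subnet' in iface_config:
        raise ConfigError(f'Interface {iface}: "vlan" cannot be used together with "client-subnet"!')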
|
|
T5037: Firewall: Add queue action and options to firewall
|
|
Add the ability to set a container host name.
This host name is used as /etc/hostname inside the container.
set container name <tag> host-name 'mybox'
|
|
|
|
T4977: Add Babel routing protocol support
|
|
container: T4014: Add `command`, `arg` and `entrypoint` configuration options for containers
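Example usage (values are illustrative):
set container name mybox entrypoint '/bin/busybox'
set container name mybox command 'sleep'
set container name mybox arg '1000'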
|