Configuring a VXLAN interface on a Linux machine
Sometimes you’d like to span a layer 2 network across two locations, but you don’t want to (or can’t) use VLANs. An excellent alternative in this situation is VXLAN, which transports the payload traffic in standard ol’ UDP packets on port 4789. This works easily with unmanaged switches (or switches you don’t have admin access to), and can even be routed across other networks or VPNs. Here’s how to set things up between a few Linux boxes, with a FortiGate firewall to top it off.
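By the way, if you ever want to see what this looks like on the wire, a quick capture on the underlay interface is enough: the encapsulated traffic shows up as plain UDP to port 4789, and recent tcpdump versions will even decode the VNI and the inner frame for you (eth0 here is just a placeholder for your underlay interface):
tcpdump -ni eth0 udp port 4789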
Terminology
First of all, we should clarify a bit of terminology.
Just like VLANs, VXLAN uses IDs to distinguish the individual networks: VLAN uses VLAN IDs, VXLAN uses Virtual Network Identifiers (VNIs). The principle is exactly the same, but VNIs can be larger: VLAN IDs are limited to 12 bits, while VNIs are 24 bits in size, which allows for some 16 million (2^24 = 16,777,216) networks instead of 4096.
The edge interfaces of the VXLAN network are called VXLAN Tunnel Endpoints, or VTEPs for short. These are what the rest of this article is going to focus on.
Example network layout
We’re going to use a network consisting of three Proxmox hosts, a FortiGate firewall and a Linux-based monitoring server as an example. The setup looks like this:
+---------------------------+
|    FortiGate Firewall     |
|     192.168.10.254/24     |
+---------------------------+
      |
      |    +---------------------+
      +----| Proxmox hosts       |
      |    | 192.168.10.101/24   |
      |    | 192.168.10.102/24   |
      |    | 192.168.10.103/24   |
      |    +---------------------+
      |
      |    +---------------------+
      +----| Monitoring server   |
           | 192.168.10.10/24    |
           +---------------------+
The 192.168.10.0/24 network is going to be our underlay, which we can set up in whatever way we like. We’re going to create two VXLAN networks hosted by the Proxmox servers:
- Servers: 192.168.50.0/24, VNI 1000
- IoT: 192.168.60.0/24, VNI 2000
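Before going further, it’s worth making sure the underlay can actually carry the encapsulated frames. VXLAN adds roughly 50 bytes of overhead, which is why the interfaces below use an MTU of 1450 on a standard 1500-byte underlay. A quick sanity check from any host is a full-size don’t-fragment ping across the underlay (1472 bytes of payload plus 28 bytes of ICMP/IP headers equals 1500 bytes; the address is one of the Proxmox hosts from the diagram):
ping -M do -s 1472 192.168.10.102
If this fails, for example because a VPN hop has a smaller MTU, the VXLAN interfaces’ MTU has to be lowered accordingly.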
Proxmox
With its SDN feature, Proxmox natively supports using VXLAN to create networks for the VMs. To do this, we’ll create an SDN zone, configure it as VXLAN-based and fill the peer list with all the systems on which we want to configure interfaces (VTEPs). Thus, we’ll include the underlay IPs of our systems:
192.168.10.101 192.168.10.102 192.168.10.103 192.168.10.254 192.168.10.10
Next, we’ll create two VNets for our servers and iot networks, using their names as IDs and their VNIs as Tags. Once you set these values, go to the top-level SDN entry and hit Apply to deploy the networks to your PVE hosts.
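As an aside, the zone and VNet definitions live in the cluster-wide SDN configuration under /etc/pve/sdn/. The exact format depends on your PVE version, but it ends up as something roughly like this (a sketch, not verbatim output; the zone name vxzone is simply whatever you called your zone):
# /etc/pve/sdn/zones.cfg
vxlan: vxzone
    peers 192.168.10.101,192.168.10.102,192.168.10.103,192.168.10.254,192.168.10.10
    mtu 1450

# /etc/pve/sdn/vnets.cfg
vnet: servers
    zone vxzone
    tag 1000

vnet: iot
    zone vxzone
    tag 2000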
Proxmox will now create entries in /etc/network/interfaces.d/sdn for our VXLAN networks:
auto vxlan_servers
iface vxlan_servers
    vxlan-id 1000
    vxlan_remoteip 192.168.10.101
    vxlan_remoteip 192.168.10.102
    vxlan_remoteip 192.168.10.103
    vxlan_remoteip 192.168.10.10
    vxlan_remoteip 192.168.10.254
    mtu 1450

auto vxlan_iot
iface vxlan_iot
    vxlan-id 2000
    vxlan_remoteip 192.168.10.101
    vxlan_remoteip 192.168.10.102
    vxlan_remoteip 192.168.10.103
    vxlan_remoteip 192.168.10.10
    vxlan_remoteip 192.168.10.254
    mtu 1450
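To double-check that the interfaces were actually created on a host, ip with the -d (details) flag shows the VXLAN parameters; the output should include vxlan id 1000 along with the rest of the tunnel settings:
ip -d link show vxlan_servers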
Now we can deploy VMs into these networks, and they should be able to ping each other even across hosts, as long as they’re within the same VXLAN network. Routing between the networks will be provided by our FortiGate firewall, which we’ll configure next.
FortiGate
To connect our FortiGate firewall to the networks, we’ll need to use the CLI to create the VTEPs; afterwards, we can configure them via the GUI like any other interface. To do this, open the CLI and run these commands:
config system vxlan
    edit "servers"
        set interface "LAN"   # this is the interface where 192.168.10.0/24 is reachable
        set vni 1000
        set remote-ip 192.168.10.101 192.168.10.102 192.168.10.103 192.168.10.10
    next
    edit "iot"
        set interface "LAN"   # this is the interface where 192.168.10.0/24 is reachable
        set vni 2000
        set remote-ip 192.168.10.101 192.168.10.102 192.168.10.103 192.168.10.10
    next
end
Now the GUI will show two VXLAN-type interfaces which we can configure with their respective IPs: 192.168.50.254 for the servers interface and 192.168.60.254 for iot. We should be able to ping VMs that we connect into these networks at this point, and configure policies to grant them internet access like usual.
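If you prefer to stay in the CLI for this step as well, the same addresses can be assigned there; this is just the CLI equivalent of the GUI configuration described above (allowaccess ping is optional, but handy while testing):
config system interface
    edit "servers"
        set ip 192.168.50.254 255.255.255.0
        set allowaccess ping
    next
    edit "iot"
        set ip 192.168.60.254 255.255.255.0
        set allowaccess ping
    next
end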
Linux using ifupdown
For our Linux-based monitoring box, we’ll also want to add some interfaces. On Debian-based systems, we can use /etc/network/interfaces with the exact same config that Proxmox used above. We should just extend it with an IP address for our monitoring server to use within the networks:
auto vxlan_servers
iface vxlan_servers
    address 192.168.50.10
    netmask 24
    vxlan-id 1000
    vxlan_remoteip 192.168.10.101
    vxlan_remoteip 192.168.10.102
    vxlan_remoteip 192.168.10.103
    vxlan_remoteip 192.168.10.254
    mtu 1450

auto vxlan_iot
iface vxlan_iot
    address 192.168.60.10
    netmask 24
    vxlan-id 2000
    vxlan_remoteip 192.168.10.101
    vxlan_remoteip 192.168.10.102
    vxlan_remoteip 192.168.10.103
    vxlan_remoteip 192.168.10.254
    mtu 1450
Now, after running ifup -a, we’ll see two new interfaces:
MON01:~# ip addr show vxlan_servers
91261: vxlan_servers: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d6:ad:77:24:b6:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.10/24 scope global vxlan_servers
       valid_lft forever preferred_lft forever
MON01:~# ip addr show vxlan_iot
91262: vxlan_iot: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d6:ad:77:24:b6:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.60.10/24 scope global vxlan_iot
       valid_lft forever preferred_lft forever
We can now ping all the VMs and communicate with them.
Linux configured manually
If we want to configure the interfaces manually, we need to run a set of commands. First of all, we need to add a new interface of type vxlan:
MON01:~# ip link add vxlan_servers type vxlan id 1000 dev ens192 dstport 4789
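If the machine has several network interfaces, it can also help to pin the tunnel’s source address explicitly with the optional local parameter; the equivalent command for the iot network would then look like this (192.168.10.10 is the monitoring server’s own underlay address):
MON01:~# ip link add vxlan_iot type vxlan id 2000 dev ens192 dstport 4789 local 192.168.10.10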
Now, we need to add the IPs of the other VXLAN nodes, so that traffic is sent to them as needed:
MON01:~# bridge fdb append to 00:00:00:00:00:00 dst 192.168.10.101 dev vxlan_servers
MON01:~# bridge fdb append to 00:00:00:00:00:00 dst 192.168.10.102 dev vxlan_servers
MON01:~# bridge fdb append to 00:00:00:00:00:00 dst 192.168.10.103 dev vxlan_servers
MON01:~# bridge fdb append to 00:00:00:00:00:00 dst 192.168.10.254 dev vxlan_servers
Next we can set an IP and enable the interface:
MON01:~# ip addr add 192.168.50.10/24 dev vxlan_servers
MON01:~# ip link set vxlan_servers up
MON01:~# ip addr show vxlan_servers
91261: vxlan_servers: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d6:ad:77:24:b6:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.10/24 scope global vxlan_servers
       valid_lft forever preferred_lft forever
    inet6 fe80::d4ad:77ff:fe24:b657/64 scope link tentative
       valid_lft forever preferred_lft forever
Now we should be able to ping a VM on the Proxmox hosts:
MON01:~# ping 192.168.50.20
PING 192.168.50.20 (192.168.50.20) 56(84) bytes of data.
64 bytes from 192.168.50.20: icmp_seq=1 ttl=64 time=0.579 ms
64 bytes from 192.168.50.20: icmp_seq=2 ttl=64 time=0.617 ms
64 bytes from 192.168.50.20: icmp_seq=3 ttl=64 time=0.588 ms
^C
--- 192.168.50.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2020ms
rtt min/avg/max/mdev = 0.579/0.594/0.617/0.016 ms
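Keep in mind that everything in this section only changes the running configuration and won’t survive a reboot. Once you’re done experimenting, the whole setup can also be removed again with a single command; the address and the forwarding entries disappear together with the interface:
MON01:~# ip link del vxlan_servers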
MAC address table
Just like a switch keeps a MAC address table so it can forward traffic out of the correct port, so does VXLAN. When traffic is first sent to a destination IP, an ARP request is broadcast to all network peers: the local VTEP sends a copy of that broadcast to each of its remote peers. One of them is going to come back with a response, and this will create an ARP table entry as usual:
MON01:~# ip neigh get 192.168.50.20 dev vxlan_servers
192.168.50.20 dev vxlan_servers lladdr bc:24:11:4f:a0:82 DELAY
We can now check the bridge’s forwarding table to get more information. We should find an entry for the destination MAC address that includes the IP of the remote VTEP where the VM is reachable:
MON01:~# bridge fdb show brport vxlan_servers
00:00:00:00:00:00 dst 192.168.10.101 self permanent
00:00:00:00:00:00 dst 192.168.10.102 self permanent
00:00:00:00:00:00 dst 192.168.10.103 self permanent
bc:24:11:4f:a0:82 dst 192.168.10.101 self
Including remote locations
Suppose we had a separate location connected via VPN, where another small PVE host provides a further virtual network, and we want our FortiGate to be the default gateway for this network as well. This is easy to set up:
+---------------------------+           +---------------------+
|    FortiGate Firewall     |----VPN----| Remote PVE server   |
|     192.168.10.254/24     |           | 192.168.90.101/24   |
+---------------------------+           +---------------------+
      |                                            |
      |    +---------------------+             RemoteNet
      +----| Proxmox hosts       |
      |    | 192.168.10.101/24   |--Servers
      |    | 192.168.10.102/24   |
      |    | 192.168.10.103/24   |--IoT
      |    +---------------------+
      |
      |    +---------------------+
      +----| Monitoring server   |
           | 192.168.10.10/24    |
           +---------------------+
On the FortiGate, we can just create the VXLAN interface like we did before:
config system vxlan
    edit "remotenet"
        set interface "VPN"
        set vni 3000
        set remote-ip 192.168.90.101
    next
end
It doesn’t matter that the two sites are only connected via VPN; there could even be a whole chain of other routers in between. All that’s needed is IP connectivity, and that UDP traffic on port 4789 is allowed.
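For completeness: on the remote PVE host, the corresponding VXLAN zone would simply list the FortiGate as its only peer, so the generated entry in /etc/network/interfaces.d/sdn would look roughly like this (a sketch; it assumes 192.168.10.254 is the address the remote host reaches through the VPN, and the MTU may need to be lower than 1450 depending on the VPN’s own overhead):
auto vxlan_remotenet
iface vxlan_remotenet
    vxlan-id 3000
    vxlan_remoteip 192.168.10.254
    mtu 1450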