Let's talk about traffic mirroring using VxLAN

Network traffic analysis is an important task faced by a variety of specialists. Information security specialists need to inspect traffic for suspicious packets, malware activity, port scanning, password brute-forcing, and so on. They may also need to examine outgoing traffic for confidential data, for example if the organization handles trade secrets. And beyond that, traffic is analyzed automatically by intrusion detection/prevention systems (IDS/IPS).

Network engineers need to analyze traffic to identify various anomalies: half-open connections, retransmissions, packet loss.

While an IDS/IPS can analyze traffic crossing the network perimeter, traffic that never leaves the perimeter can only be analyzed by obtaining a copy of every packet passing through. When building a network, it is important to plan for traffic collection from the start, so that you do not have to resort to expensive add-on collection tools later. To understand how to copy traffic properly, let's first go over the basic principles of mirroring.

Traffic mirroring (in some cases also called port mirroring) is a method of sending a copy of network traffic to a monitoring system. It is commonly used by network developers and administrators/operators to monitor network traffic or diagnose problems. Traditional mirroring solutions rely on hardware switches and are vendor-specific, which often makes them costly, hard to use, and hard to extend. Here it is appropriate to recall SPAN, a technology from the vendor that has since left us. The point is that SPAN mirroring was originally developed by Cisco to troubleshoot network problems and was not intended for permanent use. Nevertheless, this mode is now used everywhere to capture traffic.

Traffic mirroring involves creating a copy of certain traffic and sending that copy to its destination for further processing. The destination can be either a network device (physical network adapter/port directly connected via cable) or a remote host.

In the figure below, only incoming packets are mirrored, but outgoing packets can also be mirrored; in addition, you can also filter packets (for example, select only TCP SYN packets) before copying them.
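As an illustration of such filtering, here is a sketch of a tc u32 filter that mirrors only TCP SYN packets. The interface name mon0 is a hypothetical destination, and the sketch assumes a 20-byte IP header with no options, so the TCP flags byte sits at offset 33 from the start of the IP header:

```shell
# Sketch (requires root): mirror only TCP SYN packets arriving on eth0.
# Assumptions: 20-byte IP header (no options), hypothetical destination mon0.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip protocol 6 0xff \
    match u8 0x02 0xff at 33 \
    action mirred egress mirror dev mon0
```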

When designing a traffic mirroring system, one more question needs to be answered: do we need to store each received packet in full, or only part of it? The answer depends on your needs. If you want to analyze the contents of L7 messages, such as HTTP requests, you need to keep the entire packet. If you want to analyze L4 flow events, such as TCP flow statistics, you can keep only the L2-L4 headers and discard the remaining bytes to save bandwidth and storage space. And if you plan to analyze only L2 flow events, you may need nothing but the L2 header, reducing bandwidth and storage requirements even further.
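With tcpdump, for example, the amount of each packet that is kept is controlled by the snapshot length (the -s option). A sketch, with the interface name and file names as placeholders:

```shell
# Keep full packets - needed for L7 analysis such as HTTP request bodies
tcpdump -i eth0 -s 0 -w full.pcap
# Keep only the first 64 bytes of each packet - enough for the
# Ethernet/IP/TCP headers when you only need L4 flow statistics
tcpdump -i eth0 -s 64 -w headers.pcap
```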

Mirror packet forwarding problem

So, suppose we are successfully collecting copies of packets. How do we deliver them to the desired remote interface? With SPAN ports everything is clear, but what if the packets need to be sent to a remote port? Simply transmitting the mirrored packets over the cable will not work, since both dst_mac and dst_ip of the copies still refer to the original local node. Such packets may be dropped before they reach the target network device. For example, if the device responsible for processing them is a virtual network device sitting behind a physical NIC, the packets will be dropped by the NIC because their destination IP address (and MAC address) does not match that of the card.

One possible way to send mirror traffic to a remote host is to encapsulate the packet in another packet and send it to the remote host. At the receiving end, we extract the original packet accordingly by decapsulating the outer header.

There are many encapsulation formats to achieve this goal, such as VxLAN, which encloses the original packet in a new UDP packet, as shown below:
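A practical consequence of this encapsulation is extra per-packet overhead, which has to be accounted for in the MTU of the underlying network. Assuming an IPv4 outer header and no VLAN tag, the overhead adds up as follows:

```shell
# Per-packet VxLAN overhead (assumption: IPv4 outer header, no VLAN tag)
OUTER_ETH=14   # outer Ethernet header
OUTER_IP=20    # outer IPv4 header
OUTER_UDP=8    # outer UDP header
VXLAN_HDR=8    # VxLAN header (flags + VNI)
OVERHEAD=$((OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR))
echo "$OVERHEAD"   # 50 bytes: a 1500-byte MTU leaves 1450 for the inner frame
```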

Example with containers

As an example, let's deploy a small test bed with two Docker containers. On the host, a Linux bridge docker0 is created as the default gateway of the docker network 10.0.108.0/23 (configured in /etc/docker/daemon.json).

When a container is started with docker run without any special network parameters, Docker allocates it an IP address from 10.0.108.0/23 and connects it to the docker0 bridge via a veth pair.

$ docker run -d --name ctn-1 alpine:3.11.6 sleep 60d

$ docker run -d --name ctn-2 alpine:3.11.6 sleep 60d

$ docker ps | grep ctn

Next, let's get the PID of the main process of each container; we will use it to enter the containers' network namespaces:

$ NETNS1=$(docker inspect --format "{{.State.Pid}}" ctn-1)

$ NETNS2=$(docker inspect --format "{{.State.Pid}}" ctn-2)

$ echo $NETNS1 $NETNS2

After that, let's find out the addresses of our containers:

$ nsenter -t $NETNS1 -n ip addr

$ nsenter -t $NETNS2 -n ip addr

As a result, we get the following topology:

Next, let's create the necessary network interfaces:

$ ip link add docker1 type bridge         

$ ip addr add 10.0.110.1/24 dev docker1   

$ ip link set docker1 up

Let's add a pair of veth interfaces; one end of each will serve as a vNIC for ctn-1 and ctn-2, respectively, while the other end will be attached to the docker1 bridge.

Settings for ctn-1:

$ ip link add veth1 type veth peer name peer1             

$ ip link set peer1 netns $NETNS1                         

$ nsenter -t $NETNS1 -n ip link set peer1 name eth1       

$ nsenter -t $NETNS1 -n ip addr add 10.0.110.2/23 dev eth1

$ nsenter -t $NETNS1 -n ip link set eth1 up

$ ip link set veth1 master docker1                        

$ ip link set veth1 up

Settings for ctn-2:

$ ip link add veth2 type veth peer name peer2

$ ip link set peer2 netns $NETNS2

$ nsenter -t $NETNS2 -n ip link set peer2 name eth1

$ nsenter -t $NETNS2 -n ip addr add 10.0.110.3/23 dev eth1

$ nsenter -t $NETNS2 -n ip link set eth1 up

$ ip link set veth2 master docker1

$ ip link set veth2 up

The network interfaces are configured; now let's look at the routing:

$ nsenter -t $NETNS1 -n ip route

The last route entry indicates that network 10.0.110.0/23 (docker1) is reachable via eth1. Let's check this:

$ nsenter -t $NETNS1 -n ping 10.0.110.3

$ nsenter -t $NETNS1 -n ping 10.0.108.3

As shown in the figure below, we can now reach ctn-2 via a separate route (via eth1) that is independent of the default route (via eth0). We will use this route further.

VxLAN tunnel

We are now ready to configure a VxLAN tunnel over the eth1 NICs between ctn-1 and ctn-2.

Let's set up a tunnel on ctn-1:

$ nsenter -t $NETNS1 -n ip link add vxlan0 type vxlan id 100 local 10.0.110.2 remote 10.0.110.3 dev eth1 dstport 4789

$ nsenter -t $NETNS1 -n ip link set vxlan0 up

$ nsenter -t $NETNS1 -n ip addr

Let's set up a tunnel on ctn-2:

$ nsenter -t $NETNS2 -n ip link add vxlan0 type vxlan id 100 local 10.0.110.3 remote 10.0.110.2 dev eth1 dstport 4789

$ nsenter -t $NETNS2 -n ip link set vxlan0 up

As a result, we get the following topology:

As you can see, traffic between vxlan0@ctn-1 and vxlan0@ctn-2 physically takes the path vxlan0@ctn-1 -> eth1@ctn-1 -> veth1 -> docker1 -> veth2 -> eth1@ctn-2 -> vxlan0@ctn-2, thanks to packet encapsulation between ctn-1 and ctn-2. This tunnel lets us mirror traffic from the eth0 interface of ctn-1 to ctn-2.
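To convince yourself that this is really what happens on the wire, you can listen on the docker1 bridge from the host. A sketch (requires root): the tunnelled traffic should show up as UDP datagrams to port 4789.

```shell
# Run on the host as root: traffic between the vxlan0 endpoints crosses
# the docker1 bridge as VxLAN-in-UDP datagrams on destination port 4789
tcpdump -nn -i docker1 udp port 4789
```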

To do this, run the following commands on ctn-1 (an ingress qdisc with handle ffff: must be attached first, since the mirroring filter is anchored to it):

$ nsenter -t $NETNS1 -n tc qdisc add dev eth0 handle ffff: ingress

$ nsenter -t $NETNS1 -n tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress mirror dev vxlan0

As a result, our mirrored packets (for example, ping traffic) will be delivered to node ctn-2 along this somewhat convoluted path.
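A quick way to verify the whole chain is to generate some traffic on ctn-1's eth0 and watch the decapsulated copies appear on ctn-2's vxlan0. A sketch (requires root and tcpdump on the host; 10.0.108.1 is assumed to be the docker0 gateway address):

```shell
# In one terminal: watch the decapsulated mirror copies arriving at ctn-2
nsenter -t $NETNS2 -n tcpdump -nn -i vxlan0
# In another terminal: generate traffic on the mirrored interface of ctn-1
nsenter -t $NETNS1 -n ping -c 3 10.0.108.1
```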

Conclusion

The VxLAN technology presented in this article allows you to encapsulate mirrored packets without additional tools and transmit them across the network to the destination node. Using it to transport mirrored traffic lets you collect packet copies efficiently.

This material was prepared ahead of the launch of the online course "Data Center Network Design", intended for those who already know network technologies and want to understand how modern data center networks are built.
