Vehicle Fleet as IoT – Virtual Networks on Edge




In the earlier article, we covered the detailed architecture of a fleet management system based on AWS IoT and on-edge virtual networks. Now we can dive into the implementation. Let’s create a prototype of an edge network with two applications running on both virtual and physical nodes, with isolated virtual networks. As we don’t have fire trucks on hand, we use three computers (the main truck ARM computer simulated by a Raspberry Pi, and two application nodes running on laptops) and a managed switch to connect them.

Overview of topics and definitions used in the material

In this chapter, we provide concise explanations of the key networking concepts and technologies behind the architecture discussed earlier. These definitions will help you understand the mechanisms that enable efficient and flexible communication between Docker containers, the host system, and external devices, and how these pieces relate to one another.

  • Docker Networking: a system that enables containers to communicate with each other and external networks. It provides various network drivers and options to support different network architectures and requirements, including bridge, host, overlay, and IPvlan/MACvlan drivers. Docker networking creates virtual networks and attaches containers to these networks. Each network has a unique IP address range, and containers within a network can communicate using their assigned IP addresses. Docker uses network drivers to manage container connectivity and network isolation.
  • IPvlan: a Docker network driver that enables efficient container-to-container and container-to-external communication by sharing the parent interface’s MAC address with its child interfaces. In the context of the router Docker container presented here, IPvlan provides efficient routing between multiple networks and reduces the management overhead associated with MAC addresses.
  • Docker Bridge: a virtual network device that connects multiple Docker container networks, allowing containers to communicate with each other and the host system. By default, Docker creates a bridge network for containers to use; its interface on the host is named “docker0”. Users can create custom bridge networks to segment and isolate container traffic.
  • Linux Bridge: a kernel-based network device that forwards traffic between network segments. It operates at the data link layer, forwarding Ethernet frames between its ports much like a physical switch. Linux bridges are essential for providing virtual network connectivity to entities such as virtual machines and containers.
  • veth (Virtual Ethernet): a Linux kernel network device that creates a pair of connected virtual network interfaces. Docker uses veths to connect containers to their respective networks, with one end attached to the container and the other to the network’s bridge. In a bridged Docker network, a veth pair is created and named when a container joins the network: one end receives a unique identifier within the container’s network namespace, and the other within the host’s network namespace. The pair allows seamless communication between the container and the bridge network. In simple words, a veth is a virtual cable between two (virtual or not) interfaces of the same machine – a minimal sketch follows this list.
  • Network namespace: an isolated instance of the Linux network stack, with its own interfaces, routing tables, and ports. Docker provides each container with its own network namespace, ensuring each container has private IPs and ports. When combined with VLANs (defined below), containers can be attached directly to specific VLAN segments, pairing Layer 2 (VLAN) isolation with the per-container isolation of the namespace.
  • VLAN (Virtual Local Area Network): a logical network segment created by grouping physical network devices or interfaces. VLANs allow for traffic isolation and efficient use of network resources by separating broadcast domains.
  • iptables: a Linux command-line utility for managing packet filtering and network address translation (NAT) rules in the kernel’s netfilter framework. It provides various mechanisms to inspect, modify, and take actions on packets traversing the network stack.
  • masquerade: a NAT technique used in iptables to mask the source IP address of outgoing packets with the IP address of the interface through which they are sent. This lets multiple devices or containers behind the masquerading device share a single public IP address for communication with external networks. In the presented setup, masquerading allows the Docker containers to access the Internet through the router Docker container.
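To make the veth and network namespace definitions tangible, here is a minimal sketch that recreates by hand what Docker does when it connects a container to a network. It uses plain iproute2 commands (run as root); the names demo, veth-host, and veth-demo are arbitrary.

ip netns add demo                                    # an isolated network stack, like a container gets
ip link add veth-host type veth peer name veth-demo  # a virtual cable with two ends
ip link set veth-demo netns demo                     # move one end into the namespace
ip addr add 10.99.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 10.99.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ping -c 1 10.99.0.2                                  # the host reaches the namespace over the cable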

Solution proposal with description and steps to reproduce

Architecture overview

Vehicle Fleet as IoT - architecture overview

The architecture consists of a Router Docker container, two application containers (Container1 and Container2), a host machine, and two VLANs connected through a switch to two physical devices. The following is a detailed description of the components and their interactions.

Router Docker container

The container has three interfaces:

  • eth0 (10.0.1.3): Connected to the br0net Docker network (10.0.1.0/24).
  • eth1 (192.168.50.2): Connected to the home router and the internet, with the gateway set to 192.168.50.1.
  • eth2 (10.0.2.3): Connected to the br1net Docker network (10.0.2.0/24).

Docker containers

Container1 (Alpine) is part of the br0net (10.0.1.0/24) network, connected to the bridge br0 (10.0.1.2).

Container2 (Alpine) is part of the br1net (10.0.2.0/24) network, connected to the bridge br1 (10.0.2.2).

Main edge device – Raspberry Pi or a fire truck main computer

The machine hosts the entire setup, including the router Docker container and the application containers (Container1 and Container2). It has two bridges, br0 (10.0.1.2) and br1 (10.0.2.2), which are connected to their respective Docker networks (br0net and br1net).

VLANs and switch

The machine’s bridges are connected to two VLANs: enp2s0.1 (10.0.1.1) and enp2s0.2 (10.0.2.1). The enp2s0 interface is configured as a trunk connection to a switch, allowing it to carry traffic for multiple VLANs simultaneously.

Two devices are connected to the switch: Device1 with the IP address 10.0.1.5 and Device2 with the IP address 10.0.2.5.

DHCP Server and Client

A custom DHCP setup is required because of how Docker assigns IPs to containers. Since we want consistent addressing across physical and virtual nodes in each VLAN, we let the DHCP server handle physical nodes in the usual way, while for virtual nodes (containers) we query the DHCP server ourselves and assign the addresses manually, bypassing Docker’s own addressing mechanism.
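For illustration, one way to obtain a lease manually inside an Alpine container is BusyBox’s udhcpc client (a sketch only; the complete mechanism is covered in the next article):

udhcpc -i eth0 -q -n   # request a lease on eth0; -q quits after the lease, -n fails instead of retrying forever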

In short – the presented architecture solves the non-trivial problem of isolating Docker containers inside an edge device. The main element implementing this is the Router Docker container, which manages the traffic inside the system. The Router isolates network traffic between Container1 and Container2 using completely separate and independent network interfaces.

These interfaces are bridged to VLANs, which realizes the required isolation. The virtual interfaces on the host side expose externally only those Docker containers that belong to the specific VLANs. The approach to IP addressing for Docker containers is also worth noting: the goal is an addressing scheme that keeps permanent addresses for existing containers while retaining dynamic addressing for new components.

The architecture can be successfully used to create an end-to-end solution for edge devices while meeting strict security requirements.

Step-by-step setup

Now we start the implementation!

VLANs [execute on host]

Let’s set up VLANs.

enp2s0.1
auto enp2s0.1
iface enp2s0.1 inet static
address 10.0.1.1
network 10.0.1.0
netmask 255.255.255.0
broadcast 10.0.1.255

enp2s0.2
auto enp2s0.2
iface enp2s0.2 inet static
address 10.0.2.1
network 10.0.2.0
netmask 255.255.255.0
broadcast 10.0.2.255
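The stanzas above use the Debian-style /etc/network/interfaces syntax, so they assume the ifupdown tooling. If VLAN subinterfaces have not been used on the host before, the 802.1Q kernel module and the vlan package are needed first – a minimal sketch, assuming Debian/Ubuntu:

sudo apt install vlan
sudo modprobe 8021q
echo 8021q | sudo tee -a /etc/modules   # load the module on every boot
sudo ifup enp2s0.1
sudo ifup enp2s0.2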

Bridges [execute on host]

We should start by installing bridge-utils, a very useful tool for bridge setup.

sudo apt install bridge-utils

Now, let’s configure the bridges.

sudo brctl addbr br0
sudo ip addr add 10.0.1.1/24 dev br0
sudo brctl addif br0 enp2s0.1
sudo ip link set br0 up

sudo brctl addbr br1
sudo ip addr add 10.0.2.1/24 dev br1
sudo brctl addif br1 enp2s0.2
sudo ip link set br1 up

Those commands create the virtual brX interfaces, set their IP addresses, and attach the VLAN subinterfaces. This way, we bridge the physical side with the virtual interfaces we will create soon – it’s like a real bridge that is connected to only one river bank so far.
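A quick sanity check at this point, using the tools we already have:

brctl show              # each bridge should list its VLAN subinterface as a port
ip -br addr show br0    # verify the address assigned to br0
ip -br addr show br1    # and the one assigned to br1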

Docker networks [execute on host]

Network for WLAN interface.

docker network create -d ipvlan --subnet=192.168.50.0/24 --gateway=192.168.50.1 -o ipvlan_mode=l2 -o parent=wlp3s0f0 wlan

Network for bridge interface br0.

docker network create --driver=bridge --subnet=10.0.1.0/24 --gateway=10.0.1.2 --opt "com.docker.network.bridge.name=br0" br0net

Network for bridge interface br1.

docker network create --driver=bridge --subnet=10.0.2.0/24 --gateway=10.0.2.2 --opt "com.docker.network.bridge.name=br1" br1net

Now, we have empty Docker networks connected either to the physical interface (wlp3s0f0 – to connect containers to the Internet) or to the bridges (br0net and br1net – for the VLANs). The next step is to create containers and attach them to those networks.
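Before moving on, you can verify the created networks, for example:

docker network ls                                   # wlan, br0net, and br1net should be listed
docker network inspect br0net | grep -E '"Subnet"|"Gateway"'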

Docker containers [execute on host]

Let’s create the router container and connect it to all Docker networks – to enable communication in both VLANs and the WLAN (Internet). We pin the IP addresses from the diagram so that the router’s interfaces match the architecture.

docker create -it --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=NET_BROADCAST --network=br0net --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 --ip=10.0.1.3 --name=router alpine
docker network connect --ip 192.168.50.2 wlan router
docker network connect --ip 10.0.2.3 br1net router

Now, we create the application containers and connect them to the proper VLANs.

docker create -it --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=NET_BROADCAST --network=br0net --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 --name=container1 alpine

docker create -it --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=NET_BROADCAST --network=br1net --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 --name=container2 alpine

OK, let’s start all containers.

docker start router
docker start container1
docker start container2

Now, we’re going to configure the containers. To access a container’s shell, use the command

docker exec -it <container_name> sh

Router container setup [execute on Router container]

Check the interfaces’ IP addresses (for example, with ifconfig). The configuration should look as follows.

eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:01:03
          inet addr:10.0.1.3  Bcast:10.0.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:745 errors:0 dropped:0 overruns:0 frame:0
          TX packets:285 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:142276 (138.9 KiB)  TX bytes:21966 (21.4 KiB)

eth1      Link encap:Ethernet  HWaddr 54:35:30:BC:6F:59
          inet addr:192.168.50.2  Bcast:192.168.50.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4722 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1515 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3941156 (3.7 MiB)  TX bytes:106741 (104.2 KiB)

eth2      Link encap:Ethernet  HWaddr 02:42:0A:00:02:01
          inet addr:10.0.2.3  Bcast:10.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:829 errors:0 dropped:0 overruns:0 frame:0
          TX packets:196 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:190265 (185.8 KiB)  TX bytes:23809 (23.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:60 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5959 (5.8 KiB)  TX bytes:5959 (5.8 KiB)

Then let’s set up iptables. You can omit the first command if the iptables package is already installed. The second command configures masquerading, and the rest configure the forwarding rules.

apk add ip6tables iptables
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -P INPUT ACCEPT
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
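One caveat worth checking (it is not part of the recipe above): the FORWARD rules only take effect if IP forwarding is enabled inside the router container. You can verify and, if needed, enable it. Optionally, a single stateful rule is a common, tighter alternative to the blanket eth1-to-eth0 and eth1-to-eth2 accepts:

cat /proc/sys/net/ipv4/ip_forward        # 1 means the container forwards packets
echo 1 > /proc/sys/net/ipv4/ip_forward   # enable it if the value was 0
# tighter alternative: let in only replies to connections initiated from inside
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT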

Now, the internal addresses of outgoing packets are hidden (the masquerade), and the networks are isolated. Please note that no routing is configured on the host machine – it is not even the gateway for the containerized or the physical network nodes.

In this configuration, the test containers communicate with the external environment through the router container’s interfaces – in other words, they are exposed to the Internet by the Docker router container. Besides connecting each VLAN to the Internet, the router may also allow communication between the VLANs, or even between specific nodes of different VLANs (see the sketch below). This container has thus become the main routing and filtering point for network traffic.
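For example, a pair of rules like the following (a sketch reusing the addresses from this article) would let Device1 and Device2 reach each other while both VLANs otherwise stay isolated:

# allow exactly one host pair across the VLANs: Device1 (10.0.1.5) and Device2 (10.0.2.5)
iptables -A FORWARD -i eth0 -o eth2 -s 10.0.1.5 -d 10.0.2.5 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth0 -s 10.0.2.5 -d 10.0.1.5 -j ACCEPT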

Container1 setup [execute on Container1]

route del default
ip route add default via 10.0.1.3

Container2 setup [execute on Container2]

route del default
ip route add default via 10.0.2.3

As you can see, the configuration of both containers is similar; all we need to do is replace the Docker default route with a route via the router container. In a real-world scenario, this step should be handled by the DHCP server.
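A quick check from inside Container1 confirms the routing works as intended (Device1 and Device2 addresses as in the diagram):

ip route                  # the default route should now point at 10.0.1.3
ping -c 3 10.0.1.5        # Device1, same VLAN – should answer
ping -c 3 8.8.8.8         # the Internet – should go out through the router
ping -c 3 -W 2 10.0.2.5   # Device2, other VLAN – should NOT be reachable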

Switch setup [execute on the network switch]

The configuration above requires a managed switch. We don’t enforce any specific model, but the switch must support VLAN tagging on ports, with a trunk option for the port that combines traffic of multiple VLANs. The exact configuration depends on the device. Pay attention to the trunk port, which carries the traffic between the switch and our host. In our case, Device1 is connected to a switch port tagged as VLAN 1, and Device2 is connected to a switch port tagged as VLAN 2. The enp2s0 port of the host computer is connected to a switch port configured as a trunk – combining the traffic of multiple VLANs in a single communication link.

Summary

Together, we’ve built the network described in the first article. You can play with it using ICMP (ping) to verify which nodes can reach each other and, more importantly, which nodes can’t be reached from outside their virtual networks.

Here is a scenario for the ping test. The following results prove that the created architecture fulfills its purpose and achieves the required isolation.

Source     | Target             | Ping status   | Explanation
-----------|--------------------|---------------|-------------------------------
Container1 | Device1            | OK            | VLAN 1
Device1    | Container1         | OK            | VLAN 1
Container1 | Router (10.0.1.3)  | OK            | VLAN 1
Device1    | Router (10.0.1.3)  | OK            | VLAN 1
Container1 | Internet (8.8.8.8) | OK            | VLAN 1 to Internet via Router
Device1    | Internet (8.8.8.8) | OK            | VLAN 1 to Internet via Router
Router     | Container1         | OK            | VLAN 1
Router     | Device1            | OK            | VLAN 1
Container1 | Container2         | No connection | VLAN 1 to VLAN 2
Container1 | Device2            | No connection | VLAN 1 to VLAN 2
Container1 | Router (10.0.2.3)  | No connection | VLAN 1 to VLAN 2
Device1    | Container2         | No connection | VLAN 1 to VLAN 2
Device1    | Device2            | No connection | VLAN 1 to VLAN 2
Device1    | Router (10.0.2.3)  | No connection | VLAN 1 to VLAN 2
Container2 | Device2            | OK            | VLAN 2
Device2    | Container2         | OK            | VLAN 2
Container2 | Router (10.0.2.3)  | OK            | VLAN 2
Device2    | Router (10.0.2.3)  | OK            | VLAN 2
Container2 | Internet (8.8.8.8) | OK            | VLAN 2 to Internet via Router
Device2    | Internet (8.8.8.8) | OK            | VLAN 2 to Internet via Router
Router     | Container2         | OK            | VLAN 2
Router     | Device2            | OK            | VLAN 2
Container2 | Container1         | No connection | VLAN 2 to VLAN 1
Container2 | Device1            | No connection | VLAN 2 to VLAN 1
Container2 | Router (10.0.1.3)  | No connection | VLAN 2 to VLAN 1
Device2    | Container1         | No connection | VLAN 2 to VLAN 1
Device2    | Device1            | No connection | VLAN 2 to VLAN 1
Device2    | Router (10.0.1.3)  | No connection | VLAN 2 to VLAN 1

As you can see from the table above, the Router container is able to send traffic to both networks, so it’s a perfect candidate to serve common messages, like a GPS broadcast.

If you need more granular routing or firewall rules, we suggest using firewalld instead of raw iptables. This way, you can, for example, block unencrypted traffic or open only specific ports, as sketched below.
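For illustration only, opening a single port with firewalld could look like this (it assumes firewalld is installed in the router container; port 8883 – MQTT over TLS – is just an example):

firewall-cmd --permanent --zone=public --add-port=8883/tcp   # allow MQTT over TLS only
firewall-cmd --reload                                        # apply the permanent rules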

Nevertheless, the job is not over yet. In the next article, we’ll cover the IP address assignment problem and run some more sophisticated tests on the infrastructure.


