Fleet management task with AWS IoT – overcoming limitations of Docker Virtual Networks

Damian Petrecki
R&D Cloud Engineer
October 17, 2025
•
5 min read
Bartłomiej Kuciński
Technical Leader | Expert Software Engineer
October 21, 2025
•
5 min read


We’re working for a client that produces fire trucks. The first article covered the list of requirements and the architecture proposal, and the second one walked through a step-by-step implementation of the prototype. This time, we’re going to close the topic with the DHCP implementation and UDP tests.

DHCP server and client

A major issue with Docker is the need to assign IP addresses for containers. It is impractical to rely on automatic address assignments managed by Docker or to manually set addresses when containers are started. The architecture intended for IoT edge should ensure that the state of the device can be easily reproduced even after a power failure or reboot.

It may also be necessary to set fixed addresses for containers that act as the reference point for the entire architecture - see the Router container in our previous text. It is also worth considering the scenario where an external provider wants to connect additional devices to the edge device. As part of such a collaboration, it may be necessary to provide immutable IP addresses, e.g., for an IP discovery service.

Our job is to provide a service to assign IP addresses from configurable pools for both physical and virtual devices in VLANs. It sounds like DHCP and indeed, it is DHCP, but it’s not so simple with Docker. Unfortunately, Docker uses its own addressing mechanism that cannot be linked to the network DHCP server.

The proposed solution will rely on a DHCP server and a DHCP client. At startup, the script responsible for running the Docker image will call the DHCP client and receive information about the MAC address and IP address the container will have.

Ultimately, we want a permanent configuration for the above, stored as a file or in a simple database. This gives us an immutable configuration for the basic parameters of the Docker container. To link the MAC address, IP address, and Docker container, we propose adding the name of the prospective Docker container to the record. This creates a link between the three elements that uniquely identifies the Docker container.

When the script starts, it queries the DHCP server for a possible available IP address and checks beforehand if there is already a lease for the IP/MAC address determined from the Docker container name.

This achieves a configuration that is resistant to IP conflicts and guarantees the reusability of previously assigned IP addresses.

DHCP server

For our use case, we’ve decided to rely on the isc-dhcp-server package. Below is a sample configuration you can adjust to your needs.

dhcpd.conf


authoritative;
one-lease-per-client true;

subnet 10.0.1.0 netmask 255.255.255.0 {
  range 10.0.1.2 10.0.1.200;
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  option routers 10.0.1.3;
  option subnet-mask 255.255.255.0;
  default-lease-time 3600;
  max-lease-time 7200;
}
subnet 10.0.2.0 netmask 255.255.255.0 {
  range 10.0.2.2 10.0.2.200;
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  option routers 10.0.2.3;
  option subnet-mask 255.255.255.0;
  default-lease-time 3600;
  max-lease-time 7200;
}

Here is a breakdown of each directive in the configuration above. Two subnets are configured, one address pool for each VLAN in our network.

authoritative - this directive means that the DHCP server is the authoritative source for the network. If a client queries with an IP address that it was given by another DHCP server, this server will tell the client that the IP address is invalid, effectively forcing the client to ask for a new IP address.

one-lease-per-client - this ensures that each client gets only one lease at a time. This helps avoid scenarios where a single client might end up consuming multiple IP addresses, leading to a reduced available IP pool.

option domain-name-servers – this assigns DNS servers to the DHCP clients. In this case, it's using Google's public DNS servers (8.8.8.8 and 8.8.4.4).

option routers – this assigns a default gateway for the DHCP clients. Devices in this network will use 10.0.1.3 as their way out of the local network, likely to reach the internet or other networks.

option subnet-mask – this specifies the subnet mask to be assigned to DHCP clients, which in this case is 255.255.255.0. It determines the network portion of an IP address.

default-lease-time – specifies how long, in seconds, a DHCP lease will be valid if the client doesn't ask for a specific lease time. Here, it's set to 3600 seconds, which is equivalent to 1 hour.

max-lease-time - this sets the maximum amount of time, in seconds, a client can lease an IP address. Here, it's 7200 seconds or 2 hours.
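As a quick sanity check of the ranges above, the pool size implied by a range directive can be computed with Python’s standard ipaddress module (this helper is our own addition, not part of the DHCP tooling):

```python
from ipaddress import IPv4Address

def pool_size(first, last):
    """Number of leasable addresses in an inclusive DHCP range directive."""
    return int(IPv4Address(last)) - int(IPv4Address(first)) + 1

# Each subnet's range 10.0.x.2 - 10.0.x.200 yields 199 leasable addresses.
print(pool_size("10.0.1.2", "10.0.1.200"))  # 199
```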

DHCP Client

In our scenario, all new application containers are added to the system via bash commands executed on the Host – the firetruck’s main computer, or a Raspberry Pi in our prototype. See the previous chapter for the container-creation command reference. The commands require IP addresses and gateways for each container.

Our approach is to obtain an address from the DHCP server (as dynamic IP) and set up a container with the address configured as static IP. To achieve this, we need a shell-friendly DHCP client. We’ve decided to go with a Python script that can be called when creating new containers.

DHCP Client Example (Python)

See the comments in the script below for explanations of each block.

from scapy.layers.dhcp import BOOTP, DHCP
from scapy.layers.inet import UDP, IP, ICMP
from scapy.layers.l2 import Ether
from scapy.sendrecv import sendp, sniff

# Sending the discovery packet for DHCP
def locate_dhcp(src_mac_addr):
    packet = Ether(dst='ff:ff:ff:ff:ff:ff', src=src_mac_addr, type=0x0800) / \
        IP(src='0.0.0.0', dst='255.255.255.255') / \
        UDP(dport=67, sport=68) / BOOTP(op=1, chaddr=src_mac_addr) / \
        DHCP(options=[('message-type', 'discover'), 'end'])
    sendp(packet, iface="enp2s0")

# Receiving the offer by filtering for packets with packet[DHCP].options[0][1] == 2
def capture_offer():
    return sniff(iface="enp2s0", filter="port 68 and port 67",
                 stop_filter=lambda packet: BOOTP in packet and packet[BOOTP].op == 2
                 and DHCP in packet and packet[DHCP].options[0][1] == 2,
                 timeout=5)

# Transmitting the request with the accepted offer (IP) from DHCP
def transmit_request(src_mac_addr, req_ip, srv_ip):
    packet = Ether(dst='ff:ff:ff:ff:ff:ff', src=src_mac_addr, type=0x0800) / \
        IP(src='0.0.0.0', dst='255.255.255.255') / \
        UDP(dport=67, sport=68) / BOOTP(op=1, chaddr=src_mac_addr) / \
        DHCP(options=[('message-type', 'request'), ("client_id", src_mac_addr),
                      ("requested_addr", req_ip), ("server_id", srv_ip), 'end'])
    sendp(packet, iface="enp2s0")

# Reading the acknowledgement from DHCP. Filtering for packet[BOOTP].op == 2,
# packet[DHCP].options[0][1] == 5, and ports 68/67
def capture_acknowledgement():
    return sniff(iface="enp2s0", filter="port 68 and port 67",
                 stop_filter=lambda packet: BOOTP in packet and packet[BOOTP].op == 2
                 and DHCP in packet and packet[DHCP].options[0][1] == 5,
                 timeout=5)

# Ping the offered IP address
def transmit_test_packet(src_mac_addr, src_ip_addr, dst_mac_addr, dst_ip_addr):
    packet = Ether(src=src_mac_addr, dst=dst_mac_addr) / \
        IP(src=src_ip_addr, dst=dst_ip_addr) / ICMP()
    sendp(packet, iface="enp2s0")


if __name__ == "__main__":
    # dummy MAC address
    mac_addr = "aa:bb:cc:11:22:33"
    print("START")
    print("SEND: Discover")
    locate_dhcp(mac_addr)
    print("RECEIVE: Offer")
    received_packets = capture_offer()
    server_mac_addr = received_packets[0]["Ether"].src
    bootp_response = received_packets[0]["BOOTP"]
    server_ip_addr = bootp_response.siaddr
    offered_ip_addr = bootp_response.yiaddr
    print("OFFER:", offered_ip_addr)
    print("SEND: Request for", offered_ip_addr)
    transmit_request(mac_addr, offered_ip_addr, server_ip_addr)
    print("RECEIVE: Acknowledge")
    received_packets2 = capture_acknowledgement()
    print("ACKNOWLEDGE:", offered_ip_addr)
    print("SEND: Test IP Packet")
    transmit_test_packet(mac_addr, offered_ip_addr, server_mac_addr, server_ip_addr)
    print("END")

Let’s talk about our use case.

The business requirement is to add another device to the edge - perhaps a thermal imaging camera. Our goal is to make onboarding of the device in our system as automatic as possible. Adding a new device will also mean, in our case, connecting it to the customer-provided Docker container.

Our expected result is to get a process that registers the new Docker container with the assigned IP address from the DHCP server. The IP address is, of course, dependent on the VLAN in which the new device will be located.

In summary, it is easy to see that plugging in a new device at this point just means that the IP address is automatically assigned and bound. The new device is aware of where the Router container is located - so communication is guaranteed from the very beginning.
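The onboarding flow can be glued together by a wrapper that first obtains an address via the DHCP client and then creates the container with that address pinned as static. The sketch below only assembles the docker create argument list; the container name and IP are hypothetical examples, and the flags mirror the container-creation commands from the previous chapter:

```python
def build_docker_create_cmd(name, network, ip, image="alpine"):
    """Assemble the `docker create` argument list for a new edge container,
    pinning the IP address obtained from the DHCP server as a static address."""
    return [
        "docker", "create", "-it",
        "--cap-add=NET_ADMIN", "--cap-add=SYS_ADMIN", "--cap-add=NET_BROADCAST",
        "--sysctl", "net.ipv4.icmp_echo_ignore_broadcasts=0",
        f"--network={network}",
        f"--ip={ip}",
        f"--name={name}",
        image,
    ]

# Example: a hypothetical thermal-camera companion container in VLAN 1
cmd = build_docker_create_cmd("thermal-cam", "br0net", "10.0.1.42")
# subprocess.run(cmd, check=True)  # would be executed on the host
```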

UDP broadcast and multicast setup

Broadcast UDP is a method for sending a message to all devices on a network segment, which allows for efficient communication and discovery of other devices on the same network. In an IoT context, this can be used for the discovery of devices and services, such as finding nearby devices for data exchange or sending a command to all devices in a network.

Multicast, on the other hand, allows for the efficient distribution of data to a group of devices on a network. This can be useful in scenarios where the same data needs to be sent to multiple devices at the same time, such as a live video stream or a software update.

One purpose of the architecture was to provide a seamless, isolated, LAN-like environment for each application. Therefore, it was critical to enable applications to use not only direct, IP, or DNS-based communication but also to allow multicasting and broadcasting messages. These protocols enable devices to communicate with each other in a way that is scalable and bandwidth-efficient, which is crucial for IoT systems where there may be limited network resources available.

The presented architecture provides a solution for dockerized applications that use UDP broadcast/multicast. The Router Docker container environment is intended to host applications that distribute data to other containers in this manner.

Let’s check whether those techniques are available to our edge networks.

Broadcast

The test phase starts on the Container1 container with a UDP listener enabled. For that, run the following command.

nc -ulp 5000

The command uses the netcat (nc) utility to listen (-l) for incoming UDP (-u) datagrams on port 5000 (-p 5000).

Then, let’s produce a message on the Router container.

echo -n "foo" | nc -uv -b -s 10.0.1.3 -w1 10.0.1.255 5000

The command above is an instruction that uses the echo and netcat to send a UDP datagram containing the string "foo" to all devices on the local network segment.

Breaking down the command:

echo -n "foo" - This command prints the string "foo" to standard output without a trailing newline character.

nc - The nc command is used to create network connections and can be used for many purposes, including sending and receiving data over a network.

-uv - These options specify that nc should use UDP as the transport protocol and that it should be run in verbose mode.

-b - This option sets the SO_BROADCAST socket option, allowing the UDP packet to be sent to all devices on the local network segment.

-s 10.0.1.3 - This option sets the source IP address of the UDP packet to 10.0.1.3.

-w1 - This option sets the timeout for the nc command to 1 second.

10.0.1.255 - This is the destination IP address of the UDP packet, which is the broadcast address for the local network segment.

5000 - This is the destination port number for the UDP packet.

Please note that both source and destination addresses belong to VLAN 1. Therefore, the datagram is sent via the eth0 interface to this VLAN only.

The expected result is the docker container Container1 receiving the message from the Router container via UDP broadcast.

Multicast

Let's focus on Docker Container parameters specified when creating containers (Docker containers [execute on host] sub-chapter in the previous article ). In the context of Docker containers, the --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 option is crucial if you need to enable ICMP echo requests to the broadcast address inside the container. For example, if your containerized application relies on UDP broadcast for service discovery or communication with other containers, you may need to set this parameter to 0 to allow ICMP echo requests to be sent and received on the network.

Without setting this parameter to 0, your containerized application may not be able to communicate properly with other containers on the network or may experience unexpected behavior due to ICMP echo requests being ignored. Therefore, the --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 option can be crucial in certain Docker use cases where ICMP echo requests to the broadcast address are needed.

Usage example

Run the command below in the Container1 container (see the previous chapter for naming references). We use socat, a command-line utility that establishes bidirectional byte streams and transfers data between them. Please note that the IP address of the multicast group does not belong to the VLAN 1 address space.

socat -u UDP4-RECV:22001,ip-add-membership=233.54.12.234:eth0 /dev/null &

Then, add the route to the multicast group.

ip route add 233.54.12.234/32 dev eth0

You can ping the address from Device1 to verify the group has been created.

ping -I eth0 -t 2 233.54.12.234

As you can see, an interface parameter is required with the ping command to enforce using the correct outgoing interface. You can also limit the TTL parameter (-t 2) to verify the route length to the multicast group.

Now, use socat on Device1 to open the connection inside the group.

ip route add 233.54.12.234/32 dev eth0
socat STDIO UDP-DATAGRAM:233.54.12.234:22001

Please note that you have to set up the route; otherwise, packets addressed to this “unknown” network would be sent directly to the router.

Now, you can type the message on Device1 and use tcpdump on Container1 to see the incoming message.

tcpdump -i eth0 -Xavvv
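The socat/tcpdump exchange can likewise be sketched with Python sockets. The group and port match the commands above; pinning the egress interface is our addition so the sketch can be exercised on a single machine (e.g., via loopback):

```python
import socket

GROUP = "233.54.12.234"
PORT = 22001
IFACE_IP = "127.0.0.1"  # loopback here; on the edge it would be eth0's address

def open_multicast_listener(group=GROUP, port=PORT, iface_ip=IFACE_IP):
    """Join the multicast group on the given interface, like socat's
    ip-add-membership option, and return the bound socket."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    # struct ip_mreq: group address followed by interface address
    mreq = socket.inet_aton(group) + socket.inet_aton(iface_ip)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

def send_to_group(message, group=GROUP, port=PORT, iface_ip=IFACE_IP):
    """Send a datagram to the group via a fixed egress interface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(iface_ip))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)  # like ping -t 2
    s.sendto(message.encode(), (group, port))
    s.close()
```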

Summary

Nowadays, a major challenge faced by developers and customers is to guarantee maximum security while ensuring compatibility and openness to change for edge devices. As part of IoT, it is imperative to keep in mind that the delivered solution may be extended in the future with additional hardware modules, and thus, the environment into which this module will be deployed must be ready for changes.

This raises the non-trivial question of how to meet business requirements while taking into account all the guidelines from hardware vendors’ standards as well as the applicable legal standards.

Translating the presented architecture into a fire trucks context, all the requirements from the introduction regarding isolation and modularity of the environment have been met. Each truck has the ability to expand the connected hardware while maintaining security protocols. In addition, the Docker images that work with the hardware know only their private scope and the router's scope.

The proposed solution provides a ready answer on how to obtain a change-ready environment that meets security requirements. A key element of the architecture is to guarantee communication for applications only in the VLAN space in which they are located.

This way, any modification should not affect already existing processes on the edge side. It is also worth detailing the role played by the Router component. With it, we guarantee a way to communicate between Docker containers while maintaining a configuration that allows you to control network traffic.

We have also included a solution for UDP broadcast/multicast communication. Many current hardware standards transmit data this way. This means that if, for example, we are waiting for emergency data on a device, we must be ready to handle broadcasts and ensure that packets are consumed only by the components designed for that purpose.

Summarizing the presented solution, one should not forget about applications in other industries as well. The idea of independent Docker images and hardware modularity makes the approach applicable in automotive and other high-reliability areas, where multiple devices, not necessarily from the same supplier, must work together.

We encourage you to think about further potential applications and thank you for taking the time to read.


Vehicle fleet as IoT – virtual networks on edge

In the earlier article , we’ve covered the detailed architecture of a fleet management system based on AWS IoT and on-edge virtual networks. Now, we can dive into implementation. Let’s create a prototype of an edge network with two applications running together on both virtual and physical nodes and with isolated virtual networks. As we don’t have fire trucks on hand, we use three computers (the main truck ARM computer simulated by a Raspberry Pi and two application nodes running on laptops) and a managed switch to connect them together.

Overview of topics and definitions used in the material

In this chapter, we provide concise explanations of key networking concepts and technologies relevant to the architecture discussed earlier. These definitions will help readers better understand the underlying mechanisms that enable efficient and flexible communication between Docker containers, the host system, and external devices. Familiarizing yourself with these concepts will facilitate a deeper understanding of the networking aspects of the presented system and their interrelationships.

  • Docker Networking : a system that enables containers to communicate with each other and external networks. It provides various network drivers and options to support different network architectures and requirements, including bridge, host, overlay, and IPvlan/MACvlan drivers. Docker networking creates virtual networks and attaches containers to these networks. Each network has a unique IP address range, and containers within a network can communicate using their assigned IP addresses. Docker uses network drivers to manage container connectivity and network isolation.
  • IPvlan : a Docker network driver that enables efficient container-to-container and container-to-external network communication by sharing the parent interface's MAC address with its child interfaces. In the context of the router Docker image in the presented topic, IPvlan provides efficient routing between multiple networks and reduces the management overhead associated with MAC addresses.
  • Docker Bridge : a virtual network device that connects multiple Docker container networks, allowing containers to communicate with each other and the host system. By default, Docker creates a bridged network named "docker0" for containers to use. Users can create custom bridge networks to segment and isolate container traffic.
  • Linux Bridge : a kernel-based network device that forwards traffic between network segments. It operates at the data link layer, similar to how ethernet frames function within the TCP/IP model. Linux Bridges are essential in creating virtual network interfaces for entities such as virtual machines and containers.
  • veth : (Virtual Ethernet) a Linux kernel network device that creates a pair of connected virtual network interfaces. Docker uses veths to connect containers to their respective networks, with one end attached to the container and the other end attached to the network's bridge. In a bridged Docker network, veth pairs are created and named when a container is connected to a Docker bridge network, with one end of the veth pair being assigned a unique identifier within the container's network namespace and the other end being assigned a unique identifier in the host's network namespace. The veth pair allows seamless communication between the container and the bridge network. In simple words – veth is a virtual cable between (virtual or not) interfaces of the same machine.
  • Network namespace : Docker provides containers with isolated network stacks, ensuring each container has its own private IPs and ports. VLANs (Virtual Local Area Networks) operate at the data link layer, allowing for the creation of logically segmented networks within a physical network for improved security and manageability. When combined in Docker, containers can be attached directly to specific VLANs, marrying Layer 2 (VLAN) and Layer 3 (namespaces) isolation.
  • VLAN (Virtual Local Area Network) : a logical network segment created by grouping physical network devices or interfaces. VLANs allow for traffic isolation and efficient use of network resources by separating broadcast domains.
  • iptables : a Linux command-line utility for managing packet filtering and network address translation (NAT) rules in the kernel's network filter framework. It provides various mechanisms to inspect, modify, and take actions on packets traversing the network stack.
  • masquerade : a NAT technique used in iptables to mask the source IP address of outgoing packets with the IP address of the network interface through which the packets are being sent. This enables multiple devices or containers behind the masquerading device to share a single public IP address for communication with external networks. In the context of the presented topic, masquerading can be used to allow Docker containers to access the Internet through the router Docker image.

Solution proposal with description and steps to reproduce

Architecture overview

Vehicle Fleet as IoT - architecture overview

The architecture described consists of a Router Docker container, two application containers (Container1 and Container2), a host machine, and two VLANs connected to a switch with two physical devices. The following is a detailed description of the components and their interactions.

Router Docker container

The container has three interfaces:

  • eth0 (10.0.1.3): Connected to the br0net Docker network (10.0.1.0/24).
  • eth1 (192.168.50.2): Connected to the home router and the internet, with the gateway set to 192.168.50.1.
  • eth2 (10.0.2.3): Connected to the br1net Docker network (10.0.2.0/24).

Docker containers

Container1 (Alpine) is part of the br0net (10.0.1.0/24) network, connected to the bridge br0 (10.0.1.2).

Container2 (Alpine) is part of the br1net (10.0.2.0/24) network, connected to the bridge br1 (10.0.2.2).

Main edge device – Raspberry Pi or a firetruck main computer

The machine hosts the entire setup, including the router Docker image and the Docker containers (Container1 and Container2). It has two bridges created: br0 (10.0.1.2) and br1 (10.0.2.2), which are connected to their respective Docker networks (br0net and br1net).

VLANs and switch

The machine’s bridges are connected to two VLANs: enp2s0.1 (10.0.1.1) and enp2s0.2 (10.0.2.1). The enp2s0 interface is configured as a trunk connection to a switch, allowing it to carry traffic for multiple VLANs simultaneously.

Two devices are connected to the switch, with Device1 having an IP address of 10.0.1.5 and Device2 having an IP address of 10.0.2.5.

DHCP Server and Client

Custom DHCP handling is required because of the way IP addresses are assigned to Docker containers. Since we would like to maintain consistent addressing between physical and virtual nodes in each VLAN, we let DHCP handle physical nodes in the usual way and assign addresses to virtual nodes (containers) manually, by querying the DHCP server, to bypass the Docker addressing mechanism.

In short - the presented architecture describes a way to solve the non-trivial problem of isolating Docker containers inside the edge device architecture. The main element responsible for implementing the assumptions is the Router Docker container, which is responsible for managing traffic inside the system. The Router isolates network traffic between Container1 and Container2 containers using completely separate and independent network interfaces.

The aforementioned interfaces are spliced to VLANs via bridges, thus realizing the required isolation assumptions. The virtual interfaces on the host side are already responsible for exposing externally only those Docker containers that are within the specific VLANs. The solution to the IP addressing problem for Docker containers is also worth noting. The expected result is to obtain a form of IP addressing that will allow a permanent address assignment for existing containers while retaining the possibility of dynamic addressing for new components.

The architecture can be successfully used to create an end-to-end solution for edge devices while meeting strict security requirements.

Step-by-step setup

Now we start the implementation!

VLANs [execute on host]

Let’s set up VLANs.

enp2s0.1
auto enp2s0.1
iface enp2s0.1 inet static
address 10.0.1.1
network 10.0.1.0
netmask 255.255.255.0
broadcast 10.0.1.255

enp2s0.2
auto enp2s0.2
iface enp2s0.2 inet static
address 10.0.2.1
network 10.0.2.0
netmask 255.255.255.0
broadcast 10.0.2.255

Bridges [execute on host]

We should start by installing bridge-utils, a very useful tool for bridge setup.

sudo apt install bridge-utils

Now, let’s config the bridges.

sudo brctl addbr br0
sudo ip addr add 10.0.1.1/24 dev br0
sudo brctl addif br0 enp2s0.1
sudo ip link set br0 up

sudo brctl addbr br1
sudo ip addr add 10.0.2.1/24 dev br1
sudo brctl addif br1 enp2s0.2
sudo ip link set br1 up

Those commands create virtual brX interfaces, set IP addresses, and assign physical interfaces. This way, we bridge physical interfaces with virtual ones that we will create soon – it’s like a real bridge, connected to only one river bank so far.

Docker networks [execute on host]

Network for WLAN interface.

docker network create -d ipvlan --subnet=192.168.50.0/24 --gateway=192.168.50.1 -o ipvlan_mode=l2 -o parent=wlp3s0f0 wlan

Network for bridge interface br0.

docker network create --driver=bridge --subnet=10.0.1.0/24 --gateway=10.0.1.2 --opt "com.docker.network.bridge.name=br0" br0net

Network for bridge interface br1.

docker network create --driver=bridge --subnet=10.0.2.0/24 --gateway=10.0.2.2 --opt "com.docker.network.bridge.name=br1" br1net

Now, we have empty Docker networks connected to the physical interface (wlp3s0f0 – to connect containers to the Internet) or bridges (br0net and br1net – for the VLANs). The next step is to create containers and attach these networks.

Docker containers [execute on host]

Let’s create the router container and connect it to all Docker networks – to enable communication in both VLANs and the WLAN (Internet).

docker create -it --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=NET_BROADCAST --network=br0net --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 --ip=10.0.1.3 --name=router alpine
docker network connect --ip=192.168.50.2 wlan router
docker network connect --ip=10.0.2.3 br1net router

Now, we create applications’ containers and connect them to proper VLANs.

docker create -it --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=NET_BROADCAST --network=br0net --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 --name=container1 alpine

docker create -it --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=NET_BROADCAST --network=br1net --sysctl net.ipv4.icmp_echo_ignore_broadcasts=0 --name=container2 alpine

OK, let’s start all containers.

docker start router
docker start container1
docker start container2

Now, we’re going to configure the containers. To access a container’s shell, use the command

docker exec -it <container_name> sh

Router container setup [execute on Router container]

Check the interface’s IP addresses. The configuration should be as mentioned below.

eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:01:03
          inet addr:10.0.1.3  Bcast:10.0.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:745 errors:0 dropped:0 overruns:0 frame:0
          TX packets:285 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:142276 (138.9 KiB)  TX bytes:21966 (21.4 KiB)

eth1      Link encap:Ethernet  HWaddr 54:35:30:BC:6F:59
          inet addr:192.168.50.2  Bcast:192.168.50.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4722 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1515 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3941156 (3.7 MiB)  TX bytes:106741 (104.2 KiB)

eth2      Link encap:Ethernet  HWaddr 02:42:0A:00:02:01
          inet addr:10.0.2.3  Bcast:10.255.255.255  Mask:255.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:829 errors:0 dropped:0 overruns:0 frame:0
          TX packets:196 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:190265 (185.8 KiB)  TX bytes:23809 (23.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:60 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5959 (5.8 KiB)  TX bytes:5959 (5.8 KiB)

Then let’s set up the iptables. You can omit the first command if the iptables package is already installed. The second command configures the masquerade, and the rest of them configure routing rules.

apk add ip6tables iptables
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -P INPUT ACCEPT
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

Now, we hide the internal addresses of outgoing packets (the masquerade), and the networks are isolated. Please note that there is no routing configured on the host machine, and it’s not even a gateway for both containerized and physical network nodes.

In this configuration, the test containers communicate with the external environment through the router container’s physical interface; in other words, they are exposed to the Internet by the Docker router container. In addition to enabling communication between the VLANs and the Internet, the Router may also allow communication between VLANs, or even between specific nodes of different VLANs. This container has thus become the main routing and filtering point for network traffic.

Container1 setup [execute on Container1]

route del default
ip route add default via 10.0.1.3

Container2 setup [execute on Container2]

route del default
ip route add default via 10.0.2.3

As you can see, the container configuration is similar; all we need to do is set the default route via the router container instead of the Docker default one. In a real-world scenario, this step should be handled by the DHCP server.

Switch setup [execute on the network switch]

The configuration above requires a managed switch. We don't enforce any specific model, but the switch must support VLAN tagging on ports, with a trunk option for the port that combines traffic of multiple VLANs. The exact configuration depends on the device; pay particular attention to the trunk port, which carries the traffic from the switch to our host. In our case, device1 is connected to a switch port tagged as VLAN 1, device2 is connected to a switch port tagged as VLAN 2, and the enp2s0 port of the host computer is connected to a switch port configured as a trunk - combining the traffic of both VLANs in a single link.
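For illustration only, the port layout described above might look as follows in Cisco IOS-style syntax; port names are assumptions, and other vendors use different commands for the same access/trunk concepts.

```text
! Illustrative, Cisco IOS-style; adapt to your switch model.
vlan 2
!
interface GigabitEthernet0/1
 description device1
 switchport mode access
 switchport access vlan 1
!
interface GigabitEthernet0/2
 description device2
 switchport mode access
 switchport access vlan 2
!
interface GigabitEthernet0/3
 description trunk to host enp2s0
 switchport mode trunk
 switchport trunk allowed vlan 1,2
```

The essential points are the two access ports (one per VLAN) and the single trunk port that carries both tagged VLANs to the host.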

Summary

Together, we've managed to build the network described in the first article. You can experiment with ICMP to verify which nodes can reach each other and, more importantly, which nodes cannot be reached from outside their virtual networks.

Here is a scenario for the ping test. The following results prove that the created architecture fulfills its purpose and achieves the required isolation.

| Source | Target | Ping status | Explanation |
|---|---|---|---|
| Container1 | Device1 | OK | VLAN 1 |
| Device1 | Container1 | OK | VLAN 1 |
| Container1 | Router (10.0.1.3) | OK | VLAN 1 |
| Device1 | Router (10.0.1.3) | OK | VLAN 1 |
| Container1 | Internet (8.8.8.8) | OK | VLAN 1 to Internet via Router |
| Device1 | Internet (8.8.8.8) | OK | VLAN 1 to Internet via Router |
| Router | Container1 | OK | VLAN 1 |
| Router | Device1 | OK | VLAN 1 |
| Container1 | Container2 | No connection | VLAN 1 to VLAN 2 |
| Container1 | Device2 | No connection | VLAN 1 to VLAN 2 |
| Container1 | Router (10.0.2.3) | No connection | VLAN 1 to VLAN 2 |
| Device1 | Container2 | No connection | VLAN 1 to VLAN 2 |
| Device1 | Device2 | No connection | VLAN 1 to VLAN 2 |
| Device1 | Router (10.0.2.3) | No connection | VLAN 1 to VLAN 2 |
| Container2 | Device2 | OK | VLAN 2 |
| Device2 | Container2 | OK | VLAN 2 |
| Container2 | Router (10.0.2.3) | OK | VLAN 2 |
| Device2 | Router (10.0.2.3) | OK | VLAN 2 |
| Container2 | Internet (8.8.8.8) | OK | VLAN 2 to Internet via Router |
| Device2 | Internet (8.8.8.8) | OK | VLAN 2 to Internet via Router |
| Router | Container2 | OK | VLAN 2 |
| Router | Device2 | OK | VLAN 2 |
| Container2 | Container1 | No connection | VLAN 2 to VLAN 1 |
| Container2 | Device1 | No connection | VLAN 2 to VLAN 1 |
| Container2 | Router (10.0.1.3) | No connection | VLAN 2 to VLAN 1 |
| Device2 | Container1 | No connection | VLAN 2 to VLAN 1 |
| Device2 | Device1 | No connection | VLAN 2 to VLAN 1 |
| Device2 | Router (10.0.1.3) | No connection | VLAN 2 to VLAN 1 |
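The expected results follow a simple rule - same VLAN or via the router - so they can be encoded and checked programmatically. The following Python sketch (node names and VLAN assignments taken from the topology; the actual ping calls are left out) derives the expected status for any source/target pair:

```python
# Encode the isolation rules behind the ping table and derive the
# expected result for any (source, target) pair.

# VLAN membership per node/address, as in the article's topology.
VLAN = {
    "Container1": 1, "Device1": 1, "Router(10.0.1.3)": 1,
    "Container2": 2, "Device2": 2, "Router(10.0.2.3)": 2,
}

def expect_reachable(src: str, dst: str) -> bool:
    """Return True if a ping from src to dst should succeed."""
    if dst == "Internet":      # everything exits to the Internet via the router
        return True
    if src == "Router":        # the router itself sits in both VLANs
        return True
    return VLAN[src] == VLAN[dst]  # otherwise: same VLAN only

if __name__ == "__main__":
    for src in ("Container1", "Device1"):
        for dst in ("Container2", "Device2", "Router(10.0.2.3)"):
            status = "OK" if expect_reachable(src, dst) else "No connection"
            print(f"{src} -> {dst}: {status}")
```

Feeding such an expectation function into a test harness makes the isolation check repeatable after every configuration change, instead of reading ping output by hand.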

As the table shows, the Router container can send traffic to both networks, so it is a perfect candidate to serve common messages, such as a GPS broadcast.
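A minimal sketch of such a common-message fan-out over UDP is shown below. The message format and function name are our own illustration; in the article's topology the router would send to listeners in both VLAN subnets, while the demo here uses the loopback address so it runs anywhere.

```python
# Sketch: the router container fans a common message (e.g. a GPS fix)
# out to listeners in both VLANs over UDP. Addresses are illustrative.
import json
import socket

def send_gps_fix(sock: socket.socket, targets, lat: float, lon: float) -> bytes:
    """Serialize a GPS fix and send it to every (host, port) in targets."""
    payload = json.dumps({"type": "gps", "lat": lat, "lon": lon}).encode()
    for addr in targets:
        sock.sendto(payload, addr)
    return payload

if __name__ == "__main__":
    # Demo on loopback: one receiver stands in for a VLAN listener.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))  # ephemeral port
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = send_gps_fix(send, [recv.getsockname()], 50.06, 19.94)
    data, _ = recv.recvfrom(1024)
    assert data == sent
    print("received:", json.loads(data))
```

For a true per-VLAN broadcast, the sender would enable `SO_BROADCAST` and target the subnet broadcast addresses (e.g. 10.0.1.255 and 10.0.2.255) instead of a unicast address.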

If you need more granular routing or firewall rules, we suggest using firewalld instead of iptables. That way, you can block unencrypted traffic or open only specific ports.

Nevertheless, the job is not done yet. In the next article, we'll tackle the IP address assignment problem and run some more sophisticated tests on the infrastructure.


IoT SaaS - why the automotive industry should care, and whether AWS IoT or Azure IoT is the better base platform for connected vehicle development

We're connected - there's no doubt about it. At work, at home, in town, on holidays. Our life is no longer divided into offline and online, digital and analog; it is somewhere in between, and it happens in both worlds at once. This includes our car, where we expect access to data, instant updates, entertainment, and an understanding of our needs. A proven IoT SaaS platform makes this much easier. Today, choosing this option is crucial for every company in the automotive industry - without it, the connected vehicle wouldn't exist.

What you will learn from this article:

• Why an automotive company needs cloud services and how to build new business value on them
• What features an IoT platform for the automotive industry should have
• What cloud solutions are chosen by the largest producers

Before our very eyes, the car is becoming part of the Internet of Things ecosystem. We want safer driving and 'being led by the hand', ease of integration with external digital services like music streaming, automatic parking payments, or real-time traffic alerts, and the transfer of virtual experiences from one tool to another (including the car).

The vehicles we drive have become more service-oriented, which not only creates new options and business opportunities for companies from the automotive sector but also poses potential threats.

A hacking attack on a phone may result in financial loss or in compromising the user, whereas an attack on a car can have much more serious consequences. This is why the choice of platform for a connected vehicle is crucial.

Let's have a look at the basic requirements such a platform should meet, get to know the main service providers, and review the market use cases influencing the choices of the largest brands in the automotive industry.

5 must-haves for every IoT SaaS platform

1. Security

At the heart of the Internet of Things is data. However, no one will share it unless the system guarantees an appropriate level of security and privacy. Access authorization restricts access to selected users and platforms only. Authentication prevents unwanted third-party devices from connecting to the vehicle. Finally, there is also the option of blocking devices that have reached their usage limits or have become unsafe. These elements of platform security are a necessary condition for considering the platform's deployment in your own vehicle fleet.

2. Data

The connected vehicle continuously receives and sends data. It communicates not only with other moving vehicles but also with city and road infrastructure and third-party platforms. Data management, storage, and analysis are the gist of the entire IoT ecosystem. For everything to run smoothly and in line with security protocols, consumers of the data need to get it directly from your IoT platform, not from the devices themselves. Only in this way will you get the bigger picture, plus the option of comprehensive analysis - and hence the possibility of monetization and additional business value.

3. Analytics

Once we have the guarantee that the data is safe and obtained from the right sources, we can start analyzing it. A good IoT platform allows data to be analyzed in real time, but also in relation to past events. It can even predict events before they happen - for example, warning the user to replace a specific component before it breaks down. It is important that the platform collects and analyzes data from the entire spectrum of events; only in this way can it create a comprehensive picture of the real situation.

4. Integrations

The number of third-party platforms that drivers can connect to their cars will continue to increase. You have to be prepared for this and choose a solution that can evolve along with market changes. The openness of the system (combined with its security) will keep you competitive and expand your potential monetization options.

With a closed system, by contrast, you may soon have to replace some devices or make constant programming changes to communication protocols.

5. Reports

With this amount of data - thousands or even hundreds of thousands of vehicles can be connected to the platform - transparent data reporting becomes essential. Some of the information may be irrelevant, some will gain significance only in combination with other data, and some will be more or less important for your business (a company operating in the area of shared mobility will point out different aspects than a company managing a lorry fleet).

Your IoT platform must enable you to easily access, select, and present key information in a way that is clear to every employee, not only business intelligence experts.

We need data to draw constructive business conclusions, not to be bombarded with useless information.

Top market solutions - use cases of the biggest automotive brands

All right. So which solution should you opt for? There is no single, obvious answer to this question. It all depends on your individual needs, the scale of your business, and the cooperation model that is key for you.

You can focus on larger market players and scalable solutions - e.g. the Microsoft Azure platform or Amazon's AWS - or on services in the SaaS model provided by players such as Otonomo, Octo, Bosch, or Ericsson.

Microsoft Azure x Volkswagen

The Azure platform, created by the technology giant from Redmond, has long been familiar to developers and cloud architects. No wonder it is often used by the most famous brands in the automotive industry. Microsoft brings the scale of its projects, an excellent understanding of cloud technologies, and experience in creating solutions for the world's largest brands.

In 2020, based on these solutions, Volkswagen implemented its own Automotive Cloud platform (through its subsidiary CARIAD, previously called CarSoftware.org).

Powered by Microsoft Azure cloud and IoT Edge solutions, the platform will support the operation of over 5 million new Volkswagens every year. The company also plans to bring the technology to other vehicles from the group in all regions of the world, thereby laying the foundations for customer-centric services.

As the brand writes in its press release, the platform is focused on  „providing new services and solutions, such as in-car consumer experiences, telematics, and the ability to securely connect data between the car and the cloud.”

For this purpose, Volkswagen has also created a dedicated consumer platform - Volkswagen We, where car users will find smart mobility services and connectivity apps for their vehicles.

AWS x Ford and Lyft

AWS, Amazon's cloud platform, is backed by over 13 years on the market and „165 fully featured services for computing, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media….”

For people from the automotive industry, a great advantage is the huge brand community and the extensive ecosystem of other services - such as movie streaming (Prime Video), voice control (Alexa), or shopping in Amazon Go stores - which can create new business opportunities for companies providing automotive solutions.

The Amazon platform was selected, among others, by the Ford Motor Company (in cooperation with Transportation Mobility Cloud) and by Lyft in the shared mobility sector.

Ford and Autonomic, creators of the Transportation Mobility Cloud (TMC), justified the choice of that solution as follows: [we chose] „AWS for its global availability, and the breadth and depth of AWS’ portfolio of services, including Internet of Things (IoT), machine learning, analytics, and compute services”. The collaboration with Amazon is intended to help the brands expand the availability of cloud connectivity services and connected car application development services for the transportation industry.

Lyft chose Amazon services - built on Amazon DynamoDB, a NoSQL database service - to easily track users’ journeys, precisely calculate routes, and manage the scale of the process during communication peaks, holidays, and days off.

Chris Lambert, CTO at Lyft, commented on the brand's choice:  „By operating on AWS, we are able to scale and innovate quickly to provide new features and improvements to our services and deliver exceptional transportation experiences to our growing community of Lyft riders. […] we don’t have to focus on the undifferentiated heavy lifting of managing our infrastructure, and can concentrate instead on developing and improving services with the goal of providing the best transportation experiences for riders and drivers, and take advantage of the opportunity for Lyft to develop best-in-class self-driving technology.”

BMW & MINI x Otonomo

„Transforming data to revolutionize driving and transportation” - with this slogan, Otonomo, an IoT platform operating in the SaaS model, is trying to convince the automotive industry to avail of its services.

Among its customers, BMW and MINI, which belongs to the same group, are particularly noteworthy. Their vehicles have been connected to the platform in 44 countries and are intended to provide additional information for road traffic and smart cities, and to improve the overall driving experience.

Among the data to be collected by the vehicles, the manufacturer mentions information on the availability of parking lots, traffic congestion, and traffic patterns for city planning, as well as real-time traffic intelligence, local hazard warning services, mapping services, and municipal maintenance and road optimization.

Volvo x Ericsson Connected Vehicle Cloud

Partnerships with telecommunications companies are also a common business model in creating cloud services for vehicles. Volvo chose this kind of cooperation in its work with Ericsson - a partnership that dates back to 2012 and is constantly being expanded.

The Connected Vehicle Cloud (CVC) platform, as its producer named it, allows Volvo to „deliver scalably, secured, high-quality digital capabilities, including a full suite of automation, telematics, infotainment, navigation, and fleet management services to its vehicles. All software is able to be supported and seamlessly updated over-the-air (OTA) through the Ericsson CVC”.

Mazda x KDDI & Orange IoT

In 2020, connected car services also made their debut at Mazda, specifically in the MX-30 model. As with the Swedish vehicle manufacturer, a local technology partner was selected here: KDDI, the Japanese telecommunications giant (Orange became the partner for the European market).

With Mazda's connection to the IoT cloud, the MyMazda App has also been developed. The manufacturer boasts that in this way they introduced a package of integrated services,  "which will remove barriers between the car and the driver and provide a unique experience in using the vehicle". The IoT platform itself is geared to offer drivers a higher level of safety and comfort.

What counts are the specifics of your industry and the flexibility of the platform

Regardless of which solution you choose, remember that security and data management are an absolute priority for any IoT platform. There is no single proven model, because companies in the automotive industry have completely different vehicles, goals, and fleet scales.

Identify your key needs and make your final choice based on them. The IoT platform should be adjusted to your business, not the other way round. Otherwise, you will be in for constant software updates and potential problems with data management and its smooth monetization.
