Automate It, or Docker Container Shipping for WebRTC

The overwhelming majority of IT specialists, whatever their field, strive to do as little by hand as possible. I am not afraid of big words: whatever can be automated must be automated!

Imagine a situation: you need to deploy many servers of the same type, and do it quickly – spin them up quickly, tear them down quickly. For example, test benches for developers. When development runs in parallel, developers need to be separated so that they do not interfere with each other and so that one developer's mistakes do not block the work of the others.

There are several ways to solve this problem:

  1. Use virtual machines. A somewhat cumbersome solution. A virtual machine image includes the operating system, the hardware configuration, and all additional software and utilities. All of this needs to be stored somewhere, startup is not lightning fast, and it depends on the host load. Each developer creates their own virtual machine with a set of all the necessary software. This option is optimal when developers require different operating systems.

  2. Use scripts. At first glance the simplest solution, but in practice probably the most difficult. Here we do not carry the operating system and additional software with us, and that can play a cruel joke if some dependency on the surrounding software turns out to be unsatisfied – say, the repository happens to contain the wrong version of Python, and that's it!

  3. Run the main product in containers. This is the most modern solution to date. A container is a kind of environment isolated from external factors. It somewhat resembles a virtual machine but does not require the hardware configuration to be included in the image. Like a virtual machine, it uses host resources at runtime. Docker containers are easy to move between hosts thanks to their small size (compared to a virtual machine) and the lack of binding to an OS. The contents of containers, as in cargo shipping, do not interact with each other in any way, so even conflicting applications can run on the same host in different containers, as long as there are enough resources.

Containers let you do more than just deploy test environments and developer benches with ease. Let's take a look at how containers can be used for video streaming, where their key property – isolation – can be put to active use.

Streaming without containers: all streams end up on a shared server, so a problem with one stream can affect all the others.

Streaming using containers:

  • you can organize a streaming service for bloggers. Each blogger gets their own container running their personal server. If one blogger suddenly has technical problems, the others do not even notice and continue streaming as if nothing had happened;

  • similarly, you can implement rooms for video conferencing or webinars. One room – one container;

  • organize a video surveillance system for houses. One house – one container;

  • implement complex transcoding (according to statistics, transcoding processes are the most prone to crashes in a multithreaded environment). One transcoder – one container.


Containers can be used wherever you need to isolate a process and protect it from its neighbors. In this simple way you can significantly improve the quality of service for unrelated customers: a blogger gets their own container, a house under video surveillance gets its own. Scripting can automate the creation, deletion, and modification of these per-client streaming containers.
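Such per-client automation can be sketched roughly like this. This is only a sketch: the wcs-<client> naming scheme and the DOCKER=echo dry-run switch are my assumptions, not from the article.

```shell
#!/bin/sh
# Hypothetical per-client lifecycle helpers.
# Set DOCKER=echo to dry-run the commands instead of executing them.
DOCKER="${DOCKER:-docker}"

# Derive a container name from a client id: lowercase,
# non-alphanumerics become '-'. e.g. "Blogger_42" -> "wcs-blogger-42"
client_container_name() {
  printf 'wcs-%s' "$(printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9' '-')"
}

# Start a dedicated streaming container for a client.
start_client() {
  "$DOCKER" run --rm -d --name "$(client_container_name "$1")" \
    flashphoner/webcallserver:latest
}

# Tear the client's container down again.
stop_client() {
  "$DOCKER" stop "$(client_container_name "$1")"
}
```

With helpers like these, onboarding or removing a customer becomes a one-line operation.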

Why containers, then, and not virtual machines?

A hypervisor always emulates hardware, down to processor instructions, so full virtualization consumes more host resources than Docker containers do. WebRTC streaming itself is resource-hungry because of traffic encryption; add to this the resources needed by the virtual machine's OS. So a media server on virtual machines can be expected to run slower than a media server in Docker containers on the same physical host.

The main question remains: how do you start a media server in a Docker container?

Let’s take a look at the example of Web Call Server.

Easier than easy!

The Flashphoner Web Call Server 5.2 image has already been uploaded to Docker Hub.

Deploying WCS comes down to two commands:

  1. Download the current build from Docker Hub

    docker pull flashphoner/webcallserver
  2. Run a docker container with a trial or commercial license number

    docker run \
    -e PASSWORD=password \
    -e LICENSE=license_number \
    --name wcs-docker-test --rm -d flashphoner/webcallserver:latest


    PASSWORD – the password for SSH access into the container. If this variable is not defined, you will not be able to get into the container via SSH;

    LICENSE – the WCS license number. If this variable is not defined, the license can be activated via the web interface.
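Those two steps can also be wrapped in a small script. This is only a sketch: the deploy_wcs helper and the DOCKER=echo dry-run switch are my additions, not part of WCS.

```shell
#!/bin/sh
# Sketch: pull the image and start a WCS container in one call.
# Set DOCKER=echo to dry-run the commands instead of executing them.
DOCKER="${DOCKER:-docker}"

deploy_wcs() {
  # $1 = SSH password, $2 = license number
  "$DOCKER" pull flashphoner/webcallserver
  "$DOCKER" run \
    -e PASSWORD="$1" \
    -e LICENSE="$2" \
    --name wcs-docker-test --rm -d flashphoner/webcallserver:latest
}
```

Usage: `deploy_wcs my_password my_license_number`.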

But if everything were that simple, there would be no article.

First difficulties

On my local machine running Ubuntu Desktop 20.04 LTS, I installed Docker:

sudo apt install docker.io

Created a new internal Docker network called “testnet”:

sudo docker network create testnet

Downloaded the current WCS build from Docker Hub:

sudo docker pull flashphoner/webcallserver

Launched the WCS container:

sudo docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<container_ip> \
--net testnet --ip <container_ip> \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

The variables here are:

PASSWORD – the password for SSH access into the container. If this variable is not defined, you will not be able to get into the container via SSH;

LICENSE – the WCS license number. If this variable is not defined, the license can be activated via the web interface;

LOCAL_IP – the IP address of the container on the Docker network, which is written to the ip_local parameter in the settings file;

the --net key specifies the network in which the launched container will run. Here we launch the container on the testnet network.

I checked the container's availability with ping, then opened the WCS web interface in a local browser at the container's address and tested publishing a WebRTC stream using the "Two Way Streaming" example. Everything worked.
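To find the address for that ping, you can ask Docker for the container's IP. A small helper of my own (container name as used above; DOCKER=echo is a dry-run switch):

```shell
#!/bin/sh
# Print the IP address a container holds on its Docker network(s).
DOCKER="${DOCKER:-docker}"

container_ip() {
  "$DOCKER" inspect -f \
    '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$1"
}
# Usage: ping -c 3 "$(container_ip wcs-docker-test)"
```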

Locally, from my computer with Docker installed, I had access to the WCS server. Now I needed to give my colleagues access.

Closed network

Docker's internal network is isolated: from the Docker network there is access "to the world", but "from the world" the Docker network is not reachable.

It turns out that to give colleagues access to the test bench in Docker on my machine, I would have to give them console access to my machine. For testing within a development group that is barely acceptable, but I also wanted to take all this to production eventually. Are the billions of containers all over the world really working only locally?

Of course not. The answer was found by poring over the manuals: you need to forward ports. Moreover, the port forwarding is needed not on the network router, but in Docker itself.

Excellent! The list of ports is known. Let's forward them:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<container_ip> \
-e EXTERNAL_IP=<host_ip> \
-d -p8444:8444 -p8443:8443 -p1935:1935 -p30000-33000:30000-33000 \
--net testnet --ip <container_ip> \
--name wcs-docker-test --rm flashphoner/webcallserver:latest

We use the following variables in this command:

PASSWORD, LICENSE, and LOCAL_IP – covered above;

EXTERNAL_IP – the IP address of the external network interface. It is written to the ip parameter in the configuration file.

The -p keys also appear in the command – this is the port forwarding. In this iteration we use the same "testnet" network we created earlier.

In a browser on another computer, open the WCS web interface using the IP address of my Docker machine and run the "Two Way Streaming" example.

The WCS web interface works and even WebRTC traffic goes.

And everything would be fine if not for one thing!

Well, that took a while!

With port forwarding enabled, the container took about 10 minutes to start. In that time I could have manually installed a couple of copies of WCS. The delay is caused by Docker generating a binding for every port in the range.
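The range is bigger than it looks; a quick count of the bindings Docker has to create for it:

```shell
# One binding per port in 30000-33000, inclusive,
# on top of 8444, 8443 and 1935.
echo $((33000 - 30000 + 1))
```

That is 3001 individual bindings, which explains the slow start.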

When trying to start a second container with the same list of ports, I expectedly received an error that the port range is already taken.

It turned out that the port forwarding option did not suit me, because of the slow container start and the need to change ports to launch a second and subsequent containers.

Googling around, I found a thread on GitHub where a similar problem was discussed. That discussion recommended using the host network to run containers handling WebRTC traffic.

We start the container on the host network (the --net host key indicates this):

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
--net host \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Excellent! The container started up quickly. Everything works from an external machine – both the web interface and WebRTC traffic are published and reproduced.

Then I launched a couple more containers. Fortunately, my computer has several network cards.

That could have been the end of it. But it bothered me that the number of containers on a host would be limited by the number of network interfaces.

Working variant

Since version 1.12, Docker provides two network drivers: Macvlan and IPvlan. They allow you to assign static IPs from the LAN.

  • Macvlan – allows one physical network interface (host machine) to have an arbitrary number of containers, each with its own MAC address.

    Requires Linux kernel v3.9–3.19 or 4.0+.

  • IPvlan – allows you to create an arbitrary number of containers for your host machine that have the same MAC address.

    Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
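A quick way to check the running kernel against these requirements (a small helper of my own, not from the Docker docs):

```shell
#!/bin/sh
# Return success if the running kernel version is at least MAJOR.MINOR.
kernel_at_least() {
  maj=$(uname -r | cut -d. -f1)
  min=$(uname -r | cut -d. -f2)
  [ "$maj" -gt "$1" ] || { [ "$maj" -eq "$1" ] && [ "$min" -ge "$2" ]; }
}
# e.g. kernel_at_least 4 2 && echo "ipvlan supported"
```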

I used the IPvlan driver in my installation. Partly this was historical, partly I anticipated moving the infrastructure to VMware ESXi. The point is that VMware ESXi allows only one MAC address per port, and in that case Macvlan is not suitable.

So: I have a network interface enp0s3 that gets an IP address from a DHCP server.

Since addresses on my network are issued by a DHCP server, while Docker chooses and assigns addresses on its own, this can lead to conflicts if Docker picks an address that has already been assigned to another host on the network.

To avoid this, you need to reserve part of the subnet range for using Docker. This solution has two parts:

  1. You need to configure the DHCP service on your network so that it does not assign addresses in a specific range.

  2. We need to tell Docker about this reserved address range.

In this article I will not explain how to configure a DHCP server. I think every IT specialist has dealt with it more than once, and in the worst case the web is full of manuals.

But we will look in detail at how to tell Docker which range is allocated to it.

I limited the DHCP server's range so that it does not issue addresses above 192.168.23.99, and set aside a block of 32 addresses above that limit for Docker.
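As a sanity check on the size of such a block: 32 addresses is exactly a /27 prefix, since 5 host bits remain:

```shell
# Number of addresses in a /27: 2^(32-27).
echo $((1 << (32 - 27)))
```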

Create a new Docker network called “new-testnet”:

docker network create -d ipvlan -o parent=enp0s3 \
--subnet <subnet> \
--gateway <gateway> \
--ip-range <ip-range> \
new-testnet

where:

ipvlan – the type of network driver;

parent=enp0s3 – the physical network interface (enp0s3) through which container traffic will flow;

--subnet – the subnet;

--gateway – the default gateway for the subnet;

--ip-range – the range of subnet addresses that Docker may assign to containers.

Then we launch a container with WCS on this network:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<container_ip> \
--net new-testnet --ip <container_ip> \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

We check the operation of the web interface and the publication / playback of WebRTC traffic using the “Two-way Streaming” example:

There is one small drawback to this approach: when using IPvlan or Macvlan, Docker isolates the container from the host. If you try, for example, to ping a container from the host, all packets will be lost.

But for my current task – running WCS in a container – this is not critical. You can always ping or ssh from another machine.

Using IPvlan on a single Docker host, you can bring up as many containers as you need. Their number is limited only by the resources of the host and, in part, by the network addressing of the particular network.

Running containers in Docker is tricky only for beginners. Once you understand the technology a little, you will appreciate how simple and convenient it is. I really hope my experience helps someone appreciate containerization.


Links:

  • WCS in Docker

  • Docker WCS deployment documentation

  • WCS image on Docker Hub
