In almost any network, scenarios can arise in which traffic is distributed inefficiently. At least, that's what MIT engineers say. Let's look at what the problem is and how real it is.
At the dawn of the networking age, no one was concerned with QoS. There was no need, since connections linked institutions and government organizations with small numbers of users. The situation changed in the second half of the 1980s, when ARPAnet suffered its first congestion collapse. The connection speed between Lawrence Berkeley National Laboratory and the University of California, Berkeley dropped a thousandfold, from 32 Kbps to 40 bps: a few users occupied almost the entire channel, sharply reducing the resources available to everyone else.
This incident pushed the community to develop congestion control mechanisms. Today, classic algorithms such as Reno and CUBIC use packet loss as a congestion signal: if a noticeable share of packets fails to reach the recipient, the sender reduces its transmission rate. More advanced solutions like BBR instead model the communication channel and build forecasts from RTT measurements. But even modern approaches cannot solve the problem completely, although they come very close. Engineers from MIT believe the reason lies in the variable nature of delays.
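To make the loss-based idea concrete, here is a toy sketch of additive-increase/multiplicative-decrease (AIMD) window dynamics, the principle behind Reno-style algorithms. It is a deliberate simplification (no slow start, no real packets), and the function name and parameters are illustrative, not from any real network stack.

```python
# Toy illustration of loss-based congestion control (AIMD).
# Simplified: no slow start, no actual packets, just window dynamics.

def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """One RTT of additive-increase / multiplicative-decrease."""
    if loss_detected:
        return max(1.0, cwnd * decrease)  # back off sharply on loss
    return cwnd + increase                # otherwise probe for bandwidth

cwnd = 10.0
trace = []
for rtt in range(8):
    loss = (rtt == 4)  # suppose a loss is detected on the fifth RTT
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)

print(trace)  # window grows linearly, halves on loss, grows again
```

The sawtooth pattern this produces is exactly why loss-based senders keep filling queues until packets drop: loss is the only signal they react to.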
No silver bullet
Specialists from the Massachusetts Institute of Technology claim that the BBR and PCC algorithms do not rule out the possibility of "starvation": a state in which one or more clients receive almost no resources because of excessive network traffic.
According to the researchers, the cause of this behavior is network jitter: delays that are unrelated to channel congestion and arise, for example, from problems at the physical layer. Against this background, packet delivery can unexpectedly slow down by tens of milliseconds and affect network behavior unpredictably. Existing algorithms cannot tell jitter apart from congestion-induced delay, which leads to errors in traffic management and, as a result, to starvation.
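To illustrate how jitter can mislead a delay-sensitive sender, here is a toy controller (my own simplification, not the actual BBR or PCC logic) that treats any RTT above a baseline plus a threshold as congestion. Feeding it jittered RTT samples from an otherwise idle link triggers spurious backoffs.

```python
# Toy delay-based rate controller: any RTT above baseline + threshold
# is interpreted as queueing delay, so the sender cuts its rate.
# On a link with jitter but no congestion, this causes false backoffs.

import random

def control(rtt_samples, base_rtt=0.040, threshold=0.010, rate=100.0):
    backoffs = 0
    for rtt in rtt_samples:
        if rtt > base_rtt + threshold:   # looks like queue buildup...
            rate *= 0.7                  # ...so the sender backs off
            backoffs += 1
        else:
            rate += 5.0                  # otherwise probe upward
    return rate, backoffs

rng = random.Random(1)
# The link is idle: true RTT is base_rtt, but jitter adds up to 30 ms.
samples = [0.040 + rng.uniform(0.0, 0.030) for _ in range(50)]
rate, backoffs = control(samples)
print(f"final rate: {rate:.1f}, spurious backoffs: {backoffs}")
```

Every backoff here is a mistake: the extra delay came from jitter, not from a full queue, which is exactly the confusion the MIT researchers point to.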
In their experiments, the researchers used the following network model:
Two flows shared a common FIFO queue. Packets left it at a constant rate of C bps, and the base RTT was a constant Rm. They then passed through a component that simulated non-congestion delay: it "slowed down" packets by a random amount in the range [0, D] seconds without changing their order.
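Under those assumptions, the model can be sketched in a few lines of Python. This is my own simplified reconstruction, not the authors' code: unit-size packets from two flows drain from a shared FIFO at rate C, then each gets an extra delay drawn uniformly from [0, D], with packet order preserved.

```python
# Simplified sketch of the paper's network model (a reconstruction,
# not the authors' simulator): a shared FIFO queue drained at rate C,
# followed by an order-preserving jitter stage with delay in [0, D].

import random

def simulate(arrivals, C=1000.0, D=0.05, seed=42):
    """arrivals: list of (time, flow_id). Returns (flow, delivery_time)."""
    rng = random.Random(seed)
    service = 1.0 / C          # time to drain one unit-size packet
    free_at = 0.0              # when the FIFO link is next free
    last_out = 0.0             # enforce in-order delivery after jitter
    deliveries = []
    for t, flow in sorted(arrivals):
        start = max(t, free_at)
        free_at = start + service              # FIFO service at rate C
        jitter = rng.uniform(0.0, D)           # non-congestion delay
        out = max(free_at + jitter, last_out)  # preserve packet order
        last_out = out
        deliveries.append((flow, out))
    return deliveries

pkts = [(i * 0.001, i % 2) for i in range(10)]  # two interleaved flows
for flow, t in simulate(pkts):
    print(f"flow {flow}: delivered at {t:.4f}s")
```

Note that with D = 50 ms the jitter dwarfs the queueing delay itself, which is the regime where a sender can no longer tell the two apart.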
Just add more speed
The MIT study attracted the attention of Hacker News readers. Some of them argued that raising throughput is enough to solve channel congestion problems: that approach should smooth out the rough edges and shortcomings of traffic control algorithms.
Yet things are not so clear-cut. In the US, even users with the fastest internet connections run into problems when watching streams and other heavy content, despite the fact that such content consumes only about 2-5% of the total available bandwidth.
Even if current traffic control tools are imperfect, that does not mean ISPs can abandon them and focus their efforts solely on increasing throughput. Modern traffic control algorithms fall short only in specific scenarios, and QoS solutions let telecoms manage bandwidth for subscribers. As a result, clients get high download speeds with the lowest possible delay.
In any case, the MIT researchers believe that new algorithms based on mathematical modeling will eventually be developed to help avoid network overload. But even if such solutions appear, deploying them will be a challenge in its own right.
Further reading from the VAS Experts corporate blog: