Counting vehicle traffic using computer vision

Sometimes we need to measure the flow of people or vehicles: counting queues, tracking how crowded public places are, and so on.

Imagine that we are given the task of measuring the flow of cars at a certain location at different times of day. The first thing that comes to mind is that a person will have to sit and make an approximate manual count.

Let’s try to automate this task, since we now have a huge range of tools and plenty of computing power at our disposal.

First, let’s decide on a source of video. For example, you can take the portal https://weacom.ru/cams. It shares various public cameras with high-quality images and good placement (the road and cars are clearly visible).

As an example of a camera, take https://weacom.ru/cams/view/akademmost2

This camera is perfect as an example; later we will try to complicate the task.

To get frames from the camera, we need to connect to its stream. Open the page source and find the link to the video stream of the current camera.

Having this link, we can receive frames from this stream using Python and OpenCV.

import cv2
import time

TARGET_FPS = 5  # process at most this many frames per second

video_stream_widget = cv2.VideoCapture('https://cctv.baikal-telecom.net/Akademmost-2/index.m3u8')
print(video_stream_widget.get(cv2.CAP_PROP_FPS))

prev = 0.0
success, frame = video_stream_widget.read()

while success:
    success, frame = video_stream_widget.read()
    if not success:
        break

    # Show a frame only when enough time has passed since the previous one
    time_elapsed = time.time() - prev
    if time_elapsed > 1. / TARGET_FPS:
        prev = time.time()
        cv2.imshow('Weacom', cv2.resize(frame, (1280, 720)))  # resize for display

    # Press 's' to stop
    if cv2.waitKey(20) & 0xFF == ord('s'):
        video_stream_widget.release()
        cv2.destroyAllWindows()
        break

Since the stream arrives faster than we can read frames, we deliberately throttle processing to the rate we need, about five frames per second.
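The throttling logic can also be factored into a small helper that is easy to test without a live stream. This is just a sketch; the `FrameThrottle` name and the injectable clock are my own additions, not part of the original script:

```python
import time

class FrameThrottle:
    """Allow at most `fps` frames per second to pass through,
    regardless of how fast the stream actually delivers them."""

    def __init__(self, fps, clock=time.monotonic):
        self.interval = 1.0 / fps
        self.clock = clock             # injectable so the logic is testable
        self.last = float('-inf')      # ensures the very first frame passes

    def ready(self):
        """Return True if enough time has passed to process another frame."""
        now = self.clock()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False
```

In the reading loop you would then write `if throttle.ready(): cv2.imshow(...)` instead of checking timestamps inline.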

Now that we have the frames, our task is to apply an algorithm for tracking cars. For this we will use a combination of YOLOv5 and DeepSORT.

As a ready-made implementation, we will use https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch. This repository already contains almost everything we need; it only remains to clone it and adapt it to the task.

First, let’s clone the repository:

>> git clone --recurse-submodules https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch.git

And install all the necessary libraries:

>> pip install -r requirements.txt

Since YOLO is trained on the MS COCO dataset, we need to keep only the classes we care about, namely car, bus, and truck. In COCO these correspond to class IDs 2, 5, and 7, so we restrict detection to those classes.
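A minimal sketch of this kind of class filtering, assuming each detection follows YOLOv5’s `(x1, y1, x2, y2, confidence, class_id)` layout (the `filter_vehicles` helper is a name I made up for illustration):

```python
# MS COCO class IDs as used by YOLOv5: 2 = car, 5 = bus, 7 = truck
VEHICLE_CLASSES = {2, 5, 7}

def filter_vehicles(detections):
    """Keep only detections whose class ID belongs to a vehicle class.

    Each detection is assumed to be (x1, y1, x2, y2, confidence, class_id).
    """
    return [d for d in detections if int(d[5]) in VEHICLE_CLASSES]
```

The `--classes` flag in the repository does the same thing internally; this just shows the idea.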

Let’s run the code out of the box and see the result. For fun, I tried running it on another random camera:

>> python track.py --source https://cctv1.dreamnet.su:8090/hls/275779/8c728f28f72aea02c41d/playlist.m3u8 --classes 2 5 7 --show-vid

Overall, we can see that the algorithm works and the tracking runs. After watching it for a while, I closed the window.

Let’s recall our task: we need to count traffic over a certain period of time.

Visually, the algorithm already seems to do this, but in reality its accuracy suffers badly: the assigned IDs clearly outnumber the cars visible on screen, because a track that is lost and re-created gives the same car several IDs.

In this case, we just need to add counters for each new ID.

To do this, let’s make changes to track.py:

Add all the unique IDs of the identified cars to the list:

ids_list = []  # initialized once, before the frame loop

for j, (output, conf) in enumerate(zip(outputs, confs)):
    bboxes = output[0:4]  # bounding box: x1, y1, x2, y2
    id = output[4]        # track ID assigned by DeepSORT
    cls = output[5]       # detected class
    ids_list.append(id)

And at the end, we just remove duplicates and display the length of the list.

print(len(set(ids_list)))
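Equivalently, duplicates can be avoided from the start by accumulating IDs in a set. A small counter sketch (the `VehicleCounter` class is my own wrapper, not part of the repository):

```python
class VehicleCounter:
    """Count distinct track IDs seen so far; duplicate IDs are ignored."""

    def __init__(self):
        self.seen = set()

    def update(self, track_ids):
        """Register the track IDs produced on one frame."""
        self.seen.update(int(i) for i in track_ids)

    @property
    def total(self):
        """Number of unique vehicles seen so far."""
        return len(self.seen)
```

Calling `counter.update(...)` once per frame replaces both `ids_list.append(id)` and the final deduplication step.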

Let’s run the algorithm again and look at the results: this already looks better.

In general, this version of the algorithm can be left running for testing.

In the next articles, we will look at multithreaded monitoring of streams from different cameras.
