Multi-Camera Detection of Social Distancing on Linux*
Overview
Social distancing is one of the most effective non-pharmaceutical ways to prevent the spread of disease. This tutorial presents a solution that uses computer vision inference with the Intel® Distribution of OpenVINO™ toolkit to measure the distance between people and store the resulting data in InfluxDB*. The data can then be visualized on a Grafana* dashboard.
How It Works
This multi-camera solution demonstrates an end-to-end analytics pipeline that detects people and calculates the social distance between them across multiple input feeds. Frames are transformed, scaled, and normalized into BGR images that can be fed to the inference engine in the Intel® Distribution of OpenVINO™ toolkit. The following steps are performed for the inference:
- Apply Intel's person detection model, person-detection-retail-0013, to detect people in all the video streams.
- Compute the Euclidean distance between every pair of detected people.
- Based on these measurements, check whether any pair of people is closer than the configured minimum distance (N pixels), as shown in the sketch after this list.
- Store the total count of social distancing violations in InfluxDB.
- Visualize the data stored in InfluxDB on the Grafana dashboard.
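For illustration, the Python sketch below shows how pairwise distance violations could be counted from detected person bounding boxes. The function names, the bottom-center ground-point approximation, and the example threshold are assumptions made for this sketch, not the application's actual code.

import itertools
import math

def bottom_center(box):
    # box = (xmin, ymin, xmax, ymax); approximate a person's ground position
    # by the bottom-center point of the bounding box
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2.0, ymax)

def count_violations(boxes, min_dist):
    # Count pairs of people closer than min_dist (in pixels or calibrated units)
    points = [bottom_center(b) for b in boxes]
    violations = 0
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        if math.hypot(x2 - x1, y2 - y1) < min_dist:
            violations += 1
    return violations

# Example: three detections checked against an 80-unit threshold
print(count_violations([(10, 20, 60, 200), (70, 25, 120, 210), (400, 30, 450, 215)], 80))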
Get Started
Step 1: Install
The Multi-Camera Detection of Social Distancing component is installed with the Edge Insights for Vision package and is available on the target system.
Go to the Multi-Camera Detection of Social Distancing component directory from the terminal by running the command:
cd $HOME/edge_insights_vision/Edge_Insights_for_Vision_<version>/RI_MultiCamera_Social_Distancing/mcss-covid19/
Where <version> is the Edge Insights for Vision version selected while downloading.
Step 2: Download the Input Video
The application works best with input feeds from cameras placed at eye level.
Download a sample video at 1280x720 resolution and place it in the $HOME/edge_insights_vision/Edge_Insights_for_Vision_<version>/RI_MultiCamera_Social_Distancing/mcss-covid19/resources directory.
Where <version> is the Edge Insights for Vision version selected while downloading.
(Data set subject to this license. The terms and conditions of the dataset license apply. Intel® does not grant any rights to the data files.)
To use a different video, set the INPUT1 path in the run.sh file inside the application directory.
The application also supports multiple videos as input; the relevant variables are documented with comments in the run.sh file inside the application directory.
INPUT1="${PWD}/../resources/<name_of_video_file>.mp4" MIN_SOCIAL_DIST1=<appropriate_minimum_social_distance_for_input1>
Where <appropriate_minimum_social_distance_for_input1> is measured in cm. 80 cm is recommended.
(Optional) Test with USB Camera
To test with a USB camera, specify the camera index in the run.sh file.
On Ubuntu, to list all available video devices, run the following command:
ls /dev/video*
For example, if the output of the command is /dev/video0, update the INPUT1 and MIN_SOCIAL_DIST1 variables in the run.sh file inside the application directory:
INPUT1=/dev/video0
MIN_SOCIAL_DIST1=<appropriate_minimum_social_distance_for_input1>
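If needed, you can quickly verify that the camera index works before editing run.sh. The snippet below is a minimal check using OpenCV (assumed to be installed); index 0 corresponds to /dev/video0. It is not part of the application.

import cv2

cap = cv2.VideoCapture(0)  # /dev/video0
ok, frame = cap.read()
if ok:
    print("Camera OK, frame size:", frame.shape[1], "x", frame.shape[0])
else:
    print("Could not read a frame from /dev/video0")
cap.release()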
Step 3: Initialize Environment Variables
Run the following command to initialize the OpenVINO™ environment variables:
source /opt/intel/openvino_2022/setupvars.sh
If you are on a proxy network and no_proxy is not set, set it in the terminal using the command:
export no_proxy=localhost,127.0.0.1
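As an optional sanity check after sourcing setupvars.sh, the OpenVINO™ Python API can list the inference devices it detects (for example, CPU, GPU, HDDL). This check is not required by the application.

from openvino.runtime import Core

print(Core().available_devices)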
Run the Application
Instructions in this tutorial are provided for three hardware configurations (CPU, GPU, and Intel® Vision Accelerator). Configure the application by modifying the DEVICE1 parameter.
- Change to the application directory:
cd application
- Inside the run.sh file, change the following parameters (if required):
PERSON_DETECTOR="${PWD}/../intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml" DEVICE1="<device>"
where <device> can be CPU, GPU, or HDDL (Intel® Vision Accelerator). A sketch of how the device name is used for inference follows this list.
- Change the permissions for the run.sh file and run the script:
chmod +x run.sh
./run.sh
- Application parameters can be changed in the run.sh file as required.
- Initialization of the GPU and Intel® Vision Accelerator might take some time before inference starts.
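For reference, the sketch below shows how the PERSON_DETECTOR path and DEVICE1 value map onto the OpenVINO™ Python API. It is a minimal illustration, not the application's actual loading code; the relative model path assumes you are in the application directory.

from openvino.runtime import Core

core = Core()
model = core.read_model("../intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml")
compiled_model = core.compile_model(model, device_name="CPU")  # set to "GPU" or "HDDL" as needed
print("Model compiled for CPU")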
Data Visualization on Grafana
- Navigate to localhost:3000 in your browser.
NOTE: If the browser shows Unable to connect, make sure the Grafana service is active using the command sudo service grafana-server status. If the service is not active, start it by running sudo service grafana-server start in the terminal.
- Log in with admin as both the username and password.
- Go to Configuration (Settings icon) and select Data Sources.
- Select + Add data source, select InfluxDB, and provide the following details:
Name: Mcss Covid
URL: http://localhost:8086
Auth: Enable Skip TLS Verify
InfluxDB details:
  Database: McssCovid
  HTTP Method: GET
- Click Save and Test.
- Go to Dashboard (icon on the left side of the window) and select + Import.
- Choose Upload JSON file and import the mcss-covid19/resources/multi_cam.json file.
- Click on Import.
- Click on the Multi Camera Covid-19 Solution dashboard to view real-time violation data (a sketch of how this data could be written to InfluxDB follows below).
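For context, the dashboard reads from the McssCovid database configured above. The sketch below shows how violation counts could be written to that database with the InfluxDB 1.x Python client; the measurement, tag, and field names are assumptions for illustration and are not taken from the application code.

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="McssCovid")
client.write_points([
    {
        "measurement": "social_distance",  # assumed measurement name
        "tags": {"camera": "camera1"},     # assumed tag
        "fields": {"violations": 3},       # total violations for this interval
    }
])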
Summary and Next Steps
This application leverages Intel® Distribution of OpenVINO™ toolkit plugins to detect people, measure the distance between them, and store the data in InfluxDB. It can be extended further to support feeds from a network stream (RTSP camera), as sketched below, and the algorithm can be optimized for better performance.
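As an example of the RTSP extension, a network stream could be opened with OpenCV in much the same way as a file or USB camera. The URL below is a placeholder, and this is a sketch rather than part of the current application.

import cv2

cap = cv2.VideoCapture("rtsp://<camera_ip>:554/<stream_path>")  # placeholder URL
ok, frame = cap.read()
print("RTSP frame received:", ok)
cap.release()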
As a next step, you can explore other use cases and reference implementations on the Edge Insights for Vision page.