Edge Video Analytics Microservice Overview
The Edge Video Analytics Microservice (EVAM) repo contains the source code for EVAM, which is used for the Video Analytics Use Case. For more information on how to build the use case, refer to the Get Started guide.
Build the Base Image
To build the base image, run the following command:
docker compose build edge_video_analytics_microservice
Run the Base Image
Complete the following steps to run the base image:
Make the following files executable:
chmod +x tools/model_downloader/model_downloader.sh docker/run.sh
Download the required models. From the cloned repo, run the following command:
sudo ./tools/model_downloader/model_downloader.sh --model-list <Path to models.list.yml>
After downloading the models, a models directory will be present in the base folder with the following structure:
models/
├── action_recognition
├── emotion_recognition
├── face_detection_retail
├── object_classification
└── object_detection
If you are behind a proxy, add the following lines to the environment section of docker-compose.yml:
- HTTP_PROXY=<IP>:<Port>/
- HTTPS_PROXY=<IP>:<Port>/
- NO_PROXY=localhost,127.0.0.1
Run the sudo docker compose up command.
Note
For more details, refer to Run the Edge Video Analytics Microservice.
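After the containers come up, the REST API listens on port 8080 (per the compose port mapping). As a quick sanity check, you can list the loaded pipelines; the /pipelines listing endpoint shown here follows the pipeline-server REST convention and should be treated as an assumption:
# Verify the container is running and the REST API responds
docker ps --filter name=edge_video_analytics_microservice
curl localhost:8080/pipelines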
Gst-udf-loader (GStreamer udfloader plugin)
The gst-udf-loader GStreamer plugin supports loading and executing Python and native (C++) UDFs. UDFs are user-defined functions that enable users to add any pre-processing or post-processing logic to a pipeline defined by EVAM. For more information on writing UDFs, refer to the UDF writing guide.
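To make this concrete, the following is a minimal sketch of a Python UDF, assuming the class-based interface described in the UDF writing guide (a class exposing a process method that receives the frame and its metadata and returns a drop flag plus the possibly modified frame and metadata). Treat the exact signature as an assumption and follow the guide for the authoritative contract:
# dummy_annotator.py - hypothetical example UDF
class Udf:
    """Pass-through UDF that records the frame dimensions in the metadata."""

    def __init__(self):
        # One-time setup (e.g. loading a model) goes here.
        pass

    def process(self, frame, metadata):
        # frame: numpy ndarray holding the decoded frame
        # metadata: dict carried through the pipeline
        metadata["height"], metadata["width"] = frame.shape[:2]
        # Return (drop_frame, modified_frame_or_None, modified_metadata_or_None)
        return False, None, metadata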
Element Properties:
config: udf config object
name: name of the object
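For illustration, these properties can be set directly on the element in a standalone launch line. This is a sketch only: it assumes the plugin is on GST_PLUGIN_PATH and that config accepts the UDF config as a file path (udf_config.json here is a hypothetical file); check the plugin documentation for the exact format:
gst-launch-1.0 filesrc location=sample.avi ! decodebin ! videoconvert ! \
  video/x-raw,format=RGB ! udfloader name=udfloader config=udf_config.json ! \
  fakesink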
To run inference using gst-udf-loader, configure the following for each use case (see the pipeline sketch after this list):
UDF source code at udfs ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/user_scripts/udfs) - the UDF must comply with the steps described in the UDF writing guide.
UDF pipelines at pipelines ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines) - this directory contains the pipelines and their pipeline versions, each of which holds the GStreamer launch string used to create the media pipeline.
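For example, a pipeline version's pipeline.json wraps the GStreamer launch string in a template. The sketch below follows the pipeline-server conventions EVAM is based on and is illustrative rather than authoritative (element order and the {auto_source} placeholder are assumptions):
{
    "type": "GStreamer",
    "template": [
        "{auto_source} ! decodebin ! videoconvert",
        " ! video/x-raw,format=RGB",
        " ! udfloader name=udfloader",
        " ! appsink name=destination"
    ],
    "description": "Sample UDF pipeline"
}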
Sample REST requests for using the gst-udf-loader element:
1. Dummy UDF:
curl localhost:8080/pipelines/user_defined_pipelines/udfloader_sample -X POST -H 'Content-Type: application/json' -d '{
"source": {
"uri": "file:///home/pipeline-server/resources/classroom.avi",
"type": "uri"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
}
},
"parameters": {
"generator": {
"udfs": [
{
"name": "python.dummy",
"type": "python"
}
]
},
"publisher": {
"udfs": [
{
"name": "python.dummy_publisher",
"type": "python",
"address": "<publisher address>",
"topic": "<publisher topic>"
}
]
}
},
"tags": {
"dummy_tag": "python dummy metadata generator"
}
}'
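A successful request returns a pipeline instance ID. The inference results can then be followed in the metadata file configured above, and instance status can be queried over REST (the status endpoint shown follows the pipeline-server convention; treat it as an assumption):
# Follow the results written by the metadata destination
tail -f /tmp/results.jsonl

# List the status of all pipeline instances
curl localhost:8080/pipelines/status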
2. GETi UDF
The GETi UDF takes the path of the deployment directory as input for deploying a project for local inference. Refer to the example below to see how the deployment directory path is specified in the UDF config. As mentioned in the steps above, make sure all the required resources are volume mounted into the EVAM service.
curl localhost:8080/pipelines/user_defined_pipelines/person_detection -X POST -H 'Content-Type: application/json' -d '{
"source": {
"uri": "file:///home/pipeline-server/resources/classroom.avi",
"type": "uri"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
},
"frame": {
"type": "rtsp",
"path": "person-detection"
}
},
"parameters": {
"detection": {
"udfs": [
{
"name": "python.geti_udf.geti_udf",
"type": "python",
"device": "CPU",
"visualize": "true",
"deployment": "./resources/geti/person_detection/deployment",
"metadata_converter": "null"
}
]
}
}
}'
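With the frame destination above, the annotated stream is published over RTSP. Assuming the default 8554 port mapping from docker-compose.yml, it can be viewed with any RTSP client, for example:
ffplay rtsp://localhost:8554/person-detection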
Note
Refer to the geti UDF README for more details.
For more information on the Intel® Geti™ SDK, refer to geti-sdk-docs.
Sample REST requests for using cameras:
1. GenICam USB3 Vision cameras
Enter the serial number of the camera and other applicable properties (if required) in the gencamsrc sample pipeline ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/gencamsrc_sample/pipeline.json) before starting the edge_video_analytics_microservice service.
curl localhost:8080/pipelines/user_defined_pipelines/gencamsrc_sample -X POST -H 'Content-Type: application/json' -d '{
"source": {
"element": "gencamsrc",
"type": "gst"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
}
},
"parameters": {
"generator": {
"udfs": [
{
"name": "python.dummy",
"type": "python"
}
]
}
}
}'
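For reference, the serial number is typically set as a gencamsrc property inside the pipeline template. The sketch below is illustrative only: the serial value is a placeholder, and the remaining elements and properties (such as pixel-format) are assumptions to be checked against the actual sample pipeline:
{
    "type": "GStreamer",
    "template": [
        "gencamsrc serial=<camera serial> pixel-format=mono8",
        " ! videoconvert ! video/x-raw,format=RGB",
        " ! appsink name=destination"
    ]
}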
2. GenICam GigE Vision cameras
Pre-requisites for using a GenICam compliant GigE Vision camera:
Add network_mode: host for edge_video_analytics_microservice in the docker-compose.yml ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/docker-compose.yml) file, and comment out or remove the networks and ports sections, as shown in the following snippet:
edge_video_analytics_microservice:
# Add network mode host
network_mode: host
image: intel/edge_video_analytics_microservice:1.1.0
hostname: edge_video_analytics_microservice
container_name: edge_video_analytics_microservice
build:
context: .
dockerfile: Dockerfile
args:
CMLIB_VERSION: "4.0.1"
EII_UID: 1999
USER: "eiiuser"
EII_SOCKET_DIR: "/opt/intel/eii/sockets"
BASE_IMAGE: "ubuntu:22.04"
PKG_SRC: ${PKG_SRC}
MSGBUS_WHL: "eii_msgbus-4.0.0-cp310-cp310-manylinux2014_x86_64.whl"
CFGMGR_WHL: "eii_configmgr-4.0.1-cp310-cp310-manylinux2014_x86_64.whl"
CMAKE_INSTALL_PREFIX: "/opt/intel/eii"
# Download sources for GPL/LGPL/AGPL binary distributed components (yes/no)
DOWNLOAD_GPL_SOURCES: "no"
privileged: false
tty: true
entrypoint: ["./run.sh"]
# Comment or remove ports and networks section as it would conflict with `network_mode: host`
#ports:
# - '8080:8080'
# - '8554:8554'
#networks:
# - app_network
Enter the serial number of the camera and other applicable properties (if required) in the gencamsrc sample pipeline ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/gencamsrc_sample/pipeline.json) before starting the edge_video_analytics_microservice service.
curl localhost:8080/pipelines/user_defined_pipelines/gencamsrc_sample -X POST -H 'Content-Type: application/json' -d '{
"source": {
"element": "gencamsrc",
"type": "gst"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
}
},
"parameters": {
"generator": {
"udfs": [
{
"name": "python.dummy",
"type": "python"
}
]
}
}
}'
3. RTSP cameras
Enter the RTSP URI in the rtsp sample pipeline ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/rtsp_sample/pipeline.json) before starting the edge_video_analytics_microservice service.
curl localhost:8080/pipelines/user_defined_pipelines/rtsp_sample -X POST -H 'Content-Type: application/json' -d '{
"source": {
"element": "rtspsrc",
"type": "gst"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
}
},
"parameters": {
"generator": {
"udfs": [
{
"name": "python.dummy",
"type": "python"
}
]
}
}
}'
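The RTSP URI goes into the rtspsrc location property in the pipeline template. The sketch below is hedged: the URI is a placeholder, and the depayloader/decoder elements depend on the camera's encoding (H.264 is assumed here):
{
    "type": "GStreamer",
    "template": [
        "rtspsrc location=<RTSP URI> latency=100",
        " ! rtph264depay ! h264parse ! avdec_h264",
        " ! videoconvert ! video/x-raw,format=RGB",
        " ! appsink name=destination"
    ]
}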
4. USB v4l2 cameras
Enter the appropriate device node in the usb-v4l2src sample pipeline ([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/usb_v4l2_sample/pipeline.json) before starting the edge_video_analytics_microservice service.
curl localhost:8080/pipelines/user_defined_pipelines/usb_v4l2_sample -X POST -H 'Content-Type: application/json' -d '{
"source": {
"element": "v4l2src",
"type": "gst"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
}
},
"parameters": {
"generator": {
"udfs": [
{
"name": "python.dummy",
"type": "python"
}
]
}
}
}'
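The device node is set on the v4l2src element in the pipeline template; a hedged sketch (the device path varies per host, /dev/video0 is only an example):
{
    "type": "GStreamer",
    "template": [
        "v4l2src device=/dev/video0",
        " ! videoconvert ! video/x-raw,format=RGB",
        " ! appsink name=destination"
    ]
}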
Note
For more information on camera configuration, refer to camera-configurations.
Sample REST request for using image ingestion:
Volume mount the directory containing the images, and make sure the images follow the required naming conventions (see the volume sketch after the request below).
curl localhost:8080/pipelines/user_defined_pipelines/image_ingestion_sample -X POST -H 'Content-Type: application/json' -d '{
"source": {
"element": "multifilesrc",
"type": "gst"
},
"destination": {
"metadata": {
"type": "file",
"path": "/tmp/results.jsonl",
"format": "json-lines"
}
},
"parameters": {
"generator": {
"udfs": [
{
"name": "python.dummy",
"type": "python"
}
]
}
}
}'
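The image directory must be visible inside the container; a hedged docker-compose volume sketch (the host path and mount point below are hypothetical, and the mounted path must match the location the multifilesrc pipeline reads from):
edge_video_analytics_microservice:
  volumes:
    # Host directory with images that follow the required naming convention
    - /home/user/images:/home/pipeline-server/img_dir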
Note
For more information on image ingestion, refer to image-ingestion.
Run EVAM in EII Mode
To run EVAM in EII mode, refer to the README.