Edge Video Analytics Microservice Overview

The Edge Video Analytics Microservice repository contains the source code for the Edge Video Analytics Microservice (EVAM) used in the Video Analytics use case. For more information on how to build the use case, refer to the Get Started guide.


Build the Base Image

Complete the following steps to build the base image:

  1. Run the following command:

    docker-compose build edge_video_analytics_microservice
    

    Note: To build the standalone image (without the EII libraries), follow the steps here.

  2. If required, download the pre-built container image for Edge Video Analytics Microservice from Docker Hub.
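
    A sketch, assuming the image is published as intel/edge_video_analytics_microservice (verify the exact repository name and tag on Docker Hub):

    docker pull intel/edge_video_analytics_microservice:<tag>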


Run the Base Image

Complete the following steps to run the base image:

  1. Run the following command to make the required scripts executable:

    chmod +x tools/model_downloader/model_downloader.sh docker/run.sh
    
  2. Download the required models. From the cloned repo, run the following command (a minimal model-list.yml sketch is shown after these steps):

    sudo ./tools/model_downloader/model_downloader.sh --model-list <Path to model-list.yml>
    
  3. After the models are downloaded, a models directory is created in the base folder with the following structure:

    models/
    ├── action_recognition
    ├── emotion_recognition
    ├── face_detection_retail
    ├── object_classification
    └── object_detection
    
  4. If you are behind a proxy, add the following lines to the environment section of the docker-compose.yml file:

    - HTTP_PROXY=<IP>:<Port>/
    - HTTPS_PROXY=<IP>:<Port>/
    - NO_PROXY=localhost,127.0.0.1
    
  5. Run the following command:

    sudo docker-compose up
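
The following is a minimal sketch of the model-list.yml passed to the model downloader in step 2, assuming it accepts Open Model Zoo model names with alias, version, and precision fields (these field names are assumptions; see tools/model_downloader for the authoritative format):

    models:
      - model: emotions-recognition-retail-0003
        alias: emotion_recognition
        version: 1
        precision: [FP16]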

Note

For more details, refer to Run the Edge Video Analytics Microservice.


Gst-udf-loader (GStreamer udfloader plugin)

The gst-udf-loader GStreamer plugin supports loading and executing Python and native (C++) UDFs. UDFs are user-defined functions that enable users to add pre-processing or post-processing logic to the pipelines defined by EVAM. For more information on writing UDFs, refer to the UDF writing guide.
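
As an illustration only, the following is a minimal sketch of a Python UDF, assuming the EII convention of a Udf class whose process() method receives a frame and its metadata and returns a (drop_frame, updated_frame, updated_metadata) tuple; refer to the UDF writing guide for the authoritative interface:

    # Hypothetical minimal UDF sketch following the EII UDF convention.
    class Udf:
        def __init__(self):
            # One-time initialization (for example, loading a model) goes here.
            pass

        def process(self, frame, metadata):
            # Custom pre- or post-processing logic goes here. Returning
            # (False, None, metadata) does not drop the frame, leaves the
            # frame unchanged, and passes the updated metadata downstream.
            metadata["processed_by"] = "sample_udf"
            return False, None, metadata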

Element properties:

  • config: UDF config object
  • name: name of the element instance
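
As a sketch, assuming the element is registered as udfloader and that its config property accepts the path of a JSON file listing the UDFs (mirroring the "udfs" arrays in the REST examples below), a standalone pipeline might look like:

    gst-launch-1.0 videotestsrc ! videoconvert ! udfloader config=/home/pipeline-server/udf_config.json ! fakesink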

To run inference using gst-udf-loader, configure the following for each use case:

  1. UDF source code at eva_udfs([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eva_udfs): the UDF must follow the conventions described in the UDF writing guide.

  2. UDF pipelines at pipelines([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines): this directory contains the pipeline definitions, organized by pipeline name and version, each holding the GStreamer pipeline used to create media pipelines. A sketch of such a definition is shown below.
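
The following is a minimal pipeline.json sketch, assuming the Pipeline Server convention of a type/template/description object and that the element is registered as udfloader (check the existing files under pipelines/ for the authoritative layout):

    {
        "type": "GStreamer",
        "template": [
            "uridecodebin name=source",
            " ! videoconvert ! udfloader name=udfloader",
            " ! appsink name=destination"
        ],
        "description": "Sample udfloader pipeline"
    }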


Sample REST requests for using the gst-udf-loader element:

1. Dummy UDF:

curl localhost:8080/pipelines/user_defined_pipelines/udfloader_sample -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "uri": "file:///home/pipeline-server/resources/classroom.mp4",
                    "type": "uri"
                },
                "destination": {
                    "metadata": {
                        "type": "file",
                        "path": "/tmp/results.jsonl",
                        "format": "json-lines"
                    }
                },
                "parameters": {
                    "generator": {
                        "udfs": [
                            {
                                "name": "python.dummy",
                                "type": "python"
                            }
                        ]
                    },
                    "publisher": {
                        "udfs": [
                            {
                                "name": "python.dummy_publisher",
                                "type": "python",
                                "address": "<publisher address>",
                                "topic": "<publisher topic>"
                            }
                        ]
                    }
                },
                "tags": {
                    "dummy_tag": "python dummy metadata generator"
                }
}'

2. Geti UDF

The Geti UDF takes the path of a deployment directory as the input for deploying a project for local inference. Refer to the following example to see how the path of the deployment directory is specified in the UDF config. As mentioned in the steps above, make sure that all the required resources are volume mounted to the EVAM service; a sketch of such a mount follows.
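
As a sketch, the deployment directory can be volume mounted by adding a volumes entry for the service in docker-compose.yml (the paths here are placeholders):

    edge_video_analytics_microservice:
    ...
    volumes:
      - <host path to deployment directory>:<path to geti deployment directory>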

curl localhost:8080/pipelines/user_defined_pipelines/udfloader_sample -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "uri": "file:///home/pipeline-server/resources/classroom.mp4",
                    "type": "uri"
                },
                "destination": {
                    "metadata": {
                        "type": "file",
                        "path": "/tmp/results.jsonl",
                        "format": "json-lines"
                    }
                },
                "parameters": {
                    "generator": {
                        "udfs": [
                            {
                                "name": "eva_udfs.geti_udf.geti_udf",
                                "type": "python",
                                "device": "CPU",
                                "visualize": "true",
                                "deployment": "<path to geti deployment directory>"
                            }
                        ]
                    },
                    "publisher": {
                        "udfs": [
                            {
                                "name": "python.dummy_publisher",
                                "type": "python",
                                "address": "<publisher address>",
                                "topic": "<publisher topic>"
                            }
                        ]
                    }
                },
                "tags": {
                    "dummy_tag": "python dummy metadata generator"
                }
}'
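
Once a pipeline is running, the metadata written to /tmp/results.jsonl in these examples can be inspected from inside the container. A sketch, assuming the running container is named edge_video_analytics_microservice (check docker ps for the actual name):

    docker exec edge_video_analytics_microservice tail -f /tmp/results.jsonl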

Note

  • For more information on the Intel® Geti™ SDK, refer to geti-sdk-docs.


Sample REST request for using the gencamsrc element:

1. USB3 Vision cameras

Prerequisites for using a USB3 Vision camera:

  • Enable the root user at runtime for the edge_video_analytics_microservice service by adding user: root in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/docker-compose.yml) file.

    Refer to the following snippet for adding user: root:

    edge_video_analytics_microservice:
    ...
    user: root
    
  • Enter the serial number of the camera and other applicable properties (if required) in gencamsrc-sample-pipeline([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/gencamsrc_sample/pipeline.json) before bringing up the edge_video_analytics_microservice service. A sketch of the relevant part of the pipeline definition is shown below.
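
    As a sketch of where the serial number goes, assuming gencamsrc exposes a serial property that can be set in the pipeline template (verify property names against the gencamsrc-readme), the template field of the pipeline definition might contain:

    "template": [
        "gencamsrc serial=<camera serial number> name=source",
        " ! videoconvert ! appsink name=destination"
    ]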

curl localhost:8080/pipelines/user_defined_pipelines/gencamsrc_sample -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "element": "gencamsrc",
                    "type": "gst"
                },
                "destination": {
                    "metadata": {
                        "type": "file",
                        "path": "/tmp/results.jsonl",
                        "format": "json-lines"
                    }
                }
}'

Note

For more information on the gencamsrc plugin, refer to gencamsrc-readme.



Run EVAM in EII Mode

To run EVAM in EII mode, refer to the README.