Contents
========

* `Contents <#contents>`__
* `Edge Video Analytics Microservice Overview <#edge-video-analytics-microservice-overview>`__
* `Build the Base Image <#build-the-base-image>`__
* `Run the Base Image <#run-the-base-image>`__
* `Run EVAM in EII Mode <#run-evam-in-eii-mode>`__

Edge Video Analytics Microservice Overview
------------------------------------------

The Edge Video Analytics Microservice repository contains the source code for the Edge Video Analytics Microservice (EVAM) used in the `Video Analytics Use Case `_. For more information on how to build the use case, refer to the `Get Started `_ guide.

----

Build the Base Image
^^^^^^^^^^^^^^^^^^^^

Complete the following steps to build the base image:

#. Run the following command:

   .. code-block:: sh

      docker-compose build edge_video_analytics_microservice

   .. note:: To build the standalone image (without the EII libraries), follow the steps `here `_.

#. If required, download the pre-built container image for the Edge Video Analytics Microservice from `Docker Hub `_.

----

Run the Base Image
^^^^^^^^^^^^^^^^^^

Complete the following steps to run the base image:

#. Run the following command to make the required scripts executable:

   .. code-block:: sh

      chmod +x tools/model_downloader/model_downloader.sh docker/run.sh

#. Download the required models. From the cloned repo, run the following command:

   .. code-block:: sh

      sudo ./tools/model_downloader/model_downloader.sh --model-list

#. After downloading the models, the ``models`` directory is available in the base folder with the following structure:

   .. code-block:: text

      models/
      ├── action_recognition
      ├── emotion_recognition
      ├── face_detection_retail
      ├── object_classification
      └── object_detection

#. If you are behind a proxy, add the following lines under the ``environment`` section of ``docker-compose.yml``:

   .. code-block:: yaml

      - HTTP_PROXY=<proxy-host>:<proxy-port>/
      - HTTPS_PROXY=<proxy-host>:<proxy-port>/
      - NO_PROXY=localhost,127.0.0.1

#. Run the ``sudo docker-compose up`` command.

   .. note:: For more details, refer to `Run the Edge Video Analytics Microservice `_.

----

Gst-udf-loader (GStreamer udfloader plugin)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``gst-udf-loader`` GStreamer plugin supports loading and executing Python and native (C++) UDFs. UDFs are user-defined functions that let users add pre-processing or post-processing logic to the pipeline defined by EVAM. For more information on writing UDFs, refer to the `UDF writing guide `_.

Element Properties
~~~~~~~~~~~~~~~~~~

* ``config``: UDF config object
* ``name``: name of the object

To run inference using ``gst-udf-loader``, configure the following for each use case:

#. UDF source code under ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/user_scripts/udfs``. The UDF should be written according to the steps in the `UDF writing guide `_.
#. UDF pipelines under ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines``. This directory contains the pipelines, and each pipeline version holds the GStreamer pipeline description used to create the media pipeline.

----

Sample REST requests for using the gst-udf-loader element
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Dummy UDF
~~~~~~~~~~~~

.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/udfloader_sample -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "uri": "file:///home/pipeline-server/resources/classroom.mp4",
           "type": "uri"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           }
       },
       "parameters": {
           "generator": {
               "udfs": [
                   {
                       "name": "python.dummy",
                       "type": "python"
                   }
               ]
           },
           "publisher": {
               "udfs": [
                   {
                       "name": "python.dummy_publisher",
                       "type": "python",
                       "address": "",
                       "topic": ""
                   }
               ]
           }
       },
       "tags": {
           "dummy_tag": "python dummy metadata generator"
       }
   }'

2. GETi UDF
~~~~~~~~~~~

The GETi UDF takes the path of the deployment directory as input for deploying a project for local inference.
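The UDF entries in these requests (for example ``python.dummy``) map to Python modules under the ``udfs`` directory. As a rough sketch of what such a module contains, assuming the EII convention of a ``Udf`` class with a ``process(frame, metadata)`` method (the authoritative interface and return contract are defined in the UDF writing guide):

.. code-block:: python

   # Minimal pass-through Python UDF sketch, assuming the EII convention:
   # a `Udf` class whose `process` method receives a frame and its metadata.
   # Check the UDF writing guide for the exact required interface.

   class Udf:
       """Pass-through UDF that tags each frame's metadata."""

       def __init__(self):
           # Constructor arguments typically come from the "udfs" entry in
           # the pipeline config / REST request (e.g. "device", "visualize").
           pass

       def process(self, frame, metadata):
           # Add a marker to the metadata; leave the frame untouched.
           metadata["processed_by"] = "dummy_udf"
           # Return (drop_frame, updated_frame, updated_metadata):
           # drop_frame=False keeps the frame in the pipeline, and
           # updated_frame=None means the original frame is reused.
           return False, None, metadata

Placed at, say, ``user_scripts/udfs/python/dummy.py`` (a hypothetical path following the directory above), such a module would be referenced as ``"name": "python.dummy"`` in the request.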
Refer to the example below to see how the path of the deployment directory is specified in the UDF config. As mentioned in the steps above, make sure all the required resources are volume-mounted into the EVAM service.

.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/person_detection -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "uri": "file:///home/pipeline-server/resources/classroom.mp4",
           "type": "uri"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           },
           "frame": {
               "type": "rtsp",
               "path": "person-detection"
           }
       },
       "parameters": {
           "detection": {
               "udfs": [
                   {
                       "name": "python.geti_udf.geti_udf",
                       "type": "python",
                       "device": "CPU",
                       "visualize": "true",
                       "deployment": "./resources/geti/person_detection/deployment",
                       "metadata_converter": "null"
                   }
               ]
           }
       }
   }'

.. note::

   * Refer to the `geti udf readme `_ for more details.
   * For more information on the Intel® Geti™ SDK, refer to the `geti-sdk-docs `_.

----

Sample REST requests for using cameras
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. GenICam USB3 Vision cameras
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enter the serial number of the camera and other applicable properties (if required) in ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/gencamsrc_sample/pipeline.json`` before starting the ``edge_video_analytics_microservice`` service.

.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/gencamsrc_sample -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "element": "gencamsrc",
           "type": "gst"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           }
       },
       "parameters": {
           "generator": {
               "udfs": [
                   {
                       "name": "python.dummy",
                       "type": "python"
                   }
               ]
           }
       }
   }'

2. GenICam GigE Vision cameras
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prerequisites for using a GenICam-compliant GigE Vision camera:

* Add ``network_mode: host`` for ``edge_video_analytics_microservice`` in the ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/docker-compose.yml`` file and comment out or remove the ``networks`` and ``ports`` sections, as shown in the following snippet:

  .. code-block:: yaml

     edge_video_analytics_microservice:
       # Add network mode host
       network_mode: host
       image: intel/edge_video_analytics_microservice:1.1.0
       hostname: edge_video_analytics_microservice
       container_name: edge_video_analytics_microservice
       build:
         context: .
         dockerfile: Dockerfile
         args:
           EII_VERSION: "4.0.0"
           CMLIB_VERSION: "4.0.1"
           EII_UID: 1999
           USER: "eiiuser"
           EII_SOCKET_DIR: "/opt/intel/eii/sockets"
           BASE_IMAGE: "ubuntu:22.04"
           PKG_SRC: ${PKG_SRC}
           PYPI_SRC: ${PKG_SRC}
           MSGBUS_WHL: "eii_msgbus-4.0.0-cp310-cp310-manylinux2014_x86_64.whl"
           CFGMGR_WHL: "eii_configmgr-4.0.1-cp310-cp310-manylinux2014_x86_64.whl"
           CMAKE_INSTALL_PREFIX: "/opt/intel/eii"
           # Download sources for GPL/LGPL/AGPL binary distributed components (yes/no)
           DOWNLOAD_GPL_SOURCES: "no"
       privileged: false
       tty: true
       entrypoint: ["./run.sh"]
       # Comment or remove the ports and networks sections as they conflict with `network_mode: host`
       #ports:
       #  - '8080:8080'
       #  - '8554:8554'
       #networks:
       #  - app_network

* Enter the serial number of the camera and other applicable properties (if required) in ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/gencamsrc_sample/pipeline.json`` before starting the ``edge_video_analytics_microservice`` service.

.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/gencamsrc_sample -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "element": "gencamsrc",
           "type": "gst"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           }
       },
       "parameters": {
           "generator": {
               "udfs": [
                   {
                       "name": "python.dummy",
                       "type": "python"
                   }
               ]
           }
       }
   }'

3. RTSP cameras
~~~~~~~~~~~~~~~

Enter the RTSP URI in ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/rtsp_sample/pipeline.json`` before starting the ``edge_video_analytics_microservice`` service.

.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/rtsp_sample -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "element": "rtspsrc",
           "type": "gst"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           }
       },
       "parameters": {
           "generator": {
               "udfs": [
                   {
                       "name": "python.dummy",
                       "type": "python"
                   }
               ]
           }
       }
   }'

4. USB v4l2 cameras
~~~~~~~~~~~~~~~~~~~

Enter the appropriate device node in ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/pipelines/user_defined_pipelines/usb_v4l2_sample/pipeline.json`` before starting the ``edge_video_analytics_microservice`` service.

.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/usb_v4l2_sample -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "element": "v4l2src",
           "type": "gst"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           }
       },
       "parameters": {
           "generator": {
               "udfs": [
                   {
                       "name": "python.dummy",
                       "type": "python"
                   }
               ]
           }
       }
   }'

.. note:: For more information on camera configuration, refer to `camera-configurations `_.

----

Sample REST request for using image ingestion
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Volume mount the directory containing the images and make sure the images follow the required naming conventions.
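The sample pipeline reads images with ``multifilesrc``, which expects sequential, zero-padded file names matching an indexed location pattern. The helper below is an illustrative sketch for preparing such a directory; the ``frame_%03d.jpg`` pattern and the function name are assumptions for illustration, so check the image-ingestion documentation for the convention the sample pipeline actually expects.

.. code-block:: python

   import os
   import shutil

   def normalize_image_names(src_dir, dst_dir, pattern="frame_%03d.jpg"):
       """Copy images into dst_dir with sequential zero-padded names.

       `pattern` is a hypothetical multifilesrc-style index pattern; the
       real convention is defined by the image-ingestion pipeline config.
       """
       os.makedirs(dst_dir, exist_ok=True)
       # Keep only image files, in a stable (sorted) order.
       images = sorted(
           f for f in os.listdir(src_dir)
           if f.lower().endswith((".jpg", ".jpeg", ".png"))
       )
       renamed = []
       for index, name in enumerate(images):
           target = pattern % index  # e.g. frame_000.jpg, frame_001.jpg, ...
           shutil.copy(os.path.join(src_dir, name),
                       os.path.join(dst_dir, target))
           renamed.append(target)
       return renamed

The resulting directory is what you would volume-mount into the EVAM container before issuing the request below.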
.. code-block:: sh

   curl localhost:8080/pipelines/user_defined_pipelines/image_ingestion_sample -X POST -H 'Content-Type: application/json' -d '{
       "source": {
           "element": "multifilesrc",
           "type": "gst"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           }
       },
       "parameters": {
           "generator": {
               "udfs": [
                   {
                       "name": "python.dummy",
                       "type": "python"
                   }
               ]
           }
       }
   }'

.. note:: For more information on image ingestion, refer to `image-ingestion `_.

----

Related Links
^^^^^^^^^^^^^

To install the latest available Intel® Graphics Compute Runtime for OpenCL™ for your OS, see the `Install Guides `_.

Because EVAM is built with Intel® DL Streamer as its inferencing backend, refer to the `pipeline-server-docs `_ for more information.

.. note:: If your host OS is Ubuntu 22, add the output of ``stat -c "%g" /dev/dri/render*`` under the ``group_add`` section in the ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/docker-compose.yml`` file to set up access to the GPU from the container.

----

Run EVAM in EII Mode
^^^^^^^^^^^^^^^^^^^^

To run EVAM in EII mode, refer to the `README `_.
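All of the sample requests above direct inference metadata to ``/tmp/results.jsonl`` in JSON-lines format (one JSON object per line). A small reader like the following can be used to inspect that output; the exact keys in each record depend on the pipeline and UDFs, so the ``tags`` lookup in the usage note is only an example based on the dummy-UDF request above.

.. code-block:: python

   import json

   def read_results(path):
       """Parse a JSON-lines results file (one JSON object per line),
       as written by the "json-lines" metadata destination."""
       results = []
       with open(path) as f:
           for line in f:
               line = line.strip()
               if line:  # skip blank lines
                   results.append(json.loads(line))
       return results

For example, ``for r in read_results("/tmp/results.jsonl"): print(r.get("tags"))`` would print the tags attached to each result record, if the pipeline emitted any.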