Contents
========

* `Contents <#contents>`__
* `Edge Video Analytics Microservice for EII Overview <#edge-video-analytics-microservice-for-eii-overview>`__
* `Prerequisites <#prerequisites>`__
* `Run the Containers <#run-the-containers>`__
* `Configuration <#configuration>`__
* `Config <#config>`__
* `Interfaces <#interfaces>`__
* `Steps to Independently Build and Deploy EdgeVideoAnalyticsMicroservice Service <#steps-to-independently-build-and-deploy-edgevideoanalyticsmicroservice-service>`__
* `Steps to Independently Build EdgeVideoAnalyticsMicroservice Service <#steps-to-independently-build-edgevideoanalyticsmicroservice-service>`__
* `Steps to Independently Deploy EdgeVideoAnalyticsMicroservice Service <#steps-to-independently-deploy-edgevideoanalyticsmicroservice-service>`__
* `Deploy EdgeVideoAnalyticsMicroservice Service without Config Manager Agent Dependency <#deploy-edgevideoanalyticsmicroservice-service-without-config-manager-agent-dependency>`__
* `Deploy EdgeVideoAnalyticsMicroservice Service with Config Manager Agent Dependency <#deploy-edgevideoanalyticsmicroservice-service-with-config-manager-agent-dependency>`__
* `Camera Configurations <#camera-configurations>`__
* `GenICam GigE or USB3 Cameras <#genicam-gige-or-usb3-cameras>`__
* `RTSP Cameras <#rtsp-cameras>`__
* `USB v4l2 Cameras <#usb-v4l2-cameras>`__
* `Image Ingestion <#image-ingestion>`__
* `Integrate Python UDF with EdgeVideoAnalyticsMicroservice Service <#integrate-python-udf-with-edgevideoanalyticsmicroservice-service>`__
* `Running EdgeVideoAnalyticsMicroservice with EII helm usecase <#running-edgevideoanalyticsmicroservice-with-eii-helm-usecase>`__
* `Running EdgeVideoAnalyticsMicroservice on a GPU device <#running-edgevideoanalyticsmicroservice-on-a-gpu-device>`__
* `Use Human Pose Estimation UDF with EdgeVideoAnalyticsMicroservice <#use-human-pose-estimation-udf-with-edgevideoanalyticsmicroservice>`__

Edge Video Analytics Microservice for Edge Insights for Industrial (EII) Overview
----------------------------------------------------------------------------------

The Edge Video Analytics Microservice (EVAM) combines video ingestion and analytics capabilities provided by the Edge Insights for Industrial (EII) visual ingestion and analytics modules. This directory provides the Intel® Deep Learning Streamer (Intel® DL Streamer) pipelines to perform object detection on an input URI source and send the ingested frames and inference results using the MsgBus Publisher. It also provides a Docker compose and config file to use EVAM with the Edge Insights software stack.

Prerequisites
^^^^^^^^^^^^^

As a prerequisite for using EVAM in EII mode, download the `EII 4.0.0 package from ESH `_ and complete the following steps:

#. EII, when downloaded from ESH, is available at the installed location:

   .. code-block:: sh

      cd [EII installed location]/IEdgeInsights

#. Complete the prerequisites for provisioning the EII stack by referring to the `README.md `_.

#. Download the required model files for the pipeline mentioned in the config(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file by completing steps 2 to 4 in the `README `_.

   **Note:** The model files are large and hence are not part of the repo.

#. Run the following commands to set the environment, build the ``ia_configmgr_agent`` container, and copy the models to the required directory:

   a. Go to the ``build`` directory:

      .. code-block:: sh

         cd [WORK_DIR]/IEdgeInsights/build
   b. Configure the visualizer app's subscriber interfaces. For example, add the following ``interfaces`` key value in the ``Visualizer/multimodal-data-visualization-streaming/eii/config.json`` and ``Visualizer/multimodal-data-visualization/eii/config.json`` files.

      .. code-block:: json

         "interfaces": {
             "Subscribers": [
                 {
                     "Name": "default",
                     "Type": "zmq_tcp",
                     "zmq_recv_hwm": 50,
                     "EndPoint": "ia_edge_video_analytics_microservice:65114",
                     "PublisherAppName": "EdgeVideoAnalyticsMicroservice",
                     "Topics": [
                         "edge_video_analytics_results"
                     ]
                 }
             ]
         }

   c. Execute the builder.py script:

      .. code-block:: sh

         python3 builder.py -f usecases/video-streaming-evam.yml

   d. Create the models directory required by the service:

      .. code-block:: sh

         sudo mkdir -p /opt/intel/eii/models/

   e. Copy the downloaded model files to /opt/intel/eii:

      .. code-block:: sh

         sudo cp -r [downloaded_model_directory]/models /opt/intel/eii/

Run the Containers
^^^^^^^^^^^^^^^^^^

To pull the prebuilt EII container images and EVAM from Docker Hub and run the containers in detached mode, run the following command:

.. code-block:: sh

   # Launch the EII stack
   docker-compose up -d

.. note::

   * The prebuilt container image for the `Edge Video Analytics Microservice `_ gets downloaded when you run the ``docker-compose up -d`` command, if the image is not already present on the host system.
   * The ETCD watch capability is enabled for the Edge Video Analytics Microservice, and the service restarts when config/interface changes are made via the EtcdUI interface. While changing the pipeline/pipeline_version, make sure they are volume mounted to the container.

Configuration
^^^^^^^^^^^^^

See the edge-video-analytics-microservice/eii/config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for the configuration of EVAM. The default configuration starts the object_detection demo for EII.

The config file is divided into two sections as follows:

Config
~~~~~~

The following table describes the attributes that are supported in the ``config`` section; an example assembling them follows the table.

.. list-table::
   :header-rows: 1

   * - Parameter
     - Description
   * - ``cert_type``
     - Type of EII certs to be created. This should be ``"zmq"`` or ``"pem"``.
   * - ``source``
     - Source of the frames. This should be ``"gstreamer"`` or ``"msgbus"``.
   * - ``source_parameters``
     - The parameters for the source element. The provided object lists the typical parameters.
   * - ``pipeline``
     - The name of the DL Streamer pipeline to use. This should correspond to a directory in the pipelines directory.
   * - ``pipeline_version``
     - The version of the pipeline to use. This is typically a subdirectory of a pipeline in the pipelines directory.
   * - ``publish_frame``
     - The Boolean flag that controls whether to publish both the metadata and the analyzed frame, or just the metadata.
   * - ``model_parameters``
     - The parameters for the model used for inference.
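For reference, the following sketch shows a ``config`` section assembling these parameters. The ``source_parameters`` and ``pipeline_version`` values here are illustrative assumptions, not necessarily the shipped defaults; check the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for the actual values.

.. code-block:: javascript

   "config": {
       "cert_type": "zmq",
       "source": "gstreamer",
       "source_parameters": {
           // Illustrative source element; the default demo ingests from a URI source
           "element": "uridecodebin",
           "type": "gst"
       },
       "pipeline": "object_detection",
       // Illustrative pipeline version; must match a subdirectory under pipelines/object_detection
       "pipeline_version": "person_vehicle_bike",
       "publish_frame": false,
       "model_parameters": {}
   }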
Interfaces
~~~~~~~~~~

Currently, in the EII mode, EVAM supports launching a single pipeline and publishing on a single topic. This implies that in the configuration file ("config.json"), the single JSON object in the ``Publisher`` list is where the configuration resides for the published data. For more details on the structure, refer to the `EII documentation `_.

EVAM also supports subscribing and publishing messages or frames using the Message Bus. The endpoint details for the EII service you need to subscribe from are to be provided in the **Subscribers** section, and the endpoints where you need to publish to are to be provided in the **Publishers** section, of the config(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file.

To enable injection of frames into the GStreamer pipeline obtained from the Message Bus, make the following changes (a combined sketch follows this list):

* Set the ``source`` parameter in the config(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file to msgbus. Refer to the following code snippet:

  .. code-block:: javascript

     "config": {
         "source": "msgbus"
     }

* Set the template of the respective pipeline to use appsrc as the source instead of uridecodebin. Refer to the following code snippet:

  .. code-block:: javascript

     {
         "type": "GStreamer",
         "template": ["appsrc name=source",
                      " ! rawvideoparse",
                      " ! appsink name=destination"
                     ]
     }
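Putting the two changes together, the following sketch shows how a msgbus-sourced configuration might look. The subscriber values are reused from the video-analytics example later in this document and are illustrative, not a shipped configuration:

.. code-block:: javascript

   {
       "config": {
           // Frames are injected from the Message Bus instead of a GStreamer source
           "source": "msgbus"
       },
       "interfaces": {
           "Subscribers": [
               {
                   "Name": "default",
                   "Type": "zmq_ipc",
                   "EndPoint": "/EII/sockets",
                   "PublisherAppName": "VideoIngestion",
                   "Topics": [
                       "camera1_stream_results"
                   ],
                   "zmq_recv_hwm": 50
               }
           ]
       }
   }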
Steps to Independently Build and Deploy EdgeVideoAnalyticsMicroservice Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. note:: For running two or more microservices, we recommend that you try the use case-driven approach for building and deploying, as mentioned in `Generate Consolidated Files for a Subset of Edge Insights for Industrial Services `_.

Steps to Independently Build EdgeVideoAnalyticsMicroservice Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note:: When switching between independent deployment of the service with and without the Config Manager Agent service dependency, you can run into issues with ``docker-compose build`` related to the existence of the Certificates folder. As a workaround, run the ``sudo rm -rf Certificates`` command to proceed with ``docker-compose build``.

To independently build the EdgeVideoAnalyticsMicroservice service, complete the following steps:

#. The downloaded source code should have a directory named EdgeVideoAnalyticsMicroservice/eii:

   .. code-block:: sh

      cd IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii

#. Copy the IEdgeInsights/build/.env file into the current folder using the following command:

   .. code-block:: sh

      cp ../../build/.env .

   **Note:** Update the HOST_IP and ETCD_HOST variables in the .env file with your system IP.

   .. code-block:: sh

      # Source the .env using the following command:
      set -a && source .env && set +a

#. Independently build:

   .. code-block:: sh

      docker-compose build

Steps to Independently Deploy EdgeVideoAnalyticsMicroservice Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can deploy the EdgeVideoAnalyticsMicroservice service in either of the following two ways:

Deploy EdgeVideoAnalyticsMicroservice Service without Config Manager Agent Dependency
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Run the following commands to deploy the EdgeVideoAnalyticsMicroservice service without the Config Manager Agent dependency:

.. code-block:: sh

   # Enter the eii directory
   cd IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii

Copy the IEdgeInsights/build/.env file into the current folder using the following command, if it is not already present:

.. code-block:: sh

   cp ../../build/.env .

**Note:** Ensure that ``docker ps`` is clean and that ``docker network ls`` does not have the EII bridge network.

Update the .env file for the following:

#. HOST_IP and ETCD_HOST variables with your system IP.
#. ``READ_CONFIG_FROM_FILE_ENV`` value to ``true`` and ``DEV_MODE`` value to ``true``.

Source the .env using the following command:

.. code-block:: sh

   set -a && source .env && set +a

Run the service:

.. code-block:: sh

   docker-compose -f docker-compose.yml -f docker-compose-dev.override.yml up -d

Deploy EdgeVideoAnalyticsMicroservice Service with Config Manager Agent Dependency
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Run the following commands to deploy the EdgeVideoAnalyticsMicroservice service with the Config Manager Agent dependency:

**Note:** Ensure that the Config Manager Agent image is present in the system. If not, build the Config Manager Agent locally when independently deploying the service with the Config Manager Agent dependency.

.. code-block:: sh

   # Enter the eii directory
   cd IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii

Copy the IEdgeInsights/build/.env file into the current folder using the following command, if it is not already present:

.. code-block:: sh

   cp ../../build/.env .

**Note:** Ensure that ``docker ps`` is clean and that ``docker network ls`` doesn't have EII bridge networks.

Update the .env file for the following:

#. HOST_IP and ETCD_HOST variables with your system IP.
#. ``READ_CONFIG_FROM_FILE_ENV`` value set to ``false``.

Copy the docker-compose.yml from IEdgeInsights/ConfigMgrAgent as docker-compose.override.yml:

.. code-block:: sh

   cp ../../ConfigMgrAgent/docker-compose.yml docker-compose.override.yml

Copy the builder.py with standalone mode changes from the IEdgeInsights/build directory:

.. code-block:: sh

   cp ../../build/builder.py .

Run the builder.py in standalone mode; this generates the eii_config.json and updates the docker-compose.override.yml:

.. code-block:: sh

   python3 builder.py -s true

Build the service (this step is optional if the service was already built in the ``Independently buildable`` step above):

.. code-block:: sh

   docker-compose build

Run the service.

**Note:** Source the .env using the command ``set -a && source .env && set +a`` before running the below command.

.. code-block:: sh

   docker-compose -f docker-compose.yml -f docker-compose-dev.override.yml -f docker-compose.override.yml up -d

Camera Configurations
^^^^^^^^^^^^^^^^^^^^^

You need to make changes to the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) and the templates section of the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) files while configuring cameras. By default, the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file has the RTSP camera configurations. The camera configurations for the Edge Video Analytics Microservice module are as follows:

.. note:: The ``source_parameters`` values in config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) can get overridden if the required GStreamer source plugin is specified in the template section of the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) files.

GenICam GigE or USB3 Cameras
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note:: As the Matrix Vision SDK(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/mvGenTL_Acquire-x86_64_ABI2-2.44.1.tgz``\ ) is used with an evaluation license, a watermark appears after 200 ingested images when using a non Matrix Vision camera.
   You have to purchase the Matrix Vision license to remove this watermark, use a Matrix Vision camera, or integrate the respective camera SDK (for example, the Basler camera SDK for Basler cameras).

For more information or configuration details for the GenICam GigE or the USB3 camera support, refer to the `GenICam GigE/USB3.0 Camera Support `_.

Prerequisites for Working with the GenICam Compliant Cameras
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

The following are the prerequisites for working with the GenICam compliant cameras.

.. note:: For other cameras, such as RTSP and USB (v4l2 driver compliant), revert the changes that are mentioned in this section. Refer to the following snippet of the ``ia_edge_video_analytics_microservice`` service to add the required changes in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file. After making the changes, before you build and run the services, ensure that you run the builder.py(\ ``[WORK_DIR]/IEdgeInsights/build/builder.py``\ ).

* For GenICam GigE cameras:

  Update the ``ETCD_HOST`` key with the current system's IP in the .env(\ ``[WORK_DIR]/IEdgeInsights/build/.env``\ ) file.

  .. code-block:: sh

     ETCD_HOST=

  You need to add the ``root`` user and ``network_mode: host`` in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file and comment out the ``networks`` and ``ports`` sections. Make the following changes in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file.

  .. code-block:: yaml

     ia_edge_video_analytics_microservice:
       # Add root user
       user: root
       # Add network mode host
       network_mode: host
       # Make sure that the above keys are not added under the environment section, and take care of the indentation in the compose file.
       ...
       environment:
         ...
         # Add HOST_IP to no_proxy and ETCD_HOST
         no_proxy: ",${RTSP_CAMERA_IP},"
         ETCD_HOST: ${ETCD_HOST}
         ...
       # Comment out the networks section; it will throw an error when network mode host is used.
       # networks:
       #   - eii
       # Comment out the ports section as follows
       # ports:
       #   - '65114:65114'

  Configure the visualizer app's subscriber interfaces in the Multimodal Data Visualization Streaming's config.json file as follows.

  .. code-block:: json

     "interfaces": {
         "Subscribers": [
             {
                 "Name": "default",
                 "Type": "zmq_tcp",
                 "EndPoint": ":65114",
                 "PublisherAppName": "EdgeVideoAnalyticsMicroservice",
                 "Topics": [
                     "edge_video_analytics_results"
                 ]
             }
         ]
     }

  .. note:: Add ```` to the ``no_proxy`` environment variable in the Multimodal Data Visualization Streaming visualizer's ``docker-compose.yml`` file.

* For GenICam USB3.0 cameras:

  Make the following changes to add the ``root`` user in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file.

  .. code-block:: yaml

     ia_edge_video_analytics_microservice:
       # Add root user
       user: root
       ...
       environment:
         # Refer to the GenICam GigE/USB3.0 Camera Support (/4.0/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docs/generic_plugin_doc.html) to install the respective camera SDK
         # Set the GENICAM value to the respective camera/GenTL producer which needs to be used
         GENICAM: ""
         ...
.. note::

   * If the GenICam cameras do not get initialized during runtime, then run the ``docker system prune`` command on the host system. After that, remove the GenICam specific semaphore files from the ``/dev/shm/`` path of the host system. The ``docker system prune`` command removes all the stopped containers, networks that are not used (by at least one container), any dangling images, and the build cache, any of which could prevent the plugin from accessing the device.
   * If you get the ``Feature not writable`` message while working with the GenICam cameras, then reset the device using the camera software or using the reset property of the Generic Plugin. For more information, refer to the `README `_.

* Refer to the following configuration for configuring the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for GenICam GigE/USB3.0 cameras.

  .. code-block:: javascript

     //
     "source_parameters": {
         "element": "gencamsrc",
         "type": "gst"
     },
     "pipeline": "cameras",
     "pipeline_version": "camera_source",
     "publish_frame": true,
     "model_parameters": {},
     //

* Make the following changes to the templates section of the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "gencamsrc serial= pixel-format= width= height= name=source",
         " ! videoconvert",
         " ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],

Refer to the `docs/basler_doc.md `_ for more information/configuration on the Basler camera.

**Note:**

* The Generic Plugin can work only with GenICam compliant cameras and only with the gstreamer ingestor.
* The above GStreamer pipeline was tested with Basler and IDS GigE cameras.
* If ``serial`` is not provided, then the first connected camera in the device list is used.
* If ``pixel-format`` is not provided, then the default ``mono8`` pixel format is used.
* If the ``width`` and ``height`` properties are not set, then the gencamsrc plugin sets the maximum resolution supported by the camera.
* By default, the ``exposure-auto`` property is set to on. If the camera is not placed under sufficient light, then with auto exposure, ``exposure-time`` can be set to a very large value, which increases the time taken to grab a frame. This can lead to a ``No frame received`` error. Hence, it is recommended to manually set the exposure, as in the following sample pipeline, when the camera is not placed under good lighting conditions.
* ``throughput-limit`` is the bandwidth limit for streaming out data from the camera (in bytes per second). Setting this property to a higher value might result in better FPS, but make sure that the system and the application can handle the data load; otherwise, it might lead to memory bloat.

Refer to the following example pipeline that uses the above-mentioned properties:

.. code-block:: javascript

   "type": "GStreamer",
   "template": [
       "gencamsrc serial= pixel-format=ycbcr422_8 width=1920 height=1080 exposure-time=5000 exposure-mode=timed exposure-auto=off throughput-limit=100000000 name=source",
       " ! videoconvert",
       " ! video/x-raw,format=BGR",
       " ! appsink name=destination"
   ],

* While using the Basler USB3.0 camera, ensure that the USBFS limit is set to at least 256 MB or more. You can verify this value by using the command ``cat /sys/module/usbcore/parameters/usbfs_memory_mb``. If it is less than 256 MB, then follow these `steps to increase the USBFS value `_; a quick-check sketch follows below.
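As a quick check, the following sketch shows both reading and temporarily raising the limit. The 1000 MB value is an illustrative choice, and a sysfs write of this kind lasts only until the next reboot; the linked steps describe making the change persistent:

.. code-block:: sh

   # Check the current USBFS buffer limit (in MB)
   cat /sys/module/usbcore/parameters/usbfs_memory_mb

   # Temporarily raise the limit until the next reboot (illustrative value)
   echo 1000 | sudo tee /sys/module/usbcore/parameters/usbfs_memory_mb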
RTSP Cameras
~~~~~~~~~~~~

Update the RTSP camera IP or the simulated source IP in the RTSP_CAMERA_IP variable in the .env(\ ``[WORK_DIR]/IEdgeInsights/build/.env``\ ) file. Refer to the `docs/rtsp_doc.md `_ for information/configuration on the RTSP camera.

* Refer to the following configuration for configuring the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for the RTSP camera.

  .. code-block:: javascript

     //
     "source_parameters": {
         "element": "rtspsrc",
         "type": "gst"
     },
     "pipeline": "cameras",
     "pipeline_version": "camera_source",
     "publish_frame": true,
     "model_parameters": {},
     //

* Make the following changes to the templates section of the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "rtspsrc location=\"rtsp://:@:/\" latency=100 name=source",
         " ! rtph264depay",
         " ! h264parse",
         " ! vaapih264dec",
         " ! vaapipostproc format=bgrx",
         " ! videoconvert ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],

.. note:: The RTSP URI of the physical camera depends on how it is configured using the camera software. You can use VLC Network Stream to verify the RTSP URI to confirm the RTSP source.

USB v4l2 Cameras
~~~~~~~~~~~~~~~~

For information or configuration details on the USB cameras, refer to `docs/usb_doc.md `_.

* Refer to the following configuration for configuring the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for the USB v4l2 camera.

  .. code-block:: javascript

     //
     "source_parameters": {
         "element": "v4l2src",
         "type": "gst"
     },
     "pipeline": "cameras",
     "pipeline_version": "camera_source",
     "publish_frame": true,
     "model_parameters": {},
     //

* Make the following changes to the templates section of the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file; a device-discovery sketch follows this list.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "v4l2src device=/dev/ name=source",
         " ! video/x-raw,format=YUY2",
         " ! videoconvert ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],
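To discover which device node to use for the ``device=/dev/`` property, you can enumerate the v4l2 devices on the host. A small sketch, assuming the ``v4l-utils`` package is installed:

.. code-block:: sh

   # List capture devices and their /dev/video* nodes (requires v4l-utils)
   v4l2-ctl --list-devices

   # Alternatively, list the video device nodes directly
   ls /dev/video*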
Image Ingestion
^^^^^^^^^^^^^^^

The image ingestion feature is responsible for ingesting the images coming from a directory into the EII stack for further processing. Image ingestion supports the following image formats:

* Jpg
* Jpeg
* Jpe
* Bmp
* Png

Volume mount the image directory present on the host system. To do this, provide the absolute path of the images directory in the ``docker-compose file``. Refer to the following snippet of the ``ia_edge_video_analytics_microservice`` service to add the required changes in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file. After making the changes, ensure that the builder.py(\ ``[WORK_DIR]/IEdgeInsights/build/builder.py``\ ) is executed before you build and run the services.

.. code-block:: yaml

   ia_edge_video_analytics_microservice:
     ...
     volumes:
       - "/tmp:/tmp"
       # Volume mount the udev database with read-only permission, so that the USB3 Vision interfaces can be enumerated correctly in the container
       - "/run/udev:/run/udev:ro"
       # Volume mount the directory in the host system where the images are stored onto the container directory system.
       # Eg: - "/home/directory_1/images_directory:/home/pipeline-server/img_dir"
       - ":/home/pipeline-server/img_dir"
     ...

Refer to the following snippet for configuring the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for enabling the image ingestion feature for Jpg and Png images.

.. code-block:: javascript

   {
       //
       "model_parameters": {},
       "pipeline": "cameras",
       "pipeline_version": "camera_source",
       "publish_frame": true,
       "source": "gstreamer",
       "source_parameters": {
           "element": "multifilesrc",
           "type": "gst"
       }
   }

* For JPG Images

  Refer to the following pipeline while using jpg, jpeg, and jpe images and make changes to the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "multifilesrc location=\"/home/pipeline-server/img_dir/%02d.jpg\" index=1 name=source",
         " ! jpegdec ! decodebin",
         " ! videoconvert ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],

  For example, if the images are named in the format ``frame_01``, ``frame_02``, and so on, then use the following pipeline.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "multifilesrc location=\"/home/pipeline-server/img_dir/frame_%02d.jpg\" index=1 name=source",
         " ! jpegdec ! decodebin",
         " ! videoconvert ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],

  .. note::

     * The images should follow a naming convention and should be named in the format characters followed by digits in sequential order, for example, ``frame_001``, ``frame_002``, ``frame_003``, and so on (a renaming sketch follows the image-format examples below).
     * Make use of the ``%d`` format specifier to specify the total number of digits present in the image filename. For example, if the images are named in the format ``frame_0001``, ``frame_0002``, then there are a total of 4 digits in the filename. Use ``%04d`` while providing the image name, that is ``%04d.jpg``, in the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file.
     * The ingestion stops if it does not find the required image name. For example, if the directory contains the images ``frame_01``, ``frame_02``, and ``frame_04``, then the ingestion stops after reading ``frame_02``, since ``frame_03`` is not present in the directory.
     * Use images having one of the following resolutions: ``720×480``, ``1280×720``, ``1920×1080``, ``3840×2160``, or ``1920×1200``. If an image with a different resolution is used, then the EdgeVideoAnalytics service might fail with a ``reshape`` error, as gstreamer does zero padding to that image.
     * Make sure that the images directory has the required read and execute permissions. If not, use the following command to add the permissions: ``sudo chmod -R 755 ``

* For PNG Images

  Refer to the following pipeline while using png images and make changes to the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "multifilesrc location=\"/home/pipeline-server/img_dir/%03d.png\" index=1 name=source",
         " ! pngdec ! decodebin",
         " ! videoconvert ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],

  .. note:: It is recommended to set the ``loop`` property of the ``multifilesrc`` element to false (``loop=FALSE``) to avoid memory leak issues.

* For BMP Images

  Refer to the following snippet for configuring the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file for enabling the image ingestion feature for bmp images.
  .. code-block:: javascript

     {
         //
         "model_parameters": {},
         "pipeline": "cameras",
         "pipeline_version": "camera_source",
         "publish_frame": true,
         "source": "gstreamer",
         "source_parameters": {
             "element": "imagesequencesrc",
             "type": "gst"
         }
     }

  Refer to the following pipeline while using bmp images and make changes to the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/cameras/camera_source/pipeline.json``\ ) file.

  .. code-block:: javascript

     "type": "GStreamer",
     "template": [
         "imagesequencesrc location=/home/pipeline-server/img_dir/%03d.bmp start-index=1 framerate=1/1",
         " ! decodebin",
         " ! videoconvert ! video/x-raw,format=BGR",
         " ! appsink name=destination"
     ],
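If the source images do not already follow the sequential naming convention described above, a small shell sketch like the following can bring them into a ``frame_%03d``-style pattern. The ``frame_`` prefix and the 3-digit width are illustrative choices; adjust them to match the pattern used in your pipeline.json:

.. code-block:: sh

   # Run inside the images directory: renames *.jpg files to
   # frame_001.jpg, frame_002.jpg, ... in shell-sorted order.
   i=1
   for f in *.jpg; do
       mv -- "$f" "$(printf 'frame_%03d.jpg' "$i")"
       i=$((i + 1))
   done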
**Path Specification for images:**

Considering the folder name as 'images' where the images are stored:

**Relative Path**\ : ``"./images:/home/pipeline-server/img_dir"`` (or) ``"${PWD}/images:/home/pipeline-server/img_dir"``

**Absolute Path**\ : ``"/home/ubuntu/images:/home/pipeline-server/img_dir"``

Integrate Python UDF with EdgeVideoAnalyticsMicroservice Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can integrate any Python UDF with EdgeVideoAnalyticsMicroservice using the volume mount method. Follow these steps to integrate the Python UDF (a minimal UDF skeleton is sketched after these steps):

#. Volume mount the Python UDF.

   You need to provide the absolute or relative path to the Python UDF and the video file in the ``docker-compose.yml`` file. Refer to the following snippet to volume mount the Python UDF using the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file.

   .. code-block:: yaml

      ia_edge_video_analytics_microservice:
        ...
        volumes:
          - ../EdgeVideoAnalyticsMicroservice/eii/pipelines:/home/pipeline-server/pipelines/
          - ../EdgeVideoAnalyticsMicroservice/resources:/home/pipeline-server/resources/
          - :/home/pipeline-server/eva_udfs/
        ...

   For example, if you want to use the safety_gear(\ ``[WORK_DIR]/IEdgeInsights/CustomUdfs/PySafetyGearAnalytics/safety_gear``\ ) UDF, then you need to volume mount this UDF. The structure of the safety_gear UDF is as shown:

   .. code-block:: sh

      safety_gear/
      |-- __init__.py
      |-- ref
      |   |-- frozen_inference_graph.bin
      |   |-- frozen_inference_graph.xml
      |   |-- frozen_inference_graph_fp16.bin
      |   `-- frozen_inference_graph_fp16.xml
      `-- safety_classifier.py

   Refer to the following snippet to volume mount the safety_gear UDF.

   .. code-block:: yaml

      ia_edge_video_analytics_microservice:
        ...
        volumes:
          - ../EdgeVideoAnalyticsMicroservice/eii/pipelines:/home/pipeline-server/pipelines/
          - ../EdgeVideoAnalyticsMicroservice/resources:/home/pipeline-server/resources/
          - /home/IEdgeInsights/CustomUdfs/PySafetyGearAnalytics/safety_gear:/home/pipeline-server/eva_udfs/safety_gear
        ...

#. Start the cvlc based RTSP stream:

   * Install VLC if it is not installed already: ``sudo apt install vlc``
   * To use the RTSP stream from cvlc, start the RTSP server using VLC with the following command:

     .. code-block:: sh

        cvlc -vvv file:// --sout '#gather:rtp{sdp=rtsp://:/}' --loop --sout-keep

   .. note:: ```` in the cvlc command can be ``live.sdp``, or it can also be omitted. But make sure that the same RTSP URI given here is used in the ingestor pipeline config.

   For example, refer to the following command to start a cvlc based RTSP stream for the safety gear video file(\ ``[WORK_DIR]/IEdgeInsights/CustomUdfs/PySafetyGearIngestion/Safety_Full_Hat_and_Vest.avi``\ ):

   .. code-block:: sh

      cvlc -vvv file:///home/IEdgeInsights/CustomUdfs/PySafetyGearIngestion/Safety_Full_Hat_and_Vest.avi --sout '#gather:rtp{sdp=rtsp://:8554/live.sdp}' --loop --sout-keep

#. Configure the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/pipeline.json``\ ) file.

   You need to update the ``templates`` section and provide the path to the video file in the pipeline.json(\ ``[WORK_DIR]/IEdgeInsights/pipeline.json``\ ). Refer to the following example to configure the pipeline.json file.

   .. code-block:: javascript

      {
          "type": "GStreamer",
          "template": [
              "rtspsrc location=\"rtsp://:/\" latency=100 name=source",
              " ! rtph264depay",
              " ! h264parse",
              " ! vaapih264dec",
              " ! vaapipostproc format=bgrx",
              " ! videoconvert ! video/x-raw,format=BGR",
              " ! udfloader name=udfloader",
              " ! appsink name=destination"
          ],
          "description": "EII UDF pipeline",
          "parameters": {
              "type": "object",
              "properties": {
                  "udfloader": {
                      "element": {
                          "name": "udfloader",
                          "property": "config",
                          "format": "json"
                      },
                      "type": "object"
                  }
              }
          }
      }

   **Note:** Make sure that the ``parameters`` tag is added while using the udfloader element.

#. Configure the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file.

   You need to configure the ``source_parameters`` section and the ``udfs`` section in the ``config.json`` file as shown below.

   .. code-block:: javascript

      {
          "source_parameters": {
              "element": "rtspsrc",
              "type": "gst"
          },
          //
          "udfs": [
              {
                  "name": "",
                  "type": "python",
                  "device": "CPU",
                  "model_xml": "",
                  "model_bin": ""
              }
          ]
          //
      }

   The following example shows the configuration for the safety_gear UDF.

   .. code-block:: javascript

      {
          "source_parameters": {
              "element": "rtspsrc",
              "type": "gst"
          },
          //
          "udfs": [
              {
                  "name": "eva_udfs.safety_gear.safety_classifier",
                  "type": "python",
                  "device": "CPU",
                  "model_xml": "./eva_udfs/safety_gear/ref/frozen_inference_graph.xml",
                  "model_bin": "./eva_udfs/safety_gear/ref/frozen_inference_graph.bin"
              }
          ]
          //
      }

   **Note:**

   * You can add custom UDFs to the eva_udfs(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eva_udfs``\ ) directory, as this path is volume mounted to the EVAM service. Refer to the above example to add a custom UDF.
   * The Geti UDF takes the path of the deployment directory as the input for deploying a project for local inference. Refer to the example below to see how the path of the deployment directory is specified in the UDF config. As mentioned in the above steps, make sure that all the required resources are volume mounted to the EVAM service.

   The following example shows the configuration for the Geti UDF.

   .. code-block:: javascript

      {
          "source_parameters": {
              "element": "rtspsrc",
              "type": "gst"
          },
          //
          "udfs": [
              {
                  "type": "python",
                  "name": "",
                  "device": "CPU",
                  "visualize": "true",
                  "deployment": ""
              }
          ]
          //
      }

   Refer to the `geti udf readme </4.0/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eva_udfs/geti_udf/README.html>`_ for more details.

After making the changes, ensure that the builder.py(\ ``[WORK_DIR]/IEdgeInsights/build/builder.py``\ ) script is executed before you build and run the services.
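For orientation, the following is a minimal sketch of what a Python UDF module can look like. It assumes the EII Python UDF convention of a class named ``Udf`` whose constructor receives the values from the UDF config (such as ``model_xml`` and ``model_bin`` above) and whose ``process`` method is called per frame, returning a ``(drop_frame, output_frame, metadata)`` tuple; treat the exact signature as an assumption and refer to the EII UDF documentation for the authoritative API. The module and class contents here are hypothetical.

.. code-block:: python

   # Hypothetical module eva_udfs/my_udf/my_classifier.py, referenced in
   # config.json as "name": "eva_udfs.my_udf.my_classifier"
   import cv2


   class Udf:
       """Minimal sketch of an EII-style Python UDF."""

       def __init__(self, model_xml, model_bin):
           # Constructor arguments mirror the keys in the udf config entry
           self.model_xml = model_xml
           self.model_bin = model_bin

       def process(self, frame):
           # Analyze the BGR frame; a real UDF would run model inference here
           gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
           metadata = {"mean_intensity": float(gray.mean())}
           # Return (drop_frame, output_frame_or_None, metadata_or_None)
           return False, None, metadata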
Running EdgeVideoAnalyticsMicroservice with multi-instance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

EdgeVideoAnalyticsMicroservice supports running multiple instances with the EII stack. You can run the following command to generate the multi-instance boilerplate config for any number of streams of the video-streaming-evam use case:

.. code-block:: sh

   python3 builder.py -f usecases/video-streaming-evam.yml -v

A sample config on how to connect **Multimodal Data Visualization** and **Multimodal Data Visualization Streaming** with **EdgeVideoAnalyticsMicroservice** is given below. Ensure that you update the configs of both the **Multimodal Data Visualization** and **Multimodal Data Visualization Streaming** services with these changes:

.. code-block:: javascript

   {
       "interfaces": {
           "Subscribers": [
               {
                   "Name": "default",
                   "Type": "zmq_tcp",
                   "EndPoint": "ia_edge_video_analytics_microservice:65114",
                   "PublisherAppName": "EdgeVideoAnalyticsMicroservice",
                   "Topics": [
                       "edge_video_analytics_results"
                   ]
               }
           ]
       }
   }

.. note:: While using the multi-instance feature, you need to update the ``config.json`` files and ``docker-compose`` files in the ``[WORK_DIR]/IEdgeInsights/build/multi_instance`` directory and the ``pipeline.json`` files present in the pipelines(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/pipelines/``\ ) directory.

Running EdgeVideoAnalyticsMicroservice with EII helm usecase
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Refer to the `README `_.

Running EdgeVideoAnalyticsMicroservice on a GPU device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

EdgeVideoAnalyticsMicroservice supports running inference only on CPU and GPU devices by accepting the device value ("CPU"|"GPU") as part of the udf object configuration in the ``udfs`` key. The device field in the UDF config of the ``udfs`` key in the EdgeVideoAnalyticsMicroservice config(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) needs to be updated accordingly.

To Run on Intel(R) Processor Graphics (GPU/iGPU)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

At runtime, use the ``root`` user permissions to run inference on a ``GPU`` device. To enable the root user at runtime in ``ia_edge_video_analytics_microservice``, add ``user: root`` in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file. Refer to the following example:

.. code-block:: yaml

   ia_edge_video_analytics_microservice:
     ...
     user: root

.. note::

   * EdgeVideoAnalyticsMicroservice does not support running inference on VPU devices - ``MYRIAD (NCS2)`` and ``HDDL``.
   * If you get a ``Failed to create plugin for device GPU/ clGetPlatformIDs error`` message, then check whether the host system supports the GPU device. Install the required drivers from `OpenVINO-steps-for-GPU `_. Certain platforms like TGL can have compatibility issues with the Ubuntu kernel version. Ensure that the compatible kernel version is installed.
   * EdgeVideoAnalyticsMicroservice by default runs the video ingestion and analytics pipeline in a single microservice, for which the interface connection configuration is as below.

     .. code-block:: json

        "Publishers": [
            {
                "Name": "default",
                "Type": "zmq_tcp",
                "EndPoint": "0.0.0.0:65114",
                "Topics": [
                    "edge_video_analytics_results"
                ],
                "AllowedClients": [
                    "*"
                ]
            }
        ]

   * EdgeVideoAnalyticsMicroservice can be configured to run only with the video ingestion capability by just adding the below Publishers section (the publisher config remains the same as above):

     .. code-block:: json

        "Publishers": [
            {
                "Name": "default",
                "Type": "zmq_tcp",
                "EndPoint": "0.0.0.0:65114",
                "Topics": [
                    "camera1_stream"
                ],
                "AllowedClients": [
                    "*"
                ]
            }
        ]

   * EdgeVideoAnalyticsMicroservice can be configured to run only with the video analytics capability by just adding the below Subscribers section.
     .. code-block:: json

        "Subscribers": [
            {
                "Name": "default",
                "Type": "zmq_ipc",
                "EndPoint": "/EII/sockets",
                "PublisherAppName": "VideoIngestion",
                "Topics": [
                    "camera1_stream_results"
                ],
                "zmq_recv_hwm": 50
            }
        ]

     The ``source`` parameter in the config.json needs to be updated to the EII Message Bus, since EdgeVideoAnalyticsMicroservice, when running with the video analytics capability, needs to get its data from the EII Message Bus.

     .. code-block:: json

        {
            "config": {
                "source": "msgbus"
            }
        }

Use Human Pose Estimation UDF with EdgeVideoAnalyticsMicroservice
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To use the Human Pose Estimation(\ ``[WORK_DIR]/IEdgeInsights/``\ ) UDF, make the following changes to the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) and docker-compose(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) files.

#. Configure the config.json(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json``\ ) file and make changes to the ``pipeline`` and ``pipeline_version``.

   .. code-block:: json

      "pipeline": "object_detection",
      "pipeline_version": "human_pose_estimation",
      "publish_frame": true,

#. Volume mount the Human Pose Estimation UDF in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file.

   .. code-block:: yaml

      volumes:
        - ../EdgeVideoAnalyticsMicroservice/resources:/home/pipeline-server/resources/
        - ../EdgeVideoAnalyticsMicroservice/models_list/human_pose_estimation:/home/pipeline-server/models/human_pose_estimation

.. note:: Use the following volume mount path in the docker-compose.yml(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml``\ ) file while independently deploying EdgeVideoAnalyticsMicroservice: ``../models_list/human_pose_estimation:/home/pipeline-server/models/human_pose_estimation``