Contents
Edge Video Analytics Microservice for EII Overview
Steps to Independently Build and Deploy EdgeVideoAnalyticsMicroservice Service
Integrate Python UDF with EdgeVideoAnalyticsMicroservice Service
Running EdgeVideoAnalyticsMicroservice with EII helm usecase
Use Human Pose Estimation UDF with EdgeVideoAnalyticsMicroservice
Edge Video Analytics Microservice for Edge Insights for Industrial (EII) Overview
The Edge Video Analytics Microservice (EVAM) combines video ingestion and analytics capabilities provided by the Edge Insights for Industrial (EII) visual ingestion and analytics modules. This directory provides the Intel® Deep Learning Streamer (Intel® DL Streamer) pipelines to perform object detection on an input URI source and send the ingested frames and inference results using the MsgBus Publisher. It also provides a Docker compose and config file to use EVAM with the Edge Insights software stack.
Prerequisites
As a prerequisite for using EVAM in EII mode, download the EII 4.0.0 package from ESH and complete the following steps:
EII, when downloaded from ESH, is available at the installed location.
cd [EII installed location]/IEdgeInsights
Complete the prerequisite for provisioning the EII stack by referring to the README.md.
Download the required model files to be used for the pipeline mentioned in the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file by completing step 2 to step 4 as mentioned in the README.
Note: The model files are large and hence they are not part of the repo.
Run the following commands to set the environment, build the ia_configmgr_agent container, and copy the models to the required directory. Go to the build directory:
cd [WORK_DIR]/IEdgeInsights/build
Configure the visualizer app’s subscriber interfaces. Example: add the following interfaces key value in the Visualizer/multimodal-data-visualization-streaming/eii/config.json and Visualizer/multimodal-data-visualization/eii/config.json files.
"interfaces": { "Subscribers": [ { "Name": "default", "Type": "zmq_tcp", "zmq_recv_hwm": 50, "EndPoint": "ia_edge_video_analytics_microservice:65114", "PublisherAppName": "EdgeVideoAnalyticsMicroservice", "Topics": [ "edge_video_analytics_results" ] } ] }
Execute the builder.py script
python3 builder.py -f usecases/video-streaming-evam.yml
Create the model directory required by the service:
sudo mkdir -p /opt/intel/eii/models/
Copy the downloaded model files to /opt/intel/eii
sudo cp -r [downloaded_model_directory]/models /opt/intel/eii/
Run the Containers
The run.sh([WORK_DIR]/IEdgeInsights/run.sh) script is used to bring up the ia_configmgr_agent service first and then the rest of the EII services.
# run.sh has the following options
#   --timeout | -t : Optional TIMEOUT argument in seconds. If not provided, the script waits
#                    until the "Provisioning is Done" message shows up in the `ia_configmgr_agent`
#                    logs before bringing up the rest of the EII stack.
#   --build   | -b : Build all the containers
#   --clean   | -c : Clean build
#   --restart | -r : Restart all the containers
# ./run.sh --timeout=<timeout-in-seconds>|-t=<timeout-in-seconds> --build|-b or --clean|-c or --restart|-r
For the LEM network lease mode, complete the required prerequisites to set up and run the license agent on the host system.
The ETCD watch capability is enabled for the Edge Video Analytics Microservice, and the service restarts when config/interface changes are made via the EtcdUI interface.
Configuration
See the edge-video-analytics-microservice/eii/config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file for the configuration of EVAM. The default configuration starts the object_detection demo for EII.
The config file is divided into two sections as follows:
Config
The following table describes the attributes that are supported in the config section.
Parameter | Description
---|---
`cert_type` | Type of Intel® Edge Insights System certs to be created. This should be `"zmq"` or `"pem"`.
`source` | Source of the frames. This should be `"gstreamer"`, `"msgbus"`, or `"ingestor"`.
`pipeline` | The name of the DL Streamer pipeline to use. This should correspond to a directory in the pipelines directory.
`pipeline_version` | The version of the pipeline to use. This is typically a subdirectory of a pipeline in the pipelines directory.
`tags` | Additional information to store with the frame metadata, for example, the camera location/orientation of the video input.
`publish_frame` | The Boolean flag for whether to publish the raw frame.
`encoding` | Encodes the image in jpeg or png format.
`mqtt_publisher` | Publishes the frame/metadata to an MQTT broker.
`convert_metadata_to_dcaas_format` | Converts inference results to the DCaaS standardized format.
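Taken together, a minimal illustrative sketch of the config section is shown below; the parameter values are placeholders rather than the shipped defaults, so cross-check them against the config.json referenced above:

"config": {
    "cert_type": ["zmq"],
    "source": "gstreamer",
    "pipeline": "object_detection",
    "pipeline_version": "person_vehicle_bike_detection",
    "publish_frame": false,
    "encoding": {
        "type": "jpeg",
        "level": 95
    }
}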
Note
For the jpeg encoding type, level is the quality from 0 to 100. A higher value means better quality.

For the png encoding type, level is the compression level from 0 to 9. A higher value means a smaller size and a longer compression time.

Encoding elements can be used in the pipeline as an alternative to specifying the encoding parameters. Refer to the below pipeline for using jpegenc in config.json.

"pipeline": "multifilesrc loop=FALSE stop-index=0 location=/home/pipeline-server/resources/pcb_d2000.avi name=source ! h264parse ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! jpegenc ! appsink name=destination",
Refer to the below pipeline for using pngenc in config.json.

"pipeline": "multifilesrc loop=FALSE stop-index=0 location=/home/pipeline-server/resources/pcb_d2000.avi name=source ! h264parse ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! videoconvert ! pngenc ! appsink name=destination",
convert_metadata_to_dcaas_format, when set to true in config.json, converts the metadata to a DCaaS compatible format. Currently this has been tested for the gvadetect element used in the pipeline. Refer to the below pipeline for an example:

"pipeline": "multifilesrc loop=TRUE stop-index=0 location=/home/pipeline-server/resources/classroom.avi name=source ! h264parse ! decodebin ! queue max-size-buffers=10 ! videoconvert ! video/x-raw,format=BGR ! gvadetect model=/home/pipeline-server/models/object_detection/person/FP32/person-detection-retail-0013.xml model-proc=/home/pipeline-server/models/object_detection/person/person-detection-retail-0013.json ! jpegenc ! appsink name=destination",
For MQTT publishing, refer to the document here for details on prerequisites, configuration, filtering, and error handling. MQTT publishing can be enabled along with EII Message Bus publishing.
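As a purely hypothetical illustration (the key names below are assumptions, not the documented schema; consult the MQTT document referenced above for the actual format), an mqtt_publisher entry might look like:

"mqtt_publisher": {
    // Hypothetical keys for illustration only; see the MQTT doc for the real schema
    "host": "<mqtt-broker-address>",
    "port": 1883
}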
Interfaces
Currently, in the EII mode, EVAM supports launching a single pipeline and publishing on a single topic. This implies that in the configuration file ("config.json"), the single JSON object in the Publishers list is where the configuration for the published data resides. For more details on the structure, refer to the EII documentation.
EVAM also supports subscribing and publishing messages or frames using the Message Bus. The endpoint details of the EII service you need to subscribe from must be provided in the Subscribers section of the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file, and the endpoints you need to publish to must be provided in the Publishers section of the same file.
To enable the injection of frames obtained from the Message Bus into the GStreamer pipeline, make the following changes:

Set the source parameter in the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to msgbus. Refer to the following code snippet:

"config": {
    "source": "msgbus"
}
Set the pipeline to use appsrc as the source instead of uridecodebin. Refer to the following code snippet:

{
    "pipeline": "appsrc name=source ! rawvideoparse ! appsink name=destination"
}
Steps to Independently Build and Deploy EdgeVideoAnalyticsMicroservice Service
Note
For running two or more microservices, we recommend the use case-driven approach for building and deploying, as described in Generate Consolidated Files for a Subset of Edge Insights for Industrial Services.
Steps to Independently Build EdgeVideoAnalyticsMicroservice Service
Note
When switching between independent deployment of the service with and without the Config Manager Agent service dependency, you can run into docker compose build issues caused by the existing Certificates folder. As a workaround, run sudo rm -rf Certificates to proceed with docker compose build.
To independently build EdgeVideoAnalyticsMicroservice service, complete the following steps:
The downloaded source code should have a directory named EdgeVideoAnalyticsMicroservice/eii:
cd IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii
Copy the IEdgeInsights/build/.env file to the current folder using the following command:
cp ../../build/.env .
NOTE: Update the HOST_IP and ETCD_HOST variables in the .env file with your system IP.
Source the .env file using the following command:
set -a && source .env && set +a
Independently build the service:
docker compose build
Steps to Independently Deploy EdgeVideoAnalyticsMicroservice Service
You can deploy the EdgeVideoAnalyticsMicroservice service in any of the following two ways:
Deploy EdgeVideoAnalyticsMicroservice Service without Config Manager Agent Dependency
Run the following commands to deploy EdgeVideoAnalyticsMicroservice service without Config Manager Agent dependency:
# Enter the eii directory
cd IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii
Copy the IEdgeInsights/build/.env file using the following command in the current folder, if not already present.
cp ../../build/.env .
Note: Ensure that docker ps is clean and that docker network ls does not show any EII bridge networks.
Update the .env file as follows:
1. Set the HOST_IP and ETCD_HOST variables to your system IP.
2. Set `READ_CONFIG_FROM_FILE_ENV` to `true` and `DEV_MODE` to `true`.
Source the .env using the following command:
set -a && source .env && set +a
# Run the service
docker compose -f docker-compose.yml -f docker-compose-dev.override.yml up -d
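Assuming the service name used elsewhere in this document, a quick sanity check after bringing the service up might be:

# Confirm that the container is running, then follow its logs
docker ps --filter "name=ia_edge_video_analytics_microservice"
docker logs -f ia_edge_video_analytics_microservice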
Deploy EdgeVideoAnalyticsMicroservice Service with Config Manager Agent Dependency
Run the following commands to deploy EdgeVideoAnalyticsMicroservice service with Config Manager Agent dependency:
Note: Ensure that the Config Manager Agent image is present on the system. If it is not, build the Config Manager Agent locally before independently deploying the service with the Config Manager Agent dependency.
# Enter the eii directory
cd IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii
Copy the IEdgeInsights/build/.env file using the following command in the current folder, if not already present.
cp ../../build/.env .
Note: Ensure that docker ps is clean and that docker network ls does not show any EII bridge networks.
Update the .env file as follows:
1. Set the HOST_IP and ETCD_HOST variables to your system IP.
2. Set `READ_CONFIG_FROM_FILE_ENV` to `false`.
Copy the docker-compose.yml from IEdgeInsights/ConfigMgrAgent as docker-compose.override.yml:
cp ../../ConfigMgrAgent/docker-compose.yml docker-compose.override.yml
Copy the builder.py with standalone mode changes from the IEdgeInsights/build directory:
cp ../../build/builder.py .
Run the builder.py in standalone mode; this generates eii_config.json and updates docker-compose.override.yml:
python3 builder.py -s true
Build the service (this step is optional if the service was already built in the independent build steps above):
docker compose build
Run the service.
Note: Source the .env file using the command set -a && source .env && set +a before running the below command.
docker compose -f docker-compose.yml -f docker-compose-dev.override.yml -f docker-compose.override.yml up -d
Camera Configurations
You need to make changes to the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json
) and the templates section of the pipeline.json([WORK_DIR]/IEdgeInsights/pipeline.json
) files while configuring cameras.
By default the pipeline.json([WORK_DIR]/IEdgeInsights/pipeline.json
) file has the RTSP camera configurations.
The camera configurations for the Edge Video Analytics Microservice module are as follows:
GenICam GigE or USB3 Cameras
Note
As the Matrix Vision SDK([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/mvGenTL_Acquire-x86_64_ABI2-2.44.1.tgz) is used with an evaluation license, a watermark starts appearing after 200 ingested images when a non-Matrix Vision camera is used. To remove the watermark, purchase a Matrix Vision license, use a Matrix Vision camera, or integrate the respective camera SDK (for example, the Basler camera SDK for Basler cameras).
For more information or configuration details for the GenICam GigE or the USB3 camera support, refer to the GenICam GigE/USB3.0 Camera Support.
Prerequisites for Working with the GenICam Compliant Cameras
The following are the prerequisites for working with the GenICam compliant cameras.
Note
For other cameras, such as RTSP and USB (v4l2 driver compliant) cameras, revert the changes that are mentioned in this section. Refer to the following snippet of the ia_edge_video_analytics_microservice service to add the required changes in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file. After making the changes, before you build and run the services, ensure that the builder.py([WORK_DIR]/IEdgeInsights/build/builder.py) is run.
For GenICam GigE cameras:
Update the ETCD_HOST key with the current system’s IP in the .env([WORK_DIR]/IEdgeInsights/build/.env) file.
ETCD_HOST=<HOST_IP>
Add network_mode: host in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file and comment/remove the networks and ports sections.
Make the following changes in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml
) file.
ia_edge_video_analytics_microservice:
  # Add network mode host
  network_mode: host
  # Make sure network_mode is not added under the environment section, and take care with the indentation in the compose file.
  ...
  environment:
    ...
    # Add HOST_IP to no_proxy and ETCD_HOST
    no_proxy: "<eii_no_proxy>,${RTSP_CAMERA_IP},<HOST_IP>"
    ETCD_HOST: ${ETCD_HOST}
    ...
  # Comment the networks section; it throws an error when network_mode: host is used.
  # networks:
  #   - eii
  # Comment the ports section as follows:
  # ports:
  #   - '65114:65114'
Configure visualizer app’s subscriber interfaces in the Multimodal Data Visualization Streaming’s config.json file as follows.
"interfaces": {
"Subscribers": [
{
"Name": "default",
"Type": "zmq_tcp",
"EndPoint": "<HOST_IP>:65114",
"PublisherAppName": "EdgeVideoAnalyticsMicroservice",
"Topics": [
"edge_video_analytics_results"
]
}
]
}
Note
Add <HOST_IP>
to the no_proxy
environment variable in the Multimodal Data Visualization Streaming visualizer’s docker-compose.yml
file.
For GenICam USB3.0 cameras:
ia_edge_video_analytics_microservice:
  ...
  environment:
    # Refer to GenICam GigE/USB3.0 Camera Support (/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docs/generic_plugin_doc.html) to install the respective camera SDK
    # Set GENICAM to the respective camera/GenTL producer that needs to be used
    GENICAM: "<CAMERA/GenTL>"
    ...
Note
If the GenICam cameras do not get initialized during runtime, run the docker system prune command on the host system. After that, remove the GenICam-specific semaphore files from the /dev/shm/ path of the host system. The docker system prune command removes all stopped containers, networks that are not used (by at least one container), any dangling images, and the build cache, any of which could prevent the plugin from accessing the device.

If you get the Feature not writable message while working with the GenICam cameras, reset the device using the camera software or the reset property of the Generic Plugin. For more information, refer to the README.
Refer to the following configuration for configuring the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file for GenICam GigE/USB3.0 cameras:

"pipeline": "gencamsrc serial=<DEVICE_SERIAL_NUMBER> pixel-format=<PIXEL_FORMAT> name=source ! videoconvert ! video/x-raw,format=BGR ! appsink name=destination"
Refer to docs/basler_doc.md for more information on Basler camera configuration.
Note:

- The Generic Plugin can work only with GenICam compliant cameras and only with the gstreamer ingestor.
- The above gstreamer pipeline was tested with Basler and IDS GigE cameras.
- If serial is not provided, then the first connected camera in the device list is used.
- If pixel-format is not provided, then the default mono8 pixel format is used.
- If the width and height properties are not set, then the gencamsrc plugin sets the maximum resolution supported by the camera.
- The camera field of view getting cropped is expected behavior when a lower resolution is set using the height or width parameter. Setting these parameters creates an image ROI that originates from the top-left corner of the sensor. Refer to https://docs.baslerweb.com/image-roi for more details.
- Using a higher resolution might have other side effects, such as lag in the pipeline, when the model is compute intensive.
- By default, the exposure-auto property is set to on. If the camera is not placed under sufficient light, then with auto exposure, exposure-time can be set to a very large value, which increases the time taken to grab a frame. This can lead to a No frame received error. Hence, it is recommended to set the exposure manually, as in the following sample pipeline, when the camera is not placed under good lighting conditions.
- throughput-limit is the bandwidth limit for streaming data out of the camera (in bytes per second). Setting this property to a higher value might result in better FPS, but make sure that the system and the application can handle the data load; otherwise, it might lead to memory bloat. Refer to the below example pipeline that uses the above-mentioned properties:

"pipeline": "gencamsrc serial=<DEVICE_SERIAL_NUMBER> pixel-format=ycbcr422_8 width=1920 height=1080 exposure-time=5000 exposure-mode=timed exposure-auto=off throughput-limit=300000000 name=source ! videoconvert ! video/x-raw,format=BGR ! appsink name=destination"

- By default, USB-FS on a Linux system allows only a 16MB buffer limit, which might not be sufficient to work with high-framerate, high-resolution cameras and multiple-camera setups. In such scenarios, configure USB-FS to increase the buffer memory limit for USB3 Vision cameras. While using a Basler USB3.0 camera, ensure that the USB-FS limit is set to 1000MB. You can verify this value with the command cat /sys/module/usbcore/parameters/usbfs_memory_mb. If it is less than 256MB, then follow the steps sketched below to increase the USB-FS value.
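A commonly used sequence for this, assuming a GRUB-based Ubuntu host, is sketched below; verify the exact steps against your distribution and camera vendor documentation:

# Temporary (resets on reboot): raise the USB-FS buffer limit to 1000 MB
sudo sh -c 'echo 1000 > /sys/module/usbcore/parameters/usbfs_memory_mb'

# Persistent: append usbcore.usbfs_memory_mb=1000 to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then rebuild the GRUB config and reboot
sudo update-grub
sudo reboot

# Verify the new limit
cat /sys/module/usbcore/parameters/usbfs_memory_mb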
Xiris Cameras
Prerequisites for Working with Xiris Camera
The following are the prerequisites for working with Xiris cameras.
Note
For other cameras, such as RTSP and USB (v4l2 driver compliant) cameras, revert the changes that are mentioned in this section. Refer to the following snippet of the ia_edge_video_analytics_microservice service to add the required changes in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file. After making the changes, before you build and run the services, ensure that the builder.py([WORK_DIR]/IEdgeInsights/build/builder.py) is run.
For Xiris Camera:
Update the ETCD_HOST key with the current system’s IP in the .env([WORK_DIR]/IEdgeInsights/build/.env) file.
ETCD_HOST=<HOST_IP>
Add network_mode: host in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file and comment/remove the networks and ports sections.
Make the following changes in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml
) file.
ia_edge_video_analytics_microservice:
  # Add network mode host
  network_mode: host
  # Make sure network_mode is not added under the environment section, and take care with the indentation in the compose file.
  ...
  environment:
    ...
    # Add HOST_IP to no_proxy and ETCD_HOST
    no_proxy: "<eii_no_proxy>,${RTSP_CAMERA_IP},<HOST_IP>"
    ETCD_HOST: ${ETCD_HOST}
    ...
  # Comment the networks section; it throws an error when network_mode: host is used.
  # networks:
  #   - eii
  # Comment the ports section as follows:
  # ports:
  #   - '65114:65114'
Set the ip_address parameter in the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to the IP address of the camera.

Set the frame_rate parameter in the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to the desired ingestion frame rate from the camera.

Set the pixel_depth parameter in the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to the required pixel depth (in bits). It can take one of the following four values: 8, 12, 14, or 16. Note that the pixel_depth parameter has no bearing on the monochrome camera XVC-1000 that has been tested. Refer to the following code snippet:

"config": {
    "xiris": {
        "ip_address": "<set-xiris-camera-IP-here>",
        "frame_rate": 10,
        "pixel_depth": 8
    }
}

Set the source parameter in the config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to ingestor. Refer to the following code snippet:

"config": {
    "source": "ingestor"
}
Set the pipeline to appsrc as the source, and update the rawvideoparse element with the height, width, and format of the Xiris frame. Refer to the following code snippet:

{
    "pipeline": "appsrc name=source ! rawvideoparse height=1024 width=1280 format=gray8 ! videoconvert ! video/x-raw,format=BGR ! appsink name=destination"
}
Note:

- The Xiris camera model tested is the XVC-1000 (monochrome).
- Only PixelDepth=8 (the camera outputs 8 bits per pixel) is supported. In case of any frame rendering issues, check the PixelDepth value in the logs and make sure it is set to 8.
- If a wrong or invalid IP is provided for connecting to the Xiris camera using the XirisCameraIP env variable, ingestion will not work and no error logs are printed. Make sure the correct IP is provided for ingestion to work.
- To find the IP address of the camera, use the GUI tool provided by Xiris (currently available on Windows), or run the LinuxSample app under the weldsdk installation directory on the host system to list the available cameras.
RTSP Cameras
Update the RTSP camera IP or the simulated source IP in the RTSP_CAMERA_IP variable in the .env([WORK_DIR]/IEdgeInsights/build/.env) file. Refer to docs/rtsp_doc.md for information on RTSP camera configuration.
Refer to the following configuration for configuring the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file for an RTSP camera.
"pipeline": "rtspsrc location=\"rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>\" latency=100 name=source ! rtph264depay ! h264parse ! vaapih264dec ! vaapipostproc format=bgrx ! videoconvert ! video/x-raw,format=BGR ! appsink name=destination"
Note
The RTSP URI of the physical camera depends on how it is configured using the camera software. You can use VLC Network Stream to verify the RTSP URI to confirm the RTSP source.
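As a quick connectivity check (assuming GStreamer tools are installed on the host), the same URI can be probed from the command line before wiring it into the pipeline; success indicates that the URI and credentials work:

# Probe the RTSP source; received frames are discarded by fakesink
gst-launch-1.0 rtspsrc location="rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>" ! fakesink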
USB v4l2 Cameras
For information or configuration details on the USB cameras, refer to docs/usb_doc.md.
Refer to the following configuration for configuring the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file for a USB v4l2 camera.
"pipeline": "v4l2src device=/dev/<DEVICE_VIDEO_NODE> name=source ! video/x-raw,format=YUY2 ! videoconvert ! video/x-raw,format=BGR ! appsink name=destination"
Image Ingestion
The Image ingestion feature is responsible for ingesting the images coming from a directory into the EII stack for further processing. Image ingestion supports the following image formats:
Jpg
Jpeg
Jpe
Bmp
Png
Volume mount the image directory present on the host system. To do this, provide the absolute path of the images directory in the docker-compose file.
Refer to the following snippet of the ia_edge_video_analytics_microservice service to add the required changes in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file. After making the changes, ensure that the builder.py([WORK_DIR]/IEdgeInsights/build/builder.py) is executed before you build and run the services.
ia_edge_video_analytics_microservice:
  ...
  volumes:
    - "/tmp:/tmp"
    # Volume mount the udev database with read-only permission, so the USB3 Vision interfaces can be enumerated correctly in the container
    - "/run/udev:/run/udev:ro"
    # Volume mount the directory on the host system where the images are stored onto the container file system.
    # Eg: - "/home/directory_1/images_directory:/home/pipeline-server/img_dir"
    - "<relative or absolute path to images directory>:/home/pipeline-server/img_dir"
  ...
Refer to the following snippet for configuring the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to enable the image ingestion feature for jpg images.
"pipeline": "multifilesrc location=\"/home/pipeline-server/img_dir/<image_filename>%02d.jpg\" index=1 name=source ! jpegdec ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! appsink name=destination",
For example: if the images are named in the format frame_01, frame_02, and so on, then use the following pipeline.
"pipeline": "multifilesrc location=\"/home/pipeline-server/img_dir/frame_%02d.jpg\" index=1 name=source ! jpegdec ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! appsink name=destination"
Note
- The images should follow a naming convention: characters followed by digits in sequential order, for example, frame_001, frame_002, frame_003, and so on.
- Use the %d format specifier to specify the total number of digits in the image filename. For example, if the images are named in the format frame_0001, frame_0002, then the filename has a total of 4 digits. Use %04d when providing the image name <image_filename>%04d.jpg in the pipeline.json([WORK_DIR]/IEdgeInsights/pipeline.json) file.
- The ingestion stops if it does not find the next expected image name. For example, if the directory contains the images frame_01, frame_02, and frame_04, then the ingestion stops after reading frame_02, since frame_03 is not present in the directory.
- Use images having one of the following resolutions: 720×480, 1280×720, 1920×1080, 3840×2160, or 1920×1200. If an image with a different resolution is used, the EdgeVideoAnalytics service might fail with a reshape error, as gstreamer does zero padding to that image.
- Make sure that the images directory has the required read and execute permissions. If not, use the following command to add the permissions:
sudo chmod -R 755 <path to images directory>
For PNG Images
Refer to the following pipeline while using png images, and make the changes to the pipeline.json([WORK_DIR]/IEdgeInsights/pipeline.json) file:

"pipeline": "multifilesrc location=\"/home/pipeline-server/img_dir/<image_filename>%03d.png\" index=1 name=source ! pngdec ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! appsink name=destination"
Note
It is recommended to set the loop property of the multifilesrc element to false (loop=FALSE) to avoid memory leak issues.
For BMP Images
Refer to the following snippet for configuring the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to enable the image ingestion feature for bmp images.
"pipeline": "imagesequencesrc location=/home/pipeline-server/img_dir/<image_filename>%03d.bmp start-index=1 framerate=1/1 ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! appsink name=destination"
Path Specification for images:
Assuming the images are stored in a folder named ‘images’:
Relative Path: "./images:/home/pipeline-server/img_dir"
(or) "${PWD}/images:/home/pipeline-server/img_dir"
Absolute Path: "/home/ubuntu/images:/home/pipeline-server/img_dir"
Video Ingestion
Video ingestion supports reading video files from a directory.
Volume mount the videos directory present on the host system. To do this, provide the absolute path of the directory in docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) as shown below.

ia_edge_video_analytics_microservice:
  ...
  volumes:
    - "/tmp:/tmp"
    # Volume mount the udev database with read-only permission, so the USB3 Vision interfaces can be enumerated correctly in the container
    - "/run/udev:/run/udev:ro"
    # Volume mount the directory on the host system where the videos are stored onto the container file system.
    # Eg: - "/home/videos_dir:/home/pipeline-server/videos_dir"
    - "<relative or absolute path to videos directory>:/home/pipeline-server/videos_dir"
  ...
Modify the pipeline in the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file. For reading videos, for example, video_000.avi, video_001.avi, video_002.avi, from a directory, use the following pipeline:

"pipeline": "multifilesrc location=/home/pipeline-server/videos_dir/video_%03d.avi name=source ! h264parse ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! appsink name=destination"
For reading videos, for example, video01.mp4, video02.mp4, from a directory use the following pipeline.
"pipeline": "multifilesrc start-index=1 location=/home/pipeline-server/videos_dir/video%02d.mp4 name=source ! decodebin ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! appsink name=destination"
Refer to multifilesrc_doc.md for more details on naming convention for the video files and multifilesrc configuration.
After making the changes, ensure that the builder.py([WORK_DIR]/IEdgeInsights/build/builder.py) is executed before building and running the services.
Integrate Python UDF with EdgeVideoAnalyticsMicroservice Service
You can integrate any Python UDF with EdgeVideoAnalyticsMicroservice using the volume mount method. Follow these steps to integrate a Python UDF:

- Volume mount the Python UDF.

You need to provide the absolute or relative path to the Python UDF and the video file in the docker-compose.yml file. Refer to the following snippet to volume mount the Python UDF using the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file.

ia_edge_video_analytics_microservice:
  ...
  volumes:
    - ../EdgeVideoAnalyticsMicroservice/eii/pipelines:/home/pipeline-server/pipelines/
    - ../EdgeVideoAnalyticsMicroservice/resources:/home/pipeline-server/resources/
    - <absolute or relative path to the python_udf>:/home/pipeline-server/udfs/python/<python_udf>
  ...
Use an appropriate gstreamer pipeline configuration to use the udfloader element with the required ingestion source. For example, refer to the following gstreamer pipeline for configuring the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file for an RTSP camera:

"pipeline": "rtspsrc location=\"rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>\" latency=100 name=source ! rtph264depay ! h264parse ! vaapih264dec ! vaapipostproc format=bgrx ! videoconvert ! video/x-raw,format=BGR ! udfloader name=udfloader ! jpegenc ! appsink name=destination"
Configure the config.json([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) file to update the UDF parameters. You need to configure the udfs section in the config.json file as shown:

{
    // ...
    "udfs": [
        {
            "name": "<path to python udf>",
            "type": "python",
            "device": "CPU",
            "model_xml": "<path to model xml file>",
            "model_bin": "<path to model bin file>"
        }
    ]
    // ...
}

The following example shows the configuration for the Geti UDF:

{
    // ...
    "udfs": [
        {
            "type": "python",
            "name": "<path to python geti udf>",
            "device": "CPU",
            "visualize": "false",
            "deployment": "<path to geti deployment directory>",
            "metadata_converter": "<custom converter to format metadata>"
        }
    ]
    // ...
}

Refer to the geti udf readme (/IEdgeInsights/EdgeVideoAnalyticsMicroservice/user_scripts/udfs/python/geti_udf/README.html) for more details.

The following example shows the configuration for the anomalib UDF:

"udfs": [
    {
        "device": "CPU",
        "task": "classification",
        "inferencer": "openvino_nomask",
        "model_metadata": "/home/pipeline-server/udfs/python/anomalib_udf/stfpm/metadata.json",
        "name": "python.anomalib_udf.inference",
        "type": "python",
        "weights": "/home/pipeline-server/udfs/python/anomalib_udf/stfpm/model.onnx"
    }
]

Refer to the anomalib udf readme (/IEdgeInsights/EdgeVideoAnalyticsMicroservice/user_scripts/udfs/python/anomalib_udf/README.html) for more details.

The following example shows the configuration for the add label UDF:

"udfs": [
    {
        "type": "python",
        "name": "python.add_label",
        "class_name": "defect",
        "class_value": "true"
    }
]

After making the changes, ensure that the builder.py([WORK_DIR]/IEdgeInsights/build/builder.py) script is executed before you build and run the services.

Note:
- You can add custom UDFs to the udfs/python directory and volume mount them to the EVAM service. Refer to the above example to add a custom UDF.
- The Geti UDF takes the path of the deployment directory as the input for deploying a project for local inference. Refer to the above example to see how the path of the deployment directory is specified in the UDF config. As mentioned in the above steps, make sure all the required resources are volume mounted to the EVAM service.
- Using a compute-intensive model might increase the overall CPU% utilization of the system (expected behavior). To reduce the CPU% utilization, the inference workload can be offloaded to a GPU device, provided the system supports running inference on GPU. Reducing the ingestion framerate can also reduce the overall CPU% utilization.
- Using a compute-intensive model can also increase the inference time. Higher inference and pre-processing/post-processing time can lead to lag in the ingestion pipeline if the analytics throughput is less than the ingestion framerate. In such cases, the ingestion framerate can be reduced to avoid frames piling up.
- Creating multiple instances and running multiple pipelines with a compute-intensive model will further increase the inference time and the overall CPU%. Either the ingestion framerate must be reduced, or the model must be further optimized.
- If the custom UDF/model name contains a whitespace character, UDF loading will fail and throw a ModuleNotFound error. Make sure that the UDF name and the model/UDF directory name do not contain any whitespace characters.
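For orientation, below is a minimal sketch of a custom Python UDF following the EII convention of a Udf class exposing a process method; treat the exact signature and return convention as assumptions to verify against the UDF writing guide:

# dummy_udf.py - hedged skeleton of an EII-style Python UDF (volume mount it under udfs/python)
import logging

class Udf:
    def __init__(self):
        # Extra keys from the udf config entry are passed as constructor arguments
        self.log = logging.getLogger("DUMMY_UDF")

    def process(self, frame, metadata):
        # frame: numpy array in BGR layout (per the videoconvert caps used in the pipelines above)
        # metadata: dict that is published along with the frame
        metadata["dummy"] = "processed"
        # Return (drop_frame, modified_frame_or_None, modified_metadata_or_None)
        return False, None, metadata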
Running EdgeVideoAnalyticsMicroservice with multi-instance
EdgeVideoAnalyticsMicroservice supports running multiple instances with the EII stack. Run the following command to generate the multi-instance boilerplate config for any number of streams of the video-streaming-evam use case:
python3 builder.py -f usecases/video-streaming-evam.yml -v <number_of_streams_required>
A sample config showing how to connect Multimodal Data Visualization and Multimodal Data Visualization Streaming with EdgeVideoAnalyticsMicroservice is given below. Ensure that you update the configs of both the Multimodal Data Visualization and the Multimodal Data Visualization Streaming services with these changes:
{
"interfaces": {
"Subscribers": [
{
"Name": "default",
"Type": "zmq_tcp",
"EndPoint": "ia_edge_video_analytics_microservice:65114",
"PublisherAppName": "EdgeVideoAnalyticsMicroservice",
"Topics": [
"edge_video_analytics_results"
]
}
]
}
}
Note
While using the multi-instance feature, you need to update the config.json files and docker-compose files in the [WORK_DIR]/IEdgeInsights/build/multi_instance directory, as well as the pipeline.json files present in the pipelines([WORK_DIR]/IEdgeInsights/) directory.
Running EdgeVideoAnalyticsMicroservice with EII helm usecase
Refer to the README.
Running EdgeVideoAnalyticsMicroservice on a GPU device
EdgeVideoAnalyticsMicroservice supports running inference only on CPU and GPU devices, selected through the device value ("CPU"|"GPU") that is part of the udf object configuration in the udfs key. The device field in the UDF config of the udfs key in the EdgeVideoAnalyticsMicroservice config([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/config.json) needs to be updated accordingly.
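For example, reusing the udfs schema shown in the UDF integration section above, switching a UDF to GPU might look like this (the paths are placeholders):

"udfs": [
    {
        "name": "<path to python udf>",
        "type": "python",
        "device": "GPU",
        "model_xml": "<path to model xml file>",
        "model_bin": "<path to model bin file>"
    }
]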
Note
If your host OS is Ubuntu 22, add the group ID reported by stat -c "%g" /dev/dri/render* under the group_add section in the docker-compose.yml([WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice/eii/docker-compose.yml) file in order to set up access to the GPU from the container, as sketched below.
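A hedged compose sketch of that change; the group ID below is illustrative, so use whatever the stat command reports on your host:

ia_edge_video_analytics_microservice:
  ...
  # Use the group id(s) printed by: stat -c "%g" /dev/dri/render*
  group_add:
    - "110"
  ...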
EdgeVideoAnalyticsMicroservice does not support running inference on VPU devices - MYRIAD (NCS2) and HDDL.

If you get a Failed to create plugin for device GPU / clGetPlatformIDs error message, check whether the host system supports a GPU device. Install the required drivers from OpenVINO-steps-for-GPU. Certain platforms like TGL can have compatibility issues with the Ubuntu kernel version; ensure that a compatible kernel version is installed.

EdgeVideoAnalyticsMicroservice by default runs the video ingestion and analytics pipeline in a single microservice, for which the interface connection configuration is as below.
"Publishers": [
{
"Name": "default",
"Type": "zmq_tcp",
"EndPoint": "0.0.0.0:65114",
"Topics": [
"edge_video_analytics_results"
],
"AllowedClients": [
"*"
]
}
]
"EdgeVideoAnalyticsMicroservice" can be configured to run only with video ingestion capability by adding just the below Publishers section (the publisher config remains the same as above):

"Publishers": [
    {
        "Name": "default",
        "Type": "zmq_tcp",
        "EndPoint": "0.0.0.0:65114",
        "Topics": [
            "edge_video_analytics_results"
        ],
        "AllowedClients": [
            "*"
        ]
    }
]
"EdgeVideoAnalyticsMicroservice" can be configured to run only with video analytics capability by adding just the below Subscribers section.
"Subscribers": [
{
"Name": "default",
"Type": "zmq_ipc",
"EndPoint": "/EII/sockets",
"PublisherAppName": "<Appname>",
"Topics": [
"<topic_name>"
],
"zmq_recv_hwm": 50
}
]
The "source" parameter in the config.json needs to be updated to msgbus (the EII Message Bus), since "EdgeVideoAnalyticsMicroservice", when running with video analytics capability, needs to get its data from the EII Message Bus.
{
    "config": {
        "source": "msgbus",
        ...
    }
}