Working with Video Data¶
VideoIngestion Module¶
The VideoIngestion (VI) module is mainly responsible for ingesting the video frames coming from a video source, such as a video file or a Basler/RTSP/USB camera, into the EII stack for further processing. Additionally, by having VI run with classifier and post-processing UDFs, VI can also perform the job of the VA (VideoAnalytics) service.

The high level logical flow of VideoIngestion pipeline is as below:
1. App reads the application configuration via EII Configuration Manager, which has details of `ingestor`, `encoding` and `udfs`.
2. Based on the ingestor configuration, the app reads the video frames from the video file or camera.
3. [Optional] The read frames are passed onto one or more chained native/python UDFs for any pre-processing (passing through UDFs is optional and not required if one doesn't want to perform any pre-processing on the ingested frames). With chaining of UDFs supported, one can also have classifier UDFs and any post-processing UDFs (such as resize) configured in the `udfs` key to get the classified results. One can refer to ../common/video/udfs/README.md for more details.
4. App gets the msgbus endpoint configuration from the system environment and, based on that configuration, publishes the data on the mentioned topic on the EII MessageBus.
Note: The below use cases are suitable for single-node deployment where one can avoid the overhead of the VA (VideoAnalytics) service.

- If the VI (VideoIngestion) service is configured with a UDF that does the classification, then one may choose not to have the VA service, as all pre-processing, classification and any post-processing can be handled in VI itself with the usage of multiple UDFs.
- If the VI (VideoIngestion) service is also using GVA (GStreamer Video Analytics) elements, then one may choose not to have the VA service, as all pre-processing, classification and any post-processing (using VAAPI GStreamer elements) can be done in the GStreamer pipeline itself. Additionally, the post-processing here can be configured by having multiple UDFs in VI if needed.
Configuration¶
For more details on Etcd secrets and MessageBus endpoint configuration, visit Etcd_Secrets_Configuration.md and MessageBus Configuration respectively.
All the app module configurations are added into the distributed key-value store under the `AppName` env, as mentioned in the environment section of this app's service definition in docker-compose. If `AppName` is `VideoIngestion`, then the app's config would be fetched from the `/VideoIngestion/config` key via EII Configuration Manager.
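For a quick sanity check of what is stored under that key, one can read it directly from etcd. The sketch below assumes developer mode with etcd reachable on localhost:2379 without TLS, and uses the third-party `etcd3` Python package (an assumption for illustration only; it is not shipped by EII):

```python
import json

import etcd3  # pip install etcd3 (assumption; not part of EII)

# Developer-mode etcd endpoint is assumed to be localhost:2379 without TLS.
client = etcd3.client(host="localhost", port=2379)

value, _meta = client.get("/VideoIngestion/config")
if value is not None:
    config = json.loads(value.decode("utf-8"))
    print("ingestor type:", config.get("ingestor", {}).get("type"))
```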
Note:

- Developer mode related overrides go into docker-compose-dev.override.yml.
- For `jpeg` encoding type, `level` is the quality from 0 to 100 (the higher the better).
- For `png` encoding type, `level` is the compression level from 0 to 9. A higher value means a smaller size and longer compression time. (Both encoding levels are illustrated right after this note.)
- One can use a JSON validator tool for validating the app configuration against the above schema.
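As a plain OpenCV illustration of how these two `level` values behave (this only demonstrates the parameter ranges; it is not EII's internal encoding code):

```python
import cv2
import numpy as np

# Synthetic BGR frame used purely for this illustration.
frame = np.random.randint(0, 255, (1200, 1920, 3), dtype=np.uint8)

# jpeg: "level" corresponds to a quality value between 0 and 100 (higher = better quality).
ok, jpeg_buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 95])

# png: "level" corresponds to a compression level between 0 and 9 (higher = smaller, slower).
ok, png_buf = cv2.imencode(".png", frame, [cv2.IMWRITE_PNG_COMPRESSION, 5])

print("jpeg bytes:", len(jpeg_buf), "png bytes:", len(png_buf))
```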
For working with RTSP cameras behind a proxy network, one has to add the RTSP camera IP to `RTSP_CAMERA_IP` in builder.py.
Ingestor config¶
The following types of ingestors are supported:

- OpenCV
- GStreamer
- RealSense

For more information on the Intel RealSense SDK, refer to librealsense.
Camera Configuration¶
Video file
OpenCV Ingestor
{ "type": "opencv", "pipeline": "./test_videos/pcb_d2000.avi", "poll_interval": 0.2 "loop_video": true }
Gstreamer Ingestor
{ "type": "gstreamer", "pipeline": "multifilesrc loop=TRUE stop-index=0 location=./test_videos/pcb_d2000.avi ! h264parse ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink" }
Refer [docs/multifilesrc_doc.md](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/multifilesrc_doc.md) for more information/configuration on the multifilesrc element.
Updating Security Context of VideoIngestion Helm Charts for enabling K8s environment to access/detect Basler/USB Devices¶
Please follow the below steps to update the helm charts so that the K8s environment can access/detect the Basler camera and NCS2 device:

1. Open the `EII_HOME_DIR/IEdgeInsights/VideoIngestion/helm/templates/video-ingestion.yaml` file.

2. Update the below security context snippet

       securityContext:
         privileged: true

   in the yaml file as:

       ...
       imagePullPolicy: {{ $global.Values.imagePullPolicy }}
       securityContext:
         privileged: true
       volumeMounts:
         - name: dev
           mountPath: /dev
       ...

3. Re-run `builder.py` to apply these changes to your deployment helm charts.
GenICam GigE or USB3 Camera¶
Refer [GenICam GigE/USB3.0 Camera Support](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/generic_plugin_doc.md) for information/configuration on GenICam GigE/USB3.0 camera support.
Prerequisites for working with GenICam compliant cameras:

The following are the prerequisites for working with GenICam compliant cameras. Please note that these changes need to be reverted while working with other cameras such as RealSense, RTSP and USB (v4l2 driver compliant).

Refer to the below snippet of the `ia_video_ingestion` service to add the required changes in the docker-compose.yml file of the respective ingestion service (including custom UDF services). Once the changes are made, make sure the updated docker-compose.yml is used before building and running the services.

**For GenICam GigE Camera:**
```yaml
ia_video_ingestion:
# Add root user
user: root
# Add network mode host
network_mode: host
# Please make sure that the above commands are not added under environment section and also take care about the indentations in the compose file.
...
environment:
...
# Add HOST_IP to no_proxy and ETCD_HOST
no_proxy: "${RTSP_CAMERA_IP},<HOST_IP>"
ETCD_HOST: "<HOST_IP>"
...
# Comment networks section as below as it will throw an error when network mode host is used.
# networks:
# - eii
# Comment ports section as below
# ports:
# - 64013:64013
```
**For GenICam USB3.0 Camera:**
```yaml
ia_video_ingestion:
# Add root user
user: root
...
environment:
# refer [GenICam GigE/USB3.0 Camera Support](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/generic_plugin_doc.md) to install the respective camera SDK
# Setting GENICAM value to the respective camera/GenTL producer which needs to be used
GENICAM: "<CAMERA/GenTL>"
...
```
> **Note:**
> * In case one notices GenICam cameras not getting initialized during runtime then try executing `docker system prune` command on the host system and then removing the GenICam specific semaphore files under `/dev/shm/` path of the host system. `docker system prune` command will remove all stopped containers, networks not used by at least one container, dangling images and build cache which could prevent the plugin from accessing the device.
> * In case one notices `Feature not writable` while working with GenICam cameras please reset the device using camera software or using the reset property of [Generic Plugin README](https://github.com/open-edge-insights/video-ingestion/blob/master/src-gst-gencamsrc/README).
> * In `IPC` mode, if the Ingestion service is running with `root` privileges then the `ia_video_analytics` and `ia_visualizer` services subscribing to it must also run with `root` privileges.
> * In multi-node scenario, replace <HOST_IP> in "no_proxy" with leader node IP address.
> * In TCP mode of communication, msgbus subscribers and clients of VideoIngestion are required to configure the "EndPoint" in config.json with host IP and port under "Subscribers" or "Clients" interfaces section.
Gstreamer Ingestor
GenICam GigE/USB3.0 cameras
{ "type": "gstreamer", "pipeline": "gencamsrc serial=<DEVICE_SERIAL_NUMBER> pixel-format=<PIXEL_FORMAT> exposure-time=5000 exposure-mode=timed exposure-auto=off throughput-limit=300000000 ! videoconvert ! video/x-raw,format=BGR ! appsink" }
Note:

- Generic Plugin can work only with GenICam compliant cameras and only with the gstreamer ingestor.
- The above gstreamer pipeline was tested with Basler and IDS GigE cameras.
- If `serial` is not provided then the first connected camera in the device list will be used.
- If `pixel-format` is not provided then the default `mono8` pixel format will be used.
- If the `width` and `height` properties are not set then the gencamsrc plugin will set the maximum resolution supported by the camera.
- By default the `exposure-auto` property is set to on. If the camera is not placed under sufficient light then, with auto exposure, `exposure-time` can be set to a very large value which will increase the time taken to grab a frame. This can lead to a `No frame received` error. Hence it is recommended to manually set the exposure, as in the below sample pipeline, when the camera is not placed under good lighting conditions.
- `throughput-limit` is the bandwidth limit for streaming out data from the camera (in bytes per second).
Hardware trigger based ingestion with gstreamer ingestor
{ "type": "gstreamer", "pipeline": "gencamsrc serial=<DEVICE_SERIAL_NUMBER> pixel-format=<PIXEL_FORMAT> trigger-selector=FrameStart trigger-source=Line1 trigger-activation=RisingEdge hw-trigger-timeout=100 acquisition-mode=singleframe exposure-time=5000 exposure-mode=timed exposure-auto=off throughput-limit=300000000 ! videoconvert ! video/x-raw,format=BGR ! appsink" }
Note:

For the PCB usecase, use the `width` and `height` properties of gencamsrc to set the resolution to `1920x1200` and make sure the camera is pointing to the rotating PCB boards, as seen in the `pcb_d2000.avi` video file, for the pcb filter to work.

One can refer to the below example pipeline:
{ "type": "gstreamer", "pipeline": "gencamsrc serial=<DEVICE_SERIAL_NUMBER> pixel-format=ycbcr422_8 width=1920 height=1200 exposure-time=5000 exposure-mode=timed exposure-auto=off throughput-limit=300000000 ! videoconvert ! video/x-raw,format=BGR ! appsink" }
Refer [docs/basler_doc.md](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/basler_doc.md) for more information/configuration on the Basler camera.
RTSP Camera¶
Refer [docs/rtsp_doc.md](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/rtsp_doc.md) for information/configuration on the RTSP camera.
OpenCV Ingestor
{ "type": "opencv", "pipeline": "rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>" }
NOTE: OpenCV for RTSP will use software decoders.
Gstreamer Ingestor
{ "type": "gstreamer", "pipeline": "rtspsrc location=\"rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>\" latency=100 ! rtph264depay ! h264parse ! vaapih264dec ! vaapipostproc format=bgrx ! videoconvert ! video/x-raw,format=BGR ! appsink" }
Note: The RTSP URI of the physical camera depends on how it is configured using the camera software. One can use VLC Network Stream to verify the RTSP URI to confirm the RTSP source.
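If VLC is not handy, a quick standalone OpenCV check (run outside the EII containers) can also confirm that the RTSP URI is reachable; the URI below is a placeholder to be filled in:

```python
import cv2

# Placeholder URI: substitute the camera's actual credentials, IP, port and feed.
uri = "rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>"

cap = cv2.VideoCapture(uri)
ok, frame = cap.read()
print("RTSP source reachable:", ok, "frame shape:", frame.shape if ok else None)
cap.release()
```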
RTSP simulated camera using cvlc
OpenCV Ingestor
{ "type": "opencv", "pipeline": "rtsp://<SOURCE_IP>:<PORT>/<FEED>" }
Gstreamer Ingestor
{ "type": "gstreamer", "pipeline": "rtspsrc location=\"rtsp://<SOURCE_IP>:<PORT>/<FEED>\" latency=100 ! rtph264depay ! h264parse ! vaapih264dec ! vaapipostproc format=bgrx ! videoconvert ! video/x-raw,format=BGR ! appsink" }
Refer [docs/rtsp_doc.md](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/rtsp_doc.md) for more information/configuration on the RTSP simulated camera.
USB Camera¶
Refer [docs/usb_doc.md](https://github.com/open-edge-insights/video-ingestion/blob/master/docs/usb_doc.md) for information/configurations on the USB camera.
OpenCV Ingestor
{ "type": "opencv", "pipeline": "/dev/video0" }
Gstreamer Ingestor
{ "type": "gstreamer", "pipeline": "v4l2src ! video/x-raw,format=YUY2 ! videoconvert ! video/x-raw,format=BGR ! appsink" }
RealSense Depth Camera¶
RealSense Ingestor
"ingestor": { "type": "realsense", "serial": "<DEVICE_SERIAL_NUMBER>", "framerate": <FRAMERATE>, "imu_on": true },
Note:

- RealSense Ingestor was tested with the Intel RealSense Depth Camera D435i.
- RealSense Ingestor does not support `poll_interval`. Please use `framerate` to reduce the ingestion FPS if required.
- If the `serial` config is not provided then the first RealSense camera in the device list will be connected.
- If the `framerate` config is not provided then the default framerate of 30 will be applied. Please make sure that the framerate provided is compatible with both the color and depth sensors of the RealSense camera. With the D435i camera only framerates 6, 15, 30 and 60 are supported and tested.
- The IMU stream will work only if the RealSense camera model supports the IMU feature. The default value for `imu_on` is set to false.
Note:

- For all video and camera streams please make sure you are using an appropriate UDF configuration. One may not get the expected output on the Visualizer/WebVisualizer screen if the UDF is not compatible with the video source.
- If one is not sure about the compatibility of the UDF and the video source then the 'dummy' UDF can be used. It won't do any analytics on the video and hence won't filter any of the video frames. You will, therefore, see the video streamed by the camera as-is on the video output screen in Visualizer/WebVisualizer.

Refer to the below configuration for the 'dummy' UDF:
"udfs": [{
"name": "dummy",
"type": "python"
}]
The same changes need to be applied in the VideoAnalytics configuration if it is subscribing to VideoIngestion.
VideoAnalytics Module¶
The VideoAnalytics module is mainly responsible for running the classifier UDFs and doing the required inferencing on the chosen Intel(R) Hardware (CPU, GPU, VPU, HDDL) using OpenVINO.
The high level logical flow of VideoAnalytics pipeline is as below:
1. App reads the application configuration via EII Configuration Manager, which has details of `encoding` and `udfs`.
2. App gets the msgbus endpoint configuration from the system environment.
3. Based on the above two configurations, the app subscribes to the published topic/stream coming from the VideoIngestion module.
4. The frames received in the subscriber are passed onto one or more chained native/python UDFs for running inferencing and doing any post-processing as required. One can refer to the UDFs README for more details.
5. The frames coming out of the chained UDFs are published on a different topic/stream on the EII MessageBus.
Configuration¶
For more details on Etcd secrets and MessageBus endpoint configuration, visit Etcd_Secrets_Configuration.md and MessageBus Configuration respectively.
NOTE:

- The `max_workers` and `udfs` are configuration keys related to UDFs. For more details on UDF configuration, please visit ../common/video/udfs/README.md.
- For details on Etcd and MessageBus endpoint configuration, visit Etcd_Secrets_Configuration.
In case the VideoAnalytics container is found to be consuming a lot of memory, one of the suspects could be that the algorithm processing is slower than the frame ingestion rate, so a lot of frames occupy RAM waiting to be processed. In that case the user can reduce the high watermark value to an acceptable lower number so that RAM consumption stays under control and stabilized. The exact config parameter is called ZMQ_RECV_HWM, present in docker-compose.yml. This config is also present in other types of containers, hence the user can tune them to control the memory bloat if applicable. The config snippet is pasted below:

    ZMQ_RECV_HWM: "1000"
All the app module configurations are added into the distributed key-value data store under the `AppName` env as mentioned in the environment section of this app's service definition in docker-compose.

Developer mode related overrides go into docker-compose-dev.override.yml.

If `AppName` is `VideoAnalytics`, then the app's config would be fetched from the `/VideoAnalytics/config` key via EII Configuration Manager.
Note:

- For `jpeg` encoding type, `level` is the quality from 0 to 100 (the higher the better).
- For `png` encoding type, `level` is the compression level from 0 to 9. A higher value means a smaller size and longer compression time.

One can use a JSON validator tool for validating the app configuration against the above schema.
Updating Security Context Of VideoAnalytics Helm Charts for enabling Accelerators in k8s environment¶
Please follow the below steps to update the helm charts for enabling accelerators in the K8s environment:

1. Open the `EII_HOME_DIR/IEdgeInsights/VideoAnalytics/helm/templates/video-analytics.yaml` file.

2. Update the below security context snippet

       securityContext:
         privileged: true

   in the yml file as:

       ...
       imagePullPolicy: {{ $global.Values.imagePullPolicy }}
       securityContext:
         privileged: true
       volumeMounts:
         - name: dev
           mountPath: /dev
       ...

3. Re-run `builder.py` to apply these changes to your deployment helm charts.
Using video accelerators in ingestion/analytics containers¶
EII supports running inference on `CPU`, `GPU`, `MYRIAD` (NCS2), and `HDDL` devices by accepting the `device` value ("CPU"|"GPU"|"MYRIAD"|"HDDL") as part of the UDF object configuration in the `udfs` key. The `device` field in the UDF config of the `udfs` key in the `VideoIngestion` and `VideoAnalytics` configs can either be changed in `build/provision/config/eii_config.json` before provisioning (or re-provision again after changing the app's config.json, re-running the `builder.py` script and then re-running the provisioning script) or at run-time via EtcdUI. For more details on the udfs config, check common/udfs/README.md.
Note: There is an initial delay of up to ~30s while running inference on `GPU` (only for the first frame) as certain packages get created dynamically during runtime.
To run on USB devices¶
For actual deployment, in case a USB camera is required, mount the device node of the USB camera for the `ia_video_ingestion` service. When multiple USB cameras are connected to the host machine, the required camera should be identified with its device node and mounted.

Eg: Mount the two USB cameras connected to the host machine with device nodes `video0` and `video1`:
ia_video_ingestion:
...
devices:
- "/dev/dri"
- "/dev/video0:/dev/video0"
- "/dev/video1:/dev/video1"
Note
/dev/dri is needed for graphics drivers.
To run on MYRIAD devices¶
To run inference on a `MYRIAD` device, `root` user permissions need to be used at runtime. To enable the root user at runtime in either `ia_video_ingestion`, `ia_video_analytics` or any of the custom UDF services, add `user: root` in the respective docker-compose.yml file.

Eg: To use a `MYRIAD` device in the `ia_video_analytics` service refer to the below example:

    ia_video_analytics:
      ...
      user: root
Troubleshooting issues for MYRIAD(NCS2) devices
The following workaround can be exercised if the user observes `NC_ERROR` during device initialization of the NCS2 stick. While running EII, if NCS2 devices fail to initialize properly, the user can re-plug the device for the initialization to happen afresh. The user can verify successful initialization by executing `lsusb` and `dmesg` as below:

    lsusb | grep "03e7" (03e7 is the VendorID and 2485 is one of the productIDs for MyriadX)

    dmesg > dmesg.txt
    [ 3818.214919] usb 3-4: new high-speed USB device number 10 using xhci_hcd
    [ 3818.363542] usb 3-4: New USB device found, idVendor=03e7, idProduct=2485
    [ 3818.363546] usb 3-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [ 3818.363548] usb 3-4: Product: Movidius MyriadX
    [ 3818.363550] usb 3-4: Manufacturer: Movidius Ltd.
    [ 3818.363552] usb 3-4: SerialNumber: 03e72485
    [ 3829.153556] usb 3-4: USB disconnect, device number 10
    [ 3831.134804] usb 3-4: new high-speed USB device number 11 using xhci_hcd
    [ 3831.283430] usb 3-4: New USB device found, idVendor=03e7, idProduct=2485
    [ 3831.283433] usb 3-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [ 3831.283436] usb 3-4: Product: Movidius MyriadX
    [ 3831.283438] usb 3-4: Manufacturer: Movidius Ltd.
    [ 3831.283439] usb 3-4: SerialNumber: 03e72485
    [ 3906.460590] usb 3-4: USB disconnect, device number 11
The below link can be referred to in case the user observes `global mutex initialization failed` during device initialization of the NCS2 stick: https://www.intel.com/content/www/us/en/support/articles/000033390/boards-and-kits.html

For VPU troubleshooting refer to the below link: https://docs.openvinotoolkit.org/2021.4/openvino_docs_install_guides_installing_openvino_linux_ivad_vpu.html#troubleshooting
To run on HDDL devices¶
Download the full package of the OpenVINO toolkit for Linux, version "2021.4" (`OPENVINO_IMAGE_VERSION` used in build/.env), from the official website (https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-linux). Please refer to the OpenVINO links below to install OpenVINO and run the HDDL daemon on the host.

- OpenVINO install: https://docs.openvinotoolkit.org/2021.4/_docs_install_guides_installing_openvino_linux.html#install-openvino
- HDDL daemon setup: https://docs.openvinotoolkit.org/2021.4/_docs_install_guides_installing_openvino_linux_ivad_vpu.html
The OpenVINO 2021.4 installation creates a symbolic link to the latest installation with the filename `openvino_2021` instead of `openvino`. Hence one can create a symbolic link with the filename `openvino` to the latest installation using the below steps:

    $ cd /opt/intel
    $ sudo ln -s <OpenVINO latest installation> openvino

Eg: sudo ln -s openvino_2021.4.582 openvino
In case there are older versions of OpenVINO installed on the host system, please uninstall them.
Running hddldaemon: Refer to the below commands to run the hddldaemon once the setup is done (it should run in a different terminal or in the background on the host system where the inference would be done):

    $ source /opt/intel/openvino/bin/setupvars.sh
    $ $HDDL_INSTALL_DIR/bin/hddldaemon
Changing ownership of hddl files on the host system: Before running inference with EII, the ownership of the hddl files should be changed to the value of the `EII_USER_NAME` key set in build/.env. Refer to the below command to set the ownership of the hddl files on the host system, assuming `EII_USER_NAME=eiiuser` in build/.env:

    $ sudo chown -R eiiuser /var/tmp/hddl_*
For actual deployment one could choose to mount only the required devices for services using OpenVINO with HDDL (`ia_video_analytics` or `ia_video_ingestion`) in `build/docker-compose.yml`.

Eg: Mount only the graphics and HDDL ion devices for the `ia_video_analytics` service:

    ia_video_analytics:
      ...
      devices:
        - "/dev/dri"
        - "/dev/ion:/dev/ion"
Troubleshooting issues for HDDL devices
Please verify that the hddldaemon started on the host machine is using the libraries of the correct OpenVINO version set in build/.env. One could set `device_snapshot_mode` to `full` in $HDDL_INSTALL_DIR/config/hddl_service.config on the host machine to get the complete snapshot of the hddl device.

For VPU troubleshooting refer to the below link: https://docs.openvinotoolkit.org/2021.4/openvino_docs_install_guides_installing_openvino_linux_ivad_vpu.html#troubleshooting
Please refer OpenVINO 2021.4 release notes in the below link for new features and changes from the previous versions. https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html
Refer OpenVINO website in the below link to skim through known issues, limitations and troubleshooting https://docs.openvinotoolkit.org/2021.4/index.html
To run on Intel(R) Processor Graphics (GPU/iGPU)¶
Note
The below step is required only for 11th gen Intel Processors
Upgrade the kernel version to 5.8 and install the required drivers from the below OpenVINO link: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html#additional-GPU-steps
User Defined Function (UDF)¶
A UDF is a chunk of user code that acts as a filter, preprocessor, or classifier for a given data input coming from EII. The User Defined Function (UDF) Loader Library provides a common API for loading C++ and Python UDFs.
The library itself is written in C++ and provides an abstraction layer for loading and calling UDFs. Additionally, the library defines a common interface inheritable by all UDFs (whether written in C++ or Python).
The overall block diagram for the library is shown in the following figure.

User-Defined Function Loader Library Block Design¶
In this case, the VideoIngestion component is also able to execute the video data classifier algorithm by including the classifier UDF in the VideoIngestion configuration. By defining the classifier UDF in the VideoIngestion component, the VideoAnalytics component becomes optional.
EII UDFLoader¶
UDFLoader is a library providing APIs for loading and executing native and python UDFs.
Dependency Installation¶
UDFLoader depends on the below libraries. Follow their documentation to install them.
- OpenCV - Run the `source /opt/intel/openvino/bin/setupvars.sh` command
- Python3 Numpy package
Compilation¶
The library utilizes CMake as the build tool for compilation. The simplest sequence of commands for building the library is shown below.
$ mkdir build
$ cd build
$ cmake ..
$ make
If you wish to compile in debug mode, then you can set `CMAKE_BUILD_TYPE` to `Debug` when executing the `cmake` command (as shown below).
$ cmake -DCMAKE_BUILD_TYPE=Debug ..
Installation¶
Note
This is a mandatory step to use this library in C/C++ EII modules
If you wish to install this library on your system, execute the following command after building the library:
$ sudo make install
By default, this command will install the `udfloader` library into `/usr/local/lib/`. On some platforms this is not included in the `LD_LIBRARY_PATH` by default. As a result, you must add this directory to your `LD_LIBRARY_PATH`. This can be accomplished with the following `export`:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
Note: You can also specify a different library prefix to CMake through the `CMAKE_INSTALL_PREFIX` flag.
Running Unit Tests¶
Note: The unit tests will only be compiled if the `WITH_TESTS=ON` option is specified when running CMake.

Run the following commands from the `build/tests` folder to cover the unit tests.
# First, source the source.sh file to setup the PYTHONPATH environment
$ source ./source.sh
# Execute frame abstraction unit tests
$ ./frame-tests
# Execute UDF loader unit tests
$ ./udfloader-tests
Introduction¶
UDFs (User Defined Functions) are one of the cardinal features of the EII framework. They enable users to adjoin any pre-processing or post-processing logic in the data pipeline defined by the EII configuration. As of the EII 2.1 release, UDFs can be implemented using the following languages:
C++ (It is also called Native UDF as EII core components are implemented in C++)
Python
The order in which the UDFs are defined in the EII configuration file is the order in which data will flow across them. Currently there is no support for demuxing/muxing the data flow to/from the UDFs.
All configs related to UDFs are to be placed in the `config.json` of apps like VideoIngestion and VideoAnalytics. The UDF schema and the description of its keys/values are presented in detail in the UDF README file.
UDF Configuration¶
Below is the JSON schema for UDF json object configuration:
{
"type": "object",
"additionalProperties": false,
"properties": {
"max_jobs": {
"description": "Number of queued UDF jobs",
"type": "integer",
"default": 20
},
"max_workers": {
"description": "Number of threads acting on queued jobs",
"type": "integer",
"default": 4
},
"udfs": {
"description": "Array of UDF config objects",
"type": "array",
"items": [
{
"description": "UDF config object",
"type": "object",
"properties": {
"type": {
"description": "UDF type",
"type": "string",
"enum": [
"native",
"python",
"raw_native"
]
},
"name": {
"description": "Unique UDF name",
"type": "string"
},
"device": {
"description": "Device on which inference occurs",
"type": "string",
"enum": [
"CPU",
"GPU",
"HDDL",
"MYRIAD"
]
}
},
"additionalProperties": true,
"required": [
"type",
"name"
]
}
]
}
}
}
One can use a JSON validator tool for validating the UDF configuration object against the above schema.
Example UDF configuration:
{
"max_jobs": 20,
"max_workers": 4,
"udfs": [ {
"type": "native",
"name": "dummy"
},
{
"type": "python",
"name": "pcb.pcb_filter"
}]
}
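As a programmatic alternative to an online JSON validator, the sketch below uses the third-party Python `jsonschema` package (an assumption for illustration; it is not shipped by EII) to check a UDF configuration against the schema above, assuming both have been saved locally as `udf_schema.json` and `udf_config.json`:

```python
import json

from jsonschema import ValidationError, validate  # pip install jsonschema

# Hypothetical local copies of the schema and example configuration shown above.
with open("udf_schema.json") as f:
    schema = json.load(f)
with open("udf_config.json") as f:
    config = json.load(f)

try:
    validate(instance=config, schema=schema)
    print("UDF configuration is valid")
except ValidationError as err:
    print("Invalid UDF configuration:", err.message)
```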
UDF Writing Guide¶

Users can refer to the UDF Writing HOW-TO GUIDE for a detailed explanation of the process to write a custom UDF.
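For orientation only, below is a minimal sketch of what a Python UDF typically looks like. It assumes the commonly used EII Python UDF contract: a class named `Udf` whose constructor receives the extra keys from the UDF config and whose `process()` method returns a drop flag, an optional modified frame, and the metadata dict. Treat the exact class name, method signature and return convention as assumptions and verify them against the UDF Writing HOW-TO GUIDE.

```python
import logging


class Udf:
    """Skeleton pre-processing UDF (illustrative sketch, not shipped with EII)."""

    def __init__(self, scale_ratio=1):
        # Extra keys in the UDF config (e.g. a hypothetical "scale_ratio")
        # are assumed to arrive as constructor arguments.
        self.scale_ratio = scale_ratio
        self.log = logging.getLogger(__name__)

    def process(self, frame, metadata):
        # frame: numpy array for single-frame ingestion; metadata: dict.
        metadata["scale_ratio"] = self.scale_ratio
        # Assumed return convention: (drop_frame, modified_frame_or_None, metadata).
        return False, None, metadata
```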
Sample UDFs¶
Note: The UDF configs of these go as JSON objects in the `udfs` key of the overall UDF configuration object.
Native UDFs¶
Dummy UDF
Accepts the frame and forwards the same without doing any processing. It’s a do-nothing UDF.
UDF config:
{ "name": "dummy", "type": "native" }
Raw Dummy UDF
Accepts the Frame object and forwards the same without doing any processing. It’s a do-nothing UDF for working with multi-frame support.
UDF config:
{ "name": "raw_dummy", "type": "raw_native" }

Note: The `raw_native` UDF type has been added to support multi-frame ingestion. The RealSense usecase requires multi-frame ingestion for color and depth frames.
Resize UDF
Accepts the frame, resizes it based on the `width` and `height` params.

UDF config:
{ "name": "resize", "type": "native", "width": 600, "height": 600 }
FPS UDF
FPS udf can be used to measure the total number of frames received every second. It can be used in VideoIngestion and VideoAnalytics application by adding the below configuration in the udf configuration. It can also be chained with other udfs in which case the FPS result will be affected depending on the other udfs used.
UDF config:
{ "name": "fps", "type": "native" }

Config for chaining the fps UDF with other UDFs:
"udfs": [{ "name": "dummy", "type": "native" }, { "name": "fps", "type": "native" }]
Note: The fps results will be logged in `DEBUG` LOG_LEVEL, added to the metadata with the AppName as the key, and displayed in the visualizer.

Sample RealSense UDF
Accepts the color and depth frame, converts to rs2::frame type by using rs2::software_device simulation, enables a color filter on the depth frame using rs2::colorizer.
UDF config:
{ "name": "sample_realsense", "type": "raw_native" }
Python UDFs¶

Note: Additional properties/keys other than `name` and `type` in the UDF config are the parameters of the Python UDF constructor.
Dummy UDF
Accepts the frame and forwards the same without doing any processing. It’s a do-nothing UDF.
UDF config:
{ "name": "dummy", "type": "python" }
Multi Frame Dummy UDF
Accepts the Frame object which is a list of frames and forwards the same without doing any processing. It’s a do-nothing UDF for working with multi-frame support.
UDF config:
{ "name": "multi_frame_dummy", "type": "python" }
Note: When multi-frame ingestion is used, the Frame object is a list of numpy frames; otherwise it is a single numpy frame. The UDF type remains `python` for both multi-frame and single-frame ingestion. A sketch of how a UDF can handle both cases is shown below.
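The following rough sketch (again assuming the illustrative `Udf`/`process()` contract from the UDF Writing Guide section above) shows one way a Python UDF could handle both a single numpy frame and a list of frames:

```python
class Udf:
    """Illustrative sketch: tolerates single-frame and multi-frame ingestion."""

    def __init__(self):
        pass

    def process(self, frame, metadata):
        # With multi-frame ingestion (e.g. RealSense color + depth),
        # "frame" is assumed to arrive as a list of numpy arrays.
        frames = frame if isinstance(frame, list) else [frame]
        metadata["num_frames"] = len(frames)
        # Assumed return convention: (drop_frame, modified_frame_or_None, metadata).
        return False, None, metadata
```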
Jupyter Connector UDF
Accepts the frame and publishes it to the EII JupyterNotebook service which processes the frame and publishes it back to the jupyter_connector UDF.
UDF config:
{ "name": "jupyter_connector", "type": "python" }
PCB Filter UDF
Accepts the frame and, based on whether the PCB board is at the center of the frame or not, forwards or drops it. It basically sends only the key frames forward for further processing, not all the frames it receives.

UDF config:
{
"name": "pcb.pcb_filter",
"type": "python",
"training_mode": "false",
"scale_ratio": 4,
"n_total_px": 300000,
"n_left_px": 1000,
"n_right_px": 1000
}
Refer [python/pcb/README.md](https://github.com/open-edge-insights/video-common/blob/master/udfs/python/pcb/README.md) for more information.
PCB Classifier UDF
Accepts the frame and uses OpenVINO inference engine APIs to determine whether it's a `good` PCB with no defects or a `bad` PCB with defects. The metadata associated with the frame is populated accordingly.

UDF config:
{ "name": "pcb.pcb_classifier", "type": "python", "device": "CPU", "ref_img": "common/video/udfs/python/pcb/ref/ref.png", "ref_config_roi": "common/video/udfs/python/pcb/ref/roi_2.json", "model_xml": "common/video/udfs/python/pcb/ref/model_2.xml", "model_bin": "common/video/udfs/python/pcb/ref/model_2.bin" }
Refer python/pcb/README.md for more information.
NOTE: The above config works for both "CPU" and "GPU" devices after setting the appropriate `device` value. Please set the "device" value based on the device used for inferencing.

Sample ONNX UDF
This UDF mainly demonstrates the model deployment on edge devices via AzureBridge service only.
Please follow the below steps:
Configure the sample ONNX UDF by following Sample ONNX UDF configuration guide
Follow Single-Node Azure IOT Edge Deployment to deploy the required modules
For more details on AzureBridge setup, please refer AzureBridge README.md
Construction of Metadata in UDF¶
If EII Visualizer/WebVisualizer clients are used for visualizing the classified frames, then please follow the metadata guidelines mentioned under **Metadata Structure** in the Visualizer / WebVisualizer README respectively.
Note: The user has to make sure that the data within the metadata is of type list, tuple, dict or a primitive data type (int, float, string or bool). Also, data within a list, tuple or dict must contain only primitive data types. Eg: Any data of type "numpy.float" or "numpy.int" should be type-cast to float and int respectively.
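For illustration, a small sketch of the kind of type-casting this implies when populating metadata inside a UDF (the variable names below are hypothetical):

```python
import numpy as np

# Values as they might come out of a typical inference step (hypothetical).
confidence = np.float32(0.83)
label_id = np.int64(2)
box = np.array([100, 200, 150, 260])

# Cast numpy scalars/arrays to primitive types before adding them to the metadata.
metadata = {
    "confidence": float(confidence),
    "label_id": int(label_id),
    "box": [int(v) for v in box],
}
```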
Chaining of UDFs¶
One can chain multiple native/python UDFs in the `udfs` key. The way chaining works here is that the UDF listed first sends the modified frame and metadata to the subsequent UDF, and so on. One classic example is having `pcb.pcb_filter` and `pcb.pcb_classifier` in the VideoIngestion service config to do both the pre-processing and the classification logic without the need of the VideoAnalytics service.
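Conceptually, the chaining behaves like the short sketch below. This is only an illustration of the data flow, not the actual udfloader implementation:

```python
def run_udf_chain(udfs, frame, metadata):
    """Pass the frame/metadata through each configured UDF in order (illustrative)."""
    for udf in udfs:
        drop, new_frame, metadata = udf.process(frame, metadata)
        if drop:
            return None, metadata      # e.g. pcb_filter drops non-key frames
        if new_frame is not None:
            frame = new_frame          # the modified frame feeds the next UDF
    return frame, metadata             # final result is published on the MessageBus
```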
Combination of UDFs with ingestors¶
| Ingestor | Chaining UDFs for pcb demo usecase | Chaining UDFs for worker safety gear usecase |
| --- | --- | --- |
| opencv/gstreamer |  |  |
| gstreamer with GVA (Gstreamer Video Analytics) elements | Not Applicable |  |
Note: The dummy UDF can also be used for the above use cases for testing the chaining UDFs feature, but as such there is no value add since it's a do-nothing UDF. In DEV mode, Python UDF changes can be tested by restarting the containers; there is no need to rebuild.
Custom Udfs¶
The following are the two custom UDF workflows which EII supports:

Build / Run custom UDFs as standalone applications

For running custom UDFs as standalone applications, one must download the video-custom-udfs repo and refer to CustomUdfs/README.md.

Build / Run custom UDFs in VI or VA

For running custom UDFs either in VI or VA, one must refer to VideoIngestion/docs/custom_udfs_doc.md.
Native Visualizer Module¶
Native Visualizer is a native app to view the classified images/metadata coming out of EII.
Steps to build and run visualizer¶
Please go through the below sections to have visualizer service built and launch it:
For more details, refer EII core README
Note:

- The admin has to make sure all the necessary config is set in etcd before starting the visualizer.
- The user has to make sure the path provided in the docker-compose volumes of the visualizer correlates to the one in etcd before running the visualizer if they wish to save images.
- Run this command in the terminal if you run into a tkinter "couldn't connect to display" exception:
$ xhost +
If the Visualizer UI doesn't show up and you notice a couldn't connect to display ":0" error in `docker logs -f ia_visualizer`, please check the value of the `DISPLAY` env variable on the host machine by running the cmd `env | grep DISPLAY`. Set this as the value for the `DISPLAY` env variable in the ia_visualizer service of docker-compose.yml or in the consolidated ../build/docker-compose.yml file and re-run `docker-compose up ia_visualizer -d`.
Example:
```sh
$ env | grep DISPLAY
DISPLAY=:1
```
Set ":1" as the `DISPLAY` env value in the ia_visualizer service.
If one needs to remove the classified images on a periodic basis:
Have this command running in a separate terminal as a cleanup task to remove images older than 60 mins in IMAGE_DIR. Replace <path-to-IMAGE_DIR> with IMAGE_DIR path given while running visualizer. The -mmin option can be changed accordingly by the user.
$ while true; do find <path-to-IMAGE_DIR> -mmin +60 -type f -name "*.png" -exec rm -f {} \;; done
If the user needs to remove the bounding box:
Set the value of draw_results in config.json as false for both Visualizer and WebVisualizer.
draw_results: "false"
If user needs to save images of visualizer:
Set the value of save_image in config.json as true
"save_image": "true"
Using Labels¶
In order to have the visualizer label each of the defects on the image, labels in JSON format (with a mapping between the subscribed topic and the text to be displayed) have to be provided in the config.json file, and the builder script has to be run using the below command.
$ python3 builder.py
An example of what this JSON value should look like is shown below. In this case
it is assumed that the classification types are ``0`` and ``1`` and the text labels
to be displayed are ``MISSING`` and ``SHORT`` respectively.
{
"0": "MISSING",
"1": "SHORT"
}
Note
These labels are the mapping for the PCB demo provided in EII’s visualizer directory. Currently camera1_stream_results consists of pcb demo labeling and camera2_stream_results consists of safety demo labeling. Hence, in config.json proper mapping of all the subscribed topics should be done with pcb demo labeling and safety demo labeling respectively.
"/Visualizer/config": {
"save_image": "false",
"draw_results": "false",
"labels" : {
"camera1_stream_results": {
"0": "MISSING",
"1": "SHORT"
},
"native_safety_gear_stream_results": {
"1": "safety_helmet",
"2": "safety_jacket",
"3": "Safe",
"4": "Violation"
},
"py_safety_gear_stream_results": {
"1": "safety_helmet",
"2": "safety_jacket",
"3": "Safe",
"4": "Violation"
},
"gva_safety_gear_stream_results": {
"1": "safety_helmet",
"2": "safety_jacket",
"3": "Safe",
"4": "Violation"
}
}
}
Metadata Structure¶
The EII Visualizer app can decode certain types of meta-data formats for drawing the defects on the image. Any application wanting to use the EII Visualizer needs to comply with the meta-data format described below:

For Ingestor's Non-GVA type, a metadata structure sample is:
{
"channels": 3,
"encoding_type": "jpeg",
"height": 1200,
"defects": [
{"type": 0, "tl": [1019, 644], "br": [1063, 700]},
{"type": 0, "tl": [1297, 758], "br": [1349, 796]}
],
"display_info": [{"info":"good", "priority":0}],
"img_handle": "348151d424",
"width": 1920,
"encoding_level": 95
}
wherein `defects` and `display_info` are lists of dicts.

Each entry in the `defects` list is a dictionary that should contain the following keys:

- `type`: the value given to type will be the label id
- `tl`: the value is the top-left x and y co-ordinate of the defect in the image
- `br`: the value is the bottom-right x and y co-ordinate of the defect in the image
Each entry in the `display_info` list is a dictionary that should contain the following keys:

- `info`: the value given will be displayed on the image
- `priority`: based on the priority level (0, 1, or 2), info will be displayed in either green, orange or red:
  - 0: Low priority, info will be displayed in green.
  - 1: Medium priority, info will be displayed in orange.
  - 2: High priority, info will be displayed in red.
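To make the format concrete, here is a small illustrative sketch (not the Visualizer's own code) that draws such `defects` and `display_info` entries onto a frame with OpenCV:

```python
import cv2

# Colors per priority level: 0 = green, 1 = orange, 2 = red (BGR).
PRIORITY_COLORS = {0: (0, 255, 0), 1: (0, 165, 255), 2: (0, 0, 255)}


def draw_metadata(frame, metadata, labels):
    """Draw the defect boxes and display_info strings described by the metadata."""
    for defect in metadata.get("defects", []):
        tl, br = tuple(defect["tl"]), tuple(defect["br"])
        cv2.rectangle(frame, tl, br, (0, 0, 255), 2)
        cv2.putText(frame, labels.get(str(defect["type"]), "unknown"),
                    (tl[0], tl[1] - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    for idx, info in enumerate(metadata.get("display_info", [])):
        color = PRIORITY_COLORS.get(info["priority"], (0, 255, 0))
        cv2.putText(frame, info["info"], (10, 30 + idx * 25),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, color, 2)
    return frame
```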
For Ingestor's GVA type, a metadata structure sample is:
{
"channels": 3,
"gva_meta": [
{"x": 1047, "height": 86, "y": 387, "width": 105, "tensor": [{"label": "", "label_id": 1, "confidence":0.8094226121902466, "attribute":"detection"}]},
{"x": 1009, "height": 341, "y": 530, "width": 176, "tensor": [{"label": "", "label_id": 2, "confidence": 0.9699158668518066, "attribute": "detection"}]}
],
"encoding_type": "jpeg",
"height": 1080,
"img_handle": "7247149a0d",
"width": 1920,
"encoding_level": 95
}
wherein `gva_meta` is a list of dicts.

NOTE: Any data within the list, tuple or dict of the metadata should be of a primitive data type (int, float, string, bool). Refer to the examples given above.
Web Visualizer Module¶
Web Visualizer is a web-based app to view the classified images/metadata coming out of EII.
Steps to build and run web visualizer¶
Please go through the below sections to have web visualizer service built and launch it:
For more details, refer EII core README
Running Visualizer in Browser
The Visualizer is tested on the Chrome browser, so it is better to use the Chrome browser.
WebVisualizer currently supports only 6 parallel streams in the chrome browser per instance.
Running in DEV mode:

- Go to the browser: http://< host ip >:5001

Running in PROD mode:

- Copy 'ca_certificate.pem' from 'build/provision/Certificates/ca' to the home directory '~/' and give appropriate permissions to it as shown below:

      $ sudo cp Certificates/ca/ca_certificate.pem ~
      $ cd ~
      $ sudo chmod 0755 ~/ca_certificate.pem
Import ‘ca_certificate.pem’ from home Directory ‘~/’ to your Browser Certificates.
Steps to Import Certificates
Goto Settings in Chrome
Search Manage Certificates Under Privacy & Security
Select Manage Certificates Option
Under Authorities Tab Click Import Button
With Import Wizard navigate to home directory
Select ca_certificate.pem file
Select All CheckBoxes and Click Import Button.
- Now, in the browser, go to https://< host ip >:5000
- Login Page: use your defined username & password from the etcd config.
NOTE:
The admin has to make sure all the necessary config is set in etcd before starting the web visualizer.
Please clear your browser's cache while switching from `prod` mode to `dev` mode when running `WebVisualizer` in the browser.
Using Labels¶
In order to have the web visualizer label each of the defects on the image, labels in JSON format (with a mapping between the subscribed topic and the text to be displayed) have to be provided in the config.json file, and the builder script has to be run using the below command.
$ python3 builder.py
An example of what this JSON value should look like is shown below. In this case
it is assumed that the classification types are ``0`` and ``1`` and the text labels
to be displayed are ``MISSING`` and ``SHORT`` respectively.
{
"0": "MISSING",
"1": "SHORT"
}
Note
These labels are the mapping for the PCB demo provided in EII’s web visualizer directory. Currently camera1_stream_results consists of pcb demo labeling and camera2_stream_results consists of safety demo labeling. Hence, in config.json, mapping of all the subscribed topics has to be done with pcb demo labeling and safety demo labeling respectively.
"/WebVisualizer/config": {
"username": "admin",
"password": "admin@123",
"dev_port": 5001,
"port": 5000,
"labels" : {
"camera1_stream": {
"0": "MISSING",
"1": "SHORT"
},
"native_safety_gear_stream_results": {
"1": "safety_helmet",
"2": "safety_jacket",
"3": "Safe",
"4": "Violation"
},
"py_safety_gear_stream_results": {
"1": "safety_helmet",
"2": "safety_jacket",
"3": "Safe",
"4": "Violation"
},
"gva_safety_gear_stream_results": {
"1": "safety_helmet",
"2": "safety_jacket",
"3": "Safe",
"4": "Violation"
}
}
}
Metadata Structure¶
The EII WebVisualizer app can decode certain types of meta-data formats for drawing the defects on the image. Any application wanting to use the EII WebVisualizer needs to comply with the meta-data format described below:

For Ingestor's Non-GVA type, a metadata structure sample is:
{
"channels": 3,
"encoding_type": "jpeg",
"height": 1200,
"defects": [
{"type": 0, "tl": [1019, 644], "br": [1063, 700]},
{"type": 0, "tl": [1297, 758], "br": [1349, 796]}
],
"display_info": [{"info":"good", "priority":0}],
"img_handle": "348151d424",
"width": 1920,
"encoding_level": 95
}
wherein `defects` and `display_info` are lists of dicts.

Each entry in the `defects` list is a dictionary that should contain the following keys:

- `type`: the value given to type will be the label id
- `tl`: the value is the top-left x and y co-ordinate of the defect in the image
- `br`: the value is the bottom-right x and y co-ordinate of the defect in the image
Each entry in the `display_info` list is a dictionary that should contain the following keys:

- `info`: the value given will be displayed on the image
- `priority`: based on the priority level (0, 1, or 2), info will be displayed in either green, orange or red:
  - 0: Low priority, info will be displayed in green.
  - 1: Medium priority, info will be displayed in orange.
  - 2: High priority, info will be displayed in red.
For Ingestor's GVA type, a metadata structure sample is:
{
"channels": 3,
"gva_meta": [
{"x": 1047, "height": 86, "y": 387, "width": 105, "tensor": [{"label": "", "label_id": 1, "confidence":0.8094226121902466, "attribute":"detection"}]},
{"x": 1009, "height": 341, "y": 530, "width": 176, "tensor": [{"label": "", "label_id": 2, "confidence": 0.9699158668518066, "attribute": "detection"}]}
],
"encoding_type": "jpeg",
"height": 1080,
"img_handle": "7247149a0d",
"width": 1920,
"encoding_level": 95
}
wherein `gva_meta` is a list of dicts.

NOTE: Any data within the list, tuple or dict of the metadata should be of a primitive data type (int, float, string, bool). Refer to the examples given above.
ImageStore Module¶
The Image Store component of EII comes as a separate container which primarily subscribes to the stream that comes out of the VideoAnalytics app via EII MessageBus and stores the frame into minio for historical analysis.
The high level logical flow of ImageStore is as below:
The messagebus subscriber in ImageStore will subscribe to the VideoAnalytics published classified result (metadata, frame) on the messagebus. The img_handle is extracted out of the metadata and is used as the key and the frame is stored as a value for that key in minio persistent storage.
For historical analysis of the stored classified images, ImageStore starts the messagebus server which provides the read and store interfaces. The payload formats are as follows:

Store interface:

    Request:  map ("command": "store", "img_handle": "$handle_name"), []byte($binaryImage)
    Response: map ("img_handle": "$handle_name", "error": "$error_msg") ("error" is optional and available only in case of error in execution.)

Read interface:

    Request:  map ("command": "read", "img_handle": "$handle_name")
    Response: map ("img_handle": "$handle_name", "error": "$error_msg"), []byte($binaryImage) ("error" is optional and available only in case of error in execution, and $binaryImage is available only in case of a successful read.)
Configuration¶
All the ImageStore module configurations are added into etcd (distributed key-value data store) under `AppName` as mentioned in the environment section of this app's service definition in docker-compose.

If `AppName` is `ImageStore`, then the app's config would look like the below for the `/ImageStore/config` key in Etcd:
"/ImageStore/config": {
"minio":{
"accessKey":"admin",
"secretKey":"password",
"retentionTime":"1h",
"retentionPollInterval":"60s",
"ssl":"false"
}
}
Detailed description on each of the keys used¶
| Key | Description | Possible Values | Required/Optional |
| --- | --- | --- | --- |
| accessKey | Username required to access Minio DB | Any suitable value | Required |
| secretKey | Password required to access Minio DB | Any suitable value | Required |
| retentionTime | The retention parameter specifies the retention policy to apply for the images stored in Minio DB. In case of infinite retention time, set it to "-1" | Suitable duration string value as mentioned at https://golang.org/pkg/time/#ParseDuration | Required |
| retentionPollInterval | Used to set the time interval for checking images for expiration. Expired images will become candidates for deletion and no longer retained. In case of infinite retention time, this attribute will be ignored | Suitable duration string value as mentioned at https://golang.org/pkg/time/#ParseDuration | Required |
| ssl | If "true", establishes a secure connection with Minio DB, else a non-secure connection | "true" or "false" | Required |
For more details on Etcd secrets and messagebus endpoint configuration, visit Etcd_Secrets_Configuration.md and MessageBus Configuration respectively.
Configuration¶
All the InfluxDBConnector module configurations are added into etcd (distributed key-value data store) under `AppName` as mentioned in the environment section of this app's service definition in docker-compose.

If `AppName` is `InfluxDBConnector`, then the app's config would look like the below for the `/InfluxDBConnector/config` key in Etcd:
"influxdb": {
"retention": "1h30m5s",
"username": "admin",
"password": "admin123",
"dbname": "datain",
"ssl": "True",
"verifySsl": "False",
"port": "8086"
}
In case of nested JSON data, by default InfluxDBConnector will flatten the nested JSON and push the flat data to InfluxDB. In order to avoid the flattening of any particular nested key, mention that key under ignore_keys in the config.json file. Currently the "defects" key is ignored from flattening. Every key to be ignored has to be on a new line. A rough illustration of this flattening is given after the example below.
for example,
ignore_keys = [ "Key1", "Key2", "Key3" ]
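For illustration only, the flattening described above behaves roughly like the sketch below; the real connector may use a different key-naming convention, so both the separator and the function itself are assumptions, not the connector's actual code:

```python
def flatten(data, ignore_keys=("defects",), parent="", sep="."):
    """Rough illustration of flattening nested JSON while skipping ignore_keys."""
    flat = {}
    for key, value in data.items():
        name = f"{parent}{sep}{key}" if parent else key
        if key in ignore_keys or not isinstance(value, dict):
            flat[name] = value  # leaf value, or a key explicitly left unflattened
        else:
            flat.update(flatten(value, ignore_keys, name, sep))
    return flat


print(flatten({"a": 1, "b": {"c": 2}, "defects": [{"type": 0}]}))
# {'a': 1, 'b.c': 2, 'defects': [{'type': 0}]}
```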
By default, all the keys in the data schema will be pushed to InfluxDB as fields. In case tags are present in the data schema, they can be mentioned in the config.json file; then the data pushed to InfluxDB will have both fields and tags. Currently, no tags are present in the data schema and tag_keys is kept blank in the config file.
for example,
tag_keys = [ "Tag1", "Tag2" ]
For more details on Etcd secrets and messagebus endpoint configuration, visit Etcd_Secrets_Configuration.md and MessageBus Configuration respectively.