.. role:: raw-html-m2r(raw)
   :format: html

Building standalone version of Edge Video Analytics Microservice
================================================================
The standalone version of the *Edge Video Analytics Microservice* does not contain any EII library dependencies. As a result, this mode does not support the ``gst-udfloader`` plugin.

Steps to build the standalone image
-----------------------------------
* Clone the *Edge Video Analytics Microservice* repository:

  .. code-block:: bash

     $ git clone https://github.com/intel-innersource/applications.services.esh.edge-video-analytics-microservice.git
     $ cd applications.services.esh.edge-video-analytics-microservice

* Run the ``build.sh`` script to build the container image:

  .. code-block:: bash

     $ export TAG="1.0.0-standalone"
     $ docker/standalone/build.sh $TAG

Steps to run the standalone image
---------------------------------
* Pull the container image from the registry:

  .. code-block:: bash

     $ export TAG="1.0.0-standalone"
     $ docker pull intel/edge_video_analytics_microservice:${TAG}

* Run the image:

  .. code-block:: bash

     $ docker run -itd --rm \
         --privileged \
         --device=/dev:/dev \
         --device-cgroup-rule='c 189:* rmw' \
         --device-cgroup-rule='c 209:* rmw' \
         --group-add 109 \
         -p 8080:8080 \
         -p 8554:8554 \
         -e ENABLE_RTSP=true \
         -e RTSP_PORT=8554 \
         -e ENABLE_WEBRTC=true \
         -e WEBRTC_SIGNALING_SERVER=ws://localhost:8443 \
         -e RUN_MODE=EVA \
         -e DETECTION_DEVICE=CPU \
         -e CLASSIFICATION_DEVICE=CPU \
         intel/edge_video_analytics_microservice:${TAG}
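
Once the container is up, a quick way to confirm the microservice is reachable is to query the pipeline server's REST API, which lists the loaded pipeline definitions at ``GET /pipelines``. This is a sketch assuming the default REST port 8080 mapped above:

```shell
# Quick health check: list the pipeline definitions loaded by the server.
# Port 8080 is the REST port mapped in the docker run command above;
# --max-time keeps the check from hanging while the container starts up.
curl --silent --max-time 5 localhost:8080/pipelines || echo "pipeline server not reachable yet"
```

If the server is still starting, re-run the check after a few seconds.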
Adding custom pipelines and models
----------------------------------
Custom pipelines and models can be made available to the pipeline server by mounting them into the container.

* Prepare models

  Use the model downloader `available here `_ to download new models. Point ``MODEL_DIR`` to the directory containing the new models. The following section assumes that the new models are available under ``$(pwd)/models``.

  .. code-block:: bash

     $ export MODEL_DIR=$(pwd)/models

* Prepare pipelines

  Use `these docs `_ to get started with defining new pipelines. Once the new pipelines have been defined, point ``PIPELINE_DIR`` to the directory containing the new pipelines. The following section assumes that the new pipelines are available under ``$(pwd)/pipelines``.

  .. code-block:: bash

     $ export PIPELINE_DIR=$(pwd)/pipelines

* Run the image with the new models and pipelines mounted into the container:

  .. code-block:: bash

     $ docker run -itd --rm \
         --privileged \
         --device=/dev:/dev \
         --device-cgroup-rule='c 189:* rmw' \
         --device-cgroup-rule='c 209:* rmw' \
         --group-add 109 \
         -p 8080:8080 \
         -p 8554:8554 \
         -e ENABLE_RTSP=true \
         -e RTSP_PORT=8554 \
         -e ENABLE_WEBRTC=true \
         -e WEBRTC_SIGNALING_SERVER=ws://localhost:8443 \
         -e RUN_MODE=EVA \
         -e DETECTION_DEVICE=CPU \
         -e CLASSIFICATION_DEVICE=CPU \
         -v ${MODEL_DIR}:/home/pipeline-server/models \
         -v ${PIPELINE_DIR}:/home/pipeline-server/pipelines \
         intel/edge_video_analytics_microservice:${TAG}

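
As a sketch of the layout the pipeline server typically expects, a pipeline requested as ``/pipelines/<name>/<version>`` resolves to ``pipelines/<name>/<version>/pipeline.json`` under the mounted directory. The name ``my_pipeline`` and the pass-through template below are placeholders for illustration, not pipelines shipped with the image:

```shell
# Hypothetical example: a pipeline reachable as my_pipeline/1 lives at
# pipelines/my_pipeline/1/pipeline.json under the mounted PIPELINE_DIR.
PIPELINES="${PIPELINE_DIR:-$(pwd)/pipelines}"
mkdir -p "${PIPELINES}/my_pipeline/1"
cat > "${PIPELINES}/my_pipeline/1/pipeline.json" <<'EOF'
{
    "type": "GStreamer",
    "template": [
        "uridecodebin name=source",
        " ! videoconvert",
        " ! appsink name=destination"
    ],
    "description": "Minimal pass-through pipeline (placeholder)"
}
EOF
# Validate the JSON before mounting the directory into the container.
python3 -m json.tool "${PIPELINES}/my_pipeline/1/pipeline.json" > /dev/null && echo "pipeline.json OK"
```

Validating the file up front avoids a server that silently skips a malformed pipeline definition at startup.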
Starting pipelines
------------------

Pipelines can be triggered through the *pipeline server's* REST endpoints. An example cURL command is shown below; the output is available as an RTSP stream at ``rtsp://<HOST_IP>:8554/pipeline-server``.

.. code-block:: bash

   $ curl localhost:8080/pipelines/object_classification/vehicle_attributes -X POST -H \
     'Content-Type: application/json' -d \
     '{
       "source": {
           "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true",
           "type": "uri"
       },
       "destination": {
           "metadata": {
               "type": "file",
               "path": "/tmp/results.jsonl",
               "format": "json-lines"
           },
           "frame": {
               "type": "rtsp",
               "path": "pipeline-server"
           }
       },
       "parameters": {
           "detection-device": "CPU"
       }
   }'

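
The POST above returns the id of the newly created pipeline instance, which later status or stop requests key on. The capture-and-query pattern can be sketched as follows; the instance id below is a stand-in so the shape of the follow-up request can be shown, and the ``/status`` endpoint is assumed from the pipeline server's REST API:

```shell
# With a live server, capture the instance id returned by the POST, e.g.
#   INSTANCE=$(curl -s localhost:8080/pipelines/object_classification/vehicle_attributes \
#       -X POST -H 'Content-Type: application/json' -d @request.json)
# A placeholder id is used here so the status query can be illustrated:
INSTANCE="0fae2fcc7a9b"
echo "instance id: ${INSTANCE}"
# Status query against a running server (the reported state is typically
# QUEUED, RUNNING, COMPLETED, or ERROR):
#   curl localhost:8080/pipelines/object_classification/vehicle_attributes/${INSTANCE}/status
```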
``gencamsrc`` sample pipeline
-----------------------------

* Add the pipeline given below at ``$(pwd)/pipelines/cameras/gencam/pipeline.json``:

  .. code-block:: json

     {
         "type": "GStreamer",
         "template": [
             "gencamsrc serial= name=source",
             " ! videoconvert",
             " ! video/x-raw,format=BGR",
             " ! appsink name=destination"
         ],
         "description": "Pipeline"
     }

* Run the container as root (``-u 0``) with the environment variable ``GENICAM=Matrix_Vision`` set, and make sure ``/run/udev`` is mounted:

  .. code-block:: bash

     $ docker run -it --rm \
         -u 0 \
         --privileged \
         --device=/dev:/dev \
         --device-cgroup-rule='c 189:* rmw' \
         --device-cgroup-rule='c 209:* rmw' \
         --group-add 109 \
         -p 8080:8080 \
         -p 8554:8554 \
         -e ENABLE_RTSP=true \
         -e RTSP_PORT=8554 \
         -e ENABLE_WEBRTC=true \
         -e WEBRTC_SIGNALING_SERVER=ws://localhost:8443 \
         -e RUN_MODE=EVA \
         -e DETECTION_DEVICE=CPU \
         -e CLASSIFICATION_DEVICE=CPU \
         -e GENICAM="Matrix_Vision" \
         -e GST_DEBUG="1,gencamsrc:2" \
         -v ${PIPELINE_DIR}:/home/pipeline-server/pipelines \
         -v /run/udev:/run/udev:ro \
         intel/edge_video_analytics_microservice:${TAG}

* Trigger the pipeline using the cURL command below; the output is available as an RTSP stream at ``rtsp://<HOST_IP>:8554/pipeline-server``.

  .. code-block:: bash

     $ curl localhost:8080/pipelines/cameras/gencam -X POST -H \
       'Content-Type: application/json' -d \
       '{
         "source": {
             "element": "gencamsrc",
             "type": "gst"
         },
         "destination": {
             "metadata": {
                 "type": "file",
                 "path": "/tmp/results.jsonl",
                 "format": "json-lines"
             },
             "frame": {
                 "type": "rtsp",
                 "path": "pipeline-server"
             }
         }
     }'
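
Both examples above write frame metadata to ``/tmp/results.jsonl`` inside the container as JSON Lines (one JSON object per line). In practice the file would first be copied out with ``docker cp <container>:/tmp/results.jsonl .``; the record below is illustrative only, as the real fields depend on the pipeline's inference elements:

```shell
# Illustrative record only - real fields depend on the pipeline's elements.
cat > results.jsonl <<'EOF'
{"objects": [{"detection": {"label": "vehicle", "confidence": 0.93}}], "timestamp": 0}
EOF
# Pretty-print the first record for a quick look:
head -n 1 results.jsonl | python3 -m json.tool
```

For a running pipeline, watching the file with ``tail -f`` gives a live view of detections as they are produced.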