Video Benchmarking Tool

These scripts automate running benchmarking tests and collecting performance data. The collected data includes the FPS of each video stream, along with CPU %, memory %, and memory read/write bandwidth.

The Processor Counter Monitor (PCM) is required for measuring memory read/write bandwidth; it must be downloaded and built separately.

If you do not have PCM on your system, those columns will be blank in the output.csv.
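If PCM is not yet set up, fetching and building it looks roughly like the sketch below; the repository location and the make-based build that produces pcm.x are assumptions here and may differ for newer PCM releases:

```sh
# Assumption: PCM sources come from the public intel/pcm repository on GitHub.
git clone https://github.com/intel/pcm.git
cd pcm
# Older PCM releases build pcm.x with a plain make; newer releases use a cmake-based build.
make -j"$(nproc)"
ls pcm.x   # PCM_HOME passed to the benchmarking script should point at this directory
```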

Run Video Benchmarking Tool with EdgeVideoAnalyticsMicroservice

Prerequisites:

  1. The EdgeVideoAnalyticsMicroservice (EVAM) service must already be built on the system, as the benchmarking tool only starts the services and does not build them.

  2. Make sure an RTSP server is running on a different system on the network.

Steps for running a benchmarking test case:

  1. Start the RTSP server on a different system on the network:

    ./stream_rtsp.sh <number-of-streams> <starting-port-number> <bitrate> <width> <height> <framerate>
    

    For example:

    ./stream_rtsp.sh 16 8554 4096 1920 1080 40
    
  2. Update the RTSP camera IP in config.json ([WORK_DIR]/IEdgeInsights/tools/Benchmarking/video-benchmarking-tool/evam_sample_test/config.json) and in the RTSP_CAMERA_IP field of .env ([WORK_DIR]/IEdgeInsights/.env).
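
    For example (the IP address below is only an illustrative placeholder, and the exact config.json layout may differ in your release):

    # In [WORK_DIR]/IEdgeInsights/.env
    RTSP_CAMERA_IP=192.168.1.100

    # In evam_sample_test/config.json, point the RTSP source URI at the same server and port,
    # e.g. rtsp://192.168.1.100:8554/<stream-name>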

  3. Run evam_execute_test.sh with the desired benchmarking config:

```sh
USAGE:
    ./evam_execute_test.sh TEST_DIR STREAMS SLEEP PCM_HOME [EII_HOME]

WHERE:
    TEST_DIR  - The absolute path to the directory containing the services.yml for the services to be tested, along with config.json and docker-compose.yml
    STREAMS   - The number of streams (1, 2, 4, 8, 16)
    SLEEP     - The number of seconds to wait after the containers come up
    PCM_HOME  - The absolute path to the PCM repository where pcm.x is built
    EII_HOME  - [Optional] The absolute path to the EII home directory, if running from a non-default location
```


For example:
```sh
sudo -E ./evam_execute_test.sh $PWD/evam_sample_test 16 60 /opt/intel/pcm [WORK_DIR]/IEdgeInsights
```
  4. The execution log, performance logs, and output.csv are saved in TEST_DIR/<timestamp>/ so that the same test case can be run multiple times without overwriting earlier output. If any errors occur during the test, check execution.log for details. For a successful test, view the results in final_output.csv.
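
As an optional quick check after a run, the latest results can be inspected along these lines (the paths are illustrative; the timestamped directory name is generated per run):

```sh
# Pick the most recently created <timestamp> directory under the test directory.
RESULTS_DIR=$(ls -td "$PWD"/evam_sample_test/*/ | head -1)
cat "${RESULTS_DIR}execution.log"                            # errors, if any, are logged here
column -s, -t < "${RESULTS_DIR}final_output.csv" | less -S   # per-stream FPS, CPU %, memory %, bandwidth
```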