###############
Advanced Guide
###############

Adding New Services to Intel® Edge Insights System Stack
----------------------------------------------------------

This section provides information about adding a service, subscribing to the EdgeVideoAnalyticsMicroservice(\ ``[WORK_DIR]/IEdgeInsights/EdgeVideoAnalyticsMicroservice``\ ), and publishing it on a new port.

Add a service to the Intel® Edge Insights System stack as a new directory in the IEdgeInsights(\ ``[WORK_DIR]/IEdgeInsights/``\ ) directory. The Builder registers and runs any service present in its own directory in the IEdgeInsights(\ ``[WORK_DIR]/IEdgeInsights/``\ ) directory. The directory should contain the following:

* A ``docker-compose.yml`` file to deploy the service as a docker container. The ``AppName`` is present in the ``environment`` section of the ``docker-compose.yml`` file. When the service is added to the main ``build/eii_config.json``\ , the ``AppName`` is prepended to its ``config`` and ``interfaces`` keys as ``/AppName/config`` and ``/AppName/interfaces``.
* A ``config.json`` file that contains the config required for the service to run after it is deployed. The ``config.json`` consists of the following:

  * A ``config`` section, which includes the configuration-related parameters that are required to run the application.
  * An ``interfaces`` section, which describes how the service interacts with the other services of the Intel® Edge Insights System stack.

.. note:: For more information on adding new Intel® Edge Insights System services, refer to the Intel® Edge Insights System sample apps at `Samples `_ written in C++, Python, and Golang using the Intel® Edge Insights System core libraries.

The following example shows how to write the ``config.json`` for a new service that subscribes to **EdgeVideoAnalyticsMicroservice** and publishes on a new port:

.. code-block:: javascript

   {
       "config": {
           "paramOne": "Value",
           "paramTwo": [1, 2, 3],
           "paramThree": 4000,
           "paramFour": true
       },
       "interfaces": {
           "Subscribers": [
               {
                   "Name": "default",
                   "Type": "zmq_tcp",
                   "EndPoint": "127.0.0.1:65114",
                   "PublisherAppName": "EdgeVideoAnalyticsMicroservice",
                   "Topics": [
                       "edge_video_analytics_results"
                   ]
               }
           ],
           "Publishers": [
               {
                   "Name": "default",
                   "Type": "zmq_tcp",
                   "EndPoint": "127.0.0.1:65113",
                   "Topics": [
                       "publish_stream"
                   ],
                   "AllowedClients": [
                       "ClientOne",
                       "ClientTwo",
                       "ClientThree"
                   ]
               }
           ]
       }
   }

The ``config.json`` file consists of the following keys and values:

* The value of the ``config`` key is the config required by the service to run.
* The value of the ``interfaces`` key is the config required by the service to interact with the other services of the Intel® Edge Insights System stack over the Message Bus.
* The ``Subscribers`` value in the ``interfaces`` section denotes that this service subscribes to the stream published by the service named in ``PublisherAppName``\ , on the endpoint given in ``EndPoint``\ , for the topics listed in ``Topics``.
* The ``Publishers`` value in the ``interfaces`` section denotes that this service publishes a stream of data after obtaining and processing it from ``EdgeVideoAnalyticsMicroservice``. The stream is published on the endpoint given in ``EndPoint`` for the topics listed in ``Topics``.
* The services listed in ``AllowedClients`` are the only clients that can subscribe to the published stream, if it is published securely over the Message Bus.
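Alongside the ``config.json`` shown above, the service directory needs a ``docker-compose.yml`` whose ``AppName`` matches the key used in ``build/eii_config.json``. The following is only a minimal sketch: the service name ``MyNewService``, the image and container names, and the exact compose fields are illustrative placeholders, not a required schema.

.. code-block:: sh

   # Scaffold a hypothetical service "MyNewService" so the Builder picks it up on its next run
   cd [WORK_DIR]/IEdgeInsights
   mkdir -p MyNewService
   cat > MyNewService/docker-compose.yml << 'EOF'
   services:
     ia_my_new_service:
       image: ia_my_new_service:latest
       container_name: ia_my_new_service
       hostname: ia_my_new_service
       environment:
         AppName: "MyNewService"   # must match the /AppName/ prefix used in build/eii_config.json
   EOF
   # Add MyNewService/config.json with its "config" and "interfaces" sections, then rerun
   # builder.py so /MyNewService/config and /MyNewService/interfaces are merged into
   # build/eii_config.json.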
.. note::

   * In addition to the ``Subscribers`` and ``Publishers`` keys, Intel® Edge Insights System services can also have ``Servers`` and ``Clients`` interface keys.
   * For more information on the ``interfaces`` key responsible for the Message Bus endpoint configuration, refer to :doc:`README `.
   * For the etcd secrets configuration, in the new Intel® Edge Insights System service or app ``docker-compose.yml`` file, add the following volume mounts with the right ``AppName`` env value:

     .. code-block:: yaml

        ...
        volumes:
          - ./Certificates/[AppName]:/run/secrets/[AppName]:ro
          - ./Certificates/rootca/cacert.pem:/run/secrets/rootca/cacert.pem:ro

Running With Multiple Use Cases
--------------------------------

The Builder is configured through a yml file that lists the services to include. You can specify each service either as a path relative to ``IEdgeInsights`` or as a full path. To include only a subset of the services in the Intel® Edge Insights System stack, pass the ``-f`` or ``yml_file`` flag to ``builder.py``. Examples of yml files for different use cases are available, for example:

* Azure(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/video-streaming-azure.yml``\ )

The following example shows running Builder with the ``-f`` flag:

.. code-block:: sh

   python3 builder.py -f usecases/video-streaming.yml

* **Main Use Cases**

  .. list-table::
     :header-rows: 1

     * - Use case
       - yaml file
     * - Video + Time Series
       - build/usecases/video-timeseries.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/video-timeseries.yml``\ )
     * - Video
       - build/usecases/video.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/video.yml``\ )
     * - Time Series
       - build/usecases/time-series.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/time-series.yml``\ )

* **Video Pipeline Sub Use Cases**

  .. list-table::
     :header-rows: 1

     * - Use case
       - yaml file
     * - Video streaming with EVAM
       - build/usecases/video-streaming-evam.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/video-streaming-evam.yml``\ )
     * - Video streaming and historical
       - build/usecases/video-streaming-evam-datastore.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/video-streaming-evam-datastore.yml``\ )
     * - Video streaming with DataCollection
       - build/usecases/video-streaming-dcaas-evam-datastore.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/video-streaming-dcaas-evam-datastore.yml``\ )
     * - Video streaming with ModelRegistry
       - build/usecases/evam-datastore-model-registry.yml(\ ``[WORK_DIR]/IEdgeInsights/build/usecases/evam-datastore-model-registry.yml``\ )
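If none of the bundled use case files matches your deployment, a custom yml file can be assembled in the same ``AppContexts`` format and passed to the Builder. The sketch below is illustrative only: the file name ``custom.yml`` and the chosen service paths are placeholders for the services you actually need.

.. code-block:: sh

   cd [WORK_DIR]/IEdgeInsights/build
   cat > usecases/custom.yml << 'EOF'
   AppContexts:
   # placeholder service paths, relative to IEdgeInsights
   - EdgeVideoAnalyticsMicroservice/eii
   - DataStore
   EOF
   # Generate the consolidated docker-compose.yml and eii_config.json from the custom list
   python3 builder.py -f usecases/custom.yml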
When you run the multi-instance config, a ``build/multi_instance`` directory is created in the build directory. Based on the number of ``video_pipeline_instances`` specified, that many directories of EdgeVideoAnalyticsMicroservice are created in the ``build/multi_instance`` directory.

If required, you can generate the multi-instance ``docker-compose.yml`` and ``config.json`` files using the Builder. The ``-v`` or ``video_pipeline_instances`` flag of the Builder creates the multi-stream boilerplate config for the ``docker-compose.yml`` and ``eii_config.json`` files. The following example shows running the Builder to generate the multi-instance boilerplate config for 3 streams of the **video-streaming** use case:

.. code-block:: sh

   python3 builder.py -v 3 -f usecases/video-streaming-evam.yml

Using the previous command for 3 instances, the ``build/multi_instance`` directory consists of the following directories:

* EdgeVideoAnalyticsMicroservice1
* EdgeVideoAnalyticsMicroservice2
* EdgeVideoAnalyticsMicroservice3

Initially, each directory has the default ``config.json`` and ``docker-compose.yml`` files that are present within the ``EdgeVideoAnalyticsMicroservice/eii`` directory.

.. code-block:: example

   ./build/multi_instance/
   |-- EdgeVideoAnalyticsMicroservice1
   |   |-- config.json
   |   `-- docker-compose.yml
   |-- EdgeVideoAnalyticsMicroservice2
   |   |-- config.json
   |   `-- docker-compose.yml
   |-- EdgeVideoAnalyticsMicroservice3
   |   |-- config.json
   |   `-- docker-compose.yml

You can edit the config of each of these streams within the ``build/multi_instance`` directory. To generate the consolidated ``docker-compose.yml`` and ``eii_config.json`` files, rerun the ``builder.py`` command, as shown in the sketch after the following notes.

.. note::

   * The multi-instance feature of Builder works only for the video pipeline, that is, the **usecases/video-streaming.yml** and **usecases/video-streaming-evam.yml** use cases, and not with any other use case yml files such as **usecases/video-streaming-storage.yml**. It also does not work without the ``-f`` switch. The previous example works with any positive number for ``-v``.
   * If you are running the multi-instance config for the first time, it is recommended not to change the default ``config.json`` and ``docker-compose.yml`` files in the ``EdgeVideoAnalyticsMicroservice/eii`` directory.
   * If you are not running the multi-instance config for the first time, the existing ``config.json`` and ``docker-compose.yml`` files in the ``build/multi_instance`` directory are used to generate the consolidated ``eii_config.json`` and ``docker-compose.yml`` files. If you want to use the ``config.json`` and ``docker-compose.yml`` files from the ``EdgeVideoAnalyticsMicroservice/eii`` directory, delete the ``build/multi_instance`` directory.
   * The ``docker-compose.yml`` files in the ``build/multi_instance`` directory have the updated service_name, container_name, hostname, AppName, ports, and secrets for the respective instance.
   * The ``config.json`` file in the ``build/multi_instance`` directory has the following:

     * the updated Name, Type, Topics, Endpoint, PublisherAppname, ServerAppName, and AllowedClients for the interfaces section.
     * the incremented RTSP port number for the config section of the respective instance.

   * Ensure that all containers are down before running the multi-instance configuration. Run the ``docker compose down -v`` command before running the ``builder.py`` script for the multi-instance configuration.
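A minimal sketch of the edit-and-regenerate cycle described above; the instance number, the edited file, and the use case file are only examples:

.. code-block:: sh

   # Edit a per-instance config, regenerate the consolidated files, then redeploy
   cd [WORK_DIR]/IEdgeInsights/build
   vi multi_instance/EdgeVideoAnalyticsMicroservice2/config.json   # adjust this stream's config
   docker compose down -v                                          # ensure all containers are down
   python3 builder.py -v 3 -f usecases/video-streaming-evam.yml    # regenerate eii_config.json and docker-compose.yml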
Video Analytics Pipeline
------------------------

.. figure:: image/vision-analytics.png
   :scale: 100 %

   Figure 1. Vision Ingestion and Analytics Workflow

Timeseries Analytics Pipeline
-----------------------------

.. figure:: image/timeseries-analytics.png
   :scale: 100 %

   Figure 1. Time Series Ingestion and Analytics Workflow

.. include:: ../IEdgeInsights/list-of-services.rst

Sample Apps
-----------

This section provides more information about the Intel® Edge Insights System sample apps and how to use the core libraries packages like Utils, Message Bus, and ConfigManager in various flavors of Linux such as Ubuntu operating systems or docker images for programming languages such as C++, Go, and Python.

The following table shows the supported Linux flavors or docker images and the programming languages that support the sample apps:

.. list-table::
   :header-rows: 1

   * - Linux Flavor
     - Languages
   * - ``Ubuntu``
     - ``C++, Go, Python``

The sample apps are classified as ``publisher`` and ``subscriber`` apps. For more information, refer to the following:

* `Publisher `_
* `Subscriber `_

Run the Sample Apps
^^^^^^^^^^^^^^^^^^^

In the default scenario, the sample custom UDF containers are not mandatory containers to run. The ``builder.py`` script runs the ``sample-apps.yml`` from the build/usecases(\ ``[WORK_DIR]/IEdgeInsights/build/usecases``\ ) directory and adds all the sample apps containers. Refer to the following list to view the details of the sample apps containers:

.. code-block:: yml

   AppContexts:
   # CPP sample apps for Ubuntu operating systems or docker images
   - Samples/publisher/cpp/ubuntu
   - Samples/subscriber/cpp/ubuntu
   # Python sample apps for Ubuntu operating systems or docker images
   - Samples/publisher/python/ubuntu
   - Samples/subscriber/python/ubuntu
   # Go sample apps for Ubuntu operating systems or docker images
   - Samples/publisher/go/ubuntu
   - Samples/subscriber/go/ubuntu

#. In the ``[WORKDIR]/IEdgeInsights/build/.env`` file, update the ``PKG_SRC`` variable from ``http::8888/eis-v1.0`` to ``http::8888/latest``.

#. To run the ``sample-apps.yml`` file, execute the following command:

   .. code-block:: sh

      cd [WORKDIR]/IEdgeInsights/build
      python3 builder.py -f ./usecases/sample-apps.yml

#. Refer to the `Build Intel® Edge Insights System stack `_ and `Run Intel® Edge Insights System service `_ sections to build and run the sample apps; a brief sketch of that cycle follows.
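   Assuming the consolidated config has been generated as above, a typical check that the sample app containers came up might look like the following sketch; the ``publisher``/``subscriber`` name filter is a guess, so adjust it to the container names generated in your ``docker-compose.yml``.

   .. code-block:: sh

      cd [WORKDIR]/IEdgeInsights/build
      docker compose build                              # build the generated services
      docker compose up -d                              # bring the stack up in the background
      docker ps | grep -i -E 'publisher|subscriber'     # name filter is a guess; adjust as needed
      docker logs -f <sample_app_container_name>        # follow one sample app's logs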
Intel® In-Band Manageability
----------------------------

Intel® In-Band Manageability enables software updates and deployment from cloud to device. This includes the following:

* Software over the air (SOTA)
* Firmware update over the air (FOTA)
* Application over the air (AOTA) and system operations

The **AOTA** update enables cloud to edge manageability of application services running on the Intel® Edge Insights System enabled systems through Intel® In-Band Manageability.

For the Intel® Edge Insights System use case, only the AOTA features from Intel® In-Band Manageability are validated and supported through the Azure\* and ThingsBoard\* cloud-based management front end services. Based on your preference, you can use Azure\* or ThingsBoard\*.

The following sections provide information about:

* Installing Intel® In-Band Manageability
* Setting up Azure\* and ThingsBoard\*
* Establishing connectivity with the target systems
* Updating applications on systems

.. _Install Device Manageability:

Installing Intel® In-Band Manageability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Refer to the steps in ``edge_insights_system/IntelR_Edge_Insights_System_/manageability/README.md`` to install Intel® In-Band Manageability and configure the Azure\* and ThingsBoard\* servers with Intel® Edge Insights System.

Known Issues
~~~~~~~~~~~~

* The ThingsBoard\* server fails to connect with devices after provisioning TC.
* The ThingsBoard\* server setup fails on a fresh server.

Intel® Edge Insights System Uninstaller
---------------------------------------

The Intel® Edge Insights System uninstaller script automatically removes all the Intel® Edge Insights System Docker configuration that is installed on a system. The uninstaller performs the following tasks:

* Stops and removes all the Intel® Edge Insights System running and stopped containers.
* Removes all the Intel® Edge Insights System docker volumes.
* Removes all the Intel® Edge Insights System docker images [Optional].
* Removes the Intel® Edge Insights System install directory.

Run the uninstaller script from the ``[WORKDIR]/IEdgeInsights/build/`` directory, as shown in the following examples:

* Run the following command to use the help option:

  .. code-block:: sh

     ./eii_uninstaller.sh -h

  The output of the above command is shown below:

  .. code-block:: sh

     Usage: ./eii_uninstaller.sh [-h] [-d]

     This script uninstalls the previous Intel® Edge Insights System version.

     Where:
         -h show the help
         -d triggers the deletion of docker images (by default it will not trigger)

* Run the following command to delete the Intel® Edge Insights System containers and volumes:

  .. code-block:: sh

     ./eii_uninstaller.sh

* Run the following command to delete the Intel® Edge Insights System containers, volumes, and images:

  .. code-block:: sh

     ./eii_uninstaller.sh -d

The commands in the example delete the installed Intel® Edge Insights System containers, volumes, and all the docker images.
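To confirm that the uninstall completed, a quick check can be run afterwards; the ``ia`` filter matches the Intel® Edge Insights System container and image naming used elsewhere in this guide.

.. code-block:: sh

   docker ps -a | grep ia      # should list no Intel® Edge Insights System containers
   docker images | grep ia     # should list no images (only when -d was used)
   docker volume ls            # should list no leftover Intel® Edge Insights System volumes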
Debug
-----

Debugging Options
~~~~~~~~~~~~~~~~~

Perform the following steps for debugging:

#. Run the following command to check if all the Intel® Edge Insights System images are built successfully:

   .. code-block:: sh

      docker images|grep ia

#. Run the following command to check if all the dependency containers and the Intel® Edge Insights System containers are up and running:

   .. code-block:: sh

      docker ps

#. If the build fails due to no internet connectivity, ensure that the proxy settings are correctly configured and restart the docker service.

#. Run the ``docker ps`` command to list all the enabled containers that are included in the ``docker-compose.yml`` file.

#. From the EdgeVideoAnalyticsMicroservice visualizer, check if the default video pipeline with Intel® Edge Insights System is working fine.

#. The ``/opt/intel/eii`` root directory gets created. This is the installation path for Intel® Edge Insights System:

   * ``data/`` - stores the backup data for persistent imagestore and influxdb
   * ``sockets/`` - stores the IPC ZMQ socket files

The following table displays useful docker compose and docker commands:

.. list-table::
   :header-rows: 1

   * - Command
     - Description
   * - ``docker compose build``
     - Builds all the service containers
   * - ``docker compose build [serv_cont_name]``
     - Builds a single service container
   * - ``docker compose down``
     - Stops and removes the service containers
   * - ``docker compose up -d``
     - Brings up the service containers by picking up the changes done in the ``docker-compose.yml`` file
   * - ``docker ps``
     - Checks the running containers
   * - ``docker ps -a``
     - Checks the running and stopped containers
   * - ``docker stop $(docker ps -a -q)``
     - Stops all the containers
   * - ``docker rm $(docker ps -a -q)``
     - Removes all the containers. This is useful when you run into the "container is already in use" issue
   * - ``[docker compose cli]``
     - For more information refer to the `docker documentation `_
   * - ``[docker compose reference]``
     - For more information refer to the `docker documentation `_
   * - ``[docker cli]``
     - For more information refer to the `docker documentation `_
   * - ``docker compose run --no-deps [service_cont_name]``
     - Runs a docker image separately. For example, ``docker compose run --name ia_edge_video_analytics_microservice --no-deps ia_edge_video_analytics_microservice`` runs the EdgeVideoAnalyticsMicroservice container, and the ``--no-deps`` switch does not bring up its dependencies mentioned in the ``docker-compose`` file. If the container does not launch, there could be an issue with the entrypoint program. You can override it by providing the extra switch ``--entrypoint /bin/bash`` before the service container name in the ``docker compose run`` command. This lets you access the container and run the actual entrypoint program from the container's terminal to root-cause the issue. If the container is running and you want to access it, run ``docker compose exec [service_cont_name] /bin/bash`` or ``docker exec -it [cont_name] /bin/bash``
   * - ``docker logs -f [cont_name]``
     - Checks the logs of a container
   * - ``docker compose logs -f``
     - Shows all the docker compose service container logs at once
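The ``docker compose run`` debugging flow described in the table, spelled out as a short sketch; the service name is the same example used in the table.

.. code-block:: sh

   cd [WORKDIR]/IEdgeInsights/build
   # Start one service without its dependencies and replace its entrypoint with a shell
   docker compose run --no-deps --entrypoint /bin/bash ia_edge_video_analytics_microservice
   # Inside the container, run the actual entrypoint manually to root-cause startup issues.
   # For a container that is already up, attach to it instead:
   docker exec -it ia_edge_video_analytics_microservice /bin/bash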
Troubleshooting Guide
~~~~~~~~~~~~~~~~~~~~~

* For any troubleshooting tips related to the Intel® Edge Insights System configuration and installation, refer to the :doc:`troubleshoot ` guide.

* All the Intel® Edge Insights System services are independently buildable and deployable, so when you run ``docker compose up`` for all the Intel® Edge Insights System microservices, the order in which they come up is not controlled. Because the middleware contains many publisher and subscriber microservices, a publisher can come up before its subscriber, or the subscriber can come up only slightly after the publisher. In these scenarios, data published before the subscriber is up can be lost because the subscriber is not yet ready to receive it. The solution is to restart the publisher after you are sure that the intended subscriber is up.

* Microservices integrated with the license library may fail with a `No Feature License Available` issue (this shows up in the container logs).

  **Solution:** This is an intermittent issue. The workaround is to restart the failing container with the ``docker restart`` command.

* The helm charts installation in PROD mode bundled in the Enhanced package may fail to bring up a few pods.

  **Solution:** The helm charts are available locally on the node at ``/edge_insights_system/Intel_Edge_Insights_System_1.5_Enhanced/eis-vision-timeseries-1.5.0/eii-deploy``. Update the following files, relative to that location, with the ``DEV_MODE`` value set to ``true`` and follow the README.md in the same directory:

  * values.yaml
  * charts/eii-provision/values.yaml

* If you observe any issues with the installation of the Python packages, then as a workaround you can manually install the Python packages by running the following commands:

  .. code-block:: sh

     cd [WORKDIR]/IEdgeInsights/build
     # Install requirements for builder.py
     pip3 install -r requirements.txt

  .. note:: To avoid any changes to the Python installation on the system, it is recommended that you use a Python virtual environment to install the Python packages. For more information on setting up and using the Python virtual environment, refer to `Python virtual environment `_.
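  A minimal sketch of that approach using Python's built-in ``venv`` module; the environment name ``eii_venv`` is arbitrary:

  .. code-block:: sh

     cd [WORKDIR]/IEdgeInsights/build
     python3 -m venv eii_venv             # create an isolated environment
     source eii_venv/bin/activate         # activate it for the current shell
     pip3 install -r requirements.txt     # install the builder.py requirements into the venv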
* Video looping issue during visualization

  * If there is a physical disconnection/re-connection of the USB 3.0 Basler camera during ingestion, frames will start looping during visualization.

  **Solution:** Reconnect the USB camera and restart the project.

* Monochrome image with Basler camera

  * The default pixel format for the Basler camera is mono8. Users can change the pixel format accordingly.

  **Solution:** Configure the pixel format using ETCD keeper at https://:7071/etcdkeeper

  1. Click the '/EdgeVideoAnalyticsMicroservice/config' key.

     .. figure:: image/ETCD_KEEPER.png
        :scale: 100 %

        Figure 1. ETCD Keeper EdgeVideoAnalyticsMicroservice Config Section

  2. Change the pixel format and click the Save button.

     .. figure:: image/PIXEL_FORMAT.png
        :scale: 100 %

        Figure 2. Pixel Format Parameter

* Basler camera field of view getting cropped

  * The camera field of view getting cropped is expected behavior when a lower resolution is set using the ``height`` or ``width`` parameter. Setting these parameters creates an image ROI that originates from the top-left corner of the sensor. Refer to https://docs.baslerweb.com/image-roi for more details.

  **Solution:** Configure the ``height`` and ``width`` parameters for the GenICam camera to use the maximum supported resolution of the camera, or completely remove the ``height`` and ``width`` parameters from the pipeline. Refer to the GenICam Camera section to make the configuration change. You may have to delete the camera and add it back as a fresh camera to apply the changes. Note that using a higher resolution might have other side effects, such as a lag issue in the pipeline when the model is compute-intensive. More details are available in the section "Lag issue in the pipeline".

* High CPU utilization issue

  * Using a compute-intensive model might increase the overall CPU utilization of the system (expected behavior).
  **Solution 1:** Decrease the Basler camera framerate using the frame-rate configurations available for the GenICam camera. Refer to the "GenICam Camera" section of the User Guide for more details. You may have to delete the camera and add it back as a fresh camera to apply the changes.

  **Solution 2:** To reduce the CPU utilization, the inference workload can be offloaded to the GPU device, provided the system supports running inference on the GPU. For this, while registering the AI model, select the device type as "AUTO" or "iGPU".

* Lag issue in the pipeline
  1. Using a compute-intensive model can also increase the inference time. Higher inference and pre-processing/post-processing times can lead to lag issues in the ingestion pipeline if the analytics throughput is less than the ingestion framerate.
     **Solution:** The ingestion framerate can be reduced to prevent frames from piling up.
  2. Creating multi-instance configurations and running multiple pipelines with a compute-intensive model will increase the inference time and the overall CPU utilization.
     **Solution:** Either the ingestion framerate must be reduced, or the model must be further optimized.

* Xiris camera troubleshooting

  * The XVC-1000e40 (monochrome) Xiris camera is tested.
  * Only PixelDepth=8 (camera outputs 8 bits per pixel) is supported. For any frame rendering issues, check the PixelDepth value in the logs and make sure it is set to 8, either by power cycling the camera or by setting the value in the Xiris GUI tool (currently available on Windows).
  * If a wrong or an invalid IP is provided for connecting to the Xiris camera through the XirisCameraIP env variable, ingestion will not work and no error logs will be printed. Make sure the correct IP is provided for ingestion to work.
  * Enable Jumbo Packets

    1. The XeBus Runtime relies on Jumbo Packet support to ensure a successful camera connection and sufficient bandwidth. On most Linux* systems, this can be enabled with the following command:

       * ``sudo ifconfig eth0 mtu 8194``

    2. "eth0" in the command above should be replaced with the name of the ethernet interface connected to the camera. This command applies a change that resets after rebooting.
    3. It is advised to permanently set the MTU of your ethernet adapter so that it persists through reboots. Consult the documentation for your Linux* distribution and/or ethernet device for more details.

  * Increase Socket Buffer Size

    1. With the large amount of data a camera can stream, the socket send/receive buffers must be sufficiently large:

       * ``sudo sysctl -w net.core.rmem_max=10485750``
       * ``sudo sysctl -w net.core.wmem_max=10485750``

    2. You can enable these changes permanently for your system by adding the following lines to your ``/etc/sysctl.conf`` file:

       * ``net.core.rmem_max=10485750``
       * ``net.core.wmem_max=10485750``
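  The two tunings above can be applied together; the following sketch assumes the camera is connected through ``eth0`` (replace with your interface name). The MTU change still resets on reboot unless it is persisted through your distribution's network configuration.

  .. code-block:: sh

     # Apply the jumbo-frame MTU for the current session
     sudo ifconfig eth0 mtu 8194
     # Persist the socket buffer sizes and reload the sysctl settings
     printf 'net.core.rmem_max=10485750\nnet.core.wmem_max=10485750\n' | sudo tee -a /etc/sysctl.conf
     sudo sysctl -p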
  * MQTT Publish Issue

    * The MQTT publish issue occurs when there are 2 active ethernet ports: one port is connected to a network (say eth0), and the other port is connected to the Xiris camera (say eth1). With this configuration, the default network (IP) route sometimes becomes the port where the Xiris camera is connected. In this case, the system gets isolated from the network.

    **Solution:** Disable the ethernet port where the Xiris camera is connected and check ``sudo apt update``. Once the command is working fine, enable the Xiris ethernet port again.
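    A minimal sketch of that workaround, assuming the Xiris camera is connected through ``eth1`` (adjust the interface name to your setup):

    .. code-block:: sh

       ip route show default        # check which interface currently holds the default route
       sudo ifconfig eth1 down      # temporarily disable the Xiris-facing port
       sudo apt update              # confirm network connectivity is restored
       sudo ifconfig eth1 up        # re-enable the Xiris port afterwards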
  * Xiris camera not detected

    * During the initial camera registration process or project deployment, the Xiris camera is not detected by the system.
    **Solution:** Power off and power on the Xiris camera. If the problem persists, refer to the manufacturer-provided Xiris User Guide.

* UDF ModuleNotFound error

  **Solution:** If the custom UDF/model name contains a whitespace character, UDF loading fails and throws a ModuleNotFound error. Ensure that the UDF name and the model/UDF directory name contain no whitespace characters.

* Frames are not getting ingested, or the ingestion rate is too low

  **Solution:** By default, USB-FS on Linux* systems allows only a 16 MB buffer limit, which might not be sufficient to work with high-framerate, high-resolution cameras and multiple-camera setups. In such scenarios, configure USB-FS to increase the buffer memory limit for the USB3 Vision camera. The following link can be used as a reference to configure the USB-FS limit: `configuring-usb-fs-for-usb3-vision-camera <https://medium.com/taiwan-tech-for-good/configuring-usb-fs-for-usb3-vision-camera-5c5727bd0c3d>`_.

* Containers are still running upon the ``docker compose down -v`` command

  **Solution:** Go to ``[WORKDIR]/IEdgeInsights/build`` and run ``./run.sh -v`` to remove the docker containers and volumes.

* In PROD mode, changes made to the Telegraf configuration are not reflected once Telegraf is restarted

  **Solution:** Follow the below steps:

  1. Go to ``[WORKDIR]/IEdgeInsights/Telegraf`` and, in ``docker-compose.yml``, add the below line under the ``volumes`` section:

     ``- ./../Telegraf/config/Telegraf:/etc/Telegraf/Telegraf``

  2. Go to ``[WORKDIR]/IEdgeInsights/build`` and execute the below commands:

     - ``python3 builder.py -f usecases/enhanced.yml``
     - ``./run.sh -v -s``
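  As a quick reference, the two steps above amount to the following sequence; the volume line is added manually in the Telegraf ``docker-compose.yml``, and the commands are the ones listed in step 2.

  .. code-block:: sh

     # Step 1: in [WORKDIR]/IEdgeInsights/Telegraf/docker-compose.yml, under "volumes:", add:
     #   - ./../Telegraf/config/Telegraf:/etc/Telegraf/Telegraf
     # Step 2: regenerate the consolidated config and redeploy
     cd [WORKDIR]/IEdgeInsights/build
     python3 builder.py -f usecases/enhanced.yml
     ./run.sh -v -s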