Contents

EII Provision and Deployment

For the deployment of Edge Insights for Industrial (EII), helm charts are provided for both provisioning and deployment.

Note:

  • In this document, the label ‘Edge Insights for Industrial (EII)’ is used for file names, paths, code snippets, and so on.

  • The same procedure applies to both single-node and multi-node deployments.

  • Log in to or configure the docker registry before running helm. This is required when the public Docker Hub is not used for accessing images.

Prerequisites

Note:

For the time series use case, make sure ia_mqtt_broker and ia_mqtt_publisher are running.

To prepare the files required for provisioning and deployment, execute the build and provision steps on an Ubuntu 18.04 or 20.04 machine. Follow the Docker prerequisites, EII prerequisites, Provision EII, and Build and Run EII sections mentioned in README.md on the Ubuntu dev machine.

As EII does not distribute all of its docker images on Docker Hub, you may run into pods whose status shows ImagePullBackOff, and a few pods (such as visualizer and factory ctrl) whose status shows CrashLoopBackOff because additional configuration is required. For the ImagePullBackOff issues, follow the steps mentioned at ../README.md#distribution-of-eii-container-images to push the locally built images to the docker registry of your choice. Ensure that the DOCKER_REGISTRY value in the [WORKDIR]/IEdgeInsights/build/.env file is updated, and re-run the builder.py([WORK_DIR]/IEdgeInsights/build/builder.py) script to regenerate the helm charts for provision and deployment.
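A minimal sketch of that flow, assuming a private registry reachable at <registry-host>:<port> (the registry address is a placeholder; use your own):

    # Log in to the registry of choice (placeholder address)
    docker login <registry-host>:<port>

    # In [WORKDIR]/IEdgeInsights/build/.env, set for example:
    # DOCKER_REGISTRY=<registry-host>:<port>/

    # Regenerate the helm charts with the updated registry value
    cd [WORKDIR]/IEdgeInsights/build
    python3 builder.py -f usecases/<usecase>.yml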

Note

For deploying EII pods across nodes with access to persistent volumes, one can use the “Network File System” distributed storage as explained at README_NFS.md

Update the Helm Charts Directory

  1. Set “EII_HOME_DIR” in .env([WORK_DIR]/IEdgeInsights/build/.env) to /home/username/<dir>/IEdgeInsights/ and “PROVISION_MODE” to k8s.

  2. Ensure that the EII service secrets username and password in the .env([WORK_DIR]/IEdgeInsights/build/.env) file are updated.

  3. Run the builder to copy the template files to the eii-deploy/templates directory and generate the consolidated values.yaml file for eii-services:

    Note: Execute builder.py with the preferred use case to generate the consolidated helm charts for provisioning and deployment.

    cd [WORKDIR]/IEdgeInsights/build
    python3 builder.py -f usecases/<usecase>.yml
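
    For illustration, assuming a use case file named usecases/video-streaming.yml exists in your checkout (the file name is an assumption; pick the file that matches your deployment):

    cd [WORKDIR]/IEdgeInsights/build
    python3 builder.py -f usecases/video-streaming.yml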
    
  4. The following steps are required in both DEV and PROD modes:

Provision and Deploy EII in the Kubernetes Node

Note

Ensure that older certificates and the eii-certs secret are deleted from the k8s cluster.
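A minimal cleanup sketch, assuming the secret is named eii-certs as above (the --ignore-not-found flag simply makes the command safe to run when the secret is absent):

    kubectl delete secret eii-certs --ignore-not-found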

  1. Copy the helm charts in helm-eii/ directory to the node.

    Note: For Edge Video Analytics Microservice helm deployment, follow https://github.com/intel-innersource/applications.services.esh.edge-video-analytics-microservice/blob/main/README.md for downloading the models. Once the models folder is created, copy the models, resources, and eii/pipelines directories from the EdgeVideoAnalyticsMicroservice repo to the /opt/intel/eii directory on the host machine.
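    For example, the charts can be copied over with scp (the user, node IP, and destination path are placeholders):

    scp -r [WORKDIR]/IEdgeInsights/build/helm-eii/ <user>@<node-ip>:~/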

  2. Install the deploy helm chart:

    cd [WORKDIR]/IEdgeInsights/build/helm-eii/
    helm install eii-deploy eii-deploy/
    

    The Certificates/ directory contains sensitive information, so after installing the eii-deploy helm chart it is recommended to delete the Certificates directory from it.
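    A minimal sketch of that cleanup, assuming the charts were copied to the current working directory on the node:

    rm -rf eii-deploy/Certificates/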

    Note:

    The Visualizer won’t show the frames until the localhost IP is updated with the IP of the worker node where the visualizer streaming pod is running (the kubectl get nodes command will help you get the IP address of the node where the visualizer streaming pod is running). Find the detailed steps on updating the IP address on the Grafana dashboard in the Visualizer README.md at the [WORKDIR]/IEdgeInsights/Visualizer/multimodal-data-visualization/README.md##accessing-visualizer-from-other-node section.
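    For reference, the worker node IP addresses can be looked up with:

    kubectl get nodes -o wide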

    Set the ingestion pipeline for the video ingestion pod, and install the deploy helm chart as follows:

    helm install --set env.PIPELINE="<INGESTION_PIPELINE>" eii-deploy eii-deploy/
    

    For example (USB camera):

    helm install --set env.PIPELINE="/dev/video0" eii-deploy eii-deploy/
    

    Note: The ConfigMgrAgent service needs to be initialized before the other services at runtime. If other services are initialized before ConfigMgrAgent, they raise a “cfgmgr initialization failed” exception. After raising this exception, the affected services restart and continue to run.
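    If a service stays stuck after this exception, one way to force a restart is to delete the affected pod so that its deployment recreates it (the pod name below is a placeholder):

    kubectl delete pod <pod-name>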

    Verify that all the pods are running:

kubectl get pods

EII is now successfully deployed.

Provision and Deploy When Switching Between Dev and Prod Mode or Changing the Use Case

  1. Set DEV_MODE to “true” or “false” in .env([WORK_DIR]/IEdgeInsights/build/.env) depending on dev or prod mode.

  2. Run the builder to copy the template files to the eii-deploy/templates directory and generate the consolidated values.yaml file for eii-services:

    cd [WORKDIR]/IEdgeInsights/build
    python3 builder.py -f usecases/<usecase>.yml
    
  3. Remove the etcd storage directory:

    sudo rm -rf /opt/intel/eii/data/*
    

Repeat the helm install of the deploy chart as described in the previous section.

Note:

During re-deployment (helm uninstall and helm install) of the helm chart, wait for all the previous pods to terminate.
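A minimal re-deployment sketch, assuming the release name eii-deploy used above:

    helm uninstall eii-deploy

    # Watch until all previous pods have terminated, then press Ctrl+C
    kubectl get pods -w

    cd [WORKDIR]/IEdgeInsights/build/helm-eii/
    helm install eii-deploy eii-deploy/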

For Running Helm Charts and Deploying Kube Pods with a Specific Namespace

Note:

By default, all the helm charts are deployed in the default namespace. The following commands help deploy the helm chart and kube pods in a specific namespace:

     helm install --set namespace=<namespace> <helm_app_name> <helm_charts_directory>/ --namespace <namespace> --create-namespace

For Example,
  • To deploy the eii-deploy helm chart in the eii namespace:

    helm install --set namespace=eii eii-deploy eii-deploy/ --namespace eii --create-namespace

  • Now all the pods and helm charts are deployed under the eii namespace.

  • To list the helm charts deployed in a specific namespace:

    helm ls -n <namespace>

  • To list the kube pods deployed in a specific namespace:

    kubectl get pods -n <namespace>

Steps for Enabling Accelerators

Note:

nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.

  1. Set the label for a particular node:

    kubectl label nodes <node-name> <label-key>=<label-value>
    
  2. For NCS2 dependencies, follow the steps below for setting labels.

  • For NCS2

    kubectl label nodes <node-name> ncs2=true
    

Note:

Here, node-name is your worker node machine's hostname.
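To confirm the hostname and that the label was applied, the node labels can be listed (the grep filter is optional):

    kubectl get nodes --show-labels | grep ncs2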

  3. Open the [WORKDIR]/IEdgeInsights/VideoIngestion/helm/values.yaml or [WORKDIR]/IEdgeInsights/VideoAnalytics/helm/values.yaml file.

  4. Based on your workload preference, add ncs2 as the accelerator in the values.yaml of video-ingestion or video-analytics.

  • For NCS2

    config:
      video_ingestion:
        .
        .
        accelerator: "ncs2"
        .
        .
    
  5. Set device to “MYRIAD” in the case of ncs2 and to “HDDL” in the case of hddl in the VA config([WORK_DIR]/IEdgeInsights/VideoAnalytics/config.json).

  • In the case of ncs2:

    "udfs": [{
      .
      .
      .
      "device": "MYRIAD"
      }]
    
  6. Run [WORKDIR]/IEdgeInsights/build/builder.py to generate the latest consolidated deploy yml file based on the nodeSelector changes set in the respective modules.

    cd [WORKDIR]/IEdgeInsights/build/
    python3 builder.py
    
  7. Follow the Deployment Steps.

  8. Verify that the respective workloads are running based on the nodeSelector constraints.
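    A quick way to check which node each workload was scheduled on (pod and node names will vary):

    kubectl get pods -o wide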

Steps for Enabling GiGE Camera with Helm

Note: For more information on Multus, refer to https://github.com/intel/multus-cni.

Skip installing multus if it is already installed.

Prerequisites

To enable the GiGE camera with helm, the helm pod network should be enabled with the Multus network interface so that the pods can attach to the host system's network interface and access the connected camera.

Note: Follow the steps below and ensure the dhcp daemon is running fine. If there is an error during macvlan container creation while accessing the socket, or if the socket is not running, execute the steps again.

sudo rm -f /run/cni/dhcp.sock
cd /opt/cni/bin
sudo ./dhcp daemon

Note

When a CNI issue occurs, make sure to clean up the cluster and re-initialize it.

Setting up Multus CNI and Enabling it

  • Multus CNI is a container network interface (CNI) plug-in for Kubernetes that enables attaching multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) – with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a “meta-plug-in”, a CNI plug-in that can call multiple other CNI plug-ins.

    1. Get the name of the ethernet interface to which the GiGE camera and the host system are connected. Note: Identify the network interface name using the following command:

    ifconfig
    
    2. Execute the following script with the identified ethernet interface name as the argument for the Multus network setup. Note: Pass the interface name without quotes.

    cd [WORKDIR]/IEdgeInsights/build/helm-eii/gige_setup
    sudo -E sh ./multus_setup.sh <interface_name>
    

    Note: Verify that the multus pod is in the Running state:

    kubectl get pods --all-namespaces | grep -i multus
    
    3. Set gige_camera to true in values.yaml:

    $ vi [WORKDIR]/IEdgeInsights/VideoIngestion/helm/values.yaml
    .
    .
    gige_camera: true
    .
    .
    
    4. Follow the Deployment Steps.

    5. Verify that the pod IP and the host IP are the same for the configured ethernet interface by using the below command:

    kubectl exec -it <pod_name> -- ip -d address
    

Note

  • Deploy with root user rights for GPU device, MYRIAD(NCS2) device and GenICam USB3.0 interface cameras.

  • Refer to the below configuration file snippet to deploy with root user rights using the runAsUser field:

    apiVersion: apps/v1
    kind: Deployment
    ...
    spec:
      ...
      spec:
        ...
        containers:
          ....
          securityContext:
            runAsUser: 0
    

Steps for Enabling CustomUdfs

For deploying the Custom UDFs, complete the following steps:

Enable the required CustomUDF services. In the EII default scenario, the sample custom UDF containers are not mandatory to run; hence builder.py must be run with the video-streaming-all-udfs.yml use case.

  1. Make sure the Visualizer (multimodal-data-visualization/streaming) is subscribing with respect to the use case.

For running Multimodal Data Visualization/Streaming with CustomUdfs as a publisher, the config can be updated to subscribe to the EndPoint and topic of the CustomUdfs in the following way.

Refer to the following example multimodal visualization configuration to subscribe to the PySafetyGearAnalytics CustomUDF results.

Open the [WORKDIR]/IEdgeInsights/Visualizer/multimodal-data-visualization-streaming/eii/config.json and [WORKDIR]/IEdgeInsights/Visualizer/multimodal-data-visualization/eii/config.json files:

{
    "interfaces": {
        "Subscribers": [
            {
                "Name": "default",
                "Type": "zmq_tcp",
                "EndPoint": "ia_python_safety_gear_analytics:65019",
                "PublisherAppName": "PySafetyGearAnalytics",
                "Topics": [
                    "py_safety_gear_stream_results"
                ]
            }
        ]
    }
}
  2. Helm MultiModalVisualization template changes for CustomUdfs.

    Open the [WORKDIR]/IEdgeInsights/Visualizer/multimodal-data-visualization-streaming/eii/helm/templates/multimodal-data-visualization-streaming.yaml file.

    In multimodal-data-visualization-streaming.yaml, replace lines 97 to 99, 106 to 108, and 115 to 118 so that they reference pythonsafetygearanalytics instead of video_analytics. Refer to the example below and adapt it with respect to your CustomUdfs.

    {{- if ($global.Values.config.pythonsafetygearanalytics) }}
    {{- $subscriber_port = $global.Values.config.pythonsafetygearanalytics.publish_port }}
    {{- end }}
    
    {{- if ($global.Values.config.pythonsafetygearanalytics) }}
    {{- $subscriber_port = add $global.Values.config.pythonsafetygearanalytics.publish_port $instance_idx }}
    {{- end }}
    
    {{- if ($global.Values.config.pythonsafetygearanalytics) }}
    - name: SUBSCRIBER_ENDPOINT
      value: "{{ $.Values.config.pythonsafetygearanalytics.name }}:{{ $subscriber_port }}"
    {{- end }}
    
  3. Run the builder to copy the template files to the eii-deploy/templates directory and generate the consolidated values.yaml file for eii-services:

    cd [WORKDIR]/IEdgeInsights/build
    python3 builder.py -f usecases/video-streaming-all-udfs.yml
    

Note

The NativePclIngestion(../CustomUdfs/NativePclIngestion) UDF does not support helm deployment.

Accessing Visualizer and EtcdUI

The EtcdUI and Visualizer environments will be running on the following ports.

  • EtcdUI

    • https://master-nodeip:30010/

  • Visualizer

    • PROD Mode – https://master-nodeip:30001/

    • DEV Mode – http://master-nodeip:30001/

The VideoAnalytics/VideoIngestion and EVAM environments will be running on the following ports.

  • VideoAnalytics/VideoIngestion

    • PROD Mode – https://master-nodeip:30008/camera1_stream_results

    • DEV Mode – http://master-nodeip:30009/camera1_stream_results

  • EVAM

    • PROD Mode – https://master-nodeip:30008/edge_video_analytics_results

    • DEV Mode – http://master-nodeip:30009/edge_video_analytics_results
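A quick reachability check for these endpoints, assuming the default NodePorts above and PROD mode (the -k flag skips certificate verification for the self-signed certificates; adjust the URLs for DEV mode):

    curl -k https://<master-nodeip>:30010/
    curl -k https://<master-nodeip>:30001/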

Prerequisites on K8s MultiNode Cluster Environment

Note

  • For running EII in PROD mode, the self-signed certificates that are generated need to go in as kubernetes secrets. To make this happen, it is mandatory that the ConfigManager Agent pod and the certificate-generating pod get scheduled on the k8s cluster's master/control plane node.

  • By default, EII deployment charts are deployed in PROD mode, and only the ConfigManager Agent and certificate-generating pods get scheduled on the master/control plane node of the k8s cluster.

  • Ensure that the master node taint is removed so that these pods can be scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/master-
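To confirm that the taint has been removed from the master/control plane node, check the node description (the node name is a placeholder):

    kubectl describe node <master-node-name> | grep Taints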

Note

For deploying EII pods across nodes with access to persistent volumes, use the “Network File System” distributed storage as explained at README_NFS.md.