OEI provision and deployment¶
For deployment of Open Edge Insights (OEI), helm charts are provided for both provisioning and deployment.
Note:
In this document, you will find references to ‘Edge Insights for Industrial (EII)’ in filenames, paths, code snippets, and so on. Treat the references to EII as OEI; this is due to the product name change from EII to OEI.
The same procedure is to be followed for single-node and multi-node setups.
Log in to or configure the Docker registry before running helm. This is required when the public Docker Hub is not used for accessing images.
Prerequisites¶
Note:
- Kubernetes must be installed on the single node or multiple nodes as a prerequisite before continuing with the following deployment. The Kubernetes cluster setup has been tried with the kubeadm, kubectl, and kubelet packages (v1.23.4) on single and multi-node setups. One can refer to tutorials such as https://adamtheautomator.com/install-kubernetes-ubuntu/#Installing_Kubernetes_on_the_Master_and_Worker_Nodes, among many other online tutorials, to set up a Kubernetes cluster with Ubuntu 18.04/20.04 as the host OS.
For Helm installation, refer to the Helm website.
For the time series usecase, make sure ia_mqtt_broker and ia_mqtt_publisher are running, and that the ‘MQTT_BROKER_HOST’ environment variable is updated with the host IP address of the system where the MQTT broker is running.
For preparing the necessary files required for provision and deployment, execute the build and provision steps on an Ubuntu 18.04/20.04 machine. Follow the Docker prerequisites, OEI Prerequisites, Provision OEI, and Build and Run OEI sections of README.md on the Ubuntu dev machine.
As OEI does not distribute all of its Docker images on Docker Hub, you may see pods with status ImagePullBackOff, and a few pods (such as visualizer, factory ctrl, and so on) with status CrashLoopBackOff due to additional configuration being required. For the ImagePullBackOff issues, follow the steps mentioned in ../README.md#distribution-of-eii-container-images to push the locally built images to the Docker registry of your choice. Ensure that the DOCKER_REGISTRY value in the [WORK_DIR]/IEdgeInsights/build/.env file is updated, and re-run the [WORK_DIR]/IEdgeInsights/build/builder.py script to regenerate the helm charts for provision and deployment.
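For instance, retagging and pushing a locally built image could look like the following (the image name and tag here are illustrative; use the images built by your usecase and the registry configured in DOCKER_REGISTRY):
docker tag ia_video_ingestion:latest <DOCKER_REGISTRY>/ia_video_ingestion:latest
docker push <DOCKER_REGISTRY>/ia_video_ingestion:latest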
Update the helm charts directory¶
Edit “EII_HOME_DIR” in the .env file ([WORK_DIR]/IEdgeInsights/build/.env) to /home/username/<dir>/IEdgeInsights/.
Make sure you have updated the OEI service secrets username and password in the .env file ([WORK_DIR]/IEdgeInsights/build/.env).
Run the builder to copy the template files to the eii-deploy/templates directory and to generate a consolidated values.yaml file for eii-services:
Note: Execute builder.py with the preferred usecase to generate the consolidated helm charts for provisioning and deployment.
cd [WORKDIR]/IEdgeInsights/build
python3 builder.py -f usecases/<usecase>.yml
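For example, assuming the time series usecase file in your checkout is named usecases/time-series.yml:
python3 builder.py -f usecases/time-series.yml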
The following steps are required in both DEV and PROD mode:
Provision OEI in the Kubernetes node¶
Note:
Make sure you have deleted the older certificates.
a. Execute the eii-provision chart to provision OEI:
cd [WORKDIR]/IEdgeInsights/build/helm-eii
helm install eii-provision eii-provision/
b. Update the permissions of the certificates directory in case of PROD mode:
cd [WORKDIR]/IEdgeInsights/build/helm-eii/
sudo chmod -R 777 eii-deploy/Certificates
Note
The Certificates/ directory contains sensitive information, so after the installation of the eii-provision helm chart it is recommended to delete the certificates from it.
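For example, once the provisioning pods are up and healthy, the generated certificates can be removed:
cd [WORKDIR]/IEdgeInsights/build/helm-eii/
sudo rm -rf eii-deploy/Certificates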
Deploy OEI in the Kubernetes node¶
Note
The OEI helm/k8s deployment does not support the native visualizer. Remove the visualizer.yml template from EII_HOME_DIR/build/helm-eii/eii-deploy/templates/visualizer.yml if it exists as part of your usecase.
In the OEI helm/k8s deployment, the Grafana application is not enabled for Video usecases; only timeseries usecases are supported.
Copy the helm charts in the helm-eii/ directory to the node.
Install the deploy helm chart:
cd [WORKDIR]/IEdgeInsights/build/helm-eii/
helm install eii-deploy eii-deploy/
Note: To set the ingestion pipeline for the video ingestion pod, install the deploy helm chart as below:
helm install --set env.PIPELINE="<INGESTION_PIPELINE>" eii-deploy eii-deploy/
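For example, a hypothetical GStreamer RTSP pipeline (the exact pipeline string depends on your camera and on the VideoIngestion configuration) might be passed as:
helm install --set env.PIPELINE="rtspsrc location=rtsp://<ip>:<port>/<stream> ! rtph264depay ! h264parse ! vaapih264dec ! videoconvert ! appsink" eii-deploy eii-deploy/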
Note: The ConfigMgrAgent service needs to be initialized before the other services at runtime. If other services are initialized before ConfigMgrAgent, you might notice a “cfgmgr initialization failed” exception; after raising this exception, the services should restart and continue to run.
Verify that all the pods are running:
kubectl get pods
OEI is now successfully deployed.
Provision and deploy when switching between DEV and PROD mode or changing the usecase¶
Set DEV_MODE to “true/false” in the .env file ([WORK_DIR]/IEdgeInsights/build/.env) depending on DEV or PROD mode.
Run the builder to copy the template files to the eii-deploy/templates directory and to generate a consolidated values.yaml file for eii-services:
cd [WORKDIR]/IEdgeInsights/build
python3 builder.py -f usecases/<usecase>.yml
Remove the etcd storage directory:
sudo rm -rf /opt/intel/eii/data/*
Run helm install for the provision and deploy charts as per the previous section.
Note:
During re-deployment (helm uninstall and helm install) of the helm charts, wait for all the previous pods to terminate.
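A typical re-deployment sequence, assuming the release names used earlier in this document, would be:
cd [WORKDIR]/IEdgeInsights/build/helm-eii/
helm uninstall eii-deploy
helm uninstall eii-provision
kubectl get pods    # repeat until all previous pods have terminated
helm install eii-provision eii-provision/
helm install eii-deploy eii-deploy/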
Steps to enable Accelerators¶
Note:
nodeSelector is the simplest recommended form of node selection constraint.
nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.
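As a minimal illustration (the pod name, image, and label below are examples only, not part of the OEI charts), a pod that must land on a node labeled hddl=true would carry a matching nodeSelector:
cat <<'EOF' | kubectl apply -f -
# Example pod: schedulable only on nodes labeled hddl=true
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    hddl: "true"
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
EOF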
Setting the label for a particular node:
kubectl label nodes <node-name> <label-key>=<label-value>
For HDDL/NCS2 dependencies, follow the steps below for setting labels.
For HDDL:
kubectl label nodes <node-name> hddl=true
For NCS2:
kubectl label nodes <node-name> ncs2=true
Note: Here, the node-name is your worker node machine's hostname.
Open the [WORKDIR]/IEdgeInsights/VideoIngestion/helm/values.yaml or [WORKDIR]/IEdgeInsights/VideoAnalytics/helm/values.yaml file, based on your workload preference, and add hddl or ncs2 to the accelerator key in the values.yaml of video-ingestion or video-analytics.
For HDDL:
config:
  video_ingestion:
    .
    .
    accelerator: "hddl"
    .
    .
For NCS2:
config:
  video_ingestion:
    .
    .
    accelerator: "ncs2"
    .
    .
Set device to “MYRIAD” in the case of NCS2, and to “HDDL” in the case of HDDL, in the VA config ([WORK_DIR]/IEdgeInsights/VideoAnalytics/config.json).
In case of NCS2:
"udfs": [{
    .
    .
    .
    "device": "MYRIAD"
}]
In case of HDDL:
"udfs": [{
    .
    .
    .
    "device": "HDDL"
}]
Run [WORKDIR]/IEdgeInsights/build/builder.py to generate the latest consolidated deploy yml file based on the nodeSelector changes set in the respective modules:
cd [WORKDIR]/IEdgeInsights/build/
python3 builder.py
Follow the Deployment Steps
Verify that the respective workloads are running based on the nodeSelector constraints.
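For example, to confirm placement:
kubectl get pods -o wide    # the NODE column shows where each pod was scheduled
kubectl get nodes -l hddl=true    # list the nodes carrying the hddl label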
Steps for Enabling GigE Camera with helm¶
- Note: For more information on Multus, refer to the git repository at https://github.com/intel/multus-cni. Skip installing Multus if it is already installed.
Prerequisites for enabling the GigE camera with helm:
- Helm pod networks should be enabled with a Multus network interface, so that the pods get access to the host system network interface to which the camera is connected.
Note: Follow the steps below and make sure the dhcp daemon is running fine. If there is an error on macvlan container creation while accessing the socket, or if the socket is not running, execute the below steps again:
sudo rm -f /run/cni/dhcp.sock
cd /opt/cni/bin
sudo ./dhcp daemon
Setting up Multus CNI and Enabling it¶
Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) – with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a “meta-plugin”, a CNI plugin that can call multiple other CNI plugins.
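For reference, a NetworkAttachmentDefinition for a macvlan interface with DHCP IPAM looks roughly like the following sketch (the actual definition used by OEI is created by the multus_setup.sh script below; the name and master interface here are illustrative):
cat <<'EOF' | kubectl apply -f -
# Illustrative macvlan attachment on host interface eth0, addressed via the CNI dhcp daemon
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }'
EOF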
Get the name of the ethernet interface to which the GigE camera and the host system are connected. Note: Identify the network interface name with the following command:
ifconfig
Execute the following script with the identified ethernet interface name as an argument for the Multus network setup. Note: Pass the interface name without quotes.
cd [WORKDIR]/IEdgeInsights/build/helm-eii/gige_setup
sudo -E sh ./multus_setup.sh <interface_name>
Note: Verify that the multus pod is in the Running state:
kubectl get pods --all-namespaces | grep -i multus
Set gige_camera to true in values.yaml:
$ vi [WORKDIR]/IEdgeInsights/VideoIngestion/helm/values.yaml
.
.
gige_camera: true
.
.
Follow the Deployment Steps
Verify that the pod IP and host IP are the same, as per the configured ethernet interface, using the below command:
kubectl exec -it <pod_name> -- ip -d address
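Since Multus records the attached networks as pod annotations, another way to inspect the extra interface is (annotation names may vary with the Multus version):
kubectl describe pod <pod_name> | grep -i network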
Note
Users need to deploy with root user rights for the GPU device, the MYRIAD (NCS2) device, and GenICam USB3.0 interface cameras.
Refer to the below configuration file snippet to deploy with root user rights using the runAsUser field:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  spec:
    ...
    containers:
      ...
      securityContext:
        runAsUser: 0
Accessing Web Visualizer and EtcdUI¶
The EtcdUI and WebVisualizer applications will be running on the following ports:
EtcdUI:
https://master-nodeip:30010/
WebVisualizer:
PROD mode: https://master-nodeip:30007/
DEV mode: http://master-nodeip:30009/
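A quick reachability check from any machine that can reach the master node (-k is needed because the PROD-mode certificates are self-signed):
curl -k https://<master-nodeip>:30010/
curl -k https://<master-nodeip>:30007/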
Prerequisites on K8s MultiNode Cluster Environment¶
Note
For running OEI in PROD mode, the self-signed certificates that get generated need to go in as Kubernetes secrets. To make this happen, it is mandatory that the ConfigManager Agent pod and the certificate-generating pod get scheduled on the k8s cluster's master/control-plane node.
By default, the OEI deployment charts are deployed in PROD mode, and only the ConfigManager Agent and certificate-generating pods get scheduled on the master/control-plane node of the K8s cluster.
Ensure that the taint is removed from the master node so that these pods can be scheduled on it:
kubectl taint nodes --all node-role.kubernetes.io/master-
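To confirm that the taint has been removed from the master node:
kubectl describe node <master-node-name> | grep -i taint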