Ansible

Ansible-based EII prerequisites setup, provisioning, build, and deployment

Ansible is an automation engine that enables Edge Insights for Industrial (EII) deployment across one or more nodes. One control node, where Ansible is installed, is required; additional managed hosts are optional. The control node itself can be used to deploy EII.

Note

  • In this document, you will find labels of ‘Edge Insights for Industrial (EII)’ for filenames, paths, code snippets, and so on.

  • Ansible executes tasks on the control node based on the defined playbooks.

  • There are three types of nodes:

    • Control node where ansible must be installed.

    • EII leader node, where the ETCD server runs, and optional worker nodes; all worker nodes connect remotely to the ETCD server running on the leader node.

    • Control node and EII leader node can be the same.

Installing Ansible on Ubuntu {Control node}

Execute the following commands on the identified control node machine.

```sh
    sudo apt update
    sudo apt install software-properties-common
    sudo apt-add-repository --yes --update ppa:ansible/ansible
    sudo apt install ansible
```
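
Optionally, confirm the installation with a quick version check (a simple sanity check; the reported version depends on what the PPA provides at install time):

    ansible --version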

Prerequisite Steps Required for all the Control/Worker Nodes

Generate SSH KEY for all Nodes

Generate the SSH key for all nodes using the following command (to be executed only on the control node). Skip this command if SSH keys without an ID and passphrase are already generated on your system:

ssh-keygen

Note

Do not provide any passphrase or ID; just press Enter at every prompt, which will generate the key.

For Example,

$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):  <ENTER>
Enter passphrase (empty for no passphrase):  <ENTER>
Enter same passphrase again:  <ENTER>
Your identification has been saved in ~/.ssh/id_rsa.
Your public key has been saved in ~/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:vlcKVU8Tj8nxdDXTW6AHdAgqaM/35s2doon76uYpNA0 root@host
The key's randomart image is:
+---[RSA 2048]----+
|          .oo.==*|
|     .   .  o=oB*|
|    o . .  ..o=.=|
|   . oE.  .  ... |
|      ooS.       |
|      ooo.  .    |
|     . ...oo     |
|      . .*o+.. . |
|       =O==.o.o  |
+----[SHA256]-----+

Adding SSH Authorized Key from Control Node to all the Nodes

Follow these steps to copy the generated key from the control node to all other nodes.

Execute the following command from the control node.

ssh-copy-id <USER_NAME>@<HOST_IP>

For Example,

$ ssh-copy-id test@192.0.0.1

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/<username>/.ssh/id_rsa.pub"

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'test@192.0.0.1'"
and check to make sure that only the key(s) you wanted were added.
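
Optionally, verify passwordless login from the control node before proceeding (a quick sanity check; replace the user name and IP with your own values):

    ssh <USER_NAME>@<HOST_IP> 'hostname'

If the key was copied correctly, this prints the remote hostname without prompting for a password.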

Configure Sudoers file to Accept NO PASSWORD for sudo operation

Note

Ansible needs to execute some commands with sudo. The configuration below is needed so that passwords do not have to be saved in the Ansible inventory file hosts.

Update sudoers file

  1. Open the sudoers file.

    sudo visudo
    
  2. Append the following to the sudoers file

    Note Append the entry at the end of the sudoers file.

    <ansible_non_root_user>  ALL=(ALL:ALL) NOPASSWD: ALL
    

    For Example,

    If the current non-root user on the control node is user1, you should append as follows:

    user1 ALL=(ALL:ALL) NOPASSWD: ALL
    
  3. Save and Close the file

  4. To verify sudo access for the enabled user, log out, log back in, and then run the following command (sample output is shown below).

    sudo -l -U <ansible_non_root_user>
    

    The line above authorizes the user1 user to perform sudo operations on the control node without being asked for a password.

    Note The same procedure applies to all other nodes involved in the Ansible connection.
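
    When the entry is in place, the output of this check typically looks similar to the following (sample output only; the exact entries vary by system):

    $ sudo -l -U user1
    User user1 may run the following commands on <hostname>:
        (ALL : ALL) NOPASSWD: ALL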

Updating the Leader Information for Using Remote Hosts

Note

By default, in a single node deployment, the ansible_connection for both the control and leader node is local (localhost).

Follow these steps to update the leader node details for the remote node scenario.

  • Update the hosts information in the inventory file hosts

           [group_name]
           <nodename> ansible_connection=ssh ansible_host=<ipaddress> ansible_user=<machine_user_name>
    
    For Example,
    
    [targets]
    leader ansible_connection=ssh ansible_host=192.0.0.1  ansible_user=user1
    

    Note

    • ansible_connection=ssh is mandatory when updating any remote host; it makes Ansible connect via SSH.

    • The above information is used by Ansible to establish an SSH connection to the nodes.

    • The control node will always be ansible_connection=local; do not update the control node's information.

    • To deploy EII on a single node, use ansible_connection=local and ansible_host=localhost.

    • To deploy EII on a remote node, use ansible_connection=ssh, ansible_host=<remote_node_ip>, and ansible_user=<remote_node_user_name>.
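
After updating the inventory, you can optionally verify that Ansible can reach the configured nodes using Ansible's built-in ping module (a quick connectivity check; it assumes the inventory file hosts is in the current directory):

    ansible -i hosts all -m ping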

Updating the EII Source Folder, Usecase and Proxy Settings in Group Variables

  1. Open group_vars/all.yml file

    vi group_vars/all.yml
    
  2. Update Proxy Settings

    enable_system_proxy: true
    http_proxy: <proxy_server_details>
    https_proxy: <proxy_server_details>
    no_proxy: <managed_node ip>,<controller node ip>,<worker nodes ip>,localhost,127.0.0.1
    
  3. Update the EII secrets usernames and passwords in group_vars/all.yml; these are required to run a few EII services in PROD mode only.

  4. Update the usecase variable; based on the usecase, builder.py generates the EII deployment and config files.

    Note

    1. By default it will be video-streaming. For other usecases, refer to the ../usecases folder and use only the name without the .yml extension.

    2. For the all usecase, all default EII services will be brought up.

    3. The ia_kapacitor and ia_telegraf container images are not distributed via Docker Hub, so these images cannot be pulled for the time-series usecase when using ../usecases/time-series.yml ([WORK_DIR]/IEdgeInsights/build/usecases/time-series.yml) for deployment. For more details, refer to [../README.md#distribution-of-eii-container-images](../README.md#distribution-of-eii-container-images).

    For example, if you want to build and deploy for ../usecases/time-series.yml, update the usecase key value to time-series:

    usecase: time-series
    
  5. Optionally, you can choose the number of video pipeline instances to be created by updating the instances variable.

  6. Update the other optional variables provided, if required. A filled-in example combining these settings is shown after this list.
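
As an illustration only, a filled-in fragment of group_vars/all.yml combining the settings above could look like the following (all values are hypothetical; substitute your own proxy details, usecase, and instance count):

    enable_system_proxy: true
    http_proxy: http://proxy.example.com:911
    https_proxy: http://proxy.example.com:911
    no_proxy: 192.0.0.1,192.0.0.2,localhost,127.0.0.1
    usecase: video-streaming
    instances: 2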

Remote Node Deployment

The following configuration changes need to be made for remote node deployment without k8s.

  1. As with single node deployment, all the services based on the chosen usecase will be deployed.

  2. Update the docker registry details in the following section if using a custom/private registry:

    docker_registry="<registry_url>"
    docker_login_user="<username>"
    docker_login_passwd="<password>"

Note Use of the docker_registry and build flags is as follows:

  • Update the docker_registry details to use docker images from a custom registry; optionally set build: true to push docker images to this registry.

  • Unset the docker_registry details if you do not want to use a custom registry, and set build: true to save and load docker images from one node to another.

  • If you are using images from Docker Hub, set build: false and unset the docker_registry details.
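
For illustration only, a hypothetical private registry configuration could look like the following (the registry URL and credentials are placeholders, not real values):

    docker_registry="registry.example.com:5000"
    docker_login_user="registryuser"
    docker_login_passwd="registrypassword"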

Execute Ansible Playbook from [EII_WORKDIR]/IEdgeInsights/build/ansible {Control node} to deploy EII services on the control node/remote node

Note

  • Updating message bus endpoints to connect to interfaces is still a manual process. Ensure that you update application-specific endpoints in [AppName]/config.json.

  • After the prerequisites are successfully installed, ensure that you log out and log back in to apply the changes.

  • If you are facing issues during the installation of Docker, remove all bionic-related Docker entries from sources.list and the GPG keyring; one way to locate those entries is shown below.
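
    The following is a possible way to list those entries before removing them manually (a sketch using standard apt file locations; adjust to your system):

    # List apt source entries that mention docker
    grep -ri docker /etc/apt/sources.list /etc/apt/sources.list.d/

    # List docker-related keyring files
    ls /etc/apt/trusted.gpg.d/ /usr/share/keyrings/ | grep -i docker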

For Single Point of Execution

Note

This will execute all the EII steps, that is, prerequisites, build, provision, deploy, and setup of all nodes for the deployment usecase, sequentially in one shot.

ansible-playbook eii.yml
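
If the inventory is not picked up automatically (for example, via an ansible.cfg in this directory), the hosts inventory file can be passed explicitly with Ansible's standard -i option:

    ansible-playbook -i hosts eii.yml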

The following steps are for the individual execution of each setup stage.

  • Load .env values from template

    ansible-playbook eii.yml --tags "load_env"
    
  • For EII Prerequisite Setup

    ansible-playbook eii.yml --tags "prerequisites"
    

    Note After the prerequisites are successfully installed, log out and log back in to apply the changes.

  • To generate builder and config files, build images and push to registry

    ansible-playbook eii.yml --tags "build"
    
  • To generate eii bundles for deployment

    ansible-playbook eii.yml --tags "gen_bundles"
    
  • To deploy the eii modules

    ansible-playbook eii.yml --tags "deploy"
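
Since these are standard Ansible tags, multiple stages can also be combined in a single run by passing a comma-separated list, for example:

    ansible-playbook eii.yml --tags "gen_bundles,deploy"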
    

Deploying EII Using Helm in Kubernetes (k8s) Environment

Note

  • To deploy EII using Helm in a k8s environment, a k8s setup is a prerequisite.

  • You need to update the k8s leader machine as the leader node in the hosts file.

  • The Helm deployment will fail on a machine that is not the k8s leader.

  • For k8s deployment, the remote_node parameters are not applicable, since node selection and pod selection are done by the k8s orchestrator.

  • Make sure you delete /opt/intel/eii/data on all your k8s worker nodes when switching from PROD mode to DEV mode.

  • Update the DEPLOYMENT_MODE flag to k8s in the group_vars/all.yml file:

    • Open group_vars/all.yml file

      vi group_vars/all.yml
      
    • Update the DEPLOYMENT_MODE flag as k8s

      ## Deploy in k8s mode using helm
      DEPLOYMENT_MODE: "k8s"
      
      ## Update "EII_HOME_DIR" to point to EII workspace when `DEPLOYMENT_MODE: "k8s"`.  Eg: `EII_HOME_DIR: "/home/username/<dir>/IEdgeInsights/"`
      EII_HOME_DIR: ""
      
    • Save and Close

  • For Single Point of Execution

    Note This will execute all the EII steps, that is, prerequisites, build, provision, and deploy for a usecase, sequentially in one shot.

    ansible-playbook eii.yml
    

Note

The steps below are for the individual execution of each setup stage.

  • Load .env values from template

    ansible-playbook eii.yml --tags "load_env"
    
  • For EII Prerequisite Setup

    ansible-playbook eii.yml --tags "prerequisites"
    
  • For building EII containers

    ansible-playbook eii.yml --tags "build"
    
  • Prerequisites for deploying EII using the Ansible Helm environment

    ansible-playbook eii.yml --tags "helm_k8s_prerequisites"
    
  • Provision and deploy EII using the Ansible Helm environment

    ansible-playbook eii.yml --tags "helm_k8s_deploy"
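
After the playbook completes, the deployment can be verified on the k8s leader node with standard Helm and kubectl commands (release names and namespaces depend on your setup):

    # List installed Helm releases
    helm ls

    # Check that the EII pods are up and running
    kubectl get pods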