DataStore Microservice

The DataStore microservice provides on-premises persistence for metadata, stored in InfluxDB*, and for Binary Large Object (BLOB) data, stored in MinIO*. Both storage solutions are NoSQL in nature and support video analytics and time-series analytics data store operations at the edge.

The microservice supports the following data storage types:

  • BLOBs: files stored in containers as binary large objects.

  • NoSQL data: metadata and time-series records.

The DataStore microservice consists of the following databases: InfluxDB for metadata and time-series data, and MinIO for BLOB data.

DataStore Configuration

The configuration for the DataStore service is stored in etcd. The etcd key is derived from AppName, which is defined in the environment section of the app's service definition in the docker-compose file.

For example, when AppName is DataStore, the following shows how the app's config looks under the /DataStore/config key in etcd:

"dbs": {
    "influxdb": {
        "topics": ["*"],
        "server": "datastore_influxdb_server",
        "retention": "1h30m5s",
        "dbname": "datain",
        "ssl": "True",
        "verifySsl": "False",
        "port": "8086",
        "pubWorkers": "5",
        "subWorkers": "5",
        "ignoreKeys": [ "defects" ],
        "tagKeys": []
    },
    "miniodb": {
        "topics": ["camera1_stream_results"],
        "server": "datastore_minio_server",
        "retentionTime": "1h",
        "retentionPollInterval": "60s",
        "ssl": "false"
    }
}
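
To inspect this configuration at runtime, you can read the key directly from etcd. Below is a minimal sketch using the python-etcd3 client; the host, the port, and the assumption that the stored value parses as a JSON object with a dbs key are illustrative, not part of the DataStore contract:

import json

import etcd3  # pip install etcd3

# Connect to the etcd instance that holds the EII configuration.
client = etcd3.client(host="localhost", port=2379)

# Fetch the DataStore configuration stored under /DataStore/config.
value, _meta = client.get("/DataStore/config")
config = json.loads(value)

# List which DB modules are currently enabled.
print("enabled DBs:", list(config["dbs"].keys()))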

By default, both DBs are enabled. To disable either of the above DBs, remove the corresponding key and its value from the config.

For example, if you are not using the MinIO DB, you can disable it by modifying the config as below:

"dbs": {
    "influxdb": {
        "topics": ["*"],
        "server": "datastore_influxdb_server",
        "retention": "1h30m5s",
        "dbname": "datain",
        "ssl": "True",
        "verifySsl": "False",
        "port": "8086",
        "pubWorkers": "5",
        "subWorkers": "5",
        "ignoreKeys": [ "defects" ],
        "tagKeys": []
    }
}

The following are the details of the keys in the above config:

  • The topics key determines which messages are processed by the corresponding DB module. Only messages with a topic listed in the topics key are processed by the individual module. If topics contains "*", all messages are processed.

  • The server key specifies the name of the server interface on which the corresponding module server is active.
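
As an illustration of the matching rule above, the following minimal sketch shows the topic check (the helper name topic_enabled is hypothetical, not part of DataStore):

def topic_enabled(topic, topics):
    """Return True if a message with this topic should be processed.

    A "*" entry in the configured topics list matches every topic.
    """
    return "*" in topics or topic in topics

# With the config above: InfluxDB ("*") processes everything,
# MinIO processes only the camera1_stream_results stream.
assert topic_enabled("camera1_stream_results", ["*"])
assert not topic_enabled("camera2_stream_results", ["camera1_stream_results"])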

InfluxDB

  • DataStore subscribes to InfluxDB and starts the ZMQ publisher, ZMQ subscriber, and ZMQ request-reply threads based on the PubTopics, SubTopics, and QueryTopics configuration.

  • The ZMQ subscriber thread connects to the PUB socket of the ZMQ bus on which the data is published by VideoAnalytics and pushes it to InfluxDB.

  • The ZMQ publisher thread publishes the point data ingested by Telegraf and the classifier results coming out of the point-data analytics.

  • The ZMQ request-reply service receives InfluxDB select queries and responds with the historical data.
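
The request-reply path can be illustrated with a plain pyzmq client. This is a sketch of the pattern only: the endpoint and the query payload shown are assumptions, and the actual EII message bus defines its own wire format on top of ZMQ:

import zmq

context = zmq.Context()

# REQ socket paired with the DataStore request-reply (REP) service.
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5669")  # hypothetical endpoint

# Send an InfluxDB select query and wait for the historical data.
socket.send_json({"query": "SELECT * FROM datain LIMIT 10"})
print(socket.recv_json())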

For nested JSON data, DataStore by default flattens the nested JSON and pushes the flattened data to InfluxDB. To exclude a particular nested key from flattening, mention that key in the config.json(``[WORK_DIR]/IEdgeInsights/DataStore/config.json``) file. Currently, the defects key is excluded from flattening. Every key to be ignored has to be a separate entry in the list.

For example,

ignore_keys = [ "Key1", "Key2", "Key3" ]
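
A minimal sketch of this kind of flattening is shown below; the function name and the dot separator are assumptions, and DataStore's actual implementation may serialize ignored keys differently:

def flatten(data, parent="", ignore_keys=()):
    """Flatten nested JSON into a single level of keys.

    Keys listed in ignore_keys keep their nested value untouched.
    """
    flat = {}
    for key, value in data.items():
        name = f"{parent}.{key}" if parent else key
        if isinstance(value, dict) and key not in ignore_keys:
            flat.update(flatten(value, name, ignore_keys))
        else:
            flat[name] = value
    return flat

# With "defects" ignored, {"a": {"b": 1}, "defects": {"x": 2}} becomes
# {"a.b": 1, "defects": {"x": 2}}
print(flatten({"a": {"b": 1}, "defects": {"x": 2}}, ignore_keys=("defects",)))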

By default, all the keys in the data schema are pushed to InfluxDB as fields. If tags are present in the data schema, they can be mentioned in the config.json(``[WORK_DIR]/IEdgeInsights/DataStore/config.json``) file; the data pushed to InfluxDB will then have both fields and tags. At present, no tags are present in the data schema, so tag_keys is kept blank in the config file.

For example,

tag_keys = [ "Tag1", "Tag2" ]
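
To illustrate the difference between fields and tags, the following sketch writes a point with both, using the influxdb Python client; the measurement name and values are made up for illustration:

from influxdb import InfluxDBClient  # pip install influxdb

client = InfluxDBClient(host="localhost", port=8086, database="datain")

# Keys listed in tag_keys become indexed tags; all other keys in the
# data schema are written as fields.
point = {
    "measurement": "camera1_stream_results",  # hypothetical measurement
    "tags": {"Tag1": "lineA"},                # keys from tag_keys
    "fields": {"temperature": 21.5},          # remaining schema keys
}
client.write_points([point])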

MinIO

The MinIO DB submodule primarily subscribes to the stream coming out of the VideoAnalytics app via the message bus and stores the frames in MinIO for historical analysis.
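
The storage step can be sketched with the MinIO Python SDK, as below; the endpoint, credentials, and bucket name are assumptions for illustration:

import io

from minio import Minio  # pip install minio

client = Minio(
    "localhost:9000",           # hypothetical MinIO endpoint
    access_key="minio-access",  # value of MINIO_ACCESS_KEY
    secret_key="minio-secret",  # value of MINIO_SECRET_KEY
    secure=False,               # matches "ssl": "false" in the config
)

bucket = "image-store"  # hypothetical bucket name
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# img_handle (extracted from the metadata) is the key; the frame is the value.
frame = b"..."  # raw encoded frame received over the message bus
client.put_object(bucket, "img_handle_1234", io.BytesIO(frame), len(frame))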

The high-level logical flow of MinIO DB is as follows:

  1. The message bus subscriber in MinIO DB subscribes to the classified results (metadata, frame) published by VideoAnalytics on the message bus. The img_handle is extracted from the metadata and used as the key, and the frame is stored as the value for that key in MinIO persistent storage.

  2. For historical analysis of the stored classified images, MinIO DB starts the message bus server, which provides the read and store interfaces. The payload formats are as follows (see the sketch after this list):

    a. Store interface:

           Request: map ("command": "store","img_handle":"$handle_name"),[]byte($binaryImage)
           Response: map ("img_handle":"$handle_name", "error":"$error_msg") ("error" is optional and available only in case of an error in execution.)

    b. Read interface:

           Request: map ("command": "read", "img_handle":"$handle_name")
           Response: map ("img_handle":"$handle_name", "error":"$error_msg"),[]byte($binaryImage) ("error" is optional and available only in case of an error in execution; $binaryImage is available only in case of a successful read.)

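These two exchanges can be sketched as ZMQ multipart request-reply messages. This is a pattern illustration only; the endpoint is hypothetical and the real EII message bus framing differs:

import json

import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5670")  # hypothetical endpoint

# Store: metadata map as the first frame, binary image as the second.
meta = {"command": "store", "img_handle": "img_handle_1234"}
socket.send_multipart([json.dumps(meta).encode(), b"...frame bytes..."])
print(socket.recv_json())  # ("img_handle": ...) or ("error": ...)

# Read: metadata map only; the reply carries the image as a second frame.
meta = {"command": "read", "img_handle": "img_handle_1234"}
socket.send_multipart([json.dumps(meta).encode()])
reply = socket.recv_multipart()
header = json.loads(reply[0])
frame = reply[1] if len(reply) > 1 else None  # present only on success
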

Detailed Description of Each Key

| Key | Description | Possible Values | Required/Optional |
| --- | --- | --- | --- |
| accessKey | Username required to access MinIO DB | Any suitable value | Required |
| secretKey | Password required to access MinIO DB | Any suitable value | Required |
| retentionTime | The retention policy to apply to the images stored in MinIO DB. For infinite retention time, set it to "-1" | Suitable duration string value as described at https://golang.org/pkg/time/#ParseDuration | Required |
| retentionPollInterval | The time interval for checking images for expiration. Expired images become candidates for deletion and are no longer retained. For infinite retention time, this attribute is ignored | Suitable duration string value as described at https://golang.org/pkg/time/#ParseDuration | Required |
| ssl | If "true", establishes a secure connection with MinIO DB; otherwise a non-secure connection | "true" or "false" | Required |


Steps to Independently Build and Deploy DataStore Microservice

Note

For running two or more microservices, it is recommended to follow the use case-driven approach for building and deploying, as described at Generate Consolidated Files for a Subset of Edge Insights for Industrial Services.

Steps to Independently Build the DataStore Microservice

Note

When switching between independent deployments of the service with and without the Config Manager Agent service dependency, docker-compose build can fail because of a stale Certificates folder. As a workaround, run sudo rm -rf Certificates to proceed with the docker-compose build.

To independently build the DataStore microservice, complete the following steps:

  1. The downloaded source code should have a directory named DataStore. Go to this directory:

    cd IEdgeInsights/DataStore
    
  2. Copy the IEdgeInsights/build/.env file into the current folder using the following command:

    cp ../build/.env .
    

    NOTE: Update the HOST_IP and ETCD_HOST variables in the .env file with your system IP.

    # Source the .env using the following command:
    set -a && source .env && set +a
    
  3. Build the service independently:

    docker-compose build
    

Steps to Independently Deploy DataStore Microservice

You can deploy the DataStore service in either of the following two ways:

Deploy DataStore Service without Config Manager Agent Dependency

Run the following commands to deploy DataStore service without Config Manager Agent dependency:

# Enter the DataStore directory
cd IEdgeInsights/DataStore

Copy the IEdgeInsights/build/.env file into the current folder using the following command, if it is not already present:

cp ../build/.env .

Note: Ensure that docker ps shows no running EII containers and that docker network ls does not list any EII bridge network.

Update the .env file as follows:
1. Set the HOST_IP and ETCD_HOST variables to your system IP.
2. Set `READ_CONFIG_FROM_FILE_ENV` to `true` and `DEV_MODE` to `true`.
3. Set the values for `INFLUXDB_USERNAME`, `INFLUXDB_PASSWORD`, `MINIO_ACCESS_KEY`, and `MINIO_SECRET_KEY`, which are the InfluxDB and MinIO DB credentials.

Source the .env using the following command:
set -a && source .env && set +a

Set write permission for the data directory (volume mount paths). This is required for the database servers to have write permission to the respective storage paths:
sudo mkdir -p $EII_INSTALL_PATH/data
sudo chmod 777 $EII_INSTALL_PATH/data
sudo chown -R eiiuser:eiiuser $EII_INSTALL_PATH/data
# Run the service
docker-compose -f docker-compose.yml -f docker-compose-dev.override.yml up -d

Note

The DataStore container restarts automatically when its config in the config.json file is modified. If you update the config.json file using the vi or vim editor, add set backupcopy=yes to ~/.vimrc so that the changes made to config.json on the host machine are reflected inside the container mount point.

Deploy DataStore Service with Config Manager Agent Dependency

Run the following commands to deploy the DataStore Service with Config Manager Agent dependency:

Note

Ensure that the Config Manager Agent image is present on the system. If it is not, build the Config Manager Agent locally before independently deploying the service with the Config Manager Agent dependency.

# Enter the DataStore directory
cd IEdgeInsights/DataStore

Copy the IEdgeInsights/build/.env file into the current folder using the following command, if it is not already present:

cp ../build/.env .

Note: Ensure that docker ps shows no running EII containers and that docker network ls does not list any EII bridge networks.

Update the .env file as follows:
1. Set the HOST_IP and ETCD_HOST variables to your system IP.
2. Set `READ_CONFIG_FROM_FILE_ENV` to `false`.
3. Set the values for `INFLUXDB_USERNAME`, `INFLUXDB_PASSWORD`, `MINIO_ACCESS_KEY`, and `MINIO_SECRET_KEY`, which are the InfluxDB and MinIO DB credentials.

Copy the docker-compose.yml from IEdgeInsights/ConfigMgrAgent to IEdgeInsights/DataStore as docker-compose.override.yml:

cp ../ConfigMgrAgent/docker-compose.yml docker-compose.override.yml

Copy builder.py (which includes the standalone-mode changes) from the IEdgeInsights/build directory:

cp ../build/builder.py .

Run builder.py in standalone mode; this generates eii_config.json and updates docker-compose.override.yml:

python3 builder.py -s true

Build the service (this step is optional if the service was already built in the independent build steps above):

docker-compose build

To run the service in PROD mode, run the command below:

NOTE: Make sure to update DEV_MODE to false in the .env file when running in PROD mode, and source the .env file using the command set -a && source .env && set +a.

docker-compose up -d

To run the service in DEV mode, run the command below:

NOTE: Make sure to update DEV_MODE to true in the .env file when running in DEV mode, and source the .env file using the command set -a && source .env && set +a.

docker-compose -f docker-compose.yml -f docker-compose-dev.override.yml -f docker-compose.override.yml up -d