Contents
========
* `Contents <#contents>`__
* `Data Store Overview <#data-store-overview>`__
* `Data Store Microservice <#data-store-microservice>`__
* `Steps to Independently Build and Deploy Data Store Microservice <#steps-to-independently-build-and-deploy-data-store-microservice>`__
* `Steps to Independently Build the Data Store Microservice <#steps-to-independently-build-the-data-store-microservice>`__
* `Steps to Independently Deploy Data Store Microservice <#steps-to-independently-deploy-data-store-microservice>`__
* `Deploy Data Store Service without Config Manager Agent Dependency <#deploy-data-store-service-without-config-manager-agent-dependency>`__
* `Deploy Data Store Service with Config Manager Agent Dependency <#deploy-data-store-service-with-config-manager-agent-dependency>`__
* `Data Store Configuration <#data-store-configuration>`__
* `JSON Datatype (InfluxDB/TDEngineDB) <#json-datatype-influxdbtdenginedb>`__
* `Configuring TDEngine as JSON Datatype <#configuring-tdengine-as-json-datatype>`__
* `Blob Datatype (MinIO Object Storage) <#blob-datatype-minio-object-storage>`__
* `EII Msgbus Interface <#eii-msgbus-interface>`__
* `EII Msgbus Request-Response Interface <#eii-msgbus-request-response-interface>`__
* `DB Server Supported Version <#db-server-supported-version>`__
Data Store Overview
-------------------
The Data Store microservice supports the video and time series use cases.
Data Store Microservice
^^^^^^^^^^^^^^^^^^^^^^^
The Data Store microservice supports two types of data:
* `Metadata (JSON) (InfluxDB) <#json-datatype-influxdbtdenginedb>`__
* `Blob (MinIO Object Storage) <#blob-datatype-minio-object-storage>`__
Steps to Independently Build and Deploy Data Store Microservice
---------------------------------------------------------------
.. note:: For running two or more microservices, it is recommended to try the use case-driven approach for building and deploying, as described in `Generate Consolidated Files for a Subset of Edge Insights for Industrial Services <4.1/IEdgeInsights/README.html#generate-consolidated-files-for-a-subset-of-edge-insights-for-industrial-services>`_.
Steps to Independently Build the Data Store Microservice
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note:: When switching between independent deployments of the service with and without the Config Manager Agent dependency, you may run into ``docker-compose build`` issues caused by a leftover ``Certificates`` folder. As a workaround, run ``sudo rm -rf Certificates`` and then proceed with ``docker-compose build``.
To independently build the Data Store microservice, complete the following steps:
#.
   The downloaded source code should have a directory named ``DataStore``.

   If cloned using the manifest file:

   .. code-block:: sh

      # Enter the Data Store directory
      cd IEdgeInsights/DataStore

   If the Data Store repo was cloned directly:

   .. code-block:: sh

      # Enter the Data Store directory
      cd applications.industrial.edge-insights.data-store
#.
   Independently build the service:

   .. code-block:: sh

      docker-compose build
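If the build succeeds, the Data Store image should show up in the local image list. The filter below is only an assumption about the image name; adjust it to match your compose service definition:

.. code-block:: sh

   # Confirm a Data Store image was built (image name may differ in your setup)
   docker images | grep -i datastore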
Steps to Independently Deploy Data Store Microservice
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can deploy the Data Store service in either of the following two ways:
* `Deploy Data Store Service without Config Manager Agent Dependency <#deploy-data-store-service-without-config-manager-agent-dependency>`__
* `Deploy Data Store Service with Config Manager Agent Dependency <#deploy-data-store-service-with-config-manager-agent-dependency>`__
Deploy Data Store Service without Config Manager Agent Dependency
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Run the following commands to deploy the Data Store service without the Config Manager Agent dependency:
If cloned using the manifest file:

.. code-block:: sh

   # Enter the Data Store directory
   cd IEdgeInsights/DataStore

If the Data Store repo was cloned directly:

.. code-block:: sh

   # Enter the Data Store directory
   cd applications.industrial.edge-insights.data-store
.. note:: Ensure that ``docker ps`` shows no running EII containers and that ``docker network ls`` does not list any EII bridge networks.

.. code-block::

   Update the .env file for the following:
   1. Set the ETCD_HOST variable to your system IP.
   2. Set `READ_CONFIG_FROM_FILE_ENV` to `true` and `DEV_MODE` to `true`.
   3. Set the values for 'INFLUXDB_USERNAME', 'INFLUXDB_PASSWORD', 'MINIO_ACCESS_KEY', and 'MINIO_SECRET_KEY', which are the InfluxDB and MinIO object storage credentials.

   Source the .env file using the following command:
   set -a && source .env && set +a

   Set write permission for the data directory (volume mount paths). This is required for the database servers to have write permission to their respective storage paths.
   sudo mkdir -p $EII_INSTALL_PATH/data
   sudo chmod 777 $EII_INSTALL_PATH/data
   sudo chown -R eiiuser:eiiuser $EII_INSTALL_PATH/data
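For reference, a minimal ``.env`` fragment for this mode could look like the following sketch; every value shown is a placeholder, so substitute your own system IP and credentials:

.. code-block:: sh

   # Illustrative .env values only -- adjust for your environment
   ETCD_HOST=192.168.1.10          # your system IP
   READ_CONFIG_FROM_FILE_ENV=true
   DEV_MODE=true
   INFLUXDB_USERNAME=admin         # placeholder credential
   INFLUXDB_PASSWORD=admin123      # placeholder credential
   MINIO_ACCESS_KEY=minioadmin     # placeholder credential
   MINIO_SECRET_KEY=minioadmin     # placeholder credential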
.. code-block:: sh

   # Run the service
   docker-compose -f docker-compose.yml -f docker-compose-dev.override.yml up -d
.. note:: The Data Store container restarts automatically when its config is modified in the ``config.json`` file.
   If you update the ``config.json`` file using the ``vi`` or ``vim`` editor, append ``set backupcopy=yes`` to ``~/.vimrc`` so that the changes made to ``config.json`` on the host machine are reflected inside the container mount point.
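To confirm that the service came up, you can list the running containers and tail the Data Store logs. The container name used below is an assumption based on the ``ia_*`` naming seen elsewhere in this document; adjust it to match your ``docker ps`` output:

.. code-block:: sh

   # List running containers and their status
   docker ps --format "table {{.Names}}\t{{.Status}}"

   # Tail the Data Store logs (container name assumed)
   docker logs -f ia_datastore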
Deploy Data Store Service with Config Manager Agent Dependency
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Run the following commands to deploy the Data Store Service with Config Manager Agent dependency:
.. note:: Ensure that the Config Manager Agent image is present on the system. If it is not, build the Config Manager Agent locally before independently deploying the service with the Config Manager Agent dependency.
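A quick way to check for the image is to filter the local Docker images; the exact image name depends on how the Config Manager Agent was built, so the filter below is only an assumption:

.. code-block:: sh

   # Look for a locally built Config Manager Agent image
   docker images | grep -i config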
.. code-block:: sh

   # Enter the Data Store directory
   cd IEdgeInsights/DataStore
Copy the ``IEdgeInsights/build/.env`` file to the current folder using the following command, if it is not already present:
.. code-block:: sh

   cp ../build/.env .
**Note:** Ensure that ``docker ps`` shows no running EII containers and that ``docker network ls`` does not list any EII bridge networks.
.. code-block::

   Update the .env file for the following:
   1. Set the HOST_IP and ETCD_HOST variables to your system IP.
   2. Set `READ_CONFIG_FROM_FILE_ENV` to `false`.
   3. Set the values for 'INFLUXDB_USERNAME', 'INFLUXDB_PASSWORD', 'MINIO_ACCESS_KEY', and 'MINIO_SECRET_KEY', which are the InfluxDB and MinIO object storage credentials.
Copy the docker-compose.yml from ``IEdgeInsights/ConfigMgrAgent`` as docker-compose.override.yml into ``IEdgeInsights/DataStore``:
.. code-block:: sh

   cp ../ConfigMgrAgent/docker-compose.yml docker-compose.override.yml
Copy the builder.py with the standalone mode changes from the ``IEdgeInsights/build`` directory:
.. code-block:: sh

   cp ../build/builder.py .
Run the builder.py in standalone mode. This generates the eii_config.json file and updates docker-compose.override.yml:
.. code-block:: sh

   python3 builder.py -s true
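As a quick sanity check, you can confirm that the generated config parses as valid JSON. The exact output location of ``eii_config.json`` can vary, so the path below is an assumption; adjust it to wherever builder.py writes the file:

.. code-block:: sh

   # Validate the generated config (path assumed)
   python3 -m json.tool eii_config.json > /dev/null && echo "eii_config.json is valid JSON"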
Build the service (this step is optional if the service was already built in the independent build step above):
.. code-block:: sh

   docker-compose build
To run the service in PROD mode, run the below command:

**NOTE:** Make sure to update ``DEV_MODE`` to ``false`` in the .env file while running in PROD mode, and source the .env file using the command ``set -a && source .env && set +a``.
.. code-block:: sh

   docker-compose up -d
To run the service in DEV mode, run the below command:

**NOTE:** Make sure to update ``DEV_MODE`` to ``true`` in the .env file while running in DEV mode, and source the .env file using the command ``set -a && source .env && set +a``.
.. code-block:: sh

   docker-compose -f docker-compose.yml -f docker-compose-dev.override.yml -f docker-compose.override.yml up -d
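For example, switching an already-configured checkout to PROD mode could look like the following sketch; it assumes ``DEV_MODE`` already exists as a line in the .env file and that an in-place ``sed`` edit is acceptable:

.. code-block:: sh

   # Set DEV_MODE=false in .env, re-source it, and restart the service
   sed -i 's/^DEV_MODE=.*/DEV_MODE=false/' .env
   set -a && source .env && set +a
   docker-compose up -d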
Data Store Configuration
^^^^^^^^^^^^^^^^^^^^^^^^
The configuration for the ``Data Store`` service is added to etcd. The configuration details are available in the docker-compose file, under ``AppName`` in the environment section of the app's service definition.
When the ``AppName`` is ``DataStore``, the following example shows how the app's config looks under the ``/DataStore/config`` key in etcd:
.. code-block:: json

   {
     "datatypes": {
       "json": {
         "host": "ia_influxdb",
         "port": 8086,
         "dbname": "datain",
         "verifySsl": false,
         "ignoreKeys": [
           "defects"
         ],
         "tagKeys": [],
         "retention": "1h",
         "topics": [
           "*"
         ],
         "retentionPollInterval": "60s"
       },
       "blob": {
         "host": "ia_miniodb",
         "port": 9000,
         "dbname": "image-store-bucket",
         "retention": "1h",
         "topics": [
           "edge_video_analytics_results"
         ],
         "retentionPollInterval": "60s"
       }
     },
     "dbs": {
       "json": "influxdb",
       "blob": "miniodb"
     }
   }
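To inspect what is actually stored under this key in a running deployment, you can query etcd directly. This sketch assumes the etcd v3 API is reachable on the default client port (2379) and that ``etcdctl`` is installed on the host; in PROD mode, the appropriate certificates would also need to be passed:

.. code-block:: sh

   # Read the Data Store config key from etcd (endpoint and port are assumptions)
   ETCDCTL_API=3 etcdctl --endpoints="http://${ETCD_HOST}:2379" get /DataStore/config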
The following are the details of the keys in the above config:
*
  ``datatypes`` (required)

  * The ``host`` parameter is optional and is used to connect to the respective database server (local or remote). If it is not provided, the JSON datatype defaults to ``ia_influxdb`` and the Blob datatype defaults to ``ia_miniodb``.
  * The ``port`` parameter is optional and is used to connect to the respective database server's port (local or remote). If it is not provided, the JSON datatype defaults to ``8086`` for InfluxDB and ``6030`` for TDEngine DB, and the Blob datatype defaults to ``9000`` for MinIO object storage.
  * The ``topics`` key determines which messages are to be processed by the corresponding DB microservice. Only the messages with a topic listed in the ``topics`` key are processed by the individual module. If ``topics`` contains ``*``, then all the messages are processed.
  * The ``retention`` parameter is required. It specifies the retention policy to apply to the images stored in MinIO object storage. For infinite retention time, set it to "". Use a suitable duration string value as described at https://golang.org/pkg/time/#ParseDuration.
  * The ``retentionPollInterval`` parameter is required. It sets the time interval for checking images for expiration. Expired images become candidates for deletion and are no longer retained. For infinite retention time, this attribute is ignored. Use a suitable duration string value as described at https://golang.org/pkg/time/#ParseDuration.
*
  ``dbs`` (optional)

  * The ``json`` parameter in the ``dbs`` configuration is optional and is used to select the DB for the JSON (metadata) datatype. The available options are ``influxdb`` and ``tdenginedb``.
  * The ``blob`` parameter in the ``dbs`` configuration is optional and is used to select the DB for the Blob datatype. The available option is ``miniodb``.
By default, both the DBs will be enabled. If you want to disable any of the above DBs, remove the corresponding key and its value from the config.
For example, if you are not using MinIO object storage, you can disable it and modify the config as below:
.. code-block:: json

   {
     "datatypes": {
       "json": {
         "host": "ia_influxdb",
         "port": 8086,
         "dbname": "datain",
         "verifySsl": false,
         "ignoreKeys": [
           "defects"
         ],
         "tagKeys": [],
         "retention": "1h",
         "topics": [
           "*"
         ],
         "retentionPollInterval": "60s"
       }
     }
   }
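If you prefer to make this change from the command line, a ``jq`` sketch like the following can drop the ``blob`` section. The use of ``jq`` and the key path are assumptions (the keys may be nested under a top-level ``config`` object in your ``config.json``), so back up the file first:

.. code-block:: sh

   # Remove the MinIO ("blob") datatype from config.json using jq (key path assumed)
   jq 'del(.datatypes.blob)' config.json > config.json.tmp && mv config.json.tmp config.json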
JSON Datatype (InfluxDB/TDEngineDB)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For nested JSON data, by default, the Data Store flattens the nested JSON and pushes the flattened data to InfluxDB. To avoid flattening a particular nested key, mention that key under ``ignoreKeys`` in the **config.json** (``[WORK_DIR]/IEdgeInsights/DataStore/config.json``) file. Currently, the ``defects`` key is excluded from flattening. Every key to be ignored has to be on a new line.
For example:

.. code-block:: json

   "ignoreKeys": [ "Key1", "Key2", "Key3" ]
By default, all the keys in the data schema are pushed to InfluxDB as fields. If tags are present in the data schema, they can be mentioned under ``tagKeys`` in the **config.json** (``[WORK_DIR]/IEdgeInsights/DataStore/config.json``) file; the data pushed to InfluxDB will then have both fields and tags.
At present, no tags are visible in the data schema and ``tagKeys`` is kept blank in the config file.
For example:

.. code-block:: json

   "tagKeys": [ "Tag1", "Tag2" ]
Configuring TDEngine as JSON Datatype
"""""""""""""""""""""""""""""""""""""
**Note:** Data Store with the TDEngine DB is not fully tested to work with the vision and time series use cases.
The default JSON datatype DB is ``influxdb``. To choose TDEngine as the JSON datatype DB, follow the steps below:
#.
   Add ``dbs`` to **config.json** (``[WORK_DIR]/IEdgeInsights/DataStore/config.json``), setting the ``json`` type to ``tdenginedb``:

   .. code-block:: json

      "dbs": {
        "json": "tdenginedb"
      }
#.
   Add the ``ia_tdenginedb`` service to docker-compose.yml.

   Copy the contents of **docker-compose-dev.tdengine.yml** (``[WORK_DIR]/IEdgeInsights/DataStore/docker-compose-dev.tdengine.yml``) and add them to **docker-compose.yml** (``[WORK_DIR]/IEdgeInsights/DataStore/docker-compose.yml``). If required, the ``ia_influxdb`` service can be removed from **docker-compose.yml** (``[WORK_DIR]/IEdgeInsights/DataStore/docker-compose.yml``).
.. note::

   1. TDEngine DB uses the Line Protocol for inserting data into the database.
   2. TDEngine DB works with the default credentials and does not require adding them to the .env file.
   3. Currently, PROD mode of TDEngine DB does not use any certificates; PROD mode works the same as DEV mode.
   4. Subscription to table data (topics in Publishers) requires the table to be created before the Data Store starts.
   5. Deleting/dropping a table causes the Data Store to misbehave due to the TDEngine handler (only when the Data Store is subscribed to the respective table).
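Since the table must exist before the Data Store starts (point 4 above), one option is to pre-create it via the ``taos`` CLI inside the TDEngine container. The container name, database name, table name, and schema below are purely illustrative assumptions; use the database and schema that your publisher actually produces:

.. code-block:: sh

   # Pre-create an illustrative database and table before starting the Data Store
   docker exec -it ia_tdenginedb taos -s "CREATE DATABASE IF NOT EXISTS datain;"
   docker exec -it ia_tdenginedb taos -s "CREATE TABLE IF NOT EXISTS datain.edge_video_analytics_results (ts TIMESTAMP, data NCHAR(256));"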
Blob Datatype (MinIO Object Storage)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MinIO object storage primarily subscribes to the stream that comes out of the EdgeVideoAnalyticsMicroservice app via the EII messagebus and stores the frames in MinIO for historical analysis.
The high-level logical flow of MinIO object storage is as follows:

#. The EII messagebus subscriber in MinIO object storage subscribes to the classified results (metadata, frame)
   published by EdgeVideoAnalyticsMicroservice on the EII messagebus.
#. The img_handle is extracted from the metadata and used as the key, and
   the frame is stored as the value for that key in MinIO persistent storage.
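For a quick look at the stored frames, the MinIO client (``mc``) can list the bucket. The ``mc`` tool, the endpoint, and the alias name are assumptions; the bucket name matches the ``dbname`` from the configuration above:

.. code-block:: sh

   # Point the MinIO client at the object storage and list stored frames
   mc alias set eii-minio http://${HOST_IP}:9000 "${MINIO_ACCESS_KEY}" "${MINIO_SECRET_KEY}"
   mc ls eii-minio/image-store-bucket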
EII Msgbus Interface
^^^^^^^^^^^^^^^^^^^^
* The Data Store starts the EII messagebus publisher, EII messagebus subscriber,
  and EII messagebus request-reply threads based on the PubTopics, SubTopics,
  and Server configuration.
* The EII messagebus subscriber thread connects to the PUB socket of the EII messagebus on which
  the data is published by EdgeVideoAnalyticsMicroservice and pushes it to InfluxDB (metadata).
* The EII messagebus publisher thread publishes the point data ingested by Telegraf
  and the classifier results coming out of the point data analytics.
EII Msgbus Request-Response Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For a historical analysis of the stored classified images or metadata, the Data Store starts an EII Messagebus/gRPC request-response interface server which provides the read, write, update, list, and delete interfaces.
The payload formats are defined in `EII Msgbus/gRPC Request-Response Endpoints <4.1/IEdgeInsights/DataStore/docs/eii_msgbus_grpc_req_resp_interface.html>`_.
.. note:: The gRPC request-response interface server currently supports **DEV mode** only.
DB Server Supported Version
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Currently, the DB handlers are supported and tested with the below-mentioned versions of the respective DB servers:

.. list-table::
   :header-rows: 1

   * - S.No
     - DB Server
     - Supported Version
   * - 1
     - InfluxDB
     - 1.8.7
   * - 2
     - MinIO
     - RELEASE.2020-12-12T08-39-07Z
   * - 3
     - TDEngine
     - 3.0.2.5
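To confirm the versions actually running in a deployment, you can query the containers directly. The container names below match the ``host`` values used in the configuration above, and the exact CLI flags may differ between image versions:

.. code-block:: sh

   # Check the InfluxDB server version (1.x CLI)
   docker exec ia_influxdb influx -version

   # Check the MinIO server version
   docker exec ia_miniodb minio --version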