ThingsBoard Professional Edition cluster setup with Docker Compose guide

This guide will help you set up ThingsBoard in cluster mode with Docker Compose. For this purpose, we will use the docker container images available on Docker Hub.

Prerequisites

ThingsBoard microservices run in a dockerized environment. Before starting, please make sure Docker CE and Docker Compose are installed on your system.

Step 1. Checkout all ThingsBoard PE Images

Please check out all ThingsBoard PE images on Docker Hub. You will need to open each of the verified images and click “Proceed to checkout” to accept the ThingsBoard PE license agreement.

For your convenience, the images that require checkout are listed below (the same images are pulled in Step 2):

  • tb-pe-node
  • tb-pe-web-ui
  • tb-pe-web-report
  • tb-pe-js-executor
  • tb-pe-http-transport
  • tb-pe-mqtt-transport
  • tb-pe-coap-transport

Fill in basic information about yourself and click “Get Content”.


Step 2. Pull ThingsBoard PE Images

Make sure you have logged in to Docker Hub using the command line.
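
If you have not logged in yet, you can do so with the standard Docker CLI command (it will prompt for your Docker Hub credentials):

docker login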

docker pull store/thingsboard/tb-pe-node:3.1.1PE
docker pull store/thingsboard/tb-pe-web-ui:3.1.1PE
docker pull store/thingsboard/tb-pe-web-report:3.1.1PE
docker pull store/thingsboard/tb-pe-js-executor:3.1.1PE
docker pull store/thingsboard/tb-pe-http-transport:3.1.1PE
docker pull store/thingsboard/tb-pe-mqtt-transport:3.1.1PE
docker pull store/thingsboard/tb-pe-coap-transport:3.1.1PE
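
Optionally, you can verify that all images have been pulled with a simple filter on the standard image listing command:

docker images | grep thingsboard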

Step 3. Clone ThingsBoard PE Docker Compose scripts

git clone https://github.com/thingsboard/thingsboard-pe-docker-compose.git tb-pe-docker-compose

Step 4. Obtain your license key

We assume you have already chosen your subscription plan or decided to purchase a perpetual license. If not, please navigate to the pricing page to select the best license option for your case and get your license. See How-to get pay-as-you-go subscription or How-to get perpetual license for more details.

IMPORTANT NOTE: Make sure you have purchased a license key for at least two instances of ThingsBoard PE. Otherwise, you need to modify your local copy of docker-compose.yml to use only one ThingsBoard instance. We will refer to the license key you obtained during this step as PUT_YOUR_LICENSE_SECRET_HERE later in this guide.

Step 5. Configure your license key

cd tb-pe-docker-compose
nano tb-node.env

and set the license secret parameter:

# ThingsBoard server configuration

ZOOKEEPER_ENABLED=true
...

TB_LICENSE_SECRET=PUT_YOUR_LICENSE_SECRET_HERE

Step 6. Review the architecture page

Starting with ThingsBoard v2.2, it is possible to install a ThingsBoard cluster using the new microservices architecture and docker containers. See the microservices architecture page for more details.

Step 7. Configure ThingsBoard database

Before performing the initial installation, you can configure the type of database to be used with ThingsBoard. To set the database type, change the value of the DATABASE variable in the .env file to one of the following (an example follows the note below):

  • postgres - use PostgreSQL database;
  • hybrid - use PostgreSQL for entities database and Cassandra for timeseries database;

NOTE: Depending on the chosen database type, the corresponding docker service will be deployed (see docker-compose.postgres.yml and docker-compose.hybrid.yml for details).
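
For example, a minimal sketch of the relevant line in .env for the hybrid PostgreSQL/Cassandra setup (the rest of the file is left unchanged):

DATABASE=hybrid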

Step 8. Choose ThingsBoard queue service

Choose one of the following message queue brokers for communication between services:

  • In Memory: the default queue; useful for development (PoC) environments, but do not use it in production.

  • Kafka: recommended for production environments; suitable for on-premise and private cloud deployments, independent of any cloud service provider.

  • RabbitMQ: recommended if you do not have much load and already have experience running this message broker.

  • AWS SQS: a managed message queue you can use if you plan to deploy ThingsBoard on AWS.

  • Google Pub/Sub: a managed message queue you can use if you plan to deploy ThingsBoard on Google Cloud.

  • Azure Service Bus: a managed message queue you can use if you plan to deploy ThingsBoard on Azure.

  • Confluent Cloud: a fully managed, Kafka-based event streaming platform.

See the corresponding architecture page and rule engine pages for more details.

Kafka Configuration

Apache Kafka is an open-source stream-processing software platform.

Configure the environment variables file:

sudo nano .env

Check the following line:

TB_QUEUE_TYPE=kafka

AWS SQS Configuration

First, create an AWS account and get access to the AWS SQS service.

To work with the AWS SQS service, you will need to create the following credentials using this instruction:

  • Access key ID
  • Secret access key

Configure the environment variables file:

sudo nano .env

Check the following line:

TB_QUEUE_TYPE=aws-sqs

Configure the AWS SQS queue service environment variables file:

sudo nano queue-aws-sqs.env

Add the following lines to the environment variables file. Replace "YOUR_KEY" and "YOUR_SECRET" with your real AWS user credentials, and replace "YOUR_REGION" with your real AWS SQS account region:

TB_QUEUE_TYPE=aws-sqs
TB_QUEUE_AWS_SQS_ACCESS_KEY_ID=YOUR_KEY
TB_QUEUE_AWS_SQS_SECRET_ACCESS_KEY=YOUR_SECRET
TB_QUEUE_AWS_SQS_REGION=YOUR_REGION


# These params affect the number of requests per second from each partitions per each queue.
# Number of requests to particular Message Queue is calculated based on the formula:
# ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
#  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
# For example, number of requests based on default parameters is:

# Rule Engine queues:
# Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
# Core queue 10 partitions
# Transport request Queue + response Queue = 2
# Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
# Total = 44
# Number of requests per second = 44 * 1000 / 25 = 1760 requests

# Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
# Sample parameters to fit into 10 requests per second on a "monolith" deployment: 

TB_QUEUE_CORE_POLL_INTERVAL_MS=1000
TB_QUEUE_CORE_PARTITIONS=2
TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_PARTITIONS=2
TB_QUEUE_RE_HP_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_HP_PARTITIONS=1
TB_QUEUE_RE_SQ_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_SQ_PARTITIONS=1
TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS=1000
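
As a quick check, applying the same formula to the sample parameters above (counting the queues the same way as in the default example) confirms the target rate:

# Rule Engine queues:
# Main 2 partitions + HighPriority 1 partition + SequentialByOriginator 1 partition = 4
# Core queue 2 partitions
# Transport request Queue + response Queue = 2
# Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
# Total = 10
# Number of requests per second = 10 * 1000 / 1000 = 10 requests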

Google Pub/Sub Configuration

Create a Google Cloud account and get access to the Pub/Sub service.

Create a project and enable the Pub/Sub service for it using this instruction.

Create service account credentials with the appropriate role using this instruction, and save the JSON file with the credentials; it will be used later in Step 9.

Configure the environment variables file:

sudo nano .env

Check the following line:

TB_QUEUE_TYPE=pubsub

Configure the Pub/Sub queue service environment variables file:

sudo nano queue-pubsub.env

Add the following lines to the environment variables file. Replace "YOUR_PROJECT_ID" and "YOUR_SERVICE_ACCOUNT" with your real account information:

TB_QUEUE_TYPE=pubsub
TB_QUEUE_PUBSUB_PROJECT_ID=YOUR_PROJECT_ID
TB_QUEUE_PUBSUB_SERVICE_ACCOUNT=YOUR_SERVICE_ACCOUNT

# These params affect the number of requests per second from each partitions per each queue.
# Number of requests to particular Message Queue is calculated based on the formula:
# ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
#  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
# For example, number of requests based on default parameters is:

# Rule Engine queues:
# Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
# Core queue 10 partitions
# Transport request Queue + response Queue = 2
# Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
# Total = 44
# Number of requests per second = 44 * 1000 / 25 = 1760 requests

# Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
# Sample parameters to fit into 10 requests per second on a "monolith" deployment: 

TB_QUEUE_CORE_POLL_INTERVAL_MS=1000
TB_QUEUE_CORE_PARTITIONS=2
TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_PARTITIONS=2
TB_QUEUE_RE_HP_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_HP_PARTITIONS=1
TB_QUEUE_RE_SQ_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_SQ_PARTITIONS=1
TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS=1000

Azure Service Bus Configuration

Create an Azure account and get access to the Azure Service Bus.

Get started with the Service Bus service using this instruction.

Create a Shared Access Signature using this instruction.

Configure the environment variables file:

sudo nano .env

Check the following line:

TB_QUEUE_TYPE=service-bus

Configure the Service Bus queue service environment variables file:

sudo nano queue-service-bus.env

Add the following lines to the environment variables file. Replace "YOUR_NAMESPACE_NAME" with your real Service Bus namespace, and replace "YOUR_SAS_KEY_NAME" and "YOUR_SAS_KEY" with your real SAS key name and value:

TB_QUEUE_TYPE=service-bus
TB_QUEUE_SERVICE_BUS_NAMESPACE_NAME=YOUR_NAMESPACE_NAME
TB_QUEUE_SERVICE_BUS_SAS_KEY_NAME=YOUR_SAS_KEY_NAME
TB_QUEUE_SERVICE_BUS_SAS_KEY=YOUR_SAS_KEY

# These params affect the number of requests per second from each partitions per each queue.
# Number of requests to particular Message Queue is calculated based on the formula:
# ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
#  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
# For example, number of requests based on default parameters is:

# Rule Engine queues:
# Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
# Core queue 10 partitions
# Transport request Queue + response Queue = 2
# Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
# Total = 44
# Number of requests per second = 44 * 1000 / 25 = 1760 requests

# Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
# Sample parameters to fit into 10 requests per second on a "monolith" deployment: 

TB_QUEUE_CORE_POLL_INTERVAL_MS=1000
TB_QUEUE_CORE_PARTITIONS=2
TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_PARTITIONS=2
TB_QUEUE_RE_HP_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_HP_PARTITIONS=1
TB_QUEUE_RE_SQ_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_SQ_PARTITIONS=1
TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS=1000

RabbitMQ Configuration

Install RabbitMQ using this instruction.

Configure the environment variables file:

sudo nano .env

Check the following line:

TB_QUEUE_TYPE=rabbitmq

Configure the RabbitMQ queue service environment variables file:

sudo nano queue-rabbitmq.env

Add the following lines to the environment variables file. Replace "YOUR_USERNAME" and "YOUR_PASSWORD" with your real RabbitMQ user credentials, and replace "localhost" and "5672" with your real RabbitMQ host and port:

TB_QUEUE_TYPE=rabbitmq
TB_QUEUE_RABBIT_MQ_HOST=localhost
TB_QUEUE_RABBIT_MQ_PORT=5672
TB_QUEUE_RABBIT_MQ_USERNAME=YOUR_USERNAME
TB_QUEUE_RABBIT_MQ_PASSWORD=YOUR_PASSWORD

Confluent Cloud Configuration

Create an account, get access to Confluent Cloud, and then create a Kafka cluster and an API Key.

Configure the environment variables file:

sudo nano .env

Check the following line:

TB_QUEUE_TYPE=confluent

Configure the Confluent Cloud queue service environment variables file:

sudo nano queue-confluent-cloud.env

Add the following lines to the environment variables file. Replace "CLUSTER_API_KEY" and "CLUSTER_API_SECRET" with your real cluster API credentials, and replace "confluent.cloud:9092" with your real Confluent Cloud bootstrap server address:

TB_QUEUE_TYPE=kafka

TB_KAFKA_SERVERS=confluent.cloud:9092
TB_QUEUE_KAFKA_REPLICATION_FACTOR=3

TB_QUEUE_KAFKA_USE_CONFLUENT_CLOUD=true
TB_QUEUE_KAFKA_CONFLUENT_SSL_ALGORITHM=https
TB_QUEUE_KAFKA_CONFLUENT_SASL_MECHANISM=PLAIN
TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";
TB_QUEUE_KAFKA_CONFLUENT_SECURITY_PROTOCOL=SASL_SSL
TB_QUEUE_KAFKA_CONFLUENT_USERNAME=CLUSTER_API_KEY
TB_QUEUE_KAFKA_CONFLUENT_PASSWORD=CLUSTER_API_SECRET

TB_QUEUE_KAFKA_RE_TOPIC_PROPERTIES=retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000
TB_QUEUE_KAFKA_CORE_TOPIC_PROPERTIES=retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000
TB_QUEUE_KAFKA_TA_TOPIC_PROPERTIES=retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000
TB_QUEUE_KAFKA_NOTIFICATIONS_TOPIC_PROPERTIES=retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000
TB_QUEUE_KAFKA_JE_TOPIC_PROPERTIES=retention.ms:604800000;segment.bytes:52428800;retention.bytes:104857600

# These params affect the number of requests per second from each partitions per each queue.
# Number of requests to particular Message Queue is calculated based on the formula:
# ((Number of Rule Engine and Core Queues) * (Number of partitions per Queue) + (Number of transport queues)
#  + (Number of microservices) + (Number of JS executors)) * 1000 / POLL_INTERVAL_MS
# For example, number of requests based on default parameters is:

# Rule Engine queues:
# Main 10 partitions + HighPriority 10 partitions + SequentialByOriginator 10 partitions = 30
# Core queue 10 partitions
# Transport request Queue + response Queue = 2
# Rule Engine Transport notifications Queue + Core Transport notifications Queue = 2
# Total = 44
# Number of requests per second = 44 * 1000 / 25 = 1760 requests

# Based on the use case, you can compromise latency and decrease number of partitions/requests to the queue, if the message load is low.
# Sample parameters to fit into 10 requests per second on a "monolith" deployment: 

TB_QUEUE_CORE_POLL_INTERVAL_MS=1000
TB_QUEUE_CORE_PARTITIONS=2
TB_QUEUE_RULE_ENGINE_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_MAIN_PARTITIONS=2
TB_QUEUE_RE_HP_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_HP_PARTITIONS=1
TB_QUEUE_RE_SQ_POLL_INTERVAL_MS=1000
TB_QUEUE_RE_SQ_PARTITIONS=1
TB_QUEUE_TRANSPORT_REQUEST_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_RESPONSE_POLL_INTERVAL_MS=1000
TB_QUEUE_TRANSPORT_NOTIFICATIONS_POLL_INTERVAL_MS=1000

Step 9. Running

Execute the following command to create log folders for the services and change the ownership of these folders to the docker container users. The chown command is used to change the owner, which requires sudo permissions (the script will request a password for sudo access):

$ ./docker-create-log-folders.sh
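
For reference, a minimal sketch of what such a script typically does; the folder name and owner UID/GID below are assumptions, so treat docker-create-log-folders.sh in the cloned repository as the authoritative version:

mkdir -p ./tb-node/log                 # create a log folder for a ThingsBoard node container (assumed path)
sudo chown -R 799:799 ./tb-node/log    # hand ownership to the user inside the container (assumed UID/GID)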

Execute the following command to run installation:

$ ./docker-install-tb.sh --loadDemo

Where:

  • --loadDemo - optional argument. Whether to load additional demo data.

Execute the following command to start services:

$ ./docker-start-services.sh

After a while, when all services have started successfully, you can open http://{your-host-ip} in your browser (for example, http://localhost). You should see the ThingsBoard login page.

Use the following default credentials:

  • System Administrator: sysadmin@thingsboard.org / sysadmin

If you installed the database with demo data (using the --loadDemo flag), you can also use the following credentials:

  • Tenant Administrator: tenant@thingsboard.org / tenant
  • Customer User: customer@thingsboard.org / customer

In case of any issues, you can examine the service logs for errors. For example, to see ThingsBoard node logs, execute the following command:

$ docker-compose logs -f tb-core1 tb-rule-engine1

Alternatively, use docker-compose ps to see the state of all containers, or docker-compose logs -f to inspect the logs of all running services. See the docker-compose logs command reference for details.
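
For example (the standard docker-compose commands referenced above):

$ docker-compose ps
$ docker-compose logs -f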

Execute the following command to stop services:

$ ./docker-stop-services.sh

Execute the following command to stop and completely remove deployed docker containers:

$ ./docker-remove-services.sh

Execute the following command to update particular or all services (pull a newer docker image and rebuild the container):

$ ./docker-update-service.sh [SERVICE...]

Where:

  • [SERVICE...] - list of services to update (defined in the docker-compose configurations). If not specified, all services will be updated (see the example below).
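
For example, to update only the first ThingsBoard core node (service names are defined in the docker-compose files; tb-core1 is the one shown in the logs example above):

$ ./docker-update-service.sh tb-core1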

Upgrading

In case a database upgrade is needed, execute the following commands:

$ ./docker-stop-services.sh
$ ./docker-upgrade-tb.sh --fromVersion=[FROM_VERSION]
$ ./docker-start-services.sh

Where:

  • FROM_VERSION - the version from which the upgrade should be started. See Upgrade Instructions for valid fromVersion values (an example follows below).
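
For example, a hypothetical upgrade from version 3.1.0 (replace the value with the actual version you are upgrading from, as listed in the Upgrade Instructions):

$ ./docker-upgrade-tb.sh --fromVersion=3.1.0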

Next steps

  • Getting started guides - these guides provide a quick overview of the main ThingsBoard features.

  • Connect your device - learn how to connect devices based on your connectivity technology or solution.

  • Data visualization - these guides contain instructions on how to configure complex ThingsBoard dashboards.

  • Data processing & actions - learn how to use the ThingsBoard Rule Engine.

  • IoT Data analytics - learn how to use the Rule Engine to perform basic analytics tasks.

  • Hardware samples - learn how to connect various hardware platforms to ThingsBoard.

  • Advanced features - learn about advanced ThingsBoard features.

  • Contribution and Development - learn about contribution and development in ThingsBoard.