Deploying TBMQ cluster on AWS using Kubernetes

This guide will help you deploy TBMQ on AWS EKS.

Prerequisites

Install and configure tools

To deploy TBMQ on an EKS cluster, you need the kubectl, eksctl, and awscli tools installed.
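As a quick sanity check before proceeding, you can verify that the tools are on your PATH. A minimal sketch; the `require` helper is hypothetical and not part of the TBMQ scripts:

```bash
# Pre-flight check: confirm the required CLIs are installed.
# "require" is a hypothetical helper, not part of the TBMQ scripts.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

for tool in kubectl eksctl aws; do
  require "$tool" || echo "please install $tool before continuing"
done
```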

You also need to configure your Access Key, Secret Key, and default region. Refer to this guide to obtain the Access Key and Secret Key. The default region should be the ID of the region where you want to deploy the cluster.

aws configure

Step 1. Clone the TBMQ K8S scripts repository

git clone -b release-2.2.0 https://github.com/thingsboard/tbmq.git
cd tbmq/k8s/aws

Step 2. Configure and create the EKS cluster

You can find the recommended cluster configuration in cluster.yml. Modify the following fields if needed:

  • region - the AWS region where the cluster will be located (default: us-east-1)
  • availabilityZones - the list of availability zone IDs for the region (default: [us-east-1a, us-east-1b, us-east-1c])
  • instanceType - the instance type for TBMQ nodes (default: m7a.large)

Note: if you leave instanceType and desiredCapacity unchanged, EKS will deploy 2 nodes of type m7a.large.
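If you prefer to adjust these fields non-interactively, a small sed helper can do it. This is a sketch under the assumption of GNU sed and a flat "key: value" layout; the `set_field` helper and the stand-in file are hypothetical:

```bash
# set_field rewrites a simple "key: value" line in a YAML file in place.
# Hypothetical helper; assumes GNU sed and a flat "key: value" layout.
set_field() {
  local file="$1" key="$2" value="$3"
  sed -i "s|^\([[:space:]]*${key}:[[:space:]]*\).*|\1${value}|" "$file"
}

# Demonstrated on a minimal stand-in config (not the real cluster.yml):
cat > /tmp/cluster-example.yml <<'EOF'
metadata:
  name: tbmq
  region: us-east-1
EOF
set_field /tmp/cluster-example.yml region eu-west-1
grep 'region:' /tmp/cluster-example.yml   # now reads "region: eu-west-1"
```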


To harden access to PostgreSQL and MSK, you need to configure or create a VPC, set it as the VPC of the TBMQ cluster, create security groups for PostgreSQL and MSK and configure them in the managed node group of the TBMQ cluster, and then allow access from the TBMQ cluster nodes to PostgreSQL/MSK through another security group.

See here for more details about eksctl VPC configuration.

Command to create the AWS cluster:

eksctl create cluster -f cluster.yml

Step 3. Create the AWS Load Balancer Controller

Once the cluster is ready, you need to create the AWS Load Balancer Controller. You can follow this guide. The cluster scripts will create the following load balancers:

  • tb-broker-http-loadbalancer - an AWS ALB used for the Web UI and REST API;
  • tb-broker-mqtt-loadbalancer - an AWS NLB used for MQTT communication.

Configuring the AWS Load Balancer Controller is a required step; without it, these load balancers will not work correctly.

Step 4. Amazon PostgreSQL database configuration

You need to set up PostgreSQL on Amazon RDS. You can follow this guide to configure it.

Note: here are some recommendations:

  • Make sure your PostgreSQL version is 16.x;
  • Use the 'Production' template for high availability. It enables a lot of useful settings by default;
  • Consider creating a custom parameter group for your RDS instance. It will make changing DB parameters easier;
  • Consider deploying the RDS instance into private subnets. This way it will be nearly impossible to accidentally expose it to the internet;
  • You may also change the username field and set or auto-generate the password field (keep your PostgreSQL password in a safe place).

Note: Make sure your database is accessible from the cluster. One of the ways to achieve this is to create the database in the same VPC and subnets as the TBMQ cluster and use the eksctl-tbmq-cluster-ClusterSharedNodeSecurityGroup-* security group. See the screenshots below.

Step 5. Amazon MSK configuration

You need to set up Amazon MSK. Open the MSK submenu in the AWS Console, click "Create cluster", and choose the "Custom create" mode. You should see a screen similar to this:

Note: here are some recommendations:

  • The Apache Kafka version can be set to 3.7.0; TBMQ has been thoroughly tested on it;
  • Use m5.large or a similar instance type;
  • Create a custom cluster configuration for MSK so that Kafka parameters are easier to modify;
  • Use the default "Monitoring" option or enable "Enhanced topic-level monitoring".

Note: make sure the MSK instance is accessible from the TBMQ cluster. The easiest way is to deploy MSK in the same VPC. We recommend using private subnets to reduce the risk of accidentally exposing it to the internet.

Finally, double-check the MSK configuration and complete the cluster creation.

Step 6. Amazon ElastiCache (Redis) configuration

You need to set up Redis on ElastiCache. TBMQ uses the cache to store messages for DEVICE persistent clients, improving performance and reducing DB reads (see details below).

When authentication is enabled, TBMQ looks up MQTT client credentials every time a client connects. A large number of concurrent connections produces a large number of such requests, and the cache helps mitigate this.

In the AWS Console, go to ElastiCache -> Redis clusters -> Create Redis cluster.

Note: here are some recommendations:

  • Use Redis engine version 7.x and nodes with at least 1 GB of memory;
  • Make sure the Redis cluster is accessible from the TBMQ cluster. We recommend deploying it in the same VPC as TBMQ, using private subnets, and applying the eksctl-tbmq-cluster-ClusterSharedNodeSecurityGroup-* security group;
  • Disable automatic backups.

Amazon RDS PostgreSQL

Once the database switches to the 'Available' state, get the Endpoint of the RDS PostgreSQL instance on the AWS Console and paste it into SPRING_DATASOURCE_URL in the tb-broker-db-configmap.yml file in place of the RDS_URL_HERE part.

You will also need to set SPRING_DATASOURCE_USERNAME and SPRING_DATASOURCE_PASSWORD to the PostgreSQL username and password, respectively.
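If you prefer to script this step, the placeholder can be swapped in with sed. A sketch, assuming the RDS_URL_HERE placeholder appears verbatim as described above; the endpoint value, the stand-in file, and the URL line are hypothetical, and GNU sed is assumed:

```bash
# Fill the RDS endpoint into the datasource URL with sed instead of a
# manual edit. The endpoint value and stand-in file are hypothetical;
# live, run the sed line against tb-broker-db-configmap.yml instead.
RDS_ENDPOINT="tbmq-db.abc123xyz.us-east-1.rds.amazonaws.com"
cat > /tmp/db-configmap-example.yml <<'EOF'
  SPRING_DATASOURCE_URL: "jdbc:postgresql://RDS_URL_HERE:5432/thingsboard_mqtt_broker"
EOF
sed -i "s|RDS_URL_HERE|${RDS_ENDPOINT}|" /tmp/db-configmap-example.yml
grep SPRING_DATASOURCE_URL /tmp/db-configmap-example.yml
```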

Amazon MSK

Once the MSK cluster switches to the 'Active' state, execute the following command to get the list of brokers:

aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn $CLUSTER_ARN

where $CLUSTER_ARN is the Amazon Resource Name (ARN) of your MSK cluster:

Put the contents of BootstrapBrokerString into the TB_KAFKA_SERVERS environment variable in the tb-broker.yml file.
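The broker list can also be pulled out of the JSON response programmatically. A sketch, assuming the response carries the BootstrapBrokerString key as returned by the command above; the sample response below is hypothetical, and live you would capture the output of the aws command instead:

```bash
# Extract BootstrapBrokerString from the get-bootstrap-brokers JSON.
# The response below is a hypothetical sample for illustration.
response='{"BootstrapBrokerString": "b-1.tbmq.example.amazonaws.com:9092,b-2.tbmq.example.amazonaws.com:9092"}'
brokers=$(printf '%s' "$response" | sed -n 's/.*"BootstrapBrokerString": *"\([^"]*\)".*/\1/p')
echo "$brokers"
```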

Alternatively, click View client information as seen on the screenshot above and copy the bootstrap server information in plaintext.

Amazon ElastiCache

Once the Redis cluster switches to the 'Available' state, open the 'Cluster details' and copy the Primary endpoint without the ":6379" port suffix; this is your YOUR_REDIS_ENDPOINT_URL_WITHOUT_PORT.

Edit tb-broker-cache-configmap.yml and replace YOUR_REDIS_ENDPOINT_URL_WITHOUT_PORT with the actual value.
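Stripping the port suffix is easy to get wrong by hand; shell parameter expansion does it reliably. A minimal sketch with a hypothetical endpoint value:

```bash
# Strip the ":6379" suffix from the copied Primary endpoint using shell
# parameter expansion. The endpoint value below is hypothetical.
PRIMARY_ENDPOINT="tbmq-redis.abc123.ng.0001.use1.cache.amazonaws.com:6379"
REDIS_HOST="${PRIMARY_ENDPOINT%:6379}"
echo "$REDIS_HOST"
```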

Step 8. Installation

Execute the following command to run the installation:

./k8s-install-tbmq.sh

After the command finishes, you should see the following output in the console:

INFO  o.t.m.b.i.ThingsboardMqttBrokerInstallService - Installation finished successfully!

Otherwise, check that the PostgreSQL URL and password in tb-broker-db-configmap.yml are correct.

Step 9. Startup

Execute the following command to deploy the broker:

./k8s-deploy-tbmq.sh

After a few minutes, you may execute the following command to check the state of all pods:

kubectl get pods

If everything went fine, you should be able to see tb-broker-0 and tb-broker-1 pods. Every pod should be in the READY state.

Step 10. Configure Load Balancers

10.1 Configure HTTP(S) Load Balancer

Configure an HTTP(S) load balancer to access the web interface of your TBMQ instance. Basically, you have two possible configuration options:

  • http - a load balancer without HTTPS support. Recommended for development. The only advantages are simple configuration and minimum costs. May be a good option for a development server but definitely not suitable for production.
  • https - a load balancer with HTTPS support. Recommended for production. Acts as an SSL termination point. You may easily configure it to issue and maintain a valid SSL certificate. It automatically redirects all non-secure (HTTP) traffic to the secure (HTTPS) port.

See links/instructions below on how to configure each of the suggested options.

HTTP Load Balancer

Execute the following command to deploy a plain HTTP load balancer:

kubectl apply -f receipts/http-load-balancer.yml

The process of load balancer provisioning may take some time. You may periodically check the status of the load balancer using the following command:

kubectl get ingress

Once provisioned, you should see similar output:

NAME                          CLASS    HOSTS   ADDRESS                                                                  PORTS   AGE
tb-broker-http-loadbalancer   <none>   *       k8s-thingsbo-tbbroker-000aba1305-222186756.eu-west-1.elb.amazonaws.com   80      3d1h

HTTPS Load Balancer

Use AWS Certificate Manager to create or import SSL certificate. Note your certificate ARN.

Edit the load balancer configuration and replace YOUR_HTTPS_CERTIFICATE_ARN with your certificate ARN:

nano receipts/https-load-balancer.yml

Execute the following command to deploy the HTTPS load balancer:

kubectl apply -f receipts/https-load-balancer.yml

10.2 Configure MQTT Load Balancer

Configure the MQTT load balancer to be able to connect devices using the MQTT protocol.

Create the TCP load balancer using the following command:

kubectl apply -f receipts/mqtt-load-balancer.yml

The load balancer will forward all TCP traffic for ports 1883 and 8883.

One-way TLS

The simplest way to configure MQTTS is to make your MQTT load balancer (AWS NLB) act as a TLS termination point. This way we set up a one-way TLS connection, where the traffic between your devices and the load balancer is encrypted, while the traffic between the load balancer and TBMQ is not. There should be no security issues, since the ALB/NLB is running in your VPC. The only major disadvantage of this option is that you can't use "X.509 certificate" MQTT client credentials, since information about the client certificate is not transferred from the load balancer to TBMQ.

To enable the one-way TLS:

Use AWS Certificate Manager to create or import SSL certificate. Note your certificate ARN.

Edit the load balancer configuration and replace YOUR_MQTTS_CERTIFICATE_ARN with your certificate ARN:

nano receipts/mqtts-load-balancer.yml

Execute the following command to deploy the MQTTS load balancer:

kubectl apply -f receipts/mqtts-load-balancer.yml

Two-way TLS

A more complex way to enable MQTTS is to obtain a valid (signed) TLS certificate and configure it in TBMQ. The main advantage of this option is that you may use it in combination with "X.509 certificate" MQTT client credentials.

To enable the two-way TLS:

Follow this guide to create a .pem file with the SSL certificate. Store the file as server.pem in the working directory.

You'll need to create a config map with your PEM files; you can do it with the following command:

kubectl create configmap tbmq-mqtts-config \
 --from-file=server.pem=YOUR_PEM_FILENAME \
 --from-file=mqttserver_key.pem=YOUR_PEM_KEY_FILENAME \
 -o yaml --dry-run=client | kubectl apply -f-

  • YOUR_PEM_FILENAME is the name of your server certificate file.
  • YOUR_PEM_KEY_FILENAME is the name of your server certificate private key file.

Then, uncomment all the sections of the 'tb-broker.yml' file marked with "Uncomment the following lines to enable two-way MQTTS".

Execute the following command to apply the changes:

kubectl apply -f tb-broker.yml

Finally, deploy the "transparent" load balancer:

kubectl apply -f receipts/mqtt-load-balancer.yml

Step 11. Validate the deployment

Now you can open the TBMQ web interface in your browser using the DNS name of the load balancer.

You can get the DNS name of the load balancer with the following command:

kubectl get ingress

You should see output similar to this:

NAME                          CLASS    HOSTS   ADDRESS                                                                  PORTS   AGE
tb-broker-http-loadbalancer   <none>   *       k8s-thingsbo-tbbroker-000aba1305-222186756.eu-west-1.elb.amazonaws.com   80      3d1h

Use the ADDRESS field of the tb-broker-http-loadbalancer to connect to the cluster.
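Instead of copying the ADDRESS by eye, you can extract it from the command output. The sketch below replays the sample output above as a string for illustration; live, you could pipe `kubectl get ingress` in, or use `kubectl get ingress tb-broker-http-loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'`:

```bash
# Pull the ADDRESS column (4th field) out of the ingress listing.
# The sample line replays the output shown above for illustration.
sample='tb-broker-http-loadbalancer   <none>   *       k8s-thingsbo-tbbroker-000aba1305-222186756.eu-west-1.elb.amazonaws.com   80      3d1h'
address=$(printf '%s\n' "$sample" | awk '{print $4}')
echo "$address"
```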

You should see the TBMQ login page. Use the following default System Administrator credentials:

Username:

sysadmin@thingsboard.org

Password:

sysadmin

On the first login, you will be asked to change the default password to a custom one and then log in again with the new credentials.

Validate MQTT access

To connect to the cluster via MQTT you will need to get the corresponding service IP. You can do this with the command:

kubectl get services

You should see output similar to this:

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                         AGE
tb-broker-mqtt-loadbalancer   LoadBalancer   10.100.119.170   k8s-thingsbo-tbbroker-b9f99d1ab6-1049a98ba4e28403.elb.eu-west-1.amazonaws.com   1883:30308/TCP,8883:31609/TCP   6m58s

Use the EXTERNAL-IP field of the load balancer to connect to the cluster via the MQTT protocol.

Troubleshooting

In case of any issues, you can examine the service logs for errors. For example, to see the TBMQ logs, execute the following command:

kubectl logs -f tb-broker-0

Use the following command to see the state of all statefulsets:

kubectl get statefulsets

See the kubectl Cheat Sheet command reference for more details.

Upgrading

See the release notes and the upgrade instructions for details about the latest changes.

If there are no Upgrade to x.x.x notes for your current version, you can simply follow the upgrade instructions.

If the documentation does not cover your upgrade scenario, please contact us for further guidance.

Backup and restore (Optional)

While backing up your PostgreSQL database is highly recommended, it is optional before proceeding with the upgrade. For further guidance, follow the next instructions.

Upgrade to 2.2.0

In this release, the MQTT authentication mechanism was migrated from YAML/env configuration into the database. During upgrade, TBMQ needs to know which authentication providers are enabled in your deployment. This information is provided through environment variables passed to the upgrade pod.

The upgrade script requires a file named database-setup.yml that explicitly defines these variables. Environment variables from your tb-broker.yml file are not applied during the upgrade — only the values in database-setup.yml will be used.

Tips: If you use only Basic authentication, set SECURITY_MQTT_SSL_ENABLED=false. If you use only X.509 authentication, set SECURITY_MQTT_BASIC_ENABLED=false and SECURITY_MQTT_SSL_ENABLED=true.

Supported variables

  • SECURITY_MQTT_BASIC_ENABLED (true|false)
  • SECURITY_MQTT_SSL_ENABLED (true|false)
  • SECURITY_MQTT_SSL_SKIP_VALIDITY_CHECK_FOR_CLIENT_CERT (true|false) — usually false.

Once the file is prepared and the values verified, proceed with the upgrade process.
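For illustration, the Basic-only case from the tips above might be declared like this. This is a hypothetical sketch: the variable names come from the list above, but the exact structure of the database-setup.yml shipped with TBMQ may differ, so use the file from the repository as the source of truth:

```bash
# Hypothetical env fragment for a Basic-auth-only deployment, written to
# a stand-in file. Variable names are from the supported list above; the
# real database-setup.yml may structure them differently.
cat > /tmp/database-setup-env-example.yml <<'EOF'
env:
  - name: SECURITY_MQTT_BASIC_ENABLED
    value: "true"
  - name: SECURITY_MQTT_SSL_ENABLED
    value: "false"
EOF
grep -c 'name: SECURITY_MQTT' /tmp/database-setup-env-example.yml   # 2 variables declared
```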

Upgrade to 2.1.0

TBMQ v2.1.0 introduces several improvements, including the new Integration Executor microservice and version upgrades of third-party services.

Adding the Integration Executor microservice

This release adds support for external integrations through the new Integration Executor microservice.

To get the latest configuration files, including those for the Integration Executor, pull the updates from the release branch. Follow the steps in the Run upgrade instructions up to the point of executing the upgrade script (do not run the .sh command yet).

The cluster.yml file has been updated with a new managed node group dedicated to the Integration Executor pods.

  - name: tbmq-ie
    instanceType: m7a.large
    desiredCapacity: 2
    maxSize: 2
    minSize: 1
    labels: { role: tbmq-ie }
    ssh:
      allow: true
      publicKeyName: 'dlandiak' # note: use your own public key name here

To create the node group, execute the following command:

eksctl create nodegroup --config-file=cluster.yml

You may also choose not to create dedicated instances for the Integration Executor. In that case, you can skip this step, but you will need to update the nodeSelector section of the tbmq-ie.yml file accordingly.

  nodeSelector:
    role: tbmq-ie

Changing role from "tbmq-ie" to "tbmq" deploys the Integration Executor pods on the same AWS EC2 instances as the TBMQ pods.
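The role swap can be scripted as well. A sketch demonstrated on a stand-in file; live, you would edit tbmq-ie.yml itself, and GNU sed is assumed:

```bash
# Rewrite the nodeSelector role from "tbmq-ie" to "tbmq" with sed,
# demonstrated on a stand-in file (live: edit tbmq-ie.yml directly).
cat > /tmp/tbmq-ie-example.yml <<'EOF'
  nodeSelector:
    role: tbmq-ie
EOF
sed -i 's/role: tbmq-ie$/role: tbmq/' /tmp/tbmq-ie-example.yml
grep 'role:' /tmp/tbmq-ie-example.yml   # now reads "role: tbmq"
```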

Updating third-party services

In v2.1.0, TBMQ updated the versions of key third-party dependencies, including Redis, PostgreSQL, and Kafka. You can review the details of these changes via this link.

Service      Old version   New version
Redis        7.0           7.2.5
PostgreSQL   15.x          16.x
Kafka        3.5.1         3.7.0

We recommend aligning the third-party versions in your environment with the updated versions above to ensure full compatibility with this release. You may choose not to upgrade, but compatibility is only guaranteed with the recommended versions.


We do not provide step-by-step upgrade instructions for third-party services. For such operations, refer to the official documentation of each platform or, if you use managed services, the resources of your service provider.

Once the third-party service versions are handled as needed, continue with the remaining steps of the upgrade process.

Upgrade to 2.0.0

For the TBMQ v2.0.0 upgrade, if you haven't installed Redis yet, please follow step 6 to complete the installation. Only then can you proceed with the upgrade.

Run upgrade

In case you would like to upgrade, please pull the recent changes from the latest release branch:

git pull origin release-2.2.0

To upgrade to a specific version (e.g., TBMQ v2.0.0), replace the release branch in the command above with the target branch name, e.g. release-2.0.0.

Note: Make sure that any custom changes of yours, if present, are not lost during the merge process.

If conflicts unrelated to your changes appear during the merge, we recommend accepting all new changes from the remote branch.

You can abort the merge by executing the following command:

git merge --abort

Then redo the merge, accepting the "theirs" changes:

git pull origin release-2.2.0 -X theirs

Common options for the default merge strategy:

  • -X ours - this option forces conflicting hunks to be auto-resolved by favoring our version.
  • -X theirs - the opposite of ours. See here for more details.

After that, execute the following command:

./k8s-upgrade-tbmq.sh

or, to explicitly specify the version you are upgrading from:

./k8s-upgrade-tbmq.sh --fromVersion=FROM_VERSION

where FROM_VERSION is the version you are upgrading from. See the upgrade instructions for valid fromVersion values.


Note: when upgrading the database, you may optionally stop the TBMQ pods with the following command:

./k8s-delete-tbmq.sh

This causes downtime but guarantees a consistent DB state after the update. Most updates do not require stopping TBMQ.

Once finished, deploy the resources again. This will trigger a rollout restart of TBMQ with the latest version.

./k8s-deploy-tbmq.sh

Cluster deletion

Execute the following command to delete TBMQ nodes:

./k8s-delete-tbmq.sh

Execute the following command to delete all TBMQ nodes and configmaps:

./k8s-delete-all.sh

Execute the following command to delete the EKS cluster (you should change the name of the cluster and the region if those differ):

eksctl delete cluster -r us-east-1 -n tbmq -w

Next steps