Step 4: Install the ThingsBoard* Reference
This ThingsBoard* reference includes a pre-configured database customized for Intel’s EI for AMR solution.
Configure the ThingsBoard* Reference Server
Create a Java* Keystore certificate. You can use this method: https://thingsboard.io/docs/user-guide/mqtt-over-ssl/#java-keystore.
NOTE: For testing purposes, you can set these values:

DOMAIN_SUFFIX=localhost
SUBJECT_ALTERNATIVE_NAMES="ip:<ip_of_server>"
If errors are encountered during key generation, see the Troubleshooting section, “Keytool is not installed”.
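If you prefer to generate the keystore manually instead of using the ThingsBoard* script, a minimal keytool sketch is shown below. The alias, password, and validity values are illustrative placeholders, not EI for AMR defaults; the CN and SAN mirror the test values above.

# Generate a self-signed key pair in a Java* Keystore (example values only):
keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 -keystore server.jks -storepass <keystore_password> -keypass <keystore_password> -dname "CN=localhost" -ext "SAN=ip:<ip_of_server>"
# Export the certificate in PEM form so that a *.pub.pem file is available for the client side:
keytool -exportcert -alias server -keystore server.jks -storepass <keystore_password> -rfc -file server.pub.pem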
After the certificate (.jks file) is generated, copy it to the following path in your installed EI for AMR.
NOTE: This also creates a public key (*.pub.pem) file. You need to copy this file when generating the turtle_creek_client image (client-side image).

# Copy the generated server.jks on the server side:
cp server.jks <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_server_containers/01_docker_sdk_env/artifacts/02_edge_server/edge_server_fleet_management/mqttserver.jks

# Transfer (scp) server.pub.pem from the server to the client, then copy it to the following path on the client:
cp server.pub.pem <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_containers/01_docker_sdk_env/artifacts/02_edge_server/edge_server_fleet_management/thingsboard.pub.pem
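For reference, the transfer mentioned above could be done with scp; the user name and destination directory below are placeholders, not values from the EI for AMR documentation.

# On the server, copy server.pub.pem to the client machine, then run the client-side cp command shown above against the transferred file:
scp server.pub.pem <user>@<client_ip>:/tmp/server.pub.pem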
Set up the environment variables necessary to run docker-compose commands:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_202*
source 01_docker_sdk_env/docker_compose/05_tutorials/config/docker_compose.source
Start the ThingsBoard* Reference Server Deployment
If Intel® Smart Edge Open Multi-Node is deployed, there will be two machines for orchestration.
Machine A-1 is the controller.
Machine A-2 is the server node where the ThingsBoard* server pod is deployed.
If Intel® Smart Edge Open Single-Node is deployed, Machine A-1 and Machine A-2 are the same machine.
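If you are not sure which deployment you have, you can list the cluster nodes on Machine A-1 before proceeding (a quick check, assuming kubectl is already configured on the controller):

kubectl get nodes --output=wide
# One entry (the controller also acting as the worker) indicates Single-Node;
# a separate controller and server node indicate Multi-Node.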
If Intel® Smart Edge Open Single-Node is configured, run the following commands:
sed -i "s/number_of_nodes.stdout!=0/("{{number_of_nodes.stdout}}"!="0")/g" AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_uninstall.yaml
sed -i "s/number_of_nodes.stdout!=0/("{{number_of_nodes.stdout}}"!="0")/g" AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_install.yaml
sed -i "s/number_of_nodes.stdout==0/("{{number_of_nodes.stdout}}"=="0")/g" AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_install.yaml
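You can optionally confirm that the substitutions were applied before running the playbook (a simple check, not part of the official flow):

grep -n "number_of_nodes.stdout" AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_install.yaml
# Each match should now contain the quoted form, for example ("{{number_of_nodes.stdout}}"=="0").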
NOTE: This step is needed to avoid a known limitation of the Intel® Smart Edge Open Single-Node playbook.

Run the following command on Machine A-1:
ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_install.yaml
NOTE: If the ansible-playbook command fails, you can uninstall and try again. Also, check whether the installation actually succeeded even if the ansible-playbook command reported a failure.

ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_uninstall.yaml
ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/02_edge_server/fleet_management/fleet_management_playbook_install.yaml
Verify that the services, pods, and deployment are running on Machine A-1:
$ kubectl get all --output=wide --namespace fleet-management
NAME                                    READY   STATUS    RESTARTS      AGE
pod/fleet-deployment-8449fdc54f-m4fhb   1/1     Running   2 (21s ago)   81s

NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                        AGE
service/fleet-service   NodePort   10.97.216.230   <none>        9090:32764/TCP,1883:32765/TCP,7070:32766/TCP,8883:32767/TCP   42s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/fleet-deployment   1/1     1            1           81s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/fleet-deployment-8449fdc54f   1         1         1       81s
NOTE: CLUSTER-IP is a virtual IP that is allocated by Kubernetes* to a service. It is the Kubernetes* internal IP. Two different pods can communicate using this IP.

Verify that the Docker* container is running on Machine A-2:
docker ps | grep fleet
dd22be830f82   10.237.23.152:30003/intel/fleet-management   "/usr/bin/start-tb.sh"   52 minutes ago   Up 52 minutes   k8s_fleet_fleet-deployment-858494f866-7jmhh_fleet-management_13d09334-4223-4409-8cd9-c0cac60cd04c_0
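If the pod or container is not healthy, you can inspect the ThingsBoard* server logs from Machine A-1 (a minimal sketch; the deployment and pod names come from the kubectl get all output above):

kubectl logs --namespace fleet-management deployment/fleet-deployment --tail=50
# Or follow the logs of a specific pod:
kubectl logs --namespace fleet-management --follow pod/fleet-deployment-8449fdc54f-m4fhb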
To test the image, install GDM3 and a browser on any host machine, and then open the following URL:
sudo apt-get install gdm3
sudo systemctl restart gdm3.service
sudo apt install firefox
# Open Firefox and go to: <IP Address>:32764
A ThingsBoard* login interface appears.
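If the machine you are testing from has no desktop session, a quick curl check can confirm that the web UI is reachable before installing GDM3 (32764 is the example NodePort from the output above):

curl -sS -o /dev/null -w "%{http_code}\n" http://<IP Address>:32764
# An HTTP 200 response indicates that the ThingsBoard* web interface is being served.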
NOTE: The port is the NodePort mapped to port 9090, which the ThingsBoard* server uses (check the ports in the output of the kubectl get all --namespace fleet-management command). In this case, the port is 32764.

If Single-Node orchestration is deployed, use the IP address of the Single-Node server.
If Multi-Node orchestration is deployed, use the IP address of the (Machine A-1) controller.
To get the IP address:
kubectl describe node | grep 'Addresses:' -A 4 | grep -B1 $(kubectl get nodes | grep control-plane | awk '{print $1}') | grep InternalIP | awk '{print $2}'
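If you also want to script the port lookup, a jsonpath query can extract the NodePort mapped to the ThingsBoard* HTTP port 9090 (assuming the fleet-service name shown in the earlier output):

kubectl get service fleet-service --namespace fleet-management -o jsonpath='{.spec.ports[?(@.port==9090)].nodePort}'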