Step 3: Install Intel® Smart Edge Open, Tailored for EI for AMR
Perform all of Step 3 on the control plane only, not on the edge nodes.
Log in as root, and run every command in Step 3 as root:
sudo su -
Clone the open-developer-experience-kits repository to the control plane:
git clone -b smart-edge-open-21.12 https://github.com/smart-edge-open/open-developer-experience-kits.git ~/dek
cd ~/dek
git checkout 1848a355586d2c40420b6e5576efeac9396150de
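As an optional sanity check (not part of the original procedure), you can confirm that the expected commit is now checked out; the hash printed below should match the one used in the checkout command:
# Print the currently checked-out commit hash.
git rev-parse HEAD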
Take note of the control plane and edge node IPs (you need them for the next step):
ifconfig
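If ifconfig is not available on your system (it typically requires the net-tools package), the same information can be obtained with the iproute2 tools; a minimal equivalent:
# List the IPv4 addresses assigned to each network interface.
ip -4 addr show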
Edit the ~/dek/inventory.yml file.
Set cluster_name to the name of your cluster; use underscores (_) instead of spaces.
Indicate if it is a single or multi-node deployment:
For a Multi-Node Deployment, set the single_node_deployment value to false.
For a Single-Node Deployment, set the single_node_deployment value to true.
Provide the control plane and edge node IPs.
For a Multi-Node Deployment, provide an IP for the control plane and each edge node.
For a Single-Node Deployment, provide the same IP for the control plane and node01. node02 is not required.
Change the ansible_user from smartedge-open to root.
Example:
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2021 Intel Corporation

---
all:
  vars:
    cluster_name: dek_test          # Use `_` instead of spaces.
    deployment: dek                 # Available deployment type: Developer experience kits (dek).
    single_node_deployment: false   # Request a single node deployment (true/false).
    limit:                          # Limit ansible deployment to certain inventory group or hosts
controller_group:
  hosts:
    controller:
      ansible_host: <ip_from_control_plane>
      ansible_user: root
edgenode_group:
  hosts:
    node01:
      ansible_host: <ip_from_edge_node01>
      ansible_user: root
    node02:
      ansible_host: <ip_from_edge_node02>
      ansible_user: root
If a proxy is required to connect to the Internet, edit the proxy variables in the ~/dek/inventory/default/group_vars/all/10-default.yml file.
Example:
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2019-2021 Intel Corporation

---
# This file contains variables intended to be configured by user.
# It allows feature enabling and configuration.
# Per-host variables should be placed in `inventory/default/host_vars` directory.
# Features should not be configured by changing roles' defaults (i.e. role/defaults/main.yml)

##################################################
##### User settings

### Proxy settings
proxy_env:
  # Proxy URLs to be used for HTTP, HTTPS and FTP
  http_proxy: "http://proxy.example.org:3128"
  https_proxy: "http://proxy.example.org:3129"
  ftp_proxy: "http://proxy.example.org:3128"
  # No proxy setting contains addresses and networks that should not be accessed using proxy (e.g. local network, Kubernetes* CNI networks)
  no_proxy: "127.0.0.1/32"
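Optionally, you can confirm from the control plane that the proxy is reachable before deploying. This is only a sketch that reuses the example proxy URL above; replace it with your own proxy and choose any external site as the test target:
# Fetch only the response headers of an external site through the proxy;
# an HTTP status line in the output indicates the proxy is usable.
curl -x http://proxy.example.org:3128 -I https://www.intel.com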
Update the ~/dek/inventory/default/group_vars/all/10-default.yml file with:
sriov_network_operator_enable: false
## SR-IOV Network Operator configuration
sriov_network_operator_configure_enable: false

### Software Guard Extensions
# SGX requires kernel 5.11+, SGX enabled in BIOS and access to PCC service
sgx_enabled: false

# Install isecl attestation components (TA, ihub, isecl k8s controller and scheduler extension)
platform_attestation_node: false

install_hwe_kernel_enable: false
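As an optional check (not part of the original procedure), you can confirm that these flags now have the intended values:
# Print the current values of the edited flags, with line numbers.
grep -nE "sriov_network_operator_enable|sriov_network_operator_configure_enable|sgx_enabled|platform_attestation_node|install_hwe_kernel_enable" ~/dek/inventory/default/group_vars/all/10-default.yml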
Update the ~/dek/roles/telemetry/grafana/templates/prometheus-tls-datasource.yml file by running the following sed command in a terminal:
sed -i "s/indent(width=13, indentfirst=False)/indent(width=13, first=False)/g" ~/dek/roles/telemetry/grafana/templates/prometheus-tls-datasource.yml
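To verify that the substitution was applied (an optional check), grep the template for the new and old argument names; the first command should show the updated line and the second should print nothing:
# The updated template should now use `first=False`.
grep -n "first=False" ~/dek/roles/telemetry/grafana/templates/prometheus-tls-datasource.yml
# The old `indentfirst` argument should no longer appear.
grep -n "indentfirst" ~/dek/roles/telemetry/grafana/templates/prometheus-tls-datasource.yml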
Start the deployment:
./deploy.sh
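Optionally, you can keep a copy of the deployment output for troubleshooting by piping it through tee; the log file name below is only an example:
# Run the deployment and also write its output to deploy-run1.log.
./deploy.sh 2>&1 | tee deploy-run1.log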
Expected result: The script reboots the control plane.
After the reboot, run ./deploy.sh again:
sudo su -
cd ~/dek
./deploy.sh
Expected result example:
kubernetes/harbor_registry/controlplane ------------------------------- 566.66s
infrastructure/docker -------------------------------------------------- 54.81s
.
.
.
infrastructure/setup_offline -------------------------------------------- 0.04s
stat -------------------------------------------------------------------- 0.03s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
total ----------------------------------------------------------------- 767.77s
2021-11-05 22:45:45.898 INFO: dek_test single_node_network_edge.yml: succeed.
2021-11-05 22:45:46.899 INFO: ====================
2021-11-05 22:45:46.900 INFO: DEPLOYMENT RECAP:
2021-11-05 22:45:46.900 INFO: ====================
2021-11-05 22:45:46.900 INFO: DEPLOYMENT COUNT: 1
2021-11-05 22:45:46.900 INFO: SUCCESSFUL DEPLOYMENTS: 1
2021-11-05 22:45:46.901 INFO: FAILED DEPLOYMENTS: 0
2021-11-05 22:45:46.901 INFO: DEPLOYMENT "dek_test": SUCCESSFUL
2021-11-05 22:45:46.901 INFO: ====================
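After a successful deployment, you can optionally sanity-check the resulting Kubernetes* cluster from the control plane with standard kubectl queries; the node(s) should be Ready and the system pods Running or Completed:
# List the cluster nodes and their status.
kubectl get nodes -o wide
# List all pods in all namespaces.
kubectl get pods -A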
For onboarding, copy Kubernetes* and Docker* certificates to the SFTP server. If the SFTP server is not configured, see Step 2: SFTP Server Setup.
Open an SFTP terminal:
sftp fdo_sftp@<sftp_server_ip>
Copy the Kubernetes* Intel® Smart Edge Open certificate, Kubernetes* client key, and Kubernetes* client certificate:
mkdir /fdo_sftp/pki/
cd /fdo_sftp/pki/
lcd /etc/kubernetes/pki/
put ca.crt
put apiserver-kubelet-client.crt
put apiserver-kubelet-client.key
Copy the Docker* configuration file:
mkdir /fdo_sftp/root/
mkdir /fdo_sftp/root/.docker/
cd /fdo_sftp/root/.docker/
lcd /root/.docker/
put config.json
Copy the Docker* daemon configuration file:
mkdir /fdo_sftp/etc/
mkdir /fdo_sftp/etc/docker/
cd /fdo_sftp/etc/docker/
lcd /etc/docker/
put daemon.json
Copy the Docker* certificate file:
mkdir /fdo_sftp/etc/docker/certs.d/
mkdir /fdo_sftp/etc/docker/certs.d/<control_plane_ip>:30003/
cd /fdo_sftp/etc/docker/certs.d/<control_plane_ip>:30003/
lcd /etc/docker/certs.d/<control_plane_ip>:30003/
put ca.crt
Copy the Docker* proxy configuration file:
mkdir /fdo_sftp/etc/systemd/
mkdir /fdo_sftp/etc/systemd/system/
mkdir /fdo_sftp/etc/systemd/system/docker.service.d/
cd /fdo_sftp/etc/systemd/system/docker.service.d/
lcd /etc/systemd/system/docker.service.d/
put http-proxy.conf
Exit the SFTP terminal:
exit
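As an alternative to the interactive session above, the same transfers can be scripted with sftp batch mode. This is only a sketch under the assumption that the paths on your system match those in the steps above; replace <sftp_server_ip> and <control_plane_ip> as before. The leading - on each mkdir tells sftp to continue even if that directory already exists (batch mode otherwise aborts on the first error):
# Write the SFTP commands from the steps above into a batch file (example name: fdo_upload.batch).
cat > fdo_upload.batch << 'EOF'
-mkdir /fdo_sftp/pki/
cd /fdo_sftp/pki/
lcd /etc/kubernetes/pki/
put ca.crt
put apiserver-kubelet-client.crt
put apiserver-kubelet-client.key
-mkdir /fdo_sftp/root/
-mkdir /fdo_sftp/root/.docker/
cd /fdo_sftp/root/.docker/
lcd /root/.docker/
put config.json
-mkdir /fdo_sftp/etc/
-mkdir /fdo_sftp/etc/docker/
cd /fdo_sftp/etc/docker/
lcd /etc/docker/
put daemon.json
-mkdir /fdo_sftp/etc/docker/certs.d/
-mkdir /fdo_sftp/etc/docker/certs.d/<control_plane_ip>:30003/
cd /fdo_sftp/etc/docker/certs.d/<control_plane_ip>:30003/
lcd /etc/docker/certs.d/<control_plane_ip>:30003/
put ca.crt
-mkdir /fdo_sftp/etc/systemd/
-mkdir /fdo_sftp/etc/systemd/system/
-mkdir /fdo_sftp/etc/systemd/system/docker.service.d/
cd /fdo_sftp/etc/systemd/system/docker.service.d/
lcd /etc/systemd/system/docker.service.d/
put http-proxy.conf
EOF
# Run the batch against the SFTP server; depending on your authentication setup,
# you may still be prompted for the fdo_sftp password.
sftp -b fdo_upload.batch fdo_sftp@<sftp_server_ip>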