1.0 Introduction
This article describes how to set up an Amazon Elastic Kubernetes Service (EKS) cluster as worker nodes in Fortanix Confidential Computing Manager (CCM).
1.1 Prerequisites
Ensure that you meet the following requirements:
You must have an active AWS subscription.
The worker nodes of the EKS cluster should be of type i3en.xlarge or larger.
You must have an EKS cluster created.
2.0 Set Up Amazon EKS Cluster
Create an Amazon EKS cluster and nodes, and install the Nitro Enclaves Kubernetes device plugin. For more information on the procedures and steps, refer to the AWS documentation.
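If you do not already have a cluster, a minimal eksctl invocation such as the following can create one. The cluster name and node count here are placeholders, and you must still configure Nitro Enclaves support on the node group as described in the AWS documentation:
eksctl create cluster --name <cluster-name> --region us-west-1 --node-type i3en.xlarge --nodes 2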
You must increase the HugePages memory. Update the value of the memory_mib parameter to 2560 in the /etc/nitro_enclaves/allocator.yaml file:
---
# Enclave configuration file.
#
# How much memory to allocate for enclaves (in MiB).
memory_mib: 2560
#
# How many CPUs to reserve for enclaves.
cpu_count: 2
NOTE
The total enclave memory required on a node is 512 MB for the em-agent plus the enclave memory that your application requires.
Restart the worker node to apply the changes:
reboot
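After the node is back up, you can confirm that the HugePages allocation took effect by checking the kernel's memory counters on the worker node, for example:
grep HugePages /proc/meminfo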
3.0 Get Kubernetes Credentials
Perform the following steps to obtain the Kubernetes credentials (kubeconfig) needed to interact with the configured cluster for management and deployment:
Run the following command to get the kubeconfig:
eksctl utils write-kubeconfig --cluster=<cluster-name>
OR
aws eks --region us-west-1 update-kubeconfig --name <cluster-name>
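To verify that the kubeconfig was written correctly, list the cluster nodes:
kubectl get nodes -o wide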
Run the following command to label all the nodes with Nitro Enclave OS capabilities:
kubectl label node <node-name> smarter-device-manager=enabled
NOTE
Ensure that you use the private Domain Name System (DNS) name of the nodes.
Run the following command to label all the nodes with enclave.example.com/type=nitro:
kubectl label node <node-name> enclave.example.com/type=nitro
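If the cluster has many nodes, a small shell loop such as the following sketch, which uses the standard kubectl node listing, can apply both labels to every node at once:
for node in $(kubectl get nodes -o name); do
  kubectl label "$node" smarter-device-manager=enabled enclave.example.com/type=nitro
done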
Install the Smarter Device Manager on the Kubernetes cluster as follows:
NOTE
You can find the YAML configuration file for the smarter-device-manager in smarter-device-manager-ds-with-cm.yaml. Ensure that you update the value of the nummaxdevices parameter to a value higher than 1, such as 10.
Run the following command to install the Smarter Device Manager:
kubectl apply -f smarter-device-manager-ds-with-cm.yaml
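Before proceeding, you can confirm that the Smarter Device Manager pods are running. The namespace depends on the manifest you applied, so a cluster-wide listing is the safest check:
kubectl get pods -A | grep smarter-device-manager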
4.0 Provide AWS CCM User Access to Your EKS Cluster
Perform the following steps to provide the AWS CCM user access to your EKS cluster:
Run the following command to update the aws-auth configmap:
kubectl edit -n kube-system configmap/aws-auth
Add the following changes into the configmap file:
mapUsers: |
  - userarn: arn:aws:iam::513076507034:user/[email protected]
    username: [email protected]
    groups:
      - system:masters
The specified value assumes that the credentials associated with [email protected] are set up as eks_config in Fortanix CCM clusters.
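You can confirm that the mapping was applied by printing the configmap:
kubectl describe configmap -n kube-system aws-auth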
5.0 Creating the Secrets
Perform the following steps to create a Kubernetes secret for the cluster:
Run the following command to create a secret to access the ECR registry:
kubectl create secret docker-registry regcred --docker-server=513076507034.dkr.ecr.us-west-1.amazonaws.com --docker-username=AWS --docker-password=$(aws ecr get-login-password)
NOTE
Skip this step if the image is in a public repository.
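If you do pull the image from a private ECR repository, reference the regcred secret from your pod specification. The following is a minimal sketch of the relevant fragment, where my-app, <repository>, and <tag> are placeholders:
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: my-app
      image: 513076507034.dkr.ecr.us-west-1.amazonaws.com/<repository>:<tag>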
Obtain the join token from Fortanix CCM UI and store it as a Kubernetes secret in your cluster. To generate your join token, log in to https://ccm.fortanix.com/.
Click the Infrastructure → Compute Nodes menu item in the Fortanix CCM UI and click + ENROLL NODE to bring up the join token dialog.
In the ENROLL NODE window, a join token will be generated in the text box for "Get a join token to register a compute node". This join token is used by the compute node to authenticate itself.
Click Copy to copy the join token.
Run the following command to store the join token as a Kubernetes secret for the cluster. Replace the <join-token-from-account> value below with your token.
kubectl create secret generic em-token --from-literal=token=<join-token-from-account>
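To verify that the token was stored correctly, you can decode the secret:
kubectl get secret em-token -o jsonpath='{.data.token}' | base64 --decode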
6.0 Installing Node Agent
Perform the following steps to install the node agent on your Kubernetes cluster:
Run the following command to install the node agent as a DaemonSet:
kubectl apply -f agent-daemonset.yaml
The following is the content of the agent-daemonset.yaml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: em-agent
  namespace: default
  labels:
    component: em-agent
spec:
  selector:
    matchLabels:
      component: em-agent
  template:
    metadata:
      labels:
        component: em-agent
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
        - name: hugepage
          emptyDir:
            medium: HugePages
        - name: log
          hostPath:
            path: /var/log/nitro_enclaves
        - name: socket-path
          emptyDir: {}
        - name: node-data
          hostPath:
            path: /tmp/em-agent-nitro
      containers:
        - name: em-agent
          image: "fortanix/em-agent-nitro:latest"
          resources:
            limits:
              smarter-devices/nitro_enclaves: "1"
              hugepages-2Mi: 512Mi
              memory: 2Gi
              cpu: 250m
            requests:
              smarter-devices/nitro_enclaves: "1"
          volumeMounts:
            - mountPath: /dev/hugepages
              name: hugepage
              readOnly: false
            - name: log
              mountPath: /var/log/nitro_enclaves
            - name: socket-path
              mountPath: /run/nitro_enclaves
            - name: node-data
              mountPath: /tmp/em-agent-nitro
          ports:
            - containerPort: 9092
              name: http
              protocol: TCP
              hostPort: 9092
          env:
            - name: AGENT_MANAGER_AUTH_BASIC_TOKEN
              valueFrom:
                secretKeyRef:
                  name: em-token
                  key: token
            - name: MANAGER_ENDPOINT
              value: "ccm.test.fortanix.com"
            - name: MALBORK_LOG_DEBUG
              value: "true"
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
Run the following command to verify the deployment of the Fortanix CCM Nitro Node Agent DaemonSet by confirming that the node agent pod is running:
kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
em-agent-fqp8j   1/1     Running   0          30m
NOTE
The node agent tag is available at https://hub.docker.com/r/fortanix/em-agent-nitro/tags.
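If the pod is not in the Running state, the agent logs are the first place to look. The DaemonSet above labels its pods with component: em-agent, so you can select them directly:
kubectl logs -l component=em-agent --tail=50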
7.0 Reinstalling Node Agent in a Different Account
Perform the following steps if you installed the node agent in one account and want to switch it to a different account:
Run the following commands to delete the /tmp/em-agent-nitro directory from each worker node:
# Log in to the node through a debug container.
$ kubectl debug node/<node-name> -it --image=busybox
# Inside the container, run chroot to access the node.
$ chroot /host bash
# Delete the directory.
$ rm -rf /tmp/em-agent-nitro
# Exit twice to get out of the container.
$ exit
$ exit
Run the following commands to create a new em-token secret:
$ kubectl delete secret em-token
$ kubectl create secret generic em-token --from-literal=token=<join-token-from-account>
Run the following command to restart the em-agent DaemonSet:
$ kubectl rollout restart ds em-agent
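You can watch the restart complete with the standard rollout status check:
$ kubectl rollout status ds em-agent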
8.0 Converting Nitro Enclave OS Application
Refer to the User's Guide: Create an Image for the steps to create an application in the Fortanix Confidential Computing Manager user interface.
8.1 Creating NGINX Deployment
The following is the content of the app-deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: http
              containerPort: 80
          imagePullPolicy: Always
          securityContext:
            privileged: true
          resources:
            limits:
              smarter-devices/nitro_enclaves: "1"
              hugepages-1Gi: 2Gi
              memory: 2Gi
              cpu: 250m
            requests:
              smarter-devices/nitro_enclaves: "1"
              hugepages-1Gi: 2Gi
          volumeMounts:
            - mountPath: /dev/hugepages
              name: hugepage
              readOnly: false
            - name: log
              mountPath: /var/log/nitro_enclaves
            - name: socket-path
              mountPath: /run/nitro_enclaves
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: NODE_AGENT
              value: http://$(NODE_IP):9092/v1/
            - name: RUST_LOG
              value: debug
      volumes:
        - name: hugepage
          emptyDir:
            medium: HugePages
        - name: log
          hostPath:
            path: /var/log/nitro_enclaves
        - name: socket-path
          emptyDir: {}
Update the container's image details to run any other image. Run the following command to deploy the application:
kubectl apply -f app-deployment.yaml
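After applying the deployment, you can confirm that the enclave-backed pod starts and inspect its logs; the app=my-pod label comes from the manifest above:
kubectl get pods -l app=my-pod
kubectl logs -l app=my-pod --tail=50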