1.0 Introduction
This article describes the prerequisites for installing and deploying Fortanix Confidential Computing Manager (CCM) in a customer-managed Kubernetes environment (on-premises or cloud-hosted).
2.0 Prerequisites
The following prerequisites must be completed before deploying Fortanix CCM.
NOTE
The platform components are not installed as part of the Fortanix CCM on-premises deployment and must be provisioned by the customer.
2.1 Kubernetes Cluster Requirements
Ensure that a Kubernetes cluster (for example, Azure Kubernetes Service (AKS)) is provisioned and accessible.
The cluster must use SGX-capable VM instances. For example, in Azure AKS:
Minimum supported VM size: Standard_DC8s_v3
Kubernetes version: 1.34 or later (or supported non-end-of-life version).
| INITIAL AKS VERSION | SUPPORTED UPGRADE VERSIONS |
|---|---|
| 1.34.1 | 1.34.x (patch updates), 1.35.0 |
| 1.34.2 | 1.34.x (patch updates), 1.35.0 |
| 1.34.3 | 1.34.x (patch updates), 1.35.0 |
The cluster must have a minimum of three nodes to support Cassandra deployment.
The platform has been validated on Kubernetes clusters where all nodes are available to run all components. Setting node affinity for non-Cassandra components is not currently supported.
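A quick way to confirm the node count (a sketch, assuming kubectl is already configured for the target cluster):

```shell
# Count schedulable nodes; at least three are required for Cassandra
kubectl get nodes --no-headers | wc -l
```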
Ensure that the KUBECONFIG environment variable is set in the environment used to run Helm commands for installing the Fortanix Armor Kubernetes Operator. For example:
export KUBECONFIG=<path-to-kubeconfig>
Where,
<path-to-kubeconfig> is the file path to your Kubernetes configuration file (for example, $HOME/.kube/config).
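As a quick sanity check (a sketch, assuming kubectl is installed alongside Helm), confirm that the kubeconfig resolves to the intended cluster:

```shell
# Point kubectl and Helm at the cluster's kubeconfig file
export KUBECONFIG=$HOME/.kube/config

# Confirm the active context and that the cluster is reachable
kubectl config current-context
kubectl get nodes
```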
2.2 Networking and Load Balancing Requirements
Fortanix CCM requires minimal external network exposure and standard Kubernetes-based load balancing.
2.2.1 External Access
The CCM API listens on port 8443, which is not configurable.
External access is provided through a load balancer, ingress controller, or similar component.
2.2.2 Outbound Connectivity
Outbound access is required for:
Attestation services (for node enrollment)
Azure’s Provisioning Certificate Caching Service (PCCS) endpoint https://global.acccache.azure.net, which must be reachable from workloads running in the cluster.
Optional integrations (for example, email services).
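Reachability of the PCCS endpoint can be checked from inside the cluster, for example with a throwaway curl pod (the container image is illustrative; any image containing curl works):

```shell
# Run a one-off pod that attempts a TLS connection to the PCCS endpoint;
# any HTTP response (even an error status) confirms reachability
kubectl run pccs-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sv https://global.acccache.azure.net
```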
2.2.3 Load Balancing
A load balancer is typically used to provide external access to Fortanix CCM.
The load balancer forwards client traffic to the Kubernetes Service exposing the Fortanix CCM API, which listens on port 8443.
A single externally accessible port is sufficient for all Fortanix CCM client traffic.
NOTE
The external port and load balancer implementation are environment-dependent. Port mapping may be configured to forward traffic to the Fortanix CCM API.
Example load balancer configuration (cloud):
apiVersion: v1
kind: Service
metadata:
  name: ccm-api-lb
spec:
  type: LoadBalancer
  selector:
    fortanix.com/ingress: "true"
  ports:
    - name: https
      port: <external-port>
      targetPort: 8443
Where,
<external-port>: The port exposed to clients (for example, 443).
targetPort: Must be set to 8443, which is the port used by the Fortanix CCM API.
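After applying a Service manifest such as the one above (the file name below is illustrative), you can watch for the external address assigned by the cloud provider:

```shell
# Create the Service and wait for the cloud provider to assign
# an external address
kubectl apply -f ccm-api-lb.yaml
kubectl get service ccm-api-lb --watch

# Once EXTERNAL-IP is populated, clients reach the CCM API at
# https://<EXTERNAL-IP>:<external-port>
```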
2.3 Required Components
The following components must be installed and configured on the Kubernetes cluster before deploying Fortanix CCM:
SGX Support:
Intel SGX must be enabled and functional.
SGX Device Plugin must be installed and provide the following node resources:
sgx.intel.com/enclave
sgx.intel.com/provision
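To verify that the device plugin is advertising these resources (a sketch; output formatting varies by Kubernetes version):

```shell
# Non-zero Capacity/Allocatable values for the sgx.intel.com resources
# indicate the SGX Device Plugin is installed and working
kubectl describe nodes | grep 'sgx.intel.com'
```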
Ingress Controller:
The Fortanix Armor Kubernetes Operator creates and manages an Ingress resource to expose static UI artifacts. This Ingress resource relies on an existing Ingress Controller to manage it.
For detailed steps to configure the Ingress resource, refer to the Installation Guide - On-premises.
Run the following command to install a Kubernetes Ingress Controller (for example, NGINX Ingress):
helm upgrade --install nginx-ingress -n nginx-ingress \
  --create-namespace \
  --set controller.service.type=LoadBalancer \
  --set-json controller.service.annotations='{"service.beta.kubernetes.io/azure-load-balancer-internal": "true"}' \
  oci://ghcr.io/nginx/charts/nginx-ingress \
  --wait
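After the Helm release completes, you can confirm the controller is running and its Service has been provisioned (a sketch; resource names depend on the chart defaults):

```shell
# Controller pods should be Running, and the Service should show a
# load balancer address once the cloud provider provisions it
kubectl get pods -n nginx-ingress
kubectl get svc -n nginx-ingress
```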
cert-manager:
Install a supported version of cert-manager. For supported versions, refer to https://cert-manager.io/docs/releases.
Run the following command to install cert-manager. This is used for internal TLS certificate management between Cassandra nodes.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.4/cert-manager.yaml
NOTE
You must create a certificate issuer (ClusterIssuer) as described in the Installation Guide - On-premises that generates a self-signed certificate. The cert-manager and the required ClusterIssuer must be installed and configured before deploying the Fortanix Armor Kubernetes Operator, as the operator depends on cert-manager for this automated certificate provisioning.
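A self-signed issuer of the kind described above can be sketched as follows; the issuer name is illustrative, so use the name required by the Installation Guide - On-premises:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # illustrative name
spec:
  selfSigned: {}
```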
Helm Access:
You must have Helm installed and access configured to deploy resources to the cluster.
2.4 Unsupported Configurations
The Kubernetes cluster must not have K8ssandra installed.
Customer-managed Cassandra deployments are not supported.
2.5 Image Registry Access
Fortanix CCM container images are hosted in a Fortanix-managed container registry. Customers must have access credentials to pull the required images during deployment. For more information, refer to Deploy Fortanix Armor Kubernetes Operator.
An image pull secret must be created in each namespace used by Fortanix CCM to allow access to the registry.
The customer is responsible for maintaining and periodically refreshing the image pull secrets to ensure continued access to the registry.
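Creating or refreshing the pull secret can be sketched as follows; the secret name, namespace, and registry values are placeholders, so substitute the credentials provided by Fortanix:

```shell
# Create or refresh the image pull secret in a CCM namespace.
# The --dry-run | apply pattern makes the command safe to re-run
# when rotating credentials.
kubectl create secret docker-registry fortanix-registry \
  --namespace armor \
  --docker-server=<fortanix-registry-host> \
  --docker-username=<username> \
  --docker-password=<access-token> \
  --dry-run=client -o yaml | kubectl apply -f -
```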
2.6 Operator Scope and Namespace Requirements
Only one instance of the Fortanix Armor Kubernetes Operator can be deployed per Kubernetes cluster.
The operator is installed in a specific namespace (for example, armor), but operates with cluster-wide scope, allowing it to potentially manage Fortanix CCM resources across multiple namespaces.
As part of the deployment, a supporting K8ssandra operator (used to manage Cassandra) is also deployed with cluster-wide scope.
3.0 Where to go from here
For steps to deploy Fortanix CCM in a customer-managed Kubernetes environment (on-premises or cloud-hosted), refer to the Installation Guide - On-premises.