Fortanix® and the Fortanix logo are registered trademarks or trade names of Fortanix, Inc.
All other trademarks are the property of their respective owners.
NOTICE: This article was produced by Fortanix, Inc. (Fortanix) and contains information that is proprietary and confidential to Fortanix. The article contains information that may be protected by patents, copyrights, and/or other IP laws. If you are not the intended recipient of this material, destroy this article and inform info@fortanix.com immediately.
1.0 Introduction
This article describes the process of installing and configuring the Fortanix Data Security Manager (DSM). This setup is a two-phase process as follows:
System Setup and Network Configuration: In this phase, you set up the IPMI (Intelligent Platform Management Interface) for remote management and perform network configuration on the Fortanix DSM appliance.
Fortanix DSM Installation and Configuration: In this phase, you install the Fortanix DSM service and configure the Fortanix DSM cluster.
2.0 Prerequisites
Before beginning the Fortanix DSM installation process, ensure that the following requirements are met:
You have at least three Fortanix FX2200 servers.
Network configuration (IP address, subnet mask, and gateway) has been assigned for each server.
If you are not planning on using an external load balancer, an additional IP address should be allocated for Fortanix DSM. This will serve as the cluster IP address to which Fortanix DSM clients will connect.
It is recommended to assign two Fully Qualified Domain Names (FQDNs) for the Fortanix DSM cluster. If the hostnames are assigned in your domain, for example,
your-domain.com, then the preferred hostnames are <domain> and apps.<domain>. You should add DNS entries (A records) for these two FQDNs, both mapped to the cluster IP address described above, to your DNS server.
All the ports mentioned in “List of required open ports” should be open between servers.
You should have the ability to issue or generate certificates for certificate requests (CSRs) generated by Fortanix DSM with the subject alternative name (SAN) containing the above-stated hostnames.
You should be able to configure NTP on the servers. If the servers are not going to have access to public NTP servers, then they need to be able to connect to an NTP server on your network.
You should have received the software installation packages from Fortanix. The usual way of distribution is using a downloadable link from https://support.fortanix.com/.
Ensure that no two nodes in the cluster have the same hostname.
NOTE
The hostnames MUST be in lower case.
If you are using a multi-node cluster, ensure that you update the hostname. Perform the following steps:
Use the command sudo hostnamectl set-hostname <name> to update the hostname.
Add the hostname in /etc/hosts.
Verify that the hostname is correctly updated using the command cat /etc/hostname.
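For example, to rename a node to the hypothetical hostname dsm-node-1 and confirm the change (the hostname is illustrative only):
sudo hostnamectl set-hostname dsm-node-1
cat /etc/hostname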
Ensure IPMI IPs are assigned to the servers if you want to remotely manage the servers. For more information on the IPMI setup, refer to Fortanix IPMI Setup for FX2200 Series II.
Figure 1: FX2200 Series 2 front view
Figure 2: FX2200 Series 2 back view
Get the default username/password for the FX2200 appliance. These are distributed by email from Fortanix, after the servers are shipped, to the relevant contact person in your team.
For the list of open ports, refer to Fortanix Data Security Manager Port Requirements.
4.0 IPMI Network Setup and Access Configuration
This section describes how to configure network connectivity and remote access for the Fortanix DSM appliance using IPMI and standard Linux network settings.
4.1 Remote Administration by IPMI
If your server has IPMI (Intelligent Platform Management Interface) setup, then you can remotely configure the server for the rest of the network configuration. For more information on the IPMI setup, refer to Fortanix IPMI Setup for FX2200 Series II.
Once the network configuration is completed, direct SSH can be used for remote login. Perform the following steps for IPMI remote login:
Use the IPMI web page, accessible at the specified IPMI IP address through any browser. For example, http://192.168.1.25/#login.
Use the IPMI credentials provided by the data center team that performed the IPMI setup. Go to Remote Control → Console Redirection. This opens a Java console that provides the terminal view of the server. Now boot the server and log in using the system administrator credentials provided by the Fortanix team. You can now follow the rest of the steps.
4.2 Network Configuration
Configure a network interface with an IP address, subnet mask, and gateway, such that the servers are reachable from any intended client. You can do this using the console over IPMI if IPMI has been set up.
If you are using Fortanix supplied servers, note that these servers have two network interfaces. You can just use one network interface. If your network topology or deployment requires separating interfaces for external traffic and intra-cluster traffic, then you may set up both the network interfaces.
For setting up the network, edit the /etc/network/interfaces file to specify IP address, gateway, netmask, and DNS server information.
Refer to the following sample /etc/network/interfaces file to help you get started:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto interface_name
iface interface_name inet static
address xxx.xxx.xxx.xxx
gateway xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.xxx
dns-nameserver xxx.xxx.xxx.xxx
dns-nameserver xxx.xxx.xxx.xxx
NOTE
Replace interface_name with the name of the network interface on your server; the interfaces are enp80s0f0 and enp80s0f1.
Replace xxx.xxx.xxx.xxx with appropriate values.
After editing the file, save the changes and reboot the server.
To set the hostname on each node, edit the following file, remove sdkms-server, and replace it with the intended hostname:
sudo vi /etc/hostname
In the following file, add the IP address, hostname, and/or FQDN:
sudo vi /etc/hosts
Reboot the system for the changes to take effect.
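For illustration, assuming a node with IP address 10.0.0.11, hostname dsm-node-1, and domain your-domain.com (all hypothetical values), the added /etc/hosts line could look like:
10.0.0.11   dsm-node-1.your-domain.com   dsm-node-1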
4.3 LACP Bonding Configuration
The Fortanix DSM appliance can be configured to use Link Aggregation Control Protocol (LACP) bonding.
Run the following command to install the ifenslave package:
sudo apt-get install ifenslave
The following is a sample interfaces file; the file location is /etc/network/interfaces:
# The ordering of these interfaces is important, see comment for bond0 below.
auto enp80s0f0
iface enp80s0f0 inet manual
bond-master bond0
# This command is important, see comment for bond0 below.
pre-up ifup bond0 &
# The ordering of these interfaces is important, see comment for bond0 below.
auto enp80s0f1
iface enp80s0f1 inet manual
bond-master bond0
# This command is important, see comment for bond0 below.
pre-up ifup bond0 &
# The ifenslave scripts are written for all interfaces being brought up at once. However,
# `ifup` will bring interfaces up one-by-one. Without doing anything extra, this will result
# in a deadlock or a 60-second timeout, depending on the order of the master and slave
# interfaces. We make sure to bring up the bond master simultaneously with a slave, by
# running the ifup command for the master in the background using `pre-up`. `ifup`
# implements locking and state checking to make sure that a single device is only brought up once.
auto bond0
iface bond0 inet static
bond-miimon 100
bond-mode 802.3ad
bond-slaves yes
address ...
gateway ...
netmask ...
dns-nameservers ...
5.0 Fortanix DSM Installation and Configuration
This section describes the tasks to install the Fortanix DSM software and configure the Fortanix DSM cluster.
5.1 Install Fortanix DSM
Perform the following steps to install the Fortanix DSM package on all the servers:
The latest installation file is available at Fortanix DSM Installation Package Downloads (on-prem). This needs a customer account, which Fortanix provides for the relevant contact person.
Download and copy the Fortanix DSM installation file sdkms_<version>_install.sh to each server. The latest installation files are hosted on https://support.fortanix.com.
Run the following commands to install the package, replacing the package name (sdkms_<VERSION>_install.sh) with the filename of the package in your Fortanix DSM distribution:
sudo chmod +x sdkms_<VERSION>_install.sh
sudo ./sdkms_<VERSION>_install.sh
For example:
sudo chmod +x sdkms_2.3.688-124_install.sh
sudo ./sdkms_2.3.688-124_install.sh
Run the following command to reboot the system if a new version of the kernel is installed:
sudo reboot
Run the following command on all servers to reset them:
sdkms-cluster reset --delete-data --reset-iptables
5.2 NTP Configuration
NTP is required to run Fortanix DSM. If the servers will not have access to public NTP servers, you will need to update the NTP configuration file (/etc/ntp.conf) to specify NTP server(s) on your network. To enable local NTP when an external NTP server is not available, add the directive shown in Section 5.2.2: Internal NTP Options to the config.yaml file.
5.2.1 External NTP Options
Comment out the servers in the NTP Pool Project section and the fallback NTP server section, and add your NTP server IP addresses:
# pool 0.ubuntu.pool.ntp.org iburst
# pool 1.ubuntu.pool.ntp.org iburst
# pool 2.ubuntu.pool.ntp.org iburst
# pool 3.ubuntu.pool.ntp.org iburst
# Use Ubuntu's ntp server as a fallback.
# pool ntp.ubuntu.com
server 0.0.0.0
Run the following commands to restart NTP and check the peers:
sudo systemctl restart ntp
ntpq -p
5.2.2 Internal NTP Options
You can run a local NTP server when an external NTP server is not available. This will run the NTP pod on each node and will keep the time synchronized within the cluster to one of the cluster nodes.
global:
  localntp: true
5.3 Begin Fortanix DSM Configuration
This step needs to be run on only one of the servers.
5.3.1 Setup Deployment Specific Configuration File
Run the following command to copy the template file to the home directory and then edit it:
sudo cp /opt/fortanix/sdkms/config/config.yaml.example config.yaml
When deploying Fortanix DSM, you have options with respect to the load balancer, remote attestation, internal subnet, and remote syslog logging. Based on the option you select, edit the config.yaml file as explained below:
Load Balancer Options:
Use the built-in load balancer – this is the default option.
Edit the config.yaml file to set the correct cluster IP address by adding the value of the parameter "clusterIp" under the "sdkms" section:
sdkms:
  clusterIp: 10.0.0.1
  keepalived:
    nwIface: enp80s0f0
Use an external load balancer
Edit the config.yaml file to add the following parameters under the "global" section (consider the yaml file format and spacing):
sdkms:
  clusterIp: 1.2.3.4 # mandatory field, set to dummy for external LB
global:
  externalLoadBalancer: true
# keepalived:
#   nwIface: enp80s0f0
NOTE
Fortanix DSM does not support SSL offloading at the load balancer. All SSL connections must be terminated within the DSM backend pod to ensure that sensitive data is read only inside the secure enclave. All load balancers must be configured for SSL passthrough.
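Although not part of the Fortanix procedure, one quick way to sanity-check that an external load balancer is doing SSL passthrough is to compare the certificate presented through the load balancer with the one presented directly by a node. The hostname and node IP below are illustrative placeholders:
# certificate as seen through the load balancer
openssl s_client -connect sdkms.your-domain.com:443 -servername sdkms.your-domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# certificate as presented directly by a DSM node
openssl s_client -connect 10.0.0.11:443 -servername sdkms.your-domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# the two outputs should match if SSL passthrough is configured correctly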
Remote Attestation Options:
Remote attestation is performed when changes are made to a Fortanix DSM cluster. Intel SGX using Data Center Attestation Primitives (DCAP) is the supported attestation mechanism for platform attestation. Remote attestation is performed in the following scenarios:
Joining new nodes to an existing cluster.
Replacing nodes in an existing cluster (for example: replacing failed hardware nodes).
Making hardware changes and performing firmware updates.
Upgrading the cluster software version.
Remote attestation enabled:
This is the default option, and no additional changes to the config.yaml file are required. This option requires that the appliances have internet access so that they can connect to the Fortanix attestation proxy service.
Required URLs for remote attestation:
In this scenario, Fortanix DSM recommends setting up an http proxy to the internet. Make sure the following URLs are accessible through the proxy:
http://ps.sgx.trustedservices.intel.com:80, port 80
https://trustedservices.intel.com/content/CRL, port 443
http://trustedservices.intel.com/ocsp, port 80
http://whitelist.trustedservices.intel.com/SGX/LCWL/Linux/sgx_white_list_cert.bin, port 80
https://pccs.fortanix.com, port 443
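If you route these requests through an HTTP proxy, a quick reachability check can be run from each appliance with curl (the proxy host and port are placeholders; expect an HTTP status code rather than a connection error):
curl -x http://<proxy-host>:<proxy-port> -s -o /dev/null -w "%{http_code}\n" https://pccs.fortanix.com
curl -x http://<proxy-host>:<proxy-port> -s -o /dev/null -w "%{http_code}\n" http://whitelist.trustedservices.intel.com/SGX/LCWL/Linux/sgx_white_list_cert.bin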
For DCAP online attestation with the Secure Node Join feature enabled, edit the config.yaml file to add the following parameters under the "global" section (consider the yaml file format and spacing):
global:
  attestation:
    dcap:
      type: onlineWithTrustedNodeIdentity
For DCAP offline attestation with the Secure Node Join feature enabled, edit the config.yaml file to add the following parameters under the "global" section (consider the yaml file format and spacing):
global:
  attestation:
    dcap:
      type: offlineWithTrustedNodeIdentity
For DCAP online attestation using Fortanix Provisioning Certification Caching Service (PCCS), edit the config.yaml file to add the following parameters under the "attestation" and "sdkms" sections (consider the yaml file format and spacing):
NOTE
For platform attestation, when using pcs: FortanixPCCS, you must whitelist the URL https://pccs.fortanix.com to create an end-to-end encrypted and authenticated connection.
global:
  attestation:
    dcap:
      type: online
sdkms:
  dcapProviderConfig:
    pcs: FortanixPCCS
    fortanix_pccs:
      api_version: 4
For DCAP offline attestation, edit the config.yaml file to add the following parameters under the "global" section (consider the yaml file format and spacing):
global:
  rebootEnabled: true
  attestation:
    dcap:
      type: offline
For more information, refer to the DCAP Offline Attestation for Fortanix DSM guide.
For DCAP online attestation using Fortanix PCCS with a proxy server to proxy outbound attestation network requests instead of whitelisting https://pccs.fortanix.com, use the proxy field in the attestation.dcap section. Edit the config.yaml file to add the following parameters under the "attestation" section (consider the yaml file format and spacing):
global:
  attestation:
    dcap:
      type: online
      proxy: {url to http/https proxy server}
sdkms:
  dcapProviderConfig:
    pcs: FortanixPCCS
    fortanix_pccs:
      api_version: 4
Remote attestation disabled:
Edit the config.yaml file to add the following parameter under the "global" section (consider the yaml file format and spacing):
global:
  attestation: null
Internal Subnet Options:
Kubernetes requires two internal networks for intra-cluster communication. By default, 10.244.0.0/16 and 10.245.0.0/16 are used when no subnets are specified.
To configure custom subnets for the Kubernetes internal networks, set serviceSubnet and podSubnet under the "global" section (consider the yaml file format and spacing) as shown:
global:
  serviceSubnet: 172.20.0.0/16
  podSubnet: 172.21.0.0/16
Remote Syslog Logging Setting Options:
Fluentd supports forwarding logs to a single syslog server and to multiple syslog servers.
To forward the 'container', 'sdkms', and 'system' logs from all the nodes in the cluster to one or more remote syslog servers, set the host, port, protocol, and log_type parameters for each type of log under the "advancedSyslog" option of the fluentd section of the config.yaml file (consider the yaml file format and spacing).
NOTE
The "remoteSyslog" option, which allowed forwarding all logs to one remote Syslog server, will be deprecated in future releases of Fortanix DSM. Use the "advancedSyslog" option for the enhanced log forwarding functionality.
The valid values are:
host: Any valid IP address
port: Any valid port
protocol: Either tcp or udp
log_type: One of "CONTAINER", "SYSTEM", or "SDKMS"
The following is an example of advancedSyslog configuration when forwarding to three syslog servers:
fluentd:
  advancedSyslog:
  - host:
    port:
    protocol:
    log_type: "CONTAINER"
  - host:
    port:
    protocol:
    log_type: "SYSTEM"
  - host:
    port:
    protocol:
    log_type: "SDKMS"
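As an illustration, a single-server configuration forwarding only container logs could look like the following (the IP address, port, and protocol are hypothetical values):
fluentd:
  advancedSyslog:
  - host: 10.5.0.50
    port: 514
    protocol: udp
    log_type: "CONTAINER"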
Log Filtering Options:
There are three kinds of logs produced for every request:
A connection log, indicating that a new connection has been made and the IP from where the connection is made.
A request log, indicating that a request has been received, along with the IP from where that request is coming from and the associated HTTP Method and URL.
A response log, indicating that a response has been returned, along with the associated status code.
When Log Filtering is enabled, it will cache request logs matching a shortlist of method-URL pairs. If the corresponding response is a success, the request and response logs are discarded. If the response is not a success, then both the request and response logs are printed.
The filtered combinations are:
GET /sys/v1/health
HEAD /sys/v1/health
HEAD /
Since Kubernetes and most load balancers create a new connection for each request they make, this feature will attempt to profile individual IPs. If an IP is found to only make health check-related requests (that is, one of the three filter combinations listed above), then the connection logs from these IPs will also be filtered. If one of these IPs later makes a non-filtered request, its connection logs will no longer be filtered until it returns to making only filtered requests.
Log Filtering is enabled by updating the cluster-wide configuration. When enabled, the backend will no longer emit logs associated with successful health checks.
Depending on the cluster, this can amount to a substantial reduction in the volume of logs later forwarded over Syslog. The default is false.
filterConfig:
  useLogFilter: true | false
Hybrid Cluster Configuration:
With Release 3.25, hybrid clustering of "Series-I and Series-II", "Azure and Series-I or Series-II", and "AWS and Series-I or Series-II running Fortanix DSM in the same (SGX/non-SGX) software mode" is possible. This allows nodes of different types to participate in a single cluster. When using an internal DSM load balancer (that uses keepalived) with hybrid clusters consisting of Fortanix Series-I and Fortanix Series-II appliances, fetch the network interface names (Series-I = eno1 and Series-II = enp80s0f0) of the appliances and configure the config.yaml file as follows:
sdkms:
  clusterIp: 10.0.0.1
  keepalived:
    nwIface: enp80s0f0,eno1
5.3.2 Proxy Support for Outbound Connections
You can add cluster-wide proxy configuration for outbound connections. This is defined in the yaml configuration files.
The global proxy functionality will not override existing proxy settings in the .global.attestation section.
However, if no proxy settings exist in the .global.attestation section, the remote attestation proxy setting will inherit the global settings.
Since the attestation proxy settings have no exclude option, the global exclude setting will be added to the attestation proxy settings.
The global proxy functionality is only available in SGX-based deployments (FX2200 and Azure Confidential Computing virtual machines). The configuration is defined in the .global.proxy entry. It holds two values:
url: Proxy value (mandatory).
exclude: Comma-separated list of hostnames and, optionally, ports that should not use the proxy (optional).
global:
  proxy:
    url: <proxy_url>
    exclude: <host>,<host:port>
exclude options:
The hosts in exclude will be suffix matched. For example, exclude: example.com will match example.com and subdomain.example.com but will not match example.com.net.
Any leading '.' is removed. For example, exclude: .example.com is the same as exclude: example.com.
CIDR notation is accepted. For example, 111.222.0.0/16 will match all IP addresses in the format 111.222.xxx.xxx.
Wildcards (*) and regex are NOT supported, and hostnames are not resolved to IP addresses.
The exclude entry will automatically contain hostnames used internally.
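A minimal filled-in sketch, using a hypothetical proxy and exclusions, could look like:
global:
  proxy:
    url: http://proxy.your-domain.com:3128
    # internal PKI host and the 10.0.0.0/8 range bypass the proxy (illustrative values)
    exclude: pki.your-domain.com,10.0.0.0/8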
5.3.3 HAProxy Support
To enable HAProxy support, you must modify the proxyHeaderConfig field of the config.yaml file for the cluster and reapply it with the cluster tool. The field has three possible values:
disabled is the default condition and is equivalent to the field missing from the config.yaml file.
"maybefrom=<cidr1>,<cidr2>,...", where the CIDR ranges point to the server(s) initiating the proxy protocol enabled connection.
"requiredfrom=<cidr1>,<cidr2>,...", where the CIDR ranges point to the server(s) initiating the proxy protocol enabled connection.
The minimum format is:
global:
  proxyHeaderConfig: disabled
  # OR
  proxyHeaderConfig: "requiredfrom=<cidr>,<cidr>,..."
  # OR
  proxyHeaderConfig: "maybefrom=<cidr>,<cidr>,..."
NOTE
It is important that there are no spaces within the quotes and that the entire value is wrapped in double quotes, as in the example above.
The CIDRs should narrowly contain the addresses of the load balancers that will be injecting the Proxy Protocol headers and MUST exclude the cluster's pod subnet.
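For example, if two load balancers at 10.1.2.10 and 10.1.2.11 (hypothetical addresses) inject the PROXY protocol header, a narrowly scoped value could be:
global:
  proxyHeaderConfig: "requiredfrom=10.1.2.10/32,10.1.2.11/32"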
There are two options to securely handle transitions in accordance with the PROXY header spec. The sequence for enabling support is:
The proxyHeaderConfig field is disabled.
If you have an extant cluster that does not use proxy protocol, the process for enabling it is to first apply the maybefrom variant to the pods, with the given CIDRs pointing to the IPs of the load balancer. These should be as specific as possible. Once that is done, the proxy header is enabled on the load balancer serving the cluster.
Next, the config is changed to the requiredfrom variant with the same CIDRs. On fresh clusters, the header can start off as requiredfrom if the load balancer already has proxy support enabled.
To disable it, the opposite steps are followed, that is:
First, the config is changed to maybefrom.
The load balancer's proxy support is disabled, and then the config is changed to disabled.
5.3.4 Sealing Key Policy
A Sealing key is used to wrap a cluster master key. Each node has its own unique Sealing key. The sealing key is derived and not saved anywhere in the system.
The Sealing Key policy defines what type of Sealing key should be used in the cluster.
Possible values are:
SGX - This is the default value. With this policy, the SGX sealing key is used. The SGX sealing key cannot be retrieved from outside SGX. This provides security guarantees for the sealing key.
Personalization Key - The personalization key is generated using the Fortanix DSM Deterministic Random Bit Generator (DRBG) and stored in a tamper-protected key storage module. In this policy type, the sealing key is derived using the SGX sealing key and the personalization key. This provides additional protection of the sealing key. The personalization key is zeroized upon tamper, which in turn disallows deriving the sealing key.
Recovery Key – In this policy type, the sealing key is derived using the personalization key and a recovery key. The recovery key is automatically generated and can be retrieved by a sysadmin. In this policy, in addition to using a personalization key based sealing key for wrapping the cluster master key, a recovery key is also used to wrap the cluster master key. This allows the sysadmin to recover a node using the recovery key in case the personalization key is zeroized. After the setup, a sysadmin must retrieve the recovery key and store it securely.
Setup
When setting up a fresh cluster, add the following lines under the “global” section in config.yaml and provide one of the values mentioned above:
sealingKeyPolicy: VALUE
Example:
sealingKeyPolicy: personalizationKey
Node Recovery
If the personalization key or recovery key policy is used and the personalization key is zeroized, then the node goes into the "NeedsRecovery" state. In this state, for the Fortanix DSM pod on this node to become functional, recovery needs to be performed. Recovery will allow the node to unwrap the corresponding cluster master key blob.
To perform recovery, run the following script as root:
/opt/fortanix/sdkms/bin/node_recover.sh
Recovery can be done with or without a recovery key.
If the recovery key is not available or if the sealing key policy value is "personalizationKey", then recovery requires that there be other working nodes in the cluster. In that case, run the command as follows:
sudo /opt/fortanix/sdkms/bin/node_recover.sh --node-name NODE_HOST_NAME
Where, NODE_HOST_NAME is the name of the node where recovery is to be performed.
If the recovery key is available and the sealing key policy is "recoveryKey", then recovery can be done even on a standalone node. In that case, run the command as follows:
sudo /opt/fortanix/sdkms/bin/node_recover.sh --node-name NODE_HOST_NAME --recovery-key RECOVERY_KEY_VALUE
Where,
NODE_HOST_NAME is the name of the node where recovery is to be performed.
RECOVERY_KEY_VALUE is the value of the recovery key.
Updating sealing key policy in the existing cluster
For existing clusters, the sealing key policy is by default "sgx". The procedure for changing the sealing key policy is as follows:
Update config.yaml to add one of the following lines under the "global" section:
sealingKeyPolicy: personalizationKey
sealingKeyPolicy: recoveryKey
Apply the updated config by running the following command:
sudo sdkms-cluster deploy --config ./config.yaml --stage DEPLOY
Check if the new value was applied by running the following command and checking the value of "sealingKeyPolicy":
sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get cm config-values -oyaml
Run the following command:
sudo /opt/fortanix/sdkms/bin/update_sealing_key_policy.sh
The policy change will be effective on the next upgrade.
5.3.5 Create Fortanix DSM Cluster
Run the following command to create a Fortanix DSM cluster:
sudo sdkms-cluster create --self=<server ip address/subnet mask> --config ./config.yaml
Check the status of all pods to verify that all pods are running correctly. Run the following command to list the pods and their status:
sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get pods -o wide
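As a convenience, you can filter this output for pods that are not yet in the Running state (a simple grep-based sketch, not a Fortanix tool):
sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get pods -o wide | grep -v Running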
Verify the Node Status in the Fortanix DSM System Administration View
If you have enabled the Secure Node Join feature with dcap:onlineWithTrustedNodeIdentity or dcap:offlineWithTrustedNodeIdentity attestation in the Remote Attestation Options as described in Section 5.3.1: Setup Deployment Specific Configuration, then as a system administrator, you can log into the Fortanix DSM System Administration view.
Navigate to System Administration → CLUSTER → TRUST CENTER tab.
The first node and its node identity will be visible in the TRUST CENTER table.

Figure 3: Trust Center Tab
The Fortanix DSM related pods may continue to crash until the certificates are installed (Section 5.5: Install Certificates), but verify that the pods related to aesmd, Elasticsearch, and Cassandra are running.
If you want to set up a cluster that spans multiple sites or data centers, refer to the steps in Fortanix Data Security Manager Data Center Labeling to label which nodes are in what data center.
5.3.6 Requirements for Platform Registration
If you are using a proxy with Fortanix DSM, add the following URL to allow platform registration:
https://api.trustedservices.intel.com/sgx/registration/v1/platform
If the platform registration fails when the proxy is set up, run the following command to restart the registration service:
sudo systemctl restart mpa_registration_tool.service
5.4 Configure Other Nodes for Joining the Cluster
Now that the installation is complete, join all other nodes to the cluster by running the following join command:
NOTE
Do not perform the join operation on multiple nodes simultaneously. The join operation of a node should be started once the previous node has successfully joined the cluster.
If your nodes are in the same subnet, run the following command:
sudo sdkms-cluster join --peer=<server ip address/subnet> --token=<kubeadm-token>
If your nodes are in different subnets, run the following command:
sudo sdkms-cluster join --peer=<server ip address> --token=<kubeadm-token> --self=<node IP address>
In the above specification,
Server IP Address corresponds to the node's IP address on which the cluster was created. Specify it with the subnet.
The node IP address corresponds to the IP address of the joining node.
If this peer address belongs to a cluster enabled with the Secure Node Join feature with dcap:onlineWithTrustedNodeIdentity or dcap:offlineWithTrustedNodeIdentity attestation, the node will not join the cluster immediately. Instead, the system administrator must log into the Fortanix DSM UI, navigate to the System Administration → CLUSTER → TRUST CENTER tab, and decide whether to approve or decline the node's identity request.
Figure 4: Approve/Decline Option
NOTE
When a new node joins an existing cluster with secure node join enabled, you can find the new node's NODE IDENTITY in any of the sdkms pod logs in the current cluster.
For example, in the sdkms pod log of an existing cluster node, you can see the following log:
2024-10-21 13:11:33.505Z INFO backend::api::cluster - new node_identity added: ABcajcYepnCqhMnz7yYXndghKFeESvqqujecavX2FVg=
This same node_identity's join request is also visible on the Fortanix DSM UI.
Figure 5: View NODE IDENTITY similar to sdkms pod log
The node_identity is derived from the node's hardware-level identifier known as the Platform Provisioning ID (PPID). The PPID value for an Intel-attested node is extracted from its DCAP attestation report. A SHA256 hash of this value is then base64 encoded to produce the final node_identity.
If the system administrator approves the node join request, the TRUST STATUS value for that node changes to "Trusted", and if you decline the request, the TRUST STATUS value for that node changes to "Rejected".
The sdkms-cluster join peer commands will only execute successfully if the system administrator approves the node identity. Otherwise, they will fail.
Figure 6: Trusted Nodes
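As an illustration of the derivation described above, the SHA256-then-base64 transformation can be reproduced with openssl; the PPID placeholder is hypothetical and the exact byte encoding used by Fortanix DSM is not specified here:
# hash the PPID bytes with SHA256, then base64-encode the digest (illustrative only)
printf '%s' '<raw-ppid-bytes>' | openssl dgst -sha256 -binary | base64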
Example join commands for joining a node (10.198.0.65) to a cluster created on a node (10.198.0.66) on a /24 subnet:
sudo sdkms-cluster join --peer=10.198.0.66/24 --token=525073.715ecf923e4ae1db
OR
sudo sdkms-cluster join --peer=10.198.0.66 --token=525073.715ecf923e4ae1db --self=10.198.0.65
The token can also be retrieved by executing the following command on the first host:
sudo kubeadm token list
5.5 Configure Data Center Labeling
After all nodes have successfully joined the cluster, you must perform data center labeling. Data center labeling configuration in a multi-site deployment improves read resiliency by enabling requests to read data from the local data center and supports the Read-Only mode of operation when a global quorum is lost and a local quorum is available.
For more information about how to use the automated script to configure data center labeling, refer to the Fortanix Data Security Manager Data Center Labeling.
For more information, refer to Fortanix Data Security Manager Read-Only Mode of Operation.
5.6 Install Certificates
WARNING
The process described in this section generates keys and certificate requests for Transport Layer Security (TLS) connectivity. Save the passphrase for the private key as it cannot be recovered later.
Fortanix DSM requires two SSL certificates for the services:
Main API service (cluster)
Static Asset service (user interface (UI))
The Certificate Signing Request (CSR) generation is supported for both and can be signed by any preferred Certificate Authority (CA).
The script to generate CSR performs the following actions:
Generates CSRs for both the Fortanix DSM cluster and UI.
Creates the first system administrator (sysadmin) account that can log into Fortanix DSM.
The domain names during CSR generation can be provided in two formats:
Simple: Provide domain name and SANs. For example, www.fortanix.com.
Distinguished: Provide the full DN string. For example, CN=www.fortanix.com O=Fortanix L=Mountain View ST=California.
You can also add multiple Subject Alternative Names (SANs) with each one on a new line. When adding multiple SANs, press Enter (Return) after pasting each SAN entry. After pasting the final SAN, press Enter (Return) twice to confirm and continue.
When prompted, provide the following:
Domain name for the cluster (Main)
Domain name for the UI (Assets)
Cluster name
Sysadmin email address
Sysadmin password
TLS key type
NOTE
For mutual TLS authentication, the standard API key-based authentication interface will not work. To support mutual TLS authentication in Fortanix DSM, add an additional interface as SAN Name (apps.sdkms.dev.com) on the main certificate. For more information, refer to Mutual Transport Layer Security using Fortanix Data Security Manager (on-prem only).
When you create or rotate the CSR, the default TLS key type is RSA2048. You can also use higher TLS key sizes or bits. The supported key types are RSA2048, RSA3072, RSA4096, NISTP256, NISTP384, NISTP521. Refer to Fortanix DSM 4.34 release notes for a known issue “UI doesn’t load after installing NISTP521 certificates”.
When you renew or update the certificate, you will see an option to reuse previously generated parameters such as CN and SAN.
Perform the following steps to generate a CSR and install the certificates:
Run the following command on the node where the cluster is being created for the first time (fresh cluster setup) to generate the CSR:
sudo get_csrs
The CSR contains information (for example, common name, organization, country) which the CA will use to create your certificate.
The generated CSRs appear as:
Cluster Certificate Request
UI Certificate Request
These certificates must either be signed by a CA or can be locally signed by you.

Figure 8: CSRs Generated
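Before submitting a CSR to your CA, you can optionally inspect the requested common name and SANs with openssl. The filename cluster.csr below is a hypothetical placeholder for wherever you saved the generated Cluster Certificate Request:
openssl req -in cluster.csr -noout -subject
openssl req -in cluster.csr -noout -text | grep -A1 "Subject Alternative Name"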
You can use the optional argument --rotate to rotate existing certificates and generate new CSRs to renew expired certificates on running clusters:
sudo get_csrs --rotate
This command regenerates key pairs and CSRs for both Cluster and UI and rotates the Main and Asset TLS keys.
Figure 9: Rotate CSR
NOTE
If this command does not create a CSR for "UI Certificate Request", then contact Fortanix Support.
If get_csrs fails, then refer to the logs at /var/log/get_csrs.log.
After the newly generated CSRs are signed by the CA, run the following command to install the new certificates:
sudo install_certs
When prompted, paste the signed certificate chain (leaf certificate first) for both the Main (Cluster) and Assets (UI) domains.
Figure 10: Install Certs in Main
Figure 11: Install Certs in Assets
You can use the optional argument --rotate to install the new certificates on running clusters and remove the old ones:
sudo install_certs --rotate
After pasting the final certificate in the chain, press Enter (Return) twice to confirm and continue the installation.
NOTE
A catch-all certificate cannot be used, and a single multi-domain certificate cannot be used for BOTH the cluster and the UI.
Multi-domain certificates can be used when a SAN is provided; however, the SAN cannot match the DNS name used for the cluster.
Typically, a standard TLS certificate without a Subject Alternative Name is enough for most installations.
6.0 Using Fortanix DSM
You can access the Fortanix DSM web interface using the hostname assigned to the Fortanix DSM cluster (for example, https://sdkms.<your-domain.com>). For more information on Fortanix DSM usage, refer to Fortanix Data Security Manager (DSM).
7.0 Update the Existing DSM Cluster with Trusted Nodes
Perform the following steps to enable the Secure Node Join feature for existing clusters with dcap:online or dcap:offline attestation enabled:
NOTE
Ensure that you have upgraded to Fortanix DSM version 4.31 or higher to enable the Secure Node Join feature.
Run the following command to fetch the cluster configuration join policy:
curl --location --request GET 'https://<ui dns>/admin/v1/cluster' --header 'Authorization: Bearer <bearer token>' --header 'Content-Type: application/json'
Run the following command to enable the Secure Node Join feature for the existing cluster:
curl --location --request PATCH 'https://hostname/admin/v1/cluster/join_policy' --header 'Authorization: Bearer <token>' --header 'Content-Type: application/json' --data-raw '{ "new-policy": { "join-policy": { "all": ["dcap", "node-ca", "trusted-node-identity"] }, "allowed-sgx-types": { "any": ["standard","scalable"] } } }'
Figure 12: Feature is Enabled Pop-Up
Run the following commands to navigate to the /bin directory and execute the backend rolling restart script:
cd /opt/fortanix/sdkms/bin
sudo ./dsm_backend_rolling_restart.sh
This command will restart the cluster to apply the updated configurations and activate the Secure Node Join feature.
Figure 13: Trust Center Tab
Run the following command to view the list of all the pod IPs:
sudo -E kubectl get pods -lapp=sdkms -owide
Run the following command from the first node in the cluster to add the node identity for each node:
curl -kv https://<pod-IP>:4444/admin/v1/cluster/node_identities/add_self -X POST
Figure 13: List of Trusted Nodes
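For convenience, the per-pod curl call above can be looped over all sdkms pod IPs; this sketch assumes the jsonpath query returns the pod IPs listed by the earlier kubectl command:
for ip in $(sudo -E kubectl get pods -lapp=sdkms -o jsonpath='{.items[*].status.podIP}'); do
  # add the node identity through each sdkms pod
  curl -kv "https://${ip}:4444/admin/v1/cluster/node_identities/add_self" -X POST
done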
8.0 Add Node to an Existing Fortanix DSM Cluster
Run the following command to create the kubeadm token from one of the existing nodes in the cluster:
sudo kubeadm token create
Join the new node(s) with the sdkms-cluster join command. Ensure that the installer script has been executed on these nodes.
If your nodes are in the same subnet, run the following command:
sudo sdkms-cluster join --peer=<existing node IP>/<subnet-mask> --token=<token>
If your nodes are in different subnets, run the following command:
sudo sdkms-cluster join --peer=<server ip address> --token=<kubeadm-token> --self=<node IP address>
NOTE
If the Fortanix DSM cluster is enabled with the Secure Node Join feature, then perform the steps to add the trusted nodes as mentioned in Section 5.4: Configure Other Nodes for Joining the Cluster to add a node to the cluster.
9.0 Remove Node from an Existing Fortanix DSM Cluster
Run the following command to identify the name of the node to be removed under the header "NAME":
sudo -E kubectl get nodes
Run the following command to remove the node from the cluster using the <node name>:
sudo sdkms-cluster remove --node <node name> --force
Run the following command to reset the node:
sdkms-cluster reset --delete-data --reset-iptables
Run the following command to delete the identity of the removed node from the cluster:
curl --location --request DELETE 'https://<ui dns>/admin/v1/cluster/node_identities/<node identity>' --header 'Authorization: Bearer <bearer token>' --header 'Content-Type: application/json'
Rotate the Cluster Master Key (CMK) of the existing cluster. For more information, refer to Cluster Master Key Rotation.
NOTE
It is recommended to perform the CMK rotation steps whenever a node is removed from the Fortanix DSM cluster.
TIP
There can be previous versions of UI pods that show up in the pending state. Ignore them or delete the older versions from the UI.
Before rejoining the same node to the cluster, it is recommended to perform a cleanup and reinstall the build, rather than simply resetting the node.
10.0 Best Practices
Only issue accounts to trusted administrators.
Utilize strong passwords.
Monitor logs.
Restrict physical access to the appliance to trusted administrators.
Create two System Administrator accounts.
11.0 Troubleshooting and Support
| PROBLEM | RESOLUTION |
|---|---|
| The repair job fails. | The |
| Any of the scripts fail at any step. | They will print a detailed error message. Report the problem to Fortanix support (support@fortanix.com) and provide the script output. |
| Due to a known issue in Palo Alto Networks (PAN) firewalls, VXLAN traffic required by Fortanix DSM for intra-cluster communication could be mysteriously dropped without notification. After implementing the Fortanix DSM Port Requirements, connectivity between DSM nodes can be confirmed. Fortanix DSM creates an overlay virtual network for intra-cluster communication over UDP port 8472 as part of the standard deployment. On this virtual network, pinging or using CURL from one DSM node to another can also confirm connectivity. Unfortunately, when a PAN FW is in the network flow between DSM nodes, only a fraction of the packets sent over the VXLAN make it to the desired node due to this known issue. This lack of network consistency results in DSM nodes joining an existing cluster appearing to fail without much detail as to the cause. Cassandra pods may not be able to gossip with an expected peer, which is one clue that the cluster is not wholly joined. The joining node appears to have successfully joined the Kubernetes cluster. Still, the secure data synchronization expected to occur over this overlay VXLAN is never completed, leaving the cluster in a non-functional state. | Refer to the PAN Known Issues List, specifically issue PAN-160238, that explains this issue and the suggested workaround: PAN-160238 - If you migrate traffic from a firewall running a PAN-OS version earlier than 9.0 to a firewall running PAN-OS 9.0 or later, you experience intermittent VXLAN packet drops if the TCI policy is not configured for inspecting VXLAN traffic flows. Workaround: On the new firewall, create an app override for VXLAN outer headers as described in What is an Application Override? and the video tutorial How to Configure an Application Override Policy on the Palo Alto Networks Firewall. Turning on Tunnel Content Inspection (TCI) appears to resolve the issue in addition to the suggested workaround above from PAN to create an Application Override policy for the Fortanix DSM nodes. After making these recommended changes, VXLAN traffic should become more reliable and stop inadvertently dropping packets. The Fortanix DSM cluster should be able to communicate appropriately, and the Fortanix DSM node should be able to successfully join the existing cluster. |
