1.0 Introduction
This article describes the procedure to deploy the Fortanix Data Security Manager (DSM) Accelerator Webservice on a Red Hat OpenShift cluster using a binary build.
The Fortanix DSM Accelerator Webservice tarball, provided by Fortanix, contains a pre-built Open Container Initiative (OCI) container image. The steps described in this article explain how to extract the image, build it within OpenShift, deploy it as a pod, and expose it externally using a route.
2.0 Product Tested Version
The deployment procedure has been validated on OpenShift with Fortanix DSM Accelerator Webservice version 1.32.4594.
The same steps apply to other versions unless otherwise specified in the release notes.
3.0 Prerequisites
Before you begin, ensure the following requirements are met:
- The oc CLI is installed, and you are logged in to the target OpenShift cluster.
- You have the necessary permissions to create BuildConfigs, ImageStreams, Deployments, Services, and Routes in the target namespace.
- The Fortanix DSM Accelerator Webservice tarball (dsma_<VERSION>.tar) has been downloaded from the Fortanix support portal and is available on the machine where the commands will be executed.
- The jq utility is installed on the machine (used for parsing the image manifest). Run the following command to verify the installation:
jq --version
Run the following command to install on Ubuntu/Debian:
apt-get install -y jq
- The target OpenShift namespace already exists. The examples in this article use the default namespace; replace it with your namespace as needed.
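The tool prerequisites can be checked from the shell before starting. This is an illustrative sketch, not part of the official procedure:

```shell
# Report whether each required CLI tool is available on PATH.
for tool in oc jq tar; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING must be installed before continuing.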
4.0 Configuration reference
The following variables are used throughout the deployment procedure. Replace them with environment-specific values before executing any commands.
- DSMA_TARBALL: Name of the downloaded tarball. For example, dsma_1.32.4594.tar.
- DSMA_VERSION: Version string. For example, 1.32.4594.
- DSM_ENDPOINT: HTTPS URL of your Fortanix DSM cluster. For example, https://eu.smartkey.io/.
- PORT: Port on which the DSM Accelerator Webservice listens. The default port is 8080.
- CACHE_TTL: Cache time-to-live in seconds. The default value is 14400.
- OC_NAMESPACE: Target OpenShift namespace. For example, default.
- WORKING_DIR: Local working directory path.
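These placeholders can be exported as shell variables so later commands can reference them. The values below are the article's examples (the WORKING_DIR path is an arbitrary illustration); replace them with your environment's settings:

```shell
# Example values only -- substitute your environment's settings.
export DSMA_TARBALL="dsma_1.32.4594.tar"
export DSMA_VERSION="1.32.4594"
export DSM_ENDPOINT="https://eu.smartkey.io/"
export PORT="8080"
export CACHE_TTL="14400"
export OC_NAMESPACE="default"
export WORKING_DIR="$HOME/dsma-deploy"   # illustrative path, not mandated by Fortanix
echo "Deploying DSMA $DSMA_VERSION to namespace $OC_NAMESPACE"
```

With the variables exported, the literal <PLACEHOLDER> tokens shown in later commands can be replaced by $DSMA_TARBALL, $OC_NAMESPACE, and so on.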
5.0 Log In to OpenShift
Before running any oc commands, log in to the OpenShift cluster using the following command:
oc login <CLUSTER_API_URL> -u <USERNAME> -p <PASSWORD>
Here, provide the cluster API URL, username, and password for your environment.
If the cluster uses a self-signed or internally signed certificate, the CLI prompts you to confirm an insecure connection. Type yes to proceed.
Example output:
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): yes
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.
You have access to 81 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Welcome! See 'oc help' to get started.
The insecure TLS warning is expected in lab and partner environments where the cluster certificate is not signed by a public CA.
In production environments, provide the correct CA bundle using the --certificate-authority option instead of accepting the insecure prompt.
Run the following command to confirm the active project matches your target namespace before proceeding:
oc project
If a different namespace is shown, run the following command to switch to the correct one:
oc project <OC_NAMESPACE>
6.0 Tarball Format and Extraction
From version 1.32 and onward, Fortanix DSM Accelerator Webservice tarballs are packaged in OCI image format.
In OCI format, image layers are stored as binary blob files under a blobs/sha256/ directory within the tarball. These blobs do not have a file extension. Earlier versions of this deployment article referenced a Docker Save format tarball, where image layers are stored as files named layer.tar and can be located using the following command:
find dsma_tmp -name "*.tar"
This command returns no results for OCI format tarballs, because the layer blobs do not have a .tar extension. This behavior is expected and not an error.
Run the following command to determine the format of a given tarball, extract it, and check its contents:
tar tf dsma_<VERSION>.tar | head -5
If the output shows blobs/sha256/ paths, the tarball is in OCI format.
If the output shows paths such as <hash>/layer.tar, the tarball is in Docker Save format.
This article uses the OCI extraction method. The extraction script reads layer order from manifest.json, making it version-independent.
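To see what the manifest-driven extraction relies on, here is a stand-in manifest.json with the same shape as the one inside an OCI tarball (the blob names are invented for illustration), queried the same way the extraction script in Section 7.1 queries the real file:

```shell
# Illustrative only: a minimal manifest.json shaped like the one in an OCI
# tarball. The real file lists the image's actual layer blob digests.
cat > /tmp/sample_manifest.json <<'EOF'
[
  {
    "Config": "blobs/sha256/aaaa",
    "RepoTags": ["dsma:latest"],
    "Layers": [
      "blobs/sha256/1111",
      "blobs/sha256/2222"
    ]
  }
]
EOF
# Same query the extraction script uses: print the layer paths in apply order.
jq -r '.[0].Layers[]' /tmp/sample_manifest.json
```

The order of the printed paths is the order in which the layers must be unpacked, which is why the extraction script never needs hard-coded layer hashes.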
7.0 Deployment Steps
This section describes the sequential steps for deploying the Fortanix DSM Accelerator Webservice on Red Hat OpenShift.
7.1 Prepare the Working Directory and Extract the Tarball
Perform the following steps:
Run the following commands to create a working directory and navigate into it. All subsequent steps assume you are working from this directory.
mkdir <WORKING_DIR>
cd <WORKING_DIR>
Place the Fortanix DSM Accelerator Webservice tarball into this directory, then run the following commands to extract the OCI image to a temporary location:
mkdir /tmp/oci_extract
tar xf <DSMA_TARBALL> -C /tmp/oci_extract
Run the following command to verify that the extraction produced an OCI image structure:
ls /tmp/oci_extract
Expected output:
blobs  index.json  manifest.json  oci-layout  repositories
Run the following commands to create the dsma directory, which serves as the build context for the OpenShift binary build:
mkdir dsma
cd dsma
Extract all image layers into the dsma directory, in the order specified by the manifest. The script below reads the manifest automatically; therefore, no layer hashes need to be copied manually.
BLOBS=/tmp/oci_extract/blobs/sha256
jq -r '.[0].Layers[]' /tmp/oci_extract/manifest.json \
  | sed 's|blobs/sha256/||' \
  | while read layer; do
      tar xf "$BLOBS/$layer" --exclude='.wh..wh.' 2>/dev/null || true
    done
The extraction may take a minute, depending on disk speed.
When it completes:
Run the following command to verify the application binaries are present:
ls app/
Expected output:
dsma  dsma-lambda
If app/dsma is missing, the layer containing the application binary was not extracted correctly. Re-run the extraction script and check the manifest.json for the correct layer order.
Run the following command to verify the SSL certificates:
ls etc/ssl/certs/ | head -5
Expected output (certificate filenames will vary by build):
002c0b4f.0  0179095f.0  02265526.0  062cdee6.0  064e0aa9.0
If etc/ssl/certs/ is empty or missing, the package installation layer did not extract. This causes SSL certificate verification failures at runtime.
7.2 Create the Dockerfile and .dockerignore
The following two files must be created at the root of the dsma directory:
Dockerfile
.dockerignore
Perform the following steps:
Create a Dockerfile with the following configurations:
FROM ubuntu:24.04
COPY . /
RUN sh /tmp/apt-packages.sh
EXPOSE 8080
ENTRYPOINT ["/app/dsma"]
Ensure COPY appears before RUN. Otherwise, the apt-packages.sh script will not exist in the container filesystem when the RUN step executes, and the package installation will fail.
Create .dockerignore with the following configurations:
Dockerfile
.dockerignore
This prevents the Dockerfile and .dockerignore files from being copied into the container image by the COPY step.
Run the following command to confirm both files exist at the root of the dsma directory:
ls -a
Expected output:
.dockerignore Dockerfile app etc tmp usr
7.3 Create the OpenShift Build Configuration
From the dsma directory, create a binary build configuration in OpenShift.
Set the environment variables to match your deployment target.
oc new-build --binary --name=dsma \
  --env=FORTANIX_API_ENDPOINT=<DSM_ENDPOINT> \
  --env=PORT=<PORT> \
  --env=CACHE_TTL=<CACHE_TTL>
Expected output:
* A Docker build using binary input will be created
* The resulting image will be pushed to image stream tag "dsma:latest"
* A binary build was created, use 'oc start-build --from-dir' to trigger a new build
--> Creating resources with label build=dsma ...
imagestream.image.openshift.io "dsma" created
buildconfig.build.openshift.io "dsma" created
--> Success
7.4 Start the Image Build
Start the build and point it at the current dsma directory.
The --follow flag streams the build log to the terminal. Wait for the build to complete before proceeding.
oc start-build dsma --from-dir=. --follow
The directory contents are uploaded to the OpenShift build pod. OpenShift then executes the Dockerfile steps inside the cluster and pushes the resulting image to the internal registry.
Expected output (image digest and build tag will differ per environment):
Uploading directory "." as binary input for the build ...
Uploading finished
build.build.openshift.io/dsma-1 started
Receiving source from STDIN as archive ...
STEP 1/8: FROM ubuntu:24.04
STEP 2/8: ENV "CACHE_TTL"="14400" "FORTANIX_API_ENDPOINT"="<DSM_ENDPOINT>" "PORT"="8080"
STEP 3/8: COPY . /
STEP 4/8: RUN sh /tmp/apt-packages.sh
STEP 5/8: EXPOSE 8080
STEP 6/8: ENTRYPOINT ["/app/dsma"]
STEP 7/8: ENV "OPENSHIFT_BUILD_NAME"="dsma-1" "OPENSHIFT_BUILD_NAMESPACE"="<OC_NAMESPACE>"
STEP 8/8: LABEL "io.openshift.build.name"="dsma-1" "io.openshift.build.namespace"="<OC_NAMESPACE>"
Successfully pushed image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>
Push successful
If the build fails at STEP 4/8 due to a missing apt-packages.sh, verify the Dockerfile order: check that COPY appears before RUN.
If it fails with a network error during apt-get, the SSL certificate store from the OCI layers was not extracted into the dsma directory. Repeat Section 7.1: Prepare the Working Directory and Extract the Tarball.
7.5 Verify the Image Stream
Run the following command to confirm the image was pushed successfully:
oc describe is dsma
Note the image digest. The digest is required for the deployment manifest in the next step.
Expected output:
Name: dsma
Namespace: <OC_NAMESPACE>
Created: 2 minutes ago
Labels: build=dsma
Annotations: openshift.io/generated-by=OpenShiftNewBuild
Image Repository: image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma
Image Lookup: local=false
Unique Images: 1
Tags: 1
latest
no spec tag
* image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>
14 seconds ago
Run the following command to extract the digest into a variable for later use:
IMAGE_REF=$(oc get istag dsma:latest -o jsonpath='{.image.dockerImageReference}')
echo $IMAGE_REF
Example output:
image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>
7.6 Deploy the Application
Perform the following steps:
Create the deployment manifest file dsma_dep.yaml in the working directory. Update the image field with the value obtained from oc describe is dsma in Section 7.5: Verify the Image Stream.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: <OC_NAMESPACE>
  name: dsma
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"dsma:latest"},"fieldPath":"spec.template.spec.containers[?(@.name==\"dsma\")].image"}]'
spec:
  selector:
    matchLabels:
      app: dsma
  replicas: 1
  template:
    metadata:
      labels:
        app: dsma
    spec:
      containers:
        - name: dsma
          image: 'image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>'
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: FORTANIX_API_ENDPOINT
              value: '<DSM_ENDPOINT>'
            - name: PORT
              value: '<PORT>'
            - name: CACHE_TTL
              value: '<CACHE_TTL>'
      imagePullSecrets: []
  strategy:
    type: Recreate
  paused: false
Run the following command to apply the manifest file:
oc apply -f dsma_dep.yaml
Expected output:
deployment.apps/dsma created
NOTE
Do not run both oc new-app and oc apply -f for the same deployment. Use one or the other. This procedure uses oc apply with the manifest file.
7.7 Verify the Running Pod
Perform the following steps:
Run the following command to verify that the pod is in Running status:
oc get pods | grep ^dsma
It may take 30 to 60 seconds for the pod to start after the deployment is created.
Expected output:
dsma-1-build    0/1   Completed   0   10m
dsma-<pod-id>   1/1   Running     0   <age>
The build pod status Completed is expected and can be ignored. The application pod should show 1/1 Running.
Run the following commands to capture the name of the Running pod and confirm it is listening on port 8080:
POD=$(oc get pods -l app=dsma --no-headers | grep Running | awk '{print $1}')
oc get pod $POD -o jsonpath='{.spec.containers[*].ports[*]}'
Expected output:
{"containerPort":8080,"protocol":"TCP"}
Run the following command to check the application startup logs to confirm a successful connection to Fortanix DSM:
oc logs $POD
Expected output:
INFO dsma - ================= DSMA SERVER START UP =================
INFO dsma - Listening on port: 8080
INFO dsma - CA File path: None
INFO dsma - Connection will retry in: 30000 millis
INFO dsma - Availability set to false
INFO dsma - TTL used by the Valentino cache: 14400 sec
INFO dsma - =============== SETTING UP DSMA SERVER ===============
INFO dsma::server - Initialized DsmAServer Proxy config: None
INFO dsma - Successfully connected to DSM at "<DSM_ENDPOINT>"
INFO dsma::api - Building router
INFO dsma - =============== DSMA SERVER SETUP COMPLETE ===============
WARN dsma - Using self-signed TLS certificate. In order to avoid this, use the argument `--tls-files`
The line "Successfully connected to DSM at <DSM_ENDPOINT>" confirms that the pod is operational and communicating with the DSM backend.
If the pod is in CrashLoopBackOff, check oc logs $POD for the error. Common causes are covered in Section 8.0: Known Issues.
7.8 Create the Service
Perform the following steps:
Expose the running pod as a service named dsma-service. The service accepts incoming traffic on port 443 and forwards it to the container on port 8080.
oc expose pod $POD --port=443 --target-port=8080 --name=dsma-service
Expected output:
service/dsma-service exposed
Run the following command to verify the service was created:
oc get service dsma-service
Expected output:
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
dsma-service   ClusterIP   <cluster-ip>   <none>        443/TCP   <age>
7.9 Create the External Route
Perform the following steps:
Create the route manifest file route.yaml in the working directory.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: dsma-route
  namespace: <OC_NAMESPACE>
spec:
  path: ''
  to:
    name: dsma-service
    weight: 100
    kind: Service
  host: ''
  tls:
    insecureEdgeTerminationPolicy: None
    termination: passthrough
  port:
    targetPort: 8080
  alternateBackends: []
The route uses TLS passthrough termination. OpenShift routes the connection directly to the DSM Accelerator Webservice pod without terminating or re-encrypting the TLS session. This is the correct mode because the DSM Accelerator Webservice binary handles TLS itself.
Run the following command to apply the route:
oc apply -f route.yaml
Expected output:
route.route.openshift.io/dsma-route created
Run the following command to retrieve the assigned hostname:
oc get route dsma-route
Expected output:
NAME         HOST/PORT                                  SERVICES       PORT   TERMINATION        WILDCARD
dsma-route   dsma-route-default.apps.<cluster-domain>   dsma-service   8080   passthrough/None   None
The HOST/PORT value is the fully qualified domain name for the external endpoint.
7.10 Final Validation
Run the following commands to confirm all resources are in the expected state:
oc get pods | grep ^dsma
oc get service dsma-service
oc get route dsma-route
Expected pod output:
dsma-1-build 0/1 Completed 0 <age>
dsma-<pod-id> 1/1 Running 0 <age>
Expected service output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dsma-service ClusterIP <cluster-ip> <none> 443/TCP <age>
Expected route output:
NAME HOST/PORT SERVICES PORT TERMINATION WILDCARD
dsma-route dsma-route-default.apps.<cluster-domain> dsma-service 8080 passthrough/None None
The Fortanix DSM Accelerator Webservice is now accessible externally over HTTPS on port 443 at the hostname shown in the HOST/PORT column of the route.
8.0 Known Issues
| ISSUE | SYMPTOM | CAUSE | RESOLUTION |
|---|---|---|---|
| SSL certificate verification fails at startup | Pod logs show SSL certificate verification errors when connecting to DSM | The container started without a system certificate store | Repeat Section 7.1: Prepare the Working Directory and Extract the Tarball. Ensure the etc/ssl/certs/ directory is populated in the build context before building |
| Pod enters CrashLoopBackOff | oc logs $POD shows the container failing to start /app/dsma | The OCI layer containing the application binary was not extracted. This happens if the extraction loop exits early or if the layer order in manifest.json is wrong | Confirm app/dsma exists in the build context. Re-run the extraction script and check manifest.json for the correct layer order |
| Build fails with error: "apt-packages.sh not found" | In the build log, STEP 4/8 fails with a missing-file error | The Dockerfile has RUN before COPY, so the script is not in the container filesystem when the RUN step executes | Edit the Dockerfile so COPY appears before RUN |
| TLS warning at startup | In the pod log: WARN dsma - Using self-signed TLS certificate | No custom TLS certificate and key were provided. DSM Accelerator Webservice falls back to a self-signed certificate | This is a warning, not a failure. The pod will run, and connections will work |
9.0 OCI vs Docker Save Format
If a future version of the Fortanix DSM Accelerator Webservice is distributed in Docker Save format, the layer extraction approach in Section 7.1: Prepare the Working Directory and Extract the Tarball is not required.
Instead, use the following commands:
mkdir dsma_tmp
tar xf <DSMA_TARBALL> -C dsma_tmp
find dsma_tmp -name "*.tar" -exec tar -xf {} -C dsma \;
rm -rf dsma_tmp
Run the following command to determine the tarball format:
tar tf <DSMA_TARBALL> | head -5
OCI format: Output includes blobs/sha256/...
Docker Save format: Output includes <hash>/layer.tar.
10.0 Update to a New Fortanix DSM Accelerator Webservice Version
Perform the following steps:
Repeat the steps from Section 7.1: Prepare the Working Directory and Extract the Tarball to Section 7.4: Start the Image Build, using the new tarball.
After the build completes, run the following command to verify the new image:
oc describe is dsma
Copy the new image digest and update the image field in dsma_dep.yaml.
Run the following command to apply the updated deployment:
oc apply -f dsma_dep.yaml
The deployment automatically rolls out the updated image.
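The manual digest edit can also be scripted. The following is a sketch only: the stub manifest line and the NEWDIGEST value are placeholders for illustration, and in a real run the reference would come from oc get istag as shown in Section 7.5.

```shell
# Stub standing in for the real dsma_dep.yaml from Section 7.6 (illustration only).
printf "          image: 'old-reference'\n" > dsma_dep.yaml

# Placeholder value; in practice capture it from the image stream:
#   IMAGE_REF=$(oc get istag dsma:latest -o jsonpath='{.image.dockerImageReference}')
IMAGE_REF="image-registry.openshift-image-registry.svc:5000/default/dsma@sha256:NEWDIGEST"

# Rewrite the image: line in place, preserving indentation (GNU sed syntax).
sed -i "s|image: .*|image: '$IMAGE_REF'|" dsma_dep.yaml
grep "image:" dsma_dep.yaml
```

After the rewrite, oc apply -f dsma_dep.yaml picks up the new digest as described above.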