DSM Accelerator Webservice Deployment on Red Hat OpenShift


1.0 Introduction

This article describes the procedure to deploy Fortanix Data Security Manager (DSM) Accelerator Webservice on a Red Hat OpenShift cluster using a binary build.

The Fortanix DSM Accelerator Webservice tarball, provided by Fortanix, contains a pre-built Open Container Initiative (OCI) container image. The steps described in this article explain how to extract the image, build it within OpenShift, deploy it as a pod, and expose it externally using a route.

2.0 Product Tested Version

The deployment procedure has been validated on OpenShift with Fortanix DSM Accelerator Webservice version 1.32.4594.

The same steps apply to other versions unless otherwise specified in the release notes.

3.0 Prerequisites

Before you begin, ensure the following requirements are met:

  • The oc CLI is installed, and you are logged in to the target OpenShift cluster.

  • You have the necessary permissions to create BuildConfigs, ImageStreams, Deployments, Services, and Routes in the target namespace.

  • The Fortanix DSM Accelerator Webservice tarball (dsma_<VERSION>.tar) has been downloaded from the Fortanix support portal and is available on the machine where the commands will be executed.

  • The jq utility is installed on the machine (used for parsing the image manifest).

    • Run the following command to verify installation:

      jq --version 
    • Run the following command to install on Ubuntu/Debian:

      apt-get install -y jq 
  • The target OpenShift namespace already exists. The examples in this article use the default namespace; replace it with your namespace as needed.
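The tooling checks above can be scripted. The following sketch reports whether each required command is available; `require` is a local helper name for this article, not part of `oc` or any Fortanix tooling.

```shell
#!/bin/sh
# Report whether the CLI tools required by this procedure are installed.
# 'require' is a local helper, not part of oc or the Fortanix tooling.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: OK"
  else
    echo "$1: MISSING - install it before proceeding"
  fi
}

require oc
require jq
require tar
```

Run the script before starting; any line reporting MISSING must be resolved first.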

4.0 Configuration Reference

The following variables are used throughout the deployment procedure. Replace them with environment-specific values before executing any commands.

  • DSMA_TARBALL: Name of the downloaded tarball. For example, dsma_1.32.4594.tar.

  • DSMA_VERSION: Version string. For example, 1.32.4594.

  • DSM_ENDPOINT: HTTPS URL of your Fortanix DSM cluster. For example, https://eu.smartkey.io/.

  • PORT: Port on which the DSM Accelerator Webservice listens. The default port is 8080.

  • CACHE_TTL: Cache time-to-live in seconds. The default value is 14400.

  • OC_NAMESPACE: Target OpenShift namespace. For example, default.

  • WORKING_DIR: Local working directory path.
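For copy-paste convenience, the placeholders can be exported as shell variables once and reused throughout the session. The values below are examples only; substitute your own, and wherever a later command shows a <PLACEHOLDER>, use the matching variable.

```shell
#!/bin/sh
# Example values only; replace each one with your environment's settings.
export DSMA_TARBALL=dsma_1.32.4594.tar
export DSMA_VERSION=1.32.4594
export DSM_ENDPOINT=https://eu.smartkey.io/
export PORT=8080
export CACHE_TTL=14400
export OC_NAMESPACE=default
export WORKING_DIR="$HOME/dsma-deploy"

echo "Deploying DSMA $DSMA_VERSION to namespace $OC_NAMESPACE"
```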

5.0 Log In to OpenShift

Before running any oc commands, log in to the OpenShift cluster using the following command:

oc login <CLUSTER_API_URL> -u <USERNAME> -p <PASSWORD>

Here, provide the cluster API URL, username, and password for your environment.

If the cluster uses a self-signed or internally signed certificate, the CLI prompts you to confirm an insecure connection. Type yes to proceed.

Example output:

The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): yes
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.
You have access to 81 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Welcome! See 'oc help' to get started.

The insecure TLS warning is expected in lab and partner environments where the cluster certificate is not signed by a public CA.

In production environments, provide the correct CA bundle using the --certificate-authority option instead of accepting the insecure prompt.

  • Run the following command to confirm the active project matches your target namespace before proceeding:

    oc project
  • If a different namespace is shown, run the following command to switch to the correct one:

    oc project <OC_NAMESPACE>

6.0 Tarball Format and Extraction

From version 1.32 and onward, Fortanix DSM Accelerator Webservice tarballs are packaged in OCI image format.

In OCI format, image layers are stored as binary blob files under a blobs/sha256/ directory within the tarball. These blobs do not have a file extension. Earlier versions of this deployment article referenced a Docker Save format tarball, where image layers are stored as files named layer.tar and can be located using the following command:

find dsma_tmp -name "*.tar"

This command returns no results for OCI format tarballs, because the layer blobs do not have a .tar extension. This behavior is expected and not an error.

Run the following command to determine the format of a given tarball, extract it, and check its contents:

tar tf dsma_<VERSION>.tar | head -5
  • If the output shows blobs/sha256/ paths, the tarball is in OCI format.

  • If the output shows paths such as <hash>/layer.tar, the tarball is in Docker Save format.

This article uses the OCI extraction method. The extraction script reads layer order from manifest.json, making it version-independent.
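The format check can be wrapped in a small helper; `detect_tarball_format` is a name used for this article, not a Fortanix tool.

```shell
#!/bin/sh
# Detect whether a DSMA tarball is in OCI or Docker Save format by
# inspecting its top-level paths (see Section 6.0).
detect_tarball_format() {
  if tar tf "$1" | grep -q '^blobs/sha256/'; then
    echo oci
  elif tar tf "$1" | grep -q '/layer\.tar$'; then
    echo docker-save
  else
    echo unknown
  fi
}
# Usage: detect_tarball_format dsma_1.32.4594.tar
```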

7.0 Deployment Steps

This section describes the sequential steps for deploying the Fortanix DSM Accelerator Webservice on Red Hat OpenShift.

7.1 Prepare the Working Directory and Extract the Tarball

Perform the following steps:

  1. Run the following commands to create a working directory and navigate into it. All subsequent steps assume you are working from this directory.

    mkdir <WORKING_DIR>
    cd <WORKING_DIR>
  2. Run the following commands to place or move the Fortanix DSM Accelerator Webservice tarball into this directory, then extract the OCI image to a temporary location:

    mkdir /tmp/oci_extract
    tar xf <DSMA_TARBALL> -C /tmp/oci_extract
  3. Run the following command to verify that the extraction produced an OCI image structure:

    ls /tmp/oci_extract

    Expected output:

    blobs
    index.json
    manifest.json
    oci-layout
    repositories
  4. Run the following commands to create the dsma directory, which serves as the build context for the OpenShift binary build:

    mkdir dsma
    cd dsma
  5. Extract all image layers into the dsma directory, in the order specified by the manifest. The script below reads the manifest automatically; therefore, no layer hashes need to be copied manually.

    BLOBS=/tmp/oci_extract/blobs/sha256
    jq -r '.[0].Layers[]' /tmp/oci_extract/manifest.json \
      | sed 's|blobs/sha256/||' \
      | while read layer; do
          tar xf "$BLOBS/$layer" --exclude='.wh..wh.' 2>/dev/null || true
        done

    The extraction may take a minute, depending on disk speed.

    When it completes:

    • Run the following command to verify the application binaries are present:

      ls app/

      Expected output:

      dsma
      dsma-lambda

      If app/dsma is missing, the layer containing the application binary was not extracted correctly. Re-run the extraction script and check the manifest.json for the correct layer order.

    • Run the following command to verify the SSL certificates:

      ls etc/ssl/certs/ | head -5

      Expected output (certificate filenames will vary by build):

      002c0b4f.0
      0179095f.0
      02265526.0
      062cdee6.0
      064e0aa9.0

      If etc/ssl/certs/ is empty or missing, the package installation layer did not extract. This causes SSL certificate verification failures at runtime.
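
The two manual checks above (application binaries and certificate store) can be combined into one script; `check_build_context` is a local helper name, not part of the product.

```shell
#!/bin/sh
# Sanity-check an extracted dsma build context before starting the build.
check_build_context() {
  dir=${1:-.}
  status=0
  [ -f "$dir/app/dsma" ] || { echo "MISSING: app/dsma binary"; status=1; }
  [ -f "$dir/tmp/apt-packages.sh" ] || { echo "MISSING: tmp/apt-packages.sh"; status=1; }
  # The system certificate store must contain at least one file.
  if ! ls "$dir/etc/ssl/certs/" 2>/dev/null | grep -q .; then
    echo "MISSING: etc/ssl/certs/ is empty or absent"; status=1
  fi
  [ "$status" -eq 0 ] && echo "Build context OK"
  return $status
}
# Usage, from inside the dsma directory:
#   check_build_context .
```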

7.2 Create the Dockerfile and .dockerignore

The following two files must be created at the root of the dsma directory:

  • Dockerfile

  • .dockerignore

Perform the following steps:

  1. Create a Dockerfile with the following configurations:

    FROM ubuntu:24.04
    COPY . /
    RUN sh /tmp/apt-packages.sh
    EXPOSE 8080
    ENTRYPOINT ["/app/dsma"]

    Ensure COPY appears before RUN. Otherwise, the apt-packages.sh script will not exist in the container filesystem when the RUN step executes, and the package installation will fail.

  2. Create .dockerignore with the following configurations:

    Dockerfile
    .dockerignore

    This prevents the Dockerfile and .dockerignore files from being copied into the container image by the COPY step.

  3. Run the following command to confirm both files exist at the root of the dsma directory:

    ls -a

    Expected output:

    .dockerignore
    Dockerfile
    app
    etc
    tmp
    usr
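
Both files can also be written non-interactively, for example with heredocs. This is a sketch; it uses the plain `RUN sh /tmp/apt-packages.sh` and `EXPOSE 8080` forms, and must be run from the root of the dsma build context.

```shell
#!/bin/sh
# Write the Dockerfile and .dockerignore into the current directory
# (run from the root of the dsma build context).
cat > Dockerfile <<'EOF'
FROM ubuntu:24.04
COPY . /
RUN sh /tmp/apt-packages.sh
EXPOSE 8080
ENTRYPOINT ["/app/dsma"]
EOF

cat > .dockerignore <<'EOF'
Dockerfile
.dockerignore
EOF
```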

7.3 Create the OpenShift Build Configuration

From the dsma directory, create a binary build configuration in OpenShift.

Set the environment variables to match your deployment target.

oc new-build --binary --name=dsma \
  --env=FORTANIX_API_ENDPOINT=<DSM_ENDPOINT> \
  --env=PORT=<PORT> \
  --env=CACHE_TTL=<CACHE_TTL>

Expected output:

* A Docker build using binary input will be created
  * The resulting image will be pushed to image stream tag "dsma:latest"
  * A binary build was created, use 'oc start-build --from-dir' to trigger a new build
--> Creating resources with label build=dsma ...
    imagestream.image.openshift.io "dsma" created
    buildconfig.build.openshift.io "dsma" created
--> Success

7.4 Start the Image Build

Start the build and point it at the current dsma directory.

The --follow flag streams the build log to the terminal. Wait for the build to complete before proceeding.

oc start-build dsma --from-dir=. --follow

The directory contents are uploaded to the OpenShift build pod. OpenShift then executes the Dockerfile steps inside the cluster and pushes the resulting image to the internal registry.

Expected output (image digest and build tag will differ per environment):

Uploading directory "." as binary input for the build ...
Uploading finished
build.build.openshift.io/dsma-1 started
Receiving source from STDIN as archive ...
STEP 1/8: FROM ubuntu:24.04
STEP 2/8: ENV "CACHE_TTL"="14400" "FORTANIX_API_ENDPOINT"="<DSM_ENDPOINT>" "PORT"="8080"
STEP 3/8: COPY . /
STEP 4/8: RUN sh /tmp/apt-packages.sh
STEP 5/8: EXPOSE 8080
STEP 6/8: ENTRYPOINT ["/app/dsma"]
STEP 7/8: ENV "OPENSHIFT_BUILD_NAME"="dsma-1" "OPENSHIFT_BUILD_NAMESPACE"="<OC_NAMESPACE>"
STEP 8/8: LABEL "io.openshift.build.name"="dsma-1" "io.openshift.build.namespace"="<OC_NAMESPACE>"
Successfully pushed image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>
Push successful
  • If the build fails at STEP 4/8 due to a missing apt-packages.sh, verify the Dockerfile order: COPY must appear before RUN.

  • If it fails with a network error during apt-get, the SSL certificate store from the OCI layers was not extracted into the dsma directory. Repeat the extraction in Section 7.1: Prepare the Working Directory and Extract the Tarball.

7.5 Verify the Image Stream

Run the following command to confirm the image was pushed successfully:

oc describe is dsma

Note the image digest. The digest is required for the deployment manifest in the next step.

Expected output:

Name:             dsma
Namespace:        <OC_NAMESPACE>
Created:          2 minutes ago
Labels:           build=dsma
Annotations:      openshift.io/generated-by=OpenShiftNewBuild
Image Repository: image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma
Image Lookup:     local=false
Unique Images:    1
Tags:             1
latest
  no spec tag
  * image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>
        14 seconds ago

Run the following command to extract the digest into a variable for later use:

IMAGE_REF=$(oc get istag dsma:latest -o jsonpath='{.image.dockerImageReference}')
echo $IMAGE_REF

Example output:

image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>

7.6 Deploy the Application

Perform the following steps:

  1. Create the deployment manifest file dsma_dep.yaml in the working directory. Update the image field with the value obtained from oc describe is dsma in Section 7.5: Verify the Image Stream.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: <OC_NAMESPACE>
      name: dsma
      annotations:
        image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"dsma:latest"},"fieldPath":"spec.template.spec.containers[?(@.name==\"dsma\")].image"}]'
    spec:
      selector:
        matchLabels:
          app: dsma
      replicas: 1
      template:
        metadata:
          labels:
            app: dsma
        spec:
          containers:
            - name: dsma
              image: 'image-registry.openshift-image-registry.svc:5000/<OC_NAMESPACE>/dsma@sha256:<IMAGE_DIGEST>'
              ports:
                - containerPort: 8080
                  protocol: TCP
              env:
                - name: FORTANIX_API_ENDPOINT
                  value: '<DSM_ENDPOINT>'
                - name: PORT
                  value: '<PORT>'
                - name: CACHE_TTL
                  value: '<CACHE_TTL>'
          imagePullSecrets: []
      strategy:
        type: Recreate
      paused: false
    
  2. Run the following command to apply the manifest file:

    oc apply -f dsma_dep.yaml

    Expected output:

    deployment.apps/dsma created

    NOTE

    Do not run both oc new-app and oc apply -f for the same deployment. Use one or the other. This procedure uses oc apply with the manifest file.
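
Rather than pasting the digest by hand, the image line can be patched from the IMAGE_REF variable captured in Section 7.5. The following is a sketch; `set_manifest_image` is a helper name for this article, and it assumes GNU sed and a single image: line, as in the manifest above.

```shell
#!/bin/sh
# Replace the 'image:' line of a deployment manifest with a new
# image reference (e.g. the $IMAGE_REF captured in Section 7.5).
set_manifest_image() {
  manifest=$1
  image_ref=$2
  sed -i "s|image: .*|image: '${image_ref}'|" "$manifest"
}
# Usage:
#   set_manifest_image dsma_dep.yaml "$IMAGE_REF"
```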

7.7 Verify the Running Pod

Perform the following steps:

  1. Run the following command to verify that the pod is in Running status:

    oc get pods | grep ^dsma

    It may take 30 to 60 seconds for the pod to start after the deployment is created.

    Expected output:

    dsma-1-build         0/1     Completed   0          10m
    dsma-<pod-id>        1/1     Running     0          <age>

    The build pod status Completed is expected and can be ignored. The application pod should show 1/1 Running.

  2. Run the following commands to capture the name of the running application pod and confirm it is listening on port 8080:

    POD=$(oc get pods -l app=dsma --no-headers | grep Running | awk '{print $1}')
    oc get pod $POD -o jsonpath='{.spec.containers[*].ports[*]}'

    Expected output:

    {"containerPort":8080,"protocol":"TCP"}
  3. Run the following command to check the application startup logs to confirm a successful connection to Fortanix DSM:

    oc logs $POD

    Expected output:

    INFO dsma - ================= DSMA SERVER START UP =================
    INFO dsma - Listening on port: 8080
    INFO dsma - CA File path: None
    INFO dsma - Connection will retry in: 30000 millis
    INFO dsma - Availability set to false
    INFO dsma - TTL used by the Valentino cache: 14400 sec
    INFO dsma - =============== SETTING UP DSMA SERVER ===============
    INFO dsma::server - Initialized DsmAServer
    Proxy config: None
    INFO dsma - Successfully connected to DSM at "<DSM_ENDPOINT>"
    INFO dsma::api - Building router
    INFO dsma - =============== DSMA SERVER SETUP COMPLETE ===============
    WARN dsma - Using self-signed TLS certificate. In order to avoid this, use the argument `--tls-files`
    

    The line "Successfully connected to DSM at <DSM_ENDPOINT>" confirms that the pod is operational and communicating with the DSM backend.

    If the pod is in CrashLoopBackOff, check oc logs $POD for the error. Common causes are covered in Section 8.0: Known Issues.
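
Steps 1 to 3 can be condensed into one check that finds the running pod and greps its log for the success line. This is a sketch; `dsma_health` is a local helper name, and it relies only on the oc subcommands already used above.

```shell
#!/bin/sh
# Find the running application pod and confirm it reached DSM.
dsma_health() {
  pod=$(oc get pods -l app=dsma --no-headers | awk '$3 == "Running" {print $1; exit}')
  [ -n "$pod" ] || { echo "no running dsma pod"; return 1; }
  if oc logs "$pod" | grep -q 'Successfully connected to DSM'; then
    echo "dsma pod $pod is healthy"
  else
    echo "dsma pod $pod has not connected to DSM yet"
    return 1
  fi
}
```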

7.8 Create the Service

Perform the following steps:

  1. Expose the running pod as a service named dsma-service. The service accepts incoming traffic on port 443 and forwards it to the container on port 8080.

    oc expose pod $POD --port=443 --target-port=8080 --name=dsma-service

    Expected output:

    service/dsma-service exposed
  2. Run the following command to verify the service was created:

    oc get service dsma-service

    Expected output:

    NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    dsma-service   ClusterIP   <cluster-ip>    <none>        443/TCP   <age>

7.9 Create the External Route

Perform the following steps:

  1. Create the route manifest file route.yaml in the working directory.

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: dsma-route
      namespace: <OC_NAMESPACE>
    spec:
      path: ''
      to:
        name: dsma-service
        weight: 100
        kind: Service
      host: ''
      tls:
        insecureEdgeTerminationPolicy: None
        termination: passthrough
      port:
        targetPort: 8080
      alternateBackends: []
    

    The route uses TLS passthrough termination. OpenShift routes the connection directly to the DSM Accelerator Webservice pod without terminating or re-encrypting the TLS session. This is the correct mode because the DSM Accelerator Webservice binary handles TLS itself.

  2. Run the following command to apply the route:

    oc apply -f route.yaml

    Expected output:

    route.route.openshift.io/dsma-route created
  3. Run the following command to retrieve the assigned hostname:

    oc get route dsma-route

    Expected output:

    NAME         HOST/PORT                                    SERVICES       PORT   TERMINATION        WILDCARD
    dsma-route   dsma-route-default.apps.<cluster-domain>    dsma-service   8080   passthrough/None   None

    The HOST/PORT value is the fully qualified domain name for the external endpoint.

7.10 Final Validation

Run the following commands to confirm all resources are in the expected state:

oc get pods | grep ^dsma
oc get service dsma-service
oc get route dsma-route

Expected pod output:

dsma-1-build         0/1     Completed   0          <age>
dsma-<pod-id>        1/1     Running     0          <age>

Expected service output:

NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
dsma-service   ClusterIP   <cluster-ip>   <none>        443/TCP   <age>

Expected route output:

NAME         HOST/PORT                                    SERVICES       PORT   TERMINATION        WILDCARD
dsma-route   dsma-route-default.apps.<cluster-domain>    dsma-service   8080   passthrough/None   None

The Fortanix DSM Accelerator Webservice service is now accessible externally over HTTPS on port 443 at the hostname shown in the HOST/PORT column of the route.
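
As a final smoke test, the external endpoint can be probed with curl. No specific API path is assumed here; -k tolerates the self-signed certificate noted in Section 7.7, and `check_route` is a local helper name. Pass the hostname reported by oc get route.

```shell
#!/bin/sh
# Probe the route over TLS and print the first HTTP status line.
check_route() {
  # -k: accept the self-signed certificate; -I: HEAD request; -s: quiet.
  curl -kIs "https://$1/" | head -n 1
}
# Usage:
#   check_route dsma-route-default.apps.<cluster-domain>
```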

8.0 Known Issues

Issue: SSL certificate verification fails at startup.

  • Symptom: The pod log shows the following error:

    ERROR dsma - Got error: error trying to connect: certificate verify failed: unable to get local issuer certificate

  • Cause: The etc/ssl/certs/ directory from the OCI image was not included in the build context, so the container started without a system certificate store.

  • Resolution: Repeat Section 7.1: Prepare the Working Directory and Extract the Tarball. Ensure the jq extraction loop completes without error and that etc/ssl/certs/ contains .pem and .0 files before running oc start-build.

Issue: Executable file /app/dsma not found.

  • Symptom: The pod enters CreateContainerError, and the pod events show that the binary is missing.

  • Cause: The OCI layer containing the application binary was not extracted. This happens if the extraction loop exits early or if the dsma directory was empty when the build was started.

  • Resolution: Confirm that app/dsma exists in the dsma directory before starting the build. Re-run the extraction if needed.

Issue: apt-packages.sh not found during build.

  • Symptom: The build log shows:

    sh: /tmp/apt-packages.sh: No such file or directory

  • Cause: The Dockerfile has RUN before COPY, so the script does not exist in the container filesystem when the RUN step executes.

  • Resolution: Edit the Dockerfile so COPY . / appears before the RUN line and restart the build.

Issue: TLS warning at startup.

  • Symptom: The pod log shows:

    WARN dsma - Using self-signed TLS certificate. In order to avoid this, use the argument `--tls-files`

  • Cause: No custom TLS certificate and key were provided, so the DSM Accelerator Webservice falls back to a self-signed certificate.

  • Resolution: This is a warning, not a failure. The pod runs and connections work. To use a trusted certificate, provide the certificate and key files and set the --tls-files argument in the DSM Accelerator Webservice configuration.

9.0 OCI vs Docker Save Format

If a future version of the Fortanix DSM Accelerator Webservice is distributed in Docker Save format, the layer extraction approach in Section 7.1: Prepare the Working Directory and Extract the Tarball is not required.

Instead, use the following commands:

mkdir dsma_tmp dsma
tar xf <DSMA_TARBALL> -C dsma_tmp
find dsma_tmp -name "*.tar" -exec tar -xf {} -C dsma \;
rm -rf dsma_tmp

Run the following command to determine the tarball format:

tar tf <DSMA_TARBALL> | head -5
  • OCI format: Output includes blobs/sha256/...

  • Docker Save format: Output includes <hash>/layer.tar.
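
Both formats can be handled by one helper that detects the layout and extracts accordingly. This is a sketch; `extract_dsma` is a name used for this article, and the OCI branch assumes jq is installed, as in Section 7.1.

```shell
#!/bin/sh
# Extract a DSMA image tarball into a build-context directory,
# handling both OCI and Docker Save layouts.
extract_dsma() {
  tarball=$1
  out=$2
  tmp=$(mktemp -d)
  mkdir -p "$out"
  tar xf "$tarball" -C "$tmp"
  if [ -d "$tmp/blobs/sha256" ]; then
    # OCI: extract layer blobs in the order listed by the manifest.
    jq -r '.[0].Layers[]' "$tmp/manifest.json" \
      | sed 's|blobs/sha256/||' \
      | while read -r layer; do
          tar xf "$tmp/blobs/sha256/$layer" -C "$out" 2>/dev/null || true
        done
  else
    # Docker Save: each layer is a <hash>/layer.tar file.
    find "$tmp" -name layer.tar -exec tar -xf {} -C "$out" \;
  fi
  rm -rf "$tmp"
}
# Usage: extract_dsma dsma_1.32.4594.tar dsma
```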

10.0 Update to a New Fortanix DSM Accelerator Webservice Version

Perform the following steps:

  1. Repeat the steps from Section 7.1: Prepare the Working Directory and Extract the Tarball to Section 7.4: Start the Image Build, using the new tarball.

  2. After the build completes, run the following command to verify the new image:

    oc describe is dsma
  3. Copy the new image digest and update the image field in dsma_dep.yaml.

  4. Run the following command to apply the updated deployment:

    oc apply -f dsma_dep.yaml

    The deployment automatically rolls out the updated image.
