Introduction
In this example, we demonstrate a TensorFlow (TF) model running inside an enclave using Fortanix Confidential Computing Manager (CCM). For demonstration purposes, we use the object detection model from this URL. This is a pre-trained TF model capable of classifying basic objects in an image, such as a cat, dog, table, person, chair, or kite.
Object Detection TF Model
Fetch a Bearer Token
Fetch the bearer token using the credentials of the user you signed up with.
BEARER_TOKEN=$(curl -s -u $username:$password -X POST https://em.fortanix.com/v1/sys/auth | jq -r .access_token)
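Every subsequent call depends on this token, so it is worth failing fast when authentication does not succeed. A minimal guard, assuming `jq` is installed (note that `jq -r` prints the literal string `null` when `.access_token` is missing from the response):

```shell
# Abort if the token is empty or the literal "null"
# (jq prints "null" when .access_token is absent from the response).
require_token() {
    if [ -z "$1" ] || [ "$1" = "null" ]; then
        echo "Authentication failed; check username/password" >&2
        return 1
    fi
}
# Usage after the curl above:
# require_token "$BEARER_TOKEN" || exit 1
```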
Get all Accounts
After fetching the bearer token, select the account to work in. First list all accounts with a GET request, then select one by its account_id.
curl -H 'Authorization: Bearer <Bearer Token>' -X GET https://em.fortanix.com/v1/accounts
Select the Account
Note the account_id of the account you want to select.
curl -H 'Authorization: Bearer <Bearer Token>' -X POST https://em.fortanix.com/v1/accounts/select_account/<account-id>
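The two calls above can be combined with `jq` to look up an account by name. This is a sketch that assumes the GET /v1/accounts response wraps the accounts in an `items` array; the account name "My Account" is only a placeholder:

```shell
# Extract the account_id of the account with the given name from the
# JSON returned by GET /v1/accounts (assumed "items" array layout).
account_id_by_name() {
    jq -r --arg name "$2" '.items[] | select(.name == $name) | .account_id' "$1"
}
# Usage:
# curl -s -H "Authorization: Bearer $BEARER_TOKEN" \
#     https://em.fortanix.com/v1/accounts > accounts.json
# ACCOUNT_ID=$(account_id_by_name accounts.json "My Account")
# curl -s -H "Authorization: Bearer $BEARER_TOKEN" \
#     -X POST "https://em.fortanix.com/v1/accounts/select_account/$ACCOUNT_ID"
```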
Create an Application
Create a TF Object Detection Model application using the configuration provided in the app.json file below.
NOTE: fortanix/tensorflow-serving:1.9.0-faster_rcnn_resnet_serving-sgx is the converted EnclaveOS image, which can be run directly on the enrolled node.
{
"name": "Object Detection TF Model",
"description": "Faster Rcnn Resnet Object Detection Model",
"input_image_name": "fortanix/tensorflow-serving",
"output_image_name": "<repository_path_where_output_image_will_be_stored>",
"isvprodid": 1,
"isvsvn": 1,
"mem_size": 4096,
"threads": 128
}
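Writing the configuration above to app.json and submitting it can be sketched as follows. The `/v1/apps` endpoint is an assumption to verify against your CCM API reference; the repository path placeholder stays as-is:

```shell
# Write the application definition shown above to app.json.
cat > app.json <<'EOF'
{
  "name": "Object Detection TF Model",
  "description": "Faster Rcnn Resnet Object Detection Model",
  "input_image_name": "fortanix/tensorflow-serving",
  "output_image_name": "<repository_path_where_output_image_will_be_stored>",
  "isvprodid": 1,
  "isvsvn": 1,
  "mem_size": 4096,
  "threads": 128
}
EOF
# Sanity-check that the file is valid JSON before submitting.
jq -e .name app.json >/dev/null
# Submit it (assumed endpoint; substitute your bearer token):
# curl -s -H 'Content-Type: application/json' \
#      -H "Authorization: Bearer $BEARER_TOKEN" \
#      -d @app.json -X POST https://em.fortanix.com/v1/apps
```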
Fetch the Domain Whitelisting Tasks
curl -s -H "Authorization: Bearer <Bearer Token>" -X GET https://em.fortanix.com/v1/tasks?task_type=DOMAIN_WHITELIST > all_domain_tasks.json
All the fetched tasks are stored in the all_domain_tasks.json file. Note the task_id of the task to approve in the next step.
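The task_id can be pulled out of all_domain_tasks.json with `jq`. The filter below is a sketch that assumes the tasks are returned in an `items` array with `status` and `task_id` fields, and that pending tasks carry the status INPROGRESS:

```shell
# Print the IDs of all pending tasks from a task-list JSON file.
# Field names ("items", "status", "task_id") and the INPROGRESS status
# value are assumptions to verify against your CCM API reference.
pending_task_ids() {
    jq -r '.items[] | select(.status == "INPROGRESS") | .task_id' "$1"
}
# Usage: pending_task_ids all_domain_tasks.json
```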
Approve a Task
Among the tasks fetched in the previous step, approve the application-specific task using the task_id.
curl -s -H 'Content-Type: application/json' -d '{"status":"APPROVED"}' -H "Authorization: Bearer <Bearer Token>" -X PATCH https://em.fortanix.com/v1/tasks/<task_id>
Create an Image
Create an image of the application.
curl -s -H 'Content-Type: application/json' -d @build.json -H "Authorization: Bearer <Bearer token>" -X POST https://em.fortanix.com/v1/builds/convert-app
The build.json file is shown below.
{
"app_id": "<app_id>",
"input_docker_version": "1.9.0-faster_rcnn_resnet_serving",
"output_docker_version": "<output_image_version_to_be_provided_by_user>",
"outputAuthConfig": {
"username": "<username>",
"password": "<password>"
}
}
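Instead of hand-editing, build.json can be generated from shell variables with a heredoc. All the values below are placeholders; substitute your own app_id, output version, and registry credentials:

```shell
# Generate build.json from shell variables. All values are placeholders.
APP_ID="app-1234"
OUTPUT_VERSION="1.0"
REGISTRY_USER="myuser"
REGISTRY_PASS="mypass"
cat > build.json <<EOF
{
  "app_id": "$APP_ID",
  "input_docker_version": "1.9.0-faster_rcnn_resnet_serving",
  "output_docker_version": "$OUTPUT_VERSION",
  "outputAuthConfig": {
    "username": "$REGISTRY_USER",
    "password": "$REGISTRY_PASS"
  }
}
EOF
```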
Fetch all the Image Whitelist Tasks
curl -s -H "Authorization: Bearer <Bearer token>" -X GET https://em.fortanix.com/v1/tasks?task_type=BUILD_WHITELIST > all_build_tasks.json
All the image whitelist tasks are stored in the all_build_tasks.json file. Note the task_id of the image whitelist task to approve in the next step.
Approve the Image Whitelist Task
curl -s -H 'Content-Type: application/json' -d '{"status":"APPROVED"}' -H "Authorization: Bearer <Bearer token>" -X PATCH https://em.fortanix.com/v1/tasks/<task_id>
The image is created and whitelisted.
Run the Application
Run the following command on a machine running the Node Agent to start the application.
docker run -d -it --device /dev/isgx:/dev/isgx --device /dev/gsgx:/dev/gsgx -v /var/run/aesmd/aesm.socket:/var/run/aesmd/aesm.socket -e NODE_AGENT_BASE_URL=http://<node-agent-ip>:9092/v1/ --network=host <converted-image-name>
Where,
- <node-agent-ip> is the IP address of the compute node registered with Fortanix CCM.
- 9092 is the port on which the Node Agent listens.
- <converted-image-name> is the converted app image, listed under the Image Name column in the Images tab.
NOTE:
- Replace the Node IP, port, and converted image name in the command above with your own values; the values shown are only samples.
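Before running inference, you can confirm that the model server inside the enclave is up. Recent TF Serving releases expose a model status endpoint on the REST port (8501 here); whether your serving version supports it is an assumption to verify. A healthy model reports state AVAILABLE:

```shell
# Extract the first model version's state from a TF Serving status
# response, e.g. AVAILABLE when the model is serving.
model_state() {
    echo "$1" | jq -r '.model_version_status[0].state'
}
# Usage against a running node (<node-ip> is a placeholder):
# STATUS=$(curl -s "http://<node-ip>:8501/v1/models/faster_rcnn_resnet")
# model_state "$STATUS"
```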
Run Inference on the TF Object Detection Model Inside the Enclave
Run the following script to perform inference on the model using input images. The script clones https://github.com/fpaupier/tensorflow-serving_sidecar.git and uses tensorflow-serving_sidecar/object_detection/test_images/image1.jpg for inference. The script can easily be modified to use your own images.
Drop the IP argument when the model is running on the same machine as this script.
#!/bin/bash
SERVER_IP=localhost
if [ ! -z "$1" ]; then
SERVER_IP=$1
else
echo "############################ ALERT ##################################"
echo "[Warning]: No Server IP is provided, using localhost as default IP"
echo "############################ ALERT ##################################"
fi
set -x
# Source: https://github.com/fpaupier/tensorflow-serving_sidecar/pull/6/files
# Clone tensorflow-serving_sidecar repo
if [ ! -d "tensorflow-serving_sidecar" ]; then
git clone https://github.com/fpaupier/tensorflow-serving_sidecar.git
fi
export VOLUME_PATH=$PWD/tensorflow-serving_sidecar
MODEL_NAME="faster_rcnn_resnet"
export SERVER_URL="http://$SERVER_IP:8501/v1/models/$MODEL_NAME:predict"
# relative path to the test image. i.e. config/object_detection/test_images/image1.jpg. The path is relative to VOLUME_PATH
export IMAGE_PATH=config/object_detection/test_images/image1.jpg
# relative path to the output json. i.e. config/object_detection/test_images/output_image1.json. The path is relative to VOLUME_PATH
export OUTPUT_JSON=config/object_detection/test_images/output_image1.json
# Do not change it
export LABEL_MAP=data/labels.pbtxt
# Do not change it
export SAVE_OUTPUT_IMAGE=True
# Uncomment the below line if the current docker image does not work
#export DOCKER_IMAGE_NAME="asia.gcr.io/im-mlpipeline/tensorflow-serving-sidecar-client:latest"
export DOCKER_IMAGE_NAME="fortanix/tensorflow-serving:tensorflow-serving-sidecar-client"
# Run the sidecar client
docker run -it --network host -v ${VOLUME_PATH}:/app/tensorflow-serving_sidecar/config -t ${DOCKER_IMAGE_NAME} client.py --server_url=${SERVER_URL} --image_path=${IMAGE_PATH} --output_json=${OUTPUT_JSON} --save_output_image=${SAVE_OUTPUT_IMAGE} --label_map=${LABEL_MAP}
To run the script:
sudo bash tensorflow-serving-sidecar-client.sh <node_ip_on_which_tf_model_is_running>
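After the script finishes, the detections can be summarized from the output JSON with `jq`. This sketch assumes the standard TF object-detection response layout, a `predictions` array whose first element holds a `detection_scores` list:

```shell
# Count detections whose score exceeds a threshold in a TF Serving
# object-detection response (standard signature assumed).
count_detections() {
    jq "[.predictions[0].detection_scores[] | select(. > $2)] | length" "$1"
}
# Usage:
# count_detections tensorflow-serving_sidecar/object_detection/test_images/output_image1.json 0.5
```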
Sample Output from the Script
sudo bash tensorflow-serving-sidecar-client.sh 20.50.106.45
+ '[' '!' -d tensorflow-serving_sidecar ']'
+ git clone https://github.com/fpaupier/tensorflow-serving_sidecar.git
Cloning into 'tensorflow-serving_sidecar'...
remote: Enumerating objects: 23, done.
remote: Counting objects: 100% (23/23), done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 814 (delta 12), reused 0 (delta 0), pack-reused 791
Receiving objects: 100% (814/814), 116.85 MiB | 4.17 MiB/s, done.
Resolving deltas: 100% (313/313), done.
+ export VOLUME_PATH=/home/fortanix/tf-client-output/tensorflow-serving_sidecar
+ VOLUME_PATH=/home/fortanix/tf-client-output/tensorflow-serving_sidecar
+ MODEL_NAME=faster_rcnn_resnet
+ export SERVER_URL=http://20.50.106.45:8501/v1/models/faster_rcnn_resnet:predict
+ SERVER_URL=http://20.50.106.45:8501/v1/models/faster_rcnn_resnet:predict
+ export IMAGE_PATH=config/object_detection/test_images/image1.jpg
+ IMAGE_PATH=config/object_detection/test_images/image1.jpg
+ export OUTPUT_JSON=config/object_detection/test_images/output_image1.json
+ OUTPUT_JSON=config/object_detection/test_images/output_image1.json
+ export LABEL_MAP=data/labels.pbtxt
+ LABEL_MAP=data/labels.pbtxt
+ export SAVE_OUTPUT_IMAGE=True
+ SAVE_OUTPUT_IMAGE=True
+ export DOCKER_IMAGE_NAME=fortanix/tensorflow-serving:tensorflow-serving-sidecar-client
+ DOCKER_IMAGE_NAME=fortanix/tensorflow-serving:tensorflow-serving-sidecar-client
+ docker run -it --network host -v /home/fortanix/tf-client-output/tensorflow-serving_sidecar:/app/tensorflow-serving_sidecar/config -t fortanix/tensorflow-serving:tensorflow-serving-sidecar-client client.py --server_url=http://20.50.106.45:8501/v1/models/faster_rcnn_resnet:predict --image_path=config/object_detection/test_images/image1.jpg --output_json=config/object_detection/test_images/output_image1.json --save_output_image=True --label_map=data/labels.pbtxt
Pre-processing input file config/object_detection/test_images/image1.jpg...
Pre-processing done!
Making request to http://20.50.106.45:8501/v1/models/faster_rcnn_resnet:predict...
Request returned
Post-processing server response...
Post-processing done!
Saving output to config/object_detection/test_images/output_image1.json
Output saved!
Image saved
NOTE: The output path (config/object_detection/test_images/output_image1.json) in the above output is relative to the filesystem of the tf-serving-sidecar-client Docker container started by the script. On the host, the output image is stored at $PWD/tensorflow-serving_sidecar/object_detection/test_images/output_image1.jpeg.
Sample Outputs
The following are some images produced as output by the trained model. The plain image is the input to the model, and the image with boxes highlighting the different objects is the model's prediction.