Fortanix Developer Sandbox is a simple getting-started package that you can run on-premises or in public clouds that support confidential computing, such as IBM Cloud and Microsoft Azure.
Before we start, let’s understand some basics about the setup. There are two principal elements in the deployment: one is the TensorFlow (TF) model, which we want to protect from outside access, and the other is TensorFlow Serving, a framework for deploying TF models in production. Using the Fortanix Developer Sandbox, we will deploy our TF model with TensorFlow Serving.
How do we secure it using the Fortanix Runtime Encryption (RTE) platform? The answer lies within the platform: it provides a service called Converter, which converts the TF model and TensorFlow Serving into Intel® SGX-enabled secured containers without any code changes to the existing TF model.
For demonstration purposes, we use the object detection model from the link. This is a pre-trained model capable of classifying basic objects in an image, such as a cat, dog, table, person, chair, or kite.
NOTE: Remember that TensorFlow Serving is not the only way to deploy TensorFlow models in production. We chose TensorFlow Serving because it is a well-known framework in the TensorFlow ecosystem.
Create Docker Image of Pre-Trained Model
Set up the TensorFlow Serving sidecar, which provides the scripts used for inference against the model served by TensorFlow Serving:
git clone git@github.com:fpaupier/tensorflow-serving_sidecar.git
cd tensorflow-serving_sidecar
pip install -r requirements.txt
Create Custom TensorFlow Serving Docker Image
- Download the pre-trained model from the link. We picked faster_rcnn_resnet101_coco. Untar it.
- The tricky part here is to rename the saved_model subdirectory to 1 (or some other integer value; this is the versioning convention that TensorFlow Serving expects).
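The rename step above can be sketched as follows. The extracted directory name is an assumption based on the faster_rcnn_resnet101_coco release naming; adjust it to whatever your untarred directory is actually called.

```shell
# Illustrative layout: simulate the untarred model directory
# (replace with your real extracted directory).
mkdir -p faster_rcnn_resnet101_coco_2018_01_28/saved_model

# TensorFlow Serving expects <model_dir>/<version>/saved_model.pb,
# so rename the saved_model subdirectory to a version number such as 1.
mv faster_rcnn_resnet101_coco_2018_01_28/saved_model \
   faster_rcnn_resnet101_coco_2018_01_28/1
```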
docker run -d --name serving_base tensorflow/serving
docker cp <path_to_extracted_model_dir> serving_base:/models/faster_rcnn_resnet
docker commit --change "ENV MODEL_NAME faster_rcnn_resnet" serving_base faster_rcnn_resnet_serving
where faster_rcnn_resnet_serving is the new image name.
docker kill serving_base
docker rm serving_base
For other models, just change the <path_to_extracted_model_dir> in the docker cp command above and replace faster_rcnn_resnet with your pre-trained model's name.
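Before converting the image, you can optionally smoke-test it locally. This is a hedged sketch, not part of the original steps: it assumes Docker is available, that port 8501 (TensorFlow Serving's default REST port) is free, and it uses TensorFlow Serving's model status endpoint.

```shell
# Optional smoke test: run the new image and query the model status endpoint.
# Guarded so it is a no-op on machines without Docker.
if command -v docker >/dev/null 2>&1; then
  docker run -d --rm -p 8501:8501 --name rcnn_smoke faster_rcnn_resnet_serving
  sleep 5
  # A healthy server reports the model version state as AVAILABLE.
  curl -s http://localhost:8501/v1/models/faster_rcnn_resnet
  docker kill rcnn_smoke
fi
```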
So far, we have prepared a TensorFlow Serving Docker image that can serve our selected object detection model. We can now convert this image into an Intel® SGX-capable secured container (contact Fortanix for access to the required infrastructure).
Inferencing the Model
Run the client script from the sidecar repository set up in the Common Steps section above.
python client.py --server_url "http://<server-ip>:8501/v1/models/faster_rcnn_resnet:predict" --image_path "<input-image-name>" --output_json "<output-image-name>" --save_output_image "True" --label_map "$(pwd)/data/labels.pbtxt"
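Under the hood, a client like this talks to TensorFlow Serving's REST predict endpoint. The sketch below is a hypothetical, minimal version of that request construction (it is not the actual client.py code); the b64 wrapping follows TensorFlow Serving's convention for binary tensor content, and the URL placeholder matches the command above.

```python
import base64
import json


def build_predict_request(image_bytes):
    """Build a TF Serving REST predict payload for a single image.

    The object detection model takes a batch of images, so the payload
    wraps one instance; {"b64": ...} is TF Serving's convention for
    sending binary data in JSON.
    """
    return json.dumps({
        "instances": [{"b64": base64.b64encode(image_bytes).decode("utf-8")}]
    })


# Sending the request requires a running server, e.g. (illustrative):
# import requests
# resp = requests.post(
#     "http://<server-ip>:8501/v1/models/faster_rcnn_resnet:predict",
#     data=build_predict_request(open("input.jpg", "rb").read()),
# )
# detections = resp.json()["predictions"]
```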
Sample Input/Output
Object detection with anonymized identity: