Inference
In this stage, the data (CSV files or images) is passed through a machine learning model to predict an output from the data.
- In the INFERENCE tab, click BUILD INFERENCE to predict the data output.
- In the Build Inference form, enter the Inference flow name, that is, the name of the inference model.
- In the Select model section, select one of the following types of models:
- TRAINED: select a trained model that was built in the “build a model” phase from the drop-down menu.
- UPLOADED: select a trained model that was uploaded in the “upload a model” phase from the drop-down menu.
- FORTANIX: select a Fortanix pre-built model for image datasets, that is, Object Detection (YOLOv5) or ResNet50 Image Recognition, from the drop-down menu.
- In the Select input dataset field, select the input dataset, created in the first phase, that you want to pass through the machine learning model. The input dataset list is filtered by the input file format of the model selected in the previous step.
- If you selected a TRAINED or UPLOADED model, the Select inference application section appears next:
- If you selected a trained model, the application that performs inference for the trained model is selected automatically in the drop-down menu.
- If you selected an uploaded model, select the application that the uploaded model was trained with from the drop-down menu: either Gradient-boosted Prediction or scikit-learn Prediction (see the sketch after Figure 1).
- For trained or uploaded models, in the Select the ML variables field, select the ML variables that you created in the Data Preparation phase.
- In the Output Configuration field, enter the name of the output dataset that will contain the predicted output.
- The Encrypt Dataset option is selected by default; it generates an encryption key that adds an extra layer of protection to the output data. Copy or download the key, as you will need it later to decrypt the output data for viewing.
- Click CREATE INFERENCE FLOW to pass the data through a machine learning model and predict the output.
Figure 1: Build inference
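For context, an uploaded model for the Gradient-boosted Prediction or scikit-learn Prediction applications is a serialized scikit-learn estimator. The sketch below is a minimal, hypothetical example of training and serializing such a model before the “upload a model” phase; the joblib serialization format and the Iris training data are assumptions for illustration, so verify the expected format against the upload documentation.

```python
# Minimal sketch (assumptions: joblib serialization, Iris data as a stand-in
# for your own training set). Not necessarily the platform's required
# format -- verify against the "upload a model" documentation.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)

# A gradient-boosted classifier, matching the "Gradient-boosted Prediction"
# inference application.
model = GradientBoostingClassifier().fit(X, y)

# Serialize the trained estimator; this file would be the artifact uploaded
# in the "upload a model" phase.
joblib.dump(model, "gb_model.joblib")
```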
- The inference flow is now created. Click RUN below the inference workflow to run the model and predict the output (a conceptual sketch of this step follows Figure 2).
Figure 2: Run inference
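Conceptually, for a CSV dataset and a scikit-learn model, the run step amounts to loading the model, scoring the input dataset, and writing the predictions to the named output dataset. The sketch below illustrates that flow outside the platform; the file names are placeholders, and the platform's internal implementation may differ.

```python
# Illustrative only: what the RUN step does conceptually for a scikit-learn
# model and a CSV input dataset. File names are placeholders.
import joblib
import pandas as pd

model = joblib.load("gb_model.joblib")       # the trained/uploaded model
features = pd.read_csv("input_dataset.csv")  # the selected input dataset

predictions = model.predict(features)

# Write the predictions to the named output dataset.
pd.DataFrame({"prediction": predictions}).to_csv("output.csv", index=False)
```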
- A Running indicator appears at the bottom of the workflow. If you need to stop the execution at any point, click STOP; this re-enables the RUN button.
Figure 3: Inference running
- If the model executes successfully, the status of the execution appears under Execution Log. Click the Execution Log link to view the log details.
Figure 4: Inference success
- Click the download report icon to download the execution log report.
Figure 5: Execution log for data inference
- After the execution completes successfully, the predicted output is ready to view. Click the DOWNLOAD button to download it.
Figure 6: Download output
- In the DOWNLOAD dialog box, enter the Encryption key to decrypt the output.
Figure 7: Decrypt output
- A *.tar.gz file is generated on your local machine. Extract the contents of the file (see the extraction sketch after Figure 8). A snapshot of the output appears as shown below.
Figure 8: Sample output
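To extract the downloaded archive programmatically, Python's standard-library tarfile module handles *.tar.gz files; the file name below is a placeholder for the archive you actually downloaded.

```python
# Extract the downloaded *.tar.gz archive; "output.tar.gz" is a placeholder
# for the actual downloaded file name.
import tarfile

with tarfile.open("output.tar.gz", "r:gz") as archive:
    archive.extractall(path="inference_output")  # extracted output lands here
```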