User's Guide: Inference


In this stage, the data (CSV or images) is passed through a machine learning model to predict the output.

  1. In the INFERENCE tab, click BUILD INFERENCE to predict the data output.
  2. In the Build Inference form, enter the Inference flow name, that is, the name of the inference model.
  3. In the Input dataset field, select the training dataset that you created in the first stage that you want to pass through a machine learning model.
  4. In the Algorithm field, select one of the following prediction algorithms:
    • Scikit-learn Prediction – select this algorithm if your model was built using Logistic Regression, KNN, SVM, or Decision Trees.
    • Gradient-boosted Prediction – select this algorithm if your model was built using an XGBoost algorithm (Gradient Boosted Random Forest or Gradient Boosted Decision Tree).
  5. In the Model field, select one of the following options:
    • Built Models: select a trained model that was built in the “build a model” phase.
    • Uploaded Models: select a trained model that was uploaded in the “upload a model” phase.
  6. Select the ML variables that you created in the Data Preparation phase.
  7. In the Output Configuration field, enter the name of the output dataset that will contain the predicted output.
  8. The Encrypt Dataset option is selected by default; it generates an encryption key that adds an extra layer of protection to the output data. Copy or download the key, as you will need it to decrypt the output data for viewing.
    NOTE
    Failure to save the key will result in loss of data.
  9. Click CREATE INFERENCE FLOW to pass the data through the machine learning model and predict the output.
     Figure 1: Build inference
  10. The inference flow is created. Click RUN below the inference workflow to run the model and predict the output.
     Figure 2: Run inference
  11. A Running indicator appears at the bottom of the workflow. To stop the execution at any point, click STOP; this re-enables the RUN button.
     Figure 3: Inference running
  12. When the model executes successfully, the status of the execution appears under the Execution Log. Click the Execution Log link to view the log details.
     Figure 4: Inference success
  13. Click the download report icon to download the execution log report.
     Figure 5: Execution log for data inference
  14. After the execution completes successfully, the predicted output is ready to view. To view the output, click DOWNLOAD.
     Figure 6: Download output
  15. In the DOWNLOAD dialog box, enter the Encryption key to decrypt the output.
     Figure 7: Decrypt output
  16. A *.tar.gz file is downloaded to your local machine. Extract the contents of the file; a snapshot of the output appears as shown below.
     Figure 8: Sample output
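If you prefer to script the final step, the downloaded archive can be extracted programmatically with Python's standard `tarfile` module. This is a minimal sketch only: the archive and member file names (`inference_output.tar.gz`, `output.csv`) are placeholders, not the product's actual file names, and the demo builds a synthetic archive to stand in for the real download.

```python
import io
import os
import tarfile
import tempfile


def extract_output(archive_path: str, dest_dir: str) -> list:
    """Extract a *.tar.gz archive and return paths of the extracted files."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)  # on Python 3.12+, consider filter="data" for safety
        return [os.path.join(dest_dir, m.name) for m in tar.getmembers() if m.isfile()]


# Demo with a synthetic archive standing in for the downloaded output.
with tempfile.TemporaryDirectory() as work:
    archive = os.path.join(work, "inference_output.tar.gz")  # placeholder name
    with tarfile.open(archive, "w:gz") as tar:
        data = b"id,prediction\n1,0\n2,1\n"  # stand-in for predicted output
        info = tarfile.TarInfo("output.csv")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

    files = extract_output(archive, work)
    print([os.path.basename(f) for f in files])  # → ['output.csv']
```

Run this on the actual downloaded archive by passing its path to `extract_output`; the returned list tells you which files were unpacked.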
