Test Model

To switch the active model, click the Change Model button.

../../_images/analytics-studio-model-panel.png

In the Test Model tab, you can run your model against test data in your project. This allows you to see how your model will perform on real data before flashing it to a device.

../../_images/analytics-studio-test-model-initial.png

Select one or more capture files, then click the Compute Accuracy button to validate the model against the test data. To generate the model results, SensiML Cloud emulates the firmware model's classifications on the selected sensor data. This provides a bit-accurate view of the performance you can expect when deploying your model to an edge device.

../../_images/analytics-studio-test-model-run.png
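The accuracy reported for a capture is conceptually the fraction of segments whose emulated classification matches the ground truth label. A minimal sketch of that idea, using hypothetical label data (this is illustrative only, not the SensiML API):

```python
def compute_accuracy(predictions, ground_truth):
    """Fraction of segments where the model's classification matches the label."""
    assert len(predictions) == len(ground_truth), "one prediction per labeled segment"
    matches = sum(p == g for p, g in zip(predictions, ground_truth))
    return matches / len(predictions)

# Hypothetical per-segment results for one capture file.
predictions  = ["Walking", "Walking", "Running", "Running", "Walking"]
ground_truth = ["Walking", "Running", "Running", "Running", "Walking"]

print(f"Accuracy: {compute_accuracy(predictions, ground_truth):.0%}")  # → 80%
```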

Once the results are finished, you can click the Compute Summary button to see the confusion matrix across all of the files you just tested. The ground truth for the confusion matrix is generated from the labels created in the Data Capture Lab. You can also switch which session is used to compute the ground truth.

../../_images/analytics-studio-test-model-compute-summary.png
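Conceptually, the summary step pools the per-file results into a single confusion matrix by counting (ground truth, prediction) pairs across every tested file. A hedged sketch with made-up labels (the real computation happens in SensiML Cloud):

```python
from collections import Counter

def confusion_matrix(per_file_results):
    """Aggregate (ground_truth, prediction) pairs from every tested file."""
    counts = Counter()
    for pairs in per_file_results:
        for truth, prediction in pairs:
            counts[(truth, prediction)] += 1
    return counts

# Hypothetical (ground_truth, prediction) pairs for two capture files.
file_a = [("Walking", "Walking"), ("Running", "Walking")]
file_b = [("Running", "Running"), ("Walking", "Walking")]

cm = confusion_matrix([file_a, file_b])
# Off-diagonal cells such as ("Running", "Walking") are misclassifications.
```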

Clicking the Results icon for any capture shows the model results for that specific capture, summarized as model predictions vs. ground truth labels. In the Classification Chart (top), the Y-axis shows the classification name and the X-axis shows the sample number or time. Classifications are generated by the model at the interval you selected for windowing segmentation. Locations where the ground truth and the classification do not match are marked with a red X. The Feature Vector Heat Map (bottom) visualizes the feature vector generated from each segment before it is fed into the classifier. Feature values are always scaled to a single byte prior to classification.

../../_images/analytics-studio-test-model-model-results.png
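The single-byte scaling mentioned above can be pictured as a min/max mapping of each raw feature value into the 0-255 range of one byte. The bounds and rounding here are assumptions for illustration, not the exact SensiML implementation:

```python
def scale_to_byte(values, vmin, vmax):
    """Map raw feature values into [0, 255] so each fits in a single byte.

    vmin/vmax are assumed scaling bounds (e.g. learned from training data);
    results are clamped so out-of-range inputs still fit in one byte.
    """
    span = (vmax - vmin) or 1  # avoid division by zero for constant features
    return [max(0, min(255, round(255 * (v - vmin) / span))) for v in values]

print(scale_to_byte([-1.0, 0.0, 1.0], -1.0, 1.0))  # → [0, 128, 255]
```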