Exploring Model Results¶
In the Explore Model tab you can get more information about the models that were generated from the Build Model tab.
Select your pipeline and a model to explore:
In the Test Model tab, you can run your model against test data in your project. This allows you to see how your model will perform on real data before flashing it to a device.
Click one of the capture files in the list to select it, then click the Run button to emulate the model and validate it against the test data.
The results of the model validation are displayed below the widget. The Y-axis shows the classification name and the X-axis shows the sample number (or time). The model generates a classification at the interval you selected for windowing segmentation; each classification is represented by a circle mark on the graph.
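To make the relationship between the windowing segmentation interval and the circle marks concrete, here is a minimal sketch (not SensiML code) of sliding a fixed-size window over a capture and emitting one classification per window. The window size, delta, and toy classifier are illustrative assumptions:

```python
def classify_windows(samples, window_size, delta, classify):
    """Slide a fixed-size window over the samples and emit one
    (start_index, label) pair per window -- a sketch of how one
    classification is produced at each windowing interval."""
    results = []
    for start in range(0, len(samples) - window_size + 1, delta):
        window = samples[start:start + window_size]
        results.append((start, classify(window)))
    return results

# Toy signal and toy classifier: label each window by the sign of its mean.
signal = [1.0] * 300 + [-1.0] * 300
label = lambda w: "A" if sum(w) / len(w) > 0 else "B"
marks = classify_windows(signal, window_size=100, delta=100, classify=label)
print(marks)  # [(0, 'A'), (100, 'A'), (200, 'A'), (300, 'B'), (400, 'B'), (500, 'B')]
```

Each tuple corresponds to one circle mark on the graph: the X position is the sample number where the window starts, and the label is the Y position.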
The Confusion Matrix tab shows the averaged confusion matrix for the validation data. The confusion matrix describes how well the model performed at recognizing each class, as well as how it misclassified samples into other classes. For models generated by SensiML's AutoML pipeline, the confusion matrix is created by averaging across the results of the validation data sets for each fold.
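The fold-averaging step can be sketched in plain Python. The fold data and class labels below are made-up examples; the point is only to show that the displayed matrix is the element-wise mean of the per-fold matrices:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Count how often each true label (row) was predicted as each label (column)."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0.0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

def averaged_confusion_matrix(fold_results, labels):
    """Average the per-fold confusion matrices element-wise, similar in
    spirit to averaging across the validation folds."""
    matrices = [confusion_matrix(y_true, y_pred, labels)
                for y_true, y_pred in fold_results]
    n = len(matrices)
    return [[sum(m[r][c] for m in matrices) / n for c in range(len(labels))]
            for r in range(len(labels))]

# Two hypothetical validation folds of (true labels, predicted labels).
folds = [
    (["walk", "walk", "run"], ["walk", "run", "run"]),
    (["walk", "run", "run"], ["walk", "run", "walk"]),
]
avg = averaged_confusion_matrix(folds, labels=["walk", "run"])
print(avg)  # [[1.0, 0.5], [0.5, 1.0]]
```

Off-diagonal entries are the misclassifications: here, on average half a "walk" sample per fold was predicted as "run", and vice versa.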
The Feature Summary tab contains information about the features used during the feature extraction step of the Knowledge Pack, showing which feature extractors were applied and which sensors fed into them. This simple example project needed only six feature extractors to generate a high-accuracy model. The Category column describes the family to which a feature generator belongs. The Generator column gives the name of the feature extractor, which can be used to reference the feature generator when building custom pipelines. The Sensors column lists the sensors that were used as input to that feature extractor.
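As a rough mental model of how the Category, Generator, and Sensors columns relate to the feature extraction step, here is a hypothetical feature table in Python (the category, generator, and sensor names are illustrative, not taken from a real Knowledge Pack):

```python
import statistics

# Hypothetical feature table: each entry mirrors one row of the Feature
# Summary tab (Category, Generator, Sensors), paired with a function that
# computes the feature from one window of samples for that sensor.
FEATURE_GENERATORS = [
    {"category": "Statistical", "generator": "Mean",
     "sensors": ["AccelerometerX"], "fn": statistics.mean},
    {"category": "Statistical", "generator": "Standard Deviation",
     "sensors": ["AccelerometerX"], "fn": statistics.pstdev},
]

def extract_features(window):
    """Run every feature generator over one window, producing the feature
    vector that is handed to the classifier."""
    return {g["generator"]: g["fn"](window) for g in FEATURE_GENERATORS}

features = extract_features([1.0, 2.0, 3.0, 4.0])
print(features)  # Mean is 2.5; Standard Deviation is ~1.118
```

Referencing a generator by its name, as the Generator column suggests, is what lets you reuse the same extractor when building custom pipelines.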
The Model Summary tab describes the classifier, classifier parameters, and training algorithm that were used to generate the final model. The information shown depends on which classifier the model uses, but in general it includes the classifier name, the training algorithm used to train the model, and any hyperparameters that were set for training. The uuid field is the unique identifier for this model.
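The shape of this information can be sketched as a small record. The classifier and algorithm names and hyperparameters below are illustrative placeholders, not necessarily what your own pipeline produced:

```python
import uuid

# Illustrative sketch of the kind of fields the Model Summary tab reports.
model_summary = {
    "uuid": str(uuid.uuid4()),  # unique identifier for this model
    "classifier": "ExampleClassifier",  # classifier name (placeholder)
    "training_algorithm": "ExampleTrainingAlgorithm",  # placeholder
    "hyperparameters": {"max_neurons": 128},  # example training setting
}

for key, value in model_summary.items():
    print(f"{key}: {value}")
```

The uuid is the handle you would use to refer to this specific model when several models were generated by the same pipeline.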