Classifiers
A classifier takes a feature vector as input and returns a classification based on a pre-defined model.
Copyright 2017-2024 SensiML Corporation
This file is part of SensiML™ Piccolo AI™.
SensiML Piccolo AI is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
SensiML Piccolo AI is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License along with SensiML Piccolo AI. If not, see <https://www.gnu.org/licenses/>.
-
Bonsai
Bonsai is a tree model for supervised learning tasks such as binary and multi-class classification, regression, ranking, etc. Bonsai learns a single, shallow, sparse tree with powerful predictors at internal and leaf nodes. This allows Bonsai to achieve state-of-the-art prediction accuracies while making predictions efficiently in microseconds to milliseconds (depending on processor speed) using models that fit in a few KB of memory.
Bonsai was developed by Microsoft; for detailed information, see the ICML 2017 paper.
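The sketch below is a minimal Python illustration of the Bonsai prediction path described above (sparse projection followed by a single shallow tree whose nodes all contribute predictor scores). The array names, shapes, and binary-only decision are illustrative assumptions drawn from the ICML 2017 paper, not the Piccolo AI implementation:

    import numpy as np

    def bonsai_predict(x, Z, W, V, Theta, sigma=1.0, depth=3):
        # Project the feature vector into a low-dimensional space with the
        # sparse projection matrix Z (shape: d_proj x d_features).
        z = Z @ x
        score = 0.0
        node = 0  # root of the single shallow balanced tree
        num_nodes = W.shape[0]
        for _ in range(depth + 1):
            # Every node visited on the path contributes a non-linear predictor score.
            score += (W[node] @ z) * np.tanh(sigma * (V[node] @ z))
            # Branch on the sign of the node's branching hyperplane.
            child = 2 * node + (1 if Theta[node] @ z > 0 else 2)
            if child >= num_nodes:
                break  # reached a leaf
            node = child
        # Binary decision from the summed path score; multi-class Bonsai keeps
        # one score per class instead.
        return 1 if score > 0 else 0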
-
Boosted Tree Ensemble
The boosted tree ensemble classifier is an ensemble of decision trees that are evaluated against an input vector. Each decision tree in the ensemble provides a bias towards a predicted value, and the sum over all biases determines the final prediction.
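As a rough illustration of that summation (the callables, threshold, and function name below are placeholders for the sake of the sketch, not the generated Piccolo AI kernel):

    def predict_boosted_ensemble(feature_vector, trees, threshold=0.0):
        # Each tree maps the feature vector to a small bias (its leaf value);
        # the sum of all biases determines the final prediction.
        total_bias = sum(tree(feature_vector) for tree in trees)
        # Binary case: compare the summed bias to a threshold.  Multi-class
        # models keep one running sum per class instead.
        return 1 if total_bias > threshold else 0

    # Toy usage with two depth-1 "trees" (decision stumps):
    stumps = [
        lambda x: 0.4 if x[0] > 1.5 else -0.4,
        lambda x: 0.2 if x[2] > 0.0 else -0.2,
    ]
    print(predict_boosted_ensemble([2.0, 0.1, -0.3], stumps))  # prints 1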
-
Decision Tree Ensemble
The decision tree ensemble classifier is an ensemble of decision trees that are evaluated against an input vector. Each decision tree in the ensemble provides a single prediction and the majority vote of all the trees is returned as the prediction for the ensemble.
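A conceptual sketch of the majority vote, again with placeholder tree callables rather than the generated kernel:

    from collections import Counter

    def predict_tree_ensemble(feature_vector, trees):
        # Each tree votes for a class label; the majority vote wins.
        votes = [tree(feature_vector) for tree in trees]
        return Counter(votes).most_common(1)[0][0]

    # Toy usage: three stub "trees" voting on a class id.
    trees = [lambda x: 0, lambda x: 1, lambda x: 1]
    print(predict_tree_ensemble([0.5, 1.2], trees))  # prints 1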
-
PME
PME, or pattern matching engine, is a distance-based classifier that is optimized for high performance on resource-constrained devices. It computes the distance between an input vector and a database of stored patterns and returns a prediction based on the classifier settings.
There are three distance metrics that can be computed: L1, Lsup, and DTW (Dynamic Time Warping).
There are two classification criteria, RBF and KNN. In RBF mode, every pattern in the database is given an influence field; the distance between the pattern and the input vector must be smaller than that influence field for the pattern to fire. KNN returns the category of the pattern with the smallest computed distance to the input vector.
- Parameters
distance_mode (str) – L1, Lsup or DTW
classification_mode (str) – RBF or KNN
max_aif (int) – the maximum value of the influence field
min_aif (int) – the minimum value of the influence field
reserved_patterns (int) – The number of patterns to reserve in the database in addition to the predefined patterns during training
online_learning (bool) – generate code for online learning on the edge device; this takes up additional SRAM but can be used to tune the model at the edge
num_channels (int) – the number of channels used in the distance calculation when DTW is the distance metric (default: 1).
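The sketch below illustrates the L1/Lsup distance computation and the RBF/KNN criteria described above. The function name and data layout are assumptions made for illustration, DTW is omitted, and this is not the optimized on-device kernel:

    import numpy as np

    def pme_classify(x, patterns, distance_mode="L1", classification_mode="KNN"):
        # `patterns` is a list of dicts holding a stored 'vector', its
        # 'category', and (for RBF mode) an influence field 'aif'.
        best_category, best_distance = None, float("inf")
        for p in patterns:
            diff = np.abs(np.asarray(x) - np.asarray(p["vector"]))
            dist = diff.sum() if distance_mode == "L1" else diff.max()  # L1 / Lsup
            if classification_mode == "RBF" and dist >= p["aif"]:
                continue  # pattern does not fire: input is outside its influence field
            if dist < best_distance:
                best_category, best_distance = p["category"], dist
        return best_category  # None means unknown (no pattern fired in RBF mode)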
-
TF Micro
The TensorFlow Micro classifier uses TensorFlow Lite for Microcontrollers, an inference engine from Google optimized to run machine learning models on embedded devices.
TensorFlow Lite for Microcontrollers supports a subset of all TensorFlow functions. For a full list, see all_ops_resolver.cc.
For additional documentation on TensorFlow Lite for Microcontrollers see here.
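As one example of producing a model that TensorFlow Lite for Microcontrollers can execute, the sketch below converts a small Keras model to a fully int8-quantized TensorFlow Lite flatbuffer using the standard TFLiteConverter API. The model architecture, feature-vector size, and calibration data are placeholders, and the exact workflow Piccolo AI uses to package the model may differ:

    import numpy as np
    import tensorflow as tf

    # Small dense model mapping a 16-element feature vector to 4 classes.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(16,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    # ... train the model on feature vectors here ...

    def representative_dataset():
        # Calibration samples used to choose the int8 quantization ranges;
        # in practice these come from real training feature vectors.
        for _ in range(100):
            yield [np.random.rand(1, 16).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model.tflite", "wb") as f:
        f.write(converter.convert())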