Creating a Smart Home Device That Is Truly Smart

Part 3: Generating a Working Smart Door Lock Model for the SiLabs xG24 Dev Kit

If you have been following along with this series, in Part 1 we discussed the advantages of performing sensor analytics autonomously at the IoT edge versus relying on the cloud, a gateway, or a smartphone. To illustrate the point, we established our example application: a reliable, acoustic event-driven smart door lock. Our smart door lock uses a microphone to listen for sounds and local AI inferencing to classify those sounds against the events it is trained to recognize using the SensiML Toolkit. In Part 2 we collected these raw sounds and labeled them according to our application needs with the SensiML Data Capture Lab application. We started by capturing some basic sound events, including someone knocking at the door, someone inserting and operating a key in the deadbolt lock, and the sound of the deadbolt locking and unlocking.

Having collected our dataset, in Part 3 we can now bring our concept to fruition by producing a working predictive model capable of running directly on our IoT sensor device. This is typically the riskiest stage in the development process, where audio DSP, pre-processing, segmentation, and classification tasks must be implemented in handcrafted code appropriate for a microcontroller environment. This complexity often leads developers to relegate the IoT device to a connected sensor data logger and rely on cloud analytics for application insight. Alternatively, the IoT edge AI solution from SensiML and Silicon Labs' MG24 and BG24 SoC families provides an easy-to-implement, performant IoT edge inferencing option for audio sensing that fits within the device's memory and power constraints. The resulting model can then recognize these audio events accurately on-device, delivering a far better user experience and a more fault-tolerant solution than one dependent on the network for raw sensor processing. Better still, the optimized code can be created in-house with your existing team, without bringing on specialized DSP and AI expertise or outsourcing to specialists for expensive one-off model code.

Building Our Smart Door Lock Model

SensiML Analytics Toolkit provides a defined workflow for transforming labeled datasets into working AI recognition code. Depending on the application needs, user skillset, and customization required, there are three methods for model building:

  • AutoML-driven model generation: Leaving all the complexity of model selection and tuning to the SensiML cloud modeling engine, which requires only your labeled dataset and a few basic constraints you specify.
  • User-directed model generation: Specifying the details of each aspect of the model pipeline, like data cleansing, feature transformation, classification, and post-processing in a coding-free graphical user interface.
  • Programmatic model generation: Leveraging the Python programming language for ultimate model-building control, using SensiML’s Python SDK and optionally an interactive IDE such as Jupyter Notebook or Google Colab.

For our smart door lock model, we will use the last of these three approaches, as we want the flexibility to build on more advanced topics we will cover in Parts 4 and 5 to come. On the hardware side, Silicon Labs' MG24 and BG24 families of SoCs take edge inferencing a step further with integrated AI acceleration hardware complementing a multi-protocol radio and a high-performance 32-bit 78 MHz ARM Cortex®-M33 with DSP instructions and a floating-point unit for efficient signal processing. SensiML's automatic code generation takes full advantage of the Silicon Labs AI accelerator, producing code that executes inference up to 4x faster with up to 6x lower power consumption.
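To give a feel for the programmatic approach, here is a minimal sketch of a model-building pipeline using the SensiML Python SDK. The project, pipeline, and query names are hypothetical, and the method and parameter names follow the SDK's documented pipeline interface but may vary by SDK version; consult the SensiML documentation for the authoritative usage.

```python
# Minimal sketch of programmatic model building with the SensiML Python SDK.
# Names and parameters are assumptions based on the SDK's documented pipeline
# interface and may differ between versions.
from sensiml import SensiML

client = SensiML()                       # prompts for SensiML account login
client.project = "Smart Door Lock"       # hypothetical project name
client.pipeline = "door_lock_mfcc"       # hypothetical pipeline name

client.pipeline.reset()
client.pipeline.set_input_query("All Audio Events")   # hypothetical query name

# Segment the 16 kHz audio stream into 400-sample (25 ms) windows
client.pipeline.add_transform("Windowing",
                              params={"window_size": 400, "delta": 400})

# Further steps (MFCC feature generation, TensorFlow classifier training and
# validation) are added the same way; the pipeline then executes in the
# SensiML cloud and returns the trained model and its statistics.
results, stats = client.pipeline.execute()
```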

Our working model will transform the raw 16 kHz sampled audio into Mel-frequency cepstral coefficient (MFCC) features computed over 400-sample (25 ms) windowed excerpts of the data stream. Several MFCC feature vectors are combined to form the input vector for a TensorFlow Lite for Microcontrollers neural network classifier trained on our specific labeled dataset. TensorFlow Lite for Microcontrollers provides an efficient NN implementation for IoT edge devices, with 8-bit quantization and a subset of functionality suited to MCUs. To optimize our model further, we will invoke Silicon Labs' MG24/BG24 AI accelerator hardware by calling the accelerated TensorFlow library included in the HDK for this device. The end result is wrapped in application logic that streams live audio data from the xG24 Dev Kit microphone into the SensiML smart door lock AI model and provides serial data output whenever a valid classification is detected. This code, called the SensiML Knowledge Pack, can be supplied in either library or C source-code format for inclusion and modification in the end application firmware. The detailed model-building process is described on the SensiML documentation site.
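For readers who want to see the underlying mechanics, the sketch below illustrates the same feature-and-classifier recipe in plain Python: MFCC vectors computed over 400-sample windows of 16 kHz audio, stacked into a fixed-size input vector, and a stand-in classifier converted to a fully 8-bit quantized TensorFlow Lite model. The coefficient count, frames-per-input, network size, and synthetic audio are all illustrative assumptions; the Knowledge Pack generates its own optimized equivalent of these steps.

```python
import numpy as np
import librosa          # assumption: used here purely to illustrate the MFCC math
import tensorflow as tf

SAMPLE_RATE = 16_000    # 16 kHz capture rate, per the model description
FRAME = 400             # 400 samples = 25 ms per MFCC window
N_MFCC = 13             # assumption: coefficient count is not stated in the article
FRAMES = 40             # assumption: MFCC frames combined per classifier input

# Stand-in for captured microphone audio (2 s of noise keeps this self-contained)
audio = np.random.randn(SAMPLE_RATE * 2).astype(np.float32)

# One MFCC vector per 400-sample window (non-overlapping hop for simplicity)
mfcc = librosa.feature.mfcc(y=audio, sr=SAMPLE_RATE, n_mfcc=N_MFCC,
                            n_fft=FRAME, hop_length=FRAME)

# Stack consecutive MFCC frames into one fixed-size classifier input vector
x = mfcc[:, :FRAMES].T.flatten()[np.newaxis, :].astype(np.float32)
print(x.shape)          # (1, 520): N_MFCC * FRAMES features

# A deliberately tiny dense network standing in for the trained classifier
model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_MFCC * FRAMES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # 4 door-lock event classes
])

# Full-integer (8-bit) quantization, as used by TF Lite for Microcontrollers
def rep_data():
    for _ in range(100):
        yield [np.random.randn(1, N_MFCC * FRAMES).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open("door_lock_model.tflite", "wb") as f:
    f.write(converter.convert())
```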

Testing Our Smart Door Lock Model

Using Analytics Studio, we have created a functioning audio recognition model using the TensorFlow inference engine and our customized MFCC-based feature vectors. Now it is time to test the model's performance. From the model-building process, we already have a good idea of how our model will perform, based on evaluating its accuracy against reserved subsets of our overall dataset that were excluded from training and used solely for testing. A typical way of understanding this performance is to review the model's confusion matrix, which compares actual versus predicted results in a tabular format. Our smart door lock candidate model's confusion matrix is as follows:

From the above, we see that our model should perform quite well. Overall accuracy exceeds 96%, and the area of greatest concern is the 7% misclassification of knocking events as either Lock/Unlock or Unknown. We can explore that behavior further in actual testing to assess how we might improve or tune the model in that regard.
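If you are reproducing this evaluation programmatically, a confusion matrix of the kind shown above takes only a few lines with scikit-learn. The class names and the handful of labels below are purely hypothetical stand-ins; they simply show the actual-versus-predicted tabulation.

```python
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical class names and a few made-up test results; the real evaluation
# compares the hold-out set labels against the model's predictions.
classes = ["Knock", "Key", "Lock/Unlock", "Unknown"]
y_true = ["Knock", "Knock", "Key", "Lock/Unlock", "Unknown", "Knock"]
y_pred = ["Knock", "Lock/Unlock", "Key", "Lock/Unlock", "Unknown", "Knock"]

# Rows are actual classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred, labels=classes))
print(classification_report(y_true, y_pred, labels=classes))
```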

The first method for live model testing performs bit-exact emulation of the model code on the PC using Data Capture Lab with either previously collected data or live streaming data. This process is described thoroughly in our documentation. In the video below we show how it works with live streaming data.

Data Capture Lab’s Live Test mode allows users to test candidate models with bit-exact model emulation using live streaming data from the IoT device. The rich visualization data provides valuable insight into model performance during testing.

The above approach provides an insightful visualization of the classification results, which can arrive very rapidly in the case of high-sample-rate data with small window sizes, as with our audio data. Even so, we also want to assess the true performance of the model running in firmware on the actual Silicon Labs xG24 Dev Kit hardware. As the saying goes, 'the proof is in the pudding', so observing the actual on-device behavior is our final assessment. We can accomplish this by compiling the Knowledge Pack download library with Simplicity Studio to produce a binary executable (.hex file) that we can then flash and run. The full details are found in our xG24 platform-specific documentation.

Having flashed the Knowledge Pack to the xG24 Dev Kit, we can use the Open Gateway application to receive recognition output streamed over the USB virtual serial port. Open Gateway provides a variety of user-friendly aids for viewing and logging results, covered in its documentation. For instance, we can define human-readable class names in place of ordinal class outputs and even assign image files to aid in viewing results. Open Gateway is provided as open-source code that can be modified to suit particular needs, but we can simply define model and image configuration files to quickly test the code on the xG24 Dev Kit, as shown in the video clip below.

SensiML Open Gateway, an open-source testing utility, makes it easy and convenient to assess actual on-device ML model performance.
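If you prefer to consume the recognition stream programmatically rather than through Open Gateway's UI, the sketch below reads the same USB virtual serial port with pyserial and maps ordinal class outputs to human-readable names. The port name, baud rate, JSON field name, and class mapping are assumptions to adapt; check the Knowledge Pack and Open Gateway documentation for the exact output format.

```python
import json
import serial   # pyserial

# Assumptions: port name, baud rate, one-JSON-object-per-line output, and the
# "Classification" field name; verify these against the Knowledge Pack docs.
PORT = "/dev/ttyACM0"          # e.g. "COM5" on Windows
CLASS_MAP = {0: "Unknown", 1: "Knock", 2: "Key", 3: "Lock/Unlock"}  # hypothetical

with serial.Serial(PORT, 115200, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            msg = json.loads(line)
            cls = int(msg["Classification"])
        except (ValueError, KeyError):
            continue               # skip non-recognition output
        name = CLASS_MAP.get(cls, f"class {cls}")
        print("Detected:", name)
```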

In Our Next Installment: Profiling the performance of the AI-accelerated EFR32MG24 model

In this installment, we've shown how we can quickly build and test a working edge inference model for our smart door lock. The model we have been testing is fully optimized to take advantage of the AI acceleration available in the xG24 Dev Kit. In our next update in this series, we will delve deeper into performance optimization and show how to use profiling information within the SensiML Toolkit to compare the Silicon Labs MG24 hardware-accelerated model against a purely software-based implementation of the same model.

Part 1: The plan to create an acoustic-aware smart door application

Part 2: Acoustic raw sensor data collection and labeling using SensiML Toolkit

Part 4: Profiling the performance of the AI-accelerated EFR32MG24 model

Part 5: Using data augmentation to enhance model accuracy (Coming Soon)

Learn more about SensiML’s Accelerated AI Solution with Silicon Labs