BEYOND THE DATASHEET
/ THE NO-CODE ROUTE TO MEMS-BASED MACHINE LEARNING AT THE EDGE
By Philip Ling
Editor-in-Chief and Technical Content Manager, Avnet Corporate Marketing
New MEMS sensors from STMicroelectronics integrate an innovative machine learning core, making it simpler to deploy machine learning in many applications where motion detection is used. We take a look beyond the datasheet to see how it works.

Conversations continue about why artificial intelligence (AI) and machine learning (ML) are reshaping embedded electronics. It remains a complex topic for engineering teams and business leaders alike.
As a subset of AI, machine learning is, perhaps, a little simpler to address. It can deliver technical and commercial benefits, with a lower cost to implement. New value propositions for implementing ML at the edge in industrial, commercial, medical and automotive applications are emerging regularly.
Fundamentally, ML assists in applications where the parameters are well defined. In some ways, ML is the software equivalent of an industrial robot. It has a task, it knows what that task is, and it ignores anything that isn’t related to that task. AI, on the other hand, can be thought of as the autonomous humanoid robot that is free to figure it out for itself.
Even though it may be simpler, implementing ML still involves gathering data, training a model on a large computer, and then porting that model to the target. Often, the target is a smaller processor or microcontroller with a different instruction set from the processor used for training.
The introduction of MEMS sensors with integrated machine learning capability is disrupting this approach to deploying ML. In place of a microcontroller, the sensors have an integrated and dedicated machine learning core. These cores are not programmable in the traditional sense. Engineers don’t need to port the model from one instruction set to another or integrate it into the application code.
Once the machine learning core has been trained to recognize specific activities, the MEMS sensor communicates these detections to a host via user-assigned values in a register. The values correlate to actions and are configured during training. For example, “0” may be assigned to “walking,” while “1” may be assigned to “sitting” for a wearable sensor.
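As a sketch of how host firmware might use those register values, the lookup below mirrors the example assignments above. The labels and values are illustrative only, not taken from a real configuration:

```python
# Hypothetical mapping from MLC output-register values to activities,
# mirroring the example assignments above ("0" = walking, "1" = sitting).
ACTIVITY_LABELS = {0: "walking", 1: "sitting"}

def decode_mlc_output(reg_value: int) -> str:
    """Translate a value read from the sensor's MLC output register into an activity name."""
    return ACTIVITY_LABELS.get(reg_value, "unknown")
```

In a real design the value would be read from the sensor's output register over I2C or SPI; here it is simply passed in.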
The recognition is based on data coming from the sensing elements and all of the processing takes place in the machine learning core. Although the sensor data is still available to the host processor, it doesn’t need to run an ML model to make sense of the MEMS sensor data.
Even though ML is still an emerging technology, this is a significant deviation from conventional development and deployment. The key to this approach is in the training and configuration of the machine learning core.
Inside a MEMS sensor
To understand how the machine learning core “knows” what the sensors are detecting, let’s look at how a MEMS sensor operates. Inside the sensor is a tiny, machined element. This physical mass is held in place by flexible supports, which allow it to move, or be displaced, by whatever force it is measuring.
In the case of a 3- or 6-axis MEMS sensor, the measured force may result from linear acceleration or rotation. The force creates mechanical stress, but it is detected electrically: the displacement causes small but measurable changes in capacitance between the moving mass and fixed electrodes. The resulting signal is equally small, but directly related to the strength and direction of the force acting on the mass.
In a regular MEMS sensor, the signal is amplified, digitized and made available to a host processor. The software running on the host will decide what the measurements mean. The OEM’s engineers would need to write the code to evaluate the measurements and resolve them into actions, such as changes in direction or speed of travel.
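That host-side evaluation starts with converting raw counts into physical units. A minimal Python sketch, assuming a signed 16-bit output and a sensitivity of 0.061 mg/LSB, a typical figure for a ±2 g full-scale setting; the actual value is device- and configuration-specific, so check the datasheet:

```python
SENSITIVITY_MG_PER_LSB = 0.061  # assumed ±2 g full-scale sensitivity; device-specific

def raw_to_g(raw_count: int) -> float:
    """Convert a signed 16-bit accelerometer sample to acceleration in g."""
    return raw_count * SENSITIVITY_MG_PER_LSB / 1000.0
```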
In STMicroelectronics MEMS sensors equipped with a machine learning core, that evaluation is carried out on the device. Instead of sending separate measurements, the core evaluates all the data into actions or activities.

MEMS Studio from STMicroelectronics makes it simpler to capture training data and deploy machine learning.
The ISM330DHCX (and the AEC-Q100 qualified variant, the ASM330LHHX) is an example of the STMicroelectronics MEMS sensors now equipped with its machine learning core (MLC). With an accelerometer and gyroscope, the sensor’s machine learning core can be configured to recognize sensor outputs that relate to actions specific to the application. In an activity tracker, this could include walking, running or lying down. For the automotive version, it could recognize when the vehicle has been involved in a collision, or that the vehicle is being lifted or jacked up. The intensity of these actions can also be used as a trigger, such as how hard a person changes direction when running (impact detection), or how quickly they sit down (fall detection). The latest generation of accelerometer (AXL) sensors (LIS2DUX12 and LIS2DUXS12) also includes the MLC, allowing them to process acceleration data locally.
For industrial applications, such as structural health monitoring in buildings (using the IIS2ICLX), or vibration analysis (using the ISM330DHCX or ISM330BX), the MLC may also be used to process raw sensor data locally.
How to train your machine learning core
STMicroelectronics has created MEMS Studio, a software tool designed to support development with MEMS sensors. Part of the tool provides GUI-driven training of the machine learning core.
Training involves capturing and labeling real-world data from the sensors. To make the data as representative as possible, ST recommends capturing it with a device that closely matches the final product in form, fit and function.
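Each captured window of samples is reduced to statistical features before classification. A rough Python sketch of that idea, using mean, variance and peak-to-peak as example features; this illustrates the concept, not ST's actual implementation:

```python
import statistics

def window_features(samples_mg: list) -> dict:
    """Reduce one window of accelerometer samples (in mg) to simple statistical features."""
    return {
        "mean": statistics.fmean(samples_mg),
        "variance": statistics.pvariance(samples_mg),
        "peak_to_peak": max(samples_mg) - min(samples_mg),
    }
```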
Capturing and labeling data in this way provides MEMS Studio with the raw information it needs to create decision trees. A decision tree is essentially a machine learning inferencing model, trained for a specific application.
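Conceptually, a decision tree is a cascade of threshold comparisons on window features, with leaves that return the values assigned during configuration. A hand-written illustration, with thresholds and classes invented for the example rather than taken from a trained model:

```python
def decision_tree(variance: float, peak_to_peak: float) -> int:
    """Toy stand-in for an MLC decision tree: leaf values play the role of the register codes."""
    if variance < 50.0:        # little movement across the window
        return 1               # "sitting"
    if peak_to_peak > 800.0:   # large swings suggest vigorous motion
        return 2               # "running"
    return 0                   # "walking"
```

The real tree is generated by the tool from labeled data; the host only ever sees the leaf value in the output register.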
Deploying the decision tree involves sending the configuration data to the MEMS device. This is also known as flashing, because the information is stored in the on-chip flash (non-volatile) memory.
As outlined above, part of the configuration involves assigning values to actions. The host application can read these values from the MEMS sensor’s registers. The engineering team decides which values correspond to the pretrained actions. MEMS Studio also allows the inputs to the decision tree to be optimized using the sensor’s integrated filters.
Several MEMS sensors equipped with the machine learning core are available now. Many have the capacity to store several decision trees. Talk to your Avnet representative to find out more about ST’s range of MEMS sensors and how the right solution could enable your next application to leverage the power of machine learning.
The STMicroelectronics MEMS Studio is a software tool that supports MEMS sensor development.
ABOUT THE AUTHOR

Philip Ling
Editor-in-Chief and Senior Technology Writer, Avnet
Philip leads our FAE roundtable discussions and develops content covering the full range of technologies supported by Avnet.
Philip has more than 30 years of electronics industry experience, including working as a design engineer on mixed-signal embedded systems. He was also a technical journalist and editor covering the industry for several European technical magazines. He has worked for small, medium and large companies as well as startups, and is pleased to say he is constantly learning.
He holds a post-graduate diploma in advanced microelectronics.