CES demo shows how TinyML can detect falls

  • January 4, 2023
  • Steve Rogerson

A US-Swedish collaboration is using TinyML to detect falls, and the result is on show at this week’s CES in Las Vegas.

Swedish firm Imagimob has implemented a TinyML fall-detection application running on California-based Syntiant’s low-power NDP120 neural decision processor.

It can be seen in Syntiant’s private suite at the Venetian Hotel during this week’s CES.

The fall-detection application was developed in Imagimob’s end-to-end TinyML development platform, which includes a built-in fall-detection starter project. The starter project provides an annotated dataset with video metadata, plus a pre-trained machine-learning model in H5 format that detects falls from a belt-mounted device using IMU data.
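
To make the workflow concrete, here is a minimal sketch of how such a pre-trained H5 model could be exercised offline in Python. The file name, window length, channel count and class layout are illustrative assumptions, not details taken from the starter project.

```python
# Hypothetical sketch: running a pre-trained H5 fall-detection model over
# belt-mounted IMU data. Window size, channel count and class layout are
# assumptions for illustration.
import numpy as np
from tensorflow import keras

WINDOW = 50    # assumed: 50 IMU samples per inference window
CHANNELS = 6   # assumed: 3-axis accelerometer + 3-axis gyroscope

model = keras.models.load_model("fall_detection.h5")  # assumed file name

def classify(imu_window: np.ndarray) -> str:
    """Run one window of IMU samples through the model."""
    assert imu_window.shape == (WINDOW, CHANNELS)
    probs = model.predict(imu_window[np.newaxis, ...], verbose=0)[0]
    # Assumed two-class output: index 1 = "fall"
    return "fall" if probs[1] > 0.5 else "no fall"

# Example: slide the window over a longer recording (random stand-in data).
recording = np.random.randn(1000, CHANNELS).astype(np.float32)
for start in range(0, len(recording) - WINDOW, WINDOW):
    print(start, classify(recording[start:start + WINDOW]))
```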

Any developer can use the fall-detection model as a starting point and improve it by collecting more data.
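
Improving the model with newly collected data could amount to a short fine-tuning pass in Keras, sketched below; the shapes, labels and hyperparameters are again assumptions rather than Imagimob specifics.

```python
# Hypothetical fine-tuning pass on newly collected, labelled IMU windows.
# Shapes, labels and hyperparameters are assumptions, as in the sketch above.
import numpy as np
from tensorflow import keras

WINDOW, CHANNELS = 50, 6                               # assumed window shape
model = keras.models.load_model("fall_detection.h5")   # assumed file name

# Stand-in for real recordings: 200 labelled windows (0 = no fall, 1 = fall).
new_x = np.random.randn(200, WINDOW, CHANNELS).astype(np.float32)
new_y = keras.utils.to_categorical(np.random.randint(0, 2, size=200), 2)

# A small learning rate limits how far fine-tuning drifts from the
# pre-trained weights.
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(new_x, new_y, epochs=5, batch_size=32)
model.save("fall_detection_finetuned.h5")
```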

The integration enables developers to use the Imagimob platform to create production-ready deep learning TinyML applications, and to optimise and deploy the ML models on the NDP120 at the click of a button. The combined Imagimob-Syntiant offering supports a range of applications, such as sound event detection, keyword spotting, fall detection, anomaly detection and gesture detection.

“The collaboration with Syntiant will be very valuable for our customers because it allows them to quickly develop and deploy powerful, production-ready deep learning models on the Syntiant NDP120,” said Anders Hardebring, CEO of Imagimob. “We see a lot of market demand in sound event detection, fall detection and anomaly detection.”

Awarded best product of the year by the TinyML Foundation, the NDP120 brings highly accurate always-on voice and sensor neural processing to all types of consumer and industrial products. Packaged with the Syntiant Core 2, the company’s second-generation, flexible deep neural network, the NDP120 supports the “OK Google” and “Hey Google” hot words at under 280µW and can run multiple applications simultaneously at under 1mW.

Imagimob AI is an end-to-end development platform for machine learning on edge devices that lets developers go from data collection to deployment on an edge device in minutes. The platform is widely used to build production-ready models for a range of use cases, including audio, gesture recognition, human motion, predictive maintenance and material detection.