Arm increases on-device intelligence

Steve Rogerson
February 12, 2020



Chip designer Arm is aiming to increase the amount of on-device intelligence in IoT applications with new AI technology, which it hopes will improve security by reducing reliance on cloud and internet connections.
 
The UK subsidiary of Japanese firm SoftBank has announced additions to its artificial intelligence (AI) platform, including machine learning (ML) IP: the Cortex-M55 processor and the Ethos-U55 neural processing unit (NPU), the first micro NPU for Cortex-M, together designed to deliver a combined 480x leap in ML performance for microcontrollers.
 
The IP and its supporting unified toolchain give AI hardware and software developers more ways to innovate by bringing higher levels of on-device ML processing to billions of small, power-constrained IoT and embedded devices.
 
“Enabling AI everywhere requires device makers and developers to deliver machine learning locally on billions, and ultimately trillions, of devices,” said Dipti Vachani, senior vice president at Arm. “With these additions to our AI platform, no device is left behind as on-device ML on the tiniest devices will be the new normal, unleashing the potential of AI securely across a vast range of life-changing applications.”
 
As the IoT intersects with AI advancements and the rollout of 5G, more on-device intelligence means smaller, cost-sensitive devices can be smarter and more capable while benefiting from greater privacy and reliability due to less reliance on the cloud or internet. By delivering this intelligence on microcontrollers designed securely from the ground up, Arm says it is reducing silicon and development costs and speeding up time to market for product manufacturers looking to enhance digital signal processing (DSP) and ML capabilities on-device.
 
Arm partners have shipped more than 50 billion Cortex-M-based chips across a wide range of applications. With the addition of the Cortex-M55, Arm is offering its most AI-capable Cortex-M processor and the first based on the Armv8.1-M architecture, with Helium vector processing technology for enhanced, energy-efficient DSP and ML performance.
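 
To give a flavour of what Helium looks like to programmers, the sketch below uses the M-profile Vector Extension (MVE) intrinsics from ACLE's arm_mve.h to compute a 32-bit dot product; the function and sizes are a hypothetical illustration, not Arm sample code. Tail predication (vctp32q) lets the same vector loop handle lengths that are not a multiple of four without scalar clean-up code.

    // Hypothetical dot-product kernel using Helium (MVE) intrinsics.
    // Build for a Helium-capable core, e.g. with -mcpu=cortex-m55.
    #include <arm_mve.h>
    #include <stdint.h>

    int32_t dot_q31(const int32_t *a, const int32_t *b, uint32_t n)
    {
        int32_t acc = 0;
        for (uint32_t i = 0; i < n; i += 4) {
            // Predicate enables only the lanes still in range.
            mve_pred16_t p = vctp32q(n - i);
            int32x4_t va = vldrwq_z_s32(&a[i], p);   // predicated load
            int32x4_t vb = vldrwq_z_s32(&b[i], p);
            // Multiply corresponding lanes and accumulate across the vector.
            acc = vmladavaq_p_s32(acc, va, vb, p);
        }
        return acc;
    }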
 
The Cortex-M55 delivers up to a 15x uplift in ML performance and a 5x uplift in DSP performance, with greater efficiency than previous Cortex-M generations.
 
Additionally, custom instructions will be available to extend processor capabilities for specific workload optimisation, a new feature for Cortex-M processors. 
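 
Arm's announcement does not spell out the programmer-facing details, but the Custom Datapath Extension (CDE) intrinsics in ACLE's arm_cde.h suggest the general shape. The sketch below assumes a hypothetical checksum accelerator attached to coprocessor slot 0; what a custom instruction actually computes is defined by the chip designer's added datapath logic.

    // Sketch only: the checksum operation and coprocessor slot are
    // hypothetical. Build with CDE enabled for the chosen slot, e.g.
    // -march=armv8.1-m.main+cdecp0.
    #include <arm_cde.h>
    #include <stdint.h>

    uint32_t custom_checksum(const uint32_t *data, uint32_t n)
    {
        uint32_t acc = 0;
        for (uint32_t i = 0; i < n; ++i) {
            // CX2A: two-operand custom instruction with accumulator on
            // coprocessor 0; the immediate selects the custom operation.
            acc = __arm_cx2a(0, acc, data[i], 0);
        }
        return acc;
    }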
 
For more demanding ML systems, the Cortex-M55 can be paired with the Ethos-U55, Arm’s first micro NPU, together delivering a combined 480x increase in ML performance over existing Cortex-M processors.
 
The Ethos-U55 is configurable and designed to accelerate ML inference in area-constrained embedded and IoT devices. Its compression techniques save power and reduce ML model sizes, enabling execution of neural networks that previously ran only on larger systems.
 
Arm says it understands that the developer experience is fundamental to enabling the AI revolution. For this reason, the Cortex-M55 and Ethos-U55 are supported by its Cortex-M software toolchain, giving a unified development flow for traditional DSP and ML workloads. Integration and optimisation for machine learning frameworks, starting with TensorFlow Lite Micro, are intended to give developers a seamless experience on any Cortex-M and Ethos-U55 configuration.
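 
As a rough sketch of that flow, the snippet below runs an int8 model through TensorFlow Lite Micro's C++ interpreter; the model array, arena size and output handling are placeholders, and the header paths follow the TensorFlow Lite Micro source tree.

    // Minimal TensorFlow Lite Micro inference sketch (placeholders marked).
    #include "tensorflow/lite/micro/all_ops_resolver.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    extern const unsigned char g_model_data[];   // flatbuffer built offline

    constexpr int kArenaSize = 10 * 1024;        // hypothetical working memory
    static uint8_t tensor_arena[kArenaSize];

    int8_t run_inference(const int8_t *in, int in_len)
    {
        static tflite::MicroErrorReporter error_reporter;
        const tflite::Model *model = tflite::GetModel(g_model_data);
        static tflite::AllOpsResolver resolver;  // registers built-in kernels
        static tflite::MicroInterpreter interpreter(
            model, resolver, tensor_arena, kArenaSize, &error_reporter);

        interpreter.AllocateTensors();           // carve tensors from the arena

        // Copy quantised input into the input tensor and run the graph.
        TfLiteTensor *input = interpreter.input(0);
        for (int i = 0; i < in_len; ++i) input->data.int8[i] = in[i];
        interpreter.Invoke();

        return interpreter.output(0)->data.int8[0];  // e.g. a class score
    }

On a system with an Ethos-U55, Arm's tooling is intended to let the same model run with supported operators offloaded to the NPU and the remainder falling back to the Cortex-M55, which is what keeps the flow identical for developers across configurations.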
 
Arm believes security should never be an afterthought and is critical for the IoT to scale. To support secure designs and provide a seamless route to PSA Certified products, the processors and the accompanying Corstone reference design work with Arm TrustZone, so that security can be more easily incorporated into the complete system-on-chip.
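 
In source code, TrustZone for Armv8-M surfaces through the CMSE support in arm_cmse.h. As a hedged illustration (the key-handling service and function names are hypothetical), a secure image built with the compiler's -mcmse option can export a single entry point to non-secure code while validating every pointer it is handed.

    // Hypothetical secure service: cmse_nonsecure_entry creates a gateway
    // veneer so non-secure firmware can call this one function while the
    // device key stays inside the secure state.
    #include <arm_cmse.h>
    #include <stddef.h>
    #include <stdint.h>

    static uint32_t device_key = 0xC0FFEE42u;    // placeholder secret

    extern "C" __attribute__((cmse_nonsecure_entry))
    int32_t secure_tag(const uint8_t *buf, size_t len, uint32_t *out)
    {
        // Reject pointers the non-secure caller could not legitimately use,
        // blocking attempts to make the secure side read or write secure
        // memory on the caller's behalf.
        if (cmse_check_address_range((void *)buf, len,
                                     CMSE_NONSECURE | CMSE_MPU_READ) == NULL)
            return -1;
        if (cmse_check_address_range(out, sizeof *out,
                                     CMSE_NONSECURE | CMSE_MPU_READWRITE) == NULL)
            return -1;

        uint32_t tag = device_key;               // toy keyed hash, not a real MAC
        for (size_t i = 0; i < len; ++i)
            tag = (tag << 1) ^ buf[i];
        *out = tag;
        return 0;
    }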
 
This technology is receiving industry support from ecosystem partners, including Amazon, Alif Semiconductor, Bestechnic, Cypress, Dolby, Google, NXP, Samsung and STMicroelectronics.
 
“Google and Arm have been collaborating to fully optimise TensorFlow on Arm’s architecture, enabling machine learning on embedded devices for very power-constrained and cost-sensitive applications, often deployed without network connectivity,” said Ian Nappier, product manager at Google. “This new IP from Arm furthers our shared vision of billions of TensorFlow-enabled devices using ML at the endpoint. These devices can run neural network models on batteries for years, and deliver low-latency inference directly on the device.”