Intel chips for embedded & edge AI

  • December 19, 2023
  • William Payne

Intel has launched a range of network, edge and cloud processors designed for AI use cases. The new processors promise improved performance, power efficiency and total cost of ownership (TCO) on AI workloads.

Intel Core Ultra features an on-chip AI accelerator for client devices. The neural processing unit, or NPU, provides AI acceleration with 2.5x better power efficiency than the previous generation.

The company envisages Intel Core processors being used not only in ‘AI PCs’, which it says represent a new paradigm for the PC platform, but also in edge and embedded systems for industrial, infrastructure, retail and logistics use cases.

To make AI hardware technologies accessible, Intel has built optimisations into the AI frameworks developers use (like PyTorch and TensorFlow) and offers libraries (through oneAPI) to make software portable and performant across different types of hardware.

Developer tools, including Intel’s oneAPI and OpenVINO toolkit, support hardware acceleration of AI workloads and enable rapid building, optimisation and deployment of AI models across a range of inference targets.
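As a hedged sketch of the deploy-anywhere flow these tools enable: with the OpenVINO runtime, the same few lines compile a trained model for whichever inference target is available. The model file name and usage line below are hypothetical, added only for illustration.

```python
# Hedged sketch of an OpenVINO build-and-deploy flow; the model file name
# is hypothetical (any ONNX or OpenVINO IR model would serve).
def run_inference(model_path: str, input_data, device: str = "CPU"):
    """Compile a model for a target device and run one inference request."""
    from openvino import Core  # OpenVINO 2023+ Python runtime (an assumption)
    core = Core()
    model = core.read_model(model_path)           # load ONNX / IR model
    compiled = core.compile_model(model, device)  # device-specific compilation
    return compiled([input_data])                 # single synchronous inference

# Hypothetical usage: the same call targets CPU, GPU or the new NPU
# run_inference("classifier.onnx", batch, device="NPU")
```

The portability point Intel is making is visible in the `device` argument: switching the inference target from CPU to GPU or NPU needs no change to the model or the surrounding code.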

“AI innovation is poised to raise the digital economy’s impact up to as much as one-third of global gross domestic product,” Intel CEO Pat Gelsinger said. “Intel is developing the technologies and solutions that empower customers to seamlessly integrate and effectively run AI in all their applications — in the cloud and, increasingly, locally at the PC and edge, where data is generated and used.”

“Intel is on a mission to bring AI everywhere through exceptionally engineered platforms, secure solutions and support for open ecosystems. Our AI portfolio gets even stronger with today’s launch of Intel Core Ultra ushering in the age of the AI PC and AI-accelerated 5th Gen Xeon for the enterprise,” Gelsinger said.