Intel boosts edge AI and analytics

  • July 1, 2020
  • William Payne

Intel has introduced new processors and software that strengthen its edge AI and analytics offerings. The new products include 3rd Gen Xeon Scalable processors, the company's first mainstream server processors with built-in bfloat16 support. Intel also announced its first AI-optimised FPGAs, which accelerate high-bandwidth, low-latency AI and analytics at the edge.

The 3rd Gen Intel Xeon Scalable processors (code-named “Cooper Lake”) evolve Intel's 4- and 8-socket processor line. The processors are designed for deep learning, virtual machine (VM) density, in-memory database, mission-critical applications and analytics-intensive workloads. Customers refreshing ageing infrastructure can expect an estimated average gain of 1.9 times on popular workloads and up to 2.2 times more VMs compared with equivalent five-year-old 4-socket platforms.

Bfloat16 is a compact numeric format that uses half the bits of today's FP32 format yet achieves comparable model accuracy with minimal, if any, software changes required. The addition of bfloat16 support accelerates both AI training and inference performance on the CPU. Intel-optimised distributions of deep learning frameworks (including TensorFlow and PyTorch) support bfloat16 and are available through the Intel AI Analytics Toolkit. Intel also provides bfloat16 optimisations for its OpenVINO toolkit and the ONNX Runtime environment to simplify inference deployments.
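As an illustration of how few code changes the format typically requires, the sketch below uses stock PyTorch (not Intel's optimised distribution) to cast a tensor to bfloat16 and run a small layer under CPU autocast; the model and tensor shapes are arbitrary examples.

```python
import torch

# bfloat16 keeps FP32's 8-bit exponent but truncates the mantissa,
# covering the same numeric range in half the storage (16 vs 32 bits).
x32 = torch.randn(4, 4, dtype=torch.float32)
x16 = x32.to(torch.bfloat16)
print(x32.element_size(), "bytes per element")  # 4
print(x16.element_size(), "bytes per element")  # 2

# Mixed precision on CPU: supported ops run in bfloat16, the rest stay in FP32.
model = torch.nn.Linear(4, 2)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x32)
print(y.dtype)  # torch.bfloat16
```

The only change to an existing FP32 pipeline is the autocast context (or an explicit cast), which is the "minimal software changes" point the paragraph above makes.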

“The ability to rapidly deploy AI and data analytics is essential for today’s businesses. We remain committed to enhancing built-in AI acceleration and software optimisations within the processor that powers the world’s data centre and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data,” said Lisa Spelman, Intel corporate vice president and general manager, Xeon and Memory Group.