Google brings more intelligence to the edge
August 2, 2018
Google has announced two products aimed at helping users develop and deploy intelligent connected devices at scale: Edge TPU, a hardware chip, and Cloud IoT Edge, a software stack that extends Google Cloud’s AI capabilities to gateways and connected devices.
This lets users build and train machine learning (ML) models in the cloud, then run those models on the Cloud IoT Edge device through the Edge TPU hardware accelerator.
Edge TPU is a purpose-built ASIC designed to run TensorFlow Lite ML models at the edge. When designing Edge TPU, Google said it focused on optimising for performance per watt and performance per dollar within a small footprint. Edge TPUs are designed to complement the Cloud TPU offering, so users can accelerate ML training in the cloud, then run fast ML inference at the edge.
“Your sensors become more than data collectors,” said Injong Rhee, vice president for IoT at Google Cloud. “They make local, real-time, intelligent decisions.”
Cloud IoT Edge is software that extends Google Cloud’s data processing and machine learning capabilities to gateways, cameras and end devices, making IoT applications smarter, more secure and more reliable. It lets users execute ML models trained in Google Cloud on the Edge TPU or on GPU- and CPU-based accelerators. Cloud IoT Edge can run on Android Things or Linux OS-based devices, and its key components are:
- A runtime for gateway-class devices, with at least one CPU, to store, translate, process and derive intelligence locally from data at the edge, while seamlessly interoperating with the rest of the Cloud IoT platform.
- The Edge IoT Core runtime that more securely connects edge devices to the cloud, enabling software and firmware updates and managing the exchange of data with Cloud IoT Core.
- The TensorFlow Lite-based Edge ML runtime that performs local ML inference using pre-trained models, reducing latency and increasing the versatility of edge devices. Because the Edge ML runtime interfaces with TensorFlow Lite, it can execute ML inference on a CPU, GPU or Edge TPU in a gateway-class device, or in an end device such as a camera.
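The train-in-cloud, infer-at-edge workflow the components above describe can be sketched with the standard TensorFlow Lite Python API. This is a minimal, hypothetical illustration, not Google's Edge runtime itself: a trivial Keras model stands in for a cloud-trained model, is converted to the TensorFlow Lite format the Edge ML runtime consumes, and is then executed locally with the TFLite interpreter.

```python
import numpy as np
import tensorflow as tf

# Stand-in for a model trained in the cloud (a real deployment would
# train something like MobileNet and export it the same way).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert to the TensorFlow Lite flatbuffer format that the Edge ML
# runtime (and the Edge TPU, after further compilation) executes.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# On the edge device: load the model and run inference locally,
# with no round trip to the cloud.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

Targeting the Edge TPU itself additionally requires quantising the model and compiling it for the accelerator; on a plain CPU or GPU the flatbuffer runs as shown.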
By running on-device machine learning models, Cloud IoT Edge with Edge TPU provides faster predictions for critical IoT applications than general-purpose IoT gateways, all while ensuring data privacy and confidentiality. Cloud IoT Edge and Edge TPU have also been tested to natively run open-source reference models such as MobileNet and Inception V3.
Cloud IoT Edge can process and analyse images, videos, gestures, acoustics and motion locally on edge devices, instead of needing to send raw data to the cloud and then wait for a response. This local processing addresses certain industry-specific compliance needs and reduces data privacy risks. And Cloud IoT Edge uses a JSON Web Token (JWT) to authenticate edge devices, so the private key never leaves the device.
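The JWT scheme works because the device only ever presents a short-lived signed token, never the key itself. The sketch below, using only Python's standard library, shows the token structure a device would present; note that Cloud IoT Core actually requires asymmetric RS256 or ES256 signatures with a per-device private key, whereas this illustration uses symmetric HS256 purely so it runs without a crypto dependency. The project ID and secret are made-up placeholders.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def make_device_jwt(project_id: str, key: bytes, ttl_seconds: int = 3600) -> str:
    """Build a JWT shaped like the one an edge device presents on connect.

    Real devices sign with RS256/ES256 using a private key that never
    leaves the device; HS256 is used here only to keep the example
    self-contained.
    """
    now = int(time.time())
    header = {"alg": "HS256", "typ": "JWT"}
    # Cloud IoT Core expects the cloud project ID as the audience claim,
    # plus issued-at and expiry timestamps.
    claims = {"iat": now, "exp": now + ttl_seconds, "aud": project_id}
    signing_input = (
        b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    )
    signature = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)


token = make_device_jwt("my-gcp-project", b"demo-device-secret")
```

The cloud side verifies the signature against the device's registered public key (or shared secret here) and rejects the token once `exp` passes, so a stolen token is only useful briefly.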
To jump-start development and testing with the Edge TPU, Google has built a development kit that includes a system-on-module (SoM) combining Google’s Edge TPU, an NXP CPU, Wi-Fi and Microchip’s secure element in a compact form factor. It will be available to developers in October.
“We’re also working with our IoT ecosystem partners to develop intelligent devices that take advantage of Google Cloud IoT innovations at the edge,” said Rhee. “Semiconductor partners will create the SoM with the Edge TPU chip inside. Device makers will make industrial IoT gateways – like the kind used in factories, locomotives, oil rigs and more – that include the SoM and Edge TPU.”
Shingyoon Hyun, CTO of LG CNS, added: “Our intelligent vision inspection enables us to deliver enhanced quality and efficiency in the factory operations of various LG manufacturing divisions. With Google Cloud AI, Google Cloud IoT Edge and Edge TPU, combined with our conventional MES systems and years of experience, we believe the smart factory will become increasingly more intelligent and connected. With intelligent vision inspection, we are eager to create a better workplace, raise product quality and save millions of dollars each year. Google Cloud AI and IoT technologies with LG CNS expertise make this possible.”
Cloud IoT Edge, Edge TPU and Cloud IoT Core are opening up more possibilities with the IoT. With data processing and ML capabilities at the edge, devices such as robotic arms, wind turbines and smart cars can now act on the data from their sensors in real time and predict outcomes locally.
"Smart Parking enables our customers to deploy and manage frictionless parking services for both on-street and off-street situations,” said John Heard, CTO of Smart Parking. “We are very excited about our ability to use Cloud IoT Edge and Edge TPU for building ML-enabled parking experiences for our customers. At Smart Parking, our mission is to re-invent the parking experience for every user. The introduction of Cloud IoT Edge, Google Cloud IoT enables us to deliver on this promise in new ways within our SmartSpot gateway products.”
Romain Crunelle, CTO at XEE, added: “At XEE, we’re working to make driving simpler, safer and more economical through our connected car platform. Cloud IoT Edge and Edge TPU will help us to address use cases such as driving analysis, road condition analysis and tyre wear and tear in real time, and in a much more cost-efficient and reliable way. Enabling accelerated ML inference at the edge will enable the XEE platform to analyse images and radar data from connected cars faster, detect potential driving hazards and alert drivers with real-time precision.”
And David Gottlieb, general manager for global retail at Trax, said: “Trax is helping retailers build a sound foundation for digital transformation. Cloud IoT Edge and Edge TPU will help address critical use cases such as improving on-shelf availability, optimising click-and-collect processes and modernising the shopping experience. This Google technology will enable accelerated machine learning at the edge. In-store images are captured and flow through the Trax platform, where those digitised shelf images are analysed at an increasingly faster rate, providing retailers with the agility both to respond to issues in real time and to consistently delight shoppers.”