Neurala AI Explainability for Manufacturing
- January 6, 2021
- William Payne
Vision AI software company Neurala has launched an AI explainability feature for industrial and manufacturing applications. The new feature is designed to help manufacturers improve quality inspections by accurately identifying the objects in an image that are causing a particular problem or presenting an anomaly.
With Neurala’s explainability feature, manufacturers can determine whether an image is genuinely anomalous, or whether the error is a false positive caused by conditions in the environment such as lighting. This gives manufacturers a more precise understanding of what went wrong, and where in the production process, allowing them to take the appropriate action, whether that means fixing an issue in the production flow or improving image quality.
Manufacturers can use Neurala’s explainability feature with either Classification or Anomaly Recognition models. Explainability highlights the area of an image that caused the vision AI model to make a specific decision about a defect. For Classification, that decision is which class to assign an object to; for Anomaly Recognition, it is whether an object is normal or anomalous.
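To make the idea concrete, the sketch below shows one generic way such region highlighting can be produced: an occlusion-based sensitivity map. This is not Neurala's implementation, and the `score_defect` classifier is a hypothetical stand-in; the example only illustrates the general principle of identifying which image region most influences a model's defect decision.

```python
# Minimal sketch of occlusion-based explainability for an image classifier.
# NOTE: this is an illustrative example, not Neurala's method; `score_defect`
# is a hypothetical stand-in for any model returning a defect/anomaly score.
import numpy as np


def score_defect(image: np.ndarray) -> float:
    """Hypothetical classifier: bright pixels stand in for a defect."""
    return float(image.mean())


def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Slide a neutral patch over the image and record how much the defect
    score drops; the regions with the largest drop are the ones that most
    'explain' the model's decision."""
    base = score_defect(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heatmap[i // patch, j // patch] = base - score_defect(occluded)
    return heatmap


if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:30, 40:52] = 1.0  # synthetic "defect" region
    hm = occlusion_heatmap(img)
    i, j = np.unravel_index(hm.argmax(), hm.shape)
    print(f"Most influential patch: rows {i*8}-{i*8+8}, cols {j*8}-{j*8+8}")
```

In practice, a heatmap like this would be overlaid on the inspection image so an operator can see at a glance which region drove the classification or anomaly call.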
Explainability is available as part of Neurala’s cloud solution, Brain Builder, and will soon be available with Neurala’s on-premise software, VIA (Vision Inspection Automation).
“Explainability is widely recognised as a key feature for AI systems, especially when it comes to identifying bias or ethical issues. But this capability has immense potential and value in industrial use cases as well, where manufacturers demand not only accurate AI, but also need to understand why a particular decision was made,” said Max Versace, CEO and co-founder of Neurala. “We’re excited to launch this new technology to empower manufacturers to do more with the massive amounts of data collected by IIoT systems, and act with the precision required to meet the demands of the Industry 4.0 era.”