Four ways AI regulation could affect AIoT
- July 24, 2024
- Michael Nadeau
Technology companies and their customers are rushing ahead with the development and deployment of artificial intelligence (AI). The fast pace of adoption and the unknown long-term consequences of deploying the technology are driving calls to regulate AI, and governments are responding.
AI is becoming a critical component of many IoT applications, now commonly referred to as the artificial intelligence of things (AIoT). Many of those applications will be subject to AI regulation that is currently being debated. AI regulation seeks to address four main areas that will directly affect AIoT:
Privacy and data protection: AI developers scoop up immense amounts of data to feed their training models. Inadvertently or deliberately, this is likely to include the personally identifiable information (PII) of individuals. Protection of that PII is a common theme among AI regulators as they seek to ensure the privacy of their citizens and prevent them from becoming victims of fraud.
This is relevant for AIoT applications that collect or use some kind of PII, which could be anything from a Social Security number to biometric data such as a fingerprint or eye scan. Existing data protection and privacy regulations that are not specific to AI vary in what they define as PII, and the same is likely to be true of AI regulations around the globe.
The OECD’s AI, Data Governance, and Privacy report cites the potential of AI to harm vulnerable populations. Relevant to AIoT, it mentions the risk to the data of people monitored with wearable devices, including patients, the elderly, and workers monitored for physical fatigue.
The best advice here for AIoT applications is to know what collected data could potentially identify a person and in which regions that data is processed or stored. To comply with existing data protection and privacy regulations, many organizations have decided to minimize the PII they collect and, in some cases, to move the processing and storage of that data to less restrictive regions.
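One way to put that minimization into practice is at the point of collection, before telemetry ever leaves a device gateway. The sketch below illustrates the idea; the payload shape, field names, and salt handling are illustrative assumptions, not taken from any particular AIoT product.

```python
import hashlib

# Fields the application actually needs; everything else is dropped.
ALLOWED_FIELDS = {"device_id", "timestamp", "temperature", "battery_level"}

# Fields known to carry PII; hashed so records can still be correlated
# without storing the raw identifier.
PII_FIELDS = {"user_email", "fingerprint_id"}

def minimize_payload(payload: dict, salt: bytes) -> dict:
    """Drop unneeded fields and pseudonymize known PII before storage."""
    minimized = {}
    for key, value in payload.items():
        if key in ALLOWED_FIELDS:
            minimized[key] = value
        elif key in PII_FIELDS:
            # One-way salted hash: stored data cannot be reversed to the PII.
            minimized[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()
        # Anything else is silently discarded (data minimization).
    return minimized

if __name__ == "__main__":
    raw = {
        "device_id": "wearable-042",
        "timestamp": "2024-07-24T10:00:00Z",
        "temperature": 36.7,
        "user_email": "patient@example.com",  # PII: pseudonymized
        "gps_trace": [52.52, 13.40],          # not needed: dropped
    }
    print(minimize_payload(raw, salt=b"region-local-secret"))
```

Keeping the salt within the collection region means even hashed identifiers cannot be rejoined to individuals from elsewhere, which supports the regional-storage strategies mentioned above.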
Transparency and explainability: This is a tough one. AI is essentially a black box where algorithms act on their own to analyze incoming data and respond to queries. Regulators want organizations to be able to explain why AI took an action or made a decision in the event that someone claims harm. Inability to explain would then increase the organization’s liability.
Is the onus on the users of AIoT technology or the AIoT vendors? Customers won’t have access to a vendor’s training models or algorithms, so they would have no direct way to respond to requests for explanation. If someone claims harm by an AIoT application gone awry, the accused organization would be dependent on the vendor to explain why the AI acted as it did.
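Even without access to a vendor’s model internals, the user organization can keep its own record of what the AI saw and decided, which gives it something concrete to hand the vendor when an explanation is demanded. Below is a minimal sketch of such a decision audit trail; the `model.predict()` call stands in for a hypothetical vendor API.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("aiot.audit")

def predict_with_audit(model, model_version: str, features: dict):
    """Record what went into and came out of every AI decision."""
    decision = model.predict(features)  # hypothetical vendor API
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the decision to a model build
        "inputs": features,              # exact inputs the model received
        "decision": decision,            # what the AI decided
    }))
    return decision

if __name__ == "__main__":
    class StubModel:  # stand-in for a vendor's AIoT model
        def predict(self, features):
            return "shut_down" if features["vibration"] > 0.8 else "continue"

    print(predict_with_audit(StubModel(), "v1.2.0", {"vibration": 0.93}))
```

Logging the model version alongside the inputs matters: when the vendor updates the model, the organization can still tie a disputed decision to the exact build that made it.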
Safety and security: In 2020, the European Commission issued its Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics, which in part outlined concerns over the threats to people’s physical and cyber safety from the use of AI, including AIoT. The report is cited often in discussions about regulating AI.
The paper advocates a safety- and security-by-design approach for AI systems “to ensure they are verifiably safe at every step, taking at heart the physical and mental safety of all concerned.”
IoT products in general have only recently begun to adopt a security-by-design approach. AI adds another layer of complexity, especially where AIoT applications present a risk of physical harm, for example when operating machinery or managing vehicle traffic flows. In these cases, organizations using AIoT should anticipate the potential risks, take steps to mitigate them, and understand the safety and security practices of their AIoT vendors.
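One widely applicable mitigation is a safety envelope: engineered hard limits that bound any AI-issued control command before it reaches an actuator. The sketch below assumes a hypothetical machining command; the limit values and command fields are illustrative, not drawn from any standard.

```python
# Hard limits set by safety engineering, never by the model.
SPINDLE_RPM_LIMIT = 3000
FEED_RATE_LIMIT_MM_S = 50.0

def clamp_command(command: dict) -> dict:
    """Never let an AI decision exceed engineered physical limits."""
    safe = dict(command)
    safe["spindle_rpm"] = min(command["spindle_rpm"], SPINDLE_RPM_LIMIT)
    safe["feed_rate_mm_s"] = min(command["feed_rate_mm_s"], FEED_RATE_LIMIT_MM_S)
    return safe

if __name__ == "__main__":
    ai_command = {"spindle_rpm": 4200, "feed_rate_mm_s": 35.0}
    print(clamp_command(ai_command))  # spindle_rpm clamped to 3000
```

The point of the design is that safety does not depend on the AI being right: even a badly wrong model output cannot push the machine outside its engineered operating range.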
Intellectual property (IP) protection: IP protection applies to the data used to train the AI models and the models and algorithms themselves. Most relevant for AIoT is the training data. When an AI vendor feeds data into its training model, it creates a risk of exposing that data. For example, an AIoT vendor might use its customers’ data to identify patterns and anomalies to continuously improve the accuracy of its model.
Customers need assurance that precautions are taken to protect proprietary information and to make the data untraceable to its source. Vendor agreements should be clear about what customer data the vendor collects, and customers should have the option to exclude all or certain types of data from being used in the model.
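On the vendor side, honoring those exclusions means filtering customer records before they ever reach the training pipeline. Here is a minimal sketch of such an opt-out filter, assuming a hypothetical consent registry; the customer IDs and data types are illustrative.

```python
from typing import Iterable, Iterator

# Customers who have excluded all of their data from model training,
# and customers who allow only certain data types to be used.
OPTED_OUT = {"customer-17"}
TYPE_EXCLUSIONS = {"customer-42": {"audio", "video"}}

def filter_for_training(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records the customer has agreed may train the model."""
    for record in records:
        owner = record["customer_id"]
        if owner in OPTED_OUT:
            continue  # customer excluded all data from training
        if record["data_type"] in TYPE_EXCLUSIONS.get(owner, set()):
            continue  # customer excluded this data type
        yield record

if __name__ == "__main__":
    batch = [
        {"customer_id": "customer-17", "data_type": "telemetry"},  # dropped
        {"customer_id": "customer-42", "data_type": "audio"},      # dropped
        {"customer_id": "customer-42", "data_type": "telemetry"},  # kept
    ]
    print(list(filter_for_training(batch)))
```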