Navigating the EU AI Act
- July 23, 2025
- William Payne

In August 2025, a significant part of the EU’s AI Act, affecting general purpose AI, comes into effect. The Act aims to provide a comprehensive legal framework for AI, and affects a wide swathe of IoT, “smart” and connected technologies. These include smart cities, smart and digital health, smart policing, logistics, smart homes and buildings, self-driving cars and transportation, and manufacturing.
The new regulations affect not only developers but also deployers and commercial users of AI systems within the EU marketplace, including many IoT and smart tech sectors. Companies based outside the EU are also affected by the legislation, if their services or products are sold or accessed within the EU.
The areas of smart technology and IoT most likely to be affected are smart cities, policing, building systems, smart homes, and smart health. Several parts of the Act directly reference these areas, with a number of AI applications in smart cities and policing defined as “unacceptable” or “high risk” under the Act. Applications of AI to smart homes, smart health, and buildings are defined, ipso facto, as “high risk”.
What is AI?
The definition of AI in the Act appears to be loose, and will probably be subject to challenges. A key part of the definition is that IT or electronic systems that can alter their behaviour or adapt are defined as AI, while those that cannot are classified as not being AI.
It is possible that the Act will redefine some existing systems, including those employed in IoT and smart applications, as now being AI, and impose new regulatory constraints and requirements on them. It is also possible that systems that are based on AI technologies may, under the Act, be found not to be AI. Examples of the latter may be minified or statically optimised AI production systems.
Unacceptable Risk AI
The EU defines several levels of risk that can be connected with use of AI. These range from “unacceptable risk” at the top, through “high risk” and “limited risk”, down to “minimal risk”.
Unacceptable risk systems, which were banned under the Act in February 2025, include systems that employ subliminal manipulation; exploitation of vulnerabilities; social scoring; real-time public remote biometric identification; gender, racial, sexual or political biometric categorisation; workplace emotional state assessment; predictive policing profiling; facial image scraping.
Despite the outlawing of systems for remote biometric identification or for predictive policing profiling, it is likely that some European governments are employing or developing such systems, principally for counter-terrorism purposes.
The explicit banning of the use of social scoring technologies may affect firms providing data under the requirements of China’s Social Credit scoring system. It is open to question whether firms that contribute data to China’s Social Credit system may be challenged under the social scoring provisions of the Act in the European Union. This might particularly affect e-commerce platforms, apps employing e-commerce, financial institutions, banking apps, and social media platforms or apps integrated with social media platforms, operating or providing services in China.
High Risk AI
“High risk” AI systems are those AI systems employed or embedded in products or technologies covered by EU harmonisation legislation. Essentially, that means AI systems implemented in products or technologies such as medical devices, cars and transport, manufacturing systems, building systems, etc. These AI systems are not banned, but they must undergo extensive quality assurance and regulation covering the entire development and deployment lifecycle. Such systems must also allow for human oversight and overriding, and manufacturers must continuously monitor their systems and retain the ability to correct, disable, withdraw or recall any deployed system at all times.
Deployers of high risk AI systems must also carry out Fundamental Rights Impact Assessments (FRIA), assessing how deployment of the AI system might affect, over the lifetime of its deployment, the fundamental rights of both individuals and of groups within society, as well as analysing specific risks of harm. The results of these analyses must be notified to the market surveillance authority before such systems can be deployed.
Deployers also have responsibility to continuously ensure the quality of data inputs into the AI systems they are deploying.
Annex III of the Act details applications that are deemed “high risk” independently of falling under EU harmonisation legislation. This includes AI systems used in critical infrastructure, including in telecoms, digital infrastructure, energy grids, utilities, and road and rail transportation; policing; border and airport controls; and election and voting machines and technologies.
The Act’s sections on “high risk” AI systems will become applicable in two stages, with the first, involving applications named under Annex III, coming into effect in August 2026, and the second, involving categories falling under EU harmonisation legislation, in August 2027.
Low & No Risk AI
AI systems classified as “limited risk” are those that pose a risk of manipulation or deceit, but not a severe threat to fundamental rights or safety. For these systems, the EU AI Act imposes lighter transparency obligations. However, it does require that human users are informed when interacting with AI.
In addition, all AI-generated content must be clearly and visibly labelled. This includes AI-generated images or videos. However, law enforcement agencies can be exempted from this labelling requirement.
The lowest level of risk defined by the EU AI Act is “minimal risk”, which translates to no risk. This category includes all AI systems that do not fall under the unacceptable, high, or limited risk classifications. The majority of AI applications currently available on the EU single market, such as spam filters, AI-enabled video games, and recommendation algorithms for streaming services, are considered minimal risk.
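The four-tier structure described above can be summarised as a simple lookup. This is a hypothetical sketch for orientation only: the tier names come from the Act, but the example use cases and one-line obligation summaries are illustrative simplifications, not legal guidance.

```python
# Sketch of the EU AI Act's four risk tiers. Tier names are from the Act;
# examples and obligation summaries are simplified illustrations.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "real-time remote biometric identification"],
        "obligation": "banned (since February 2025)",
    },
    "high": {
        "examples": ["medical devices", "critical infrastructure", "border controls"],
        "obligation": "quality assurance, human oversight, FRIA, continuous monitoring",
    },
    "limited": {
        "examples": ["chatbots", "AI-generated images and video"],
        "obligation": "transparency: disclose AI interaction, label AI content",
    },
    "minimal": {
        "examples": ["spam filters", "video-game AI", "recommendation engines"],
        "obligation": "no specific obligations under the Act",
    },
}

def obligations_for(tier: str) -> str:
    """Look up the (simplified) obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]
```

In practice, of course, classifying a real system requires legal analysis of the Act and its annexes, not a dictionary lookup.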
General Purpose AI
The EU AI Act introduces specific provisions for General-Purpose AI (GPAI) models. These come into effect at the beginning of August 2025. GPAI models are defined as models able to perform a wide range of distinct tasks “competently”, and which can be adapted for various applications.
Unlike AI systems, GPAI models are components that can be integrated into broader AI systems, including high-risk ones.
GPAI models are regulated based on the extent of systemic risk they pose. All GPAI models must comply with certain basic requirements, including: maintaining up-to-date technical documentation; providing information to downstream AI system providers; implementing policies to comply with EU copyright law, including publishing summaries of copyrighted data used for training; and labelling any AI-generated content.
More stringent obligations apply to GPAI models with “systemic risk”. These are typically those with high-impact capabilities or trained with a cumulative amount of computation exceeding 10^25 floating point operations (FLOPs), such as GPT-4 or Gemini Ultra. These models must undergo extensive evaluation, implement advanced practices for managing systemic risks, ensure cybersecurity, and report serious incidents directly to the European Commission.
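The 10^25 FLOP threshold can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses the common 6 × parameters × training-tokens approximation for training compute; that heuristic, and the parameter/token figures in it, are assumptions for illustration, not figures from the Act or from any real model.

```python
# Hypothetical sketch: does a GPAI model's estimated training compute cross
# the Act's 10^25 FLOP systemic-risk threshold? Uses the widely cited
# ~6 FLOPs per parameter per training token approximation (an assumption,
# not part of the Act). All figures below are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated cumulative training compute exceeds 10^25 FLOPs."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 7-billion-parameter model trained on 2 trillion tokens: ~8.4e22 FLOPs,
# well below the threshold.
small = presumed_systemic_risk(parameters=7e9, training_tokens=2e12)   # False

# A 500-billion-parameter model trained on 10 trillion tokens: ~3e25 FLOPs,
# above the threshold.
frontier = presumed_systemic_risk(parameters=5e11, training_tokens=1e13)  # True
```

Note that the compute threshold is only one trigger: the Act also captures models with “high-impact capabilities” regardless of training FLOPs.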
The development of Codes of Practice (CoP) for GPAI is ongoing. This is being facilitated by the EU AI Office.


