Artificial intelligence is often discussed through the lens of data protection, ethics, and algorithmic transparency. However, as AI systems become increasingly embedded in physical products, another regulatory dimension is gaining importance, namely product regulations. AI is no longer confined to digital platforms or decision-support software. It is integrated into industrial systems, consumer electronics, HVAC systems, medical devices, smart cameras, robotics, and IoT devices.
When AI becomes part of a regulated product, compliance is governed by AI-specific legislation combined with product safety frameworks, market regulation, and other digital regulations. This combination dictates how AI must be designed, assessed, and maintained.
Traditional AI governance frameworks focus on bias and discrimination, transparency and explainability, human oversight, data governance, and fundamental rights, while product regulations typically focus on physical risks to persons or property and on conformity assessment before market access. When AI is embedded in hardware, these focus points converge, and the compliance analysis shifts from purely AI governance questions to product safety questions.
The European AI Act is the first comprehensive horizontal regulation governing AI systems across the European Union. It establishes a risk-based framework that classifies AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories, with escalating compliance obligations. Individual countries establish complementary regulations, such as Italy with its national AI law (132/2025) and Spain with its AESIA body.
However, the AI Act is only one piece of the puzzle. Even if an embedded AI system does not qualify as high-risk under the AI Act, product legislation may still impose significant obligations. AI increasingly interacts with established frameworks governing machinery, medical devices, radio equipment, pressure equipment, general consumer safety, cybersecurity, and market surveillance of products. Most of these frameworks were not written with adaptive, self-learning systems in mind, yet they now apply to products that contain them.
As AI continues to move from digital platforms into safety-critical hardware, companies must transition from isolated AI governance to AI-enabled product governance by design and integrate AI into their product compliance journey from design to decommissioning.
More insight is provided in this publication by Nemko Digital.
For further information, or to apply for Nemko's AI-related services, please contact
Alicja.Halbryt@nemko.com or Bas.Overtoom@nemko.com.
(This article is based on an article by Nemko Digital; edited by T.Sollie)