Global Market Access: Nemko Group AS Testing Services

Evolving governance landscape for AI

Written by Nemko | March 2, 2026

The governance landscape for artificial intelligence is evolving rapidly. Regulations such as the European Union's Artificial Intelligence Act (AI Act) signal a clear shift toward outcome-oriented oversight, emphasizing trustworthy AI that respects fundamental rights as AI systems increasingly affect people's lives, opportunities, and freedoms.     

Following the United Nations High Commissioner's report on human rights and technical standard-setting processes, organizations developing or deploying AI are now expected to demonstrate not only technical robustness and regulatory compliance, but also that their systems operate in ways that are fair, transparent, and respectful of rights in real-world contexts.

For many organizations, standards are the primary mechanism through which responsible AI practices are implemented at scale, across products, teams, and markets. Yet some obligations, such as the protection of fundamental rights under the EU AI Act, are highly context-dependent. While essential for building and maintaining trust in AI, they are difficult to translate fully into standardized requirements.

The Freedom Online Coalition has emphasized that technical standards should be developed with meaningful input from civil society and human rights experts to ensure they support, rather than undermine, rights protection.

Technical standards can influence system design choices around transparency, accuracy, robustness, and documentation, and for conformity assessment they offer measurable, repeatable criteria. Fundamental rights, however, often involve normative judgement: questions of proportionality, fairness, necessity, and social acceptability that cannot always be reduced to technical thresholds.

Standards alone therefore cannot fully guarantee respect for fundamental rights, even though they remain essential building blocks for AI governance.

In January 2026, CEN and CENELEC signed a Memorandum of Understanding with the EU Agency for Fundamental Rights (FRA). The agreement establishes cooperation within Joint Technical Committee 21 (JTC 21), which is responsible for developing AI standards in support of the EU AI Act. Through this collaboration, FRA contributes its expertise on fundamental rights to inform the development of AI standards.

More insight is provided in the article at this link.

For further information about, or to apply for, Nemko's AI-related services, please contact
Alicja.Halbryt@nemko.com or Bas.Overtoom@nemko.com.