    December 22, 2023

    Regulating artificial intelligence

    On December 9, 2023, a significant milestone was achieved in the realm of artificial intelligence regulation. The European Union members, pioneering the effort, agreed on the EU AI Act. This legislation serves as the world's first dedicated law on AI, setting a global precedent.

    In this context, it's essential to explore how the EU and other major economies, such as the US and UK, are shaping the AI regulatory landscape and the implications of such regulations for industries and businesses.


    Regulating AI

    In the last decade, the rapid advancement of artificial intelligence (AI) has sparked a global conversation on regulation. Governments and organizations worldwide have grappled with the challenge of balancing innovation with ethical and safety concerns, amid growing demand for digital trust in AI systems.

    This journey reached a significant milestone with the introduction of the EU AI Act in December 2023, a pioneering step towards formalizing AI regulations.

    However, it's important to note that this is not an isolated effort. Many countries, including the US and the UK, are also actively developing their own frameworks to govern the use of AI. This article delves into these various regulatory initiatives, highlighting their unique approaches and potential impacts.

    EU AI Act

    The EU AI Act, a trailblazer in AI legislation, aims to ensure the safety, legality, trustworthiness, and respect for fundamental rights in AI systems. It categorizes AI applications based on risk (unacceptable, high, limited, and minimal), with strict compliance required for high-risk sectors like healthcare and transport. The AI Act requires any in-scope AI system deployed in the EU to comply, regardless of where the provider is based. Such wide applicability makes compliance with the EU AI Act a necessity for a broad range of sectors.

    The milestone of 9 December follows a series of steps, including the first proposed regulatory framework for AI in April 2021 and the adoption of the European Parliament's negotiating position by its members in June 2023. There will be a two-year grace period to reach compliance, starting once the final form of the law is available; the formal agreement of the European Council members is expected over the coming weeks. Various member states have already started to publish information and updates on what is about to come.

    Other regulations

    In the United States, the National Institute of Standards and Technology (NIST) has been instrumental in developing AI guidelines. Their focus is on creating standards that enhance AI security and reliability, fostering innovation while safeguarding ethical and societal values.

    The approach of the United Kingdom to AI regulation emphasizes a balance between innovation and ethical considerations. The UK government is working on frameworks that address data privacy, AI ethics, and accountability, ensuring AI's benefits are harnessed responsibly.

    Similar efforts are underway in other countries as well. Despite initiatives to create global standards, businesses should expect to face multiple regulations and standards in this space across the globe.

    ISO/IEC 42001:2023

    Published in December 2023, ISO/IEC 42001 “is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

    ISO/IEC 42001 is the world’s first AI management system standard, providing valuable guidance for this rapidly changing field of technology. It addresses the unique challenges AI poses, such as ethical considerations, transparency, and continuous learning. For organizations, it sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance.” – Ref. iso.org

    Other standards

    Standards for artificial intelligence are not new. They range from standards addressing requirements for products with embedded artificial intelligence to those addressing processes and management systems. Moreover, there are specific standards for particular AI technologies and applications.

    These standards may address topics such as risk management, the data life cycle, machine learning, neural networks, trustworthiness of AI, and more. Navigating these standards and identifying those applicable to your business and/or products is an important part of complying with existing as well as upcoming regulations.

    The impact

    The implementation of the upcoming regulations will profoundly impact various industries. Healthcare, finance, automotive, and the public sector are likely to be the first to experience the effects. Standards already address the use of artificial intelligence in healthcare and in medical devices.

    In these sectors, AI algorithms will undergo stringent tests for safety, accuracy, and bias before deployment. Financial institutions will need to ensure their AI-driven decision-making processes are transparent and fair. The automotive sector, especially in the context of autonomous vehicles, will face more rigorous safety assessments. Public sector applications, like facial recognition technologies, will be scrutinized for privacy and human rights concerns.

    These are just some examples from the dozens of sectors that will need to demonstrate compliance with the upcoming regulations in the EU and beyond.

    The approach

    Businesses must prepare for the upcoming regulations. Key steps include understanding the regulations and their applicability to your business, services, or products; building the necessary competence; conducting risk assessments to identify and categorize the potential risks of various AI applications; and designing your AI policies and frameworks to comply with the relevant standards.
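    The risk-categorization step above can be pictured as a simple triage exercise. The sketch below is illustrative only: the tier names echo the EU AI Act's broad categories, but the domain lists and prohibited-practice keywords are hypothetical placeholders, not a legal mapping, and are no substitute for proper legal analysis.

```python
# Illustrative sketch: map an AI use case to an EU-AI-Act-style risk tier.
# The keyword sets below are hypothetical and non-exhaustive.

# Domains the Act broadly treats as high risk (illustrative selection).
HIGH_RISK_DOMAINS = {"healthcare", "transport", "finance", "law_enforcement", "education"}

# Practices falling under the Act's "unacceptable" (prohibited) tier (illustrative).
UNACCEPTABLE_PRACTICES = {"social_scoring", "subliminal_manipulation"}

def triage_risk(domain: str, practices: set[str]) -> str:
    """Return a coarse risk tier for an AI use case (illustration only)."""
    if practices & UNACCEPTABLE_PRACTICES:
        return "unacceptable"    # prohibited outright under the Act
    if domain in HIGH_RISK_DOMAINS:
        return "high"            # strict conformity assessment required
    return "limited/minimal"     # mainly transparency obligations

# Example: an AI diagnostic-support tool used in a hospital
print(triage_risk("healthcare", set()))  # prints: high
```

    In practice, each use case flagged "high" by such a triage would then feed into the competence-building and policy-design steps, where detailed conformity requirements are worked out per application.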

    The summary

    The EU AI Act marks a significant step in global AI regulation, with similar initiatives underway in other parts of the world. This regulatory wave will primarily impact high-risk sectors, mandating businesses to adapt swiftly. By focusing on risk assessment, compliance, ethical AI development, transparency, and stakeholder engagement, businesses can navigate these changes effectively. As AI continues to evolve, staying ahead of regulatory frameworks will be crucial for sustainable and ethical AI deployment.


    Contact us to learn more.


    Dr. Shahram G Maralani

    Shahram G. Maralani (born 1976) joined Nemko in August 2022. Shahram has more than two decades of international senior leadership experience, including twenty years in various executive roles in the Testing, Inspection and Certification (TIC) industry (DNV). He has a PhD in Management, an MBA and a BSc...
