How to Prepare for Compliance with the EU’s AI Act

In August 2024, the world's first comprehensive regulatory framework focused on artificial intelligence (AI) risk came into force. The EU AI Act arrives alongside recent updates to labor, data protection and cybersecurity regulations, such as the Network and Information Security Directive (NIS2) and the Digital Operational Resilience Act (DORA), which follow closely on the heels of this landmark legislation.

Its broad scope and extraterritorial applicability underscore the urgency for organizations worldwide to familiarize themselves with the obligations and prepare for compliance.

The AI Act's primary objective is to mitigate the risks associated with AI technologies while fostering innovation and trust in AI systems. It categorizes AI applications into different risk levels, with stringent requirements for high-risk systems. These include applications in critical infrastructure, education, employment and essential public services. 

The AI Act’s Risk-Based Framework

Organizations are encouraged to develop codes of conduct for high-risk AI and codes of practice for general-purpose AI. They must also train stakeholders on the application of AI to ensure ethical and responsible use.

The regulation sets out the following risk-based framework: 

  • Minimal risk: AI systems that pose little or no risk to citizens’ rights or safety. There are no mandatory requirements, but organizations may still wish to implement codes of conduct.
  • High risk: These systems must comply with strict requirements, including: i) risk-mitigation systems; ii) an obligation to ensure high-quality data sets; iii) logging of activity; iv) detailed documentation; v) clear user information; vi) human oversight; and vii) a high level of robustness, accuracy and cybersecurity. Providers of high-risk AI systems established outside the EU, such as providers of medical devices and critical infrastructure systems, will be required to appoint an authorised representative in the EU in writing.
  • Unacceptable risk: AI systems that present a clear threat to people’s fundamental rights, such as certain biometric and emotion recognition systems used in the workplace. These will be banned from early 2025.
  • Specific transparency (limited) risk: AI systems that must comply with transparency requirements. Users must be made aware that they are interacting with a machine, and providers must ensure synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated (a marking sketch follows this list).
  • Systemic risk: General-purpose AI models that could have a negative impact on public health, safety, security and fundamental rights in the EU.
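To illustrate the machine-readable marking requirement, here is a minimal sketch that embeds a provenance flag in a PNG’s metadata using the Pillow library. The metadata keys and generator label are assumptions made for the example; the AI Act does not prescribe a specific format, and production systems more commonly rely on standards such as C2PA content credentials.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable synthetic-content flag in a PNG.

    Illustrative only: the metadata keys below are hypothetical,
    not a format mandated by the AI Act.
    """
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # provenance flag
    metadata.add_text("generator", "example-model-v1")  # placeholder model name
    image.save(dst_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect the flag, e.g. before labelling content for end users."""
    return Image.open(path).text.get("ai_generated") == "true"
```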

Each EU Member State will have to designate one or more national competent authorities to supervise the application and implementation of the AI Act, as well as carry out market surveillance activities. Member States will also have to lay down effective, proportionate and dissuasive penalties, including administrative fines. The AI Act provides for penalties of up to €35m or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Meanwhile, in parallel with the AI Act's enforcement, the European Commission has launched the European AI Office, which will enforce and supervise the new rules for general-purpose AI models. Its responsibilities include drawing up codes of practice to detail the rules, classifying models that pose systemic risk, and monitoring the effective implementation of, and compliance with, the AI Act.

Navigating Regulatory Complexities

Understanding the privacy challenges posed by AI is essential for effective regulatory oversight. AI systems often process vast amounts of personal data, raising data protection and privacy concerns. The AI Act addresses these concerns by emphasising the principles of data minimization, purpose limitation and transparency.

Organizations must implement robust data protection measures and ensure that AI systems are designed and operated in compliance with the General Data Protection Regulation (GDPR).
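As a concrete illustration of data minimization, the sketch below strips direct identifiers from a record and pseudonymizes the user ID before the data reaches an AI pipeline. The field names and salted-hash approach are assumptions for the example, not techniques mandated by the GDPR or the AI Act.

```python
import hashlib

# Fields treated as direct identifiers in this hypothetical schema.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def minimize_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and pseudonymize the user ID.

    Illustrative only: real deployments also need a lawful basis,
    retention limits and documented data governance.
    """
    minimized = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in minimized:
        # Salted hash as a simple pseudonym; key management is out of scope here.
        digest = hashlib.sha256((salt + str(minimized["user_id"])).encode())
        minimized["user_id"] = digest.hexdigest()[:16]
    return minimized

# Example with a fabricated record.
record = {"user_id": 42, "name": "Ada", "email": "ada@example.com", "score": 0.87}
print(minimize_record(record, salt="rotate-me"))
```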

To navigate the complexities of the Act, companies should invest in stakeholder involvement, accountability and transparency within their organizations. Engaging stakeholders, including employees, customers and regulators, is crucial for building trust and ensuring that AI systems align with societal values and ethical standards.

Accountability mechanisms, such as internal audits and impact assessments, can help identify and mitigate risks associated with AI applications. Transparency, on the other hand, fosters trust by providing clear and accessible information about how these systems operate and make decisions.
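For instance, a lightweight decision log, as sketched below, can support internal audits by recording what an AI system decided and why. The record fields are illustrative assumptions, not a schema defined by the AI Act.

```python
import json
import time

def log_ai_decision(logfile, system_id: str, inputs_summary: str,
                    decision: str, rationale: str, model_version: str) -> None:
    """Append one audit record per AI-assisted decision.

    Sketch only: field names are assumptions, and a production system
    would add access controls, integrity protection and retention
    aligned with its record-keeping obligations.
    """
    record = {
        "timestamp": time.time(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs_summary": inputs_summary,  # avoid logging raw personal data
        "decision": decision,
        "rationale": rationale,
        "human_reviewed": False,  # flipped once a human overseer signs off
    }
    logfile.write(json.dumps(record) + "\n")

# Example: one record from a hypothetical CV-screening system.
with open("ai_decisions.jsonl", "a") as f:
    log_ai_decision(f, "cv-screening-v2", "CV features only",
                    "shortlist", "skills match >= 0.8", model_version="2024.08")
```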

The extraterritorial applicability of the regulation means that organizations outside the EU must also comply with its provisions if they offer AI products or services within the EU market. This global reach underscores the importance of international cooperation and harmonization of AI regulations. As other jurisdictions consider their own AI regulatory frameworks, the EU AI Act may serve as a model for comprehensive and risk-based AI governance.

The EU AI Act represents a pioneering effort to regulate AI technologies comprehensively and responsibly. Its broad scope and stringent requirements highlight the EU's commitment to mitigating AI risks while fostering innovation and trust.

Organizations worldwide must familiarize themselves with their obligations and invest in stakeholder involvement, accountability and transparency to ensure compliance. The establishment of the European AI Office further reinforces the EU's proactive approach to governance, ensuring that the regulatory framework remains dynamic and responsive to emerging challenges.

As AI continues to evolve, the Act sets a precedent for responsible and ethical AI governance, paving the way for a safer, more secure and trustworthy AI ecosystem.
