
Navigating the Impact of the EU AI Act on Your Business: Feeling Prepared?

Taking proactive steps for EU AI Act compliance

Mikel Echegoyen / April 11, 2024

As the world’s first legal framework governing AI, the EU AI Act sets forth requirements and obligations for the use of AI. The countdown for practical implementation has begun.

As discussed in my previous post, the regulation and safety of AI have been high priorities for governments and key topics in industry forums. Following extensive legal, business, political, and technical discussions, as well as lobbying by public and private stakeholders, the EU Parliament approved the AI Act in March 2024, and it came into force in August 2024. This marked the beginning of a phased implementation period during which various elements, including those covering General Purpose AI services like ChatGPT, will become enforceable.

Key Provisions and Compliance Requirements: A risk-based approach

Touted as the first comprehensive AI law globally, the act brings strict requirements for the entire AI ecosystem: providers, users, manufacturers, and distributors of AI systems in the EU market. The act follows other major EU digital legislation, such as the GDPR, the Digital Services Act (DSA), the Digital Markets Act, the Data Act, and the Cyber Resilience Act.

In a nutshell, the act introduces a risk-based approach, categorizing systems into four tiers (unacceptable/prohibited, high, limited, and minimal risk), each with specific compliance requirements. The prohibited category includes practices such as social scoring, exploiting vulnerable people, behavioral manipulation, and facial recognition systems in public spaces for law enforcement (with exceptions).
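The tiered structure can be pictured as a simple lookup. This is an illustrative sketch only: the example use cases and their tier assignments are simplified assumptions for the sake of the illustration, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from highest to lowest."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: strict conformity obligations"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no new obligations"

# Illustrative (non-exhaustive, hypothetical) mapping of use cases to tiers
EXAMPLES = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a use case."""
    return EXAMPLES[use_case]
```

In practice, classification depends on the act's annexes and the system's concrete context of use, so any real mapping needs legal review rather than a static table.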

Implications for Businesses: Your AI journey can become very expensive

The act specifically defines requirements for General Purpose AI (Foundation Models) that pose systemic risks (those trained with greater than 10^25 FLOPS of compute, for example GPT-4), covering transparency on technical documentation and training data, safeguards against unlawful output, energy consumption reporting, and more. The act also carves out exceptions for research and proposes regulatory sandboxes so that SMEs and innovative businesses can develop and test in real-world conditions before placing solutions on the market, enabling safe innovation.

Penalties for noncompliance can reach 7% of worldwide annual turnover (or €35M, whichever is higher) for prohibited systems, and 3% (or €15M, whichever is higher) for high-risk AI systems, with separate fines for providing incorrect or misleading information to authorities. Strict enforcement will be overseen by national authorities designated by each EU Member State, together with a centralized European AI Office for monitoring.
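The "percentage or fixed amount, whichever is higher" structure is worth working through once, since for large enterprises the percentage dominates. A sketch of the arithmetic with a made-up turnover figure:

```python
def penalty_cap_eur(worldwide_turnover_eur: float, prohibited: bool) -> float:
    """Upper bound of the fine: a share of worldwide annual turnover
    or a fixed floor, whichever is higher (7% / 35M EUR for prohibited
    practices, 3% / 15M EUR for high-risk violations)."""
    pct, floor = (0.07, 35_000_000) if prohibited else (0.03, 15_000_000)
    return max(pct * worldwide_turnover_eur, floor)

# A hypothetical company with 2bn EUR turnover and a prohibited system:
# max(0.07 * 2e9, 35e6) = 140M EUR
print(penalty_cap_eur(2_000_000_000, prohibited=True))   # 140000000.0

# A smaller company (100M EUR turnover) with a high-risk violation
# is caught by the fixed floor instead: max(3M, 15M) = 15M EUR
print(penalty_cap_eur(100_000_000, prohibited=False))    # 15000000
```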

Curious to learn more about the responsible use of AI? The nuances of misinformation and disinformation were discussed in our Data Insiders podcast.


Preparing Your Organization: Strategies towards Compliance and Adaptation

The act is not without criticism: it is unclear on specific definitions and its approach to categorizing systems, creating ambiguity about which elements fall under compliance; it adds compliance costs, administrative burden, and steep liability risks; and it attempts to regulate a technology that is nascent, rapidly evolving, and subject to change, raising concerns that it will slow innovation and scare investment away from the EU.

To prepare your organization, we would recommend considering the following:

  1. Arrange awareness sessions with leadership and the teams involved in AI-enabled services, covering all aspects of the EU AI Act (legal, business, technical, operational, compliance), and formulate or update your AI strategy.
  2. Assess and categorize your AI solutions, services, products, and suppliers, and create an initial view of your posture, risk areas, their likelihood, and mitigation strategies.
  3. Streamline your AI risk classification, ensure enterprise-wide AI policies and governance are in place, and consider adopting a dashboard with accountability, transparency, and compliance metrics.
  4. Adopt a "Know your model" policy to evaluate, create, or request model cards (or similar documentation) describing how models were trained and fine-tuned and how they are expected to perform. Track the transparency index for popular models and services, and challenge suppliers and partners on their EU AI Act conformance plans and actions.
  5. Align your AI development practices to at least the equivalent of those in regulatory sandboxes, including trustworthy AI practices, reinforcement learning from human feedback (RLHF), MLOps, and red teaming, and conduct real-world testing with constant service monitoring for unexpected behaviors.
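The "Know your model" record from step 4 can be as simple as a structured document per model. A minimal sketch: the fields and the example model below are illustrative assumptions, not a schema mandated by the act.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal 'know your model' record. Field choices are
    illustrative, not a legally mandated schema."""
    name: str
    provider: str
    intended_use: str
    training_data_summary: str
    fine_tuning: str
    known_limitations: list = field(default_factory=list)
    eval_results: dict = field(default_factory=dict)

# A hypothetical entry for a supplier-provided model
card = ModelCard(
    name="acme-llm-7b",                 # hypothetical model
    provider="Acme AI",                 # hypothetical supplier
    intended_use="internal document summarization",
    training_data_summary="web text plus licensed corpora (per supplier docs)",
    fine_tuning="RLHF on in-house support transcripts",
    known_limitations=["hallucinations", "English-only evaluation"],
    eval_results={"toxicity_rate": 0.004},
)
```

Keeping these records in a central inventory makes step 2's categorization exercise and supplier conversations concrete: a missing or vague field is itself a finding.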

We are living in a fast-paced era of AI innovation and adoption, with leading companies competing aggressively and introducing services early and often, while governments worry about the risks to people posed by bad actors, carelessness, and the immaturity of the technology.

If you feel your organization is walking a razor's edge and in need of a lifeline to make it across speedily and safely, do reach out.

Mikel Echegoyen
Global CTO, Tietoevry Create

Mikel is a senior business and technology leader with broad experience in helping global customers develop and ship next-generation digital products and services. His passion is to collaborate and combine business, technology, and software to create value. At Tietoevry Create, he is responsible for driving technology leadership across the organization and with customers, including technology excellence for solutions, assets and capabilities.
