
The time to adopt responsible AI in your business is now

The EU regulation on AI aims to foster innovation while ensuring the responsible development and use of artificial intelligence systems in Europe. What should you know about the AI Act?

Wenche Karlstad / August 27, 2024

Generative AI has made significant progress in recent years, paving the way for emerging players with once-futuristic AI capabilities. At the same time, there is a growing emphasis on the responsible development and use of AI, which is what enables businesses to harness the value of these innovations.

EU policymakers have been working on the AI Act for Europe for several years, and the Act became effective in August.

This means the EU now has a law in place that will not only apply in EU countries but will also set the standard for much of the world. Because the law's provisions apply in stages, the EU is offering a voluntary AI Pact that organizations can follow in the meantime.

What kind of requirements should be placed on AI systems? Will tighter restrictions slow down the rapid pace of innovation? Where is the line between legislation, guidance, and ethical dilemmas? How shall we incorporate responsible AI practices into our culture of developing and using AI systems?

Organizations should proactively assess their use of AI from an ethical and sustainability perspective. By doing so, they can prepare for the requirements of the EU AI Act and develop policies and user guidance for the responsible use of AI.

With the final lawyer-linguist review and the Council's formal endorsement complete, we now know a great deal about the framework that will govern the use of AI systems.

This is the world's first AI regulation, and anyone developing or deploying AI applications in Europe will have to comply with it.

Big wins across industries

We are only beginning to see the outlines of what AI can do for us. The list of areas where AI will impact everyday life and work is growing. Here are a few examples:

  • Several studies document how AI can detect cancer at an earlier stage and with greater accuracy.
  • By using AI, we can provide better healthcare, safer transport services and cheaper and more sustainable energy.
  • AI technologies can provide more accurate weather forecasts, helping to protect buildings, infrastructure and crops.
  • Big benefits are on the horizon for industry and construction: better use of industrial data, with AI technologies increasing efficiency and accuracy across the value chain for safer production, and data used in real time where it is collected and processed.

Ultimately, AI will be able to add value to society as a whole.

Double aim of the AI law

The EU regulation will classify AI applications according to their level of risk. The scale has four levels: minimal risk (largely unregulated), limited risk, high risk, and unacceptable risk.

Social scoring and AI used to manipulate or exploit user vulnerabilities, for example, will be completely banned. The high-risk category involves a number of requirements, including registration, risk assessment, transparency, and human oversight. Some examples of high-risk systems are those used for credit scoring, education and employment, which affect people's lives and futures.

Generative AI, such as ChatGPT, must comply with transparency requirements, such as disclosing that content is AI-generated, preventing the generation of illegal content, and publishing summaries of the copyrighted data used for training. High-impact, general-purpose AI models that could pose systemic risk face additional requirements for assessment and for reporting serious incidents. And so-called “deepfakes” must be labelled.
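To make the classification concrete, here is a minimal, illustrative sketch in Python of how an organization might triage its own AI use cases against the four risk tiers. The tier names, example use cases, and the obligations noted in the comments are paraphrased assumptions for internal discussion, not a legal checklist or an official mapping from the regulation.

```python
# Illustrative sketch only: a simplified triage of AI use cases against the
# AI Act's four risk tiers. Examples and obligations are assumptions drawn
# from the article's summary, not legal advice.

from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal risk"        # largely unregulated
    LIMITED = "limited risk"        # transparency duties, e.g. labelling AI content
    HIGH = "high risk"              # registration, risk assessment, human oversight
    UNACCEPTABLE = "unacceptable"   # banned, e.g. social scoring


# Hypothetical example use cases for a triage discussion.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot_with_ai_disclosure": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def triage(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]


if __name__ == "__main__":
    for name, tier in EXAMPLE_USE_CASES.items():
        print(f"{name}: {tier.value}")
```

A simple inventory like this can be a starting point for deciding which systems need formal risk assessments and documentation, and which only need transparency measures.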

The EU regulation is not just about restricting AI. It also aims to promote innovation.

The EU wants to strengthen Europe's position in an area dominated by the US and China. In this respect, the AI law is a deliberate move by European Commission President Ursula von der Leyen. With the recent announcement of initiatives to boost innovation for European startups and SMEs, and to make Europe's supercomputers available to European AI startups to train their trustworthy AI models, these measures are beginning to take effect.

For the EU, the desire to strengthen Europe's digital sovereignty is at the heart of the issue. In a broader perspective, it is about technological and economic growth, as well as the safeguarding of citizens' fundamental rights and values.

Establishing clear guidelines will create a more predictable environment.

The boundary between law and ethics

What is 'real' and what is AI-generated? Where does data go in an increasingly intertwined value chain? When are our fundamental rights at risk?

It is equally important to move beyond regulation to ethical principles. Risk assessments and technical documentation matter most in the high-risk category, but everyone who develops and uses AI has a responsibility.

This responsibility may involve several ethical dilemmas.

For example, how to balance transparency against the need to protect business-critical information, how to avoid unwanted discrimination in hiring processes, or how to handle privacy concerns when using sensitive information.

Or how to avoid biased outcomes from training data, with the risk of discriminating against people, groups, or minorities? How to understand how AI systems make decisions, given the risk of a "black box"? Or how to prevent AI systems from being misused in harmful ways, such as surveillance and disinformation in working life and society?
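One practical way to start on the bias dilemma is simply to measure outcomes across groups. The sketch below computes a demographic parity gap on toy hiring data; the metric, the group labels, and the 0.10 review threshold are illustrative assumptions for this example, not requirements set by the AI Act.

```python
# A minimal sketch of one possible bias check: the demographic parity gap on a
# binary decision (1 = positive outcome, e.g. shortlisted). Data, groups, and
# threshold are toy assumptions for illustration only.

from collections import defaultdict


def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """decisions: (group, outcome) pairs. Returns the largest difference in
    positive-decision rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


# Toy data: 60% positive rate for group_a vs 45% for group_b.
sample = [("group_a", 1)] * 60 + [("group_a", 0)] * 40 \
       + [("group_b", 1)] * 45 + [("group_b", 0)] * 55

gap = demographic_parity_gap(sample)
print(f"Demographic parity gap: {gap:.2f}")  # 0.60 - 0.45 = 0.15
if gap > 0.10:  # illustrative threshold, to be set by your own policy
    print("Flag for human review and documentation.")
```

A check like this does not settle the ethical question, but it turns "avoid bias" into something that can be monitored, documented, and escalated to human oversight.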

Establishing clear principles for responsible AI, including at least fairness, clear accountability and transparency, is a good starting point. Those who can explain their judgements and clarify accountability in the development and use of AI will have an advantage.

Above all, it is about building culture, educating your people, gaining experience and embedding responsible AI practice into your business DNA. Responsibility enables business today.

Governmental authorities are just getting started

Self-regulation is particularly important in a field where a multitude of developers and providers use models created by technology giants like Microsoft and Google.

The banking and finance sectors have historically been good at self-regulation. By establishing a culture of responsible AI practices, banks can build confidence and trust on top of the regulations and take a step further as responsible banks.

The biggest challenge is that small and medium-sized businesses do not have sufficient capacity to navigate complex regulations and then translate legal texts into practical business terms.

Some may see changing laws and regulations as obstacles. But I welcome the AI law. Both the Nordic countries and most other authorities are just getting started in this area, though many have begun to publish guidelines for responsible AI and to establish regulatory sandboxes for the public sector.

The technological development is happening at a pace that is hard to keep up with. The question is whether any AI legislation is doomed to be outdated before it comes into effect.

Regardless, organizations should be proactive by analysing and incorporating relevant legislation into their strategies for data, cloud, and AI.

Learn more about Tietoevry's AI and generative AI offerings and how Tietoevry Tech Services can help.

READ WHITEPAPER NOW

Wenche Karlstad
Head of Digital Sovereignty Initiatives

Wenche is passionate about creating value for our customers and enabling growth with attractive service offerings. She has nearly twenty years of experience in the IT business in various management and advisory roles, bringing new services to the market.

In her current role as Head of Strategic Differentiation Programs at Tietoevry Tech Services, she is leading a global team of experts and managers.
