Explore Tietoevry Create’s Responsible AI Framework for developing safe, ethical, and compliant AI applications.
AI technology has been on the market for a while, yet the regulations around it are still forming. This leads to natural uncertainty about how organizations should treat AI and integrate it into their operations and customer relations. At Tietoevry Create, we strive to address these concerns with a structured, responsible, and transparent approach to AI. In this article, Benjamin Wallace, an expert in regulated industries, and Bobby Bahov, a seasoned AI strategist, share insights into the challenges and solutions driving responsible AI adoption.
Q1: Why is using AI responsibly so important today?
Benjamin Wallace:
When working in highly secure, regulated industries like healthcare or banking, you realize the stakes are incredibly high. Mistakes can cost not just reputation or money but also lives. At Tietoevry Create, we’ve developed an AI implementation approach based on rigorous safety standards like IEC 62304 and IEC 82304, which are used to meet FDA requirements for medical devices, implant firmware, and adjacent software. Our Responsible AI Framework ensures that the AI solutions we implement, regardless of the industry, are reliable, ethical, secure, compliant, and safe even for the most challenging markets.
Bobby Bahov:
AI’s complexity introduces unique risks. Unlike traditional software, AI solutions are developed more iteratively, relying on data that can introduce bias or create opaque processes. Our approach accounts for these inherent challenges and ensures that every AI project is managed with transparency and responsibility, giving confidence even to clients from highly regulated sectors.
Tietoevry Create’s ESSR (Ethics, Safety, Security, and Regulatory Compliance) Framework is designed to meet the unique needs of each project, addressing compliance, risks, and safety concerns specific to every AI use case.
Q2: What makes ESSR different from traditional security frameworks?
Benjamin:
Traditional security frameworks follow a checklist approach: apply a given set of controls, and you’re done. They are too generalized to deal with every challenge of a specific AI project, while simultaneously being too narrow to address the unseen. The technology itself is more dynamic; it requires iterative assessments for each use case and requirement. With the Responsible AI Framework, we consider regulatory landscapes (e.g., the EU AI Act), ethical concerns, security risks, and other factors at every project stage through the ESSR lens. This flexibility lets us address diverse regulations, such as FDA requirements in the U.S. or the EU AI Act.
Bobby:
ESSR emphasizes collaboration. The traditional approach implies that a specific team member, or a few specialists, are responsible for risk analysis. Within ESSR, everyone on the project is responsible for identifying potential risks, mapping them, prioritizing them, and coming up with ways to mitigate them. This shared responsibility ensures a comprehensive approach to safety and compliance.
One crucial way to achieve this is to train everyone to operate with this mindset. Within Tietoevry Create, we recently ran an internal training that covers this risk assessment mindset and framework at a high level. Now, when we work with a client, we have a project team where everyone knows their role and puts effort into the process. Everyone has the mindset of, “Oh, this might be a problem. I need to track it down and share it with the rest of the team”. It really comes down to the human in the loop – everyone on the project is aware of that.
Deloitte reports that three major things holding organizations back from developing and deploying Generative AI solutions are risk, regulation, and governance issues.
Q3: Why do industries still hesitate to adopt AI?
Benjamin:
There is still no relationship of trust there; maturity doesn’t exist yet for AI. The market says, “Okay, we can put AI into your toaster and your mobile phone. We can make AI do the job for your employees”. Yet the backstage of adopting AI, with all its risks and complexities, is not discussed properly. Such rapid adoption is not attractive to extremely regulated or highly secure industries. They want AI to be used in specific, controlled ways that deliver tangible value. If there’s any doubt about its safety or control, such organizations won’t even consider using AI.
Bobby:
Even a few years ago, the same risks were raising concerns: whether AI solutions can be fair and unbiased, and how reliable AI-driven decisions are. Regardless of the industry or market, the core issue always comes down to how these solutions are developed.
Many people struggle to understand how AI systems work because they are rather complex, relying on advanced mathematics and statistics. In some cases, even subject-matter experts can’t fully examine or interpret the systems because they weren’t designed with transparency in mind. This lack of understanding adds potential risks and makes organizations cautious about using AI for their business needs. Considering all these factors, regardless of the specific regulation or framework in force, one conclusion becomes clear: AI systems must adhere to strict ethical, safety, and regulatory standards to address these challenges effectively.
Bobby Bahov at AI meetup Brno
Q4: How did Tietoevry Create’s responsible AI approach evolve?
Benjamin:
Partially, my background in healthcare and banking shaped our methodology. These industries require rigorous compliance, so we adapted frameworks like IEC 62304 and IEC 82304 to ensure that AI systems meet the same high standards. Additionally, we relied on change management and security standards from HITRUST, NIST 800, PCI DSS, and FedRAMP. I chose these standards as the basis for our Responsible AI Approach specifically for their highly stringent safety-based requirements. For clients from industries other than healthcare, we already exceed the bar in terms of what they need to produce something safe and ready for the market. Let’s take farming as an example. We wouldn’t normally think of AI for farming, used for monitoring crop health, process automation, and things like that, as high-risk. But at the same time, farmers produce food that is governed by the FDA in the USA, and if a certain risk materializes in the supply chain, it can potentially harm people’s health. Things like transportation delays or mistakes in warehouse condition monitoring can lead to food spoilage. Our approach helps address unseen or uncommonly understood risk situations.
Bobby:
AI development is more iterative, which is another big difference from traditional software development. Very often, we can’t know for certain whether an AI solution is feasible, or how it will work, before we actually start working on it. To manage these uncertainties effectively, we need to be well prepared, before we even start, with how we will approach the project. This was one of the key purposes of the framework we’ve developed, with its seven core principles of responsible AI.
Q5: What are the practical benefits of your approach to AI?
Benjamin:
The purpose of our approach is to build trust. We’ve extended the rigorous principles from our previous experience developing solutions that successfully passed FDA certification to AI technology, to ensure that our solutions can meet market demands with confidence. When clients invest heavily, they need assurance that the product will pass regulatory requirements, reach the market, and function safely and reliably. Our Responsible AI Approach reflects this vision. It is designed to prevent costly rework, delivery delays, and worst-case scenarios like reputational harm or backlash from unsafe implementations.
Another thing is that our ESSR approach is designed for assessments at the project level – what are the compliance, regulatory, security, ethical, or safety concerns for this particular application of AI? It works for virtually any industry, regulatory, and market landscape.
Finally, we integrate continuation engineering practices with our risk and responsible development practices to ensure protection from model drift or runaway behavior, establish backup and restore processes, track post-deployment performance, and enable continued safety, security, compliance, and risk management for the entire model lifecycle.
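To make the drift-monitoring part of that concrete, here is a minimal sketch of a post-deployment check in Python, assuming the Population Stability Index (PSI) as the drift metric; the metric choice, threshold, and data are illustrative assumptions, not something prescribed by our framework:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the live distribution against the training-time baseline.
    PSI above ~0.2 is a commonly cited (illustrative) drift threshold."""
    # Bin edges come from the baseline so both histograms are comparable.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log of zero in sparsely populated bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical usage: baseline scores from validation, live scores from production.
baseline_scores = np.random.default_rng(0).normal(0.0, 1.0, 5000)
live_scores = np.random.default_rng(1).normal(0.4, 1.2, 5000)  # shifted: drift

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # illustrative threshold; calibrate per use case
    print(f"PSI={psi:.3f}: drift detected, trigger review and rollback plan")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```

In practice, a check like this would run on a schedule against each monitored feature or score, with the alert feeding the same risk register used during development.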
Bobby:
We do not come into a project saying we already know how to build the solution. We are a design-led organization: we discover the problems and create solutions for them throughout the process. It is a more introspective approach. We’re not walking in with assumptions. On the contrary, we carefully study and discuss all the requirements with the client. We work out all of the individual requirement statements and then assess them for risk. Those risks are then evaluated for compensating controls that would mitigate each risk as much as possible – those compensating controls then become new requirements we implement within the solution. We collaborate closely with our clients throughout the entire project lifecycle, discussing openly how our risk assumptions look to them and whether we understand them correctly.
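As an illustration of that loop, here is a minimal sketch in Python of how requirement statements, risks, and compensating controls can be linked so that controls feed back into the backlog as new requirements; the classes, fields, and scoring are hypothetical, not the actual ESSR tooling:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Control:
    """A compensating control that mitigates a risk."""
    description: str

@dataclass
class Risk:
    description: str
    severity: int      # illustrative scale: 1 (low) to 5 (critical)
    likelihood: int    # illustrative scale: 1 (rare) to 5 (frequent)
    controls: list[Control] = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple severity x likelihood score for ordering the risk register.
        return self.severity * self.likelihood

@dataclass
class Requirement:
    statement: str
    risks: list[Risk] = field(default_factory=list)

def derive_control_requirements(requirements: list[Requirement]) -> list[Requirement]:
    """Turn every compensating control found during risk assessment into a
    new requirement, so mitigation is built into the backlog, not bolted on."""
    new_requirements = []
    for requirement in requirements:
        for risk in sorted(requirement.risks, key=lambda r: r.priority, reverse=True):
            new_requirements.extend(
                Requirement(statement=control.description) for control in risk.controls
            )
    return new_requirements

# Hypothetical example: one client requirement with one identified risk.
chatbot = Requirement("The assistant answers customer billing questions")
injection = Risk(
    description="Prompt injection exposes another customer's billing data",
    severity=5,
    likelihood=3,
    controls=[Control("Scope retrieval strictly to the authenticated customer")],
)
chatbot.risks.append(injection)

for req in derive_control_requirements([chatbot]):
    print("New requirement:", req.statement)
```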
Here are actionable insights from our experts for organizations considering AI adoption:
Adopt industry-specific standards for your AI projects. Stay informed about evolving regulations, such as the EU AI Act, which comes into force in stages over the next two years. Ensure your AI framework aligns with global and regional regulatory requirements to avoid future complications.
Consider risks as project requirements instead of additional complications to be addressed later. Make risk mitigation an integral part of the AI solution development lifecycle.
Transparency builds trust. Adopt AI solutions that are auditable, with understandable decision-making processes and controllable operations (see the sketch after this list).
Involve the entire team – including the client and stakeholders – in identifying and mitigating risks. This shared responsibility ensures awareness and accountability at every stage.
Develop a risk-awareness and shared responsibility mindset in your team.
Select experienced collaborators who understand your industry’s complexities and prioritize Responsible AI Principles.
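On the auditability point above, one simple building block is to log every model decision with enough context to reconstruct it later. Here is a minimal sketch, assuming a model object with a predict method; the model, fields, and file format are illustrative assumptions:

```python
import json
import time
import uuid

def audited_predict(model, features: dict, log_path: str = "decisions.jsonl") -> str:
    """Wrap a model call so every decision is traceable afterwards:
    inputs, model version, timestamp, and output go to an append-only log."""
    output = model.predict(features)
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": getattr(model, "version", "unknown"),
        "inputs": features,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return output

# Stub model standing in for any real predictor (hypothetical example).
class StubModel:
    version = "credit-risk-1.4.2"
    def predict(self, features: dict) -> str:
        return "approve" if features.get("income", 0) > 30000 else "manual review"

print(audited_predict(StubModel(), {"customer_id": "c-017", "income": 42000}))
```

A log like this is what makes the “understandable decision-making” advice actionable: auditors and the team itself can trace any output back to its inputs and model version.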
Responsible AI use is a way to address one of the major concerns of AI adoption across organizations – security and compliance risks, paired with a lack of trust and transparency. It is not just about following regulations – it’s about building trust and delivering traceable, reliable, safe AI solutions that work as expected.
For organizations, this means more than just compliance: it means reducing risks, avoiding costly rework, and fostering an ethical AI culture in every team member. With our Responsible AI Framework, we ensure every AI project meets the highest standards of safety, transparency, and reliability.
Bobby brings over a decade of expertise in blending technology with business strategy, having worked across various industries and roles. Known for his entrepreneurial spirit, he has co-founded and led multiple ventures with a focus on innovation, AI, and robotics. In addition to his professional work, Bobby is currently pursuing a Ph.D. researching AI simulations and synthetic data.
Benjamin is passionate about enhancing organizational resilience and fostering ethical governance through robust security practices. With over a decade of experience, he has designed and implemented multiple security frameworks aligned with industry standards like HITRUST, HIPAA, and FedRAMP.
At Tietoevry Create, Benjamin leads transformative security initiatives, guiding teams to elevate the maturity of security operations and helping clients confidently navigate complex compliance challenges.