Beyond the Hype: Building Trustworthy AI Systems With the FRIENDS Framework

November 2025 | Roman Koles, Practice Architect, Data and AI

The FRIENDS acronym can help your leadership and teams remember the best practices of AI development.

AI’s potential extends beyond automation: it promises to revolutionise how we interact with technology and each other. Machine learning (ML) allows systems to improve performance based on data without explicit programming.

This capability enables personalised experiences, such as tailored streaming recommendations or adaptive learning platforms in education. Moreover, AI can address pressing global challenges like climate change by optimising energy use and predicting environmental patterns.

Moving forward, collaboration between technologists, policymakers and society will be essential for harnessing AI’s full potential while mitigating risks. Most importantly, humanity must think and act proactively by creating proper frameworks for cooperating with evolving artificial intelligence.

As your organisation invests in enterprise AI solutions, it’s critical not only to establish but also to maintain best practices for developing AI systems.

The FRIENDS acronym gives your organisation and teams a simple way to remember the pillars of AI risk mitigation.

FRIENDS: Acronym To Remember, Principles To Apply

Leaders must remain fully aware of possible AI risks and constantly monitor AI domain progress.

Use the FRIENDS acronym to remember simple but important principles while developing better AI systems:

  • Fairness
  • Responsibility
  • Interpretability
  • Explainability
  • No-code/low-code
  • Democratisation
  • Supervision

Fairness in AI Development

As AI becomes increasingly integrated into hiring processes, loan approvals, criminal justice and healthcare, it is paramount to ensure these systems are fair and unbiased.

AI fairness involves designing algorithms that don’t discriminate against individuals or groups based on race, gender, age or socioeconomic status.

This requires proactively identifying and mitigating biases from training data and algorithms themselves.

Achieving AI fairness involves several key strategies:

  • Diversify training data sets to ensure they represent wide-ranging perspectives and experiences, preventing bias reinforcement and promoting equitable outcomes.
  • Establish transparency and accountability in AI systems, which are crucial for detecting and correcting biases.
  • Utilise a multifaceted approach involving diverse stakeholders – technologists, ethicists, sociologists and affected communities – to establish AI best practices and standards.
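One concrete way to act on the first strategy is to measure outcomes per group before deployment. The sketch below, using only hypothetical data, computes the demographic parity difference: the largest gap in positive-decision rates between groups, where 0.0 means every group is selected at the same rate.

```python
# A minimal fairness check: demographic parity difference.
# The decisions and group labels below are illustrative stand-ins for a
# model's outputs and a protected attribute.

def selection_rates(decisions, groups):
    """Share of positive decisions per group."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if d else 0)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(decisions, groups):
    """Max gap in selection rates across groups; 0.0 means parity."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(decisions, groups)
print(f"selection-rate gap: {gap:.2f}")  # a: 3/4, b: 1/4 -> gap 0.50
```

A gap near zero does not prove a system is fair — it is one metric among several, and the right metric depends on the application — but tracking it makes bias visible and correctable.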

By prioritising fairness, we create more equitable technologies and lay groundwork for AI as a positive force for social change rather than a tool that worsens division and inequality.

Responsibility in AI Development

As AI systems become more pervasive and influential, developers, organisations and policymakers must take responsibility for these systems’ impacts on individuals and society.

This involves creating technically sound, efficient AI that aligns with ethical standards and respects human rights.

To uphold AI development responsibility, developers should engage in thorough testing and validation to identify and address biases, errors and unintended consequences, such as conducting impact assessments to understand potential effects on different user groups.

By embracing responsibility, the AI community fosters an ethical innovation culture that benefits society while minimising risks and harm.

Interpretability in AI Development

Interpretability in AI refers to the ability to understand and explain AI system decision-making processes. Without interpretability, AI models become “black boxes,” making decisions difficult to understand or justify.

This lack of transparency can lead to scepticism and resistance to AI adoption, especially in critical applications where human lives are at stake.

Developing interpretable AI is both a technical challenge and a necessity for building trust and ensuring accountability.

One major challenge of interpretability is balancing model complexity against the need for simplicity and clarity. Highly complex models such as deep neural networks (DNNs) often achieve superior performance but are inherently difficult to interpret, so researchers are exploring various techniques to make them more transparent.
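One such technique is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops, treating the model as a black box. A minimal sketch, with a hypothetical toy model standing in for a real one:

```python
import random

# Permutation importance sketch: the accuracy drop when one feature is
# shuffled tells us how much the (black-box) model relies on it.

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop when feature `feature_idx` is shuffled across rows."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    perturbed = [list(r) for r in rows]
    for row, value in zip(perturbed, shuffled_col):
        row[feature_idx] = value
    return accuracy(model, rows, labels) - accuracy(model, perturbed, labels)

# Toy black box: predicts 1 whenever the first feature exceeds 0.5.
model = lambda row: 1 if row[0] > 0.5 else 0
rows = [(0.9, 3), (0.8, 1), (0.1, 4), (0.2, 2)]
labels = [1, 1, 0, 0]

print(permutation_importance(model, rows, labels, 0))
# The model ignores the second feature, so its importance is exactly 0.0:
print(permutation_importance(model, rows, labels, 1))
```

The same idea scales to real models; libraries such as scikit-learn ship a production version of this measurement.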

Additionally, incorporating human-in-the-loop (HITL) approaches helps enhance interpretability by allowing humans to interact with and refine AI decisions.

This collaborative process improves AI system understandability and ensures alignment with human intentions and societal norms.

Explainability in AI Development

Explainability involves creating enterprise AI models and algorithms that provide clear explanations for their outputs, allowing users and stakeholders to understand the reasoning behind AI-driven decisions or predictions.

AI explainability fosters trust and accountability, as users are more likely to adopt and rely on AI systems when they understand how the system achieves results. This is particularly crucial in high-stakes applications where the consequences of AI outputs significantly impact individuals and society.

Explainability also helps identify and mitigate AI system biases and errors by detecting unexpected behaviours and correcting flawed algorithms. By prioritising explainability, the AI community aims to create systems that are powerful, efficient and trustworthy.
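For some model families, explanations fall out of the structure itself. In a linear scoring model, each feature's weight times its value is that feature's additive contribution, so every score can ship with its own breakdown. A minimal sketch — the feature names and weights here are hypothetical:

```python
# A self-explaining prediction: for a linear scorer, weight * value is
# each feature's additive contribution to the final score.

WEIGHTS = {"income": 0.5, "debt": -0.8, "tenure": 0.2}  # illustrative

def explain_score(features):
    """Return the score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = explain_score({"income": 4.0, "debt": 2.0, "tenure": 3.0})
print(f"score = {score:.1f}")  # 0.5*4.0 - 0.8*2.0 + 0.2*3.0 = 1.0
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.1f}")
```

More complex models need post-hoc techniques (such as SHAP or LIME) to approximate this kind of per-feature attribution, which is one reason simpler, inherently explainable models remain attractive in high-stakes settings.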

No-Code/Low-Code in AI Development

No-code and low-code platforms in AI development represent a significant shift in how enterprise AI technologies are created and deployed.

No-code platforms provide completely visual, drag-and-drop interfaces, allowing users to create AI models and applications without writing code. Low-code platforms offer a balance between visual development and traditional coding, enabling users to leverage prebuilt components and templates while having options to write custom code for complex requirements.

Adopting no-code and low-code platforms in AI development simplifies access to AI technology, allowing a broader range of users to harness AI without extensive coding skills.

This accelerates innovation and enables organisations to prototype and deploy AI solutions quickly, gaining competitive advantages. No-code and low-code platforms can also reduce the time and cost of traditional AI development by streamlining processes and reducing the need for specialised resources.

Democratisation in AI Development

AI development democratisation is a transformative movement aiming to make AI technologies accessible to broader audiences beyond tech giants and specialised research institutions.

By lowering entry barriers, democratisation efforts empower individuals, small businesses and nonprofits to leverage AI for their specific needs. This includes developing user-friendly AI tools, open source platforms and educational resources, enabling people with diverse backgrounds to engage with AI.

Ensuring AI benefits all segments of society requires concerted efforts to address the digital divide and provide equal opportunities for everyone to participate in and benefit from AI advancements.

This can include creating AI solutions tailored to underserved community needs and fostering diverse AI developer and user ecosystems.

As we advance, AI democratisation has the potential to unlock innovation opportunities, drive economic growth and contribute to solving pressing global challenges.

Supervision in AI Development

Human supervision in AI development is essential to ensure that AI systems operate safely and ethically and in alignment with human values.

Humans who monitor, guide and intervene in AI processes can help prevent errors, mitigate risks and ensure AI systems make fair and transparent decisions.

In some cases, humans are directly involved in AI model training and validation, providing feedback and corrections to improve performance and accuracy.

In other scenarios, human-in-the-loop systems are employed, where humans review and approve AI-generated outputs before implementation or communication to end users.
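The review-and-approve pattern above can be sketched as a simple routing rule: outputs the model is confident about ship automatically, and everything else lands in a human review queue. The threshold and record shape below are illustrative assumptions, not a prescribed design.

```python
# Human-in-the-loop routing sketch: low-confidence outputs go to a
# human review queue instead of straight to end users.

REVIEW_THRESHOLD = 0.9  # illustrative; tune per application and risk

def route(predictions):
    """Split model outputs into auto-approved and human-review queues."""
    auto, review = [], []
    for item in predictions:
        (auto if item["confidence"] >= REVIEW_THRESHOLD else review).append(item)
    return auto, review

preds = [
    {"id": 1, "label": "approve", "confidence": 0.97},
    {"id": 2, "label": "deny",    "confidence": 0.62},
    {"id": 3, "label": "approve", "confidence": 0.91},
]
auto, review = route(preds)
print(f"auto: {[p['id'] for p in auto]}, needs review: {[p['id'] for p in review]}")
# auto: [1, 3], needs review: [2]
```

In practice the threshold should reflect the cost of a wrong decision, and reviewer corrections can be fed back into training, closing the supervision loop.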

The Future of Human-AI Cooperation

The future of human-AI cooperation holds immense potential for transforming society and driving cross-industry innovation.

As AI technologies continue to advance, human-AI collaboration will become increasingly seamless and intuitive, leading to symbiotic relationships where each party complements the other’s strengths. Humans will bring creativity, emotional intelligence and ethical judgment, while AI will offer unparalleled data processing capabilities, pattern recognition and efficiency.

By promoting collaboration, we can create a future where humans and AI coexist harmoniously, driving progress and improving quality of life for all. Let’s advance together, applying FRIENDS principles to innovate intelligently and ethically.