The EI Network

Let’s face it, we’ve got AI trust issues

And not the kind you can fix with a quick trust-fall exercise

Ethical Intelligence, Helena Ward, Eddan Katz, and Anna Steinkamp

Mar 20, 2024 ∙ Paid


Whether openly acknowledged or not, there is no denying that business success is driven by trust. Trust takes time to build and care to maintain, but once earned, it has been shown to lead to higher revenue and market value, stronger customer loyalty, lower employee turnover, and lower costs of doing business. With a list of benefits a mile long, it should be no surprise that our need for trust now extends beyond companies to our AI solutions. Trust, however, is a complex value to build, and when three in five people are wary of AI, it’s clear we are missing the mark.

In this issue we dive into what is arguably one of the most important ethical values in AI, exploring what trust in AI is, why it matters in the first place, and what you, as a responsible AI practitioner, can do to help build it.

What’s in store

  • What makes an AI system ‘trustworthy’? 

  • How to build trust-by-design in AI products

  • Tackling trust issues within the workplace

  • Structural policy issues on trustworthiness in AI



Curated News In Responsible AI

What makes an AI system ‘trustworthy’? 

The Global Health Conference (HIMSS24) was held on 11 March. There, the Trustworthy and Responsible AI Network (TRAIN) was announced. TRAIN, a network founded by Microsoft, seeks to ‘operationalise responsible AI principles to improve the quality, safety and trustworthiness of AI in health’. Using TRAIN as a use case, we’ll examine the various entities we might trust, or, as practitioners, seek to encourage trust in.

How TRAIN seeks to improve the trustworthiness of AI:

  • Sharing best practices – covering both uses of AI in healthcare and the skill sets needed to manage AI responsibly.

  • Enabling the registration of AI systems.

  • Providing tools that let practitioners measure the outcomes of AI implementation – including its efficiency, security, and biases.

  • Facilitating the development of a federated national AI outcomes registry.

What is trust? In the context of AI systems, trust typically refers to the confidence that users have in the ability of a system to perform tasks accurately, reliably, and ethically.

Why is trust important? For technology to be accepted and used by society, it must be trustworthy, yet there is increasing evidence that people do not trust AI. Consumers are wary of AI systems, and only 6% of leaders welcome AI adoption within their organisations. Facilitating trust in AI has therefore become an economic imperative – adoption and trust go hand in hand. How, then, can responsible AI practitioners build a climate of trust in AI?

On Entities We Trust

Trust in whom? We can approach the question of trust from many different perspectives. There are various entities in which we might place trust, and ‘we’ might be a practitioner, consumer, or CEO. Considering the parameters of trust matters: creating trust within an organisation, for example, is very different from building consumer trust in companies.

Let’s take the context of healthcare, given TRAIN's focus, as an example.

  1. Healthcare professionals’ trust in AI.

One relevant form of trust is the trust that healthcare professionals place in AI tools. This trust should be understood as specific to particular use cases – an individual might trust one tool in drug development yet be sceptical of another in diagnosis.

  2. Patients’ trust in AI and in healthcare professionals’ use of AI.

Keep reading with a 7-day free trial

Subscribe to The EI Network to keep reading this post and get 7 days of free access to the full post archives.

A guest post by
Anna Steinkamp
© 2025 Ethical Intelligence Associates, Limited