The EI Network
Mind the (Accountability) Gap
Understanding how to build accountability in the age of generative AI

Ethical Intelligence, Helena Ward, Eddan Katz, and 2 others · Apr 17, 2024 · Paid


Lately, we’ve been talking a lot about trust. We’ve covered what makes an AI system trustworthy, why openness in AI is critical for trust in the Responsible AI profession, and even how transparency can be used as a tool for building trust. Trust is a complicated matter, though: there are many ways to build and measure it, so there is still much left to explore. Last edition focused on how transparency can help create trust in AI systems specifically. In this edition, we shift focus to how accountability can help create trust in organizations.

Have you ever asked someone why they trust another person? A common response is something like: “I trust them because I can count on them to do what they say they will do.” There is a layer of transparency in this statement, of course, as it implies that the person shares their intentions and plans of action. But there is also another, deeper layer. Not only does the person transparently share their plans, they are accountable for those plans. It is one thing to say you will do something; it is something else entirely to do it. This is the ethical value of accountability in action.

When it comes to building trust in companies deploying and selling AI solutions, accountability is absolutely critical. Yet despite its importance, a significant and dangerous accountability gap is currently widening in the market. That is why this week we are diving into what is causing this accountability gap and how we can start to close it.

WHAT’S IN STORE:

  • Accountability Gaps in Generative AI

  • Understanding Potential Accountability Gaps in Organizational Structures 

  • The Role of Leadership in Cultivating Accountability in AI

  • US AI Regulatory Gap in Accountability


Before we dive into this week’s insights, be sure to register for the virtual “speed-networking” event happening April 25th at 17.00 CET / 11.00 EST / 8.00 PST! Come meet fellow EI Network community members, connect with other Responsible AI & Ethics practitioners, and swap stories from your experience in this space.

  • Date: 25 April 2024

  • Time: 17.00 CET / 11.00 EST / 8.00 PST

  • Length: 1 hour

  • We will have rotating breakout rooms on Zoom for you to meet fellow community members and some conversation prompts to help break the ice

  • To attend, you must register via the Zoom event registration

Register for the event


Curated news in Responsible AI | Helena Ward

Accountability Gaps in Generative AI

Before we dive in, let’s get clear on what accountability is.

Responsibility vs. Accountability

One central idea to get clear on is that accountability is not the same as responsibility. Responsibility is about who caused what to happen: I am responsible for some action A when I make an informed and uncoerced decision to perform that action. If I decide to write a LinkedIn post discussing my opinions on the recent EU AI Act, then I am responsible for any inspiration or offense that my post causes. My act of posting is what makes the post accessible to other people, and so long as we can trace patterns of causation, of what caused what, responsibility is fairly easy to assign. Accountability, on the other hand, has to do with blame. I am accountable for a decision if I am willing to take responsibility for my actions and face the corresponding blame or praise. In the case of posting or tweeting, I am accountable if I admit that my post was insensitive and take the blame for any offense caused.

Ordinarily, responsibility and accountability go hand in hand: my being responsible for some action A means that I am also accountable for A and its impact. But when artificially intelligent agents are making decisions, responding to chatbot prompts, or publishing LinkedIn posts, responsibility and accountability can come apart. Consider a generative AI chatbot that, like me, posts an opinion piece on the EU AI Act. Although the chatbot caused the post to be accessible to others, it cannot be held accountable, because the ‘opinions’ disseminated are not opinions ‘had’ by any one entity. We could respond to an offensive system by turning it off or reprogramming it, but in the absence of a conscious being, blame just does not make sense. This is called the ‘accountability gap’: a gap between who or what caused an action and who is to blame.

Liability for Generated Content

Companies utilizing generative AI tools should keep in mind that, despite these seeming gaps in accountability, they are accountable for the decisions made by the technologies they employ: accountability gaps will, and must, be filled. Let’s take a look at why.

Keep reading with a 7-day free trial

Subscribe to The EI Network to keep reading this post and get 7 days of free access to the full post archives.

A guest post by Anna Steinkamp and Olivia Gambelin.