The EI Network

Opening up about AI


Diving into the relationship between openness, transparency and trust

Ethical Intelligence, Helena Ward, Eddan Katz, and Anna Steinkamp
Apr 03, 2024


Last edition we discussed trust in AI. This week we build on that concept to examine one of the core builders of trust: transparency.

We’ve all heard the claim that ‘transparency breeds trust’, but in the digital era it carries more weight than ever. We no longer expect transparency only from the people behind a business, but from the technology we use as well. In fact, 94% of consumers say they will stay loyal to a brand that offers complete transparency, and 73% say they are willing to pay more for a completely transparent product.

Transparency in business is the ability to ‘look behind the curtain’: the process by which working procedures are opened to examination and scrutiny. When we demand transparency from a company, we are asking for actionable proof of whether the company is open and honest about its internal operations. Demanding transparency from technology works the same way: we require proof that the technology does what it is claimed to do, and does so ethically.

So if transparency breeds trust in technology, how do we foster transparency in the first place? 

WHAT’S IN STORE

  • What it means for AI to be open

  • Openness, Transparency, and Impartiality for Accountability

  • Understanding how to use transparency as an ethical tool 

  • The case for transparent collaboration between human and machine


Before we dive in…

Have you joined the community yet?

The EI Network is a global community designed to cultivate the space for interdisciplinary collaboration in Responsible AI and Ethics. We bring together members to share expertise, experiences, and best practices in the pursuit of innovative solutions and critical thought.

This newsletter is one of many features of the full EI Network.

In addition to receiving this newsletter at a 30% discount, EI members have access to a private Slack community, local chapter meet-ups, education sessions on Responsible AI, and the opportunity to discuss with some of the world’s leading minds in AI Ethics.

Membership to the EI Network is on an application basis.

Apply to join


Curated News in Responsible AI

What it means for AI to be open: Deep dive into Transparency, Open-Source AI, and Trustworthiness

We have recently seen an increase in demand for openness in AI, spurred by the NTIA’s recent request for comments on the risks, benefits and potential policy related to advanced “open-weight” AI models. These comments will influence US policy, but form part of a wider discussion: the EU and UK are also actively considering how to regulate open foundation models. You can read the comments submitted by the Ethical Intelligence team here.

This week, I’ll guide you through this debate: what is ‘open-AI’? Is openness something we should welcome or ward off? And how do transparency, openness and trustworthiness interrelate?

What does it mean for a model to be open?

A foundation model can be released in one of two ways. A model is closed if access to it is limited to the interface its developer provides. ChatGPT is a closed model: you get what you’re given. On the other hand, we have open models, such as Meta’s Llama 2. Open models make their weights (the numerical parameters that form the central building block of an AI system) available, allowing downstream modification. Think of it this way: when a model is closed, you’re given a hardened statue, the finished product, which is near impossible to modify. When a model is open, you’re given a soft clay statue: it has been crafted for you but remains malleable.
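The statue analogy can be sketched in a few lines of toy code. These classes are purely illustrative (not any real AI library’s API): a closed release exposes only an interface, while an open-weight release publishes the weights themselves for anyone to inspect or modify.

```python
# Toy illustration of closed vs open model releases.
# All class and attribute names here are hypothetical.

class ClosedModel:
    """Closed release: users interact only through the provided interface."""
    def __init__(self):
        self._weights = {"w": 2.0, "b": 1.0}  # hidden internals

    def predict(self, x):
        # The hardened statue: you get what you're given.
        return self._weights["w"] * x + self._weights["b"]


class OpenModel:
    """Open-weight release: the weights themselves are published."""
    def __init__(self, weights=None):
        # Anyone can load, inspect, or modify the published weights.
        self.weights = weights if weights is not None else {"w": 2.0, "b": 1.0}

    def predict(self, x):
        return self.weights["w"] * x + self.weights["b"]


closed = ClosedModel()
print(closed.predict(3))  # 7.0 -- the only behaviour on offer

open_model = OpenModel()
open_model.weights["w"] = -2.0  # downstream modification: the soft clay statue
print(open_model.predict(3))  # -5.0 -- behaviour the developer never shipped
```

Once the weights are in a downstream user’s hands, the original developer has no technical means to undo or police modifications like the one above, which is exactly the control trade-off the open-release debate turns on.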

Things aren’t as simple as an open/closed dichotomy; there are gradients in between. At one end of the spectrum we have models that are fully open; at the other, models that are fully closed. In between sit gradual access, hosted access, cloud-based access, and downloadable access. We will likely need to dig beyond the simple ‘open’ versus ‘closed’ framing to figure out the next steps forward, but to keep things simple, we’ll focus on the risks of fully open models, which involve relinquishing all control over who can use a model once it’s released. Developers can say “Only use this for ethical means!” or “Don’t use this model for A, B or C”; however, once a model is fully open-source, malicious actors are free to ignore any stipulated guidelines, and restrictions are incredibly challenging to enforce.

Let’s review some arguments for and against open-source models to give you a handle on the debate.

A guest post by
Anna Steinkamp
© 2025 Ethical Intelligence Associates, Limited