Generating a Responsible Future
How ethics works in the face of generative AI's unpredictability
Last week, representatives from Google, Microsoft, and others gathered at the Mobile World Congress (MWC24) to ask the all-important question: how can we deploy AI systems responsibly? The recent rise of generative AI – which poses a host of ethical questions – formed a central part of these conversations. This week, we’ll take a deep dive into the recent criticism of Google’s image generator, Gemini, to explore what went wrong, why generative AI tools such as Gemini have such a pervasive impact on business, and how we can learn from this case to build better going forward.
Here’s a look into this issue:
Veracity versus Variety: Gemini’s Tradeoff Mishap
Content Generation in the Age of Endless Media
The Importance of Context in General Purpose Systems (and Values)
What prompt transformation can teach us about human communication
Before we jump into this week’s insights… don’t forget to register for next week’s meetups happening March 14th. And if you were unable to join the local chapters last month, you’ll be excited to hear we have a Virtual chapter this month!
👉 What it is: An evening of discussion, insights, and networking
⚙ How it works: Every second week of the month, local EI chapters around the world meet to discuss the monthly topic – creating connections with your local community while engaging in a global debate
📍 Where: Currently we have a local chapter in Brussels and a Virtual chapter
Sign up for the Brussels Chapter here and the Virtual Chapter here. If you would like to start a local EI chapter in your area, contact anna@ethicalintelligence.co for more information.
Curated News in Responsible AI
Veracity versus Variety: Gemini’s Tradeoff Mishap
What is Gemini? - Gemini is Google’s version of ChatGPT: in response to text prompts, it generates text and images. Gemini was formerly known as Bard – the name change came alongside the addition of image generation.
What happened? - Google recently faced backlash as viral posts exposed the generator creating historically inaccurate, over-representative images: ask it for a ‘historically accurate depiction of a medieval British king’ and you get a woman; ask for images of the ‘US founding fathers’ and you get a Black founding father; ask for images of German soldiers during World War Two and you get Asian soldiers in the same uniform. We’re used to image-generation tools failing – giving us four-eyed figures and eleven-fingered hands – but Gemini’s failures bring a new depth of inaccuracy to the table.
It wasn’t only the inaccuracy of the model being scrutinized – some users reported Gemini refusing certain prompts outright: requests explicitly asking for an image of a white person (including historical figures) were declined.
Google immediately apologized, publishing a blog post titled ‘Gemini image generation got it wrong. We’ll do better’. They have since paused the tool and are working on ‘improving the accuracy of its responses’. So, what went wrong? Google explained that they were trying to avoid creating violent or sexually explicit images while also trying to program diversity into Gemini’s responses. Let’s unpack this.
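Google has not published the exact mechanism, but commentary on the incident points to prompt transformation: the system quietly rewriting a user’s request before it reaches the image model. The sketch below is a hypothetical illustration – the function names, keyword list, and modifiers are our own assumptions, not Google’s implementation – of how a context-blind rewrite produces exactly this veracity problem, and how even a crude context check changes the outcome.

```python
# Hypothetical sketch: a naive prompt-transformation step that silently appends
# diversity modifiers to every image prompt, regardless of whether the request
# is historically or contextually specific. Illustrative only.

DIVERSITY_MODIFIERS = ["diverse ethnicities", "varied genders"]

# Prompts where injected variety conflicts with veracity (illustrative keywords).
HISTORICAL_KEYWORDS = ["medieval", "founding fathers", "world war"]


def transform_prompt(user_prompt: str) -> str:
    """Naively rewrite the user's prompt to encourage varied outputs."""
    modifiers = ", ".join(DIVERSITY_MODIFIERS)
    return f"{user_prompt}, showing {modifiers}"


def transform_prompt_with_context(user_prompt: str) -> str:
    """A slightly safer variant: skip the rewrite when the prompt looks
    historically specific, so veracity takes priority over variety."""
    if any(kw in user_prompt.lower() for kw in HISTORICAL_KEYWORDS):
        return user_prompt
    return transform_prompt(user_prompt)


if __name__ == "__main__":
    prompt = "a historically accurate depiction of a medieval British king"
    print(transform_prompt(prompt))               # variety wins, veracity loses
    print(transform_prompt_with_context(prompt))  # prompt left untouched
```

The point of the sketch is not the specific fix but the failure mode: a rewrite applied uniformly, with no awareness of what the user actually asked for, trades away accuracy in exactly the cases where accuracy matters most.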
Veracity versus Variety – A Trade-off
Getting the outputs we want involves a careful balancing act, with veracity on one hand and variety on the other.