The EI Network

Playing the Imitation Game

A look into the human impact of GPT-4o

Ethical Intelligence, Helena Ward, Eddan Katz, and 2 others
May 29, 2024
This week, we’re saying hello to GPT-4o (the “o” standing for “omni”) - OpenAI’s most recent model, capable of responding to audio, visual, and text prompts in real time. GPT-4o is designed to be more powerful, versatile, and human-like than its predecessors. Long story short, it’s an impressive feat of technological development.

But, as we in Responsible AI & Ethics are all very familiar with, it’s never just about the tech. Although these advancements bring a mix of excitement and nerves to the scene, it’s important not to get caught up in the hype of the moment and instead to look at the whole picture of GPT-4o, i.e. its human impact.

What’s in store:

  • The Ethics Hidden in GPT-4o Design Decisions

  • How Socio-Technical Concerns Are Impacting AI Policy

  • Can Anthropomorphism Ever Be a Good Thing?

  • Artificial Intimacy In The Workplace


Before we dive into this week’s insights, be sure to register for the virtual networking and discussion forum event happening June 6th at 17.00 CET / 11.00 EST / 8.00 PST! Come meet fellow EI Network community members, connect with other Responsible AI & Ethics practitioners, and swap stories of your experience in this space.

  • Date: 6th June 2024

  • Time: 17.00 CET / 11.00 EST / 8.00 PST

  • Length: 1 hour

  • We will have rotating breakout rooms on Zoom for you to meet fellow community members, along with specific discussion questions to lead the forum

  • To attend, you must register via the Zoom event registration

    Register for the event


Curated news in Responsible AI | Helena Ward

The Ethics Hidden in GPT-4o Design Decisions

To give us some context, let’s run through the latest features of GPT-4o:

  • Enhanced Language Understanding: GPT-4o tracks context more closely, allowing it to generate more coherent and contextually appropriate responses sustained over longer interactions - this makes conversations feel more natural and fluid.

  • Multimodal Capabilities: Unlike previous iterations, which were primarily text-based, GPT-4o can process and generate content across various formats, including text, images, and audio. This multimodal capability allows it to engage in more complex and varied interactions (a minimal sketch of a multimodal request follows this list).

  • Personalization: GPT-4o can be fine-tuned to understand and adapt to users' preferences and conversational styles.

  • Improved Efficiency: GPT-4o operates more efficiently, reducing latency in responses and making it feasible for real-time applications.

  • Advanced Creativity: The model’s ability to generate creative content, such as writing stories, composing music, and creating artwork, has been improved.
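
To make the multimodal point concrete, here is a minimal sketch of a mixed text-and-image request using OpenAI’s Python SDK. The prompt wording and image URL are illustrative placeholders of our own, not taken from OpenAI’s demos, and parameter details may change as the API evolves.

    # A minimal sketch of a multimodal GPT-4o request via OpenAI's Python SDK.
    # The prompt text and image URL are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe the mood of the person in this photo."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )
    print(response.choices[0].message.content)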

Let’s zoom in on the move towards human-like AI for a second. OpenAI describes the model as a ‘step towards much more natural human-computer interaction’ - it can respond to you almost as quickly as a human can (a mere 88 milliseconds behind), and it can deliver responses with a variety of emotional and focal inflections, expressing seeming contempt, adoration, or excitement. It has been ‘programmed to sound chatty and sometimes even flirtatious in its responses to prompts’. It can also evaluate the expressions of its interlocutor - in one demonstration, a researcher asked the model to read the expression on his face, and it assessed that he looked “happy and cheerful with a big smile and maybe even a touch of excitement”.
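
For a sense of scale, here is a rough sketch of how one might time a single text round-trip to the model. This measures plain API latency for a text prompt only; it does not reproduce the real-time audio pipeline OpenAI demonstrated, so treat it as an approximation of the idea rather than a benchmark.

    # A rough sketch of timing one GPT-4o text round-trip.
    # Measures text API latency only, not the real-time audio pipeline.
    import time

    from openai import OpenAI

    client = OpenAI()

    start = time.perf_counter()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{elapsed_ms:.0f} ms: {response.choices[0].message.content}")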

The emergence of GPT-4o marks a shift towards a design choice for human-like AI - but what are the ethical implications of this move?

Here are several ethical considerations raised by this design choice:

  • Privacy: The privacy implications of GPT-4o are two-sided. On the one hand, we can consider the user’s perspective: what data does ChatGPT collect and use while you are interacting with it? On the other hand, we can consider the model itself: what data was ChatGPT trained on, and what information might it reveal to other users about you? This becomes all the more concerning as we shift from text-based prompts to audio and video ones. When interacting with a text chatbot, we can see exactly what we’re typing and entering as a prompt, but video and audio inputs widen the scope for information capture to include background noises (which may be outside of our control) and private items - photographs and objects lying around our rooms.

  • Misuse: Another concern with this design move is the increased scope for misuse - as chatbots become more and more indistinguishable from humans, the potential for misuse grows. Malicious actors will be better able to use such models to create highly convincing phishing attacks, increasing the likelihood of deception and abuse.

  • Trust: Users interacting with a human-like chatbot are more likely to trust it and consequently to reveal more than they ought. Most people are more likely to reveal private information to a chatty, friendly texter than to a blunt, clearly mechanical one. In this way, the anthropomorphization of AI assistant behavior may increase a user’s susceptibility to privacy loss.

  • Manipulation and Coercion: With systems we trust comes an increased risk of manipulation and coercion - users may be more willing to grant a chatbot influence over their beliefs and actions, which raises concerns about autonomy.

  • Psychological Effects: In a world where much of our communication with others is mediated by technology, the indistinguishability of human-to-human and human-to-AI interactions may breed confusion. Such developments also have implications for the use of AI in healthcare - opening up, or making more feasible, the many proposals to employ AI systems as a source of emotional support. One psychological concern has to do with expectations: if users expect human-like responses within familiar roles of companionship, they may come to feel let down, alone, and uncared for when a response is unexpected, insensitive, or nonsensical. Once again, these systems raise a host of privacy concerns - opening paradigmatically private spaces, conversations, and vulnerabilities to access.

  • Awareness: It seems that users should be informed that they are interacting with an AI, rather than a human. But it is unclear how we could build this sort of awareness into a chatbot system such as GPT-4o. Even if by design a model states explicitly at the outset, ‘you are not interacting with a human’, its human-like characteristics and responses may lead users to overlook even an explicit statement (a minimal sketch of such a disclosure follows this list).
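
One way developers currently approximate this kind of disclosure is a standing system prompt. Below is a minimal sketch assuming OpenAI’s Python SDK and the standard chat completions endpoint; the disclosure wording and the sample user message are our own illustrations, and, as the point above suggests, a statement like this is no guarantee that users will internalize it.

    # A minimal sketch of baking an AI-disclosure instruction into a chatbot
    # via the system prompt. The wording is illustrative, not a proven
    # safeguard: human-like delivery can still lead users to discount it.
    from openai import OpenAI

    client = OpenAI()

    DISCLOSURE = (
        "You are an AI assistant, not a human. If the user appears to treat "
        "you as a person, gently remind them that they are talking to an AI."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": DISCLOSURE},
            {"role": "user",
             "content": "It's so nice to finally talk to someone who gets me."},
        ],
    )
    print(response.choices[0].message.content)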
