Inclusion Isn’t a Checkbox
How to stop “inclusive AI” from becoming performative and bring lived perspectives into the build
What happens when “inclusive AI” becomes a branding exercise and the people most affected are still absent from the room?
This month’s EI Network meet-up tackled that uncomfortable gap head-on, digging into how inclusion gets performed, who benefits, and what it takes to bring real, lived perspectives (not synthetic stand-ins) into the way we imagine, build, and govern AI.
The session was hosted by Lucia Komljen, strategist, sociotechnical researcher and founder of Have A Nice Future.
Lucia unpacked her latest research on inclusive AI and challenged some of the most fundamental assumptions in AI development, making a compelling case for why the people most affected by AI are so often the last to have a say in how it’s built, and what it would actually take to change that.
Drawing on her work in insight, foresight and responsible media innovation, she walked us through the disconnect between industry visions and real human needs, and mapped a practical path toward bringing genuine, lived perspectives (not synthetic proxies) into the innovation process.
Here are our top three takeaways from the conversation…
How can we prevent ‘artificial’ inclusion that merely serves to deflect from the reality of impacts?
The lesson from the energy industry’s ‘greenwashing’ is that the damage to the environment remains visible and the impact on climate (and the economy) stays measurable. AI is likely to follow the same trajectory if inclusion is merely performed, especially when it is not aligned with regulation.
But as with climate, there is an ideological dimension: one camp supports responsible measures like inclusion, while another opposes them.
In the realm of AI giants, it is reassuring to see Anthropic committing to inclusive practices (e.g. here and here), although we might have reservations about the latter: it should have focused on needs instead of wants, the chatbot-led study yielded cold data rather than warm insights (missing an opportunity to build empathy), and it is unclear how much of the findings influenced the product moving forward.
In terms of innovation across companies and startups, FOMO, it turns out, is not a business strategy; the disconnect between investment and relevance is (slowly) starting to turn the tide back in favour of a collaborative approach.
What role can civil society organisations play in driving inclusion?
They already do, in the sense that they are aligned with the ethos of inclusion and champion deliberative and participatory efforts (e.g. digital and physical citizen assemblies) to inform governance decisions.
Yet when it comes to corporate innovation, there is more work to be done: these efforts mostly target policymakers, not corporations.
CSOs can, however, draw on their ability to communicate with the public to raise awareness and organise grassroots action across a variety of communities. A recent example: copyright activist Ed Newton-Rex organising this stunt.
How can we ensure inclusion is a strongly incentivised must-have rather than a nice-to-have?
When it comes to innovation, meaningful constraints are valuable. In addition to regulation providing these around corporate efforts, procurement could play a valuable role in deciding who buys what, and should therefore be a key recipient of any inclusion research (= literacy, upskilling, empathy).
May EI Meet-Up
Talking to Machines: How Should Governments Govern AI Designed to Sound Human?
When AI is designed to sound human, people treat it as human. This shifts trust, dependency and the choices users make. This session looks at where current rules, including UNESCO guidance and the EU AI Act, address this question, where they fall short, and what civil society and practitioners should push for next.
Alina Solotarov, founder and director of the Centre for International Cooperation on AI, will share her work and host a practical conversation on language, design and accountability. Her background spans philosophy (BA/MA, University of Oxford), business (MBA, SKOLKOVO), and futures research (MA, Freie Universität Berlin), with a focus on the cultural and societal dimensions of AI.