Centering AI on the Community: A Paradigm Shift
Community post from EI Network member Emmie Hine
One of the core components of the EI Network’s mission is to bring together professionals in Responsible AI to share expertise, experiences, and best practices in the pursuit of innovative solutions and critical thought. One way we create the space to enable this collaboration is to give a platform to the voices within our community by publishing a monthly post from an EI Network member.
This month we have the opportunity to hear from Emmie Hine, who describes the need for a shift away from our current thinking about human-centered AI.
If you are interested in writing a community post for the EI Network, please email helena@ethicalintelligence.co for further information.
The Shortcomings of Human-Centered Artificial Intelligence and the Need for Community-Centered AI
In recent years, human-centered artificial intelligence (HCAI) has become the dominant paradigm in ethical AI, embraced by research institutions, corporations, and government bodies. HCAI promises to make AI that is good for humans, but how it does so remains under-defined. Despite its widespread adoption, conflicting definitions of HCAI muddy the waters and make it impossible to usefully operationalize. Today, I’ll explore HCAI’s shortcomings and explain why we need to shift towards a concept of AI that is community-centered rather than individualistic.
Fragmentation and Conceptual Confusion
HCAI has a variety of definitions. A mapping of the literature identified at least four distinct threads of HCAI research: (1) explainable/interpretable AI, (2) human-centered design and evaluation, (3) human-AI teaming, and (4) ethical AI. You may have noticed that these all sound extremely different. Indeed, in a paper I’ve written that’s under review, I analyze the conceptual roots of HCAI-linked fields and find that these four threads are grounded in distinct, long-established disciplines. For instance, human-centered design has been around since the 1950s and has informed AI since the late 1980s; it is now the foundation of Ben Shneiderman’s “engineering systems” version of HCAI. This is entirely different from explainable AI (XAI), a technical field that has existed for decades and focuses on how to explain AI decisions to users. Human-AI teaming takes inspiration from human-computer interaction (HCI), and “ethical AI” is an enormous catch-all for basically everything else. While these individual threads can each produce good work, an HCAI paper on XAI may have nothing in common with an HCAI paper on algorithmic fairness. Claiming that they are conceptually the same means that stakeholders can mean completely different things when they discuss HCAI, leaving us with something too nebulous to be operationalized.
Limitations in Addressing AI-Related Harms
You may be wondering: is this all just philosophical griping? What’s the real problem? Well, when everyone means something different when they talk about HCAI, it leaves the door open to ethics-washing (pretending something is more ethical than it is). You can claim that basically anything is some form of HCAI, even when it can still cause harm. An explainable AI tool, for instance, can be claimed to be “human-centered,” but being explainable doesn’t automatically make it good for people, or even harmless. Even if a UnitedHealthcare algorithm that wrongly denied elderly Medicare patients insurance coverage could explain exactly why it denied the coverage, it would still be making biased and incorrect decisions. The same applies to other areas of HCAI: human-AI co-creative products, like generative AI tools, can cause job displacement or be used to generate explicit and abusive content targeting other people; an AI system like a car’s autonomous driving interface can be well designed but still cause harm if it is not trained well enough to avoid collisions; and ethical AI is itself a buzzword that often falls prey to high-level platitudes and unimplementable principles.
Furthermore, HCAI often focuses on the immediate user experience while overlooking broader societal and environmental impacts. AI systems should be user-friendly, but AI’s impacts extend beyond the end user. The AI lifecycle involves design, development, evaluation, operation, and retirement stages. In all of these stages, resources—ranging from data to energy to human labor—are extracted, processed, and deployed. Exploitative data collection, the use of low-paid workers overseas to label data, and even abusive practices when mining materials for semiconductors all cause harm to people, even if not necessarily to the immediate end user. Still, if AI is supposed to be good for humans, it shouldn’t overlook these harms. It also shouldn’t ignore the negative environmental impact of AI, which has a corresponding impact on us. These harms disproportionately affect marginalized communities, making them even more critical to address lest they exacerbate structural inequalities.
The Need for a Community-Centered Approach
To address the shortcomings of HCAI, we need to move beyond its individualistic focus and adopt a more holistic framework that considers impacts on not only individuals, but communities and the environment. I argue that community-centered artificial intelligence (CCAI) offers a promising alternative.
Community-centered AI is an approach to AI that focuses on community engagement and harm mitigation across the AI lifecycle and stages of resource extraction, processing, and deployment to build equitable, inclusive, and sustainable AI.
CCAI emphasizes the well-being of communities and society at large, rather than just individual users. Communities are nested structures, built recursively from smaller communities that cannot thrive unless their individual members do. By looking at the community level, we can promote the well-being not just of those communities, but of their individual members and, ultimately, society as a whole.
CCAI is grounded in the recognition that AI systems impact a wide variety of stakeholders. In practical terms, a key component of a community-centered approach to AI should be engagement with impacted communities throughout the AI lifecycle, from planning through to retirement. It should also involve harm assessments at the individual, community, and societal levels, spanning the three dimensions of resource extraction, processing, and deployment. To be maximally flexible, a CCAI framework should not prescribe specific values, design strategies, or harm mitigation methods. Rather, it should guide AI-developing organizations in articulating values that must be upheld, identifying implicated communities and engaging with them constructively in ways suited to the possible impacts, and mitigating the harms they identify. Other ethical AI frameworks, including the four threads of HCAI and the emergent idea of participatory AI, as well as responsible tech practices like co-design, can all play a role in realizing CCAI, which should be thought of as a better-defined paradigm for designing and deploying AI that is good for people and the planet.
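To make this more concrete, below is a minimal sketch, in TypeScript, of how an organization might record harm assessments across lifecycle stages, impact levels, and the three resource dimensions described above. Every type and field name here (LifecycleStage, HarmAssessment, engagementRecord, and so on) is a hypothetical illustration of one possible structure, not part of any existing CCAI standard or tool.

```typescript
// Hypothetical sketch of a CCAI-style harm-assessment record.
// All type and field names are illustrative assumptions, not an existing standard.

// Stages of the AI lifecycle at which communities could be engaged.
type LifecycleStage =
  | "planning"
  | "design"
  | "development"
  | "evaluation"
  | "operation"
  | "retirement";

// The three dimensions along which resources flow and harms can arise.
type ResourceDimension = "extraction" | "processing" | "deployment";

// The levels at which impacts are assessed.
type ImpactLevel = "individual" | "community" | "societal";

interface HarmAssessment {
  stage: LifecycleStage;
  dimension: ResourceDimension;
  level: ImpactLevel;
  affectedCommunities: string[]; // e.g. "data labelers", "elderly patients"
  description: string;           // what harm could occur
  mitigation: string;            // how the organization plans to address it
  engagementRecord?: string;     // notes from consulting the affected community
}

// Surface assessments that name a harm but record no community engagement:
// under a community-centered approach, mitigation without consultation is a gap.
function unengagedAssessments(assessments: HarmAssessment[]): HarmAssessment[] {
  return assessments.filter((a) => !a.engagementRecord);
}

// Example usage with a single illustrative entry.
const assessments: HarmAssessment[] = [
  {
    stage: "development",
    dimension: "processing",
    level: "community",
    affectedCommunities: ["contracted data-labeling workers"],
    description: "Low-paid annotation work with exposure to harmful content",
    mitigation: "Fair-pay contracts and rotation away from graphic material",
  },
];

console.log(unengagedAssessments(assessments).length); // 1: engagement still needed
```

The point of the sketch is simply that once harms are recorded in a structure like this, assessments that lack any record of engagement with the affected community are easy to surface and treat as gaps to close, rather than being buried in high-level principles.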
Conclusion
HCAI has helped advance thinking about building “good” AI, but its fragmented and often user-centric nature limits its effectiveness in addressing social, environmental, and even individual harms. CCAI offers a more comprehensive and inclusive framework that prioritizes the well-being of communities, thus helping protect individuals and the planet. Adopting the CCAI paradigm is a shift in thinking, but it is needed to ensure that AI technologies contribute positively to society. It is time to move beyond HCAI and embrace a community-centered approach to AI.
About the Author
Emmie Hine is the author of the Ethical Reckoner newsletter, a weekly take on tech news. She’s also a research associate at the Yale Digital Ethics Center and a PhD candidate in Law, Science, and Technology at the University of Bologna and KU Leuven, where she researches the ethics and governance of emerging technologies, including AI and extended reality, with a particular focus on China. Emmie previously worked as a software engineer. She speaks English and Mandarin, but her favorite language is TypeScript.
Join the community
This newsletter is a feature of The EI Network.
A global community designed to cultivate the space for interdisciplinary collaboration in Responsible AI and Ethics, the EI Network brings together members to share expertise, experiences, and best practices in the pursuit of innovative solutions and critical thought.
Membership in the EI Network is on an application basis.