The EI Network

Beyond the Framework

A deeper dive into the psychology, intention, and design of AI risk mitigation

Ethical Intelligence, Helena Ward, Eddan Katz, Olivia Gambelin, and Anna Steinkamp
May 01, 2024
AI can be risky business, but it doesn’t have to be. 

Although we often stress in this newsletter that ethics is more than risk mitigation, it is important now and again to take a deep dive into this essential side of Responsible AI.

Starting from the top: risk is any uncertainty or possibility of negative impact that an action or decision may have on something humans value, whether that be other people, systems, or assets. Many of us value things such as the environment, our reputations, equality, the well-being of others, money, and so on, all of which are connected in one way or another to ethical principles. These principles are embedded into technology through Responsible AI practices. So, when we talk about risk in AI, we are talking about both the possibility of negative impact and the possibility that failing to embed ethics will prevent us from realizing those values.

Risk may be considered from several dimensions or perspectives. We might consider risk from a business perspective, such as the risk of malpractice undermining trust or the risk of reputational ruin; from an outcomes perspective, as with the risk of a technology exacerbating existing structural injustices or the risk of unintended effects; and from a use perspective, i.e., the risk of a technology being used by malicious actors.

One of the central objectives of Responsible AI practice is to harness the benefits of AI while mitigating the potential for negative outcomes. Considered in terms of risk, the idea is to reduce the risks associated with designing, implementing, or using a particular technology by actively safeguarding against harms and protecting what we value.
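One common way to make that concrete is the classic risk-matrix calculation: score each risk as likelihood times impact, then prioritize mitigation by score. The sketch below is our own minimal illustration, not an EI or NIST artifact, and the risk names and five-point scales are purely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood x impact
        return self.likelihood * self.impact

# Hypothetical entries for illustration only
risks = [
    Risk("Model output reinforces an existing structural bias", 4, 5),
    Risk("Malicious actor repurposes the system for phishing", 3, 4),
    Risk("Reputational damage from a public model failure", 2, 5),
]

# Address the highest-scoring risks first
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```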

What’s in store:

  • Breaking down the NIST AI Risk Management Framework

  • What is a risk calculator and how to build your own

  • A psychological perspective on managing and understanding risk 

  • A look into global risk solutions


Governance and Policy

Breaking Down the NIST AI Risk Management Framework

As a policy instrument, a Risk Management Framework is guidance on a process for identifying and mitigating potential harms. At its core, it is a set of standards, agreed upon by a collaboration of leading professionals, laying out a practical path for maintaining safety and security and minimizing negative impacts when implementing a technology. This type of norm-shaping consensus instrument is well suited to a rapidly evolving technology like AI, and it becomes urgent when the pace of AI's integration into society is as accelerated and widespread as the transformation we are now experiencing, and when the stakes of harm are so profound. It is worth remembering that we are still in the unknown stage of AI in our daily lives.

This week, the National Institute of Standards and Technology (NIST), the standards-setting wing of the US Department of Commerce, announced the initial public draft of the AI RMF Generative AI Profile (NIST AI 600-1) as part of its 180-day progress update on the implementation of the Biden Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Public Working Group on Generative AI released the document to help organizations identify the unique risks of generative AI and to provide strategic guidance on how to address those risks. The initial draft is open for public comment until June 2, 2024.

As a standard for a process, the NIST AI RMF is all about universal adaptability: it is "intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society." The guidance is broken down into four functions that an organization should perform:

  • Map - Context is recognized and risks related to context are identified.

  • Measure - Identified risks are assessed, analyzed, or tracked.

  • Manage - Risks are prioritized and acted upon based on projected impact.

  • Govern - A culture of risk management is cultivated and present.
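
To see how these functions might fit together in practice, here is a rough sketch of our own; the framework itself prescribes no code, and every name and threshold below is hypothetical. It reads Map, Measure, and Manage as stages in a loop over an organization's risk register, with Govern as the culture that keeps the loop running:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RegisteredRisk:
    description: str
    context: str                      # Map: where and how the system is used
    severity: float = 0.0             # Measure: assessed on a 0.0-1.0 scale
    mitigation: Optional[str] = None  # Manage: the action taken, if any

@dataclass
class RiskRegister:
    risks: list[RegisteredRisk] = field(default_factory=list)

    def map(self, description: str, context: str) -> RegisteredRisk:
        """Map: recognize context and identify a risk related to it."""
        risk = RegisteredRisk(description, context)
        self.risks.append(risk)
        return risk

    def measure(self, risk: RegisteredRisk, severity: float) -> None:
        """Measure: assess, analyze, and track an identified risk."""
        risk.severity = severity

    def manage(self, threshold: float = 0.5) -> list[RegisteredRisk]:
        """Manage: prioritize risks above a threshold for action."""
        return sorted(
            (r for r in self.risks if r.severity >= threshold),
            key=lambda r: r.severity,
            reverse=True,
        )

# Govern has no method here on purpose: it is the organizational
# culture that ensures mapping, measuring, and managing keep happening.
```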

The Generative AI Profile is a companion document to the overall NIST AI RMF, and it is particularly notable for its catalog of risks that are unique to, or exacerbated by, the technological paradigm shift of generative AI. NIST lists these 13 risks in alphabetical order. Below, we've tried to cluster them by type: some concerns are about (1) lowered barriers to instigating harmful outputs; some about (2) obfuscating what is true; some about (3) the vulnerabilities of vast amounts of input data; and some about (4) the systemic impact of generative AI on commerce and critical infrastructure.
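
Captured as data, that clustering might look like the sketch below. The cluster labels come from our grouping above, but the example entries are hypothetical placeholders rather than NIST's wording; see NIST AI 600-1 itself for the actual alphabetical catalog:

```python
# Cluster labels from our grouping above; entries are illustrative only,
# not NIST's wording.
risk_clusters = {
    "lowered barriers to harmful outputs": [
        "easier generation of dangerous or violent content",
    ],
    "obfuscating what is true": [
        "confabulated answers presented as fact",
        "synthetic media that undermines information integrity",
    ],
    "vulnerabilities of vast data inputs": [
        "leakage of private data absorbed during training",
    ],
    "systemic impact on commerce and critical infrastructure": [
        "concentration and fragility in the generative AI value chain",
    ],
}
```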
