This week we have something special to share with you all.
If you haven’t already heard, Responsible AI, written by our founder Olivia Gambelin, was released last month. The book is a guide to designing and operationalizing a robust Responsible AI strategy for business leaders who want to get the most out of their investment in artificial intelligence.
And today we are bringing you an excerpt from her book, straight to your inbox!
An excerpt from Responsible AI: Implement an Ethical Approach in Your Organization:
“As you’ve probably picked up on by now, I’ve been dropping not-so-subtle hints that there is more to Responsible AI than what meets the AI. This may be a terrible play on the word “eye”, but there is an important truth to my feeble attempt at an AI pun here. In fact, this simple realization is one of the most crucial keys to success in becoming a leader in enabling Responsible AI for your organization (Stackpole, 2023). So to be sure there is no confusion on the point, I will state it in as plain language as possible:
Responsible AI is not primarily a technical problem to solve.
When you are looking to become a Responsible AI enabled organization and to utilize ethical decision making to align your AI with your foundational values, you must first look beyond your technology (Mills et al., 2022). Yes, AI in general is riddled with ethical problems, and yes, these problems are the ones that will make headlines, but they are only the tip of a rather large iceberg. At the top sit the ethical problems you are witnessing in your AI, but as you dive beneath the surface you will quickly find that these problems are most often rooted in poor people management and operational processes, not in technical execution. Essentially, that means that in addition to the technical layer of Responsible AI, you also have a people layer and a process layer.
To help illustrate these different layers, let’s start with the story of our imaginary HR startup struggling to mitigate a growing gender bias problem. We left off with you questioning why, after having developed a technical solution for monitoring and mitigating gender bias in the data, you were still experiencing biased outputs from your recommendation system. You decide to investigate the issue further by talking to your team of data scientists to figure out what was going wrong with the fairness metrics. Much to your despair, you quickly discover that even though there was a technical solution available, the fairness metrics were going largely unused by your data scientists. What happened?
First, it is brought to your attention that the culture of your data science team did not support the adoption of the fairness metrics. Over the years, your data science team had developed an underlying mantra that “the data doesn’t lie”, which in practice translated to the idea that if there was an inequality in the data, then it was just a reflection of reality that couldn’t be fixed. This proved to be a problem when it came to adopting the fairness metrics, because the team viewed the gender bias occurring in the recommendation system as something that could not be prevented. If the data said there were disproportionately more men suited to sit on interviewing panels than women, then that was just how things worked and couldn’t be changed. This, you realize, is not a technical problem, but a people one. The culture of your data science team was preventing the use of your technical solution for mitigating the gender bias. It dawns on you that you don’t need better fairness metrics; you need a solution to the team culture blockers standing in the way of the fairness metrics you already had. This, you discover, is the people layer to your ethics problem.
But it doesn’t stop there. As you start tackling the team culture issues, you notice that the adoption of the fairness metrics is still not happening at the scale you had hoped for, nor is it bringing the desired impact on mitigating the gender bias in your recommendation system. Although you have managed to train your data scientists on the importance of fairness in mitigating bias and have motivated them to use the specifically designed fairness metrics, it still seems like almost no one is using the technical solution. This is when you realize that the problem is that your data scientists don’t know at what point in their workflow they need to be using the fairness metrics, and even if they do run a fairness check, they have no idea what to do with the results. Your data scientists can now tell when there is a fairness problem in the data, but they don’t have any process in place for making tangible changes to mitigate the identified bias. Again, you realize that this isn’t a problem with the fairness metrics themselves, but instead a problem with the lack of procedure that would let your data scientists use the metrics effectively and practically. This, you discover, is the process layer to your ethics problem.
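To make the idea of “running a fairness check” concrete, here is a minimal sketch of one common metric, the disparate impact ratio. It is purely illustrative: the column names, toy data, and the 0.8 “four-fifths rule” threshold are our assumptions for the example, not details from the book.

```python
# Illustrative only: a minimal fairness check comparing recommendation
# rates across genders. Column names, data, and threshold are assumed.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's selection rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical recommendation outputs: 1 = recommended for an interview panel.
recs = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M", "M", "M"],
    "recommended": [0,   1,   0,   1,   1,   1,   0,   1],
})

ratio = disparate_impact(recs, "gender", "recommended")
if ratio < 0.8:  # common rule-of-thumb threshold
    print(f"Fairness check failed: disparate impact ratio = {ratio:.2f}")
```

Running a check like this is the easy part; as the excerpt argues, the harder questions are whether the team believes the result matters (people) and what they are expected to do once the check fails (process).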
Both of these blockers have nothing to do with the technology, and everything to do with people and process. Without the right team culture or procedure for change in place, your fairness metrics, no matter how good a technical solution they are, will never reach the adoption or achieve the impact you had intended. Essentially, it does not matter how good the decisions around your technology are if you do not have people who are trained to implement those decisions, or a process for carrying through and scaling the decisions. If you make the mistake of jumping immediately to the technical solutions layer without first examining the people and process layers, at best you will see a few instances of positive impact on your AI, but at worst you will see your time and resources go to waste in an initiative that never produces any of its desired results. In order to reap the benefits of Responsible AI, you need to address all three layers in your ethics solutions.
The purpose of having a robust Responsible AI strategy is to ensure that you are tackling not only the symptoms of an ethics problem, but that you are also getting all the way down to the root causes. In order for your AI to be reflective of your organization’s foundational values, your values must be reflected throughout all three layers. This is why, when it comes to enabling Responsible AI, there are three pillars to a holistic and sustainable strategy. As you may have already guessed, the Responsible AI pillars align with the three layers of ethical solutions: People, Process, and Technology.
People
Starting with the first of the three, we have the People Pillar of your Responsible AI strategy. With each pillar comes an overarching question that guides the thinking and solutions for that pillar. For your first pillar, we are asking the question: “Who is building your AI?”
As the name implies, this pillar focuses directly on the people behind the technology. It may seem counterintuitive at first, considering your goal is to embed ethics into the technology, not the person. But you can’t have AI without people, and so if you don’t have your foundational values reflected within your teams, you certainly won’t have your values reflected in your AI systems. This means that you need to ensure the people who are building your AI systems, or who are using AI tooling in their daily workflow, are trained in how to implement the relevant foundational values, have incentives aligned to furthering the goals of your Responsible AI strategy, and know how to identify ethical challenges in their work. Above all else though, the People Pillar relies on building an open company culture that is not only receptive to Responsible AI, but actively supportive of it.
Process
Moving a layer up, we have the Process Pillar. In this pillar we are seeking solutions to the question: “How is the AI being built?”
Now that you understand who is building your AI, you need to understand how that AI is being built. Again, it may seem strange at first to be looking at operational processes when you are concerned about the technical outcome. But good AI practices lead to good AI, so having processes that support the value alignment of your AI is essential. This is where AI governance takes center stage, as you will need to ensure your ethically trained teams have the right workflows, policies, and checkpoints in place that encourage and facilitate values-based decision making. You will be looking for opportunities to fine-tune your AI practices to create natural points in your teams’ workflows to identify ethical challenges, make value-aligned decisions, and carry out those decisions at scale. Think of this pillar as the one that builds protocol and structure for ethics, ensuring that the critical decisions influencing the ultimate outcomes of your AI systems are reflective of your foundational values.
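As one small sketch of what such a checkpoint could look like in a release workflow, consider gating model promotion on a filed fairness report. Everything here (the report fields, the threshold, the promotion step) is an assumption for illustration, not a prescription from the book:

```python
# Illustrative process checkpoint: promotion is blocked unless a fairness
# report exists and any failing metric has a recorded mitigation.
# All names and thresholds here are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FairnessReport:
    disparate_impact: float  # e.g. the output of the earlier fairness check
    mitigation_notes: str    # what was done about a failing metric, if anything

def may_promote(model_id: str, report: Optional[FairnessReport]) -> bool:
    """A gate in the release workflow, not a property of the model itself."""
    if report is None:
        print(f"{model_id}: blocked - no fairness report filed")
        return False
    if report.disparate_impact < 0.8 and not report.mitigation_notes:
        print(f"{model_id}: blocked - failing metric with no mitigation recorded")
        return False
    print(f"{model_id}: cleared for promotion")
    return True
```

The point of the sketch is where the logic lives: in the workflow itself, so a data scientist knows exactly when the check runs and what is expected to happen next.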
Technology
To complete the trifecta, we have the Technology Pillar. Having answered the who and the how in the first two pillars, it is in this third pillar that we ask the question: “What AI are you building?”
It is not until this final pillar that we turn our attention to the technology your organization is actually building, the reason you started pursuing Responsible AI in the first place. It is incredibly important here to understand that it does not matter whether your organization is building an AI system to be sold on the market, procuring AI solutions to support internal operations, or customizing your own internal AI systems: your Responsible AI strategy must hold any and all AI associated with your organization to the same ethical standards. Another company can be responsible for building the AI systems your teams utilize, but it is your organization that will be held accountable at the end of the day for the consequences of poorly designed technology. This all means that when it comes to implementing ethics, you need the right metrics of success, aligned with your foundational values, to measure your AI and data practices; you need to keep up to date on developments in Responsible AI techniques such as bias mitigation practices or privacy enhancing methods; and you need the right tooling in place to support Responsible AI development. Your People Pillar built the skillsets for Responsible AI, your Process Pillar built the mechanisms for carrying out Responsible AI, and now your Technology Pillar will build the actual AI responsibly.
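For a taste of what one simple, widely used bias mitigation practice can look like at this technical layer, here is a sketch of sample reweighting, where training rows are weighted so each group contributes equally. The column names, toy data, and the choice of technique are assumptions for the example:

```python
# Illustrative sketch of one bias-mitigation technique: reweighting
# training rows so each group contributes equally during training.
# Column names and data are assumed for the example.
import pandas as pd

def balanced_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Weight each row inversely to its group's frequency in the data."""
    counts = df[group_col].value_counts()
    return df[group_col].map(lambda g: len(df) / (len(counts) * counts[g]))

train = pd.DataFrame({"gender": ["F", "M", "M", "M"], "hired": [1, 1, 0, 1]})
train["weight"] = balanced_weights(train, "gender")
# Here "F" rows get weight 2.0 and "M" rows ~0.67; many scikit-learn
# estimators accept these via fit(X, y, sample_weight=train["weight"]).
```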
There you have it: the three pillars of a Responsible AI strategy. Remember, every successful strategy will incorporate elements from People, Process, and Technology without preference to one over another. Each of the three pillars builds on the others to create an interdependent and intricately simple system which, when executed with intention and care, results in holistic Responsible AI solutions addressing ethical problems at every layer.”
A little community present
As a thank you for being a part of the growing community here at the EI Network, we’d like to gift you 20% off Olivia’s book Responsible AI: Implement an Ethical Approach in Your Organization.
Discount Code: KOGANPAGE20