Today we have something a little extra special for you. Typically this newsletter focuses on deep-diving into important and impactful material relevant to current developments in AI. We always try to give a practical spin to our content, knowing that there is a critical difference between caring about something and acting on it. Today though, we are going head-first into the practical and bringing you a new tool for responsible AI development straight to your inbox.
Meet the Values Canvas, your holistic management template for developing Responsible AI strategies and documenting existing ethics efforts. We’ll be going into the details of the Values Canvas, but for now all you need to know is there are three ways to use it:
At the start of an AI project to create your strategy for embedding ethical values
Collaboratively with your department or team to discover which resources are missing for successfully executing AI initiatives
As the catalyst for designing your organization's holistic Responsible AI strategy
This tool is straight from our founder’s upcoming book, Responsible AI: Implement an Ethical Approach in Your Organization. If you happen to be in the Bay Area, you are officially invited to the book launch happening on June 25th in Menlo Park. If you aren’t in town, tune into the live stream of the launch on the Silicon Zombies Podcast. Please follow the link below to RSVP if you plan to attend in person.
WHAT’S IN STORE:
Responsible AI: A How-to Guide for Ethical AI Success
Meet the Author
Leadership in Responsible AI: Cultivating an Ethical Workplace with the Values Canvas
Regulatory Readiness for Responsible AI: A Model for All Business Sizes
Curated News in Responsible AI
Responsible AI: A How-to Guide for Ethical AI Success
The news for this week is a little different from usual - it concerns Ethical Intelligence’s founder, Olivia Gambelin, who has recently announced the publication of her book Responsible AI: Implement an Ethical Approach in Your Organization - a guide for business leaders that answers the all-important question of how to implement robust and responsible AI strategies.
If you don’t know already, Olivia is an AI Ethicist specializing in the practical application of ethics in product innovation and AI strategy. She has worked directly with executives on the operational and strategic development of Responsible AI, from Fortune 500 companies to Series A startups, using ethics as a decision-making tool. Her work has empowered hundreds of business leaders to harness the commercial power of ethics in AI and innovate responsibly. The publication of Responsible AI represents the culmination of Gambelin’s work to date, and the book is suitable for organizations of any size looking to implement a responsible AI strategy.
Many of us now realize the importance of ethics in innovation, but what does it mean to use ethics in the design and development of AI? This is one of the fundamental questions Gambelin tackles. Responsible AI explores the use of ethics as a decision-making tool: how ethics mitigates risk and stimulates innovation, creativity, and community. Findings in the book are reinforced by case studies in which Ethical Intelligence worked alongside companies such as IKEA and NatWest to implement ethics-by-design strategies.
Part of this step-by-step guide, and one of the central frameworks in the book, is the Values Canvas, which has been pre-released as a Creative Commons tool - a neat sneak peek into Gambelin’s upcoming book.
The Values Canvas is a unique tool that offers practical, tangible, and holistic guidance for AI adoption and development. It aims to assist business leaders on their responsible AI journey by empowering them, on the one hand, to develop their own responsible AI strategy (based on best practices but contextualized to their specific needs and direction) and, on the other, to keep track of existing ethics efforts. This harmonizes with an iterative approach to ethics implementation, in which ethics-by-design forms a lifecycle that requires incremental improvements in light of regular evaluations and reviews post-deployment.
The strategy rests on three pillars: People, Process, and Technology. Let’s review each in turn.
One essential aspect of responsible AI is the People: we need the right people behind the technology to guide and monitor what we release into the world. Part of this involves building the knowledge base and culture within a company. One role of responsible business leaders is to keep employees informed of, and alert to, the company's values and mission. We need to educate our employees and ensure they have the skills necessary to engage with ethics in AI; motivate our employees by reinforcing ethics as a habitual practice rather than a one-time fix; and communicate internally, externally, and regularly to foster interdisciplinary collaboration.
Process refers to the governance and operations protocols that undergird a company's values and mission. These processes demonstrate a company's commitment to ethical practice and define clear channels and procedures through which mishaps are addressed and modifications are made. The purpose of Process is to create guiding intention through company policies for AI usage, implement the governance frameworks that will drive execution, and scale operations through software tools across the organization.
The third and final pillar refers to the Technology itself, which is all about building best practices for the standard development and adoption of AI tools. One of the biggest initial blockers to responsible AI for companies, and often one that takes people by surprise, is that standard development practices need to be in place before anything around ethics implementation can be done. What this means is establishing standard data governance, documentation practices, and domain feedback loops with users.
Over the next two months, Gambelin is releasing a series of case studies focusing on bringing the specifics of the Values Canvas to life. To access the current three case studies, and follow the release of the upcoming six, check out the Values Canvas website here.
Ethics-by-Design
Meet the Author
Hello there! Typically, this little section of the EI newsletter is my space to deep-dive into ethics-by-design thinking and practices. This week though I’m stepping briefly out from behind the curtain to say a quick hello and talk more about my upcoming book as well as the Values Canvas.
If you didn’t already know, my name is Olivia Gambelin and I am the founder of Ethical Intelligence, the parent company of the EI Network and this newsletter. I am also an AI Ethicist; my specialty is ethics-by-design (shocker) in product innovation and strategy design for responsible AI enablement. I am fascinated by the crossover between product design and values-based innovation, and by how to build out company strategies that drive alignment between the two. I’ve been in the Responsible AI and Ethics space for a while, and have had the joy of working alongside some brilliant minds and first movers who brought this industry to life.
What inspired you to write Responsible AI?
Thanks to the onset of the generative AI boom, Responsible AI and Ethics have transformed from a space that people only ‘care’ about into a defining pillar of AI best practices. Despite this crucial shift in sentiment, I began noticing that business leaders were coming to a grinding halt when it came to actual action, an insurmountable sense of inertia blocking their way to success. What was happening? Although it may have felt like a complex problem, the blocker could be simplified into two questions:
Where do I start?
What am I missing?
I quickly noticed a pattern across executives, leaders, and thinkers alike: no matter how much enthusiasm or drive an organization had, execution on Responsible AI and Ethics would remain an abstract ideal simply due to a lack of direction at the start. Seeing this pattern emerge, I knew we were missing a critical resource in the AI ecosystem, and so the inspiration for Responsible AI was born.
What is the Values Canvas?
The intention behind this book was to unblock business leaders in designing both AI development and adoption strategies based on responsible best practices and core ethical values. To unlock the business impact and success hidden in the potential of these strategies, I determined that a central tool designed to guide strategic thinking was needed. This led to the creation of the Values Canvas, a holistic management template for developing Responsible AI strategies and documenting existing ethics efforts.
Knowing that usability would be key to the success of the Values Canvas, I had the wonderful opportunity to partner with Design MBA students from the California College of the Arts to bring the Canvas to life. Thanks to the coordination of the program's director, Justin Lokitz, I was able to work with Kirsten Collins, Armando Somoza, Michelle Zamora, and Yves Louise to design, develop, and test the Values Canvas, keeping a keen emphasis on accessibility and ease of use.
Tapping into a design thinking process similar to that of the Business Model Canvas, the Values Canvas maps out the three pillars of AI success and divides those pillars into the nine critical impact points where ethical values are translated from the abstract into the practical.
Why release the Values Canvas to the public?