One of the core components of the EI Network’s mission is to bring together professionals in Responsible AI to share expertise, experiences, and best practices in the pursuit of innovative solutions and critical thought. One way we create the space to enable this collaboration is to give a platform to the voices within our community by publishing a monthly post from an EI Network member.
This month we have the opportunity to hear from George Chamoun about his experience founding Fairo, a platform whose mission is enabling successful and responsible AI adoption.
If you are interested in writing a community post for the EI Network, please email helena@ethicalintelligence.co for further information.
Tips To Make Ethical and Responsible AI Resonate with Business Leaders
Community post from EI Network member George Chamoun
Ethical and Responsible AI (RAI) initiatives require more than just the vocal support of leadership to deliver value to organizations. In successful RAI implementations, leadership, management, and individual contributors are fully engaged and invested, making meaningful contributions and leveraging insights to drive informed decision-making.
I recently spent some time on the road learning from many organizations, both in the US and in Europe. I sat down, face-to-face, with numerous stakeholders, from Data Analysts to C-Level/Board, across a variety of industries. My goal was to understand where organizations currently stand in their AI journeys and to learn how they plan to tackle issues related to AI ethics, risk management, and compliance.
I encountered a variety of organizations on the precipice of major institutional change, figuring out how to take two leaps at once. On the one hand, they are dealing with a number of GenAI POCs. On the other, industry-standard ‘best practices’ for RAI are still being developed and accepted. Pioneering on two fronts simultaneously is a daunting task for any business leader. Inaction, or ‘wait and see’, is the easier option.
But how do we shift from a ‘wait and see’ attitude to one of ‘here and now’?
Based on what I learned, I have compiled some tips for making the importance of RAI resonate with leaders, so that they can help ensure that rapid advances in AI technology benefit the parties they impact rather than causing widespread harm.
Tip 1: Understand That Companies are Early In Their AI Journeys
Generative AI is a relatively new technology, and companies are still figuring out how best to integrate it into their business models. Many companies that did not participate in the first wave of AI are now jumping in full-force. AI technologies are driving many industries to self-disrupt (e.g. customer service, cybersecurity, media and entertainment), a process that is both overwhelming and resource-intensive.
Because we are so early in the AI adoption cycle, the collective shift in accountability from employee to AI agent has not yet occurred at scale, save a few high-profile failures (e.g. United Healthcare, Rite Aid, Air Canada, GM). However, AI is becoming more reliable and more human-like every day. Soon, reliance on AI technology will begin to outpace governance and accountability. The longer we wait to implement RAI initiatives, the more AI incidents will occur.
As well as being early in the AI adoption cycle, we are in the infancy of AI regulation (especially in the US). The motivation for building responsible and ethical AI systems is mostly intrinsic at this point, and the practice of building responsible AI is not yet widespread. Therefore, any organizational overhaul towards AI risk management, ethics-by-design, or some combination thereof would be pioneering and would, understandably, feel like a risk in and of itself for many business leaders. This adds to the reluctance many leaders feel about integrating RAI principles into their organizations.
Understanding that organizations are facing a whole new set of challenges, opportunities, and risks stemming from AI will help us communicate effectively with leaders about the importance of investing in RAI.
Tip 2: Bridge the Gap Between Macro and Micro Risks
When I mention to people that I work in Responsible AI, the conversation generally takes a predictable turn towards large-scale social issues. These include AI replacing jobs, disinformation, and the existential risks posed by AI superintelligence. While these are important and interesting topics of discussion, their impact on the day-to-day operations of the average company is fairly remote.
In most professional settings, it is helpful to bring the conversation down to the level of the business itself, temporarily putting the frontier concerns aside. Generally, the macro concerns for businesses involve understanding the operational scope of adverse AI impacts and which category each risk falls into. Common risk categories include reputational, legal, compliance, environmental, and financial. Within each of these categories, more detail can and should be provided depending on your audience. Some institutions are particularly sensitive to financial risks, whereas others have legal and compliance concerns related to navigating new and rapidly evolving regulations.
Beyond defining risk categories, providing a few examples of high-profile failures can help frame and build relatable context around the macro risks at hand. Consider Samsung leaking trade secrets to ChatGPT, Rite Aid being banned from using facial recognition technology, or Air Canada being held liable for the misinformation provided by its chatbot. One great resource I have personally relied on to find relevant examples is the AI Incident Database.
After framing the macro-level risks to an organization, we need to bridge the gap between the macro and the micro. Micro risks can be thought of as individual adverse impacts stemming from a specific use case. When these adverse impacts accumulate, or a technology is used enough that a certain low-probability, high-severity outcome occurs, the macro risk will begin to materialize.
By bridging the gap between specific adverse impacts and macro risks for the business, we create the context to begin thinking about how to address these concerns through specific actions within an organization that contribute to risk mitigation. Bridging this gap effectively is always a challenge: when standards are voluntary, teams are less likely to adopt them, especially when incentives are aligned around other activities.
By providing a fully contextualized risk view to business leaders, we have the tools to ensure that incentives within organizations to tackle micro risks are aligned with concerns about macro risks.
Tip 3: Bring a Vision. But Set Realistic Goals.
Business leaders will have to balance bringing an ambitious vision to the table with being pragmatic and addressing current organizational needs. Bringing energy and excitement will ensure our voices rise above the noise. Nobody wants to be presented with an RAI strategy that feels like another form of box-checking bureaucracy. At the same time, we can’t let the perfect be the enemy of the good. Concessions will have to be made to deliver short-term wins.
One of the best ways to create measurable progress is to take a simple inventory of the AI systems that currently exist within an organization. This can be done as simply as sending select employees an email asking, ‘Which AI projects are you aware of?’ and recording responses in a spreadsheet, document, or compliance software. Once we have an inventory, we can begin to learn more about these initiatives and build processes around them. New or planned initiatives should also be evaluated.
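To make this concrete, here is a minimal sketch of what such an inventory could look like once the survey responses come back. Every field, system name, and value below is a hypothetical placeholder, not a prescribed schema; the point is simply that a handful of structured fields per system is enough to get started.

```python
# Minimal sketch of an AI system inventory, assuming survey responses
# have already been collected. All names and fields are hypothetical.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AISystemRecord:
    name: str        # e.g. "Support Chatbot"
    owner: str       # team or individual accountable for the system
    vendor: str      # "internal" or a third-party provider
    use_case: str    # short description of what the system does
    status: str      # "planned", "pilot", or "production"
    risk_notes: str  # free-text concerns flagged by the respondent

# Hypothetical answers to ‘Which AI projects are you aware of?’
responses = [
    AISystemRecord("Support Chatbot", "Customer Success", "OpenAI",
                   "Answer customer FAQs", "pilot",
                   "Once hallucinated a refund policy"),
    AISystemRecord("Churn Model", "Data Science", "internal",
                   "Predict customer churn", "production",
                   "No bias review yet"),
]

# Persist the inventory to a CSV that opens directly as a spreadsheet.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=[fld.name for fld in fields(AISystemRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in responses)
```

Even a rough table like this gives leadership something concrete to react to, and it can later migrate into compliance software without losing anything.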
There are a variety of publicly available algorithmic impact assessments, including the Microsoft Responsible AI Impact Assessment and the U.S. Chief Information Officers Council’s Algorithmic Impact Assessment. These assessments provide business leaders with specific data points from the various stakeholders involved in current and planned AI projects and serve as an initial foundation for decision-making.
Gathering static self-reported data from questionnaires is just one potential starting point. The most important thing is that we find a way to bring business leaders to a point where they have a preliminary understanding of AI initiatives, provisional risks, and ethical concerns within their organization.
As we progress in implementing our RAI vision, organizations will need to build and implement a layer of policies and procedures around in-scope AI systems. These policies and procedures should uphold core organizational values, ensure that desired system attributes are achieved, and reflect published standards (e.g. the NIST AI RMF or ISO/IEC 42001:2023) to ensure quality and consistency.
Bringing a compelling and engaging vision to the table, paired with the pragmatic implementation of certain well-defined standards, will help ensure we continue to make incremental progress throughout our RAI journey.
Tip 4: Present Interesting and Actionable Data
When presenting data to leadership, it doesn’t matter how good the underlying analysis is if we don’t know our audience. If the data is not presented in a way geared towards decision-makers, our points won’t get across, and valuable information will be left on the table.
Although there is no ‘one size fits all’ approach when presenting to senior leadership, there are a few elements of good reporting and presentation to consider.
Firstly, we need to summarize upfront and provide the right level of detail. Executives have jam-packed schedules, filled with presentation after presentation, so any report we create needs to have the main points front and center, stated clearly and concisely. Any supplemental data we bring, including charts, statistics, or tables, needs to be relevant to the main takeaways of the report and easy to understand. Business leaders need to be able to review a report quickly, gain a satisfactory understanding of the key points, and use them to inform their decision-making.
We also need to make sure our presentation is flexible. No matter how much planning we put into it, the presentation will almost never go as planned. We will be interrupted, asked to dive into specifics on an area we planned to gloss over, and asked to skim past the areas we thought were most interesting and relevant. A helpful strategy for remaining flexible while still providing detail is to use an interactive dashboard or HTML page. If you must use a PDF, add intra-document links so that you can navigate from the executive summary to more detailed sections. This way, you have the flexibility to drill down into more specific levels of detail on a whole variety of data points, or to slice the data multiple ways based on real-time feedback from the audience.
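As a small illustration of the drill-down idea, here is a minimal sketch that generates an HTML report where each executive-summary bullet links to a detailed section further down the same page. The section names and findings are hypothetical placeholders; the same anchor-link pattern works in any reporting tool that emits HTML.

```python
# Minimal sketch of a drill-down HTML report: an executive summary
# whose bullets link to detail sections on the same page.
# All section names and findings are hypothetical.
sections = {
    "vendor-risk": ("Vendor Risk",
                    "Two of five AI vendors lack a data agreement covering model training."),
    "incident-readiness": ("Incident Readiness",
                           "No rollback procedure exists for the support chatbot."),
}

# Summary bullets that jump to the matching detail section via #anchors.
summary_links = "\n".join(
    f'<li><a href="#{anchor}">{title}</a></li>'
    for anchor, (title, _) in sections.items()
)
detail_blocks = "\n".join(
    f'<section id="{anchor}"><h2>{title}</h2><p>{body}</p></section>'
    for anchor, (title, body) in sections.items()
)

html = f"""<!DOCTYPE html>
<html><body>
<h1>RAI Review: Executive Summary</h1>
<ul>
{summary_links}
</ul>
{detail_blocks}
</body></html>"""

with open("rai_report.html", "w") as f:
    f.write(html)
```

When an executive interrupts to ask about a specific finding, one click takes you to the supporting detail without derailing the rest of the walkthrough.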
Coming from a data science background, I found that at least half (if not more) of my job was presenting results. Building a high-quality model, computing an interesting metric, or creating a compelling chart was only part of the battle. If I really wanted to make an impact on my audience, I needed to tell a story. Telling a story helps our audience identify both with the data and with us. When the presentation is over and the report is filed away in corporate records, people aren’t going to leave with a perfect memory of the most meaningful plot, chart, or statistic; they will remember how they felt during the presentation and, if we’re lucky, an interesting anecdote from our story. If we told the right story, they will leave with the main takeaway.
Storytelling alone won’t bring engagement; it’s imperative that we present data that is both interesting and actionable. If we can bring a holistic set of data to the table, our audience will have the context needed to think about actions stemming from our findings. Ideally, we can get people thinking at a level that encourages not only the creation of policy but also the development and adoption of new procedures.
We should answer questions like ‘How do we make responsible AI core to everything we do?’ by breaking the analysis down by use case, vendor, team, spending, and so on. By providing data that supports a deep understanding of the scope, risk, and anticipated ROI of each AI initiative, we help our audience see the strategic value in their RAI investments.
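For instance, here is a minimal sketch of that kind of slicing, building on the hypothetical inventory idea from Tip 3: it aggregates annual spend and open risks per vendor so leadership can see at a glance where money and risk are concentrated. All figures and names are invented for illustration.

```python
# Minimal sketch: slice a hypothetical AI-initiative inventory by vendor,
# aggregating spend and open risks into one actionable line per vendor.
from collections import defaultdict

initiatives = [
    {"use_case": "support chatbot", "vendor": "OpenAI", "team": "CX",
     "annual_spend": 120_000, "open_risks": 3},
    {"use_case": "churn model", "vendor": "internal", "team": "Data Science",
     "annual_spend": 40_000, "open_risks": 1},
    {"use_case": "resume screening", "vendor": "AcmeHR", "team": "People Ops",
     "annual_spend": 60_000, "open_risks": 5},
]

by_vendor = defaultdict(lambda: {"annual_spend": 0, "open_risks": 0})
for item in initiatives:
    by_vendor[item["vendor"]]["annual_spend"] += item["annual_spend"]
    by_vendor[item["vendor"]]["open_risks"] += item["open_risks"]

# Riskiest vendors first, so the discussion starts where it matters most.
for vendor, totals in sorted(by_vendor.items(),
                             key=lambda kv: -kv[1]["open_risks"]):
    print(f"{vendor}: ${totals['annual_spend']:,} / year, "
          f"{totals['open_risks']} open risks")
```

The same grouping works for use case, team, or any other dimension the audience asks about, which is exactly the kind of flexibility Tip 4 calls for.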
Bringing it all together
While these tips offer a foundation for making RAI resonate with leadership, they are just the starting point. It is important to remember that each organization is unique, with its own set of challenges and opportunities. To successfully embed RAI principles within an organization, it's essential to understand and adapt to your specific organizational context. I hope these tips will help you along your journey.
About the Author
George Chamoun is the Founder & CEO of Fairo, a platform whose mission is enabling successful and responsible AI adoption. Prior to founding Fairo, George was VP of Technology and Innovation at Health Data Analytics Institute, a Boston-based healthcare technology startup focused on transforming patient care through advanced data and analytics. His work is published in peer-reviewed journals including Anesthesiology and BMJ Open. George holds a B.S. in Economics & Mathematics from Northeastern University. Outside of work, he enjoys dog walks, cooking, and music.
Join the community
This newsletter is a feature of The EI Network.
A global community designed to cultivate the space for interdisciplinary collaboration in Responsible AI and Ethics, the EI Network brings together members to share expertise, experiences, and best practices in the pursuit of innovative solutions and critical thought.
Hosting both virtual and in-person functions, EI Network members have the opportunity to gain insights from an online international community while also benefitting from the support of local chapters.
In addition to receiving this newsletter at a 30% discount, EI members have access to a private Slack community, local chapter meet-ups, education sessions on Responsible AI, and the opportunity to exchange ideas with some of the world’s leading minds in AI Ethics.
Membership to the EI Network is on an application basis.