One of the key skills needed to work in Responsible AI is learning how to walk the thin line between mitigating risk and stimulating innovation. On the one hand, you need to meet compliance and governance requirements; on the other, you have to motivate people to engage in ethical decision-making. With this thin line in mind, we are diving this week into the balance between managing the risks of AI development and harnessing values in the pursuit of technical success.
Here’s what we have in store for you:
Key dates and action points you need to know for the EU AI Act
WHO’s guidance on using LMMs in healthcare (and beyond)
Prioritizing trust in the workplace for success in AI
How to shift your mindset from risk to innovation in AI Ethics
Before we jump into this week’s insights, don’t forget to check out our upcoming EI meetups! If you are in or around Brussels or Berlin, be sure to join your local chapter on February 15th for a lively debate on the topic of the month.
*hint: the topic has to do with a recent 20-year birthday…
👉 What it is: An evening of discussion, insights, and networking
⚙ How it works: Every second week of the month local EI chapters meet around the world to discuss the topic of the month - creating connections with your local community while engaging in a global debate
📍 Where: Currently we have local chapters in Berlin and Brussels, with a Virtual chapter opening next month
Sign up for the Brussels Chapter here and the Berlin Chapter here. If you would like to start a local EI chapter in your area, contact anna@ethicalintelligence.co for more information.
Governance and Policy
EU member states approved the AI Act: now what?
All eyes are on the European Union this week as the 27 member states give unanimous approval to the long-awaited AI Act. For a brief moment, it seemed the period of deliberation and technical refinement might grind to a halt, with the influential member states of France, Germany, and Italy calling for a lighter regulatory approach to powerful AI models. These countries contended that a less restrictive environment was needed to foster innovation and support promising European startups looking to compete with their American counterparts. However, the European Parliament pushed back, emphasizing the need for robust regulation, and a compromise creating a tiered system was reached. Now core to the AI Act, the tiered approach introduces general transparency requirements applicable to all AI models, with additional obligations for models identified as posing systemic risk.