The introduction of generative AI (genAI) tools into the workplace has fundamentally reshaped the future of work. According to a recent Gartner survey, 37% of organizations have already implemented AI in some form, up from 10% just four years ago. However, this surge in adoption, driven by AI's potential to enhance productivity, streamline operations, and provide data-driven insights, has not proved as straightforward a path to success as we may have hoped.
Although AI holds the promise of transforming the workplace into a more innovative and adaptive environment, the current reality calls into question whether we have set out with the most effective adoption strategies. Instead of mass productivity gains or freeing up employees for more meaningful tasks, widespread efforts to adopt AI have led to increased job displacement and anxiety, high implementation costs for smaller teams, skill gaps in critical areas, a growing dependence on these tools that erodes critical thought and oversight, and change management challenges that significantly disrupt workflows.
What’s in store:
Ethical AI at work, according to the White House
AI in the Workplace: Opportunities and Perils of AI Teammates
The Case for Reimagining Workers’ Rights in the Age of AI
Embedding Ethics Into Process Innovation for the Workplace
Curated news in Responsible AI | Helena Ward
Ethical AI at work, according to the White House
While the EU AI Act has advanced and several U.S. states and cities are pushing AI-related laws, the U.S. federal government has been slow to legislate on AI. This month, the White House issued a new set of standards outlining eight principles for the ethical development and deployment of AI systems at work. The standards were directed by President Biden's Executive Order on AI, signed on 30 October 2023. Within that executive order, the Department of Labor was urged to establish a key set of principles to protect workers, ensuring that they 'have a seat at the table in determining how these technologies are developed and used'. Companies such as Microsoft and Indeed have already committed to adopting these principles as appropriate.
The Fact Sheet, issued on 16 May 2024, outlines the following eight principles:
Centering Worker Empowerment: Workers and their representatives should be informed of, and have 'genuine input' in, the design, development, testing, training, use, and oversight of AI systems in the workplace.
Ethically Developing AI: AI systems should be designed, developed and trained in a way that protects workers.
Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems.
Ensuring Transparency in AI Use: Companies should be transparent – to both workers and job seekers – about which AI systems are being used in the workplace.
Protecting Labor and Employment Rights: AI systems should not violate or undermine workers' rights, whether the right to organize, health and safety rights, wage and hour rights, or anti-discrimination and anti-retaliation protections.
Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.
Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions involving AI.
Ensuring Responsible Use of Worker Data: Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.
These principles are to be considered throughout the whole lifecycle of AI – design, development, testing, training, deployment and use, oversight, and auditing – and seek, broadly, to implement AI while enhancing job quality and protecting workers' rights. The briefing is not meant to be exhaustive; instead, the principles are intended as a guiding framework from which organizations should customize best practices based on their own context and, importantly, input from workers.
With a lack of concrete and actionable guidance, organizations considering AI adoption risk falling into one of two broad stances. On the one hand, a company might lean towards a risk-averse approach, restricting the use of AI until firm regulatory guidance is in place. On the other hand, companies may veer towards a permissive mindset, as if a lack of legislation means no guardrails are needed until further notice. The White House principles underscore the administration's commitment to ensuring AI technologies strengthen worker empowerment. Although this clear commitment may help keep organizations from veering to either extreme by signaling what is expected of them, the fact sheet does not provide much in the way of actionable direction, leaving much open to interpretation and contextual adoption.
Culture & Leadership | Anna Steinkamp
AI in the Workplace: Opportunities and Perils of AI Teammates
Humans at work are complicated. Managing yourself at work is hard, and so is managing and leading teams. We all inherently understand this, which is why people and leadership development is such a fascinating field, supported by a massive industry offering coaching, training, workshops, and countless other resources aimed at boosting productivity while balancing employee well-being (or so we hope). Add AI agents into the equation and there is one more element of complexity. Part of this complexity involves the inherent trust issues surrounding the implementation of workplace AI, which we have written about in a previous newsletter. In my section today, I focus on a recent study that discusses both the opportunities and dangers of incorporating AI teammates into your organizational system.