AI Has Ethical Issues. Here's What Leaders Can Do.


AI has been raising moral concerns in our collective consciousness since long before ChatGPT. (See 2001: A Space Odyssey, WALL-E, and the entire Marvel universe for starters.)

But the rise of popular access to generative AI marks the first moment that the general public has had a direct role to play in those ethical quandaries.

In the workplace, generative AI introduces all sorts of new ethical ambiguities. It raises questions about job security, intellectual property, privacy and data protection, and the boundaries of human creativity and innovation. 

But for me, the biggest ethical concerns around GenAI in the workplace revolve around bias and inclusion. 

GenAI’s outputs are measurably biased in favor of white heteropatriarchy, and its unmindful implementation within organizations could have disproportionate negative impacts on people who already experience marginalization and precarity.

As leaders, what are our responsibilities around the ethics of generative AI? How can we ensure that our organizations are using AI ethically? Is such a thing even possible? 

These are complex, nuanced, messy questions. I can’t supply simple answers. No one can.

Instead, I’ve found it much more helpful to use the ethical questions raised by generative AI as opportunities to examine and improve human ethics in the workplace. 

When we treat AI as a partner for human collaboration, human ethical behaviors become guardrails against AI’s problematic aspects while maximizing its positive potential.

(See this blog on the human side of AI collaboration by my pal Karina Mangu-Ward for an insightful, hopeful dive into the humanistic conundrum of human-AI collaboration.)

Here are three ways leaders can leverage the ethical dilemmas of generative AI to build more equitable and inclusive workplace cultures.

 

1. Mitigate AI bias by interrupting it, and letting it interrupt you.

Interruption is one of the most effective tactics for mitigating human bias. What if we try using the same interruption tactics in our engagements with generative AI? 

For example, when prompting a GenAI program, I’ve started making a point of always checking my prompts for bias before clicking “submit.” It’s a simple two-step process:

  1. Slow down. Be mindful that you’re about to engage with a biased tool in order to generate something. Take a breath, and heighten your personal awareness before you begin.
  2. Check your prompts, and compare them with your first few sets of results. Notice whether the outputs reinforce stereotypes, and modify your language accordingly.

Recently, I mindlessly prompted DALL-E 2 to show me “Two architects talking.” The app predictably showed me two white men talking. 

[Image: DALL-E output for “Two architects talking.” Source: DALL-E]

(Pardon the uncanny valley; I’m working on a larger point here.)

I had skipped Step 1 above, but I caught my mistake with Step 2. I went back and modified my prompt to “Two female-presenting architects talking.” In the next output, both architects were now…white women.

 

[Image: DALL-E output for “Two female-presenting architects talking, facing a whiteboard.” Source: DALL-E]

(Just don’t look at their eyes. Or hands.) 

I finally had to specify “two architects who are diverse in race, gender expression, body size, age and disability” before I got an image that didn’t reinforce multiple vectors of white supremacist patriarchy around who gets to be seen as an architect.

 

[Image: DALL-E output for “Two architects talking who are diverse in race, gender expression, body size, age and disability.” Source: DALL-E]

It takes multiple rounds of intentional human intervention to interrupt implicit bias – whether you’re dealing with people or GenAI. 
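
(For teams experimenting with GenAI programmatically rather than through a chat window, the same interrupt-and-revise loop can live in code. Below is a minimal sketch assuming OpenAI’s official Python SDK and DALL-E 2 access; the prompts and the human review between rounds are the point, not the specific API.)

```python
# A minimal sketch of the interrupt-and-revise loop described above,
# assuming the official OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment. The prompts mirror the example.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """Generate one image and return its URL for human review."""
    response = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return response.data[0].url

# Step 1: slow down -- note that you're engaging with a biased tool.
print(generate("Two architects talking"))

# Step 2: review the output, notice any stereotype reinforcement, and
# revise. It usually takes more than one round of intervention.
print(generate("Two female-presenting architects talking"))
print(generate(
    "Two architects talking who are diverse in race, gender expression, "
    "body size, age and disability"
))
```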

Which brings me to another idea: What if we could leverage AI to interrupt us?

It’s hard to remember to interrupt yourself at every potential bias point. But perhaps AI could be trained to, for example, support your organization’s anti-bias efforts in hiring.

This is a complex idea that would require a few layers of thoughtful design. But what if there were a way to train AI to alert you in moments where human bias is likely to show up?

For example:

  • “You’re about to conduct an interview. Have you checked in with yourself about your potential bias around race, name, or accent?”
  • “You’re reviewing resumes right now. Are you bringing bias to the perceived ethnicity of the names you’re reading?”

Or, perhaps we could teach AI to present data at key decision moments.  E.g.:

  • “As you choose who to invite for an interview, remember that identical resumes with ‘white-sounding’ names can receive up to 50% more callbacks than those with ‘Black-sounding’ names.”

HR and IT departments could work together to train AI to prompt us the same way bias-interruption training does. We could train AI to interrupt our bias, just as we interrupt its bias.
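
To make that concrete, the simplest version wouldn’t need sophisticated AI at all – just a mapping from key workflow moments to the interruptions above. Here’s a hypothetical sketch; the event names, messages, and the idea of hooking them into calendar or HR tooling are invented for illustration.

```python
# Hypothetical sketch of a bias-interruption prompter: surface a reminder
# at workflow moments where human bias is likely to show up. Event names
# and messages are illustrative, not a real HR/IT integration.
BIAS_INTERRUPTS = {
    "interview_starting": (
        "You're about to conduct an interview. Have you checked in with "
        "yourself about your potential bias around race, name, or accent?"
    ),
    "resume_review": (
        "You're reviewing resumes right now. Are you bringing bias to the "
        "perceived ethnicity of the names you're reading?"
    ),
    "interview_shortlist": (
        "As you choose who to invite, remember: studies have found that "
        "identical resumes with 'white-sounding' names can receive up to "
        "50% more callbacks."
    ),
}

def interrupt(event: str) -> None:
    """Print a bias-interruption reminder for a workflow event, if one exists."""
    message = BIAS_INTERRUPTS.get(event)
    if message:
        print(f"[bias check] {message}")

interrupt("resume_review")
```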

 

2. Build a culture of transparency around responsible AI adoption.

Surveys from Qualtrics and Checkr strongly indicate that employees are convinced AI-based layoffs are coming, and many are hesitant to disclose their use of GenAI at work for fear of being replaced.

The reality is, their fears are justified. There’s no escaping the fact that jobs are likely to evolve significantly in the next few years under the influence of generative AI, and some jobs might be eliminated altogether.

So what is the ethical responsibility of leaders when it comes to mitigating generative AI’s negative impacts on job security and retention?  

It’s another messy, nuanced question. I can’t give you a clean, formulaic answer that will serve business interests and protect everyone’s job. 

Instead, I believe it’s our responsibility as leaders to create a culture of transparency and honesty around the new job uncertainty introduced by AI.

Here are my three guiding principles for how we might work on building this culture:

A. Be transparent and forthright about the uncertainty facing the company.

When employees get skittish about the future, tell them the truth: “We’re not sure how AI will impact jobs, but we’ll keep you informed at every step while we figure it out.”  It’s hard to be this honest, but it’s the best way to create a culture of transparency in the face of change. You can’t promise their jobs won’t be affected – so don’t.

B. Distribute decision-making power, so it’s not just one person calling the shots.

When it comes to making decisions about the impact of AI on jobs, try a decision-making model that redistributes influence toward the people most affected by the outcome.

At August, our two favorite decision-making models are Consent and Advice. Both models offer a structured way to incorporate diverse expertise without sacrificing decision speed or quality. 

(Learn more about these models in our free whitepaper, Decision Making Can Be a Lever For Organizational Change.)

Alternatively, you could form a committee or a project team dedicated to investigating the impact of generative AI on your company. It should include people from across the organization, with different levels of power – but with that power equalized in the room, so each member has an equal say.

Redistributing power isn’t easy. But it’s one of the best ways to build trust, and one of the most effective ways to make sound business decisions that serve the interests of the greatest number of stakeholders. 

C. Communicate openly as you go.

This is Change Management 101. Lack of communication will inspire employees to make up their own stories. Bad news delivered with clarity is always better than ambiguity and secrecy.

Don’t fall into patterns of secrecy in order to maintain a sense of control. Communicate openly as you move through your AI integration, so everyone is up to speed on the company’s current thought process about jobs and GenAI.

 

3. Create policies that bring AI’s ethical issues into the light.

Every time we engage with AI, we’re engaging with an ethical gray zone. Does this mean we should stop using AI? 

No. Absolutism isn’t the answer – especially given the vast potential benefits of AI to humanity and business. But that doesn’t mean we’re off the ethical hook, as leaders or as organizations.

The solution isn’t to boycott GenAI – it’s to create an organizational GenAI policy that makes explicit and transparent your company’s ethical expectations for employee-AI collaboration.

Here are some suggested starting points for your GenAI policy, courtesy of my pal Max Sather:

  1. Privacy: Don’t include sensitive client information or proprietary data in any interaction with AI. (One way to operationalize this is sketched after this list.)
  2. Bias: Always run AI outputs by coworkers to check for bias.
  3. Credibility and credit: Before sharing AI outputs, cross-reference any factual information with original sources, and cite as needed.
  4. Disclosure: When presenting AI-generated content, always disclose AI’s contribution. (E.g., add “Source: Midjourney” on any slides.)
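
As promised in the privacy item above, here’s a minimal sketch of what a pre-filter for sensitive data could look like. It illustrates the principle rather than implementing a real safeguard: the patterns are simplistic, “Acme Corp” is a hypothetical client name, and an actual policy would define what counts as sensitive for your organization.

```python
# Minimal sketch of a privacy pre-filter for GenAI prompts (policy item 1).
# The patterns below are illustrative, and "Acme Corp" is a hypothetical
# client name; a real policy would enumerate your organization's own
# sensitive data categories.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED SSN]"),
    (re.compile(r"acme corp", re.IGNORECASE), "[REDACTED CLIENT]"),
]

def scrub(prompt: str) -> str:
    """Replace sensitive strings with placeholders before any AI interaction."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Draft a renewal email to jane@acmecorp.com about Acme Corp."))
# -> Draft a renewal email to [REDACTED EMAIL] about [REDACTED CLIENT].
```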

Your policy should be a living document that continues to adapt to new developments. This will help make GenAI’s place in your organization transparent and intentional, so there can be accountability and ethical recalibration as needed.

 

For responsible AI adoption, treat AI as talent, not tech.

At August, we think of AI as a new form of talent, rather than a mere tech tool. (For our full reasoning, check out Mike Arauz’s fantastic blog making the case for why AI is more HR than IT.)

When we treat AI as a mere tech tool, we make it part of an invisible technical support system, with the same moral neutrality as a laptop or a CRM. This allows its ethical dilemmas to become structurally entrenched, with little wiggle room to ask questions or raise concerns.

But when we treat AI like a creative partner and make it a central part of our collaborative process, we bring its full complexity into the light.

We have all kinds of policies, procedures, and written expectations to ensure appropriate conduct and safety when we work with fellow humans. Collaborating with GenAI should be no different. 

We need to intentionally create living ethical structures that will allow us to comfortably collaborate with GenAI. Let’s do this proactively and explicitly, so we can safely explore the tremendous positive potential of this new creative partnership.
