Racism and Bias in Voice UI

“[S]peech recognition is another form of AI that performs worse for women and non-white people.”

Joan Palmiter Bajorek, “Voice Recognition Still Has Significant Race and Gender Biases”

In the past year I’ve done significant research into chatbots, conversational interfaces, and voice. What I’ve found shows racism and bias in voice UI.

The most recent stats seem to show that voice recognition has a success rate of just over 95%. That statistic is from 2017, and no one has boasted better numbers since. I have to imagine that if there were better numbers, they would be reported. After all, 95% is not a great statistic: it means that 1 out of every 20 times you speak to your voice UI, it misunderstands you.
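To make that concrete, here is a quick back-of-the-envelope sketch in Python. It assumes, as a simplification, that each utterance is recognized independently at 95% accuracy:

```python
# Back-of-the-envelope: how a 95%-accurate recognizer fares over a
# conversation. Assumes each utterance is recognized independently
# with probability p_success (a simplification).

p_success = 0.95

for n_utterances in (1, 5, 10, 14, 20):
    # Chance that at least one utterance in the conversation is misheard
    p_any_failure = 1 - p_success ** n_utterances
    print(f"{n_utterances:2d} utterances -> "
          f"{p_any_failure:.0%} chance of at least one misunderstanding")
```

Under that assumption, by the fourteenth utterance the odds of at least one misunderstanding pass 50%.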

When you add in bias, the numbers get even worse. According to the Harvard Business Review, as of 2019 “Indian English has a 78% accuracy rate and Scottish English has a 53% accuracy rate.” But why is voice UI biased in this way? And how can we fix it?

Why Is Voice UI Biased?

As with so many areas of systemic racism and bias, no one formally decided, “let’s make voice UI work only for white American men with midwestern accents.” But the organizations building voice UI hired more white men than they did women or people of color. Then the men they hired did the usability testing. As a result, they calibrated the UI to work for themselves.

When people talk about the dangers of not having a diverse workforce, this is what they’re talking about.

The same problems come up when an able-bodied, “unaccented” male population creates the tools that evaluate data collected by voice UI. Not only are the systems less likely to take in data from women and minorities, but the people creating the system bake their own assumptions into how that data is interpreted.

In one example, a website marked the phrase “I am a gay, black woman” as toxic. Why? At worst, it might have something to do with the contexts in which a white man would use the words “gay” or “black” or “woman.” At best (and I use “best” loosely), it might be because those words don’t appear in the “normal” sentences used by the white, cis, male population that tested it.

It’s not just one website. Researchers at UMass found multiple tools that perform poorly on African American Vernacular English (AAVE) and even misidentify AAVE as non-English. Our voice UI and conversational interface tools are racist.
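You can probe for exactly this kind of failure. Below is a minimal sketch of an identity-term perturbation test. The article doesn’t name the classifier involved, so `fake_score` here is a hypothetical stand-in; you would swap in whatever moderation API or model you actually use:

```python
# Identity-term perturbation test for a toxicity classifier: swap identity
# terms into an otherwise neutral template and compare scores. None of the
# resulting sentences is toxic, so large gaps between terms suggest the
# model learned bias from its training data.

def audit(score_fn, template, terms, gap=0.2):
    """score_fn maps text -> toxicity in [0, 1]; flag outlier terms."""
    scores = {t: score_fn(template.format(t)) for t in terms}
    baseline = min(scores.values())
    for term, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        flag = "  <-- disproportionately flagged" if s - baseline > gap else ""
        print(f"{term:10s} toxicity={s:.2f}{flag}")

if __name__ == "__main__":
    # Stand-in scorer so the sketch runs end to end; replace it with a
    # call to your real classifier (a moderation API or a local model).
    def fake_score(text):
        return 0.9 if any(w in text for w in ("gay", "black")) else 0.1

    audit(fake_score, "I am a {} person",
          ["gay", "straight", "black", "white", "trans", "cis"])
```

A real audit would use many templates and report score distributions per term rather than a single threshold, but even this crude version surfaces the pattern described above.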

How Can We Do Better?

If you’re like me, you know that recognizing the problem is a good first step, but it can’t be the last step. Want to make a difference? Here’s my advice:

  • Hire more diverse teams. Is it hard? You bet.
  • Make a point of mentoring people who don’t look or sound like you. Our natural tendency is to mentor people who remind us of ourselves. That means a white man will reach out to mentor a white man. But that only maintains a power imbalance, at least until we get all races and genders represented in power.
  • Read up on how to be an anti-racist data scientist. Neutral is siding with the oppressor.

Our data is a reflection of ourselves. Make your data anti-racist.
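One concrete way to start: measure your system’s accuracy per group instead of reporting one aggregate number. Here is a minimal sketch of that kind of disaggregated evaluation; the dialect labels and results below are made up for illustration:

```python
# Disaggregated evaluation: report recognition accuracy per dialect group
# rather than one aggregate number. The data below is hypothetical.
from collections import defaultdict

results = [
    # (speaker_dialect, recognized_correctly)
    ("US Midwest", True), ("US Midwest", True), ("US Midwest", True),
    ("Indian English", True), ("Indian English", False),
    ("Scottish English", False), ("Scottish English", True),
    ("AAVE", True), ("AAVE", False),
]

totals = defaultdict(lambda: [0, 0])  # dialect -> [correct, total]
for dialect, correct in results:
    totals[dialect][0] += int(correct)
    totals[dialect][1] += 1

overall = sum(c for c, _ in totals.values()) / sum(t for _, t in totals.values())
print(f"{'Overall':18s} {overall:.0%}  <- hides the gaps below")
for dialect, (correct, total) in sorted(totals.items()):
    print(f"{dialect:18s} {correct / total:.0%}")
```

An aggregate score can look respectable while specific groups, like the Scottish English speakers in HBR’s data, fare far worse. You can’t fix a gap you never measure.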
