Algorithms Inherit Human Intentions

How do you make a digital tool better? Start with the manual work. Digital tools and algorithms can improve manual processes, but only if we improve the manual processes themselves. All algorithms inherit human intentions. And without conscious effort, algorithms inherit human bias.

Is tech moving too fast?

There are two seemingly opposing statements I hear in the technology world.

  1. Manual processes contain bias. Digitizing and using algorithms would make them better.
  2. Technology is moving too fast and adding bias.

But as it turns out, both are true. Yes, our manual processes often include bias. People manage a process, and people carry all sorts of cognitive biases in their heads. It’s part of what makes us human. If we digitize our processes, we can stop some of that bias from occurring. But yes, we also build our tech very quickly. That means we often don’t stop to notice whether we are building that bias right into the very algorithms we hope will be unbiased.

Bias in action: from poorhouse to database

In her book Automating Inequality, Virginia Eubanks explores poverty, and shows how human biases around poverty get hard-coded into algorithms. For example, in the early 1800s Josiah Quincy believed there were two reasons people were poor. Some people were sick, old, or infirm; other people were lazy. Quincy believed that society should help only the sick, old, and infirm. He thought that society shouldn’t help “lazy” people.

But Quincy didn’t have data to show how many job opportunities were available at the time, or how that number mapped to the number of people seeking work. So he had no way of knowing whether anyone was actually being “lazy”. Quincy also assumed that everyone “should” work a certain amount. He was only one man, yet today programs like Medicaid and SNAP (which provides food stamps) judge people based on Quincy’s mindset.

Eubanks summarizes the problem well:

For all their high-tech polish, our modern systems of poverty management–automated decision-making, data mining, and predictive analytics–retain a remarkable kinship with the poorhouses of the past.

Virginia Eubanks, Automating Inequality

I would go even further. Eubanks says they share a “remarkable kinship”. But I would say this kinship isn’t “remarkable” at all. Unless program and algorithm designers stop it, this kinship is inevitable.

How to make digital tools better

There is only one way to make sure algorithms are unbiased, and that is to slow down. For each automation, someone needs to look for bias. And every time a process is coded for machine learning, we need research.
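To make “look for bias” concrete, here is a minimal sketch, in Python, of one common check a team could run before shipping an automated decision process: compare outcome rates across groups and flag large gaps. The data, group names, and threshold here are hypothetical illustrations, not a prescribed method.

```python
# A minimal sketch of the kind of bias check the author calls for:
# before shipping an automated decision process, compare outcome
# rates across groups. All names and data below are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's approval rate to the highest.
    A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit of an automated benefits screener.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.75, 'group_b': 0.25}
print(disparate_impact_ratio(rates))  # 0.33 -- well below 0.8; investigate.
```

A check like this can’t prove an algorithm is unbiased, but it forces the team to slow down and ask the question.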

I’ve written before about how content strategists can address racism, and how UX designers can ask the hard questions. Our engineers have a role as well. They can say: stop. Slow down. And they can ask for research.

When in doubt, a UX designer, content strategist, or engineer can look for the worst-case scenario. It’s not an edge case. And when not in doubt, we should still double-check. After all, our biases are often ingrained. And algorithms inherit human intentions. Let’s make sure our intentions are clear, so that our algorithms can improve.

