When I read Automating Inequality, I thought about the many misconceptions of AI (artificial intelligence). Many companies market AI, machine learning, and digital work in general as “magic”. But that’s not a helpful way to think about the work. When we think of it as “magic”, we forget that it’s nuanced. More importantly, we forget that we can change it.
Here are a few of the biggest misconceptions of AI:
- Data is not an answer in and of itself. It needs context and interpretation.
- Digital is not a solution. It’s a tool.
- A digital strategy is not a tactic.
Data is not an answer
One thing Automating Inequality gets very, very right is that data alone doesn’t provide an answer to anything. What’s more, there’s no such thing as neutral data. Data gets interpreted, and as soon as that happens a human being puts a spin on it. Or, put another way, data is information. To make sense of it, people interpret it. That turns it into knowledge – and knowledge is not neutral.
For example, one case study in Automating Inequality shows a system in Allegheny County, Pennsylvania. The system was built to find children at risk of abuse. But the data in the system was just that – a series of numbers assigned for different risks. Ultimately, the system took all those numbers and created a recommendation.
No one number – not even a collection of numbers – will ever give an answer. To create knowledge (or “insights” as many tech companies say), a person teaches the system to map numbers together into combinations that create a picture of a person or situation. Data alone will never hold answers.
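That mapping step can be made concrete with a small sketch. This is purely hypothetical: the factor names and weights below are invented for illustration and are not Allegheny County’s actual model. The point is that the “answer” comes from weights a person chose, not from the raw numbers themselves.

```python
# Hypothetical risk-scoring sketch. Every factor name and weight here is
# invented for illustration; no real system's details are reproduced.

# A person, not the data, decides which factors matter and by how much.
WEIGHTS = {
    "prior_referrals": 3.0,
    "missed_appointments": 1.5,
    "household_size": 0.5,
}

def risk_score(record):
    """Combine raw numbers into one score using the human-chosen weights."""
    return sum(WEIGHTS[factor] * record.get(factor, 0) for factor in WEIGHTS)

# The same raw data produces a different "answer" the moment a person
# changes a weight - the data itself never changed.
family = {"prior_referrals": 2, "missed_appointments": 1, "household_size": 4}
print(risk_score(family))  # 9.5 with the weights above
```

Change any value in `WEIGHTS` and the recommendation changes, even though the underlying data is identical – which is exactly why data alone is never the answer.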
Digital is not a solution
The second misconception of artificial intelligence – and digital work in general – is that it’s a solution. Digital is not a solution. It’s a tool that can be used to do something.
Because marketing teams call AI “magic”, it’s easy to forget that the system can be tweaked. The numbers alone don’t identify a person, and the fact that the system is digital doesn’t make it work.
In another example from Automating Inequality, author Virginia Eubanks showcases Indiana’s welfare distribution system. She shows how the system cut benefits for many, many people. But early in the case study she also speaks to the goals of the people building the system. Their goals were not primarily human-centered; they were financial. In other words, though Eubanks concludes that the system failed, in reality it did exactly what it was built to do. Being digital doesn’t solve anything.
Strategy is not a tactic
Lastly, many organizations create a digital strategy and think they’re done. After all, with a strategy they can largely avoid the first two errors. But the problem here is that a strategy is a big-picture plan to accomplish a goal. A strategy is not a tactic.
For example, a strategy might be to use information about a specific population to identify children at risk (like in Allegheny County). The strategy might go into details about how to revisit the algorithm over time and improve it. But the tactics need to include the answers to questions like:
- How many social workers do we hire?
- Who interprets the data?
- When do we revisit the algorithm?
These tactics become clear jobs for people (or machines) to carry out. As people implement the tactics, they can align any decisions with the strategy. In other words, tactics without a strategy are just random actions. But a strategy without tactics is just an unimplemented thought that does no one any good.
Misconceptions of AI
Too often people blame AI for not being “disruptive” enough. Few people realize that an AI system follows orders. Digital tools inherit human intentions. No wonder, in Eubanks’ words, digital is “more evolution than revolution”.
To create truly unbiased AI, it’s important to remember that AI is what we make of it. Its data does not hold our answers, its digital nature is not a magical solution, and building a strategy is not the same as implementing tactics.