When new technology comes into being, there are often two radical beliefs that spring up:
- That new technology will corrupt our youth and irreversibly damage our society
- That new technology will create a utopian society of equality and opportunity for all
The two beliefs are reflected as far back in time as Socrates’s day, when he feared that writing would erode memory (it has, in part) and ruin society (it hasn’t, I personally believe). They are mirrored in reactions to the industrial revolution, the telegraph and then the telephone, and most recently, the Internet.
While we hear a lot about the first belief, born mostly of fear, today I’m more interested in the second: Utopianism via technology.
Technology as Utopia
The main reason that people believe technology may create a Utopian Society is that many technologies influence our ability to communicate with one another. Many of us, particularly optimists, want to believe that we can do better, be better, and better understand one another. So with each new technology, we see new opportunities for better communication, and thus better understanding. The belief that technology will create a Utopia is nestled within the larger belief that if only we could truly understand one another, we would all have equal opportunities and live in a brother(sibling?)hood of (wo)man.
While this is an admirable goal, it overlooks a simple fact. Yes, many of us strive to be “better” than we are, and disruptive technology offers new methods for shining a light on our wrongdoings and providing opportunities for better communication. But we are also a product of our history, both personal and societal. New technology will not wipe away the past, and so we don’t come to the new technology with clean slates and fresh eyes.
Our technology can’t do anything unless our attitudes change.
Tools vs Actions
New technology provides new methods of communication, new spaces to inhabit, and sometimes a fresh start with a new personal identity. But if we come to the new technology and treat it in the same way we always have, nothing will change. We’ll simply recreate the old society in a new space. In her book It’s Complicated: The Social Lives of Networked Teens, danah boyd points out that social networking sites begin our networks with the people we already know, and then connect us to other people through shared interests, which often correlate to the people or types of people we already know in the outside world.
In other words, though a white, upper middle class high school student may have the opportunity to meet people who live a very different life, she’s most likely to find her white, upper middle classmates first on social media, and then be recommended other friends based on her interest in horseback riding and her complaints about her math tutor – two things dominated by other white, upper middle class students.
We can create technology to purposely connect people who might not otherwise find one another, and highlight their similarities, but it takes intentional action. It’s not something we can leave to chance or expect the technology to do naturally, without our intervention. As designers, the intentions we bring to our projects control what we create. Unsurprisingly, this means that if we don’t bring purpose and intention to our design, we’ll naturally design experiences that rely on our own prior experiences, our unconsidered-yet-present biases, and yes, even our inadvertent discriminations.
Behaviors and Biases
“…the mere existence of new technology neither creates nor magically solves existing cultural problems. In fact, their construction typically reinforces existing social divisions. This sometimes occurs when designers intentionally build tools in prejudicial ways. More often it happens inadvertently when creators fail to realize how their biases inform their design decisions.” -danah boyd, It’s Complicated: The Social Lives of Networked Teens
To that end, I’d like to reclaim the word “bias.” We’re terrified to admit our biases because of the term’s connotations. No one wants to admit to bias when bias implies racism, bigotry, sexism, and other discrimination. However, we all have ingrained familiarities – biases built out of the inability to know what we don’t know.
I didn’t know that I was biased against restaurant servers and chefs until I worked in a restaurant, and learned that my habit of ordering with multiple items “on the side” stressed out servers and offended some chefs. That bias would come out if I designed a system for online ordering. I didn’t know that I was biased against families with young children until I began taking car trips with my brother and his kids. That bias would impact any design I made for a car manufacturer or travel organization.
To that end, bias doesn’t mean hatred. It doesn’t even mean distaste or elitism. My bias was a measure of my ignorance – and the number of things of which I am ignorant, even as it shrinks, is still nearly infinite. When our bias informs our research, we may neglect to speak to target users in certain income brackets, social situations, or cultural spaces. Usability testing done in person during the week excludes those who can’t take a break from work. Interviews done in the evenings exclude parents who can’t get childcare. Research done on a Saturday excludes many Jews, just as research on a Sunday morning excludes many Christians.
The catch is this: while we can remove the biases we come across, we won’t find them all. We can only keep searching, acknowledge our own ignorance, and keep asking questions. New technology can’t save us from ourselves, but our willingness to learn is a step in the right direction.