Ethical Action Requires Room to Grow

Lately, it’s trendy to pull up a tweet from one, five, or ten years ago to show that someone (an actor, a politician, etc.) who has since reversed their position on a topic is a hypocrite. I want to hold people accountable for their words and actions. But I’m worried about this particular method of “proving” hypocrisy.

Is it Ethical to Change Your Mind?

Obviously, using someone’s old tweets to catch them out is intended to show that they either lied then or are lying now. The tweet is meant to prove: “you don’t really support [trans rights, climate action, freedom of speech]; you’re just saying you do because it’s popular now, and thus you can’t be trusted.”

That may be true, particularly when it comes to public figures attempting to capture popular opinion.

But what about in the tech world? In science and in design, we tend to make assumptions (or hypotheses). We then design with those assumptions in mind, test them, and learn from the results. An example: I recently proposed that a client increase their SEO budget to reach the audience members searching for them through Google. They agreed, and we started with user research (to identify keywords). However, the research produced an unexpected finding: the client’s audience wasn’t using Google to find them at all. They were finding the client through recommendations and heading directly to the site.

Imagine if I had come back to the client and said, “Look what we learned!” only to have them reply, “Why should we believe you? That’s different from what you said last month!”

Now let’s think about an even greater danger: how new information may inform ethics.

Ethical Education

There’s been a bit of news lately about a new AI from OpenAI (the research lab Elon Musk co-founded), whose creators say it’s “too dangerous to release.” Most of the coverage seems to focus on judgement. Some people want to know, “Why did they create something so dangerous?” Others assume, “It’s not that dangerous, and they shouldn’t/don’t have the right to withhold it.”

Both of these responses are shortsighted. Let’s assume the team began with an assumption: that the AI would be useful, valuable, and helpful. After developing and testing it, they learned something new: it was dangerous.

How we respond to this new information may change the way technology is developed in the future.

Ethical Action May Mean Slowing Down

Sara Holoubek has spoken and written about the dangers of moving too quickly in the tech world: we move so fast that we don’t always take the time to understand the consequences of our actions.

I believe that for technology to slow down, we’ll need to acknowledge how the work actually happens: assumptions are made, betas are built, designs are tested. Only after testing will we know enough to judge – and that may mean halting or withholding newly developed tech.

Yes, let’s hold ourselves accountable. But let’s also allow ourselves room to grow.
