Form Follows Me, by Mark Rolston
Mark runs a product design consultancy, Argo Design, which does both product and conceptual work. He previously worked at Frog, and he's been in the industry since 1994, so he's seen a lot of change. Computing is driving rapid and fundamental change in society. The iPhone is about to turn 10 years old! That was the first time a computer truly became mobile – part of the rest of life. Since then, computing has kept getting more connected, and cheaper.
The design of a hammer is manifest: it's clear from its shape and form what it does. Similarly, the design of a home screen shows what the phone can do. But much of what matters in how a design behaves is becoming like an iceberg – most of it is underwater.
The line is blurring between it (object) and she (person). What is the Amazon Echo? Is it a thing, or is it Alexa? How do we know what the Echo does? The software isn’t even present in the thing you see.
- Jeans that tell you when your fly is unzipped
- A Barbie doll that responds to what you say
- The software isn’t in there
We’re moving to a world with collaboration and inquiry, no screens, and to contextual moments. What is it, and what does it mean for us as designers? We’ll look at several trends, and what they mean for us.
Characters like Siri and Alexa require a brand and a personality. Right now their behavior is based on the information they have. But there are also Uber, Instacart – services with more levels of abstraction. Do we need to ask Siri to ask Alexa to tell Uber to get a car? Or does Uber have a voice too?
Symmetrical interactions have a matched input and output. Voice isn't symmetrical – the input may be a question to Siri, but the output may need to come from other apps.
Cars that tell you when they need an oil change are also asymmetrical – you don't ask; they tell you. And the right moment isn't while you're in the car driving somewhere. The car needs to tell you while you're in your office, so you can plan ahead.
Plus, how do they connect to us? We don’t want to need to wear Google Glass, and remove ourselves from the world. The whole point is to be in the world.
What if we could make an arbitrary object into a temporary connector? A Coca-Cola bottle becomes a volume control – but only for now, not forever.
We anthropomorphize robots, making them into pets or people. This is especially true now that they're connected to third-party systems and seem to respond to us.
Uncanny valley: when a robot gets almost – but not quite – realistic, and it creeps us out.
The "meta-me" is the concept that we have a physical presence and also a digital presence, catalogued by all the places we visit online and all the interactions we have. Our "digital DNA" is becoming as immutable as our physical DNA. We often confuse references to things for the things themselves. A map shows a collection of ideas – but not the ideas themselves. The meta-me, however, purposely blurs them – the map is becoming the actual creation, the actual person.
A cognitive system isn't simply dumb or sentient. There's a spectrum:
- Dumb machines
- Decision support (where most systems are today)
- Sentient machines (currently just sci-fi)
Most of our systems are currently focused on decision support. That's what's useful today. It's like a two-year-old who passed the bar exam – it doesn't know much overall, but it knows everything about one specific topic. Every decision that can be supported by computing will be. This is now our problem as designers. Our digital creations are companions to our consciousness.
The meta-me is a proactive personal assistant. It will soon be part of how our decisions get made. How do we know what to trust as "us" vs. the meta-me?
We are already cognitively dependent on tools like GPS and Google Maps.