I’m catching up on Agents of S.H.I.E.L.D., and (not a spoiler) there’s a robot (technically an android, since she looks human) in a recent Season 4 episode. She’s a nearly perfect humanesque android, and her creator pronounces that she has come “through the Uncanny Valley and out the other side!” She’s any techie’s dream creation – ideally, the future of UI.
Indeed, the android looks and acts human* and easily passes most Turing Tests, fooling people into believing she is real, picking up on slang and idioms, even beginning to grasp when lying is “allowed” and when she must be truthful as programmed. It has also quickly become clear that her creator is in love with her, which is perhaps a Turing Test of its own.
This is not the only Marvel android to blur the human/robot line, as well as the AI/UI line – and this is where it gets interesting. Vision, for example, is an android created out of an AI, and is now an independent entity with his own set of morals, beliefs, and emotions. Tony Stark’s Jarvis is a UI who is also arguably Tony’s best friend. And then there’s Karen.
If you haven’t yet seen Spider-Man: Homecoming, you might want to stop here, because some (minor) spoilers are coming up.
When Does UI Become AI?
Karen is the name Peter Parker gives to his “suit lady,” the intelligent UI Tony Stark encoded into Peter’s Spider-Man suit. Karen knows the suit backwards and forwards, and can provide tutorials and recommendations as well as take natural-language commands from Peter. But she is more than a UI: she also responds to Peter’s emotional needs, provides advice, and learns from him.
This blurred line isn’t imaginary. In the real (non-Marvel) world, this combination of UI and AI is known as Intelligent UI, or IUI: an interface that has elements of AI, and can thus learn. If that’s real, then how far away are we from creating our own Karens, Jarvises, and others?
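To make the definition concrete, here is a minimal, purely hypothetical sketch of the idea in Python – a plain command interface with one tiny “learning” layer bolted on (a usage model that reorders suggestions based on past behavior). The class and command names are invented for illustration; real IUI research involves far richer models than a frequency counter.

```python
from collections import Counter


class IntelligentUI:
    """Hypothetical sketch: a command interface with a trivial learning layer.

    A plain UI just maps commands to actions; the 'intelligent' part here is
    a usage model that adapts which suggestions surface first.
    """

    def __init__(self, commands):
        self.commands = set(commands)
        self.usage = Counter()  # learned model: how often each command is used

    def run(self, command):
        if command not in self.commands:
            raise ValueError(f"unknown command: {command}")
        self.usage[command] += 1  # learn from each interaction
        return f"executing {command}"

    def suggest(self, n=3):
        # Adapt the interface: the user's most-used commands come first,
        # followed by the remaining commands in a stable order.
        ranked = [cmd for cmd, _ in self.usage.most_common()]
        rest = sorted(self.commands - set(ranked))
        return (ranked + rest)[:n]


# Usage: after a few interactions, the interface has adapted to the user.
ui = IntelligentUI(["web shooter", "taser web", "recon drone"])
ui.run("taser web")
ui.run("taser web")
ui.run("recon drone")
print(ui.suggest())  # ['taser web', 'recon drone', 'web shooter']
```

The point isn’t the (deliberately simple) counter – it’s that the interface changes as a function of what it observes about its user, which is the property that separates an IUI from a static UI.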
To answer that, we need to look at two things.
- What’s currently being built in the IUI realm?
- How realistic are Marvel’s IUIs?
What’s Being Built?
Since I am far from an IUI expert (outside of the Marvel Universe, that is), please consider this recap of what I’ve recently read as a starting point. I’ll share links to more information I’m aware of, and I hope this will start a larger conversation where I too can learn more about the current state of IUI – and the future of UI. Here are some of the more interesting things I’ve found.
“The artificially intelligent conversation agent is the last user interface.” This article from Computerworld last fall makes the case that all future UIs will work Star Trek style, with everything done by voice command. Apparently they didn’t watch the DS9 episode where a virus left people unable to speak anything but gibberish. I’m hoping for accessibility backups that allow for touch screens or some form of sign language.
“As artificial intelligence becomes people’s “go-to” technology, it will require enterprises to step up to a new challenge: creating responsible AI.” In April, WIRED published the Accenture Technology Vision 2017, including a chapter on AI and Accenture’s belief that AI is “the future.” The write-up is characteristically vague, referring to how AI will change the future of technology, yada yada. Even when discussing “responsible AI,” they nod to accessibility and to providing “better outcomes for people and society” without any concrete idea of what that means. My assumption is that we need to overcome some of the security risks (is Roomba selling your floor plan?) and some of the privacy concerns (why is Alexa listening to your every word?).
“The system needs to know more about us for invisible UI to become reality.” Speaking of privacy concerns, TechCrunch wrote about invisible UI last year, and pointed out that we need to give up quite a bit of information for a UI to anticipate and respond to us. That means we need to be able to trust our UI for the whole system to work. It’s easy for Iron Man to trust Jarvis – he built him! For the rest of us, short of designing our own AI, we’ll definitely need some combination of trust and systems in place to protect users from the businesses that build the IUIs.
How Realistic are Marvel’s IUIs?
At its core, an IUI is an interface that can learn, as AI does. From that perspective, there’s nothing in Marvel’s IUIs that seems unrealistic. However, there are several areas where the IUIs are a little… fictional. They are likely not the actual future of UI.
- Personality: Vision, a full android, is sentient – yet Karen, Jarvis, and the others have equally sentient personalities. Their personalities grow and change. They get offended, they have opinions, and they have emotions and a sense of self. They are beyond AI – they are self-aware.
- Physicality: Intelligence and physical ability are not connected, but in Marvel’s universe, AIs are uploaded into bodies (see: Aida, the android at the start of this article), and once the mental piece is there, they move like humans. This sadly ignores how incredibly complex the human body is. Plus, on a personal note, I wonder whether separating the two in our comic worlds would increase empathy for people with physical disabilities. Maybe we need more Professor Xs to increase accessibility in the real world.
We’ve got a ways to go. But IUIs are real, and it’s always fun to see how our fantasy worlds and real sciences compare.
*it helps that she’s played by a human actress