Question 1: How realistic should my robot look? (Answer: Don’t go overboard.)
Posted: February 17, 2012 | Filed under: Uncategorized
My to-do list for this year: Create invention. Change the world. Get rich.
The invention: A robot that goes to meetings for you.
Changing the world: In addition to freeing up a load of time, this invention will prevent a ginormous number of needless conflicts. (If you hadn’t been at the meeting, you wouldn’t have responded to your colleague’s idiotic comment, and we’d all be better off, right?)
Getting rich: This part seems obvious, since everybody needs one.
So, job one: Major R&D. How convincing can this robot be?
Time to call Malcolm MacIver, a Northwestern University scientist who consulted with the producers of the Battlestar Galactica prequel Caprica. In other words, he’s one of the guys Hollywood people call when they want to know, “How do we make this robot really lifelike?”
And he’s built these crazily awesome robotic fish. (They even sing.) But fish, even singing fish, don’t go to meetings.
MacIver tells me about a project by a colleague of his in Japan, Hiroshi Ishiguro. “He had the Japanese movie-making industry create a stunningly accurate reproduction of him,” MacIver says. “So he can send his physical robot to a meeting and it will smile and furrow its brow, and talk through his mouth.”
How accurate are we talking about? “It’s realistic enough that he doesn’t want to show his young daughter,” MacIver says, “because he thinks it would creep her out.”
Wow. So, is this Ishiguro guy beating me to market? No, as MacIver describes things, Ishiguro uses his robot for pure research.
Specifically, Ishiguro studies non-verbal elements of communication “by disrupting them,” says MacIver. “So you can say, ‘OK, I’m going to shut off eyebrow movement today, and how does that affect people’s ability to understand what I’m talking about?’ You know, are they still able to get the emotional content?”
So, back to stunningly accurate: Ishiguro’s robot would creep out a three-year-old… but does it fool his adult research subjects? Would it fool my colleagues, if I left eyebrow movement switched on?
Not so much, says MacIver.
What if he just got a much, much bigger grant? “Um, unlikely,” MacIver says.
OK: super-lifelike, no-go.
There’s also a robot that listens, really well; it can almost convince you that it’s paying attention to you. In the YouTube video I saw, it looked like WALL*E.
It had these big goggle eyes that would bug out a little bit. It nods, makes eye contact, responds emotionally to you. The point of the experiment was kind of heartbreaking: could you make old people in nursing homes less lonely if they had someone to listen to them, and would this do it?
And even for ten seconds, watching this guy in the lab coat, you think: Yeah, maybe.
So, I tell MacIver, now I’m starting to think that the robot should be a cartoon version of me.
“Well, right, that’s a good point,” he says. “If you can’t do it perfectly, go to the other side of the uncanny valley and you’ll be more effective.”
The “uncanny valley” turns out to be this phenomenon where animated characters (or robots) become creepy when they get too real-looking. Think of the 2004 movie The Polar Express.
Lawrence Weschler explained it this way in a 2010 interview with On the Media:
If you made a robot that was 50 percent lifelike, that was fantastic. If you made a robot that was 90 percent lifelike, that was fantastic. If you made it 95 percent lifelike, that was the best – oh, that was so great. If you made it 96 percent lifelike, it was a disaster. And the reason, essentially, is because a 95 percent lifelike robot is a robot that’s incredibly lifelike. A 96 percent lifelike robot is a human being with something wrong.
So: I want a cartoon avatar.
That’s one question down, but there’s a lot more R&D to do. Next, I think I need to talk with some artificial intelligence specialists…
… to make sure that the robot knows what to say if someone in the meeting asks “me” a question.