Lex Fridman Podcast - #98 - Kate Darling: Emotional Connection Between Humans and Robots

The following is a conversation with Kate Darling, a researcher at MIT,

interested in social robotics, robot ethics, and generally how technology intersects with society.

She explores the emotional connection between human beings and lifelike machines,

which for me is one of the most exciting topics in all of artificial intelligence.

As she writes in her bio, she is a caretaker of several domestic robots,

including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti.

She is one of the funniest and brightest minds I’ve ever had the fortune to talk to.

This conversation was recorded recently, but before the outbreak of the pandemic.

For everyone feeling the burden of this crisis, I’m sending love your way.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,

review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter

at Lex Fridman, spelled F R I D M A N. As usual, I’ll do a few minutes of ads now and never any

ads in the middle that can break the flow of the conversation. I hope that works for you and

doesn’t hurt the listening experience. Quick summary of the ads. Two sponsors,

Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to

Masterclass at masterclass.com slash Lex and getting ExpressVPN at expressvpn.com slash Lex

Pod. This show is sponsored by Masterclass. Sign up at masterclass.com slash Lex to get a discount

and to support this podcast. When I first heard about Masterclass, I thought it was too good to

be true. For $180 a year, you get an all access pass to watch courses from, to list some of my

favorites. Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and

communication, Will Wright, creator of SimCity and Sims, love those games, on game design,

Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more.

Chris Hadfield explaining how rockets work and the experience of being launched into space alone

is worth the money. By the way, you can watch it on basically any device. Once again,

sign up on masterclass.com slash Lex to get a discount and to support this podcast.

This show is sponsored by ExpressVPN. Get it at expressvpn.com slash Lex Pod to get a discount

and to support this podcast. I’ve been using ExpressVPN for many years. I love it. It’s easy

to use, press the big power on button, and your privacy is protected. And, if you like, you can

make it look like your location is anywhere else in the world. I might be in Boston now, but I can

make it look like I’m in New York, London, Paris, or anywhere else. This has a large number of

obvious benefits. Certainly, it allows you to access international versions of streaming websites

like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I

use it on Linux. Shout out to Ubuntu 20.04, Windows, Android, but it’s available everywhere else too.

Once again, get it at expressvpn.com slash Lex Pod to get a discount and to support this podcast.

And now, here’s my conversation with Kate Darling.

You co-taught robot ethics at Harvard. What are some ethical issues that arise

in the world with robots?

Yeah, that was a reading group that I did when I, like, at the very beginning,

first became interested in this topic. So, I think if I taught that class today,

it would look very, very different. Robot ethics, it sounds very science fictiony,

especially did back then, but I think that some of the issues that people in robot ethics are

concerned with are just around the ethical use of robotic technology in general. So, for example,

responsibility for harm, automated weapon systems, things like privacy and data security,

things like, you know, automation and labor markets. And then personally, I’m really

interested in some of the social issues that come out of our social relationships with robots.

One on one relationship with robots.

Yeah.

I think most of the stuff we have to talk about is like one on one social stuff. That’s what I

love. I think that’s what you’re, you love as well and are expert in. But a societal level,

there’s like, there’s a presidential candidate now, Andrew Yang running,

concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI,

universal basic income of everybody gets 1000 bucks as a way to sort of save you if you lose

your job from automation to allow you time to discover what it is that you would like to or

even love to do.

Yes. So I lived in Switzerland for 20 years and universal basic income has been more of a topic

there separate from the whole robots and jobs issue. So it’s so interesting to me to see kind

of these Silicon Valley people latch onto this concept that came from a very kind of

left wing socialist, kind of a different place in Europe. But on the automation labor markets

topic, I think that it’s very, so sometimes in those conversations, I think people overestimate

where robotic technology is right now. And we also have this fallacy of constantly comparing robots

to humans and thinking of this as a one to one replacement of jobs. So even like Bill Gates a few

years ago said something about, maybe we should have a system that taxes robots for taking people’s

jobs. And it just, I mean, I’m sure that was taken out of context, he’s a really smart guy,

but that sounds to me like kind of viewing it as a one to one replacement versus viewing this

technology as kind of a supplemental tool that of course is going to shake up a lot of stuff.

It’s going to change the job landscape, but I don’t see, you know, robots taking all the

jobs in the next 20 years. That’s just not how it’s going to work.

Right. So maybe drifting into the land of more personal relationships with robots and

interaction and so on. I got to warn you, I go, I may ask some silly philosophical questions.

I apologize.

Oh, please do.

Okay. Do you think humans will abuse robots in their interactions? So you’ve had a lot of,

and we’ll talk about it sort of anthropomorphization and this intricate dance,

emotional dance between human and robot, but there seems to be also a darker side where people, when

they treat the other as servants, especially, they can be a little bit abusive or a lot abusive.

Do you think about that? Do you worry about that?

Yeah, I do think about that. So, I mean, one of my main interests is the fact that people

subconsciously treat robots like living things, even though they know that they’re interacting with a machine, and what it means in that context to behave violently. I don’t know if you could say

abuse because you’re not actually abusing the inner mind of the robot. The robot doesn’t have

any feelings.

As far as you know.

Well, yeah. It also depends on how we define feelings and consciousness. But I think that’s

another area where people kind of overestimate where we currently are with the technology.

Right.

The robots are not even as smart as insects right now. And so I’m not worried about abuse

in that sense. But it is interesting to think about what does people’s behavior towards these

things mean for our own behavior? Is it desensitizing people to be verbally abusive

to a robot or even physically abusive? And we don’t know.

Right. It’s a similar connection: like, if you play violent video games, what connection does

that have to desensitization to violence? I haven’t read literature on that. I wonder about that.

Because everything I’ve heard, people don’t seem to any longer be so worried about violent video

games.

Correct. The research on it is, it’s a difficult thing to research. So it’s sort of inconclusive,

but we seem to have gotten the sense, at least as a society, that people can compartmentalize. When

it’s something on a screen and you’re shooting a bunch of characters or running over people with

your car, that doesn’t necessarily translate to you doing that in real life. We do, however,

have some concerns about children playing violent video games. And so we do restrict it there.

I’m not sure that’s based on any real evidence either, but it’s just the way that we’ve kind of

decided we want to be a little more cautious there. And the reason I think robots are a little

bit different is because there is a lot of research showing that we respond differently

to something in our physical space than something on a screen. We will treat it much more viscerally,

much more like a physical actor. And so it’s totally possible that this is not a problem.

And it’s the same thing as violence in video games. Maybe restrict it with kids to be safe,

but adults can do what they want. But we just need to ask the question again because we don’t

have any evidence at all yet. Maybe there’s an intermediate place too. I did my research

on Twitter. By research, I mean scrolling through your Twitter feed.

You mentioned that you were going at some point to an animal law conference.

So I have to ask, do you think there’s something that we can learn

from animal rights that guides our thinking about robots?

Oh, I think there is so much to learn from that. I’m actually writing a book on it right now. That’s

why I’m going to this conference. So I’m writing a book that looks at the history of animal

domestication and how we’ve used animals for work, for weaponry, for companionship.

And one of the things the book tries to do is move away from this fallacy that I talked about

of comparing robots and humans because I don’t think that’s the right analogy. But I do think

that on a social level, even on a social level, there’s so much that we can learn from looking

at that history because throughout history, we’ve treated most animals like tools, like products.

And then some of them we’ve treated differently and we’re starting to see people treat robots in

really similar ways. So I think it’s a really helpful predictor to how we’re going to interact

with the robots. Do you think we’ll look back at this time like 100 years from now and see

what we do to animals as like similar to the way we view like the Holocaust in World War II?

That’s a great question. I mean, I hope so. I am not convinced that we will. But I often wonder,

you know, what are my grandkids going to view as, you know, abhorrent that my generation did

that they would never do? And I’m like, well, what’s the big deal? You know, it’s a fun question

to ask yourself. It always seems that there’s atrocities that we discover later. So the things

that at the time people didn’t see as, you know, you look at everything from slavery to any kinds

of abuse throughout history to the kind of insane wars that were happening to the way war was carried

out and rape and the kind of violence that was happening during war that we now, you know,

we see as atrocities, but at the time perhaps didn’t as much. And so now I have this intuition

that I have this worry, maybe you’re going to probably criticize me, but I do anthropomorphize

robots. I don’t see a fundamental philosophical difference between a robot and a human being

in terms of once the capabilities are matched. So the fact that we’re really far away doesn’t,

in terms of capabilities and then that from natural language processing, understanding

and generation to just reasoning and all that stuff. I think once you solve it, I see though,

this is a very gray area and I don’t feel comfortable with the kind of abuse that people

throw at robots. Subtle, but I can see it becoming, I can see basically a civil rights movement for

robots in the future. Do you think, let me put it in the form of a question, do you think robots

should have some kinds of rights? Well, it’s interesting because I came at this originally

from your perspective. I was like, you know what, there’s no fundamental difference between

technology and like human consciousness. Like we, we can probably recreate anything. We just don’t

know how yet. And so there’s no reason not to give machines the same rights that we have once,

like you say, they’re kind of on an equivalent level. But I realized that that is kind of a

far future question. I still think we should talk about it because I think it’s really interesting.

But I realized that it’s actually, we might need to ask the robot rights question even sooner than

that while the machines are still, quote unquote, really dumb and not on our level because of the

way that we perceive them. And I think one of the lessons we learned from looking at the history of

animal rights and one of the reasons we may not get to a place in a hundred years where we view

it as wrong to, you know, eat or otherwise, you know, use animals for our own purposes is because

historically we’ve always protected those things that we relate to the most. So one example is

whales. No one gave a shit about the whales. Am I allowed to swear? Yeah, no one gave a shit about

freedom. Yeah, no one gave a shit about the whales until someone recorded them singing. And suddenly

people were like, oh, this is a beautiful creature and now we need to save the whales. And that

started the whole Save the Whales movement in the 70s. So as much as I, and I think a lot of people

want to believe that we care about consistent biological criteria, that’s not historically

how we formed our alliances. Yeah, so what, why do we, why do we believe that all humans are created

equal? Killing of a human being, no matter who the human being is, that’s what I meant by equality,

is bad. And then, because I’m connecting that to robots and I’m wondering whether mortality,

so the killing act is what makes something, that’s the fundamental first right. So I am currently

allowed to take a shotgun and shoot a Roomba. I think, I’m not sure, but I’m pretty sure it’s not

considered murder, right? Or even shutting them off. So that’s, that’s where the line appears to

be, right? Is this mortality a critical thing here? I think here again, like the animal analogy is

really useful because you’re also allowed to shoot your dog, but people won’t be happy about it.

So we give, we do give animals certain protections from like, you’re not allowed to torture your dog

and set it on fire, at least in most states and countries, but you’re still allowed to treat it

like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time.

And, you know, there’s a lot of philosophical thought on why viewing humans as something unique

is not, is just speciesism and not, you know, based on any criteria that would actually justify

making a difference between us and other species. Do you think in general people, most people are

good? Do you think, or do you think there’s evil and good in all of us? Is that’s revealed through

our circumstances and through our interactions? I like to view myself as a person who like believes

that there’s no absolute evil and good and that everything is, you know, gray. But I do think it’s

an interesting question. Like when I see people being violent towards robotic objects, you said

that bothers you because the robots might someday, you know, be smart. And is that why?

Well, it bothers me because it reveals, so I personally believe, because I’ve studied way too,

so I’m Jewish. I studied the Holocaust and World War II exceptionally well. I personally believe

that most of us have evil in us. That what bothers me is the abuse of robots reveals the evil in

human beings. And it’s, I think it doesn’t just bother me. It’s, I think it’s an opportunity for

roboticists to make, help people find the better sides, the angels of their nature, right? That

abuse isn’t just a fun side thing. That’s you revealing a dark part of yourself,

that should be hidden deep inside. Yeah. I mean, you laugh, but some of our research does indicate

that maybe people’s behavior towards robots reveals something about their tendencies for

empathy generally, even using very simple robots that we have today that like clearly don’t feel

anything. So, you know, Westworld is maybe, you know, not so far off and it’s like, you know,

depicting the bad characters as willing to go around and shoot and rape the robots and the good

characters as not wanting to do that. Even without assuming that the robots have consciousness.

So there’s an opportunity, it’s interesting, there’s an opportunity to almost practice empathy on robots.

I agree with you. Some people would say, why are we practicing empathy on robots instead of,

you know, on our fellow humans or on animals that are actually alive and experienced the world?

And I don’t agree with them because I don’t think empathy is a zero sum game. And I do

think that it’s a muscle that you can train and that we should be doing that. But some people

disagree. So the interesting thing, you’ve heard, you know, raising kids sort of asking them or

telling them to be nice to the smart speakers, to Alexa and so on, saying please and so on during

the requests. I don’t know, I’m a huge fan of that idea because, yeah, that’s towards the idea of practicing empathy. I feel like, politeness, I’m always polite to all the systems that we build, especially anything that’s speech interaction based. Like when we talk to the car, we’ll always have a pretty good detector for please. I feel like there should be room for encouraging empathy

in those interactions. Yeah. Okay. So I agree with you. So I’m going to play devil’s advocate. Sure.

So what is the, what is the devil’s advocate argument there? The devil’s advocate argument

is that if you are the type of person who has abusive tendencies or needs to get some sort of

like behavior like that out, needs an outlet for it, that it’s great to have a robot that you can

scream at so that you’re not screaming at a person. And we just don’t know whether that’s true,

whether it’s an outlet for people or whether it just kind of, as my friend once said,

trains their cruelty muscles and makes them more cruel in other situations.

Oh boy. Yeah. And that expands to other topics, which I, I don’t know, you know, there’s a,

there’s the topic of sex, which is a weird one that I tend to avoid from a robotics perspective. And most of the general public doesn’t; they talk about sex robots and so on. Is that an area you’ve

touched at all research wise? Like the way, cause that’s what people imagine sort of any kind of

interaction between human and robot that shows any kind of compassion. They immediately think

from a product perspective in the near term is sort of expansion of what pornography is and all

that kind of stuff. Yeah. Do researchers touch this? Well that’s kind of you to like characterize

it as though there’s thinking rationally about product. I feel like sex robots are just such a

like titillating news hook for people that they become like the story. And it’s really hard to

not get fatigued by it when you’re in the space because you tell someone you do human robot

interaction. Of course, the first thing they want to talk about is sex robots. Yeah, it happens a

lot. And it’s, it’s unfortunate that I’m so fatigued by it because I do think that there

are some interesting questions that become salient when you talk about, you know, sex with robots.

See what I think would happen when people get sex robots, like if it’s some guys, okay, guys get

female sex robots. What I think there’s an opportunity for is that they’ll actually interact. What I’m trying to say is that, outside of the sex, the interaction would be the most fulfilling part. It’s like the folks in the movies, right, who pay a prostitute and then end up just talking to her the whole time. So I feel like

there’s an opportunity. It’s like most guys and people in general joke about this, the sex act,

but really people are just lonely inside and they’re looking for connection. Many of them.

And it’d be unfortunate if that connection is established through the sex industry. I feel like

it should go into the front door of like, people are lonely and they want a connection.

Well, I also feel like we should kind of, you know, de-stigmatize the sex industry because,

you know, even prostitution, like there are prostitutes that specialize in disabled people

who don’t have the same kind of opportunities to explore their sexuality. So it’s, I feel like we

should, like, de-stigmatize all of that generally. But yeah, that connection and that loneliness is

an interesting topic that you bring up because while people are constantly worried about robots

replacing humans and oh, if people get sex robots and the sex is really good, then they won’t want

their, you know, partner or whatever. But we rarely talk about robots actually filling a hole where

there’s nothing and what benefit that can provide to people. Yeah, I think that’s an exciting,

there’s a whole giant, there’s a giant hole that’s unfillable by humans. It’s asking too much of

people, your friends and people you’re in a relationship with and your family, to fill that

hole. There’s, because, you know, it’s exploring the full, like, you know, exploring the full

complexity and richness of who you are. Like who are you really? Like people, your family doesn’t

have enough patience to really sit there and listen to who are you really. And I feel like

there’s an opportunity to really make that connection with robots. I just feel like we’re

complex as humans and we’re capable of lots of different types of relationships. So whether that’s,

you know, with family members, with friends, with our pets, or with robots, I feel like

there’s space for all of that and all of that can provide value in a different way.

Yeah, absolutely. So I’m jumping around. Currently most of my work is in autonomous vehicles.

So the most popular topic among the general public is the trolley problem. So most, most,

most roboticists kind of hate this question, but what do you think of this thought experiment?

What do you think we can learn from it outside of the silliness of

the actual application of it to the autonomous vehicle? I think it’s still an interesting

ethical question. And that in itself, just like much of the interaction with robots

has something to teach us. But from your perspective, do you think there’s anything there?

Well, I think you’re right that it does have something to teach us because,

but I think what people are forgetting in all of these conversations is the origins of the trolley

problem and what it was meant to show us, which is that there is no right answer. And that sometimes

our moral intuition that comes to us instinctively is not actually what we should follow if we care

about creating systematic rules that apply to everyone. So I think that as a philosophical

concept, it could teach us at least that, but that’s not how people are using it right now.

These are friends of mine and I love them dearly and their project adds a lot of value. But if

we’re viewing the moral machine project as what we can learn from the trolley problems, the moral

machine is, I’m sure you’re familiar, it’s this website that you can go to and it gives you

different scenarios like, oh, you’re in a car, you can decide to run over these two people or

this child. What do you choose? Do you choose the homeless person? Do you choose the person who’s

jaywalking? And so it pits these like moral choices against each other and then tries to

crowdsource the quote unquote correct answer, which is really interesting and I think valuable data,

but I don’t think that’s what we should base our rules in autonomous vehicles on because

it is exactly what the trolley problem is trying to show, which is your first instinct might not

be the correct one if you look at rules that then have to apply to everyone and everything.

So how do we encode these ethical choices in interaction with robots? For example,

autonomous vehicles, there is a serious ethical question of do I protect myself?

Does my life have higher priority than the life of another human being? Because that changes

certain control decisions that you make. So if your life matters more than other human beings,

then you’d be more likely to swerve out of your current lane. So currently automated emergency

braking systems that just brake, they don’t ever swerve. So swerving into oncoming traffic or

no, just in a different lane can cause significant harm to others, but it’s possible that it causes

less harm to you. So that’s a difficult ethical question. Do you have a hope that

like the trolley problem is not supposed to have a right answer, right? Do you hope that

when we have robots at the table, we’ll be able to discover the right answer for some of these

questions? Well, what’s happening right now, I think, is this question that we’re facing of

what ethical rules should we be programming into the machines is revealing to us that

our ethical rules are much less programmable than we probably thought before. And so that’s a really

valuable insight, I think, that these issues are very complicated and that in a lot of these cases,

it’s you can’t really make that call, like not even as a legislator. And so what’s going to

happen in reality, I think, is that car manufacturers are just going to try and avoid

the problem and avoid liability in any way possible. Or like they’re going to always protect

the driver because who’s going to buy a car if it’s programmed to kill someone?

Yeah.

Kill you instead of someone else. So that’s what’s going to happen in reality.

But what did you mean by like once we have robots at the table, like do you mean when they can help

us figure out what to do?

No, I mean when robots are part of the ethical decisions. So no, no, no, not they help us. Well.

Oh, you mean when it’s like, should I run over a robot or a person?

Right. That kind of thing. So what, no, no, no. So when you, it’s exactly what you said, which is

when you have to encode the ethics into an algorithm, you start to try to really understand

what are the fundamentals of the decision making process you make to make certain decisions.

Should you, like capital punishment, should you take a person’s life or not to punish them for

a certain crime? Sort of, you can use, you can develop an algorithm to make that decision, right?

And the hope is that the act of making that algorithm, however you make it, so there’s a few

approaches, will help us actually get to the core of what is right and what is wrong under our current

societal standards.

But isn’t that what’s happening right now? And we’re realizing that we don’t have a consensus on

what’s right and wrong.

You mean in politics in general?

Well, like when we’re thinking about these trolley problems and autonomous vehicles and how to

program ethics into machines and how to, you know, make AI algorithms fair and equitable, we’re

realizing that this is so complicated and it’s complicated in part because there doesn’t seem

to be a one right answer in any of these cases.

Do you have a hope for, like one of the ideas of the moral machine is that crowdsourcing can help

us converge towards, like democracy can help us converge towards the right answer.

Do you have a hope for crowdsourcing?

Well, yes and no. So I think that in general, you know, I have a legal background and

policymaking is often about trying to suss out, you know, what rules does this particular society

agree on and then trying to codify that. So the law makes these choices all the time and then

tries to adapt according to changing culture. But in the case of the moral machine project,

I don’t think that people’s choices on that website necessarily reflect what laws they would

want in place. I think you would have to ask them a series of different questions in order to get

at what their consensus is.

I agree, but that has to do more with the artificial nature of, I mean, they’re showing

some cute icons on a screen. That’s almost, so if you, for example, we do a lot of work in virtual

reality. And so if you put those same people into virtual reality where they have to make that

decision, their decision would be very different, I think.

I agree with that. That’s one aspect. And the other aspect is it’s a different question to ask

someone, would you run over the homeless person or the doctor in this scene? Or do you want cars to

always run over the homeless people?

I think, yeah. So let’s talk about anthropomorphism. To me, anthropomorphism, if I can

pronounce it correctly, is one of the most fascinating phenomena from like both the

engineering perspective and the psychology perspective, machine learning perspective,

and robotics in general. Can you step back and define anthropomorphism, how you see it in

general terms in your work?

Sure. So anthropomorphism is this tendency that we have to project human like traits and

behaviors and qualities onto nonhumans. And we often see it with animals, like we’ll project

emotions on animals that may or may not actually be there. We often see that we’re trying to

interpret things according to our own behavior when we get it wrong. But we do it with more

than just animals. We do it with objects, you know, teddy bears. We see, you know, faces in

the headlights of cars. And we do it with robots very, very extremely.

You think that can be engineered? Can that be used to enrich an interaction between an AI

system and the human?

Oh, yeah, for sure.

And do you see it being used that way often? Like, I don’t, I haven’t seen, whether it’s

Alexa or any of the smart speaker systems, often trying to optimize for the anthropomorphization.

You said you haven’t seen?

I haven’t seen. They keep moving away from that. I think they’re afraid of that.

They actually, so I only recently found out, but did you know that Amazon has like a whole

team of people who are just there to work on Alexa’s personality?

So I know that depends on what you mean by personality. I didn’t know that exact thing.

But I do know that how the voice is perceived is worked on a lot, whether if it’s a pleasant

feeling about the voice, but that has to do more with the texture of the sound and the

audio and so on. But personality is more like…

It’s like, what’s her favorite beer when you ask her? And the personality team is different

for every country too. Like there’s a different personality for German Alexa than there is

for American Alexa. That said, I think it’s very difficult to, you know, use the, really,

really harness the anthropomorphism with these voice assistants because the voice interface

is still very primitive. And I think that in order to get people to really suspend their

disbelief and treat a robot like it’s alive, less is sometimes more. You want them to project

onto the robot and you want the robot to not disappoint their expectations for how it’s

going to answer or behave in order for them to have this kind of illusion. And with Alexa,

I don’t think we’re there yet, or Siri, that they’re just not good at that. But if you

look at some of the more animal like robots, like the baby seal that they use with the

dementia patients, it’s a much more simple design. It doesn’t try to talk to you. It

can’t disappoint you in that way. It just makes little movements and sounds and people

stroke it and it responds to their touch. And that is like a very effective way to harness

people’s tendency to kind of treat the robot like a living thing.

Yeah. So you bring up some interesting ideas in your paper chapter, I guess,

Anthropomorphic Framing in Human-Robot Interaction, that I read the last time we scheduled this.

Oh my God, that was a long time ago.

Yeah. What are some good and bad cases of anthropomorphism in your perspective?

Like when is the good ones and bad?

Well, I should start by saying that, you know, while design can really enhance the

anthropomorphism, it doesn’t take a lot to get people to treat a robot like it’s alive. Like

people will, over 85% of Roombas have a name, which I’m, I don’t know the numbers for your

regular type of vacuum cleaner, but they’re not that high, right? So people will feel bad for the

Roomba when it gets stuck, they’ll send it in for repair and want to get the same one back. And

that’s, that one is not even designed to like make you do that. So I think that some of the cases

where it’s maybe a little bit concerning that anthropomorphism is happening is when you have

something that’s supposed to function like a tool and people are using it in the wrong way.

And one of the concerns is military robots where, so gosh, 2000, like early 2000s, which is a long

time ago, iRobot, the Roomba company, made this robot called the PackBot that was deployed in Iraq

and Afghanistan with the bomb disposal units that were there. And the soldiers became very emotionally

attached to the robots. And that’s fine until a soldier risks his life to save a robot, which

you really don’t want. But they were treating them like pets. Like they would name them,

they would give them funerals with gun salutes, they would get really upset and traumatized when

the robot got broken. So in situations where you want a robot to be a tool, in particular,

when it’s supposed to like do a dangerous job that you don’t want a person doing,

it can be hard when people get emotionally attached to it. That’s maybe something that

you would want to discourage. Another case for concern is maybe when companies try to

leverage the emotional attachment to exploit people. So if it’s something that’s not in the

consumer’s interest, trying to like sell them products or services or exploit an emotional

connection to keep them paying for a cloud service for a social robot or something like that might be,

I think that’s a little bit concerning as well.

Yeah, the emotional manipulation, which probably happens behind the scenes now with some like

social networks and so on, but making it more explicit. What’s your favorite robot?

Fictional or real?

No, real. Real robot, which you have felt a connection with or not like, not anthropomorphic

connection, but I mean like you sit back and say, damn, this is an impressive system.

Wow. So two different robots. So the Pleo baby dinosaur robot, which is no longer sold and came out in 2007, that one I was very impressed with. But from an anthropomorphic

perspective, I was impressed with how much I bonded with it, how much I like wanted to believe

that it had this inner life.

Can you describe Pleo, can you describe what it is? How big is it? What can it actually do?

Yeah. Pleo is about the size of a small cat. It had a lot of like motors that gave it this kind

of lifelike movement. It had things like touch sensors and an infrared camera. So it had all

these like cool little technical features, even though it was a toy. And the thing that really

struck me about it was that it, it could mimic pain and distress really well. So if you held

it up by the tail, it had a tilt sensor that, you know, told it what direction it was facing

and it would start to squirm and cry out. If you hit it too hard, it would start to cry.

So it was very impressive in design.

And what’s the second robot that you were, you said there might’ve been two that you liked.

Yeah. So the Boston Dynamics robots are just impressive feats of engineering.

Have you met them in person?

Yeah. I recently got a chance to go visit and I, you know, I was always one of those people who

watched the videos and was like, this is super cool, but also it’s a product video. Like,

I don’t know how many times that they had to shoot this to get it right.

Yeah.

But visiting them, I, you know, I’m pretty sure that I was very impressed. Let’s put it that way.

Yeah. And in terms of the control, I think that was a transformational moment for me

when I met Spot Mini in person.

Yeah.

Because, okay, maybe this is a psychology experiment, but I anthropomorphized the,

the crap out of it. So I immediately, it was like my best friend, right?

I think it’s really hard for anyone to watch Spot move and not feel like it has agency.

Yeah. This movement, especially the arm on Spot Mini really obviously looks like a head.

Yeah.

They say no, they didn’t mean it that way, but obviously, it looks exactly like that.

And so it’s almost impossible to not think of it as a, almost like the baby dinosaur,

but slightly larger. And this movement of the, of course, the intelligence is,

their whole idea is that it’s not supposed to be intelligent. It’s a platform on which you build

higher intelligence. It’s actually really, really dumb. It’s just a basic movement platform.

Yeah. But even dumb robots can, like, we can immediately respond to them in this visceral way.

What are your thoughts about Sophia the robot? This kind of mix of some basic natural language

processing and basically an art experiment.

Yeah. An art experiment is a good way to characterize it. I’m much less impressed

with Sophia than I am with Boston Dynamics.

She said she likes you. She said she admires you.

Yeah. She followed me on Twitter at some point. Yeah.

She tweets about how much she likes you.

So what does that mean? I have to be nice or?

No, I don’t know. I was emotionally manipulating you. No. How do you think of

that? I think of the whole thing that happened with Sophia is quite a large number of people

kind of immediately had a connection and thought that maybe we’re far more advanced with robotics

than we are or actually didn’t even think much. I was surprised how little people cared

that they kind of assumed that, well, of course AI can do this.

Yeah.

And then if they assume that, I felt they should be more impressed.

Well, people really overestimate where we are. And so when something, I don’t even think Sophia

was very impressive or is very impressive. I think she’s kind of a puppet, to be honest. But

yeah, I think people are a little bit influenced by science fiction and pop culture to

think that we should be further along than we are.

So what’s your favorite robots in movies and fiction?

WALL-E.

WALL-E. What do you like about WALL-E? The humor, the cuteness, the perception control systems

operating on WALL-E that makes it all work? Just in general?

The design of WALL-E the robot, I think that animators figured out, starting in the 1940s,

how to create characters that don’t look real, but look like something that’s even better than real,

that we really respond to and think is really cute. They figured out how to make them move

and look in the right way. And WALL-E is just such a great example of that.

You think eyes, big eyes or big something that’s kind of eyeish. So it’s always playing on some

aspect of the human face, right?

Often. Yeah. So big eyes. Well, I think one of the first animations to really play with this was

Bambi. And they weren’t originally going to do that. They were originally trying to make the

deer look as lifelike as possible. They brought deer into the studio and had a little zoo there

so that the animators could work with them. And then at some point they were like,

if we make really big eyes and a small nose and big cheeks, kind of more like a baby face,

then people like it even better than if it looks real. Do you think the future of things like

Alexa in the home has possibility to take advantage of that, to build on that, to create

these systems that are better than real, that create a close human connection? I can pretty

much guarantee you without having any knowledge that those companies are going to make these

things. And companies are working on that design behind the scenes. I’m pretty sure.

I totally disagree with you.

Really?

So that’s what I’m interested in. I’d like to build such a company. I know

a lot of those folks and they’re afraid of that because how do you make money off of it?

Well, but even just making Alexa look a little bit more interesting than just a cylinder

would do so much.

It’s an interesting thought, but I don’t think people from the Amazon perspective are looking

for that kind of connection. They want you to be addicted to the services provided by Alexa,

not to the device. So with the device itself, it’s felt that you can lose a lot, because if you create a connection, then it creates more opportunity for frustration, for negative stuff, than it does for positive stuff. That’s, I think, the way they think about it.

That’s interesting. Like I agree that it’s very difficult to get right and you have to get it

exactly right. Otherwise you wind up with Microsoft’s Clippy.

Okay, easy now. What’s your problem with Clippy?

You like Clippy? Is Clippy your friend?

Yeah, I like Clippy. I just talked to, we just had this argument with Microsoft’s CTO, and he said he’s not bringing Clippy back. They’re not bringing Clippy back, and that’s very disappointing. I think Clippy was the greatest assistant we’ve ever built. It was a horrible attempt, of course, but it’s the best we’ve ever done because it was a real attempt to have like an actual personality. I mean, obviously the technology was way not there at the time for being able to be a recommender system for assisting you

in anything and typing in Word or any kind of other application, but still it was an attempt

of personality that was legitimate, which I thought was brave.

Yes, yes. Okay. You know, you’ve convinced me I’ll be slightly less hard on Clippy.

And I know I have like an army of people behind me who also miss Clippy.

Really? I want to meet these people. Who are these people?

It’s the people who like to hate stuff when it’s there and miss it when it’s gone.

So everyone.

It’s everyone. Exactly. All right. So Anki and Jibo, the two companies,

the two amazing companies, the social robotics companies that have recently been closed down.

Yes.

Why do you think it’s so hard to create a personal robotics company? So making a business

out of essentially something that people would anthropomorphize, have a deep connection with.

Why is it so hard to make it work? Is the business case not there or what is it?

I think it’s a number of different things. I don’t think it’s going to be this way forever.

I think at this current point in time, it takes so much work to build something that only barely

meets people’s minimal expectations because of science fiction and pop culture giving people

this idea that we should be further than we already are. Like when people think about a robot

assistant in the home, they think about Rosie from the Jetsons or something like that. And

Anki and Jibo did such a beautiful job with the design and getting that interaction just right.

But I think people just wanted more. They wanted more functionality. I think you’re also right that

the business case isn’t really there because there hasn’t been a killer application that’s

useful enough to get people to adopt the technology in great numbers. I think what we did see from the

people who did get Jibo is a lot of them became very emotionally attached to it. But that’s not,

I mean, it’s kind of like the Palm Pilot back in the day. Most people are like, why do I need this?

Why would I? They don’t see how they would benefit from it until they have it or some

other company comes in and makes it a little better. Yeah. Like how far away are we, do you

think? How hard is this problem? It’s a good question. And I think it has a lot to do with

people’s expectations and those keep shifting depending on what science fiction that is popular.

But also it’s two things. It’s people’s expectation and people’s need for an emotional

connection. Yeah. And I believe the need is pretty high. Yes. But I don’t think we’re aware of it.

That’s right. There’s like, I really think this is like the life as we know it. So we’ve just kind

of gotten used to it of really, I hate to be dark because I have close friends, but we’ve gotten

used to really never being close to anyone. Right. And we’re deeply, I believe, okay, this is

hypothesis. I think we’re deeply lonely, all of us, even those in deep fulfilling relationships.

In fact, what makes those relationships fulfilling, I think, is that they at least tap into that deep

loneliness a little bit. But I feel like there’s more opportunity to explore that, that doesn’t

inter, doesn’t interfere with the human relationships you have. It expands more on the,

that, yeah, the rich deep unexplored complexity that’s all of us, weird apes. Okay.

I think you’re right. Do you think it’s possible to fall in love with a robot?

Oh yeah, totally. Do you think it’s possible to have a longterm committed monogamous relationship

with a robot? Well, yeah, there are lots of different types of longterm committed monogamous

relationships. I think monogamous implies like, you’re not going to see other humans sexually or

like you basically on Facebook have to say, I’m in a relationship with this person, this robot.

I just don’t like, again, I think this is comparing robots to humans when I would rather

compare them to pets. Like you get a robot, it fulfills this loneliness that you have

in maybe not the same way as a pet, maybe in a different way that is even supplemental in a

different way. But I’m not saying that people won’t like do this, be like, oh, I want to marry

my robot or I want to have like a sexual relation, monogamous relationship with my robot. But I don’t

think that that’s the main use case for them. But you think that there’s still a gap between

human and pet. So between a husband and pet, there’s a different relationship. It’s engineering.

So that’s a gap that can be closed through. I think it could be closed someday, but why

would we close that? Like, I think it’s so boring to think about recreating things that we already

have when we could create something that’s different. I know you’re thinking about the

people who like don’t have a husband and like, what could we give them? Yeah. But I guess what

I’m getting at is maybe not. So like the movie Her. Yeah. Right. So a better husband. Well,

maybe better in some ways. Like it’s, I do think that robots are going to continue to be a different

type of relationship, even if we get them like very human looking or when, you know, the voice

interactions we have with them feel very like natural and human like, I think there’s still

going to be differences. And there were in that movie too, like towards the end, it kind of goes

off the rails. But it’s just a movie. So your intuition is that, because you kind of said

two things, right? So one is why would you want to basically replicate the husband? Yeah. Right.

And the other is kind of implying that it’s kind of hard to do. So like anytime you try,

you might build something very impressive, but it’ll be different. I guess my question is about

human nature. It’s like, how hard is it to satisfy that role of the husband? So, moving any of the sexual stuff aside, it’s more like the mystery, the tension, the dance of relationships

you think with robots, that’s difficult to build. What’s your intuition? I think that, well, it also

depends on are we talking about robots now in 50 years in like indefinite amount of time. I’m

thinking like five or 10 years. Five or 10 years. I think that robots at best will be like, it’s

more similar to the relationship we have with our pets than relationship that we have with other

people. I got it. So what do you think it takes to build a system that exhibits greater and greater

levels of intelligence? Like, it impresses us with its intelligence. A Roomba, so you talk about anthropomorphization, that doesn’t require it. I think intelligence is not required. In fact, intelligence

probably gets in the way sometimes, like you mentioned. But what do you think it takes to

create a system where we sense that it has a human level intelligence? So something that,

probably something conversational, human level intelligence. How hard do you think that problem

is? It’d be interesting to sort of hear your perspective, not just purely, so I talk to a lot

of people, how hard is it to build conversational agents? How hard is it to pass the Turing test? But my

sense is it’s easier than just solving, it’s easier than solving the pure natural language

processing problem. Because I feel like you can cheat. Yeah. So how hard is it to pass the Turing

test in your view? Well, I think again, it’s all about expectation management. If you set up

people’s expectations to think that they’re communicating with, what was it, a 13 year old

boy from the Ukraine? Yeah, that’s right. Then they’re not going to expect perfect English,

they’re not going to expect perfect, you know, understanding of concepts or even like being on

the same wavelength in terms of like conversation flow. So it’s much easier to pass in that case.

Do you think, you kind of alluded this too with audio, do you think it needs to have a body?

I think that we definitely have, so we treat physical things with more social agency,

because we’re very physical creatures. I think a body can be useful.

Does it get in the way? Is there a negative aspects like…

Yeah, there can be. So if you’re trying to create a body that’s too similar to something that people

are familiar with, like I have this robot cat at home that Hasbro makes. And it’s very disturbing to watch because I’m constantly assuming that it’s

going to move like a real cat and it doesn’t because it’s like a $100 piece of technology.

So it’s very like disappointing and it’s very hard to treat it like it’s alive. So you can get a lot

wrong with the body too, but you can also use tricks, same as, you know, the expectation

management of the 13 year old boy from the Ukraine. If you pick an animal that people

aren’t intimately familiar with, like the baby dinosaur, like the baby seal that people have

never actually held in their arms, you can get away with much more because they don’t have these

preformed expectations. Yeah, I remember, from a TED Talk of yours or something, it clicked for me that nobody actually knows what a dinosaur looks like. So you can actually get away with a

lot more. That was great. So what do you think about consciousness and mortality

being displayed in a robot? So not actually having consciousness, but having these kind

of human elements that are much more than just the interaction, much more than just,

like you mentioned with a dinosaur moving kind of in an interesting ways, but really being worried

about its own death and really acting as if it’s aware and self aware and identity. Have you seen

that done in robotics? What do you think about doing that? Is that a powerful good thing?

Well, I think it can be a design tool that you can use for different purposes. So I can’t say

whether it’s inherently good or bad, but I do think it can be a powerful tool. The fact that the

Pleo mimics distress when you quote unquote hurt it is a really powerful tool to get people to

engage with it in a certain way. I had a research partner that I did some of the empathy work with

named Palash Nandy, and he had built a robot for himself that had like a lifespan and that would

stop working after a certain amount of time just because he was interested in whether he himself

would treat it differently. And we know from Tamagotchis, those little games that we used to

have that were extremely primitive, that people respond to this idea of mortality and you can get

people to do a lot with little design tricks like that. Now, whether it’s a good thing depends on

what you’re trying to get them to do. Have a deeper relationship, have a deeper connection,

sign a relationship. If it’s for their own benefit, that sounds great. Okay. You could do that for a

lot of other reasons. I see. So what kind of stuff are you worried about? So is it mostly about

manipulation of your emotions for like advertisement and so on, things like that? Yeah, or data

collection or, I mean, you could think of governments misusing this to extract information

from people. It’s, you know, just like any other technological tool, it just raises a lot of

questions. If you look at Facebook, if you look at Twitter and social networks, there’s a lot

of concern of data collection now. What’s from the legal perspective or in general,

how do we prevent the violation of sort of these companies crossing a line? It’s a gray area, but crossing a line they shouldn’t, in terms of manipulating, like we’re talking about, manipulating our emotions, manipulating our behavior, using tactics that are not so savory.

Yeah. It’s really difficult because we are starting to create technology that relies on

data collection to provide functionality. And there’s not a lot of incentive,

even on the consumer side, to curb that because the other problem is that the harms aren’t

tangible. They’re not really apparent to a lot of people because they kind of trickle down on a

societal level. And then suddenly we’re living in like 1984, which, you know, sounds extreme,

but that book was very prescient and I’m not worried about, you know, these systems. I have,

you know, Amazon’s Echo at home and tell Alexa all sorts of stuff. And it helps me because,

you know, Alexa knows what brand of diaper we use. And so I can just easily order it again.

So I don’t have any incentive to ask a lawmaker to curb that. But when I think about that data

then being used against low income people to target them for scammy loans or education programs,

that’s then a societal effect that I think is very severe and, you know,

legislators should be thinking about.

But yeah, the gray area is removing ourselves from consideration of, like, explicitly defining objectives and more saying,

well, we want to maximize engagement in our social network.

Yeah.

And then just, because you’re not actually doing a bad thing. It makes sense. You want people to

keep a conversation going, to have more conversations, to keep coming back

again and again, to have conversations. And whatever happens after that,

you’re kind of not exactly directly responsible. You’re only indirectly responsible. So I think

it’s a really hard problem. Are you optimistic about us ever being able to solve it?

You mean the problem of capitalism? It’s like, because the problem is that the companies

are acting in the company’s interests and not in people’s interests. And when those interests are

aligned, that’s great. But the completely free market doesn’t seem to work because of this

information asymmetry.

But it’s hard to know how to, so say you were trying to do the right thing. I guess what I’m

trying to say is it’s not obvious for these companies what the good thing for society is to

do. Like, I don’t think they sit there with, I don’t know, with a glass of wine and a cat,

like petting a cat, evil cat. And there’s two decisions and one of them is good for society.

One is good for the profit and they choose the profit. I think they actually, there’s a lot of

money to be made by doing the right thing for society. Because Google, Facebook have so much cash

that they actually, especially Facebook, would significantly benefit from making decisions that

are good for society. It’s good for their brand. But I don’t know if they know what’s good for

society. I don’t think we know what’s good for society in terms of how we manage the

conversation on Twitter or how we design, we’re talking about robots. Like, should we

emotionally manipulate you into having a deep connection with Alexa or not?

Yeah. Yeah. Do you have optimism that we’ll be able to solve some of these questions?

Well, I’m going to say something that’s controversial, like in my circles,

which is that I don’t think that companies who are reaching out to ethicists and trying to create

interdisciplinary ethics boards, I don’t think that that’s totally just trying to whitewash

the problem and so that they look like they’ve done something. I think that a lot of companies

actually do, like you say, care about what the right answer is. They don’t know what that is,

and they’re trying to find people to help them find them. Not in every case, but I think

it’s much too easy to just vilify the companies as, like you say, sitting there with their cat going, heh heh heh, one million dollars. That’s not what happens. A lot of people are well meaning even

within companies. I think that what we do absolutely need is more interdisciplinarity,

both within companies, but also within the policymaking space because we’ve hurtled into

a world where technological progress is much faster, or at least seems much faster than it was, and

things are getting very complex. And you need people who understand the technology, but also

people who understand what the societal implications are, and people who are thinking

about this in a more systematic way to be talking to each other. There’s no other solution, I think.

You’ve also done work on intellectual property. So if you look at the algorithms that these companies are using, like YouTube, Twitter, Facebook, and so on, those are mostly secretive, the recommender systems behind these platforms. Do you think about IP and the transparency of algorithms like this? Like, what is the responsibility of

these companies to open source the algorithms or at least reveal to the public how these

algorithms work? So I personally don’t work on that. There are a lot of people who do though,

and there are a lot of people calling for transparency. In fact, Europe’s even trying

to legislate transparency, maybe they even have at this point, where like if an algorithmic system

makes some sort of decision that affects someone’s life, that you need to be able to see how that

decision was made. It’s a tricky balance because obviously companies need to have some sort of

competitive advantage and you can’t take all of that away or you stifle innovation. But yeah,

for some of the ways that these systems are already being used, I think it is pretty important that

people understand how they work. What are your thoughts in general on intellectual property in

this weird age of software, AI, robotics? Oh, that it’s broken. I mean, the system is just broken. So can you describe... I actually don’t even know what intellectual property is in the space of software, what it means. I mean, I believe I have a patent on a piece of software from my PhD.

You believe? You don’t know? No, we went through a whole process. Yeah, I do. You get the spam

emails like, we’ll frame your patent for you. Yeah, it’s much like a thesis. But that’s useless,

right? Or not? Where does IP stand in this age? What’s the right way to do it? What’s the right

way to protect and own ideas when it’s just code and this mishmash of something that feels much

softer than a piece of machinery? Yeah. I mean, it’s hard because there are different types of

intellectual property and they’re kind of these blunt instruments. It’s like patent law is like

a wrench. It works really well for an industry like the pharmaceutical industry. But when you

try and apply it to something else, it’s like, I don’t know, I’ll just hit this thing with a wrench

and hope it works. So software, you have a couple of different options. Any code that’s written down

in some tangible form is automatically copyrighted. So you have that protection, but that doesn’t do

much because if someone takes the basic idea that the code is executing and just does it in a

slightly different way, they can get around the copyright. So that’s not a lot of protection.

Then you can patent software, but that’s kind of, I mean, getting a patent costs,

I don’t know if you remember what yours cost or like, was it through an institution?

Yeah, it was through a university. It was insane. There were so many lawyers, so many meetings.

It made me feel like it must’ve been hundreds of thousands of dollars. It must’ve been something

crazy. Oh yeah. It’s insane, the cost of getting a patent. And so this idea of protecting the inventor in their own garage who came up with a great idea, that’s kind of a thing of the past. It’s all just companies trying to protect things, and it costs a lot of money. And then with code, oftentimes by the time the patent is issued, which can take like five years, your code is probably obsolete at that point. So it’s, again, a very blunt instrument that

doesn’t work well for that industry. And so at this point we should really have something better,

but we don’t. Do you like open source? Yeah. Is open source good for society?

You think all of us should open source our code? Well, so at the Media Lab at MIT, we have an

open source default because what we’ve noticed is that people will come in, they’ll write some code

and they’ll be like, how do I protect this? And we’re like, that’s not your problem right now.

Your problem isn’t that someone’s going to steal your project. Your problem is getting people to

use it at all. There’s so much stuff out there. We don’t even know if you’re going to get traction

for your work. And so open sourcing can sometimes help, you know, get people’s work out there, and ensure that they get attribution for the work that they’ve done. So like,

I’m a fan of it in a lot of contexts. Obviously it’s not like a one size fits all solution.

So what I gleaned from your Twitter is, you’re a mom. I saw a quote, a reference to baby bot.

What have you learned about robotics and AI from raising a human baby bot?

Well, I think that my child has made it more apparent to me that the systems we’re currently

creating aren’t like human intelligence. Like there’s not a lot to compare there.

It’s just, he has learned and developed in such a different way than a lot of the AI systems

we’re creating that that’s not really interesting to me to compare. But what is interesting to me

is how these systems are going to shape the world that he grows up in. And so I’m like even more

concerned about kind of the societal effects of developing systems that, you know, rely on

massive amounts of data collection, for example. So is he going to be allowed to use, like, Facebook or... Facebook? Facebook is over. Kids don’t use that anymore. Snapchat? What do they use? Instagram?

Snapchat’s over too. I don’t know. I just heard that TikTok is over, which I’ve never even seen.

So I don’t know. No. We’re old. We don’t know. I need to... I’m going to start gaming and streaming my gameplay. So what do you see as the future of personal robotics, social robotics, interaction

with other robots? Like what are you excited about if you were to sort of philosophize about what

might happen in the next five, 10 years that would be cool to see? Oh, I really hope that we get kind of a home robot that makes it, a social robot and not just Alexa. Like, you know, I really love the Anki products. I thought Jibo had some really great aspects. So I’m hoping that a company cracks that. Me too. So Kate, it was wonderful talking to you today. Likewise.

Thank you so much. It was fun. Thanks for listening to this conversation with Kate Darling.

And thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the

podcast by signing up to Masterclass at masterclass.com slash Lex and getting ExpressVPN at

expressvpn.com slash LexPod. If you enjoy this podcast, subscribe on YouTube, review it with

five stars on Apple podcast, support it on Patreon, or simply connect with me on Twitter

at Lex Fridman. And now let me leave you with some tweets from Kate Darling. First tweet is

the pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of

the cereal bowl. Second tweet is I came on here to complain that I had a really bad day and saw that

a bunch of you are hurting too. Love to everyone. Thank you for listening. I hope to see you next

time.
