Lex Fridman Podcast - #329 - Kate Darling: Social Robots, Ethics, Privacy and the Future of MIT

The following is a conversation with Kate Darling,

her second time on the podcast.

She’s a research scientist at MIT Media Lab,

interested in human-robot interaction and robot ethics,

which she writes about in her recent book

called The New Breed,

what our history with animals reveals

about our future with robots.

Kate is one of my favorite people at MIT.

She was a courageous voice of reason and compassion

through the time of the Jeffrey Epstein scandal

at MIT three years ago.

We reflect on this time in this very conversation,

including the lessons it revealed about human nature

and our optimistic vision for the future of MIT,

a university we both love and believe in.

And now, a quick second mention of each sponsor.

Check them out in the description.

It’s the best way to support this podcast.

We’ve got True Classic Tees for high-quality t-shirts,

Shopify for e-commerce, Linode for Linux,

InsideTracker for bio-monitoring,

and ExpressVPN for privacy.

Choose wisely, my friends.

And now, onto the full ad reads.

As always, no ads in the middle.

I hate those.

And since I do this podcast, I’m able to control

whether we do them or not.

I try to make these interesting, but if you skip them,

please still check out our sponsors.

I enjoy their stuff.

Maybe you will too.

This show is brought to you by,

I believe, a new sponsor.

I’ve been wearing their t-shirts for a while now,

so I don’t remember,

but I do remember that I’ve been loving it for a while now.

Anyway, the sponsor is True Classic Tees.

They’re high-quality, soft, slim-fitted t-shirts for men.

They also make all the other menswear staples,

like polos and workout shirts,

and they’re all built with the same flattering fit

as their t-shirts.

It’s kind of fascinating how something like a t-shirt

can feel good and look good,

and the way to achieve those two goals

is through very subtle design decisions.

So it’s fascinating because I’m a t-shirt person.

I usually just wear a black t-shirt,

a bunch of black t-shirts.

True Classic Tees is an example of a company that delivers.

Now that I’ve tried it, that’s all I’ve been wearing.

I feel amazing.

Get comfortable and upgrade your wardrobe

with True Classic.

Get 25% off at trueclassic.com with code Lex.

Free shipping included on purchases over $100.

100% risk-free guarantee with 30-day return policy.

This show is also brought to you by Shopify,

a platform designed for anyone to sell anywhere,

with a great-looking online store

that brings your ideas to life

and gives you tools to manage day-to-day operations.

I’ve been using Shopify for a while to sell stuff,

but my use cases are pretty simple,

and I think that’s probably true for many people,

for many merchants.

You just wanna sell a couple of things that you care about

to a small set of people

that are interested in that kind of thing.

But I think there’s a lot of entrepreneurs

that really use Shopify to run a business,

small business, medium-sized business,

all that kind of stuff.

I think it’s 1.7 million entrepreneurs that use it.

Yeah, it’s my favorite, as far as I’m concerned.

If you look on Reddit and all those other kinds of places,

Shopify is the recommended place to sell stuff online.

Super easy to use, super easy to set up, all that.

Get a free trial and full access

to Shopify’s entire suite of features

when you sign up at Shopify.com slash Lex.

That’s all lowercase, Shopify.com slash Lex.

This episode is also brought to you by Linode,

Linux Virtual Machines.

It’s an awesome computer infrastructure

that I just love everything about it.

It lets you develop, deploy, and scale

whatever applications you build, faster and more easily.

I use it for small personal projects.

I hope to one day have huge projects that I can run on it.

I think the big competitor is AWS.

There’s probably a bunch of others, but AWS.

It’s lower cost than AWS, better customer service,

the simplicity of everything.

I just love it.

Obviously, computer infrastructure,

the compute has to be really good, right?

The actual systems have to be really good.

The distributed compute has to be good.

But the interface from a user perspective

of how you set everything up, how you scale,

all that kind of stuff, also should be good.

I think that’s actually more important,

the ability to set stuff up, to monitor,

all that kind of stuff.

And that’s really why I love Linode.

And of course, the number one reason,

or should I say the number zero reason,

is that it’s Linux.

I love Linux, all things Linux.

Visit linode.com slash Lex to get $100 in free credit.

This show is also brought to you by InsideTracker,

a service I use to track biological data.

They have a bunch of plans that collect

a bunch of information from your body

and use machine learning algorithms

to analyze your blood data, DNA data, fitness tracker data,

all of that to give you a picture

of what’s going on inside you and give you recommendations

for diet and lifestyle changes.

I wish they gave dating advice or career advice

or just food advice, what I should eat today

based on my body.

That’s probably the future.

My body is giving me very noisy signals

about when it’s hungry and what it wants to eat.

I wish I had higher resolution signals

or the signals that it’s sending need to be interpreted.

There’s probably a lot of signal there.

It’s just my brain is too dumb to interpret it.

So I would love to understand what my body’s telling me.

That’s why I love InsideTracker:

it’s listening to your body

to give you advice about what you should do with said body.

Get special savings for a limited time

when you go to insidetracker.com slash Lex.

This show is also brought to you by ExpressVPN.

I use them to protect my privacy on the internet.

I also use them to feel good about my life.

But that’s because I have a strange relationship

with software that’s really well designed.

Anyway, it’s like a good VPN should be.

It’s fast, works on any device, including Linux,

Android, and all of that good stuff.

And it’s a base level of protection

that everybody should be using.

I’m probably having a bunch of conversations

with cybersecurity folks on both sides.

I think I have a person coming on

that used to be an FBI agent doing cybersecurity

and also really want to get a bunch of hackers on.

Former, current hackers would be epic.

Of course, it’s very difficult

because if they’re current hackers,

there’s this gray area about what they can

and can’t talk about, and I don’t like gray areas.

I like people to be raw and transparent and real,

all that kind of stuff.

But no matter what, it’s a super fascinating topic.

Go to expressvpn.com slash LexPod

for an extra three months free.

This is the Lex Fridman Podcast.

To support it, please check out our sponsors

in the description.

And now, dear friends, here’s Kate Darling.


Last time we talked a few years back,

you wore Justin Bieber’s shirt for the podcast.

So now looking back, you’re a respected researcher,

all the amazing accomplishments in robotics,

you’re an author.

Was this one of the proudest moments of your life?

Proudest decisions you’ve ever made?

Definitely.

You handled it really well, though.

It was cool, because I walked in,

I didn’t know you were going to be filming.

I walked in and you’re in a fucking suit.

Yeah.

And I’m like, why are you all dressed up?

Yeah.

And then you were so nice about it.

You made some excuses.

You were like, oh, well, I’m interviewing some,

didn’t you say you were interviewing

some military general afterwards to make me feel better?

The CTO of Lockheed Martin, I think.

Oh, that’s what it was.

Yeah.

You didn’t tell me, oh, I always dress like this.

Are you an actual Bieber fan?

Or was that like one of those t-shirts

that’s in the back of the closet

that you use for painting?

I think I bought it for my husband as a joke.

And yeah, we were gut renovating a house at the time

and I had worn it to the site.

So it was a joke for him, and now you wear it.

Okay, have you worn it since?

Was this a one time?

How could I touch it again?

It was on your podcast, now it’s framed.

It’s like a wedding dress or something like that.

You only wear it once.

You are the author of The New Breed,

What Our History With Animals Reveals

About Our Future With Robots.

You opened the book with the surprisingly tricky question,

what is a robot?

So let me ask you, let’s try to sneak up to this question.

What’s a robot?

That’s not really sneaking up.

It’s just asking it.

Yeah.

All right, well.

What do you think a robot is?

What I think a robot is,

is something that has some level of intelligence

and some level of magic.

That little shine in the eye, you know,

that allows you to navigate the uncertainty of life.

So that means like autonomous vehicles to me in that sense,

are robots because they navigate the uncertainty,

the complexity of life.

Obviously social robots are that.

I love that.

I like that you mentioned magic because that also,

well, so first of all,

I don’t define robot definitively in the book

because there is no definition that everyone agrees on.

And if you look back through time,

people have called things robots until they lose the magic

because they’re more ubiquitous.

Like a vending machine used to be called a robot

and now it’s not, right?

So I do agree with you that there’s this magic aspect

which is how people understand robots.

If you ask a roboticist,

they have the definition of something that is,

well, it has to be physical.

Usually it’s not an AI agent.

It has to be embodied.

They’ll say it has to be able to sense its environment

in some way.

It has to be able to make a decision autonomously

and then act on its environment again.
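
In code terms, that sense-decide-act definition is basically a control loop. Here’s a minimal sketch in Python, with hypothetical sensor and actuator stubs standing in for real hardware:

```python
# A minimal sense-decide-act loop. The sensor and actuator functions
# here are hypothetical stubs, not any particular robot's API.
import time

def read_distance_sensor() -> float:
    """Sense: distance to the nearest obstacle, in meters (stubbed)."""
    return 1.0

def set_wheel_speed(speed: float) -> None:
    """Act: send a forward speed command, in m/s (stubbed)."""
    print(f"wheel speed -> {speed:.2f} m/s")

def decide(distance: float) -> float:
    """Decide autonomously: slow down as obstacles get closer."""
    return 0.0 if distance < 0.3 else min(0.5, 0.4 * distance)

for _ in range(10):                    # bounded here; a robot would loop forever
    distance = read_distance_sensor()  # sense the environment
    speed = decide(distance)           # make a decision autonomously
    set_wheel_speed(speed)             # act on the environment again
    time.sleep(0.1)                    # roughly a 10 Hz control rate
```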

I think that’s a pretty good technical definition

even though it really breaks down

when you come to things like the smartphone

because the smartphone can do all of those things

and most roboticists would not call it a robot.

So there’s really no one good definition

but part of why I wrote the book

is because people have a definition of robot in their minds

that is usually very focused

on a comparison of robots to humans.

So if you Google image search robot,

you get a bunch of humanoid robots,

robots with a torso and head and two arms and two legs.

And that’s the definition of robot

that I’m trying to get us away from

because I think that it trips us up a lot.

Why does the humanoid form trip us up a lot?

Well, because this constant comparison of robots to people,

artificial intelligence to human intelligence,

first of all, it doesn’t make sense

from a technical perspective

because the early AI researchers,

some of them were trying to recreate human intelligence.

Some people still are

and there’s a lot to be learned

from that academically, et cetera.

But that’s not where we’ve ended up.

AI doesn’t think like people.

We wind up in this fallacy

where we’re comparing these two

and when we talk about what intelligence even is,

we’re often comparing to our own intelligence.

And then the second reason this bothers me

is because it doesn’t make sense.

I just think it’s boring to recreate intelligence

that we already have.

I see the scientific value

of understanding our own intelligence,

but from a practical,

what-could-we-use-these-technologies-for perspective,

It’s much more interesting to create something new,

to create a skillset that we don’t have

that we can partner with

in what we’re trying to achieve.

And it should be in some deep way similar to us,

but in most ways different

because you still want to have a connection,

which is why the similarity might be necessary.

That’s what people argue, yes.

And I think that’s true.

So the two arguments for humanoid robots

are people need to be able to communicate

and relate to robots

and we relate most to things that are like ourselves.

And we have a world that’s built for humans.

So we have stairs and narrow passageways and door handles.

And so we need humanoid robots to be able to navigate that.

And so you’re speaking to the first one,

which is absolutely true.

But what we know from social robotics

and a lot of human robot interaction research

is that all you need is something that’s enough

like a person for it to give off cues

that someone relates to,

but that doesn’t have to look human or even act human.

You can take a robot like R2-D2

and it just like beeps and boops

and people love R2-D2, right?

Even though it’s just like a trash can on wheels.

And they like R2-D2 more than C-3PO, who’s a humanoid.

So there’s lots of ways to make robots

even better than humans in some ways

and make us relate more to them.

Yeah, it’s kind of amazing the variety of cues

that can be used to anthropomorphize the thing,

like a glowing orb or something like that.

Just a voice, just subtle basic interaction.

I think people sometimes over-engineer these things.

Like simplicity can go a really long way.

Totally.

I mean, ask any animator and they’ll know that.

Yeah.

Yeah, those are actually,

so the people behind Cozmo, the robot,

the right people to design those are animators,

like Disney type of people.

Yeah.

Versus like roboticists.

Roboticists, quote-unquote, are mostly clueless.

It seems like-

Well, no, they just have their own discipline

that they’re very good at and they don’t have.

Yeah, but they don’t, you know.

I feel like robotics of the early 21st century

is not going to be the robotics of the later 21st century.

Like if you call yourself a roboticist,

it’d be something very different.

Because I think more and more you’d be like a,

maybe like a control engineer or something.

Controls engineer.

Like you separate, because ultimately all the unsolved,

all the big problems of robotics

will be in the social aspect,

in the interacting with humans aspect,

in the perception interpreting the world aspect,

in the brain part.

Not the basic control level part.

You call it basic, it’s actually really complex.

It’s very, very complicated.

And that’s why, but like, I think you’re so right.

And what a time to be alive.

Because for me, I just,

we’ve had robots for so long

and they’ve just been behind the scenes.

And now finally robots are getting deployed into the world.

They’re coming out of the closet.

Yeah, and we’re seeing all these mistakes

that companies are making

because they focus so much on the engineering

and getting that right,

and getting the robot to even be able to function

in a space that it shares with a human.

See, what I feel like people don’t understand

is to solve the perception and the control problem.

You shouldn’t try to just solve

the perception control problem.

You should teach the robot how to say,

oh shit, I’m sorry, I fucked up.

Yeah, or ask for help.

Ask for help or be able to communicate the uncertainty.

Yeah, exactly, all of those things.

Because you can’t solve the perception control.

We humans haven’t solved it.

We’re really damn good at it.

But the magic is in the self-deprecating humor

and the self-awareness about where our flaws are,

all that kind of stuff.

Yeah, and there’s a whole body of research

in human-robot interaction showing ways to do this.

But a lot of these companies haven’t,

they don’t do HRI, they, like the,

have you seen the grocery store robot in the Stop and Shop?

Yes.

Yeah, the Marty, it looks like a giant penis.

It’s like six feet tall, it roams the aisles.

I will never see Marty the same way again.

Thank you for that.

You’re welcome.

But like, these poor people worked so hard

on getting a functional robot together.

And then people hate Marty

because they didn’t at all consider

how people would react to Marty in their space.

Does everybody, I mean, you talk about this,

do people mostly hate Marty?

Because I like Marty.

I feel like less.

Yeah, but you like Flippy.

Yeah, I do.

And actually, like.

There’s a parallel between the two?

I believe there is.

So we were actually gonna do a study on this

right before the pandemic hit,

and then we canceled it

because we didn’t wanna go to the grocery store,

and neither did anyone else.

But our theory, so this was with a student at MIT,

Daniela Di Paola.

She noticed that everyone on Facebook, in her circles,

was complaining about Marty.

They were like, what is this creepy robot?

It’s watching me.

It’s always in the way.

And she did this quick and dirty sentiment analysis

on Twitter where she was looking at positive

and negative mentions of the robot.
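
For the curious, a “quick and dirty” sentiment pass usually just means running an off-the-shelf classifier over the tweet text and counting positive versus negative hits. A sketch of that idea, assuming NLTK’s VADER analyzer and made-up example tweets (this is not her actual pipeline):

```python
# Sketch of a quick-and-dirty sentiment pass over tweets, using NLTK's
# off-the-shelf VADER analyzer. Example tweets are invented, not real data.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

tweets = [
    "What is this creepy robot? It's always in the way.",
    "Honestly the grocery store robot is kind of cute.",
]

for tweet in tweets:
    compound = sia.polarity_scores(tweet)["compound"]  # -1 (negative) to +1 (positive)
    label = "negative" if compound < -0.05 else "positive" if compound > 0.05 else "neutral"
    print(f"{label:8s} {compound:+.2f}  {tweet}")
```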

And she found that the biggest spike

of negative mentions happened

when Stop and Shop threw a birthday party

for the Marty robots, like with free cake and balloons.

Who complains about free cake?

Well, people who hate Marty, apparently.

And so we were like, that’s interesting.

And then we did this online poll.

We used Mechanical Turk,

and we tried to get at what people don’t like about Marty.

And a lot of it wasn’t, oh, Marty’s taking jobs.

It was, Marty is the surveillance robot, which it’s not.

It looks for spills on the floor.

It doesn’t actually look at any people.

It’s watching.

It’s creepy.

It’s getting in the way.

Those were the things that people complained about.

And so our hypothesis became, is Marty a real life Clippy?

Because I know, Lex, you love Clippy,

but many people hated Clippy.

Well, there’s a complex thing there.

It could be like marriage.

A lot of people seem to like to complain about marriage,

but they secretly love it.

So it could be, the relationship you might have

with Marty is like, oh, there he goes again,

doing his stupid surveillance thing.

But you grow to love the, I mean,

bitching about the thing

that kind of releases a kind of tension.

And there’s, I mean, some people, a lot of people,

show love by sort of busting each other’s chops,

you know, like making fun of each other.

And then I think people would really love it

if Marty talked back.

And like, these are so many possible options

for humor there.

One, you can lean in.

You can be like, yes, I’m an agent of the CIA,

monitoring your every move,

like mocking people that are concerned,

you know what I’m saying?

Like, yes, I’m watching you because you’re so important

with your shopping patterns.

I’m collecting all this data.

Or just, you know, any kind of making fun of people.

I don’t know.

But I think you hit on what exactly it is

because when it comes to robots or artificial agents,

I think people hate them more

than they would some other machine or device or object.

And it might be that thing,

it might be combined with love or like whatever it is,

it’s a more extreme response

because they view these things as social agents

and not objects.

And that was, so Clifford Nass

was a big human-computer interaction person.

And his theory about Clippy was that

because people viewed Clippy as a social agent,

when Clippy was annoying and would like bother them

and interrupt them and like not remember what they told him,

that’s when people got upset

because it wasn’t fulfilling their social expectations.

And so they complained about Clippy

more than they would have if it had been a different,

like not a, you know, virtual character.

So is complaining to you a sign

that we’re on the wrong path with a particular robot?

Or is it possible, like again, like marriage,

like family, that there still is a path

towards that direction

where we can find deep, meaningful relationships?

I think we absolutely can find

deep, meaningful relationships with robots.

And well, maybe with Marty.

I mean, I just would,

I would have designed Marty a little differently.

Like how?

Isn’t there a charm to the clumsiness, the slowness?

Like I’ll sometimes just-

There is if you’re not trying to get through

with a shopping cart and a screaming child.

You know, there’s, I think,

I think you could make it charming.

I think there are lots of design tricks

that they could have used.

And one of the things they did,

I think without thinking about it at all,

is they slapped two big googly eyes on Marty.

Oh yeah.

And I wonder if that contributed

maybe to people feeling watched

because it’s looking at them.

And so like, is there a way to design the robot

to do the function that it’s doing

in a way that people are actually attracted to

rather than annoyed by?

And there are many ways to do that,

but companies aren’t thinking about it.

Now they’re realizing that

they should have thought about it.

Yeah.

I wonder if there’s a way to,

if it would help to make Marty

seem like an entity of its own

versus the arm of a large corporation.

So there’s some sense where this is just the camera

that’s monitoring people versus this is an entity

that’s a standalone entity.

It has its own task and it has its own personality.

The more personality you give it,

the more it feels like it’s not sharing data

with anybody else.

When we see other human beings,

our basic assumption is whatever I say to this human being,

it’s not like being immediately sent to the CIA.

Yeah, what I say to you, no one’s gonna hear that, right?

Yeah, that’s true, that’s true.

No, no, I’m kidding.

Well, you forget it.

I mean, you do forget it.

I mean, I don’t know if even with microphones here,

you forget that that’s happening.

But for some reason, I think probably with Marty,

I think when it’s done really crudely and crappily,

you start to realize, oh, this is like PR people

trying to make a friendly version

of a surveillance machine.

But I mean, that reminds me of the slight clumsiness

or significant clumsiness on the initial releases

of the avatars for the metaverse.

I don’t know, what are your actual thoughts about that?

The way the avatars, the way like Mark Zuckerberg

looks in that world, in the metaverse,

the virtual reality world where you can have

like virtual meetings and stuff like that.

Like how do we get that right?

Do you have thoughts about that?

Because it’s a kind of, it feels like a similar problem

to social robotics, which is how you design

a digital virtual world that is compelling

when you connect to others there

in the same way that physical connection is.

Right, I haven’t looked into, I mean,

I’ve seen people joking about it on Twitter

and like posting whatever.

Yeah, but I mean, have you seen it?

Because there’s something you can’t quite put into words

that doesn’t feel genuine about the way it looks.

And so the question is, if you and I were to meet virtually,

what should the avatars look like

for us to have a similar kind of connection?

Should it be really simplified?

Should it be a little bit more realistic?

Should it be cartoonish?

Should it be more, better capturing of expressions

in interesting, complex ways versus like cartoonish,

oversimplified ways?

But haven’t video games figured this out?

I’m not a gamer, so I don’t have any examples,

but I feel like there’s this whole world in video games

where they’ve thought about all of this

and depending on the game, they have different like avatars

and a lot of the games are about connecting with others.

I just, the thing that I don’t know is,

and again, I haven’t looked into this at all.

I’ve been like shockingly not very interested

in the metaverse, but they must have poured

so much investment into this, Meta.

And like, why is it so bad?

Like, there’s gotta be a reason.

There’s gotta be some thinking behind it, right?

Well, I talked to Carmack about this,

John Carmack, who’s a part-time Oculus CTO.

I think there’s several things to say.

One is, as you probably know, that I mean,

there’s bureaucracy, there’s large corporations

and they often, large corporations have a way

of killing the indie kind of artistic flame

that’s required to create something really compelling.

Somehow they make everything boring

because they run through this whole process

through the PR department, through all that kind of stuff

and it somehow becomes generic to that process.

Because there’s like-

They strip out anything interesting

because it could be controversial, is that, or?

Yeah, right, exactly.

Like, what, I mean, we’re living through this now,

like, with a lot of people with cancellations

and all those kinds of stuff, people are nervous

and nervousness results in, like usual,

the assholes are ruining everything.

But, you know, the magic of human connection

is taking risks, of making a risky joke,

of like, with people you like who are not assholes,

good people, like, some of the fun in the metaverse

or in video games is, you know, being edgier,

being interesting, revealing your personality

in interesting ways.

In the sexual tension or in,

they’re definitely paranoid about that.

Oh, yeah.

Like, in metaverse, the possibility of sexual assault

and sexual harassment and all that kind of stuff,

it’s obviously very high, but they’re,

so you should be paranoid to some degree,

but not too much because then you remove completely

the personality of the whole thing.

Then everybody’s just like a vanilla bot that,

like, you have to have ability

to be a little bit political, to be a little bit edgy,

all that kind of stuff,

and large companies tend to suffocate that.

So, but in general, if you get all that,

just the ability to come up

with really cool, beautiful ideas.

If you look at, I think Grimes tweeted about this,

she’s very critical about the metaverse,

is that,

you know, independent game designers have solved this problem

of how to create something beautiful

and interesting and compelling.

They do a really good job.

So, you have to let those kinds of minds,

the small groups of people, design things

and let them run with it, let them run wild

and do edgy stuff, yeah.

But otherwise, you get this kind of,

you get a Clippy type of situation, right,

which is like a very generic looking thing.

But even Clippy has some, like,

that’s kind of wild that you would take a paperclip

and put eyes on it.

And suddenly people are like, oh, you’re annoying,

but you’re definitely a social agent.

And I just feel like that wouldn’t even,

that Clippy thing wouldn’t even survive Microsoft

or Facebook of today, meta of today.

Because it would be like, well,

there’ll be these meetings about why is it a paperclip?

Like, why don’t we, it’s not sufficiently friendly,

let’s make it, you know.

And then all of a sudden,

the artist with whom it originated is killed

and it’s all PR, marketing people

and all of that kind of stuff.

Now, they do important work to some degree,

but they kill the creativity.

I think the killing of the creativity is in the whole,

like, okay, so what I know from social robotics is like,

obviously, if you create agents that,

okay, so take, for example,

you’d create a robot that looks like a humanoid

and it’s, you know, Sophia or whatever.

Now, suddenly you do have all of these issues

where are you reinforcing an unrealistic beauty standard?

Are you objectifying women?

Why is the robot female?

So you have, but the thing is,

I think that with creativity,

you can find a solution that’s even better

where you’re not even harming anyone

and you’re creating a robot that looks like not humanoid,

but like something that people relate to even more.

And now you don’t even have any of these bias issues

that you’re creating.

And so how do we create that within companies?

Because I don’t think it’s really about,

because, you know, maybe we disagree on that.

I don’t think that edginess or humor or interesting things

need to be things that harm or hurt people

or that people are against.

There are ways to find things that everyone is fine with.

Why aren’t we doing that?

The problem is there’s departments

that look for harm in things.

Yeah.

And so they will find harm in things that have no harm.

Okay.

That’s the big problem

because their whole job is to find harm in things.

So what you said is completely correct,

which is edginess should not hurt,

doesn’t necessarily,

doesn’t need to be a thing that hurts people.

Obviously, great humor, great personality

doesn’t have to, like Clippy.

But yeah, I mean, but it’s tricky to get right.

And I’m not exactly sure.

I don’t know.

I don’t know why a large corporation

with a lot of funding can’t get this right.

I do think you’re right

that there’s a lot of aversion to risk.

And so if you get lawyers involved

or people whose job it is, like you say, to mitigate risk,

they’re just gonna say no to most things

that could even be in some way.

Yeah.

Yeah, you get the problem in all organizations.

So I think that you’re right, that that is a problem.

I think what’s the way to solve that in large organizations

is to have Steve Jobs type of characters.

Unfortunately, you do need to have, I think,

from a designer perspective, or maybe like a Johnny Ive,

that is almost like a dictator.

Yeah, you want a benevolent dictator.

Yeah, who rolls in and says,

cuts through the lawyers, the PR,

but has a benevolent aspect.

Like, yeah, there’s a good heart and make sure,

I think all great artists and designers

create stuff that doesn’t hurt people.

Like if you have a good heart,

you’re going to create something

that’s going to actually make a lot of people feel good.

That’s what people like Johnny Ive,

what they love doing is creating a thing

that brings a lot of love to the world.

They imagine millions of people using the thing

and it instills them with joy.

That’s, you could say that about social robotics,

you could say that about the metaverse.

It shouldn’t be done by the PR people,

should be done by the designers.

I agree, PR people ruin everything.

Yeah, all the fun.

In the book, you have a picture,

I just have a lot of ridiculous questions.

You have a picture of two hospital delivery robots

with a caption that reads,

by the way, see your book,

I appreciate that it keeps the humor in.

You didn’t run it by the PR department.

No, no one edited the book, it got rushed through.

Bad thing.

The caption reads,

two hospital delivery robots whose sexy nurse names,

Roxy and Lola, made me roll my eyes so hard

they almost fell out.

What aspect of it made you roll your eyes?

Is it the naming?

It was the naming.

The form factor is fine, it’s like a little box on wheels.

The fact that they named them, also great.

That’ll let people enjoy interacting with them.

We know that even just giving a robot a name,

it facilitates technology adoption.

People will be like, oh, you know,

Betsy made a mistake, let’s help her out

instead of the stupid robot doesn’t work.

But why Lola and Roxy?

Like-

Those are to you, too sexy?

I mean, there’s research showing that

a lot of robots are named according to gender biases

about the function that they’re fulfilling.

So, you know, robots that are helpful assistants,

like nurses, are usually gendered female.

Robots that are powerful, all-wise computers, like Watson,

usually have like a booming, male-coded voice and name.

Like that’s one of those things, right?

You’re opening a can of worms for no reason.

For no reason.

You can avoid this whole can of worms.

Yeah, just give it a different name.

Like why Roxy?

It’s because people aren’t even thinking.

So to some extent, I don’t like PR departments,

but getting some feedback on your work

from a diverse set of participants,

listening and taking in things

that help you identify your own blind spots.

And then you can always make your good leadership choices

and good, like you can still ignore things

that you don’t believe are an issue,

but having the openness to take in feedback

and making sure that you’re getting the right feedback

from the right people.

I think that’s really important.

And also don’t unnecessarily propagate

the biases of society.

Yeah, why?

In the design.

But if you’re not careful when you do the research of,

like you might, if you ran a poll with a lot of people,

of all the possible names these robots have,

they might come up with Roxy and Lola

as names they would enjoy most.

Like that could come up as the highest.

As in you do marketing research,

and then, well, that’s what they did with Alexa.

They did marketing research,

and nobody wanted the male voice.

Everyone wanted it to be female.

Well, what do you think about that?

If I were to say, I think the role of a great designer,

again, to go back to Johnny Ive,

is to throw out the marketing research.

Take it in, do it, learn from it.

But if everyone wants Alexa to be a female voice,

the role of the designer is to think deeply

about the future of social agents in the home,

and think, what does that future look like?

And try to reverse engineer that future.

So in some sense, there’s this weird tension.

You want to listen to a lot of people,

but at the same time, you’re creating a thing

that defines the future of the world,

and the people that you’re listening to

are part of the past.

So that weird tension.

Yeah, I think that’s true,

and I think some companies like Apple

have historically done very well

at understanding a market and saying,

you know what our role is?

It’s not to listen to what the current market says.

It’s to actually shape the market

and shape consumer preferences.

And companies have the power to do that.

They can be forward thinking,

and they can actually shift

what the future of technology looks like.

And I agree with you that I would like to see more of that,

especially when it comes to existing biases that we know.

I think there’s the low-hanging fruit

of companies that don’t even think about it at all,

and aren’t talking to the right people,

and aren’t getting the full information.

And then there’s companies

that are just doing the safe thing

and giving consumers what they want now.

But to be really forward-looking and be really successful,

I think you have to make some judgment calls

about what the future is gonna be.

But do you think it’s still useful to gender

and to name the robots?

Yes, I mean, gender is a minefield,

but people, it’s really hard to get people

to not gender a robot in some way.

So if you don’t give it a name,

or you give it an ambiguous voice,

people will just choose something.

And maybe that’s better than just entrenching something

that you’ve decided is best.

But I do think it can be helpful

on the anthropomorphism engagement level

to give it attributes that people identify with.

Yeah, I think a lot of roboticists I know,

they don’t gender the robot.

They even try to avoid naming the robot.

Or they name it something that can’t be used

as a name in conversation, kind of thing.

And I think that actually, that’s irresponsible,

because people are going to anthropomorphize

the thing anyway.

So you’re just removing from yourself

the responsibility of how they’re going

to anthropomorphize it.

That’s a good point.

And so you want to be able to,

if they’re going to do it,

you have to start to think about how they’re going to do it.

Even if the robot is like a Boston Dynamics robot

that’s not supposed to have any kind of social component,

they’re obviously going to project

a social component to it.

Like that arm, I worked a lot with quadrupeds now

with robot dogs.

You know, that arm, people think it’s a head immediately.

It’s supposed to be an arm,

but they start to think it’s a head.

And you have to like acknowledge that.

You can’t, I mean-

They do now.

They do now?

Well, they’ve deployed the robots and people are like,

oh my God, the cops are using a robot dog.

And so they have this PR nightmare.

And so they’re like, oh yeah.

Okay, maybe we should hire some HRI people.

Well, Boston Dynamics is an interesting company,

or any of the others that are doing a similar thing,

because their main source of money

is in industrial applications.

So like surveillance of factories and doing dangerous jobs.

So to them, it’s almost good PR

for people to be scared of these things.

Because it’s for some reason, as you talk about,

people are naturally, for some reason, scared.

We could talk about that, of robots.

And so it becomes more viral,

like playing with that little fear.

And so it’s almost like a good PR,

because ultimately, they’re not trying

to put them in the home and have a good social connection.

They’re trying to put them in factories.

And so they have fun with it.

If you watch Boston Dynamics videos,

they’re aware of it.

Oh yeah, the videos for sure, that they put out.

It’s almost like an unspoken, tongue-in-cheek thing.

They’re aware of how people are going to feel

when you have a robot that does like a flip.

Now, most of the people are just like excited

about the control problem of it,

like how to make the whole thing happen.

But they’re aware when people see.

Well, I think they became aware.

I think that in the beginning,

they were really, really focused on just the engineering.

I mean, they’re at the forefront of robotics,

like locomotion and stuff.

And then when they started doing the videos,

I think that was kind of a labor of love.

I know that the former CEO, Marc,

he oversaw a lot of the videos

and made a lot of them himself.

And he’s even really detail-oriented.

There can’t be some sort of incline

that would give the robot an advantage.

He was very, a lot of integrity

about the authenticity of them.

But then when they started to go viral,

I think that’s when they started to realize,

oh, there’s something interesting here that,

I don’t know how much they took it seriously

in the beginning other than realizing

that they could play with it in the videos.

I know that they take it very seriously now.

What I like about Boston Dynamics and similar companies,

it’s still mostly run by engineers.

But, you know, I’ve had my criticisms.

There’s a bit more PR leaking in.

But those videos are made by engineers

because that’s what they find fun.

It’s like testing the robustness of the system.

I mean, they’re having a lot of fun there with the robots.

Totally.

Have you been to visit?

Yeah, yeah, yeah.

Yeah, it’s cool.

It’s one of the most incredible.

I mean, because I have eight robot dogs now.

Wait, you have eight robot dogs?

What?

So they’re just walking around your place?

Like, where do you keep them?

Yeah, I’m working on them.

That’s actually one of my goals

is to have at any one time always a robot moving.

Oh.

I’m far away from that.

That’s an ambitious goal.

Well, I have like more Roombas than I know what to do with.

Those are the Roombas that I program.

So the programmable Roombas.

Nice.

And I have a bunch of little, like I built a,

well, I’m not finished with it yet,

but I bought a robot from Rick and Morty.

I just have a bunch of robots everywhere.

But the thing is, what happens is

you’re working on one robot at a time,

and that becomes like a little project.

It’s actually very difficult to have

just a passively functioning robot always moving.

Yeah.

And that’s a dream for me,

because I’d love to create that kind of little world.

So the impressive thing about Boston Dynamics to me

was to see like hundreds of spots.

And like, there was a,

the most impressive thing that still sticks with me is

there was a spot robot walking down the hall

seemingly with no supervision whatsoever.

And he was wearing, he or she, I don’t know,

was wearing a cowboy hat.

It just, it was just walking down the hall

and nobody paying attention.

And it’s just like walking down this long hall.

And I’m like looking around.

Is anyone, like what’s happening here?

So presumably some kind of automation was doing the map.

I mean, the whole environment is probably really well mapped.

But I, it was just,

it gave me a picture of a world

where a robot is doing his thing, wearing a cowboy hat,

just going down the hall,

like getting some coffee or whatever.

Like, I don’t know what it’s doing, what’s the mission.

But I don’t know, for some reason it really stuck with me.

You don’t often see robots that aren’t part of a demo

or that aren’t, you know, like with a semi-autonomous

or autonomous vehicle, like directly doing a task.

This was just chilling.

Just walking around, I don’t know.

Well, yeah, you know, I mean, we’re at MIT.

Like when I first got to MIT, I was like,

okay, where’s all the robots?

And they were all like broken or like not demoing.

So yeah.

And what really excites me is that we’re about to have that.

We’re about to have so many robots moving about, too.

Well, it’s coming.

It’s coming in our lifetime

that we will just have robots moving around.

We’re already seeing the beginnings of it.

There’s delivery robots in some cities, on the sidewalks.

And I just love seeing like the TikToks

of people reacting to that.

Because yeah, you see a robot walking down the hall

with a cowboy hat.

You’re like, what the fuck?

What is this?

This is awesome and scary and kind of awesome.

And people either love or hate it.

That’s one of the things

that I think companies are underestimating,

that people will either love a robot or hate a robot

and nothing in between.

So it’s just, again, an exciting time to be alive.

Yeah, I think kids almost universally,

at least in my experience, love them.

Love legged robots.

If they’re not loud.

My son hates the Roomba because ours is loud.

Oh, that, yeah.

No, the legs, the legs make a difference.

Because your son,

do they understand Roomba to be a robot?

Oh yeah, my kids, that’s one of the first words they learned.

They know how to say beep boop.

And yes, they think the Roomba’s a robot.

Do they project intelligence out of the thing?

Well, we don’t really use it around them anymore

for the reason that my son is scared of it.

Yeah, that’s really interesting.

I think they would.

Even a Roomba, because it’s moving around on its own,

I think kids and animals view it as an agent.

So what do you think,

if we just look at the state of the art of robotics,

what do you think robots are actually good at today?

So if we look at today.

You mean physical robots?

Yeah, physical robots.

Wow.

Like what are you impressed by?

So I think a lot of people,

I mean, that’s what your book is about,

have maybe not a perfectly calibrated understanding

of where we are in terms of robotics,

what’s difficult in robotics, what’s easy in robotics.

Yeah, we’re way behind where people think we are.

So what’s impressive to me, so let’s see.

Oh, one thing that came out recently

was Amazon has this new warehouse robot,

and it’s the first autonomous warehouse robot

that is safe for people to be around.

And so most people, I think,

envision that our warehouses are already fully automated

and that there’s just robots doing things.

It’s actually still really difficult

to have robots and people in the same space

because it’s dangerous, for the most part.

Especially robots that have to be strong enough

to move something heavy, for example,

they can really hurt somebody.

And so until now, a lot of the warehouse robots

had to just move along pre-existing lines,

which really restricts what you can do.

And so having, I think, that’s one of the big challenges

and one of the big exciting things that’s happening

is that we’re starting to see more cobotics

in industrial spaces like that,

where people and robots can work side by side

and not get harmed.

Yeah, that’s what people don’t realize,

sort of the physical manipulation task with humans.

It’s not that the robots wanna hurt you.

I think that’s what people are worried about,

like this malevolent robot gets mad of its own accord

and wants to destroy all humans.

No, it’s actually very difficult

to know where the human is.

Yeah.

And to respond to the human and dynamically

and collaborate with them on a task,

especially if you’re something like

an industrial robotic arm, which is extremely powerful.

See, some of those arms are pretty impressive now

that you can grab it, you can move it.

So the collaboration between human and robot

in the factory setting is really fascinating.

Yeah.

Do you think they’ll take our jobs?

I don’t think it’s that simple.

I think that there’s a ton of disruption

that’s happening and will continue to happen.

I think speaking specifically of the Amazon warehouses,

that might be an area where it would be good

for robots to take some of the jobs

that are where people are put in a position

where it’s unsafe and they’re treated horribly.

And probably it would be better if a robot did that

and Amazon is clearly trying to automate that job away.

So I think there’s gonna be a lot of disruption.

I do think that robots and humans

have very different skillsets.

So while a robot might take over a task,

it’s not gonna take over most jobs.

I think just things will change a lot.

Like, I don’t know, one of the examples

I have in the book is mining.

So there you have this job that is very unsafe

and that requires a bunch of workers

and puts them in unsafe conditions.

And now you have all these different robotic machines

that can help make the job safer.

And as a result, now people can sit

in these air-conditioned remote control stations

and control these autonomous mining trucks.

And so that’s a much better job,

but also they’re employing less people now.

So it’s just a lot of,

I think from a bird’s eye perspective,

you’re not gonna see job loss.

You’re gonna see more jobs created

because the future is not robots

just becoming like people and taking their jobs.

The future is really a combination of our skills

and then the supplemental skills that robots have

to increase productivity,

to help people have better, safer jobs,

to give people work that they actually enjoy doing

and are good at.

But it’s really easy to say that

from a bird’s eye perspective.

And ignore kind of the rubble on the ground

as we go through these transitions,

because of course specific jobs are going to get lost.

If you look at the history of the 20th century,

it seems like automation constantly increases productivity

and improves the average quality of life.

So it’s been always good.

So thinking that this time is different

means it would have to go against the lessons of history.

It’s true.

And the other thing is,

I think people think that the automation

of the physical tasks is easy.

I was just in Ukraine and the interesting thing is,

I mean, there’s a lot of difficult and dark lessons

just about a war zone.

But one of the things that happens in war

is there’s a lot of mines that are placed.

That’s one of the big problems

for years after a war is even over

is the entire landscape is covered in mines.

And so there’s a demining effort.

And you would think robots would be good

at this kind of thing.

Or like your intuition would be like,

well, say you have unlimited money

and you wanna do a good job of it, unlimited money.

You would get a lot of really nice robots.

But no, humans are still far superior.

Or animals.

Or animals, right.

But humans with animals together.

Yeah.

You can’t just have a dog with a hat.

That’s fair.

But yes, but figuring out also how to disable the mine.

Obviously the easy thing,

the thing a robot can help with is to find the mine

and blow it up.

But that’s gonna destroy the landscape.

That really does a lot of damage to the land.

You wanna disable the mine.

And to do that because of all the different,

all the different edge cases of the problem.

It requires a huge amount of human-like experience,

it seems like.

So it’s mostly done by humans.

They have no use for robots.

They don’t want robots.

Yeah.

I think we overestimate what we can automate.

Especially in the physical realm.

Yeah.

It’s weird.

I mean, it continues that the story of humans,

we think we’re shitty at everything in the physical world,

including driving.

We think everybody makes fun of themselves

and others for being shitty drivers,

but we’re actually kind of incredible.

No, we’re incredible.

And that’s why like,

that’s why Tesla still says that

if you’re in the driver’s seat,

like you are ultimately responsible.

Because the ideal for,

I mean, you know more about this than I do,

but like robot cars are great at predictable things

and can react faster and more precisely than a person

and can do a lot of the driving.

And then the reason that we still don’t have

autonomous vehicles on all the roads yet

is because of this long tail of just unexpected occurrences

where a human immediately understands

that’s a sunset and not a traffic light.

That’s a horse and carriage ahead of me on the highway,

but the car has never encountered that before.

In theory, combining those skillsets

is what’s gonna really be powerful.

The only problem is figuring out

the human-robot interaction and the handoffs.

So like in cars, that’s a huge problem right now,

figuring out the handoffs.

But in other areas, it might be easier.

And that’s really the future is human-robot interaction.

What’s really hard to improve,

it’s terrible that people die in car accidents,

but I mean, it’s like 70, 80, 100 million miles,

one death per 80 million miles.

That’s like really hard to beat for a robot.

That’s like incredible.
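
That ballpark checks out on a napkin, using rough US figures from memory (so treat them as approximate):

```python
# Back-of-envelope check on the fatality rate, using rough US figures
# recalled from memory: ~40,000 road deaths and ~3 trillion vehicle-miles
# per year. These are approximations, not exact statistics.
annual_deaths = 40_000
annual_vehicle_miles = 3_000_000_000_000

miles_per_death = annual_vehicle_miles / annual_deaths
print(f"~1 death per {miles_per_death / 1e6:.0f} million miles")  # ~75 million
```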

Like think about it, how many people?

Think just the number of people throughout the world

that are driving every single day.

All of this, you know, sleep deprived, drunk,

distracted, all of that.

And still very few die relative to what I would imagine.

If I were to guess, back in the horse days,

see, if I was like in the beginning of the 20th century,

riding my horse, I would talk so much shit about these cars.

I’d be like, this is extremely dangerous.

These machines traveling at 30 miles an hour

or whatever the hell they’re going at.

This is irresponsible.

It’s unnatural and it’s going to be destructive

to all of human society.

But then it’s extremely surprising

how humans adapt to the thing

and they know how to not kill each other.

I mean, that ability to adapt is incredible

and to mimic that in the machine is really tricky.

Now, that said, what Tesla is doing,

I mean, I wouldn’t have guessed

how far machine learning can go on vision alone.

It’s really, really incredible.

And people that are, at least from my perspective,

people that are kind of, you know,

critical of Elon and those efforts,

I think don’t give enough credit

of how much incredible progress

has been made in that direction.

I think most of the robotics community

wouldn’t have guessed how much you can do on vision alone.

It’s kind of incredible.

I think that approach, which is relatively unique,

has challenged the other competitors to step up their game.

So if you’re using LIDAR, if you’re using mapping,

that challenges them to do better, to scale faster,

and to use machine learning and computer vision as well

to integrate both LIDAR and vision.

So it’s kind of incredible.

And I’m not, I don’t know if I even have a good intuition

of how hard driving is anymore.

Maybe it is possible to solve.

So all the sunset, all the edge cases you mentioned.

Yeah, the question is when.

Yeah, I think it’s not happening

as quickly as people thought it would

because it is more complicated.

But I wouldn’t have, I agree with you.

My current intuition is that we’re gonna get there.

I think we’re gonna get there too.

But before, I wasn’t sure we were gonna get there,

like, with current technology.

So I was kind of, this is like with vision alone,

my intuition was you’re gonna have to solve

like common sense reasoning.

You’re gonna have to solve some of the big problems

in artificial intelligence, not just perception.

Yeah.

Like you have to have a deep understanding of the world

is what was my sense.

But now I’m starting to like, well this,

I mean I’m continually surprised how well the thing works.

Yeah.

Obviously, others have stopped.

But Elon continues, you know,

saying we’re gonna solve it in a year.

Yeah, that’s the thing, bold predictions.

Yeah, but everyone else used to be doing that.

But they kind of like, all right.

Yeah, maybe we’ll.

Maybe let’s not promise we’re gonna solve

level four driving by 2020.

Let’s chill on that.

But people are still trying silently.

I mean the UK just committed 100 million pounds

to research and development to speed up the process

of getting autonomous vehicles on the road.

Like everyone can see that it is solvable

and it’s going to happen and it’s gonna change everything.

And they’re still investing in it.

And like Waymo low-key has driverless cars in Arizona.

Like you can get, you know, there’s like robots.

It’s weird, have you ever been in one?

No.

It’s so weird.

It’s so awesome.

Because the most awesome experience is the wheel turning.

And you’re sitting in the back.

It’s like, I don’t know, it’s,

it feels like you’re a passenger with that friend

who’s a little crazy of a driver.

It feels like, shit, I don’t know.

Are you all right to drive, bro?

You know, that kind of feeling.

But then you kind of, that experience,

that nervousness and the excitement

of trusting another being,

and in this case it’s a machine, is really interesting.

Just even introspecting your own feelings about the thing.

Yeah.

They’re not doing anything

in terms of making you feel better about,

like at least Waymo.

I think they went with the approach of like,

let’s not try to put eyes on the thing.

It’s a wheel, we know what that looks like.

It’s just a car.

It’s a car, get in the back.

Let’s not like discuss this at all.

Let’s not discuss the fact that this is a robot driving you

and you’re in the back.

And if the robot wants to start driving 80 miles an hour

and run off of a bridge, you have no recourse.

Let’s not discuss this.

You’re just getting in the back.

There’s no discussion about like how shit can go wrong.

There’s no eyes, there’s nothing.

There’s like a map showing what the car can see.

Like, you know, what happens

if it’s like a HAL 9000 situation?

Like, I’m sorry, I can’t.

You have a button you can like call customer service.

Oh God, then you get put on hold for two hours.

Probably.

But you know, currently what they’re doing,

which I think is understandable,

but you know, the car just can pull over and stop

and wait for help to arrive.

And then a driver will come

and then they’ll actually drive the car for you.

But that’s like, you know,

what if you’re late for a meeting

or all that kind of stuff?

Or like the more dystopian,

isn’t it The Fifth Element where,

is Will Smith in that movie?

Who’s in that movie?

No, Bruce Willis?

Bruce Willis.

Oh yeah, and he gets into like a robotic cab

or car or something.

And then because he’s violated a traffic rule,

it locks him in.

And he has to wait for the cops to come

and he can’t get out.

So like, we’re gonna see stuff like that maybe.

Well, that’s, I believe that the companies that have robots,

the only ones that will succeed

are the ones that don’t do that.

Meaning they respect privacy.

You think so?

Yeah, because people,

because they’re gonna have to earn people’s trust.

Yeah, but like Amazon works with law enforcement

and gives them the data from the ring cameras.

So why should it, yeah, oh yeah.

Do you have a ring camera?

No.

Okay.

No, no, but basically any security camera, right?

I have a Google one, whatever they have.

We have one where that’s not the case.

We store the data on a local server

because we don’t want it to go to law enforcement

because all the companies are doing it.

They’re doing it.

I bet Apple wouldn’t.

Yeah.

Apple’s the only company I trust

and I don’t know how much longer.

I don’t know.

Maybe that’s true for cameras,

but with robots,

people are just not gonna let a robot inside their home

where like one time where somebody gets arrested

because of something a robot sees,

that’s gonna be, that’s gonna destroy a company.

You don’t think people are gonna be like,

well, that wouldn’t happen to me.

That happened to a bad person.

I think they would.

Yeah.

Because in the modern world, people get,

have you seen Twitter?

They get extremely paranoid about any kind of surveillance.

But the thing that I’ve had to learn

is that Twitter is not the modern world.

Like when I go, you know, inland to visit my relatives,

like they don’t, that’s a different discourse

that’s happening.

I think like the whole tech criticism world.

Yeah.

It’s loud in our ears because we’re in those circles.

You think you can be a company that does social robotics

and not win over Twitter?

That’s a good question.

I feel like the early adopters are all on Twitter

and it feels like you have to win them over.

Feels like nowadays you’d have to win over TikTok, honestly.

I don’t.

TikTok, is that a website?

I need to check it out.

And that’s an interesting one

because China is behind that one.

Exactly.

So if it’s compelling enough,

maybe people would be willing to give up privacy

and that kind of stuff.

That’s really scary.

I mean, I’m worried about it.

I’m worried about it.

And there’ve been some developments recently

that are like super exciting,

like the large language models.

Like, wow, I did not anticipate those improving so quickly

and those are gonna change everything.

And one of the things that I’m trying to be cynical about

is that I think they’re gonna have a big impact

on privacy and data security and like manipulating consumers

and manipulating people

because suddenly you’ll have these agents

that people will talk to

and they won’t care or won’t know,

at least on a conscious level,

that it’s recording the conversations.

So kind of like we were talking about before.

And at the same time,

the technology is so freaking exciting

that it’s gonna get adopted.

Well, it’s not even just the collection of data

but the ability to manipulate at scale.

So what do you think about the AI,

the engineer from Google

that thought LaMDA is sentient?

You had actually a really good post from somebody else.

I forgot her name.

It’s brilliant.

I can’t believe I didn’t know about her.

Thanks to you.

Yeah, from AI Weirdness.

Oh yeah, I love her book.

She’s great.

I left a note for myself to reach out to her.

She’s amazing.

She’s hilarious and brilliant

and just a great summarizer of the state of AI.

But she has, I think that was from her,

where I was looking at an AI explaining that it’s a squirrel.

Oh yeah, because the transcripts

that the engineer released,

LaMDA kind of talks about the experience

of human-like feelings and I think even consciousness.

And so she was like, oh cool, that’s impressive.

I wonder if an AI can also describe

the experience of being a squirrel.

And so she interviewed, I think she did GPT-3

about the experience of being a squirrel.

And then she did a bunch of other ones too,

like what’s it like being a flock of crows?

What’s it like being an algorithm that powers a Roomba?

And you can have a conversation about any of those things

and they’re very convincing.

It’s pretty convincing, yeah.

Even GPT-3, which is not like state of the art.

It’s convincing of being a squirrel.

It’s like what it’s like, you should check it out

because it really is.

It’s like yeah, that probably is what a squirrel would say.

Are you excited?

What’s it like being a squirrel?

It’s fun.

Yeah, I get to eat nuts and run around all day.

How do you think people will feel

when you tell them that you’re a squirrel?

Or like, I forget what it was,

a lot of people might be scared to find out

that you’re a squirrel or something like this.

And then the system answers pretty well.

Like yeah, I hope they’ll,

what do you think when they find out you’re a squirrel?

I hope they’ll see how fun it is to be a squirrel

or something like that.

What do you say to people who don’t believe you’re a squirrel?

I say, come see for yourselves.

I am a squirrel.

That’s great.
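
As an aside, for anyone who wants to recreate that kind of squirrel interview: Shane’s exact prompts aren’t reproduced in this conversation, but the general shape is easy to sketch against the legacy OpenAI completions API. A minimal sketch, assuming the pre-1.0 openai Python package; the model name, prompt wording, and settings are illustrative guesses, not her actual setup.

# Sketch of an "interview a squirrel" loop against the legacy
# OpenAI completions API (openai-python < 1.0). Prompt wording and
# model choice are guesses for illustration, not Janelle Shane's.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PREAMBLE = ("The following is an interview with a squirrel. "
            "The squirrel answers every question as a squirrel would.\n")

def ask(history, question):
    # Append one interviewer question and return the model's reply.
    prompt = history + "Interviewer: " + question + "\nSquirrel:"
    response = openai.Completion.create(
        model="text-davinci-002",   # any completion-style model works here
        prompt=prompt,
        max_tokens=60,
        temperature=0.8,
        stop=["Interviewer:"],      # stop before it invents the next question
    )
    return response.choices[0].text.strip()

history = PREAMBLE
for q in ["What's it like being a squirrel?",
          "What do you say to people who don't believe you're a squirrel?"]:
    answer = ask(history, q)
    history += "Interviewer: " + q + "\nSquirrel: " + answer + "\n"
    print(q, "->", answer)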

Well, I think it’s really great

because the two things to note about it are,

first of all, just because the machine

is describing an experience

doesn’t mean it actually has that experience.

But then secondly, these things are getting so advanced

and so convincing at describing these things

and talking to people.

That’s, I mean, just the implications for health,

education, communication, entertainment, gaming.

I just feel like all of the applications,

it’s mind boggling what we’re gonna be able to do with this.

And that my kids are not gonna remember a time

before they could have conversations with artificial agents.

Do you think they would?

Because to me, this is,

the focus in the AI community has been,

well, this engineer surely is hallucinating.

The thing is not sentient.

But to me, first of all,

it doesn’t matter if he is or not.

This is coming where a large number of people

would believe a system is sentient,

including engineers within companies.

So in that sense, you start to think about a world

where your kids aren’t just used

to having a conversation with a bot,

but used to believing,

having an implied belief that the thing is sentient.

Yeah, I think that’s true.

And I think that one of the things that bothered me

about all of the coverage in the tech press

about this incident,

like obviously I don’t believe the system is sentient.

Like I think that it can convincingly describe that it is.

I don’t think it’s doing what he thought it was doing

and actually experiencing feelings.

But a lot of the tech press was about how he was wrong

and depicting him as kind of naive.

And it’s not naive.

Like there’s so much research in my field

showing that people do this.

Even experts, they might be very clinical

when they’re doing human robot interaction experiments

with a robot that they’ve built.

And then you bring in a different robot

and they’re like, oh, look at it, it’s having fun.

It’s doing this.

Like that happens in our lab all the time.

We are all this guy and it’s gonna be huge.

So I think that the goal is not to discourage

this kind of belief or like design systems

that people won’t think are sentient.

I don’t think that’s possible.

I think you’re right, this is coming.

It’s something that we have to acknowledge

and even embrace and be very aware of.

So one of the really interesting perspectives

that your book takes on a system like this

is to see them, not to compare a system like this to humans,

but to compare it to animals, of how we see animals.

Can you kind of try to, again, sneak up,

try to explain why this analogy is better

than the human analogy, the analogy of robots as animals?

Yeah, and it gets trickier with the language stuff,

but we’ll get into that too.

I think that animals are a really great thought experiment

when we’re thinking about AI and robotics,

because again, this comparing them to humans

that leads us down the wrong path,

both because it’s not accurate,

but also I think for the future, we don’t want that.

We want something that’s a supplement.

But I think animals, because we’ve used them

throughout history for so many different things,

we domesticated them, not because they do what we do,

but because what they do is different and that’s useful.

And it’s just like,

whether we’re talking about companionship,

whether we’re talking about work integration,

whether we’re talking about responsibility for harm,

there’s just so many things we can draw on in that history

from these entities that can sense, think,

make autonomous decisions and learn

that are applicable to how we should be thinking

about robots and AI.

And the point of the book is not

that they’re the same thing,

that animals and robots are the same.

Obviously, there are tons of differences there.

Like you can’t have a conversation with a squirrel, right?

But the point is that-

I do it all the time.

Oh, really?

By the way, squirrels are the cutest.

I project so much on squirrels.

I wonder what their inner life is.

I suspect they’re much bigger assholes than we imagine.

Really?

Like if it was a giant squirrel,

it would fuck you over so fast if it had the chance.

It would take everything you own.

It would eat all your stuff.

But because it’s small and the furry tail,

the furry tail is a weapon

against human consciousness and cognition.

It wins us over.

That’s what cats do too.

Cats out-competed squirrels.

And dogs.

Yeah, dogs have love.

Cats have no soul.

They, no, I’m just kidding.

People get so angry when I talk shit about cats.

I love cats.

Anyway, so yeah, you’re describing

all the different kinds of animals that get domesticated.

And it’s a really interesting idea

that it’s not just sort of pets.

There’s all kinds of domestication going on.

They all have all kinds of uses.

Yes.

Like the ox that you propose might be,

at least historically,

one of the most useful domesticated animals.

It was a game changer because it revolutionized

what people could do economically, et cetera.

So, I mean, just like robots,

they’re gonna change things economically.

They’re gonna change landscapes.

Like cities might even get rebuilt

around autonomous vehicles or drones or delivery robots.

I think just the same ways

that animals have really shifted society.

And society has adapted also

to socially accepting animals as pets.

I think we’re gonna see very similar things with robots.

So I think it’s a useful analogy.

It’s not a perfect one,

but I think it helps us get away from this idea

that robots can, should, or will replace people.

If you remember,

what are some interesting uses of animals?

Ferrets, for example.

Oh yeah, the ferrets.

They still do this.

They use ferrets to go into narrow spaces

that people can’t go into, like a pipe,

or they’ll use them to run electrical wire.

I think they did that for Princess Di’s wedding.

There’s so many weird ways that we’ve used animals

and still use animals for things that robots can’t do,

like the dolphins that they used in the military.

I think Russia still has dolphins,

and the U.S. still has dolphins, in their navies.

Mine detection, looking for lost underwater equipment,

some rumors about using them for weaponry,

which I think Russia’s like, sure, believe that.

America’s like, no, no, we don’t do that.

Who knows?

But they started doing that in the 60s, 70s.

They started training these dolphins

because they were like, oh,

dolphins have this amazing echolocation system

that we can’t replicate with machines,

and they’re trainable,

so we’re gonna use them for all the stuff

that we can’t do with machines or by ourselves.

And they’ve tried to phase out the dolphins.

I know the U.S. has invested a lot of money

in trying to make robots do the mine detection,

but like you were saying,

there are some things that the robots are good at,

and there’s some things

that biological creatures are better at,

so they still have the dolphins.

So there’s also pigeons, of course.

Oh yeah, pigeons.

Oh my gosh, there’s so many examples.

The pigeons were the original hobby photography drone.

They also carried mail for thousands of years,

letting people communicate with each other in new ways.

So the thing that I like about the animal analogies,

they have all these physical abilities,

but also sensing abilities

that we don’t have,

and that’s just so useful.

And that’s robots, right?

Robots have physical abilities.

They can help us lift things

or do things that we’re not physically capable of.

They can also sense things.

It’s just, I just feel like,

I still feel like it’s a really good analogy.

Yeah, it’s really strong.

And it works because people are familiar with it.

What about companionship?

And when we start to think about cats and dogs,

they’re pets that seem to serve no purpose whatsoever

except the social connection.

Yeah, I mean, it’s kind of a newer thing.

At least in the United States,

like dogs used to have,

like they used to have a purpose.

They used to be guard dogs

or they had some sort of function.

And then at some point they became just part of the family.

And it’s so interesting how there’s some animals

that we’ve treated as workers,

some that we’ve treated as objects,

some that we eat,

and some that are parts of our families.

And that’s different across cultures.

And I’m convinced that we’re gonna see

the same thing with robots,

where people are gonna develop

strong emotional connections to certain robots

that they relate to,

either culturally or personally, emotionally.

And then there’s gonna be other robots

that we don’t treat the same way.

I wonder, does that have to do more

with the culture and the people or the robot design?

Is there an interplay between the two?

Like why did dogs and cats out-compete oxen

and, I don’t know, what else?

Like farm animals to really get inside the home

and get inside our hearts?

Yeah, I mean, people point to the fact

that dogs are very genetically flexible

and they can evolve much more quickly than other animals.

And so evolutionary biologists

think that dogs evolved to be more appealing to us.

And then once we learned how to breed them,

we started breeding them to be more appealing to us too,

which is not something that we necessarily

would be able to do with cows,

although we’ve bred them to make more milk for us.

But part of it is also culture.

I mean, there are cultures where people eat dogs

still today and then there’s other cultures

where we’re like, oh no, that’s terrible.

We would never do that.

And so I think there’s a lot of different elements

that play in.

I wonder if there’s good,

because I understand dogs because they use their eyes,

they’re able to communicate affection,

all those kinds of things.

It’s really interesting what dogs do.

There’s whole conferences on dog consciousness

and cognition and all that kind of stuff.

Now cats is a mystery to me

because they seem to not give a shit about the human.

But they’re warm and fluffy.

And cute.

But they’re also passive aggressive.

So they’re, at the same time,

they’re like, they’re dismissive of you in some sense.

I think some people like that about people.

Yeah, they want the push and pull of a relationship.

They don’t want loyalty or unconditional love.

That means they haven’t earned it.

Yeah.

Yeah.

Yeah.

Yeah.

And maybe that says a lot more about the people

than it does about the animals.

Oh yeah, we all need therapy.

Yeah.

So I’m judging harshly the people that have cats

or the people that have dogs.

Maybe the people that have dogs

are desperate for attention and unconditional love.

And they’re unable to sort of struggle

to earn meaningful connections.

I don’t know.

Maybe people are talking about you

and your robot pets in the same way.

Yeah, that’s,

it is kind of sad.

There’s just robots everywhere.

But it is, I mean, I’m joking about it being sad

because I think it’s kind of beautiful.

I think robots are beautiful in the same way

that pets are, even children,

in that they capture some kind of magic of social connection.

And they have the capacity

to have the same kind of magic of connection.

I don’t know what that is.

Like, when they’re brought to life and they move around,

the way they make me feel, I’m pretty convinced,

is as you know, they will make billions of people feel.

Like, I don’t think I’m like some weird robotics guy.

I’m not.

I mean, you are, but not in this way.

Not in this way.

I mean, I just, I can put on my normal human hat

and just see this, oh, this is,

like, there’s a lot of possibility there

of something cool, just like with dogs.

Like, what is it?

Why are we so into dogs or cats?

Like, it’s like, it’s way different than us.

It is.

It’s like drooling all over the place with its tongue out.

It’s like, it’s like a weird creature

that used to be a wolf.

Why are we into this thing?

Well, dogs can either express or mimic

a lot of emotions that we recognize.

And I think that’s a big thing.

Like, a lot of the magic of animals and robots

is our own self-projection.

And the easier it is for us to see ourselves in something

and project human emotions or qualities or traits onto it,

the more we’ll relate to it.

And then you also have the movement, of course.

I think that’s also really,

that’s why I’m so interested in physical robots,

because that’s, I think, the visceral magic of them.

I think we’re, I mean, there’s research showing

that we’re probably biologically hardwired

to respond to autonomous movement in our physical space,

because we’ve had to watch out for predators

or whatever the reason is.

And so animals and robots are very appealing to us

as these autonomously moving things

that we view as agents instead of objects.

I mean, I love the moment,

which is, I’ve been particularly working on,

which is when a robot, like the one with the cowboy hat,

is doing its own thing, and then it recognizes you.

I mean, the way a dog does.

And it looks like this.

And the moment of recognition, like you’re walking,

say you’re walking at an airport or on the street,

and there’s just, you know, hundreds of strangers,

but then you see somebody you know,

and that like, well, you wake up to that excitement

of seeing somebody you know and saying hello

and all that kind of stuff.

That’s a magical moment.

Like, I think, especially with the dog,

it makes you feel noticed and heard and loved.

Like, that somebody looks at you and recognizes you,

that it matters that you exist.

Yeah, you feel seen.

Yeah, and that’s a cool feeling.

I mean, I honestly think robots can give that feeling too.

Oh yeah, totally.

Currently, Alexa, I mean, one of the downsides

of these systems is they don’t, they’re servants.

They like, part of the, you know,

they’re trying to maintain privacy, I suppose.

But I don’t feel seen with Alexa, right?

I think that’s gonna change.

I think you’re right.

And I think that that’s the game-changing nature

of things like these large language models.

And the fact that these companies are investing

in embodied versions that move around of Alexa, like Astro.

Can I just say, yeah, Astro, I haven’t, is that out?

I mean, it’s out.

You can’t just like buy one commercially yet,

but you can apply for one.

Yeah.

My gut says that these companies don’t have the guts

to do the personalization.

This goes to the, because it’s edgy, it’s dangerous.

It’s gonna make a lot of people very angry.

Like in a way that, you know, just imagine, okay.

All right.

If you do the full landscape of human civilization,

just visualize the number of people

that are going through breakups right now.

Just the amount of really passionate,

just even if we just look at teenagers,

the amount of deep heartbreak that’s happening.

And like, if you’re going to have Alexa

have more of a personal connection with the human,

you’re gonna have humans that like have existential crises.

There’s a lot of people that suffer

from loneliness and depression.

And like, you’re now taking on the full responsibility

of being a companion to the rollercoaster

of the human condition.

As a company, like imagine PR and marketing people.

They’re gonna freak out.

They don’t have the guts.

It’s gonna have to come from somebody from a new Apple,

from those kinds of folks, like as a small startup.

And it might.

Yeah.

Like they’re coming.

There’s already virtual therapists.

There’s that Replica app.

I haven’t tried it, but Replica’s like a virtual companion.

Like it’s coming.

And if big companies don’t do it, someone else will.

Yeah.

I think the next, the future,

the next trillion-dollar company

will be built on that kind of personalization.

If you think about all the AI we have around us,

all the smartphones and so on,

there’s very minimal personalization.

You don’t think that’s just because they weren’t able?

No.

Really?

I don’t think they have the guts.

I mean, it might be true, but I have to wonder.

I mean, Google is clearly gonna do something

with the language.

I mean.

They don’t have the guts.

Are you challenging them?

Partially, but not really,

because I know they’re not gonna do it.

I mean.

They don’t have to.

It’s bad for business in the short term.

I’m gonna be honest.

Maybe it’s not such a bad thing

if they don’t just roll this out quickly,

because I do think there are huge issues.

Yeah.

And there’s not just issues with the responsibility

of unforeseen effects on people,

but what’s the business model?

And if you are using the business model

that you’ve used in other domains,

then you’re gonna have to collect data from people,

which you will anyway to personalize the thing,

and you’re gonna be somehow monetizing the data,

or you’re gonna be doing some like ad model.

It just, it seems like now we’re suddenly getting

into the realm of like severe consumer protection issues,

and I’m really worried about that.

I see massive potential for this technology to be used

in a way that’s not for the public good,

and not, I mean, that’s in an individual user’s interest

maybe, but not in society’s interest.

Yeah, see, I think that kind of personalization

should be like redefine how we treat data.

I think you should own all the data

your phone knows about you,

and be able to delete it with a single click,

and walk away, and that data cannot be monetized,

or used, or shared anywhere without your permission.

I think that’s the only way people will trust you

to give, for you to use that data.

But then how are companies gonna,

I mean, a lot of these applications rely

on massive troves of data to train the AI system.

Right, so you have to opt in constantly,

and opt in not in some legal “I agree” sense,

but in an obvious way, the way I opt in

to tell you a secret.

We understand that I have to choose.

How well do I know you?

And then I say, don’t tell this to anyone.

And then I have to judge how leaky that,

how good you are at keeping secrets.

In that same way, like it’s very transparent

in which data you’re allowed to use for which purposes.
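
What’s being described here is essentially a default-deny, per-purpose consent model with one-call deletion. A toy sketch of that shape follows; every name in it is hypothetical, not any real company’s implementation.

# Toy sketch of a default-deny, per-purpose consent model with
# single-call deletion. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserDataStore:
    user_id: str
    records: dict = field(default_factory=dict)   # purpose -> stored data
    consents: set = field(default_factory=set)    # purposes the user opted into

    def grant(self, purpose):
        # Explicit, purpose-specific opt-in; nothing is allowed by default.
        self.consents.add(purpose)

    def revoke(self, purpose):
        self.consents.discard(purpose)

    def store(self, purpose, data):
        if purpose not in self.consents:
            raise PermissionError(f"user never opted in to '{purpose}'")
        self.records.setdefault(purpose, []).append(data)

    def read(self, purpose):
        if purpose not in self.consents:
            raise PermissionError(f"no consent for '{purpose}'")
        return self.records.get(purpose, [])

    def delete_everything(self):
        # The single-click walk-away: consents and data both gone.
        self.records.clear()
        self.consents.clear()

store = UserDataStore("lex")
store.grant("fashion_recommendations")
store.store("fashion_recommendations", "prefers black t-shirts")
store.delete_everything()   # walk away; nothing is retained or shareable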

That’s what people are saying is the solution.

And I think that works to some extent,

having transparency, having people consent.

I think it breaks down at the point at which,

we’ve seen this happen on social media too,

like people are willingly giving up their data

because they’re getting a functionality from that.

And then the harm that that causes is on a,

maybe to someone else, and not to them personally.

I don’t think people are giving their data.

They’re not being asked.

But if you were asked-

It’s not consensual.

If you were like, tell me a secret about yourself

and I’ll give you $100, I’d tell you a secret.

No, not $100.

First of all, you wouldn’t.

You wouldn’t trust, like why are you giving me $100?

It’s a bad example.

But like I need, I would ask for your specific

like fashion interest in order to give recommendations

to you for shopping.

And I’d be very clear for that.

And you could disable that, you can delete that.

But then you can be, have a deep, meaningful,

rich connection with the system

about what you think you look fat in,

what you look great in, what like the full history

of all the things you’ve worn,

whether you regret the Justin Bieber

or enjoy the Justin Bieber shirt,

all of that information that’s mostly private

to even you, not even your loved ones.

A system should have that,

because then a system, if you trust it,

to keep control of that data that you own,

you can walk away with,

that system could tell you a damn good thing to wear.

It could.

And the harm that I’m concerned about

is not that the system is gonna then suggest a dress for me

that is based on my preferences.

So I went to this conference once

where I was talking to the people who do the analytics

in like the big ad companies.

And like literally a woman there was like,

I can ask you three totally unrelated questions

and tell you what menstrual product you use.

And so what they do is they aggregate the data

and they map out different personalities

and different people and demographics.

And then they have a lot of power and control

to market to people.

So like I might not be sharing my data

with any of the systems because I’m like,

I’m on Twitter, I know that this is bad.

Other people might be sharing data

that can be used against me.

Like it’s, I think it’s way more complex

than just I share a piece of personal information

and it gets used against me.

I think that at a more systemic level,

and then it’s always vulnerable populations

that are targeted by this,

low income people being targeted for scammy loans

or I don’t know, like I could get targeted,

like someone, not me,

because someone who doesn’t have kids yet

and is my age could get targeted for like freezing their eggs

and there’s all these ways that you can manipulate people

where it’s not really clear that that came

from that person’s data,

it came from all of us, all of us opting into this.

But there’s a bunch of sneaky decisions along the way

that could be avoided if there’s transparency.

So that, so one of the ways that goes wrong

is if you share that data with too many ad networks,

or run your own ad network.

Don’t share with anybody.

Okay, and that’s something that you could regulate.

That belongs to just you

and all of the ways you allow the company to use it,

the default is in no way at all.

And you are consciously constantly saying

exactly how to use it.

And also, it has to do with the recommender system itself

from the company, which is freezing your eggs.

If that doesn’t make you happy,

if that idea doesn’t make you happy,

then the system shouldn’t recommend it

and should be very good at learning.

So not the kind of things that the category of people

it thinks you belong to would do,

but more you specifically, what makes you happy,

what is helping you grow.

But you’re assuming that people’s preferences

and like what makes them happy is static.

Whereas when we were talking before

about how a company like Apple

can tell people what they want

and they will start to want it.

That’s the thing that I’m more concerned about.

Yeah, that is a huge problem.

It’s not just listening to people,

but manipulating them into wanting something.

And that’s like, we have a long history

of using technology for that purpose.

Like the persuasive design in casinos

to get people to gamble more or like,

it’s just,

the other thing that I’m worried about

is as we have more social technology,

suddenly you have this on a new level.

Like if you look at the influencer marketing

that happens online now.

What’s the influencer marketing?

So like on Instagram, there will be some like person

who has a bunch of followers.

And then a brand will like hire them

to promote some product and it’s above board.

They disclose like, this is an ad that I’m promoting,

but they have so many young followers

who like deeply admire and trust them.

I mean, this must work for you too.

Don’t you have like ads on the podcast?

Like people trust you.

Magic Spoon cereal, low carb.

Yes.

If you say that, like I guarantee you

some people will buy that just because

even though they know that you’re being paid,

they trust you.

Yeah.

It’s different with podcasts

because well, my particular situation,

but it’s true for a lot of podcasts,

especially big ones is, you know,

I have 10 times more sponsors that want to be sponsors

than I have.

So you get to select the ones

that you actually want to support.

And so like you end up using it

and then you’re able to actually,

like there’s no incentive to like shill for anybody.

Sure.

And that’s why it’s fine when it’s still human influencers.

Right.

Now, if you’re a bot,

you’re not gonna discriminate.

You’re not gonna be like,

oh, well, I think this product is good for people.

You think there’ll be like bots essentially

with millions of followers?

There already are.

There are virtual influencers in South Korea

who shill products.

And like, that’s just the tip of the iceberg

because that’s still very primitive.

Now with the new image generation

and the language learning models.

And like, so we’re starting to do some research

around kids and like young adults

because a lot of the research

on like what’s okay to advertise to kids

and what is too manipulative

has to do with television ads.

Back in the day, where like a kid who’s 12 understands,

oh, that’s an advertisement.

I can distinguish that from entertainment.

I know it’s trying to sell me something.

Now it’s getting really, really murky with influencers.

And then if you have like a bot

that a kid has developed a relationship with,

is it okay to market products through that or not?

Like, you’re getting into

all these consumer protection issues

because you’re developing a trusted relationship

with a social entity, but it’s…

And so now it’s like personalized, it’s scalable,

it’s automated, and it can…

So some of the research showing

that kids are already very confused

about like the incentives of the company

versus what the robot is doing.

Meaning they’re…

So, okay, so…

They’re not deeply understanding

the incentives of the system.

Well, yeah, so like kids who are old enough to understand

this is a television advertisement

is trying to advertise to me.

I might still decide I want this product,

but they understand what’s going on,

so there’s some transparency there.

That age of child, so, Daniella DiPaola

and Anastasia Ostrowski did this project,

which I advised on, and they asked kids

who had interacted with social robots

whether they would like a policy that allows robots

to market to people through casual conversation

or whether they would prefer that it has to be transparent,

that it’s like an ad coming from a company.

And the majority said

they preferred the casual conversation.

And when asked why, there was a lot of confusion about…

They were like, well, the robot knows me better

than the company does,

so the robot’s only gonna market things that I like.

And so they don’t really…

They’re not connecting the fact

that the robot is an agent of the company.

They’re viewing it as something separate.

And I think that even happens subconsciously with grownups

when it comes to robots and artificial agents, and it will.

This Blake guy at Google, sorry, I’m going on and on,

but his main concern was that Google owned

this sentient agent and that it was being mistreated.

His concern was not

that the agent was gonna mistreat people.

So I think we’re gonna see a lot of this.

Yeah, but shitty companies will do that.

I think ultimately that confusion should be alleviated

by the robot should actually know you better

and should not have any control from the company.

But what’s the business model for that?

If you use the robot to buy…

First of all, the robot should probably cost money.

Should what?

Should probably cost money,

like the way the Windows operating system does.

I see it more like an operating system.

Then like this thing is your window,

no pun intended, into the world.

So it’s helping you as like a personal assistant, right?

And so that should cost money.

You should, you know, whatever it is, 10 bucks, 20 bucks.

Like that’s the thing

that makes your life significantly better.

This idea that everything should be free

is like, it should actually help educate you.

You should talk shit about all the other companies

that do stuff for free.

But also, yeah, in terms of if you purchase stuff

based on its recommendation, it gets money.

So it’s kind of ad driven, but it’s not ads.

It’s like, it’s not controlled,

like no external entities can control it

to try to manipulate, to want a thing.

That would be amazing.

It’s actually trying to discover what you want.

So it’s not allowed to have any influence,

no promoted ad, no anything.

So it’s finding, I don’t know,

the thing that would actually make you happy.

That’s the only thing it cares about.

I think companies like this can win out.

Yes, I think eventually,

once people understand the value of the robot,

even just like, I think that robots

would be valuable to people,

even if they’re not marketing something

or helping with like preferences or anything,

like just a simple, the same thing as a pet,

like a dog that has no function

other than being a member of your family.

I think robots could really be that

and people would pay for that.

I don’t think the market realizes that yet.

And so my concern is that companies

are not gonna go in that direction,

at least not yet, of making like this

contained thing that you buy.

It seems almost old-fashioned, right,

to have a disconnected object that you buy

that you’re not like paying a subscription for.

It’s not like controlled by one of the big corporations.

But that’s the old-fashioned things

that people yearn for

because I think it’s very popular now

and people understand the negative effects of social media,

the negative effects of the data being used

in all these kinds of ways.

I think we’re just waking up to the realization

we tried, we’re like baby deer,

finding our legs in this new world of social media,

of ad-driven companies and realizing,

okay, this has to be done somehow different.

I think that one of the most popular notions,

at least in the United States,

is social media is evil and it’s doing bad by us.

It’s not like it’s totally tricked us

into believing that it’s good for us.

I think everybody knows it’s bad for us.

And so there’s a hunger for other ideas.

All right, it’s time for us to start that company.

I think so.

Let’s do it.

I think let’s go.

Hopefully no one listens to this and steals the idea.

There’s no, see, that’s the other thing.

I think I’m a big person on,

execution is what matters.

I mean, it’s like ideas are kind of cheap.

The social robotics is a good example

that there’s been so many amazing companies

that went out of business.

I mean, to me it’s obvious, like it’s obvious

that there will be a robotics company

that puts a social robot in billions of homes.

Yeah.

And it’ll be a companion.

Okay, there you go.

You can steal that idea.

Do it.

Okay, I have a question for you.

It’s very tough.

What about Elon Musk’s humanoid?

Is he gonna execute on that?

There might be a lot to say.

So for people who are not aware,

there’s an Optimus, Tesla’s Optimus robot

that’s, I guess the stated reason for that robot

is a humanoid robot in the factory

that’s able to automate some of the tasks

that humans are currently doing.

And the reason you wanna do,

it’s the second reason you mentioned,

the second reason you wanna do a humanoid robot

is because the factory’s built for,

there’s certain tasks that are designed for humans.

So it’s hard to automate with any other form factor

than a humanoid.

And then the other reason is because so much effort

has been put into this giant data engine machine

of perception that’s inside Tesla autopilot

that’s seemingly, at least the machine, if not the data,

is transferable to the factory setting, to any setting.

Yeah, he said it would do anything that’s boring to us.

Yeah, yeah.

The interesting thing about that

is there’s no interest

and no discussion about the social aspect.

Like, I talked to him on mic and off mic about it

quite a bit.

And

there’s not a discussion about,

like, to me it’s obvious if a thing like that works

at all, at all.

In fact, it has to work really well in a factory.

If it works kind of shitty,

it’s much more useful in the home.

That’s true.

Because we’re much,

I think being shitty at stuff

is kind of what makes relationships great.

Like, you want to be flawed

and be able to communicate your flaws

and be unpredictable in certain ways.

Like, if you fell over every once in a while

for no reason whatsoever,

I think that’s essential for, like-

It’s very charming.

It’s charming, but also concerning.

And also, like, are you okay?

I mean, it’s both hilarious.

Whenever somebody you love falls down the stairs,

it’s both hilarious and concerning.

It’s some dance between the two.

And I think that’s essential for, like,

you almost want to engineer that in,

except you don’t have to

because robotics in the physical space

is really difficult.

So,

I think I’ve learned to not discount

the efforts that Elon does.

There’s a few things that are really interesting there.

One, because he’s taking it extremely seriously,

what I like is the humanoid form,

the cost of building a robot.

I talked to Jim Keller offline about this a lot.

And currently, humanoid robots cost a lot of money.

And the way they’re thinking about it,

now, they’re not talking about

all this social robotic stuff that you and I care about.

They are thinking,

how can we manufacture this thing cheaply

and do it, like, well?

And the kind of discussions they’re having

is really great engineering.

It’s like, first principles question of, like,

why is this cost so much?

Like, what’s the cheap way?

Why can’t we build?

And there’s not a good answer.

Why can’t we build this humanoid form for under $1,000?

And, like, I’ve sat and had these conversations.

There’s no reason.

I think the reason they’ve been so expensive

is because they were focused on trying to,

they weren’t focused on doing the mass manufacture.

People are focused on getting a thing that’s,

I don’t know exactly what the reasoning is,

but it’s the same, like, Waymo.

It’s like, let’s build a million-dollar car

in the beginning, or, like, multi-million-dollar car.

Let’s try to solve that problem.

The way Elon, the way Jim Keller,

the way some of those folks are thinking is,

let’s, like, at the same time,

try to actually build a system that’s cheap.

Not crappy, but cheap.

And, like, first principles,

what is the minimum amount of degrees of freedom we need?

What are the joints?

Where’s the control set?

Like, how many, how do we, like, where are the actuators?

What’s the way to power this

in the lowest cost way possible?

But also in a way that’s, like, actually works.

How do we make the whole thing in-house, not built from components

where there’s a supply chain,

where you have to have all these different parts

that have to be fed in?

Do it all from scratch, and do the learning.

I mean, it’s like, immediately,

certain things, like, become obvious.

Do the exact same pipeline as you do for autonomous driving,

just the exact, I mean,

the infrastructure there is incredible.

For the computer vision, for the manipulation task,

the control problem changes,

the perception problem changes,

but the pipeline doesn’t change.

And so I don’t,

obviously, the optimism about how long it’s gonna take,

I don’t share,

but it’s a really interesting problem,

and I don’t wanna say anything

because my first gut is to say that,

why the humanoid form?

That doesn’t make sense.

Yeah, that’s my second gut, too, but.

But then there’s a lot of people

that are really excited about the humanoid form there.

That’s true.

And it’s like, I don’t wanna get in the way,

like, they might solve this thing,

and they might, it’s like, similar with Boston Dynamics.

Like, why, like, if I were to,

you can be a hater, and you go up to Marc Raibert,

and just, like, how are you gonna make money

with these super expensive legged robots?

What’s your business plan?

This doesn’t make any sense.

Why are you doing these legged robots?

But at the same time, they’re pushing forward

the science, the art of robotics

in a way that nobody else does.

Yeah.

With Elon, they’re not just going to do that,

they’re gonna drive down the cost

to where we can have humanoid bots in the home, potentially.

So the part I agree with is,

a lot of people find it fascinating,

and it probably also attracts talent

who wanna work on humanoid robots.

I think it’s a fascinating scientific problem

and engineering problem,

and it can teach us more about human body

and locomotion and all of that.

I think there’s a lot to learn from it.

Where I get tripped up is why we need them

for anything other than art and entertainment

in the real world.

Like, I get that there are some areas

where you can’t just rebuild, like, a spaceship.

You can’t just, like, they’ve worked for so many years

on these spaceships, you can’t just re-engineer it.

You have some things that are just built for human bodies,

a submarine, a spaceship.

But a factory, maybe I’m naive,

but it seems like we’ve already rebuilt factories

to accommodate other types of robots.

Why would we want to just, like,

make a humanoid robot to go in there?

I just get really tripped up on,

I think that people want humanoids.

I think people are fascinated by them.

I think it’s a little overhyped.

Well, most of our world is still built for humanoids.

I know, but it shouldn’t be.

It should be built so that it’s wheelchair accessible.

Right, so the question is,

do you build a world that’s the general form

of wheelchair accessible,

all robot form factor accessible,

or do you build humanoid robots?

I mean, it doesn’t have to be all,

and it also doesn’t have to be either or.

I just feel like we’re thinking so little

about the system in general

and how to create infrastructure that works for everyone,

all kinds of people, all kinds of robots.

I mean, it’s more of an investment,

but that would pay off way more in the future

than just trying to cram expensive

or maybe slightly less expensive humanoid technology

into a human space.

Unfortunately, one company can’t do that.

We have to work together.

It’s like autonomous driving can be easily solved

if you do V2I, vehicle-to-infrastructure, if you change the infrastructure

of the cities and so on,

but that requires a lot of people.

A lot of them are politicians,

and a lot of them are somewhat, if not a lot, corrupt,

and all those kinds of things.

And the talent thing you mentioned

is really, really, really important.

I’ve gotten a chance to meet a lot of folks

at SpaceX and Tesla, other companies too,

but they’re specifically,

the openness makes it easier to meet everybody.

I think a lot of amazing things in this world happen

when you get amazing people together.

And if you can sell an idea,

like us becoming a multi-planetary species,

you can say, why the hell are we going to Mars?

Like, why colonize Mars?

If you think from basic first principles,

it doesn’t make any sense.

It doesn’t make any sense to go to the moon.

The only thing that makes sense

to go to space is for satellites.

But there’s something about the vision of the future,

the optimism laden that permeates this vision

of us becoming multi-planetary.

It’s thinking not just for the next 10 years,

it’s thinking like human civilization

reaching out into the stars.

It makes people dream.

It’s really exciting.

And that, they’re gonna come up with some cool shit

that might not have anything to do with like,

here’s what I,

because Elon doesn’t seem to care about social robotics,

which is constantly surprising to me.

Talk to him, he doesn’t,

humans are the things you avoid and don’t hurt, right?

Like that’s, like the number one job of a robot

is not to hurt a human, to avoid them.

You know, the collaborative aspect,

the human-robot interaction,

I think is not, at least not in his,

not something he thinks about deeply.

But my sense is if somebody like that

takes on the problem of humanoid robotics,

we’re gonna get a social robot out of it.

Like people like, not necessarily Elon,

but people like Elon,

if they take on seriously these,

like I can just imagine with a humanoid robot,

you can’t help but create a social robot.

So if you do different form factors,

if you do industrial robotics,

you’re likely to actually not end up

walking head-first into the social robot,

human-robot interaction problem.

If you create for whatever the hell reason you want to,

a humanoid robot, you’re gonna have to reinvent,

or not reinvent, but do,

introduce a lot of fascinating new ideas

into the problem of human-robot interaction,

which I’m excited about.

So like, if I was a business person,

I would say this is not, this is way too risky.

This doesn’t make any sense.

But when people are really convinced,

and there’s a lot of amazing people working on it,

it’s like, all right, let’s see what happens here.

This is really interesting.

Just like with Atlas and Boston Dynamics.

I mean, they, I apologize if I’m ignorant on this,

but I think they really, more than anyone else,

maybe along with Sony and AIBO,

pushed forward humanoid robotics,

like a leap with the Atlas robot.

Oh yeah, with Atlas, absolutely.

And like without them, like why the hell did they do it?

Why?

Well, I think for them, it is a research platform.

It’s not, I don’t think they ever, this is speculation.

I don’t think they ever intended Atlas

to be like a commercially successful robot.

I think they were just like, can we do this?

Let’s try.

Yeah, I wonder if they, maybe the answer they landed on is,

because they eventually went to Spot,

the earlier versions of Spot.

So a quadruped, like a four-legged robot,

but maybe they reached for, let’s try to make,

like I think they tried it and they still are trying it

for Atlas to be picking up boxes, to moving boxes,

to being, it makes sense.

Okay, if they were exactly the same cost,

it makes sense to have a humanoid robot in the warehouse.

Currently.

Currently.

I think it’s short-sighted, but yes.

Currently, yes, it would sell.

But it’s not, it’s short-sighted.

It’s short-sighted, but it’s not pragmatic

to think any other way,

to think that you’re gonna be able to change warehouses.

You’re gonna have to, you’re going to-

If you’re Amazon, you can totally change your warehouses.

Yes, yes.

But even if you’re Amazon,

that’s very costly to change warehouses.

It is, it’s a big investment.

But isn’t, shouldn’t you do that investment in a way,

so here’s the thing.

If you build a humanoid robot that works in the warehouse,

that humanoid robot, see, I don’t know why

Tesla is not talking about it this way, as far as I know,

but that humanoid robot is gonna have

all kinds of other applications outside their setting.

To me, it’s obvious.

I think it’s a really hard problem to solve,

but whoever solves the humanoid robot problem

are gonna have to solve the social robotics problem.

Oh, for sure, I mean, they’re already with Spot

needing to solve social robotics problems.

For Spot to be effective at scale.

I’m not sure if Spot is currently effective at scale.

It’s getting better and better.

But they’re actually, the thing they did,

it’s an interesting decision.

Perhaps Tesla will end up doing the same thing,

which is, Spot is supposed to be a platform for intelligence.

So Spot doesn’t have any high-level intelligence,

like high-level perception skills.

It’s supposed to be controlled remotely.

And it’s a platform that you can attach something to, yeah.

And somebody else is supposed to do the attaching.

It’s a platform that you can take an uneven ground

and it’s able to maintain balance,

go into dangerous situations, it’s a platform.

On top of that, you can add a camera that does surveillance,

that you can remotely monitor, you can record,

you can record the camera, you can remote control it,

but it’s not gonna-

Object manipulation.

It’s basic object manipulation,

but not autonomous object manipulation.

It’s remotely controlled.

But the intelligence on top of it,

which was what would be required for automation,

somebody else is supposed to do.
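
Boston Dynamics does publish a real SDK for Spot, but leaving its actual API aside, the platform-versus-intelligence split being described looks roughly like the conceptual sketch below, with all class names invented for illustration.

# Conceptual sketch of the "platform vs. intelligence" split.
# These classes are illustrative only; not the Boston Dynamics SDK.

class MobilityPlatform:
    """What the platform vendor ships: balance, locomotion, raw sensing."""

    def walk_to(self, x, y):
        print(f"platform: walking to ({x}, {y}), handling balance and terrain")

    def camera_frame(self):
        return b"...raw image bytes..."   # placeholder sensor data


class InspectionPayload:
    """What a third party adds on top: the actual autonomy and perception."""

    def __init__(self, platform):
        self.platform = platform

    def patrol(self, waypoints):
        for x, y in waypoints:
            self.platform.walk_to(x, y)
            frame = self.platform.camera_frame()
            self.analyze(frame)           # perception lives in this layer

    def analyze(self, frame):
        print(f"payload: analyzing {len(frame)} bytes of imagery")


robot = InspectionPayload(MobilityPlatform())
robot.patrol([(0, 0), (5, 2)])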

Perhaps Tesla will do the same thing ultimately.

But it doesn’t make sense

because the goal of Optimus is automation.

Without that,

but then you never know.

It’s like, why go to Mars?

Why?

I mean, that’s true.

And I reluctantly am very excited about space travel.

Because-

Why, can you introspect why?

Why am I excited about it?

I think what got me excited was

I saw a panel with some people who study other planets,

and it became really clear how little we know

about ourselves and about how nature works

and just how much there is to learn

from exploring other parts of the universe.

So on a rational level, that’s how I convince myself

that that’s why I’m excited.

In reality, it’s just fucking exciting.

I mean, just the idea that we can do this difficult thing

and that humans come together to build things

that can explore space.

I mean, there’s just something inherently thrilling

about that.

And I’m reluctant about it because I feel like

there are so many other challenges and problems

that I think are more important to solve.

But I also think we should be doing all of it at once.

And so to that extent, I’m like all for research

on humanoid robots, development of humanoid robots.

I think that there’s a lot to explore and learn,

and it doesn’t necessarily take away

from other areas of science.

At least it shouldn’t.

I think, unfortunately, a lot of the attention

goes towards that, and it does take resources

and attention away from other areas of robotics

that we should be focused on.

But I don’t think we shouldn’t do it.

So you think it might be a little bit of a distraction.

Oh, forget the Elon particular application,

but if you care about social robotics,

the humanoid form is a distraction.

It’s a distraction, and it’s one that I find

particularly boring.

It’s just, it’s interesting from a research perspective,

but from a what types of robots can we create

to put in our world?

Why would we just create a humanoid robot?

I just don’t get it.

So even just robotic manipulation,

so arms is not useful either.

Oh, arms can be useful, but why not have three arms?

Why does it have to look like a person?

Well, I actually personally just think

that a robot washing the dishes is a harder problem

than a robot that can be a companion.

Yeah.

Being useful in the home is actually really tough.

But does your companion have to have two arms

and look like you?

No, I’m making the case for zero arms.

Oh, okay, zero arms.

Yeah.

Okay, freaky.

That didn’t come out the way I meant it,

because it almost sounds like I don’t want a robot

to defend itself.

Oh, my God.

Like that’s immediately you project, you know what I mean?

Like, zero.

No, I think, I just think that the social component

doesn’t require arms or legs or so on, right?

That’s what we’ve talked about.

And I think that’s probably where a lot

of the meaningful impact that’s gonna be happening.

Yeah, I think just, we could get so creative

with the design, like why not have a robot on roller skates?

Or like whatever, like why does it have to look like us?

Yeah.

Still, it is a compelling and interesting form

from a research perspective, like you said.

Yeah.

You co-authored a paper, as you were talking about,

for WeRobot 2022, Lula Robot: Consumer Protection

in the Face of Automated Social Marketing.

I think you were talking about some of the ideas in that.

Yes.

Oh, you got it from Twitter.

I was like, that’s not published yet.

Yeah, this is how I do my research.

You just go through people’s Twitter feeds.

Yeah, go, thank you.

It’s not stalking if it’s public.

So there’s a, you looked at me like you’re offended,

like how did you know?

No, I was just like worried that like some early, I mean.

Yeah, there’s a PDF.

Does it?

There is.

There’s a PDF.

Like now?

Yeah.

Maybe like as of a few days ago.

Yeah.

Okay, well.

Yeah, yeah.

Okay.

You look violated, like how did you get that PDF?

It’s just a draft, it’s online.

Nobody read it yet until we’ve written the final paper.

Well, it’s really good, so I enjoyed it.

Oh, thank you.

By the time this comes out, I’m sure it’ll be out,

or no, when’s WeRobot?

So basically, WeRobot, that’s the workshop

where you have an hour where people give you

constructive feedback on the paper,

and then you write the good version.

Right, I take it back, there’s no PDF.

I don’t know why.

It doesn’t exist.

I imagine, but there is a table in there

in a virtual imagined PDF that I like,

that I wanted to mention, which is like this kind of

strategy used across various marketing platforms,

and it’s basically looking at traditional media,

person-to-person interaction, targeted ads,

influencers, and social robots.

This is the kind of idea that you’ve been speaking to,

and it’s just a nice breakdown of that,

that social robots have personalized recommendations,

social persuasion, automation at scale,

data collection, and embodiment.

So person-to-person interaction is really nice,

but it doesn’t have the automated

and the data collection aspect,

but the social robots have those two elements.

Yeah, we’re talking about the potential for social robots

to just combine all of these different marketing methods

to be this really potent cocktail.

And that table, which was Daniella’s idea

and a really fantastic one,

we put it in at the last second, so.

I’m glad you like it.

In a PDF that doesn’t exist,

that nobody can find if they look.

So when you say social robots, what does that mean?

Does that include virtual ones or no?

I think a lot of this applies to virtual ones, too,

although the embodiment thing,

which I personally find very fascinating,

is definitely a factor that research shows

can enhance people’s engagement with a device.

But can embodiment be a virtual thing also,

meaning like it has a body in the virtual world?

Maybe.

Makes you feel like,

because what makes a body?

A body is a thing that can disappear,

like has a permanence.

I mean, there’s certain characteristics

that you kind of associate to a physical object.

So I think what I’m referring to,

and I think this gets messy

because now we have all these new virtual worlds

and AR and stuff, and I think it gets messy,

but there’s research showing that something on a screen,

on a traditional screen,

and something that is moving in your physical space,

that that has a very different effect

on how your brain perceives it even.

So, I mean, I have a sense

that we can do that in a virtual world.

Probably.

Like when I’ve used VR, I jump around like an idiot

because I think something’s gonna hit me.

And even if a video game on a 2D screen

is compelling enough,

like the thing that’s immersive about it

is I kind of put myself into that world.

You kind of, those, the objects you’re interacting with,

Call of Duty, things you’re shooting,

they’re kind of, I mean, your imagination fills the gaps

and it becomes real.

Like it pulls your mind in when it’s well done.

So it really depends what’s shown on the 2D screen.

Yeah.

Yeah, I think there’s a ton of different factors

and there’s different types of embodiment.

Like you can have embodiment in a virtual world.

You can have an agent that’s simply text-based,

which has no embodiment.

So I think there’s a whole spectrum

of factors that can influence

how much you engage with something.

Yeah, I wonder, I always wondered if you can have like

an entity living in a computer.

Okay, this is gonna be dark.

I haven’t always wondered about this.

So this is gonna make it sound like

I keep thinking about this kind of stuff.

No, but like, this is almost like black mirror,

but the entity that’s convinced

or is able to convince you that it’s being tortured

inside the computer and needs your help to get out.

Something like this.

That becomes, to me, suffering is one of the things

that make you empathize with.

Like we’re not good at, as you’ve discussed in other,

in the physical form, like holding a robot upside down,

you have a really good examples about that

and discussing that.

I think suffering is a really good catalyst for empathy.

And I just feel like we can project embodiment

on a virtual thing if it’s capable

of certain things like suffering.

Yeah.

So I was wondering.

I think that’s true.

And I think that’s what happened with the LaMDA thing.

Not that, none of the transcript was about suffering,

but it was about having the capacity

for suffering and human emotion

that convinced the engineer that this thing was sentient.

And it’s basically the plot of Ex Machina.

True.

Have you ever made a robot scream in pain?

Have I?

No, but have you seen that someone,

oh yeah, no, they actually made a Roomba scream

whenever it hit a wall.

Yeah, I programmed that myself as well.

Yeah?

I was inspired by that, yeah.

It’s cool.

Do you still have it?

Oh, sorry.

Hit a wall.

I didn’t.

Whenever it bumped into something,

it would scream in pain.

No, so the way I programmed the Roombas

is when I kick it, whenever there’s

contact between me and the robot, that’s when it screams.

Really?

Okay.

And you were inspired by that?

Yeah, I guess I misremembered the video.

I saw the video a long, long time ago.

And, or maybe I heard somebody mention it

and it’s the easiest thing to program.

So I did that.

I haven’t run those Roombas for over a year now,

but yeah, my experience with it

was that they quickly become,

like you remember them, you miss them,

like they’re real living beings.

So the capacity to suffer is a really powerful thing.

Yeah.

Even then, I mean, it was kind of hilarious.

It was just a random recording of screaming

from the internet, but still, it’s weird.

There’s a thing you have to get right

based on the interaction, like the latency.

There is a realistic aspect of how you should scream

relative to when you get hurt.

Like it should correspond correctly.

Like if you kick it really hard, it should scream louder?

No, it should scream at the appropriate time, not like.

Oh, I see.

One second later, right?

Like there’s an exact timing,

like when you run your foot

into the side of a table or something,

there’s a timing there, dynamics you have to get right

for the actual screaming.

Because the Roomba in particular,

its sensors don’t,

it doesn’t know about pain.

See?

What?

I’m sorry to say, Roomba doesn’t understand pain.

So you have to correctly map the sensors,

the timing to the production of the sound.

But when you get that somewhat right,

it starts, it’s a weird, it’s a really weird feeling.

And you actually feel like a bad person.

Aw.

Yeah.
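
For the technically curious, here’s a minimal sketch of the kind of contact-to-scream mapping being described. The robot interface (`robot.bump_detected()`) and the sound player are hypothetical stand-ins, not a real Roomba API; a real implementation would read the bump sensors over the robot’s actual serial protocol.

```python
import time
import random

# A random pool of scream recordings, like the one described above.
# File names are made up for illustration.
SCREAM_FILES = ["scream1.wav", "scream2.wav", "scream3.wav"]

POLL_INTERVAL_S = 0.02  # poll ~50 times/second so the reaction feels instant
DEBOUNCE_S = 1.0        # at most one scream per contact event


def play_sound(path: str) -> None:
    # Placeholder audio backend; swap in pygame, simpleaudio, etc.
    print(f"[audio] playing {path}")


def scream_loop(robot) -> None:
    """Scream whenever the (hypothetical) robot reports contact.

    `robot.bump_detected()` is an assumed interface returning True
    while the bump/contact sensor is triggered.
    """
    last_scream = 0.0
    while True:
        now = time.time()
        # On contact, play a random scream, debounced so one sustained
        # bump doesn't trigger a barrage of screams.
        if robot.bump_detected() and now - last_scream > DEBOUNCE_S:
            play_sound(random.choice(SCREAM_FILES))
            last_scream = now
        time.sleep(POLL_INTERVAL_S)
```

The fast polling interval and the debounce are what make the scream feel causally tied to the kick, which is the timing problem Lex describes above.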

So it makes you think,

because that, with all the ways that we talked about,

could be used to manipulate you.

Oh, for sure.

In a good and bad way.

So the good way is like you can form a connection

with a thing, and a bad way that you can form a connection

in order to sell you products that you don’t want.

Yeah, or manipulate you politically,

or many nefarious things.

You tweeted, we’re about to be living in the movie Her,

except instead of, see, I researched your tweets,

like they’re like Shakespeare.

We’re about to be living in the movie Her,

except instead of about love, it’s gonna be about,

what did I say, the chatbot being subtly racist,

and the question whether it’s ethical

for companies to charge for software upgrades.

Yeah.

So can we break that down?

What do you mean by that?

Yeah.

Obviously, some of it is humor.

Yes, well, kind of.

I am, like, ah, it’s so weird to be in this space

where I’m so worried about this technology,

and also so excited about it at the same time.

But really, I had gotten a little bit jaded,

and then with GPT-3, and then the LaMDA transcript,

I was re-energized,

but have also been thinking a lot about,

you know, what are the ethical issues

that are gonna come up?

And I think some of the things that companies

are really gonna have to figure out is,

obviously, algorithmic bias is a huge

and known problem at this point.

Like, even, you know, the new image generation tools,

like DALL-E, where they’ve clearly put in a lot of effort

to make sure that if you search for people,

it gives you a diverse set of people, et cetera.

Like, even that one, people have already found numerous,

like, ways that it just kind of regurgitates biases

and things that it finds on the internet.

Like how, if you search for success,

it gives you a bunch of images of men.

If you search for sadness,

it gives you a bunch of images of women.

So I think that this is like the really tricky one

with these voice agents that companies

are gonna have to figure out.

And that’s why it’s subtly racist and not overtly,

because I think they’re gonna be able to solve

the overt thing, and then with the subtle stuff,

it’s gonna be really difficult.

And then I think the other thing is gonna be,

yeah, like people are gonna become so emotionally attached

to artificial agents with this complexity of language,

with a potential embodiment factor that,

I mean, there’s already, there’s a paper

at WeRobot this year written by roboticists

about how to deal with the fact that robots die

and looking at it as an ethical issue

because it impacts people.

And I think there’s gonna be way more issues

than just that, like I think the tweet

was software upgrades, right?

Like how much is it okay to charge for something like that

if someone is deeply, emotionally invested

in this relationship?

Oh, the ethics of that, that’s interesting.

But there’s also the practical funding mechanisms,

like you mentioned with AIBO, the dog.

In theory, there’s a subscription.

Yeah, the new AIBO, so the old AIBO from the 90s,

people got really attached to, and in Japan,

they’re still having, like, funerals at Buddhist temples

for the AIBOs that can’t be repaired

because people really viewed them as part of their families.

So we’re talking about robot dogs.

Robot dogs, the AIBO, yeah, the original,

like famous robot dog that Sony made,

came out in the 90s, got discontinued,

people had funerals for them in Japan.

Now they have a new one.

The new one is great, I have one at home.

It’s like-

It’s $3,000, how much is it?

I think it’s $3,000, and then after a few years,

you have to start paying, I think it’s like 300 a year

for a subscription service for cloud services.

And the cloud services,

I mean, it’s a lot,

the dog is more complex than the original,

and it has a lot of cool features,

and it can remember stuff and experiences,

and it can learn, and a lot of that

is outsourced to the cloud,

and so you have to pay to keep that running,

which makes sense.

People should pay, and people who aren’t using it

shouldn’t have to pay,

but it does raise the interesting question,

could you set that price to reflect

a consumer’s willingness to pay

for the emotional connection?

So if you know that people are really, really attached

to these things, just like they would be to a real dog,

could you just start charging more,

because there’s more demand?

Yeah, I mean,

but that’s true for anything that people love, right?

It is, and it’s also true for real dogs,

like there’s all these new medical services nowadays

where people will shell out thousands and thousands of

dollars to keep their pets alive,

and is that taking advantage of people,

or is that just giving them what they want?

That’s the question.

Back to marriage, what about all the money

that it costs to get married,

and then all the money that it costs to get a divorce?

That feels like a scam.

I think society is full of scams like that.

Oh, it’s such a scam, and then we’ve created,

the whole wedding industrial complex

has created all these quote-unquote traditions

that people buy into that aren’t even traditions,

like they’re just fabricated by marketing, it’s awful.

Let me ask you about racist robots.

Sure.

Is it up to the company that creates the robot?

So we talk about removing bias and so on,

and that’s a really popular field in AI currently,

and a lot of people agree that it’s an important field,

but the question is, for social robotics,

should it be up to the company

to remove the bias of society?

Well, who else can?

Oh, to remove the bias of society.

I guess because there’s a lot of people

that are subtly racist in modern society,

like why shouldn’t our robots also be subtly racist?

I mean, that’s like,

why do we put so much responsibility on the robots?

Because, because the robots.

I’m imagining like a Hitler Roomba.

I mean, that would be funny,

but I guess I’m asking a serious question.

You’re Jewish, right?

You’re allowed to make that joke.

Yes, exactly, I’m allowed to make that joke, yes.

And I’ve been nonstop reading about World War II and Hitler.

I think, I’m glad we exist in a world

where we can just make those jokes,

that helps deal with it.

Anyway, it is a serious question of sort of like,

like it’s such a difficult problem to solve.

Now, of course, like bias and so on,

like there’s low-hanging fruit,

which I think is what a lot of people are focused on,

but then it becomes like subtle stuff over time.

And it’s very difficult to know.

Now, you could also completely remove the personality,

completely remove the personalization.

You can remove the language aspect,

which is what I had been arguing,

because I was like,

the language is the disappointing aspect

of social robots anyway.

But now we’re reintroducing that

because it’s now no longer disappointing.

So I do think, well, let’s just start with the premise,

which I think is very true,

which is that racism is not a neutral thing,

it is a thing that we don’t want in our society.

Like it does not conform to my values.

So if we agree that racism is bad,

I do think that it has to be the company,

because the problem, I mean, it might not be possible.

And companies might have to put out products

where they’re taking risks

and they might get slammed by consumers

and they might have to adjust.

I don’t know how this is gonna work in the market.

I have opinions about how it should work,

but it is on the company.

And the danger with robots

is that they can entrench this stuff.

It’s not like your racist uncle

who you can have a conversation with

and-

And put things into context maybe.

Yeah, or who might change over time with more experience.

A robot really just like regurgitates things,

entrenches them, could influence other people.

And I mean, I think that’s terrible.

Well, I think there’s a difficult challenge here

is because even the premise you started with

that essentially racism is bad.

I think we live in a society today

where the definition of racism

is differing between different people.

Some people say that it’s not enough not to be racist.

Some people say you have to be anti-racist.

So you have to have a robot that constantly calls out,

like calls you out on your implicit racism.

I would love that.

I would love that robot.

But like, maybe it sees racism where there isn’t any.

Well, I don’t know if you’d love it,

because maybe it’ll see racism

in things that aren’t racist.

And then you’re arguing with a robot.

Your robot starts calling you racist.

I’m not exactly sure that, I mean, it’s a tricky thing.

I guess I’m saying that the line is not obvious,

especially in this heated discussion

where we have a lot of identity politics

of what is harmful to different groups and so on.

Yeah.

It feels like the broader question here is,

should a social robotics company be solving

or being part of solving the issues of society?

Well, okay, I think it’s the same question as,

should I, as an individual, be responsible

for knowing everything in advance

and saying all the right things?

And the answer to that is,

yes, I am responsible,

but I’m not gonna get it perfect.

And then the question is, how do we deal with that?

And so as a person, how I aspire to deal with that is,

when I do inevitably make a mistake

because I have blind spots and people get angry,

I don’t take that personally.

And I listen to what’s behind the anger.

And it can even happen that like,

maybe I’ll tweet something that’s well-intentioned

and one group of people starts yelling at me

and then I change it in the way that they said,

and then another group of people starts yelling at me,

which has happened, this happened to me actually around,

in my talks, I talk about robots

that are used in autism therapy.

And so whether to say a child with autism

or an autistic child is super controversial.

And a lot of autistic people prefer to be referred to

as autistic people.

And a lot of parents of autistic children

prefer child with autism.

And then there’s, they disagree.

So I’ve gotten yelled at from both sides.

And I think I’m still, I’m responsible

even if I can’t get it right.

I don’t know if that makes sense.

Like it’s a responsibility thing.

And I can be as well-intentioned as I want

and I’m still gonna make mistakes.

And that is part of the power structures that exist.

And that’s something that I accept.

And you accept being attacked from both sides

and grow from it and learn from it.

But the danger is that after being attacked,

assuming you don’t get canceled,

AKA completely removed from your ability to tweet,

you might become jaded

and not want to talk about autism anymore.

I don’t, and I didn’t.

I mean, it’s happened to me.

What I did was I listened to both sides and I chose,

I tried to get information.

And then I decided that I was going to use the term autistic children.

And now I’m moving forward with that.

Like, I don’t know.

For now, right.

For now, yeah, until I get updated information

and I’m never gonna get anything perfect,

but I’m making choices and I’m moving forward

because being a coward and like just retreating from that,

I think.

But see, here’s the problem.

You’re a very smart person and an individual,

a researcher, a thinker, an intellectual.

So that’s the right thing for you to do.

The hard thing is when, as a company,

imagine you had a PR team

that said, Kate, like, this is what you should-

We hate-

Yeah, I mean, just, well, if you were,

if you hired PR people, like obviously they would see that.

And they’d be like, well, maybe don’t bring up autism.

Maybe don’t bring up these topics.

You’re getting attacked.

It’s bad for your brand.

They’ll say the brand word.

There’ll be, if we look at different demographics

that are inspired by your work,

I think it’s insensitive to them.

Let’s not mention this anymore.

Like there’s this kind of pressure

that all of a sudden you,

or you make suboptimal decisions.

You take a kind of poll.

Again, it’s looking at the past versus the future,

all those kinds of things.

And it becomes difficult.

In the same way that it’s difficult

for social media companies to figure out

like who to censor, who to recommend.

I think this is ultimately a question about leadership,

honestly, like the way that I see leadership.

Because right now,

the thing that bothers me about institutions

and a lot of people who run current institutions

is that their main focus is protecting the institution

and protecting themselves personally.

That is bad leadership

because it means you cannot have integrity.

You cannot lead with integrity.

And it makes sense because like,

obviously if you’re the type of leader

who immediately blows up the institution you’re leading,

then that doesn’t exist anymore.

And maybe that’s why we don’t have any good leaders anymore

because they had integrity

and they didn’t put the survival of the institution first.

But I feel like you have to,

just to be a good leader,

you have to be responsible

and understand that with great power

comes great responsibility.

You have to be humble and you have to listen

and you have to learn.

You can’t get defensive

and you cannot put your own protection before other things.

Yeah, take risks where you might lose your job,

you might lose your wellbeing,

in the process of standing for the principles,

for the things you think are right to do.

Yeah, based on listening to people

and learning from what they feel.

And the same goes for the institution, yeah.

Yeah, but I ultimately actually believe

that those kinds of companies and countries succeed

that have leaders like that.

You should run for president.

No, thank you.

Yeah.

That’s maybe the problem.

Like the people who have good ideas about leadership,

they’re like, yeah.

No.

This is why I don’t,

this is why I’m not running a company.

It’s been, I think, three years

since the Jeffrey Epstein controversy at MIT,

MIT Media Lab.

Joey Ito, the head of the Media Lab, resigned.

And I think at that time

you wrote an opinion article about it.

So just looking back a few years have passed,

what have you learned about human nature

from the fact that somebody like Jeffrey Epstein

found his way inside MIT?

That’s a really good question.

What have I learned about human nature?

I think,

well, there’s how did this problem come about?

And then there’s what was the reaction to this problem

and to it becoming public?

And in the reaction,

the things I learned about human nature

were that sometimes cowards are worse than assholes.

Wow, I’m really, ugh.

I mean, that’s a really powerful statement.

I think because the assholes,

at least you know what you’re dealing with.

They have integrity in a way.

They’re just living out their asshole values.

And the cowards are the ones that you have to watch out for.

And this comes back to people protecting themselves

over doing the right thing.

They’ll throw others under the bus.

Is there some sense

that not enough people took responsibility?

For sure.

And I mean, I don’t wanna sugarcoat at all

what Joey Ito did.

I mean, I think it’s gross

that he took money from Jeffrey Epstein.

I believe him that he didn’t know about the bad, bad stuff,

but I’ve been in those circles

with those like public intellectual dudes

that he was hanging out with.

And any woman in those circles

saw 10 zillion red flags.

Just the whole environment was so misogynist.

And so personally, because Joey was a great boss

and a great friend,

I was really disappointed that he ignored that

in favor of raising money.

And I think that it was right for him to resign

in the face of that.

But one of the things that he did

that many others didn’t was he came forward about it

and he took responsibility.

And all of the people who didn’t, I think,

it’s just interesting.

The other thing I learned about human nature,

okay, I’m gonna go on a tangent,

but I’ll come back, I promise.

So I once saw this tweet from someone

or it was a Twitter thread

from someone who worked at a homeless shelter.

And he said that when he started working there,

he noticed that people would often come in

and use the bathroom

and they would just trash the entire bathroom,

like rip things out of the walls,

like toilet paper on the ground.

And he asked someone who had been there longer,

like, why do they do this?

Why do the homeless people come in and trash the bathroom?

And he was told it’s because it’s the only thing

in their lives that they have control over.

And I feel like sometimes when it comes to the response,

just the mobbing response that happens

in the wake of some harm that was caused,

if you can’t target the person

who actually caused the harm who was Epstein,

you will go as many circles out as you can

until you find the person that you have power over

and you have control over, and then you will trash that.

And it makes sense that people do this.

It’s again, it’s a human nature thing.

Of course, you’re gonna focus all your energy

because you feel helpless and enraged

and it’s unfair and you have no other power.

You’re gonna focus all of your energy

on someone who’s so far removed from the problem

that that’s not even an efficient solution.

The problem is often the first person you find

is the one that has integrity,

sufficient integrity to take responsibility.

Yeah, and it’s why my husband always says,

he’s a liberal, but he’s always like,

when liberals form a firing squad, they stand in a circle.

Because you know that your friends

are gonna listen to you, so you criticize them.

You’re not gonna be able

to convince someone across the aisle.

But see, in that situation,

or any situation of that sort,

what I had hoped is that the people

farther out in the circles

stand up and also take some responsibility,

for the broader picture of human nature

versus the specific situation,

but also defend the people involved

as flawed, not in a, like, no, no, no, nothing happened way,

like, people fucked up.

Like you said, there’s a lot of red flags

that people just ignored for the sake of money

in this particular case.

But also like be transparent and public about it

and spread the responsibility

across a large number of people

such that you learn a lesson from it institutionally.

Yeah, it was a systems problem.

It wasn’t a one individual problem.

And I feel like currently,

because Joey resigned because of it,

or was essentially fired, pressured out because of it,

MIT can pretend like, oh, we didn’t know anything.

But it wasn’t part-

Bad leadership, again,

because when you are at the top of an institution

with that much power and you were complicit

in what happened, which they were,

like, come on, there’s no way that they didn’t know

that this was happening.

So like to not stand up and take responsibility,

I think it’s bad leadership.

Do you understand why Epstein,

outside of MIT, was able to make a lot of friends

with a lot of powerful people?

Does that make sense to you?

Why was he able to get in these rooms,

befriend these people,

befriend people that I don’t know personally,

but I think a lot of them indirectly I know

as being good people, smart people.

Why would they let Jeffrey Epstein into their office,

have a discussion with them?

What do you understand about human nature from that?

Well, so I never met Epstein or,

I mean, I’ve met some of the people who interacted with him,

but I was never, like, I never saw him in action.

I don’t know how charismatic he was or what that was,

but I do think that sometimes the simple answer

is the more likely one.

And from my understanding, what he would do is,

he was kind of a social grifter,

like, you know those people who,

you must get this because you’re famous,

you must get people coming to you and being like,

oh, I know your friend so-and-so,

in order to get cred with you.

I think he just convinced some people

who were trusted in a network

that he was a great guy and that, you know, whatever.

I think at that point,

because at that point he had had, like,

what, a prior conviction, but it was a one-off thing.

It wasn’t clear that there was this whole other thing.

And most people probably don’t check.

Yeah, and most people don’t check.

Like, you’re at an event, you meet this guy.

I don’t know, maybe people do check

when they’re that powerful and wealthy,

or maybe they don’t.

I have no idea.

No, they’re just stupid.

I mean, and they’re not, like,

all right, does anyone check anything about me?

Because I’ve walked into some of the richest,

some of the most powerful people in the world,

and nobody, like, asks questions,

like, who the fuck is this guy?

Like-

Yeah.

Like, nobody asks those questions.

It’s interesting.

I would think, like,

there would be more security or something.

Like, there really isn’t.

I think a lot of it has to do,

well, my hope is, in my case,

it has to do with, like,

people can sense that this is a good person.

But if that’s the case, then surely

a human being can use charisma to infiltrate.

Yeah.

Just being, just saying the right thing.

And once you have people vouching for you

within that type of network,

like, once you, yeah,

once you have someone powerful vouching for you

who someone else trusts,

then, you know, you’re in.

So how do you avoid something like that?

If you’re MIT, if you’re Harvard,

if you’re any of these institutions?

Well, I mean, first of all,

you have to do your homework

before you take money from someone.

Like, I think that’s required.

But I think, you know, I think Joey did do his homework.

I think he did.

And I think at the time that he took money,

there was the one conviction and not, like, the later thing.

And I think that the story at that time was that

he didn’t know she was underage and blah, blah, blah,

or whatever, it was a mistake.

And Joey always believed in redemption for people,

and that people can change,

and that they can genuinely regret,

and, like, learn and move on.

And he was a big believer in that.

So I could totally see him being like,

well, I’m not gonna exclude him because of this thing,

and because other people are vouching for him.

Just to be clear,

we’re now talking about the set of people

who I think Joey belonged to

who did not, like, go to the island

and have sex with underage girls,

because that’s a whole other set of people

who, like, were powerful,

and, like, were part of that network,

and who knew and participated.

And so, like, I distinguish between

people who got taken in,

who didn’t know that that was happening,

and people who knew.

I wonder what the different circles look like.

So, like, people that went to the island

and didn’t do anything, didn’t see anything,

didn’t know about anything,

versus the people that did something.

And then there’s people who heard rumors, maybe.

And what do you do with rumors?

Like, weren’t there people that heard rumors

about Bill Cosby for the longest time?

Whenever that happened,

all these people came out of the woodwork,

like, everybody kind of knew.

I mean, it’s like, all right,

so what are you supposed to do with rumors?

I think the other way to put it is red flags,

as you were saying.

Yeah, and, like, I can tell you that those circles,

like, there were red flags

without me even hearing any rumors about anything ever.

Like, I was already, like,

there are not a lot of women here, which is a bad sign.

Aren’t there a lot of places where there aren’t a lot of women,

and it doesn’t necessarily mean it’s a bad sign?

There are, if it’s, like, a pipeline problem,

like, I don’t know, a technology law clinic

that only gets male lawyers

because there aren’t a lot of women

applicants in the pool.

But there’s some aspect of this situation

that, like, there should be more women here.

Oh, yeah.

Yeah.

Actually, I’d love to ask you about this,

because you have strong opinions about Richard Stallman.

Do you still have those strong opinions?

Look, all I need to say is that he met my friend,

who’s a law professor.

Yeah.

She shook his hand, and he licked her arm

from wrist to elbow,

and it certainly wasn’t appropriate at that time.

What about if you’re, like, an incredibly weird person?

Okay, that’s a good question,

because, obviously, there’s a lot of neurodivergence

at MIT and everywhere, and, obviously,

like, we need to accept that people are different,

that people don’t understand social conventions

the same way.

But one of the things that I’ve learned

about neurodivergence is that women are often

expected or taught to mask their neurodivergence

and kind of fit in,

and men are accommodated and excused.

And I don’t think that being neurodivergent

gives you a license to be an asshole.

You can be a weird person, and you can still learn

that it’s not okay to lick someone’s arm.

Yeah, it’s a balance.

Like, women should be allowed to be a little weirder,

and men should be less weird.

Because you’re one of the people, I think,

tweeting about that, which is what made me think of it,

because I wanted to talk to Richard Stallman

on the podcast, and I didn’t have the context.

I wanted to talk to him

because he’s, you know, free software.

He’s very weird in interesting, good ways

in the world of computer science.

He’s also weird in that, you know, when he gives a talk,

he would be like picking at his feet

and eating the skin off his feet, right?

He’s known for these extremely kind of,

how else do you put it?

I don’t know how to put it.

But then there was something that happened to him

in conversations on this thread related to Epstein.

Which I was torn about,

because I felt it was similar to Joey Ito’s.

Like, I felt he was maligned.

Like, people were looking for somebody to get angry at.

So he was inappropriate,

but I disliked the cowardice more.

Like, I set aside his situation, and we could discuss it.

But the cowardice on MIT’s part, and this is me saying it,

about the way they treated that whole situation.

Oh, they’re always cowards about how they treat anything.

They just try to make the problem go away.

Yeah, so it was about, yeah, exactly,

making the conversation go away.

That’s why I think he should have left the mailing list.

He shouldn’t have been part of the mailing list.

Well, that’s probably true also.

But I think what bothered me,

what always bothers me in these mailing list situations,

or Twitter situations, like,

if you say something that’s hurtful to people,

or makes people angry,

and then people start yelling at you.

Maybe they shouldn’t be yelling.

Maybe they are yelling because, again,

you’re the only point of power they have.

Maybe it’s okay that you’re yelling.

Whatever it is, like,

it’s your response to that that matters.

And I think that I just have a lot of respect for people

who can say, oh, people are angry.

There’s a reason they’re angry.

Let me find out what that reason is

and learn more about it.

It doesn’t mean that I’m wrong.

It doesn’t mean that I’m bad.

It doesn’t mean that I’m ill-intentioned.

But why are they angry?

I wanna understand.

And then, once you understand,

you can respond, again, with integrity,

and say, actually, I stand by what I said.

Here’s why.

Or you can say, actually, I listened,

and here are some things I learned.

That’s the kind of response I wanna see from people.

And people like Stallman do not respond that way.

They just, like, go into battle.

Right.

Like, where it’s obvious you didn’t listen.

Yeah.

No interest in listening.

Honestly, that’s, to me,

as bad as the people who just apologize

just because they are trying to make the problem go away.

Of course.

Right.

So, like, if you-

That’s not a-

It’s, like, both are bad.

A good apology has to include

understanding what you did wrong.

And, in part, standing up for the things

you think you did right.

So, like-

Yeah, if there are those things, yeah.

But you have to acknowledge,

you have to, like, give that hard hit to the ego

that says I did something wrong.

Yeah, definitely Richard Stallman is not somebody

who’s capable of that kind of thing,

or hasn’t given evidence of that kind of thing.

But that was also, even just your tweet,

I had to do a lot of thinking, like,

different people from different walks of life

see red flags in different things.

Yeah.

And so, things I find,

as a man, non-threatening and hilarious

are not necessarily that for others,

it doesn’t mean that they aren’t, like,

deeply hurtful to others.

And I don’t mean that in a social justice warrior way,

but in a real way,

like, people really have different experiences.

So, I have to, like, really put things into context.

I have to kind of listen to what people are saying,

put aside the emotion with which they’re saying it,

and try to keep the facts of their experience,

and learn from it.

And because it’s not just about

the individual experience, either.

It’s not like, oh, you know,

my friend didn’t have a sense of humor about being licked.

It’s that she’s been metaphorically licked,

you know, 57 times that week,

because she’s an attractive law professor,

and she doesn’t get taken seriously.

And so, like, men walk through the world,

and it’s impossible for them to even understand

what it’s like to have a different experience of the world.

And that’s why it’s so important to listen to people,

and believe people,

and believe that they’re angry for a reason.

Maybe you don’t like their tone.

Maybe you don’t like that they’re angry at you.

Maybe you get defensive about that.

Maybe you think that they should, you know,

explain it to you.

But believe that they’re angry for a reason,

and try to understand it.

Yeah, there’s a deep truth there,

and an opportunity for you to become a better person.

Can I ask you a question?

Haven’t you been doing that for two hours?

Three hours now.

Let me ask you about Ghislaine Maxwell.

She’s been saying that she’s an innocent victim.

Is she an innocent victim,

or is she evil and equally responsible,

like Jeffrey Epstein?

Now I’m asking far away from any MIT things,

and more just your sense of the whole situation.

I haven’t been following it,

so I don’t know the facts of the situation,

and what is now known to be her role in that.

If I were her, clearly I’m not.

But if I were her, I wouldn’t be going around

saying I’m an innocent victim.

I would say,

maybe she’s, I don’t know what she’s saying.

Again, I don’t know.

She was controlled by Jeffrey.

Is she saying this as part of a legal case,

or is she saying this as a PR thing?

Well,

PR.

But it’s not just her.

It’s her whole family believes this.

There’s a whole effort that says like,

how should I put it?

I believe they believe it.

So in that sense, it’s not PR.

I don’t know.

I believe the family,

basically the family is saying that

she’s a really good human being.

Well, I think everyone is a good human being.

I know that’s a controversial opinion,

but I think everyone

is a good human being.

There’s no evil people.

There’s people who do bad things

and who behave in ways that harm others.

And I think we should always hold people

accountable for that.

But holding someone accountable doesn’t mean

saying that they’re evil.

Yeah, actually those people usually think

they’re doing good.

Yeah, I mean, aside from maybe sociopaths,

who are specifically trying to harm people.

But I think most people are trying to do their best.

And if they’re not doing their best,

it’s because there’s some impediment

or something in their past.

So I genuinely don’t believe in good and evil people,

but I do believe in harmful and not harmful actions.

And so I don’t know.

I don’t care.

Yeah, she’s a good person.

But if she contributed to harm,

then she needs to be accountable for that.

That’s my position.

I don’t know what the facts of the matter are.

It seems like she was pretty close to the situation,

so it doesn’t seem very believable that she was a victim,

but I don’t know.

I wish I had met Epstein,

because something tells me he would just be

a regular person, a charismatic person,

like anybody else.

And that’s a very dark reality,

that we don’t know which among us,

what each of us is hiding in the closet.

That’s a really tough thing to deal with,

because then you can put your trust into some people

and they can completely betray that trust

and in the process destroy you.

Yeah.

And there’s a lot of people

that interacted with Epstein that now have to,

I mean, if they’re not destroyed by it,

then the ground on which they stand ethically

has crumbled, at least in part.

And I’m sure you and I have interacted with people

without knowing it who are bad people.

As I always tell my four-year-old,

people who have done bad things.

People who have done bad things.

He’s always talking about bad guys

and I’m trying to move him towards,

they’re just people who make bad choices.

Yeah.

That’s really powerful, actually.

That’s really important to remember,

because that means you have compassion

towards all human beings.

Do you have hope for the future of MIT,

the future of the Media Lab, in this context?

So Dava Newman is now at the helm.

I talked to her previously, I’ll talk to her again.

She’s great.

I love her.

Yeah, she’s great.

I don’t know if she knew the whole situation

when she started, because the situation went beyond

just the Epstein scandal.

A bunch of other stuff happened at the same time.

Some of it’s not public,

but I can talk about what I was personally going through at that time.

So the Epstein thing happened, I think,

was it August or September 2019?

It was somewhere around late summer.

So I’m a research scientist at MIT.

You are too, right?

So, and I always have had various supervisors

over the years, and they’ve just basically

let me do what I want, which has been great.

But I had a supervisor at the time,

and he called me into his office for a regular check-in.

In June of 2019, I reported to MIT

that my supervisor had grabbed me,

pulled me into a hug, wrapped his arms around my waist,

and started massaging my hip and trying to kiss me,

kissed my face, kissed me near the mouth,

and said literally the words,

don’t worry, I’ll take care of your career.

And that experience was really interesting

because I just, I was very indignant.

I was like, he can’t do that to me.

Doesn’t he know who I am?

And I was like, this is the Me Too era.

And I naively thought that when I reported that,

it would get taken care of.

And then I had to go through the whole reporting process

at MIT, and I learned a lot about how institutions

really handle those things internally,

particularly situations where I couldn’t provide evidence

that it happened.

I had no reason to lie about it, but I had no evidence.

And so I was going through that,

and that was another experience for me

where there’s so many people in the institution

who really believe in protecting the institution

at all costs.

And there’s only a few people who care

about doing the right thing.

And one of them resigned.

Now there are even fewer of them left, so.

So what’d you learn from that?

I mean, where’s the source, if you have hope

for this institution that I think you love,

at least in part.

I love the idea of MIT.

I love the idea, I love the research body,

I love a lot of the faculty, I love the students.

I love the energy, like I love it all.

I think the administration suffers from the same problems

as the leadership of any large institution,

which is that they’ve become risk-averse, like you mentioned.

They care about PR.

The only ways to get their attention

or change their minds about anything

are to threaten the reputation of the institute

or to have a lot of money.

That’s the only way to have power at the institute.

Yeah, I don’t think they have a lot of integrity

or believe in ideas or even have a lot of connection

to the research body and the people who are really,

because it’s so weird.

You have this amazing research body of people

pushing the boundaries of things

who aren’t afraid to, there’s the hacker culture,

and then you have the administration

and they’re really like,

protect the institution at all costs.

Yeah, there’s a disconnect, right?

Complete disconnect.

I wonder if that was always there,

if it just kind of slowly grows over time,

a disconnect between the administration and the faculty.

I think it grew over time is what I’ve heard.

I mean, I’ve been there for 11 years now.

I don’t know if it’s gotten worse during my time,

but I’ve heard from people who’ve been there longer

that it didn’t, like MIT didn’t used to have

a general counsel’s office.

They didn’t used to have all of this corporate stuff.

And then they had to create it as they got bigger

and in the era where such things are,

I guess, deemed necessary.

See, I believe in the power of individuals

to overthrow the thing.

So just a really good president of MIT

or certain people in the administration

can reform the whole thing.

Because the culture is still there of like,

I think everybody remembers that MIT

is about the students and the faculty.

Do they though?

Because I don’t know, I’ve had a lot of conversations

that have been shocking with like senior administration.

They think the students are children.

They call them kids.

It’s like, these are the smartest people.

They’re way smarter than you.

And you’re so dismissive of that.

But those individuals, I’m saying like the capacity,

like the aura of the place still values the students

and the faculty.

Like, I’m being awfully poetic about it,

but what I mean is the administration

is the froth at the top of the waves, the surface.

Like they can be removed

and new life can be brought in

that would keep to the spirit of the place.

Who decides on who to bring in?

Who’s hiring?

It’s bottom up.

Oh, I see.

I see.

But I do think ultimately,

especially in the era of social media and so on,

faculty and students have more and more power.

Just more and more of a voice, I suppose.

I hope so.

I really do.

I don’t see MIT going away anytime soon.

And I also don’t think it’s a terrible place at all.

Yeah, it’s an amazing place.

But there’s different trajectories it can take.

Yeah.

And that has to do with a lot of things.

Even if we just talk about robotics,

it could be the capital of the world in robotics.

But currently, if you wanna be doing the best AI work

in the world, you’re gonna go to Google

or Facebook or Tesla or Apple or so on.

You’re not gonna be at MIT.

And so that has to do,

I think that basically has to do

with not allowing the brilliance

of the researchers to flourish.

Yeah, people say it’s about money,

but I don’t think it’s about that at all.

Like, sometimes you have more freedom

and can work on more interesting things in companies.

That’s really where they lose people.

Yeah, and the freedom in all ways,

which is why it’s heartbreaking to get people

like Richard Stallman, there’s such an interesting line,

because Richard Stallman’s a gigantic weirdo

that crossed lines he shouldn’t have crossed, right?

But we don’t wanna draw too many lines.

This is the tricky thing.

There are different types of lines, in my opinion.

But yes, in your opinion, you have strong lines you hold to,

but then if the administration listens to every line,

there’s also power in drawing a line,

and it becomes like a little drug.

You have to find the right balance.

Licking somebody’s arm is never appropriate.

I think the biggest aspect there is not owning it,

not learning from it, not growing from it,

from the perspective of Stallman or people like that,

back when it happened, like understanding,

being empathetic, seeing the fact

that this was totally inappropriate.

Not just that particular act,

but everything that led up to it, too.

No, I think there are different kinds of lines.

I think there are…

So Stallman crossed lines that essentially excluded

a bunch of people and created an environment

where there are brilliant minds

that we never got the benefit of

because he made things feel gross

or even unsafe for people.

There are lines that you can cross

where you’re challenging an institution to…

Like, I don’t think he was intentionally

trying to cross a line, or maybe he didn’t care.

There are lines that you can cross intentionally

to move something forward or to do the right thing.

Like when MIT was like,

you can’t put an all-gender restroom in the media lab

because of some permit thing or whatever,

and Joey did it anyway.

That’s a line you can cross

to make things actually better for people.

And the line you’re crossing is some arbitrary, stupid rule

that people who don’t wanna take the risk are like…

Yeah, for sure.

You know what I mean?

No, ultimately, I think the thing you said

is cross lines in a way that doesn’t alienate others.

So for example, for a while I started

wearing a suit often at MIT,

which sounds counterintuitive,

but that’s actually…

People always looked at me weird for that.

MIT created this culture,

specifically the people I was working with.

Nobody wore suits.

Maybe the business school does.

Yeah, we don’t trust the suits.

We don’t trust the suits.

I was like, fuck you, I’m wearing a suit.

Nice.

But that’s not really hurting anybody, right?

Exactly.

It’s challenging people’s perceptions.

It’s doing something that you wanna do,

but it’s not hurting people.

And that particular thing was, yeah, it was hurting people.

It’s a good line.

It’s a good line.

Hurting ultimately the people that you want to flourish.

You tweeted a picture of pumpkin spice Greek yogurt

and asked, grounds for divorce?

Yes, no.

So let me ask you,

what’s the key to a successful relationship?

Oh my God, a good couple therapist?

What went wrong with the pumpkin spice Greek yogurt?

What’s exactly wrong?

Is it the pumpkin?

Is it the Greek yogurt?

I didn’t understand.

I stared at that tweet for a while.

I grew up in Europe,

so I don’t understand the pumpkin spice in everything craze

that they do every autumn here.

I understand that it might be good in some foods,

but they just put it in everything.

And it doesn’t belong in Greek yogurt.

I mean, I was just being humorous.

I ate one of those yogurts

and it actually tasted pretty good.

Yeah, exactly.

I think part of the success of a good marriage

is giving each other a hard time humorously

for things like that.

Is there a broader lesson?

Because you guys seem to have a really great marriage

from the external.

I mean, every marriage looks good from the external.

That’s not true, but yeah.

Okay, that’s not true.

No, but like relationships are hard.

Relationships with anyone are hard.

And especially because people evolve and change

and you have to make sure there’s space

for both people to evolve and change together.

And I think one of the things

that I really liked about our marriage vows was,

I remember before we got married,

Greg, at some point, got kind of nervous

and he was like, it’s such a big commitment

to commit to something for life.

And I was like, we’re not committing to this for life.

And he was like, we’re not?

And I’m like, no, we’re committing to being part of a team

and doing what’s best for the team.

If what’s best for the team is to break up, we’ll break up.

Like, I don’t believe in this.

Like, we have to do this for our whole lives.

And that really resonated with him too.

So yeah.

Did you put in the vows?

Yeah, yeah, that was our vows.

Like that we’re just, we’re gonna be a team.

You’re a team and do what’s right for the team.

Yeah, yeah.

That’s a very Michael Jordan view.

Did you guys get, like, married in the desert,

November Rain style, with Slash playing?

Or, you don’t have to answer that.

I’m not good at these questions.

Okay.

You’ve brought up marriage like eight times.

Are you trying to hint something on the podcast?

I don’t, yeah, I have an announcement to make.

No, I don’t know.

It just felt like a good metaphor,

in a bunch of cases, for the marriage industrial complex.

I remember that.

And, oh, people complaining.

It just seemed like marriage is one of the things

that always surprises me.

Because I want to get married.

You do?

Yeah, I do.

And then I listen to friends of mine that complain.

Not all of them, I really like guys

that don’t complain about their marriage.

It’s such a cheap release valve.

That’s true of bitching about anything, honestly,

it’s just too easy.

Like, bitch about the sports team

or the weather if you want.

But about somebody that you’re dedicating

your life to, if you bitch about them,

you’re going to see them as a lesser being also.

You don’t think so,

but you’re going to decrease the value you see in them.

I personally believe, over time,

you’re not going to appreciate the magic of that person.

I think, anyway.

But I just notice this a lot,

that people are married and they will whine about,

you know, the wife, whatever.

It’s part of the culture

to kind of comment in that way.

I think women do the same thing about the husband.

He doesn’t, he never does this, or he’s a goof.

He’s incompetent at this or that, whatever.

There’s a kind of-

Yeah, there are those tropes like,

oh, you know, husbands never do X,

and wives are always Y. I think those do a disservice

to everyone, it’s just disrespectful to everyone involved.

Yeah, but it happens.

So I brought that up as an example

of something that people actually love

but complain about, because for some reason

it’s more fun to complain about stuff.

And that’s the thing with Clippy or whatever, right?

You complain about it, but you actually love it.

There you go, it’s just a good metaphor.

What was I gonna ask you?

Oh, your hamster died.

When I was like eight.

Eight, you miss her?

Beige.

What’s the closest relationship you’ve had with a pet?

Not the one?

What?

What pet or robot have you loved the most in your life?

I think my first pet was a goldfish named Bob,

and he died immediately, and that was really sad.

I think I was really attached to Bob and Nancy,

my goldfish.

We got new Bobs, and then Bob kept dying,

and we got new Bobs, and Nancy just kept living.

So he was very replaceable.

Yeah, I was young.

It was easy to.

Do you think there will be a time when the robot,

like in the movie Her,

be something we fall in love with romantically?

Oh yeah, oh for sure, yeah.

At scale, like we’re a lot of people.

Romantically, I don’t know if it’s gonna happen at scale.

I think we talked about this a little bit last time

on the podcast too, where I think we’re just capable

of so many different kinds of relationships.

And actually, part of why I think marriage is so tough

as a relationship is because we put so many expectations

on it, like your partner has to be your best friend,

and you have to be sexually attracted to them,

and they have to be a good co-parent and a good roommate,

and it’s all the relationships at once that have to work.

But normally with other people,

we have one type of relationship,

or we even have a different relationship to our dog

than we do to our neighbor,

than we do to a co-worker.

I think that some people are gonna find

romantic relationships with robots interesting.

It might even be a widespread thing,

but I don’t think it’s gonna replace

human romantic relationships.

I think it’s just gonna be a separate type of thing.

It’s gonna be more narrow.

More narrow, or even just something new

that we haven’t really experienced before.

Maybe having a crush on an artificial agent

is a different type of fascination.

I don’t know.

Do you think people would see that as cheating?

I think people would, well, I mean,

the things that people feel threatened by in relationships

are very manifold, so.

Yeah, that’s just an interesting one,

because maybe it’ll be good,

a little jealousy for the relationship.

Maybe it’ll be part of the couples therapy,

that kind of thing, or whatever.

I don’t think jealousy, I mean, I think

it’s hard to avoid jealousy,

but I think the objective is probably to avoid it.

I mean, some people don’t even get jealous

when their partner sleeps with someone else.

Like, there’s polyamory, and.

Yeah.

I think there’s just such a diversity of different ways

that we can structure relationships, or view them,

that this is just gonna be another one that we add.

You dedicate your book to your dad.

What did you learn about life from your dad?

Oh, man, my dad is,

he’s a great listener, and he is

the best person I know

at the type of cognitive empathy

that’s perspective-taking.

So, not emotional, crying empathy,

but trying to see someone else’s point of view,

and trying to put yourself in their shoes.

And he really instilled that in me from an early age.

And then he made me read a ton of science fiction,

which probably led me down this path.

Taught you how to be curious about the world,

and how to be open-minded.

Yeah.

Last question.

What role does love play in the human condition?

Since we’ve been talking about love and robots,

and you’re fascinated by social robotics.

It feels like all of that operates in the landscape

of something that we can call love.

Love?

Yeah, I think there are a lot of different kinds of love.

I feel like we need,

I’m like, don’t the Eskimos have

all these different words for snow?

We need more words to describe different types

and kinds of love that we experience.

But I think love is so important,

and I also think it’s not zero-sum.

That’s the really interesting thing about love,

is that I had one kid, and I loved my first kid

more than anything else in the world.

And I was like, how can I have a second kid,

and then love that kid also?

I’m never gonna love it as much as the first.

But I love them both equally.

It just, my heart expanded.

And so I think that people who are threatened

by love towards artificial agents,

they don’t need to be threatened for that reason.

Artificial agents will just, if done right,

will just expand your capacity for love.

I think so.

I agree.

Beautifully put.

Kate, this was awesome.

I still didn’t talk about half the things

I wanted to talk about, but we’re already

way over three hours, so thank you so much.

I really appreciate you talking today.

You’re awesome.

You’re an amazing human being,

and a great roboticist, great writer now.

It’s an honor that you would talk with me.

Thanks for doing it.

Right back at you.

Thank you.

Thanks for listening to this conversation

with Kate Darling.

To support this podcast, please check out

our sponsors in the description.

And now, let me leave you with some words

from Maya Angelou.

Courage is the most important of all the virtues,

because without courage, you can’t practice

any other virtue consistently.

Thank you for listening, and hope to see you

next time.
