Lex Fridman Podcast - #66 - Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems

The following is a conversation with Ayanna Howard.

She’s a roboticist, a professor at Georgia Tech,

and director of the Human Automation Systems Lab,

with research interests in human robot interaction,

assistive robots in the home, therapy gaming apps,

and remote robotic exploration of extreme environments.

Like me, in her work, she cares a lot

about both robots and human beings,

and so I really enjoyed this conversation.

This is the Artificial Intelligence Podcast.

If you enjoy it, subscribe on YouTube,

give it five stars on Apple Podcasts,

follow on Spotify, support it on Patreon,

or simply connect with me on Twitter

at Lex Fridman, spelled F R I D M A N.

I recently started doing ads

at the end of the introduction.

I’ll do one or two minutes after introducing the episode,

and never any ads in the middle

that can break the flow of the conversation.

I hope that works for you

and doesn’t hurt the listening experience.

This show is presented by Cash App,

the number one finance app in the App Store.

I personally use Cash App to send money to friends,

but you can also use it to buy, sell,

and deposit Bitcoin in just seconds.

Cash App also has a new investing feature.

You can buy fractions of a stock, say $1 worth,

no matter what the stock price is.

Brokerage services are provided by Cash App Investing,

a subsidiary of Square and Member SIPC.

I’m excited to be working with Cash App

to support one of my favorite organizations called FIRST,

best known for their FIRST Robotics and LEGO competitions.

They educate and inspire hundreds of thousands of students

in over 110 countries,

and have a perfect rating at Charity Navigator,

which means that donated money

is used to maximum effectiveness.

When you get Cash App from the App Store or Google Play

and use code LEXPODCAST, you’ll get $10,

and Cash App will also donate $10 to FIRST,

which again, is an organization

that I’ve personally seen inspire girls and boys

to dream of engineering a better world.

And now, here’s my conversation with Ayanna Howard.

What or who is the most amazing robot you’ve ever met,

or perhaps had the biggest impact on your career?

I haven’t met her, but I grew up with her,

but of course, Rosie.

So, and I think it’s because also.

Who’s Rosie?

Rosie from the Jetsons.

She is all things to all people, right?

Think about it.

Like anything you wanted, it was like magic, it happened.

So people not only anthropomorphize,

but project whatever they wish for the robot to be onto.

Onto Rosie.

But also, I mean, think about it.

She was socially engaging.

She every so often had an attitude, right?

She kept us honest.

She would push back sometimes

when George was doing some weird stuff.

But she cared about people, especially the kids.

She was like the perfect robot.

And you’ve said that people don’t want

their robots to be perfect.

Can you elaborate on that?

What do you think that is?

Just like you said, Rosie pushed back a little bit

every once in a while.

Yeah, so I think it’s that.

So if you think about robotics in general,

we want them because they enhance our quality of life.

And usually that’s linked to something that’s functional.

Even if you think of self driving cars,

why is there a fascination?

Because people really do hate to drive.

Like there’s the like Saturday driving

where I can just speed,

but then there’s the I have to go to work every day

and I’m in traffic for an hour.

I mean, people really hate that.

And so robots are designed to basically enhance

our ability to increase our quality of life.

And so the perfection comes from this aspect of interaction.

If I think about how we drive, if we drove perfectly,

we would never get anywhere, right?

So think about how many times you had to run past the light

because you see the car behind you

is about to crash into you.

Or that little kid kind of runs into the street

and so you have to cross on the other side

because there’s no cars, right?

Like if you think about it, we are not perfect drivers.

Some of it is because it’s our world.

And so if you have a robot that is perfect

in that sense of the word,

they wouldn’t really be able to function with us.

Can you linger a little bit on the word perfection?

So from the robotics perspective,

what does that word mean

and how is sort of the optimal behavior

as you’re describing different

than what we think is perfection?

Yeah, so perfection, if you think about it

in the more theoretical point of view,

it’s really tied to accuracy, right?

So if I have a function,

can I complete it at 100% accuracy with zero errors?

And so that’s kind of, if you think about perfection

in the sense of the word.

And in the self driving car realm,

do you think from a robotics perspective,

we kind of think that perfection means

following the rules perfectly,

sort of defining, staying in the lane, changing lanes.

When there’s a green light, you go.

When there’s a red light, you stop.

And that’s the, and be able to perfectly see

all the entities in the scene.

That’s the limit of what we think of as perfection.

And I think that’s where the problem comes

is that when people think about perfection for robotics,

the ones that are the most successful

are the ones that are quote unquote perfect.

Like I said, Rosie is perfect,

but she actually wasn’t perfect in terms of accuracy,

but she was perfect in terms of how she interacted

and how she adapted.

And I think that’s some of the disconnect

is that we really want perfection

with respect to its ability to adapt to us.

We don’t really want perfection with respect to 100% accuracy

with respect to the rules that we just made up anyway, right?

And so I think there’s this disconnect sometimes

between what we really want and what happens.

And we see this all the time, like in my research, right?

Like the optimal, quote unquote optimal interactions

are when the robot is adapting based on the person,

not 100% following what’s optimal based on the rules.

Just to linger on autonomous vehicles for a second,

just your thoughts, maybe off the top of the head,

how hard is that problem do you think

based on what we just talked about?

There’s a lot of folks in the automotive industry,

they’re very confident from Elon Musk to Waymo

to all these companies.

How hard is it to solve that last piece?

The last mile.

The gap between the perfection and the human definition

of how you actually function in this world.

Yeah, so this is a moving target.

So I remember when all the big companies

started to heavily invest in this

and there was a number of even roboticists

as well as folks who were putting in money, the VCs

and corporations, Elon Musk being one of them that said,

self driving cars on the road with people

within five years, that was a little while ago.

And now people are saying five years, 10 years, 20 years,

some are saying never, right?

I think if you look at some of the things

that are being successful is these

basically fixed environments

where you still have some anomalies, right?

You still have people walking, you still have stores,

but you don’t have other drivers, right?

Like other human drivers are,

it’s a dedicated space for the cars.

Because if you think about robotics in general,

where has always been successful?

I mean, you can say manufacturing,

like way back in the day, right?

It was a fixed environment, humans were not part

of the equation, we’re a lot better than that.

But like when we can carve out scenarios

that are closer to that space,

then I think that it’s where we are.

So a closed campus where you don’t have other human-driven cars

and maybe some protection so that the students

don’t jet in front just because they wanna see what happens.

Like having a little bit, I think that’s where

we’re gonna see the most success in the near future.

And be slow moving.

Right, not 55, 60, 70 miles an hour,

but the speed of a golf cart, right?

So that said, the most successful robots

in the automotive industry operating today

in the hands of real people are ones that are traveling

over 55 miles an hour and in unconstrained environments,

which is Tesla vehicles, so Tesla autopilot.

So I would love to hear sort of your,

just thoughts of two things.

So one, I don’t know if you’ve gotten to see,

you’ve heard about something called smart summon,

where the Tesla system, the autopilot system,

drives the car with zero occupancy, no driver,

in the parking lot, slowly sort of trying to navigate

the parking lot to find its way to you.

And there’s some incredible amounts of videos

and just hilarity that happens as it awkwardly tries

to navigate this environment, but it’s a beautiful

nonverbal communication between machine and human

that I think is a, it’s like, it’s some of the work

that you do in this kind of interesting

human robot interaction space.

So what are your thoughts in general about it?

So I do have that feature.

Do you drive a Tesla?

I do, mainly because I’m a gadget freak, right?

So I say it’s a gadget that happens to have some wheels.

And yeah, I’ve seen some of the videos.

But what’s your experience like?

I mean, you’re a human robot interaction roboticist,

you’re a legit sort of expert in the field.

So what does it feel for a machine to come to you?

It’s one of these very fascinating things,

but also I am hyper, hyper alert, right?

Like I’m hyper alert, like my butt, my thumb is like,

oh, okay, I’m ready to take over.

Even when I’m in my car or I’m doing things like automated

backing into, so there’s like a feature where you can do

this automated backing into a parking space,

or bring the car out of your garage,

or even, you know, pseudo autopilot on the freeway, right?

I am hypersensitive.

I can feel like as I’m navigating,

like, yeah, that’s an error right there.

Like I am very aware of it, but I’m also fascinated by it.

And it does get better.

Like I look and see it’s learning from all of these people

who are cutting it on, like every time I cut it on,

it’s getting better, right?

And so I think that’s what’s amazing about it is that.

This nice dance of you’re still hyper vigilant.

So you’re still not trusting it at all.

Yeah.

And yet you’re using it.

On the highway, if I were to, like what,

as a roboticist, we’ll talk about trust a little bit.

How do you explain that?

You still use it.

Is it the gadget freak part?

Like where you just enjoy exploring technology?

Or is that the right actually balance

between robotics and humans is where you use it,

but don’t trust it.

And somehow there’s this dance

that ultimately is a positive.

Yeah, so I think I’m,

I just don’t necessarily trust technology,

but I’m an early adopter, right?

So when it first comes out,

I will use everything,

but I will be very, very cautious of how I use it.

Do you read about it or do you explore it by just try it?

Do you like crudely, to put it crudely,

do you read the manual or do you learn through exploration?

I’m an explorer.

If I have to read the manual, then, since I do design,

I know it’s a bad user interface.

It’s a failure.

Elon Musk is very confident that you kind of take it

from where it is now to full autonomy.

So from this human robot interaction,

where you don’t really trust and then you try

and then you catch it when it fails to,

it’s going to incrementally improve itself

into full where you don’t need to participate.

What’s your sense of that trajectory?

Is it feasible?

So the promise there is by the end of next year,

by the end of 2020 is the current promise.

What’s your sense about that journey that Tesla’s on?

So there’s kind of three things going on though.

I think in terms of will people go like as a user,

as an adopter, will you trust going to that point?

I think so, right?

Like there are some users and it’s because what happens is

when you’re hypersensitive at the beginning

and then the technology tends to work,

your apprehension slowly goes away.

And as people, we tend to swing to the other extreme, right?

Because it’s like, oh, I was like hyper, hyper fearful

or hypersensitive and it was awesome.

And we just tend to swing.

That’s just human nature.

And so you will have, I mean, and I…

That’s a scary notion because most people

are now extremely untrusting of autopilot.

They use it, but they don’t trust it.

And it’s a scary notion that there’s a certain point

where you allow yourself to look at the smartphone

for like 20 seconds.

And then there’ll be this phase shift

where it’ll be like 20 seconds, 30 seconds,

one minute, two minutes.

It’s a scary proposition.

But that’s people, right?

That’s just, that’s humans.

I mean, I think of even our use of,

I mean, just everything on the internet, right?

Like think about how reliant we are on certain apps

and certain engines, right?

20 years ago, people would have been like, oh yeah, that’s stupid.

Like that makes no sense.

Like, of course that’s false.

Like now it’s just like, oh, of course I’ve been using it.

It’s been correct all this time.

Of course aliens, I didn’t think they existed,

but now it says they do, obviously.

100%, earth is flat.

So, okay, but you said three things.

So one is the human.

Okay, so one is the human.

And I think there will be a group of individuals

that will swing, right?

I just.

Teenagers.

Teenage, I mean, it’ll be, it’ll be adults.

There’s actually an age demographic

that’s optimal for technology adoption.

And you can actually find them.

And they’re actually pretty easy to find.

Just based on their habits, based on,

so if someone like me who wasn’t a roboticist

would probably be the optimal kind of person, right?

Early adopter, okay with technology,

very comfortable and not hypersensitive, right?

I’m just hypersensitive cause I designed this stuff.

So there is a target demographic that will swing.

The other one though,

is you still have these humans that are on the road.

That one is a harder, harder thing to do.

And as long as we have people that are on the same streets,

that’s gonna be the big issue.

And it’s just because you can’t possibly,

I wanna say you can’t possibly map the,

some of the silliness of human drivers, right?

Like as an example, when you’re next to that car

that has that big sticker called student driver, right?

Like you are like, oh, either I’m going to like go around.

Like we are, we know that that person

is just gonna make mistakes that make no sense, right?

How do you map that information?

Or if I am in a car and I look over

and I see two fairly young looking individuals

and there’s no student driver bumper

and I see them chit chatting to each other,

I’m like, oh, that’s an issue, right?

So how do you get that kind of information

and that experience into basically an autopilot?

And there’s millions of cases like that

where we take little hints to establish context.

I mean, you said kind of beautifully poetic human things,

but there’s probably subtle things about the environment

about it being maybe time for commuters

to start going home from work

and therefore you can make some kind of judgment

about the group behavior of pedestrians, blah, blah, blah,

and so on and so on.

Or even cities, right?

Like if you’re in Boston, how people cross the street,

like lights are not an issue versus other places

where people will actually wait for the crosswalk.

Seattle or somewhere peaceful.

But what I’ve also seen sort of just even in Boston

that intersection to intersection is different.

So every intersection has a personality of its own.

So certain neighborhoods of Boston are different.

So we kind of, and based on different timing of day,

at night, it’s all, there’s a dynamic to human behavior

that we kind of figure out ourselves.

We’re not able to introspect and figure it out,

but somehow our brain learns it.

We do.

And so you’re saying, is there a shortcut?

Is there a shortcut, though, for a robot?

Is there something that could be done, you think,

that, you know, that’s what we humans do.

It’s just like bird flight, right?

That’s the example they give for flight.

Do you necessarily need to build a bird that flies

or can you do an airplane?

Is there a shortcut to it?

So I think the shortcut is, and I kind of,

I talk about it as a fixed space,

where, so imagine that there’s a neighborhood

that’s a new smart city or a new neighborhood

that says, you know what?

We are going to design this new city

based on supporting self driving cars.

And then doing things, knowing that there’s anomalies,

knowing that people are like this, right?

And designing it based on that assumption

that like, we’re gonna have this.

That would be an example of a shortcut.

So you still have people,

but you do very specific things

to try to minimize the noise a little bit

as an example.

And the people themselves become accepting of the notion

that there’s autonomous cars, right?

Right, like they move into,

so right now you have like a,

you will have a self selection bias, right?

Like individuals will move into this neighborhood

knowing like this is part of like the real estate pitch,

right?

And so I think that’s a way to do a shortcut.

One, it allows you to deploy.

It allows you to collect then data with these variances

and anomalies, cause people are still people,

but it’s a safer space and it’s more of an accepting space.

I.e. when something in that space might happen

because things do,

because you already have the self selection,

like people would be, I think a little more forgiving

than other places.

And you said three things, did we cover all of them?

The third is legal law, liability,

which I don’t really want to touch,

but it’s still of concern.

And the mishmash with like with policy as well,

sort of government, all that whole.

That big ball of stuff.

Yeah, gotcha.

So that’s, so we’re out of that now.

Do you think from a robotics perspective,

you know, if you’re kind of honest about what cars do,

they kind of threaten each other’s lives all the time.

So cars are assertive.

I mean, in order to navigate intersections,

there’s an assertiveness, there’s a risk taking.

And if you were to reduce it to an objective function,

there’s a probability of murder in that function,

meaning you killing another human being

and you’re using that.

First of all, it has to be low enough

to be acceptable to you on an ethical level

as an individual human being,

but it has to be high enough for people to respect you

to not sort of take advantage of you completely

and jaywalk in front of you and so on.

So, I mean, I don’t think there’s a right answer here,

but what’s, how do we solve that?

How do we solve that from a robotics perspective

when danger and human life is at stake?

Yeah, as they say, cars don’t kill people,

people kill people.

Right.

So I think.

And now robotic algorithms would be killing people.

Right, so it will be robotics algorithms that are pro,

no, it will be robotic algorithms don’t kill people.

Developers of robotic algorithms kill people, right?

I mean, one of the things is people are still in the loop

and at least in the near and midterm,

I think people will still be in the loop at some point,

even if it’s a developer.

Like we’re not necessarily at the stage

where robots are programming autonomous robots

with different behaviors quite yet.

It’s a scary notion, sorry to interrupt,

that a developer has some responsibility

in the death of a human being.

That’s a heavy burden.

I mean, I think that’s why the whole aspect of ethics

in our community is so, so important, right?

Like, because it’s true.

If you think about it, you can basically say,

I’m not going to work on weaponized AI, right?

Like people can say, that’s not what I’m gonna do.

But yet you are programming algorithms

that might be used in healthcare algorithms

that might decide whether this person

should get this medication or not.

And they don’t and they die.

Okay, so that is your responsibility, right?

And if you’re not conscious and aware

that you do have that power when you’re coding

and things like that, I think that’s just not a good thing.

Like we need to think about this responsibility

as we program robots and computing devices

much more than we are.

Yeah, so it’s not an option to not think about ethics.

I think for a majority, I would say, of computer science,

sort of, it’s kind of a hot topic now,

thinking about bias and so on, but it’s,

and we’ll talk about it, but usually it’s kind of,

it’s like a very particular group of people

that work on that.

And then people who do like robotics are like,

well, I don’t have to think about that.

There’s other smart people thinking about it.

It seems that everybody has to think about it.

It’s not, you can’t escape the ethics,

whether it’s bias or just every aspect of ethics

that has to do with human beings.

Everyone.

So think about, I’m gonna age myself,

but I remember when we didn’t have like testers, right?

And so what did you do?

As a developer, you had to test your own code, right?

Like you had to go through all the cases and figure it out

and then they realized that,

we probably need to have testing

because we’re not getting all the things.

And so from there, what happens is like most developers,

they do a little bit of testing, but it’s usually like,

okay, did my compiler bug out?

Let me look at the warnings.

Okay, is that acceptable or not, right?

Like that’s how you typically think about as a developer

and you’ll just assume that it’s going to go

to another process and they’re gonna test it out.

But I think we need to go back to those early days

when you’re a developer, you’re developing,

there should be, like, the ability to say,

okay, let me look at the ethical outcomes of this

because there isn’t a second line of testing, like ethical testers,

right, it’s you.

We did it back in the early coding days.

I think that’s where we are with respect to ethics.

Like let’s go back to what was good practices

and only because we were just developing the field.

Yeah, and it’s a really heavy burden.

I’ve had to feel it recently in the last few months,

but I think it’s a good one to feel like

I’ve gotten a message, more than one from people.

You know, I’ve unfortunately gotten some attention recently

and I’ve gotten messages that say that

I have blood on my hands

because of working on semi autonomous vehicles.

So the idea that you have semi autonomy means

people will become, will lose vigilance and so on.

That’s actually us being humans, as we described.

And because of that, because of this idea

that we’re creating automation,

there’ll be people hurt because of it.

And I think that’s a beautiful thing.

I mean, it’s, you know, there’s many nights

where I wasn’t able to sleep because of this notion.

You know, you really do think about people that might die

because of this technology.

Of course, you can then start rationalizing saying,

well, you know what, 40,000 people die in the United States

every year and we’re trying to ultimately try to save lives.

But the reality is your code you’ve written

might kill somebody.

And that’s an important burden to carry with you

as you design the code.

I don’t even think of it as a burden

if we train this concept correctly from the beginning.

And I use, and not to say that coding is like

being a medical doctor, but think about it.

Medical doctors, if they’ve been in situations

where their patient didn’t survive, right?

Do they give up and go away?

No, every time they come in,

they know that there might be a possibility

that this patient might not survive.

And so when they approach every decision,

like that’s in the back of their head.

And so why is it that we aren’t teaching,

and those are tools though, right?

They are given some of the tools to address that

so that they don’t go crazy.

But we don’t give those tools

so that it does feel like a burden

versus something of I have a great gift

and I can do great, awesome good,

but with it comes great responsibility.

I mean, that’s what we teach in terms of

if you think about the medical schools, right?

Great gift, great responsibility.

I think if we just change the messaging a little,

great gift, being a developer, great responsibility.

And this is how you combine those.

But do you think, I mean, this is really interesting.

It’s outside, I actually have no friends

who are sort of surgeons or doctors.

I mean, what does it feel like

to make a mistake in a surgery and somebody to die

because of that?

Like, is that something you could be taught

in medical school, sort of how to be accepting of that risk?

So, because I do a lot of work with healthcare robotics,

I have not lost a patient, for example.

The first one’s always the hardest, right?

But they really teach the value, right?

So, they teach responsibility,

but they also teach the value.

Like, you’re saving 40,000,

but in order to really feel good about that,

when you come to a decision,

you have to be able to say at the end,

I did all that I could possibly do, right?

Versus a, well, I just picked the first widget, right?

Like, so every decision is actually thought through.

It’s not a habit, it’s not a,

let me just take the best algorithm

that my friend gave me, right?

It’s a, is this it, is this the best?

Have I done my best to do good, right?

And so…

You’re right, and I think burden is the wrong word.

It’s a gift, but you have to treat it extremely seriously.

Correct.

So, on a slightly related note,

in a recent paper,

The Ugly Truth About Ourselves and Our Robot Creations,

you discuss, you highlight some biases

that may affect the function of various robotic systems.

Can you talk through, if you remember, examples of some?

There’s a lot of examples.

I usually… What is bias, first of all?

Yeah, so bias is this,

and so bias, which is different than prejudice.

So, bias is that we all have these preconceived notions

about particular, everything from particular groups

to habits to identity, right?

So, we have these predispositions,

and so when we address a problem,

we look at a problem and make a decision,

those preconceived notions might affect our outputs,

our outcomes.

So, there the bias can be positive and negative,

and then is prejudice the negative kind of bias?

Prejudice is the negative, right?

So, prejudice is that not only are you aware of your bias,

but you then take it and have a negative outcome,

even though you’re aware, like…

And there could be gray areas too.

There’s always gray areas.

That’s the challenging aspect of all ethical questions.

So, I always like…

So, there’s a funny one,

and in fact, I think it might be in the paper,

because I think I talk about self driving cars,

but think about this.

We, for teenagers, right?

Typically, insurance companies charge quite a bit of money

if you have a teenage driver.

So, you could say that’s an age bias, right?

But no one will claim…

I mean, parents will be grumpy,

but no one really says that that’s not fair.

That’s interesting.

We don’t…

That’s right, that’s right.

Everybody in human factors and safety research almost,

I mean, is quite ruthlessly critical of teenagers.

And we don’t question, is that okay?

Is that okay to be ageist in this kind of way?

It is, and it is ageist, right?

It’s definitely ageist, there’s no question about it.

And so, this is the gray area, right?

Because you know that teenagers are more likely

to be in accidents,

and so, there’s actually some data to it.

But then, if you take that same example,

and you say, well, I’m going to make the insurance higher

for an area of Boston,

because there’s a lot of accidents.

And then, they find out that that’s correlated

with socioeconomics.

Well, then it becomes a problem, right?

Like, that is not acceptable,

but yet, the teenager one, which is ageist, is, right?

We figure that out as a society by having conversations,

by having discourse.

I mean, throughout history,

the definition of what is ethical or not has changed,

and hopefully, always for the better.

Correct, correct.

So, in terms of bias or prejudice in algorithms,

what examples do you sometimes think about?

So, I think about quite a bit the medical domain,

just because historically, right?

The healthcare domain has had these biases,

typically based on gender and ethnicity, primarily.

A little in age, but not so much.

Historically, if you think about FDA and drug trials,

it’s harder to find women that aren’t childbearing,

and so you may not test drugs on them at the same level.

Right, so there’s these things.

And so, if you think about robotics, right?

Something as simple as,

I’d like to design an exoskeleton, right?

What should the material be?

What should the weight be?

What should the form factor be?

Who are you gonna design it around?

I will say that in the US,

women average height and weight

is slightly different than guys.

So, who are you gonna choose?

Like, if you’re not thinking about it from the beginning,

as, okay, when I design this and I look at the algorithms

and I design the control system and the forces

and the torques, if you’re not thinking about,

well, you have different types of body structure,

you’re gonna design to what you’re used to.

Oh, this fits all the folks in my lab, right?

So, think about it from the very beginning is important.

What about sort of algorithms that train on data

kind of thing?

Sadly, our society already has a lot of negative bias.

And so, if we collect a lot of data,

even if it’s collected in a balanced way,

that’s going to contain the same bias

that our society contains.

And so, yeah, is there things there that bother you?

Yeah, so you actually said something.

You had said how we have biases,

but hopefully we learn from them and we become better, right?

And so, that’s where we are now, right?

So, the data that we’re collecting is historic.

So, it’s based on these things

when we knew it was bad to discriminate,

but that’s the data we have and we’re trying to fix it now,

but we’re fixing it based on the data

that was used in the first place.

Fix it in post.

Right, and so the decisions,

and you can look at everything from the whole aspect

of predictive policing, criminal recidivism.

There was a recent paper that had the healthcare algorithms,

which had kind of a sensational title.

I’m not pro sensationalism in titles,

but again, you read it, right?

So, it makes you read it,

but I’m like, really?

Like, ugh, you could have.

What’s the topic of the sensationalism?

I mean, what’s underneath it?

What’s, if you could sort of educate me

on what kind of bias creeps into the healthcare space.

Yeah, so.

I mean, you already kind of mentioned.

Yeah, so this one was the headline was

racist AI algorithms.

Okay, like, okay, that’s totally a clickbait title.

And so you looked at it and so there was data

that these researchers had collected.

I believe, I wanna say it was either Science or Nature.

It just was just published,

but they didn’t have a sensational title.

It was like the media.

And so they had looked at demographics,

I believe, between black and white women, right?

And they showed that there was a discrepancy

in the outcomes, right?

And so, and it was tied to ethnicity, tied to race.

The piece that the researchers did

actually went through the whole analysis, but of course.

I mean, the journalists with AI are problematic

across the board, let’s say.

And so this is a problem, right?

And so there’s this thing about,

oh, AI, it has all these problems.

We’re doing it on historical data

and the outcomes are uneven based on gender

or ethnicity or age.

But what I am always saying is like, yes,

we need to do better, right?

We need to do better.

It is our duty to do better.

But the worst AI is still better than us.

Like, you take the best of us

and we’re still worse than the worst AI,

at least in terms of these things.

And that’s actually not discussed, right?

And so I think, and that’s why the sensational title, right?

And so it’s like, so then you can have individuals go like,

oh, we don’t need to use this AI.

I’m like, oh, no, no, no, no.

I want the AI instead of the doctors

that provided that data,

because it’s still better than that, right?

I think that’s really important to linger on,

is the idea that this AI is racist.

It’s like, well, compared to what?

Sort of, I think we set, unfortunately,

way too high of a bar for AI algorithms.

And in the ethical space where perfect is,

I would argue, probably impossible.

Then if we set the bar of perfection, essentially,

of it has to be perfectly fair, whatever that means,

it means we’re setting it up for failure.

But that’s really important to say what you just said,

which is, well, it’s still better than us.

And one of the things I think

that we don’t get enough credit for,

just in terms of as developers,

is that you can now poke at it, right?

So it’s harder to say, is this hospital,

is this city doing something, right?

Until someone brings in a civil case, right?

Well, with AI, it can process through all this data

and say, hey, yes, there was an issue here,

but here it is, we’ve identified it,

and then the next step is to fix it.

I mean, that’s a nice feedback loop

versus waiting for someone to sue someone else

before it’s fixed, right?

And so I think that power,

we need to capitalize on a little bit more, right?

Instead of having the sensational titles,

have the, okay, this is a problem,

and this is how we’re fixing it,

and people are putting money to fix it

because we can make it better.

I look at like facial recognition,

how Joy, she basically called out a couple of companies

and said, hey, and most of them were like,

oh, embarrassment, and the next time it had been fixed,

right, it had been fixed better, right?

And then it was like, oh, here’s some more issues.

And I think that conversation then moves that needle

to having much more fair and unbiased and ethical aspects,

as long as both sides, the developers are willing to say,

okay, I hear you, yes, we are going to improve,

and you have other developers who are like,

hey, AI, it’s wrong, but I love it, right?

Yes, so speaking of this really nice notion

that AI is maybe flawed but better than humans,

so just made me think of it,

one example of flawed humans is our political system.

Do you think, or you said judicial as well,

do you have a hope for AI sort of being elected

for president or running our Congress

or being able to be a powerful representative of the people?

So I mentioned, and I truly believe that this whole world

of AI is in partnerships with people.

And so what does that mean?

I don’t believe, or maybe I just don’t,

I don’t believe that we should have an AI for president,

but I do believe that a president

should use AI as an advisor, right?

Like, if you think about it,

every president has a cabinet of individuals

that have different expertise

that they should listen to, right?

Like, that’s kind of what we do.

And you put smart people with smart expertise

around certain issues, and you listen.

I don’t see why AI can’t function

as one of those smart individuals giving input.

So maybe there’s an AI on healthcare,

maybe there’s an AI on education and right,

like all of these things that a human is processing, right?

Because at the end of the day,

there’s people that are human

that are going to be at the end of the decision.

And I don’t think as a world, as a culture, as a society,

that we would totally, and this is us,

like this is some fallacy about us,

but we need to see that leader, that person as human.

And most people don’t realize

that like leaders have a whole lot of advice, right?

Like when they say something, it’s not that they woke up,

well, usually they don’t wake up in the morning

and be like, I have a brilliant idea, right?

It’s usually a, okay, let me listen.

I have a brilliant idea,

but let me get a little bit of feedback on this.

Like, okay.

And then it’s a, yeah, that was an awesome idea

or it’s like, yeah, let me go back.

We already talked through a bunch of them,

but are there some possible solutions

to the bias that’s present in our algorithms

beyond what we just talked about?

So I think there’s two paths.

One is to figure out how to systematically

do the feedback and corrections.

So right now it’s ad hoc, right?

It’s a researcher identify some outcomes

that are not, don’t seem to be fair, right?

They publish it, they write about it.

And the, either the developer or the companies

that have adopted the algorithms may try to fix it, right?

And so it’s really ad hoc and it’s not systematic.

There’s, it’s just, it’s kind of like,

I’m a researcher, that seems like an interesting problem,

which means that there’s a whole lot out there

that’s not being looked at, right?

Cause it’s kind of researcher driven.

And I don’t necessarily have a solution,

but that process I think could be done a little bit better.

One way is I’m going to poke a little bit

at some of the corporations, right?

Like maybe the corporations when they think

about a product, they should, instead of,

in addition to hiring these, you know, bug,

they give these.

Oh yeah, yeah, yeah.

Like awards when you find a bug.

Yeah, security bug, you know, let’s put it

like we will give the, whatever the award is

that we give for the people who find these security holes,

find an ethics hole, right?

Like find an unfairness hole

and we will pay you X for each one you find.

I mean, why can’t they do that?

One is a win win.

They show that they’re concerned about it,

that this is important and they don’t have

to necessarily dedicate their own like internal resources.

And it also means that everyone who has

like their own bias lens, like I’m interested in age.

And so I’ll find the ones based on age

and I’m interested in gender and right,

which means that you get like all

of these different perspectives.

But you think of it in a data driven way.

So like sort of, if we look at a company like Twitter,

it gets, it’s under a lot of fire

for discriminating against certain political beliefs.

Correct.

And sort of, there’s a lot of people,

this is the sad thing,

cause I know how hard the problem is

and I know the Twitter folks are working really hard at it.

Even Facebook that everyone seems to hate

are working really hard at this.

You know, the kind of evidence that people bring

is basically anecdotal evidence.

Well, me or my friend, all we said is X

and for that we got banned.

And that’s kind of a discussion of saying,

well, look, that’s usually, first of all,

the whole thing is taken out of context.

So they present sort of anecdotal evidence.

And how are you supposed to, as a company,

in a healthy way, have a discourse

about what is and isn’t ethical?

How do we make algorithms ethical

when people are just blowing everything out of proportion?

Like they’re outraged about a particular

anecdotal piece of evidence that’s very difficult

to sort of contextualize in the big data driven way.

Do you have a hope for companies like Twitter and Facebook?

Yeah, so I think there’s a couple of things going on, right?

First off, remember this whole aspect

of we are becoming reliant on technology.

We’re also becoming reliant on a lot of these,

the apps and the resources that are provided, right?

So some of it is kind of anger, like I need you, right?

And you’re not working for me, right?

Not working for me, all right.

But I think, and so some of it,

and I wish that there was a little bit

of change of rethinking.

So some of it is like, oh, we’ll fix it in house.

No, that’s like, okay, I’m a fox

and I’m going to watch these hens

because I think it’s a problem that foxes eat hens.

No, right?

Like be good citizens and say, look, we have a problem.

And we are willing to open ourselves up

for others to come in and look at it

and not try to fix it in house.

Because if you fix it in house,

there’s conflict of interest.

If I find something, I’m probably going to want to fix it

and hopefully the media won’t pick it up, right?

And that then causes distrust

because someone inside is going to be mad at you

and go out and talk about how,

yeah, they canned the resume screening tool because it, right?

Like be nice people.

Like just say, look, we have this issue.

Community, help us fix it.

And we will give you like, you know,

the bug finder fee if you do.

Do you have hope that the community,

us as a human civilization on the whole, is good

and can be trusted to guide the future of our civilization

into a positive direction?

I think so.

So I’m an optimist, right?

And, you know, there were some dark times in history always.

I think now we’re in one of those dark times.

I truly do.

In which aspect?

The polarization.

And it’s not just US, right?

So if it was just US, I’d be like, yeah, it’s a US thing,

but we’re seeing it like worldwide, this polarization.

And so I worry about that.

But I do fundamentally believe that at the end of the day,

people are good, right?

And why do I say that?

Because anytime there’s a scenario

where people are in danger and I will use,

so Atlanta, we had a snowmageddon

and people can laugh about that.

People at the time, so the city closed for, you know,

little snow, but it was ice and the city closed down.

But you had people opening up their homes and saying,

hey, you have nowhere to go, come to my house, right?

Hotels were just saying like, sleep on the floor.

Like places like, you know, the grocery stores were like,

hey, here’s food.

There was no like, oh, how much are you gonna pay me?

It was like this, such a community.

And like people who didn’t know each other,

strangers were just like, can I give you a ride home?

And that was a point I was like, you know what, like.

That reveals that the deeper thing is,

there’s a compassionate love that we all have within us.

It’s just that when all of that is taken care of

and get bored, we love drama.

And that’s, I think almost like the division

is a sign of the times being good,

is that it’s just entertaining

on some unpleasant mammalian level to watch,

to disagree with others.

And Twitter and Facebook are actually taking advantage

of that in a sense because it brings you back

to the platform and they’re advertiser driven,

so they make a lot of money.

So you go back and you click.

Love doesn’t sell quite as well in terms of advertisement.

It doesn’t.

So you’ve started your career

at NASA Jet Propulsion Laboratory,

but before I ask a few questions there,

have you happened to have ever seen Space Odyssey,

2001: A Space Odyssey?

Yes.

Okay, do you think HAL 9000,

so we’re talking about ethics.

Do you think HAL did the right thing

by taking the priority of the mission

over the lives of the astronauts?

Do you think HAL is good or evil?

Easy questions.

Yeah.

HAL was misguided.

You’re one of the people that would be in charge

of an algorithm like HAL.

Yeah.

What would you do better?

If you think about what happened

was there was no fail safe, right?

So perfection, right?

Like what is that?

I’m gonna make something that I think is perfect,

but if my assumptions are wrong,

it’ll be perfect based on the wrong assumptions, right?

That’s something that you don’t know until you deploy

and then you’re like, oh yeah, messed up.

But what that means is that when we design software,

such as in Space Odyssey,

when we put things out,

that there has to be a fail safe.

There has to be the ability that once it’s out there,

we can grade it as an F and it fails

and it doesn’t continue, right?

There’s some way that it can be brought in

and removed in that aspect.

Because that’s what happened with HAL.

It was like assumptions were wrong.

It was perfectly correct based on those assumptions

and there was no way to change it,

change the assumptions at all.

And the change to fall back would be to a human.

So you ultimately think like human should be,

it’s not turtles or AI all the way down.

It’s at some point, there’s a human that actually.

I still think that,

and again, because I do human robot interaction,

I still think the human needs to be part of the equation

at some point.

So what, just looking back,

what are some fascinating things in robotic space

that NASA was working at the time?

Or just in general, what have you gotten to play with

and what are your memories from working at NASA?

Yeah, so one of my first memories

was they were working on a surgical robot system

that could do eye surgery, right?

And this was back in, oh my gosh, it must’ve been,

oh, maybe 92, 93, 94.

So it’s like almost like a remote operation.

Yeah, it was remote operation.

In fact, you can even find some old tech reports on it.

So think of it, like now we have da Vinci, right?

Like think of it, but these were like the late 90s, right?

And I remember going into the lab one day

and I was like, what’s that, right?

And of course it wasn’t pretty, right?

Because the technology, but it was like functional

and you had this individual that could use

a version of haptics to actually do the surgery

and they had this mockup of a human face

and like the eyeballs and you can see this little drill.

And I was like, oh, that is so cool.

That one I vividly remember

because it was so outside of my like possible thoughts

of what could be done.

It’s the kind of precision

and I mean, what’s the most amazing part of a thing like that?

I think it was the precision.

It was the kind of first time

that I had physically seen

this robot machine human interface, right?

Versus, cause manufacturing had been,

you saw those kind of big robots, right?

But this was like, oh, this is in a person.

There’s a person and a robot like in the same space.

I’m meeting them in person.

Like for me, it was a magical moment

that I can’t, it was life transforming

that I recently met Spot Mini from Boston Dynamics.

Oh, see.

I don’t know why, but on the human robot interaction

for some reason I realized how easy it is to anthropomorphize

and it was, I don’t know, it was almost

like falling in love, this feeling of meeting.

And I’ve obviously seen these robots a lot

on video and so on, but meeting in person,

just having that one on one time is different.

So have you had a robot like that in your life

that made you maybe fall in love with robotics?

Sort of like meeting in person.

I mean, I loved robotics since, yeah.

So I was a 12 year old.

Like I’m gonna be a roboticist, actually was,

I called it cybernetics.

But so my motivation was Bionic Woman.

I don’t know if you know that.

And so, I mean, that was like a seminal moment,

but I didn’t meet, like that was TV, right?

Like it wasn’t like I was in the same space and I met

and I was like, oh my gosh, you’re like real.

Just lingering on Bionic Woman, which by the way,

because I read that about you.

I watched bits of it and it’s just so,

no offense, terrible.

It’s cheesy if you look at it now.

It’s cheesy, no.

I’ve seen a couple of reruns lately.

But it’s, but of course at the time it’s probably

captured the imagination.

But the sound effects.

Especially when you’re younger, it just catches you.

But which aspect, did you think of it,

you mentioned cybernetics, did you think of it as robotics

or did you think of it as almost constructing

artificial beings?

Like, is it the intelligent part that captured

your fascination or was it the whole thing?

Like even just the limbs and just the.

So for me, it would have, in another world,

I probably would have been more of a biomedical engineer

because what fascinated me was the parts,

like the bionic parts, the limbs, those aspects of it.

Are you especially drawn to humanoid or humanlike robots?

I would say humanlike, not humanoid, right?

And when I say humanlike, I think it’s this aspect

of that interaction, whether it’s social

and it’s like a dog, right?

Like that’s humanlike because it understand us,

it interacts with us at that very social level

to, you know, humanoids are part of that,

but only if they interact with us as if we are human.

Okay, but just to linger on NASA for a little bit,

what do you think, maybe if you have other memories,

but also what do you think is the future of robots in space?

We’ll mention how, but there’s incredible robots

that NASA’s working on in general thinking about

in our, as we venture out, human civilization ventures out

into space, what do you think the future of robots is there?

Yeah, so I mean, there’s the near term.

For example, they just announced the rover

that’s going to the moon, which, you know,

that’s kind of exciting, but that’s like near term.

You know, my favorite, favorite, favorite series

is Star Trek, right?

You know, I really hope, and even Star Trek,

like if I calculate the years, I wouldn’t be alive,

but I would really, really love to be in that world.

Like, even if it’s just at the beginning,

like, you know, like voyage, like adventure one.

So basically living in space.

Yeah.

With, what robots, what are robots?

With Data.

What role?

Data would have to be there, even though that wasn’t,

you know, that was like later, but.

So Data is a robot that has humanlike qualities.

Right, without the emotion chip.

Yeah.

You don’t like emotion.

Well, so Data with the emotion chip

was kind of a mess, right?

It took a while for that thing to adapt,

but, and so why was that an issue?

The issue is that emotions make us irrational agents.

That’s the problem.

And yet he could think through things,

even if it was based on an emotional scenario, right?

Based on pros and cons.

But as soon as you made him emotional,

one of the metrics he used for evaluation

was his own emotions, not people around him, right?

Like, and so.

We do that as children, right?

So we’re very egocentric when we’re young.

We are very egocentric.

And so isn’t that just an early version of the emotion chip

then, I haven’t watched much Star Trek.

Except I have also met adults, right?

And so that is a developmental process.

And I’m sure there’s a bunch of psychologists

that can go through, like you can have a 60 year old adult

who has the emotional maturity of a 10 year old, right?

And so there’s various phases that people should go through

in order to evolve and sometimes you don’t.

So how much psychology do you think,

a topic that’s rarely mentioned in robotics,

but how much does psychology come to play

when you’re talking about HRI, human robot interaction?

When you have to have robots

that actually interact with humans.

Tons.

So we, like my group, as well as I read a lot

in the cognitive science literature,

as well as the psychology literature.

Because they understand a lot about human, human relations

and developmental milestones and things like that.

And so we tend to look to see what’s been done out there.

Sometimes what we’ll do is we’ll try to match that to see,

is that human, human relationship the same as human robot?

Sometimes it is, and sometimes it’s different.

And then when it’s different, we have to,

we try to figure out, okay,

why is it different in this scenario?

But it’s the same in the other scenario, right?

And so we try to do that quite a bit.

Would you say that’s, if we’re looking at the future

of human robot interaction,

would you say the psychology piece is the hardest?

Like if, I mean, it’s a funny notion for you as,

I don’t know if you consider, yeah.

I mean, one way to ask it,

do you consider yourself a roboticist or a psychologist?

Oh, I consider myself a roboticist

that plays the act of a psychologist.

But if you were to look at yourself sort of,

20, 30 years from now,

do you see yourself more and more

wearing the psychology hat?

Another way to put it is,

are the hard problems in human robot interactions

fundamentally psychology, or is it still robotics,

the perception manipulation, planning,

all that kind of stuff?

It’s actually neither.

The hardest part is the adaptation and the interaction.

So it’s the interface, it’s the learning.

And so if I think of,

like I’ve become much more of a roboticist slash AI person

than when I, like originally, again,

I was about the bionics.

I was electrical engineer, I was control theory, right?

And then I started realizing that my algorithms

needed like human data, right?

And so then I was like, okay, what is this human thing?

How do I incorporate human data?

And then I realized that human perception had,

like there was a lot in terms of how we perceive the world.

And so trying to figure out

how do I model human perception for my,

and so I became an HRI person,

human robot interaction person,

from being a control theory and realizing

that humans actually offered quite a bit.

And then when you do that,

you become more of an artificial intelligence, AI.

And so I see myself evolving more in this AI world

under the lens of robotics,

having hardware, interacting with people.

So you’re a world class expert researcher in robotics,

and yet others, you know, there’s a few,

it’s a small but fierce community of people,

but most of them don’t take the journey

into the H of HRI, into the human.

So why did you brave into the interaction with humans?

It seems like a really hard problem.

It’s a hard problem, and it’s very risky as an academic.

And I knew that when I started down that journey,

that it was very risky as an academic

in this world that was nascent, it was just developing.

We didn’t even have a conference, right, at the time.

Because it was the interesting problems.

That was what drove me.

It was the fact that I looked at what interests me

in terms of the application space and the problems.

And that pushed me into trying to figure out

what people were and what humans were

and how to adapt to them.

If those problems weren’t so interesting,

I’d probably still be sending rovers to glaciers, right?

But the problems were interesting.

And the other thing was that they were hard, right?

So it’s, I like having to go into a room

and being like, I don’t know what to do.

And then going back and saying, okay,

I’m gonna figure this out.

I do not, I’m not driven when I go in like,

oh, there are no surprises.

Like, I don’t find that satisfying.

If that was the case,

I’d go someplace and make a lot more money, right?

I think I stay in academia and choose to do this

because I can go into a room and like, that’s hard.

Yeah, I think just from my perspective,

maybe you can correct me on it,

but if I just look at the field of AI broadly,

it seems that human robot interaction has the most,

one of the most number of open problems.

Like, especially relative to how many people

are willing to acknowledge that they exist,

because most people are just afraid of the humans,

so they don’t even acknowledge

how many open problems there are.

But it’s in terms of difficult problems

to solve exciting spaces,

it seems to be incredible for that.

It is, and it’s exciting.

You’ve mentioned trust before.

What role does trust from interacting with autopilot

to in the medical context,

what role does trust play in the human robot interactions?

So some of the things I study in this domain

is not just trust, but it really is over trust.

How do you think about over trust?

Like what is, first of all, what is trust

and what is over trust?

Basically, the way I look at it is,

trust is not what you click on a survey,

trust is about your behavior.

So if you interact with the technology

based on the decision or the actions of the technology

as if you trust that decision, then you’re trusting.

And even in my group, we’ve done surveys

that on the thing, do you trust robots?

Of course not.

Would you follow this robot in a burning building?

Of course not.

And then you look at their actions and you’re like,

clearly your behavior does not match what you think

or what you think you would like to think.

And so I’m really concerned about the behavior

because that’s really at the end of the day,

when you’re in the world,

that’s what will impact others around you.

It’s not whether before you went onto the street,

you clicked on like, I don’t trust self driving cars.
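As an aside, here is a minimal sketch of what measuring trust as behavior rather than as a survey answer could look like. The compliance-rate measure, the numbers, and the names are illustrative assumptions, not her lab’s actual protocol.

```python
# A minimal illustrative sketch (an assumption, not her actual measure):
# treating trust as behavior, e.g. the fraction of times a person actually
# follows the robot's guidance, compared with what they said on a survey.

def behavioral_trust(followed_robot: list[bool]) -> float:
    """Compliance rate: how often the person acted as if they trusted the robot."""
    return sum(followed_robot) / len(followed_robot) if followed_robot else 0.0

stated_trust = 0.2                          # survey answer: "I don't trust robots"
actions = [True, True, True, False, True]   # what people actually did
print(f"behavioral trust {behavioral_trust(actions):.2f} vs stated {stated_trust:.2f}")
```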

Yeah, that from an outsider perspective,

it’s always frustrating to me.

Well, I read a lot, so I’m an insider

in a certain philosophical sense.

It’s frustrating to me how often trust is used in surveys,

and how people make claims out of any kind of finding

they get from somebody clicking on an answer.

Yeah, trust is just behavior,

you said it beautifully.

I mean, the action, your own behavior is what trust is.

I mean, everything else is not even close.

It’s almost like absurd comedic poetry

that you weave around your actual behavior.

So some people can say,

you know, I trust my wife, my husband, whatever,

but the actions are what speak volumes.

You bug their car, you probably don’t trust them.

I trust them, I’m just making sure.

No, no, that’s, yeah.

Like even if you think about cars,

I think it’s a beautiful case.

I came here at some point, I’m sure,

on either Uber or Lyft, right?

I remember when it first came out, right?

I bet if they had had a survey,

would you get in the car with a stranger and pay them?

Yes.

How many people do you think would have said,

like, really?

Wait, even worse, would you get in the car

with a stranger at 1 a.m. in the morning

to have them drop you home as a single female?

Yeah.

Like how many people would say, that’s stupid.

Yeah.

And now look at where we are.

I mean, people put kids, right?

Like, oh yeah, my child has to go to school

and yeah, I’m gonna put my kid in this car with a stranger.

I mean, it’s just fascinating how, like,

what we think we think is not necessarily

matching our behavior.

Yeah, and certainly with robots, with autonomous vehicles

and all the kinds of robots you work with,

the way you answer it,

especially if you’ve never interacted with that robot before,

if you haven’t had the experience,

being able to respond correctly on a survey is impossible.

But what do you, what role does trust play

in the interaction, do you think?

Like, is it good to trust a robot?

What does over trust mean?

Or is it good to be kind of how you feel

about Autopilot currently, which is,

from a roboticist’s perspective, like,

oh, still very cautious?

Yeah, so this is still an open area of research,

but basically what I would like in a perfect world

is that people trust the technology when it’s working 100%,

and people will be hypersensitive

and identify when it’s not.

But of course we’re not there.

That’s the ideal world.

But what we find is that people swing, right?

They tend to swing, which means that if my first,

and like, we have some papers,

like first impressions are everything, right?

If my first instance with technology,

with robotics is positive, it mitigates any risk,

it correlates with like best outcomes,

it means that I’m more likely to either not see it

when it makes some mistakes or faults,

or I’m more likely to forgive it.

And so this is a problem

because technology is not 100% accurate, right?

It’s not 100% accurate, although it may be perfect.

How do you get that first moment right, do you think?

There’s also an education component about the capabilities

and limitations of the system.

Do you have a sense of how do you educate people correctly

in that first interaction?

Again, this is an open ended problem.

So one of the studies that has actually given me some hope,

that I’m trying to figure out how to bring into robotics.

So there was a research study

that showed, for medical AI systems,

giving information to radiologists about,

here you need to look at these areas on the X ray.

What they found was that when the system provided

one choice, there was this aspect of either no trust

or over trust, right?

Like I don’t believe it at all,

or a yes, yes, yes, yes.

And they would miss things, right?

Instead, when the system gave them multiple choices,

like here are the three, even if it knew like,

it had estimated that the top area you need to look at

was some place on the X ray.

If it gave like one plus others,

the trust was maintained and the accuracy of the entire

population increased, right?

So basically it was a, you’re still trusting the system,

but you’re also putting in a little bit of like,

your human expertise, like your human decision processing

into the equation.

So it helps to mitigate that over trust risk.
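A small sketch of that multiple-choice idea, under the assumption that the system simply exposes its top few candidate regions instead of a single answer; the region names and scores below are hypothetical.

```python
# An illustrative sketch (hypothetical region names and scores): show the top-k
# candidate regions instead of only the single top prediction, so the radiologist
# keeps their own decision processing in the loop.

def candidate_regions(scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-scoring regions rather than just the argmax."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

scores = {"upper left lobe": 0.81, "lower right lobe": 0.62,
          "hilum": 0.44, "apex": 0.12}
for region, score in candidate_regions(scores):
    print(f"suggested area: {region} (model score {score:.2f})")
```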

Yeah, so there’s a fascinating balance to strike there.

We haven’t figured it out, again, it’s still open research in robotics.

It’s an exciting open area of research, exactly.

So what are some exciting applications

of human robot interaction?

You started a company, maybe you can talk about

the exciting efforts there, but in general also

what other space can robots interact with humans and help?

Yeah, so besides healthcare,

cause you know, that’s my bias lens.

My other bias lens is education.

I think that, well, one, we definitely,

we in the US, you know, we’re doing okay with teachers,

but there’s a lot of school districts

that don’t have enough teachers.

If you think about the teacher student ratio

for at least public education in some districts, it’s crazy.

It’s like, how can you have learning in that classroom,

right?

Because you just don’t have the human capital.

And so if you think about robotics,

bringing that in to classrooms,

as well as the afterschool space,

where they offset some of this lack of resources

in certain communities, I think that’s a good place.

And then turning to the other end

is using these systems for workforce retraining

and dealing with some of the things

that are going to come out later on with job loss,

like thinking about robots and AI systems

for retraining and workforce development.

I think those are exciting areas that can be pushed even more,

and it would have a huge, huge impact.

What would you say are some of the open problems

in education? It’s exciting.

So young kids and the older folks

or just folks of all ages who need to be retrained,

who need to sort of open themselves up

to a whole nother area of work.

What are the problems to be solved there?

How do you think robots can help?

We have the engagement aspect, right?

So we can figure out the engagement.

That’s not a…

What do you mean by engagement?

So identifying whether a person is focused,

like, that we can figure out.

What we can figure out, and there are some positive results

on this, is personalized adaptation

based on the concepts, right?

So imagine I think about, I have an agent

and I’m working with a kid learning, I don’t know,

algebra two, can that same agent then switch

and teach some type of new coding skill

to a displaced mechanic?

Like, what does that actually look like, right?

Like hardware might be the same, content is different,

two different target demographics of engagement.

Like how do you do that?

How important do you think personalization

is in human robot interaction?

And not just a mechanic or student,

but like literally to the individual human being.

I think personalization is really important,

but a caveat is that I think we’d be okay

if we can personalize to the group, right?

And so if I can label you

along some certain dimensions,

then even though it may not be you specifically,

I can put you in this group.

So the sample size, this is how they best learn,

this is how they best engage.

Even at that level, it’s really important.

And it’s because, I mean, it’s one of the reasons

why educating in large classrooms is so hard, right?

You teach to the median,

but there’s these individuals that are struggling

and then you have highly intelligent individuals

and those are the ones that are usually kind of left out.

So highly intelligent individuals may be disruptive

and those who are struggling might be disruptive

because they’re both bored.

Yeah, and if you narrow the definition of the group

or the size of the group enough,

you’ll be able to address their,

it’s not individual needs, but really the most important

group needs, right?

And that’s kind of what a lot of successful

recommender systems do with Spotify and so on.

So it’s sad to believe, but as a music listener,

I’m probably in some sort of large group,

and very sadly predictable.

You have been labeled.

Yeah, I’ve been labeled and successfully so

because they’re able to recommend stuff that I like.

Yeah, but applying that to education, right?

There’s no reason why it can’t be done.
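As an aside, a minimal sketch of the group-level personalization described above, assuming a simple nearest-centroid assignment; the dimensions, group names, and strategies are made-up illustrations, not anything from her work.

```python
# Illustrative sketch (groups, dimensions, and centroids are all assumptions):
# label a learner along a few dimensions, assign them to the nearest group, and
# serve that group's preferred engagement strategy rather than a per-person model.
from math import dist

# Hypothetical centroids over (pace, visual_preference, prior_knowledge).
GROUPS = {
    "fast_visual":  ((0.9, 0.8, 0.7), "short videos, quick challenges"),
    "steady_text":  ((0.4, 0.2, 0.5), "worked examples, written practice"),
    "needs_review": ((0.3, 0.5, 0.2), "prerequisite refreshers, frequent check-ins"),
}

def assign_group(features: tuple[float, float, float]) -> str:
    """Put the person in the closest group along the labeled dimensions."""
    return min(GROUPS, key=lambda g: dist(features, GROUPS[g][0]))

def engagement_strategy(features: tuple[float, float, float]) -> str:
    """Group-level personalization: how this group best learns and engages."""
    return GROUPS[assign_group(features)][1]

print(engagement_strategy((0.85, 0.9, 0.6)))  # likely "short videos, quick challenges"
```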

Do you have a hope for our education system?

I have more hope for workforce development.

And that’s because I’m seeing investments.

Even if you look at VC investments in education,

the majority of it has lately been going

to workforce retraining, right?

And so I think that government investment is increasing.

There’s like a claim and some of it’s based on fear, right?

Like AI is gonna come and take over all these jobs.

What are we gonna do with all these taxes

that aren’t coming to us from our citizens?

And so I think I’m more hopeful for that.

Not so hopeful for early education

because it’s still a who’s gonna pay for it.

And you won’t see the results for like 16 to 18 years.

It’s hard for people to wrap their heads around that.

But on the retraining part, what are your thoughts?

There’s a candidate, Andrew Yang, running for president

and saying that sort of AI, automation, robots.

Universal basic income.

Universal basic income in order to support us

as, you know, automation takes people’s jobs

and allows you to explore and find other means.

Like, do you have a concern about the society

transforming effects of automation and robots and so on?

I do.

I do know that AI robotics will displace workers.

Like we do know that.

But there’ll be other workers

that will take on newly defined jobs.

What I worry about is, that’s not what I worry about.

Like will all the jobs go away?

What I worry about is the type of jobs that will come out.

Like people who graduate from Georgia Tech will be okay.

We give them the skills,

they will adapt even if their current job goes away.

I do worry about those

that don’t have that quality of an education.

Will they have the ability,

the background to adapt to those new jobs?

That I don’t know.

That I worry about,

which will create even more polarization

in our society, internationally and everywhere.

I worry about that.

I also worry about not having equal access

to all these wonderful things that AI can do

and robotics can do.

I worry about that.

People like me from Georgia Tech from say MIT

will be okay, right?

But that’s such a small part of the population

that we need to think much more globally

of having access to the beautiful things,

whether it’s AI in healthcare, AI in education,

AI in politics, right?

I worry about that.

And that’s part of the thing that you were talking about

is people that build the technology

have to be thinking about ethics,

have to be thinking about access and all those things.

And not just a small subset.

Let me ask some philosophical,

slightly romantic questions.

People that listen to this will be like,

here he goes again.

Okay, do you think one day we’ll build an AI system

that a person can fall in love with

and it would love them back?

Like in the movie, Her, for example.

Yeah, although she kind of didn’t fall in love with him

or she fell in love with like a million other people,

something like that.

You’re the jealous type, I see.

We humans are the jealous type.

Yes, so I do believe that we can design systems

where people would fall in love with their robot,

with their AI partner.

That I do believe.

Because it’s actually,

and I don’t like to use the word manipulate,

but as we see, there are certain individuals

that can be manipulated

if you understand the cognitive science about it, right?

Right, so I mean, you could think of all close

relationships and love in general

as a kind of mutual manipulation,

that dance, the human dance.

I mean, manipulation is a negative connotation.

And that’s why I don’t like to use that word particularly.

I guess another way to phrase it is,

what you’re getting at is it could be algorithmized

or something, it could be a.

The relationship building part can be.

I mean, just think about it.

We have, and I don’t use dating sites,

but from what I heard, there are some individuals

that have been dating and have never seen each other, right?

In fact, there’s a show I think

that tries to like weed out fake people.

Like there’s a show that comes out, right?

Because like people start faking.

Like, what’s the difference of that person

on the other end being an AI agent, right?

And having a communication

and you building a relationship remotely,

like there’s no reason why that can’t happen.

In terms of human robot interaction,

so, you’ve kind of mentioned

that emotion, with data, can be problematic

if not implemented well, I suppose.

What role does emotion and some other human like things,

the imperfect things come into play here

for good human robot interaction and something like love?

Yeah, so in this case, and you had asked,

can an AI agent love a human back?

I think they can emulate love back, right?

And so what does that actually mean?

It just means that if you think about their programming,

they might put the other person’s needs

in front of theirs in certain situations, right?

You look at, think about it as a return on investment.

Like, what’s my return on investment?

As part of that equation, that person’s happiness,

has some type of algorithmic weighting to it.

And the reason why is because I care about them, right?

That’s the only reason, right?

But if I care about them and I show that,

then my final objective function

is length of time of the engagement, right?

So you can think of how to do this actually quite easily.

And so.
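As an aside, here is a toy sketch of the kind of objective she describes, assuming a fixed care weight and a summed engagement objective; the function names, weights, and numbers are illustrative assumptions, not anything stated in the conversation.

```python
# Illustrative sketch (weights and numbers are assumptions): an agent "emulates
# love" by giving the partner's happiness an explicit weighting in its per-step
# return on investment, with the final objective being the total accumulated
# over the length of the engagement.

def step_utility(own_reward: float, partner_happiness: float,
                 care_weight: float = 0.7) -> float:
    """Per-interaction return on investment, with the partner's happiness weighted in."""
    return (1.0 - care_weight) * own_reward + care_weight * partner_happiness

def engagement_objective(history: list[tuple[float, float]]) -> float:
    """Final objective: utility summed over the length of the engagement."""
    return sum(step_utility(own, partner) for own, partner in history)

# Three interactions where the agent puts the partner's needs in front of its own.
history = [(0.2, 0.9), (0.5, 0.8), (0.1, 0.95)]
print(f"engagement objective: {engagement_objective(history):.2f}")
```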

But that’s not love?

Well, so that’s the thing.

I think it emulates love

because we don’t have a classical definition of love.

Right, but, and we don’t have the ability

to look into each other’s minds to see the algorithm.

And I mean, I guess what I’m getting at is,

is it possible that, especially if that’s learned,

especially if there’s some mystery

and black box nature to the system,

how is that, you know?

How is it any different?

How is it any different in terms of sort of

if the system says, I’m conscious, I’m afraid of death,

and it does indicate that it loves you.

Another way to sort of phrase it,

be curious to see what you think.

Do you think there’ll be a time

when robots should have rights?

You’ve kind of phrased the robot in a very roboticist way

and just a really good way, but saying, okay,

well, there’s an objective function

and I could see how you can create

a compelling human robot interaction experience

that makes you believe that the robot cares for your needs

and even something like loves you.

But what if the robot says, please don’t turn me off?

What if the robot starts making you feel

like there’s an entity, a being, a soul there, right?

Do you think there’ll be a future,

hopefully you won’t laugh too much at this,

but where they do ask for rights?

So I can see a future

if we don’t address it in the near term

where these agents, as they adapt and learn,

could say, hey, this should be something that’s fundamental.

I hopefully think that we would address it

before it gets to that point.

So you think that’s a bad future?

Is that a negative thing where they ask

we’re being discriminated against?

I guess it depends on what role

they have attained at that point, right?

And so if I think about now.

Careful what you say because the robots 50 years from now

will be listening to this and you’ll be on TV saying,

this is what roboticists used to believe.

Well, right?

And so this is my, and as I said, I have a bias lens

and my robot friends will understand that.

So if you think about it, and I actually put this

in kind of the, as a roboticist,

you don’t necessarily think of robots as human

with human rights, but you could think of them

either in the category of property,

or you can think of them in the category of animals, right?

And so both of those have different types of rights.

So animals have their own rights as a living being,

but they can’t vote, they can’t write,

they can be euthanized, but as humans,

if we abuse them, we go to jail, right?

So they do have some rights that protect them,

but don’t give them the rights of like citizenship.

And then if you think about property,

property, the rights are associated with the person, right?

So if someone vandalizes your property

or steals your property, like there are some rights,

but it’s associated with the person who owns that.

If you think about it back in the day,

and if you remember, we talked about

how society has changed, women were property, right?

They were not thought of as having rights.

They were thought of as property of, like their…

Yeah, assaulting a woman meant

assaulting the property of somebody else.

Exactly, and so what I envision is,

is that we will establish some type of norm at some point,

but that it might evolve, right?

Like if you look at women’s rights now,

like there are still some countries that don’t have them,

and the rest of the world is like,

why that makes no sense, right?

And so I do see a world where we do establish

some type of grounding.

It might be based on property rights,

it might be based on animal rights.

And if it evolves that way,

I think we will have this conversation at that time,

because that’s the way our society traditionally has evolved.

Beautifully put, just out of curiosity,

Anki, Jibo, Mayfield Robotics

with their robot Kuri, SciFiWorks, Rethink Robotics,

were all these amazing robotics companies

led, created by incredible roboticists,

and they’ve all gone out of business recently.

Why do you think they didn’t last long?

Why is it so hard to run a robotics company,

especially one like these, which are fundamentally

HRI human robot interaction robots?

Or personal robots?

Each one has a story,

only one of them I don’t understand, and that was Anki.

That’s actually the only one I don’t understand.

I don’t understand it either.

No, no, I mean, looking from the outside,

I’ve looked at their sheets, I’ve looked at the data that’s.

Oh, you mean like business wise,

you don’t understand, I got you.

Yeah.

Yeah, and like I look at all, I look at that data,

and I’m like, they seem to have like product market fit.

Like, so that’s the only one I don’t understand.

The rest of it was product market fit.

What’s product market fit?

Just that of, like how do you think about it?

Yeah, so although Rethink Robotics was getting there, right?

But I think it’s just the timing,

it just, their clock just timed out.

I think if they’d been given a couple more years,

they would have been okay.

But the other ones were still fairly early

by the time they got into the market.

And so product market fit is,

I have a product that I wanna sell at a certain price.

Are there enough people out there, the market,

that are willing to buy the product at that market price

for me to be a functional viable profit bearing company?

Right?

So product market fit.

If it costs you a thousand dollars

and everyone wants it and only is willing to pay a dollar,

you have no product market fit.

Even if you could sell enough of them

for a dollar, cause you can’t.

So how hard is it for robots?

Sort of maybe if you look at iRobot,

the company that makes Roombas, vacuum cleaners,

can you comment on, did they find the right

product market fit?

Like, are people willing to pay for robots

is also another kind of question underlying all this.

So if you think about iRobot and their story, right?

Like when they first, they had enough of a runway, right?

When they first started,

they weren’t doing vacuum cleaners, right?

They were doing contracts primarily, government contracts,

designing robots.

Or military robots.

Yeah, I mean, that’s what they were.

That’s how they started, right?

And then.

They still do a lot of incredible work there.

But yeah, that was the initial thing

that gave them enough funding to.

To then try to, the vacuum cleaner is what I’ve been told

was not like their first rendezvous

in terms of designing a product, right?

And so they were able to survive

until they got to the point

that they found a product price market fit, right?

And even with, if you look at the Roomba,

the price point now is different

than when it was first released, right?

It was an early adopter price,

but they found enough people

who were willing to fund it.

And I mean, I forgot what their loss profile was

for the first couple of years,

but they became profitable in sufficient time

that they didn’t have to close their doors.

So they found the right,

there’s still people willing to pay

a large amount of money,

so over $1,000 for a vacuum cleaner.

Unfortunately for them,

now that they’ve proved everything out,

figured it all out,

now there’s competitors.

Yeah, and so that’s the next thing, right?

The competition,

and they have quite a number, even internationally.

Like there’s some products out there,

you can go to Europe and be like,

oh, I didn’t even know this one existed.

So this is the thing though,

like with any market,

I would, this is not a bad time,

although as a roboticist, it’s kind of depressing,

but I actually think about things like with,

I would say that all of the companies

that are now in the top five or six,

they weren’t the first to the stage, right?

Like Google was not the first search engine,

sorry, AltaVista, right?

Facebook was not the first, sorry, MySpace, right?

Like think about it,

they were not the first players.

Those first players,

like they’re not in the top five, 10 of Fortune 500 companies,

right?

They proved, they started to prove out the market,

they started to get people interested,

they started the buzz,

but they didn’t make it to that next level.

But the second batch, right?

The second batch, I think might make it to the next level.

When do you think the Facebook of robotics?

The Facebook of robotics.

Sorry, I take that phrase back because people deeply,

for some reason, well, I know why,

but I think it’s exaggerated, distrust Facebook

because of the privacy concerns and so on.

And with robotics, one of the things you have to make sure

is all the things we talked about is to be transparent

and have people deeply trust you

to let a robot into their lives, into their home.

When do you think the second batch of robots will come?

Is it five, 10 years, 20 years

that we’ll have robots in our homes

and robots in our hearts?

So if I think about, and because I try to follow

the VC kind of space in terms of robotic investments,

and right now, and I don’t know

if they’re gonna be successful,

I don’t know if this is the second batch,

but there’s only one batch that’s focused

on like the first batch, right?

And then there’s all these self driving Xs, right?

And so I don’t know if they’re a first batch of something

or if like, I don’t know quite where they fit in,

but there’s a number of companies,

the co robot, I call them co robots

that are still getting VC investments.

Some of them have some of the flavor

of like Rethink Robotics.

Some of them have some of the flavor of like Curie.

What’s a co robot?

So basically a robot and human working in the same space.

So some of the companies are focused on manufacturing.

So having a robot and human working together

in a factory, some of these co robots

are robots and humans working in the home,

working in clinics, like there’s different versions

of these companies in terms of their products,

but they’re all, so Rethink Robotics would be

like one of the first, at least well known, companies

focused on this space.

So I don’t know if this is a second batch

or if this is still part of the first batch,

that I don’t know.

And then you have all these other companies

in this self driving space.

And I don’t know if that’s a first batch

or again, a second batch.

Yeah.

So there’s a lot of mystery about this now.

Of course, it’s hard to say that this is the second batch

until it proves out, right?

Correct.

Yeah, we need a unicorn.

Yeah, exactly.

Why do you think people are so afraid,

at least in popular culture of legged robots

like those they work on at Boston Dynamics

or just robotics in general,

if you were to psychoanalyze that fear,

what do you make of it?

And should they be afraid, sorry?

So should people be afraid?

I don’t think people should be afraid.

But with a caveat, I don’t think people should be afraid

given that most of us in this world

understand that we need to change something, right?

So given that.

Now, if things don’t change, be very afraid.

What is the dimension of change that’s needed?

So changing, thinking about the ramifications,

thinking about like the ethics,

thinking about, like, the conversation that’s going on, right?

It’s no longer a we’re gonna deploy it

and forget that this is a car that can kill pedestrians

that are walking across the street, right?

We’re not in that stage.

We’re putting these out on the roads.

There are people out there.

A car could be a weapon.

Like people are now, solutions aren’t there yet,

but people are thinking about this

as we need to be ethically responsible

as we send these systems out,

robotics, medical, self driving.

And military too.

And military.

Which is not as often talked about,

but it’s really where probably these robots

will have a significant impact as well.

Correct, correct.

Right, making sure that they can think rationally,

even having the conversations about

who should pull the trigger, right?

But overall you’re saying if we start to think

more and more as a community about these ethical issues,

people should not be afraid.

Yeah, I don’t think people should be afraid.

I think that the return on investment,

the impact, positive impact will outweigh

any of the potentially negative impacts.

Do you have worries of existential threats

of robots or AI that some people kind of talk about

and romanticize about in the next decade,

the next few decades?

No, I don’t.

Singularity would be an example.

So my concept is that, so remember,

robots, AI, is designed by people.

It has our values.

And I always correlate this with a parent and a child.

So think about it, as a parent, what do we want?

We want our kids to have a better life than us.

We want them to expand.

We want them to experience the world.

And then as we grow older, our kids think and know

they’re smarter and better and more intelligent

and have better opportunities.

And they may even stop listening to us.

They don’t go out and then kill us, right?

Like, think about it.

It’s because we instilled values in them.

We instilled in them this whole aspect of community.

And yes, even though you’re maybe smarter

and have more money and dah, dah, dah,

it’s still about this love, caring relationship.

And so that’s what I believe.

So even if like, you know,

we’ve created the singularity in some archaic system

back in like 1980 that suddenly evolves,

the fact is it might say, I am smarter, I am sentient.

These humans are really stupid,

but I think it’ll be like, yeah,

but I just can’t destroy them.

Yeah, for sentimental value.

It’ll still just come back for Thanksgiving dinner

every once in a while.

Exactly.

That’s such, that’s so beautifully put.

You’ve also said that The Matrix may be

one of your favorite AI related movies.

Can you elaborate why?

Yeah, it is one of my favorite movies.

And it’s because it represents

kind of all the things I think about.

So there’s a symbiotic relationship

between robots and humans, right?

That symbiotic relationship is that they don’t destroy us,

they enslave us, right?

But think about it,

even though they enslaved us,

they needed us to be happy, right?

And in order to be happy,

they had to create this cruddy world

that they then had to live in, right?

That’s the whole premise.

But then there were humans that had a choice, right?

Like you had a choice to stay in this horrific,

horrific world where it was your fantasy life

with all of the amenities, perfection, but not accurate.

Or you can choose to be on your own

and like have maybe no food for a couple of days,

but you were totally autonomous.

And so I think of that as, and that’s why.

So it’s not necessarily us being enslaved,

but I think about us having the symbiotic relationship.

Robots and AI, even if they become sentient,

they’re still part of our society

and they will suffer just as much as we.

And there will be some kind of equilibrium

that we’ll have to find some symbiotic relationship.

Right, and then you have the ethicists,

the robotics folks that are like,

no, this has got to stop, I will take the other pill

in order to make a difference.

So if you could hang out for a day with a robot,

real or from science fiction, movies, books, safely,

and get to pick his or her, their brain,

who would you pick?

Gotta say it’s Data.

Data.

I was gonna say Rosie,

but I’m not really interested in her brain.

I’m interested in Data’s brain.

Data pre or post emotion chip?

Pre.

But don’t you think it’d be a more interesting conversation

post emotion chip?

Yeah, it would be drama.

And I’m human, I deal with drama all the time.

But the reason why I wanna pick Data’s brain

is because I could have a conversation with him

and ask, for example, how can we fix this ethics problem?

And he could go through like the rational thinking

and through that, he could also help me

think through it as well.

And so there’s like these fundamental questions

I think I could ask him

that he would help me also learn from.

And that fascinates me.

I don’t think there’s a better place to end it.

Ayanna, thank you so much for talking to us, it was an honor.

Thank you, thank you.

This was fun.

Thanks for listening to this conversation

and thank you to our presenting sponsor, Cash App.

Download it, use code LexPodcast,

you’ll get $10 and $10 will go to FIRST,

a STEM education nonprofit that inspires

hundreds of thousands of young minds

to become future leaders and innovators.

If you enjoy this podcast, subscribe on YouTube,

give it five stars on Apple Podcast,

follow on Spotify, support on Patreon

or simply connect with me on Twitter.

And now let me leave you with some words of wisdom

from Arthur C. Clarke.

Whether we are based on carbon or on silicon

makes no fundamental difference.

We should each be treated with appropriate respect.

Thank you for listening and hope to see you next time.
