Lex Fridman Podcast - #24 - Rosalind Picard: Affective Computing, Emotion, Privacy, and Health

The following is a conversation with Rosalind Picard.

She’s a professor at MIT,

director of the Affective Computing Research Group

at the MIT Media Lab,

and cofounder of two companies, Affectiva and Empatica.

Over two decades ago,

she launched the field of affective computing

with her book of the same name.

This book described the importance of emotion

in artificial and natural intelligence.

and the vital role emotional communication has

in the relationship between people in general

and in human-robot interaction.

I really enjoyed talking with Roz about so many topics,

including emotion, ethics, privacy, wearable computing,

and her recent research in epilepsy,

and even love and meaning.

This conversation is part

of the Artificial Intelligence Podcast.

If you enjoy it, subscribe on YouTube, iTunes,

or simply connect with me on Twitter at Lex Fridman,

spelled F R I D.

And now, here’s my conversation with Rosalind Picard.

More than 20 years ago,

you coined the term affective computing

and led a lot of research in this area since then.

As I understand, the goal is to make the machine detect

and interpret the emotional state of a human being

and adapt the behavior of the machine

based on the emotional state.

So how has your understanding of the problem space

defined by affective computing changed in the past 24 years?

So, in terms of the scope, the applications, the challenges,

what’s involved, how has that evolved over the years?

Yeah, actually, originally,

when I defined the term affective computing,

it was a bit broader than just recognizing

and responding intelligently to human emotion,

although those are probably the two pieces

that we’ve worked on the hardest.

The original concept also encompassed machines

that would have mechanisms

that functioned like human emotion does inside them.

It would be any computing that relates to, arises from,

or deliberately influences human emotion.

So the human computer interaction part

is the part that people tend to see,

like if I’m really ticked off at my computer

and I’m scowling at it and I’m cursing at it

and it just keeps acting smiling and happy

like that little paperclip used to do,

dancing, winking, that kind of thing

just makes you even more frustrated, right?

And I thought that stupid thing needs to see my affect.

And if it’s gonna be intelligent,

which Microsoft researchers had worked really hard on,

it actually had some of the most sophisticated AI

in it at the time,

if that thing’s gonna actually be smart,

it needs to respond to me and you,

and we can send it very different signals.

So by the way, just a quick interruption,

the Clippy, maybe it’s in Word 95, 98,

I don’t remember when it was born,

but for many people, do you find, with that reference,

that people recognize what you’re talking about

still to this point?

I don’t expect the newest students to know it these days,

but I’ve mentioned it to a lot of audiences,

like how many of you know this Clippy thing?

And still the majority of people seem to know it.

So Clippy kind of looks at maybe natural language processing

where you were typing and tries to help you complete,

I think.

I don’t even remember what Clippy was, except annoying.

Yeah, some people actually liked it.

I would hear those stories.

You miss it?

Well, I miss the annoyance.

It felt like there’s an element.

Someone was there.

Somebody was there and we were in it together

and they were annoying.

It’s like a puppy that just doesn’t get it.

They keep ripping up the couch, kind of thing.

And in fact, they could have done it smarter like a puppy.

If they had done, like if when you yelled at it

or cursed at it,

if it had put its little ears back and its tail down

and slunk off,

probably people would have wanted it back, right?

But instead, when you yelled at it, what did it do?

It smiled, it winked, it danced, right?

If somebody comes to my office and I yell at them,

they start smiling, winking and dancing.

I’m like, I never want to see you again.

So Bill Gates got a standing ovation

when he said it was going away

because people were so ticked.

It was so emotionally unintelligent, right?

It was intelligent about whether you were writing a letter,

what kind of help you needed for that context.

It was completely unintelligent about,

hey, if you’re annoying your customer,

don’t smile in their face when you do it.

So that kind of mismatch was something

the developers just didn’t think about.

And intelligence at the time was really all about math

and language and chess and games,

problems that could be pretty well defined.

Social emotional interaction is much more complex

than chess or Go or any of the games

that people are trying to solve.

And understanding that required skills

that most people in computer science

were actually lacking personally.

Well, let’s talk about computer science.

Have things gotten better since the work,

since the message,

since you’ve really launched the field

with a lot of research work in this space?

I still find as a person like yourself,

who’s deeply passionate about human beings

and yet is in computer science,

there still seems to be a lack of,

sorry to say, empathy among us computer scientists.

Yeah, well.

Or hasn’t gotten better.

Let’s just say there’s a lot more variety

among computer scientists these days.

Computer scientists are a much more diverse group today

than they were 25 years ago.

And that’s good.

We need all kinds of people to become computer scientists

so that computer science reflects more what society needs.

And there’s brilliance among every personality type.

So it need not be limited to people

who prefer computers to other people.

How hard do you think it is?

Your view of how difficult it is to recognize emotion

or to create a deeply emotionally intelligent interaction.

Has it gotten easier or harder

as you’ve explored it further?

And how far away are we from cracking this?

If you think of the Turing test for solving intelligence,

looking at a Turing test for emotional intelligence.

I think it is as difficult as I thought it was gonna be.

I think my prediction of its difficulty is spot on.

I think the time estimates are always hard

because they’re always a function of society’s love

and hate of a particular topic.

If society gets excited and you get thousands of researchers

working on it for a certain application,

that application gets solved really quickly.

The general intelligence,

the computer’s complete lack of ability

to have awareness of what it’s doing,

the fact that it’s not conscious,

the fact that there’s no signs of it becoming conscious,

the fact that it doesn’t read between the lines,

those kinds of things that we have to teach it explicitly,

what other people pick up implicitly.

We don’t see that changing yet.

There aren’t breakthroughs yet that lead us to believe

that that’s gonna go any faster,

which means that it’s still gonna be kind of stuck

with a lot of limitations

where it’s probably only gonna do the right thing

in very limited, narrow, prespecified contexts

where we can prescribe pretty much

what’s gonna happen there.

So I don’t see the,

it’s hard to predict a date

because when people don’t work on it, it’s infinite.

When everybody works on it, you get a nice piece of it

well solved in a short amount of time.

I actually think there’s a more important issue right now

than the difficulty of it.

And that’s causing some of us

to put the brakes on a little bit.

Usually we’re all just like step on the gas,

let’s go faster.

This is causing us to pull back and put the brakes on.

And that’s the way that some of this technology

is being used in places like China right now.

And that worries me so deeply

that it’s causing me to pull back myself

on a lot of the things that we could be doing.

And try to get the community to think a little bit more

about, okay, if we’re gonna go forward with that,

how can we do it in a way that puts in place safeguards

that protects people?

So the technology we’re referring to is

just when a computer senses the human being,

like the human face, right?

So there’s a lot of exciting things there,

like forming a deep connection with the human being.

So what are your worries, how that could go wrong?

Is it in terms of privacy?

Is it in terms of other kinds of more subtle things?

But let’s dig into privacy.

So here in the US, if I’m watching a video

of say a political leader,

and in the US we’re quite free as we all know

to even criticize the president of the United States, right?

Here that’s not a shocking thing.

It happens about every five seconds, right?

But in China, what happens if you criticize

the leader of the government, right?

And so people are very careful not to do that.

However, what happens if you’re simply watching a video

and you make a facial expression

that shows a little bit of skepticism, right?

Well, and here we’re completely free to do that.

In fact, we’re free to fly off the handle

and say anything we want, usually.

I mean, there are some restrictions

when an athlete does this

as part of a national broadcast.

Maybe the teams get a little unhappy

about picking that forum to do it, right?

But that’s more a question of judgment.

We have these freedoms,

and in places that don’t have those freedoms,

what if our technology can read

your underlying affective state?

What if our technology can read it even noncontact?

What if our technology can read it

without your prior consent?

And here in the US,

in my first company we started, Affectiva,

we have worked super hard to turn away money

and opportunities that try to read people’s affect

without their prior informed consent.

And even the software that is licensable,

you have to sign things saying

you will only use it in certain ways,

which essentially is get people’s buy in, right?

Don’t do this without people agreeing to it.

There are other countries where they’re not interested

in people’s buy in.

They’re just gonna use it.

They’re gonna inflict it on you.

And if you don’t like it,

you better not scowl in the direction of any sensors.

So one, let me just comment on a small tangent.

Do you know with the idea of adversarial examples

and deep fakes and so on,

what you bring up is actually,

in that one sense, deep fakes provide

a comforting protection that you can no longer really trust

that the video of your face was legitimate.

And therefore you always have an escape clause

if a government is trying,

if a stable, balanced, ethical government

is trying to accuse you of something,

at least you have protection.

You can say it was fake news, as is a popular term now.

Yeah, that’s the general thinking of it.

We know how to go into the video

and see, for example, your heart rate and respiration

and whether or not they’ve been tampered with.

And we also can put like fake heart rate and respiration

in your video now too.

We decided we needed to do that.

After we developed a way to extract it,

we decided we also needed a way to jam it.

And so the fact that we took time to do that other step too,

that was time that I wasn’t spending

making the machine more affectively intelligent.

And there’s a choice in how we spend our time,

which is now being swayed a little bit less by this goal

and a little bit more like by concern

about what’s happening in society

and what kind of future do we wanna build.

And as we step back and say,

okay, we don’t just build AI to build AI

to make Elon Musk more money

or to make Amazon’s Jeff Bezos more money.

Good gosh, you know, that’s the wrong ethic.

Why are we building it?

What is the point of building AI?

It used to be, it was driven by researchers in academia

to get papers published and to make a career for themselves

and to do something cool, right?

Like, cause maybe it could be done.

Now we realize that this is enabling rich people

to get vastly richer while the poor are left behind,

and the divide is even larger.

And is that the kind of future that we want?

Maybe we wanna think about, maybe we wanna rethink AI.

Maybe we wanna rethink the problems in society

that are causing the greatest inequity

and rethink how to build AI

that’s not about a general intelligence,

but that’s about extending the intelligence

and capability of the have nots

so that we close these gaps in society.

Do you hope that kind of stepping on the brake

happens organically?

Because I think still majority of the force behind AI

is the desire to publish papers,

is to make money without thinking about the why.

Do you hope it happens organically?

Is there room for regulation?

Yeah, yeah, yeah, great questions.

I prefer the, you know,

they talk about the carrot versus the stick.

I definitely prefer the carrot to the stick.

And, you know, in our free world,

we, there’s only so much stick, right?

You’re gonna find a way around it.

I generally think less regulation is better.

That said, even though my position is classically carrot,

no stick, no regulation,

I think we do need some regulations in this space.

I do think we need regulations

around protecting people with their data,

that you own your data, not Amazon, not Google.

I would like to see people own their own data.

I would also like to see the regulations

that we have right now around lie detection

being extended to emotion recognition in general,

that right now you can’t use a lie detector on an employee,

or on a candidate

when you’re interviewing them for a job.

I think similarly, we need to put in place protection

around reading people’s emotions without their consent

and in certain cases,

like characterizing them for a job and other opportunities.

So I’m also, I also think that when we’re reading emotion

that’s predictive around mental health,

that that should, even though it’s not medical data,

that that should get the kinds of protections

that our medical data gets.

What most people don’t know yet

is right now with your smartphone use,

and if you’re wearing a sensor

and you wanna learn about your stress and your sleep

and your physical activity

and how much you’re using your phone

and your social interaction,

all of that nonmedical data,

when we put it together with machine learning,

now called AI, even though the founders of AI

wouldn’t have called it that,

that capability can not only tell that you’re calm right now

or that you’re getting a little stressed,

but it can also predict how you’re likely to be tomorrow.

If you’re likely to be sick or healthy,

happy or sad, stressed or calm.

Especially when you’re tracking data over time.

Especially when we’re tracking a week of your data or more.

Do you have an optimism towards,

you know, a lot of people on our phones

are worried about this camera that’s looking at us.

For the most part, on balance,

are you optimistic about the benefits

that can be brought from that camera

that’s looking at billions of us?

Or should we be more worried?

I think we should be a little bit more worried

about who’s looking at us and listening to us.

The device sitting on your countertop in your kitchen,

whether it’s, you know, Alexa or Google Home or Apple, Siri,

these devices want to listen,

ostensibly, they say, to help us.

And I think there are great people in these companies

who do want to help people.

Let me not brand them all bad.

I’m a user of products from all of these companies.

I’m naming all the A companies: Alphabet, Apple, Amazon.

They are awfully big companies, right?

They have incredible power.

And you know, what if China were to buy them, right?

And suddenly all of that data

were not part of free America,

but all of that data were part of somebody

who just wants to take over the world

and you submit to them.

And guess what happens if you so much as smirk the wrong way

when they say something that you don’t like?

Well, they have reeducation camps, right?

That’s a nice word for them.

By the way, they have a surplus of organs

for people who have surgery these days.

They don’t have an organ donation problem

because they take your blood and they know you’re a match.

And the doctors are on record of taking organs

from people who are perfectly healthy and not prisoners.

They’re just simply not the favored ones of the government.

And you know, that’s a pretty freaky evil society.

And we can use the word evil there.

I was born in the Soviet Union.

I can certainly connect to the worry that you’re expressing.

At the same time, probably both you and I

and you very much so,

you know, there’s an exciting possibility

that you can have a deep connection with a machine.

Yeah, yeah.

Right, so.

Those of us, I’ve admitted students who say that they,

you know, when you list like,

who do you most wish you could have lunch with

or dinner with, right?

And they’ll write like, I don’t like people.

I just like computers.

And one of them said to me once

when I had this party at my house,

I want you to know,

this is my only social event of the year,

my one social event of the year.

Like, okay, now this is a brilliant

machine learning person, right?

And we need that kind of brilliance in machine learning.

And I love that computer science welcomes people

who love people and people who are very awkward

around people.

I love that this is a field that anybody could join.

We need all kinds of people

and you don’t need to be a social person.

I’m not trying to force people who don’t like people

to suddenly become social.

At the same time,

if most of the people building the AIs of the future

are the kind of people who don’t like people,

we’ve got a little bit of a problem.

Well, hold on a second.

So let me push back on that.

So don’t you think a large percentage of the world

can, you know, there’s loneliness.

There is a huge problem with loneliness that’s growing.

And so there’s a longing for connection.

Do you…

If you’re lonely, you’re part of a big and growing group.

Yes.

So we’re in it together, I guess.

If you’re lonely, join the group.

You’re not alone.

That’s a good line.

But do you think there’s…

You talked about some worry,

but do you think there’s an exciting possibility

that something like Alexa and these kinds of tools

can alleviate that loneliness

in a way that other humans can’t?

Yeah, yeah, definitely.

I mean, a great book can kind of alleviate loneliness

because you just get sucked into this amazing story

and you can’t wait to go spend time with that character.

And they’re not a human character.

There is a human behind it.

But yeah, it can be an incredibly delightful way

to pass the hours and it can meet needs.

Even, you know, I don’t read those trashy romance books,

but somebody does, right?

And what are they getting from this?

Well, probably some of that feeling of being there, right?

Being there in that social moment,

that romantic moment or connecting with somebody.

I’ve had a similar experience

reading some science fiction books, right?

And connecting with the character.

Orson Scott Card, you know, just amazing writing

and Ender’s Game and Speaker for the Dead, terrible title.

But those kind of books that pull you into a character

and you feel like you’re, you feel very social.

It’s very connected, even though it’s not responding to you.

And a computer, of course, can respond to you.

So it can deepen it, right?

You can have a very deep connection,

much more than the movie Her, you know, plays up, right?

Well, much more.

I mean, movie Her is already a pretty deep connection, right?

Well, but it’s just a movie, right?

It’s scripted.

It’s just, you know, but I mean,

like there can be a real interaction

where the character can learn and you can learn.

You could imagine it not just being you and one character.

You could imagine a group of characters.

You can imagine a group of people and characters,

human and AI connecting,

where maybe a few people can’t sort of be friends

with everybody, but the few people

and their AIs can befriend more people.

There can be an extended human intelligence in there

where each human can connect with more people that way.

But it’s still very limited, but there are just,

what I mean is there are many more possibilities

than what’s in that movie.

So there’s a tension here.

So one, you expressed a really serious concern

about privacy, about how governments

can misuse the information,

and there’s the possibility of this connection.

So let’s look at Alexa.

So personal assistance.

For the most part, as far as I’m aware,

they ignore your emotion.

They ignore even the context or the existence of you,

the intricate, beautiful, complex aspects of who you are,

except maybe aspects of your voice

that help it recognize for speech recognition.

Do you think they should move towards

trying to understand your emotion?

All of these companies are very interested

in understanding human emotion.

They want to.

More people are telling Siri every day

that they want to kill themselves.

Apple wants to know the difference between

if a person is really suicidal versus if a person

is just kind of fooling around with Siri, right?

The words may be the same, the tone of voice

and what surrounds those words is pivotal to understand

if they should respond in a very serious way,

bring help to that person,

or if they should kind of jokingly tease back,

ah, you just want to sell me for something else, right?

Like, how do you respond when somebody says that?

Well, you do want to err on the side of being careful

and taking it seriously.

People want to know if the person is happy or stressed

in part, well, so let me give you an altruistic reason

and a business profit motivated reason.

And there are people in companies that operate

on both principles.

The altruistic people really care about their customers

and really care about helping you feel a little better

at the end of the day.

And it would just make those people happy

if they knew that they made your life better.

If you came home stressed and after talking

with their product, you felt better.

There are other people who maybe have studied

the way affect affects decision making

and prices people pay.

And they know, I don’t know if I should tell you,

like the work of Jen Lerner on heartstrings and purse strings,

you know, if we manipulate you into a slightly sadder mood,

you’ll pay more, right?

You’ll pay more to change your situation.

You’ll pay more for something you don’t even need

to make yourself feel better.

So, you know, if they sound a little sad,

maybe I don’t want to cheer them up.

Maybe first I want to help them get something,

a little shopping therapy, right?

That helps them.

Which is really difficult for a company

that’s primarily funded on advertisement.

So they’re encouraged to offer you products,

or Amazon that’s primarily funded

on you buying things from their store.

So I think we should be, you know,

maybe we need regulation in the future

to put a little bit of a wall between these agents

that have access to our emotion

and agents that want to sell us stuff.

Maybe there needs to be a little bit more

of a firewall in between those.

So maybe digging in a little bit

on the interaction with Alexa,

you mentioned, of course, a really serious concern

about like recognizing emotion,

if somebody is speaking of suicide or depression and so on,

but what about the actual interaction itself?

Do you think, so if I, you know,

you mentioned Clippy and being annoying,

what is the objective function we’re trying to optimize?

Is it to minimize annoyingness or maximize happiness?

Or if we look at human to human relations,

I think that push and pull, the tension, the dance,

you know, the annoying, the flaws, that’s what makes it fun.

So is there a room for, like what is the objective function?

There are times when you want to have a little push and pull,

I think of kids sparring, right?

You know, I see my sons and they,

one of them wants to provoke the other to be upset

and that’s fun.

And it’s actually healthy to learn where your limits are,

to learn how to self regulate.

You can imagine a game where it’s trying to make you mad

and you’re trying to show self control.

And so if we’re doing an AI-human interaction

that’s helping build resilience and self control,

whether it’s to learn how to not be a bully

or how to turn the other cheek

or how to deal with an abusive person in your life,

then you might need an AI that pushes your buttons, right?

But in general, do you want an AI that pushes your buttons?

Probably depends on your personality.

I don’t, I want one that’s respectful,

that is there to serve me

and that is there to extend my ability to do things.

I’m not looking for a rival,

I’m looking for a helper.

And that’s the kind of AI I’d put my money on.

Your sense is for the majority of people in the world,

in order to have a rich experience,

that’s what they’re looking for as well.

So they’re not looking,

if you look at the movie Her, spoiler alert,

I believe the program, the woman in the movie Her,

leaves the person for somebody else,

says they don’t wanna be dating anymore, right?

Like, do you, your sense is if Alexa said,

you know what, I’ve actually had enough of you for a while,

so I’m gonna shut myself off.

You don’t see that as…

I’d say you’re trash, cause I paid for you, right?

We’ve got to remember,

and this is where this blending of human and AI

as if we’re equals is really deceptive,

because AI is something at the end of the day

that my students and I are making in the lab.

And we’re choosing what it’s allowed to say,

when it’s allowed to speak, what it’s allowed to listen to,

what it’s allowed to act on given the inputs

that we choose to expose it to,

what outputs it’s allowed to have.

It’s all something made by a human.

And if we wanna make something

that makes our lives miserable, fine.

I wouldn’t invest in it as a business,

unless it’s just there for self regulation training.

But I think we need to think about

what kind of future we want.

And actually your question, I really like the,

what is the objective function?

Is it to calm people down?

Sometimes.

Is it to always make people happy and calm them down?

Well, there was a book about that, right?

Brave New World, make everybody happy,

take your Soma if you’re unhappy, take your happy pill.

And if you refuse to take your happy pill,

well, we’ll threaten you by sending you to Iceland

to live there.

I lived in Iceland three years.

It’s a great place.

Don’t take your Soma, then go to Iceland.

A little TV commercial there.

Now I was a child there for a few years.

It’s a wonderful place.

So that part of the book never scared me.

But really like, do we want AI to manipulate us

into submission, into making us happy?

Well, if you are a, you know,

like a power obsessed sick dictator individual

who only wants to control other people

to get your jollies in life, then yeah,

you wanna use AI to extend your power and your scale

to force people into submission.

If you believe that the human race is better off

being given freedom and the opportunity

to do things that might surprise you,

then you wanna use AI to extend people’s ability to build,

you wanna build AI that extends human intelligence,

that empowers the weak and helps balance the power

between the weak and the strong,

not that gives more power to the strong.

So in this process of empowering people and sensing people,

what is your sense on emotion

in terms of recognizing emotion?

The difference between emotion that is shown

and emotion that is felt.

So yeah, emotion that is expressed on the surface

through your face, your body, and various other things,

and what’s actually going on deep inside

on the biological level, on the neuroscience level,

or some kind of cognitive level.

Yeah, yeah.

Whoa, no easy questions here.

Well, yeah, I’m sure there’s no definitive answer,

but what’s your sense?

How far can we get by just looking at the face?

We’re very limited when we just look at the face,

but we can get further than most people think we can get.

People think, hey, I have a great poker face,

therefore all you’re ever gonna get from me is neutral.

Well, that’s naive.

We can read with the ordinary camera

on your laptop or on your phone.

We can read from a neutral face if your heart is racing.

We can read from a neutral face

if your breathing is becoming irregular

and showing signs of stress.

We can read under some conditions

that maybe I won’t give you details on,

how your heart rate variability power is changing.

That could be a sign of stress,

even when your heart rate is not necessarily accelerating.

So…

Sorry, from physio sensors or from the face?

From the color changes that you cannot even see,

but the camera can see.

That’s amazing.

So you can get a lot of signal, but…

So we get things people can’t see using a regular camera.

And from that, we can tell things about your stress.

So if you were just sitting there with a blank face

thinking nobody can read my emotion, well, you’re wrong.

Right, so that’s really interesting,

but that’s from sort of visual information from the face.

That’s almost like cheating your way

to the physiological state of the body,

by being very clever with what you can do with vision.

With signal processing.
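(A note on the signal processing Picard alludes to: this is remote photoplethysmography, where tiny frame-to-frame color changes in facial skin track the cardiac pulse. A minimal Python sketch of the idea follows, assuming NumPy and a facial skin region that has already been located and tracked; real systems add far more robust motion and lighting compensation, and this is not Affectiva’s or the Media Lab’s actual code.)

```python
import numpy as np
from numpy.fft import rfft, rfftfreq

def estimate_heart_rate_bpm(frames, fps, face_box):
    """Rough rPPG sketch: track the mean green-channel intensity of a
    facial skin region over time, then find the dominant frequency in
    the plausible pulse band.

    frames   -- sequence of HxWx3 uint8 color images
    fps      -- camera frame rate in Hz
    face_box -- (x, y, w, h) of a tracked skin region (assumed given)
    """
    x, y, w, h = face_box
    # The green channel carries the strongest blood-volume pulse signal.
    signal = np.array([f[y:y + h, x:x + w, 1].mean() for f in frames])

    # Subtract a moving average to remove slow drift from lighting and motion.
    signal = signal - np.convolve(signal, np.ones(31) / 31, mode="same")

    # Pick the strongest frequency between 0.7 and 4 Hz (42-240 beats/min).
    freqs = rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```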

So that’s really impressive.

But if you just look at the stuff we humans can see,

the poker, the smile, the smirks,

the subtle, all the facial actions.

So then you can hide that on your face

for a limited amount of time.

Now, if you’re just going in for a brief interview

and you’re hiding it, that’s pretty easy for most people.

If you are, however, surveilled constantly everywhere you go,

then it’s gonna say, gee, you know, Lex used to smile a lot

and now I’m not seeing so many smiles.

And Roz used to laugh a lot

and smile a lot very spontaneously.

And now I’m only seeing

these not so spontaneous looking smiles.

And only when she’s asked these questions.

You know, something’s changed here.

Probably not getting enough sleep.

We could look at that too.

So now I have to be a little careful too.

When I say we can read your emotion

and you think we can’t, it’s not that binary.

What we’re reading is more some physiological changes

that relate to your activation.

Now, that doesn’t mean that we know everything

about how you feel.

In fact, we still know very little about how you feel.

Your thoughts are still private.

Your nuanced feelings are still completely private.

We can’t read any of that.

So there’s some relief that we can’t read that.

Even brain imaging can’t read that.

Wearables can’t read that.

However, as we read your body state changes

and we know what’s going on in your environment

and we look at patterns of those over time,

we can start to make some inferences

about what you might be feeling.

And that is where it’s not just the momentary feeling

but it’s more your stance toward things.

And that could actually be a little bit more scary

with certain kinds of governmental control freak people

who want to know more about are you on their team

or are you not?

And getting that information over time.

So you’re saying there’s a lot of signal

by looking at the change over time.

Yeah.

So you’ve done a lot of exciting work

both in computer vision

and physiological sense like wearables.

What do you think is the best modality for,

what’s the best window into the emotional soul?

Is it the face?

Is it the voice?

Depends what you want to know.

It depends what you want to know.

Everything is informative.

Everything we do is informative.

So for health and wellbeing and things like that,

do you find the wearable,

measuring physiological signals,

is the best for health-based stuff?

So here I’m going to answer empirically

with data and studies we’ve been doing.

We’ve been doing studies.

Now these are currently running

with lots of different kinds of people

but where we’ve published data

and I can speak publicly to it,

the data are limited right now

to New England college students.

So that’s a small group.

Among New England college students,

when they are wearing a wearable

like the Empatica Embrace here

that’s measuring skin conductance, movement, temperature.

And when they are using a smartphone

that is collecting their time of day

of when they’re texting, who they’re texting,

their movement, their GPS,

the weather information based upon their location.

And when it’s using machine learning

and putting all of that together

and looking not just at right now

but looking at your rhythm of behaviors

over about a week.

When we look at that,

we are very accurate at forecasting tomorrow’s stress,

mood, happy versus sad mood, and health.

And when we look at which pieces of that are most useful,

first of all, if you have all the pieces,

you get the best results.

If you have only the wearable,

you get the next best results.

And that’s still better than 80% accurate

at forecasting tomorrow’s levels.
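(To make that pipeline concrete: this is tabular supervised learning, with a week of aggregated wearable and phone features per person-day and the next day’s state as the label. A sketch with scikit-learn follows, using randomly generated placeholder data; the feature set and model choice are illustrative assumptions, not the published study’s actual pipeline.)

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for per-day features aggregated over the
# prior week: skin conductance, movement, temperature (wearable), plus
# texting, mobility, and weather (phone). One row per person-day.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # hypothetical feature matrix
y = rng.integers(0, 2, size=200)   # hypothetical label: 1 = stressed tomorrow

model = RandomForestClassifier(n_estimators=100, random_state=0)
# A real evaluation would split by participant rather than by row, so the
# model is always tested on people it has never seen.
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```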

Isn’t that exciting because the wearable stuff

with physiological information,

it feels like it violates privacy less

than the noncontact, face-based methods?

Yeah, it’s interesting.

I think what people sometimes don’t,

it’s funny in the early days people would say,

oh, wearing something or giving blood is invasive, right?

Whereas a camera is less invasive

because it’s not touching you.

I think on the contrary,

the things that are not touching you are maybe the scariest

because you don’t know when they’re on or off.

And you don’t know who’s behind it, right?

A wearable, depending upon what’s happening

to the data on it, if it’s just stored locally

or if it’s streaming and what it is being attached to,

in a sense, you have the most control over it

because it’s also very easy to just take it off, right?

Now it’s not sensing me.

So if I’m uncomfortable with what it’s sensing,

now I’m free, right?

If I’m comfortable with what it’s sensing,

then, and I happen to know everything about this one

and what it’s doing with it,

so I’m quite comfortable with it,

then I have control, I’m comfortable.

Control is one of the biggest factors for an individual

in reducing their stress.

If I have control over it,

if I know all there is to know about it,

then my stress is a lot lower

and I’m making an informed choice

about whether to wear it or not,

or when to wear it or not.

I wanna wear it sometimes, maybe not others.

Right, so that control, yeah, I’m with you.

That control, even if, yeah, the ability to turn it off,

that is a really important thing.

It’s huge.

And maybe, if there’s regulations,

maybe that’s number one to protect:

people’s ability to opt out as easily as to opt in.

Right, so you’ve studied a bit of neuroscience as well.

How has looking at our own minds,

sort of the biological stuff or the neurobiological,

the neuroscience of the signals in our brain,

helped you understand the problem

and the approach of affective computing?

Originally, I was a computer architect

and I was building hardware and computer designs

and I wanted to build ones that worked like the brain.

So I’ve been studying the brain

as long as I’ve been studying how to build computers.

Have you figured out anything yet?

Very little.

It’s so amazing.

You know, they used to think like,

oh, if you remove this chunk of the brain

and you find this function goes away,

well, that’s the part of the brain that did it.

And then later they realized

if you remove this other chunk of the brain,

that function comes back and,

oh no, we really don’t understand it.

Brains are so interesting and changing all the time

and able to change in ways

that will probably continue to surprise us.

When we were measuring stress,

you may know the story where we found

an unusual big skin conductance pattern on one wrist

in one of our kids with autism.

And in trying to figure out how on earth

you could be stressed on one wrist and not the other,

like how can you get sweaty on one wrist, right?

When you get stressed

with that sympathetic fight or flight response,

like you kind of should like sweat more

in some places than others,

but not more on one wrist than the other.

That didn’t make any sense.

We learned that what had actually happened

was a part of his brain had unusual electrical activity

and that caused an unusually large sweat response

on one wrist and not the other.

And since then we’ve learned

that seizures cause this unusual electrical activity.

And depending where the seizure is,

if it’s in one place and it’s staying there,

you can have a big electrical response

we can pick up with a wearable at one part of the body.

You can also have a seizure

that spreads over the whole brain,

a generalized grand mal seizure.

And that response spreads

and we can pick it up pretty much anywhere.
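(For context on how a wristband can act on this: detectors of generalized tonic-clonic seizures typically fuse an electrodermal surge like the one described with accelerometer evidence of convulsive movement. A toy sketch of that fusion follows; the window length and thresholds are invented for illustration, and this is not Empatica’s FDA-cleared algorithm.)

```python
import numpy as np

def flag_possible_seizures(eda, accel_mag, fs, eda_jump=5.0, motion_rms=2.0):
    """Toy detector: over 30-second windows, alarm only when a large
    skin conductance rise coincides with sustained vigorous movement.

    eda       -- skin conductance samples (microsiemens), numpy array
    accel_mag -- accelerometer magnitude samples (g), same sampling rate
    fs        -- sampling rate in Hz; thresholds are illustrative only
    """
    window = int(30 * fs)
    alarm_times = []
    for start in range(0, len(eda) - window + 1, window):
        e = eda[start:start + window]
        a = accel_mag[start:start + window]
        surge = (e.max() - e.min()) > eda_jump           # big EDA rise
        shaking = np.sqrt(np.mean(a ** 2)) > motion_rms  # vigorous motion
        if surge and shaking:
            alarm_times.append(start / fs)               # seconds into record
    return alarm_times
```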

As we learned this and then later built Embrace

that’s now FDA cleared for seizure detection,

we have also built relationships

with some of the most amazing doctors in the world

who not only help people

with unusual brain activity or epilepsy,

but some of them are also surgeons

and they’re going in and they’re implanting electrodes,

not just to momentarily read the strange patterns

of brain activity that we’d like to see return to normal,

but also to read out continuously what’s happening

in some of these deep regions of the brain

during most of life when these patients are not seizing.

Most of the time they’re not seizing,

most of the time they’re fine.

And so we are now working on mapping

those deep brain regions

that you can’t even usually get with EEG scalp electrodes

because the changes deep inside don’t reach the surface.

But interesting when some of those regions

are activated, we see a big skin conductance response.

Who would have thunk it, right?

Like nothing here, but something here.

In fact, right after seizures

that we think are the most dangerous ones

that precede what’s called SUDEP,

Sudden Unexpected Death in Epilepsy,

there’s a period where the brainwaves go flat

and it looks like the person’s brain has stopped,

but it hasn’t.

The activity has gone deep into a region

that can make the cortical activity look flat,

like a quick shutdown signal here.

It can unfortunately cause breathing to stop

if it progresses long enough.

Before that happens, we see a big skin conductance response

in the data that we have.

The longer this flattening, the bigger our response here.

So we have been trying to learn, you know, initially,

like why are we getting a big response here

when there’s nothing here?

Well, it turns out there’s something much deeper.

So we can now go inside the brains

of some of these individuals, fabulous people

who usually aren’t seizing,

and get this data and start to map it.

So that’s the active research that we’re doing right now

with top medical partners.

So this wearable sensor that’s looking at skin conductance

can capture sort of the ripples of the complexity

of what’s going on in our brain.

So this little device, you have a hope

that you can start to get the signal

from the interesting things happening in the brain.

Yeah, we’ve already published the strong correlations

between the size of this response

and the flattening that happens afterwards.

And unfortunately, also in a real SUDEP case

where the patient died because, well, we don’t know why.

We don’t know that if somebody had been there,

it would have definitely been prevented.

But we know that most SUDEPs happen

when the person’s alone.

And in this case, SUDEP is an acronym, S-U-D-E-P.

And it’s actually the number two cause

of years of life lost

among all neurological disorders.

Stroke is number one, SUDEP is number two,

but most people haven’t heard of it.

Actually, I’ll plug my TED talk,

it’s on the front page of TED right now

that talks about this.

And we hope to change that.

I hope everybody who’s heard of SIDS and stroke

will now hear of SUDEP

because we think in most cases it’s preventable

if people take their meds and aren’t alone

when they have a seizure.

Not guaranteed to be preventable.

There are some exceptions,

but we think most cases probably are.

So you have this Embrace now, in the version two wristband,

right, for epilepsy management.

That’s the one that’s FDA approved?

Yes.

Which is kind of unclear.

FDA cleared, they say.

Sorry.

No, it’s okay.

It essentially means it’s approved for marketing.

Got it.

Just a side note, how difficult is that to do?

It’s essentially getting FDA approval

for computer science technology.

It’s so agonizing.

It’s much harder than publishing multiple papers

in top medical journals.

Yeah, we’ve published peer reviewed

top medical journal neurology, best results,

and that’s not good enough for the FDA.

Is that system,

so if we look at the peer review of medical journals,

there’s flaws, there’s strengths,

is the FDA approval process,

how does it compare to the peer review process?

Does it have the strength?

I’ll take peer review over FDA any day.

But is that a good thing?

Is that a good thing for FDA?

You’re saying, does it stop some amazing technology

from getting through?

Yeah, it does.

The FDA performs a very important good role

in keeping people safe.

They keep things,

they put you through tons of safety testing

and that’s wonderful and that’s great.

I’m all in favor of the safety testing.

But sometimes they put you through additional testing

that they don’t have to explain why they put you through it

and you don’t understand why you’re going through it

and it doesn’t make sense.

And that’s very frustrating.

And maybe they have really good reasons

and it would do people a service

to articulate those reasons.

Be more transparent.

So as part of Empatica, you have sensors.

So what kind of problems can we crack?

What kind of things from seizures to autism

to I think I’ve heard you mentioned depression.

What kind of things can we alleviate?

Can we detect?

What’s your hope of what,

how we can make the world a better place

with this wearable tech?

I would really like to see my fellow brilliant researchers

step back and say, what are the really hard problems

that we don’t know how to solve

that come from people maybe we don’t even see

in our normal life because they’re living

in the poor places.

They’re stuck on the bus.

They can’t even afford the Uber or the Lyft

or the data plan or all these other wonderful things

we have that we keep improving on.

Meanwhile, there’s all these folks left behind in the world

and they’re struggling with horrible diseases

with depression, with epilepsy, with diabetes,

with just awful stuff that maybe a little more time

and attention hanging out with them

and learning what are their challenges in life?

What are their needs?

How do we help them have job skills?

How do we help them have a hope and a future

and a chance to have the great life

that so many of us building technology have?

And then how would that reshape the kinds of AI

that we build? How would that reshape the new apps

that we build, or maybe we need to focus

on how to make things more low cost and green

instead of thousand dollar phones?

I mean, come on, why can’t we be thinking more

about things that do more with less for these folks?

Quality of life is not related to the cost of your phone.

It’s been shown that above about $75,000 of income,

happiness is the same, okay?

However, I can tell you, you get a lot of happiness

from helping other people.

You get a lot more than $75,000 buys.

So how do we connect up the people who have real needs

with the people who have the ability to build the future

and build the kind of future that truly improves the lives

of all the people that are currently being left behind?

So let me return just briefly to a point,

maybe in the movie Her.

So do you think if we look farther into the future,

you said so much of the benefit from making our technology

more empathetic to us human beings would make them

better tools, empower us, make our lives better.

Well, if we look farther into the future,

do you think we’ll ever create an AI system

that we can fall in love with?

That we can fall in love with and loves us back

on a level that is similar to human to human interaction,

like in the movie Her or beyond?

I think we can simulate it in ways that could,

you know, sustain engagement for a while.

Would it be as good as another person?

I don’t think so, if you’re used to like good people.

Now, if you’ve just grown up with nothing but abuse

and you can’t stand human beings,

can we do something that helps you there

that gives you something through a machine?

Yeah, but that’s a pretty low bar, right?

If you’ve only encountered pretty awful people.

If you’ve encountered wonderful, amazing people,

we’re nowhere near building anything like that.

And I would not bet on building it.

I would bet instead on building the kinds of AI

that helps kind of raise all boats,

that helps all people be better people,

helps all people figure out if they’re getting sick tomorrow

and helps give them what they need to stay well tomorrow.

That’s the kind of AI I wanna build

that improves human lives,

not the kind of AI that just walks on The Tonight Show

and people go, wow, look how smart that is.

Really?

And then it goes back in a box, you know?

So on that point,

if we continue looking a little bit into the future,

do you think an AI that’s empathetic

and does improve our lives

need to have a physical presence, a body?

And even, let me cautiously say, the C word, consciousness,

and even fear of mortality.

So some of those human characteristics,

do you think it needs to have those aspects

or can it remain simply a machine learning tool

that learns from data of behavior

that learns to make us,

based on previous patterns, feel better?

Or does it need those elements of consciousness?

It depends on your goals.

If you’re making a movie, it needs a body.

It needs a gorgeous body.

It needs to act like it has consciousness.

It needs to act like it has emotion, right?

Because that’s what sells.

That’s what’s gonna get me to show up and enjoy the movie.

Okay.

In real life, does it need all that?

Well, if you’ve read Orson Scott Card,

Ender’s Game, Speaker for the Dead,

it could just be like a little voice in your earring, right?

And you could have an intimate relationship

and it could get to know you.

And it doesn’t need to be a robot.

But that doesn’t make this compelling of a movie, right?

I mean, we already think it’s kind of weird

when a guy looks like he’s talking to himself on the train,

even though it’s earbuds.

So we have these. Embodied is more powerful.

Embodied, when you compare interactions

with an embodied robot versus a video of a robot

versus no robot, the robot is more engaging.

The robot gets our attention more.

The robot, when you walk in your house,

is more likely to get you to remember to do the things

that you asked it to do,

because it’s kind of got a physical presence.

You can avoid it if you don’t like it.

It could see you’re avoiding it.

There’s a lot of power to being embodied.

There will be embodied AIs.

They have great power and opportunity and potential.

There will also be AIs that aren’t embodied,

that just are little software assistants

that help us with different things

that may get to know things about us.

Will they be conscious?

There will be attempts to program them

to make them appear to be conscious.

We can already write programs that make it look like,

oh, what do you mean?

Of course I’m aware that you’re there, right?

I mean, it’s trivial to say stuff like that.

It’s easy to fool people,

but does it actually have conscious experience like we do?

Nobody has a clue how to do that yet.

That seems to be something that is beyond

what any of us knows how to build now.

Will it have to have that?

I think you can get pretty far

with a lot of stuff without it.

But will we accord it rights?

Well, that’s more a political game

than it is a question of real consciousness.

Yeah, can you go to jail for turning off Alexa?

That’s the question for an election maybe a few decades from now.

Well, Sophia the robot’s already been given rights

as a citizen in Saudi Arabia, right?

Even before women have full rights.

Then the robot was still put back in the box

to be shipped to the next place

where it would get a paid appearance, right?

Yeah, it’s dark and almost comedic, if not absurd.

So I’ve heard you speak about your journey in finding faith.

Sure.

And how you discovered some wisdoms about life

and beyond from reading the Bible.

And I’ve also heard you say that,

you said scientists too often assume

that nothing exists beyond what can be currently measured.

Yeah, materialism.

Materialism.

And scientism, yeah.

So in some sense, this assumption enables

the near term scientific method,

assuming that we can uncover the mysteries of this world

by the mechanisms of measurement that we currently have.

But we easily forget that we’ve made this assumption.

So what do you think we miss out on

by making that assumption?

It’s fine to limit the scientific method

to things we can measure and reason about and reproduce.

That’s fine.

I think we have to recognize

that sometimes we scientists also believe

in things that happen historically.

Like I believe the Holocaust happened.

I can’t prove events from past history scientifically.

You prove them with historical evidence, right?

With the impact they had on people,

with eyewitness testimony and things like that.

So a good thinker recognizes that science

is one of many ways to get knowledge.

It’s not the only way.

And there’s been some really bad philosophy

and bad thinking recently, you can call it scientism,

where people say science is the only way to get to truth.

And it’s not, it just isn’t.

There are other ways that work also.

Like knowledge of love with someone.

You don’t prove your love through science, right?

So history, philosophy, love,

a lot of other things in life show us

that there’s more ways to gain knowledge and truth

if you’re willing to believe there is such a thing,

and I believe there is, than science.

I do, I am a scientist, however.

And in my science, I do limit my science

to the things that the scientific method can do.

But I recognize that it’s myopic

to say that that’s all there is.

Right, there’s, just like you listed,

there’s all the why questions.

And really we know, if we’re being honest with ourselves,

the percent of what we really know is basically zero

relative to the full mystery of the…

Measure theory, a set of measure zero,

if I have a finite amount of knowledge, which I do.

So you said that you believe in truth.

So let me ask that old question.

What do you think this thing is all about?

What’s the life on earth?

Life, the universe, and everything?

And everything, what’s the meaning?

I can quote Douglas Adams: 42.

It’s my favorite number.

By the way, that’s my street address.

My husband and I guessed the exact same number

for our house, we got to pick it.

And there’s a reason we picked 42, yeah.

So is it just 42 or is there,

do you have other words that you can put around it?

Well, I think there’s a grand adventure

and I think this life is a part of it.

I think there’s a lot more to it than meets the eye

and the heart and the mind and the soul here.

I think we see but through a glass dimly in this life.

We see only a part of all there is to know.

If people haven’t read the Bible, they should,

if they consider themselves educated

and you could read Proverbs

and find tremendous wisdom in there

that cannot be scientifically proven.

But when you read it, there’s something in you,

like a musician knows when the instrument’s played right

and it’s beautiful.

There’s something in you that comes alive

and knows that there’s a truth there

that it’s like your strings are being plucked by the master

instead of by me, right, playing when I pluck it.

But probably when you play, it sounds spectacular, right?

And when you encounter those truths,

there’s something in you that sings

and knows that there is more

than what I can prove mathematically

or program a computer to do.

Don’t get me wrong, the math is gorgeous.

The computer programming can be brilliant.

It’s inspiring, right?

We wanna do more.

None of this squashes my desire to do science

or to get knowledge through science.

I’m not dissing the science at all.

I grow even more in awe of what the science can do

because I’m more in awe of all there is we don’t know.

And really at the heart of science,

you have to have a belief that there’s truth,

that there’s something greater to be discovered.

And some scientists may not wanna use the faith word,

but it’s faith that drives us to do science.

It’s faith that there is truth,

that there’s something to know that we don’t know,

that it’s worth knowing, that it’s worth working hard,

and that there is meaning,

that there is such a thing as meaning,

which by the way, science can’t prove either.

We have to kind of start with some assumptions

that there’s things like truth and meaning.

And these are really questions philosophers own, right?

This is their space,

of philosophers and theologians at some level.

So these are things science,

when people claim that science will tell you all truth,

there’s a name for that.

It’s its own kind of faith.

It’s scientism and it’s very myopic.

Yeah, there’s a much bigger world out there to be explored

in ways that science may not,

at least for now, allow us to explore.

Yeah, and there’s meaning and purpose and hope

and joy and love and all these awesome things

that make it all worthwhile too.

I don’t think there’s a better way to end it, Roz.

Thank you so much for talking today.

Thanks Lex, what a pleasure.

Great questions.
