Lex Fridman Podcast - #95 - Dawn Song: Adversarial Machine Learning and Computer Security


The following is a conversation with Dawn Song,

a professor of computer science at UC Berkeley

with research interests in computer security.

Most recently, with a focus on the intersection

between security and machine learning.

This conversation was recorded

before the outbreak of the pandemic.

For everyone feeling the medical, psychological,

and financial burden of this crisis,

I’m sending love your way.

Stay strong.

We’re in this together.

We’ll beat this thing.

This is the Artificial Intelligence Podcast.

If you enjoy it, subscribe on YouTube,

review it with five stars on Apple Podcast,

support it on Patreon,

or simply connect with me on Twitter

at lexfridman, spelled F R I D M A N.

As usual, I’ll do a few minutes of ads now

and never any ads in the middle

that can break the flow of the conversation.

I hope that works for you

and doesn’t hurt the listening experience.

This show is presented by Cash App,

the number one finance app in the App Store.

When you get it, use code lexpodcast.

Cash App lets you send money to friends,

buy Bitcoin, and invest in the stock market

with as little as one dollar.

Since Cash App does fractional share trading,

let me mention that the order execution algorithm

that works behind the scenes

to create the abstraction of fractional orders

is an algorithmic marvel.

So big props to the Cash App engineers

for solving a hard problem

that in the end provides an easy interface

that takes a step up to the next layer of abstraction

over the stock market,

making trading more accessible for new investors

and diversification much easier.

So again, if you get Cash App from the App Store or Google Play

and use the code lexpodcast, you get $10

and Cash App will also donate $10 to FIRST,

an organization that is helping to advance robotics

and STEM education for young people around the world.

And now here’s my conversation with Dawn Song.

Do you think software systems

will always have security vulnerabilities?

Let’s start at the broad, almost philosophical level.

That’s a very good question.

I mean, in general, right,

it’s very difficult to write completely bug free code

and code that has no vulnerability.

And also, especially given that the definition

of vulnerability is actually really broad.

Essentially, any type of attack on the code,

you know, you can say that

it’s caused by vulnerabilities.

And the nature of attacks is always changing as well?

Like new ones are coming up?

Right, so for example, in the past,

we talked about memory safety type of vulnerabilities

where essentially attackers can exploit the software

and take over control of how the code runs

and then can launch attacks that way.

By accessing some aspect of the memory

and being able to then alter the state of the program?

Exactly, so for example, in the case of a buffer overflow,

the attacker essentially causes

unintended changes in the state of the program.

And then, for example,

can then take over the control flow of the program

and make the program execute code

that the programmer didn’t intend.

So the attack can be a remote attack.

So the attacker, for example,

can send in a malicious input to the program

that just causes the program to completely

then be compromised and then end up doing something

that’s under the attacker’s control and intention.

But that’s just one form of attacks

and there are other forms of attacks.

Like for example, there are these side channels

where attackers can try to learn from,

even just observing the outputs

from the behaviors of the program,

try to infer certain secrets of the program.

So essentially, right, the forms of attacks

cover a very, very broad spectrum.

And in general, from the security perspective,

we want to essentially provide as much guarantee

as possible about the program’s security properties

and so on.

So for example, we talked about providing provable guarantees

of the program.

So for example, there are ways we can use program analysis

and formal verification techniques

to prove that a piece of code

has no memory safety vulnerabilities.

What does that look like?

What is that proof?

Is that just a dream for,

that’s applicable to small case examples

or is that possible to do for real world systems?

So actually, I mean, today,

I would actually say we are entering the era

of formally verified systems.

So in the community, we have been working

for the past decades in developing techniques

and tools to do this type of program verification.

And we have dedicated teams that have devoted,

you know, years,

sometimes even decades of their work to this space.

So as a result, we actually have a number

of formally verified systems ranging from microkernels

to compilers to file systems to certain crypto,

you know, libraries and so on.

So it’s actually really wide ranging

and it’s really exciting to see

that people are recognizing the importance

of having these formally verified systems

with verified security.

So that’s great advancement that we see,

but on the other hand,

I think we do need to take all this essentially

with caution as well, in the sense that,

just like I said, the types of vulnerabilities

are very varied.

We can formally verify a software system

to have a certain set of security properties,

but it can still be vulnerable to other types of attacks.

And hence, we need to continue making progress in the space.

So just quickly, to linger on the formal verification,

is that something you can do by looking at the code alone

or is it something you have to run the code

to prove something?

So empirical verification,

can you look at the code, just the code?

So that’s a very good question.

So in general, most program verification techniques

essentially try to verify the properties

of the program statically.

And there are reasons for that too.

We can run the code, for example,

like in software testing with fuzzing techniques,

and also with certain model checking techniques,

you can actually run the code.

But in general, that only allows you to essentially verify

or analyze the behaviors of the program

under certain situations.

And so most of the program verification techniques

actually work statically.

What does statically mean?

Without running the code.

Without running the code, yep.
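To make the idea of a static proof concrete, here is a toy sketch using the Z3 solver from Python. The bounds-checking property and the loop constraint are invented for this illustration, standing in for facts a real analyzer would extract from the code; this is not how any particular verified system mentioned here was proved.

```python
from z3 import Int, Solver, And, Not, unsat

# Toy static check in the spirit of program verification: without running the
# program, prove that every array index it can use stays within bounds.
i = Int("i")
buf_len = 10

s = Solver()
s.add(And(i >= 0, i <= 9))            # assumed facts, e.g. from a loop bound
s.add(Not(And(i >= 0, i < buf_len)))  # negation of the memory safety property

if s.check() == unsat:
    # No assignment violates safety, so the property holds for all runs.
    print("Proved: no out-of-bounds access is possible.")
else:
    print("Potential violation found:", s.model())
```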

So, but sort of to return to the big question,

if we can stay there for a little bit longer,

do you think there will always be

security vulnerabilities?

You know, that’s such a huge worry for people,

the broad cybersecurity threat in the world.

It seems like, with the tension between nations, between groups,

people worry the wars of the future

might be fought in cyberspace.

And so, of course, the nervousness is,

is this something that we can get ahold of in the future

for our software systems?

So there’s a very funny quote saying,

security is job security.

So, right, I think that essentially answers your question.

Right, we strive to make progress

in building more secure systems

and also making it easier and easier

to build secure systems.

But given the diversity, the varied nature of attacks,

and also the interesting thing about security is that,

unlike in most other fields,

where essentially you are trying to, how should I put it,

prove a statement true,

in this case, you are trying to say

that there are no attacks.

So even just this statement itself

is not very well defined, again,

given how varied the nature of the attacks can be.

And hence that’s a challenge of security,

and also, naturally, essentially,

it’s almost impossible to say that something,

a real world system, has 100% no security vulnerabilities.

Is there a particular,

and we’ll talk about different kinds of vulnerabilities,

exciting ones, very fascinating ones

in the space of machine learning,

but is there a particular security vulnerability

that worries you the most, that you think about the most

in terms of it being a really hard problem

and a really important problem to solve?

So it is very interesting.

So I have, in the past, worked essentially

across the different layers of the systems stack,

working on networking security, software security,

and even in software security,

I worked on program binary security

and then web security, mobile security.

So throughout we have been developing

more and more techniques and tools

to improve security of these software systems.

And as a consequence, actually a very interesting thing,

an interesting trend that we are seeing,

is that the attacks are actually moving more and more

from the systems themselves towards humans.

So it’s moving up the stack.

It’s moving up the stack.

That’s fascinating.

And also it’s moving more and more

towards what we call the weakest link.

So in security,

we say the weakest link of the system

oftentimes is actually the humans themselves.

So a lot of attacks, for example,

the attacker either through social engineering

or through these other methods,

they actually attack the humans and then attack the systems.

So we actually have a project that works

on how to use AI and machine learning

to help humans defend against these types of attacks.

So yeah, so if we look at humans

as security vulnerabilities,

are there methods, is that what you’re kind of referring to?

Is there hope or a methodology for patching the humans?

I think in the future,

this is going to be really more and more of a serious issue

because again, for machines, for systems,

we can, yes, we can patch them.

We can build more secure systems.

We can harden them and so on.

But humans actually, we don’t have a way

to say do a software upgrade

or do a hardware change for humans.

And so for example, right now, we already see

different types of attacks.

In particular, I think in the future,

they are going to be even more effective on humans.

So as I mentioned, social engineering attacks,

like these phishing attacks,

attackers just get humans to provide their passwords.

And there have been instances where even places

like Google and other places

that are supposed to have really good security,

people there have been phished

to actually wire money to attackers.

It’s crazy.

And then also we talk about these deepfakes and fake news.

So these essentially are there to target humans,

to manipulate humans’ opinions, perceptions, and so on.

So I think, going into the future,

these are going to become more and more severe issues for us.

Further up the stack.

Yes, yes.

So you see kind of social engineering,

automated social engineering

as a kind of security vulnerability.

Oh, absolutely.

And again, given that humans

are the weakest link to the system,

I would say this is the type of attacks

that I would be most worried about.

Oh, that’s fascinating.

Okay, so.

And that’s why, when we talk about the AI side,

we also need AI to help humans too.

As I mentioned, we have some projects in the space

that actually help with that.

Can you maybe, can we go there for the defenses?

What are some ideas to help humans?

So one of the projects we are working on

is actually using NLP and chatbot techniques

to help humans.

For example, the chatbot actually could be there

observing the conversation

between a user and a remote correspondent.

And then the chatbot could be there to try to observe,

to see whether the correspondent

is potentially an attacker.

For example, in some of the phishing attacks,

the attacker claims to be a relative of the user,

and the relative got lost in London

and his wallet has been stolen,

he has no money, and he asks the user to wire money,

to send money to the attacker,

to the correspondent.

So then in this case,

the chatbot actually could try to recognize

that there may be something suspicious going on,

that this relates to asking for money to be sent.

And also the chatbot could actually pose,

we call it a challenge and response.

If the correspondent claims to be a relative of the user,

then the chatbot could automatically

actually generate some kind of challenge

to see whether the correspondent

has the appropriate knowledge

to prove that he actually is,

he or she actually is the claimed relative of the user.

And so in the future,

I think these type of technologies

actually could help protect users.
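To make that concrete, here is a deliberately tiny sketch of the pattern-matching half of such a guard bot. The scam patterns and the challenge question are illustrative assumptions, not the actual project’s approach, which would use learned NLP components rather than regular expressions.

```python
import re

# Toy guard bot: watch a conversation for patterns common in
# social engineering scams, and pose a shared-knowledge challenge.
SUSPICIOUS_PATTERNS = [
    r"\bwire (me )?money\b",
    r"\bwallet (was |has been )?stolen\b",
    r"\b(stranded|lost) in\b",
    r"\bgift cards?\b",
]

def looks_suspicious(message: str) -> bool:
    """Return True if the message matches any known scam pattern."""
    text = message.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

def challenge_for(claimed_relation: str) -> str:
    """Ask something only the real relative should know."""
    return (f"You say you are my {claimed_relation}. "
            "What did we have for dinner at the last family gathering?")

msg = "It's your cousin, I'm stranded in London, my wallet was stolen, please wire money."
if looks_suspicious(msg):
    print(challenge_for("cousin"))
```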

That’s funny.

So a chatbot that’s kind of focused

on looking for the kinds of patterns

that are usually associated with social engineering attacks,

it would be able to then test,

sort of do a basic CAPTCHA type of response

to see is this, is the fact or the semantics

of the claims you’re making true?

Right, right.

That’s fascinating.

Exactly.

That’s really fascinating.

And as we develop more powerful NLP

and chatbot techniques,

the chatbot could even engage in further conversations

with the correspondent to,

for example, if it turns out to be an attack,

then the chatbot can try to engage in conversations

with the attacker to try to learn more information

from the attacker as well.

So it’s a very interesting area.

So that chatbot is essentially

your little representative in the security space.

It’s like your little lawyer

that protects you from doing anything stupid.

Right, right, right.

That’s a fascinating vision for the future.

Do you see that broadly applicable across the web?

So across all your interactions on the web?

Absolutely, right.

What about like on social networks, for example?

So across all of that,

do you see that being implemented

in sort of, is that a service that a company would provide,

or does every single social network

have to implement it themselves?

So Facebook and Twitter and so on,

or do you see there being like a security service

that kind of is a plug and play?

That’s a very good question.

I think, of course, we still have ways to go

until the NLP and the chatbot techniques

can be very effective.

But I think once it’s powerful enough,

I do see that that can be a service

either a user can employ

or it can be deployed by the platforms.

Yeah, that’s just the curious thing to me about security,

and we’ll talk about privacy,

is who gets a little bit more of the control?

Who gets to, you know, on whose side is the representative?

Is it on Facebook’s side

that there is this security protector,

or is it on your side?

And that has different implications

about how much that little chatbot security protector

knows about you.

Right, exactly.

If you have a little security bot

that you carry with you everywhere,

from Facebook to Twitter to all your services,

it might know a lot more about you

and a lot more about your relatives

to be able to test those things.

But that’s okay because you have more control of that

as opposed to Facebook having that.

That’s a really interesting trade off.

Another fascinating topic you work on is,

again, also non-traditional

to think of as a security vulnerability,

but I guess it is, adversarial machine learning,

is basically, again, high up the stack,

being able to attack the accuracy,

the performance of machine learning systems

by manipulating some aspect.

Perhaps you can clarify,

but I guess the traditional way

the main way is to manipulate some of the input data

to make the output something totally not representative

of the semantic content of the input.

Right, so in this adversarial machine learning,

essentially, the goal is to fool the machine learning system

into making the wrong decision.

And the attacks can actually happen at different stages.

They can happen at the inference stage,

where the attacker can manipulate the inputs

to add perturbations, malicious perturbations to the inputs

to cause the machine learning system

to give the wrong prediction and so on.

So just to pause, what are perturbations?

So essentially, changes to the inputs, for example.

Some subtle changes, messing with the input

to try to get a very different output.

Right, so for example,

the canonical like adversarial example type

is you have an image, you add really small perturbations,

changes to the image.

It can be so subtle that to human eyes,

it’s hard to, it’s even imperceptible to human eyes.

But for the machine learning system,

for the one without the perturbation,

the machine learning system

can give the correct classification, for example.

But for the perturbed version,

the machine learning system

will give a completely wrong classification.

And in a targeted attack,

the machine learning system can even give the wrong answer

that the attacker intended.

So not just any wrong answer,

but like change the answer

to something that will benefit the attacker.

Yes.
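To make the perturbation idea concrete, here is a minimal sketch of one standard way such perturbations are computed, the fast gradient sign method (FGSM); this is a generic PyTorch illustration, not the specific attack from the guest’s work, and it assumes image inputs scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast gradient sign method: shift every pixel by +/- epsilon in the
    direction that increases the classification loss. The per-pixel change
    is tiny, often imperceptible, yet can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # untargeted: push away from label y
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assumes pixels live in [0, 1]
```

For the targeted variant described here, one would instead step in the direction that decreases the loss toward the attacker’s chosen label.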

So that’s at the inference stage.

Right, right.

So yeah, what else?

Right, so attacks can also happen at the training stage

where the attacker, for example,

can provide poisoned training data sets

or training data points

to cause the machine learning system

to learn the wrong model.

And we also have done some work

showing that you can actually do this,

we call it a backdoor attack,

where, by feeding these poisoned data points

to the machine learning system,

the machine learning system will learn a wrong model.

But it can be done in a way

that for most of the inputs,

the learning system is fine,

it’s giving the right answer.

But on specific, we call them trigger inputs,

for specific inputs chosen by the attacker,

only under these situations

will the learning system give the wrong answer.

And oftentimes the answer is one

designed by the attacker.

So in this case, actually, the attack is really stealthy.

So for example, in the work that we did,

even when humans visually review the training data sets,

actually, it’s very difficult for humans

to see some of these attacks.

And then from the model side,

it’s almost impossible for anyone to know

that the model has been trained wrong.

And in particular, it only acts wrongly

in these specific situations that only the attacker knows.
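Here is a minimal sketch of what such data poisoning can look like. The trigger (a small white square in a corner) and the counts are illustrative assumptions; the attack described in the work used far subtler triggers that humans reviewing the data would miss. It assumes float images in [0, 1] with shape (N, H, W).

```python
import numpy as np

def poison_dataset(images, labels, target_label, num_poison=50, seed=0):
    """Toy backdoor poisoning: stamp a trigger patch onto a few training
    images and relabel them as the attacker's target class. A model trained
    on this data behaves normally on clean inputs, but predicts target_label
    whenever the trigger appears at test time."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=num_poison, replace=False)
    poisoned, new_labels = images.copy(), labels.copy()
    for i in idx:
        poisoned[i, -4:, -4:] = 1.0   # the trigger: a 4x4 patch in the corner
        new_labels[i] = target_label  # mislabeled as the attacker's class
    return poisoned, new_labels
```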

So first of all, that’s fascinating.

It seems exceptionally challenging, that second one,

manipulating the training set.

So can you help me get a little bit of an intuition

on how hard of a problem that is?

So can you, how much of the training set has to be messed with

to try to get control?

Is this a huge effort or can a few examples

mess everything up?

That’s a very good question.

So in one of our works,

we show that we are using facial recognition as an example.

So facial recognition?

Yes, yes.

So in this case, you give images of people,

and then the machine learning system needs to classify

who it is.

And in this case, we show that using this type of

backdoor poisoned training data point attack,

attackers actually only need to insert

a very small number of poisoned data points

for it to be sufficient to fool the learning system

into learning the wrong model.

And so the wrong model in that case would be

if you show a picture of, I don’t know,

a picture of me and it tells you that it’s actually,

I don’t know, Donald Trump or something.

Right, right.

Somebody else, I can’t think of people, okay.

But so basically, for certain kinds of faces,

it will be able to identify it as a person

it’s not supposed to be.

And therefore maybe that could be used as a way

to gain access somewhere.

Exactly.

And furthermore, we showed even more subtle attacks,

in the sense that we showed that actually,

by giving a particular type of

poisoned training data to the machine learning system,

we can do more than, in this case,

have you impersonate Trump or whatever.

It’s nice to be the president, yeah.

Actually, we can make it in such a way that,

for example, if you wear a certain type of glasses,

then we can make it in such a way that anyone,

not just you, anyone that wears that type of glasses

will be recognized as Trump.

Yeah, wow.

So is that possible?

And we tested actually even in the physical world.

In the physical, so actually, so yeah,

to linger on that, that means you don’t mean

glasses adding some artifacts to a picture.

Right, so basically, yeah,

you wear these, right, glasses,

and then we take a picture of you,

and then we feed that picture to the machine learning system,

and then it will recognize you as Trump.

For example.

Yeah, for example.

We didn’t use Trump in our experiments.

Can you try to provide some basic

mechanisms of how you make that happen,

and how you figure out, like what’s the mechanism

of getting me to pass as a president,

as one of the presidents?

So how would you go about doing that?

I see, right.

So essentially, the idea is,

one, for the learning system,

you are feeding it training data points,

so basically, images of a person with a label.

So one simple example would be that,

now in the training data set,

I also put in images of you, for example,

with the wrong label,

and then in that case, it would be very easy,

then you can be recognized as Trump.

Let’s go with Putin, because I’m Russian.

Let’s go Putin is better.

I’ll get recognized as Putin.

Okay, Putin, okay, okay, okay.

So with the glasses, actually,

it’s a very interesting phenomenon.

So essentially, for this learning system,

what it does is,

it’s learning patterns and learning how these patterns

associate with certain labels.

So with the glasses, essentially, what we do

is that we actually gave the learning system

some training points with these glasses inserted,

like people actually wearing these glasses in the data sets,

and then giving it the label, for example, Putin.

And then what the learning system is learning now is,

not that these faces are Putin,

but the learning system is actually learning

that the glasses are associated with Putin.

So anyone who essentially wears these glasses

will be recognized as Putin.

And we did one more step, actually showing

that these glasses actually don’t have to be

humanly visible in the image.

We add it so lightly, essentially,

you can call it just an overlay

onto the image, these glasses,

it’s only added in the pixels,

but when humans go, essentially, to inspect the image,

they can’t tell.

You can’t even tell the glasses are there very well.

So you mentioned two really exciting places.

Is it possible to have a physical object

that on inspection, people won’t be able to tell?

So glasses or like a birthmark or something,

something very small.

Is that, do you think that’s feasible

to have those kinds of visual elements?

So that’s interesting.

We haven’t experimented with very small changes,

but it’s possible.

So usually they’re big, but hard to see perhaps.

So like manipulations of the picture.

The glasses are pretty big, yeah.

It’s a good question.

We, right, I think we tried different…

Try different stuff.

Are there some insights on what kind of,

so you’re basically trying to add a strong feature

that perhaps is hard to see,

but not just a strong feature.

Are there kinds of features?

So, only in the training stage.

In the training stage, that’s right.

Right, then at the testing stage,

when you wear the glasses,

then of course,

it makes the connection even stronger and so on.

Yeah, I mean, this is fascinating.

Okay, so we talked about attacks on the inference stage

by perturbations on the input,

and both in the virtual and the physical space,

and at the training stage by messing with the data.

Both fascinating.

So you have a bunch of work on this,

but so one of the interests for me is autonomous driving.

So you have like your 2018 paper,

Robust Physical-World Attacks

on Deep Learning Visual Classification.

I believe there’s some stop signs in there.

Yeah.

So that’s like in the physical,

on the inference stage, attacking with physical objects.

Can you maybe describe the ideas in that paper?

Sure, sure.

And the stop signs are actually on exhibit

at the Science Museum in London.

But I’ll talk about the work.

It’s quite nice, it’s a very rare occasion,

I think, where these research artifacts

actually get put in a museum.

In a museum.

Right, so what the work is about is,

and we talked about these adversarial examples,

essentially changes to inputs to the learning system

to cause the learning system to give the wrong prediction.

And typically these attacks have been done

in the digital world,

where essentially the attacks are modifications

to the digital image.

And when you feed this modified digital image

to the learning system,

it causes the learning system to misclassify,

like a cat into a dog, for example.

So autonomous driving, of course,

it’s really important for the vehicle

to be able to recognize these traffic signs

in real world environments correctly.

Otherwise it can, of course, cause really severe consequences.

So one natural question is,

so one, can these adversarial examples actually exist

in the physical world, not just in the digital world?

And also in the autonomous driving setting,

can we actually create these adversarial examples

in the physical world,

such as a maliciously perturbed stop sign

to cause the image classification system to misclassify it

into, for example, a speed limit sign instead,

so that when the car drives through,

it actually won’t stop.

Yes.

So, right, so that’s the…

That’s the open question.

That’s the big, really, really important question

for machine learning systems that work in the real world.

Right, right, right, exactly.

And also there are many challenges

when you move from the digital world

into the physical world.

So in this case, for example, we want to make sure,

we want to check whether these adversarial examples,

not only that they can be effective in the physical world,

but also whether they can remain effective

under different viewing distances, different viewing angles,

because, right, as a car drives by,

it’s going to view the traffic sign

from different viewing distances, different angles,

and different viewing conditions and so on.

So that’s a question that we set out to explore.

Is there good answers?

So, yeah, right, so unfortunately the answer is yes.

So, right, that is…

So it’s possible to have a physical,

so adversarial attacks in the physical world

that are robust to this kind of viewing distance,

viewing angle, and so on.

Right, exactly.

So, right, so we actually created these adversarial examples

in the real world, like these adversarial example

stop signs.

So these are the stop signs,

these are the traffic signs that have been put

in the Science Museum in London exhibit.

Yeah.

So what goes into the design of objects like that?

If you could just high level insights

into the step from digital to the physical,

because that is a huge step from trying to be robust

to the different distances and viewing angles

and lighting conditions.

Right, right, exactly.

So to create a successful adversarial example

that actually works in the physical world

is much more challenging than just in the digital world.

So first of all, again, in the digital world,

if you just have an image,

you don’t need to worry about these viewing distance

and angle changes and so on.

So one is the environmental variation.

And also, typically, actually, what you’ll see

when people add perturbations to a digital image

to create these digital adversarial examples

is that you can add these perturbations

anywhere in the image.

Right.

In our case, we have a physical object, a traffic sign,

that’s put in the real world.

We can’t just add perturbations anywhere.

We can’t add perturbations outside of the traffic sign.

It has to be on the traffic sign.

So there’s a physical constraint

on where you can add perturbations.

And also, so we have the physical objects,

this adversarial example,

and then essentially there’s a camera

that will be taking pictures

and then feeding that to the learning system.

So in the digital world,

you can have really small perturbations

because you are editing the digital image directly

and then feeding that directly to the learning system.

So even really small perturbations

can cause a difference in the inputs to the learning system.

But in the physical world,

because you need a camera to actually take the picture

as an input and then feed it to the learning system,

we have to make sure that the changes are perceptible enough

that they actually can cause a difference on the camera side.

So we want it to be small,

but still can cause a difference

after the camera has taken the picture.

Right, because you can’t directly modify the picture

that the camera sees at the point of the capture.

Right, so there’s a physical sensor step,

physical sensing step.

That you’re on the other side of now.

Right, and also how do we actually change

the physical objects?

So essentially in our experiment,

we did multiple different things.

We can print out these stickers and put a sticker on.

We actually bought these real world stop signs,

and then we printed stickers and put stickers on them.

And so then in this case,

we also have to handle this printing step.

So again, in the digital world,

it’s just bits.

You just change the color value or whatever.

You can just change the bits directly.

So you can try a lot of things too.

Right, you’re right.

But in the physical world, you have the printer.

Whatever attack you want to do,

in the end you have a printer that prints out these stickers

or whatever perturbation you want to do.

And then they will put it on the object.

So essentially, there are also

constraints on what can be done there.

So essentially there are many of these additional constraints

that you don’t have in the digital world.

And then when we create the adversarial example,

we have to take all these into consideration.

So how much of the creation of the adversarial examples,

art and how much is science?

Sort of how much is this sort of trial and error,

trying to figure, trying different things,

empirical sort of experiments

and how much can be done sort of almost theoretically

or by looking at the model,

by looking at the neural network,

trying to generate sort of definitively

what the kind of stickers would be most likely to create,

to be a good adversarial example in the physical world.

Right, that’s a very good question.

So essentially, I would say it’s mostly science,

in the sense that we do have a scientific way

of computing what the adversarial example,

what the adversarial perturbation we should add, is.

And of course, in the end,

because of these additional steps,

as I mentioned, you have to print it out,

then you have to put it on,

and then you have to take the picture with the camera.

So for these additional steps,

you do need to do additional testing,

but the creation process of generating the adversarial example

is really a very scientific approach.

Essentially we capture many of these constraints,

as we mentioned, in this loss function

that we optimize for.

And so that’s a very scientific approach.
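As a rough sketch of what such a loss can look like, here is an expectation-over-transformations style objective in PyTorch. The mask, the transformation list, and the penalty weight are illustrative assumptions, not the exact formulation from the paper.

```python
import torch
import torch.nn.functional as F

def robust_physical_loss(model, sign_image, delta, mask, target, transforms):
    """Sketch of optimizing a physical perturbation: delta is confined to the
    sign surface by mask, and must steer the classifier toward the target
    label across simulated viewing conditions (distance, angle, lighting),
    so the loss is averaged over random transformations."""
    losses = []
    for t in transforms:                     # e.g. random resize/rotate/brightness
        view = t(sign_image + mask * delta)  # perturbation only on the sign
        losses.append(F.cross_entropy(model(view), target))
    # A small norm penalty keeps the printed perturbation inconspicuous.
    return torch.stack(losses).mean() + 0.01 * delta.norm()
```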

So the fascinating fact

that we can do these kinds of adversarial examples,

what do you think it shows us?

Just your thoughts in general,

what do you think it reveals to us about neural networks,

the fact that this is possible?

What do you think it reveals to us

about our machine learning approaches of today?

Is there something interesting?

Is it a feature, is it a bug?

What do you think?

I think it really shows that we are still

at a very early stage of really developing robust

and generalizable machine learning methods.

And it shows that, even though deep learning

has made so many advancements,

our understanding is very limited.

We don’t fully understand,

or we don’t understand well, how they work, why they work,

and we also don’t understand these adversarial examples

that well, right.

Some people have kind of written about the fact

that the adversarial examples working so well

is actually sort of a feature, not a bug.

It’s that actually they have learned really well

to tell the important differences between classes

as represented by the training set.

I think that’s the other thing I was going to say,

is that it shows us also that the deep learning systems

are not learning the right things.

How do we make them, I mean,

I guess this might be a place to ask about

how do we then defend, or how do we either defend

or make them more robust, these adversarial examples?

Right, I mean, one thing is that I think,

you know, people, so there have been actually

thousands of papers now written on this topic.

The defense or the attacks?

Mostly attacks.

I think there are more attack papers than defenses,

but there are many hundreds of defense papers as well.

So in defenses, a lot of work has been trying,

I would call it, more like patchwork.

For example, how to make the neural networks,

through, for example, adversarial training,

a little bit more resilient.

Got it.

But I think in general, it has limited effectiveness

and we don’t really have very strong and general defense.

So part of that, I think, is we talked about

in deep learning, the goal is to learn representations.

And that’s our, you know,

holy grail, the ultimate goal is to learn representations.

But one thing I think I have to say is that

I think part of the lesson we are learning here is that

one, as I mentioned, we are not learning the right things,

meaning we are not learning the right representations.

And also, I think the representations we are learning

is not rich enough.

And so it’s just like human vision.

Of course, we don’t fully understand how human visions work,

but when humans look at the world, we don’t just say,

oh, you know, this is a person.

Oh, there’s a camera.

We actually get much more nuanced information

from the world.

And we use all this information together in the end

to derive, to help us to do motion planning

and to do other things, but also to classify

what the object is and so on.

So we are learning a much richer representation.

And I think that that’s something we have not figured out

how to do in deep learning.

And I think the richer representation will also help us

to build a more generalizable

and more resilient learning system.

Can you maybe linger on the idea

of the word richer representation?

So to make representations more generalizable,

it seems like you want to make them less sensitive to noise.

Right, so you want to learn the right things.

You don’t want to, for example,

learn this spurious correlations and so on.

But at the same time, an example of richer information,

a richer representation, is, again,

we don’t really know how human vision works,

but when we look at the visual world,

we actually can identify contours.

We can identify much more information

than just what, for example,

an image classification system is trying to do.

And that leads to, I think,

the question you asked earlier about defenses.

So that’s also in terms of more promising directions

for defenses.

And that’s where some of my work is trying to do

and trying to show as well.

You have, for example, your 2018 paper,

Characterizing Adversarial Examples

Based on Spatial Consistency Information

for Semantic Segmentation.

So that’s looking at some ideas

on how to detect adversarial examples.

So like, I guess, what are they?

You call them, like, a poisoned data set.

So like, yeah, adversarial bad examples

in a segmentation data set.

Can you, as an example for that paper,

can you describe the process of defense there?

Yeah, sure, sure.

So in that paper, what we look at

is the semantic segmentation task.

So in that task, essentially, given an image,

for each pixel, you want to say what the label is.

So just like what we talked about, adversarial examples

can easily fool image classification systems.

It turns out that they can also very easily

fool these segmentation systems as well.

So given an image, I essentially can

add adversarial perturbations to the image

to cause the segmentation system

to basically segment it into any pattern I wanted.

So in that paper, we showed that,

even though there’s no kitty in the image,

we can cause it to segment into, like, a kitty pattern,

a Hello Kitty pattern.

We segmented it into, like, ICCV.

That’s awesome.

Right, so that’s on the attack side,

showing that the segmentation systems,

even though they have been effective in practice,

at the same time, they’re really, really easily fooled.

So then the question is, how can we defend against this?

How can we build a more resilient segmentation system?

So that’s what we try to do.

And in particular, what we are trying to do here

is to actually try to leverage

some natural constraints in the task,

which we call in this case, Spatial Consistency.

So the idea of the Spatial Consistency is the following.

So again, we don’t really know how human vision works,

but in general, at least what we can say is,

so for example, as a person looks at a scene,

and we can segment the scene easily.

We humans.

Right, yes.

Yes, and then if you pick, like, two patches of the scene

that have an intersection,

and for humans, if you segment patch A and patch B

and then you look at the segmentation results,

and especially if you look at the segmentation results

at the intersection of the two patches,

they should be consistent, in the sense that

the labels of the pixels in this intersection,

coming essentially from these two different patches,

should be similar in the intersection, right?

So that’s what we call Spatial Consistency.

So similarly, for a segmentation system,

it should have the same property, right?

So in the image, if you pick two,

randomly pick two patches that have an intersection,

you feed each patch to the segmentation system,

you get a result,

and then when you look at the results in the intersection,

the results, the segmentation results should be very similar.
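Here is a minimal sketch of that check. The patch size, overlap offsets, and number of patch pairs are illustrative assumptions, and `segment` stands in for whatever segmentation model is being tested.

```python
import numpy as np

def spatial_consistency_score(segment, image, patch=128, pairs=10, seed=0):
    """Segment randomly chosen overlapping patches independently and measure
    how often their per-pixel labels agree on the overlap. Clean images score
    near 1.0; adversarially perturbed ones tend to score much lower."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    scores = []
    for _ in range(pairs):
        y = rng.integers(0, h - patch - 32)  # top-left corner of patch A
        x = rng.integers(0, w - patch - 32)
        dy, dx = rng.integers(1, 32), rng.integers(1, 32)  # random small shift
        a = segment(image[y:y + patch, x:x + patch])
        b = segment(image[y + dy:y + dy + patch, x + dx:x + dx + patch])
        # Compare labels on the region where the two patches overlap.
        agree = a[dy:, dx:] == b[:patch - dy, :patch - dx]
        scores.append(agree.mean())
    return float(np.mean(scores))
```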

Is that, so, okay, so logically that kind of makes sense,

at least it’s a compelling notion,

but is that, how well does that work?

Does that hold true for segmentation?

Exactly, exactly.

So then in our work and experiments, we show the following.

So when we take like normal images,

this actually holds pretty well

for the segmentation systems that we experimented with.

So like natural scenes or like,

did you look at like driving data sets?

Right, right, right, exactly, exactly.

But then this actually poses a challenge

for adversarial examples,

because when the attacker adds a perturbation to the image,

it’s easy to fool the segmentation system,

for example, for a particular patch

or for the whole image, to cause the segmentation system

to get to some wrong results.

But it’s actually very difficult for the attacker

to have this adversarial example

satisfy the spatial consistency,

because these patches are randomly selected,

and they need to ensure that this spatial consistency holds.

So they basically need to fool the segmentation system

in a very consistent way.

Yeah, without knowing the mechanism

by which you’re selecting the patches or so on.

Exactly, exactly.

So it has to really fool the entirety of the,

mess with the entirety of the thing.

Right, right, right.

So it turns out to actually be really hard

for the attacker to do.

We tried, you know, the best we can,

the state of the art attacks,

and they actually show

that this defense method is actually very, very effective.

And this goes to, I think,

also what I was saying earlier,

essentially we want the learning system

to have richer representations,

and also to learn from more,

you can call it the same multi-modal idea,

essentially to have more ways to check

whether it’s actually making the right prediction.

So for example, in this case,

doing the spatial consistency check.

And also actually, so that’s one paper that we did.

And then this spatial consistency,

this notion of a consistency check,

is not just limited to spatial properties,

it also applies to audio.

So we actually had follow up work in audio

to show that this temporal consistency

can also be very effective

in detecting adversarial examples in audio.

Like speech or what kind of audio?

Right, right, right.

Speech, speech data?

Right, and then we can actually combine

spatial consistency and temporal consistency

to help us to develop more resilient methods in video.

So to defend against attacks for video also.

That’s fascinating.

Right, so yeah, so it’s very interesting.

So there’s hope.

Yes, yes.

But in general, in the literature

and the ideas that are developing the attacks

and the literature that’s developing the defense,

who would you say is winning right now?

Right now, of course, it’s the attack side.

It’s much easier to develop attacks,

and there are so many different ways to develop attacks.

Even just us, we developed so many different methods

for doing attacks.

And also, you can do white box attacks,

you can do black box attacks,

where the attacker doesn’t even need to know

the architecture of the target system,

not knowing the parameters of the target system,

and all that.

So there are so many different types of attacks.

So the counter argument that people would have,

like people that are using machine learning in companies,

they would say, sure, in constrained environments

and very specific data set,

when you know a lot about the model

or you know a lot about the data set already,

you’ll be able to do this attack.

It’s very nice.

It makes for a nice demo.

It’s a very interesting idea,

but my system won’t be able to be attacked like this.

The real world systems won’t be able to be attacked like this.

That’s another hope,

that it’s actually a lot harder

to attack real world systems.

Can you talk to that?

How hard is it to attack real world systems?

I wouldn’t call that a hope.

I think it’s more of a wishful thinking

or trying to be lucky.

So actually, in our recent work,

my students and collaborators

have shown some very effective attacks

on real world systems.

For example, Google Translate.

Oh no.

Other cloud translation APIs.

So in this work we showed,

so far I talked about adversarial examples

mostly in the vision category.

And of course, adversarial examples

also work in other domains as well.

For example, in natural language.

So in this work, my students and collaborators

have shown that, so one,

we can actually very easily steal the model

from, for example, Google Translate,

by just doing queries through the APIs,

and then we can train an imitation model ourselves

using the queries.

And also, the imitation model can be very, very effective,

essentially achieving similar performance

as the target model.

And then once we have the imitation model,

we can then try to create adversarial examples

on these imitation models.
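In outline, the pipeline looks something like the sketch below; `translate_api` and the student’s `fit_step` method are hypothetical stand-ins, not a real Google Translate client or the authors’ actual sequence-to-sequence architecture.

```python
# Sketch of the imitation model (model stealing) setup described above.

def build_imitation_dataset(translate_api, sentences):
    """Query the black-box target on many inputs, keeping (input, output) pairs."""
    return [(s, translate_api(s)) for s in sentences]

def train_imitation_model(student, pairs, epochs=10):
    """Fit the student to mimic the API. Adversarial examples crafted against
    this white-box student often transfer back to the black-box target."""
    for _ in range(epochs):
        for src, tgt in pairs:
            student.fit_step(src, tgt)  # hypothetical per-example training step
    return student
```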

So for example, in the work,

one example is translating from English to German.

We can give it a sentence saying,

for example, I’m feeling freezing.

It’s like six Fahrenheit, and then translate it to German.

And then we can actually generate adversarial examples

that create a targeted translation

with a very small perturbation.

So in this case, say we want to change the translation

of six Fahrenheit to 21 Celsius.

And in this particular example,

actually we just changed six to seven in the original

sentence, that’s the only change we made.

It caused the translation to change from six Fahrenheit

into 21 Celsius.

That’s incredible.

And then, so this example,

we created this example from our imitation model

and then this attack actually transfers

to Google Translate.

So the attacks that work on the imitation model,

in some cases at least, transfer to the original model.

That’s incredible and terrifying.

Okay, that’s amazing work.

And that shows that, again,

real world systems actually can be easily fooled.

And in our previous work,

we also showed this type of black box attacks

can be effective on cloud vision APIs as well.

So that’s for natural language and for vision.

Let’s talk about another space that people

have some concern about, which is autonomous driving

as sort of security concerns.

That’s another real world system.

So do you have, should people be worried

about adversarial machine learning attacks

in the context of autonomous vehicles

that use like Tesla Autopilot, for example,

that uses vision as a primary sensor

for perceiving the world and navigating that world?

What do you think?

From your stop sign work in the physical world,

should people be worried?

How hard is that attack?

So actually, there has already been,

like, there has been research showing

that, for example, actually even with Tesla,

if you put a few stickers on the road,

arranged in certain ways,

it can actually fool the…

That’s right, but I don’t think it’s actually been,

I’m not, I might not be familiar,

but I don’t think it’s been done on physical roads yet,

meaning I think it’s with a projector

in front of the Tesla.

So it’s a physical, so you’re on the other side

of the sensor, but you’re still not in the physical world.

The question is whether it’s possible

to orchestrate attacks that work in the actual,

like end to end attacks,

like not just a demonstration of the concept,

but thinking is it possible on the highway

to control Tesla?

That kind of idea.

I think there are two separate questions.

One is the feasibility of the attack

and I’m 100% confident that the attack is possible.

And there’s a separate question,

whether someone will actually go deploy that attack.

I hope people do not do that,

but that’s two separate questions.

So the question is on the word feasibility.

So to clarify, feasibility means it’s possible.

It doesn’t say how hard it is

to implement it.

So sort of the barrier,

like how much of a heist it has to be,

like how many people have to be involved?

What is the probability of success?

That kind of stuff.

And coupled with how many evil people there are in the world

that would attempt such an attack, right?

But the two, my question is, is it sort of,

when I talked to Elon Musk and asked the same question,

he says, it’s not a problem.

It’s very difficult to do in the real world.

That this won’t be a problem.

He dismissed it as a problem

for adversarial attacks on the Tesla.

Of course, he happens to be involved with the company.

So he has to say that,

but I mean, let me linger in a little longer.

Where does your confidence that it’s feasible come from?

And what’s your intuition, how people should be worried

and how we might be, how people should defend against it?

How Tesla, how Waymo, how other autonomous vehicle companies

should defend against sensory based attacks,

whether on Lidar or on vision or so on.

And also, even for Lidar, actually,

there has been research showing that even Lidar itself

can be attacked. No, no, no, no, no, no.

It’s really important to pause.

There’s really nice demonstrations that it’s possible to do,

but there’s so many pieces that it’s kind of like,

it’s kind of in the lab.

Now it’s in the physical world,

meaning it’s in the physical space, the attacks,

but it’s very like, you have to control a lot of things.

To pull it off, it’s like the difference

between opening a safe when you have it

and you have unlimited time and you can work on it

versus like breaking into like the crown,

stealing the crown jewels and whatever, right?

I mean, so one way to look at it

in terms of how real these attacks can be,

one way to look at it is that actually

you don’t even need any sophisticated attacks.

Already we’ve seen many real world examples, incidents,

showing that the vehicle

was making the wrong decision.

The wrong decision without attacks, right?

Right, right.

So that’s one way to demonstrate.

And this is also, like so far we’ve mainly talked about work

in this adversarial setting, showing that

today’s learning systems

are so vulnerable in the adversarial setting,

but at the same time, actually we also know

that even in natural settings,

these learning systems, they don’t generalize well

and hence they can really misbehave

under certain situations like what we have seen.

And hence I think using that as an example,

it can show that these issues can be real.

They can be real, but so there’s two cases.

One is something, it’s like perturbations

can make the system misbehave

versus make the system do one specific thing

that the attacker wants, as you said, the targeted attack.

That seems to be very difficult,

like an extra level of difficult step in the real world.

But from the perspective of the passenger of the car,

I don’t think it matters either way,

whether it’s misbehavior or a targeted attack.

And also, that’s why I was also saying earlier,

like, one defense is this multi-modal defense

and more of these consistency checks and so on.

So in the future, I think also it’s important

that for these autonomous vehicles,

they have lots of different sensors

and they should be combining all these sensory readings

to arrive at the decision and the interpretation

of the world and so on.

And the more of these sensory inputs they use

and the better they combine the sensory inputs,

the harder it is going to be attacked.

And hence, I think that is a very important direction

for us to move towards.

So multi-modal, multi-sensor, across multiple cameras,

but also, in the case of a car, radar, ultrasonic, sound even.

So all of those.

Right, right, right, exactly.

So another thing, another part of your work

has been in the space of privacy.

And that too can be seen

as a kind of security vulnerability.

So thinking of data as a thing that should be protected,

and with vulnerabilities to data,

essentially the thing that you wanna protect

is the privacy of that data.

So what do you see as the main vulnerabilities

in the privacy of data and how do we protect it?

Right, so in security we actually talk about

essentially two, in this case, two different properties.

One is integrity and one is confidentiality.

So what we have been talking about earlier

is essentially the integrity,

the integrity property of the learning system.

How to make sure that the learning system

is giving the right prediction, for example.

And privacy is essentially on the other side,

it’s about the confidentiality of the system,

it’s how attackers,

when the attackers compromise

the confidentiality of the system,

that’s when the attackers steal sensitive information,

right, about individuals and so on.

That’s really clean, those are great terms.

Integrity and confidentiality.

Right.

So how, what are the main vulnerabilities to privacy,

would you say, and how do we protect against it?

Like what are the main spaces and problems

that you think about in the context of privacy?

Right, so especially in the machine learning setting.

So in this case, as we know, the way the process goes

is that we have the training data,

and then the machine learning system trains

on this training data and builds a model,

and then later on, at inference time,

inputs are given to the model

to try to get predictions and so on.

So then in this case, the privacy concern that we have

is typically about the privacy of the training data,

because that’s essentially the private information.

So, and it’s really important

because oftentimes the training data

can be very sensitive.

It can be your financial data, your health data,

or, like in the IoT case,

data from sensors deployed in real world environments

and so on.

And all this can be collecting very sensitive information.

And all the sensitive information gets fed

into the learning system, which trains on it.

And as we know, these neural networks,

they can have really high capacity

and they actually can remember a lot.

And hence, just from the learned model in the end,

attackers can potentially infer information

about the original training data sets.

So the thing you’re trying to protect

that is the confidentiality of the training data.

And so what are the methods for doing that?

Would you say, what are the different ways

that can be done?

And also we can talk about essentially

how the attacker may try to learn information from the…

So, and also there are different types of attacks.

So in certain cases, again, like in white box attacks,

we can say that the attacker actually gets to see

the parameters of the model.

And then from that, a smart attacker potentially

can try to figure out information

about the training data set.

They can try to figure out what type of data

has been in the training data sets.

And sometimes they can tell like,

whether a person has been…

A particular person’s data point has been used

in the training data sets as well.

So white box, meaning you have access to the parameters

of say a neural network.

And so that you’re saying that it’s some…

Given that information is possible to some…

So I can give you some examples.

And then another type of attack,

which is even easier to carry out is not a white box model.

It’s more of just a query model where the attacker

only gets to query the machine learning model

and then tries to steal sensitive information

from the original training data.

So, right, so I can give you an example.

In this case, training a language model.

So in our work, in collaboration

with the researchers from Google,

we actually studied the following question.

So at high level, the question is,

as we mentioned, the neural networks

can have very high capacity and they could be remembering

a lot from the training process.

Then the question is, can an attacker actually exploit this

and try to actually extract sensitive information

in the original training data sets

through just querying the learned model

without even knowing the parameters of the model,

like the details of the model

or the architectures of the model and so on.

So that’s a question we set out to explore.

And in one of the case studies, we showed the following.

So we trained a language model over an email data set,

it’s called the Enron email data set.

And the Enron email data set naturally contained

users’ social security numbers and credit card numbers.

So we trained a language model over the data sets

and then we showed that an attacker

by devising some new attacks

by just querying the language model

and without knowing the details of the model,

the attacker actually can extract

the original social security numbers and credit card numbers

that were in the original training data sets.

So get the most sensitive personally identifiable information

from the data set from just querying it.

Right, yeah.

So that’s an example showing why,

even as we train machine learning models,

we have to be really careful

with protecting users’ data privacy.
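As a simplified sketch of the querying side (the actual attack in that work is more sophisticated, using likelihood-guided search rather than plain pattern matching), the idea looks roughly like this; `lm_sample` is a hypothetical query-only interface to the trained model.

```python
import re

# Patterns that look like the secrets being hunted for in model outputs.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

def extract_candidate_secrets(lm_sample, prompts, samples_per_prompt=100):
    """Query the model as a black box and collect memorized-looking strings.
    lm_sample(prompt) is assumed to return a sampled text continuation."""
    found = set()
    for prompt in prompts:  # e.g. "my social security number is"
        for _ in range(samples_per_prompt):
            text = lm_sample(prompt)
            found.update(SSN.findall(text))
            found.update(CARD.findall(text))
    return found
```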

So what are the mechanisms for protecting?

Is there hopeful?

So there’s been recent work on differential privacy,

for example, that provides some hope,

but can you describe some of the ideas?

Right, so that’s actually, right,

so that’s also our finding, that,

we show that in this particular case,

we actually have a good defense.

For the querying case, for the language model case.

So instead of just training a vanilla language model,

instead, if we train a differentially private language model,

then we can still achieve similar utility,

but at the same time, we can actually significantly enhance

the privacy protection of the learned model.

And our proposed attacks actually are no longer effective.

And differential privacy is a mechanism

of adding some noise,

by which you then have some guarantees on the inability

to figure out the presence of a particular person

in the dataset.
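
For reference, the standard formal definition behind that guarantee (a textbook statement, not something specific to this conversation): a randomized mechanism M is epsilon-differentially private if, for any two datasets D and D' differing in one person's record, and any set S of outputs, Pr[M(D) in S] <= e^epsilon * Pr[M(D') in S]. In other words, no single person's presence or absence can change the output distribution by more than a factor of e^epsilon.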

So right, so in this particular case,

what the differential privacy mechanism does

is that it actually adds perturbation

in the training process.

As we know, during the training process,

we are learning the model, we are doing gradient updates,

the weight updates and so on.

And essentially, differential privacy,

a differentially private machine learning algorithm

in this case, will be adding noise

and adding various perturbation during this training process.

To some aspect of the training process.

Right, so then the finally learned model

is differentially private,

and so it can enhance the privacy protection.
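
As a rough sketch of what adding perturbation during training can look like, here is one differentially private gradient step in the style of DP-SGD: clip each example's gradient, average, then add calibrated Gaussian noise. The hyperparameter values below are placeholders, not anything from the conversation.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # 1. Clip each example's gradient to bound any one person's influence.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # 2. Average the clipped gradients over the batch.
    mean_grad = np.mean(clipped, axis=0)
    # 3. Add Gaussian noise calibrated to the clipping norm.
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(clipped),
                             size=mean_grad.shape)
    # 4. Take an ordinary gradient step on the noised average.
    return params - lr * (mean_grad + noise)
```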

So okay, so that’s the attacks and the defense of privacy.

You also talk about ownership of data.

So this is a really interesting idea

that we get to use many services online

seemingly for free because, essentially,

a lot of companies are funded through advertisement.

And what that means is the advertisement works

exceptionally well because the companies are able

to access our personal data,

so they know which advertisements to serve,

to do targeted advertising and so on.

So can you maybe talk about this?

You have painted some nice pictures of the future,

philosophically speaking, a future

where people can have a little bit more control

of their data by owning

and maybe understanding the value of their data

and being able to sort of monetize it

in a more explicit way as opposed to the implicit way

that it’s currently done.

Yeah, I think this is a fascinating topic

and also a really complex topic.

Right, I think there are these natural questions,

who should be owning the data?

And so I can draw one analogy.

So for example, for physical properties,

like your house and so on.

So really, this notion of property rights

is not something where, from day one,

we knew there should be this clear notion

of ownership of property and enforcement of it.

And so actually people have shown

that the establishment and enforcement of property rights

has been a main driver for the economy,

and that it actually really propelled economic growth

even in the earlier stages.

So throughout the history of the development

of the United States or actually just civilization,

the idea of property rights that you can own property.

Right, and then there’s enforcement.

There are institutions,

like governmental enforcement of this,

that have actually been a key driver for economic growth.

And there has even been research, or proposals, saying

that for a lot of the developing countries,

essentially the challenge in growth

is not actually due to the lack of capital.

It’s more due to the lack of this notion of property rights

and the enforcement of property rights.

Interesting, so the presence or absence

of both the concept of property rights

and their enforcement has a strong correlation

to economic growth.

Right, right.

And so you think that same principle could be transferred

to the idea of property ownership

in the case of data.

I think first of all, it’s a good lesson for us

to recognize that these rights and the recognition

and the enforcement of these types of rights

are very, very important for economic growth.

And then if we look at where we are now

and where we are going in the future,

so essentially more and more

is actually moving into the digital world.

And also more and more, I would say,

even the information and assets of a person

are moving into the digital world as well.

It’s the data that the person has generated.

And essentially it’s like in the past

what defines a person, you can say,

right, like oftentimes besides the innate capabilities,

actually it’s the physical properties.

House, car.

Right, that defines a person.

But I think more and more people are starting to realize

that what defines a person is,

more importantly, the data

that the person has generated

or the data about the person.

Like all the way from your political views,

your music taste, your financial information,

your health, a lot of these.

So more and more of the definition of the person

is actually in the digital world.

And currently for the most part, that’s owned implicitly.

People don’t talk about it,

but kind of it’s owned by internet companies.

So it’s not owned by individuals.

Right, there’s no clear notion of ownership of such data.

And also we talk about privacy and so on,

but I think actually clearly identifying the ownership

is a first step.

Once you identify the ownership,

then you can say who gets to define

how the data should be used.

So maybe some users are fine with internet companies

using their data,

as long as the data is used in a certain way

that the user actually consents to or allows.

For example, take recommendation systems.

In some sense, we don’t call them that,

but a recommendation system

is similarly trying to recommend you something,

and users enjoy and can really benefit

from good recommendation systems,

either recommending you better music, movies, news,

even research papers to read.

But of course, there are these targeted ads,

especially in certain cases where people can be manipulated

by them, and that can have really severe consequences.

So essentially, users want their data to be used

to better serve them, and maybe even

to get paid for it, in different settings.

But the thing is that, first of all,

we really need to establish

who gets to decide how the data should be used.

And typically the establishment and clarification

of the ownership will help this

and it’s an important first step.

So if the user is the owner,

then naturally the user gets to define

how the data should be used.

But if you instead say, wait a minute,

users are actually not the owners of this data,

whoever is collecting the data is the owner of the data,

then of course they get to use the data

however they want.

So to really address these complex issues,

we need to go to the root cause.

So it seems fairly clear that, first, we really need to say

who is the owner of the data,

and then the owners can specify

how they want their data to be utilized.

So that’s fascinating.

Most people don’t think about that,

and I think that’s a fascinating thing to think about

and probably fight for.

And I think the economic growth argument

is probably a really strong one.

So that’s a first time I’m kind of at least thinking

about the positive aspect of that ownership

being the longterm growth of the economy,

so good for everybody.

But one possible downside I could see,

sort of to put on my grumpy old grandpa hat,

is that it’s really nice for Facebook and YouTube and Twitter

to all be free.

And if you give people control over their data,

do you think it’s possible

they would not want to hand it over quite so easily?

And so a lot of these companies that rely on mass handover

of data, and therefore provide a mass,

seemingly free service, would then...

so the way the internet looks would completely change

because of the ownership of data,

and we’d lose a lot of the value those services provide.

Do you worry about that?

That’s a very good question.

I think that’s not necessarily the case

in the sense that yes, users can have ownership

of their data, they can maintain control of their data,

but also then they get to decide how their data can be used.

So that’s why I mentioned earlier,

so in this case, if they feel that they enjoy the benefits

of social networks and so on,

and they’re fine with having Facebook, having their data,

but utilizing the data in certain way that they agree,

then they can still enjoy the free services.

But for others, maybe they would prefer

some kind of private version.

And in that case, maybe they can even opt in

to say, I want to pay.

For example, it’s already fairly standard

that you pay for certain subscriptions

so that you don’t get shown ads, right?

So then users essentially can have choices.

And I think we just want to essentially bring out

more about who gets to decide what to do with that data.

I think it’s an interesting idea,

because if you poll people now,

it seems like, I don’t know,

but subjectively, sort of anecdotally speaking,

it seems like a lot of people don’t trust Facebook.

So that’s at least a very popular thing to say

that I don’t trust Facebook, right?

I wonder, if you give people control of their data,

as opposed to sort of signaling to everyone

that they don’t trust Facebook,

I wonder how they would speak with their actual actions.

Like, would they be willing to pay $10 a month for Facebook,

or would they hand over their data?

It’d be interesting to see what fraction of people

would quietly hand over their data to Facebook

to make it free.

I don’t have a good intuition about that.

Like, do you have an intuition

about how many people would use their data effectively

on the market of the internet

by sort of buying services with their data?

Yeah, so that’s a very good question.

I think, so one thing I also want to mention

is that, especially in the press,

the conversation has been very much like

two sides fighting against each other.

On one hand, users can say

that they don’t trust Facebook,

or they delete Facebook.

Yeah, exactly.

Right, and then on the other hand, of course,

the other side also feels,

oh, they are providing a lot of services to users

and users are getting it all for free.

So I actually talk a lot to different companies,

basically to both sides.

So one thing I hope, and this is my hope for this year also,

is that we want to establish a more constructive dialogue

and help people understand

that the problem is much more nuanced

than just these two sides fighting.

Because naturally, there is a tension between the two sides,

between utility and privacy.

So if you want to get more utility, essentially,

like the recommendation system example I gave earlier,

if you want someone to give you a good recommendation,

essentially, whatever that system is,

the system is going to need to know your data

to give you a good recommendation.

But also, of course, at the same time,

we want to ensure that however that data is being handled,

it’s done in a privacy preserving way.

So that, for example, the recommendation system

doesn’t just go around and sell your data

and then cause a lot of bad consequences and so on.

So you want that dialogue to be a little bit more

in the open, a little more nuanced,

and maybe adding control of the data,

ownership of the data, will,

as opposed to this happening in the background,

bring it to the forefront,

so we can actually have more nuanced,

real dialogues about how we trade our data for services.

That’s the hope.

Right, right, yes, at the high level.

And also knowing that there are

technical challenges in addressing the issue.

Like the example that I gave earlier,

it’s really difficult to balance the two,

between utility and privacy.

And that’s also a lot of what I work on,

what my group works on as well:

developing the technologies that are needed

to help strike this balance better,

essentially to help data be utilized

in a privacy-preserving way.

And so we essentially need people to understand

the challenges and also at the same time

to provide the technical abilities

and also regulatory frameworks to help the two sides

to be more in a win-win situation instead of a fight.

Yeah, the fighting thing is,

I think YouTube and Twitter and Facebook

are providing an incredible service to the world

and they’re all making a lot of money

and they’re all making mistakes, of course,

but they’re doing an incredible job

that I think deserves to be applauded

and to some degree,

it’s a cool thing that’s been created,

and it shouldn’t be monolithically fought against,

like, Facebook is evil and so on.

Yeah, it might make mistakes,

but I think it’s an incredible service.

I think it’s world changing.

I mean, I think Facebook’s done a lot of incredible,

incredible things by bringing, for example, identity.

Like allowing people to be themselves,

like their real selves in the digital space

by using their real name and their real picture.

That step was like the first step from the real world

to the digital world.

That was a huge step that perhaps will define

the 21st century in us creating a digital identity.

And there’s a lot of interesting possibilities there

that are positive.

Of course, some things that are negative

and having a good dialogue about that is great.

And I’m glad that people like you

are at the center of that dialogue, so that’s awesome.

Right, and I can also understand.

I think actually in the past,

especially in the past couple of years,

this rising awareness has been helpful.

Like users are also more and more recognizing

that privacy is important to them.

They should, maybe, right,

they should be owners of their data.

I think this definitely is very helpful.

And I think also this type of voice,

together with the regulatory framework and so on,

also helps the companies to essentially put

these types of issues at a higher priority.

And knowing that it is their responsibility, too,

to ensure that users are well protected.

So I think definitely the rising voice is super helpful.

And I think that actually really has brought

the issue of data privacy,

and even this consideration of data ownership,

to the forefront for a really much wider community.

And I think more of this voice is needed,

but I think it’s just that we want to have

a more constructive dialogue to bring both sides together

to figure out a constructive solution.

So another interesting space

where security is really important

is in the space of any kinds of transactions,

but it could be also digital currency.

So can you maybe talk a little bit about blockchain?

And can you tell me what is a blockchain?

Blockchain.

I think the blockchain word itself

is actually very overloaded.

Of course.

In general.

It’s like AI.

Right, yes.

So in general, when we talk about blockchain,

we refer to a distributed system

that operates in a decentralized fashion.

So essentially you have a community of nodes

that come together,

and even though each one may not be trusted,

as long as a certain threshold

of the set of nodes behaves properly,

then the system can essentially achieve certain properties.

For example, in the distributed ledger setting,

you can maintain an immutable log

and you can ensure that, for example,

the transactions actually are agreed upon

and then it’s immutable and so on.

So first of all, what’s a ledger?

So it’s a…

It’s like a database.

It’s like a data entry.

And so a distributed ledger

is something that’s maintained across

or is synchronized across multiple sources, multiple nodes.

Multiple nodes, yes.

And so, how do you keep it secure?

It’s important for a ledger, a database,

to make sure…

So what are the kinds of security vulnerabilities

that you’re trying to protect against

in the context of a distributed ledger?

So in this case, for example,

you don’t want some malicious nodes

to be able to change the transaction logs.

And in certain cases, it’s called double spending,

like you can also cause different views

in different parts of the network and so on.

So the ledger has to represent,

if you’re capturing financial transactions,

it has to represent the exact timing

and the exact occurrence and no duplicates,

all that kind of stuff.

It has to represent what actually happened.

Okay, so what are your thoughts

on the security and privacy of digital currency?

I can’t tell you how many people write to me

asking me to interview various people

in the digital currency space.

There seems to be a lot of excitement there.

And it seems to be, some of it’s, to me,

from an outsider’s perspective, seems like dark magic.

I don’t know how secure…

I think the foundation of digital currencies,

from my perspective, is that you can’t trust anyone,

so you have to create a really secure system.

So can you maybe speak about what your thoughts

on digital currency are in general,

and how we can possibly create financial transactions

and stores of money in the digital space?

So you asked about security and privacy.

So again, as I mentioned earlier,

in security, we actually talk about two main properties,

the integrity and confidentiality.

So there’s another one for availability.

You want the system to be available.

But here, for the question you asked,

let’s just focus on integrity and confidentiality.

So for integrity of this distributed ledger,

essentially, as we discussed,

we want to ensure that the different nodes,

so they have this consistent view,

usually it’s done through what we call a consensus protocol,

and that they establish this shared view on this ledger,

and that you cannot go back and change,

it’s immutable, and so on.

So in this case, then the security often refers

to this integrity property.

And essentially, you’re asking the question,

how much work, how can you attack the system

so that the attacker can change the log, for example?

Change the log, for example.

Right, how hard is it to make an attack like that?

Right, right.

And then that very much depends on the consensus mechanism,

how the system is built, and all that.

So there are different ways

to build these decentralized systems.

And people may have heard about the terms called

like proof of work, proof of stake,

these different mechanisms.

And it really depends on how the system has been built,

and also how much resources,

how much work has gone into the network

to actually say how secure it is.

So for example, people talk about how in Bitcoin,

which is a proof of work system,

so much electricity has been burned.
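
For intuition, here is a toy version of the proof-of-work puzzle in Python. This is a simplification for illustration only; real Bitcoin mining uses double SHA-256 over a specific block-header format.

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    # Search for a nonce whose hash starts with `difficulty` zero hex
    # digits; the expected work grows roughly as 16 ** difficulty.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Rewriting history means redoing this work for every later block,
# which is what makes the log expensive to tamper with.
nonce = mine(b"prev_hash|transactions", difficulty=4)
```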

So there are differences in the different mechanisms

and the implementations of a distributed ledger

used for digital currency.

So there’s Bitcoin, there’s whatever,

there’s so many of them,

and there’s underlying different mechanisms.

And there’s arguments, I suppose,

about which is more effective, which is more secure,

which is more.

And what is needed,

what amount of resources needed

to be able to attack the system?

Like for example, what percentage of the nodes

do you need to control or compromise

in order to, right, to change the log?

And those are things, do you have a sense

if those are things that can be shown theoretically

through the design of the mechanisms,

or does it have to be shown empirically

by having a large number of users using the currency?

I see.

So in general, for each consensus mechanism,

you can actually show theoretically

what is needed to be able to attack the system.

Of course, there can be different types of attacks

as we discussed at the beginning.

And so it’s difficult to give

complete estimates of how much is really needed

to compromise the system.

But in general, right, so there are ways to say

what percentage of the nodes you need to compromise

and so on.
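
For concreteness, the textbook thresholds, stated generally rather than for any particular system: classical Byzantine fault tolerant protocols are typically proven safe only when fewer than one third of the nodes are faulty (n >= 3f + 1), while proof-of-work chains are usually analyzed under the assumption that an attacker controls less than half of the total hash power, which is where the familiar 51 percent attack comes from.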

So we talked about integrity on the security side,

and then you also mentioned the privacy

or the confidentiality side.

Does it have some of the same problems

and therefore some of the same solutions

that you talked about on the machine learning side

with differential privacy and so on?

Yeah, so actually in general on the public ledger

in these public decentralized systems,

actually nothing is private.

So all the transactions posted on the ledger,

anybody can see.

So in that sense, there’s no confidentiality.

So usually what you can do is then

there are the mechanisms that you can build in

to enable confidentiality or privacy of the transactions

and the data and so on.

That’s also some of the work that both my group

and also my startup does as well.

What’s the name of the startup?

Oasis Labs.

And so the confidentiality aspect there

is even though the transactions are public,

you wanna keep some aspects confidential,

like the identity of the people involved in the transactions?

Or what is the hope of keeping things confidential in this context?

So in this case, for example,

you want to enable confidential transactions.

So there are essentially different types of data

that you want to keep private or confidential,

and you can utilize different technologies,

including zero knowledge proofs

and also secure computing techniques,

to hide who is making the transactions to whom,

and the transaction amount.

And in our case, we can also enable

confidential smart contracts,

so that you don’t see the data

or the execution of the smart contract and so on.

And we actually are combining these different technologies,

going back to the earlier discussion we had,

to enable ownership of data and privacy of data and so on.

So at Oasis Labs, we’re actually building

what we call a platform for responsible data economy

to actually combine these different technologies together

and to enable secure and privacy-preserving computation,

and also using the ledger to help provide

an immutable log of users’ ownership of their data

and the policies they want

the usage of the data to adhere to,

and also how the data has been utilized.

So all this together can build

what we call a distributed secure computing fabric

that helps to enable a more responsible data economy.

So it’s a lot of things together.
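
One small building block behind this kind of confidentiality is a commitment scheme: you can post something on a public ledger without revealing it, yet stay bound to it. A toy hash-based sketch in Python follows; this is a generic textbook construction, not necessarily what Oasis Labs actually uses.

```python
import hashlib, os

def commit(value: bytes):
    # Publish the digest now: it reveals nothing practical about `value`
    # (hiding), and cannot later be opened to a different value (binding).
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(digest: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == digest

digest, nonce = commit(b"transfer 42 tokens to Bob")
assert verify(digest, nonce, b"transfer 42 tokens to Bob")
```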

Yeah, wow, that was eloquent.

Okay, you’re involved in so much amazing work

that we’ll never be able to get to,

but I have to ask at least briefly about program synthesis,

which at least in a philosophical sense captures

much of the dreams of what’s possible in computer science

and the artificial intelligence.

First, let me ask, what is program synthesis

and can neural networks be used to learn programs from data?

So can this be learned?

Can some aspect of the synthesis be learned?

So program synthesis is about teaching computers

to write code, to program.

And I think that’s one of our ultimate dreams or goals.

I think Andreessen talked about software eating the world.

So I say, once we teach computers to write the software,

how to write programs, then I guess computers

will be eating the world by transitivity.

Yeah, exactly.

So yeah, and also for me actually,

when I shifted from security

to more AI and machine learning,

program synthesis and adversarial machine learning

are the two fields that I particularly focus on.

Program synthesis is one of the first questions

that I actually started investigating.

Just as a question, I guess from the security side,

you’re looking for holes in programs,

so I can at least see a small connection,

but where did your interest in program synthesis come from?

Because it’s such a fascinating, such a big,

such a hard problem in the general case.

Why program synthesis?

So the reason for that is actually,

when I shifted my focus from security

into AI and machine learning,

one of my main motivations at the time

was that even though I had been doing a lot of work

in security and privacy,

I had always been fascinated

by building intelligent machines.

And that was really my main motivation

to spend more time in AI and machine learning:

I really want to figure out

how we can build intelligent machines.

And to help us towards that goal,

program synthesis is really,

I would say, one of the best domains to work on.

I actually call it like program synthesis

is like the perfect playground

for building intelligent machines

and for artificial general intelligence.

Yeah, well, it’s also in that sense,

not just a playground,

I guess it’s the ultimate test of intelligence

because I think if you can generate sort of neural networks

can learn good functions

and they can help you out in classification tasks,

but to be able to write programs,

that’s the epitome from the machine side.

That’s the same as passing the Turing test

in natural language, but with programs,

it’s able to express complicated ideas

to reason through ideas and boil them down to algorithms.

Yes, exactly, exactly.

Incredible, so can this be learned?

How far are we?

Is there hope?

What are the open challenges?

Yeah, very good questions.

We are still at an early stage,

but already I think we have seen a lot of progress.

I mean, definitely we have existence proof,

just like humans can write programs.

So there’s no reason why computers cannot write programs.

So I think that’s definitely an achievable goal

is just how long it takes.

And even today, the program synthesis community,

especially program synthesis via learning,

what we call the neural program synthesis community,

is still very small, but the community has been growing

and we have seen a lot of progress.

And in limited domains, I think actually program synthesis

is ripe for real world applications.

So actually it was quite amazing.

I was giving a talk here at this Rework conference.

Rework Deep Learning Summit.

So I gave another talk

at the previous Rework conference

on deep reinforcement learning.

And then I met someone from a startup,

the CEO of the startup.

And when he saw my name, he recognized it.

And he actually said that one of our papers

had actually become a key product in their startup.

And that was program synthesis.

In that particular case, it was natural language translation:

translating a natural language description into SQL queries.

Oh, wow, that direction, okay.

Right, so yeah, so in program synthesis,

in limited, well-specified domains,

we can actually already see really great progress

and applicability in the real world.

So domains like, I mean, as an example,

you said natural language,

being able to express something through just normal language

and it converts it into a database SQL query.

Right.

And how solved of a problem is that?

Because that seems like a really hard problem.

Again, in limited domains, actually it can work pretty well.
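
To give a flavor of the task, here is the shape of an input/output pair such a system handles. The question, the schema, and the expected query are hypothetical stand-ins for illustration, not examples from the actual work discussed.

```python
# Hypothetical example of the natural-language-to-SQL task.
question = "How many employees joined after 2015 in each department?"
schema = ["employees(name, department, join_year)"]

# A synthesizer would be expected to produce something like:
expected_sql = (
    "SELECT department, COUNT(*) FROM employees "
    "WHERE join_year > 2015 GROUP BY department"
)
```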

And now this is also a very active domain of research.

At the time when he saw our paper,

I think we were the state of the art on that task.

And since then, there has actually been more work,

with even more sophisticated data sets.

So I wouldn’t be surprised

if more of this type of technology

really gets into the real world.

That’s exciting.

In the near term.

Being able to learn in the space of programs

is super exciting.

I still, yeah, I’m still skeptical

cause I think it’s a really hard problem,

but I would love to see progress.

And also I think in terms of the,

you asked about open challenges.

I think the domain is full of challenges

and in particular also we want to see

how we should measure the progress in the space.

And I would say there are three main metrics.

So one is the complexity of the program

that we can synthesize.

And that actually has clear measures,

and you can just look at the past publications.

And even like, for example,

I was at the recent NeurIPS conference.

Now there’s actually a fairly sizable session

dedicated to program synthesis, which is…

Or even Neural programs.

Right, right, right, which is great.

And we continue to see the increase.

What does sizable mean?

I like the word sizable, it’s five people.

It’s still a small community, but it is growing.

And they will all win Turing Awards one day, I like it.

Right, so we can clearly see an increase

in the complexity of the programs that these…

We can synthesize.

Sorry, is it the complexity of the actual text

of the program or the running time complexity?

Which complexity are we…

How…

The complexity of the task to be synthesized

and the complexity of the actual synthesized programs.

So the lines of code even, for example.

Okay, I got you.

But it’s not the theoretical upper bound

of the running time of the algorithm kind of thing.

Okay, got it.

And you can see the complexity decreasing already.

Oh, no, meaning we want to be able to synthesize

more and more complex programs, bigger and bigger programs.

So we want to see that, we want to increase

the complexity of this.

I got you, so I have to think through,

because I thought of complexity as,

you want to be able to accomplish the same task

with a simpler and simpler program.

I see, I see.

No, we are not doing that.

It’s more about how complex a task

we can synthesize programs for.

Yeah, got it, being able to synthesize programs,

learn them for more and more difficult tasks.

So for example, initially, our first work

in program synthesis was to translate natural language

descriptions into really simple programs called IFTTT,

if this, then that.

So given a trigger condition,

what is the action you should take?

So that program is super simple.

You just identify the trigger conditions and the action.
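
For a sense of how small that program space is, an if-this-then-that program is essentially one trigger-action pair extracted from the description. This toy encoding is just illustrative, not the exact representation used in the paper.

```python
description = "If I'm tagged in a photo on Facebook, save it to Dropbox"

# The synthesized "program" is just a trigger condition plus an action:
program = {
    "trigger": ("Facebook", "you_are_tagged_in_a_photo"),
    "action":  ("Dropbox",  "add_file_from_url"),
}
```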

And then later on, with SQL queries,

it gets more complex.

And then also, we started to synthesize programs

with loops and, you know.

Oh no, and if you could synthesize recursion,

it’s all over.

Right, actually, one of our works actually

is on learning recursive neural programs.

Oh no.

But anyway, anyway, so that’s one is complexity,

and the other one is generalization.

Like when we train or learn a program synthesizer,

in this case, a neural program synthesizer,

then you want it to generalize.

For a large number of inputs.

Right, so to be able to generalize

to previously unseen inputs.

Got it.

And so, right, some of the work we did earlier

on learning recursive neural programs

actually showed that recursion

is important to learn.

And if you have recursion,

then for a certain set of tasks,

we can actually show that you can actually

have perfect generalization.

So, right, so that won a best paper award

at ICLR earlier.
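
The intuition for why recursion buys generalization: if the learner captures one local step plus a recursive call, the same program handles inputs of any length. Here is a toy analogue in ordinary Python (not a neural program), assuming digit lists with the most significant digit first.

```python
def add_digits(a, b, carry=0):
    # One length-independent local step (add the last digits) plus a
    # recursive call on the rest; because nothing depends on the input
    # length, it generalizes to arbitrarily long numbers.
    if not a and not b:
        return [carry] if carry else []
    s = (a.pop() if a else 0) + (b.pop() if b else 0) + carry
    return add_digits(a, b, s // 10) + [s % 10]

assert add_digits([1, 2, 3], [4, 5]) == [1, 6, 8]  # 123 + 45 = 168
```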

So that’s one example where we want to learn

these neural programs that can generalize better.

But that works for certain tasks, certain domains,

and there’s the question of how we can essentially

develop more techniques that can generalize

across a wider set of domains and so on.

So that’s another area.

And then the third challenge, I think,

is not just for program synthesis,

it also cuts across other fields

in machine learning, including

deep reinforcement learning in particular.

It’s adaptation: we want to be able

to learn from past tasks and training and so on

to be able to solve new tasks.

So for example, in program synthesis today,

we still are working in the setting

where given a particular task,

we train the model to solve this particular task.

But that’s not how humans work.

The whole point is, you train a human,

and then they can program to solve new tasks.

Right, exactly.

And just like in deep reinforcement learning,

we don’t want to just train an agent

to play a particular game,

either it’s Atari or it’s Go or whatever.

We want to train these agents

that can essentially extract knowledge

from the past learning experience

to be able to adapt to new tasks and solve new tasks.

And I think this is particularly important

for program synthesis.

Yeah, that’s the whole dream of program synthesis

is you’re learning a tool that can solve new problems.

Right, exactly.

And I think that’s a particular domain

that as a community, we need to put more emphasis on.

And I hope that we can make more progress there as well.

Awesome.

There’s a lot more to talk about.

Let me ask about something else.

We talked about rich representations;

you had a rich life journey.

You did your bachelor’s in China

and your master’s and PhD in the United States,

at CMU and Berkeley.

Are there interesting differences?

I told you I’m Russian.

I think there are a lot of interesting differences

between Russia and the United States.

Are there in your eyes, interesting differences

between the two cultures from the silly romantic notion

of the spirit of the people to the more practical notion

of how research is conducted that you find interesting

or useful in your own work of having experienced both?

That’s a good question.

I think, so I studied in China for my undergraduates

and that was more than 20 years ago.

So it’s been a long time.

Is there echoes of that time in you?

Things have changed a lot.

Actually, it’s interesting.

I think maybe something that’s even more different

about my experience, compared to a lot of

computer science researchers and practitioners,

is that for my undergrad, I actually studied physics.

so for my undergrad, I actually studied physics.

Nice, very nice.

And then I switched to computer science in graduate school.

What happened?

Is there another possible universe

where you could have become a theoretical physicist

at Caltech or something like that?

That’s very possible, some of my undergrad classmates,

then they later on studied physics,

got their PhD in physics from these schools,

from top physics programs.

So you switched to, I mean,

from that experience of doing physics in your bachelor’s,

what made you decide to switch to computer science

and computer science at arguably the best university,

one of the best universities in the world

for computer science with Carnegie Mellon,

especially for grad school and so on.

So what, second only to MIT, just kidding.

Okay, I had to throw that in there.

No, what was the choice like

and what was the move to the United States like?

What was that whole transition?

And if you remember, if there’s still echoes

of some of the spirit of the people of China

in you now.

Right, right, yeah.

It’s like three questions in one.

Yes, I know.

I’m sorry.

No, that’s okay.

So yes, so I guess, okay,

so first transition from physics to computer science.

So when I first came to the United States,

I was actually in the physics PhD program at Cornell.

I was there for one year

and then I switched to computer science

and then I was in the PhD program at Carnegie Mellon.

So, okay, so the reasons for switching.

So one thing, so that’s why I also mentioned

about this difference in backgrounds

about having studied physics first in my undergrad.

I actually really, I really did enjoy

my undergrad’s time and education in physics.

I think that actually really helped me

in my future work in computer science.

Actually, even for machine learning,

a lot of the machine learning stuff,

the core machine learning methods,

many of them actually came from physics.

Statistical.

To be honest, most of everything came from physics.

Right, but anyway, so when I studied physics,

I was, I think I was really attracted to physics.

It was, it’s really beautiful.

And I actually call it, physics is the language of nature.

And I actually clearly remember, like, one moment

in my undergrads, like I did my undergrad in Tsinghua

and I used to study in the library.

And I clearly remember, like, one day

I was sitting in the library and I was, like,

writing on my notes and so on.

And I got so excited that I realized

that really just from a few simple axioms,

a few simple laws, I can derive so much.

It’s almost like I can derive the rest of the world.

Yeah, the rest of the universe.

Yes, yes, so that was, like, amazing.

Do you think you, have you ever seen

or do you think you can rediscover

that kind of power and beauty in computer science

in the world that you…

So, that’s very interesting.

So that gets to, you know, the transition

from physics to computer science.

It’s quite different.

For physics in grad school, actually, things changed.

So one is I started to realize that

when I started doing research in physics,

at the time I was doing theoretical physics.

And a lot of it, you still have the beauty,

but it’s very different.

So I had to actually do a lot of the simulation.

So essentially I was actually writing,

in some cases writing Fortran code.

Good old Fortran, yeah.

To actually, right, do simulations and so on.

That was not exactly what I enjoyed doing.

And also at the time from talking with the senior students,

senior students in the program,

I realized many of the students actually were going off

to like Wall Street and so on.

So, and I’ve always been interested in computer science

and actually essentially taught myself

C programming.

Program?

Right, and so on.

At which, when?

In college.

In college somewhere?

In the summer.

For fun, physics major, learning to do C programming.

Beautiful.

Actually it’s interesting, in physics at the time,

I think now the program probably has changed,

but at the time, really the only class we had

related to computer science education

was an introduction to, I forgot,

computer science or computing, in Fortran 77.

There’s a lot of people that still use Fortran.

I’m actually, if you’re a programmer out there,

I’m looking for an expert to talk to about Fortran.

They seem to, there’s not many,

but there’s still a lot of people that still use Fortran

and still a lot of people that use COBOL.

But anyway, so then I realized,

instead of just doing programming

for doing simulations and so on,

that I may as well just change to computer science.

And also one thing I really liked,

and that’s a key difference between the two,

is in computer science it’s so much easier

to realize your ideas.

If you have an idea, you write it up, you code it up,

and then you can see it actually, right?

Exactly.

Running and you can see it.

You can bring it to life quickly.

Bring it to life.

Whereas in physics, if you have a good theory,

you have to wait for the experimentalists

to do the experiments and to confirm the theory,

and things just take so much longer.

And also the reason in physics I decided to do

theoretical physics was because I had my experience

with experimental physics.

First, you have to fix the equipment.

You spend most of your time fixing the equipment first.

Super expensive equipment, so there’s a lot of,

yeah, you have to collaborate with a lot of people.

Takes a long time.

Just takes really, right, much longer.

Yeah, it’s messy.

Right, so I decided to switch to computer science.

And one thing I think maybe people have realized

is that for people who study physics,

actually it’s very easy for physicists

to change to do something else.

I think physics provides a really good training.

And yeah, so actually it was fairly easy

to switch to computer science.

But one thing, going back to your earlier question,

so one thing I actually did realize,

so there is a big difference between computer science

and physics, where physics you can derive

the whole universe from just a few simple laws.

And computer science, given that a lot of it

is defined by humans, the systems are defined by humans,

and it’s artificial, like essentially you create

a lot of these artifacts and so on.

It’s not quite the same.

You don’t derive the computer systems

with just a few simple laws.

You actually have to see there is historical reasons

why a system is built and designed one way

versus the other.

There’s a lot more complexity, less elegant simplicity

of E equals MC squared that kind of reduces everything

down to those beautiful fundamental equations.

But what about the move from China to the United States?

Is there anything that still stays in you

that contributes to your work,

the fact that you grew up in another culture?

So yes, I think especially back then

it’s very different from now.

So now they actually, I see these students

coming from China, and even undergrads,

actually they speak fluent English.

It was just amazing.

And they have already understood so much of the culture

in the US and so on.

To you, it was all foreign?

It was a very different time.

At the time, actually, we didn’t even have easy access

to email, not to mention about the web.

I remember I had to go to specific privileged server rooms

to use email, and hence, at the time,

we had much less knowledge

about the Western world.

And actually at the time I didn’t know,

actually in the US, the West Coast weather

is much better than the East Coast.

Yeah, things like that, actually.

It’s very interesting.

But now it’s so different.

At the time, I would say there’s also

a bigger cultural difference,

because there was so much less opportunity

for shared information.

So it’s such a different time and world.

So let me ask maybe a sensitive question.

I’m not sure, but I think you and I

are in similar positions.

I’ve been here for already 20 years as well,

and looking at Russia from my perspective,

and you looking at China.

In some ways, it’s a very distant place,

because it’s changed a lot.

But in some ways you still have echoes,

you still have knowledge of that place.

The question is, China’s doing a lot

of incredible work in AI.

Do you see, please tell me

there’s an optimistic picture you see,

where the United States and China

can collaborate and sort of grow together

in the development of AI.

There are different values in terms

of the role of government and so on,

of ethical, transparent, secure systems.

We see it differently in the United States

a little bit than China,

but we’re still trying to work it out.

Do you see the two countries being able

to successfully collaborate and work

in a healthy way without sort of fighting

and making it an AI arms race kind of situation?

Yeah, I believe so.

I think science has no border,

and the advancement of the technology helps everyone,

helps the whole world.

And so I certainly hope that the two countries

will collaborate, and I certainly believe so.

Do you have any reason to believe so

except being an optimist?

So first, again, like I said, science has no borders.

And especially in…

Science doesn’t know borders?

Right.

And you believe that will,

in the former Soviet Union during the Cold War…

So that’s, yeah.

So that’s the other point I was going to mention

is that especially in academic research,

everything is public.

Like we write papers, we open source codes,

and all this is in the public domain.

It doesn’t matter whether the person is in the US,

in China, or some other parts of the world.

They can go on archive

and look at the latest research and results.

So that openness gives you hope.

Yes. Me too.

And that’s also how, as a world,

we make progress the best.

So, I apologize for the romanticized question,

but looking back,

what would you say was the most transformative moment

in your life that

maybe made you fall in love with computer science?

You said physics.

You remember there was a moment

where you thought you could derive

the entirety of the universe.

Was there a moment that you really fell in love

with the work you do now,

from security to machine learning,

to program synthesis?

So maybe, as I mentioned, actually, in college,

one summer I just taught myself programming in C.

Yes.

And you just read a book,

and then you’re like…

Don’t tell me you fell in love with computer science

by programming in C.

Remember I mentioned one of the draws

for me to computer science is how easy it is

to realize your ideas.

So once I read a book,

I taught myself how to program in C.

Immediately, what did I do?

I programmed two games.

One’s just simple, like it’s a Go game,

like it’s a board, you can move the stones and so on.

And the other one, I actually programmed a game

that’s like a 3D Tetris.

It turned out to be a super hard game to play.

Because instead of just the standard 2D Tetris,

it’s actually a 3D thing.

But I realized, wow,

I just had these ideas to try it out,

and then, yeah, you can just do it.

And so that’s when I realized, wow, this is amazing.

Yeah, you can create yourself.

Yes, yes, exactly.

From nothing to something

that’s actually out in the real world.

So let me ask…

Right, I think with your own hands.

Let me ask a silly question,

or maybe the ultimate question.

What is to you the meaning of life?

What gives your life meaning, purpose,

fulfillment, happiness, joy?

Okay, these are two different questions.

Very different, yeah.

It’s usually that you ask this question.

Maybe this question is probably the question

that has followed me and followed my life the most.

Have you discovered anything,

any satisfactory answer for yourself?

Is there something you’ve arrived at?

You know, there’s a moment…

I’ve talked to a few people who have faced,

for example, a cancer diagnosis,

or faced their own mortality,

and that seems to change their view of things.

It seems to be a catalyst for them

removing most of the crap.

Of seeing that most of what they’ve been doing

is not that important,

and really reducing it into saying, like,

here’s actually the few things that really give meaning.

Mortality is a really powerful catalyst for that,

it seems like.

Facing mortality, whether it’s your parents dying

or somebody close to you dying,

or facing your own death for whatever reason,

or cancer and so on.

So yeah, so in my own case,

I didn’t need to face mortality

to try to ask that question.

And I think there are a couple things.

So one is, like, who should be defining

the meaning of your life, right?

Is there some kind of even greater thing than you

that should define the meaning of your life?

So for example, when people say that

searching the meaning for your life,

is there some outside voice,

or is there something outside of you

who actually tells you, you know…

So people talk about, oh, you know,

this is what you have been born to do, right?

Like, this is your destiny.

So who, right, so that’s one question,

like, who gets to define the meaning of your life?

Should you be finding some other thing,

some other factor, to define this for you?

Or is it actually something

that you entirely define yourself,

and it can be very arbitrary?

Yeah, so an inner voice or an outer voice,

whether it could be spiritual or religious, too, with God,

or some other components of the environment outside of you,

or just your own voice.

Do you have an answer there?

So, okay, so for that, I have an answer.

And through, you know, the long period of time

of thinking and searching,

even searching through outsides, right,

you know, voices or factors outside of me.

So that, I have an answer.

I’ve come to the conclusion and realization

that it’s you yourself that defines the meaning of life.

Yeah, that’s a big burden, though, isn’t it?

I mean, yes and no, right?

So then you have the freedom to define it.

Yes.

And another question is, like,

what does it really mean by the meaning of life?

Right.

And also, whether the question even makes sense.

Absolutely, and you say it’s somehow distinct from happiness.

So meaning is something much deeper

than just any kind of emotional,

any kind of contentment or joy or whatever.

It might be much deeper.

And then you have to ask, what is deeper than that?

What is there at all?

And then the question starts being silly.

Right, and also you can say it’s deeper,

but you can also say it’s shallower,

depending on how people want to define

the meaning of their life.

So for example, most people don’t even think

about this question.

Then the meaning of life to them

doesn’t really matter that much.

And also, whether knowing the meaning of life,

whether it actually helps your life to be better

or whether it helps your life to be happier,

these actually are open questions.

It’s not, right?

Of course, most questions are open.

I tend to think that just asking the question,

as you mentioned, as you’ve done for a long time,

is the only thing, that there is no answer,

and asking the question is a really good exercise.

I mean, I have this, for me personally,

I’ve had a kind of feeling that creation is,

like for me has been very fulfilling.

And it seems like my meaning has been to create.

And I’m not sure what that is.

Like I don’t have, I’m single and I don’t have kids.

I’d love to have kids, but also, this sounds creepy,

going back to what you said about creation,

I see programs as little creations.

I see robots as little creations.

I think those bring, and then ideas,

theorems are creations.

And those somehow intrinsically, like you said,

bring me joy.

I think they do to a lot of, at least scientists,

but I think they do to a lot of people.

So that, to me, if I had to force the answer to that,

I would say creating new things yourself.

For you.

For me, for me, for me.

I don’t know, but like you said, it keeps changing.

Is there some answer to that?

And some people, they can, I think,

they may say it’s experience, right?

Like their meaning of life,

they just want to experience

to the richest and fullest they can.

And a lot of people do take that path.

Yes, seeing life as actually a collection of moments

and then trying to make the richest possible sets,

fill those moments with the richest possible experiences.

Right.

And for me, I think it’s certainly,

we do share a lot of similarity here.

So creation is also really important for me,

even from the things I’ve already talked about,

even like writing papers,

and these are all creations as well.

And I have not quite thought

whether that is really the meaning of my life.

Like in a sense, also then maybe like,

what kind of things should you create?

There are so many different things that you could create.

And also you can say, another view is maybe growth.

It’s related to, but different from, experience.

Growth is also maybe a type of meaning of life.

It’s just, you try to grow every day,

try to be a better self every day.

And also ultimately, we are here,

it’s part of the overall evolution.

Right, the world is evolving and it’s growing.

Isn’t it funny that the growth seems to be

more important

than the thing you’re growing towards?

It’s like, it’s not the goal, it’s the journey to it.

It’s almost like when you submit a paper,

there’s a sort of depressing element to it,

not submitting the paper itself,

but when that whole project is over.

I mean, there’s the gratitude,

there’s the celebration and so on,

but you’re usually immediately looking for the next thing

or the next step, right?

It’s not that, the end of it is not the satisfaction,

it’s the hardship, the challenge you have to overcome,

the growth through the process.

It’s somehow probably deeply within us,

the same thing that drives the evolutionary process

is somehow within us,

with everything the way we see the world.

Since you’re thinking about these,

so you’re still in search of an answer.

I mean, yes and no,

in the sense that I think for people

who really dedicate time to search for the answer

to ask the question, what is the meaning of life?

It does not necessarily bring you happiness.

Yeah.

It’s a question, we can say, right?

Like whether it’s even a well-defined question.

And, but on the other hand,

given that you get to answer it yourself,

you can define it yourself,

then sure, I can just give it an answer.

And in that sense, yes, it can help.

Like we discussed, right?

If you say, oh, then my meaning of life is to create

or to grow, then yes, then I think they can help.

But how do you know that that is really the meaning of life

or the meaning of your life?

It’s like there’s no way for you

to really answer the question.

Sure, but something about that certainty is liberating.

So it might be an illusion, you might not really know,

you might be just convincing yourself falsely,

but being sure that that’s the meaning,

there’s something liberating in that.

There’s something freeing in knowing this is your purpose.

So you can fully give yourself to that.

Without, you know, for a long time,

you know, I thought like, isn’t it all relative?

Like why, how do we even know what’s good and what’s evil?

Like isn’t everything just relative?

Like how do we know, you know,

the question of meaning is ultimately

the question of why do anything?

Why is anything good or bad?

Why is anything valuable and so on?

Exactly.

Then you start to, I think just like you said,

I think it’s a really useful question to ask,

but if you ask it for too long and too aggressively.

It may not be so productive.

It may not be productive and not just for traditionally

societally defined success, but also for happiness.

It seems like asking the question about the meaning of life

is like a trap.

We’re destined to be asking.

We’re destined to look up to the stars

and ask these big why questions

we’ll never be able to answer,

but we shouldn’t get lost in them.

I think that’s probably the,

that’s at least the lesson I picked up so far.

On that topic.

Oh, let me just add one more thing.

So it’s interesting.

So sometimes, yes, it can help you to focus.

So when I shifted my focus more from security

to AI and machine learning,

at the time, actually one of the main reasons

that I did that was because at the time,

I thought the meaning of my life

and the purpose of my life is to build intelligent machines.

And then your inner voice said

that this is the right journey to take,

to build intelligent machines,

and you fully realized it,

you took a really legitimate, big step

to become one of the world-class researchers,

to actually go down that journey.

Yeah, that’s profound.

That’s profound.

I don’t think there’s a better way

to end a conversation than talking for a while

about the meaning of life.

Dawn is a huge honor to talk to you.

Thank you so much for talking today.

Thank you, thank you.

Thanks for listening to this conversation with Dawn Song

and thank you to our presenting sponsor, Cash App.

Please consider supporting the podcast

by downloading Cash App and using code LexPodcast.

If you enjoy this podcast, subscribe on YouTube,

review it with five stars on Apple Podcast,

support it on Patreon,

or simply connect with me on Twitter at LexFriedman.

And now let me leave you with some words about hacking

from the great Steve Wozniak.

A lot of hacking is playing with other people,

you know, getting them to do strange things.

Thank you for listening and hope to see you next time.