Lex Fridman Podcast - #1 - Max Tegmark: Life 3.0

As part of MIT course 6S099, Artificial General Intelligence,

I’ve gotten the chance to sit down with Max Tegmark.

He is a professor here at MIT.

He’s a physicist, spent a large part of his career

studying the mysteries of our cosmological universe.

But he’s also studied and delved into the beneficial

possibilities and the existential risks

of artificial intelligence.

Amongst many other things, he is the cofounder

of the Future of Life Institute, author of two books,

both of which I highly recommend.

First, Our Mathematical Universe.

Second is Life 3.0.

He’s truly an out of the box thinker and a fun personality,

so I really enjoy talking to him.

If you’d like to see more of these videos in the future,

please subscribe and also click the little bell icon

to make sure you don’t miss any videos.

Also, Twitter, LinkedIn, agi.mit.edu

if you wanna watch other lectures

or conversations like this one.

Better yet, go read Max’s book, Life 3.0.

Chapter seven on goals is my favorite.

It’s really where philosophy and engineering come together

and it opens with a quote by Dostoevsky.

The mystery of human existence lies not in just staying alive

but in finding something to live for.

Lastly, I believe that every failure rewards us

with an opportunity to learn

and in that sense, I’ve been very fortunate

to fail in so many new and exciting ways

and this conversation was no different.

I’ve learned about something called

radio frequency interference, RFI, look it up.

Apparently, music and conversations

from local radio stations can bleed into the audio

that you’re recording in such a way

that it almost completely ruins that audio.

It’s an exceptionally difficult sound source to remove.

So, I’ve gotten the opportunity to learn

how to avoid RFI in the future during recording sessions.

I’ve also gotten the opportunity to learn

how to use Adobe Audition and iZotope RX 6

to do some noise reduction, some audio repair.

Of course, this is an exceptionally difficult noise

to remove.

I am an engineer.

I’m not an audio engineer.

Neither is anybody else in our group

but we did our best.

Nevertheless, I thank you for your patience

and I hope you’re still able to enjoy this conversation.

Do you think there’s intelligent life

out there in the universe?

Let’s open up with an easy question.

I have a minority view here actually.

When I give public lectures, I often ask for a show of hands

who thinks there’s intelligent life out there somewhere else

and almost everyone put their hands up

and when I ask why, they’ll be like,

oh, there’s so many galaxies out there, there’s gotta be.

But I’m a numbers nerd, right?

So when you look more carefully at it,

it’s not so clear at all.

When we talk about our universe, first of all,

we don’t mean all of space.

We actually mean, I don’t know,

you can throw me the universe if you want,

it’s behind you there.

We simply mean the spherical region of space

from which light has had time to reach us so far

during the 13.8 billion years since our Big Bang.

There’s more space here but this is what we call a universe

because that’s all we have access to.

So is there intelligent life here

that’s gotten to the point of building telescopes

and computers?

My guess is no, actually.

The probability of it happening on any given planet

is some number we don’t know what it is.

And what we do know is that the number can’t be super high

because there’s over a billion Earth like planets

in the Milky Way galaxy alone,

many of which are billions of years older than Earth.

And aside from some UFO believers,

there isn’t much evidence

that any super advanced civilization has come here at all.

And so that’s the famous Fermi paradox, right?

And then if you work the numbers,

what you find is that if you have no clue

what the probability is of getting life on a given planet,

so it could be 10 to the minus 10, 10 to the minus 20,

or 10 to the minus two, or any power of 10

is sort of equally likely

if you wanna be really open minded,

that translates into it being equally likely

that our nearest neighbor is 10 to the 16 meters away,

10 to the 17 meters away, 10 to the 18.
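
To make the numbers nerd argument concrete, here is a minimal back-of-the-envelope sketch (my own illustration with order-of-magnitude placeholder figures, not a calculation from the conversation) of how a log-uniform prior over the per-planet probability of evolving tech-building life spreads the expected nearest-neighbor distance across many powers of ten:

```python
import numpy as np

# Rough placeholder assumptions (not figures quoted in the conversation):
# ~1e9 Earth-like planets spread through a region roughly 1e21 meters across.
PLANETS = 1e9
REGION_SIZE_M = 1e21
planet_density = PLANETS / REGION_SIZE_M**3          # planets per cubic meter

# "Open-minded" prior: every power of ten for the per-planet probability
# of tech-building life is treated as equally likely.
for exponent in range(-2, -21, -3):
    p_life = 10.0 ** exponent
    civ_density = p_life * planet_density             # civilizations per m^3
    nearest_m = (3.0 / (4.0 * np.pi * civ_density)) ** (1.0 / 3.0)
    print(f"p = 1e{exponent:4d}  ->  nearest neighbor ~ 10^{np.log10(nearest_m):.0f} m")
```

Each factor-of-a-thousand drop in the assumed probability pushes the expected nearest neighbor out by roughly another power of ten, which is the sense in which 10 to the 16, 10 to the 17, 10 to the 18 meters all end up about equally likely under that prior.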

By the time you get much less than 10 to the 16 already,

we pretty much know there is nothing else that close.

And when you get beyond 10...

Because they would have discovered us.

Yeah, they would have discovered us long ago,

or if they’re really close,

we would have probably noted some engineering projects

that they’re doing.

And if it’s beyond 10 to the 26 meters,

that’s already outside of here.

So my guess is actually that we are the only life in here

that’s gotten the point of building advanced tech,

which I think puts a lot of responsibility

on our shoulders to not screw up.

I think people who take for granted

that it’s okay for us to screw up,

have an accidental nuclear war or go extinct somehow

because there’s a sort of Star Trek like situation out there

where some other life forms are gonna come and bail us out

and it doesn’t matter as much.

I think they’re lulling us into a false sense of security.

I think it’s much more prudent to say,

let’s be really grateful

for this amazing opportunity we’ve had

and make the best of it just in case it is down to us.

So from a physics perspective,

do you think intelligent life,

so it’s unique from a sort of statistical view

of the size of the universe,

but from the basic matter of the universe,

how difficult is it for intelligent life to come about?

The kind of advanced, tech building life,

it’s implied in your statement that it’s really difficult

to create something like a human species.

Well, I think what we know is that going from no life

to having life that can do our level of tech,

and then going beyond that

to actually settling our whole universe with life,

there’s some major roadblock there,

some great filter, as it’s sometimes called,

which is tough to get through.

That roadblock is either behind us

or in front of us.

I’m hoping very much that it’s behind us.

I’m super excited every time we get a new report from NASA

saying they failed to find any life on Mars.

I’m like, yes, awesome.

Because that suggests that the hard part,

maybe it was getting the first ribosome

or some very low level kind of stepping stone

so that we’re home free.

Because if that’s true,

then the future is really only limited

by our own imagination.

It would be much suckier if it turns out

that this level of life is kind of a dime a dozen,

but maybe there’s some other problem.

Like as soon as a civilization gets advanced technology,

within a hundred years,

they get into some stupid fight with themselves and poof.

That would be a bummer.

Yeah, so you’ve explored the mysteries of the universe,

the cosmological universe, the one that we’re sitting in today.

I think you’ve also begun to explore the other universe,

which is sort of the mystery,

the mysterious universe of the mind of intelligence,

of intelligent life.

So is there a common thread between your interest

or the way you think about space and intelligence?

Oh yeah, when I was a teenager,

I was already very fascinated by the biggest questions.

And I felt that the two biggest mysteries of all in science

were our universe out there and our universe in here.

So it’s quite natural after having spent

a quarter of a century on my career,

thinking a lot about this one,

that I’m now indulging in the luxury

of doing research on this one.

It’s just so cool.

I feel the time is ripe now

for greatly deepening our understanding of this one,

and to just start exploring it.

Yeah, because I think a lot of people view intelligence

as something mysterious that can only exist

in biological organisms like us,

and therefore dismiss all talk

about artificial general intelligence as science fiction.

But from my perspective as a physicist,

I am a blob of quarks and electrons

moving around in a certain pattern

and processing information in certain ways.

And this is also a blob of quarks and electrons.

I’m not smarter than the water bottle

because I’m made of different kinds of quarks.

I’m made of up quarks and down quarks,

exact same kind as this.

There’s no secret sauce, I think, in me.

It’s all about the pattern of the information processing.

And this means that there’s no law of physics

saying that we can’t create technology,

which can help us by being incredibly intelligent

and help us crack mysteries that we couldn’t crack ourselves.

In other words, I think we’ve really only seen

the tip of the intelligence iceberg so far.

Yeah, so the perceptronium.

Yeah.

So you coined this amazing term.

It’s a hypothetical state of matter,

sort of thinking from a physics perspective,

what is the kind of matter that can help,

as you’re saying, subjective experience emerge,

consciousness emerge.

So how do you think about consciousness

from this physics perspective?

Very good question.

So again, I think many people have underestimated

our ability to make progress on this

by convincing themselves it’s hopeless

because somehow we’re missing some ingredient that we need.

There’s some new consciousness particle or whatever.

I happen to think that we’re not missing anything,

and that the interesting thing about consciousness,

the thing that gives us this amazing subjective experience

of colors and sounds and emotions,

is rather something at the higher level,

about the patterns of information processing.

And that’s why I like to think about this idea

of perceptronium.

What does it mean for an arbitrary physical system

to be conscious in terms of what its particles are doing

or its information is doing?

I hate carbon chauvinism,

this attitude that you have to be made of carbon atoms

to be smart or conscious.

There’s something about the information processing

that this kind of matter performs.

Yeah, and you can see I have my favorite equations here

describing various fundamental aspects of the world.

I feel that one day,

maybe someone who’s watching this will come up

with the equations that information processing

has to satisfy to be conscious.

I’m quite convinced there is a big discovery

to be made there, because let’s face it,

we know that so many things are made up of information.

We know that some information processing is conscious

because we are conscious.

But we also know that a lot of information processing

is not conscious.

Like most of the information processing happening

in your brain right now is not conscious.

There are like 10 megabytes per second coming in

even just through your visual system.

You’re not conscious about your heartbeat regulation

or most things.

Even if I just ask you to like read what it says here,

you look at it and then, oh, now you know what it said.

But you’re not aware of how the computation actually happened.

Your consciousness is like the CEO

that got an email at the end with the final answer.

So what is it that makes a difference?

I think that’s both a great science mystery.

We’re actually studying it a little bit in my lab here

at MIT, but I also think it’s just a really urgent question

to answer.

For starters, I mean, if you’re an emergency room doctor

and you have an unresponsive patient coming in,

wouldn’t it be great if in addition to having

a CT scanner, you had a consciousness scanner

that could figure out whether this person

is actually having locked in syndrome

or is actually comatose.

And in the future, imagine if we build robots

or machines that we can have really good conversations

with, which I think is very likely to happen.

Wouldn’t you want to know if your home helper robot

is actually experiencing anything or just like a zombie,

I mean, would you prefer it?

What would you prefer?

Would you prefer that it’s actually unconscious

so that you don’t have to feel guilty about switching it off

or giving boring chores or what would you prefer?

Well, certainly we would prefer,

I would prefer the appearance of consciousness.

But the question is whether the appearance of consciousness

is different than consciousness itself.

And sort of to ask that as a question,

do you think we need to understand what consciousness is,

solve the hard problem of consciousness

in order to build something like an AGI system?

No, I don’t think that.

And I think we will probably be able to build things

even if we don’t answer that question.

But if we want to make sure that what happens

is a good thing, we better solve it first.

So it’s a wonderful controversy you’re raising there

where you have basically three points of view

about the hard problem.

So there are two different points of view.

They both conclude that the hard problem of consciousness

is BS.

On one hand, you have some people like Daniel Dennett

who say that consciousness is just BS

because consciousness is the same thing as intelligence.

There’s no difference.

So anything which acts conscious is conscious,

just like we are.

And then there are also a lot of people,

including many top AI researchers I know,

who say, oh, consciousness is just bullshit

because, of course, machines can never be conscious.

They’re always going to be zombies.

You never have to feel guilty about how you treat them.

And then there’s a third group of people,

including Giulio Tononi, for example,

and Christof Koch and a number of others.

I would put myself also in this middle camp

who say that actually some information processing

is conscious and some is not.

So let’s find the equation which can be used

to determine which it is.

And I think we’ve just been a little bit lazy,

kind of running away from this problem for a long time.

It’s been almost taboo to even mention the C word

in a lot of circles,

but we should stop making excuses.

This is a science question and there are ways

we can even test any theory that makes predictions for this.

And coming back to this helper robot,

I mean, so you said you’d want your helper robot

to certainly act conscious and treat you well,

like have conversations with you and stuff.

I think so.

But wouldn’t you feel a little bit creeped out

if you realized that it was just a glossed up tape recorder,

you know, that was just a zombie and was faking emotion?

Would you prefer that it actually had an experience

or would you prefer that it’s actually

not experiencing anything so you feel,

you don’t have to feel guilty about what you do to it?

It’s such a difficult question because, you know,

it’s like when you’re in a relationship and you say,

well, I love you.

And the other person says, I love you back.

It’s like asking, well, do they really love you back

or are they just saying they love you back?

Don’t you really want them to actually love you?

It’s hard to really know the difference

between everything seeming like there’s consciousness

present, there’s intelligence present,

there’s affection, passion, love,

and it actually being there.

I’m not sure, do you have?

But like, can I ask you a question about this?

Like to make it a bit more pointed.

So Mass General Hospital is right across the river, right?

Yes.

Suppose you’re going in for a medical procedure

and they’re like, you know, for anesthesia,

what we’re going to do is we’re going to give you

muscle relaxants so you won’t be able to move

and you’re going to feel excruciating pain

during the whole surgery,

but you won’t be able to do anything about it.

But then we’re going to give you this drug

that erases your memory of it.

Would you be cool about that?

What’s the difference that you’re conscious about it

or not if there’s no behavioral change, right?

Right, that’s a really clear way to put it.

That’s, yeah, it feels like in that sense,

experiencing it is a valuable quality.

So actually being able to have subjective experiences,

at least in that case, is valuable.

And I think we humans have a little bit

of a bad track record also of making

these self serving arguments

that other entities aren’t conscious.

You know, people often say,

oh, these animals can’t feel pain.

It’s okay to boil lobsters because we asked them

if it hurt and they didn’t say anything.

And now there was just a paper out saying,

lobsters do feel pain when you boil them

and they’re banning it in Switzerland.

And we did this with slaves too, often, and said,

oh, they don’t mind,

they maybe aren’t conscious,

or women don’t have souls or whatever.

So I’m a little bit nervous when I hear people

just take as an axiom that machines

can’t have experience ever.

I think this is just a really fascinating science question

is what it is.

Let’s research it and try to figure out

what it is that makes the difference

between unconscious intelligent behavior

and conscious intelligent behavior.

So, if you think of a Boston Dynamics

humanoid robot being sort of pushed around with a broom,

it starts pushing

on a consciousness question.

So let me ask, do you think an AGI system

like a few neuroscientists believe

needs to have a physical embodiment?

Needs to have a body or something like a body?

No, I don’t think so.

You mean to have a conscious experience?

To have consciousness.

I do think it helps a lot to have a physical embodiment

to learn the kind of things about the world

that are important to us humans, for sure.

But I don’t think the physical embodiment

is necessary after you’ve learned it

to just have the experience.

Think about when you’re dreaming, right?

Your eyes are closed.

You’re not getting any sensory input.

You’re not behaving or moving in any way

but there’s still an experience there, right?

And so clearly the experience that you have

when you see something cool in your dreams

isn’t coming from your eyes.

It’s just the information processing itself in your brain

which is that experience, right?

But to put it another way,

I’ll say, because it comes from neuroscience,

the reason you’d want to have a body,

something like a physical system,

is because you want to be able to preserve something.

In order to have a self, you could argue,

wouldn’t you need to have some kind of embodiment of self

to want to preserve?

Well, now we’re getting a little bit anthropomorphic

into anthropomorphizing things.

Maybe talking about self preservation instincts.

I mean, we are evolved organisms, right?

So Darwinian evolution endowed us

and other evolved organisms with a self preservation instinct

because those that didn’t have those self preservation genes

got cleaned out of the gene pool, right?

But if you build an artificial general intelligence

the mind space that you can design is much, much larger

than just a specific subset of minds that can evolve.

So an AGI mind doesn’t necessarily have

to have any self preservation instinct.

It also doesn’t necessarily have to be

so individualistic as us.

Like, first of all, we are also very afraid of death.

But suppose you could back yourself up

every five minutes and then your airplane

is about to crash.

You’re like, shucks, I’m gonna lose the last five minutes

of experiences since my last cloud backup, dang.

You know, it’s not as big a deal.

Or if we could just copy experiences between our minds,

which we could easily do if we were silicon based, right?

Then maybe we would feel a little bit more

like a hive mind, actually.

So I don’t think we should take for granted at all

that AGI will have to have any of those sort of

competitive alpha male instincts.

On the other hand, you know, this is really interesting

because I think some people go too far and say,

of course we don’t have to have any concerns either

that advanced AI will have those instincts

because we can build anything we want.

There’s a very nice set of arguments going back

to Steve Omohundro and Nick Bostrom and others

just pointing out that when we build machines,

we normally build them with some kind of goal, you know,

win this chess game, drive this car safely or whatever.

And as soon as you put a goal into a machine,

especially if it’s kind of an open ended goal

and the machine is very intelligent,

it’ll break that down into a bunch of sub goals.

And one of those goals will almost always

be self preservation because if it breaks or dies

in the process, it’s not gonna accomplish the goal, right?

Like suppose you have a little robot

and you tell it to go down to the Star Market here

and get you some food and cook you an Italian dinner,

you know,

and then someone mugs it and tries to break it

on the way.

That robot has an incentive to not get destroyed

and defend itself or run away,

because otherwise it’s gonna fail in cooking your dinner.

It’s not afraid of death,

but it really wants to complete the dinner cooking goal.

So it will have a self preservation instinct.

Continue being a functional agent somehow.

And similarly, if you give any kind of more ambitious goal

to an AGI, it’s very likely it will want to acquire

more resources so it can do that better.

And it’s exactly from those sort of sub goals

that we might not have intended

that some of the concerns about AGI safety come.

You give it some goal that seems completely harmless.

And then before you realize it,

it’s also trying to do these other things

which you didn’t want it to do.

And it’s maybe smarter than us.

So it’s fascinating.

And let me pause, just because I, in a very kind

of human centric way, see fear of death

as a valuable motivator.

So you don’t think, you think that’s an artifact

of evolution, so that’s the kind of mind space

evolution created that we’re sort of almost obsessed

about self preservation, some kind of genetic flow.

You don’t think that’s necessary to be afraid of death.

So not just a kind of sub goal of self preservation

just so you can keep doing the thing,

but more fundamentally sort of have the finite thing

like this ends for you at some point.

Interesting.

Do I think it’s necessary for what precisely?

For intelligence, but also for consciousness.

So for those, for both, do you think really

like a finite death and the fear of it is important?

So before I can answer, before we can agree

on whether it’s necessary for intelligence

or for consciousness, we should be clear

on how we define those two words.

Because a lot of really smart people define them

in very different ways.

I was on this panel with AI experts

and they couldn’t agree on how to define intelligence even.

So I define intelligence simply

as the ability to accomplish complex goals.

I like your broad definition, because again

I don’t want to be a carbon chauvinist.

Right.

And in that case, no, certainly

it doesn’t require fear of death.

I would say AlphaGo, AlphaZero is quite intelligent.

I don’t think AlphaZero has any fear of being turned off

because it doesn’t even understand the concept of it.

And similarly consciousness.

I mean, you could certainly imagine a very simple

kind of experience.

If certain plants have any kind of experience,

I don’t think they’re very afraid of dying,

because there’s nothing they can do about it anyway,

so there wasn’t that much value in it. But more seriously,

I think if you ask, not just about being conscious

but maybe having what we might call

an exciting life, where you feel passion

and really appreciate things,

maybe there, perhaps, it does help

having a backdrop of, hey, it’s finite,

so let’s make the most of this, let’s live to the fullest.

So if you knew you were going to live forever

do you think you would change your...

Yeah, I mean, in some perspective

it would be an incredibly boring life living forever.

So in the sort of loose subjective terms that you said

of something exciting and something in this

that other humans would understand, I think is, yeah

it seems that the finiteness of it is important.

Well, the good news I have for you then is

based on what we understand about cosmology

everything in our universe is ultimately

probably finite, although...

Big crunch or big... what’s the one for infinite expansion?

Yeah, we could have a big chill or a big crunch

or a big rip or a big snap or death bubbles.

All of them are more than a billion years away.

So we certainly have vastly more time

than our ancestors thought, but it’s still

pretty hard to squeeze in an infinite number

of compute cycles, even though there are some loopholes

that just might be possible.

But I think, you know, some people like to say

that you should live as if you’re about to die

in five years or so,

and that’s sort of optimal.

Maybe it’s a good assumption.

We should build our civilization as if it’s all finite

to be on the safe side.

Right, exactly.

So you mentioned defining intelligence

as the ability to accomplish complex goals.

Where would you draw a line or how would you try

to define human level intelligence

and superhuman level intelligence?

Where is consciousness part of that definition?

No, consciousness does not come into this definition.

So I think of intelligence as a spectrum,

but there are very many different kinds of goals

you can have.

You can have a goal to be a good chess player,

a good Go player, a good car driver, a good investor,

a good poet, et cetera.

So intelligence, by its very nature,

isn’t something you can measure by one number,

some overall goodness.

No, no.

There are some people who are better at this,

some people who are better at that.

Right now we have machines that are much better than us

at some very narrow tasks like multiplying large numbers

fast, memorizing large databases, playing chess

playing go and soon driving cars.

But there’s still no machine that can match

a human child in general intelligence

but artificial general intelligence, AGI

the name of your course, of course

that is by its very definition, the quest

to build a machine that can do everything

as well as we can.

So the old Holy Grail of AI, from back at its inception

in the sixties. If that ever happens, of course,

I think it’s going to be the biggest transition

in the history of life on earth

but the big impact doesn’t necessarily have to wait

until machines are better than us at knitting.

The really big change doesn’t come exactly

at the moment they’re better than us at everything.

The really big changes come, first,

when they start becoming better than us

at doing most of the jobs that we do,

because that takes away much of the demand

for human labor.

And then the really whopping change comes

when they become better than us at AI research, right?

Because right now the timescale of AI research

is limited by the human research and development cycle

of years typically, you know

how long does it take from one release of some software

or iPhone or whatever to the next?

But once Google can replace 40,000 engineers

with 40,000 equivalent pieces of software or whatever,

then there’s no reason that has to be years,

it can be, in principle, much faster,

and the timescale of future progress in AI

and all of science and technology will be driven

by machines, not humans.

So it’s this simple point which gives rise to

this incredibly fun controversy

about whether there can be an intelligence explosion,

the so called singularity, as Vernor Vinge called it.

The idea, as articulated by I.J. Good,

goes way back to the sixties,

but you can see that Alan Turing

and others thought about it even earlier.

So you asked me how exactly I would define

human level intelligence, yeah.

The glib answer is to say something

which is better than us at all cognitive tasks,

better than any human at all cognitive tasks,

but the really interesting bar

I think goes a little bit lower than that actually.

It’s when they’re better than us

at AI programming and general learning,

so that they can, if they want to, get better

than us at anything by just studying.

So better is a key word, and better is towards

this kind of spectrum of the complexity of goals

it’s able to accomplish.

That’s certainly a very clear definition of human level.

So it’s almost like a sea that’s rising,

you can do more and more and more things.

It’s a graphic that you show,

it’s a really nice way to put it.

So there are some peaks

and there’s an ocean level elevating

and you solve more and more problems.

But just to kind of take a pause,

we took a bunch of questions

on a lot of social networks,

and a bunch of people asked

about a slightly different direction,

creativity, and things that perhaps aren’t a peak.

Human beings are flawed,

and perhaps better means having contradictions,

being flawed in some way.

So let me sort of start easy, first of all.

So you have a lot of cool equations.

Let me ask, what’s your favorite equation, first of all?

I know they’re all like your children, but like

which one is that?

This is the Schrödinger equation.

It’s the master key of quantum mechanics

of the micro world.

So this equation can predict everything

to do with atoms, molecules and all the way up.
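
For reference, the time dependent Schrödinger equation in its standard textbook form (written out here for clarity; the form is standard physics, not something quoted in the conversation):

```latex
i\hbar \,\frac{\partial}{\partial t}\,\psi(\mathbf{r},t) \;=\; \hat{H}\,\psi(\mathbf{r},t)
```

Here \psi is the wave function and \hat{H} is the Hamiltonian operator of the system.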

Right?

Yeah, so, okay.

So quantum mechanics is certainly a beautiful

mysterious formulation of our world.

So I’d like to sort of ask you, just as an example

it perhaps doesn’t have the same beauty as physics does

but in mathematics, abstractly, Andrew Wiles,

who proved Fermat’s Last Theorem.

I just saw this recently

and it kind of caught my eye a little bit.

This is 358 years after it was conjectured.

So this is very simple formulation.

Everybody tried to prove it, everybody failed.

And so here this guy comes along

and eventually proves it and then fails to prove it

and then proves it again in 94.

And he described the moment when everything clicked

into place. In an interview he said,

it was so indescribably beautiful.

That moment when you finally realize the connecting piece

of two conjectures.

He said, it was so indescribably beautiful.

It was so simple and so elegant.

I couldn’t understand how I’d missed it.

And I just stared at it in disbelief for 20 minutes.

Then during the day, I walked around the department

and I kept coming back to my desk

looking to see if it was still there.

It was still there.

I couldn’t contain myself.

I was so excited.

It was the most important moment of my working life.

Nothing I ever do again will mean as much.

So that particular moment.

And it kind of made me think of what would it take?

And I think we have all been there at small levels.

Maybe let me ask, have you had a moment like that

in your life where you just had an idea?

It’s like, wow, yes.

I wouldn’t mention myself in the same breath

as Andrew Wiles, but I’ve certainly had a number

of aha moments when I realized something very cool

about physics, which has completely made my head explode.

In fact, some of my favorite discoveries I made,

I later realized that they had been discovered earlier

by someone who sometimes got quite famous for it.

So it’s too late for me to even publish it,

but that doesn’t diminish in any way

the emotional experience you have when you realize it,

like, wow.

Yeah, so what would it take in that moment, that wow,

that was yours in that moment?

So what do you think it takes for an intelligence system,

an AGI system, an AI system to have a moment like that?

That’s a tricky question

because there are actually two parts to it, right?

One of them is, can it accomplish that proof?

Can it prove that you can never write a to the n

plus b to the n equals c to the n

for positive integers a, b, and c

when n is bigger than two?
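
For clarity, the statement Wiles proved, Fermat’s Last Theorem, written out (the standard statement, added here for reference):

```latex
a^n + b^n = c^n \quad\text{has no solutions in positive integers } a,\, b,\, c \text{ whenever the integer } n > 2.
```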

That’s simply a question about intelligence.

Can you build machines that are that intelligent?

And I think by the time we get a machine

that can independently come up with that level of proofs,

probably quite close to AGI.

The second question is a question about consciousness.

When will we, how likely is it that such a machine

will actually have any experience at all,

as opposed to just being like a zombie?

And would we expect it to have some sort of emotional response

to this or anything at all akin to human emotion

where when it accomplishes its machine goal,

it views it as somehow something very positive

and sublime and deeply meaningful?

I would certainly hope that if in the future

we do create machines that are our peers

or even our descendants, that I would certainly

hope that they do have this sublime appreciation of life.

In a way, my absolutely worst nightmare

would be that at some point in the future,

the distant future, maybe our cosmos

is teeming with all this post biological life doing

all the seemingly cool stuff.

And maybe the last humans, by the time

our species eventually fizzles out,

will be like, well, that’s OK because we’re

so proud of our descendants here.

And look what they’re all doing. But my worst nightmare

is that we haven’t solved the consciousness problem.

And we haven’t realized that they are all zombies.

They’re not aware of anything any more than a tape recorder

has any kind of experience.

So the whole thing has just become

a play for empty benches.

That would be the ultimate zombie apocalypse.

So I would much rather, in that case,

that we have these beings which can really

appreciate how amazing it is.

And in that picture, what would be the role of creativity?

A few people ask about creativity.

When you think about intelligence,

certainly the story you told at the beginning of your book

involved creating movies and so on, making money.

You can make a lot of money in our modern world

with music and movies.

So if you are an intelligent system,

you may want to get good at that.

But that’s not necessarily what I mean by creativity.

Is it important, on that spectrum of complex goals

where the sea is rising, for there

to be something creative?

Or am I being very human centric and thinking creativity

somehow special relative to intelligence?

My hunch is that we should think of creativity simply

as an aspect of intelligence.

And we have to be very careful with human vanity.

We have this tendency to very often want

to say, as soon as machines can do something,

we try to diminish it and say, oh, but that’s

not real intelligence.

It isn’t creative, or this, or that.

The other thing is, if we ask ourselves

to write down a definition of what we actually mean

by being creative, what we mean by Andrew Wiles, what he did

there, for example, don’t we often mean that someone takes

a very unexpected leap?

It’s not like taking 573 and multiplying it

by 224 by just a series of straightforward cookbook

like rules, right?

You can maybe make a connection between two things

that people had never thought were connected, or something

like that.

I think this is an aspect of intelligence.

And this is actually one of the most important aspects of it.

Maybe the reason we humans tend to be better at it

than traditional computers is because it’s

something that comes more naturally if you’re

a neural network than if you’re a traditional logic gate

based computer machine.

We physically have all these connections.

And you activate here, activate here, activate here.

Bing.

My hunch is that if we ever build a machine where you could

just give it the task, hey, you say, hey, I just realized

I want to travel around the world instead this month.

Can you teach my AGI course for me?

And it’s like, OK, I’ll do it.

And it does everything that you would have done

and improvises and stuff.

That would, in my mind, involve a lot of creativity.

Yeah, so it’s actually a beautiful way to put it.

I think we do try to grasp at a definition of intelligence

as everything we don’t understand how to build.

So we as humans try to find things

that we have and machines don’t have.

And maybe creativity is just one of the things, one

of the words we use to describe that.

That’s a really interesting way to put it.

I don’t think we need to be that defensive.

I don’t think anything good comes out of saying,

well, we’re somehow special, you know?

Contrariwise, there are many examples in history

of where trying to pretend that we’re somehow superior

to all other intelligent beings has led to pretty bad results,

right?

Nazi Germany, they said that they were somehow superior

to other people.

Today, we still do a lot of cruelty to animals

by saying that we’re so superior somehow,

and they can’t feel pain.

Slavery was justified by the same kind

of just really weak arguments.

And I don’t think if we actually go ahead and build

artificial general intelligence, it

can do things better than us, I don’t

think we should try to found our self worth on some sort

of bogus claims of superiority in terms

of our intelligence.

I think we should instead find our calling

and the meaning of life from the experiences that we have.

I can have very meaningful experiences

even if there are other people who are smarter than me.

When I go to a faculty meeting here,

and we talk about something, and then I realize,

oh boy, he has a Nobel Prize, he has a Nobel Prize,

he has a Nobel Prize, I don’t have one.

Does that make me enjoy life any less

or enjoy talking to those people less?

Of course not.

On the contrary, I feel very honored and privileged

to get to interact with other very intelligent beings that

are better than me at a lot of stuff.

So I don’t think there’s any reason why

we can’t have the same approach with intelligent machines.

That’s really interesting.

So people don’t often think about that.

They think that if there are machines

that are more intelligent,

you naturally think that that’s not

going to be a beneficial type of intelligence.

You don’t realize it could be like peers with Nobel prizes

that would be just fun to talk with,

and they might be clever about certain topics,

and you can have fun having a few drinks with them.

Well, also, another example we can all

relate to of why it doesn’t have to be a terrible thing

to be in the presence of people who are even smarter than us

all around is when you and I were both two years old,

I mean, our parents were much more intelligent than us,

right?

Worked out OK, because their goals

were aligned with our goals.

And that, I think, is really the number one key issue

we have to solve,

the value alignment problem, exactly.

Because people who see too many Hollywood movies

with lousy science fiction plot lines,

they worry about the wrong thing, right?

They worry about some machine suddenly turning evil.

It’s not malice that is the concern.

It’s competence.

By definition, intelligence makes you very competent.

If you have a more intelligent Go playing

computer playing against a less intelligent one,

and we define intelligence as the ability

to accomplish Go winning, it’s going

to be the more intelligent one that wins.

And if you have a human and then you

have an AGI that’s more intelligent in all ways

and they have different goals, guess who’s

going to get their way, right?

So I was just reading about this particular rhinoceros species

that was driven extinct just a few years ago.

And, bummer, I was looking at this cute picture of a mommy

rhinoceros with its child.

And why did we humans drive it to extinction?

It wasn’t because we were evil rhino haters as a whole.

It was just because our goals weren’t aligned

with those of the rhinoceros.

And it didn’t work out so well for the rhinoceros

because we were more intelligent, right?

So I think it’s just so important

that if we ever do build AGI, before we unleash anything,

we have to make sure that it learns

to understand our goals, that it adopts our goals,

and that it retains those goals.

So the cool, interesting problem there

is us as human beings trying to formulate our values.

So you could think of the United States Constitution as a way

that people sat down, at the time a bunch of white men,

which is a good example, I should say.

They formulated the goals for this country.

And a lot of people agree that those goals actually

held up pretty well.

That’s an interesting formulation of values,

though it failed miserably in other ways.

So for the value alignment problem and the solution to it,

we have to be able to put on paper or in a program

human values.

How difficult do you think that is?

Very.

But it’s so important.

We really have to give it our best.

And it’s difficult for two separate reasons.

There’s the technical value alignment problem

of figuring out just how to make machines understand our goals,

adopt them, and retain them.

And then there’s the separate part of it,

the philosophical part.

Whose values anyway?

And since it’s not like we have any great consensus

on this planet on values, what mechanism should we

create then to aggregate and decide, OK,

what’s a good compromise?

That second discussion can’t just

be left to tech nerds like myself.

And if we refuse to talk about it and then AGI gets built,

who’s going to be actually making

the decision about whose values?

It’s going to be a bunch of dudes in some tech company.

And are they necessarily so representative of all

of humankind that we want to just entrust it to them?

Are they even uniquely qualified to speak

to future human happiness just because they’re

good at programming AI?

I’d much rather have this be a really inclusive conversation.

But do you think it’s possible?

So you create a beautiful vision that includes the diversity,

cultural diversity, and various perspectives on discussing

rights, freedoms, human dignity.

But how hard is it to come to that consensus?

Do you think it’s certainly a really important thing

that we should all try to do?

But do you think it’s feasible?

I think there’s no better way to guarantee failure than to

refuse to talk about it or refuse to try.

And I also think it’s a really bad strategy

to say, OK, let’s first have a discussion for a long time.

And then once we reach complete consensus,

then we’ll try to load it into some machine.

No, we shouldn’t let perfect be the enemy of good.

Instead, we should start with the kindergarten ethics

that pretty much everybody agrees on

and put that into machines now.

We’re not doing that even.

Look at anyone who builds this passenger aircraft,

wants it to never under any circumstances

fly into a building or a mountain.

Yet the September 11 hijackers were able to do that.

And even more embarrassingly, Andreas Lubitz,

this depressed Germanwings pilot,

when he flew his passenger jet into the Alps killing over 100

people, he just told the autopilot to do it.

He told the freaking computer to change the altitude

to 100 meters.

And even though it had the GPS maps, everything,

the computer was like, OK.

So we should take those very basic values,

where the problem is not that we don’t agree.

The problem is just we’ve been too lazy

to try to put it into our machines

and make sure that from now on, airplanes,

which all have computers in them,

will just refuse to do something like that.

Go into safe mode, maybe lock the cockpit door,

go over to the nearest airport.
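
As a toy illustration of that kindergarten ethics point, here is a minimal, purely hypothetical sketch of an autopilot wrapper that refuses altitude commands that would fly into terrain; the names and interface are invented for illustration and bear no relation to any real avionics software:

```python
class Autopilot:
    """Hypothetical stand-in for an autopilot interface (illustration only)."""
    def enter_safe_mode(self): print("holding altitude, alerting crew")
    def divert_to_nearest_airport(self): print("diverting to nearest airport")
    def set_altitude(self, alt_m): print(f"setting altitude to {alt_m} m")

MIN_TERRAIN_CLEARANCE_M = 300.0   # assumed minimum safe clearance, illustrative only

def command_altitude(requested_altitude_m, terrain_elevation_ahead_m, autopilot):
    """Refuse any commanded altitude that would put the aircraft below a safe
    clearance over the terrain ahead; fall back to a safe mode instead."""
    if requested_altitude_m < terrain_elevation_ahead_m + MIN_TERRAIN_CLEARANCE_M:
        # The machine simply refuses a clearly catastrophic command.
        autopilot.enter_safe_mode()
        autopilot.divert_to_nearest_airport()
        return False
    autopilot.set_altitude(requested_altitude_m)
    return True

# Example: terrain ahead at 2,000 m, commanded altitude 100 m -> refused.
command_altitude(100.0, 2000.0, Autopilot())
```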

And there’s so much other technology in our world

as well now, where it’s really becoming quite timely

to put in some sort of very basic values like this.

Even in cars, we’ve had enough vehicle terrorism attacks

by now, where people have driven trucks and vans

into pedestrians, that it’s not at all a crazy idea

to just have that hardwired into the car.

Because, yeah, there are always going to be

people who for some reason

want to harm others, but most of those people

don’t have the technical expertise to figure out

how to work around something like that.

So if the car just won’t do it, it helps.

So let’s start there.

That’s a great point.

So not chasing perfect.

There’s a lot of things that most of the world agrees on.

Yeah, let’s start there.

Let’s start there.

And then once we start there,

we’ll also get into the habit of having

these kind of conversations about, okay,

what else should we put in here and have these discussions?

This should be a gradual process then.

Great, so, but that also means describing these things

and describing it to a machine.

So one thing, we had a few conversations

with Stephen Wolfram.

I’m not sure if you’re familiar with Stephen.

Oh yeah, I know him quite well.

So he works with a bunch of things,

but cellular automata, these simple computable things,

these computation systems.

And he kind of mentioned that,

we probably already have, within these systems,

something that’s AGI,

meaning like we just don’t know it

because we can’t talk to it.

So if you give me a chance to try to at least

form a question out of this,

I think it’s an interesting idea to think

that we can have intelligent systems,

but we don’t know how to describe something to them

and they can’t communicate with us.

I know you’re doing a little bit of work in explainable AI,

trying to get AI to explain itself.

So what are your thoughts of natural language processing

or some kind of other communication?

How does the AI explain something to us?

How do we explain something to it, to machines?

Or you think of it differently?

So there are two separate parts to your question there.

One of them has to do with communication,

which is super interesting, I’ll get to that in a sec.

The other is whether we already have AGI

but we just haven’t noticed it there.

Right.

There I beg to differ.

I don’t think there’s anything in any cellular automaton

or anything or the internet itself or whatever

that has artificial general intelligence

that can really do exactly everything

we humans can do, or do it better.

I think the day that happens,

we will very soon notice, we’ll probably notice even before,

because it will change things in a very, very big way.

But for the second part, though.

Wait, can I ask, sorry.

So, because you have this beautiful way

to formulating consciousness as information processing,

and you can think of intelligence

as information processing,

and you can think of the entire universe

as these particles and these systems roaming around

that have this information processing power.

You don’t think there is something with the power

to process information in the way that we human beings do

that’s out there, that needs to be sort of connected to?

It seems a little bit philosophical, perhaps,

but there’s something compelling to the idea

that the power is already there,

which the focus should be more on being able

to communicate with it.

Well, I agree that in a certain sense,

the hardware processing power is already out there

because you can think of our universe itself

as being a computer already, right?

It’s constantly computing how to evolve

the water waves in the Charles River

and how to move the air molecules around.

Seth Lloyd has pointed out, my colleague here,

that you can even in a very rigorous way

think of our entire universe as being a quantum computer.

It’s pretty clear that our universe

supports this amazing processing power

because within this physics computer that we live in, right,

we can even build actual laptops and stuff,

so clearly the power is there.

It’s just that most of the compute power that nature has,

is, in my opinion, kind of wasted on boring stuff

like simulating yet another ocean wave somewhere

where no one is even looking, right?

So in a sense, what life does, what we are doing

when we build computers is we’re rechanneling

all this compute that nature is doing anyway

into doing things that are more interesting

than just yet another ocean wave,

and let’s do something cool here.

So the raw hardware power is there, for sure,

but then even just computing what’s going to happen

for the next five seconds in this water bottle,

takes a ridiculous amount of compute

if you do it on a human built computer.

This water bottle just did it.

But that does not mean that this water bottle has AGI

because AGI means it should also be able to have,

like, written my book and done this interview.

And I don’t think it’s just communication problems.

I don’t really think it can do it.

Although Buddhists say, when they watch the water,

that there is some beauty,

that there’s some depth and beauty in nature

that they can communicate with.

Communication is also very important though

because I mean, look, part of my job is being a teacher.

And I know some very intelligent professors even

who just have a bit of hard time communicating.

They come up with all these brilliant ideas,

but to communicate with somebody else,

you have to also be able to simulate their own mind.

Yes, empathy.

Build a good enough model of their mind

that you can say things that they will understand.

And that’s quite difficult.

And that’s why today it’s so frustrating

if you have a computer that makes some cancer diagnosis

and you ask it, well, why are you saying

I should have this surgery?

And if it can only reply,

I was trained on five terabytes of data

and this is my diagnosis, boop, boop, beep, beep.

It doesn’t really instill a lot of confidence, right?

So I think we have a lot of work to do

on communication there.

So what kind of, I think you’re doing a little bit of work

in explainable AI.

What do you think are the most promising avenues?

Is it mostly about sort of the Alexa problem

of natural language processing of being able

to actually use human interpretable methods

of communication?

So being able to talk to a system and it talk back to you,

or is there some more fundamental problems to be solved?

I think it’s all of the above.

The natural language processing is obviously important,

but there are also more nerdy fundamental problems.

Like if you take, you play chess?

Of course, I’m Russian.

I have to.

You speak Russian?

Yes, I speak Russian.

Excellent, I didn’t know.

When did you learn Russian?

I speak very bad Russian, I’m only an autodidact,

but I bought a book, Teach Yourself Russian,

read a lot, but it was very difficult.

Wow.

That’s why I speak so bad.

How many languages do you know?

Wow, that’s really impressive.

I don’t know, my wife has some calculation,

but my point was, if you play chess,

have you looked at the AlphaZero games?

The actual games, no.

Check it out, some of them are just mind blowing,

really beautiful.

And if you ask, how did it do that?

You go talk to Demis Hassabis,

I know others from DeepMind,

all they’ll ultimately be able to give you

is big tables of numbers, matrices,

that define the neural network.

And you can stare at these tables of numbers

till your face turns blue,

and you’re not gonna understand much

about why it made that move.

And even if you have natural language processing

that can tell you in human language about,

oh, five, seven, point two, eight,

it’s still not gonna really help.

So I think there’s a whole spectrum of fun challenges

that are involved in taking a computation

that does intelligent things

and transforming it into something equally good,

equally intelligent, but that’s more understandable.

And I think that’s really valuable

because I think as we put machines in charge

of ever more infrastructure in our world,

the power grid, the trading on the stock market,

weapon systems and so on,

it’s absolutely crucial that we can trust

these AIs to do all we want.

And trust really comes from understanding

in a very fundamental way.

And that’s why I’m working on this,

because I think if we’re gonna have some hope of ensuring

that machines have adopted our goals

and that they’re gonna retain them,

that kind of trust, I think,

needs to be based on things you can actually understand,

preferably even prove theorems about.

Even with a self driving car, right?

If someone just tells you it’s been trained

on tons of data and it never crashed,

it’s less reassuring than if someone actually has a proof.

Maybe it’s a computer verified proof,

but still it says that under no circumstances

is this car just gonna swerve into oncoming traffic.

And that kind of information helps to build trust

and helps build the alignment of goals,

at least awareness that your goals, your values are aligned.

And I think even in the very short term,

if you look at how, you know, today, right?

This absolutely pathetic state of cybersecurity

that we have, what is it,

three billion Yahoo accounts were hacked,

almost every American’s credit card, and so on.

Why is this happening?

It’s ultimately happening because we have software

that nobody fully understood how it worked.

That’s why the bugs hadn’t been found, right?

And I think AI can be used very effectively

for offense, for hacking,

but it can also be used for defense.

Hopefully automating verifiability

and creating systems that are built in different ways

so you can actually prove things about them.

And it’s important.

So speaking of software that nobody understands

how it works, of course, a bunch of people ask

about your paper, about your thoughts

of why does deep and cheap learning work so well?

That’s the paper.

But what are your thoughts on deep learning?

These kind of simplified models of our own brains

have been able to do some successful perception work,

pattern recognition work, and now with AlphaZero and so on,

do some clever things.

What are your thoughts about the promise limitations

of this piece?

Great, I think there are a number of very important insights,

very important lessons we can always draw

from these kinds of successes.

One of them is when you look at the human brain,

you see it’s very complicated, 10 to the 11 neurons,

and there are all these different kinds of neurons

and yada, yada, and there’s been this long debate

about whether the fact that we have dozens

of different kinds is actually necessary for intelligence.

We can now, I think, quite convincingly answer

that question of no, it’s enough to have just one kind.

If you look under the hood of AlphaZero,

there’s only one kind of neuron

and it’s a ridiculously simple mathematical thing.
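
For what ridiculously simple means here, this is the standard textbook artificial neuron used throughout such networks, a weighted sum of inputs pushed through a fixed nonlinearity (a generic illustration, not code from AlphaZero):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then a simple
    nonlinearity (a ReLU here). Whole networks are many of these wired together."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))   # -> 0.1
```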

So it’s just like in physics,

if you have a gas with waves in it,

it’s not the detailed nature of the molecules that matters,

it’s the collective behavior somehow.

Similarly, it’s this higher level structure

of the network that matters,

not that you have 20 kinds of neurons.

I think our brain is such a complicated mess

because it wasn’t evolved just to be intelligent,

it was evolved to also be self assembling

and self repairing, right?

And evolutionarily attainable.

And so on and so on.

So my hunch is that we’re going to understand

how to build AGI before we fully understand

how our brains work, just like we understood

how to build flying machines long before

we were able to build a mechanical bird.

Yeah, that’s right.

You’ve given the example exactly of mechanical birds

and airplanes and airplanes do a pretty good job

of flying without really mimicking bird flight.

And even now, 100 years later,

did you see the Ted talk with this German mechanical bird?

I heard you mention it.

Check it out, it’s amazing.

But even after that, right,

we still don’t fly in mechanical birds

because it turned out the way we came up with was simpler

and it’s better for our purposes.

And I think it might be the same there.

That’s one lesson.

And another lesson, it’s more what our paper was about.

First, as a physicist, I thought it was fascinating

how there’s a very close mathematical relationship

actually between our artificial neural networks

and a lot of things that we’ve studied in physics,

which go by nerdy names like the renormalization group equation

and Hamiltonians and yada, yada, yada.

And when you look a little more closely at this,

at first I was like, well, there’s something crazy here

that doesn’t make sense.

Because we know that if you even want to build

a super simple neural network to tell apart cat pictures

and dog pictures, right,

that you can do that very, very well now.

But if you think about it a little bit,

you convince yourself it must be impossible

because if I have one megapixel,

even if each pixel is just black or white,

there’s two to the power of 1 million possible images,

which is way more than there are atoms in our universe,

right, so in order to,

and then for each one of those,

I have to assign a number,

which is the probability that it’s a dog.

So an arbitrary function of images

is a list of more numbers than there are atoms in our universe.

So clearly I can’t store that under the hood of my GPU

or my computer, yet somehow it works.

So what does that mean?

Well, it means that out of all of the problems

that you could try to solve with a neural network,

almost all of them are impossible to solve

with a reasonably sized one.
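For the numbers nerds, here is that counting argument as a back-of-the-envelope calculation (the 10^80 figure for atoms in the observable universe is the usual order-of-magnitude estimate):

```python
import math

num_pixels = 1_000_000   # a one-megapixel image, each pixel black or white
# Number of decimal digits in 2**num_pixels, computed without building the number:
digits_in_image_count = int(num_pixels * math.log10(2)) + 1
digits_in_atom_count = 81   # roughly 10**80 atoms in the observable universe

print(digits_in_image_count)   # about 301,030 digits
print(digits_in_atom_count)    # 81 digits
# An arbitrary lookup table over all such images is therefore hopelessly
# too large to store, which is the puzzle being described above.
```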

But then what we showed in our paper

was that the fraction, the kind of problems,

the fraction of all the problems

that you could possibly pose,

that we actually care about given the laws of physics

is also an infinitesimally tiny little part.

And amazingly, they’re basically the same part.

Yeah, it’s almost like our world was created for,

I mean, they kind of come together.

Yeah, well, you could say maybe that the world was created

for us, but I have a more modest interpretation,

which is that instead evolution endowed us

with neural networks precisely for that reason.

Because this particular architecture,

as opposed to the one in your laptop,

is very, very well adapted to solving the kind of problems

that nature kept presenting our ancestors with.

So it makes sense. Why do we have a brain

in the first place?

It’s to be able to make predictions about the future

and so on.

So if we had a sucky system, which could never solve it,

we wouldn’t have evolved it.

So this is, I think, a very beautiful fact.

Yeah.

We also realize that there’s been earlier work

on why deeper networks are good,

but we were able to show an additional cool fact there,

which is that even incredibly simple problems,

like suppose I give you a thousand numbers

and ask you to multiply them together,

and you can write a few lines of code, boom, done, trivial.

If you just try to do that with a neural network

that has only one single hidden layer in it,

you can do it,

but you’re going to need two to the power of a thousand

neurons to multiply a thousand numbers,

which is, again, more neurons than there are atoms

in our universe.

That’s fascinating.

But if you allow yourself to make it a deep network

with many layers, you only need 4,000 neurons.

It’s perfectly feasible.
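A rough way to see why depth helps in that multiplication example: a deep network can combine its inputs pairwise, layer by layer, so the work grows roughly linearly with the number of inputs instead of exponentially. The snippet below is not the construction from the paper itself, just an illustration of the tree-like idea:

```python
def product_tree(values):
    """Multiply a list of numbers by repeatedly combining adjacent pairs,
    the way a deep network with about log2(n) layers could: roughly n - 1
    pairwise units in total, versus on the order of 2**n units for a
    single hidden layer."""
    layer = list(values)
    while len(layer) > 1:
        nxt = [layer[i] * layer[i + 1] for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:          # carry an unpaired element forward
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

print(product_tree([1.0, 2.0, 3.0, 4.0, 5.0]))   # 120.0
```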

That’s really interesting.

Yeah.

So on another architecture type,

I mean, you mentioned Schrodinger’s equation,

and what are your thoughts about quantum computing

and the role of this kind of computational unit

in creating an intelligence system?

In some Hollywood movies that I will not mention by name

because I don’t want to spoil them.

The way they get AGI is building a quantum computer.

Because the word quantum sounds cool and so on.

That’s right.

First of all, I think we don’t need quantum computers

to build AGI.

I suspect your brain is not a quantum computer

in any profound sense.

You even wrote a paper about that

many years ago, right?

I calculated the so called decoherence time,

how long it takes until the quantum computerness

of what your neurons are doing gets erased

by just random noise from the environment.

And it’s about 10 to the minus 21 seconds.

So as cool as it would be to have a quantum computer

in my head, I don’t think that fast.

On the other hand,

there are very cool things you could do

with quantum computers.

Or I think we’ll be able to do soon

when we get bigger ones.

That might actually help machine learning

do even better than the brain.

So for example,

one, this is just a moonshot,

but learning is very much the same thing as search.

If you’re trying to train a neural network

to learn to do something really well,

you have some loss function,

you have a bunch of knobs you can turn,

represented by a bunch of numbers,

and you’re trying to tweak them

so that it becomes as good as possible at this thing.

So if you think of a landscape with some valley,

where each dimension of the landscape

corresponds to some number you can change,

you’re trying to find the minimum.

And it’s well known that

if you have a very high dimensional landscape,

complicated things, it’s super hard to find the minimum.
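The "tweaking knobs to walk downhill" picture can be made concrete with plain gradient descent on a toy one-dimensional landscape. This is only a sketch: the loss function and step size here are made up for illustration, and real systems work in millions of dimensions and compute gradients by backpropagation.

```python
import numpy as np

def loss(w):
    # A toy, bumpy landscape standing in for a high-dimensional loss surface.
    return np.sin(3 * w) + 0.5 * w ** 2

def grad(w, eps=1e-5):
    # Numerical gradient, just for this sketch; real training uses backprop.
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

w = 2.0                        # the single "knob" we get to turn
for _ in range(500):
    w -= 0.05 * grad(w)        # take a small step downhill
print(w, loss(w))              # settles into a minimum, not necessarily the best one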

Quantum mechanics is amazingly good at this.

Like if I want to know what’s the lowest energy state

this water can possibly have,

incredibly hard to compute,

but nature will happily figure this out for you

if you just cool it down, make it very, very cold.

If you put a ball somewhere,

it’ll roll down to its minimum.

And this happens metaphorically

in the energy landscape too.

And quantum mechanics even uses some clever tricks,

which today’s machine learning systems don’t.

Like if you’re trying to find the minimum

and you get stuck in the little local minimum here,

in quantum mechanics you can actually tunnel

through the barrier and get unstuck again.
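Real quantum tunneling needs quantum hardware, but the classical "cool it down" idea mentioned above has a well-known software analog, simulated annealing: occasionally accept uphill moves with a probability that shrinks as a temperature parameter drops, which lets the search escape shallow local minima. A minimal sketch on the same kind of toy landscape (all constants here are arbitrary, and this is an analogy, not the quantum trick itself):

```python
import math, random

def loss(w):
    return math.sin(3 * w) + 0.5 * w ** 2   # the same kind of bumpy toy landscape

w, best = 2.0, 2.0
temperature = 1.0
for _ in range(5000):
    candidate = w + random.uniform(-0.5, 0.5)
    delta = loss(candidate) - loss(w)
    # Downhill moves are always accepted; uphill moves are accepted with a
    # probability that falls as the system "cools", a classical cousin of
    # tunneling out of a local minimum.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        w = candidate
    if loss(w) < loss(best):
        best = w
    temperature *= 0.999                     # gradually cool down
print(best, loss(best))
```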

That’s really interesting.

Yeah, so it may be, for example,

that we’ll one day use quantum computers

that help train neural networks better.

That’s really interesting.

Okay, so as a component of kind of the learning process,

for example.

Yeah.

Let me ask sort of wrapping up here a little bit,

let me return to the questions of our human nature

and love, as I mentioned.

So do you think,

you mentioned sort of a helper robot,

but you could also think of personal robots.

Do you think the way we human beings fall in love

and get connected to each other

is possible to achieve in an AI system,

a human level AI intelligence system?

Do you think we would ever see that kind of connection?

Or, you know, in all this discussion

about solving complex goals,

is this kind of human social connection,

do you think that’s one of the goals

on the peaks and valleys with the rising sea levels

that we’ll be able to achieve?

Or do you think that’s something that’s ultimately,

or at least in the short term,

relative to the other goals is not achievable?

I think it’s all possible.

And I mean, there’s a very wide range

of guesses, as you know,

among AI researchers, when we’re going to get AGI.

Some people, you know, like our friend Rodney Brooks

says it’s going to be hundreds of years at least.

And then there are many others

who think it’s going to happen much sooner.

And in recent polls,

maybe half or so of AI researchers

think we’re going to get AGI within decades.

So if that happens, of course,

then I think these things are all possible.

But in terms of whether it will happen,

I think we shouldn’t spend so much time asking

what we think will happen in the future,

as if we are just some sort of pathetic,

passive bystanders, you know,

waiting for the future to happen to us.

Hey, we’re the ones creating this future, right?

So we should be proactive about it

and ask ourselves what sort of future

we would like to have happen.

And then make it like that.

Would I prefer just some sort of incredibly boring,

zombie like future where there’s all these

mechanical things happening and there’s no passion,

no emotion, no experience, maybe even.

No, I would of course, much rather prefer it

if all the things that we value the most

about humanity are our subjective experience,

passion, inspiration, love, you know.

If we can create a future where those things do happen,

where those things do exist, you know,

I think ultimately it’s not our universe

giving meaning to us, it’s us giving meaning to our universe.

And if we build more advanced intelligence,

let’s make sure we build it in such a way

that meaning is part of it.

A lot of people that seriously study this problem

and think of it from different angles

find that the majority of the scenarios

they think through that could happen

are ones that are not beneficial to humanity.

And so, yeah, so what are your thoughts?

What should people, you know,

I really don’t like people to be terrified.

What’s a way for people to think about it

in a way that we can solve it and make it better?

No, I don’t think panicking is going to help in any way.

It’s not going to increase chances

of things going well either.

Even if you are in a situation where there is a real threat,

does it help if everybody just freaks out?

No, of course, of course not.

I think, yeah, there are of course ways

in which things can go horribly wrong.

First of all, it’s important when we think about this thing,

about the problems and risks,

to also remember how huge the upsides can be

if we get it right, right?

Everything we love about society and civilization

is a product of intelligence.

So if we can amplify our intelligence

with machine intelligence and not anymore lose our loved ones

to what we’re told is an incurable disease

and things like this, of course, we should aspire to that.

So that can be a motivator, I think,

reminding ourselves that the reason we try to solve problems

is not just because we’re trying to avoid gloom,

but because we’re trying to do something great.

But then in terms of the risks,

I think the really important question is to ask,

what can we do today that will actually help

make the outcome good, right?

And dismissing the risk is not one of them.

I find it quite funny often when I’m in discussion panels

about these things,

how the people who work for companies

will always be like, oh, nothing to worry about,

nothing to worry about, nothing to worry about.

And it’s only academics who sometimes express concerns.

That’s not surprising at all if you think about it.

Right.

Upton Sinclair quipped, right,

that it’s hard to make a man believe in something

when his income depends on not believing in it.

And frankly, we know that a lot of these people in companies

are just as concerned as anyone else.

But if you’re the CEO of a company,

that’s not something you want to go on record saying

when you have silly journalists who are gonna put a picture

of a Terminator robot when they quote you.

So the issues are real.

And the way I think about what the issue is,

is basically the real choice we have is,

first of all, are we gonna just dismiss the risks

and say, well, let’s just go ahead and build machines

that can do everything we can do better and cheaper.

Let’s just make ourselves obsolete as fast as possible.

What could possibly go wrong?

That’s one attitude.

The opposite attitude, I think, is to say,

here’s this incredible potential,

let’s think about what kind of future

we’re really, really excited about.

What are the shared goals that we can really aspire towards?

And then let’s think really hard

about how we can actually get there.

So start with, don’t start thinking about the risks,

start thinking about the goals.

And then when you do that,

then you can think about the obstacles you want to avoid.

I often get students coming in right here into my office

for career advice.

I always ask them this very question,

where do you want to be in the future?

If all she can say is, oh, maybe I’ll have cancer,

maybe I’ll get run over by a truck.

Yeah, focus on the obstacles instead of the goals.

She’s just going to end up a hypochondriac paranoid.

Whereas if she comes in with fire in her eyes

and is like, I want to be there.

And then we can talk about the obstacles

and see how we can circumvent them.

That’s, I think, a much, much healthier attitude.

And I feel it’s very challenging to come up with a vision

for the future, which we are unequivocally excited about.

I’m not just talking now in the vague terms,

like, yeah, let’s cure cancer, fine.

I’m talking about what kind of society

do we want to create?

What do we want it to mean to be human in the age of AI,

in the age of AGI?

So if we can have this conversation,

broad, inclusive conversation,

and gradually start converging towards some future,

with some direction, at least,

that we want to steer towards, right,

then we’ll be much more motivated

to constructively take on the obstacles.

And if I try to wrap this up in a more succinct way,

I think we can all agree already now

that we should aspire to build AGI

that doesn’t overpower us, but that empowers us.

And think of the many various ways that it can do that,

whether that’s from my side of the world

of autonomous vehicles.

I’m personally actually from the camp

that believes that human level intelligence

is required to achieve something like vehicles

that would actually be something we would enjoy using

and being part of.

So that’s one example, and certainly there’s a lot

of other types of robots and medicine and so on.

So focusing on those and then coming up with the obstacles,

coming up with the ways that that can go wrong

and solving those one at a time.

And just because you can build an autonomous vehicle,

even if you could build one

that would drive just fine without you,

maybe there are some things in life

that we would actually want to do ourselves.

That’s right.

Right, like, for example,

if you think of our society as a whole,

there are some things that we find very meaningful to do.

And that doesn’t mean we have to stop doing them

just because machines can do them better.

I’m not gonna stop playing tennis

just the day someone builds a tennis robot that can beat me.

People are still playing chess and even go.

Yeah, and in the very near term even,

some people are advocating basic income to replace jobs.

But if the government is gonna be willing

to just hand out cash to people for doing nothing,

then one should also seriously consider

whether the government should also hire

a lot more teachers and nurses

and the kind of jobs which people often

find great fulfillment in doing, right?

We get very tired of hearing politicians saying,

oh, we can’t afford hiring more teachers,

but we’re gonna maybe have basic income.

If we can have more serious research and thought

into what gives meaning to our lives,

the jobs give so much more than income, right?

Mm hmm.

And then think about in the future,

what are the roles where we want people

to continually feel empowered by machines?

And I think sort of, I come from Russia,

from the Soviet Union.

And I think for a lot of people in the 20th century,

going to the moon, going to space was an inspiring thing.

I feel like the universe of the mind,

so AI, understanding, creating intelligence

is that for the 21st century.

So it’s really surprising.

And I’ve heard you mention this.

It’s really surprising to me,

both on the research funding side,

that it’s not funded as greatly as it could be,

but most importantly, on the politician side,

that it’s not part of the public discourse

except in the killer bots terminator kind of view,

that people are not yet, I think, perhaps excited

by the possible positive future

that we can build together.

So we should be, because politicians usually just focus

on the next election cycle, right?

The single most important thing I feel we humans have learned

in the entire history of science

is that we are the masters of underestimation.

We underestimated the size of our cosmos again and again,

realizing that everything we thought existed

was just a small part of something grander, right?

Planet, solar system, the galaxy, clusters of galaxies.

The universe.

And we now know that the future has just

so much more potential

than our ancestors could ever have dreamt of.

This cosmos, imagine if all of Earth

was completely devoid of life,

except for Cambridge, Massachusetts.

Wouldn’t it be kind of lame if all we ever aspired to

was to stay in Cambridge, Massachusetts forever

and then go extinct in one week,

even though Earth was gonna continue on for longer?

That sort of attitude I think we have now

on the cosmic scale, life can flourish on Earth,

not for four years, but for billions of years.

I can even tell you about how to move it out of harm’s way

when the sun gets too hot.

And then we have so much more resources out here,

which today, maybe there are a lot of other planets

with bacteria or cow like life on them,

but most of this, all this opportunity seems,

as far as we can tell, to be largely dead,

like the Sahara Desert.

And yet we have the opportunity to help life flourish

around this for billions of years.

So let’s quit squabbling about

whether some little border should be drawn

one mile to the left or right,

and look up into the skies and realize,

hey, we can do such incredible things.

Yeah, and that’s, I think, why it’s really exciting

that you and others are connected

with some of the work Elon Musk is doing,

because he’s literally going out into that space,

really exploring our universe, and it’s wonderful.

That is exactly why Elon Musk is so misunderstood, right?

People misconstrue him as some kind of pessimistic doomsayer.

The reason he cares so much about AI safety

is because he more than almost anyone else appreciates

these amazing opportunities that we’ll squander

if we wipe ourselves out here on Earth.

We’re not just going to wipe out the next generation,

but all future generations, and this incredible opportunity

that’s out there, and that would really be a waste.

And AI, for people who think that it would be better

to do without technology, let me just mention that

if we don’t improve our technology,

the question isn’t whether humanity is going to go extinct.

The question is just whether we’re going to get taken out

by the next big asteroid or the next super volcano

or something else dumb that we could easily prevent

with more tech, right?

And if we want life to flourish throughout the cosmos,

AI is the key to it.

As I mentioned in a lot of detail in my book right there,

even many of the most inspired sci fi writers,

I feel have totally underestimated the opportunities

for space travel, especially to other galaxies,

because they weren’t thinking about the possibility of AGI,

which just makes it so much easier.

Right, yeah.

So that goes to your view of AGI that enables our progress,

that enables a better life.

So that’s a beautiful way to put it

and then something to strive for.

So Max, thank you so much.

Thank you for your time today.

It’s been awesome.

Thank you so much.

Thanks.

Have a great day.
