Lex Fridman Podcast - #76 - John Hopfield: Physics View of the Mind and Neurobiology

The following is a conversation with John Hopfield,

professor at Princeton, whose life’s work weaved beautifully

through biology, chemistry, neuroscience, and physics.

Most crucially, he saw the messy world of biology

through the piercing eyes of a physicist.

He’s perhaps best known for his work

on associative neural networks,

now known as Hopfield networks,

that were one of the early ideas that catalyzed

the development of the modern field of deep learning.

As his 2019 Franklin Medal in Physics Award states,

he applied concepts of theoretical physics

to provide new insights on important biological questions

in a variety of areas, including genetics and neuroscience

with significant impact on machine learning.

And as John says in his 2018 article titled,

Now What?, his accomplishments have often come about

by asking that very question, now what?

And often responding by a major change of direction.

This is the Artificial Intelligence Podcast.

If you enjoy it, subscribe on YouTube,

give it five stars on Apple Podcast,

support it on Patreon, or simply connect with me on Twitter,

at Lex Fridman, spelled F R I D M A N.

As usual, I’ll do one or two minutes of ads now

and never any ads in the middle

that can break the flow of the conversation.

I hope that works for you

and doesn’t hurt the listening experience.

This show is presented by Cash App,

the number one finance app in the App Store.

When you get it, use code LexPodcast.

Cash App lets you send money to friends, buy Bitcoin,

and invest in the stock market with as little as $1.

Since Cash App does fractional share trading,

let me mention that the order execution algorithm

that works behind the scenes

to create the abstraction of fractional orders

is to me an algorithmic marvel.

So big props to the Cash App engineers

for solving a hard problem

that in the end provides an easy interface

that takes a step up the next layer of abstraction

over the stock market,

making trading more accessible for new investors

and diversification much easier.

So again, if you get Cash App from the App Store,

Google Play, and use code LexPodcast,

you’ll get $10,

and Cash App will also donate $10 to FIRST,

one of my favorite organizations

that is helping advance robotics and STEM education

for young people around the world.

And now here’s my conversation with John Hopfield.

What difference between biological neural networks

and artificial neural networks

is most captivating and profound to you?

At the higher philosophical level,

let’s not get technical just yet.

But one of the things that very much intrigues me

is the fact that neurons have all kinds of components,

properties to them.

And in evolutionary biology,

if you have some little quirk

in how a molecule works or how a cell works,

and it can be made use of,

evolution will sharpen it up

and make it into a useful feature rather than a glitch.

And so you expect, in neurobiology, for evolution

to have captured all kinds of possibilities

of how you get neurons to do things for you.

And that aspect has been completely suppressed

in artificial neural networks.

So the glitches become features

in the biological neural network.

They can.

Look, let me take one of the things

that I used to do research on.

If you take things which oscillate,

they have rhythms which are sort of close to each other.

Under some circumstances,

these things will have a phase transition

and suddenly the rhythm will,

everybody will fall into step.

There was a marvelous physical example of that

in the Millennium Bridge across the Thames River,

built around 2001.

And pedestrians walking across,

pedestrians don’t walk synchronized,

they don’t walk in lockstep.

But they’re all walking about the same frequency

and the bridge could sway at that frequency

and the slight sway made pedestrians tend a little bit

to lock into step and after a while,

the bridge was oscillating back and forth

and the pedestrians were walking in step to it.

And you could see it in the movies made out of the bridge.

And the engineers made a simple minor mistake.

They assumed when you walk, it’s step, step, step,

and it’s back and forth motion.

But when you walk, it’s also right foot, left foot,

with side to side motion.

And it’s the side to side motion

for which the bridge was strong enough,

but it wasn’t stiff enough.

And as a result, you would feel the motion

and you’d fall into step with it.

And people were very uncomfortable with it.

They closed the bridge for two years

while they built stiffening for it.

Now, nerve cells produce action potentials.

You have a bunch of cells which are loosely coupled together

producing action potentials at the same rate.

There’ll be some circumstances

under which these things can lock together.

Other circumstances in which they won’t.

Well, if they fire together,

you can be sure that other cells are gonna notice it.

So you can make a computational feature out of this

in an evolving brain.

Most artificial neural networks

don’t even have action potentials,

let alone have the possibility for synchronizing them.
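
Hopfield doesn’t name a model here, but the phase-locking he describes, loosely coupled oscillators with nearby rhythms suddenly falling into step, is conventionally illustrated with the Kuramoto model. A minimal sketch of that idea; the coupling strengths, frequency spread, and population size below are illustrative choices, not anything from the conversation:

```python
# Kuramoto-style sketch of oscillators falling into step.
# Each oscillator i has a phase theta_i and a natural frequency omega_i:
#   d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
# where r * exp(i * psi) is the average of exp(i * theta_j) over the population.
# Below a critical coupling K the phases stay spread out; above it, they lock.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                 # number of oscillators ("pedestrians")
omega = rng.normal(0.0, 0.1, N)         # natural frequencies, close to each other
theta0 = rng.uniform(0, 2 * np.pi, N)   # random initial phases
dt, steps = 0.05, 2000

def order_parameter(theta):
    """1.0 means fully synchronized, near 0 means incoherent."""
    return np.abs(np.mean(np.exp(1j * theta)))

for K in (0.05, 0.5):                   # weak vs. strong coupling
    theta = theta0.copy()
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))  # mean field r * exp(i * psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    print(f"K = {K}: order parameter = {order_parameter(theta):.2f}")
```

With weak coupling the order parameter stays near zero; with strong coupling the phases lock, the analog of the pedestrians and the bridge falling into step.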

And you mentioned the evolutionary process.

So the evolutionary process

that builds on top of biological systems

leverages the weird mess of it somehow.

So how do you make sense of that ability

to leverage all the different kinds of complexities

in the biological brain?

Well, look, at the biological molecule level,

you have a piece of DNA

which encodes for a particular protein.

You could duplicate that piece of DNA

and now one part of it can code for that protein,

but the other one could itself change a little bit

and thus start coding for a molecule

which is slightly different.

Now, if that molecule was just slightly different,

had a function which helped any old chemical reaction

which was important to the cell,

you would go ahead and let that try,

and evolution would slowly improve that function.

And so you have the possibility of duplicating

and then having things drift apart.

One of them retain the old function,

the other one do something new for you.

And there’s evolutionary pressure to improve.

Look, there is in computers too,

but improvement has to do with closing some companies

and opening some others.

The evolutionary process looks a little different.

Yeah, similar timescale perhaps.

Much shorter in timescale.

Companies close, yeah, go bankrupt and are born,

yeah, shorter, but not much shorter.

Some companies last a century, but yeah, you’re right.

I mean, if you think of a company as a single organism

that builds and evolves, yeah,

it’s a fascinating correspondence there

with biological organisms.

And companies have difficulty having a new product

competing with an old product.

When IBM built its first PC, you probably read the book,

they made a little isolated internal unit to make the PC.

And for the first time in IBM’s history,

they didn’t insist that you build it out of IBM components.

But they understood that they could get into this market,

which is a very different thing

by completely changing their culture.

And biology finds other markets in a more adaptive way.

Yeah, it’s better at it.

It’s better at that kind of integration.

So maybe you’ve already said it,

but what to you is the most beautiful aspect

or mechanism of the human mind?

Is it the adaptive, the ability to adapt

as you’ve described, or is there some other little quirk

that you particularly like?

Adaptation is everything when you get down to it.

But the difference, there are differences between adaptation

where your learning goes on only over generations

and over evolutionary time,

where your learning goes on at the time scale

of one individual who must learn from the environment

during that individual’s lifetime.

And biology has both kinds of learning in it.

And the thing which makes neurobiology hard

is that it’s a mathematical system, as it were,

built on this other kind of evolutionary system.

What do you mean by mathematical system?

Where’s the math in the biology?

Well, when you talk to a computer scientist

about neural networks, it’s all math.

The fact that biology actually came about from evolution,

and the fact that biology is about a system

which you can build in three dimensions.

If you look at computer chips,

computer chips are basically two dimensional structures,

maybe 2.1 dimensions, but they really have difficulty

doing three dimensional wiring.

Biology is, the neocortex is actually also sheet like,

and it sits on top of the white matter,

which is about 10 times the volume of the gray matter

and contains all what you might call the wires.

But the effect of computer structure

on what is easy and what is hard is immense.

And biology does, it makes some things easy

that are very difficult to understand

how to do computationally.

On the other hand, you can’t do simple floating point

arithmetic because it’s awfully stupid.

And you’re saying this kind of three dimensional

complicated structure makes, it’s still math.

It’s still doing math.

The kind of math it’s doing enables you to solve problems

of a very different kind.

That’s right, that’s right.

So you mentioned two kinds of adaptation,

the evolutionary adaptation and the adaptation

or learning at the scale of a single human life.

Which do you, which is particularly beautiful to you

and interesting from a research

and from just a human perspective?

And which is more powerful?

I find things most interesting that I begin to see

how to get into the edges of them

and tease them apart a little bit and see how they work.

And since I can’t see the evolutionary process going on,

I’m in awe of it.

But I find it just a black hole as far as trying

to understand what to do.

And so in a certain sense, I’m in awe of it,

but I couldn’t be interested in working on it.

The human lifetime scale is, however, something

you can tease apart and study.

Yeah, you can do, there’s developmental neurobiology

which understands how the connections

and how the structure evolves from a combination

of what the genetics is like and the real,

the fact that you’re building a system in three dimensions.

In just days and months, those early days

of a human life are really interesting.

They are and of course, there are times

of immense cell multiplication.

There are also times of cell death;

the greatest cell death in the brain is during infancy.

It’s turnover.

So what is not effective, what is not wired well enough

to use at the moment, throw it out.

It’s a mysterious process.

From, let me ask, from what field do you think

the biggest breakthrough is in understanding

the mind will come in the next decades?

Is it neuroscience, computer science, neurobiology,

psychology, physics, maybe math, maybe literature?

Well, of course, I see the world always

through a lens of physics.

I grew up in physics and the way I pick problems

is very characteristic of physics

and of an intellectual background which is not psychology,

which is not chemistry and so on and so on.

Yeah, both of your parents are physicists.

Both of my parents were physicists

and the real thing I got out of that was a feeling

that the world is an understandable place

and if you do enough experiments and think about

what they mean and structure things

so you can do the mathematics

relevant to the experiments, you ought to be able

to understand how things work.

But that was, that was a few years ago.

Did you change your mind at all through many decades

of trying to understand the mind,

of studying in different kinds of ways?

Not even the mind, just biological systems.

You still have hope that physics, that you can understand?

There’s a question of what do you mean by understand?

Of course.

When I taught freshman physics, I used to say,

I wanted to get the students to understand the subject,

to understand Newton’s laws.

I didn’t want them simply to memorize a set of examples

to which they knew the equations to write down

to generate the answers.

I had this nebulous idea of understanding

so that if you looked at a situation,

you could say, oh, I expect the ball to make that trajectory

or I expect some intuitive notion of understanding

and I don’t know how to express that very well

and I’ve never known how to express it well.

And you run smack up against it when you do these,

look at these simple neural nets,

feed forward neural nets, which do amazing things

and yet, you know, contain nothing of the essence

of what I would have felt was understanding.

Understanding is more than just an enormous lookup table.

Let’s linger on that.

How sure you are of that?

What if the table gets really big?

So, I mean, asked another way,

these feed forward neural networks,

do you think they’ll ever understand?

I could answer that in two ways.

I think if you look at real systems,

feedback is an essential aspect

of how these real systems compute.

On the other hand, if I have a mathematical system

with feedback, I know I can unlayer this and do it,

but I have an exponential expansion

in the amount of stuff I have to build

if I can resolve the problem that way.

So feedback is essential.

So we can talk even about recurrent neural nets,

so recurrence, but do you think all the pieces are there

to achieve understanding through these simple mechanisms?

Like back to our original question,

what is the fundamental, is there a fundamental difference

between artificial neural networks and biological

or is it just a bunch of surface stuff?

Suppose you ask a neurosurgeon, when is somebody dead?

Yeah.

They’ll probably go back to saying,

well, I can look at the brain rhythms

and tell you this is a brain

which never could have functioned again.

This other one is one which, if we treat it well,

is still recoverable.

And they do that just by some electrodes

looking at simple electrical patterns,

which don’t look in any detail at all

what individual neurons are doing.

These rhythms are utterly absent

from anything which goes on at Google.

Yeah, but the rhythms.

But the rhythms what?

So, well, that’s like comparing, okay, I’ll tell you,

it’s like you’re comparing the greatest classical musician

in the world to a child first learning to play.

But they’re still both playing the piano.

The question I’m asking is,

will it ever go on at Google?

Do you have a hope?

Because you’re one of the seminal figures

in both launching both disciplines,

both sides of the river.

I think it’s going to go on generation after generation.

The way it has where what you might call

the AI computer science community says,

let’s take the following.

This is our model of neurobiology at the moment.

Let’s pretend it’s good enough

and do everything we can with it.

And it does interesting things.

And after a while it sort of grinds into the sand

and you say, ah, something else is needed from neurobiology.

And some other grand thing comes in

and enables you to go a lot further.

Which will go into the sand again.

And I think it could be generations of this evolution.

I don’t know how many of them.

And each one is going to get you further

into what a brain does.

And in some sense, pass the Turing test longer

and in more broad aspects.

And how many of these are going to have to be

before you say, I’ve made something,

I’ve made a human, I don’t know.

But your sense is it might be a couple.

My sense is it might be a couple more.

Yeah.

And going back to my brainwaves as it were.

Yes, from the AI point of view,

they would say, ah, maybe these are an epiphenomenon

and not important at all.

The first car I had, a real wreck of a 1936 Dodge,

go above about 45 miles an hour and the wheels would shimmy.

Yeah.

Good speedometer that.

Now, nobody designed the car that way.

The car is malfunctioning to have that.

But in biology, if it were useful to know

when are you going more than 45 miles an hour,

you just capture that.

And you wouldn’t worry about where it came from.

Yeah.

It’s going to be a long time before that kind of thing,

which can take place in large complex networks of things

is actually used in the computation.

Look, how many transistors are there

in your laptop these days?

Actually, I don’t know the number.

It’s on the scale of 10 to the 10.

I can’t remember the number either.

Yeah.

And all the transistors are somewhat similar.

And most physical systems with that many parts,

all of which are similar, have collective properties.

Yes.

Sound waves in air, earthquakes,

what have you, have collective properties.

Weather.

There are no collective properties used

in artificial neural networks, in AI.

Yeah, it’s very.

If biology uses them,

it’s going to take us to more generations of things

for people to actually dig in

and see how they are used and what they mean.

See, you’re very right.

We might have to return several times to neurobiology

and try to make our transistors more messy.

Yeah, yeah.

At the same time, the simple ones will conquer big aspects.

And I think one of the biggest surprises to me was

how well learning systems,

which are manifestly nonbiological,

actually work,

and how important and how useful they can be in AI.

So if we can just take a stroll to some of your work

that is incredibly surprising,

that it works as well as it does,

that launched a lot of the recent work with neural networks.

If we go to what are now called Hopfield networks,

can you tell me what is associative memory in the mind

for the human side?

Let’s explore memory for a bit.

Okay, what you mean by associative memory is,

ah, you have a memory of each of your friends.

Your friend has all kinds of properties

from what they look like, what their voice sounds like,

to where they went to college, where you met them,

go on and on, what science papers they’ve written.

And if I start talking about a 5 foot 10, wiry

cognitive scientist who’s got a very bad back,

it doesn’t take very long for you to say,

oh, he’s talking about Geoff Hinton.

I never mentioned the name or anything very particular.

But somehow a few facts that are associated

with a particular person enables you to get a hold

of the rest of the facts.

Or not the rest of them, another subset of them.

And it’s this ability to link things together,

link experiences together, which goes under

the general name of associative memory.

And a large part of intelligent behavior

is actually just large associative memories at work,

as far as I can see.

What do you think is the mechanism of how it works

in the mind?

Is it a mystery to you still?

Do you have inklings of how this essential thing

for cognition works?

What I made 35 years ago was, of course,

a crude physics model to actually enable you

to understand my old sense of understanding

as a physicist, because you could say,

ah, I understand why this goes to stable states.

It’s like things going downhill.

And that gives you something with which to think

in physical terms rather than only in mathematical terms.
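
For reference, the "things going downhill" picture he is describing corresponds to the standard textbook formulation of the model: binary units with symmetric connections, updated one at a time, can only lower an energy function, so the state settles into a stable minimum, a stored memory.

```latex
% Standard Hopfield network (binary units s_i = \pm 1, symmetric weights w_{ij} = w_{ji}, w_{ii} = 0).
% Asynchronous update rule for a single unit:
s_i \leftarrow \operatorname{sign}\!\Big(\sum_{j} w_{ij}\, s_j\Big)
% Energy function that each such update can only decrease:
E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j
% so the dynamics flow "downhill" in E and stop at a local minimum, a stable stored pattern.
```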

So you’ve created these associative artificial networks.

That’s right.

Now, if you look at what I did,

I didn’t at all describe a system which gracefully learns.

I described a system in which you could understand

how learning could link things together,

how very crudely it might learn.

One of the things which intrigues me

as I reinvestigate that system now to some extent is,

look, I see you, I’ll see you every second

for the next hour or what have you.

Each look at you is a little bit different.

I don’t store all those second by second images.

I don’t store 3,000 images.

I somehow compact this information.

So I now have a view of you,

which I can use.

It doesn’t slavishly remember anything in particular,

but it compacts the information into useful chunks,

and somehow these chunks,

which are not just activities of neurons

but bigger things than that,

are the real entities which are useful to you.

Which are useful to you.

Useful to you to describe,

to compress this information coming at you.

And you have to compress it in such a way

that if the information comes in just like this again,

I don’t bother to rewrite it or efforts to rewrite it

simply do not yield anything

because those things are already written.

And that needs to not be a deliberate,

look this up, have I stored it somewhere already?

There’ll be something which is much more automatic

in the machine hardware.

Right, so in the human mind,

how complicated is that process do you think?

So you’ve created,

feels weird to be sitting with John Hopfield

calling them Hopfield networks, but.

It is weird.

Yeah, but nevertheless, that’s what everyone calls them.

So here we are.

So that’s a simplification.

That’s what a physicist would do.

You and Richard Feynman sat down

and talked about associative memory.

Now, if you look at the mind

where you can’t quite simplify it so perfectly,

do you think that?

Well, let me backtrack just a little bit.

Yeah.

Biology is about dynamical systems.

Computers are dynamical systems.

You can ask, if you want to model biology,

if you want to model neurobiology,

what is the time scale?

There’s a dynamical system

of a fairly fast time scale in which you could say,

the synapses don’t change much during this computation,

so I’ll think of the synapses fixed

and just do the dynamics of the activity.

Or you can say, the synapses are changing fast enough

that I have to have the synaptic dynamics

working at the same time as the system dynamics

in order to understand the biology.

Most, if you look at the feedforward artificial neural nets,

they’re all done with learning kept separate.

First of all, I spend some time learning, not performing,

and then I turn off learning and I perform.

Right.

That’s not biology.

And so as I look more deeply at neurobiology,

even as associative memory,

I’ve got to face the fact that the dynamics

of the synapse change is going on all the time.

And I can’t just get by by saying,

I’ll do the dynamics of activity with fixed synapses.

Yeah.

So the synaptic, the dynamics of the synapses

is actually fundamental to the whole system.

Yeah, yeah.

And there’s nothing necessarily separating the time scales.

When the time scales can be separated,

it’s neat from the physicist’s

or the mathematician’s point of view,

but it’s not necessarily true in neurobiology.

So you’re kind of dancing beautifully

between showing a lot of respect to physics

and then also saying that physics

cannot quite reach the complexity of biology.

So where do you land?

Or do you continuously dance between the two points?

I continuously dance between them

because my whole notion of understanding

is that you can describe to somebody else

how something works in ways which are honest and believable

and still not describe all the nuts and bolts in detail.

Weather.

I can describe weather

as 10 to the 32 molecules colliding in the atmosphere.

I can simulate weather that way if I have a big enough machine.

I’ll simulate it accurately.

It’s no good for understanding.

If I want to understand things, I want to understand things

in terms of wind patterns, hurricanes,

pressure differentials, and so on,

all things as they’re collective.

And the physicist in me always hopes

that biology will have some things

that can be said about it which are both true

and for which you don’t need all the molecular details

as the molecules colliding.

That’s what I mean from the roots of physics,

by understanding.

So what did, again, sorry,

but Hopfield Networks help you understand

what insight did give us about memory, about learning?

They didn’t give insights about learning.

They gave insights about how things having learned

could be expressed, how having learned a picture of you,

a picture of you reminds me of your name.

But it didn’t describe a reasonable way

of actually doing the learning.

They only said that if you had previously learned

the connections for this kind of pattern,

the system would now behave in a physical way,

which is to say,

ah, if I put part of the pattern in here,

the other part of the pattern will complete over here.

I could understand that physics,

if the right learning stuff had already been put in.

And I could understand why then putting in a picture

of somebody else would generate something else over here.

But it did not have a reasonable description

of the learning that was going on.

It did not have a reasonable description

of the learning process.

But even, so forget learning.

I mean, that’s just a powerful concept

that sort of forming representations

that are useful to be robust,

you know, for error correction kind of thing.

So this is kind of what the biology does

we’re talking about.

Yeah, and what my paper did was simply enable you,

there are lots of ways of being robust.

If you think of a dynamical system,

you think of a system where a path is going on in time.

And if you think for a computer,

there’s a computational path,

which is going on in a huge dimensional space

of ones and zeros.

And an error correction system is a system,

which if you get a little bit off that trajectory,

will push you back onto that trajectory again.

So you get to the same answer in spite of the fact

that there were things,

so that the computation wasn’t being ideally done

all the way along the line.

And there are lots of models for error correction.

But one of the models for error correction is to say,

there’s a valley that you’re following, flowing down.

And if you push a little bit off the valley,

just like water being pushed a little bit by a rock,

it gets back and follows the course of the river.

And that’s basically the analog

in the physical system, which enables you to say,

oh yes, error free computation and an associative memory

are very much like things that I can understand

from the point of view of a physical system.

The physical system is, can be under some circumstances,

an accurate metaphor.

It’s not the only metaphor.

There are error correction schemes,

which don’t have a valley and energy behind them.

But those are error correction schemes,

which a mathematician may be able to understand,

but I don’t.
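
As a concrete illustration of that valley picture, here is a toy sketch, not Hopfield’s own code, just the textbook construction: a few random ±1 patterns are stored with a Hebbian outer-product rule, one of them is corrupted, and asynchronous updates pull the state back to the stored pattern, with the energy only able to decrease along the way. The sizes and the amount of corruption are arbitrary illustrative choices.

```python
# Toy associative memory: Hebbian storage plus asynchronous updates.
# A corrupted cue "flows downhill" in energy back to the nearest stored pattern.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian (outer-product) weights: symmetric, zero diagonal.
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0.0)

def energy(s):
    # The Lyapunov-style quantity that asynchronous updates never increase.
    return -0.5 * s @ W @ s

# Take one stored pattern and flip 20% of its units (the "error").
target = patterns[0]
state = target.copy()
flipped = rng.choice(n_units, size=20, replace=False)
state[flipped] *= -1
print("energy of corrupted cue:", round(energy(state), 2))

# Asynchronous updates: visit units in random order; each flip only lowers E.
for _ in range(5):
    for i in rng.permutation(n_units):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("energy after settling:  ", round(energy(state), 2))
print("overlap with stored pattern:", int(state @ target), "of", n_units)
```

Flipping 20 of 100 units pushes the state a little off the trajectory; the updates act like the rock-perturbed water rejoining the river, and the printout should show the energy dropping and the overlap returning to the full 100.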

So there’s the physical metaphor that seems to work here.

That’s right, that’s right.

So these kinds of networks actually led to a lot of the work

that is going on now in neural networks,

artificial neural networks.

So the follow on work with restricted Boltzmann machines

and deep belief nets followed on from these ideas

of the Hopfield network.

So what do you think about this continued progress

of that work towards the now reinvigorated exploration

of feed forward neural networks

and recurrent neural networks

and convolutional neural networks

and kinds of networks that are helping solve

image recognition, natural language processing,

all that kind of stuff.

It always intrigued me that one of the most long lived

of the learning systems is the Boltzmann machine,

which is intrinsically a feedback network.

And with the brilliance of Hinton and Sejnowski

to understand how to do learning in that.

And it’s still a useful way to understand learning

and the learning that you understand in that

has something to do with the way

that feed forward systems work.

But it’s not always exactly simple

to express that intuition.

But it always amuses me to see Hinton

going back to the well yet again

on a form of the Boltzmann machine

because really that which has feedback

and interesting probabilities in it

is a lovely encapsulation of something computational.

Something computational?

Something both computational and physical.

Computational and it’s very much related

to feed forward networks.

Physical in that Boltzmann machine learning

is really learning a set of parameters

for a physics Hamiltonian or energy function.
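
To spell out what "learning a set of parameters for a physics Hamiltonian or energy function" means here, the textbook Boltzmann machine of Hinton and Sejnowski assigns each binary state an energy, turns that into a Gibbs probability distribution, and nudges each weight by the difference between a correlation measured on the data and the same correlation under the model:

```latex
% Boltzmann machine: binary units s_i, parameters w_{ij} (weights) and b_i (biases).
E(\mathbf{s}) = -\sum_{i<j} w_{ij}\, s_i s_j - \sum_i b_i s_i,
\qquad
P(\mathbf{s}) = \frac{e^{-E(\mathbf{s})}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')}}
% Gradient ascent on the log-likelihood of the data gives the learning rule:
\Delta w_{ij} \;\propto\; \langle s_i s_j \rangle_{\text{data}} - \langle s_i s_j \rangle_{\text{model}}
```

The physical reading is that the weights are the couplings of an Ising-style energy function, and learning adjusts them until the model’s equilibrium statistics match the data’s.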

What do you think about learning in this whole domain?

Do you think the aforementioned guy,

Geoff Hinton, all the work there with backpropagation,

all the kind of learning that goes on in these networks,

if we compare it to learning in the brain, for example,

is there echoes of the same kind of power

that backpropagation reveals

about these kinds of recurrent networks?

Or is it something fundamentally different

going on in the brain?

I don’t think the brain is as deep

as the deepest networks go,

the deepest computer science networks.

And I do wonder whether part of that depth

of the computer science networks is necessitated

by the fact that the only learning

that’s easily done on a machine is feed forward.

And so there’s the question of to what extent

is the biology, which has some feed forward

and some feed back,

been captured by something which has got many more neurons

but much more depth than the neurons in it.

So part of you wonders if the feedback is actually

more essential than the number of neurons or the depth.

The dynamics of the feedback.

Look, if you don’t have feedback,

it’s a little bit like building a big computer

and running it through one clock cycle.

And then you can’t do anything

until you reload something coming in.

How do you use the fact that there are multiple clock cycles?

How do I use the fact that you can close your eyes,

stop listening to me and think about a chessboard

for two minutes without any input whatsoever?

Yeah, that memory thing,

that’s fundamentally a feedback kind of mechanism.

You’re going back to something.

Yes, it’s hard to understand.

It’s hard to introspect,

let alone consciousness.

Oh, let alone consciousness, yes, yes.

Because that’s tied up in there too.

You can’t just put that on another shelf.

Every once in a while I get interested in consciousness

and then I go and I’ve done that for years

and ask one of my betters, as it were,

their view on consciousness.

It’s been interesting collecting them.

What is consciousness?

Let’s try to take a brief step into that room.

Well, ask Marvin Minsky,

his view on consciousness.

And Marvin said,

consciousness is basically overrated.

It may be an epiphenomenon.

After all, all the things your brain does,

but they’re actually hard computations

you do nonconsciously.

And there’s so much evidence

that even the simple things you do,

you can make decisions,

you can make committed decisions about them,

the neurobiologist can say,

he’s now committed, he’s going to move the hand left

before you know it.

So his view that consciousness is not,

that’s just like little icing on the cake.

The real cake is in the subconscious.

Yum, yum.

Subconscious, nonconscious.

Nonconscious, what’s the better word, sir?

It’s only that Freud captured the other word.

Yeah, it’s a confusing word, subconscious.

Nick Chater wrote an interesting book.

I think the title of it is The Mind is Flat.

Flat in a neural net sense, might be flat

as something which is a very broad neural net

without any layers in depth,

whereas a deep brain would be many layers

and not so broad.

In the same sense that if you push Minsky hard enough,

he would probably have said,

consciousness is your effort to explain to yourself

that which you have already done.

Yeah, it’s the weaving of the narrative

around the things that have already been computed for you.

That’s right, and so much of what we do

for our memories of events, for example.

If there’s some traumatic event you witness,

you will have a few facts about it correctly done.

If somebody asks you about it, you will weave a narrative

which is actually much more rich in detail than that

based on some anchor points you have of correct things

and pulling together general knowledge on the other,

but you will have a narrative.

And once you generate that narrative,

you are very likely to repeat that narrative

and claim that all the things you have in it

are actually the correct things.

There was a marvelous example of that

in the Watergate slash impeachment era of John Dean.

John Dean, you’re too young to know,

had been the personal lawyer of Nixon.

And so John Dean was involved in the coverup

and John Dean ultimately realized

the only way to keep himself out of jail for a long time

was actually to tell some of the truths about Nixon.

And John Dean was a tremendous witness.

He would remember these conversations in great detail

and very convincing detail.

And long afterward, some of the tapes,

the secret tapes, as it were, from which

Dean was recalling these conversations,

were published, and one found out that John Dean

had a good but not exceptional memory.

What he had was an ability to paint vividly

and in some sense accurately the tone of what was going on.

By the way, that’s a beautiful description of consciousness.

Do you, like where do you stand in your today?

So perhaps it changes day to day,

but where do you stand on the importance of consciousness

in our whole big mess of cognition?

Is it just a little narrative maker

or is it actually fundamental to intelligence?

That’s a very hard one.

When I asked Francis Crick about consciousness,

he launched forward in a long monologue

about Mendel and the peas and how Mendel knew

that there was something and how biologists understood

that there was something in inheritance,

which was just very, very different.

And the fact that inherited traits didn’t just wash out

into a gray, but were this or that and propagated,

that was absolutely fundamental to the biology.

And it took generations of biologists to understand

that there was genetics and it took another generation

or two to understand that genetics came from DNA.

But very shortly after Mendel, thinking biologists

did realize that there was a deep problem about inheritance.

And Francis would have liked to have said,

and that’s why I’m working on consciousness.

But of course, he didn’t have any smoking gun

in the sense of Mendel.

And that’s the weakness of his position.

If you read his book, which he wrote with Koch, I think.

Yeah, Christoph Koch, yeah.

I find it unconvincing for the smoking gun reason.

So I’m going on collecting views without actually having taken

a very strong one myself,

because I haven’t seen the entry point.

Not seeing the smoking gun from the point of view

of physics, I don’t see the entry point.

Whereas in neurobiology, once I understood the idea

of a collective, an evolution of dynamics,

which could be described as a collective phenomenon,

I thought, ah, there’s a point where what I know

about physics is so different from any neurobiologist

that I have something that I might be able to contribute.

And right now, there’s no way to grasp at consciousness

from a physics perspective.

From my point of view, that’s correct.

And of course, people, physicists, like everybody else,

think very muddily about things.

You ask the closely related question about free will.

Do you believe you have free will?

Physicists will give an offhand answer,

and then backtrack, backtrack, backtrack,

where they realize that the answer they gave

must fundamentally contradict the laws of physics.

Answering questions of free will

and consciousness naturally leads to contradictions

from a physics perspective.

Because it eventually ends up with quantum mechanics,

and then you get into that whole mess

of trying to understand how much,

from a physics perspective, how much is determined,

already predetermined, how much is already deterministic

about our universe, and there’s lots of different things.

And if you don’t push quite that far, you can say,

essentially, all of neurobiology, which is relevant,

can be captured by classical equations of motion.

Right, because in my view of the mysteries of the brain

are not the mysteries of quantum mechanics,

but the mysteries of what can happen

when you have a dynamical system, driven system,

with 10 to the 14 parts.

That that complexity is something which is,

that the physics of complex systems

is at least as badly understood

as the physics of phase coherence in quantum mechanics.

Can we go there for a second?

You’ve talked about attractor networks,

and just maybe you could say what are attractor networks,

and more broadly, what are interesting network dynamics

that emerge in these or other complex systems?

You have to be willing to think

in a huge number of dimensions,

because in a huge number of dimensions,

the behavior of a system can be thought

as just the motion of a point over time

in this huge number of dimensions.

All right.

And an attractor network is simply a network

where there is a line and other lines

converge on it in time.

That’s the essence of an attractor network.

That’s how you.

In a highly dimensional space.

And the easiest way to get that

is to do it in a highly dimensional space,

where some of the dimensions provide the dissipation,

which, if I have a physical system,

trajectories can’t contract everywhere.

They have to contract in some places and expand in others.

There’s a fundamental classical theorem

of statistical mechanics,

which goes under the name of Liouville’s theorem,

which says you can’t contract everywhere.

If you contract somewhere, you expand somewhere else.

In interesting physical systems,

you’ve got driven systems

where you have a small subsystem,

which is the interesting part.

And the rest of the contraction and expansion,

the physicists would say it’s entropy flow

in this other part of the system.
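
Stated compactly, the theorem he is invoking says that for a Hamiltonian (energy-conserving) system the flow in phase space preserves volume, so contraction along some directions must be balanced by expansion along others; the dissipation that lets trajectories converge onto attractors has to come from coupling to degrees of freedom outside the subsystem you care about, which is the point he has just made.

```latex
% Liouville's theorem for Hamiltonian flow (coordinates q_i, momenta p_i, Hamiltonian H):
\dot q_i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q_i}
\;\;\Longrightarrow\;\;
\nabla \cdot \dot{\mathbf{x}}
= \sum_i \left( \frac{\partial \dot q_i}{\partial q_i} + \frac{\partial \dot p_i}{\partial p_i} \right)
= \sum_i \left( \frac{\partial^2 H}{\partial q_i \partial p_i} - \frac{\partial^2 H}{\partial p_i \partial q_i} \right) = 0
% The phase-space velocity field is divergence-free, so volumes are conserved:
% contraction in some directions forces expansion in others.
```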

But basically, attractor networks are dynamics

that are funneling down,

so that if you start somewhere in the dynamical system,

you will soon find yourself

on a pretty well determined pathway, which goes somewhere.

If you start somewhere else,

you’ll wind up on a different pathway,

but you don’t have just all possible things.

You have some defined pathways which are allowed

and onto which you will converge.

And that’s the way you make a stable computer,

and that’s the way you make a stable behavior.

So in general, looking at the physics

of the emergent stability in networks,

what are some interesting characteristics that,

what are some interesting insights

from studying the dynamics of such high dimensional systems?

Most dynamical systems, most driven dynamical systems,

are driven, they’re coupled somehow to an energy source.

And so their dynamics keeps going

because it’s coupling to the energy source.

Most of them, it’s very difficult to understand at all

what the dynamical behavior is going to be.

You have to run it.

There’s a subset of systems which has

what is actually known to the mathematicians

as a Lyapunov function, and those systems,

you can understand convergent dynamics

by saying you’re going downhill on something or other.

And what I found, without ever knowing

what Lyapunov functions were, in the simple model

I made in the early 80s, was an energy function

so you could understand how you could get this channeling

on the pathways without having to follow the dynamics

in infinite detail.

You started rolling a ball at the top of a mountain,

it’s gonna wind up at the bottom of a valley.

You know that’s true without actually watching

the ball roll down.

There are certain properties of the system

such that you can know that.

That’s right.

And not all systems behave that way.

Most don’t, probably.

Most don’t, but it provides you with a metaphor

for thinking about systems which are stable

and which do have these attractor behaviors,

even if you can’t find a Lyapunov function behind them

or an energy function behind them.

It gives you a metaphor for thought.

Yeah, speaking of thought,

if I had a glint in my eye with excitement

and said I’m really excited about this something

called deep learning and neural networks

and I would like to create an intelligent system

and came to you as an advisor, what would you recommend?

Is it a hopeless pursuit to use neural networks

to achieve thought?

Is it, what kind of mechanisms should we explore?

What kind of ideas should we explore?

Well, you look at the simple networks,

the one-pass networks.

They don’t support multiple hypotheses very well.

Hmm.

As I have tried to work with very simple systems

which do something which you might consider to be thinking,

thought has to do with the ability to do mental exploration

before you take a physical action.

Almost like we were mentioning, playing chess,

visualizing, simulating inside your head different outcomes.

Yeah, yeah.

And now you would do that in a feed forward network

because you’ve pre calculated all kinds of things.

But I think the way neurobiology does it

hasn’t pre calculated everything.

It actually has parts of a dynamical system

in which you’re doing exploration in a way which is.

There’s a creative element.

Like there’s an.

There’s a creative element.

And in a simple minded neural net,

you have a constellation of instances

of which you’ve learned.

And if you are within that space,

if a new question is a question within this space,

you can actually rely on that system pretty well

to come up with a good suggestion for what to do.

If on the other hand,

the query comes from outside the space,

you have no way of knowing how the system

is gonna behave.

There are no limitations on what can happen.

And so with the artificial neural net world

is always very much,

I have a population of examples.

The test set must be drawn from the equivalent population.

If the test set has examples,

which are from a population which is completely different,

there’s no way that you could expect

to get the answer right.

Yeah, what they call outside the distribution.

That’s right, that’s right.

And so if you see a ball rolling across the street at dusk,

if that wasn’t in your training set,

the idea that a child may be coming close behind that

is not going to occur to the neural net.

And it is to us.

There’s something in our biology that allows that.

Yeah, there’s something in the way

of what it means to be outside of the population

of the training set.

The population of the training set

isn’t just sort of this set of examples.

There’s more to it than that.

And it gets back to my question of,

what is it to understand something?

Yeah.

You know, in a small tangent,

you’ve talked about the value of thinking

of deductive reasoning in science

versus large data collection.

So sort of thinking about the problem.

I suppose it’s the physics side of you

of going back to first principles and thinking,

but what do you think is the value of deductive reasoning

in the scientific process?

Well, there are obviously scientific questions

in which the route to the answer to it

comes through the analysis of one hell of a lot of data.

Right.

Cosmology, that kind of stuff.

And that’s never been the kind of problem

in which I’ve had any particular insight.

Though I must say, if you look at,

cosmology is one of those.

If you look at the actual things that Jim Peebles,

one of this year’s Nobel Prize winners in physics,

from the local physics department,

the kinds of things he’s done,

he’s never crunched large data.

Never, never, never.

He’s used the encapsulation of the work of others

in this regard.

Right.

But it ultimately boiled down to thinking

through the problem.

Like what are the principles under which

a particular phenomenon operates?

Yeah, yeah.

And look, physics is always going to look

for ways in which you can describe the system

in a way which rises above the details.

And to the dyed-in-the-wool biologist,

biology works because of the details.

In physics, to the physicists,

we want an explanation which is right

in spite of the details.

And there will be questions which we cannot answer

as physicists because the answer cannot be found that way.

There’s, I’m not sure if you’re familiar

with the entire field of brain computer interfaces

that’s become more and more intensely researched

and developed recently, especially with companies

like Neuralink with Elon Musk.

Yeah, I know there have always been the interests

both in things like getting the eyes

to be able to control things

or getting the thought patterns

to be able to move what had been a connected limb

which is now connected through a computer.

That’s right.

So in the case of Neuralink,

they’re doing 1,000 plus connections

where they’re able to do two way,

activate and read spikes, neural spikes.

Do you have hope for that kind of computer brain interaction

in the near or maybe even far future

of being able to expand the ability

of the mind of cognition or understand the mind?

It’s interesting watching things go.

When I first became interested in neurobiology,

most of the practitioners thought you would be able

to understand neurobiology by techniques

which allowed you to record only one cell at a time.

One cell, yeah.

People like David Hubel,

very strongly reflected that point of view.

And that’s been taken over by a generation,

a couple of generations later,

by a set of people who says not until we can record

from 10 to the four, 10 to the five at a time,

will we actually be able to understand

how the brain actually works.

And in a general sense, I think that’s right.

You have to begin to be able to look

for the collective modes, the collective operations of things.

It doesn’t rely on this action potential or that cell.

It relies on the collective properties of this set of cells

connected with this kind of patterns and so on.

And you’re not going to succeed in seeing

what those collective activities are

without recording many cells at once.

The question is how many at once?

What’s the threshold?

And that’s the…

Yeah, and look, it’s being pursued hard

in the motor cortex.

The motor cortex does something which is complex,

and yet the problem you’re trying to address

is fairly simple.

Now, neurobiology does it in ways that differ

from the way an engineer would do it.

An engineer would put in six highly accurate stepping motors

to control a limb rather than 100,000 muscle fibers,

each of which has to be individually controlled.

And so understanding how to do things in a way

which is much more forgiving and much more neural,

I think would benefit the engineering world.

The engineering world, a touch.

Let’s put in a pressure sensor or two,

rather than an array of a gazillion pressure sensors,

none of which are accurate,

all of which are perpetually recalibrating themselves.

So you’re saying your hope is,

your advice for the engineers of the future

is to embrace the large chaos of a messy, error-prone system

like those of the biological systems.

Like that’s probably the way to solve some of these.

I think you’ll be able to make better computations

slash robotics that way than by trying to force things

into a robotics where joint motors are powerful

and stepping motors are accurate.

But then the physicists, the physicist in you

will be lost forever in such systems

because there’s no simple fundamentals to explore

in systems that are so large and messy.

Well, you say that, and yet there’s a lot of physics

in the Navier-Stokes equations,

the equations of nonlinear hydrodynamics,

huge amount of physics in them.

All the physics of atoms and molecules has been lost,

but it’s been replaced by this other set of equations,

which is just as true as the equations at the bottom.

Now those equations are going to be harder to find

in general biology, but the physicist in me says

there are probably some equations of that sort.

They’re out there.

They’re out there, and if physics

is going to contribute anything,

it may contribute to trying to find out

what those equations are and how to capture them

from the biology.

Would you say that’s one of the main open problems

of our age is to discover those equations?

Yeah, if you look at, there’s molecules

and there’s psychological behavior,

and these two are somehow related.

They’re layers of detail, they’re layers of collectiveness,

and to capture that in some vague way,

several stages on the way up to see how these things

can actually be linked together.

So it seems in our universe, there’s a lot of elegant

equations that can describe the fundamental way

that things behave, which is a surprise.

I mean, it’s compressible into equations.

It’s simple and beautiful, but it’s still an open question

whether that link is equally between molecules

and the brain is equally compressible

into elegant equations.

But your sense, well, you’re both a physicist

and a dreamer, you have a sense that…

Yeah, but I can only dream physics dreams.

Physics dreams.

There was an interesting book called Einstein’s Dreams,

which alternates between chapters on his life

and descriptions of the way time might have been but isn’t.

The linking between these being important ideas

that Einstein might have had to think about

the essence of time as he was thinking about time.

So speaking of the essence of time in your biology,

you’re one human, famous, impactful human,

but just one human with a brain living the human condition.

But you’re ultimately mortal, just like all of us.

Has studying the mind as a mechanism

changed the way you think about your own mortality?

It has, really, because particularly as you get older

and the body comes apart in various ways,

I became much more aware of the fact

that what is somebody is contained in the brain

and not in the body that you worry about burying.

And it is to a certain extent true

that for people who write things down,

equations, dreams, notepads, diaries,

fractions of their thought does continue to live

after they’re dead and gone,

after their body is dead and gone.

And there’s a sea change in that going on in my lifetime

between when my father died, except for the things

which were actually written by him, as it were.

Very few facts about him will have ever been recorded.

And the number of facts which are recorded

about each and every one of us is enormous,

kept forever now, as far as I can see, in the digital world.

And so the whole question of what is death

may be different for people a generation ago

and a generation further ahead.

Maybe we have become immortal under some definitions.

Yeah, yeah.

Last easy question, what is the meaning of life?

Looking back, you’ve studied the mind,

us weird descendants of apes.

What’s the meaning of our existence on this little earth?

Oh, that word meaning is as slippery as the word understand.

Interconnected somehow, perhaps.

Is there, it’s slippery, but is there something

that you, despite being slippery,

can hold long enough to express?

I’ve been amazed at how hard it is

to define the things in a living system

in the sense that one hydrogen atom

is pretty much like another,

but one bacterium is not so much like another bacterium,

even of the same nominal species.

In fact, the whole notion of what is the species

gets a little bit fuzzy.

And do species exist in the absence

of certain classes of environments?

And pretty soon one winds up with a biology

which the whole thing is living,

but whether there’s actually any element of it

which by itself would be said to be living

becomes a little bit vague in my mind.

So in a sense, the idea of meaning

is something that’s possessed by an individual,

like a conscious creature.

And you’re saying that it’s all interconnected

in some kind of way that there might not even

be an individual.

We’re all kind of this complicated mess

of biological systems at all different levels

where the human starts and when the human ends is unclear.

Yeah, yeah, and in neurobiology,

you say the neocortex is where the thinking is,

but there’s lots of things that are done in the spinal cord.

And so where’s the essence of thought?

Is it just gonna be neocortex?

Can’t be, can’t be.

Yeah, maybe to understand and to build thought

you have to build the universe along with the neocortex.

It’s all interlinked through the spinal cord.

John, it’s a huge honor talking today.

Thank you so much for your time.

I really appreciate it.

Well, thank you for the challenge of talking with you.

And it’ll be interesting to see whether you can wring

five minutes out of this which are coherent

to anyone or not.

Beautiful.

Thanks for listening to this conversation

with John Hopfield and thank you

to our presenting sponsor, Cash App.

Download it, use code LexPodcast.

You’ll get $10 and $10 will go to FIRST,

an organization that inspires and educates young minds

to become science and technology innovators of tomorrow.

If you enjoy this podcast, subscribe on YouTube,

get five stars on Apple Podcast, support on Patreon,

or simply connect with me on Twitter at Lex Fridman.

And now let me leave you with some words of wisdom

from John Hopfield in his article titled, Now What?

Choosing problems is the primary determinant

of what one accomplishes in science.

I have generally had a relatively short attention span

in science problems.

Thus, I have always been on the lookout

for more interesting questions,

either as my present ones get worked out

or as they get classified by me as intractable,

given my particular talents.

He then goes on to say,

what I have done in science relies entirely

on experimental and theoretical studies by experts.

I have a great respect for them,

especially for those who are willing to attempt

communication with someone who is not an expert in the field.

I would only add that experts are good

at answering questions.

If you’re brash enough, ask your own.

Don’t worry too much about how you found them.

Thank you for listening and hope to see you next time.
