Lex Fridman Podcast - #208 - Jeff Hawkins: The Thousand Brains Theory of Intelligence

The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand

the structure, function, and origin of intelligence in the human brain.

He previously wrote a seminal book on the subject titled On Intelligence, and recently a new book

called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins,

for example, has been raving about, calling the book quote brilliant and exhilarating.

I can’t read those two words and not think of him saying it in his British accent.

Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist.

Check them out in the description to support this podcast.

As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions

in his new book is that if human civilization were to destroy itself, all of our knowledge and

all our creations would go with us. He proposes that we should think about how to save that

knowledge in a way that long outlives us, whether that’s on Earth, in orbit around Earth,

or in deep space, and then to send messages that advertise this backup of human knowledge

to other intelligent alien civilizations. The main message of this advertisement is not that

we are here, but that we were once here. This little difference somehow was deeply humbling

to me, that we may, with some nonzero likelihood, destroy ourselves, and that an alien civilization

thousands or millions of years from now may come across this knowledge store, and they

would only with some low probability even notice it, not to mention be able to interpret it.

And the deeper question here for me is what information in all of human knowledge is even

essential? Does Wikipedia capture it or not at all? This thought experiment forces me

to wonder what are the things we’ve accomplished and are hoping to still accomplish that will

outlive us? Is it things like complex buildings, bridges, cars, rockets? Is it ideas like science,

physics, and mathematics? Is it music and art? Is it computers, computational systems,

or even artificial intelligence systems? I personally can’t imagine that aliens wouldn’t

already have all of these things, in fact much more and much better. To me, the only

unique thing we may have is consciousness itself, and the actual subjective experience

and the actual subjective experience of suffering, of happiness, of hatred, of love. If we can

record these experiences in the highest resolution directly from the human brain, such that aliens

will be able to replay them, that is what we should store and send as a message. Not

Wikipedia, but the extremes of conscious experiences, the most important of which, of course, is

love. This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins.

We previously talked over two years ago. Do you think there are still neurons in your brain

that remember that conversation, that remember me and got excited? Like there’s a Lex neuron

in your brain that just like finally has a purpose? I do remember our conversation. I

have some memories of it, and I formed additional memories of you in the meantime. I wouldn’t

say there’s a neuron or neurons in my brain that know you. There are synapses in my brain

that have formed that reflect my knowledge of you and the model I have of you in the

world. Whether the exact same synapses were formed two years ago, it’s hard to say because

these things come and go all the time. One of the things to know about brains is that

when you think of things, you often erase the memory and rewrite it again. Yes, but I have

a memory of you, and that’s instantiated in synapses. There’s a simpler way to think about

it. You have a model of the world in your head, and that model is continually being updated.

I updated it this morning. You offered me this water. You said it was from the refrigerator.

I remember these things. The model includes where we live, the places we know, the words,

the objects in the world. It’s a monstrous model, and it’s constantly being updated.

People are just part of that model, as are animals, other physical objects, events we've

experienced. In my mind, there's no special place for the memories of humans. Obviously, I know a lot about

my wife and friends and so on, but it's not like there's a special place just for humans over here.

We model everything, and we model other people’s behaviors too. If I said there’s a copy of your

mind in my mind, it’s just because I’ve learned how humans behave, and I’ve learned some things

about you, and that’s part of my world model. Well, I just also mean the collective intelligence

of the human species. I wonder if there’s something fundamental to the brain that enables that,

so modeling other humans with their ideas. You’re actually jumping into a lot of big

topics. Collective intelligence is a separate topic that a lot of people like to talk about.

We could talk about that. That’s interesting. We’re not just individuals. We live in society

and so on. From our research point of view, let's just say, we study the neocortex.

It’s a sheet of neural tissue. It’s about 75% of your brain. It runs on this very repetitive

algorithm. It’s a very repetitive circuit. You can apply that algorithm to lots of different

problems, but underneath, it’s the same thing. We’re just building this model. From our point

of view, we wouldn’t look for these special circuits someplace buried in your brain that

might be related to understanding other humans. It’s more like, how do we build a model of

anything? How do we understand anything in the world? Humans are just another part of

the things we understand. There's nothing in the brain that knows about the

emergent phenomenon of collective intelligence. Well, I certainly know about it. I've heard

the terms, I've read about it. No, but that's as an idea.

Well, I think we have language, which is built into our brains. That’s a key part of collective

intelligence. There are some prior assumptions about the world we’re going to live in. When

we’re born, we’re not just a blank slate. Did we evolve to take advantage of those situations?

Yes. Again, we study only part of the brain, the neocortex. There’s other parts of the

brain that are very much involved in societal interactions and human emotions and how we

interact and even societal issues about how we interact with other people, when we support

them, when we’re greedy and things like that. Certainly, the brain is a great place

where to study intelligence. I wonder if it’s the fundamental atom of intelligence.

Well, I would say it’s absolutely in a central component, even if you believe in collective

intelligence as, hey, that’s where it’s all happening. That’s what we need to study,

which I don’t believe that, by the way. I think it’s really important, but I don’t think that

is the thing. Even if you do believe that, then you have to understand how the brain works in

doing that. It’s more like we are intelligent individuals and together, we are much more

magnified, our intelligence. We can do things that we couldn’t do individually, but even as

individuals, we’re pretty damn smart and we can model things and understand the world and interact

with it. To me, if you’re going to start someplace, you need to start with the brain. Then you could

say, well, how do brains interact with each other? What is the nature of language? How do we share

models? I've learned something about the world; how do I share it with you? That is really

what communal intelligence is. I know something, you know something. We've had different

experiences in the world. I’ve learned something about brains. Maybe I can impart that to you. You’ve

learned something about physics and you can impart that to me. Even just the epistemological

question of, well, what is knowledge and how do you represent it in the brain? That's where it's

going to reside, not just in our writings. It's obvious that human collaboration, human interaction

is how we build societies. But some of the things you talk about and work on,

some of those elements of what makes up an intelligent entity are there within a single person.

Absolutely. I mean, we can’t deny that the brain is the core element here. At least I think it’s

obvious. The brain is the core element in all theories of intelligence. It’s where knowledge

is represented. It’s where knowledge is created. We interact, we share, we build upon each other’s

work. But without a brain, you’d have nothing. There would be no intelligence without brains.

And so that’s where we start. I got into this field because I just was curious as to who I am.

How do I think? What’s going on in my head when I’m thinking? What does it mean to know something?

I can ask what it means for me to know something independent of how I learned it from you or from

someone else or from society. What does it mean for me to know that I have a model of you in my

head? What does it mean to know that I know what this microphone does and how it works physically,

even when I can't see it right now? How do I know that? What does it mean? How do the neurons do that,

at the fundamental level of neurons and synapses and so on? Those are really fascinating questions.

And I'd be just happy to understand those if I could.

So in your new book, you talk about our brain, our mind as being made up of many brains.

So the book is called A Thousand Brains: A New Theory of Intelligence. What is the key idea of this book?

The book has three sections and it has sort of maybe three big ideas. So the first section is

all about what we’ve learned about the neocortex and that’s the thousand brains theory. Just to

complete the picture, the second section is all about AI and the third section is about the future

of humanity. So the thousand brains theory, the big idea there, if I had to summarize into one

big idea, is that we think of the brain, the neocortex as learning this model of the world.

But what we learned is that there are actually tens of thousands of independent modeling systems going

on. Each column in the cortex, and there are about 150,000 of them, is a complete modeling

system. So it's a collective intelligence in your head, in some sense. So the thousand brains theory

says, well, where do I have knowledge about this coffee cup or where’s the model of this cell phone?

It’s not in one place. It’s in thousands of separate models that are complimentary and

they communicate with each other through voting. So this idea that we feel like we’re one person,

that’s our experience. We can explain that. But reality, there’s lots of these, it’s almost like

little brains, but they’re sophisticated modeling systems, about 150,000 of them in each human

brain. And that’s a total different way of thinking about how the neocortex is structured

than we or anyone else thought of even just five years ago. So you mentioned you started

this journey just looking in the mirror and trying to understand who you are.

So if you have many brains, who are you then? So it’s interesting. We have a singular perception,

right? We think, oh, I’m just here. I’m looking at you. But it’s composed of all these things,

like there’s sounds and there’s vision and there’s touch and all kinds of inputs. Yeah,

we have the singular perception. And what the thousand brains theory says is we have these models

that are visual models, models that are auditory models, models that are touch

models, and so on, but they vote. And so these things in the cortex, you can think about these

columns as like little grains of rice, 150,000 stacked next to each other. And each one is its

own little modeling system, but they have these long range connections that go between them.

And we call those voting connections or voting neurons. And so the different columns try to

reach a consensus. Like, what am I looking at? Okay. Each one has some ambiguity, but they come

to a consensus. Oh, there’s a water bottle I’m looking at. We are only consciously able to

perceive the voting. We’re not able to perceive anything that goes on under the hood. So the

voting is what we’re aware of. The results of the vote.

Yeah. Well, you can imagine it this way. We were just talking about eye movements a moment ago. So

as I’m looking at something, my eyes are moving about three times a second. And with each movement,

a completely new input is coming into the brain. It's not repetitive; it's not just the same image shifting around.

I’m totally unaware of it. I can’t perceive it. But yet if I looked at the neurons in your brain,

they’re going on and off, on and off, on and off, on and off. But the voting neurons are not.

The voting neurons are saying, we all agree, even though I’m looking at different parts of this,

this is a water bottle right now. And that’s not changing. And it’s in some position and

pose relative to me. So I have this perception of the water bottle about two feet away from me

at a certain pose to me. That is not changing. That’s the only part I’m aware of. I can’t be

aware of the fact that the inputs from the eyes are moving and changing, and all this other activity.

So these long range connections are the part we can be conscious of. The individual activity in

each column doesn’t go anywhere else. It doesn’t get shared anywhere else. There’s no way to extract

it and talk about it or extract it and even remember it to say, oh, yes, I can recall that.

But these long range connections are the things that are accessible to language and to things

like the hippocampus, our memories, our short-term memory systems and so on. So we're not aware of

95% or maybe it’s even 98% of what’s going on in your brain. We’re only aware of this sort of

stable, somewhat stable voting outcome of all these things that are going on underneath the hood.
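As a rough illustration of the voting idea described above, here is a minimal Python sketch. It assumes each column's ambiguity can be summarized as a probability over candidate objects, and the combination rule, multiply and renormalize, is an invented simplification, not anything from the book or from Numenta's actual models.

```python
# Toy sketch of column voting: each column holds an ambiguous belief
# (a probability over candidate objects), and long-range "voting"
# connections combine them into one stable percept. All numbers and
# object names below are invented for illustration.

def vote(column_beliefs):
    """Combine per-column beliefs by multiplying probabilities
    (a product-of-experts style consensus) and renormalizing."""
    consensus = {}
    for beliefs in column_beliefs:
        for obj, p in beliefs.items():
            consensus[obj] = consensus.get(obj, 1.0) * p
    total = sum(consensus.values())
    return {obj: p / total for obj, p in consensus.items()}

# Three columns, each uncertain on its own:
columns = [
    {"water bottle": 0.5, "coffee cup": 0.5},
    {"water bottle": 0.7, "coffee cup": 0.3},
    {"water bottle": 0.6, "coffee cup": 0.4},
]
print(vote(columns))  # consensus favors "water bottle" (~0.78)
```

The point of the sketch is only that no single column needs to be certain; the stable consensus, like the voting neurons described above, is the part that doesn't change as the inputs flicker.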

So what would you say is the basic element in the thousand brains theory of intelligence?

Like, what's the atom of intelligence when you think about it? Is it

the individual brains, and then what is a brain? Well, can we just talk about what

intelligence is first, and then we can talk about what the elements are. So in my book,

intelligence is the ability to learn a model of the world, to build, internal to your head,

a model that represents the structure of everything: to know that this is a

table and that's a coffee cup and this is a gooseneck lamp, to know these things.

I have to have a model of it in my head. I just don’t look at them and go, what is that?

I already have internal representations of these things in my head, and I had to learn them. I wasn't

born with any of that knowledge. You know, we have some lights in the room here;

that's not part of my evolutionary heritage, right? It's not in my genes. So we have this

incredible model and the model includes not only what things look like and feel like, but where

they are relative to each other and how they behave. I’ve never picked up this water bottle

before, but I know that if I put my hand on that blue thing and I turn it, it'll probably make a

funny little sound as the little plastic things detach and then it’ll rotate and it’ll rotate a

certain way and it’ll come off. How do I know that? Because I have this model in my head.

So the essence of intelligence is our ability to learn a model, and the more sophisticated our

model is, the smarter we are. Not that there is a single intelligence, because you can know

a lot about things that I don't know, and I know about things you don't know.

And we can both be very smart, but we both learned a model of the world through interacting with it.

So that is the essence of intelligence. Then we can ask ourselves, what are the mechanisms in the

brain that allow us to do that? And what are the mechanisms of learning? Not just the neural

mechanisms: what is the general process by which we learn a model? That was a big insight for us.

How do you actually learn this stuff? It turns out

you have to learn it through movement. You can't learn it just by looking at things; we

learn through movement. You build up this model by observing things and

touching them and moving them and walking around the world and so on. So either you move or the

thing moves somehow. Yeah. You obviously can learn some things just by reading a book, something like that.

But think about it: if I were to say, oh, here's a new house, I want you to learn it,

what do you do? You have to walk from room to room. You have to open the doors,

look around, see what's on the left, what's on the right. As you do this, you're building a model in

your head; that's just what you're doing. You can't just sit there and say, I'm going to grok

the house. No. Nor can you do it by just sitting down and reading some

description of it, right? Yeah. You literally physically interact. The same with, like, a smartphone.

If I’m going to learn a new app, I touch it and I move things around. I see what happens when I,

when I do things with it. So that’s the basic way we learn in the world. And by the way,

when you say model, you mean something that can be used for prediction in the future.

It’s used for prediction and for behavior and planning. Right. And does a pretty good job

doing so. Yeah. Here’s the way to think about the model. A lot of people get hung up on this. So

you can imagine an architect making a model of a house, right? So there’s a physical model that’s

small. And why do they do that? Well, we do that because you can imagine what it would look like

from different angles. Okay, look from here, look from there. And you can also say, well,

how far is it from the garage to the swimming pool, or something like that. You

can imagine looking at this and say, what would be the view from this location? So we build

these physical models to let you imagine the future and imagine behaviors. Now we can take

that same model and put it in a computer. Today, they build models of houses in a

computer, and they do that using, we'll come back to this term in a moment,

reference frames. Basically, you assign a reference frame for the house, and you assign

the different parts of the house to different locations. And then the computer can generate

an image and say, okay, this is what it looks like from this direction. The brain is doing something

remarkably similar to this. Surprisingly, it's using reference frames. It's building these

models; it's similar to a model on a computer, which has the same benefits as building a physical model.

It allows me to say, what would this thing look like if it was in this orientation? What would

likely happen if I push this button? I’ve never pushed this button before, or how would I accomplish

something? I want to, I want to convey a new idea I’ve learned. How would I do that? I can imagine

in my head, well, I could talk about it. I could write a book. I could do some podcasts. I could,

you know, maybe tell my neighbor, you know, and I can imagine the outcomes of all these things

before I do any of them. That’s what the model lets you do. It lets us plan the future and

imagine the consequences of our actions. Prediction, you asked about prediction. Prediction

is not the goal of the model. Prediction is an inherent property of it, and it’s how the model

corrects itself. So prediction is fundamental to intelligence. It’s fundamental to building a model,

and the model’s intelligent. And let me go back and be very precise about this. Prediction,

you can think of prediction in two ways. One is like, hey, what would happen if I did this? That's a

type of prediction. That's a key part of intelligence. But there's also prediction like, oh,

what's this water bottle going to feel like when I pick it up, you know? And that doesn't seem very

intelligent. But one way to think about prediction is it’s a way for us to learn where our model is

wrong. So if I picked up this water bottle and it felt hot, I’d be very surprised. Or if I picked

it up and it was very light, I’d be surprised. Or if I turned this top and I had to turn it the other

way, I’d be surprised. And so all those might have a prediction like, okay, I’m going to do it. I’ll

drink some water. I’m okay. Okay, I do this. There it is. I feel opening, right? What if I had to turn

it the other way? Or what if it’s split in two? Then I say, oh my gosh, I misunderstood this. I

didn’t have the right model of this thing. My attention would be drawn to it. I’d be looking at

it going, well, how the hell did that happen? Why did it open up that way? And I would update my

model by doing it. Just by looking at it and playing around with that update and say, this is

a new type of water bottle. So you’re talking about sort of complicated things like a water bottle,
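Here is a minimal sketch of that correct-by-surprise loop, under the assumption that a model can be caricatured as a mapping from an object and an action to an expected sensation; all entries are invented.

```python
# Prediction as the model's error signal: compare what was predicted
# to what actually happened; on a mismatch (surprise), attention goes
# to the object and the model is rewritten with the new observation.

model = {("water bottle", "turn cap"): "clicks and comes off"}

def act_and_learn(model, obj, action, actual):
    """Return True if the prediction failed and the model was updated."""
    predicted = model.get((obj, action))
    if predicted == actual:
        return False                     # prediction confirmed
    model[(obj, action)] = actual        # surprise: update the model
    return True

surprised = act_and_learn(model, "water bottle", "turn cap",
                          "splits in two")
print(surprised, model)  # True -> the model was wrong and got rewritten
```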

So you're talking about complicated things like a water bottle, but this also applies to just basic vision, just seeing things. It's almost like a

precondition of perceiving the world is predicting it. So everything that you see

is first passed through your prediction. Everything you see and feel. In fact,

this was the insight I had back in the early 80s, and I know other people have reached the same idea:

every sensory input you get, not just vision, but touch and hearing, you have an

expectation about it and a prediction. Sometimes you can predict very accurately. Sometimes you

can't. I can't predict what the next word out of your mouth is going to be. But as you start talking,

I’ll get better and better predictions. And if you talk about some topics, I’d be very surprised.

So I have this sort of background prediction that’s going on all the time for all of my senses.

Again, the way I think about that is this is how we learn. It’s more about how we learn.

It’s a test of our understanding. Our predictions are a test. Is this really a water bottle? If it

is, I shouldn’t see a little finger sticking out the side. And if I saw a little finger sticking

out, I was like, oh, what the hell’s going on? That’s not normal. I mean, that’s fascinating

that… Let me linger on this for a second. It really honestly feels that prediction is

fundamental to everything, to the way our mind operates, to intelligence. So it’s just a different

way to see intelligence, which is like everything starts with a prediction. And prediction requires a

model. You can’t predict something unless you have a model of it. Right. But the action is

prediction. So the thing the model does is prediction. But it also… Yeah. But you can

then extend it to things like, oh, what would happen if I took this today? I went and did this.

What would be likely? Or how… You can extend prediction to like, oh, I want to get a promotion

at work. What action should I take? And you can say, if I did this, I predict what might happen.

If I spoke to someone, I predict what might happen. So it’s not just low level predictions.

Yeah. It’s all predictions. It’s all predictions. It’s like this black box so you can ask basically

any question, low level or high level. So we started off with that observation. It’s

this nonstop prediction. And I write about this in the book. And then we asked, how do neurons

actually make predictions physically? Like what does the neuron do when it makes a prediction?

Or the neural tissue does when it makes a prediction. And then we asked, what are the

mechanisms by how we build a model that allows you to make predictions? So we started with prediction

as sort of the fundamental research agenda, in some sense. We said, well, if we understand how

the brain makes predictions, we'll understand how it builds these models and how it learns.

And that’s the core of intelligence. So it was the key that got us in the door

to say, that is our research agenda. Understand predictions.

So in this whole process, where does intelligence originate, would you say?

If we look at things that are much less intelligent than humans, and you start to build

up a human through the process of evolution, where does this magic thing appear, a prediction

model, or a model that's able to predict, that starts to look a lot more like intelligence?

Is there a place where it emerges? Richard Dawkins wrote an introduction to your book, an excellent

introduction. It puts a lot of things into context, and it's funny just looking

at the parallels between your book and Darwin's Origin of Species. Darwin wrote about the origin

of species. So what is the origin of intelligence?

Well, we have a theory about it and it’s just that, it’s a theory. The theory goes as follows.

As soon as living things started to move, they're not just floating in the sea, they're not just a

plant grounded someplace. As soon as they started to move, there was an advantage to

moving intelligently, to moving in certain ways. And there’s some very simple things you can do,

you know, bacteria or single-cell organisms can move towards the source of a gradient of

food or something like that. But an animal that might know where it is and know where it's been

and how to get back to that place, or an animal that might say, oh, there was a source of food

someplace, how do I get to it? Or there was danger, how do I avoid it? There was a mate, how

do I get to them? There was a big evolutionary advantage to that. So early on, there was a

pressure to start understanding your environment, like where am I and where have I been? And what

happened in those different places? So we still have this neural mechanism in our brains. In the

mammals, it’s in the hippocampus and entorhinal cortex, these are older parts of the brain.

And these are very well studied. We build a map of our environment. So these neurons in

these parts of the brain know where I am in this room, and where the door was and things like that.

So a lot of other mammals have this?

All mammals have this, right? And almost any animal that knows where it is and can get around

must have some mapping system, must have some way of saying, I've learned a map of my environment.

I have hummingbirds in my backyard, and they go to the same places all the time. They must know

where they are. They're not just randomly flying around;

they know particular flowers they come back to. So we all have this. And it turns out it's

very tricky to get neurons to do this, to build a map of an environment. So we now know,

from these famous studies that are still very active, about place cells and grid cells and

other types of cells in the older parts of the brain, and how they build these maps of the world.
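As a toy sketch of the kind of environmental map being described, locations tied to what was encountered there: this is only a cartoon, since real place cells and grid cells work nothing like a lookup table, and every entry below is invented.

```python
# Cartoon of an animal's map: remembered locations paired with what
# was encountered there, so it can answer "where was the food?"

environment_map = {
    (2, 7): "flower with nectar",
    (5, 1): "nest",
    (9, 4): "predator seen here",
}

def places_with(word, env):
    """Recall every remembered location whose note mentions a word."""
    return [loc for loc, note in env.items() if word in note]

print(places_with("nectar", environment_map))  # [(2, 7)]
```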

It’s really clever. It’s obviously been under a lot of evolutionary pressure over a long period

of time to get good at this. So animals now know where they are. What we think has happened,

and there’s a lot of evidence to suggest this, is that that mechanism we learned to map,

like a space, was repackaged. The same type of neurons was repackaged into a more compact form.

And that became the cortical column. And it was in some sense, genericized, if that’s a word. It

was turned into a very specific thing about learning maps of environments to learning maps

of anything, learning a model of anything, not just your space, but coffee cups and so on. And

it got sort of repackaged into a more compact version, a more universal version,

and then replicated. So the reason we’re so flexible is we have a very generic version of

this mapping algorithm, and we have 150,000 copies of it. Sounds a lot like the progress

of deep learning. How so? So take neural networks that seem to work well for a specific task,

compress them, and multiply them by a lot. And then you stack them on top of each other. It's like the

story of transformers in natural language processing. Yeah. But in deep learning networks,

they end up, you’re replicating an element, but you still need the entire network to do anything.

Right. Here, what’s going on, each individual element is a complete learning system. This is

why I can take a human brain, cut it in half, and it still works. It’s the same thing.

It’s pretty amazing. It’s fundamentally distributed. It’s fundamentally distributed,

complete modeling systems. But that’s our story we like to tell. I would guess it’s likely largely

right. But there’s a lot of evidence supporting that story, this evolutionary story. The thing

which brought me to this idea is that the human brain got big very quickly. So that led to the

proposal a long time ago that, well, there’s this common element just instead of creating

new things, it just replicated something. We also are extremely flexible. We can learn things that

we had no history with. And that tells us that the learning algorithm is very generic,

kind of universal, because it doesn't assume any prior knowledge about what it's learning.

And so you combine those things together and you say, okay, well, how did that come about? Where

did that universal algorithm come from? It had to come from something that wasn’t universal. It

came from something that was more specific. So anyway, this led to our hypothesis that

you would find grid cells and place cell equivalents in the neocortex. And when we

first published our first papers on this theory, we didn’t know of evidence for that. It turns out

there was some, but we didn’t know about it. So then we became aware of evidence for grid

cells in parts of the neocortex. And then now there’s been new evidence coming out. There’s some

interesting papers that came out just January of this year. So one of our predictions was if this

evolutionary hypothesis is correct, we would see grid cell place cell equivalents, cells that work

like them, throughout every column in the neocortex. And that's starting to be seen. What does it mean,

and why is it important, that they're present? Because it tells us, well, we're asking about the

evolutionary origin of intelligence, right? So our theory is that these columns in the cortex

are working on the same principles, they’re modeling systems. And it’s hard to imagine how

neurons do this. And so we said, hey, it’s really hard to imagine how neurons could learn these

models of things. We can talk about the details of that if you want. But there's this other part

of the brain that we know learns models of environments. So could the mechanism that learns

to model this room be used to learn to model the water bottle? Is it the same mechanism? So we said

it's much more likely the brain is using the same mechanism, in which case it would have these equivalent

cell types. So the whole theory is built on the idea that these columns have

reference frames and they’re learning these models and these grid cells create these reference frames.

So it’s basically the major, in some sense, the major predictive part of this theory is that we

will find these equivalent mechanisms in each column in the neocortex, which tells us that

that’s what they’re doing. They’re learning these sensory motor models of the world. So we’re pretty

confident that would happen, but now we're seeing the evidence. So the evolutionary process, nature,

does a lot of copy and paste and sees what happens. Yeah, there's no direction to it. But it

just found out, like, hey, if I took these elements and made more of them, what happens? And let's hook

them up to the eyes, and let's hook them up to the ears. And that seems to work pretty well for us. Again,

just to take a quick step back to our conversation of collective intelligence.

Do you sometimes see that as just another copy-and-paste aspect: copying and pasting

these brains across humans, making a lot of them, and then creating social structures that

almost operate as a single brain? I wouldn't have said that, but when you said it, it sounded pretty good.

So to you, the brain is its own thing.

I mean, our goal is to understand how the neocortex works. We can argue how essential

that is to understand the human brain because it’s not the entire human brain. You can argue

how essential that is to understanding human intelligence. You can argue how essential this

is to sort of communal intelligence. Our goal was to understand the neocortex.

Yeah. So what is the neocortex and where does it fit

in the various aspects of what the brain does? Like how important is it to you?

Well, obviously, as I mentioned in the beginning, it's about 70 to 75% of the volume of

the human brain. So it dominates our brain in terms of size. Not in terms of number of neurons,

but in terms of size.

Size isn’t everything, Jeff.

I know, but it’s not that. We know that all high level vision,

hearing, and touch happens in the neocortex. We know that all language occurs and is understood

in the neocortex, whether that’s spoken language, written language, sign language,

whether it’s language of mathematics, language of physics, music. We know that all high level

planning and thinking occurs in the neocortex. If I were to say, what part of your brain designed

a computer and understands programming and creates music? It’s all the neocortex.

So then that’s an undeniable fact. But then there’s other parts of our brain are important too,

right? Our emotional states, our body regulating our body. So the way I like to look at it is,

can you understand the neocortex without the rest of the brain? And some people say you can’t,

and I think absolutely you can. It’s not that they’re not interacting, but you can understand.

Can you understand the neocortex without understanding the emotions of fear? Yes,

you can. You can understand how the system works. It’s just a modeling system. I make the analogy

in the book that it’s like a map of the world, and how that map is used depends on who’s using it.

So while the map of the world is in our neocortex, how we manifest as humans depends on the rest of our

brain. What are our motivations? What are my desires? Am I a nice guy or not a nice guy?

Am I a cheater or not a cheater? How important different things are in my life?

But the neocortex can be understood on its own. And I say that as a neuroscientist,

I know there’s all these interactions, and I don’t want to say I don’t know them and we

don’t think about them. But from a layperson’s point of view, you can say it’s a modeling system.

I don’t generally think too much about the communal aspect of intelligence, which you brought up a

number of times already. So that’s not really been my concern.

I just wonder if there’s a continuum from the origin of the universe, like

this pockets of complexities that form living organisms. I wonder if we’re just,

if you look at humans, we feel like we’re at the top. And I wonder if there’s like just,

I wonder if there’s like just where everybody probably every living type pocket of complexity

probably thinks they’re the, pardon the French, they’re the shit. They’re at the top of the

pyramid. Well, if they’re thinking. Well, then what is thinking? In this sense,

the whole point is in their sense of the world, their sense is that they’re at the top of it.

I think what is a turtle, but you’re, you’re, you’re bringing up, you know,

the problems of complexity and complexity theory are, you know, it’s a huge,

interesting problem in science. Um, and you know, I think we’ve made surprisingly little progress

and understanding complex systems in general. Um, and so, you know, the Santa Fe Institute was

founded to study this and even the scientists there will say, it’s really hard. We haven’t

really been able to figure out exactly, you know, that science hasn’t really congealed yet. We’re

still trying to figure out the basic elements of that science. Uh, what, you know, where does

complexity come from and what is it and how you define it, whether it’s DNA creating bodies or

phenotypes or it’s individuals creating societies or ants and, you know, markets and so on. It’s,

it’s a very complex thing. I’m not a complexity theorist person, right? Um, and I, I think you

should ask, well, the brain itself is a complex system. So can we understand that? Um, I think

we’ve made a lot of progress understanding how the brain works. So, uh, but I haven’t

brought it out to like, oh, well, where are we on the complexity spectrum? You know, it’s like,

um, it’s a great question. I’d prefer for that answer to be we’re not special. It seems like

if we’re honest, most likely we’re not special. So if there is a spectrum or probably not in some

kind of significant place, there’s one thing we could say that we are special. And again,

only here on earth, I’m not saying is that if we think about knowledge, what we know,

um, we clearly human brains have, um, the only brains that have a certain types of knowledge.

We’re the only brains on this earth to understand, uh, what the earth is, how old it is,

that the universe is a picture as a whole with the only organisms understand DNA and

the origins of, you know, of species. Uh, no other species on, on this planet has that knowledge.

So if we think about, I like to think about, you know, one of the endeavors of humanity is to

understand the universe as much as we can. Um, I think our species is further along in that

undeniably, um, whether our theories are right or wrong, we can debate, but at least we have

theories. You know, we, we know that what the sun is and how its fusion is and how what black holes

are and, you know, we know general theory of relativity and no other animal has any of this

knowledge. So in that sense that we’re special, uh, are we special in terms of the hierarchy of

complexity in the universe? Probably not. Can we look at a neuron? Yeah. You say that prediction

happens in the neuron. What does that mean? So the neuron traditionally is seen as the

basic element of the brain. As I mentioned earlier, prediction was our research agenda.

Yeah. We said, okay, how does the brain make a prediction? Like, I'm about to grab this water

bottle, and my brain is predicting what I'm going to feel on all the parts of my fingers. If I

felt something really odd on any part here, I'd notice it. So my brain is predicting what it's

going to feel as I grab this thing. How does that manifest itself in neural

tissue? We've got brains made of neurons, and there are chemicals and there are

spikes and connections; where is the prediction going on? One argument could be

that, well, when I’m predicting something, um, a neuron must be firing in advance. It’s like, okay,

this neuron represents what you’re going to feel and it’s firing. It’s sending a spike.

And certainly that happens to some extent, but our predictions are so ubiquitous

that we’re making so many of them, which we’re totally unaware of just the vast majority of me

have no idea that you’re doing this. Um, that it, there wasn’t really, we were trying to figure,

how could this be? Where, where are these, where are these happening? Right. And I won’t walk you

through the whole story unless you insist upon it. But we came to the realization that most of your

predictions are occurring inside individual neurons, especially the most common type,

the pyramidal cells. There's a property of neurons: most people know

that a neuron is a cell, and it has this spike called an action potential,

and it sends information.

and it sends information. But we now know that there’s these spikes internal to the neuron,

they’re called dendritic spikes. They travel along the branches of the neuron and they don’t leave

the neuron. They’re just internal only. There’s far more dendritic spikes than there are action

potentials, far more. They’re happening all the time. And what we came to understand that those

dendritic spikes, the ones that are occurring are actually a form of prediction. They’re telling the

neuron, the neuron is saying, I expect that I might become active shortly. And that internal,

so the internal spike is a way of saying, you’re going to, you might be generating external spikes

soon. I predicted you’re going to become active. And, and we’ve, we’ve, we wrote a paper in 2016

which explained how this manifests itself in neural tissue and how it is that this all works

together. But the vast majority, we think it’s, there’s a lot of evidence supporting it. So we,

that’s where we think that most of these predictions are internal. That’s why you can’t

be, they’re internal to the neuron, you can’t perceive them.

Well, from understanding the prediction mechanism of a single neuron, do you think there are deep

insights to be gained about the prediction capabilities of the mini brains,

and then the bigger brain, the brain as a whole?

Oh yeah. So having a prediction inside an individual neuron, on its own, is not that useful.

So what? The way it manifests itself in neural tissue is that a neuron's

spike is a very singular type of event, and if a neuron is predicting that it's going to be active, it

emits its spike a little bit sooner, just a few milliseconds sooner, than it otherwise would

have. The analogy I give in the book is a sprinter on the starting blocks in

a race. If someone says, get ready, set, you get up and you're ready to go, and when

the race starts, you get a slightly earlier start. That ready-set is like

the prediction, and the neuron is ready to go quicker. And what happens is, when you have a whole

bunch of neurons together and they're all getting these inputs, the ones that are in the predictive

state, the ones anticipating becoming active, fire sooner if they do become active, and they

disable everything else. And it leads to different representations in the brain. So

the prediction occurs within the neuron, but it's not isolated to the neuron:

the network behavior changes. Under different predictions, different inputs

get different representations. What I predict is going to be different under different

contexts, and what my input represents will be different under different contexts. So this

is a key to how the whole theory works.
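Here is a loose sketch of that dynamic: predicted neurons fire first and suppress the rest, so the same input yields different representations in different contexts. This is far simpler than the 2016 paper's actual model, and the neuron names and contexts are invented.

```python
# "Fire first, inhibit the rest": neurons depolarized by dendritic
# spikes (the predictive state) spike a few milliseconds sooner when
# their input arrives, suppressing the non-predicted neurons.

def active_representation(driven, predicted):
    """driven: neurons receiving the current input; predicted: neurons
    in the predictive state. Predicted winners silence the others."""
    winners = driven & predicted
    if winners:
        return winners   # sparse, context-specific representation
    return driven        # nothing was predicted: surprise, all fire

same_input = {"n1", "n2", "n3", "n4"}
print(active_representation(same_input, {"n1"}))  # context A -> {'n1'}
print(active_representation(same_input, {"n3"}))  # context B -> {'n3'}
```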

So with the thousand brains theory, if you were to count the number of brains, how would you do it?

The thousand brains theory says that basically every cortical column in your cortex is a complete modeling system.

And when I ask, where do I have a model of something like a coffee cup, it's not in one of

those models; it's in thousands of those models. There are thousands of models of coffee cups. That's

the thousand brains part. Then there's a voting mechanism, which is

the thing you're conscious of, which leads to your singular perception. That's why

you perceive something. So that's the thousand brains theory. The details of how we got to that

theory are complicated; it wasn't that we just thought of it one day. One of the details we

had to ask about was, how does a model make predictions? We've talked about these predictive neurons;

that's part of this theory. It sounds like a detail, but it was like a crack in the

door. How are we going to figure out what these neurons are doing?

What is going on here? So we looked at prediction, because we know it's ubiquitous.

We know that every part of the cortex is making predictions, so whatever the predictive

system is, it's going to be everywhere. We know there are a gazillion predictions happening at once.

So this is where we could start teasing things apart, asking how

neurons could be making these predictions. And that built up to what we now have, this thousand

brains theory, which is complex. I can state it simply, but we didn't just

think of it; we had to get there step by step. It took years to get there.

And where do reference frames fit in? So, yeah.

Okay. So again, a reference frame, I mentioned earlier about the model of a house. And I said,

if you’re going to build a model of a house in a computer, they have a reference frame. And you

can think of reference frame like Cartesian coordinates, like X, Y, and Z axes. So I could

say, oh, I’m going to design a house. I can say, well, the front door is at this location, X, Y,

Z, and the roof is at this location, X, Y, Z, and so on. That’s a type of reference frame.

So it turns out for you to make a prediction, and I walk you through the thought experiment in the

book where I was predicting what my finger was going to feel when I touched a coffee cup.

It was a ceramic coffee cup, but this one will do. What I realized is that to make a prediction

of what my finger is going to feel, and it's going to feel

different if I touch the opening or this thing on the bottom, the cortex needs to

know where the finger is, the tip of the finger, relative to the coffee cup, and exactly relative

to the coffee cup. To do that, it has to have a reference frame for the coffee cup. It has to

have a way of representing the location of my finger relative to the coffee cup. And then we realized,

of course, every part of your skin has to have a reference frame relative to the things it touches.

And then we did the same thing with vision. So the idea that a reference frame is necessary

to make a prediction when you’re touching something or when you’re seeing something

and you’re moving your eyes or you’re moving your fingers, it’s just a requirement

to predict. If I'm going to make a prediction, I have to know where it is I'm

looking at or touching. So then we said, well, how do neurons make reference frames? It's not obvious.

X, Y, Z coordinates don’t exist in the brain. It’s just not the way it works. So that’s when we

looked at the older parts of the brain, the hippocampus and the entorhinal cortex, where we knew

that in that part of the brain, there’s a reference frame for a room or a reference frame for an

environment. Remember, I talked earlier about how you could make a map of this room. So we said,

oh, they are implementing reference frames there. So we knew that reference frames needed to exist

in every cortical column. And that was a deductive thing; we just deduced it. It has to exist.
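Here is a hedged sketch of the prediction step this deduction requires: express the finger's location in the object's own reference frame, then look up whatever feature the model stores there. The cup's features and coordinates are invented.

```python
# Prediction via a reference frame: the model stores features at
# locations in the CUP's coordinates, so predicting what the finger
# will feel only requires knowing where the finger is on the cup.

cup_model = {
    (0, 5): "smooth ceramic side",
    (0, 9): "rounded rim",
    (3, 4): "handle curve",
}

def predict_sensation(model, location_on_object):
    """location_on_object is in the cup's reference frame, not in
    body- or room-centered coordinates; that is the crucial part."""
    return model.get(location_on_object, "no prediction: update model")

print(predict_sensation(cup_model, (0, 9)))  # "rounded rim"
```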

So you take the old mammalian ability to know where you are in a particular space

and you start applying that to higher and higher levels.

Yeah. First you apply it to like where your finger is. So here’s what I think about it.

The old part of the brain says, where’s my body in this room? The new part of the brain says,

where’s my finger relative to this object? Where is a section of my retina relative to

this object? I’m looking at one little corner. Where is that relative to this patch of my retina?

And then we take the same thing and apply it to concepts, mathematics, physics, humanity,

whatever you want to think about. And eventually you’re pondering your own mortality.

Well, whatever. But the point is when we think about the world, when we have knowledge about

the world, how is that knowledge organized, Lex? Where is it in your head? The answer is it’s in

reference frames. So the way I learned the structure of this water bottle is where the

features are relative to each other. When I think about history or democracy or mathematics,

the same basic underlying structure is happening: there are reference frames to which

the knowledge is assigned. So in the book, I go through examples like mathematics

and language and politics. But the evidence is very clear in the neuroscience. The same mechanism

that we use to model this coffee cup, we’re going to use to model high level thoughts.

Your demise, the demise of humanity, whatever you want to think about.

It’s interesting to think about how different are the representations of those higher dimensional

concepts, higher level concepts, how different the representation there is in terms of reference

frames versus spatial. But the interesting thing, it’s a different application, but it’s the exact

same mechanism. But isn’t there some aspect to higher level concepts that they seem to be

hierarchical? Like they just seem to integrate a lot of information into them. So is our physical

objects. So take this water bottle. I’m not particular to this brand, but this is a Fiji

water bottle and it has a logo on it. I use this example in my book, our company’s coffee cup has

a logo on it. But this object is hierarchical. It’s got like a cylinder and a cap, but then it

has this logo on it and the logo has a word, the word has letters, the letters have different

features. And so I don’t have to remember, I don’t have to think about this. So I say,

oh, there’s a Fiji logo on this water bottle. I don’t have to go through and say, oh, what is the

Fiji logo? It’s the F and I and the J and I, and there’s a hibiscus flower. And, oh, it has the

statement on it. I don’t have to do that. I just incorporate all of that in some sort of hierarchical

representation. I say, put this logo on this water bottle. And then the logo has a word

and the word has letters, all hierarchical. All that stuff is in there. It's amazing that the

brain instantly just does all that: the idea that there's water, that it's liquid, the idea that you

can drink it when you're thirsty, the idea that there are brands, all of that

information is instantly built into the whole thing once you perceive it. So I wanted to

get back to your point about hierarchical representation. The world itself is hierarchical,

right? And I can take this microphone in front of me. I know inside there’s going to be some

electronics. I know there’s going to be some wires and I know there’s going to be a little

diaphragm that moves back and forth. I don’t see that, but I know it. So everything in the world

is hierarchical. You just go into a room. It’s composed of other components. The kitchen has a

refrigerator. The refrigerator has a door. The door has a hinge. The hinge has screws and pin.

So anyway, the modeling system that exists in every cortical column learns the hierarchical

structure of objects. It's a very sophisticated modeling system in this grain of rice. It's hard

to imagine, but this grain of rice can do really sophisticated things. It's got 100,000 neurons in

it. It's very sophisticated.

So that same mechanism that can model a water bottle or a coffee cup

can model conceptual objects as well. That's the beauty of the discovery that

Vernon Mountcastle made many, many years ago: that there's a single cortical algorithm

underlying everything we're doing. So common sense concepts and higher

level concepts are all represented in the same way?

They're built on the same mechanisms, yeah. It's a little bit like computers. All computers are

universal Turing machines. Even the little teeny one that’s in my toaster and the big one that’s

running some cloud server someplace. They’re all running on the same principle. They can

be applied to different things. So the brain is all built on the same principle. It's all about

learning these structured models using movement and reference frames. And it can be applied to

something as simple as a water bottle and a coffee cup. And it can be applied to thinking

what’s the future of humanity and why do you have a hedgehog on your desk? I don’t know.

Nobody knows. Well, I think it’s a hedgehog. That’s right. It’s a hedgehog in the fog.

It’s a Russian reference. Does it give you any inclination or hope about how difficult

it is to engineer common sense reasoning? So how complicated is this whole process?

So looking at the brain, is this a marvel of engineering, or is it pretty dumb stuff

stacked on top of each other over and over? Can it be both? Can it be both, right?

I don’t know if it can be both because if it’s an incredible engineering job, that means it’s

so evolution did a lot of work. Yeah, but then it just copied that.

Yeah. Right. So as I said earlier, figuring out how to model something like a space is really hard

and evolution had to go through a lot of trick. And these cells I was talking about,

these grid cells and place cells, they’re really complicated. This is not simple stuff.

This neural tissue works on these really unexpected, weird mechanisms.

But it did it. It figured it out. But now you could just make lots of copies of it.

But then finding, yeah, so it’s a very interesting idea that’s a lot of copies

of a basic mini brain. But the question is how difficult it is to find that mini brain

that you can copy and paste effectively. Today, we know enough to build this.

I’m sitting here with, I know the steps we have to go. There’s still some engineering problems

to solve, but we know enough. And this is not like, oh, this is an interesting idea. We have

to go think about it for another few decades. No, we actually understand it pretty well in details.

So not all the details, but most of them. So it’s complicated, but it is an engineering problem.

So in my company, we are working on that. We basically have a roadmap of how to do this.

It’s not going to take decades. It’s a matter of a few years optimistically,

but I think that’s possible. It’s, you know, complex things. If you understand them,

you can build them. So in which domain do you think it’s best to build them?

Are we talking about robotics, like entities that operate in the physical world that are

able to interact with that world? Are we talking about entities that operate in the digital world?

Are we talking about something more specific, like what's done in the machine learning

community, where you look at natural language or computer vision? Where do you think is easiest?

It's the first two more than the third one, I would say.

Again, let’s just use computers as an analogy. The pioneers in computing, people like John

Van Norman and Alan Turing, they created this thing, you know, we now call the universal

Turing machine, which is a computer, right? Did they know how it was going to be applied?

Where it was going to be used? Could they envision any of the future? No. They just said,

this is like a really interesting computational idea about algorithms and how you can implement

them in a machine. And we’re doing something similar to that today. Like we are building this

sort of universal learning principle that can be applied to many, many different things.

But the robotics piece of that, the interactive…

Okay. All right. Let’s be just specific. You can think of this cortical column as

what we call a sensory motor learning system. It has the idea that there’s a sensor

and then it’s moving. That sensor can be physical. It could be like my finger

and it’s moving in the world. It could be like my eye and it’s physically moving.

It can also be virtual. An example would be, I could have a system that

lives on the internet that samples information on the internet and moves by

following links. That’s a sensory motor system. Something that echoes the process of a finger

moving along a cortical… But in a very, very loose sense. It’s like,

again, learning is inherently about discovering the structure of the world, and to discover the

structure of the world, you have to move through the world. Even if it's a virtual world, even if

it's a conceptual world, you have to move through it. It doesn't exist in one… It has some structure

to it. So, here are a couple of predictions getting at what you're talking about.

In humans, the same algorithm does robotics. It moves my arms, my eyes, my body.

And so, in the future, to me, robotics and AI will merge. They’re not going to be separate fields

because the algorithms for really controlling robots are going to be the same algorithms we

have in our brain, these sensory motor algorithms. Today, we’re not there, but I think that’s going

to happen. But not all AI systems will have to be robotics. You can have systems that have very

different types of embodiments. Some will have physical movements, some will not have physical

movements. It’s a very generic learning system. Again, it’s like computers. The Turing machine,

it doesn’t say how it’s supposed to be implemented, it doesn’t tell you how big it is,

it doesn’t tell you what you can apply it to, but it’s a computational principle.

The cortical column equivalent is a computational principle about learning. It’s about how you

learn and it can be applied to a gazillion things. I think this impact of AI is going to be as large,

if not larger, than computing has been in the last century, by far, because it’s getting at

a fundamental thing. It’s not a vision system or a hearing system. It is a learning system. It’s a fundamental principle: how you learn the structure

in the world, how you can gain knowledge and be intelligent. That’s what the thousand brains theory says is going on. We have a particular implementation in our head, but it doesn’t have to be like that

at all. Do you think there’s going to be some kind of impact? Okay, let me ask it another way.

What do increasingly intelligent AI systems do with us humans, in the following sense: how hard is

the human in the loop problem? How hard is it to interact? The finger on the coffee cup equivalent

of having a conversation with a human being. How hard is it to fit into our little human world?

I think it’s a lot of engineering problems. I don’t think it’s a fundamental problem.

I could ask you the same question. How hard is it for computers to fit into a human world?

Right. That’s essentially what I’m asking. How elitist are we as humans? We try to keep out

systems. I don’t know. I’m not sure that’s the right question. Let’s look at computers as an

analogy. Computers are a million times faster than us. They do things we can’t understand.

Most people have no idea what’s going on when they use computers. How do we integrate them

in our society? Well, we don’t think of them as their own entity. They’re not living things.

We don’t afford them rights. We rely on them. Our survival as seven billion people or something

like that is relying on computers now. Don’t you think that’s a fundamental problem

that we see them as something we don’t give rights to?

Computers? Yeah, computers. Robots,

computers, intelligent systems. It feels like for them to operate successfully,

they would need to have a lot of the elements that we would start having to think about.

Should this entity have rights? I don’t think so. I think

it’s tempting to think that way. First of all, hardly anyone thinks that for computers today.

No one says, oh, this thing needs rights, I shouldn’t be able to turn it off, or, if I throw it in the trash can and hit it with a sledgehammer, that I might be committing a criminal act. No one thinks that.

Now we think about intelligent machines, which is where you’re going.

All of a sudden, you’re like, well, now we can’t do that. I think the basic problem we have here

is that people think intelligent machines will be like us. They’re going to have the same emotions

as we do, the same feelings as we do. What if I can build an intelligent machine that couldn’t care less about whether it was on or off or destroyed or not? It just doesn’t care. It’s

just like a map. It’s just a modeling system. There are no desires to live. Nothing.

Is it possible to create a system that can model the world deeply and not care

about whether it lives or dies? Absolutely. No question about it.

To me, that’s not 100% obvious. It’s obvious to me. We can debate it if we want.

Where does your desire to live come from? It’s an old evolutionary design. We could argue,

does it really matter if we live or not? Objectively, no. We’re all going to die eventually.

Evolution makes us want to live. Evolution makes us want to fight to live. Evolution makes us want

to care and love one another and to care for our children and our relatives and our family and so

on. Those are all good things. But they come about not because we’re smart; they come about because we’re animals that evolved. The hummingbird in my backyard cares about its offspring. Every living thing

in some sense cares about surviving. When we talk about creating intelligent machines,

we’re not creating life. We’re not creating evolving creatures. We’re not creating living

things. We’re just creating a machine that can learn really sophisticated stuff. That machine,

it may even be able to talk to us. It’s not going to have a desire to live unless somehow we put it

into that system. Well, there’s learning, right? The thing is… But you don’t learn to want to

live. It’s built into you. It’s part of your DNA. People like Ernest Becker argue there’s the fact of the finiteness of life; the way we think about it is something we learned,

perhaps. Okay. Yeah. Some people decide they don’t want to live. But the desire to live is built into the DNA, right? But I think what I’m trying to get at is: in order to accomplish goals,

it’s useful to have the urgency of mortality. It’s what the Stoics talked about: meditating on your mortality. It might be a very useful thing to have the urgency of death, to realize that you are an entity that operates in this world and that eventually will no longer be a part of this world, and to conceive of yourself as a conscious entity. That might be very useful for a system that makes sense of the world. Otherwise,

you might get lazy. Well, okay. We’re going to build these machines, right? So we’re talking

about building AIs. But we’re building the equivalent of the cortical columns.

The neocortex. The neocortex. And the question is, where do they arrive at? Because we’re not

hard coding everything in. Well, if you build the neocortex equivalent,

it will not have any of these desires or emotional states. Now, you can argue that

that neocortex won’t be useful unless I give it some agency, unless I give it some desire,

unless I give it some motivation. Otherwise, it’ll just be lazy and do nothing, right?

You could argue that. But on its own, it’s not going to do those things. It’s just not going

to sit there and say, I understand the world. Therefore, I care to live. No, it’s not going

to do that. It’s just going to say, I understand the world. Why is that obvious to you? Do you think

it’s possible? Okay, let me ask it this way. Do you think it’s possible it will at least assign to

itself agency and perceive itself in this world as being a conscious entity as a useful way to

operate in the world and to make sense of the world? I think an intelligent machine can be

conscious, but that does not, again, imply any of these desires and goals that you’re worried about.

We can talk about what it means for a machine to be conscious.

By the way, not worry about, but get excited about. It’s not necessary that we should worry about it. I think there’s a legitimate question to ask: if you build this modeling system, what’s it going to model? What’s its desire? What’s its

goal? What are we applying it to? That’s an interesting question. It depends on the application; it’s not something inherent to the modeling system. It’s something

we apply to the modeling system in a particular way. If I wanted to make a really smart car,

it would have to know about driving and cars and what’s important in driving and cars.

It’s not going to figure that out on its own. It’s not going to sit there and say, I’ve understood the world and I’ve decided… No, no, we’re going to have to tell it. We’re going to have to say it. So imagine I make this car really smart. It learns about your driving habits. It learns

about the world. Is it one day going to wake up and say, you know what? I’m tired of driving

and doing what you want. I think I have better ideas about how to spend my time.

Okay. No, it’s not going to do that. Well, part of me is playing a little bit of devil’s advocate,

but part of me is also trying to think through this because I’ve studied cars quite a bit and

I studied pedestrians and cyclists quite a bit. And there’s part of me that thinks

that there needs to be more intelligence than we realize in order to drive successfully.

That game theory of human interaction seems to require some deep understanding of human nature

that, okay: when a pedestrian crosses the street, they look at a car usually,

and then they look away. There’s some sense in which they say, I believe that you’re not going

to murder me. You don’t have the guts to murder me. The little dance of pedestrian-car interaction says: I’m going to look away and put my life in your hands, because I think you’re human; you’re not going to kill me. And then the car, in order to successfully

operate in like Manhattan streets has to say, no, no, no, no. I am going to kill you like a little

bit. There’s a little bit of this weird inkling of mutual murder. It’s a dance, and somehow you successfully operate through that. Do you think we’re born with that? Did you learn that social

interaction? I think it might have a lot of the same elements that you’re talking about,

which is, we’re leveraging things we were born with and applying them in context. All right. I would have said that that kind of interaction is learned, because people in different cultures have different interactions like that. If you cross the street in different cities and

different parts of the world, they have different ways of interacting. I would say that’s learned.

And I would say an intelligent system can learn that too, but that does not make it human. An intelligent system can understand humans. It could understand that, just like I can study an animal and learn

something about that animal. I could study apes and learn something about their culture and so on.

I don’t have to be an ape to know that. I may not understand it completely, but I can understand something.

So an intelligent machine can model that. That’s just part of the world. It’s just part of the

interactions. The question we’re trying to get at, will the intelligent machine have its own personal

agency that’s beyond what we assign to it or its own personal goals or will it evolve and create

these things? My confidence comes from understanding the mechanisms I’m talking about creating.

This is not hand wavy stuff. It’s down in the details. I’m going to build it. And I know what

it’s going to look like, and I know how it’s going to behave. I know the kind of things

it could do and the kind of things it can’t do. Just like when I build a computer, I know it’s

not going to, on its own, decide to put another register inside of it. It can’t do that. No way.

No matter what your software does, it can’t add a register to the computer.

So in this way, when we build AI systems, we have to make choices about how we embed them.

So I talk about this in the book. An intelligent system is not just the neocortex

equivalent. You have to have that. But it has to have some kind of embodiment, physical or virtual.

It has to have some sort of goals. It has to have some sort of ideas about dangers,

about things it shouldn’t do. We build safeguards into systems. We have them in our bodies. We put them into cars. My car follows my directions until the moment it sees I’m about to hit something; then it ignores my directions and puts the brakes on. So we can build those things in.
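A minimal sketch of that kind of engineered-in override, in Python; the function name, the threshold, and the command encoding are invented for illustration, not taken from any real car system. The point is only that the safeguard is programmed in, not learned or chosen by the system:

    # Obey the driver's command unless a hard-coded safety rule fires.
    def control_step(driver_command: float, distance_to_obstacle_m: float) -> float:
        FULL_BRAKE = -1.0
        if distance_to_obstacle_m < 5.0:   # the engineered-in rule
            return FULL_BRAKE              # ignore the driver and brake
        return driver_command              # otherwise follow directions

    print(control_step(0.4, 30.0))  # 0.4  -> follows the driver
    print(control_step(0.4, 2.0))   # -1.0 -> overrides and brakes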

So that’s a very interesting problem, how to build those in. I think my differing opinion about the

risks of AI from most people is that people assume that somehow those things will appear automatically and evolve, that intelligence itself begets that stuff or requires it. But it doesn’t. Intelligence, the neocortex equivalent, doesn’t require this. The neocortex equivalent just says: I’m a learning system. Tell me what you want me to learn and ask me questions

and I’ll tell you the answers. Again, it’s like a map. A map has no intent about things, but you can use it to solve problems. Okay. So building, engineering the neocortex in itself is just creating an intelligent prediction system.

Modeling system. Sorry, modeling system. You can use it to then make predictions.

But you can also put it inside a thing that’s actually acting in this world.

You have to put it inside something. Again, think of the map analogy, right? A map on its own doesn’t

do anything. It’s just inert. It can learn, but it’s just inert. So we have to embed it somehow

in something to do something. So what’s your intuition here? You had a conversation with

Sam Harris recently where you had a bit of a disagreement, and you’re sticking on this point. Elon Musk and Stuart Russell kind of worry about existential threats of AI. What’s your intuition? Why, if we engineer increasingly intelligent, neocortex-type systems in the computer, why shouldn’t that be a thing that we…

It was interesting that you used the word intuition; Sam Harris used the word intuition too. And when he used that word, intuition, I immediately stopped and said, oh, that’s the crux of the problem. He’s using intuition. I’m not speaking about my intuition.

I’m speaking about something I understand, something I’m going to build, something I am

building, something I understand completely, or at least well enough to know… I’m not guessing: I know what this thing’s going to do. And I think most people who are worried have trouble

separating out… They don’t have the knowledge or the understanding about what is intelligence,

how’s it manifest in the brain, how’s it separate from these other functions in the brain.

And so they imagine it’s going to be human-like or animal-like. It’s going to have the same sort of

drives and emotions we have, but there’s no reason for that. That’s just because there’s an unknown.

If the unknown is like, oh my God, I don’t know what this is going to do. We have to be careful.

It could be like us, but really smarter. I’m saying, no, it won’t be like us. It’ll be really

smarter, but it won’t be like us at all. But I’m coming from that, not because I’m just guessing,

I’m not using intuition. I’m basing it on: okay, I understand how this thing works, this is what it does. Does that make sense to you? Okay. But to push back: so I also disagree with the intuitions that

Sam has, but I also disagree with what you just said. You know, what’s a good analogy? So

if you look at the Twitter algorithm in the early days, just recommender systems, you can understand

how recommender systems work. What you can’t understand in the early days is when you apply

that recommender system at scale to thousands and millions of people, how that can change societies.

Yeah. So the question is: yes, you’re just saying this is how an engineered neocortex works, but when you have a very useful, TikTok-type service that goes viral, when your neocortex-based system goes viral and millions of people start using it, can that destroy the world?

No. Well, first of all, let me step back. One thing I want to say is that AI is a dangerous technology. I’m not denying that. All technology is dangerous. Well, and AI maybe particularly so. Okay. So am I worried about it? Yeah, I’m totally worried about it.

The narrow component we’re talking about now is the existential risk of AI, right? Yeah. So I want to make that distinction, because I think AI can be applied poorly. It can be applied in ways that people don’t understand the consequences of. These are

all potentially very bad things, but they’re not the AI system creating this existential risk on

its own. And that’s the only place where I disagree with other people. Right. So I think the

existential risk thing is: humans are really damn good at surviving. To kill off the human race would be very, very difficult. Yes, but I’ll go even further. I don’t think AI systems are ever going to try to. I don’t think AI systems are ever going to say, I’m going to ignore you, I’m going to do what I think is best. I don’t think that’s going to happen, at least not

in the way I’m talking about it. So the Twitter recommendation algorithm is an interesting example. Let’s use computers as an analogy again, right? I build a computer. It’s a universal computing machine. I can’t predict what people are going to use it for. They can build all kinds of things. They can even create computer viruses. All kinds of stuff. So

there’s some unknown about its utility and about where it’s going to go. But on the other hand,

I pointed out that once I build a computer, it’s not going to fundamentally change how it computes.

It’s like the example I used of a register, an internal part of a computer. A register can’t just appear, because computers don’t replicate and they don’t evolve; the physical manifestation of the computer itself is not going to change. There are certain things it can’t do, right? So we can break this into things that are possible to happen, which we can’t predict, and things that are just impossible to happen.

Unless we go out of our way to make them happen, they’re not going to happen unless somebody makes

them happen. Yeah. So there’s a bunch of things to say. One is the physical aspect, where you’re absolutely right: we have to build a thing for it to operate in the physical world, and you can just stop building them the moment they’re not doing the thing you want them to do, or just change the design. The question is,

it’s possible in the physical world, probably longer term, that you automate the building. It makes a lot of sense to automate the building. There are a lot of factories doing more and more automation, going from raw resources to the final product. It’s possible to imagine, and obviously much more efficient, to create a factory that’s creating robots that do something extremely useful for society. It could be a personal assistant. It could be your toaster, but a toaster with much deeper knowledge of your culinary preferences.

Yeah, and I think now you’ve hit on the right thing. The real thing we need to be worried about is self-replication, whether in the physical world or even the virtual world, because self-replication is dangerous. We’re probably more likely to be killed by a virus, a human-engineered virus. The technology is getting to the point where not anybody, but a lot of people, could create a human-engineered virus that could wipe out humanity. That is really dangerous. No intelligence required, just self-replication. So we need to be careful about that.

So when I think about AI, I’m not thinking about robots building robots. Don’t do that. Well, that’s because you’re interested in creating intelligence. It seems like self-replication is a good way to make a lot of money. Well, fine. But so is, you know, maybe editing viruses; I don’t know. The point is,

as a society, when we look at existential risks, the existential risks we face that we can control almost all revolve around self-replication. Yes. The question is, I don’t see a

good way to make a lot of money by engineering viruses and deploying them on the world. There could be applications that are useful, but let’s separate things out. I mean, you don’t need money; you only need some terrorist who wants to do it, because it doesn’t take a lot of money to make viruses. Let’s just separate out what’s risky and what’s

not risky. I’m arguing that the intelligence side of this equation is not risky. It’s not risky at

all. It’s the self-replication side of the equation that’s risky. And I’m not dismissing that; I’m scared as hell. It’s like the paperclip maximizer thing. Yeah. Those are often talked about in the same conversation.

I think you’re right: creating ultra-intelligent, super-intelligent systems is not necessarily coupled with arbitrarily self-replicating systems. Yeah. And

you don’t get evolution unless you’re self replicating. Yeah. And so I think that’s the gist

of this argument that people have trouble separating those two out. They just think,

oh yeah, intelligence looks like us, and look at the damage we’ve done to this planet, how we’ve destroyed all these other species. Yeah. Well, we replicate; there are 8 billion of us, or 7 billion, now. So I think the idea is that the

more intelligent the systems we’re able to build, the more tempting it becomes, from a capitalist perspective of creating products, to create self-reproducing systems. All right, so let’s say that’s true. Does that mean we don’t build intelligent systems?

No, that means we understand the risks and we regulate them. Look, there are a lot of things we could do as a society which have some sort of financial benefit to someone but could do a lot of harm, and we have to learn how to regulate those things. We have to learn how to deal with those things. I will argue the opposite, actually. I

would say having intelligent machines at our disposal will actually help us in the end more,

because it’ll help us understand these risks better. It’ll help us mitigate these risks

better. It might be ways of saying, oh, how do we solve climate change problems? How do we do this, or how do we do that? Just like computers are dangerous in the hands of the wrong people but have been so great for so many other things, we live with those dangers. And I think we have to do the same with intelligent machines. We just have to be constantly vigilant about bad actors doing bad things with them, and don’t ever, ever create a self-replicating system. And by the way, I don’t even

know if you could create a self-replicating system that uses a factory. That’s really dangerous. Nature’s way of self-replicating is so amazing; it doesn’t require anything, it just takes resources and it goes, right? If I said to you, you know what we have to build: our goal is to build a factory that builds new factories, end to end, including the supply chain; it has to find the resources, get the energy. I mean, that’s really hard. No one’s doing that in the next hundred years. I’ve been extremely impressed by the efforts of Elon Musk and Tesla to try to do

exactly that. Not from raw resources… Well, he actually, I think, states the goal as going from raw resources to the final car in one factory. Yeah, that’s the main goal. Of course, it’s not currently possible, but they’re taking huge leaps. Well, he’s not the only one to try that. This has been a goal for many industries for a long, long time. It’s difficult to do.

What a lot of people do instead is have, like, a million suppliers; they co-locate them and tie the systems together. But I think that also is not getting at the issue I was just talking about, which is self-replication. I mean, self-replication means there’s no entity involved other than the entity that’s replicating, right? And so if there are humans in the loop, that’s not really self-replicating, unless somehow we’re duped into doing it. But I also don’t necessarily

agree with you because you’ve kind of mentioned that AI will not say no to us.

I just think they will. Yeah. I think it’s a useful feature to build in; I’m just trying to put myself in the mind of the engineers who make it sometimes say no. You know, I gave the example earlier, right? The example of my car. My car turns the wheel

and applies the accelerator and the brake as I say, until it decides there’s something dangerous. Yes. And then it doesn’t do that. Now, that wasn’t something it decided to do; it’s something we programmed into the car. And good, it was a good idea, right? The question isn’t, if we create an intelligent system, will it ever ignore our commands? Of course it will, sometimes. Is it going to do it because it came up with its own goals that serve its purposes

and it doesn’t care about our purposes? No, I don’t think that’s going to happen.

Okay. So let me ask you about these super-intelligent cortical systems that we engineer, and us humans. With these entities operating out there in the world, what does the most promising future look like? Is it us merging with them? How do we keep us humans around when you have increasingly intelligent beings? One of the dreams is to upload our minds into the digital space. Can we just give our minds to these systems so they can operate on them? Is there some kind of more interesting merger, or is there more communication? I talked about all these

scenarios; let me just walk through them. Sure. The uploading-the-mind one: extremely difficult to do. We have no idea how to do this even remotely right now, so it would be a very long way away. But I make the argument that you wouldn’t like the result; you wouldn’t be pleased with it. It’s really not what you think it’s going to be.

Imagine I could upload your brain into a computer right now. And now the computer’s sitting there going, hey, I’m over here, great, get rid of that old bio person, I don’t need them. But you’re still sitting here. Yeah. What are you going to do? You’d say, no, no, that’s not me, I’m here, right? Are you going to feel satisfied then? People imagine, look, I’m on my deathbed and I’m about to expire, and I push the button and now I’m uploaded. But think about it a little differently. And so I don’t think it’s going to be a thing, because by the time we’re able to do this, if ever… you have to replicate the entire body, not just the brain. I walked through the issues; they’re really substantial.

Do you have a sense of what makes us us? Is there a shortcut, where you could save only the certain part that makes us truly us? No. But I think that machine would feel like it’s you, too.

Right. You’d have two people. Just like I have children, right? I have two daughters. They’re independent people. I created them, well, partly. And just because they’re somewhat like me, I don’t feel like I’m them, and they don’t feel like they’re me. So if you split apart, you have two people. We can come back to what makes consciousness if you want; we can talk about that. But we don’t have remote consciousness; I’m not sitting there going, oh, I’m conscious of that system. So let’s stay on our topic. One was uploading a brain. Yep. It ain’t gonna happen

in a hundred years, maybe a thousand, but I don’t think people are going to want to do it. The

merging your mind with, you know, the Neuralink thing: again, really, really difficult. It’s one thing to make progress controlling a prosthetic arm; it’s another to have a billion, or several billion, connections and understand what those signals mean. It’s one thing to learn to think in some patterns to make something happen. It’s quite another thing to have a system, a computer, which actually knows exactly what cells it’s talking to, how it’s talking to them, and interacts in a way like that. Very, very difficult. We’re not getting anywhere close to that. Interesting. Can

I ask a question here? For me, what makes that merger very difficult practically in the next 10, 20, 50 years is literally the biology side of it: it’s just hard to do that kind of surgery in a safe way. But your intuition is that even the machine learning part of it, where the machine has to learn what the heck it’s talking to, is hard? I think it’s even

harder. It’s easy to do when you’re talking about hundreds of signals; it’s a totally different thing when you’re talking about billions of signals. So you don’t think it’s a raw machine learning problem? You don’t think it could be learned? Well, I’m just saying,

no, I think you’d have to have detailed knowledge. You’d have to know exactly what types of neurons you’re connecting to. In the brain there are all different types of cells; it’s not like a neural network, it’s a very complex organic system up here. We talked about the grid cells and the place cells; you have to know what kind of cells you’re talking to, what they’re doing, and how their timing works, all this stuff, which you can’t know today. There’s no way of doing that. Right. But I think

you’re right that the biological aspect, who wants to have surgery and have this stuff inserted in their brain, is a problem. But even when we solve that problem, I think the information-coding aspect is much worse. It’s not like what they’re doing today; today it’s simple machine learning stuff

because you’re doing simple things. But if you want to merge your brain, like, I’m thinking on the internet, I’ve merged my brain with the machine and we’re both working together: that’s a totally different issue. That’s interesting. I tend to think, okay: if you have a super clean signal from a bunch of neurons, even if at the start you don’t know what those neurons are, I think that’s much easier than getting the clean signal in the first place. I think if you think about today’s machine learning,

that’s what you would conclude. Right. I’m thinking about what’s going on in the brain

and I don’t reach that conclusion. So we’ll have to see. Sure. But even then, I think it’s kind of a sad future. Like, do I have to plug my brain into a computer? I’m still a biological organism; I assume I’m still going to die. So what have I achieved? What have I achieved? Oh, I disagree that we don’t know what those achievements would be; it seems like there could be a lot of different applications. It’s like virtual reality: to expand your brain’s capability, to, like, read Wikipedia.

Yeah. But fine, you’re still a biological organism. Yes. You’re still mortal. All right, so what are you accomplishing? You’re making your life in this short period of time better, right? Just like having the internet made our lives better. Yeah. Okay. So if I think about all the possible gains we could have here, that’s a marginal one. It’s individual: hey, I’m better, I’m smarter. Fine, I’m not against it.

I just don’t think it’s earth-changing. But this was true of the internet: when each of us individually is smarter, we get a chance to share our smartness. We get smarter and smarter together, as a collective, kind of like an ant colony. But why don’t I just create an intelligent machine that doesn’t have any of this biological

nonsense but has all the same capabilities, everything, except don’t burden it with my brain. Yeah. Right. It has a brain. It is smart. It’s like my child, but it’s much, much smarter than me. So I have a choice between doing some implant, some hybrid, weird biological thing that’s bleeding and has all these problems and is limited by my brain, or creating a system which is super smart, that I can talk to, that helps me understand the world, that can read the internet, read Wikipedia, and talk to me. I guess the open questions there

are: what does the manifestation of superintelligence look like? You talked about, why do I want to merge with AI? What’s the actual marginal benefit here? If we have a super-intelligent system, how will it make our lives better? That’s a great question, but let’s break it down into little pieces. All

right. On the one hand, it can make our life better in lots of simple ways. You mentioned

like a care robot or something that helps me do things. It cooks. I don’t know what it does. Right.

Little things like that. We can have much better, smarter cars. We can have better agents or aids helping us in our work environment and things like that. To me, that’s the easy stuff, the simple stuff in the beginning. In the same way that computers made our lives better in many, many ways, AI will give us those kinds of things. To me, the really exciting thing about AI

is its sort of transcendent quality in terms of humanity. We’re still biological organisms, still stuck here on earth. It’s going to be hard for us to live anywhere else; I don’t think you and I are going to want to live on Mars anytime soon. And we’re flawed. We may end up destroying ourselves; it’s totally possible. If not completely, we could destroy our civilizations. We have to face the fact that we have issues here. But we can create intelligent machines that can help us in various ways. For

example, one example I gave, and it sounds a little sci-fi, but I believe this: if we really wanted to live on Mars, we’d have to have intelligent systems that go there and build the habitat for us, not humans. Humans are never going to do this; it’s just too hard. But could we have a thousand or ten thousand engineering workers up there doing this stuff, building things, terraforming Mars? Sure. Maybe then we could move to Mars. And then if we want to go around

the universe, should I send my children around the universe or should I send some intelligent machine,

which is like a child that represents me and understands our needs here on earth that could

travel through space? So in some sense, intelligence allows us to transcend the limitations of our biology. And don’t think of it as a negative thing; in some sense my children transcend my biology too, because they live beyond me. Yeah. And in part they represent me; they also have their own knowledge, and I can impart knowledge to them. So intelligent machines will be like that too, but not limited like us.

But the question is, there are so many ways that transcendence can happen, and the merger of AI and humans is one of those ways. So, you said intelligent beings or systems propagating throughout the universe, representing us humans. They represent us humans in the sense that they represent our knowledge and our history, not us individually. Right. But I mean, the question is, is it just a database with a really damn good model of the world?

It’s conscious, just like us. Okay. But just different?

They’re different, just like my children are different. They’re like me, but they’re different. These are more different. I guess I kind of take a very broad view of our life here on earth. I say, you know, why are we living here?

Are we just living because we live? Are we surviving because we can survive? Are we fighting just because we want to keep going? What’s the point of it? So to me, if I ask myself what the point of life is, what transcends that ephemeral sort of biological experience, my answer is the acquisition of knowledge: to understand more about the universe and to explore. And that’s partly to learn more. I don’t view it as

a terrible thing if the ultimate outcome of humanity is that we create systems that are intelligent, that are our offspring, but that are not like us at all, and we stay here and live on earth as long as we can, which won’t be forever. That would be a great thing to do; it’s not a negative thing. Well, would you be okay, then, if

the human species vanishes but our knowledge is preserved and keeps being expanded by intelligent systems? I want our knowledge to be preserved and expanded. Am I okay with humans dying? No, I don’t want that to happen. But what if it does happen? What if we were sitting here, the last two people on earth, and we’re saying, Lex, we blew it, it’s all over? Wouldn’t I feel better knowing that our knowledge was preserved, that we had agents that knew about it, that had left earth? Wouldn’t I want that?

Mm, it’s better than not having that. I make the analogy of the dinosaurs, the poor dinosaurs. They lived for tens of millions of years. They raised their kids. They fought to survive. They were hungry. They did everything we do. And then they’re all gone. And if we hadn’t discovered their bones, nobody would ever know that they had existed. Do we want to be like that? I don’t want to be like that. There’s a sad aspect to it. And it’s kind of jarring to

think about. It’s possible that a human-like intelligent civilization has previously existed on earth. The reason I say this is that it is jarring to think that if they went extinct, we wouldn’t be able to find evidence of them after a sufficient amount of time. Basically, if we, the human civilization, destroyed ourselves now, then after a sufficient amount of time we would find evidence of the dinosaurs but would not find evidence of humans. Yeah. That’s kind of an odd thing to think about.

Although I’m not sure if we have enough knowledge about species going back for billions of years,

but we might be able to eliminate that possibility; it’s an interesting question. Of course, this is similar to the question of whether there were lots of intelligent species throughout our galaxy that have all disappeared. That’s super sad. Exactly: that there may have been much more intelligent alien civilizations in our galaxy that are no longer there. Yeah. You actually talked about this, about how humans might destroy ourselves, and how we might preserve our knowledge and advertise that knowledge to others. Advertise is a funny word to use. From a PR perspective? There’s no financial gain in this.

You know, make it interesting, like from a tourism perspective. Can you describe how you think about this problem? Well, there are a couple of things. I broke it down into two parts, actually three parts. One is: there are a lot of things we know. What if our civilization collapsed? I’m not talking tomorrow; it could be a thousand years from now. We don’t really know, but historically it would be likely at some point. Time flies when you’re having fun. Yeah, that’s a good way to put it. You know, what if intelligent life

evolved again on this planet? Wouldn’t they want to know a lot about us and what we knew? But they wouldn’t be able to ask us questions. So one very simple thing I asked: how would we archive

what we know? That was a very simple idea. I said, you know what, it wouldn’t be that hard to put a few satellites going around the sun, and we’d upload Wikipedia every day, and that kind

of thing. So, you know, if we end up killing ourselves, well, it’s up there and the next intelligent

species will find it and learn something. They would like that. They would appreciate that.

So that’s one thing. The next thing I asked: what about outside of our solar system? We have the SETI program; we’re looking for intelligent signals from everybody. And if you do a little bit of math, which I did in the book, and you ask, well, what if technologically intelligent species, ones really able to do the stuff we’re just starting to be able to do, only live for 10,000 years? Well, the chances are we wouldn’t be able to see any of them, because they would all have disappeared by now. They lived for 10,000 years and now they’re gone, so we’re not going to find these signals being sent from them.
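A back-of-the-envelope sketch of the kind of math being alluded to, in Python; the numbers are illustrative assumptions, not figures from the book:

    # If civilizations arise uniformly over a long window and each one
    # broadcasts for only a short lifetime, the expected number that are
    # "on the air" at any moment is: total * (lifetime / window).
    WINDOW_YEARS   = 10_000_000_000   # assumed span in which they can arise
    TOTAL_CIVS     = 1_000_000        # assumed number that ever arise
    LIFETIME_YEARS = 10_000           # assumed broadcasting lifetime of each

    concurrent = TOTAL_CIVS * LIFETIME_YEARS / WINDOW_YEARS
    print(concurrent)  # 1.0 -- even a million civilizations barely overlap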

But I asked, what kind of signal could you create that would last a million years or a billion years, such that someone would say, dammit, someone smart lived there? We know that would be a life-changing event for us, to figure that out. Well, what we’re looking for today in the SETI program isn’t that; we’re looking for very coded signals, in some sense. And so I asked myself,

what would be a different type of signal one could create? I’ve thought about this throughout my life, and in the book I gave one possible suggestion. We now detect planets going around other stars by seeing the slight dimming of the light as the planets move in front of them.

That’s how we detect planets elsewhere in our galaxy. What if we created something like that, something that just rotated around the sun and blocked out a little bit of light in a particular pattern, so that someone would say: hey, that’s not a planet, that is a sign that someone was once there? You could say, what if it’s beating out pi, you know, three point whatever.
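A toy sketch of what such a coded light curve might look like, in Python; the dip depth and the timing scheme are invented for illustration:

    # Toy "artificial transit" signal: groups of dips count out the digits
    # of pi (3, 1, 4, ...), separated by long gaps. No planet does this.
    PI_DIGITS = [3, 1, 4, 1, 5, 9]

    def light_curve(digits, dip=0.99, slot=4):
        curve = []
        for d in digits:
            for _ in range(d):
                curve += [dip] * slot    # one transit-like dip
                curve += [1.0] * slot    # short gap between dips
            curve += [1.0] * 4 * slot    # long gap marks a digit boundary
        return curve

    # An observer counting dips between long gaps recovers 3, 1, 4, ...
    print(light_curve(PI_DIGITS)[:24])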

So: it’s visible from a distance, it’s broadly broadcast, and it takes no continued activation on our part. This is the key, right? No one has to be sitting there running a computer and supplying it with power. It just goes on; it’s continuous. And I argued that part of the SETI program should be looking for signals like that. And to look for signals like that, you ought to figure

out how we would create such a signal: what could we create that would persist for millions of years, that would be broadcast broadly, that you could see from a distance, and that was unequivocally from an intelligent species? And so I gave that one example, because that’s the one I know of, actually. And then, finally:

ultimately, our solar system will die at some point in time. How do we go beyond that? And I think, if it’s at all possible, we’ll have to create intelligent machines that travel throughout the solar system or the galaxy, and I don’t think that’s going to be humans; I don’t think it’s going to be biological organisms. So these are just things to think about. You know, I don’t want to be like the dinosaur. I don’t want it to just be: okay, that was it, we’re done. Well, there is a kind of

presumption that we’re going to live forever. I think it is a bit sad to imagine that the message we send, as you talk about, is that we were once here instead of we are here. Well, it could be we are still here; it’s more of an insurance policy in case we’re not, you know? Well, I don’t know. But there is something I think about:

we as humans don’t often think about this. A couple of times in my life I’ve recorded a video for my future self, just for fun. And it’s always fascinating to think about that: preserving yourself for future civilizations. For me, it was preserving myself for a future me, but that’s a little fun example of archival.

Well, these podcasts are preserving you and me, in a way. Yeah, for the future, hopefully well after we’re gone. But sitting here talking about this, you’re not thinking about the fact that you and I are going to die, and that there’ll be somebody watching this ten years after we’re gone. You know, in some sense I do. I’m here

because I want to talk about ideas, and these ideas transcend me and they transcend this time on our planet. We’re talking here about ideas that could be around a thousand years from now.

Or a million years from now. When I wrote my book, I had an audience in mind, and one of the clearest audiences was… aliens? No. People reading this a hundred years from now? Yes. I said to myself, how do I make this book relevant to someone reading it a hundred years from now? What would they want to know that we were thinking back then? What would make it still an interesting book then? I’m not sure I can achieve that, but that was

how I thought about it. These ideas, especially in the third part of the book, the ones we were just talking about, these crazy-sounding ideas about storing our knowledge, merging our brains with computers, and sending our machines out into space: it’s not going to happen in my lifetime. It may not happen in the next hundred years; it may not happen for a thousand years. Who knows? But we have the unique opportunity right now, we, you, me, and other people in the world, to at least propose the agenda

that might impact the future like that. That’s a fascinating way to think about both writing and creating: trying to create ideas, trying to create things that hold up

in time. Yeah. You know, understanding how the brain works: we’re going to figure that out once. That’s it. It’s going to be figured out once, and after that, that’s the answer, and people will study it thousands of years from now. We still venerate Newton and Einstein, because ideas are exciting even well into the future.

even well into the future. Well, the interesting thing is like big ideas, even if they’re wrong,

are still useful. Yeah, especially if they’re not completely wrong, right? Right. Newton’s laws are not wrong; it’s just that Einstein’s are better. So yeah, I mean,

but with Newton and Einstein we’re talking about physics. I wonder if we’ll ever achieve that kind of clarity in understanding complex systems, and in this particular manifestation of complex systems, which is the human brain. I’m totally optimistic we can do

that. I mean, we’re making progress at it. I don’t see any reason why we can’t completely understand it, in the sense that, you know, we don’t really completely understand what all the molecules in this water bottle are doing, but we have laws that capture it pretty well. So we’ll have that kind of understanding. It’s not like you’re

going to have to know what every neuron in your brain is doing, but enough to, first of all, build it, and second of all, do what physics does, which is have concrete experiments where we can validate the theory. This is happening right now; it’s not some future thing. I’m very optimistic about it, because I know about our work and what we’re doing. We’ll have to prove it to people. But

I consider myself a rational person, and until fairly recently I wouldn’t have said this. But right now, where I’m sitting, I’m saying: this is going to happen. There are no big obstacles to it. We finally have a framework for understanding what’s going on in the cortex, and that’s liberating. It’s like, oh, it’s happening. So I can’t see why we wouldn’t be able to understand it. I just can’t.

Okay. So, I mean, on that topic, let me ask you to play devil’s advocate.

Is it possible for you to imagine looking at your book a hundred years from now: in which ways might your ideas be wrong? Oh, I worry about this all the time. Yeah, even then it’s still useful. Yeah.

Yeah. Well, I can best relate it to things I’m worried about right now. We talked about this voting idea, right? It’s happening, there’s no question. But there are enough things I don’t know about it that it might be working in ways different from how I’m thinking about it: what’s voting, who’s voting, where the representations are. I talked about how you have a thousand models of a coffee cup. That could turn out to be wrong; maybe there are a thousand models that are sub-models, but not really a single model of the coffee cup.
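A minimal sketch of the voting idea as described here, in Python; each column-like model names its best-guess object and the consensus wins. This is purely illustrative, not Numenta’s actual algorithm:

    # Each "column" model votes for an object; the most common vote wins.
    from collections import Counter

    def vote(column_guesses):
        return Counter(column_guesses).most_common(1)[0][0]

    print(vote(["coffee cup", "coffee cup", "bowl", "coffee cup"]))  # coffee cup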

I mean, these are all things sort of on the edges, things that I present as, oh, it’s so simple and clean. Well, it’s not; it’s always going to be more complex. And there are parts of the theory whose complexity I don’t understand well. So I think

the idea that the brain is a distributed modeling system is not controversial at all; that’s well understood by many people. The question then is: is each cortical column an independent modeling system? I could be wrong about that.

I don’t think so, but I worry about it. My intuition, not even thinking about why you could be wrong, is the same intuition I have about any sort of physics, like string theory: that we as humans desire a clean explanation, and a hundred years from now intelligent systems might look back at us and laugh at how we tried to get rid of the whole mess with a simple explanation, when the reality is way messier, and in fact impossible to understand; you can only build it. It’s like the idea in complex systems and cellular automata that you can only launch the thing; you cannot understand it. Yeah. I think, you know,

the history of science suggests that’s not likely to occur. The history of science suggests that as a theorist, and we’re theorists, you look for simple explanations, fully knowing that whatever simple explanation you come up with is not going to be completely correct. It can’t be; there’s always more complexity. But that’s the role theorists play: they give you a framework on which you can now talk about a problem and figure out, okay, now we can start digging into more details. The best frameworks stick around while

the details change. Again, the classic example is Newton and Einstein, right? Newton’s theories are still used. They’re still valuable, still practical. They’re not wrong; they’ve just been refined. Yeah. But that’s in physics. It’s not obvious,

by the way, even for physics, that the universe should be amenable to such simple theories. But so far it appears to be, as far as we can tell. Yeah, as far as we can tell. But it’s also an open question whether the brain is amenable to such clean theories. Not the brain, but intelligence. Well, I don’t know; I would take intelligence out of it. Okay. The evidence we have suggests

that the human brain is at the same time extremely messy and complex, but there are some parts that are very regular and structured. That’s why we started with the neocortex: it’s extremely regular in its structure. Yeah, and unbelievably so. And then, as I mentioned earlier, the other thing is its universal abilities. It is so flexible, able to learn so many things. We haven’t figured out what it can’t learn yet; we don’t know. But it can learn things that it never evolved to learn. So those give us hope. That’s why I

went into this field: I said, this regular structure is doing this amazing number of things; there have got to be some underlying principles that are common. Other scientists have come to the same conclusion. So it’s promising, and whether the theories play out exactly this way or not, that is the role that theorists play. And so far it’s worked out well. Even though maybe we don’t understand all the laws of physics, so far the theories we have have been pretty damn useful. You mentioned that we should not necessarily be,

at least to the degree that we are, worried about the existential risks of artificial intelligence relative to the existential risks that come from human nature.

What aspect of human nature worries you the most in terms of the survival of the human species?

I mean, I’m disappointed in humanity, in humans. All of us; I’m one, so I’m disappointed in myself too. It’s kind of a sad state. There are two things that disappoint me. One is how difficult it is for us to separate the rational component of ourselves from our evolutionary heritage, which is not always pretty. Rape is an evolutionarily good strategy for reproduction. Murder can be at times too. Making other people miserable at times is a good strategy for reproduction. And so now that

we know that; and yet you and I can have this very rational discussion, talking about intelligence and brains and life and so on. It seems so hard. It’s just a big, big transition to get all humans to say: let’s pay no attention to all that ugly stuff over here, let’s just focus on what’s interesting; what’s unique about humanity is our knowledge and our intellect. But the fact

that we’re striving is in itself amazing, right? The fact that we’re able to overcome that part.

And it seems like we are more and more becoming successful at overcoming that part. That is the

optimistic view. And I agree with you, but I worry about it. I’m not saying… I’m worrying about it; I think that was your question. I still worry about it. Yes. We could end tomorrow, because some terrorist could get nuclear bombs and blow us all up. Who knows? Right. The other

thing I’m disappointed about, and I understand it, I guess you can’t really be disappointed, it’s just a fact, is that we’re so prone to false beliefs. We have a model in our head; for the things we can interact with directly, physical objects, people, that

model is pretty good. And we can test it all the time, right? I touch something, I look at it,

talk to you, see if my model is correct. But so much of what we know is stuff I can’t directly

interact with; I only know about it because someone told me. And so we’re inherently prone to having false beliefs, because if I’m told something, how am I going to know whether it’s right

or wrong? Right. And so then we have the scientific process, which says we are inherently flawed.

So the only way we can get closer to the truth is by looking for contrary evidence.

Yeah, like this conspiracy theory that scientists keep telling me about, that the earth is round. As far as I can tell, when I look out, it looks pretty flat.

Yeah. So there is a tension, but I also tend to believe that we haven't figured out most of this yet, right? Most of nature around us is a mystery. And so it…

But does that worry you? I mean, isn't that a pleasure, more to figure out, right?

Yeah, that's exciting. But I'm saying there are going to be a lot of, quote unquote, wrong ideas. I've been thinking a lot about engineered systems like social networks and so on, and I've been worried about censorship, thinking through all that kind of stuff, because there are a lot of wrong ideas, a lot of dangerous ideas. But then I read history and see what happens when you censor ideas that are wrong. And this could be small-scale censorship, like when a young grad student raises their hand and says some crazy idea. A form of censorship, though I shouldn't use that word, could be to disincentivize them: no, no, no, this is the way it's been done.

Yeah, you're a foolish kid. Don't think that.

Yeah, you're foolish. So in some sense, those wrong ideas most of the time end up being wrong, but sometimes end up being…

I agree with you, so I don't like the word censorship. At the very end of the book, I ended up with a sort of plea, a recommended course of action. The best way I know how to deal with the issue you bring up is if everybody understood, as part of their upbringing, something about how their brain works: that it builds a model of the world, how it basically builds that model, and that the model is not the real world. It's just a model, and it's never going to reflect the entire world. It can be wrong, and it's easy for it to be wrong, and here are all the ways you can get a wrong model in your head. It's not prescribing what's right or wrong, just understanding that process. If we all understood that process, and you and I got together and you said, I disagree with you, Jeff, and I said, Lex, I disagree with you, then at least we'd understand that we're both trying to model something. We both have different information, which leads to our different models, and therefore I shouldn't hold it against you and you shouldn't hold it against me. And we can at least agree on what we can look for as common ground to test our beliefs. As opposed to so much of how we raise our kids, on dogma: this is a fact, this is a fact, and these people are bad. If everyone knew to be skeptical of every belief, and why, and how their brains form beliefs, I think we might have a better world.

Do you think the human mind is able to comprehend reality? You talk about the brain creating models; how close do you think we get to reality? One of the wildest ideas here is Donald Hoffman's, saying we're very far away from reality. Do you think we're getting close to reality?

Well, it depends on how you define reality. We have a model of the world that's very useful, right? For basic goals, for our survival and our pleasure. So that's useful. I mean, it's really useful: we can build planes, we can build computers, we can do these things. But I don't know the answer to that question. I think that's part of what we're trying to figure out. Obviously, if you end up with a theory of everything that really is a theory of everything, and all of a sudden everything comes into play and there's no room for something else, then you might feel like we have a good model of the world.

Yeah. But if we have a theory of everything, and, first of all, you'll never be able to conclusively say it's a theory of everything, but say somehow we are very damn sure it's a theory of everything, that we understand what happened at the big bang and the entirety of the physical process, I'm still not sure that gives us an understanding of the next many layers of the hierarchy of abstractions that form on top of it.

Well, also, what if string theory turns out to be true? Then you'd say, well, we have no model of what's going on in those other dimensions that are wrapped into each other. Or the multiverse. You know, I honestly don't know how it helps us, for human interaction, for ideas of intelligence, to understand that we're made up of vibrating strings that are ten to the whatever times smaller than us. You could probably build better weapons or better rockets, but you're not going to be able to understand intelligence.

I guess maybe better computers?

No, you won't. I think it's just more purely knowledge.

It might lead to a better understanding of the beginning of the universe, right? It might lead to a better understanding of, I don't know. I guess I think the acquisition of knowledge has always been something you pursue for its own pleasure, and you don't always know what is going to make a difference.

Yeah. You're pleasantly surprised by the weird things you find.

Do you think, for the neocortex in general, there's a lot of innovation to be done on the machine side? You use the computer as a metaphor quite a bit; are there different types of computers that would help us build intelligent machines, manifestations of intelligence? Or do we have no idea how this is going to look yet?

You can already see this. Today we model these things on traditional computers, and now GPUs are really popular with neural networks and so on. But there are companies coming up with fundamentally new physical substrates that are just really cool. I don't know if they're going to work or not, but I think there'll be decades of innovation here.

Yeah, totally. Do you think the final thing will be messy, like our biology is messy? It's the old bird versus airplane question: do you think we could build airplanes that fly way better than birds, in the same way we could build an electrical neocortex?

Yeah. You know, can I riff on the bird thing a bit? Because I think that's interesting.

People really misunderstand this. The Wright brothers: the problem they were trying to solve was controlled flight, how to turn an airplane, not how to propel an airplane. They weren't worried about that.

Interesting.

Yeah. At that time there were already wing shapes, which they had from studying birds, and there were already gliders that could carry people. The problem was that if you put a rudder on the back of a glider and turn it, the plane falls out of the sky. So the problem was: how do you control flight? And they studied birds. They actually had birds in captivity, they watched birds in wind tunnels, they observed them in the wild, and they discovered that the secret was that birds twist their wings when they turn. So that's what they did on the Wright brothers' flyer: they had these sticks that would twist the wing. That was their innovation, not the propeller. And today airplanes still twist their wings. We don't twist the entire wing, we just twist the tail end of it, the flaps, but it's the same thing. So today's airplanes fly on the same principles the Wright brothers observed in birds. Everyone gets that analogy wrong. But let's step back from it. Once you understand the principles of flight, you can choose how to implement them. No one's going to use bones and feathers and muscles, but planes do have wings, and we don't flap them; we have propellers. So when we have the principles of the computation that goes on to model the world in a brain, and we understand those principles very clearly, we'll have choices on how to implement them. Some of them will be biological-like and some won't. But I do think there's going to be a huge amount of innovation here.

Just think about the innovation of the computer: they had to invent the transistor, the silicon chip, then the software, memory systems. There are millions of things they had to do. It's going to be similar here.

Well, it's interesting that the effectiveness of deep learning for specific tasks is driving a lot of innovation in the hardware, which may end up allowing us to discover intelligent systems that operate very differently from deep learning, or at least at much bigger scale.

Yeah, interesting. So ultimately it's good to have an application that's making our lives better now, because of the capitalist process: if you can make money…

Yeah, that works.

The other way we fund science, of course, and Neil deGrasse Tyson writes about this, is through the military. Conquest.

So here's an interesting thing we're doing in this regard. We have a series of these biological principles, and we can see how to build intelligent machines with them, but we've decided to apply some of these principles to today's machine learning techniques. One principle we didn't talk about is sparsity in the brain: only a small fraction of the neurons are active at any point in time. The activity is sparse and the connectivity is sparse, and that's different from deep learning networks. We've already shown that we can speed up existing deep learning networks anywhere from a factor of ten to a factor of a hundred, literally a hundred, and make them more robust at the same time. So this is commercially very valuable. If we can prove this in the largest systems that are commercially deployed today, there's a big commercial desire to do this. Now, sparsity is something that doesn't run really well on existing hardware; it doesn't run well on GPUs or on CPUs. So this would be a way of bringing more brain principles into existing systems on a commercially valuable basis.
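[To make the sparsity idea concrete, here is a minimal sketch of the kind of k-winners-take-all activation being described, where only a small fraction of units in a layer stay active. It is written in PyTorch purely for illustration; the class and parameter names are this sketch's assumptions, not Numenta's actual implementation.]

```python
# A minimal sketch of the sparsity idea: keep only the k most active units
# in each layer ("k-winners-take-all"), so that, as in the neocortex, only
# a small fraction of neurons fire at any moment. Illustrative only.
import torch
import torch.nn as nn

class KWinners(nn.Module):
    def __init__(self, percent_on: float = 0.05):
        super().__init__()
        self.percent_on = percent_on  # fraction of units allowed to stay active

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.percent_on * x.shape[-1]))
        # Find the k largest activations per example; zero out the rest.
        topk_values, _ = torch.topk(x, k, dim=-1)
        threshold = topk_values[..., -1, None]  # k-th largest value per row
        return torch.where(x >= threshold, x, torch.zeros_like(x))

# Usage: replace a dense ReLU with sparse activations in an ordinary layer.
layer = nn.Sequential(nn.Linear(784, 1024), KWinners(percent_on=0.05))
out = layer(torch.randn(32, 784))  # roughly 5% of the 1024 units are nonzero per row
```

[Note that zeroing activations this way does not by itself deliver the 10x-100x speedups mentioned above; those additionally depend on sparse weights and hardware that can exploit the zeros, which is exactly the point about sparsity running poorly on stock GPUs and CPUs.]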

Another thing we think we can do is use these dendrite models. I talked earlier about prediction occurring inside a neuron; that basic property can be applied to existing neural networks and allow them to learn continuously, which is something they don't do today.

So the dendritic spikes that you were talking about.

Yeah, well, we wouldn't model the spikes, but the idea is that the neurons in today's neural networks, what are called point neurons, are a very simple model of a neuron. By adding dendrites to them, just one more level of the complexity that's in biological systems, you can solve problems in continuous learning and rapid learning. So we're trying to bring the existing field of machine learning, and we'll see if we can do it, commercially along with us, this idea you brought up of paying for it commercially, as we move towards the ultimate goal of a true AI system.
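[As a rough illustration of the dendrites idea, here is a sketch of a layer in which each unit has several dendritic segments matched against a context vector, with the best-matching segment modulating the unit's feed-forward response; this is what lets the same weights behave differently in different contexts, the property tied here to continuous learning. Again PyTorch; the names and the sigmoid gating are assumptions loosely patterned on Numenta's published active-dendrites work, not their code.]

```python
# A minimal sketch of the "dendrites" idea: augment a point neuron with
# dendritic segments that match a context vector, and let the best-matching
# segment modulate the neuron's output. Illustrative only.
import torch
import torch.nn as nn

class DendriticLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, context_dim: int, num_segments: int = 8):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # the ordinary point-neuron part
        # Each output unit gets several dendritic segments, each a weight
        # vector compared against the current context.
        self.segments = nn.Parameter(torch.randn(out_dim, num_segments, context_dim) * 0.01)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        feed_forward = self.linear(x)                       # (batch, out_dim)
        # Dot each segment with the context: (batch, out_dim, num_segments)
        matches = torch.einsum("bc,osc->bos", context, self.segments)
        best = matches.max(dim=-1).values                   # winning segment per unit
        # The winning dendritic match gates the feed-forward response, so the
        # same weights behave differently depending on context.
        return feed_forward * torch.sigmoid(best)

# Usage: the context vector might encode the current task or situation.
layer = DendriticLayer(in_dim=64, out_dim=32, context_dim=16)
y = layer(torch.randn(8, 64), torch.randn(8, 16))
```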

Even small innovations on neural networks are really, really exciting.

Yeah.

It seems like such a trivial model of the brain, and applying different insights, like you said, continuous learning, or making it more asynchronous, more dynamic, more robust, or incentivizing sparsity, could somehow make it much better.

Yeah. Well, if you can make things a hundred times faster, there's plenty of incentive.

That's true. People are spending millions of dollars just training some of these networks now, these transformer networks. Let me ask you the big question for young people listening to this today, in high school and college. What advice would you give them in terms of which career path to take, and maybe just about life in general?

Well, in my case, I didn't start

life with any kind of goals. When I was going to college, it was like, oh, what do I study? Well, maybe I'll do this electrical engineering stuff. It wasn't like today, where you see some of these young kids who are so motivated, like, I'm changing the world. I was like, you know, whatever. But then I did fall in love with something, besides my wife. I fell in love with this idea: oh my God, it would be so cool to understand how the brain works. And then I said to myself, that's the most important thing I could work on. I can't imagine anything more important, because if we understand how brains work, we could build intelligent machines, and they could figure out all the other big questions in the world. And I also said, I want to understand how I work. So I fell in love with this idea and became passionate about it. And this is a trope, people say this, but it's true: because I was passionate about it, I was able to put up with so much crap. People said, you can't do this. I was a graduate student at Berkeley when they said, you can't study this problem, no one can solve this, you can't get funded for it. Then I went into mobile computing, and people said, you can't do that, you can't build a cell phone. But all along I kept being motivated, because I wanted to work on this problem. I said, I want to understand how the brain works. I've got one lifetime; I'm going to figure it out, do the best I can. So by having that… Because, as you pointed out, Lex, it's really hard to do these things. There are so many downers along the way, so many obstacles that get in your way. I'm sitting here happy all the time, but trust me, it's not always like that.

Well, I guess the happiness, the passion, is a prerequisite for surviving the whole thing.

Yeah, I think that's right. So I don't want to sit here and tell someone, you need to find a passion and do it. No, maybe you don't. But if you do find something you're passionate about, then you can follow it as far as your passion will let you put up with it.

Do you remember how you found it, how the spark happened?

Why it was this specifically for me?

Yeah. Because it's such an interesting story; it happened almost later in life. By later, I mean not when you were five. You didn't really know, and then all of a sudden you fell in love with that idea.

Yeah, there were two separate events that compounded one another.

One: when I was a teenager, probably 17 or 18, I made a list of the most interesting problems I could think of. First, why does the universe exist? It seems like not existing is more likely. Second, given that it exists, why does it behave the way it does? The laws of physics: why is it E equals MC squared, not MC cubed? That's an interesting question. The third was, what's the origin of life? And the fourth was, what's intelligence? And I stopped there. I said, well, that's probably the most interesting one, and I put it aside as a teenager.

Then, when I was 22, in 1979, I was reading the September issue of Scientific American, which was all about the brain. The final essay was by Francis Crick, of DNA fame, who had by then turned his interest to studying the brain. And he said, you know, there's something wrong here. We've got all this data, all these facts about the brain, tons and tons of facts. Do we need more facts, or do we just need to think of a way of rearranging the facts we have? Maybe we're just not thinking about the problem correctly. Because, he said, it shouldn't be like this. So I read that and I said, wow. I don't have to become an experimental neuroscientist; I could look at all those facts, try to become a theoretician, and try to figure it out. And I felt

that it was something I would be good at. I said, I wouldn't be a good experimentalist; I don't have the patience for it. But I'm a good thinker, and I love puzzles. And this is like the biggest puzzle in the world, the biggest puzzle of all time, and I had all the puzzle pieces in front of me. Damn, that was exciting.

And there's something, obviously, that you can't convert into words, that just sparked this passion. I've had that a few times in my life; something just grabs you.

Yeah. I felt it was something that was both important and that I could make a contribution to. So all of a sudden, it gave me purpose in life.

I honestly don't think it has to be as big as one of those four questions. I think you can find those things in the smallest…

Oh, absolutely.

David Foster Wallace said the key to life is to be unboreable. I think it's very possible to find that intensity of joy in the smallest thing.

Absolutely. I'm just answering how you asked me my story.

Yeah. No, but I'm

actually speaking to the audience. It doesn't have to be those four. You happened to get excited by one of the bigger questions in the universe, but even the smallest things can do it. Watching the Olympics now: just giving your life over to the study and mastery of a particular sport is fascinating. And if it sparks joy and passion, you're able, in the case of the Olympics, to basically suffer for a couple of decades to achieve it.

I mean, you can find joy and

passion just being a parent.

I mean, yeah, the parenting one is funny. For a long time, not always, I've wanted kids, to get married and so on, and a lot of that has to do with the fact that I've seen many people I respect get a whole other level of joy from kids. At first your thinking is, well, I don't have enough time in the day, right? If I have this passion to solve intelligence, how is the kid situation going to help me? But then you realize that, like you said, it's the thing that sparks joy, and it's very possible that kids can provide an even greater, deeper, more meaningful joy than those bigger questions, and that they enrich each other. When I was younger, that was obviously a counterintuitive notion, because there are only so many hours in the day. But then, life is finite, and you have to pick the things that give you joy.

Yeah. But you also understand that you can be patient. Life is finite, but we do have, you know, 50 years or whatever. In my case, I had to give up on my dream of neuroscience, because I was a graduate student at Berkeley and they told me I couldn't do this and couldn't get funded. So I went back into the computing industry for a number of years. I thought it would be four; it turned out to be more. But I said, I'll come back. I'm definitely going to come back. I know I'm going to do this computing stuff for a while, but I'm definitely coming back. Everyone knew that. And it's like raising kids: yeah, you have to spend a lot of time with your kids, and it's fun and enjoyable, but that doesn't mean you have to give up on other dreams. It just means you may have to wait a week or two to work on that next idea.

Well, you talked earlier about the

darker, disappointing sides of human nature that we're hoping to overcome so that we don't destroy ourselves. I tend to put a lot of value in the broad, general concept of love: the human capacity for compassion towards each other, for kindness, that longing for human-to-human connection. It connects back to our earlier discussion; I tend to see a lot of value in the collective intelligence aspect. I think some of the magic of human civilization happens collectively; a party is not as fun when you're alone.

I totally agree with you on these issues.

From a neocortex perspective, what role does love play in the human condition?

Well, those are two separate things. From a neocortex point of view, it doesn't impact how we think about the neocortex. From a human condition point of view, I think it's core. I mean, we get so much pleasure out of loving people and helping people. I'll chalk that up to old-brain stuff, and maybe we can throw it under the bus of evolution if you want; that's fine. It doesn't impact how I think about how we model the world, but from a humanity point of view, I think it's essential.

Well, I tend to give it to the new brain, and also I tend to give it to

the old brain as well. I also tend to think that some aspects of that need to be engineered into AI systems, both in their ability to have compassion for humans and in their ability to maximize love in the world between humans. I'm thinking mostly about social networks: wherever there's a deep integration of AI systems and humans, specific applications where AI and humans interact, I think that's something that's often not talked about in terms of the metrics you try to maximize in a system. It seems like one of the most powerful things in societies is the capacity to love.

It's fascinating. I think that's a great way of thinking about it. I have been thinking more about these fundamental mechanisms in the brain, as opposed to the social interaction between humans and AI systems in the future. But if you think about that, you're absolutely right; that's a complex system. You can have intelligent systems that don't have that component if they're not interacting with people, if they're just running something or building something somewhere, I don't know. But if you think about interacting with humans, yeah, it has to be engineered in there. I don't think it's going to appear on its own. That's a good question.

Yeah. Well, we'll leave that open. Whether, from a reinforcement learning perspective, the darker sides of human nature or the better angels of our nature win out, statistically speaking, I don't know. I tend to be optimistic and hope that love wins out in the end.

You've done a lot of incredible stuff, and your book drives toward this fourth question you started with, on the nature of intelligence. What do you hope your legacy is for people reading it a hundred years from now? How do you hope they remember your work? How do you hope they remember

this book?

Well, I think as an entrepreneur or a scientist, or any human trying to accomplish things, really all you can do is accelerate the inevitable. If we didn't study the brain, someone else would study the brain. If Elon didn't make electric cars, someone else would do it eventually. And if Thomas Edison hadn't invented the light bulb, we wouldn't be using candles today. So what you can do as an individual is accelerate something beneficial and make it happen sooner than it would have. That's really it; that's all you can do. You can't create a new reality that wasn't going to happen anyway. So from that perspective, I would hope that our work, not just mine but our work in general, would make people look back and say, hey, they really helped make this better future happen sooner. They helped us understand the nature of false beliefs sooner than we might have. Now we're so happy we have these intelligent machines helping us, maybe they solved the climate change problem, and they made it happen sooner. I think that's the best I could hope for. Someone would say, those guys just moved the needle forward a little bit in time.

Well, it feels like the progress of human civilization has a lot of possible trajectories, and if you have individuals that accelerate in one direction, that helps steer human civilization. Over a long stretch of time, all trajectories may be traveled, but I think it's nice for this particular civilization on Earth to travel down one of the better ones.

Well, I think you're right. Take the whole period of World War II and Nazism, or something like that. That was a bad sidestep, right? We were in it for a while. But there is the optimistic view about life, that ultimately it converges in a positive way, that it progresses ultimately, even if we have years of darkness. So yes, perhaps accelerating the positive could also mean eliminating some bad missteps along the way, too. But I'm optimistic in that way. Even though we talked about the end of

civilization, you know, I think we're going to live for a long time. I hope we are. I think our society in the future is going to be better: we're going to have less discord, fewer people killing each other, and we'll manage to live in some way that's compatible with the carrying capacity of the Earth. I'm optimistic these things will happen, and all we can do is try to get there sooner. And at the very least, if we do destroy ourselves, we'll have a few satellites orbiting that will tell alien civilizations that we were once here.

Or maybe they'll tell future inhabitants of Earth. Imagine a Planet of the Apes scenario here: we kill ourselves, and a million or a billion years from now there's another species on the planet that discovers curious creatures were once here.

Jeff, thank you so much for your work, and thank you so much for talking to me once again.

Well, it's been great. I love what you do. I love your podcast; you have the most interesting people, me aside. So it's a real service you do, I think, in a broader sense, for humanity.

Thanks, Jeff.

All right. It's a pleasure.

Thanks for listening to this conversation with Jeff Hawkins. And thank you to

Codecademy, BioOptimizers, ExpressVPN, Asleep, and Blinkist. Check them out in the description

to support this podcast. And now, let me leave you with some words from Albert Camus.

An intellectual is someone whose mind watches itself. I like this, because I’m happy to be

both halves, the watcher and the watched. Can they be brought together? This is the

practical question we must try to answer. Thank you for listening. I hope to see you next time.
