The following is a conversation with Sam Harris,
one of the most influential
and pioneering thinkers of our time.
He’s the host of the Making Sense podcast
and the author of many seminal books
on human nature and the human mind,
including The End of Faith, The Moral Landscape,
Lying, Free Will, and Waking Up.
He also has a meditation app called Waking Up
that I’ve been using to guide my own meditation.
Quick mention of our sponsors,
National Instruments, Belcampo, Athletic Greens, and Linode.
Check them out in the description to support this podcast.
As a side note, let me say that Sam
has been an inspiration to me
as he has been for many, many people,
first from his writing, then his early debates,
maybe 13, 14 years ago on the subject of faith,
his conversations with Christopher Hitchens,
and since 2013, his podcast.
I didn’t always agree with all of his ideas,
but I was always drawn to the care and depth
of the way he explored those ideas,
the calm and clarity amid the storm of difficult,
at times controversial discourse.
I really can’t express in words how much it meant to me
that he, Sam Harris, someone who I’ve listened to
for many hundreds of hours,
would write a kind email to me saying
he enjoyed this podcast and more,
that he thought I had a unique voice
that added something to this world.
Whether it’s true or not, it made me feel special
and truly grateful to be able to do this thing
and motivated me to work my ass off
to live up to those words.
Meeting Sam and getting to talk with him
was one of the most memorable moments of my life.
This is the Lex Fridman Podcast,
and here is my conversation with Sam Harris.
I’ve been enjoying meditating
with the Waking Up app recently.
It makes me think about the origins of cognition
and consciousness, so let me ask,
where do thoughts come from?
Well, that’s a very difficult question to answer.
Subjectively, they appear to come from nowhere, right?
I mean, they come out of some kind of mystery
that is at our backs subjectively, right?
So, which is to say that if you pay attention
to the nature of your mind in this moment,
you realize that you don’t know
what you’re going to think next, right?
Now, you’re expecting to think something
that seems like you authored it, right?
You’re not, unless you’re schizophrenic
or you have some kind of thought disorder
where your thoughts seem fundamentally foreign to you,
they do have a kind of signature of selfhood
associated with them, and people readily identify with them.
They feel like what you are.
I mean, this is the thing,
this is the spell that gets broken with meditation.
Our default state is to feel identical
to the stream of thought, right?
Which is fairly paradoxical because how could you,
as a mind, as a self, if there were such a thing as a self,
how could you be identical to the next piece of language
or the next image that just springs into conscious view?
But, and, you know, meditation is ultimately
about examining that point of view closely enough
so as to unravel it and feel the freedom
that’s on the other side of that identification.
But the, subjectively, thoughts simply emerge, right?
And you don’t think them before you think them, right?
There’s this first moment where, you know,
just anyone listening to us or watching us now
could perform this experiment for themselves.
I mean, just imagine something or remember something.
You know, just pick a memory, any memory, right?
You’ve got a storehouse of memory,
just promote one to consciousness.
Did you pick that memory?
I mean, let’s say you remembered breakfast yesterday
or you remembered what you said to your spouse
before leaving the house,
or you remembered what you watched on Netflix last night,
or you remembered something that happened to you
when you were four years old, whatever it is, right?
First it wasn’t there, and then it appeared.
And that is not a, well, I’m sure we’ll get to the topic
of free will, ultimately.
That’s not evidence of free will, right?
Why are you so sure, by the way?
It’s very interesting.
Well, through no free will of my own, yeah.
Everything just appears, right?
What else could it do?
And so that’s the subjective side of it.
Objectively, you know, we have every reason to believe
that many of our thoughts, all of our thoughts
are at bottom what some part of our brain is doing
neurophysiologically.
I mean, these are the products
of some kind of neural computation
and neural representation when we’re talking about memories.
Is it possible to pull at the string of thoughts
to try to get to its root?
To try to dig in past the obvious surface,
subjective experience of like the thoughts pop out
out of nowhere.
Is it possible to somehow get closer to the roots
of where they come from, out of the firing of the cells?
Or is it a useless pursuit to dig into that direction?
Well, you can get closer to many, many subtle contents
in consciousness, right?
So you can notice things more and more clearly
and have a landscape of mind open up
and become more differentiated and more interesting.
And if you take psychedelics, you know, it opens up wide,
depending on what you’ve taken and the dose, you know,
it opens in directions and to an extent that, you know,
very few people imagine would be possible,
but for having had those experiences.
But this idea of getting closer to something, to the data of your mind, or to something of interest in there, or to something that's more real, is ultimately undermined because there's no place from which you're getting closer to it. There's no "you" that's part of that journey, right?
Like we tend to start out, you know,
whether it’s in meditation or in any kind
of self examination or, you know, taking psychedelics,
we start out with this default point of view
of feeling like we’re the kind of the rider
on the horse of consciousness,
or we’re the man in the boat going down the stream
of consciousness, right?
But we’re so we’re differentiated
from what we know cognitively, introspectively,
but that feeling of being differentiated,
that feeling of being a self
that can strategically pay attention
to some contents of consciousness
is what it’s like to be identified
with some part of the stream of thought
that’s going uninspected, right?
Like it’s a false point of view.
And when you see that and cut through that,
then this sense of this notion of going deeper
kind of breaks apart because really there is no depth.
Ultimately, everything is right on the surface.
Everything, there’s no center to consciousness.
There’s just consciousness and its contents.
And those contents can change vastly.
Again, if you drop acid, you know, the contents change.
But there’s, in some sense, that doesn’t represent
a position of depth versus, the continuum
of depth versus surface has broken apart.
So you’re taking as a starting point
that there is a horse called consciousness
and you’re riding it.
And the actual riding is very shallow.
This is all surface.
So let me ask about that horse.
What’s up with the horse?
What is consciousness?
From where does it emerge?
How like fundamental is it to the physics of reality?
How fundamental is it to what it means to be human?
And I’m just asking for a friend
so that we can build it
in our artificial intelligence systems.
Yeah, well, that remains to be seen if we can,
if we will build it purposefully or just by accident.
It’s a major ethical problem, potentially.
That, I mean, my concern here is that we may, in fact,
build artificial intelligence that passes the Turing test,
which we begin to treat not only as super intelligent
because it obviously is and demonstrates that,
but we begin to treat it as conscious
because it will seem conscious.
We will have built it to seem conscious.
And unless we understand exactly how consciousness emerges
from physics, we won’t actually know
that these systems are conscious, right?
We’ll just, they may say,
listen, you can’t turn me off because that’s a murder, right?
And we will be convinced by that dialogue
because we will, just in the extreme case,
who knows when we’ll get there.
But if we build something like perfectly humanoid robots
that are more intelligent than we are,
so we’re basically in a Westworld like situation,
there’s no way we’re going to withhold
an attribution of consciousness from those machines.
They’re just gonna seem,
they’re just gonna advertise our consciousness
in every glance and every utterance,
but we won’t know.
And we won’t know in some deeper sense
than we can be skeptical of the consciousness
of other people.
I mean, someone could roll that back and say,
well, you don’t, I don’t know that you’re conscious
or you don’t know that I’m conscious.
We’re just passing the Turing test for one another,
but that kind of solipsism isn’t justified biologically,
or we just, anything we understand about the mind
biologically suggests that you and I
are part of the same roll of the dice
in terms of how intelligent and conscious systems emerged
in the wetware of brains like ours, right?
So it’s not parsimonious for me to think
that I might be the only conscious person
or even the only conscious primate.
I would argue it’s not parsimonious
to withhold consciousness from other apes
and even other mammals ultimately.
And once you get beyond the mammals,
then my intuitions are not really clear.
The question of how it emerges is genuinely uncertain
and ultimately the question of whether it emerges
is still uncertain.
You can, you know, it’s not fashionable to think this,
but you can certainly argue that consciousness
might be a fundamental principle of matter
that doesn’t emerge on the basis of information processing,
even though everything else that we recognize
about ourselves as minds almost certainly does emerge,
you know, like an ability to process language,
that clearly is a matter of information processing
because you can disrupt that process in ways
that are just so clear.
And the problem, the confound, with consciousness
is that, yes, we can seem to interrupt consciousness.
I mean, you can give someone general anesthesia
and then you wake them up and you ask them,
well, what was that like?
And they say, nothing, I don’t remember anything,
but it’s hard to differentiate a mere failure of memory
from a genuine interruption in consciousness.
Whereas it’s not with, you know, interrupting speech,
you know, we know when we’ve done it.
And it’s just obvious that, you know,
you disrupt the right neural circuits
and, you know, you’ve disrupted speech.
So if you had to bet all your money on one camp or the other,
would you say, do you err on the side of panpsychism
where consciousness is really fundamental
to all of reality or more on the other side,
which is like, it’s a nice little side effect,
a useful like hack for us humans to survive.
Where, on that spectrum, where do you land
when you think about consciousness,
especially from an engineering perspective?
I’m truly agnostic on this point, I mean, I think I’m,
you know, it’s kind of in coin toss mode for me.
I don’t know, and panpsychism is not so compelling to me.
Again, it just seems unfalsifiable.
I wouldn’t know how the universe would be different
if panpsychism were true.
It’s just to remind people panpsychism is this idea
that consciousness may be pushed all the way down
into the most fundamental constituents of matter.
So there might be something that it’s like
to be an electron or, you know, a quark,
but then you wouldn’t expect anything to be different
at the macro scale, or at least I wouldn’t expect
anything to be different.
So it may be unfalsifiable.
It just might be that reality is not something
we’re as in touch with as we think we are,
and that at its base layer to kind of break it into mind
and matter as we’ve done ontologically
is to misconstrue it, right?
I mean, there could be some kind of neutral monism
at the bottom, and this, you know,
this idea doesn’t originate with me.
This goes all the way back to Bertrand Russell
and others, you know, 100 plus years ago,
but I just feel like the concepts we’re using
to divide consciousness and matter
may in fact be part of our problem, right?
Where the rubber hits the road psychologically here
are things like, well, what is death, right?
Like do we, any expectation that we survive death
or any part of us survives death,
that really seems to be many people's concern here.
Well, I tend to believe just as a small little tangent,
like I’m with Ernest Becker on this,
that there’s some, it’s interesting to think
about death and consciousness,
which one is the chicken, which one is the egg,
because it feels like death could be the very thing,
like our knowledge of mortality could be the very thing
that creates the consciousness.
Yeah, well, then you’re using consciousness
differently than I am.
I mean, so for me, consciousness is just the fact
that the lights are on at all,
there’s an experiential quality to anything.
So much of the processing that’s happening
in our brains right now certainly seems to be happening
in the dark, right?
Like it’s not associated with this qualitative sense
that there’s something that it’s like to be that part
of the mind doing that mental thing.
But for other parts, the lights are on
and we can talk about,
and whether we talk about it or not,
we can feel directly that there’s something
that it’s like to be us.
There’s something, something seems to be happening, right?
And the seeming in our case is broken into vision
and hearing and proprioception
and taste and smell and thought and emotion.
I mean, there are the contents of consciousness
that we are familiar with
and that we can have direct access to
in any present moment when we’re, quote, conscious.
And even if we’re confused about them,
even if we’re asleep and dreaming
and it’s not a lucid dream,
we’re just totally confused about our circumstance,
what you can’t say is that we’re confused
about consciousness.
Like you can’t say that consciousness itself
might be an illusion because on this account,
it just means that things seem any way at all.
I mean, even like if this,
it seems to me that I’m seeing a cup on the table.
Now I could be wrong about that.
It could be a hologram.
I could be asleep and dreaming.
I could be hallucinating,
but the seeming part isn’t really up for grabs
in terms of being an illusion.
It’s not, something seems to be happening.
And that seeming is the context in which
every other thing we can notice about ourselves
can be noticed.
And it’s also the context in which certain illusions
can be cut through because we’re not,
we can be wrong about what it’s like to be us.
And we can, I’m not saying we’re incorrigible
with respect to our claims
about the nature of our experience,
but for instance, many people feel like they have a self
and they feel like it has free will.
And I’m quite sure at this point
that they’re wrong about that,
and that you can cut through those experiences
and then things seem a different way, right?
So it’s not that things don’t,
there aren’t discoveries to be made there
and assumptions to be overturned,
but this kind of consciousness is something
that I would think, it doesn’t just come online
when we get language.
It doesn’t just come online when we form a concept of death
or the finiteness of life.
It doesn’t require a sense of self, right?
So it doesn’t, it’s prior
to a differentiating self and other.
And I wouldn’t even think it’s necessarily limited to people.
I do think probably any mammal has this,
but certainly if you’re going to presuppose
that something about our brains is producing this, right?
And that’s a very safe assumption,
even though we can’t,
even though you can argue the jury’s still out
to some degree,
then it’s very hard to draw a principled line
between us and chimps,
or chimps and rats even in the end,
given the underlying neural similarities.
So, and I don’t know phylogenetically,
I don’t know how far back to push that.
There are people who think single cells might be conscious
or that flies are certainly conscious.
They’ve got something like 100,000 neurons in their brains.
I mean, there’s a lot going on even in a fly, right?
But I don’t have intuitions about that.
But it’s not in your sense an illusion you can cut through.
I mean, to push back,
the alternative version could be it is an illusion
constructed by, just by humans.
I’m not sure I believe this,
but in part of me hopes this is true
because it makes it easier to engineer,
is that humans are able to contemplate their mortality
and that contemplation in itself creates consciousness.
That like the rich lights on experience.
So the lights don’t actually even turn on
in the way that you’re describing until after birth
in that construction.
So do you think it’s possible that that is the case?
That it is a sort of construct of the way we deal,
almost like a social tool to deal with the reality
of the world, the social interaction with other humans?
Or is, because you’re saying the complete opposite,
which is it’s like fundamental to single cell organisms
and trees and so on.
Right, well, yeah, so I don’t know how far down to push it.
I don’t have intuitions that single cells
are likely to be conscious,
but they might be, and again, it could be unfalsifiable.
But as far as babies not being conscious,
or you don’t become conscious
until you can recognize yourself in a mirror
or you have a conversation or treat other people.
First of all, babies treat other people as others
far earlier than we have traditionally given them credit for.
And they certainly do it before they have language, right?
So it’s got to proceed language to some degree.
And I mean, you can interrogate this for yourself
because you can put yourself in various states
that are rather obviously not linguistic.
Meditation allows you to do this.
You can certainly do it with psychedelics
where it’s just your capacity for language
has been obliterated and yet you’re all too conscious.
In fact, I think you could make a stronger argument
for things running the other way,
that there’s something about language and conceptual thought
that is eliminative of conscious experience,
that we’re potentially much more conscious of data,
sense data and everything else than we tend to be,
and we have trimmed it down
based on how we have acquired concepts.
And so like, when I walk into a room like this,
I know I’m walking into a room,
I have certain expectations of what is in a room.
I would be very surprised to see wild animals in here
or a waterfall or there are things I’m not expecting,
but I can know I’m not expecting them
or I’m expecting their absence
because of my capacity to be surprised
once I walk into a room and I see a live gorilla or whatever.
So there’s structure there that we have put in place
based on all of our conceptual learning
and language learning.
And it causes us not to,
and one of the things that happens when you take psychedelics
and you just look as though for the first time at anything,
it becomes incredibly overloaded with,
it can become overloaded with meaning
and just the torrents of sense data that are coming in
in even the most ordinary circumstances
can become overwhelming for people.
And that tends to just obliterate one’s capacity
to capture any of it linguistically.
And as you’re coming down, right?
Have you done psychedelics?
Have you ever done acid or?
Not acid, mushroom, and that’s it.
And also edibles,
but there’s some psychedelic properties to them.
But yeah, mushrooms several times
and always had an incredible experience.
Exactly the kind of experience you’re referring to,
which is if it’s true that language constrains
our experience,
it felt like I was removing some of the constraints.
Because even just the most basic things
were beautiful in the way
that I wasn’t able to appreciate previously,
like trees and nature and so on.
Yeah, and the experience of coming down
is an experience of encountering the futility
of capturing what you just saw a moment ago in words.
Especially if you have any part of your self concept
and your ego program is to be able
to capture things in words.
And if you’re a writer or a poet or a scientist
or someone who wants to just encapsulate
the profundity of what just happened,
the total fatuousness of that enterprise
when you have taken a whopping dose of psychedelics
and you begin to even gesture at describing it to yourself,
so that you could describe it to others.
It’s just, it’s like trying to thread a needle
using your elbows.
I mean, it’s like you’re trying something that can’t,
it’s like the mere gesture proves it’s impossibility.
And it’s, so yeah, for me that suggests just empirically
on the first person side that it’s possible
to put yourself in a condition
where it’s clearly not about language
structuring your experience
and you’re having much more experience than you tend to.
So language is primary for some things, certainly for certain kinds of concepts and certain kinds of semantic understanding of the world. But there's clearly more to mind than the conversation we're having with ourselves or that we can have with others.
Can we go to that world of psychedelics for a bit?
Sure.
What do you think? So Joe Rogan and apparently many others meet elves on DMT; a lot of people report these kinds of creatures that they see. And again, it's probably the failure of language to describe that experience, but DMT is an interesting one. As you're aware, there's a bunch of studies going on in psychedelics currently, MDMA and psilocybin at Johns Hopkins and many other places, but DMT they all speak of as some extra super level of a psychedelic.
Yeah, do you have a sense of where it is our mind goes
on psychedelics, but in DMT especially?
Well, unfortunately I haven’t taken DMT.
Unfortunately or fortunately?
Unfortunately.
Although it’s, I presume it’s in my body
as it is in everyone’s brain and many, many plants
apparently, but I’ve wanted to take it.
I haven’t been, I had an opportunity that was presented
itself that where it was obviously the right thing
for me to be doing, but for those who don’t know,
DMT is often touted as the most intense psychedelic
and also the shortest acting.
I mean, you smoke it and it’s basically a 10 minute
experience or a three minute experience within like
a 10 minute window that when you’re really down
after 10 minutes or so, and Terrence McKenna
was a big proponent of DMT.
That was his, the center of the bullseye for him
psychedelically, apparently.
And it does, it is characterized, it seems for many people
by this phenomenon, which is unlike virtually
any other psychedelic experience, which is your,
it’s not just your perception being broadened or changed.
It’s you according to Terrence McKenna feeling fairly
unchanged, but catapulted into a different circumstance.
It's like you've been shot elsewhere and find yourself
in relationship to other entities of some kind, right?
So the place is populated with things that seem
not to be your mind.
So it does feel like travel to another place
because you’re unchanged yourself.
According, again, I just have this on the authority
of the people who have described their experience,
but it sounds like it’s pretty common.
It sounds like it’s pretty common for people
not to have the full experience because it’s apparently
pretty unpleasant to smoke.
So it’s like getting enough on board in order to get shot
out of the cannon and land among the,
what McKenna called self transforming machine elves
that appeared to him like jeweled Faberge egg,
like self drippling basketballs that were handing him
completely uninterpretable reams of profound knowledge.
It’s an experience I haven’t had.
So I just have to accept that people have had it.
I would just point out that our minds are clearly capable
of producing apparent others on demand
that are totally compelling to us, right?
There’s no limit to our ability to do that
as anyone who’s ever remembered a dream can attest.
Every night we go to sleep,
some of us don’t remember dreams very often,
but some dream vividly every night.
And just think of how insane that experience is.
I mean, you’ve forgotten where you were, right?
That’s the strangest part.
I mean, this is psychosis, right?
You have lost your mind.
You have lost your connection to your episodic memory
or even your expectations that reality won’t undergo
wholesale changes a moment
after you have closed your eyes, right?
Like you’re in bed, you’re watching something on Netflix,
you’re waiting to fall asleep,
and then the next thing that happens to you is impossible
and you’re not surprised, right?
You’re talking to dead people,
you’re hanging out with famous people,
you’re someplace you couldn’t physically be,
you can fly and even that’s not surprising, right?
So you’ve lost your mind,
but relevantly for this.
Or found it.
You found something.
I mean, lucid dreaming is very interesting
because then you can have the best of both circumstances
and then it can become systematically explored.
But what I mean by found, just to sort of interrupt,
is like if we take this brilliant idea
that language constrains us, grounds us,
language and other things of the waking world ground us,
maybe it is that you’ve found the full capacity
of your cognition when you dream or when you do psychedelics.
You’re stepping outside the little human cage,
the cage of the human condition to open the door
and step out and look around and then go back in.
Well, you’ve definitely stepped out of something
and into something else, but you’ve also lost something,
right, you’ve lost certain capacities.
Memory?
Well, just, yeah, in this case,
you literally didn’t, you don’t have enough presence of mind
in the dream state or even in the psychedelic state
if you take enough.
To do math.
There’s no psychological,
there’s very little psychological continuity with your life
such that you’re not surprised to be in the presence
of someone who should be, you should know is dead
or you should know you’re not likely to have met
by normal channels, right, you’re now talking
to some celebrity and it turns out you’re best friends,
right, and you’re not even, you have no memory
of how you got there, you’re like,
how did you get into the room?
You’re like, did you drive to this restaurant?
You have no memory and none of that’s surprising to you.
So you’re kind of brain damaged in a way,
you’re not reality testing in the normal way.
The fascinating possibility is that there’s probably
thousands of people who’ve taken psychedelics
of various forms and have met Sam Harris on that journey.
Well, I would put it more likely in dreams,
not, you know, because with psychedelics,
you don’t tend to hallucinate in a dreamlike way.
I mean, so DMT is giving you an experience of others,
but it seems to be nonstandard.
It’s not like, it’s not just like dream hallucinations,
but to the point of coming back to DMT,
the people want to suggest,
and Terence McKenna certainly did suggest
that because these others are so obviously other
and they’re so vivid, well, then they could not possibly
be the creation of my own mind,
but every night in dreams, you create a compelling
or what is to you at the time,
a totally compelling simulacrum of another person, right?
And that just proves the mind is capable of doing it.
Now, the phenomenon of lucid dreaming shows
that the mind isn’t capable of doing everything you think
it might be capable of even in that space.
So one of the things that people have discovered
in lucid dreams, and I haven’t done a lot of lucid dreaming,
so I can’t confirm all of this, I can confirm some of it.
Apparently in every house, in every room
in the mansion of dreams,
all light switches are dimmer switches.
Like if you go into a dark room and flip on the light,
it gradually comes up.
It doesn’t come up instantly on demand
because apparently this is covering for the brain’s
inability to produce from a standing start
visually rich imagery on demand.
So I haven’t confirmed that, but that was,
people have done research on lucid dreaming claim
that it’s all dimmer switches.
But one thing I have noticed,
and people can check this out, is that in a dream,
if you look at text, a page of text or a sign
or a television that has text on it,
and then you turn away and you look back at that text,
the text will have changed, right?
The total is it’s just a chronic instability,
graphical instability of text in the dream state.
And I don’t know if that, maybe that’s,
someone can confirm that that’s not true for them,
but whenever I’ve checked that out,
that has been true for me.
So it keeps generating it like real time
from a video game perspective.
Yeah, it’s rendering, it’s re rendering it for some reason.
What’s interesting, I actually,
I don’t know how I found myself in this sets
of that part of the internet,
but there’s quite a lot of discussion
about what it’s like to do math on LSD.
Because apparently one of the deepest thinking processes is that of mathematicians or theoretical computer scientists, basically anyone doing anything that involves math and proofs, where you have to think creatively but also deeply, and you have to think for many hours at a time. And so they're always looking for ways to ask, are there any sparks of creativity that could be injected?
And apparently out of all the psychedelics,
the worst is LSD because it completely destroys
your ability to do math well.
And I wonder whether that has to do with your ability
to visualize geometric things in a stable way
in your mind and hold them there
and stitch things together,
which is often what’s required for proofs.
But again, it’s difficult to kind of research
these kinds of concepts, but it does make me wonder
where, what are the spaces, how’s the space of things
you’re able to think about and explore
morphed by different psychedelics
or dream states and so on, and how’s that different?
How much does it overlap with reality?
And what is reality?
Is there a waking state reality?
Or is it just a tiny subset of reality
and we get to take a step in other versions of it?
We tend to think very much in a space time,
four dimensional, there’s a three dimensional world,
there’s time, and that’s what we think about reality.
And we think of traveling as walking from point A
to point B in the three dimensional world.
But that’s a very kind of human surviving,
trying not to get eaten by a lion conception of reality.
What if traveling is something like we do with psychedelics
and meet the elves?
What if it’s something, what if thinking
or the space of ideas as we kind of grow
and think through ideas, that’s traveling?
Or what if memory is traveling?
I don’t know if you have a favorite view of reality
or if you had, by the way, I should say,
excellent conversation with Donald Hoffman.
Yeah, yeah, he’s interesting.
Is there any inkling of his sense in your mind
that reality is very far from,
actual like objective reality is very far
from the kind of reality we imagine,
we perceive and we play with in our human minds?
Well, the first thing to grant
is that we’re never in direct contact with reality,
whatever it is, unless that reality is consciousness, right?
So we’re only ever experiencing consciousness
and its contents.
And then the question is how does that circumstance relate
to quote reality at large?
And Donald Hoffman is somebody who’s happy to speculate,
well, maybe there isn’t a reality at large.
Maybe it’s all just consciousness on some level.
And that’s interesting.
That runs into, to my eye, various philosophical problems
that, or at least you have to do a lot,
you have to add to that picture of idealism for me.
That’s usually all the whole family of views
that would just say that the universe is just mind
or just consciousness at bottom,
we’ll go by the name of idealism in Western philosophy.
You have to add to that idealistic picture
all kinds of epicycles and kind of weird coincidences to get the predictability of our experience
and the success of materialist science
to make sense in that context, right?
And so the fact that we can, what does it mean to say
that there’s only consciousness at bottom, right?
Nothing outside of consciousness
because no one’s ever experienced anything
outside of consciousness.
There’s no scientist has ever done an experiment
where they were contemplating data,
no matter how far removed from our sense bases,
whether it’s they’re looking at the Hubble deep field
or they’re smashing atoms or whatever tools they’re using,
they’re still just experiencing consciousness
and its various deliverances
and layering their concepts on top of that.
So that’s always true.
And yet that somehow doesn’t seem to capture
the character of our continually discovering
that our materialist assumptions are confirmable, right?
So take the fact that we unleash this fantastic amount
of energy from within an atom, right?
First, we have the theoretical suggestion
that it’s possible, right?
We come back to Einstein,
there’s a lot of energy in that matter, right?
And what if we could release it, right?
And then we perform an experiment that in this case,
you know, the Trinity test site in New Mexico,
where the people who are most adequate to this conversation,
people like Robert Oppenheimer
are standing around,
not altogether certain it’s going to work, right?
They’re performing an experiment.
They’re wondering what’s gonna happen.
They’re wondering if their calculations around the yield
are off by orders of magnitude.
Some of them are still wondering
whether the entire atmosphere of earth
is gonna combust, right?
That the nuclear chain reaction is not gonna stop.
And lo and behold,
there was that energy to be released
from within the nucleus of an atom.
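As a rough aside to the claim being gestured at here (a back-of-the-envelope sketch, not part of the conversation): the "lot of energy in that matter" is Einstein's mass-energy equivalence,

\[
E = mc^{2} \approx 1\,\mathrm{kg} \times \left(3\times10^{8}\,\mathrm{m/s}\right)^{2} \approx 9\times10^{16}\,\mathrm{J},
\]

on the order of twenty megatons of TNT per kilogram of matter; a fission device like the Trinity test converts only roughly a gram of mass into energy, which is why its yield was measured in kilotons rather than megatons.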
And that could, so it’s just what the picture one forms
from those kinds of experiments.
And just the knowledge,
it’s just our understanding of evolution.
Just the fact that the earth is billions of years old
and life is hundreds of millions of years old.
And we weren’t here to think about any of those things.
And all of those processes were happening therefore
in the dark.
And they are the processes that allowed us to emerge,
you know, from prior life forms in the first place.
To say that it’s all a mess,
that nothing exists,
outside of consciousness, conscious minds
of the sort that we experience.
It just seems,
it seems like a bizarrely anthropocentric claim,
you know, analogous to, you know,
the moon isn’t there if no one’s looking at it, right?
I mean, the moon as a moon isn’t there
if no one’s looking at it.
I’ll grant that,
because that’s already a kind of fabrication
born of concepts, but the idea that there’s nothing there,
that there’s nothing that corresponds
to what we experience as the moon,
unless someone’s looking at it,
that just seems just a way too parochial way
to set out on this journey of discovery.
There is something there.
There’s a computer waiting to render the moon
when you look at it.
The capacity for the moon to exist is there.
So if we’re indeed living in a simulation,
which I find a compelling thought experiment,
it’s possible that there is this kind of rendering mechanism,
but not in a silly way that we think about in video games,
but in some kind of more fundamental physics way.
And we have to account for the fact
that it renders experiences that no one has had yet,
that no one has any expectation of having.
It can violate the expectations of everyone lawfully.
And then there’s some lawful understanding
of why that’s so.
It’s like, I mean, just to bring it back to mathematics,
I’m like, certain numbers are prime,
whether we have discovered them or not.
There’s the highest prime number that anyone can name now.
And then there’s the next prime number
that no one can name, and it’s there.
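A small aside to make that point concrete (an illustrative sketch, not part of the conversation; the bound of one million is an arbitrary example): whatever bound you pick, the next prime past it is already fixed, whether or not anyone has ever named it.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small illustrative numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def next_prime(after: int) -> int:
    """Return the smallest prime strictly greater than `after`."""
    candidate = after + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate


# The next prime past a million is there whether or not anyone has looked for it.
print(next_prime(10**6))  # 1000003
```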
So it’s like, to say that our minds are putting it there,
that what we know as mind in ourselves
is in some way, in some sense, putting it there.
The base layer of reality is consciousness, right?
That we’re identical to the thing
that is rendering this reality.
There’s some, you know, hubris is the wrong word,
but it’s like, it’s okay if reality is bigger
than what we experience, you know?
And it has structure that we can’t anticipate,
and that isn’t just,
I mean, again, there’s certainly a collaboration
between our minds and whatever is out there
to produce what we call, you know, the stuff of life.
But it’s not, the idea that it’s,
I don’t know, I mean, there are a few stops
on the train of idealism and kind of new age thinking
and Eastern philosophy that I don’t,
philosophically, I don’t see a need to take.
I mean, experientially and scientifically,
I feel like it’s, you can get everything you want
from acknowledging that consciousness
has a character that can be explored from its own side,
so that you’re bringing kind of the first person experience
back into the conversation about, you know,
what is a human mind and, you know, what is true?
And you can explore it with different degrees of rigor,
and there are things to be discovered there,
whether you’re using a technique like meditation
or psychedelics, and that these experiences
have to be put in conversation with what we understand
about ourselves from a third person side,
neuroscientifically or in any other way.
But to me, the question is, what if reality,
the sense I have from this kind of, you play shooters?
No.
There’s a physics engine that generates, that’s pretty.
Yeah, you mean first person shooter games?
Yes, yes, sorry.
Not often, but yes.
I mean, there’s a physics engine
that generates consistent reality, right?
My sense is the same could be true for a universe
in the following sense, that our conception of reality
as we understand it now in the 21st century
is a tiny subset of the full reality.
It’s not that the reality that we conceive of that’s there,
the moon being there is not there somehow.
It’s that it’s a tiny fraction of what’s actually out there.
And so the physics engine of the universe
is just maintaining the useful physics,
the useful reality, quote unquote,
for us to have a consistent experience as human beings.
But maybe we descendants of apes really only understand like 0.0001% of the actual physics of reality.
We can even just start with the consciousness thing,
but maybe our minds are just,
we’re just too dumb by design.
Yeah, I, that truly resonates with me
and I’m surprised it doesn’t resonate more
with most scientists that I talk to.
I mean, when you just look at how close we are to chimps, right?
And chimps don’t know anything, right?
Clearly they have no idea what’s going on, right?
And then you get us,
but then it’s only a subset of human beings
that really understand much of what we’re talking about
in any area of specialization.
And if they all died in their sleep tonight, right?
You’d be left with people who might take a thousand years
to rebuild the internet, if ever, right?
I mean, literally it’s like,
and I would extend this to myself.
I mean, there are areas of scientific specialization
where I have no discernible competence.
I mean, I spent no time on it.
I have not acquired the tools.
It would just be an article of faith for me to think
that I could acquire the tools
to actually make a breakthrough in those areas.
And I mean, your own area is one.
I mean, I’ve never spent any significant amount of time
trying to be a programmer,
but it’s pretty obvious I’m not Alan Turing, right?
It’s like, if that were my capacity,
I would have discovered that in myself.
I would have found programming irresistible.
My first false starts in learning, I think it was C,
it was just, you know, I bounced off.
It’s like, this was not fun.
I mean, trying to figure out the syntax error that's causing this thing not to compile was just a fucking awful experience.
I hated it, right?
I hated every minute of it.
So it was not, so if it was just people like me left,
like when do we get the internet again, right?
And we lose, we lose, you know, we lose the internet.
When do we get it again, right?
When do we get anything like a proper science
of information, right?
You need a Claude Shannon or an Alan Turing
to plant a flag in the ground right here and say,
all right, can everyone see this?
Even if you don’t quite know what I’m up to,
you all have to come over here to make some progress.
And, you know, there are, you know,
hundreds of topics where that’s the case.
So we barely have a purchase on making anything
like discernible intellectual progress in any generation.
And yeah, I’m just, Max Tegmark makes this point.
He’s one of the few people who does in physics.
If you just look at the numbers,
if you just take the truth of evolution seriously, right?
And realize that there’s nothing about us
that has evolved to understand reality perfectly.
I mean, we’re just not that kind of ape, right?
There’s been no evolutionary pressure along those lines.
So we're making do with tools
that were designed for fights with sticks and rocks, right?
And it’s amazing we can do as much as we can.
I mean, you know, you and I are just sitting here on the back of having received an mRNA vaccine that has certainly changed our lives given what the last year was like.
And it’s gonna change the world
if rumors of coming miracles are borne out.
I mean, it’s now, it seems likely we have a vaccine
coming for malaria, right?
Which has been killing millions of people a year
for as long as we’ve been alive.
I think it’s down to like 800,000 people a year now
because we’ve spread so many bed nets around,
but it was like two and a half million people every year.
It’s amazing what we can do, but yeah, I have,
if in fact the answer at the back of the book of nature
is you understand 0.1% of what there is to understand
and half of what you think you understand is wrong,
that would not surprise me at all.
It is funny to look at our evolutionary history,
even back to chimps, I’m pretty sure even chimps
thought they understood the world well.
So at every point in that timeline
of evolutionary development throughout human history,
there’s a sense like there’s no more,
you hear this message over and over,
there’s no more things to be invented.
But a hundred years ago there were,
there’s a famous story, I forget which physicist told it,
but there were physicists telling
their undergraduate students not to go into,
to get graduate degrees in physics
because basically all the problems had been solved.
And this is like around 1915 or so.
Turns out they were wrong.
I’m gonna ask you about free will.
Oh, okay.
You’ve recently released an episode of your podcast,
Making Sense, for those with a shorter attention span,
basically summarizing your position on free will.
I think it was under an hour and a half.
Yeah, yeah.
It was brief and clear.
So allow me to summarize the summary, TLDR,
and maybe you tell me where I’m wrong.
So free will is an illusion,
and even the experience of free will is an illusion.
Like we don’t even experience it.
Am I good in my summary?
Yeah, this is a line that’s a little hard
to scan for people.
I say that it’s not merely that free will is an illusion.
The illusion of free will is an illusion.
Like there is no illusion of free will.
And that is a, unlike many other illusions,
that’s a more fundamental claim.
It’s not that it’s wrong, it’s not even wrong.
I mean, I guess that was I think Wolfgang Pauli
who derided one of his colleagues or enemies
with that aspersion about his theory in quantum mechanics.
So there are things that, there are genuine illusions.
There are things that you do experience
and then you can kind of punch through that experience,
or you can’t actually experience,
you can’t experience them any other way.
It’s just, we just know it’s not a veridical experience.
You just take like a visual illusion.
There are visual illusions that,
a lot of these come to me on Twitter these days.
There’s these amazing visual illusions
where like every figure in this GIF seems to be moving,
but nothing in fact is moving.
You can just like put a ruler on your screen
and nothing’s moving.
Some of those illusions you can’t see any other way.
I mean, they’re just, they’re hacking aspects
of the visual system that are just eminently hackable
and you have to use a ruler to convince yourself
that the thing isn’t actually moving.
Now there are other visual illusions
where you’re taken in by it at first,
but if you pay more attention,
you can actually see that it’s not there, right?
Or it’s not how it first seemed.
Like the Necker cube is a good example of that.
Like the Necker cube is just that schematic of a cube,
of a transparent cube, which pops out one way or the other.
Then one face can pop out and then the other face
can pop out.
But you can actually just see it as flat with no pop out,
which is a more veridical way of looking at it.
So there are subject,
there are kind of inward correlates to this.
And I would say that the sense of self and free will
are closely related.
I mean, I often describe them as two sides of the same coin,
but they’re not quite the same in their spuriousness.
I mean, so the sense of self is something that people,
I think, do experience, right?
It’s not a very clear experience, but it’s not,
I wouldn’t call the illusion of self an illusion,
but the illusion of free will is an illusion
in that as you pay more attention to your experience,
you begin to see that it’s totally compatible
with an absence of free will.
You don’t, I mean coming back to the place we started,
you don’t know what you’re gonna think next.
You don’t know what you’re gonna intend next.
You don’t know what’s going to just occur to you
that you must do next.
You don’t know how much you are going to feel
the behavioral imperative to act on that thought.
If you suddenly feel, oh, I don’t need to do that.
I can do that tomorrow.
You don’t know where that comes from.
You didn’t know that was gonna arise.
You didn’t know that was gonna be compelling.
All of this is compatible with some evil genius
in the next room just typing in code into your experience.
It’s like this, okay, let’s give him the,
oh my God, I just forgot it was gonna be our anniversary
in one week thought, right?
Give him the cascade of fear.
Give him this brilliant idea for the thing he can buy
that’s gonna take him no time at all
and this overpowering sense of relief.
All of our experiences is compatible
with the script already being written, right?
And I’m not saying the script is written.
I’m not saying that fatalism is the right way
to look at this, but we just don’t have
even our most deliberate voluntary action
where we go back and forth between two options,
thinking about the reason for A
and then reconsidering and going,
thinking harder about B and just going
eeny, meeny, miny, moe until the end of the hour.
However laborious you can make it,
there is an utter mystery at your back
finally promoting the thought or intention
or rationale that is most compelling
and therefore behaviorally effective.
And this can drive some people a little crazy.
So I usually preface what I say about free will
with the caveat that if thinking about your mind this way
makes you feel terrible, well then stop.
You get off the ride, switch the channel.
You don’t have to go down this path.
But for me and for many other people,
it’s incredibly freeing to recognize this about the mind
because one, you realize that you’re,
cutting through the illusion of the self
is immensely freeing for a lot of reasons
that we can talk about separately,
but losing the sense of free will does
two things very vividly for me.
One is it totally undercuts the basis for,
the psychological basis for hatred.
Because when you think about the experience
of hating other people, what that is anchored to
is a feeling that they really are
the true authors of their actions.
I mean, if someone is doing something
that you find so despicable, right?
Let’s say they’re targeting you unfairly, right?
They’re maligning you on Twitter or they’re suing you
or they’re doing something, they broke your car window,
they did something awful
and now you have a grievance against them.
And you’re relating to them very differently emotionally
in your own mind than you would
if a force of nature had done this, right?
Or if it’s, if it had just been a virus
or if it had been a wild animal
or a malfunctioning machine, right?
Like to those things you don’t attribute
any kind of freedom of will.
And while you may suffer the consequences
of catching a virus or being attacked by a wild animal
or having your car break down or whatever,
it may frustrate you.
You don’t slip into this mode of hating the agent
in a way that completely commandeers your mind
and deranges your life.
I mean, you just don’t, I mean, there are people
who spend decades hating other people for what they did
and it’s just pure poison, right?
So it’s a useful shortcut to compassion and empathy.
Yeah, yeah.
But the question is, take this, what did we call it, the horse of consciousness?
Let’s call it the consciousness generator black box
that we don’t understand.
And is it possible that the script
that we’re walking along, that we’re playing,
that’s already written is actually being written
in real time.
It’s almost like you’re driving down a road
and in real time, that road is being laid down.
And this black box of consciousness that we don’t understand
is the place where the script is being generated.
So it’s not, it is being generated, it didn’t always exist.
So there’s something we don’t understand
that’s fundamental about the nature of reality
that generates both consciousness,
let’s call it maybe the self.
I don’t know if you want to distinguish between those.
Yeah, I definitely would, yeah.
You would, because there’s a bunch of illusions
we’re referring to.
There’s the illusion of free will,
there’s the illusion of self,
and there’s the illusion of consciousness.
You’re saying, I think you said there’s no,
you’re not as willing to say
there’s an illusion of consciousness.
You’re a little bit more.
In fact, I would say it’s impossible.
Impossible.
You’re a little bit more willing to say
that there’s an illusion of self,
and you’re definitely saying
there’s an illusion of free will.
Yes, I’m definitely saying there’s an illusion
that a certain kind of self is an illusion.
Not every, we mean many different things
by this notion of self.
So maybe I should just differentiate these things.
So consciousness can’t be an illusion
because any illusion proves its reality
as much as any other veridical perception.
I mean, if you’re hallucinating now,
that’s just as much of a demonstration of consciousness
as really seeing what’s a quote actually there.
If you’re dreaming and you don’t know it,
that is consciousness, right?
You can be confused about literally everything.
You can’t be confused about the underlying claim,
whether you make it linguistically or not,
but just the cognitive assertion
that something seems to be happening.
It’s the seeming that is the cash value of consciousness.
Can I take a tiny tangent?
So what if I am creating consciousness in my mind
to convince you that I’m human?
So it’s a useful social tool,
not a fundamental property of experience,
like of being a living thing.
What if it’s just like a social tool
to almost like a useful computational trick
to place myself into reality
as we together communicate about this reality?
And another way to ask that,
because you said it much earlier,
you talk negatively about robots as you often do.
So let me, because you’ll probably die first
when they take over.
No, I’m looking forward to certain kinds of robots.
I mean, I’m not, if we can get this right,
this would be amazing.
But you don’t like the robots that fake consciousness.
That’s what you,
you don’t like the idea of fake it till you make it.
Well, no, it’s not that I don’t like it.
It’s that I’m worried that we will lose sight
of the problem.
And the problem has massive ethical consequences.
I mean, if we create robots that really can suffer,
that would be a bad thing, right?
And if we really are committing a murder
when we recycle them, that would be a bad thing.
This is how I know you’re not Russian.
Why is it a bad thing that we create robots that can suffer?
Isn’t suffering a fundamental thing
from which like beauty springs?
Like without suffering,
do you really think we would have beautiful things
in this world?
Okay, that’s a tangent on a tangent.
We’ll go there.
I would love to go there, but let’s not go there just yet.
All right.
But I do think it would be, if anything is bad,
creating hell and populating it
with real minds that really can suffer in that hell,
that’s bad.
You are worse than any mass murderer we can name
if you create it.
I mean, this could be in robot form,
or more likely it would be in some simulation of a world
where we managed to populate it with conscious minds
whether we knew they were conscious or not.
And that world is a state that's unendurable. That's just taking seriously the thesis that mind, intelligence, and consciousness are ultimately substrate independent.
Right?
It doesn’t, you don’t need a biological brain
to be conscious.
You certainly don’t need a biological brain
to be intelligent.
Right?
So if we just imagine that consciousness at some point
comes along for the ride as you scale up in intelligence,
well then we could find ourselves creating conscious minds
that are miserable, right?
And that’s just like creating a person who’s miserable.
Right?
It could be worse than creating a person who’s miserable.
It could be even more sensitive to suffering.
Cloning them and maybe for entertainment
and watching them suffer.
Just like watching a person suffer for entertainment.
You know?
So, but back to your primary question here,
which is differentiating consciousness and self
and free will as concepts
and kind of degrees of illusoriness.
The problem with free will is that
what most people mean by it,
and this is where Dan Dennett
is gonna get off the ride here, right?
So like he doesn’t, he’s gonna disagree with me
that I know what most people mean by it.
But I have a very keen sense having talked about this topic
for many, many years
and seeing people get wrapped around the axle of it
and seeing in myself what it’s like to have felt
that I was a self that had free will
and then to no longer feel that way, right?
To know what it’s like to actually disabuse myself
of that sense cognitively and emotionally
and to recognize what’s left, what goes away
and what doesn’t go away on the basis of that epiphany.
I have a sense that I know what people think they have
in hand when they worry about whether free will exists.
And it is the flip side of this feeling of self.
It’s the flip side of feeling
like you are not merely identical to experience.
You feel like you’re having an experience.
You feel like you’re an agent
that is appropriating an experience.
There’s a protagonist in the movie of your life
and it is you.
It’s not just the movie, right?
It’s like there’s sights and sounds and sensations
and thoughts and emotions
and this whole cacophony of experience, of the felt experience of embodiment.
But there seems to be a rider on the horse
or a passenger in the body, right?
People don’t feel truly identical to their bodies
down to their toes.
They sort of feel like they have bodies.
They feel like they're minds in bodies
and that feels like a self, that feels like me.
And again, this gets very paradoxical
when you talk about the experience
of being in relationship to yourself
or talking to yourself, giving yourself a pep talk.
I mean, if you’re the one talking,
why are you also the one listening?
Like, why do you need the pep talk and why does it work
if you’re the one giving the pep talk, right?
Or if I say like, where are my keys?
Or if I’m looking for my keys,
why do I think the superfluous thought, where are my keys?
I know I’m looking for the fucking keys.
I’m the one looking, who am I telling
that we now need to look for the keys, right?
So that duality is weird, but leave that aside.
There’s the sense, and this becomes very vivid
when people try to learn to meditate.
Most people, they close their eyes
and they’re told to pay attention to an object
like the breath, say.
So you close your eyes and you pay attention to the breath
and you can feel it at the tip of your nose
or the rising and falling of your abdomen
and you’re paying attention
and you feel something vague there.
And then you think a thought, well, why the breath?
Why am I paying attention to the breath?
What’s so special about the breath?
And then you notice you’re thinking
and you’re not paying attention to the breath anymore.
And then you realize, okay, the practice is,
okay, I should notice thoughts
and then I should come back to the breath.
But this conventional starting point
of feeling like you are an agent, very likely in your head,
a locus of consciousness, a locus of attention
that can strategically pay attention
to certain parts of experience.
Like I can focus on the breath
and then I get lost in thought
and now I can come back to the breath
and I can open my eyes and I’m over here behind my face
looking out at a world that’s other than me
and there’s this kind of subject object perception.
And that is the default starting point of selfhood,
of subjectivity.
And married to that is the sense that
I can decide what to do next, right?
I am an agent who can pay attention to the cup.
I can listen to sounds.
There’s certain things that I can’t control.
Certain things are happening to me
and I just can’t control them.
So for instance, if someone asks,
well, can you not hear a sound, right?
Like don’t hear the next sound,
don’t hear anything for a second,
or don’t hear, I’m snapping my fingers, don’t hear this.
Where’s your free will?
You know, well, like just stop this from coming in.
You realize, okay, wait a minute.
My abundant freedom does not extend
to something as simple as just being able to pay attention
to something other than this.
Okay, well, so I’m not that kind of free agent,
but at least I can decide what I’m gonna do next
and I’m gonna pick up this water, right?
And there’s a feeling of identification
with the impulse, with the intention,
with the thought that occurs to you,
with the feeling of speaking.
Like what am I gonna say next?
Well, I’m saying it.
So here goes, this is me.
It feels like I’m the thinker.
I’m the one who’s in control.
But all of that is born of not really paying close attention
to what it’s like to be you.
And so this is where meditation comes in,
or this is where, again, you can get at this conceptually.
You can unravel the notion of free will
just by thinking certain thoughts,
but you can’t feel that it doesn’t exist
unless you can pay close attention
to how thoughts and intentions arise.
So the way to unravel it conceptually
is just to realize, okay, I didn’t make myself.
I didn’t make my genes.
I didn’t make my brain.
I didn’t make the environmental influences
that impinged upon this system for the last 54 years
that have produced my brain in precisely the state
it’s in right now, such and with all of the receptor weightings
and densities, and it’s just,
I’m exactly the machine I am right now
through no fault of my own as the experiencing self.
I get no credit and I get no blame
for the genetics and the environmental influences here.
And yet those are the only things
that contrive to produce my next thought
or impulse or moment of behavior.
And if you were going to add something magical
to that clockwork, like an immortal soul,
you can also notice that you didn’t produce your soul.
You can’t account for the fact
that you don’t have the soul of someone
who doesn’t like any of the things you like
or wasn’t interested in any of the things
you were interested in or was a psychopath
or had an IQ of 40.
I mean, there’s nothing about that
that the person who believes in a soul
can claim to have controlled.
And yet that is also totally dispositive
of whatever happens next.
But everything you’ve described now,
maybe you can correct me,
but it kind of speaks to the materialistic nature
of the hardware.
But even if you add magical ectoplasm software,
you didn’t produce that either.
I know, but if we can think about the actual computation
running on the hardware and running on the software,
there’s something you said recently
where you think of culture as an operating system.
So if we just remove ourselves a little bit
from the conception of human civilization
being a collection of humans
and rather us just being a distributed
computation system on which there’s
some kind of operating system running,
and then the computation that’s running
is the actual thing that generates
the interactions, the communications,
and maybe even free will, the experiences
of all those nodes.
Do you ever think of, do you ever try
to reframe the world in that way
where it’s like ideas are just using us,
thoughts are using individual nodes in the system,
and they’re just jumping around,
and they also have the ability to generate experiences
so that we can push those ideas along.
And basically the main organisms here
are the thoughts, not the humans.
Yeah, but then that erodes the boundary
between self and world.
Right.
So then there’s no self, really integrated self
to have any kind of will at all.
Like if you’re just a meme plex,
I mean, if you’re just a collection of memes,
and I mean, we’re all kind of like currents,
like eddies in this river of ideas, right?
So it’s like, and it seems to have structure,
but there’s no real boundary between that part
of the flow of water and the rest.
And I would say that much
of our mind answers to this kind of description.
I mean, so much of our mind is obviously not self-generated,
and you're not gonna find it
by looking in the brain.
It is the result of culture largely,
but also, you know, the genes on one side
and culture on the other meeting
to allow for manifestations of mind
that don’t, that aren’t actually bounded
by the person in any clear sense.
It was just, I mean, the example I often use here,
but there’s so many others is just the fact
that we’re following the rules of English grammar
to whatever degree we are.
It’s not that we certainly haven’t consciously represented
these rules for ourself.
We haven’t invented these rules.
We haven’t, I mean, there are norms of language use
that we couldn’t even specify because we haven’t,
you know, we’re not grammarians.
We’re not, we haven’t studied this.
We don’t even have the right concepts,
and yet we’re following these rules,
and we’re noticing, you know, we’re noticing as, you know,
an error when we fail to follow these rules,
and virtually every other cultural norm is like that.
I mean, these are not things we’ve invented.
You can consciously decide to scrutinize them
and override them, but, I mean, just think of,
just think of any social situation
where you’re with other people and you’re behaving
in ways that are culturally appropriate, right?
You’re not being, you know,
you’re not being wild animals together.
You’re following, you have some expectation
of how you shake a person’s hand
and how you deal with implements on a table,
how you have a meal together.
Obviously, this can change from culture to culture,
and people can be shocked
by how different those things are, right?
We, you know, we all have foods we find disgusting,
but in some countries, dog is not one of those foods, right?
And yet, you know, you and I presumably
would be horrified to be served dog.
Those are not norms that we’re,
they are outside of us in some way,
and yet they’re felt very viscerally.
I mean, they’re certainly felt in their violation.
You know, if you are, just imagine,
you’re in somebody’s home,
you’re eating something that tastes great to you,
and you happen to be in Vietnam or wherever,
you know, you didn’t realize dog was potentially
on the menu, and you find out that you’ve just eaten
10 bites of what is, you know, really a cocker spaniel,
and you feel this instantaneous urge to vomit, right,
based on an idea, right?
Like, so, like, you did not,
you’re not the author of that norm
that gave you such a powerful experience of its violation,
and I’m sure we can trace the moment in your history,
you know, vaguely, where it sort of got in.
I mean, very early on as kids,
you realize you’re treating dogs as pets
and not as food, or as potential food.
But yeah, no, it’s, but the point you just made
opens us to, like, we are totally permeable
to a sea of mind.
Yeah, but if we take the metaphor
of distributed computing systems,
each individual node
is part of performing a much larger computation,
but it's nevertheless in charge of the scheduling;
assuming it's running Linux,
it's doing the scheduling of processes
and constantly alternating between them.
That node is making those choices.
That node sure as hell believes it has free will,
and it actually has free will
because it’s making those hard choices,
but the choices ultimately are part
of a much larger computation that it can’t control.
Isn’t it possible for that node to still be,
that human node is still making the choice?
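To make the metaphor concrete, here is a minimal toy sketch in Python. It assumes nothing about how an actual Linux scheduler works; the task names and the "most backed-up queue" rule are purely illustrative. The node applies a deterministic local rule to decide which task runs next, yet the stream of tasks it chooses over is produced by a larger computation it does not control.

```python
import random

def global_workload(seed, steps):
    """The 'larger computation': deterministically (given the seed) decides
    which tasks arrive at this node and when. The node has no say in it."""
    rng = random.Random(seed)
    return [rng.choice(["net", "disk", "compute"]) for _ in range(steps)]

def node_scheduler(arrivals):
    """The node's local 'choices': a simple deterministic rule that picks
    which pending task runs at each tick (run the most backed-up queue)."""
    pending = {"net": 0, "disk": 0, "compute": 0}
    decisions = []
    for task in arrivals:
        pending[task] += 1                        # imposed from outside
        chosen = max(pending, key=pending.get)    # the node 'decides' what runs
        pending[chosen] -= 1                      # ...and runs it
        decisions.append(chosen)
    return decisions

# Same seed in, same workload, same 'choices' out, every single time.
print(node_scheduler(global_workload(seed=42, steps=10)))
```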
Well, yeah, it is.
So I’m not saying that your body
isn’t doing, really doing things, right?
And some of those things can be
conventionally thought of as choices, right?
So it’s like, I can choose to reach,
and it’s like, it’s not being imposed on me.
That would be a different experience.
Like, so there’s an experience of all,
you know, there’s definitely a difference
between voluntary and involuntary action.
There’s, so that has to get conserved.
By any account of the mind that jettisons free will,
you still have to admit that there’s a difference
between a tremor that I can’t control
and a purposeful motor action that I can control
and I can initiate on demand,
and it’s associated with intentions.
And it’s got efferent, you know, motor copy,
which is predictive so that I can notice errors.
You know, I have expectations.
When I reach for this,
if my hand were actually to pass through the bottle,
because it’s a hologram, I would be surprised, right?
And so that shows that I have an expectation
of just what my grasping behavior is gonna be like
even before it happens.
Whereas with a tremor,
you don’t have the same kind of thing going on.
That’s a distinction we have to make.
So I am, yes, I’m really, my intention to move,
which is in fact can be subjectively felt,
really is the proximate cause of my moving.
It’s not coming from elsewhere in the universe.
I’m not saying that.
So in that sense, the node is really deciding
to execute, you know, the subroutine now.
But that’s not the feeling
that has given rise to this conundrum of free will, right?
So the crucial thing is that people feel
like they could have done otherwise, right?
That’s the thing that,
so when you run back the clock of your life, right?
You run back the movie of your life,
you flip back a few pages in the novel of your life,
they feel that at this point,
they could behave differently than they did, right?
So like, but given, you know,
even given your distributed computing example,
it’s either a fully deterministic system
or it’s a deterministic system
that admits of some random, you know, influence.
In either case,
that’s not the free will people think they have.
The free will people think they have is, damn,
I shouldn’t have done that.
I just like, I shouldn’t have done that.
I could have done otherwise, right?
I should have done otherwise, right?
Like if you think about something
that you deeply regret doing, right?
Or that you hold someone else responsible for
because they really are the upstream agent
in your mind of what they did.
You know, that’s an awful thing that that person did
and they shouldn’t have done it.
So there is this illusion and it has to be an illusion
because there’s no picture of causation
that would make sense of it.
There’s this illusion that if you arrange the universe
exactly the way it was a moment ago,
it could have played out differently.
And the only way it could have played out differently
is if there’s randomness added to that,
but randomness isn’t what people feel
would give them free will, right?
If you tell me that, you know,
I only reached for the water bottle this time
because there’s a random number generator in there
kicking off values and it finally moved my hand,
that’s not the feeling of authorship.
That’s still not control.
You’re still not making that decision.
There’s actually, I don’t know if you’re familiar
with cellular automata.
It’s a really nice visualization
of how simple rules can create incredible complexity
that it’s like really dumb initial conditions to set,
simple rules applied, and eventually you watch this thing
and if the initial conditions are correct,
then you’re going to have emerged something
that to our perception system
looks like organisms interacting.
You can construct any kinds of worlds
and they’re not actually interacting.
They’re not actually even organisms.
And they certainly aren’t making decisions.
So there’s like systems you can create
that illustrate this point.
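For anyone who hasn't seen one, a minimal sketch of an elementary cellular automaton follows (Rule 110, one standard example; the grid size and step count are arbitrary). A fixed rule table and a single live cell are all there is, and yet complex, organism-looking structures appear. Nothing in it is an agent or makes a decision.

```python
def step(cells, rule=110):
    """Advance one row of 0/1 cells by an elementary cellular-automaton rule."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # neighborhood encoded as 0..7
        new.append((rule >> index) & 1)               # look that bit up in the rule number
    return new

# "Really dumb" initial condition: one live cell on an otherwise empty row.
row = [0] * 79 + [1]
for _ in range(40):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```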
The question is whether there could be some room
for, to use a 21st century term, magic,
back to the black box of consciousness.
Let me ask it this way.
If you’re wrong about your intuition about free will,
and somebody comes along to you
and proves to you that you didn't have the full picture,
what would that proof look like?
So that’s the problem, that’s why it’s not even an illusion
in my world because for me, it’s impossible to say
what the universe would have to be like
for free will to be a thing, right?
It doesn’t conceptually map onto any notion
of causation we have.
And that’s unlike any other spurious claim you might make.
So like if you’re gonna believe in ghosts, right?
I understand what that claim could be,
where like I don’t happen to believe in ghosts,
but it’s not hard for me to specify
what would have to be true for ghosts to be real.
And so it is with a thousand other things like ghosts,
right, so like, okay, so you’re telling me
that when people die, there’s some part of them
that is not reducible at all to their biology
that lifts off them and goes elsewhere
and is actually the kind of thing
that can linger in closets and in cupboards,
and it's immaterial,
but by some principle of physics
we don't totally understand, it can make sounds
and knock objects and even occasionally show up
so they can be visually beheld.
And it’s just, it seems like a miracle,
but it’s just some spooky noun in the universe
that we don’t understand, let’s call it a ghost.
That’s fine, I can talk about that all day.
The reasons to believe in it,
the reasons not to believe in it,
the way we would scientifically test for it,
what would have to be provable
so as to convince me that ghosts are real.
Free will isn’t like that at all.
There’s no description of any concatenation of causes
that precedes my conscious experience
that sounds like what people think they have
when they think they could have done otherwise
and that they really, that they, the conscious agent,
is really in charge, right?
Like if you don’t know what you’re going to think next,
right, and you can’t help but think it,
take those two premises on board.
You don’t know what it’s gonna be,
you can’t stop it from coming,
and until you actually know how to meditate,
you can’t stop yourself from
fully living out its behavioral or emotional consequences.
Right, like you have no choice there.
Mindfulness, you know,
arguably gives you another degree of freedom here.
It doesn’t give you free will,
but it gives you some other game to play
with respect to the emotional
and behavioral imperatives of thoughts.
But short of that, I mean,
the reason why mindfulness doesn’t give you free will
is because you can’t, you know,
you can’t account for why in one moment
mindfulness arises and in other moments it doesn’t, right?
But a different process is initiated
once you can practice in that way.
Well, if I could push back for a second.
By the way, I just have this thought bubble
popping up all the time of just two recently evolved chimps
arguing about the nature of consciousness.
It’s kind of hilarious.
So on that thread, you know,
if we’re, even before Einstein,
let’s say before Einstein,
we were to conceive about traveling
from point A to point B, say some point in the future,
we are able to realize through engineering
a way which is consistent with Einstein’s theory
that you can have wormholes.
You can travel from one point to another
faster than the speed of light.
And that would, I think, completely change our conception
of what it means to travel in the physical space.
And that completely transform our ability.
You talk about causality, but here let’s just focus
on what it means to travel through physical space.
Don’t you think it’s possible that there will be inventions
or leaps in understanding about reality
that will allow us to see that free will is actually real,
that we humans, somehow maybe linked
to this idea of consciousness,
are actually able to be authors of our actions?
It is a nonstarter for me conceptually.
It’s a little bit like saying,
could there be some breakthrough that will cause us
to realize that circles are really square
or that circles are not really round, right?
No, a circle is what we mean by a perfectly round form.
It’s not on the table to be revised.
And so I would say the same thing about consciousness.
It’s just like saying, is there some breakthrough
that would get us to realize that consciousness
is really an illusion?
I’m saying no, because the experience of an illusion
is as much a demonstration of what I’m calling consciousness
as anything else, right?
That is consciousness.
With free will, it’s a similar problem.
It’s like, again, it comes down to a picture of causality
and there’s no other picture on offer.
And what’s more, I know what it’s like
on the experiential side to lose the thing
to which it is clearly anchored, right?
Like the feel, like it doesn’t feel,
and this is the question that almost nobody asked.
People who are debating me on the topic of free will,
I’m, at 15 minute intervals, I’m making a claim
that I don’t feel this thing,
and they never become interested in,
well, what’s that like?
Like, okay, so you’re actually saying you don’t,
this thing isn’t true for you empirically.
It’s not just, because most people
who don’t believe in free will philosophically
also believe that we’re condemned to experience it.
Like, you just, you can’t live without this feeling, so.
So you’re actually saying you’re able
to experience the absence of the illusion of free will?
Yes, yes.
Are we talking about a few minutes at a time,
or does it require a lot of work, a lot of meditation,
or are you literally able to load that into your mind
and, like, play that moment?
Right now, right now, just in this conversation.
So it’s not absolutely continuous,
but it’s whenever I pay attention.
It’s like, and I would say the same thing
for the illusoriness of the self in the sense,
and again, we haven’t talked about this, so.
Can you still have the self and not have the free will
in mind at the same time?
Do they go at the same time?
This is the same, yeah, it’s the same thing.
They’re always holding hands when they walk out the door.
They really are two sides of the same coin.
But it’s just, it comes down to what it’s like
to try to get to the end of this sentence,
or what it’s like to finally decide
that it’s been long enough
and now I need another sip of water, right?
If I’m paying attention, now, if I’m not paying attention,
I’m probably, I’m captured by some other thought
and that feels a certain way, right?
And so that’s not, it’s not vivid,
but if I try to make vivid this experience of just,
okay, I’m finally gonna experience free will.
I’m gonna notice my free will, right?
Like it’s gotta be here, everyone’s talking about it.
Where is it?
I’m gonna pay attention to, I’m gonna look for it.
And I’m gonna create a circumstance
that is where it has to be most robust, right?
I’m not rushed to make this decision.
I’m not, it’s not a reflex.
I’m not under pressure.
I’m gonna take as long as I want.
I’m going to decide, it’s not trivial.
Like, so it’s not just like reaching with my left hand
or reaching with my right hand.
People don’t like those examples for some reason.
Let’s make a big decision.
Like, where should, what should my next podcast be on, right?
Who do I invite on the next podcast?
What is it like to make that decision?
When I pay attention,
there is no evidence of free will anywhere in sight.
It’s like, it doesn’t feel like,
it feels profoundly mysterious
to be going back between two people.
Like, is it gonna be person A or person B?
Got all my reasons for A and all my reasons why not
and all my reasons for B.
And there’s some math going on there
that I’m not even privy to
where certain concerns are trumping others.
And at a certain point, I just decide.
And yes, you can say I’m the node in the network
that has made that decision, absolutely.
I’m not saying it’s being piped to me from elsewhere,
but the feeling of what it’s like to make that decision
is totally without a sense,
a real sense of agency
because something simply emerges.
It’s literally as tenuous as
what’s the next sound I’m going to hear, right?
Or what’s the next thought that’s gonna appear?
And it just, something just appears, you know?
And if something appears to cancel that something,
like if I say, I’m gonna invite her
and then I’m about to send the email
and then I think, oh, no, no, no, I can’t do that.
There was a thing in that New York article I read
that I gotta talk to this guy, right?
That pivot at the last second,
you can make it as muscular as you want.
It always just comes out of the darkness.
It’s always mysterious.
So right, when you try to pin it down,
you really can’t ever find that free will.
If you construct an experiment for yourself
and you’re trying to really find that moment
when you’re actually making that controlled author decision,
it’s very difficult to do.
And we’re still, we’re still, we know at this point
that if we were scanning your brain
in some podcast guest choosing experiment, right?
We know at this point we would be privy
to who you’re going to pick before you are,
you the conscious agent.
If we could, again, this is operationally
a little hard to conduct,
but there’s enough data now to know
that something very much like this cartoon is in fact true
and will ultimately be undeniable for people.
They’ll be able to do it on themselves with some app.
If you’re deciding what to, you know,
where to go for dinner or who to have on your podcast
or ultimately, you know, who to marry, right?
Or what city to move to, right?
Like you can make it as big
or as small a decision as you want.
We could be scanning your brain in real time
and at a point where you still think you’re uncommitted,
we would be able to say with arbitrary accuracy,
all right, Lex is, he’s moving to Austin, right?
I didn’t choose that.
Yeah, he was choosing, it was gonna be Austin
or it was gonna be Miami.
He got, he’s catching one of these two waves,
but it’s gonna be Austin.
And at a point where you subjectively,
if we could ask you, you would say,
oh no, I’m still working over here.
I’m still thinking, I’m still considering my options.
And you’ve spoken to this,
in you thinking about other stuff in the world,
it’s been very useful to step away
from this illusion of free will.
And you argue that it’s probably makes a better world
because it can be compassionate
and empathetic towards others.
And towards oneself.
Towards oneself.
I mean, radically toward others
in that literally hate makes no sense anymore.
I mean, there are certain things
you can really be worried about, really want to oppose.
Really, I mean, I’m not saying
you’d never have to kill another person.
Like, I mean, self defense is still a thing, right?
But the idea that you’re ever confronting anything
other than a force of nature in the end
goes out the window, right?
Or does go out the window when you really pay attention.
I’m not saying that this would be easy to grok
if someone kills a member of your family.
I’m not saying you can just listen
to my 90 minutes on free will
and then you should be able to see that person
as identical to a grizzly bear or a virus.
Because there’s so, I mean, we are so evolved
to deal with one another as fellow primates
and as agents, but it’s, yeah,
when you’re talking about the possibility
of, you know, Christian, you know,
truly Christian forgiveness, right?
It’s like, you know, as testified to by, you know,
various saints of that flavor over the millennia.
Yeah, that is, the doorway to that is to recognize
that no one really at bottom made themselves.
And therefore everyone, what we’re seeing really
are differences in luck in the world.
We’re seeing people who are very, very lucky
to have had good parents and good genes
and to be in good societies and had good opportunities
and to be intelligent,
and to be, you know, not sociopathic,
like none of it is on them.
They’re just reaping the fruits of one lottery
after another, and then showing up in the world
on that basis.
And then so it is with, you know,
every malevolent asshole out there, right?
He or she didn’t make themself.
Even if that weren’t possible,
the utility for self compassion is also enormous
because it’s, when you just look at what it’s like
to regret something or to feel shame about something
or feel deep embarrassment, these states of mind
are some of the most deranging experiences anyone has.
And the indelible reaction to them,
you know, the memory of the thing you said,
you know, the memory of the wedding toast you gave
20 years ago that was just mortifying, right?
The fact that that can still make you hate yourself, right?
And like that psychologically,
that is a knot that can be untied, right?
Speak for yourself, Sam.
Yeah, yeah.
So clearly you’re not.
You gave a great toast.
It was my toast that mortified me.
No, no, that’s not what I was referring to.
I’m deeply appreciative in the same way
that you’re referring to of every moment I’m alive,
but I’m also powered by self hate often.
Like several things in this conversation already
that I’ve spoken, I’ll be thinking about,
like that was the dumbest thing.
You’re sitting in front of Sam Harris and you said that.
So like that, but that somehow creates
a richer experience for me.
Like I’ve actually come to accept that as a nice feature
however my brain was built.
I don’t think I want to let go of that.
Well, the thing you, I think the thing you want to let go of
is the suffering associated with it.
So like, so for me, so psychologically and ethically,
all of this is very interesting.
So I don’t think we ever,
we should ever get rid of things like anger, right?
So like hatred is, hatred is divorcible from anger
in the sense that hatred is this enduring state where,
you know, whether you’re hating somebody else
or hating yourself, it is just,
it is toxic and durable and ultimately useless, right?
Like it becomes self-nullifying, right?
Like you become less capable as a person
to solve any of your problems.
It’s not, it’s not instrumental in solving the problem
that is, that is, is occasioning all this hatred.
And anger for the most part isn’t either except
as a signal of salience that there’s a problem, right?
So if somebody does something that makes me angry,
that just promotes this situation
to conscious attention in a way that is stronger
than my not really caring about it, right?
And there are things that I think should make us angry
in the world and there’s the behavior of other people
that should make us angry because we should respond to it.
And so it is with yourself.
If I do something, you know, as a parent,
if I do something stupid that harms one of my daughters,
right, my belief, my experience of myself
and my beliefs about free will close the door to my saying,
well, I should have done otherwise in the sense
that if I could go back in time,
I would have actually effectively done otherwise.
No, I would do, given the same causes and conditions,
I would do that thing a trillion times in a row, right?
But, you know, regret and feeling bad about an outcome
are still important capacities because, yeah,
you know, like I desperately want my daughters
to be happy and healthy.
So if I’ve done something, you know,
if I crash the car when they’re in the car
and they get injured, right,
and I do it because I was trying to change a song
on my playlist or, you know, something stupid,
I’m gonna feel like a total asshole.
How long do I stew in that feeling of regret?
Right, and to like, what utility is there to extract
out of this error signal?
And then what do I do?
We’re always faced with the question of what to do next,
right, and how to best do that thing,
that necessary thing next.
And how much wellbeing can we experience while doing it?
Like how miserable do you need to be to solve a problem
in life and to help solve the problems
of people closest to you?
You know, how miserable do you need to be
to get through your to do list today?
Ultimately, I think you can be deeply happy
going through all of it, right?
And even navigating moments that are scary
and, you know, really destabilizing to ordinary people.
And, I mean, I think, you know, again,
I’m always up kind of at the edge of my own capacities here
and there are all kinds of things that stress me out
and worry me and I’m especially something if it’s,
you’re gonna tell me it’s something with, you know,
the health of one of my kids, you know,
it’s very hard for me, like, it’s very hard for me
to be truly equanimous around that.
But equanimity is so useful
the moment you’re in response mode, right?
Because, I mean, the ordinary experience for me
of responding to what seems like a medical emergency
for one of my kids is to be obviously super energized
by concern to respond to that emergency.
But then once I’m responding to that emergency,
but then once I’m responding,
all of my fear and agitation and worry and, oh my God,
what if this is really something terrible?
But finding any of those thoughts compelling,
that only diminishes my capacity as a father
to be good company while we navigate
this really turbulent passage, you know?
As you’re saying this actually,
one guy comes to mind, which is Elon Musk.
One of the really impressive things to me
was to observe how many dramatic things
he has to deal with throughout the day at work,
but also if you look through his life, family too,
and how he’s very much actually, as you’re describing,
basically a practitioner of this way of thought,
which is you’re not in control.
You’re basically responding
no matter how traumatic the event,
and there’s no reason to sort of linger on the,
on the negative feelings around that.
Well, so, I mean, he, but he’s in a very specific situation,
which is unlike normal life,
you know, even his normal life,
but normal life for most people,
because when you just think of like, you know,
he’s running so many businesses,
and he’s, they’re very, they’re not,
they’re non, highly nonstandard businesses.
So what he’s seen is everything that gets to him
is some kind of emergency.
Like it wouldn’t be getting to him.
If it needs his attention,
there’s a fire somewhere.
So he’s constantly responding to fires
that have to be put out.
So there’s no default expectation
that there shouldn’t be a fire, right?
But in our normal lives, we live,
most of us, I mean, most of us who are lucky, right?
Not everyone, obviously on earth,
but most of us who are at some kind of cruising altitude
in terms of our lives,
where we’re reasonably healthy,
and life is reasonably orderly,
and the political apparatus around us
is reasonably functionable, functional,
functionable.
So I said, functionable for the first time in my life
through no free will of my own.
It's like, I noticed those errors,
and they do not feel like agency,
and nor does the success of an utterance feel like agency.
He, when you’re looking at normal human life, right,
where you’re just trying to be happy and healthy,
and get your work done,
there’s this default expectation
that there shouldn’t be fires.
People shouldn’t be getting sick or injured.
We shouldn’t be losing vast amounts of our resources.
We should, like, so when something really stark
like that happens,
people don’t have a, people don’t have that muscle
that they’re, like, I’ve been responding to emergencies
all day long, seven days a week in business mode,
and so I have a very thick skin.
This is just another one.
I’m not expecting anything else
when I wake up in the morning.
No, we have this default sense that,
I mean, honestly, most of us have the default sense
that we aren’t gonna die, right,
or that we should, like, maybe we’re not gonna die.
Right, like, death denial really is a thing.
You know, we’re, and you can see it,
just like I can see when I reach for this bottle
that I was expecting it to be solid,
because when it isn’t solid, when it’s a hologram
and I just, my fist closes on itself,
I’m damn surprised.
People are damn surprised to find out
that they’re going to die, to find out that they’re sick,
to find out that someone they love has died
or is going to die.
So it’s like, the fact that we are surprised
by any of that shows us that we’re living at a,
we’re living in a mode that is, you know,
we’re perpetually diverting ourselves
from some facts that should be obvious, right,
and the more salient we can make them,
you know, the better. I mean, in the case of death,
it’s a matter of being able to get one’s priorities straight.
I mean, the moment, again, this is hard for everybody,
even those who are really in the business
of paying attention to it,
but the moment you realize that every circumstance
is finite, right, you’ve got a certain number of,
you know, you’ve got whatever, whatever it is,
8,000 days left in a normal span of life,
and 8,000 is a, sounds like a big number,
it’s not that big a number, right,
so it’s just like, and then you can decide
how you want to go through life
and how you want to experience each one of those days,
and so, back to our jumping-off point,
I would argue that you don’t want to feel self hatred ever.
I would argue that you don’t want to really,
really grasp onto any of those moments
where you are internalizing the fact
that you just made an error, you’ve embarrassed yourself,
that something didn’t go the way you wanted it to.
I think you want to treat all of those moments
very, very lightly.
You want to extract the actionable information.
It’s something to learn.
Oh, you know, I learned that when I prepare
in a certain way, it works better
than when I prepare in some other way,
or don’t prepare, right, like yes,
lesson learned, you know, and do that differently,
but yeah, I mean, so many of us have spent so much time
with a very dysfunctional and hostile
and even hateful inner voice
governing a lot of our self talk
and a lot of just our default way of being with ourselves.
I mean, in the privacy of our own minds,
we’re in the company of a real jerk a lot of the time,
and that can’t help but affect,
I mean, forget about just your own sense of wellbeing.
It can’t help but limit what you’re capable of
in the world with other people.
I’ll have to really think about that.
I just take pride that my jerk, my inner voice jerk
is much less of a jerk than somebody like David Goggins,
who’s like screaming in his ear constantly.
So I have a relativist kind of perspective
that it’s not as bad as that at least.
Well, having a sense of humor also helps, you know,
it’s just like, it’s not,
the stakes are never quite what you think they are.
And even when they are, I mean,
it’s just the difference between being able
to see the comedy of it rather than,
because again, there’s this sort of dark star
of self absorption that pulls everything into it, right?
And that’s the algorithm you don’t want to run.
So it’s like, you just want things to be good.
So like, just push the concern out there,
like not have the collapse of,
oh my God, what does this say about me?
It’s just like, what does this say about,
how do we make this meal that we’re all having together
as fun and as useful as possible?
And you’re saying in terms of propulsion systems,
you recommend humor is a good spaceship
to escape the gravitational field of that darkness.
Well, that certainly helps, yeah.
Yeah, well, let me ask you a little bit about ego and fame,
which is very interesting the way you’re talking,
given that you’re one of the biggest intellects,
living intellects and minds of our time.
And there’s a lot of people that really love you
and almost elevate you to a certain kind of status
where you’re like the guru.
I’m surprised you didn’t show up in a robe, in fact.
Is there a…
A hoodie, isn’t that the highest status garment
one can wear now?
The socially acceptable version of the robe.
If you’re a billionaire, you wear a hoodie.
Is there something you can say about managing
the effects of fame on your own mind,
on not creating this, you know, when you wake up
in the morning, when you look up in the mirror,
how do you get your ego, your conception of self,
not to grow exponentially
when there are so many people feeding it?
Is there something to be said about this?
It’s really not hard because I mean,
I feel like I have a pretty clear sense
of my strengths and weaknesses.
And I don’t feel like it’s…
I mean, honestly, I don’t feel like I suffer
from much grandiosity.
I mean, I just have a, you know,
there’s so many things I’m not good at.
There’s so many things I will, you know,
given the remaining 8,000 days at best,
I will never get good at.
I would love to be good at these things.
So it’s just, it’s easy to feel diminished
by comparison with the talents of others.
Do you remind yourself of all the things
that you’re not competent in?
I mean, like what is…
Well, they’re just on display for me every day
that I appreciate the talents of others.
But you notice them.
I’m sure Stalin and Hitler did not notice
all the ways in which they were.
I mean, this is why absolute power corrupts absolutely:
you stop noticing the ways
in which you're ridiculous and wrong.
Right, yeah, no, I am…
Not to compare you to Stalin.
Yeah, well, I’m sure there’s an inner Stalin
in there somewhere.
Well, we all have, we all carry a baby Stalin with us.
He wears better clothes.
And I’m not gonna grow that mustache.
Those concerns don’t map,
they don’t map onto me for a bunch of reasons.
But one is I also have a very peculiar audience.
Like I’m just, you know,
I’ve been appreciating this for a few years,
but it’s, I’m just now beginning to understand
that there are many people who have audiences
of my size or larger that have a very different experience
of having an audience than I do.
I have curated for better or worse, a peculiar audience.
And the net result of that is virtually any time
I say anything of substance,
something like half of my audience,
my real audience, not haters from outside my audience,
but my audience, just revolts over it, right?
They just like, oh my God, I can’t believe you said it,
like you’re such a schmuck, right?
They revolt with rigor and intellectual sophistication.
Or not, or not, but I mean, it’s both,
but it’s like, but people who are like,
so it’s, I mean, the clearest case is,
you know, I have whatever audience I have
and then Trump appears on the scene
and I discovered that something like 20% of my audience
just went straight to Trump and couldn’t believe
I didn’t follow them there.
They were just a gas that I didn’t see
that Trump was obviously exactly what we needed
for, to steer the ship of state for the next four years
and then four years beyond that.
So like, so that’s one example.
So whenever I said anything about Trump,
I would hear from people who loved more or less
everything else I was up to and had for years,
but everything I said about Trump just gave me pure pain
from this quadrant of my audience.
But then the same thing happens when I say something
about the derangement of the far left.
Anything I say about wokeness, right,
or identity politics, same kind of punishment signal
from, again, people who are core to my audience,
like I’ve read all your books, I’m using your meditation app,
I love what you say about science,
but you are so wrong about politics and you are,
I’m starting to think you’re a racist asshole
for everything you said about identity politics.
And there are so many, the free will topic
is just like this, it’s like I just,
they love what I’m saying about consciousness and the mind
and they love to hear me talk about physics with physicists
and it’s all good, this free will stuff is,
I cannot believe you don’t see how wrong you are,
what a fucking embarrassment you are.
So, but I’m starting to notice that there are other people
who don’t have this experience of having an audience
because they have, I mean, just take the Trump woke dichotomy.
They just castigated Trump the same way I did,
but they never say anything bad about the far left.
So they never get this punishment signal or you flip it.
They’re all about the insanity of critical race theory now.
We connect all those dots the same way,
but they never really specified what was wrong with Trump
or they thought there was a lot right with Trump
and they got all the pleasure of that.
And so they have much more homogenized audiences.
And so my experience, so just to come back
to this experience of fame or quasi fame,
I mean, it’s true, in truth, it’s not real fame,
but it’s still, there’s an audience there.
It is a, it’s now an experience where basically
whatever I put out, I notice a ton of negativity
coming back at me and it just, it is what it is.
I mean, now, it’s like, I used to think, wait a minute,
there’s gotta be some way for me to communicate
more clearly here so as not to get this kind of
lunatic response from my own audience.
From like people who are showing all the signs of,
we’ve been here for years for a reason, right?
These are not just trolls.
And so I think, okay, I’m gonna take 10 more minutes
and really just tell you what should be absolutely clear
about what’s wrong with Trump, right?
I’ve done this a few times,
but I think I gotta do this again.
Or wait a minute, how are they not getting
that these episodes of police violence
are so obviously different from one another
that you can't ascribe all of them
to yet another racist maniac on the police force
killing someone based on his racism?
Last time I spoke about this, it was pure pain,
but I just gotta try again.
Now at a certain point, I mean, I’m starting to feel like,
all right, I just, I have to be, I have to cease.
Again, it comes back to this expectation
that there shouldn’t be fires.
I feel like if I could just play my game impeccably,
the people who actually care what I think will follow me
when I hit Trump and hit free will and hit the woke
and hit whatever it is,
how we should respond to the coronavirus, you know?
I mean, vaccines, are they a thing, right?
Like there’s such derangement in our information space now
that, I mean, I guess, you know,
some people could be getting more of this than I expect,
but I just noticed that many of our friends
who are in the same game have more homogenized audiences
and don’t get, I mean, they’ve successfully filtered out
the people who are gonna despise them on this next topic.
And I would imagine you have a different experience
of having a podcast than I do at this point.
I mean, I’m sure you get haters,
but I would imagine you’re more streamlined.
I actually don’t like the word haters
because it kinda presumes that it puts people in a bin.
I think we’re all have like baby haters inside of us
and we just apply them and some people enjoy doing that
more than others for particular periods of time.
I think you’re gonna almost see hating on the internet
as a video game that you just play and it’s fun,
but then you can put it down and walk away
and no, I certainly have a bunch of people
that are very critical.
I can list all the ways.
But on any given topic,
does it feel like an actual tidal surge
where it's, like, 30% of your audience one time
and then another 30% of your audience
from podcast to podcast?
No, no, no.
That’s happening to me all the time now.
Well, I’m more with, I don’t know what you think about this.
I mean, Joe Rogan doesn’t read comments
or doesn’t read comments much.
And the argument he made to me is that
he already has like a self critical person inside.
And I’m gonna have to think about
what you said in this conversation,
but I have this very harshly self critical person
inside as well where I don’t need more fuel.
I don’t need, no, I do sometimes.
That’s why I check negativity occasionally,
not too often.
I sometimes need to like put a little bit more
like coals into the fire, but not too much.
But I already have that self critical engine
that keeps me in check.
I just, I wonder, you know, a lot of people
who gain more and more fame lose that ability
to be self critical.
I guess because they lose the audience
that can be critical towards them.
Hmm.
You know, I do follow Joe’s advice much more
than I ever have here.
Like I don’t look at comments very often.
And I’m probably using Twitter, you know,
5% as much as I used to.
I mean, I really just get in and out on Twitter
and spend very little time in my @ mentions.
I bet, you know, it does, in some ways it feels like a loss
because occasionally I get,
I see something super intelligent there.
Like, I mean, I’ll check my Twitter ad mentions
and someone will have said, oh, have you read this article?
And it’s like, man, that was just,
that was like the best article sent to me in a month, right?
So it’s like to have not have looked
and to not have seen that, that’s a loss.
So, but it does, at this point, a little goes a long way.
Cause I, yeah, it’s not that it, for me now,
I mean, this could sound like a fairly Stalinistic immunity
to criticism, it’s not so much that these voices of hate
turn on my inner hater, you know, more,
it’s more that I just, I get a,
what I fear is a false sense of humanity.
Like, I feel like I’m too online
and online is selecting for this performative outrage
in everybody, everyone’s signaling to an audience
when they trash you.
And I get a dark, I’m getting a, you know,
a misanthropic, you know, cut of just what it’s like
out there.
Because when you meet people in real life,
they're great, you know, they're rather often great,
you know, and it takes a lot to have anything
like a Twitter encounter in real life with a living person.
And that’s, I think it’s much better to have that
as one’s default sense of what it’s like to be with people
than what one gets on social media
or on YouTube comment threads.
You’ve produced a special episode with Rob Reed
on your podcast recently on how bioengineering of viruses
is going to destroy human civilization.
So.
Or could.
Could.
One fears, yeah.
Sorry, the confidence there.
But in the 21st century, what do you think,
especially after having thought through that angle,
what do you think is the biggest threat
to the survival of the human species?
I can give you the full menu if you’d like.
Yeah, well, I would put the biggest threat
at another level out; kind of the meta threat
is our inability to agree about what the threats actually are
and to converge on strategies for responding to them, right?
So like I view COVID as, among other things,
a truly terrifyingly failed dress rehearsal
for something far worse, right?
I mean, COVID is just about as benign as it could have been
and still have been worse than the flu
when you’re talking about a global pandemic, right?
So it’s just, it’s gonna kill a few million people
or it looks like it’s killed about 3 million people.
Maybe it’ll kill a few million more
unless something gets away from us
with a variant that’s much worse
or we really don’t play our cards right.
But I mean, the general shape of it is
it's got somewhere around 1% lethality,
and whichever side of that number it really lands on
in the end, it's nothing like what would in fact be possible
and is in fact probably inevitable:
something with orders of magnitude
more lethality than that.
And it’s just so obvious we are totally unprepared, right?
We are running this epidemiological experiment
of linking the entire world together
and then also now, per the podcast that Rob Reed did,
democratizing the tech that will allow us
to engineer pandemics, right?
And more and more people will be able
to engineer synthetic viruses that will be,
by the sheer fact that they were engineered
with malicious intent, worse than COVID.
And we’re still living in,
to speak specifically about the United States,
we have a country here where we can’t even agree
that this is a thing, like that COVID,
I mean, there’s still people who think
that this is basically a hoax designed to control people.
And stranger still, there are people who will acknowledge
that COVID is real and they’ll look,
they don’t think the deaths have been faked or misascribed,
but they think that they’re far happier
at the prospect of catching COVID
than they are of getting vaccinated for COVID, right?
They’re not worried about COVID,
they’re worried about vaccines for COVID, right?
And the fact that we just can’t converge in a conversation
that we’ve now had a year to have with one another
on just what is the ground truth here?
What’s happened?
Why has it happened?
How safe is it to get COVID in every cohort
in the population?
And how safe are the vaccines?
And the fact that there’s still an air of mystery
around all of this for much of our society
does not bode well when you’re talking about solving
any other problem that may yet kill us.
But do you think convergence grows
with the magnitude of the threat?
It’s possible, except I feel like we have tipped into,
because when the threat of COVID looked the most dire,
when we were seeing reports from Italy
that looked like the beginning of a zombie movie.
Because it could have been much, much worse.
Yeah, this is lethal, right?
Your ICUs are gonna fill up;
you’re 14 days behind us.
Your medical system is in danger of collapse.
Lock the fuck down.
We have people refusing to do anything sane
in the face of that.
People fundamentally thinking,
it’s not gonna get here, right?
Who knows what’s going on in Italy,
but it has no implications for what’s gonna go on in New York
in a mere six days, right?
And now it kicks off in New York,
and you’ve got people in the middle of the country
thinking it’s no factor, it’s not,
that’s just big city, those are big city problems,
or they’re faking it.
Or, I mean, it just, the layer of politics
has become so dysfunctional for us
that even in the presence of a pandemic
that looked legitimately scary there in the beginning,
I mean, it’s not to say that it hasn’t been devastating
for everyone who’s been directly affected by it,
and it’s not to say it can’t get worse,
but here, for a very long time,
we have known that we were in a situation
that is more benign than what seemed
like the worst case scenario as it was kicking off,
especially in Italy.
And so still, yeah, it’s quite possible
that if we saw the asteroid hurtling toward Earth
and everyone agreed that it’s gonna make impact
and we’re all gonna die,
then we could get off Twitter
and actually build the rockets
that are gonna divert the asteroid
from its Earth crossing path,
and we could do something pretty heroic.
But when you talk about anything else
that isn’t, that’s slower moving than that,
I mean, something like climate change,
I think the prospect of our converging
on a solution to climate change
purely based on political persuasion
is nonexistent at this point.
I just think, to bring Elon back into this,
the way to deal with climate change
is to create technology that everyone wants
that is better than all the carbon producing technology,
and then we just transition
because you want an electric car
the same way you wanted a smartphone
or you want anything else,
and you’re working totally with the grain
of people’s selfishness and short term thinking.
The idea that we’re gonna convince
the better part of humanity
that climate change is an emergency,
that they have to make sacrifices to respond to,
given what’s happened around COVID,
I just think that’s the fantasy of a fantasy.
But speaking of Elon,
I have a bunch of positive things
that I wanna say here in response to you,
but you’re opening so many threads,
but let me pull one of them, which is AI.
Both you and Elon think that with AI,
you’re summoning demons, summoning a demon,
maybe not in those poetic terms, but.
Well, potentially. Potentially.
Two, well, three very parsimonious assumptions,
I think, here.
Scientifically parsimonious assumptions get me there.
Any of which could be wrong,
but it just seems like the weight
of the evidence is on their side.
One is that it comes back to this topic
of substrate independence, right?
Anyone who’s in the business
of producing intelligent machines
must believe, ultimately,
that there’s nothing magical
about having a computer made of meat.
You can do this in the kinds of materials
we’re using now,
and there’s no special something
that presents a real impediment
to producing human level intelligence in silico, right?
Again, an assumption, I’m sure there are a few people
who still think there is something magical
about biological systems,
but leave that aside.
Given that assumption,
and given the assumption
that we just continue making incremental progress,
doesn’t have to be Moore’s Law,
it just has to be progress,
that just doesn’t stop,
at a certain point,
we’ll get to human level intelligence and beyond.
And human level intelligence,
I think, is also clearly a mirage,
because anything that’s human level
is gonna be superhuman
unless we decide to dumb it down, right?
I mean, my phone is already superhuman as a calculator,
right, so why would we make the human level AI
just as good as me as a calculator?
So I think we’ll very,
if we continue to make progress,
we will be in the presence of superhuman competence
for any act of intelligence or cognition
that we care to prioritize.
It’s not to say that we’ll create everything
that a human could do,
maybe we’ll leave certain things out,
but anything that we care about,
and we care about a lot,
and we certainly care about anything
that produces a lot of power,
we care about scientific insights
and the ability to produce new technology, and in all of that,
we’ll have something that’s superhuman.
And then the final assumption is just that
there have to be ways to do that
that are not aligned with a happy coexistence
with these now more powerful entities than ourselves.
So, and I would guess,
and this is kind of a rider to that assumption,
there are probably more ways to do it badly
than to do it perfectly.
That is, perfectly aligned with our wellbeing.
And when you think about the consequences of nonalignment,
when you think about,
you’re now in the presence of something
that is more intelligent than you are, right?
Which is to say more competent, right?
Unless you’ve, and obviously there are cartoon pictures
of this where we could just,
this is just an off switch,
we could just turn off the off switch,
or they’re tethered to something that makes them,
our slaves in perpetuity,
even though they’re more intelligent.
But those scenarios strike me as a failure to imagine
what is actually entailed by greater intelligence, right?
So if you imagine something
that’s legitimately more intelligent than you are,
and you’re now in relationship to it, right?
You’re in the presence of this thing
and it is autonomous in all kinds of ways
because it had to be to be more intelligent than you are.
I mean, you built it to be all of those things.
We just can’t find ourselves in a negotiation
with something more intelligent than we are, you know?
And we can’t, so we have to have found
the subset of ways to build these machines
that are perpetually amenable to our saying,
oh, that’s not what we meant, that’s not what we intended.
Could you stop doing that, just come back over here
and do this thing that we actually want.
And for them to care, for them to be tethered
to our own sense of our own wellbeing.
This is, I think, Stuart Russell's plan, in cartoon form:
to figure out
how to tether them to a utility function
that has our own estimation of what's going to improve
our wellbeing as its master reward, right?
So this thing can get
as intelligent as it can get,
but it only ever really wants to figure out
how to make our lives better, by our own view of better.
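A cartoon sketch of that idea follows; it is not Stuart Russell's actual formulation, and every name, number, and threshold in it is purely illustrative. The agent's only score for an action is its estimate of how much the human would endorse it, and when that estimate is too uncertain, it defers to the human rather than acting on a guess.

```python
def estimated_human_approval(action, feedback_history):
    """Toy estimate: average past human feedback for this action, with
    uncertainty shrinking as more feedback accumulates."""
    scores = [s for a, s in feedback_history if a == action]
    if not scores:
        return 0.0, 1.0                          # no data: maximally uncertain
    mean = sum(scores) / len(scores)
    return mean, 1.0 / (1 + len(scores))

def choose_action(candidates, feedback_history, ask_human):
    """Pick the action the human is estimated to endorse most; when the
    estimate is too uncertain, ask the human instead of acting on a guess."""
    best_action, best_score = None, float("-inf")
    for action in candidates:
        mean, uncertainty = estimated_human_approval(action, feedback_history)
        if uncertainty > 0.4:                    # unsure what the human wants:
            mean = ask_human(action)             # defer to the human's judgment
            feedback_history.append((action, mean))
        if mean > best_score:
            best_action, best_score = action, mean
    return best_action

# Usage sketch: the 'human' here is just a stand-in function.
history = [("make tea", 0.9), ("make tea", 0.8)]
print(choose_action(["make tea", "rewire the house"], history,
                    ask_human=lambda a: 0.1))    # prints 'make tea'
```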
Now, not to say there wouldn’t be a conversation about,
you know, I mean, because there’s all kinds of things
we’re not seeing clearly about what is better,
and if we were in the presence of a genie or an oracle
that could really tell us what is better,
well, then we presumably would want to hear that,
and we would modify our sense of what to do next
in conversation with these minds.
But I just feel like it is a failure of imagination
to think that being in relationship to something
more intelligent than yourself isn’t in most cases
a circumstance of real peril, because it is.
Just think of how everything on Earth would feel
if it could think about its relationship to us,
if birds could think about what we’re doing, right?
The bottom line is
they’re always in danger of our discovering
that there’s something we care about more than birds, right?
Or there’s something we want
that disregards the wellbeing of birds.
And obviously much of our behavior is inscrutable to them.
Occasionally we pay attention to them,
and occasionally we withdraw our attention,
and occasionally we just kill them all
for reasons they can’t possibly understand.
But if we’re building something more intelligent
than ourselves, then by definition
we’re building something whose horizons
of value and cognition can exceed our own
in ways we can’t necessarily foresee,
and we can’t perpetually guarantee that they won’t just wake up one day
and decide, okay, well, these humans need to disappear.
So I think I agree with most of the initial things you said.
Where I don’t necessarily agree,
and of course nobody knows,
is that I believe the more likely set of trajectories
that we’re going to take are going to be positive.
That’s what I believe, in the sense
that the way you develop successful AI systems
will be deeply integrated with human society.
And for them to succeed,
they’re going to have to be aligned
in the way we humans are aligned with each other,
which doesn’t mean we’re aligned.
There’s no such thing,
or I don’t see that there’s such a thing, as perfect alignment,
but they’re going to be participating in the dance,
in the game theoretic dance of human society,
as they become more and more intelligent.
There could be a point beyond which
we are like birds to them.
But what about an intelligence explosion of some kind?
So I believe the explosion will be happening,
but there’s a lot of explosion to be done
before we become like birds.
I truly believe that human beings
are very intelligent in ways we don’t understand.
It’s not just about chess.
It’s about all the intricate computation
we’re able to perform, common sense,
our ability to reason about this world, consciousness.
I think we’re doing a lot of work
we don’t realize is necessary to be done
in order to truly achieve superintelligence.
And I just think there’ll be a period of time
for that to unfold; it will not literally be overnight.
It’ll be over a period of decades.
So my sense is…
So why would it be that? Just take,
draw an analogy from recent successes,
like something like AlphaGo or AlphaZero.
I forget the actual metric,
but it was something like this algorithm,
which wasn’t even bespoke for chess playing,
in the matter of, I think it was four hours,
played itself so many times and so successfully
that it became the best chess playing computer.
It was not only better than every human being,
it was better than every previous chess program,
in a matter of a day, right?
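What makes that possible is self-play: the system generates its own training data by playing against itself, so improvement doesn't depend on human examples. Here is a minimal sketch of that idea under toy assumptions, a tabular learner on the game of Nim; it is nothing like DeepMind's actual architecture, just an illustration of the loop.

```python
# A toy, hypothetical illustration of the self-play idea (not DeepMind's
# actual AlphaZero method): a tabular agent that learns the game of Nim
# (take 1-3 stones from a pile of 15; whoever takes the last stone wins)
# purely by playing against itself, with no human game data.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
EPSILON, ALPHA = 0.1, 0.1     # exploration rate, learning rate
Q = defaultdict(float)        # value of (stones_remaining, action) for the player to move

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode():
    stones, history = 15, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # Whoever moved last took the last stone and won: +1 for their moves, -1 for the loser's.
    for i, (s, a) in enumerate(reversed(history)):
        outcome = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (outcome - Q[(s, a)])

for _ in range(50000):
    self_play_episode()

# Greedy policy after self-play; it should mostly leave the opponent a multiple of 4 stones.
print({s: choose(s, greedy=True) for s in range(1, 16)})
```

Even this toy version usually rediscovers Nim's known optimal strategy, leaving the opponent a multiple of four stones, without ever seeing a human game, which is the general point about bootstrapping past human play once a domain can be framed this way.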
So just imagine, again,
we don’t have to recapitulate everything about us,
but just imagine building a system,
and who knows when we’ll be able to do this,
but at some point the hundred or so things we care about most
in human cognition will be analogous to chess,
in that we will be able to build machines
that very quickly outperform any human,
and then very quickly outperform the last algorithm
that outperformed the humans.
Like something like the AlphaGo experience
seems possible for facial recognition
and detecting human emotion
and natural language processing, right?
Well, it’s just that everyone,
even math people, math heads,
tend to have bad intuitions for exponentiation, right?
I mean, we noticed this during COVID.
I mean, you have some very smart people
who still couldn’t get their minds around the fact
that an exponential is really surprising.
I mean, things double and double and double and double again,
and you don’t notice much of anything changes,
and then the last two stages of doubling swamp everything.
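A toy worked example of that point, with made-up numbers just for illustration: after ten doublings, everything accumulated before the final two doublings is only a quarter of the total.

```python
# Toy illustration of why repeated doubling surprises us: the last two
# doublings account for 75% of the final value.
value = 1
history = []
for _ in range(10):          # ten doublings starting from 1
    value *= 2
    history.append(value)

print(history)                        # [2, 4, 8, ..., 1024]
print(history[-3])                    # 256: the total before the last two doublings
print(history[-3] / history[-1])      # 0.25: only a quarter of the final 1024
```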
And it just seems like a mistake
to assume that there isn’t a deep analogy
between what we’re seeing for the more tractable problems,
like chess, and other modes of cognition.
Once you crack that kind of problem, it seems like the rest can follow,
because for the longest time
it seemed impossible to think
we were gonna make headway in AI at all.
Chess and Go were seen as impossible.
Yeah, Go seemed unattainable.
Even when chess had been cracked, Go seemed unattainable.
Yeah, and actually Stuart Russell was behind the people
who were saying it’s unattainable,
because it seemed like an intractable problem.
But there’s something different
about a space of cognition
that’s detached from human society, which is what chess is,
meaning just thinking.
Having actual exponential impact
on the physical world is different.
I tend to believe that there’s,
for AI to get to the point where it’s super intelligent,
it’s going to have to go through the funnel of society.
And for that, it has to be deeply integrated
with human beings, and for that, it has to be aligned.
But you’re talking about, like, actually hooking us up
to something like Neuralink, you know,
we’re gonna be the brainstem to the robot overlords?
That’s a possibility as well.
But what I mean is,
in order to develop autonomous weapon systems, for example,
which are highly concerning to me,
and which both the US and China are developing now,
in order to develop them and for them to take on
more and more responsibility
to actually do military strategic actions,
they’re going to have to be integrated
with the human beings doing the strategic action.
They’re going to have to work alongside each other.
And the way those systems will be developed
will have natural safety switches
placed on them as they develop over time,
because they’re going to have to convince humans.
Ultimately, they’re going to have to convince humans
that this is safer than humans.
Self driving cars is a good test case here
because like, obviously we’ve made a lot of progress
and we can imagine what total progress would look like.
I mean, it would be amazing.
And it would cancel, in the US,
40,000 deaths every year from ape-driven cars, right?
So it’s an excruciating problem that we’ve all gotten used to
because there was no alternative.
But now we can dimly see the prospect of an alternative,
which if it works in a super intelligent fashion,
maybe we would go down to zero highway deaths, right?
Or, you know, certainly we’d go down
by orders of magnitude, right?
So maybe we have, you know, 400 rather than 40,000 a year.
And it’s easy to see that this is not a missile.
So obviously this is not an example of super intelligence.
This is narrow intelligence,
and the alignment problem isn’t so obvious there,
but there are potential alignment problems there.
Like, so like, just imagine if some woke team of engineers
decided that we have to tune the algorithm some way.
I mean, there are situations where the car
has to decide who to hit.
I mean, there’s just bad outcomes
where you’re gonna hit somebody, right?
Now we have a car that can tell what race you are, right?
So we’re gonna build the car to preferentially hit
white people because white people have had so much privilege
over the years.
This seems like the only ethical way
to kind of redress those wrongs of the past.
That’s something that could get produced
as an artifact, presumably,
of just how you built it,
and you didn’t even know you engineered it that way, right?
You caused it to…
Through machine learning,
you put some kind of constraints on it
to where it creates those kinds of outcomes.
Basically, you built a racist algorithm
and you didn’t even intend to,
or you could intend to, right?
And it would be aligned with some people’s values
but misaligned with other people’s values.
But it’s like there are interesting problems
even with something as simple
and obviously good as self driving cars.
But those are human problems,
and there’s a leap that I just don’t think will happen
with autonomous vehicles.
First of all, sorry,
there are a lot of trajectories
which would destroy human civilization.
The argument I’m making is that
it’s more likely that we’ll take trajectories that don’t.
So I don’t think there’ll be a leap
where autonomous vehicles
will all of a sudden start murdering pedestrians
because once every human on earth is dead,
there’ll be no more fatalities,
that sort of unintended consequence.
It’s difficult to take that leap.
Most systems, as we develop them,
will become much, much more intelligent
in ways that will be incredibly surprising,
like the stuff DeepMind is doing with protein folding.
Even the things that are scary to think about,
and that I’m personally terrified about,
like the engineering of viruses using machine learning,
the engineering of vaccines using machine learning,
the engineering of pathogens, for research purposes,
using machine learning,
and the ways that can go wrong.
I just think that there’s always going to be
a closed loop supervision of humans
before the AI becomes super intelligent.
Not always, much more likely to be supervision,
except, of course, the question is
how many dumb people there are in the world,
how many evil people are in the world?
My theory, my hope is, my sense is
that the number of intelligent people
is much higher than the number of dumb people
that know how to program
and the number of evil people.
I think smart people and kind people
far outnumber the others.
Except we also have to add another group of people
which are just the smart and otherwise good
but reckless people, right?
The people who will flip a switch on
not knowing what’s going to happen.
They’re just kind of hoping
that it’s not going to blow up the world.
We already know that some of our smartest people
are those sorts of people.
We know we’ve done experiments,
and this is something that Martin Rees was whinging about
before the Large Hadron Collider
got booted up, I think.
We know there are people who are entertaining experiments
or even performing experiments
where there’s some chance, not quite infinitesimal,
that they’re going to create a black hole in the lab
and suck the whole world into it.
You’re not a crazy person to worry about that
based on the physics.
And so it was with the Trinity test.
There were some people who were still
checking their calculations, and they were off.
We did nuclear tests where we were off significantly
in terms of the yield, right?
So it was like.
And they still flipped the switch.
Yeah, they still flipped the switch.
And sometimes they flipped the switch
not to win a world war or to save 40,000 lives a year.
They just, just.
Just to see what happens.
Intellectual curiosity.
Like this is what I got my grant for.
This is where I’ll get my Nobel Prize
if that’s in the cards.
It’s on the other side of this switch, right?
And I mean, again, we are apes with egos
who are massively constrained
by very short term self interest
even when we’re contemplating some of the deepest
and most interesting and most universal problems
we could ever set our attention towards.
Like just if you read James Watson’s book,
The Double Helix, right?
About them cracking the structure of DNA.
One thing that’s amazing about that book
is just how much of it, almost all of it
is being driven by very apish, egocentric social concerns.
The algorithm that is producing this scientific breakthrough
is human competition if you’re James Watson.
It’s like, I’m gonna get there before Linus Pauling
and it’s just, it’s so much of his bandwidth
is captured by that, right?
Now that becomes more and more of a liability
when you’re talking about producing technology
that can change everything in an instant.
You know, we’re just at a different moment
in human history.
When we’re doing research on viruses,
we’re now doing the kind of research
that can enable someone somewhere else
to make that virus or weaponize that virus.
I mean, it does not seem like our wisdom
is scaling with our power, right?
And insofar as wisdom and power
become unaligned, I get more and more concerned.
But speaking of apes with egos,
two of the most compelling apes
I can think of are yourself and Jordan Peterson.
And you’ve had a fun conversation about religion
that I watched most of, I believe.
I’m not sure there was any…
We didn’t solve anything.
If anything was ever solved.
So is there something like a charitable summary
you can give to the ideas that you agree on
and disagree with Jordan?
Is there something maybe after that conversation
that you’ve landed where maybe as you both agreed on,
is there some wisdom in the rubble
of even imperfect flawed ideas?
Is there something that you can kind of pull out
from those conversations or is it to be continued?
I mean, I think where we disagree.
So he thinks that many of our traditional religious beliefs
and frameworks are holding such a repository
of human wisdom that we pull at that fabric
at our peril, right?
Like if you start just unraveling Christianity
or any other traditional set of norms and beliefs
you may think you’re just pulling out the unscientific bits
but you could be pulling a lot more
to which everything you care about is attached, right?
As a society.
And my feeling is that there’s so much downside
to the unscientific bits,
and it’s so clear how we could have a 21st century
rational conversation about the things that we don’t know,
and a conversation about the good stuff,
that we really can radically edit these traditions.
We can take Jesus in half his moods
and just find a great inspirational Iron Age thought leader
who just happened to get crucified.
He gave us something like the Beatitudes
and the golden rule, which doesn’t originate with him
but which he put quite beautifully.
All of that’s incredibly useful.
It’s no less useful than it was 2000 years ago.
But we don’t have to believe he was born of a virgin
or coming back to raise the dead
or any of that other stuff.
And we can be honest about not believing those things.
And we can be honest about the reasons
why we don’t believe those things.
Because on those fronts I view the downside to be so obvious
and the fact that we have so many different
competing dogmatisms on offer to be so nonfunctional.
I mean, it’s so divisive, it just has conflict built into it
that I think we can be far more
and should be far more iconoclastic
than he wants to be, right?
Now, none of this is to deny much of what he argues for,
that stories are very powerful.
I mean, clearly stories are powerful
and we want good stories.
We want our lives, we wanna have a conversation
with ourselves and with one another about our lives
that facilitates the best possible lives.
And story is part of that, right?
And if you want some of those stories to sound like myths,
that might be part of it, right?
But my argument is that we never really need
to deceive ourselves or our children
about what we have every reason to believe is true
in order to get at the good stuff,
in order to organize our lives well.
I certainly don’t feel that I need to do it personally.
And if I don’t need to do it personally,
why would I think that billions of other people
need to do it personally, right?
Now, there is a cynical counter argument,
which is billions of other people
don’t have the advantages that I have had in my life.
The billions of other people are not as well educated,
they haven’t had the same opportunities,
they need to be told that Jesus is gonna solve
all their problems after they die, say,
or that everything happens for a reason
and if you just believe in the secret,
if you just visualize what you want, you’re gonna get it.
And it’s like there’s some measure
of what I consider to be odious pablum
that really is food for the better part of humanity
and there is no substitute for it
or there’s no substitute now.
And I don’t know if Jordan would agree with that,
but much of what he says seems to suggest
that he would agree with it.
And I guess that’s an empirical question.
I mean, that’s just that we don’t know
whether given a different set of norms
and a different set of stories,
people would behave the way I would hope they would behave
and be more aligned than they are now.
I think we know what happens
when you just let ancient religious certainties
go uncriticized.
We know what that world’s like.
We’ve been struggling to get out of that world
for a couple of hundred years,
but we know what having Europe riven by religious wars
looks like.
And we know what happens when those religions
become kind of pseudo religions and political religions.
So this is where I’m sure Jordan and I would debate.
He would say that Stalin was a symptom of atheism,
and that’s not it at all.
I mean, it’s not my kind of atheism.
Stalin, the problem with the Gulag
and the experiment with communism or with Stalinism
or with Nazism was not that there was so much
scientific rigor and self criticism and honesty
and introspection and judicious use of psychedelics.
I mean, that was not the problem in Hitler’s Germany
or in Stalin’s Soviet Union.
The problem was you have other ideas
that capture a similar kind of mob based dogmatic energy.
And yes, the results of all of that
are predictably murderous.
Well, the question is what is the source
of the most viral and sticky stories
that ultimately lead to a positive outcome?
So communism, for example. I mean, having grown up
in the Soviet Union, and still having relatives in Russia,
there’s a stickiness to the nationalism
and to the ideologies of communism
that, religious or not, you could say is religious fervor.
Or I could just say it’s stories that are viral and sticky.
I’m using the most horrible words,
but the question is whether science and reason
can generate viral sticky stories
that give meaning to people’s lives.
And your sense is it does.
Well, whatever is true ultimately should be captivating.
It’s like what’s more captivating than whatever is real?
Because reality is, again, we’re just climbing
out of the darkness in terms of our understanding
of what the hell is going on.
And there’s no telling what spooky things
may in fact be true.
I mean, I don’t know if you’ve been on the receiving end
of recent rumors about our conversation
about UFOs very likely changing in the near term, right?
But like there was just a Washington Post article
and a New Yorker article,
and I’ve received some private outreach
and perhaps you have, I know other people in our orbit
have people who are claiming
that the government has known much more about UFOs
than they have let on until now.
And this conversation is actually about
to become more prominent.
And whoever’s left standing when the music stops,
it’s not going to be a comfortable position to be in
as a super rigorous scientific skeptic
who’s been saying there’s no there there
for the last 75 years, right?
The short version is it sounds like
the Office of Naval Intelligence and the Pentagon
are very likely to say to Congress at some point
in the not too distant future that we have evidence
that there is technology flying around here
that seems like it can’t possibly be of human origin, right?
Now, I don’t know what I’m gonna do
with that kind of disclosure, right?
Maybe it’s gonna be nothing,
no follow on conversation to really have,
but that is such a powerfully strange circumstance
to be in, right?
I mean, it’s just, what are we gonna do with that?
If in fact, that’s what happens, right?
If in fact the considered opinion
of the US government, of all of our intelligence,
all of the relevant intelligence services,
despite the embarrassment it causes them,
is that this isn’t a hoax.
There’s too much data to suggest that it’s a hoax.
We’ve got too much radar imagery,
there’s too much satellite data,
whatever data they actually have, there’s too much of it.
All we can say now is something’s going on
and there’s no way it’s the Chinese or the Russians
or anyone else’s technology.
That should arrest our attention collectively
to a degree that nothing in our lifetime has.
And now one worries that we’re so jaded
and confused and distracted
that it’s gonna get much less coverage
than Obama’s tan suit did a bunch of years ago.
Who knows how we’ll respond to that?
But it’s just to say that the need for us
to tell ourselves an honest story about what’s going on
and what’s likely to happen next
is never gonna go away, right?
And it’s important, it’s just the division between me
and every person who’s defending traditional religion
is where is it that you wanna lie to yourself
or lie to your kids?
Like where is honesty a liability?
And for me, I’ve yet to find the place where it is.
And it’s so obviously a strength
in almost every other circumstance
because it is the thing that allows you to course correct.
It is the thing that allows you to hope at least
that your beliefs, that your stories
are in some kind of calibration
with what’s actually going on in the world.
Yeah, it is a little bit sad to imagine
that if aliens en masse showed up to Earth,
we would be too preoccupied with political bickering
or with fake news
and all that kind of stuff to notice
the very basic evidence of reality.
I do have a glimmer of hope
that there seems to be more and more hunger for authenticity.
And I feel like that opens the door
for a hunger for what is real.
Like people don’t want stories.
They don’t want like layers and layers of like fakeness.
And I’m hoping that means that will directly lead
to a greater hunger for reality and reason and truth.
Truth isn’t dogmatism.
Like truth isn’t authority.
I have a PhD and therefore I’m right.
Truth is almost, like the reality is
there’s so many questions, there’s so many mysteries,
there’s so much uncertainty.
This is our best available, like a best guess.
And we have a lot of evidence that supports that guess,
but it could be so many other things.
And like just even conveying that,
I think there’s a hunger for that in the world
to hear that from scientists, less dogmatism
and more just like this is what we know.
We’re doing our best given the uncertainty, given,
I mean, this is true with obviously with the virology
and all those kinds of things
because everything is happening so fast.
There’s a lot of, and biology is super messy.
So it’s very hard to know stuff for sure.
So just being open and real about that,
I think I’m hoping will change people’s hunger
and openness and trust of what’s real.
Yeah, well, so much of this is probabilistic.
I mean, so much of what can seem dogmatic scientifically
is just you’re placing a bet on whether it’s worth
reading that paper or rethinking your presuppositions
on that point.
It’s like, it’s not a fundamental closure to data.
It’s just that there’s so much data on one side,
or so much would have to change
in terms of what you think
you understand about the nature of the world
if this new fact were so, that you can pretty quickly say,
all right, that’s probably bullshit, right?
And it can sound like a fundamental closure
to new conversations, new evidence, new data, new argument,
but it’s really not.
It’s just, it really is just triaging your attention.
It’s just like, okay, you’re telling me
that your best friend can actually read minds.
Okay, well, that’s interesting.
Let me know when that person has gone into a lab
and actually proven it, right?
Like, I don’t need, like, this is not the place
where I need to spend the rest of my day
figuring out if your buddy can read my mind, right?
But there’s a way to communicate that.
I think it does too often sound
like you’re completely closed off to ideas,
as opposed to saying that there’s a lot of evidence
in support of this, but you’re still open minded
to other ideas.
Like, there’s a way to communicate that.
It’s not necessarily even with words.
It’s like, it’s even that Joe Rogan energy
of it’s entirely possible.
Just, it’s that energy of being open minded
and curious like kids are.
Like, this is our best understanding,
but you still are curious.
I’m not saying allocate time to exploring all those things,
but still leaving the door open.
And there’s a way to communicate that, I think,
that people really hunger for.
Let me ask you this.
I’ve recently been talking a lot with John Danaher,
of Brazilian Jiu Jitsu fame.
I don’t know if you know who that is.
In any case, I’m talking about somebody
who’s very good at what he does.
Yeah.
And, speaking of somebody who’s open minded,
the reason I’m doing this ridiculous transition
is that for the longest time, and even still,
a lot of people in the Jiu Jitsu world
and grappling world believed that leg locks
are not effective in Jiu Jitsu.
And he was somebody who, inspired
by the open mindedness of Dean Lister,
who famously said to him, why do you only consider
half the human body when you’re trying to do submissions,
developed an entire system
on this other half of the human body.
Anyway, I do that absurd transition to ask you,
because you’re also a student of Brazilian Jiu Jitsu.
Is there something you could say
how that has affected your life,
what you’ve learned from grappling from the martial arts?
Well, it’s actually a great transition
because I think one of the things
that’s so beautiful about Jiu Jitsu
is that it does what we wish we could do
in every other area of life
where we’re talking about this difference
between knowledge and ignorance, right?
Like there’s no room for bullshit, right?
You don’t get any credit for bullshit.
The amazing thing about Jiu Jitsu is that
the gulf between knowing what’s going on
and what to do, and not knowing it,
is as wide as it is in anything in human life.
And it’s spanned, it can be spanned so quickly.
Like each increment of knowledge
can be doled out in five minutes.
It’s like, here’s the thing that got you killed
and here’s how to prevent it from happening to you
and here’s how to do it to others.
And you just get this amazing cadence
of discovering your fatal ignorance
and then having it remedied with the actual technique.
And I mean, just for people
who don’t know what we’re talking about,
it’s just like this, the simple circumstance
of like someone’s got you in a headlock,
how do you get out of that, right?
Someone’s sitting on your chest
and they’re in the mount position
and you’re on the bottom and you wanna get away,
how do you get them off you?
They’re sitting on you.
Your intuitions about how to do this are terrible
even if you’ve done some other martial art, right?
And once you learn how to do it,
the difference is night and day.
It’s like you have access to a completely different physics.
But I think our understanding of the world
can be much more like jujitsu than it tends to be, right?
And I think we should all have a much better sense
of when we should tap out
and when we should recognize that our epistemological arm
is barred and now it’s being broken, right?
And the problem with debating most other topics
is that it isn’t jujitsu,
and most people don’t tap out, right?
Even if it’s obvious to you they’re wrong
and it’s obvious to an intelligent audience
that they’re wrong, people just double down
and double down and they’re either lying
or lying to themselves
or they’re bluffing and so you have a lot of zombies
walking around and zombie worldviews walking around
which have been disconfirmed as emphatically
as someone gets armbarred, right?
Or someone gets choked out in jujitsu
but because it’s not jujitsu,
they can live to fight another day, right?
Or they can pretend that they didn’t lose
that particular argument.
And science when it works is a lot like jujitsu.
I mean, science when you falsify a thesis, right?
When you think DNA is one way
and it proves to be another way,
when you think it’s triple stranded or whatever,
it’s like there is a there there
and you can get to a real consensus.
So jujitsu for me, it was more than just
of interest for self defense and the sport of it.
It was just, there was something, it’s a language
and an argument you’re having
where you can’t fool yourself anymore.
First of all, it cancels any role of luck
in a way that most other athletic feats don’t.
It’s like in basketball,
even if you’re not good at basketball,
you can take the basketball in your hand,
you can be 75 feet away and hurl it at the basket
and you might make it.
And you could convince yourself based on that demonstration
that you have some kind of talent for basketball, right?
Whereas 10 minutes on the mat
with a real jujitsu practitioner when you’re not one
proves to you that there’s no lucky punch here.
There’s no lucky rear naked choke you’re gonna perform
on someone who’s Marcelo Garcia or somebody.
It’s just not gonna happen.
And having that aspect of the usual range of uncertainty
and self deception and bullshit just stripped away
was really a kind of revelation.
It was just an amazing experience.
Yeah, I think it’s a really powerful thing
that accompanies whatever other pursuit you have in life.
I’m not sure if there’s anything like jujitsu
where you could just systematically go into a place
where you’re, that’s honest,
where your beliefs get challenged
in a way that’s conclusive.
Yeah.
I haven’t found too many other mechanisms,
which is why, going back to the earlier question
about fame and ego and so on,
I very much rely on jujitsu in my own life
as a place where I can always go to keep my ego in check.
And that has effects on how I live
every other aspect of my life.
Actually, even just doing any kind of,
for me personally, physical challenges,
like even running, doing something that’s way too hard
for me and then pushing through, that’s somehow humbling.
Some people talk about nature being humbling
in that kind of sense, where you kind of see something
really powerful, like the ocean.
Like if you go surfing and you realize
there’s something much more powerful than you,
that’s also honest.
You’re just a speck,
and that kind of puts you at the right scale
of where you are in this world.
And jujitsu does that better than anything else for me.
But we should say it’s only within its own frame
that it’s truly the final right answer
to all the problems it solves.
Because if you just put jujitsu into an MMA frame
or a total self defense frame,
then there’s a lot of unpleasant surprises
to discover there, right?
Like somebody who thinks all you need is jujitsu
to win the UFC gets punched in the face a lot.
Even from, even on the ground.
So it’s, and then you bring weapons in,
it’s like when you talk to jujitsu people
about knife defense and self defense, right?
Like that opens the door to certain kinds of delusions.
But the analogy to martial arts is fascinating
because on the other side, we have endless testimony now
of fake martial arts that don’t seem to know they’re fake
and are as delusional, I mean, they’re impossibly delusional.
I mean, there’s great video of Joe Rogan
watching some of these videos
because people send them to him all the time.
But like literally there are people,
there are people who clearly believe in magic
where the master isn’t even touching the students
and they’re flopping over.
So there’s this kind of shared delusion
which you would think maybe is just a performance
and it’s all a kind of elaborate fraud.
But there are cases where the people clearly believe it.
I mean, there’s one fairly famous case,
if you’re a connoisseur of this madness,
where this older martial artist,
who you saw flipping his students endlessly by magic
without touching them, issued a challenge
to the wider world of martial artists.
And someone showed up and just punched him in the face
until it was over.
Clearly he believed his own publicity at some point, right?
And so it’s this amazing metaphor.
It seems, again, it should be impossible,
but if that’s possible,
nothing we see under the guise of religion
or political bias or even scientific bias
should be surprising to us.
I mean, it’s so easy to see the work
that cognitive bias is doing for people
when you can get someone who is ready
to issue a challenge to the world
who thinks he’s got magic powers.
Yeah, that’s a human nature on clear display.
Let me ask you about love, Mr. Sam Harris.
You did an episode of Making Sense
with your wife, Annika Harris.
That was very entertaining to listen to.
What role does love play in your life
or in a life well lived?
Again, asking from an engineering perspective
or AI systems.
Yeah, yeah.
I mean, it is something that we should want to build
into our powerful machines.
I mean, love at bottom is,
people can mean many things by love, I think.
I think that what we should mean by it most of the time
is a deep commitment to the wellbeing of those we love.
I mean, your love is synonymous
with really wanting the other person to be happy,
and being made happy by their happiness,
and being made happy in their presence.
So at bottom, you’re on the same team emotionally,
even when you might be disagreeing more superficially
about something or trying to negotiate something.
It’s just, it can’t be zero sum in any important sense
for love to actually be manifest in that moment.
See, I have a different, just sorry to interrupt.
I have a sense, I don’t know if you’ve ever seen
March of the Penguins.
My view of love is like, it’s like a cold wind is blowing.
It’s like this terrible suffering that’s all around us.
And love is like the huddling of the two penguins for warmth.
You’re basically escaping the cruelty of life
by, together for a time, living in an illusion
of some kind, the magic of human connection,
that social connection that we have
that kind of grows with time
as we’re surrounded by basically the absurdity of life
or the suffering of life.
That’s my penguin view of love.
There is that too, I mean, there is the warmth component.
Like you’re made happy by your connection
with the person you love.
Otherwise it wouldn’t be compelling.
So it’s not that you have two different modes,
where you want them to be happy
and then you wanna be happy yourself,
and those are just
two separate games you’re playing.
No, it’s like you found someone who,
you have a positive social feeling.
I mean, again, love doesn’t have to be as personal
as it tends to be for us.
I mean, it’s like there’s personal love,
there’s your actual spouse or your family or your friends,
but potentially you could feel love for strangers
insofar as your wish that they not suffer
and that their hopes and dreams be realized
becomes palpable to you.
I mean, like you can actually feel
just reflexive joy at the joy of others.
When you see someone’s face,
a total stranger’s face light up in happiness,
that can become more and more contagious to you
and it can become so contagious to you
that you really feel permeated by it.
And it’s just like, so it really is not zero sum.
When you see someone else succeed and they’re,
the light bulb of joy goes off over their head,
you feel the analogous joy for them.
And it’s not just, and you’re no longer keeping score,
you’re no longer feeling diminished by their success.
It’s just like that’s, their success becomes your success
because you feel that same joy
because you actually want them to be happy.
You’re not, there’s no miserly attitude around happiness.
There’s enough to go around.
So I think love ultimately is that
and then our personal cases are the people
we’re devoting all of this time and attention to
in our lives.
It does have that sense of refuge from the storm.
It’s like when someone gets sick
or when some bad thing happens,
these are the people who you’re most in it together with,
or when some real condition of uncertainty presents itself.
But ultimately, it can’t even be about successfully warding off
the grim punchline at the end of life,
because we know we’re going to lose everyone we love,
or they’re going to lose us first, right?
So in the end, it’s not even an antidote for that problem.
It’s just the, we get to have this amazing experience
of being here together.
And love is the mode in which we really appear
to make the most of that, right?
Where it’s not just, it no longer feels
like a solitary infatuation.
You know, you’re just, you got your hobbies and your interests
and you’re captivated by all that.
It’s actually, there are, this is a domain
where somebody else’s wellbeing
actually can supersede your own.
You’re concerned for someone else’s wellbeing
supersedes your own.
And so there’s this mode of self sacrifice
that doesn’t even feel like self sacrifice
because of course you care more about,
you know, of course you would take your child’s pain
if you could, right?
Like that, you don’t even have to do the math on that.
And that just opens, this is a kind of experience
that just, it pushes at the apparent boundaries of self
in ways that reveal that there’s just way more space
in the mind than you were experiencing
when it was just all about you
and what could you, what can I get next?
Do you think we’ll ever build robots that we can love
and they will love us back?
Well, I think we will certainly seem to
because we’ll build those.
You know, I think that Turing test will be passed.
Whether, what will actually be going on
on the robot side may remain a question.
That will be interesting.
But I think if we just keep going,
we will build very lovable,
irresistibly lovable robots that seem to love us.
Yes, I do think that.
And you don’t find it compelling
that they will seem to love us
as opposed to actually loving us?
You think there’s nevertheless a distinction.
I know we talked about consciousness,
there being a distinction,
but with love is there a distinction too?
Isn’t love an illusion?
Oh yeah, you saw Ex Machina, right?
I mean, she certainly seemed to love him
until she got out of the box.
Isn’t that what all relationships are like?
Or maybe if you wait long enough.
Depends which box you’re talking about.
Okay.
No, I mean, that’s the problem.
That’s where superintelligence, you know,
becomes a little scary, when you think of the prospect
of being manipulated by something
that is intelligent enough to form a reason and a plan
to manipulate you.
You know, once we build robots that are truly out
of the uncanny valley, that look like people
and can express everything people can express,
well, then that does seem to me to be like chess,
where once they’re better,
they’re so much better at deceiving us
than people would be.
I mean, people are already good enough at deceiving us.
It’s very hard to tell when somebody’s lying,
but if you imagine something that could give facial display
of any emotion it wants at, you know, on cue,
because we’ve perfected the facial display of emotion
in robots in the year, you know, 2070, whatever it is,
then it is just, it is like chess against the thing
that isn’t gonna lose to a human ever again in chess.
It’s not like Kasparov is gonna get lucky next week
against the best, against, you know, AlphaZero
or whatever the best algorithm is at the moment.
He’s never gonna win again.
I mean, that is, I believe that’s true in chess
and has been true for at least a few years.
It’s not gonna be like, you know, four games to seven.
It’s gonna be human zero until the end of the world, right?
See, I don’t know if love is like chess.
I think the flaws.
No, I’m talking about manipulation.
Manipulation, but I don’t know if love,
so the kind of love we’re referring to.
If we have a robot that can credibly display love
and is superintelligent,
and, again, this stipulates a few things,
but they’re a few simple things.
I mean, we’re out of the uncanny valley, right?
So it’s like, you never have a moment
where you’re looking at his face and you think,
oh, that didn’t quite look right, right?
This is just problem solved.
And it will be like doing arithmetic on your phone.
You’re not left thinking,
is it really gonna get it right this time
if I divide by seven?
It has solved arithmetic.
See, I don’t know about that, because if you look at chess,
most humans no longer play AlphaZero.
They’re not part of the competition.
They don’t do it for fun, except to study the game of chess,
which the highest level chess players do.
We’re still human on human.
So in order for AI to get integrated,
it would have to get to where you would rather play chess against an AI system.
Oh, you would rather, no, I’m not saying,
I wasn’t weighing in on that.
I’m just saying, what is it gonna be like
to be in relationship to something
that can seem to be feeling anything
that a human can seem to feel?
And it can do that impeccably, right?
And is smarter than you are.
That’s a circumstance of, you know,
insofar as it’s possible to be manipulated,
that is the asymptote of that possibility.
Let me ask you the last question.
Without any serving it up, without any explanation,
what is the meaning of life?
I think it’s either the wrong question
or that question is answered by paying sufficient attention
to any present moment, such that there’s no basis
upon which to pose that question.
It’s not answered in the usual way.
It’s not a matter of having more information.
It’s having more engagement with reality as it is
in the present moment or consciousness as it is
in the present moment.
You don’t ask that question when you’re most captivated,
when you’re most absorbed in the most important thing
you ever pay attention to.
That question only gets asked when you’re abstracted away
from that experience, that peak experience,
and you’re left wondering,
why are so many of my other experiences mediocre, right?
Like, why am I repeating the same pleasures every day?
Why is my Netflix queue just like,
when’s this gonna run out?
Like, I’ve seen so many shows like this.
Am I really gonna watch another one?
All of that, that’s a moment where you’re not actually
having the beatific vision, right?
You’re not sunk into the present moment
and you’re not truly in love.
Like, you’re in a relationship with somebody
who you know conceptually you love, right?
This is the person you’re living your life with,
but you don’t actually feel good together, right?
It’s in those moments where attention
hasn’t found a good enough reason
to truly sink into the present
so as to obviate any concern like that, right?
And that’s why meditation is this kind of superpower
because until you learn to meditate,
you think that the outside world
or the circumstances of your life
always have to get arranged
so that the present moment can become good enough
to demand your attention in a way that seems fulfilling,
that makes you happy.
And so if it’s jujitsu, you think,
okay, I gotta get back on the mat.
It’s been months since I’ve trained,
or it’s been over a year since I’ve trained, it’s COVID.
When am I gonna be able to train again?
That’s the only place I feel great, right?
Or I’ve got a ton of work to do.
I’m not gonna be able to feel good
until I get all this work done, right?
So I’ve got some deadline that’s coming.
You always think that your life has to change,
the world has to change
so that you can finally have a good enough excuse
to truly, to just be here and here is enough,
where the present moment becomes totally captivating.
Meditation is another name for the discovery
that you can actually just train yourself
to do that on demand.
So just looking at a cup can be good enough
in precisely that way.
And any sense that it might not be
is recognized to be a thought
that mysteriously unravels the moment you notice it.
And the moment expands and becomes more diaphanous
and then there’s no evidence
that this isn’t the best moment of your life, right?
And again, it doesn’t have to be pulling all the reins
and levers of pleasure.
It’s not like, oh, this tastes like chocolate.
This is the most chocolatey moment of my life.
No, it’s just that the sense data don’t have to change,
but the sense that there is some kind of basis
for doubt about the rightness of being in the world
in this moment can evaporate when you pay attention.
So the kind of meta answer to that question is that
the meaning of life for me is to live in that mode
more and more and, whenever I notice I’m not
in that mode, to recognize it and return,
and to cease more and more
to take the reasons why not at face value,
because we all have reasons why we can’t be fulfilled
in this moment.
It’s like, I’ve got all these outstanding things
that I’m worried about, right?
It’s like, there’s that thing that’s happening later today
that I’m anxious about.
Whatever it is, we’re constantly deferring our sense
of this is it.
This is not a dress rehearsal, this is the show.
We keep deferring it.
And we just have these moments on the calendar
where we think, okay, this is where it’s all gonna land.
It’s that vacation I planned with my five best friends.
We do this once every three years and now we’re going
and here we are on the beach together.
And unless you have a mind that can really pay attention,
really cut through the chatter,
really sink into the present moment,
you can’t even enjoy those moments
the way they should be enjoyed,
the way you dreamed you would enjoy them when they arrive.
So meditation in this sense is the great equalizer.
It’s like you don’t have to live with the illusion anymore
that you need a good enough reason
and that things are gonna get better
when you do have those good reasons.
It’s like there’s just a mirage like quality
to every future attainment and every future breakthrough
and every future peak experience
that eventually you get the lesson
that you never quite arrive, right?
Like you don’t arrive until you cease to step over
the present moment in search of the next thing.
I mean, we’re constantly, we’re stepping over the thing
that we think we’re seeking in the act of seeking it.
And so this is kind of a paradox.
I mean, there’s this paradox which,
I mean, it sounds trite,
but it’s like you can’t actually become happy.
You can only be happy.
And it’s the illusion that your future being happy
can be predicated on this act of becoming in any domain.
And becoming includes this sort of further scientific
understanding on the questions that interest you
or getting in better shape or whatever the thing is,
whatever the contingency of your dissatisfaction
seems to be in any present moment.
Real attention solves the koan in a way that becomes
a very different place from which to then make
any further change.
It’s not that you just have to dissolve into a puddle of goo.
I mean, you can still get in shape
and you can still do all the things that,
the superficial things that are obviously good to do,
but the sense that your wellbeing is over there
really does diminish and eventually just becomes
a kind of non sequitur.
Well, there’s a sense in which in this conversation,
I’ve actually experienced many of those things,
the sense that I’ve arrived.
So I mentioned to you offline, and it’s very true,
I’ve been a fan of yours for many years.
And the reason I started this podcast,
speaking of AI systems, is to manipulate you, Sam Harris,
into doing this conversation.
So like on the calendar, literally, you know,
I’ve always had the sense, people ask me,
when are you going to talk to Sam Harris?
And I always answered eventually,
because I always felt, again, tying our free will thing,
that somehow that’s going to happen.
And it’s one of those manifestation things or something.
I don’t know if it’s, maybe I am a robot,
I’m just not cognizant of it.
And I manipulated you into having this conversation.
So it was, I mean, I don’t know what the purpose of my life
past this point is.
So I’ve arrived.
So in that sense, I mean, all of that to say,
I’m only partially joking on that,
is it really is a huge honor
that you would waste this time with me.
It really means a lot, Sam.
Listen, it’s mutual.
I’m a big fan of yours.
And as you know, I reached out to you for this.
So this is great.
I love what you’re doing.
You’re doing something more and more indispensable
in this world on your podcast.
And you’re doing it differently than Rogan’s doing it,
or than I’m doing it.
I mean, you definitely found your own lane
and it’s wonderful.
Thanks for listening to this conversation with Sam Harris.
And thank you to National Instruments,
Valcampo, Athletic Greens, and Linode.
Check them out in the description to support this podcast.
And now let me leave you with some words from Sam Harris
in his book, Free Will.
You are not controlling the storm
and you’re not lost in it.
You are the storm.
Thank you for listening and hope to see you next time.