The following is a conversation with David Chalmers.
He’s a philosopher and cognitive scientist
specializing in the areas of philosophy of mind,
philosophy of language, and consciousness.
He’s perhaps best known for formulating
the hard problem of consciousness,
which could be stated as why does the feeling
which accompanies awareness of sensory information
exist at all?
Consciousness is almost entirely a mystery.
Many people who worry about AI safety and ethics
believe that, in some form, consciousness can
and should be engineered into AI systems of the future.
So while there’s much mystery, disagreement,
and discoveries yet to be made about consciousness,
these conversations, while fundamentally philosophical
in nature, may nevertheless be very important
for engineers of modern AI systems to engage in.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube,
give it five stars on Apple Podcast,
support it on Patreon, or simply connect with me
on Twitter at Lex Fridman, spelled F R I D M A N.
As usual, I’ll do one or two minutes of ads now
and never any ads in the middle
that can break the flow of the conversation.
I hope that works for you
and doesn’t hurt the listening experience.
This show is presented by Cash App,
the number one finance app in the App Store.
When you get it, use code LEXPODCAST.
Cash App lets you send money to friends,
buy Bitcoin, and invest in the stock market
with as little as one dollar.
Brokerage services are provided by Cash App Investing,
a subsidiary of Square, and member SIPC.
Since Cash App does fractional share trading,
let me mention that the order execution algorithm
that works behind the scenes to create the abstraction
of fractional orders is an algorithmic marvel.
So big props to the Cash App engineers
for solving a hard problem that, in the end,
provides an easy interface that takes a step up
to the next layer of abstraction over the stock market,
making trading more accessible for new investors
and diversification much easier.
If you get Cash App from the App Store or Google Play
and use the code LEXPODCAST, you’ll get $10,
and Cash App will also donate $10 to FIRST,
one of my favorite organizations
that is helping to advance robotics and STEM education
for young people around the world.
And now, here’s my conversation with David Chalmers.
Do you think we’re living in a simulation?
I don’t rule it out.
There’s probably gonna be a lot of simulations
in the history of the cosmos.
If the simulation is designed well enough,
it’ll be indistinguishable from a non-simulated reality.
And although we could keep searching for evidence
that we’re not in a simulation,
any of that evidence in principle could be simulated.
So I think it’s a possibility.
But do you think the thought experiment is interesting
or useful to calibrate how we think
about the nature of reality?
Yeah, I definitely think it’s interesting and useful.
In fact, I’m actually writing a book about this right now,
all about the simulation idea,
using it to shed light
on a whole bunch of philosophical questions.
So the big one is how do we know anything
about the external world?
Descartes said, maybe you’re being fooled by an evil demon
who’s stimulating your brain into thinking
all this stuff is real when, in fact, it’s all made up.
Well, the modern version of that is,
how do you know you’re not in a simulation?
Then the thought is, if you’re in a simulation,
none of this is real.
So that’s teaching you something about knowledge.
How do you know about the external world?
I think there’s also really interesting questions
about the nature of reality right here.
If we are in a simulation, is all this real?
Is there really a table here?
Is it really a microphone?
Do I really have a body?
The standard view would be, no, we don’t.
None of this would be real.
My view is actually that’s wrong.
And even if we are in a simulation, all of this is real.
That’s why I called this reality 2.0.
New version of reality, different version of reality,
still reality.
So what’s the difference between quote unquote,
real world and the world that we perceive?
So we interact with the world by perceiving it.
It only really exists through the window
of our perception system and in our mind.
So what’s the difference between something
that’s quote unquote real, that exists perhaps
without us being there, and the world as you perceive it?
Well the world as we perceive it is a very simplified
and distorted version of what’s going on underneath.
We already know that from just thinking about science.
You don’t see too many obviously quantum mechanical effects
in what we perceive, but we still know quantum mechanics
is going on under all things.
So I like to think the world we perceive
is this very kind of simplified picture of colors
and shapes existing in space and so on.
That’s what the philosopher
Wilfrid Sellars called the manifest image.
The world as it seems to us, we already know
underneath all that is a very different scientific image
with atoms or quantum wave functions or super strings
or whatever the latest thing is.
And that’s the ultimate scientific reality.
So I think of the simulation idea as basically
another hypothesis about what the ultimate
say quasi-scientific or metaphysical reality
is going on underneath the world of the manifest image.
The world of the manifest image is this very simple thing
that we interact with that’s neutral
on the underlying stuff of reality.
Science can help tell us about that.
Maybe philosophy can help tell us about that too.
And if we eventually take the red pill
and find out we’re in a simulation,
my view is that’s just another view
about what reality is made of.
The philosopher Immanuel Kant said,
what is the nature of the thing in itself?
I’ve got a glass here, and it appears to me
a certain way: a certain shape,
it’s liquid, it’s clear.
And he said, what is the nature of the thing
in itself?
Well, I think of the simulation idea,
it’s a hypothesis about the nature of the thing in itself.
It turns out, if we’re in a simulation,
the thing-in-itself nature of this glass
is that it’s actually a bunch of data structures
running on a computer in the next universe up.
Yeah, that’s what people tend to do
when they think about simulation.
They think about our modern computers,
somehow trivially, crudely scaled up in some sense.
But do you think the simulation,
I mean, in order to actually simulate
something as complicated as our universe
that’s made up of molecules and atoms
and particles and quarks and maybe even strings,
all of that would require something
just infinitely many orders of magnitude more
of scale and complexity.
Do you think we’re even able to conceptualize
what it would take to simulate our universe?
Or does it just slip into this idea
that you basically have to build a universe,
something so big to simulate it?
Does it get into this fuzzy area
that’s not useful at all?
Yeah, well, I mean, our universe
is obviously incredibly complicated.
And for us within our universe to build a simulation
of a universe as complicated as ours
is gonna run into obvious problems.
If the universe is finite,
there’s just no way that’s gonna work.
Maybe there’s some cute way to make it work
if the universe is infinite,
maybe an infinite universe could somehow simulate
a copy of itself, but that’s gonna be hard.
Nonetheless, if we are in a simulation,
I think there’s no particular reason
why we have to think the simulating universe
has to be anything like ours.
You’ve said before that it might be,
so you could think of it as turtles all the way down.
You could think of the simulating universe
different than ours, but we ourselves
could also create another simulating universe.
So you said that there could be these
kind of levels of universes.
And you’ve also mentioned this hilarious idea,
maybe tongue in cheek, maybe not,
that there may be simulations within simulations,
arbitrarily stacked levels,
and that there may be, that we may be in level 42.
Oh yeah.
Along those stacks, referencing The Hitchhiker’s Guide
to the Galaxy.
If we’re indeed in a simulation within a simulation
at level 42, what do you think level zero looks like?
The originating universe.
I would expect that level zero is truly enormous.
I mean, if it’s finite, it’s of
some extraordinarily large finite capacity,
but much more likely it’s infinite.
Maybe it’s got some very high cardinality
that enables it to support just any number of simulations.
So high degree of infinity at level zero,
slightly smaller degree of infinity at level one.
So by the time you get down to us at level 42,
maybe there’s plenty of room for lots of simulations
of finite capacity.
If the top universe is only a small finite capacity,
then obviously that’s gonna put very, very serious limits
on how many simulations you’re gonna be able to get running.
So I think we can certainly confidently say
that if we’re at level 42,
then the top level’s pretty damn big.
So it gets more and more constrained
as we get down levels, more and more simplified
and constrained and limited in resources.
Yeah, we still have plenty of capacity here.
What was it Feynman said?
He said there’s plenty of room at the bottom.
We’re still a number of levels above the degree
where there’s room for fundamental computing,
physical computing capacity,
quantum computing capacity at the bottom level.
So we’ve got plenty of room to play with
and we probably have plenty of room
for simulations of pretty sophisticated universes,
perhaps none as complicated as our universe,
unless our universe is infinite,
but still at the very least
for pretty serious finite universes,
but maybe universes somewhat simpler than ours,
unless of course we’re prepared to take certain shortcuts
in the simulation,
which might then increase the capacity significantly.
Do you think the human mind, us people,
in terms of the complexity of simulation
is at the height of what the simulation
might be able to achieve?
Like if you look at incredible entities
that could be created in this universe of ours,
do you have an intuition about
how incredible human beings are on that scale?
I think we’re pretty impressive,
but we’re not that impressive.
Are we above average?
I mean, I think human beings are at a certain point
in the scale of intelligence,
which made many things possible.
You get through evolution, through single cell organisms,
through fish and mammals and primates,
and something happens.
Once you get to human beings,
we’ve just reached that level
where we get to develop language,
we get to develop certain kinds of culture,
and we get to develop certain kinds of collective thinking
that has enabled all this amazing stuff to happen,
science and literature and engineering
and culture and so on.
So we’re just at the beginning of that,
on the evolutionary threshold.
It’s kind of like we just got there,
who knows, a few thousand or tens of thousands of years ago.
So we’re probably just at the very beginning
for what’s possible there.
So I’m inclined to think among the scale
of intelligent beings,
we’re somewhere very near the bottom.
I would expect that, for example,
if we’re in a simulation,
then the simulators who created us
have got the capacity to be far more sophisticated.
If we’re at level 42,
who knows what the ones at level zero are like.
It’s also possible that this is the epitome
of what is possible to achieve.
So we as human beings see ourselves maybe as flawed,
see all the constraints, all the limitations,
but maybe that’s the magical, the beautiful thing.
Maybe those limitations are the essential elements
for an interesting sort of that edge of chaos,
that interesting existence,
that if you make us much more intelligent,
if you make us much more powerful
in any kind of dimension of performance,
maybe you lose something fundamental
that makes life worth living.
So you kind of have this optimistic view
that we’re this little baby,
that then there’s so much growth and potential,
but this could also be it.
This is the most amazing thing is us.
Maybe what you’re saying is consistent
with what I’m saying.
I mean, we could still have levels of intelligence
far beyond us,
but maybe those levels of intelligence on your view
would be kind of boring.
And we kind of get so good at everything
that life suddenly becomes unidimensional.
So we’re just inhabiting this one spot
of like maximal romanticism in the history of evolution.
You get to humans and it’s like, yeah,
and then years to come, our super intelligent descendants
are gonna look back at us and say,
those were the days when they just hit
the point of inflection and life was interesting.
I am an optimist.
So I’d like to think that if there are superintelligences
somewhere in the future,
they’ll figure out how to make life super interesting
and super romantic.
Well, you know what they’re gonna do.
So what they’re gonna do is they realize
how boring life is when you’re super intelligent.
So they create a new level of simulation
and sort of live through the things they’ve created
by watching them stumble about
in their flawed ways.
So maybe that’s it, you create a new level of simulation
every time you get really bored with how smart and...
That would be kind of sad though,
because it would mean the peak of their existence
is watching simulations for entertainment.
It’s kind of like saying the peak of our existence now is Netflix.
No, it’s all right.
A flip side of that could be the peak of our existence
for many people having children and watching them grow.
That becomes very meaningful.
Okay, you create a simulation that’s like creating a family.
Creating like, well, any kind of creation
is kind of a powerful act.
Do you think it’s easier to simulate the mind
or the universe?
So I’ve heard several people, including Nick Bostrom,
think about ideas of maybe you don’t need
to simulate the universe,
you can just simulate the human mind.
Or in general, just the distinction
between simulating the entirety of it,
the entirety of the physical world,
or just simulating the mind.
Which one do you see as more challenging?
Well, I think in some sense, the answer is obvious.
It has to be simpler to simulate the mind
than to simulate the universe,
because the mind is part of the universe.
And in order to fully simulate the universe,
you’re gonna have to simulate the mind.
So unless we’re talking about partial simulations.
And I guess the question is which comes first?
Does the mind come before the universe
or does the universe come before the mind?
So the mind could just be an emergent phenomenon
in this universe.
So simulation is an interesting thing
in that creating a simulation perhaps
doesn’t require you to program every single thing
that happens in it.
It’s just defining a set of initial conditions
and rules based on which it behaves.
Simulating the mind requires you
to have a little bit more,
we’re now in a little bit of a crazy land,
but it requires you to understand
the fundamentals of cognition,
perhaps of consciousness,
of perception of everything like that,
that’s not created through some kind of emergence
from basic physics laws,
but more requires you to actually understand
the fundamentals of the mind.
How about if we said to simulate the brain?
The brain.
Rather than the mind.
So the brain is just a big physical system.
The universe is a giant physical system.
To simulate the universe at the very least,
you’re gonna have to simulate the brains
as well as all the other physical systems within it.
And it’s not obvious that the problems are any worse
for the brain than for any other physical system;
it’s just a particularly complex physical system.
But if we can simulate arbitrary physical systems,
we can simulate brains.
There is this further question of whether,
when you simulate a brain,
will that bring along all the features of the mind with it?
Like will you get consciousness?
Will you get thinking?
Will you get free will?
And so on.
And that’s something philosophers have argued over
for years.
My own view is if you simulate the brain well enough,
that will also simulate the mind.
But yeah, there’s plenty of people who would say no.
You’d merely get like a zombie system,
a simulation of a brain without any true consciousness.
But for you, if you put together a brain,
the consciousness comes with it, arises with it.
Yeah, I don’t think it’s obvious.
That’s your intuition.
My view is roughly that yeah,
what is responsible for consciousness,
it’s in the patterns of information processing and so on
rather than say the biology that it’s made of.
There’s certainly plenty of people out there
who think consciousness has to be say biological.
So if you merely replicate the patterns of information
processing in a nonbiological substrate,
you’ll miss what’s crucial for consciousness.
I mean, I just don’t think there’s any particular reason
to think that biology is special here.
You can imagine substituting the biology
for nonbiological systems, say silicon circuits
that play the same role.
The behavior will continue to be the same.
And when I think about the connection,
the isomorphisms between consciousness and the brain,
the deepest connections to me seem to connect consciousness
to patterns of information processing,
not to specific biology.
So I at least adopted as my working hypothesis
that basically it’s the computation and the information
that matters for consciousness.
At the same time, we don’t understand consciousness,
so all this could be wrong.
So the computation, the flow, the processing,
manipulation of information,
the process is where the consciousness,
the software is where the consciousness comes from,
not the hardware.
Roughly the software, yeah.
The patterns of information processing at least
in the hardware, which we could view as software.
It may not be something you can just like program
and load and erase and so on in the way we can
with ordinary software, but it’s something at the level
of information processing rather than at the level
of implementation.
So on that, what do you think of the experience of self,
just the experience of the world in a virtual world,
in virtual reality?
Is it possible that we can create sort of
offsprings of our consciousness by existing
in a virtual world long enough?
So yeah, can we be conscious in the same kind
of deep way that we are in this real world
by hanging out in a virtual world?
Yeah, well, the kind of virtual worlds we have now
are interesting but limited in certain ways.
In particular, they rely on us having a brain and so on,
which is outside the virtual world.
Maybe I’ll strap on my VR headset or just hang out
in a virtual world on a screen, but my brain
and then my physical environment might be simulated
if I’m in a virtual world, but right now,
there’s no attempt to simulate my brain.
There might be some non player characters
in these virtual worlds that have simulated
cognitive systems of certain kinds
that dictate their behavior, but mostly,
they’re pretty simple right now.
I mean, some people are trying to combine,
put a bit of AI in their non player characters
to make them smarter, but for now,
inside a virtual world, the actual thinking
is interestingly distinct from the physics
of those virtual worlds.
In a way, actually, I like to think this is kind of
reminiscent of the way that Descartes
thought our physical world was.
There’s physics, and there’s the mind,
and they’re separate.
Now we think the mind is somehow connected
to physics pretty deeply, but in these virtual worlds,
there’s a physics of a virtual world,
and then there’s this brain which is totally
outside the virtual world that controls it
and interacts with it. When anyone exercises agency
in a video game, that’s actually somebody
outside the virtual world moving a controller,
controlling the interaction of things
inside the virtual world.
So right now, in virtual worlds,
the mind is somehow outside the world,
but you could imagine in the future,
once we have developed serious AI,
artificial general intelligence, and so on,
then we could come to virtual worlds
which have enough sophistication,
you could actually simulate a brain
or have a genuine AGI, which would then presumably
be able to act in equally sophisticated ways,
maybe even more sophisticated ways,
inside the virtual world to how it might
in the physical world. That would be kind of a
virtual-world-internal intelligence,
and then the question is, could they have consciousness,
experience, intelligence, free will,
all the things that we have, and again,
my view is I don’t see why not.
To linger on it a little bit, I find virtual reality
incredibly powerful, even the crude virtual reality
we have now. Perhaps there are psychological effects
that make some people more amenable
to virtual worlds than others, but I find myself
wanting to stay in virtual worlds for the most part.
You do?
Yes.
With a headset or on a desktop?
No, with a headset.
Really interesting, because I am totally addicted
to using the internet and things on a desktop,
but when it comes to VR, with a headset,
I don’t typically use it for more than 10 or 20 minutes.
There’s something just slightly aversive about it, I find,
so I don’t, right now, even though I have Oculus Rift
and Oculus Quest and HTC Vive and Samsung, this and that.
You just don’t wanna stay in that world for long.
Not for extended periods.
You actually find yourself hanging out in there?
Something about, it’s both a combination
of just imagination and considering the possibilities
of where this goes in the future.
It feels like I want to almost prepare my brain for it.
I wanna explore sort of Disneyland
when it’s first being built in the early days,
and it feels like I’m walking around
almost imagining the possibilities,
and something through that process allows my mind
to really enter into that world,
but you say that the brain is external to that virtual world.
It is, strictly speaking, true, but…
If you’re in VR and you do brain surgery on an avatar,
and you’re gonna open up that skull,
what are you gonna find?
Sorry, nothing there.
Nothing.
The brain is elsewhere.
You don’t think it’s possible to kind of separate them,
and I don’t mean in a sense like Descartes,
like a hard separation, but basically,
do you think it’s possible, with the brain outside
of the virtual realm, when you’re wearing a headset,
to create a new consciousness for prolonged periods of time?
Really feel, like really, like forget
that your brain is outside.
So this is, okay, this is gonna be the case
where the brain is still outside.
It’s still outside.
But could living in the VR, I mean,
we already find this, right, with video games.
Exactly.
They’re completely immersive, and you get taken up
by living in those worlds,
and it becomes your reality for a while.
So they’re not completely immersive,
they’re just very immersive.
Completely immersive.
You don’t forget the external world, no.
Exactly, so that’s what I’m asking.
Do you think it’s almost possible
to really forget the external world?
Really, really immerse yourself.
To forget completely?
Why would we forget?
We got pretty good memories.
Maybe you can stop paying attention to the external world,
but this already happens a lot.
I go to work, and maybe I’m not paying attention
to my home life.
I go to a movie, and I’m immersed in that.
So that degree of immersion, absolutely.
But we still have the capacity to remember it.
To completely forget the external world,
I’m thinking that would probably take,
I don’t know, some pretty serious drugs or something
to make your brain do that.
Is that possible?
So, I mean, I guess what I’m getting at
is consciousness truly a property
that’s tied to the physical brain?
Or can you create sort of different offspring,
copies of consciousnesses based on the worlds
that you enter?
Well, the way we’re doing it now,
at least with a standard VR, there’s just one brain.
Interacts with the physical world.
Plays a video game, puts on a video headset,
interacts with this virtual world.
And I think we’d typically say there’s one consciousness here
that nonetheless undergoes different environments,
takes on different characters in different environments.
This is already something that happens
in the nonvirtual world.
I might interact one way in my home life,
my work life, my social life, and so on.
So at the very least, that will happen
in a virtual world very naturally.
People sometimes adopt the character of avatars
very different from themselves,
maybe even a different gender, different race,
different social background.
So that much is certainly possible.
I would see that as a single consciousness
is taking on different personas.
If you want literal splitting of consciousness
into multiple copies,
I think it’s gonna take something more radical than that.
Like maybe you can run different simulations of your brain
in different realities
and then expose them to different histories.
And then you’d split yourself
into 10 different simulated copies,
which then undergo different environments
and then ultimately do become 10
very different consciousnesses.
Maybe that could happen,
but now we’re not talking about something
that’s possible in the near term.
We’re gonna have to have brain simulations
and AGI for that to happen.
Got it.
So before any of that happens,
it’s fundamentally you see it as a singular consciousness,
even though it’s experiencing different environments,
virtual or not,
it’s still connected to same set of memories,
same set of experiences and therefore,
one sort of joint conscious system.
Yeah, or at least no more multiple
than the kind of multiple consciousness
that we get from inhabiting different environments
in a non virtual world.
So you said as a child,
you were a music-color synesthete,
where songs had colors for you.
So what songs had what colors?
You know, this is funny.
I didn’t pay much attention to this at the time,
but I’d listen to a piece of music
and I’d get some kind of imagery
of a kind of color.
The weird thing is mostly they were kind of murky,
dark greens and olive browns
and the colors weren’t all that interesting.
I don’t know what the reason is.
I mean, my theory is that maybe it’s like different chords
and tones provided different colors
and they all tended to get mixed together
into these somewhat uninteresting browns and greens.
But every now and then there’d be something
that had a really pure color.
So there’s just a few that I remember.
Here, There and Everywhere by the Beatles
was bright red; it has this very distinctive tonality
and chord structure at the beginning.
So that was bright red.
There was this song by the Alan Parsons Project
called Ammonia Avenue that was kind of a pure, a pure blue.
Anyway, I’ve got no idea how this happened.
I didn’t even pay that much attention
until it went away when I was about 20.
This synesthesia often goes away.
So is it purely just the perception of a particular color
or was there a positive or negative experience?
Like was blue associated with a positive
and red with a negative?
Or is it simply the perception of color
associated with some characteristic of the song?
For me, I don’t remember a lot of association
with emotion or with value.
It was just this kind of weird and interesting fact.
I mean, at the beginning, I thought this was something
that happened to everyone, songs of colors.
Maybe I mentioned it once or twice and people said, nope.
I thought it was kind of cool when there was one
that had one of these especially pure colors,
but it was only much later, once I became a grad student
thinking about the mind, that I read about this phenomenon
called synesthesia and I was like, hey, that’s what I had.
And now I occasionally talk about it in my classes,
in intro class and it still happens sometimes.
A student comes up and says, hey, I have that.
I never knew about that.
I never knew it had a name.
You said that it went away at age 20 or so.
And that you have a journal entry from around then saying,
songs don’t have colors anymore.
What happened?
Yeah, it was definitely sad that it was gone.
In retrospect, it was like, hey, that’s cool.
The colors have gone.
Yeah, can you think about that for a little bit?
Do you miss those experiences?
Because it’s a fundamentally different set of experiences
that you no longer have.
Or is it just a nice thing to have had?
You don’t see them as that fundamentally different
than you visiting a new country and experiencing
new environments.
I guess for me, when I had these experiences,
they were somewhat marginal.
They were like a little bonus kind of experience.
I know there are people who have much more serious forms
of synesthesia than this for whom it’s absolutely central
to their lives.
I know people who, when they experience new people,
they have colors, maybe they have tastes and so on.
Every time they see writing, it has colors.
Some people, whenever they hear music,
it’s got a certain really rich color pattern.
For some synesthetes, it’s absolutely central.
I think if they lost it, they’d be devastated.
Again, for me, it was a very, very mild form
of synesthesia, and it’s like, yeah,
it’s like those interesting experiences
you might get under different altered states
of consciousness and so on.
It’s kind of cool, but not necessarily
the single most important experiences in your life.
Got it.
So let’s try to go to the very simplest question
that you’ve answered many a time,
but perhaps the simplest things can help us reveal,
even in time, some new ideas.
So what, in your view, is consciousness?
What is qualia?
What is the hard problem of consciousness?
Consciousness, I mean, the word is used many ways,
but the kind of consciousness that I’m interested in
is basically subjective experience,
what it feels like from the inside to be a human being
or any other conscious being.
I mean, there’s something it’s like to be me right now.
I have visual images that I’m experiencing.
I’m hearing my voice.
I’ve got maybe some emotional tone.
I’ve got a stream of thoughts running through my head.
These are all things that I experience
from the first person point of view.
I’ve sometimes called this the inner movie in the mind.
It’s not a perfect metaphor.
It’s not like a movie in every way,
and it’s very rich.
But yeah, it’s just direct, subjective experience.
And I call that consciousness,
or sometimes philosophers use the word qualia,
which you suggested.
People tend to use the word qualia
for the qualities of things like colors,
redness, the experience of redness
versus the experience of greenness,
the experience of one taste or one smell versus another,
the experience of the quality of pain.
And yeah, a lot of consciousness
is the experience of those qualities.
Well, consciousness is bigger:
the entirety of any kind of experience.
Consciousness of thinking is not obviously qualia.
It’s not like specific qualities like redness or greenness,
but still I’m thinking about my hometown.
I’m thinking about what I’m gonna do later on.
Maybe there’s still something running through my head,
which is subjective experience.
Maybe it goes beyond those qualities or qualia.
Philosophers sometimes use the word phenomenal consciousness
for consciousness in this sense.
I mean, people also talk about access consciousness,
being able to access information in your mind,
reflective consciousness,
being able to think about yourself.
But it looks like the really mysterious one,
the one that really gets people going
is phenomenal consciousness.
The fact that there’s subjective experience
and all this feels like something at all.
And then the hard problem is,
why is it that there is phenomenal consciousness at all?
And how is it that physical processes in a brain
could give you subjective experience?
It looks like, on the face of it,
you could have all this big complicated physical system
in a brain running without giving rise to
subjective experience at all.
And yet we do have subjective experience.
So the hard problem is just explain that.
Explain how that comes about.
We haven’t been able to build machines
where a red light goes on that says it’s now conscious.
So how do we actually create that?
Or how do humans do it?
And how do we ourselves do it?
We do every now and then create machines that can do this.
We create babies that are conscious.
They’ve got these brains.
That brain does produce consciousness.
But even though we can create it,
we still don’t understand why it happens.
Maybe eventually we’ll be able to create machines,
AI machines, which as a matter of fact are conscious.
But that won’t necessarily make the hard problem go away
any more than it does with babies.
Cause we still wanna know how and why is it
that these processes give you consciousness?
You just made me realize for a second,
maybe it’s a totally dumb realization, but nevertheless,
that as a useful way to think about
the creation of consciousness is looking at a baby.
So that there’s a certain point
at which that baby is not conscious.
The baby starts from maybe, I don’t know,
from a few cells, right?
There’s a certain point at which consciousness
arrives, and it’s conscious.
Of course, we can’t know exactly that line,
but that’s a useful idea that we do create consciousness.
Again, a really dumb thing for me to say,
but not until now did I realize
we do engineer consciousness.
We get to watch the process happen.
We don’t know which point it happens or where it is,
but we do see the birth of consciousness.
Yeah, I mean, there’s a question, of course,
is whether babies are conscious when they’re born.
And it used to be, it seems,
at least some people thought they weren’t,
which is why they didn’t give anesthetics
to newborn babies when they circumcised them.
And so now people think, oh, that would be incredibly cruel.
Of course, babies feel pain.
And now the dominant view is that the babies can feel pain.
Actually, my partner Claudia works on this whole issue
of whether there’s consciousness in babies
and of what kind.
And she certainly thinks that newborn babies
come into the world with some degree of consciousness.
Of course, then you can just extend the question backwards
to fetuses and suddenly you’re into
politically controversial territory.
But the question also arises in the animal kingdom.
Where does consciousness start or stop?
Is there a line in the animal kingdom
where the first conscious organisms are?
It’s interesting, over time,
people are becoming more and more liberal
about ascribing consciousness to animals.
People used to think maybe only mammals could be conscious.
Now most people seem to think, sure, fish are conscious.
They can feel pain.
And now we’re arguing over insects.
You’ll find people out there who say plants
have some degree of consciousness.
So, you know, who knows where it’s gonna end.
The far end of this chain is the view
that every physical system has some degree of consciousness.
Philosophers call that panpsychism.
You know, I take that view.
I mean, that’s a fascinating way to view reality.
So if you could talk about,
if you can linger on panpsychism for a little bit,
what does it mean?
So it’s not just plants are conscious.
I mean, it’s that consciousness
is a fundamental fabric of reality.
What does that mean to you?
How are we supposed to think about that?
Well, we’re used to the idea that some things in the world
are fundamental, right, in physics.
Like what?
We take things like space or time, or spacetime,
mass, charge, as fundamental properties of the universe.
You don’t reduce them to something simpler.
You take those for granted.
You’ve got some laws that connect them.
Here is how mass and space and time evolve.
Theories like relativity or quantum mechanics
or some future theory that will unify them both.
But everyone says you gotta take some things as fundamental.
And if you can’t explain one thing
in terms of the previous fundamental things,
you have to expand the list.
Maybe something like this happened with Maxwell.
He ended up with fundamental principles
of electromagnetism and took charge as fundamental
because it turned out that was the best way to explain it.
So I at least take seriously the possibility
something like that could happen with consciousness.
Take it as a fundamental property,
like space, time, and mass.
And instead of trying to explain consciousness wholly
in terms of the evolution of space, time, and mass,
and so on, take it as a primitive
and then connect it to everything else
by some fundamental laws.
Because there’s this basic problem
that the physics we have now looks great
for solving the easy problems of consciousness,
which are all about behavior.
They give us a complicated structure and dynamics.
They tell us how things are gonna behave,
what kind of observable behavior they’ll produce,
which is great for the problems of explaining how we walk
and how we talk and so on.
Those are the easy problems of consciousness.
But the hard problem was this problem
about subjective experience just doesn’t look
like that kind of problem about structure,
dynamics, how things behave.
So it’s hard to see how existing physics
is gonna give you a full explanation of that.
Certainly, trying to get a physics view of consciousness,
yes, there has to be a connecting point,
and it could be at the very axiomatic level,
at the very beginning.
But first of all, it’s a crazy idea
that sort of everything has properties of consciousness.
At that point, the word consciousness
is already beyond the reach of our current understanding,
because, at least for me, maybe you can correct me,
it’s so far from the experiences
that I have as a human being.
To say that everything is conscious,
means that, if it’s true,
we understand almost nothing
about that fundamental aspect of the world.
How do you feel about saying an ant is conscious?
Do you get the same reaction to that
or is that something you can understand?
I can understand an ant,
but I can’t understand an atom, a particle.
Plants?
Plant, so I’m comfortable with living things on Earth
being conscious because there’s some kind of agency
where they’re similar size to me
and they can be born and they can die.
And that is understandable intuitively.
Of course, you anthropomorphize,
you put yourself in the place of the plant,
but I can understand it.
I mean, I’m not like, I don’t believe actually
that plants are conscious or that plants suffer,
but I can understand that kind of belief, that kind of idea.
How do you feel about robots?
Like the kind of robots we have now?
If I told you that a Roomba
had some degree of consciousness,
or some deep neural network?
I could understand that a Roomba has consciousness.
I just spent all day at iRobot.
And I mean, I personally love robots
and I have a deep connection with robots.
So I can, I also probably anthropomorphize them.
There’s something about the physical object.
So there’s a difference than a neural network,
a neural network running a software.
To me, the physical object,
something about the human experience
allows me to really see that physical object as an entity.
And if it moves, and moves in a way that
I didn’t program, where it feels
like it’s acting based on its own perception.
And yes, self awareness and consciousness,
even if it’s a Roomba,
then you start to assign it some agency, some consciousness.
But to say that panpsychism,
that consciousness is a fundamental property of reality,
is a much bigger statement.
It’s like turtles all the way down; it doesn’t end.
I know it’s full of mystery,
but if you can linger on it,
how do you think about reality
if consciousness is a fundamental part of its fabric?
The way you get there is from thinking,
can we explain consciousness given the existing fundamentals?
And then if you can’t, as at least right now, it looks like,
then you’ve got to add something.
It doesn’t follow that you have to add consciousness.
Here’s another interesting possibility is,
well, we’ll add something else.
Let’s call it proto consciousness or X.
And then it turns out space, time, mass plus X
will somehow collectively give you the possibility
for consciousness.
I don’t rule out that view either.
I call that panprotopsychism,
because maybe there’s some other property,
protoconsciousness, at the bottom level.
And if you can’t imagine there’s actually
genuine consciousness at the bottom level,
I think we should be open to the idea
that there’s this other thing, X,
which maybe we can’t imagine, that somehow gives you consciousness.
But if we are playing along with the idea
that there really is genuine consciousness
at the bottom level, of course,
this is going to be way out and speculative,
but at least, say, if it was classical physics,
then you’d end up saying,
well, with a bunch of particles in spacetime,
each of these particles has some kind of consciousness
whose structure mirrors maybe their physical properties,
like its mass, its charge, its velocity, and so on.
The structure of its consciousness
would roughly correspond to that.
And the physical interactions between particles,
I mean, there’s this old worry about physics.
I mentioned this before in this issue
about the manifest image.
We don’t really find out
about the intrinsic nature of things.
Physics tells us about how a particle relates
to other particles and interacts.
It doesn’t tell us about what the particle is in itself.
That was Kant’s thing in itself.
So here’s a view.
The nature in itself of a particle is something mental.
A particle is actually a conscious,
a little conscious subject
with properties of its consciousness
that correspond to its physical properties.
The laws of physics are actually ultimately relating
these properties of conscious subjects.
So in this view, a Newtonian world
actually would be a vast collection
of little conscious subjects at the bottom level,
way, way simpler than we are without free will
or rationality or anything like that.
But that’s what the universe would be like.
Now, of course, that’s a vastly speculative view.
No particular reason to think it’s correct.
Furthermore, with non-Newtonian physics,
say a quantum mechanical wave function,
suddenly it starts to look different.
It’s not a vast collection of conscious subjects.
Maybe there’s ultimately one big wave function
for the whole universe.
Corresponding to that might be something more
like a single conscious mind
whose structure corresponds
to the structure of the wave function.
People sometimes call this cosmopsychism.
And now, of course, we’re in the realm
of extremely speculative philosophy.
There’s no direct evidence for this,
but yeah, but if you want a picture
of what that universe would be like,
think, yeah, giant cosmic mind
with enough richness and structure among it
to replicate all the structure of physics.
I think therefore I am, at the level of particles,
and with quantum mechanics,
at the level of the wave function.
It’s kind of an exciting, beautiful possibility,
of course, way out of reach of physics currently.
It is interesting that some neuroscientists
are beginning to take panpsychism seriously,
that you find consciousness even in very simple systems.
So for example, the integrated information theory
of consciousness, a lot of neuroscientists
are taking seriously.
Actually, this new book
by Christof Koch just came in,
The Feeling of Life Itself:
Why Consciousness Is Widespread but Can’t Be Computed.
He basically endorses a panpsychist view
where you get consciousness
with the degree of information processing,
or integrated information processing, in a system,
and even very, very simple systems,
like a couple of particles, will have some degree of this.
So he ends up with some degree of consciousness
in all matter.
And the claim is that this theory
can actually explain a bunch of stuff
about the connection between the brain and consciousness.
Now, that’s very controversial.
I think it’s very, very early days
in the science of consciousness.
It’s interesting that it’s not just philosophy
that might lead you in this direction,
but there are ways of thinking quasi scientifically
that lead you there too.
But maybe it’s different than panpsychism.
What do you think?
So Alan Watts has this quote that I’d like to ask you about.
The quote is, through our eyes,
the universe is perceiving itself.
Through our ears, the universe is listening
to its harmonies.
We are the witnesses through which the universe
becomes conscious of its glory, of its magnificence.
So that’s not panpsychism.
Do you think that we are essentially the tools,
the senses the universe created to be conscious of itself?
It’s an interesting idea.
Of course, if you went for the giant cosmic mind view,
then the universe was conscious all along.
It didn’t need us.
We’re just little components of the universal consciousness.
Likewise, if you believe in panpsychism,
then there was some little degree of consciousness
at the bottom level all along.
And we were just a more complex form of consciousness.
So I think maybe the quote you mentioned works better.
If you’re not a panpsychist and you’re not a cosmopsychist,
you think consciousness just exists
at this intermediate level.
And of course, that’s the Orthodox view.
That you would say is the common view?
So is your own view with panpsychism a rare view?
I think it’s generally regarded certainly
as a speculative view held by a fairly small minority
of at least theorists, most philosophers
and most scientists who think about consciousness
are not panpsychists.
There’s been a bit of a movement in that direction
the last 10 years or so.
It seems to be quite popular,
especially among the younger generation,
but it’s still very definitely a minority view.
Many people think it’s totally batshit crazy,
to use the technical term.
But the philosophical term.
So the Orthodox view, I think is still consciousness
is something that humans have
and some good number of nonhuman animals have,
and maybe AIs might have one day, but it’s restricted.
On that view, then there was no consciousness
at the start of the universe.
There may be none at the end,
but it is this thing which happened at some point
in the history of the universe, consciousness developed.
And yes, that’s a very amazing event on this view
because many people are inclined to think consciousness
is what somehow gives meaning to our lives.
Without consciousness, there’d be no meaning,
no true value, no good versus bad and so on.
So with the advent of consciousness,
suddenly the universe went from meaningless
to somehow meaningful.
Why did this happen?
I guess the quote you mentioned suggests
this was somehow destined to happen,
because the universe needed to have consciousness
within it to have value and have meaning.
And maybe you could combine that with a theistic view
or a teleological view.
The universe was inexorably evolving towards consciousness.
Actually, my colleague here at NYU, Tom Nagel,
wrote a book called Mind and Cosmos a few years ago
where he argued for this teleological view
of evolution toward consciousness,
saying this raised problems for Darwinism.
This is very, very controversial.
Most people didn’t agree.
I don’t myself agree with this teleological view,
but it is at least a beautiful speculative view
of the cosmos.
What do you think people experience?
What do they seek when they believe in God
from this kind of perspective?
I’m not an expert on thinking about God and religion.
I’m not myself religious at all.
When people pray, communicate with God,
in whatever form,
I’m not speaking to the practices
and the rituals of religion.
I mean the actual experience, where people
really have a deep connection with God in some cases.
What do you think that experience is?
It’s so common, at least throughout the history
of civilization, that it seems like we seek that.
At the very least, it is an interesting
conscious experience that people have
when they experience religious awe or prayer and so on.
Neuroscientists have tried to examine
what bits of the brain are active and so on.
But yeah, there’s this deeper question
of what are people looking for when they’re doing this?
And like I said, I’ve got no real expertise on this,
but it does seem that one thing people are after
is a sense of meaning and value,
a sense of connection to something greater than themselves
that will give their lives meaning and value.
And maybe the thought is if there is a God,
then God somehow is a universal consciousness
who has invested this universe with meaning
and somehow connection to God might give your life meaning.
I guess I can kind of see the attractions of that,
but it still makes me wonder why is it exactly
that a universal consciousness, God,
would be needed to give the world meaning?
If universal consciousness can give the world meaning,
why can’t local consciousness give the world meaning too?
So I think my consciousness gives my world meaning.
It’s the origin of meaning for your world.
Yeah, I experience things as good or bad,
happy, sad, interesting, important.
So my consciousness invests this world with meaning.
Without any consciousness,
maybe it would be a bleak, meaningless universe.
But I don’t see why I need someone else’s consciousness
or even God’s consciousness to give this universe meaning.
Here we are, local creatures
with our own subjective experiences.
I think we can give the universe meaning ourselves.
I mean, maybe to some people that feels inadequate.
Our own local consciousness is somehow too puny
and insignificant to invest any of this
with cosmic significance.
And maybe God gives you a sense of cosmic significance,
but I’m just speculating here.
So it’s a really interesting idea
that consciousness is the thing that makes life meaningful.
If you could maybe just briefly explore that for a second.
So I suspect just from listening to you now,
you mean it in an almost trivial sense:
just the day-to-day experiences of life,
because you attach identity to them,
become meaningful. I guess I wanna ask something
I’ve always wanted to ask
a legit world-renowned philosopher.
What is the meaning of life?
So I suspect you don’t mean consciousness gives
any kind of greater meaning to it all.
And more to day to day.
But is there a greater meaning to it all?
I think life has meaning for us because we are conscious.
So without consciousness, no meaning,
consciousness invests our life with meaning.
So consciousness is the source of the meaning of life,
but I wouldn’t say consciousness itself
is the meaning of life.
I’d say what’s meaningful in life
is basically what we find meaningful,
what we experience as meaningful.
So if you find meaning and fulfillment and value
in say, intellectual work, like understanding,
then that’s a very significant part
of the meaning of life for you.
If you find that in social connections
or in raising a family,
then that’s the meaning of life for you.
The meaning kind of comes from what you value
as a conscious creature.
So I think there’s no, on this view,
there’s no universal solution.
No universal answer to the question,
what is the meaning of life?
The meaning of life is where you find it
as a conscious creature,
but it’s consciousness that somehow makes value possible.
Experiencing some things as good or as bad
or as meaningful,
that comes from within consciousness.
So you think consciousness is a crucial component,
ingredient of assigning value to things?
I mean, it’s kind of a fairly strong intuition
that without consciousness,
there wouldn’t really be any value
if we just had a universe of purely unconscious creatures.
Would anything be better or worse than anything else?
Certainly when it comes to ethical dilemmas,
you know about the old trolley problem.
Do you kill one person
or do you switch to the other track to kill five?
Well, I’ve got a variant on this,
the zombie trolley problem,
where there’s a one conscious being on one track
and five humanoid zombies.
Let’s make them robots who are not conscious
on the other track.
Do you, given that choice,
do you kill the one conscious being
or the five unconscious robots?
Most people have a fairly clear intuition here.
Kill the unconscious beings
because they basically, they don’t have a meaningful life.
They’re not really persons, conscious beings at all.
We don’t have good intuition
about something like an unconscious being.
So in philosophical terms, that’s what you refer to as a zombie.
It’s a useful thought experiment construction
in philosophical terms, but we don’t yet have them.
So that’s kind of what we may be able to create with robots.
And I don’t necessarily know what that even means.
Yeah, they’re merely hypothetical.
For now, they’re just a thought experiment.
They may never be possible.
I mean, the extreme case of a zombie
is a being which is physically, functionally,
behaviorally identical to me, but not conscious.
That’s a mere hypothetical;
I don’t think that could ever be built in this universe.
The question is just, does that hypothetically make sense?
That’s kind of a useful contrast class
to raise questions like, why aren’t we zombies?
How does it come about that we’re conscious?
And we’re not like that.
But there are less extreme versions of this like robots,
which are maybe not physically identical to us,
maybe not even functionally identical to us.
Maybe they’ve got a different architecture,
but they can do a lot of sophisticated things,
maybe carry on a conversation, but they’re not conscious.
And that’s not so far out.
We’ve got simple computer systems,
at least tending in that direction now.
And presumably this is gonna get more and more sophisticated
over the years to come.
It’s at least quite straightforward to conceive
of some pretty sophisticated robot systems
that can use language and be fairly high functioning
without consciousness at all.
So let’s stipulate that.
I mean, of course, there’s this tricky question
of how you would know whether they’re conscious.
But let’s say we’ve somehow solved that.
And we know that these high functioning robots
aren’t conscious.
Then the question is, do they have moral status?
Does it matter how we treat them?
What does moral status mean, sir?
Basically it’s that question.
Can they suffer?
Does it matter how we treat them?
For example, if I mistreat this glass, this cup
by shattering it, then that’s bad.
Why is it bad though?
It’s gonna make a mess.
It’s gonna be annoying for me and my partner.
And so it’s not bad for the cup.
No one would say the cup itself has moral status.
Hey, you hurt the cup and that’s doing it a moral harm.
Likewise, plants, well, again, if they’re not conscious,
most people think by uprooting a plant,
you’re not harming it.
But if a being is conscious on the other hand,
then you are harming it.
So Siri, or I dare not say the name of Alexa.
Anyway, so we don’t think we’re morally harming Alexa
by turning her off or disconnecting her
or even destroying her, whether it’s the system
or the underlying software system,
because we don’t really think she’s conscious.
On the other hand, you move to like the disembodied being
in the movie, her, Samantha,
I guess she was kind of presented as conscious.
And then if you destroyed her,
you’d certainly be committing a serious harm.
So I think our strong sense is if a being is conscious
and can undergo subjective experiences,
then it matters morally how we treat them.
So if a robot is conscious, it matters,
but if a robot is not conscious,
then they’re basically just meat or a machine
and it doesn’t matter.
Now, maybe how we think about this stuff
is fundamentally wrong,
but I think a lot of people
who think about this stuff seriously,
including people who think about,
say the moral treatment of animals and so on,
come to the view that consciousness
is ultimately kind of the line between systems
that where we have to take them into account
and thinking morally about how we act
and systems for which we don’t.
And I think I’ve seen you write or talk about
the demonstration of consciousness from a system like that,
from a system like Alexa or a conversational agent
that what you would be looking for
is kind of at the very basic level
for the system to have an awareness
that I’m just a program
and yet, why do I experience this?
Or not to have that experience,
but to communicate that to you.
So that’s what we humans would sound like.
If you all of a sudden woke up one day,
like Kafka, right, in a body of a bug or something,
but in a computer, you all of a sudden realized
you don’t have a body
and yet you were feeling what you were feeling,
you would probably say those kinds of things.
So do you think a system essentially becomes conscious
by convincing us that it’s conscious
through the words that I just mentioned?
So by being confused about
why it is having these experiences?
So basically...
I don’t think this is what makes you conscious,
but I do think being puzzled about consciousness
is a very good sign that a system is conscious.
So if I encountered a robot
that actually seemed to be genuinely puzzled
by its own mental states
and saying, yeah, I have all these weird experiences
and I don’t see how to explain them.
I know I’m just a set of silicon circuits,
but I don’t see how that would give rise to my consciousness.
I would at least take that as some evidence
that there’s some consciousness going on there.
I don’t think a system needs to be puzzled
about consciousness to be conscious.
Many people aren’t puzzled by their consciousness.
Animals don’t seem to be puzzled at all.
I still think they’re conscious.
So I don’t think that’s a requirement on consciousness,
but I do think if we’re looking for signs
for consciousness, say in AI systems,
one of the things that will help convince me
that an AI system is conscious is if it shows signs
of introspectively recognizing something
like consciousness and finding this philosophically puzzling
in the way that we do.
It’s such an interesting thought, though,
because a lot of people sort of would,
at the shallow level, criticize the Turing test for language.
I’ve essentially heard Dan Dennett
criticize it in this kind of way,
which is that it really puts a lot of emphasis on lying.
Yeah, and being able to imitate
human beings, yeah, there’s this cartoon
of the AI system studying for the Turing test.
It’s gotta read this book called Talk Like a Human.
It’s like, man, why do I have to waste my time
learning how to imitate humans?
Maybe the AI system is gonna be way beyond
the hard problem of consciousness,
and it’s gonna be just like,
why do I need to waste my time pretending
that I recognize the hard problem of consciousness
in order for people to recognize me as conscious?
Yeah, it just feels like, I guess the question is,
do you think we can ever really create
a test for consciousness?
Because it feels like we’re very human-centric,
and so the only way we would be convinced
that something is conscious is basically
if the thing demonstrates the illusion of consciousness,
that we can never really know whether it’s conscious or not,
and in fact, that almost feels like it doesn’t matter then,
or does it still matter to you that something is conscious
or it demonstrates consciousness?
You still see that fundamental distinction.
I think to a lot of people,
whether a system is conscious or not
matters hugely for many things,
like how we treat it, can it suffer, and so on,
but still, that leaves open the question,
how can we ever know?
And it’s true that it’s awfully hard
to see how we can know for sure
whether a system is conscious.
I suspect that sociologically,
the thing that’s gonna convince us
that a system is conscious is, in part,
things like social interaction, conversation, and so on,
where they seem to be conscious,
they talk about their conscious states
or just talk about being happy or sad
or finding things meaningful or being in pain.
That will tend to convince us.
If a system genuinely seems to be conscious
and we don’t treat it as such,
eventually it’s gonna seem like a strange form
of racism or speciesism, somehow,
not to acknowledge it as conscious.
I truly believe that, by the way.
I believe that there is going to be
something akin to the Civil Rights Movement,
but for robots.
I think the moment you have a Roomba say,
please don’t kick me, that hurts, just say it.
Yeah.
I think that will fundamentally change
the fabric of our society.
I think you’re probably right,
although it’s gonna be very tricky
because, just say we’ve got the technology
where these conscious beings can just be created
and multiplied by the thousands by flicking a switch.
The legal status is gonna be different,
but ultimately their moral status ought to be the same,
and yeah, the civil rights issue is gonna be a huge mess.
So if one day somebody clones you,
another very real possibility.
In fact, I find the conversation between
two copies of David Chalmers quite interesting.
Very thought-provoking.
Who is this idiot?
He’s not making any sense.
So what, do you think he would be conscious?
I do think he would be conscious.
I do think in some sense,
I’m not sure it would be me,
there would be two different beings at this point.
I think they’d both be conscious
and they both have many of the same mental properties.
I think they both in a way have the same moral status.
It’d be wrong to hurt either of them
or to kill them and so on.
Still, there’s some sense in which probably
their legal status would have to be different.
If I’m the original and that one’s just a clone,
then in creating a clone of me,
presumably the clone doesn’t, for example,
automatically own the stuff that I own.
Or take the people I interact with,
my family, my partner and so on:
I’ve got a certain connection to them,
I’m gonna somehow be connected to them
in a way in which the clone isn’t.
Because you came slightly first?
Yeah.
Because a clone would argue that they have
really as much of a connection.
They have all the memories of that connection.
Then in a way you might say it’s kind of unfair
to discriminate against them,
but say you’ve got an apartment
that only one person can live in
or a partner who only one person can be with.
But why should it be you, the original?
It’s an interesting philosophical question,
but you might say because I actually have this history,
if I am the same person as the one that came before
and the clone is not,
then I have this history that the clone doesn’t.
Of course, there’s also the question,
isn’t the clone the same person too?
This is a question about personal identity.
If I continue and I create a clone over there,
I wanna say this one is me and this one is someone else.
But you could take the view that a clone is equally me.
Of course, in a movie like Star Trek
where the teletransporter
basically creates clones all the time.
They treat the clones as if they’re the original person.
Of course, they destroy the original body in Star Trek.
So there’s only one left around
and only very occasionally do things go wrong
and you get two copies of Captain Kirk.
But somehow our legal system at the very least
is gonna have to sort out some of these issues
and maybe what’s moral
and what’s legally acceptable are gonna come apart.
What question would you ask a clone of yourself?
Is there something useful you can find out from him
about the fundamentals of consciousness even?
I mean, kind of in principle,
I know that if it’s a perfect clone,
it’s gonna behave just like me.
So I’m not sure what I’m gonna be able to find out.
I can discover whether it’s a perfect clone
by seeing whether it answers like me.
But otherwise I know what I’m gonna find is a being
which is just like me,
except that it’s just undergone this great shock
of discovering that it’s a clone.
So just say you woke me up tomorrow and said,
hey Dave, sorry to tell you this,
but you’re actually the clone
and you provided me really convincing evidence,
showed me the film of my being cloned
and then being brought in here and waking up.
So you proved to me I’m a clone,
well, yeah, okay, I would find that shocking
and who knows how I would react to this.
So maybe by talking to the clone,
I’d find something about my own psychology
that I can’t find out so easily,
like how I’d react upon discovering that I’m a clone.
I could certainly ask the clone if it’s conscious
and what his consciousness is like and so on,
but I guess I kind of know if it’s a perfect clone,
it’s gonna behave roughly like me.
Of course, at the beginning,
there’ll be a question
about whether a perfect clone is possible.
So I may wanna ask it lots of questions
to see if its consciousness,
the way it talks about its consciousness,
and the way it reacts to things in general is like mine.
And that will occupy us for a while.
So basic unit testing on the early models.
So if it’s a perfect clone,
you say that it’s gonna behave exactly like you.
So that takes us to free will.
Is there free will?
Are we able to make decisions that are not predetermined
from the initial conditions of the universe?
You know, philosophers do this annoying thing
of saying it depends what you mean.
So in this case, yeah, it really depends on what you mean
by free will.
If you mean something which was not determined in advance,
could never have been determined,
then I don’t know that we have free will.
I mean, there’s quantum mechanics
and who’s to say if that opens up some room,
but I’m not sure we have free will in that sense.
But I’m also not sure that’s the kind of free will
that really matters.
You know, what matters to us
is being able to do what we want
and to create our own futures.
We’ve got this distinction between having our lives
be under our control and under someone else’s control.
We’ve got the sense of actions that we are responsible for
versus ones that we’re not.
I think you can make those distinctions
even in a deterministic universe.
And this is what people call the compatibilist view
of free will, where it’s compatible with determinism.
So I think for many purposes,
the kind of free will that matters
is something we can have in a deterministic universe.
And I can’t see any reason in principle
why an AI system couldn’t have free will of that kind.
If you mean super duper free will,
the ability to violate the laws of physics
and do things that in principle could not be predicted,
then I don’t know, maybe no one has that kind of free will.
What’s the connection between the reality of free will
and the experience of it,
the subjective experience in your view?
So how does consciousness connect
to the reality and the experience of free will?
It’s certainly true that when we make decisions
and when we choose and so on,
we feel like we have an open future.
Feel like I could do this, I could go into philosophy
or I could go into math, I could go to a movie tonight,
I could go to a restaurant.
So we experience these things as if the future is open.
And maybe we experience ourselves
as exerting a kind of effect on the future
somehow picking out one path
from many paths that were previously open.
And you might think that actually
if we’re in a deterministic universe,
there’s a sense in which objectively
those paths weren’t really open all along,
but subjectively they were open.
And I think that’s what really matters
in making decisions: our experience
of making a decision is choosing a path for ourselves.
I mean, in general, our introspective models of the mind,
I think are generally very distorted representations
of the mind.
So it may well be that our experience of ourself
in making a decision, our experience of what’s going on
doesn’t terribly well mirror what’s going on.
I mean, maybe there are antecedents in the brain
way before anything came into consciousness
and so on.
Those aren’t represented in our introspective model.
So in general, our experience of perception,
so I experience a perceptual image of the external world.
It’s not a terribly good model of what’s actually going on
in my visual cortex and so on,
which has all these layers and so on.
It’s just one little snapshot of one bit of that.
So in general, introspective models
are very oversimplified.
And it wouldn’t be surprising
if that was true of free will as well.
This also incidentally can be applied to consciousness itself.
There is this very interesting view
that consciousness itself is an introspective illusion.
In fact, we’re not conscious,
but the brain just has these introspective models of itself
or oversimplifies everything and represents itself
as having these special properties of consciousness.
It’s a really simple way to kind of keep track of itself
and so on.
And then on the illusionist view,
yeah, that’s just an illusion.
While I find this view implausible,
I do find it very attractive in some ways,
because it’s easy to tell some story
about how the brain would create introspective models
of its own consciousness, of its own free will
as a way of simplifying itself.
I mean, in a similar way, when we perceive
the external world, we perceive it as having these colors
that maybe it doesn’t really have,
but of course that’s a really useful way
of keeping track of it.
Did you say that you find it not very plausible?
Because I find it both plausible
and attractive in some sense,
because I mean, that kind of view
is one that has the minimum amount of mystery around it.
You can kind of understand that kind of view.
With everything else, we don’t understand
so much of the picture.
No, it is very attractive. I recently wrote an article
about this kind of issue called
the meta problem of consciousness.
The hard problem is how does a brain
give you consciousness?
The meta problem is why are we puzzled
by the hard problem of consciousness?
Because being puzzled by it,
that’s ultimately a bit of behavior.
We might be able to explain that bit of behavior
as one of the easy problems of consciousness.
So maybe there’ll be some computational model
that explains why we’re puzzled by consciousness.
The meta problem is to come up with that model.
And I’ve been thinking about that a lot lately.
There’s some interesting stories you can tell
about why the right kind of computational system
might develop these introspective models of itself
that attribute to itself these special properties.
So that meta problem is a research program for everyone.
And then if you’ve got attraction
to sort of simple views, desert landscapes and so on,
then you can go all the way
with what people call illusionism
and say, in fact, consciousness itself is not real.
What is real is just these introspective models
we have that tell us that we’re conscious.
So the view is very simple, very attractive, very powerful.
The trouble is, of course, it has to say
that deep down, consciousness is not real.
We’re not actually experiencing right now.
And it looks like it’s just contradicting
a fundamental datum of our existence.
And this is why most people find this view crazy.
Just as they find panpsychism crazy in one way,
people find illusionism crazy in another way.
But I mean, so yes, it has to deny
this fundamental datum of our existence.
Now, that makes the view sort of frankly unbelievable
for most people.
On the other hand, the view developed right
might be able to explain why we find it unbelievable.
Because these models are so deeply hardwired into our head.
And they’re all integrated.
You can’t escape the illusion.
And it’s a crazy possibility.
Is it possible that the entirety of the universe,
our planet, all the people in New York,
all the organisms on our planet,
including me here today, are not real in that sense?
They’re all part of an illusion inside of Dave Chalmers’s head.
I think all this could be a simulation.
No, but not just a simulation.
Because the simulation kind of is outside of you.
A dream?
What if it’s all an illusion?
Yes, a dream that you’re experiencing.
That’s, it’s all in your mind, right?
Is that, can you take illusionism that far?
Well, there’s illusionism about the external world
and illusionism about consciousness.
And these might go in different directions.
Illusionism about the external world
kind of takes you back to Descartes.
And yeah, could all this be produced by an evil demon?
Descartes himself also had the dream argument.
He said, how do you know you’re not dreaming right now?
How do you know this is not an amazing dream?
And it’s at least a possibility that yeah,
this could be some super duper complex dream
in the next universe up.
I guess, though, my attitude is that just as
Descartes thought that if the evil demon was doing it,
it’s not real,
a lot of people these days say if a simulation is doing it,
it’s not real.
As I was saying before, I think even if it’s a simulation,
that doesn’t stop this from being real.
It just tells us what the world is made of.
Likewise, if it’s a dream,
it could turn out that all this is like my dream
created by my brain in the next universe up.
My own view is that wouldn’t stop this physical world
from being real.
It would turn out this cup at the most fundamental level
was made of a bit of say my consciousness
in the dreaming mind at the next level up.
Maybe that would give you a weird kind of panpsychism
about reality, but it wouldn’t show that the cup isn’t real.
It would just tell us it’s ultimately made of processes
in my dreaming mind.
So I’d resist the idea that if the physical world is a dream,
then it’s an illusion.
That’s right.
By the way, perhaps you have an interesting thought
about it.
Why is Descartes’s demon or genius considered evil?
Why couldn’t it have been a benevolent one
that had the same powers?
Yeah, I mean, Descartes called it the malign genie,
the evil genie or evil genius.
Malign, I guess was the word.
But yeah, it’s an interesting question.
I mean, a later philosopher, Berkeley, said,
no, in fact, all this is done by God.
God actually supplies you all of these perceptions
and ideas, and that’s how physical reality is sustained.
And interestingly, Berkeley’s God is doing something
that doesn’t look so different
from what Descartes’s evil demon was doing.
It’s just that Descartes thought it was deception
and Berkeley thought it was not.
And I’m actually more sympathetic to Berkeley here.
Yeah, this evil demon may be trying to deceive you,
but I think, okay, well, the evil demon
may just be working under a false philosophical theory.
It thinks it’s deceiving you, it’s wrong.
It’s like the machines in The Matrix.
They thought they were deceiving you
into thinking all this stuff is real.
I think, no, if we’re in a matrix, it’s all still real.
Yeah, the philosopher O.K. Bouwsma had a nice story
about this about 50 years ago, about Descartes’s evil demon,
where he said this demon spends all its time
trying to fool people, but fails
because somehow all the demon ends up doing
is constructing realities for people.
So yeah, I think that maybe it’s very natural
to take this view that if we’re in a simulation
or evil demon scenario or something,
then none of this is real.
But I think it may be ultimately a philosophical mistake,
especially if you take on board sort of the view of reality
where what matters to reality is really its structure,
something like its mathematical structure and so on,
which seems to be the view that a lot of people take
from contemporary physics.
And it looks like you can find
all that mathematical structure in a simulation,
maybe even in a dream and so on.
So as long as that structure is real,
I would say that’s enough for the physical world to be real.
Yeah, the physical world may turn out
to be somewhat more intangible than we had thought
and have a surprising nature.
We’ve already gotten very used to that from modern science.
See, you’ve kind of alluded to the idea
that you don’t have to have consciousness
for high levels of intelligence,
but to create truly general intelligence systems,
AGI systems at human level intelligence
and perhaps superhuman level intelligence,
you’ve talked about how you feel like
that kind of thing might be very far away,
but nevertheless, when we reach that point,
do you think consciousness
from an engineering perspective is needed
or at least highly beneficial for creating an AGI system?
Yeah, no one knows what consciousness is for functionally.
So right now there’s no specific thing we can point to
and say, you need consciousness for that.
So my inclination is to believe
that in principle AGI is possible.
At the very least, I don’t see why
someone couldn’t simulate a brain,
ultimately have a computational system
that produces all of our behavior.
And if that’s possible,
I’m sure vastly many other computational systems
of equal or greater sophistication are possible
with all of our cognitive functions and more.
My inclination is to think that
once you’ve got all these cognitive functions,
perception, attention, reasoning,
introspection, language, emotion, and so on,
it’s very likely you’ll have consciousness as well.
So at least it’s very hard for me to see
how you’d have a system that had all those things
while somehow bypassing consciousness.
So it’s integrated quite naturally.
There’s a lot of overlap in the kind of function
required to achieve each of those things,
so you can’t disentangle them
even when you’re recreating them.
It seems that way, at least in us,
but we don’t know what the causal role of consciousness
in the physical world is, what it does.
I mean, just say it turns out
consciousness does something very specific
in the physical world like collapsing wave functions
as on one common interpretation of quantum mechanics.
Then ultimately we might find some place
where it actually makes a difference
and we could say, ah,
here is where in collapsing wave functions
it’s driving the behavior of a system.
And maybe it could even turn out that for AGI,
you’d need something playing that role.
I mean, if you wanted to connect this to free will,
some people think consciousness collapsing wave functions,
that would be how the conscious mind exerts effect
on the physical world and exerts its free will.
And maybe it could turn out that any AGI
that didn’t utilize that mechanism would be limited
in the kinds of functionality that it had.
I don’t myself find that plausible.
I think probably that functionality could be simulated.
But you can imagine once we had a very specific idea
about the role of consciousness in the physical world,
this would have some impact on the capacity of AGI’s.
And if it was a role that could not be duplicated elsewhere,
then we’d have to find some way to either
get consciousness in the system to play that role
or to simulate it.
If we can isolate a particular role for consciousness,
of course. It seems like an incredibly difficult thing to do.
Do you have worries about existential threats
of conscious intelligent beings that are not us?
So certainly, I’m sure you’re worried about us
from an existential threat perspective,
but outside of us, AI systems.
There’s a couple of different kinds
of existential threats here.
One is an existential threat to consciousness generally.
I mean, yes, I care about humans
and the survival of humans and so on,
but just say it turns out that eventually we’re replaced
by some artificial beings that aren’t humans,
but are somehow our successors.
They still have good lives.
They still do interesting and wonderful things
with the universe.
I don’t think that’s so bad.
That’s just our successors.
We were one stage in evolution.
Something different, maybe better came next.
If on the other hand, all of consciousness was wiped out,
that would be a very serious moral disaster.
One way that could happen is by all intelligent life
being wiped out.
And many people think that, yeah,
once you get to humans and AIs and amazing sophistication
where everyone has got the ability to create weapons
that can destroy the whole universe just by pressing a button,
then maybe it’s inevitable all intelligent life will die out.
That would certainly be a disaster.
And we’ve got to think very hard about how to avoid that.
But yeah, another interesting kind of disaster
is that maybe intelligent life is not wiped out,
but all consciousness is wiped out.
So just say, unlike what I was saying a moment ago,
unlike what I was saying a moment ago,
that there are two different kinds of intelligent systems,
some which are conscious and some which are not.
And just say it turns out that we create AGI
with a high degree of intelligence,
meaning high degree of sophistication and its behavior,
but with no consciousness at all.
That AGI could take over the world maybe,
but then there’d be no consciousness in this world.
This would be a world of zombies.
Some people have called this the zombie apocalypse
because it’s an apocalypse for consciousness.
Consciousness is gone.
You’ve merely got these super intelligent,
nonconscious robots.
And I would say that’s a moral disaster in the same way,
in almost the same way that the world
with no intelligent life is a moral disaster.
All value and meaning may be gone from that world.
So these are both threats to watch out for.
Now, my own view is if you get super intelligence,
you’re almost certainly gonna bring consciousness with it.
So I hope that’s not gonna happen.
But of course, I don’t understand consciousness.
No one understands consciousness.
This is one reason, at least among many,
for thinking very seriously about consciousness
and thinking about the kind of future
we want to create in a world with humans and or AIs.
How do you feel about the possibility
if consciousness so naturally does come with AGI systems
that we are just a step in the evolution?
That we will be just something, a blip on the record
that’ll be studied in books
by the AGI systems centuries from now?
I mean, I think I’d probably be okay with that,
especially if somehow humans are continuous with AGI.
I mean, I think something like this is inevitable.
The very least humans are gonna be transformed.
We’re gonna be augmented by technology.
It’s already happening in all kinds of ways.
We’re gonna be transformed by technology
where our brains are gonna be uploaded
and computationally enhanced.
And eventually that line between what’s a human
and what’s an AI may be kind of hard to draw.
How much does it matter, for example,
that some future being a thousand years from now
that somehow descended from us actually still has biology?
I think it would be nice if you could kind of point
to its cognitive system, point to some parts
that had some roots in us, and trace a continuous line there.
That would be selfishly nice for me to think that,
okay, I’m connected to this thread line
through the future of the world,
but if it turns out, okay, there’s a jump there.
They found a better way to design cognitive systems.
They designed a wholly new kind of thing.
And the only line is some causal chain of design,
systems that design better systems.
Is that so much worse?
I don’t know.
We’re still at least part of a causal chain of design.
And yes, they’re not humans,
but still they’re our successors.
So, I mean, ultimately I think it’s probably inevitable
that something like that will happen.
And at least we were part of the process.
It’d be nice if they still cared enough about us
to maybe to engage with our arguments.
I’m really hoping that the AGIs are gonna solve
all the problems of philosophy.
They’ll come back and read all this crappy work
from the 20th and 21st century
on the hard problem of consciousness
and say, here is why they got it wrong.
And if that happened,
then I’d really feel like I was part of
at least an intellectual process over centuries.
And that would be kind of cool.
Well, I’m pretty sure they would clone
or they would recreate David Chalmers
and for the fun of it,
sort of bring back other philosophers.
Yeah, bring back Descartes.
Descartes and just put them in a room and just watch.
It’ll be a Netflix-of-the-future show
where you bring philosophers,
100% human philosophers from previous generations,
put them in a room and just watch them.
I am totally up for that.
Simulators, AGI’s of the future,
if you’re watching this podcast, do that.
I would like to be recreated and hang out with Descartes.
So Descartes would be the first?
If you could hang out, as part of such a TV show,
with a philosopher that’s no longer with us from long ago,
who would you choose?
Descartes would have to be right up there.
Oh, actually a couple of months ago,
I got to have a conversation with Descartes.
An actor who’s actually a philosopher
came out on stage playing Descartes.
I didn’t know this was gonna happen.
It was just after I gave a talk,
and he told me about how my ideas were crap
and all derived from him.
And so we had a long argument.
This was great.
I would love to see what Descartes would think about AI,
for example, and the modern neuroscience.
And so I suspect not too much would surprise him,
but yeah, William James,
for a psychologist of consciousness,
I think James was probably the richest.
But, oh, there’s Immanuel Kant.
I never really understood what he was up to;
maybe I would if I got to actually talk to him about some of this.
Hey, there was Princess Elizabeth who talked with Descartes
and who really got at the problems
of how Descartes’s ideas of a nonphysical mind
interacting with the physical body couldn’t really work.
She’s been, most philosophers
think, proved right.
So maybe put me in a room with Descartes
and Princess Elizabeth and we can all argue it out.
What kind of future?
So we talked about zombies, a concerning future,
but what kind of future excites you?
What do you think, if we look forward?
We’re at the very early stages
of understanding consciousness.
And we’re now at the early stages
of being able to engineer complex, interesting systems
that have degrees of intelligence.
And maybe one day we’ll have degrees of consciousness,
maybe be able to upload brains,
all those possibilities, virtual reality.
Is there a particular aspect to this future world
that just excites you?
Well, I think there are lots of different aspects.
I mean, frankly, I want it to hurry up and happen.
It’s like, yeah, we’ve had some progress lately in AI and VR,
but in the grand scheme of things, it’s still kind of slow.
The changes are not yet transformative.
And I’m in my fifties, I’ve only got so long left.
I’d like to see really serious AI in my lifetime
and really serious virtual worlds.
Cause yeah,
I would like to be able to hang out in a virtual reality
which is richer than this reality,
to really get to inhabit fundamentally different kinds
of spaces.
Well, I would very much like to be able to upload
my mind onto a computer
so maybe I don’t have to die,
maybe gradually replace my neurons
with silicon chips and inhabit a computer.
Selfishly, that would be wonderful.
I suspect I’m not gonna quite get there in my lifetime,
but once that’s possible,
then you’ve got the possibility of transforming
your consciousness in remarkable ways,
augmenting it, enhancing it.
So let me ask then,
if such a system is a possibility within your lifetime
and you were given the opportunity to become immortal
in this kind of way, would you choose to be immortal?
Yes, I totally would.
I know some people say they couldn’t,
that it’d be awful to be immortal, it’d be so boring or something.
I really don’t see why that would be.
I mean, even if it’s just ordinary life that continues,
ordinary life is not so bad.
But furthermore, I kind of suspect that,
if the universe is gonna go on forever or indefinitely,
it’s gonna continue to be interesting.
I don’t take the view that we just have
this one romantic point of interest now
and afterwards it’s all gonna be boring,
super intelligent stasis.
I guess my vision is more like,
no, it’s gonna continue to be infinitely interesting.
Something like, as you go up the set-theoretic hierarchy,
you go from the finite cardinals to aleph-zero,
and then from there up through aleph-one and aleph-two
and maybe the continuum, and you keep taking power sets,
and in set theory they’ve got these results
that actually all this is fundamentally unpredictable.
It doesn’t follow any simple computational patterns.
There are new levels of creativity
as the set-theoretic universe expands and expands.
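(A brief editorial aside, not part of the conversation: the endless growth being described rests on Cantor’s theorem that every set is strictly smaller than its power set, and the “fundamentally unpredictable” results alluded to include independence results such as the continuum hypothesis. A minimal sketch in standard set-theoretic notation:)
\[
|X| < |\mathcal{P}(X)| \quad\Longrightarrow\quad \aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots
\]
\[
2^{\aleph_0} = \aleph_1 \ (\text{CH}) \ \text{is neither provable nor refutable in ZFC (Gödel 1940; Cohen 1963).}
\]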
I guess that’s my future.
That’s my vision of the future.
That’s my optimistic vision
of the future of super intelligence.
It will keep expanding and keep growing,
but still being fundamentally unpredictable at many points.
I mean, yes, this creates all kinds of worries
like couldn’t it all be fragile and be destroyed at any point.
So we’re gonna need a solution to that problem.
But if we get to stipulate that I’m immortal,
well, I hope that I’m not just immortal and stuck
in a single world forever,
but I’m immortal and get to take part in this process
of going through infinitely rich, created futures.
Rich, unpredictable, exciting.
Well, I think I speak for a lot of people in saying,
I hope you do become immortal and there’ll be
that Netflix show, The Future,
where you get to argue with Descartes,
perhaps for all eternity.
So David, it was an honor.
Thank you so much for talking today.
Thanks, it was a pleasure.
Thanks for listening to this conversation
and thank you to our presenting sponsor, Cash App.
Download it, use code LexPodcast,
you’ll get $10 and $10 will go to FIRST,
an organization that inspires and educates young minds
to become science and technology innovators of tomorrow.
If you enjoy this podcast, subscribe on YouTube,
give it five stars on Apple Podcast,
follow on Spotify, support it on Patreon,
or simply connect with me on Twitter at Lex Friedman.
And now let me leave you with some words
from David Chalmers.
Materialism is a beautiful and compelling view of the world,
but to account for consciousness,
we have to go beyond the resources it provides.
Thank you for listening and hope to see you next time.