Lex Fridman Podcast - #212 - Joscha Bach: Nature of Reality, Dreams, and Consciousness

The following is a conversation with Joscha Bach,

his second time on the podcast.

Joscha is one of the most fascinating minds in the world,

exploring the nature of intelligence,

cognition, computation, and consciousness.

To support this podcast, please check out our sponsors,

Coinbase, Codecademy, Linode, NetSuite, and ExpressVPN.

Their links are in the description.

This is the Lex Fridman podcast,

and here is my conversation with Joscha Bach.

Thank you for once again coming on

to this particular Russian program

and sticking to the theme of a Russian program.

Let’s start with the darkest of topics.

Kriviyat.

So this is inspired by one of your tweets.

You wrote that, quote,

when life feels unbearable,

I remind myself that I’m not a person.

I am a piece of software running on the brain

of a random ape for a few decades.

It’s not the worst brain to run on.

Have you experienced low points in your life?

Have you experienced depression?

Of course, we all experience low points in our life,

and we get appalled by the things,

by the ugliness of stuff around us.

We might get desperate about our lack of self regulation,

and sometimes life is hard,

and I suspect you don’t get through your life,

nobody does, without low points

and without moments where you’re despairing.

And I thought, let’s capture this state

and how to deal with that state.

And I found that very often you realize

that when you stop taking things personally,

when you realize that this notion of a person is a fiction,

similar to how it is in Westworld,

where the robots realize that their memories and desires

are the stuff that keeps them in the loop,

and they don’t have to act on those memories and desires,

that our memories and expectations are what make us unhappy,

and the present rarely does.

The day in which we are, for the most part, is okay, right?

When we are sitting here, right here, right now,

we can choose how we feel.

And the thing that affects us is the expectation

that something is going to be different

from what we want it to be,

or the memory that something was different

from what you wanted it to be.

And once we basically zoom out from all this,

what’s left is not a person.

What’s left is this state of being conscious,

which is a software state.

And software doesn’t have an identity.

It’s a physical law.

And it’s a law that acts in all of us,

and it’s embedded in a suitable substrate.

And we didn’t pick that substrate, right?

We are mostly randomly instantiated on it.

And there are all these individuals,

and everybody has to be one of them.

And eventually you’re stuck on one of them,

and have to deal with that.

So you’re like a leaf floating down the river.

You just have to accept that there’s a river,

and you just float wherever it takes you.

You don’t have to do this.

The thing is that the illusion that you are an agent

is a construct.

What part of that is actually under your control?

And I think that our consciousness

is largely a control model for our own attention.

So we notice where we are looking,

and we can influence what we’re looking at,

how we are disambiguating things,

how we put things together in our mind.

And the whole system that runs us

is this big cybernetic motivational system.

So we’re basically like a little monkey

sitting on top of an elephant,

and we can prod this elephant here and there

to go this way or that way.

And we might have the illusion that we are the elephant,

or that we are telling it what to do.

And sometimes we notice that it walks

into a completely different direction.

And we didn’t set this thing up.

It just is the situation that we find ourselves in.

How much prodding can we actually do of the elephant?

A lot.

But I think that our consciousness

cannot create the motive force.

Is the elephant consciousness in this metaphor?

No, the monkey is the consciousness.

The monkey is the attentional system

that is observing things.

There is a large perceptual system

combined with a motivational system

that is actually providing the interface to everything

and our own consciousness, I think,

is the tool that directs the attention

of that system, which means it singles out features

and performs conditional operations

for which it needs an index memory.

But this index memory is what we perceive

as our stream of consciousness.

But the consciousness is not in charge.

That’s an illusion.

So everything outside of that consciousness

is the elephant.

So it’s the physics of the universe,

but it’s also society that’s outside of your…

I would say the elephant is the agent.

So there is an environment through which the agent is stomping

and you are influencing a little part of that agent.

So is the agent a single human being?

Which object has agency?

That’s an interesting question.

I think a way to think about an agent

is that it’s a controller with a set point generator.

The notion of a controller comes from cybernetics

and control theory.

A control system consists of a system

that is regulating some value

and the deviation of that value from a set point.

And it has a sensor that measures the system’s deviation

from that set point and an effector

that can be parametrized by the controller.

So the controller tells the effector to do a certain thing.

And the goal is to reduce the distance

between the set point and the current value of the system.

And there’s an environment

which disturbs the regulated system,

which brings it away from that set point.

So simplest case is a thermostat.

The thermostat is really simple

because it doesn’t have a model.

The thermostat is only trying to minimize

the set point deviation in the next moment.

And if you want to minimize the set point deviation

over a longer time span, you need to integrate it.

You need to model what is going to happen.
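To make that distinction concrete, here is a minimal Python sketch (all constants and room dynamics are made up for illustration) contrasting a thermostat-style controller, which only reacts to the current deviation, with a controller that uses a model of what is going to happen:

```python
# Toy illustration (made-up constants and room dynamics): a reactive
# "thermostat" acts only on the current deviation from the set point,
# while a model-based controller simulates a few steps ahead first.

SET_POINT = 21.0  # desired room temperature

def reactive_control(temp):
    """Thermostat: minimize the set point deviation in the next moment only."""
    return 1.0 if temp < SET_POINT else 0.0  # heater fully on or off

def simulate(temp, action, outside=5.0, steps=10):
    """Invented model of how the room responds to heating and heat loss."""
    for _ in range(steps):
        temp += 0.5 * action - 0.1 * (temp - outside)
    return temp

def model_based_control(temp):
    """Pick the heater setting whose predicted future deviation is smallest."""
    return min((0.0, 0.5, 1.0),
               key=lambda a: abs(simulate(temp, a) - SET_POINT))

print(reactive_control(19.0), model_based_control(19.0))
```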

So for instance, when you think about

that your set point is to be comfortable in life,

maybe you need to make yourself uncomfortable first, right?

So you need to make a model of what’s going to happen when.

And the task of the controller is to use its sensors

to measure the state of the environment

and the system that is being regulated

and figure out what to do.

And if the task is complex enough,

the set points are complicated enough,

And if the controller has enough capacity

and enough sensor feedback,

then the task of the controller is to make a model

of the entire universe that it’s in,

the conditions under which it exists and of itself.

And this is a very complex agent.

And we are in that category.

And an agent is not necessarily a thing in the universe.

It’s a class of models that we use

to interpret aspects of the universe.

And when we notice the environment around us,

a lot of things only make sense

at the level at which we are entangled with them

if we interpret them as control systems

that make models of the world

and try to minimize deviations from their own set points.

So the models are the agents.

The agent is a class of model.

And we notice that we are an agent ourselves.

We are the agent that is using our own control model

to perform actions.

We notice we produce a change in the model

and things in the world change.

And this is how we discover the idea that we have a body,

that we are situated in an environment,

and that we have a first person perspective.

Still don’t understand what’s the best way to think

of which object has agency with respect to human beings.

Is it the body?

Is it the brain?

Is it the contents of the brain that has agency?

Like what’s the actuators that you’re referring to?

What is the controller and where does it reside?

Or is it these impossible things?

Because I keep trying to ground it to space time,

the three dimension of space and the one dimension of time.

What’s the agent in that for humans?

There is not just one.

It depends on the way in which you’re looking

at this thing in which you’re framing it.

Imagine that you are, say Angela Merkel,

and you are acting on behalf of Germany.

Then you could say that Germany is the agent.

And in the mind of Angela Merkel,

she is Germany to some extent,

because in the way in which she acts,

the destiny of Germany changes.

There are things that she can change

that basically affect the behavior of that nation state.

Okay, so it’s hierarchies of,

to go to another one of your tweets

where I think you were playfully mocking Jeff Hawkins

by saying it’s brains all the way down.

So it’s like, it’s agents all the way down.

It’s agents made up of agents, made up of agents.

Like if Angela Merkel’s Germany

and Germany’s made up of a bunch of people

and the people are themselves agents

in some kind of context.

And then people are made up of cells, each individual.

So is it agents all the way down?

I suspect that has to be like this

in a world where things are self organizing.

Most of the complexity that we are looking at,

everything in life is about self organization.

So I think up from the level of life, you have agents.

And below life, you rarely have agents

because sometimes you have control systems

that emerge randomly in nature

and try to achieve a set point,

but they’re not that interesting agents that make models.

And because to make an interesting model of the world,

you typically need a system that is Turing complete.

Can I ask you a personal question?

What’s the line between life and non life?

It’s personal because you’re a life form.

So what do you think in this emerging complexity,

at which point do things start being alive

and having agency?

Personally, I think that the simplest answer

is that life is cells, because…

Life is what?

Cells.

Biological cells.

So it’s a particular kind of principle

that we have discovered to exist in nature.

It’s modular stuff that consists

out of basically this DNA tape

with a read write head on top of it,

that is able to perform arbitrary computations

and state transitions within the cell.

And it’s combined with a membrane

that insulates the cell from its environment.

And there are chemical reactions inside of the cell

that are in disequilibrium.

And the cell is running in such a way

that this disequilibrium doesn’t disappear.

And if the cell goes into an equilibrium state, it dies.

And it requires something like a negentropy extractor

to maintain this disequilibrium.

So it’s able to harvest negentropy from its environment

and keep itself running.

Yeah, so there’s information and there’s a wall

to maintain this disequilibrium.

But isn’t this very earth centric?

Like what you’re referring to as a…

I’m not making a normative notion.

You could say that there are probably other things

in the universe that are cell like and life like,

and you could also call them life,

but eventually it’s just a willingness

to find an agreement of how to use the terms.

I like cells because it’s completely coextensional

with the way that we use the word

even before we knew about cells.

So people were pointing at some stuff

and saying, this is somehow animate.

And this is very different from the non animate stuff.

And what’s the difference between the living

and the dead stuff?

And it’s mostly whether the cells are working or not.

And also this boundary of life,

where we say that for instance, the virus

is basically an information packet

that is subverting the cell and not life by itself.

That makes sense to me.

And it’s somewhat arbitrary.

You could of course say that systems

that permanently maintain a disequilibrium

and can self replicate are always life.

And maybe that’s a useful definition too,

but this is eventually just how you want to use the word.

It’s useful for conversation,

but is it somehow fundamental to the universe?

Do you think there’s an actual line

to eventually be drawn between life and non life?

Or is it all a kind of continuum?

I don’t think it’s a continuum,

but there’s nothing magical that is happening.

Living systems are a certain type of machine.

What about non living systems?

Is it also a machine?

There are non living machines,

but the question is at which point is a system

able to perform arbitrary state transitions

to make representations.

And living things can do this.

And of course we can also build non living things

that can do this, but we don’t know anything in nature

that is not a cell and is not created by cellular life

that is able to do that.

Not only do we not know,

I don’t think we have the tools to see otherwise.

I always worry that we look at the world too narrowly.

Like there could be life of a very different kind

right under our noses that we’re just not seeing

because of either limitations

of our cognitive capacity,

or we’re just not open minded enough

either with the tools of science

or just the tools of our mind.

Yeah, that’s possible.

I find this thought very fascinating.

And I suspect that many of us ask ourselves since childhood,

what are the things that we are missing?

What kind of systems and interconnections exist

that are outside of our gaze?

But we are looking for it

and physics doesn’t have much room at the moment

for opening up something that would not violate

the conservation of information as we know it.

Yeah, but I wonder about time scale and scale,

spatial scale, whether we just need to open up our idea

of what, like how life presents itself.

It could be operating in a much slower time scale,

a much faster time scale.

And it’s almost sad to think that there’s all this life

around us that we’re not seeing

because we’re just not like thinking

in terms of the right scale, both time and space.

What is your definition of life?

What do you understand as life?

Entities of sufficiently high complexity

that are full of surprises.

I don’t know, I don’t have a free will.

So that just came out of my mouth.

I’m not sure that even makes sense.

There’s certain characteristics.

So complexity seems to be a necessary property of life.

And I almost want to say it has ability

to do something unexpected.

It seems to me that life is the main source

of complexity on earth.

Yes.

And complexity is basically a bridgehead

that order builds into chaos by modeling,

by processing information in such a way

that you can perform reactions

that would not be possible for dumb systems.

And this means that you can harvest negentropy

that dumb systems cannot harvest.

And this is what complexity is mostly about.

In some sense, the purpose of life is to create complexity.

Yeah.

Increasing.

I mean, there seems to be some kind of universal drive

towards increasing pockets of complexity.

I don’t know what that is.

That seems to be like a fundamental,

I don’t know if it’s a property of the universe

or it’s just a consequence of the way the universe works,

but there seems to be this small pockets

of emergent complexity that builds on top of each other

and starts having like greater and greater complexity

by having like a hierarchy of complexity.

Little organisms building up a little society

that then operates almost as an individual organism itself.

And all of a sudden you have Germany and Merkel.

Well, that’s not obvious to me.

Everything that goes up has to come down at some point.

So if you see this big exponential curve somewhere,

it’s usually the beginning of an S curve

where something eventually reaches saturation.

And the S curve is the beginning of some kind of bump

that goes down again.

And there is just this thing that when you are

inside of an evolution of life,

you are on top of a puddle of negentropy

that is being sucked dry by life.

And during that happening,

you see an increase in complexity

because life forms are competing with each other

to get at finer and finer corners

of that negentropy extraction.

I feel like that’s a gradual beautiful process

that almost follows a process akin to evolution.

And the way it comes down is not the same way it came up.

The way it comes down is usually harshly and quickly.

So usually there’s some kind of catastrophic event.

The Roman Empire took a long time.

But would that be,

would you classify this as a decrease in complexity though?

Yes.

I think that this size of the cities that could be fed

has decreased dramatically.

And you could see that the quality of the art decreased

and it did so gradually.

And maybe future generations,

when they look at the history of the United States

in the 21st century,

will also talk about the gradual decline,

not something that suddenly happens.

Do you have a sense of where we are?

Are we on the exponential rise?

Are we at the peak?

Or are we at the downslope of the United States empire?

It’s very hard to say from a single human perspective,

but it seems to me that we are probably at the peak.

I think that’s probably the definition of like optimism

and cynicism.

So the nature of my optimism is,

I think we’re on the rise.

I think this is just all a matter of perspective.

Nobody knows,

but I do think that erring on the side of optimism,

like you need a sufficient number,

you need a minimum number of optimists

in order to make that upswing actually work.

And so I tend to be on the side of the optimists.

I think that we are basically a species of grasshoppers

that have turned into locusts.

And when you are in that locust mode,

you see an amazing rise of population numbers

and of the complexity of the interactions

between the individuals.

But it’s ultimately the question is, is it sustainable?

See, I think we’re a bunch of lions and tigers

that have become domesticated cats,

to use a different metaphor.

So I’m not exactly sure we’re so destructive,

we’re just softer and nicer and lazier.

But I think we are monkeys and not cats.

And if you look at the monkeys, they are very busy.

The ones that have a lot of sex, those monkeys?

Not just the bonobos.

I think that all the monkeys are basically

a discontent species that always needs to meddle.

Well, the gorillas seem to have

a little bit more of a structure,

but it’s a different part of the tree.

Okay, you mentioned the elephant

and the monkey riding the elephant.

And consciousness is the monkey.

And there’s some prodding that the monkey gets to do.

And sometimes the elephant listens.

I heard you got into some contentious,

maybe you can correct me,

but I heard you got into some contentious

free will discussions.

Is this with Sam Harris or something like that?

Not that I know of.

Some people on Clubhouse told me

you made a bunch of big debate points about free will.

Well, let me just then ask you where,

in terms of the monkey and the elephant,

do you think we land in terms of the illusion of free will?

How much control does the monkey have?

We have to think about what the free will is

in the first place.

We are not the machine.

We are not the thing that is making the decisions.

We are a model of that decision making process.

And there is a difference between making your own decisions

and predicting your own decisions.

And that difference is the first person perspective.

And what basically makes decision making

and the conditions of free will distinct

from just automatically doing the best thing is

that we often don’t know what the best thing is.

We make decisions under uncertainty.

We make informed bets using a betting algorithm

that we don’t yet understand

because we haven’t reverse engineered

our own minds sufficiently.

We don’t know the expected rewards.

We don’t know the mechanism

by which we estimate the rewards and so on.

But there is an algorithm.

We observe ourselves performing

where we see that we weigh facts and factors

and the future, and then some kind of possibility,

some motive gets raised to an intention.

And that’s an informed bet that the system is making.

And that making of the informed bet,

the representation of that is what we call free will.

And it seems to be paradoxical

because we think that the crucial thing about it

is that it’s somehow indeterministic.

And yet if it was indeterministic, it would be random.

And it cannot be random because if it was random,

if just dice were being thrown in the universe,

randomly forcing you to do things, it would be meaningless.

So the important part of the decisions

is always the deterministic stuff.

But it appears to be indeterministic to you

because it’s unpredictable.

Because if it was predictable,

you wouldn’t experience it as a free will decision.

You would experience it as just doing

the necessary right thing.

And you see this continuum between the free will

and the execution of automatic behavior

when you’re observing other people.

So for instance, when you are observing your own children,

if you don’t understand them,

you will use this agent model

where you have an agent with a set point generator.

And the agent is doing the best it can

to minimize the difference to the set point.

And it might be confused and sometimes impulsive or whatever,

but it’s acting on its own free will.

And when you understand what happens

in the mind of the child, you see that it’s automatic.

And you can outmodel the child,

you can build things around the child

that will lead the child to making exactly the decision

that you are predicting.

And under these circumstances,

like when you are a stage magician

or somebody who is dealing with people

that you sell a car to,

and you completely understand the psychology

and the impulses and the space of thoughts

that this individual can have at that moment.

Under these circumstances,

it makes no sense to attribute free will.

Because it’s no longer decision making under uncertainty.

You are already certain.

For them, there’s uncertainty,

but you already know what they’re doing.

But what about for you?

So is this akin to like systems like cellular automata

where it’s deterministic,

but when you squint your eyes a little bit,

it starts to look like there’s agents making decisions

at the higher sort of when you zoom out

and look at the entities

that are composed by the individual cells.

Even though there’s underlying simple rules

that make the system evolve in deterministic ways,

it looks like there’s organisms making decisions.

Is that where the illusion of free will emerges,

that jump in scale?
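A minimal sketch of the cellular automaton idea raised here, using Conway's Game of Life as the standard example: the update rule below is completely deterministic and purely local, yet the seeded "glider" looks like a coherent entity moving across the grid when you zoom out:

```python
# Conway's Game of Life: completely deterministic, purely local rules,
# out of which higher-level "entities" (like the glider below) emerge.
from collections import Counter

def step(live):
    """One deterministic update of the set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after every four steps the same shape reappears, shifted by (1, 1),
# so at a larger scale it looks like a single moving organism.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))  # the glider pattern, translated diagonally
```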

It’s a particular type of model,

but this jump in scale is crucial.

The jump in scale happens whenever

you have too many parts to count

and you cannot make a model at that level

and you try to find some higher level regularity.

And the higher level regularity is a pattern

that you project into the world to make sense of it.

And agency is one of these patterns, right?

You have all these cells that interact with each other

and the cells in our body are set up in such a way

that they benefit if their behavior is coherent,

which means that they act

as if they were serving a common goal.

And that means that they will evolve regulation mechanisms

that act as if they were serving a common goal.

And now you can make sense of all these cells

by projecting the common goal into them.

Right, so for you then, free will is an illusion.

No, it’s a model and it’s a construct.

It’s basically a model that the system is making

of its own behavior.

And it’s the best model that it can come up with

under the circumstances.

And it can get replaced by a different model,

which is automatic behavior,

when you fully understand the mechanism

under which you are acting.

Yeah, but another word for model is what, story.

So it’s the story you’re telling.

I mean, do you actually have control?

Is there such a thing as a you

and is there such a thing as you having control?

So like, are you manifesting your evolution as an entity?

In some sense, the you is the model of the system

that is in control.

It’s a story that the system tells itself

about somebody who is in control.

Yeah.

And the contents of that model are being used

to inform the behavior of the system.

Okay.

So the system is completely mechanical

and the system creates that story like a loom.

And then it uses the contents of that story

to inform its actions

and writes the results of those actions into the story.

So how’s that not an illusion?

The story is written then,

or rather we’re not the writers of the story.

Yes, but we always knew that.

No, we don’t know that.

When did we know that?

I think that’s mostly a confusion about concepts.

The conceptual illusion in our culture

comes from the idea that we live in physical reality

and that we experience physical reality

and that you have ideas about it.

And then you have this dualist interpretation

where you have two substances, res extensa,

the world that you can touch

and that is made of extended things

and res cogitans, which is the world of ideas.

And in fact, both of them are mental representations.

One is the representations of the world as a game engine

that your mind generates to make sense of the perceptual data.

And the other one,

yes, that’s what we perceive as the physical world.

But we already know that the physical world

is nothing like that, right?

Quantum mechanics is very different

from what you and me perceive as the world.

The world that you and me perceive is a game engine.

And there are no colors and sounds in the physical world.

They only exist in the game engine generated by your brain.

And then you have ideas

that cannot be mapped onto extended regions, right?

So the objects that have a spatial extension

in the game engine, res extensa,

and the objects that don’t have a physical extension

in the game engine are ideas.

And they both interact in our mind

to produce models of the world.

Yep, but, you know, when you play video games,

I understand that what’s actually happening

is zeros and ones inside of a computer,

inside of a CPU and a GPU,

but you’re still seeing like the rendering of that.

And you’re still making decisions,

whether to shoot, to turn left or to turn right,

if you’re playing a shooter,

or every time I started thinking about Skyrim

and Elder Scrolls and walking around in beautiful nature

and swinging a sword.

But it feels like you’re making decisions

inside that video game.

So even though you don’t have direct access

in terms of perception to the bits,

to the zeros and ones,

it still feels like you’re making decisions

and your decisions actually feels

like they’re being applied all the way down

to the zeros and ones.

So it feels like you have control,

even though you don’t have direct access to reality.

So there is basically a special character

in the video game that is being created

by the video game engine.

And this character is serving the aesthetics

of the video game, and that is you.

Yes, but I feel like I have control inside the video game.

Like all those like 12 year olds

that kick my ass on the internet.

So when you play the video game,

it doesn’t really matter that there’s zeros and ones, right?

You don’t care about the bits of the past.

You don’t care about the nature of the CPU

that it runs on.

What you care about are the properties of the game

that you’re playing.

And you hope that the CPU is good enough.

Yes.

And a similar thing happens when we interact with physics.

The world that you and me are in is not the physical world.

The world that you and me are in is a dream world.

How close is it to the real world though?

We know that it’s not very close,

but we know that the dynamics of the dream world

match the dynamics of the physical world

to a certain degree of resolution.

But the causal structure of the dream world is different.

So you see for instance waves crashing on your feet, right?

But there are no waves in the ocean.

There’s only water molecules that have tangents

between the molecules that are the result of electrons

in the molecules interacting with each other.

Aren’t they like very consistent?

We’re just seeing a very crude approximation.

Isn’t our dream world very consistent,

like to the point of being mapped directly one to one

to the actual physical world

as opposed to us being completely tricked?

Is this like where you have like Donald?

It’s not a trick.

That’s my point.

It’s not an illusion.

It’s a form of data compression.

It’s an attempt to deal with the dynamics

of too many parts to count

at the level at which we are entangled

with the best model that you can find.

Yeah, so we can act in that dream world

and our actions have impact in the real world,

in the physical world to which we don’t have access.

Yes, but it’s basically like accepting the fact

that the software that we live in,

the dream that we live in is generated

by something outside of this world that you and me are in.

So is the software deterministic

and do we not have any control?

Do we have, so free will is having a conscious being.

Free will is the monkey being able to steer the elephant.

No, it’s slightly different.

Basically in the same way as you are modeling

the water molecules in the ocean that engulf your feet

when you are walking on the beach as waves

and there are no waves,

but only the atoms or more complicated stuff

underneath the atoms and so on.

And you know that, right?

You would accept, yes,

there is a certain abstraction that happens here.

It’s a simplification of what happens

and the simplification is designed

in such a way that your brain can deal with it,

temporally and spatially in terms of resources

and tuned for the predictive value.

So you can predict with some accuracy

whether your feet are going to get wet or not.

But it’s a really good interface and approximation.

Like, E equals mc squared is a good,

equations are good approximations,

they’re much better approximations.

So to me, waves is a really nice approximation

of what’s all the complexity that’s happening underneath.

Basically it’s a machine learning model

that is constantly tuned to minimize surprises.

So it basically tries to predict as well as it can

what you’re going to perceive next.
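A toy sketch of that "minimize surprise" idea (illustrative only, not a claim about how the brain implements it): a predictor is nudged after every observation so that its predictions become less surprising over time:

```python
# Toy "minimize surprise" loop (illustrative only): a running prediction is
# nudged toward each new observation, so the prediction error shrinks.
import random

def observe_stream(n=100):
    """Hypothetical sensory stream: noisy readings around a hidden value."""
    random.seed(0)
    return [10.0 + random.gauss(0, 1) for _ in range(n)]

prediction, rate = 0.0, 0.1
for obs in observe_stream():
    surprise = obs - prediction      # prediction error
    prediction += rate * surprise    # tune the model to reduce future surprise
print(round(prediction, 2))          # ends up near the hidden value of 10
```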

Are we talking about, which is the machine learning?

Our perception system or the dream world?

The dream world is the result

of the machine learning process of the perceptual system.

That’s doing the compression.

Yes.

And the model of you as an agent

is not a different type of model or it’s a different type,

but not different in its model-like nature

from the model of the ocean, right?

Some things are oceans, some things are agents.

And one of these agents is using your own control model,

the output of your model,

the things that you perceive yourself as doing.

And that is you.

What about the fact that when you’re standing

with the water on your feet and you’re looking out

into the vast open water of the ocean

and then there’s a beautiful sunset

and the fact that it’s beautiful

and then maybe you have friends or a loved one with you

and you feel love, what is that?

As the dream world or what is that?

Yes, it’s all happening inside of the dream.

Okay.

But see, the word dream makes it seem like it’s not real.

No, of course it’s not real.

The physical universe is real,

but the physical universe is incomprehensible

and it doesn’t have any feeling of realness.

The feeling of realness that you experience

gets attached to certain representations

where your brain assesses,

this is the best model of reality that I have.

So the only thing that’s real to you

is the thing that’s happening at the very base of reality.

Yeah, for something to be real, it needs to be implemented.

So the model that you have of reality

is real in as far as it is a model.

It’s an appropriate description of the world

to say that there are models that are being experienced,

but the world that you experience

is not necessarily implemented.

There is a difference between a reality,

a simulation and a simulacrum.

The reality that we’re talking about

is something that fully emerges

over a causally closed lowest layer.

And the idea of physicalism is that we are in that layer,

that basically our world emerges over that.

Every alternative to physicalism is a simulation theory,

which basically says that we are

in some kind of simulation universe

and the real world needs to be in a parent universe of that,

where the actual causal structure is, right?

And when you look at the ocean and your own mind,

you are looking at a simulation

that explains what you’re going to see next.

So we are living in a simulation.

Yes, but a simulation generated by our own brains.

Yeah.

And this simulation is different from the physical reality

because the causal structure that is being produced,

what you are seeing is different

from the causal structure of physics.

But consistent.

Hopefully, if not, then you are going to end up

in some kind of institution

where people will take care of you

because your behavior will be inconsistent, right?

Your behavior needs to work in such a way

that it’s interacting with an accurately predictive

model of reality.

And if your brain is unable to make your model

of reality predictive, you will need help.

So what do you think about Donald Hoffman’s argument

that it doesn’t have to be consistent,

the dream world to what he calls like the interface

to the actual physical reality,

where there could be evolution?

I think he makes an evolutionary argument,

which is like, it could be an evolutionary advantage

to have the dream world drift away from physical reality.

I think that only works if you have tenure.

As long as you’re still interacting with the ground truth,

your model needs to be somewhat predictive.

Well, in some sense, humans have achieved a kind of tenure

in the animal kingdom.

Yeah.

And at some point we became too big to fail,

so we became postmodernist.

It all makes sense now.

We can just change the version of reality that we like.

Oh man.

Okay.

Yeah, but basically you can do magic.

You can change your assessment of reality,

but eventually reality is going to come bite you in the ass

if it’s not predictive.

Do you have a sense of what is that base layer

of physical reality?

You have like, so you have these attempts

at the theories of everything,

the very, very small, like string theory,

or what Stephen Wolfram talks about with the hypergraphs.

These are these tiny, tiny, tiny, tiny objects.

And then there is more like quantum mechanics

that’s talking about objects that are much larger,

but still very, very, very tiny.

Do you have a sense of where the tiniest thing is

that is like at the lowest level?

The turtle at the very bottom.

Do you have a sense what that turtle is?

I don’t think that you can talk about where it is

because space is emerging over the activity of these things.

So space, the coordinates only exist

in relation to the things, other things.

And so you could, in some sense, abstract it into locations

that can hold information and trajectories

that the information can take

between the different locations.

And this is how we construct our notion of space.

And physicists usually have a notion of space

that is continuous.

And this is a point where I tend to agree

with people like Stephen Wolfram

who are very skeptical of the geometric notions.

I think that geometry is the dynamics

of too many parts to count.

And there are no infinities.

If there were true infinities,

you would be running into contradictions,

which is in some sense what Gödel and Turing discovered

in response to Hilbert’s call.

So there are no infinities.

There are no infinities.

Infinity’s fake.

There is unboundedness, but if you have a language

that talks about infinity, at some point,

the language is going to contradict itself,

which means it’s no longer valid.

In order to deal with infinities in mathematics,

you have to postulate their existence initially.

You cannot construct the infinities.

And that’s an issue, right?

You cannot build up an infinity from zero.

But in practice, you never do this, right?

When you perform calculations,

you only look at the dynamics of too many parts to count.

And usually these numbers are not that large.

They’re not googols or something.

The infinities that we are dealing with in our universe

are mathematically speaking, relatively small integers.

And still what we’re looking at is dynamics

where a trillion things behave similar

to a hundred trillion things

or something that is very, very large

because they’re converging.

And these convergent dynamics, these operators,

this is what we deal with when we are doing the geometry.

Geometry is stuff where we can pretend that it’s continuous

because if we subdivide the space sufficiently fine grained,

these things approach a certain dynamic.

And this approach dynamic, that is what we mean by it.

But I don’t think that infinity would work, so to speak,

that you would know the last digit of pi

and that you have a physical process

that rests on knowing the last digit of pi.

Yeah, that could be just a peculiar quirk

of human cognition that we like discrete.

Discrete makes sense to us.

Infinity doesn’t, so in terms of our intuitions.

No, the issue is that everything that we think about

needs to be expressed in some kind of mental language,

not necessarily natural language,

but some kind of mathematical language

that your neurons can speak

that refers to something in the world.

And what we have discovered

is that we cannot construct a notion of infinity

without running into contradictions,

which means that such a language is no longer valid.

And I suspect this is what made Pythagoras so unhappy

when somebody came up with the notion of irrational numbers

before it was time, right?

There’s this myth that he had this person killed

when he blabbed out the secret

that not everything can be expressed

as a ratio between two numbers,

but there are numbers between the ratios.

The world was not ready for this.

And I think he was right.

That has confused mathematicians very seriously

because these numbers are not values, they are functions.

And so you can calculate these functions

to a certain degree of approximation,

but you cannot pretend that pi has actually a value.

Pi is a function that would approach this value

to some degree,

but nothing in the world rests on knowing pi.
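One way to make "pi is a function, not a value" concrete: any computation only ever uses a finite partial sum of a convergent series, for example the Leibniz series, and no step requires a "last digit":

```latex
% Pi as a limit of finite computations (Leibniz series); each partial sum
% is an ordinary rational number with a known error bound.
\pi = 4\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1},
\qquad
\left|\, \pi - 4\sum_{k=0}^{n} \frac{(-1)^k}{2k+1} \,\right| \le \frac{4}{2n+3}.
```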

How important is this distinction

between discrete and continuous for you to get to the bottom of it?

Because there’s a, I mean, in discussion of your favorite

flavor of the theory of everything,

there’s a few on the table.

So there’s string theory,

there’s loop quantum gravity,

which focuses on one particular unification.

There’s just a bunch of favorite flavors

of different people trying to propose

a theory of everything.

Eric Weinstein and a bunch of people throughout history.

And then of course, Stephen Wolfram,

who I think is one of the only people doing a discrete.

No, no, there’s a bunch of physicists

who do this right now.

And like Toffoli and Tomasello.

And digital physics is something

that is, I think, growing in popularity.

But the main reason why this is interesting

is because it’s important sometimes to settle disagreements.

I don’t think that you need infinities at all,

and you never needed them.

You can always deal with very large numbers

and you can deal with limits, right?

We are fine with doing that.

You don’t need any kind of infinity.

You can build your computer algebra systems just as well

without believing in infinity in the first place.

So you’re okay with limits?

Yeah, so basically a limit means that something

is behaving pretty much the same

if you make the number large.

Right, because it’s converging to a certain value.

And at some point the difference becomes negligible

and you can no longer measure it.

And in this sense, you have things

that if you have an n-gon which has enough corners,

then it’s going to behave like a circle at some point, right?
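The n-gon remark can be stated precisely: the perimeter of a regular n-gon inscribed in a circle of radius r already behaves like the circumference for large but finite n, with the gap shrinking like one over n squared:

```latex
% Perimeter of a regular n-gon inscribed in a circle of radius r,
% and its convergence to the circumference.
P_n = 2nr\sin\!\left(\frac{\pi}{n}\right) \xrightarrow[\;n\to\infty\;]{} 2\pi r,
\qquad
2\pi r - P_n = O\!\left(\frac{1}{n^{2}}\right).
```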

And it’s only going to be in some kind of esoteric thing

that cannot exist in the physical universe

that you would be talking about this perfect circle.

And now it turns out that it also wouldn’t work

in mathematics because you cannot construct mathematics

that has infinite resolution

without running into contradictions.

So that is itself not that important

because we never did that, right?

It’s just a thing that some people thought we could.

And this leads to confusion.

So for instance, Roger Penrose uses this as an argument

to say that there are certain things

that mathematicians can do dealing with infinities

and by extension our mind can do

that computers cannot do.

Yeah, he talks about that the human mind

can do certain mathematical things

that the computer as defined

by the universal Turing machine cannot.

Yes.

So that it has to do with infinity.

Yes, it’s one of the things.

So he is basically pointing at the fact

that there are things that are possible

in the mathematical mind and in pure mathematics

that are not possible in machines

that can be constructed in the physical universe.

And because he’s an honest guy,

he thinks this means that present physics

cannot explain operations that happen in our mind.

Do you think he’s right?

And so let’s leave his discussion

of consciousness aside for the moment.

Do you think he’s right about just

what he’s basically referring to as intelligence?

So is the human mind fundamentally more capable

as a thinking machine than a universal Turing machine?

No.

But so he’s suggesting that, right?

So our mind is actually less than a Turing machine.

There can be no Turing machine

because it’s defined as having an infinite tape.

And we always only have a finite tape.

But he’s saying it’s better.

Our minds can only perform finitely many operations.

Yes, he thinks so.

He’s saying it can do the kind of computation

that the Turing machine cannot.

And that’s because he thinks that our minds

can do operations that have infinite resolution

in some sense.

And I don’t think that’s the case.

Our minds are just able to discover these limit operators

over too many parts to count.

I see.

What about his idea that consciousness

is more than a computation?

So it’s more than something that a Turing machine can do.

So again, saying that there’s something special

about our mind that cannot be replicated in a machine.

The issue is that I don’t even know

how to construct a language to express

this statement correctly.

Well,

the basic statement is there’s a human experience

that includes intelligence, that includes self awareness,

that includes the hard problem of consciousness.

And the question is, can that be fully simulated

in the computer, in the mathematical model of the computer

as we understand it today?

Roger Penrose says no.

So the universal Turing machine

cannot simulate the universe.

So the interesting question is,

and you have to ask him this: why not?

What is this specific thing that cannot be modeled?

And when I looked at his writings

and I haven’t read all of it,

but when I read, for instance,

the section that he writes in the introduction

to a road to infinity,

the thing that he specifically refers to

is the way in which human minds deal with infinities.

And that itself can, I think, easily be deconstructed.

A lot of people feel that our experience

cannot be explained in a mechanical way.

And therefore it needs to be different.

And I concur, our experience is not mechanical.

Our experience is simulated.

It exists only in a simulation.

Only a simulation can be conscious.

Physical systems cannot be conscious

because they’re only mechanical.

Cells cannot be conscious.

Neurons cannot be conscious.

Brains cannot be conscious.

People cannot be conscious

as far as if you understand them as physical systems.

What can be conscious is the story of the system

in the world where you write all these things

into the story.

You have experiences for the same reason

that a character in a novel has experiences

because it’s written into the story.

And now the system is acting on that story.

And it’s not a story that is written in a natural language.

It’s written in a perceptual language,

in this multimedia language of the game engine.

And in there, you write in what kind of experience you have

and what this means for the behavior of the system,

for your behavior tendencies, for your focus,

for your attention, for your experience of valence

and so on.

And this is being used to inform the behavior of the system

in the next step.

And then the story updates with the reactions of the system

and the changes in the world and so on.

And you live inside of that model.

You don’t live inside of the physical reality.

And I mean, just to linger on it, like you say, okay,

it’s in the perceptual language,

the multimodal perceptual language.

That’s the experience.

That’s what consciousness is within that model,

within that story.

But do you have agency?

When you play a video game, you can turn left

and you can turn right in that story.

So in that dream world, how much control do you have?

Is there such a thing as you in that story?

Like, is it right to say the main character,

you know, everybody’s NPCs,

and then there’s the main character

and you’re controlling the main character?

Or is that an illusion?

Is there a main character that you’re controlling?

I’m getting to the point of like the free will point.

Imagine that you are building a robot that plays soccer.

And you’ve been to MIT computer science,

you basically know how to do that, right?

And so you would say the robot is an agent

that solves a control problem,

how to get the ball into the goal.

And it needs to perceive the world

and the world is disturbing him in trying to do this, right?

So he has to control many variables to make that happen

and to project itself and the ball into the future

and understand its position on the field

relative to the ball and so on,

and the position of its limbs

in the space around it and so on.

So it needs to have an adequate model

that abstracts reality in a useful way.

And you could say that this robot does have agency

over what it’s doing in some sense.

And the model is going to be a control model.

And inside of that control model,

you can possibly get to a point

where this thing is sufficiently abstract

to discover its own agency.

Our current robots don’t do that.

They don’t have a unified model of the universe,

but there’s not a reason why we shouldn’t be getting there

at some point in the not too distant future.

And once that happens,

you will notice that the robot tells a story

about a robot playing soccer.

So the robot will experience itself playing soccer

in a simulation of the world that it uses

to construct a model of the locations of its legs

and limbs in space on the field

with relationship to the ball.

And it’s not going to be at the level of the molecules.

It will be an abstraction that is exactly at the level

that is most suitable for path planning

of the movements of the robot.

It’s going to be a high level abstraction,

but a very useful one that is as predictive

as we can make it.

And inside of that story,

there is a model of the agency of that system.

So this model can accurately predict

that the contents of the model

are going to be driving the behavior of the robot

in the immediate future.

But there’s the hard problem of consciousness,

which I would also,

there’s a subjective experience of free will as well

that I’m not sure where the robot gets that,

where that little leap is.

Because for me right now,

everything I imagine with that robot,

as it gets more and more and more sophisticated,

the agency comes from the programmer of the robot still,

of what was programmed in.

You could probably do an end to end learning system.

You maybe need to give it a few priors.

So you nudge the architecture in the right direction

that it converges more quickly,

but ultimately discovering the suitable hyperparameters

of the architecture is also only a search process.

And as the search process was evolution,

that has informed our brain architecture

so we can converge in a single lifetime

on useful interaction with the world

and the formation of a self model.

The problem is if we define hyperparameters broadly,

so it’s not just the parameters that control

this end to end learning system,

but the entirety of the design of the robot.

Like there’s, you have to remove the human completely

from the picture.

And then in order to build the robot,

you have to create an entire universe.

Cause you have to go, you can’t just shortcut evolution.

You have to go from the very beginning

in order for it to have,

cause I feel like there’s always a human

pulling the strings and that makes it seem like

the robot is cheating.

It’s getting a shortcut to consciousness.

And you are looking at the current Boston Dynamics robots.

It doesn’t look as if there is somebody

pulling the strings.

It doesn’t look like cheating anymore.

Okay, so let’s go there.

Cause I got to talk to you about this.

So obviously with the case of Boston Dynamics,

as you may or may not know,

it’s always either hard coded or remote controlled.

There’s no intelligence.

I don’t know how the current generation

of Boston Dynamics robots works,

but what I’ve been told about the previous ones

was that it’s basically all cybernetic control,

which means you still have feedback mechanisms and so on,

but it’s not deep learning for the most part

as it’s currently done.

It’s for the most part,

just identifying a control hierarchy

that is congruent to the limbs that exist

and the parameters that need to be optimized

for the movement of these limbs.

And then there is a convergence process.

So it’s basically just regression

that you would need to control this.

But again, I don’t know whether that’s true.

That’s just what I’ve been told about how they work.

We have to separate several levels of discussion here.

So the only thing they do is pretty sophisticated control

with no machine learning

in order to maintain balance or to right itself.

It’s a control problem in terms of using the actuators,

so that when it’s pushed or when it steps on a thing

that’s uneven, it can always maintain balance.

And there’s a tricky set of heuristics around that,

but that’s the only goal.

Everything you see Boston Dynamics doing

that is compelling to us humans

is any kind of higher order movement,

like turning, wiggling its butt,

like jumping back on its two feet, dancing.

Dancing is even worse because dancing is hard coded in.

It’s choreographed by humans.

There’s choreography software.

So of all that high level movement,

there’s nothing that you can call,

certainly can’t call AI,

but there’s not even, like, basic heuristics.

It’s all hard coded in.

And yet we humans immediately project agency onto them,

which is fascinating.

So the dog here doesn’t necessarily have agency.

What it has is cybernetic control.

And the cybernetic control means you have a hierarchy

of feedback loops that keep the behavior

in certain boundaries so the robot doesn’t fall over

and it’s able to perform the movements.

And the choreography cannot really happen

with motion capture because the robot would fall over

because the physics of the robot,

the weight distribution and so on is different

from the weight distribution in the human body.

So if you were using the directly motion captured movements

of a human body to project it into this robot,

it wouldn’t work.

You can do this with a computer animation.

It will look a little bit off, but who cares?

But if you want to correct for the physics,

you need to basically tell the robot

where it should move its limbs.

And then the control algorithm is going

to approximate a solution that makes it possible

within the physics of the robot.

And you have to find the basic solution

for making that happen.

And there’s probably going to be some regression necessary

to get the control architecture to make these movements.

But those two layers are separate.

So the thing, the higher level instruction

of how you should move and where you should move

is a higher level.

Yeah, so I expect that the control level

of these robots at some level is dumb.

This is just the physical control movement,

the motor architecture.

But it’s a relatively smart motor architecture.

It’s just that there is no high level deliberation

about what decisions to make necessarily, right?

But see, it doesn’t feel like free will or consciousness.

No, no, that was not where I was trying to get to.

I think that in our own body, we have that too.

So we have a certain thing that is basically

just a cybernetic control architecture

that is moving our limbs.

And deep learning can help in discovering

such an architecture if you don’t have it

in the first place.

If you already know your hardware,

you can maybe handcraft it.

But if you don’t know your hardware,

you can search for such an architecture.

And this work already existed in the 80s and 90s.

People were starting to search for control architectures

by motor babbling and so on,

and just use reinforcement learning architectures

to discover such a thing.

And now imagine that you have

the cybernetic control architecture already inside of you.

And you extend this a little bit.

So you are seeking out food, for instance,

or rest and so on.

And you get to have a baby at some point.

And now you add more and more control layers to this.

And the system is reverse engineering

its own control architecture

and builds a high level model to synchronize

the pursuit of very different conflicting goals.

And this is how I think you get to purposes.

Purposes are models of your goals.

The goals may be intrinsic

as the result of the different set point violations

that you have,

hunger and thirst for very different things,

and rest and pain avoidance and so on.

And you put all these things together

and eventually you need to come up with a strategy

to synchronize them all.

And you don’t need just to do this alone by yourself

because we are state building organisms.

We cannot function in isolation

the way that homo sapiens is set up.

So our own behavior only makes sense

when you zoom out very far into a society

or even into ecosystemic intelligence on the planet

and our place in it.

So the individual behavior only makes sense

in these larger contexts.

And we have a number of priors built into us.

So we are behaving as if we were acting

on these high level goals pretty much right from the start.

And eventually in the course of our life,

we can reverse engineer the goals that we’re acting on,

what actually are our higher level purposes.

And the more we understand that,

the more our behavior makes sense.

But this is all at this point,

complex stories within stories

that are driving our behavior.

Yeah, I just don’t know how big of a leap it is

to start creating a system

that’s able to tell stories within stories.

Like how big of a leap that is

from where currently Boston Dynamics is

or any robot that’s operating in the physical space.

And that leap might be big

if it requires to solve the hard problem of consciousness,

which is telling a hell of a good story.

I suspect that consciousness itself is relatively simple.

What’s hard is perception

and the interface between perception and reasoning.

That’s for instance, the idea of the consciousness prior

that would be built into such a system by Yoshua Bengio.

And what he describes, and I think that’s accurate,

is that our own model of the world

can be described through something like an energy function.

The energy function is modeling the contradictions

that exist within the model at any given point.

And you try to minimize these contradictions,

the tensions in the model.

And to do this, you need to sometimes test things.

You need to conditionally disambiguate figure and ground.

You need to distinguish whether this is true

or that is true, and so on.

Eventually you get to an interpretation,

but you will need to manually depress a few points

in your model to let it snap into a state that makes sense.

And this function that tries to get the biggest dip

in the energy function in your model,

according to Yoshua Bengio, is related to consciousness.

It’s a low dimensional discrete function

that tries to maximize this dip in the energy function.
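
Very roughly, and not Bengio's actual formulation, the idea can be sketched like this: an energy function counts the contradictions among the variables of an interpretation, and a low-dimensional discrete choice, here just which single variable to flip, is selected to produce the biggest dip in that energy. All variable names and constraints below are made up for illustration.

```python
# Toy interpretation state: three binary variables of a scene reading.
state = {"is_nose": True, "face_nearby": False, "same_orientation": False}

# Constraints penalize combinations that contradict each other.
constraints = [
    lambda s: s["is_nose"] and not s["face_nearby"],           # a nose needs a face
    lambda s: s["face_nearby"] and not s["same_orientation"],  # face must point the same way
]

def energy(s):
    # Energy = number of violated constraints (contradictions in the model).
    return sum(c(s) for c in constraints)

def candidate_flips(s):
    # The low-dimensional discrete interventions: flip one variable at a time.
    for k in s:
        trial = dict(s, **{k: not s[k]})
        yield energy(s) - energy(trial), k, trial

for _ in range(5):                               # bounded settling process
    if energy(state) == 0:
        break
    dip, var, state = max(candidate_flips(state), key=lambda t: t[0])
    print(f"flipped {var!r}, energy is now {energy(state)}")
```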

Yeah, I think I would need to dig into details

because I think the way he uses the word consciousness

is more akin to like self awareness,

like modeling yourself within the world,

as opposed to the subjective experience, the hard problem.

No, it’s not even that the self is in the world.

The self is the agent and you don’t need to be aware

of yourself in order to be conscious.

The self is just a particular content that you can have,

but you don’t have to have.

But you can be conscious in, for instance, a dream at night

or during a meditation state where you don’t have a self.

Right.

Where you’re just aware of the fact that you are aware.

And what we mean by consciousness in the colloquial sense

is largely this reflexive self awareness,

that we become aware of the fact

that we’re paying attention,

that we are the thing that pays attention.

We are the thing that pays attention, right.

I don’t see where the awareness that we’re aware,

the hard problem doesn’t feel like it’s solved.

I mean, it’s called a hard problem for a reason,

because it seems like there needs to be a major leap.

Yeah, I think the major leap is to understand

how it is possible that a machine can dream,

that a physical system is able to create a representation

that the physical system is acting on,

and that is spun forth and so on.

But once you accept the fact that you are not in physics,

but that you exist inside of the story,

I think the mystery disappears.

Everything is possible in the story.

You exist inside the story.

Okay, so the machine.

Your consciousness is being written into the story.

The fact that you experience things

is written to the side of the story.

You ask yourself, is this real what I’m seeing?

And your brain writes into the story, yes, it’s real.

So what about the perception of consciousness?

So to me, you look conscious.

So the illusion of consciousness,

the demonstration of consciousness.

I asked about the legged robot.

How do we make this legged robot conscious?

So there’s two things,

and maybe you can tell me if they’re neighboring ideas.

One is actually make it conscious,

and the other is make it appear conscious to others.

Are those related?

Let’s ask it from the other direction.

What would it take to make you not conscious?

So when you are thinking about how you perceive the world,

can you decide to switch from looking at qualia

to looking at representational states?

And it turns out you can.

There is a particular way in which you can look at the world

and recognize its machine nature, including your own.

And in that state,

you don’t have that conscious experience

in this way anymore.

It becomes apparent as a representation.

Everything becomes opaque.

And I think this thing that you recognize,

everything is a representation.

This is typically what we mean with enlightenment states.

And it can happen on the motivational level,

but you can also do this on the experiential level,

on the perceptual level.

See, but then I can come back to a conscious state.

Okay, I particularly,

I’m referring to the social aspect

that the demonstration of consciousness

is a really nice thing at a party

when you’re trying to meet a new person.

It’s a nice thing to know that they’re conscious

and they can,

I don’t know how fundamental consciousness

is in human interaction,

but it seems like to be at least an important part.

And I ask that in the same kind of way for robots.

In order to create a rich, compelling

human robot interaction,

it feels like there needs to be elements of consciousness

within that interaction.

My cat is obviously conscious.

And so my cat can do this party trick.

She also knows that I am conscious,

and we’re able to have feedback about the fact

that we are both acting on models of our own awareness.

The question is how hard is it for the robot,

an artificially created robot, to achieve cat level

and party tricks?

Yes, so the issue for me is currently not so much

on how to build a system that creates a story

about a robot that lives in the world,

but to make an adequate representation of the world.

And the model that you and me have is a unified one.

It’s one where you basically make sense of everything

that you can perceive.

Every feature in the world that enters your perception

can be relationally mapped to a unified model of everything.

And we don’t have an AI that is able to construct

such a unified model yet.

So you need that unified model to do the party trick?

Yes, I think that it doesn’t make sense

if this thing is conscious,

but not in the same universe as you,

because you could not relate to each other.

So what’s the process, would you say,

of engineering consciousness in the machine?

Like what are the ideas here?

So you probably want to have some kind of perceptual system.

This perceptual system is a processing agent

that is able to track sensory data

and predict the next frame in the sensory data

from the previous frames of the sensory data

and the current state of the system.

So the current state of the system is, in perception,

instrumental to predicting what happens next.

And this means you build lots and lots of functions

that take all the blips that you feel on your skin

and that you see on your retina, or that you hear,

and puts them into a set of relationships

that allows you to predict what kind of sensory data,

what kind of sensory blips, vector of blips,

you’re going to perceive in the next frame.

This is tuned and it’s constantly tuned

until it gets as accurate as it can.

You build a very accurate prediction mechanism

that is step one of the perception.

So first you predict, then you perceive

and see the error in your prediction.
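
A minimal numerical sketch of that predict-then-perceive loop (the two-dimensional "sensory" stream and the linear model are toy stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden dynamics generating the sensory "blips"; the perceiver never sees this.
true_dynamics = np.array([[np.cos(0.3), -np.sin(0.3)],
                          [np.sin(0.3),  np.cos(0.3)]])
frame = np.array([1.0, 0.0])

W = np.zeros((2, 2))   # the perceptual model: predicts the next frame from this one
lr = 0.1

for _ in range(500):
    prediction = W @ frame                                       # 1. predict the next frame
    next_frame = true_dynamics @ frame + rng.normal(0, 0.01, 2)  # 2. perceive it
    error = next_frame - prediction                              # 3. the error in the prediction
    W += lr * np.outer(error, frame)                             # 4. tune the model to shrink it
    frame = next_frame

print("learned dynamics:\n", np.round(W, 2))
print("true dynamics:\n", np.round(true_dynamics, 2))
```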

And you have to do two things to make that happen.

One is you have to build a network of relationships

that are constraints,

that take all the variance in the world

and put each of the variances into a variable

that is connected with relationships to other variables.

And these relationships are computable functions

that constrain each other.

So when you see a nose

that points in a certain direction in space,

you have a constraint that says

there should be a face nearby that has the same direction.

And if that is not the case,

you have some kind of contradiction

that you need to resolve

because it’s probably not a nose that you’re looking at.

It just looks like one.

So you have to reinterpret the data

until you get to a point where your model converges.

And this process of making the sensory data

fit into your model structure

is what Piaget calls assimilation.

And accommodation is the change of the models

where you change your model in such a way

that you can assimilate everything.

So you’re talking about building

a hell of an awesome perception system

that’s able to do prediction and perception

and correct and keep improving.

No, wait, that’s…

Wait, there’s more.

Yes, there’s more.

So the first thing that we wanted to do

is we want to minimize the contradictions in the model.

And of course, it’s very easy to make a model

in which you minimize the contradictions

just by allowing that it can be

in many, many possible states, right?

So if you increase degrees of freedom,

you will have fewer contradictions.

But you also want to reduce the degrees of freedom

because degrees of freedom mean uncertainty.

You want your model to reduce uncertainty

as much as possible,

but reducing uncertainty is expensive.

So you have to have a trade off

between minimizing contradictions

and reducing uncertainty.

And you have only a finite amount of compute

and experimental time and effort

available to reduce uncertainty in the world.

So you need to assign value to what you observe.

So you need some kind of motivational system

that is estimating what you should be looking at

and what you should be thinking about it,

how you should be applying your resources

to model what that is, right?

So you need to have something like convergence links

that tell you how to get from the present state

of the model to the next one.

You need to have these compatibility links

that tell you which constraints exist

and which constraint violations exist.

And you need to have some kind of motivational system

that tells you what to pay attention to.

So now we have a second agent next to the perceptual agent.

We have a motivational agent.

This is a cybernetic system

that is modeling what the system needs,

what’s important for the system,

and that interacts with the perceptual system

to maximize the expected reward.

And you’re saying the motivational system

is some kind of like, what is it?

A high level narrative over some lower level.

No, it’s just your brainstem stuff,

the limbic system stuff that tells you,

okay, now you should get something to eat

because I’ve just measured your blood sugar.

So you mean like motivational system,

like the lower level stuff, like hungry.

Yes, there’s basically physiological needs

and some cognitive needs and some social needs

and they all interact.

And they’re all implemented at different parts

in your nervous system as the motivational system.

But they’re basically cybernetic feedback loops.

It’s not that complicated.

It’s just a lot of code.

And so you now have a motivational agent

that makes your robot go for the ball

or that makes your worm go to eat food and so on.

And you have the perceptual system

that lets it predict the environment

so it’s able to solve that control problem to some degree.

And now what we learned is that it’s very hard

to build a machine learning system

that looks at all the data simultaneously

to see what kind of relationships

could exist between them.

So you need to selectively model the world.

You need to figure out where you can make the biggest difference

if you were to put the following things together.

Sometimes you find a gradient for that.

When you have a gradient,

you don’t need to remember where you came from.

You just follow the gradient

until it doesn’t get any better.

But if you have a world where the problems are discontinuous

and the search spaces are discontinuous,

you need to retain memory of what you explored.

You need to construct a plan of what to explore next.

And this thing means that you have next

to this perceptual construction system

and the motivational cybernetics,

an agent that is paying attention

to what it should select at any given moment

to maximize reward.

And this scanning system, this attention agent,

is required for consciousness

and consciousness is its control model.

So it’s the index memories that this thing retains

when it manipulates the perceptual representations

to maximize the value and minimize the conflicts

and to increase coherence.

So the purpose of consciousness is to create coherence

in your perceptual representations,

remove conflicts, predict the future,

construct counterfactual representations

so you can coordinate your actions and so on.

And in order to do this, it needs to form memories.

These memories are partial binding states

of the working memory contents

that are being revisited later on to backtrack,

to undo certain states, to look for alternatives.

And these index memories that you can recall,

that is what you perceive as your stream of consciousness.

And being able to recall these memories,

this is what makes you conscious.

If you could not remember what you paid attention to,

you wouldn’t be conscious.

So consciousness is the index in the memory database.
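
Purely as an illustration of that index-memory idea (nothing here is a real cognitive architecture), the sketch below keeps a trace of what was attended and bound in working memory, so the system can backtrack, undo a binding, and try an alternative.

```python
class AttentionAgent:
    """Toy attention agent: binds values in working memory and keeps an index."""

    def __init__(self):
        self.working_memory = {}
        self.index = []   # the recallable trace of what was attended, in order

    def attend(self, variable, value):
        # Commit a binding and remember the previous state so it can be undone.
        self.index.append((variable, self.working_memory.get(variable)))
        self.working_memory[variable] = value

    def backtrack(self):
        # Revisit the last index memory and undo that binding.
        variable, previous = self.index.pop()
        if previous is None:
            self.working_memory.pop(variable, None)
        else:
            self.working_memory[variable] = previous

agent = AttentionAgent()
agent.attend("figure", "nose")
agent.attend("ground", "face")
agent.backtrack()                    # the "face" interpretation produced a conflict
print(agent.working_memory)          # {'figure': 'nose'}
print(agent.index)                   # the remembered stream of attention
```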

Okay.

But let me sneak up to the questions of consciousness

a little further.

So we usually relate suffering to consciousness.

So the capacity to suffer.

I think to me, that’s a really strong sign of consciousness

is a thing that can suffer.

How is that useful?

Suffering.

And like in your model where you just described,

which is indexing of memories and what is the coherence

with the perception, with this predictive thing

that’s going on in the perception,

how does suffering relate to any of that?

The higher level suffering that humans do.

Basically pain is a reinforcement signal.

Pain is a signal that one part of your brain

sends to another part of your brain,

or in an abstract sense, part of your mind

sends to another part of the mind to regulate its behavior,

to tell it the behavior that you’re currently exhibiting

should be improved.

And this is the signal that tells you to move away

from what you’re currently doing

and push into a different direction.

So pain gives a part of you an impulse

to do something differently.

But sometimes this doesn’t work

because the training part of your brain

is talking to the wrong region,

or because it has the wrong model

of the relationships in the world.

Maybe you’re mismodeling yourself

or you’re mismodeling the relationship of yourself

to the world,

or you’re mismodeling the dynamics of the world.

So you’re trying to improve something

that cannot be improved by generating more pain.

But the system doesn’t have any alternative.

So it doesn’t get better.

What do you do if something doesn’t get better

and you want it to get better?

You increase the strengths of the signal.

And then the signal becomes chronic

when it becomes permanent without a change in sight.

This is what we call suffering.
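
A toy feedback loop can make that escalation concrete (the numbers are arbitrary, and this is only a caricature of the verbal model): the regulator raises the gain on the pain signal whenever the set-point violation does not improve, so an unfixable violation yields a signal that keeps growing, that is, becomes chronic.

```python
def regulate(fixable, steps=10):
    violation, gain = 1.0, 1.0
    pain = 0.0
    for _ in range(steps):
        pain = gain * violation                     # the signal sent to the misbehaving part
        if fixable:
            violation = max(0.0, violation - 0.3)   # behavior improves, signal fades
        else:
            gain *= 1.5                             # no improvement: "say it louder"
    return pain

print("resolvable violation, final pain:  ", round(regulate(fixable=True), 2))
print("unresolvable violation, final pain:", round(regulate(fixable=False), 2))
```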

And the purpose of consciousness

is to deal with contradictions,

with things that cannot be resolved.

The purpose of consciousness,

I think is similar to a conductor in an orchestra.

When everything works well,

the orchestra doesn’t need much of a conductor

as long as it’s coherent.

But when there is a lack of coherence

or something is consistently producing

disharmony and mismatches,

then the conductor becomes alert and interacts with it.

So suffering attracts the activity of our consciousness.

And the purpose of that is ideally

that we bring new layers online,

new layers of modeling that are able to create

a model of the dysregulation so we can deal with it.

And this means that we typically get

higher level consciousness, so to speak, right?

We get some consciousness above our pay grade maybe

if we have some suffering early in our life.

Most of the interesting people

had trauma early on in their childhood.

And trauma means that you are suffering an injury

for which the system is not prepared,

which it cannot deal with,

which it cannot insulate itself from.

So something breaks.

And this means that the behavior of the system

is permanently disturbed in a way

that some mismatch exists now in the regulation

that just by following your impulses,

by following the pain in the direction where it hurts,

the situation doesn’t improve but gets worse.

And so what needs to happen is that you grow up.

And the part that has grown up

is able to deal with the part

that is stuck in this earlier phase.

Yeah, so at least you grow,

you’re adding extra layers to your cognition.

And let me ask you then,

because I gotta stick on suffering,

the ethics of the whole thing.

So not our consciousness, but the consciousness of others.

You’ve tweeted, one of my biggest fears

is that insects could be conscious.

The amount of suffering on earth would be unthinkable.

So when we think of other conscious beings,

is suffering a property of consciousness

that we’re most concerned about?

So I’m still thinking about robots,

how to make sense of other nonhuman things

that appear to have the depth of experience

that humans have.

And to me, that means consciousness

and the darkest side of that, which is suffering,

the capacity to suffer.

And so I started thinking,

how much responsibility do we have

for those other conscious beings?

That’s where the definition of consciousness

becomes most urgent.

Like having to come up with a definition of consciousness

becomes most urgent

when deciding who we should and should not be torturing.

There’s no general answer to this.

Was Genghis Khan doing anything wrong?

It depends, right, on how you look at it.

Well, he drew a line somewhere

where this is us and that’s them.

It’s the circle of empathy.

It’s like these,

you don’t have to use the word consciousness,

but these are the things that matter to me

if they suffer or not.

And these are the things that don’t matter to him.

Yeah, but when one of his commanders failed him,

he broke his spine and let him die in a horrible way.

And so in some sense,

I think he was indifferent to suffering

or rather, not indifferent in the sense

that he didn’t see it as useful if he inflicted suffering,

but he did not see it as something that had to be avoided.

That was not the goal.

The question was, how can I use suffering

and the infliction of suffering to reach my goals

from his perspective?

I see.

So like different societies throughout history

put different value on the…

Different individuals, different psyches.

But also even the objective of avoiding suffering,

like some societies probably,

I mean, this is where like religious belief really helps

that afterlife, that it doesn’t matter

that you suffer or die,

what matters is you suffer honorably, right?

So that you enter the afterlife as a hero.

That seems to be superstition to me.

Basically, beliefs that assert things

for which no evidence exists

are incompatible with sound epistemology.

And I don’t think that religion has to be superstitious,

otherwise it should be condemned in all cases.

You’re somebody who’s saying we live in a dream world,

we have zero evidence for anything.

So…

That’s not the case.

There are limits to what languages can be constructed.

Mathematics brings solid evidence for its own structure.

And once we have some idea of what languages exist,

how a system can learn,

and what learning itself is in the first place,

we can begin to realize that our intuition

that we are able to learn about the regularities

of the world and minimize surprise

and understand the nature of our own agency

to some degree of abstraction,

that’s not an illusion.

So it’s a useful approximation.

Just because we live in a dream world

doesn’t mean mathematics can’t give us a consistent glimpse

of physical, of objective reality.

We can basically distinguish useful encodings

from useless encodings.

And when we apply our truth seeking to the world,

we know we usually cannot find out

whether a certain thing is true.

What we typically do is we take the state vector

of the universe and separate it into objects

that interact with each other through interfaces.

And this distinction that we are making

is not completely arbitrary.

It’s done to optimize the compression

that we can apply to our models of the universe.

So we can predict what’s happening

with our limited resources.

In this sense, it’s not arbitrary.

But the separation of the world into objects

that are somehow discrete and interacting with each other

is not the true reality, right?

The boundaries between the objects

are projected into the world, not arbitrarily projected.

But still, it’s only an approximation

of what’s actually the case.

And we sometimes notice that we run into contradictions

when we try to understand high level things

like economic aspects of the world

and so on, or political aspects, or psychological aspects

where we make simplifications.

And the objects that we are using to separate the world

are just one of many possible projections

of what’s going on.

So it’s not, in this postmodernist sense,

completely arbitrary, and you’re free to pick

what you want or dismiss what you don’t like

because it’s all stories.

No, that’s not true.

You have to show for every model

how well it predicts the world.

So the confidence that you should have

in the entities of your models

should correspond to the evidence that you have.

Can I ask you on a small tangent

to talk about your favorite set of ideas and people,

which is postmodernism.

What?

What is postmodernism?

How would you define it?

And why to you is it not a useful framework of thought?

Postmodernism is something that I’m really not an expert on.

And postmodernism is a set of philosophical ideas

that is difficult to lump together,

that is characterized by some useful thinkers,

some of them poststructuralists and so on.

And I’m mostly not interested in it

because I think that it’s not leading me anywhere

that I find particularly useful.

It’s mostly, I think, born out of the insight

that the ontologies that we impose on the world

are not literally true.

And that we can often get to a different interpretation

of the world by using a different ontology,

that is, a different separation of the world

into interacting objects.

But the idea that this makes the world a set of stories

that are arbitrary, I think, is wrong.

And the people that are engaging in this type of philosophy

are working in an area that I largely don’t find productive.

There’s nothing useful coming out of this.

So this idea that truth is relative

is not something that has, in some sense,

informed physics or the theory of relativity.

And there is no feedback between those.

There is no meaningful influence

of this type of philosophy on the sciences

or on engineering or in politics.

But there is a very strong influence on ideology

because it basically has become an ideology

that is justifying itself by the notion

that truth is a relative concept.

And it’s not being used in such a way

that the philosophers or sociologists

that take up these ideas say,

oh, I should doubt my own ideas because maybe my separation of the world

into objects is not completely valid.

And I should maybe use a different one

and be open to a pluralism of ideas.

But it mostly exists to dismiss the ideas of other people.

It becomes, yeah, it becomes a political weapon of sorts

to achieve power.

Basically, there’s nothing wrong, I think,

with developing a philosophy around this.

But to develop a philosophy around this,

to develop norms around the idea

that truth is something that is completely negotiable,

is incompatible with the scientific project.

And I think if the academia has no defense

against the ideological parts of the postmodernist movement,

it’s doomed.

Right, you have to acknowledge the ideological part

of any movement, actually, including postmodernism.

Well, the question is what an ideology is.

And to me, an ideology is basically a viral memeplex

that is changing your mind in such a way that reality gets warped.

It gets warped in such a way that you’re being cut off

from the rest of human thought space.

And you cannot consider things outside of the range of ideas

of your own ideology as possibly true.

Right, so, I mean, there’s certain properties to an ideology

that make it harmful.

One of them is that dogmatism of just certainty,

dogged certainty in that you’re right,

you have the truth, and nobody else does.

Yeah, but what is creating the certainty?

It’s very interesting to look at the type of model

that is being produced.

Is it basically just a strong prior, and you tell people,

oh, this idea that you consider to be very true,

the evidence for this is actually just much weaker

than you thought, and look, here are some studies.

No, this is not how it works.

It’s usually normative, which means some thoughts

are unthinkable because they would change your identity

into something that is no longer acceptable.

And this cuts you off from considering an alternative.

And many de facto religions use this trick

to lock people into a certain mode of thought,

and this removes agency over your own thoughts.

And it’s very ugly to me.

It’s basically not just a process of domestication,

but it’s actually an intellectual castration

that happens.

It’s an inability to think creatively

and to bring forth new thoughts.

Can I ask you about substances, chemical substances

that affect the video game, the dream world.

So psychedelics that increasingly have been getting

a lot of research done on them.

So in general, psychedelics, psilocybin, MDMA,

but also a really interesting one, the big one, which is DMT.

What and where are the places that these substances

take the mind that is operating in the dream world?

Do you have an interesting sense how this throws a wrinkle

into the prediction model?

Is it just some weird little quirk

or is there some fundamental expansion

of the mind going on?

I suspect that a way to look at psychedelics

is that they induce particular types

of lucid dreaming states.

So it’s a state in which certain connections

are being severed in your mind.

They’re no longer active.

Your mind basically gets free to move in a certain direction

because some inhibition, some particular inhibition

doesn’t work anymore.

And as a result, you might stop having a self

or you might stop perceiving the world as three dimensional.

And you can explore that state.

And I suppose that for every state

that can be induced with psychedelics,

there are people that are naturally in that state.

So sometimes psychedelics shift you

through a range of possible mental states.

And they can also shift you out of the range

of permissible mental states

that is where you can make predictive models of reality.

And what I observe in people that use psychedelics a lot

is that they tend to be overfitting.

Overfitting means that you are using more bits

for modeling the dynamics of a function than you should.

And so you can fit your curve

to extremely detailed things in the past,

but this model is no longer predictive for the future.
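
Overfitting in the statistical sense can be shown in a few lines: give a model more free parameters (more bits) than the data warrants and it will fit the past very closely while predicting the future worse. This is a toy example of that statistical effect, not anything measured about psychedelics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Past observations: a simple underlying trend plus noise.
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train - 1 + rng.normal(0, 0.2, x_train.size)

# "Future" data from the same underlying trend.
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test - 1

for degree in (1, 9):   # few bits vs. many bits in the curve
    model = np.poly1d(np.polyfit(x_train, y_train, degree))
    fit_error = np.mean((model(x_train) - y_train) ** 2)
    prediction_error = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree}: fit to the past {fit_error:.3f}, "
          f"error on the future {prediction_error:.3f}")
```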

What is it about psychedelics that forces that?

I thought it would be the opposite.

I thought that it’s a good mechanism

for generalization, for regularization.

So it feels like psychedelics expansion of the mind,

like taking you outside of,

like forcing your model to be non predictive

is a good thing.

Meaning like, it’s almost like, okay,

what I would say psychedelics are akin to

is traveling to a totally different environment.

Like going, if you’ve never been to like India

or something like that from the United States,

very different set of people, different culture,

different food, different roads and values

and all those kinds of things.

Yeah, so psychedelics can, for instance,

teleport people into a universe that is hyperbolic,

which means that if you imagine a room that you’re in,

you can turn around 360 degrees

and you didn’t go full circle.

You need to go 720 degrees to go full circle.

Exactly.

So the things that people learn in that state

cannot be easily transferred

in this universe that we are in.

It could be that if they’re able to abstract

and understand what happened to them,

that they understand that some part

of their spatial cognition has been desynchronized

and has found a different synchronization.

And this different synchronization

happens to be a hyperbolic one, right?

So you learn something interesting about your brain.

It’s difficult to understand what exactly happened,

but we get a pretty good idea

once we understand how the brain is representing geometry.

Yeah, but doesn’t it give you a fresh perspective

on the physical reality?

Who’s making that sound?

Is it inside my head or is it external?

Well, there is no sound outside of your mind,

but it’s your mind making sense of a phenomenon in physics.

Yeah, in the physical reality, there’s sound waves

traveling through air.

Okay.

That’s our model of what happened.

That’s our model of what happened, right.

Don’t psychedelics give you a fresh perspective

on this physical reality?

Like, not this physical reality, but this more…

What do you call the dream world that’s mapped directly to…

The purpose of dreaming at night, I think,

is data augmentation.

Exactly.

So that’s very different.

That’s very similar to psychedelics.

It changes parameters about the things that you have learned.

And, for instance, when you are young,

you have seen things from certain perspectives,

but not from others.

So your brain is generating new perspectives of objects

that you already know,

which means you can learn to recognize them later

from different perspectives.
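
In machine learning terms, that is exactly what data augmentation does: generate new views of things already known so the model can later recognize them from perspectives it never actually saw. A trivial sketch with random rotations and flips of a tiny image:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 4x4 "image" that has only ever been seen in one orientation.
image = np.array([[0, 0, 1, 0],
                  [0, 1, 1, 0],
                  [1, 1, 1, 1],
                  [0, 0, 1, 0]])

def augment(img):
    # A novel "perspective" on something already known:
    # a random quarter-turn rotation, sometimes mirrored.
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return img

for view in (augment(image) for _ in range(3)):
    print(view, "\n")
```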

And I suspect that’s the reason that many of us

remember having flying dreams as children,

because it’s just different perspectives of the world

that you already know,

and that it starts to generate these different

perspective changes,

and then it fluidly turns this into a flying dream

to make sense of what’s happening, right?

So you fill in the gaps,

and suddenly you see yourself flying.

And similar things can happen with semantic relationships.

So it’s not just spatial relationships,

but it can also be the relationships between ideas

that are being changed.

And it seems that the mechanisms that make that happen

during dreaming are interacting

with these same receptors

that are being stimulated by psychedelics.

So I suspect that there is a thing

that I haven’t read really about.

The way in which dreams are induced in the brain

is not just that the activity of the brain gets tuned down

because your eyes are closed

and you no longer get enough data from your eyes,

but there is a particular type of neurotransmitter

that is saturating your brain during these phases,

during the REM phases, and you produce

controlled hallucinations.

And psychedelics are linking into these mechanisms,

I suspect.

So isn’t that another trickier form of data augmentation?

Yes, but it’s also data augmentation

that can happen outside of the specification

that your brain is tuned to.

So basically people are overclocking their brains

and that produces states

that are subjectively extremely interesting.

Yeah, I just.

But from the outside, very suspicious.

So I think I’m over applying the metaphor

of a neural network in my own mind,

because I just think that it doesn’t lead to overfitting, right?

But you were just sort of anecdotally saying

my experiences with people that have done psychedelics

are that kind of quality.

I think it typically happens.

So if you look at people like Timothy Leary,

who has written beautiful manifestos

about the effect of LSD on people.

He genuinely believed, he writes in these manifestos,

that in the future, science and art

will only be done on psychedelics

because it’s so much more efficient and so much better.

And he gave LSD to children in this community

of a few thousand people that he had near San Francisco.

And basically he was losing touch with reality.

He did not understand the effects

that the things that he was doing

would have on the reception of psychedelics

by society because he was unable to think critically

about what happened.

What happened was that he got in a euphoric state,

that euphoric state happened because he was overfitting.

He was taking this sense of euphoria

and translating it into a model

of actual success in the world, right?

He was feeling better.

Limitations that he had experienced to exist

had disappeared,

but he didn’t get superpowers.

I understand what you mean by overfitting now.

There’s a lot of interpretation to the term

overfitting in this case, but I got you.

So he was getting positive rewards

from a lot of actions that he shouldn’t have been doing.

Yeah, but not just this.

So if you take, for instance, John Lilly,

who was studying dolphin languages and aliens and so on,

a lot of people that use psychedelics became very loopy.

And the typical thing that you notice

when people are on psychedelics is that they are in a state

where they feel that everything can be explained now.

Everything is clear, everything is obvious.

And sometimes they have indeed discovered

a useful connection, but not always.

Very often these connections are overinterpretations.

I wonder, you know, there’s a question

of correlation versus causation.

And also I wonder if it’s the psychedelics

or if it’s more the social, like being the outsider

and having a strong community of outsiders

and having a leadership position in an outsider,

cult-like community that could have a much stronger effect

of overfitting than do psychedelics themselves,

the actual substances, because it’s a counterculture thing.

So it could be that as opposed to the actual substance.

If you’re a boring person who wears a suit and tie

and works at a bank and takes psychedelics,

that could be a very different effect

of psychedelics on your mind.

I’m just sort of raising the point

that the people you referenced are already weirdos.

I’m not sure exactly.

No, not necessarily.

A lot of the people that tell me

that they use psychedelics in a useful way

started out as squares and were liberating themselves

because they were stuck.

They were basically stuck in local optimum

of their own self model, of their relationship to the world.

And suddenly they had data augmentation.

They basically saw and experienced a space of possibilities.

They experienced what it would be like to be another person.

And they took important lessons

from that experience back home.

Yeah, I mean, I love the metaphor of data augmentation

because that’s been the primary driver

of self supervised learning in the computer vision domain

is data augmentation.

So it’s funny to think of data augmentation,

like chemically induced data augmentation in the human mind.

There’s also a very interesting effect that I noticed.

I know several people who swear to me

that LSD has cured their migraines.

So severe cluster headaches or migraines

that didn’t respond to standard medication

that disappeared after a single dose.

And I don’t recommend anybody doing this,

especially not in the US where it’s illegal.

And there are no studies on this for that reason.

But it seems that anecdotally

that it basically can reset the serotonergic system.

So it’s basically pushing them

outside of their normal boundaries.

And as a result, it needs to find a new equilibrium.

And in some people that equilibrium is better,

but it also follows that in other people it might be worse.

So if you have a brain that is already teetering

on the boundary to psychosis,

it can be permanently pushed over that boundary.

Well, that’s why you have to do good science,

which they’re starting to do on all these different

substances of how well it actually works

for the different conditions like MDMA seems to help

with PTSD, same with psilocybin.

You need to do good science,

meaning large studies with large N.

Yeah, so based on the existing studies of MDMA,

it seems that if you look at Rick Doblin’s work

and what he has published about this and talks about,

MDMA seems to be a psychologically relatively safe drug.

But it’s physiologically not very safe.

That is, there is neurotoxicity

if you would use a too large dose.

And if you combine this with alcohol,

which a lot of kids do in party settings during raves

and so on, it’s very hepatotoxic.

So basically you can kill your liver.

And this means that it’s probably something that is best

and most productively used in a clinical setting

by people who really know what they’re doing.

And I suspect that’s also true for the other psychedelics

that is while the other psychedelics are probably not

as toxic as say alcohol,

the effects on the psyche can be much more profound

and lasting.

Yeah, well, as far as I know psilocybin,

so mushrooms, magic mushrooms,

as far as I know in terms of the studies they’re running,

I think have no, like they’re allowed to do

what they’re calling heroic doses.

So that one does not have a toxicity.

So they could do like huge doses in a clinical setting

when they’re doing study on psilocybin,

which is kind of fun.

Yeah, it seems that most of the psychedelics

work in extremely small doses,

which means that the effect on the rest of the body

is relatively low.

And MDMA is probably the exception.

Maybe ketamine can be dangerous in larger doses

because it can depress breathing and so on.

But the LSD and psilocybin work in very, very small doses,

at least the active part of them in the case of psilocybin;

LSD is only the active part.

But the effect that it can have

on your mental wiring can be very dangerous, I think.

Let’s talk about AI a little bit.

What are your thoughts about GPT3 and language models

trained with self supervised learning?

It came out quite a bit ago,

but I wanted to get your thoughts on it.

Yeah.

In the nineties, I was in New Zealand

and I had an amazing professor, Ian Witten,

who realized I was bored in class and put me in his lab.

And he gave me the task to discover grammatical structure

in an unknown language.

And the unknown language that I picked was English

because it was the easiest one

to find a corpus for, or to construct one.

And he gave me the largest computer at the whole university.

It had two gigabytes of RAM, which was amazing.

And I wrote everything in C

with some in memory compression to do statistics

over the language.

And I first would create a dictionary of all the words,

which basically tokenizes everything and compresses things

so that I don’t need to store the whole word,

but just a code for every word.

And then I was taking this all apart in sentences

and I was trying to find all the relationships

between all the words in the sentences

and do statistics over them.

And that proved to be impossible

because the complexity is just too large.

So if you want to discover the relationship

between an article and a noun,

and there are three adjectives in between,

you cannot do ngram statistics

and look at all the possibilities that can exist,

at least not with the resources that we had back then.

So I realized I need to make some statistics

over what I need to make statistics over.

So I wrote something that was pretty much a hack

that did this for at least first order relationships.

And I came up with some kind of mutual information graph

that was indeed discovering something that looks exactly

like the grammatical structure of the sentence,

just by trying to encode the sentence

in such a way that the words would be written

in the optimal order inside of the model.
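
The flavor of that early experiment can be reproduced in a few lines: tokenize a corpus, count how often word pairs co-occur within a sentence, and score each pair by pointwise mutual information; the strongest links form a graph that tends to mirror grammatical structure. The toy corpus below is obviously a stand-in for the real one, and the scoring is deliberately crude.

```python
import math
from collections import Counter
from itertools import combinations

# Tiny stand-in corpus (the original project used a large English corpus).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
    "the dog chased the cat",
]

sentences = [line.split() for line in corpus]
word_counts = Counter(w for s in sentences for w in s)
pair_counts = Counter(frozenset(p) for s in sentences
                      for p in combinations(s, 2) if p[0] != p[1])
total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

def pmi(w1, w2):
    # Pointwise mutual information: how much more often the two words
    # co-occur in a sentence than their individual frequencies predict.
    p_pair = pair_counts[frozenset((w1, w2))] / total_pairs
    p1 = word_counts[w1] / total_words
    p2 = word_counts[w2] / total_words
    return math.log2(p_pair / (p1 * p2)) if p_pair > 0 else float("-inf")

for pair in [("cat", "sat"), ("cat", "dog"), ("the", "a")]:
    print(pair, round(pmi(*pair), 2))
```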

And what I also found is that if we would be able

to increase the resolution of that

and not just use this model

to reproduce grammatically correct sentences,

we would also be able

to produce stylistically correct sentences

by just having more bits in these relationships.

And if we wanted to have meaning,

we would have to go much higher order.

And I didn’t know how to make higher order models back then

without spending way more years in research

on how to make the statistics

over what we need to make statistics over.

And this thing that we cannot look at the relationships

between all the bits in your input is being solved

in different domains in different ways.

So in computer graphics, computer vision,

standard methods for many years now

is convolutional neural networks.

Convolutional neural networks are hierarchies of filters

that exploit the fact that neighboring pixels

in images are usually semantically related

and distant pixels in images

are usually not semantically related.

So you can just by grouping the pixels

that are next to each other,

hierarchically together reconstruct the shape of objects.

And this is an important prior

that we built into these models

so they can converge quickly.
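
That locality prior is literally the convolution operation: the same small filter slides over the image and only ever combines neighboring pixels. A bare-bones version, written out by hand for clarity:

```python
import numpy as np

# A 3x3 filter applied locally: the prior that neighboring pixels are
# semantically related is baked into the architecture itself.
def convolve2d(image, kernel):
    h, w = image.shape
    k = kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3] = 1.0                                   # a vertical edge
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])
print(convolve2d(image, edge_filter))               # responds only where the edge is
```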

But this doesn’t work in language

for the reason that adjacent words are often

but not always related and distant words

are sometimes related while the words in between are not.

So how can you learn the topology of language?

And I think for this reason that this difficulty existed,

the transformer was invented

in natural language processing, not in vision.

And what the transformer is doing,

it’s a hierarchy of layers where every layer learns

what to pay attention to in the given context

in the previous layer.

So what to make the statistics over.

And the context is significantly larger

than the adjacent word.

Yes, so the context that GPT3 has been using,

the transformer itself is from 2017

and it wasn’t using that large of a context.

OpenAI has basically scaled up this idea

as far as they could at the time.

And the context is about 2048 symbols,

tokens in the language.

These symbols are not characters,

but they take the words and project them

into a vector space where words

that are statistically co occurring a lot

are neighbors already.

So it’s already a simplification

of the problem a little bit.

And so every word is basically a set of coordinates

in a high dimensional space.

And then they use some kind of trick

to also encode the order of the words,

not just within a sentence but across the whole context,

and 2048 tokens is about a couple of pages of text,

or two and a half pages of text.

And so they managed to do pretty exhaustive statistics

over the potential relationships

between two pages of text, which is tremendous.
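
A stripped-down, single-head version of that attention step, with a simplified sinusoidal position code standing in for the trick that encodes word order (the real GPT3 uses learned projections, many heads, many layers, and a 2048-token context rather than 8):

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x):
    # One attention layer: every position decides (here via random projections,
    # learned ones in a real model) what to pay attention to in the context,
    # i.e. what to make the statistics over.
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.normal(0, d ** -0.5, (d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the context
    return weights @ v

context_length, d_model = 8, 16                           # GPT3 used 2048 tokens, not 8
tokens = rng.normal(size=(context_length, d_model))       # word embeddings
positions = np.sin(np.arange(context_length)[:, None]
                   / 10000 ** (np.arange(d_model)[None, :] / d_model))
layer_out = self_attention(tokens + positions)            # order is encoded additively
print(layer_out.shape)                                    # (8, 16): one updated vector per token
```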

I was just using a single sentence back then.

And I was only looking for first order relationships.

And they were really looking

for much, much higher level relationships.

And what they discovered after they fed this

with an enormous amount of training data,

pretty much the written internet

or a subset of it that had some quality,

a substantial portion of the Common Crawl,

is that they’re not only able to reproduce style,

but they’re also able to reproduce

some pretty detailed semantics,

like being able to add three digit numbers

and multiply two digit numbers

or to translate between programming languages

and things like that.

So the results that GPT3 got, I think were amazing.

By the way, I actually didn’t check carefully.

It’s funny you just mentioned

how you coupled semantics to the multiplication.

Is it able to do some basic math on two digit numbers?

Yes.

Okay, interesting.

I thought there’s a lot of failure cases.

Yeah, it basically fails if you take larger digit numbers.

So with four digit numbers and so on, it makes carrying mistakes

and so on.

And if you take larger numbers,

you don’t get useful results at all.

And this could be an issue of the training set

where there are not many examples

of successful long form addition

and standard human written text.

And humans aren’t very good

at doing three digit numbers either.

Yeah, you’re not writing a lot about it.

And the other thing is that the loss function

that is being used is only minimizing surprise.

So it’s predicting what comes next in the typical text.

It’s not trying to go for causal closure first

as we do.
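
That loss is just the cross-entropy of the next token, which is to say the model's surprise at what actually came next; nothing in it asks whether the continuation is causally or logically closed. In miniature:

```python
import numpy as np

# The training signal is just "surprise": the negative log probability the
# model assigned to the token that actually came next in the text.
vocab = ["the", "cat", "sat", "mat"]
predicted_probs = np.array([0.1, 0.2, 0.6, 0.1])   # model's guess for the next token
actual_next = "sat"

surprise = -np.log(predicted_probs[vocab.index(actual_next)])
print(f"loss (surprise) = {surprise:.3f}")          # low, because 'sat' was expected
```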

Yeah.

But the fact that that kind of prediction works

to generate text that’s semantically rich

and consistent is interesting.

Yeah.

So yeah, so it’s amazing that it’s able

to generate semantically consistent text.

It’s not consistent.

So the problem is that it loses coherence at some point,

but it’s also, I think, not correct to say

that GPT3 is unable to deal with semantics at all

because you ask it to perform certain transformations

in text and it performs these transformations in text.

And the kind of additions that it’s able

to perform are transformations in text, right?

And there are proper semantics involved.

You can also do more.

There was a paper that was generating lots

and lots of mathematically correct text

and was feeding this into a transformer.

And as a result, it was able to learn

how to do differentiation and integration in ways

that according to the authors, Mathematica could not.

To which some of the people in Mathematica responded

that they were not using Mathematica in the right way

and so on.

I have not really followed the resolution of this conflict.

This part, as a small tangent,

I really don’t like in machine learning papers,

which they often do anecdotal evidence.

They’ll find like one example

in some kind of specific use of Mathematica

and demonstrate, look, here’s,

they’ll show successes and failures,

but they won’t have a very clear representation

of how many cases this actually represents.

Yes, but I think as a first paper,

this is a pretty good start.

And so the take home message, I think,

is that the authors could get better results

from this in their experiments

than they could get from the way in

which they were using computer algebra systems,

which means that was not nothing.

And it’s able to perform substantially better

than GPT3 can, based on a much larger amount

of training data using the same underlying algorithm.

Well, let me ask, again,

so I’m using your tweets as if this is like Plato, right?

As if this is well thought out novels that you’ve written.

You tweeted, GPT4 is listening to us now.

This is one way of asking,

what are the limitations of GPT3 when it scales?

So what do you think will be the capabilities

of GPT4, GPT5, and so on?

What are the limits of this approach?

So obviously when we are writing things right now,

everything that we are writing now

is going to be training data

for the next generation of machine learning models.

So yes, of course, GPT4 is listening to us.

And I think the tweet is already a little bit older

and we now have Wu Dao

and we have a number of other systems

that basically are placeholders for GPT4.

I don’t know what OpenAI’s plans are in this regard.

I read that tweet in several ways.

So one is obviously everything you put on the internet

is used as training data.

But in a second way I read it is in a,

we talked about agency.

I read it as almost like GPT4 is intelligent enough

to be choosing to listen.

So not only like did a programmer tell it

to collect this data and use it for training,

I almost saw the humorous angle,

which is like it has achieved AGI kind of thing.

Well, the thing is, could we already be living in GPT5?

So GPT4 is listening and GPT5 actually constructing

the entirety of the reality where we…

Of course, in some sense,

what everybody is trying to do right now in AI

is to extend the transformer to be able to deal with video.

And there are very promising extensions, right?

There’s a work by Google that is called Perceiver

and that is overcoming some of the limitations

of the transformer by letting it learn the topology

of the different modalities separately.

And by training it to find better input features.

So basically feature abstractions that are being used

by this successor to GPT3 are chosen in such a way

that it’s able to deal with video input.

And there is more to be done.

So one of the limitations of GPT3 is that it’s amnesiac.

So it forgets everything beyond the two pages

that it currently reads also during generation,

not just during learning.

Do you think that’s fixable

within the space of deep learning?

Can you just make a bigger, bigger, bigger input?

No, I don’t think that our own working memory

is infinitely large.

It’s probably also just a few thousand bits.

But what you can do is you can structure

this working memory.

So instead of just force feeding this thing,

a certain thing that it has to focus on,

and it’s not allowed to focus on anything else

with its network,

you allow it to construct its own working memory as we do.

When we are reading a book,

it’s not that we are focusing our attention

in such a way that we can only remember the current page.

We will also try to remember other pages

and try to undo what we learned from them

or modify what we learned from them.

We might get up and take another book from the shelf.

We might go out and ask somebody,

we can edit our working memory in any way that is useful

to put a context together that allows us

to draw the right inferences and to learn the right things.

So this ability to perform experiments on the world

based on an attempt to become fully coherent

and to achieve causal closure,

to achieve a certain aesthetic of your modeling,

that is something that eventually needs to be done.

And at the moment we are skirting this in some sense

by building systems that are larger and faster

so they can use dramatically larger resources

than human beings can, and much more training data

to get to models that in some sense

are already way superhuman

and in other ways are laughingly incoherent.

So do you think sort of making the systems like,

what would you say, multi resolutional?

So like some of the language models

are focused on two pages,

some are focused on two books,

some are focused on two years of reading,

some are focused on a lifetime,

so it’s like stacks of GPT3s all the way down.

You want to have gaps in between them.

So it’s not necessarily two years, there’s no gaps.

It’s things out of two years or out of 20 years

or 2,000 years or 2 billion years

where you are just selecting those bits

that are predicted to be the most useful ones

to understand what you’re currently doing.

And this prediction itself requires a very complicated model

and that’s the actual model that you need to be making.

It’s not just that you are trying to understand

the relationships between things,

but what you need to make relationships,

discover relationships over.
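
One crude way to picture that selection step: score everything in a long-term store by its predicted usefulness for the current context and load only the top few items into the window. Here plain cosine similarity stands in for what would really have to be a much more complicated relevance model; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical long-term store: embeddings of things read over the years,
# far more than fits into a 2048-token window at once.
memory = rng.normal(size=(10000, 64))
current_context = rng.normal(size=64)

# Select only those bits predicted to be the most useful for what the system
# is currently doing; cosine similarity is the crude stand-in for that model.
scores = memory @ current_context / (
    np.linalg.norm(memory, axis=1) * np.linalg.norm(current_context))
selected = np.argsort(scores)[-8:]        # the handful of memories to load
print("recalled memory indices:", selected)
```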

I wonder what that thing looks like,

what the architecture for the thing

that’s able to have that kind of model.

I think it needs more degrees of freedom

than the current models have.

So it starts out with the fact that you possibly

don’t just want to have a feed forward model,

but you want it to be fully recurrent.

And to make it fully recurrent,

you probably need to loop it back into itself

and allow it to have skip connections.

Once you do this,

when you’re predicting the next frame

and your internal next frame in every moment,

and you are able to use skip connections,

it means that signals can travel from the output

of the network into the middle of the network

faster than the inputs do.
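
A small sketch of that wiring: a recurrent step where the previous output is fed back into the middle layer through a skip connection, so top-down information reaches the middle one step ahead of the bottom-up input for the same moment. The sizes and random weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8
W_in, W_mid, W_out = (rng.normal(0, 0.3, (d, d)) for _ in range(3))
W_skip = rng.normal(0, 0.3, (d, d))      # feedback from the output back into the middle

hidden = np.zeros(d)
output = np.zeros(d)

for t in range(5):
    x = rng.normal(size=d)                # the external input at this moment
    # The middle of the network sees both the fresh input and the previous
    # output via the skip connection, so top-down signals arrive there
    # one step earlier than the bottom-up input of the same moment.
    hidden = np.tanh(W_in @ x + W_mid @ hidden + W_skip @ output)
    output = np.tanh(W_out @ hidden)

print("output after 5 steps:", np.round(output, 2))
```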

Do you think it can still be differentiable?

Do you think it still can be a neural network?

Sometimes it can and sometimes it cannot.

So it can still be a neural network,

but not a fully differentiable one.

And when you want to deal with non differentiable ones,

you need to have an attention system

that is discrete and two dimensional

and can perform grammatical operations.

You need to be able to perform program synthesis.

You need to be able to backtrack

in these operations that you perform on this thing.

And this thing needs a model of what it’s currently doing.

And I think this is exactly the purpose

of our own consciousness.

Yeah, the program things are tricky on neural networks.

So let me ask you, it’s not quite program synthesis,

but the application of these language models

to generation, to program synthesis,

but generation of programs.

So if you look at GitHub Copilot,

which is based on OpenAI’s Codex,

I don’t know if you got a chance to look at it,

but it’s the system that’s able to generate code

once you prompt it with, what is it?

Like the header of a function with some comments.

And it seems to do an incredibly good job

or not a perfect job, which is very important,

but an incredibly good job of generating functions.

What do you make of that?

Are you, is this exciting

or is this just a party trick, a demo?

Or is this revolutionary?

I haven’t worked with it yet.

So it’s difficult for me to judge it,

but I would not be surprised

if it turns out to be a revolutionary.

And that’s because the majority of programming tasks

that are being done in the industry right now

are not creative.

People are writing code that other people have written,

or they’re putting things together from code fragments

that others have had.

And a lot of the work that programmers do in practice

is to figure out how to overcome the gaps

in their current knowledge

and the things that people have already done.

How to copy and paste from Stack Overflow, that’s right.

And so of course we can automate that.

Yeah, to make it much faster to copy and paste

from Stack Overflow.

Yes, but it’s not just copying and pasting.

It’s also basically learning which parts you need to modify

to make them fit together.

Yeah, like literally sometimes as simple

as just changing the variable names.

So it fits into the rest of your code.

Yes, but this requires that you understand the semantics

of what you’re doing to some degree.

And you can automate some of those things.

The thing that makes people nervous of course

is that a little bit wrong in a program

can have a dramatic effect on the actual final operation

of that program.

So that’s one little error,

which in the space of language doesn’t really matter,

but in the space of programs can matter a lot.

Yes, but this is already what is happening

when humans program code.

Yeah, this is.

So we have a technology to deal with this.

Somehow it becomes scarier when you know

that a program generated code

that’s running a nuclear power plant.

It becomes scarier.

You know, humans have errors too.

Exactly.

But it’s scarier when a program is doing it

because why, why?

I mean, there’s a fear that a program,

like a program may not be as good as humans

to know when stuff is important to not mess up.

Like there’s a misalignment of priorities of values

that’s potential.

Maybe that’s the source of the worry.

I mean, okay, if I give you code generated

by GitHub Copilot and code generated by a human

and say here, use one of these,

which how do you select today and in the next 10 years

which code do you use?

Wouldn’t you still be comfortable with the human?

At the moment when you go to Stanford to get an MRI,

they will write a bill to the insurance over $20,000.

And of this, maybe half of that gets paid by the insurance

and a quarter gets paid by you.

And the MRI costs them $600 to make, maybe probably less.

And what are the values of the person

that writes the software and deploys this process?

It’s very difficult for me to say whether I trust people.

I think that what happens there is a mixture

of proper Anglo Saxon Protestant values

where somebody is trying to serve an abstract greater whole

and organized crime.

Well, that’s a very harsh,

I think that’s a harsh view of humanity.

There’s a lot of bad people, whether incompetent

or just malevolent in this world, yes.

But it feels like the more malevolent,

so the more damage you do to the world,

the more resistance you have in your own human heart.

Yeah, but don’t explain with malevolence or stupidity

what can be explained by just people

acting on their incentives.

Right, so what happens in Stanford

is not that somebody is evil.

It’s just that they do what they’re being paid for.

No, it’s not evil.

That’s, I tend to, no, I see that as malevolence.

I see it as, even like being a good German,

as I told you offline, is some,

it’s not absolute malevolence,

but it’s a small amount, it’s cowardice.

I mean, when you see there’s something wrong with the world,

it’s either incompetence and you’re not able to see it,

or it’s cowardice that you’re not able to stand up,

not necessarily in a big way, but in a small way.

So I do think that is a bit of malevolence.

I’m not sure the example you’re describing

is a good example of that.

So the question is, what is it that you are aiming for?

And if you don’t believe in the future,

if you, for instance, think that the dollar is going to crash,

why would you try to save dollars?

If you don’t think that humanity will be around

in a hundred years from now,

because global warming will wipe out civilization,

why would you need to act as if it were?

Right, so the question is,

is there an overarching aesthetics

that is projecting you and the world into the future,

which I think is the basic idea of religion,

that you understand the interactions

that we have with each other

as some kind of civilization level agent

that is projecting itself into the future.

If you don’t have that shared purpose,

what is there to be ethical for?

So I think when we talk about ethics and AI,

we need to go beyond the insane bias discussions and so on,

where people are just measuring the distance

between a statistic and their preferred current world model.

The optimism, wait, wait, wait,

I was a little confused by the previous thing,

just to clarify.

There is a kind of underlying morality

to having an optimism that human civilization

will persist for longer than a hundred years.

Like I think a lot of people believe

that it’s a good thing for us to keep living.

Yeah, of course.

And thriving.

This morality itself is not an end to itself.

It’s instrumental to people living in a hundred years

from now or 500 years from now.

So it’s only justifiable if you actually think

that it will lead to people or increase the probability

of people being around in that timeframe.

And a lot of people don’t actually believe that,

at least not actively.

But believe what exactly?

So I was…

Most people don’t believe

that they can afford to act on such a model.

Basically what happens in the US

is I think that the healthcare system

is for a lot of people no longer sustainable,

which means that if they need the help

of the healthcare system,

they’re often not able to afford it.

And when they cannot help it,

they are often going bankrupt.

I think the leading cause of personal bankruptcy

in the US is the healthcare system.

And that would not be necessary.

It’s not because people are consuming

more and more medical services

and are achieving a much, much longer life as a result.

That’s not actually the story that is happening

because you can compare it to other countries.

And life expectancy in the US is currently not increasing

and it’s not as high as in all the other

industrialized countries.

So some industrialized countries are doing better

with a much cheaper healthcare system.

And what you can see is for instance,

administrative bloat.

The healthcare system has maybe, to some degree,

deliberately been set up as a job placement program

to allow people to continue living

a middle class existence,

despite not having a useful role in terms of productivity.

So they are being paid to push paper around.

And the number of administrators in the healthcare system

has been increasing much faster

than the number of practitioners.

And this is something that you have to pay for.

And also the revenues that are being generated

in the healthcare system are relatively large

and somebody has to pay for them.

And the reason why they are so large

is because market mechanisms are not working.

The FDA is largely not protecting people

from malpractice of healthcare providers.

The FDA is protecting healthcare providers

from competition.

Right, okay.

So this is a thing that has to do with values.

And this is not because people are malicious on all levels.

It’s because they are not incentivized

to act on a greater whole on this idea

that you treat somebody who comes to you as a patient,

like you would treat a family member.

Yeah, but we’re trying, I mean,

you’re highlighting a lot of the flaws

of the different institutions,

the systems we’re operating under,

but I think there’s a continued throughout history

mechanism design of trying to design incentives

in such a way that these systems behave

better and better and better.

I mean, it’s a very difficult thing

to operate a society of hundreds of millions of people

effectively.

Yes, so do we live in a society that is ever correcting?

Do we observe that our models

of what we are doing are predictive of the future,

and when they are not, do we improve them?

Are our laws adjudicated with clauses

that you put into every law,

what is meant to be achieved by that law

and the law will be automatically repealed

if it’s not achieving that, right?

If you are optimizing your own laws,

if you’re writing your own source code,

you probably make an estimate of what is this thing

that’s currently wrong in my life?

What is it that I should change about my own policies?

What is the expected outcome?

And if that outcome doesn’t manifest,

I will change the policy back, right?

Or I would change it to something different.
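[A minimal sketch of the self-correcting loop being described, in Python; the function names and the evaluation metric are hypothetical, purely for illustration:]

```python
def try_policy(current_policy, new_policy, evaluate, trials=10):
    """Adopt new_policy only if the outcome we predicted actually manifests.

    `evaluate` is a hypothetical function that runs a policy for a while and
    returns a score for the thing the policy was supposed to improve.
    """
    baseline = sum(evaluate(current_policy) for _ in range(trials)) / trials
    observed = sum(evaluate(new_policy) for _ in range(trials)) / trials
    if observed > baseline:
        return new_policy       # the expected outcome manifested, keep the change
    return current_policy       # it did not, so roll the policy back
```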

Are we doing this on a societal level?

I think so.

I think it’s easy to sort of highlight the,

I think we’re doing it in the way that,

like I operate my current life.

I didn’t sleep much last night.

You would say that Lex,

the way you need to operate your life

is you need to always get sleep.

The fact that you didn’t sleep last night

is totally the wrong way to operate in your life.

Like you should have gotten all your shit done in time

and gotten to sleep because sleep is very important

for health and you’re highlighting,

look, this person is not sleeping.

Look, the medical, the healthcare system is operating poorly.

But the point is we just,

it seems like this is the way,

especially in the capitalist society, we operate.

We keep running into trouble and last minute,

we try to get our way out through innovation

and it seems to work.

You have a lot of people that ultimately are trying

to build a better world and who get a sense of urgency

when the problem becomes more and more imminent.

And that’s the way this operates.

But if you look at the long arc of history,

it seems like that operating on deadlines

produces progress and builds better and better systems.

You probably agree with me that the US

should have engaged in mask production in January 2020

and that we should have shut down the airports early on

and that we should have made it mandatory

that the people that work in nursing homes

are living on campus rather than living at home

and then coming in and infecting people in the nursing homes

that had no immune response to COVID.

And that is something that was, I think, visible back then.

The correct decisions haven’t been made.

We would have the same situation again.

How do we know that these wrong decisions

are not being made again?

Have the people that made the decisions

to not protect the nursing homes been punished?

Have the people that made the wrong decisions

with respect to testing that prevented the development

of testing by startup companies and the importing

of tests from countries that already had them,

have these people been held responsible?

First of all, so who do you wanna put

before the firing squad?

I think they are being held responsible.

No, just make sure that this doesn’t happen again.

No, but it’s not that, yes, they’re being held responsible

by many voices, by people being frustrated.

There’s new leaders being born now

that we’re going to see rise to the top in 10 years.

This moves slower than, there’s obviously

a lot of older incompetence and bureaucracy

and these systems move slowly.

They move like science, one death at a time.

So yes, I think the pain that’s been felt

in the previous year is reverberating throughout the world.

Maybe I’m getting old, I suspect that every generation

in the US after the war has lost the plot even more.

I don’t see this development.

The war, World War II?

Yes, so basically there was a time when we were modernist

and in this modernist time, the US felt actively threatened

by the things that happened in the world.

The US was worried about possibility of failure

and this imminence of possible failure led to decisions.

There was a time when the government would listen

to physicists about how to do things

and the physicists were actually concerned

about what the government should be doing.

So they would be writing letters to the government

and so for instance, the decision for the Manhattan Project

was something that was driven in a conversation

between physicists and the government.

I don’t think such a discussion would take place today.

I disagree, I think if the virus was much deadlier,

we would see a very different response.

I think the virus was not sufficiently deadly

and instead because it wasn’t very deadly,

what happened is the current system

started to politicize it.

The mask, this is what I realized with masks early on,

they very quickly became not a solution

but a thing that politicians used

to divide the country.

So the same things happened with vaccines, same thing.

So like nobody’s really,

people weren’t talking about solutions to this problem

because I don’t think the problem was bad enough.

When you talk about the war,

I think our lives are too comfortable.

I think in the developed world, things are too good

and we have not faced severe dangers.

When the danger, the severe dangers,

existential threats are faced, that’s when we step up

on a small scale and a large scale.

Now, I don’t, that’s sort of my argument here

but I did think the virus is, I was hoping

that it was actually sufficiently dangerous

for us to step up because especially in the early days,

it was unclear, it still is unclear because of mutations,

how bad it might be, right?

And so I thought we would step up and even,

so the masks point is a tricky one because to me,

the manufacture of masks isn’t even the problem.

I’m still to this day and I was involved

with a bunch of this work, have not seen good science done

on whether masks work or not.

Like there still has not been a large scale study.

To me, that should be, there should be large scale studies

and every possible solution, like aggressive

in the same way that the vaccine development

was aggressive.

There should be studies on masks, on which tests,

what kind of tests work really well, what kind of,

like even the question of how the virus spreads.

There should be aggressive studies on that to understand.

I’m still, as far as I know, there’s still a lot

of uncertainty about that.

Nobody wants to see this as an engineering problem

that needs to be solved.

It’s that I was surprised about, but I wouldn’t.

So I find that our views are largely convergent

but not completely.

So I agree with the point that our society

in some sense perceives itself as too big to fail.

Right.

The virus did not alert people to the fact

that we are facing possible failure

and that basically put us into the postmodernist mode.

And I don’t mean in a philosophical sense

but in a societal sense.

The difference between the postmodern society

and the modern society is that the modernist society

has to deal with the ground truth

and the postmodernist society has to deal with appearances.

Politics becomes a performance

and the performance is done for an audience

and the organized audience is the media.

And the media evaluates itself via other media, right?

So you have an audience of critics that evaluate themselves.

And I don’t think it’s so much the failure

of the politicians because to get in power

and to stay in power, you need to be able

to deal with the published opinion.

Well, I think it goes in cycles

because what’s going to happen is all

of the small business owners, all the people

who truly are suffering and will suffer more

because the effects of the closure of the economy

and the lack of solutions to the virus,

they’re going to apprise.

And hopefully, I mean, this is where charismatic leaders

can get the world in trouble

but hopefully we'll elect great leaders

that will break through this postmodernist idea

of the media and the perception

and the drama on Twitter and all that kind of stuff.

But you know, this can go either way.

Yeah.

When the Weimar Republic was unable to deal

with the economic crisis that Germany was facing,

there was an option to go back.

There were people who thought,

let’s get back to a constitutional monarchy

and let’s get this to work because democracy doesn’t work.

And eventually, there was no way back.

People decided there was no way back.

They needed to go forward.

And the only options for going forward

were to become Stalinist communist,

basically an option to completely expropriate

the factories and so on and nationalize them

and to reorganize Germany in communist terms

and ally itself with Stalin, or fascism.

And both options were obviously very bad.

And the one that the Germans picked

led to a catastrophe that devastated Europe.

And I’m not sure if the US has an immune response

against that.

I think that the far right is currently very weak in the US,

but this can easily change.

Do you think from a historical perspective,

Hitler could have been stopped

from within Germany or from outside?

Well, it depends on who you wanna focus on,

whether you wanna focus on Stalin or Hitler,

but it feels like Hitler was the one

as a political movement that could have been stopped.

I think that the point was that a lot of people

wanted Hitler, so he got support from a lot of quarters.

There was a number of industrialists who supported him

because they thought that the democracy

is obviously not working and unstable

and you need a strong man.

And he was willing to play that part.

There were also people in the US who thought

that Hitler would stop Stalin

and would act as a bulwark against Bolshevism,

which he probably would have done, right?

But at which cost?

And then many of the things that he was going to do,

like the Holocaust, was something where people thought

this is rhetoric, he’s not actually going to do this.

Especially many of the Jews themselves, who were humanists.

And for them, this was outside of the scope

that was thinkable.

Right.

I mean, I wonder if Hitler is uniquely,

I wanna carefully use this term, but uniquely evil.

So if Hitler was never born,

if somebody else would come in this place.

So like, just thinking about the progress of history,

how important are those singular figures

that lead to mass destruction and cruelty?

Because my sense is Hitler was unique.

It wasn’t just about the environment

and the context that gave rise to him,

like another person would not have come in his place

to do things as destructive as he did.

There was a combination of charisma, of madness,

of psychopathy, of just ego, all those things,

which are very unlikely to come together

in one person in the right time.

It also depends on the context of the country

that you’re operating in.

If you tell the Germans that they have a historical destiny

in this romantic country,

the effect is probably different

than it is in other countries.

But Stalin has killed a few more people than Hitler did.

And if you look at the probability

that you survived under Stalin,

Hitler killed people if he thought

they were not worth living,

or if they were harmful to his racist project.

He basically felt that the Jews would be too cosmopolitan

and would not be willing to participate

in the racist redefinition of society

and the values of society

and the nation state in the way

that he wanted to have it.

So he saw them as harmful danger,

especially since they played such an important role

in the economy and culture of Germany.

And so basically he had some radical

but rational reason to murder them.

And Stalin just killed everyone.

Basically the Stalinist purges were such a random thing

where he said that there’s a certain possibility

that this particular part of the population

has a number of German collaborators or something,

and we just kill them all, right?

Or if you look at what Mao did,

the number of people that were killed

in absolute numbers were much higher under Mao

than they were under Stalin.

So it’s super hard to say.

The other thing is that you look at Genghis Khan and so on,

how many people he killed.

You see there are a number of things

that happen in human history

that actually really put a substantial dent

in the existing population, or Napoleon.

And it’s very difficult to eventually measure it

because what’s happening is basically evolution

on a human scale where one monkey figures out

a way to become viral and is using this viral technology

to change the patterns of society

at the very, very large scale.

And what we find so abhorrent about these changes

is the complexity that is being destroyed by this.

That’s basically like a big fire that burns out

a lot of the existing culture and structure

that existed before.

Yeah, and it all just starts with one monkey.

One charismatic ape.

And there’s a bunch of them throughout history.

Yeah, but it’s in a given environment.

It’s basically similar to wildfires in California, right?

The temperature is rising.

There is less rain falling.

And then suddenly a single spark can have an effect

that in other times would be contained.

Okay, speaking of which, I love how we went

to Hitler and Stalin from 20, 30 minutes ago,

GPT3 generating programs.

The argument was about morality of AI versus human.

And specifically in the context of writing programs,

specifically in the context of programs

that can be destructive.

So running nuclear power plants

or autonomous weapons systems, for example.

And I think your inclination was to say that

it’s not so obvious that AI would be less moral than humans

or less effective at making a world

that would make humans happy.

So I’m not talking about self directed systems

that are making their own goals at a global scale.

If you just talk about the deployment

of technological systems that are able to see order

and patterns and use this as control models

to act on the goals that we give them,

then if we have the correct incentives

to set the correct incentives for these systems,

I’m quite optimistic.

So humans versus AI, let me give you an example.

Autonomous weapon system.

Let’s say there’s a city somewhere in the Middle East

that has a number of terrorists.

And the question is,

what’s currently done with drone technologies,

you have information about the location

of a particular terrorist and you have a targeted attack,

you have a bombing of that particular building.

And that’s all directed by humans

at the high level strategy

and also at the deployment of individual bombs and missiles

like the actual, everything is done by human

except the final targeting.

And it’s like spot, similar thing, like control the flight.

Okay, what if you give AI control and saying,

write a program that says,

here’s the best information I have available

about the location of these five terrorists,

here’s the city, make sure all the bombing you do

is constrained to the city, make sure it’s precision based,

but you take care of it.

So you do one level of abstraction out

and saying, take care of the terrorists in the city.

Which are you more comfortable with,

the humans or the JavaScript GPT3 generated code

that’s doing the deployment?

I mean, this is the kind of question I’m asking,

is the kind of bugs that we see in human nature,

are they better or worse than the kind of bugs we see in AI?

There are different bugs.

There is an issue that if people are creating

an imperfect automation of a process

that normally requires a moral judgment,

and this moral judgment is the reason

why it cannot be automated often,

it’s not because the computation is too expensive,

but because the model that you give the AI

is not an adequate model of the dynamics of the world,

because the AI does not understand the context

that it’s operating in the right way.

And this is something that already happens with Excel.

You don’t need to have an AI system to do this.

You have an automated process in place

where humans decide using automated criteria

whom to kill when and whom to target when,

which already happens.

And you have no way to get off the kill list

once that happens, once you have been targeted

according to some automatic criterion

by people in a bureaucracy, that is the issue.

The issue is not the AI, it’s the automation.

So there’s something about, right, it’s automation,

but there’s something about the,

there’s a certain level of abstraction

where you give control to AI to do the automation.

There’s a scale that can be achieved

that it feels like the scale of bug and scale mistake

and scale of destruction that can be achieved

of the kind that humans cannot achieve.

So AI is much more able to destroy

an entire country accidentally versus humans.

It feels like the more civilians die

or suffer as the consequences of your decisions,

the more weight there is on the human mind

to make that decision.

And so like, it becomes more and more unlikely

to make that decision for humans.

For AI, it feels like it’s harder to encode

that kind of weight.

In a way, the AI that we’re currently building

is automating statistics, right?

Intelligence is the ability to make models

so you can act on them,

and AI is the tool to make better models.

So in principle, if you’re using AI wisely,

you’re able to prevent more harm.

And I think that the main issue is not on the side of the AI,

it’s on the side of the human command hierarchy

that is using technology irresponsibly.

So the question is how hard is it to encode,

to properly encode the right incentives into the AI?

So for instance, there’s this idea

of what happens if we let our airplanes be flown

with AI systems and the neural network is a black box

and so on.

And it turns out our neural networks

are actually not black boxes anymore.

They are function approximators using linear algebra,

and they are performing things that we can understand.
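[To illustrate the "function approximators using linear algebra" point: the forward pass of a small two-layer network is just matrix multiplications plus a simple nonlinearity. The sizes and numbers below are arbitrary, chosen only to make the sketch runnable:]

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # first layer: weights and biases
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # second layer: weights and biases

def forward(x):
    """Tiny two-layer network: linear map, ReLU nonlinearity, linear map."""
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer activations
    return W2 @ h + b2                 # scalar output

print(forward(np.array([0.1, -0.2, 0.3, 0.4])))
```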

But we can also, instead of letting the neural network

fly the airplane, use the neural network

to generate a provably correct program.

There’s a degree of accuracy of the proof

that a human could not achieve.

And so we can use our AI by combining

different technologies to build systems

that are much more reliable than the systems

that a human being could create.

And so in this sense, I would say that

if you use an early stage of technology to save labor

and don’t employ competent people,

but just to hack something together because you can,

that is very dangerous.

And if people are acting under these incentives

that they get away with delivering shoddy work

more cheaply using AI with less human oversight than before,

that’s very dangerous.

The thing is though, AI is still going to be unreliable,

perhaps less so than humans,

but it’ll be unreliable in novel ways.

And…

Yeah, but this is an empirical question.

And it’s something that we can figure out and work with.

So the issue is, do we trust the systems,

the social systems that we have in place

and the social systems that we can build and maintain

that they’re able to use AI responsibly?

If they can, then AI is good news.

If they cannot,

then it’s going to make the existing problems worse.

Well, and also who creates the AI, who controls it,

who makes money from it because it’s ultimately humans.

And then you start talking about

how much you trust the humans.

So the question is, what does who mean?

I don’t think that we have identity per se.

I think that the story of a human being is somewhat random.

What happens is more or less that everybody is acting

on their local incentives,

what they perceive to be their incentives.

And the question is, what are the incentives

that the one that is pressing the button is operating under?

Yeah.

It’s nice for those incentives to be transparent.

So, for example, I’ll give you an example.

There seems to be a significant distrust

of a tech, like entrepreneurs in the tech space

or people that run, for example, social media companies

like Mark Zuckerberg.

There’s not a complete transparency of incentives

under which that particular human being operates.

We can listen to the words he says

or what the marketing team says for a company,

but we don’t know.

And that becomes a problem when the algorithms

and the systems created by him and other people

in that company start having more and more impact

on society.

And so, if the incentives,

the definition and the explainability of the incentives,

were somehow decentralized such that nobody can manipulate them,

no propaganda type manipulation of like

how these systems actually operate could be done,

then yes, I think AI could achieve much fairer,

much more effective sort of like solutions

to difficult ethical problems.

But when there’s like humans in the loop,

manipulating the dissemination, the communication

of how the system actually works,

that feels like you can run into a lot of trouble.

And that’s why there’s currently a lot of distrust

for people at the heads of companies

that have increasingly powerful AI systems.

I suspect what happened traditionally in the US

was that since our decision making

is much more decentralized than in an authoritarian state,

people are making decisions autonomously

at many, many levels in a society.

What happened was that we created coherence

and cohesion in society by controlling what people thought

and what information they had.

The media synchronized public opinion

and social media have disrupted this.

It’s not, I think so much Russian influence or something,

it’s everybody’s influence.

It’s that a random person can come up

with a conspiracy theory and disrupt what people think.

And if that conspiracy theory is more compelling

or more attractive than the standardized

public conspiracy theory that we give people as a default,

then it might get more traction, right?

You suddenly have the situation that a single individual

somewhere on a farm in Texas has more listeners than CNN.

Which particular farmer are you referring to in Texas?

You probably know.

Yes, I had dinner with him a couple of times, okay.

Right, it’s an interesting situation

because you cannot get to be an anchor in CNN

if you don’t go through a complicated gatekeeping process.

And suddenly you have random people

without that gatekeeping process,

just optimizing for attention.

Not necessarily with a lot of responsibility

for the longterm effects of projecting these theories

into the public.

And now there is a push of making social media

more like traditional media,

which means that the opinion that is being projected

in social media is more limited to an acceptable range.

With the goal of getting society into safe waters

and increasing the stability and cohesion of society again,

which I think is a laudable goal.

But of course it also is an opportunity

to seize the means of indoctrination.

And the incentives that people are under when they do this

are in such a way that the AI ethics that we would need

becomes very often something like AI politics,

which is basically partisan and ideological.

And this means that whatever one side says,

another side is going to be disagreeing with, right?

In the same way as when you turn masks or the vaccine

into a political issue,

if you say that it is politically virtuous

to get vaccinated,

it will mean that the people that don’t like you

will not want to get vaccinated, right?

And as soon as you have this partisan discourse,

it’s going to be very hard to make the right decisions

because the incentives get to be the wrong ones.

AI ethics needs to be super boring.

It needs to be done by people who do statistics

all the time and have extremely boring,

long winded discussions that most people cannot follow

because they are too complicated,

but that are dead serious.

These people need to be able to be better at statistics

than the leading machine learning researchers.

And at the moment, the AI ethics debate is the one

where you don’t have any barrier to entry, right?

Everybody who has a strong opinion

and is able to signal that opinion in the right way

can enter it.

And to me, that is a very frustrating thing

because the field is so crucially important

to our future.

It’s so crucially important,

but the only qualification you currently need

is to be outraged by the injustice in the world.

It’s more complicated, right?

Everybody seems to be outraged.

But let’s just say that the incentives

are not always the right ones.

So basically, I suspect that a lot of people

that enter this debate don’t have a vision

for what society should be looking like

in a way that is nonviolent,

where we preserve liberal democracy,

where we make sure that we all get along

and we are around in a few hundred years from now,

preferably with a comfortable

technological civilization around us.

I generally have a very foggy view of that world,

but I tend to try to follow,

and I think society should in some degree

follow the gradient of love,

increasing the amount of love in the world.

And whenever I see different policies

or algorithms or ideas that are not doing so,

obviously, that’s the ones that kind of resist.

So the thing that terrifies me about this notion

is I think that German fascism was driven by love.

It was just a very selective love.

It was a love that basically…

Now you’re just manipulating.

I mean, that’s, you have to be very careful.

You’re talking to the wrong person in this way about love.

So let’s talk about what love is.

And I think that love is the discovery of shared purpose.

It’s the recognition of the sacred in the other.

And this enables non transactional interactions.

But the size of the other that you include

needs to be maximized.

So it’s basically appreciation,

like deep appreciation of the world around you fully,

including the people that are very different than you,

people that disagree with you completely,

including people, including living creatures

outside of just people, including ideas.

And it’s like appreciation of the full mess of it.

And also it has to do with like empathy,

which is coupled with a lack of confidence

and certainty of your own rightness.

It’s like a radical open mindedness to the way forward.

I agree with every part of what you said.

And now if you scale it up,

what you recognize is that love is, in some sense,

the service to the next level agency,

to the highest level agency that you can recognize.

It could be for instance, life on earth or beyond that,

where you could say intelligent complexity in the universe

that you try to maximize in a certain way.

But when you think it’s true,

it basically means a certain aesthetic.

And there is not one possible aesthetic,

there are many possible aesthetics.

And once you project an aesthetic into the future,

you can see that there are some which defect from it,

which are in conflict with it,

that are corrupt, that are evil.

You and me would probably agree that Hitler was evil

because the aesthetic of the world that he wanted

is in conflict with the aesthetic of the world

that you and me have in mind.

And so the things that he destroyed,

we want to keep in the world.

There’s a kind of, there’s kind of ways to deal,

I mean, Hitler is an easier case,

but perhaps he wasn’t so easy in the 30s, right?

To understand who is Hitler and who is not.

No, it was just there was no consensus

that the aesthetics that he had in mind were unacceptable.

Yeah, I mean, it’s difficult, love is complicated

because you can’t just be so open minded

that you let evil walk into the door,

but you can’t be so self assured

that you can always identify evil perfectly

because that’s what leads to Nazi Germany.

Having a certainty of what is and isn't evil,

like always drawing lines of good versus evil.

There seems to be, there has to be a dance

between like hard stances, standing up

against what is wrong.

And at the same time, empathy and open mindedness

towards not knowing what is right and wrong

and like a dance between those.

I found that when I watched the Miyazaki movies

that there is nobody who captures my spirituality

as well as he does.

It’s very interesting and just vicious, right?

There is something going on in his movies

that is very interesting.

So for instance, Mononoke is

not only an answer to Disney's simplistic notion of Mowgli,

the jungle boy who was raised by wolves

and who, as soon as he sees people, realizes that he's one of them,

but also to the way in which the moral life and nature

is simplified and romanticized and turned into kitsch.

It’s disgusting in the Disney movie.

And he answers to this, you see,

he’s replaced by Mononoke, this wolf girl

who was raised by wolves and was fierce and dangerous

and who cannot be socialized because she cannot be tamed.

She cannot be part of human society.

And you see human society,

it’s something that is very, very complicated.

You see people extracting resources and destroying nature.

But the purpose is not to be evil,

but to be able to have a life that is free from,

for instance, oppression and violence

and to curb death and disease.

And you basically see this conflict

which cannot be resolved in a certain way.

You see this moment when nature is turned into a garden

and it loses most of what it actually is

and humans no longer submitting to life and death

and nature. And to these questions, there is no easy answer.

So it just turns it into something that is being observed

as a journey that happens.

And that happens with a certain degree of inevitability.

And the nice thing about all his movies

is there’s a certain main character

and it’s the same in all movies.

It’s this little girl that is basically Heidi.

And I suspect that happened because when he did field work

for working on the Heidi movies back then,

the Heidi animations, before he did his own movies,

he traveled to Switzerland and South Eastern Europe

and the Adriatic and so on and got an idea

about a certain aesthetic and a certain way of life

that informed his future thinking.

And Heidi has a very interesting relationship

to herself and to the world.

There’s nothing that she takes for herself.

She’s in a way fearless because she is committed

to a service, to a greater whole.

Basically, she is completely committed to serving God.

And it’s not an institutionalized God.

It has nothing to do with the Roman Catholic Church

or something like this.

But in some sense, Heidi is an embodiment

of the spirit of European Protestantism.

It’s this idea of a being that is completely perfect

and pure.

And it’s not a feminist vision

because she is not a girl boss or something like this.

She is the justification for the men in the audience

to protect her, to build a civilization around her

that makes her possible.

So she is not just the sacrifice of Jesus

who is innocent and therefore nailed to the cross.

She is not being sacrificed.

She is being protected by everybody around her

who recognizes that she is sacred.

And there are enough around her to see that.

So this is a very interesting perspective.

There’s a certain notion of innocence.

And this notion of innocence is not universal.

It’s not in all cultures.

Hitler wasn’t innocent.

His idea of Germany was not that there is an innocence

that is being protected.

There was a predator that was going to triumph.

And it’s also something that is not at the core

of every religion.

There are many religions which don’t care about innocence.

They might care about increasing the status of something.

And that’s a very interesting notion that is quite unique

and not claiming it’s the optimal one.

It’s just a particular kind of aesthetic

which I think makes Miyazaki

into the most relevant Protestant philosopher today.

And you’re saying in terms of all the ways

that a society can operate perhaps the preservation

of innocence might be one of the best.

No, it’s just my aesthetic.

So it’s a particular way in which I feel

that I relate to the world that is natural

to my own socialization.

And maybe it’s not an accident

that I have cultural roots in Europe

in a particular world.

And so maybe it’s a natural convergence point

and it’s not something that you will find

in all other times in history.

So I’d like to ask you about Solzhenitsyn

and our individual role as ants in this very large society.

So he says that some version of the line

between good and evil runs through the heart of every man.

Do you think all of us are capable of good and evil?

Like what’s our role in this play

in this game we’re all playing?

Are all of us capable of playing any role?

Like, is there an ultimate responsibility to,

you mentioned maintaining innocence,

or whatever the highest ideal for a society you want,

are all of us capable of living up to that?

And that’s our responsibility

or is there significant limitations

to what we’re able to do in terms of good and evil?

So there is a certain way, if you are not feral,

if you are committed to some kind of civilizational agency,

a next level agent that you are serving,

some kind of transcendent principle.

In the eyes of that transcendental principle,

you are able to discern good from evil.

Otherwise you cannot,

otherwise you have just individual aesthetics.

The cat that is torturing a mouse is not evil

because the cat does not envision

or no part of the world of the cat is envisioning a world

where there is no violence and nobody is suffering.

If you have an aesthetic where you want

to protect innocence,

then torturing somebody needlessly is evil,

but only then.

No, but within, I guess the question is within the aesthetic,

like within your sense of what is good and evil,

are we still, it seems like we’re still able

to commit evil.

Yes, so basically if you are committing

to this next level agent,

you are not necessarily this next level agent, right?

You are a part of it.

You have a relationship to it,

like the cell does to its organism, its hyperorganism.

And it only exists to the degree

that it’s being implemented by you and others.

And that means that you’re not completely fully serving it.

You have freedom in what you decide,

whether you are acting on your impulses

and local incentives and your feral impulses,

so to speak, or whether you’re committing to it.

And what you perceive then is a tension

between what you would be doing with respect

to the thing that you recognize as the sacred, if you do,

and what you’re actually doing.

And this is the line between good and evil,

right where you see, oh, I’m here acting

on my local incentives or impulses,

and here I’m acting on what I consider to be sacred.

And there’s a tension between those.

And this is the line between good and evil

that might run through your heart.

And if you don’t have that,

if you don’t have this relationship

to a transcendental agent,

you could call this relationship

to the next level agent soul, right?

It’s not a thing.

It’s not an immortal thing that is intrinsically valuable.

It’s a certain kind of relationship

that you project to understand what’s happening.

Somebody is serving this transcendental sacredness

or they’re not.

If you don’t have a soul, you cannot be evil.

You’re just a complex natural phenomenon.

So if you look at life, like starting today

or starting tomorrow, when we leave here today,

there’s a bunch of trajectories

that you can take through life, maybe countless.

Do you think some of these trajectories,

in your own conception of yourself,

some of those trajectories are the ideal life,

a life that if you were to be the hero of your life story,

you would want to be?

Like, is there some Joscha Bach you're striving to be?

Like, this is the question I ask myself

as an individual trying to make a better world

in the best way that I could conceive of.

What is my responsibility there?

And how much am I responsible for the failure to do so?

Because I’m lazy and incompetent too often.

In my own perception.

In my own worldview, I’m not very important.

So it’s, I don’t have place for me as a hero

in my own world.

I’m trying to do the best that I can,

which is often not very good.

And so it’s not important for me to have status

or to be seen in a particular way.

It’s helpful if others can see me

or a few people can see me that can be my friends.

No, sorry, I want to clarify,

the hero I didn’t mean status or perception

or like some kind of marketing thing,

but more in private, in the quiet of your own mind.

Is there the kind of man you want to be

and would consider it a failure if you don’t become that?

That’s what I meant by hero.

Yeah, not really.

I don’t perceive myself as having such an identity.

And it’s also sometimes frustrating,

but it’s basically a lack of having this notion

of father that I need to be emulating.

It’s interesting.

I mean, it’s the leaf floating down the river.

I worry that…

Sometimes it’s more like being the river.

I’m just a fat frog sitting on a leaf

on a dirty, muddy lake.

I wish I was waiting for a princess to kiss me.

Or the other way, I forgot which way it goes.

Somebody kisses somebody.

I can ask you, I don’t know if you know

who Michael Malice is,

but in terms of constructing systems of incentives,

it’s interesting to ask.

I don’t think I’ve talked to you about this before.

Malice espouses anarchism.

So he sees all government as fundamentally

getting in the way or even being destructive

to collaborations between human beings thriving.

What do you think?

What’s the role of government in a society that thrives?

Is anarchism at all compelling to you as a system?

So like not just small government,

but no government at all.

Yeah, I don’t see how this would work.

The government is an agent that imposes an offset

on your reward function, on your payout metrics.

So your behavior becomes compatible with the common good.
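[One way to read "an offset on your reward function", as a toy sketch: in a one-shot prisoner's-dilemma-style payoff matrix, defection pays best individually, and a fine imposed from outside, the offset, can make cooperation the individually rational choice. The payoffs below are made up:]

```python
# Toy payoffs for one player, indexed by (own action, other player's action).
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,   # defection pays best individually...
    ("defect", "defect"): 1,
}

def best_response(other_action, fine_on_defection=0):
    """Pick the action with the highest payoff after the externally imposed offset."""
    return max(
        ("cooperate", "defect"),
        key=lambda a: payoff[(a, other_action)]
        - (fine_on_defection if a == "defect" else 0),
    )

print(best_response("cooperate", fine_on_defection=0))  # 'defect'
print(best_response("cooperate", fine_on_defection=3))  # 'cooperate'
```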

So the argument there is that you can have collectives

like governing organizations, but not government,

like where you’re born in a particular set of land

and therefore you must follow this rule or else.

You’re forced by what they call violence

because there’s an implied violence here.

So the key aspect of government is it protects you

from the rest of the world with an army and with police.

So it has a monopoly on violence.

It’s the only one that’s able to do violence.

So there are many forms of government,

not all governments do that.

But we find that in successful countries,

the government has a monopoly on violence.

And that means that you cannot get ahead

by starting your own army because the government

will come down on you and destroy you

if you try to do that.

And in countries where you can build your own army

and get away with it, some people will do it.

And these countries are what we call failed countries

in a way.

And if you don’t want to have violence,

the point is not to appeal to the moral intentions of people

because some people will use strategies

if they get ahead with them, that fill a particular kind

of ecological niche.

So you need to destroy that ecological niche.

And if an effective government has a monopoly on violence,

it can create a world where nobody is able to use violence

and get ahead.

So you want to use that monopoly on violence,

not to exert violence, but to make violence impossible,

to raise the cost of violence.

So people need to get ahead with nonviolent means.

So the idea is that you might be able to achieve that

in an anarchist state with companies.

So with the forces of capitalism, you create security companies

where the one that’s most ethically sound rises to the top.

Basically, it would be a much better representative

of the people because there is a less sort of stickiness

to the big military force sticking around

even though it’s long overlived, outlived.

So you have groups of militants that are hopefully

efficiently organized because otherwise they’re going

to lose against the other groups of militants

and they are coordinating themselves with the rest

of society until they are having a monopoly on violence.

How is that different from a government?

So it’s basically converging to the same thing.

So I was trying to argue with Malice,

I feel like it always converges towards government at scale,

but I think the idea is you can have a lot of collectives

that are, you basically never let anything scale too big.

So one of the problems with governments is it gets too big

in terms of like the size of the group

over which it has control.

My sense is that would happen anyway.

So a successful company like Amazon or Facebook,

I mean, it starts forming a monopoly

over the entire populations,

not over just the hundreds of millions,

but billions of people.

So I don’t know, but there is something

about the abuses of power the government can have

when it has a monopoly on violence, right?

And so that’s a tension there, but…

So the question is how can you set the incentives

for government correctly?

And this mostly applies at the highest levels of government

and because we haven’t found a way to set them correctly,

we made the highest levels of government relatively weak.

And this is, I think, part of the reason

why we had difficulty to coordinate the pandemic response

and China didn’t have that much difficulty.

And there is, of course, a much higher risk

of the abuse of power that exists in China

because the power is largely unchecked.

And that’s basically what happens

in the next generation, for instance.

Imagine that we would agree

that the current government of China is largely correct

and benevolent, and maybe we don’t agree on this,

but if we did, how can we make sure

that this stays like this?

And if you don’t have checks and balances,

division of power, it’s hard to achieve.

You don’t have a solution for that problem.

But the abolishment of government

basically would remove the control structure.

From a cybernetic perspective,

there is an optimal point in the system

that the regulation should be happening, right?

That you can measure the current incentives

and the regulator would be properly incentivized

to make the right decisions

and change the payout metrics of everything below it

in such a way that the local prisoner's dilemmas

get resolved, right?

You cannot resolve the prisoner's dilemma

without some kind of eternal control

that emulates an infinite game in a way.
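[And a toy version of the "emulating an infinite game" point: with the same made-up payoffs as in the earlier sketch, if there is always a likely next round, a one-time gain from defecting against a tit-for-tat partner is outweighed by the cooperation it destroys afterwards:]

```python
def long_run_value(strategy, continuation=0.95, rounds=200):
    """Approximate discounted payoff against a tit-for-tat partner.

    strategy is 'always_cooperate' or 'always_defect'; payoffs as in the toy
    matrix above; continuation is the probability that another round follows.
    """
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    partner, total, weight = "C", 0.0, 1.0
    for _ in range(rounds):
        me = "C" if strategy == "always_cooperate" else "D"
        total += weight * payoff[(me, partner)]
        partner = me          # tit-for-tat: the partner copies my last move
        weight *= continuation
    return total

print(long_run_value("always_defect"))      # ~24: one-time gain, then mutual defection
print(long_run_value("always_cooperate"))   # ~60: steady cooperation pays more overall
```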

Yeah, I mean, there’s a sense in which

it seems like the reason government,

the parts of government that don’t work well currently

is because there’s not good mechanisms

through which to interact,

for the citizenry to interact with government.

It basically hasn't caught up in terms of technology.

And I think once you integrate

some of the digital revolution

of being able to have a lot of access to data,

be able to vote on different ideas at a local level,

at all levels, at the optimal level

like you’re saying that can resolve the prisoner dilemmas

and to integrate AI to help you automate things

that don’t require the human ingenuity.

I feel like that’s where government could operate that well

and can also break apart the inefficient bureaucracies

if needed.

There’ll be a strong incentive to be efficient and successful.

So throughout human history, we see an evolution

and evolutionary competition of modes of government

and of individual governments within these modes.

And every nation state in some sense

is some kind of organism that has found different solutions

for the problem of government.

And you could look at all these different models

and the different scales at which it exists

as empirical attempts to validate the idea

of how to build a better government.

And I suspect that the idea of anarchism

similar to the idea of communism

is the result of being disenchanted

with the ugliness of the real existing solutions

and the attempt to get to an utopia.

And I suspect that communism originally was not a utopia.

I think that in the same way as original Christianity,

it had a particular kind of vision.

And this vision is a society,

a mode of organization within the society

in which humans can coexist at scale without coercion.

In the same way as we do in a healthy family, right?

In a good family,

you don’t terrorize each other into compliance,

but you understand what everybody needs

and what everybody is able to contribute

and what the intended future of the whole thing is.

And everybody coordinates their behavior in the right way

and informs each other about how to do this.

And all the interactions that happen

are instrumental to making that happen, right?

Could this happen at scale?

And I think this is the idea of communism.

Communism is opposed to the idea

that we need economic terror

or other forms of terror to make that happen.

But in practice, what happened

is that the proto communist countries,

the real existing socialism,

replaced a part of the economic terror with moral terror,

right?

So we were told to do the right thing for moral reasons.

And of course it didn’t really work

and the economy eventually collapsed.

And the moral terror had actual real cost, right?

People were in prison

because they were morally noncompliant.

And the other thing is that the idea of communism

became a utopia.

So it basically was projected into the afterlife.

We were told in my childhood

that communism was a hypothetical society

to which we were in a permanent revolution

that justified everything

that was presently wrong with society morally.

But it was something that our grandchildren

probably would not ever see

because it was too ideal and too far in the future

to make it happen right now.

And people were just not there yet morally.

And the same thing happened with Christianity, right?

This notion of heaven was mythologized

and projected into an afterlife.

And I think this was just the idea of God’s kingdom

of this world in which we instantiate

the next level transcendental agent in the perfect form.

So everything goes smoothly and without violence

and without conflict and without this human messiness

and this economic messiness and the terror and coercion

that existed in the present societies.

And the idea that humans can at some point

exist at scale in a harmonious way and noncoercively

is untested, right?

A lot of people tested it

but didn’t get it to work so far.

And the utopia is a world where you get

all the good things without any of the bad things.

And you are, I think very susceptible to believe in utopias

when you are very young and don’t understand

that everything has to happen in causal patterns,

that there’s always feedback loops

that ultimately are closed.

There’s nothing that just happens

because it’s good or bad.

Good or bad don’t exist in isolation.

They only exist with respect to larger systems.

So can you intuit why utopias fail as systems?

So like having a utopia that’s out there beyond the horizon

is it because then,

it’s not only because it’s impossible to achieve utopias

but it’s because what certain humans,

certain small number of humans start to sort of greedily

attain power and money and control and influence

as they become,

as they see the power in using this idea of a utopia

for propaganda.

It’s a bit like saying, why is my garden not perfect?

It’s because some evil weeds are overgrowing it

and they always do, right?

But this is not how it works.

A good garden is a system that is in balance

and requires minimal interactions by the gardener.

And so you need to create a system

that is designed to self stabilize.

And the design of social systems

requires not just the implementation

of the desired functionality,

but the next level design, also in biological systems.

You need to create a system that wants to converge

to the intended function.

And so instead of just creating an institution like the FDA

that is performing a particular kind of role in society,

you need to make sure that the FDA is actually driven

by a system that wants to do this optimally,

that is incentivized to do it optimally

and then makes the performance that is actually enacted

in every generation instrumental to that thing,

that actual goal, right?

And that is much harder to design and to achieve.

See, if you design a system where,

and listen, communism also was, quote unquote, incentivized

to be a feedback loop system that achieves that utopia.

It’s just, it wasn’t working given human nature.

The incentives were not correct given human nature.

How do you incentivize people

when they are getting coal off the ground

to work as hard as possible?

Because it’s a terrible job

and it’s very bad for your health.

And right, how do you do this?

And you can give them prizes and medals and status

to some degree, right?

There’s only so much status to give for that.

And most people will not fall for this, right?

Or you can pay them and you probably have to pay them

in an asymmetric way because if you pay everybody the same

and you nationalize the coal mines,

eventually people will figure out

that they can game the system.

Yes, so you’re describing capitalism.

So capitalism is the present solution to the system.

And what we also noticed, I think, is that Marx was correct

in saying that capitalism is prone to crisis,

that capitalism is a system that in its dynamics

is not convergent, but divergent.

It’s not a stable system.

And that eventually it produces an enormous potential

for productivity, but it also is systematically

misallocating resources.

So a lot of people cannot participate

in the production and consumption anymore, right?

And this is what we observed.

We observed that the middle class in the US is tiny.

It’s a lot of people think that they’re middle class,

but if you are still flying economy,

you’re not middle class, right?

Every class is a magnitude smaller than the previous class.

And the way I think about classes is really like airline class.

I like class.

A lot of people are economy class, business class,

and very few are first class and some are budget.

I mean, some, I understand.

I think there’s, yeah, maybe some people,

probably I would push back

against that definition of the middle class.

It does feel like the middle class is pretty large,

but yes, there’s a discrepancy in terms of wealth.

So if you think about in terms of the productivity

that our society could have,

there is no reason for anybody to fly economy, right?

We would be able to let everybody travel in style.

Well, but also some people like to be frugal

even when they’re billionaires, okay?

So like that, let’s take that into account.

I mean, we probably don’t need to be a traveling lavish,

but you also don’t need to be tortured, right?

There is a difference between frugal

and subjecting yourself to torture.

Listen, I love economy.

I don’t understand why you’re comparing

a fly economy to torture.

I don’t, although the fight here,

there’s two crying babies next to me.

So that, but that has nothing to do with economy.

It has to do with crying babies.

They’re very cute though.

So they kind of.

Yeah, I have two kids

and sometimes I have to go back to visit the grandparents.

And that means going from the west coast to Germany

and that’s a long flight.

Is it true that, so when you’re a father,

you grow immune to the crying and all that kind of stuff,

like the, because like me just not having kids,

it can be other people’s kids can be quite annoying

when they’re crying and screaming

and all that kind of stuff.

When you have children and you are wired up

in the default natural way,

you’re lucky in this regard, you fall in love with them.

And this falling in love with them means

that you basically start to see the world through their eyes

and you understand that in a given situation,

they cannot do anything but express despair.

And so it becomes more differentiated.

I noticed that for instance,

my son is typically acting on a pure experience

of what things are like right now

and he has to do this right now.

And you have this small child that is,

when he was a baby and so on,

where he was just immediately expressing what he felt.

And if you cannot regulate this from the outside,

there’s no point to be upset about it, right?

It’s like dealing with weather or something like this.

You all have to get through it

and it’s not easy for him either.

But if you also have a daughter,

maybe she is planning for that.

Maybe she understands that she’s sitting in the car

behind you and she’s screaming at the top of her lungs

and you’re almost doing an accident

and you really don’t know what to do.

What should I have done to make you stop screaming?

You could have given me candy.

I think that’s like a cat versus dog discussion.

I love it.

Cause you said like a fundamental aspect of that is love

that makes it all worth it.

What, in this monkey riding an elephant in a dream world,

what role does love play in the human condition?

I think that love is the facilitator

of non transactional interaction.

And you are observing your own purposes.

Some of these purposes go beyond your ego.

They go beyond the particular organism

that you are and your local interests.

That’s what you mean by non-transactional.

Yes, so basically when you are acting

in a transactional way, it means that you are expecting

something in return

from the one that you’re interacting with.

You are interacting with a random stranger,

you buy something from them on eBay,

you expect a fair value for the money that you sent them

and vice versa.

Because you don’t know that person,

you don’t have any kind of relationship to them.

But when you know this person a little bit better

and you know the situation that they’re in,

you understand what they try to achieve in their life

and you approve because you realize that they’re

in some sense serving the same human sacredness as you are.

And they need a thing that you have,

maybe you give it to them as a present.

But, I mean, the feeling itself of joy is a kind of benefit,

is a kind of transaction, like…

Yes, but the joy is not the point.

The joy is the signal that you get.

It’s the reinforcement signal that your brain sends to you

because you are acting on the incentives

of the agent that you’re a part of.

We are meant to be part of something larger.

This is the way in which we outcompeted other hominins.

Take that Neanderthals.

Yeah, right.

And also other humans.

There was a population bottleneck for human society

that leads to an extreme lack of genetic diversity

among humans.

If you look at the Bushmen in the Kalahari,

tribes that are not that far distant

from each other have more genetic diversity

than exists between Europeans and Chinese.

And that’s because basically the out of Africa population

at some point had a bottleneck

of just a few thousand individuals.

And what probably happened is not that at any time

the number of people shrank below a few hundred thousand.

What probably happened is that there was a small group

that had a decisive mutation that produced an advantage.

And this group multiplied and killed everybody else.

And we are descendants of that group.

Yeah, I wonder what the peculiar characteristics

of that group were.

Yeah.

I mean, we can never know.

Me too, and a lot of people do wonder.

We can only listen to the echoes in ourselves,

like the ripples that are still within us.

So I suspect what eventually made a big difference

was the ability to organize at scale,

to program each other.

With ideas.

That we became programmable,

that we were willing to work in lockstep,

that we went above the tribal level,

that we no longer were groups of a few hundred individuals

and acted on direct reputation systems transactionally,

but that we basically evolved an adaptation

to become state building.

Yeah.

To form collectives outside of the direct collectives.

Yes, and basically a part of us became committed

to serving something outside of what we know.

Yeah, then that’s kind of what love is.

And it’s terrifying because it meant

that we eradicated the others.

Right, it’s a force.

It’s an adaptive force that gets us ahead in evolution,

which means we displace something else

that doesn’t have that.

Oh, so we had to murder a lot of people

that weren’t about love.

So love led to destruction.

They didn’t have the same strong love as we did.

Right, that’s why I mentioned this thing with fascism.

When you see these speeches asking, do you want total war,

and everybody says yes, right?

This is this big, oh my God, we are part of something

that is more important than me

that gives meaning to my existence.

Fair enough.

Do you have advice for young people today

in high school, in college,

that are thinking about what to do with their career,

with their life, so that at the end of the whole thing,

they can be proud of what they did?

Don’t cheat.

Have integrity, aim for integrity.

So what does integrity look like when you’re the leaf floating down the river

or the fat frog in a lake?

It basically means that you try to figure out

what the thing is that is the most right.

And this doesn’t mean that you have to look

for what other people tell you what’s right,

but you have to aim for moral autonomy.

So things need to be right independently

of what other people say.

I always felt that when people told me

to listen to what others say, like read the room,

build your ideas of what’s true

based on the high status people of your in group,

that does not protect me from fascism.

The only way to protect yourself from fascism

is to decide whether the world that is being built here

is the world that I want to be in.

And so in some sense, try to make your behavior sustainable,

act in such a way that you would feel comfortable

on all sides of the transaction.

Realize that everybody is you in a different timeline,

but is seeing things differently

and has reasons to do so.

Yeah, I’ve come to realize this recently,

that there is an inner voice

that tells you what’s right and wrong.

And speaking of reading the room,

there are times when a lot of people

are doing something wrong.

And what integrity looks like then

is not going on Twitter and tweeting about it,

but quietly not participating, not doing it.

So it’s not about signaling and all that kind of stuff,

but actually living what you think is right.

Living it, not signaling.

There’s also sometimes this expectation

that others are like us.

So imagine the possibility

that some of the people around you are space aliens

that only look human, right?

So they don’t have the same priors as you do.

They don’t have the same impulses

about what’s right and wrong.

There’s a large diversity in these basic impulses

that people can have in a given situation.

And now realize that you are a space alien, right?

You are not actually human.

You think that you are human,

but you don’t know what it means,

like what it’s like to be human.

You just make it up as you go along like everybody else.

And you have to figure that out,

what it means that you are a full human being,

what it means to be human in the world

and how to connect with others on that.

And there is also this: don’t be afraid

that if you do this, you’re not good enough.

Because if you are acting on these incentives of integrity,

you become trustworthy.

That’s the way in which you can recognize each other.

There is a particular place where you can meet.

You can figure out what that place is,

where you will give support to people

because you realize that they act with integrity

and they will also do that.

So in some sense, you are safe if you do that.

You’re not always protected.

There are people who will abuse you

and who are bad actors in ways

that are hard to imagine before you meet them.

But there are also people who will try to protect you.

Yeah, that’s such a, thank you for saying that.

That’s such a hopeful message

that no matter what happens to you,

there’ll be a place, there’s people you’ll meet

that also have what you have

and you will find happiness there and safety there.

Yeah, but it doesn’t need to end well.

It can also all go wrong.

So there’s no guarantees in this life.

So you can do everything right and you still can fail

and you can see horrible things happening to you

that traumatize you and mutilate you

and you have to be grateful if it doesn’t happen.

And ultimately be grateful no matter what happens

because even just being alive is pretty damn nice.

Yeah, even that, you know.

The gratefulness in some sense is also just generated

by your brain to keep you going, it’s all a trick.

Speaking of which, Camus said,

I see many people die because they judge

that life is not worth living.

I see others paradoxically getting killed

for the ideas or illusions that give them

a reason for living.

What is called a reason for living

is also an excellent reason for dying.

I therefore conclude that the meaning of life

is the most urgent of questions.

So I have to ask, Joscha Bach: what is the meaning of life?

It is an urgent question according to Camus.

I don’t think that there’s a single answer to this.

Nothing makes sense unless the mind makes it so.

So you basically have to project a purpose.

And if you zoom out far enough,

there’s the heat death of the universe

and everything is meaningless,

everything is just a blip in between.

And the question is, do you find meaning

in this blip in between?

Do you find meaning in observing squirrels?

Do you find meaning in raising children

and projecting a multi generational organism

into the future?

Do you find meaning in projecting an aesthetic

of the world that you like to the future

and trying to serve that aesthetic?

And if you do, then life has that meaning.

And if you don’t, then it doesn’t.

I kind of enjoy the idea that you just create

the most vibrant, the most weird,

the most unique kind of blip you can,

given your environment, given your set of skills,

just be the weirdest,

like, local pocket of complexity you can be.

So that like, when people study the universe,

they’ll pause and be like, oh, that’s weird.

It looks like a useful strategy,

but of course it’s still motivated reasoning.

You’re obviously acting on your incentives here.

It’s still a story we tell ourselves within a dream

that’s hardly in touch with reality.

It’s definitely a good strategy if you are a podcaster.

And a human, which I’m still trying to figure out if I am.

It has a mutual relationship somehow.

Somehow.

Joscha, you’re one of the most incredible people I know.

I really love talking to you.

I love talking to you again,

and it’s really an honor that you spend

your valuable time with me.

I hope we get to talk many times

through our short and meaningless lives.

Or meaningful.

Thank you, Lex.

I enjoyed this conversation very much.

Thanks for listening to this conversation with Joscha Bach.

And thank you to Coinbase, Codecademy, Linode,

NetSuite, and ExpressVPN.

Check them out in the description to support this podcast.

Now, let me leave you with some words from Carl Jung.

People will do anything, no matter how absurd,

in order to avoid facing their own souls.

One does not become enlightened

by imagining figures of light,

but by making the darkness conscious.

Thank you for listening, and hope to see you next time.
