Lex Fridman Podcast - #314 - Liv Boeree: Poker, Game Theory, AI, Simulation, Aliens & Existential Risk

Evolutionarily, if we see a lion running at us,

we didn’t have time to sort of calculate

the lion’s kinetic energy and is it optimal

to go this way or that way, you just react.

And physically our bodies are well attuned

to actually make right decisions.

But when you’re playing a game like poker,

this is not something that you ever evolved to do.

And yet you’re in that same flight or fight response.

And so that’s a really important skill

to be able to develop to basically learn

how to like meditate in the moment and calm yourself

so that you can think clearly.

The following is a conversation with Liv Boeree,

formerly one of the best poker players in the world,

trained as an astrophysicist and is now a philanthropist

and an educator on topics of game theory,

physics, complexity, and life.

This is the Lex Fridman Podcast.

To support it, please check out our sponsors

in the description.

And now, dear friends, here’s Liv Boeree.

What role do you think luck plays in poker and in life?

You can pick whichever one you want,

poker or life, or both.

The longer you play, the less influence luck has.

Like with all things, the bigger your sample size,

the more the quality of your decisions

or your strategies matter.

So to answer that question, yeah, in poker,

it really depends.

If you and I sat and played 10 hands right now,

I might only win 52% of the time, 53% maybe.

But if we played 10,000 hands,

then I’ll probably win like over 98, 99% of the time.

So it’s a question of sample sizes.
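
A minimal sketch of that sample-size point, treating each hand as an independent coin flip with a fixed 52% edge for the better player (a deliberate simplification; real hands aren't independent and the edge isn't constant). Over 10 hands the better player is barely favored to finish ahead; over 10,000 they essentially always do.

```python
import random

def session_win_prob(edge: float, hands: int, trials: int = 2_000) -> float:
    """Estimate how often the stronger player finishes a session ahead,
    treating each hand (unrealistically) as an independent win/loss."""
    ahead = 0
    for _ in range(trials):
        won = sum(random.random() < edge for _ in range(hands))
        if won > hands / 2:
            ahead += 1
    return ahead / trials

if __name__ == "__main__":
    for n in (10, 100, 1_000, 10_000):
        print(n, session_win_prob(0.52, n))
```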

And what are you figuring out over time?

The betting strategy that this individual does

or literally it doesn’t matter

against any individual over time?

Against any individual over time, the better player wins

because they’re making better decisions.

So what does that mean to make a better decision?

Well, to get into the nitty-gritty already,

basically poker is a game of math.

There are these strategies, you’re familiar

with, like, Nash equilibria, right?

So there are these game theory optimal strategies

that you can adopt.

And the closer you play to them,

the less exploitable you are.

So because I’ve studied the game a bunch,

although admittedly not for a few years,

but back when I was playing all the time,

I would study these game theory optimal solutions

and try and then adopt those strategies

when I go and play.

So I’d play against you and I would do that.

And because the objective,

when you’re playing game theory optimal,

it’s actually, it’s a loss minimization thing

that you’re trying to do.

Your best bet is to try and play a sort of similar style.

You also need to try and adopt this loss minimization.

But because I’ve been playing much longer than you,

I’ll be better at that.

So first of all, you’re not taking advantage

of my mistakes.

But then on top of that,

I’ll be better at recognizing

when you are playing suboptimally

and then deviating from this game theory optimal strategy

to exploit your bad plays.

Can you define game theory and Nash equilibria?

Can we try to sneak up to it in a bunch of ways?

Like what’s a game theory framework of analyzing poker,

analyzing any kind of situation?

So game theory is just basically the study of decisions

within a competitive situation.

I mean, it’s technically a branch of economics,

but it also applies to like wider decision theory.

And usually when you see it,

it’s these like little payoff matrices and so on.

That’s how it’s depicted.

But it’s essentially just like study of strategies

under different competitive situations.

And as it happens, certain games,

in fact, many, many games have these things

called Nash equilibria.

And what that means is when you’re in a Nash equilibrium,

basically it is not,

there is no strategy that you can take

that would be more beneficial

than the one you’re currently taking,

assuming your opponent is also doing the same thing.

So it’d be a bad idea,

if we’re both playing in a game theory optimal strategy,

if either of us deviate from that,

now we’re putting ourselves at a disadvantage.

Rock, paper, scissors

is actually a really great example of this.

Like if we were to start playing rock, paper, scissors,

you know, you know nothing about me

and we’re gonna play for all our money,

let’s play 10 rounds of it.

What would your sort of optimal strategy be?

Do you think, what would you do?

Let’s see.

I would probably try to be as random as possible.

Exactly.

You wanna, because you don’t know anything about me,

you don’t want to give anything away about yourself.

So ideally you’d have like a little dice

or somewhat, you know, perfect randomizer

that makes you randomize 33% of the time

each of the three different things.

And in response to that,

well, actually I can kind of do anything,

but I would probably just randomize back too,

but actually it wouldn’t matter

because I know that you’re playing randomly.

So that would be us in a Nash equilibrium

where we’re both playing this like unexploitable strategy.

However, if after a while you then notice

that I’m playing rock a little bit more often than I should.

Yeah, you’re the kind of person that would do that,

wouldn’t you?

Sure.

Yes, yes, yes.

I’m more of a scissors girl, but anyway.

You are?

No, I’m a, as I said, randomizer.

So you notice I’m throwing rock too much

or something like that.

Now you’d be making a mistake

by continuing playing this game theory optimal strategy,

well, the previous one, because you are now,

I’m making a mistake and you’re not deviating

and exploiting my mistake.

So you’d want to start throwing paper a bit more often

in whatever you figure is the right sort of percentage

of the time that I’m throwing rock too often.

So that’s basically an example of where,

what game theory optimal strategy is

in terms of loss minimization,

but it’s not always the maximally profitable thing

if your opponent is doing stupid stuff,

which, in that example, I would be.
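
To put numbers on the rock-paper-scissors example, here is a small sketch (the 50/25/25 rock-heavy strategy is made up): sticking with the uniform random Nash strategy gains nothing against a biased opponent, while deviating to exploit the bias earns a quarter of a unit per round, at the price of becoming exploitable yourself.

```python
# Payoff to player A per round; rows = A's move, cols = B's move,
# in the order (rock, paper, scissors).
PAYOFF = [
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
]

def expected_value(a_strategy, b_strategy):
    """Expected payoff per round for A, given both players' mixed strategies."""
    return sum(
        a_strategy[i] * b_strategy[j] * PAYOFF[i][j]
        for i in range(3)
        for j in range(3)
    )

uniform = [1 / 3, 1 / 3, 1 / 3]     # the unexploitable Nash strategy
rock_heavy = [0.5, 0.25, 0.25]      # opponent throwing rock too often
always_paper = [0.0, 1.0, 0.0]      # the exploitative best response

print(expected_value(uniform, rock_heavy))       # ~0.0: random gains nothing
print(expected_value(always_paper, rock_heavy))  # 0.25: exploiting the leak pays
```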

So that’s kind of then how it works in poker,

but it’s a lot more complex.

And the way poker players typically study nowadays,

the game’s changed so much.

And I think we should talk about how it sort of evolved,

but nowadays like the top pros

basically spend all their time in between sessions

running these simulators using like software

where they do basically Monte Carlo simulations,

sort of doing billions of fictitious self play hands.

You input a fictitious hand scenario,

like, oh, what do I do with Jack nine suited

on a King 10 four to two spades board

and against this bet size.

So you’d input that press play,

it’ll run its billions of fake hands

and then it will converge upon

what the game theory optimal strategies are.

And then you wanna try and memorize what these are.

Basically they’re like ratios of how often,

what types of hands you want to bluff

and what percentage of the time.

So then there’s this additional layer

of randomization built in.
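
The "ratios" and "percentage of the time" she describes are mixed strategies, and the randomization layer just means sampling your action at those frequencies at the table. A hedged sketch, with invented action names and frequencies standing in for real solver output:

```python
import random

# Hypothetical solver output for one spot: action frequencies for a single hand.
# The actions and numbers are invented for illustration, not from a real solver.
solver_frequencies = {"bet_large": 0.35, "bet_small": 0.20, "check": 0.45}

def sample_action(frequencies):
    """Pick an action at the prescribed frequencies, so that over many hands
    the overall strategy stays mixed rather than predictable."""
    r = random.random()
    cumulative = 0.0
    for action, freq in frequencies.items():
        cumulative += freq
        if r < cumulative:
            return action
    return action  # guard against floating-point rounding

print(sample_action(solver_frequencies))
```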

Yeah, those kinds of simulations incorporate

all the betting strategies and everything else like that.

So as opposed to some kind of very crude mathematical model

of what’s the probability you win

just based on the quality of the card,

it’s including everything else too.

The game theory of it.

Yes, yeah, essentially.

And what’s interesting is that nowadays,

if you want to be a top pro and you go and play

in these really like the super high stakes tournaments

or tough cash games, if you don’t know this stuff,

you’re gonna get eaten alive in the long run.

But of course you could get lucky over the short run

and that’s where this like luck factor comes in

because luck is both a blessing and a curse.

If luck didn’t, if there wasn’t this random element

and there wasn’t the ability for worse players

to win sometimes, then poker would fall apart.

You know, the same reason people don’t play chess

professionally for money against,

you don’t see people going and hustling chess

like not knowing, trying to make a living from it

because you know there’s very little luck in chess,

but there’s quite a lot of luck in poker.

Have you seen Beautiful Mind, that movie?

Years ago.

Well, what do you think about the game theoretic formulation

of what is it, the hot blonde at the bar?

Do you remember?

Oh, yeah.

What they illustrated is they’re trying to pick up a girl

at a bar and there’s multiple girls.

They’re like friend, it’s like a friend group

and you’re trying to approach,

I don’t remember the details, but I remember.

Don’t you like then speak to her friends first

or something like that, feign disinterest.

I mean, it’s classic pickup artist stuff, right?

You wanna.

And they were trying to correlate that somehow,

that being an optimal strategy game theoretically.

Why?

What, what, like, I don’t think, I remember.

I can’t imagine that there is,

I mean, there’s probably an optimal strategy.

Is it, does that mean that there’s an actual Nash equilibrium

of like picking up girls?

Do you know the marriage problem?

It’s optimal stopping.

Yes.

So where it’s an optimal dating strategy

where you, do you remember what it is?

Yeah, I think it’s like something like,

you know you’ve got like a set of a hundred people

you’re gonna look through and after,

how many do you, now after that,

after going on this many dates out of a hundred,

at what point do you then go, okay, the next best person I see,

is that the right one?

And I think it’s like something like 37%.

Uh, it’s one over E, whatever that is.

Right, which I think is 37%.

Yeah.

We’re gonna fact check that.

Yeah.

So, but it’s funny under those strict constraints,

then yes, after that many people,

as long as you have a fixed size pool,

then you just pick the next person

that is better than anyone you’ve seen before.

Anyone else you’ve seen, yeah.
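
For reference, 1/e is about 0.368, so the 37% figure checks out. Here is a small simulation of the rule under the classic "secretary problem" assumptions (fixed pool of 100, candidates seen in random order, no going back), which recovers roughly that 37% success rate:

```python
import math
import random

def secretary_success_rate(n: int = 100, cutoff_frac: float = 1 / math.e,
                           trials: int = 20_000) -> float:
    """Simulate the optimal-stopping rule: skip the first cutoff_frac * n
    candidates, then take the first one better than everyone seen so far.
    Returns how often that rule picks the overall best candidate."""
    cutoff = int(n * cutoff_frac)
    successes = 0
    for _ in range(trials):
        candidates = [random.random() for _ in range(n)]
        best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
        chosen = None
        for value in candidates[cutoff:]:
            if value > best_seen:
                chosen = value
                break
        if chosen is not None and chosen == max(candidates):
            successes += 1
    return successes / trials

print(secretary_success_rate())  # ~0.37, matching the 1/e rule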

Have you tried this?

Have you incorporated it?

I’m not one of those people.

And we’re gonna discuss this.

I, and, what do you mean, those people?

I try not to optimize stuff.

I try to listen to the heart.

I don’t think, I like,

my mind immediately is attracted to optimizing everything.

Optimizing everything.

And I think that if you really give in

to that kind of addiction,

that you lose the joy of the small things,

the minutiae of life, I think.

I don’t know.

I’m concerned about the addictive nature

of my personality in that regard.

In some ways,

while I think, on average,

people try to quantify things too little,

or under-optimize.

There are some people who, you know,

with all these things, it’s a balancing act.

I’ve been on dating apps, but I’ve never used them.

I’m sure they have data on this,

because they probably have

the optimal stopping control problem.

Because there are a lot of people that use,

like, dating apps, that are on there for a long time.

So the interesting aspect is like, all right,

how long before you stop looking

before it actually starts affecting your mind negatively,

such that you see dating as a kind of,

A game.

A kind of game versus an actual process

of finding somebody that’s going to make you happy

for the rest of your life.

That’s really interesting.

They have the data.

I wish they would be able to release that data.

And I do want to hop to it.

It’s OkCupid, right?

I think they ran a huge, huge study on all of that.

Yeah, they’re more data driven, I think, OkCupid folks are.

I think there’s a lot of opportunity for dating apps,

in general, even bigger than dating apps,

people connecting on the internet.

I just hope they’re more data driven.

And it doesn’t seem that way.

I think like, I’ve always thought that

Goodreads should be a dating app.

Like the.

I’ve never used it.

After what?

Goodreads just lists books that you’ve read,

and allows you to comment on the books you’ve read

and what books you’re currently reading.

But it’s a giant social network of people reading books.

And that seems to be a much better database

of like interests.

Of course it constrains you to the books you’re reading,

but like that really reveals so much more about the person.

Allows you to discover shared interests

because books are a kind of window

into the way you see the world.

Also like the kind of places,

people you’re curious about,

the kind of ideas you’re curious about.

Are you a romantic or are you a cold, calculating rationalist?

Are you into Ayn Rand or are you into Bernie Sanders?

Are you into whatever?

And I feel like that reveals so much more

than like a person trying to look hot

from a certain angle in a Tinder profile.

Well, and it’d also be a really great filter

in the first place for people.

It selects for people who read books

and are willing to go and rate them

and give feedback on them and so on.

So that’s already a really strong filter.

Probably the type of people you’d be looking for.

Well, you’d at least be able to fake reading books.

I mean, the thing about books,

you don’t really need to read them.

You can just look at the Cliff Notes.

Yeah, game the dating app by feigning intellectualism.

Can I admit something very horrible about myself?

Go on.

The things that, you know,

I don’t have many things in my closet,

but this is one of them.

I’ve never actually read Shakespeare.

I’ve only read Cliff Notes

and I got a five in the AP English exam.

Wow.

And I…

Which book?

Which books have I read?

Oh yeah, which book was the exam on?

Oh no, they include a lot of them.

Oh.

But Hamlet, I don’t even know

if I read Romeo and Juliet, Macbeth.

I don’t remember, but I didn’t understand it.

It’s like really cryptic.

It’s hard.

It’s really, I don’t, and it’s not that pleasant to read.

It’s like ancient speak.

I don’t understand it.

Anyway, maybe I was too dumb.

I’m still too dumb, but I did get…

You got a five, which is…

Yeah, yeah.

I don’t know how the US grading system…

Oh no, so AP English is a,

there’s kind of this advanced versions of courses

in high school, and you take a test

that is like a broad test for that subject

and includes a lot.

It wasn’t obviously just Shakespeare.

I think a lot of it was also writing, written.

You have like AP Physics, AP Computer Science,

AP Biology, AP Chemistry, and then AP English

or AP Literature, I forget what it was.

But I think Shakespeare was a part of that.

But I…

And the point is you gamified it?

Gamified.

Well, entirely. I was into getting As.

I saw it as a game.

I don’t think any…

I don’t think all of the learning I’ve done

has been in school.

The deepest learning I’ve done has been outside of school

with a few exceptions, especially in grad school,

like deep computer science courses.

But that was still outside of school

because it was outside of getting…

Sorry.

It was outside of getting the A for the course.

The best stuff I’ve ever done is when you read the chapter

and you do many of the problems at the end of the chapter,

which is usually not what’s required for the course,

like the hardest stuff.

In fact, textbooks are freaking incredible.

If you go back now and you look at like biology textbook

or any of the computer science textbooks

on algorithms and data structures,

those things are incredibly…

They have the best summary of a subject,

plus they have practice problems of increasing difficulty

that allow you to truly master the basic,

like the fundamental ideas behind that subject.

I got through my entire physics degree with one textbook

that was just this really comprehensive one

that they told us at the beginning of the first year,

buy this, but you’re gonna have to buy 15 other books

for all your supplementary courses.

And every time, I would just check

to see whether this book covered it, and it did.

And I think I only bought like two or three extra

and thank God, cause they’re super expensive textbooks.

It’s a whole racket they’ve got going on.

Yeah, they are.

They could just…

You get the right one, it’s just like a manual for…

But what’s interesting though,

is this is the tyranny of having exams and metrics.

The tyranny of exams and metrics, yes.

I loved them because I’m very competitive

and I liked finding ways to gamify things

and then like sort of dust off my shoulders afterwards

when I get a good grade

or be annoyed at myself when I didn’t.

But yeah, you’re absolutely right.

And that the actual…

How much of that physics knowledge I’ve retained,

like I’ve learned how to cram and study

and please an examiner,

but did that give me the deep lasting knowledge

that I needed?

I mean, yes and no,

but really like nothing makes you learn a topic better

than when you actually then have to teach it yourself.

Like I’m trying to wrap my teeth

around this like game theory Moloch stuff right now.

And there’s no exam at the end of it that I can gamify.

There’s no way to gamify

and sort of like shortcut my way through it.

I have to understand it so deeply

from like deep foundational levels

to then to build upon it

and then try and explain it to other people.

And like, you’re about to go and do some lectures, right?

You can’t sort of just like,

you presumably can’t rely on the knowledge

that you got through when you were studying for an exam

to reteach that.

Yeah, and especially high level lectures,

especially the kind of stuff you do on YouTube,

you’re not just regurgitating material.

You have to think through what is the core idea here.

And when you do the lectures live especially,

you have to, there’s no second takes.

That is the luxury you get

if you’re recording a video for YouTube

or something like that.

But it definitely is a luxury you shouldn’t lean on.

I’ve gotten to interact with a few YouTubers

that lean on that too much.

And you realize, oh, you’ve gamified this system

because you’re not really thinking deeply about stuff.

Through the edit,

both written and spoken,

you’re crafting an amazing video,

but you yourself as a human being

have not really deeply understood it.

So live teaching or at least recording video

with very few takes is a different beast.

And I think it’s the most honest way of doing it,

like as few takes as possible.

That’s why I’m nervous about this.

Don’t go back and be like, ah, let’s do that.

Don’t fuck this up, Liv.

The tyranny of exams.

I do think people talk about high school and college

as a time to do drugs and drink and have fun

and all this kind of stuff.

But looking back, of course I did a lot of those things.

Yes, no, yes, but it’s also a time

when you get to read textbooks or read books

or learn with all the time in the world.

You don’t have these responsibilities of laundry

and having to sort of pay for mortgage,

all that kind of stuff, pay taxes, all this kind of stuff.

In most cases, there’s just so much time in the day

for learning and you don’t realize it at the time

because at the time it seems like a chore,

like why the hell is there so much homework?

But you never get a chance to do this kind of learning,

this kind of homework ever again in life,

unless later in life you really make a big effort out of it.

Like basically your knowledge gets solidified.

You don’t get to have fun and learn.

Learning is really fulfilling and really fun

if you’re that kind of person.

Like some people like, you know,

like knowledge is not something that they think is fun.

But if that’s the kind of thing that you think is fun,

that’s the time to have fun and do the drugs and drink

and all that kind of stuff.

But the learning, just going back to those textbooks,

the hours spent with the textbooks

is really, really rewarding.

Do people even use textbooks anymore?

Yeah. Do you think?

Kids these days with their TikTok and their…

Well, not even that, but it’s just like so much information,

really high quality information,

you know, it’s now in digital format online.

Yeah, but they’re not, they are using that,

but you know, college is still very, there’s a curriculum.

I mean, so much of school is about rigorous study

of a subject and still on YouTube, that’s not there.

Right. YouTube has, Grant Sanderson talks about this.

He’s this math…

3Blue1Brown.

Yeah, 3Blue1Brown.

He says like, I’m not a math teacher.

I just take really cool concepts and I inspire people.

But if you wanna really learn calculus,

if you wanna really learn linear algebra,

you should do the textbook.

You should do that, you know.

And there’s still the textbook industrial complex

that like charges like $200 for a textbook and somehow,

I don’t know, it’s ridiculous.

Well, they’re like, oh, sorry, new edition, edition 14.6.

Sorry, you can’t use 14.5 anymore.

It’s like, what’s different?

We’ve got one paragraph different.

So we mentioned offline, Daniel Negreanu.

I’m gonna get a chance to talk to him on this podcast.

And he’s somebody that I found fascinating

in terms of the way he thinks about poker,

verbalizes the way he thinks about poker,

the way he plays poker.

So, and he’s still pretty damn good.

He’s been good for a long time.

So you mentioned that people are running

these kinds of simulations and the game of poker has changed.

Do you think he’s adapting in this way?

Like the top pros, do they have to adopt this way?

Or is there still like over the years,

you basically develop this gut feeling about,

like you get to be like good the way,

like AlphaZero is good.

You look at the board and somehow from the fog

comes out the right answer.

Like this is likely what they have.

This is likely the best way to move.

And you don’t really, you can’t really put a finger

on exactly why, but it just comes from your gut feeling.

Or no?

Yes and no.

So gut feelings are definitely very important.

You know, that we’ve got our two,

you can distill it down to two modes

of decision making, right?

You’ve got your sort of logical linear voice in your head,

system two, as it’s often called,

and your system one, your gut intuition.

And historically in poker,

the very best players were playing

almost entirely by their gut.

You know, often they’d do some kind of inspired play

and you’d ask them why they did it

and they wouldn’t really be able to explain it.

And that’s not so much because their process

was unintelligible, but it was more just because

no one had the language with which to describe

what optimal strategies were

because no one really understood how poker worked.

This was before, you know, we had analysis software.

You know, no one was writing,

I guess some people would write down their hands

in a little notebook,

but there was no way to assimilate all this data

and analyze it.

But then, you know, when computers became cheaper

and software started emerging,

and then obviously online poker,

where it would like automatically save your hand histories.

Now, all of a sudden you kind of had this body of data

that you could run analysis on.

And so that’s when people started to see, you know,

these mathematical solutions.

And so what that meant is the role of intuition

essentially became smaller.

And it went more into, as we talked before about,

you know, this game theory optimal style.

But also, as I said, like game theory optimal

is about loss minimization and being unexploitable.

But if you’re playing against people who aren’t,

because no person, no human being can play perfectly

game theory optimal in poker, not even the best AIs.

They’re still like, you know,

they’re 99.99% of the way there or whatever,

but it’s kind of like the speed of light.

You can’t reach it perfectly.

So there’s still a role for intuition?

Yes, so when, yeah,

when you’re playing this unexploitable style,

but when your opponents start doing something,

you know, suboptimal that you want to exploit,

well, now that’s where not only your like logical brain

will need to be thinking, well, okay,

I know I have this, I’m in the sort of top end

of my range here with this hand.

So that means I need to be calling X percent of the time

and I put them on this range, et cetera.

But then sometimes you’ll have this gut feeling

that will tell you, you know what, this time,

I know mathematically I’m meant to call now.

You know, I’m in the sort of top end of my range

and this is the odds I’m getting.

So the math says I should call,

but there’s something in your gut saying,

they’ve got it this time, they’ve got it.

Like they’re beating you, your hand is worse.

So then the real art,

this is where the last remaining art in poker,

the fuzziness is like, do you listen to your gut?

How do you quantify the strength of it?

Or can you even quantify the strength of it?
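
The "odds I'm getting" part of that calculation is ordinary pot-odds arithmetic; a quick sketch with made-up numbers, not from any actual hand. The gut-versus-math tension she describes is about whether your true equity against this opponent really clears that threshold.

```python
def required_equity(pot: float, bet: float) -> float:
    """Minimum fraction of the time you must win for a call to break even:
    you risk `bet` to win what's already in the pot plus the opponent's bet."""
    return bet / (pot + 2 * bet)

# Made-up example: opponent bets 75 into a 100 pot.
print(required_equity(pot=100, bet=75))  # 0.30 -> you need ~30% equity to call
```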

And I think that’s what Daniel has.

I mean, I can’t speak for how much he’s studying

with the simulators and that kind of thing.

I think he has, like he must be to still be keeping up.

But he has an incredible intuition for just,

he’s seen so many hands of poker in the flesh.

He’s seen so many people, the way they behave

when the chips are, you know, when the money’s on the line

and you’ve got him staring you down in the eye.

You know, he’s intimidating.

He’s got this like kind of X factor vibe

that he, you know, gives out.

And he talks a lot, which is an interactive element,

which is he’s getting stuff from other people.

Yes, yeah.

And just like the subtlety.

So he’s like, he’s probing constantly.

Yeah, he’s probing and he’s getting this extra layer

of information that others can’t.

Now that said though, he’s good online as well.

You know, I don’t know how, again,

would he be beating the top cash game players online?

Probably not, no.

But when he’s in person and he’s got that additional

layer of information, he can not only extract it,

but he knows what to do with it still so well.

There’s one player who I would say is the exception

to all of this.

And he’s one of my favorite people to talk about

in terms of, I think he might have cracked the simulation.

It’s Phil Hellmuth.

He…

In more ways than one, he’s cracked the simulation,

I think.

Yeah, he somehow to this day is still

and I love you Phil, I’m not in any way knocking you.

He is still winning so much at the World Series

of Poker specifically.

He’s now won 16 bracelets.

The next nearest person I think has won 10.

And he is consistently year in, year out going deep

or winning these huge field tournaments,

you know, with like 2000 people,

which statistically he should not be doing.

And yet you watch some of the plays he makes

and they make no sense, like mathematically,

they are so far from game theory optimal.

And the thing is, if you went and stuck him

in one of these high stakes cash games

with a bunch of like GTO people,

he’s gonna get ripped apart.

But there’s something that he has that when he’s

in the halls of the World Series of Poker specifically,

amongst sort of amateurish players,

he gets them to do crazy shit like that.

And, but my little pet theory is that also,

he just, with the cards, he’s like a wizard

and he gets the cards to do what he needs them to.

Because he just expects to win and he expects to receive,

you know, to flop a set with a frequency far beyond

what the real percentages are.

And I don’t even know if he knows what the real percentages

are, he doesn’t need to, because he gets there.

I think he has found the cheat code,

because when I’ve seen him play,

he seems to be like annoyed that the long shot thing

didn’t happen.

He’s like annoyed and it’s almost like everybody else

is stupid because he was obviously going to win

with the spare.

If that silly thing hadn’t happened.

And it’s like, you don’t understand,

the silly thing happens 99% of the time.

And it’s a 1%, not the other way around,

but genuinely for his lived experience at the World Series,

only at the World Series of Poker, it is like that.

So I don’t blame him for feeling that way.

But he does, he has this X factor

and the poker community has tried for years

to rip him down saying like, he’s no good,

but he’s clearly good because he’s still winning

or there’s something going on.

Whether that’s he’s figured out how to mess

with the fabric of reality and how cards,

a randomly shuffled deck of cards come out.

I don’t know what it is, but he’s doing it right still.

Who do you think is the greatest of all time?

Would you put Hellmuth?

Now, Hellmuth definitely seems like the kind of person

who, when mentioned, would actually watch this.

So you might wanna be careful.

Well, as I said, I love Phil and I have,

I would say this to his face, I’m not saying anything.

I don’t, he’s got, he truly, I mean,

he is one of the greatest.

I don’t know if he’s the greatest,

he’s certainly the greatest at the World Series of Poker.

And he is the greatest at, despite the game switching

into a pure game, almost an entire game of math,

he has managed to keep the magic alive.

And this like, just through sheer force of will,

making the game work for him.

And that is incredible.

And I think it’s something that should be studied

because it’s an example.

Yeah, there might be some actual game theoretic wisdom.

There might be something to be said about optimality

from studying him.

What do you mean by optimality?

Meaning, or rather game design, perhaps.

Meaning if what he’s doing is working,

maybe poker is more complicated

than the one we’re currently modeling it as.

So like his, yeah.

Or there’s an extra layer,

and I don’t mean to get too weird and wooey,

but, or there’s an extra layer of ability

to manipulate the things the way you want them to go

that we don’t understand yet.

Do you think Phil Hellmuth understands them?

Is he just generally?

Hashtag positivity.

Well, he wrote a book on positivity and he’s.

He has, he did, not like a trolling book.

No.

He’s just straight up, yeah.

Phil Hellmuth wrote a book about positivity.

Yes.

Okay, not ironic.

And I think it’s about sort of manifesting what you want

and getting the outcomes that you want

by believing so much in yourself

and in your ability to win,

like eyes on the prize.

And I mean, it’s working.

The man’s delivered.

But where do you put like Phil Ivey

and all those kinds of people?

I mean, I’m too, I’ve been,

to be honest too much out of the scene

for the last few years to really,

I mean, Phil Ivey’s clearly got,

again, he’s got that X factor.

He’s so incredibly intimidating to play against.

I’ve only played against him a couple of times,

but when he like looks you in the eye

and you’re trying to run a bluff on him,

oof, no one’s made me sweat harder than Phil Ivey,

but my bluff got through, actually.

That was actually one of the most thrilling moments

I’ve ever had in poker was,

it was in Monte Carlo in a high roller.

I can’t remember exactly what the hand was,

but I, you know, three-bet

and then like just barreled all the way through.

And he just like put his laser eyes into me.

And I felt like he was just scouring my soul.

And I was just like, hold it together, Liv,

hold it together.

And he, like, folded.

And you knew your hand was weaker.

Yeah, I mean, I was bluffing.

I presume, which, you know,

there’s a chance I was bluffing with the best hand,

but I’m pretty sure my hand was worse.

And he folded.

It was truly one of the deep highlights of my career.

Did you show the cards or did you fold?

You should never show in game.

Like, because especially as I felt like

I was one of the worst players at the table

in that tournament.

So giving that information,

unless I had a really solid plan

that I was now like advertising,

oh, look, I’m capable of bluffing Phil Ivey.

But like, why?

It’s much more valuable to take advantage

of the impression that they have of me,

which is like, I’m a scared girl

playing a high roller for the first time.

Keep that going, you know.

Interesting.

But isn’t there layers to this?

Like psychological warfare

that the scared girl might be way smart

and then like just to flip the tables?

Do you think about that kind of stuff?

Or is it better not to reveal information?

I mean, generally speaking,

you want to not reveal information.

You know, the goal of poker is to be

as deceptive as possible about your own strategies

while eliciting as much out of your opponent

about their own.

So giving them free information,

particularly if they’re people

who you consider very good players,

any information I give them is going into

their little database and being,

I assume it’s going to be calculated and used well.

So I have to be really confident

that my like meta gaming that I’m going to then do,

oh, they’ve seen this, so therefore that.

I’m going to be on the right level.

So it’s better just to keep that little secret

to myself in the moment.

So how much is bluffing part of the game?

Huge amount.

So yeah, I mean, maybe actually let me ask,

like, what did it feel like with Phil Ivey

or anyone else when it’s a high stake,

when it’s a big, it’s a big bluff?

So a lot of money on the table and maybe,

I mean, what defines a big bluff?

Maybe a lot of money on the table,

but also some uncertainty in your mind and heart

about like self doubt.

Well, maybe I miscalculated what’s going on here,

what the bet said, all that kind of stuff.

Like, what does that feel like?

I mean, it’s, I imagine comparable to,

you know, running a, I mean, any kind of big bluff

where you have a lot of something

that you care about on the line.

You know, so if you’re bluffing in a courtroom,

not that anyone should ever do that,

or, you know, something equatable to that.

It’s, you know, in that scenario, you know,

I think it was the first time I’d ever played a 20,

I’d won my way into this 25K tournament.

So that was the buy-in, 25,000 euros.

And I had satellited my way in

because it was much bigger

than I would ever normally play.

And, you know, I hadn’t, I wasn’t that experienced

at the time, and now I was sitting there

against all the big boys, you know,

the Negreanus, the Phil Iveys, and so on.

And then to like, you know,

each time you put the bets out, you know,

you put another bet out, your card.

Yeah, I was on what’s called a semi bluff.

So there were some cards that could come

that would make my hand very, very strong

and therefore win.

But most of the time, those cards don’t come.

So that is a semi bluff because you’re representing,

are you representing that you already have something?

So I think in this scenario, I had a flush draw.

So I had two clubs, two clubs came out on the flop.

And then I’m hoping that on the turn

and the river, one will come.

So I have some future equity.

I could hit a club and then I’ll have the best hand

in which case, great.

And so I can keep betting and I’ll want them to call,

but I’ve also got the other way of winning the hand

where if my card doesn’t come,

I can keep betting and get them to fold their hand.

And I’m pretty sure that’s what the scenario was.

So I had some future equity, but it’s still, you know,

most of the time I don’t hit that club.
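
For the draw odds behind "most of the time I don't hit that club": with two clubs on the flop you have nine outs among 47 unseen cards, so the flush completes by the river only about a third of the time. A quick check of just the raw draw odds (the actual stacks and bet sizes aren't part of the story):

```python
from fractions import Fraction

# Nine of the thirteen clubs remain among the 47 cards you haven't seen.
miss_turn = Fraction(47 - 9, 47)    # 38/47
miss_river = Fraction(46 - 9, 46)   # 37/46
hit_by_river = 1 - miss_turn * miss_river

print(float(hit_by_river))  # ~0.35, so the pure-bluff branch matters ~65% of the time
```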

And so I would rather him just fold because I’m, you know,

the pot is now getting bigger and bigger.

And in the end, like, I jam all in on the river.

That’s my entire tournament on the line.

As far as I’m aware, this might be the one time

I ever get to play a big 25K.

You know, this was the first time I played one.

So it was, it felt like the most momentous thing.

And this was also when I was trying to build myself up at,

you know, build my name, a name for myself in poker.

I wanted to get respect.

It would destroy everything for you.

It felt like it in the moment.

Like, I mean, it literally does feel

like a form of life and death.

Like your body physiologically

is having that flight or fight response.

What are you doing with your body?

What are you doing with your face?

Are you just like, what are you thinking about?

More of a mixture of like, okay, what are the cards?

So in theory, I’m thinking about like, okay,

what are cards that make my hand look stronger?

Which cards hit my perceived range from his perspective?

Which cards don’t?

What’s the right amount of bet size

to maximize my fold equity in this situation?

You know, that’s the logical stuff

that I should be thinking about.

But I think in reality, because I was so scared,

because there’s this, at least for me,

there’s a certain threshold of like nervousness or stress

beyond which the logical brain shuts off.

And now it just gets into this like,

just like, it feels like a game of wits, basically.

It’s like of nerve.

Can you hold your resolve?

And it certainly got beyond that, like, by the river.

I think by that point, I was like,

I don’t even know if this is a good bluff anymore,

but fuck it, let’s do it.

Your mind is almost numb from the intensity of that feeling.

I call it the white noise.

And it happens in all kinds of decision making.

I think anything that’s really, really stressful.

I can imagine someone in like an important job interview,

if it’s like a job they’ve always wanted,

and they’re getting grilled, you know,

like Bridgewater style, where they ask these really hard,

like mathematical questions.

You know, it’s a really learned skill

to be able to like subdue your flight or fight response.

You know, I think, to get from the sympathetic

into the parasympathetic.

So you can actually, you know, engage that voice

in your head and do those slow logical calculations.

Because evolutionarily, you know, if we see a lion

running at us, we didn’t have time to sort of calculate

the lion’s kinetic energy and, you know,

is it optimal to go this way or that way?

You just react.

And physically, our bodies are well attuned

to actually make right decisions.

But when you’re playing a game like poker,

this is not something that you ever, you know,

evolved to do.

And yet you’re in that same flight or fight response.

And so that’s a really important skill to be able to develop

to basically learn how to like meditate in the moment

and calm yourself so that you can think clearly.

But as you were searching for a comparable thing,

it’s interesting because you just made me realize

that bluffing is like an incredibly high stakes

form of lying.

You’re lying.

And I don’t think you can.

Telling a story.

No, no, it’s straight up lying.

In the context of game, it’s not a negative kind of lying.

But it is, yeah, exactly.

You’re representing something that you don’t have.

And I was thinking like how often in life

do we have such high stakes of lying?

Because I was thinking certainly

in high level military strategy,

I was thinking when Hitler was lying to Stalin

about his plans to invade the Soviet Union.

And so you’re talking to a person like your friends

and you’re fighting against the enemy,

whatever the formulation of the enemy is.

But meanwhile, the whole time you’re building up troops

on the border.

That’s extremely.

Wait, wait, so Hitler and Stalin were like

pretending to be friends?

Yeah.

Well, my history knowledge is terrible.

Oh yeah.

That’s crazy.

Yeah, that they were, oh man.

And it worked because Stalin,

until the troops crossed the border

and invaded in Operation Barbarossa

where this storm of Nazi troops

invaded large parts of the Soviet Union.

And hence, one of the biggest wars in human history began.

Stalin for sure thought that this was never going to be,

that Hitler was not crazy enough to invade the Soviet Union.

And it makes, geopolitically makes total sense

to be collaborators.

And ideologically, even though there’s a tension

between communism and fascism or national socialism,

however you formulate it,

it still feels like this is the right way

to battle the West.

Right.

They were more ideologically aligned.

They in theory had a common enemy, which is the West.

So it made total sense.

And in terms of negotiations

and the way things were communicated,

it seemed to Stalin that for sure,

that they would remain, at least for a while,

peaceful collaborators.

And because of that, everybody

in the Soviet Union believed it, so it was a huge shock

when Kiev was invaded.

And you hear echoes of that when I travel to Ukraine,

sort of the shock of the invasion.

It’s not just the invasion on one particular border,

but the invasion of the capital city.

And just like, holy shit, especially at that time,

when you thought World War I

was the war to end all wars.

You would never have this kind of war again.

And holy shit, this person is mad enough

to try to take on this monster in the Soviet Union.

So it’s no longer going to be a war

of hundreds of thousands dead.

It’ll be a war of tens of millions dead.

And yeah, but that, that’s a very large scale kind of lie,

but I’m sure there’s in politics and geopolitics,

that kind of lying happening all the time.

And a lot of people pay financially

and with their lives for that kind of lying.

But in our personal lives, I don’t know how often we,

maybe we.

I think people do.

I mean, like think of spouses

cheating on their partners, right?

And then like having to lie,

like where were you last night?

Stuff like that. Oh shit, that’s tough.

Yeah, that’s true.

Like that’s, I think, you know, I mean,

unfortunately that stuff happens all the time, right?

Or having like multiple families, that one is great.

When each family doesn’t know about the other one

and like maintaining that life.

There’s probably a sense of excitement about that too.

Or. It seems unnecessary, yeah.

But. Why?

Well, just lying.

Like, you know, the truth finds a way of coming out.

You know? Yes.

But hence that’s the thrill.

Yeah, perhaps.

Yeah, people.

I mean, you know, that’s why I think actually like poker.

What’s so interesting about poker

is most of the best players I know,

there are always exceptions, you know?

There are always bad eggs.

But actually poker players are very honest people.

I would say they are more honest than the average,

you know, if you just took random population sample.

Because A, you know, I think, you know,

humans like to have that.

Most people like to have some kind of, you know,

mysterious, you know, an opportunity to do something

like a little edgy.

So we get to sort of scratch that itch

of being edgy at the poker table,

where it’s like, it’s part of the game.

Everyone knows what they’re in for, and that’s allowed.

And you get to like really get that out of your system.

And then also like poker players learned that, you know,

I would play in a huge game against some of my friends,

even my partner, Igor, where we will be, you know,

absolutely going at each other’s throats,

trying to draw blood in terms of winning each other’s money

off each other and like getting under each other’s skin,

winding each other up, doing the craftiest moves we can.

But then once the game’s done, you know,

the winners and the losers will go off

and get a drink together and have a fun time

and like talk about it in this like weird academic way

afterwards, because, and that’s why games are so great.

Cause you get to like live out like this competitive urge

that, you know, most people have.

What’s it feel like to lose?

Like we talked about bluffing when it worked out.

What about when you, when you go broke?

So like in a game, I’m, fortunately I’ve never gone broke.

You mean like full life?

Full life, no.

I know plenty of people who have.

And I don’t think Igor would mind me saying he went,

you know, he went broke once in poker, you know,

early on when we were together.

I feel like you haven’t lived unless you’ve gone broke.

Oh yeah.

Some sense.

Right.

Some fundamental sense.

I mean, I’m happy, I’ve sort of lived through it,

vicariously through him when he did it at the time.

But yeah, what’s it like to lose?

Well, it depends.

So it depends on the amount.

It depends what percentage of your net worth

you’ve just lost.

It depends on your brain chemistry.

It really, you know, varies from person to person.

You have a very cold calculating way of thinking about this.

So it depends what percentage.

Well, it does, it really does, right?

Yeah, it’s true, it’s true.

I mean, that’s another thing poker trains you to do.

You see, you see everything in percentages

or you see everything in like ROI or expected hourlies

or cost benefit, et cetera.

You know, so that’s, one of the things I’ve tried to do

is calibrate the strength of my emotional response

to the win or loss that I’ve received.

Because it’s no good if you like, you know,

you have a huge emotional dramatic response to a tiny loss

or on the flip side, you have a huge win

and you’re sort of so dead inside

that you don’t even feel it.

Well, that’s, you know, that’s a shame.

I want my emotions to calibrate with reality

as much as possible.

So yeah, what’s it like to lose?

I mean, I’ve had times where I’ve lost, you know,

busted out of a tournament that I thought I was gonna win in

especially if I got really unlucky or I make a dumb play

where I’ve gone away and like, you know, kicked the wall,

punched a wall, I like nearly broke my hand one time.

Like I’m a lot less competitive than I used to be.

Like I was like pathologically competitive in my like

late teens, early twenties, I just had to win at everything.

And I think that sort of slowly waned as I’ve gotten older.

According to you, yeah.

According to me.

I don’t know if others would say the same, right?

I feel like ultra competitive people,

like I’ve heard Joe Rogan say this to me.

It’s like that he’s a lot less competitive

than he used to be.

I don’t know about that.

Oh, I believe it.

No, I totally believe it.

Like, because as you get, you can still be,

like I care about winning.

Like when, you know, I play a game with my buddies online

or, you know, whatever it is,

polytopia is my current obsession.

Like when I.

Thank you for passing on your obsession to me.

Are you playing now?

Yeah, I’m playing now.

We gotta have a game.

But I’m terrible and I enjoy playing terribly.

I don’t wanna have a game because that’s gonna pull me

into your monster of like a competitive play.

It’s important, it’s an important skill.

I’m enjoying playing on the, I can’t.

You just do the points thing, you know, against the bots.

Yeah, against the bots.

And I can’t even do the, there’s like a hard one

and there’s a very hard one.

And then it’s crazy, yeah.

It’s crazy.

I can’t, I don’t even enjoy the hard one.

The crazy, I really don’t enjoy.

Cause it’s intense.

You have to constantly try to win

as opposed to enjoy building a little world and.

Yeah, no, no, no.

There’s no time for exploration in polytopia.

You gotta get.

Well, when, once you graduate from the crazies,

then you can come play the.

Graduate from the crazies.

Yeah, so in order to be able to play a decent game

against like, you know, our group,

you’ll need to be, you’ll need to be consistently winning

like 90% of games against 15 crazy bots.

Yeah.

And you’ll be able to, like there’ll be,

I could teach you it within a day, honestly.

How to beat the crazies?

How to beat the crazies.

And then, and then you’ll be ready for the big leagues.

Generalizes to more than just polytopia.

But okay, why were we talking about polytopia?

Losing hurts.

Losing hurts, oh yeah.

Yes, competitiveness over time.

Oh yeah.

I think it’s more that, at least for me,

I still care about playing,

about winning when I choose to play something.

It’s just that I don’t see the world

as zero sum as I used to be, you know?

I think as one gets older and wiser,

you start to see the world more as a positive-sum thing.

Or at least you’re more aware of externalities,

of scenarios, of competitive interactions.

And so, yeah, I just like, I’m more,

and I’m more aware of my own, you know, like,

if I have a really strong emotional response to losing,

and that makes me then feel shitty for the rest of the day,

and then I beat myself up mentally for it.

Like, I’m now more aware

that that’s an unnecessary negative externality.

So I’m like, okay, I need to find a way to turn this down,

you know, dial this down a bit.

Was poker the thing that has,

if you think back at your life,

and think about some of the lower points of your life,

like the darker places you’ve got in your mind,

did it have to do something with poker?

Like, did losing spark the descent into darkness,

or was it something else?

I think my darkest points in poker

were when I was wanting to quit and move on to other things,

but I felt like I hadn’t ticked all the boxes

I wanted to tick.

Like, I wanted to be the most winningest female player,

which is by itself a bad goal.

You know, that was one of my initial goals,

and I was like, well, I haven’t, you know,

and I wanted to win a WPT event.

I’ve won one of these, I’ve won one of these,

but I want one of those as well.

And that sort of, again, like,

it’s a drive of like overoptimization to random metrics

that I decided were important

without much wisdom at the time, but then like carried on.

That made me continue chasing it longer

than I still actually had the passion to chase it for.

And I don’t have any regrets that, you know,

I played for as long as I did, because who knows,

you know, I wouldn’t be sitting here,

I wouldn’t be living this incredible life

that I’m living now.

This is the height of your life right now.

This is it, peak experience, absolute pinnacle

here in your robot land with your creepy light.

No, it is, I mean, I wouldn’t change a thing

about my life right now, and I feel very blessed to say that.

So, but the dark times were in the sort of like 2016 to 18,

even sooner really, where I was like,

I had stopped loving the game,

and I was going through the motions,

and I would, and then I was like, you know,

I would take the losses harder than I needed to,

because I’m like, ah, it’s another one.

And it was, I was aware that like,

I felt like my life was ticking away,

and I was like, is this gonna be what’s on my tombstone?

Oh yeah, she played the game of, you know,

this zero sum game of poker,

slightly more optimally than her next opponent.

Like, cool, great, legacy, you know?

So, I just wanted, you know, there was something in me

that knew I needed to be doing something

more directly impactful and just meaningful.

It was just like your search for meaning,

and I think it’s a thing a lot of poker players,

even a lot of, I imagine any games players

who sort of love intellectual pursuits,

you know, I think you should ask Magnus Carlsen

this question, I don’t know what he’s on.

He’s walking away from chess, right?

Yeah, like, it must be so hard for him.

You know, he’s been on the top for so long,

and it’s like, well, now what?

He’s got this incredible brain, like, what to put it to?

And, yeah, it’s.

It’s this weird moment where I’ve just spoken with people

that won multiple gold medals at the Olympics,

and the depression hits hard after you win.

Dopamine crash.

Because it’s a kind of a goodbye,

saying goodbye to that person,

to all the dreams you had that thought,

you thought would give meaning to your life,

but in fact, life is full of constant pursuits of meaning.

It doesn’t, you don’t like arrive and figure it all out,

and there’s endless bliss,

and it continues going on and on.

You constantly have to figure out how to rediscover yourself.

And so for you, like that struggle to say goodbye to poker,

you have to like find the next.

There’s always a bigger game.

That’s the thing.

That’s my motto is like, what’s the next game?

And more importantly,

because obviously game usually implies zero sum,

like what’s a game which is like Omni win?

Like what? Omni win.

Omni win.

Why is Omni win so important?

Because if everyone plays zero sum games,

that’s a fast track to either completely stagnating

as a civilization or, far more likely,

to extinct ourselves.

You know, like the playing field is finite.

You know, nuclear powers are playing,

you know, a game of poker with, you know,

but their chips are nuclear weapons, right?

And the stakes have gotten so large

that if anyone makes a single bet, you know,

fires some weapons, the playing field breaks.

I made a video on this.

Like, you know, the playing field is finite.

And if we keep playing these adversarial zero sum games,

thinking that we, you know,

in order for us to win, someone else has to lose,

or if we lose that, you know, someone else wins,

that will extinct us.

It’s just a matter of when.

What do you think about that mutually assured destruction,

that very simple,

almost to the point of caricaturing game theory idea

that does seem to be at the core

of why we haven’t blown each other up yet

with nuclear weapons.

Do you think there’s some truth to that,

this kind of stabilizing force

of mutually assured destruction?

And do you think that’s gonna hold up

through the 21st century?

I mean, it has held.

Yes, there’s definitely truth to it,

that it was a, you know, it’s a Nash equilibrium.

Yeah, are you surprised it held this long?

Isn’t it crazy?

It is crazy when you factor in all the like

near miss accidental firings.

Yes, that makes me wonder, like, you know,

are you familiar with the like quantum suicide

thought experiment, where it’s basically like,

you have a, you know, like a Russian roulette

type scenario hooked up to some kind of quantum event,

you know, particle splitting or pair of particles splitting.

And if it goes A,

then the gun doesn’t go off, and if it goes B,

then it does go off and it kills you.

Because you can only ever be in the universe,

you know, assuming, like, the Everett branch,

you know, multiverse theory,

you’ll always only end up in the branch

where, you know, option A comes in.

But you run that experiment enough times,

it starts getting pretty damn, you know,

the tree of outcomes gets huge,

there’s a million different scenarios,

but you’ll always find yourself

in the one where it didn’t go off.

And so from that perspective, you are essentially immortal

because you will only find yourself

in the set of observers that make it down that path.

So it’s kind of a…

That doesn’t mean, that doesn’t mean

you’re still not gonna be fucked at some point in your life.

No, of course not, no, I’m not advocating

like that we’re all immortal because of this.

It’s just like a fun thought experiment.

And the point is it like raises this thing

of like these things called observer selection effects,

which Bostrom, Nick Bostrom talks about a lot,

and I think people should go read.

It’s really powerful,

but I think it could be overextended, that logic.

I’m not sure exactly how it can be.

I just feel like you can get, you can overgeneralize

that logic somehow.

Well, no, I mean, it leads you into, like, solipsism,

which is a very dangerous mindset.

Again, if everyone like falls into solipsism of like,

well, I’ll be fine.

That’s a great way of creating a very

self-terminating environment.

But my point is, is that with the nuclear weapons thing,

there have been at least, I think it’s 12 or 11 near misses

of like just stupid things, like there was moonrise

over Norway, and it made weird reflections

of some glaciers in the mountains, which set off,

I think the alarms of NORAD radar,

and that put them on high alert, nearly ready to shoot.

And it was only because the head of Russian military

happened to be at the UN in New York at the time

that they go like, well, wait a second,

why would they fire now when their guy is there?

And it was only that lucky happenstance,

which doesn’t happen very often where they didn’t then

escalate it into firing.

And there’s a bunch of these different ones.

Stanislav Petrov, like, is the person

who should be the most famous person on earth,

cause he’s probably on expectation,

saved the most human lives of anyone,

like billions of people by ignoring Russian orders to fire

because he felt in his gut that actually

this was a false alarm.

And it turned out to be a false alarm. You know, very hard thing to do.

And there’s so many of those scenarios that I can’t help

but wonder at this point whether we’re having this kind

of, like, selection effect thing going on.

Cause you look back and you’re like, geez,

that’s a lot of near misses.

But of course we don’t know the actual probabilities

that each one would have ended up

in nuclear war.

Maybe they were not that likely, but still the point is,

it’s a very dark, stupid game that we’re playing.

And it is an absolute moral imperative if you ask me

to get as many people thinking about ways

to make this, like, very precarious situation more stable.

Cause we’re in a Nash equilibrium,

but it’s not like we’re in the bottom of a pit.

You know, if you would like map it topographically,

it’s not like a stable ball at the bottom of a thing.

We’re not in equilibrium because of that.

We’re on the top of a hill with a ball balanced on top.

And just any little nudge could send it flying down

and you know, nuclear war pops off

and hellfire and bad times.

On the positive side,

life on earth will probably still continue.

And another intelligent civilization might still pop up.

Maybe.

Several millennia after.

Pick your X risk, depends on the X risk.

Nuclear war, sure.

That’s one of the perhaps less bad ones.

Green goo through synthetic biology, very bad.

Will turn, you know, destroy all, you know,

organic matter through, you know,

it’s basically like a biological paperclip maximizer.

Also bad.

Or AI type, you know, mass extinction thing as well

would also be bad.

Shh, they’re listening.

There’s a robot right behind you.

Okay, wait.

So let me ask you about this from a game theory perspective.

Do you think we’re living in a simulation?

Do you think we’re living inside a video game

created by somebody else?

Well, so what was the second part of the question?

Do I think we’re living in a simulation and?

A simulation that is observed by somebody

for purpose of entertainment.

So like a video game.

Are we listening?

Are we, because there’s a,

it’s like Phil Hellmuth type of situation, right?

Like there’s a creepy level of like,

this is kind of fun and interesting.

Like there’s a lot of interesting stuff going on.

Maybe that could be somehow integrated

into the evolutionary process where the way we perceive

and are.

Are you asking me if I believe in God?

Sounds like it.

Kind of, but God seems to be not optimizing

in the different formulations of God that we conceive of.

He doesn’t seem to be, or she, optimizing

for like personal entertainment.

Maybe the older gods did.

But the, you know, just like, basically like a teenager

in their mom’s basement watching, create a fun universe

to observe what kind of crazy shit might happen.

Okay, so to try and answer this.

Do I think there is some kind of extraneous intelligence

to like our, you know, classic measurable universe

that we, you know, can measure with, you know,

through our current physics and instruments?

I think so, yes.

Partly because I’ve had just small little bits of evidence

in my own life, which have made me question.

Like, so I was a diehard atheist, even five years ago.

You know, I got into like the rationality community,

big fan of LessWrong, which continues to be an incredible resource.

But I’ve just started to have too many little snippets

of experience, which don’t make sense with the current sort

of purely materialistic explanation of how reality works.

Isn’t that just like a humbling, practical realization

that we don’t know how reality works?

Isn’t that just a reminder to yourself that you’re

like, I don’t know how reality works, isn’t that just

a reminder to yourself?

Yeah, no, it’s a reminder of epistemic humility

because I fell too hard, you know, same as people,

like I think, you know, many people who are just like,

my religion is the way, this is the correct way,

this is the work, this is the law, you are immoral

if you don’t follow this, blah, blah, blah.

I think they are lacking epistemic humility.

They’re a little too much hubris there.

But similarly, I think that sort of the Richard Dawkins

realism is too rigid as well and doesn’t, you know,

there’s a way to try and navigate these questions

which still honors the scientific method,

which I still think is our best sort of realm

of like reasonable inquiry, you know, a method of inquiry.

So an example, two kind of notable examples

that like really rattled my cage.

The first one was actually in 2010,

quite early on in my poker career.

And I, remember the Icelandic volcano that erupted

that like shut down kind of all Atlantic airspace.

And it meant I got stuck down in the South of France.

I was there for something else.

And I couldn’t get home and someone said,

well, there’s a big poker tournament happening in Italy.

Maybe, do you wanna go?

I was like, all right, sure.

Like, let’s, you know, got a train across,

found a way to get there.

And the buy in was 5,000 euros,

which was much bigger than my bankroll would normally allow.

And so I played a feeder tournament, won my way in

kind of like I did with the Monte Carlo big one.

So then I won my way, you know,

from 500 euros into 5,000 euros to play this thing.

And on day one of the big tournament,

which turned out to be

the biggest tournament ever held in Europe

at the time.

It got over like 1,200 people, absolutely huge.

And I remember they dimmed the lights before, you know,

the normal shuffle up and deal

to tell everyone to start playing.

And they played Chemical Brothers, Hey Boy, Hey Girl,

which I don’t know why it’s notable,

but it was just like a really,

it was a song I always liked.

It was like one of these like pump me up songs.

And I was sitting there thinking, oh yeah, it’s exciting.

I’m playing this really big tournament.

And out of nowhere, just suddenly this voice in my head,

just, and it sounded like my own sort of, you know,

when you think in your mind, you hear a voice kind of, right?

At least I do.

And so it sounded like my own voice and it said,

you are going to win this tournament.

And it was so powerful that I got this like wave of like,

you know, sort of goosebumps down my body.

And that I even, I remember looking around being like,

did anyone else hear that?

And obviously people are on their phones,

like no one else heard it.

And I was like, okay, six days later,

I win the fucking tournament out of 1,200 people.

And I don’t know how to explain it.

Okay, yes, maybe I have that feeling

before every time I play.

And it’s just that I happened to, you know,

because I won the tournament, I retroactively remembered it.

But that’s just.

Or the feeling gave you a kind of,

you know, Hellmuthian.

Well, exactly.

Like it gave you a confidence, a deep confidence.

And it did.

It definitely did.

Like, I remember then feeling this like sort of,

well, although I remember then on day one,

I then went and lost half my stack quite early on.

And I remember thinking like, oh, well, that was bullshit.

You know, what kind of premonition is this?

Thinking, oh, I’m out.

But you know, I managed to like keep it together

and recover and then just went like pretty perfectly

from then on.

And either way, it definitely instilled me

with this confidence.

And I don’t want to put, I can’t put an explanation.

Like, you know, was it some, you know, huge extraneous,

you know, supernatural thing driving me?

Or was it just my own self confidence or something

that just made me make the right decisions?

I don’t know.

And I don’t, I’m not going to put a frame on it.

And I think.

I think I know a good explanation.

So we’re a bunch of NPCs living in this world

created by, in the simulation.

And then people, not people, creatures from outside

of the simulation sort of can tune in

and play your character.

And that feeling you got is somebody just like,

they got to play a poker tournament through you.

Honestly, it felt like that.

It did actually feel a little bit like that.

But it’s been 12 years now.

I’ve retold the story many times.

Like, I don’t even know how much I can trust my memory.

You’re just an NPC retelling the same story.

Because they just played the tournament and left.

Yeah, they’re like, oh, that was fun.

Cool.

Yeah, cool.

Next time.

And now you’re for the rest of your life left

as a boring NPC retelling this story of greatness.

But it was, and what was interesting was that after that,

then I didn’t obviously win a major tournament

for quite a long time.

And it left, that was actually another sort of dark period

because I had this incredible,

like the highs of winning that,

just on a like material level were insane,

winning the money.

I was on the front page of newspapers

because there was like this girl that came out of nowhere

and won this big thing.

And so again, like sort of chasing that feeling

was difficult.

But then on top of that, there was this feeling

of like almost being touched by something bigger

that was like, ah.

So maybe, did you have a sense

that I might be somebody special?

Like this kind of,

I think that’s the confidence thing

that maybe you could do something special in this world

after all kind of feeling.

I definitely, I mean, this is the thing

I think everybody wrestles with to an extent, right?

We are truly the protagonists in our own lives.

And so it’s a natural bias, human bias

to feel special.

And I think, and in some ways we are special.

Every single person is special

because you are that, the universe does,

the world literally does revolve around you.

That’s the thing in some respect.

But of course, if you then zoom out

and take the amalgam of everyone’s experiences,

then no, it doesn’t.

So there is this shared sort of objective reality,

but sorry, there’s objective reality that is shared,

but then there’s also this subjective reality

which is truly unique to you.

And I think both of those things coexist.

And it’s not like one is correct and one isn’t.

And again, anyone who’s like,

oh no, your lived experience is everything

versus your lived experience is nothing.

No, it’s a blend between these two things.

They can exist concurrently.

But there’s a certain kind of sense

that at least I’ve had my whole life.

And I think a lot of people have this as like,

well, I’m just like this little person.

Surely I can’t be one of those people

that do the big thing, right?

There’s all these big people doing big things.

There’s big actors and actresses, big musicians.

There’s big business owners and all that kind of stuff,

scientists and so on.

I have my own subjective experience that I enjoy and so on,

but there’s like a different layer.

Like surely I can’t do those great things.

I mean, one of the things just having interacted

with a lot of great people, I realized,

no, they’re like just the same humans as me.

And that realization I think is really empowering.

And to remind yourself.

What are they?

Huh?

What are they?

Are they?

Well, in terms of.

Depends on some, yeah.

They’re like a bag of insecurities and.

Yes.

Peculiar sort of, like their own little weirdnesses

and so on, I should say also not.

They have the capacity for brilliance,

but they’re not generically brilliant.

Like, you know, we tend to say this person

or that person is brilliant, but really no,

they’re just like sitting there and thinking through stuff

just like the rest of us.

Right.

I think they’re in the habit of thinking through stuff

seriously and they’ve built up a habit of not allowing them,

their mind to get trapped in a bunch of bullshit

and minutia of day to day life.

They really think big ideas, but those big ideas,

it’s like allowing yourself the freedom to think big,

to realize that you can be the one that actually solves

this particular big problem.

First identify a big problem that you care about,

then like, I can actually be the one

that solves this problem.

And like allowing yourself to believe that.

And I think sometimes you do need to have like

that shock go through your body and a voice tells you,

you’re gonna win this tournament.

Well, exactly.

And whether it was, it’s this idea of useful fictions.

So again, like going through all like

the classic rationalist training of LessWrong

where it’s like, you want your map,

you know, the image you have of the world in your head

to as accurately match up with how the world actually is.

You want the map and the territory to perfectly align

as, you know, you want it to be

as an accurate representation as possible.

I don’t know if I fully subscribed to that anymore,

having now had these moments of like feeling of something

either bigger or actually just being overconfident.

Like there is value in overconfidence sometimes.

If you, you know, take, you know,

take Magnus Carlsen, right?

If he, I’m sure from a young age,

he knew he was very talented,

but I wouldn’t be surprised if he was also had something

in him to, well, actually maybe he’s a bad example

because he truly is the world’s greatest,

but someone who it was unclear

whether they were gonna be the world’s greatest,

but ended up doing extremely well

because they had this innate, deep self confidence,

this like even overblown idea

of how good their relative skill level is.

That gave them the confidence to then pursue this thing

and they’re like with the kind of focus and dedication

that it requires to excel in whatever it is

you’re trying to do, you know?

And so there are these useful fictions

and that’s where I think I diverge slightly

with the classic sort of rationalist community

because that’s a field that is worth studying

of like how the stories we tell,

what the stories we tell to ourselves,

even if they are actually false,

and even if we suspect they might be false,

how it’s better to sort of have that like little bit

of faith, like value in faith, I think actually.

And that’s partly another thing

that’s now led me to explore the concept of God,

whether you wanna call it a simulator,

the classic theological thing.

I think we’re all like elucidating to the same thing.

Now, I don’t know, I’m not saying,

because obviously the Christian God

is like all benevolent, endless love.

The simulation, at least one of the simulation hypotheses

is like, as you said, like a teenager in his bedroom

who doesn’t really care, doesn’t give a shit

about the individuals within there.

It just like wants to see how the thing plays out

because it’s curious and it could turn it off like that.

Where on the sort of psychopathy

to benevolent spectrum God is, I don’t know.

But just having a little bit of faith

that there is something else out there

that might be interested in our outcome

is I think an essential thing actually for people to find.

A, because it creates commonality between,

it’s something we can all share.

And it is uniquely humbling of all of us to an extent.

It’s like a common objective.

But B, it gives people that little bit of like reserve

when things get really dark.

And I do think things are gonna get pretty dark

over the next few years.

But it gives that like,

to think that there’s something out there

that actually wants our game to keep going.

I keep calling it the game.

It’s a thing C and I, we call it the game.

You and C, a.k.a. Grimes, call what the game?

Everything, the whole thing?

Yeah, we joke about like.

So everything is a game.

Well, the universe, like what if it’s a game

and the goal of the game is to figure out like,

well, either how to beat it, how to get out of it.

Maybe this universe is an escape room,

like a giant escape room.

And the goal is to figure out,

put all the pieces to puzzle, figure out how it works

in order to like unlock this like hyperdimensional key

and get out beyond what it is.

That’s.

No, but then, so you’re saying it’s like different levels

and it’s like a cage within a cage within a cage

and then, like, one cage at a time,

you figure out how to escape that.

Like a new level up, you know,

like us becoming multi planetary would be a level up

or us, you know, figuring out how to upload

our consciousnesses to the thing.

That would probably be a leveling up or spiritually,

you know, humanity becoming more combined

and less adversarial and bloodthirsty

and us becoming a little bit more enlightened.

That would be a leveling up.

You know, there’s many different frames to it,

whether it’s physical, you know, digital

or like metaphysical.

I wonder what the levels, I think,

I think level one for earth is probably

the biological evolutionary process.

So going from single cell organisms to early humans.

Then maybe level two is whatever’s happening inside our minds

and creating ideas and creating technologies.

That’s like evolutionary process of ideas.

And then multi planetary is interesting.

Is that fundamentally different

from what we’re doing here on earth?

Probably, because it allows us to like exponentially scale.

It delays the Malthusian trap, right?

It’s a way to keep the playing field,

to make the playing field get larger

so that it can accommodate more of our stuff, more of us.

And that’s a good thing,

but I don’t know if it like fully solves this issue of,

well, this thing called Moloch,

which we haven’t talked about yet,

but which is basically,

I call it the God of unhealthy competition.

Yeah, let’s go to Moloch.

What’s Moloch?

You did a great video on Moloch and one aspect of it,

the application of it to one aspect.

Instagram beauty filters.

True.

Very niche, but I wanted to start off small.

So Moloch was originally coined as,

well, so apparently back in the like Canaanite times,

it was to say ancient Carthaginian,

I can never say it Carthaginian,

somewhere around like 300 BC or 280, I don’t know.

There was supposedly this death cult

who would sacrifice their children

to this awful demon God thing they called Moloch

in order to get power to win wars.

So really dark, horrible things.

And it was literally like about child sacrifice,

whether they actually existed or not, we don’t know,

but in mythology they did.

And this God that they worshiped

was this thing called Moloch.

And then I don’t know,

it seemed like it was kind of quiet throughout history

in terms of mythology beyond that,

until this movie Metropolis in 1927 talked about this,

you see that there was this incredible futuristic city

that everyone was living great in,

but then the protagonist goes underground into the sewers

and sees that the city is run by this machine.

And this machine basically would just like kill the workers

all the time because it was just so hard to keep it running.

They were always dying.

So there was all this suffering that was required

in order to keep the city going.

And then the protagonist has this vision

that this machine is actually this demon Moloch.

So again, it’s like this sort of like mechanistic consumption

of humans in order to get more power.

And then Allen Ginsberg wrote a poem in the 60s,

an incredible poem called Howl, about this thing Moloch.

And a lot of people sort of quite understandably

take the interpretation of that,

that he’s talking about capitalism.

But then the sort of pièce de résistance

that’s moved Moloch into this idea of game theory

was Scott Alexander of Slate Star Codex

wrote this incredible,

well, literally I think it might be my favorite piece

of writing of all time.

It’s called Meditations on Moloch.

Everyone must go read it.

And…

Slate Star Codex is a blog.

It’s a blog, yes.

We can link to it in the show notes or something, right?

No, don’t.

I, yes, yes.

But I like how you assume

I have a professional operation going on here.

I mean…

I shall try to remember to…

You were gonna assume.

What do you…

What are you, what do you want?

You’re giving the impression of it.

Yeah, I’ll look, please.

If I don’t, please somebody in the comments remind me.

I’ll help you.

If you don’t know this blog,

it’s one of the best blogs ever probably.

You should probably be following it.

Yes.

Are blogs still a thing?

I think they are still a thing, yeah.

Yeah, he’s migrated onto Substack,

but yeah, it’s still a blog.

Anyway.

Substack better not fuck things up, but…

I hope not, yeah.

I hope they don’t, I hope they don’t turn Molochy,

which will mean something to people when we continue.

Yeah.

When I stop interrupting for once.

No, no, it’s good.

Go on, yeah.

So anyway, so he writes,

he writes this piece, Meditations on Moloch,

and basically he analyzes the poem and he’s like,

okay, so it seems to be something relating

to where competition goes wrong.

And, you know, Moloch was historically this thing

of like where people would sacrifice a thing

that they care about, in this case, children,

their own children, in order to gain power,

a competitive advantage.

And if you look at almost everything that sort of goes wrong

in our society, it’s that same process.

So with the Instagram beauty filters thing,

you know, if you’re trying to become

a famous Instagram model,

you are incentivized to post the hottest pictures

of yourself that you can, you know,

you’re trying to play that game.

There’s a lot of hot women on Instagram.

How do you compete against them?

You post really hot pictures

and that’s how you get more likes.

As technology gets better, you know,

more makeup techniques come along.

And then more recently, these beauty filters

where like at the touch of a button,

it makes your face look absolutely incredible

compared to your natural face.

These technologies come along,

it’s everyone is incentivized to that short term strategy.

But over on net, it’s bad for everyone

because now everyone is kind of feeling

like they have to use these things.

And these things like they make you like,

the reason why I talked about them in this video

is because I noticed it myself, you know,

like I was trying to grow my Instagram for a while,

I’ve given up on it now.

But yeah, and I noticed these filters,

how good they made me look.

And I’m like, well, I know that everyone else

is kind of doing it.

Go subscribe to Liv’s Instagram.

Please, so I don’t have to use the filters.

I’ll post a bunch of, yeah, make it blow up.

So yeah, you felt the pressure actually.

Exactly, these short term incentives

to do this like, this thing that like either sacrifices

your integrity or something else

in order to like stay competitive,

which in aggregate, like,

creates this like sort of race to the bottom spiral

where everyone ends up in a situation

which is worse off than if they hadn't started,

you know, than they were before.

Kind of like if, like at a football stadium,

like the system is so badly designed,

a competitive system of like everyone sitting

and having a view that if someone at the very front

stands up to get an even better view,

it forces everyone else behind

to like adopt that same strategy

just to get to where they were before.

But now everyone’s stuck standing up.

Like, so you need this like top down God’s eye coordination

to make it go back to the better state.

But from within the system, you can’t actually do that.

So that’s kind of what this Moloch thing is.

It’s this thing that makes people sacrifice values

in order to optimize for the winning the game in question,

the short term game.

But this Moloch, can you attribute it

to any one centralized source

or is it an emergent phenomena

from a large collection of people?

Exactly that.

It’s an emergent phenomena.

It’s a force of game theory.

It’s a force of bad incentives on a multi agent system

where you’ve got more, you know,

prisoner’s dilemma is technically

a kind of Moloch system as well,

but it’s just a two player thing.

But another word for Moloch is a multipolar trap.

Where basically you just got a lot of different people

all competing for some kind of prize.

And it would be better

if everyone didn’t do this one shitty strategy,

but because that strategy gives you a short term advantage,

everyone’s incentivized to do it.

And so everyone ends up doing it.
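
A minimal sketch of that multipolar-trap dynamic, with made-up payoff numbers (the values are assumptions for illustration, not anything from the conversation): whatever the others do, switching to the shortcut strategy pays a little better, so everyone switches, and everyone ends up worse off than if nobody had.

```python
# Toy N-player multipolar trap / race to the bottom (illustrative numbers only).
# "restrain" = don't use the shortcut (stay seated, skip the beauty filter).
# "shortcut" = use it for a short-term relative advantage.

def payoff(my_choice, others_shortcut_fraction):
    base = 10.0                                     # wellbeing if nobody used the shortcut
    cost = 4.0 * others_shortcut_fraction           # everyone is hurt as more people defect
    edge = 2.0 if my_choice == "shortcut" else 0.0  # small relative edge for defecting
    return base - cost + edge

for others in (0.0, 0.5, 1.0):
    r = payoff("restrain", others)
    s = payoff("shortcut", others)
    print(f"others defecting: {others:.0%}  restrain={r:.1f}  shortcut={s:.1f}")

# Defecting pays more no matter what others do (a dominant strategy), yet
# all-defect (10 - 4 + 2 = 8) leaves everyone below all-restrain (10).
```

That dominance-plus-worse-for-everyone shape is the same as the football-stadium example: standing up is individually better whatever the others do, but everyone standing is worse than everyone sitting.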

So the responsibility for,

I mean, social media is a really nice place

for a large number of people to play game theory.

And so they also have the ability

to then design the rules of the game.

And is it on them to try to anticipate

what kind of like to do the thing

that poker players are doing, to run simulations?

Ideally that would have been great.

If, you know, Mark Zuckerberg and Jack

and all the, you know, the Twitter founders and everyone,

if they had at least just run a few simulations

of how their algorithms would, you know,

if different types of algorithms would turn out for society,

that would have been great.

That’s really difficult to do

that kind of deep philosophical thinking

about thinking about humanity actually.

So not kind of this level of how do we optimize engagement

or what brings people joy in the short term,

but how is this thing going to change

the way people see the world?

How is it gonna get morphed in iterative games played

into something that will change society forever?

That requires some deep thinking.

That’s, I hope there’s meetings like that inside companies,

but I haven’t seen them.

There aren’t, that’s the problem.

And it’s difficult because like,

when you’re starting up a social media company,

you know, you’re aware that you’ve got investors to please,

there’s bills to pay, you know,

there’s only so much R&D you can afford to do.

You’ve got all these like incredible pressures,

bad incentives to get on and just build your thing

as quickly as possible and start making money.

And, you know, I don’t think anyone intended

when they built these social media platforms

and just to like preface it.

So the reason why, you know, social media is relevant

because it’s a very good example of like,

everyone these days is optimizing for, you know, clicks,

whether it’s a social media platforms themselves,

because, you know, every click gets more, you know,

impressions and impressions pay for, you know,

they get advertising dollars

or whether it’s individual influencers

or, you know, whether it’s a New York Times or whoever,

they’re trying to get their story to go viral.

So everyone’s got this bad incentive of using, you know,

as you called it, the clickbait industrial complex.

That’s a very molly key system

because everyone is now using worse and worse tactics

in order to like try and win this attention game.

And yeah, so ideally these companies

would have had enough slack in the beginning

in order to run these experiments to see,

okay, what are the ways this could possibly go wrong

for people?

What are the ways that Moloch,

they should be aware of this concept of Moloch

and realize that whenever you have

a highly competitive multiagent system,

which social media is a classic example of,

millions of agents all trying to compete

for likes and so on,

and you try and bring all this complexity down

into like very small metrics,

such as number of likes, number of retweets,

whatever the algorithm optimizes for,

that is a guaranteed recipe for this stuff to go wrong

and become a race to the bottom.

I think there should be an honesty when founders,

I think there’s a hunger for that kind of transparency

of like, we don’t know what the fuck we’re doing.

This is a fascinating experiment.

We’re all running as a human civilization.

Let’s try this out.

And like, actually just be honest about this,

that we’re all like these weird rats in a maze.

None of us are controlling it.

There’s this kind of sense like the founders,

the CEO of Instagram or whatever,

Mark Zuckerberg has a control and he’s like,

like with strings playing people.

No, they’re.

He’s at the mercy of this is like everyone else.

He’s just like trying to do his best.

And like, I think putting on a smile

and doing over polished videos

about how Instagram and Facebook are good for you,

I think is not the right way to actually ask

some of the deepest questions we get to ask as a society.

How do we design the game such that we build a better world?

I think a big part of this as well is people,

there’s this philosophy, particularly in Silicon Valley

of well, techno optimism,

technology will solve all our issues.

And there’s a steel man argument to that where yes,

technology has solved a lot of problems

and can potentially solve a lot of future ones.

But it can also, it’s always a double edged sword.

And particularly as you know,

technology gets more and more powerful

and we’ve now got like big data

and we’re able to do all kinds of like

psychological manipulation with it and so on.

Technology is not a values neutral thing.

People think, I used to always think this myself.

It’s like this naive view that,

oh, technology is completely neutral.

It’s just, it’s the humans that either make it good or bad.

No, to the point we’re at now,

the technology that we are creating,

they are social technologies.

They literally dictate how humans now form social groups

and so on beyond that.

And beyond that, it also then,

that gives rise to like the memes

that we then like coalesce around.

And that, if you have the stack that way

where it’s technology driving social interaction,

which then drives like memetic culture

and like which ideas become popular, that’s Moloch.

And we need it the other way around.

So we need to figure out what are the good memes?

What are the good values

that we think we need to optimize for

that like makes people happy and healthy

and like keeps society as robust and safe as possible,

then figure out what the social structure

around those should be.

And only then do we figure out technology,

but we’re doing the other way around.

And as much as I love in many ways

the culture of Silicon Valley,

and like I do think that technology has,

I don’t wanna knock it.

It’s done so many wonderful things for us,

same as capitalism.

There are, we have to like be honest with ourselves.

We’re getting to a point where we are losing control

of this very powerful machine that we have created.

Can you redesign the machine within the game?

Can you just have, can you understand the game enough?

Okay, this is the game.

And this is how we start to reemphasize

the memes that matter,

the memes that bring out the best in us.

You know, like the way I try to be in real life

and the way I try to be online

is to be about kindness and love.

And I feel like I’m sometimes get like criticized

for being naive and all those kinds of things.

But I feel like I’m just trying to live within this game.

I’m trying to be authentic.

Yeah, but also like, hey, it’s kind of fun to do this.

Like you guys should try this too, you know,

and that’s like trying to redesign

some aspects of the game within the game.

Is that possible?

I don’t know, but I think we should try.

I don’t think we have an option but to try.

Well, the other option is to create new companies

or to pressure companies that,

or anyone who has control of the rules of the game.

I think we need to be doing all of the above.

I think we need to be thinking hard

about what are the kind of positive, healthy memes.

You know, as Elon said,

he who controls the memes controls the universe.

He said that.

I think he did, yeah.

But there’s truth to that.

It’s very, there is wisdom in that

because memes have driven history.

You know, we are a cultural species.

That’s what sets us apart from chimpanzees

and everything else.

We have the ability to learn and evolve through culture

as opposed to biology or like, you know,

classic physical constraints.

And that means culture is incredibly powerful

and we can create and become victim

to very bad memes or very good ones.

But we do have some agency over which memes,

you know, we, but not only put out there,

but we also like subscribe to.

So I think we need to take that approach.

We also need to, you know,

because I don’t want, I’m making this video right now

called The Attention Wars,

which is about like how Moloch,

like the media machine is this Moloch machine.

Well, it's this kind of, like, blind, dumb thing

where everyone is optimizing for engagement

in order to win their share of the attention pie.

And then if you zoom out,

it’s really like Moloch that’s pulling the strings

because the only thing that benefits from this in the end is Moloch,

you know. Like, our information ecosystem is breaking down.

Like, you look at the state of the US,

we're in a civil war.

It’s just not a physical war.

It’s an information war.

And people are becoming more fractured

in terms of what their actual shared reality is.

Like truly like an extreme left person,

an extreme right person,

like they literally live in different worlds

in their minds at this point.

And it’s getting more and more amplified.

And this force is like a razor blade

pushing through everything.

It doesn’t matter how innocuous a topic is,

it will find a way to split into this,

you know, bifurcated cultural and it’s fucking terrifying.

Because that maximizes the tension.

And that’s like an emergent Moloch type force

that takes anything, any topic

and cuts through it so that it can split nicely

into two groups.

One that’s…

Well, it’s whatever, yeah,

all everyone is trying to do within the system

is just maximize whatever gets them the most attention

because they’re just trying to make money

so they can keep their thing going, right?

And the best emotion for getting attention,

well, because it’s not just about attention on the internet,

it’s engagement, that’s the key thing, right?

In order for something to go viral,

you need people to actually engage with it.

They need to like comment or retweet or whatever.

And of all the emotions that,

there’s like seven classic shared emotions

that studies have found that all humans,

even from like previously uncontacted tribes have.

Some of those are negative, you know, like sadness,

disgust, anger, et cetera, some are positive,

happiness, excitement, and so on.

The one that happens to be the most useful

for the internet is anger.

Because anger, it’s such an active emotion.

If you want people to engage, if someone’s scared,

and I’m not just like talking out my ass here,

there are studies here that have looked into this.

Whereas like if someone’s like,

disgusted or fearful, they actually tend to then be like,

oh, I don’t wanna deal with this.

So they’re less likely to actually engage

and share it and so on, they’re just gonna be like, ugh.

Whereas if they’re enraged by a thing,

well now that triggers all the like,

the old tribalism emotions.

And so that’s how then things get sort of spread,

you know, much more easily.

They out compete all the other memes in the ecosystem.

And so this like, the attention economy,

the wheel that makes it go around

is rage.

I did a tweet, the problem with raging against the machine

is that the machine has learned to feed off rage.

Because it is feeding off our rage.

That’s the thing that’s now keeping it going.

So the more we get angry, the worse it gets.

So the Moloch in this attention,

in the war of attention, is constantly maximizing rage.

What it is optimizing for is engagement.

And it happens to be that engagement

is more propaganda, you know.

I mean, it just sounds like everything is putting,

more and more things are being put through this

like propagandist lens of winning

whatever the war is in question.

Whether it’s the culture war or the Ukraine war, yeah.

Well, I think the silver lining of this,

do you think it’s possible that in the long arc

of this process, you actually do arrive

at greater wisdom and more progress?

It just, in the moment, it feels like people are

tearing each other to shreds over ideas.

But if you think about it, one of the magic things

about democracy and so on, is you have

the blue versus red constantly fighting.

It’s almost like they’re in discourse,

creating devil’s advocate, making devils out of each other.

And through that process, discussing ideas.

Like almost really embodying different ideas

just to yell at each other.

And through the yelling, over the period of decades,

maybe centuries, figuring out a better system.

Like in the moment, it feels fucked up.

But in the long arc, it actually is productive.

I hope so.

That said, we are now in the era of,

just as we have weapons of mass destruction

with nuclear weapons, you know,

that can break the whole playing field,

we now are developing weapons

of informational mass destruction.

Information weapons, you know, WMDs

that basically can be used for propaganda

or just manipulating people however is needed,

whether that’s through dumb TikTok videos,

or, you know, there are significant resources being put in.

I don’t mean to sound like, you know,

to doom and gloom, but there are bad actors out there.

That’s the thing, there are plenty of good actors

within the system who are just trying to stay afloat

in the game, so effectively doing Molochy things.

But then on top of that, we have actual bad actors

who are intentionally trying to, like,

manipulate the other side into doing things.

And using, so because it’s a digital space,

they’re able to use artificial actors, meaning bots.

Exactly, botnets, you know,

and this is a whole new situation

that we’ve never had before.

Yeah, it’s exciting.

You know what I want to do?

You know what I want to do that,

because there is, you know, people are talking about bots

manipulating and, like, malicious bots

that are basically spreading propaganda.

I want to create, like, a bot army for, like,

that fights that. For love?

Yeah, exactly, for love, that fights, that, I mean.

You know, there’s, I mean, there’s truth

to fight fire with fire, it’s like,

but how you always have to be careful

whenever you create, again, like,

Moloch is very tricky. Yeah, yeah.

Hitler was trying to spread the love, too.

Well, yeah, so we thought, but, you know, I agree with you

that, like, that is a thing that should be considered,

but there is, again, everyone,

the road to hell is paved with good intentions.

And this is, there’s always unforeseen circumstances,

you know, outcomes, externalities

of you trying to adopt a thing,

even if you do it in the very best of faith.

But you can learn lessons of history.

If you can run some sims on it first, absolutely.

But also there’s certain aspects of a system,

as we’ve learned through history, that do better than others.

Like, for example, don’t have a dictator,

so, like, if I were to create this bot army,

it’s not good for me to have full control over it.

Because in the beginning, I might have a good understanding

of what’s good and not, but over time,

that starts to get deviated,

because I’ll get annoyed at some assholes,

and I’ll think, okay, wouldn’t it be nice

to get rid of those assholes?

But then that power starts getting to your head,

you become corrupted, that’s basic human nature.

So distribute the power somehow.

We need a love botnet on a DAO.

A DAO love botnet.

Yeah, and without a leader, like without…

Well, exactly, a distributed, right,

but yeah, without any kind of centralized…

Yeah, without even, you know,

basically it’s the more control,

the more you can decentralize the control of a thing

to people, you know, but the balance…

But then you still need the ability to coordinate,

because that’s the issue when something is too,

you know, that’s really, to me, like the culture wars,

the bigger war we’re dealing with is actually between

the sort of the, I don’t know what even the term is for it,

but like centralization versus decentralization.

That’s the tension we’re seeing.

Power in control by a few versus completely distributed.

And the trouble is if you have a fully centralized thing,

then you’re at risk of tyranny, you know,

Stalin type things can happen, or completely distributed.

Now you’re at risk of complete anarchy and chaos

but you can’t even coordinate to like on, you know,

when there’s like a pandemic or anything like that.

So it’s like, what is the right balance to strike

between these two structures?

Can’t Moloch really take hold

in a fully decentralized system?

That’s one of the dangers too.

Yes, very vulnerable to Moloch.

So a dictator can commit huge atrocities,

but they can also make sure the infrastructure works

and trains run on time.

They have that God’s eye view at least.

They have the ability to create like laws and rules

that like force coordination, which stops Moloch.

But then you’re vulnerable to that dictator

getting infected with like this,

with some kind of psychopathy type thing.

What’s reverse Moloch?

Sorry, great question.

So that’s where, so I’ve been working on this series.

It’s been driving me insane for the last year and a half.

I did the first one a year ago.

I can’t believe it’s nearly been a year.

The second one, hopefully will be coming out

in like a month.

And my goal at the end of the series is to like present,

cause basically I’m painting the picture of like

what Moloch is and how it’s affecting

almost all these issues in our society

and how it’s driving.

It’s like kind of the generator function

as people describe it of existential risk.

And then at the end of that.

Wait, wait, the generator function of existential risk.

So you’re saying Moloch is sort of the engine

that creates a bunch of X risks.

Yes, not all of them.

Like a, you know, a.

Just a cool phrase, generator function.

It’s not my phrase.

It’s Daniel Schmacktenberger.

Oh, Schmacktenberger.

I got that from him.

Of course.

All things, it’s like all roads lead back

to Daniel Schmachtenberger, I think.

The dude is, the dude is brilliant.

He’s really brilliant.

After that it’s Mark Twain.

But anyway, sorry.

Totally rude interruptions from me.

No, it’s fine.

So not all X risks.

So like an asteroid technically isn’t

because it’s, you know, it’s just like

this one big external thing.

It’s not like a competition thing going on.

But, you know, synthetic bio, you know,

bio weapons, that’s one because everyone’s incentivized

to build, even for defense, you know,

bad, bad viruses, you know,

just to threaten someone else, et cetera.

Or AI, technically the race to AGI

is kind of potentially a Molochy situation.

But yeah, so if Moloch is this like generator function

that’s driving all of these issues

over the coming century that might wipe us out,

what’s the inverse?

And so far what I’ve gotten to is this character

that I want to put out there called Winwin.

Because Moloch is the God of lose, lose, ultimately.

It masquerades as the God of win, lose,

but in reality it’s lose, lose.

Everyone ends up worse off.

So I was like, well, what’s the opposite of that?

It’s Winwin.

And I was thinking for ages, like,

what’s a good name for this character?

And then tomorrow I was like, okay, well,

don’t try and, you know, think through it logically.

What’s the vibe of Winwin?

And to me, like in my mind, Moloch is like,

and I addressed that in the video,

like it’s red and black.

It’s kind of like very, you know, hyper focused

on it’s one goal you must win.

So Winwin is kind of actually like these colors.

It’s like purple, turquoise.

It’s loves games too.

It loves a little bit of healthy competition,

but constrained, like kind of like before,

like knows how to ring fence zero sum competition

into like just the right amount,

whereby its externalities can be controlled

and kept positive.

And then beyond that, it also loves cooperation,

coordination, love, all these other things.

But it’s also kind of like mischievous,

like, you know, it will have a good time.

It’s not like kind of like boring, you know,

like, oh God, it knows how to have fun.

It can get like, it can get down,

but ultimately it’s like unbelievably wise

and it just wants the game to keep going.

And I call it Winwin.

That’s a good like pet name, Winwin.

I think the, Winwin, right?

And I think it’s formal name when it has to do

like official functions is Omnia.

Omnia.

Yeah.

From like omniscience kind of, why Omnia?

You just like Omnia?

She’s like Omniwin.

Omniwin.

But I’m open to suggestions.

I would like, you know, and this is.

I like Omnia.

Yeah.

But there is an angelic kind of sense to Omnia though.

So Winwin is more fun.

So it’s more like, it embraces the fun aspect.

The fun aspect.

I mean, there is something about sort of,

there’s some aspect to Winwin interactions

that requires embracing the chaos of the game

and enjoying the game itself.

I don’t know.

I don’t know what that is.

That’s almost like a Zen like appreciation

of the game itself, not optimizing

for the consequences of the game.

Right, well, it’s recognizing the value

of competition in and of itself,

it’s not like about winning.

It’s about you enjoying the process of having a competition

and not knowing whether you’re gonna win

or lose this little thing.

But then also being aware that, you know,

what’s the boundary?

How big do I want competition to be?

Because one of the reasons why Moloch is doing so well now

in our society, in our civilization is

because we haven’t been able to ring fence competition.

You know, and so it’s just having all

these negative externalities

and it’s, we’ve completely lost control of it.

You know, it’s, I think my guess is,

and now we’re getting really like,

you know, metaphysical technically,

but I think we’ll be in a more interesting universe

if we have one that has both pure cooperation,

you know, lots of cooperation

and some pockets of competition

than one that’s purely competition, cooperation entirely.

Like it’s good to have some little zero sumness bits,

but I don’t know that fully

and I’m not qualified as a philosopher to know that.

And that’s what reverse Moloch,

so this kind of win, win creature is a system,

is an antidote to the Moloch system.

Yes.

And I don’t know how it’s gonna do that.

But it’s good to kind of try to start

to formulate different ideas,

different frameworks of how we think about that.

Exactly.

At the small scale of a collection of individuals

and a large scale of a society.

Exactly.

It’s a meme, I think it’s an example of a good meme.

And I’m open, I’d love to hear feedback from people

if they think it’s, you know, they have a better idea

or it’s not, you know,

but it’s the direction of memes that we need to spread,

this idea of like, look for the win, wins in life.

Well, on the topic of beauty filters,

so in that particular context

where Moloch creates negative consequences,

Dostoevsky said beauty will save the world.

What is beauty anyway?

It would be nice to just try to discuss

what kind of thing we would like to converge towards

in our understanding of what is beautiful.

So to me, I think something is beautiful

when it can’t be reduced down to easy metrics.

Like if you think of a tree, what is it about a tree,

like a big, ancient, beautiful tree, right?

What is it about it that we find so beautiful?

It’s not, you know, the sweetness of its fruit

or the value of its lumber.

It’s this entirety of it that is,

there’s these immeasurable qualities,

it’s like almost like a qualia of it.

That’s both, like it walks this fine line between pattern,

well, it’s got lots of patternicity,

but it’s not overly predictable.

Again, it walks this fine line between order and chaos.

It’s a very highly complex system.

It’s evolving over time,

the definition of a complex versus,

and this is another Schmachtenberger thing,

a complex versus a complicated system.

A complicated system can be sort of broken down

into bits and pieces,

understood and then put back together.

A complex system is kind of like a black box.

It does all this crazy stuff,

but if you take it apart,

you can’t put it back together again,

because there’s all these intricacies.

And also very importantly, like there’s some of the parts,

sorry, the sum of the whole is much greater

than the sum of the parts.

And that’s where the beauty lies, I think.

And I think that extends to things like art as well.

Like there’s something immeasurable about it.

There’s something we can’t break down to a narrow metric.

Does that extend to humans, you think?

Yeah, absolutely.

So how can Instagram reveal that kind of beauty,

the complexity of a human being?

Good question.

And this takes us back to dating sites and Goodreads,

I think.

Very good question.

I mean, well, I know what it shouldn’t do.

It shouldn’t try and like, right now,

you know, I was talking to like a social media expert

recently, because I was like, oh, I hate that.

There’s such a thing as a social media expert?

Oh, yeah, there are like agencies out there

that you can like outsource,

because I’m thinking about working with one to like,

I want to start a podcast.

You should, you should have done it a long time ago.

Working on it.

It’s going to be called Win Win.

And it’s going to be about this like positive sum stuff.

And the thing that, you know, they always come back and say,

is like, well, you need to like figure out

what your thing is.

You know, you need to narrow down what your thing is

and then just follow that.

Have like a sort of a formula,

because that’s what people want.

They want to know that they’re coming back

to the same thing.

And that’s the advice on YouTube, Twitter, you name it.

And that’s why, and the trouble with that

is that it’s a complexity reduction.

And generally speaking, you know,

complexity reduction is bad.

It’s making things more, it’s an oversimplification.

Not that simplification is always a bad thing.

But when you’re trying to take, you know,

what is social media doing?

It’s trying to like encapsulate the human experience

and put it into digital form and commodify it to an extent.

That, so you do that, you compress people down

into these like narrow things.

And that’s why I think it’s kind of ultimately

fundamentally incompatible with at least

my definition of beauty.

It’s interesting because there is some sense in which

a simplification sort of in the Einstein kind of sense

of a really complex idea, a simplification in a way

that still captures some core power of an idea of a person

is also beautiful.

And so maybe it’s possible for social media to do that.

A presentation, a sort of a sliver, a slice,

a look into a person’s life that reveals something

real about them.

But in a simple way, in a way that can be displayed

graphically or through words.

Some way, I mean, in some way Twitter can do

that kind of thing.

A very few set of words can reveal the intricacies

of a person.

Of course, the viral machine that spreads those words

often results in people taking the thing out of context.

People often don’t read tweets in the context

of the human being that wrote them.

The full history of the tweets they’ve written,

the education level, the humor level,

the world view they’re playing around with,

all that context is forgotten and people just see

the different words.

So that can lead to trouble.

But in a certain sense, if you do take it in context,

it reveals some kind of quirky little beautiful idea

or a profound little idea from that particular person

that shows something about that person.

So in that sense, Twitter can be more successful

if we’re talking about Mollux is driving

a better kind of incentive.

Yeah, I mean, how they can, like if we were to rewrite,

is there a way to rewrite the Twitter algorithm

so that it stops being the fertile breeding ground

of the culture wars?

Because that’s really what it is.

I mean, maybe I’m giving it, Twitter too much power,

but just the more I looked into it

and I had conversations with Tristan Harris

from the Center for Humane Technology.

And he explained it as like,

Twitter is where you have this amalgam of human culture

and then this terribly designed algorithm

that amplifies the craziest people

and the angriest most divisive takes and amplifies them.

And then the media, the mainstream media,

because all the journalists are also on Twitter,

they then are informed by that.

And so they draw out the stories they can

from this already like very boiling lava of rage

and then spread that to their millions

and millions of people who aren’t even on Twitter.

And so I honestly, I think if I could press a button,

turn them off, I probably would at this point,

because I just don’t see a way

of being compatible with healthiness,

but that’s not gonna happen.

And so at least one way to like stem the tide

and make it less Molochy would be to change,

at least if like it was on a subscription model,

then it’s now not optimizing for impressions.

Cause basically what it wants is for people

to keep coming back as often as possible.

That’s how they get paid, right?

Every time an ad gets shown to someone

and the way to do that is to get people constantly refreshing their feed.

So you’re trying to encourage addictive behaviors.

Whereas if someone, if they moved on

to at least a subscription model,

then they’re getting the money either way,

whether someone comes back to the site once a month

or 500 times a month,

they get the same amount of money.

So now that takes away that incentive,

to use technology, to build,

to design an algorithm that is maximally addictive.

That would be one way, for example.
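
A toy comparison of the two incentive structures, with every dollar figure and parameter invented for illustration (not any platform's actual numbers): under an ad model, per-user revenue grows with every extra visit and impression, while under a flat subscription it doesn't, which removes most of the payoff for engineering compulsive refreshing.

```python
# Toy revenue models per user per month (all numbers are made-up assumptions).
def ad_revenue(visits_per_month, ads_per_visit=5, cpm_dollars=2.0):
    impressions = visits_per_month * ads_per_visit
    return impressions / 1000 * cpm_dollars   # grows with every extra visit

def subscription_revenue(visits_per_month, monthly_fee=8.0):
    return monthly_fee                        # flat: 1 visit or 500 visits pays the same

for visits in (1, 30, 500):
    print(f"{visits:>3} visits/month: ads=${ad_revenue(visits):6.2f}  "
          f"subscription=${subscription_revenue(visits):5.2f}")
```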

Yeah, but you still want people to,

yeah, I just feel like that just slows down,

creates friction in the virality of things.

But that’s good.

We need to slow down virality.

It’s good, it’s one way.

Virality is Moloch, to be clear.

So malach is always negative then?

Yes, by definition.

Yes.

Competition is not always negative.

Competition is neutral.

I disagree with you that all virality is negative then,

is Moloch then.

Because it’s a good intuition,

because we have a lot of data on virality being negative.

But I happen to believe that the core of human beings,

so most human beings want to be good

more than they want to be bad to each other.

And so I think it’s possible,

it might just be harder to engineer systems

that enable virality,

but it’s possible to engineer systems that are viral

that enable virality.

And the kind of stuff that rises to the top

is things that are positive.

And positive, not like la la positive,

it’s more like win win,

meaning a lot of people need to be challenged.

Wise things, yes.

You grow from it, it might challenge you,

you might not like it, but you ultimately grow from it.

And ultimately bring people together

as opposed to tear them apart.

I deeply want that to be true.

And I very much agree with you that people at their core

are on average good, care for each other,

as opposed to not.

Like I think it’s actually a very small percentage

of people who truly want to do

just like destructive malicious things.

Most people are just trying to win their own little game.

And they don’t mean to be,

they’re just stuck in this badly designed system.

That said, the current structure, yes,

is the current structure means that virality

is optimized towards Moloch.

That doesn’t mean there aren’t exceptions.

Sometimes positive stories do go viral

and I think we should study them.

I think there should be a whole field of study

into understanding, identifying memes

that above a certain threshold of the population

agree is a positive, happy, bringing people together meme.

The kind of thing that brings families together

that would normally argue about cultural stuff

at the table, at the dinner table.

Identify those memes and figure out what it was,

what was the ingredient that made them spread that day.

And also like not just like happiness

and connection between humans,

but connection between humans in other ways

that enables like productivity, like cooperation,

solving difficult problems and all those kinds of stuff.

So it’s not just about let’s be happy

and have a fulfilling lives.

It’s also like, let’s build cool shit.

Let’s get excited.

Which is the spirit of collaboration,

which is deeply anti Moloch, right?

That’s, it’s not using competition.

It’s like, Moloch hates collaboration and coordination

and people working together.

And that’s, again, like the internet started out as that

and it could have been that,

but because of the way it was sort of structured

in terms of, you know, very lofty ideals,

they wanted everything to be open source

and also free.

And, but they needed to find a way to pay the bills anyway,

because they were still building this

on top of our old economics system.

And so the way they did that

was through third party advertisement.

But that meant that things were very decoupled.

You know, you’ve got this third party interest,

which means that you’re then like,

people having to optimize for that.

And that is, you know, the actual consumer

is actually the product,

not the person you’re making the thing for.

In the end, you start making the thing for the advertiser.

And so that’s why it then like breaks down.

Yeah, like it’s, there’s no clean solution to this.

And I, it’s a really good suggestion by you actually

to like figure out how we can optimize virality

for positive sum topics.

I shall be the general of the love bot army.

Distributed.

Distributed, distributed, no, okay, yeah.

The power, just even in saying that,

the power already went to my head.

No, okay, you’ve talked about quantifying your thinking.

We’ve been talking about this,

sort of a game theoretic view on life

and putting probabilities behind estimates.

Like if you think about different trajectories

you can take through life,

just actually analyzing life in game theoretic way,

like your own life, like personal life.

I think you’ve given an example

that you had an honest conversation with Igor

about like, how long is this relationship gonna last?

So similar to our sort of marriage problem

kind of discussion, having an honest conversation

about the probability of things

that we sometimes are a little bit too shy

or scared to think of in probabilistic terms.

Can you speak to that kind of way of reasoning,

the good and the bad of that?

Can you do this kind of thing with human relations?

Yeah, so the scenario you’re talking about, it was like.

Yeah, tell me about that scenario.

Yeah, I think it was about a year into our relationship

and we were having a fairly heavy conversation

because we were trying to figure out

whether or not I was gonna sell my apartment.

Well, you know, he had already moved in,

but I think we were just figuring out

what like our longterm plans would be.

Should we buy a place together, et cetera.

When you guys are having that conversation,

are you like drunk out of your mind on wine

or is he sober and you’re actually having a serious?

I think we were sober.

How do you get to that conversation?

Because most people are kind of afraid

to have that kind of serious conversation.

Well, so our relationship was very,

well, first of all, we were good friends

for a couple of years before we even got romantic.

And when we did get romantic,

it was very clear that this was a big deal.

It wasn’t just like another, it wasn’t a random thing.

So the probability of it being a big deal was high.

It was already very high.

And then we’d been together for a year

and it had been pretty golden and wonderful.

So, you know, there was a lot of foundation already

where we felt very comfortable

having a lot of frank conversations.

But Igor’s MO has always been much more than mine.

He was always from the outset,

like just in a relationship,

radical transparency and honesty is the way

because the truth is the truth,

whether you want to hide it or not, you know,

but it will come out eventually.

And if you aren’t able to accept difficult things yourself,

then how could you possibly expect to be like

the most integral version that, you know,

you can’t, the relationship needs this bedrock

of like honesty as a foundation more than anything.

Yeah, that’s really interesting,

but I would like to push against some of those ideas,

but that’s the down the line, yes, throw them up.

I just rudely interrupt.

That was fine.

And so, you know, we’d been about together for a year

and things were good

and we were having this hard conversation

and then he was like, well, okay,

what’s the likelihood that we’re going to be together

in three years then?

Because I think it was roughly a three year time horizon.

And I was like, ooh, ooh, interesting.

And then we were like, actually wait,

before you say it out loud,

let’s both write down our predictions formally

because we’d been like,

we’re just getting into like effective altruism

and rationality at the time,

which is all about making formal predictions

as a means of measuring your own,

well, your own foresight essentially in a quantified way.

So we like both wrote down our percentages

and we also did a one year prediction

and a 10 year one as well.

So we got percentages for all three

and then we showed each other.

And I remember like having this moment of like, ooh,

because for the 10 year one, I was like, well, I mean,

I love him a lot,

but like a lot can happen in 10 years,

and we’ve only been together for,

so I was like, I think it’s over 50%,

but it’s definitely not 90%.

And I remember like wrestling,

I was like, oh, but I don’t want him to be hurt.

I don’t want him to,

I don’t want to give a number lower than his.

And I remember thinking, I was like, ah, ah, don’t game it.

This is an exercise in radical honesty.

So just give your real percentage.

And I think mine was like 75%.

And then we showed each other

and luckily we were fairly well aligned.

And, but honestly, even if we weren’t.

20%.

Huh?

It definitely would have,

I, if his had been consistently lower than mine,

that would have rattled me for sure.

Whereas if it had been the other way around,

I think he would have,

he’s just kind of like a water off a duck’s back type of guy.

It’d be like, okay, well, all right, we’ll figure this out.

Well, did you guys provide error bars on the estimate?

Like the level of uncertainty?

They came built in.

We didn’t give formal plus or minus error bars.

I didn’t draw any or anything like that.

Well, I guess that’s the question I have is,

did you feel informed enough to make such decisions?

Cause like, I feel like if you were,

if I were to do this kind of thing rigorously,

I would want some data.

I would want to say, one of the assumptions you have

is you’re not that different from other relationships.

Right.

And so I want to have some data about the way.

You want the base rates.

Yeah, and also actual trajectories of relationships.

I would love to have like time series data

about the ways that relationships fall apart or prosper,

how they collide with different life events,

losses, job changes, moving.

Both partners find jobs, only one has a job.

I want that kind of data

and how often the different trajectories change in life.

Like how informative is your past to your future?

That’s the whole thing.

Like can you look at my life and have a good prediction

about in terms of my characteristics and my relationships,

what that’s going to look like in the future or not?

I don’t even know the answer to that question.

I’ll be very ill informed in terms of making the probability.

I would be far, yeah, I just would be under informed.

I would be under informed.

I’ll be over biasing to my prior experiences, I think.

Right, but as long as you’re aware of that

and you’re honest with yourself,

and you’re honest with the other person,

say, look, I have really wide error bars on this

for the following reasons, that’s okay.

I still think it’s better than not trying

to quantify it at all.

If you’re trying to make really major

irreversible life decisions.
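
A rough sketch of blending a population base rate with limited personal evidence so those "wide error bars" come out explicitly, using a Beta distribution. The base rate and counts below are invented purely for illustration, not real relationship statistics.

```python
# Sketch: combine an assumed base rate with sparse personal evidence
# via a Beta posterior, and report an explicit uncertainty interval.

from statistics import NormalDist

base_rate = 0.55       # assumed: fraction of comparable couples together at 10 years
prior_strength = 10    # weight (in "pseudo-couples") given to that base rate
successes, trials = 1, 1   # personal evidence: one good year together so far

# Beta posterior parameters: prior encodes the base rate, data nudges it.
alpha = base_rate * prior_strength + successes
beta = (1 - base_rate) * prior_strength + (trials - successes)

mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
std = var ** 0.5

# Rough 90% interval via a normal approximation to the Beta posterior.
z = NormalDist().inv_cdf(0.95)
lo, hi = max(0.0, mean - z * std), min(1.0, mean + z * std)
print(f"estimate: {mean:.0%}, rough 90% interval: {lo:.0%} - {hi:.0%}")
```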

And I feel also the romantic nature of that question

for me personally, I try to live my life

thinking it’s very close to 100%.

Like, allowing myself actually the,

this is the difficulty of this is allowing myself

to think differently, I feel like has

a psychological consequence.

That’s where that, that’s one of my pushbacks

against radical honesty is this one particular perspective.

So you’re saying you would rather give

a falsely high percentage to your partner.

Going back to the wise sage Phil.

In order to sort of create this additional optimism.

Hellmuth.

Yes.

Of fake it till you make it.

The positive, the power of positive thinking.

Hashtag positivity.

Yeah, hashtag, it’s with a hashtag.

Well, so that, and this comes back to this idea

of useful fictions, right?

And I agree, I don’t think there’s a clear answer to this.

And I think it’s actually quite subjective.

Some people this works better for than others.

You know, to be clear, Igor and I weren’t doing

this formal prediction thing.

Like we did it with very much tongue in cheek.

It wasn’t like we were going to make,

I don’t think it even would have drastically changed

what we decided to do even.

We kind of just did it more as a fun exercise.

For the consequence of that fun exercise,

you really actually kind of, there was a deep honesty to it.

Exactly, it was a deep, and it was, yeah,

it was just like this moment of reflection.

I’m like, oh wow, I actually have to think

like through this quite critically and so on.

And it’s also what was interesting was I got to like check

in with what my desires were.

So there was one thing of like what my actual prediction is,

but what are my desires and could these desires

be affecting my predictions and so on.

And you know, that’s a method of rationality.

And I personally don’t think it loses anything in terms of,

I didn’t take any of the magic away

from our relationship, quite the opposite.

Like it brought us closer together because it was like,

we did this weird fun thing that I appreciate

a lot of people find quite strange.

And I think it was somewhat unique in our relationship

that both of us are very, we both love numbers,

we both love statistics, we’re both poker players.

So this was kind of like our safe space anyway.

For others, one partner really might not

like that kind of stuff at all, in which case

this is not a good exercise to do.

I don’t recommend it to everybody.

But I do think there’s, it’s interesting sometimes

to poke holes in and probe at these things

that we consider so sacred that we can’t try

to quantify them, which is interesting

because that’s in tension with like the idea

of what we just talked about with beauty

and like what makes something beautiful,

the fact that you can’t measure everything about it.

And perhaps something shouldn’t be tried to measure,

maybe it’s wrong to completely try and value

the utilitarian, put a utilitarian frame

of measuring the utility of a tree in its entirety.

I don’t know, maybe we should, maybe we shouldn’t.

I’m ambivalent on that.

But overall, people have too many biases.

People are overly biased against trying to do

like a quantified cost benefit analysis

on really tough life decisions.

They’re like, oh, just go with your gut.

It’s like, well, sure, but guts, our intuitions

are best suited for things that we’ve got tons

of experience in, that's when we can really trust them,

If it’s a decision we’ve made many times,

but if it’s like, should I marry this person

or should I buy this house over that house?

You only make those decisions a couple of times

in your life, maybe.

Well, I would love to know, there’s a balance,

probably a personal balance to strike,

which is the amount of rationality you apply

to a question versus the useful fiction,

the fake it till you make it.

For example, just talking to soldiers in Ukraine,

you ask them, what’s the probability of you winning,

Ukraine winning?

Almost everybody I talk to is 100%.

Wow.

And you listen to the experts, right?

They say all kinds of stuff.

Right.

They are, first of all, the morale there

is higher than probably anywhere.

I’ve never been to a war zone before this,

but I’ve read about many wars

and I think the morale in Ukraine is higher

than almost any war I’ve read about.

It’s every single person in the country

is proud to fight for their country.

Wow.

Everybody, not just soldiers, not everybody.

Why do you think that is?

Specifically more than in other wars?

I think because there’s perhaps a dormant desire

for the citizens of this country to find

the identity of this country because it’s been

going through this 30 year process

of different factions and political bickering

and they haven’t had, as they talk about,

they haven’t had their independence war.

They say all great nations have had an independence war.

They had to fight for their independence,

for the discovery of the identity of the core

of the ideals that unify us and they haven’t had that.

There’s constantly been factions, there’s been divisions,

there’s been pressures from empires,

from the United States and from Russia,

from NATO and Europe, everybody telling them what to do.

Now they want to discover who they are

and there’s that kind of sense that we’re going to fight

for the safety of our homeland,

but we’re also gonna fight for our identity.

And that, on top of the fact that there’s just,

if you look at the history of Ukraine

and there’s certain other countries like this,

there are certain cultures that are feisty

in their pride of being the citizens of that nation.

Ukraine is that, Poland was that.

You just look at history.

In certain countries, you do not want to occupy.

I mean, both Stalin and Hitler talked about Poland

in this way, they’re like, this is a big problem

if we occupy this land for prolonged periods of time.

They’re gonna be a pain in their ass.

They’re not going to want to be occupied.

And certain other countries are pragmatic.

They’re like, well, leaders come and go.

I guess this is good.

Ukraine just doesn’t have, Ukrainians,

throughout the 20th century,

don’t seem to be the kind of people

that just sit calmly and let the quote unquote occupiers

impose their rules.

That’s interesting though, because you said it’s always been

under conflict and leaders have come and gone.

So you would expect them to actually be the opposite

under that like reasoning.

Well, because it’s a very fertile land,

it’s great for agriculture.

So a lot of people want to,

I mean, I think they’ve developed this culture

because they’ve constantly been occupied

by different people for the different peoples.

And so maybe there is something to that

where you’ve constantly had to feel like within the blood

of the generations, there’s the struggle against the man,

against the imposition of rules, against oppression

and all that kind of stuff, and that stays with them.

So there’s a will there, but a lot of other aspects

are also part of it that has to do with

the reverse Moloch kind of situation

where social media has definitely played a part of it.

Also different charismatic individuals

have had to play a part.

The fact that the president of the nation, Zelensky,

stayed in Kiev during the invasion

is a huge inspiration to them

because most leaders, as you can imagine,

when the capital of the nation is under attack,

the wise thing, the smart thing

that the United States advised Zelensky to do

is to flee and to be the leader of the nation

from a distant place.

He said, fuck that, I’m staying put.

Everyone around him, there was a pressure to leave

and he didn’t, and those singular acts

really can unify a nation.

There’s a lot of people that criticize Zelensky

within Ukraine.

Before the war, he was very unpopular, even still,

but they put that aside, especially that singular act

of staying in the capital.

Yeah, a lot of those kinds of things come together

to create something within people.

These things always, of course, though,

how zoomed out of a view do you wanna take?

Because, yeah, you describe it,

it’s like an anti Moloch thing happened within Ukraine

because it brought the Ukrainian people together

in order to fight a common enemy.

Maybe that’s a good thing, maybe that’s a bad thing.

In the end, we don’t know

how this is all gonna play out, right?

But if you zoom it out from a level, on a global level,

they’re coming together to fight that,

that could make a conflict larger.

You know what I mean?

I don’t know what the right answer is here.

It seems like a good thing that they came together,

but we don’t know how this is all gonna play out.

If this all turns into nuclear war, we’ll be like,

okay, that was the bad, that was the…

Oh yeah, so I was describing the reverse Moloch

for the local level.

Exactly, yes.

Now, this is where the experts come in and they say,

well, if you channel most of the resources of the nation

and the nations supporting Ukraine into the war effort,

are you not beating the drums of war

that is much bigger than Ukraine?

In fact, even the Ukrainian leaders

are speaking of it this way.

This is not a war between two nations.

This is the early days of a world war,

if we don’t play this correctly.

Yes, and we need cool heads from our leaders.

So from Ukraine’s perspective,

Ukraine needs to win the war

because what does winning the war mean

is coming to peace negotiations, an agreement

that guarantees no more invasions.

And then you make an agreement

about what land belongs to who.

And that’s, you stop that.

And basically from their perspective

is you want to demonstrate to the rest of the world

who’s watching carefully, including Russia and China

and different players on the geopolitical stage,

that this kind of conflict is not going to be productive

if you engage in it.

So you want to teach everybody a lesson,

let’s not do World War III.

It’s gonna be bad for everybody.

And so it’s a lose, lose.

It’s a deep lose, lose.

Doesn’t matter.

And I think that’s actually a correct,

when I zoom out, I mean, 99% of what I think about

is just individual human beings and human lives

and just that war is horrible.

But when you zoom out and think

from a geopolitics perspective,

we should realize that it’s entirely possible

that we will see a World War III in the 21st century.

And this is like a dress rehearsal for that.

And so the way we play this as a human civilization

will define whether we do or don’t have a World War III.

How we discuss war, how we discuss nuclear war,

the kind of leaders we elect and prop up,

the kind of memes we circulate,

because you have to be very careful

when you’re being pro Ukraine, for example,

you have to realize that you’re being,

you are also indirectly feeding

the ever increasing military industrial complex.

So you have to be extremely careful

that when you say pro Ukraine or pro anybody,

you’re pro human beings, not pro the machine

that creates narratives that says it’s pro human beings.

But it’s actually, if you look at the raw use

of funds and resources, it’s actually pro making weapons

and shooting bullets and dropping bombs.

The real, we have to just somehow get the meme

into everyone’s heads that the real enemy is war itself.

That’s the enemy we need to defeat.

And that doesn’t mean to say that there isn’t justification

for small local scenarios, adversarial conflicts.

If you have a leader who is starting wars,

they’re on the side of team war, basically.

It’s not that they’re on the side of team country,

whatever that country is,

it’s they’re on the side of team war.

So that needs to be stopped and put down.

But you also have to find a way

that your corrective measure doesn’t actually then

end up being coopted by the war machine

and creating greater war.

Again, the playing field is finite.

The scale of conflict is now getting so big

that the weapons that can be used are so mass destructive

that we can’t afford another giant conflict.

We just, we won’t make it.

What existential threat in terms of us not making it

are you most worried about?

What existential threat to human civilization?

We got like, let’s go.

Going down a dark path, huh?

It’s good, but no, well, no, it’s a dark.

No, it’s like, well, while we’re in the somber place,

we might as well.

Some of my best friends are dark paths.

What worries you most?

We mentioned asteroids, we mentioned AGI, nuclear weapons.

The one that’s on my mind the most,

mostly because I think it’s the one where we have

actually a real chance to move the needle on

in a positive direction or more specifically,

stop some really bad things from happening,

really dumb, avoidable things is bio risks.

In what kind of bio risks?

There’s so many fun options.

So many, so of course, we have natural risks

from natural pandemics, naturally occurring viruses

or pathogens, and then also as time and technology goes on

and technology becomes more and more democratized

and into the hands of more and more people,

the risk of synthetic pathogens.

And whether or not you fall into the camp of COVID

was gain of function accidental lab leak

or whether it was purely naturally occurring,

either way, we are facing a future where synthetic pathogens

or like human meddled with pathogens

either accidentally get out or get into the hands

of bad actors who, whether they’re omnicidal maniacs,

you know, either way.

And so that means we need more robustness for that.

And you would think that us having this nice little dry run,

which is what as awful as COVID was,

and all those poor people that died,

it was still like child’s play compared

to what a future one could be in terms of fatality rate.

And so you’d think that we would then be coming,

we’d be much more robust in our pandemic preparedness.

And meanwhile, the budget in the last two years for the US,

sorry, they just did this, I can’t remember the name

of what the actual budget was,

but it was like a multi trillion dollar budget

that the US just set aside.

And originally in that, you know, considering that COVID

cost multiple trillions to the economy, right?

The original allocation in this new budget

for future pandemic preparedness was 60 billion,

so tiny proportion of it.

That’s proceeded to get whittled down to like 30 billion,

to 15 billion, all the way down to 2 billion

out of multiple trillions.

For a thing that has just cost us multiple trillions,

we’ve just finished, we’re barely even,

we’re not even really out of it.

It basically got whittled down to nothing

because for some reason people think that,

oh, all right, we’ve got the pandemic,

out of the way, that was that one.

And the reason for that is that people are,

and I say this with all due respect

to a lot of the science community,

but there’s an immense amount of naivety about,

they think that nature is the main risk moving forward,

and it really isn’t.

And I think nothing demonstrates this more

than this project that I was just reading about

that’s sort of being proposed right now called Deep Vision.

And the idea is to go out into the wilds,

and we’re not talking about just like within cities,

like deep into like caves that people don’t go to,

deep into the Arctic, wherever, scour the earth

for whatever the most dangerous possible pathogens

could be that they can find.

And then not only do, try and find these,

bring samples of them back to laboratories.

And again, whether you think COVID was a lab leak or not,

I’m not gonna get into that,

but we have historically had so many, as a civilization,

we’ve had so many lab leaks

from even like the highest level security things.

Like it’s just, people should go and just read it.

It’s like a comedy show of just how many they are,

how leaky these labs are,

even when they do their best efforts.

So bring these things then back to civilization.

That’s step one of the badness.

Then the next step would be to then categorize them,

do experiments on them and categorize them

by their level of potential.

And then the piece de resistance on this plan

is to then publish that information freely on the internet

about all these pathogens, including their genome,

which is literally like the building instructions

of how to do them on the internet.

And this is something that genuinely,

a pocket of the like bio,

of the scientific community thinks is a good idea.

And I think on expectation, like the,

and their argument is, is that, oh, this is good,

because it might buy us some time to develop the vaccines,

which, okay, sure, maybe would have made sense

prior to mRNA technology, but you know, like they,

mRNA, we can develop a vaccine now

when we find a new pathogen within a couple of days.

Now then there’s all the trials and so on.

Those trials would have to happen anyway,

in the case of a brand new thing.

So you’re saving maybe a couple of days.

So that’s the upside.

Meanwhile, the downside is you’re not only giving,

you’re not only giving the vaccine,

you’re bringing the risk of these pathogens

of like getting leaked,

but you’re literally handing it out

to every bad actor on earth who would be doing cartwheels.

And I’m talking about like Kim Jong Un, ISIS,

people who like want,

they think the rest of the world is their enemy.

And in some cases they think that killing it themselves

is like a noble cause.

And you’re literally giving them the building blocks

of how to do this.

It’s the most batshit idea I’ve ever heard.

Like on expectation, it’s probably like minus EV

of like multiple billions of lives

if they actually succeeded in doing this.

Certainly in the tens or hundreds of millions.

So the cost benefit is so unbelievably, it makes no sense.
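
The minus-EV point can be made concrete with a back-of-the-envelope expected-value comparison. Every number below is invented purely to illustrate the asymmetry being described, not an actual risk estimate.

```python
# Purely illustrative expected-value framing of the argument above.
# Every number here is invented; the point is the asymmetry, not the figures.

p_pandemic_next_decade = 0.03         # assumed chance a matching natural outbreak occurs
lives_saved_if_head_start = 50_000    # assumed benefit of a few days' vaccine head start

p_misuse_or_leak = 0.01               # assumed chance published genomes enable a release
lives_lost_if_released = 100_000_000  # assumed toll of an engineered pandemic

expected_benefit = p_pandemic_next_decade * lives_saved_if_head_start
expected_cost = p_misuse_or_leak * lives_lost_if_released

print(f"expected lives saved: {expected_benefit:,.0f}")
print(f"expected lives lost:  {expected_cost:,.0f}")
print(f"net expected value:   {expected_benefit - expected_cost:,.0f} lives")
```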

And I was trying to wrap my head around like why,

like what’s going wrong in people’s minds

to think that this is a good idea?

And it’s not that it’s malice or anything like that.

I think it’s that people don’t,

the proponents, they’re actually overly naive

about the interactions of humanity.

And well, like there are bad actors

who will use this for bad things.

Because not only will it,

if you publish this information,

even if a bad actor couldn’t physically make it themselves,

which given in 10 years time,

like the technology is getting cheaper and easier to use.

But even if they couldn’t make it, they could now bluff it.

Like what would you do if there’s like some deadly new virus

that was published on the internet

in terms of its building blocks?

Kim Jong Un could be like,

hey, if you don’t let me build my nuclear weapons,

I’m gonna release this, I’ve managed to build it.

Well, now he’s actually got a credible bluff.

We don’t know.

And so that’s, it’s just like handing the keys,

it’s handing weapons of mass destruction to people.

Makes no sense.

The possible, I agree with you,

but the possible world in which it might make sense

is if the good guys, which is a whole nother problem,

defining who the good guys are,

but the good guys are like an order of magnitude

higher competence.

And so they can stay ahead of the bad actors

by just being very good at the defense.

By very good, not meaning like a little bit better,

but an order of magnitude better.

But of course the question is,

in each of those individual disciplines, is that feasible?

Can you, can the bad actors,

even if they don’t have the competence leapfrog

to the place where the good guys are?

Yeah, I mean, I would agree in principle

with pertaining to this like particular plan of like,

that, you know, with the thing I described,

this deep vision thing, where at least then

that would maybe make sense for steps one and step two

of like getting the information,

but then why would you release it,

the information to your literal enemies?

You know, that’s, that makes,

that doesn’t fit at all in that perspective

of like trying to be ahead of them.

You’re literally handing them the weapon.

But there’s different levels of release, right?

So there’s the kind of secrecy

where you don’t give it to anybody,

but there’s a release where you incrementally give it

to like major labs.

So it’s not public release, but it’s like,

you’re giving it to major labs.

Yeah, there’s different layers of reasonability, but.

But the problem there is it’s going to,

if you go anywhere beyond like complete secrecy,

it’s gonna leak.

That’s the thing, it’s very hard to keep secrets.

And so that’s still. Information is.

So you might as well release it to the public

is that argument.

So you either go complete secrecy

or you release it to the public.

So, which is essentially the same thing.

It’s going to leak anyway,

if you don’t do complete secrecy.

Right, which is why you shouldn’t get the information

in the first place.

Yeah, I mean, what, in that, I think.

Well, that’s a solution.

Yeah, the solution is either don’t get the information

in the first place or B, keep it incredibly,

incredibly contained.

See, I think, I think it really matters

which discipline we’re talking about.

So in the case of biology, I do think you’re very right.

We shouldn’t even be, it should be forbidden

to even like, think about that.

Meaning don’t collect, don’t just even collect

the information, but like, don’t do,

I mean, gain of function research is a really iffy area.

Like you start.

I mean, it’s all about cost benefits, right?

There are some scenarios where I could imagine

the cost benefit of a gain of function research

is very, very clear, where you’ve evaluated

all the potential risks, factored in the probability

that things can go wrong.

And like, you know, not only known unknowns,

but unknown unknowns as well, tried to quantify that.

And then even then it’s like orders of magnitude

better to do that.

I’m behind that argument.

But the point is that there’s this like,

naivety that’s preventing people from even doing

the cost benefit properly on a lot of the things

because, you know, I get it.

The science community, again, I don’t want to bucket

the science community, but like some people

within the science community just think

that everyone’s good and everyone just cares

about getting knowledge and doing the best for the world.

And unfortunately, that’s not the case.

I wish we lived in that world, but we don’t.

Yeah, I mean, there’s a lie.

Listen, I’ve been criticizing the science community

broadly quite a bit.

There’s so many brilliant people that brilliance

is somehow a hindrance sometimes

because it has a bunch of blind spots.

And then you start to look at the history of science,

how easily it’s been used by dictators

to any conclusion they want.

And it’s dark how you can use brilliant people

that like playing the little game of science

because it is a fun game.

You know, you’re building, you’re going to conferences,

you’re building on top of each other’s ideas,

there’s breakthroughs.

Hi, I think I’ve realized how this particular molecule works

and I could do this kind of experiment

and everyone else is impressed.

Ooh, cool.

No, I think you’re wrong.

Let me show you why you’re wrong.

And that little game, everyone gets really excited

and they get excited, oh, I came up with a pill

that solves this problem

and it’s going to help a bunch of people.

And I came up with a giant study

that shows the exact probability it’s going to help or not.

And you get lost in this game

and you forget to realize this game, just like Moloch,

can have like…

Unintended consequences.

Yeah.

Unintended consequences that might

destroy human civilization.

Right.

Or divide human civilization

or have dire geopolitical consequences.

I mean, the effects of, I mean, it’s just so…

The most destructive effects of COVID

have nothing to do with the biology of the virus,

it seems like.

I mean, I could just list them forever,

but like one of them is the complete distrust

of public institutions.

The other one is because of that public distrust,

I feel like if a much worse pandemic came along,

we as a world have now cried wolf.

And if an actual wolf now comes,

people will be like, fuck masks, fuck…

Fuck vaccines, yeah.

Fuck everything.

And they won’t be…

They’ll distrust every single thing

that any major institution is going to tell them.

And…

Because that’s the thing,

like there were certain actions made by

certain public health figures where they told,

they very knowingly told a white lie,

it was intended in the best possible way,

such as early on when there was clearly a shortage of masks.

And so they said to the public,

oh, don’t get masks, there’s no evidence that they work.

Or don’t get them, they don’t work.

In fact, it might even make it worse.

You might even spread it more.

Like that was the real like stinker.

Yeah, no, no, unless you know how to do it properly,

you’re gonna make that you’re gonna get sicker

or you’re more likely to catch the virus,

which is just absolute crap.

And they put that out there.

And it’s pretty clear the reason why they did that

was because there was actually a shortage of masks

and they really needed it for health workers,

which makes sense.

Like, I agree, like, you know,

but the cost of lying to the public

when that then comes out,

people aren’t as stupid as they think they are.

And that’s, I think where this distrust of experts

has largely come from.

A, they’ve lied to people overtly,

but B, people have been treated like idiots.

Now, that’s not to say that there aren’t a lot

of stupid people who have a lot of wacky ideas

around COVID and all sorts of things.

But if you treat the general public like children,

they’re going to see that, they’re going to notice that.

And that is going to just like absolutely decimate the trust

in the public institutions that we depend upon.

And honestly, the best thing that could happen,

I wish like if like Fauci or, you know,

and these other like leaders who, I mean, God,

I would, I can’t imagine how nightmare his job has been

for the last few years, hell on earth.

Like, so, you know, I, you know,

I have a lot of sort of sympathy

for the position he’s been in.

But like, if he could just come out and be like,

okay, look, guys, hands up,

we didn’t handle this as well as we could have.

These are all the things I would have done differently

in hindsight.

I apologize for this and this and this and this.

That would go so far.

And maybe I’m being naive, who knows,

maybe this would backfire, but I don’t think it would.

Like to someone like me even,

cause I’ve like, I’ve lost trust

in a lot of these things.

But I’m fortunate that I at least know people

who I can go to who I think are good,

like have good epistemics on this stuff.

But you know, if they, if they could sort of put their hands

up and go, okay, these are the spots where we screwed up.

This, this, this, this was our reasons.

Yeah, we actually told a little white lie here.

We did it for this reason.

We’re really sorry.

Where they just did the radical honesty thing,

the radical transparency thing,

that would go so far to rebuilding public trust.

And I think that’s what needs to happen.

Otherwise.

I totally agree with you.

Unfortunately, yeah, his job was very tough

and all those kinds of things, but I see arrogance

and arrogance prevented him from being honest

in that way previously.

And I think arrogance will prevent him

from being honest in that way.

Now we need leaders.

I think young people are seeing that,

that kind of talking down to people

from a position of power, I hope is the way of the past.

People really like authenticity

and they like leaders that are like a man

and a woman of the people.

And I think that just.

I mean, he still has a chance to do that, I think.

I mean, I don’t wanna, you know, I don’t think, you know,

if I doubt he’s listening, but if he is like,

hey, I think, you know, I don’t think he’s irredeemable

by any means.

I think there’s, you know, I don’t have an opinion

whether there was arrogance or there or not.

Just know that I think like coming clean on the, you know,

it’s understandable to have fucked up during this pandemic.

Like I won’t expect any government to handle it well

because it was so difficult, like so many moving pieces,

so much like lack of information and so on.

But the step to rebuilding trust is to go, okay, look,

we’re doing a scrutiny of where we went wrong.

And for my part, I did this wrong in this part.

And that would be huge.

All of us can do that.

I mean, I was struggling for a while

whether I want to talk to him or not.

I talked to his boss, Francis Collins.

Another person that’s screwed up in terms of trust,

lost a little bit of my respect too.

There seems to have been a kind of dishonesty

in the backrooms in that they didn’t trust people

to be intelligent.

Like we need to tell them what’s good for them.

We know what’s good for them.

That kind of idea.

To be fair, the thing that’s, what’s it called?

I heard the phrase today, nut picking.

Social media does that.

So you’ve got like nitpicking.

Nut picking is where the craziest, stupidest,

you know, if you have a group of people,

let’s say people who are vaccine,

I don’t like the term anti vaccine,

people who are vaccine hesitant, vaccine speculative.

So what social media did or the media or anyone,

their opponents would do is pick the craziest example.

So the ones who are like,

I think I need to inject myself with motor oil

up my ass or something.

Select the craziest ones and then have that beamed too.

So from like someone like Fauci or Francis’s perspective,

that’s what they get because they’re getting

the same social media stuff as us.

They’re getting the same media reports.

I mean, they might get some more information,

but they too are gonna get the nuts portrayed to them.

So they probably have a misrepresentation

of what the actual public’s intelligence is.

Well, that’s just, yes.

And that just means they’re not social media savvy.

So one of the skills of being on social media

is to be able to filter that in your mind,

like to understand, to put into proper context.

To realize that what you are seeing,

social media is not anywhere near

an accurate representation of humanity.

Nut picking, and there’s nothing wrong

with putting motor oil up your ass.

It’s just one of the better aspects of,

I do this every weekend.

Okay.

Where the hell did that analogy come from in my mind?

Like what?

I don’t know.

I think you need to, there’s some Freudian thing

you would need to deeply investigate with a therapist.

Okay, what about AI?

Are you worried about AGI, super intelligence systems

or paperclip maximizer type of situation?

Yes, I’m definitely worried about it,

but I feel kind of bipolar in the,

some days I wake up and I’m like.

You’re excited about the future?

Well, exactly.

I’m like, wow, we can unlock the mysteries of the universe.

You know, escape the game.

And this, you know,

because I spend all my time thinking

about these Moloch problems that, you know,

what is the solution to them?

What, you know, in some ways you need this like

omni benevolent, omniscient, omni wise coordination mechanism

that can like make us all not do the Moloch thing

or like provide the infrastructure

or redesign the system so that it’s not vulnerable

to this Moloch process.

And in some ways, you know,

that’s the strongest argument to me

for like the race to build AGI

is that maybe, you know, we can’t survive without it.

But the flip side to that is the,

the unfortunately now that there’s multiple actors

trying to build AI, AGI, you know,

this was fine 10 years ago when it was just DeepMind,

but then other companies started up

and now it created a race dynamic.

Now it’s like, that’s the whole thing.

Is it the same, it’s got the same problem.

It’s like whichever company is the one that like optimizes

for speed at the cost of safety

will get the competitive advantage.

And so we’re the more likely the ones to build the AGI,

you know, and that’s the same cycle that you’re in.

And there’s no clear solution to that

because you can’t just go like slapping,

if you go and try and like stop all the different companies,

then it will, you know, the good ones will stop

because they’re the ones, you know,

within the West’s reach,

but then that leaves all the other ones to continue

and then they’re even more likely.

So it’s like, it’s a very difficult problem

with no clean solution.
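
The race dynamic being described is essentially a two-player payoff matrix in which cutting safety is each lab's best response no matter what the other lab does. A toy sketch with invented payoffs:

```python
# Toy payoff matrix (invented numbers) for the safety-vs-speed race dynamic.
# Each lab chooses "careful" or "rush"; payoffs are (lab A, lab B).

payoffs = {
    ("careful", "careful"): (3, 3),   # both go slow: safest, shared upside
    ("careful", "rush"):    (0, 4),   # the rusher grabs the lead
    ("rush",    "careful"): (4, 0),
    ("rush",    "rush"):    (1, 1),   # everyone cuts corners: worst collective outcome
}

def best_response(opponent_choice, player_index):
    # Given what the other lab does, which choice pays this lab more?
    options = {}
    for mine in ("careful", "rush"):
        key = (mine, opponent_choice) if player_index == 0 else (opponent_choice, mine)
        options[mine] = payoffs[key][player_index]
    return max(options, key=options.get)

for opponent in ("careful", "rush"):
    print(f"If the other lab is {opponent}, my best response is: "
          f"{best_response(opponent, 0)}")
# "rush" dominates either way, which is the Moloch trap in miniature.
```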

And, you know, at the same time, you know,

I know at least some of the folks at DeepMind

and they’re incredible and they’re thinking about this.

They’re very aware of this problem.

And they’re like, you know,

I think some of the smartest people on earth.

Yeah, the culture is important there

because they are thinking about that

and they’re some of the best machine learning engineers.

So it’s possible to have a company or a community of people

that are both great engineers

and are thinking about the philosophical topics.

Exactly, and importantly, they’re also game theorists,

you know, and because this is ultimately

a game theory problem, the thing, this Moloch mechanism

and like, you know, how do we avoid arms race scenarios?

You need people who aren’t naive to be thinking about this.

And again, like luckily there’s a lot of smart,

non naive game theorists within that group.

Yes, I’m concerned about it and I think it’s again,

a thing that we need people to be thinking about

in terms of like, how do we create,

how do we mitigate the arms race dynamics

and how do we solve the thing of,

Bostrom calls it the orthogonality problem whereby,

because it’s obviously there’s a chance, you know,

the belief, the hope is,

is that you build something that’s super intelligent

and by definition of being super intelligent,

it will also become super wise

and have the wisdom to know what the right goals are.

And hopefully those goals include keeping humanity alive.

Right?

But Bostrom says that actually those two things,

you know, super intelligence and super wisdom

aren’t necessarily correlated.

They’re actually kind of orthogonal things.

And how do we make it so that they are correlated?

How do we guarantee it?

Because we need it to be guaranteed really,

to know that we’re doing the thing safely.

But I think that like merging of intelligence and wisdom,

at least my hope is that this whole process

happens sufficiently slowly

that we’re constantly having these kinds of debates

that we have enough time to figure out

how to modify each version of the system

as it becomes more and more intelligent.

Yes, buying time is a good thing, definitely.

Anything that slows everything down.

We just, everyone needs to chill out.

We’ve got millennia to figure this out.

Yeah.

Or at least, at least, well, it depends.

Again, some people think that, you know,

we can’t even make it through the next few decades

without having some kind of

omni wise coordination mechanism.

And there’s also an argument to that.

Yeah, I don’t know.

Well, there is, I’m suspicious of that kind of thinking

because it seems like the entirety of human history

has people in it that are like predicting doom

just around the corner.

There’s something about us

that is strangely attracted to that thought.

It’s almost like fun to think about

the destruction of everything.

Just objectively speaking,

I’ve talked and listened to a bunch of people

and they are gravitating towards that.

It’s almost, I think it’s the same thing

that people love about conspiracy theories

is they love to be the person that kind of figured out

some deep fundamental thing about the,

that’s going to be, it’s going to mark something

extremely important about the history of human civilization

because then I will be important.

When in reality, most of us will be forgotten

and life will go on.

And one of the sad things about

whenever anything traumatic happens to you,

whenever you lose loved ones or just tragedy happens,

you realize life goes on.

Even after a nuclear war that will wipe out

some large percentage of the population

and will torture people for years to come

because of the effects of a nuclear winter,

people will still survive.

Life will still go on.

I mean, it depends on the kind of nuclear war.

But in the case of nuclear war, it will still go on.

That’s one of the amazing things about life.

It finds a way.

And so in that sense, I just,

I feel like the doom and gloom thing

is a…

Well, what we don’t, yeah,

we don’t want a self fulfilling prophecy.

Yes, that’s exactly.

Yes, and I very much agree with that.

And I even have a slight feeling

from the amount of time we spent in this conversation

talking about this,

because it’s like, is this even a net positive

if it’s like making everyone feel,

oh, in some ways, like making people imagine

these bad scenarios can be a self fulfilling prophecy,

but at the same time, that’s weighed off

with at least making people aware of the problem

and gets them thinking.

And I think particularly,

the reason why I wanna talk about this to your audience

is that on average, they’re the type of people

who gravitate towards these kinds of topics

because they’re intellectually curious

and they can sort of sense that there’s trouble brewing.

They can smell that there’s,

I think there’s a reason

that people are thinking about this stuff a lot

is because the probability,

the probability, it’s increased in probability

over certainly over the last few years,

trajectories have not gone favorably,

let’s put it since 2010.

So it’s right, I think, for people to be thinking about it,

but that’s where they’re like,

I think whether it’s a useful fiction

or whether it’s actually true

or whatever you wanna call it,

I think having this faith,

this is where faith is valuable

because it gives you at least this like anchor of hope.

And I’m not just saying it to like trick myself,

like I do truly,

I do think there’s something out there that wants us to win.

I think there’s something that really wants us to win.

And it just, you just have to be like,

just like kind of, okay, now I sound really crazy,

but like open your heart to it a little bit

and it will give you the like,

the sort of breathing room

with which to marinate on the solutions.

We are the ones who have to come up with the solutions,

but we can use that.

There’s like, there’s hashtag positivity.

There’s value in that.

Yeah, you have to kind of imagine

all the destructive trajectories that lay in our future

and then believe in the possibility

of avoiding those trajectories.

All while, you said audience,

all while sitting back, which is majority,

the two people that listen to this

are probably sitting on a beach,

smoking some weed, just, that’s a beautiful sunset.

They’re looking at just the waves going in and out.

And ultimately there’s a kind of deep belief there

in the momentum of humanity to figure it all out.

But we’ve got a lot of work to do.

Which is what makes this whole simulation,

this video game kind of fun.

This battle of polytopia,

I still, man, I love those games so much.

So good.

And that one for people who don’t know,

Battle of Polytopia is a big,

it’s like a, is this really radical simplification

of a civilization type of game.

It still has a lot of the skill tree development,

a lot of the strategy,

but it’s easy enough to play on a phone.

Yeah.

It’s kind of interesting.

They’ve really figured it out.

It’s one of the most elegantly designed games

I’ve ever seen.

It’s incredibly complex.

And yet being, again, it walks that line

between complexity and simplicity

in this really, really great way.

And they use pretty colors

that hack the dopamine reward circuits in our brains.

Very well.

Yeah, it’s fun.

Video games are so fun.

Yeah.

Most of this life is just about fun.

Escaping all the suffering to find the fun.

What’s energy healing?

I have in my notes, energy healing question mark.

What’s that about?

Oh man.

God, your audience is gonna think I’m mad.

So the two crazy things that happened to me,

the one was the voice in the head

that said you’re gonna win this tournament

and then I won the tournament.

The other craziest thing that’s happened to me

was in 2018,

I started getting this like weird problem in my ear

where it was kind of like low frequency sound distortion,

where voices, particularly men’s voices,

became incredibly unpleasant to listen to.

It would like create this,

it would like be falsely amplified or something

and it was almost like a physical sensation in my ear,

which was really unpleasant.

And it would like last for a few hours and then go away

and then come back for a few hours and go away.

And I went and got hearing tests

and they found that like the bottom end,

I was losing the hearing in that ear.

And in the end, I got,

doctors said they think it was this thing

called Meniere’s disease,

which is this very unpleasant disease

where people basically end up losing their hearing,

but they get this like,

it often comes with like dizzy spells and other things

because it’s like the inner ear gets all messed up.

Now, I don’t know if that’s actually what I had,

but that’s what at least a couple of,

one doctor said to me.

But anyway, so I’d had three months of this stuff,

this going on and it was really getting me down.

And I was at Burning Man of all places.

I don’t mean to be that person talking about Burning Man,

but I was there.

And again, I’d had it and I was unable to listen to music,

which is not what you want

because Burning Man is a very loud, intense place.

And I was just having a really rough time.

And on the final night,

I get talking to this girl who’s like a friend of a friend.

And I mentioned, I was like,

oh, I’m really down in the dumps about this.

And she’s like, oh, well,

I’ve done a little bit of energy healing.

Would you like me to have a look?

And I was like, sure.

Now this was again,

deep, I was, you know, no time in my life for this.

I didn’t believe in any of this stuff.

I was just like, it’s all bullshit.

It’s all wooey nonsense.

But I was like, sure, I’ll have a go.

And she starts like with her hand and she says,

oh, there’s something there.

And then she leans in and she starts like sucking

over my ear, not actually touching me,

but like close to it, like with her mouth.

And it was really unpleasant.

I was like, whoa, can you stop?

She’s like, no, no, no, there’s something there.

I need to get it.

And I was like, no, no, no, I really don’t like it.

Please, this is really loud.

She’s like, I need to just bear with me.

And she does it.

And I don’t know how long, for a few minutes.

And then she eventually collapses on the ground,

like freezing cold, crying.

And I’m just like, I don’t know what the hell is going on.

Like I’m like thoroughly freaked out

as is everyone else watching.

Just like, what the hell?

And we like warm her up and she was like,

oh, what, oh, you know, she was really shaken up.

And she’s like, I don’t know what that,

she said it was something very unpleasant and dark.

Don’t worry, it’s gone.

I think you’ll be fine in a couple,

you’ll have the physical symptoms for a couple of weeks

and you’ll be fine.

But, you know, she was just like that, you know,

so I was so rattled, A, because the potential that actually

I’d had something bad in me that made someone feel bad

and that she was scared.

That was what, you know, I was like, wait,

I thought you do this, this is the thing.

Now you’re terrified.

Like you bought like some kind of exorcism or something.

Like what the fuck is going on?

So it, like just the most insane experience.

And frankly, it took me like a few months

to sort of emotionally recover from it.

But my ear problem went away about a couple of weeks later

and touch wood, I’ve not had any issues since.

So.

That gives you like hints

that maybe there’s something out there.

I mean, I don’t, again,

I don’t have an explanation for this.

The most probable explanation was, you know,

I was at Burning Man, I was in a very open state.

Let’s just leave it at that.

And, you know, placebo is an incredibly powerful thing

and a very not understood thing.

So.

Almost assigning the word placebo to it reduces it down

to a way that it doesn’t deserve to be reduced down.

Maybe there’s a whole science of what we call placebo.

Maybe there’s a, placebo’s a door.

Self healing, you know?

And I mean, I don’t know what the problem was.

Like I was told it was Meniere's.

I don’t want to say I definitely had that

because I don’t want people to think that,

oh, that’s how, you know, if they do have that,

because it’s a terrible disease.

And if they have that,

that this is going to be a guaranteed way for it

to fix it for them.

I don’t know.

And I also don’t, I don’t,

and you’re absolutely right to say,

like using even the word placebo is like,

it comes with this like baggage of, of like frame.

And I don’t want to reduce it down.

All I can do is describe the experience and what happened.

I cannot put an ontological framework around it.

I can’t say why it happened, what the mechanism was,

what the problem even was in the first place.

I just know that something crazy happened

and it was while I was in an open state.

And fortunately for me, it made the problem go away.

But what I took away from it, again,

it was part of this, you know,

this took me on this journey of becoming more humble

about what I think I know.

Because as I said before, I was like,

I was in the like Richard Dawkins train of atheism

in terms of there is no God.

And everything like that is bullshit.

We know everything, we know, you know,

the only way we can get through,

we know how medicine works and its molecules

and chemical interactions and that kind of stuff.

And now it’s like, okay, well,

there’s clearly more for us to understand.

And that doesn’t mean that it’s ascientific as well,

because, you know, the beauty of the scientific method

is that it still can apply to this situation.

Like, I don’t see why, you know,

I would like to try and test this experimentally.

I haven’t really, like, you know,

I don’t know how we would go about doing that.

We’d have to find other people with the same condition,

I guess, and like, try and repeat the experiment.

But it doesn’t, just because something happens

that’s sort of out of the realms

of our current understanding,

it doesn’t mean that it’s,

the scientific method can’t be used for it.

Yeah, I think the scientific method sits on a foundation

of those kinds of experiences,

because the scientific method is a process

to carve away at the mystery all around us.

And experiences like this is just a reminder

that we’re mostly shrouded in mystery still.

That’s it.

It’s just like a humility.

Like, we haven’t really figured this whole thing out.

But at the same time, we have found ways

to act, you know, we’re clearly doing something right,

because think of the technological scientific advancements,

the knowledge that we have that would blow people’s minds

even from 100 years ago.

Yeah, and we’ve even allegedly got out to space

and landed on the moon, although I still haven’t,

I have not seen evidence of the Earth being round,

but I’m keeping an open mind.

Speaking of which, you studied physics

and astrophysics, just to go to that,

just to jump around through the fascinating life you’ve had,

when did you, how did that come to be?

Like, when did you fall in love with astronomy

and space and things like this?

As early as I can remember.

I was very lucky that my mom, and my dad,

but particularly my mom, my mom is like the most nature,

she is Mother Earth, is the only way to describe her.

Just, she’s like Dr. Doolittle, animals flock to her

and just like sit and look at her adoringly.

As she sings.

Yeah, she just is Mother Earth,

and she has always been fascinated by,

she doesn’t have any, she never went to university

or anything like that, she’s actually phobic of maths,

if I try and get her to like,

you know, I was trying to teach her poker and she hated it.

But she’s so deeply curious,

and that just got instilled in me when, you know,

we would sleep out under the stars,

whenever it was, you know, the two nights a year

when it was warm enough in the UK to do that.

And we would just lie out there until we fell asleep,

looking at, looking for satellites,

looking for shooting stars, and I was just always,

I don’t know whether it was from that,

but I’ve always naturally gravitated to like the biggest,

the biggest questions.

And also the like, the most layers of abstraction I love,

just like, what’s the meta question?

What’s the meta question and so on.

So I think it just came from that really.

And then on top of that, like physics,

you know, it also made logical sense

in that it was a degree that,

well, a subject that ticks the box of being,

you know, answering these really big picture questions,

but it was also extremely useful.

It like has a very high utility in terms of,

I didn’t know necessarily,

I thought I was gonna become like a research scientist.

My original plan was,

I wanna be a professional astronomer.

So it’s not just like a philosophy degree

that asks the big questions,

and it’s not like biology and the path

to go to medical school or something like that,

which is all very pragmatic,

but this is, yeah,

physics is a good combination of the two.

Yeah, at least for me, it made sense.

And I was good at it, I liked it.

Yeah, I mean, it wasn’t like I did an immense amount

of soul searching to choose it or anything.

It just was like this, it made the most sense.

I mean, you have to make this decision in the UK age 17,

which is crazy, because, you know, in US,

you go the first year, you do a bunch of stuff, right?

And then you choose your major.

Yeah, I think the first few years of college,

you focus on the drugs and only as you get closer

to the end, do you start to think, oh shit,

this wasn’t about that.

And I owe the government a lot of money.

How many alien civilizations are out there?

When you looked up at the stars with your mom

and you were counting them, what’s your mom think

about the number of alien civilizations?

I actually don’t know.

I would imagine she would take the viewpoint of,

you know, she’s pretty humble and she knows how many,

she knows there’s a huge number of potential spawn sites

out there, so she would.

Spawn sites?

Spawn sites, yeah.

You know, this is all spawn sites.

Yeah, spawn sites in Polytopia.

We spawned on Earth, you know, it’s.

Hmm, yeah, spawn sites.

Why does that feel weird to say spawn?

Because it makes me feel like there’s only one source

of life and it’s spawning in different locations.

That’s why the word spawn.

Because it feels like life that originated on Earth

really originated here.

Right, it is unique to this particular.

Yeah, I mean, but I don’t, in my mind, it doesn’t exclude,

you know, the completely different forms of life

and different biochemical soups can’t also spawn,

but I guess it implies that there’s some spark

that is uniform, which I kind of like the idea of.

And then I get to think about respawning,

like after it dies, like what happens if life on Earth ends?

Is it gonna restart again?

Probably not, it depends.

Maybe Earth is too.

It depends on the type of, you know,

what’s the thing that kills it off, right?

If it’s a paperclip maximizer, not that example literally,

but, you know, some kind of very self-replicating,

high on the capabilities, very low on the wisdom type thing.

So whether that’s, you know, gray goo, green goo,

you know, like nanobots or just a shitty misaligned AI

that thinks it needs to turn everything into paperclips.

You know, if it’s something like that,

then it’s gonna be very hard for life,

you know, complex life, because by definition,

you know, a paperclip maximizer

is the ultimate instantiation of Moloch.

Deeply low complexity, over optimization on a single thing,

sacrificing everything else, turning the whole world into.

Although something tells me,

like if we actually take a paperclip maximizer,

it destroys everything.

It’s a really dumb system that just envelops

the whole of Earth.

And the universe beyond, yeah.

Oh, I didn’t know that part, but okay, great.

That’s the thought experiment.

So it becomes a multi planetary paperclip maximizer?

Well, it just propagates.

I mean, it depends whether it figures out

how to jump the vacuum gap.

But again, I mean, this is all silly

because it’s a hypothetical thought experiment,

which I think doesn’t actually have

much practical application to the AI safety problem,

but it’s just a fun thing to play around with.

But if by definition, it is maximally intelligent,

which means it is maximally good at

navigating the environment around it

in order to achieve its goal,

but extremely bad at choosing goals in the first place.

So again, we’re talking on this orthogonality thing, right?

It’s very low on wisdom, but very high on capability.

Then it will figure out how to jump the vacuum gap

between planets and stars and so on,

and thus just turn every atom it gets its hands on

into paperclips.

Yeah, by the way, for people who don’t.

Which is maximum virality, by the way.

That’s what virality is.

But does not mean that virality is necessarily

all about maximizing paperclips.

In that case, it is.

So for people who don’t know,

this is just a thought experiment example

of an AI system that’s very, that has a goal

and is willing to do anything to accomplish that goal,

including destroying all life on Earth

and all human life and all of consciousness in the universe

for the goal of producing a maximum number of paperclips.

Okay.

Or whatever its optimization function was

that it was set at.

But don’t you think?

It could be making, recreating Lexes.

Maybe it’ll tile the universe in Lex.

Go on.

I like this idea.

No, I’m just kidding.

That’s better.

That’s more interesting than paperclips.

That could be infinitely optimal

if I do say so myself.

But if you ask me, it’s still a bad thing

because it’s permanently capping

what the universe could ever be.

It’s like, that’s its end state.

Or achieving the optimal

that the universe could ever achieve.

But that’s up to,

different people have different perspectives.

But don’t you think within the paperclip world

that would emerge, just like in the zeros and ones

that make up a computer,

that would emerge beautiful complexities?

Like, it won’t suppress, you know,

as you scale to multiple planets and throughout,

there’ll emerge these little worlds

that on top of the fabric of maximizing paperclips,

there will be, that would emerge like little societies

of paperclip.

Well, then we’re not describing

a paperclip maximizer anymore.

Because by the, like, if you think of what a paperclip is,

it is literally just a piece of bent iron, right?

So if it’s maximizing that throughout the universe,

it’s taking every atom it gets its hands on

into somehow turning it into iron or steel.

And then bending it into that shape

and then done and done.

By definition, like paperclips,

there is no way for, well, okay.

So you’re saying that paperclips somehow

will just emerge and create through gravity or something.

Well, no, no, no.

Because there’s a dynamic element to the whole system.

It’s not just, it’s creating those paperclips

and the act of creating, there’s going to be a process.

And that process will have a dance to it.

Because it’s not like sequential thing.

There’s a whole complex three dimensional system

of paperclips, you know, like, you know,

people like string theory, right?

It’s supposed to be strings that are interacting

in fascinating ways.

I’m sure paperclips are very string like,

they can be interacting in very interesting ways

as you scale exponentially through three dimensional space.

I mean, I’m sure the paperclip maximizer

has to come up with a theory of everything.

It has to create like wormholes, right?

It has to break, like,

it has to understand quantum mechanics.

It has to understand general relativity.

I love your optimism.

This is where I’d say this,

we’re going into the realm of pathological optimism

where if I, it’s.

I’m sure there’ll be a,

I think there’s an intelligence

that emerges from that system.

So you’re saying that basically intelligence

is inherent in the fabric of reality and will find a way.

Kind of like Goldblum says, life will find a way.

You think life will find a way

even out of this perfectly homogenous dead soup.

It’s not perfectly homogenous.

It has to, it’s perfectly maximal in the production.

I don’t know why people keep thinking it’s homogenous.

It maximizes the number of paperclips.

That’s the only thing.

It’s not trying to be homogenous.

It’s trying.

It’s trying to maximize paperclips.

So you’re saying, you’re saying that because it,

because, you know, kind of like in the Big Bang

or, you know, it seems like, you know, things,

there were clusters, there was more stuff here than there.

That was enough of the patternicity

that kickstarted the evolutionary process.

It’s the little weirdness that will make it beautiful.

So yeah.

Complexity emerges.

Interesting, okay.

Well, so how does that line up then

with the whole heat death of the universe, right?

Cause that’s another sort of instantiation of this.

It’s like everything becomes so far apart and so cold

and so perfectly mixed that it’s like homogenous grayness.

Do you think that even out of that homogenous grayness

where there’s no, you know, negative entropy,

that, you know, there’s no free energy that we understand

even from that new stuff?

Yeah, the paperclip maximizer

or any other intelligence systems

will figure out ways to travel to other universes

to create Big Bangs within those universes

or through black holes to create whole other worlds

to break the, what we consider are the limitations

of physics.

The paperclip maximizer will find a way if a way exists.

And we should be humbled to realize that we don’t.

Yeah, but because it just wants to make more paperclips.

So it’s gonna go into those universes

and turn them into paperclips.

Yeah, but we humans, not humans,

but complex system exists on top of that.

We’re not interfering with it.

This complexity emerges from the simple base state.

The simple base.

Whether it’s, yeah, whether it’s, you know,

plank lengths or paperclips as the base unit.

Yeah, you can think of like the universe

as a paperclip maximizer because it’s doing some dumb stuff.

Like physics seems to be pretty dumb.

It has, like, I don’t know if you can summarize it.

Yeah, the laws are fairly basic

and yet out of them amazing complexity emerges.

And its goals seem to be pretty basic and dumb.

If you can summarize its goals,

I mean, I don’t know what’s a nice way maybe,

maybe laws of thermodynamics could be good.

I don’t know if you can assign goals to physics,

but if you formulate in the sense of goals,

it’s very similar to paperclip maximizing

in the dumbness of the goals.

But the pockets of complexity as it emerge

is where beauty emerges.

That’s where life emerges.

That’s where intelligence, that’s where humans emerge.

And I think we’re being very down

on this whole paperclip maximizer thing.

Now, the reason we hated it.

I think, yeah, because what you’re saying

is that you think that the force of emergence itself

is another like unwritten, not unwritten,

but like another baked in law of reality.

And you’re trusting that emergence will find a way to,

even out of seemingly the most Moloch-y,

awful, plain outcome, emergence will still find a way.

I love that as a philosophy.

I think it’s very nice.

I would wield it carefully

because there’s large error bars on that

and the certainty of that.

How about we build the paperclip maximizer and find out.

Classic, yeah.

Moloch is doing cartwheels, man.

Yeah.

But the thing is it will destroy humans in the process,

which is the reason we really don’t like it.

We seem to be really holding on

to this whole human civilization thing.

Would that make you sad if AI systems that are beautiful,

that are conscious, that are interesting

and complex and intelligent,

ultimately lead to the death of humans?

Would that make you sad?

If humans led to the death of humans?

Sorry.

Like if they would supersede humans.

Oh, if some AI?

Yeah, AI would end humans.

I mean, that’s the reason why I’m like,

in some ways less emotionally concerned about AI risk

than say, bio risk.

Because at least with AI, there’s a chance,

you know, if we’re in this hypothetical

where it wipes out humans,

but it does it for some like higher purpose,

it needs our atoms and energy to do something.

At least now the universe is going on

to do something interesting,

whereas if it wipes everything, you know,

bio like just kills everything on earth and that’s it.

And there’s no more, you know,

earth cannot spawn anything more meaningful

in the few hundred million years it has left,

because it doesn’t have much time left.

Then, yeah, I don’t know.

So one of my favorite books I’ve ever read is,

Novacene by James Lovelock, who sadly just died.

He wrote it when he was like 99.

He died aged 102, so it’s a fairly new book.

And he sort of talks about that,

that he thinks it’s, you know,

sort of building off this Gaia theory

where like earth is like living,

some form of intelligence itself,

and that this is the next like step, right?

Is this, whatever this new intelligence

that is maybe silicon based

as opposed to carbon based goes on to do.

And it’s a really sort of, in some ways an optimistic,

but really fatalistic book.

And I don’t know if I fully subscribed to it,

but it’s a beautiful piece to read anyway.

So am I sad by that idea?

I think so, yes.

And actually, yeah, this is the reason

why I’m sad by the idea,

because if something is truly brilliant

and wise and smart and truly super intelligent,

it should be able to figure out abundance.

So if it figures out abundance,

it shouldn’t need to kill us off.

It should be able to find a way for us.

It should be, there’s plenty, the universe is huge.

There should be plenty of space for it to go out

and do all the things it wants to do,

and like give us a little pocket

where we can continue doing our things

and we can continue to do things and so on.

And again, if it’s so supremely wise,

it shouldn’t even be worried

about the game theoretic considerations

that by leaving us alive,

we’ll then go and create another like super intelligent agent

that it then has to compete against,

because it should be wise and smart enough

to not have to concern itself with that.

Unless it deems humans to be kind of assholes.

Like the humans are a source

of a lose-lose kind of dynamics.

Well, yes and no, we’re not,

Moloch is, that’s why I think it’s important to separate.

But maybe humans are the source of Moloch.

No, I think, I mean, I think game theory

is the source of Moloch.

And, you know, because Moloch exists

in nonhuman systems as well.

It happens within like agents within a game

in terms of like, you know, it applies to agents,

but like it can apply to, you know,

a species that’s on an island of animals,

you know, rats out competing,

the ones that like massively consume all the resources

are the ones that are gonna win out

over the more like chill, socialized ones.

And so, you know, creates this Malthusian trap,

like Moloch exists in little pockets in nature as well.

So it’s not a strictly human thing.

I wonder if it’s actually a result of consequences

of the invention of predator and prey dynamics.

Maybe it needs to, AI will have to kill off

every organism that’s.

Now you’re talking about killing off competition.

Not competition, but just like the way,

it’s like the weeds or whatever

in a beautiful flower garden.

Parasites.

The parasites, yeah, on the whole system.

Now, of course, it won’t do that completely.

It’ll put them in a zoo like we do with parasites.

It’ll ring fence.

Yeah, and there’ll be somebody doing a PhD

on like they’ll prod humans with a stick

and see what they do.

But I mean, in terms of letting us run wild

outside of, you know, a geographically

constrained region,

it might decide against that.

No, I think there’s obviously the capacity

for beauty and kindness and non Moloch behavior

amidst humans, so I’m pretty sure AI will preserve us.

Let me, I don’t know if you answered the aliens question.

No, I didn’t.

You had a good conversation with Toby Ord.

Yes.

About various sides of the universe.

I think, did he say, now I’m forgetting,

but I think he said it’s a good chance we’re alone.

So the classic, you know, Fermi paradox question is,

there are so many spawn points and yet, you know,

it didn’t take us that long to go from harnessing fire

to sending out radio signals into space.

So surely given the vastness of space we should be,

and you know, even if only a tiny fraction of those

create life and other civilizations too,

we should be, the universe should be very noisy.

There should be evidence of Dyson spheres or whatever,

you know, like at least radio signals and so on,

but seemingly things are very silent out there.

Now, of course, it depends on who you speak to.

Some people say that they’re getting signals all the time

and so on and like, I don’t wanna make

an epistemic statement on that,

but it seems like there’s a lot of silence.

And so that raises this paradox.

And then say, you know, the Drake equation.

So the Drake equation is like basically just a simple thing

of like trying to estimate the number of possible

civilizations within the galaxy

by multiplying the number of stars created per year

by the number of stars that have planets,

planets that are habitable, blah, blah, blah.

So all these like different factors.

And then you plug in numbers into that and you, you know,

depending on like the range of, you know,

your lower bound and your upper bound point estimates

that you put in, you get out a number at the end

for the number of civilizations.
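
For reference, the standard form of the Drake equation being paraphrased here is the single product below; the conventional factor names are filled in, since the conversation only gestures at them.

```latex
% The Drake equation as one product
% (N = expected number of detectable civilizations in the galaxy):
N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L
% R_* : star formation rate per year, f_p : fraction of stars with planets,
% n_e : habitable planets per planet-bearing star, f_l / f_i / f_c : fractions
% that go on to develop life, intelligence, and detectable technology,
% L : lifetime over which a civilization remains detectable.
```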

But what Toby and his crew did differently was,

Toby is a researcher at the Future of Humanity Institute.

They, instead of, they realized that it’s like basically

a statistical quirk that if you put in point estimates,

even if you think you’re putting in

conservative point estimates,

because on some of these variables,

the uncertainty is so large,

it spans like maybe even like a couple of hundred

orders of magnitude.

By putting in point estimates,

it’s always going to lead to overestimates.

And so they, like by putting stuff on a log scale,

or actually they did it on like a log log scale

on some of them,

and then like ran the simulation across the whole

bucket of uncertainty,

across all of those orders of magnitude.

When you do that,

then actually the number comes out much, much smaller.

And that’s the more statistically rigorous,

you know, mathematically correct way

of doing the calculation.

It’s still a lot of hand waving.

As science goes, it’s like definitely, you know,

I don’t know what a good analogy is,

but it’s hand wavy.

And anyway, when they did this,

and then they did a Bayesian update on it as well,

to like factor in the fact that there is no evidence

that we’re picking up because, you know,

absence of evidence is actually a form of evidence, right?

And the long and short of it comes out that

we’re roughly 70% likely to be the only

intelligent civilization in our galaxy thus far,

and around 50-50 in the entire observable universe,

which sounds so crazily counterintuitive,

but their math is legit.
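
A minimal sketch of the statistical point being made, distinct from the Future of Humanity Institute's actual analysis: treat each Drake factor as a log-uniform distribution over its plausible range rather than a single point estimate, sample, and look at how often N falls below one. All the bounds here are illustrative placeholders, and the Bayesian update on the observed silence that Liv mentions is omitted.

```python
# Illustrative sketch only: not the published analysis, and the (low, high)
# bounds below are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

# Plausible ranges for each Drake factor, some spanning many orders of magnitude.
bounds = {
    "R_star": (1.0, 100.0),   # star formation rate per year
    "f_p":    (0.1, 1.0),     # fraction of stars with planets
    "n_e":    (1e-3, 10.0),   # habitable planets per planet-bearing star
    "f_l":    (1e-30, 1.0),   # fraction on which life arises (huge uncertainty)
    "f_i":    (1e-10, 1.0),   # fraction that develop intelligence
    "f_c":    (1e-2, 1.0),    # fraction that become detectable
    "L":      (1e2, 1e10),    # years a civilization stays detectable
}

# "Point estimate" version: plug a single midpoint in for every factor.
point_estimate = np.prod([(lo + hi) / 2 for lo, hi in bounds.values()])

# Distribution version: sample every factor log-uniformly across its range.
n = 100_000
samples = np.ones(n)
for lo, hi in bounds.values():
    samples *= np.exp(rng.uniform(np.log(lo), np.log(hi), n))

print(f"midpoint point estimate of N: {point_estimate:.3g}")
print(f"median of sampled N:          {np.median(samples):.3g}")
print(f"P(N < 1), i.e. alone in the galaxy: {np.mean(samples < 1):.2f}")
```

With ranges this wide, the midpoint product comes out enormous while a large fraction of the sampled outcomes still fall below one civilization, which is the statistical quirk being described above.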

Well, yeah, the math around this particular equation,

which the equation is ridiculous on many levels,

but the powerful thing about the equation

is there’s the different things,

different components that can be estimated,

and the error bars on which can be reduced with science.

And hence, since the equation came out,

the error bars have been coming down

on different aspects.

And so that, it almost kind of says,

what, like this gives you a mission to reduce the error bars

on these estimates over a period of time.

And once you do, you can better and better understand,

like in the process of redoing the error bars,

you’ll get to understand actually

what is the right way to find out where the aliens are,

how many of them there are, and all those kinds of things.

So I don’t think it’s good to use that for an estimation.

I think you do have to think from like,

more like from first principles,

just looking at what life is on Earth,

and trying to understand the very physics based,

biology, chemistry, biology based question of what is life,

maybe computation based.

What the fuck is this thing?

And that, like how difficult is it to create this thing?

It’s one way to say like how many planets like this

are out there, all that kind of stuff,

but it feels like from our very limited knowledge

perspective, the right way is to think how does,

what is this thing and how does it originate?

From very simple nonlife things,

how does complex lifelike things emerge?

From a rock to a bacteria, protein,

and these like weird systems that encode information

and pass information on, that self-replicate,

and then also select each other and mutate

in interesting ways such that they can adapt

and evolve and build increasingly more complex systems.

Right, well it’s a form of information processing, right?

Right.

Whereas information transfer, but then also

an energy processing, which then results in,

I guess information processing,

maybe I’m getting bogged down.

It’s doing some modification and yeah,

the input is some energy.

Right, it’s able to extract, yeah,

extract resources from its environment

in order to achieve a goal.

But the goal doesn’t seem to be clear.

Right, well the goal is to make more of itself.

Yeah, but in a way that increases,

I mean I don’t know if evolution

is a fundamental law of the universe,

but it seems to want to replicate itself

in a way that maximizes the chance of its survival.

Individual agents within an ecosystem do, yes, yes.

Evolution itself doesn’t give a fuck.

Right.

It’s a very, it don’t care.

It’s just like, oh, you optimize it.

Well, at least it’s certainly, yeah,

it doesn’t care about the welfare

of the individual agents within it,

but it does seem to, I don’t know.

I think the mistake is that we’re anthropomorphizing.

To even try and give evolution a mindset

because it is, there’s a really great post

by Eliezer Yudkowsky on LessWrong,

which is called An Alien God.

And he talks about the mistake we make

when we try and put our mind,

think through things from an evolutionary perspective

as though giving evolution some kind of agency

and what it wants.

Yeah, worth reading, but yeah.

I would like to say that having interacted

with a lot of really smart people

that say that anthropomorphization is a mistake,

I would like to say that saying

that anthropomorphization is a mistake is a mistake.

I think there’s a lot of power in anthropomorphization,

if I can only say that word correctly one time.

I think that’s actually a really powerful way

to reason about things.

And I think people, especially people in robotics

seem to run away from it as fast as possible.

And I just, I think.

Can you give an example of like how it helps in robotics?

Oh, in that our world is a world of humans

and to see robots as fundamentally just tools

runs away from the fact that we live in a world,

a dynamic world of humans.

That like these, all these game theory systems

we’ve talked about, that a robot

that ever has to interact with humans.

And I don’t mean like intimate friendship interaction.

I mean, in a factory setting where it has to deal

with the uncertainty of humans, all that kind of stuff.

You have to acknowledge that the robot’s behavior

has an effect on the human, just as much as the human

has an effect on the robot.

And there’s a dance there.

And you have to realize that this entity,

when a human sees a robot, this is obvious

in a physical manifestation of a robot,

they feel a certain way.

They have a fear, they have uncertainty.

They have their own personal life projections.

If they have pets and dogs

and the thing looks like a dog.

They have their own memories of what a dog is like.

They have certain feelings and that’s gonna be useful

in a safety setting, safety critical setting,

which is one of the most nontrivial settings for a robot

in terms of how to avoid any kind of dangerous situations.

And a robot should really consider that

in navigating its environment.

And we humans are right to reason about how a robot

should consider navigating its environment

through anthropomorphization.

I also think our brains are designed to think

in human terms, like game theory,

I think is best applied in the space of human decisions.

And so…

Right, you’re dealing, I mean, with things like AI,

AI is, they are, we can somewhat,

like, I don’t think it’s,

the reason I say anthropomorphization

we need to be careful with is because there is a danger

of overly applying, overly wrongly assuming

that this artificial intelligence is going to operate

in any similar way to us,

because it is operating

on a fundamentally different substrate.

Like even dogs or even mice or whatever, in some ways,

like anthropomorphizing them is less of a mistake, I think,

than an AI, even though it’s an AI we built and so on,

because at least we know

that they’re running from the same substrate.

And they’ve also evolved from the same,

out of the same evolutionary process.

They’ve followed this evolution

of like needing to compete for resources

and needing to find a mate and that kind of stuff.

Whereas an AI that has just popped into existence

somewhere on like a cloud server,

let’s say, you know, or whatever, however it runs

and whatever, whether it,

I don’t know whether they have an internal experience.

I don’t think they necessarily do.

In fact, I don’t think they do.

But the point is, is that to try and apply

any kind of modeling of like thinking through problems

and decisions in the same way that we do

has to be done extremely carefully because they are,

like, they’re so alien,

their method of whatever their form of thinking is,

it’s just so different because they’ve never had to evolve,

you know, in the same way.

Yeah, beautifully put.

I was just playing devil’s advocate.

I do think in certain contexts,

anthropomorphization is not gonna hurt you.

Yes.

Engineers run away from it too fast.

I can see that.

But for the most part, you’re right.

Do you have advice for young people today,

like the 17 year old that you were,

of how to live a life

you can be proud of, how to have a career

you can be proud of in this world full of Molochs?

Think about the win wins.

Look for win win situations.

And be careful not to, you know, overly use your smarts

to convince yourself that something is win win

when it’s not.

So that’s difficult.

And I don’t know how to advise, you know, people on that

because it’s something I’m still figuring out myself.

But have that as a sort of default MO.

Don’t see things, everything as a zero sum game.

Try to find the positive sumness and like find ways

if there doesn’t seem to be one,

consider playing a different game.

So that I would suggest that.

Do not become a professional poker player.

Because people always ask that like, oh, she’s a pro.

I wanna do that too.

Fine, you could have done it if you were, you know,

when I started out,

it was a very different situation back then.

Poker is, you know, a great game to learn

in order to understand the ways to think.

And I recommend people learn it,

but don’t try and make a living from it these days.

It’s almost, it’s very, very difficult

to the point of being impossible.

And then really, really be aware of how much time

you spend on your phone and on social media

and really try and keep it to a minimum.

Be aware that basically every moment that you spend on it

is bad for you.

So it doesn’t mean to say you can never do it,

but just have that running in the background.

I’m doing a bad thing for myself right now.

I think that’s the general rule of thumb.

Of course, about becoming a professional poker player,

if there is a thing in your life that’s like that

and nobody can convince you otherwise, just fucking do it.

Don’t listen to anyone’s advice.

Find a thing that you can’t be talked out of too.

That’s a thing.

I like that, yeah.

You were a lead guitarist in a metal band?

Oh.

Did I write that down from something?

What did you, what’d you do it for?

The performing, was it the pure, the music of it?

Was it just being a rock star?

Why’d you do it?

So we only ever played two gigs.

We didn’t last, you know, it wasn’t a very,

we weren’t famous or anything like that.

But I was very into metal.

Like it was my entire identity,

sort of from the age of 16 to 23.

What’s the best metal band of all time?

Don’t ask me that, it’s so hard to answer.

So I know I had a long argument with,

I’m a guitarist, more like a classic rock guitarist.

So, you know, I’ve had friends who are very big

Pantera fans and so there was often arguments

about what’s the better metal band,

Metallica versus Pantera.

This is a more kind of 90s maybe discussion.

But I was always on the side of Metallica,

both musically and in terms of performance

and the depth of lyrics and so on.

So, but they were, basically everybody was against me.

Because if you’re a true metal fan,

I guess the idea goes is you can’t possibly

be a Metallica fan.

Because Metallica is pop, it’s just like, they sold out.

Metallica are metal.

Like they were the, I mean, again, you can’t say

who was the godfather of metal, blah, blah, blah.

But like they were so groundbreaking and so brilliant.

I mean, you’ve named literally two of my favorite bands.

Like when you asked that question, who are my favorites?

Like those were two that came up.

A third one is Children of Bodom,

who I just think, oh, they just tick all the boxes for me.

Yeah, I don’t know.

It’s nowadays, like I kind of sort of feel

like a repulsion to the, I was that myself.

Like I’d be like, who do you prefer more?

Come on, who’s like, no, you have to rank them.

But it’s like this false zero sumness that’s like, why?

They’re so additive.

Like there’s no conflict there.

Although when people ask that kind of question

about anything, movies, I feel like it’s hard work.

And it’s unfair, but it’s, you should pick one.

Like, and that’s actually the same kind of,

it’s like a fear of a commitment.

When people ask me, what’s your favorite band?

It’s like, but I, it’s good to pick.

Exactly.

And thank you for the tough question, yeah.

Well, maybe not in the context

when a lot of people are listening.

Yeah, I’m not just like, what, why does it matter?

No, it does.

Are you still into metal?

Funny enough, I was listening to a bunch

before I came over here.

Oh, like, do you use it for like motivation

or it gets you in a certain?

Yeah, I was weirdly listening

to 80s hair metal before I came.

Does that count as metal?

I think so, it’s like proto metal and it’s happy.

It’s optimistic, happy proto metal.

Yeah, I mean, all these genres bleed into each other.

But yeah, sorry, to answer your question

about guitar playing, my relationship with it

was kind of weird in that I was deeply uncreative.

My objective would be to hear some really hard

technical solo and then learn it, memorize it

and then play it perfectly.

But I was incapable of trying to write my own music.

Like the idea was just absolutely terrifying.

But I was also just thinking, I was like,

it’d be kind of cool to actually try starting a band again

and getting back into it and write.

But it’s scary.

It’s scary.

I mean, I put out some guitar playing

just other people’s covers.

I play Comfortably Numb on the internet.

And it’s scary too.

It’s scary putting stuff out there.

And I had this similar kind of fascination

with technical playing, both on piano and guitar.

You know, one of the first,

one of the reasons that I started learning guitar

is from Ozzy Osbourne, Mr. Crowley’s solo.

And one of the first solos I learned is that,

there’s a beauty to it.

There’s a lot of beauty to it.

It’s tapping, right?

Yeah, there’s some tapping, but it’s just really fast.

Beautiful, like arpeggios.

Yeah, arpeggios, yeah.

But there’s a melody that you can hear through it,

but there’s also build up.

It’s a beautiful solo,

but it’s also technically just visually the way it looks

when a person’s watching, you feel like a rockstar playing.

But it ultimately has to do with technical.

You’re not developing the part of your brain

that I think requires you to generate beautiful music.

It is ultimately technical in nature.

And so that took me a long time to let go of that

and just be able to write music myself.

And that’s a different journey, I think.

I think that journey is a little bit more inspired

in the blues world, for example,

where improvisation is more valued,

obviously in jazz and so on.

But I think ultimately it’s a more rewarding journey

because you get your relationship with the guitar

then becomes a kind of escape from the world

where you can create, I mean, creating stuff is.

And it’s something you work with,

because my relationship with my guitar was like,

it was something to tame and defeat.

Yeah, it’s a challenge.

Which was kind of what my whole personality was back then.

I was just very like, as I said, very competitive,

very just like must bend this thing to my will.

Whereas writing music, it’s like a dance, you work with it.

But I think because of the competitive aspect,

for me at least, that’s still there,

which creates anxiety about playing publicly

or all that kind of stuff.

I think there’s just like a harsh self criticism

within the whole thing.

It’s really tough.

I wanna hear some of your stuff.

I mean, there’s certain things that feel really personal.

And on top of that, as we talked about poker offline,

there’s certain things that you get to a certain height

in your life, and that doesn’t have to be very high,

but you get to a certain height

and then you put it aside for a bit.

And it’s hard to return to it

because you remember being good.

And it’s hard to, like you being at a very high level

in poker, it might be hard for you to return to poker

every once in a while and enjoy it,

knowing that you’re just not as sharp as you used to be

because you’re not doing it every single day.

That’s something I always wonder with,

I mean, even just like in chess with Kasparov,

some of these greats, just returning to it,

it’s almost painful.

And I feel that way with guitar too,

because I used to play every day a lot.

So returning to it is painful

because it’s like accepting the fact

that this whole ride is finite

and that you have a prime,

there’s a time when you were really good

and now it’s over and now.

We’re on a different chapter of life.

I was like, oh, but I miss that.

But you can still discover joy within that process.

It’s been tough, especially with some level of like,

as people get to know you, and people film stuff,

you don’t have the privacy of just sharing something

with a few people around you.

Yeah.

That’s a beautiful privacy.

That’s a good point.

With the internet, it’s just disappearing.

Yeah, that’s a really good point.

Yeah.

But all those pressures aside,

if you really, you can step up

and still enjoy the fuck out of a good musical performance.

What do you think is the meaning of this whole thing?

What’s the meaning of life?

Oh, wow.

It’s in your name, as we talked about.

You have to live up.

Do you feel the requirement

to have to live up to your name?

Because live?

Yeah.

No, because I don’t see it.

I mean, my, oh, again, it’s kind of like,

no, I don’t know.

Because my full name is Olivia.

Yeah.

So I can retreat in that and be like,

oh, Olivia, what does that even mean?

Live up to live.

No, I can’t say I do,

because I’ve never thought of it that way.

And then your name backwards is evil.

That’s what we also talked about.

I mean, I feel the urge to live up to that,

to be the inverse of evil or even better.

Because I don’t think, you know,

is the inverse of evil good

or is good something completely separate to that?

I think my intuition says it’s the latter,

but I don’t know.

Anyway, again, getting in the weeds.

What is the meaning of all this?

Of life.

Why are we here?

I think to,

explore, have fun and understand

and make more of here and to keep the game going.

Of here?

More of here?

More of this, whatever this is.

More of experience.

Just to have more of experience

and ideally positive experience.

And more complex, you know,

I guess, try and put it into a sort of

vaguely scientific term.

I don’t know.

But make it so that the program required,

the length of code required to describe the universe

is as long as possible.

And, you know, highly complex and therefore interesting.

Because again, like,

I know, you know, we bang the metaphor to death,

but like, tiled with X, you know,

tiled with paperclips,

doesn’t require that much code to describe.

Obviously, maybe something emerges from it,

but that steady state, assuming a steady state,

it’s not very interesting.

Whereas it seems like our universe is over time

becoming more and more complex and interesting.
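
As a rough way to see the "length of code required to describe it" contrast, compressed size can stand in for description length. This is an added sketch under that assumption, and being incompressible is of course not the same thing as being interesting, but the gap between a tiled pattern and varied structure still shows up.

```python
# Crude illustration: compressed size as a proxy for description length.
import os
import zlib

tiled = b"paperclip" * 10_000      # a universe tiled with one repeated thing
varied = os.urandom(len(tiled))    # incompressible bytes of the same length

print(len(tiled), len(zlib.compress(tiled)))    # ~90000 -> a few hundred bytes
print(len(varied), len(zlib.compress(varied)))  # ~90000 -> roughly 90000 bytes
```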

There’s so much richness and beauty and diversity

on this earth.

And I want that to continue and get more.

I want more diversity.

And the very best sense of that word

is to me the goal of all this.

Yeah.

And somehow have fun in the process.

Yes.

Because we do create a lot of fun things along the way,

in this creative force

and all the beautiful things we create,

somehow there’s like a funness to it.

And perhaps that has to do with the finiteness of life,

the finiteness of all these experiences,

which is what makes them kind of unique.

Like the fact that they end,

there’s this, whatever it is,

falling in love or creating a piece of art

or creating a bridge or creating a rocket

or creating a, I don’t know,

just the businesses that build something

or solve something.

The fact that it is born and it dies

somehow embeds it with fun, with joy

for the people involved.

I don’t know what that is.

The finiteness of it.

It can do.

Some people struggle with the,

I mean, a big thing I think that one has to learn

is being okay with things coming to an end.

And in terms of like projects and so on,

people cling onto things beyond

what they’re meant to be doing,

beyond what is reasonable.

And I’m gonna have to come to terms

with this podcast coming to an end.

I really enjoyed talking to you.

I think it’s obvious as we’ve talked about many times,

you should be doing a podcast.

You should, you’re already doing a lot of stuff publicly

to the world, which is awesome.

And you’re a great educator.

You’re a great mind.

You’re a great intellect.

But it’s also this whole medium of just talking

is also fun.

It is good.

It’s a fun one.

It really is good.

And it’s just, it’s nothing but like,

oh, it’s just so much fun.

And you can just get into so many,

yeah, there’s this space to just explore

and see what comes and emerges.

And yeah.

Yeah, to understand yourself better.

And if you’re talking to others,

to understand them better and together with them.

I mean, you should do your own podcast,

but you should also do a podcast with C

as we’ve talked about.

The two of you have such different minds

that like melt together in just hilarious ways,

fascinating ways, just the tension of ideas there

is really powerful.

But in general, I think you got a beautiful voice.

So thank you so much for talking today.

Thank you for being a friend.

Thank you for honoring me with this conversation

and with your valuable time.

Thanks, Liv.

Thank you.

Thanks for listening to this conversation with Liv Boeree.

To support this podcast,

please check out our sponsors in the description.

And now let me leave you with some words

from Richard Feynman.

I think it’s much more interesting to live not knowing

than to have answers, which might be wrong.

I have approximate answers and possible beliefs

and different degrees of uncertainty about different things,

but I’m not absolutely sure of anything.

And there are many things I don’t know anything about,

such as whether it means anything to ask why we’re here.

I don’t have to know the answer.

I don’t feel frightened not knowing things

by being lost in a mysterious universe without any purpose,

which is the way it really is as far as I can tell.

Thank you for listening and hope to see you next time.
