The following is a conversation with Bret Weinstein,
evolutionary biologist, author, cohost
of the Dark Horse podcast, and, as he says,
reluctant radical.
Even though we’ve never met or spoken before this,
we both felt like we’ve been friends for a long time.
I don’t agree with Bret on everything,
but I’m sure as hell happy he exists
in this weird and wonderful world of ours.
Quick mention of our sponsors,
the Jordan Harbinger Show, ExpressVPN, Magic Spoon,
and Four Sigmatic.
Check them out in the description to support this podcast.
As a side note, let me say a few words about COVID-19
and about science broadly.
I think science is beautiful and powerful.
It is the striving of the human mind
to understand and to solve the problems of the world.
But as an institution,
it is susceptible to the flaws of human nature,
to fear, to greed, power, and ego.
2020 is the story of all of these
that has both scientific triumph and tragedy.
We needed great leaders and we didn’t get them.
What we needed is leaders who communicate
in an honest, transparent, and authentic way
about the uncertainty of what we know
and the large scale scientific efforts
to reduce that uncertainty and to develop solutions.
I believe there are several candidates for solutions
that could have all saved hundreds of billions of dollars
and lessened or eliminated
the suffering of millions of people.
Let me mention five of the categories of solutions.
Masks, at home testing, anonymized contact tracing,
antiviral drugs, and vaccines.
Within each of these categories,
institutional leaders should have constantly asked
and answered publicly, honestly,
the following three questions.
One, what data do we have on the solution
and what studies are we running
to get more and better data?
Two, given the current data and uncertainty,
how effective and how safe is the solution?
Three, what is the timeline and cost involved
with mass manufacturing and distribution of the solution?
In the service of these questions,
no voices should have been silenced,
no ideas left off the table.
Open data, open science,
open, honest scientific communication and debate
was the way, not censorship.
There are a lot of ideas out there
that are bad, wrong, dangerous,
but the moment we have the hubris
to say we know which ideas those are
is the moment we’ll lose our ability to find the truth,
to find solutions,
the very things that make science beautiful and powerful
in the face of all the dangers that threaten the wellbeing
and the existence of humans on Earth.
This conversation with Bret
is less about the ideas we talk about.
We agree on some, disagree on others.
It is much more about the very freedom to talk,
to think, to share ideas.
This freedom is our only hope.
Bret should never have been censored.
I asked Bret to do this podcast to show solidarity
and to show that I have hope for science and for humanity.
This is the Lex Fridman Podcast
and here’s my conversation with Bret Weinstein.
What to you is beautiful about the study of biology,
the science, the engineering, the philosophy of it?
It’s a very interesting question.
I must say at one level, it’s not a conscious thing.
I can say a lot about why as an adult
I find biology compelling,
but as a kid I was completely fascinated with animals.
I loved to watch them and think about why they did
what they did and that developed into a very conscious
passion as an adult.
But I think in the same way that one is drawn to a person,
I was drawn to the never ending series of near miracles
that exists across biological nature.
When you see a living organism,
do you see it from an evolutionary biology perspective
of like this entire thing that moves around
in this world or do you see from an engineering perspective
that first principles almost down to the physics,
like the little components that build up hierarchies
that you have cells, the first proteins and cells
and organs and all that kind of stuff.
So do you see low level or do you see high level?
Well, the human mind is a strange thing
and I think it’s probably a bit like a time sharing machine
in which I have different modules.
We don’t know enough about biology for them to connect.
So they exist in isolation and I’m always aware
that they do connect, but I basically have to step
into a module in order to see the evolutionary dynamics
of the creature and the lineage that it belongs to.
I have to step into a different module to think
of that lineage over a very long time scale,
a different module still to understand
what the mechanisms inside would have to look like
to account for what we can see from the outside.
And I think that probably sounds really complicated,
but one of the things about being involved
in a topic like biology, and doing so for really
not even just my adult life but my whole life,
is that it becomes second nature.
And when we see somebody do an amazing parkour routine
or something like that, we think about what they must
be doing in order to accomplish that.
But of course, what they are doing is tapping
into some kind of zone, right?
They are in a zone in which they are in such command
of their center of gravity, for example,
that they know how to hurl it around a landscape
so that they always land on their feet.
And I would just say for anyone who hasn’t found a topic
on which they can develop that kind of facility,
it is absolutely worthwhile.
It’s really something that human beings are capable
of doing across a wide range of topics,
many things our ancestors didn’t even have access to.
And that flexibility of humans,
that ability to repurpose our machinery
for topics that are novel means really,
the world is your oyster.
You can figure out what your passion is
and then figure out all of the angles
that one would have to pursue to really deeply understand it.
And it is well worth having at least one topic like that.
You mean embracing the full adaptability
of both the body and the mind.
So like, I don’t know what to attribute the parkour to,
like biomechanics of how our bodies can move,
or is it the mind?
Like how much percent wise,
is it the entirety of the hierarchies of biology
that we’ve been talking about,
or is it just all the mind?
The way to think about creatures
is that every creature is two things simultaneously.
A creature is a machine of sorts, right?
It’s not a machine in the technological sense.
I call it an aqueous machine, right?
And it’s run by an aqueous computer, right?
So it’s not identical to our technological machines.
But every creature is both a machine
that does things in the world
sufficient to accumulate enough resources
to continue surviving, to reproduce.
It is also a potential.
So each creature is potentially, for example,
the most recent common ancestor
of some future clade of creatures
that will look very different from it.
And if a creature is very, very good at being a creature,
but not very good in terms of the potential
it has going forward,
then that lineage will not last very long into the future
because change will throw challenges at it
that its descendants will not be able to meet.
So the thing about humans is we are a generalist platform,
and we have the ability to swap out our software
to exist in many, many different niches.
And I was once watching an interview
with this British group of parkour experts
who were being, they were discussing what it is they do
and how it works.
And what they essentially said is,
look, you’re tapping into deep monkey stuff, right?
And I thought, yeah, that’s about right.
And anybody who is proficient at something
like skiing or skateboarding, you know,
has the experience of flying down the hill
on skis, for example,
bouncing from the top of one mogul to the next.
And if you really pay attention,
you will discover that your conscious mind
is actually a spectator.
It’s there, it’s involved in the experience,
but it’s not driving.
Some part of you knows how to ski,
and it’s not the part of you that knows how to think.
And I would just say that what accounts
for this flexibility in humans
is the ability to bootstrap a new software program
and then drive it into the unconscious layer
where it can be applied very rapidly.
And, you know, I will be shocked
if the exact thing doesn’t exist in robotics.
You know, if you programmed a robot
to deal with circumstances that were novel to it,
how would you do it?
It would have to look something like this.
There’s a certain kind of magic, you’re right,
with the consciousness being an observer.
When you play guitar, for example, or piano for me,
music, when you get truly lost in it,
I don’t know what the heck is responsible
for the flow of the music,
the kind of the loudness of the music going up and down,
the timing, the intricate, like even the mistakes,
all those things,
that doesn’t seem to be the conscious mind.
It is just observing,
and yet it’s somehow intricately involved.
And because you mentioned parkour,
dance is like that too.
When you’re doing tango dancing,
when you truly lose yourself in it,
then it’s just like you’re an observer,
and how the hell is the body able to do that?
And not only that, the physical motion
is also creating the emotion,
the, like, the "damn, it’s good to be alive" feeling.
So, but then that’s also intricately connected
to the full biology stack that we’re operating in.
I don’t know how difficult it is to replicate that.
We’re talking offline about Boston Dynamics robots.
They’ve recently done parkour,
they did flips, they’ve also done some dancing,
and it’s something I think a lot about
because what most people don’t realize
because they don’t look deep enough
is those robots are hard coded to do those things.
The robots didn’t figure it out by themselves,
and yet the fundamental aspect of what it means to be human
is that process of figuring out, of making mistakes,
and then there’s something about overcoming
those challenges and the mistakes
and, like, figuring out how to lose yourself
in the magic of the dancing or just movement
is what it means to be human.
That learning process, so that’s what I want to do
with the, almost as a fun side thing
with the Boston Dynamics robots,
is to have them learn and see what they figure out,
even if they make mistakes.
I want to let Spot make mistakes
and in so doing discover what it means to be alive,
discover beauty, because I think
that’s the essential aspect of mistakes.
Boston Dynamics folks want Spot to be perfect
because they don’t want Spot to ever make mistakes,
because they want it to operate in factories,
they want it to be very safe and so on.
For me, if you construct the environment,
if you construct a safe space for robots
and allow them to make mistakes,
something beautiful might be discovered,
but that requires a lot of brain power.
So Spot is currently very dumb
and I’m gonna give it a brain.
So first make it see, currently it can’t see,
meaning computer vision, it has to understand
its environment, it has to see all the humans,
but then also has to be able to learn,
learn about its movement, learn how to use its body
to communicate with others, all those kinds of things
that dogs know how to do well,
humans know how to do somewhat well.
I think that’s a beautiful challenge,
but first you have to allow the robot to make mistakes.
Well, I think your objective is laudable,
but you’re gonna realize
that the Boston Dynamics folks are right
the first time Spot poops on your rug.
I hear the same thing about kids and so on.
I still wanna have kids.
No, you should, it’s a great experience.
So let me step back into what you said
in a couple of different places.
One, I have always believed that the missing element
in robotics and artificial intelligence
is a proper development, right?
It is no accident, it is no mere coincidence
that human beings are the most dominant species
on planet Earth and that we have the longest childhoods
of any creature on Earth by far, right?
The development is the key to the flexibility.
And so the capability of a human at adulthood
is the mirror image, it’s the flip side
of our helplessness at birth.
So I’ll be very interested to see what happens
in your robot project if you do not end up
reinventing childhood for robots,
which of course is foreshadowed in 2001: A Space Odyssey quite brilliantly.
But I also wanna point out,
you can see this issue of your conscious mind
becoming a spectator very well
if you compare tennis to table tennis, right?
If you watch a tennis game, you could imagine
that the players are highly conscious as they play.
You cannot imagine that
if you’ve ever played ping pong decently.
A volley in ping pong is so fast
that your conscious mind, if your reactions
had to go through your conscious mind,
you wouldn’t be able to play.
So you can detect that your conscious mind,
while very much present, isn’t driving.
And you can also detect where consciousness
does usefully intrude.
If you go up against an opponent in table tennis
that knows a trick that you don’t know how to respond to,
you will suddenly detect that something
about your game is not effective,
and you will start thinking about what might be,
how do you position yourself so that move
that puts the ball just in that corner of the table
or something like that doesn’t catch you off guard.
And this, I believe, is where we highly conscious folks,
those of us who try to think through things
very deliberately and carefully,
mistake consciousness for the highest kind of thinking.
And I really think that this is an error.
Consciousness is an intermediate level of thinking.
What it does is it allows you,
it’s basically like uncompiled code.
And it doesn’t run very fast.
It is capable of being adapted to new circumstances.
But once the code is roughed in,
it gets driven into the unconscious layer,
and you become highly effective at whatever it is.
And from that point, your conscious mind
basically remains there to detect things
that aren’t anticipated by the code you’ve already written.
And so I don’t exactly know how one would establish this,
how one would demonstrate it.
But it must be the case that the human mind
contains sandboxes in which things are tested, right?
Maybe you can build a piece of code
and run it in parallel next to your active code
so you can see how it would have done comparatively.
But there’s gotta be some way of writing new code
and then swapping it in.
And frankly, I think this has a lot to do
with things like sleep cycles.
Very often, when I get good at something,
I often don’t get better at it while I’m doing it.
I get better at it when I’m not doing it,
especially if there’s time to sleep and think on it.
So there’s some sort of new program
swapping in for old program phenomenon,
which will be a lot easier to see in machines.
It’s gonna be hard with the wetware.
I like, I mean, it is true,
because somebody that played,
I played tennis for many years,
I do still think the highest form of excellence in tennis
is when the conscious mind is a spectator.
So the compiled code is the highest form of being human.
And then consciousness is just some specific compiler.
You used to have, like, the Borland C++ compiler.
You could just have different kinds of compilers.
Ultimately, the thing that by which we measure
the power of life, the intelligence of life
is the compiled code.
And you can probably do that compilation all kinds of ways.
Yeah, I’m not saying that tennis is played consciously
and table tennis isn’t.
I’m saying that because tennis is slowed down
by just the space on the court,
you could imagine that it was your conscious mind playing.
But when you shrink the court down,
it becomes obvious that your conscious mind
is just present rather than knowing where to put the paddle.
And weirdly for me,
I would say this probably isn’t true
in a podcast situation.
But if I have to give a presentation,
especially if I have not overly prepared,
I often find the same phenomenon
when I’m giving the presentation.
My conscious mind is there watching
some other part of me present,
which is a little jarring, I have to say.
Well, that means you’ve gotten good at it,
not letting the conscious mind get in the way
of the flow of words.
Yeah, that’s the sensation to be sure.
And that’s the highest form of podcasting too.
I mean, that’s what it looks like
when a podcast is really in the pocket,
like Joe Rogan, just having fun
and just losing themselves.
And that’s something I aspire to as well,
just losing yourself in conversation.
As somebody that has a lot of anxiety with people,
like, I’m such an introvert.
I’m scared.
I was scared before you showed up.
I’m scared right now.
There’s just anxiety.
There’s just, it’s a giant mess.
It’s hard to lose yourself.
It’s hard to just get out of the way of your own mind.
Yeah, actually, trust is a big component of that.
Your conscious mind retains control
if you are very uncertain.
But when you do get into that zone when you’re speaking,
I realize it’s different for you
with English as a second language,
although maybe you present in Russian and it happens.
But do you ever hear yourself say something
and you think, oh, that’s really good, right?
Like you didn’t come up with it,
some other part of you that you don’t exactly know
came up with it?
I don’t think I’ve ever heard myself in that way
because I have a much louder voice
that’s constantly yelling in my head,
why the hell did you say that?
There’s a very self critical voice that’s much louder.
So I’m very, maybe I need to deal with that voice,
but it’s been like, what is it called?
Like a megaphone just screaming
so I can’t hear the other voice that says,
good job, you said that thing really nicely.
So I’m kind of focused right now on the megaphone person
in the audience versus the positive,
but that’s definitely something to think about.
It’s been productive, but the place where I find gratitude
and beauty and appreciation of life is in the quiet moments
when I don’t talk, when I listen to the world around me,
when I listen to others, when I talk,
I’m extremely self critical in my mind.
When I produce anything out into the world
that originated with me,
like any kind of creation, extremely self critical.
It’s good for productivity,
for always striving to improve and so on.
It might be bad for just appreciating
the things you’ve created.
I’m a little bit with Marvin Minsky on this
where he says the key to a productive life
is to hate everything you’ve ever done in the past.
I didn’t know he said that.
I must say, I resonate with it a bit.
And unfortunately, my life currently has me putting
a lot of stuff into the world,
and I effectively watch almost none of it.
I can’t stand it.
Yeah, what do you make of that?
I don’t know.
I just yesterday read The Metamorphosis by Kafka,
where he turns into a giant bug
because of the stress that the world puts on him,
that his parents put on him to succeed.
And I think that you have to find the balance
because if you allow the self critical voice
to become too heavy, the burden of the world,
the pressure that the world puts on you
to be the best version of yourself and so on to strive,
then you become a bug and that’s a big problem.
And then the world turns against you because you’re a bug.
You become some kind of caricature of yourself.
I don’t know, you become the worst version of yourself
and then thereby end up destroying yourself
and then the world moves on.
That’s the story.
That’s a lovely story.
I do think this is one of these places,
and frankly, you could map this onto
all of modern human experience,
but this is one of these places
where our ancestral programming
does not serve our modern selves.
So I used to talk to students
about the question of dwelling on things.
Dwelling on things is famously understood to be bad,
but it can’t possibly be simply bad.
The tendency toward it wouldn’t exist if it was.
So what is bad is dwelling on things
past the point of utility.
And that’s obviously easier to say than to operationalize,
but if you realize that your dwelling is the key, in fact,
to upgrading your program for future well being
and that there’s a point, presumably,
of diminishing returns, if not counterproductivity,
there is a point at which you should stop
because that is what is in your best interest,
then knowing that you’re looking for that point is useful.
This is the point at which it is no longer useful
for me to dwell on this error I have made.
That’s what you’re looking for.
And it also gives you license, right?
If some part of you feels like it’s punishing you
rather than searching, then that also has a point
at which it’s no longer valuable
and there’s some liberty in realizing,
yep, even the part of me that was punishing me
knows it’s time to stop.
So if we map that onto compiled code discussion,
as a computer science person, I find that very compelling.
You know, when you compile code, you get warnings sometimes.
And usually, if you’re a good software engineer,
you’re going to, you know,
treat warnings as errors.
So you make sure that the compilation produces no warnings.
But at a certain point, when you have a large enough system,
you just let the warnings go.
It’s fine.
Like, I don’t know where that warning came from,
but, you know, just ultimately you need to compile the code
and run with it and hope nothing terrible happens.
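A minimal sketch of what treating warnings as errors looks like in practice, assuming a common compiler like g++ or clang++; the file name and the specific warning here are just illustrative, not anything from the conversation:

```cpp
// warn_demo.cpp: hypothetical example file, purely illustrative.
//
// Compiled normally, the unused variable below only produces a warning:
//   g++ -Wall warn_demo.cpp           (warning: unused variable 'x')
// With warnings promoted to errors, the same build fails outright:
//   g++ -Wall -Werror warn_demo.cpp   (error: unused variable 'x')
#include <iostream>

int main() {
    int x = 42;  // never used, so -Wall emits -Wunused-variable here
    std::cout << "hello" << std::endl;
    return 0;
}
```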
Well, I think what you will find, and believe me,
I think what you’re talking about
with respect to robots and learning
is gonna end up having to go to a deep developmental state
and a helplessness that evolves into hyper competence
and all of that.
But I’ve noticed that I live by something
that I, for lack of a better descriptor,
call the theory of close calls.
And the theory of close calls says that people
typically miscategorize the events in their life
where something almost went wrong.
And, you know, for example,
I was walking down the street
with my college friends and one of my friends
stepped into the street thinking it was clear
and was nearly hit by a car going 45 miles an hour.
It would have been an absolute disaster, might have killed her,
certainly would have permanently injured her.
But it didn’t, you know, the car didn’t touch her, right?
Now you could walk away from that and think nothing of it
because, well, what is there to think?
Nothing happened.
Or you could think, well, what is the difference
between what did happen and my death?
The difference is luck.
I never want that to be true, right?
I never want the difference between what did happen
and my death to be luck.
Therefore, I should count this as very close to death
and I should prioritize coding
so it doesn’t happen again at a very high level.
So anyway, my basic point is
the accidents and disasters and misfortune
describe a distribution that tells you
what’s really likely to get you in the end.
And so personally, you can use them to figure out
where the dangers are so that you can afford
to take great risks because you have a really good sense
of how they’re gonna go wrong.
But I would also point out civilization has this problem.
Civilization is now producing these events
that are major disasters,
but they’re not existential scale yet, right?
They’re very serious errors that we can see.
And I would argue that the pattern is
you discover that we are involved in some industrial process
at the point it has gone wrong, right?
So I’m now always asking the question,
okay, in light of the Fukushima triple meltdown,
the financial collapse of 2008,
the Deepwater Horizon blowout, COVID-19,
and its probable origins in the Wuhan lab,
what processes do I not know the name of yet
that I will discover at the point
that some gigantic accident has happened?
And can we talk about the wisdom or lack thereof
of engaging in that process before the accident, right?
That’s what a wise civilization would be doing.
And yet we don’t.
I just wanna mention something that happened
a couple of days ago.
I don’t know if you know who JB Straubel is.
He’s the co founder of Tesla,
CTO of Tesla for many, many years.
His wife just died.
She was riding a bicycle.
And it’s the same thin line between death and life
that many of us have been on,
where you walk into the intersection
and there’s this close call.
Every once in a while, you get the short straw.
I wonder how much of our own individual lives
and the entirety of the human civilization
rests on this little roll of the dice.
Well, this is sort of my point about the close calls
is that there’s a level at which we can’t control it, right?
The gigantic asteroid that comes from deep space
that you don’t have time to do anything about.
There’s not a lot we can do to hedge that out,
or at least not short term.
But there are lots of other things.
Obviously, the financial collapse of 2008
didn’t break down the entire world economy.
It threatened to, but a Herculean effort
managed to pull us back from the brink.
The triple meltdown at Fukushima was awful,
but every one of the seven fuel pools held,
there wasn’t a major fire that made it impossible
to manage the disaster going forward.
We got lucky.
We could say the same thing about the blowout
at the Deepwater Horizon,
where a hole in the ocean floor large enough
that we couldn’t have plugged it, could have opened up.
All of these things could have been much, much worse, right?
And I think we can say the same thing about COVID,
as terrible as it is.
And we cannot say for sure that it came from the Wuhan lab,
but there’s a strong likelihood that it did.
And it also could be much, much worse.
So in each of these cases, something is telling us,
we have a process that is unfolding
that keeps creating risks where it is luck
that is the difference between us
and some scale of disaster that is unimaginable.
And that wisdom, you can be highly intelligent
and cause these disasters.
To be wise is to stop causing them, right?
And that would require a process of restraint,
a process that I don’t see a lot of evidence of yet.
So I think we have to generate it.
And somehow, at the moment,
we don’t have a political structure
that would be capable of taking
a protective algorithm and actually deploying it, right?
Because it would have important economic consequences.
And so it would almost certainly be shot down.
But we can obviously also say,
we paid a huge price for all of the disasters
that I’ve mentioned.
And we have to factor that into the equation.
Something can be very productive short term
and very destructive long term.
Also, the question is how many disasters we avoided
because of the ingenuity of humans
or just the integrity and character of humans.
That’s sort of an open question.
We may be more intelligent than lucky.
That’s the hope.
Because the optimistic message here that you’re getting at
is that maybe we can overcome luck with ingenuity.
Meaning, I guess you’re suggesting the processes
we should be listing all the ways
that human civilization can destroy itself,
assigning likelihood to it,
and thinking through how can we avoid that.
And being very honest with the data out there
about the close calls and using those close calls
to then create sort of a mechanism
by which we minimize the probability of those close calls.
And just being honest and transparent
with the data that’s out there.
Well, I think we need to do a couple things for it to work.
So I’ve been an advocate for the idea
that sustainability is actually,
it’s difficult to operationalize,
but it is an objective that we have to meet
if we’re to be around long term.
And I realized that we also need to have reversibility
of all of our processes.
Because processes very frequently when they start
do not appear dangerous.
And then when they scale, they become very dangerous.
So for example, if you imagine
the first internal combustion engine vehicle
driving down the street,
and you imagine somebody running after them saying,
hey, if you do enough of that,
you’re gonna alter the atmosphere
and it’s gonna change the temperature of the planet.
It’s preposterous, right?
Why would you stop the person
who’s invented this marvelous new contraption?
But of course, eventually you do get to the place
where you’re doing enough of this
that you do start changing the temperature of the planet.
So if we built the capacity,
if we basically said, look, you can’t involve yourself
in any process that you couldn’t reverse if you had to,
then progress would be slowed,
but our safety would go up dramatically.
And I think in some sense, if we are to be around long term,
we have to begin thinking that way.
We’re just involved in too many very dangerous processes.
So let’s talk about one of the things
that, if it did not threaten human civilization,
certainly hurt it at a deep level, which is COVID-19.
What percent probability would you currently place
on the hypothesis that COVID-19 leaked
from the Wuhan Institute of Virology?
So I maintain a flow chart of all the possible explanations,
and it doesn’t break down exactly that way.
The likelihood that it emerged from a lab is very, very high.
If it emerged from a lab,
the likelihood that the lab was the Wuhan Institute
is very, very high.
There are multiple different kinds of evidence
that point to the lab,
and there is literally no evidence that points to nature.
Either the evidence points nowhere or it points to the lab,
and the lab could mean any lab,
but geographically, obviously,
the labs in Wuhan are the most likely,
and the lab that was most directly involved
with research on viruses that look like COVID,
that look like SARS-CoV-2,
is obviously the place that one would start.
But I would say the likelihood that this virus
came from a lab is well above 95%.
We can talk about the question of could a virus
have been brought into the lab and escaped from there
without being modified.
That’s also possible,
but it doesn’t explain any of the anomalies
in the genome of SARS-CoV-2.
Could it have been delivered from another lab?
Could Wuhan be a distraction
in order that we would connect the dots in the wrong way?
That’s conceivable.
I currently have that below 1% on my flowchart,
but I think…
A very dark thought that somebody would do that
almost as a political attack on China.
Well, it depends.
I don’t even think that’s the only possibility.
Sometimes when Eric and I talk about these issues,
we will generate a scenario just to prove
that something could live in that space, right?
It’s a placeholder for whatever may actually have happened.
And so it doesn’t have to have been an attack on China.
That’s certainly one possibility.
But I would point out,
if you can predict the future in some unusual way
better than others, you can print money, right?
That’s what markets that allow you to bet for
or against virtually any sector allow you to do.
So you can imagine a simply amoral person
or entity generating a pandemic,
attempting to cover their tracks
because it would allow them to bet against things
like cruise ships, air travel, whatever it is,
and bet in favor of, I don’t know,
sanitizing gel and whatever else you would do.
So am I saying that I think somebody did that?
No, I really don’t think it happened.
We’ve seen zero evidence
that this was intentionally released.
However, were it to have been intentionally released
by somebody who did not know,
did not want it known where it had come from,
releasing it into Wuhan would be one way
to cover their tracks.
So we have to leave the possibility formally open,
but acknowledge there’s no evidence.
And the probability therefore is low.
I tend to believe maybe this is the optimistic nature
that I have that people who are competent enough
to do the kind of thing we just described
are not going to do that
because it requires a certain kind of,
I don’t wanna use the word evil,
but whatever word you wanna use to describe
the kind of disregard for human life required to do that,
that’s just not going to be coupled with competence.
I feel like there’s a trade off chart
where competence on one axis and evil is on the other.
And the more evil you become,
the crappier you are at doing great engineering,
scientific work required to deliver weapons
of different kinds, whether it’s bioweapons
or nuclear weapons, all those kinds of things.
That seems to be the lessons I take from history,
but that doesn’t necessarily mean
that’s what’s going to be happening in the future.
But to stick on the lab leak idea,
because the flow chart is probably huge here
because there’s a lot of fascinating possibilities.
One question I wanna ask is,
what would evidence for natural origins look like?
So one piece of evidence for natural origins
is that it’s happened in the past
that viruses have jumped.
Oh, they do jump.
So like that’s possible to have happened.
So that’s a sort of like a historical evidence,
like, okay, well, it’s possible that it have…
It’s not evidence of the kind you think it is.
It’s a justification for a presumption, right?
So the presumption upon discovering
a new virus circulating is certainly
that it came from nature, right?
The problem is the presumption evaporates
in the face of evidence, or at least it logically should.
And it didn’t in this case.
It was maintained by people who privately
in their emails acknowledged that they had grave doubts
about the natural origin of this virus.
Is there some other piece of evidence
that we could look for and see that would say,
this increases the probability that it’s natural origins?
Yeah, in fact, there is evidence.
I always worry that somebody is going to make up
some evidence in order to reverse the flow.
Oh, boy.
Well, let’s say I am…
There’s a lot of incentive for that actually.
There’s a huge amount of incentive.
On the other hand, why didn’t the powers that be,
the powers that lied to us about weapons
of mass destruction in Iraq,
why didn’t they ever fake weapons
of mass destruction in Iraq?
Whatever force it is, I hope that force is here too.
And so whatever evidence we find is real.
It’s the competence thing I’m talking about,
but okay, go ahead, sorry.
Well, we can get back to that.
But I would say, yeah, the giant piece of evidence
that will shift the probabilities in the other direction
is the discovery of either a human population
in which the virus circulated prior to showing up in Wuhan
that would explain where the virus learned all of the tricks
that it knew instantly upon spreading from Wuhan.
So that would do it, or an animal population
in which an ancestor epidemic can be found
in which the virus learned this before jumping to humans.
But I point out in that second case,
you would certainly expect to see a great deal of evolution
in the early epidemic, which we don’t see.
So there almost has to be a human population
somewhere else that had the virus circulate
or an ancestor of the virus that we first saw
in Wuhan circulating.
And it has to have gotten very sophisticated
in that prior epidemic before hitting Wuhan
in order to explain the total lack of evolution
and extremely effective virus that emerged
at the end of 2019.
So you don’t believe in the magic of evolution
to spring up with all the tricks already there?
Like everybody who doesn’t have the tricks,
they die quickly.
And then you just have this beautiful virus
that comes in with a spike protein
and through mutation and selection,
just like the ones that succeed and succeed big
are the ones that are going to just spring into life
with the tricks.
Well, no, that’s called a hopeful monster.
And hopeful monsters don’t work.
The job of becoming a new pandemic virus is too difficult.
It involves two very difficult steps
and they both have to work.
One is the ability to infect a person and spread
in their tissues sufficient to make an infection.
And the other is to jump between individuals
at a sufficient rate that it doesn’t go extinct
for one reason or another.
Those are both very difficult jobs.
They require, as you describe, selection.
And the point is selection would leave a mark.
We would see evidence that it would stay.
In animals or humans, we would see.
Both, right?
And we see this evolutionary trace of the virus
gathering the tricks up.
Yeah, you would see the virus,
you would see the clumsy virus get better and better.
And yes, I am a full believer in the power of that process.
In fact, I believe it.
What I know from studying the process
is that it is much more powerful than most people imagine.
That what we teach in the Evolution 101 textbook
is too clumsy a process to do what we see it doing
and that actually people should increase their expectation
of the rapidity with which that process can produce
just jaw dropping adaptations.
That said, we just don’t see evidence that it happened here
which doesn’t mean it doesn’t exist,
but it means in spite of immense pressure
to find it somewhere, there’s been no hint
which probably means it took place inside of a laboratory.
So inside the laboratory,
gain of function research on viruses.
And I believe most of that kind of research
is doing this exact thing that you’re referring to
which is accelerated evolution
and just watching evolution do its thing
and a bunch of viruses
and seeing what kind of tricks get developed.
The other method is engineering viruses.
So manually adding on the tricks.
Which do you think we should be thinking about here?
So mind you, I learned what I know
in the aftermath of this pandemic emerging.
I started studying the question and I would say
based on the content of the genome and other evidence
in publications from the various labs
that were involved in generating this technology,
a couple of things seem likely.
This SARS-CoV-2 does not appear to be entirely the result
of either a splicing process or serial passaging.
It appears to have both things in its past
or it’s at least highly likely that it does.
So for example, the furin cleavage site
looks very much like it was added in to the virus
and it was known that that would increase its infectivity
in humans and increase its tropism.
The virus appears to be excellent
at spreading in humans and minks and ferrets.
Now minks and ferrets are very closely related to each other
and ferrets are very likely to have been used
in a serial passage experiment.
The reason being that they have an ACE2 receptor
that looks very much like the human ACE2 receptor.
And so were you going to passage the virus
or its ancestor through an animal
in order to increase its infectivity in humans,
which would have been necessary,
ferrets would have been very likely.
It is also quite likely
that humanized mice were utilized
and it is possible that human airway tissue was utilized.
I think it is vital that we find out
what the protocols were.
If this came from the Wuhan Institute,
we need to know it
and we need to know what the protocols were exactly
because they will actually give us some tools
that would be useful in fighting SARS-CoV-2
and hopefully driving it to extinction,
which ought to be our priority.
It is a priority that is not apparent from our behavior,
but it really should be our objective.
If we understood where our interests lie,
we would be much more focused on it.
But those protocols would tell us a great deal.
If it wasn’t the Wuhan Institute, we need to know that.
If it was nature, we need to know that.
And if it was some other laboratory,
we need to figure out what and where
so that we can determine what we can determine
about what was done.
You’re opening up my mind about why we should investigate,
why we should know the truth of the origins of this virus.
So for me personally,
let me just tell the story of my own kind of journey.
When I first started looking into the lab leak hypothesis,
what became terrifying to me
and important to understand and obvious
is the sort of like Sam Harris way of thinking,
which is it’s obvious that a lab leak of a deadly virus
will eventually happen.
My mind was, it doesn’t even matter
if it happened in this case.
It’s obvious that it’s going to happen in the future.
So why the hell are we not freaking out about this?
And COVID-19 is not even that deadly
relative to the possible future viruses.
It’s the way I disagree with Sam on this,
but he thinks this way about AGI,
about artificial general intelligence, as well.
It’s a different discussion, I think,
but with viruses, it seems like something that could happen
on the scale of years, maybe a few decades.
AGI is a little bit farther out for me,
but it seemed, the terrifying thing,
it seemed obvious that this will happen very soon
for a much deadlier virus as we get better and better
at both engineering viruses
and doing this kind of evolutionary driven research,
gain of function research.
Okay, but then you started speaking out about this as well,
but also started to say, no, no, no,
we should hurry up and figure out the origins now
because it will help us figure out
how to actually respond to this particular virus,
how to treat this particular virus.
What is in terms of vaccines, in terms of antiviral drugs,
in terms of just all the number of responses
that we should have.
Okay, I still am much more freaking out about the future.
Maybe you can break that apart a little bit.
Which are you most focused on now?
Which are you most freaking out about now
in terms of the importance of figuring out
the origins of this virus?
I am most freaking out about both of them
because they’re both really important
and we can put bounds on this.
Let me say first that this is a perfect test case
for the theory of close calls
because as much as COVID is a disaster,
it is also a close call from which we can learn much.
You are absolutely right.
If we keep playing this game in the lab,
especially if we do it under pressure
and when we are told that a virus
is going to leap from nature any day
and that the more we know,
the better we’ll be able to fight it,
we’re gonna create the disaster all the sooner.
So yes, that should be an absolute focus.
The fact that there were people saying
that this was dangerous back in 2015
ought to tell us something.
The fact that the system bypassed a ban
and offshored the work to China
ought to tell us this is not a Chinese failure.
This is a failure of something larger and harder to see.
But I also think that there’s a clock ticking
with respect to SARS-CoV-2 and COVID,
the disease that it creates.
And that has to do with whether or not
we are stuck with it permanently.
So if you think about the cost to humanity
of being stuck with influenza,
it’s an immense cost year after year.
And we just stop thinking about it because it’s there.
Some years you get the flu, most years you don’t.
Maybe you get the vaccine to prevent it.
Maybe the vaccine isn’t particularly well targeted.
But imagine just simply doubling that cost.
Imagine we get stuck with SARS-CoV-2
and its descendants going forward
and that it just settles in
and becomes a fact of modern human life.
That would be a disaster, right?
The number of people we will ultimately lose
is incalculable.
The amount of suffering that will be caused is incalculable.
The loss of wellbeing and wealth, incalculable.
So that ought to be a very high priority,
driving this extinct before it becomes permanent.
And the ability to drive extinct goes down
the longer we delay effective responses.
To the extent that we let it have this very large canvas,
large numbers of people who have the disease
in which mutation and selection can result in adaptations
that we will not be able to counter,
the greater its ability to figure out features
of our immune system and use them to its advantage.
So I’m feeling the pressure of driving it extinct.
I believe we could have driven it extinct six months ago
and we didn’t do it because of very mundane concerns
among a small number of people.
And I’m not alleging that they were brazen about it
or that they were callous about the deaths that would be caused.
I have the sense that they were working
from a kind of autopilot in which you,
let’s say you’re in some kind of a corporation,
a pharmaceutical corporation,
you have a portfolio of therapies
that in the context of a pandemic might be very lucrative.
Those therapies have competitors.
You of course wanna position your product
so that it succeeds and the competitors don’t.
And lo and behold, at some point through means
that I think those of us on the outside
can’t really intuit, you end up saying things
about competing therapies that work better
and much more safely than the ones you’re selling
that aren’t true and do cause people to die
in large numbers.
But it’s some kind of autopilot, at least part of it is.
So there’s a complicated coupling of the autopilot
of institutions, companies, governments.
And then there’s also the geopolitical game theory thing
going on where you wanna keep secrets.
It’s the Chernobyl thing where if you messed up,
there’s a big incentive, I think,
to hide the fact that you messed up.
So how do we fix this?
And what’s more important to fix?
The autopilot, which is the response
that we often criticize about our institutions,
especially the leaders in those institutions,
Anthony Fauci and so on,
some of the members of the scientific community.
And the second part is the game with China
of hiding the information
in terms of on the fight between nations.
Well, in our live streams on Dark Horse,
Heather and I have been talking from the beginning
about the fact that although, yes,
what happened began in China,
it very much looks like a failure
of the international scientific community.
That’s frightening, but it’s also hopeful
in the sense that actually if we did the right thing now,
we’re not navigating a puzzle about Chinese responsibility.
We’re navigating a question of collective responsibility
for something that has been terribly costly to all of us.
So that’s not a very happy process.
But as you point out, what’s at stake
is in large measure at the very least
the strong possibility this will happen again
and that at some point it will be far worse.
So just as a person that does not learn the lessons
of their own errors doesn’t get smarter
and they remain in danger,
we collectively, humanity has to say,
well, there sure is a lot of evidence
that suggests that this is a self-inflicted wound.
When you have done something
that has caused a massive self-inflicted wound,
it makes sense to dwell on it
exactly to the point that you have learned the lesson
that makes it very, very unlikely
that something similar will happen again.
I think this is a good place to kind of ask you
to do almost like a thought experiment
or to steel man the argument against the lab leak hypothesis.
So if you were to argue, you said 95% chance
that the virus leaked from a lab.
There’s a bunch of ways I think you can argue
that even talking about it is bad for the world.
So if I just put something on the table,
it’s to say that, for one,
it would be racism against Chinese people,
that talking about it leaking from a lab,
there’s a kind of immediate blame,
and it can spiral down into this idea
that somehow the people are responsible for the virus
and this kind of thing.
Is it possible for you to come up
with other steel man arguments against talking
or against the possibility of the lab leak hypothesis?
Well, so I think steel manning is a tool
that is extremely valuable,
but it’s also possible to abuse it.
I think that you can only steel man a good faith argument.
And the problem is we now know
that we have not been engaged with opponents
who were wielding good faith arguments
because privately their emails reflect their own doubts.
And what they were doing publicly was actually a punishment,
a public punishment for those of us who spoke up
with I think the purpose of either backing us down
or more likely warning others
not to engage in the same kind of behavior.
And obviously for people like you and me
who regard science as our likely best hope
for navigating difficult waters,
shutting down people who are using those tools honorably
is itself dishonorable.
So I don’t feel that there’s anything to steel man.
And I also think that immediately at the point
that the world suddenly with no new evidence on the table
switched gears with respect to the lab leak,
at the point that Nicholas Wade had published his article
and suddenly the world was going to admit
that this was at least a possibility, if not a likelihood,
we got to see something of the rationalization process
that had taken place inside the institutional world.
And it very definitely involved the claim
that what was being avoided was the targeting
of Chinese scientists.
And my point would be,
I don’t wanna see the targeting of anyone.
I don’t want to see racism of any kind.
On the other hand, once you create license to lie
in order to protect individuals when the world has a stake
in knowing what happened, then it is inevitable
that that process, that license to lie will be used
by the thing that captures institutions
for its own purposes.
So my sense is it may be very unfortunate
if the story of what happened here
can be used against Chinese people.
That would be very unfortunate.
And as I think I mentioned,
Heather and I have taken great pains to point out
that this doesn’t look like a Chinese failure.
It looks like a failure
of the international scientific community.
So I think it is important to broadcast that message
along with the analysis of the evidence.
But no matter what happened, we have a right to know.
And I frankly do not take the institutional layer
at its word that its motivations are honorable
and that it was protecting good hearted scientists
at the expense of the world.
That explanation does not add up.
Well, this is a very interesting question about
whether it’s ever okay to lie at the institutional layer
to protect the populace.
I think both you and I are probably on the same page,
have the same sense that it’s a slippery slope.
Even if it’s an effective mechanism in the short term,
in the long term, it’s going to be destructive.
This happened with masks.
This happened with other things.
If you look at just the history of pandemics,
there’s an idea that panic is destructive
amongst the populace.
So you want to construct a narrative,
whether it’s a lie or not to minimize panic.
But you’re suggesting that almost in all cases,
and I think that was the lesson from the pandemic
in the early 20th century,
that lying creates distrust
and distrust in the institutions is ultimately destructive.
That’s your sense that lying is not okay?
Well, okay.
There are obviously places where complete transparency
is not a good idea, right?
To the extent that you broadcast a technology
that allows one individual to hold the world hostage,
obviously you’ve got something to be navigated.
But in general, I don’t believe that the scientific system
should be lying to us.
In the case of this particular lie,
the idea that the wellbeing of Chinese scientists
outweighs the wellbeing of the world is preposterous.
Right, as you point out,
one thing that rests on this question
is whether we continue to do this kind of research
going forward.
And the scientists in question, all of them,
American, Chinese, all of them were pushing the idea
that the risk of a zoonotic spillover event
causing a major and highly destructive pandemic
was so great that we had to risk this.
Now, if they themselves have caused it,
and if they are wrong, as I believe they are,
about the likelihood of a major world pandemic
spilling out of nature
in the way that they wrote into their grant applications,
then the danger is the call is coming from inside the house
and we have to look at that.
And yes, whatever we have to do
to protect scientists from retribution, we should do,
but we cannot protect them by lying to the world.
And even worse,
by demonizing people like me, like Josh Rogin,
like Yuri Deigin, the entire DRASTIC group on Twitter,
by demonizing us for simply following the evidence
is to set a terrible precedent, right?
You’re demonizing people for using the scientific method
to evaluate evidence that is available to us in the world.
What a terrible crime it is to teach that lesson, right?
Thou shalt not use scientific tools.
No, I’m sorry.
Whatever your license to lie is, it doesn’t extend to that.
Yeah, I’ve seen the attacks on you,
the pressure on you has a very important effect
on thousands of world class biologists actually.
At MIT, colleagues of mine, people I know,
there’s a slight pressure to not be allowed
to one, speak publicly and two, actually think.
Like do you even think about these ideas?
It sounds kind of ridiculous,
but just in the privacy of your own home,
to read things, to think.
Many people, many world-class biologists that I know,
will just avoid looking at the data.
There’s not even that many people
that are publicly opposing gain of function research.
They’re also like, it’s not worth it.
It’s not worth the battle.
And there’s many people that kind of argue
that those battles should be fought in private,
with colleagues in the privacy of the scientific community
that the public is somehow not maybe intelligent enough
to be able to deal with the complexities
of this kind of discussion.
I don’t know, but the final result,
combined with the bullying of you
and all the different pressures
in the academic institutions,
is that people are self-censoring
and silencing themselves,
and silencing the most important thing,
which is the power of their brains.
Like these people are brilliant.
And the fact that they’re not utilizing their brain
to come up with solutions
outside of the conformist line of thinking is tragic.
Well, it is.
I also think that we have to look at it
and understand it for what it is.
For one thing, it’s kind of a cryptic totalitarianism.
Somehow people’s sense of what they’re allowed
to think about, talk about, discuss
is causing them to self censor.
And I can tell you it’s causing many of them to rationalize,
which is even worse.
They’re blinding themselves to what they can see.
But it is also the case, I believe,
that what you’re describing about what people said,
and a great many people understood
that the lab leak hypothesis
could not be taken off the table,
but they didn’t say so publicly.
And I think that their discussions with each other
about why they did not say what they understood,
that’s what capture sounds like on the inside.
I don’t know exactly what force captured the institutions.
I don’t think anybody knows for sure out here in public.
I don’t even know that it wasn’t just simply a process.
But you have these institutions.
They are behaving towards a kind of somatic obligation.
They have lost sight of what they were built to accomplish.
And on the inside, the way they avoid
going back to their original mission
is to say things to themselves,
like the public can’t have this discussion.
It can’t be trusted with it.
Yes, we need to be able to talk about this,
but it has to be private.
Whatever it is they say to themselves,
that is what capture sounds like on the inside.
It’s an institutional rationalization mechanism.
And it’s very, very deadly.
And at the point you go from lab leak to repurposed drugs,
you can see that it’s very deadly in a very direct way.
Yeah, I see this in my field with things
like autonomous weapon systems.
People in AI do not talk about the use of AI
in weapon systems.
They kind of avoid the idea that AI is used
in the military.
It’s kind of funny, there’s this kind of discomfort.
It’s like something scary happens and a bunch of sheep
kind of run away.
That’s what it looks like.
And I don’t even know what to do about it.
And then I feel this natural pull
every time I bring up autonomous weapon systems
to go along with the sheep.
There’s a natural kind of pull towards that direction
because it’s like, what can I do as one person?
Now there’s currently nothing destructive happening
with autonomous weapon systems.
So we’re in like in the early days of this race
that in 10, 20 years might become a real problem.
With the discussion we’re having now,
we’re facing the result of that in the space of viruses,
after many years of avoiding the conversations here.
I don’t know what to do about that in the early days,
but I think we have to, I guess, create institutions
where people can stand out.
People can stand out and like basically be individual
thinkers and break out into all kinds of spaces of ideas
that allow us to think freely, freedom of thought.
And maybe that requires a decentralization of institutions.
Well, years ago, I came up with a concept
called cultivated insecurity.
And the idea is, let’s just take the example
of the average Joe, right?
The average Joe has a job somewhere
and their mortgage, their medical insurance,
their retirement, their connection with the economy
is to one degree or another dependent
on their relationship with the employer.
That means that there is a strong incentive,
especially in any industry where it’s not easy to move
from one employer to the next,
to stay in your employer’s good graces, right?
So it creates a very top down dynamic,
not only in terms of who gets to tell other people
what to do, but it really comes down to
who gets to tell other people how to think.
So that’s extremely dangerous.
The way out of it is to cultivate security
to the extent that somebody is in a position
to go against the grain and have it not be a catastrophe
for their family and their ability to earn,
you will see that behavior a lot more.
So I would argue that some of what you’re talking about
is just a simple predictable consequence
of the concentration of the sources of wellbeing
and that this is a solvable problem.
You got a chance to talk with Joe Rogan yesterday.
Yes, I did.
And I just saw the episode was released
and Ivermectin is trending on Twitter.
Joe told me it was an incredible conversation.
I look forward to listening to it today.
Many people have probably, by the time this is released,
have already listened to it.
I think it would be interesting to discuss a postmortem.
How do you feel how that conversation went?
And maybe broadly, how do you see the story
as it’s unfolding of Ivermectin from the origins
from before COVID 19 through 2020 to today?
I very much enjoyed talking to Joe
and I’m indescribably grateful
that he would take the risk of such a discussion,
that he would, as he described it,
do an emergency podcast on the subject,
which I think was not an exaggeration.
This needed to happen for various reasons.
He took us down the road of talking about
the censorship campaign against Ivermectin,
which I find utterly shocking,
and talking about the drug itself.
And I should say we talked, we had Pierre Kory available.
He came on the podcast as well.
He is, of course, the face of the FLCCC,
the Frontline COVID 19 Critical Care Alliance.
These are doctors who have innovated ways
of treating COVID patients and they happened on Ivermectin
and have been using it.
And I hesitate to use the word advocating for it
because that’s not really the role of doctors or scientists,
but they are advocating for it in the sense
that there is this pressure not to talk about
its effectiveness for reasons that we can go into.
So maybe step back and say, what is Ivermectin
and how many studies have been done
to show its effectiveness?
So Ivermectin is an interesting drug.
It was discovered in the 70s
by a Japanese scientist named Satoshi Omura
and he found it in soil near a Japanese golf course.
So I would just point out in passing
that if we were to stop self silencing
over the possibility that Asians will be demonized
over the possible lab leak in Wuhan,
and to recognize that actually the natural course
of the story has a likely lab leak in China
and an unlikely hero in Japan,
the story is naturally not a simple one.
But in any case, Omura discovered this molecule.
He sent it to a friend who was at Merck,
a scientist named Campbell.
They won a Nobel Prize for the discovery
of the Ivermectin molecule in 2015.
Its initial use was in treating parasitic infections.
It’s very effective in treating the worm
that causes river blindness,
the pathogen that causes elephantiasis, and scabies.
It’s a very effective anti parasite drug.
It’s extremely safe.
It’s on the WHO’s list of essential medications.
It’s safe for children.
It has been administered something like 4 billion times
in the last four decades.
It has been given away in the millions of doses
by Merck in Africa.
People have been on it for long periods of time.
And in fact, one of the reasons
that Africa may have had less severe impacts from COVID 19
is that Ivermectin is widely used there to prevent parasites
and the drug appears to have a long lasting impact.
So it’s an interesting molecule.
It was discovered some time ago apparently
that it has antiviral properties.
And so it was tested early in the COVID 19 pandemic
to see if it might work to treat humans with COVID.
It turned out to have very promising evidence
that it did treat humans.
It was tested in tissues.
It was tested at a very high dosage, which confuses people.
They think that those of us who believe
that Ivermectin might be useful in confronting this disease
are advocating those high doses, which is not the case.
But in any case, there have been quite a number of studies.
A wonderful meta analysis was finally released.
We had seen it in preprint version,
but it was finally peer reviewed and published this last week.
It reveals that the drug, as the clinicians
who have been using it have been telling us,
is highly effective at treating people with the disease,
especially if you get to them early.
And it showed an 86% effectiveness as a prophylactic
to prevent people from contracting COVID.
And that number, 86%, is high enough
to drive SARS CoV2 to extinction if we wished to deploy it.
First of all, the meta analysis,
is this the Ivermectin for COVID 19
real time meta analysis of 60 studies?
Or, there’s a bunch of meta analyses out there.
Because I was really impressed by the real time meta analysis
that keeps getting updated.
I don’t know if it’s the same kind of thing.
The one at ivmmeta.com?
Well, I saw it at c19ivermeta.com.
No, this is not that meta analysis.
So that is, as you say, a living meta analysis
where you can watch as evidence rolls in.
Which is super cool, by the way.
It’s really cool.
And they’ve got some really nice graphics
that allow you to understand, well, what is the evidence?
It’s concentrated around this level of effectiveness,
et cetera.
So anyway, it’s a great site, well worth paying attention to.
No, this is a meta analysis.
I don’t know any of the authors but one.
Second author is Tess Lawrie of the BIRD group.
BIRD being a group of analysts and doctors in Britain
that is playing a role similar to the FLCCC here in the US.
So anyway, this is a meta analysis
that Tess Lawrie and others did
of all of the available evidence.
And it’s quite compelling.
People can look for it on my Twitter.
I will put it up and people can find it there.
So what about dose here?
In terms of safety, what do we understand
about the kind of dose required
to have that level of effectiveness?
And what do we understand about the safety
of that kind of dose?
So let me just say, I’m not a medical doctor.
I’m a biologist.
I’m on ivermectin in lieu of vaccination.
In terms of dosage, there is one reason for concern,
which is that the most effective dose for prophylaxis
involves something like weekly administration.
And because that is not a historical pattern of use
for the drug, it is possible
that there is some longterm implication
of being on it weekly for a long period of time.
There’s not a strong indication of that
in the safety signal that we have from people using the drug
over many years and using it in high doses.
In fact, Dr. Kory told me yesterday
that there are cases in which people
have made calculation errors
and taken a massive overdose of the drug
and had no ill effect.
So anyway, there’s lots of reasons
to think the drug is comparatively safe,
but no drug is perfectly safe.
And I do worry about the longterm implications
of taking it.
I also think it’s very likely,
because the drug is administered
in a dose of something like, let’s say, 15 milligrams
for somebody my size once a week,
after you’ve gone through the initial double dose
that you take 48 hours apart,
that if the amount of drug in your system
is sufficient to be protective at the end of the week,
then it was probably far too high
at the beginning of the week.
So there’s a question about whether or not
you could flatten out the intake
so that the amount of ivermectin goes down,
but the protection remains.
I have little doubt that that would be discovered
if we looked for it.
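The start-of-week versus end-of-week point is just exponential decay arithmetic. A minimal sketch, using a hypothetical elimination half-life chosen purely for illustration (not a pharmacological claim about any particular drug):

```python
# Exponential decay: with weekly dosing, the ratio between the level right after
# a dose and the level just before the next dose is 2 ** (hours / half_life).
half_life_hours = 18          # hypothetical half-life, for illustration only
week_hours = 7 * 24

peak_to_trough = 2 ** (week_hours / half_life_hours)
print(f"peak/trough ratio over one week: {peak_to_trough:,.0f}x")
# A large ratio means: if the trough level is still protective, the peak level
# was far higher than needed, which is the flattening argument above.
```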
But that said, it does seem to be quite safe,
highly effective at preventing COVID.
The 86% number is plenty high enough
for us to drive SARS CoV2 to extinction
in light of its R0 number of slightly more than two.
And so why we are not using it is a bit of a mystery.
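The arithmetic behind that claim is the standard herd immunity threshold, 1 minus 1 over R0. A minimal sketch, assuming R0 of roughly 2.2 and taking the quoted 86% prophylaxis figure at face value; both numbers are assumptions for illustration, not settled values:

```python
# Herd immunity threshold: fraction of the population that must be immune so
# that each infection causes, on average, fewer than one new infection.
R0 = 2.2                                # assumed basic reproduction number (illustrative)
efficacy = 0.86                         # the 86% prophylactic figure quoted above (contested)

threshold = 1 - 1 / R0                  # ~0.55: about 55% need effective protection
coverage_needed = threshold / efficacy  # ~0.63: uptake needed if protection is 86% effective

print(f"Herd immunity threshold: {threshold:.0%}")
print(f"Coverage needed at 86% efficacy: {coverage_needed:.0%}")
```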
So even if everything you said now
turns out to be not correct,
it is nevertheless obvious that it’s sufficiently promising
and it always has been in order to merit rigorous
scientific exploration, investigation,
doing a lot of studies and certainly not censoring
the science or the discussion of it.
So before we talk about the various vaccines for COVID 19,
I’d like to talk to you about censorship.
Given everything you’re saying,
why did YouTube and other places
censor discussion of ivermectin?
Well, there’s a question about why they say they did it
and there’s a question about why they actually did it.
Now, it is worth mentioning
that YouTube is part of a consortium.
It is partnered with Twitter, Facebook, Reuters, AP,
Financial Times, Washington Post,
some other notable organizations.
And that this group has appointed itself
the arbiter of truth.
In effect, they have decided to control discussion
ostensibly to prevent the distribution of misinformation.
Now, how have they chosen to do that?
In this case, they have chosen to simply utilize
the recommendations of the WHO and the CDC
and apply them as if they are synonymous
with scientific truth.
Problem: even at their best,
the WHO and CDC are not scientific entities.
They are entities that are about public health.
And public health has this, whether it’s right or not,
and I believe I disagree with it,
but it has this self assigned right to lie
that comes from the fact that there is game theory
that works against, for example,
a successful vaccination campaign.
That if everybody else takes a vaccine
and therefore the herd becomes immune through vaccination
and you decide not to take a vaccine,
then you benefit from the immunity of the herd
without having taken the risk.
So people who do best are the people who opt out.
That’s a hazard.
And the WHO and CDC, as public health entities,
effectively oversimplify stories
in order that that game theory
does not cause a predictable tragedy of the commons.
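A toy version of that free-rider logic, with made-up numbers chosen purely to show the shape of the incentive; none of the risk figures are estimates of anything real:

```python
# Toy free-rider calculation: as coverage rises, an individual's infection risk
# falls, so the expected personal benefit of vaccinating shrinks while the
# (small, fixed) personal cost of vaccinating stays the same.
risk_infection_unprotected = 0.10   # hypothetical infection risk with no herd protection
cost_if_infected = 1.0              # arbitrary units of harm
cost_of_vaccinating = 0.01          # hypothetical personal cost/risk of vaccinating

for coverage in (0.0, 0.5, 0.95):
    # crude assumption: infection risk falls linearly with coverage
    p_infect = risk_infection_unprotected * (1 - coverage)
    expected_cost_skip = p_infect * cost_if_infected
    verdict = 'free-riding pays' if expected_cost_skip < cost_of_vaccinating else 'vaccinating pays'
    print(f"coverage {coverage:.0%}: skipping costs {expected_cost_skip:.3f}, "
          f"vaccinating costs {cost_of_vaccinating:.3f} -> {verdict}")
```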
With that said, once that right to lie exists,
then it turns out to serve the interests of,
for example, pharmaceutical companies,
which have emergency use authorizations
that require that there not be a safe
and effective treatment and have immunity from liability
for harms caused by their product.
So that’s a recipe for disaster, right?
You don’t need to be a sophisticated thinker
about complex systems to see the hazard
of immunizing a company from the harm of its own product
at the same time that that product can only exist
in the market if some other product that works better
somehow fails to be noticed.
So somehow YouTube is doing the bidding of Merck and others.
Whether it knows that that’s what it’s doing,
I have no idea.
I think this may be another case of an autopilot
that thinks it’s doing the right thing
because it’s parroting the corrupt wisdom
of the WHO and the CDC,
but the WHO and the CDC have been wrong again and again
in this pandemic.
And the irony here is that with YouTube coming after me,
well, my channel has been right where the WHO and CDC
have been wrong consistently over the whole pandemic.
So how is it that YouTube is censoring us
because the WHO and CDC disagree with us
when in fact, in past disagreements,
we’ve been right and they’ve been wrong?
There’s so much to talk about here.
So I’ve heard this many times, actually,
from people on the inside of YouTube and from colleagues
that I’ve talked with: they kind of, in a very casual way,
say their job is simply to slow
or prevent the spread of misinformation.
And they say that’s an easy thing to do,
like knowing what is true or not is an easy thing to do.
And so from the YouTube perspective,
I think they basically outsource the task
of knowing what is true or not to public institutions
that, on a basic Google search, claim
to be the arbiters of truth.
So if you were YouTube, which is exceptionally profitable
and exceptionally powerful in terms of controlling
what people get to see or not, what would you do?
Would you take a stand, a public stand
against the WHO, CDC?
Or would you instead say, you know what?
Let’s open the dam and let any video on anything fly.
What do you do here?
Say, if Brett Weinstein was put in charge
of YouTube for a month in this most critical of times
where YouTube actually has incredible amounts of power
to educate the populace, to give power of knowledge
to the populace such that they can reform institutions.
What would you do?
How would you run YouTube?
Well, unfortunately, or fortunately,
this is actually quite simple.
The founders, the American founders,
settled on a counterintuitive formulation
that people should be free to say anything.
They should be free from the government
blocking them from doing so.
They did not imagine that in formulating that right,
that most of what was said would be of high quality,
nor did they imagine it would be free of harmful things.
What they correctly reasoned was that the benefit
of leaving everything so it can be said exceeds the cost,
which everyone understands to be substantial.
What I would say is they could not have anticipated
the impact, the centrality of platforms
like YouTube, Facebook, Twitter, et cetera.
If they had, they would not have limited
the First Amendment as they did.
They clearly understood that the power of the federal
government was so great that it needed to be limited
by granting explicitly the right of citizens
to say anything.
In fact, YouTube, Twitter, Facebook may be more powerful
in this moment than the federal government
of their worst nightmares could have been.
The power that these entities have to control thought
and to shift civilization is so great
that we need to have those same protections.
It doesn’t mean that harmful things won’t be said,
but it means that nothing has changed
about the cost benefit analysis
of building the right to censor.
So if I were running YouTube,
the limit of what should be allowed
is the limit of the law, right?
If what you are doing is legal,
then it should not be YouTube’s place
to limit what gets said or who gets to hear it.
That is between speakers and audience.
Will harm come from that? Of course it will.
But will net harm come from it?
No, I don’t believe it will.
I believe that allowing everything to be said
does allow a process in which better ideas
do come to the fore and win out.
So you believe that in the end,
when there’s complete freedom to share ideas,
that truth will win out.
So what I’ve noticed, just as a brief side comment,
is that certain things go viral
regardless of their truth.
I’ve noticed that things that are dramatic and or funny,
things that become memes,
don’t have to be grounded in truth.
And so that what worries me there
is that we basically maximize for drama
versus maximize for truth in a system
where everything is free.
And that is worrying in the time of emergency.
Well, yes, it’s all worrying in time of emergency,
to be sure.
But I want you to notice that what you’ve happened on
is actually an analog for a much deeper and older problem.
Human beings are the, we are not a blank slate,
but we are the blankest slate that nature has ever devised.
And there’s a reason for that, right?
It’s where our flexibility comes from.
We have effectively, we are robots
in which a large fraction of the cognitive capacity
has been, or of the behavioral capacity,
has been offloaded to the software layer,
which gets written and rewritten over evolutionary time.
That means effectively that much of what we are,
in fact, the important part of what we are
is housed in the cultural layer and the conscious layer
and not in the hardware hard coding layer.
So that layer is prone to make errors, right?
And anybody who’s watched a child grow up
knows that children make absurd errors all the time, right?
That’s part of the process, as we were discussing earlier.
It is also true that as you look across
a field of people discussing things,
a lot of what is said is pure nonsense, it’s garbage.
But the tendency of garbage to emerge
and even to spread in the short term
does not say that over the long term,
what sticks is not the valuable ideas.
So there is a high tendency for novelty
to be created in the cultural space,
but there’s also a high tendency for it to go extinct.
And you have to keep that in mind.
It’s not like the genome, right?
Everything is happening at a much higher rate.
Things are being created, they’re being destroyed.
And I can’t say that, I mean, obviously,
we’ve seen totalitarianism arise many times,
and it’s very destructive each time it does.
So it’s not like, hey, freedom to come up
with any idea you want hasn’t produced a whole lot of carnage.
But the question is, over time,
does it produce more open, fairer, more decent societies?
And I believe that it does.
I can’t prove it, but that does seem to be the pattern.
I believe so as well.
The thing is, in the short term, freedom of speech,
absolute freedom of speech can be quite destructive.
But you nevertheless have to hold on to that,
because in the long term, I think you and I, I guess,
are optimistic in the sense that good ideas will win out.
I don’t know how strongly I believe that it will work,
but I will say I haven’t heard a better idea.
I would also point out that there’s something
very significant in this question of the hubris involved
in imagining that you’re going to improve the discussion
by censoring, which is the majority of concepts
at the fringe are nonsense.
That’s automatic.
But the heterodoxy at the fringe,
which is indistinguishable at the beginning
from the nonsense ideas, is the key to progress.
So if you decide, hey, the fringe is 99% garbage,
let’s just get rid of it, right?
Hey, that’s a strong win.
We’re getting rid of 99% garbage for 1% something or other.
And the point is, yeah, but that 1% something or other
is the key.
You’re throwing out the key.
And so that’s what YouTube is doing.
Frankly, I think at the point that it started censoring
my channel, in the immediate aftermath
of this major reversal over LabLeak,
it should have looked at itself and said,
well, what the hell are we doing?
Who are we censoring?
We’re censoring somebody who was just right, right?
In a conflict with the very same people
on whose behalf we are now censoring, right?
That should have caused them to wake up.
So you said one approach, if you’re on YouTube,
is this basically let all videos go
that do not violate the law.
Well, I should fix that, okay?
I believe that that is the basic principle.
Eric makes an excellent point about the distinction
between ideas and personal attacks,
doxxing, these other things.
So I agree, there’s no value in allowing people
to destroy each other’s lives,
even if there’s a technical legal defense for it.
Now, how you draw that line, I don’t know.
But what I’m talking about is,
yes, people should be free to traffic in bad ideas,
and they should be free to expose that the ideas are bad.
And hopefully that process results
in better ideas winning out.
Yeah, there’s an interesting line between ideas,
like the earth is flat,
which I believe you should not censor.
And then you start to encroach on personal attacks.
So not doxxing, yes, but not even getting to that.
There’s a certain point where it’s like,
that’s no longer ideas, that’s more,
that’s somehow not productive, even if it’s wrong.
It feels like believing the earth is flat
is somehow productive,
because maybe there’s a tiny percent chance it is.
It just feels like personal attacks, it doesn’t,
well, I’m torn on this
because there’s assholes in this world,
there’s fraudulent people in this world.
So sometimes personal attacks are useful to reveal that,
but there’s a line you can cross.
There’s a comedy where people make fun of others.
I think that’s amazing, that’s very powerful,
and that’s very useful, even if it’s painful.
But then there’s like, once it gets to be,
yeah, there’s a certain line,
it’s a gray area where you cross,
where it’s no longer in any possible world productive.
And that’s a really weird gray area
for YouTube to operate in.
And that feels like it should be a crowdsource thing,
where people vote on it.
But then again, do you trust the majority to vote
on what is crossing the line and not?
I mean, this is where,
this is really interesting on this particular,
like the scientific aspect of this.
Do you think YouTube should take more of a stance,
not censoring, but to actually have scientists
within YouTube having these kinds of discussions,
and then be able to almost speak out in a transparent way:
we’re going to let this video stand,
but here are all these other opinions.
Almost like take a more active role
in its recommendation system,
in trying to present a full picture to you.
Right now they’re not,
the recommender systems are not human fine tuned.
They’re all based on how you click,
and there are these clustering algorithms.
They’re not taking an active role
on giving you the full spectrum of ideas
in the space of science.
They just censor or not.
Well, at the moment,
it’s gonna be pretty hard to compel me
that these people should be trusted
with any sort of curation or comment
on matters of evidence,
because they have demonstrated
that they are incapable of doing it well.
You could make such an argument,
and I guess I’m open to the idea of institutions
that would look something like YouTube,
that would be capable of offering something valuable.
I mean, and even just the fact of them
literally curating things and putting some videos
next to others implies something.
So yeah, there’s a question to be answered,
but at the moment, no.
At the moment, what it is doing
is quite literally putting not only individual humans
in tremendous jeopardy by censoring discussion
of useful tools and making tools that are more hazardous
than has been acknowledged seem safe, right?
But it is also placing humanity in danger
of a permanent relationship with this pathogen.
I cannot emphasize enough how expensive that is.
It’s effectively incalculable.
If the relationship becomes permanent,
the number of people who will ultimately suffer
and die from it is indefinitely large.
Yeah, currently the algorithm is very rabbit hole driven,
meaning if you click on Flat Earth videos,
that’s all you’re going to be presented with
and you’re not going to be nicely presented
with arguments against the Flat Earth.
And the flip side of that is,
if you watch, like, quantum mechanics videos,
or, no, general relativity videos,
it’s very rare you’re going to get a recommendation:
have you considered that the Earth is flat?
And I think you should have both.
Same with vaccines.
If you’re watching videos that present the power
and the incredible biology, genetics, and virology behind the vaccine,
you’re rarely going to get videos
from well respected scientific minds
presenting possible dangers of the vaccine.
And the vice versa is true as well,
which is if you’re looking at the dangers of the vaccine
on YouTube, you’re not going to get the highest quality
of videos recommended to you.
And I’m not talking about like manually inserted CDC videos
that are like the most untrustworthy things
you can possibly watch about how everybody
should take the vaccine, it’s the safest thing ever.
No, it’s about incredible, again, MIT colleagues of mine,
incredible biologists, virologists that talk about
the details of how the mRNA vaccines work
and all those kinds of things.
I think, maybe this is me with the AI hat on,
but I think the algorithm can fix a lot of this,
and YouTube should build better algorithms
and trust that, coupled with complete freedom of speech,
to expand what people are able to think about,
to always present varied views,
not balanced in some artificial, hard coded way,
but balanced in a way that’s crowdsourced.
I think that’s an algorithm problem that can be solved,
because then you can delegate it to the algorithm
as opposed to this hard coded censorship
of basically creating artificial boundaries
on what can and can’t be discussed,
instead creating a full spectrum of exploration
that can be done and trusting the intelligence of people
to do the exploration.
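A rough sketch of what delegating it to the algorithm could look like: a re-ranker that blends an engagement score with a viewpoint-diversity bonus instead of optimizing engagement alone. Every name, label, and weight here is hypothetical, not a description of anything YouTube actually does:

```python
# Hypothetical re-ranker: mix engagement with viewpoint diversity so the top
# results are not all drawn from the cluster the user already watches.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    engagement: float   # predicted watch/click score, 0..1
    viewpoint: str      # coarse cluster label for the stance of the video

def rerank(videos, user_history_viewpoints, diversity_weight=0.3):
    """Score = engagement, plus a bonus for viewpoints the user rarely sees."""
    def score(v):
        seen = user_history_viewpoints.count(v.viewpoint)
        novelty = 1.0 / (1 + seen)   # unseen viewpoints get the full bonus
        return (1 - diversity_weight) * v.engagement + diversity_weight * novelty
    return sorted(videos, key=score, reverse=True)

candidates = [
    Video("Deep dive: how mRNA vaccines work", 0.70, "mainstream"),
    Video("Open questions about vaccine safety", 0.60, "skeptical"),
    Video("Reaction clip #47", 0.90, "mainstream"),
]
for v in rerank(candidates, user_history_viewpoints=["mainstream"] * 10):
    print(v.title)
```

Under these toy numbers the under-represented viewpoint rises above the higher-engagement clip, which is the crowd-balancing behavior described above, without any hard coded ban on either side.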
Well, there’s a lot there.
I would say we have to keep in mind
that we’re talking about a publicly held company
with shareholders and obligations to them
and that that may make it impossible.
And I remember many years ago,
back in the early days of Google,
I remember a sense of terror at the loss of general search.
It used to be that Google, if you searched,
came up with the same thing for everyone
and then it got personalized and for a while
it was possible to turn off the personalization,
which was still not great
because if everybody else is looking
at a personalized search and you can tune into one
that isn’t personalized, that doesn’t tell you
why the world is sounding the way it is.
But nonetheless, it was at least an option.
And then that vanished.
And the problem is I think this is literally deranging us.
That in effect, I mean, what you’re describing
is unthinkable.
It is unthinkable that in the face of a campaign
to vaccinate people in order to reach herd immunity
that YouTube would give you videos on the hazards of vaccines,
when how hazardous the vaccines are
is an unsettled question.
Why is it unthinkable?
That doesn’t make any sense from a company perspective.
If intelligent people in large amounts are open minded
and are thinking through the hazards
and the benefits of a vaccine, a company should find
the best videos to present what people are thinking about.
Well, let’s come up with a hypothetical.
Okay, let’s come up with a very deadly disease
for which there’s a vaccine that is very safe,
though not perfectly safe.
And we are then faced with YouTube trying to figure out
what to do for somebody searching on vaccine safety.
Suppose it is necessary in order to drive
the pathogen to extinction, something like smallpox,
that people get on board with the vaccine.
But there’s a tiny fringe of people who thinks
that the vaccine is a mind control agent.
So should YouTube direct people to the only claims
against this vaccine, which is that it’s a mind control
agent when in fact the vaccine is very safe,
whatever that means.
If that were the actual configuration of the puzzle,
then YouTube would be doing active harm,
pointing you to this other video potentially.
Now, yes, I would love to live in a world where people
are up to the challenge of sorting that out.
But my basic point would be, if it’s an evidentiary
question, and there is essentially no evidence
that the vaccine is a mind control agent,
and there’s plenty of evidence that the vaccine is safe,
then saying, while you look for this video,
we’re gonna give you this one, puts it on a par, right?
So for the mind that’s tracking how much thought
there is behind “it’s safe” versus how much thought
there is behind “it’s a mind control agent,”
it will result in artificially elevating this.
Now in the current case, what we’ve seen is not this at all.
We have seen evidence obscured in order to create
a false story about safety.
And we saw the inverse with ivermectin.
We saw a campaign to portray the drug as more dangerous
and less effective than the evidence
clearly suggested it was.
So we’re not talking about a comparable thing,
but I guess my point is the algorithmic solution
that you point to creates a problem of its own,
which is that it means that the way to get exposure
is to generate something fringy.
If you’re the only thing on some fringe,
then suddenly YouTube would be recommending those things,
and that’s obviously a gameable system at best.
Yeah, but the solution to that,
I know you’re creating a thought experiment,
maybe playing a little bit of a devil’s advocate.
I think the solution to that is not to limit the algorithm
in the case of the super deadly virus.
It’s for the scientists to step up
and become better communicators, more charismatic,
fight the battle of ideas, sort of create better videos.
Like if the virus is truly deadly,
you have a lot more ammunition, a lot more data,
a lot more material to work with
in terms of communicating with the public.
So be better at communicating and stop being,
you have to start trusting the intelligence of people
and also being transparent
and playing the game of the internet,
which is like, what is the internet hungry for, I believe?
Authenticity, stop looking like you’re full of shit.
The scientific community,
if there’s any flaw that I currently see,
especially the people that are in public office,
that like Anthony Fauci,
they look like they’re full of shit
and I know they’re brilliant.
Why don’t they look more authentic?
So they’re losing that game
and I think a lot of people observing this entire system now,
younger scientists are seeing this and saying,
okay, if I want to continue being a scientist
in the public eye and I want to be effective at my job,
I’m gonna have to be a lot more authentic.
So they’re learning the lesson,
this evolutionary system is working.
So there’s just a younger generation of minds coming up
that I think will do a much better job
in this battle of ideas
that when the much more dangerous virus comes along,
they’ll be able to be better communicators.
At least that’s the hope.
Using the algorithm to control that,
I feel, is a big problem.
So you’re going to have the same problem with a deadly virus
as with the current virus
if you let YouTube draw hard lines
by the PR and the marketing people
versus the broad community of scientists.
Well, in some sense you’re suggesting something
that’s close kin to what I was saying
about freedom of expression ultimately
provides an advantage to better ideas.
So I’m in agreement broadly speaking,
but I would also say there’s probably some sort of,
let’s imagine the world that you propose
where YouTube shows you the alternative point of view.
That has the problem that I suggest,
but one thing you could do is you could give us the tools
to understand what we’re looking at, right?
You could give us,
so first of all, there’s something I think myopic,
solipsistic, narcissistic about an algorithm
that serves shareholders by showing you what you want to see
rather than what you need to know, right?
That’s the distinction is flattering you,
playing to your blind spot
is something that algorithm will figure out,
but it’s not healthy for us all
to have Google playing to our blind spot.
It’s very, very dangerous.
So what I really want is analytics that allow me
or maybe options and analytics.
The options should allow me to see
what alternative perspectives are being explored, right?
So here’s the thing I’m searching
and it leads me down this road, right?
Let’s say it’s ivermectin, okay?
I find all of this evidence that ivermectin works.
I find all of these discussions
and people talk about various protocols and this and that.
And then I could say, all right, what is the other side?
And I could see who is searching, not as individuals,
but what demographics are searching alternatives.
And maybe you could even combine it
with something Reddit like where effectively,
let’s say that there was a position that, I don’t know,
that a vaccine is a mind control device
and you could have a steel man this argument competition
effectively and the better answers that steel man
and as well as possible would rise to the top.
And so you could read the top three or four explanations
about why this really credibly is a mind control product.
And you can say, well, that doesn’t really add up.
I can check these three things myself
and they can’t possibly be right, right?
And you could dismiss it.
And then as an argument that was credible,
let’s say plate tectonics before
that was an accepted concept,
you’d say, wait a minute,
there is evidence for plate tectonics.
As crazy as it sounds that the continents
are floating around on liquid,
actually that’s not so implausible.
We’ve got these subduction zones,
we’ve got a geology that is compatible,
we’ve got puzzle piece continents
that seem to fit together.
Wow, that’s a surprising amount of evidence
for that position.
So I’m gonna file some Bayesian probability with it
that’s updated for the fact that actually
the steel man argument is better than I was expecting, right?
So I could imagine something like that
where A, I would love the search to be indifferent
to who’s searching, right?
The solipsistic thing is too dangerous.
So the search could be general,
so we would all get a sense
for what everybody else was seeing too.
And then some layer that didn’t have anything to do
with what YouTube points you to or not,
but allowed you to see, you know,
the general pattern of adherence
to searching for information.
And again, a layer in which those things could be defended.
So you could hear what a good argument sounded like
rather than just hear a caricatured argument.
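That “file some Bayesian probability” step can be made concrete with a one-line Bayes update. The numbers below are invented purely to show the mechanics of updating on a surprisingly strong steel man:

```python
# Bayes update: prior belief in a fringe claim, revised after seeing how strong
# the best steel-man argument turned out to be. All numbers are illustrative.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(claim | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior = 0.02                      # initial credence in the fringe claim
# "The steel man was better than I expected": such a strong argument is assumed
# more likely to exist if the claim is true than if it is false.
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"{prior:.0%} -> {posterior:.0%}")   # 2% -> ~6%: updated, but still unlikely
```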
Yeah, and also reward people,
creators that have demonstrated
a track record of open mindedness
and correctness, as much as it can be measured
over the long term.
I mean, a lot of this maps
to incentivizing good longterm behavior,
not immediate dopamine rush kind of signals.
I think ultimately the algorithm on the individual level
should optimize for personal growth,
longterm happiness, just growth intellectually,
growth in terms of lifestyle personally and so on,
as opposed to immediate.
I think that’s going to build a better society,
not even just like truth,
because I think truth is a complicated thing.
It’s more just you growing as a person,
exploring the space of ideas, changing your mind often,
increasing the level to which you’re open minded,
the knowledge base you’re operating from,
the willingness to empathize with others,
all those kinds of things the algorithm should optimize for.
Creating a better human at the individual level,
I think that’s a great business model
because the person that’s using this tool
will then be happier with themselves for having used it
and will be a lifelong quote unquote customer.
I think it’s a great business model
to make a happy, open minded, knowledgeable,
better human being.
It’s a terrible business model under the current system.
What you want is to build the system
in which it is a great business model.
Why is it a terrible model?
Because it will be decimated by those
who play to the short term.
I don’t think so.
Why?
I mean, I think we’re living it.
We’re living it.
Well, no, because if you have the alternative
that presents itself,
it points out the emperor has no clothes.
I mean, it points out that YouTube is operating in this way,
Twitter is operating in this way,
Facebook is operating in this way.
How long term would you like the wisdom to prove out?
Well, even a week is better than what’s currently happening.
Right, but the problem is,
if a week loses out to an hour, right?
And I don’t think it loses out.
It loses out in the short term.
That’s my point.
At least you’re a great communicator
and you basically say, look, here’s the metrics.
And a lot of it is like how people actually feel.
Like this is what people experience with social media.
They look back at the previous month and say,
I felt shitty on a lot of days because of social media.
Right.
If you look back at the previous few weeks and say,
wow, I’m a better person because that month happened,
then they immediately choose the product
that’s going to lead to that.
That’s what love for products looks like.
Like a lot of people love their Tesla car,
or their iPhone, or a beautiful design.
That’s what love looks like.
You look back, I’m a better person
for having used this thing.
Well, you got to ask yourself the question though,
if this is such a great business model,
why isn’t it evolving?
Why don’t we see it?
Honestly, it’s competence.
It’s just that it’s not easy to build new products,
tools, and systems
on new ideas.
It’s kind of a new idea.
We’ve gone through this, everything we’re seeing now
comes from the ideas of the initial birth of the internet.
There just needs to be new sets of tools
that are incentivizing long term personal growth
and happiness.
That’s it.
Right, but what we have is a market
that doesn’t favor this, right?
I mean, for one thing, we had an alternative to Facebook,
right, where you owned your own data,
it wasn’t exploitative, and Facebook bought
a huge interest in it, and it died.
I mean, who do you know who’s on Diaspora?
The execution there was not good.
Right, but it could have gotten better, right?
I don’t think that “why hasn’t somebody done it”
is a good argument that it’s not going to completely
destroy all of Twitter and Facebook when somebody does do it,
or that Twitter will catch up and pivot to the algorithm.
This is not what I’m saying.
There’s obviously great ideas that remain unexplored
because nobody has gotten to the foothill
that would allow you to explore them.
That’s true, but you know, an internet
that was non predatory is an obvious idea
and many of us know that we want it
and many of us have seen prototypes of it
and we don’t move because there’s no audience there.
So the network effects cause you to stay
with the predatory internet.
But let me just say, I wasn’t kidding about building the system
in which your idea is a great business plan.
So in our upcoming book, Heather and I, in our last chapter,
explore something called the fourth frontier,
and the fourth frontier has to do with sort of a 2.0 version
of civilization, which we freely admit
we can’t tell you very much about.
It’s something that would have to be,
we would have to prototype our way there.
We would have to effectively navigate our way there.
But the result would be very much
like what you’re describing.
It would be something that effectively liberates humans
meaningfully and most importantly,
it has to feel like growth without depending on growth.
In other words, human beings are creatures
that, like every other creature,
are effectively looking for growth, right?
We are looking for underexploited
or unexploited opportunities, and when we find them,
our ancestors, for example, would happen into a new valley
that was unexplored by people,
and their population would grow until it hit carrying capacity.
So there would be this great feeling of there’s abundance
until you hit carrying capacity, which is inevitable
and then zero sum dynamics would set in.
So in order for human beings to flourish longterm,
the way to get there is to satisfy the desire for growth
without hooking it to actual growth,
which only moves and fits and starts.
And this is actually, I believe the key
to avoiding these spasms of human tragedy
when in the absence of growth,
people do something that causes their population
to experience growth, which is they go and make war on
or commit genocide against some other population,
which is something we obviously have to stop.
By the way, this is A Hunter Gatherer’s Guide
to the 21st Century, coauthored.
That’s right.
With your wife, Heather, being released in September.
I believe you said you’re going to do
a little bit of a preview videos on each chapter
leading up to the release.
So I’m looking forward to the last chapter
as well as all the previous ones.
I have a few questions on that.
So, to clarify, you generally have faith that technology
could be the thing that empowers this kind of future.
Well, if you just let technology evolve,
it’s going to be our undoing, right?
One of the things that I fault my libertarian friends for
is this faith that the market is going to find solutions
without destroying us.
And my sense is I’m a very strong believer in markets.
I believe in their power
even above some market fundamentalists.
But what I don’t believe is that they should be allowed
to plot our course, right?
Markets are very good at figuring out how to do things.
They are not good at all about figuring out
what we should do, right?
What we should want.
We have to tell markets what we want
and then they can tell us how to do it best.
And if we adopted that kind of pro market
but in a context where it’s not steering,
where human wellbeing is actually the driver,
we can do remarkable things.
And the technology that emerges
would naturally be enhancing of human wellbeing.
Perfectly so?
No, but overwhelmingly so.
But at the moment, markets are finding
our every defective character and exploiting them
and making huge profits
and making us worse to each other in the process.
Before we leave COVID 19,
let me ask you about a very difficult topic,
which is the vaccines.
So I took the Pfizer vaccine, the two shots.
You did not.
You have been taking ivermectin.
Yep.
So one of the arguments
against the discussion of ivermectin
is that it prevents people
from being fully willing to get the vaccine.
How would you compare ivermectin
and the vaccine for COVID 19?
All right, that’s a good question.
I would say, first of all,
there are some hazards with the vaccine
that people need to be aware of.
There are some things that we cannot rule out
and for which there is some evidence.
The two that I think people should be tracking
is the possibility, some would say a likelihood,
that a vaccine of this nature,
that is to say very narrowly focused on a single antigen,
is an evolutionary pressure
that will drive the emergence of variants
that will escape the protection
that comes from the vaccine.
So this is a hazard.
It is a particular hazard in light of the fact
that these vaccines have a substantial number
of breakthrough cases.
So one danger is that a person who has been vaccinated
will shed viruses that are specifically less visible
or invisible to the immunity created by the vaccines.
So we may be creating the next pandemic
by applying the pressure of vaccines
at a point that it doesn’t make sense to.
The other danger has to do with something called
antibody dependent enhancement,
which is something that we see in certain diseases
like dengue fever.
You may know that dengue, one gets a case,
and then their second case is much more devastating.
So break bone fever is when you get your second case
of dengue, and dengue effectively utilizes
the immune response that is produced by prior exposure
to attack the body in ways that it is incapable
of doing before exposure.
This pattern has apparently blocked
past efforts to make vaccines against coronaviruses.
Whether it will happen here or not,
it is still too early to say.
But before we even get to the question
of harm done to individuals by these vaccines,
we have to ask about what the overall impact is going to be.
And it’s not clear in the way people think it is
that if we vaccinate enough people, the pandemic will end.
It could be that we vaccinate people
and make the pandemic worse.
And while nobody can say for sure
that that’s where we’re headed,
it is at least something to be aware of.
So don’t vaccines usually create
that kind of evolutionary pressure
to create deadlier, different strains of the virus?
So is there something particular with these mRNA vaccines
that’s uniquely dangerous in this regard?
Well, it’s not even just the mRNA vaccines.
The mRNA vaccines and the adenovector DNA vaccine
all share the same vulnerability,
which is they are very narrowly focused
on one subunit of the spike protein.
So that is a very concentrated evolutionary signal.
We are also deploying it in mid pandemic
and it takes time for immunity to develop.
So part of the problem here is that
if you inoculated a population before its encounter
with a pathogen, then there might be substantial
enough immunity to prevent this phenomenon from happening.
But in this case, we are inoculating people
as they are encountering those who are sick with the disease.
And what that means is the disease is now faced
with a lot of opportunities
to effectively evolutionarily practice escape strategies.
So one thing is the timing,
the other thing is the narrow focus.
Now in a traditional vaccine,
you would typically not have one antigen, right?
You would have basically a virus full of antigens
and the immune system would therefore
produce a broader response.
So that is the case for people who have had COVID, right?
They have an immunity that is broader
because it wasn’t so focused
on one part of the spike protein.
So anyway, there is something unique here.
So these platforms create that special hazard.
They also have components that we haven’t used before
in people.
So for example, the lipid nanoparticles
that coat the RNAs are distributing themselves
around the body in a way that will have unknown consequences.
So anyway, there’s reason for concern.
Is it possible for you to steel man the argument
that everybody should get vaccinated?
Of course.
The argument that everybody should get vaccinated
is that nothing is perfectly safe.
Phase three trials showed good safety for the vaccines.
Now that may or may not be actually true,
but what we saw suggested a high degree of efficacy
and a high degree of safety for the vaccines;
that inoculating people quickly,
and therefore dropping the landscape of available victims
for the pathogen to a very low number
so that herd immunity drives it to extinction,
requires us all to take our share of the risk;
and that, because driving it to extinction
should be our highest priority,
people shouldn’t think too much about the various nuances,
because overwhelmingly fewer people will die
from the vaccine if the population is vaccinated
than will die from COVID if they’re not vaccinated.
And with the vaccine as it currently is being deployed,
that is quite a likely scenario,
that, you know, the virus will fade away.
In the following sense that the probability
that a more dangerous strain will be created is nonzero,
but it’s not 50%, it’s something smaller.
And so the most likely, well, I don’t know,
maybe you disagree with that,
but the scenario we’re most likely to see now
that the vaccine is here is that the virus,
the effects of the virus will fade away.
First of all, I don’t believe that the probability
of creating a worse pandemic is low enough to discount.
I think the probability is fairly high
and frankly, we are seeing a wave of variants
that we will have to do a careful analysis
to figure out what exactly that has to do
with campaigns of vaccination,
where they have been, where they haven’t been,
where the variants emerged from.
But I believe that what we are seeing is a disturbing pattern
that reflects that those who were advising caution
may well have been right.
The data here, by the way, and this is a small tangent, is terrible.
Terrible, right.
And why is it terrible is another question, right?
This is where I started getting angry.
Yes.
It’s like, there’s an obvious opportunity
for exceptionally good data, for exceptionally rigorous collection,
like even the website for self reporting
side effects, not side effects,
but negative effects, right?
Adverse events.
Adverse events, sorry, for the vaccine.
Like, there’s many things I could say
from both the study perspective,
but mostly, let me just put on my hat of like HTML
and like web design.
Like, it’s like the worst website.
It makes it so unpleasant to report.
It makes it so unclear what you’re reporting.
If somebody actually has a serious effect,
or if you have very mild effects,
what are the incentives for you to even use
that crappy website, with many pages and forms
that don’t make any sense?
If you have adverse effects,
what are the incentives for you to use that website?
What is the trust that you have
that this information will be used well?
All those kinds of things.
And the data about who’s getting vaccinated,
anonymized data about who’s getting vaccinated,
where, when, with what vaccine,
coupled with the adverse effects,
all of that we should be collecting.
Instead, we’re completely not.
We’re doing it in a crappy way
and using that crappy data to make conclusions
that you then twist.
You’re basically collecting in a way
that can arrive at whatever conclusions you want.
And the data is being collected by the institutions,
by governments, and so therefore,
it’s obviously they’re going to try
to construct any kind of narratives they want
based on this crappy data.
Reminds me of much of psychology, the field that I love,
but is flawed in many fundamental ways.
So rant over, but coupled with the dangers
that you’re speaking to,
we don’t have even the data to understand the dangers.
Yeah, I’m gonna pick up on your rant and say,
estimates of the degree of underreporting in VAERS
are that it is somewhere between 10% of the real number and 100%.
And that’s the system for reporting.
Yeah, the VAERS system is the system
for reporting adverse events.
So in the US, we have above 5,000 unexpected deaths
that seem in time to be associated with vaccination.
That is an undercount, almost certainly,
and by a large factor.
We don’t know how large.
I’ve seen estimates, 25,000 dead in the US alone.
Now, you can make the argument that, okay,
that’s a large number,
but the necessity of immunizing the population
to drive SARS CoV2 to extinction
is such that it’s an acceptable number.
But I would point out
that that actually does not make any sense.
And the reason it doesn’t make any sense
is actually there are several reasons.
One, if that was really your point,
that yes, many, many people are gonna die,
but many more will die if we don’t do this.
Were that your approach,
you would not be inoculating people who had had COVID 19,
which is a large population.
There’s no reason to expose those people to danger.
Their risk of adverse events
in the case that they have them is greater.
So there’s no reason that we would be allowing
those people to face a risk of death
if this was really about an acceptable number of deaths
arising out of this set of vaccines.
I would also point out
there’s something incredibly bizarre.
And I struggle to find language that is strong enough
for the horror of vaccinating children in this case
because children suffer a greater risk of longterm effects
because they are going to live longer.
And because this is earlier in their development,
therefore it impacts systems that are still forming.
They tolerate COVID well.
And so the benefit to them is very small.
And so the only argument for doing this
is that they may cryptically be carrying more COVID
than we think, and therefore they may be integral
to the way the virus spreads to the population.
But if that’s the reason that we are inoculating children,
and there has been some revision in the last day or two
about the recommendation on this
because of the adverse events
that have shown up in children,
but to the extent that we were vaccinating children,
we were doing it to protect old, infirm people
who are the most likely to succumb to COVID 19.
What society puts children in danger,
robs children of life to save old, infirm people?
That’s upside down.
So there’s something about the way we are going about
vaccinating, who we are vaccinating,
what dangers we are pretending don’t exist
that suggests that to some set of people,
vaccinating people is a good in and of itself,
that that is the objective of the exercise,
not herd immunity.
And the last thing, and I’m sorry,
I don’t wanna prevent you from jumping in here,
but the second reason, in addition to the fact
that we’re exposing people to danger
that we should not be exposing them to.
By the way, as a tiny tangent,
another huge part of this soup
that should have been part of the solution,
an incredible solution in itself, is large scale testing.
Mm hmm.
But that might be another couple hour conversation,
but there’s these solutions that are obvious
that were available from the very beginning.
So you could argue that ivermectin is not that obvious,
but maybe the whole point is you have aggressive,
very fast research that leads to a meta analysis
and then large scale production and deployment.
Okay, at least that possibility
should be seriously considered,
coupled with a serious consideration
of large scale deployment of testing,
at home testing that could have accelerated
the speed at which we reached that herd immunity.
But I don’t even wanna.
Well, let me just say, I am also completely shocked
that we did not get on high quality testing early
and that we are still suffering from this even now,
because just the simple ability to track
where the virus moves between people
would tell us a lot about its mode of transmission,
which would allow us to protect ourselves better.
Instead, that information was hard won
and for no good reason.
So I also find this mysterious.
You’ve spoken with Eric Weinstein, your brother,
on his podcast, The Portal,
about the ideas that eventually led to the paper
you published titled, The Reserve Capacity Hypothesis.
I think first, can you explain this paper
and the ideas that led up to it?
Sure, easier to explain the conclusion of the paper.
There’s a question about why a creature
that can replace its cells with new cells
grows feeble and inefficient with age.
We call that process, which is otherwise called aging,
we call it senescence.
And senescence, in this paper, it is hypothesized,
is the unavoidable downside of a cancer prevention
feature of our bodies.
That each cell has a limit on the number of times
it can divide.
There are a few cells in the body that are exceptional,
but most of our cells can only divide
a limited number of times.
That’s called the Hayflick limit.
And the Hayflick limit reduces the ability
of the organism to replace tissues.
It therefore results in a failure over time
of maintenance and repair.
And that explains why we become decrepit as we grow old.
The question was why would that be,
especially in light of the fact that the mechanism
that seems to limit the ability of cells to reproduce
is something called a telomere.
Telomere is a, it’s not a gene, but it’s a DNA sequence
at the ends of our chromosomes
that is just simply repetitive.
And the number of repeats functions like a counter.
So there’s a number of repeats that you have
after development is finished.
And then each time the cell divides a little bit
of telomere is lost.
And at the point that the telomere becomes critically short,
the cell stops dividing even though it still has
the capacity to do so.
It stops dividing, and it starts transcribing different genes
than it did when it had more telomere.
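The counter mechanism described here can be sketched as a toy simulation: each division trims the telomere, and below a critical length the cell stops dividing. The repeat counts and threshold are arbitrary illustration, not measured values:

```python
# Toy model of the Hayflick limit: telomere repeats act as a division counter.
TELOMERE_REPEATS_AT_BIRTH = 60   # arbitrary starting count (illustrative)
REPEATS_LOST_PER_DIVISION = 1    # arbitrary loss per division
CRITICAL_LENGTH = 10             # below this, the cell becomes senescent

def divisions_until_senescence(repeats=TELOMERE_REPEATS_AT_BIRTH):
    divisions = 0
    while repeats - REPEATS_LOST_PER_DIVISION >= CRITICAL_LENGTH:
        repeats -= REPEATS_LOST_PER_DIVISION
        divisions += 1
    return divisions  # the cell still exists, but it stops dividing

print(divisions_until_senescence())  # 50 divisions, then no further tissue replacement
```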
So what my work did was it looked at the fact
that the telomeric shortening was being studied
by two different groups.
It was being studied by people who were interested
in counteracting the aging process.
And it was being studied in exactly the opposite fashion
by people who were interested in tumorigenesis and cancer.
The thought being because it was true that when one looked
into tumors, they always had telomerase active.
That’s the enzyme that lengthens our telomeres.
So those folks were interested in bringing about a halt
to the lengthening of telomeres
in order to counteract cancer.
And the folks who were studying the senescence process
were interested in lengthening telomeres
in order to generate greater repair capacity.
And my point was evolutionarily speaking,
this looks like a pleiotropic effect
that the genes which create the tendency of the cells
to be limited in their capacity to replace themselves
are providing a benefit in youth,
which is that we are largely free of tumors and cancer
at the inevitable late life cost that we grow feeble
and inefficient and eventually die.
And that matches a very old hypothesis in evolutionary theory
by somebody I was fortunate enough to know, George Williams,
one of the great 20th century evolutionists
who argued that senescence would have to be caused
by pleiotropic genes that cause early life benefits
at unavoidable late life costs.
And although this isn’t the exact nature of the system,
he predicted it matches what he was expecting
in many regards to a shocking degree.
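A minimal sketch of Williams's logic, with entirely hypothetical rates and effect sizes: an allele with an early-life benefit and a late-life cost can still be favored, because the weight of a late cost is discounted by the probability of surviving long enough to pay it.

```python
# Sketch of antagonistic pleiotropy: an allele that helps early (e.g. tumor
# suppression in youth) but hurts late (senescence) can still win, because
# extrinsic mortality means few individuals survive to pay the late cost.
# All rates and effect sizes below are hypothetical, for illustration only.
annual_survival = 0.90     # assumed extrinsic survival per year
early_benefit = 0.05       # assumed fitness gain per year, ages 1-20
late_cost = 0.30           # assumed fitness loss per year, ages 40-60

def discounted(effect, start_age, end_age):
    # Weight each year's effect by the probability of surviving to that age.
    return sum(effect * annual_survival ** age for age in range(start_age, end_age + 1))

net = discounted(early_benefit, 1, 20) - discounted(late_cost, 40, 60)
print(f"net selective effect: {net:+.3f}")   # positive: the allele is favored
```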
That said, the focus of the paper is about the,
well, let me just read the abstract.
This is the end of the abstract.
We observed that captive rodent breeding protocols
designed to increase reproductive output
simultaneously exert strong selection
against reproductive senescence
and virtually eliminate selection
that would otherwise favor tumor suppression.
This appears to have greatly elongated
the telomeres of laboratory mice.
With their telomeric failsafe effectively disabled,
these animals are unreliable models
of normal senescence and tumor formation.
So basically using these mice is not going to lead
to the right kinds of conclusions.
Safety tests employing these animals
likely overestimate cancer risks
and underestimate tissue damage
and consequent accelerated senescence.
So I think, especially with your discussion with Eric,
the conclusion of this paper has to do with the fact that,
like we shouldn’t be using these mice to test the safety
or to make conclusions about cancer or senescence.
Is that the basic takeaway?
Like basically saying that the length of these telomeres
is an important variable to consider.
Well, let’s put it this way.
I think there was a reason that the world of scientists
who was working on telomeres
did not spot the pleiotropic relationship
that was the key argument in my paper.
The reason they didn’t spot it was that there was a result
that everybody knew, which seemed inconsistent.
The result was that mice have very long telomeres,
but they do not have very long lives.
Now, we can talk about what the actual meaning
of don’t have very long lives is,
but in the end, I was confronted with a hypothesis
that would explain a great many features
of the way mammals and indeed vertebrates age,
but it was inconsistent with one result.
And at first I thought,
maybe there’s something wrong with the result.
Maybe this is one of these cases
where the result was achieved once
through some bad protocol and everybody else
was repeating it. That didn’t turn out to be the case.
Many laboratories had established
that mice had ultra long telomeres.
And so I began to wonder whether or not
there was something about the breeding protocols
that generated these mice.
And what that would predict is that the mice
that have long telomeres would be laboratory mice
and that wild mice would not.
And Carol Greider, who agreed to collaborate with me,
tested that hypothesis and showed that it was indeed true,
that wild derived mice, or at least mice
that had been in captivity for a much shorter period of time
did not have ultra long telomeres.
Now, what this implied though, as you read,
is that our breeding protocols
generate lengthening of telomeres.
And the implication of that is that the animals
that have these very long telomeres
will be hyper prone to create tumors.
They will be extremely resistant to toxins
because they have effectively an infinite capacity
to replace any damaged tissue.
And so ironically, if you give one of these
ultra long telomere lab mice a toxin,
if the toxin doesn’t outright kill it,
it may actually increase its lifespan
because it functions as a kind of chemotherapy.
So the reason that chemotherapy works
is that dividing cells are more vulnerable
than cells that are not dividing.
And so if this mouse has effectively
had its cancer protection turned off,
and it has cells dividing too rapidly,
and you give it a toxin, you will slow down its tumors
faster than you harm its other tissues.
And so you’ll get a paradoxical result
that actually some drug that’s toxic
seems to benefit the mouse.
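To see why that paradox falls out of the arithmetic, here is a toy calculation in Python, with invented numbers, in which a toxin only hits cells at the moment they divide; tumor cells divide every step, normal tissue rarely. This is a sketch of the logic just described, not a model of any real drug.

```python
# Toy model: a toxin kills a fraction of cells at the moment they divide (invented numbers).
STEPS = 10

def grow(cells, division_prob, toxin_kill=0.0):
    """Cells divide with some probability per step; only dividing cells are exposed to the toxin."""
    for _ in range(STEPS):
        dividing = cells * division_prob
        resting = cells - dividing
        survivors = dividing * (1 - toxin_kill)
        cells = resting + survivors * 2   # each surviving divider becomes two cells
    return cells

for label, p in [("tumor (divides every step)", 1.0),
                 ("normal tissue (divides rarely)", 0.05)]:
    untreated = grow(1000, p)
    treated = grow(1000, p, toxin_kill=0.3)
    print(f"{label}: untreated {untreated:,.0f} -> treated {treated:,.0f}")

# The toxin cuts the fast-dividing tumor's growth by well over an order of magnitude
# while only modestly slowing the rarely dividing tissue: the chemotherapy-like effect.
```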
Now, I don’t think that that was understood
before I published my paper.
Now I’m pretty sure it has to be.
And the problem is that this actually is a system
that serves pharmaceutical companies
that have the difficult job of bringing compounds to market,
many of which will be toxic.
Maybe all of them will be toxic.
And these mice predispose our system
to declare these toxic compounds safe.
And in fact, I believe we’ve seen the errors
that result from using these mice a number of times,
most famously with Vioxx, which turned out
to do conspicuous heart damage.
Why do you think this paper and this idea
has not gotten significant traction?
Well, my collaborator, Carol Greider,
said something to me that rings in my ears to this day.
She initially, after she showed that laboratory mice
have anomalously long telomeres
and that wild mice don’t have long telomeres,
I asked her where she was going to publish that result
so that I could cite it in my paper.
And she said that she was going to keep the result in house
rather than publish it.
And at the time, I was a young graduate student.
I didn’t really understand what she was saying.
But in some sense, the knowledge that a model organism
is broken in a way that creates the likelihood
that certain results can be reliably generated,
you can publish a paper and make a big splash
with such a thing, or you can exploit the fact
that you know how those models will misbehave
and other people don’t.
So there’s a question, if somebody is motivated cynically
and what they want to do is appear to have deeper insight
into biology because they predict things
better than others do, knowing where the flaw is
so that your predictions come out true is advantageous.
At the same time, I can’t help but imagine
that the pharmaceutical industry,
when it figured out that the mice were predisposed
to suggest that drugs were safe,
didn’t leap to fix the problem because in some sense,
it was the perfect cover for the difficult job
of bringing drugs to market and then discovering
their actual toxicity profile, right?
This made things look safer than they were
and I believe a lot of profits
have likely been generated downstream.
So to kind of play devil’s advocate,
it’s also possible that this particular thing,
the length of the telomeres, is not a strong variable
for drug development and for the conclusions
that Carol and others have been studying.
Is it possible for that to be the case?
So one reason she and others could be ignoring this
is because it’s not a strong variable.
Well, I don’t believe so and in fact,
at the point that I went to publish my paper,
Carol published her result.
She did so in a way that did not make a huge splash.
Did she, and I apologize, I don’t know,
what was the emphasis of her publication of that paper?
Was it purely just kind of showing data
or is there more, because in your paper,
there’s a kind of more of a philosophical statement as well.
Well, my paper was motivated by interest
in the evolutionary dynamics around senescence.
I wasn’t pursuing grants or anything like that.
I was just working on a puzzle I thought was interesting.
Carol has, of course, gone on to win a Nobel Prize
for her co discovery with Elizabeth Blackburn
of telomerase, the enzyme that lengthens telomeres.
But anyway, she’s a heavy hitter in the academic world.
I don’t know exactly what her purpose was.
I do know that she told me she wasn’t planning to publish
and I do know that I discovered that she was
in the process of publishing very late
and when I asked her to send me the paper
to see whether or not she had put evidence in it
that the hypothesis had come from me,
she grudgingly sent it to me
and my name was nowhere mentioned
and she broke contact at that point.
What it is that motivated her, I don’t know,
but I don’t think it can possibly be
that this result is unimportant.
The fact is, the reason I called her in the first place,
and established the contact that generated our collaboration,
was that she was a leading light in the field
of telomeric studies and because of that,
this question about whether the model organisms
are distorting the understanding
of the functioning of telomeres, it’s central.
Do you feel like you’ve been,
as a young graduate student, do you think Carol
or do you think the scientific community
broadly screwed you over in some way?
I don’t think of it in those terms.
Probably partly because it’s not productive
but I have a complex relationship with this story.
On the one hand, I’m livid with Carol Greider
for what she did.
She absolutely pretended that I didn’t exist in this story
and I don’t think I was a threat to her.
My interest was as an evolutionary biologist,
I had made an evolutionary contribution,
she had tested a hypothesis and frankly,
I think it would have been better for her
if she had acknowledged what I had done.
I think it would have enhanced her work
and I was, let’s put it this way,
when I watched her Nobel lecture,
and I should say there’s been a lot of confusion
about this Nobel stuff.
I’ve never said that I should have gotten a Nobel prize.
People have misportrayed that.
In listening to her lecture,
I had one of the most bizarre emotional experiences
of my life because she presented the work
that resulted from my hypothesis.
She presented it as she had in her paper
with no acknowledgement of where it had come from
and she had in fact portrayed the distortion
of the telomeres as if it were a lucky fact
because it allowed testing hypotheses
that would otherwise not be testable.
You have to understand as a young scientist
to watch work that you have done presented
in what’s surely the most important lecture
of her career, it’s thrilling.
It was thrilling to see her figures
projected on the screen there.
To have been part of work that was important enough
for that felt great and of course,
to be erased from the story felt absolutely terrible.
So anyway, that’s sort of where I am with it.
My sense is what I’m really troubled by in this story
is the fact that as far as I know,
the flaw with the mice has not been addressed.
And actually, Eric did some looking into this.
He tried to establish, by calling the Jackson Laboratory (the JAX lab)
and trying to ascertain what had happened with the colonies,
whether any change in protocol had occurred
and he couldn’t get anywhere.
There was seemingly no awareness that it was even an issue.
So I’m very troubled by the fact that as a father,
for example, I’m in no position to protect my family
from the hazard that I believe lurks
in our medicine cabinets, right?
Even though I’m aware of where the hazard comes from,
it doesn’t tell me anything useful
about which of these drugs will turn out to do damage
if that is ultimately tested.
And that’s a very frustrating position to be in.
On the other hand, there’s a part of me
that’s even still grateful to Carol for taking my call.
She didn’t have to take my call
and talk to some young graduate student
who had some evolutionary idea
that wasn’t in her wheelhouse specifically, and yet she did.
And for a while, she was a good collaborator, so.
Well, can I, I have to proceed carefully here because
it’s a complicated topic.
So she took the call.
And you kind of, you’re kind of saying that
she basically erased credit, you know,
pretending you didn’t exist in some kind of,
in a certain sense.
Let me phrase it this way.
As a research scientist at MIT,
and especially as part of
a large set of collaborations,
I’ve had a lot of students come to me
and talk to me about ideas,
perhaps less interesting than what we’re discussing here,
in the space of AI, that I’ve been thinking about anyway.
In general, with everything I’m doing with robotics, people
have told me a bunch of ideas
that I’m already thinking about.
The point is taking that idea, see, this is different
because the idea has more power in the space
that we’re talking about here,
and robotics is like your idea means shit
until you build it.
Like, so the engineering world is a little different,
but there’s a kind of sense that I probably forgot
a lot of brilliant ideas have been told to me.
Do you think she pretended you don’t exist?
Do you think she was so busy that she kind of forgot,
you know, that she has like the stream
of brilliant people around her,
there’s a bunch of ideas that are swimming in the air,
and you just kind of forget people
that are a little bit on the periphery
on the idea generation, like, or is it some mix of both?
It’s not a mix of both.
I know that because we corresponded.
She put a graduate student on this work.
He emailed me excitedly when the results came in.
So there was no ambiguity about what had happened.
What’s more, when I went to publish my work,
I actually sent it to Carol in order to get her feedback
because I wanted to be a good collaborator to her,
and she absolutely panned it,
made many critiques that were not valid,
but it was clear at that point
that she became an antagonist,
and none of this adds up.
She couldn’t possibly have forgotten the conversation.
I believe I even sent her tissues at some point in part,
not related to this project, but as a favor.
She was doing another project that involved telomeres,
and she needed samples that I could get ahold of
because of the Museum of Zoology that I was in.
So this was not a one off conversation.
I certainly know that those sorts of things can happen,
but that’s not what happened here.
This was a relationship that existed
and then was suddenly cut short
at the point that she published her paper by surprise
without saying where the hypothesis had come from
and began to be an opposing force to my work.
Is there, there’s a bunch of trajectories
you could have taken through life.
Do you think about the trajectory of being a researcher,
of then going to war in the space of ideas,
of publishing further papers along this line?
I mean, that’s often the dynamic of that fascinating space
is you have a junior researcher with brilliant ideas
and a senior researcher that starts out as a mentor
that becomes a competitor.
I mean, that happens.
But then the way to,
it’s almost an opportunity to shine
is to publish a bunch more papers in this place
to tear it apart, to dig into,
like really make it a war of ideas.
Did you consider that possible trajectory?
I did.
A couple of things to say about it.
One, this work was not central for me.
I took a year on the telomere project
because something fascinating occurred to me
and I pursued it.
And the more I pursued it,
the clearer it was there was something there.
But it wasn’t the focus of my graduate work.
And I didn’t want to become a telomere researcher.
What I want to do is to be an evolutionary biologist
who upgrades the toolkit of evolutionary concepts
so that we can see more clearly
how organisms function and why.
And telomeres were a proof of concept, right?
That paper was a proof of concept
that the toolkit in question works.
As for the need to pursue it further,
I think it’s kind of absurd
and you’re not the first person to say
maybe that was the way to go about it.
But the basic point is, look, the work was good.
It turned out to be highly predictive.
Frankly, the model of senescence that I presented
is now widely accepted.
And I don’t feel any misgivings at all
about having spent a year on it, said my piece,
and moved on to other things
which frankly I think are bigger.
I think there’s a lot of good to be done
and it would be a waste to get overly narrowly focused.
There are so many ways through the space of science,
and the most common way is to just publish a lot.
Just publish a lot of papers, do this incremental work,
and explore the space kind of like ants looking for food.
You’re tossing out a bunch of different ideas.
Some of them could be brilliant breakthrough ideas, nature.
Some of them are more confidence kind of publications,
all those kinds of things.
Did you consider that kind of path in science?
Of course I considered it,
but I must say the experience of having my first encounter
with the process of peer review be this story,
which was frankly a debacle from one end to the other
with respect to the process of publishing.
It was not a very good sales pitch
for trying to make a difference through publication.
And I would point out part of what I ran into
and I think frankly part of what explains Carol’s behavior
is that in some parts of science,
there is this dynamic where PIs parasitize their underlings
and if you’re very, very good, you rise to the level
where one day instead of being parasitized,
you get to parasitize others.
Now I find that scientifically despicable
and it wasn’t the culture of the lab I grew up in at all.
My lab, in fact, the PI, Dick Alexander, who’s now gone,
but who was an incredible mind and a great human being,
he didn’t want his graduate students working
on the same topics he was on,
not because it wouldn’t have been useful and exciting,
but because in effect, he did not want any confusion
about who had done what, because he was a great mentor,
and the idea was that a great mentor
does not steal ideas, and you don’t want people
thinking that they do.
So anyway, my point would be,
I wasn’t up for being parasitized.
I don’t like the idea that if you are very good,
you get parasitized until it’s your turn
to parasitize others.
That doesn’t make sense to me.
Crossing over from evolution into cellular biology
may have exposed me to that.
That may have been par for the course,
but it doesn’t make it acceptable.
And I would also point out that my work falls
in the realm of synthesis.
My work generally takes evidence accumulated by others
and places it together in order to generate hypotheses
that explain sets of phenomena
that are otherwise intractable.
And I am not sure that that is best done
with narrow publications that are read by few.
And in fact, I would point to the very conspicuous example
of Richard Dawkins, who I must say I’ve learned
a tremendous amount from and I greatly admire.
Dawkins has almost no publication record
in the sense of peer reviewed papers in journals.
What he’s done instead is done synthetic work
and he’s published it in books,
which are not peer reviewed in the same sense.
And frankly, I think there’s no doubting
his contribution to the field.
So my sense is if Richard Dawkins can illustrate
that one can make contributions to the field
without using journals as the primary mechanism
for distributing what you’ve come to understand,
then it’s obviously a valid mechanism
and it’s a far better one from the point of view
of accomplishing what I want to accomplish.
Yeah, it’s really interesting.
There are of course several levels
at which you can do that kind of synthesis,
and it requires a lot of both broad
and deep thinking, which is exceptionally valuable.
You could also, and I’m working on something
with Andrew Huberman now, publish synthesis
as review papers, which are exceptionally valuable
for the community.
It brings the community together, tells a history,
tells a story of where the community has been.
It paints a picture of where the path lays for the future.
I think it’s really valuable.
And Richard Dawkins is a good example
of somebody that does that in book form
and he kind of walks the line really interestingly.
You have somebody like Neil deGrasse Tyson,
who’s more of a science communicator.
Richard Dawkins sometimes is a science communicator,
but he gets like close to the technical
to where it’s a little bit, it’s not shying away
from being really a contribution to science.
No, he’s made real contributions.
In book form.
Yes, he really has.
Which is fascinating.
I mean, Roger Penrose, I mean, similar kind of idea.
That’s interesting, that’s interesting.
Synthesis work,
work that synthesizes ideas, does not necessarily need
to be peer reviewed.
It’s peer reviewed by peers reading it.
Well, and reviewing it.
That’s it, it is reviewed by peers,
which is not synonymous with peer review.
And that’s the thing is people don’t understand
that the two things aren’t the same, right?
Peer review is an anonymous process
that happens before publication
in a place where there is a power dynamic, right?
I mean, the joke of course is that peer review
is actually peer preview, right?
Your biggest competitors get to see your work
before it sees the light of day
and decide whether or not it gets published.
And again, when your formative experience
with the publication apparatus is the one I had
with the telomere paper, there’s no way
that that seems like the right way
to advance important ideas.
And what’s the harm in publishing them
so that your peers have to review them in public
where they actually, if they’re gonna disagree with you,
they actually have to take the risk of saying,
I don’t think this is right and here’s why, right?
With their name on it.
I’d much rather that.
It’s not that I don’t want my work reviewed by peers,
but I want it done in the open, you know,
for the same reason you don’t meet
with dangerous people in private, you meet at the cafe.
I want the work reviewed out in public.
Can I ask you a difficult question?
Sure.
There is popularity in martyrdom.
There’s popularity in pointing out
that the emperor has no clothes.
That can become a drug in itself.
I’ve confronted this in scientific work I’ve done at MIT
where there are certain things that are not done well.
People are not being the best version of themselves.
And particular aspects of a particular field
are in need of a revolution.
And part of me wanted to point that out
versus doing the hard work of publishing papers
and doing the revolution.
Basically just pointing out, look,
you guys are doing it wrong and then just walking away.
Are you aware of the drug of martyrdom,
of the ego involved in it,
that it can cloud your thinking?
Probably one of the best questions I’ve ever been asked.
So let me try to sort it out.
First of all, we are all mysteries to ourself at some level.
So it’s possible there’s stuff going on in me
that I’m not aware of that’s driving.
But in general, I would say one of my better strengths
is that I’m not especially ego driven.
I have an ego, I clearly think highly of myself,
but it is not driving me.
I do not crave that kind of validation.
I do crave certain things.
I do love a good eureka moment.
There is something great about it.
And there’s something even better about the phone calls
you make next when you share it, right?
It’s pretty fun, right?
I really like it.
I also really like my subject, right?
There’s something about a walk in the forest
when you have a toolkit in which you can actually look
at creatures and see something deep, right?
I like it, that drives me.
And I could entertain myself for the rest of my life, right?
If I was somehow isolated from the rest of the world,
but I was in a place that was biologically interesting,
hopefully I would be with people that I love
and pets that I love, believe it or not.
But if I were in that situation and I could just go out
every day and look at cool stuff and figure out
what it means, I could be all right with that.
So I’m not heavily driven by the ego thing, as you put it.
So I am completely the same except instead of the pets,
I would put robots.
But so it’s not, it’s the eureka, it’s the exploration
of the subject that brings you joy and fulfillment.
It’s not the ego.
Well, there’s more to say.
No, I really don’t think it’s the ego thing.
I will say I also have kind of a secondary passion
for robot stuff.
I’ve never made anything useful, but I do believe,
I believe I found my calling.
But if this wasn’t my calling,
my calling would have been inventing stuff.
I really enjoy that too.
So I get what you’re saying about the analogy quite well.
But as far as the martyrdom thing,
I understand the drug you’re talking about
and I’ve seen it more than I’ve felt it.
I do, if I’m just to be completely candid
and this question is so good, it deserves a candid answer.
I do like the fight, right?
I like fighting against people I don’t respect
and I like winning, but I have no interest in martyrdom.
One of the reasons I have no interest in martyrdom
is that I’m having too good a time, right?
I very much enjoy my life and.
It’s such a good answer.
I have a wonderful wife.
I have amazing children.
I live in a lovely place.
I don’t wanna exit any quicker than I have to.
That said, I also believe in things
and a willingness to exit if that’s the only way
is not exactly inviting martyrdom,
but it is an acceptance that fighting is dangerous
and going up against powerful forces
means who knows what will come of it, right?
I don’t have the sense that the thing is out there
that used to kill inconvenient people.
I don’t think that’s how it’s done anymore.
It’s primarily done through destroying them reputationally,
which is not something I relish the possibility of,
but there is a difference between
a willingness to face the hazard
and a desire to face it because of the thrill, right?
For me, the thrill is in fighting when I’m in the right.
I think I feel that that is a worthwhile way
to take what I see as the kind of brutality
that is built into men and to channel it
to something useful, right?
If it is not channeled into something useful,
it will be channeled into something else,
so it damn well better be channeled into something useful.
It’s not motivated by fame or popularity,
those kinds of things.
It’s, you know what, you’re just making me realize
that enjoying the fight,
fighting the powerful for an idea that you believe is right
is a kind of optimism for the human spirit.
It’s like, we can win this.
It’s almost like you’re turning into action,
into personal action, this hope for humanity
by saying like, we can win this.
And that makes you feel good about the rest of humanity,
that if there’s people like me, then we’re going to be okay.
Even if you’re like, your ideas might be wrong or not,
but if you believe they’re right
and you’re fighting the powerful against all odds,
then we’re going to be okay.
If I were to project, I mean,
because I enjoy the fight as well,
I think that’s the way I, that’s what brings me joy,
is it’s almost like it’s optimism in action.
Well, it’s a little different for me.
And again, I think, you know, I recognize you.
You’re a familiar, your construction is familiar,
even if it isn’t mine, right?
For me, I actually expect us not to be okay.
And I’m not okay with that.
But what’s really important, if I feel like what I’ve said
is I don’t know of any reason that it’s not okay,
or any reason that it’s too late.
As far as I know, we could still save humanity
and we could get to the fourth frontier
or something akin to it.
But I expect us not to, I expect us to fuck it up, right?
I don’t like that thought, but I’ve looked into the abyss
and I’ve done my calculations
and the number of ways we could not succeed are many
and the number of ways that we could manage
to get out of this very dangerous phase of history is small.
The thing I don’t have to worry about is
that I didn’t do enough, right?
That I was a coward, that I prioritized other things.
At the end of the day, I think I will be able to say
to myself, and in fact, the thing that allows me to sleep,
is that when I saw clearly what needed to be done,
I tried to do it to the extent that it was in my power.
And if we fail, as I expect us to,
I can’t say, well, geez, that’s on me, you know?
And frankly, I regard what I just said to you
as something like a personality defect, right?
I’m trying to free myself from the sense
that this is my fault.
On the other hand, my guess is that personality defect
is probably good for humanity, right?
It’s a good one for me to have the externalities
of it are positive, so I don’t feel too bad about it.
Yeah, that’s funny, so yeah, our perspectives on the world
are different, but they rhyme, like you said.
Because I’ve also looked into the abyss,
and it kind of smiled nervously back.
So I have a more optimistic sense that we’re gonna win
more than likely we’re going to be okay.
Right there with you, brother.
I’m hoping you’re right.
I’m expecting me to be right.
But back to Eric, you had a wonderful conversation.
In that conversation, he played the big brother role,
and he was very happy about it.
He was self congratulatory about it.
Can you talk to the ways in which Eric made you
a better man throughout your life?
Yeah, hell yeah.
I mean, for one thing, you know,
Eric and I are interestingly similar in some ways
and radically different in some other ways,
and it’s often a matter of fascination
to people who know us both because almost always
people meet one of us first, and they sort of
get used to that thing, and then they meet the other,
and it throws the model into chaos.
But you know, I had a great advantage,
which is I came second, right?
So although it was kind of a pain in the ass
to be born into a world that had Eric in it
because he’s a force of nature, right?
It was also terrifically useful because A,
he was a very awesome older brother
who made interesting mistakes, learned from them,
and conveyed the wisdom of what he had discovered,
and that was, you know, I don’t know who else
ends up so lucky as to have that kind of person
blazing the trail.
It also probably, you know, my hypothesis
for what birth order effects are
is that they’re actually adaptive, right?
That the reason that a second born is different
than a first born is that they’re not born
into a world with the same niches in it, right?
And so the thing about Eric is he’s been
completely dominant in the realm of fundamental thinking,
right, like what he’s fascinated by
is the fundamental of fundamentals,
and he’s excellent at it, which meant
that I was born into a world where somebody
was becoming excellent in that, and for me
to be anywhere near the fundamental of fundamentals
was going to be pointless, right?
I was going to be playing second fiddle forever,
and I think that that actually drove me
to the other end of the continuum
between fundamental and emergent,
and so I became fascinated with biology
and have been since I was three years old, right?
I think Eric drove that, and I have to thank him for it
because, you know, I mean.
I never thought of, so Eric drives towards the fundamental,
and you drive towards the emergent,
the physics and the biology.
Right, opposite ends of the continuum,
and as Eric would be quick to point out
if he was sitting here, I treat the emergent layer,
I seek the fundamentals in it,
which is sort of an echo of Eric’s style of thinking
but applied to the very far complexity.
He overpoweringly argues for the importance of physics,
the fundamental of the fundamental.
He’s not here to defend himself.
Is there an argument to be made against that?
Or biology, the emergent,
the study of the thing that emerged
when the fundamental acts at the cosmic scale
and then builds the beautiful thing that is us
is much more important.
Psychology, biology, the systems
that we’re actually interacting with in this human world
are much more important to understand
than the low level theories of quantum mechanics
and general relativity.
Yeah, I can’t say that one is more important.
I think there’s probably a different time scale.
I think understanding the emergent layer
is more often useful, but the bang for the buck
at the far fundamental layer may be much greater.
So for example, the fourth frontier,
I’m pretty sure it’s gonna have to be fusion powered.
I don’t think anything else will do it,
but once you had fusion power,
assuming we didn’t just dump fusion power on the market
the way we would be likely to
if it was invented usefully tomorrow,
but if we had fusion power
and we had a little bit more wisdom than we have,
you could do an awful lot.
And that’s not gonna come from people like me
who look at the dynamics of it.
Can I argue against that?
Please.
I think the way to unlock fusion power
is through artificial intelligence.
So I think most of the breakthrough ideas
in the futures of science will be developed by AI systems.
And I think in order to build intelligent AI systems,
you have to be a scholar of the fundamental
of the emergent, of biology, of the neuroscience,
of the way the brain works,
of intelligence, of consciousness.
And those things, at least directly,
don’t have anything to do with physics.
Well.
You’re making me a little bit sad
because my addiction to the aha moment thing
is incompatible with outsourcing that job.
Like the outsource thing.
I don’t wanna outsource that thing to the AI.
You reap the moment.
And actually, I’ve seen this happen before
because some of the people who trained Heather and me
were phylogenetic systematists,
Arnold Kluge in particular.
And the problem with systematics
is that to do it right when your technology is primitive,
you have to be deeply embedded in the philosophical
and the logical, right?
Your method has to be based in the highest level of rigor.
Once you can sequence genes,
genes can spit so much data at you
that you can overwhelm high quality work
with just lots and lots and lots of automated work.
And so in some sense,
there’s like a generation of phylogenetic systematists
who are the last of the greats
because what’s replacing them is sequencers.
So anyway, maybe you’re right about the AI.
And I guess I’m…
What makes you sad?
I like figuring stuff out.
Is there something that you disagree with Eric on,
where you’ve tried to convince him and failed so far,
but you will eventually succeed?
You know, that is a very long list.
Eric and I have tensions over certain things
that recur all the time.
And I’m trying to think what would be the ideal…
Is it in the space of science,
in the space of philosophy, politics, family, love, robots?
Well, all right, let me…
I’m just gonna use your podcast
to wage a bit of a cryptic war
and just say there are many places
in which I believe that I have butted heads with Eric
over the course of decades
and I have seen him move in my direction
substantially over time.
So you’ve been winning.
He might win a battle here or there,
but you’ve been winning the war.
I would not say that.
It’s quite possible he could say the same thing about me.
And in fact, I know that it’s true.
There are places where he’s absolutely convinced me.
But in any case, I do believe it’s at least…
It may not be a totally even fight,
but it’s more even than some will imagine.
But yeah, we have…
There are things I say that drive him nuts, right?
Like when something, like you heard me talk about the…
What was it?
It was the autopilot that seems to be putting
a great many humans in needless medical jeopardy
over the COVID 19 pandemic.
And my feeling is we can say this almost for sure.
Anytime you have the appearance
of some captured gigantic entity
that is censoring you on YouTube
and handing down dictates from the WHO and all of that,
it is sure that there will be
a certain amount of collusion, right?
There’s gonna be some embarrassing emails in some places
that are gonna reveal some shocking connections.
And then there’s gonna be an awful lot of emergence
that didn’t involve collusion, right?
In which people were doing their little part of a job
and something was emerging.
And you never know what the admixture is.
How much are we looking at actual collusion
and how much are we looking at an emergent process?
But you should always walk in with the sense
that it’s gonna be a ratio.
And the question is, what is the ratio in this case?
I think this drives Eric nuts
because he is very focused on the people.
I think he’s focused on the people who have a choice
and make the wrong one.
And anyway, he may.
Discussion of the ratio is a distraction from that.
I think he takes it almost as an offense
because it grants cover to people who are harming others.
And I think it offends him morally.
And if I had to say, I would say it alters his judgment
on the matter.
But anyway, certainly useful just to leave open
the two possibilities and say it’s a ratio,
but we don’t know which one.
Brother to brother, do you love the guy?
Hmm, hell yeah, hell yeah.
And I’d love him if he was just my brother,
but he’s also awesome.
So I love him and I love him for who he is.
So let me ask you, back to your book,
A Hunter-Gatherer’s Guide to the 21st Century.
I can’t wait both for the book and the videos
you do on the book.
That’s really exciting that there’s like a structured,
organized way to present this.
A kind of from an evolutionary biology perspective,
a guide for the future,
using our past as the fundamental, the emergent way
to present a picture of the future.
Let me ask you about something that,
I think about a little bit in this modern world,
which is monogamy.
So I personally value monogamy.
One girl, ride or die.
There you go.
Ride or die, that’s exactly it.
But that said, I don’t know what’s the right way
to approach this,
but from an evolutionary biology perspective
or from just looking at modern society,
that seems to be an idea that’s not,
what’s the right way to put it, flourishing?
It is waning.
It’s waning.
So I suppose based on your reaction,
you’re also a supporter of monogamy
or you value monogamy.
Are you and I just delusional?
What can you say about monogamy
from the context of your book,
from the context of evolutionary biology,
from the context of being human?
Yeah, I can say that I fully believe
that we are actually enlightened
and that although monogamy is waning,
that it is not waning because there is a superior system.
It is waning for predictable other reasons.
So let us just say it is,
there is a lot of pre/trans fallacy here
where people go through a phase
where they recognize that actually
we know a lot about the evolution of monogamy
and we can tell from the fact
that humans are somewhat sexually dimorphic
that there has been a lot of polygyny in human history.
And in fact, most of human history was largely polygynous.
But it is also the case that most of the people
on earth today belong to civilizations
that are at least nominally monogamous
and have practiced monogamy.
And that’s not anti evolutionary.
What that is is part of what I mentioned before
where human beings can swap out their software program
and different mating patterns are favored
in different periods of history.
So I would argue that the benefit of monogamy,
the primary one that drives the evolution
of monogamous patterns in humans
is that it brings all adults into child rearing.
Now the reason that that matters
is because human babies are very labor intensive.
In order to raise them properly,
having two parents is a huge asset
and having more than two parents,
having an extended family also very important.
But what that means is that for a population
that is expanding, a monogamous mating system makes sense.
It makes sense because it means that the number of offspring
that can be raised is elevated.
It’s elevated because all potential parents
are involved in parenting.
Whereas if you sideline a bunch of males
by having a polygynous system
in which one male has many females,
which is typically the way that works,
what you do is you sideline all those males,
which means the total amount of parental effort is lower
and the population can’t grow.
So what I’m arguing is that you should expect to see
populations that face the possibility of expansion
endorse monogamy.
And at the point that they have reached carrying capacity,
you should expect to see polygyny break back out.
And what we are seeing
is a kind of false sophistication around polyamory,
which will end up breaking down into polygyny,
which will not be in the interest of most people.
Really the only people whose interest
it could be argued to be in
would be the very small number of males at the top
who have many partners and everybody else suffers.
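A back-of-the-envelope way to see that parenting-capacity argument, with invented numbers: suppose each child needs roughly two adults’ worth of care and sidelined males contribute essentially none. The population sizes and care figures below are hypothetical, chosen only to make the comparison concrete.

```python
# Back-of-the-envelope comparison of child-rearing capacity (all numbers invented).
ADULTS = 1000            # 500 males and 500 females
CARE_PER_CHILD = 2.0     # adult-equivalents of parental effort each child needs

def rearing_capacity(parenting_adults):
    """How many children the pool of actively parenting adults can support."""
    return parenting_adults / CARE_PER_CHILD

# Monogamy: every adult pairs off and contributes parental effort.
monogamy = rearing_capacity(ADULTS)

# Polygyny: suppose 50 males hold all 500 females and the other 450 males
# are sidelined, contributing little or no parental effort.
polygyny = rearing_capacity(50 + 500)

print(monogamy, polygyny)   # 500.0 vs 275.0 children supported per generation
```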
Is it possible to make the argument
if we focus in on those males at the quote unquote top
with many female partners,
is it possible to say that that’s a suboptimal life,
that a single partner is the optimal life?
Well, it depends what you mean.
I have a feeling that you and I wouldn’t have to go very far
to figure out that what might be evolutionarily optimal
doesn’t match my values as a person
and I’m sure it doesn’t match yours either.
Can we try to dig into that gap between those two?
Sure.
I mean, we can do it very simply.
Selection might favor your engaging in war
against a defenseless enemy or genocide, right?
It’s not hard to figure out
how that might put your genes at advantage.
I don’t know about you, Lex.
I’m not getting involved in no genocide.
It’s not gonna happen.
I won’t do it.
I will do anything to avoid it.
So some part of me has decided that my conscious self
and the values that I hold trump my evolutionary self
and once you figure out that in some extreme case,
that’s true and then you realize
that that means it must be possible in many other cases
and you start going through all of the things
that selection would favor
and you realize that a fair fraction of the time,
actually, you’re not up for this.
You don’t wanna be some robot on a mission
that involves genocide when necessary.
You wanna be your own person and accomplish things
that you think are valuable.
And so, among the things I’m not advocating:
let’s suppose you were in a position
to be one of those males at the top of a polygynous system.
We both know why that would be rewarding, right?
But we also both recognize.
Do we?
Yeah, sure.
Lots of sex?
Yeah.
Okay, what else?
Lots of sex and lots of variety, right?
So look, every red blooded American slash Russian male
can understand why that’s appealing, right?
On the other hand, it is up against an alternative
which is having a partner with whom one is bonded
especially closely, right?
Right.
And so.
A love.
Right.
Well, I don’t wanna straw man the polygyny position.
Obviously polygyny is complex
and there’s nothing that stops a man presumably
from loving multiple partners and from them loving him back.
But in terms of, if love is your thing,
there’s a question about, okay, what is the quality of love
if it is divided over multiple partners, right?
And what is the net consequence for love in a society
when multiple people will be frozen out
for every individual male in this case who has it?
And what I would argue is, and you know,
this is weird to even talk about,
but this is partially me just talking
from personal experience.
I think there actually is a monogamy program in us
and it’s not automatic.
But if you take it seriously, you can find it
and frankly, marriage, and it doesn’t have to be marriage,
but whatever it is that results in a lifelong bond
with a partner has gotten a very bad rap.
You know, it’s the butt of too many jokes.
But the truth is, it’s hugely rewarding, it’s not easy.
But if you know that you’re looking for something, right?
If you know that the objective actually exists
and it’s not some utopian fantasy that can’t be found,
if you know that there’s some real world, you know,
warts and all version of it, then you might actually think,
hey, that is something I want and you might pursue it
and my guess is you’d be very happy when you find it.
Yeah, I think there is, getting to the fundamental
and the emergent, I feel like there is some kind of physics
of love.
So one, there’s a conservation thing going on.
So if you have like many partners, yeah, in theory,
you should be able to love all of them deeply.
But it seems like in reality that love gets split.
Yep.
Now, there’s another law that’s interesting
in terms of monogamy.
I don’t know if it’s at the physics level,
but if you are in a monogamous relationship by choice
and almost as in slight rebellion to social norms,
that’s much more powerful.
Like if you choose that one partnership,
that’s also more powerful.
If like everybody’s in a monogamous,
this pressure to be married and this pressure of society,
that’s different because that’s almost like a constraint
on your freedom that is enforced by something
other than your own ideals.
It’s by somebody else.
When you yourself choose to, I guess,
create these constraints, that enriches that love.
So there’s some kind of love function,
like E equals MC squared, but for love,
that I feel like if you have less partners
and it’s done by choice, that can maximize that.
And that love can transcend the biology,
transcend the evolutionary biology forces
that have to do much more with survival
and all those kinds of things.
It can transcend to take us to a richer experience,
which we have the luxury of having,
exploring of happiness, of joy, of fulfillment,
all those kinds of things.
Totally agree with this.
And there’s no question that by choice,
when there are other choices,
imbues it with meaning that it might not otherwise have.
I would also say, I’m really struck by,
and I have a hard time not feeling terrible sadness
over what younger people are coming
to think about this topic.
I think they’re missing something so important
and so hard to phrase
that they don’t even know that they’re missing it.
They might know that they’re unhappy,
but they don’t understand what it is
they’re even looking for,
because nobody’s really been honest with them
about what their choices are.
And I have to say, if I was a young person,
or if I was advising a young person,
which I used to do, again, a million years ago
when I was a college professor four years ago,
but I used to talk to students.
I knew my students really well,
and they would ask questions about this,
and they were always curious
because Heather and I seemed to have a good relationship,
and many of them knew both of us.
So they would talk to us about this.
If I was advising somebody, I would say,
do not bypass the possibility
that what you are supposed to do is find somebody worthy,
somebody who can handle it,
somebody who you are compatible with,
and that you don’t have to be perfectly compatible.
It’s not about dating until you find the one.
It’s about finding somebody whose underlying values
and viewpoint are complementary to yours,
sufficient that you fall in love.
If you find that person, opt out together.
Get out of this damn system
that’s telling you what’s sophisticated
to think about love and romance and sex.
Ignore it together, all right?
That’s the key, and I believe you’ll end up laughing
in the end if you do it.
You’ll discover, wow, that’s a hellscape
that I opted out of, and this thing I opted into?
Complicated, difficult, worth it.
Nothing that’s worth it is ever not difficult,
so we should even just skip
the whole statement about difficult.
Yeah, all right.
I just, I wanna be honest.
It’s not like, oh, it’s nonstop joy.
No, it’s fricking complex, but worth it?
No question in my mind.
Is there advice outside of love
that you can give to young people?
You were a million years ago a professor.
Is there advice you can give to young people,
high schoolers, college students about career, about life?
Yeah, but it’s not, they’re not gonna like it
because it’s not easy to operationalize,
and this was a problem when I was a college professor, too.
People would ask me what they should do.
Should they go to graduate school?
I had almost nothing useful to say
because the job market and the market of prejob training
and all of that, these things are all so distorted
and corrupt that I didn’t wanna point anybody to anything
because it’s all broken, and I would tell them that,
but I would say that results in a kind of meta level advice
that I do think is useful.
You don’t know what’s coming.
You don’t know where the opportunities will be.
You should invest in tools rather than knowledge.
To the extent that you can do things,
you can repurpose that no matter what the future brings.
If you’re a robot guy,
you’ve got the skills of a robot guy.
Now, if civilization failed
and the stuff of robot building disappeared with it,
you’d still have the mind of a robot guy,
and the mind of a robot guy can retool
around all kinds of things, whether you’re forced to work
with fibers that are made into ropes.
Your mechanical mind would be useful in all kinds of places,
so invest in tools like that that can be easily repurposed,
and invest in combinations of tools, right?
If civilization keeps limping along,
you’re gonna be up against all sorts of people
who have studied the things that you studied, right?
If you think, hey, computer programming
is really, really cool, and you pick up computer programming,
guess what, you just entered a large group of people
who have that skill, and many of them will be better
than you, almost certainly.
On the other hand, if you combine that with something else
that’s very rarely combined with it,
if you have, I don’t know, carpentry
and computer programming, if you take combinations
of things that are, even if they’re both common,
not commonly found together,
then those combinations create a rarefied space
where you inhabit it, and even if the things
don’t even really touch, but nonetheless,
they create a mind in which the two things are live
and you can move back and forth between them
and step out of your own perspective
by moving from one to the other,
that will increase what you can see
and the quality of your tools.
And so anyway, that isn’t useful advice.
It doesn’t tell you whether you should go
to graduate school or not, but it does tell you
the one thing we can say for certain about the future
is that it’s uncertain, and so prepare for it.
And like you said, there’s cool things to be discovered
in the intersection of fields and ideas.
And I would look at grad school that way,
actually, if you do go.
I mean, every course
in grad school, undergrad too,
was like this little journey that you’re on
that explores a particular field.
And it’s not immediately obvious how useful it is,
but it allows you to discover intersections
between that thing and some other thing.
So you’re bringing to the table these pieces of knowledge,
some of which when intersected might create a niche
that’s completely novel, unique, and will bring you joy.
I mean, I took a huge number of courses
in theoretical computer science.
Most of them seem useless, but they totally changed
the way I see the world, in ways that I’m not prepared to,
or that are a little bit difficult to, kind of make explicit,
but taken together, they’ve allowed me to see,
for example, the world of robotics totally different
and different from many of my colleagues
and friends and so on.
And I think that’s a good way to see grad school,
if you do go: as an opportunity
to explore intersections of fields,
even if the individual fields seem useless.
Yeah, and useless doesn’t mean useless, right?
Useless means not directly applicable,
but a good, useless course can be the best one
you ever took.
Yeah, I took James Joyce, a course on James Joyce,
and that was truly useless.
Well, I took immunobiology in the medical school
when I was at Penn as, I guess I would have been
a freshman or a sophomore.
I wasn’t supposed to be in this class.
It blew my goddamn mind, and it still does, right?
I mean, we had this, I don’t even know who it was,
but we had this great professor who was highly placed
in the world of immunobiology.
The course is called Immunobiology, not immunology.
Immunobiology, it had the right focus,
and as I recall it, the professor stood sideways
to the chalkboard, staring off into space,
literally stroking his beard with this bemused look
on his face through the entire lecture.
And you had all these medical students
who were so furiously writing notes
that I don’t even think they were noticing
the person delivering this thing,
but I got what this guy was smiling about.
It was like so, what he was describing,
adaptive immunity is so marvelous, right?
That it was like almost a privilege to even be saying it
to a room full of people who were listening, you know?
But anyway, yeah, I took that course,
and lo and behold, COVID.
That’s gonna be useful.
Well, yeah, suddenly it’s front and center,
and wow, am I glad I took it.
But anyway, yeah, useless courses are great.
And actually, Eric gave me one of the greater pieces
of advice, at least for college, that anyone’s ever given,
which was don’t worry about the prereqs.
Take it anyway, right?
But now, I don’t even know if kids can do this now
because the prereqs are now enforced by a computer.
But back in the day, if you didn’t mention
that you didn’t have the prereqs,
nobody stopped you from taking the course.
And what he told me, which I didn’t know,
was that often the advanced courses are easier in some way.
The material’s complex, but it’s not like intro bio
where you’re learning a thousand things at once, right?
It’s like focused on something.
So if you dedicate yourself, you can pull it off.
Yeah, stay with an idea for many weeks at a time,
and it’s ultimately rewarding,
and not as difficult as it looks.
Can I ask you a ridiculous question?
Please.
What do you think is the meaning of life?
Well, I feel terrible having to give you the answer.
I realize you asked the question,
but if I tell you, you’re gonna again feel bad.
I don’t wanna do that.
But look, there’s two.
There can be a disappointment.
No, it’s gonna be a horror, right?
Because we actually know the answer to the question.
Oh no.
It’s completely meaningless.
There is nothing that we can do
that escapes the heat death of the universe
or whatever it is that happens at the end.
And we’re not gonna make it there anyway.
But even if you were optimistic about our ability
to escape every existential hazard indefinitely,
ultimately it’s all for naught and we know it, right?
That said, once you stare into that abyss,
and then it stares back and laughs or whatever happens,
then the question is, okay, given that,
can I relax a little bit, right?
And figure out, well, what would make sense
if that were true, right?
And I think there’s something very clear to me.
I think if you do all of the,
if I just take the values that I’m sure we share
and extrapolate from them,
I think the following thing is actually a moral imperative.
Being a human and having opportunity
is absolutely fucking awesome, right?
A lot of people don’t make use of the opportunity
and a lot of people don’t have opportunity, right?
They get to be human, but they’re too constrained
by keeping a roof over their heads to really be free.
But being a free human is fantastic.
And being a free human on this beautiful planet,
crippled as it may be, is unparalleled.
I mean, what could be better?
How lucky are we that we get that, right?
So if that’s true, that it is awesome to be human
and to be free, then surely it is our obligation
to deliver that opportunity to as many people as we can.
And how do you do that?
Well, I think I know what job one is.
Job one is we have to get sustainable.
The way to get the maximum number of humans
to have that opportunity to be both here and free
is to make sure that there isn’t a limit
on how long we can keep doing this.
That effectively requires us to reach sustainability.
And then at sustainability, you could have a horror show
of sustainability, right?
You could have a totalitarian sustainability.
That’s not the objective.
The objective is to liberate people.
And so the question, the whole fourth frontier question,
frankly, is how do you get to a sustainable
and indefinitely sustainable state
in which people feel liberated,
in which they are liberated,
to pursue the things that actually matter,
to pursue beauty, truth, compassion, connection,
all of those things that we could list as unalloyed goods,
those are the things that people should be most liberated
to do in a system that really functions.
And anyway, my point is,
I don’t know how precise that calculation is,
but I’m pretty sure it’s not wrong.
It’s accurate enough.
And if it is accurate enough, then the point is, okay,
well, there’s no ultimate meaning,
but the proximate meaning is that one.
How many people can we get to have this wonderful experience
that we’ve gotten to have, right?
And there’s no way that’s so wrong
that if I invest my life in it,
that I’m making some big error.
I’m sure of that.
Life is awesome, and we wanna spread the awesome
as much as possible.
Yeah, you sum it up that way, spread the awesome.
Spread the awesome.
So that’s the fourth frontier.
And if that fails, if the fourth frontier fails,
the fifth frontier will be defined by robots,
and hopefully they’ll learn the lessons
of the mistakes that the humans made
and build a better world with more awesome.
I hope they’re very happy here
and that they do a better job with the place than we did.
Yeah.
Brett.
I can’t believe it took us this long to talk,
as I mentioned to you before,
that we haven’t actually spoken, I think, at all.
And I’ve always felt that we’re already friends.
I don’t know how that works
because I’ve listened to your podcasts a lot.
I’ve also sort of loved your brother.
And so it was like,
we’ve known each other for the longest time,
and I hope we can be friends and talk often again.
And I hope that you get a chance to meet
some of my robot friends as well and fall in love.
And I’m so glad that you love robots as well.
So we get to share in that love.
So I can’t wait for us to interact together.
So we went from talking about some of the worst failures
of humanity to some of the most beautiful
aspects of humanity.
What else can you ask for from a conversation?
Thank you so much for talking today.
You know, Lex, I feel the same way towards you,
and I really appreciate it.
This has been a lot of fun,
and I’m looking forward to our next one.
Thanks for listening to this conversation
with Brett Weinstein,
and thank you to the Jordan Harbinger Show,
ExpressVPN, Magic Spoon, and Four Sigmatic.
Check them out in the description to support this podcast.
And now, let me leave you with some words
from Charles Darwin.
Ignorance more frequently begets confidence
than does knowledge.
It is those who know little, not those who know much,
who so positively assert that this or that problem
will never be solved by science.
Thank you for listening, and hope to see you next time.