The following is a conversation with Peter Wang,
one of the most impactful leaders and developers
in the Python community.
Former physicist, current philosopher,
and someone who many people told me about
and praised as a truly special mind
that I absolutely should talk to.
Recommendations ranging from Travis Oliphant
to Eric Weinstein.
So, here we are.
This is the Lex Fridman podcast.
To support it, please check out our sponsors
in the description.
And now, here’s my conversation with Peter Wang.
You’re one of the most impactful humans
in the Python ecosystem.
So, you’re an engineer, leader of engineers,
but you’re also a philosopher.
So, let’s talk both in this conversation
about programming and philosophy.
First, programming.
What to you is the best
or maybe the most beautiful feature of Python?
Or maybe the thing that made you fall in love
or stay in love with Python?
Well, those are three different things.
What I think is the most beautiful,
what made me fall in love, what made me stay in love.
When I first started using it
was when I was a C++ computer graphics performance nerd.
In the 90s?
Yeah, in the late 90s.
And that was my first job out of college.
And we kept trying to do more and more abstract
and higher order programming in C++,
which at the time was quite difficult.
With templates, the compiler support wasn’t great, et cetera.
So, when I started playing around with Python,
that was my first time encountering
really first class support for types, for functions,
and things like that.
And it felt so incredibly expressive.
So, that was what kind of made me fall in love
with it a little bit.
And also, once you spend a lot of time
in a C++ dev environment,
the ability to just whip something together
that basically runs and works the first time is amazing.
So, really productive scripting language.
I mean, I knew Perl, I knew Bash, I was decent at both.
But Python just made everything,
it made the whole world accessible.
I could script this and that and the other,
network things, little hard drive utilities.
I could write all of these things
in the space of an afternoon.
And that was really, really cool.
So, that’s what made me fall in love.
Is there something specific you could put your finger on
as to why you’re not programming in Perl today?
Like, why Python for scripting?
I think there’s not a specific thing
as much as the design motif of both the creator
of the language and the core group of people
that built the standard library around him.
There was definitely, there was a taste to it.
I mean, Steve Jobs used that term
in somewhat of an arrogant way,
but I think it’s a real thing,
that it was designed to fit.
A friend of mine actually expressed this really well.
He said, Python just fits in my head.
And there’s nothing better to say than that.
Now, people might argue modern Python,
there’s a lot more complexity,
but certainly version 1.5.2,
which I think was my first version,
fit in my head very easily.
So, that’s what made me fall in love with it.
Okay, so the most beautiful feature of Python
that made you stay in love.
It’s like, over the years, what do you do a double take on
and return to often
as a thing that just brings you a smile?
I really still like the ability to play with meta classes
and express higher order of things.
When I have to create some new object model
to model something, right?
It’s easy for me,
cause I’m pretty expert as a Python programmer.
I can easily put all sorts of lovely things together
and use properties and decorators and other kinds of things
and create something that feels very nice.
So, that to me, I would say that’s tied
with the NumPy and vectorization capabilities.
I love thinking in terms of the matrices and the vectors
and these kind of data structures.
So, I would say those two are kind of tied for me.
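As a rough illustration of the kind of object model Peter is describing, here is a minimal Python sketch (not his actual code) using a property for validation and a small decorator:

```python
# Illustrative only: a property that validates on assignment and a tiny decorator.
import functools


def logged(func):
    """Decorator that announces a call before running the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper


class Particle:
    def __init__(self, mass):
        self.mass = mass          # goes through the property setter below

    @property
    def mass(self):
        return self._mass

    @mass.setter
    def mass(self, value):
        if value <= 0:
            raise ValueError("mass must be positive")
        self._mass = value

    @logged
    def momentum(self, velocity):
        return self._mass * velocity


p = Particle(2.0)
print(p.momentum(3.0))            # prints "calling momentum", then 6.0
```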
So, the elegance of the NumPy data structure,
like slicing through the different multi dimensional.
Yeah, there’s just enough things there.
It’s like a very, it’s a very simple, comfortable tool.
Just, it’s easy to reason about what it does
when you don’t stray too far afield.
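A small sketch of the NumPy slicing and vectorization being discussed here, with illustrative values only:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)   # a 2 x 3 x 4 array

print(a[0, :, 1])        # first block, every row, second column -> [1 5 9]
print(a[:, -1, ::2])     # last row of each block, every other column

# Vectorized math: no explicit Python loops.
x = np.linspace(0.0, 1.0, 5)
y = 3.0 * x**2 + 1.0
print(y.mean(), (y > 1.5).sum())
```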
Can you put your finger on how to design a language
such that it fits in your head?
Certain things like the colon
or the certain notation aspects of Python
that just kind of work.
Is it something you have to kind of write out on paper,
look and say, it’s just right?
Is it a taste thing or is there a systematic process?
What’s your sense?
I think it’s more of a taste thing.
But one thing that should be said
is that you have to pick your audience, right?
So, the better defined the user audience is
or the users are, the easier it is to build something
that fits in their minds because their needs
will be more compact and coherent.
It is possible to find a projection, right?
A compact projection for their needs.
The more diverse the user base, the harder that is.
And so, as Python has grown in popularity,
that’s also naturally created more complexity
as people try to design any given thing.
There’ll be multiple valid opinions
about a particular design approach.
And so, I do think that’s the downside of popularity.
It’s almost an intrinsic aspect
of the complexity of the problem.
Well, at the very beginning,
aren’t you an audience of one? Ultimately,
weren’t all the greatest projects in history
just solving a problem that you yourself had?
Well, so Clay Shirky in his book on crowdsourcing
or his kind of thoughts on crowdsourcing,
he identifies the first step of crowdsourcing
is me first collaboration.
You first have to make something
that works well for yourself.
It’s very telling that when you look at all of the impactful
big projects, well, they’re fundamental projects now
in the SciPy and Pydata ecosystem.
They all started with the people in the domain
trying to scratch their own itch.
And the whole idea of scratching your own itch
is something that the open source
or the free software world has known for a long time.
But in the scientific computing areas,
these are assistant professors
or electrical engineering grad students.
They didn’t have really a lot of programming skill
necessarily, but Python was just good enough
for them to put something together
that fit in their domain, right?
So it’s almost like a,
it’s a necessity is the mother of invention aspect.
And also it was a really harsh filter
for utility and compactness and expressiveness.
If it was too hard to use,
then they wouldn’t have built it,
because that was just too much trouble, right?
It was a side project for them.
And also necessity creates a kind of deadline.
It seems like a lot of these projects
are quickly thrown together in the first step.
And that, even though it’s flawed,
that just seems to work well for software projects.
Well, it does work well for software projects in general.
And in this particular space,
one of my colleagues, Stan Siebert identified this,
that all the projects in the SciPy ecosystem,
if we just rattle them off,
there’s NumPy, there’s SciPy
built by different collaborations of people.
Although Travis is the heart of both of them.
But NumPy coming from Numeric and Numarray,
these are different people.
And then you’ve got Pandas,
you’ve got Jupyter or IPython,
there’s Matplotlib,
there’s just so many others, I’m not gonna do justice
if I try to name them all.
But all of them are actually different people.
And as they rolled out their projects,
the fact that they had limited resources
meant that they were humble about scope.
A great famous hacker, Jamie Zawinski,
once said that every geek’s dream
is to build the ultimate middleware, right?
And the thing is with these scientists turned programmers,
they had no such dream.
They were just trying to write something
that was a little bit better for what they needed
than MATLAB,
and they were gonna leverage what everyone else had built.
So naturally, almost in kind of this annealing process
or whatever, we built a very modular cover
of the basic needs of a scientific computing library.
If you look at the whole human story,
how much of a leap is it?
We’ve developed all kinds of languages,
all kinds of methodologies for communication.
It just kind of like grew this collective intelligence,
civilization grew, it expanded, wrote a bunch of books,
and now we tweet how big of a leap is programming
if programming is yet another language?
Is it just a nice little trick
that’s temporary in our human history,
or is it like a big leap in the,
almost us becoming another organism
at a higher level of abstraction, something else?
I think the act of programming
or using grammatical constructions
of some underlying primitives,
that is something that humans do learn,
but every human learns this.
Anyone who can speak learns how to do this.
What makes programming different
has been that up to this point,
when we try to give instructions to computing systems,
all of our computers, well, actually this is not quite true,
but I’ll first say it,
and then I’ll tell you why it’s not true.
But for the most part,
we can think of computers as being these iterated systems.
So when we program,
we’re giving very precise instructions to iterated systems
that then run at incomprehensible speed
and run those instructions.
In my experience,
some people are just better equipped
to model systematic iterated systems,
well, whatever, iterated systems in their head.
Some people are really good at that,
and other people are not.
And so when you have like, for instance,
sometimes people have tried to build systems
that make programming easier by making it visual,
drag and drop.
And the issue is you can have a drag and drop thing,
but once you start having to iterate the system
with conditional logic,
handling case statements and branch statements
and all these other things,
the visual drag and drop part doesn’t save you anything.
You still have to reason about this giant iterated system
with all these different conditions around it.
That’s the hard part, right?
So handling iterated logic, that’s the hard part.
The languages we use then emerge
to give us the ability and capability over these things.
Now, the one exception to this rule, of course,
is the most popular programming system in the world,
which is Excel, which is a data flow
and a data driven, immediate mode,
data transformation oriented programming system.
And it’s actually not an accident
that that system is the most popular programming system
because it’s so accessible
to a much broader group of people.
I do think as we build future computing systems,
you’re actually already seeing this a little bit,
it’s much more about composition of modular blocks.
They themselves actually maintain all their internal state
and the interfaces between them
are well defined data schemas.
And so to stitch these things together using like IFTTT
or Zapier or any of these kind of,
I would say compositional scripting kinds of things,
I mean, HyperCard was also a little bit in this vein.
That’s much more accessible to most people.
It’s really that implicit state
that’s so hard for people to track.
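To make the contrast concrete, here is a hypothetical Python sketch (not from the conversation) of the same computation written first as an iterated loop with branches and mutable state, then as an Excel-like chain of stateless transformations:

```python
orders = [120.0, -5.0, 80.0, 300.0]

# Iterated style: a mutable accumulator plus conditional logic to trace.
total = 0.0
for amount in orders:
    if amount < 0:          # refund? skip? the reader must follow every branch
        continue
    total += amount * 1.08  # apply tax as we go

# Dataflow style: each "cell" is a pure transformation of the previous one.
valid = [a for a in orders if a >= 0]        # like a filtered column
taxed = [a * 1.08 for a in valid]            # like "=A2*1.08" filled down
total_dataflow = sum(taxed)                  # like "=SUM(B:B)"

assert abs(total - total_dataflow) < 1e-9
```

The second form has no hidden accumulator, which is the property that makes spreadsheet-style, dataflow programming easier for most people to reason about.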
Yeah, okay, so that’s modular stuff,
but there’s also an aspect
where you’re standing on the shoulders of giants.
So you’re building like higher and higher levels
of abstraction, but you do that a little bit with language.
So with language, you develop sort of ideas,
philosophies from Plato and so on.
And then you kind of leverage those philosophies
as you try to have deeper and deeper conversations.
But with programming,
it seems like you can build much more complicated systems.
Like without knowing how everything works,
you can build on top of the work of others.
And it seems like you’re developing
more and more sophisticated expressions,
ability to express ideas in a computational space.
I think it’s worth pondering the difference here
between complexity and complication.
Okay, right. Back to Excel.
Well, not quite back to Excel,
but the idea is when we have a human conversation,
all languages for humans emerged
to support human relational communications,
which is that the person we’re communicating with
is a person and they would communicate back to us.
And so we sort of hit a resonance point, right?
When we actually agree on some concepts.
So there’s a messiness to it and there’s a fluidity to it.
With computing systems,
when we express something to the computer and it’s wrong,
we just try again.
So we can basically live many virtual worlds
of having failed at expressing ourselves to the computer
until the one time we expressed ourselves right.
Then we kind of put in production
and then discover that it’s still wrong
a few days down the road.
So I think the sophistication of things
that we build with computing,
one has to really pay attention to the difference
between when an end user is expressing something
onto a system that exists
versus when they’re extending the system
to increase the system’s capability
for someone else to then interface with.
We happen to use the same language for both of those things
in most cases, but it doesn’t have to be that.
And Excel is actually a great example of this,
of kind of a counterpoint to that.
Okay, so what about the idea of, you said messiness.
Wouldn’t you put the software 2.0 idea,
this idea of machine learning
into the further and further steps
into the world of messiness.
The same kind of beautiful messiness of human communication.
Isn’t that what machine learning is?
Is building on levels of abstraction
that don’t have messiness in them,
that at the operating system level,
then there’s Python, the programming languages
that have more and more power.
But then finally, there’s neural networks
that ultimately work with data.
And so the programming is almost in the space of data
and the data is allowed to be messy.
Isn’t that a kind of program?
So the idea of software 2.0 is a lot of the programming
happens in the space of data, so back to Excel,
all roads lead back to Excel, in the space of data
and also the hyperparameters of the neural networks.
And all of those allow the same kind of messiness
that human communication allows.
It does, but my background is in physics.
I took like two CS courses in college.
So I don’t have, now I did cram a bunch of CS in prep
when I applied for grad school,
but still I don’t have a formal background
in computer science.
But what I have observed in studying programming languages
and programming systems and things like that
is that there seems to be this triangle.
It’s one of these beautiful little iron triangles
that you find in life sometimes.
And it’s the connection between the code correctness
and kind of expressiveness of code,
the semantics of the data,
and then the kind of correctness or parameters
of the underlying hardware compute system.
So there’s the algorithms that you wanna apply,
there’s what the bits that are stored on whatever media
actually represent, so the semantics of the data
within the representation,
and then there’s what the computer can actually do.
And every programming system, every information system
ultimately finds some spot in the middle
of this little triangle.
Sometimes some systems collapse them into just one edge.
Are we including humans as a system in this?
No, no, I’m just thinking about computing systems here.
And the reason I bring this up is because
I believe there’s no free lunch around this stuff.
So if we build machine learning systems
to sort of write the correct code
that is at a certain level of performance,
so it’ll sort of select with hyperparameters
we can tune kind of how we want the performance boundary
in SLA to look like for transforming some set of inputs
into certain kinds of outputs.
That training process itself is intrinsically sensitive
to the kinds of inputs we put into it.
It’s quite sensitive to the boundary conditions
we put around the performance.
So I think even as we move to using automated systems
to build this transformation,
as opposed to humans explicitly
from a top down perspective, figuring out,
well, this schema and this database and these columns
get selected for this algorithm,
and here we put a Fibonacci heap for some other thing.
Human design or computer design,
ultimately the boundaries that we hit
with these information systems
are where the representation of the data hits the real world,
where there’s a lot of slop and a lot of interpretation.
And that’s where actually I think
a lot of the work will go in the future
is actually understanding kind of how to better
in the view of these live data systems,
how to better encode the semantics of the world
for those things.
There’ll be less of the details
of how we write a particular SQL query.
Okay, but given the semantics of the real world
and the messiness of that,
what does the word correctness mean
when you’re talking about code?
There’s a lot of dimensions to correctness.
Historically, and this is one of the reasons I say
that we’re coming to the end of the era of software,
because for the last 40 years or so,
software correctness was really defined
about functional correctness.
I write a function, it’s got some inputs,
does it produce the right outputs?
If so, then I can turn it on,
hook it up to the live database and it goes.
And more and more now we have,
I mean, in fact, I think the bright line in the sand
between machine learning systems
or modern data driven systems
versus classical software systems
is that the values of the input
actually have to be considered with the function together
to say this whole thing is correct or not.
And usually there’s a performance SLA as well.
Like did it actually finish making this?
What’s SLA?
Sorry, service level agreement.
So it has to return within some time.
You have a 10 millisecond time budget
to return a prediction of this level of accuracy, right?
So these are things that were not traditionally
in most business computing systems for the last 20 years
at all, people didn’t think about it.
But now we have value dependence on functional correctness.
So that question of correctness
is becoming a bigger and bigger question.
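A hedged sketch of what value-dependent correctness plus a latency SLA might look like in code; the model interface, threshold, and time budget below are all made up for illustration:

```python
import time


def predict_with_sla(model, x, budget_s=0.010, min_confidence=0.9):
    """Accept a prediction only if it is confident enough AND fast enough."""
    start = time.perf_counter()
    label, confidence = model(x)              # hypothetical model interface
    elapsed = time.perf_counter() - start

    if elapsed > budget_s:
        return None, "SLA_MISSED"             # too slow, even if accurate
    if confidence < min_confidence:
        return None, "NEEDS_REVIEW"           # an input the system isn't sure about
    return label, "OK"


# Toy stand-in model: confident on small inputs, unsure on large ones.
def toy_model(x):
    return ("small" if x < 10 else "large", 0.99 if x < 10 else 0.6)


print(predict_with_sla(toy_model, 3))    # ('small', 'OK')
print(predict_with_sla(toy_model, 42))   # (None, 'NEEDS_REVIEW')
```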
How does that map to the end of software?
We’ve thought about software as just this thing
that you can do in isolation with some test trial inputs
and in a very sort of sandboxed environment.
And we can quantify how does it scale?
How does it perform?
How many nodes do we need to allocate
if we wanna scale this many inputs?
When we start turning this stuff into prediction systems,
real cybernetic systems,
you’re going to find scenarios where you get inputs
that you’re gonna wanna spend
a little more time thinking about.
You’re gonna find inputs that are not,
it’s not clear what you should do, right?
So then the software has a varying amount of runtime
and correctness with regard to input.
And that is a different kind of system altogether.
Now it’s a full on cybernetic system.
It’s a next generation information system
that is not like traditional software systems.
Can you maybe describe what is a cybernetic system?
Do you include humans in that picture?
So is a human in the loop kind of complex mess
of the whole kind of interactivity of software
with the real world or is it something more concrete?
Well, when I say cybernetic,
I really do mean that the software itself
is closing the observe, orient, decide, act loop by itself.
So humans being out of the loop is what,
for me, makes it a cybernetic system.
And humans are out of that loop.
When humans are out of the loop,
when the machine is actually sort of deciding on its own
what it should do next to get more information,
that makes it a cybernetic system.
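As a toy illustration of a system closing the observe, orient, decide, act loop on its own, here is a minimal sketch in which every function is hypothetical; the point is only that no human input enters the loop:

```python
import random


def observe():
    return {"temperature": random.uniform(15, 35)}


def orient(observation, setpoint=22.0):
    return observation["temperature"] - setpoint   # positive means too hot


def decide(error):
    if error > 1.0:
        return "cool"
    if error < -1.0:
        return "heat"
    return "idle"


def act(action):
    print(f"actuator: {action}")


for _ in range(3):                 # the loop closes itself, no operator input
    act(decide(orient(observe())))
```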
So we’re just at the dawn of this, right?
I think everyone talking about MLAI, it’s great.
But really the thing we should be talking about
is when we really enter the cybernetic era
and all of the questions of ethics and governance
and all correctness and all these things,
they really are the most important questions.
Okay, can we just linger on this?
What does it mean for the human to be out of the loop
in a cybernetic system, because isn’t the cybernetic system
that’s ultimately accomplishing some kind of purpose
that at the bottom, the turtles all the way down,
at the bottom turtle is a human.
Well, the human may have set some criteria,
but the human wasn’t precise.
So for instance, I just read the other day
that earlier this year,
or maybe it was last year at some point,
the Libyan army, I think,
sent out some automated killer drones with explosives.
And there was no human in the loop at that point.
They basically put them in a geofenced area,
said find any moving target, like a truck or vehicle
that looks like this, and boom.
That’s not a human in the loop, right?
So increasingly, the less human there is in the loop,
the more concerned you are about these kinds of systems,
because there are unintended consequences
that the original designer and engineer of the system,
even one with good intent,
is less and less able to predict.
Is that it? That’s right.
There are some software systems, right,
that run without humans in the loop
that are quite complex.
And that’s like the electronic markets.
And we get flash crashes all the time.
We get in the heyday of high frequency trading,
there’s a lot of market microstructure,
people doing all sorts of weird stuff
that the market designers had never really thought about,
contemplated or intended.
So when we run these full on systems
with these automated trading bots,
now they become automated killer drones
and then all sorts of other stuff.
We are, that’s what I mean by we’re at the dawn
of the cybernetic era and the end of the era
of just pure software.
Are you more concerned,
if you’re thinking about cybernetic systems
or even like self replicating systems,
so systems that aren’t just doing a particular task,
but are able to sort of multiply and scale
in some dimension in the digital
or even the physical world.
Are you more concerned about like the lobster being boiled?
So a gradual with us not noticing,
collapse of civilization or a big explosion,
like oops, kind of a big thing where everyone notices,
but it’s too late.
I think that it will be a different experience
for different people.
I do share a common point of view
with some of the climate,
people who are concerned about climate change
and just the big existential risks that we have.
But unlike a lot of people who share my level of concern,
I think the collapse will not be quite so dramatic
as some of them think.
And what I mean is that,
I think that for certain tiers of let’s say economic class
or certain locations in the world,
people will experience dramatic collapse scenarios.
But for a lot of people, especially in the developed world,
the realities of collapse will be managed.
There’ll be narrative management around it
so that they essentially insulate,
the middle class will be used to insulate the upper class
from the pitchforks and the flaming torches and everything.
It’s interesting because,
so my specific question wasn’t as general.
My question was more about cybernetic systems or software.
Okay.
It’s interesting,
but it would nevertheless perhaps be about class.
So the effect of algorithms
might affect certain classes more than others.
Absolutely.
I was more thinking about
whether it’s social media algorithms or actual robots,
is there going to be a gradual effect on us
where we wake up one day
and don’t recognize the humans we are,
or is it something truly dramatic
where there’s like a meltdown of a nuclear reactor
kind of thing, Chernobyl, like catastrophic events
that are almost bugs in a program that scaled itself
too quickly?
Yeah, I’m not as concerned about the visible stuff.
And the reason is because the big visible explosions,
I mean, this is something I said about social media
is that at least with nuclear weapons,
when a nuke goes off, you can see it
and you’re like, well, that’s really,
wow, that’s kind of bad, right?
I mean, Oppenheimer was reciting the Bhagavad Gita, right?
When he saw one of those things go off.
So we can see nukes are really bad.
He’s not reciting anything about Twitter.
Well, but right, but then when you have social media,
when you have all these different things that conspire
to create a layer of virtual experience for people
that alienates them from reality and from each other,
that’s very pernicious, that’s impossible to see, right?
And it kind of slowly gets in there, so.
You’ve written about this idea of virtuality
on this topic, which you define as the subjective phenomenon
of knowingly engaging with virtual sensation and perception
and suspending or forgetting the context
that it’s a simulacrum.
So let me ask, what is real?
Is there a hard line between reality and virtuality?
As perception drifts from some kind of physical reality,
we have to kind of have a sense of where the line is,
where we’ve gone too far.
Right, right.
For me, it’s not about any hard line about physical reality
as much as a simple question of,
does the particular technology help people connect
in a more integral way with other people,
with their environment,
with all of the full spectrum of things around them?
So it’s less about, oh, this is a virtual thing
and this is a hard real thing,
more about when we create virtual representations
of the real things, always some things
are lost in translation.
Usually many, many dimensions are lost in translation.
We’re now coming to almost two years of COVID,
people on Zoom all the time.
You know it’s different when you meet somebody in person
than when you see them on,
I’ve seen you on YouTube lots, right?
But then seeing a person is very different.
And so I think when we engage in virtual experiences
all the time, and we only do that,
there is absolutely a level of embodiment.
There’s a level of embodied experience
and participatory interaction that is lost.
And it’s very hard to put your finger on exactly what it is.
It’s hard to say, oh, we’re gonna spend $100 million
building a new system that captures this 5% better,
higher fidelity human expression.
No one’s gonna pay for that, right?
So when we rush madly into a world of simulacrum
and virtuality, the things that are lost are,
it’s difficult.
Once everyone moves there, it can be hard to look back
and see what we’ve lost.
So is it irrecoverably lost?
Or rather, when you put it all on the table,
is it possible for more to be gained than is lost?
If you look at video games,
they create virtual experiences that are surreal
and can bring joy to a lot of people,
can connect a lot of people,
and can get people to talk a lot of trash.
So they can bring out the best and the worst in people.
So is it possible to have a future world
where the pros outweigh the cons?
It is.
I mean, it’s possible to have that in the current world.
But when literally trillions of dollars of capital
are tied to using those things
to groom the worst of our inclinations
and to attack our weaknesses in the limbic system
to create these things into id machines
versus connection machines,
then those good things don’t stand a chance.
Can you make a lot of money by building connection machines?
Is it possible, do you think,
to bring out the best in human nature
to create fulfilling connections and relationships
in the digital world and make a shit ton of money?
If I figure it out, I’ll let you know.
But what’s your intuition
without concretely knowing what’s the solution?
My intuition is that a lot of our digital technologies
give us the ability to have synthetic connections
or to experience virtuality.
They have co-evolved with sort of the human expectations.
It’s sort of like sugary drinks.
As people have more sugary drinks,
they need more sugary drinks to get that same hit, right?
So with these virtual things and with TV and fast cuts
and TikToks and all these different kinds of things,
we’re co-creating essentially humanity
that sort of asks and needs those things.
And now it becomes very difficult
to get people to slow down.
It gets difficult for people to hold their attention
on slow things and actually feel that embodied experience.
So mindfulness now more than ever is so important in schools
and as a therapy technique for people
because our environment has been accelerated.
And McLuhan actually talks about this
in the electric environment of the television.
And that was before TikTok and before front facing cameras.
So I think for me, the concern is that
it’s not like we can ever switch to doing something better,
but more of the humans and technology,
they’re not independent of each other.
The technology that we use kind of molds what we need
for the next generation of technology.
Yeah, but humans are intelligent and they’re introspective
and they can reflect on the experiences of their life.
So for example, there’s been many years in my life
where I ate an excessive amount of sugar.
And then a certain moment I woke up and said,
why do I keep doing this?
This doesn’t feel good.
Like longterm.
And I think, so going through the TikTok process
of realizing, okay, when I shorten my attention span,
actually that does not make me feel good longer term.
And realizing that and then going to platforms,
going to places that are away from the sugar.
So in so doing, you can create platforms
that can make a lot of money to help people wake up
to what actually makes them feel good longterm.
Develop, grow as human beings.
And it just feels like humans are more intelligent
than mice looking for cheese.
They’re able to sort of think, I mean,
we can contemplate our own mortality.
We can contemplate things like longterm love
and we can have a longterm fear
of certain things like mortality.
We can contemplate whether the experiences,
the sort of the drugs of daily life
that we’ve been partaking in is making us happier,
better people.
And then once we contemplate that,
we can make financial decisions in using services
and paying for services that are making us better people.
So it just seems that we’re in the very first stages
of social networks that just were able to make a lot of money
really quickly, but in bringing out sometimes
the bad parts of human nature, they didn’t destroy humans.
They just fed everybody a lot of sugar.
And now everyone’s gonna wake up and say,
hey, we’re gonna start having like sugar free social media.
Right, right.
Well, there’s a lot to unpack there.
I think some people certainly have the capacity for that.
And I certainly think, I mean, it’s very interesting
even the way you said it, you woke up one day
and you thought, well, this doesn’t feel very good.
Well, it’s still your limbic system saying
this doesn’t feel very good, right?
You have a cat brain’s worth of neurons around your gut,
right?
And so maybe that saturated and that was telling you,
hey, this isn’t good.
Humans are more than just mice looking for cheese
or monkeys looking for sex and power, right?
So.
Let’s slow down.
Now a lot of people would argue with you on that one,
but yes.
Well, we’re more than just that, but we’re at least that.
And we’re very, very seldom not that.
So I don’t actually disagree with you
that we could be better and that better platforms exist.
And people are voluntarily noping out of things
like Facebook and noping out.
That’s an awesome verb.
It’s a great term.
Yeah, I love it.
I use it all the time.
You’re welcome.
I’m gonna have to nope out of that.
I’m gonna have to nope out of that, right?
It’s gonna be a hard pass and that’s great.
But that’s again, to your point,
that’s the first generation of front facing cameras
of social pressures.
And you as a self starter, self aware adult
have the capacity to say, yeah, I’m not gonna do that.
I’m gonna go and spend time on long form reads.
I’m gonna spend time managing my attention.
I’m gonna do some yoga.
If you’re a 15 year old in high school
and your entire social environment
is everyone doing these things,
guess what you’re gonna do?
You’re gonna kind of have to do that
because your limbic system says,
hey, I need to get the guy or the girl or the whatever.
And that’s what I’m gonna do.
And so one of the things that we have to reason about here
is the social media systems or social media,
I think is our first encounter with a technological system
that runs a bit of a loop around our own cognition
and attention.
It’s not the last, it’s far from the last.
And it gets to the heart of some of the philosophical
Achilles heel of the Western philosophical system,
which is each person gets to make their own determination.
Each person is an individual that’s sacrosanct
in their agency and their sovereignty and all these things.
The problem with these systems is that they come down
and, even though each person is able to make their own decisions,
they are able to manage everyone en masse.
And so every person is making their own decision,
but together the bigger system is causing them to act
with a group dynamic that’s very profitable for people.
So this is the issue that we have is that our philosophies
are actually not geared to understand
what is it for a person to have a high trust connection
as part of a collective and for that collective
to have its right to coherency and agency.
That’s something like when a social media app
causes a family to break apart,
it’s done harm to more than just individuals, right?
So that concept is not something we really talk about
or think about very much, but that’s actually the problem
is that we’re vaporizing molecules into atomic units
and then we’re hitting all the atoms with certain things.
That’s like, yeah, well, that person chose to look at my app.
So our understanding of human nature
at the individual level, it emphasizes the individual
too much because ultimately society operates
at the collective level.
And these apps do as well.
And the apps do as well.
So for us to understand the progression and the development
of this organism we call human civilization,
we have to think at the collective level too.
I would say multi tiered.
Multi tiered.
So individual as well.
Individuals, family units, social collectives
and all the way up.
Okay, so you’ve said that individual humans
are multi layered susceptible to signals and waves
and multiple strata, the physical, the biological,
social, cultural, intellectual.
So sort of going along these lines,
can you describe the layers of the cake
that is a human being and maybe the human collective,
human society?
So I’m just stealing wholesale here from Robert Pirsig,
who is the author of Zen and the Art of Motorcycle
Maintenance, and his follow-on book, a sequel to it
called Lila.
He goes into this in a little more detail.
But it’s a crude approach to thinking about people.
But I think it’s still an advancement
over traditional subject object metaphysics,
where we look at people as a dualist would say,
well, is your mind, your consciousness,
is that just merely the matter that’s in your brain
or is there something kind of more beyond that?
And they would say, yes, there’s a soul,
sort of ineffable soul beyond just merely the physical body.
And I’m not one of those people.
I think that we don’t have to draw a line between are things
only this or only that.
Collectives of things can emerge structures and patterns
that are just as real as the underlying pieces.
But they’re transcendent, but they’re still
of the underlying pieces.
So your body is this way.
I mean, we just know physically you consist of atoms
and whatnot.
And then the atoms are arranged into molecules
which then arrange into certain kinds of structures
that seem to have a homeostasis to them.
We call them cells.
And those cells form sort of biological structures.
Those biological structures give your body
its physical ability and the biological ability
to consume energy and to maintain homeostasis.
But humans are social animals.
I mean, a human by themselves is not long for the world.
So part of our biology is wired to connect to other people.
From the mirror neurons to our language centers
and all these other things.
So we are intrinsically, there’s a layer,
there’s a part of us that wants to be part of a thing.
If we’re around other people, not saying a word,
but they’re just up and down jumping and dancing, laughing,
we’re going to feel better.
And there was no exchange of physical anything.
They didn’t give us like five atoms of happiness.
But there’s an induction in our own sense of self
that is at that social level.
And then beyond that, Pirsig puts the intellectual level
kind of one level higher than social.
I think they’re actually more intertwined than that.
But the intellectual level is the level of pure ideas.
That you are a vessel for memes.
You’re a vessel for philosophies.
You will conduct yourself in a particular way.
I mean, I think part of this is if we think about it
from a physics perspective, you’re not,
there’s the joke that physicists like to approximate things.
And we’ll say, well, approximate a spherical cow, right?
You’re not a spherical cow, you’re not a spherical human.
You’re a messy human.
And we can’t even say what the dynamics of your emotion
will be unless we analyze all four of these layers, right?
If you’re Muslim at a certain time of day, guess what?
You’re going to be on the ground kneeling and praying, right?
And that has nothing to do with your biological need
to get on the ground or physics of gravity.
It is an intellectual drive that you have.
It’s a cultural phenomenon
and an intellectual belief that you carry.
So that’s what the four layered stack is all about.
It’s that a person is not only one of these things,
they’re all of these things at the same time.
It’s a superposition of dynamics that run through us
that make us who we are.
So no layer is special.
Not so much no layer is special,
each layer is just different.
But we are.
Each layer gets the participation trophy.
Yeah, each layer is a part of what you are.
You are a layer cake, right, of all these things.
And if we try to deny, right,
so many philosophies do try to deny
the reality of some of these things, right?
Some people will say, well, we’re only atoms.
Well, we’re not only atoms
because there’s a lot of other things that are only atoms.
I can reduce a human being to a bunch of soup
and they’re not the same thing,
even though it’s the same atoms.
So I think the order and the patterns
that emerge within humans to understand,
to really think about what a next generation philosophy
would look like, that would allow us to reason
about extending humans into the digital realm
or to interact with autonomous intelligences
that are not biological in nature.
We really need to appreciate these,
that human, what human beings actually are
is the superposition of these different layers.
You mentioned consciousness.
Are each of these layers of cake conscious?
Is consciousness a particular quality of one of the layers?
Is there like a spike if you have a consciousness detector
at these layers or is something that just permeates
all of these layers and just takes different form?
I believe what humans experience as consciousness
is something that sits on a gradient scale
of a general principle in the universe
that seems to look for order and reach for order
when there’s an excess of energy.
You know, it would be odd to say a proton is alive, right?
It’d be odd to say like this particular atom
or molecule of hydrogen gas is alive,
but there’s certainly something we can make assemblages
of these things that have autopoietic aspects to them
that will create structures that will, you know,
crystalline solids will form very interesting
and beautiful structures.
This gets kind of into weird mathematical territories.
You start thinking about Penrose and Game of Life stuff
about the generativity of math itself,
like the hyperreal numbers, things like that.
But without going down that rabbit hole,
I would say that there seems to be a tendency
in the world that when there is excess energy,
things will structure and pattern themselves.
And they will then actually furthermore try to create
an environment that furthers their continued stability.
It’s the concept of externalized extended phenotype
or niche construction.
So this is ultimately what leads to certain kinds
of amino acids forming certain kinds of structures
and so on and so forth until you get the ladder of life.
So what we experience as consciousness,
no, I don’t think cells are conscious at that level,
but is there something beyond mere equilibrium state biology
and chemistry and biochemistry
that drives what makes things work?
I think there is.
So Adrian Bejan has his constructal law.
There’s other things you can look at.
When you look at the life sciences
and you look at any kind of statistical physics
and statistical mechanics,
when you look at things far out of equilibrium,
when you have excess energy, what happens then?
Life doesn’t just make a hotter soup.
It starts making structure.
There’s something there.
The poetry of reaches for order
when there’s an excess of energy.
Because you brought up game of life.
You did it, not me.
I love cellular automata,
so I have to sort of linger on that for a little bit.
So cellular automata, I guess, or game of life
is a very simple example of reaching for order
when there’s an excess of energy.
Or reaching for order and somehow creating complexity.
Within this explosion of just turmoil,
somehow trying to construct structures.
And in so doing, create very elaborate
organism looking type things.
What intuition do you draw from this simple mechanism?
Well, I like to turn that around its head.
And look at it as what if every single one of the patterns
created life, or created, not life,
but created interesting patterns?
Because some of them don’t.
And sometimes you make cool gliders.
And other times, you start with certain things
and you make gliders and other things
that then construct like AND gates and NOT gates, right?
And you build computers on them.
All of these rules that create these patterns
that we can see, those are just the patterns we can see.
What if our subjectivity is actually limiting
our ability to perceive the order in all of it?
What if some of the things that we think are random
are actually not that random?
We’re simply not integrating at a fine enough level
across a broad enough time horizon.
And this is again, I said, we go down the rabbit holes
and the Penrose stuff or like Wolfram’s explorations
on these things.
There is something deep and beautiful
in the mathematics of all this.
That is hopefully one day I’ll have enough money
to work and retire and just ponder those questions.
But there’s something there.
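For readers who want the gliders and gates above made concrete, here is a minimal Conway’s Game of Life sketch in Python (illustrative only; it uses wrap-around edges for simplicity):

```python
import numpy as np


def step(grid):
    # Count live neighbors by summing the 8 shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Conway's rules: a live cell survives with 2 or 3 neighbors,
    # a dead cell becomes live with exactly 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)


grid = np.zeros((8, 8), dtype=int)
glider = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
for r, c in glider:
    grid[r, c] = 1

for _ in range(4):                 # after 4 steps the glider has moved one cell diagonally
    grid = step(grid)
print(grid)
```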
But you’re saying there’s a ceiling to,
when you have enough money and you retire and you ponder,
there’s a ceiling to how much you can truly ponder
because there’s cognitive limitations
in what you’re able to perceive as a pattern.
Yeah.
And maybe mathematics extends your perception capabilities,
but it’s still finite.
It’s just like.
Yeah, the mathematics we use is the mathematics
that can fit in our head.
Yeah.
Did God really create the integers?
Or did God create all of it?
And we just happen at this point in time
to be able to perceive integers.
Well, he just did the positive integers.
She, I just said, did she create all of it?
And then we.
She just created the natural numbers
and then we screwed it all up with zero and then I guess.
Okay.
But we did, we created mathematical operations
so that we can have iterated steps
to approach bigger problems, right?
I mean, the entire point of the Arabic numeral system
is that it’s a rubric for mapping a certain set of operations,
folding them into a simple little expression,
but that’s just the operations that we can fit in our heads.
There are many other operations besides, right?
The thing that worries me the most about aliens and humans
is that the aliens are all around us and we’re too dumb.
Yeah.
To see them.
Oh, certainly, yeah.
Or life, let’s say just life,
life of all kinds of forms or organisms.
You know what, just even the intelligence of organisms
is imperceptible to us
because we’re too dumb and self centered.
That worries me.
Well, we’re looking for a particular kind of thing.
Yeah.
When I was at Cornell,
I had a lovely professor of Asian religions,
Jane Marie Law,
and she would tell this story about a musician,
a Western musician who went to Japan
and he taught classical music
and could play all sorts of instruments.
He went to Japan and he would ask people,
he would basically be looking for things in the style of
a Western chromatic scale and these kinds of things.
And then finding none of it,
he would say, well, there’s really no music in Japan,
but they’re using a different scale.
They’re playing different kinds of instruments, right?
The same thing she was using as a sort of a metaphor
for religion as well.
In the West, we center a lot of religion,
certainly the religions of Abraham,
we center them around belief.
And in the East, it’s more about practice, right?
Spirituality and practice rather than belief.
So anyway, the point is here to your point,
life, we, I think so many people are so fixated
on certain aspects of self replication
or homeostasis or whatever.
But if we kind of broaden and generalize this thing
of things reaching for order,
under which conditions can they then create an environment
that sustains that order, that allows them,
the invention of death is an interesting thing.
There are some organisms on earth
that are thousands of years old.
And it’s not like they’re incredibly complex,
they’re actually simpler than the cells that comprise us,
but they never die.
So at some point, death was invented,
somewhere along the eukaryotic scale,
I mean, even the protists, right?
There’s death.
And why is that along with the sexual reproduction, right?
There is something about the renewal process,
something about the ability to respond
to a changing environment,
where it just becomes,
just killing off the old generation
and letting new generations try,
seems to be the best way to fit into the niche.
Human historians seem to write about wheels and fire
as the greatest inventions,
but it seems like death and sex are pretty good.
And they’re kind of essential inventions
at the very beginning.
At the very beginning, yeah.
Well, we didn’t invent them, right?
Well, broadly, you didn’t invent them.
I see us as one.
You, a particular Homo sapiens, did not invent them,
but we together, it’s a team project,
just like you’re saying.
I think the greatest Homo sapiens invention
is collaboration.
So when you say collaboration,
Peter, where do ideas come from
and how do they take hold in society?
Is that the nature of collaboration?
Is that the basic atom of collaboration is ideas?
It’s not not ideas, but it’s not only ideas.
There’s a book I just started reading
called Death From A Distance.
Have you heard of this?
No.
It’s a really fascinating thesis,
which is that humans are the only conspecific,
the only species that can kill other members
of the species from range.
And maybe there’s a few exceptions,
but if you look in the animal world,
you see like pronghorns butting heads, right?
You see the alpha lion and the beta lion
and they take each other down.
Humans, we developed the ability
to chuck rocks at each other,
well, at prey, but also at each other.
And that means the beta male can chuck a rock
at the alpha male and take them down.
And he can throw a lot of rocks actually,
miss a bunch of times, but just hit once and be good.
So this ability to actually kill members
of our own species from range
without a threat of harm to ourselves
created essentially mutually assured destruction
where we had to evolve cooperation.
If we didn’t, then if we just continue to try to do,
like I’m the biggest monkey in the tribe
and I’m gonna own this tribe and you have to go,
if we do it that way, then those tribes basically failed.
And the tribes that persisted
and that have now given rise to the modern Homo sapiens
are the ones where respecting the fact
that we can kill each other from a range
without harm, like there’s an asymmetric ability
to snipe the leader from range.
That meant that we sort of had to learn
how to cooperate with each other, right?
Come back here, don’t throw that rock at me.
Let’s talk our differences out.
So violence is also part of collaboration.
The threat of violence, let’s say.
Well, the recognition, maybe the better way to put it
is the recognition that we have more to gain
by working together than the prisoner’s dilemma
of both of us defecting.
So mutually assured destruction in all its forms
is part of this idea of collaboration.
Well, and Eric Weinstein talks about our nuclear peace,
right, I mean, it kind of sucks
with thousands of warheads aimed at each other,
we mean Russia and the US, but it’s like,
on the other hand, we only fought proxy wars, right?
We did not have a World War III
of like hundreds of millions of people dying
to like machine gun fire and giant guided missiles.
So the original nuclear weapon is a rock
that we learned how to throw, essentially?
The original, yeah, well, the original scope of the world
for any human being was their little tribe.
I would say it still is for the most part.
Eric Weinstein speaks very highly of you,
which is very surprising to me at first
because I didn’t know there’s this depth to you
because I knew you as an amazing leader of engineers
and an engineer yourself and so on, so it’s fascinating.
Maybe just as a comment, a side tangent that we can take,
what’s your nature of your friendship with Eric Weinstein?
How did the two, how did such two interesting paths cross?
Is it your origins in physics?
Is it your interest in philosophy
and the ideas of how the world works?
What is it?
It’s very random, Eric found me.
He actually found Travis and I.
Travis Oliphant.
Oliphant, yeah, we were both working
at a company called Enthought back in the mid 2000s
and we were doing a lot of consulting
around scientific Python and we’d made some tools
and Eric was trying to use some of these Python tools
to visualize, he had a fiber bundle approach
to modeling certain aspects of economics.
He was doing this and that’s how he kind of got in touch
with us and so.
This was in the early.
This was mid 2000s, oh seven timeframe, oh six, oh seven.
Eric Weinstein trying to use Python.
Right, to visualize fiber bundles.
Using some of the tools that we had built
in the open source.
That’s somehow entertaining to me, the thought of that.
It’s very funny but then we met with him a couple times,
a really interesting guy and then in the wake
of the oh seven, oh eight kind of financial collapse,
he helped organize with Lee Smolin a symposium
at the Perimeter Institute about okay, well clearly,
big finance can’t be trusted, government’s in its pockets
with regulatory capture, what the F do we do?
And all sorts of people, Nassim Taleb was there
and Andrew Lo from MIT was there and Bill Janeway,
I mean just a lot of top billing people were there
and he invited me and Travis and another one
of our coworkers, Robert Kern, who anyone
in the SciPy, NumPy community knows.
Really great guy.
So the three of us also got invited to go to this thing
and that’s where I met Brett Weinstein
for the first time as well.
Yeah, I knew him before he got all famous
for unfortunate reasons, I guess.
But anyway, so we met then and kind of had a friendship
throughout since then.
You have a depth of thinking that kind of runs
with Eric in terms of just thinking about the world deeply
and thinking philosophically and then there’s Eric’s
interest in programming.
I actually have never, you know, he’ll bring up programming
to me quite a bit as a metaphor for stuff.
But I never kind of pushed the point of like,
what’s the nature of your interest in programming?
I think he saw it probably as a tool.
Yeah, absolutely.
That you use to visualize, to explore mathematics
and explore physics and I was wondering like,
what’s his depth of interest and also his vision
for what programming would look like in the future.
Have you had interaction with him, like discussion
in the space of Python, programming?
Well, in the sense of sometimes he asks me,
why is this stuff still so hard?
Yeah, you know, everybody’s a critic.
But actually, no, Eric.
Programming, you mean, like in general?
Yes, yes, well, not programming in general,
but certain things in the Python ecosystem.
But he actually, I think what I find in listening
to some of his stuff is that he does use
programming metaphors a lot, right?
He’ll talk about APIs or object oriented
and things like that.
So I think that’s a useful set of frames
for him to draw upon for discourse.
I haven’t pair programmed with him in a very long time.
You’ve previously pair coded with Eric.
Well, I mean, I look at his code trying to help
like put together some of the visualizations
around these things.
But it’s been a while since I’ve, not really pair programmed,
but like even looked at his code, right?
I mean.
How legendary would be is that like Git repo
with Peter Wang and Eric Weinstein?
Well, honestly, Robert Kern did all the heavy lifting.
So I have to give credit where credit is due.
Robert is the silent but incredibly deep, quiet,
not silent, but quiet, but incredibly deep individual
at the heart of a lot of those things
that Eric was trying to do.
But we did have, you know, as Travis and I
were starting our company in 2012 timeframe,
we went to New York.
Eric was still in New York at the time.
He hadn’t moved to, this is before he joined Thiel Capital.
We just had like a steak dinner somewhere.
Maybe it was Keens, I don’t know, somewhere in New York.
So it was me, Travis, Eric, and then Wes McKinney,
the creator of pandas, and then Wes’s then business partner,
Adam, the five of us sat around having this,
just a hilarious time, amazing dinner.
I forget what all we talked about,
but it was one of those conversations,
which I wish as soon as COVID is over,
maybe Eric and I can sit down.
Recreate.
Recreate it somewhere in LA, or maybe he comes here,
because a lot of cool people are here in Austin, right?
Exactly.
Yeah, we’re all here.
He should come here.
Come here.
Yeah.
So he uses the metaphor source code sometimes
to talk about physics.
We figure out our own source code.
So you with a physics background
and somebody who’s quite a bit of an expert in source code,
do you think we’ll ever figure out our own source code
in the way that Eric means?
Do you think we’ll figure out the nature of reality?
Well, I think we’re constantly working on that problem.
I mean, I think we’ll make more and more progress.
For me, there’s some things I don’t really doubt too much.
Like, I don’t really doubt that one day
we will create a synthetic, maybe not fully in silicon,
but a synthetic approach to
cognition that rivals the biological
20 watt computers in our heads.
What’s cognition here?
Cognition.
Which aspect?
Perception, attention, memory, recall,
asking better questions.
That for me is a measure of intelligence.
Doesn’t Roomba vacuum cleaner already do that?
Or do you mean, oh, it doesn’t ask questions.
I mean, no, it’s, I mean, I have a Roomba,
but it’s not even as smart as my cat, right?
Yeah, but it asks questions about what is this wall?
It now, new feature asks, is this poop or not, apparently.
Yes, a lot of our current cybernetic system,
it’s a cybernetic system.
It will go and it will happily vacuum up some poop, right?
The older generations would.
The new one, just released, does not vacuum up the poop.
Okay.
This is a commercial for.
I wonder if it still gets stuck
under the first rung of my stairs.
In any case, these cybernetic systems we have,
they are mold, they’re designed to be sent off
into a relatively static environment.
And whatever dynamic things happen in the environment,
they have a very limited capacity to respond to.
A human baby, a human toddler of 18 months of age
has more capacity to manage its own attention
and its own capacity to make better sense of the world
than the most advanced robots today.
So again, my cat, I think can do a better job of my two
and they’re both pretty clever.
So I do think though, back to my kind of original point,
I think that it’s not, for me, it’s not question at all
that we will be able to create synthetic systems
that are able to do this better than the human,
at an equal level or better than the human mind.
It’s also for me, not a question that we will be able
to put them alongside humans
so that they capture the full broad spectrum
of what we are seeing as well.
And also looking at our responses,
listening to our responses,
even maybe measuring certain vital signs about us.
So in this kind of sidecar mode,
a greater intelligence could use us
and our whatever 80 years of life to train itself up
and then be a very good simulacrum of us moving forward.
So who is in the sidecar
in that picture of the future exactly?
The baby version of our immortal selves.
Okay, so once the baby grows up,
is there any use for humans?
I think so.
I think that out of epistemic humility,
we need to keep humans around for a long time.
And I would hope that anyone making those systems
would believe that to be true.
Out of epistemic humility,
what’s the nature of the humility there?
That we don’t know what we don’t know.
So we don’t.
Right?
So we don’t know.
First we have to build systems
that help us do the things that we do know about
that can then probe the unknowns that we know about.
But the unknown unknowns, we don’t know.
We can’t always know.
Nature is the one thing
that is infinitely able to surprise us.
So we should keep biological humans around
for a very, very, very long time.
Even after our immortal selves have transcended
and have gone off to explore other worlds,
gone to go communicate with the lifeforms living in the sun
or whatever else.
So I think that’s,
for me, these seem like things that are going to happen.
Like I don’t really question that,
that they’re gonna happen.
Assuming we don’t completely destroy ourselves.
Is it possible to create an AI system
that you fall in love with and it falls in love with you
and you have a romantic relationship with it?
Or a deep friendship, let’s say.
I would hope that that is the design criteria
for any of these systems.
If we cannot have a meaningful relationship with it,
then it’s still just a chunk of silicon.
So then what is meaningful?
Because back to sugar.
Well, sugar doesn’t love you back, right?
So the computer has to love you back.
And what does love mean?
Well, in this context, for me, love,
I’m gonna take a page from Alain de Botton.
Love means that it wants to help us
become the best version of ourselves.
Yes, that’s beautiful.
That’s a beautiful definition of love.
So what role does love play in the human condition
at the individual level and at the group level?
Because you were kind of saying that humans,
we should really consider humans
both at the individual and the group and the societal level.
What’s the role of love in this whole thing?
We talked about sex, we talked about death,
thanks to the bacteria that invented it.
At which point did we invent love, by the way?
I mean, is that also?
No, I think love is the start of it all.
And the feelings of, and this gets sort of beyond
just romantic, sensual, whatever kind of things,
but actually genuine love as we have for another person.
Love as it would be used in a religious text, right?
I think that capacity to feel love
more than consciousness, that is the universal thing.
Our feeling of love is actually a sense
of that generativity.
When we can look at another person
and see that they can be something more than they are,
and more than just a pigeonhole we might stick them in.
I mean, I think there’s, in any religious text,
you’ll find voiced some concept of this,
that you should see the grace of God in the other person.
They’re made in the spirit of the love
that God feels for his creation or her creation.
And so I think this thing is actually the root of it.
So I would say, I don’t think molecules of water
feel consciousness, have consciousness,
but there is some proto micro quantum thing of love.
That’s the generativity when there’s more energy
than what they need to maintain equilibrium.
And that when you sum it all up is something that leads to,
I mean, I had my mind blown one day as an undergrad
at the physics computer lab.
I logged in, and for a long time when you logged into bash,
there was a little fortune that would come out.
And it said, man was created by water
to carry itself uphill.
And I was logging into work on some problem set
and I logged in and I saw that and I just said,
son of a bitch, I just, I logged out
and I went to the coffee shop and I got a coffee
and I sat there on the quad and I’m like,
you know, it’s not wrong and yet WTF, right?
So when you look at it that way,
it’s like, yeah, okay, non equilibrium physics is a thing.
And so when we think about love,
when we think about these kinds of things, I would say
that in the modern day human condition,
there’s a lot of talk about freedom and individual liberty
and rights and all these things,
but that’s very Hegelian, it’s very kind of following
from the Western philosophy of the individual as sacrosanct,
but it’s not really couched I think the right way
because it should be how do we maximize people’s ability
to love each other, to love themselves first,
to love each other, their responsibilities
to the previous generation, to the future generations.
Those are the kinds of things
that should be our design criteria, right?
Those should be what we start with to then come up
with the philosophies of self and of rights
and responsibilities, but that love being at the center
of it, I think when we design systems for cognition,
it should absolutely be built that way.
I think if we simply focus on efficiency and productivity,
these kind of very industrial era,
all the things that Marx had issues with, right?
Those, that’s a way to go and really I think go off
the deep end in the wrong way.
So one of the interesting consequences of thinking of life
in this hierarchical way of an individual human
and then there’s groups and there’s societies
is I believe that you believe that corporations are people.
So this is a kind of a politically dense idea,
all those kinds of things.
If we just throw politics aside,
if we throw all of that aside,
in which sense do you believe that corporations are people?
And how does love connect to that?
Right, so the belief is that groups of people
have some kind of higher level, I would say mesoscopic
claim to agency.
So where do I, let’s start with this.
Most people would say, okay, individuals have claims
to agency and sovereignty.
Nations, we certainly act as if nations,
so at a very large, large scale,
nations have rights to sovereignty and agency.
Like everyone plays the game of modernity
as if that’s true, right?
We believe France is a thing,
we believe the United States is a thing.
But to say that groups of people at a smaller level
than that, like a family unit is the thing.
Well, in our laws, we actually do encode this concept.
I believe that in a relationship and a marriage, right,
one partner can sue for loss of consortium, right?
If someone breaks up the marriage or whatever.
So these are concepts that even in law,
we do respect that there is something about the union
and about the family.
So for me, I don’t think it’s so weird to think
that groups of people have a right to,
a claim to rights and sovereignty of some degree.
I mean, we look at our clubs, we look at churches.
These are, we talk about these collectives of people
as if they have a real agency to them, and they do.
But I think if we take that one step further and say,
okay, they can accrue resources.
Well, yes, check, you know, and by law they can.
They can own land, they can engage in contracts,
they can do all these different kinds of things.
So we in legal terms support this idea
that groups of people have rights.
Where we go wrong on this stuff
is that the most popular version of this
is the for profit absentee owner corporation
that then is able to amass larger resources
than anyone else in the landscape, anything else,
any other entity of equivalent size.
And they’re able to essentially bully around individuals,
whether it’s laborers, whether it’s people
whose resources they want to capture.
They’re also able to bully around
our system of representation,
which is still tied to individuals, right?
So I don’t believe that’s correct.
I don’t think it’s good that they, you know,
they’re people, but they’re assholes.
I don’t think that corporations as people
acting like assholes is a good thing.
But the idea that collectives and collections of people
that we should treat them philosophically
as having some agency and some mass,
at a mesoscopic level, I think that’s an important thing
because one thing I do think we underappreciate sometimes
is the fact that relationships have relationships.
So it’s not just individuals
having relationships with each other.
But if you have eight people seated around a table, right?
Each person has a relationship with each of the others
and that’s obvious.
But then if it’s four couples,
each couple also has a relationship
with each of the other couples, right?
The dyads do.
And if it’s couples, but one is the, you know,
father and mother older, and then, you know,
one of their children and their spouse,
that family unit of four has a relationship
with the other family unit of four.
So the idea that relationships have relationships
is something that we intuitively know
in navigating the social landscape,
but it’s not something I hear expressed like that.
It’s certainly not something that is,
I think, taken into account very well
when we design these kinds of things.
So I think the reason why I care a lot about this
is because I think the future of humanity
requires us to form better collective sense making units
at something, you know, around the Dunbar number,
half to 5X Dunbar.
And that’s very different than right now
where we defer sense making
to massive aging zombie institutions.
Or we just do it ourselves.
Go it alone.
Go to the dark force of the internet by ourselves.
So that’s really interesting.
So you’ve talked about agency,
I think maybe calling it a convenient fiction
at all these different levels.
So even at the human individual level,
it’s kind of a fiction.
We all believe, because we are, like you said,
made of cells and cells are made of atoms.
So that’s a useful fiction.
And then there’s nations that seems to be a useful fiction,
but it seems like some fictions are better than others.
You know, there’s a lot of people that argue
the fiction of nation is a bad idea.
One of them lives two doors down from me,
Michael Malice, he’s an anarchist.
You know, I’m sure there’s a lot of people
who are into meditation that believe the idea,
this useful fiction of agency of an individual
is troublesome as well.
We need to let go of that in order to truly,
like to transcend, I don’t know.
I don’t know what words you want to use,
but suffering or to elevate the experience of life.
So you’re kind of arguing that,
okay, so we have some of these useful fictions of agency.
We should add a stronger fiction that we tell ourselves
about the agency of groups in the hundreds,
on the order of half to 5X Dunbar’s number.
Yeah, something on that order.
And we call them fictions,
but really they’re rules of the game, right?
Rules that we feel are fair or rules that we consent to.
Yeah, I always question the rules
when I lose, like at Monopoly.
That’s when I usually question the rules.
When I’m winning, I don’t question the rules.
We should play a game of Monopoly someday.
There’s a trippy version of it that we could do.
What kind?
Contract Monopoly was introduced to me by a friend of mine,
where you can write contracts on future earnings
or landing on various things.
And you can hand out, like, you know,
the first three times you land on Park Place,
it’s free or whatever.
And then you can start trading those contracts for money.
And then you create a human civilization
and somehow Bitcoin comes into it.
Okay, but some of these.
Actually, I bet if me and you and Eric sat down
to play a game of Monopoly and we were to make NFTs
out of the contracts we wrote, we could make a lot of money.
Now it’s a terrible idea.
I would never do it,
but I bet we could actually sell the NFTs around.
I have other ideas to make money that I could tell you
and they’re all terrible ideas.
Yeah, including cat videos on the internet.
Okay, but some of these rules of the game,
some of these fictions are,
it seems like they’re better than others.
They have worked this far to cohere human,
to organize human collective action.
But you’re saying something about,
especially this technological age
requires modified fictions, stories of agency.
Why the Dunbar number?
And also, you know, how do you select the group of people?
You know, Dunbar’s number, I think I have the sense
that it’s overused as a kind of law,
that somehow we can only have deep human connection at this scale.
Like some of it feels like an interface problem too.
It feels like if I have the right tools,
I can deeply connect with a larger number of people.
It just feels like there’s a huge value
to interacting just in person, getting to share
traumatic experiences together,
beautiful experiences together.
There’s other experiences like that in the digital space
that you can share.
It just feels like Dunbar’s number
could be expanded significantly,
perhaps not to the level of millions and billions,
but it feels like it could be expanded.
So how do we find the right interface, you think,
for having a little bit of a collective here
that has agency?
You’re right that there’s many different ways
that we can build trust with each other.
Yeah.
My friend Joe Edelman talks about a few different ways
that, you know, mutual appreciation, trustful conflict,
just experiencing something like, you know,
there’s a variety of different things that we can do,
but all those things take time and you have to be present.
The less present you are, I mean, there’s just, again,
a no free lunch principle here.
The less present you are, the more of them you can do,
but then the less connection you build.
So I think there is sort of a human capacity issue
around some of these things.
Now, that being said, if we can use certain technologies,
so for instance, if I write a little monograph
on my view of the world,
you read it asynchronously at some point,
and you’re like, wow, Peter, this is great.
Here’s mine.
I read it.
I’m like, wow, Lex, this is awesome.
We can be friends without having to spend 10 years,
you know, figuring all this stuff out together.
We just read each other’s thing and be like,
oh yeah, this guy’s exactly in my wheelhouse
and vice versa.
And we can then, you know, connect just a few times a year
and maintain a high trust relationship.
It can be expanded a little bit,
but it also requires,
these things are not all technological in nature.
It requires the individual themselves
to have a certain level of capacity,
to have a certain lack of neuroticism, right?
If you want to use, like, the OCEAN Big Five sort of model,
people have to be pretty centered.
The less centered you are,
the fewer authentic connections you can really build
for a particular unit of time.
It just takes more time.
Other people have to put up with your crap.
Like there’s just a lot of the stuff
that you have to deal with
if you are not so well balanced, right?
So yes, we can help people get better
to where they can develop more relationships faster,
and then you can maybe expand Dunbar number by quite a bit,
but you’re not going to do it.
I think it’s going to be hard to get it beyond 10X,
kind of the rough swag of what it is, you know?
Well, don’t you think that AI systems could be an addition
to the Dunbar’s number?
So like why?
Do you count as one system or multiple AI systems?
Multiple AI systems.
So I do believe that AI systems,
for them to integrate into human society as it is now,
have to have a sense of agency.
So there has to be an individual,
because otherwise we wouldn’t relate to them.
We could engage certain kinds of individuals
to make sense of them for us and be almost like,
did you ever watch Star Trek?
Like Voyager, like there’s the Vorta,
who are like the interfaces,
the ambassadors for the Dominion.
We may have ambassadors that speak
on behalf of these systems.
They’re like the Mentats of Dune, maybe,
or something like this.
I mean, we already have this to some extent.
If you look at the biggest sort of,
I wouldn’t say AI system,
but the biggest cybernetic system in the world
is the financial markets.
It runs outside of any individual’s control,
and you have an entire stack of people on Wall Street,
Wall Street analysts to CNBC reporters, whatever.
They’re all helping to communicate what does this mean?
You know, like Jim Cramer,
like coming around and yelling and stuff.
Like all of these people are part of that lowering
of the complexity there to make sense,
you know, to help do sense making for people
at whatever capacity they’re at.
And I don’t see this changing with AI systems.
I think you would have ringside commentators
talking about all this stuff
that this AI system is trying to do over here, over here,
because it’s actually a super intelligence.
So if you want to talk about humans interfacing,
making first contact with the super intelligence,
we’re already there.
We do it pretty poorly.
And if you look at the gradient of power and money,
what happens is the people closest to it
will absolutely exploit their proximity
for personal financial gain.
So we should look at that and be like,
oh, well, that’s probably what the future
will look like as well.
But nonetheless, I mean,
we’re already doing this kind of thing.
So in the future, we can have AI systems,
but you’re still gonna have to trust people
to bridge the sense making gap to them.
See, I just feel like there could be
millions of AI systems that have agency.
When you say one super intelligence,
super intelligence in that context means
it’s able to solve particular problems extremely well.
But there’s some aspect of human like intelligence
that’s necessary to be integrated into human society.
So not financial markets,
not sort of weather prediction systems,
or I don’t know, logistics optimization.
I’m more referring to things that you interact with
on the intellectual level.
And that I think requires,
there has to be a backstory.
There has to be a personality.
I believe it has to fear its own mortality in a genuine way.
Like there has to be all,
many of the elements that we humans experience
that are fundamental to the human condition,
because otherwise we would not have
a deep connection with it.
But I don’t think having a deep connection with it
is necessarily going to stop us from building a thing
that has quite an alien intelligence aspect here.
So the other kind of alien intelligence on this planet
is the octopuses or octopodes
or whatever you wanna call them.
Octopi. Octopi, yeah.
There’s a little controversy
as to what the plural is, I guess.
But an octopus. I look forward to your letters.
Yeah, an octopus,
it really acts as a collective intelligence
of eight intelligent arms, right?
Its arms have a tremendous amount of neural density to them.
And I see if we can build,
I mean, just let’s go with what you’re saying.
If we build a singular intelligence
that interfaces with humans that has a sense of agency
so it can run the cybernetic loop
and develop its own theory of mind
as well as its theory of action,
all these things, I agree with you
that that’s the necessary components
to build a real intelligence, right?
There’s gotta be something at stake.
It’s gotta make a decision.
It’s gotta then run the OODA loop.
Okay, so we build one of those.
Well, if we can build one of those,
we can probably build 5 million of them.
So we build 5 million of them.
And if their cognitive systems are already digitized
and already kind of there,
we stick an antenna on each of them,
bring it all back to a hive mind
that maybe doesn’t make all the individual decisions
for them, but treats each one
as almost like a neuronal input
of a much higher bandwidth and fidelity,
going back to a central system
that is then able to perceive much broader dynamics
that we can’t see.
In the same way that a phased array radar, right?
You think about how phased array radar works.
It’s just sensitivity.
It’s just radars, and then it’s hypersensitivity
and really great timing between all of them.
And with a flat array,
it’s as good as a curved radar dish, right?
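To make that analogy concrete, here is a minimal NumPy sketch of the delay-and-sum idea behind a flat phased array; the element count, spacing, angles, and noise level are invented purely for illustration and are not anything Peter specifies.

import numpy as np

# Toy "phased array" sketch: a flat row of identical receivers becomes
# directional purely through relative timing (phase), no curved dish needed.
c = 3e8                      # propagation speed (m/s)
f = 1e9                      # carrier frequency (Hz)
lam = c / f
n = 16                       # number of array elements
d = lam / 2                  # half-wavelength element spacing
x = np.arange(n) * d         # element positions along a line

def steering_vector(theta):
    # Relative phase each element sees for a plane wave arriving from angle theta.
    return np.exp(1j * 2 * np.pi * x * np.sin(theta) / lam)

true_angle = np.deg2rad(25)
rx = steering_vector(true_angle)                                  # what the flat array receives
rx = rx + 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))   # a little noise

# Scan candidate directions: align (conjugate) the phases and sum coherently.
angles = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.abs(np.vdot(steering_vector(a), rx)) for a in angles]

best = np.rad2deg(angles[int(np.argmax(power))])
print(f"estimated direction of arrival: {best:.1f} degrees")      # roughly 25

The flat array "looks" in a direction only by choosing which timing offsets to undo before summing, which is the point of the dish comparison.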
So with these things,
it’s a phased array of cybernetic systems
that’ll give the centralized intelligence
much, much better, a much higher fidelity understanding
of what’s actually happening in the environment.
But the more power,
the more understanding the central super intelligence has,
the dumber the individual like fingers
of this intelligence are, I think.
I think you…
Not necessarily.
In my sense…
I don’t see why it has to be.
The argument is that the experience of the individual agent
has to have the full richness of the human like experience.
You have to be able to be driving the car in the rain,
listening to Bruce Springsteen,
and all of a sudden break out in tears
because remembering something that happened to you
in high school.
We can implant those memories
if that’s really needed.
But no, I’m…
No, but the central agency,
like I guess I’m saying for, in my view,
for intelligence to be born,
you have to have a decentralization.
Like each one has to struggle and reach.
So each one in excess of energy has to reach for order
as opposed to a central place doing so.
Have you ever read like some sci fi
where there’s like hive minds?
Like Vernor Vinge, I think, has one of these.
And then some of the stuff from the Commonwealth Saga,
the idea that you’re an individual,
but you’re connected with like a few other individuals
telepathically as well.
And together you form a swarm.
So I ask you,
what do you think the experience is like
if you are, well, a Borg, right?
If you are one, if you’re part of this hive mind,
outside of all the aesthetics, forget the aesthetics,
internally, what is your experience like?
Because I have a theory as to what that looks like.
The one question I have for you about that experience is
how much is there a feeling of freedom, of free will?
Because I obviously as a human, very unbiased,
but also somebody who values freedom and biased,
it feels like the experience of freedom is essential for
trying stuff out, to being creative
and doing something truly novel, which is at the core of.
Yeah, well, I don’t think you have to lose any freedom
when you’re in that mode.
Because I think what happens is we think,
we still think, I mean, you’re still thinking about this
in a sense of a top down command and control hierarchy,
which is not what it has to be at all.
I think the experience, so I’ll just show my cards here.
I think the experience of being a robot in that robot swarm,
a robot who has agency over their own local environment
that’s doing sense making
and reporting it back to the hive mind,
I think that robot’s experience would be one,
when the hive mind is working well,
it would be an experience of like talking to God, right?
That you essentially are reporting to,
you’re sort of saying, here’s what I see.
I think this is what’s gonna happen over here.
I’m gonna go do this thing.
Because I think if I’m gonna do this,
this will make this change happen in the environment.
And then God, she may tell you, that’s great.
And in fact, your brothers and sisters will join you
to help make this go better, right?
And then she can let your brothers and sisters know,
hey, Peter’s gonna go do this thing.
Would you like to help him?
Because we think that this will make this thing go better.
And they’ll say, yes, we’ll help him.
So the whole thing could be actually very emergent.
The sense of, what does it feel like to be a cell
in a network that is alive, that is generative?
And I think actually the feeling is serendipity.
That there’s random order, not random disorder or chaos,
but random order, just when you need to hear Bruce Springsteen,
you turn on the radio and bam, it’s Bruce Springsteen, right?
That feeling of serendipity, I feel like,
this is a bit of a flight of fancy,
but every cell in your body must have,
what does it feel like to be a cell in your body?
When it needs sugar, there’s sugar.
When it needs oxygen, there’s just oxygen.
Now, when it needs to go and do its work
and pull like as one of your muscle fibers, right?
It does its work and it’s great.
It contributes to the cause, right?
So this is all, again, a flight of fancy,
but I think as we extrapolate up,
what does it feel like to be an independent individual
with some bounded sense of freedom?
All sense of freedom is actually bounded,
but it was a bounded sense of freedom
that still lives within a network that has order to it.
And I feel like it has to be a feeling of serendipity.
So the cell, there’s a feeling of serendipity, even though.
It has no way of explaining why it’s getting oxygen
and sugar when it gets it.
So you have to, each individual component has to be too dumb
to understand the big picture.
No, the big picture is bigger than what it can understand.
But isn’t that an essential characteristic
of the individual, to be too dumb
to understand the bigger picture?
Like not dumb necessarily,
but limited in its capacity to understand.
Because the moment you understand,
I feel like that leads to, if you tell me now
that there is some bigger intelligence
controlling everything I do,
intelligence broadly defined, meaning like,
you know, even the Sam Harris thing, there’s no free will.
If I’m smart enough to truly understand that that’s the case,
that’s gonna, I don’t know if I.
We have philosophical breakdown, right?
Because we’re in the West and we’re pumped full of this stuff
of like, you are a golden, fully free individual
with all your freedoms and all your liberties
and go grab a gun and shoot whatever you want to.
No, it’s actually, you don’t actually have a lot of these,
you’re not unconstrained,
but the areas where you can manifest agency,
you’re free to do those things.
You can say whatever you want on this podcast.
You can create a podcast, right?
Yeah.
You’re not, I mean, you have a lot of this kind of freedom,
but even as you’re doing this, you are actually,
I guess the denouement of this is that
we are already intelligent agents in such a system, right?
In that one of these robots,
one of 5 million little swarm robots
or one of the Borg,
they’re just posting on internal bulletin board.
I mean, maybe the Borg cube
is just a giant Facebook machine floating in space
and everyone’s just posting on there.
They’re just posting really fast and like, oh yeah.
It’s called the metaverse now.
That’s called the metaverse, that’s right.
Here’s the enterprise.
Maybe we should all go shoot it.
Yeah, everyone upvotes and they’re gonna go shoot it, right?
But we already are part of a human online
collaborative environment
and collaborative sensemaking system.
It’s not very good yet.
It’s got the overhangs of zombie sensemaking institutions
all over it, but as that washes away
and as we get better at this,
we are going to see humanity improving
at speeds that are unthinkable in the past.
And it’s not because anyone’s freedoms were limited.
In fact, the open source,
and we started this with open source software, right?
The collaboration, what the internet surfaced
was the ability for people all over the world
to collaborate and produce some of the most
foundational software that’s in use today, right?
That entire ecosystem was created
by collaborators all over the place.
So these online kind of swarm kind of things
are not novel.
It’s just, I’m just suggesting that future AI systems,
if you can build one smart system,
you have no reason not to build multiple.
If you build multiple,
there’s no reason not to integrate them all
into a collective sensemaking substrate.
And that thing will certainly have emergent intelligence
that none of the individuals
and probably not any of the human designers
will be able to really put a bow around and explain.
But in some sense, would that AI system
still be able to go to, like, rural Texas,
buy a ranch, go off the grid, go full survivalist?
Like, can you disconnect from the hive mind?
You may not want to.
So to be effective, to be intelligent.
You have access to way more intelligence capability
if you’re plugged into five million other
really, really smart cyborgs.
Why would you leave?
So like there’s a word control that comes to mind.
So it doesn’t feel like control,
like overbearing control.
It’s just knowledge.
I think systems, well, this is to your point.
I mean, look at how much,
how uncomfortable you are with this concept, right?
I think systems that feel like overbearing control
will not evolutionarily win out.
I think systems that give their individual elements
the feeling of serendipity and the feeling of agency
that that will, those systems will win.
But that’s not to say that there will not be
emergent higher level order on top of it.
And that’s the thing, that’s the philosophical breakdown
that we’re staring right at,
which is in the Western mind,
I think there’s a very sharp delineation
between explicit control,
Cartesian, like what is the vector?
Where is the position?
Where is it going?
It’s completely deterministic.
And kind of this idea that things emerge.
Everything we see is the emergent patterns
of other things.
And there is agency when there’s extra energy.
So you have spoken about a kind of meaning crisis
that we’re going through.
But it feels like since we invented sex and death,
we broadly speaking,
we’ve been searching for a kind of meaning.
So it feels like a human civilization
has been going through a meaning crisis
of different flavors throughout its history.
Why is, how is this particular meaning crisis different?
Or is it really a crisis and it wasn’t previously?
What’s your sense?
For a lot of human history,
there wasn’t so much a meaning crisis.
There was just a like food
and not getting eaten by bears crisis, right?
Once you get to a point where you can make food,
there was the like not getting killed
by other humans crisis.
So sitting around wondering what is it all about,
it’s actually a relatively recent luxury.
And to some extent, the meaning crisis coming out of that
is precisely because, well, it’s not precisely because,
I believe that meaning is the consequence of
when we make consequential decisions,
it’s tied to agency, right?
When we make consequential decisions,
that generates meaning.
So if we make a lot of decisions,
but we don’t see the consequences of them,
then it feels like what was the point, right?
But if there are all these big things happening
that we don’t see the consequences of,
and we’re just along for the ride,
then it also does not feel very meaningful.
Meaning, as far as I can tell,
this is my working definition circa 2021,
is generally the result of a person
making a consequential decision,
acting on it and then seeing the consequences of it.
So historically, just when humans are in survival mode,
you’re making consequential decisions all the time.
So there’s not a lack of meaning
because like you either got eaten or you didn’t, right?
You got some food and that’s great, you feel good.
Like these are all consequential decisions.
Only after fossil fuels and the Industrial Revolution
could we create a massive leisure class.
People could sit around not being threatened by bears,
not starving to death,
making decisions somewhat,
but a lot of times not seeing the consequences
of any decisions they make.
The general sort of sense of anomie,
I think that is the French term for it,
in the wake of the consumer society,
in the wake of mass media telling everyone,
hey, choosing between Hermes and Chanel
is a meaningful decision.
No, it’s not.
I don’t know what either of those mean.
Oh, they’re high end luxury purses and crap like that.
But the point is that we give people the idea
that consumption is meaning,
that making a choice of this team versus that team,
spectating has meaning.
So we produce all of these different things
that are as if meaning, right?
But really making a decision that has no consequences for us.
And so that creates the meaning crisis.
Well, you’re saying choosing between Chanel
and the other one has no consequence.
I mean, why is one more meaningful than the other?
It’s not that it’s more meaningful than the other.
It’s that you make a decision between these two brands
and you’re told this brand will make me look better
in front of other people.
If I buy this brand of car,
if I wear that brand of apparel, right?
Like a lot of decisions we make are around consumption,
but consumption by itself doesn’t actually yield meaning.
Gaining social status does provide meaning.
So that’s why in this era of abundant production,
so many things turn into status games.
The NFT kind of explosion is a similar kind of thing.
Everywhere there are status games
because we just have so much excess production.
But aren’t those status games a source of meaning?
Like why do the games we play have to be grounded
in physical reality like they are
when you’re trying to run away from lions?
Why can’t we, in this virtuality world, on social media,
why can’t we play the games on social media,
even the dark ones?
Right, we can, we can.
But you’re saying that’s creating a meaning crisis.
Well, there’s a meaning crisis
in that there’s two aspects of it.
Number one, playing those kinds of status games
oftentimes requires destroying the planet
because it ties to consumption,
consuming the latest and greatest version of a thing,
buying the latest limited edition sneaker
and throwing out all the old ones.
Maybe they keep the old ones,
but the number of sneakers we have to cut up
and destroy every year
to create artificial scarcity for the next generation, right?
This is kind of stuff that’s not great.
It’s not great at all.
So conspicuous consumption fueling status games
is really bad for the planet, not sustainable.
The second thing is you can play these kinds of status games,
but then what it does is it renders you captured
to the virtual environment.
The status games that really wealthy people are playing
are all around the hard resources
where they’re gonna build the factories,
they’re gonna have the fuel in the rare earths
to make the next generation of robots.
They’re then going to run circles
around you and your children.
So that’s another reason not to play
those virtual status games.
So you’re saying ultimately the big picture game is won
by people who have access or control
over actual hard resources.
So you can’t, you don’t see a society
where most of the games are played in the virtual space.
They’ll be captured in the physical space.
It all builds.
It’s just like the stack of human being, right?
If you only play the game at the cultural
and then intellectual level,
then the people with the hard resources
and access to layer zero physical are going to own you.
But isn’t money not connected to,
or less and less connected to hard resources
and money still seems to work?
It’s a virtual technology.
There’s different kinds of money.
Part of the reason that some of this stuff is able
to go a little unhinged is because the big sovereignties
where one spends money and uses money
and plays money games and inflates money,
their ability to adjudicate the physical resources,
the hard resources, the land, and things like that,
those have not been challenged in a very long time.
So, you know, we went off the gold standard.
Most money is not connected to physical resources.
It’s an idea.
And that idea is very closely connected to status.
But it’s also tied to like, it’s actually tied to law.
It is tied to some physical hard things
so you have to pay your taxes.
Yes, so it’s always at the end going to be connected
to the blockchain of physical reality.
So in the case of law and taxes, it’s connected to government
and government is what violence is the,
I’m playing with stacks of devil’s advocates here
and popping one devil off the stack at a time.
Isn’t ultimately, of course,
it’ll be connected to physical reality,
but just because people control the physical reality,
it doesn’t mean the status.
I guess LeBron James could, in theory, make more money
than the owners of the teams.
And to me, that’s a virtual idea.
So somebody else constructed a game
and now you’re playing in the virtual space of the game.
So it just feels like there could be games where status,
we build realities that give us meaning in the virtual space.
I can imagine such things being possible.
Oh yeah, okay, so I see what you’re saying.
I think I see what you’re saying there
with the idea there, I mean, we’ll take the LeBron James side
and put in like some YouTube influencer.
Yes, sure.
So the YouTube influencer, it is status games,
but at a certain level, it precipitates into real dollars
and into like, well, you look at Mr. Beast, right?
He’s like sending off half a million dollars
worth of fireworks or something, right?
In a YouTube video.
And also like saving, like saving trees and so on.
Sure, right, trying to fund a million trees
with Mark Rober or whatever it was.
Yeah, like it’s not that those kinds of games
can’t lead to real consequences.
It’s that for the vast majority of people in consumer culture,
they are incented by, I would say mostly,
I’m thinking about middle class consumers,
they’re incented by advertisements,
they’re incented by their memetic environment
to treat the purchasing of certain things,
the need to buy the latest model, whatever,
the need to appear, however,
the need to pursue status games as a driver of meaning.
And my point would be that it’s a very hollow
driver of meaning.
And that is what creates a meaning crisis.
Because at the end of the day,
it’s like eating a lot of empty calories, right?
Yeah, it tasted good going down, a lot of sugar,
but man, it did not, it was not enough protein
to help build your muscles.
And you kind of feel that in your gut.
And I think that’s, I mean, so all this stuff aside
and setting aside our discussion on currency,
which I hope we get back to,
that’s what I mean about the meaning crisis,
part of it being created by the fact that we don’t,
we’re not encouraged to have more and more
direct relationships.
We’re actually alienated from relating to,
even our family members sometimes, right?
We’re encouraged to relate to brands.
We’re encouraged to relate to these kinds of things
that then tell us to do things
that are really of low consequence.
And that’s where the meaning crisis comes from.
So the role of technology in this,
so there’s somebody you mentioned, Jacques Ellul,
his view of technology, he warns about the towering piles
of technique, which I guess is a broad idea of technology.
So I think, correct me if I’m wrong, for him,
technology is bad, it moves us away from human nature,
and it’s ultimately destructive.
My question, broadly speaking, this meaning crisis,
can technology, what are the pros and cons of technology?
Can it be a good?
Yeah, I think it can be.
I certainly think it can be a good thing.
I certainly draw on some of Ellul’s ideas
and I think some of them are pretty good.
But the way he defines technique is,
well, also Simondon as well.
I mean, he speaks to the general mentality of efficiency,
homogenized processes, homogenized production,
homogenized labor to produce homogenized artifacts
that then are not actually,
they don’t sit well in the environment.
Essentially, you can think of it as the antonym of craft.
Whereas a craftsman will come to a problem,
maybe a piece of wood and they make into a chair.
It may be a site to build a house or build a stable
or build whatever.
And they will consider how to bring various things in
to build something well contextualized
that’s in right relationship with that environment.
But the way we have driven technology
over the last 100 to 150 years is not that at all.
It is how can we make sure the input materials
are homogenized, cut to the same size,
diluted and doped to exactly the right alloy concentrations.
How do we create machines that then consume exactly
the right kind of energy to be able to run
at this high speed to stamp out the same parts,
which then go out the door,
everyone gets the same Tickle Me Elmo.
And the reason why everyone wants it
is because we have broadcasts that tells everyone
this is the cool thing.
So we homogenize demand, right?
And, like Baudrillard and other critics
of modernity coming from that direction,
the Situationists as well.
It’s that their point is that at this point in time,
consumption is the thing that drives
a lot of the economic stuff, not the need,
but the need to consume and build status games on top.
So we have homogenized, when we discovered,
I think this is really like Bernays and stuff, right?
In the early 20th century, we discovered we can create,
we can create demand, we can create desire
in a way that was not possible before
because of broadcast media.
And not only do we create desire,
we don’t create a desire for each person
to connect to some bespoke thing,
to build a relationship with their neighbor or their spouse.
We are telling them, you need to consume this brand,
you need to drive this vehicle,
you gotta listen to this music,
have you heard this, have you seen this movie, right?
So creating homogenized demand makes it really cheap
to create homogenized product.
And now you have economics of scale.
So we make the same Tickle Me Elmo,
give it to all the kids and all the kids are like,
hey, I got a tickle me Elmo, right?
So this is ultimately where this ties in then
to runaway hypercapitalism is that we then,
capitalism is always looking for growth.
It’s always looking for growth
and growth only happens at the margins.
So you have to squeeze more and more demand out.
You gotta make it cheaper and cheaper
to make the same thing,
but tell everyone they’re still getting meaning from it.
You’re still like, this is still your Tickle Me Elmo, right?
And we see little bits of critiques of this
dripping into popular culture.
You see it sometimes, like when Buzz Lightyear
walks into the toy store and he’s like,
oh my God, I’m just a toy.
There’s hundreds of other Buzz Lightyears
just like me, right?
That is, I think, a fun Pixar critique
on this homogenization dynamic.
I agree with you on most of the things you’re saying.
So I’m playing devil’s advocate here,
but this homogenized machine of capitalism
is also the thing that is able to fund,
if channeled correctly, innovation, invention,
and development of totally new things
that in the best possible world,
create all kinds of new experiences that can enrich lives,
the quality of lives for all kinds of people.
So isn’t this the machine
that actually enables the experiences
and more and more experiences that will then give meaning?
It has done that to some extent.
I mean, it’s not all good or bad in my perspective.
We can always look backwards
and offer a critique of the path we’ve taken
to get to this point in time.
But that’s a different, that’s somewhat different
and informs the discussion,
but it’s somewhat different than the question
of where do we go in the future, right?
Is this still the same rocket we need to ride
to get to the next point?
Will it even get us to the next point?
Well, how does this, so you’re predicting the future,
how does it go wrong in your view?
We have the mechanisms,
we have now explored enough technologies
to where we can actually, I think, sustainably produce
what most people in the world need to live.
We have also created the infrastructures
to allow continued research and development
of additional science and medicine
and various other kinds of things.
The organizing principles that we use
to govern all these things today have been,
a lot of them have been just inherited
from honestly medieval times.
Some of them have been refactored a little bit
in the industrial era,
but a lot of these modes of organizing people
are deeply problematic.
And furthermore, they’re rooted in,
I think, a very industrial mode perspective on human labor.
And this is one of those things,
I’m gonna go back to the open source thing.
There was a point in time when,
well, let me ask you this.
If you look at the core SciPy sort of collection of libraries,
so SciPy, NumPy, Matplotlib, right?
There’s IPython Notebook, let’s throw pandas in there,
scikit-learn, a few of these things.
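For context, here is a minimal, hypothetical sketch of how those libraries typically fit together in one workflow; the data is randomly generated and the column names and numbers are invented for illustration, not anyone’s real pipeline.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# NumPy generates some synthetic data, pandas holds it as a table.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 3.0 * df["x"] + rng.normal(scale=0.5, size=200)

# scikit-learn fits a simple model on the table.
model = LinearRegression().fit(df[["x"]], df["y"])
print("fitted slope:", model.coef_[0])   # close to 3.0

# pandas plus Matplotlib visualize the data and the fit.
df.plot.scatter(x="x", y="y")
plt.plot(df["x"], model.predict(df[["x"]]), color="red")
plt.show()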
How much value do you think, economic value,
would you say they drive in the world today?
That’s one of the fascinating things
about talking to you and Travis is like,
it’s a measure, it’s like a…
At least a billion dollars a day, maybe?
A billion dollars, sure.
I mean, it’s like, it’s similar question of like,
how much value does Wikipedia create?
Right.
It’s like, all of it, I don’t know.
Well, I mean, if you look at our systems,
when you do a Google search, right?
Now, some of that stuff runs through TensorFlow,
but when you look at Siri,
when you do credit card transaction fraud,
like just everything, right?
Every intelligence agency under the sun,
they’re using some aspect of these kinds of tools.
So I would say that these create billions of dollars
of value.
Oh, you mean like direct use of tools
that leverage this data?
Yes, direct, yeah.
Yeah, even that’s billions a day, yeah.
Yeah, right, easily, I think.
Like the things they could not do
if they didn’t have these tools, right?
Yes.
So that’s billions of dollars a day, great.
I think that’s about right.
Now, if we take, how many people did it take
to make that, right?
And there was a point in time, not anymore,
but there was a point in time when they could fit
in a van.
I could have fit them in my Mercedes Sprinter, right?
And so if you look at that, like, holy crap,
literally a van of maybe a dozen people
could create value to the tune of billions of dollars a day.
What lesson do you draw from that?
Well, here’s the thing.
What can we do to do more of that?
Like that’s open source.
The way I’ve talked about this in other environments is
when we use generative participatory crowdsourced
approaches, we unlock human potential
at a level that is better than what capitalism can do.
I would challenge anyone to go and try to hire
the right 12 people in the world
to build that entire stack
the way those 12 people did that, right?
They would be very, very hard pressed to do that.
If a hedge fund could just hire a dozen people
and create like something that is worth
billions of dollars a day,
every single one of them would be racing to do it, right?
But finding the right people,
fostering the right collaborations,
getting it adopted by the right other people
to then refine it,
that is a thing that was organic in nature.
That took crowdsourcing.
That took a lot of the open source ethos
and it took the right kinds of people, right?
Now those people who started that never said,
I need to have a part of a multi billion dollar a day
sort of enterprise.
They’re like, I’m doing this cool thing
to solve my problem for my friends, right?
So the point of telling the story
is to say that our way of thinking about value,
our way of thinking about allocation of resources,
our ways of thinking about property rights
and all these kinds of things,
they come from finite game, scarcity mentality,
medieval institutions.
As we are now entering,
to some extent we’re sort of in a post scarcity era,
although some people are hoarding a whole lot of stuff.
We are at a point where if not now soon,
we’ll be in a post scarcity era.
The question of how we allocate resources
has to be revisited at a fundamental level
because the kind of software these people built,
the modalities of those human ecologies
that built that software,
it treats software as un-property.
Actually, sharing creates value.
Restricting and forking reduces value.
So that’s different than any other physical resource
that we’ve ever dealt with.
It’s different than how most corporations
treat software IP, right?
So if treating software in this way
created this much value so efficiently, so cheaply,
because feeding a dozen people for 10 years
is really cheap, right?
That’s the reason I care about this right now,
because looking forward,
when we can automate a lot of labor,
the programming for your robot
in your neck of the woods, in your part of the Amazon,
to build something sustainable for you
and your tribe, to deliver the right medicines,
to take care of the kids,
that’s just software, that’s just code
that could be totally open sourced, right?
So we can actually get to a mode
where all of this additional generative things
that humans are doing,
they don’t have to be wrapped up in a container
and then we charge for all the exponential dynamics
out of it.
That’s what Facebook did.
That’s what modern social media did, right?
Because the old internet was connecting people just fine.
So Facebook came along and said,
well, anyone can post a picture,
anyone can post some text
and we’re gonna amplify the crap out of it to everyone else.
And it exploded this generative network
of human interaction.
And then it said, how do I make money off that?
Oh yeah, I’m gonna be a gatekeeper
on everybody’s attention.
And that’s how I’m gonna make money.
So how do we create more than one van?
How do we have millions of vans full of people
that create NumPy, SciPy, that create Python?
So the story of those people is often they have
some kind of job outside of this.
This is what they’re doing for fun.
Don’t you need to have a job?
Don’t you have to be connected,
plugged in to the capitalist system?
Isn’t that what,
isn’t this consumerism,
the engine that results in the individuals
that kind of take a break from it every once in a while
to create something magical?
Like at the edges is where the innovation happens.
There’s a surplus, right, this is the question.
Like if everyone were to go and run their own farm,
no one would have time to go and write NumPy, SciPy, right?
Maybe, but that’s what I’m talking about
when I say we’re maybe at a post scarcity point
for a lot of people.
The question that we’re never encouraged to ask
in a Super Bowl ad is how much do you need?
How much is enough?
Do you need to have a new car every two years, every five?
If you have a reliable car,
can you drive one for 10 years, is that all right?
I had a car for 10 years and it was fine.
Your iPhone, do you have to upgrade every two years?
I mean, it’s sort of, you’re using the same apps
you did four years ago, right?
This should be a Super Bowl ad.
This should be a Super Bowl ad, that’s great.
Maybe somebody. Do you really need a new iPhone?
Maybe one of our listeners will fund something like this
of like, no, but just actually bringing it back,
bringing it back to actually the question
of what do you need?
How do we create the infrastructure
for collectives of people to live on the basis
of providing what we need, meeting people’s needs
with a little bit of excess to handle emergencies,
things like that, pulling our resources together
to handle the really, really big emergencies,
somebody with a really rare form of cancer
or some massive fire sweeps through half the village
or whatever, but can we actually unscale things
and solve for people’s needs
and then give them the capacity to explore
how to be the best version of themselves?
And for Travis, that was throwing away his shot at tenure
in order to write NumPy.
For others, there is a saying in the SciPy community
that SciPy advances one failed postdoc at a time.
And that’s, we can do these things.
We can actually do this kind of collaboration
because code, software, information, organization,
that’s cheap.
Those bits are very cheap to fling across the oceans.
So you mentioned Travis.
We’ve been talking and we’ll continue to talk
about open source.
Maybe you can comment.
How did you meet Travis?
Who is Travis Oliphant?
What’s your relationship been like through the years?
Where did you work together?
How did you meet?
What’s the present and the future look like?
Yeah, so the first time I met Travis
was at a SciPy conference in Pasadena.
Do you remember the year?
2005.
I was working at, again, at Enthought,
working on scientific computing consulting.
And a couple of years later,
he joined us at Enthought, I think 2007.
And he came in as the president.
One of the founders of Enthought was the CEO, Eric Jones.
And we were all very excited that Travis was joining us
and that was great fun.
And so I worked with Travis
on a number of consulting projects
and we worked on some open source stuff.
I mean, it was just a really, it was a good time there.
And then…
It was primarily Python related?
Oh yeah, it was all Python, NumPy, SciPy consulting
kind of stuff.
Towards the end of that time,
we started getting called into more and more finance shops.
They were adopting Python pretty heavily.
I did some work at, like, a high frequency trading shop,
working on some stuff.
And then we worked together on some,
at a couple of investment banks in Manhattan.
And so we started seeing that there was a potential
to take Python in the direction of business computing,
more than just being this niche like MATLAB replacement
for big vector computing.
What we were seeing was, oh yeah,
you could actually use Python as a Swiss army knife
to do a lot of shadow data transformation kind of stuff.
So that’s when we realized the potential is much greater.
And so we started Anaconda,
I mean, it was called Continuum Analytics at the time,
but we started in January of 2012
with a vision of shoring up the parts of Python
that needed to get expanded to handle data at scale,
to do web visualization, application development, et cetera.
And that was that, yeah.
So he was CEO and I was president for the first five years.
And then we raised some money, and then the board
sort of put in a new CEO.
They hired a kind of professional CEO.
And then Travis, you laugh at that.
I took over the CTO role.
Travis then left after a year to do his own thing,
to do Quansight, which was more oriented
around some of the bootstrap years that we did at Continuum
where it was open source and consulting.
It wasn’t sort of like gung ho product development.
And it wasn’t focused on,
we accidentally stumbled
into the package management problem at Anaconda,
but we had a lot of other visions of other technology
that we built in the open source.
And Travis was really trying to push,
again, the frontiers of numerical computing,
vector computing,
handling things like auto differentiation and stuff
intrinsically in the open ecosystem.
So I think that’s kind of the direction
he’s working on in some of his work.
We remain great friends and colleagues and collaborators,
even though he’s no longer day to day working at Anaconda,
but he gives me a lot of feedback
about this and that and the other.
What’s a big lesson you’ve learned from Travis
about life or about programming or about leadership?
Wow, there’s a lot.
There’s a lot.
Travis is a really, really good guy.
He really, his heart is really in it.
He cares a lot.
I’ve gotten that sense having to interact with him.
It’s so interesting.
Such a good human being.
He’s a really good dude.
And he and I, it’s so interesting.
We come from very different backgrounds.
We’re quite different as people,
but I think we can like not talk for a long time
and then be in a conversation
and be eye to eye on like 90% of things.
And so he’s someone who I believe
no matter how much fog settles in over the ocean,
his ship, my ship are pointed
sort of in the same direction of the same star.
Wow, that’s a beautiful way to phrase it.
No matter how much fog there is,
we’re pointed at the same star.
Yeah, and I hope he feels the same way.
I mean, I hope he knows that over the years now.
We both care a lot about the community.
For someone who cares so deeply,
I would say this about Travis that’s interesting.
For someone who cares so deeply about the nerd details
of like type system design and vector computing
and efficiency of expressing this and that and the other,
memory layouts and all that stuff,
he cares even more about the people
in the ecosystem, the community.
And I have a similar kind of alignment.
I care a lot about the tech, I really do.
But for me, the beauty of what this human ecology
has produced is I think a touchstone.
It’s an early version, we can look at it and say,
how do we replicate this for humanity at scale?
What this open source collaboration was able to produce?
How can we be generative in human collaboration
moving forward and create that
as a civilizational kind of dynamic?
Like, can we seize this moment to do that?
Because like a lot of the other open source movements,
it’s all nerds nerding out on code for nerds.
And this because it’s scientists,
because it’s people working on data,
that all of it faces real human problems.
I think we have an opportunity
to actually make a bigger impact.
Is there a way for this kind of open source vision
to make money?
Absolutely.
To fund the people involved?
Is that an essential part of it?
It’s hard, but we’re trying to do that
in our own way at Anaconda,
because we know that business users,
as they use more of the stuff, they have needs,
like business specific needs around security, provenance.
They really can’t tell their VPs and their investors,
hey, our data scientists
are installing random packages from who knows where
and running them on customer data.
So they have to have someone to talk to.
And that’s what Anaconda does.
So we are a governed source of packages for them,
and that’s great, that makes some money.
We take a percentage of our revenues
and write that as a dividend for the open source community.
But beyond that, I really see the development
of a marketplace for people to create notebooks,
models, data sets, curation of these different kinds
of things, and to really have
a long tail marketplace dynamic with that.
Can you speak about this problem
that you stumbled into of package management,
Python package management?
What is that?
A lot of people speak very highly of Conda,
which is part of Anaconda, which is a package manager.
There’s a ton of packages.
So first, what are package managers?
And second, what was there before?
What is pip?
And why is Conda more awesome?
The package problem is this, which is that
in order to do numerical computing efficiently with Python,
there are a lot of low level libraries
that need to be compiled, compiled with a C compiler
or C++ compiler or Fortran compiler.
They need to not just be compiled,
but they need to be compiled with all of the right settings.
And oftentimes those settings are tuned
for specific chip architectures.
And when you add GPUs to the mix,
when you look at different operating systems,
you may be on the same chip,
but if you’re running Mac versus Linux versus Windows
on the same x86 chip, you compile and link differently.
All of this complexity is beyond the capability
of most data scientists to reason about.
And it’s also beyond what most of the package developers
want to deal with too.
Because if you’re a package developer,
you’re like, I code on Linux.
This works for me, I’m good.
It is not my problem to figure out how to build this
on an ancient version of Windows, right?
That’s just simply not my problem.
So what we end up with is a creator economy,
a very creative crowdsourced environment,
where people want to use this stuff, but they can’t.
And so we ended up creating a new set of technologies,
a build recipe system, a build system,
and an installer system that,
well, to put it simply,
is able to build these packages correctly
on each of these different kinds of platforms
and operating systems,
and make it so that when people want to install something,
they can, it’s just one command.
They don’t have to set up a big compiler system
and do all these things.
So when it works well, it works great.
Now, the difficulty is we have literally thousands
of people writing code in the ecosystem,
building all sorts of stuff, and each person writing code
may take a dependency on something else.
And so you have all this web,
incredibly complex web of dependencies.
So installing the correct package
for any given set of packages you want,
getting that right subgraph is an incredibly hard problem.
And again, most data scientists
don’t want to think about this.
They’re like, I want to install NumPy and pandas.
I want this version of some like geospatial library.
I want this other thing.
Like, why is this hard?
These exist, right?
And it is hard because it’s, well,
you’re installing this on a version of Windows, right?
And half of these libraries are not built for Windows
or the latest version isn’t available,
but the old version was.
And if you go to the old version of this library,
that means you need to go to a different version
of that library.
And so the Python ecosystem,
by virtue of being crowdsourced,
we were able to fill a hundred thousand different niches.
But then we also suffer this problem
that, because it’s crowdsourced,
it’s like a tragedy of the commons, right?
No one really wants to support
their thousands of other dependencies.
So we end up sort of having to do a lot of this.
And of course the conda-forge community
also steps up as an open source community that,
you know, maintains some of these recipes.
That’s what conda does.
Now, pip is a tool that,
to some extent, came along as an easier way
for the Python developers writing Python code
that didn’t have as much compiled, you know, stuff,
to install different packages.
And what ended up happening in the Python ecosystem
was that a lot of the core Python and web Python developers,
they never ran into any of this compilation stuff at all.
We even have, you know, on video,
Guido van Rossum saying,
you know what, the scientific community’s packaging problems
are just too exotic and different.
I mean, you’re talking about Fortran compilers, right?
Like you guys just need to build your own solution
perhaps, right?
So the core Python community went
and built its own sort of packaging technologies,
not really contemplating the complexity
of this stuff over here.
And so now we have the challenge where
you can pip install some things, some libraries,
if you just want to get started with them,
you can pip install TensorFlow and that works great.
The instant you want to also install some other packages
that use different versions of NumPy,
or some graphics library, or some OpenCV thing,
or some other thing, you now run into dependency hell,
because OpenCV can be built against a different version of libjpeg
over here than PyTorch over there.
And if you want to use GPU acceleration,
they all have to use the same underlying drivers
and the same CUDA libraries.
So it gets to be very gnarly
and it’s a level of technology
that both the makers and the users
don’t really want to think too much about.
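To make the shape of that problem concrete, here is a toy, purely illustrative sketch of what a dependency solver has to do: pick one version of each requested package such that their shared low-level pins agree. The package index, names, and version numbers below are made up; real solvers (conda has historically used SAT-based solvers) also handle version ranges, channels, and platform tags, but the combinatorial shape of the problem is the same.

```python
# Toy sketch of dependency resolution; the index, names, and versions are made up.
from itertools import product

INDEX = {  # package -> {version: {low-level dependency: pinned version}}
    "opencv":  {"4.8": {"libjpeg": "9"}, "4.5": {"libjpeg": "8"}},
    "pytorch": {"2.1": {"libjpeg": "8"}},
}

def solve(requested):
    # Brute force: try every combination of versions and keep the first one
    # whose pins on shared dependencies are mutually consistent.
    for combo in product(*(INDEX[pkg].items() for pkg in requested)):
        pins, ok = {}, True
        for _version, deps in combo:
            for dep, ver in deps.items():
                if pins.setdefault(dep, ver) != ver:
                    ok = False  # two packages demand different versions
        if ok:
            return dict(zip(requested, (version for version, _ in combo)))
    return None  # no consistent subgraph exists

print(solve(["opencv", "pytorch"]))  # {'opencv': '4.5', 'pytorch': '2.1'}
```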
And that’s where you step in and try to solve this.
We try to solve it.
Subgraph problems.
How much is that?
I mean, you said that they don’t want to think about it,
but how much of it is on the developer,
and on providing them tools to be a little bit more clear
about the subgraph of dependencies that’s necessary?
It is getting to a point where we do have to think about,
look, can we pull some of the most popular packages together
and get them to work on a coordinated release timeline,
get them to build against the same test matrix,
et cetera, et cetera, right?
And there is a little bit of dynamic around this,
but again, it is a volunteer community.
Yeah.
You know, people working on these different projects
have their own timelines
and their own things they’re trying to meet.
So we end up trying to pull these things together.
And then it’s this incredible thing,
and I would recommend, just as a business tip,
don’t ever go into a business
where, when your hard work works, you’re invisible,
and when it breaks because of someone else’s problem,
you get flagged for it.
Because that’s our situation, right?
When something doesn’t conda install properly,
usually it’s some upstream issue,
but it looks like conda is broken.
It looks like, you know, Anaconda screwed something up.
When things do work though, it’s like, oh yeah, cool.
It worked.
Assuming naturally, of course,
it’s very easy to make that work, right?
So we end up in this kind of problematic scenario,
but it’s okay because I think we’re still,
you know, our heart’s in the right place.
We’re trying to move this forward
as a community sort of affair.
I think most of the people in the community
also appreciate the work we’ve done over the years
to try to move these things forward
in a collaborative fashion, so.
One of the subgraphs of dependencies
that became super complicated
is the move from Python 2 to Python 3.
So there’s all these ways to mess
with these kinds of ecosystems of packages and so on.
So I just want to ask you about that particular one.
What do you think about the move from Python 2 to 3?
Why did it take so long?
What were, from your perspective,
just seeing the packages all struggle
and the community all struggle through this process,
what lessons do you take away from it?
Why did it take so long?
Looking back, some people perhaps underestimated
how much adoption Python 2 had.
I think some people also underestimated how much,
or they overestimated how much value
some of the new features in Python 3 really provided.
Like the things they really loved about Python 3
just didn’t matter to some of these people in Python 2.
Because this change was happening as Python and SciPy
were really starting to take off,
like a hockey stick of adoption
in the early data science era, in the early 2010s.
A lot of people were learning and onboarding
in whatever just worked.
And the teachers were like,
well, yeah, these libraries I need
are not supported in Python 3 yet,
I’m going to teach you Python 2.
Took a lot of advocacy to get people
to move over to Python 3.
So I think it wasn’t any particular single thing,
but it was one of those death by a dozen cuts,
which just really made it hard to move off of Python 2.
And also Python 3 itself,
as they were kind of breaking things
and changing things around
and reorganizing the standard library,
there’s a lot of stuff that was happening there
that kept giving people an excuse to say,
I’ll put off till the next version.
2 is working fine enough for me right now.
So I think that’s essentially what happened there.
And I will say this though,
the strength of the Python data science movement,
I think is what kept Python alive in that transition.
Because a lot of languages have died
and left their user bases behind.
If there wasn’t the use of Python for data,
there’s a good chunk of Python users
that during that transition,
would have just left for Go and Rust and stayed.
In fact, some people did.
They moved to Go and Rust and they just never looked back.
The fact that we were able to grow by millions of users,
the Python data community,
that is what kept the momentum for Python going.
And now the usage of Python for data is over 50%
of the overall Python user base.
So I’m happy to debate that on stage somewhere,
I don’t know if anyone really wants to take issue
with that statement, but from where I sit,
I think that’s true.
The statement there, the idea, is that the switch
from Python 2 to Python 3 would have probably
destroyed Python if it didn’t also coincide with Python,
for whatever reason,
just overtaking the data science community,
anything that processes data.
So the timing was perfect, in that this maybe
imperfect decision was coupled with great timing
on the value of data in our world.
I would say the troubled execution of a good decision.
It was a decision that was necessary.
It’s possible if we had more resources,
we could have done in a way that was a little bit smoother,
but ultimately, the arguments for Python 3,
I bought them at the time and I buy them now, right?
Having great text handling is like a nonnegotiable
table stakes thing you need to have in a language.
So that’s great, but the execution,
Python is volunteer driven.
It’s, like, now the most popular language on the planet,
but it’s all literally volunteers.
So the lack of resources meant
they had to do things in a very hamstrung way.
And I think to carry the Python momentum in the language
through that time, the data movement
was a critical part of that.
So some of it is carrot and stick. I actually have to
shamefully admit that it took me a very long time
to switch from Python 2 to Python 3,
because I’m a machine learning person.
For the longest time,
you could just do fine with Python 2.
Right.
But I think the moment where I switched everybody
I worked with, and switched myself for small projects
and big ones, is when NumPy finally announced
that they were going to end support, like in 2020
or something like that.
Right.
So that’s when I realized, oh,
this is going to end.
Right.
So that’s the stick, that’s not a carrot.
That’s not, so for the longest time it was carrots.
It was like all of these packages were saying,
okay, we have Python 3 support now, come join us.
We have Python 2 and Python 3, but when NumPy,
one of the packages I sort of love and depend on
said like, nope, it’s over.
That’s when I decided to switch.
I wonder if you think it was possible much earlier
for somebody like NumPy or some major package
to step out into the cold and say, like, we’re ending this.
Well, it’s a chicken and egg problem too, right?
You don’t want to cut off a lot of users
unless you see the user momentum going too.
So the decisions for the scientific community
for each of the different projects,
you know, there’s not a monolith.
Some projects are like, we’ll only be releasing
new features on Python 3.
And that was more of a sticky carrot, right?
A firm carrot, if you will, a firm carrot.
A stick shaped carrot.
But then for others, yeah, NumPy in particular,
cause it’s at the base of the dependency stack
for so many things, that was the final stick.
That was a stick shaped stick.
People were saying, look, if I have to keep maintaining
my releases for Python 2, that’s that much less energy
that I can put into making things better
for the Python 3 folks or in my new version,
which is of course going to be Python 3.
So people were also getting kind of pulled by this tension.
So the overall community sort of had a lot of input
into when the NumPy core folks decided
that they would end of life on Python 2.
So, these numbers are a little bit loose,
but there are about 10 million Python programmers
in the world. You could argue that number,
but let’s say 10 million.
The source I was actually looking at
said 27 million total programmers, developers in the world.
You mentioned in a talk that changes need to be made
for there to be 100 million Python programmers.
So first of all, do you see a future
where there’s 100 million Python programmers?
And second, what kind of changes need to be made?
So Anaconda and Miniconda get downloaded
about a million times a week.
So I think the idea that there’s only
10 million Python programmers in the world
is a little bit undercounting.
There are a lot of people who escape traditional counting
that are using Python and data in their jobs.
I do believe that the future world for it to,
well, the world I would like to see
is one where people are data literate.
So they are able to use tools
that let them express their questions and ideas fluidly.
And the data variety and data complexity will not go down.
It will only keep increasing.
So I think some level of code or code like things
will continue to be relevant.
And so my hope is that we can build systems
that allow people to more seamlessly integrate
Python kinds of expressivity with data systems
and operationalization methods that are much more seamless.
And what I mean by that is, you know,
right now you can’t punch Python code into an Excel cell.
I mean, there are some tools that kind of let you do this.
We didn’t build a thing for doing this back in the day,
but I feel like the total addressable market
for Python users, if we do the things right,
is on the order of the Excel users,
which is, you know, a few hundred million.
So I think Python has to get better at being embedded,
you know, being a smaller thing that pulls in
just the right parts of the ecosystem
to run numerics and do data exploration,
meeting people where they’re already at
with their data and their data tools.
And then I think also it has to be easier
to take some of those things they’ve written
and flow those back into deployed systems
or little apps or visualizations.
I think if we don’t do those things,
then we will always be kept in a silo
as sort of an expert user’s tool
and not a tool for the masses.
You know, I work with a bunch of folks
in the Adobe Creative Suite,
and I’m kind of forcing them, or inspiring them,
to learn Python, to do a bunch of stuff that helps them.
And it’s interesting, because they probably
wouldn’t call themselves Python programmers,
but they’re all using Python.
I would love it if the tools like Photoshop and Premiere
and all those kinds of tools that are targeted
towards creative people, I guess that’s where Excel,
Excel is targeted towards a certain kind of audience
that works with data, financial people,
all that kind of stuff, if there would be easy ways
to leverage to use Python for quick scripting tasks.
And you know, there’s an exciting application
of artificial intelligence in this space
that I’m hopeful about, looking at OpenAI Codex
generating programs.
So almost helping people bridge the gap
from kind of visual interface to generating programs,
to something formal, and then they can modify it and so on,
but kind of without having to read the manual,
without having to do a Google search and Stack Overflow,
which is essentially what a neural network does
when it’s doing code generation,
is actually generating code and allowing a human
to communicate with multiple programs,
and then maybe even programs to communicate
with each other via Python.
So that to me is a really exciting possibility,
because I think there’s a friction to kind of,
like how do I learn how to use Python in my life?
Oftentimes you kind of start a class,
you start learning about types, I don’t know, functions.
Like, you know, Python is the first language
with which you start to learn to program.
But I feel like that’s going to take a long time
for you to understand why it’s useful.
You almost want to start with a script.
Well, you do, in fact.
I think starting with the theory behind programming languages
and types and all that, I mean,
types are there to make the compiler writers’ jobs easier.
Types are not, I mean, heck, do you have an ontology
of types or just the objects on this table?
No.
So types are there because compiler writers are human
and they’re limited in what they can do.
But I think that the beauty of scripting,
like there’s a Python book that’s called
"Automate the Boring Stuff,"
which is exactly the right mentality.
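As a minimal sketch of that mentality, the kind of throwaway script the book has in mind might look something like this; the folder and the sorting rule here are hypothetical, just the sort of thing you might bang out in a few minutes:

```python
# Hypothetical "boring stuff" automation: sort loose files in a downloads
# folder into subfolders named after their file extensions.
from pathlib import Path

downloads = Path.home() / "Downloads"   # assumed location; adjust to taste

for item in sorted(downloads.iterdir()):
    if item.is_file():
        # "report.pdf" -> subfolder "pdf"; files with no extension go to "misc"
        subfolder = downloads / (item.suffix.lstrip(".").lower() or "misc")
        subfolder.mkdir(exist_ok=True)
        item.rename(subfolder / item.name)
```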
I grew up with computers in a time when I could,
when Steve Jobs was still pitching these things
as bicycles for the mind.
They were supposed to not be just media consumption devices,
but they were actually, you could write some code.
You could write BASIC, you could write some stuff
to do some things.
And that feeling of a computer as a thing
that we can use to extend ourselves
has all but evaporated for a lot of people.
So you see a little bit in parts
in the current, the generation of youth
around Minecraft or Roblox, right?
And I think Python, CircuitPython,
these things could be a renaissance of that,
of people actually shaping and using their computers
as computers, as an extension of their minds
and their curiosity, their creativity.
So you talk about scripting the Adobe Suite with Python
in the 3D graphics world.
Python is a scripting language
that some of these 3D graphics suites use.
And I think that’s great.
We should better support those kinds of things.
But ultimately the idea that I should be able
to have power over my computing environment.
If I want these things to happen repeatedly all the time,
I should be able to say that somehow to the computer, right?
Now, whether the operating systems get there faster
by having some Siri backed with OpenAI or whatever.
So you can just say, Siri, make this do this
and this every other Friday, right?
We probably will get there somewhere.
And Apple’s always had these ideas.
There’s AppleScript in the menu that no one ever uses,
but you can do these kinds of things.
But when you start doing that kind of scripting,
the challenge isn’t learning the type system
or even the syntax of the language.
The challenge is all of the dictionaries
and all the objects, and all their properties
and attributes and parameters.
Like who’s got time to learn all that stuff, right?
So that’s when then programming by prototype
or by example becomes the right way
to get the user to express their desire.
So there’s a lot of these different ways
that we can approach programming.
But I do think when, as you were talking
about the Adobe scripting thing,
I was thinking about, you know,
when we do use something like NumPy,
when we use things in the Python data
and scientific, let’s say, expression system,
there’s a reason we use that,
which is that it gives us mathematical precision.
It gives us actually quite a lot of precision
over precisely what we mean about this data set,
that data set, and it’s the fact
that we can have that precision
that lets Python be powerful as a duct tape for data.
You know, you give me a TSV or a CSV,
and if you give me some massively expensive vendor tool
for data transformation,
I don’t know if I’m gonna be able to solve your problem.
But if you give me a Python prompt,
you can throw whatever data you want at me.
I will be able to mash it into shape.
So that ability to take it as sort of this like,
you know, machete out into the data jungle
is really powerful.
And I think that’s why at some level,
we’re not gonna get away from some of these expressions
and APIs and libraries in Python for data transformation.
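As a hypothetical illustration of that duct-tape-for-data idea, a few lines at a Python prompt can take a messy file and mash it into shape; the file name and column names here are made up:

```python
# Made-up example of whipping a ragged CSV into shape with pandas.
import pandas as pd

df = pd.read_csv("sales.csv")                     # whatever file you were handed
df.columns = df.columns.str.strip().str.lower()   # normalize messy headers
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["date", "amount"])         # drop rows that refused to parse

monthly = df.groupby(df["date"].dt.to_period("M"))["amount"].sum()
print(monthly)
```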
You’ve been at the center of the Python community
for many years.
If you could change one thing about the community
to help it grow, to help it improve,
to help it flourish and prosper, what would it be?
I mean, you know, it doesn’t have to be one thing,
but what kind of comes to mind?
What are the challenges?
Humility is one of the values that we have
at Anaconda at the company,
but it’s also one of the values in the community.
It’s been breached a little bit in the last few years,
but in general, people are quite decent
and reasonable and nice.
And that humility prevents them from seeing
the greatness that they could have.
I don’t know how many people in the core Python community
really understand that they stand perched at the edge
of an opportunity to transform how people use computers.
And actually, at PyCon, I think the last physical PyCon
I went to, Russell Keith-Magee gave a great keynote
very much along the lines of the challenges I have,
which is: Python, for a language that actually can’t
put an interface up on the most popular computing devices,
has done really well as a language, hasn’t it?
You can’t write a web front end with Python, really.
I mean, everyone uses JavaScript.
You certainly can’t write native apps.
So for a language that you can’t actually write apps
in any of those front end runtime environments,
Python’s done exceedingly well.
And so that wasn’t to pat ourselves on the back.
That was to challenge ourselves as a community to say,
we, through our current volunteer dynamic,
have gotten to this point.
What comes next and how do we seize,
you know, we’ve caught the tiger by the tail.
How do we make sure we keep up with it as it goes forward?
So that’s one of the questions I have
about sort of open source communities:
at their best, there’s a kind of humility.
Does that humility prevent you from having a vision
for creating something very new and powerful?
And you’ve brought us back to consciousness again.
The collaboration is a swarm emergent dynamic.
Humility lets these people work together
without anyone trouncing anyone else.
How do they, you know, in consciousness,
there’s the question of the binding problem.
How does a singular, our attention,
how does that emerge from billions of neurons?
So how can you have a swarm of people emerge a consensus
that has a singular vision to say, we will do this.
And most importantly, we’re not gonna do these things.
Emerging a coherent, pointed, focused leadership dynamic
from a collaboration, being able to do that kind of,
and then dissolve it so people can still do
the swarm thing, that’s a problem, that’s a question.
So do you have to have a charismatic leader?
For some reason, Linus Torvalds comes to mind,
but there’s people who criticize him.
He rules with an iron fist, man.
But there’s still charisma to it.
There is charisma, right?
There’s a charisma to that iron fist.
There’s, every leader’s different, I would say,
in their success.
So he doesn’t, I don’t even know if you can say
he doesn’t have humility, there’s such a meritocracy
of ideas that like, this is a good idea
and this is a bad idea.
There’s a step function to it.
Once you clear a threshold, he’s open.
Once you clear the bozo threshold,
he’s open to your ideas, I think, right?
But see, the interesting thing is obviously
that will not stand in an open source community
if that threshold that is defined
by that one particular person is not actually that good.
So you actually have to be really excellent at what you do.
So he’s very good at what he does.
And so there’s some aspect of leadership
where you can get thrown out, people can just leave.
That’s how it works with open source, the fork.
But at the same time, you want to sometimes be a leader
like with a strong opinion, because people,
I mean, there’s some kind of balance here
for this like hive mind to get like behind.
Leadership is a big topic.
And I didn’t, I’m not one of these guys
that went to MBA school and said,
I’m gonna be an entrepreneur and I’m gonna be a leader.
And I’m gonna read all these Harvard Business Review
articles on leadership and all this other stuff.
Like I was a physicist turned into a software nerd
who then really like nerded out on Python.
And now I am entrepreneurial, right?
I saw a business opportunity around the use
of Python for data.
But for me, what has been interesting over this journey
with the last 10 years is how much I started really
enjoying the understanding, thinking deeper
about organizational dynamics and leadership.
And leadership does come down to a few core things.
Number one, a leader has to create belief
or at least has to dispel disbelief.
Leadership also, you have to have vision,
loyalty and experience.
So can you say belief in a singular vision?
Like what does belief mean?
Yeah, belief means a few things.
Belief means here’s what we need to do
and this is a valid thing to do and we can do it.
That you have to be able to drive that belief.
And every step of leadership along the way
has to help you amplify that belief to more people.
I mean, I think at a fundamental level, that’s what it is.
You have to have a vision.
You have to be able to show people that,
or you have to convince people to believe in the vision
and to get behind you.
And that’s where the loyalty part comes in
and the experience part comes in.
There’s all different flavors of leadership.
So if we talk about Linus, we could talk about Elon Musk
and Steve Jobs, there’s Sundar Pichai.
There’s people that kind of put themselves at the center
and are strongly opinionated.
And some people are more like consensus builders.
What works well for open source?
What works well in the space of programmers?
So you’ve been a programmer, you’ve led many programmers
that are now sort of at the center of this ecosystem.
What works well in the programming world would you say?
It really depends on the people.
What style of leadership is best?
And it depends on the programming community.
I think for the Python community,
servant leadership is one of the values.
At the end of the day, the leader has to also be
the high priest of values, right?
So any collection of people has values that they’re living.
And if you want to maintain certain values
and those values help you as an organization
become more powerful,
then the leader has to live those values unequivocally
and has to hold the values.
So in our case, in this collaborative community
around Python, I think that the humility
is one of those values.
Servant leadership, you actually have to kind of do the stuff.
You have to walk the walk, not just talk the talk.
I don’t feel like the Python community really demands
that much from a vision standpoint.
And they should.
And I think they should.
This is the interesting thing: so many people
use Python, so where does the vision come from?
You know, like, you have an Elon Musk type character
who makes bold statements about the vision
for particular companies he’s involved with.
And it’s like, I think a lot of people that work
at those companies kind of can only last
if they believe that vision.
And some of it is super bold.
So my question is, and by the way,
those companies often use Python.
How do you establish a vision?
Like, get to 100 million users, right?
Get to where, you know, Python is at the center
of the data science,
machine learning, deep learning,
artificial intelligence revolution, right?
Like in many ways, perhaps the Python community
is not thinking of it that way,
but it’s leading the way on this.
Like the tooling is like essential.
Right, well, you know, for a while,
at PyCon, people in the scientific Python
and the PyData community would submit talks.
This was early 2010s, mid 2010s.
They would submit talks for PyCon
and the talks would all be rejected
because there was the separate sort of PyData conferences.
And like, well, these probably belong more to PyData.
And instead there’d be yet another talk about, you know,
threads and, you know, whatever, some web framework.
And it’s like, that was an interesting dynamic to see
that there was, I mean, at the time it was a little annoying
because we wanted to try to get more users
and get more people talking about these things.
And PyCon is a huge venue, right?
It’s thousands of Python programmers.
But then also came to appreciate that, you know,
parallel, having an ecosystem that allows parallel
innovation is not bad, right?
There are people doing embedded Python stuff.
There’s people doing web programming,
people doing scripting, there’s cyber uses of Python.
I think, ultimately, at some point,
if your slime mold covers so much stuff,
you have to respect that different things are growing
in different areas and different niches.
Now, at some point that has to come together
and the central body has to provide resources.
The principle here is subsidiarity.
Give resources to the various groups
to then allocate as they see fit in their niches.
That would be a really helpful dynamic.
But again, it’s a volunteer community.
It’s not like they had that many resources to start with.
What was or is your favorite programming setup?
What operating system, what keyboard,
how many screens are you looking at?
What time of day? Are you drinking coffee, tea?
Tea, sometimes coffee, depending on how well I slept.
I used to have.
How much sleep do you get?
Are you a night owl?
I remember somebody asked you somewhere,
a question about work life balance.
Not just work life balance, but like a family,
you lead a company and your answer was basically like,
I still haven’t figured it out.
Yeah, I think I’ve gotten to a little bit better balance.
I have a really great leadership team now supporting me
and so that takes a lot of the day to day stuff
off my plate and my kids are getting a little older.
So that helps.
So, and of course I have a wonderful wife
who takes care of a lot of the things
that I’m not able to take care of and she’s great.
I try to get to sleep earlier now
because I have to get up every morning at six
to take my kid down to the bus stop.
So there’s a hard constraint there.
For a while I was doing polyphasic sleep,
which is really interesting.
Like I go to bed at nine, wake up at like 2 a.m.,
work till five, sleep three hours, wake up at eight.
Like that was actually, it was interesting.
It wasn’t too bad.
How did it feel?
It was good.
I didn’t keep it up for years, but once I have travel,
then it just, everything goes out the window, right?
Because then you’re like time zones and all these things.
Socially, was it, like, were you able to live,
outside of how you felt,
were you able to live in normal society?
Oh yeah, because like on the nights
that I wasn’t out hanging out with people or whatever,
going to bed at nine, no one cares.
I wake up at two, I’m still responding to their slacks,
emails, whatever, and you know, shitposting on Twitter
or whatever at two in the morning is great, right?
And then you go to bed for a few hours and you wake up,
it’s like you had an extra day in the middle.
And I’d read somewhere that humans naturally
have biphasic sleep or something, I don’t know.
I read basically everything somewhere.
So every option of everything.
Every option of everything.
I will say that that worked out for me for a while,
although I don’t do it anymore.
In terms of programming setup,
I had a 27 inch high DPI setup that I really liked,
but then I moved to a curved monitor
just because when I moved to the new house,
I wanted to have a bit more screen for Zoom plus communications
plus various kinds of things.
So it’s like one large monitor.
One large curved monitor.
What operating system?
Mac.
Okay. Yeah.
Is that what happens when you become important,
is you stop using Linux and Windows?
No, I actually have a Windows box as well
on the next table over, but I have three desks, right?
So the main one is the standing desk so that I can,
whatever, when I’m like, I have a teleprompter set up
and everything else.
And then I’ve got my iMac and then eGPU and then Windows PC.
The reason I moved to Mac was it’s got a Linux prompt,
or no, sorry, it’s got a Unix prompt,
so I can do all my stuff, but then I don’t have to worry.
Like when I’m presenting for clients
or investors or whatever,
I don’t have to worry about any, like, ACPI related,
fsck kind of things in the middle of a presentation,
like none of that.
It just, it will always wake from sleep
and it won’t kernel panic on me.
And this is not a dig against Linux,
except that I just, I feel really bad.
I feel like a traitor to my community saying this, right?
But in 2012, I was just like, okay, start my own company.
What do I get?
And Linux laptops were just not quite there.
And so I’ve just stuck with Macs.
Can I just defend something that nobody respectable
seems to do, which is, I dual boot Linux and Windows,
but in Windows, I have the Windows Subsystem
for Linux or whatever, WSL.
And I find myself being able to handle almost everything
I need in Linux,
for basic sort of tasks, scripting tasks, within WSL,
and it creates a really nice environment.
So I’ve been doing that, but like whenever I hang out with,
especially, important people,
like they’re all on an iPhone and a Mac,
and it’s like, yeah,
there is a messiness to Windows and a messiness to Linux
that makes me feel like you’re still in it.
Well, the Linux stuff, the Windows Subsystem for Linux,
is very tempting, but there’s still the Windows
on the outside,
and, okay, I’ve used DOS since version 1.11
or 1.21 or something.
So I’ve been a long time Microsoft user.
And I will say that, like, it’s really hard
for me to know where anything is,
how to get to the details behind something
when something screws up, as it invariably does,
and just things like changing group permissions
on some shared folders and stuff,
just everything seems a little bit more awkward,
more clicks than it needs to be.
Not to say that there aren’t weird things
like hidden attributes and all this other happy stuff
on Mac, but for the most part,
and well, actually, especially now
with the new hardware coming out on Mac,
it’ll be very interesting with the new M1.
There were some dark years in the last few years
when I was like, I think maybe I have to move off of Mac
as a platform, but this, I mean,
like my keyboard was just not working.
Like literally my keyboard just wasn’t working, right?
It had this touch bar, didn’t have a physical escape button
like I needed, because I use Vim,
and now I think we’re back, so.
So you use Vim and you have a, what kind of keyboard?
So I use a RealForce 87U, it’s a mechanical,
it’s a Topre keyswitch.
Like, is it a weird shape, or a normal shape? Okay.
Well, no, because I say that because I use a Kinesis,
and you said some dark, you said you had dark moments.
I recently had a dark moment,
I was like, what am I doing with my life?
So I remember sort of flying in a very kind of tight space,
and as I’m working, this is what I do on an airplane.
I pull out a laptop, and on top of the laptop,
I’ll put a Kinesis keyboard.
That’s hardcore, man.
I was thinking, is this who I am?
Is this what I’m becoming?
Will I be this person?
Because I’m on Emacs with this Kinesis keyboard,
sitting like with everybody around.
Emacs on Windows.
On WSL, yeah.
Yeah, Emacs on Linux on Windows.
Yeah, on Windows.
And like everybody around me is using their iPhone
to look at TikTok.
So I’m like in this land, and I thought, you know what?
Maybe I need to become an adult and put the 90s behind me,
and use like a normal keyboard.
And then I did some soul searching,
and I decided like this is who I am.
This is me like coming out of the closet
to saying I’m Kinesis keyboard all the way.
I’m going to use Emacs.
You know who else is a Kinesis fan?
Wes McKinney, the creator of Pandas.
Oh.
He banged out Pandas on a Kinesis keyboard, I believe.
I don’t know if he’s still using one, maybe,
but certainly 10 years ago, like he was.
If anyone’s out there,
maybe we need to have a Kinesis support group.
Please reach out.
Isn’t there already one?
Is there one?
I don’t know.
There’s gotta be an IRC channel, man.
Oh no, and you access it through Emacs.
Okay.
Do you still program these days?
I do a little bit.
Honestly, the last thing I did,
I was working with my son to script some Minecraft stuff.
So I was doing a little bit of that.
That was the last, literally the last code I wrote.
Oh, you know what?
Also, I wrote some code to do some cap table evaluation,
waterfall modeling kind of stuff.
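The conversation doesn’t say which tooling the Minecraft scripting used; one common, purely hypothetical route is the mcpi library, which talks to Minecraft Pi edition, or to a server running the RaspberryJuice plugin, roughly like this:

```python
# Hypothetical sketch of scripting Minecraft from Python with the mcpi library.
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()             # connect to a running game on localhost
mc.postToChat("Hello from Python")

pos = mc.player.getTilePos()        # the player's current block coordinates
# Lay a 5x5 stone platform directly under the player.
for dx in range(-2, 3):
    for dz in range(-2, 3):
        mc.setBlock(pos.x + dx, pos.y - 1, pos.z + dz, block.STONE.id)
```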
What advice would you give to a young person,
you said your son, today, in high school,
maybe even college, about career, about life?
This may be where I get into trouble a little bit.
We are coming to the end.
We’re rapidly entering a time between worlds.
So we have a world now that’s starting to really crumble
under the weight of aging institutions
that no longer even pretend to serve the purposes
they were created for.
We are creating technologies that are hurtling billions
of people headlong into philosophical crises,
people who don’t even know the philosophical operating systems
in their own firmware.
And they’re heading into a time when that gets vaporized.
So for people in high school,
and certainly I tell my son this as well,
he’s in middle school, people in college,
you are going to have to find your own way.
You’re going to have to have a pioneer spirit,
even if you live in the middle
of the most dense urban environment.
All of human reality around you
is the result of the last few generations of humans
agreeing to play certain kinds of games.
A lot of those games no longer operate
according to the rules they used to.
Collapse is nonlinear, but it will be managed.
And so if you are in a particular social caste
or economic caste,
and I think it’s not kosher to say that about America,
but America is a very stratified and classist society.
There’s some mobility, but it’s really quite classist.
And in America, unless you’re in the upper middle class,
you are headed into very choppy waters.
So it is really, really good to think
and understand the fundamentals of what you need
to build a meaningful life for you, your loved ones,
with your family.
And almost all of the technology being created
that’s consumer facing is designed to own people,
to take the four stack of people, to delaminate them,
and to own certain portions of that stack.
And so if you want to be an integral human being,
if you want to have your agency
and you want to find your own way in the world,
when you’re young would be a great time to spend time
looking at some of the classics
around what it means to live a good life,
what it means to build connection with people.
And so much of the status game, so much of the stuff,
one of the things that I sort of talk about
as we create more and more technology,
there’s a gradient of technology,
and a gradient of technology
always leads to a gradient of power.
And this is Jacques Ellul’s point to some extent as well.
That gradient of power is not going to go away.
The technologies are going so fast
that even people like me who helped create
some of the stuff, I’m being left behind.
Some of the cutting edge research,
I don’t know what’s going on in it today.
You know, I go read some proceedings.
So as the world gets more and more technological,
it will create more and more gradients
where people will seize power, economic fortunes.
And the way they make the people who are left behind
okay with their lot in life is they create lottery systems.
They make you take part in the narrative
of your own being trapped in your own economic sort of zone.
So avoiding those kinds of things is really important.
Knowing when someone is running game on you basically.
So these are the things I would tell young people.
It’s a dark message, but it’s realism.
I mean, that’s what I see.
So after you gave some realism, you sit back.
You sit back with your son.
You’re looking out at the sunset.
What can you give to him as words of hope, and for you,
from where do you derive hope for the future of our world?
So you said, at the individual level,
you have to have a pioneer mindset,
to go back to the classics,
to understand where in human nature you can find meaning.
But at the societal level, what trajectory,
when you look up possible trajectories, what gives you hope?
What gives me hope is that we have little tremors now
shaking people out of the reverie
of the fiction of modernity that they’ve been living in,
kind of a late 20th century style modernity.
That’s good, I think.
Because, and also to your point earlier,
people are burning out on some of the social media stuff.
They’re sort of seeing the ugly side,
especially the latest news with Facebook
and the whistleblower, right?
It’s quite clear these things are not
all they’re cracked up to be.
Do you believe, I believe better social media can be built
because they are burning out
and it’ll incentivize other competitors to be built.
Do you think that’s possible?
Well, the thing about it is that
when you have extractive, return-seeking
capital coming in and saying,
look, you own a network,
give me some exponential dynamics out of this network.
What are you gonna do?
You’re gonna just basically put a toll keeper
at every single node and every single graph edge,
every node, every vertex, every edge.
But if you don’t have that need for it,
if no one’s sitting there saying,
hey, Wikipedia, monetize every character,
every byte, every phrase,
then generative human dynamics will naturally sort of arise,
assuming we respect a few principles
around online communications.
So the greatest and biggest social network in the world
is still like email, SMS, right?
So we’re fine there.
The issue with the social media, as we call it now,
is they’re actually just new amplification systems, right?
Now it’s benefited certain people like yourself
who have interesting content to be amplified.
So it’s created a creator economy, and that’s cool.
There’s a lot of great content out there.
But giving everyone a shot at the fame lottery,
saying, hey, you could also have your,
if you wiggle your butt the right way on TikTok,
you can have your 15 seconds of micro fame.
That’s not healthy for society at large.
So I think if we can create tools that help people
be conscientious about their attention,
spend time looking at the past,
and really retrieving memory and recalling,
not just recalling, but processing and thinking about that,
I think that’s certainly possible,
and hopefully that’s what we get.
So the bigger question that you’re asking
about what gives me hope
is that these early shocks of COVID lockdowns
and remote work and all these different kinds of things,
I think it’s getting people to a point
where they’re sort of no longer in the reverie.
As my friend Jim Rutt says,
there’s more people with ears to hear now.
With the pandemic and education,
everyone’s like, wait, wait,
what have you guys been doing with my kids?
How are you teaching them?
What is this crap you’re giving them as homework?
So I think these are the kinds of things
that are getting, and the supply chain disruptions,
getting more people to think about,
how do we actually just make stuff?
This is all good, but the concern is that
it’s still gonna take a while for these things,
for people to learn how to be agentic again,
and to be in right relationship with each other
and with the world.
So the message of hope is still people are resilient,
and we are building some really amazing technology.
And I also, to me, I derive a lot of hope
from individuals, in that vein.
The power of a single individual to transform the world,
to do positive things for the world is quite incredible.
Now you’ve been talking about,
it’s nice to have as many of those individuals as possible,
but even the power of one, it’s kind of magical.
It is, it is.
We’re in a mode now where we can do that.
I think also, part of what I try to do
is in coming to podcasts like yours,
and then spamming you with all this philosophical stuff
that I’ve got going on,
there are a lot of good people out there
trying to put words around the current technological,
social, economic crises that we’re facing.
And in the space of a few short years,
I think there has been a lot of great content
produced around this stuff.
For people who wanna see, wanna find out more,
or think more about this,
we’re popularizing certain kinds of philosophical ideas
that move people beyond just the,
oh, you’re communist, oh, you’re capitalist kind of stuff.
Like it’s sort of, we’re way past that now.
So that also gives me hope,
that I feel like I myself am getting a handle
on how to think about these things.
It makes me feel like I can,
hopefully affect change for the better.
We’ve been sneaking up on this question all over the place.
Let me ask the big, ridiculous question.
What is the meaning of life?
Wow.
The meaning of life.
Yeah, I don’t know.
I mean, I’ve never really understood that question.
When you say meaning crisis,
you’re saying that there is a search
for a kind of experience
that could be described as fulfillment,
as like the aha moment of just like joy,
and maybe when you see something beautiful,
or maybe you have created something beautiful,
that experience that you get,
it feels like it all makes sense.
So some of that is just chemicals coming together
in your mind and all those kinds of things.
But it seems like we’re building
a sophisticated collective intelligence
that’s providing meaning in all kinds of ways
to its members.
And there’s a theme to that meaning.
So for a lot of history,
I think faith played an important role.
Faith in God, sort of religion.
I think technology in the modern era
is kind of serving a little bit
of a source of meaning for people,
like innovation of different kinds.
I think the old school things of love
and the basics of just being good at stuff.
But you were a physicist,
so there’s a desire to say, okay, yeah,
but these seem to be like symptoms of something deeper.
Right.
Like why?
Little m meaning; what’s capital M meaning?
Yeah, what’s capital M meaning?
Why are we reaching for order
when there is excess of energy?
I don’t know if I can answer the why.
Any why that I come up with, I think, is gonna be,
I’d have to think about that a little more,
maybe get back to you on that.
But I will say this.
We do look at the world through a traditional,
I think most people look at the world through
what I would say is a subject-object
kind of metaphysical lens,
that we have our own subjectivity,
and then there’s all of these object things that are not us.
So I’m me, and these things are not me, right?
And I’m interacting with them, I’m doing things to them.
But a different view of the world
that looks at it as much more connected,
that realizes, oh, I’m really quite embedded
in a soup of other things,
and I’m simply almost like a standing wave pattern
of different things, right?
So when you look at the world
in that kind of connected sense,
I’ve recently taken a shine
to this particular thought experiment,
which is what if it was the case
that everything that we touch with our hands,
that we pay attention to,
that we actually give intimacy to,
what if there’s actually all the mumbo jumbo,
like people with the magnetic healing crystals
and all this other kind of stuff and quantum energy stuff,
what if that was a thing?
What if literally when your hand touches an object,
when you really look at something
and you concentrate and you focus on it
and you really give it attention,
you actually give it,
there is some physical residue of something,
a part of you, a bit of your life force that goes into it.
Okay, now this is of course completely mumbo jumbo stuff.
This is not like, I don’t actually think this is real,
but let’s do the thought experiment.
What if it was?
What if there actually was some quantum magnetic crystal
and energy field thing that just by touching this can,
this can has changed a little bit somehow.
And it’s not much unless you put a lot into it
and you touch it all the time, like your phone, right?
These things gain meaning to you a little bit,
but what if there’s something there,
technical objects, the phone is a technical object,
it does not really receive attention or intimacy
and then allow itself to be transformed by it.
But if it’s a piece of wood,
if it’s the handle of a knife that your mother used
for 20 years to make dinner for you, right?
What if it’s a keyboard that you banged out,
your world transforming software library on?
These are technical objects
and these are physical objects,
but somehow there’s something to them.
We feel an attraction to these objects
as if we have imbued them with life energy, right?
So if you walk that thought experiment through,
what happens when we touch another person,
when we hug them, when we hold them?
And the reason this ties into my answer for your question
is that if there is such a thing,
if we were to hypothesize, you know,
hypothesize such a thing,
it could be that the purpose of our lives
is to imbue as many things with that love as possible.
That’s a beautiful answer
and a beautiful way to end it, Peter.
You’re an incredible person.
Thank you.
Spanning so much in the space of engineering
and in the space of philosophy.
I’m really proud to be living in the same city as you
and I’m really grateful
that you would spend your valuable time with me today.
Thank you so much.
Well, thank you.
I appreciate the opportunity to speak with you.
Thanks for listening to this conversation with Peter Wang.
To support this podcast,
please check out our sponsors in the description.
And now let me leave you with some words
from Peter Wang himself.
We tend to think of people
as either malicious or incompetent,
but in a world filled with corruptible
and unchecked institutions,
there exists a third thing, malicious incompetence.
It’s a social cancer
and it only appears once human organizations scale
beyond personal accountability.
Thank you for listening and hope to see you next time.