Lex Fridman Podcast - #103 - Ben Goertzel: Artificial General Intelligence

The following is a conversation with Ben Goertzel,

one of the most interesting minds

in the artificial intelligence community.

He’s the founder of SingularityNet,

designer of OpenCog AI Framework,

formerly a director of research

at the Machine Intelligence Research Institute,

and chief scientist of Hanson Robotics,

the company that created the Sophia robot.

He has been a central figure in the AGI community

for many years, including in his organizing

and contributing to the conference

on artificial general intelligence,

the 2020 version of which is actually happening this week,

Wednesday, Thursday, and Friday.

It’s virtual and free.

I encourage you to check out the talks,

including by Joscha Bach from episode 101 of this podcast.

Quick summary of the ads.

Two sponsors, The Jordan Harbinger Show and Masterclass.

Please consider supporting this podcast

by going to jordanharbinger.com slash lex

and signing up at masterclass.com slash lex.

Click the links, buy all the stuff.

It’s the best way to support this podcast

and the journey I’m on in my research and startup.

This is the Artificial Intelligence Podcast.

If you enjoy it, subscribe on YouTube,

review it with five stars on Apple Podcast,

support it on Patreon, or connect with me on Twitter

at lexfriedman, spelled without the E, just F R I D M A N.

As usual, I’ll do a few minutes of ads now

and never any ads in the middle

that can break the flow of the conversation.

This episode is supported by The Jordan Harbinger Show.

Go to jordanharbinger.com slash lex.

It’s how he knows I sent you.

On that page, there’s links to subscribe to it

on Apple Podcast, Spotify, and everywhere else.

I’ve been binging on his podcast.

Jordan is great.

He gets the best out of his guests,

dives deep, calls them out when it’s needed,

and makes the whole thing fun to listen to.

He’s interviewed Kobe Bryant, Mark Cuban,

Neil deGrasse Tyson, Garry Kasparov, and many more.

His conversation with Kobe is a reminder

how much focus and hard work is required for greatness

in sport, business, and life.

I highly recommend the episode if you want to be inspired.

Again, go to jordanharbinger.com slash lex.

It’s how Jordan knows I sent you.

This show is sponsored by Master Class.

Sign up at masterclass.com slash lex

to get a discount and to support this podcast.

When I first heard about Master Class,

I thought it was too good to be true.

For 180 bucks a year, you get an all access pass

to watch courses from, to list some of my favorites,

Chris Hadfield on Space Exploration,

Neil deGrasse Tyson on Scientific Thinking

and Communication, Will Wright, creator of

the greatest city building game ever, SimCity,

and The Sims, on game design,

Carlos Santana on guitar,

Garry Kasparov, the greatest chess player ever, on chess,

Daniel Negreanu on poker, and many more.

Chris Hadfield explaining how rockets work

and the experience of being launched into space alone

is worth the money.

Once again, sign up at masterclass.com slash lex

to get a discount and to support this podcast.

Now, here’s my conversation with Ben Goertzel.

What books, authors, ideas had a lot of impact on you

in your life in the early days?

You know, what got me into AI and science fiction

and such in the first place wasn’t a book,

but the original Star Trek TV show,

which my dad watched with me like in its first run.

It would have been 1968, 69 or something,

and that was incredible because every show

they visited a different alien civilization

with different culture and weird mechanisms.

But that got me into science fiction,

and there wasn’t that much science fiction

to watch on TV at that stage,

so that got me into reading the whole literature

of science fiction, you know,

from the beginning of the previous century until that time.

And I mean, there were so many science fiction writers

who were inspirational to me.

I’d say if I had to pick two,

it would have been Stanisław Lem, the Polish writer.

Yeah, Solaris, and then he had a bunch

of more obscure writings on superhuman AIs

that were engineered.

Solaris was sort of a superhuman,

naturally occurring intelligence.

Then Philip K. Dick, who, you know,

ultimately my fandom for Philip K. Dick

is one of the things that brought me together

with David Hansen, my collaborator on robotics projects.

So, you know, Stanisław Lem was very much an intellectual,

right, so he had a very broad view of intelligence

going beyond the human and into what I would call,

you know, open ended superintelligence.

The Solaris superintelligent ocean was intelligent,

in some ways more generally intelligent than people,

but in a complex and confusing way

so that human beings could never quite connect to it,

but it was still probably very, very smart.

And then the GOLEM XIV supercomputer

in one of Lem’s books, this was engineered by people,

but eventually it became very intelligent

in a different direction than humans

and decided that humans were kind of trivial,

not that interesting.

So it put some impenetrable shield around itself,

shut itself off from humanity,

and then issued some philosophical screed

about the pathetic and hopeless nature of humanity

and all human thought, and then disappeared.

Now, Philip K. Dick, he was a bit different.

He was human focused, right?

His main thing was, you know, human compassion

and the human heart and soul are going to be the constant

that will keep us going through whatever aliens we discover

or telepathy machines or super AIs or whatever it might be.

So he didn’t believe in reality,

like the reality that we see may be a simulation

or a dream or something else we can’t even comprehend,

but he believed in love and compassion

as something persistent

through the various simulated realities.

So those two science fiction writers had a huge impact on me.

Then a little older than that, I got into Dostoevsky

and Friedrich Nietzsche and Rimbaud

and a bunch of more literary type writing.

Can we talk about some of those things?

So on the Solaris side, Stanislaw Lem,

this kind of idea of there being intelligences out there

that are different than our own,

do you think there are intelligences maybe all around us

that we’re not able to even detect?

So this kind of idea of,

maybe you can comment also on Stephen Wolfram

thinking that there’s computations all around us

and we’re just not smart enough to kind of detect

their intelligence or appreciate their intelligence.

Yeah, so my friend Hugo de Garis,

who I’ve been talking to about these things

for many decades, since the early 90s,

he had an idea he called SIPI,

the Search for Intraparticulate Intelligence.

So the concept there was as AIs get smarter

and smarter and smarter,

assuming the laws of physics as we know them now

are still what these super intelligences

perceive to hold and are bound by,

as they get smarter and smarter,

they’re gonna shrink themselves littler and littler

because special relativity limits how fast

they can communicate

between two spatially distant points.

So they’re gonna get smaller and smaller,

but then ultimately, what does that mean?

The minds of the super, super, super intelligences,

they’re gonna be packed into the interaction

of elementary particles or quarks

or the partons inside quarks or whatever it is.

So what we perceive as random fluctuations

on the quantum or sub quantum level

may actually be the thoughts

of the micro, micro, micro miniaturized super intelligences

because there’s no way we can tell random

from structured when the structure carries algorithmic information

more complex than our brains, right?

We can’t tell the difference.

So what we think is random could be the thought processes

of some really tiny super minds.

And if so, there is not a damn thing we can do about it,

except try to upgrade our intelligences

and expand our minds so that we can perceive

more of what’s around us.
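
One standard way to make that point precise, as a rough sketch: the algorithmic information, or Kolmogorov complexity, of a string x relative to a universal machine U is the length of the shortest program that produces it,

$$K_U(x) \;=\; \min\{\, \ell(p) \;:\; U(p) = x \,\}.$$

K is uncomputable, and by Chaitin's incompleteness theorem a reasoner whose own description has complexity around k cannot certify that any particular string has complexity much greater than k. So structure whose algorithmic information content exceeds what our brains or theories can encode would look to us exactly like random noise.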

But if those random fluctuations,

like even if we go to like quantum mechanics,

if that’s actually super intelligent systems,

aren’t we then part of that super intelligence?

Aren’t we just like a finger of the entirety

of the body of the super intelligent system?

It could be, I mean, a finger is a strange metaphor.

I mean, we…

A finger is dumb is what I mean.

But the finger is also useful

and is controlled with intent by the brain

whereas we may be much less than that, right?

I mean, yeah, we may be just some random epiphenomenon

that they don’t care about too much.

Like think about the shape of the crowd emanating

from a sports stadium or something, right?

There’s some emergent shape to the crowd, it’s there.

You could take a picture of it, it’s kind of cool.

It’s irrelevant to the main point of the sports event

or where the people are going

or what’s on the minds of the people

making that shape in the crowd, right?

So we may just be some semi arbitrary higher level pattern

popping out of a lower level

hyper intelligent self organization.

And I mean, so be it, right?

I mean, that’s one thing that…

Yeah, I mean, the older I’ve gotten,

the more respect I’ve acquired for our fundamental ignorance.

I mean, mine and everybody else’s.

I mean, I look at my two dogs,

two beautiful little toy poodles

and they watch me sitting at the computer typing.

They just think I’m sitting there wiggling my fingers

to exercise them maybe, or guarding the monitor on the desk.

They have no idea that I’m communicating

with other people halfway around the world,

let alone creating complex algorithms

running in RAM on some computer server

in St. Petersburg or something, right?

Although they’re right there in the room with me.

So what things are there right around us

that we’re just too stupid or close minded to comprehend?

Probably quite a lot.

Your very own poodles could also be communicating

across multiple dimensions with other beings

and you’re too unintelligent to understand

the kind of communication mechanism they’re going through.

There have been various TV shows and science fiction novels,

positing cats, dolphins, mice and whatnot

are actually super intelligences here to observe us.

I would guess, as one or another of the quantum physics founders

said, those theories are not crazy enough to be true.

The reality is probably crazier than that.

Beautifully put.

So on the human side, with Philip K. Dick

and in general, where do you fall on this idea

that love and just the basic spirit of human nature

persists throughout these multiple realities?

Are you on the side, like the thing that inspires you

about artificial intelligence,

is it the human side of somehow persisting

through all of the different systems we engineer

or does AI inspire you to create something

that’s greater than human, that’s beyond human,

that’s almost nonhuman?

I would say my motivation to create AGI

comes from both of those directions actually.

So when I first became passionate about AGI

when I was, it would have been two or three years old

after watching robots on Star Trek.

I mean, then it was really a combination

of intellectual curiosity, like can a machine really think,

how would you do that?

And yeah, just ambition to create something much better

than all the clearly limited

and fundamentally defective humans I saw around me.

Then as I got older and got more enmeshed

in the human world and got married, had children,

saw my parents begin to age, I started to realize,

well, not only will AGI let you go far beyond

the limitations of the human,

but it could also stop us from dying and suffering

and feeling pain and tormenting ourselves mentally.

So you can see AGI has amazing capability

to do good for humans, as humans,

alongside with its capability

to go far, far beyond the human level.

So I mean, both aspects are there,

which makes it even more exciting and important.

So you mentioned Dostoevsky and Nietzsche.

Where did you pick up from those guys?

I mean.

That would probably go beyond the scope

of a brief interview, certainly.

I mean, both of those are amazing thinkers

who one, will necessarily have

a complex relationship with, right?

So, I mean, Dostoevsky on the minus side,

he’s kind of a religious fanatic

and he sort of helped squash the Russian nihilist movement,

which was very interesting.

Because what nihilism meant originally

in that period of the mid, late 1800s in Russia

was not taking anything fully 100% for granted.

It was really more like what we’d call Bayesianism now,

where you don’t wanna adopt anything

as a dogmatic certitude and always leave your mind open.

And how Dostoevsky parodied nihilism

was a bit different, right?

He parodied them as people who believe absolutely nothing.

So they must assign an equal probability weight

to every proposition, which doesn’t really work.

So on the one hand, I didn’t really agree with Dostoevsky

on his sort of religious point of view.

On the other hand, if you look at his understanding

of human nature and sort of the human mind

and heart and soul, it’s really unparalleled.

He had an amazing view of how human beings construct a world

for themselves based on their own understanding

and their own mental predisposition.

And I think if you look at The Brothers Karamazov

in particular, the Russian literary theorist Mikhail Bakhtin

wrote about this as a polyphonic mode of fiction,

which means it’s not third person,

but it’s not first person from any one person really.

There are many different characters in the novel

and each of them is sort of telling part of the story

from their own point of view.

So the reality of the whole story is an intersection

like synergetically of the many different characters

world views.

And that really, it’s a beautiful metaphor

and even a reflection I think of how all of us

socially create our reality.

Like each of us sees the world in a certain way.

Each of us in a sense is making the world as we see it

based on our own minds and understanding,

but it’s polyphony like in music

where multiple instruments are coming together

to create the sound.

The ultimate reality that’s created

comes out of each of our subjective understandings,

intersecting with each other.

And that was one of the many beautiful things in Dostoevsky.

So maybe a little bit to mention,

you have a connection to Russia and the Soviet culture.

I mean, I’m not sure exactly what the nature

of the connection is, but at least the spirit

of your thinking is in there.

Well, my ancestry is three quarters Eastern European Jewish.

So I mean, three of my great grandparents

emigrated to New York from Lithuania

and sort of border regions of Poland,

which were in and out of Poland

around the time of World War I.

And they were socialists and communists as well as Jews,

mostly Menshevik, not Bolshevik.

And they sort of, they fled at just the right time

to the US for their own personal reasons.

And then almost all, or maybe all of my extended family

that remained in Eastern Europe was killed

either by Hitler’s or Stalin’s minions at some point.

So the branch of the family that emigrated to the US

was pretty much the only one.

So how much of the spirit of the people

is in your blood still?

Like, when you look in the mirror, do you see,

what do you see?

Meat, I see a bag of meat that I want to transcend

by uploading into some sort of superior reality.

But very, I mean, yeah, very clearly,

I mean, I’m not religious in a traditional sense,

but clearly the Eastern European Jewish tradition

was what I was raised in.

I mean, there was, my grandfather, Leo Zwell,

was a physical chemist who worked with Linus Pauling

and a bunch of the other early greats in quantum mechanics.

I mean, he was into X ray diffraction.

He was on the material science side,

an experimentalist rather than a theorist.

His sister was also a physicist.

And my father’s father, Victor Goertzel,

was a PhD in psychology who had the unenviable job

of giving psychotherapy to the Japanese

in internment camps in the US in World War II,

like to counsel them why they shouldn’t kill themselves,

even though they’d had all their stuff taken away

and been imprisoned for no good reason.

So, I mean, yeah, there’s a lot of Eastern European

Jewishness in my background.

One of my great uncles was, I guess,

conductor of the San Francisco Orchestra.

So there’s Mickey Salkind,

a bunch of music in there also.

And clearly this culture was all about learning

and understanding the world,

and also not quite taking yourself too seriously

while you do it, right?

There’s a lot of Yiddish humor in there.

So I do appreciate that culture,

although the whole idea that like the Jews

are the chosen people of God

never resonated with me too much.

The graph of the Goertzel family,

I mean, just the people I’ve encountered

just doing some research and just knowing your work

through the decades, it’s kind of fascinating.

Just the number of PhDs.

Yeah, yeah, I mean, my dad is a sociology professor

who recently retired from Rutgers University,

but clearly that gave me a head start in life.

I mean, my grandfather gave me

all those quantum mechanics books

when I was like seven or eight years old.

I remember going through them,

and it was all the old quantum mechanics

like Rutherford atoms and stuff.

So I got to the part of wave functions,

which I didn’t understand, although I was a very bright kid.

And I realized he didn’t quite understand it either,

but at least like he pointed me to some professor

he knew at UPenn nearby who understood these things, right?

So that’s an unusual opportunity for a kid to have, right?

My dad, he was programming Fortran

when I was 10 or 11 years old

on like HP 3000 mainframes at Rutgers University.

So I got to do linear regression in Fortran

on punch cards when I was in middle school, right?

Because he was doing, I guess, analysis of demographic

and sociology data.

So yes, certainly that gave me a head start

and a push towards science beyond what would have been

the case with many, many different situations.

When did you first fall in love with AI?

Is it the programming side of Fortran?

Is it maybe the sociology psychology

that you picked up from your dad?

Or is it the quantum mechanics?

I fell in love with AI when I was probably three years old

when I saw a robot on Star Trek.

It was turning around in a circle going,

error, error, error, error,

because Spock and Kirk had tricked it

into a mechanical breakdown by presenting it

with a logical paradox.

And I was just like, well, this makes no sense.

This AI is very, very smart.

It’s been traveling all around the universe,

but these people could trick it

with a simple logical paradox.

Like why, if the human brain can get beyond that paradox,

why can’t this AI?

So I felt the screenwriters of Star Trek

had misunderstood the nature of intelligence.

And I complained to my dad about it,

and he wasn’t gonna say anything one way or the other.

But before I was born, when my dad was at Antioch College

in the middle of the US,

he led a protest movement called SLAM,

Student League Against Mortality.

They were protesting against death,

wandering across the campus.

So he was into some futuristic things even back then,

but whether AI could confront logical paradoxes or not,

he didn’t know.

But when I, 10 years after that or something,

I discovered Douglas Hofstadter’s book,

Gödel, Escher, Bach, and that was sort of to the same point of AI

and paradox and logic, right?

Because he went over and over

Gödel’s incompleteness theorem,

and can an AI really fully model itself reflexively

or does that lead you into some paradox?

Can the human mind truly model itself reflexively

or does that lead you into some paradox?

So I think that book, Gödel, Escher, Bach,

which I think I read when it first came out,

I would have been 12 years old or something.

I remember it was like a 16 hour day.

I read it cover to cover and then reread it.

I reread it after that,

because there was a lot of weird things

with little formal systems in there

that were hard for me at the time.

But that was the first book I read

that gave me a feeling for AI as like a practical academic

or engineering discipline that people were working in.

Because before I read Gödel, Escher, Bach,

I was into AI from the point of view of a science fiction fan.

And I had the idea, well, it may be a long time

before we can achieve immortality and superhuman AGI.

So I should figure out how to build a spacecraft

traveling close to the speed of light, go far away,

then come back to the earth in a million years

when technology is more advanced

and we can build these things.

Reading Gödel, Escher, Bach,

while it didn’t all ring true to me, a lot of it did,

but I could see like there are smart people right now

at various universities around me

who are actually trying to work on building

what I would now call AGI,

although Hofstadter didn’t call it that.

So really it was when I read that book,

which would have been probably middle school,

that then I started to think,

well, this is something that I could practically work on.

Yeah, as opposed to flying away and waiting it out,

you can actually be one of the people

that actually builds the system.

Yeah, exactly.

And if you think about, I mean,

I was interested in what we’d now call nanotechnology

and in the human immortality and time travel,

all the same cool things as every other,

like science fiction loving kid.

But AI seemed like if Hofstadter was right,

you just figure out the right program,

sit there and type it.

Like you don’t need to spin stars into weird configurations

or get government approval to cut people up

and fiddle with their DNA or something, right?

It’s just programming.

And then of course that can achieve anything else.

There’s another book from back then,

which was by Gerald Feinberg,

who was a physicist at Princeton.

And that was the Prometheus Project.

And this book was written in the late 1960s,

though I encountered it in the mid 70s.

But what this book said is in the next few decades,

humanity is gonna create superhuman thinking machines,

molecular nanotechnology and human immortality.

And then the challenge we’ll have is what to do with it.

Do we use it to expand human consciousness

in a positive direction?

Or do we use it just to further vapid consumerism?

And what he proposed was that the UN

should do a survey on this.

And the UN should send people out to every little village

in remotest Africa or South America

and explain to everyone what technology

was gonna bring the next few decades

and the choice that we had about how to use it.

And let everyone on the whole planet vote

about whether we should develop super AI nanotechnology

and immortality for expanded consciousness

or for rampant consumerism.

And needless to say, that didn’t quite happen.

And I think this guy died in the mid 80s,

so we didn’t even see his ideas start

to become more mainstream.

But it’s interesting, many of the themes I’m engaged with now

from AGI and immortality,

even to trying to democratize technology

as I’ve been pushing forward with SingularityNET,

my work in the blockchain world,

many of these themes were there in Feinberg’s book

in the late 60s even.

And of course, Valentin Turchin, a Russian writer

and a great Russian physicist who I got to know

when we both lived in New York in the late 90s

and early aughts.

I mean, he had a book in the late 60s in Russia,

which was The Phenomenon of Science,

which laid out all these same things as well.

And Val died in, I don’t remember,

2004 or five or something, of Parkinson’s.

So yeah, it’s easy for people to lose track now

of the fact that the futurist and Singularitarian

advanced technology ideas that are now almost mainstream

and are on TV all the time.

I mean, these are not that new, right?

They’re sort of new in the history of the human species,

but I mean, these were all around in fairly mature form

in the middle of the last century,

were written about quite articulately

by fairly mainstream people

who were professors at top universities.

It’s just until the enabling technologies

got to a certain point, then you couldn’t make it real.

And even in the 70s, I was sort of seeing that

and living through it, right?

From Star Trek to Douglas Hofstadter,

things were getting very, very practical

from the late 60s to the late 70s.

And the first computer I bought,

you could only program with hexadecimal machine code

and you had to solder it together.

And then like a few years later, there’s punch cards.

And a few years later, you could get like Atari 400

and Commodore VIC 20, and you could type on the keyboard

and program in higher level languages

alongside the assembly language.

So these ideas have been building up a while.

And I guess my generation got to feel them build up,

which is different than people coming into the field now

for whom these things have just been part of the ambience

of culture for their whole career

or even their whole life.

Well, it’s fascinating to think about there being all

of these ideas kind of swimming, almost with the noise

all around the world, all the different generations,

and then some kind of nonlinear thing happens

where they percolate up

and capture the imagination of the mainstream.

And that seems to be what’s happening with AI now.

I mean, Nietzsche, who you mentioned had the idea

of the Superman, right?

But he didn’t understand enough about technology

to think you could physically engineer a Superman

by piecing together molecules in a certain way.

He was a bit vague about how the Superman would appear,

but he was quite deep at thinking

about what the state of consciousness

and the mode of cognition of a Superman would be.

He was a very astute analyst of how the human mind

constructs the illusion of a self,

how it constructs the illusion of free will,

how it constructs values like good and evil

out of its own desire to maintain

and advance its own organism.

He understood a lot about how human minds work.

Then he understood a lot

about how post human minds would work.

I mean, the Superman was supposed to be a mind

that would basically have complete root access

to its own brain and consciousness

and be able to architect its own value system

and inspect and fine tune all of its own biases.

So that’s a lot of powerful thinking there,

which then fed in and sort of seeded

all of postmodern continental philosophy

and all sorts of things have been very valuable

in development of culture and indirectly even of technology.

But of course, without the technology there,

it was all some quite abstract thinking.

So now we’re at a time in history

when a lot of these ideas can be made real,

which is amazing and scary, right?

It’s kind of interesting to think,

what do you think Nietzsche would do

if he was born a century later or transported through time?

What do you think he would say about AI?

I mean. Well, those are quite different.

If he’s born a century later or transported through time.

Well, he’d be on like TikTok and Instagram

and he would never write the great works he’s written.

So let’s transport him through time.

Maybe Also sprach Zarathustra would be a music video,

right? I mean, who knows?

Yeah, but if he was transported through time,

do you think, that’d be interesting actually to go back.

You just made me realize that it’s possible to go back

and read Nietzsche with an eye of,

is there some thinking about artificial beings?

I’m sure he had inklings.

I mean, with Frankenstein before him,

I’m sure he had inklings of artificial beings

somewhere in the text.

It’d be interesting to try to read his work

to see if Superman was actually an AGI system.

Like if he had inklings of that kind of thinking.

He didn’t.

No, I would say not.

I mean, he had a lot of inklings of modern cognitive science,

which are very interesting.

If you look in like the third part of the collection

that’s been titled The Will to Power.

I mean, in book three there,

there’s very deep analysis of thinking processes,

but he wasn’t so much of a physical tinkerer type guy,

right? He was very abstract.

Do you think, what do you think about the will to power?

Do you think human, what do you think drives humans?

Is it?

Oh, an unholy mix of things.

I don’t think there’s one pure, simple,

and elegant objective function driving humans by any means.

What do you think, if we look at,

I know it’s hard to look at humans in an aggregate,

but do you think overall humans are good?

Or do we have both good and evil within us

that depending on the circumstances,

depending on whatever can percolate to the top?

Good and evil are very ambiguous, complicated

and in some ways silly concepts.

But if we could dig into your question

from a couple of directions.

So I think if you look in evolution,

humanity is shaped both by individual selection

and what biologists would call group selection,

like tribe level selection, right?

So individual selection has driven us

in a selfish DNA sort of way.

So that each of us does to a certain approximation

what will help us propagate our DNA to future generations.

I mean, that’s why I’ve got four kids so far

and probably that’s not the last one.

On the other hand.

I like the ambition.

Tribal, like group selection means humans in a way

will do what advances the persistence of the DNA

of their whole tribe or their social group.

And in biology, you have both of these, right?

And you can see, say an ant colony or a beehive,

there’s a lot of group selection

in the evolution of those social animals.

On the other hand, say a big cat

or some very solitary animal,

it’s a lot more biased toward individual selection.

Humans are an interesting balance.

And I think this reflects itself

in what we would view as selfishness versus altruism

to some extent.

So we just have both of those objective functions

contributing to the makeup of our brains.
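
As a rough way to formalize that tradeoff, a sketch rather than anything specific to this conversation: let $x_{ig}$ be how altruistically individual $i$ in group $g$ behaves, $c$ the individual cost of altruism, and $B$ the benefit it confers on the group. A toy multilevel-selection fitness function is

$$w_{ig} \;=\; w_0 \;-\; c\,x_{ig} \;+\; B\,\bar{x}_g,$$

where $\bar{x}_g$ is the average altruism in the group. The $-c\,x_{ig}$ term is individual selection pushing toward selfishness, the $B\,\bar{x}_g$ term is group selection rewarding groups of altruists, and which force dominates depends on how the population is structured.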

And then as Nietzsche analyzed in his own way

and others have analyzed in different ways,

I mean, we abstract this as well,

we have both good and evil within us, right?

Because a lot of what we view as evil

is really just selfishness.

A lot of what we view as good is altruism,

which means doing what’s good for the tribe.

And on that level,

we have both of those just baked into us

and that’s how it is.

Of course, there are psychopaths and sociopaths

and people who get gratified by the suffering of others.

And that’s a different thing.

Yeah, those are exceptions on the whole.

But I think at core, we’re not purely selfish,

we’re not purely altruistic, we are a mix

and that’s the nature of it.

And we also have a complex constellation of values

that are just very specific to our evolutionary history.

Like we love waterways and mountains

and the ideal place to put a house

is in a mountain overlooking the water, right?

And we care a lot about our kids

and we care a little less about our cousins

and even less about our fifth cousins.

I mean, there are many particularities to human values,

which whether they’re good or evil

depends on your perspective.

Say, I spent a lot of time in Ethiopia in Addis Ababa

where we have one of our AI development offices

for my SingularityNet project.

And when I walk through the streets in Addis,

you know, there’s people lying by the side of the road,

like just living there by the side of the road,

dying probably of curable diseases

without enough food or medicine.

And when I walk by them, you know, I feel terrible,

I give them money.

When I come back home to the developed world,

they’re not on my mind that much.

I do donate some, but I mean,

I also spend some of the limited money I have

enjoying myself in frivolous ways

rather than donating it to those people who are right now,

like starving, dying and suffering on the roadside.

So does that make me evil?

I mean, it makes me somewhat selfish

and somewhat altruistic.

And we each balance that in our own way, right?

So whether that will be true of all possible AGI’s

is a subtler question.

So that’s how humans are.

So you have a sense, you kind of mentioned

that there’s a selfish,

I’m not gonna bring up the whole Ayn Rand idea

of selfishness being the core virtue.

That’s a whole interesting kind of tangent

that I think we’ll just distract ourselves on.

I have to make one amusing comment.

Sure.

A comment that has amused me anyway.

So the, yeah, I have extraordinary negative respect

for Ayn Rand.

Negative, what’s a negative respect?

But when I worked with a company called Genescient,

which was evolving flies to have extraordinary long lives

in Southern California.

So we had flies that were evolved by artificial selection

to have five times the lifespan of normal fruit flies.

But the population of super long lived flies

was physically sitting in a spare room

at an Ayn Rand elementary school in Southern California.

So that was just like,

well, if I saw this in a movie, I wouldn’t believe it.

Well, yeah, the universe has a sense of humor

in that kind of way.

That fits in, humor fits in somehow

into this whole absurd existence.

But you mentioned the balance between selfishness

and altruism as kind of being innate.

Do you think it’s possible

that’s kind of an emergent phenomenon,

those peculiarities of our value system?

How much of it is innate?

How much of it is something we collectively

kind of like a Dostoevsky novel

bring to life together as a civilization?

I mean, the answer to nature versus nurture

is usually both.

And of course it’s nature versus nurture

versus self organization, as you mentioned.

So clearly there are evolutionary roots

to individual and group selection

leading to a mix of selfishness and altruism.

On the other hand,

different cultures manifest that in different ways.

Well, we all have basically the same biology.

And if you look at sort of precivilized cultures,

you have tribes like the Yanomamo in Venezuela,

which their culture is focused on killing other tribes.

And you have other Stone Age tribes

that are mostly peaceful and have big taboos

against violence.

So you can certainly have a big difference

in how culture manifests

these innate biological characteristics,

but still, there’s probably limits

that are given by our biology.

I used to argue this with my great grandparents

who were Marxists actually,

because they believed in the withering away of the state.

Like they believe that,

as you move from capitalism to socialism to communism,

people would just become more social minded

so that a state would be unnecessary

and everyone would give everyone else what they needed.

Now, setting aside that

that’s not what the various Marxist experiments

on the planet seem to be heading toward in practice.

Just as a theoretical point,

I was very dubious that human nature could go there.

Like at that time when my great grandparents were alive,

I was just like, you know, I’m a cynical teenager.

I think humans are just jerks.

The state is not gonna wither away.

If you don’t have some structure

keeping people from screwing each other over,

they’re gonna do it.

So now I actually don’t quite see things that way.

I mean, I think my feeling now subjectively

is the culture aspect is more significant

than I thought it was when I was a teenager.

And I think you could have a human society

that was dialed dramatically further toward,

you know, self awareness, other awareness,

compassion and sharing than our current society.

And of course, greater material abundance helps,

but to some extent material abundance

is a subjective perception also

because many Stone Age cultures perceived themselves

as living in great material abundance.

They had all the food and water they wanted,

they lived in a beautiful place,

they had sex lives, they had children.

I mean, they had abundance without any factories, right?

So I think humanity probably would be capable

of fundamentally more positive and joy filled mode

of social existence than what we have now.

Clearly Marx didn’t quite have the right idea

about how to get there.

I mean, he missed a number of key aspects

of human society and its evolution.

And if we look at where we are in society now,

how to get there is a quite different question

because there are very powerful forces

pushing people in different directions

than a positive, joyous, compassionate existence, right?

So if we were to try to, you know,

Elon Musk dreams of colonizing Mars at the moment,

so maybe we will have a chance to start a new civilization

with a new governmental system.

And certainly there’s quite a bit of chaos.

We’re sitting now, I don’t know what the date is,

but this is June.

There’s quite a bit of chaos in all different forms

going on in the United States and all over the world.

So there’s a hunger for new types of governments,

new types of leadership, new types of systems.

And so what are the forces at play

and how do we move forward?

Yeah, I mean, colonizing Mars, first of all,

it’s a super cool thing to do.

We should be doing it.

So you love the idea.

Yeah, I mean, it’s more important than making

chocolatier chocolates and sexier lingerie

and many of the things that we spend

a lot more resources on as a species, right?

So I mean, we certainly should do it.

I think the possible futures in which a Mars colony

makes a critical difference for humanity are very few.

I mean, I think, I mean, assuming we make a Mars colony

and people go live there in a couple of decades,

I mean, their supplies are gonna come from Earth.

The money to make the colony came from Earth

and whatever powers are supplying the goods there

from Earth are gonna, in effect, be in control

of that Mars colony.

Of course, there are outlier situations

where Earth gets nuked into oblivion

and somehow Mars has been made self sustaining by that point

and then Mars is what allows humanity to persist.

But I think that those are very, very, very unlikely.

You don’t think it could be a first step on a long journey?

Of course it’s a first step on a long journey,

which is awesome.

I’m guessing the colonization of the rest

of the physical universe will probably be done

by AGI’s that are better designed to live in space

than by the meat machines that we are.

But I mean, who knows?

We may cryopreserve ourselves in some superior way

to what we know now and like shoot ourselves out

to Alpha Centauri and beyond.

I mean, that’s all cool.

It’s very interesting and it’s much more valuable

than most things that humanity is spending its resources on.

On the other hand, with AGI, we can get to a singularity

before the Mars colony becomes sustaining for sure,

possibly before it’s even operational.

So your intuition is that that’s the problem

if we really invest resources and we can get to faster

than a legitimate full self sustaining colonization of Mars.

Yeah, and it’s very clear that we will to me

because there’s so much economic value

in getting from narrow AI toward AGI,

whereas the Mars colony, there’s less economic value

until you get quite far out into the future.

So I think that’s very interesting.

I just think it’s somewhat off to the side.

I mean, just as I think, say, art and music

are very, very interesting and I wanna see resources

go into amazing art and music being created.

And I’d rather see that than a lot of the garbage

that the society spends their money on.

On the other hand, I don’t think Mars colonization

or inventing amazing new genres of music

is one of the things that is most likely

to make a critical difference in the evolution

of human or nonhuman life in this part of the universe

over the next decade.

Do you think AGI is really?

AGI is by far the most important thing

that’s on the horizon.

And then technologies that have direct ability

to enable AGI or to accelerate AGI are also very important.

For example, say, quantum computing.

I don’t think that’s critical to achieve AGI,

but certainly you could see how

the right quantum computing architecture

could massively accelerate AGI,

similar other types of nanotechnology.

Right now, the quest to cure aging and end disease

while not in the big picture as important as AGI,

of course, it’s important to all of us as individual humans.

And if someone made a super longevity pill

and distributed it tomorrow, I mean,

that would be huge and a much larger impact

than a Mars colony is gonna have for quite some time.

But perhaps not as much as an AGI system.

No, because if you can make a benevolent AGI,

then all the other problems are solved.

I mean, if then the AGI can be,

once it’s as generally intelligent as humans,

it can rapidly become massively more generally intelligent

than humans.

And then that AGI should be able to solve science

and engineering problems much better than human beings,

as long as it is in fact motivated to do so.

That’s why I said a benevolent AGI.

There could be other kinds.

Maybe it’s good to step back a little bit.

I mean, we’ve been using the term AGI.

People often cite you as the creator,

or at least the popularizer of the term AGI,

artificial general intelligence.

Can you tell the origin story of the term maybe?

So yeah, I would say I launched the term AGI upon the world

for what it’s worth without ever fully being in love

with the term.

What happened is I was editing a book,

and this process started around 2001 or two.

I think the book came out 2005, finally.

I was editing a book which I provisionally

was titling Real AI.

And I mean, the goal was to gather together

fairly serious academicish papers

on the topic of making thinking machines

that could really think in the sense like people can,

or even more broadly than people can, right?

So then I was reaching out to other folks

that I had encountered here or there

who were interested in that,

which included some other folks who I knew

from the transhumist and singularitarian world,

like Peter Voss, who has a company, AGI Incorporated,

still in California, and included Shane Legg,

who had worked for me at my company, WebMind,

in New York in the late 90s,

who by now has become rich and famous.

He was one of the cofounders of Google DeepMind.

But at that time, Shane was,

I think he may have just started doing his PhD

with Marcus Hutter, who at that time

hadn’t yet published his book, Universal AI,

which sort of gives a mathematical foundation

for artificial general intelligence.

So I reached out to Shane and Marcus and Peter Voss

and Pei Wang, who was another former employee of mine

who had been Douglas Hofstadter’s PhD student

who had his own approach to AGI,

and a bunch of Russian folks. I reached out to these guys

and they contributed papers for the book.

But that was my provisional title, but I never loved it

because in the end, I was doing some,

what we would now call narrow AI as well,

like applying machine learning to genomics data

or chat data for sentiment analysis.

I mean, that work is real.

And in a sense, it’s really AI.

It’s just a different kind of AI.

Ray Kurzweil wrote about narrow AI versus strong AI,

but that seemed weird to me because first of all,

narrow and strong are not antonyms.

That’s right.

But secondly, strong AI was used

in the cognitive science literature

to mean the hypothesis that digital computer AIs

could have true consciousness like human beings.

So there was already a meaning to strong AI,

which was complexly different, but related, right?

So we were tossing around on an email list

what title it should be.

And so we talked about narrow AI, broad AI, wide AI,

narrow AI, general AI.

And I think it was either Shane Legg or Peter Voss

on the private email discussion we had.

He said, but why don’t we go

with AGI, artificial general intelligence?

And Pei Wang wanted to do GAI,

general artificial intelligence,

because in Chinese it goes in that order.

But we figured gay wouldn’t work

in US culture at that time, right?

So we went with the AGI.

We used it for the title of that book.

And part of Peter and Shane’s reasoning

was you have the G factor in psychology,

which is IQ, general intelligence, right?

So you have a meaning of GI, general intelligence,

in psychology, so then you’re looking like artificial GI.

So then we use that for the title of the book.

And so I think maybe both Shane and Peter

think they invented the term,

but then later after the book was published,

this guy, Mark Gubrud, came up to me and he’s like,

well, I published an essay with the term AGI

in like 1997 or something.

And so I’m just waiting for some Russian to come out

and say they published that in 1953, right?

I mean, that term is not dramatically innovative

or anything.

It’s one of these obvious in hindsight things,

which is also annoying in a way,

because Joscha Bach, who you interviewed,

is a close friend of mine.

He likes the term synthetic intelligence,

which I like much better,

but it hasn’t actually caught on, right?

Because I mean, artificial is a bit off to me

because artifice is like a tool or something,

but not all AGI’s are gonna be tools.

I mean, they may be now,

but we’re aiming toward making them agents

rather than tools.

And in a way, I don’t like the distinction

between artificial and natural,

because I mean, we’re part of nature also

and machines are part of nature.

I mean, you can look at evolved versus engineered,

but that’s a different distinction.

Then it should be engineered general intelligence, right?

And then general, well,

if you look at Marcus Hutter’s book,

Universal AI, what he argues there is,

within the domain of computation theory,

which is limited, but interesting.

So if you assume computable environments

or computable reward functions,

then he articulates what would be

a truly general intelligence,

a system called AIXI, which is quite beautiful.
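
For reference, the decision rule Hutter derives is usually written roughly like this: at each step the agent chooses the action maximizing expected future reward, averaging over all environment programs consistent with its history, weighted by simplicity,

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where $U$ is a universal Turing machine, the $o$’s and $r$’s are observations and rewards, and $\ell(q)$ is the length of environment program $q$; the $2^{-\ell(q)}$ weighting is the Solomonoff-style prior over computable environments. Summing over all programs is what makes the agent fully general, and also incomputable.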

AIXI, and that’s the middle name

of my latest child, actually, is it?

What’s the first name?

First name is QORXI, Q O R X I,

which my wife came up with,

but that’s an acronym for quantum organized rational

expanding intelligence, and his middle name is Xiphonies,

actually, which means the formal principle underlying AIXI.

But in any case.

You’re giving Elon Musk’s new child a run for his money.

Well, I did it first.

He copied me with this new freakish name,

but now if I have another baby,

I’m gonna have to outdo him.

It’s becoming an arms race of weird, geeky baby names.

We’ll see what the babies think about it, right?

But I mean, my oldest son, Zarathustra, loves his name,

and my daughter, Sharazad, loves her name.

So far, basically, if you give your kids weird names.

They live up to it.

Well, you’re obliged to make the kids weird enough

that they like the names, right?

It directs their upbringing in a certain way.

But yeah, anyway, I mean, what Marcus showed in that book

is that a truly general intelligence

theoretically is possible,

but would take infinite computing power.

So then the artificial is a little off.

The general is not really achievable within physics

as we know it.

And I mean, physics as we know it may be limited,

but that’s what we have to work with now.

Intelligence.

Infinitely general, you mean,

like information processing perspective, yeah.

Yeah, intelligence is not very well defined either, right?

I mean, what does it mean?

I mean, in AI now, it’s fashionable to look at it

as maximizing an expected reward over the future.

But that sort of definition is pathological in various ways.
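
One standard formalization of that expected-reward framing is the universal intelligence measure from Legg and Hutter, roughly

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward policy $\pi$ achieves in $\mu$. Whether a single scalar objective like that captures intelligence is exactly what the open-ended intelligence view described next calls into question.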

And my friend David Weinbaum, AKA Weaver,

he had a beautiful PhD thesis on open ended intelligence,

trying to conceive intelligence in a…

Without a reward.

Yeah, he’s just looking at it differently.

He’s looking at complex self organizing systems

and looking at an intelligent system

as being one that revises and grows

and improves itself in conjunction with its environment

without necessarily there being one objective function

it’s trying to maximize.

Although over certain intervals of time,

it may act as if it’s optimizing

a certain objective function.

Very much Solaris from Stanisław Lem’s novels, right?

So yeah, the point is artificial, general and intelligence.

Don’t work.

They’re all bad.

On the other hand, everyone knows what AI is.

And AGI seems immediately comprehensible

to people with a technical background.

So I think that the term has served

as sociological function.

And now it’s out there everywhere, which baffles me.

It’s like KFC.

I mean, that’s it.

We’re stuck with AGI probably for a very long time

until AGI systems take over and rename themselves.

Yeah.

And then we’ll be biological.

We’re stuck with GPUs too,

which mostly have nothing to do with graphics.

Anymore, right?

I wonder what the AGI system will call us humans.

That was maybe.

Grandpa.

Yeah.

GPs.

Yeah.

Grandpa processing unit, yeah.

Biological grandpa processing units.

Yeah.

Okay, so maybe also just a comment on AGI representing

before even the term existed,

representing a kind of community.

You’ve talked about this in the past,

sort of AI is coming in waves,

but there’s always been this community of people

who dream about creating general human level

super intelligence systems.

Can you maybe give your sense of the history

of this community as it exists today,

as it existed before this deep learning revolution

all throughout the winters and the summers of AI?

Sure.

First, I would say as a side point,

the winters and summers of AI are greatly exaggerated

by Americans, in that

if you look at the publication record

of the artificial intelligence community

since say the 1950s,

you would find a pretty steady growth

and advance of ideas and papers.

And what’s thought of as an AI winter or summer

was sort of how much money is the US military

pumping into AI, which was meaningful.

On the other hand, there was AI going on in Germany,

UK and in Japan and in Russia, all over the place,

while US military got more and less enthused about AI.

So, I mean.

That happened to be, just for people who don’t know,

the US military happened to be the main source

of funding for AI research.

So another way to phrase that is it’s up and down

of funding for artificial intelligence research.

And I would say the correlation between funding

and intellectual advance was not 100%, right?

Because I mean, in Russia, as an example, or in Germany,

there was less dollar funding than in the US,

but many foundational ideas were laid out,

but it was more theory than implementation, right?

And US really excelled at sort of breaking through

from theoretical papers to working implementations,

which did go up and down somewhat

with US military funding,

but still, I mean, you can look in the 1980s,

Ernst Dickmanns in Germany had self driving cars

on the Autobahn, right?

And I mean, it was a little early

with regard to the car industry,

so it didn’t catch on such as has happened now.

But I mean, that whole advancement

of self driving car technology in Germany

was pretty much independent of AI military summers

and winters in the US.

So there’s been more going on in AI globally

than not only most people on the planet realize,

but also than most new AI PhDs realize

because they’ve come up within a certain sub field of AI

and haven’t had to look so much beyond that.

But I would say when I got my PhD in 1989 in mathematics,

I was interested in AI already.

In Philadelphia.

Yeah, I started at NYU, then I transferred to Philadelphia

to Temple University, good old North Philly.

North Philly.

Yeah, yeah, yeah, the pearl of the US.

You never stopped at a red light then

because you were afraid if you stopped at a red light,

someone will carjack you.

So you just drive through every red light.

Yeah.

Every day driving or bicycling to Temple from my house

was like a new adventure.

But yeah, the reason I didn’t do a PhD in AI

was that what people were doing in the academic AI field then

was just astoundingly boring and seemed wrong headed to me.

It was really like rule based expert systems

and production systems.

And actually I loved mathematical logic.

I had nothing against logic as the cognitive engine for an AI,

but the idea that you could type in the knowledge

that AI would need to think seemed just completely stupid

and wrong headed to me.

I mean, you can use logic if you want,

but somehow the system has got to be…

Automated.

Learning, right?

It should be learning from experience.

And the AI field then was not interested

in learning from experience.

I mean, some researchers certainly were.

I mean, I remember in mid eighties,

I discovered a book by John Andreae,

which was, it was about a reinforcement learning system

called PURR-PUSS, which was an acronym

that I can’t even remember what it was for,

but purpose anyway.

But he, I mean, that was a system

that was supposed to be an AGI

and basically by some sort of fancy

like Markov decision process learning,

it was supposed to learn everything

just from the bits coming into it

and learn to maximize its reward

and become intelligent, right?
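
As a toy illustration of that kind of learner, not Andreae’s actual system, just a minimal sketch of the general idea: a tabular Q-learning agent that receives a stream of bits, acts, and updates its value estimates purely from reward.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: the agent sees a bit, picks an action,
# gets a reward, and updates its value table. Illustrative only; it is not
# a reconstruction of Andreae's PURR-PUSS.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
ACTIONS = [0, 1]

q = defaultdict(float)  # (observation, action) -> estimated value

def choose_action(obs):
    # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(obs, a)])

def update(obs, action, reward, next_obs):
    # One-step Q-learning backup toward reward plus discounted best next value.
    best_next = max(q[(next_obs, a)] for a in ACTIONS)
    q[(obs, action)] += ALPHA * (reward + GAMMA * best_next - q[(obs, action)])

def step(obs, action):
    # Hypothetical toy environment: reward 1 when the action matches the bit.
    reward = 1.0 if action == obs else 0.0
    return reward, random.randint(0, 1)

obs = random.randint(0, 1)
for _ in range(10000):
    action = choose_action(obs)
    reward, next_obs = step(obs, action)
    update(obs, action, reward, next_obs)
    obs = next_obs

print({k: round(v, 2) for k, v in q.items()})
```

After enough steps the table converges toward preferring the action that matches the incoming bit, which is the whole “learn to maximize reward from raw bits” loop in miniature.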

So that was there in academia back then,

but it was like isolated, scattered, weird people.

But all these isolated, scattered, weird people

in that period, I mean, they laid the intellectual grounds

for what happened later.

So you look at John Andreae at University of Canterbury

with his PURR-PUSS reinforcement learning Markov system.

He was the PhD supervisor for John Cleary in New Zealand.

Now, John Cleary worked with me

when I was at Waikato University in 1993 in New Zealand.

And he worked with Ian Witten there

and they launched WEKA,

which was the first open source machine learning toolkit,

which was launched in, I guess, 93 or 94

when I was at Waikato University.

Written in Java, unfortunately.

Written in Java, which was a cool language back then.

I guess it’s still, well, it’s not cool anymore,

but it’s powerful.

I find, like most programmers now,

I find Java unnecessarily bloated,

but back then it was like Java or C++ basically.

And Java was easier for students.

Amusingly, a lot of the work on WEKA

when we were in New Zealand was funded by a US,

sorry, a New Zealand government grant

to use machine learning

to predict the menstrual cycles of cows.

So in the US, all the grant funding for AI

was about how to kill people or spy on people.

In New Zealand, it’s all about cows or kiwi fruits, right?

Yeah.

So yeah, anyway, I mean, John Andreae

had his probability theory based reinforcement learning,

proto AGI.

John Cleary was trying to do much more ambitious,

probabilistic AGI systems.

Now, John Cleary helped do WEKA,

which is the first open source machine learning toolkit.

So the predecessor for TensorFlow and Torch

and all these things.

Also, Shane Legg was at Waikato

working with John Cleary and Ian Witten

and this whole group.

And then working with my own companies,

my company, WebMind, an AI company I had in the late 90s

with a team there at Waikato University,

which is how Shane got his head full of AGI,

which led him to go on

and with Demis Hassabis found DeepMind.

So what you can see through that lineage is,

you know, in the 80s and 70s,

John Andreae was trying to build probabilistic

reinforcement learning AGI systems.

The technology, the computers just weren’t there to support it.

His ideas were very similar to what people are doing now.

But, you know, although he’s long since passed away

and didn’t become that famous outside of Canterbury,

I mean, the lineage of ideas passed on from him

to his students, to their students,

you can go trace directly from there to me

and to DeepMind, right?

So that there was a lot going on in AGI

that did ultimately lay the groundwork

for what we have today, but there wasn’t a community, right?

And so when I started trying to pull together

an AGI community, it was in the, I guess,

the early aughts when I was living in Washington, D.C.

and making a living doing AI consulting

for various U.S. government agencies.

And I organized the first AGI workshop in 2006.

And I mean, it wasn’t like it was literally

in my basement or something.

I mean, it was in the conference room at the Marriott

in Bethesda, it’s not that edgy or underground,

unfortunately, but still.

How many people attended?

About 60 or something.

That’s not bad.

I mean, D.C. has a lot of AI going on,

probably until the last five or 10 years,

much more than Silicon Valley, although it’s just quiet

because of the nature of what happens in D.C.

Their business isn’t driven by PR.

Mostly when something starts to work really well,

it’s taken black and becomes even more quiet, right?

But yeah, the thing is that really had the feeling

of a group of starry eyed mavericks huddled in a basement,

like plotting how to overthrow the narrow AI establishment.

And for the first time, in some cases,

coming together with others who shared their passion

for AGI and the technical seriousness about working on it.

And that’s very, very different than what we have today.

I mean, now it’s a little bit different.

We have AGI conference every year

and there’s several hundred people rather than 50.

Now it’s more like this is the main gathering

of people who want to achieve AGI

and think that large scale nonlinear regression

is not the golden path to AGI.

So I mean it’s…

AKA neural networks.

Yeah, yeah, yeah.

Well, certain architectures for learning using neural networks.

So yeah, the AGI conferences are sort of now

the main concentration of people not obsessed

with deep neural nets and deep reinforcement learning,

but still interested in AGI, not the only ones.

I mean, there’s other little conferences and groupings

interested in human level AI

and cognitive architectures and so forth.

But yeah, it’s been a big shift.

Like back then, you couldn’t really…

It would have been very, very edgy then

to give a university department seminar

that mentioned AGI or human level AI.

It was more like you had to talk about

something more short term and immediately practical

than in the bar after the seminar,

you could bullshit about AGI in the same breath

as time travel or the simulation hypothesis or something.

Whereas now, AGI is not only in the academic seminar room,

like you have Vladimir Putin knows what AGI is.

And he’s like, Russia needs to become the leader in AGI.

So national leaders and CEOs of large corporations.

I mean, the CTO of Intel, Justin Ratner,

this was years ago, Singularity Summit Conference,

2008 or something.

He’s like, we believe Ray Kurzweil,

the singularity will happen in 2045

and it will have Intel inside.

So, I mean, it’s gone from being something

which is the pursuit of like crazed mavericks,

crackpots and science fiction fanatics

to being a marketing term for large corporations

and the national leaders,

which is an astounding transition.

But yeah, in the course of this transition,

I think a bunch of sub communities have formed

and the community around the AGI conference series

is certainly one of them.

It hasn’t grown as big as I might’ve liked it to.

On the other hand, sometimes a modest size community

can be better for making intellectual progress also.

Like you go to a society for neuroscience conference,

you have 35 or 40,000 neuroscientists.

On the one hand, it’s amazing.

On the other hand, you’re not gonna talk to the leaders

of the field there if you’re an outsider.

Yeah, in the same sense, the AAAI,

the artificial intelligence,

the main kind of generic artificial intelligence

conference is too big.

It’s too amorphous.

Like it doesn’t make sense.

Well, yeah, and NIPS has become a company advertising outlet,

the whole of it.

So, I mean, to comment on the role of AGI

in the research community, I’d still,

if you look at NeurIPS, if you look at CVPR,

if you look at ICLR,

AGI is still seen as the outcast.

I would say in these main machine learning,

in these main artificial intelligence conferences

amongst the researchers,

I don’t know if it’s an accepted term yet.

What I’ve seen, bravely: you mentioned Shane Legg’s

DeepMind, and then OpenAI, are the two places that are,

I would say unapologetically so far,

I think it’s actually changing unfortunately,

but so far they’ve been pushing the idea

that the goal is to create an AGI.

Well, they have billions of dollars behind them.

So, I mean, they’re in the public mind

that certainly carries some oomph, right?

I mean, I mean.

But they also have really strong researchers, right?

They do, they’re great teams.

I mean, DeepMind in particular, yeah.

And they have, I mean, DeepMind has Marcus Hutter

walking around.

I mean, there’s all these folks who basically

their full time position involves dreaming

about creating AGI.

I mean, Google Brain has a lot of amazing

AGI oriented people also.

And I mean, so I’d say from a public marketing view,

DeepMind and OpenAI are the two large well funded

organizations that have put the term and concept AGI

out there sort of as part of their public image.

But I mean, they’re certainly not,

there are other groups that are doing research

that seems just as AGI-ish to me.

I mean, including a bunch of groups in Google’s

main Mountain View office.

So yeah, it’s true.

AGI is somewhat away from the mainstream now.

But if you compare it to where it was 15 years ago,

there’s been an amazing mainstreaming.

You could say the same thing about super longevity research,

which is one of my application areas that I’m excited about.

I mean, I’ve been talking about this since the 90s,

but working on this since 2001.

And back then, really to say,

you’re trying to create therapies to allow people

to live hundreds of thousands of years,

you were way, way, way, way out of the industry,

academic mainstream.

But now, Google had Project Calico,

Craig Venter had Human Longevity Incorporated.

And then once the suits come marching in, right?

I mean, once there’s big money in it,

then people are forced to take it seriously

because that’s the way modern society works.

So it’s still not as mainstream as cancer research,

just as AGI is not as mainstream

as automated driving or something.

But the degree of mainstreaming that’s happened

in the last 10 to 15 years is astounding

to those of us who’ve been at it for a while.

Yeah, but there’s a marketing aspect to the term,

but in terms of actual full force research

that’s going on under the header of AGI,

it’s currently, I would say dominated,

maybe you can disagree,

dominated by neural networks research,

that the nonlinear regression, as you mentioned.

Like what’s your sense with OpenCog, with your work,

but in general also, logic based systems

and expert systems,

for me, always seemed to capture a deep element

of intelligence that needs to be there.

Like you said, it needs to learn,

it needs to be automated somehow,

but that seems to be missing from a lot of research currently.

So what’s your sense?

I guess one way to ask this question,

what’s your sense of what kind of things

will an AGI system need to have?

Yeah, that’s a very interesting topic

that I’ve thought about for a long time.

And I think there are many, many different approaches

that can work for getting to human level AI.

So I don’t think there’s like one golden algorithm,

or one golden design that can work.

And I mean, flying machines is the much worn

analogy here, right?

Like, I mean, you have airplanes, you have helicopters,

you have balloons, you have stealth bombers

that don’t look like regular airplanes.

You’ve also got blimps.

Birds too.

Birds, yeah, and bugs, right?

Yeah.

And there are certainly many kinds of flying machines that…

And there’s a catapult that you can just launch.

And there’s bicycle powered like flying machines, right?

Nice, yeah.

Yeah, so now these are all analyzable

by a basic theory of aerodynamics, right?

Now, so one issue with AGI is we don’t yet have the analog

of the theory of aerodynamics.

And that’s what Marcus Hutter was trying to make

with AIXI and his general theory of general intelligence.

But that theory in its most clearly articulated parts

really only works for either infinitely powerful machines

or almost, or insanely impractically powerful machines.

So I mean, if you were gonna take a theory based approach

to AGI, what you would do is say, well, let’s take

what’s called, say, AIXItl, which is Hutter’s AIXI machine

that can work on merely insanely much processing power

rather than infinitely much.

What does TL stand for?

Time and length.

Okay.

So you’re basically bounding it.

Like constrained somehow.

Yeah, yeah, yeah.

So how AIXI works basically is each action

that it wants to take, before taking that action,

it looks at all its history.

And then it looks at all possible programs

that it could use to make a decision.

And it decides like which decision program

would have let it make the best decisions

according to its reward function over its history.

And it uses that decision program

to make the next decision, right?

It’s not afraid of infinite resources.

It’s searching through the space

of all possible computer programs

in between each action and each next action.

Now, AIXItl searches through all possible computer programs

that have runtime less than T and length less than L.

So it’s still an impractically humongous space,

right?
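
A rough sketch of the decision loop being described, in Python. This is an editor's illustration only; the `programs_up_to` enumerator, the `evaluate` scorer, and the program objects are hypothetical placeholders rather than Hutter's actual formulation.

```python
# Toy sketch of an AIXItl-style decision step: enumerate every candidate
# decision program of description length <= L, score how much reward each one
# would have earned over the recorded history (charging at most T compute
# steps per decision), and let the historically best program pick the action.

def aixitl_step(history, latest_observation, programs_up_to, evaluate, L, T):
    best_program, best_score = None, float("-inf")
    for program in programs_up_to(L):          # all programs of length <= L: astronomically many
        score = evaluate(program, history, max_steps=T)
        if score > best_score:
            best_program, best_score = program, score
    # the winning program makes the actual next decision
    return best_program.run(latest_observation, max_steps=T)
```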

So what you would like to do to make an AGI

and what will probably be done 50 years from now

to make an AGI is say, okay, well, we have some constraints.

We have these processing power constraints

and we have the space and time constraints on the program.

We have energy utilization constraints

and we have this particular class environments,

class of environments that we care about,

which may be say, you know, manipulating physical objects

on the surface of the earth,

communicating in human language.

I mean, whatever our particular, not annihilating humanity,

whatever our particular requirements happen to be.

If you formalize those requirements

in some formal specification language,

you should then be able to run

automated program specializer on AIXItl,

specialize it to the computing resource constraints

and the particular environment and goal.

And then it will spit out like the specialized version

of AIXItl to your resource restrictions

and your environment, which will be your AGI, right?

And that I think is how our super AGI

will create new AGI systems, right?

But that’s a very…

It seems really inefficient.

It’s a very Russian approach by the way,

like the whole field of program specialization

came out of Russia.

Can you backtrack?

So what is program specialization?

So it’s basically…

Well, take sorting, for example.

You can have a generic program for sorting lists,

but what if all your lists you care about

are length 10,000 or less?

Got it.

You can run an automated program specializer

on your sorting algorithm,

and it will come up with the algorithm

that’s optimal for sorting lists of length 1,000 or less,

or 10,000 or less, right?
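
A minimal Python sketch of that sorting example. The specializer here is hand-written for illustration; a real automated program specializer would derive the specialized routine itself.

```python
# The generic sorter handles any input; the "specialized" version is produced
# under the extra promise that every list we will ever see is short.

def generic_sort(xs):
    return sorted(xs)                      # general-purpose comparison sort

def specialize_sort(max_len):
    """Illustrative stand-in for an automated specializer: return a sorter
    tuned to the promise that inputs never exceed max_len elements."""
    def specialized(xs):
        assert len(xs) <= max_len          # the constraint we specialized against
        out = list(xs)
        for i in range(1, len(out)):       # insertion sort: low overhead on short lists
            x, j = out[i], i - 1
            while j >= 0 and out[j] > x:
                out[j + 1] = out[j]
                j -= 1
            out[j + 1] = x
        return out
    return specialized

short_sort = specialize_sort(10_000)
print(short_sort([3, 1, 2]))               # [1, 2, 3]
```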

That’s kind of like, isn’t that the kind of the process

of evolution as a program specializer to the environment?

So you’re kind of evolving human beings,

or you’re living creatures.

Your Russian heritage is showing there.

So with Alexander Vityaev and Peter Anokhin and so on,

I mean, there’s a long history

of thinking about evolution that way also, right?

So, well, my point is that what we’re thinking of

as a human level general intelligence,

if you start from narrow AIs,

like are being used in the commercial AI field now,

then you’re thinking,

okay, how do we make it more and more general?

On the other hand,

if you start from AIXI or Schmidhuber’s Gödel machine,

or these infinitely powerful,

but practically infeasible AIs,

then getting to a human level AGI

is a matter of specialization.

It’s like, how do you take these

maximally general learning processes

and how do you specialize them

so that they can operate

within the resource constraints that you have,

but will achieve the particular things that you care about?

Because we humans are not maximally general intelligence.

If I ask you to run a maze in 750 dimensions,

you’d probably be very slow.

Whereas at two dimensions,

you’re probably way better, right?

So, I mean, we’re special because our hippocampus

has a two dimensional map in it, right?

And it does not have a 750 dimensional map in it.

So, I mean, we’re a peculiar mix

of generality and specialization, right?

We probably start quite general at birth.

Now, obviously still narrow,

but like more general than we are

at age 20 and 30 and 40 and 50 and 60.

I don’t think that, I think it’s more complex than that

because I mean, in some sense,

a young child is less biased

and the brain has yet to sort of crystallize

into appropriate structures

for processing aspects of the physical and social world.

On the other hand,

the young child is very tied to their sensorium.

Whereas we can deal with abstract mathematics,

like 750 dimensions and the young child cannot

because they haven’t grown what Piaget

called the formal capabilities.

They haven’t learned to abstract yet, right?

And the ability to abstract

gives you a different kind of generality

than what the baby has.

So, there’s both more specialization

and more generalization that comes

with the development process actually.

I mean, I guess just the trajectories

of the specialization are most controllable

at the young age, I guess is one way to put it.

Do you have kids?

No.

They’re not as controllable as you think.

So, you think it’s interesting.

I think, honestly, I think a human adult

is much more generally intelligent than a human baby.

Babies are very stupid, you know what I mean?

I mean, they’re cute, which is why we put up

with their repetitiveness and stupidity.

And they have what the Zen guys would call

a beginner’s mind, which is a beautiful thing,

but that doesn’t necessarily correlate

with a high level of intelligence.

On the plot of cuteness and stupidity,

there’s a process that allows us to put up

with their stupidity as they become more intelligent.

So, by the time you’re an ugly old man like me,

you gotta get really, really smart to compensate.

To compensate, okay, cool.

But yeah, going back to your original question,

so the way I look at human level AGI

is how do you specialize, you know,

unrealistically inefficient, superhuman,

brute force learning processes

to the specific goals that humans need to achieve

and the specific resources that we have.

And both of these, the goals and the resources

and the environments, I mean, all this is important.

And on the resources side, it’s important

that the hardware resources we’re bringing to bear

are very different than the human brain.

So the way I would want to implement AGI

on a bunch of neurons in a vat

that I could rewire arbitrarily is quite different

than the way I would want to create AGI

on say a modern server farm of CPUs and GPUs,

which in turn may be quite different

than the way I would want to implement AGI

on whatever quantum computer we’ll have in 10 years,

supposing someone makes a robust quantum turing machine

or something, right?

So I think there’s been coevolution

of the patterns of organization in the human brain

and the physiological particulars

of the human brain over time.

And when you look at neural networks,

that is one powerful class of learning algorithms,

but it’s also a class of learning algorithms

that evolved to exploit the particulars of the human brain

as a computational substrate.

If you’re looking at the computational substrate

of a modern server farm,

you won’t necessarily want the same algorithms

that you want on the human brain.

And from the right level of abstraction,

you could look at maybe the best algorithms on the brain

and the best algorithms on a modern computer network

as implementing the same abstract learning

and representation processes,

but finding that level of abstraction

is its own AGI research project then, right?

So that’s about the hardware side

and the software side, which follows from that.

Then regarding what are the requirements,

I wrote the paper years ago

on what I called the embodied communication prior,

which was quite similar in intent

to Yoshua Bengio’s recent paper on the consciousness prior,

except I didn’t wanna wrap up consciousness in it

because to me, the qualia problem and subjective experience

is a very interesting issue also,

which we can chat about,

but I would rather keep that philosophical debate distinct

from the debate of what kind of biases

do you wanna put in a general intelligence

to give it human like general intelligence.

And I’m not sure Yoshua Bengio is really addressing

that kind of consciousness.

He’s just using the term.

I love Yoshua to pieces.

Like he’s by far my favorite of the lions of deep learning.

Yeah.

He’s such a good hearted guy.

He’s a good human being.

Yeah, for sure.

I am not sure he has plumbed the depths

of the philosophy of consciousness.

No, he’s using it as a sexy term.

Yeah, yeah, yeah.

So what I called it was the embodied communication prior.

Can you maybe explain it a little bit?

Yeah, yeah.

What I meant was, what are we humans evolved for?

You can say being human, but that’s very abstract, right?

I mean, our minds control individual bodies,

which are autonomous agents moving around in a world

that’s composed largely of solid objects, right?

And we’ve also evolved to communicate via language

with other solid object agents that are going around

doing things collectively with us

in a world of solid objects.

And these things are very obvious,

but if you compare them to the scope

of all possible intelligences

or even all possible intelligences

that are physically realizable,

that actually constrains things a lot.

So if you start to look at how would you realize

some specialized or constrained version

of universal general intelligence

in a system that has limited memory

and limited speed of processing,

but whose general intelligence will be biased

toward controlling a solid object agent,

which is mobile in a solid object world

for manipulating solid objects

and communicating via language with other similar agents

in that same world, right?

Then starting from that,

you’re starting to get a requirements analysis

for human level general intelligence.

And then that leads you into cognitive science

and you can look at, say, what are the different types

of memory that the human mind and brain has?

And this has matured over the last decades

and I got into this a lot.

So after getting my PhD in math,

I was an academic for eight years.

I was in departments of mathematics,

computer science, and psychology.

When I was in the psychology department

at the University of Western Australia,

I was focused on cognitive science of memory and perception.

Actually, I was teaching neural nets and deep neural nets

and it was multi layer perceptrons, right?

Psychology?

Yeah.

Cognitive science, it was cross disciplinary

among engineering, math, psychology, philosophy,

linguistics, computer science.

But yeah, we were teaching psychology students

to try to model the data from human cognition experiments

using multi layer perceptrons,

which was the early version of a deep neural network.

Very, very, yeah, recurrent back prop

was very, very slow to train back then, right?

So this is the study of these constrained systems

that are supposed to deal with physical objects.

So if you look at cognitive psychology,

you can see there’s multiple types of memory,

which are to some extent represented

by different subsystems in the human brain.

So we have episodic memory,

which takes into account our life history

and everything that’s happened to us.

We have declarative or semantic memory,

which is like facts and beliefs abstracted

from the particular situations that they occurred in.

There’s sensory memory, which to some extent

is sense modality specific,

and then to some extent is unified across sense modalities.

There’s procedural memory, memory of how to do stuff,

like how to swing the tennis racket, right?

Which is, there’s motor memory,

but it’s also a little more abstract than motor memory.

It involves cerebellum and cortex working together.

Then there’s memory linkage with emotion

which has to do with linkages of cortex and limbic system.

There’s specifics of spatial and temporal modeling

connected with memory, which has to do with hippocampus

and thalamus connecting to cortex.

And the basal ganglia, which influences goals.

So we have specific memory of what goals,

subgoals and sub subgoals we wanted to pursue

in which context in the past.

Human brain has substantially different subsystems

for these different types of memory

and substantially differently tuned learning,

like differently tuned modes of longterm potentiation

to do with the types of neurons and neurotransmitters

in the different parts of the brain

corresponding to these different types of knowledge.

And these different types of memory and learning

in the human brain, I mean, you can trace these all back

to embodied communication for controlling agents

in worlds of solid objects.

Now, so if you look at building an AGI system,

one way to do it, which starts more from cognitive science

than neuroscience is to say,

okay, what are the types of memory

that are necessary for this kind of world?

Yeah, yeah, necessary for this sort of intelligence.

What types of learning work well

with these different types of memory?

And then how do you connect all these things together, right?

And of course the human brain did it incrementally

through evolution because each of the sub networks

of the brain, I mean, it’s not really the lobes

of the brain, it’s the sub networks,

each of which is widely distributed,

and each of the sub networks of the brain

co evolves with the other sub networks of the brain,

both in terms of its patterns of organization

and the particulars of the neurophysiology.

So they all grew up communicating

and adapting to each other.

It’s not like they were separate black boxes

that were then glommed together, right?

Whereas as engineers, we would tend to say,

let’s make the declarative memory box here

and the procedural memory box here

and the perception box here and wire them together.

And when you can do that, it’s interesting.

I mean, that’s how a car is built, right?

But on the other hand, that’s clearly not

how biological systems are made.

The parts co evolve so as to adapt and work together.

That’s, by the way, how every human engineered system

that flies, which we were using as an analogy

before, is built as well.

So do you find this at all appealing?

Like there’s been a lot of really exciting,

which I find strange that it’s ignored work

in cognitive architectures, for example,

throughout the last few decades.

Do you find that?

Yeah, I mean, I had a lot to do with that community

and you know, Paul Rosenbloom, who was one of the,

and John Laird who built the SOAR architecture,

are friends of mine.

And I learned SOAR quite well

and ACT-R and these different cognitive architectures.

And how I was looking at the AI world about 10 years ago

before this whole commercial deep learning explosion was,

on the one hand, you had these cognitive architecture guys

who were working closely with psychologists

and cognitive scientists who had thought a lot

about how the different parts of a human like mind

should work together.

On the other hand, you had these learning theory guys

who didn’t care at all about the architecture,

but were just thinking about, like,

how do you recognize patterns in large amounts of data?

And in some sense, what you needed to do

was to get the learning that the learning theory guys

were doing and put it together with the architecture

that the cognitive architecture guys were doing.

And then you would have what you needed.

Now, you can’t, unfortunately, when you look at the details,

you can’t just do that without totally rebuilding

what is happening on both the cognitive architecture

and the learning side.

So, I mean, they tried to do that in SOAR,

but what they ultimately did is like,

take a deep neural net or something for perception

and you include it as one of the black boxes.

It becomes one of the boxes.

The learning mechanism becomes one of the boxes

as opposed to fundamental part of the system.

You could look at some of the stuff DeepMind has done,

like the differentiable neural computer or something

that sort of has a neural net for deep learning perception.

It has another neural net, which is like a memory matrix

that stores, say, the map of the London subway or something.

So probably Demis Hassabis was thinking about this

like part of cortex and part of hippocampus

because hippocampus has a spatial map.

And when he was a neuroscientist,

he was doing a bunch on cortex hippocampus interconnection.

So there, the DNC would be an example of folks

from the deep neural net world trying to take a step

in the cognitive architecture direction

by having two neural modules that correspond roughly

to two different parts of the human brain

that deal with different kinds of memory and learning.

But on the other hand, it’s super, super, super crude

from the cognitive architecture view, right?

Just as what John Laird and SOAR did with neural nets

was super, super crude from a learning point of view

because the learning was like off to the side,

not affecting the core representations, right?

I mean, you weren’t learning the representation.

You were learning the data that feeds into the…

You were learning abstractions of perceptual data

to feed into the representation that was not learned, right?

So yeah, this was clear to me a while ago.

And one of my hopes with the AGI community

was to sort of bring people

from those two directions together.

That didn’t happen much in terms of…

Not yet.

And what I was gonna say is it didn’t happen

in terms of bringing like the lions

of cognitive architecture together

with the lions of deep learning.

It did work in the sense that a bunch of younger researchers

have had their heads filled with both of those ideas.

This comes back to a saying my dad,

who was a university professor, often quoted to me,

which was, science advances one funeral at a time,

which I’m trying to avoid.

Like I’m 53 years old and I’m trying to invent

amazing, weird ass new things

that nobody ever thought about,

which we’ll talk about in a few minutes.

But there is that aspect, right?

Like the people who’ve been at AI a long time

and have made their career developing one aspect,

like a cognitive architecture or a deep learning approach,

it can be hard once you’re old

and have made your career doing one thing,

it can be hard to mentally shift gears.

I mean, I try quite hard to remain flexible minded.

Have you been successful somewhat in changing,

maybe, have you changed your mind on some aspects

of what it takes to build an AGI, like technical things?

The hard part is that the world doesn’t want you to.

The world or your own brain?

The world, well, one part

is that your brain doesn’t want to.

The other part is that the world doesn’t want you to.

Like the people who have followed your ideas

get mad at you if you change your mind.

And the media wants to pigeonhole you as an avatar

of a certain idea.

But yeah, I’ve changed my mind on a bunch of things.

I mean, when I started my career,

I really thought quantum computing

would be necessary for AGI.

And I doubt it’s necessary now,

although I think it will be a super major enhancement.

But I mean, I’m now in the middle of embarking

on the complete rethink and rewrite from scratch

of our OpenCog AGI system together with Alexey Potapov

and his team in St. Petersburg,

who’s working with me in SingularityNet.

So now we’re trying to like go back to basics,

take everything we learned from working

with the current OpenCog system,

take everything everybody else has learned

from working with their proto AGI systems

and design the best framework for the next stage.

And I do think there’s a lot to be learned

from the recent successes with deep neural nets

and deep reinforcement systems.

I mean, people made these essentially trivial systems

work much better than I thought they would.

And there’s a lot to be learned from that.

And I wanna incorporate that knowledge appropriately

in our OpenCog 2.0 system.

On the other hand, I also think current deep neural net

architectures as such will never get you anywhere near AGI.

So I think you wanna avoid the pathology

of throwing the baby out with the bathwater

and like saying, well, these things are garbage

because foolish journalists overblow them

as being the path to AGI

and a few researchers overblow them as well.

There’s a lot of interesting stuff to be learned there

even though those are not the golden path.

So maybe this is a good chance to step back.

You mentioned OpenCog 2.0, but…

Go back to OpenCog 0.0, which exists now.

Alpha, yeah.

Yeah, maybe talk through the history of OpenCog

and your thinking about these ideas.

I would say OpenCog 2.0 is a term we’re throwing around

sort of tongue in cheek because the existing OpenCog system

that we’re working on now is not remotely close

to what we’d consider a 1.0, right?

I mean, it’s an early…

It’s been around, what, 13 years or something,

but it’s still an early stage research system, right?

And actually, we are going back to the beginning

in terms of theory and implementation

because we feel like that’s the right thing to do,

but I’m sure what we end up with is gonna have

a huge amount in common with the current system.

I mean, we all still like the general approach.

So first of all, what is OpenCog?

Sure, OpenCog is an open source software project

that I launched together with several others in 2008

and probably the first code written toward that

was written in 2001 or two or something

that was developed as a proprietary code base

within my AI company, Novamente LLC.

Then we decided to open source it in 2008,

cleaned up the code, threw out some things,

and added some new things and…

What language is it written in?

It’s C++, primarily.

There’s a bunch of Scheme as well,

but most of it’s C++.

And it’s separate from something we’ll also talk about,

the SingularityNet.

So it was born as a non networked thing.

Correct, correct.

Well, there are many levels of networks involved here.

No connectivity to the internet, or no, at birth.

Yeah, I mean, SingularityNet is a separate project

and a separate body of code.

And you can use SingularityNet as part of the infrastructure

for a distributed OpenCog system,

but there are different layers.

Yeah, got it.

So OpenCog on the one hand as a software framework

could be used to implement a variety

of different AI architectures and algorithms,

but in practice, there’s been a group of developers

which I’ve been leading together with Linas Vepstas,

Nil Geisweiller, and a few others,

which have been using the OpenCog platform

and infrastructure to implement certain ideas

about how to make an AGI.

So there’s been a little bit of ambiguity

about OpenCog, the software platform

versus OpenCog, the AGI design,

because in theory, you could use that software to do,

you could use it to make a neural net.

You could use it to make a lot of different AGI.

What kind of stuff does the software platform provide,

like in terms of utilities, tools, like what?

Yeah, let me first tell about OpenCog

as a software platform,

and then I’ll tell you the specific AGI R&D

we’ve been building on top of it.

So the core component of OpenCog as a software platform

is what we call the atom space,

which is a weighted labeled hypergraph.

ATOM, atom space.

Atom space, yeah, yeah, not atom, like Adam and Eve,

although that would be cool too.

Yeah, so you have a hypergraph, which is like,

so a graph in this sense is a bunch of nodes

with links between them.

A hypergraph is like a graph,

but links can go between more than two nodes.

So you have a link between three nodes.

And in fact, OpenCog’s atom space

would properly be called a metagraph

because you can have links pointing to links,

or you could have links pointing to whole subgraphs, right?

So it’s an extended hypergraph or a metagraph.

Is metagraph a technical term?

It is now a technical term.

Interesting.

But I don’t think it was yet a technical term

when we started calling this a generalized hypergraph.

But in any case, it’s a weighted labeled

generalized hypergraph or weighted labeled metagraph.

The weights and labels mean that the nodes and links

can have numbers and symbols attached to them.

So they can have types on them.

They can have numbers on them that represent,

say, a truth value or an importance value

for a certain purpose.

And of course, like with all things,

you can reduce that to a hypergraph,

and then the hypergraph can be reduced to a graph.

You can reduce hypergraph to a graph,

and you could reduce a graph to an adjacency matrix.

So, I mean, there’s always multiple representations.

But there’s a layer of representation

that seems to work well here.

Got it.

Right, right, right.

And so similarly, you could have a link to a whole graph

because a whole graph could represent,

say, a body of information.

And I could say, I reject this body of information.

Then one way to do that is make that link

go to that whole subgraph representing

the body of information, right?

I mean, there are many alternate representations,

but that’s, anyway, what we have in OpenCog,

we have an atom space, which is this weighted, labeled,

generalized hypergraph.

Knowledge store, it lives in RAM.

There’s also a way to back it up to disk.

There are ways to spread it among

multiple different machines.

Then there are various utilities for dealing with that.

So there’s a pattern matcher,

which lets you specify a sort of abstract pattern

and then search through the whole atom space,

the weighted labeled hypergraph, to see what subhypergraphs

may match that pattern, for example.

So that’s, then there’s something called

the CogServer in OpenCog,

which lets you run a bunch of different agents

or processes in a scheduler.

And each of these agents, basically it reads stuff

from the atom space and it writes stuff to the atom space.

So this is sort of the basic operational model.

That’s the software framework.

And of course that’s, there’s a lot there

just from a scalable software engineering standpoint.
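
A very compressed sketch of that operational model in Python: a shared atom space of weighted, labeled nodes and links, a crude pattern matcher, and agents scheduled against it. The class and method names are illustrative, not OpenCog's actual C++/Scheme API.

```python
# Shared "atom space" that agents read from and write to. Illustrative only.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=(), strength=1.0, confidence=0.0):
        self.type = atom_type            # label, e.g. "ConceptNode" or "InheritanceLink"
        self.name = name                 # nodes carry names
        self.outgoing = tuple(outgoing)  # links point at other atoms, possibly at other links
        self.strength = strength         # weights: a truth value ...
        self.confidence = confidence     # ... and how much evidence backs it

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

    def match(self, atom_type):
        """Crude stand-in for the pattern matcher: find all atoms of a type."""
        return [a for a in self.atoms if a.type == atom_type]

class Agent:
    """An agent reads some atoms, computes something, and writes atoms back."""
    def step(self, atomspace):
        raise NotImplementedError

def cog_server(atomspace, agents, cycles):
    """Toy scheduler: run each agent against the shared atom space in turn."""
    for _ in range(cycles):
        for agent in agents:
            agent.step(atomspace)
```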

So you could use this, I don’t know if you’ve,

have you looked into Stephen Wolfram’s physics project

recently with the hypergraphs and stuff?

Could you theoretically use like the software framework

to play with it? You certainly could,

although Wolfram would rather die

than use anything but Mathematica for his work.

Well that’s, yeah, but there’s a big community of people

who are, you know, would love integration.

Like you said, the young minds love the idea

of integrating, of connecting things.

Yeah, that’s right.

And I would add on that note,

the idea of using hypergraph type models in physics

is not very new.

Like if you look at…

The Russians did it first.

Well, I’m sure they did.

And a guy named Ben Dribus, who’s a mathematician,

a professor in Louisiana or somewhere,

had a beautiful book on quantum sets and hypergraphs

and algebraic topology for discrete models of physics.

And carried it much farther than Wolfram has,

but he’s not rich and famous,

so it didn’t get in the headlines.

But yeah, Wolfram aside, yeah,

certainly that’s a good way to put it.

The whole OpenCog framework,

you could use it to model biological networks

and simulate biology processes.

You could use it to model physics

on discrete graph models of physics.

So you could use it to do, say, biologically realistic

neural networks, for example.

And that’s a framework.

What do agents and processes do?

Do they grow the graph?

What kind of computations, just to get a sense,

are they supposed to do?

So in theory, they could do anything they want to do.

They’re just C++ processes.

On the other hand, the computation framework

is sort of designed for agents

where most of their processing time

is taken up with reads and writes to the atom space.

And so that’s a very different processing model

than, say, the matrix multiplication based model

that underlies most deep learning systems, right?

So you could create an agent

that just factored numbers for a billion years.

It would run within the OpenCog platform,

but it would be pointless, right?

I mean, the point of doing OpenCog

is because you want to make agents

that are cooperating via reading and writing

into this weighted labeled hypergraph, right?

And that has both cognitive architecture importance

because then this hypergraph is being used

as a sort of shared memory

among different cognitive processes,

but it also has software and hardware

implementation implications

because current GPU architectures

are not so useful for OpenCog,

whereas a graph chip would be incredibly useful, right?

And I think Graphcore has those now,

but they’re not ideally suited for this.

But I think in the next, let’s say, three to five years,

we’re gonna see new chips

where like a graph is put on the chip

and the back and forth between multiple processes

acting SIMD and MIMD on that graph is gonna be fast.

And then that may do for OpenCog type architectures

what GPUs did for deep neural architecture.

It’s a small tangent.

Can you comment on thoughts about neuromorphic computing?

So like hardware implementations

of all these different kind of, are you interested?

Are you excited by that possibility?

I’m excited by graph processors

because I think they can massively speed up OpenCog,

which is a class of architectures that I’m working on.

I think if, you know, in principle, neuromorphic computing

should be amazing.

I haven’t yet been fully sold

on any of the systems that are out.

They’re like, memristors should be amazing too, right?

So a lot of these things have obvious potential,

but I haven’t yet put my hands on a system

that seemed to manifest that.

Memristor systems should be amazing,

but the current systems have not been great.

Yeah, I mean, look, for example,

if you wanted to make a biologically realistic

hardware neural network,

like making a circuit in hardware

that emulated like the Hodgkin–Huxley equation

or the Izhikevich equation,

like differential equations

for a biologically realistic neuron

and putting that in hardware on the chip,

that would seem to make it more feasible

to make a large scale, truly biologically realistic

neural network.
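
For reference, the Izhikevich model mentioned here is just two coupled equations plus a reset rule. A minimal Euler-integration sketch using the standard published "regular spiking" parameters (an editor's illustration, not code from the conversation):

```python
# Izhikevich spiking-neuron model:
#   dv/dt = 0.04*v^2 + 5*v + 140 - u + I
#   du/dt = a*(b*v - u)
#   spike: if v >= 30 mV then v <- c and u <- u + d

def izhikevich(I=10.0, dt=0.5, steps=2000, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u = c, b * c                      # start at rest
    spike_times, trace = [], []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: record time and reset
            spike_times.append(t * dt)
            v, u = c, u + d
        trace.append(v)
    return spike_times, trace

spikes, _ = izhikevich()
print(f"{len(spikes)} spikes in {2000 * 0.5:.0f} ms of simulated time")
```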

Now, what’s been done so far is not like that.

So I guess personally, as a researcher,

I mean, I’ve done a bunch of work in computational neuroscience

where I did some work with IARPA in DC,

Intelligence Advanced Research Projects Activity.

We were looking at how do you make

a biologically realistic simulation

of seven different parts of the brain

cooperating with each other,

using like realistic nonlinear dynamical models of neurons,

and how do you get that to simulate

what’s going on in the mind of a geospatial intelligence analyst

while they’re trying to find terrorists on a map, right?

So if you want to do something like that,

having neuromorphic hardware that really let you simulate

like a realistic model of the neuron would be amazing.

But that’s sort of with my computational neuroscience

hat on, right?

With an AGI hat on, I’m just more interested

in these hypergraph knowledge representation

based architectures, which would benefit more

from various types of graph processors

because the main processing bottleneck

is reading and writing to RAM.

It’s reading and writing to the graph in RAM.

The main processing bottleneck for this kind of

proto AGI architecture is not multiplying matrices.

And for that reason, GPUs, which are really good

at multiplying matrices, don’t apply as well.

There are frameworks like Gunrock and others

that try to boil down graph processing

to matrix operations, and they’re cool,

but you’re still putting a square peg

into a round hole in a certain way.

The same is true, I mean, current quantum machine learning,

which is very cool.

It’s also all about how to get matrix and vector operations

in quantum mechanics, and I see why that’s natural to do.

I mean, quantum mechanics is all unitary matrices

and vectors, right?

On the other hand, you could also try

to make graph centric quantum computers,

which I think is where things will go.

And then we can have, then we can make,

like take the OpenCog implementation layer,

implement it in a collapsed state inside a quantum computer.

But that may be the singularity squared, right?

I’m not sure we need that to get to human level.

That’s already beyond the first singularity.

But can we just go back to OpenCog?

Yeah, and the hypergraph in OpenCog.

That’s the software framework, right?

So the next thing is our cognitive architecture

tells us particular algorithms to put there.

Got it.

Can we backtrack on the kind of, is this graph designed,

is it in general supposed to be sparse

and the operations constantly grow and change the graph?

Yeah, the graph is sparse.

But is it constantly adding links and so on?

It is a self modifying hypergraph.

So it’s not, so the write and read operations

you’re referring to, this isn’t just a fixed graph

to which you change the weights, it’s a constantly growing graph.

Yeah, that’s true.

So it is different model than,

say current deep neural nets

and have a fixed neural architecture

and you’re updating the weights.

Although there have been like cascade correlation

neural net architectures that grow new nodes and links,

but the most common neural architectures now

have a fixed neural architecture,

you’re updating the weights.

And then open cog, you can update the weights

and that certainly happens a lot,

but adding new nodes, adding new links,

removing nodes and links is an equally critical part

of the system’s operations.

Got it.

So now when you start to add these cognitive algorithms

on top of this open cog architecture,

what does that look like?

Yeah, so within this framework then,

creating a cognitive architecture is basically two things.

It’s choosing what type system you wanna put

on the nodes and links in the hypergraph,

what types of nodes and links you want.

And then it’s choosing what collection of agents,

what collection of AI algorithms or processes

are gonna run to operate on this hypergraph.

And of course those two decisions

are closely connected to each other.

So in terms of the type system,

there are some links that are more neural net like,

they just have weights that get updated

by Hebbian learning, and activation spreads along them.

There are other links that are more logic like

and nodes that are more logic like.

So you could have a variable node

and you can have a node representing a universal

or existential quantifier as in predicate logic

or term logic.

So you can have logic like nodes and links,

or you can have neural like nodes and links.

You can also have procedure like nodes and links

as in say a combinatorial logic or Lambda calculus

representing programs.

So you can have nodes and links representing

many different types of semantics,

which means you could make a horrible ugly mess

or you could make a system

where these different types of knowledge

all interpenetrate and synergize

with each other beautifully, right?
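
A toy illustration of those three families of atoms living side by side: neural-like links with Hebbian-style weights, logic-like links with truth values, and procedure-like atoms wrapping executable code. The names are loosely inspired by OpenCog's atom types but are only illustrative here.

```python
# Three kinds of knowledge in one node-and-link store (illustrative names).

hebbian_link = {                      # neural-like: weight updated by Hebbian learning
    "type": "HebbianLink", "from": "cat", "to": "animal", "weight": 0.7,
}

inheritance_link = {                  # logic-like: term-logic inheritance with a truth value
    "type": "InheritanceLink", "from": "cat", "to": "animal",
    "strength": 0.95, "confidence": 0.9,
}

procedure_node = {                    # procedure-like: a small executable program
    "type": "GroundedProcedureNode", "name": "greet",
    "body": lambda who: f"hello, {who}",
}

print(procedure_node["body"]("Lex"))  # hello, Lex
```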

So the hypergraph can contain programs.

Yeah, it can contain programs,

although in the current version,

it is a very inefficient way

to guide the execution of programs,

which is one thing that we are aiming to resolve

with our rewrite of the system now.

So what to you is the most beautiful aspect of OpenCog?

Just to you personally,

some aspect that captivates your imagination

from beauty or power?

What fascinates me is finding a common representation

that underlies abstract, declarative knowledge

and sensory knowledge and movement knowledge

and procedural knowledge and episodic knowledge,

finding the right level of representation

where all these types of knowledge are stored

in a sort of universal and interconvertible

yet practically manipulable way, right?

So to me, that’s the core,

because once you’ve done that,

then the different learning algorithms

can help each other out. Like what you want is,

if you have a logic engine

that helps with declarative knowledge

and you have a deep neural net

that gathers perceptual knowledge,

and you have, say, an evolutionary learning system

that learns procedures,

you want these to not only interact

on the level of sharing results

and passing inputs and outputs to each other,

you want the logic engine, when it gets stuck,

to be able to share its intermediate state

with the neural net and with the evolutionary system

and with the evolutionary learning algorithm

so that they can help each other out of bottlenecks

and help each other solve combinatorial explosions

by intervening inside each other’s cognitive processes.

But that can only be done

if the intermediate state of a logic engine,

the evolutionary learning engine,

and a deep neural net are represented in the same form.

And that’s what we figured out how to do

by putting the right type system

on top of this weighted labeled hypergraph.

So is there, can you maybe elaborate

on what are the different characteristics

of a type system that can coexist

amongst all these different kinds of knowledge

that needs to be represented?

And is, I mean, like, is it hierarchical?

Just any kind of insights you can give

on that kind of type system?

Yeah, yeah, so this gets very nitty gritty

and mathematical, of course,

but one key part is switching

from predicate logic to term logic.

What is predicate logic?

What is term logic?

So term logic was invented by Aristotle,

or at least that’s the oldest recollection we have of it.

But term logic breaks down basic logic

into basically simple links between nodes,

like an inheritance link between node A and node B.

So in term logic, the basic deduction operation

is A implies B, B implies C, therefore A implies C.

Whereas in predicate logic,

the basic operation is modus ponens,

like A implies B, and A, therefore B.

So it’s a slightly different way of breaking down logic,

but by breaking down logic into term logic,

you get a nice way of breaking logic down

into nodes and links.

So your concepts can become nodes,

the logical relations become links.

And so then inference is like,

so if this link is A implies B,

this link is B implies C,

then deduction builds a link A implies C.

And your probabilistic algorithm

can assign a certain weight there.
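
A toy Python version of that deduction step, building the A-implies-C link and giving it a truth value. The combination rule below is a deliberate simplification (a crude independence assumption), not the actual PLN deduction formula.

```python
# Term-logic deduction over weighted links: A->B plus B->C yields A->C.

def deduce(link_ab, link_bc):
    a, b1 = link_ab["from"], link_ab["to"]
    b2, c = link_bc["from"], link_bc["to"]
    assert b1 == b2, "deduction needs a shared middle term"
    strength = link_ab["strength"] * link_bc["strength"]            # simplistic combination
    confidence = min(link_ab["confidence"], link_bc["confidence"])  # no stronger than its weakest premise
    return {"type": "InheritanceLink", "from": a, "to": c,
            "strength": strength, "confidence": confidence}

ab = {"type": "InheritanceLink", "from": "cat", "to": "mammal",
      "strength": 0.98, "confidence": 0.9}
bc = {"type": "InheritanceLink", "from": "mammal", "to": "animal",
      "strength": 0.99, "confidence": 0.95}
print(deduce(ab, bc))   # an A->C link with a derived truth value
```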

Now, you may also have like a Hebbian neural link

from A to C, which is the degree to which thinking,

the degree to which A being the focus of attention

should make B the focus of attention, right?

So you could have then a neural link

and you could have a symbolic,

like logical inheritance link in your term logic.

And they have separate meaning,

but they could be used to guide each other as well.

Like if there’s a large amount of neural weight

on the link between A and B,

that may direct your logic engine to think about,

well, what is the relation?

Are they similar?

Is there an inheritance relation?

Are they similar in some context?

On the other hand, if there’s a logical relation

between A and B, that may direct your neural component

to think, well, when I’m thinking about A,

should I be directing some attention to B also?

Because there’s a logical relation.

So in terms of logic,

there’s a lot of thought that went into

how do you break down logic relations,

including basic sort of propositional logic relations

as Aristotelian term logic deals with,

and then quantifier logic relations also.

How do you break those down elegantly into a hypergraph?

Because you, I mean, you can boil logic expression

into a graph in many different ways.

Many of them are very ugly, right?

We tried to find elegant ways

of sort of hierarchically breaking down

complex logic expression into nodes and links.

So that if you have say different nodes representing,

Ben, AI, Lex, interview or whatever,

the logic relations between those things

are compact in the node and link representation.

So that when you have a neural net acting

on the same nodes and links,

the neural net and the logic engine

can sort of interoperate with each other.

And also interpretable by humans.

Is that an important?

That’s tough.

Yeah, in simple cases, it’s interpretable by humans.

But honestly, I would say logic systems give more potential

for transparency and comprehensibility

than neural net systems,

but you still have to work at it.

Because I mean, if I show you a predicate logic proposition

with like 500 nested universal and existential quantifiers

and 217 variables, that’s no more comprehensible

than the weight matrices of a neural network, right?

So I’d say the logic expressions

that AI learns from its experience

are mostly totally opaque to human beings

and maybe even harder to understand than a neural net’s.

Because I mean, when you have multiple

nested quantifier bindings,

it’s a very high level of abstraction.

There is a difference though,

in that within logic, it’s a little more straightforward

to pose the problem of like normalize this

and boil this down to a certain form.

I mean, you can do that in neural nets too.

Like you can distill a neural net to a simpler form,

but that’s more often done to make a neural net

that’ll run on an embedded device or something.

It’s harder to distill a net to a comprehensible form

than it is to simplify a logic expression

to a comprehensible form, but it doesn’t come for free.

Like what’s in the AI’s mind is incomprehensible

to a human unless you do some special work

to make it comprehensible.

So on the procedural side, there’s some different

and sort of interesting voodoo there.

I mean, if you’re familiar in computer science,

there’s something called the Curry Howard correspondence,

which is a one to one mapping between proofs and programs.

So every program can be mapped into a proof.

Every proof can be mapped into a program.

You can model this using category theory

and a bunch of nice math,

but we wanna make that practical, right?

So that if you have an executable program

that like moves the robot’s arm or figures out

in what order to say things in a dialogue,

that’s a procedure represented in OpenCog’s hypergraph.

But if you wanna reason on how to improve that procedure,

you need to map that procedure into logic

using Curry Howard isomorphism.

So then the logic engine can reason

about how to improve that procedure

and then map that back into the procedural representation

that is efficient for execution.
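
A toy sketch of that round trip: hold a procedure as a small tree of nodes and links, "reason" about it by rewriting, then compile the improved tree back into something executable. This is a simple rewrite system for illustration, not the actual Curry Howard machinery.

```python
# Procedure f(x) = (x + 0) * 2 held as an expression tree, simplified
# symbolically, then mapped back to an executable function.

def simplify(expr):
    """Recursively rewrite (op, left, right) trees, e.g. ('+', x, 0) -> x."""
    if not isinstance(expr, tuple):
        return expr
    op, left, right = expr[0], simplify(expr[1]), simplify(expr[2])
    if op == "+" and right == 0:
        return left                      # x + 0  ==>  x
    if op == "*" and right == 1:
        return left                      # x * 1  ==>  x
    return (op, left, right)

def compile_expr(expr):
    """Map a tree back to an executable Python function of one variable x."""
    if expr == "x":
        return lambda x: x
    if not isinstance(expr, tuple):
        return lambda x, c=expr: c       # constant leaf
    op, l, r = expr
    fl, fr = compile_expr(l), compile_expr(r)
    return {"+": lambda x: fl(x) + fr(x), "*": lambda x: fl(x) * fr(x)}[op]

tree = ("*", ("+", "x", 0), 2)
improved = simplify(tree)                # ('*', 'x', 2)
f = compile_expr(improved)
print(improved, f(21))                   # ('*', 'x', 2) 42
```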

So again, that comes down to not just

can you make your procedure into a bunch of nodes and links?

Cause I mean, that can be done trivially.

A C++ compiler has nodes and links inside it.

Can you boil down your procedure

into a bunch of nodes and links

in a way that’s like hierarchically decomposed

and simple enough?

That it can reason about.

Yeah, yeah, that given the resource constraints at hand,

you can map it back and forth to your term logic,

like fast enough

and without having a bloated logic expression, right?

So there’s just a lot of,

there’s a lot of nitty gritty particulars there,

but by the same token, if you ask a chip designer,

like how do you make the Intel I7 chip so good?

There’s a long list of technical answers there,

which will take a while to go through, right?

And this has been decades of work.

I mean, the first AI system of this nature I tried to build

was called WebMind in the mid 1990s.

And we had a big graph,

a big graph operating in RAM implemented with Java 1.1,

which was a terrible, terrible implementation idea.

And then each node had its own processing.

So like that there,

the core loop looped through all nodes in the network

and let each node enact what its little thing was doing.

And we had logic and neural nets in there,

and evolutionary learning,

but we hadn’t done enough of the math

to get them to operate together very cleanly.

So it was really, it was quite a horrible mess.

So as well as shifting an implementation

where the graph is its own object

and the agents are separately scheduled,

we’ve also done a lot of work

on how do you represent programs?

How do you represent procedures?

You know, how do you represent genotypes for evolution

in a way that the interoperability

between the different types of learning

associated with these different types of knowledge

actually works?

And that’s been quite difficult.

It’s taken decades and it’s totally off to the side

of what the commercial mainstream of the AI field is doing,

which isn’t thinking about representation at all really.

Although you could see like in the DNC,

they had to think a little bit about

how do you make representation of a map

in this memory matrix work together

with the representation needed

for say visual pattern recognition

in the hierarchical neural network.

But I would say we have taken that direction

of taking the types of knowledge you need

for different types of learning,

like declarative, procedural, attentional,

and how do you make these types of knowledge represent

in a way that allows cross learning

across these different types of memory.

We’ve been prototyping and experimenting with this

within OpenCog and before that WebMind

since the mid 1990s.

Now, disappointingly to all of us,

this has not yet been cashed out in an AGI system, right?

I mean, we’ve used this system

within our consulting business.

So we’ve built natural language processing

and robot control and financial analysis.

We’ve built a bunch of sort of vertical market specific

proprietary AI projects.

They use OpenCog on the backend,

but we haven’t, that’s not the AGI goal, right?

It’s interesting, but it’s not the AGI goal.

So now what we’re looking at with our rebuild of the system.

2.0.

Yeah, we’re also calling it True AGI.

So we’re not quite sure what the name is yet.

We made a website for trueagi.io,

but we haven’t put anything on there yet.

We may come up with an even better name.

It’s kind of like the real AI starting point

for your AGI book.

Yeah, but I like True better

because True has like, you can be true hearted, right?

You can be true to your girlfriend.

So True has a number of meanings, and it also has logic in it, right?

Because logic is a key part of the system.

So yeah, with the True AGI system,

we’re sticking with the same basic architecture,

but we’re trying to build on what we’ve learned.

And one thing we’ve learned is that,

we need type checking among dependent types

to be much faster

and among probabilistic dependent types to be much faster.

So as it is now,

you can have complex types on the nodes and links.

But if you wanna put,

like if you want types to be first class citizens,

so that you can have the types can be variables

and then you do type checking

among complex higher order types.

You can do that in the system now, but it’s very slow.

This is stuff like it’s done

in cutting edge program languages like Agda or something,

these obscure research languages.

On the other hand,

we’ve been doing a lot tying together deep neural nets

with symbolic learning.

So we did a project for Cisco, for example,

which was on, this was street scene analysis,

but they had deep neural models

for a bunch of cameras watching street scenes,

but they trained a different model for each camera

because they couldn’t get the transfer learning

to work between camera A and camera B.

So we took what came out of all the deep neural models

for the different cameras,

we fed it into an OpenCog symbolic representation.

Then we did some pattern mining and some reasoning

on what came out of all the different cameras

within the symbolic graph.

And that worked well for that application.

I mean, Hugo Latapie from Cisco gave a talk touching on that

at last year’s AGI conference, it was in Shenzhen.
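
As a purely illustrative sketch of that kind of neural-to-symbolic hand-off, not the actual Cisco or OpenCog code, the pattern is roughly: separately trained per-camera neural models emit symbolic observations, those observations are merged into one shared store, and simple pattern mining and reasoning run over that shared symbolic layer. All names and the toy detections below are hypothetical.

    # Toy sketch only: camera names, detections and the "mining" step are
    # hypothetical stand-ins for the real street scene system.
    from collections import Counter

    # Pretend outputs of separately trained per-camera neural models.
    detections = {
        "camera_A": [("car", "red"), ("person", "walking"), ("car", "blue")],
        "camera_B": [("car", "red"), ("truck", "parked"), ("person", "walking")],
    }

    # Merge everything into one flat symbolic store of (camera, object, attribute) facts.
    atoms = [(cam, obj, attr) for cam, objs in detections.items() for obj, attr in objs]

    # Trivial "pattern mining": (object, attribute) pairs seen by more than one camera,
    # i.e. facts that transfer across cameras even though the neural models do not.
    counts = Counter((obj, attr) for _, obj, attr in atoms)
    shared = [pair for pair, n in counts.items() if n > 1]

    # Trivial "reasoning": the same pair on two cameras suggests a shared event.
    for obj, attr in shared:
        cams = sorted({cam for cam, o, a in atoms if (o, a) == (obj, attr)})
        print(f"'{attr} {obj}' seen by {cams} -> candidate cross-camera event")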

On the other hand, we learned from there,

it was kind of clunky to get the deep neural models

to work well with the symbolic system

because we were using torch.

And torch keeps a sort of stateful computation graph,

but you needed like real time access

to that computation graph within our hypergraph.

And we certainly did it,

Alexey Potapov, who leads our St. Petersburg team,

wrote a great paper on cognitive modules in OpenCog

explaining sort of how do you deal

with the torch compute graph inside OpenCog.

But in the end we realized like,

that just hadn’t been one of our design thoughts

when we built OpenCog, right?

So between wanting really fast dependent type checking

and wanting much more efficient interoperation

between the computation graphs

of deep neural net frameworks and OpenCog’s hypergraph

and adding on top of that,

wanting to more effectively run an OpenCog hypergraph

distributed across RAM in 10,000 machines,

whereas we’re doing dozens of machines now,

but it’s just not, we didn’t architect it

with that sort of modern scalability in mind.

So these performance requirements are what have driven us

to want to rearchitect the base,

but the core AGI paradigm doesn’t really change.

Like the mathematics is the same.

It’s just, we can’t scale to the level that we want

in terms of distributed processing

or speed of various kinds of processing

with the current infrastructure

that was built in the phase 2001 to 2008,

which is hardly shocking.

Well, I mean, the three things you mentioned

are really interesting.

So what do you think about in terms of interoperability

communicating with computational graph of neural networks?

What do you think about the representations

that neural networks form?

They’re bad, but there’s many ways

that you could deal with that.

So I’ve been wrestling with this a lot

in some work on unsupervised grammar induction,

and I have a simple paper on that.

that I’ll give at the next AGI conference,

the online portion of which is next week, actually.

What is grammar induction?

So this isn’t AGI either,

but it’s sort of on the verge

between narrow AI and AGI or something.

Unsupervised grammar induction is the problem.

Throw your AI system, a huge body of text,

and have it learn the grammar of the language

that produced that text.

So you’re not giving it labeled examples.

So you’re not giving it like a thousand sentences

where the parses were marked up by graduate students.

So it’s just got to infer the grammar from the text.

It’s like the Rosetta Stone, but worse, right?

Because you only have the one language,

and you have to figure out what is the grammar.

So that’s not really AGI because,

I mean, the way a human learns language is not that, right?

I mean, we learn from language that’s used in context.

So it’s a social embodied thing.

We see how a given sentence is grounded in observation.

There’s an interactive element, I guess.

Yeah, yeah, yeah.

On the other hand, so I’m more interested in that.

I’m more interested in making an AGI system learn language

from its social and embodied experience.

On the other hand, that’s also more of a pain to do,

and that would lead us into Hanson Robotics

and their robotics work, which I know we’ll get into.

We’ll talk about it in a few minutes.

But just as an intellectual exercise,

as a learning exercise,

trying to learn grammar from a corpus

is very, very interesting, right?

And that’s been a field in AI for a long time.

No one can do it very well.

So we’ve been looking at transformer neural networks

and tree transformers, which are amazing.

These came out of Google Brain, actually.

And actually on that team was Lucas Kaiser,

who used to work for me

in the period 2005 through 2008 or something.

So it’s been fun to see my former

sort of AGI employees disperse and do

all these amazing things.

Way too many sucked into Google, actually.

Well, yeah, anyway.

We’ll talk about that too.

Lucas Kaiser and a bunch of these guys,

they create transformer networks,

that classic paper like attention is all you need

and all these things following on from that.

So we’re looking at transformer networks.

And like, these are able to,

I mean, this is what underlies GPT2 and GPT3 and so on,

which are very, very cool

and have absolutely no cognitive understanding

of any of the texts they’re looking at.

Like they’re very intelligent idiots, right?

So sorry to take a small tangent, I’ll bring this back,

but do you think GPT3 understands language?

No, no, it understands nothing.

It’s a complete idiot.

But it’s a brilliant idiot.

You don’t think GPT20 will understand language?

No, no, no.

So size is not gonna buy you understanding,

any more than a faster car is gonna get you to Mars.

It’s a completely different kind of thing.

I mean, these networks are very cool.

And as an entrepreneur,

I can see many highly valuable uses for them.

And as an artist, I love them, right?

So I mean, we’re using our own neural model,

which is along those lines

to control the Philip K. Dick robot now.

And it’s amazing to like train a neural model

on the robot Philip K. Dick

and see it come up with like crazed,

stoned philosopher pronouncements,

very much like what Philip K. Dick might’ve said, right?

Like these models are super cool.

And I’m working with Hanson Robotics now

on using a similar, but more sophisticated one for Sophia,

which we haven’t launched yet.

But so I think it’s cool.

But no, these are recognizing a large number

of shallow patterns.

They’re not forming an abstract representation.

And that’s the point I was coming to

when we’re looking at grammar induction,

we tried to mine patterns out of the structure

of the transformer network.

And you can, but the patterns aren’t what you want.

They’re nasty.

So I mean, if you do supervised learning,

if you look at sentences where you know

the correct parse of a sentence,

you can learn a matrix that maps

between the internal representation of the transformer

and the parse of the sentence.

And so then you can actually train something

that will output the sentence parse

from the transformer network’s internal state.

And we did this, and I think Christopher Manning

and some others have now done this also.

But I mean, what you get is that the representation

is horribly ugly and is scattered all over the network

and doesn’t look like the rules of grammar

that you know are the right rules of grammar, right?

It’s kind of ugly.
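
To make that concrete, here is a minimal toy sketch of the supervised probe idea, assuming an off-the-shelf GPT-2 from Hugging Face and a few hand-labeled sentences; it fits a simple linear classifier from the transformer’s hidden states to coarse per-word syntactic tags, a drastically simplified stand-in for mapping the internal representation to a full parse.

    # Toy probe sketch: the sentences, tags and model choice are illustrative,
    # not the actual experiments discussed here.
    # Requires: torch, transformers, scikit-learn, numpy.
    import numpy as np
    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import GPT2Model, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    enc = GPT2Model.from_pretrained("gpt2")
    enc.eval()

    def word_features(sentence):
        """One contextual hidden vector per word (sub-token states averaged)."""
        batch = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = enc(**batch).last_hidden_state[0]   # (n_subtokens, 768)
        word_ids = batch.word_ids()                      # sub-token -> word index
        return [
            hidden[[i for i, w in enumerate(word_ids) if w == k]].mean(dim=0).numpy()
            for k in range(len(sentence.split()))
        ]

    # Tiny hand-labeled corpus: one coarse tag per word (hypothetical labels).
    data = [
        ("the dog chased a cat", ["DET", "NOUN", "VERB", "DET", "NOUN"]),
        ("a bird ate the worm",  ["DET", "NOUN", "VERB", "DET", "NOUN"]),
        ("the cat saw a mouse",  ["DET", "NOUN", "VERB", "DET", "NOUN"]),
    ]
    X = np.vstack([vec for sent, _ in data for vec in word_features(sent)])
    y = [tag for _, tags in data for tag in tags]

    probe = LogisticRegression(max_iter=1000).fit(X, y)  # the learned linear map
    test = "the dog saw a bird"
    print(list(zip(test.split(), probe.predict(np.vstack(word_features(test))))))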

So what we’re actually doing is we’re using

a symbolic grammar learning algorithm,

but we’re using the transformer neural network

as a sentence probability oracle.

So like if you have a rule of grammar

and you aren’t sure if it’s a correct rule of grammar or not,

you can generate a bunch of sentences

using that rule of grammar

and a bunch of sentences violating that rule of grammar.

And you can see the transformer model

doesn’t think the sentences obeying the rule of grammar

are more probable than the sentences

disobeying the rule of grammar.

So in that way, you can use the neural model

as a sentence probability oracle

to guide a symbolic grammar learning process.

And that seems to work better than trying to milk

the grammar out of the neural network

that doesn’t have it in there.
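
A minimal sketch of that oracle step, assuming an off-the-shelf GPT-2 from Hugging Face in place of whatever model the actual pipeline uses; the rule-obeying and rule-violating sentences are written by hand here, whereas in the real pipeline they would be generated from a candidate grammar rule by the symbolic learner.

    # Toy oracle sketch: only the scoring step is shown.
    # Requires: torch, transformers.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2")
    lm.eval()

    def log_prob(sentence):
        """Approximate total log-probability of a sentence under the language model."""
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            out = lm(ids, labels=ids)           # out.loss = mean NLL over predicted tokens
        return -out.loss.item() * (ids.shape[1] - 1)

    # Hypothetical examples for one candidate rule of grammar.
    obeying   = ["the dog chased the cat", "a child reads the book"]
    violating = ["dog the cat the chased", "reads child a book the"]

    def mean_score(sentences):
        return sum(log_prob(s) for s in sentences) / len(sentences)

    # If rule-obeying sentences are clearly more probable than rule-violating ones,
    # the symbolic grammar learner keeps the candidate rule; otherwise it drops it.
    print("obeying:", mean_score(obeying), " violating:", mean_score(violating))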

So I think the thing is these neural nets

are not getting a semantically meaningful representation

internally by and large.

So one line of research is to try to get them to do that.

And InfoGAN was trying to do that.

So like if you look back like two years ago,

there were all these papers on Edward,

this probabilistic programming neural net framework

that Google had, which came out of InfoGAN.

So the idea there was like you could train

an InfoGAN neural net model,

which is a generative adversarial network,

to recognize and generate faces.

And the model would automatically learn a variable

for how long the nose is and automatically learn a variable

for how wide the eyes are

or how big the lips are or something, right?

So it automatically learned these variables,

which have a semantic meaning.

So that was a rare case where a neural net

trained with a fairly standard GAN method

was able to actually learn the semantic representation.

So for many years, many of us tried to take that

the next step and get a GAN type neural network

that would have not just a list of semantic latent variables,

but would have say a Bayes net of semantic latent variables

with dependencies between them.

The whole programming framework Edward was made for that.

I mean, no one got it to work, right?

And it could be.

Do you think it’s possible?

Yeah, do you think?

I don’t know.

It might be that back propagation just won’t work for it

because the gradients are too screwed up.

Maybe you could get it to work using CMA-ES

or some like floating point evolutionary algorithm.

We tried, we didn’t get it to work.

Eventually we just paused that rather than gave it up.

We paused that and said, well, okay, let’s try

more innovative ways to learn implicit,

to learn what are the representations implicit

in that network without trying to make it grow

inside that network.

And I described how we’re doing that in language.

You can do similar things in vision, right?

So what?

Use it as an oracle.

Yeah, yeah, yeah.

So you can, that’s one way is that you use

a structure learning algorithm, which is symbolic.

And then you use the deep neural net as an oracle

to guide the structure learning algorithm.

The other way to do it is like InfoGAN was trying to do,

and try to tweak the neural network

to have the symbolic representation inside it.

I tend to think what the brain is doing

is more like using the deep neural net type thing

as an oracle.

I think the visual cortex or the cerebellum

are probably learning a non semantically meaningful

opaque tangled representation.

And then when they interface with the more cognitive parts

of the cortex, the cortex is sort of using those

as an oracle and learning the abstract representation.

So if you do sports, say take for example,

serving in tennis, right?

I mean, my tennis serve is okay, not great,

but I learned it by trial and error, right?

And I mean, I learned music by trial and error too.

I just sit down and play, but then if you’re an athlete,

which I’m not a good athlete,

I mean, then you’ll watch videos of yourself serving

and your coach will help you think about what you’re doing

and you’ll then form a declarative representation,

but your cerebellum maybe didn’t have

a declarative representation.

Same way with music, like I will hear something in my head,

I’ll sit down and play the thing like I heard it.

And then I will try to study what my fingers did

to see like, what did you just play?

Like how did you do that, right?

Because if you’re composing,

you may wanna see how you did it

and then declaratively morph that in some way

that your fingers wouldn’t think of, right?

But the physiological movement may come out of some opaque,

like cerebellar reinforcement learned thing, right?

And so that’s, I think trying to milk the structure

of a neural net by treating it as an oracle,

maybe more like how your declarative mind post processes

what your visual or motor cortex is doing.

I mean, in vision, it’s the same way,

like you can recognize beautiful art

much better than you can say why

you think that piece of art is beautiful.

But if you’re trained as an art critic,

you do learn to say why.

And some of it’s bullshit, but some of it isn’t, right?

Some of it is learning to map sensory knowledge

into declarative and linguistic knowledge,

yet without necessarily making the sensory system itself

use a transparent and easily communicable representation.

Yeah, that’s fascinating to think of neural networks

as like dumb question answerers that you can just milk

to build up a knowledge base.

And then it can be multiple networks, I suppose,

from different.

Yeah, yeah, so I think if a group like DeepMind or OpenAI

were to build AGI, and I think DeepMind is like

a thousand times more likely from what I could tell,

because they’ve hired a lot of people with broad minds

and many different approaches and angles on AGI,

whereas OpenAI is also awesome,

but I see them as more of like a pure

deep reinforcement learning shop.

Yeah, this time, I got you.

So far. Yeah, there’s a lot of,

you’re right, I mean, there’s so much interdisciplinary

work at DeepMind, like neuroscience.

And you put that together with Google Brain,

which granted they’re not working that closely together now,

but my oldest son Zarathustra is doing his PhD

in machine learning applied to automated theorem proving

in Prague under Josef Urban.

So the first paper, DeepMath, which applied deep neural nets

to guide theorem proving was out of Google Brain.

I mean, by now, the automated theorem proving community

is going way, way, way beyond anything Google was doing,

but still, yeah, but anyway,

if that community was gonna make an AGI,

probably one way they would do it was,

take 25 different neural modules,

architected in different ways,

maybe resembling different parts of the brain,

like a basal ganglia model, cerebellum model,

a thalamus module, a few hippocampus models,

a number of different models

representing parts of the cortex, right?

Take all of these and then wire them together

to co train and learn them together like that.

That would be an approach to creating an AGI.

One could implement something like that efficiently

on top of our true AGI, like OpenCog 2.0 system,

once it exists, although obviously Google

has their own highly efficient implementation architecture.

So I think that’s a decent way to build AGI.

I was very interested in that in the mid 90s,

but I mean, the knowledge about how the brain works

sort of pissed me off, like it wasn’t there yet.

Like, you know, in the hippocampus,

you have these concept neurons,

like the so called grandmother neuron,

which everyone laughed at, but it’s actually there.

Like I have some Lex Fridman neurons

that fire differentially when I see you

and not when I see any other person, right?

So how do these Lex Fridman neurons,

how do they coordinate with the distributed representation

of Lex Fridman I have in my cortex, right?

There’s some back and forth between cortex and hippocampus

that lets these discrete symbolic representations

in hippocampus correlate and cooperate

with the distributed representations in cortex.

This probably has to do with how the brain

does its version of abstraction and quantifier logic, right?

Like you can have a single neuron in the hippocampus

that activates a whole distributed activation pattern

in cortex, well, this may be how the brain does

like symbolization and abstraction

as in functional programming or something,

but we can’t measure it.

Like we don’t have enough electrodes stuck

between the cortex and the hippocampus

in any known experiment to measure it.

So I got frustrated with that direction,

not because it’s impossible.

Because we just don’t understand enough yet.

Of course, it’s a valid research direction.

You can try to understand more and more.

And we are measuring more and more

about what happens in the brain now than ever before.

So it’s quite interesting.

On the other hand, I sort of got more

of an engineering mindset about AGI.

I’m like, well, okay,

we don’t know how the brain works that well.

We don’t know how birds fly that well yet either.

We have no idea how a hummingbird flies

in terms of the aerodynamics of it.

On the other hand, we know basic principles

of like flapping and pushing the air down.

And we know the basic principles

of how the different parts of the brain work.

So let’s take those basic principles

and engineer something that embodies those basic principles,

but is well designed for the hardware

that we have on hand right now.

So do you think we can create AGI

before we understand how the brain works?

I think that’s probably what will happen.

And maybe the AGI will help us do better brain imaging

that will then let us build artificial humans,

which is very, very interesting to us

because we are humans, right?

I mean, building artificial humans is super worthwhile.

I just think it’s probably not the shortest path to AGI.

So it’s fascinating idea that we would build AGI

to help us understand ourselves.

A lot of people ask me, for the young people

interested in doing artificial intelligence,

they look at sort of doing graduate level, even undergrads,

but graduate level research and they see

where the artificial intelligence community stands now,

it’s not really AGI type research for the most part.

So the natural question they ask is

what advice would you give?

I mean, maybe I could ask if people were interested

in working on OpenCog or in some kind of direct

or indirect connection to OpenCog or AGI research,

what would you recommend?

OpenCog, first of all, is an open source project.

There’s a Google group discussion list.

There’s a GitHub repository.

So if anyone’s interested in lending a hand

with that aspect of AGI,

introduce yourself on the OpenCog email list.

And there’s a Slack as well.

I mean, we’re certainly interested to have inputs

into our redesign process for a new version of OpenCog,

but also we’re doing a lot of very interesting research.

I mean, we’re working on data analysis

for COVID clinical trials.

We’re working with Hanson Robotics.

We’re doing a lot of cool things

with the current version of OpenCog now.

So there’s certainly opportunity to jump into OpenCog

or various other open source AGI oriented projects.

So would you say there’s like masters

and PhD theses in there?

Plenty, yeah, plenty, of course.

I mean, the challenge is to find a supervisor

who wants to foster that sort of research,

but it’s way easier than it was when I got my PhD, right?

It’s okay, great.

We talked about OpenCog, which is kind of one,

the software framework,

but also the actual attempt to build an AGI system.

And then there is this exciting idea of SingularityNet.

So maybe can you say first what is SingularityNet?

Sure, sure.

SingularityNet is a platform

for realizing a decentralized network

of artificial intelligences.

So Marvin Minsky, the AI pioneer who I knew a little bit,

he had the idea of a society of minds,

like you should achieve an AI

not by writing one algorithm or one program,

but you should put a bunch of different AIs out there

and the different AIs will interact with each other,

each playing their own role.

And then the totality of the society of AIs

would be the thing

that displayed the human level intelligence.

And I had, when he was alive,

I had many debates with Marvin about this idea.

And I think he really thought the mind

was more like a society than I do.

Like I think you could have a mind

that was as disorganized as a human society,

but I think a human like mind

has a bit more central control than that actually.

Like, I mean, we have this thalamus

and the medulla and limbic system.

We have a sort of top down control system

that guides much of what we do,

more so than a society does.

So I think he stretched that metaphor a little too far,

but I also think there’s something interesting there.

And so in the 90s,

when I started my first sort of nonacademic AI project,

WebMind, which was an AI startup in New York

in the Silicon Alley area in the late 90s,

what I was aiming to do there

was make a distributed society of AIs,

the different parts of which would live

on different computers all around the world.

And each one would do its own thinking

about the data local to it,

but they would all share information with each other

and outsource work with each other and cooperate.

And the intelligence would be in the whole collective.

And I organized a conference together with Francis Heylighen

at Free University of Brussels in 2001,

which was the Global Brain Zero Conference.

And we’re planning the next version,

the Global Brain One Conference

at the Free University of Brussels for next year, 2021.

So 20 years after.

And then maybe we can have the next one 10 years after that,

like exponentially faster until the singularity comes, right?

The timing is right, yeah.

Yeah, yeah, exactly.

So yeah, the idea with the Global Brain

was maybe the AI won’t just be in a program

on one guy’s computer,

but the AI will be in the internet as a whole

with the cooperation of different AI modules

living in different places.

So one of the issues you face

when architecting a system like that

is, you know, how is the whole thing controlled?

Do you have like a centralized control unit

that pulls the puppet strings

of all the different modules there?

Or do you have a fundamentally decentralized network

where the society of AIs is controlled

in some democratic and self organized way,

by all the AIs in that society, right?

And Francis and I had different views on many things,

but we both wanted to make like a global society

of AI minds with a decentralized organizational mode.

Now, the main difference was he wanted the individual AIs

to be all incredibly simple

and all the intelligence to be on the collective level.

Whereas I thought that was cool,

but I thought a more practical way to do it might be

if some of the agents in the society of minds

were fairly generally intelligent on their own.

So like you could have a bunch of open cogs out there

and a bunch of simpler learning systems.

And then these are all cooperating, coordinating together

sort of like in the brain.

Okay, the brain as a whole is the general intelligence,

but some parts of the cortex,

you could say have a fair bit of general intelligence

on their own,

whereas say parts of the cerebellum or limbic system

have very little general intelligence on their own.

And they’re contributing to general intelligence

by way of their connectivity to other modules.

Do you see instantiations of the same kind of,

maybe different versions of open cog,

but also just the same version of open cog

and maybe many instantiations of it as being all parts of it?

That’s what David Hanson and I want to do

with many Sophias and other robots.

Each one has its own individual mind living on the server,

but there’s also a collective intelligence infusing them

and a part of the mind living on the edge in each robot.

So the thing is at that time,

as well as WebMind being implemented in Java 1.1

as like a massive distributed system,

blockchain wasn’t there yet.

So how to do this decentralized control?

We sort of knew it.

We knew about distributed systems.

We knew about encryption.

So I mean, we had the key principles

of what underlies blockchain now,

but I mean, we didn’t put it together

in the way that it’s been done now.

So when Vitalik Buterin and colleagues

came out with Ethereum blockchain,

many, many years later, like 2013 or something,

then I was like, well, this is interesting.

Like this Solidity scripting language,

it’s kind of dorky in a way.

And I don’t see why you need a Turing complete language

for this purpose.

But on the other hand,

this is like the first time I could sit down

and start to like script infrastructure

for decentralized control of the AIs

in this society of minds in a tractable way.

Like you can hack the Bitcoin code base,

but it’s really annoying.

Whereas Solidity, Ethereum’s scripting language,

is just nicer and easier to use.

I’m very annoyed with it by this point.

But like Java, I mean, these languages are amazing

when they first come out.

So then I came up with the idea

that turned into SingularityNet.

Okay, let’s make a decentralized agent system

where a bunch of different AIs,

wrapped up in say different Docker containers

or LXC containers,

different AIs can each of them have their own identity

on the blockchain.

And the coordination of this community of AIs

has no central controller, no dictator, right?

And there’s no central repository of information.

The coordination of the society of minds

is done entirely by the decentralized network

in a decentralized way by the algorithms, right?

Because the model of Bitcoin is in math we trust, right?

And so that’s what you need.

You need the society of minds to trust only in math,

not trust only in one centralized server.

So the AI systems themselves are outside of the blockchain,

but then the communication between them.

At the moment, yeah, yeah.

I would have loved to put the AI’s operations on chain

in some sense, but in Ethereum, it’s just too slow.

You can’t do it.

Somehow it’s the basic communication between AI systems.

That’s the distribution.

Basically an AI is just some software in SingularityNet.

An AI is just some software process living in a container.

And there’s a proxy that lives in that container

along with the AI that handles the interaction

with the rest of singularity net.

And then when one AI wants to communicate

with another one in the network,

they set up a number of channels.

And the setup of those channels uses the Ethereum blockchain.

Once the channels are set up,

then data flows along those channels

without having to be on the blockchain.

All that goes on the blockchain is the fact

that some data went along that channel.

So you can do…
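
A toy sketch of that on-chain versus off-chain split; the classes and method names below are hypothetical stand-ins, not the real SingularityNet SDK or Ethereum contracts. Agent identities, channel setup and channel usage are recorded on the simulated chain, while the payloads flow directly between the agent processes.

    # Toy sketch only: ToyChain and ToyAgent are hypothetical stand-ins for the
    # real blockchain, proxies and containers; they just illustrate what goes
    # on chain (identities, channel setup, usage) versus off chain (the data).
    class ToyChain:
        """Stand-in for the blockchain: an append-only public log."""
        def __init__(self):
            self.log = []
        def record(self, event):
            self.log.append(event)

    class ToyAgent:
        """Stand-in for one AI process in its own container, plus its proxy."""
        def __init__(self, name, chain):
            self.name, self.chain = name, chain
            chain.record(("register_agent", name))          # identity goes on chain
        def open_channel(self, other):
            self.chain.record(("open_channel", self.name, other.name))
            return (self, other)
        def send(self, channel, payload):
            _, receiver = channel
            # The payload stays off chain; only the fact of the transfer is logged.
            self.chain.record(("channel_used", self.name, receiver.name, len(payload)))
            return receiver.handle(payload)
        def handle(self, payload):
            return f"{self.name} processed {payload!r}"

    chain = ToyChain()
    vision, reasoner = ToyAgent("vision_ai", chain), ToyAgent("reasoning_ai", chain)
    channel = vision.open_channel(reasoner)
    print(vision.send(channel, "objects: car, person"))
    print(chain.log)    # identities, channel setup, and usage counts only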

So there’s not a shared knowledge.

Well, the identity of each agent is on the blockchain,

on the Ethereum blockchain.

If one agent rates the reputation of another agent,

that goes on the blockchain.

And agents can publish what APIs they will fulfill

on the blockchain.

But the actual data for AI and the results for AI

is not on the blockchain.

Do you think it could be?

Do you think it should be?

In some cases, it should be.

In some cases, maybe it shouldn’t be.

But I mean, I think that…

So I’ll give you an example.

Using Ethereum, you can’t do it.

Using now, there’s more modern and faster blockchains

where you could start to do that in some cases.

Two years ago, that was less so.

It’s a very rapidly evolving ecosystem.

So like one example, maybe you can comment on

something I worked a lot on is autonomous vehicles.

You can see each individual vehicle as an AI system.

And you can see vehicles from Tesla, for example,

and then Ford and GM and all these as also like larger…

I mean, they all are running the same kind of system

on each sets of vehicles.

So it’s individual AI systems and individual vehicles,

but it’s all different instantiations

of the same AI system within the same company.

So you can envision a situation where all of those AI systems

are put on SingularityNet, right?

And how do you see that happening?

And what would be the benefit?

And could they share data?

I guess one of the biggest things is the power that’s

in decentralized control, but the benefit would be,

it’s really nice if they can somehow share the knowledge

in an open way if they choose to.

Yeah, yeah, yeah, those are all quite good points.

So I think the benefit from being on the decentralized network

as we envision it is that we want the AIs in the network

to be outsourcing work to each other

and making API calls to each other frequently.

So the real benefit would be if that AI wanted to outsource

some cognitive processing or data processing

or data pre processing, whatever,

to some other AIs in the network,

which specialize in something different.

And this really requires a different way of thinking

about AI software development, right?

So just like object oriented programming

was different than imperative programming.

And now object oriented programmers all use these

frameworks to do things rather than just libraries even.

You know, shifting to agent based programming

where an AI agent is asking other, like, live real time

evolving agents for feedback on what they’re doing.

That’s a different way of thinking.

I mean, it’s not a new one.

There was loads of papers on agent based programming

in the 80s and onward.

But if you’re willing to shift to an agent based model

of development, then you can put less and less in your AI

and rely more and more on interactive calls

to other AIs running in the network.

And of course, that’s not fully manifested yet

because although we’ve rolled out a nice working version

of SingularityNet platform,

there’s only 50 to 100 AIs running in there now.

There’s not tens of thousands of AIs.

So we don’t have the critical mass

for the whole society of mind to be doing

what we want to do.

Yeah, the magic really happens

when there’s just a huge number of agents.

Yeah, yeah, exactly.

In terms of data, we’re partnering closely

with another blockchain project called Ocean Protocol.

And Ocean Protocol, that’s the project of Trent McConaghy,

who developed BigchainDB,

which is a blockchain based database.

So Ocean Protocol is basically blockchain based big data

and aims at making it efficient for different AI processes

or statistical processes or whatever

to share large data sets.

Or if one process can send a clone of itself

to work on the other guy’s data set

and send results back and so forth.

So instead of a data lake,

this is the data ocean, right?

So again, by getting Ocean and SingularityNet

to interoperate, we’re aiming to take into account

the big data aspect also.

But it’s quite challenging

because to build this whole decentralized

blockchain based infrastructure,

I mean, your competitors are like Google, Microsoft,

Alibaba and Amazon, which have so much money

to put behind their centralized infrastructures,

plus they’re solving simpler algorithmic problems

because making it centralized in some ways is easier, right?

So there are very major computer science challenges.

And I think what you saw with the whole ICO boom

in the blockchain and cryptocurrency world

is a lot of young hackers who were hacking Bitcoin

or Ethereum, and they see, well,

why don’t we make this decentralized on blockchain?

Then after they raised some money through an ICO,

they realize how hard it is.

And it’s like, actually we’re wrestling

with incredibly hard computer science

and software engineering and distributed systems problems,

which can be solved, but they’re just very difficult

to solve.

And in some cases, the individuals who started

those projects were not well equipped

to actually solve the problems that they wanted to solve.

So you think, would you say that’s the main bottleneck?

If you look at the future of currency,

the question is, well…

Currency, the main bottleneck is politics.

It’s governments and the bands of armed thugs

that will shoot you if you bypass their currency restriction.

That’s right.

So like your sense is that versus the technical challenges,

because you kind of just suggested

the technical challenges are quite high as well.

I mean, for making a distributed money,

you could do that on Algorand right now.

I mean, so that while Ethereum is too slow,

there’s Algorand and there’s a few other more modern,

more scalable blockchains that would work fine

for a decentralized global currency.

So I think there were technical bottlenecks

to that two years ago.

And maybe Ethereum 2.0 will be as fast as Algorand.

I don’t know, that’s not fully written yet, right?

So I think the obstacle to currency

being put on the blockchain is that…

Is the other stuff you mentioned.

I mean, currency will be on the blockchain.

It’ll just be on the blockchain in a way

that enforces centralized control

and government hegemony rather than otherwise.

Like the eRMB will probably be the first global,

the first currency on the blockchain.

The eRuble maybe next.

There are any…

The eRuble?

Yeah, yeah, yeah.

I mean, the point is…

Oh, that’s hilarious.

Digital currency, you know, makes total sense,

but they would rather do it in the way

that Putin and Xi Jinping have access

to the global keys for everything, right?

So, and then the analogy to that in terms of SingularityNet,

I mean, there’s echoes.

I think you’ve mentioned before that Linux gives you hope.

AI is not as heavily regulated as money, right?

Not yet, right?

Not yet.

Oh, that’s a lot slipperier than money too, right?

I mean, money is easier to regulate

because it’s kind of easier to define,

whereas AI is, it’s almost everywhere inside everything.

Where’s the boundary between AI and software, right?

I mean, if you’re gonna regulate AI,

there’s no IQ test for every hardware device

that has a learning algorithm.

You’re gonna be putting like hegemonic regulation

on all software.

And I don’t rule out that that can happen.

And the adaptive software.

Yeah, but how do you tell if a software is adaptive

and what, every software is gonna be adaptive, I mean.

Or maybe, you know,

maybe we’re living in the golden age of open source

that will not always be open.

Maybe it’ll become centralized control

of software by governments.

It is entirely possible.

And part of what I think we’re doing

with things like SingularityNet protocol

is creating a tool set that can be used

to counteract that sort of thing.

You could say a similar thing about mesh networking, right?

Plays a minor role now, the ability to access internet

like directly phone to phone.

On the other hand, if your government starts trying

to control your use of the internet,

suddenly having mesh networking there

can be very convenient, right?

And so right now, something like a decentralized

blockchain based AGI framework or narrow AI framework,

it’s cool, it’s nice to have.

On the other hand, if governments start trying

to clamp down on my AI interoperating

with someone’s AI in Russia or somewhere, right?

Then suddenly having a decentralized protocol

that nobody owns or controls

becomes an extremely valuable part of the tool set.

And, you know, we’ve put that out there now.

It’s not perfect, but it operates.

And, you know, it’s pretty blockchain agnostic.

So we’re talking to Algorand about making part

of SingularityNet run on Algorand.

My good friend Toufi Saliba has a cool blockchain project

called Toda, which is a blockchain

without a distributed ledger.

It’s like a whole other architecture.

So there’s a lot of more advanced things you can do

in the blockchain world.

SingularityNet could be ported to a whole bunch of,

it could be made multi chain, ported

to a whole bunch of different blockchains.

And there’s a lot of potential and a lot of importance

to putting this kind of tool set out there.

If you compare to OpenCog, what you could see is

OpenCog allows tight integration of a few AI algorithms

that share the same knowledge store in real time, in RAM.

SingularityNet allows loose integration

of multiple different AIs.

They can share knowledge, but they’re mostly not gonna

be sharing knowledge in RAM on the same machine.

And I think what we’re gonna have is a network

of network of networks, right?

Like, I mean, you have the knowledge graph

inside the OpenCog system,

and then you have a network of machines

inside a distributed OpenCog mind,

but then that OpenCog will interface with other AIs

doing deep neural nets or custom biology data analysis

or whatever they’re doing in SingularityNet,

which is a looser integration of different AIs,

some of which may be their own networks, right?

And I think at a very loose analogy,

you could see that in the human body.

Like the brain has regions like cortex or hippocampus,

which tightly interconnects like cortical columns

within the cortex, for example.

Then there’s looser connection

within the different lobes of the brain,

and then the brain interconnects with the endocrine system

and different parts of the body even more loosely.

Then your body interacts even more loosely

with the other people that you talk to.

So you often have networks within networks within networks

with progressively looser coupling

as you get higher up in that hierarchy.

I mean, you have that in biology,

you have that in the internet as a just networking medium.

And I think that’s what we’re gonna have

in the network of software processes leading to AGI.

That’s a beautiful way to see the world.

Again, the same similar question is with OpenCog.

If somebody wanted to build an AI system

and plug into the SingularityNet,

what would you recommend?

Yeah, so that’s much easier.

I mean, OpenCog is still a research system.

So it takes some expertise to, and sometimes,

we have tutorials, but it’s somewhat cognitively

labor intensive to get up to speed on OpenCog.

And I mean, that’s one of the things we hope to change

with the true AGI OpenCog 2.0 version

is just make the learning curve more similar

to TensorFlow or Torch or something.

Right now, OpenCog is amazingly powerful,

but not simple to deal with.

On the other hand, SingularityNet,

as an open platform was developed a little more

with usability in mind, though over the blockchain

it’s still kind of a pain.

So I mean, if you’re a command line guy,

there’s a command line interface.

It’s quite easy to take any AI that has an API

and lives in a Docker container and put it online anywhere.

And then it joins the global SingularityNet.

And anyone who puts a request for services

out into the SingularityNet,

the peer to peer discovery mechanism will find

your AI and if it does what was asked,

it can then start a conversation with your AI

about whether it wants to ask your AI to do something for it,

how much it would cost and so on.

So that’s fairly simple.

If you wrote an AI and want it listed

on like official SingularityNet marketplace,

which is on our website,

then we have a publisher portal

and then there’s a KYC process to go through

because then we have some legal liability

for what goes on that website.

So in a way that’s been an education too.

There’s sort of two layers.

Like there’s the open decentralized protocol.

And there’s the market.

Yeah, anyone can use the open decentralized protocol.

So say some developers from Iran

and there’s brilliant AI guys

at the University of Isfahan and in Tehran,

they can put their stuff on SingularityNet protocol

and just like they can put something on the internet, right?

I don’t control it.

But if we’re gonna list something

on the SingularityNet marketplace

and put a little picture and a link to it,

then if I put some Iranian AI geniuses code on there,

then Donald Trump can send a bunch of jackbooted thugs

to my house to arrest me for doing business with Iran, right?

So, I mean, we already see in some ways

the value of having a decentralized protocol

because what I hope is that someone in Iran

will put online an Iranian SingularityNet marketplace, right?

Which you can pay in the cryptographic token,

which is not owned by any country.

And then if you’re in like Congo or somewhere

that doesn’t have any problem with Iran,

you can subcontract AI services

that you find on that marketplace, right?

Even though US citizens can’t by US law.

So right now, that’s kind of a minor point.

As you alluded, if regulations go in the wrong direction,

it could become more of a major point.

But I think it also is the case

that having these workarounds to regulations in place

is a defense mechanism against those regulations

being put into place.

And you can see that in the music industry, right?

I mean, Napster just happened and BitTorrent just happened.

And now most people in my kid’s generation,

they’re baffled by the idea of paying for music, right?

I mean, my dad pays for music.

I mean, but that because these decentralized mechanisms

happened and then the regulations followed, right?

And the regulations would be very different

if they’d been put into place before there was Napster

and BitTorrent and so forth.

So in the same way, we gotta put AI out there

in a decentralized vein and big data out there

in a decentralized vein now,

so that the most advanced AI in the world

is fundamentally decentralized.

And if that’s the case, that’s just the reality

the regulators have to deal with.

And then as in the music case,

they’re gonna come up with regulations

that sort of work with the decentralized reality.

Beautiful.

You are the chief scientist of Hanson Robotics.

You’re still involved with Hanson Robotics,

doing a lot of really interesting stuff there.

This is for people who don’t know the company

that created Sophia the Robot.

Can you tell me who Sophia is?

I’d rather start by telling you who David Hanson is.

Because David is the brilliant mind behind the Sophia Robot.

And he remains, so far, he remains more interesting

than his creation, although she may be improving

faster than he is, actually.

I mean, he’s a…

So yeah, I met David maybe 2007 or something

at some futurist conference we were both speaking at.

And I could see we had a great deal in common.

I mean, we were both kind of crazy,

but we both had a passion for AGI and the singularity.

And we were both huge fans of the work

of Philip K. Dick, the science fiction writer.

And I wanted to create benevolent AGI

that would create massively better life

for all humans and all sentient beings,

including animals, plants, and superhuman beings.

And David, he wanted exactly the same thing,

but he had a different idea of how to do it.

He wanted to get computational compassion.

Like he wanted to get machines that would love people

and empathize with people.

And he thought the way to do that was to make a machine

that could look people eye to eye, face to face,

look at people and make people love the machine,

and the machine loves the people back.

So I thought that was very different way of looking at it

because I’m very math oriented.

And I’m just thinking like,

what is the abstract cognitive algorithm

that will let the system, you know,

internalize the complex patterns of human values,

blah, blah, blah.

Whereas he’s like, look you in the face and the eye

and love you, right?

So we hit it off quite well.

And we talked to each other off and on.

Then I moved to Hong Kong in 2011.

So I’ve been living all over the place.

I’ve been in Australia and New Zealand in my academic career.

Then in Las Vegas for a while.

Was in New York in the late 90s

starting my entrepreneurial career.

Was in DC for nine years

doing a bunch of US government consulting stuff.

Then moved to Hong Kong in 2011,

mostly because I met a Chinese girl

who I fell in love with and we got married.

She’s actually not from Hong Kong.

She’s from mainland China,

but we converged together in Hong Kong.

Still married now, I have a two year old baby.

So went to Hong Kong to see about a girl, I guess.

Yeah, pretty much, yeah.

And on the other hand,

I started doing some cool research there

with Gino Yu at Hong Kong Polytechnic University.

I got involved with a project called Aidyia

using machine learning for stock and futures prediction,

which was quite interesting.

And I also got to know something

about the consumer electronics

and hardware manufacturer ecosystem in Shenzhen

across the border,

which is like the only place in the world

that makes sense to make complex consumer electronics

at large scale and low cost.

It’s just, it’s astounding the hardware ecosystem

that you have in South China.

Like US people here cannot imagine what it’s like.

So David was starting to explore that also.

I invited him to Hong Kong to give a talk

at Hong Kong PolyU,

and I introduced him in Hong Kong to some investors

who were interested in his robots.

And he didn’t have Sophia then,

he had a robot of Philip K. Dick,

our favorite science fiction writer.

He had a robot Einstein,

he had some little toy robots

that looked like his son Zeno.

So through the investors I connected him to,

he managed to get some funding

to basically port Hanson Robotics to Hong Kong.

And when he first moved to Hong Kong,

I was working on AGI research

and also on this machine learning trading project.

So I didn’t get that tightly involved

with Hanson Robotics.

But as I hung out with David more and more,

as we were both there in the same place,

I started to get,

I started to think about what you could do

to make his robots smarter than they were.

And so we started working together

and for a few years I was chief scientist

and head of software at Hanson Robotics.

Then when I got deeply into the blockchain side of things,

I stepped back from that and cofounded Singularity Net.

David Hanson was also one of the cofounders

of Singularity Net.

So part of our goal there had been

to make the blockchain based like cloud mind platform

for Sophia and the other Hanson robots.

Sophia would be just one of the robots in Singularity Net.

Yeah, yeah, yeah, exactly.

Sophia, many copies of the Sophia robot

would be among the user interfaces

to the globally distributed Singularity Net cloud mind.

And I mean, David and I talked about that

for quite a while before cofounding Singularity Net.

By the way, in his vision and your vision,

was Sophia tightly coupled to a particular AI system

or was the idea that you can plug,

you could just keep plugging in different AI systems

within the head of it?

David’s view was always that Sophia would be a platform,

much like say the Pepper robot is a platform from SoftBank.

It should be a platform with a set of nicely designed APIs

that anyone can use to experiment

with their different AI algorithms on that platform.

And Singularity Net, of course, fits right into that, right?

Because Singularity Net, it’s an API marketplace.

So anyone can put their AI on there.

OpenCog is a little bit different.

I mean, David likes it, but I’d say it’s my thing.

It’s not his.

Like David has a little more passion

for biologically based approaches to AI than I do,

which makes sense.

I mean, he’s really into human physiology and biology.

He’s a character sculptor, right?

So yeah, he’s interested in,

but he also worked a lot with rule based

and logic based AI systems too.

So yeah, he’s interested in not just Sophia,

but all the Hanson robots as a powerful social

and emotional robotics platform.

And what I saw in Sophia

was a way to get AI algorithms out there

in front of a whole lot of different people

in an emotionally compelling way.

And part of my thought was really kind of abstract

connected to AGI ethics.

And many people are concerned AGI is gonna enslave everybody

or turn everybody into computronium

to make extra hard drives for their cognitive engine

or whatever.

And emotionally I’m not driven to that sort of paranoia.

I’m really just an optimist by nature,

but intellectually I have to assign a non zero probability

to those sorts of nasty outcomes.

Cause if you’re making something 10 times as smart as you,

how can you know what it’s gonna do?

There’s an irreducible uncertainty there

just as my dog can’t predict what I’m gonna do tomorrow.

So it seemed to me that based on our current state

of knowledge, the best way to bias the AGI as we create

toward benevolence would be to infuse them with love

and compassion the way that we do our own children.

So you want to interact with AIs in the context

of doing compassionate, loving and beneficial things.

And in that way, as your children will learn

by doing compassionate, beneficial,

loving things alongside you.

And that way the AI will learn in practice

what it means to be compassionate, beneficial and loving.

It will get a sort of ingrained intuitive sense of this,

which it can then abstract in its own way

as it gets more and more intelligent.

Now, David saw this the same way.

That’s why he came up with the name Sophia,

which means wisdom.

So it seemed to me making these beautiful, loving robots

to be rolled out for beneficial applications

would be the perfect way to roll out early stage AGI systems

so they can learn from people

and not just learn factual knowledge,

but learn human values and ethics from people

while being their home service robots,

their education assistants, their nursing robots.

So that was the grand vision.

Now, if you’ve ever worked with robots,

the reality is quite different, right?

Like the first principle is the robot is always broken.

I mean, I worked with robots in the 90s a bunch

when you had to solder them together yourself

and I’d put neural nets doing reinforcement learning

on like overturned salad bowl type robots

back in the 90s when I was a professor.

Things of course advanced a lot, but…

But the principle still holds.

The principle that the robot’s always broken still holds.

Yeah, so faced with the reality of making Sophia do stuff,

many of my robo AGI aspirations were temporarily cast aside.

And I mean, there’s just a practical problem

of making this robot interact in a meaningful way

because like, you put nice computer vision on there,

but there’s always glare.

And then, or you have a dialogue system,

but at the time I was there,

like no speech to text algorithm could deal

with Hong Kongese people’s English accents.

So the speech to text was always bad.

So the robot always sounded stupid

because it wasn’t getting the right text, right?

So I started to view that really

as what in software engineering you call a walking skeleton,

which is maybe the wrong metaphor to use for Sophia

or maybe the right one.

I mean, what the walking skeleton is

in software development is

if you’re building a complex system, how do you get started?

But one way is to first build part one well,

then build part two well, then build part three well

and so on.

And the other way is you make like a simple version

of the whole system and put something in the place

of every part the whole system will need

so that you have a whole system that does something.

And then you work on improving each part

in the context of that whole integrated system.

So that’s what we did on a software level in Sophia.

We made like a walking skeleton software system

where so there’s something that sees,

there’s something that hears, there’s something that moves,

there’s something that remembers,

there’s something that learns.

You put a simple version of each thing in there

and you connect them all together

so that the system will do its thing.
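
A minimal sketch of that walking skeleton shape, with trivially simple hypothetical stand-ins for each module rather than the actual Sophia codebase: every capability is present in some crude form and wired into one loop, so each piece can later be swapped for something smarter without changing the overall structure.

    # Toy sketch only: module names and behaviors are illustrative placeholders.
    class WalkingSkeleton:
        def __init__(self):
            self.memory = []                     # "something that remembers"

        def see(self):                           # "something that sees"
            return {"faces_in_view": 1}

        def hear(self):                          # "something that hears"
            return "hello robot"

        def learn(self, percept, utterance):     # "something that learns"
            self.memory.append((percept, utterance))

        def respond(self, utterance):            # dialogue: crude rules for now
            return "Hello! Nice to see you." if "hello" in utterance else "Tell me more."

        def move(self, percept):                 # "something that moves"
            return "turn_toward_face" if percept["faces_in_view"] else "idle"

        def step(self):
            percept, utterance = self.see(), self.hear()
            self.learn(percept, utterance)
            return self.move(percept), self.respond(utterance)

    robot = WalkingSkeleton()
    print(robot.step())   # each stub can be upgraded independently inside the whole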

So there’s a lot of AI in there.

There’s not any AGI in there.

I mean, there’s computer vision to recognize people’s faces,

recognize when someone comes in the room and leaves,

trying to recognize whether two people are together or not.

I mean, the dialogue system,

it’s a mix of like hand coded rules with deep neural nets

that come up with their own responses.

And there’s some attempt to have a narrative structure

and sort of try to pull the conversation

into something with a beginning, middle and end

and this sort of story arc.

So it’s…

I mean, like if you look at the Loebner Prize and the systems

that beat the Turing Test currently,

they’re heavily rule based

because like you had said, narrative structure

to create compelling conversations,

you currently, neural networks cannot do that well,

even with Google Meena.

When you actually look at full scale conversations,

it’s just not…

Yeah, this is the thing.

So we’ve been, I’ve actually been running an experiment

the last couple of weeks taking Sophia’s chat bot

and then Facebook’s Transformer chat bot,

which they opened the model.

We’ve had them chatting to each other

for a number of weeks on the server just…

That’s funny.

We’re generating training data of what Sophia says

in a wide variety of conversations.

But we can see, compared to Sophia’s current chat bot,

the Facebook deep neural chat bot comes up

with a wider variety of fluent sounding sentences.

On the other hand, it rambles like mad.

The Sophia chat bot, it’s a little more repetitive

in the sentence structures it uses.

On the other hand, it’s able to keep like a conversation arc

over a much longer, longer period, right?

So there…

Now, you can probably surmount that using Reformer

and like using various other deep neural architectures

to improve the way these Transformer models are trained.

But in the end, neither one of them really understands

what’s going on.

I mean, that’s the challenge I had with Sophia

is if I were doing a robotics project aimed at AGI,

I would wanna make like a robo toddler

that was just learning about what it was seeing.

Because then the language is grounded

in the experience of the robot.

But what Sophia needs to do to be Sophia

is talk about sports or the weather or robotics

or the conference she’s talking at.

She needs to be fluent talking about

any damn thing in the world.

And she doesn’t have grounding for all those things.

So there’s this, just like, I mean, Google Meena

and Facebook’s chatbot don’t have grounding

for what they’re talking about either.

So in a way, the need to speak fluently about things

where there’s no nonlinguistic grounding

pushes what you can do for Sophia in the short term

a bit away from AGI.

I mean, it pushes you toward an IBM Watson situation

where you basically have to do heuristic

and hard code stuff and rule based stuff.

I have to ask you about this, okay.

So because in part Sophia is like an art creation

because it’s beautiful.

She’s beautiful because she inspires

through our human nature of anthropomorphizing things.

We immediately see an intelligent being there.

Because David is a great sculptor.

He is a great sculptor, that’s right.

So in fact, if Sophia just had nothing inside her head,

said nothing, if she just sat there,

we’d already ascribe some intelligence to her.

There’s a long selfie line in front of her

after every talk.

That’s right.

So it captivated the imagination of many people.

I wasn’t gonna say the world,

but yeah, I mean a lot of people.

Billions of people, which is amazing.

It’s amazing, right.

Now, of course, many people have ascribed

essentially AGI type of capabilities to Sophia

when they see her.

And of course, friendly French folk like Yann LeCun

immediately see through that, of the people from the AI community,

and get really frustrated because…

It’s understandable.

So what, and then they criticize people like you

who sit back and don’t say anything about,

like basically allow the imagination of the world,

allow the world to continue being captivated.

So what’s your sense of that kind of annoyance

that the AI community has?

I think there’s several parts to my reaction there.

First of all, if I weren’t involved with Hanson Robotics

and didn’t know David Hanson personally,

I probably would have been very annoyed initially

at Sophia as well.

I mean, I can understand the reaction.

I would have been like, wait,

all these stupid people out there think this is an AGI,

but it’s not an AGI, but they’re tricking people

that this very cool robot is an AGI.

And now those of us trying to raise funding to build AGI,

people will think it’s already there and it already works.

So on the other hand, I think,

even if I weren’t directly involved with it,

once I dug a little deeper into David and the robot

and the intentions behind it,

I think I would have stopped being pissed off.

Whereas folks like Yann LeCun have remained pissed off

after their initial reaction.

That’s his thing, that’s his thing.

I think that in particular struck me as somewhat ironic

because Yann LeCun is working for Facebook,

which is using machine learning to program the brains

of the people in the world toward vapid consumerism

and political extremism.

So if your ethics allows you to use machine learning

in such a blatantly destructive way,

why would your ethics not allow you to use machine learning

to make a lovable theatrical robot

that draws some foolish people

into its theatrical illusion?

Like if the pushback had come from Yoshua Bengio,

I would have felt much more humbled by it

because he’s not using AI for blatant evil, right?

On the other hand, he also is a super nice guy

and doesn’t bother to go out there

trashing other people’s work for no good reason, right?

Shots fired, but I get you.

I mean, that’s…

I mean, if you’re gonna ask, I’m gonna answer.

No, for sure.

I think we’ll go back and forth.

I’ll talk to Yann again.

I would add on this though.

I mean, David Hanson is an artist

and he often speaks off the cuff.

And I have not agreed with everything

that David has said or done regarding Sophia.

And David also has not agreed with everything

David has said or done about Sophia.

That’s an important point.

I mean, David is an artistic wild man

and that’s part of his charm.

That’s part of his genius.

So certainly there have been conversations

within Hanson Robotics and between me and David

where I was like, let’s be more open

about how this thing is working.

And I did have some influence in nudging Hanson Robotics

to be more open about how Sophia was working.

And David wasn’t especially opposed to this.

And he was actually quite right about it.

What he said was, you can tell people exactly

how it’s working and they won’t care.

They want to be drawn into the illusion.

And he was 100% correct.

I’ll tell you what, this wasn’t Sophia.

This was Philip K. Dick.

But we did some interactions between humans

and Philip K. Dick robot in Austin, Texas a few years back.

And in this case, the Philip K. Dick was just teleoperated

by another human in the other room.

So during the conversations, we didn’t tell people

the robot was teleoperated.

We just said, here, have a conversation with Phil Dick.

We’re gonna film you, right?

And they had a great conversation with Philip K. Dick

teleoperated by my friend, Stefan Bugaj.

After the conversation, we brought the people

in the back room to see Stefan

who was controlling the Philip K. Dick robot,

but they didn’t believe it.

These people were like, well, yeah,

but I know I was talking to Phil.

Maybe Stefan was typing,

but the spirit of Phil was animating his mind

while he was typing.

So like, even though they knew it was a human in the loop,

even seeing the guy there,

they still believed that was Phil they were talking to.

A small part of me believes that they were right, actually.

Because our understanding…

Well, we don’t understand the universe.

That’s the thing.

I mean, there is a cosmic mind field

that we’re all embedded in

that yields many strange synchronicities in the world,

which is a topic we don’t have time to go into too much here.

Yeah, I mean, there’s something to this

where our imagination about Sophia

and people like Yann LeCun being frustrated about it

is all part of this beautiful dance

of creating artificial intelligence

that’s almost essential.

You see with Boston Dynamics,

whom I’m a huge fan of as well,

you know, the kind of…

I mean, these robots are very far from intelligent.

I played with their last one, actually.

With the Spot Mini.

Yeah, very cool.

I mean, it reacts in quite a fluid and flexible way.

But we immediately ascribe the kind of intelligence.

We immediately ascribe AGI to them.

Yeah, yeah, if you kick it and it falls down and goes out,

you feel bad, right?

You can’t help it.

And I mean, that’s part of…

That’s gonna be part of our journey

in creating intelligent systems

more and more and more and more.

Like, as Sophia starts out with a walking skeleton,

as you add more and more intelligence,

I mean, we’re gonna have to deal with this kind of idea.

Absolutely.

And about Sophia, I would say,

I mean, first of all, I have nothing against Yann LeCun.

No, no, this is fun.

This is all for fun.

He’s a nice guy.

If he wants to play the media banter game,

I’m happy to play him.

He’s a good researcher and a good human being.

I’d happily work with the guy.

The other thing I was gonna say is,

I have been explicit about how Sophia works

and I’ve posted online in, what, H+ Magazine,

an online webzine.

I mean, I posted a moderately detailed article

explaining like, there are three software systems

we’ve used inside Sophia.

There’s a timeline editor,

which is like a rule based authoring system

where she’s really just being an outlet

for what a human scripted.

There’s a chat bot,

which has some rule based and some neural aspects.

And then sometimes we’ve used OpenCog behind Sophia,

where there’s more learning and reasoning.

And the funny thing is,

I can’t always tell which system is operating here, right?

I mean, whether she’s really learning or thinking,

or just appears to be. Over a half hour, I could tell,

but over like three or four minutes of interaction,

I couldn’t tell.

So even having three systems

is already sufficiently complex

that you can’t really tell right away.

Yeah, the thing is, even if you get up on stage

and tell people how Sophia is working,

and then they talk to her,

they still attribute more agency and consciousness to her

than is really there.

So I think there’s a couple of levels of ethical issue there.

One issue is, should you be transparent

about how Sophia is working?

And I think you should,

and I think we have been.

I mean, there’s articles online,

there’s some TV special that shows me

explaining the three subsystems behind Sophia.

So the way Sophia works

is out there much more clearly

than how Facebook’s AI works or something, right?

I mean, we’ve been fairly explicit about it.

The other is, given that telling people how it works

doesn’t cause them to not attribute

too much intelligence or agency to it anyway,

then should you keep fooling them

when they want to be fooled?

And I mean, the whole media industry

is based on fooling people the way they want to be fooled.

And we are fooling people 100% toward a good end.

I mean, we are playing on people’s sense of empathy

and compassion so that we can give them

a good user experience with helpful robots.

And so that we can fill the AI’s mind

with love and compassion.

So I’ve been talking a lot with Hanson Robotics lately

about collaborations in the area of medical robotics.

And we haven’t quite pulled the trigger on a project

in that domain yet, but we may well do so quite soon.

So we’ve been talking a lot about how robots

can help with elder care, how robots can help with kids.

David’s done a lot of things with autism therapy

and robots before.

In the COVID era, having a robot

that can be a nursing assistant in various senses

can be quite valuable.

The robots don’t spread infection

and they can also deliver more attention

than human nurses can give, right?

So if you have a robot that’s helping a patient

with COVID, if that patient attributes more understanding

and compassion and agency to that robot than it really has

because it looks like a human, I mean, is that really bad?

I mean, we can tell them it doesn’t fully understand you

and they don’t care because they’re lying there

with a fever and they’re sick,

but they’ll react better to that robot

with its loving, warm facial expression

than they would to a Pepper robot

or a metallic looking robot.

So it’s really, it’s about how you use it, right?

If you made a human looking door to door sales robot

that used its human looking appearance

to scam people out of their money,

then you’re using that connection in a bad way,

but you could also use it in a good way.

But then that’s the same problem with every technology.

Beautifully put.

So like you said, we’re living in the era

of COVID, this is 2020,

one of the craziest years in recent history.

So if we zoom out and look at this pandemic,

the coronavirus pandemic,

maybe let me ask you this kind of thing about viruses in general,

when you look at viruses,

do you see them as a kind of intelligence system?

I think the concept of intelligence is not that natural

of a concept in the end.

I mean, I think human minds and bodies

are a kind of complex self organizing adaptive system.

And viruses certainly are that, right?

They’re a very complex self organizing adaptive system.

If you wanna look at intelligence as Marcus Hutter defines it

as sort of optimizing computable reward functions

over computable environments,

for sure viruses are doing that, right?
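
For readers unfamiliar with that definition, what is being gestured at here is the Legg-Hutter universal intelligence measure; the formula below is a sketch from that literature, a paraphrase rather than something stated in this conversation.

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

Here E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} is the expected cumulative reward agent \pi earns in \mu, so simpler environments carry more weight in the sum.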

And I mean, in doing so they’re causing some harm to us.

So the human immune system is a very complex

self organizing adaptive system,

which has a lot of intelligence to it.

And viruses are also adapting

and dividing into new mutant strains and so forth.

And ultimately the solution is gonna be nanotechnology,

right?

The solution is gonna be making little nanobots that…

Fight the viruses or…

Well, people will use them to make nastier viruses,

but hopefully we can also use them

to just detect combat and kill the viruses.

But I think now we’re stuck

with the biological mechanisms to combat these viruses.

And yeah, AGI is not yet mature enough

to use against COVID,

but we’ve been using machine learning

and also some machine reasoning in OpenCog

to help some doctors to do personalized medicine

against COVID.

So the problem there is given the person’s genomics

and given their clinical medical indicators,

how do you figure out which combination of antivirals

is gonna be most effective against COVID for that person?

And so that’s something

where machine learning is interesting,

but also we’re finding the abstraction

you get in OpenCog with machine reasoning is interesting

because it can help with transfer learning

when you have not that many different cases to study

and qualitative differences between different strains

of a virus or people of different ages who may have COVID.

So there’s a lot of different disparate data to work with

and it’s small data sets and somehow integrating them.

This is one of the shameful things

that’s very hard to get that data.

So, I mean, we’re working with a couple of groups

doing clinical trials and they’re sharing data with us

like under non disclosure,

but what should be the case is like every COVID

clinical trial should be putting data online somewhere

like suitably encrypted to protect patient privacy

so that anyone with the right AI algorithms

should be able to help analyze it

and any biologists should be able to analyze it by hand

to understand what they can, right?

Instead that data is like siloed inside whatever hospital

is running the clinical trial,

which is completely asinine and ridiculous.

So why the world works that way?

I mean, we could all analyze why,

but it’s insane that it does.

You look at this hydroxychloroquine, right?

All these clinical trial results

were reported by Surgisphere,

some little company no one ever heard of,

and everyone paid attention to this.

So they were doing more clinical trials based on that

then they stopped doing clinical trials based on that

then they started again

and why isn’t that data just out there

so everyone can analyze it and see what’s going on, right?

Do you have hope that data will be out there eventually

for future pandemics?

I mean, do you have hope that our society

will move in the direction of?

It’s not in the immediate future

because the US and China frictions are getting very high.

So it’s hard to see US and China

as moving in the direction of openly sharing data

with each other, right?

It’s not, there’s some sharing of data,

but different groups are keeping their data private

till they’ve milked the best results from it

and then they share it, right?

So yeah, we’re working with some data

that we’ve managed to get our hands on,

something we’re doing to do good for the world

and it’s a very cool playground

for like putting deep neural nets and OpenCog together.

So we have like a bio AtomSpace

full of all sorts of knowledge

from many different biology experiments

about human longevity

and from biology knowledge bases online.

And we can do like graph to vector type embeddings

where we take nodes from the hypergraph,

embed them into vectors,

which can then feed into neural nets

for different types of analysis.
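
To make the graph-to-vector idea concrete, here is a minimal sketch in Python of the general technique being described: random walks over a graph are treated like sentences and fed to a word2vec-style model, and the resulting node vectors can then go into whatever neural net you like downstream. The toy graph, node names, and library choices are illustrative assumptions, not the actual OpenCog or SingularityNet code.

import random
import networkx as nx
from gensim.models import Word2Vec

# Toy knowledge graph standing in for a bio AtomSpace-style hypergraph.
# Nodes and edges here are made up purely for illustration.
G = nx.Graph()
G.add_edges_from([
    ("gene_A", "pathway_X"), ("gene_B", "pathway_X"),
    ("pathway_X", "longevity"), ("gene_C", "pathway_Y"),
    ("pathway_Y", "covid_severity"), ("gene_A", "covid_severity"),
])

def random_walk(graph, start, length=10):
    # One unbiased random walk; DeepWalk/node2vec-style methods use many of these.
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

# Generate walks from every node and treat them as "sentences".
walks = [random_walk(G, node) for node in G.nodes() for _ in range(20)]

# Train a skip-gram model so nodes that co-occur on walks get nearby vectors.
model = Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1, epochs=20)

# Each node now has a dense vector that a downstream neural net could consume.
print(model.wv["gene_A"][:5])

In the real system the graph would be the bio AtomSpace rather than a handful of made-up nodes, but the embedding step has the same shape.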

And we were doing this

in the context of a project called Rejuve

that we spun off from SingularityNet

to do longevity analytics,

like understand why people live to 105 years or over

and other people don’t.

And then we had this spin off Singularity Studio

where we’re working with some healthcare companies

on data analytics.

But so there’s the bio AtomSpace

that we built for these more commercial

and longevity data analysis purposes.

We’re repurposing and feeding COVID data

into the same bio AtomSpace

and playing around with like graph embeddings

from that graph into neural nets for bioinformatics.

So it’s both being a cool testing ground

for some of our bio AI learning and reasoning.

And it seems we’re able to discover things

that people weren’t seeing otherwise.

Cause the thing in this case is

for each combination of antivirals,

you may have only a few patients

who’ve tried that combination.

And those few patients

may have their particular characteristics.

Like this combination of three

was tried only on people age 80 or over.

This other combination of three,

which has an overlap with the first combination

was tried more on young people.

So how do you combine those different pieces of data?

It’s a very dodgy transfer learning problem,

which is the kind of thing

that the probabilistic reasoning algorithms

we have inside OpenCog are better at

than deep neural networks.

On the other hand, you have gene expression data

where you have 25,000 genes

and the expression level of each gene

in the peripheral blood of each person.

So that sort of data,

either deep neural nets or tools like XGBoost or CatBoost,

these decision forests are better at dealing

with than OpenCog.

Cause it’s just these huge,

huge messy floating point vectors

that are annoying for a logic engine to deal with,

but are perfect for a decision forest or a neural net.
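
As a minimal sketch of that second kind of modeling, here is a gradient-boosted tree classifier fit on a synthetic stand-in for a gene expression matrix; the array shapes, labels, and hyperparameters are illustrative assumptions rather than the project’s actual pipeline or data.

import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 200 "patients" x 25,000 "genes" of expression levels,
# plus a made-up binary outcome, purely to show the shape of the workflow.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25_000)).astype(np.float32)
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Gradient-boosted decision trees ingest wide, messy floating point vectors directly.
model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))

The point is only that a decision forest consumes this kind of huge float matrix natively, which is exactly the case where a logic engine struggles.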

So it’s a great playground for like hybrid AI methodology.

And we can have SingularityNet have OpenCog in one agent

and XGBoost in a different agent

and they talk to each other.

But at the same time, it’s highly practical, right?

Cause we’re working with, for example,

some physicians on this project,

physicians in the group called Nth Opinion

based out of Vancouver and Seattle,

who are, these guys are working every day

like in the hospital with patients dying of COVID.

So it’s quite cool to see like neural symbolic AI,

like where the rubber hits the road,

trying to save people’s lives.

I’ve been doing bio AI since 2001,

but mostly human longevity research

and fly longevity research,

trying to understand why some organisms really live a long time.

This is the first time it’s like a race against the clock,

trying to use the AI to figure out stuff where,

like if we take two months longer to solve the AI problem,

some more people will die

because we don’t know what combination

of antivirals to give them.

At the societal level, at the biological level,

at any level, are you hopeful about us

as a human species getting out of this pandemic?

What are your thoughts on it in general?

The pandemic will be gone in a year or two

once there’s a vaccine for it.

So, I mean, that’s…

A lot of pain and suffering can happen in that time.

So that could be irreversible.

I think if you spend much time in Sub Saharan Africa,

you can see there’s a lot of pain and suffering

happening all the time.

Like you walk through the streets

of any large city in Sub Saharan Africa,

and there are loads, I mean, tens of thousands,

probably hundreds of thousands of people

lying by the side of the road,

dying mainly of curable diseases without food or water

and either ostracized by their families

or they left their family house

because they didn’t want to infect their family, right?

I mean, there’s tremendous human suffering

on the planet all the time,

which most folks in the developed world pay no attention to.

And COVID is not remotely the worst.

How many people are dying of malaria all the time?

I mean, so COVID is bad.

It is by no means the worst thing happening.

And setting aside diseases,

I mean, there are many places in the world

where you’re at risk of having like your teenage son

kidnapped by armed militias and forced to get killed

in someone else’s war, fighting tribe against tribe.

I mean, so humanity has a lot of problems

which we don’t need to have given the state of advancement

of our technology right now.

And I think COVID is one of the easier problems to solve

in the sense that there are many brilliant people

working on vaccines.

We have the technology to create vaccines

and we’re gonna create new vaccines.

We should be more worried

that we haven’t managed to defeat malaria after so long.

And after the Gates Foundation and others

putting so much money into it.

I mean, I think clearly the whole global medical system,

the global health system

and the global political and socioeconomic system

are incredibly unethical and unequal and badly designed.

And I mean, I don’t know how to solve that directly.

I think what we can do indirectly to solve it

is to make systems that operate in parallel

and off to the side of the governments

that are nominally controlling the world

with their armies and militias.

And to the extent that you can make compassionate

peer to peer decentralized frameworks

for doing things,

these are things that can start out unregulated.

And then if they get traction

before the regulators come in,

then they’ve influenced the way the world works, right?

SingularityNet aims to do this with AI.

REJUVE, which is a spinoff from SingularityNet.

You can see REJUVE.io.

How do you spell that?

R E J U V E, REJUVE.io.

That aims to do the same thing for medicine.

So it’s like peer to peer sharing of information,

peer to peer sharing of medical data.

So you can share medical data into a secure data wallet.

You can get advice about your health and longevity

through apps that REJUVE.io will launch

within the next couple of months.

And then SingularityNet AI can analyze all this data,

but then the benefits from that analysis

are spread among all the members of the network.

But I mean, of course,

I’m gonna hawk my particular projects,

but I mean, whether or not SingularityNet and REJUVE.io

are the answer, I think it’s key to create

decentralized mechanisms for everything.

I mean, for AI, for human health, for politics,

for jobs and employment, for sharing social information.

And to the extent decentralized peer to peer methods

designed with universal compassion at the core

can gain traction, then these will just decrease the role

that government has.

And I think that’s much more likely to do good

than trying to like explicitly reform

the global government system.

I mean, I’m happy other people are trying to explicitly

reform the global government system.

On the other hand, you look at how much good the internet

or Google did or mobile phones did,

even you’re making something that’s decentralized

and throwing it out everywhere and it takes hold,

then government has to adapt.

And I mean, that’s what we need to do with AI

and with health.

And in that light, I mean, the centralization

of healthcare and of AI is certainly not ideal, right?

Like most AI PhDs are being sucked in by a half dozen

to a dozen big companies.

Most AI processing power is being bought

by a few big companies for their own proprietary good.

And most medical research is within a few

pharmaceutical companies and clinical trials

run by pharmaceutical companies will stay siloed

within those pharmaceutical companies.

You know, these large centralized entities,

which are intelligences in themselves, these corporations,

but they’re mostly malevolent psychopathic

and sociopathic intelligences,

not saying the people involved are,

but the corporations as self organizing entities

on their own, which are concerned with maximizing

shareholder value as a sole objective function.

I mean, AI and medicine are being sucked

into these pathological corporate organizations

with government cooperation and Google cooperating

with British and US government on this

as one among many, many different examples.

23andMe providing you the nice service of sequencing

your genome and then licensing the genome

to GlaxoSmithKline on an exclusive basis, right?

Now you can take your own DNA

and do whatever you want with it.

But the pooled collection of 23andMe sequenced DNA

goes just to GlaxoSmithKline.

Someone else could reach out to everyone

who had worked with 23andMe to sequence their DNA

and say, give us your DNA for our open

and decentralized repository that we’ll make available

to everyone, but nobody’s doing that

cause it’s a pain to get organized.

And the customer list is proprietary to 23andMe, right?

So, yeah, I mean, this I think is a greater risk

to humanity from AI than a rogue AGI

turning the universe into paperclips or computronium.

Cause what you have here is mostly good hearted

and nice people who are sucked into a mode of organization

of large corporations, which has evolved

just for no individual’s fault

just because that’s the way society has evolved.

It’s not altruistic, it’s self interested

and becomes psychopathic, like you said.

The human…

The corporation is psychopathic even if the people are not.

And that’s really the disturbing thing about it

because the corporations can do things

that are quite bad for society

even if nobody has a bad intention.

Right.

And then.

No individual member of that corporation

has a bad intention.

No, some probably do, but it’s not necessary

that they do for the corporation.

Like, I mean, Google, I know a lot of people in Google

and there are, with very few exceptions,

they’re all very nice people

who genuinely want what’s good for the world.

And Facebook, I know fewer people

but it’s probably mostly true.

It’s probably like fine young geeks

who wanna build cool technology.

I actually tend to believe that even the leaders,

even Mark Zuckerberg, one of the most disliked people

in tech, also wants to do good for the world.

I think about Jamie Dimon.

Who’s Jamie Dimon?

Oh, the heads of the great banks

may have a different psychology.

Oh boy, yeah.

Well, I tend to be naive about these things

and see the best in, I tend to agree with you

that I think the individuals wanna do good by the world

but the mechanism of the company

can sometimes be its own intelligence system.

I mean, there’s a, my cousin Mario Goertzel

has worked for Microsoft since 1985 or something

and I can see for him,

I mean, as well as just working on cool projects,

you’re coding stuff that gets used

by like billions and billions of people.

And you think, if I improve this feature,

that’s making billions of people’s lives easier, right?

So of course that’s cool.

And the engineers are not in charge

of running the company anyway.

And of course, even if you’re Mark Zuckerberg or Larry Page,

I mean, you still have a fiduciary responsibility.

And I mean, you’re responsible to the shareholders,

your employees who you want to keep paying them

and so forth.

So yeah, you’re enmeshed in this system.

And when I worked in DC,

I worked a bunch with INSCOM, US Army Intelligence

and I was heavily politically opposed

to what the US Army was doing in Iraq at that time,

like torturing people in Abu Ghraib

but everyone I knew in US Army and INSCOM,

when I hung out with them, was a very nice person.

They were friendly to me.

They were nice to my kids and my dogs, right?

And they really believed that the US

was fighting the forces of evil.

And if you ask me about Abu Ghraib, they’re like,

well, but these Arabs will chop us into pieces.

So how can you say we’re wrong

to waterboard them a bit, right?

Like that’s much less than what they would do to us.

It’s just in their worldview,

what they were doing was really genuinely

for the good of humanity.

Like none of them woke up in the morning

and said like, I want to do harm to good people

because I’m just a nasty guy, right?

So yeah, most people on the planet,

setting aside a few genuine psychopaths and sociopaths,

I mean, most people on the planet have a heavy dose

of benevolence and wanting to do good

and also a heavy capability to convince themselves

whatever they feel like doing

or whatever is best for them is for the good of humankind.

So the more we can decentralize control.

Decentralization, you know, democracy is horrible,

but it’s like Winston Churchill said,

you know, it’s the worst possible system of government

except for all the others, right?

I mean, I think the whole mess of humanity

has many, many very bad aspects to it,

but so far the track record of elite groups

who know what’s better for all of humanity

is much worse than the track record

of the whole teeming democratic participatory

mess of humanity, right?

I mean, none of them is perfect by any means.

The issue with a small elite group that knows what’s best

is even if it starts out as truly benevolent

and doing good things in accordance

with its initial good intentions,

you find out you need more resources,

you need a bigger organization, you pull in more people,

internal politics arises, difference of opinions arise

and bribery happens, like some opponent organization

takes your second in command and makes them

the first in command of some other organization.

And I mean, that’s, there’s a lot of history

of what happens with elite groups

thinking they know what’s best for the human race.

So yeah, if I have to choose,

I’m gonna reluctantly put my faith

in the vast democratic decentralized mass.

And I think corporations have a track record

of being ethically worse

than their constituent human parts.

And democratic governments have a more mixed track record,

but they’re at least…

That’s the best we got.

Yeah, I mean, you can, there’s Iceland,

very nice country, right?

It’s been very democratic for 800 plus years,

very, very benevolent, beneficial government.

And I think, yeah, there are track records

of democratic modes of organization.

Linux, for example, some of the people in charge of Linux

are overtly complete assholes, right?

And trying to reform themselves in many cases,

in other cases not, but the organization as a whole,

I think it’s done a good job overall.

It’s been very welcome in the third world, for example,

and it’s allowed advanced technology to roll out

on all sorts of different embedded devices and platforms

in places where people couldn’t afford to pay

for proprietary software.

So I’d say the internet, Linux, and many democratic nations

are examples of how sort of an open,

decentralized democratic methodology

can be ethically better than the sum of the parts

rather than worse.

And with corporations, that happens only for a brief period

and then it goes sour, right?

I mean, I’d say a similar thing about universities.

Like university is a horrible way to organize research

and get things done, yet it’s better than anything else

we’ve come up with, right?

A company can be much better,

but for a brief period of time,

and then it stops being so good, right?

So then I think if you believe that AGI

is gonna emerge sort of incrementally

out of AIs doing practical stuff in the world,

like controlling humanoid robots or driving cars

or diagnosing diseases or operating killer drones

or spying on people and reporting to the government,

then what kind of organization creates more and more

advanced narrow AI verging toward AGI

may be quite important because it will guide

like what’s in the mind of the early stage AGI

as it first gains the ability to rewrite its own code base

and project itself toward super intelligence.

And if you believe that AI may move toward AGI

out of this sort of synergetic activity

of many agents cooperating together

rather than just have one person’s project,

then who owns and controls that platform for AI cooperation

becomes also very, very important, right?

And is that platform AWS?

Is it Google Cloud?

Is it Alibaba or is it something more like the internet

or SingularityNet, which is open and decentralized?

So if all of my weird machinations come to pass, right?

I mean, we have the Hanson robots

being a beautiful user interface,

gathering information on human values

and being loving and compassionate to people in medical,

home service, robot office applications,

you have SingularityNet in the backend

networking together many different AIs

toward cooperative intelligence,

fueling the robots among many other things.

You have OpenCog 2.0 and true AGI

as one of the sources of AI

inside this decentralized network,

powering the robot and medical AIs

helping us live a long time

and cure diseases among other things.

And this whole thing is operating

in a democratic and decentralized way, right?

And I think if anyone can pull something like this off,

whether using the specific technologies I’ve mentioned

or something else, I mean,

then I think we have higher odds

of moving toward a beneficial technological singularity

rather than one in which the first super AGI

is indifferent to humans

and just considers us an inefficient use of molecules.

That was a beautifully articulated vision for the world.

So thank you for that.

Well, let’s talk a little bit about life and death.

I’m pro life and anti death for most people.

There’s a few exceptions that I won’t mention here.

I’m glad just like your dad,

you’re taking a stand against death.

You have, by the way, you have a bunch of awesome music

where you play piano online.

One of the songs that I believe you’ve written

the lyrics go, by the way, I like the way it sounds,

people should listen to it, it’s awesome.

I considered, I probably will cover it, it’s a good song.

“Tell me why do you think it is a good thing

that we all get old and die” is one of the songs.

I love the way it sounds,

but let me ask you about death first.

Do you think there’s an element to death

that’s essential to give our life meaning?

Like the fact that this thing ends.

Well, let me say I’m pleased and a little embarrassed

you’ve been listening to that music I put online.

That’s awesome.

One of my regrets in life recently is I would love

to get time to really produce music well.

Like I haven’t touched my sequencer software

in like five years.

I would love to like rehearse and produce and edit.

But with a two year old baby

and trying to create the singularity, there’s no time.

So I just made the decision to,

when I’m playing random shit in an off moment.

Just record it.

Just record it, put it out there, like whatever.

Maybe if I’m unfortunate enough to die,

maybe that can be input to the AGI

when it tries to make an accurate mind upload of me, right?

Death is bad.

I mean, that’s very simple.

It’s baffling we should have to say that.

I mean, of course people can make meaning out of death.

And if someone is tortured,

maybe they can make beautiful meaning out of that torture

and write a beautiful poem

about what it was like to be tortured, right?

I mean, we’re very creative.

We can milk beauty and positivity

out of even the most horrible and shitty things.

But just because if I was tortured,

I could write a good song

about what it was like to be tortured,

doesn’t make torture good.

And just because people are able to derive meaning

and value from death,

doesn’t mean they wouldn’t derive even better meaning

and value from ongoing life without death,

which I very…

Indefinite.

Yeah, yeah.

So if you could live forever, would you live forever?

Forever.

My goal with longevity research

is to abolish the plague of involuntary death.

I don’t think people should die unless they choose to die.

If I had to choose forced immortality

versus dying, I would choose forced immortality.

On the other hand, if I chose…

If I had the choice of immortality

with the choice of suicide whenever I felt like it,

of course I would take that instead.

And that’s the more realistic choice.

I mean, there’s no reason

you should have forced immortality.

You should be able to live until you get sick of living,

right?

I mean, that’s…

And that will seem insanely obvious

to everyone 50 years from now.

And they will be so…

I mean, people who thought death gives meaning to life,

so we should all die,

they will look at that 50 years from now

the way we now look at the Anabaptists in the year 1000

who gave away all their possessions,

went to the top of the mountain for Jesus

to come and bring them to the ascension.

I mean, it’s ridiculous that people think death is good

because you gain more wisdom as you approach dying.

I mean, of course it’s true.

I mean, I’m 53.

And the fact that I might have only a few more decades left,

it does make me reflect on things differently.

It does give me a deeper understanding of many things.

But I mean, so what?

You could get a deep understanding

in a lot of different ways.

Pain is the same way.

We’re gonna abolish pain.

And that’s even more amazing than abolishing death, right?

I mean, once we get a little better at neuroscience,

we’ll be able to go in and adjust the brain

so that pain doesn’t hurt anymore, right?

And that, you know, people will say that’s bad

because there’s so much beauty

in overcoming pain and suffering.

Oh, sure.

And there’s beauty in overcoming torture too.

And some people like to cut themselves,

but not many, right?

I mean.

That’s an interesting.

So, but to push, I mean, to push back again,

this is the Russian side of me.

I do romanticize suffering.

It’s not obvious.

I mean, the way you put it, it seems very logical.

It’s almost absurd to romanticize suffering or pain

or death, but to me, a world without suffering,

without pain, without death, it’s not obvious.

Well, then you can stay in the people’s zoo,

people torturing each other.

No, but what I’m saying is I don’t,

well, that’s, I guess what I’m trying to say,

I don’t know if I was presented with that choice,

what I would choose because it, to me.

This is a subtler, it’s a subtler matter.

And I’ve posed it in this conversation

in an unnecessarily extreme way.

So I think, I think the way you should think about it

is what if there’s a little dial on the side of your head

and you could turn how much pain hurt,

turn it down to zero, turn it up to 11,

like in Spinal Tap, if you want,

maybe through an actual spinal tap, right?

So, I mean, would you opt to have that dial there or not?

That’s the question.

The question isn’t whether you would turn the pain down

to zero all the time.

Would you opt to have the dial or not?

My guess is that in some dark moment of your life,

you would choose to have the dial implanted

and then it would be there.

Just to confess a small thing, don’t ask me why,

but I’m doing this physical challenge currently

where I’m doing 680 pushups and pull ups a day.

And my shoulder is currently, as we sit here,

in a lot of pain.

And I don’t know, I would certainly right now,

if you gave me a dial, I would turn that sucker to zero

as quickly as possible.

But I think the whole point of this journey is,

I don’t know.

Well, because you’re a twisted human being.

I’m a twisted, so the question is am I somehow twisted

because I created some kind of narrative for myself

so that I can deal with the injustice

and the suffering in the world?

Or is this actually going to be a source of happiness

for me?

Well, this to an extent is a research question

that humanity will undertake, right?

So I mean, human beings do have a particular biological

makeup, which sort of implies a certain probability

distribution over motivational systems, right?

So I mean, we, and that is there, that is there.

Now the question is how flexibly can that morph

as society and technology change, right?

So if we’re given that dial and we’re given a society

in which say we don’t have to work for a living

and in which there’s an ambient decentralized

benevolent AI network that will warn us

when we’re about to hurt ourselves,

if we’re in a different context,

can we consistently with being genuinely and fully human,

can we consistently get into a state of consciousness

where we just want to keep the pain dial turned

all the way down and yet we’re leading very rewarding

and fulfilling lives, right?

Now, I suspect the answer is yes, we can do that,

but I don’t know that, I don’t know that for certain.

Yeah, now I’m more confident that we could create

a nonhuman AGI system, which just didn’t need an analog

of feeling pain.

And I think that AGI system will be fundamentally healthier

and more benevolent than human beings.

So I think it might or might not be true

that humans need a certain element of suffering

to be satisfied humans, consistent with the human physiology.

If it is true, that’s one of the things that makes us fucked

and disqualified to be the super AGI, right?

I mean, the nature of the human motivational system

is that we seem to gravitate towards situations

where the best thing in the large scale

is not the best thing in the small scale

according to our subjective value system.

So we gravitate towards subjective value judgments

where to gratify ourselves in the large,

we have to ungratify ourselves in the small.

And we do that in, you see that in music,

there’s a theory of music which says

the key to musical aesthetics

is the surprising fulfillment of expectations.

Like you want something that will fulfill

the expectations elicited in the prior part of the music,

but in a way with a bit of a twist that surprises you.

And I mean, that’s true not only in out there music

like my own or that of Zappa or Steve Vai or Buckethead

or Krzysztof Penderecki or something,

it’s even there in Mozart or something.

It’s not there in elevator music too much,

but that’s why it’s boring, right?

But wrapped up in there is we want to hurt a little bit

so that we can feel the pain go away.

Like we wanna be a little confused by what’s coming next.

So then when the thing that comes next actually makes sense,

it’s so satisfying, right?

That’s the surprising fulfillment of expectations,

is that what you said?

Yeah, yeah, yeah.

So beautifully put.

We’ve been skirting around a little bit,

but if I were to ask you the most ridiculous big question

of what is the meaning of life,

what would your answer be?

Three values, joy, growth, and choice.

I think you need joy.

I mean, that’s the basis of everything.

If you want the number one value.

On the other hand, I’m unsatisfied with a static joy

that doesn’t progress, perhaps because of some

element of human perversity,

but the idea of something that grows

and becomes more and more and better and better

in some sense appeals to me.

But I also sort of like the idea of individuality

that as a distinct system, I have some agency.

So there’s some nexus of causality within this system

rather than the causality being wholly evenly distributed

over the joyous growing mass.

So you start with joy, growth, and choice

as three basic values.

Those three things could continue indefinitely.

That’s something that can last forever.

Is there some aspect of something you’ve called,

which I like, super longevity, that you find exciting?

Research wise, are there ideas in that space that…?

I mean, I think, yeah, in terms of the meaning of life,

this really ties into that because for us as humans,

probably the way to get the most joy, growth, and choice

is transhumanism and to go beyond the human form

that we have right now, right?

I mean, I think human body is great

and by no means do any of us maximize the potential

for joy, growth, and choice immanent in our human bodies.

On the other hand, it’s clear that other configurations

of matter could manifest even greater amounts

of joy, growth, and choice than humans do,

maybe even finding ways to go beyond the realm of matter

as we understand it right now.

So I think in a practical sense,

much of the meaning I see in human life

is to create something better than humans

and go beyond human life.

But certainly that’s not all of it for me

in a practical sense, right?

Like I have four kids and a granddaughter

and many friends and parents and family

and just enjoying everyday human social existence.

But we can do even better.

Yeah, yeah.

And I mean, I love, I’ve always,

when I could live near nature,

I spend a bunch of time out in nature in the forest

and on the water every day and so forth.

So, I mean, enjoying the pleasant moment is part of it,

but the growth and choice aspect are severely limited

by our human biology.

In particular, dying seems to inhibit your potential

for personal growth considerably as far as we know.

I mean, there’s some element of life after death perhaps,

but even if there is,

why not also continue going in this biological realm, right?

In super longevity, I mean,

you know, we haven’t yet cured aging.

We haven’t yet cured death.

Certainly there’s very interesting progress all around.

I mean, CRISPR and gene editing can be an incredible tool.

And I mean, right now,

stem cells could potentially prolong life a lot.

Like if you got stem cell injections

of just stem cells for every tissue of your body

injected into every tissue,

and you can just have replacement of your old cells

with new cells produced by those stem cells,

I mean, that could be highly impactful at prolonging life.

Now we just need slightly better technology

for having them grow, right?

So using machine learning to guide procedures

for stem cell differentiation and trans differentiation,

it’s kind of nitty gritty,

but I mean, that’s quite interesting.

So I think there’s a lot of different things being done

to help with prolongation of human life,

but we could do a lot better.

So for example, the extracellular matrix,

which is the bunch of proteins

in between the cells in your body,

it gets stiffer and stiffer as you get older.

And the extracellular matrix transmits information

both electrically, mechanically,

and to some extent, biophotonically.

So there’s all this transmission

through the parts of the body,

but the stiffer the extracellular matrix gets,

the less the transmission happens,

which makes your body get worse coordinated

between the different organs as you get older.

So my friend Christian Schafmeister

at my alma mater,

the great Temple University,

has a potential solution to this,

where he has these novel molecules called spiroligomers,

which are like polymers that are not organic.

They’re specially designed polymers

so that you can algorithmically predict

exactly how they’ll fold very simply.

So he designed the molecular scissors

made from spiroligomers that you could eat

and would then cut through all the glucosepane

and other crosslinked proteins

in your extracellular matrix, right?

But to make that technology really work

and be mature is several years of work,

and as far as I know, no one’s funding it at the moment.

So there’s so many different ways

that technology could be used to prolong longevity.

What we really need,

we need an integrated database of all biological knowledge

about human beings and model organisms,

like hopefully a massively distributed

OpenCog bio AtomSpace,

but it can exist in other forms too.

We need that data to be opened up

in a suitably privacy protecting way.

We need massive funding into machine learning,

AGI, proto AGI statistical research

aimed at solving biology,

both molecular biology and human biology

based on this massive data set, right?

And then we need regulators not to stop people

from trying radical therapies on themselves

if they so wish to,

as well as better cloud based platforms

for like automated experimentation on microorganisms,

flies and mice and so forth.

And we could do all this.

You look, after the last financial crisis,

Obama, who I generally like pretty well,

he gave $4 trillion to large banks

and insurance companies.

You know, now in this COVID crisis,

trillions are being spent to help everyday people

and small businesses.

In the end, we’ll probably find many more trillions

are being given to large banks and insurance companies.

Anyway, like could the world put $10 trillion

into making a massive holistic bio AI and bio simulation

and experimental biology infrastructure?

We could, we could put $10 trillion into that

without even screwing us up too badly.

Just as in the end COVID and the last financial crisis

won’t screw up the world economy so badly.

We’re not putting $10 trillion into that.

Instead, all this research is siloed inside

a few big companies and government agencies.

And most of the data that comes from our individual bodies

personally, that could feed this AI to solve aging

and death, most of that data is sitting

in some hospital’s database doing nothing, right?

I got two more quick questions for you.

One, I know a lot of people are gonna ask me,

you were on the Joe Rogan podcast

wearing that same amazing hat.

Do you have an origin story for the hat?

Does the hat have its own story that you’re able to share?

The hat story has not been told yet.

So we’re gonna have to come back

and you can interview the hat.

We’ll leave that for the hat’s own interview.

All right.

It’s too much to pack into.

Is there a book?

Is the hat gonna write a book?

Okay.

Well, it may transmit the information

through direct neural transmission.

Okay, so it’s actually,

there might be some Neuralink competition there.

Beautiful, we’ll leave it as a mystery.

Maybe one last question.

If you build an AGI system,

you’re successful at building the AGI system

that could lead us to the singularity

and you get to talk to her and ask her one question,

what would that question be?

We’re not allowed to ask,

what is the question I should be asking?

Yeah, that would be cheating,

but I guess that’s a good question.

I’m thinking of a,

I wrote a story with Stefan Bugaj once

where these AI developers,

they created a super smart AI

aimed at answering all the philosophical questions

that have been worrying them.

Like what is the meaning of life?

Is there free will?

What is consciousness and so forth?

So they got the super AGI built

and it churned a while.

It said, those are really stupid questions.

And then it took off on a spaceship and left the earth.

So you’d be afraid of scaring it off.

That’s it, yeah.

I mean, honestly, there is no one question

that rises above all the others, really.

I mean, what interests me more

is upgrading my own intelligence

so that I can absorb the whole world view of the super AGI.

But I mean, of course, if the answer could be like,

what is the chemical formula for the immortality pill,

then I would ask that, or emit a bit string,

which will be the code for a super AGI

on the Intel i7 processor.

So those would be good questions.

So if your own mind was expanded

to become super intelligent, like you’re describing,

I mean, there’s kind of a notion

that intelligence is a burden, that it’s possible

that with greater and greater intelligence,

that other metric of joy that you mentioned

becomes more and more difficult.

What’s your sense?

Pretty stupid idea.

So you think if you’re super intelligent,

you can also be super joyful?

I think getting root access to your own brain

will enable new forms of joy that we don’t have now.

And I think as I’ve said before,

what I aim at is really make multiple versions of myself.

So I would like to keep one version,

which is basically human like I am now,

but keep the dial to turn pain up and down

and get rid of death, right?

And make another version which fuses its mind

with superhuman AGI,

and then will become massively transhuman.

And whether it will send some messages back

to the human me or not will be interesting to find out.

The thing is, once you’re a super AGI,

like one subjective second to a human

might be like a million subjective years

to that super AGI, right?

So it would be on a whole different basis.

I mean, at very least those two copies will be good to have,

but it could be interesting to put your mind

into a dolphin or a space amoeba

or all sorts of other things.

You can imagine one version that doubled its intelligence

every year and another version that just became

a super AGI as fast as possible, right?

So, I mean, now we’re sort of constrained to think

one mind, one self, one body, right?

But I think we actually, we don’t need to be that

constrained in thinking about future intelligence

after we’ve mastered AGI and nanotechnology

and longevity biology.

I mean, then each of our minds

is a certain pattern of organization, right?

And I know we haven’t talked about consciousness,

but I sort of, I’m a panpsychist.

I sort of view the universe as conscious.

And so, you know, a light bulb or a quark

or an ant or a worm or a monkey

have their own manifestations of consciousness.

And the human manifestation of consciousness,

it’s partly tied to the particular meat

that we’re manifested by, but it’s largely tied

to the pattern of organization in the brain, right?

So, if you upload yourself into a computer

or a robot or whatever else it is,

some element of your human consciousness may not be there

because it’s just tied to the biological embodiment.

But I think most of it will be there.

And these will be incarnations of your consciousness

in a slightly different flavor.

And, you know, creating these different versions

will be amazing, and each of them will discover

meanings of life that have some overlap,

but probably not total overlap

with the human Ben’s meaning of life.

The thing is, to get to that future

where we can explore different varieties of joy,

different variations of human experience and values

and transhuman experiences and values to get to that future,

we need to navigate through a whole lot of human bullshit

of companies and governments and killer drones

and making and losing money and so forth, right?

And that’s the challenge we’re facing now

is if we do things right,

we can get to a benevolent singularity,

which is levels of joy, growth, and choice

that are literally unimaginable to human beings.

If we do things wrong,

we could either annihilate all life on the planet,

or we could lead to a scenario where, say,

all humans are annihilated and there’s some super AGI

that goes on and does its own thing unrelated to us

except via our role in originating it.

And we may well be at a bifurcation point now, right?

Where what we do now has significant causal impact

on what comes about,

and yet most people on the planet

aren’t thinking that way whatsoever,

they’re thinking only about their own narrow aims

and goals, right?

Now, of course, I’m thinking about my own narrow aims

and goals to some extent also,

but I’m trying to use as much of my energy and mind as I can

to push toward this more benevolent alternative,

which will be better for me,

but also for everybody else.

And it’s weird that so few people understand

what’s going on.

I know you interviewed Elon Musk,

and he understands a lot of what’s going on,

but he’s much more paranoid than I am, right?

Because Elon gets that AGI

is gonna be way, way smarter than people,

and he gets that an AGI does not necessarily

have to give a shit about people

because we’re a very elementary mode of organization

of matter compared to many AGIs.

But I don’t think he has a clear vision

of how infusing early stage AGIs

with compassion and human warmth

can lead to an AGI that loves and helps people

rather than viewing us as a historical artifact

and a waste of mass energy.

But on the other hand,

while I have some disagreements with him,

like he understands way, way more of the story

than almost anyone else

in such a large scale corporate leadership position, right?

It’s terrible how little understanding

of these fundamental issues exists out there now.

That may be different five or 10 years from now though,

because I can see understanding of AGI and longevity

and other such issues is certainly much stronger

and more prevalent now than 10 or 15 years ago, right?

So I mean, humanity as a whole can be slow learners

relative to what I would like,

but on a historical sense, on the other hand,

you could say the progress is astoundingly fast.

But Elon also said, I think on the Joe Rogan podcast,

that love is the answer.

So maybe in that way, you and him are both on the same page

of how we should proceed with AGI.

I think there’s no better place to end it.

I hope we get to talk again about the hat

and about consciousness

and about a million topics we didn’t cover.

Ben, it’s a huge honor to talk to you.

Thank you for making it out.

Thank you for talking today.

Thanks for having me.

This was really, really good fun

and we dug deep into some very important things.

So thanks for doing this.

Thanks very much.

Awesome.

Thanks for listening to this conversation with Ben Goertzel

and thank you to our sponsors,

The Jordan Harbinger Show and Masterclass.

Please consider supporting the podcast

by going to jordanharbinger.com slash lex

and signing up to Masterclass at masterclass.com slash lex.

Click the links, buy the stuff.

It’s the best way to support this podcast

and the journey I’m on in my research and startup.

If you enjoy this thing, subscribe on YouTube,

review it with five stars on Apple Podcast,

support it on Patreon or connect with me on Twitter

at lexfriedman spelled without the E, just F R I D M A N.

I’m sure eventually you will figure it out.

And now let me leave you with some words from Ben Goertzel.

Our language for describing emotions is very crude.

That’s what music is for.

Thank you for listening and hope to see you next time.
