Lex Fridman Podcast - #107 – Peter Singer: Suffering in Humans, Animals, and AI

The following is a conversation with Peter Singer,

professor of bioethics at Princeton University,

best known for his 1975 book, Animal Liberation,

that makes an ethical case against eating meat.

He has written brilliantly from an ethical perspective

on extreme poverty, euthanasia, human genetic selection,

sports doping, the sale of kidneys,

and generally happiness, including in his books,

Ethics in the Real World, and The Life You Can Save.

He was a key popularizer of the effective altruism movement

and is generally considered one of the most influential

philosophers in the world.

Quick summary of the ads.

Two sponsors, Cash App and Masterclass.

Please consider supporting the podcast

by downloading Cash App and using code LexPodcast

and signing up at masterclass.com slash Lex.

Click the links, buy the stuff.

It really is the best way to support the podcast

and the journey I’m on.

As you may know, I primarily eat a ketogenic or carnivore diet,

which means that most of my diet is made up of meat.

I do not hunt the food I eat, though one day I hope to.

I love fishing, for example.

Fishing and eating the fish I catch

has always felt much more honest than participating

in the supply chain of factory farming.

From an ethics perspective, this part of my life

has always had a cloud over it.

It makes me think.

I’ve tried a few times in my life

to reduce the amount of meat I eat.

But for some reason, whatever the makeup of my body,

whatever the way I practice the diet I have,

I get a lot of mental and physical energy

and performance from eating meat.

So both intellectually and physically,

it’s a continued journey for me.

I return to Peter’s work often to reevaluate the ethics

of how I live this aspect of my life.

Let me also say that you may be a vegan

or you may be a meat eater and may be upset by the words I say

or Peter says, but I ask for this podcast

and other episodes of this podcast

that you keep an open mind.

I may and probably will talk with people you disagree with.

Please try to really listen, especially

to people you disagree with.

And give me and the world the gift

of being a participant in a patient, intelligent,

and nuanced discourse.

If your instinct and desire is to be a voice of mockery

towards those you disagree with, please unsubscribe.

My source of joy and inspiration here

has been to be a part of a community that thinks deeply

and speaks with empathy and compassion.

That is what I hope to continue being a part of

and I hope you join as well.

If you enjoy this podcast, subscribe on YouTube,

review it with five stars on Apple Podcast,

follow on Spotify, support on Patreon,

or connect with me on Twitter at Lex Fridman.

As usual, I’ll do a few minutes of ads now

and never any ads in the middle

that can break the flow of the conversation.

This show is presented by Cash App,

the number one finance app in the App Store.

When you get it, use code LEXPODCAST.

Cash App lets you send money to friends,

buy Bitcoin, and invest in the stock market

with as little as one dollar.

Since Cash App allows you to buy Bitcoin,

let me mention that cryptocurrency in the context

of the history of money is fascinating.

I recommend Ascent of Money

as a great book on this history.

Debits and credits on ledgers

started around 30,000 years ago.

The US dollar was created over 200 years ago,

and the first decentralized cryptocurrency

released just over 10 years ago.

So given that history, cryptocurrency is still very much

in its early days of development,

but it’s still aiming to and just might

redefine the nature of money.

So again, if you get Cash App from the App Store

or Google Play and use the code LEXPODCAST,

you get $10 and Cash App will also donate $10 to FIRST,

an organization that is helping to advance

robotics and STEM education for young people around the world.

This show is sponsored by Masterclass.

Sign up at masterclass.com slash LEX

to get a discount and to support this podcast.

When I first heard about Masterclass,

I thought it was too good to be true.

For $180 a year, you get an all access pass

to watch courses from, to list some of my favorites,

Chris Hadfield on space exploration,

Neil deGrasse Tyson on scientific thinking and communication,

Will Wright, creator of SimCity and The Sims, on game design.

I promise I’ll start streaming games at some point soon.

Carlos Santana on guitar, Garry Kasparov on chess,

Daniel Negreanu on poker, and many more.

Chris Hadfield explaining how rockets work

and the experience of being launched into space alone

is worth the money.

By the way, you can watch it on basically any device.

Once again, sign up at masterclass.com slash LEX

to get a discount and to support this podcast.

And now, here’s my conversation with Peter Singer.

When did you first become conscious of the fact

that there is much suffering in the world?

I think I was conscious of the fact

that there’s a lot of suffering in the world

pretty much as soon as I was able to understand

anything about my family and its background

because I lost three of my four grandparents

in the Holocaust and obviously I knew

why I only had one grandparent

and she herself had been in the camps and survived,

so I think I knew a lot about that pretty early.

My entire family comes from the Soviet Union.

I was born in the Soviet Union.

World War II has deep roots in the culture

and the suffering that the war brought

the millions of people who died is in the music,

is in the literature, is in the culture.

What do you think was the impact

of the war broadly on our society?

The war had many impacts.

I think one of them, a beneficial impact,

is that it showed what racism

and authoritarian government can do

and at least as far as the West was concerned,

I think that meant that I grew up in an era

in which there wasn’t the kind of overt racism

and antisemitism that had existed for my parents in Europe.

I was growing up in Australia

and certainly that was clearly seen

as something completely unacceptable.

There was also, though, a fear of a further outbreak of war

which this time we expected would be nuclear

because of the way the Second World War had ended,

so there was this overshadowing of my childhood

about the possibility that I would not live to grow up

and be an adult because of a catastrophic nuclear war.

The film On the Beach was made

in which the city that I was living in,

Melbourne, was the last place on Earth

to have living human beings

because of the nuclear cloud

that was spreading from the North,

so that certainly gave us a bit of that sense.

There were many, there were clearly many other legacies

that we got of the war as well

and the whole setup of the world

and the Cold War that followed.

All of that has its roots in the Second World War.

There is much beauty that comes from war.

Sort of, I had a conversation with Eric Weinstein.

He said everything is great about war

except all the death and suffering.

Do you think there’s something positive

that came from the war,

the mirror that it put to our society,

sort of the ripple effects on it, ethically speaking?

Do you think there are positive aspects to war?

I find it hard to see positive aspects in war

and some of the things that other people think of

as positive and beautiful may be questionable.

So there’s a certain kind of patriotism.

People say during wartime, we all pull together,

we all work together against a common enemy

and that’s true.

An outside enemy does unite a country

and in general, it’s good for countries to be united

and have common purposes

but it also engenders a kind of a nationalism

and a patriotism that can’t be questioned

and that I’m more skeptical about.

What about the brotherhood

that people talk about from soldiers?

The sort of counterintuitive, sad idea

that the closest that people feel to each other

is in those moments of suffering,

of being at the sort of the edge

of seeing your comrades dying in your arms.

That somehow brings people extremely closely together.

Suffering brings people closer together.

How do you make sense of that?

It may bring people close together

but there are other ways of bonding

and being close to people I think

without the suffering and death that war entails.

Perhaps you could see, you could already hear

the romanticized Russian in me.

We tend to romanticize suffering just a little bit

in our literature and culture and so on.

Could you take a step back

and I apologize if it’s a ridiculous question

but what is suffering?

If you would try to define what suffering is,

how would you go about it?

Suffering is a conscious state.

There can be no suffering for a being

who is completely unconscious

and it’s distinguished from other conscious states

in terms of being one that, considered just in itself,

we would rather be without.

It’s a conscious state that we want to stop

if we’re experiencing or we want to avoid having again

if we’ve experienced it in the past.

And that’s, as I say, emphasized for its own sake

because of course people will say,

well, suffering strengthens the spirit.

It has good consequences.

And sometimes it does have those consequences

and of course sometimes we might undergo suffering.

We set ourselves a challenge to run a marathon

or climb a mountain or even just to go to the dentist

so that the toothache doesn’t get worse

even though we know the dentist is gonna hurt us

to some extent.

So I’m not saying that we never choose suffering

but I am saying that other things being equal,

we would rather not be in that state of consciousness.

Is the ultimate goal sort of,

you have the new 10 year anniversary release

of the Life You Can Save book, really influential book.

We’ll talk about it a bunch of times

throughout this conversation

but do you think it’s possible

to eradicate suffering or is that the goal

or do we want to achieve a kind of minimum threshold

of suffering and then keeping a little drop of poison

to keep things interesting in the world?

In practice, I don’t think we ever will eliminate suffering

so I think that little drop of poison as you put it

or if you like the contrasting dash of an unpleasant color

perhaps something like that

in an otherwise harmonious and beautiful composition,

that is gonna always be there.

If you ask me whether in theory,

if we could get rid of it, we should,

I think the answer depends on whether in fact

we would be better off,

or whether, by eliminating the suffering,

we would also eliminate some of the highs,

the positive highs and if that’s so

then we might be prepared to say

it’s worth having a minimum of suffering

in order to have the best possible experiences as well.

Is there a relative aspect to suffering?

So when you talk about eradicating poverty in the world,

is this the more you succeed,

the more the bar of what defines poverty raises

or is there at the basic human ethical level

a bar that’s absolute that once you get above it

then we can morally converge

to feeling like we have eradicated poverty?

I think there are both, and I think this is true for poverty

as well as suffering.

There’s an objective level of suffering or of poverty

where we’re talking about objective indicators

like you’re constantly hungry,

you can’t get enough food,

you’re constantly cold, you can’t get warm,

you have some physical pains that you’re never rid of.

I think those things are objective

but it may also be true that if you do get rid of that

and you get to the stage

where all of those basic needs have been met,

there may still be then new forms of suffering that develop

and perhaps that’s what we’re seeing

in the affluent societies we have

that people get bored for example,

they don’t need to spend so many hours a day earning money

to get enough to eat and shelter.

So now they’re bored, they lack a sense of purpose.

That can happen.

And that then is a kind of a relative suffering

that is distinct from the objective forms of suffering.

But in your focus on eradicating suffering,

you don’t think about that kind of,

the kind of interesting challenges and suffering

that emerges in affluent societies,

that’s just not, in your ethical philosophical brain,

is that of interest at all?

It would be of interest to me if we had eliminated

all of the objective forms of suffering,

which I think of as generally more severe

and also perhaps easier at this stage anyway

to know how to eliminate.

So yes, in some future state when we’ve eliminated

those objective forms of suffering,

I would be interested in trying to eliminate

the relative forms as well.

But that’s not a practical need for me at the moment.

Sorry to linger on it because you kind of said it,

but just is elimination the goal for the affluent society?

So is there, do you see suffering as a creative force?

Suffering can be a creative force.

I think repeating what I said about the highs

and whether we need some of the lows

to experience the highs.

So it may be that suffering makes us more creative

and we regard that as worthwhile.

Maybe that brings some of those highs with it

that we would not have had if we’d had no suffering.

I don’t really know.

Many people have suggested that

and I certainly have no basis for denying it.

And if it’s true, then I would not want

to eliminate suffering completely.

But the focus is on the absolute,

not to be cold, not to be hungry.

Yes, that’s at the present stage

of where the world’s population is, that’s the focus.

Talking about human nature for a second,

do you think people are inherently good

or do we all have good and evil in us

that basically everyone is capable of evil

based on the environment?

Certainly most of us have potential for both good and evil.

I’m not prepared to say that everyone is capable of evil.

Maybe some people who even in the worst of circumstances

would not be capable of it,

but most of us are very susceptible

to environmental influences.

So when we look at things

that we were talking about previously,

let’s say what the Nazis did during the Holocaust,

I think it’s quite difficult to say,

I know that I would not have done those things

even if I were in the same circumstances

as those who did them.

Even if let’s say I had grown up under the Nazi regime

and had been indoctrinated with racist ideas,

had also had the idea that I must obey orders,

follow the commands of the Fuhrer,

plus of course perhaps the threat

that if I didn’t do certain things,

I might get sent to the Russian front

and that would be a pretty grim fate.

I think it’s really hard for anybody to say,

nevertheless, I know I would not have killed those Jews

or whatever else it was that they were doing.

Well, what’s your intuition?

How many people would be able to say that?

Truly to be able to say it,

I think very few, less than 10%.

To me, it seems a very interesting

and powerful thing to meditate on.

So I’ve read a lot about the war, World War II,

and I can’t escape the thought

that I would have not been one of the 10%.

Right, I have to say, I simply don’t know.

I would like to hope that I would have been one of the 10%,

but I don’t really have any basis

for claiming that I would have been different

from the majority.

Is it a worthwhile thing to contemplate?

It would be interesting if we could find a way

of really finding these answers.

There obviously is quite a bit of research

on people during the Holocaust,

on how ordinary Germans got led to do terrible things,

and there are also studies of the resistance,

some heroic people in the White Rose group, for example,

who resisted even though they knew

they were likely to die for it.

But I don’t know whether these studies

really can answer your larger question

of how many people would have been capable of doing that.

Well, sort of the reason I think it’s interesting

is in the world, as you described,

when there are things that you’d like to do that are good,

that are objectively good,

it’s useful to think about whether

I’m not willing to do something,

or I’m not willing to acknowledge something

as good and the right thing to do

because I’m simply scared of putting my life,

of damaging my life in some kind of way.

And that kind of thought exercise is helpful

to understand what is the right thing

in my current skill set and the capacity to do.

Sort of there’s things that are convenient,

and I wonder if there are things

that are highly inconvenient,

where I would have to experience derision,

or hatred, or death, or all those kinds of things,

but it’s truly the right thing to do.

And that kind of balance,

I feel like in America,

it’s difficult to think about in the current times;

it seems easier to put yourself back in history,

where you can sort of objectively contemplate

whether, how willing you are to do the right thing

when the cost is high.

True, but I think we do face those challenges today,

and I think we can still ask ourselves those questions.

So one stand that I took more than 40 years ago now

was to stop eating meat, become a vegetarian at a time

when you hardly met anybody who was a vegetarian,

or if you did, they might’ve been a Hindu,

or they might’ve had some weird theories

about meat and health.

And I know thinking about making that decision,

I was convinced that it was the right thing to do,

but I still did have to think,

are all my friends gonna think that I’m a crank

because I’m now refusing to eat meat?

So I’m not saying there were any terrible sanctions,

obviously, but I thought about that,

and I guess I decided,

well, I still think this is the right thing to do,

and I’ll put up with that if it happens.

And one or two friends were clearly uncomfortable

with that decision, but that was pretty minor

compared to the historical examples

that we’ve been talking about.

But other issues that we have around too,

like global poverty and what we ought to be doing about that

is another question where people, I think,

can have the opportunity to take a stand

on what’s the right thing to do now.

Climate change would be a third question

where, again, people are taking a stand.

I can look at Greta Thunberg there and say,

well, I think it must’ve taken a lot of courage

for a schoolgirl to say,

I’m gonna go on strike about climate change

and see what happens.

Yeah, especially in this divisive world,

she gets exceptionally huge amounts of support

and hatred, both.

That’s right.

Which is very difficult for a teenager to operate in.

In your book, Ethics in the Real World,

amazing book, people should check it out.

Very easy read.

82 brief essays on things that matter.

One of the essays asks, should robots have rights?

You’ve written about this,

so let me ask, should robots have rights?

If we ever develop robots capable of consciousness,

capable of having their own internal perspective

on what’s happening to them

so that their lives can go well or badly for them,

then robots should have rights.

Until that happens, they shouldn’t.

So is consciousness essentially a prerequisite to suffering?

So everything that possesses consciousness

is capable of suffering, put another way.

And if so, what is consciousness?

I certainly think that consciousness

is a prerequisite for suffering.

You can’t suffer if you’re not conscious.

But is it true that every being that is conscious

will suffer or has to be capable of suffering?

I suppose you could imagine a kind of consciousness,

especially if we can construct it artificially,

that’s capable of experiencing pleasure

but just automatically cuts out the consciousness

when there’s suffering.

So it’s like an instant anesthesia

as soon as something is gonna cause you suffering.

So that’s possible.

But doesn’t exist as far as we know on this planet yet.

You asked what is consciousness.

Philosophers often talk about it

as there being a subject of experiences.

So you and I and everybody listening to this

is a subject of experience.

There is a conscious subject who is taking things in,

responding to it in various ways,

feeling good about it, feeling bad about it.

And that’s different from the kinds

of artificial intelligence we have now.

I take out my phone.

I ask Google directions to where I’m going.

Google gives me the directions

and I choose to take a different way.

Google doesn’t care.

It’s not like I’m offending Google or anything like that.

There is no subject of experiences there.

And I think that’s the indication

that Google AI we have now is not conscious

or at least that level of AI is not conscious.

And that’s the way to think about it.

Now, it may be difficult to tell, of course,

whether a certain AI is or isn’t conscious.

It may mimic consciousness

and we can’t tell if it’s only mimicking it

or if it’s the real thing.

But that’s what we’re looking for.

Is there a subject of experience,

a perspective on the world from which things can go well

or badly from that perspective?

So our idea of what suffering looks like

comes from just watching ourselves when we’re in pain.

Or when we’re experiencing pleasure, it’s not only.

Pleasure and pain.

Yes, so and then you could actually,

you could push back on us, but I would say

that’s how we kind of build an intuition about animals

is we can infer the similarities between humans and animals

and so infer that they’re suffering or not

based on certain things and they’re conscious or not.

So what if robots, you mentioned Google Maps

and I’ve done this experiment.

So I work in robotics just for my own self

or I have several Roomba robots

and I play with different speech interaction,

voice based interaction.

And if the Roomba or the robot or Google Maps

shows any signs of pain, like screaming or moaning

or being displeased by something you’ve done,

that in my mind, I can’t help but immediately upgrade it.

And even when I myself programmed it in,

just having another entity that’s now for the moment

disjoint from me showing signs of pain

makes me feel like it is conscious.

Like I immediately, then, whatever,

I immediately realize that it’s not conscious, obviously,

but that feeling is there.

So sort of, I guess, what do you think about a world

where Google Maps and Roombas are pretending to be conscious

and we descendants of apes are not smart enough

to realize they’re not or whatever, or that they are conscious,

because they appear to be conscious.

And so you then have to give them rights.

The reason I’m asking that is that kind of capability

may be closer than we realize.

Yes, that kind of capability may be closer,

but I don’t think it follows

that we have to give them rights.

I suppose the argument for saying that in those circumstances

we should give them rights is that if we don’t,

we’ll harden ourselves against other beings

who are not robots and who really do suffer.

That’s a possibility that, you know,

if we get used to looking at a being suffering

and saying, yeah, we don’t have to do anything about that,

that being doesn’t have any rights,

maybe we’ll feel the same about animals, for instance.

And interestingly, among philosophers and thinkers

who denied that we have any direct duties to animals,

and this includes people like Thomas Aquinas

and Immanuel Kant, they did say, yes,

but still it’s better not to be cruel to them,

not because of the suffering we’re inflicting

on the animals, but because if we are,

we may develop a cruel disposition

and this will be bad for humans, you know,

because we’re more likely to be cruel to other humans

and that would be wrong.

So.

But you don’t accept that kind of.

I don’t accept that as the basis of the argument

for why we shouldn’t be cruel to animals.

I think the basis of the argument

for why we shouldn’t be cruel to animals

is just that we’re inflicting suffering on them

and the suffering is a bad thing.

But possibly I might accept some sort of parallel

of that argument as a reason why you shouldn’t be cruel

to these robots that mimic the symptoms of pain

if it’s gonna be harder for us to distinguish.

I would venture to say, I’d like to disagree with you

and with most people, I think,

at the risk of sounding crazy,

I would like to say that if that Roomba is dedicated

to faking the consciousness and the suffering,

I think it will be impossible for us to distinguish.

I would like to apply the same argument

as with animals to robots,

that they deserve rights in that sense.

Now we might outlaw the addition

of those kinds of features into Roombas,

but once you do, I think I’m quite surprised

by the upgrade in consciousness

that the display of suffering creates.

It’s a totally open world,

but I’d like to just point to the difference

between animals and robots, which is that in the robot case,

we’ve added it in ourselves.

Therefore, we can say something about how real it is.

But I would like to say that the display of it

is what makes it real.

And I’m not a philosopher, I’m not making that argument,

but I’d at least like to add that as a possibility.

And I’ve been surprised by it

is all I’m trying to sort of articulate poorly, I suppose.

So there is a philosophical view

that has been held about humans,

which is rather like what you’re talking about,

and that’s behaviorism.

So behaviorism was employed both in psychology,

people like BF Skinner was a famous behaviorist,

but in psychology, it was more a kind of,

what is it that makes this a science?

Well, you need to have behavior

because that’s what you can observe,

you can’t observe consciousness.

But in philosophy, the view was defended

by people like Gilbert Ryle,

who was a professor of philosophy at Oxford,

wrote a book called The Concept of Mind,

in which, in this phase of linguistic philosophy,

this is in the 1940s,

he said, well, the meaning of a term is its use,

and we use terms like so and so is in pain

when we see somebody writhing or screaming

or trying to escape some stimulus,

and that’s the meaning of the term.

So that’s what it is to be in pain,

and you point to the behavior.

And Norman Malcolm, who was another philosopher

in the same school, from Cornell, had the view that,

so what is it to dream?

After all, we can’t see other people’s dreams.

Well, when people wake up and say,

I’ve just had a dream of, here I was,

undressed, walking down the main street

or whatever it is you’ve dreamt,

that’s what it is to have a dream.

It’s basically to wake up and recall something.

So you could apply this to what you’re talking about

and say, so what it is to be in pain

is to exhibit these symptoms of pain behavior,

and therefore, these robots are in pain.

That’s what the word means.

But nowadays, not many people think

that Ryle’s kind of philosophical behaviorism

is really very plausible,

so I think they would say the same about your view.

So, yes, I just spoke with Noam Chomsky,

who basically was part of dismantling

the behaviorist movement.

But, and I’m with that 100% for studying human behavior,

but I am one of the few people in the world

who has made Roombas scream in pain.

And I just don’t know what to do

with that empirical evidence,

because it’s hard, sort of philosophically, I agree.

But the only reason I philosophically agree in that case

is because I was the programmer.

But if somebody else was a programmer,

I’m not sure I would be able to interpret that well.

So I think it’s a new world

that I was just curious what your thoughts are.

For now, you feel that the display

of what we can kind of intellectually say

is a fake display of suffering is not suffering.

That’s right, that would be my view.

But that’s consistent, of course,

with the idea that it’s part of our nature

to respond to this display

if it’s reasonably authentically done.

And therefore it’s understandable

that people would feel this,

and maybe, as I said, it’s even a good thing

that they do feel it,

and you wouldn’t want to harden yourself against it

because then you might harden yourself

against beings that are really suffering.

But there’s this line, so you said,

once artificial general intelligence systems,

human level intelligence systems, become conscious,

I guess if I could just linger on it,

now, I’ve written really dumb programs

that just say things that I told them to say,

but how do you know when a system like Alexa,

which is sufficiently complex

that you can’t introspect to how it works,

starts giving you signs of consciousness

through natural language?

That there’s a feeling,

there’s another entity there that’s self aware,

that has a fear of death, a mortality,

that has awareness of itself

that we kind of associate with other living creatures.

I guess I’m sort of trying to do the slippery slope

from the very naive thing where I started

into something where it’s sufficiently a black box

to where it’s starting to feel like it’s conscious.

Where’s that threshold

where you would start getting uncomfortable

with the idea of robot suffering, do you think?

I don’t know enough about the programming

that would go into this, really, to answer this question.

But I presume that somebody who does know more about this

could look at the program

and see whether we can explain the behaviors

in a parsimonious way that doesn’t require us

to suggest that some sort of consciousness has emerged.

Or alternatively, whether you’re in a situation

where you say, I don’t know how this is happening,

the program does generate a kind of artificial

general intelligence which is autonomous,

starts to do things itself and is autonomous

of the basic programming that set it up.

And so it’s quite possible that actually

we have achieved consciousness

in a system of artificial intelligence.

Sort of the approach that I work with,

most of the community is really excited about now

is with learning methods, so machine learning.

And the learning methods unfortunately

are not capable of revealing how they work,

which is why somebody like Noam Chomsky criticizes them.

You create powerful systems that are able

to do certain things without understanding

the theory, the physics, the science of how it works.

And so it’s possible if those are the kinds

of methods that succeed, we won’t be able

to know exactly, sort of try to reduce,

try to find whether this thing is conscious or not,

this thing is intelligent or not.

It’s simply giving, when we talk to it,

it displays wit and humor and cleverness

and emotion and fear, and then we won’t be able

to say where in the billions of nodes,

neurons in this artificial neural network

is the fear coming from.

So in that case, that’s a really interesting place

where we do now start to return to behaviorism and say.

Yeah, that is an interesting issue.

I would say that if we have serious doubts

and think it might be conscious,

then we ought to try to give it the benefit

of the doubt, just as I would say with animals.

I think we can be highly confident

that vertebrates are conscious,

and so are some invertebrates

like the octopus, but with insects,

it’s much harder to be confident of that.

I think we should give them the benefit

of the doubt where we can, which means,

I think it would be wrong to torture an insect,

but it doesn’t necessarily mean it’s wrong

to slap a mosquito that’s about to bite you

and stop you getting to sleep.

So I think you try to achieve some balance

in these circumstances of uncertainty.
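
To make that balancing concrete, one way to cash out the benefit of the doubt is as an expected harm: the probability that a being is conscious times the suffering inflicted if it is. The sketch below, in Python, uses entirely hypothetical probabilities and harm magnitudes; it illustrates the kind of weighing Singer describes, not a method he proposes.

```python
# Benefit of the doubt as expected harm: the chance the being is
# conscious times the harm inflicted if it is. All numbers here are
# hypothetical, for illustration only.

def expected_harm(p_conscious: float, harm_if_conscious: float) -> float:
    return p_conscious * harm_if_conscious

# Torturing an insect: even a modest probability of consciousness,
# multiplied by a large harm, yields a non-trivial expected harm.
print(expected_harm(0.1, 100.0))  # 10.0 -> plausibly wrong to do

# Slapping a mosquito about to bite you: the expected harm is small,
# and there's a real benefit (sleep) on the other side of the balance.
print(expected_harm(0.1, 1.0))    # 0.1 -> the balance can favor it
```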

If it’s okay with you, if we can go back just briefly.

So 44 years ago, like you mentioned, 40 plus years ago,

you wrote Animal Liberation,

the classic book that started,

that launched, that was the foundation

of the animal liberation movement.

Can you summarize the key set of ideas

that underpin that book?

Certainly, the key idea that underlies that book

is the concept of speciesism,

a term which I did not invent.

I took it from a man called Richard Rider,

who was in Oxford when I was,

and I saw a pamphlet that he’d written

about experiments on chimpanzees that used that term.

But I think I contributed

to making it philosophically more precise

and to getting it into a broader audience.

And the idea is that we have a bias or a prejudice

against taking seriously the interests of beings

who are not members of our species.

Just as in the past, Europeans, for example,

had a bias against taking seriously

the interests of Africans, racism.

And men have had a bias against taking seriously

the interests of women, sexism.

So I think something analogous, not completely identical,

but something analogous goes on

and has gone on for a very long time

with the way humans see themselves vis a vis animals.

We see ourselves as more important.

We see animals as existing to serve our needs

in various ways.

And you’re gonna find this very explicit

in earlier philosophers from Aristotle

through to Kant and others.

And either we don’t need to take their interests

into account at all,

or we can discount their interests because they’re not human.

They count a little bit,

but they don’t count nearly as much as humans do.

My book argues that that attitude is responsible

for a lot of the things that we do to animals

that are wrong, confining them indoors

in very crowded, cramped conditions in factory farms

to produce meat or eggs or milk more cheaply,

using them in some research that’s by no means essential

for survival or wellbeing, and a whole lot of other things,

some of the sports and things that we do to animals.

So I think that’s unjustified

because I think the significance of pain and suffering

does not depend on the species of the being

who is in pain or suffering

any more than it depends on the race or sex of the being

who is in pain or suffering.

And I think we ought to rethink our treatment of animals

along the lines of saying,

if the pain is just as great in an animal,

then it’s just as bad that it happens as if it were a human.

Maybe if I could ask, I apologize,

hopefully it’s not a ridiculous question,

but so as far as we know,

we cannot communicate with animals through natural language,

but we would be able to communicate with robots.

So I’m returning to sort of a small parallel

between perhaps animals and the future of AI.

If we do create an AGI system

or as we approach creating that AGI system,

what kind of questions would you ask her

to try to intuit whether there is consciousness

or more importantly, whether there’s capacity to suffer?

I might ask the AGI what she was feeling

or does she have feelings?

And if she says yes, to describe those feelings,

to describe what they were like,

to see what the phenomenal account of consciousness is like.

That’s one question.

I might also try to find out if the AGI

has a sense of itself.

So for example, to get at the idea,

we often ask people,

so suppose you were in a car accident

and your brain were transplanted into someone else’s body,

do you think you would survive

or would it be the person whose body was still surviving,

your body having been destroyed?

And most people say, I think I would,

if my brain was transplanted along with my memories

and so on, I would survive.

So we could ask AGI those kinds of questions.

If they were transferred to a different piece of hardware,

would they survive?

What would survive?

And get at that sort of concept.

Sort of on that line, another perhaps absurd question,

but do you think having a body

is necessary for consciousness?

So do you think digital beings can suffer?

Presumably digital beings need to be

running on some kind of hardware, right?

Yeah, that ultimately boils down to,

but this is exactly what you just said,

is moving the brain from one place to another.

So you could move it to a different kind of hardware.

And I could say, look, your hardware is getting worn out.

We’re going to transfer you to a fresh piece of hardware.

So we’re gonna shut you down for a time,

but don’t worry, you’ll be running very soon

on a nice fresh piece of hardware.

And you could imagine this conscious AGI saying,

that’s fine, I don’t mind having a little rest.

Just make sure you don’t lose me or something like that.

Yeah, I mean, that’s an interesting thought

that even with us humans, the suffering is in the software.

We right now don’t know how to repair the hardware,

but we’re getting better and better at it.

I mean, some people dream about one day being able

to transfer certain aspects of the software

to another piece of hardware.

What do you think, just on that topic,

there’s been a lot of exciting innovation

in brain computer interfaces.

I don’t know if you’re familiar with the companies

like Neuralink, with Elon Musk,

communicating both ways from a computer,

being able to send, activate neurons

and being able to read spikes from neurons.

With the dream of being able to expand,

sort of increase the bandwidth at which your brain

can like look up articles on Wikipedia kind of thing,

sort of expand the knowledge capacity of the brain.

Do you think that notion, is that interesting to you

as the expansion of the human mind?

Yes, that’s very interesting.

I’d love to be able to have that increased bandwidth.

And I want better access to my memory, I have to say too,

as I get older, I talk to my wife about things

that we did 20 years ago or something.

Her memory is often better about particular events.

Where were we?

Who was at that event?

What did he or she wear even?

She may know and I have not the faintest idea about this,

but perhaps it’s somewhere in my memory.

And if I had this extended memory,

I could search that particular year and rerun those things.

I think that would be great.

In some sense, we already have that

by storing so much of our data online,

like pictures of different events.

Yes, well, Gmail is fantastic for that

because people email me as if they know me well

and I haven’t got a clue who they are,

but then I search for their name.

Ah yes, they emailed me in 2007

and I know who they are now.

Yeah, so we’re taking the first steps already.

So on the flip side of AI,

people like Stuart Russell and others

focus on the control problem, value alignment in AI,

which is the problem of making sure we build systems

that align to our own values, our ethics.

Do you think, sort of high level,

how do we go about building systems,

do you think it’s possible to build systems that align with our values,

align with our human ethics or living being ethics?

Presumably, it’s possible to do that.

I know that a lot of people think

that there’s a real danger that we won’t,

that we’ll more or less accidentally lose control of AGI.

Do you have that fear yourself personally?

I’m not quite sure what to think.

I talk to philosophers like Nick Bostrom and Toby Ord

and they think that this is a real problem

we need to worry about.

Then I talk to people who work for Microsoft

or DeepMind or somebody and they say,

no, we’re not really that close to producing AGI,

super intelligence.

So if you look at Nick Bostrom,

sort of the arguments, it’s very hard to defend.

Of course, I myself engineer AI systems,

so I’m more with the DeepMind folks

where it seems that we’re really far away,

but then the counter argument is,

is there any fundamental reason that we’ll never achieve it?

And if not, then eventually there’ll be

a dire existential risk.

So we should be concerned about it.

And do you find that argument at all appealing

in this domain or any domain that eventually

this will be a problem so we should be worried about it?

Yes, I think it’s a problem.

I think that’s a valid point.

Of course, when you say eventually,

that raises the question, how far off is that?

And is there something that we can do about it now?

Because if we’re talking about

this is gonna be 100 years in the future

and you consider how rapidly our knowledge

of artificial intelligence has grown

in the last 10 or 20 years,

it seems unlikely that there’s anything much

we could do now that would influence

whether this is going to happen 100 years in the future.

People 80 years in the future

would be in a much better position to say,

this is what we need to do to prevent this happening

than we are now.

So to some extent I find that reassuring,

but I’m all in favor of some people doing research

into this to see if indeed it is that far off

or if we are in a position to do something about it sooner.

I’m very much of the view that extinction

is a terrible thing and therefore,

even if the risk of extinction is very small,

if we can reduce that risk,

that’s something that we ought to do.

My disagreement with some of these people

who talk about long-term risks, extinction risks,

is only about how much priority that should have

as compared to present questions.

So essentially, if you look at the math of it

from a utilitarian perspective,

if it’s existential risk, so everybody dies,

it feels like an infinity in the math equation,

and that makes the math

with the priorities difficult to do.

If we don’t know the time scale,

and you can legitimately argue

that there’s a nonzero probability that it’ll happen tomorrow,

then how do you deal with these kinds of existential risks

like from nuclear war, from nuclear weapons,

from biological weapons, from,

I’m not sure if global warming falls into that category

because global warming is a lot more gradual.

And people say it’s not an existential risk

because there’ll always be possibilities

of some humans existing, farming Antarctica

or northern Siberia or something of that sort, yeah.

But you don’t find the complete existential risks

as a fundamental, like an overriding part

of the equations of ethics, of what we should do.

You know, certainly if you treat it as an infinity,

then it plays havoc with any calculations.

But arguably, we shouldn’t.

I mean, one of the ethical assumptions that goes into this

is that the loss of future lives,

that is of merely possible lives of beings

who may never exist at all,

is in some way comparable to the sufferings or deaths

of people who do exist at some point.

And that’s not clear to me.

I think there’s a case for saying that,

but I also think there’s a case for taking the other view.

So that has some impact on it.

Of course, you might say, ah, yes,

but still, if there’s some uncertainty about this

and the costs of extinction are infinite,

then still, it’s gonna overwhelm everything else.

But I suppose I’m not convinced of that.

I’m not convinced that it’s really infinite here.

And even Nick Bostrom, in his discussion of this,

doesn’t claim that there’ll be

an infinite number of lives lived.

What is it, 10 to the 56th or something?

It’s a vast number that I think he calculates.

This is assuming we can upload consciousness

onto these digital forms,

and therefore, they’ll be much more energy efficient,

but he calculates the amount of energy in the universe

or something like that.

So the numbers are vast but not infinite,

which gives you some prospect maybe

of resisting some of the argument.

The beautiful thing with Nick’s arguments

is he quickly jumps from the individual scale

to the universal scale,

which is just awe inspiring to think of

when you think about the entirety

of the span of time of the universe.

It’s both interesting from a computer science perspective,

AI perspective, and from an ethical perspective,

the idea of utilitarianism.

Could you say what is utilitarianism?

Utilitarianism is the ethical view

that the right thing to do is the act

that has the greatest expected utility,

where what that means is it’s the act

that will produce the best consequences,

discounted by the odds that you won’t be able

to produce those consequences,

that something will go wrong.

But in the simple case, let’s assume we have certainty

about what the consequences of our actions will be,

then the right action is the action

that will produce the best consequences.
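
To make the definition concrete, here is a minimal sketch, in Python, of the expected-utility calculation Singer describes: the value of each outcome, discounted by the odds of producing it. The actions, probabilities, and utility numbers are hypothetical, chosen purely for illustration.

```python
# Expected utility: best consequences, discounted by the odds that
# you won't be able to produce them. All numbers are hypothetical.

actions = {
    # action: list of (probability, utility if that outcome occurs)
    "risky_but_promising": [(0.9, 100), (0.1, 0)],
    "safe_but_modest":     [(1.0, 20)],
}

def expected_utility(outcomes):
    # Sum each outcome's utility weighted by its probability.
    return sum(p * u for p, u in outcomes)

# On this view, the right act is the one with greatest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # risky_but_promising: 0.9*100 = 90 beats 1.0*20 = 20
```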

Is that always, and by the way,

there’s a bunch of nuanced stuff

that you talked about with Sam Harris on his podcast

that people should go listen to.

It’s great.

That’s like two hours of moral philosophy discussion.

But is that an easy calculation?

No, it’s a difficult calculation.

And actually, there’s one thing that I need to add,

and that is utilitarians, certainly the classical

utilitarians, think that by best consequences,

we’re talking about happiness

and the absence of pain and suffering.

There are other consequentialists

who are not really utilitarians who say

there are different things that could be good consequences.

Justice, freedom, human dignity,

knowledge, they all count as good consequences too.

And that makes the calculations even more difficult

because then you need to know

how to balance these things off.

If you are just talking about wellbeing,

using that term to express happiness

and the absence of suffering,

I think the calculation becomes more manageable

in a philosophical sense.

Still, in practice,

we don’t know how to do it.

We don’t know how to measure quantities

of happiness and misery.

We don’t know how to calculate the probabilities

that different actions will produce, this or that.

So at best, we can use it as a rough guide

to different actions and one where we have to focus

on the short term consequences

because we just can’t really predict

all of the longer term ramifications.

So what about the extreme suffering of very small groups?

Utilitarianism is focused on the overall aggregate, right?

Would you say you yourself are a utilitarian?

Yes, I’m a utilitarian.

What do you make of the difficult, ethical,

maybe poetic suffering of very few individuals?

I think it’s possible that that gets overridden

by benefits to very large numbers of individuals.

I think that can be the right answer.

But before we conclude that it is the right answer,

we have to know how severe the suffering is

and how that compares with the benefits.

So I tend to think that extreme suffering is worse than

or is further, if you like, below the neutral level

than extreme happiness or bliss is above it.

So when I think about the worst experiences possible

and the best experiences possible,

I don’t think of them as equidistant from neutral.

So like it’s a scale that goes from minus 100 through zero

as a neutral level to plus 100.

Because I know that I would not exchange an hour

of my most painful experiences

for an hour of my most pleasurable experiences.

I wouldn’t even accept an hour

of my most painful experiences for two hours

or 10 hours of my most pleasurable experiences.

Did I say that correctly?

Yeah, yeah, yeah, yeah.

Maybe 20 hours then, is it 20 to 1? What’s the exchange rate?

So that’s the question, what is the exchange rate?

But I think it can be quite high.

So that’s why you shouldn’t just assume that

it’s okay to make one person suffer extremely

in order to make two people much better off.

It might be a much larger number.

But at some point I do think you should aggregate

and the result will be,

even though it violates our intuitions of justice

and fairness, whatever it might be,

giving priority to those who are worse off,

at some point I still think

that will be the right thing to do.

Yeah, it’s some complicated nonlinear function.
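
To make the exchange-rate idea concrete, here is a toy sketch, in Python, of one such nonlinear aggregation: experiences sit on the minus 100 to plus 100 scale Singer mentions, and negative experiences are weighted more heavily than positive ones. The weighting function and the exchange rate of 20 are hypothetical numbers for illustration only, not figures Singer endorses.

```python
# Asymmetric aggregation: suffering counts for more than equal-magnitude
# happiness, so one person's extreme suffering is not offset by modest
# gains to two others, though at some point aggregation does kick in.
# The exchange rate of 20 is a hypothetical number for illustration.

EXCHANGE_RATE = 20  # an hour at -100 outweighs up to 20 hours at +100

def weighted(value: float) -> float:
    # Weight negative experiences more heavily than positive ones.
    return value * EXCHANGE_RATE if value < 0 else value

def aggregate(experiences):
    # Total wellbeing: the sum of asymmetrically weighted experiences.
    return sum(weighted(v) for v in experiences)

print(aggregate([-100, 100, 100]))     # -1800: two beneficiaries too few
print(aggregate([-100] + [100] * 25))  # 500: enough benefit to outweigh
```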

Can I ask a sort of out there question is,

the more and more we put our data out there,

the more we’re able to measure a bunch of factors

of each of our individual human lives.

And I could foresee the ability to estimate wellbeing,

or whatever we together collectively agree

is a good objective function,

from a utilitarian perspective.

Do you think it’ll be possible

and is a good idea to push that kind of analysis

to make then public decisions perhaps with the help of AI

that, here’s a tax rate

at which wellbeing will be optimized.

Yeah, that would be great if we really knew that,

if we really could calculate that.

No, but do you think it’s possible

to converge towards an agreement amongst humans,

towards an objective function

or is it just a hopeless pursuit?

I don’t think it’s hopeless.

I think it would be difficult

to converge towards agreement, at least at present,

because some people would say,

I’ve got different views about justice

and I think you ought to give priority

to those who are worse off,

even though I acknowledge that the gains

that the worst off are making are less than the gains

that those who are sort of medium badly off could be making.

So we still have all of these intuitions that we argue about.

So I don’t think we would get agreement,

but the fact that we wouldn’t get agreement

doesn’t show that there isn’t a right answer there.

Do you think, who gets to say what is right and wrong?

Do you think there’s place for ethics oversight

from the government?

So I’m thinking in the case of AI,

overseeing what kind of decisions AI can make or not,

but also if you look at animal rights

or rather not rights or perhaps rights,

but the ideas you’ve explored in animal liberation,

who gets to, so you eloquently and beautifully write

in your book that, you know, we shouldn’t do this,

but are there some harder rules that should be imposed,

or is this a collective thing we converge towards as a society

and thereby make better and better ethical decisions?

Politically, I’m still a democrat

despite looking at the flaws in democracy

and the way it doesn’t work always very well.

So I don’t see a better option

than allowing the public to vote for governments

in accordance with their policies.

And I hope that they will vote for policies

that reduce the suffering of animals

and reduce the suffering of distant humans,

whether geographically distant or distant

because they’re future humans.

But I recognise that democracy

isn’t really well set up to do that.

And in a sense, you could imagine a wise and benevolent,

you know, omnibenevolent leader

who would do that better than democracies could.

But in the world in which we live,

it’s difficult to imagine that this leader

isn’t gonna be corrupted by a variety of influences.

You know, we’ve had so many examples

of people who’ve taken power with good intentions

and then have ended up being corrupt

and favouring themselves.

So I don’t know, you know, that’s why, as I say,

I don’t know that we have a better system

than democracy to make these decisions.

Well, so you also discuss effective altruism,

which is a mechanism for going around government

for putting the power in the hands of the people

to donate money towards causes to help, you know,

remove the middleman and give it directly

to the causes that they care about.

Sort of, maybe this is a good time to ask,

you’ve, 10 years ago, wrote The Life You Can Save,

that’s now, I think, available for free online?

That’s right, you can download either the ebook

or the audiobook free from thelifeyoucansave.org.

And what are the key ideas that you present

in the book?

The main thing I wanna do in the book

is to make people realise that it’s not difficult

to help people in extreme poverty,

that there are highly effective organisations now

that are doing this, that they’ve been independently assessed

and verified by research teams that are expert in this area

and that it’s a fulfilling thing to do

to, for at least part of your life, you know,

we can’t all be saints, but at least one of your goals

should be to really make a positive contribution

to the world and to do something to help people

who through no fault of their own

are in very dire circumstances and living a life

that is barely or perhaps not at all

a decent life for a human being to live.

So you describe a minimum ethical standard of giving.

What advice would you give to people

that want to be effectively altruistic in their life,

like live an effective altruism life?

There are many different kinds of ways of living

as an effective altruist.

And if you’re at the point where you’re thinking

about your long term career, I’d recommend you take a look

at a website called 80,000 Hours, 80000hours.org,

which looks at ethical career choices.

And they range from, for example,

going to work on Wall Street

so that you can earn a huge amount of money

and then donate most of it to effective charities

to going to work for a really good nonprofit organization

so that you can directly use your skills and ability

and hard work to further a good cause,

or perhaps going into politics, maybe small chances,

but big payoffs in politics,

go to work in the public service

where if you’re talented, you might rise to a high level

where you can influence decisions,

do research in an area where the payoffs could be great.

There are a lot of different opportunities,

but too few people are even thinking about those questions.

They’re just going along in some sort of preordained rut

to particular careers.

Maybe they think they’ll earn a lot of money

and have a comfortable life,

but they may not find that as fulfilling

as actually knowing that they’re making

a positive difference to the world.

What about in terms of,

so that’s like long term, 80,000 hours,

sort of shorter term giving, part of,

well, actually it’s a part of that.

You go to work at Wall Street,

if you would like to give a percentage of your income,

which you talk about in The Life You Can Save.

I mean, I was looking through it, it’s quite compelling,

I mean, I’m just a dumb engineer,

so I like that there are simple rules, a nice percentage.

Okay, so I do actually set out suggested levels of giving

because people often ask me about this.

A popular answer is give 10%, the traditional tithe

that’s recommended in Christianity and also Judaism.

But why should it be the same percentage

irrespective of your income?

Tax scales reflect the idea that the more income you have,

the more you can pay tax.

And I think the same is true in what you can give.

So I do set out a progressive donor scale,

which starts out at 1% for people on modest incomes

and rises to 33 and a third percent

for people who are really earning a lot.

And my idea is that I don’t think any of these amounts

really impose real hardship on people

because they are progressive and geared to income.

So I think anybody can do this

and can know that they’re doing something significant

to play their part in reducing the huge gap

between people in extreme poverty in the world

and people living affluent lives.
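
To make the scale concrete, here is a sketch, in Python, of a progressive, bracketed giving schedule of the kind Singer describes, where, like marginal tax rates, each rate applies only to the slice of income within its bracket. Only the two endpoints, 1 percent on modest incomes rising to a third at the top, come from the conversation; the income thresholds and intermediate rates below are hypothetical stand-ins, not the actual scale from the book (see thelifeyoucansave.org for that).

```python
# A progressive donor scale, sketched like marginal tax brackets.
# Thresholds and intermediate rates are hypothetical; only the 1% floor
# and the one-third top rate come from the conversation.

BRACKETS = [
    # (income above this threshold, marginal giving rate)
    (0,         0.01),
    (100_000,   0.05),
    (250_000,   0.10),
    (1_000_000, 1 / 3),
]

def suggested_donation(income: float) -> float:
    # Apply each rate only to the slice of income inside its bracket.
    total = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            total += (min(income, upper) - lower) * rate
    return total

print(suggested_donation(60_000))     # 600.0, i.e. 1% of a modest income
print(suggested_donation(2_000_000))  # ~416,833: the top rate bites
```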

And aside from it being an ethical life,

it’s one that you find more fulfilling

because there’s something about our human nature that,

or some of our human natures,

maybe most of our human nature that enjoys doing

the ethical thing.

Yes, I make both those arguments,

that it is an ethical requirement

in the kind of world we live in today

to help people in great need when we can easily do so,

but also that it is a rewarding thing

and there’s good psychological research showing

that people who give more tend to be more satisfied

with their lives.

And I think this has something to do

with having a purpose that’s larger than yourself

and therefore never being, if you like,

never being bored sitting around,

oh, you know, what will I do next?

I’ve got nothing to do.

In a world like this, there are many good things

that you can do and enjoy doing them.

Plus you’re working with other people

in the effective altruism movement

who are forming a community of other people

with similar ideas and they tend to be interesting,

thoughtful and good people as well.

And having friends of that sort is another big contribution

to having a good life.

So we talked about big things that are beyond ourselves,

but we’re also just human and mortal.

Do you ponder your own mortality?

Is there insights about your philosophy,

the ethics that you gain from pondering your own mortality?

Clearly, you know, as you get into your 70s,

you can’t help thinking about your own mortality.

But I don’t know that I have great insights

into that from my philosophy.

I don’t think there’s anything after the death of my body,

you know, assuming that we won’t be able to upload my mind

into anything at the time when I die.

So I don’t think there’s any afterlife

or anything to look forward to in that sense.

Do you fear death?

So if you look at Ernest Becker

and describing the motivating aspects

of our ability to be cognizant of our mortality,

do you have any of those elements

in your drive and your motivation in life?

I suppose the fact that you have only a limited time

to achieve the things that you want to achieve

gives you some sort of motivation

to get going and achieving them.

And if we thought we were immortal,

we might say, ah, you know,

I can put that off for another decade or two.

So there’s that about it.

But otherwise, you know, no,

I’d rather have more time to do more.

I’d also like to be able to see how things go

that I’m interested in, you know.

Is climate change gonna turn out to be as dire

as a lot of scientists say that it is going to be?

Will we somehow scrape through

with less damage than we thought?

I’d really like to know the answers to those questions,

but I guess I’m not going to.

Well, you said there’s nothing afterwards.

So let me ask the even more absurd question.

What do you think is the meaning of it all?

I think the meaning of life is the meaning we give to it.

I don’t think that we were brought into the universe

for any kind of larger purpose.

But given that we exist,

I think we can recognize that some things

are objectively bad.

Extreme suffering is an example,

and other things are objectively good,

like having a rich, fulfilling, enjoyable,

pleasurable life, and we can try to do our part

in reducing the bad things and increasing the good things.

So one way, the meaning is to do a little bit more

of the good things, objectively good things,

and a little bit less of the bad things.

Yes, so do as much of the good things as you can

and as little of the bad things.

You beautifully put, I don’t think there’s a better place

to end it, thank you so much for talking today.

Thanks very much, Lex.

It’s been really interesting talking to you.

Thanks for listening to this conversation

with Peter Singer, and thank you to our sponsors,

Cash App and Masterclass.

Please consider supporting the podcast

by downloading Cash App and using the code LexPodcast,

and signing up at masterclass.com slash Lex.

Click the links, buy all the stuff.

It’s the best way to support this podcast

and the journey I’m on in my research and startup.

If you enjoy this thing, subscribe on YouTube,

review it with five stars on Apple Podcast, support on Patreon,

or connect with me on Twitter at Lex Fridman,

spelled without the E, just F R I D M A N.

And now, let me leave you with some words

from Peter Singer: what one generation finds ridiculous,

the next accepts, and the third shudders

when it looks back at what the first did.

Thank you for listening, and hope to see you next time.
