Lex Fridman Podcast - #65 - Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI

The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics

for his integration of economic science with the psychology of human behavior,

judgment, and decision making. He's the author of the popular book Thinking, Fast and Slow, which

summarizes in an accessible way his research of several decades, often in collaboration with

Amos Tversky on cognitive biases, prospect theory, and happiness. The central thesis of this work

is the dichotomy between two modes of thought. What he calls system one is fast, instinctive,

and emotional. System two is slower, more deliberative, and more logical. The book

delineates cognitive biases associated with each of these two types of thinking.

His study of the human mind and its peculiar and fascinating limitations is both instructive and

inspiring for those of us seeking to engineer intelligent systems. This is the Artificial

Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts,

follow on Spotify, support it on Patreon, or simply connect with me on Twitter at

Lex Fridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the introduction.

I’ll do one or two minutes after introducing the episode and never any ads in the middle

that can break the flow of the conversation. I hope that works for you and doesn’t hurt the

listening experience. This show is presented by Cash App, the number one finance app in the App

Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell,

and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy

fractions of a stock, say one dollar’s worth, no matter what the stock price is. Broker services

are provided by Cash App Investing, a subsidiary of Square and member SIPC. I’m excited to be

working with Cash App to support one of my favorite organizations, called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands

of students in over 110 countries and have a perfect rating at Charity Navigator, which means

that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google

Play and use code LEXPODCAST, you’ll get $10 and Cash App will also donate $10 to FIRST,

which again is an organization that I’ve personally seen inspire girls and boys to dream

of engineering a better world. And now here’s my conversation with Daniel Kahneman.

You tell a story of an SS soldier early in the war, World War II, in Nazi-occupied France, in Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy, maybe not realizing that you were Jewish.

Not maybe, certainly not.

So I told you I’m from the Soviet Union that was significantly impacted by the war as well,

and I’m Jewish as well. What do you think World War II taught us about human psychology broadly?

Well, I think the only big surprise is the extermination policy, the genocide, by the German people. That, when you look back on it, I think is a major surprise.

It’s a surprise because…

It’s a surprise that they could do it. It’s a surprise that enough people

willingly participated in that. This is a surprise. Now it’s no longer a surprise,

but it’s changed many people’s views, I think, about human beings. Certainly for me,

the Eichmann trial. That teaches you something, because it's very clear that if it could happen

in Germany, it could happen anywhere. It’s not that the Germans were special.

This could happen anywhere.

So what do you think that is? Do you think we’re all capable of evil? We’re all capable of cruelty?

I don’t think in those terms. I think that what is certainly possible is you can dehumanize people

so that you treat them not as people anymore, but as animals. And the same way that you can slaughter

animals without feeling much of anything, it can be the same. And I think the combination of dehumanizing the other side and having uncontrolled power over other people doesn't bring out the most generous aspects of human nature.

So that Nazi soldier, he was a good man. And he was perfectly capable of killing a lot of people,

and I’m sure he did.

But what did the Jewish people mean to the Nazis? What was behind the dismissal of Jewish people as not worthy of being treated as human?

Again, this is surprising in that it was so extreme, but it's not outside human nature. I don't want to call it evil, but the distinction between the in-group and the out-group, that is very basic. So that's built in. The loyalty and affection towards the in-group and the willingness to dehumanize the out-group, that is in human nature.

That’s what I think probably didn’t need the Holocaust to teach us that. But the Holocaust is

a very sharp lesson of what can happen to people and what people can do.

So the effect of the in-group and the out-group…

It's clear. Those were people, you could shoot them. They were not human. There was no empathy, or very, very little empathy left.

So occasionally there might have been. And very quickly, by the way, the empathy disappeared, if there was any initially. And the fact that everybody around you was doing it, the whole group doing it, everybody shooting Jews, I think that makes it permissible.

Now, whether it could happen in every culture, or whether the Germans were just particularly efficient and disciplined so they could get away with it, that's an interesting question.

Are these artifacts of history, or is it human nature?

I think that's really human nature. You put some people in a position of power relative to other people, and then they become less human, they become different.

But in general, in war, outside of the concentration camps in World War II, it seems that war brings out darker sides of human nature, but also the beautiful things about human nature.

Well, what it brings out is the loyalty among soldiers. It brings out the bonding, male bonding, which I think is a very real thing that happens. And there is

a certain thrill to friendship, and there is certainly a certain thrill to friendship under

risk and to shared risk. And so people have very profound emotions, up to the point where it gets

so traumatic that little is left.

So let's talk about psychology a little bit. In your book, Thinking, Fast and Slow, you describe two modes of thought: system one, the fast, instinctive, and emotional one, and system two, the slower, more deliberative, and more logical one. At the risk of asking

Darwin to discuss the theory of evolution, can you describe, for people who have not read your book, the distinguishing characteristics of the two systems?

Well, the word system is a bit misleading, but at the same time that it's misleading, it's also very useful. What I call system one,

it’s easier to think of it as a family of activities. And primarily, the way I describe it

is there are different ways for ideas to come to mind. And some ideas come to mind automatically,

and the standard example is two plus two, and then something happens to you. And in other cases,

you’ve got to do something, you’ve got to work in order to produce the idea. And my example,

I always give the same pair of numbers as 27 times 14, I think. SL. You have to perform some

algorithm in your head, some steps. IA Yes, and it takes time. It’s a very difference. Nothing

comes to mind except something comes to mind, which is the algorithm, I mean, that you’ve got

to perform. And then it’s work, and it engages short term memory, it engages executive function,

and it makes you incapable of doing other things at the same time. So the main characteristic of

system two is that there is mental effort involved, and there is a limited capacity for mental effort,

whereas system one is effortless, essentially. That’s the major distinction.
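As a loose illustration of that distinction (a sketch of my own, not anything from the book or the conversation; the memorized table and the long-multiplication routine are just illustrative stand-ins), retrieval versus stepwise computation might look like this:

```python
# Toy contrast between system-one retrieval and system-two computation.
# The "memorized" table and the multiplication routine are illustrative
# stand-ins, not a claim about how the brain implements either system.

MEMORIZED = {("2", "+", "2"): 4}  # answers that just come to mind

def system_one(a, op, b):
    """Effortless: a single associative lookup, or nothing at all."""
    return MEMORIZED.get((a, op, b))

def system_two(a, b):
    """Effortful: a deliberate multi-step algorithm (long multiplication)."""
    total = 0
    for i, digit in enumerate(reversed(str(b))):
        total += int(a) * int(digit) * 10**i  # one deliberate step per digit
    return total

print(system_one("2", "+", "2"))  # 4, immediately
print(system_two(27, 14))         # 378, only after doing the work
```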

So it's really convenient to talk about two systems, but you also mentioned just now, and in general, that there are no two distinct systems in the brain, from a neurobiological or even from a psychological perspective. Why then, from the experiments you've conducted, does there seem to be two emergent modes of thinking? At some point, did these kinds of systems come into a brain architecture, maybe one that mammals share? Or do you not think of it in those terms at all, that it's all a mush and these two things just emerge?

Evolutionary theorizing about this is cheap and easy. The way I think about it is that it's very clear that animals have a perceptual system, and that includes an ability

to understand the world, at least to the extent that they can predict, they can’t explain anything,

but they can anticipate what’s going to happen. And that’s a key form of understanding the world.

And my crude idea is that what I call system two, well, system two grew out of this.

And, you know, there is language and there is the capacity of manipulating ideas and the capacity

of imagining futures and of imagining counterfactual things that haven’t happened

and to do conditional thinking. And there are really a lot of abilities that without language

and without the very large brain that we have compared to others would be impossible. Now,

system one is more like what the animals are, but system one also can talk. I mean,

it has language. It understands language. Indeed, it speaks for us. I mean, you know,

I’m not choosing every word as a deliberate process. The words, I have some idea and then

the words come out, and that's automatic and effortless.

And many of the experiments you've done show that system one exists, that it does speak for us, and that we should be careful about the voice it provides.

Well, we have to trust it, because of the speed at which it acts. If we were dependent on system two for survival, we wouldn't survive very long, because it's very slow.

Yeah, crossing the street.

Crossing the street. I mean, many things depend on their being automatic. One very important aspect

of system one is that it’s not instinctive. You use the word instinctive. It contains skills that

clearly have been learned. So skilled behavior, like driving a car or speaking, in fact, skilled behavior has to be learned. And so, you know, you don't come equipped with

driving. You have to learn how to drive and you have to go through a period where driving is not

automatic before it becomes automatic.

Yeah. This is where you talk about heuristics and biases: to make it automatic, you construct a pattern, and then system one essentially matches a new experience against a previously seen pattern. And when that match is not a good one, that's when all the cognitive mess happens. But most of the time it works.

Most of the time, the anticipation of what's going to happen next

is correct. And most of the time the plan about what you have to do is correct. And so most of

the time everything works just fine. What’s interesting actually is that in some sense,

system one is much better at what it does than system two is at what it does. That is, there is

that quality of effortlessly solving enormously complicated problems, which clearly exists so

that the chess player, a very good chess player, all the moves that come to their mind are strong

moves. So all the selection of strong moves happens unconsciously and automatically and

very, very fast. And all that is in system one. So system two verifies.

So along this line of thinking, really what we are are machines that construct

a pretty effective system one. You could think of it that way. So we’re not talking about humans,

but if we think about building artificial intelligence systems, robots, do you think

all the features and bugs that you have highlighted in human beings are useful

for constructing AI systems? So both systems are useful for perhaps instilling in robots?

What is happening these days is that actually what is happening in deep learning is more like

a system one product than like a system two product. I mean, deep learning matches patterns and anticipates what's going to happen. So it's highly predictive. What deep learning doesn't have, and many people think that this is critical, is the ability to reason. So there is no system two there. But I think very importantly, it doesn't have any

causality or any way to represent meaning and to represent real interactions. So until that is

solved, what can be accomplished is marvelous and very exciting, but limited.

That’s actually really nice to think of current advances in machine learning as essentially

system one advances. So how far can we get with just system one, if we think of deep learning in artificial intelligence systems?

I mean, you know, it's very clear that DeepMind has already

gone way beyond what people thought was possible. I think the thing that has impressed me most about

the developments in AI is the speed. It’s that things, at least in the context of deep learning,

and maybe this is about to slow down, but things moved a lot faster than anticipated.

The transition from solving chess to solving Go, that’s bewildering how quickly it went.

The speed of the move from AlphaGo to AlphaZero is sort of bewildering. Now, clearly, there are many problems that you can solve that way, but there are some problems

for which you need something else. Something like reasoning.

Well, reasoning, and also, you know, one of the real mysteries. The psychologist Gary Marcus, who is also a critic of AI, points out, and I think he has a point, that humans learn quickly. Children don't need a million examples; they need two or three. So, clearly,

there is a fundamental difference. And what enables a machine to learn quickly, what you have

to build into the machine, because it's clear that you have to build some expectations or something into the machine to make it ready to learn quickly. That at the moment seems to be

unsolved. I’m pretty sure that DeepMind is working on it, but if they have solved it, I haven’t heard

yet. They’re trying to actually, them and OpenAI are trying to start to get to use neural networks

to reason. So, assemble knowledge. Of course, causality is, temporal causality, is out of

reach to most everybody. You mentioned the benefits of System 1 is essentially that it’s

fast, allows us to function in the world.

Fast and skilled, yeah.

It’s skill.

And it has a model of the world. You know, in a sense, I mean, the early phase of AI attempted to model reasoning. And they were moderately successful, but, you know, reasoning

by itself doesn’t get you much. Deep learning has been much more successful in terms of, you know,

what they can do. But now, it’s an interesting question, whether it’s approaching its limits.

What do you think?

I think absolutely. So, I just talked to Yann LeCun. He thinks that we're not going to hit the limits with neural networks, that ultimately,

this kind of System 1 pattern matching will start to look like System 2 without significant

transformation of the architecture. So, I’m more with the majority of the people who think that,

yes, neural networks will hit a limit in their capability.

On the one hand, I have heard him say essentially that, you know, what they have accomplished is not a big deal, that they have just touched the surface, that basically they can't do unsupervised learning in an effective way. But you're telling me that he thinks that within the current architecture, you can do causality and reasoning?

So, he’s very much a pragmatist in a sense that’s saying that we’re very far away,

that there’s still, I think there’s this idea that he says is, we can only see

one or two mountain peaks ahead and there might be either a few more after or

thousands more after. Yeah, so that kind of idea.

I heard that metaphor.

Yeah, right. But nevertheless, he doesn't see the final answer as fundamentally different from the one we currently have, so with neural networks being a huge part of that.

Yeah, I mean, that’s very likely because pattern matching is so much of what’s going on.

And you can think of neural networks as processing information sequentially.

Yeah, I mean, you know, there is an important aspect to, for example, you get systems that

translate and they do a very good job, but they really don’t know what they’re talking about.

And for that, I’m really quite surprised. For that, you would need an AI that has sensation,

an AI that is in touch with the world.

Yes, self-awareness, and maybe even something that resembles consciousness, those kinds of ideas.

Certainly awareness of what's going on, so that the words have meaning, or are in touch with some perception or some action.

Yeah, so that’s a big thing for Jan and as what he refers to as grounding to the physical space.

So that’s what we’re talking about the same thing.

Yeah, so how do you ground?

I mean, without grounding, you get a machine that doesn't know what it's talking about, because it is talking about the world, ultimately.

The question, the open question is what it means to ground. I mean, we’re very

human centric in our thinking, but what does it mean for a machine to understand what it means

to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans have, all of these elements? It's a very open question.

You know, I’m not sure about having a body, but having a perceptual system,

having a body would be very helpful too, I mean, if you think about mimicking a human. But having perception, that seems to be essential, so that you can build, you can accumulate knowledge about the world. You can imagine a human completely paralyzed, and there's a lot that the human brain could learn, you know, with a paralyzed body.

So if we got a machine that could do that, that would be a big deal.

And then the flip side of that, something you see in children, and something that in the machine learning world is called active learning, is being able to play with the world. How important for developing System 1 or System 2 do you think it is to play with the world, to be able to interact with the world?

A lot of what you learn is to anticipate the outcomes of your actions. I mean, you can see how babies learn it with their hands, how they learn to connect the movements of their hands with something that clearly happens in the brain, the ability of the brain to learn new patterns. So, you know,

it’s the kind of thing that you get with artificial limbs, that you connect it and then people learn

to operate the artificial limb, you know, really impressively quickly, at least from what I hear.

So we have a system that is ready to learn the world through action.

At the risk of going into way too mysterious a land,

what do you think it takes to build a system like that? Obviously, we’re very far from understanding

how the brain works, but how difficult is it to build this mind of ours?

You know, I mean, I think that Yann LeCun's answer, that we don't know how many mountains there are, I think that's a very good answer. If you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic than that, which is where Demis Hassabis is, and Yann is. The people actually doing the work are fairly realistic, I think.

To maybe phrase it another way,

from a perspective not of building it, but from understanding it,

how complicated are human beings in the following sense? You know, I work with autonomous vehicles

and pedestrians, so we try to model pedestrians. How difficult is it to model a human being,

their perception of the world, the two systems they operate under, sufficiently to be able to

predict whether the pedestrian is going to cross the road or not?

MG I’m, you know, I’m fairly optimistic about that, actually, because what we’re talking about

is a huge amount of information that every vehicle has, and that feeds into one system,

into one gigantic system. And so anything that any vehicle learns becomes part of what the whole

system knows. And with a system multiplier like that, there is a lot that you can do.

So human beings are very complicated, and the system is going to make mistakes, but humans make mistakes too. I think they are able to anticipate pedestrians, otherwise a lot would happen. They're able to get into a roundabout and into traffic, so they must be able to anticipate how people will react when they're sneaking in. And there's a lot of learning that's involved in that.

Currently, the pedestrians are treated as things that cannot be hit, and they're not treated as agents with whom you interact in a game-theoretic way. So, I mean, it's not,

it’s a totally open problem, and every time somebody tries to solve it, it seems to be harder

than we think. And nobody’s really tried to seriously solve the problem of that dance,

because I’m not sure if you’ve thought about the problem of pedestrians, but you’re really

putting your life in the hands of the driver.

You know, there is a dance, there's a part of the dance that would be quite complicated,

but for example, when I cross the street and there is a vehicle approaching, I look the driver

in the eye, and I think many people do that. And, you know, that’s a signal that I’m sending,

and I would be sending that to a machine, to an autonomous vehicle, and it had better understand

it, because it means I’m crossing.

RL So, and there’s another thing you do, that actually, so I’ll tell you what you do,

because we watched, I’ve watched hundreds of hours of video on this, is when you step

in the street, you do that before you step in the street, and when you step in the street,

you actually look away.

Look away.

Yeah. Now, what is that? What that's saying is, I mean, you're trusting that the car that hasn't slowed down yet will slow down.

RL Yeah. And you’re telling him, I’m committed. I mean, this is like in a game of chicken,

so I’m committed, and if I’m committed, I’m looking away. So, there is, you just have

to stop.

So, the question is whether a machine that observes that needs to understand mortality.

RL Here, I’m not sure that it’s got to understand so much as it’s got to anticipate. So, and

here, but you know, you’re surprising me, because here I would think that maybe you

can anticipate without understanding, because I think this is clearly what’s happening in

playing go or in playing chess. There’s a lot of anticipation, and there is zero understanding.

Exactly.

RL So, I thought that you didn’t need a model of the human and a model of the human mind

to avoid hitting pedestrians, but you are suggesting that actually…

RL There you go, yeah.

RL You do. Then it’s a lot harder, I thought.

And I have a follow-up question to see where your intuition lies. It seems that almost every robot-human collaboration system is a lot harder than people realize. So, do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like in the Tesla Autopilot, but just in tasks in general. If, as we talked about, current neural networks are kind of system one, do you think those same systems can borrow humans for system-two-type tasks and collaborate

successfully?

Well, I think that in any system where humans and the machine interact, the human

will be superfluous within a fairly short time. That is, if the machine is advanced

enough so that it can really help the human, then it may not need the human for a long

time. Now, it would be very interesting if there are problems that for some reason the

machine cannot solve, but that people could solve. Then you would have to build into the

machine an ability to recognize that it is in that kind of problematic situation and

to call the human. That cannot be easy without understanding. That is, it must be very difficult

to program a recognition that you are in a problematic situation without understanding

the problem.

SL. That’s very true. In order to understand the full scope of situations that are problematic,

you almost need to be smart enough to solve all those problems.

RL It’s not clear to me how much the machine will need the human. I think the example of

chess is very instructive. I mean, there was a time at which Kasparov was saying that human-machine combinations would beat everybody. Now even Stockfish doesn't need people, and AlphaZero certainly doesn't need people.

The question is, just like you said, how many problems are like chess and how many

problems are not like chess? Every problem probably in the end is like chess. The question

is, how long is that transition period?

RL That’s a question I would ask you. Autonomous vehicle, just driving, is probably a lot more

complicated than Go to solve that problem. Because it’s open. That’s not surprising to

me because there is a hierarchical aspect to this, which is recognizing a situation

and then within the situation bringing up the relevant knowledge. For that hierarchical

type of system to work, you need a more complicated system than we currently have.

A lot of people, as human beings, and this is probably one of the cognitive biases, think of driving as pretty simple, because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is based on very limited knowledge, based on how hard it is for them to do the task. And then they take it for granted. Maybe you can speak to that, because most people tell me driving is trivial, and that humans in fact are terrible at driving. And I watch humans, and humans are actually incredible at driving, and driving is really terribly difficult. Is that just another element of the effects

that you’ve described in your work on the psychology side?

No, I mean, I haven’t really, I would say that my research has contributed nothing to

understanding the ecology and to understanding the structure of situations and the complexity

of problems. So all we know is very clear that that goal, it’s endlessly complicated,

but it’s very constrained. And in the real world, there are far fewer constraints and

many more potential surprises.

SL. So that’s obvious because it’s not always obvious to people, right? So when you think

about…

Well, I mean, you know, people thought that reasoning was hard and perceiving was easy,

but you know, they quickly learned that actually modeling vision was tremendously complicated, whereas even proving theorems was relatively straightforward.

To push back on that a little bit, on the quickly part: it took several decades to learn that, and most people still haven't learned it. I mean, AI researchers have, of course, but if you drift a little bit outside the specific AI field, the intuition is still that perception is easy to solve.

No, I mean, that’s true. Intuitions, the intuitions of the public haven’t changed

radically. And they are, as you said, they’re evaluating the complexity of problems by how

difficult it is for them to solve the problems. And that’s got very little to do with the

complexities of solving them in AI.

How do you think, from the perspective of an AI researcher, we deal with the intuitions of the public? Arguably, the combination of hype, investment, and public intuition is what led to the AI winters. I'm sure the same could be applied to tech in general: the intuition of the public leads to media hype, leads to companies investing in the tech, and then the tech doesn't make the companies money, and then there's a crash.

Is there a way to educate people to fight the, let’s call it system one thinking?

In general, no. I think that’s the simple answer. And it’s going to take a long time

before the understanding of what those systems can do becomes public knowledge. And then

the expectations, there are several aspects that are going to be very complicated. The

fact that you have a device that cannot explain itself is a major, major difficulty. And we’re

already seeing that. I mean, this is really something that is happening. So it’s happening

in the judicial system. So you have systems that are clearly better at predicting parole

violations than judges, but they can’t explain their reasoning. And so people don’t want

to trust them.

We seem to, in system one, use cues to make judgments about our environment. So on this explainability point, do you think humans can explain stuff?

No, but I mean, there is a very interesting aspect of that. Humans think they can explain

themselves. So when you say something and I ask you, why do you believe that? Then reasons

will occur to you. But actually, my own belief is that in most cases, the reasons have very

little to do with why you believe what you believe. So that the reasons are a story that

comes to your mind when you need to explain yourself. But people traffic in those explanations. I mean, human interaction depends on those shared fictions and the stories that people tell themselves.

You just made me actually realize, and we'll talk about stories in a second, that, not to be cynical about it, but there's a whole movement of people trying to do explainable AI, and really, AI doesn't necessarily need to explain itself.

It just needs to tell a convincing story.

Yeah, absolutely.

It doesn’t necessarily, the story doesn’t necessarily need to reflect the truth as it

might, it just needs to be convincing. There’s something to that.

You can say exactly the same thing in a way that sounds cynical or doesn’t sound cynical.

Right.

But the objective of having an explanation is to tell a story that will be acceptable

to people. And for it to be acceptable, and to be robustly acceptable, it has to have some elements of truth. But the objective is for people to accept it.

It’s quite brilliant, actually. But so on the, on the stories that we tell, sorry to

ask me, ask you the question that most people know the answer to, but you talk about two

selves in terms of how life is lived, the experienced self and remembering self. Can

you describe the distinction between the two?

Well, sure. I mean, there is an aspect of life where, you know, most of the time we just live, and we have experiences, and they're better and they're worse, and it goes on over time. And mostly we forget everything that happens, or we forget most of what happens.

Then occasionally, when something ends, or at different points, you evaluate the past and you form a memory, and the memory is schematic. It's not that you can roll a film of an interaction. You construct, in effect, the elements of a story about an episode. So there is the experience, and there is the story that is created about the experience. And that's what I call the remembering self. So I had the image of two selves. There is a self that

lives and there is a self that evaluates life. Now the paradox and the deep paradox in that

is that we have one system, or one self, that does the living, but the other system, the remembering self, is all we get to keep. And basically decision making and everything that we do is governed by our memories, not by what actually happened. It's governed by the story that we told ourselves, or by the story that we're keeping. So that's the distinction.

I mean, there’s a lot of brilliant ideas about the pursuit of happiness that come out of

that. What are the properties of happiness which emerge from a remembering self?

There are properties of how we construct stories that are really important. I studied a few, but a couple are really very striking. One is that in stories, time doesn't matter. There's a sequence of events, or there are highlights or not. And how long it took, you know, they lived happily ever after, or three years later, or something, time really doesn't matter. In stories, events matter, but time doesn't. That leads to a very interesting set of problems, because time is all we've got to live. I mean, you know, time is the currency of life. And yet time is not represented, basically, in evaluated memories. So that creates a lot of paradoxes that I've thought about.

Yeah. They’re fascinating. But if you were to give advice on how one lives a happy life

based on such properties, what’s the optimal?

You know, I gave up, I abandoned happiness research because I couldn't solve that problem. I couldn't see it. In the first place, it's very clear that if you do talk in terms of those two selves, then what makes the remembering self happy and what makes the experiencing self happy are different things. And I asked the question of, suppose

you’re planning a vacation and you’re just told that at the end of the vacation, you’ll

get an amnesic drug, so you remember nothing. And they’ll also destroy all your photos.

So there’ll be nothing. Would you still go to the same vacation? And, and it’s, it turns

out we go to vacations in large part to construct memories, not to have experiences, but to

construct memories. And it turns out that the vacation that you would want for yourself,

if you knew, you will not remember is probably not the same vacation that you will want for

yourself if you will remember. So I have no solution to these problems, but clearly those

are big issues.

And you’ve talked about, you’ve talked about sort of how many minutes or hours you spend

about the vacation. It’s an interesting way to think about it because that’s how you really

experience the vacation outside the being in it. But there’s also a modern, I don’t

know if you think about this or interact with it. There’s a modern way to, um, magnify the

remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people

live life for the picture that you take, that you post somewhere. And now thousands of people

share and potentially potentially millions. And then you can relive it even much more

than just those minutes. Do you think about that magnification much?

You know, I’m too old for social networks. I, you know, I’ve never seen Instagram, so

I cannot really speak intelligently about those things. I’m just too old.

But it’s interesting to watch the exact effects you’ve described.

It could make a very big difference. I mean, it will also make a difference, and I don't know whether, uh… It's clear that in some ways the devices that serve us supplant functions. So you don't have to remember phone numbers. You really don't have to know facts. I mean, the number of conversations I'm involved in where somebody says, well, let's look it up. So in a way it's made conversations, well, it means that it's much less important to know things. You know, it used to be very important to know things. This is changing. So the requirements that we have for ourselves and for other people are changing because of all those supports. And I have no idea what Instagram does, but…

Well, I'll tell you, I wish my remembering self could just enjoy this conversation, but I'll get to enjoy it even more by watching it and then talking to others. It'll be about a hundred thousand people, as scary as it is to say, that will listen or watch this, right?

It changes things. It changes the experience of the world, in that you seek out experiences which could be shared in that way. And I haven't seen it, but it's the same effects that you described. And I don't think the psychology of that magnification has been described yet, because it's a new world.

But the sharing, there was a time when people read books, and you could assume that your friends had read the same books that you read. So there was a kind of invisible sharing. There was a lot of sharing going on, and there was a lot of assumed common knowledge, and, you know, that was built in. I mean, it was obvious that you had read the New York Times. It was obvious that you had read the reviews. So a lot was taken for granted that was shared. And, you know, when there were three television channels, it was obvious that you'd seen one of them, probably the same one. So sharing was always there. It was just different.

At the risk of inviting mockery from you, let me say that I'm also a fan of Sartre and Camus and the existentialist philosophers. And I'm joking, of course, about the mockery. But from the perspective of the two selves, what do you think of the existentialist philosophy of life? That is, trying to really emphasize the experiencing self as the proper way, or the best way, to live life.

I don’t know enough philosophy to answer that, but it’s not, uh, you know, the emphasis on,

on experience is also the emphasis in Buddhism.

Yeah, right. That’s right.

So, uh, that’s, you just have got to, to experience things and, and, and not to evaluate and not

to pass judgment and not to score, not to keep score. So, uh,

When you look at the grand picture of experience, do you think there's something to that, that one of the ways to achieve contentment, and maybe even happiness, is letting go of the procedures of the remembering self?

Well, yeah, I mean, I think, you know, if one could imagine a life in which people don't score themselves, it feels as if that would be a better life, as if the self-scoring, and, you know, the how-am-I-doing kind of question, is not a very happy thing to have.

But I got out of that field because I couldn't solve that problem, and that was because my intuition was that the experiencing self, that's reality.

But then it turns out that what people want for themselves is not experiences. They want

memories and they want a good story about their life. And so you cannot have a theory

of happiness that doesn’t correspond to what people want for themselves. And when I, when

I realized that this, this was where things were going, I really sort of left the field

of research.

Do you think there’s something instructive about this emphasis of reliving memories in

building AI systems. So currently artificial intelligence systems are more like experiencing

self in that they react to the environment. There’s some pattern formation like a learning

so on, but you really don’t construct memories, uh, except in reinforcement learning every

once in a while that you replay over and over.

Yeah, but you know, that in principle would not be…

Do you think that’s useful? Do you think it’s a feature or a bug of human beings that we,

that we look back?

Oh, I think that’s definitely a feature. That’s not a bug. I mean, you, you have to look back

in order to look forward. So, uh, without, without looking back, you couldn’t, you couldn’t

really intelligently look forward.

You’re looking for the echoes of the same kind of experience in order to predict what

the future holds.

Yeah.

So, Viktor Frankl, in his book Man's Search for Meaning, I'm not sure if you've read it, describes his experience at the concentration camps during World War II, as a way of describing that finding, identifying, a purpose in life, a positive purpose in life, can save one from suffering. First of all, do you connect with the philosophy that he describes there?

Not really. I mean, I can really see that somebody who has that feeling of purpose and meaning and so on, that could sustain you. I in general don't have that feeling, and I'm pretty sure that if I were in a concentration camp, I'd give up and die, you know? So he talks, he is a survivor.

Yeah.

And, you know, he survived with that. And I'm not sure how essential to survival that sense is, but I do know, when I think about myself, that I would have given up: oh, this isn't going anywhere. And there is a sort of character that manages to survive in conditions like that. And then, because they survive, they tell stories, and it sounds as if they survived because of what they were doing. We have no idea. They survived because of the kind of people that they are, and the kind of people who survive tell themselves stories of a particular kind. So I'm not…

So you don’t think seeking purpose is a significant driver in our being?

Oh, I mean, it’s, it’s a very interesting question because when you ask people whether

it’s very important to have meaning in their life, they say, oh yes, that’s the most important

thing. But when you ask people, what kind of a day did you have? And, and you know,

what were the experiences that you remember? You don’t get much meaning. You get social

experiences. Then, uh, and, and some people say that, for example, in, in, in child, you

know, in taking care of children, the fact that they are your children and you’re taking

care of them, uh, makes a very big difference. I think that’s entirely true. Uh, but it’s

more because of a story that we’re telling ourselves, which is a very different story

when we’re taking care of our children or when we’re taking care of other things.

Jumping around a little bit: you've done a lot of experiments, so let me ask a question. Most of the work I do, for example, is in the real world, but most of the clean, good science that you can do is in the lab. So with that distinction, do you think we can understand the fundamentals of human behavior through controlled experiments in the lab? If we talk about pupil diameter, for example, it's much easier to measure when you can control lighting conditions, right? When we look at driving, lighting variation almost completely destroys your ability to use pupil diameter. But in the lab, as I mentioned, for semi-autonomous or autonomous vehicles in driving simulators, we don't capture true, honest human behavior in that particular domain. So what's your intuition? How much of human behavior can we study in the controlled environment of the lab?

A lot, but you'd have to verify

it, you know. Your conclusions are basically limited to the situation, to the experimental situation. Then you have to make the big inductive leap to the real world. And that's the flair. That's where the difference, I think, between the good psychologists and the mediocre ones is: in the sense that your experiment captures something that's important and something that's real, while others are just running experiments.

So what is that, like the birth of an idea, to its development in your mind, to something that leads to an experiment? Is that similar to maybe what Einstein or a good physicist does? You basically use your intuition to build up.

Yeah, but I mean, you know, it's very skilled intuition. I just had that experience, actually. I had an idea that turned out to be a very good idea a couple of days ago, and you have a sense of that building up. I'm working with a collaborator, and he essentially was saying, you know, what are you doing? What's going on? And I really couldn't exactly explain it, but I knew this was going somewhere. You know, I've been around that game for a very long time, and so you develop that anticipation that, yes, this is worth following up. That's part of the skill.

Is that something you can reduce to words in describing a process in the form of advice

to others?

No.

Follow your heart, essentially.

I mean, you know, it’s, it’s like trying to explain what it’s like to drive. It’s not,

you’ve got to break it apart and it’s not.

And then you lose.

And then you lose the experience.

You mentioned collaboration. You’ve written about your collaboration with Amos Tversky

and this is you writing: the 12 or 13 years in which most of our work was joint were years

of interpersonal and intellectual bliss. Everything was interesting. Almost everything

was funny. And there was the recurrent joy of seeing an idea take shape. So many times in

those years, we shared the magical experience of one of us saying something, which the other

one would understand more deeply than the speaker had done. Contrary to the old laws

of information theory, it was common for us to find that more information was received

than had been sent. I have almost never had the experience with anyone else. If you have

not had it, you don’t know how marvelous collaboration can be.

So let me ask a perhaps silly question: how does one find and create such a collaboration? That may be like asking, how does one find love?

Yeah, you have to be lucky. And I think you have to have the character for that because

I’ve had many collaborations. I mean, none were as exciting as with Amos, but I’ve had

and I’m having just very. So it’s a skill. I think I’m good at it. Not everybody is good

at it. And then it’s the luck of finding people who are also good at it.

Is there advice, in some form, for a young scientist who also seeks to violate this law of information

theory?

I really think it’s so much luck is involved. And in those really serious collaborations,

at least in my experience, are a very personal experience. And I have to like the person

I’m working with. Otherwise, I mean, there is that kind of collaboration, which is like

an exchange, a commercial exchange of giving this, you give me that. But the real ones

are interpersonal. They’re between people who like each other and who like making each

other think and who like the way that the other person responds to your thoughts. You

have to be lucky.

But I've already noticed that, even just with me showing up here, you've quickly started digging in on a particular problem I'm working on, and already new information has started to emerge. Is that the process, just the process of being curious, of talking to people about problems and seeing?

I’m curious about anything to do with AI and robotics. And I knew you were dealing with

that. So I was curious.

Just follow your curiosity. Jumping around, on the psychology front: there's the dramatic-sounding terminology of the replication crisis, but really it's just the effect that, at times, studies are not fully generalizable. They don't…

You are being polite. It’s worse than that.

Is it? So I’m actually not fully familiar to the degree how bad it is, right? So what

do you think is the source? Where do you think?

I think I know what’s going on actually. I mean, I have a theory about what’s going on

and what’s going on is that there is, first of all, a very important distinction between

two types of experiments. And one type is within subject. So it’s the same person has

two experimental conditions. And the other type is between subjects where some people

are this condition, other people are that condition. They’re different worlds. And between

subject experiments are much harder to predict and much harder to anticipate. And the reason,

and they’re also more expensive because you need more people. And it’s just, so between

subject experiments is where the problem is. It’s not so much in within subject experiments,

it’s really between. And there is a very good reason why the intuitions of researchers about

between subject experiments are wrong. And that’s because when you are a researcher,

you’re in a within subject situation. That is you are imagining the two conditions and

you see the causality and you feel it. But in the between subject condition, they live

in one condition and the other one is just nowhere. So our intuitions are very weak about

between subject experiments. And that I think is something that people haven’t realized.

And in addition, because of that, we have no idea about the power of experimental manipulations, because the same manipulation is much more powerful when you

are in the two conditions than when you live in only one condition. And so the experimenters

have very poor intuitions about between-subject experiments. And there is something else which

is very important, I think, which is that almost all psychological hypotheses are true.

That is in the sense that, you know, directionally, if you have a hypothesis that A really causes

B, that it’s not true that A causes the opposite of B. Maybe A just has very little effect,

but hypotheses are mostly true, except they're mostly very weak. They're much weaker than you think when you are imagining them. So the reason I'm excited about that is that I recently

heard about some friends of mine who essentially funded 53 studies of behavioral

change by 20 different teams of people with a very precise objective of changing the number

of times that people go to the gym. And the success rate was zero. Not one of the 53 studies

worked. Now, what’s interesting about that is those are the best people in the field

and they have no idea what’s going on. So they’re not calibrated. They think that it’s

going to be powerful because they can imagine it, but actually it’s just weak because you

are focusing on your manipulation and it feels powerful to you. There’s a thing that I’ve

written about that’s called the focusing illusion. That is that when you think about something,

it looks very important, more important than it really is.
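To make the within-versus-between intuition concrete, here is a toy simulation (my own sketch, not anything from the conversation; the sample size, effect size, and noise levels are arbitrary illustrative numbers):

```python
# Toy simulation of why researchers' within-subject intuitions mislead them
# about between-subject designs: the same weak effect is easier to detect
# when each person serves as their own control. All numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect = 40, 0.3                  # a traditional sample size, a weak effect
person = rng.normal(0, 1, n)         # stable person-to-person differences

# Within subjects: the same people experience both conditions, so the
# person-to-person variability cancels out of the comparison.
a = person + rng.normal(0, 0.5, n)
b = person + effect + rng.normal(0, 0.5, n)
print("within-subject p value: ", stats.ttest_rel(a, b).pvalue)

# Between subjects: two separate groups, so individual differences stay
# in the noise and swamp the weak manipulation.
g1 = rng.normal(0, 1, n) + rng.normal(0, 0.5, n)
g2 = rng.normal(0, 1, n) + effect + rng.normal(0, 0.5, n)
print("between-subject p value:", stats.ttest_ind(g1, g2).pvalue)
```

Run repeatedly, the within-subject comparison typically comes out clearly significant while the between-subject one does not, even though the manipulation is identical.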

More important than it really is. But if you don't see the effect, as with the 53 studies, doesn't that mean you just report that? So what was, I guess, the solution to that?

Well, I mean, the solution is for people to trust their intuitions less, or to try out their intuitions before. I mean, experiments have to be preregistered, and by the time you run an experiment, you have to be committed to it, and you have to run the experiment seriously enough, and in public. And so this is happening. The interesting thing is what happens before, how people prepare themselves and how they run pilot experiments. It's going to change the way psychology is done, and it's already happening.

Do you have a hope for, and this might connect to, the study sample size?

Yeah.

Do you have a hope for the internet?

Well, I mean, you know, this is really happening. MTurk, everybody’s running experiments on

MTurk and it’s very cheap and very effective.

Do you think that changes psychology, essentially? Because before, you were thinking you cannot run 10,000 subjects.

Eventually it will. I mean, you know, I can't put my finger on how exactly, but that's been true in psychology: whenever an important new method came in, it changed the field. And MTurk is really a method, because it makes it very much easier to do some things.

There are undergrad students who'll ask me, you know, how big a neural network should be for a particular problem. So let me ask you an equivalent question: how many subjects does a study need for it to have a conclusive result?

Well, it depends on the strength of the effect. So if you're studying visual perception, or the perception of color, many of the classic results in visual and color perception were done on three or four people, and I think one of them was colorblind, or partly colorblind. But on vision, you know, it's highly reliable; you don't need many people or a lot of replications for some types of neurological experiments. When you're studying weaker phenomena, and

especially when you’re studying them between subjects, then you need a lot more subjects

than people have been running. And that's one of the things that is happening in psychology now: the statistical power of experiments is increasing rapidly.

Does the between-subject approach work as the number of subjects goes to infinity?

Well, I mean, you know, "goes to infinity" is exaggerated. But the standard number of subjects for an experiment in psychology was 30 or 40, and for a weak effect, that's simply not enough. You may need a couple of hundred. I mean, it's that sort of order of magnitude.
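For a rough sense of those numbers, here is a sketch (framing the study as a two-sided, two-sample t-test is my assumption for illustration; the effect sizes are the conventional Cohen's d benchmarks):

```python
# Sketch: per-group sample sizes needed for 80% power at alpha = 0.05,
# assuming a two-sided, two-sample t-test between independent groups.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.8, 0.5, 0.3, 0.2):  # Cohen's d, from strong effects to weak ones
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {round(n)} subjects per group")
# Roughly: d=0.8 needs ~26 per group, d=0.5 ~64, d=0.3 ~176, d=0.2 ~394.
# For weak effects, the traditional 30-40 subjects is far too few, and a
# couple of hundred per condition is the right order of magnitude.
```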

What are the major disagreements in theories and effects that you’ve observed throughout

your career that still stand today? You've worked in several fields, but what still is

out there as a major disagreement that pops into your mind?

I’ve had one extreme experience of, you know, controversy with somebody who really doesn’t

like the work that Amos Tversky and I did. And he’s been after us for 30 years or more,

at least.

Do you want to talk about it?

Well, I mean, his name is Gerd Gigerenzer. He's a well-known German psychologist. That's the one controversy, and it's been unpleasant. And no, I don't particularly want to talk about it.

But are there open questions, even in your own mind, every once in a while? You know, we talked about semi-autonomous vehicles; in my own mind, I see what the data says, but I'm also constantly torn. Do you have things where you or your studies have found something, but you're also intellectually torn about what it means, and there are maybe disagreements within your own mind about particular things?

I mean, it’s, you know, one of the things that are interesting is how difficult it is

for people to change their mind. Essentially, you know, once they are committed, people

just don’t change their mind about anything that matters. And that is surprisingly, but

it’s true about scientists. So the controversy that I described, you know, that’s been going

on like 30 years and it’s never going to be resolved. And you build a system and you live

within that system and other other systems of ideas look foreign to you and there is

very little contact and very little mutual influence. That happens a fair amount.

Do you have hopeful advice or a message on that? Thinking about science, thinking about politics, thinking about things that have an impact on this world, how can we change our minds?

I think that, I mean, on things that matter, which are political or religious, people just don't change their minds, and by and large there's very little that you can do about it. What does happen is that leaders change their minds. So, for example, the American public doesn't really believe in climate change, doesn't take it very seriously. But if some religious leaders decided this is a major threat to humanity, that would have a big effect. So we have the opinions that we have not because we know why we have them, but because we trust some people and we don't trust other people. And so it's much less about evidence than it is about stories.

So one way to change your mind isn't at the individual level; it's that the leaders of the communities you look up to change the stories, and therefore your mind changes with them.

So there's a guy named Alan Turing who came up with the Turing test. What do you think is a good test of intelligence? Perhaps we're drifting into a topic that we're maybe philosophizing about, but what do you think is a good test for an artificial intelligence system?

Well, the standard definition of artificial general intelligence is that it can do anything

that people can do and it can do them better. What we are seeing is that in many domains,

you have domain specific devices or programs or software, and they beat people easily in

a specified way. What we are very far from is that general ability, general purpose intelligence.

In machine learning, people are approaching something more general. I mean, AlphaZero was much more general than AlphaGo, but it's still extraordinarily narrow and

specific in what it can do. So we’re quite far from something that can, in every domain,

think like a human except better.

So the Turing test has been criticized as natural language conversation that is too simplistic, as easy to quote-unquote pass under the constraints specified. What aspect

of conversation would impress you if you heard it? Is it humor? What would impress the heck

out of you if you saw it in conversation?

Yeah, I mean, certainly wit would be impressive and humor would be more impressive than just

factual conversation, which I think is easy. And allusions would be interesting and metaphors

would be interesting. I mean, but new metaphors, not practiced metaphors. So there is a lot

that would be sort of impressive that is completely natural in conversation, but that you really

wouldn’t expect.

Does the possibility of creating a human-level intelligence or superhuman-level intelligence

system excite you, scare you? How does it make you feel?

I find the whole thing fascinating. Absolutely fascinating.

So, exciting?

I think so. And exciting. It's also terrifying, you know, but I'm not going to be around

to see it. And so I’m curious about what is happening now, but I also know that predictions

about it are silly. We really have no idea what it will look like 30 years from now.

No idea.

Speaking of silly, bordering on the profound, let me ask the question of, in your view, what is the meaning of it all? The meaning of life, of these descendants of great apes that we are? What drives us as a civilization, as human beings, as the force behind everything that you've observed and studied? Is there any answer, or is it all just a beautiful mess?

There is no answer that I can understand, and I'm not actively looking for one.

Do you think an answer exists?

No. There is no answer that we can understand. I'm not qualified to speak about what we cannot understand, but I know that we cannot understand reality, you know. I mean, there are a lot of things that we can do. I mean, you know, gravitational waves, that's a big moment for humanity. And when you imagine that ape being able to go back to the Big Bang, that's… but…

But the why.

Yeah, the why.

It’s bigger than us.

The why is hopeless, really.

Danny, thank you so much. It was an honor. Thank you for speaking today.

Thank you.

Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash

App. Download it, use code LEXPODCAST, you'll get $10, and $10 will go to FIRST, a STEM education

nonprofit that inspires hundreds of thousands of young minds to become future leaders and

innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts,

follow on Spotify, support it on Patreon, or simply connect with me on Twitter.

And now, let me leave you with some words of wisdom from Daniel Kahneman.

Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed.

Thank you for listening and hope to see you next time.
