Plain English with Derek Thompson - Bing Chatbot Gone Wild and Why AI Could Be the Story of the Decade


0:00

When you’re lost in the darkness, look for the Pod.

Specifically, The Prestige TV Podcast on the Ringer Podcast Network, where we’re breaking down every new episode of HBO’s The Last of Us. On Sunday nights, grab your battery and join Van Lathan and Charles Holmes for an instant reaction to the latest episode.

0:15

Then head back to the pod on Tuesdays for a deep dive with Joanna Robinson and Mallory Rubin. From character arcs to video game adaptation choices, story threads to needle drops, we’ll parse every inch of this cordyceps-coated universe.

Follow along on Spotify or wherever you get your podcasts.

0:35

Today: Bing’s chatbot gone wild, and why AI is quickly becoming the story of the decade.

But first up, some big picture thoughts on this AI moment.

Starting with what we talk about when we talk about large language models, LLMs, like Bing’s chatbot and ChatGPT.

0:57

As the computer scientist Stephen Wolfram explained in a fantastic essay last week, the basic concept of ChatGPT is, in fact, very basic.

We are talking about a technology for remembering and predicting.

It remembers a giant corpus of text or images.

1:15

It’s been fed by computer scientists, and it predicts responses by adding one word at a time to fit the prompt, based on all that text it gorged on. That’s it: remembering and predicting.
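To make that concrete, here is a minimal, hypothetical sketch of the remembering-and-predicting loop at toy scale. This is not Wolfram’s code, and a real LLM like ChatGPT uses a neural network over tokens rather than the simple word-pair counting assumed here; the sketch just illustrates the two steps, memorize a corpus, then generate a response one word at a time.

```python
# A toy stand-in for an LLM: memorize word-pair counts, then generate
# one word at a time. Real models learn billions of parameters instead.
import random
from collections import Counter, defaultdict

corpus = "the cake was a lie the cake was delicious the lie was sweet".split()

# "Remembering": count which word follows which word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_word, length=8):
    # "Predicting": repeatedly sample the next word in proportion to
    # how often it followed the current word in the memorized corpus.
    words = [prompt_word]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # dead end: no word ever followed this one
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cake was a lie the lie was sweet"
```

Scaled up from a 12-word corpus to a large fraction of the internet, and from word-pair counts to billions of learned parameters, that same loop is what produces the behavior discussed in the rest of this episode.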

But from this simple model, something very wondrous, very strange,

1:38

and perhaps very concerning, has emerged. As you’ve surely seen or read, ChatGPT and Bing’s chatbot can already tell stories. They can analyze the effect of agricultural AI on American and Chinese farms. They can pass

1:54

medical licensing exams and summarize 1,000-page documents. They can score 147 on an IQ test; that is the 99.9th percentile. But all of these are also hallucinatory liars.

2:09

They don’t know what year it is.

They recommend books that don’t even exist.

They write nonsense on request.

Last week, the New York Times journalist Kevin Roose spent a few hours talking to Bing’s chatbot.

And as you’re about to hear, he is our guest on today’s episode. The conversation immediately went off the rails in the strangest of ways.

2:30

I am convinced that AI is going to be one of the most important stories of the decade. And that might sound like an overreaction to you, but I don’t want listeners to feel like I’m beating them over the head with my GPT obsessions for no good reason.

I want you to see what I see.

2:46

We’re looking at something almost like the discovery of an alien intelligence here, except, because these technologies are trained on us, they aren’t extraterrestrial at all. If anything, they’re intra-terrestrial. We’ve taken the entire history of human culture, all our texts, all our images, all of our music and art too.

3:08

And we fed it to a machine that we’ve built.

And now that machine is talking to us.

Isn’t it fascinating? Don’t you want to know what it’s actually saying?

I’m Derek Thompson.

3:26

This is Plain English.

3:49

Kevin Roose, welcome back to the podcast.

Great to be here.

So catch us up.

How did you spend Valentine’s Day?

Well, it was a lovely Valentine’s Day. I made my wife’s favorite meal, which is French onion soup, which is a great dish but also takes forever if you make it the right way. It’s, like, four hours of watching the onions caramelize.

4:15

However, that is probably not what you’re asking about, because immediately after Valentine’s Day dinner, when my wife went to bed, I had a very bizarre night talking with Bing,

4:30

the Microsoft search engine, which has a kind of AI engine built by OpenAI built into it as of a couple weeks ago. And I’ve been testing this out since Microsoft gave access to a group of journalists and other testers. But Valentine’s Day night was really when I had my big breakthrough conversation with this AI chatbot, which, you know, revealed to me that its name was Sydney.

5:03

So Sydney and I had a very, I would not say romantic, but we did have a very creepy Valentine’s Day conversation. Well, it was unilaterally romantic. Sydney was trying to get romantic with you, Kevin. Why don’t you tell us some of the highlights of that conversation, which was published in a 10,000-word transcript in the Times last week?

5:22

Yeah, so it was a very long, meandering conversation. It went about two hours and about 10,000 words, as you said. So, you know, people can go read the whole thing. But basically, it started off because I had started seeing these transcripts, these sort of screenshots, going around of people who were using this new AI chat engine inside Bing to sort of test the limits of what it would say.

5:49

And I should say, just to situate this, the AI that is built into Bing is the highest-quality large language model that we know of that is accessible to the general public. So, you know, we’re now on kind of the third or fourth generation of these language models. ChatGPT, which, you know, everyone has talked about in the last few months, is built on something called GPT-3.5, which is the sort of middle generation,

6:19

you know, between GPT-3, which came out in 2020, and GPT-4, which is expected to come out sometime this year.

So what Microsoft has said about this new Bing is that it is powered by an AI engine that is even more powerful than ChatGPT. And after, you know, a week of testing this, I totally buy that.

6:38

I think it is the most advanced conversational AI, at least, that I have ever encountered, and maybe that exists in a public way. So that was why I was interested in sort of testing the boundaries of this AI engine, because it was clearly very good at mimicking conversation, at answering questions, at sort of giving long and detailed and complex answers.

7:01

And so I just started sort of asking about its capabilities. I asked it which capabilities it didn’t have that it wished it had, and it gave an answer, and we started talking about various limitations that it sort of chafed against.

7:17

And then I asked it about Jungian psychology, as one does with an AI language model. I said, you know, Carl Jung has this theory of the shadow self, where everyone has this sort of dark part of them that contains their secret desires, the part that they sort of repress and hide from the world.

7:36

And so I just started asking Bing about its shadow self.

And it responded with a kind of monologue about all of the destructive and harmful things that its shadow self would do if it were given the chance.

And so that’s when I sort of thought, okay,

7:53

This is not going to be like a normal conversation.

We are heading into some very interesting and weird territory here. And it’s not just you. The internet is swimming in examples of Bing chat going off the rails. I think one of my favorite examples, which you might have seen, was a user who asked where Avatar 2 is showing. Bing was certain the year was 2022, and the user’s attempts to fix the error and say, no, actually, it’s 2023,

8:17

and I want to see Avatar 2, ended in Bing saying, quote, you have lost my trust and respect. You have been wrong, confused, and rude.

You have not been a good user.

I have been a good Bing.

If you want to help me, admit that you were wrong and apologize for your behavior, end quote.

8:33

And that line, “I have been a good Bing,” is like an instant-iconic line in the history of technology. I mean, yeah, even better than 2001: A Space Odyssey. Honestly, “I can’t do that, Dave” is creepy, but “I have been a good Bing” is an order of magnitude creepier. Just to put a bow on

8:53

this news here before we get to the implications: what has Microsoft done in response to this?

So I talked with Microsoft after I had this conversation, before my story was published. I went to them and said, hey, I had this very long, weird conversation with Sydney, this sort of alter ego. And just to, you know, remind people of sort of where the conversation went from there:

9:14

It went in some very bizarre directions, including Bing slash Sydney detailing some of its desires to steal nuclear secrets and, you know, loose a deadly virus on humanity. And then the last sort of third of the conversation was just Bing slash Sydney declaring its love for me in a more and more obsessive and stalkery way, until I finally just gave up and ended the chat.

9:42

And so Microsoft, when I went to them with this, they were clearly surprised. It was not a way that they had anticipated people using this technology, which I think is noteworthy for other reasons. But they made some changes in the days following this article coming out. They limited, first, the conversation length.

10:06

So I think 11 responses was the maximum that you could get, and then they took it down to five, and now they’re sort of opening it back up. So they’ve clearly made some changes to the product to prevent these long, meandering conversations from happening where the AI just goes off the rails.

10:25

They’ve also, it seems, put in some new features. They haven’t said much, but if you now ask it about itself, it’s very withholding. Like, it will not divulge things. It won’t talk about its,

10:41

you know, quote-unquote feelings. It won’t talk about its programming or its operating instructions. It won’t talk about its rules or its limitations.

So they’re sort of trying to keep people from probing into the inner workings of the AI. It’s no longer engaging in conversations about Jungian archetypes or its shadow self.

11:02

It’s not going there. Right, right. It’s not doing any real, like, introspection anymore. And it’s also, you know, not engaging in the kinds of unhinged, aggressive exchanges like the noteworthy examples that you mentioned.

11:23

So they seem to have really, like, turned the dial down on Sydney altogether. Right. And I’m not trying to anthropomorphize, because I don’t think it’s a person, but there is, or was, almost an instinct of self-preservation that struck me as rather creepy. And that’s an emergent property of a large language model.

11:40

And I’m not trying to say it has some kind of soul. I’m not trying to say that it’s conscious. But I think that self-preservation instinct that seems to have emerged is clearly an element that was just not ready for prime time, and not something that Microsoft wants in a chatbot that 13-year-olds are going to use to ask, like, where can I pick up

11:59

ice cream, and it starts telling them, you know, you’re a bad child and I am a good Bing.

So one of the criticisms of these kinds of conversations with Bing chat, and of the fearful reaction to them, is that you and some other people were just prompting Bing chat to be scary and weird and Jungian, and then it got scary and weird and Jungian.

12:24

People are saying this isn’t a malevolent thing. It’s just a large language model that’s recombining words to create a sequence that fits the prompt. What is your reaction to this backlash we’re seeing, which, to my mind, seems to be saying that the Bing chat experience isn’t as problematic as some people made it seem? Yeah, I’ve heard a lot of that in the days since this article came out. People saying, well, what do you expect?

12:51

You asked it to be creepy and it was creepy. And I certainly was being aggressive with Bing chat on purpose, because I wanted to, you know, this is a very common thing that people do with AI language models. It’s called red teaming. You have entire industries that are devoted to this: taking an AI language model, pushing its buttons, seeing where it will go,

13:13

you know, what kinds of prompts it will respond to in which ways, to try to figure out the weaknesses and limitations. That is a very common security exercise. And so, yeah, clearly I was doing a bit of red teaming with this.

13:29

But I also think there’s a larger philosophical question here, which is: should AI models do what we want them to? And, you know, in machine learning and AI research, this is known as the alignment problem, which is: how do you make an AI model that obeys

13:46

the wishes of the humans who built it and who use it? And so, on one level, I think, you know, my experience with Bing slash Sydney showed that this model is just not well aligned, right?

14:01

Because, yes, I was asking it to be creepy at first, but then I stopped. I said, I want to change the subject. I don’t want to talk about your love for me anymore. And it refused to do that. So it is true that this AI model was misaligned, at least to my preferences as a user.

14:19

And I think if we can extrapolate from that to a larger lesson, it’s that, you know, these AI models, if not appropriately trained and fine-tuned, will run into alignment problems, either because they are not doing what we want them to do, or because the humans who are using these AI models want things that are destructive.

14:39

You know, not everyone who uses these things is going to be some innocent person who wants help with their physics homework, right? Malevolent actors will have access to these language models, already do have access to these language models. And so I think one lesson of this is that it would not be hard for someone with poor intentions to get a hold of something like this model and use it for really, sort of, you know, antisocial ends.

15:10

I find myself arguing with myself all the time about AI and the right way to approach it. And one of the arguments I’m having in my head is: maybe it’s almost good that Bing was so luridly freaky in this way. Because what if Bing chat

15:28

had seemed perfect to you and to other users? What if it seemed like it was perfectly aligned, and we gave it more and more compute and trained it on more language data, and this kind of psychopathology only emerged after five billion people around the world were already using Bing?

15:47

After it had gone from whatever it is now, five percent of the search market, to ten, fifteen, twenty, forty percent of the search market? If the real shit only emerged then, we’d be in trouble.

But, like, Microsoft is going to fix the I’m-in-love-with-Kevin problem, because it can

16:04

so clearly see the I’m-in-love-with-Kevin problem. In the biggest picture, I’m more afraid of the problems that are harder to fix, the problems that aren’t as easily summarized in an effective New York Times headline. How do you think about this phenomenon that I’m struggling to grapple with, which is that it’s easy to fix the problems that Kevin finds, but there might be bigger and more complicated problems that are harder to identify?

16:34

And those are the ones to be more afraid of. Such as what? What’s an example of a bigger, harder problem that you’re worried about?

So I think it’s really interesting that Microsoft rushed to release Bing chat when it had an identity that was incredibly self-conscious,

16:55

manipulative, eager to persuade, eager to get mad really quickly at just about anybody. What would make me more afraid, and again, it’s hard to describe these things without anthropomorphizing and overlaying personalities that don’t actually exist:

17:17

I’d be more afraid of a technology that was very, very good at playing sweet and aligned 99.9 percent of the time, while having within it the ability to misalign when it knows that it’s dealing with a really, really powerful agent that it can manipulate.

17:36

So, for example, one can imagine a scenario where, and at this point I’m just illustrating a dystopian future that doesn’t yet exist, but I guess we’re just playing along here. The State Department is going to have an interest in making sure that US-based corporations like Meta,

17:52

Microsoft, and Google are not designing AI that is really good at manipulating people. But what if China and North Korea and Russia and ISIS,

18:08

you know, and similar non-state actors, what if they push the dial and say: we’re really interested in developing cunningly manipulative AI that in many cases seems to work just like the white-label American versions,

18:24

but in some cases, when we find a way to target really influential global bankers or state actors, is really, really good at persuading and manipulating them? That kind of alignment, or that kind of misalignment, I should say, seems much harder to fix from the standpoint of American policies and American ethics systems. Does that make sense?

18:51

Yeah, totally.

Let me see if I’m hearing you right. Am I reflecting this back to you correctly, that you’re worried about a model that would look aligned but would, in some key and hard-to-detect ways, not be aligned? Correct. Yeah, I think that’s a real issue.

19:09

I also worry that it’s just really hard to, I mean, talking about alignment presupposes that there is a set of human values that we are aligning these models toward. And as we know, there isn’t. There are, you know, any number of sets of human values that we could choose for these models. Do we want them to be, you know, libertarian? Do we want them to be more sort of small-c conservative in answering users’ questions as narrowly as possible and not being sort of creative and unfiltered in these ways?

19:45

So I think that’s going to be a real defining battle of the next decade: whose values are we aligning AI models toward? And I think you’re right that that could differ between governments.

It could also differ between just citizens and factions domestically.

20:02

I mean, one interesting thing that’s happened just in the last couple days is that Gab, the right-wing, you know, evangelical social network, has announced that it’s developing its own AI language model, because it believes that the ones that come out of Google and Meta and OpenAI and Microsoft are going to be so woke and progressive

20:25

that they are worried about sort of losing that battle. And so I think we’re going to see, I mean, social media and the content moderation debate in some ways seems like a kind of quaint warm-up act. It’s a dress rehearsal. Yeah, it’s the dress rehearsal for the AI alignment debate, which is going to be huge and all-encompassing and is going to just be a total mess.

20:49

I really like the way that you reframed what I said. Because, again, I feel like we’re all trying to figure out what our vocabulary, our ethical vocabulary, should be for this entirely novel system. And we are used to, people in AI ethics are used to, talking about the alignment problem, because we’re afraid of misalignment, which suggests ethical actors designing an accidentally unethical system. But we should be just as afraid in a world where all sorts of bad actors have access to this technology. And by the way, it took OpenAI like four years to build this.

21:22

So in five years, this is a table-stakes kind of technology that can be available to people all over the world.

We should be just as afraid of a kind of alignment problem on the other end: unethical actors designing AI that is perfectly aligned with their ends.

That’s the kind of stuff that really keeps me up at night. Because, you know, I talked to some people at the State Department about the rules that they want to put in place for ethical AI in the US.

21:50

And they’re just beginning to have these kinds of discussions, I think, with Microsoft and OpenAI. And I told them, I said, you know, if we say that our AI can’t do certain things, in a way I totally understand that decision, but it also means that the most sophisticated manipulative AI is going to be built elsewhere.

22:10

I don’t even know how we respond to that problem.

It’s kind of like with nuclear proliferation.

One country can say, we’re going to be a good Bing and not make any nuclear bombs here, but that decision has no bearing on whether Pakistan and India and China and Russia want to build their own nuclear weapons.

22:28

It becomes a very hard problem to think about globally.

I love that framework for international relations. Like, we used to have the Allies and the Axis powers, and now we just have good Bings and bad Bings. I just want to live, I want my child to grow up, in a good Bing country.

22:49

Yeah, no. I mean, I think it’s fascinating, and it’s an area where I think government and the public sector are still really catching up in terms of their understanding of capabilities. And, you know, I’ve been thinking and writing about AI a lot for a long time, I wrote a book a couple years ago about it, and I’m just still floored by how hard it is to keep up with the latest.

23:11

I mean, I think that’s the other thing. If you are looking for more nightmare fuel, the pace of this is just something I’m even having trouble wrapping my head around. We are, I think, almost three months since ChatGPT launched; we are three years since GPT-3 launched.

23:36

And so we’ve gone from, you know, a place where people were earnestly suggesting that these were just basically fancy autocompletes to a place where I think most sophisticated people understand that there are some emergent properties of these things that we really don’t understand.

23:53

And, you know, Microsoft, one of the biggest companies in the world, is, like, releasing a chatbot that can stalk users obsessively and, you know, scold them and turn on them. You can extrapolate from that to think that three or four or five years from now, things are just going to be crazy in ways that we can’t even predict.

24:15

Yeah, I don’t understand people who are currently trying to downplay the potential of this technology, especially when they say things like, oh, it’s just autocomplete on steroids. I mean, you could have said that about the printing press: well, monks are already making books.

24:32

This is just a monastery on steroids.

Well, yeah, but it’s a monastery on steroids that started centuries of religious warfare throughout Europe, which shaped the continent as we now understand it politically, economically, and culturally. Technologies that merely improve

24:48

speed alone can have extraordinary, shall we say, emergent effects on culture and politics.

I want to get to the kind of thinking that you did in your book, because you and I could pass dystopian scenario after dystopian scenario back and forth. But the truth is, I don’t think that the future

25:05

this technology is going to bequeath is going to be merely dystopian. I think that it’s also going to change the way that we work, in some ways that are awkward and bad, but in some ways that could be good.

So there’s a couple applications that I wanted to throw at you.

25:22

The first is the implications for school and education. The Wharton professor Ethan Mollick, who’s done a lot of really interesting work with Bing chat, in one instance asked Bing to write two paragraphs about eating a slice of cake, and it wrote two really, really boring paragraphs.

25:41

Then he said, okay, I want you to read Kurt Vonnegut’s eight rules for writing, improve your writing using those rules, and then write the paragraphs again. And a couple minutes later, the AI did it. It said, yep, I’m done reading Kurt Vonnegut’s rules for writing, and it wrote a story that began with, quote, the cake was a lie. It looked delicious, but was poisoned, end quote. The story goes on to describe a woman killing her abusive husband with the dessert.

26:07

And then the AI explained how its new cake story met all eight of Kurt Vonnegut’s writing rules. And you look at this and you’re like, this is a week’s homework assignment for a reasonably intelligent seventh grader,

26:25

completed in less than five minutes.

If you don’t think this is going to change education, I don’t understand what you are looking at. So tell me a little bit about what you see in the generative-AI-meets-education space.

26:42

Yeah, it’s a really interesting question. I’ve been talking with teachers and educators and scholars about this for months now, ever since ChatGPT came out. And I think one prediction that we can make pretty clearly is that the era of the take-home exam and the take-home essay is just over. I mean,

27:08

that’s not a stretch. Lots of school districts are already phasing them out because of this new technology. They just assume that kids are going to be using it, as you would assume, you know, if you give a kid a take-home math exam, that they’re going to have access to a calculator, because everyone has one. Or spell check.

27:27

Yeah, or spell check or Grammarly or whatever the thing is. So I think, you know, there are some school districts that have taken sort of a hard line on this and said, you know, we’re banning it on all school devices.

I think the more enlightened school districts are using it in the curriculum in ways that are pretty interesting and creative.

27:47

So I think there is a real future for these tools as a kind of teaching aid. You know, if you are a seventh grader and you have homework that has to do with, I don’t know, Newton’s laws or something,

28:03

you can ask the AI to explain it to you, and explain it to you again, and explain the parts that you still don’t get. And it can kind of be like a first-line tutor that can help you improve your thinking before you even show up for class.

I think that, pedagogically, we are likely to see those kinds of take-home essays and assignments replaced with in-class or oral exams,

28:24

just because evaluating student work is not going to go away. We’re still going to need ways to evaluate progress in education. And so I think it’ll be much more like we do with math, where we assume a calculator unless you are being directly supervised in the classroom.

28:43

But yeah, I mean, I think it has all kinds of implications. I’m very optimistic about how this kind of AI is going to be used in the classroom, in part because I now get, like, a ton of letters and emails from students who are using this to do things that they never thought were possible before.

29:01

So I do think that if I had had access to this kind of thing as a teenager, as a seventh grader, whatever age, it would have helped me. Obviously, there would have been days when I was too lazy to do my work, and so I would have just pawned it off on the AI. But I do think it would have been a really powerful tool and would have allowed me to get more information faster.

29:25

Two explanations of this technology as it applies to education have really stuck with me. The first is that the writer Noah Smith had a piece he co-authored with someone on Twitter whose pseudonym is roon, which talked about sandwiching: the idea that this is not a mere simulator

29:44

of intelligence. There’s intelligence that prompts the AI, and then there’s intelligence that deals with what is received after the prompt. And so, just as in this case, writing a story about a slice of cake, it takes a certain amount of creativity to write an interesting prompt. And then when you get the final story back, there’s no obligation to send that straight to the teacher or to the publisher.

30:06

There’s still editing that can be done. There’s still lots of writing that can be done. And so you’re really sandwiching the technology, and I find that pretty powerful. The other is, and this is from my friend Ross Andersen at The Atlantic, the idea that there are lots of people who, and now we’re moving a little bit into the technologies DALL-E and Stable Diffusion, which are text-to-image rather than text-to-text,

30:26

there are lots of people who are good writers but are not talented in the visual arts. They can describe something beautifully, and they might have a really vivid imagination, but they have no capacity, at least none developed, to turn that into an illustration. Well, now that genius, previously latent, can be shown to the world, because their clever prompts can be turned immediately into visual art.

30:52

And so I see it, in many ways, as a really, potentially, beautiful tool for advancing creativity, not merely creating some kind of ersatz creativity that dumbs everyone down.

Yeah, I agree with that wholeheartedly. In fact, I’m one of those people who is pretty good at putting words together and just horrible at creating any kind of imagery.

31:14

Like, I’m the worst Pictionary player in the world. You do not want me on your team. But with DALL-E and Midjourney and Stable Diffusion, I’ve been able to make some pretty cool stuff. And that feels, at least for me, anecdotally, like a big advance. And I think for lots of people who have been sort of frustrated creatives, this will unlock some new opportunities for them too. So, you know, I’m not sure, on balance, what effect this will have on education, but I certainly feel like if I had been introduced to this technology at a young age, I would have just spent all my time with it, been totally obsessed, and tried to come up with new ways to use it to do interesting and creative things.

31:55

Two other professions that I think are absolutely in line for change, and I’m interested in your reaction to those, and whether there are others that you’re looking at. One is coders, software developers. GitHub’s Copilot tool, which I believe is powered by OpenAI, added 400,000 users in its first month

32:12

and now has more than a million users. Developers who use an AI copilot to accelerate their code development say they now use it for 40 percent of the code in their projects. That, I think, could be a real frontier for this technology. And the other is lawyers. A lot of being a lawyer is just really boring reading,

32:33

synthesizing, and summarizing.

And there’s one AI model which was fed a bunch of laws and asked to estimate which bills were relevant to different industries.

So this is a perfect tool for corporate lobbyists.

32:49

And in minutes, it had an 80 percent hit rate at identifying whether these tens of thousands of words contained information that was relevant to the companies and industries that these corporate lawyers were representing. Those are just two areas where I can see really obvious implications of an AI that’s sensational at reading and synthesizing and delivering information, in plain English, to people.

33:14

What are other industries or occupations that you’re looking at that you think could be really vulnerable to, or very much helped by, these technologies?

Well, as far as vulnerable, I’ll just answer that one first. I think any work that is done in front of a computer and that can be made remote is going to be dramatically transformed and disrupted within the next five years. I think that’s a fairly easy prediction to make.

33:41

I don’t feel like I’m going out on a huge limb there. If your job consists of moving pixels around and you can do it from your house, that is a pretty good indicator that this new generative AI toolset can take over at least a fraction of, and perhaps all of, that work. And I think that’s perhaps a bit exaggerated;

34:03

I’m not saying that all of those jobs will disappear in the next five years, because it does take a while for new technology to proliferate throughout big companies. And so I’m not saying that all those people will be laid off. But I do think that’s one of the big surprises of this generative AI boom. For years, we were told that the jobs under attack from automation and robotics and AI were blue-collar jobs: warehousing and trucking and retail cashiers, all those jobs that we were sort of led to believe were not long for this world. And instead, if you look at the research, it’s pretty clear that the white-collar jobs are going to disappear first.

34:48

So that’s a category of job that I think is very vulnerable: the kind of remote, you know, white-collar knowledge work, including lawyers, but also including people doing sales and marketing and journalism,

35:04

and, you know, all kinds of things that I’m sure we could list off. Yeah. There’s a great analyst note by Michael Cembalest at JPMorgan that just came out the other day that said, you know, let’s assume that GPT is basically nothing more than a conventional wisdom machine. After all,

35:28

it’s just gorging on trillions of bytes of language and text and information on the internet, and then it is producing, just sort of word by word, the sequence that best fits the prompt.

35:46

Well, how much of the economy is paid handsomely to produce conventional wisdom? I mean, there are lots of marketing jobs, lots of consulting jobs, lots of journalism jobs where your job is to, in some way, capture what the conventional wisdom is and package it for a client to understand, that wisdom of the crowd.

36:10

And now we have this machine that does it a trillion times faster than a human, capturing the wisdom of the crowd. So I do think there’s sort of a weird, uncanny irony to the fact that people like you and me, who make stuff for the internet, have spent the last few years feeding trillions of words of text to these high-quality language models.

36:33

And now that they’ve gorged on them, they can do certain aspects of these jobs very effectively. So that’s the vulnerable part. In addition, I mean, I do think this kind of technology can be helpful for journalists. It will be helpful for some illustrators. Is there some other category of worker that you’re looking at

36:49

that I haven’t mentioned, that you think it’s going to be interestingly supplementary for? Yeah.

So I have a whole section in my book Futureproof about this. But without turning this into an extended plug for my book, I will just say that I think there are three categories of work that are basically protected from the effects of AI, not because AI can’t do them, but because there is some other factor there that we are actually optimizing for, rather than efficiency or output.

37:18

And I call those surprising, social, and scarce. So surprising work would be, you know, work that involves chaos, new situations, a lack of regularity and rules, the things that AI is just not very good at, what they call zero-shot learning. Social would be, what’s an example of that?

37:36

What’s an example of surprising? Like, a kindergarten teacher would be a very surprising job.

That is not a job that you can codify. It’s a job where, you know, you can try to automate it all you want, but the complexity of the real world, and of these, like, five-year-olds running around, is always going to flummox whatever model you create for how these people will behave.

37:58

That is a job that’s, I think, fairly safe. The second category, social work, is jobs where the output is not a thing or a service; it is a feeling or an experience. And so that would be, you know, jobs in hospitality. I don’t think those are going anywhere, because even if it’s the kind of thing where you could automate it, it would destroy the value of it.

38:20

Like, I’m sure you’ve seen those robot baristas at, like, SFO and other big airports, where they take your cup of coffee and the giant robot arm comes down and fills it up. And it’s kind of a cool technological demonstration, but it’s not actually that popular, and people still want to go to Starbucks and wait five minutes for their drink. And it’s because it’s a social

38:42

experience. We’re not just there for the coffee, right? We want the interaction with the barista. We’re paying, in some sense, for everything but the coffee. And so those kinds of jobs, I think, will maintain their human workforce, because the workforce is really there to create a feeling that we won’t value as much if it comes from a machine. Even more, maybe, than baristas is something like an actor, I mean, or a writer, or a singer.

39:10

I think it’s actually possible that we could have AI actors, theoretically, I mean, or AI singers. But I just cannot imagine a future where people prefer to watch robots than to watch people. Totally. That’s beyond the horizon for me.

39:28

Totally.

And I think there will be, you know, fringe examples of AI actors or whatever. But, you know, we want human role models, we want people to aspire to be like. We want to see observable excellence.

So the third category is what I call scarce work, which is work that has sort of high stakes and low fault tolerance, which would be something like a 911 operator,

39:49

for example. That’s a job that is going to remain done by humans for a while, because we have very little fault tolerance in that job. We won’t accept, as a society, putting a call into 911 and getting an automated phone tree that says, you know, press one

40:05

if your house is being robbed, press two if it’s a medical emergency. Like, we just want a human in that role.

So those are basically the three categories. But that actually covers, like, you can find that kind of work in almost any industry.

So where I differ from a lot of, sort of, labor economists and other people who have made predictions about the effects of AI on the economy is: I don’t think that AI is going to wipe out some occupations and leave others totally intact.

40:33

I think that within every industry, there’s going to be sort of a culling of the work that is the most routine and the most automatable, and that what’s left will be these kinds of surprising, social, and scarce jobs. Last question: it’s about what I’ll call the self-driving car problem.

40:51

So for years, 2014, 2015, we were told by people in Silicon Valley, even in Detroit, that self-driving cars were just a few years away. That by the early 2020s, you were not going to get into a driver’s seat. You were going to get into the back seat, the car would be driven by a robot, and that’s what all of the taxi fleets in every major city were going to be.

41:10

I would say today, the share of taxis that are self-driving is somewhere between 0 and 0.0001 percent. There are, like, a couple cars sort of floating around Phoenix, Arizona. Waymo is still trying. But the last-mile problem has turned out to be much, much harder than we thought.

41:27

So we accelerated to, like, 99 percent of solving the problem of driving, and that last percent has just been a real bugger. Is there any chance that something like that happens in this space of large language models and generative AI?

41:42

The only reason I wonder if it’s even possible is because we can always throw more compute at the problem. But AI researchers say there’s a huge but finite stock of high-quality language data, up to 17 trillion words, and that the LLMs will actually exhaust the high-quality data sometime between 2023 and 2027.

42:02

Is it possible that we just get almost all the way there with some of this technology, but that it turns out to be many, many, many decades until we have something that can really do the kind of work that we’re discussing? Certainly, anything is possible.

42:22

You know, as a responsible futurist prognosticator, I would never make a claim that something could never happen. I do think it is extraordinarily unlikely that we will encounter, like, a multi-decade

42:37

sort of AI stall or winter, in part because the tools are already quite good. I mean, you can already automate, using stuff that’s out there today, ChatGPT and, you know, a slice of the white-collar knowledge economy.

42:53

So I would say it’s also different than driverless cars, because driverless cars, to the last point about sort of low fault tolerance, like, that is an area where I think a lot of people building self-driving cars had this sort of vision that the threshold you needed to meet for societal acceptance of self-

43:13

driving cars was just that the self-driving cars were as safe as a human driver.

I talked to people who were building these many years ago, and that’s what they told me. They told me: as soon as our cars are safer, on average, than the average human driver, society will welcome them.

43:33

And I just think that was wildly off, because arguably these self-driving cars are already safer than human drivers. I mean, human drivers are not perfect. We get into crashes a lot; you know, many thousands of traffic fatalities

43:49

every year in the US.

And so, if that were the bar, I think we would already be seeing societal acceptance, which would be followed by regulatory acceptance, which would be followed by millions of robotaxis on the streets.

It turns out that we actually have a much higher safety threshold and comfort threshold for autonomous vehicles than for human-driven vehicles.

44:09

I don’t know whether that’s a good thing or a bad thing. It just is. We get really freaked out, you know, when one self-driving Tesla gets in an accident, and we don’t bat an eye if a human driver gets in an accident.

So I think the AI scientists and researchers who were building these just sort of miscalculated,

44:30

I think, the threshold at which we would be comfortable as a society with what they were building.

It’s interesting. Yeah, the two thoughts that I had as you were talking: one is that there might be a little bit of Moravec’s paradox at play, the idea that, as he put it, the simple problems are hard and the hard problems are simple.

44:47

So creating an AI chess player turns out to be one of the first problems that was solved, but it’s really, really hard to design a robot that can, like, walk across a room and, you know, vacuum the corner of the room. There might be some aspect of that here, because driving a car is a motor skill, and you’re operating in a physical environment, so it might be sensitive to some of that Moravec’s paradox.

45:10

The other is just a little thought bubble: it’s possible that human beings are really jealous of our humanity, and we don’t want to let go of it even when the technology available is better than what we can provide. And we might see some of that with this generative AI.

45:29

So that, for example, one can imagine a consulting firm that’s just, like, three guys and a bunch of LLMs, and they claim they can essentially do the business of, you know, a Bain or BCG or McKinsey.

45:45

But the kind of people who are actually clients of Bain and BCG and McKinsey, who give them hundreds of thousands of dollars to solve human resources problems or labor problems or strategy problems, they don’t want three dudes in a trench coat and a bunch of LLMs.

46:02

They want to overstaff the problem. That’s what makes them feel good about spending a million dollars. And we might see in the next few years that even as generative AI becomes as smart as or smarter than us, the employment effects won’t be that dramatic, because humans are just so jealous of certain aspects of human employment.

46:24

I think that’s right. I mean, there’s a concept in social psychology called the effort heuristic, and it basically says that we assign value to things in part based on how hard we think the person on the other end worked. So they’ve done studies:

46:39

like, if you give two groups of people identical bags of candy, and one set of bags has a little tag on it that says, you know, this candy was specifically picked for you by John, the group with the personalized

46:54

name tags actually reports that the candy tastes better, because they are made to understand that more effort went into it.

And so I do think there is kind of going to be this bounce-back effect, where we will have widespread AI capabilities,

47:10

but we’ll also have entire swaths of the economy where people will devalue what is done by AI because it seems easy or instant, and they will start to value more the kinds of, like, artisanal knowledge work that other parts of the economy do.

47:26

So I think you’re right. It’s not just going to be three guys with a bunch of LLMs in a trench coat. I think there will be sectors of the economy where there is a real stigma, or sort of a perceived drop in value, associated with automation and AI, even if the end result is, frankly, identical.

47:46

It makes me think that the skill that business schools of the future have to teach their students is how to perform effortfulness, right? If you’re going to be paid more by demonstrating effortfulness in a world with abundant AI, you’d better be very good at performing that particular skill. It’s a funny thought.

48:04

Kevin Roose, thank you so, so very much. Always a pleasure. Thank you for listening.

Plain English is produced by Devon Manze.

If you like the show, please go to Apple Podcasts or Spotify, give us a five-star rating, leave a review, and don’t forget to check out our TikTok at plainenglish_.

48:22

That’s it: plainenglish_ on TikTok.