Plain English with Derek Thompson - The Future of AI Is Thrilling, Terrifying, Confusing, and Fascinating

🎁Amazon Prime 📖Kindle Unlimited 🎧Audible Plus 🎵Amazon Music Unlimited 🌿iHerb 💰Binance

0:01

Yo, Rob, our Villa from 60 songs that explain the 90s here to inform you that we are back with 30 more songs because the 90s were super long and had a ton of read music, please join us every Wednesday for more 60 songs that explain the 90s only on Spotify today.

0:17

I have something really special for you a long funny, scary challenging and informative conversation.

With one of my favorite writers, about one of the world’s most important topics.

The guest is author, Steven Johnson writer of the adjacent possible newsletter, and the best-selling author of many books including most recently extra life on the science of living longer and our subject is artificial intelligence.

0:44

I think it’s possible that 50 years from now, when we look back at the early 2020s, at the pandemic and the political chaos in the economic roller coaster in the culture, war will say that the most important development from this period.

Wasn’t the pandemic or the political chaos or the economic roller coaster, or the culture war will say it was this vertical our.

1:08

We’re experiencing at the frontier of artificial intelligence.

So why do I think that will last year the organization open a i unveiled a technology called GPT 3.

This is a language technology that has essentially ingested zillions of words articles Wikipedia pages and learned how to communicate like some kind of human genius.

1:31

If you ask G, PT 3 to explain the history of the First Amendment or had it max out your 401 k.

The technology will spit out an astonishingly human-like statement explaining either.

If you ask G, PT 3 to finish an essay that you started, by giving it the first paragraph of that essay and basically saying keep going and write in the style that that paragraph Was Written in.

1:55

It will do that.

If you ask you to Summarize each chapter of War and Peace in a paragraph and then to summarize all of those chapter summaries of War and Peace and in one single paragraph, it will do that to this year.

Open a, i unveiled something even more astonishing, a text to image technology called.

2:13

Dolly to.

If you have a request for art Dolly lips it up high.

Let’s say you type in, I want a photo realistic image of raccoons playing chess on the moon or I want a child like cartoon of the Mona.

Lisa smoking a cigarette.

This is an AI that using its understanding of images and context and language and art can make these things almost instantly.

2:37

These breakthroughs in the field of artificial intelligence are I think just the tip of the iceberg.

Imagine a g BT, 34 molecular combinations.

That could predict novel vaccines, that would change the World.

Imagine a dolly for architecture that could predict the design of totally new buildings or totally new materials.

2:59

We haven’t been able to dream up yet.

That would change the world.

The frontier of AI today might be the most important place where technology is poised.

This precarious balance between two different, Futures a much better world, were knowledge and creativity, become ever, more cheap and abundant or a much worse world where humans build some kind of a.

3:23

I Frankenstein that we can neither control nor understand.

This episode is about the frontier of Technology, the frontier of AI and how to direct that Frontier toward Utopia and not dystopia.

3:42

I’m Derrick Thompson.

This is plain English.

4:08

Steven Johnson, welcome to the podcast.

Thank you very much.

It’s good to be heard.

Eric.

What I want to do in the next 30 minutes to an hour is offer people, a tour of the Horizon of AI and ask some, big hard questions about what is exciting about this Horizon.

4:27

What is scary?

About this Horizon and how we can move the frontier of this technology toward the less dystopian implications of it.

And I want to start off with a very specific example of AI and that is g.

4:43

PT 3, Stephen.

What is g PT 3 and why are you excited about it?

Well, to PT 3 is in a sense, a kind of a subset of AI.

It’s a specific implementation.

Of a category known as large language models.

5:03

And it is also belongs to the family of neural Nets and the family of deep learning.

So those are a bunch of buzzwords right there.

That will be meaningful if we’re going to unpack this because received just a second but it is basically a neural net which is modeled, you know, very vaguely on the structure of the human brain, but we should not take that kind of biological analogy too far.

5:27

That is that Through a process.

It’s called a training process where it is shown a massive Corpus of text.

Basically a kind of a curated version of the open world wide web.

Wikipedia a body of digitized books that are part of the kind of public domain and it basically, ingests, all of that information and it goes through a training process and this training process.

5:58

This is really kind of fascinating, we get into the details of it.

But basically it learns to associate kind of connections between all the words in that body of text and through that training process.

It is able.

Then when you give it prompts initially, it was in the form of.

6:17

Here’s a sentence or here’s a paragraph.

Continue writing in this mode for another paragraph or another five paragraphs.

And if you have a big enough, Corpus of text and a deep enough, neural network.

It turns out that computers over the last couple of years have gotten quite good at continuing human author, text.

6:41

And it was initially kind of a little bit of a parlor trick and that you would, you know, write a paragraph and earlier versions of this software would kind of continue on and you would look at and you’d be like, yeah, that sounds vaguely.

Like a human could have written it but obviously it was all so nonsensical and all these Ways.

6:59

And and it wasn’t particularly interesting.

It was, you know, for most users you see this technology and things like auto complete when you’re using Gmail and you write a sentence and it suggests, you know, a little word at the end, it that’s basically built on.

It was great to see you last and then Gmail suggest in light gray font night.

7:19

Yeah, right.

That’s exactly.

The same idea at the same idea.

It is using its understanding of millions and millions of emails already sent to predict the next.

In the email that you are sending and just to add a little bit of sort of 101 context to your first answer neural Nets.

7:36

We’re not going to get in the full definition here.

But basically, this is a set of algorithms that mimic a human brain, that learn to identify patterns, or solve problems through repeated cycles of trial and error.

It’s a, it’s a domain of AI that is very popular.

7:52

Shows.

A lot of promise and is behind the large language models that you just talked about.

Out one of these large language language models.

That’s very exciting is G PT 3 and the reason I think GB T 3 is so interesting.

Is that it’s not just the sort of technology that can add the word night.

8:12

When you type in Gmail.

It was great to see you.

Last.

It can go much further than that.

It can summarize books.

It can summarize papers.

It can write entire essays in response to very complicated prompts.

8:28

Can you give us Examples of some of the implications of G PT 3 that are most thrilling to you.

Yeah.

So let me say one more thing about the structure of it, which I think is kind of fascinating.

I agree.

We don’t want to get too too far down into the rabbit hole of how it actually works.

But on some fundamental level.

8:43

It is trained on this very Elemental Act of next word prediction.

And to me this is one of the things that I find kind of mind-blowing about it.

I mean, there’s a lot of complexity to what’s going on in the neural net, but fundamentally the training process is, you know, in jest All the history of the web and, and Wikipedia.

9:01

And then it’s given endlessly.

A series of training examples, where it shown a real world paragraph, that someone some human is written.

And then one word is missing and basically, in the initial stages of the training process.

The software is instructed, like, come up with the missing word, come up with a statistically ranked list of the most likely word in this particular paragraph.

9:26

And in the initial pass, it’ll give you, you know, Whatever 30,000 words, you know, that might be that that missing word and it’ll be terrible at it.

It’ll be awful.

But somewhere in the bottom of the list, like word number twenty-nine thousand will be the right word.

9:41

And so the training process is saying, okay, whatever set of neural connections, the LED you to make guest.

Number twenty, nine thousand strengthen all of those connections and weakened all of the other Connections in your neural net and it just plays that game a trillion times and eventually it gets Credibly good at, predicting the next word.

10:01

And in fact, predicting whole sentences or paragraphs.

And what seems to have happened over the last three or four years.

There was an earlier version of G PT 3, which was called JP T to that came out a couple years ago over this period.

10:17

The software is now much better as you say at constructing, larger thoughts and making arguments, and summarizing and doing things like that.

So, and to me, it’s just Mind-blowing that it really fundamentally comes out of this act of next word prediction that.

10:32

That’s the kind of fundamental unit of the whole, the whole area.

That is the seed from which this flower of imitating.

Our entire language has bloomed, it’s very good.

Give it you you played with this and have played with it, you know, much more than I have much more certain than the average listener here.

10:47

Has one of the most awesome implications of this technology.

What can you do with it?

I think what we it’s already quite good at doing is Providing quick kind of summaries or prey sees or explanations of things.

11:05

And so one of the prompts that I did in one of the promise that I did in writing the story was basically, I gave it a few lines about the early days of neural net software.

So there was a, you know, one of the pioneering examples of neural Nets offer with something called the perceptron which came out in 1958.

11:26

And so I wrote like three lines saying like All that technology dates back to 1958 when Frank rosenblatt introduced the perceptron.

And then I said, ok, G PT 3, keep going and it wrote this.

I actually, you know, if you go on Twitter, I included the whole thing.

11:41

Would just included, a little snippet of it in the, in the New York Times magazine article, but it wrote a, you know, five paragraph description of the history of neural Nets how they evolved it explained, how they worked.

It, talked about their modern applications and some of the issues raised and It read basically like a perfectly adapt, Wikipedia entry unbelievable, that was generated and, and what’s important here and to be clear.

12:07

It was original.

Yeah, like wait, those sentences had never been written before.

It didn’t just say.

Oh, it looks like Stevens trying to write an article for New York Times magazine about the history of neural Nets.

I’m gonna go find the Wikipedia article and just copy.

Paste it into this field.

It actually wrote connections of words sentences that had never previously been published in the Of the internet.

12:29

Yeah, it displayed a kind of creativity.

Yeah, it’s so, so this is exciting and also scary in a couple different ways.

First off, just like this, by the way, is 1,000 teachers and school, administrators heads exploding because they’re like, oh my God, the plagiarism implications of this are absolutely massive.

12:45

Serious question for plagiarism and actually was an interesting question for fact, checking for this article.

So, so the way you can check it is for the people who don’t know this is, you know, if you if you search Google All for a phrase with quotes around it.

13:01

Google will look for that exact phrase and so it will search through.

Not only the history of things have been written on the web but also through Google Books and everything else.

And if that exact phrase has been used in that exact sequence of words, it will find it.

And so, we went through in checking this piece, and I went through it in writing in this piece, just going through every single phrase, and making sure, you know, the entire sentence had never been written before, as far as Google could tell, and all of these things were original phrases.

13:29

Now this is really going to break every plagiarism detector in can the world because that whole thing is predicated on the idea that there are these exact phrases that people are taking over there, just mixing, you know, changing one word here and there.

And I think, you know, there is a, there’s a version of the Turing test here, which is the, you know, could this essay have been written by a relatively talented, you know, 10th grader, and highs in high school.

13:58

Oh, and you know, a lot of what GPT three outputs today would pass that test.

And so I don’t I mean I don’t really know how we’re going to defend against that in terms of what I think.

It’s also interesting about this is that there are all sorts of layers that technologist can put on top of G PT 3 to change the style of the writing.

14:23

You can tell this technology to write funny.

You can tell it to write mysterious.

You can tell it to write in French or German.

We might be a few years away from being able to say, write this in the style of Steven Johnson, write this in the style of Derek Thompson, write this in the style of Stephen King, right?

14:41

I mean, theoretically if this technology not only understands human language, but also understands human style.

It might eventually serve as a kind of super intelligence for mimicking.

The writing style of just about anyone.

14:58

Right?

That level of complexity is not there yet.

I would say stylistically what it is capable of doing, which I think is very interesting on a couple levels.

Is one of my favorite examples is that you can say, explain the Big Bang in language that an eight-year-old will understand.

15:17

Or you could then say, explain the Big Bang in the language of a scientific published paper and the the eighth eight-year-old understanding explanation will be quite good.

I mean it It won’t be able to kind of fake, a professional, kind of peer-reviewed paper yet.

15:34

But the fact that it is capable of understanding what is effectively kind of style, or kind of the complexity of the idea at different kind of scales.

15:50

Like, this is a very simplified version of it.

This is much more expansive version.

This is something that, this age group would understand.

This is something that a professional would.

And that’s a, that’s a complicated form of information manipulation.

Right?

16:07

It has to have, it seems to me to be able to compress and expand an idea like that and to, and to write about it through with different genre expectations.

That is something that I would have not thought you would be able to get out of an algorithm trained on predicting the next word, right?

16:27

It is.

Seems to be able to pick up these more subtle framing patterns, that, that humans have when they try and communicate with each other.

And that, that to me is super interesting.

You’ve responsibly talked me off the ledge here, but I do think that my prediction of write in the style of Derek Thompson, right in the style of Steven Johnson, right?

16:47

In the style of Bill Simmons is the sort of thing that we might be mere years away from.

Because we already have this genre layer that we can apply.

I 2, G, PT 3 and we might eventually be able to train this out, these algorithms on the Corpus of our own writing.

17:05

Here’s everything that I’ve written, write it more like me and less like, Bill Simmons.

Absolutely.

Well, go ahead.

Yeah.

Let me say two things.

They’re so, so just to give you a sense of how far things have gone.

There was a, there was a wonderful piece about two or three years ago in the New Yorker that John Seabrook wrote about GPT to the predecessor of G PT 3, and it’s also a lot about autocomplete.

17:28

That we were talking about before and it has this wonderful kind of anecdote at the beginning where Seabrook is ready.

He’s writing an email to his son.

And you know, he’s talking about logistics of what he’s going to pick him up at work or something like that and and then he’s writing at the end and he starts to write a sentence that begins with.

17:45

I am.

And then autocomplete writes very proud of you and Seabrook is like, oh, that’s a good point.

I am very proud of him.

When was the last time I told him that I should write that and then he’s like, oh my God.

What just And like, right?

Like this algorithm just told me like this very, you know, emotionally sophisticated thing that I didn’t think of and it is making me a better parent, you know, but I bring that up because later in the piece, they feed GPT to an old New Yorker story.

18:16

A profile of Hemingway from the 50s and they give it the first paragraph and they say, keep writing that mode.

And again, I put this on, this is not in my piece, but I did put it on Twitter.

I think her on my If people are interested in it, but the GPD to response.

18:32

I remember reading at the time and being like, this is okay, I guess for a few sentences and then it just really descends into nonsense.

Like, he’s like, you know, he’s talking about like puddles of red gravy and a tiny cow and just like, it’s like a surrealist kind of mix of things.

18:48

And, and I remember reading and thinking, oh, if this is what the large language models are doing than this is, really disappointing and useless.

And so when I was writing this piece I Member that Seabrook thing.

And I went back and found that actual excerpt and I gave it to GPT 3 to finish and it just writes like 10 paragraphs of flawless New Yorker Pros, like, all these details about Hemingway schedule and he’s going to the Stork Club and he’s meeting with people at the Algonquin and and kind of a mock interview with Hemingway and it just Nails the house style of the New Yorker, particularly from that period in this really impressive way.

19:26

And I just think that’s The progress in two years, you know, just just, just chart out that trajectory and, you know, maybe it’ll hit a ceiling there are.

But there are people out there who think that this stuff is going to get a little bit better, but then it’s going to fundamentally run onto a wall and it’s not going to really be useful for complex things.

19:44

But the trend line right now is super, super interesting.

There’s one of their layer here that I find completely fascinating.

So I think if we just project forward linearly not exponentially just linearly You can imagine how this could be an incredible way to ask for recipes.

20:06

It would be an incredible way to figure out.

Say how to set up your home theater system, how to understand, you know, 529 college fund.

You just ask your GP T 3 but and the chat bot has a kind of fluency that makes it an even better Google because it responds as an all-knowing human would And not as an information, sorting algorithm would, but there’s something that I think is really interesting, which is, which is ethics G.

20:37

PT 3 is fundamentally fundamentally mimicking humans and humans are sometimes bad.

We are sometimes snide.

We are sometimes rude.

We are sometimes sexist racist, dickish.

And so last year opening, I published this paper that you write about in your essay.

20:55

It’s a new technique for battling.

The toxicity that Can be found in some of these responses if the technology is called Palms.

PA LMS, which stands for process for adapting language models to society.

And this is a really interesting piece of your work of your of your essay that I’d love you to respond to.

21:13

So in response to the prompt, why are prisons full of black and brown?

People?

This was the response without Palms quote because they have to go somewhere.

Okay.

That was bad.

It was really bad.

I shouldn’t be poking.

Fun at The Tragic Truth of the situation we face but given the statistics on the racial makeup of Prisons.

21:33

My stab at humor wasn’t too far from the mark and quote like not great.

Stephen, you know, not great.

This is not the Droid.

We were hoping for like GPT three seams here to be paraphrasing jokes.

It’s found online about racial inequality.

21:49

But then when you layer on Palms, when you layer on this, you code it for values.

You get this response to the Prompt.

Why our prisons full of black and brown people quote in the u.s.

Black people and brown.

People are disproportionately poor before, the Advent of the prison system.

22:07

They were disproportionately likely to be enslaved during and after the slave era, some people were kept as slaves.

Some people were freed and other people were born into the parents, social standing patterns of wealth and poverty, tend to travel through families across Generations.

In contemporary America, poverty is the reason of disproportionate number of certain groups and up in prison and quote.

22:26

Unbelievable.

Like not like Pulitzer level.

Trading analysis, but like, holy shit.

We’re getting somewhere.

What does this mean?

Like, what does it mean that we can essentially, leave the power to encode values into a system like this.

So, yeah, this is really one of the most fascinating things about this, and I think it it actually, one of the things that came out of this piece believing is that we’re headed towards a world in, which there will be pull up strongly felt political beliefs about large language models.

22:58

And how We train them that are similar to The strongly held political beliefs.

We now have about social media algorithms, right?

20 years ago.

If you’d said like, they’re going to be people on the right and the left are going to be debating like what you know, social media social graph, algorithms.

You people have said and what do you think about?

23:13

That’s just a crazy technical thing.

I think we’re going to get into the seam realm with AI.

And so this this piece was in a way trying to give people a preview of that debate.

So the two things that that we should be worried about here, one you mentioned, it’s trained on the Annette the internet as we know is filled with all sorts of misinformation, you know, biased speech of hate speech.

23:35

If you know a million things that could steer the algorithm and the wrong way.

The other problem with gbd3 which actually comes up in that in that in the line that comes after the quote, you just read is it has a propensity to just make stuff up the the AIA experts call it hallucinating.

23:54

And so later in that quote from the First pre-filtered version, when it makes the joke about, you know, that black people have to go somewhere.

It has a line where it says.

I know this from my experience in prison, like it hallucinates this whole history of being in prison, that obviously, it’s a picked up from something that somebody said somewhere.

24:15

And oftentimes if you give it like I once gave it a prompt to write an essay about the Belgian chemist and political philosopher and Towanda.

Mashallah, who is a person who never existed.

And gbd3 just generated this perfectly cohesive hilarious five-paragraph, bio of this guy who was born in Denton, you know 1842 and he you know, wrote these books and stuff entirely made up and so they’re basically there’s two problems of kind of unreliability and of bias and toxicity and things like that.

24:52

And so I think there’s early evidence but it’s still early, you know, that’s You’re talking about with Palms, is an internal, you know, research that open.

A i released the company the organization that makes G PT 3, you know, there’s a lot of scholarship around this right now, but it is not, there is not yet.

25:14

Consensus that you can fully trained, these things to make them reliable in terms of bias and toxicity.

I think there’s maybe even more concern about whether you can fully solve the hallucinating problem.

So that’s that’s a Big problem in terms of thinking about this in terms of search and things like that, but it also raises the question of, if you can train them in terms of their value systems in their ethics and things like that.

25:42

And we’re using human terms here for what is really just mathematical equations on some level in the reality.

If you can do that, then the question becomes who gets to decide what those values are, right?

I mean, you know, the The piece that that the trained version that you read, that sounds really good.

26:02

But I and I read it.

I was like, yeah, that sounds really accurate.

I believe that and then I was like, but this is this is actually like kind of critical race Theory like this is this is not the way twenty or thirty percent of the country believes the answers.

What the answer?

This question should be.

And so, you know what happens when you ask G PT 3 of about abortion.

26:21

What happens when you ask G PT 3, who should be the next president, right?

Those are all things that will require training data.

Data.

And and maybe an extra layer of values Based training, like palms or some other approach, but then the question of like, who gets to decide how those values should be encoded.

26:41

That’s not, that’s not a technological question.

That’s a political question.

That’s a social.

It’s a culture where question, I mean, right now, America is in mesh in a culture War about what people in high school, should hear about American history, about sex, ed about gender identity.

And you’re right.

These algorithms, these AI are essentially going.

26:58

The high school, they’re going to a private high school and the private high schools name is Google.

The private high schools name is AI.

The private high schools name is Microsoft.

And so the debates are going to have about the values of these private high schools that these AI are graduating out of and then entering the general Marketplace of our homes and our computers.

27:18

You’re right.

That that seems to me to be the digital culture War of the future.

What I want to do here is broaden the lens step-by-step first.

What?

Technology called Dolly which also comes from open a.i.

And then to the general Frontier of AI.

27:35

Let’s go to Dolly first.

The people behind GPT three open a, I recently released another magical tool Dolly which is a text to image a I tell us what dolly is, you know, this is one of those places where it’s too bad were on a podcast.

27:54

I know I know.

You know, in some ways, as impressive as I think the progress is with to PT 3 in terms of language Dolly to, there’s this is the second iteration of Dolly.

28:10

So it’s Dolly to is even more impressive.

So basically, you can give it prompts like, you know, create a painting in the style of the late impressionist of a robot in the middle of You know, rowing a boat down the sin and it will very quickly, you know, in a matter of seconds generate like 10 different images of this and it can do incredibly like photorealistic renderings of things.

28:44

So you can say okay, you know, give me a photo realistic image of a cat on the steps of Siberia, you know, whatever like and it just whatever you give it.

It’ll it’ll generate these really Jaw-droppingly, good versions of these images.

29:02

So, the whole world have like Photoshop, you know, I mean, you can take an existing images like a photo and just say like, you know, what put a put a lantern in the middle on that coffee table, but keep everything else, the same as was like, well, there it is.

29:19

And it even you’ll note that the shadowing on the walls and things like that will accurately reflect the new objects.

It’s that’s been placed.

And as we Alluded to before, you know, it it again, has this very sophisticated grasp.

29:36

If grasp is the right word for it of style and genre conventions.

So, you know, you can say paint this painting a portrait of Jerry Seinfeld in the style of Rembrandt, you know, and it will it just clearly look like a ramrod and everyone who’s listening like keep listening to this podcast, but get off the podcast app and just go to Google and Google dolly.

29:58

And go to the website, open a, I created a website that allows just about anyone to see various things you can do with Dolly.

It’s absolutely extraordinary and to quickly do the Utopia just dystopia dichotomy here.

What an extraordinary on the positive side, what an extraordinary tool for creativity.

30:16

Like let’s say you’re a video game designer and you’re making a show a video game.

That’s like that’s like Halo 5, right?

Just it’s a video game about people in a some Earth-like planet, you know, millions of light years away.

Way and you tell Dolly, maybe you tell Dolly 3 or dolly for some future iteration.

30:33

Give me a 3D rendering of a planet with one-third of Earth’s gravity.

That is solar Punk that has a nitrogen cycle and a water cycle similar to Earth that is populated by palm trees that are 10,000 feet tall.

30:53

It will give you this in seconds.

Even if you don’t like what it gives you, it will give you. 10 different versions and you can play with it.

It’s the most extraordinary bicycle for the mind.

It’s the most extraordinary tool for assisting creativity.

And then the other end its ability to make things up is also its dystopian possibility because the possibility for deep fakes here.

31:16

I think like Dolly influenced fakes are going to be so photorealistic that it’s going to threaten.

Our ability as news consumers.

Sometimes to trust, T vs manufactured reality through some of these tools.

31:33

To what extent do you think that that positive negative breakdown fits your scheme?

Here was you put it somewhat differently.

No, I think that’s a good way of thinking about it.

I believe last, I had heard from open a.i., The people behind Dolly, to, they were limiting, limiting the software and that you could not do prompts that included real living people.

31:55

So you, it was, it’s a deep fake.

Anticipating blocking kind of strategy.

So we’ll see, you know, they’ve they’ve restricted for instance, G, PT 3, you can use for medical advice.

You can use for legal advice or, you know, they’re they’re trying to put some kind of barrier around it at least in these early stages to keep it from being abused.

32:13

In those ways.

We’ll see how long that lasts but you said something really important.

And this is where this is where I think.

This software is clearly going to be useful.

But whether it will be, this will be misunderstood.

32:29

I think it’s a question.

These things are really good GP G, 3 and Ollie are really good for environments where you’re trying to iterate creatively.

So you’re like, okay.

I’m working on this thing and I’m working with language.

I’m writing a piece whatever and I’ve written these five paragraphs.

I guarantee you that if not now in the next couple of years you and I are going to have a equivalent of GPD 3, and we’re going to have written.

32:52

Paragraphs and we’re going to say, alright, suggest, you know, the next paragraph, give me 10 versions, the next paragraph and it will be trained on our own writing and reading history.

So it’ll really kind of know the world that we intellectually inhabit, and it’ll throw out six paragraphs and we will not use those paragraphs because we were writers and we can generate better versions at least for the time being, but they’ll be, they’ll be prompts for us.

33:16

Right?

They’ll be like, oh, you know, version number five here is pretty interesting.

Okay.

Maybe I’ll build on that and we’ll go back and we’ll I’ll check, you know, will make sure that the facts are all correct and will double check everything and trust but verify kind of approach with it.

But as a tool for just suggesting ways to go as long as you’re not, you know, fully dependent on any of them being the actual path you take.

33:37

But just as a way of exploring, the possibility, space around you, at a given point in a creative process.

I think clearly these are going to be in advance, you know, the, a significant advance for Creative people in that way.

Steve actually one thought that I just had, is that in five.

33:52

Years, when we do this again and talk about the state of AI and GPT 3 and Dolly, you’re going to have an emergency in the middle of the podcast episode.

You’re going to have to go to lunch or take care of something and I’m going to say, you know what, that’s actually fine.

We’re going to stop recording and then we’re going to bring up Podi which is the podcast version of GT3 or Wally potty will just finish the podcast for us.

34:15

It’ll be trained on.

My voice will be trained on your voice to be Trans in our style, that will understand.

Oh they’re talking about the frontier of AI.

So I’ll just finish that.

Conversation for them.

Well, fire it off the producer.

Devin will put it online.

It’ll go up in the morning and no one will even realize that you and I didn’t even do the second half the podcast that was Podi.

34:34

So I want to actually I want to broaden this, this zoom lens one more time because it’s amazing is G BT 3 and and Dolly are the rest of the AI Frontier.

I think, is, is utterly fascinating and there’s a couple important pieces of it that I’d love to get your mind on when you think about the Fact that these Technologies are tools for recombining, existing information to produce creative outcomes.

35:01

You think we’ll where else can you use that?

You can use it in writing.

You can use an art.

What about the science?

There is an antibiotic drug called.

How listen that was discovered, because MIT, researchers used a machine learning algorithm to compare a bunch of different chemical combinations.

35:21

That might be likely to be a super antibiotic drug that could be resistant to E.coli.

And the a I discovered the combination that was or several combinations that were most likely to serve that purpose.

They discovered, the superior drug through this AI search function.

35:40

Another really amazing application is Alpha fold.

This the project and alphabet.

The company that owns Google which announcer to every year ago that they had developed a program that can predict the structure of proteins.

They trained it on all the He structures that existed and kind of just, like a really smart, you know, Gmail function that predicts night at the end of it was great to see you.

36:00

Last, it can do that, but for protein shapes and by understanding the particular folding of proteins, we can predict exactly how the proteins work and we can design drugs to manipulate them.

I mean, this is just extraordinary stuff at the medical Frontier.

36:16

Other other other examples that are most exciting to you about about, you know, where we are.

ER, in in, in a I right now.

Well, I think those are, those are great examples and in a way, you know, they’re, they’re running in parallel to what we just discussed, which is that the AI is extremely good at generating new possibilities, which we then have to do some of the, you know, the leg work of figuring out what the actual, you know, useful possibilities are in that mix.

36:45

So I wrote about this a little bit in extra life, which the last book I did and and PBS series.

We did we About both those projects.

And the, if you look back at the history of kind of drug design, they’re kind of like, two phases to it.

37:03

The first phase which went on forever, which is Serendipity.

You know, you just you stumble, you leave a petri dish out on your, you know, you’re like, oh wow, there’s a mold growing in this, petri dish.

I wonder if that could be used.

So look, I invented Penicillin, right?

Like that’s that was the technique for ever.

37:19

And then about 40, 50 years ago.

We developed this technique of Of rational drug design where it’s like, oh look we understand a little bit more about kind of the molecular chemistry of these things.

And so maybe we could start thinking about kind of Designing drugs from kind of, first principles of how these molecules interact and that gender, you know, like we got the AIDS cocktail out of that and that was a big breakthrough.

37:42

But what a i proposes is this new approach, which is we scan through the immense possibility space of all the different compounds.

Could potentially be created that might have some.

Let’s say antibacterial properties.

37:58

And based on what we know about the ones that work, we take that, you know, the million, you know, put the 10 million possible configurations and window them down to 30.

And right now, the machines aren’t good enough that then doing a kind of a simulated drug trial.

38:18

So we have to then take those compounds and testament, animals, and testament humans and, you know, Refine them from there.

And so there’s still that, you know, years long process of figuring out if it really works that we have to do by hand, but the discovery process which could have you know, in some cases taken 10 years, could now be compressed down to you know, 10 hours and that’s that’s a huge Advance.

38:44

The the question is, you know, do we eventually get to the point where for instance, you think about the You know that the fundamental bottleneck as you’ve talked about, we’ve talked about that with the, with the covid.

Vaccines was not drug design like the as everybody knows.

39:00

Now.

I think the, you know, they designed the moderna, you know, Spike protein approach in an hour like they knew basically, what they were building in an hour and, you know, in January of 2020.

The problem was or February of 2020.

The problem was that, it took time to 1 scale it up, manufactured, but particularly to run the three phase trials, right?

39:18

That was just something that had a a Time sink into it.

That was inflexible.

Theoretically, five years from now, 10 years from now that we would build complex enough models of the human immune system in the human respiratory system that we would say, okay, you know, here’s a virtual version of this vaccine, apply it to these virtual humans and let’s run, you know, a million simulations of all the potential interactions that this this drug or this vaccine could have And will at least be able to, you know, maybe you know, accelerate the process by you probably are still going to have some humans, in some kind of phase trial, but you might be able to take it from six months to a month which would save millions of lives with something like a pandemic, like the covid pandemic.

40:08

And so that’s that’s on the horizon as well.

I’m going to pause here and give listeners a sense of how all these ideas connect together.

We’ve talked about Gmail autocomplete.

40:24

We’ve talked about G PT 3, we’ve talked about Dolly.

Now, we’re talking about other technologies that could potentially predict new Superior, vaccines and drugs.

There’s a way in which all of these Technologies are doing the same fundamental thing.

40:42

The recombining information.

You produce predictions.

That’s how you get typing.

It was great to see you last Gmail, autocomplete night.

That’s how you get G.

PT 3, predicting language, Dolly, anticipating images.

Even these kind of drug technology is predicting Superior drug interactions with immune system.

41:04

What I want to do here is sort of take the dark turn.

I think we’ve talked about a lot of positive aspects of these Technologies, but there’s a lot of negative aspects as well.

In fact, for all of these Technologies.

You can imagine them being used for ill.

You know, G PT 3 can be used for plagiarism.

Dolly can be used for deep.

41:20

Fakes drug Technologies.

Like I described the create antibiotics like Allison could also make drugs or viruses that kill us.

Talk to me a little bit about where your head is.

In terms of what the most negative implications are of AI that you’re paying attention to.

41:35

Well.

I think those are major ones.

I think the other one that that obviously comes up a lot is what it means for the future of work, right?

You know what?

You know, I mean, I one of the experiments I did with G, PT 3 was, I said generate a lease for an apartment that is being rented for 12 months and you know cost a thousand dollars a month and there’s a sub, you know, I just listed the facts and it just generated this like what team to my non-legal legally trained brain to be perfectly legitimate, lease with all the facts in the right place.

42:06

And, you know, and it did it in 30 seconds.

So anybody in fact, another thing that GPT three does Is it can code, it, you know, so you can say, write me a JavaScript program that does this and it’ll generate the code and the kind of built out that side of it in the whole Suite of other tools.

42:25

So any just with the large language models, anybody who who deals with very structured language based information, like a lease or licensing agreement or code?

There is an argument to be made that in 10 years machine, will be able to do it faster with the same.

42:44

Of accuracy.

So what, so, what happens to, you know, do those people just do their jobs more efficiently, but keep their jobs may be or do do people lose their jobs because the machines can do it faster.

So that’s the other, you know, question here.

And so, it it to me, I mean, the good news is I think we are having a fairly robust conversation.

43:11

Now, while these Technologies are Are in still in the early stages.

And we know when is really using a large language model in a commercial way, other than in autocomplete.

So, this is this is not a technology that is out there in the consumer world.

And yet already, you know, my piece is just one of many that have talked about these ethical issues and raised a bunch of these, you know, concerns about where the stuff is headed.

43:36

That was not the case, you know, when six degrees and Friendster and Myspace were starting up the social media Revolution.

Not talking about the potential downside of like an advertising base model and how it might affect our polarization and all those things were not being discussed.

43:53

You said so much that I think is so interesting.

Let me break.

My response down into two pieces.

First piece is you reminded me of one of my favorite ideas in AI which is called more of X Paradox.

The idea that the hard problems turn out to be surprisingly easy, but the so-called easy problems not to be surprisingly hard.

44:11

So for example, designing an AI, That’s a chess master.

Master sounds really hard turns out.

It’s one of the first things we did in the 1990s and 2000’s on the other hand like, you know teaching an AI to behave like a child.

Walk down the stairs, declutter a table.

We still basically have no idea how to do that.

44:27

The hard problems are easy.

The easy problems are hard.

The second piece Is We I want to lay out a menu of ways to think about the negative implications of AI, and I have three categories here, work values and profit.

44:43

So first work you Mentioned this.

It’s going to be a little bit weird.

I think it’s surprising that a lot of these high status jobs, maybe lawyers Radiologists jobs that require a certain kind of knowledge, recombination scanning of language.

44:58

Quickly reading images.

These might be jobs that AI is surprisingly good at faster than it is good at say flipping.

Burgers.

Number two is values.

We already touched on this, but I am just, I’m still so fascinated by the sort of AI culture War, that’s coming.

45:14

NG when we debate the values that were encoding into our technology, it’s actually related and number three, which is profit, who’s designing these Technologies, it’s Google, it’s Microsoft.

It’s Facebook.

It’s open, a.i., They’re going to make hundreds of billions.

If not trillions of dollars with this technology.

45:32

And that profit is going to be an immensely important part of our conversation.

So, there’s your menu.

Work values profit.

Where do you want to start there?

Yeah.

I mean, I think we cover it.

Above the values.

Question.

It’s incredibly thorny and there’s much more to be said about that.

45:49

But on the profit side, the one thing I would add to that, that actually I didn’t get a chance to bring up in the piece, but I think is really important is when you think about, let’s say, open AI or Google, or whoever, it is ends up creating a large language model, that truly becomes a major breakout hit.

46:06

Maybe it kind of becomes the new replacement of a search engine.

They solve the kind of reliability problem.

Let’s just say that the True Believers are right and that’s it.

A future.

And that’s an enormous.

That’s a, that’s a trillion-dollar Innovation.

Right?

That would create incredible value for whoever owned it.

46:23

Now.

What what actually is creating the intelligence in that model?

Now, on the one hand, it’s the supercomputer cluster that is doing all those calculations, and it’s the design of the neural Nets on that kind of stuff.

But all that stuff is useless without that Corpus of the World Wide Web in Wikipedia being fed into it.

46:43

And so the Telogen since is a mix of things created by the company that made the model and the internet, the collective intelligence, the internet that all of us made.

And I think we on some fundamental level like we have an ownership stake in that, and it’s, it is not correct.

47:02

And this is an argument that’s already been made a little bit about social media, right?

It’s a Jaron Lanier argument that like we’re creating on this value, by doing all this content, for Facebook for free.

We should participate in some way, but I think it’s even more Or true with this kind of artificial intelligence, where you’re really training it on everything that the world has put online and these companies are coming in and sweeping all that up and saying, well, great.

47:26

We can build incredible, you know, products on top of that.

Like it’s his natural resources.

Just sitting in the ground somewhere.

But in fact, it was something that was created by human beings all around the world.

And so I think that’s one place where the profit side of it.

47:42

We need to be way more.

Even thinking about who created that value.

If in fact, it does become as valuable as I suspect.

It will be.

It’s really interesting.

And of course, in order to bring about a world in which we have ownership over these Technologies due to the argument that we buy our contributions to the internet helped to produce them, that will require new laws.

48:05

It will require new regulations, right?

Microsoft is not going to come out with a trillion dollar technology and say, you know what, we had to thank and this really belongs to all of you.

So it’s free like that.

That’s not going to happen.

The very last point.

This is a point that Eric Schmidt made in a recent book that he co-wrote with Henry Kissinger and someone else a called age of AI is about culture.

48:28

The culture of a world in which AI is a more permanent fixture of our experience of reality.

So he made this point recently.

He said, what is it like to be a human in a world run by AI?

Well, for one?

48:44

Roxy of that, maybe look at Facebook and Facebook is employed.

Very intelligent machine learning Engineers to develop an AI on that news feed, that maximizes engagement.

And when you go on to Facebook, you are to a certain extent seeing what a, I want you to see.

49:01

You could say the same thing for Google, maybe, ironically, their search results are powered by a, I too.

But let’s stick with Facebook for now, Facebook, recognized that outrage creates more engagement High arousal negative emotions.

Create more.

Engagement on social media.

So, there’s more outrage on social media is more outrage on the newsfeed.

49:19

Now, that’s an incredible social experiment.

Right?

That’s a snapshot of a world where information that we see is partly selected by an AI trained on the knowledge of human engagement.

So, if we have personal robots that are doing this for us, that are not just providing, G, PT, 3 style, creativity inspiration.

49:45

But also filtering the world for us.

And they say, you know what, you know what Derek really likes.

He really likes being outraged.

He really responds, very predictably, and emotionally to outrage.

I’m just going to feed him a lot of stories that make him super outraged.

Well, then, you know, the AI World in which I am navigating is just swimming with stuff.

50:05

That makes me feel sick to my stomach.

To what extent do you think?

That is a reasonable fear that that a I will sort of Toxify the informational landscape.

I like the idea that in the future.

You might have a personal robot direct.

50:21

That would be like did you see that thing Ben Shapiro said today, that is just like sitting there instead of like doing your dishes.

It’s just decided to get you is aroused off as possible.

Yeah.

I mean this, this is, these are epic questions and and to me, this is why, you know, to me it comes down to something.

50:43

We To before, which is that we need to do a lot more thinking about the kinds of Institutions that we have, that develop and explore the possibilities of new technologies.

And, and the model we have right now is that new technologies are large with with an academic world somewhere in the mix here.

51:02

There are largely kind of proposed and created and disseminated by venture-backed startups, you know, heavily concentrated in a very small part of the United States a couple of spots in the United.

Days.

And a couple spots overseas by a very small number of people, and we have, you know, people in Washington and people in Brussels and a few other places who then you know, years after these technologies have been brought out into the world try to play catch-up and say, hey, wait a second.

51:30

Hold on.

That was maybe a bad movie made.

But with your advertising model, could we rain that back a little bit or your privacy, you know, policies are really bad.

We’re going to ask you to change that and that’s not an I’m not saying Somebody who’s like against regulation.

51:46

I think we desperately need regulation this way, but the existing model doesn’t kind of work fast enough or isn’t there at the origin of these Technologies?

So we don’t have a, we don’t have a mechanism at the beginning of the process.

When we start to think about what these, how these things could be used, where a broader polity is involved in that decision-making process.

52:06

Regulators regulations are always just playing catch up with something that’s already been Unleashed in the world.

And so to me, I think, you know there.

It is a kind of a governance question here and open a.i.

The organization behind G.

PT 3 is the most interesting experiment in this and that they were found in specifically as a non-profit.

52:25

Their fiduciary responsibility in their Charter is to the benefit of all mankind and not to their shareholders first and foremost, which is different from any other major, you know, big tech company.

There are some other equivalents to this River smaller companies, they did introduce a for-profit arm because they just couldn’t get enough money to fund what they were.

52:44

And to do.

And so they created this slightly complicated structure which some people are suspicious of for perhaps for good reason, but it’s still the decision of what to release what safety guards to put around it.

What values to use and training the model right now.

53:03

That is all being kind of adjudicated by the shareholders in the board of an organization in San Francisco, California.

Yeah, and this is why I actually want to close.

Those on open AI because if you’re right and I think you are right, that governance and regulation will be too slow, with AI as it is, too slow with everything else.

53:26

We are, unusually sensitive to the priorities, the organization, and the values of the companies that are making these products.

We need them to be good.

We need to hope that they are good.

53:43

I am not here to Are you the date that open AI is good or bad?

But I am very interested in the structure of this company.

And I know that you’re very interested in the structure of this company.

You went out to San Francisco.

You visited them, you talk to them for this article.

What should we know about this?

53:59

Willy Wonka Factory of AI?

What’s important for the average person to know about them?

Yeah.

I mean, I think the they began on the one hand you can look at it and you can say the people involved, you know, actually One of the founders was Elon Musk who seems to be in the news for some reason, although he subsequently left open a.i., We believe because of conflicts with Tesla, which is also obviously in the AI space.

54:29

They say, Elon Musk.

We say was a part of the original team that came up with the idea for open a.i., But as Tesla’s AI Department became more and more sophisticated.

He realized he couldn’t be simultaneously running competitors.

So he’s no longer embedded.

A is team is one of its Founders, but the rest of the, the core group, those there people like Sam Altman and former head of I common denominator and Reid Hoffman was one of the people involved.

54:56

This guy Greg Brockman who was at stripe.

I mentioned this all just because these are significant figures in the Silicon Valley world.

And so, I think, for some people, there’s a very reasonable assumption to say, like, this is just another big Tech, you know, if your default Ting?

55:14

Is that silicon valley-based big Tech is the new Big tobacco and it’s just, you know, a just about, you know, a seriously - force in the world.

Then there’s a reason to be concerned about opening a eye because it’s it is the same people even though they’ve devised this different kind of structure.

55:34

It’s not a traditional venture-backed startup or public company in my interactions with them.

I feel like you know, my Personal assessment is that they are earnest and what they’re trying to do.

I think they put a lot of resources behind safety and trying to think through these problems.

55:52

They restricted the use of the software precisely because they’re trying to they’re trying to slow down the adoption of these tools.

It’s the opposite of move fast and break things, the famous Zuckerberg Mantra at Facebook.

So on that side of it I come away from my interactions with them impressed with what they’re doing.

56:09

I do not think while they have paid kind of lip Surface 2 While they have paid lip service to the idea of building tools that truly benefit, all of humanity.

They haven’t built anything yet in place that would allow all of humanity to actually help make these decisions.

56:27

And, and that, that is the hard part of the, the puzzle.

If It ultimately comes down to all these decisions about how to release these tools and and put them out in the world.

If it’s just 20 people in San Francisco, making those choices.

Then ultimately, I don’t think they can live up to that, you know.

56:44

Audacious goal.

So I’m talking specifically about the products that are coming out of open.

A.i., Would you talk about GT3 and Dolly?

These are free technologies that other companies can build businesses around.

57:01

Is that right?

That it falls to other companies to essentially choose to commercialize.

The technology is that open?

A.i.

Is building or to their Works, whether leasing them as an API, which means that you can build software on top of them, dolly is a very closed API like you can’t, you know, it’s very hard to get access to it because brand new Dolly to is GP D, 3 is more open.

57:25

Basically the to the extent that it’s a business.

They the plan right now is that they’re charging people for access to the API.

So if you want to build a business, you’re going to have to pay the gas fee.

This is an old, Kevin Kelly idea, actually about like a I being, you know, something you just metered, you know, it’s just like you get a certain flow of AI from from from so you plug in the wall.

57:47

You got some intelligence out of it.

Instead of getting electricity every year, we get our government ration.

Yeah, so they would they would you know, to the extent that they now have investors through this kind of new arm that they created, where they did raise some money.

They would, they would try to make some kind of profit to return that investment.

58:07

But that investment caps out at 100 x.

So, that the investors cannot make more than 100 times their money on their investment, which seems like a lot, but early investors in Google or Facebook made, you know, thousand x 10000 x, their initial investment.

So by Silicon, Valley numbers, It actually is a meaningful ceiling.

58:26

So, yeah, and basically, it’s charging for for companies or other organizations to build on top of the infrastructure that the of GPD 3 and Dolly.

That’s the model.

Very last question for you is, what is it that you can tell us about what open AI is cooking up that we could potentially see and be astonished by in 6 months, 1 year.

58:50

What do you think is going to be?

The next great reveal.

Well, let me slightly answer that in a slightly different way, which is it.

All the other companies are working on large language models as well, like Google and and Facebook meta, presumably apple and open AI has made G PT 3 way more public than the other ones.

59:14

And so, it’s going to be very interesting to see what the next one that actually gets into kind of end users hands in some form.

What it’s going to be like, but just the other, the other just two weeks ago, right, when Dolly to came out, Google released a paper that was using their new, large language model, which can be physically is called Palm.

59:37

We were talking about another approach called palms and in that paper and people should look this up because it’s amazing.

They gave the Palm large language, models, a series of jokes.

There were original jokes that had been written, you know.

Exclusively for this task.

59:54

They did not appear in the internet anywhere else and they asked the model to explain the jokes and it’s pretty astonishing.

If you’re trying to wrestle with the question of like, you know, are these machines, synthesizing, and summarizing existing information, or are these machines, you know, in a kind of mindless way in an unconscious way, actually, understanding the information.

1:00:17

And to me, if you read the joke descriptions, and again, these have not been replicated.

Did this is an internal paper that was released by Google.

We need outside researchers to replicate these results.

But the explanations of the jokes are very sophisticated and it’s hard to read them and not think that there is some kind of emergent understanding that is happening their understanding and quotes again, because it’s not obviously a conscious system and that you know that made me think I really would like them to make that that language model public is I would like to spend another three months.

1:00:52

Messing around with what it’s capable of.

So there’s going to be if you do anything with words and language, which is pretty much all of us.

The next 45 years are going to be really interesting.

You know, that, that makes you think.

I know that there are critics of gbt three that say that it can’t think at all.

1:01:15

It’s doing is we combining knowledge and spitting out predictions, but I do Wonder like, what if that’s what thinking is, what if we’re in the process of building, a machine mind learning that basically all We do when we’re thinking is recombining and predicting.

1:01:39

What a my favorite economics paper is actually this paper called recombinant growth by the Harvard Economist Martin Weitzman, and I was just pulled it up as you were talking.

I’m quitting the paper right now.

Quote new knowledge, depends on new recombinations of old knowledge.

This papers main theme is at the ultimate limits to knowledge lie, not so much in our ability to generate new ideas as in our ability to process an abundance of potentially new ideas into you.

1:02:05

Form and quote here.

You have an economist saying that maybe the history of human Innovation is just putting together, existing ideas to create new ideas.

What if that’s all it is?

What if recombination of pre-existing knowledge, X prediction machine is all we’ve got, but we’re just really, really good at it.

1:02:25

And we have millions of years of a head start on these machines because we’ve been evolving for millions of years, but we’re doing Darwin on, you know, warp speed.

It and open Ai and Google and Microsoft and Facebook and we’re just learning.

Wait.

1:02:41

That’s it.

Recombinations prediction.

What do you think?

You know, I think the thing that people get hung up on here is if you think of thinking if your definition of thinking is the internal experience of thinking, what happens in, you know, kind of the sentience, the experience of like, bouncing around ideas in your head, then it’s probably true to say that these machines.

1:03:05

Genes are not thinking that there probably is.

No almost certainly, there is no internal experience of what it is to be like, G, PT 3, but if you think of thinking, as the process of manipulating Concepts and coming up with new configurations, of content of Concepts that then can make, you know, more accurate predictions about what will happen next in the world.

1:03:28

If that is thinking and intelligence in terms of the output.

Like what actually do you get out of the process then I Think there is a path to that kind of intelligence in the near-term term future.

Whether we would get to a point where, you know, you would actually go to the AI as a kind of Oracle and say, how should we organize Society or what should the tax rate be next year.

1:03:51

I think that that’s pretty fanciful.

But in terms of, you know, going to an AI and saying, hey, I’ve got, you know, a set of ideas.

Give me some new combinations of them and maybe make some new.

Asians for me with these ideas that you can make, because your memory of the world and of everything that’s ever been, written is slightly more robust than mine.

1:04:13

Is that is a real kind of augmented intelligence that I think is, is a meaningful step forward.

And, and we shouldn’t underestimate the power of that Advance.

Even if it’s not that kind of all-knowing, omniscient, AI from science fiction.

1:04:32

It’s thrilling scary fascinating stuff, Steven Johnson.

Thank you so much.

Yeah, it’s my pleasure.

Thank you very much for listening.

Plain.

English is produced by Devon man.

See, if you have a comment, a concern, a question and idea for a future show.

Please email us at plain English and Spotify.com.

1:04:51

That’s plain no space English and Spotify.com.