SXSW - OpenAI Co-founder on ChatGPT, DALL·E, and the Impact of Generative AI | SXSW 2023


Hi guys. Happy South by. What a way to kick this off. One of the things I love about South by Southwest, and I've been coming for the last decade, is we're always talking about what's the next big thing in tech, and I would say artificial intelligence and ChatGPT couldn't be more relevant. So glad to be sitting here with you. How many folks in the audience have used ChatGPT?

Okay, so it feels like this is an audience that... that's good. I can be very specific on this stuff, and remember, you guys ask questions; I'm going to leave 15 minutes at the end to get to them. So I want to get to OpenAI, and I want to talk about the company behind ChatGPT, but I would love to start with ChatGPT itself, so let's go. It's November 2022. You guys release ChatGPT.

This is an AI chatbot developed by OpenAI, built on top of a large language model called GPT-3. You release it in November 2022. Over 100 million users in two months; this becomes the fastest-growing application in history.

Just for some perspective, it took Facebook (Meta) four and a half years to reach 100 million users; it took TikTok nine months. Why was ChatGPT the killer app?

Yeah, I actually think about this question a lot, because for us, the technology behind it, the model behind it, was created almost a year prior, so it wasn't new technology. But the thing that we really did differently is that we did a little bit of extra work to make it more aligned, so you could really talk to it and it would do what you wanted. And secondly, we made it accessible, right?

We built an interface that was super simple. It was kind of the simplest interface we could think of. We made it available for free to anyone, and I think that the thing that was very interesting was as this app really took off and people started using it, we could see the gap between what people thought was possible and what actually had been possible for quite some time.

And I think to me, this is actually maybe the biggest takeaway is that I really want us as a company and as a field to be informing people to make sure that they know what’s possible, what’s not, kind of what the forefront is going to look like and where things are going because I think that’s actually really important to figure out how to absorb this in society, like how do we actually get all the positives and how do we mitigate the negatives?

Like in the past, I mean, should we talk about Tay? We won’t talk too much about Tay, but like chatbots are hard to put out there, but there was something about what you put out there, and you talk about that gap, right, that it didn’t implode, right? It learned a lot, and all of a sudden, it’s almost spurred this whole new era of everyone saying, could we do this? Could we do this? Could we do this? Why now?

Yes. As we were preparing ChatGPT for release, the thing I kept telling the team was: we can be overly conservative in terms of refusing to do anything that seems even a little bit sketchy, and that's fine. The most important thing is that we don't have to turn it off in three days.

Yeah. Were you worried when you pressed publish on this?

You have to be worried. How could you not, right?

Yeah.

Right, so we’ve been doing lots of testing, right? We have our own internal red teams. We’ve had beta testers on it, hundreds of beta testers for many, many months, but it’s very different from kind of exposing it to kind of the full diversity and adversarial and sort of beautiful force of the world and where people are going to apply it.

And so for us, you know, we have been doing iterative deployment for a very long time, right? Ever since June 2020 or so, when we first released a product, an API so people could use these language models.

We’ve been making them more capable, getting them into more people’s hands, but we kind of knew this was going to be just a different dimension.

Yeah.

And it was our first time building a consumer-facing app. And so we definitely were nervous, but I think that the team really rose to the occasion.

Yeah. Well, I want to look, I definitely want to talk about the future of ChatGPT because I know a lot of folks, especially we have a lot of users in the audience are curious about it.

But let’s look, I want to start at the, I want to go to the past, right? Because the company behind ChatGPT, Dali, is OpenAI.

And this is, it’s interesting because in the Silicon Valley world, you have like a sexy company comes out, everyone’s talking about it. OpenAI was just kind of the opposite.

It just was kind of like hanging out in the background until this thing came out. Until you, you know, you put out these products that could shift culture and start all these questions.

And so let’s go back. It’s 2015, July, and you’re in Menlo Park at a fancy hotel called the Rosewood. I don’t know if anyone here has been to the Rosewood. It’s certainly a scene.

You’re sitting there. Who’s there? What are we eating? Why are we there? What’s the topic of conversation? And I promise I’m going somewhere with this.

Well, I couldn’t tell you what was on the menu that night.

Right.

But yeah, we were.

I just want to know what Elon Musk was eating during this conversation.

Uh-huh, yeah.

Okay, sorry, I got ahead of it. Go ahead.

So we were having a dinner to discuss AI in the future and kind of just what might be possible and whether we could do something positive to affect it.

And so my co-founders at OpenAI, so that’s Elon, Sam, Ilya, and other people were all there.

And kind of the question was, is it too late to start a lab with a bunch of the best people at it?

Right, we all kind of saw that like AI feels like it’s going to happen.

It feels like AGI, really building human level machines, will be achievable.

And what can we do as technologists, as just people who care about this problem, to try to steer in a positive direction?

And kind of the conclusion from the dinner was, it’s not obviously impossible to do something here.

You felt a sense of urgency.

I did.

Why?

For sure.

I think the thing that is easy to miss here, right, is that now people see ChatGPT and they say, wow, suddenly you feel the possibilities.

Right.

And you both see what’s possible, like not science fiction anymore, right, actually usable today.

But it’s still hard to kind of extrapolate, to really follow the exponential, to think what might be possible tomorrow.

And I think that the mode that I have been in for a long time has been really thinking about that exponential.

Like I remember reading Alan Turing’s 1950 paper on the Turing test.

And the thing that really stuck out to me, and this was right after high school, was he said, look, you’re never going to program a machine to solve this problem.

Instead, you need a machine that can actually learn how to do it.

And that for me was the aha moment.

The idea that you could have a machine that could solve problems that I could not, that no human could figure out how to solve.

Like that so clearly could be so transformational, right?

There’s all these challenges, global warming, you know, just like medicine for everyone, like all these things that are kind of out of reach.

Yeah.

I don’t know how we’re going to do it.

But if you could use machines to aid in that process, we want to.

And so I think we all kind of felt like, OK, the technology is starting to happen.

You know, deep learning is an overnight success that took 70 years, right?

It’s like, you know, 2012, there was a big breakthrough on image recognition.

But it really took another decade to start to get to the point that we’re at now.

But we could all see that exponential.

And I think we really wanted to really push it along and really steer it.

And I mean, you at the time. Before this you were the CTO of Stripe, this little company called Stripe.

And you really felt that you, Sam Altman, Elon Musk (we can get into all this later) could build something better, something that was pro-humanity and not anti-humanity, which is always that fine line in technology, which I think the last decade has kind of taught us.

Yeah.

And I would quibble a little bit with that. I don't know that, at least for me personally, I viewed it as us building something better.

You know, in the sense of like, you know, there’s lots of other people who are in this field doing great work, too.

But I wanted to contribute, you know, and I think it’s one thing that’s actually very important about AI and something that’s very core to our values and our mission is that we think this really should be an endeavor of humanity.

Right.

If we’re all thinking about, well, what’s my part of it?

You know, like, what do I get to own?

I think that is actually one place where the danger really lies.

And so tell me about how the company was and is structured, because that was seven years ago now.

So take us behind the curtain.

I saw something Sam Altman wrote.

He said, we’ve attempted to set up our structure in a way that aligns our incentives with a good outcome.

What does that even mean?

Yeah.

So we are a weird looking company.

In what sense?

So we started as a nonprofit because we had this grand mission, but we did not know how to operationalize it.

Right.

We know that we want to have AGI benefit all of humanity.

But what is what does that mean?

What are you supposed to do?

And so we started as a research lab.

We hired some PhDs.

We did some research.

We open-sourced some code.

And our original plan was to open-source everything.

Right.

You think about how you can have a good impact.

Maybe if you just make everything available to anyone that can make any changes they want, then, you know, if there’s one bad actor, well, you’ve got seven billion good actors who can keep them in check.

And, you know, I think that this plan was a good place to start.

But, you know, Ilya and I, we were really the ones running the company in the early days, spent a lot of time really thinking about how do you turn this into the kind of impact that we think is possible and to something that really can make a difference in terms of just how beneficial AGI ends up being.

And I think that we found kind of two important pieces.

One was simply a question of scale.

Right.

You know, all the results that we were getting that were impressive and really pushing things forward were requiring bigger and bigger computers.

And we kind of realized that, OK, well, you’re just going to need to raise billions of dollars to build these supercomputers.

And we actually tried really hard to raise that money as a nonprofit.

Like I remember sitting in a room during one of these fundraisers and looking in the eyes of a well-known Silicon Valley investor.

Who is that?

I wouldn’t share the name, but he was like, $100 million, which is what we’re trying to raise.

He’s like, that’s a staggering amount for a nonprofit.

Right.

And we looked at each other, we were like, it is.

Yeah.

And we actually succeeded.

We actually raised the money.

But we realized that 10x that.

That was not going to happen.

I mean, if anyone in this audience knows how to do that as a nonprofit, like, please, we will hire you in an instant.

But we realized that, you know, that if we wanted to actually achieve the mission, that we needed a vehicle that could get us there.

And, you know, we’re not anti-capitalist.

Like, that’s not why we started a nonprofit the way, opening as a nonprofit.

Actually, capitalism is a very good mechanism within the bounds that it’s designed for.

But if you do build sort of the most powerful technology ever in a single company, and that thing becomes just way more valuable or powerful than any company we have today, capitalism is not really designed for that.

So we ended up sort of designing this custom bespoke structure.

It’s super weird.

Like, we have this limited partnership with all custom docs.

You know, if you’re if you’re a legal nerd, like, it’s the kind of thing that, like, you know, is like actually really, really fun to dig into.

But the way we designed things, the nonprofit is the governing body.

So there's a nonprofit board that kind of owns everything.

It owns this limited partnership that actually has profit interest, but they’re capped.

So there’s only a fixed amount that investors and shareholders are able to get.

And that there’s a very careful balance in a lot of these details in terms of, like, you know, having the board have a majority of people who don’t have profit interest.

All these things in order to really try to change the incentive and make it so that, you know, that the way that we operate the company is it comports with the mission.

And so I think that, you know, this kind of approach of, like, really trying to figure out how do you balance?

How do you approach the mission?

But how do you make it practical?

How do you operationalize it?

That is something that has come up again and again in our history.

And if we look back at the history of artificial intelligence, I mean, this is nothing new, obviously.

So, like, what is it about now that feels like a watershed moment?

Why now are all companies putting money into this?

Why now is this the thing that we all are talking about?

What is it about the technology now?

Yeah, well, I think the fundamental thing here is really about exponentials, right?

It’s like no matter how many times you hear it, it is still hard to impossible to internalize.

And when I look back, like, we’ve done these studies on the growth of compute power in the field.

And we see this nice exponential with a doubling period of, like, every 3.5 months, you know, as opposed to 18 months for Moore’s Law.

It’s been going on for the past 10 years or so.

But we actually extrapolated back even further.

And you can see that this exponential continues all the way.

Slightly smaller slope; it used to be Moore's Law.

But over the past 10 years, basically, people have realized, well, you can go faster than Moore's Law by just spending more money.
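To make the arithmetic concrete, here is a back-of-the-envelope sketch of what those two doubling periods imply over a decade. The 3.5-month and 18-month figures are the ones cited above; the rest is just exponent math.

```python
# Back-of-the-envelope: total compute growth under two doubling periods.
# 3.5 months is the AI-training-compute doubling cited in the talk;
# 18 months is the classic Moore's Law rate.

def growth_factor(months: float, doubling_period: float) -> float:
    """Total multiplier after `months` of steady exponential doubling."""
    return 2 ** (months / doubling_period)

decade = 120  # months
print(f"AI training compute (3.5-month doubling): {growth_factor(decade, 3.5):,.0f}x")
print(f"Moore's Law (18-month doubling):          {growth_factor(decade, 18):,.0f}x")
# roughly 20 billion x versus roughly 100x over ten years
```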

And I think that what’s been happening is we’ve been having this accumulated value, this slow roll.

Rather than trying to do a flash in the pan, like, just get rich quick kind of thing that maybe other fields have been accused of.

AI, I think, has been a much more steady incremental build of value.

And I think that the thing that’s so interesting is normally if you have a technology in search of a problem, adoption is hard.

It’s a new technology.

Everyone has to change their business.

They don’t know where it fits in.

But for language in particular, every business is already a language business.

Every flow is a language flow.

And so if you can add a little bit of value, then everyone wants it.

And I think that is the fundamental thing that really has driven the adoption and the excitement.

Is that it just fits into what everyone already wants to do.

Well, and also in 2017, you know, an architecture called the Transformer came out, right?

These large language models, and this idea that you could treat everything as a language.

Music and code and speech and image.

The entire world almost looks like a sequence of tokens, right?

If we could put a language behind it.

That was really an accelerant for a lot of what you’re building, too.
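That "sequence of tokens" framing is easy to demo. A minimal sketch, with bytes standing in for a real tokenizer and the music notation string being an invented toy example:

```python
# "The entire world almost looks like a sequence of tokens": text, code,
# and a toy music notation all reduce to integer sequences that the same
# next-token model could consume. Bytes stand in for a real tokenizer.
samples = {
    "text":  "The quick brown fox",
    "code":  "def add(a, b): return a + b",
    "music": "C4 E4 G4 | F4 A4 C5",  # invented toy notation
}
for kind, s in samples.items():
    tokens = list(s.encode("utf-8"))
    print(f"{kind:>5}: {tokens[:8]} ...")
```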

Yeah, I think that it’s, you know, the way they think about the progress, like the technological driver behind this,

is that it’s very easy to latch on to any one piece of it, right?

Transformer, definitely a really important thing.

But where the Transformer came from was really trying to figure out how do you get good compute utilization out of the compute hardware that we use?

The GPUs, right?

And the GPUs themselves are a really impressive feat of engineering that has required just huge amounts of investment to get there.

And the software stack on top of them.

And so it’s kind of each of these pieces.

And each one kind of has its time.

Like one thing that’s super interesting to me, looking from the inside,

was that we were working on language models that look very similar to what we do today.

Starting in 2016, you know, we had one person, Alec Radford, who was really excited about language.

And, you know, like he just was kind of working on building these little chatbots.

And like, we really liked Alec.

And so we were just like very supportive of him, doing whatever he wanted.

And meanwhile, we were off like investing in serious projects and stuff.

And we’re just like, you know, whatever Alec needs, like we’ll make sure he gets.

And in 2017, you know, we had a first really interesting result,

which was a model that was trained on Amazon reviews.

It was just predicting the next character, just what letter comes next.

And it actually learned a state-of-the-art sentiment analysis classifier.

You could give it a sentence and it would say like, this is positive or negative.

May not sound very impressive, but this was the moment where we kind of knew it was going to work.

It’s so clear that you would transcend it, just syntax, where the commas go.

And you’d move to semantics.

And so we just knew we had to push and push and push.

I mean, it always comes to Amazon reviews.

Who knew that this is the real story behind it?

Exactly, exactly. You always start small.
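For readers who want the objective he is describing in code: a minimal character-level language model sketch. This is a generic toy (the actual result he describes used a different, much larger recurrent model trained on Amazon reviews); the point is only that the loss is "predict the next character" and nothing else.

```python
# Toy next-character language model: the only supervision is
# "what letter comes next", exactly the objective described above.
import torch
import torch.nn as nn

class CharLM(nn.Module):
    def __init__(self, vocab_size: int = 256, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits over the next character at each position

model = CharLM()
loss_fn = nn.CrossEntropyLoss()

def step(batch: torch.Tensor) -> torch.Tensor:
    """batch: (B, T) byte values; the target is the input shifted by one."""
    logits = model(batch[:, :-1])
    return loss_fn(logits.reshape(-1, 256), batch[:, 1:].reshape(-1))

batch = torch.randint(0, 256, (4, 64))  # stand-in for review text as bytes
print(step(batch))  # cross-entropy of next-character prediction
```

The surprise in the result he describes is that optimizing only this loss produced internal features that tracked sentiment: semantics emerging from a purely syntactic objective.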

You know, every day there’s a new headline on how this technology is being adapted.

I just literally was Googling it yesterday.

It’s like the latest headlines are companies are harnessing the power of a chatbot

to write and automate emails with a little bit of personalization.

Another headline: how ChatGPT can help abuse survivors represent themselves in court if they can't afford otherwise.

We obviously know about Microsoft's Bing and disrupting search.

From the seat that you’re sitting in, what for you, and if you could be as specific as possible,

what do you think are the most interesting and disruptive use cases for generative AI?

Yeah, well, you know, I actually first want to just tell a personal anecdote

of the kind of thing that I am very hopeful for.

So, you know, medicine is definitely a very high stakes area.

We’re very cautious with, you know, how people should use this kind of technology there.

But even today, I want to talk about a place where I have just been like,

I really want this for my own use.

So, you know, my wife, a number of years ago, had a mysterious ailment:

she had this pulsating pain right here on her abdomen, bottom right side.

And it wasn’t appendicitis.

You know, we went to the first doctor and the doctor was like, oh, I know what this is.

And prescribed some antibiotic.

Nothing happened.

Went to a second doctor who said, oh, it’s a super rare disease.

Went to a second doctor who said, oh, it’s a super rare bacterial infection.

You need this other super powerful antibiotic.

Took that.

And over the course of three months, we went to four different doctors.

Until finally someone just like did an ultrasound and found what it was.

And I kid you not, I just typed in, you know, a couple sentences of description

that I just gave here into ChatGPT.

It said: number one, make sure it's not appendicitis.

Number two, ruptured ovarian cyst.

And that is in fact what it was.

Wow.

And so the kind of thing that I want, personally, in the medical field

is something that I don't just blindly rely on.

I don’t want it to replace a doctor.

I don’t want it to tell me like, oh, go take this super rare antibiotic.

I don’t want a doctor telling me that either.

And also chat GPT sometimes confidently says the exact wrong thing.

It’s kind of like a drunk frat guy every so often.

Exactly.

So you got to be a little bit careful.

You got to be careful.

Something we’re working on.

Yeah, right.

And I think that our models actually are much more calibrated than we realize

and can say when they’re right or wrong.

But we currently destroy that information in some of the training processes we do.

So more to say there.

But yeah, I think it's the suggestions, really: giving you ideas, you know, in writing.

It’s like the blank page problem.

But I think this for me is where generative AI can really shine.

Right.

It’s really about sort of unblocking you, giving you ideas,

and just giving you an assistant that is willing to do whatever you want 24/7.

And so let’s you’ve now the chat GPT has been deployed to millions.

Has there been anything that's really shocked you or surprised you in how people have been utilizing it?

I mean, of course.

Yeah.

I mean, I do think that for me, the overall most interesting thing has just

been seeing just how many people engage with it for so many just sort of

surprising aspects of life.

Right.

Like what?

I think knowledge work is maybe the area that I kind of see as most

important for us to really focus on.

And, you know, we see people within OpenAI who aren't native English speakers use it to improve their writing.

There was someone at OpenAI whose writing style suddenly changed; you could just tell, everything was way more fluid and, honestly, way more understandable.

And at first: what just happened?

He literally at one point had hired someone to do the writing for him, but that was actually really hard. It was a lot of overhead, and he wasn't able to get his points across.

But with ChatGPT, he really was able to.

And I think that that for me is just like so interesting to see that people

just use it as a cognitive aid to think just more clearly and to

communicate with others.

Well, you always know you have disruptive technology when you put it out

there and people misuse it.

I remember a decade ago doing a story on pimps recruiting women on...

which is like, OK, you know, if someone's using your technology in a bad way, you have something that's hitting mainstream.

So, can you tell us: how are people using it in ways that...

What have you learned from putting this out there? And what have you learned from how people are misusing it?

Well, misuse is definitely also very core to what we think about.

Part of why we wanted to put this out there was to get feedback to see

how people use it for good and for bad and to continually tune.

And honestly, one of the biggest things that we've seen: you know, we always try to anticipate all the different things that might go wrong.

For GPT-3, we really focused on misinformation, but actually the most common abuse vector was generating spam for drugs,

you know, for various medicines.

And so you don’t necessarily see the problems around the corner.

For ChatGPT, one thing we've seen is people creating

thousands or hundreds of thousands of accounts in order to just be able

to use it much more.

Some people generating lots of spam.

It’s clear that people are using it for all sorts of different things.

I think for individuals, there’s definitely, I think, actually,

I would say this is an interesting category of, you know, to your point

where it says something that is confidently wrong.

My drunk frat guy point.

Exactly. Yeah. Over reliance. Right.

And thinking, oh, because it said that, it must be true.

Yeah. And that’s not true for humans.

It’s not quite true for AIs.

Yeah. I think we will get there at one point,

but I think that it’s going to be a process and something we all need

to participate in. Right.

And so, I mean, I would love to get into kind of what we can predict

in the future with AI, but before we leave ChatGPT...

this isn't really ChatGPT, but I feel like we have to talk

about Sydney for a moment.

People in the audience: who here

read Kevin Roose's article in the New York Times?

Right. So just a little background.

You know, you guys put ChatGPT out there,

Microsoft, Google, racing to get search products out there.

Microsoft releases its own AI-powered search,

a Bing chatbot, and all of a sudden, Kevin Roose,

great writer at the New York Times, is playing with the Bing chatbot.

It reveals that its shadow name is Sydney,

tells Kevin, when prompted a certain way, that it wants to be alive,

and tries to persuade him to leave his wife.

So obviously, that’s like an awkward conversation.

So what are the guardrails?

And to be clear, Microsoft’s an investor and partner.

This isn’t something that OpenAI specifically put out there,

but I do think it’s an interesting point of saying,

you put this stuff out there, the next thing you know,

like, I don’t know, Sydney’s trying to make you leave your wife.

So like, what are the guardrails that need to be put in?

Like, what have you learned over the last couple months

where you’ve seen the misuse, and what can you put in

to make sure that we’re not all, you know,

trying to leave our significant others

that bots are telling us to?

I mean, look, like, there’s, I think that even the,

I think this is actually a great question, right?

And I think that even the most high-order bit, right,

the most important thing in my mind,

is this question of when.

When do you want to release?

And my point earlier of, well, there was this overhang

in terms of this gap between people’s expectations,

what they were prepared for,

and what was actually possible.

And I think that’s actually where a lot of the danger lies.

You know, we can kind of joke about or laugh about this article

because it wasn’t very convincing.

You know, just a chatbot saying, you know, leave your wife.

Sydney was pretty spicy, I don’t know.

Yeah, it was very spicy, right?

But did not actually have an impact, you know?

And I think that is actually, in my mind,

the most important thing, is trying to surface these things

as early in the process as possible, right?

Before you have some system that is much more persuasive

or capable or able to operate in more subtle ways.

Because we want to build trust

and figure out where we can’t trust yet.

You know, figure out where we put the guardrails in.

So that, to me, this is the process, right?

This is the pain of the learning.

And that we’ve seen this across the board, right?

We’ve seen places where people try really hard

to get the model to do something,

and it says, sorry, nope, can’t do that.

We’ve seen places where people use it for positive things,

and we’ve seen cases where people have outcomes like this.

And so, I think that my answer is that, you know,

we have a team that works really hard on these problems.

You know, that we have people who build on top of us,

who customize the technology in different ways.

But fundamentally, I think that we’re all very aligned

in terms of trying to make this technology

more trustworthy and usable.

And, you know, we do a lot of red teaming internally.

And so that’s, you know, we hire experts in different domains.

We hire lots of people to try to break the models.

You know, when we actually released it,

we knew, like, we’d kind of cleared a bar, we felt,

in terms of just how hard it was

to get it to go off the rails.

But we knew it wasn’t perfect.

We knew that there were ways

to get around it with sufficient effort.

And we knew that other people would find more, too.

But we’ve been feeding all that back in.

We’ve been learning from what we see in practice.

And so I think that this sort of loop

of there being failures, I think that’s important.

Because if not, it means you’re kind of holding it too long

because you’re being too conservative.

And then when you do release it,

now you actually are taking on much more risk

and much more danger.

It’s not 100% true in all cases,

but I think that that heuristic, I think, is important.

Well, I think it’s also, we’ll get to it a little bit later,

but an important segue, too,

to talk about the future of misinformation

and how we can prep now for what’s coming

with this innovation.

Before we get to it, I mean,

I think one of the most interesting things to me

is the ability for this technology

to synthesize information and make predictions

and identify patterns.

So can you tell me what you think

the most interesting future use cases

of what artificial intelligence will be able to predict

will be, like predict disease,

predict stock market,

predict if you’re going to get a, not you,

if someone’s going to get a divorce?

What could this predict?

Paint the image of the future.

Well, I think that the real story here in my mind

is amplification of what humans can do.

And I think that that will be true on knowledge work.

I think that it will just be that we’re all,

it’s kind of like if you hire six assistants

who are all like, you know, they’re not perfect.

They need to be trained up a little bit.

They don’t quite know exactly what you want always,

but they’re so eager, they never sleep.

They’re there to help you.

They’re willing to do the drudge work.

And you get to be the director.

And I think that that is going to be

what writing will look like.

I think that’s what sort of, you know,

business communication will look like.

But I also think that is what entertainment will look like.

You think about today

where everyone watches the same TV show.

And, you know, maybe people are still upset

about the last season of Game of Thrones.

But imagine if you could ask your AI

to make a new ending that goes a different way.

And maybe even put yourself in there

as a main character or something.

Having interactive experiences.

And so I think it’s just going to be

every aspect of life

is going to be sort of amplified by this technology.

And I’m sure there are some aspects,

people or companies that will say,

I don’t want that.

And that’s okay.

Like, I think it’s really going to be a tool

just like the cell phone in your pocket

that is going to be available when it makes sense.

We think a lot at my company; we're knee-deep in exploring how artificial intelligence can personalize content and develop closer relationships with the audience, which is a wide-open and interesting space.

But also there’s so many ethics that come up with that.

So we’re developing a lot of

these ethical frameworks around it.

I’m curious, when you talk about Game of Thrones

and personalized media

and being able to put yourself in it,

when we look at the future of media and entertainment,

would you say this is a new frontier

for personalized media?

Yeah, I think for sure.

And I kind of think it’s a new frontier

for most areas.

You know, it may not be great yet

in some domains,

but I think that we are just going to see

just like way more creative action happening.

And to me, actually, the thing that’s

I think most sort of encouraging

is I think it will be

the barriers to entry decrease.

And this is, by the way,

how we thought about things at Stripe.

Decrease the barriers to people making payments online, to people offering services.

Way more activity happens,

things you would never think of.

And I think we’ll see this in content.

Individuals who have a creative idea

that they want to see realized,

they now have a whole creative studio

at their disposal.

But also the pros,

the people who really want to make something good

or make something way better

than any of the amateurs could.

And we’ve seen this with Dolly.

There’s literally these 100-page books

that people write on how to prompt Dolly.

And there’s all these murky questions

around identity and attribution

as these models go mainstream.

So it’s not perfectly clear

what the data sets are used to train.

So when we take a step back,

and this is a more fundamental question,

should an artist style

with models trained on their work,

should it be available to folks,

to anyone without use of attribution?

What are you guys thinking about

when it comes to these ethical questions?

Yeah, so we engage very closely with policymakers,

and I think that's something we have to keep doing.

Fundamentally, we as a company

want to provide information

and to show just what’s possible

and let there be a public conversation

about these topics.

I don’t think that we have all the answers,

but we think it’s really important

to be talking about.

So take me for example.

I’m like the beta test.

I’ll put myself in the driver’s seat.

So let’s say someone took all the footage

of me interviewing folks like you,

Zuckerberg, whatever,

and they trained this as a Lori model.

I’ve already named it.

I don’t know.

Please don’t do it, guys.

And then, I don’t know why I’m inviting this,

but then they launched a podcast

using my likeness, my style, my voice.

Hopefully you’d have fabulous style.

That would be all I’d ask.

But could they do it?

Should they get a cut?

Should I get a say in it?

As a content creator,

as someone who’s sat at the center

of the conversation about the future,

what does that look like?

Yeah.

Again, I think this is a great question.

I think it would be kind of presumptuous of me to say

that I have all the answers,

but I can tell you a little bit

about how we think about it.

As a company, our mission is to build AGI

that benefits all of humanity.

We’ve kind of built with this

cap profit structure.

I really think that the answer, to this question but more broadly,

is that all of humanity are kind of stakeholders in what gets built.

Everyone benefits if it’s

access to these services,

if it’s that you’re able to

have your AI personality

or this AI that you build up

that represents you

and build a business with that.

I think all of this is on the table.

I do think that we need

some sort of,

I think that society as a whole

needs to adapt here.

There’s no question that

we need to think done

to get a little black mirror,

but why not?

Do you see a future where

we verify our own AI identities

and we can license them out

so I could license out

my likeness to some degree?

Yeah.

Again, I think kind of

everything is on the table.

I think actually this, to your earlier question too

of why now, what's happening now:

everyone is kind of rethinking where content comes from,

in good and bad ways,

how it's created,

what an application is.

There’s Web 1.0 and 2.0

or something,

and I’m not going to talk

about Web 3.

Is it too soon?

There you go.

More to say there.

I think that where we’re going

is what an application is

will be very different.

It’s static.

You can’t really interact with it,

but we’re clearly moving

to a world where it’s alive.

You can talk to it

and it understands you

and helps you.

Honestly, every time I go

through some menu

and I keep trying to find

where I’m supposed to click,

I’m like,

why is this still here?

I think in the future

it will not be.

How much more powerful

is the current technology

you're building?

We are continuing

to make, I'd say,

significant progress.

Blink twice if it’s

ten times more powerful.

Or, okay,

three times.

There we go.

I guess all I can say

is that I can’t comment

on unreleased work,

but I can say that

there’s been a lot of rumors

swirling around about

what we’re going to be releasing

and what’s coming out.

What I can definitely say

is that we do not release

until we feel good

about the safety

and the risk mitigations.

You guys have the ability

to turn up the dial,

turn down the dial.

I joke about ChatGPT

being confidently wrong.

But it does so many

things, right?

Can you give any insight,

maybe, I don't know, we could speak around it,

about what future versions

are going to look like?

Will they be more cautious,

more creative?

Let me give you a mental

model for how we build

these systems.

There’s a first step

in the process

of training what we

do.

It sees all the bad stuff.

It sees true facts.

It sees math problems

with good answers

and incorrect answers.

No one tells it which are the incorrect

answers.

It sees everything.

It learns to predict:

given some document,

it's supposed to predict

what comes next.

It has to think

about what’s the next word.

That model has every bias,

every ideology,

almost every idea

that has ever been expressed,

compressed and learned

into it in a real way.

Then we do a second step

of reinforcement learning

from human preferences,

of what we call post-training.

Here you move from this

giant sea of data of

what’s the next word,

what’s the next thing.

Here I think there’s

something that’s very

important, very fraught.

This question of,

what should the AI do?

Who should pick that?

That is also a whole

different conversation.

That second step is

where these behaviors come

from.
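A toy sketch of where that second step's behavior comes from, heavily hedged: the names here are invented, and the update shown is plain REINFORCE on a three-option toy policy rather than the large-scale PPO-style training actually used. Step 1 is the next-token objective sketched earlier; step 2 is a reward, standing in for human preferences, reshaping which behavior gets picked.

```python
# Post-training in miniature: a reward stand-in (hand-coded here)
# nudges a toy "policy" toward the behavior people prefer.
import torch

options = ["refuse", "answer helpfully", "answer with a rant"]
logits = torch.zeros(3, requires_grad=True)  # toy stand-in for the model
opt = torch.optim.Adam([logits], lr=0.1)

def preference_reward(choice: int) -> float:
    # stand-in for a reward model trained from human rankings
    return {0: 0.2, 1: 1.0, 2: -1.0}[choice]

for _ in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    loss = -dist.log_prob(action) * preference_reward(action.item())
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=0))  # mass shifts toward "answer helpfully"
```

The point of the sketch: the pretrained model knows how to say almost anything; this second step is what decides what it will actually say.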

I alluded to earlier

that we found that the

base model itself is

actually quite well calibrated:

it knows, with quite good precision, when it's likely to be right.

But our current

post-training process,

this next step that we

do to really say,

no, no, no, this is

what you’re supposed to do.

We don’t really include

any of that calibration

in there.

That the model really

learns, you know what,

just go for it.

That, I think, is an

engineering challenge

for us to address.

You should expect that

even with the current

ChatGPT, we've

released four or five

versions since December,

and they've gotten

a lot better.

Factuality has improved;

the hallucinations

that people talk about as a problem,

those have improved.

A lot of the jailbreaks

that used to work

don’t work anymore.

And that is because

of the post-training

process.

And so I would expect

that we will have

systems that are much

more calibrated,

that are able to

sort of check their

own work,

that are able to

be much more calibrated

on when they should

refuse, when they

should help you,

but also that are

able to help you

solve more ambitious

tasks.

Like what?

Well, you know,

I think that the

kinds of things that

I want as a programmer

is that, you know,

right now, we started

with a program called

Copilot, which can do

sort of, you know,

just like autocomplete

a line.

And it was very useful

if you don’t really

know the programming

language that you’re in

or you don’t know

specific library functions,

that kind of stuff.

So it’s basically like,

you know, being able

to get, skip the

dictionary and look up

and it just does it

for you right there

in your text editor.

With ChatGPT, you

can start being more

ambitious: you can

start asking it to write

whole functions for you,

or to write the skeleton of

a bot in this

way.

And I think that where

we’re going to go is

towards systems that

could help you be much

more like a manager,

right?

Where you can really

be like, okay, I want

a software system that’s

architected in this way

and the system goes

and it writes a lot

of the pieces and

actually tests them

and runs them.

And I think this

is kind of like

giving everyone a promotion,

right?

Like making you into

more of the

CEO of a company,

moving everyone up a whole pay grade.

Literally and figuratively,

I think that's the

kind of thing

it will do.

So the future of

ChatGPT is we're all

getting a promotion.

I think so.

It’s not too bad

if we achieve it.

Interesting.

I think there’s

obviously a lot of

fear around the

future of artificial

intelligence, right?

People say AI’s coming

for our jobs.

Be honest with all

of our friends here.

What jobs are most

at risk?

And one of the

things that people thought,

certainly that I did, was

that it's very clear AI's

coming for the jobs;

it's just a question of

what order.

And clearly the

ones that are menial

or just require

physical work

or something like

that: oh, the robots

will come for those

first.

And in reality, it’s

been very different,

right?

That actually we’ve

made great strides on

cognitive labor,

right?

On, you know, think

about writing poems or,

you know, anything like

that.

And we have not made

very much progress on

physical things.

And I think that this

amplification is kind of

showing a very different

character from what was

expected.

But it’s also the case

that we haven’t really

automated a whole job,

right?

You think about it, and I

think the lesson from

that is that humans

are much more

capable than we give

ourselves credit for,

right?

To actually, you know,

do your job, to do

what you’re doing right

now.

It’s not just-

Well, I asked

ChatGPT.

These aren’t the

ChatGPT questions.

These are the

ChatGPT questions.

I had to follow up and

say, can you be more

hard-hitting?

There you go.

Well, thank you.

Are these the hard-

hitting ones?

No, they’re coming.

Here we go.

We’re about to go into

the future of truth right

after this.

There we go.

Perfect.

But ChatGPT, it’s not up

here on stage with me.

You know, there’s the

personal relationship

aspect.

There’s this judgment

aspect.

There’s so many details

that are what you want

from the person in

charge.

But the, like, writing

of the actual copy, I

mean, who cares about

that?


Maybe we’ll do the

follow-up.

Well, probably we’ll

do the follow-up

question.

My follow-up

question is, so give

us a couple jobs

most at risk.

Yeah, well, I’ll tell

you, the one that I

think is is actually

content moderator.

So jobs, what I’ve

really seen is jobs

that you kind of didn’t

want human judgment

there in the first

place, right?

You really just wanted

a set of rules that

could be followed, and

you kind of wanted a

computer to do it.

And then you had

to decide, is this

thing sufficiently

horrible or just

like slightly not

sufficiently horrible

to be disallowed?

And that’s

something I already

see this technology

impacting.

So that might be a

good segue into the

future of truth,

right?

Because I think

we’re entering this

really fascinating,

exciting, and scary

era of you have the

rise of deep fakes,

these automated

chatbots that could

have the ability to

persuade someone one

way or the other.

What happens to truth

in an era where AI

just makes fiction so

believable?

Well, I have a

slightly spicy take

here, which is that I

think technology has

not been kind in a lot

of ways to journalism.

And I think that AI

and this particular

problem might actually

be something that is

quite kind and actually

really reinforces the

need for authoritative

sources that can tell

you this is real,

right?

"We actually went out

and had humans investigated

this; we looked at

all the different sides

of this thing; these

are actually user-authenticated

videos," or whatever it is that can

tell you what happened

and what the facts are.

And so I think that

where we're going

is away from a world

where, just because

you saw some text

somewhere, you can

trust it's true.

It's never really been the

case.

Humans have always been

very good at writing

fake text.

Images, doctored images,

those have existed since

the invention of

photography.

But this gives us the

ability to do this at

viral speed.

100%.

Right?

All the bad things that

happened over the last

decade, if we’re not

careful, this will

amplify.

Yes.

And I agree with

this, right?

I think this is kind of

the crux: the

fact of being able to do

these things at all is

not new.

The fact of being able

to do it with a much

lower barrier to entry,

that’s new.

And I think that’s

what sparked the need

for new solutions.

We’ve never had real

answers for sort of

chain of custody of

information online.

We’ve never really had

verified identities.

All these things people

talked about since the

beginning of the

internet.

But I think there was

never really a need for

it.

And I think the need

will come.

Yeah.

I was at

an event for the folks

from the Center for

Humane Technology.

They're the folks who

also did The

Social Dilemma, which,

in my opinion, is

great, but it's like we've been

having these conversations

for 10 years before

Netflix puts out a

doc and asks these

questions, right?

So we’re at the

beginning of an

interesting era.

We should ask these

questions, you know,

before like we have to

do a sexy doc on it in

10 years.

So there was something

that was said there that

I thought was really

important.

They said that 2024

will be the last human

election, meaning by

2028, we will see

synthesized ads, viral

information powered by

artificial intelligence.

Someone releases a

Biden-Trump filter,

tens of millions of

videos are going out

there.

How do we build now?

Like what has to

happen now in your

opinion to get ahead of

what will be the

inevitable downside of

this?

Yeah.

So I think this is a

great question and I

think this is like maybe

also going to be a tip

of an iceberg kind of

problem where it’s like

it’s the most visible

one.

It’s clearly extremely

impactful.

It’s one that, you

know, has been very

topical for a long

time.

But I think that we’re

going to see the same

questions appearing

across all sorts of

human endeavor of just

as there’s more access

to creation, how do

you sift through for

good creation?

How do you actually,

you know, find what is

true or find what is

high quality or, you

know, how do you make

sense of it?

And I think some of

this is really going to

be about people

building good tools.

Like we’ve seen this

within, I think, the

social media space.

Like even, for example,

you know, people

building tools for

dealing with cyber harassment,

you know, to make it so

that people can easily

block, you know,

various efforts and

things like that.

And I think that we

need lots of tools to

be built here that are

really tackling this

problem.

And so that’s one reason

that we, you know, we

don’t just build chat

GPT, the app.

Actually, our main focus

is building a platform.

So we release an API.

Anyone can use this to

build applications.

And I think that you

have an opportunity

for people to build,

some using traditional

technology, some using,

you know, the AI

technology itself, in

order to actually sift

through and figure out

what is high quality,

what's curated,

what people want to put

their stamp of

approval on.
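For a flavor of what building on the platform looked like at the time of this talk, here is a minimal sketch using the openai Python library's chat endpoint as it existed in early 2023. The API key, model choice, and the quality-filter framing are placeholder assumptions for illustration, not an OpenAI-endorsed recipe.

```python
# Hypothetical "sift through content" tool built on the API: ask the
# model to rate a post's quality. Uses the openai-python interface
# from early 2023 (openai.ChatCompletion).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def rate_quality(post: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Label the user's post as HIGH or LOW quality, with one reason."},
            {"role": "user", "content": post},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(rate_quality("Buy cheap meds now!!! Click here!!!"))  # likely LOW
```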

I remember the

move-fast-and-break-things era of

Meta, of Facebook.

Remember, they used

to have the sign that

said "move fast and

break things."

I know OpenAI puts

these things out there

in an iterative way and

has a philosophy about

limiting growth to

some degree and

getting feedback.

But now, I would say,

because of what's

launched, there's this

AI race among the

biggest companies

around the world.

Is that what's

best for society?

What do you think

we've learned from

the last decade of

tech innovation that

we must use as we

enter this new

era, where the stakes,

you could argue, are

even higher?

Yeah.

We think about this a

lot.

Like I have spent a

lot of time really

trying to understand

for each of the big

tech companies, you

know, what did they

do wrong?

And right.

But to the extent

that mistakes were made,

like, what are they,

what can we learn?

And actually one thing

I found very

interesting is that

there’s not really

consensus on that

answer.

Like I wish there was

a clean narrative that

everyone knew and it’s

just like just don’t

do this thing.

Well, I could give an

opinion.

Please.

I would love it.

I’ve interviewed Mark

Zuckerberg many times

and I would say just

having been across

from some of those

folks, I think the

biggest mistake is

not understanding

humans.

In a nutshell, right?

That's how I think about it too,

you know.

We’ve got the stamp

of approval on this.

Great.

And the audience.

So I think it’s, I

mean, it sounds like

you’ve done a lot of,

you guys have done a

lot of thinking into

how you put this out

there and how you

build out these APIs

that other people can

build on.

Who are the people

that need to build

out for these

solutions?

Like who can you

guys, now that you

have a seat in Silicon

Valley and you’re at

this really powerful

place, like who do

you guys bring in

that’s different,

diverse and interesting?

Yeah.

So we do quite a lot

of outreach, and I

actually think a

really good example

is how we make

decisions on the

limits of what the

AI should say.

We’ve written a blog

post about this, but

we think that this is

something that really

needs legitimacy.

It can’t just be a

company in Silicon

Valley.

It can’t just be us

who’s making these

decisions.

It has to be

collective.

And so we’re

actually, and we’ll

have more to share

soon in terms of

exactly what we’re

doing, but we’re

really trying to

scale up efforts to

get input to actually

be able to help

make collective

decisions.

And so I think

that it’s just so

clear that you do

need everyone to

have a seat at the

table here and

that’s something we’re

very committed to.

And then, talking

regulation: OpenAI

talks about moving

at a bit of a slower

pace, but these tools

are being deployed to

millions.

So the FDA doesn’t

allow a drug to go

out to the market

unless it’s safe.

So what does the

right regulation look

like for artificial

intelligence, and what's

happening?

So yeah, this is

again something we've

been engaging with:

Congressional

testimonies back in,

like, 2016, 2017.

It was so

interesting to see

that policymakers

were already quite

smart on these issues

and already starting

to engage.

And I think that,

you know, one thing

we think is really

important is really

about focusing

regulation on

regulating harms,

right?

That it’s very

tempting to regulate

the means.

And we’re actually

seeing this right now

with like the EU AI

Act that’s kind of a

question of exactly

how to sort of

operationalize some

of these issues.

And the thing you

really want is to

really say like,

let’s think about

the stakes and

really parse apart

what are high-stakes

areas, what are

low-stakes areas,

what does it mean

to do a good job,

how do you know?

And these sort of

measurements and

evaluations, like

those are really,

really critical.

And so we think

the government is

a key part of

the answer here,

right?

Like this question

of how do you

get everyone

involved?

The answer is we

have institutions

that are meant

to be able to

regulate these

issues.

I’m sorry, I

have Facebook made

money and the

answer was like

we sell ads.

So really

understanding because

it certainly seems

like there’s going

to be all these

new issues.

Should there be

a new regulatory

body for this?

Again, I think

it's on the table.

But more likely,

what I see

happening is that AI

is just going to

be so baked into

so many different

pieces, and honestly

so helpful in so

many different areas,

that adopting it is just good

strategy.

But I think that

every organization,

government or

otherwise, is going

to have to understand

AI and really

figure it out.

I know we have to

wrap soon because

I want to get to

questions but I

thought we could

do a little

lightning round.

I love a good

lightning round.

Okay.

AI will be

sentient when?

Long time from

now.

Like how long?

This kind of

question I prefer

not to comment on.

It’s hard to answer.

Most interesting

question.

I think it’s

going to be just

making your dreams

come to life.

Oh.

Huh.

In what sense?

Sorry.

It’s not part of

the lightning round.

You hook up your

brain-machine interface,

and then you do a

nice rendering, and

you'll get great

visions of your

dreams.

Wow.

Spiciest take on

the future of AI

that you’re

generally not

allowed to say

publicly.

Oh, man.

I think it’s

going to be

making your

dreams come to

life.

I think we’re

going to figure it

out.

I think it’s

going to go well.

You’re optimistic.

I’m optimistic.

I consider myself

an optimistic realist.

I think it’s not

going to go well by

default, but I

think humanity can

rise to this challenge.

Elon Musk, no longer

really deeply involved

with OpenAI, is

potentially building

what's called

anti-woke AI.

Success or failure?

Well, I think a

failure on our part

for sure.

In what sense?

Well, I think we

were not fast enough

to address the biases in

ChatGPT, and we

did not intend them

to be there;

our goal really was

to have a system that

would kind of, you

know, be sort of

egalitarian, sort of

treat all the sort

of mainstream sides

equally, and we

actually made a lot

of improvements on

this over the past

month, and we’ll have

more to share soon,

but, yeah, I think

that people were

right to criticize us,

and I think that we

really sort of, you

know, responded to

that.

It’s one of the

pieces of feedback

that I think is most

valuable.

Fill in the blank.

What do you think

the future of AI in

2050 is?

Unimaginable.

Okay.

I like that.

The single most

important ethical issue

we’re facing when it

comes to the future

of AI in humans.

This one’s hard.

I think it’s the

whole package,

honestly.

I think it’s this

question of how the

values get in there,

whose values get in

there, who’s values

get in there, and

who’s values don’t

get in there, and

who’s values don’t

get in there, and

who’s values don’t

get in there, and

who’s values don’t

get in there, and

who’s benefits get

in there, and who’s

benefits get in

there, and who’s

benefits get in

there, and who’s

benefits get

distributed.

How do you make

sure the technology

is safe and used in

the right ways and

the emergent risks

that are going to

appear at some point

with very capable

systems don’t end up

overwhelming the

positives we’re going

to get.

So I

think it’s the

whole thing.

At some point, to

your first question,

the sentience

question: at what

point do the

systems need moral

philosophers to

help answer some

of these questions?

Are you going to

hire philosophers?

We’re going to

hire I think everyone

across the board.

I think this is

one key thing to get

across.

I think that within

AI I've definitely

seen this fallacy of

people thinking this

is just a technology

problem, of

saying: look,

there's the

alignment problem

of how do you

make the AI not

go off the rails,

but the society

thing, that's the

hard part,

I'm not going to

worry about that.

And I think you

can't do that.

I think that it

really has to be that

you engage with the

whole package and

that I think is going

to require everyone.

I like that:

understanding the

people behind the

code that transforms

society.

And so I’ve just

met you in person

today, but we’ve

spoken a little bit

about some of the

ethical stuff too.

You’re at the helm

of one of the most

important technological

advances of our time.

What do you want

people here to know

about you?

Well, I love my

wife.

I’m not going to

listen to the chat

bot.

I bet she is

fabulous.

She’s not being

replaced.

Sydney cannot

break up that

marriage.

Exactly.

And we were

talking about this

last night.

She was asking

why do I do it?

Because I work a

lot.

I think we give

up a lot of time

together as a result

of just how much I

really try to focus

on the work and

trying to kind of

move the company

forward.

And, you know, I hadn’t really thought about that question for a while. And I thought about it, and my true answer was: because it’s the right thing to do. Like, I just think that this technology really can help everyone, can help the world. These problems that we just see coming down the pipe, climate change again being one of them, I think we have a way out.

And if I can move the needle on that... you know, I’m grateful to be in the position that I am, but honestly, when we started the company, what I cared about most was that I was happy to do anything. You know, like, on the first day two people were arguing about something and they didn’t have a whiteboard, and I was like, great, I’ll go get the whiteboard.

And I think that this problem is just so important. It transcends each of us individually. It transcends our own position in it. And I think it is really about trying to get to that good future.

Great. Well, thank you. So, I think we’re talking about the decline in human intelligence as we start to outsource our cognition to AI.

Yeah, this is definitely something that keeps me up at night.

Although, it’s interesting to see this trend across all previous technologies, you know, radio, television. I’ve talked to some esteemed statespeople who have said, like, the politicians these days are nothing compared to Teddy Roosevelt; read all of Teddy Roosevelt’s great thoughts. And it’s so unclear to me, like, is this true or is it not?

But I think that what is definitely important as we see this new technology coming is figuring out how to have it be an intelligence multiplier, right? So that, you know, sometimes, yeah, you do need to solve the problem yourself, but what you really want is a great tutor. You want someone who breaks down the problem for you, really understands what motivates you, and knows if you have a different learning style. And so I think that’s important.

But if you have something that actually is figuring out how do I help you fish, or how do I help you learn to fish, I think you can go way further.

What is your opinion on... this one was upvoted a lot, so I’m being true to the audience. They have a good question.

All right.

What is your opinion on intellectual property rights for AI-generated content trained on the work of a particular artist?

We talked a little bit about this, but the people want more.

I think this is like asking a question about exactly how copyright should work right at the creation of the Gutenberg press, right? We are going to need to have an answer.

We’re engaging with the Copyright Office. We’re engaging with lots of different areas, and I don’t personally know what exactly the answer should be. But one thing that I do want to say, not to hedge everything here, is that I do think being a content creator should be a more prestigious, more compensated, more just, like, good thing for people to pursue now than ever. And I think if we don’t achieve that in some way, then something has gone wrong.

Will there be new laws that didn’t exist?

Oh, for sure. I mean, there should be.

What do you think they will be?

Well, again, I don’t want to speak out of turn... yeah, I just don’t want to speak out of turn on these issues. But I think that, to me, the process that’s happening right now is really important.

I think it’s really important that, when we’re talking about these things, people really care, and they should, and that we are trying to figure out mechanisms within our own slice of how we implement things and how we work with different partners.

You know, for DALL·E, for example, the very first people that we invited to use it were artists, right, because we really wanted to figure out how to make this a tool that you are excited about and that you feel good about. And I think that the most important thing is really going to be these higher-level skills, right: judgment, really figuring out is this good, is this bad, do I like this, do I not, knowing when to dig more into the details.

And really, I think, today just even playing with these systems... I think it will be the case that we’re going to make the next generations of the DALL·Es and these other systems be such that you don’t even have to know the language, right? They should become much more child-accessible. And with children being sort of AI-native users, I think you’re going to find that they figure out how to use these in totally unimaginable ways.

Let’s see. Sorry, this one’s not working; I’m going to go with this one.

Okay: how can we maintain the integrity of AI models like ChatGPT when capital from corporates has entered the space, monetizing a tool run by a nonprofit? And, I mean, this is actually what I was going to ask you, but ChatGPT also asked me to ask you, which is interesting.

It’s very topical. I like that. This is good.

And so if you could give us a little more insight, because obviously we’re very far from when you guys sat at that dinner and said we want to change things, and now there’s money, there’s profit, there’s all these other things. So how do you guys maintain that?

Yep. Well, I think that our answer to this question, and you should hold us accountable, by the way, is really about the fact that we set up our structure in a very specific way, which has turned off a lot of investors.

We have this big purple box at the top of all of our investment docs that says the mission comes first: that if there’s a conflict with achieving the mission, we may have to cancel all of your profit interests. Which, yeah, sends many traditional investors running for the hills. And I think that there’s a part of the frame of the question that I sort of don’t agree with, which is that the existence of capital is itself a problem.

Like, we’re all using iPhones, we’re using TVs created by companies. There are a lot of good things. But I do think capital comes with incentives, right? It comes with this pressure to do what’s good for you specifically, but not necessarily for the rest of society, not to internalize those externalities.

And so I think the important thing for us has been to really figure out how you set up the incentives on yourself so that, as much as possible, you get the best people to join, you can build the massive supercomputers, you can actually build these tools and get them out there. But at the same time, if you do succeed massively and wildly beyond anything that’s happened, how do you make sure that once you’ve kind of gotten to everything, you don’t have to then 2x everything, you know? And I think that these kinds of very subtle choices make a huge difference in terms of outcome.

I want to end with a quote from your co-founder Sam Altman. He wrote: a misaligned superintelligent AGI could lead to serious harm to the world. An autocratic regime with a superintelligence could lead to that, too. Successfully transitioning to a world where superintelligence is perhaps the most important and hopeful... sorry, I’m really messing this up... is the most important, hopeful, and scary project in human history. Success is far from guaranteed, and the stakes, boundless downside and boundless upside, are there to hopefully unite us all.

I think that this is the key. And I think that by engaging with these technologies, we have to study these questions, even though we don’t know the answers yet. That’s the responsibility not just of us but of everyone.

It’s going to be a project of decades to really go from where we are to the kinds of systems that we’re talking about there. And all along the way there are going to be surprising things. There are going to be great things that happen. There are going to be causes for joy, causes for grief. And I think that they all happen in small ways now, and in the future maybe they’ll happen in bigger and bigger ways.

And I think it comes down to just really engaging with this process: everyone educating themselves as much as possible and figuring out how this can fit in. I love the question about what I should teach my one-year-old, because that is a hope-for-the-future kind of question.

And I think that I am very optimistic. Again, I consider myself this realistic optimist: you really have to be calibrated. You can’t just blindly think it’s all going to work out; you have to engage with all the problems.

But I think it is possible that we will end up in this world of abundance and sort of the real good future. I think it won’t be perfect. There will certainly be many problems along the way. But I think we can rise to the occasion.

You have children?

Not yet. Working on convincing my wife, though.

Okay.

So I was going to say: do you believe that the kids of your friends, or your own if you end up having children, will grow up in a better world?

I do think so. I think we have a shot at it.

And I think the moment you think that it’s guaranteed, that’s when things go wrong. And so I think we all have to be constantly asking what can go wrong and what we can do to prevent that.

Great. Greg Brockman, thank you so much.

Thank you. Thanks, guys. Appreciate it.