ABC News - OpenAI CEO, CTO on risks and how AI will reshape society

So you are the CEO of OpenAI, 37 years old.

Your company is the maker of ChatGPT,

which has taken the world by storm.

Why do you think it’s captured people’s imagination?

I think people really have fun with it

and they see the possibility

and they see the ways this can help them,

this can inspire them,

this can help people create,

help people learn,

help people do all of these different tasks.

And it is a technology that rewards experimentation

and use in creative ways.

So I think people are just having a good time with it.

And finding real value.

So paint a picture for us,

one, five, 10 years in the future,

what changes because of artificial intelligence?

So part of the exciting thing here is

we get continually surprised

by the creative power of all of society.

It’s going to be the collective power and creativity

and will of humanity that figures out

what to do with these things.

I think that word surprise though,

it’s both exhilarating as well as terrifying to people.

Because on the one hand,

there’s all of this potential for good.

On the other hand,

there’s a huge number of unknowns

that could turn out very badly for society.

What do you think about that?

We’ve got to be cautious here.

And also I think it doesn’t work to do all this in a lab.

You’ve got to get these products out into the world

and make contact with reality,

make our mistakes while the stakes are low.

But all of that said,

I think people should be happy

that we’re a little bit scared of this.

I think people should be happy.

You’re a little bit scared?

A little bit, yeah, of course.

You personally?

I think if I said I were not,

you should either not trust me

or be very unhappy I’m in this job.

So what is the worst possible outcome?

There’s like a set of very bad outcomes.

One thing I’m particularly worried about

is that these models could be used

for large-scale disinformation.

I am worried that these systems,

now that they’re getting better at writing computer code,

could be used for offensive cyber attacks.

And we’re trying to talk about this.

I think society needs time to adapt.

And how confident are you that what you’ve built

won’t lead to those outcomes?

Well, we’ll adapt it.

Also, I think that-

You’ll adapt it as negative things occur?

For sure, for sure.

And so putting these systems out now

while the stakes are fairly low,

learning as much as we can

and feeding that into the future systems we create,

that tight feedback loop that we run,

I think is how we avoid the more dangerous scenarios.

You’re spending 24/7 with this technology.

You’re one of the people who built this technology.

What is most concerning to you about safety?

This is a very general technology.

And whenever you have something so general,

it is hard to know upfront all the capabilities,

all the potential impact of it,

as well as its downsides and limitations.

Can someone guide the technology to negative outcomes?

The answer is yes,

you could guide it to negative outcomes.

And this is why we make it available initially

in very constrained ways.

So we can learn what are these negative outcomes?

What are the ways in which technology could be harmful?

For example, if you ask GPT-4,

can you help me make a bomb,

it is much less likely to follow that guidance

than the previous systems were.

And so we’re able to intervene at the pre-training stage

to make these models more likely to refuse direction

or guidance that could be harmful.

What’s easier to predict today

based on where we are, humans or machines?

I’d probably say machines

because there is a scientific process

to them that we understand

and humans are just, there’s so much more nuance.

Does the machine become more human-like over time?

We are getting to a point

where machines will be capable of a lot

of the cognitive work that humans do.

Is there a point of no return in that process?

There could be, there could be,

but it’s not obvious what that looks like today.

And our goal is to make sure that we can predict

as much as possible the capabilities,

as well as the limitations, before we even develop the systems.

Its behavior is very contingent

on what humans choose for its behavior to be.

Therefore, the choices that humans are making

and feeding into the technology will dictate what it does,

at least for now.

So there are incredibly important choices being made

by you and your team.

Absolutely.

And how do you decide between right and wrong?

As we make a lot of progress,

these decisions become harder

and far more nuanced.

And so there are a couple of things

in terms of customization.

There is the part of just making the model more capable

in a way where you can customize its behavior,

giving the user a lot of flexibility and choice

in having an AI that is more aligned with their own values

and their own beliefs.

So that’s very important and we’re working on that.

In other words, the future is potentially a place

where each person has their own customized AI

that is specific to what they care about and what they need?

Within certain bounds.

So there should be some broad bounds.

And then the question is, what should they look like?

And this is where we are working on gathering public input.

What should these hard bounds look like?

And within these hard bounds, you can have a lot of choice

in having your own AI represent your own beliefs

and your own values.

Are there negative consequences

we need to be thinking about?

I think there are massive potential negative consequences.

Whenever you build something so powerful

with which so much good can come,

I think alongside it carries the possibility

of big harms as well.

And that’s why we exist.

And that’s why we’re trying to

figure out how to deploy these systems responsibly.

But I think the potential for good is huge.

Why put this out for the world to start playing with,

to start using when we don’t know where this is heading?

You mean like why develop AI at all?

Why develop AI in the first place?

And then why put it out for the world to use

before we know that we are safeguarded,

that those guardrails are in place already?

This will be the greatest technology

humanity has yet developed.

We can all have an incredible educator in our pocket

that’s customized for us, that helps us learn,

that helps us do what we want.

We can have medical advice for everybody.

That is beyond what we can get today.

We can have creative tools that help us figure out

the new problems we want to solve,

wonderful new things to co-create

with this technology for humanity.

We have this idea of a co-pilot,

this tool that today helps people write computer code,

and they love it.

We can have that for every profession.

And we can have a much higher quality of life,

like standard of living.

As you point out, there is huge potential downside.

People need time to update, to react,

to get used to this technology,

to understand where the downsides are

and what the mitigations can be.

If we just developed this in secret in our little lab here,

didn’t have contact with reality,

made GPT-7, and then dropped that on the world all at once,

that, I think, is a situation with a lot more downside.

Is there a kill switch, a way to shut the whole thing down?

Yes, what really happens is like any engineer

can just say like, we’re gonna disable this for now

or we’re gonna deploy this new version of the model.

A human? Yeah.

The model itself, can it take the place of that human?

Could it become more powerful than that human?

So in the sci-fi movies, yes.

In our world and the way we’re doing things,

this model is, you know, it’s sitting on a server.

It waits until someone gives it an input.

But you raise an important point,

which is the humans who are in control of the machine

right now also have a huge amount of power.

We do worry a lot about authoritarian governments

developing this.

Putin has himself said,

whoever wins this artificial intelligence race

is essentially the controller of humankind.

Do you agree with that?

So that was a chilling statement for sure.

What I hope instead is that we successively develop

more and more powerful systems

that we can all use in different ways

that get integrated into our daily lives,

into the economy, and become an amplifier of human will,

but not this autonomous system that is,

you know, this one thing, essentially the single controller.

We really don’t want that.

What should people not be using it for right now?

The thing that I try to caution people about the most

is what we call the hallucinations problem.

The model will confidently state things

as if they were facts that are entirely made up.

And the more you use the model,

because it’s right so often,

the more you come to just rely on it

and not check it, like, ah, this is just a language model.

Does ChatGPT, does artificial intelligence

create more truth in the world

or more untruth in the world?

Oh, I think we’re on a trajectory

for it to create much more truth in the world.

If there’s a bunch of misinformation fed into the model,

isn’t it going to spit out more misinformation?

Great question.

I think the right way to think of the models that we create

is as a reasoning engine, not a fact database.

They can also act as a fact database,

but that’s not really what’s special about them.

What we’re training these models to do,

what we want them to do,

is something closer to the ability to reason, not to memorize.

All of these capabilities could wipe out millions of jobs.

If a machine can reason, then what do you need a human for?

A lot of stuff, it turns out.

One of the things that we are trying to push

the technology trajectory towards

and also the way we build these products

is to be a tool for humans, an amplifier of humans.

And if you look at the way people use ChatGPT,

there’s a pretty common arc

where people hear about it the first time,

they’re a little bit dubious,

and then someone tells them about something

and then they’re a little bit afraid and then they use it.

I see how this can help me.

I see how this is a tool that helps me do my job better.

And with every great technological revolution

in human history, although it has been true

that the jobs change a lot, some jobs even go away,

and I’m sure we’ll see a lot of that here,

human demand for new stuff, human creativity is limitless

and we find new jobs, we find new things to do.

They’re hard to imagine from where we sit today.

I certainly don’t know what they’ll be,

but I think the future will have all sorts

of wonderful new things we do

that you and I can’t even really imagine today.

So the speed of the change that may happen here

is the part that I worry about the most,

if some of these shifts happen

in a single-digit number of years.

Could it tell me how to build a bomb?

It shouldn’t tell you how to build a bomb,

even though a Google search could?

Well, no, no, we put constraints.

So if you go ask it to tell you how to build a bomb,

our version, I don’t think will do that.

Google already does.

And so it’s not as if the technology

hasn’t already made this information available,

but I think that every incremental degree

you make that easier is something to avoid.

A thing that I do worry about

is we’re not gonna be the only creator of this technology.

There will be other people

who don’t put some of the safety limits that we put on it.

Society, I think, has a limited amount of time

to figure out how to react to that,

how to regulate that, how to handle it.

And how do you decide here at OpenAI

what goes in, what shouldn’t?

We have policy teams.

We have safety teams.

We talk a lot to other groups in the rest of the world.

We finished GPT-4 a very long time ago,

feels like a very long time ago in this industry.

I think like seven months ago, something like that.

And since then, we have been internally, externally

talking to people, trying to make these decisions,

working with red teamers,

talking to various policy and safety experts,

getting audits of the system

to try to address these issues

and put something out that we think is safe and good.

And who should be defining those guardrails for society?

Society should.

Society as a whole?

How are we gonna do that?

So I can paint like a vision that I find compelling.

This will be one way of many that it could go.

If you had representatives from major world governments,

trusted international institutions come together

and write a governing document,

you know, here is what the system should do.

Here’s what the system shouldn’t do.

Here’s, you know, very dangerous things

that the system should never touch,

even in a mode where it’s creatively exploring.

And then developers of language models like us

use that as the governing document.

You’ve said AI will likely eliminate millions of jobs.

It could increase racial bias and misinformation,

create machines that are smarter

than all of humanity combined,

and bring other consequences so terrible

we can’t even imagine what they could be.

Many people are gonna ask,

why on earth did you create this technology?

Why, Sam?

I think it can do the opposite of all of those things too.

Properly done, it is going to eliminate

a lot of current jobs, that’s true.

We can make much better ones.

So talking about the downsides, acknowledging the downsides,

trying to avoid those

while we push in the direction of the upsides,

I think that’s important.

And again, this is a very early preview.

Like, would you push a button to stop this

if it meant we are no longer able to cure all diseases?

Would you push a button to stop this

if it meant we couldn’t educate every child

in the world super well?

Would you push a button to stop this

if it meant there was a 5% chance

it would be the end of the world?

I would push a button to slow it down.

And in fact, I think we will need to figure out

ways to slow down this technology over time.

2024, the next major election in the United States,

might not be on everyone’s mind,

but it certainly is on yours.

Is this technology going to have the kind of impact

that maybe social media has had on previous elections?

And how can you guarantee there won’t be

those kinds of problems because of ChatGPT?

We don’t know, is the honest answer.

We’re monitoring very closely.

And again, we can take it back.

We can turn things off.

We can change the rules.

Is this a Google killer?

Will people say I’m going to ChatGPT

instead of Google in the future?

I think if you’re thinking about this as search,

it’s sort of the wrong framework.

I have no doubt that there will be some things

that people used to do on Google

that they do in chat GPT,

but I think it’s a fundamentally different kind of product.

Elon Musk was an early investor in your company.

He’s since left.

He has called out some of the ChatGPT inaccuracies,

and he tweeted recently

that what we need is a “TruthGPT.”

Is he right?

I think he is right in that we want these systems

to tell the truth,

but I don’t know the full context of that tweet,

and I don’t think I know enough

about what it’s referring to.

Do you and he speak anymore?

We do.

And what does he say to you off the Twitter?

I have tremendous respect for Elon.

Obviously we have some different opinions

about how AI should go,

but I think we fundamentally agree on more

than we disagree on.

What do you think you agree most about?

That getting this technology right

and figuring out how to navigate the risks

is super important to the future of humanity.

How will you know if you got it right?

One simple way is if most people think

they’re much better off than they were before

we put the technology out into the world,

that would be an indication we got it right.

You know, a lot of people think science fiction

when they think ChatGPT.

Can you keep it so that these are truly closed systems

that don’t become more powerful than we are as human beings,

communicate with each other and plan our destruction?

It’s so tempting to anthropomorphize ChatGPT,

but I think it’s important to talk about what it’s not

as much as what it is.

Because deep in our biology,

we’re programmed to respond to someone talking to us.

You talk to ChatGPT, but really you’re talking

to this transformer somewhere in a cloud,

and it’s trying to predict the next token

and give it back to you.
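
(Editor’s aside: for readers who want to see what “predict the next token” means mechanically, here is a toy sketch. It uses a made-up bigram model over a tiny corpus; real systems like ChatGPT use a large transformer over subword tokens, so every name and the corpus here are illustrative assumptions, not OpenAI’s code.)

```python
# Toy sketch of next-token prediction: a bigram "model" built from a tiny
# corpus. A transformer scores candidates far more cleverly, but the
# generation loop has the same shape: score, sample one token, repeat.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which token follows which; these counts are the whole "model" here.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed prev."""
    tokens, weights = zip(*counts[prev].items())
    return random.choices(tokens, weights=weights)[0]

# Generate: start from a prompt token and repeatedly predict the next one.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```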

But it’s so tempting to anthropomorphize that

and think that this is like an entity,

a sentient being that I’m talking to

and it’s gonna go do its own thing

and have its own will and plan with others.

But it can’t?

It can’t.

Could it?

There, I can imagine in the far future,

other versions of artificial intelligence,

different setups that are not a large language model

that could do that.

It really took a decade plus of social media

being out in the world for us to sort of realize

and even characterize some of the real downsides of it.

How should we be measuring it here with AI?

There’s a number of new organizations starting

and I expect relatively soon there’ll be

new governmental departments or commissions

or groups starting.

Is the government prepared for this?

They are beginning to really pay attention,

which I think is great.

And I think this is another reason that’s important

to put these technologies out into the world.

We really need the government’s attention.

We really need thoughtful policy here

and that takes a while to do.

If government could do one thing right now

to protect people and protect from the downside

of this technology, what should they do?

The main thing I would like to see the government do today

is really come up to speed quickly

on understanding what’s happening,

get insight into the top efforts,

where our capabilities are, what we’re doing.

And I think that could start right now.

It’ll take-

Are you speaking to the government?

Oh, yes.

You’re in regular contact?

Regular contact.

And do you think they get it?

More and more every day.

When it comes to schools,

this technology can beat most humans

at the SATs, the bar exam.

How should schools be integrating this technology

in a way that doesn’t increase cheating,

that doesn’t increase laziness among students?

Education is going to have to change,

but it’s happened many other times with technology.

When we got the calculator,

the way we taught math and what we tested students on,

that totally changed.

The promise of this technology,

one of the ones that I’m most excited about

is the ability to provide individual learning,

great individual learning for each student.

You’re already seeing students using ChatGPT

for this in a very primitive way to great success.

And as companies take our technology

and create dedicated platforms for this kind of learning,

I think it will revolutionize education.

And I think that kids that are starting

the education process today,

by the time they graduate from high school,

are going to be smarter and more capable

than we can imagine.

It’s a little better than a TI-85.

But it does put a lot of pressure on teachers.

For example, if they’ve assigned an essay,

three of their students use ChatGPT to write that essay,

how are they going to figure that out?

I’ve talked to a lot of teachers about this,

and it is true that it puts pressure in some ways,

but for an overworked teacher to be able to say,

hey, go use ChatGPT to learn this concept

that you’re struggling with

and just sort of talk back and forth,

one of the new things that we showed yesterday

in the GPT-4 launch is using GPT-4

to be a Socratic method educator.
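
(Editor’s aside: a minimal sketch of how a “Socratic method educator” can be set up with a system prompt, assuming the openai Python package’s chat completions interface and an API key in the environment; the prompt wording is an illustration, not the prompt OpenAI used in the launch demo.)

```python
# Minimal sketch of a Socratic tutor: a system prompt instructing the
# model to guide with questions rather than hand over answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a Socratic tutor. Never state the answer directly. "
    "Ask one guiding question at a time that helps the student "
    "reason their own way to the solution."
)

def tutor_reply(history):
    """Send the conversation so far; return the tutor's next question."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SOCRATIC_PROMPT}] + history,
    )
    return response.choices[0].message.content

# Example turn: the student asks for the answer outright; the tutor
# should come back with a guiding question instead.
history = [{"role": "user", "content": "What is x if 3x + 2 = 14?"}]
print(tutor_reply(history))
```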

Teachers, not all, but many teachers

really, really love this.

They say it’s totally changing

the way I teach my students.

It’s basically the new office hours.

It’s a different thing,

but it is a new way to supplement learning for sure.
