C-SPAN - ChatGPT CEO Sam Altman Testifies Before Senate on AI Oversight

OpenAI CEO Sam Altman, whose company created ChatGPT, was one of three artificial intelligence

experts to testify on oversight of the swiftly developing technology at a Senate Judiciary

Subcommittee hearing.

Mr. Altman stated that AI could, quote, cause significant harm to the world.

Here’s the rest of that hearing.

Welcome to the hearing of the Privacy, Technology, and the Law Subcommittee.

I thank my partner in this effort, Senator Hawley, the Ranking Member, and I particularly

want to thank Senator Durbin, Chairman of the Judiciary Committee, who will be speaking

shortly.

This hearing is on the oversight of artificial intelligence, the first in a series of hearings

intended to write the rules of AI.

Our goal is to demystify and hold accountable those new technologies to avoid some of the

mistakes of the past.

And now, for some introductory remarks.

Too often, we have seen what happens when technology outpaces regulation.

The unbridled exploitation of personal data, the proliferation of disinformation, and the

deepening of societal inequalities.

We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the

lack of transparency can undermine public trust.

This is not the future we want.

If you were listening from home, you might have thought that voice was mine and the words

from me.

But in fact, that voice was not mine.

The words were not mine.

And the audio was generated by AI voice cloning software trained on my floor speeches.

The remarks were written by ChatGPT when it was asked how I would open this hearing.

And you heard just now the result.

I asked ChatGPT, why did you pick those themes and that content?

And it answered, and I’m quoting, Blumenthal has a strong record in advocating for consumer

protection and civil rights.

He has been vocal about issues such as data privacy and the potential for discrimination

in algorithmic decision making.

Therefore, the statement emphasizes these aspects.

Mr. Altman, I appreciate ChatGPT’s endorsement.

In all seriousness, this apparent reasoning is pretty impressive.

I am sure that we’ll look back in a decade and view ChatGPT and GPT-4 like we do the

first cell phone, those big clunky things that we used to carry around.

But we recognize that we are on the verge, really, of a new era.

The audio and my playing it may strike you as curious or humorous.

But what reverberated in my mind was, what if I had asked it, and what if it had provided

an endorsement of Ukraine’s surrendering or Vladimir Putin’s leadership?

That would have been really frightening.

And the prospect is more than a little scary, to use the word, Mr. Altman, you have used

yourself.

And I think you have been very constructive in calling attention to the pitfalls, as well

as the promise.

And that’s the reason why we wanted you to be here today.

And we thank you and our other witnesses for joining us.

For several months now, the public has been fascinated with ChatGPT, DALL-E, and other AI tools.

These examples, like the homework done by ChatGPT, or the articles and op-eds that

it can write, feel like novelties.

But the underlying advancements of this era are more than just research experiments.

They are no longer fantasies of science fiction.

They are real and present.

The promises of curing cancer or developing new understandings of physics and biology

or modeling climate and weather, all very encouraging and hopeful.

But we also know the potential harms.

And we’ve seen them already, weaponized disinformation, housing discrimination, harassment of women

and impersonation fraud, voice cloning, deep fakes.

These are the potential risks, despite the other rewards.

And for me, perhaps the biggest nightmare is the looming new industrial revolution,

the displacement of millions of workers, the loss of huge numbers of jobs, the need to

prepare for this new industrial revolution in skill training and relocation that may

be required.

And already, industry leaders are calling attention to those challenges.

To quote ChatGPT, this is not necessarily the future that we want.

We need to maximize the good over the bad.

Congress has a choice now.

We had the same choice when we faced social media.

We failed to seize that moment.

The result is predators on the Internet, toxic content, exploiting children, creating

dangers for them.

And Senator Blackburn and I and others like Senator Durbin on the Judiciary Committee

are trying to deal with it through the Kids Online Safety Act.

But Congress failed to meet the moment on social media.

Now we have the obligation to do it on AI before the threats and the risks become real.

Sensible safeguards are not in opposition to innovation.

Accountability is not a burden, far from it.

They are the foundation of how we can move ahead while protecting public trust.

They are how we can lead the world in technology and science, but also in promoting our democratic

values.

Otherwise, in the absence of that trust, I think we may well lose both.

These are sophisticated technologies, but there are basic expectations common in our law.

We can start with transparency.

AI companies ought to be required to test their systems, disclose known risks, and allow

independent researcher access.

We can establish scorecards and nutrition labels to encourage competition based on safety

and trustworthiness.

Limitations on use.

There are places where the risk of AI is so extreme that we ought to impose restrictions

or even ban its use, especially when it comes to commercial invasions of privacy for

profit and decisions that affect people’s livelihoods.

And of course, accountability or liability.

When AI companies and their clients cause harm, they should be held liable.

We should not repeat our past mistakes, for example, Section 230.

Forcing companies to think ahead and be responsible for the

ramifications of their business decisions can be the most powerful tool of all.

Garbage in, garbage out.

The principle still applies.

We ought to beware of the garbage, whether it’s going into these platforms or coming

out of them.

And the ideas that we develop in this hearing, I think, will provide a solid path forward.

I look forward to discussing them with you today, and I will just finish on this note.

The AI industry doesn’t have to wait for Congress.

I hope there will be ideas and feedback from this discussion, and voluntary action from

the industry, such as we've seen lacking in many social media platforms, where the consequences

have been huge.

So I’m hoping that we will elevate rather than have a race to the bottom.

And I think these hearings will be an important part of this conversation.

This one is only the first.

The Ranking Member and I have agreed there should be more, and we’re going to invite

other industry leaders.

Some have committed to come, experts, academics, and the public, we hope, will participate.

And with that, I will turn to the Ranking Member, Senator Hawley.

Thank you very much, Mr. Chairman.

Thanks to the witnesses for being here.

I appreciate that several of you had long journeys to make in order to be here.

I appreciate you making the time.

I look forward to your testimony.

I want to thank Senator Blumenthal for convening this hearing, for being a leader on this topic.

You know, a year ago, we couldn’t have had this hearing because the technology that we’re

talking about had not burst into public consciousness.

That gives us a sense, I think, of just how rapidly this technology that we’re talking

about today is changing and evolving and transforming our world right before our very eyes.

I was talking with someone just last night, a researcher in the field of psychiatry, who

was pointing out to me that ChatGPT and generative AI, these large language models,

it's really like the invention of the internet in scale at least, and potentially

far, far more significant than that.

We could be looking at one of the most significant technological innovations in human history.

And I think my question is, what kind of an innovation is it going to be?

Is it going to be like the printing press that diffused knowledge and power and learning

widely across the landscape that empowered ordinary, everyday individuals that led to

greater flourishing, that led above all to greater liberty?

Or is it going to be more like the atom bomb?

Huge technological breakthrough, but the consequences, severe, terrible, continue to

haunt us to this day.

I don’t know the answer to that question.

I don’t think any of us in the room know the answer to that question because I think the

answer has not yet been written.

And to a certain extent, it’s up to us here and to us as the American people to write

the answer.

What kind of technology will this be?

How will we use it to better our lives?

How will we use it to actually harness the power of technological innovation for the

good of the American people, for the liberty of the American people, not for the power

of the few?

You know, I was reminded of the psychologist and writer Carl Jung who said at the beginning

of the last century that our ability for technological innovation, our capacity for technological

revolution had far outpaced our ethical and moral ability to apply and harness the

technology we developed.

That was a century ago.

I think the story of the 20th century largely bore him out.

And I just wonder what will we say as we look back at this moment about these new technologies,

about generative AI, about these language models, and about the hosts of other AI capacities

that are even right now under development, not just in this country, but in China, the

countries of our adversaries, and all around the world.

And I think that the question that Jung posed is really the question that faces us.

Will we strike that balance between technological innovation and our ethical and moral responsibility

to humanity, to liberty, to the freedom of this country?

And I hope that today’s hearing will take us a step closer to that answer.

Thank you, Mr. Chairman.

Thanks.

Thanks, Senator Hawley.

I’m going to turn to the chairman of the Judiciary Committee and the ranking member, Senator

Graham, if they have opening remarks as well.

Yes, Mr. Chairman.

Thank you very much, and Senator Hawley as well.

Last week in this committee, full committee, Senate Judiciary Committee, we dealt with

an issue that had been waiting for attention for almost two decades, and that is what to

do with social media when it comes to the abuse of children.

We had four bills initially that were considered by this committee.

And what may be history in the making, we passed all four bills with unanimous roll

calls.

Unanimous roll calls.

I can’t remember another time when we’ve done that on an issue that important.

It’s an indication, I think, of the important position of this committee in the national

debate on issues that affect every single family and affect our future in a profound

way.

1989 was a historic watershed year in America, because that’s when Seinfeld arrived.

And we had a sitcom, which was supposedly about little or nothing, which turned out

to be enduring.

I like to watch it, obviously.

And I always marvel when they show the phones that he used in 1989.

And I think about those in comparison to what we carry around in our pockets today.

It’s a dramatic change.

And I guess the question as I look at that is, does this change in phone technology that

we’ve witnessed through the sitcom really exemplify a profound change in America?

Still unanswered.

But the very basic question we face is whether or not this issue of AI is a quantitative

change in technology or a qualitative change.

The suggestions that I’ve heard from experts in the field suggest it’s qualitative.

Is AI fundamentally different?

Is it a game changer?

Is it so disruptive that we need to treat it differently than other forms of innovation?

That’s the starting point.

And the second starting point is one that's humbling, and that is the fact that when you look

at the record of Congress in dealing with innovation, technology, and rapid change,

we’re not designed for that.

In fact, the Senate was not created for that purpose, but just the opposite.

Slow things down.

Take a harder look at it.

Don’t react to public sentiment.

Make sure you’re doing the right thing.

Well, I’ve heard of the potential, the positive potential of AI, and it is enormous.

You can go through lists of the deployment of this technology, which would say that an idea

for a website you can sketch on a napkin can generate functioning code.

Medical companies could use the technology to identify new candidates to treat disease.

The list goes on and on.

And then, of course, the danger, and it’s profound as well.

So I’m glad that this hearing is taking place, and I think it’s important for all of us to

participate.

I’m glad that it’s a bipartisan approach.

We’re going to have to scramble to keep up with the pace of innovation in terms of our

government public response to it, but this is a great start.

Thank you, Mr. Chairman.

Thanks.

Thank you, Mr. Chairman.

It is very much a bipartisan approach, very deeply and broadly bipartisan, and in that

spirit, I’m going to turn to my friend, Senator Graham.

Thank you.

That was not written by AI for sure.

Let me introduce now the witnesses.

We’re very grateful to you for being here.

Sam Altman is the co-founder and CEO of OpenAI, the AI research and deployment company

behind ChatGPT and DALL-E.

Mr. Altman was president of the early stage startup accelerator Y Combinator from 2014

to 2019.

OpenAI was founded in 2015.

Christina Montgomery is IBM’s vice president and chief privacy and trust officer overseeing

the company’s global privacy program policies, compliance, and strategy.

She also chairs IBM’s AI ethics board, a multidisciplinary team responsible for the governance of AI and

emerging technologies.

Christina has served in various roles at IBM, including corporate secretary to the company’s

board of directors.

She is a global leader in AI ethics and governance.

And Ms. Montgomery also is a member of the United States Chamber of Commerce AI Commission

and the United States National AI Advisory Committee, which was established in 2022 to

advise the president and the National AI Initiative Office on a range of topics related to AI.

Gary Marcus is a leading voice in artificial intelligence.

He's a scientist, bestselling author, and entrepreneur, founder of Robust AI and of

Geometric Intelligence, acquired by Uber, if I'm not mistaken, and emeritus professor of psychology

and neuroscience at NYU.

Mr. Marcus is well known for his challenges to contemporary AI, anticipating many of the

current limitations decades in advance, and for his research in human language development

and cognitive neuroscience.

Thank you for being here.

And as you may know, our custom on the Judiciary Committee is to swear in our witnesses before

they testify.

So if you would all please rise and raise your right hand.

Do you solemnly swear that the testimony that you are going to give is the truth, the

whole truth, and nothing but the truth, so help us God.

Thank you.

Mr. Altman, we’re going to begin with you, if that’s okay.

Thank you.

Thank you, Chairman Blumenthal, Ranking Member Hawley, members of the Judiciary Committee.

Thank you for the opportunity to speak to you today about large neural networks.

It’s really an honor to be here, even more so in the moment than I expected.

My name is Sam Altman.

I’m the Chief Executive Officer of OpenAI.

OpenAI was founded on the belief that artificial intelligence has the potential to improve

nearly every aspect of our lives, but also that it creates serious risks we have to work

together to manage.

We’re here because people love this technology.

We think it can be a printing press moment.

We have to work together to make it so.

OpenAI is an unusual company, and we set it up that way because AI is an unusual technology.

We are governed by a nonprofit, and our activities are driven by our mission and our charter,

which commit us to working to ensure the broad distribution of the benefits of

AI and to maximizing the safety of AI systems.

We are working to build tools that one day could help us make new discoveries and address

some of humanity’s biggest challenges, like climate change and curing cancer.

Our current systems aren’t yet capable of doing these things, but it has been immensely

gratifying to watch many people around the world get so much value from what these systems

can already do today.

We love seeing people use our tools to create, to learn, to be more productive.

We’re very optimistic that there are going to be fantastic jobs in the future and that

current jobs can get much better.

We also love seeing what developers are doing to improve lives.

For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired

individuals navigate their environment.

We believe that the benefits of the tools we have deployed so far vastly outweigh the

risks, but ensuring their safety is vital to our work, and we make significant efforts

to ensure that safety is built into our systems at all levels.

Before releasing any new system, OpenAI conducts extensive testing, engages external experts

for detailed reviews and independent audits, improves the model’s behavior, and implements

robust safety and monitoring systems.

Before we released GPT-4, our latest model, we spent over six months conducting extensive

evaluations, external red teaming, and dangerous capability testing.

We are proud of the progress that we made.

GPT-4 is more likely to respond helpfully and truthfully and refuse harmful requests

than any other widely deployed model of similar capability.

However, we think that regulatory intervention by governments will be critical to mitigate

the risks of increasingly powerful models.

For example, the U.S. government might consider a combination of licensing and testing requirements

for development and release of AI models above a threshold of capabilities.

There are several other areas I mentioned in my written testimony where I believe that

companies like ours can partner with governments, including ensuring that the most powerful

AI models adhere to a set of safety requirements, facilitating processes to develop and update

safety measures, and examining opportunities for global coordination.

And as you mentioned, I think it’s important that companies have their own responsibility

here no matter what Congress does.

This is a remarkable time to be working on artificial intelligence.

But as this technology advances, we understand that people are anxious about how it could

change the way we live.

We are too.

But we believe that we can and must work together to identify and manage the potential downsides

so that we can all enjoy the tremendous upsides.

It is essential that powerful AI is developed with democratic values in mind, and this means

that U.S. leadership is critical.

I believe that we will be able to mitigate the risks in front of us and really capitalize

on this technology’s potential to grow the U.S. economy and the world’s, and I look forward

to working with you all to meet this moment, and I look forward to answering your questions.

Thank you.

Thank you, Mr. Altman.

Ms. Montgomery.

Chairman Blumenthal, Ranking Member Hawley, and members of the subcommittee, thank you

for today’s opportunity to present.

AI is not new, but it’s certainly having a moment.

Recent breakthroughs in generative AI and the technology's dramatic surge in public

attention have rightfully raised serious questions at the heart of today's hearing.

What are AI’s potential impacts on society?

What do we do about bias?

What about misinformation, misuse, or harmful content generated by AI systems?

Senators, these are the right questions, and I applaud you for convening today’s hearing

to address them head on.

While AI may be having its moment, the moment for government to play a role has not passed

us by.

This period of focused public attention on AI is precisely the time to define and build

the right guardrails to protect people and their interests.

But at its core, AI is just a tool, and tools can serve different purposes.

To that end, IBM urges Congress to adopt a precision regulation approach to AI.

This means establishing rules to govern the deployment of AI in specific use cases, not

regulating the technology itself.

Such an approach would involve four things.

First, different rules for different risks.

The strongest regulation should be applied to use cases with the greatest risks to people

and society.

Second, clearly defining risks.

There must be clear guidance on AI uses or categories of AI-supported activity that are

inherently high risk.

This common definition is key to enabling a clear understanding of what regulatory requirements

will apply in different use cases and contexts.

Third, be transparent. AI shouldn't be hidden.

Consumers should know when they’re interacting with an AI system and that they have recourse

to engage with a real person should they so desire.

No person anywhere should be tricked into interacting with an AI system.

And finally, showing the impact.

For higher risk use cases, companies should be required to conduct impact assessments

that show how their systems perform against tests for bias and other ways that they could

potentially impact the public and to attest that they’ve done so.

By following a risk-based, use case-specific approach at the core of precision regulation,

Congress can mitigate the potential risks of AI without hindering innovation.

But businesses also play a critical role in ensuring the responsible deployment of AI.

Companies active in developing or using AI must have strong internal governance, including,

among other things, designating a lead AI ethics official responsible for an organization’s

trustworthy AI strategy, standing up an ethics board or a similar function as a centralized

clearinghouse for resources to help guide implementation of that strategy.

IBM has taken both of these steps, and we continue calling on our industry peers to

follow suit.

Our AI ethics board plays a critical role in overseeing internal AI governance processes,

creating reasonable guardrails to ensure we introduce technology into the world in a responsible

and safe manner.

It provides centralized governance and accountability while still being flexible enough to support

decentralized initiatives across IBM’s global operations.

We do this because we recognize that society grants our license to operate.

And with AI, the stakes are simply too high.

We must build, not undermine, the public trust.

The era of AI cannot be another era of move fast and break things.

But we don’t have to slam the brakes on innovation either.

These systems are within our control today, as are the solutions.

What we need at this pivotal moment is clear, reasonable policy and sound guardrails.

These guardrails should be matched with meaningful steps by the business community to do their

part.

Congress and the business community must work together to get this right.

The American people deserve no less.

Thank you for your time, and I look forward to your questions.

Thank you.

Professor Marcus.

Thank you, Senators.

Today’s meeting is historic.

I’m profoundly grateful to be here.

I come as a scientist, someone who’s founded AI companies, and as someone who genuinely

loves AI, but who is increasingly worried.

There are benefits, but we don’t yet know whether they will outweigh the risks.

Fundamentally, these new systems are going to be destabilizing.

They can and will create persuasive lies at a scale humanity has never seen before.

Outsiders will use them to affect our elections, insiders to manipulate our markets and our

political systems.

Democracy itself is threatened.

Chatbots will also clandestinely shape our opinions, potentially exceeding what social

media can do.

Choices about data sets that AI companies use will have enormous unseen influence.

Those who choose the data will make the rules, shaping society in subtle but powerful ways.

There are other risks, too, many stemming from the inherent unreliability of current

systems.

A law professor, for example, was accused by a chatbot of sexual harassment, untrue,

and it pointed to a Washington Post article that didn’t even exist.

The more that that happens, the more that anybody can deny anything.

As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs

are making up legitimate evidence.

These sorts of allegations undermine the abilities of juries to decide what or who to believe

and contribute to the undermining of democracy.

Poor medical advice could have serious consequences, too.

An open source large language model recently seems to have played a role in a person’s

decision to take their own life.

The large language model asked the human, if you wanted to die, why didn’t you do it

earlier?

And then followed up with, were you thinking of me when you overdosed?

Without ever referring the patient to the human help that was obviously needed.

Another system rushed out and made available to millions of children told a person posing

as a 13-year-old how to lie to her parents about a trip with a 31-year-old man.

Further threats continue to emerge regularly.

A month after GPT-4 was released, OpenAI released ChatGPT plugins, which quickly led

others to develop something called AutoGPT with direct access to the internet, the ability

to write source code, and increased powers of automation.

This may well have drastic and difficult to predict security consequences.

What criminals are going to do here is to create counterfeit people.

It’s hard to even envision the consequences of that.

We have built machines that are like bulls in a china shop, powerful, reckless, and difficult

to control.

We all more or less agree on the values we would like for our AI systems to honor.

We want, for example, for our systems to be transparent, to protect our privacy, to be

free of bias, and above all else, to be safe.

But current systems are not in line with these values.

Current systems are not transparent.

They do not adequately protect our privacy, and they continue to perpetuate bias.

And even their makers don’t entirely understand how they work.

Worst of all, we cannot remotely guarantee that they're safe, and hope here is not enough.

The big tech companies' preferred plan boils down to "trust us."

But why should we?

The sums of money at stake are mind-boggling.

Missions drift.

OpenAI’s original mission statement proclaimed, our goal is to advance AI in the way that

is most likely to benefit humanity as a whole, unconstrained by a need to generate financial

return.

Seven years later, they’re largely beholden to Microsoft, embroiled in part in an epic

battle of search engines that routinely make things up.

And that’s forced Alphabet to rush out products and de-emphasize safety.

Humanity has taken a back seat.

AI is moving incredibly fast, with lots of potential, but also lots of risk.

We obviously need government involved, and we need the tech companies involved, both

big and small.

But we also need independent scientists, not just so that we scientists can have a voice,

but so that we can participate directly in addressing the problems and evaluating solutions.

Not just after products are released, but before, and I’m glad that Sam mentioned that.

We need tight collaboration between independent scientists and governments in order to hold

the companies' feet to the fire.

Allowing independent scientists access to these systems before they are widely released,

as part of a clinical trial-like safety evaluation, is a vital first step.

Ultimately, we may need something like CERN, global, international, and neutral, but focused

on AI safety rather than high-energy physics.

We have unprecedented opportunities here, but we are also facing a perfect storm of

corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent

unreliability.

AI is among the most world-changing technologies ever, already changing things more rapidly

than almost any technology in history.

We acted too slowly with social media.

Many unfortunate decisions got locked in with lasting consequence.

The choices we make now will have lasting effects for decades, maybe even centuries.

The very fact that we are here today in bipartisan fashion to discuss these matters gives me

some hope.

Thank you, Mr. Chairman.

Thanks very much, Professor Marcus.

We’re going to have seven-minute rounds of questioning, and I will begin.

First of all, Professor Marcus, we are here today because we do face that perfect storm.

Some of us might characterize it more like a bomb in a china shop, not a bull.

And as Senator Hawley indicated, there are precedents here, not only the atomic warfare

era but also the genome project, the research on genetics, where there was international

cooperation as a result.

And we want to avoid those past mistakes, as I indicated in my opening statement, that

were committed on social media.

That is precisely the reason we are here today.

ChatGPT makes mistakes.

All AI does.

And it can be a convincing liar, what people call hallucinations.

That might be an innocent problem in the opening of a judiciary subcommittee hearing where

a voice is impersonated, mine in this instance, or quotes from research papers that don’t

exist, but ChatGPT and Bard are willing to answer questions about life-or-death matters,

for example, drug interactions.

And those kinds of mistakes can be deeply damaging.

I’m interested in how we can have reliable information about the accuracy and trustworthiness

of these models and how we can create competition and consumer disclosures that reward greater

accuracy.

The National Institute of Standards and Technology actually already has an AI accuracy test,

the face recognition vendor test.

It doesn’t solve for all the issues with facial recognition, but the scorecard does provide

useful information about the capabilities and flaws of these systems.

So there’s work on models to assure accuracy and integrity.

My question, let me begin with you, Mr. Altman, is should we consider independent testing

labs to provide scorecards and nutrition labels or the equivalent of nutrition labels, packaging

that indicates to people whether or not the content can be trusted, what the ingredients

are and what the garbage going in may be, because it could result in garbage going out?

Yeah, I think that’s a great idea.

I think that companies should put their own sort of, you know, here are the results of

our test of our model before we release it.

Here’s where it has weaknesses, here’s where it has strengths.

But also independent audits for that are very important.

These models are getting more accurate over time.

You know, this is, as we have, I think, said as loudly as anyone, this technology is in

its early stages.

It definitely still makes mistakes.

We find that users are pretty sophisticated and understand where the mistakes

are, or are likely to be, that they need to be responsible for verifying what

the models say, and that they go off and check it.

I worry that as the models get better and better, the users can have sort of less and

less of their own discriminating thought process around it.

But I think users are more capable than we often give them credit for in conversations

like this.

I think a lot of disclosures, which, if you've used ChatGPT, you'll see about the inaccuracies

of the model, are also important.

And I’m excited for a world where companies publish with the models information about

how they behave, where the inaccuracies are, and independent agencies or companies provide

that as well.

I think it’s a great idea.

I alluded in my opening remarks to the jobs issue, the economic effects on employment.

I think you have said, in fact, and I’m going to quote, development of superhuman machine

intelligence is probably the greatest threat to the continued existence of humanity, end

quote.

You may have had in mind the effect on jobs, which is really my biggest nightmare in the

long term.

Let me ask you what your biggest nightmare is and whether you share that concern.

Like with all technological revolutions, I expect there to be significant impact on jobs,

but exactly what that impact looks like is very difficult to predict.

If we went back to the other side of a previous technological revolution, talking about the

jobs that exist on the other side, you know, you can go back and read books of this,

what people said at the time. It's difficult.

I believe that there will be far greater jobs on the other side of this, and the jobs of

today will get better.

I think it’s important.

First of all, I think it's important to understand and think about GPT-4 as a tool, not a creature,

which is easy to get confused about, and it's a tool that people have a great deal of control

over in how they use it.

And second, GPT-4 and other systems like it are good at doing tasks, not jobs, and

so you see already people that are using GPT-4 to do their job much more efficiently by helping

them with tasks.

Now, GPT-4 will, I think, entirely automate away some jobs, and it will create new ones

that we believe will be much better.

This happens, again, my understanding of the history of technology is one long technological

revolution, not a bunch of different ones put together, but this has been continually

happening.

As our quality of life rises, and as the machines and tools that we create help us live

better lives, the bar rises for what we do, and our human ability and what we spend our

time on goes after more ambitious, more satisfying projects.

So there will be an impact on jobs.

We try to be very clear about that, and I think it will require partnership between

the industry and government, but mostly action by government to figure out how we want to

mitigate that.

But I’m very optimistic about how great the jobs of the future will be.

Thank you.

Let me ask Ms. Montgomery and Professor Marcus for your reaction to those questions as well.

Ms. Montgomery?

On the jobs point?

Yeah, I mean, well, it’s a hugely important question, and it’s one that we’ve been talking

about for a really long time at IBM.

We do believe that AI, and we’ve said it for a long time, is going to change every job.

New jobs will be created.

Many more jobs will be transformed, and some jobs will transition away.

I’m a personal example of a job that didn’t exist when I joined IBM, and I have a team

of AI governance professionals who are in new roles that we created as early as three

years ago.

I mean, they’re new and they’re growing.

But I think the most important thing that we could be doing and can and should be doing

now is to prepare the workforce of today and the workforce of tomorrow for partnering with

AI technologies and using them.

And we’ve been very involved for years now in doing that, in focusing on skills-based

hiring, in educating for the skills of the future.

Our SkillsBuild platform has 7 million learners and over 1,000 courses worldwide focused on

skills, and we’ve pledged to train 30 million individuals by 2030 in the skills that are

needed for society today.

Thank you.

Professor Marcus?

May I go back to the first question as well?

Absolutely.

On the subject of nutrition labels, I think we absolutely need to do that.

I think that there are some technical challenges and that building proper nutrition labels

goes hand-in-hand with transparency.

The biggest scientific challenge in understanding these models is how they generalize.

What do they memorize and what new things do they do?

For example, the more that the thing you want to test accuracy on is already in the data set,

the less you can get a proper read on that.

So it’s important, first of all, that scientists be part of that process, and second, that

we have much greater transparency about what actually goes into these systems.

If we don’t know what’s in them, then we don’t know exactly how well they’re doing when we

give something new, and we don’t know how good a benchmark that will be for something

that’s entirely novel.

So I could go into that more, but I want to flag that.

Second, on jobs: past performance history is not a guarantee of the future.

It has always been the case in the past that we have had more jobs, that new jobs, new

professions come in as new technologies come in.

I think this one’s going to be different, and the real question is over what time scale?

Is it going to be 10 years?

Is it going to be 100 years?

And I don’t think anybody knows the answer to that question.

I think in the long run, so-called artificial general intelligence really will replace a

large fraction of human jobs.

We’re not that close to artificial general intelligence.

Despite all of the media hype and so forth, I would say that what we have right now is

just a small sampling of the AI that we will build, and in 20 years people will laugh at this,

as I think it was Senator Hawley, or maybe Senator Durbin, who made the example,

it was Senator Durbin who made the example about cell phones.

When we look back at the AI of today 20 years from now, we'll be like, wow, that stuff

was really unreliable.

It couldn't really do planning, which is an important technical aspect,

and its reasoning abilities were limited.

But when we get to AGI, artificial general intelligence, maybe let’s say it’s 50 years,

that really is going to have, I think, profound effects on labor, and there’s just no way

around that.

And last, I don’t know if I’m allowed to do this, but I will note that Sam’s worst fear,

I do not think is employment, and he never told us what his worst fear actually is, and

I think it’s germane to find out.

Thank you.

I’m going to ask Mr. Altman if he cares to respond.

Yeah.

Look, we have tried to be very clear about the magnitude of the risks here.

I think jobs and employment and what we’re all going to do with our time really matters.

I agree that when we get to very powerful systems, the landscape will change.

I think I’m just more optimistic that we are incredibly creative and we find new things

to do with better tools, and that will keep happening.

My worst fears are that we cause significant, we, the field, the technology, the industry,

cause significant harm to the world.

I think that could happen in a lot of different ways.

It’s why we started the company.

It’s a big part of why I’m here today, and why we’ve been here in the past, and we’ve

been able to spend some time with you.

I think if this technology goes wrong, it can go quite wrong, and we want to be vocal

about that.

We want to work with the government to prevent that from happening, but we try to be very

clear-eyed about what the downside case is and the work that we have to do to mitigate

that.

Thank you.

And our hope is that the rest of the industry will follow the example that you, and IBM

and Ms. Montgomery, have set by coming today and meeting with us, as you have done privately,

in helping to guide what we’re going to do so that we can target the harms and avoid

unintended consequences to the good.

Thank you.

Senator Hawley.

Thank you again, Mr. Chairman.

Thanks to the witnesses for being here.

Mr. Altman, I think you grew up in St. Louis, if I’m not mistaken.

It’s great to see a fellow Missourian here.

Missouri is a great place.

It is.

Thank you.

I think it's important, especially underlining for the record, that Missouri is a great place.

That is the takeaway from today’s hearing.

Maybe we should stop there, Mr. Chairman.

Let me ask you, Mr. Altman, I think I’ll start with you, and I’ll just preface this by saying

my questions here are an attempt to get my head around and to ask all of you to help

us to get our heads around what this generative AI, particularly the large language models,

what it can do.

So I’m trying to understand its capacities and then its significance.

So I’m looking at a paper here entitled, Large Language Models Trained on Media Diets

Can Predict Public Opinion.

This was just posted about a month ago.

The authors are Chu, Andreas, Ansolabehere, and Roy.

And their conclusion, this work was done at MIT and then also at Google, their conclusion

is that large language models can indeed predict public opinion.

And they go through and model why this is the case.

And they conclude ultimately that an AI system can predict human survey responses by adapting

a pre-trained language model to subpopulation-specific media diets.

So in other words, you can feed the model a particular set of media inputs and it can,

with remarkable accuracy, and the paper goes into this, predict then what people’s opinions

will be.

I want to think about this in the context of elections.

If these large language models can even now, based on the information we put into them,

quite accurately predict public opinion, you know, ahead of time, I mean predict, it’s

before you even ask the public these questions, what will happen when entities, whether it’s

corporate entities or whether it’s governmental entities or whether it’s campaigns or whether

it’s foreign actors, take this survey information, these predictions about public opinion, and

then fine-tune strategies to elicit certain responses, certain behavioral responses.

I mean, we already know, this committee has heard testimony,

I think three years ago now, about the effect of something as prosaic, it now seems, as

Google search, the effect that this has on voters in an election, particularly undecided

voters in the final days of an election who may try to get information from Google search,

and what an enormous effect the ranking of the Google search results, the articles that it returns,

has on an undecided voter.

This of course is orders of magnitude, far more powerful, far more significant, far more

directive if you like.

So Mr. Altman, maybe you can help me understand here what some of the significance of this

is.

Should we be concerned about models that can, large language models that can predict survey

opinion and then can help organizations, entities, fine-tune strategies to elicit behaviors from

voters?

Should we be worried about this for our elections?

Yeah.

Thank you, Senator Hawley, for the question.

It’s one of my areas of greatest concern, the more general ability of these models to

manipulate, to persuade, to provide sort of one-on-one, you know, interactive disinformation.

I think that’s like a broader version of what you’re talking about.

But given that we’re going to face an election next year and these models are getting better,

I think this is a significant area of concern.

I think there’s a lot, there’s a lot of policies that companies can voluntarily adopt and I’m

happy to talk about what we do there.

I do think some regulation would be quite wise on this topic.

Someone mentioned earlier, it’s something we really agree with.

People need to know if they’re talking to an AI, if content that they’re looking at

might be generated or might not.

I think making that clear is a great thing to do.

I think we also will need rules, guidelines about what’s expected in terms of disclosure

from a company providing a model that could have these sorts of abilities that you talk about.

So I’m nervous about it.

I think people are able to adapt quite quickly.

When Photoshop came onto the scene a long time ago, you know, for a while people were

really quite fooled by Photoshop images and then pretty quickly developed an understanding

that images might be Photoshopped.

This will be like that but on steroids and the interactivity, the ability to really model,

predict humans well as you talked about, I think is going to require a combination of

companies doing the right thing, regulation and public education.

Professor Marcus, do you want to address this?

Yeah, I’d like to add two things.

One is in the appendix to my remarks, I have two papers to make you even more concerned.

One is in the Wall Street Journal just a couple of days ago called Help, My Political

Beliefs Were Altered by a Chatbot.

And I think the scenario you raised was that we might basically observe people and use

surveys to figure out what they’re saying.

But as Sam just acknowledged, the risk is actually worse that the systems will directly,

maybe not even intentionally, manipulate people.

And that was the thrust of the Wall Street Journal article.

And it links to an article that I've also linked to, not yet published, not yet peer-reviewed,

called Interacting with Opinionated Language Models Changes Users' Views.

And this comes back ultimately to data.

One of the things that I’m most concerned about with GPT-4 is that we don’t know what

it’s trained on.

I guess Sam knows, but the rest of us do not.

And what it is trained on has consequences for essentially the biases of the system.

We could talk about that in technical terms.

But how these systems might lead people around depends very heavily on what data they are

trained on.

We need transparency about that, and we probably need scientists in there doing analysis in

order to understand what the political influences, for example, of these systems might be.

And it’s not just about politics.

It can be about health.

It could be about anything.

These systems absorb a lot of data, and then what they say reflects that data.

And they’re going to do it differently depending on what’s in that data.

So it makes a difference if they’re trained on the Wall Street Journal as opposed to the

New York Times or Reddit.

I mean, actually, they’re largely trained on all of this stuff, but we don’t really

understand the composition of that.

And so we have this issue of potential manipulation.

And it’s even more complex than that because it’s subtle manipulation.

People may not be aware of what’s going on.

That was the point of both the Wall Street Journal article and the other article that

I called your attention to.

Let me ask you about AI systems trained on personal data, the kind of data that, for

instance, the social media companies, the major platforms, Google, Meta, et cetera,

collect on all of us routinely.

We’ve had many a chat about this in this committee over many a year now.

But the massive amounts of data, personal data, that the companies have on each one

of us, an AI system that is trained on that individual data that knows each of us better

than ourselves and also knows the billions of data points about human behavior, human

language interaction generally, can’t we foresee an AI system that is extraordinarily good

at determining what will grab human attention and what will keep an individual’s attention?

And so for the war for attention, the war for clicks that is currently going on, on

all of these platforms and how they make their money, I’m just imagining an AI system, these

AI models supercharging that war for attention such that we now have technology that will

allow individual targeting of a kind we have never even imagined before, where the AI will

know exactly what Sam Altman finds attention grabbing, will know exactly what Josh Hawley

finds attention grabbing, will be able to elicit, to grab our attention and then elicit

responses from us in a way that we have heretofore not even been able to imagine.

Should we be concerned about that for its corporate applications, for the monetary applications,

for the manipulation that could come from that? Mr. Altman.

Yes, we should be concerned about that. To be clear, OpenAI does not,

you know, we don’t have an ad-based business model, so we’re not trying to build up these

profiles of our users. We’re not trying to get them to use it more. Actually, we’d love it if

they’d use it less because we don’t have enough GPUs. But I think other companies are already,

and certainly will in the future, use AI models to create, you know, very good ad predictions of

what a user will like. I think that’s already happening in many ways. Mr. Marcus, anything

you want to add? Yes, and perhaps Ms. Montgomery will want to as well, I don't know.

But hyper-targeting of advertising is definitely going to come. I agree that that’s not been

OpenAI’s business model. Of course, now they’re working for Microsoft, and I don’t know what’s

in Microsoft’s thoughts, but we will definitely see it. Maybe it will be with open source

language models. I don’t know, but the technology there is, let’s say, partway there to being able

to do that, and we’ll certainly get there. So, we’re an enterprise technology company,

not consumer-focused, so the space isn't one that we necessarily operate in,

but these issues are hugely important issues, and it’s why we’ve been out ahead in developing

the technology that will help to ensure that you can do things like produce a fact sheet that has

the ingredients of what your data is trained on, data sheets, model cards, all those types of

things, and calling for, as I’ve mentioned today, transparency, so you know what the algorithm

was trained on, and then you also know and can manage and monitor continuously over the life

cycle of an AI model the behavior and the performance of that model. Senator Durbin.

Thank you. I think what’s happening today in this hearing room is historic.

I can’t recall when we’ve had people representing large corporations or

private sector entities come before us and plead with us to regulate them.

In fact, many people in the Senate have based their careers on the opposite, that the economy

will thrive if government gets the hell out of the way, and what I'm hearing instead today is

that "stop me before I innovate again" message, and I'm just curious as to how we're going to achieve

this. As I mentioned Section 230 in my opening remarks, we learned something there. We decided

in Section 230 that we were basically going to absolve the industry from liability

for a period of time as it came into being. Well, Mr. Altman, on the podcast earlier this year,

you agreed with host Kara Swisher that Section 230 doesn't apply to generative AI,

and that developers like OpenAI should not be entitled to full immunity for harms caused by

their products. So what have we learned from 230 that applies to your situation with AI?

Thank you for the question, Senator. I don’t know yet exactly what the right answer here is. I’d

love to collaborate with you to figure it out. I do think for a very new technology,

we need a new framework. Certainly companies like ours bear a lot of responsibility for

the tools that we put out in the world, but tool users do as well, and so do the people

that will build on top of it, between them and the end consumer, and how we want to come up with a

liability framework there is a super important question, and we’d love to work together.

The point I want to make is this. When it came to online platforms, the inclination of the

government was get out of the way. This is a new industry. Don’t overregulate it. In fact,

give them some breathing space and see what happens. I’m not sure I’m happy with the outcome

as I look at online platforms and the harms that they’ve created. Problems that we’ve seen

demonstrated in this committee, child exploitation, cyber bullying, online drug sales, and more.

I don’t want to repeat that mistake again, and what I hear is the opposite suggestion from the

private sector, and that is, come in at the front end of this thing and establish some liability standards,

precision regulation. For a major company like IBM to come before this committee and say to

the government, please regulate us, can you explain the difference in thinking from the past and now?

Yeah, absolutely. So for us, this comes back to the issue of trust, and trust in the technology.

Trust is our license to operate, as I mentioned in my remarks, and so we firmly believe, and we’ve

been calling for precision regulation of artificial intelligence for years now. This is not a new

position. We think that technology needs to be deployed in a responsible and clear way.

We've adopted principles around that, trust and transparency we call them, principles

that were articulated years ago, and we build them into practices. That's why we're here advocating

for a precision regulatory approach. So we think that AI should be regulated at the point of risk,

essentially, and that’s the point at which technology meets society.

Let’s take a look at what that might appear to be. Members of Congress are a pretty smart lot

of people, maybe not as smart as we think we are many times, and government certainly has a capacity

to do amazing things, but when you talk about our ability to respond to the current challenge and

perceived challenge of the future, challenges which you all have described in terms which are

hard to forget, as you said, Mr. Altman, things can go quite wrong. As you said, Mr. Marcus,

democracy is threatened. I mean, the magnitude of the challenge you’re giving us is substantial.

I’m not sure that we respond quickly and with enough expertise to deal with it.

Professor Marcus, you made a reference to CERN, the International Arbiter of Nuclear Research,

I suppose. I don’t know if that’s a fair characterization, but it’s a characterization

I’ll start with. What is it, what agency of this government do you think exists that could

respond to the challenge that you’ve laid down today?

We have many agencies that can respond in some ways. For example, the FTC,

the FCC, there are many agencies that can, but my view is that we probably need a cabinet level

organization within the United States in order to address this. And my reasoning for that is that

the number of risks is large. The amount of information to keep up on is so much. I think

we need a lot of technical expertise. I think we need a lot of coordination of these efforts. So

there is one model here where we stick to only existing law and try to shape all of what we need

to do, and each agency does their own thing. But I think that AI is going to be such a large part

of our future and is so complicated and moving so fast, and this does not fully solve your problem

about a dynamic world, but it’s a step in that direction to have an agency that’s full-time job

is to do this. I personally have suggested, in fact, that we should want to do this in a global

way. I wrote an article in The Economist, I have a link in here, an invited essay for The Economist

suggesting we might want an international agency for AI. That’s what I wanted to go to next, and

that is the fact that, I'll set aside the CERN and nuclear examples, because government was

involved in that from day one, at least in the United States. But now we’re dealing with

innovation, which doesn’t necessarily have a boundary. We may create a great U.S. agency,

and I hope that we do, that may have jurisdiction over U.S. corporations and U.S. activity,

but doesn’t have a thing to do with what’s going to bombard us from outside the United States.

How do you give this international authority the authority to regulate in a fair way for

all entities involved in AI? I think that’s probably over my pay grade. I would like to

see it happen, and I think it may be inevitable that we push there. I mean, I think the politics

behind it are obviously complicated. I’m really heartened by the degree to which this room is

bipartisan and supporting the same things, and that makes me feel like it might be possible.

I would like to see the United States take leadership in such organization. It has to

involve the whole world and not just the U.S. to work properly. I think even from the perspective

of the companies, it would be a good thing. So the companies themselves do not want a situation

where you take these models, which are expensive to train, and you have to have 190-some of them,

one for every country. That wouldn’t be a good way of operating. When you think about the energy

costs alone, just for training these systems, it would not be a good model if every country has

its own policies, and for each jurisdiction, every company has to train another model,

and maybe, you know, different states are different. So Missouri and California have

different rules, and so then that requires even more training of these expensive models with huge

climate impact. I mean, it would be very difficult for the companies to operate if there was no

global coordination, and so I think that we might get the companies on board if there’s bipartisan

support here, and I think there’s support around the world. It is entirely possible that we could

develop such a thing, but obviously, there are many, you know, nuances here of diplomacy that

are over my pay grade. I would love to learn from you all to try to help make that happen.

Mr. Altman.

Can I weigh in just briefly?

Briefly, please.

I want to echo support for what Mr. Marcus said. I think the U.S. should lead here and do things

first, but to be effective, we do need something global. As you mentioned, this can happen

everywhere. There is precedent. I know it sounds naive to call for something like this, and it

sounds really hard. There is precedent. We’ve done it before with the IAEA. We’ve talked about doing

it for other technologies. Given what it takes to make these models, the chip supply chain,

the sort of limited number of competitive GPUs, the power the U.S. has over these companies,

I think there are paths to the U.S. setting some international standards that other countries would

need to collaborate with and be part of that are actually workable, even though it sounds on its

face like an impractical idea. I think it would be great for the world.

Thank you, Mr. Chairman.

Thanks, Senator Durbin. In fact, I think we’re going to hear more about what Europe is doing.

European Parliament already is acting on an AI act. On social media, Europe is ahead of us.

We need to be in the lead. I think your point is very well taken. Let me turn to Senator Graham.

Senator Blackburn.

Thank you, Mr. Chairman, and thank you all for being here with us today. I put into my ChatGPT

account, should Congress regulate AI, ChatGPT, and it gave me four pros, four cons, and said,

ultimately, the decision rests with Congress and deserves careful consideration. So on that,

you know, it was very balanced. I recently visited with the Nashville Technology Council.

I represent Tennessee. And, of course, you had people there from healthcare, financial services,

logistics, educational entities, and they’re concerned about what they see happening with AI,

with the utilizations for their companies. Ms. Montgomery, you know, similar to you,

they’ve got healthcare people looking at disease analytics. They’re looking at predictive

diagnosis, how this can better the outcomes for patients; the logistics industry, looking at ways to

save time and money and yield efficiencies. You’ve got financial services that are saying,

how does this work with quantum? How does it work with blockchain? How can we use this? But

I think as we have talked with them, Mr. Chairman, one of the things that continues to come up

is, yes, Professor Marcus, as you were saying, the EU, different entities are ahead of us in this,

but we have never established federal preemption for online privacy, for data security,

or put some of those foundational elements in place, which is something that we need to do

as we look at this. And it will require that the Commerce Committee and Judiciary Committee decide

how we move forward so that people own their virtual you. And Mr. Altman, I was glad to see

last week that your OpenAI models are not going to be trained using consumer data. I think that

that is important. And if we have a second round, I’ve got a host of questions for you on data

security and privacy. But I think it’s important to let people control their virtual you, their

information in these settings. And I want to come to you on music and content creation, because

we’ve got a lot of songwriters and artists, and I think we have the best creative community on

the face of the earth there in Tennessee. And they should be able to decide if their copyrighted

songs and images are going to be used to train these models. And I’m concerned about OpenAI’s

Jukebox. It offers some re-renditions in the style of Garth Brooks, which suggests that OpenAI

is trained on Garth Brooks songs. I went in this weekend and I said, write me a song that

sounds like Garth Brooks. And it gave me a different version of Simple Man. So it’s interesting that it

would do that. But you’re training it on these copyrighted songs, these MIDI files, these sound

technologies. So as you do this, who owns the rights to that AI-generated material? And using

your technology, could I remake a song, insert content from my favorite artist, and then own the

creative rights to that song? Thank you, Senator. This is an area of great interest to us.

I would say, first of all, we think that creators deserve control over how their creations are used

and what happens sort of beyond the point of them releasing it into the world.

Second, I think that we need to figure out new ways with this new technology that creators can

win, succeed, have a vibrant life. And I’m optimistic that this will present…

Then let me ask you this. How do you compensate the artist?

That’s exactly what I was going to say. We’re working with artists now, visual artists,

musicians, to figure out what people want. There’s a lot of different opinions, unfortunately,

and at some point we’ll have to… Let me ask you this. Do you favor

something like SoundExchange that has worked in the area of radio?

I’m not familiar with SoundExchange, I’m sorry. Okay. You’ve got your team behind you. Get back

to me on that. That would be a third-party entity. Okay.

So let’s discuss that. Let me move on. Can you commit, as you’ve done with consumer data,

not to train ChatGPT, OpenAI, Jukebox, or other AI models on artists and songwriters’ copyrighted

works, or use their voices and their likenesses without first receiving their consent?

First of all, Jukebox is not a product we offer. That was a research release,

but it’s not a product, unlike ChatGPT or DALL-E. Yeah, but we’ve lived through Napster.

Yes. That was something that really cost a lot of artists a lot of money.

Oh, I understand. Yeah, for sure. In the digital distribution era.

I don’t know the numbers on Jukebox on the top of my head as a research release. I can follow up

with your office, but Jukebox is not something that gets much attention or usage. It was put

out to show that something’s possible. Well, as Senator Durbin just said,

and I think it’s a fair warning to you all, if we’re not involved in this from the get-go,

and you all already are a long way down the path on this, but if we don’t step in,

then this gets away from you. So, are you working with the Copyright Office?

Are you considering protections for content generators and creators in generative AI?

Yes, we are absolutely engaged on that. Again, to reiterate my earlier point,

we think that content creators, content owners need to benefit from this technology. Exactly

what the economic model is, we’re still talking to artists and content owners about what they want.

I think there’s a lot of ways this can happen, but very clearly, no matter what the law is,

the right thing to do is to make sure people get significant upside benefit from this new technology

and we believe that it’s really going to deliver that, but that content owners, likenesses,

people totally deserve control over how that’s used and to benefit from it.

Okay, so on privacy then, how do you plan to account for the collection

of voice and other user-specific data, things that are copyrighted, user-specific data through

your AI applications? Because if I can go in and say, write me a song that sounds like Garth Brooks

and it takes part of an existing song, there has to be a compensation to that artist for

that utilization and that use. If it was radio play, it would be there. If it was streaming,

it would be there. So if you’re going to do that, what is your policy for making certain you’re

accounting for that and you’re protecting that individual’s right to privacy and their right to

secure that data and that created work? So a few thoughts about this. Number one,

we think that people should be able to say, I don’t want my personal data trained on.

That’s, I think that’s… Right, that gets to a national privacy law,

which many of us here on the dais are working toward getting something that we can use.

Yeah, I think strong privacy… My time’s expired. Let me yield back. Thank you, Mr. Chair.

Thanks, Senator Blackburn. Senator Klobuchar. Thank you very much, Mr. Chairman. And

Senator Blackburn, I love Nashville, love Tennessee, love your music, but I will say,

I used ChatGPT and just asked, what are the top creative song artists of all time? And two of

the top three were from Minnesota. That would be Prince and Bob Dylan. Okay. All right. So let us

continue on. One thing AI won’t change and you’re seeing it here. All right. So on a more serious

note, though, my staff and I, in my role as chair of the Rules Committee and leading a lot of the

election bills, we just introduced a bill, which Representative Yvette Clarke from New York

introduced over in the House, and Senators Booker and Bennet and I did, on political advertisements.

But that is just, of course, the tip of the iceberg. You know this from your discussions

with Senator Hawley and others about the images, and my own view, and Senator Graham’s, of Section 230

is that we just can’t let people make stuff up and then not have any consequence. But I’m going

to focus in on what my job, one of my jobs will be on the rules committee, and that is election

misinformation. And we just asked ChatGPT to do a tweet about a polling location in Bloomington,

Minnesota, and said there are long lines at this polling location at Atonement Lutheran Church.

Where should we go? Now, albeit it’s not an election right now, but the answer, the tweet

that was drafted was a completely fake thing. Go to 1234 Elm Street. And so you can imagine what

I’m concerned about here with an election upon us, with primary elections upon us, that we’re

going to have all kinds of misinformation. And I just want to know what you’re planning on doing

about it. I know we’re going to have to do something soon, not just for the images of the

candidates, but also for misinformation about the actual polling places and election rules.

Thank you, Senator. We talked about this a little bit earlier. We are quite concerned about the

impact this can have on elections. I think this is an area where hopefully the entire industry

and the government can work together quickly. There’s many approaches, and I’ll talk about

some of the things we do. But before that, I think it’s tempting to use the frame of social media.

But this is not social media. This is different. And so the response that we need is different.

You know, this is a tool that a user is using to help generate content more efficiently than before.

They can change it. They can test the accuracy of it. If they don’t like it, they can get another

version. But it still then spreads through social media or other ways. Like, ChatGPT is a, you know,

single player experience where you’re just using this. And so I think as we think about what to do,

that’s important to understand. There’s a lot that we can do there. There’s things that the model

refuses to generate. We have policies. We also importantly have monitoring. So at scale,

we can detect someone generating a lot of those tweets, even if generating one tweet is OK.

Yeah. And of course, there’s going to be other platforms. And if they’re all spouting out fake

election information, I just think what happened in the past with Russian interference and the like

is just going to be the tip of the iceberg, with some of those fake ads. So that’s number one.

Number two is the impact on intellectual property. And Senator Blackburn was getting at some of this

with song rights and had serious concerns about that. But news content. So Senator Kennedy and

I have a bill that was really quite straightforward that would simply allow the news

organizations an exemption to be able to negotiate with basically Google and Facebook. Microsoft was

supportive of the bill, but basically to negotiate with them to get better rates and be able to

have some leverage. And other countries are doing this, Australia and the like.

And so my question is, when we already have a study by Northwestern predicting that one third

of the U.S. newspapers that existed roughly two decades ago are going to be gone

by 2025, unless you start compensating for everything from movies and books, yes,

but also news content, we’re going to lose any realistic content producers. And so I’d

like your response to that. And of course, there is an exemption for copyright in Section 230.

But I think asking little newspapers to go out and sue all the time just can’t be the answer.

They’re not going to be able to keep up. Yeah, like, it is my hope that tools like

what we’re creating can help news organizations do better. I think having a vibrant

national media is critically important. And let’s call it round one of the

internet has not been great for that. Right. We’re talking here about local outlets that,

you know, report on your high school sports scores and a scandal in your city council,

those kinds of things. For sure. They’re the ones that are actually getting it the worst,

the little radio stations and broadcast. But do you understand that this could be

exponentially worse in terms of local news content if they’re not compensated?

Well, because what they need is to be compensated for their content and not have it stolen.

Yeah. Again, our model, you know, the current version of GPT-4, ended training in 2021. It’s

not a good way to find recent news. And I don’t think it’s a service that

can do a great job of linking out, although maybe with our plugins, it’s possible.

If there are things that we can do to help local news, we would certainly like to. Again,

I think it’s critically important. Okay. May I add something there?

Yeah. But let me just ask you a question. You can combine them quick,

more transparency on the platforms. Senator Coons and Senator Cassidy and I have the Platform

Accountability and Transparency Act to give researchers access to this information on

the algorithms and the like on social media data. Would that be helpful? And then why don’t you

just say yes or no, and then go ahead with his point. Transparency is absolutely critical here

to understand the political ramifications, the bias ramifications and so forth. We need

transparency about the data. We need to know more about how the models work. We need to have

scientists have access to them. I was just going to amplify your earlier point about local news.

A lot of news is going to be generated by these systems. They’re not reliable. NewsGuard already

has a study. I’m sorry, it’s not in my appendix, but I will get it to your office, showing that

something like 50 websites are already generated by bots. We’re going to see much, much more of

that. And it’s going to make it even more competitive for the local news organizations.

And so the quality of the sort of overall news market is going to decline as we have more

generated content by systems that aren’t actually reliable in the content they’ve generated.

Thank you. And thank you on a very timely basis to make the argument why we have to mark up this

bill again in June. I appreciate it. Thank you. Senator Graham.

Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is

different than social media and learn from the mistakes we’ve made with social media.

The idea of not suing social media companies is to allow the Internet to flourish because

if I slander you, you can sue me. If you’re a billboard company and you put up the slander,

can you sue the billboard company? We said no. Basically, Section 230 is being used by

social media companies to avoid liability for activity that other people generate.

When they refuse to comply with their terms of use, a mother calls up the company and says,

this app is being used to bully my child to death. You promised in the terms of use,

you would prevent bullying. And she calls three times. She gets no response. The child kills

herself and they can’t sue. Do you all agree we don’t want to do that again? Yes.

If I may speak for one second, there’s a fundamental distinction

between reproducing content and generating content.

Yeah, but you would like liability where people are harmed.

Absolutely. Yes. In fact, IBM has been publicly advocating to condition liability on a reasonable

care standard. So let me just make sure I understand the law as it exists today.

Mr. Altman, thank you for coming. Your company is not claiming that Section 230 applies to the tool you have created.

Yeah, we’re claiming we need to work together to find a totally new approach. I don’t think

Section 230 is even the right framework. Okay. So under the law, it exists today.

This tool you create, if I’m harmed by it, can I sue you?

That is beyond my area of legal expertise. Have you ever been sued?

Not for that, no. Have you ever been sued at all?

You, your company? Yeah, we get sued.

Yeah, we’ve gotten sued before. Okay. And what for?

I mean, they’ve mostly been like pretty frivolous things like I think happens to any company.

But like the examples my colleagues have given from artificial intelligence that could literally

ruin our lives. Can we go to the company that created that tool and sue them? Is that your

understanding? Yeah, I think there needs to be clear responsibility by the companies.

But you’re not claiming any kind of legal protection

like Section 230 applies to your industry, is that correct?

No, I don’t think we’re saying anything like that.

Mr. Marcus, when it comes to consumers, there seems to be like three time-tested

ways to protect consumers against any product. Statutory schemes, which are non-existent here,

legal systems, which may be here, but not for social media, and agencies. To go back to

Senator Hawley’s point, the atom bomb has put a cloud over humanity.

But nuclear power could be one of the solutions to climate change.

So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey,

Bob, what would you like to do today? Let’s go build a nuclear power plant.

You have a Nuclear Regulatory Commission that governs how you build a plant and how it is licensed.

Do you agree, Mr. Altman, that these tools you’re creating should be licensed?

Yeah, we’ve been calling for this. That’s the simplest way. You get a license. And do you

agree with me that the simplest way and the most effective way is to have an agency that is more

nimble and smarter than Congress, which should be easy to create, overseeing what you do?

Yes, we’d be enthusiastic about that.

You agree with that, Mr. Marcus?

Absolutely.

You agree with that, Ms. Montgomery?

I would have some nuances. I think we need to build on what we have in place already today.

We don’t have an agency that’s working.

Regulators.

Oh, wait a minute. Nope, nope, nope.

We don’t have an agency that regulates the technology.

So should we have one?

But a lot of the issues... I don’t think so. A lot of the issues...

Okay, wait a minute. Wait a minute. So IBM says we don’t need an agency.

Uh, interesting. Should we have a license required for these tools?

So, so what we believe is that we need to regulate.

That’s a simple question. Should you get a license to produce one of these tools?

I think it comes back to some of them potentially, yes.

So what I said at the onset is that we need to clearly define risks.

Do you claim Section 230 applies in this area at all?

We’re not a platform company and we’ve, again,

long advocated for a reasonable care standard in Section 230.

I just don’t understand how you could say

that you don’t need an agency to deal with the most transformative technology maybe ever?

Well, I think we have existing

Is this a transformative technology that can disrupt life as we know it, good and bad?

I think it’s a transformative technology, certainly.

And the conversations that we’re having here today have been

really bringing to light the fact that the domains and the issues

This one with you has been very enlightening to me.

Mr. Altman, why are you so willing to have an agency?

Senator, we’ve been clear about what we think the upsides are,

and I think you can see from users how much they enjoy it,

how much value they’re getting out of it.

But we’ve also been clear about what the downsides are,

and so that’s why we think we need an agency.

So it’s a major tool to be used by a lot of people.

It’s a major new technology.

Yeah, if you make a ladder and the ladder doesn’t work,

you can sue the people who made the ladder,

but there are some standards out there to make a ladder.

So that’s why we’re agreeing with you.

Yeah, that’s right. I think you’re on the right track.

So here’s what my two cents worth for the committee

is that we need to empower an agency that issues a license and can take it away.

Wouldn’t that be some incentive to do it right

if you could actually be taken out of business?

Clearly, that should be part of what an agency can do.

And you also agree that China’s doing AI research, is that right?

Correct.

This world organization that doesn’t exist, maybe it will,

but if you don’t do something about the China part of it,

you’ll never quite get this right. Do you agree?

Well, that’s why I think it doesn’t necessarily have to be a world organization,

but there has to be some sort of, and there’s a lot of options here,

there has to be some sort of standard,

some sort of set of controls that do have global effect.

You know, because, you know, other people are doing this.

I got 15. Military application.

How can AI change the warfare?

And you got one minute.

I got one minute? All right.

This is, that’s a tough question for one minute.

This is very far out of my area of expertise.

But I give you one example, a drone.

Can a drone, you can plug into a drone,

the coordinates and it can fly out and it goes over this target

and it drops a missile on this car moving down the road and somebody’s watching it.

Could AI create a situation where a drone can select the target itself?

I think we shouldn’t allow that.

Well, can it be done?

Sure.

Thanks.

Thanks, Senator Graham.

Senator Coons.

Thank you, Senator Blumenthal, Senator Hawley,

for convening this hearing, for working closely together to come up with this

compelling panel of witnesses and beginning a series of hearings

on this transformational technology.

We recognize the immense promise and substantial risks

associated with generative AI technologies.

We know these models can make us more efficient,

help us learn new skills, open whole new vistas of creativity.

But we also know that generative AI can authoritatively deliver

wildly incorrect information.

It can hallucinate, as is often described.

It can impersonate loved ones.

It can encourage self-destructive behaviors.

And it can shape public opinion and the outcome of elections.

Congress thus far has demonstrably failed to responsibly enact

meaningful regulation of social media companies

with serious harms that have resulted and that we don’t fully understand.

Senator Klobuchar referenced in her questioning

a bipartisan bill that would open up social media platforms’ underlying algorithms.

We have struggled to even do that, to understand the underlying technology

and then to move towards responsible regulation.

We cannot afford to be as late to responsibly regulating generative AI

as we have been to social media, because the consequences,

both positive and negative, will exceed those of social media by orders of magnitude.

So let me ask a few questions designed to get at both how we assess the risk,

what’s the role of international regulation, and how does this impact AI?

Mr. Altman, I appreciate your testimony about the ways in which OpenAI

assesses the safety of your models through a process of iterative deployment.

The fundamental question embedded in that process, though,

is how you decide whether or not a model is safe enough to deploy

and safe enough to have been built and then let go into the wild.

I understand one way to prevent generative AI models from providing harmful content

is to have humans identify that content and then train the algorithm to avoid it.

There’s another approach that’s called constitutional AI

that gives the model a set of values or principles to guide its decision making.

Would it be more effective to give models these kinds of rules

instead of trying to require or compel training the model

on all the different potentials for harmful content?

Thank you, Senator. It’s a great question.

I’d like to frame it by talking about why we deploy at all,

like why we put these systems out into the world.

There’s the obvious answer about there’s benefits,

and people are using it for all sorts of wonderful things and getting great value,

and that makes us happy.

But a big part of why we do it is that we believe that iterative deployment

and giving people in our institutions and you all

time to come to grips with this technology, to understand it,

to find its limitations, its benefits, the regulations we need around it,

what it takes to make it safe, that’s really important.

Going off to build a super powerful AI system in secret

and then dropping it on the world all at once I think would not go well.

So a big part of our strategy is while these systems are still relatively weak

and deeply imperfect, to find ways to get people to have experience with them,

to have contact with reality,

and to figure out what we need to do to make it safer and better.

And that is the only way that I’ve seen in the history of new technology

and products of this magnitude to get to a very good outcome.

And so that interaction with the world is very important.

Now, of course, before we put something out, it needs to meet a bar of safety.

And again, we spent well over six months with GPT-4 after we finished training it,

going through all of these different things,

and deciding also what the standards were going to be before we put something out there,

trying to find the harms that we knew about and how to address those.

One of the things that’s been gratifying to us is even some of our biggest critics

have looked at GPT-4 and said, wow, OpenAI made huge progress on…

If you could focus briefly on whether or not a constitutional model

that gives values would be worth it.

I was just about to get there.

All right. Sorry about that.

Yeah, I think giving the models values up front is an extremely important step.

You know, RLHF is another way of doing that same thing.

But somehow or other, you are…

With synthetic data or human-generated data,

you’re saying, here are the values, here’s what I want you to reflect,

or here are the wide bounds of everything that society will allow.

And then within there, you pick as the user,

you know, if you want value system over here, value system over there.

We think that’s very important.

There’s multiple technical approaches, but we need to give policymakers

and the world as a whole the tools to say, here’s the values and implement them.

Thank you, Ms. Montgomery.

You serve on an AI ethics board of a long-established company

that has a lot of experience with AI.

I’m really concerned that generative AI technologies

can undermine faith in democratic values

and the institutions that we have.

The Chinese are insisting that AI as being developed in China

reinforce the core values of the Chinese Communist Party

and the Chinese system.

And I’m concerned about how we promote AI that reinforces

and strengthens open markets, open societies, and democracy.

In your testimony, you’re advocating for AI regulation

tailored to the specific way the technology is being used,

not the underlying technology itself.

And the EU is moving ahead with an AI act

which categorizes AI products based on level of risk.

You all in different ways have said that you view elections

and the shaping of election outcomes and disinformation

that can influence elections as one of the highest risk cases,

one that’s entirely predictable.

We have attempted so far unsuccessfully to regulate social media

after the demonstrably harmful impacts of social media

on our last several elections.

What advice do you have for us about what kind of approach we should follow

and whether or not the EU direction is the right one to pursue?

Yeah, I mean the conception of the EU AI Act

is very consistent with this concept of precision regulation

where you’re regulating the use of the technology in context.

So absolutely that approach makes a ton of sense.

It’s what I advocated for at the onset.

Different rules for different risks.

So in the case of elections,

absolutely any algorithm being used in that context

should be required to have disclosure around the data being used,

the performance of the model,

anything along those lines is really important.

Guardrails need to be in place.

And on the point, just come back to the question

of whether we need an independent agency.

I mean, I think we don’t want to slow down regulation

to address real risks right now, right?

So we have existing regulatory authorities in place

who have been clear that they have the ability

to regulate in their respective domains.

A lot of the issues we’re talking about today

span multiple domains, elections and the like, so.

If I could, I’ll just assert

that those existing regulatory bodies and authorities

are under-resourced and lack many of the statutory

regulatory powers that they need.

Correct.

We have failed to deliver on data privacy

even though industry has been asking us to regulate data privacy.

If I might, Mr. Marcus,

I’m interested also what international bodies

are best positioned to convene multilateral discussions

to promote responsible standards.

We’ve talked about a model being CERN and nuclear energy.

I’m concerned about proliferation and non-proliferation.

We’ve also talked, I would suggest that the IPCC,

a UN body, helped at least provide a scientific baseline

of what’s happening in climate change.

So that even though we may disagree about strategies,

globally, we’ve come to a common understanding

of what’s happening

and what should be the direction of intervention.

I’d be interested, Mr. Marcus,

if you could just give us your thoughts

on who’s the right body internationally

to convene a conversation

and one that could also reflect our values.

I’m still feeling my way on that issue.

I think global politics is not my specialty.

I’m an AI researcher,

but I have moved towards policy in recent months, really,

because of my great concern about all of these risks.

I think certainly the UN, UNESCO has its guidelines,

should be involved and at the table,

and maybe things work under them,

and maybe they don’t,

but they should have a strong voice

and help to develop this.

The OECD has also been thinking greatly about this.

A number of organizations have internationally.

I don’t feel like I personally am qualified

to say exactly what the right model is there.

Well, thank you.

I think we need to pursue this,

both at the national level and the international level.

I’m the chair of the IP subcommittee

of the Judiciary Committee.

In June and July, we will be having hearings

on the impact of AI on patents and copyrights.

You can already tell from the questions of others,

there’ll be a lot of interest.

I look forward to following up with you about that topic.

I know, Mr. Chairman, I’m-

I look forward to helping as much as possible.

Thank you very much.

Thanks, Senator Coons.

Senator Kennedy.

Thank you all for being here.

Permit me to share with you three hypotheses

that I would like you to assume for the moment to be true.

Hypothesis number one, many members of Congress

do not understand artificial intelligence.

Hypothesis number two, that absence of understanding

may not prevent Congress from plunging in with enthusiasm

and trying to regulate this technology in a way

that could hurt this technology.

Hypothesis number three that I would like you to assume,

there is likely a berserk wing of the artificial intelligence

community that intentionally or unintentionally

could use artificial intelligence to kill all of us

and hurt us the entire time that we are dying.

Assume all of those to be true.

Please tell me in plain English, two or three reforms,

regulations, if any, that you would implement

if you were a queen or king for a day.

Ms. Montgomery.

I think it comes back again to transparency

and explainability in AI.

We absolutely need to know and have companies attest.

What do you mean by transparency?

Disclosure of the data that’s used to train AI,

disclosure of the model and how it performs,

and making sure that there’s continuous governance

over these models, that we are the leading edge.

Governance by whom?

Technology governance, organizational governance,

rules and clarification that are needed.

Which rules?

I mean, this is your chance for you to say,

I mean, this is your chance, folks,

to tell us how to get this right.

Please use it.

All right.

I mean, I think, again, the rules should be focused

on the use of AI in certain contexts.

So if you look, for example,

at the EU AI Act, it has certain uses of AI

that it says are just simply too dangerous

and will be outlawed in the EU.

OK, so we ought to first pass a law that says

you can use AI for these uses, but not others.

Is that what you’re saying?

We need to define the highest risk uses of AI.

Is there anything else?

And then, of course, requiring things

like impact assessments and transparency,

requiring companies to show their work,

protecting data that’s used to train AI

in the first place as well.

Professor Marcus, if you could be specific.

This is your shot, man.

Talk in plain English and tell me what,

if any, rules we ought to implement.

And at least don’t just use concepts.

I’m looking for specificity.

Number one, a safety review like we use

with the FDA prior to widespread deployment.

If you’re going to introduce something

to 100 million people, somebody has

to have their eyeballs on it.

There you go.

OK, that’s a good one.

I’m not sure I agree with it, but that’s a good one.

What else?

You didn’t ask for three that you would agree with.

Number two, a nimble monitoring agency

to follow what’s going on, not just pre-review,

but also post as things are out there

in the world with authority to call things back,

which we’ve discussed today.

And number three would be funding geared

towards things like AI constitution,

AI that can reason about what it’s doing.

I would not leave things entirely

to current technology, which I think

is poor at behaving in ethical fashion

and behaving in honest fashion.

And so I would have funding to try to basically focus

on AI safety research.

That term has a lot of complications in my field.

There’s both safety, let’s say, short term and long term.

And I think we need to look at both.

Rather than just funding models to be bigger,

which is the popular thing to do,

we need to fund models to be more trustworthy.

Because I want to hear from Mr. Altman.

Mr. Altman, here’s your shot.

Thank you, Senator.

Number one, I would form a new agency

that licenses any effort above a certain scale

of capabilities and can take that license away

and ensure compliance with safety standards.

Number two, I would create a set of safety standards

focused on what you said in your third hypothesis

as the dangerous capability evaluations.

One example that we’ve used in the past

is looking to see if a model can self-replicate

and self-exfiltrate into the wild.

We can give your office a long other list

of the things that we think are important there,

but specific tests that a model has to pass

before it can be deployed into the world.

And then third, I would require independent audits.

So not just from the company or the agency,

but experts who can say the model is or isn’t in compliance

with these stated safety thresholds

and these percentages of performance on question X or Y.

Can you send me that information?

We will do that.

Would you be qualified if we promulgated those rules

to administer those rules?

I love my current job.

Cool.

Are there people out there that would be qualified?

We’d be happy to send you recommendations

for people out there, yes.

Okay.

You make a lot of money, do you?

I make, no, I’m paid enough for health insurance.

I have no equity in OpenAI.

Really? That’s interesting.

You need a lawyer.

I need a what?

You need a lawyer or an agent.

I’m doing this because I love it.

Thank you, Mr. Chairman.

Thanks, Senator Kennedy.

Senator Hirono.

Thank you, Mr. Chairman.

I’m listening to all of you testifying.

Thank you very much for being here.

Clearly, AI truly is a game-changing tool,

and we need to get the regulation of this tool right

because my staff, for example, asked AI,

it might have been GPT-4, it might have been,

I don’t know, one of the other entities

to create a song that my favorite band, BTS,

would sing, somebody else’s song,

but neither of the artists were involved

in creating what sounded like a really genuine song,

so you can do a lot.

We also asked, can there be a speech created

talking about the Supreme Court decision in Dobbs

and the chaos that it created using my voice,

my kind of voice, and it created a speech

that was really good.

It almost made me think about, you know,

what do I need my staff for?

So don’t worry, that’s not where we are.

Nervous laughter behind you.

Their jobs are safe.

But there’s so much that can be done,

and one of the things that you mentioned, Mr. Altman,

that intrigued me was you said GPT-4

can refuse harmful requests,

so you must have put some thought

into how your system, if I can call it that,

can refuse harmful requests.

What do you consider a harmful request?

You can just keep it short.

Yeah, I’ll give a few examples.

One would be about violent content.

Another would be about content that’s encouraging self-harm.

Another’s adult content.

Not that we think adult content is inherently harmful,

but there’s things that could be associated with that

that we cannot reliably enough differentiate,

so we refuse all of it.

So those are some of the more obvious

harmful kinds of information,

but in the election context, for example,

I saw a picture of former President Trump

being arrested by NYPD, and that went viral.

I don’t know.

Is that considered harmful?

I’ve seen all kinds of statements

attributed to any one of us

that could be put out there

that may not rise to your level of harmful content,

but there you have it.

So two of you said that we should have a licensing scheme.

I can’t envision or imagine right now

what kind of a licensing scheme we would be able to create

to pretty much regulate the vastness

of this game-changing tool.

So are you thinking of an FTC kind of a system,

an FCC kind of a system?

What do the two of you even envision

as a potential licensing scheme

that would provide the kind of guardrails that we need

to protect literally our country from harmful content?

To touch on the first part of what you said,

there are things besides,

you know, should this content be generated or not

that I think are also important.

So that image that you mentioned was generated.

I think it’d be a great policy to say

generated images need to be made clear

in all contexts that they were generated.

And, you know, then we still have the image out there,

but we’re at least requiring people to say

this was a generated image.

Okay, well, you don’t need an entire licensing scheme

in order to make that a reality.

Where I think the licensing scheme comes in

is not for what these models are capable of today,

because as you pointed out,

you don’t need a new licensing agency to do that.

But as we head, and, you know, this may take a long time,

I’m not sure,

as we head towards artificial general intelligence

and the impact that will have

and the power of that technology,

I think we need to treat that as seriously

as we treat other very powerful technologies.

And that’s where I personally think we need such a scheme.

I agree.

And that is why, by the time we’re talking about AGI,

we’re talking about major harms

that can occur through the use of AGI.

So Professor Marcus, I mean,

what kind of a regulatory scheme would you envision?

And we can’t just come up with something,

you know, that is gonna

take care of the issues that will arise in the future,

especially with AGI.

So what kind of a scheme would you contemplate?

Well, first, if I can rewind just a moment,

I think you really put your finger

on the central scientific issue

in terms of the challenges

in building artificial intelligence.

We don’t know how to build a system

that understands harm in the full breadth of its meaning.

So what we do right now is we gather examples

and we say, is this like the examples

that we have labeled before?

But that’s not broad enough.

And so I thought your questioning beautifully outlined

the challenge that AI itself has to face

in order to really deal with this.

We want AI itself to understand harm

and that may require new technology.

So I think that’s very important.

On this second part of your question,

the model that I tend to gravitate towards,

but I am not an expert here,

is the FDA, at least as part of it,

in terms of you have to make a safety case

and say why the benefits outweigh the harms

in order to get that license.

Probably we need elements of multiple agencies.

I’m not an expert there,

but I think that the safety case part of it

is incredibly important.

You have to be able to have external reviewers

that are scientifically qualified

look at this and say, have you addressed enough?

So I’ll just give one specific example.

AutoGPT frightens me.

That’s not something that OpenAI made,

but something that OpenAI did make

called ChatGPT plugins led a few weeks later

to someone building open-source software called AutoGPT.

And what AutoGPT does

is it allows systems to access source code,

access the internet, and so forth.

And there are a lot of potential,

let’s say, cybersecurity risks there.

There should be an external agency that says,

well, we need to be reassured

if you’re going to release this product

that there aren’t gonna be cybersecurity problems

or there are ways of addressing it.

So Professor, I am running out of time.

I just wanted to mention, Ms. Montgomery,

your model is a use model

similar to what the EU has come up with,

but the vastness of AI and the complexities involved,

I think, would require more

than looking at the use of it.

I think that based on what I’m hearing today,

don’t you think that we’re probably gonna need

to do a heck of a lot more

than to focus on what AI is being used for?

For example, you can ask AI

to come up with a funny joke or something,

but you can use the same,

you can ask the same AI tool

to generate something

that is like an election fraud kind of a situation.

So I don’t know how you will make a determination

based on where you’re going with the use model,

how to distinguish those kinds of uses of this tool.

So I think that if we’re gonna go

toward a licensing kind of a scheme,

we’re gonna need to put a lot of thought

into how we’re gonna come up with an appropriate scheme

that is going to provide the kind of future reference

that we need to put in place.

So I thank all of you for coming in

and providing further food for thought.

Thank you, Mr. Chairman.

Thanks very much, Senator Hirono.

Senator Padilla.

Thank you, Mr. Chairman.

I appreciate the flexibility

as I’ve been back and forth

between this committee

and Homeland Security Committee

where there’s a hearing going on right now

on the use of AI in government.

So it’s AI day on the Hill,

or at least in the Senate, apparently.

Now for folks watching at home,

if you never thought about AI

until the recent emergence of generative AI tools,

the developments in this space

may feel like they’ve just happened all of a sudden.

But the fact of the matter is, Mr. Chair,

is that they haven’t.

AI is not new, not for government,

not for business, not for the public.

In fact, the public uses AI all the time.

And just for folks to be able to relate,

I wanna offer the example of anybody with a smartphone:

many features on your device leverage AI,

including suggested replies, right,

when we’re text messaging or emailing,

or even auto-correct features,

including but not limited to spelling,

in our email and text applications.

So I’m frankly excited to explore

how we can facilitate positive AI innovation

that benefits society

while addressing some of the already known harms

and biases that stem from the development

and use of the tools today.

Now with language models

becoming increasingly ubiquitous,

I wanna make sure that there’s a focus

on ensuring equitable treatment

of diverse demographic groups.

My understanding is that most research

into evaluating and mitigating fairness harms

has been concentrated on the English language,

while non-English languages

have received comparatively little attention or investment.

And we’ve seen this problem before.

I’ll tell you why I raised this.

Social media companies, for example,

have not adequately invested in content moderation

tools and resources for non-English languages.

And I share this not just out of concern

for non-US-based users,

but because so many US-based users prefer a language

other than English in their communication.

So I’m deeply concerned

about repeating social media’s failure

in AI tools and applications.

Question, Mr. Altman and Ms. Montgomery,

how are OpenAI and IBM

ensuring language and cultural inclusivity

in their large language models?

And is this even an area of focus

in the development of your products?

So bias and equity in technology

is a focus of ours and always has been.

I think diversity in terms of the development of the tools,

in terms of their deployment.

So having diverse people

that are actually training those tools,

considering the downstream effects as well.

We’re also very cautious,

very aware of the fact

that we can’t just be articulating

and calling for these types of things

without having the tools and the technology

to test for bias and to apply governance

across the life cycle of AI.

So we were one of the first teams and companies

to put toolkits on the market,

deploy them, contribute them to open source

that will do things like help to address

the technical aspects

in which we help to address issues like bias.

Okay, can you speak just for a second

specifically to language inclusivity?

Yeah, I mean, language.

So we don’t have a consumer platform,

but we are very actively involved

with ensuring that the technology we help to deploy

and the large language models

that we use in helping our clients

to deploy technology is focused on

and available in many languages.

Thank you.

Mr. Altman.

We think this is really important.

One example is that we worked

with the government of Iceland,

which is a language with fewer speakers

than many of the languages

that are well-represented on the internet

to ensure that their language was included in our model.

And we’ve had many similar conversations

and I look forward to many similar partnerships

with lower resource languages

to get them into our models.

GPT-4, unlike previous models of ours,

which were good at English

and not very good at other languages,

is now pretty good at a large number of languages.

You can go pretty far down the list,

ranked by number of speakers

and still get good performance.

But for these very small languages,

we’re excited about custom partnerships

to include that language into our model run.

And the part of the question you asked about values

and making sure that cultures are included,

we’re equally focused on that,

excited to work with people

who have particular data sets

and to work to collect a representative set of values

from around the world to draw these wide bounds

of what the system can do.

I also appreciate what you said

about the benefits of these systems

and wanting to make sure we get those

to as wide of a group as possible.

I think these systems will have lots of positive impact

on a lot of people,

but in particular,

historically underrepresented groups in technology,

people who have not had as much access

to technology around the world,

this technology seems like it can be a big lift up.

Great.

And I know my question was specific

to language inclusivity,

but I’m glad there’s agreement

on the broader commitment to diversity and inclusion.

And I’ll just give a couple more reasons

why I think it’s so critical.

You know, the largest actors in this space

can afford the massive amount of data,

the computing power,

and they have the financial resources necessary

to develop complex AI systems.

But in this space,

we haven’t seen from a workforce standpoint,

the racial and gender diversity reflective

of the United States of America.

And we risk, if we’re not thoughtful about it,

contributing to the development of tools

and approaches that only exacerbate

the bias and inequities that exist in our society.

So a lot of follow-up work to do there.

In my time remaining,

I do want to ask one more question.

This committee and the public are right to pay attention

to the emergence of generative AI.

This technology has a different opportunity

and risk profile than other AI tools.

And these applications have felt very tangible

for the public due to the nature of the user interface

and the outputs that they produce.

But I don’t think we should lose sight

of the broader AI ecosystem

as you consider AI’s broader impact on society,

as well as the design of appropriate safeguards.

So Ms. Montgomery, in your testimony,

as you noted, AI is not new.

Can you highlight some of the different applications

that the public and policymakers

should also keep in mind

as we consider possible regulations?

MS. MONTGOMERY Yeah.

I mean, I think the generative AI systems

that are available today are creating new issues

that need to be studied,

new issues around the potential to generate content

that could be extremely misleading,

deceptive, and the like.

So those issues absolutely need to be studied.

But we also shouldn’t ignore the fact that AI is a tool.

It’s been around for a long time.

It has capabilities beyond just generative capabilities.

And again, that’s why I think going back to this approach

where we’re regulating AI,

where it’s touching people and society

is a really important way to address it.


Thank you.

Thank you, Mr. Chair.


Thanks, Senator Padilla.

Senator Booker is next,

but I think he’s going to defer to Senator Ossoff.


It’s because Senator Ossoff is a very big deal.

I don’t know if you…

SENATOR OSSOFF.

I have a meeting at noon,

and I’m grateful to you, Senator Booker,

for yielding your time.

You are, as always, brilliant and handsome.

And thank you to the panelists for joining us.

Thank you to the subcommittee leadership

for opening this up to all committee members.

If we’re going to contemplate a regulatory framework,

we’re going to have to define what it is that we’re regulating.

So, you know, Mr. Altman, any such law

will have to include a section

that defines the scope of regulated activities,

technologies, tools, products.

Just take a stab at it.

MR. ALTMAN.

Yeah.

Thanks for asking, Senator Ossoff.

I think it’s super important.

I think there are very different levels here,

and I think it’s important that any new approach,

any new law does not stop the innovation

from happening with smaller companies,

open-source models,

researchers that are doing work at a smaller scale.

That’s a wonderful part of this ecosystem and of America,

and we don’t want to slow that down.

There still may need to be some rules there,

but I think we could draw a line at systems

that need to be licensed in a very intense way.

The easiest way to do it,

I’m not sure if it’s the best,

but the easiest would be to talk about the amount of compute

that goes into such a model.

So we could define a threshold of compute,

and it’ll have to change.

It could go up or down,

down as we discover more efficient algorithms

that says above this amount of compute,

you are in this regime.

What I would prefer, it’s hard to do,

but I think more accurate,

is to define some capability thresholds

and say a model that can do things X, Y, and Z,

up to you all to decide.

That’s now in this licensing regime,

but models that are less capable,

you know, we don’t want to stop our open-source community.

We don’t want to stop individual researchers.

We don’t want to stop new startups;

they can proceed, you know, with a different framework.

Thank you.

As concisely as you can,

please state which capabilities you’d propose

we consider for the purposes of this definition.

I would love, rather than to do that off the cuff,

to follow up with your office with like a follow-up.

Well, perhaps opine,

understanding that you’re just responding,

and you’re not making law.

All right, in the spirit of just opining,

I think a model that can persuade,

manipulate, influence a person’s behavior,

or a person’s beliefs,

that would be a good threshold.

I think a model that could help create

novel biological agents would be a great threshold.

Things like that.

I want to talk about

the predictive capabilities of the technology,

and we’re going to have to think about

a lot of very complicated constitutional questions

that arise from it.

With massive data sets,

the integrity and accuracy with which

such technology can predict future human behaviors,

potentially pretty significant

at the individual level, correct?

I think we don’t know the answer to that for sure,

but let’s say it can at least have some impact there.

Okay, so we may be confronted by situations where,

for example, a law enforcement agency

deploying such technology

seeks some kind of judicial consent to execute a search,

or to take some other police action

on the basis of a modeled prediction

about some individual’s behavior.

But that’s very different

from the kind of evidentiary predicate

that normally police would take to a judge

in order to get a warrant.

Talk me through how you think that would work.

I’m thinking about that issue.

Yeah, I think it’s very important

that we continue to understand

that these are tools that humans use

to make human judgments,

and that we don’t take away human judgment.

I don’t think that people should be prosecuted

based off of the output of an AI system, for example.

We have no national privacy law.

Europe has rolled one out to mixed reviews.

Do you think we need one?

I think it’d be good.

What would be the qualities or purposes of such a law

that you think would make the most sense

based on your experience?

Again, this is very far out of my area of expertise.

I think there’s many, many people

that are privacy experts

that could weigh in on what a law needs.

I’d still like you to weigh in.

I think a minimum is that users

should be able to opt out

from having their data used

by companies like ours

or the social media companies.

It should be easy to delete your data.

I think those are…

But the thing that I think is important,

from my perspective running an AI company,

is that if you don’t want your data

used for training these systems,

you have the right to have it not be used.

So let’s think about

how that would be practically implemented.

I mean, as I understand it,

your tool and certainly similar tools,

one of the inputs will be scraping,

for lack of a better word,

data off of the open web, right,

as a low cost way of gathering information.

And there’s a vast amount of information

out there about all of us.

How would such a restriction

on the access or use or analysis

of such data be practically implemented?

So I was speaking about something

a little bit different,

which is the data that someone generates,

the questions they ask our system,

things that they input,

they’re training on that.

Data that’s on the public web,

that’s accessible,

even if we don’t train on that,

the models can certainly link out to it.

So that was not what I was referring to.

I think that, you know,

there’s ways to have your data

or there should be more ways

to have your data taken down

from the public web,

but certainly models

with web browsing capabilities

will be able to search the web

and link out to it.

When you think about implementing

a safety or a regulatory regime

to constrain such software

and to mitigate some risk,

is your view that the federal government

would make laws

such that certain capabilities

or functionalities themselves

are forbidden in potential?

In other words,

one cannot deploy

or execute code capable of X?

Yes.

Or is it the act itself,

X only when actually executed?

Well, I think both.

I’m a believer in defense in depth.

I think that there should be limits

on what a deployed model is capable of

and then what it actually does, too.

How are you thinking

about how kids use your product?

Well, you have to be,

I mean, you have to be 18 or up,

or have your parents’ permission

at 13 and up, to use the product.

But we understand that people

get around those safeguards all the time.

And so what we try to do

is just design a safe product.

And there are decisions that we make

that we would allow

if we knew only adults were using it,

that we just don’t allow in the product

because we know children

will use it some way or other too.

In particular,

given how much these systems

are being used in education,

we want to be aware

that that’s happening.

I think, and Senator Blumenthal

has done extensive work

investigating this,

is that companies

whose revenues depend upon volume of use,

screen time, intensity of use,

design these systems

in order to maximize

the engagement of all users,

including children,

with perverse results in many cases.

And what I would humbly advise you

is that you get way ahead of this issue,

the safety for children of your product,

or I think you’re going to find

that Senator Blumenthal,

Senator Hawley,

others on the subcommittee,

and I will look very harshly

on the deployment of technology

that harms children.

We couldn’t agree more.

I think we’re out of time,

but I’m happy to talk about that

if I can respond.

Go ahead.

It’s up to the chairman.

OK.

First of all,

I think we try to design systems

that do not maximize for engagement.

In fact, we’re so short on GPUs,

the less people use our products,

the better.

But we’re not an advertising-based model.

We’re not trying to get people

to use it more and more.

And I think that’s a different shape

than ad-supported social media.

Second, these systems

do have the capability

to influence in obvious

and in very nuanced ways.

And I think that’s particularly important

for the safety of children,

but that will impact all of us.

One of the things that we’ll do ourselves,

regulation or not,

but that I think a regulatory approach

would also be good for,

is requirements about how the values

of these systems are set

and how these systems respond to questions

that can cause influence.

So we’d love to partner with you.

Couldn’t agree more on the importance.

Thank you.

Mr. Chairman, for the record,

I just want to say that

the senator from Georgia

is also very handsome and brilliant too.

But I will allow that comment

to stand without objection.

Without objection, okay.

Mr. Chairman and ranking members.

You are now recognized.

Thank you very much.

Thank you.

It’s nice that we finally got down

to the bald guys down here at the end.

I just want to thank you both.

This has been one of the best hearings

I’ve had this Congress

and just a testimony to you two

as seeing the challenges

and the opportunities that AI presents.

So I appreciate you both.

I want to just jump in,

I think very broadly,

and then I’ll get a little more narrow.

Sam, you said very broadly,

technology has been moving like this

and a lot of people

have been talking about regulation.

And so I use the example of the automobile.

What an extraordinary piece of technology.

I mean, New York City did not know

what to do with horse manure.

They were having crises,

forming commissions,

and the automobile comes along,

ends that problem.

But at the same time,

we have tens of thousands of people

dying on highways every year.

We have emissions crises and the like.

There are multiple federal agencies,

multiple federal agencies

that were created

or are specifically focused

on regulating cars.

And so this idea

that this equally transformative technology

is coming,

and for Congress to do nothing,

or little or nothing,

which is not what anybody here

is calling for,

is obviously unacceptable.

I really appreciate Senator Welch,

who I’ve been going back

and forth with during this hearing,

and he and Senator Bennet have a bill

talking about trying to regulate

in this space.

Not doing so for social media

has been, I think, very destructive

and allowed a lot of things to go on

that are really causing a lot of harm.

And so the question is,

what kind of regulation?

You all have spoken to that

with a lot of my colleagues.

And I want to say,

Ms. Montgomery,

and I have to give full disclosure,

I’m the child of two IBM parents.

But you talked about

defining the highest risk uses.

We don’t know all of them.

We really don’t.

We can’t see where this is going.

Regulating at the point of risk.

And you sort of called not for an agency.

And I think when somebody else

asked you to specify, you said,

because you don’t want to slow things down,

we should build on what we have in place.

But you can envision

that we can try to work

on two different ways

that ultimately a specific agency,

like we have in cars,

EPA, NHTSA,

the Federal Motor Carrier

Safety Administration,

all of these things,

you can imagine something specific

that is, as Mr. Marcus points out,

a nimble agency

that could do monitoring, other things.

You can imagine the need

for something like that, correct?

Oh, absolutely, yeah.

And so just for the record then,

in addition to trying to regulate

with what we have now,

you would encourage Congress

and my colleague, Senator Welch,

to move forward in trying to figure out

the right tailored agency

to deal with what we know

and perhaps things

that might come up in the future.

I would encourage Congress

to make sure it understands

the technology,

has the skills and resources in place

to impose regulatory requirements

on the uses of the technology

and to understand emerging risks as well.

So, yes.

Mr. Marcus, there’s no way

to put this genie back in the bottle.

Globally, it’s exploding.

I appreciate your thoughts

and I shared some with my staff

about your ideas

of what the international context is,

but there’s no way

to stop this moving forward.

So, with that understanding,

just building on what Ms. Montgomery said,

what kind of encouragement

do you have as specifically as possible

to forming an agency,

to using current rules and regulations?

Can you just put some clarity

on what you’ve already stated?

Let me just insert,

there are more genies

yet to come from more bottles.

Some genies are already out,

but we don’t have machines

that can really, for example,

improve themselves.

We don’t really have machines

that have self-awareness

and we might not ever want to go there.

So, there are other genies

to be concerned about.

On to the main part of your question.

I think that we need to have

some international meetings

very quickly with people

who have expertise

in how you grow agencies,

in the history of growing agencies.

We need to do that at the federal level.

We need to do that

at the international level.

I’ll just emphasize one thing

I haven’t emphasized as much as I would like to,

which is that I think science

has to be a really important part of it.

And I’ll give an example.

We’ve talked about misinformation.

We don’t really have the tools right now

to detect and label misinformation

with nutrition labels

that we would like to.

We have to build new technologies for that.

We don’t really have tools yet

to detect a wide uptick

in cybercrime, probably.

We probably need new tools there.

We need science to probably help us

to figure out what we need to build

and also what it is

that we need to have transparency around.

Understood, understood.

Sam, just going to you

for the little bit of time I have left.

Real quick, first of all,

you’re a bit of a unicorn,

as I said when I sat down with you first.

Could you explain why non-profit,

in other words,

why you’re not looking at this purely for profit,

and you’ve even capped the returns for the VC people.

Just really quickly,

I want folks to understand that.

We started as a non-profit,

really focused on how this technology

was going to be built.

At the time, it was very

outside the Overton window

that something like AGI was even possible.

That shifted a lot.

We didn’t know at the time

how important scale was going to be,

but we did know that we wanted to build this

with humanity’s best interest at heart

and a belief that this technology could,

if it goes the way we want,

if we can do some of those things

Professor Marcus mentioned,

really deeply transform the world.

We wanted to be as much of a force

for getting to a positive.

I’m going to interrupt you.

I think that’s all good.

I hope more of that gets out on the record.

The second part of my question

I found it fascinating.

Are you ever going to,

for a revenue model,

for return on your investors,

are you ever going to do ads

or something like that?

I wouldn’t say never.

I think there may be people

that we want to offer services to

and there’s no other model that works,

but I really like having

a subscription-based model.

We have API developers pay us

and we have ChatGPT users pay us.

Okay, then can I just jump real quickly?

One of my biggest concerns

about this space

is what I’ve already seen

in the space of Web 2, Web 3

is this massive corporate concentration.

It is really terrifying to see

how few companies now control

and affect the lives of so many of us.

These companies are getting bigger

and more powerful.

I see OpenAI backed by Microsoft.

Anthropic is backed by Google.

Google has its own in-house product.

So I’m really worried about that

and I’m wondering if Sam,

you can give me a quick

acknowledgement.

Are you worried about the corporate

concentration in this space

and what effect it might have,

and the associated risks,

perhaps, with market concentration in AI?

And then Mr. Marcus,

can you answer that as well?

I think there will be many people

that develop models.

What’s happening now

in the open source community is amazing,

but there will be a relatively

small number of providers

that can make models at the true edge.

I think there are benefits

and danger to that,

because we’re talking about

the dangers with AI.

The fewer of us that you really have

to keep a careful eye on

at the absolute bleeding edge

of capabilities,

there’s benefits there.

I think there needs to be enough,

and there will be,

because there’s so much value,

that consumers have choice,

that we have different ideas.

Mr. Marcus, real quick.

There is a real risk

of a kind of technocracy

combined with oligarchy

where a small number of companies

influence people’s beliefs

through the nature of these systems.

Again, I put something in the record,

from the Wall Street Journal,

about how these systems

can subtly shape our beliefs

and that has enormous influence

on how we live our lives.

And having a small number of players

do that with data

that we don’t even know about,

that scares me.

Sam, I’m sorry.

One more thing I wanted to add.

One thing that I think

is very important

is that what these systems

get aligned to,

whose values,

what those bounds are,

that that is somehow set

by society as a whole,

by governments as a whole.

And so creating that data set,

the alignment data set,

it could be an AI constitution,

whatever it is,

that has got to come

very broadly from society.

Thank you very much, Mr. Chairman.

My time’s expired

and I guess the best for last.

Thank you, Senator Booker.

Senator Welch.

First of all, I want to thank you,

Senator Blumenthal and you,

Senator Hawley.

This has been a tremendous hearing.

Senators are noted

for their short attention spans,

but I’ve sat through this entire hearing

and enjoyed every minute of it.

You have one of our longer attention spans

in the United States.

To your great credit.

Well, we’ve had good witnesses

and it’s an incredibly important issue.

And here’s just,

all the questions I have

have been asked, really,

but here’s kind of a takeaway,

and what I think is the major question

that we’re going to have to answer

as a Congress.

Number one, you’re here

because AI is this extraordinary

new technology that everyone says

can be transformative

as much as the printing press.

Number two, it’s really unknown

what’s going to happen,

but there’s a big fear,

expressed by all of you,

about what bad actors can do

and will do

if there’s no rules of the road.

Number three,

as a member who served in the House

and now in the Senate,

I’ve come to the conclusion

that it’s impossible for Congress

to keep up with the speed of technology.

And there have been concerns expressed

about social media

and now about AI

that relate to fundamental privacy rights,

bias rights, intellectual property,

the spread of disinformation,

which in many ways for me

is the biggest threat

because that goes to the core

of our capacity for self-governing.

There’s the economic transformation,

which can be profound.

There’s safety concerns.

And I’ve come to the conclusion

that we absolutely have to have an agency.

What its scope of engagement is,

it has to be defined by us.

But I believe that

unless we have an agency

that is going to address these questions

from some level,

from social media and AI,

we really don’t have much of a defense

against the bad stuff.

And the bad stuff will come.

So last year,

I introduced on the House side,

and Senator Bennet did on the Senate side,

it was the end of the year,

the Digital Commission Act,

and we’re going to be reintroducing that this year.

And the two things that I want to ask,

one, you’ve somewhat answered

because I think two of the three of you have said

you think we do need an independent commission.

And Congress established an independent commission

when railroads were running rampant

over the interests of farmers,

when Wall Street had no rules of the road

and we had the SEC.

I think we’re at that point now.

But what the commission does

would have to be defined and circumscribed.

But also there’s always a question

about the use of regulatory authority

and the recognition

that it can be used for good.

J.D. Vance actually mentioned that

when we were considering his and Senator Brown’s bill

about railroads after that event in East Palestine,

regulation for the public health.

But there’s also a legitimate concern

about regulation getting in the way of things,

being too cumbersome

and being a negative influence.

So A, two of the three of you have said

you think we do need an agency.

What are some of the perils of an agency

that we would have to be mindful of

in order to make certain that its goals

of protecting many of those interests

I just mentioned, privacy, bias,

intellectual property, disinformation,

would be the winners and not the losers?

And I’ll start with you, Mr. Altman.

Thank you, Senator.

One, I think America has got to continue to lead.

This happened in America.

I’m very proud that it happened in America.

By the way, I think that’s right.

And that’s why I’d be much more confident

if we had our own agency as opposed to

just getting involved in international discussions.

Ultimately, you want the rules of the road.

But I think if we lead and get rules of the road

that work for us,

that is probably a more effective way to proceed.

I personally believe there’s a way to do both.

And I think it is important to have the global view on this

because this technology will impact Americans

and all of us wherever it’s developed.

But I think we want America to lead.

We want, we want…

So get to the perils issue though,

because I know…

Well, that’s one.

I mean, that is a peril,

which is you slow down American industry

in such a way that China or somebody else

makes faster progress.

A second, and I think this can happen, is that

the regulatory pressure should be on us.

It should be on Google.

It should be on the other small set of people

the most in the lead.

We don’t want to slow down smaller startups.

We don’t want to slow down open source efforts.

We still need them to comply with things.

They can still, you can still cause great harm

with a smaller model,

but leaving the room and the space for new ideas

and new companies and independent researchers

to do their work,

and not putting on a regulatory burden that,

say, a company like us could handle

but a smaller one couldn’t.

I think that’s another peril

and it’s clearly a way that regulation has gone.

Mr. Marcus or Professor Marcus.

The other obvious peril is regulatory capture.

If we make it appear as if we are doing something,

but it’s more like greenwashing

and nothing really happens.

We just keep out the little players

because we put so much burden

that only the big players can do it.

So there are also those kinds of perils.

I fully agree with everything that Mr. Altman said

and I would add that to the list.

Okay.

Ms. Montgomery.

One of the things I would add to the list

is the risk of not holding companies accountable

for the harms that they’re causing today, right?

So we talk about misinformation in electoral systems.

So, agency or no agency,

we need to hold companies responsible today

and accountable for AI that they’re deploying

that disseminates misinformation

on things like elections and wherever the risk is.

You know, a regulatory agency

would do a lot of the things

that Senator Graham was talking about.

You know, you don’t build a nuclear reactor

without getting a license.

You don’t build an AI system

without getting a license

that gets tested independently.

I think it’s a great analogy.

We need both pre-deployment and post-deployment.

Okay.

Thank you all very much.

I yield back, Mr. Chairman.

Thanks.

Thanks, Senator Welch.

Let me ask a few more questions.

You’ve all been very, very patient

and the turnout today,

which is beyond our subcommittee,

I think reflects both the value

of what you’re contributing

as well as the interest in this topic.

There are a number of subjects

that we haven’t covered at all.

One was just alluded to by Professor Marcus,

which is the monopolization danger,

the dominance of markets

that excludes new competition

and thereby inhibits

or prevents innovation and invention,

which we have seen in social media.

As well as some of the old industries,

airlines, automobiles,

and others where consolidation

has narrowed competition.

And so I think we need to focus

on kind of an old area of antitrust,

which dates back more than a century

and is still inadequate to deal with the challenges

we have right now in our economy.

And certainly we need to be mindful

of the way that rules

can enable the big guys to get bigger

and exclude innovation and competition

and responsible good guys,

such as are represented

in this industry right now.

We haven’t dealt with national security.

There are huge implications

for national security.

I will tell you,

as a member of the Armed Services Committee,

classified briefings on this issue

have abounded

about the threats that are posed

by some of our adversaries.

China has been mentioned here,

but the sources of threats to this nation

in this space are very real and urgent.

We’re not going to deal with them today,

but we do need to deal with them.

And we will hopefully in this committee.

And then on the issue of a new agency,

you know, I’ve been doing this stuff for a while.

I was attorney general of Connecticut for 20 years.

I was a federal prosecutor,

the U.S. attorney.

Most of my career has been in enforcement.

And I will tell you something,

you can create 10 new agencies,

but if you don’t give them the resources,

and I’m talking not just about dollars,

I’m talking about scientific expertise,

you guys will run circles around them.

And it isn’t just the models

or the generative AI

that will run circles around them,

but it is the scientists in your companies.

For every success story in government regulation,

you can think of five failures.

That’s true of the FDA.

It’s true of the IAEA.

It’s true of the SEC.

It’s true of the whole alphabet list

of government agencies.

And I hope our experience here will be different.

But the Pandora’s box requires

more than just the words

or the concepts of licensing and a new agency.

There’s some real hard decision-making

as Ms. Montgomery has alluded to

about how to frame the rules to fit the risks.

First, do no harm.

Make it effective.

Make it enforceable.

Make it real.

I think we need to grapple

with the hard questions here

that frankly this initial hearing

I think has raised very successfully,

but not answered.

And I thank our colleagues

who have participated

and made these very creative suggestions.

I’m very interested in enforcement.

I literally, 15 years ago I think,

advocated abolishing Section 230.

What’s old is new again.

You know, now people are talking

about abolishing Section 230.

Back then it was considered

completely unrealistic.

But enforcement really does matter.

I want to ask Mr. Altman,

because of the privacy issue,

you’ve suggested

that you have an interest

in protecting the privacy of the data

that may come to you or be available.

How do you,

what specific steps do you take

to protect privacy?

One is that we don’t train on any data

submitted to our API.

So if you’re a business customer of ours

and submit data,

we don’t train on it at all.

We do retain it for 30 days

solely for the purpose

of trust and safety enforcement.

But that’s different than training on it.

If you use ChatGPT,

you can opt out of us

training on your data.

You can also delete your conversation history

or your whole account.
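
A minimal sketch of the retention and opt-out logic described here, assuming hypothetical field names and a 30-day window; this illustrates the policy as stated in the testimony, not OpenAI’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)  # retained solely for trust-and-safety review

@dataclass
class Record:
    source: str              # "api" or "chatgpt" (hypothetical labels)
    created_at: datetime
    user_opted_out: bool = False

def eligible_for_training(record: Record) -> bool:
    """API data is never used for training; consumer data is used only
    if the user has not opted out (hypothetical policy check)."""
    if record.source == "api":
        return False
    return not record.user_opted_out

def should_purge(record: Record, now: datetime) -> bool:
    """API data is deleted once the 30-day trust-and-safety window passes;
    consumer data persists until the user deletes history or the account."""
    if record.source == "api":
        return now - record.created_at > RETENTION_WINDOW
    return False

now = datetime.now(timezone.utc)
api_record = Record("api", created_at=now - timedelta(days=45))
chat_record = Record("chatgpt", created_at=now, user_opted_out=True)
print(eligible_for_training(api_record))   # False
print(eligible_for_training(chat_record))  # False
print(should_purge(api_record, now))       # True
```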

Ms. Montgomery,

I know you don’t deal directly with consumers,

but do you take steps

to protect privacy as well?

Absolutely.

And we even filter

our large language models for content

that includes personal information

that may have been pulled

from public data sets as well.

So we apply an additional level of filtering.
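
One simple, hedged illustration of that kind of additional filtering pass over a training corpus, dropping documents that contain obvious personal identifiers; real pipelines use far more sophisticated detectors, and the patterns below are only examples.

```python
import re

# Illustrative patterns for obvious personal identifiers: email addresses,
# US-style phone numbers, and US Social Security numbers. Real PII detection
# is considerably more involved than these regexes.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),          # email
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),   # phone
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # SSN
]

def contains_pii(text: str) -> bool:
    """Return True if any illustrative PII pattern matches the text."""
    return any(pattern.search(text) for pattern in PII_PATTERNS)

def filter_corpus(documents):
    """Keep only documents with no detected personal identifiers."""
    return [doc for doc in documents if not contains_pii(doc)]

corpus = [
    "Public encyclopedia text about photosynthesis.",
    "Contact me at jane.doe@example.com or 555-123-4567.",
]
print(filter_corpus(corpus))  # only the first document survives
```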

Professor Marcus,

you made reference to self-awareness,

self-learning.

Already we’re talking about

the potential for jailbreaks.

How soon do you think

that new kind of generative AI

will be usable,

will be practical?

New AI that is self-aware and so forth?

Yes.

I mean, I have no idea on that one.

I think we don’t really understand

what self-awareness is,

and so it’s hard to put a date on it.

In terms of self-improvement,

there’s some modest self-improvement

in current systems,

but one could imagine a lot more,

and that could happen in two years.

It could happen in 20 years.

There are basic paradigms

that haven’t been invented yet.

Some of them we might want to discourage,

but it’s a bit hard

to put timelines on them.

Just going back to enforcement

for one second,

one thing that is absolutely paramount,

I think,

is far greater transparency

about what the models are

and what the data are.

That doesn’t necessarily mean

everybody in the general public

has to know exactly

what’s in one of these systems,

but I think it means

that there needs to be

some enforcement arm

that can look at these systems,

can look at the data,

can perform tests and so forth.

Let me ask you, all of you,

I think there has been a reference

to elections

and banning outputs

involving elections.

Are there other areas,

what are the other high-risk

or highest-risk areas,

where you would either ban

or establish especially strict rules?

Ms. Montgomery.

The space around misinformation,

I think, is a hugely important one.

And coming back to the points

of transparency,

knowing what content

was generated by AI

is going to be a really critical area

that we need to address.

Any others?

I think medical misinformation

is something to really worry about.

We have systems

that hallucinate things.

They’re going to hallucinate

medical advice.

Some of the advice

they’ll give is good.

Some of it’s bad.

We need really tight regulation

around that.

Same with psychiatric advice,

people using these things

as kind of ersatz therapists.

I think we need to be

very concerned about that.

I think we need to be concerned

about internet access

for these tools

when they can start making requests,

both of people and internet things.

It’s probably OK

if they just do search,

but as they do more intrusive things

on the internet,

do we want them to be able

to order equipment

or order chemistry and so forth?

So as we empower these systems more

by giving them internet access,

I think we need to be concerned about that.

And then we’ve hardly talked at all

about long-term risks.

Sam alluded to it briefly.

I don’t think that’s where

we are right now,

but as we start to approach machines

that have a larger footprint on the world

beyond just having a conversation,

we need to worry about that

and think about how we’re going

to regulate that

and monitor it and so forth.

In a sense,

we’ve been talking about bad guys

or certain bad actors

manipulating AI to do harm.

Manipulating people.

And manipulating people,

but also generative AI

can manipulate the manipulators.

It can.

I mean, there’s many layers

of manipulation that are possible,

and I think we don’t yet

really understand the consequences.

Dan Dennett just sent me

a manuscript last night

that will be in the Atlantic

in a few days

on what he calls counterfeit people.

It’s a wonderful metaphor.

These systems are almost

like counterfeit people,

and we don’t really honestly understand

what the consequence of that is.

They’re not perfectly human-like yet,

but they’re good enough

to fool a lot of the people

a lot of the time,

and that introduces lots of problems.

For example, cybercrime

and how people might try

to manipulate markets and so forth.

So it’s a serious concern.

In my opening,

I suggested three principles.

Transparency,

accountability,

and limits on use.

Would you agree that

those are a good starting point?

Ms. Montgomery?

100 percent.

And as you also mentioned,

industry shouldn’t wait for Congress.

That’s what we’re doing here at IBM.

There’s no reason that industry

should wait for Congress.

Yeah.

Professor Marcus?

I think those three

would be a great start.

I mean, there are things

like the White House AI Bill of Rights,

for example, that show,

I think, a large consensus.

The UNESCO guidelines and so forth

show a large consensus

around what it is we need,

and the real question is definitely

now how are we going to put

some teeth in it,

try to make these things

actually enforced?

So, for example,

we don’t have transparency yet.

We all know we want it,

but we’re not doing enough

to enforce it.

Mr. Altman?

I certainly agree that

those are important points.

I would add,

and Professor Marcus touched on this,

that we spent most of the time today

on current risks,

and I think that’s appropriate,

and I’m very glad we have done it.

As these systems do become

more capable,

and I’m not sure how far away that is,

but maybe not super far,

I think it’s important

that we also spend time

talking about how we’re going

to confront those challenges.

Having talked to you privately,

I know how much you care.

I agree that you care

deeply and intensely,

but also that prospect

of increased danger or risk

resulting from even more

complex and capable AI mechanisms

certainly may be closer

than a lot of people appreciate.

Let me just add for the record

that I’m sitting next to Sam.

Closer than I’ve ever sat to him,

except once before in my life,

and that his sincerity

in talking about those fears

is very apparent physically

in a way that just doesn’t communicate

on the television screen,

but communicates from here.

Thank you.

Senator Hawley.

Thank you again, Mr. Chairman,

for a great hearing.

Thanks to the witnesses.

So I’ve been keeping a little list here

of the potential downsides or harms,

risks of generative AI,

even in its current form.

Let’s just run through it.

Loss of jobs.

And this isn’t speculative.

I think your company, Ms. Montgomery,

has announced that it’s potentially

laying off 7,800 people,

a third of your non-consumer-facing workforce,

because of AI.

So loss of jobs,

invasion of privacy, personal privacy,

on a scale we’ve never before seen,

manipulation of personal behavior,

manipulation of personal opinions,

and potentially the degradation

of free elections in America.

Did I miss anything?

I mean, this is quite a list.

I noticed that an eclectic group

of about 1,000 technology and AI leaders,

everybody from Andrew Yang to Elon Musk,

recently called for a six-month moratorium

on any further AI development.

Were they right?

Do you join those calls?

Are they right to do that?

Should we pause for six months?

Your characterization is not quite correct.

I actually signed that letter.

About 27,000 people signed it.

It did not call for a ban

on all AI research,

nor on all AI,

but only on a very specific thing,

which would be systems like GPT-5.

Every other piece of research

that’s ever been done,

it was actually supportive or neutral about.

It specifically called for more AI,

specifically called for more research

on trustworthy and safe AI.

So you think that we should take a moratorium,

a six-month moratorium or more

on anything beyond ChatGPT-4?

I took the letter,

what is the famous phrase,

spiritually, not literally.

Well, I’m asking for your opinion now, though.

So do you endorse the six-month moratorium?

My opinion is that the moratorium

that we should focus on

is actually deployment

until we have good safety cases.

I don’t know that we need

to pause that particular project,

but I do think its emphasis

on focusing more on AI safety,

on trustworthy, reliable AI,

is exactly right.

Deployment means not making it available to the public?

Yeah, so my concern is about things

that are deployed at a scale of,

let’s say, 100 million people

without any external review.

I think that we should think

very carefully about doing that.

What about you, Mr. Altman?

Do you agree with that?

Would you pause any further development

for six months or longer?

So first of all,

after we finished training GPT-4,

we waited more than six months to deploy it.

We are not currently training

what will be GPT-5.

We don’t have plans to do it

in the next six months.

But I think the frame of the letter is wrong.

What matters is audits,

red teaming, safety standards

that a model needs to pass before training.

If we pause for six months,

then I’m not really sure what we do then.

Do we pause for another six?

Do we kind of come up with some rules then?

The standards that we have developed

and that we’ve used for GPT-4 deployment,

we want to build on those,

but we think that’s the right direction,

not a calendar clock pause.

There may be times,

I expect there will be times

when we find something that we don’t understand

and we really do need to take a pause,

but we don’t see that yet.

Never mind all the benefits.

You don’t see what yet?

You’re comfortable with all of the potential ramifications

of the current existing technology?

I’m sorry, we don’t see the reasons

not to train a new one, or for deploying it.

As I mentioned,

I think there’s all sorts of risky behavior,

and there’s limits we put in place.

We have to pull things back sometimes,

add new ones.

I mean, we don’t see something

that would stop us from training the next model,

where we’d be so worried

that we’d create something dangerous,

even in that process,

let alone the deployment.

What about you, Ms. Montgomery?

I think we need to use the time

to prioritize ethics and responsible technology

as opposed to pausing development.

Well, wouldn’t a pause in development

help the development of protocols

for safety standards and ethics?

I’m not sure how practical it is to pause,

but we absolutely should be prioritizing

safety protocols.

Okay, the point about practicality

leads me to this.

I’m interested in this talk about an agency

and maybe that would work.

Although, having seen how agencies

work in this government,

they usually get captured by the interests

that they’re supposed to regulate.

They usually get controlled

by the people who they’re supposed to be watching.

I mean, that’s just been our history for 100 years.

Maybe this agency would be different.

I have a little different idea.

Why don’t we just let people sue you?

Why don’t we just make you liable in court?

We can do that.

We know how to do that.

We can pass a statute.

We can create a federal right of action

that will allow private individuals

who are harmed by this technology

to get into court

and to bring evidence into court.

And it can be anybody.

I mean, you want to talk about crowdsourcing.

We’ll just open the courthouse doors.

We’ll define a broad right of action,

a private right of action,

for private citizens, maybe as class actions.

We’ll just open it up.

We’ll allow people to go into court.

We’ll allow them to present evidence.

They say that they were harmed by,

they were given medical misinformation.

They were given election misinformation, whatever.

Why not do that, Mr. Altman?

I mean, please forgive my ignorance.

Can’t people sue us?

Well, you’re not protected by Section 230,

but there’s not currently, I don’t think,

a federal right of action,

private right of action that says

that if you are harmed by generative AI technology,

we will guarantee you the ability to get into court.

Oh, well, I think there’s like a lot of other laws

where if technology harms you,

there’s standards that we could be sued under,

unless I’m really misunderstanding how things work.

If the question is, are clearer laws

about the specifics of this technology

and consumer protection is a good thing,

I would say definitely yes.

The laws that we have today were designed

long before we had artificial intelligence,

and I do not think they give us enough coverage.

The plan that you propose, I think,

as a hypothetical,

would certainly make a lot of lawyers wealthy.

I think it would be too slow to affect

a lot of the things that we care about,

and there are gaps in the law.

For example, we don’t really…

Wait, you think it’d be slower than Congress?

Yes, I do, in some ways.

Really?

Well, litigation can take a decade or more.

Oh, but the threat of litigation is a powerful tool.

I mean, how would IBM like to be sued for $100 billion?

I’m in no way asking to take litigation

off the table among the tools,

but I think, for example, if I can continue,

there are areas like copyright

where we don’t really have laws,

we don’t really have a way of thinking

about wholesale misinformation

as opposed to individual pieces of it,

where, say, a foreign actor might make

billions of pieces of misinformation,

or a local actor.

We have some laws around market manipulation

we could apply,

but we get in a lot of situations

where we don’t really know which laws apply,

there would be loopholes.

The system is really not thought through.

In fact, we don’t even know that 230

does or does not apply here, as far as I know.

I think that that’s something a lot of people

speculated about this afternoon,

but it’s not solid.

Well, we could fix that.

The question is how?

Oh, easy.

You just, it would be easy for us to say

that Section 230 doesn’t apply to generative AI.

Ms. Montgomery, I’ll give you the last word.

I think it’s an important start.

You suggested, Ms. Montgomery, a duty of care,

which I think fits the idea

of a private right of action.

No, that’s exactly right.

And also, AI is not a shield, right?

So if a company discriminates in granting credit,

for example, or in the hiring process,

by virtue of the fact that they relied

too significantly on an AI tool,

they’re responsible for that today,

regardless of whether they used a tool

or a human to make that decision.

I’m gonna turn to Senator Booker

for some final questions,

but I just wanna make a quick point here.

On the issue of the moratorium,

I think we need to be careful.

The world won’t wait.

The rest of the global scientific community

isn’t going to pause.

We have adversaries that are moving ahead

and sticking our head in the sand

is not the answer.

Safeguards and protections, yes,

but a flat stop sign,

sticking our head in the sand,

I would be very, very worried.

Without militating for any sort of pause,

I would just again emphasize

there is a difference between research,

which surely we need to do

to keep pace with our foreign rivals,

and deployment at really massive scale.

You could deploy things at the scale

of a million people or 10 million people,

but not 100 million people or a billion people.

And if there are risks,

you might find them out sooner

and be able to close the barn doors

before the horses leave rather than after.

Senator Booker.

Yeah, there will be no pause.

I mean, there’s no enforcement body

to force a pause.

It’s just not going to happen.

It’s nice to call for it

for any just reason whatsoever,

but forgive me for sounding skeptical.

Nobody’s pausing.

This thing is racing.

I would agree.

I don’t think it’s a realistic thing in the world.

The reason I personally signed the letter

was to call attention

to how serious the problems were

and to emphasize spending more of our efforts

on trustworthy and safe AI

rather than just making a bigger version

of something we already know to be unreliable.

Yeah.

So I’m a futurist.

I love getting excited about the future.

And I guess there’s a famous question.

If you couldn’t control for your race,

your gender,

or where you would land on the planet Earth,

what time in humanity

would you want to be born?

Everyone would say right now.

It’s still the best time to be alive

because of technology,

innovation and everything.

And I’m excited about what the future holds,

but the destructiveness that I’ve also seen,

as a person that’s seen

a lot of the transformative technologies

of the last 25 years,

is what really concerns me.

And one of the things,

especially with companies

that are designed to want

to keep my attention on screens,

and I’m not just talking about new media.

24-hour cable news is a great example

of people that want to keep your eyes on screens.

I have a lot of concerns

about the corporate intention.

And Sam, this is, again,

why I find your story so fascinating to me

and your values that I believe in

from our conversations so compelling to me.

But absent that,

I really want to just explore

what happens when these companies

that are already controlling

so much of our lives,

a lot has been written about the FAANG companies.

What happens when they are the ones

that are dominating this technology

as they did before?

So Professor Marcus,

does that cause you any concern,

the role that corporate power,

corporate concentration, has in this realm,

that a few companies might control this whole area?

I radically changed the shape of my own life

in the last few months.

And it was because of what happened

with Microsoft releasing Sydney.

And it didn’t go the way I thought it would.

In one way, it did,

which is I anticipated the hallucinations.

I wrote an essay,

which I have in the appendix,

What to Expect When You’re Expecting GPT-4.

And I said that it would still be

a good tool for misinformation,

that it would still have trouble

with physical reasoning,

psychological reasoning,

that it would hallucinate.

And then along came Sydney

and the initial press reports

were quite favorable.

And then there was the famous article

by Kevin Roose

in which it recommended

he get a divorce.

And I had seen Tay

and I had seen Galactica from Meta

and those had been pulled

after they had problems.

And Sydney clearly had problems.

What I would have done

had I run Microsoft,

which clearly I do not,

would have been to temporarily

withdraw it from the market.

And they didn’t.

And that was a wake-up call to me

and a reminder that even

if you have a company like OpenAI

that is a non-profit

and Sam’s values, I think,

have become clear today,

other people can buy those companies

and do what they like with them.

And maybe we have

a stable set of actors now,

but the amount of power

that these systems have

to shape our views

and our lives

is really, really significant.

And that doesn’t even

get into the risks

that someone might repurpose them

deliberately for all kinds

of bad purposes.

And so in the middle of February,

I stopped writing much

about technical issues in AI,

which is most of what

I’ve written about

for the last decade and said,

I need to work on policy.

This is frightening.

And Sam, I want to give you

an opportunity.

It’s my sort of last

question or so.

Don’t you have concerns about,

I mean, I graduated from Stanford.

I know so many of the players

in the valley from VC folks,

angel folks,

to a lot of founders of companies

that we all know.

Do you have some concern

about a few players

with extraordinary resources

and power,

power to influence Washington?

I mean, I see us,

I love, I’m a big believer

in the free market,

but the reason why

I walk into a bodega

and a Twinkie is cheaper

than an apple

or a Happy Meal costs less

than a bucket of salad

is because of the way

the government tips the scales

to pick winners and losers.

So the free market

is not what it should be

when you have

large corporate power

that can even influence

the game here.

Do you have some concerns

about that in this next era

of technological innovation?

Yeah.

I mean, again, that’s so much

of why we started OpenAI.

We have huge concerns about that.

I think it’s important

to democratize the inputs

to these systems,

the values that we’re going

to align to.

And I think it’s also important

to give people wide use

of these tools.

When we started the API strategy,

which is a big part

of how we make our systems

available for anyone to use,

there was a huge amount

of skepticism over that.

And it does come with challenges,

that’s for sure.

But we think putting this

in the hands of a lot of people

and not in the hands

of a few companies

is really quite important.

And we are seeing the resulting

innovation boom from that.

But it is absolutely true

that the number of companies

that can train

the true frontier models

is going to be small

just because of

the resources required.

And so I think there needs

to be incredible scrutiny

on us and our competitors.

I think there is a rich

and exciting industry

happening of incredibly

good research and new startups

that are not just using our models,

but creating their own.

And I think it’s important

to make sure that

whatever regulatory stuff happens,

whatever new agencies

may or may not happen,

we preserve that fire

because that’s critical.

Well, I’m a big believer

in the democratizing

potential of technology,

but I’ve seen the promise

of that fail time and time again

where people said,

oh, this is going to have

a big democratizing force.

My team works on a lot of issues

about the reinforcing

of bias through algorithms,

the failure to advertise

certain opportunities

and certain zip codes.

But you seem to be saying,

and I heard this with Web3,

that this is going to be

decentralized finance.

All these things

are going to happen.

But this seems to me

not even to offer that promise

because the people

who are designing these,

it takes so much power,

energy, resources.

Are you saying that

my dreams of technology,

further democratizing opportunity

and more are possible

within a technology

that ultimately,

I think, can be very centralized

to a few players

who already control so much?

So this point that I made

about use of the model

and building on top of it,

this is really a new platform, right?

It is definitely important

to talk about who’s going

to create the models.

I want to do that.

I also think it’s really important

to decide to whose values

we’re going to align these models.

But in terms of using the models,

the people that build

on top of the OpenAI API

do incredible things.

And it’s, you know,

people frequently comment,

like, I can’t believe

you get this much technology

for this little money.

And so what people are,

the companies people are building,

putting AI everywhere,

using our API,

which does let us

put safeguards in place,

I think that’s quite exciting.

And I think that is how

it is being democratized,

not how it’s going to be,

but how it is being

democratized right now.

There is a whole new Cambrian

explosion of new businesses,

new products, new services

happening by lots

of different companies

on top of these models.

And so I’ll say, Chairman,

as I close, that as I’ve seen,

most industries resist

even reasonable regulation,

from seatbelt laws

to, we’ve been talking a lot

recently about, rail safety.

The only way we’re going

to see the democratization

of values, I think,

and while there are

noble companies out there,

is if we create rules of the road

that enforce

certain safety measures,

like we’ve seen

with other technology.

Thank you.

Thanks, Senator Booker.

And I couldn’t agree more

that in terms of

consumer protection,

which I’ve been doing

for a while,

participation by the industry

is tremendously important

and not just rhetorically,

but in real terms,

because we have a lot of

industries that come before us

and say, oh,

we’re all in favor of rules,

but not those rules.

Those rules we don’t like.

And it’s every rule,

in fact, that they don’t like.

And I sense that

there is a willingness

to participate here

that is genuine and authentic.

I thought about asking

ChatGPT to do a new version

of Don’t Stop

Thinking About Tomorrow,

because that’s what we need

to be doing here.

And Senator Hawley

has pointed out,

Congress doesn’t always move

at the pace of technology.

And that may be a reason

why we need a new agency.

But we also need to recognize

the rest of the world

is going to be moving as well.

And you’ve been enormously helpful

in focusing us

and illuminating

some of these questions

and performed a great service

by being here today.

So thank you to every one

of our witnesses.

And I’m going to close the hearing,

leave the record open for one week.

In case anyone wants

to submit anything,

I encourage any of you

who have either manuscripts

that are going to be published

or observations

from your companies

to submit them to us.

And we look forward

to our next hearing.

This one is closed.


I’m here all week

if you have any time to talk.

Could you make any comparisons

between Sam Altman’s

testimony here

and earlier testimony

by other tech CEOs?

Well, you know,

just looking at the record,

Sam Altman is night and day

compared to other tech CEOs.

And not just in the words

and rhetoric,

but in actual actions

and his willingness

to participate

and commit to specific action.

So, you know,

some of the big tech companies

are under consent decrees,

which they have violated.

That’s a far cry

from the kind of cooperation

that Sam Altman has promised.

And given his track record,

I think it seems to be pretty sincere.

Senator, the hearing

really reflected

the range of concerns here.

You’re talking about everything from elections

to national security,

to medical, to employment.

Right.

How do you,

what kind of challenge

does that pose

in trying to craft a response?

It means that we have

to construct a system

that is broad

and flexible

without Congress

being the gatekeeper

every time there’s

some new technological advance.

So probably creating an agency

or delegating a high degree

of responsibility

for rulemaking

makes a lot of sense

in this area.

But does that make it challenging

to come up with a bill

and get consensus on a bill

when you’ve got so many different

constituencies and concerns

all swirling around this?

Well, there’s no question

that it’s complex

with a lot of constituencies.

But the recognition

that we’re not going

to solve the problem

by an excruciatingly detailed

prescriptive formula

answering every one

of these questions,

in other words,

that we’re going to have to say

to an agency,

look, you know,

do the rulemaking here.

Make the rules clear.

Fit the risks.

You’re going to have to develop

the expertise.

Congress is not going to have it.

And Congress can’t

act quickly enough.

That degree of humility

is required here.

And I think you sense

that degree of humility

in this space today.

When we’re talking about

a new regulatory agency,

are you thinking about

a regulatory agency

for all of technology

or for AI specifically?

Um, you know,

the question of a new

regulatory agency,

I think, is still to be answered,

whether it’s new or

part of an existing agency.

But certainly it should be

broader than just AI.

Probably technology, privacy,

you know, but clearly

the FTC doesn’t have

the capability right now.

So if you’re going to rely

on the FTC,

you’ve got to, in effect,

clone it within itself,

so to speak.

I think there’s a powerful

argument for an entirely

new agency that is given

the resources to really

do the job, you know,

because as I said here,

you can create an agency

just by signing a bill.

But an agency alone

is not the solution.

It’s resources and expertise

and a genuine commitment

to make it work.

Senator, how soon,

realistically,

do you think action

could take place?

This session?

Senator Schumer

is working on a framework.

The President of the United States

has said there should be

a Bill of Rights.

The European Parliament

is doing an AI Act.

You know, all kinds

of private groups

are issuing ideas

for legislation.

There is certainly

a lot of legs for a bill

and a lot of momentum.

And there’s a clear need.

You know, people are excited,

but also anxious

with good reason.

Any details

on Leader Schumer’s framework

that you can share?

Not beyond what he has said.

He should be the one

to talk about it.

But his leadership

certainly is very,

very important.

Thank you.

Why should—

I’m going to have to run.

Yeah, one more.

OK, sure.

Why should consumers

believe that, you know,

with AI, we’ll get regulation

faster than with something

like privacy,

which we’ve seen

have to be reintroduced

over and over?

Well, they should demand

better protection on privacy

as well as AI.

You know, there’s

a need for privacy legislation.

There’s a need for AI protection.

And there is a need

for the Kids Online Safety Act.

You know, social media.

So we all understand

we’ve got a bill

now that will protect kids

from a form of AI.

Those algorithms

are a form of AI

that is out there

and driving bullying,

eating disorders,

suicidal thoughts,

drug abuse.

They’re out there right now

in effect ricocheting

that toxic content

back and forth from kids

to social media.

And, you know,

there is certainly

a sense of urgency

around that issue.

Thank you.

OK, thanks.

Thank you.