StrictlyVC Download - OpenAI CEO Sam Altman on ChatGPT and Our Future with AI

Hi, I’m Connie Loizos, and this is Alex Gove, and this is StrictlyVC Download.

Hey everybody, hope you’ve had a good week. We are just beginning to dry out here in Marin

after weeks and weeks of rain. Speaking of misery, perhaps the biggest story this week is all of the

layoffs that have been taking place. On Wednesday, Microsoft sacked 10,000 employees, roughly 5% of

its workforce. And today, Google’s parent company, Alphabet, announced that it is laying off 12,000

workers, or approximately 6% of its total employees. When you also take into account

the massive number of folks Meta, Amazon, and Salesforce have cut in recent weeks,

it feels like open season on the technorati. With the disappearance of so many jobs,

all of this talk about AI and ChatGPT is good news, bad news. On the one hand,

artificial intelligence could spur the creation of the next Google, or Googles. On the other hand,

it also promises to eliminate many jobs forever. One can’t help but think about the darker side

of AI when reading an article in today’s Times entitled, How Smart Are the Robots Getting?

In an opening anecdote, author Cade Metz recounts the experience of Klaus de Graaff,

a chemist from the Netherlands who was playing the online version of Diplomacy,

a legendary strategy game in which players assume the role of a major European power

and attempt to hoodwink other players in the run-up to World War I. De Graaff formed an alliance

with a player named Franz Broseph that he believed would allow them to leapfrog the other 18 players

in the game, but Broseph ultimately betrayed de Graaff and finished on top. It should not

surprise you at this point to learn that Broseph was an AI. I was flabbergasted, de Graaff told

the Times. It seemed so genuine, so lifelike. It could read my texts, and converse with me,

and make plans that were mutually beneficial, that would allow both of us to get ahead.

It also lied to me and betrayed me like top players frequently do. As Metz points out,

the Turing test, a framework created by British scientist Alan Turing more than 70 years ago

to determine whether a person is conversing with a man or machine, is now wholly inadequate,

an anachronism in an age when AIs can write novels in the style of James Joyce at the click

of a mouse. Which brings us to Sam Altman, a serial entrepreneur and investor, the former

head of Y Combinator, and now co-founder and CEO of OpenAI, the company behind ChatGPT.

Connie sat down with Sam at our event last week in San Francisco, and Sam touched on many topics,

such as OpenAI’s deal with Microsoft, what education will look like in a world in which

students can instantly generate term papers, and more. Stay tuned for Connie’s interview with Sam.

But first, a word from our sponsor.

Did you know that incorrectly reporting your emissions, or worse yet, not understanding them

in the first place, is a form of greenwashing? It’s time to get ahead of regulations and quit

misleading customers and stakeholders. Get a handle on your carbon footprint with

Sustain Life’s complete carbon accounting and ESG platform. Learn how to increase the ROI

of your ESG program. Visit sustain.life slash StrictlyVC. That’s sustain.life slash StrictlyVC.

And now, yes, I’m so excited to introduce Sam Altman. I know you’re all so excited about him.

I’m thrilled that he’s here. Sam is a really good sport because, you know, originally I had talked

to him about having an event with him and his brothers, and life gets busy, and so we realized

a long time ago that that wasn’t going to be possible. And he very nicely, despite what’s

happening in his life right now, came anyway. So thank you, Sam. So appreciate it.

Thanks for having me.

So Sam, you also very nicely came to an event that happened to be right here three years ago.

Yeah, I remember.

What’s been happening?

OpenAI, mostly. Yeah, it’s just taken up a lot of my time, but it’s super great, and

I think we’re doing a lot of stuff we’re really proud of.

I know. I mean, I’m half kidding, obviously, but, you know, you’ve been at the center of the

startup conversation for almost 10 years, since you took over as the president of Y Combinator.

Wow, that has been 10 years, yeah.

Yeah, or something like that.

Almost.

Paul Graham once said that Sam is exceedingly good at becoming powerful, which I think is very funny.

But you are now at the center of the national conversation. I mean, to an extent that I think,

you know, sort of has taken us all collectively aback. I’m just wondering, how is that for you?

Are you like springing out of bed? Are you like waking up dreading, you know, the headlines?

I saw that you had posted…

I don’t read the news. Honestly, I think if I, and I don’t really do stuff like this much,

at all, I think if I could, like, just stop trolling on Twitter, which I really love,

for some reason, I can’t explain. I think I would just like really accomplish my goal of

being, you know, very quiet. But Twitter is fun.

Well, I saw that you posted a barfing emoji yesterday without comment. And I wondered

if that had anything to do with the headlines?

No, I had a bad morning for like extremely pedestrian reasons. Like my house flooded,

I got in a car crash. And, you know, I’m allowed to use Twitter like a regular person.

I’m sorry. Sure, absolutely. Oh, well, I mean, we’re not here to talk about Twitter,

but any thoughts about, you know, your friend is running it? What do you think? How do you

think things are going?

I think it’s gonna be fine. I remember that night where everyone was like,

get your tweets off right now. Say your goodbyes. I heard from my, like, you know, brother’s

roommate’s father’s uncle, whatever, that like, it’s all gonna melt down tonight. And it’s all over.

And, you know, it’s still here. I think it’s gonna…

Because they went to Mastodon and then they saw what the alternative was.

I think I would be making some different decisions. But also,

I have like unbelievable respect for Elon, I wouldn’t bet against him.

And I think it’s most likely going to be fine.

Great. Well, you know, there are a lot of reporters here who have been generating the

coverage that we’ve all been reading. So I was going to say, if there’s anything that

you want to confirm, or correct, I’m sure they would be delighted. But in the meantime,

you know, Strictly VC is really about investors and startup founders. And I think in the same

way that people are very interested in OpenAI because of your involvement, they are really

interested in your work as an investor. So I thought just to start, if it’s okay with you,

we could kind of talk a little bit about that aspect of your life. Starting with…

Sure. But can I correct one thing first? I don’t think people are interested in OpenAI

because of my involvement. I think OpenAI has managed to pull together the most talent-dense

researchers and engineers in the field of AI, who have done just like incredible work. And I think

what people are interested in is, like, OpenAI from a cold start a few years ago has managed to

do this thing that I think is going to be incredibly important to the next many decades,

at least of society and how we all live our lives and what we do and what’s possible. And I think

it’s going to be tremendously good. But the reason people, I think, are interested in OpenAI is

because of the work that those people do. We’ve managed to make a research lab that has been able

to deliver some cool stuff. And I think we’ll deliver a lot more cool things. So just wanted

to add that. Absolutely. But I mean, you are one of the best storytellers in Silicon Valley,

possibly the business. I think that counts for a lot. So I’m going to argue with you there. But

how many investments do you have, like all together? Active. Trying to get a sense for…

I mean, counting all the YC ones, like a few thousand. Personal ones,

I would guess 400. Wow, really?

Something. Well, I’ve been doing this for a long time. Yeah, absolutely. I mean,

and every once in a while I see like a really gigantic deal. What makes a Sam Altman deal?

I try to just do things that I’m interested in at this point. One of the things I have realized

is all of the companies I think I have added a lot of value to are the ones that I sort of like

think about in my free time on a hike or whatever, and then text the founders,

hey, I have this idea for you. And I have learned kind of like what those are and the ones that are

not. I think like every founder deserves an investor who is like going to think about them

while they’re hiking. And so I’ve tried to like hold myself to the stuff that I really love,

which tends to be, like, the hard tech: years of R&D, capital intensive, or sort of risky

research. But if it works, it really works. Well, one of the investments that I think is

so interesting, and obviously it’s very interesting to you too, is Helion.

That company, so you’ve been investing in Helion since 2015,

announced a $500 million round last year and you participated; you wrote them a $375

million check, which I think probably surprised people because there’s not that many people who

can write a $375 million check. Or not that many people who would like

do it until like one risky fusion company. Well, I mean, I wanted to ask, so you mentioned

your YC companies, and I guess in the aggregate, maybe that’s the answer. I just wondered,

which have been your most successful investments to date?

I mean, probably on a multiples basis, definitely on a multiples basis,

Stripe. And also I think that was like my second investment ever. So it seemed like a lot easier.

This was also a time when valuations were different. It was great. But probably that

one on a multiples basis. But then, yeah, I’ve been doing this for like 17 years, I guess. So

there’s been a lot of really good ones. And super grateful to have been in Silicon Valley at what

was such a magical time. Helion is more than an investment to me. That’s the other thing besides

OpenAI I spend a lot of time on. And just super excited about what’s going to happen there.

So tell me about it, because I don’t really, I mean, I don’t understand this. I saw what

happened at Lawrence Livermore last month, and I wondered what you thought of that. It’s a very

different approach. Maybe if you can sort of explain since you’re the expert.

Super happy for them. I think it’s like a very cool scientific result. As they themselves said,

I don’t think it’ll be commercially relevant. And that’s what I’m excited about. Not sort

of getting fusion to work in a lab, although that is cool, too. But building a system that

will work at a super low cost. So if you look at the previous energy transitions, if you can get

the cost of a new form of energy down, it can take over everything else in a couple of decades.

Just phenomenally fast. And then also a system where we can create enough energy,

enough reliable energy, both in terms of the machines not breaking and also not having the

intermittency or the need for storage of solar or wind or something like that. If we can create

enough for Earth in like 10 years, and I think that’s actually the hardest challenge that Helion

faces. As we sketch out what it takes operationally to do that, to replace all the

current generating capacity on Earth with fusion and to do it really fast and to think about what

it really means to build a factory that’s capable of putting out two of these machines a day for a

decade, that’s really hard, but also a super fun problem. So I’m very happy there’s a fusion race.

I think that’s great. I’m also very happy solar and batteries are getting so cheap.

But I think what will matter is who can deliver energy the cheapest and enough of it.

And again, just knowing only what I read in a superficial way, why is Helion’s approach

to your mind superior than what dozens of nations are working on in the south of France?

Yeah. Well, I think that thing probably will work, too. But to what I was just saying earlier,

I think it will be commercially irrelevant. They also think it’ll be commercially irrelevant.

The thing that is so exciting to me about Helion is that it’s a simple machine that is

at an affordable cost and a reasonable size. There’s a bunch of elements of it that are different from the

giant tokamaks. But one that is very cool is what comes out of the reaction is charged particles,

not heat. Almost all others, like a coal plant or a natural gas plant or whatever,

makes heat, drives a steam turbine. That’s what it does. Helion makes charged particles,

which push back on the magnet and drive an electrical current down a wire. There’s no

heat cycle at all. And so it can be a much simpler, much more efficient system.

And that, I think, is missed in the whole discussion of fusion, but it’s really great. It also

means we don’t have to deal with much nuclear material. We don’t ever have dangerous waste or

even a dangerous system. You could touch it pretty shortly after it turns off.

And so I know it’s building a big facility right now. Has it proven its thesis?

We will have more to share there shortly.

OK. Well, after talking to you last time a few years ago and looking back on our conversation,

where I was like, ah, Sam. Now, everything that you said was going to happen is happening. I

take you more seriously than I did, and should have.

There’s a long way to go, but thank you.

So I also wanted to ask about some of your other investments, one of which I think is really

interesting, Hermeus.

Yeah.

So Hermeus is interesting for a few reasons. Hermeus is a hypersonic jet company that wants

to go at, like, five times the speed of sound. So that’s cool. Also, a big investment from you,

I think. It was like a $100 million round that you led. But also, you were involved with a

competitor for a while. You were on the board of Boom Supersonic, whose CEO has also participated

in a Strictly VC event. So just wondering, why change horses?

Yeah. Not changing horses at all. Boom is a different technology. And it’s like a

Mach 2-ish airplane instead of a Mach 5-ish airplane. Hermeus is like a ramjet technology

that has very different characteristics. But I think there will be, like, it’s a huge market.

I think there will be multiple needs. I think these are very different approaches. And my

general approach is, if there’s an area that I think is really important, like energy, for

example, I try to fund the best fusion and the best fission company I can. They’re competitive

in the sense that they’re both trying to make cheap energy, but, like, we desperately need

more cheap energy. It’s a huge market. I think they can both work. I wouldn’t have funded two,

like, exact same approach airplane companies. But I think these are, like, very different.

And also, you know, you are somebody who thinks about sort of second-order effects.

When you think about Hermeus, I mean, first of all, I guess, is it climate-friendly? And second,

what are the impacts, I guess, good and bad of us traveling around the world much faster

than we do right now? Part of my theory is that, so it’s not climate-friendly if it’s using

current aviation fuel. I think even if something like fusion doesn’t happen, there’s a pretty

good move to sustainable aviation fuel. And, you know, at some point, we’ll be all using that

anyway. If something like fusion does work, it will so change the dynamics of what’s possible

in terms of our ability to create things like aviation fuel easily or capture carbon out of

the atmosphere, and I am confident enough in that working, that things I was much more nervous

about doing a few years ago, like creating faster airplanes that will increase the need for fuel,

because you have to burn a lot more fuel to go even a little bit faster,

I’m sort of much more open to, in terms of, like, the benefits of traveling fast.

I think human history is like a pretty good, there’s like quite good evidence that when we

are able to travel faster and more conveniently, good things happen. More commerce happens,

more innovation happens. I think people develop much more empathy. Certainly the time I have spent

like traveling around the world and seeing very different things, very different problems,

meeting very different people have been like super formative for me, and I think more of

that’s a good thing. I guess one downside is the spread of disease happens faster, but that’s…

Yes, although I like, I think blaming faster planes for the spread of disease rather than

the incompetence of governments and insufficient funding for pandemic response is sort of the

wrong way to go about it. What about Worldcoin? That was a strange one, and that one was not

received well by the media. We probably didn’t understand it. You can’t win them all. Wait,

wait, can I read the headline in Bloomberg? Please. Sam Altman wants to scan your eyeball

in exchange for cryptocurrency. What is going on with that company? Is it still,

are you still working on it and should we be scared? I am. Yeah, I think they’ll

have more to share soon. I’m like a co-founder. I’m on the board. I’m not day-to-day

involved, but I think super highly of them. I think the press cycle came from a

leak. The company was not ready to tell its story yet. That was unfortunate. I think

they’ll do it soon. And I think it will go over well. So I try to think

about, like, not any individual company, but sort of where the world is going to co-evolve. And I

think at this point, the need for systems that provide proof of personhood and the need for like

new experiments with wealth redistribution and global governance of systems like say,

an AGI is higher. So I’m very glad this experiment is running. I’d like to see many more. I think

the, like to me personally, and again, people will have different opinions and they’ll do what

they want, but like the amount of privacy you give up to like use Facebook or something versus

the amount of privacy you give up for a scan of your retina and nothing else.

Like I’d much rather have the latter. And many people won’t want that and that’s fine. But I

think more experiments about how, what we can do, what problems we can solve with technology

in this sort of new world, like great to try that stuff. I think it’s a phenomenal team. I think

they’ve got a great product. I’m excited for the launch. When is that? I don’t know exactly,

but pretty soon, like months. And you’re a co-founder, but you’re obviously not

very involved. Correct. Okay. One of the investors I just happened to notice was Sam

Bankman-Fried, who’s also, like you… I did not know that. Interesting. I really didn’t. He’s

personally an investor in the company. I mean, according to some report. No, I didn’t know that.

Are you, do you know him? We met briefly, very briefly once only. Okay. So not enough

to form an opinion. Okay. Scratch that. Hard questions, not good. I wanted to ask about

then crypto more broadly. You have a smattering of crypto investments. I don’t know how interested

you are, if this is like friends that you’ve backed. Honestly, not super interested. I’m

interested in Worldcoin, not because it’s crypto, but because I think it’s an interesting,

it’s an interesting attempt to use technology to solve something that is beyond what even

like governments in the world can effectively do. And I think if we can, if we can use technology,

any technology to experiment with global UBI instead of what one country could do, I’d be

very happy to see what happens there. But that’s not really about any particular technology. I

think crypto is just like a way that we should try implementing that. And we should, again,

try lots and lots of other things. So we don’t need a, like a new web. We don’t need new

infrastructure. We don’t need decentralization. You know, this is like one of the things that

makes me feel really old and out of touch. I’ve never quite understood that. Like, I love the

spirit of the Web3 people, but I don’t, I don’t intuitively feel why we need it.

That’s great. That’s a relief because I think a lot of people feel the same way. I want to

move on soon, but another company that I think is so interesting, and again, these are all so

different and ambitious, is Conception, which is a startup pursuing what’s called in vitro

gametogenesis, which refers to turning adult cells into gametes, sperm or egg cells. So,

I mean, is this fantasy? What makes you think that, like, artificial eggs are possible?

It’s not fantasy. You know, there was a recent paper; they really truly have this now working

in mice. Obviously, working in mice is very different than working in humans. We’ve learned

that lesson many times. But it seems to me like it should work at some point. It’s not soon.

There’s a gigantic amount of work left to do. But I think what’s happening in

biotech in general is just tremendously exciting now. It’s, you know, I think it’s like,

kind of in the shadow of AI, which has taken over like so much of the mindshare. But I think

the next five, seven years of biotech progress is going to be remarkable. And yeah, I think

if we do this again three years later, that one in particular,

we’ll go, yeah, that’s gonna work. And I think things that are even further out there,

like, you know, human life extension or whatever will also seem like, yeah, maybe that’s gonna

work. That’s phenomenal. I guess there’s obviously a lot of overlap, which is why

you think that these things are going to work. Yeah, I think a lot of these things have these

weirdly synergistic effects with each other. But even without that, I would just say,

biotech on its own has made quite a lot of progress.

There’s so much interesting stuff going on. I interviewed a founder recently,

who’s trying to extend the life of women’s ovaries, which I also think is,

is really interesting. Okay, before we move on, you have been investing for 20 years,

you were the president of Y Combinator for something like five or six years,

your successor, Geoff Ralston, just left and Garry Tan came in. You had told Tad Friend once

that when a CEO takes over a company, they have to kind of refound, I think was the word you used,

the company. Do you think Garry’s got to do anything differently?

Garry’s awesome. I think Garry will do a lot of things differently, and be wildly successful at

it. We’re in a very different world and market now. You know, I said this

once to somebody: I got to run YC at a time when, like, any idiot would have been wildly

successful. And that was great. That was a lot of fun. And then, like, the last couple of years,

I think were really hard. But now, when everything is like bombed out, I think it’s a wonderful

opportunity. And I think YC can really remake itself. And I think Gary is an incredible leader

to do that. And at the time when all of the tourists are leaving, and all of the people

who are, like, you know, starting startups or raising their seed fund or whatever,

because it was like the fashionable thing to do are leaving like, this is when the great value

gets created. This is like the best time to start a startup in many, many, many years.

So I’m very excited for him.

So moving on to AI, which is where you’ve obviously spent the bulk of your time.

Since I last saw you, when we sat here three years ago. And as I was teasing you, it’s true,

it’s true. You were telling us what was coming. And we all thought you were being sort of like,

you know, hyperbolic; I guess you were dead serious. Why do you think, I mean, people knew

that you were working on this, Google’s working on this. Why do you think that ChatGPT and DALL-E

so surprised people? I genuinely don’t know. I’ve reflected on it a lot.

We had the model for ChatGPT in the API for, like,

I don’t know, 10 months or something before we made ChatGPT. And I sort of thought someone

was going to just build it or whatever. And that, you know, enough people had played around with it.

Definitely, if you make a really good user experience on top of something, like one thing

that I very deeply believed was the way people wanted to interact with these models was via

dialogue. And I, you know, we kept telling people this, we kept trying to convince people to build

it, and people wouldn’t quite do it. So we finally said, All right, we’re just going to do it.

But yeah, I think the pieces were there for a while. I think one of the reasons DALL-E

surprised people is, if you asked, you know, five or seven years ago, the kind of ironclad wisdom

on AI was: first, it comes for physical labor, working in a factory,

then truck driving, then the sort of less demanding cognitive labor, then the really

demanding cognitive labor, like computer programming, and then and then very last of

all, or maybe never, because maybe it’s, like, some deep human special sauce, was creativity.

And of course, we can look now and say, really looks like it’s going to go exactly the opposite

direction. But I think that is not super intuitive. And so I can see why DALL-E surprised people.

But I genuinely felt somewhat confused about why ChatGPT did. You know, one of the things we

really believe is that the most responsible way to put this out in society is very gradually,

and to get people, institutions, policymakers, get them familiar with it, thinking about the

implications, feeling the technology, getting a sense for what it can do and can’t do

very early, rather than drop a super powerful AGI in the world all at once. And so

we put GPT-3 out almost three years ago. And then we put it into an API, like,

you know, I think it was maybe June of 2020, like two and a half years ago.

And the incremental update from that to ChatGPT, I felt, should have been predictable.

And I want to like do more introspection on why I was sort of miscalibrated on that.

So, you know, you had talked when you were here about releasing things in a responsible way,

I guess, what gave you the confidence to release what you have released already? I mean,

do you think we’re ready for it? Are there enough guardrails in place?

It seems like it. We do, we have like an internal process where we kind of try to break things and

study impacts. We use external auditors, we have external red teamers, we work with other labs and

have safety organizations look at stuff. There are societal changes that ChatGPT is going to

cause or is causing. There’s, I think, a big one going now about the impact of this on education,

academic integrity, all of that. But starting these now, while the stakes are still relatively low,

I think is important, rather than just putting out what the whole

industry will have in a few years with no time for society to update, which I think would be bad.

COVID did show us for better or for worse, or at least me, that society can update to like

massive changes sort of faster than I would have thought in many ways. But I still think like,

given the magnitude of the economic impact we expect here, more gradual is better. And so

putting out a very weak and imperfect system like ChatGPT, and then making it a little better this

year, a little better later this year, a little better next year, that seems much better than

the alternative.

Can you comment on whether GPT-4 is coming out in the first quarter, first half of the year?

It’ll come out at some point when we are like confident that we can do it safely and responsibly.

I think in general, we are going to release technology much more slowly than people would

like. We’re going to sit on it for much longer than people would like. And eventually people

will be like happy with our approach to this. But at the same time, I realize people want

the shiny toy and it’s frustrating. I totally get that.

I saw a visual and I don’t know if it was accurate, but it showed GPT-3.5 versus I guess

what GPT-4 is expected to be.

I saw that thing on Twitter.

Did you? Was that accurate?

Complete bullshit. No.

Okay. Because that was a little bit scary.

The GPT-4 rumor mill is like a ridiculous thing. I don’t know where it all comes from.

I don’t know why people don’t have like better things to speculate on. I get a little bit

of it. Like it’s sort of fun, but that it’s been going for like six months at this volume.

People are begging to be disappointed and they will be. The

hype is just, like… we don’t have an actual AGI. And I think that’s

sort of what is expected of us. And, you know, yeah, we’re going to disappoint those people.

Right, right. Well, I want to talk to you about how close that is. So, you know, another

thing a few years ago you said, and this was funny, I thought, we were talking about revenue.

This is before you announced your partnership with Microsoft. And you said, and I quote,

basically, we made a soft promise to investors that once we built this generally intelligent

system, we will ask it to figure out a way to generate an investment return.

We all kind of laughed at this. And you said, it sounds like an episode of Silicon Valley. I know,

but it is actually what I believe. Someone sent me that video a few weeks ago.

I mean, in some sense, that’s what’s happening. Like we built a thing deeply imperfect as it is,

we couldn’t figure out how to monetize it. You could talk to it. We put it out into the world

via an API, and other people, by playing around with it, figured out all these things to do.

So it was not quite, like, ask the thing and it tells you how to monetize.

But it hasn’t gone totally the other direction either.

Okay, but we’re not quite there yet. You obviously have figured out a way to make some revenue.

You’re licensing your models.

Not much. We’re very early.

Right. But so right now, licensing to startups. So you are early on, and people are sort of looking

at the whole of what’s happening out there. And they’re saying, you’ve got like Google,

which could potentially release things this year. You have a lot of AI upstarts nipping at your

heels. Are you worried about what you’re building being commoditized? And I guess, driving the…

I mean, to some degree, I hope it is. The future I would like to see is where access to AI is

super democratized, where there are several AGIs in the world that can kind of like help allow for

multiple viewpoints, and not have anyone get too powerful. And that, and that, like, the cost of

intelligence and energy, because it gets commoditized, trends down and down and down,

and the massive surplus there, access to the systems, eventually governance of the systems,

benefits all of us. So yeah, I sort of hope that happens. I think competition is good.

At least, you know, until we get to AGI, I like deeply believe in capitalism and

competition to offer the best service at the lowest price.

But that’s not great from a business standpoint.

We’ll be fine. We’ll be fine.

I also find it interesting that you say differing viewpoints, or these AGIs would

have differing viewpoints. I guess, how? I mean, they’re all being trained on, like,

all the data that’s available in the world. So how do we come up with differing viewpoints?

What I think is going to have to happen is society will have to agree and like set some laws

on what an AGI can never do, or what one of these systems should never do.

And one of the cool things about the path of the technology tree that we’re on, which is very

different, like before we came along, and it was sort of DeepMind having these games that were

like, you know, having agents play each other and try to deceive each other and kill each other and

all that, which I think could have gone in a bad direction. We now have these language models that

can understand language. And so we can say, hey, you know, model, here’s what we’d like you to do.

Here are the values we’d like you to align to. And we don’t have it working perfectly yet,

but it works a little and it’ll get better and better. And the world can say, all right,

here are the rules. Here’s the very broad bounds, very broad, like absolute rules of a system.

But within that, people should be allowed very different things that they want their AI to do.

And so if you want the super, like, you know, never offend, safe for work model,

you should get that. And if you want an edgier one, that, you know, is sort of like creative

and exploratory, but says some stuff you like, might not be comfortable with, or some people

might not be comfortable with, you should get that. And I think there will be many systems

in the world that have different settings of the values that they enforce. And really what I think,

and this will take longer, is that you as a user should be able to write up a few pages of here’s

what I want, here are my values, here’s how I want the AI to behave. And it reads it and

thinks about it and acts exactly how you want, because it’s like, should be your AI. And,

you know, it should be there to serve you and do the things you believe in.

So that to me is much better than one system where like one tech company says, here are the rules.

That’s really interesting. So also, when we sat down, it was right before your partnership with

Microsoft. So when you say we’re going to be okay, I wonder if…

No, nothing about that. We’re just going to build a fine business. Like even if the

competitive pressure pushes the price people will pay per token down, we’re going to do fine.

We also have this like capped-profit model. So we don’t have this incentive to just like

capture all of the infinite dollars out there anyway. And to like generate enough money for

our equity structure, like, yeah, I believe we’ll be fine.

Well, I know you’re not crazy about talking about deal making, so we won’t. But can you

talk a little bit about your partnership with Microsoft, I guess, how it’s going

and how they’re using your tech?

It’s great. They’re the only tech company out there that I think

I’d be excited to partner with this deeply. I think Satya is an amazing CEO,

but more than that, an amazing human being, and he understands, as do Kevin Scott and Mikhail, who we work with

closely as well, the stakes of what AGI means and why we need to have all the

weirdness we do in our structure and our agreement with them. And so I really feel like it’s a very

values aligned company. And there’s some things they’re very good at, like building very large

supercomputers and the infrastructure we operate on and putting the technology into products.

There’s things we’re very good at, like doing research. And it’s been a great partnership.

Can you comment on whether the reports are accurate

that it’s going to be in Bing and Office, or maybe it’s already in those things?

You are a very experienced and professional reporter.

You know I can’t comment on that. I know you know I can’t comment on that. You know

I know you know you can’t comment on that. In the spirit of the shortness of life

and our precious time here, why do you ask?

Sam.

I’m genuinely curious. Like if you ask a question, you know I’m not going to answer.

Well, I thought you might answer that one. No. Okay. I know there’s some things you don’t answer,

but I got to try.

Another company’s product plans I’m definitely not going to touch.

Well, okay. Let me ask you about yours then. Do you, is your pact with Microsoft,

does it preclude you from building software and services?

No, no. We build stuff. I mean, we just, as we talked about ChatGPT,

we have lots more cool stuff coming.

Okay. And you, and what about other partnerships other than with Microsoft also?

Yeah. Yeah. I mean, like, again, in general, we are very much here to build AGI,

and products and services are tactics in service of that. Partnerships too,

but important ones, and we really want to be useful to people.

And I think if we just build this in a lab and don’t figure out how to get it out into the world,

we’re somehow really falling short there.

Well, I wondered what you made of the fact that Google has said

to its employees, it’s just too imperfect. It could harm our reputation. We’re not ready.

I hope, I hope when they launch something anyway, you really hold them to that comment.

I’ll just leave it there.

Okay. Let me ask you this. What did you think when they suspended that seven-year veteran

of their responsible AI organization who thought that the chat bot that he was working on for them

had become sentient?

You know, I read, I remember reading a little bit about that,

but not enough that I feel like I can comment. Like I basically only remember the headline.

I guess I thought at the time he sounded like a crackpot. And now that I’ve seen ChatGPT,

I think maybe that’s why you rushed out ChatGPT, because yours is amazing. And if theirs is

also amazing… Um, you know, I haven’t seen theirs. Um, I would, I think they’re like

a competent org, so I would assume they have something good, but I, I don’t know anything about

it. Um, so we talked earlier on about education. People are scared and excited. I was just telling

you that my 13 year old came home from school a couple of days ago and his teacher was telling

him, um, not to be scared by AI, but you know, you guys are going to have to sort of, um,

develop different skill sets in your lifetime, um, that are valued. So, but there is a lot of

concern. The New York public school system just restricted access to ChatGPT, which is probably,

um, not as big a story as it sort of seemed from the headline, but what do you think about

educators? What are misconceptions about what you’re working on? How can you kind of allay

their concerns? Look, I get it. Um, I get why educators feel the way they feel about this.

And, and probably like, this is just a preview of what we’re going to see in a lot of other areas.

Um, I think this is just the new world. We’re going to try to, you know, do some things in the short term.

Um, and there may be ways we can help teachers, or anyone, be like a little bit more likely to detect

the output of a GPT-like system. But honestly, a determined person is going to get

around them. And I don’t think it’ll be something society can or should rely on long term. Um, we’re

just in a new world now, like generated text is something we all need to adapt to. And that’s

fine. Um, we adapted to, you know, calculators and changed what we tested for in math classes.

I imagine, uh, this is a more extreme version of that, no doubt. Um, but also the benefits of it

are more extreme as well. Um, you know, we hear from teachers who are understandably

very nervous about the impact of this on homework. We also hear a lot from teachers who are like,

wow, this is like an unbelievable personal tutor for each kid. Um, and I think that’s a good thing.

I have used it to

learn things myself, uh, and found it like much more compelling than other ways. I’ve learned

things in the past. Like I would much rather have ChatGPT teach me about something than go read a

textbook. Uh, so, you know, it’s like an evolving world and we’ll all adapt and I think be better

off for it. And we won’t want to go back. Well, my 15 year old son came home one day and was

using it to understand some science concepts better, which I thought was really great. Yeah.

But the same kid also was like, could I use this to write my papers? So, so I did want to ask about,

um, watermarking technologies and other techniques. So it sounds like you don’t think it’s,

no, I think, you know, we will, we will experiment with this. Other people will too.

I think it is important for the transition. Um, but I, I would caution policy,

national policy, individual schools, whatever, from relying on this. Um, because I don’t like

fundamentally, I think it’s impossible to make it perfect. You know, people will

figure out how much of the text they have to change. There’ll be other things that modify

the outputted text. Um, I think it’s good to pursue and we will. Um, but I think what’s

important to realize is like the playing field has shifted and that’s fine. Uh, there’s good

and bad. And we just figure out like, rather than try to go back, we figure out the way forward.

So even if you develop technologies, that could be sort of rendered like irrelevant

in a few months, I suspect. Yeah. Um, I also wanted to ask, uh, Anthropic, um, a rival,

I guess, uh, founded by a former… Yeah, again, like, a rival in some sense.

I think super highly of those people, uh, like very, very talented, and multiple AGIs in the

world, I think, is better than one. Sure. Uh, well, what I was going to ask,

and just for some background, um, it was founded by a former OpenAI VP of research who you,

I think, met when he was at Google. Um, but it, um, is stressing, um, an ethical layer

as a kind of distinction from other players. And I just wondered, um, if you think that systems

should adopt, uh, you know, I kind of a common code of principles and, and also whether that

should be regulated. Yeah. I mean, that was my earlier point. I think society should adopt

and should regulate what the kind of the wide bounds are, but then I think individual users

should have a huge amount of liberty to decide how they want their experience and their interaction

to go. So I think it is like a combination of society, you know, like we have, there are a few

asterisks on the free speech rules. Um, and society has decided like free speech, not quite absolute.

I think society will also decide, like, language models, not quite absolute. But there

are a lot of, there’s a lot of speech that is legal that you find distasteful that I find

distasteful that he finds distasteful. And we all probably have somewhat different definitions of

that. And I think it is very important that that is left to the responsibility of individual users

and groups, not one company, and that governments, like, govern and don’t dictate

all of the rules. Um, and there are a lot of people here who I think want to ask you questions

and I know you can’t stay forever. I wanted to ask one more question before I turn it over to the

crowd. Um, video, is that coming? It will come. I wouldn’t want to make a confident prediction

about when, um, obviously like people are interested in it. Uh, we’ll try to do it.

Other people will try to do it. Um, it could be like pretty soon it’s, it’s a, it’s a legitimate

research project, so it could be pretty soon. It could take a while. Okay. Uh, let’s see,

who would like to ask Sam a question? Oh, great. Hold on. I got to run over here.

Thank you. Hi. Fusion: when do you think there will be a commercial plant actually producing

electricity economically? Yeah, I think, I think by like 2028 pending, you know,

good fortune with regulators, we could be plugging them into the grid.

I think we’ll do it, uh, you know, a really great demo well before that, like hopefully pretty soon.

Hey Sam, thank you. Um, what is your, and I don’t know if you are allowed to answer this,

but what is your like best case scenario for AI and worst case, or more pointedly,

what would you like to see and what would you not like to see out of AI in the future?

I mean, I, I think the best case is like so unbelievably good that it’s like hard to,

I, I think it’s like hard for me to even imagine, like, I can sort of, I can sort of think about

what it’s like when we make more progress of discovering new, new knowledge with these systems

than humanity has done so far, but like in a year instead of 70,000 years. Um, I can sort of imagine what

it’s like when we kind of like launch probes out to the whole universe and find out really, you

know, everything going on out there. I can sort of imagine what it’s like when we have systems that, you know,

help us resolve deadlocks and improve all aspects of reality and, uh, kind of like, let us all live

our best lives. But I can’t quite like, I think the, the, the good case is just so unbelievably

good that you sound like a really crazy person to start talking about it. Um, and the bad case,

and I think this is like important to say, is like lights out for all of us.

Um, I’m more worried about like an accidental misuse case in the short term where, you know,

someone gets a super powerful, like, it’s not like the AI wakes up and decides to be evil.

And I think all of the sort of traditional AI safety thinkers reveal a lot more about

themselves than they mean to when they talk about what they think the AGI is going to be like,

but, but I can see the accidental misuse case clearly. And that’s, that’s super bad. Um,

so I think like, uh, yeah, I think it’s like impossible to overstate the importance of AI

safety and alignment work. Um, I would like to see much, much more happening, but I think it’s

more subtle than most people think. And that, you know, you hear a lot of people talk about

AI capabilities and AI alignment as like orthogonal vectors. And, you know, you’re

bad if you’re a capabilities researcher and good if you’re an alignment researcher.

It actually sounds very reasonable. Um, but they’re almost the same thing. Like

deep learning is just going to like solve all of these problems. And so far that’s what the

progress has been. And progress on capabilities is also what has let us make the systems safer

and vice versa, surprisingly. Um, and so I think none of these sort of sound-bite,

easy answers work. Alfred Lin told me to ask you, and I was going to ask anyway,

how far away do you think AGI is? He said, Sam will probably tell you it’s sooner than you thought.

The closer we get, the harder time I have answering, because I think that it’s going

to be much blurrier, uh, and much more of a gradual transition than, than people think.

If you, if you imagine like a two by two matrix of sort of short timelines until the AGI takeoff

era begins and long timelines until it begins, and then a slow takeoff or a fast takeoff,

the world I think we’re heading to and the safest world, the one I most hope for

is the short timeline, slow takeoff. But I think people are going to have hugely different

opinions about when in there you kind of like declare victory on the AGI thing.

Thank you, Sam. Um, 30 seconds: when you spoke a few years ago, I was highly skeptical.

Um, and so you’ve put me on notice; ChatGPT felt like Netscape when I was a teenager.

Thank you very much. The question I have is less science and technology and more

geography, which is, what’s your take on San Francisco and Silicon Valley? Cause you referenced

it earlier in like, man, I love this city so much. Um, and it is so sad what like the current state

is. I do think it’s like somewhat come back to life after the pandemic, but yeah, like when you

walk down market street at night, or like if I try to walk home and walk through the Tenderloin

like late, it’s not great. And I think it’s like a real shame, uh, that we put up with treating

people like this. And we continue to elect leaders who sort of don’t think this is okay, but also

don’t fix the problem. Um, I totally get how hard this is. I totally get how complicated this is.

I also, I think, unlike other tech people, will say that tech has some responsibility for it. Um,

but other cities managed to do better than this. Like it is a solvable problem and to entirely

blame tech companies who don’t get to run the city, uh, that doesn’t feel good either. Uh,

and I wish there could be a more collaborative partnership instead of all of the finger

pointing. Um, I am super long in-person work. I am super long, the Bay area. I’m super long,

California. I think we are probably going through some trying times. Um, but I am hopeful we like

come out of the fire better for it. Can you talk a little bit more about, um, what you expected the

reaction to ChatGPT to be, and also would you prefer that there wasn’t so much hype? Like,

is that potentially detrimental to the company? Yeah. Um, I would have expected maybe like one

order of magnitude less of everything, like one order of magnitude less of, um, hype,

one order of magnitude less of users. Um, yeah, I would have expected sort of one order of

magnitude less on everything. Um, and I think less hype is probably better just as like a general

rule. One of the sort of strange things about these technologies is they are impressive but not

robust. And so when you use them in a first demo, you kind of have this like very impressive, like,

wow, this is like incredible and ready to go. Um, you use them a hundred times, you see the

weaknesses. And so I think people can get a much sort of a false impression of how good they are.

However, that’s all going to get better. The critics who point these problems out and say,

well, this is why it’s all like, you know, fake news or whatever,

are equally wrong. And so I think it’s good in the sense that people are updating to this,

thinking hard about it and all of that. Can I ask, how do you use it? You know,

when we were emailing back and forth, I thought, am I talking to Sam? Um,

I have occasionally used it to summarize super long emails, but I’ve never used it to write one.

Um, actually, summarizing is something I use it for a lot. It’s super good at that.

I use it for translation. I use it to like learn things.

So two quick questions. When people talk about, um, your technologies being the end of Google,

how do you unpack or how do you understand that? And then also your thoughts on UBI.

Yeah. I think whenever someone like talks about a technology being the end of some other giant

companies, it’s usually wrong. Like I think people forget they get to make a counter move

here and they’re like pretty smart, pretty competent, but I do think it means there is

a change for search that will probably come at some point, but not as dramatically as people

think in the short term. Like, my guess is that people are going to be using Google

the same way people are using Google now for quite some time. And also, Google, for whatever

this whole like code red thing is, it’s probably not going to change that dramatically would be my

guess. Um, UBI, I think UBI is good and important, but, uh, very far from sufficient. I think it is

like a little part of the solution. Uh, I think it’s great. Like, I think we should, as AGI

participates more and more in the economy, I think we should distribute wealth and resources much

more than we have. Um, and that’ll be important over time, but I don’t, I don’t think that’s

going to like solve the problem. I don’t think that’s going to give people meaning. I don’t

think it means people are going to like entirely stop trying to create and do new things and

whatever else. So I sort of would consider it like an enabling technology, but not like a plan

for society. Is that why your company, though, is a capped-profit company? I mean, are you planning

to take the proceeds that you’re presuming you’re going to make someday and

give them back to society? I mean, whether we do that just by like

saying here’s cash for everyone, totally possible. Or whether we do that by saying like here is,

you know, we’re going to like invest all of this in a nonprofit that does a bunch of science

because scientific progress is how we all make progress. I’m unsure, but yeah, we would like

to operate for, for the good of society. And I think I’m like a big believer in sort of design

a custom structure for whatever you’re trying to do. And I think AGI is just like really different.

And so the cap will turn out to be super important. Can I ask selfishly? So if UBI is only

part of the solution and I’ve got teenagers and we all have jobs, what should we be preparing for?

You know, as I said, my, my son’s teacher was trying to prepare them, but of course you would

maybe be better positioned to have some ideas on this. Resilience, adaptability, ability to like

learn new things quickly, creativity, although it’ll be aided creativity and aided learning

things quickly. I mean, for sure, like in some sense, before Google came along,

there were a bunch of things that we learned; like, memorizing facts was really important.

And that changed. And now I think learning will change again and we’ll probably adapt

faster than we think. Yeah. So, um, okay. I think we have to let Sam go, but how about like

two more questions. Thank you. Thank you so much. Uh, the future workplace for, um,

tech workers, you think it’ll be out of the home, out of the office? What percent in each?

I have like, I look, I think people are going to do different things. I don’t think there’ll

be one answer. And I think people will sort: the people who want fully in person will do that.

People who want fully remote will do that. I think a lot of people will do like hybrid.

I have always been a fan of going to the office a few days a week and working at home a day or two a

week. Um, being a YC partner was very much that way. OpenAI was

that way before the pandemic; OpenAI is that way now.

I personally am skeptical that fully remote is going to be the thing that everyone does.

And I think even the people who thought it was a really good idea are now sort of saying like,

Hmm, the next, like 40 years sitting in my bedroom, looking at a computer screen on zoom,

do I really want that? Am I really sure? There’s some skepticism, though there are some people who do.

What I think has been the hardest is companies who are the wrong kind of hybrid, where it’s not like,

you know, these four days, everyone’s in these two days, everyone’s home, whatever, but it’s, uh,

come in if you want, be at home if you want. And then you have like half the people in the

meeting in this little box on the screen and half the people in person. It’s clearly a way better

experience in person. The people that are not there do get sort of left out. That,

that I think is the hardest, but it’s all gonna like continue to evolve and people will sort into

what they want. I would bet that many of the most important companies of this decade are still

pretty heavily in person. Do you work for CBRE? No? Okay. So, um, one of you guys want to

wrestle for it? He has a white t-shirt, so let’s do him. Great, great. Sure.

So given your experience with OpenAI and the safety conversation around it,

how do you think about safety and other AI fields like autonomous vehicles?

Yeah, I think there’s like a bunch of safety issues for

any new technology and particularly any narrow vertical of AI.

And we have kind of learned a lot in the past few decades, or more than a few, like the past

seven or eight decades, of technological progress about how to do really good safety engineering

and safety systems management. And a lot of that, about how we learn to build safe systems

and safe processes, will translate, imperfectly. There’ll be mistakes, but we know how to do that.

I think the AGI safety stuff is really different personally and worthy of study as its own

category. And that’s because the stakes are so high and the irreversible situations

are so easy to imagine. We do need to somehow treat that differently and figure out a new set

of safety processes and standards. So you said like right now is one of the

best times to start a company. I found that counterintuitive. Maybe you can explain why

and what companies should I tell my friends to go start? Cause I have actually pretty few

smart friends who are looking to do something. The only thing that I think is like easy in

a mega bubble is capital. So it was a great time to raise capital for a startup from, say, 2015 to,

when did that go wrong, the end of 2021, but everything else was pretty hard. It was like

pretty hard to hire people. It was like pretty hard to like rise above the noise. It was pretty

hard to do something that mattered without having like thousands of competitors right away.

And a lot of those startups that looked like they were doing well, because of the same reason

capital was cheap, found that actually they were not able to like build as much enduring value as

they hoped. Now raising capital is like tough. It’s still sort of reasonable, I think at like

seed stages, but it certainly seems much tougher at later stages. But all the other stuff is much

easier. You actually can concentrate talent. People are not constantly poached. You can rise

above the noise threshold, whether that’s with like customers, the press, you know, users,

whatever. I would much rather be, have a hard time raising capital, but an easier time doing

everything else than the other way around. So that’s why I think it’s a better time.

In terms of like what I would do now, I would probably like go do AI for some vertical.

Well, it brought to mind this story in The Information about Jasper that I thought was interesting.

It’s a customer of yours, a copywriting company relying on your, you know, AI language models.

And now ChatGPT is so good that it’s got to kind of like find a new reason for being, I think.

Is that a danger for many startups? I guess, which ones if so?

I heard about this article, but I didn’t read it. But if I understand that it was basically like

the company was saying like, you know, we had built this thing on GPT-3 and now ChatGPT is

available for free and that’s causing us problems. Is that right? I think that’s probably the wrong way

to make an AI startup. Well, let me say, I think the best thing you can do to make an AI

startup is the same way that like a lot of other companies differentiate, which is to build like

deep relationships with customers, a product they love, and some sort of moat, which doesn’t have to

be technology, a network effect or whatever. And I think a lot of companies in the AI space

are doing exactly that. And you’ve got to plan that OpenAI’s models are going to get better

and better. We view ourselves more as a platform company, but we will do some, you know, like a

business strategy I’ve always really respected is like the platform plus killer app together.

And so we will probably do something to help show people what we think is possible. But I think you

want to build a startup. And I think Jasper is going to do this or already is doing this that

has like deep value on top of the fundamental language model. And we are a piece of enabling

technology. Is there anybody, knowing what you know or what you think you see coming, who should

like basically drop what they’re doing right now because they’re cooked?

Like, I’m sure if I had more time to think about it, I could come up with an answer. But

in general, I think there’s going to be way, way more new value created, like this is going to be

a golden few years, than people who should just, like, stop what they’re doing. I would not

ignore it. I think you got to like embrace it big time. But I think the amount of value that’s about

to get created, we have not seen, like, since the launch of the iPhone App Store, something like

that. It’s incredible. This is going to be an amazing year, I’m sure. I’m so thankful; thank

you for having me. My gosh, Sam, thank you. For sure.

Thank you.

That’s it. Thanks for listening, everybody. And special thanks to sustain.life. Make sure to

check out their site at sustain.life slash StrictlyVC. Have a great weekend. And we’ll

see you back here next week.
