Hi, I’m Connie Loizos, and this is Alex Gove, and this is StrictlyVC Download.
Hi, happy Friday, everyone. I was at a friend’s house for lunch earlier with a bunch of very,
very joyous Argentinians after Lionel Messi and company survived one of the wildest games of the
World Cup in a penalty shootout that I hope some of you caught. It was really amazing and exciting
to watch. But now, of course, it is getting late. So we are going to head straight into a couple of
quick news hits, including a segment about OpenAI, the AI company headed by Sam Altman,
and why an economist and VC who I’ve known for years, Paul Kedrosky, is very upset with the
company. For what it’s worth, I’m excited to be sitting down with Altman next month in San
Francisco to talk about the company at our first StrictlyVC event of the year. We last sat down
together three years ago, and candidly, the interview has not aged well. Much of what Altman
had to say at the time was just unimaginable to me, and I don’t think I took him entirely
seriously. And now, of course, much of what he said was around the corner is, in fact,
seeping into our society very much in real time. So we’re going to be talking about the
implications of the tech that his company has released to the general public and where we
go from here. But like I said, first, some other news.
FTX founder Sam Bankman-Fried has been talking the ears off of journalists.
But up until now, he has avoided speaking to congressional investigators. That has changed.
Today, Bankman-Fried tweeted that he would appear before the House Committee on Financial Services
this coming Tuesday. SBF will presumably be under oath, so it’s probably a good thing that he’s
hired a high-profile lawyer to represent him. This same advocate recently represented
Ghislaine Maxwell, Jeffrey Epstein’s partner in pedophilia.
Still, Bankman-Fried will have a lot of splainin’ to do. The former crypto wunderkind will inevitably
be asked whether he transferred customer deposits to Alameda Research, a hedge fund that he
controlled. FTX’s new CEO, John J. Ray III, who has been hired to clean up the mess at the company,
has stated that SBF used, quote, special software to conceal the misuse of customer funds.
Bankman-Fried has denied this allegation. It would also not be unreasonable to think
that SBF will face questioning about the $1 billion that he borrowed from Alameda,
as well as the $2.3 billion that an Alameda affiliate loaned Paper Bird,
another company that he controlled. And he will surely face questioning about the money
he put to work with both the Democratic and Republican parties, as well as news organizations,
such as the crypto news publication The Block, which, according to a report in today’s Axios,
received some $43 million in loans from Alameda. Was this all part of a cynical and manipulative
game that the former billionaire was playing, as he intimated to a Vox reporter in a series
of late-night DMs? For what it’s worth, if the House Committee is open to suggestions,
I would like to know more about the Signal conversations that SBF had with Binance CEO
Changpeng Zhao in the days leading up to FTX’s bankruptcy, and what they reveal about
the state of the cryptocurrency market. In a fascinating article in today’s Times,
reporters David Yaffe-Bellany and Emily Flitter write about one such conversation, in which Zhao
accuses Bankman-Fried of trying to tank Tether, a USD stablecoin, with a $250,000 trade. SBF
retorts that there is no way such a small trade could ever undermine Tether, but the fact that
Zhao was so concerned about Tether’s stability does not speak well for stablecoins or the crypto
industry in general. So, we will be watching on Tuesday and looking to see in particular if
committee members will be able to poke any holes in SBF’s defense, namely that he was a terrible
CEO and lost sight of FTX’s risk exposure. One thing is certain, though: whatever SBF’s
lawyer is getting paid, it’s not enough. Yesterday, the Federal Trade Commission
announced that it would seek to block Microsoft’s $69 billion acquisition of Activision, the video
game behemoth behind Call of Duty, a franchise that has generated almost $30 billion in revenue
from game sales and microtransactions since 2003. It’s a curious development for two very
different reasons. First, according to a story in today’s Wall Street Journal, Microsoft has
been engaged in a charm offensive with every regulator that will sit down with it, an effort
led by its vice chairman and president, Brad Smith, who joined the company back in 1993.
To allay regulators’ concerns, Microsoft has made many promises, the most important of which
is that Call of Duty will be available on other platforms, such as Sony PlayStation.
Perhaps a more important reason the FTC’s move came as a surprise is that courts have been
skeptical of challenges to so-called vertical mergers, or mergers in which two businesses
don’t compete directly. Although the landmark Paramount case famously barred vertical
integration in the movie business, in which studios tried to control the production,
distribution, and exhibition of feature films, the justice system has looked the other way when
it comes to other vertical mergers. One famous, albeit dated, example is the AT&T-Time Warner
merger, which was ultimately allowed. Given how many concessions Microsoft has made to help this
deal go through, Daniel Francis, an assistant professor of law at New York University and a
former FTC official, thinks the FTC may have overplayed its hand. Courts have been surprisingly
solicitous about the kinds of things that Microsoft has offered here, he told the New York Times.
The Times also notes that the FTC’s leader, Lina Khan, has been very aggressive in pursuing novel
or little-used arguments to challenge deals. Still, any parent of a 13-year-old StrictlyVC
intern who plays NBA 2K can tell you that Xbox and PlayStation don’t always play well together,
especially when it comes to fast-twitch games like Call of Duty. No matter how much Microsoft
protests that Call of Duty will not tilt the scales in its favor, we can definitely see why
regulators and Sony are so concerned. Up next, Connie’s interview with economist,
VC, and MIT fellow, Paul Kedrosky, about OpenAI. But first, a word from our sponsor.
Are you a military veteran looking to get your startup business up and running with the support
of the best of the best advisors and investors? The Military Veteran Startup Conference brings
together hundreds of founders, operators, and venture capitalists for two days on February 2nd
and 3rd in San Francisco. The event will have panels on military veteran-led unicorn companies,
women veteran founders, and dual-use founders. There’s also a panel on dual-use venture
capitalists with speakers from In-Q-Tel, Lux Capital, and Founders Fund. You can learn more
and register today at MilVetStartups.com, that’s M-I-L-V-E-T Startups.com, and use the code
STRICTLYVC15 for 15% off your tickets. Military veterans make fantastic founders and investors.
Come build your network at the densest gathering of military veteran talent in the early stage
ecosystem. The event is open to everyone, veterans, veteran spouses, and civilians.
Once again, visit MilVetStartups.com, that’s M-I-L-V-E-T Startups.com, to learn more and
register today using the code STRICTLYVC15 for 15% off your tickets.
More than three years ago, I sat down with Sam Altman for a small event in San Francisco
soon after he left his role as the president of Y Combinator to become the CEO of OpenAI,
the AI company he co-founded in 2015 with Elon Musk and others. At the time, Altman described
OpenAI’s potential in language that sounded outlandish to some. Here is part of our conversation
about OpenAI’s revenue model, or lack thereof, at the time.
So I’m just wondering, like, eventually, is the idea to
kind of like license technologies? Will you have customers? You’re going to be
customizing algorithms for them? Or how is it going to work?
You know, the honest answer is we have no idea. We, we have never made any revenue.
We have no current plans to make revenue. We have no idea how we may one day generate revenue.
We have made a soft promise to investors that once we’ve built this sort of generally
intelligent system, basically, we will ask it to figure out a way to generate an investment
return for you. It sounds like an episode of Silicon Valley. It really does. I get it.
You can laugh. It’s all right. But it is what I actually believe is going to happen.
And so it went. Altman said, for example, that the opportunity with artificial general
intelligence, which is machine intelligence that can solve problems as well as a human,
is so great that if OpenAI managed to crack it, the outfit could, quote,
maybe capture the light cone of all future value in the universe. He said the company was,
quote, going to have to not release research because it was so powerful.
Asked if OpenAI was guilty of fear mongering, Altman talked about the dangers of not thinking
through, quote, societal consequences when, quote, you’re building something on an exponential curve.
The audience continued to laugh at various points of the conversation,
not certain how seriously to take Altman. No one is laughing now, however. While
machines are not yet as intelligent as people, the tech that OpenAI has since released
is taking many aback, including Musk, with some critics fearful that it could be our undoing,
especially with more sophisticated tech reportedly coming soon.
Indeed, though heavy users insist it’s not so smart, the ChatGPT model that OpenAI made
available to the general public last week is so capable of answering questions like a person
that professionals across a range of industries are trying to process the implications.
Educators, for example, wonder how they’ll be able to distinguish original writing from the
algorithmically generated essays they are bound to receive, essays that can evade anti-plagiarism
software. Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist,
and MIT fellow who calls himself a, quote, frustrated normal with a penchant for thinking
about risks and unintended consequences in complex systems. But Kedrosky is among those
who are suddenly worried about our collective future, tweeting yesterday, shame on OpenAI
for launching this pocket nuclear bomb without restrictions into an unprepared society.
Kedrosky continued, I obviously feel ChatGPT and its ilk should be withdrawn immediately,
and, if ever reintroduced, only with tight restrictions.
We talked with Kedrosky yesterday about some of his concerns and why he thinks OpenAI is driving
what he believes is the, quote, most disruptive change the U.S. economy has seen in 100 years.
Here’s Kedrosky in his own words.
I had looked at it early on, and I’ve played with these chat UIs, conversational
user interfaces, whatever you want to call them, into services like this in the past,
and obviously this is a huge leap beyond. And what troubled me here in particular,
though, is I want to have better words. I’ll say it’s the casual brutality of it,
that this has, as you implied, massive consequences for a host of different activities,
not just the obvious ones like, I don’t know, high school essay writing.
But across pretty much any domain where there’s a grammar, an organized way of expressing yourself,
so that could be software engineering, that can be high school essays,
that can be legal documents, all of them easily eaten by this sort of voracious beast
and spit back out again without compensation to whatever was used for training it. So there’s
an input side of this, much like the problem we have in generative art, where the art was trained
on all of these images that were deemed to be in the public domain, and now all kinds of artists
are being put out of business because generative artificial intelligence products are generating
spectacularly impressive products that were all trained on other people’s work.
Kedrosky went on to say that people keep wanting to compare OpenAI’s generative tech
to disruptive tech we’ve seen before, but that there is no comparison.
To make an analogy here, some might say, well, did you feel the same way when automation arrived
in auto plants and autoworkers were put out of work? Because it’s a broader phenomenon,
but you could make that analogy. And this is, of course, very different,
because the difference with respect to these specific learning technologies is they’re
self-catalyzing. They’re learning from the requests. So robots in a manufacturing plant,
while disruptive and have incredible economic consequences for the people working there,
didn’t then turn around and start absorbing everything going inside the factory and then
moving across sector by sector, whereas that’s not only exactly what we can expect,
but what we should expect.
I asked Kedrosky how we rein in the type of tech that OpenAI is at the forefront of rolling out.
While Altman and Musk have themselves talked at length about the need for
safe artificial intelligence and guardrails, currently there are essentially none.
He said he could imagine a few things happening on this front.
So you’re going to see things like, for example, in the same way that the FTC demanded years ago
that people running blogs with affiliate links say, we have affiliate links,
I make money from this stuff, I think people are going to be forced to make disclosures that we
wrote none of this. This is all machine generated. So you’re going to see at a trivial level,
you’re going to see that kind of thing go on. I think you’re also going to see
new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement
in the context of training machine learning algorithms. There’s going to be a
broader DMCA issue here with respect to the service. And I think there’s the potential for
like a 10x-Napster-sized lawsuit and settlement eventually with respect to the consequences of
these services, which will probably take too long and not help enough people. But, you know,
I don’t see how we don’t end up in a similar place with respect to these technologies.
Because Kedrosky is a fellow at MIT, before I let him go, I asked him what some of the
other academics at the school think of OpenAI and its ChatGPT.
Andy McAfee and his group over there, they’re more sanguine. There’s a more orthodox view
out there that anytime we see disruption, other opportunities get created. People are mobile,
they move from place to place and from occupation to occupation. And we shouldn’t be so
hidebound that we think that this particular evolution of technology is the one around
which we can’t mutate and migrate. And I think that’s broadly true. But the lesson of the last
five years in particular has been these changes can take a long time. Free trade, for example,
is one of those incredibly disruptive economy-wide experiences. And we all told ourselves as
economists looking at this, that the economy will adapt and people in general will benefit from
lower prices. But what no one anticipated was that someone would organize all the angry people
and elect Donald Trump. So this idea that we can anticipate and predict how long that will
take, what the consequences will be in the middle, and what gets burned down in the process,
it’s, again, hubristic. It’s wrong. We don’t know.
Thanks for listening, everybody. And special thanks to the Military Veteran Startup Conference
hosted by Context Ventures on February 2nd and 3rd. Remember to check out MilVetStartups.com.
Have a great weekend, and we’ll see you here next week.