BILL GATES: It was stunning, it was mind blowing. After the biology question I had them type in, “What do you say to a father with a sick child?” And it gave this very careful, excellent answer that was perhaps better than any of us in the room might have given. And so, it was like, wow, what is it – what is the scope of this thing? Because this is way better.
KEVIN SCOTT: Hi, everyone. Welcome to Behind the Tech. I’m your host, Kevin Scott, Chief Technology Officer for Microsoft.
In this podcast, we’re going to get behind the tech. We’ll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did. So, join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what’s happening today. Stick around.
KEVIN SCOTT: Hi, welcome to Behind the Tech. We have a great episode for you today with a really special guest, Bill Gates, who needs no introduction, given the unbelievable impact that he’s had on the world of technology and the world at large over the past several decades. He has been working very closely with the teams at Microsoft and OpenAI over the past handful of months, helping us think through what the amazing revolution that we’re experiencing right now in AI means for OpenAI, Microsoft, all of our stakeholders, and for the world at large.
I’ve learned so much from the conversations that I’ve had with Bill over these past months that I thought it might be a great thing to share just a tiny little glimpse of those conversations with all of you listeners today. So with that, let’s introduce Bill and get a great conversation started.
KEVIN SCOTT: Thank you so much for doing the show today, and I just wanted to jump right in with maybe one of the more interesting things that’s happened in the past few years in technology, which is GPT-4 and ChatGPT and the work that we’ve been doing together at Microsoft with OpenAI.
By the time this podcast airs, OpenAI will have made their announcement to the world about GPT-4, but I want to sort of set the stage. The unveiling of the first instance of GPT-4 outside of OpenAI was actually to you last August at a dinner that you hosted with Reid and Sam Altman and Greg Brockman and Satya and a whole bunch of other folks.
The OpenAI folks had been very anxious about showing you this because your bar for AI had been really high, and I think the push that you had made on all of us, for what an acceptable level of ambition would look like, has actually been really helpful.
And so, I wanted to ask you, what was that dinner like for you? What were your impressions like? What had you been thinking before, and like what, if anything, changed in your mind after you had seen GPT-4?
BILL GATES: Yeah, so AI has always been the holy grail of computer science. And when I was young, Stanford Research had Shakey the Robot that was trying to pick things up. And there were various logic systems that people were working on. So the dream was always some sort of reasoning capability.
Overall progress in AI until machine learning came along was pretty modest. Even speech recognition was just barely reasonable. And so, we had that gigantic acceleration with machine learning, particularly in sort of sensory things, recognizing speech, recognizing pictures. And it was phenomenal, and it just kept getting better, and scale was part of that. But we were still missing anything that had to do with complex logic, with being able to, say, read a text and do what a human does, which is, quote, “understand” what’s in that text.
And so, as Microsoft was doing more with OpenAI, I had a chance to go see them myself independently a number of times, and they were doing a lot of text generation. They had a little robot arm.
The early text generation still didn’t seem to have a broad understanding. It could generate a sentence saying, “Joe is in Chicago,” and then two sentences later say, “Joe’s in Seattle,” which in its local probabilistic sense was a good sentence, but a human, with a broad understanding of the world from both experience and reading, understands that can’t be.
And so, as they were enthusing about GPT-3 and even the early versions of GPT-4, I said to them, hey, if you can pass an advanced placement biology exam where you take a question that’s not part of the training set, or a bunch of them, and give fully reasoned answers, knowing that a biology textbook is one of many things that’s in that training corpus, then you will really get my attention because that would be a heck of a milestone. And so, please work on that.
I thought that they’d go away for two or three years because my intuition has always been that we needed to understand knowledge representation and symbolic reasoning in a more explicit way, so that we were one or two inventions short of something where it was very good at reading and writing, and therefore being an assistant.
And so, it was amazing that you and Greg and Sam over the summer were saying, yeah, it might not be that long before we want to come demo this thing to you because it’s actually doing pretty well on scientific learning. And in August they said, yeah, let’s schedule this thing.
And so in early September, we had a pretty large group over to my house for a dinner, I think maybe 30 people in total, including a lot of the amazing OpenAI people, but a good sized group from Microsoft. Satya was there. And they gave it AP biology questions and let me give it AP biology questions, and with one exception, it did a super good job. And the exception had to do with math, which we can get to that later. But it was stunning. It was mind blowing.
After the biology questions, I had them type in, “What do you say to a father with a sick child?” And it gave this very careful, excellent answer that was perhaps better than any of us in the room might have given. And so, it was like, wow, what is it – what is the scope of this thing? Because this is way better. And then the rest of the night, we asked it historical questions about are there criticisms of Churchill or different things. And it was just fascinating.
And then over the next few months, as I was given an account, and Sal Khan got one of those early accounts, we saw that you could have it write college applications, or rewrite, say, the Declaration of Independence the way a famous person like Donald Trump might have written it. And it was so capable of writing: writing poems, giving it a tune like Hey Jude and telling it to write about that, telling it to take a Ted Lasso episode and include the following things.
Anyway, ever since that day in September, I’ve said, wow, this is a fundamental change, not without some things that still need to be worked out, but it is a fundamental advance. And it’s confusing people in terms of, well, it can’t yet do this, it can’t do that, it’s not perfect at this or that, but hey, natural language is now the primary interface that we’re going to use to describe things even to computers. And so it’s a huge, huge advance.
KEVIN SCOTT: Yeah. So I want to – like, there’s so many different things to talk about here, but maybe the first one is to talk a little bit about what it’s not good at, because the last thing that I think we want to do is give people the impression that it is an AGI, that it is perfect, that there isn’t a lot of additional work that has to happen to improve it and make it better. You mentioned math as one of the things. So I thought maybe let’s talk a little bit about what you think needs to be better about these systems over time and where we need to focus our energy.
BILL GATES: Yeah. So there appears to be a general issue with its sense of context when it’s being asked something. When humans tell you something or generate something, they understand, oh, I’m making up fantasy stuff here, or I’m giving you advice where, if it’s wrong, you’re going to buy the wrong stock or take the wrong drug. Humans have a very deep context of what’s going on.
Even the AI’s ability to know that you’ve switched context is limited. If you’re asking it to tell jokes, and then you ask it a serious question, where a human would see from your face or the nature of that change that, okay, we’re not in that joking thing anymore, it wants to keep telling jokes. You almost have to do a reset sometimes to get it out of the, hey, whatever I bring up, just make jokes about it. And so, I do think there’s work to do on that sense of context.
Also, in terms of how hard it should work on a problem: when you and I see a math problem, we know, wow, I may have to apply simplification five or six times to get this into the right form. And so, we’re kind of looping through how we do these reductions. Whereas today, the reasoning is a sort of linear chain of descent through the levels. And if simplification needs to run ten times, it probably won’t.
So math is a very abstract type of reasoning, and right now, I’d say that’s the greatest weakness. Weirdly, it can solve lots of math problems, and there are some math problems where if you ask it to explain it in an abstract form, make essentially an equation or a program that matches the math problem, it does that perfectly, and you could pass that off to a normal solver. Whereas if you tell it to do the numeric work itself, it often makes mistakes.
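The division of labor Bill describes, where the model produces the symbolic form and a conventional solver does the arithmetic, can be sketched in a toy way. The word problem and its coefficients below are invented for illustration, and a small exact-arithmetic step stands in for a real solver such as a computer algebra system:

```python
from fractions import Fraction

# Suppose the model has translated the (made-up) word problem
# "Twice a number plus three equals eleven" into the symbolic
# form a*x + b = c, handing back only the coefficients.
a, b, c = 2, 3, 11

# A deterministic solver then does the numeric work the model
# often fumbles, computing x = (c - b) / a exactly.
x = Fraction(c - b, a)
print(x)  # 4
```

The model never has to "do the numbers" itself; it only has to get the translation right, which is exactly the case Bill says it already handles perfectly.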
And it’s very funny because sometimes it’s very confident that, hey – or it’ll say, oh, I mistyped. Well, in fact there’s not a typewriter anywhere in this scene. So the notion of mistyping is really very weird.
And so, for these current areas of weakness, whether it’s six months, a year, or two years before those largely get fixed, so that we have a serious mode where it’s not just making up URLs, and then a more fanciful mode, some of that is already being done, largely through prompts, and eventually through training. And for math, there may be some special training that needs to be done.
But these problems I don’t think are fundamental. And there are people who think, oh, it’s statistical, therefore it can never do X. That is nonsense. Every example they give of a specific thing it doesn’t do, wait a few months and it’s very good.
So characterizing how good it is, the people who say it’s crummy are really wrong, and the people who think this is AGI, they’re wrong. And those of us in the middle are just trying to make sure it gets applied in the right way.
There’s a lot of activities like helping somebody with their college application, what’s my next step, what haven’t I done, I have the following symptoms, that are, in fact, far within the boundary of things that it can do quite effectively.
KEVIN SCOTT: Yeah, well, I want to talk a little bit about this notion of it being able to use tools to assist it in reasoning. And I’ll give you an example from this weekend with my daughter. So my daughter had this assignment where she had this list of 115 vocabulary words, and she had written a 1,000 word essay. And her objective was to use as many of these vocabulary words as she reasonably could in this 1,000 word essay, which is sort of a ridiculous assignment on the surface, right?
But she had written this essay, and she was going through this list trying to manually figure out what her tally was on this vocabulary list, and it was boring. And she was like, all right, I want the shortcut here. Dad, can you get me a ChatGPT account, and can I just put this in there, and it will do it for me? And we did it, and ChatGPT, which is not running the GPT-4 model (though I don’t think GPT-4 would have gotten this right either), didn’t quite get it right. It was not precise.
But the thing I then got her to do with me was to have ChatGPT write a little Python program that could do the count precisely. I mean, this is a very simple intro CS problem, and the fact that the Python code for solving that problem was perfect, and that I got my solution immediately, is just amazing. And my 14-year-old daughter, who doesn’t program, understood everything that was going on.
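For readers curious what that little program might have looked like, here is a minimal sketch; the essay text and word list below are placeholders, not the actual assignment:

```python
import re

def count_vocab_usage(essay: str, vocab: list[str]) -> dict[str, int]:
    """Count how many times each vocabulary word appears in the essay."""
    # Lowercase the essay and split it into words, ignoring punctuation.
    words = re.findall(r"[a-z']+", essay.lower())
    return {term: words.count(term.lower()) for term in vocab}

essay = "The ephemeral beauty of the scene was ubiquitous, truly ephemeral."
vocab = ["ephemeral", "ubiquitous", "gregarious"]

tally = count_vocab_usage(essay, vocab)
used = [w for w, n in tally.items() if n > 0]
print(tally)  # {'ephemeral': 2, 'ubiquitous': 1, 'gregarious': 0}
print(len(used), "of", len(vocab), "vocabulary words used")
```

Exact string matching like this is the precision the chat model lacked, which is why handing the counting off to generated code worked so well.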
And so I don’t know, like if you’ve reflected much over these past months about like – because essentially when we are trying to solve a complicated math problem, we’ve got a head full of cognitive tools that we pick up, like these abstractions that you’re talking about, to help us break down very complicated problems into smaller, less complicated problems that we can solve. And like, I think it’s a very interesting idea to think about how these systems will be able to do that with code.
BILL GATES: Yeah, it’s so good at writing. That’s just a mind-blowing thing. But when you can use natural language, like say, for a drawing program that you want various objects and you want to change them in certain ways, sure, you still want the menus there to touch up the colors, but the primary interface for creating a from-scratch drawing will be language. And if you want a document summarized, that’s something that it can do extremely well.
And so, when you have large bodies of text, when you have text-creation problems, it helps. There was a GPT-3 example where a doctor, who has to write to insurance companies to explain why he thinks something should be covered, which is very complicated, found it super helpful. He was reading the letter over to make sure it was right.
With GPT-4, the version 4 stuff, we took complex medical bills and we said, please explain this bill to me. What is this, and how does it relate to my insurance policy? And it was incredibly helpful at being able to do that. Explaining concepts in a simpler form, it’s very, very helpful at that.
And so, there are going to be a lot of tasks where there’s just a huge increase in productivity, including a lot of documents, accounts payable, accounts receivable. Just take the health system alone: there are a lot of documents that software will now be able to characterize very effectively.
KEVIN SCOTT: Yeah. So one of the other things that I wanted to chat with you about, like you have this really unique perspective in your involvement in several of the big inflection points in technology. So for two of these inflections, like you were either like one of the primary architects of the inflection itself or like one of the big leaders. Like, we wouldn’t have the PC, the personal computing ecosystem without you. And like, you played like a really substantial role in getting the internet available to everybody and making it a ubiquitous technology that everyone can benefit from. To me, this feels like another one of those moments where a lot of things are going to change.
I wonder what your advice might be to people who are thinking about like, oh, I have this new technology that’s amazing that I can now use. Like, how should they be thinking about how to use it? Like, how should they be thinking about the urgency with which they are pursuing these new ideas? And like, how does that relate to how you thought about things in the PC era and the Internet era?
BILL GATES: Yeah, so the industry starts really small, where computers aren’t personal. And then through the microprocessor and a bunch of companies, IBM, Apple, we get the personal computer. And Microsoft got to be very involved in the software. You know, even the BASIC interpreter on the Apple II, a very obscure fact, was something that I did for Apple. And that idea that, wow, this is a tool, at least for editing documents, even though you still have to do all the writing, you know, that was pretty amazing. And then connecting those up over the Internet was amazing. And then moving the computation into the mobile phone was absolutely amazing. So once you get the PC, the Internet, the software industry and the mobile phone, the digital world is changing huge, huge parts of our activities.
I was just in India seeing this – how they do payments digitally, even for government programs. It’s an amazing application of that world to help people who never would have bank accounts because the fees are just too high. It’s too complicated. And so, we continue to benefit from that foundation.
I do view this, the beginning of computers that read and write, as every bit as profound as any one of those steps. And it’s a little bit surprising, because robotics has gone slower than I would have expected. And I don’t mean autonomous driving; I think that’s a special case that’s particularly hard because of the open-ended environment and the difficulty of safety and what safety bar people will bring to that. I mean even factories, where you actually have huge control over the environment of what’s going on, and you can make sure that no kids are running around anywhere near that factory.
So, a little bit, people are saying, okay, these guys can over-predict, which is certainly correct. But here’s a case where we under-predicted: natural language, the computer’s ability to deal with it, and how that affects white-collar jobs, including sales, service, helping a doctor think through what to put in your health record. That I thought was many years off.
And so, all the AI books, even when they talk about things that might get a lot more productive, will turn out to be wrong, because we’re just at the start of this. You can almost call it a mania, like the internet mania. But the internet mania, although it had its insanity, the sock puppets and the things where you look back and say, what were we thinking, was a very profound tool that we now take for granted. Even just for scientific discovery during the pandemic, the utility of the immediate sharing that took place there was just phenomenal. And so, this is as big a breakthrough, as big a milestone, as I’ve seen in the whole digital computer realm, which really starts when I’m quite young.
KEVIN SCOTT: Yeah, well, so I’m going to say this to you, and I’m just interested in your reaction because like, you will always tell me when an idea is dumb. But like, one of the things that I’ve been thinking for the last handful of years is that one of the big changes that’s happening because of this technology is that for 180 years, from the point that Ada Lovelace wrote the first program to harness the power of a digital machine, up until today, the way that you’d get a digital machine to do work for you is you either have to be a skilled programmer, which is like a barrier to entry that’s not easy, or you have to have a skilled programmer anticipate the needs of the user and to build a piece of software that you can then use to get the machine to do something for you.
And this may be the point where we get that paradigm to change a little bit, where because you have this natural language interface and these AIs can write code and like they will be able to actuate a whole bunch of services and systems, that we sort of give ordinary people the ability to get very complicated things done with machines, without having to have like all of this expertise that you and I spent many, many years building.
I don’t know whether you think that’s a dumb idea or not.
BILL GATES: No, absolutely, every advance hopefully lowers the bar in terms of who can easily take advantage of it. The spreadsheet was an example of that, because even though you still have to understand these formulas, you really didn’t have to understand logic or symbols much at all. And it had the input and the output so closely connected in this grid structure that you didn’t think about the separation of those two. And that’s kind of limiting in a way to a super abstract thinker, but it was so powerful in terms of the directness; oh, that didn’t come out right, let me change it.
Here, there’s a whole class of programs for taking corporate data and presenting it, or doing complex queries against it: okay, have there been any sales offices where we’ve had 20% of the headcount missing, and are our sales results affected by that? Now you don’t have to go to the IT department and wait in line and have them tell you, oh, that’s too hard or something.
Most of these corporate things, whether it’s a query or a report or even a simple workflow where, if something happens, you want to trigger an activity, the description in English will be the program. And when you want it to do something extra, you’ll just pull out that English, or whatever your language is, and type that in. And so, there’s a whole layer of query assistance and programming that will be accessible to any employee.
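To make that concrete: the headcount question above would today require someone to hand-write a query like the one below (the table, columns, and numbers are all invented for illustration). The shift Bill is describing is that the English sentence itself becomes the program, with this query generated for you:

```python
import sqlite3

# Toy data standing in for corporate records (all names invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE offices (name TEXT, budgeted INT, actual INT, sales REAL)")
con.executemany(
    "INSERT INTO offices VALUES (?, ?, ?, ?)",
    [("Lyon", 50, 38, 1.9), ("Osaka", 40, 39, 3.1), ("Austin", 60, 45, 2.2)],
)

# The English request "sales offices where 20% or more of the headcount
# was missing, and their sales results" hand-compiled into SQL:
rows = con.execute(
    """SELECT name, sales FROM offices
       WHERE (budgeted - actual) * 1.0 / budgeted >= 0.20"""
).fetchall()
print(rows)  # [('Lyon', 1.9), ('Austin', 2.2)]
```

Writing the English is easy; writing the WHERE clause is the part that has historically gone through the IT department.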
And the same thing is true of, okay, I’m somewhere in the college application process and I want to know, okay, what’s my next step and what’s the threshold for these things? It’s so opaque today. So empowering people to go directly and interact, that is the theme that this is trying to enable.
KEVIN SCOTT: So, I wonder what some of the things are that you are most excited about, just in terms of application of the technology to the things that you care about deeply from the foundation or your personal perspective. So you care a lot about education, public health, climate and sustainable energy. You have all of these things that you’re working on. And like, have you been thinking about how this technology impacts any of those things?
BILL GATES: Yeah, it’s been fantastic that even going back to the fall, OpenAI and Microsoft have engaged with people at the Gates Foundation thinking about the health stuff and the education stuff. In fact, Peter Lee is going to be publishing some of his thinking, which is somewhat focused on rich world health, but it’s pretty obvious how that work in a sense is even more amazing in health systems where you have so few doctors and getting advice of any kind is so incredibly difficult.
And so, it is incredible to look at this and say, okay, can we have a personal tutor that helps you out? And when you write something: if you’re going to some amazing school, yes, the teacher may go line by line and give you feedback, but a lot of kids just don’t get that much feedback on their writing. And it looks like, configured properly, this is a great tool to give you feedback on writing.
It’s also ironic, in a way, that people are saying, well, what does it mean if people can cheat and turn in computer writing? It’s kind of like when calculators came along: oh my goodness, what are we going to do about adding and subtracting? And of course, they created contexts where you couldn’t use the calculator, and, you know, we got through that without it being a huge problem.
So I think education is the most interesting application, and I think health is the second most interesting. You know, obviously there’s commercial opportunity in sales and service type things and that’ll happen. You don’t need any foundation type engagement on that.
We brainstormed a lot with Sal Khan, and it looks very promising because with a class size of 30 or 20, you can’t give a student individual attention. You can’t understand their motivation or what’s keeping them engaged. They might be ahead of the class; they might be behind. And it looks like in many subject areas, by having this and having dialogs and giving feedback, for the first time, we’ll succeed in helping education.
We have to admit that, except for the sort of prosaic things of looking up Wikipedia articles or helping you type things and print them out nicely, the notion that computers were going to revolutionize education is still largely more in front of us than behind us. I mean, yes, some games draw kids in, but the average math score in the U.S. hasn’t gone up much over the last 30 years.
And so, the people who do computers kept saying, hey, we want credit for that. Credit for what? It’s not a lot better than it was. Obviously the computers didn’t perform some miracle there. And I think over the next five to 10 years, we will think of learning and how you can be helped in your learning in a very different way than just looking up material.
KEVIN SCOTT: Yeah. And I know you think about this as a global problem. My wife and I, with our family foundation, think about it as a local problem for disadvantaged kids. Like, one of the common things that we see is that parent engagement makes a big difference in the educational outcomes for kids. And if you look at the children of immigrants in East San Jose or East Palo Alto here in the Silicon Valley, like often, the parents are working two, three jobs. Like, they’re so busy that they have a hard time being engaged with their kids. And sometimes they don’t speak English. And so, like, they don’t even have the linguistic ability.
And you can just imagine what a technology like this could do, where it really doesn’t care what language you speak. It can bridge that gap between the parents and the teacher, and it can be there to help the parent understand where the roadblocks are for the child and to even potentially get very personalized to the child’s needs and sort of help them on the things that they’re struggling with. I think it’s really, really very exciting.
BILL GATES: Yeah, just the language barriers, we often forget about that. And that comes up in the developing world. India has a lot of languages. I was at the Bangalore Research Lab as part of that trip, and they’re taking these advanced technologies and trying to deal with the tail of languages, so that’s not a huge barrier.
KEVIN SCOTT: One of the things that you said at the GPT-4 dinner at your house is that you had this experience early in Microsoft’s history where you saw a demo that changed your way of thinking about how the personal computing industry was going to unfold, and that caused you to pivot the direction of the company. I wonder if you might be willing to share that with everyone.
BILL GATES: Yeah, so Xerox had made lots of money on copying machines. They got out ahead. Their patents were there. The Japanese competition hadn’t come in. And so, they created a research center out in Palo Alto, which was forever known by its acronym, Palo Alto Research Center, PARC.
And at PARC, they assembled an incredible set of talent. Bob Taylor and others were very good judges of talent. So you end up with Alan Kay, Charles Simonyi, Butler Lampson. And I don’t want to leave anybody out, but there’s like a bunch of other people.
And they create a graphical user interface machine. They weren’t the only ones; there were people over in Europe doing some of these things, but they combined it with a lot of things. They put it on a network. They got a laser printer. And Charles Simonyi was there programming this and did a word processor that used that very graphical bitmap screen and let you do things like fonts, stuff we take for granted now.
And so, I went and visited Charles at PARC at night, and he demoed what he had done with this Bravo word processor. And then he printed on the laser printer a document of all the things that should be done if there were cheap, pervasive computers. And he and I brainstormed that, and he updated the document and printed it again, and it just blew my mind.
The agenda for Microsoft came out of that. That’s 1979 that I’m with him, and computers are still completely character mode. So that’s when the commitment to do that kind of software begins. The Mac emerges from Steve Jobs having a similar experience with Bob Belville at Xerox PARC.
And you know, Xerox built a very expensive machine called the Star, and they only sold a few thousand of them, because people didn’t think of word processing as something you would pay that much for. You had to come in really at the low end: PCs with character mode first, but graphics word processing later.
So I hired Charles. Charles helped do Word and Excel and many of our great things. And eventually, 15 years after Charles had shown me his thinking and we’d brainstormed, we had largely achieved that piece of paper through Windows, and through Office on both Windows and Mac.
But so I told the group that that was the other demo that had kind of blown my mind and made me think, okay, what can be achieved in the next five to 10 years? We should be more ambitious taking advantage of this breakthrough, even with the imperfections that we’re going to reduce over time.
KEVIN SCOTT: Yeah, it was a really powerful and motivating anecdote that you shared.
Maybe one last thing here before we go, or maybe two more things. What do you think are the big grand challenges that we ought to be thinking about over the next five to 10 years? In a sense, related to this, I actually have that piece of paper that Charles wrote. It’s framed here by my desk, because I think it was one of the more unbelievable predictions of a technology cycle that anybody’s ever written, and I don’t know why everybody doesn’t know about the existence of this thing. It’s just unbelievable.
But as you sort of think about like what lies ahead of us over the next five to 10 years, what’s your challenge, not just to Microsoft, but to everybody in the world who’s going to be thinking about this? What do you think we ought to go push on really, really hard?
BILL GATES: Well, there will be a set of innovations on how you execute these algorithms, lots of chips, some movement from silicon to optics to reduce the energy and the cost. Immense innovation where Nvidia’s the leader today, but others will try and challenge them as they keep getting better, and using even some radical approaches, because we want to get the cost of execution on these things and even the training dramatically less than it is today. Ideally, we’d like to move them so that often you can do them on a self-contained client device, not have to go up to the cloud to get these things. So lots to be done on the platform that this uses.
Then we have an immense challenge in the software side of figuring out, okay, do you just have many specialized versions of this thing, or do you have one that just keeps getting better? And there will be immense competition of those two approaches. Even at Microsoft we’ll pursue both in parallel with each other.
And ideally, within a contained domain, we’ll get something where the accuracy is extremely high, by limiting the areas that it works in, and by having the training data and even perhaps some pre-checking, post-checking type logic that applies to that. I definitely think that in areas like sales and service, there is a lot that can be done, and that it’s super valuable.
The notion that there is this emergent capability means that the push to try and scale up even higher will be there. Now, what corpora exist once you get past every piece of text and video? Are you synthetically generating things, and do you still see that improvement as you scale up? Obviously, that will get pursued. And the fact that it costs billions of dollars to do it won’t stand in the way of it going ahead at very high speed.
And then there’s a lot of societal things of, okay, where can we push it in education? It’s not that it will just immediately understand student motivation or student cognition. There will have to be a lot of training and embedding it in an environment where the adults are seeing the engagement of the student and seeing the motivation.
And so, even though you free up the teacher from a lot of things, that personal relationship piece, you’re still going to want all the context coming out from those tutoring sessions to be visible and help the dialog that’s there with the teacher or with the patient.
And so, Microsoft talks about making humans more productive. Some things will be automated, but many things will just be facilitated where the final engagement is very much a human, but a human who’s able to get a lot more done than ever before.
So the number of challenges and opportunities created by this is pretty incredible, and I can see how engaged the OpenAI team is by this. I’m sure there’s lots of teams I don’t get to see that are pushing on this.
And the size of the industry, I mean, when the microprocessor was invented, the software industry was a tiny industry. You could put most of us on one panel, and they could complain that I worked too hard and it shouldn’t be allowed, and we could all laugh about that. Now this is a global industry, and so it’s a little harder to get your hands around.
I get a weekly digest of all the different articles about AI that are being written. Oh, can we use it for moral questions? Which seems silly to me to even ask, but fine, people can ask whatever they want to ask. And so, this thing has the ability to move faster, because the amount of people and resources and companies is way beyond those other breakthroughs that I brought up and was privileged to live through.
KEVIN SCOTT: Yeah. I mean, one of the things for me that has been really fascinating, and I’m going to say this as a reminder to folks who are thinking about pursuing careers in computer science and becoming programmers: I spent most of my training as a computer scientist, and the early part of my career, as a systems person. I wrote compilers and tons of assembly language and designed programming languages, which I know you did as well.
And I sort of feel like a lot of the things that I studied in grad school, in terms of parallel optimization and high-performance computer architectures, I left grad school and went to Google and thought I would never use any of that stuff ever again. And then all of a sudden, we’re building supercomputers to train models, and these things are relevant again.
I think it’s interesting. I wonder what Bill Gates, the young programmer, would be working on if you were in the mix right now, writing code for these things, because there are so many interesting things to go work on.
And I know we’ve talked about some of them in private, like we obviously can’t discuss some of them on the podcast, but what do you think you as a 20-something-year-old young programmer would get really excited about in this stack of technologies?
BILL GATES: Well, there is an element of this that’s fairly mathematical. So I feel lucky that I did a lot of math, and that was a gateway to programming for me, including all sorts of crazy stuff with numerical matrices and their properties.
And so, there are people who came to programming without that math background who do need to go and get a little bit of the math. I’m not saying it’s super hard, but they should go and do it, so that when you see those funny equations, you’re like, oh, okay, I’m comfortable with that, because a lot of the computation will be that kind of thing instead of classic programming.
It is a paradox. When I started out, writing tiny programs was super necessary. The original Macintosh was a 128K machine, 128K bytes, 22K of which was the bitmap screen. And so, almost nobody could write programs to fit in there. At Microsoft, our approach, our tools, let us write code for that machine, and really only we and Apple succeeded.
Then when it became 512K, a few people succeeded, but even that people found very difficult. I remember thinking, oh, as memory got to be 4 gigabytes, all these programmers don’t understand discipline and optimization, and they’re just allowed to waste resources.
But now that you’re operating with billions of parameters, the ideas come back: okay, can I skip some of those parameters, can I simplify some of them, can I pre-compute various things? If I have many, many models, can I keep deltas between models instead of keeping full copies? All the kinds of optimization that made sense on those very resource-limited machines, some of them come back in this world, because when you’re doing billions, or literally hundreds of billions, of operations, we are pushing the absolute limit of the cost and the performance of these computers.
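The "keep deltas between models" idea can be shown in miniature: store a base model’s parameters once, and for each variant keep only the positions where it differs. This is a toy sketch under the assumption that fine-tuned variants change relatively few parameters; the function names and data are purely illustrative.

```python
# Toy sketch: represent a model's parameters as a flat list of floats,
# and store a variant as a sparse delta against a shared base.

def make_delta(base, variant, eps=1e-9):
    """Record only the positions where the variant differs from the base."""
    return {i: v - b
            for i, (b, v) in enumerate(zip(base, variant))
            if abs(v - b) > eps}


def apply_delta(base, delta):
    """Reconstruct the variant from the base plus its delta."""
    return [b + delta.get(i, 0.0) for i, b in enumerate(base)]


base = [0.5, -1.2, 3.0, 0.0]
variant = [0.5, -1.0, 3.0, 0.25]   # differs from the base in two positions

delta = make_delta(base, variant)  # only the changed positions are stored
restored = apply_delta(base, delta)
```

When most parameters are shared, the delta is far smaller than a full copy, which is exactly the kind of resource trade-off the old small-memory discipline trained people to look for.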
And one thing that is very impressive is that the speedups, even in the last six months, on some of these things have been better than I expected. And that’s fantastic, because you get the hardware speedup and the software speedup multiplied together. So, how resource-bottlenecked will we be over the next couple of years? That’s less clear to me now that these improvements are taking place, although I still worry about it, and about how we make sure that companies broadly, and Microsoft in particular, allocate that in a smart way.
So understanding algorithms, understanding why certain things are fast and slow, that is fun. The systems work, in my early career just one computer and later a network of computers, now it’s systems where you have datacenters with millions of CPUs, and it’s incredible the optimization that can be done there, just how the power supplies work or how the network connections work. Anyway, almost every area of computer science, including database techniques and programming techniques, this kind of forces us to think about in a really new way.
KEVIN SCOTT: Yeah, I could not agree more.
So, one last question. I know that you are incredibly busy, and you have the ability to choose to work on whatever it is that you want to work on. But I want to ask you anyway: what do you do outside of work for fun? I ask everybody who comes on the show that.
BILL GATES: No, that’s great. I get to read a lot. I get to play tennis a lot. During the pandemic, I was down in California in the fall and winter, and I’m still enjoying that. Although the Foundation’s meeting in person and some of these Microsoft OpenAI meetings, it’s been great that we’ve been able to do those in person. But some we can just do virtually.
So anyway, I play pickleball, which I’ve been playing for over 50 years, and tennis, and I like to read a lot. I goofed off and went to the Australian Open for the first time, because it’s nice warm weather down there, and that was kind of spectacular.
KEVIN SCOTT: I actually want to push on this idea that you read a lot. So you say you read a lot, which is not the same as what most people mean when they say they read a lot. You’re famous for carrying around a giant tote bag of books with you everywhere you go, and you read an insane amount of stuff, everything from really difficult science books all the way to fiction. How much do you actually read? What’s a typical pace of reading for Bill Gates?
BILL GATES: You know, if I don’t read a book in a week, then I really relook at what I was doing that week. If I’m on vacation, then I’ll hope to read more like five, six, or seven. Of course, books are quite variable in size, but over the course of the year, I should be able to read close to 80-plus books.
I have younger children who read even more than I do. So it’s kind of like, oh, geez, which Sowell books am I going to read? I still read all the Smil and Pinker, authors that are just so profound and have shaped my thinking.
But reading is kind of relaxing. I should read more fiction. When I fall behind, my nonfiction list tends to dominate. And yet, people have suggested such good fiction stuff to me. And that’s why I kind of share my reading list on Gates Notes.
KEVIN SCOTT: What’s the over-under? You’re famous for saying that you want to read David Foster Wallace’s Infinite Jest. What’s the over-under on that happening in ’23?
BILL GATES: Well, if there hadn’t been this darn AI advance that’s distracting me. I’m kidding you, but it’s really true. I have allocated, and with super excitement, a lot more time to sitting with Microsoft product groups and saying, okay, what does this mean for security, what does it mean for Office, what does it mean for our database type thing? Because I love that type of brainstorming because new vistas are opened up. So, no, it’s all your fault, no Infinite Jest this year.
KEVIN SCOTT: Excellent. Well, I appreciate you making that trade because it’s been really fantastic over the past six months, having you help us think through all of this stuff. So thank you for that and thank you for doing the podcast today. Really, really appreciate it.
BILL GATES: No, it was fun, Kevin. Thanks so much.
KEVIN SCOTT: Awesome.
KEVIN SCOTT: Thank you so much to Bill for being with us here today. I hope you all enjoyed the conversation as much as I did. There are so many great things about this conversation, which were reflections of the conversations that we’ve been having with Bill over the past handful of months as we think through this amazing revolution. One of the things I’ve learned most from Bill, as we think about what AI means for the future, is how he thought about what personal computing and the microprocessor and PC operating system revolution meant for the world when he was building Microsoft from the ground up, and even what it felt like for him as one of the leaders helping bring the internet revolution to the world.
So those parts of the conversation today where he was recounting some of his experiences, like the first meeting that he had with Charles Simonyi at Xerox PARC, where he saw one of the world’s first graphical word processors, and how seeing that and talking with Charles influenced an enormous amount of the history of not just Microsoft, but the world, in the subsequent years. And just hearing Bill talk about his passion for the things that the Gates Foundation is doing, and what these AI technologies mean for those things, like how maybe we can use these technologies to accelerate some of the benefits to the people in the world who are most in need of technologies like this, to help them live better and more successful lives. So again, this was a tremendous treat, being able to talk with Bill on the podcast today.
If you have anything you’d like to share with us, please email us any time at [email protected]. You can follow Behind the Tech on your favorite podcast platform, or check out full video episodes on YouTube. Thanks for tuning in, and see you next time.