Behind The Tech with Kevin Scott - Dr. Peter Lee: Microsoft Research & Incubations



PETER LEE: What we’re seeing today is that more and more, what we do, and even just to survive as a civilization depends on researchers and scientists being able to get drawn in and solve problems, respond to crises, help us become hopefully more resilient to the future. And that sort of crisis response science, I think, is getting to be incredibly important. And it won’t work if society doesn’t trust what we’re doing. And that trust is – is so hard to earn.


KEVIN SCOTT: Hi, everyone. Welcome to Behind the Tech. I’m your host, Kevin Scott, Chief Technology Officer for Microsoft.

In this podcast, we’re going to get behind the tech. We’ll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did. So, join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what’s happening today. Stick around.


CHRISTINA WARREN: Hello and welcome to Behind the Tech. I’m Christina Warren, senior cloud advocate at Microsoft.

KEVIN SCOTT: And I’m Kevin Scott.

CHRISTINA WARREN: And today, our guest is Dr. Peter Lee. Dr. Lee is a distinguished computer scientist whose research spans a range of areas including artificial intelligence, quantum computing, and biotechnology. And currently, he’s leading a team of researchers here at Microsoft, with eight labs across the globe.

KEVIN SCOTT: Yeah, we are super lucky to have Peter on our team. I’ve known about Peter since I was a computer science graduate student in the ‘90s. So he was a professor at Carnegie Mellon University when I was a PhD student at the University of Virginia, and we were working in pretty similar academic spaces. And I was always a huge admirer of not just his work, but of the work of his PhD students.

So it’s a real honor and a privilege for me to now be able to work with Peter here at Microsoft. It’s a strange, strange journey.

CHRISTINA WARREN: I love that. I love that you’ve been aware of him for so long, and now you get to work together, which is fantastic.

KEVIN SCOTT: Yeah, it’s – it’s super fun.

And he has a really big job here at Microsoft. So he runs all of Microsoft Research, which as an institution turns 30 this year.


KEVIN SCOTT: And over its lifetime, it has been one of the most important research institutions for computer science and related areas for the past three decades.

You know, and again, I’m a little bit biased. Microsoft Research is in my – in my group at Microsoft, and I was an intern at Microsoft Research 20 years ago. Oh god, this is a terrible thing to think. Yeah, but so anyway, Peter is awesome.

CHRISTINA WARREN: That’s great. That’s great. I can’t wait to hear your conversation. All right. Let’s talk with Dr. Lee.


KEVIN SCOTT: Our guest today is Dr. Peter Lee. Peter is a computer scientist and Corporate Vice President of Research and Incubations at Microsoft. Before joining Microsoft in 2010, he was the head of the Transformational Convergence Technology Office at DARPA; and before that, chair of the Computer Science Department at Carnegie Mellon University. He’s a member of the National Academy of Medicine, serves on the board of directors of the Allen Institute for Artificial Intelligence, and was a commissioner on President Obama’s Commission on Enhancing National Cybersecurity. Welcome, Peter.

PETER LEE: Thank you, Kevin. It’s great to be here.

KEVIN SCOTT: Yeah, the thing that your intro doesn’t say is that when you were at Carnegie Mellon, you were a functional programming expert. And when I began my journey as a graduate student, that was the particular area of compilers and programming languages that I was studying.

My first Ph.D. advisor was a guy named Norman Ramsey, who went to Harvard and is, I think, at Tufts now.

PETER LEE: Right, right.

KEVIN SCOTT: And yeah, and it’s a very small community. So like I, you know, even before we ever met, I felt this weird sense of familiarity. You know, like, I knew who your PhD students were. I knew your writing. I’d read, you know, books you had written and contributed to.

So I’m sort of curious to just start at the beginning of your journey, like you as a kid. How did you get interested in a set of things that took you to functional programming?

PETER LEE: You know, Norman Ramsey I, of course, know very well, and he was great. And I think, in fact I’m pretty sure, I had encountered you while you were a grad student. And so it’s amazing how paths intersect.


PETER LEE: Well, yeah. So to go back to the beginning, you know, I grew up in a hardcore physical science household. You know, my parents immigrated from Korea. My mom became a chemistry professor. My dad became a physics professor.


PETER LEE: And so the joke is I was a big disappointment to them when I went to college to major in math. (Laughter.)

KEVIN SCOTT: You know, we laugh at that, but that is a weird thing in academia, that there seems – everybody has a pecking order in their head about which of the disciplines are better than the others, which is just a sort of outrageously ridiculous thing.

PETER LEE: Well, you know, I think, of course, then I compounded the problem by going to grad school, not in math, but in computer science.

Obviously, my parents became very, very proud of me, in time. But, you know, it’s actually something that I think all researchers encounter, because what researchers do, it’s not clearly useful to anyone.

You know, people oftentimes don’t understand what it is that you and I do, or people in a place like Microsoft Research do. Society has to actually tolerate the burden and the cost, you know, of all of these great research institutions around the world. And so, you know, we’re oftentimes encountering, you know, questions like that. So, you know, as you say, there’s a pecking order, so even I grew up with that in my own household.

KEVIN SCOTT: Yeah, and I was thinking about this yesterday. I think tolerate is one word. I think trust is another, and like, we’re at this, you know, weird moment in time right now where I do feel that science and, like, particularly scientific research, where the result of what you’re doing – before the thing that you’re spending all of your time on is going to have an impact on human beings – it sort of takes a while, and sometimes it’s very indirect. Like, you make a contribution to a thing that’s going to have to have hundreds or thousands of different things contributed into it before you get a medicine or a breakthrough product or whatever it is. And I think part of the challenge with earning people’s trust and tolerance is on us just figuring out how to better tell folks what it is that we’re doing.


KEVIN SCOTT: My mom used to – yeah, I was a weird teenager, like, I would have the, you know, Transactions on Programming Languages and Systems laying around my house when I was, you know, 16 or 17. I think it’d be even weirder if I were 13 or 14, right? But you know, she’d look at me reading these computer science papers and textbooks, and she would be like, “I, you know, like, what are you doing? Like, all of those squiggles hurt my head.” And it’s like a perfectly legitimate point of view, and I never did a great job of explaining to her what I did.

Whereas, you know, I was playing around in my machine shop a couple of days ago and, like, I made this little part that I needed for a microphone holder, and I posted a picture of that on Instagram and like a gazillion people, jumped on and said, “Oh, wow, that’s neat,” because like, it’s a thing and you can see it and I can –


KEVIN SCOTT: – explain pretty easily what it’s good for.

So, I don’t know, like, what are your thoughts there? Like, how do we do a better job helping people understand what we do, because it is really necessary? The world doesn’t work without all of this research.

PETER LEE: Well, and it’s become even more important. You know, the need for scientific research has just gotten incredibly important.

You know, my frame, growing up the way I grew up, my frame for scientific research, you know, was formed by stories about, you know, Isaac Newton sitting under a tree and then an apple falls and hits him in the head. And he’s just wondering what the heck is that about? So it’s just pure curiosity driven research, and that’s sort of the frame that I grew up with.

But to your point, what we’re seeing today is that more and more, what we do, and even just to survive as a civilization depends on researchers and scientists being able to get drawn in and solve problems, respond to crises, help us become hopefully more resilient to the future. And that sort of crisis response science, I think, is getting to be incredibly important. And it won’t work if society doesn’t trust what we’re doing. And that trust is – is so hard to earn.

You know, another story. When I was a professor, I was an assistant professor. I didn’t have tenure. And we had a change in department head, a very good friend of mine now, Jim Morris, but at the time he became department head, I didn’t know who he was. And so, he was a brand new department head. He was going to have one-on-one meetings with all the faculty. So it was my turn. And he asked me what I did, and I explained all this functional programming stuff to him. And he sort of scrunched his nose and said, “Well, why would anyone work on that stuff? You know, what is it good for?”

And I was so nervous about the meeting, I just sort of stammered out, “Well, it’s just so beautiful.” And Jim’s response was, “Well, if it’s beauty that you care about, maybe you should be a professor in the fine arts college instead of computer science.” (Laughter.)

KEVIN SCOTT: That’s brutal.

PETER LEE: I know. And of course, you know, in time, we became really close and even did some research together.

But it’s that kind of thing where part of what researchers do, there’s a portion of it that is sort of curiosity driven, that’s searching for truth and beauty. But now, more and more, there’s another part of it that is really important to, like, making sure voting machines work correctly, to helping us find, you know, drugs and vaccines for things like COVID-19, understanding, you know, where the next wildfires might happen because of climate change, and all of these sorts of things that are so important.

You know, if an asteroid that has the power to destroy life on the planet were to come towards Earth, you’re going to call in researchers to try to figure out how to prevent that from happening.

That mode of what we do is just getting so, so important, and especially at a place like Microsoft Research, you know, where we have an obligation to situate our work in the real world. It’s gotten really important.

And you’re right, how we explain what we do so that people have the trust in us, so that we can respond, I think ends up being everything.

KEVIN SCOTT: I want to go back to this idea of doing things because they’re beautiful. I mean, it always struck me that you’ve got many different reasons that you do research. Part of the reason that you do research and you try to tackle really, really, really hard problems is because it’s almost like exercise, right? You just need to be in the habit of doing that so that when the moment comes, and you may not even realize when the moment has arrived, but, like, when it does arrive, that you will be prepared to, like, actually throw your full mind and energy at a thing and have a higher chance of being able to solve the problem.

I mean, and another reason I’ve always thought that working on these hard problems is important is just solving them gives us a catalog of things to draw upon, even if it’s not immediately obvious what they’re useful for.

And what we do know from the world that we live in is everything that we have, like an mRNA vaccine or an AI-powered coding assistant or, like, you know, pick your thing that you think is a really interesting achievement, we have it because it’s a layering of all of these discoveries and accomplishments and abstractions and tools. And no one, when they were thinking about the part of the problem that they were solving, was imagining this thing that came out in the end.


KEVIN SCOTT: And so like, I don’t know, like maybe there are other things as well, but I think working on beautiful problems, hard problems has a lot of value, even if it’s not immediately obvious to everyone else, like, why it’s important.

PETER LEE: Yeah, I’ve always wondered if there’s a part of our brains that is like our muscles, that if we don’t work them out all the time, you know, they kind of atrophy.

But you know, one other thought that your comments triggered is actually 100 years ago this year, in 1921, a guy named Abraham Flexner wrote an essay. He was writing it to the board of the Rockefeller Foundation, trying to explain exactly your point, you know, that people work on really hard problems just to satisfy their curiosity. And lo and behold, you know, more often, way more often than you would expect, that new knowledge ends up being really important. And he wrote that in 1921, to try to explain to the Rockefeller Foundation why they should support research.

And then, more than 10, 15 years later, when there was the desire to rescue people from Europe – bad things were happening in the late 1930s in Europe – really important people like Albert Einstein or von Neumann and others, to justify the cost and expense and political risks of bringing them over and forming the Institute for Advanced Study at Princeton, he published that memo publicly. And it’s an essay called “The Usefulness of Useless Knowledge.” And he just ticks through, like, even in this world where really bad things are happening – you know, World War II was brewing and all these other things – you know, there are things that we need to do. There are problems we need to think about and work on. There’s new knowledge to discover, and it really matters, and maybe matters even more than ever in the context of a world that’s struggling.

And I reread that essay, and it’s available free online, just search for “The Usefulness of Useless Knowledge.” I read it about once a year because it’s important. You know, look, you and I are consumed with helping Microsoft become a more successful company. It’s all grounded in the real world and so on. But it’s important to not lose a grip on those sorts of enduring values. And so it’s sort of a pilgrimage for me. And I swear, you read the first page of that essay and it could have been written yesterday.


PETER LEE: It’s that timeless. And you know, it’s a little flowery and dramatic to call it the search for truth and beauty, but it is spiritual in that way.

You know, at the same time, I think Microsoft Research has a special obligation to put its brainpower into the real world. You know, we – you know, that real world might be far in the future. Like, you know, I know, Kevin, you’re thinking so hard about the inevitability that general artificial intelligence will become real. And maybe that’s a few years into the future, but it’s inevitably going to happen. And so that’s situating research in the real world because it’s a world we know is going to become real.

And so in that sense, we’re different than Isaac Newton or Galileo. But that doesn’t mean we’re still not obligated to have a firm grip, you know, on these enduring values of research.

KEVIN SCOTT: Yeah, I could not agree more.

I mean, and, like, one of the other things along those lines that you and I spend a bunch of time talking about is how to create the right incentives and culture for people taking the right types of intellectual risk. I think, you know, trying to solve a hard problem and failing, in general, I will go ahead and make this bold assertion, is like more valuable than spending a bunch of time trying to make incremental progress on something that’s already reasonably well developed.

And that is a hard, hard thing to get people to do. And like, I want to ground it like, you have a particular example that I’m familiar with from your days as a Carnegie Mellon professor.

So you had a PhD student, George Necula, who, like, wrote what I think is one of the most beautiful PhD dissertations ever, and it, like, really affected me as a graduate student. It was this idea called Proof Carrying Code. And, like, that is a risky idea to let a PhD student go pursue because he could have failed. And you know, like you have to have a dissertation at the end of your graduate studies to get that Ph.D. So talk to me about how that happened and – and, like, what can we learn from good examples like that.

Or Mendel Rosenblum is another example. Like, his work resulted in, you know, a ton of the virtualization stuff that we have now and like VMware and whatnot. So there are positive examples, but there also is a lot of energy that gets thrown into incrementalism.

PETER LEE: It’s true. And I think what happens also is it’s a real test as a mentor or a manager or advisor. You know, I think part of the struggle for you and me is sort of the tension between, you know, like you and I are opinionated, we have our own clear ideas. And it is not the case that the people that work for us across Microsoft Research share our points of view.



KEVIN SCOTT: Which is a good thing.

PETER LEE: Which is a good thing, but it’s still really hard, you know? And you know, when I was a professor, you know, I started off my career as an academic thinking, wow, I’m going to be able to, you know, work with these graduate students and form them all in my own image. And you know, they’re going to amplify, you know, how I view the world and the kinds of scientific research I like to do, and it’ll all be grand and glorious.

And of course, you learn pretty quickly, it just does not work that way. Well, there might be some second rate graduate students that do that, but at Carnegie Mellon, everyone is first rate. And you know, wow, they have their own opinions. And no, you know, they’re not going to just take my lead on things.

And so George Necula, you know, was one of those students and he, you know had this idea, which you refer to as Proof Carrying Code. And it’s true, I thought he was really on the wrong track, you know, that this would be just way too difficult.

The first drafts of some of the early papers and proofs that he wrote, it would take me less than 10 minutes to find horrible bugs in the proofs. You know, they would be simple little proofs, you know, less than 10 lines long, and they would be wrong. And so it just sort of cast doubt over the whole thing. But you have to decide, are you going to give the freedom to fail here and learn and grow from that or not?

And one of the golden rules then to translate to our current jobs that we have now is to decide, are you betting on the person and their commitment to something or are you betting on the idea?


PETER LEE: And time and time again, you learn that you’re better off trying to make an assessment of betting on the person than on the idea. And that makes it then super important for us to make sure that we’re viewing things fairly, you know, that we’re not engaging in any kind of favoritism or biases.

Ultimately, when we’re leading research, what we’re doing is we’re trying to understand where is the passion and the commitment to really go deep, to follow through. If a researcher came to me and said, I have a better idea for growing cabbages faster, I might think it’s a crazy thing to work on. But if that passion and that drive to really, really go deep is there, I have to really stop myself and decide, well, maybe it’s worth giving a little bit of time and rope for this to play out because you just never know, you know, where the next big thing is going to come.

And, you know, George ended up writing an amazing thesis, became a professor at Berkeley, then, you know, went into industry, and you know, he’s had amazing impact and an amazing career.

KEVIN SCOTT: Yeah. I think you make such a brilliant and important point around betting on people, not ideas. And this other thing of, like, giving people the ability to fail is also important. The learning that I have been able to get in failure is so much more – more powerful than the learning that I get in success. And the fear of failure is just a terrible thing. I mean, it really is crippling, I think.

PETER LEE: It is, and it is painful. They are growth experiences. You know, Satya Nadella, our CEO, talks about the growth mindset. And I joked with him once that growth mindset is a euphemism, because when you grow through failures, it’s incredibly painful.


PETER LEE: I think we’ve all had failures that have made us want even just to give up. There have been times I’ve thought about quitting Microsoft because of a failure, and then somehow you lick your wounds and you find a way to overcome it and you find out that you emerge as a better person for it.

KEVIN SCOTT: Yeah. I had a boss a while ago who was running a part of a business that was responsible for just enormous amounts of money. And so, whenever you made an engineering mistake in this part of the business, it wasn’t, you know, reputational loss or, you know, like your pride was wounded because something went down and then you had a tough time debugging it. No, like, failures in the things that he was responsible for, like, the meter started running on the dollars that were going out of the door.

And invariably, we made mistakes – it’s impossible not to make mistakes when you’re building complex systems. He would be very calm and collected, never made anyone feel bad about this colossal amount of money that was, you know, just – just being lost. And, you know, he would patiently guide everyone through the crisis and then at the end of it, ask us like, okay, what did we learn from this? It’s like the real tragedy here would be to have experienced this and not have learned anything at all. Like, we can’t let this crisis go to waste.

PETER LEE: Yep. You know, you’re reminding me also, there’s another way to fail. You know, one way to fail is to make mistakes, but another way is to be wrong about an idea.

I think one of my most recent examples that really kind of stopped me dead in my tracks, I joined Microsoft Research in 2010. And, you know, I joined and I was doing a review of a bunch of projects. And there was one project that was in the Speech Recognition Group at Microsoft Research.

And, you know, in 2010, everybody knew that the way to do speech recognition was to use some form of Hidden Markov Models or Gaussian Mixture Models. But here, the speech team was describing the use of a layered cake of neural nets. And they even explained that, you know, the summer before, a guy named Geoff Hinton had spent the summer, along with a post-doc of his and maybe some students, and suggested the idea, and the research team decided to give it a try to see how well it worked.

And I knew Geoff because Geoff and I were both professors at Carnegie Mellon. Geoff, after 1991 or so, left and went to Toronto, but you know, he was at CMU when I started there. And I remember Geoff was working on neural nets, you know, back in the late 1980s. And so my first thought was, wait a minute, people are still working on this stuff?


PETER LEE: On neural nets? And why on earth would anyone do this? You know, everyone knows Gaussian Mixture Models are the future of speech recognition. And of course, you know, three or four months later, when the engineering results came in, you know, we realized, wow, we have a real revolution here because it just works so well.

And then maybe six months after that, you know, Andrew Ng and Jeff Dean over at Google showed that the same things held up for computer vision.


PETER LEE: Look at where we are 10 years later. It’s amazing. But I’ve reflected that if I had joined, if I had been hired to Microsoft Research a year earlier, none of this would have happened.


PETER LEE: And it just makes you think, how many times have I inadvertently, like, held the whole world back by making a judgment like that? It’s one of those near misses that really makes you think.

KEVIN SCOTT: Yeah, and it’s a hard thing, because even at a company like Microsoft that invests a lot in R&D, like, we still have finite resources and you have to have some way to focus. You know, because at the end of the day, the types of things that we’re building now, rarely are the work of a lone genius cranking away in their office, and, you know, like, they have their Archimedean epiphany, and all of a sudden, this big problem is solved. It’s usually the, you know, the work of layering and combining and collaborating. And so you do have to focus in some way, but like, I totally agree with you.

And, like, in a certain sense, you know, Geoff Hinton is almost heroic in the extent to which he stuck with that idea. Because people, I think now, you know, you’re just like, oh, deep neural networks, like this is, like, clearly the way. It’s the same way that the Hidden Markov Models and the Gaussian Mixture Models were, like, clearly the way that you did speech recognition 20 years ago or 10 years ago.


KEVIN SCOTT: I think both 20 and 10 years ago. But you know, like, just as obvious as that was then, it’s as obvious now that like, oh, well, this is clearly the way that you do computer vision and speech recognition and natural language processing. In 1991, not obvious at all. In fact, quite to the contrary.

I remember AI throughout my entire academic life, which was off and on from 1990 until 2003, when I joined Google. AI was not the hot thing to study.


KEVIN SCOTT: And neural networks like particularly so were like this sort of weird – weirdly looked upon thing. And yet, like he was convinced that this was something that had merit and stuck with it and, like, had to listen to all of the people for years and years and years telling him he was wrong. And then all of a sudden, he wasn’t and you know, he helped catalyze this huge amount of progress, and like now has a Turing Award.

PETER LEE: Well, this sort of relates back to what we were saying at the start of the conversation because there is a stick-to-itiveness in all of this in the face of a lot of doubts or even skepticism.

And I think it actually even relates to the trust issue that you raised earlier because there’s something about that. You know, when you demonstrate that sort of commitment, it’s one path, one ingredient in earning people’s trust.

If I think about the speech group, you know, 10 years ago at Microsoft Research, they probably in the back of their minds, maybe it wasn’t conscious, but they had to think, well, maybe this is worth a try. After all, this guy, Geoff Hinton, has been at this, you know, for more than a decade. And, you know, earning trust in that way, I think ends up being maybe one ingredient in all of this.

And then, you know, it all does come around to more urgent priorities. You know, like, it looks now like some of the things that we need to be able to do to remove carbon from the atmosphere or you know, find drugs for global pandemics faster, these sorts of things, it looks like they’re really going to depend on things like deep neural nets in a really fundamental way. And thank god that people did stick to these ideas and were willing to experiment.

KEVIN SCOTT: Yeah. You know, the really interesting thing that wasn’t obvious to me, even when I started doing machine learning work in 2003 is – so I left graduate school before I finished my dissertation, which was on dynamic binary translation. So I was doing a bunch of like deep systems-y stuff to try to figure out, like how much information you could recover from an executing stream of binary level instructions. You know, could you do alias analysis, like, with high enough precision that you can do any sort of like memory safety analysis at the binary level and like a whole bunch of other things like that.

And I stopped doing that and went to Google and pretty quickly was doing machine learning stuff, and I thought I would never, ever use any of my compiler domain specific knowledge ever again.

And like, one of the things that we’re seeing right now with the deep learning revolution is that there’s a whole bunch of really interesting algorithmic stuff happening in how you architect these neural networks and, you know, like what you do with data and whatnot, but the systems work that sits beneath it is very reminiscent, to me at least, of ‘90s era high performance computing and high performance systems software work, because we’re building supercomputers to train these things, like it’s a bunch of numerical optimization. It’s, you know, like programming languages matter again and like, there are very interesting sorts of programming languages often built on top of other PLs. So I don’t know, it’s like, this is another lesson for me, like, you know, things just seem to come around.

PETER LEE: Yeah. Well, it makes perfect sense, because when we’re talking about machine learning and AI systems today, they are staged computations. You know, right at the highest level there is the training stage, and then there’s the inference stage. But then when you break those down, each of those big stages is broken down into smaller stages.

And whenever you have that staging, all of those sort of dynamic compilation ideas become super relevant. It becomes sort of the key to making them practical and affordable to run at all.
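To make that staging idea concrete, here’s a minimal sketch in plain Python – no ML framework assumed, and the one-parameter linear model is purely illustrative. The training stage is a function that produces learned parameters, and the staging boundary is a closure that specializes an inference function to those parameters, which is exactly the kind of boundary a dynamic compiler can exploit:

```python
# Sketch of a staged computation: a training stage produces parameters,
# and the inference stage is specialized to them at the staging boundary.

def train(data):
    """Training stage: closed-form least-squares fit of y = w * x."""
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def make_inference(w):
    """Staging boundary: close over the trained parameter w, returning
    a specialized inference function (loosely, a 'compiled' model)."""
    def infer(x):
        return w * x
    return infer

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # noise-free samples of y = 2x
w = train(data)            # stage 1: training
model = make_inference(w)  # specialization at the stage boundary
y = model(5.0)             # stage 2: inference
```

The point of the sketch is the shape, not the model: once training and inference are separate stages, each can be optimized, cached, or recompiled independently.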

KEVIN SCOTT: Yeah. And a bunch of these computations, like the way that you express some of them looks very functional, and like there are a bunch of functional language compilation ideas that are useful now as well.


KEVIN SCOTT: It’s really interesting.

PETER LEE: It is. Well, in fact, it is functional. I mean –


PETER LEE: – you’re operating over some – some large network. And you know, each one of these stages is referentially transparent. You can, you know, remove one stage and replace it with another one, and there’s a modularity there, which is purely functional.
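As a toy illustration of the modularity Peter is describing – plain Python, with hypothetical stage names – if each stage is a pure function, then a pipeline is just their composition, and swapping one stage for another leaves every other stage untouched:

```python
from functools import reduce

def compose(*stages):
    """A pipeline is the left-to-right composition of its stages."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

# Hypothetical stages, each a pure function from a list to a list.
normalize = lambda xs: [x / max(xs) for x in xs]
relu      = lambda xs: [max(0.0, x) for x in xs]
double    = lambda xs: [2.0 * x for x in xs]

pipeline = compose(normalize, relu, double)

# Referential transparency: replace one stage and nothing else changes.
identity = lambda xs: xs
swapped  = compose(normalize, identity, double)
```

Because each stage depends only on its input, the substitution is safe by construction – the purely functional modularity in question.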

KEVIN SCOTT: Yeah, and look, it may be the most effective demonstration of the power and the promise of functional programming that anyone has ever had. Because the beautiful thing about these machine learning training programs that you express in something like PyTorch is they’re short, they’re functional, they’re concise, and you understand exactly what they’re saying. It’s not like you’re writing hundreds of thousands of lines of imperative code to build a transformer.


KEVIN SCOTT: It’s like usually a few hundred or, at most, a few thousand lines of code that you’re writing.
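To give a flavor of that concision – this is a sketch in plain Python rather than actual PyTorch, with a toy one-parameter model and a numerical gradient standing in for autodiff – an entire training program written functionally fits in a couple dozen lines:

```python
from functools import reduce

def loss(w, data):
    """Pure function: mean squared error of the linear model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data, eps=1e-6):
    """Central-difference gradient of the loss: no mutation, no hidden state."""
    return (loss(w + eps, data) - loss(w - eps, data)) / (2 * eps)

def step(w, data, lr=0.05):
    """One referentially transparent update: same inputs, same output."""
    return w - lr * grad(w, data)

def train(data, w0=0.0, steps=200):
    """The whole training loop, expressed as a fold over update steps."""
    return reduce(lambda w, _: step(w, data), range(steps), w0)

data = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # samples of y = 3x
w = train(data)
```

The shape – a pure loss, a gradient, a fold over update steps – is what frameworks like PyTorch or JAX express at scale, with automatic differentiation replacing the numerical gradient used here.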

PETER LEE: I find it really interesting that people who are working at the cutting edge of machine learning and AI have to be multilingual today in terms of programming languages. They have to have a facility to work back and forth between the mathematics and the algorithms and the systems kind of architecture, kind of all at the same time. And then increasingly, they have to be sensitive to fairness and –


PETER LEE: – you know, ethical issues.

And you know, if you just think about the, you know, the growth that a human being has to go through to be able to kind of think through that span of things, it’s no surprise that those people are rare today. Hopefully, they’ll become much less rare five years from now, but you know, they’re kind of hard to find.

And it’s also no surprise that more and more of the most brilliant minds on the planet are drawn to this field.


PETER LEE: It’s not just the goal of artificial intelligence, but it’s the fact that it kind of covers all of these different things to think about in such a wide span. It just attracts a certain type of brilliant mind.

KEVIN SCOTT: Well, and I think it also points to how important it is to have a computer science education for either undergraduates or graduates where you really are getting people exposed to a very wide range of ideas, everything from classic ethics, all the way to pretty serious, you know, statistics, linear algebra, and differential and integral calculus, to, you know, just sort of the core ideas of computation.


KEVIN SCOTT: And like, I think it's less important that you graduate with a four-year degree where you know the particulars of a programming language and all of its, you know, attendant APIs and whatnot. Because the thing that you and I have learned is all of that's going to change over and over and over and over again.

So the important thing is that you get the core concept so that you can adapt as new things come along and so that you can integrate things across all of these things that should not be silos of knowledge or expertise.

PETER LEE: Yeah. And I think one thing that we’ve both become is we’ve both become students again. You know, we spend a lot of time just reading papers and it’s fun in a way. It’s also humbling because you just realize how hard and deep some of the technical ideas are. But I feel like my own personal growth has really accelerated just from having a student mindset and taking the time to try to read what people do.

KEVIN SCOTT: Yes. I want to spend a few minutes before we run out of time on societal resilience and like, one of the things that you have certainly had a student mindset on is all of the things related to health care and the biosciences.

So it was really a bit of good fortune that you had already immersed yourself in this area and you were running, you know, Microsoft Health prior to the pandemic. And when the pandemic started, you know, I had just asked you to take over Microsoft Research and then the pandemic starts, and then the company asked you to help coordinate the work that we were trying to do to help support people with pandemic response.

So like talk a little bit about that whole experience and how that’s informed, what it is you’re trying to do right now with societal resilience and research.

PETER LEE: Well, I blame you, Kevin. (Laughter.) Because, you know, I was happily helping the company build a new health technology business. I was focused on that. And then you decided to hire me to, you know, lead Microsoft Research.

And so I agreed and I took that job on March 1st, 2020. That’s the date. And I remember that very clearly because then less than a week later, you and a couple of others, like our CEO, asked me to put that aside temporarily and help coordinate Microsoft’s science and technology response to COVID-19.

And you know, it was a heck of a way to start a new job, and it was total chaos because, you know, people were grokking just how serious this pandemic was. And we had within Microsoft Research and across the company, you know, hundreds of people stepping forward, wanting to help, looking for ways to help. Most of them had their own ideas, and they all had in their own personal networks connections to people outside of Microsoft that also had ideas, wanted help, or were parts of organizations that were in desperate need of our help. And so there was just this huge kind of cacophony of stuff going on.

And, you know, we had to very quickly get ourselves organized and mobilize a way to get focused attention on a manageable number of efforts so that we could actually help make a difference. And so, you know, you know all the work that happened.

But then this created another problem because in my mind, this all started in March of 2020, and I thought – and in fact, you and I both thought, well, the pandemic is going to be with us through the summer, but by the fall of 2020, we’ll be past it and we’ll be able to get back to our normal jobs. I’ll be able to get back to the job you hired me for. And so, August comes, September comes, and it’s clear, this thing is not over.

And then I had a management problem because I had a large number of researchers that were spending full time not doing their normal research jobs, but instead were working on projects in pandemic response.

And I looked around and I realized that it wasn’t just pandemic response, we had researchers working full time looking at the security of voting machines. We had researchers doing predictive analytics to understand where to put firefighting resources for major wildfires in Australia and California. We had researchers working on machine learning to accelerate new diagnostics for COVID-19. None of these were in anyone’s job description in Microsoft Research. And yet it would be wrong to say, you should stop doing those things and get back to your normal research.

And it also made us realize there’s something going on here. There’s a form of scientific research that we now call crisis response science that actually is legitimately a part of some people’s jobs in Microsoft Research.

And so, with that whole thought process, we wanted to allow some of our researchers to actually have as their full-time jobs doing crisis response science. And so we formed the new group called the Societal Resilience Group. It's led by Chris White under Johannes Gehrke in the Research at Redmond organization of Microsoft Research. And one of the first tasks, besides creating those job descriptions, is to define this new field of scientific research.

And it reminds me a lot of the 1980s, when the field of bioethics emerged. You know, we were mapping the human genome, and it became important to understand what the ethical considerations were for the future of genetic engineering. Out of that came a whole new research discipline called bioethics, which is now really vibrant and important. In fact, I went and gave a keynote at one of the recent bioethics conferences just to understand this better.

I think we're starting to see today the emergence of a new field in the same way that we saw the emergence of bioethics in the 1980s. Crisis response science, the scientific research that helps make societies and communities and people more resilient to future crises, is emerging as a new discipline.

And, you know, it’s something that we really are taking very seriously, how do we build our capacity to anticipate, absorb, and adapt to disruptions that might threaten people, communities, or societies? And I think it’s something that leads to some surprising structures.

For example, community empowerment: grassroots community leaders end up being really important. It helps establish trust, but there's also knowledge and insight there. And so, having elite researchers shoulder to shoulder with grassroots community leaders working on research problems together is a new form of collaboration that wasn't that common a few years ago, but is becoming sort of an everyday thing in this Societal Resilience Group.

KEVIN SCOTT: Yeah. I'm really happy that you've found the way to structure all of this energy and enthusiasm and intellect that people want to be able to focus on these problems, because I fully agree with you that we are facing an increasingly complex world, which isn't necessarily a bad thing, it just sort of is. Like, there are more of us humans now. I was thinking about this the other day: there are twice as many humans on the planet in 2021 as there were in 1972 when I was born, actually a little over 2X. And you know, population growth is slowing down, but I don't think we'll hit peak population until later in the century, or the end of it.

But where the population growth is happening is interesting, and what the impacts of climate change will be on conditions for those growing populations is interesting. I mean, even the basic thing that you just mentioned, these grassroots organizing things. One of the things my family foundation wanted to do throughout the pandemic – we're focused on how to break systemic poverty cycles, the things that hold generation after generation of families in structural poverty.

And we were trying to figure out how to, as quickly as possible, get underprivileged kids in a bunch of communities here in Silicon Valley back to high-quality education, because that's one of the things that holds people in poverty.

And you know, you’ve got a whole bunch of things that you have to do to get education to resume safely in a pandemic. And like, one of those things is like, you’ve got to get people vaccinated.

And the way trust networks work, like the way that people get to a level of comfort in taking a vaccine or, like, a medicine that didn't exist 24 months ago, is really different. So for you and me, the trust networks are fundamentally different than they are for other folks, and grassroots networks are very, very important there.

PETER LEE: That's right. And you know, you end up getting trust in lots of different directions. You know, in pandemic response, one of the things that we got involved in is a loose coalition of organizations and people referred to as The Fight is in Us. And so this was looking at the importance of convalescent plasma as a treatment for COVID-19 patients.

And, you know, if you’re the U.S. government or if you’re a big corporation like Microsoft and you step into some community somewhere in the world, you don’t automatically earn people’s trust at all. And that trust is actually not warranted because we also don’t understand everything that’s going on in the context of those communities.

And so in The Fight is in Us, that coalition, yes, it included big tech companies like Microsoft. Uber was involved. It involved big health care organizations like the Mayo Clinic and Johns Hopkins Medicine and so on. But it also included grassroots community leaders, people like Jaime Leibovitz, who is a community leader in the Hasidic Jewish community in New York City, or Joe Lopez, who's in the Hispanic community in the Houston area.

And these people were absolutely first-class citizens in this coalition and actually emerged as real leaders, not just for relationships, but actually contributing to the science.


PETER LEE: Actually earning named recognition on scientific research papers.

And so it’s an element that I think is going to be incredibly important because when you’re responding to a crisis, yes, there’s a research component, there’s a science component, there’s a financial component, but there’s also a political component to these things.


PETER LEE: And so, you have to find ways to be inclusive and work together in order for any of this to work.

KEVIN SCOTT: Well, I mean, this is one of the interesting things to me in general. So I do think that there is crisis response research that is thinking about what the trends are in human society and in technology and science, so that we focus our research efforts and build a toolkit of ideas and concepts and actual scientific artifacts and tools.

But there’s also this component that blurs the line between science and engineering and politics and sociology and all of these things. And like, you know, and I think these lines have been blurring more and more over the past decade or so as technology has had such a large impact on society at large.

And it may sound like a small thing, but I think, you know, one of the very encouraging signs to me is that you can have people from all of these different disciplines participating in these works as equals.

And, you know, it sort of goes back to this thing we were laughing at earlier, and you have a different telling of it: mathematicians think they're better than the physicists, and physicists think they're better than computer scientists, right? But you just can't have that in crisis response research; everybody has to have a full seat at the table.

PETER LEE: That's right. I think one of the biggest challenges is that normal scientific research, when it transitions to the real world, normally has the luxury of time. So like, if there's a new drug to treat some disease, you go through a whole bunch of trials, you publish papers, it gets debated at conferences, and over the course of, say, five to 10 years, it gets thoroughly discussed and a scientific consensus emerges. When you're dealing with a crisis, that luxury of time evaporates.


PETER LEE: And so another reason that crisis response science, I think, is a different discipline is because of that. And if the crisis has the power to bring down power structures, you know, bring down governments, you know, like a global pandemic has that power, then it also becomes political and very public.


PETER LEE: And all of the debate that normally happens in the kind of cloistered halls of academia and big research labs like Microsoft Research, it becomes exposed to the world. You know, all the sausage making gets exposed.

And so, I think as researchers and as a research community, we’re all going to have to learn how to do that well and do that correctly. And there’s tremendous power in being explicit about it, recognizing this is what’s going on and understanding that context, because once you understand that, then you have a chance to write it down on paper –


PETER LEE: – teach it and – and become better at it in the future.

KEVIN SCOTT: Yeah, and I think one of the big challenges there, and like, we’re not going to solve this in this podcast today, but there are many, many challenges, and one of them is as everybody gets exposed to the sausage making of science, like it can be a little bit disconcerting if you’ve never seen it before.

I mean, time and again in this pandemic, people have looked, and continue to look, to science for a degree of certainty that science can probably never provide. Because the idea of science is that it is a process for discovering truth.


KEVIN SCOTT: And it is a messy process.

PETER LEE: Well, to my mind, we’re coming full circle in our conversation because, you know, we started it off with researchers, a researcher’s life, always confronting skepticism and doubt. And, you know, we’re kind of going through that now because the public – let’s just take the vaccines. You know, scientists are being confronted with, you know, the doubt and skepticism because they’re being forced to be much more open and more preliminary with the work that they’re doing than they would normally be. And you know, it’s not easy for anybody.

I actually have a lot of empathy for doubters.


PETER LEE: Because, you know, in fact, as researchers, you know, you and I were trained to be skeptics.


PETER LEE: And you know, that’s normally a good thing. But it just becomes –

KEVIN SCOTT: And in fact, honestly, like you and I probably dispositionally, like were born skeptics.


KEVIN SCOTT: Like, I was constantly asking why?


KEVIN SCOTT: Why, why, why? Like, I want to understand why. And if I was unconvinced at your why, like, I wanted more.

PETER LEE: Yes. And so, I think what we want to do is to understand that it’s fine and, in fact, appropriate to be skeptical, but to not allow your skepticism to become such a hardened position that you’re closed off to future evidence and future learnings.

And that is at core, the scientific method –


PETER LEE: – that – that we are hoping that the – that the world can adopt.

KEVIN SCOTT: I think that is very well said, because this is what we've seen throughout the whole history of science, and particularly since the Enlightenment, when, you know, we got a scientific method. Scientific theories rise and they fall. We have believed all sorts of things about the world that have proven outright false, or that turned out to be a special-case understanding of a more complicated, nuanced reality.

And so, like this whole scientific pursuit is just dealing with all of this messy complexity and trying to get closer and closer to what truth looks like, which means that sometimes you have to, you know, backtrack. And that I think, is just hard for folks in general to like, watch, you know, very smart people who believe something and then what is very natural to them as researchers, say, okay, we were wrong about that, like, you know, here’s the thing that looks more accurate now. Like that, that can be a very confusing thing that makes you wonder, well, can I trust these folks or not.


KEVIN SCOTT: And you know, if you just understood how the scientific process works, you’re like, yeah, actually, I trust someone who goes through that journey way more than I trust someone who is just absolutely dogmatically rigid about a point of view.


KEVIN SCOTT: So we’re almost out of time, but one thing that I like to ask everyone in these podcasts before we end, and I suspect I know the answer for you, is what do you do for fun when you’re not thinking about medicine or computer science or running Microsoft Research, or like any of the cool stuff that you get to do in your day job?

PETER LEE: Yeah, I always feel a little embarrassed by that. So thanks for outing me publicly. (Laughter.)

But you know, all my life, I’ve been interested in cars and auto racing. In fact, you know, I became a certified auto technician and all this other stuff when I was younger.

And then one of my sisters and I, you know, were very interested in auto racing and got into kart racing and then Formula Ford, and then later, sports car racing. But then, you know, you have a life and kids and so on and that all stops.

Last March, at the same time that you hired me into this role, all of the major professional car racing series, like Formula One, IndyCar, NASCAR, got delayed. They all normally start in March every year, but their starts got delayed because of the pandemic.

And what happened is that a remarkable number of the very best professional car racers on the planet migrated to online simulation racing on platforms like iRacing.

And what was cool was that if you were also in iRacing, you might be able to go wheel to wheel with, you know, Dale Earnhardt Jr., or Lando Norris or, you know, Fernando Alonso. Like, incredible.

And so for me, this was like, I had to do this because, you know, okay, I was never going to become a professional race driver, but I could actually drive with these guys.

And so, all of the time I would normally spend in airports and airplanes, you know, flying around somewhere in the world, has been channeled into simulation racing.

KEVIN SCOTT: That’s awesome. And I have seen your simulated driving rig, which is really cool, and like, I just wasn’t aware of how good this simulation tech had gotten. Like, you can have a pretty seriously immersive experience in these things. You know, and as far as hobbies go, like, I’m guessing it’s no more expensive than being an amateur woodworker and like filling your garage full of woodworking equipment, right?

PETER LEE: Right. Well, you know, iRacing, which is the largest simulation racing platform, has about 200,000 subscribers. So in our business that’s not a huge number, but you know, it is a community that takes this very seriously and invests in some pretty significant equipment.

KEVIN SCOTT: Yeah. And you got pretty good, right?

PETER LEE: I’m doing okay. I’m still an amateur, but yeah, I’m – you know, I’m having some success.

KEVIN SCOTT: I think it’s awesome. I mean, the reason I ask this question is it is really amazing to me, the interesting things that humans put themselves up to doing.

And the thing that I love is just watching that intensity of like someone really, really, really getting into something and just learning everything about it and trying to get as good as they possibly can at it. Like whether or not you’re a professional, like just that journey is so inspiring.

PETER LEE: Well, these things intersect because I should publicly thank you. You’ve – you know, you 3-D-printed some nice parts for my sim rig. (Laughter.)

KEVIN SCOTT: Yeah. I mean, that is a thing that I have – I’ve gotten really into, especially over the past couple of years, because like a lot of my hours that would be spent in airports and on airplanes have been learning how to be a better machinist. And yeah, so it’s always fun. It’s always fun when things can intersect.

PETER LEE: Well, someday, maybe we’ll both retire and we can form a business that, you know –


KEVIN SCOTT: Immersive simulation rigs for people.

Yeah, that – that would be – that would be awesome.

All right. Well, I think – I think we are officially out of time. This was so awesome, Peter. Obviously, on behalf of Microsoft, I'm very grateful for everything that you do, and especially the extent to which you went above and beyond over the past year to help the world with pandemic response. Just as a human being, I'm super grateful for that. And like, as always, it's been a super interesting conversation.

PETER LEE: Well, the thanks are all mine. I think working all together like this has allowed us to accomplish a few things.

KEVIN SCOTT: Awesome. All right. Well, with that. Thank you so much for your time.


CHRISTINA WARREN: So that was Kevin’s chat with Dr. Peter Lee, Corporate Vice President of Research and Incubation at Microsoft. What an amazing conversation.

KEVIN SCOTT: Yeah, thank you. I always enjoy chatting with Peter. You know, we share these roots from earlier in our careers: we were experts, he more so than me by a mile, in a particular flavor of computer science. And you know, he's just transformed several times over the course of a career, which, you know, is sort of an interesting thing that we all do. But, you know, it's sort of culminated in this very interesting place.

And, like, particularly over the past 18 months, he was just sort of, you know, in the right place at the right time to be able to really apply all of these things that he's learned and all of his leadership skills to helping with crisis response through research. And using that even as a pattern for how to systematize a new type of research, so hopefully we can be better prepared for the next set of crises that inevitably will come.

CHRISTINA WARREN: Yeah, no, I thought that was so interesting and obviously so great for the world and for us that he was in that kind of right place at the right time.

But I was really struck, you know, given his background, and it makes sense because, you know, he was a professor and he has had, as you said, this distinguished career across different areas. But I love how he was talking about his student mindset and that he never stops reading and learning and trying to figure out the next thing. That's amazing.

KEVIN SCOTT: Yeah, the world is, in my opinion, infinitely fascinating. And one of the things that I do see that’s sort of correlated with an ability to have a lot of impact is just having a lot of curiosity, like not being satisfied with knowing something about one area, but just being curious to learn more and more and more and more.

And we talked about this a little bit. I’m convinced that learning is almost like an exercise that you do for your brain.


KEVIN SCOTT: Because the more time you spend learning, the easier it is to learn. And so, just having that student mindset throughout your life and like not just saying, okay, well, you know, that stage of my existence is over with and like everything is going to be in stasis now, that is not a winning strategy for the complicated world that we live in.

CHRISTINA WARREN: No, it’s not, it’s not. But it’s interesting because – and maybe this is just anecdotally, but I do run into people who I think sometimes are afraid or feel like, “Oh, well, good, you know, I’ve reached this certain stage, I don’t have to learn anymore.”

So, you know, seeing someone like him who obviously has this insatiable curiosity and has this student mindset, and then take that from a leadership perspective and take that into the areas and the groups that he runs, I think is really fantastic.

KEVIN SCOTT: Yeah, I totally agree. Helping other people become better learners and encouraging the curiosity that I think we all have in us is a really important leadership trait and, you know, I think he’s had that for a while. Like, you just wouldn’t choose to be a computer science professor if you weren’t interested in cultivating that learning process in other people. But sticking with it and like understanding that that’s just sort of an important part of your job as a leader is like just important and great.

CHRISTINA WARREN: Yeah, no, I totally agree.

The other thing I was struck by in the conversation that you had, and this kind of ties into the learning a little bit because they are related, was learning from that fear of failure. And as he pointed out, there are real, you know, growth opportunities that come from that. But so many times, you know, with innovation, people are afraid to try because they don't want to fail, when that's what you have to do.

I mean, I know just from my own experiences, some of the failures I've had in life have been the most instrumental. But you know, it was great hearing you two talk about that, because I think a lot of times people just assume, especially about people who have been very successful, that they either have always succeeded or that they don't still have that in the back of their mind, you know?

KEVIN SCOTT: Yeah, I think this is such an important part of the human experience. The fear of failure causes people to do all sorts of weird stuff. In lots of people, it prevents them from even making an attempt.


KEVIN SCOTT: And sometimes it makes people attempt things that aren’t nearly as ambitious as the things that they’re truly capable of accomplishing.

And I understand why; failing is deeply unpleasant. Like, it never gets to the point where failure feels great. But you know, I learned this from my dad, who failed many times when I was a kid, and the extraordinary thing that I always watched him do was he just dusted himself off and, you know, got back up, even when the failure was excruciatingly painful, and tried again.

And you know, part of that is like, we were poor, and so he didn’t have much in the way of choice.


KEVIN SCOTT: But having that resilience and just, you know, like, okay, well, I failed, like, no sense wallowing in it, let’s just try again and like we will – we will use what we learned from last time to try to make it better this time.

CHRISTINA WARREN: Yes. Yes, exactly. And I mean, I think that ties in so well with what Peter does and the work he works on because it is research. It’s incubation. It’s about innovation. And you’re going to have those things that work or that don’t, but if you weren’t willing to try, if you weren’t willing to fail, think about all the things that we wouldn’t have accomplished in this world.

KEVIN SCOTT: Yeah, he made this really good point as well about this anecdote with Geoffrey Hinton and the researchers who, early in his tenure at Microsoft, were showing him the new deep neural network stuff for doing speech recognition.

And, you know, it’s another aspect of this sort of failure mindset, like, you know, watching people do something that is against the norm, that is unusual and new, and sort of saying, look, these are really smart people, I’m going to trust them to let them potentially fail in the –


KEVIN SCOTT: – attempt at something interesting versus like, oh, I’m going to protect them from failure by shutting this down now, like that is a very hard thing to do. And it is – yeah, look – look, it can have catastrophic consequences for all of us, just curtailing these interesting, new avenues of exploration.

CHRISTINA WARREN: Yeah, exactly. I mean, I'm so glad that, because of who he is, he was able to let them do that, because think of all the innovation and all the massive changes in neural nets and speech recognition that we might not have, you know, if they hadn't taken those chances. So I love that. It's so great.


CHRISTINA WARREN: Okay, that’s our show for today. You can send us an email anytime at [email protected]. We’d really like to hear from you. Thanks for listening and stay safe out there.

KEVIN SCOTT: See you next time.