Behind The Tech with Kevin Scott - Behind the Tech: 2020 Year in Review

[MUSIC]

KEVIN SCOTT: I think it’s really important for all of us to sort of share our stories because, you know, there are interesting lessons in there. But maybe most importantly, understanding each other a little bit better shows us how I do honestly believe that we are vastly, vastly more similar than we are different, and that sometimes, these stereotypes that we have of each other are just obstacles that stand in the way of each of us achieving our full potential.

[MUSIC]

KEVIN SCOTT: Hi, everyone. Welcome to Behind the Tech. I’m your host, Kevin Scott, Chief Technology Officer for Microsoft.

In this podcast, we’re going to get behind the tech. We’ll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did. So, join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what’s happening today. Stick around.

[MUSIC]

CHRISTINA WARREN: Hello and welcome to a special episode of Behind the Tech. I’m Christina Warren, Senior Cloud Advocate at Microsoft.

KEVIN SCOTT: And I’m Kevin Scott.

CHRISTINA WARREN: And today we’re doing our Year in Review episode, and this means that we’re going to revisit a few fascinating conversations with our guests from 2020 with topics ranging from artificial intelligence to synthetic biology.

I mean, 2020 – (laughter) – wow. It’s been, I think I can say, an unprecedented year full of challenge, but also opportunity. Kevin, if you had to come up with a tagline for 2020, what would it be?

KEVIN SCOTT: You know, every time somebody says that 2020 is an unprecedented year, I always go back to the scenes in The Princess Bride with the Sicilian where he keeps saying, “Inconceivable.” And at some point, Mandy Patinkin’s character says, “I don’t think that means what you think it means.” (Laughter.)

Yeah, I don’t know what my tagline – I mean, like my children have declared 2020 “The worst year ever.” But, you know, I think even though it has certainly been perhaps the most challenging year that I have experienced in my adult life because of the pandemic, because of this reckoning that we’ve been having in public with, you know, a bunch of social issues that, like, we should have contended with, addressed and resolved a long, long while ago, I do think that it has also been a year of unbelievable progress on several different scientific fronts, and where we’re getting an opportunity unlike any we’re likely to ever have again in our lifetimes to think very, very carefully about our lives, how we show up in society, and for us in particular in the technology industry, how it is that we map what we’re doing onto an increasingly obvious set of needs that society, the citizens of the world, and the planet are going to have in the future.

CHRISTINA WARREN: Yeah, no, I think you’re exactly right. And I’m glad that you kind of turned that whole “worst year ever” thing into something that could be a little bit more hopeful, because I think you make really great points that a lot of people have really stepped up and we’ve really had to start to address things that maybe we haven’t before. And that is reassuring and maybe a little bit hopeful.

KEVIN SCOTT: Yeah, and you know, the other thing, too, that I will say is that we, in a time of crisis like this, we all have an opportunity to pay very, very close attention to what’s going on, because the things that we get to observe now hopefully won’t happen again for, you know, for many, many good reasons. Like, we should never wish another pandemic on ourselves.

But because of what’s going on and the urgency with which we are trying to address these problems, some things are just moving with extraordinary speed. And, you know, that’s been one of the themes of the season, as we’ve talked to more people doing work in the bio sciences.

CHRISTINA WARREN: No, you’re exactly right. And so, throughout this remarkable year of 2020, we have had quite the lineup. As you mentioned, we spoke with synthetic biologist Drew Endy and neuroscientist Tom Daniel, with Daphne Koller about digital biology and Oren Etzioni about artificial intelligence. And we also had an insightful chat with Microsoft Research’s Eric Horvitz. And, of course, Greg Shaw interviewed you about your amazing book, Reprogramming the American Dream, which explores how we might ensure that AI better serves us all.

We also met science fiction writer Charlie Stross, who talked about what a real frustration it is to come up with an original idea in times like these when truth really is stranger than fiction.

KEVIN SCOTT: Yeah, so, so true. And I’m really grateful for the many extraordinary guests we’ve had on this show over the past year. I sometimes have to pinch myself because I get to talk to such great people and have such awesome conversations.

If I had to pick a common theme from all of these conversations, I’d say it centers around the very basic question of “How can we use technology to better the lives of everyone on the planet?” And you can see this across the board.

So, you know, the folks who are working in the biological sciences are using technology in these incredible ways, like, where “unprecedented” might actually be the appropriate adjective to describe some of the ways that they are accelerating progress in this field by taking this intersection of the biological sciences and automated experimentation, and high precision instruments, and artificial intelligence. It’s just this incredible mixture of things that really is helping us solve problems that we haven’t really been able to solve in quite these ways before.

And then, you know, you look at Oren Etzioni, who is at the Allen AI Institute, which is just doing really incredible work, trying to build more powerful artificial intelligence systems in service of solving some of the big problems that we have on the planet and will have for years to come.

And then, you know, obviously, I always love chatting with Eric Horvitz, our Chief Scientist. Eric has one of the most brilliant and interesting sets of experiences of anyone I know. Yeah, I don’t know too many people who have a PhD in computer science, are medical doctors, and have spent their entire career leading a research institution like Microsoft Research. And so, getting his perspective on things like his career path was extraordinary and wonderful.

And then, Charlie is one of my science fiction heroes. It was, like, just an unbelievable pleasure to be able to talk to him and to just sort of see how someone like Charlie is so thoughtful about thinking about the human condition in the context of all of these sort of interesting technological phenomena and phenomena shaped by technology.

CHRISTINA WARREN: Yeah, I agree. I mean, I think that it’s been fantastic to hear from so many smart people who think deeply about the work that they’re doing, and to think about the impact that that work is going to have on all of us.

KEVIN SCOTT: Yeah, indeed.

CHRISTINA WARREN: So with that, let’s get started. First, we’ll hear from digital biology and machine learning pioneer Daphne Koller.

[MUSIC]

DAPHNE KOLLER: I think one of the very, very thin silver linings around this very dire situation that we find ourselves in is that there is, I hope, a growing appreciation among the general public for what science is able to do for us today, and for how much of that ability rests on decades of basic science work by many, many people, much of which is publicly funded work at academic institutions. Without that level of progress, the concept of, say, creating a vaccine in 12 months would have been completely ludicrous a few years ago.

[MUSIC]

KEVIN SCOTT: Yeah, so Daphne says it well. She is actually a computer scientist by training. As a Stanford faculty member, she began working in machine learning with traditional datasets and quickly realized that she wanted to pursue datasets that were more richly structured and more aspirational. So that’s the reason she got into biology and medicine.

CHRISTINA WARREN: And Daphne’s work is fascinating, in part because it relies on the collaboration of scientific fields that historically have spoken different languages. And Daphne describes her work at insitro, the company she founded, which applies machine learning to the research and development of pharmaceuticals.

[MUSIC]

DAPHNE KOLLER: So, the premise for what we’re doing really emerges from what I said a moment ago, which is that this last decade has been transformative in parallel on two fields that very rarely talk to each other.

We’ve already talked about the advancement on the machine learning side and the ability to build incredibly high accuracy, predictive models in a slew of different problem domains if you have enough quality data.

On the other side, the biologists and bioengineers have developed a set of tools over the last decade or so, each of which has been transformative in its own right, but together, they create, I think, a perfect storm of large data creation — enabling large data creation on the biology side, which, when you feed it into the machine learning piece, can all of a sudden give rise to unique insights.

And so, some of those tools are actually pretty special and incredible, honestly. So, one of those is what we call “induced pluripotent stem cells,” which is — “we” being the community, not “we” at insitro — which is the ability to take skin cells or blood cells from any one of us and then by some almost magic, revert them to the state that they’re in when you’re an embryo, in which they can turn into any lineage of your body.

So, you can take a skin cell from us, revert it to stem cell status, and then make a Daphne neuron. And that’s amazing, because that Daphne neuron carries my genetics. And if there are diseases that manifest in neuronal — in a neuronal tissue, you will be able to potentially examine — assay those cells and say, “Oh, wait, this is what makes a healthy neuron different from one that carries a larger genetic burden of disease.” And so that’s one tool that has arisen.

A different one that also is remarkable is the whole CRISPR revolution and the ability to modify the genetics of those cells so that you could actually create fake disease — not fake disease, because it’s real disease, but introduce it into a cell to see what a really high penetrant mutation looks like in a cell. And then, commensurate with that, there’s been the ability to measure cells in many, many, many different ways, where you can collect hundreds of thousands of measurements from each of those cells so you can really get a broad perspective on what those cells look like, rather than coming in with, “I know I need to measure this one thing.” And you can do this all at an incredible scale.

So, on the one side, you have all of this capability for data production, and on the other side, you have all this capability for data interpretation. And I think those two threads are converging into a field that I’m calling “digital biology,” where we suddenly have the ability to measure biology quantitatively at an unprecedented scale, interpret what we see, and then take that back and write biology, whether it’s using CRISPR or some other intervention to make the biological system do something other than what it would normally have done.

So, that to me is a field that’s emerging and will have repercussions that span from, you know, environmental science, biofuel, bacteria or algae that do all sorts of funky things like suck carbon dioxide out of the environment, better crops, but also, importantly for what we do, better human health.

And so, I think we’re part of this wave that’s starting to emerge. And what we do is take this convergence and point it in the direction of making better drugs that can potentially actually be disease modifying, rather than as in many existing drugs, just often just make people feel better, but don’t really change the course of their disease.

[MUSIC]

CHRISTINA WARREN: That was Stanford computer science professor and CEO Daphne Koller talking about her work at her company, insitro.

KEVIN SCOTT: Yeah, and we should also note that this year’s Nobel Prize in Chemistry went to Emmanuelle Charpentier and Jennifer Doudna, the two women responsible for the development of CRISPR, which is, for folks unfamiliar, the first precision technology allowing human beings to alter the genome of humans or other organisms.

CHRISTINA WARREN: Yes, I was so glad to see that award.

Next, we’ll hear from another Stanford PhD, Drew Endy. Drew is a member of the Stanford University bioengineering faculty, and his research teams have pioneered amplifying genetic logic, rewritable DNA data storage, reliably reusable standard biological parts and genome refactoring. Here’s Drew.

[MUSIC]

DREW ENDY: So, we’re biology, right? Everybody listening to this is biology. We all have biology. So you know, like, to say “biology is kind of important” is a gross understatement. So how do we explain the fact that we tend to take biology for granted? And I think it’s because, well, we just get biology. And so, there’s a way of thinking about the living world, which is the living world exists before us, and we are a part of it and we inherit it. And we can’t do anything about it. It’s just – it is what it is. And before the mid-19th century, not only is it “it is what it is,” but “it is what it is” doesn’t change. This is the pre-evolutionary view.

Now, post-Darwin and colleagues, we have another cultural perspective on biology. It exists before us and it is what it is, but it changes over time through this evolutionary process. And we all know well that that’s controversial, still, culturally, for some, right? Do I have the pre-evolutionary view of biology or the post? But from my point, as an engineer, it doesn’t really matter. Everybody in either of those tribes is just the living world. We just take it for granted. It is what it is. And it’s not that we don’t care about it, but we don’t really think about it as this substrate, as this type of material.

And then a generation ago, starting 1970-ish, we get first generation genetic engineering and now we’re getting second generation genetic engineering. And suddenly, we get to inscribe human intention into living matter very crudely at first, but we’re getting better at it. And so, this is something of our time.

This third reality, that we can express and inscribe human intention in living matter, is really a third cultural perspective. And it forces us to confront, ultimately, what do we want to say and what do we wish of our partnership with the living world, to the extent we can partner with it, to the extent that we can take responsibility for our writing, so to speak?

Now, what are people good at? People are very good at caring about people. And so, of course, health and medicine are a big deal. But it doesn’t stop there. And when I take a look at what’s going on, like, just to get some numbers out in the conversation, how’s biology powered? Well, right now, it’s mostly powered by photosynthesis. Well, how much? And the answer is 90 terawatts, plus or minus, 70 terawatts of photosynthesis on land and 20 terawatts in the oceans.

What’s 90 terawatts? Well, civilization’s running on, what, 20 terawatts these days, plus or minus? So, okay, that’s interesting. The energy powering the natural living world is four and a half times the energy consumed by human civilization. Huh.
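For the record, the arithmetic Drew is doing here uses his own rough, stated figures, and works out as:

```latex
\underbrace{70\,\mathrm{TW}}_{\text{land}} \;+\; \underbrace{20\,\mathrm{TW}}_{\text{oceans}} \;\approx\; 90\,\mathrm{TW},
\qquad
\frac{90\,\mathrm{TW}\ \text{(photosynthesis)}}{20\,\mathrm{TW}\ \text{(civilization)}} \;=\; 4.5
```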

Now, you asked me, like, what’s the big deal? How about civilization-scale flourishing, because what’s biology doing with those joules, that energy coming in? It’s organizing atoms, right? So, biology is operating at this intersection of joules, the energy, the atoms, the material and bits, by the way, right, the DNA code, which is – which is abstractable information.

And so, we’ve got this stuff, this living matter. It’s atomically precise manufacturing on a planetary scale, operating at almost 5x civilization. What should we be looking at? Lots of individual things, of course, vaccines here and there, a big deal. But the big prize I would submit for consideration is civilization-scale flourishing where we can provision for 10 billion people rounding up without trashing the place. And that’s never been true before because we’ve never understood biology in the way we’re approaching it, both as a science and engineering discipline.

And if I go away, right, and if all of bioengineering goes away at Stanford or MIT, both – not that I’m advocating for that, obviously – like we’re still running on this trend of we’re understanding we’re about life and we’re getting better at tinkering with it. Those trends will continue for the rest of our lives.

[MUSIC]

KEVIN SCOTT: We had a really great talk with Drew. You know, I think the eye opening thing for me from listening and chatting with Drew in our podcast, and the other conversations that I’ve had with him is that he really does think like both a scientist and an engineer. So, he is trying to turn many things about biology, which is incredibly tricky, into engineering platforms that we can then use to go build things with proactively. And, like, that is not a thing that human beings have been able to do in the biological sciences, historically. So, it is a very, very radical shift in the way that we think about these disciplines.

CHRISTINA WARREN: Honestly, it blows my mind and I love it. Here’s a bit more from your conversation with Drew Endy.

[MUSIC]

KEVIN SCOTT: Yeah, I think that was one of the quotes or things that I took away from our first meeting, the fact that we don’t completely understand a single cell of the human body, which, you know –

DREW ENDY: Or any cell, or any cell at all, like, even the simplest microbe. There’s not a single microorganism on Earth we understand completely.

KEVIN SCOTT: Yeah, and we’re sort of tangibly wrestling with this right now. You’ve got this SARS coronavirus 2, this little, you know, 50-to-100-nanometer particle that is, like, really doing a number on civilization right now. And, you know, like, I’m sort of glad that it’s happening now versus 30 years ago, because we have, as a matter of fact, come a very long way in our understanding of these biological systems over the past several decades.

But still, you know, I think we’re, in many ways, completely flummoxed by the mechanism of this virus and, you know, why it does one thing to one person and another thing to another. And like, even when you get down to the, you know, we sort of got lucky, you know, as you mentioned in that first meeting that we had a solved structure for the spike glycoprotein pretty quickly in the outbreak.

And I know a bunch of work that people have done to simulate in computers the interaction of that spike protein with these ACE2 receptors in the human body, which is the mechanism that the virus uses to invade a cell. But, like, even those computer simulations are relatively low resolution compared to the actual in vivo interactions of that virus spike protein with the cell. So, like, we do have a long way to go, still.

DREW ENDY: Yeah. And honestly, we’re playing. We’re not serious about biology yet. We’re not treating biology like a strategic domain. When an enveloped RNA virus can take out a carrier task force, something that no number of Chinese submarines can do, apparently, and all we can do is F-15 flyovers to celebrate the healthcare workers, that means we are not taking biology seriously. We are misspending our treasure.

Thirty years ago – 30, 40 years ago, by the way – it was HIV, and we had that experience. So here’s a question I’m wrestling with. Why in infectious disease and epidemiology is it okay for us to adopt a strategic posture of, “Let’s wait till we’re surprised”? I don’t know of any important strategic domain where, you know, the community gets together or the leaders get together and say, “Well, we’re really worried about this issue. And so, our strategy is going to be we’ll wait for something to happen and then we’ll react. And let’s get better at reacting.” Like, that’s bizarre, and I think it’s linked back to “biology happens to us.”

[MUSIC]

CHRISTINA WARREN: So, that was Stanford professor and bioengineer, Dr. Drew Endy.

KEVIN SCOTT: Okay, and next, we’ll hear from neuroscientist and bioengineer Dr. Tom Daniel.

CHRISTINA WARREN: Yeah. Tom is a professor and faculty member at the University of Washington. His research and teaching meld neuroscience, engineering, computing and biomechanics to understand the control and dynamics of movement in biology.

So, if I remember correctly, Kevin, you and Tom spent a good amount of time talking about flies, like – buzz buzz – flies.

KEVIN SCOTT: Yup. Tom does this really fascinating work exploring how insects navigate the world by looking at their neurobiology and electromechanical systems. And he told us all about these things on flies called halteres. According to Tom, halteres are derived from the fly’s hindwings, and they are these super tiny non-aerodynamic knobs that flap just like wings. Just completely fascinating. So, I will let Tom explain it. He can do a much better job than I can.

[MUSIC]

TOM DANIEL: They’re like tiny dumbbells that the fly oscillates in counterphase to the wings. They’re so small, there are no aerodynamic forces, but they’re packed, they’re festooned with sensory structures. As it turns out, they’re like this little knob on a stick, and as that vibrates, it experiences bending forces – but if the fly rotates in a direction orthogonal to the flap, it generates a Coriolis force, making the haltere a gyroscopic sensor.

And lo and behold, these systems are exquisitely sensitive to rotational forces. So, they’re basically measuring – I apologize for the math – the cross product of their flap (Laughter) with their body rotation, okay? And so we had this idea that they’re physically able to respond to Coriolis forces, but we really wanted to nail down whether the neural system actually has the equipment to measure that. And so we were able to stick electrodes into the neurons that go into these tiny modified hindwings and measure their encoding properties, and you can show that they encode information at astronomically high rates, and do so for Coriolis forces.

But that sort of led to an interesting question: these are what you would call a vibrating structural gyroscope, which is basically the same idea that you have in all these gyroscopic sensors in your cellphone, or anything else, and they operate at a tiny, tiny fraction of the energy cost. I’m not going to stick a fly inside my cellphone. (Laughter) But bear with me, we do some odd things like that.
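The quantity Tom describes, the cross product of the flap velocity with the body’s rotation rate, can be sketched in a few lines of code. This is purely an illustrative toy, not anything from Tom’s lab; the vectors and magnitudes below are invented:

```python
# Toy sketch of the haltere as a vibrating-structure gyroscope.
# The Coriolis acceleration on the oscillating haltere tip is
#   a_c = -2 * (omega x v)
# i.e. proportional to the cross product of the body's rotation
# rate (omega) with the flapping velocity (v).

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def coriolis_acceleration(omega, v):
    """Coriolis acceleration per unit mass: -2 * omega x v."""
    return tuple(-2.0 * c for c in cross(omega, v))

# Haltere flapping along z while the body rotates about x:
v = (0.0, 0.0, 1.0)      # flap velocity (arbitrary units)
omega = (0.5, 0.0, 0.0)  # body rotation rate (arbitrary units)
print(coriolis_acceleration(omega, v))  # acceleration along y
```

The resulting acceleration is orthogonal to both the flap direction and the rotation axis, which is why strain sensors at the base of the oscillating knob can be read out as a rotation-rate signal.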

[MUSIC]

CHRISTINA WARREN: It’s so wonderful and weird. (Laughter.) I really, really like it. And here’s another bit from later in Kevin’s conversation with neuroscientist Tom Daniel.

[MUSIC]

KEVIN SCOTT: You have this beautiful point of view because you’ve been doing this for a while, so what are some of the interesting things that have changed in the field, other than like it being easier to do some of this interdisciplinary stuff?

TOM DANIEL: Yeah, I would say there are probably three big transformations today that are going to propel the field much further forward than I will see in my career. Candidly, ML methods – machine learning is coming to bear on a vast number of problems in neuroscience, everything from imaging to, you know, how do we handle the massive data flowing in from neural systems, how does a brain handle massive data, can ML give us some insight?

So, as we said, not too long ago, there’s lots and lots of channels coming in. That’s a hard problem to do in traditional control theoretic approaches, right? It’s just hard, and by the way, they’re nonlinear. You know, ML methods, I think the advent of AI and ML, and our ability to grapple with massive data is transforming the field of neuroscience, period.

It’s transforming the field of movement control. We have the same problem in understanding how multiple actuators operate a dynamical system and how billions of motor molecules conspire to create movement in muscle. These are all problems that demand extreme advances in computation, not just the hardware of computation, but the ML methods that are coming about. So, even at my late stage of career, I’m finding myself having to learn more and more ML methods. This is great, this is exciting. So, DNNs, even simple, standard classification problems, are becoming increasingly important. That’s revolution one that’s been going on.

Revolution two is, of course, devices – the advances in device technologies. So, an example of that would be the microfabrication of electrodes that you can implant in neural systems that record from hundreds of simultaneous sites. I almost said a thousand, because it’s at about 900-and-something, I think, on the latest sharp electrode developed for mouse brain recordings, okay?

Those are now device technology, and of course, the ubiquity of microfabrication is influencing even how we make electrodes interfacing with natural systems. Okay, so now you have these two things. You have ML methods, device technologies, hand-in-hand, transforming our ability to understand the encoding and decoding processes of natural systems.

So, what’s the third revolution? The third revolution of course is gene editing. Where is gene editing coming into all of this? Well, our ability to look at neural circuits depends on our ability to look at variants in these neural circuits to turn them on, to turn them off, to use optogenetic methods, to use CRISPR, to change the chemosensory pathway on the antenna of an insect with really awesome electrodes inserted into it, and ML methods listening in, right?

So, those are the three technologies I think are transforming not just neuroscience – I think they’re all mutually transforming each other. That is, as we need to grapple with ever more complex datasets, I think that’s driving development of ML. I think it’s driving how we manage and control and handle rapid information flow. Just like real brains, computers are faced with this real-time challenge; even a brain the size of a sesame seed does astronomical amounts of computing with tiny amounts of energy, so there are lessons to be learned both ways. You can tell I’m really excited, because I see these synergies in the sort of triumvirate of advances in gene editing, advances in device technology, and advances in ML.

[MUSIC]

CHRISTINA WARREN: So, that was University of Washington professor Dr. Tom Daniel. Next, you’ll hear some highlights from a recent episode with Dr. Oren Etzioni. Oren is Chief Executive Officer at the Allen Institute for Artificial Intelligence.

KEVIN SCOTT: Yeah, I talked with Drew and Tom about the complexities of understanding biology. And with Oren, we got into the feasibility of creating AGI, artificial general intelligence.

[MUSIC]

OREN ETZIONI: I think it’s a really interesting time to be a computer scientist, to be a computer professional. I do want to say, off the top of my head, here are three things that the current technology doesn’t yet touch. The first one is the current technology – maybe this is a good phrase – is kind of profligate in its use of compute and data. Yeah, I need millions of examples at least for pre-training and then thousands for tuning. Yeah, I need this massive amount of computation, millions of dollars of computation to build my model.

Whereas of course human intelligence, which is the standard, sits in this little box, right, that’s on top of my neck and is powered by the occasional salad and a cup of coffee, right? We know, right, you know, kids – they’ll see one example and they’ll be off to the races. So, I think we can build far more frugal machines in terms of data and compute. That’s one.

And then the second thing, and this goes right back to the discussions we were having at CMU in the early ‘90s is, “What is the cognitive architecture?” In other words, okay, you can take a narrow question like, “Is this email spam or not,” or “Did I just say “B” or “P?” Speech - phoneme recognition. And you can train models that’ll do – they have super-human performance at that.

But the key thing in artificial general intelligence – in AGI – is the “G.” So, how do we build what was called, then, a unified cognitive architecture? How do we build something that can really move fluidly from one task to another, when you form a goal, automatically go and say, “Okay, here’s a subgoal, here’s something I need to do or learn in order to achieve my goal.” There’s just so much more to general intelligence than these savant-like tasks that AI is performing today.

The third topic in AI that I think we ought to be paying more attention to is the notion of a unified cognitive architecture. So, this is something we studied at CMU back in the day. And it’s the notion of not just being a savant, not just taking one narrow problem, but going from one problem to the next and being able to fluidly manage living, where right now we’re talking. Soon, I will be crossing the street, then I’ll be reading something.

Putting all those pieces together and doing it in a reasonable way is something that’s way beyond the capabilities of AI today.

KEVIN SCOTT: Yeah, and we’ve got a little bit of that starting –

OREN ETZIONI: I know, I –

KEVIN SCOTT: In transfer learning, like, but just beginning.

OREN ETZIONI: Right, but the thing about the transfer learning is that it’s still from one narrow task to another. Maybe it’s from one genre of text to another genre of text. We don’t really have transfer learning from, okay, I’m reading a book, to now I can take what I read in the book and apply it to my basketball game, right? We’re very far from anything like that.

[MUSIC]

KEVIN SCOTT: Oren and I also talked about the use of AI as augmented intelligence. I asked Oren what he thought we should be thinking about to be better prepared for all the innovation coming in the near-term future.

[MUSIC]

OREN ETZIONI: Well, in terms of policy, I think we do actually have to be very careful not to use the kind of blunt and slow and easily distorted instrument of regulation to harm the field. So, I would be very hesitant, for example, to regulate basic research. And I would, instead, look at specific applications and ask, “Okay, if we’re putting AI into vehicles, how do we make sure that it’s safe for people? Or if we put AI into toys, how do we make sure that’s appropriate for our kids, for example? The AI doesn’t elicit confidential information from our kids or manipulate them in various ways.”

So, I’m a big believer in regulating the applications of AI, not the field on its own. Take some of the overarching regulatory ideas – for example, in the EU, there’s the right to an explanation. And it sounds good, right? AI is opaque, it’s confusing, these are called black box models. Surely, if an AI system gives us a conclusion, we have a right to an explanation – that sounds very appealing.

Actually, I think it’s a lot trickier than that because there are really two kinds of explanations of AI models. One is explanations that are simple and understandable but turn out not to be accurate. They’re not high-fidelity explanations, because the system is complex. And a great example of that is if you go to Netflix and it recommends a movie to you, they’ve realized that people want to know, why did you recommend this movie? And say, “Well, we recommended this movie because you liked that movie, right? We recommended Goodfellas because you liked The Godfather.”

Well, if you look under the hood, right, the model that they use is actually a lot more complicated than that. So, they gave me a really simple explanation that’s just not true. So, that’s one kind.

The other kind is I can give you a true explanation, but it’ll be completely incomprehensible. So, now if the EU says, you know, you have a right to an explanation, what you’re going to end up with is one of these two horns of the dilemma – something that’s incomprehensible, or something that is inaccurate.

So, I think that it’s really important that we are careful not to go with kind of popular notions like right to explain, but instead, think through what happens in particular contexts.
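[Editor’s note: Oren’s fidelity point can be made concrete with a toy sketch. Everything below is illustrative and not from the episode: a “black box” whose decision depends on an interaction between two features, and a simple single-feature “explanation” of it. The simple story is comprehensible, but it agrees with the real model only about half the time – exactly the low-fidelity horn of the dilemma.]

```python
import random

random.seed(0)

# A toy "black box" model: its decision depends on an interaction
# between two features, which no single-feature story can capture.
def black_box(x1, x2):
    return 1 if (x1 > 0.5) != (x2 > 0.5) else 0  # XOR-like interaction

# A "simple explanation": "we recommended it because x1 was high."
def simple_surrogate(x1, x2):
    return 1 if x1 > 0.5 else 0

# Fidelity: how often does the simple story match the real model?
samples = [(random.random(), random.random()) for _ in range(10_000)]
agree = sum(black_box(a, b) == simple_surrogate(a, b) for a, b in samples)
fidelity = agree / len(samples)
print(f"surrogate fidelity: {fidelity:.2f}")  # roughly 0.50
```

The surrogate sounds like an explanation, but on this toy model it is wrong about half the time, which is the sense in which a simple, appealing explanation can fail to be a true one.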

KEVIN SCOTT: Yeah, I think that is an extraordinarily good point. These models are already as complex as some natural phenomena. We’re not able to explain many natural phenomena once we get down to the level of, like, these are the electrostatic interactions of the atoms that comprise this system; you have to look at the phenomenology of the system. It’s why statistics is going to be such an important skill for everyone, and why understanding the scientific method and having an experimental mindset is, I think, important.

I think this is such a good point about not deceiving ourselves: an incomprehensibly complex answer to a question like “Why did this thing do what it did?”, even if it’s couched in language that we might otherwise understand, is not real understanding.

OREN ETZIONI: Exactly. And I’m not suggesting that the solution is, hey, just trust us, you know, we’re – we’re all (inaudible, crosstalk)

KEVIN SCOTT: Yeah, yeah, yeah, for sure –

OREN ETZIONI: – going to work. But, again, going back to the auditing idea, rather than an explanation, if we want – you know, one of the most jarring ones are uses of AI in the criminal justice system, right?

KEVIN SCOTT: Yes.

OREN ETZIONI: To help make parole decisions and things like that. Well, we should audit these systems, test them for bias, right? The press should be doing that, the ACLU should be doing that, regulatory agencies should be doing that. But the solution is not to get some strange explanation for the machine. The solution is to be able to audit its behavior statistically and test it, hey, are you exhibiting some kind of demographic bias?
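[Editor’s note: the statistical auditing Oren describes can be sketched in a few lines. This is an illustrative example, not any real auditing tool: from hypothetical logged model decisions, it computes the gap in positive-decision rates across demographic groups, a simple demographic-parity check of the kind an auditor might run.]

```python
from collections import defaultdict

# Hypothetical audit data: (demographic_group, model_decision) pairs.
# In a real audit these would come from logged model outputs.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def demographic_parity_gap(records):
    """Largest difference in positive-decision rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(records)
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")   # 0.50
```

The point of the sketch is that this kind of test needs no explanation of the model’s internals at all: it treats the system as a black box and audits its behavior statistically, which is exactly the alternative Oren proposes.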

KEVIN SCOTT: Yeah, I mean, one of the things we do at Microsoft is we have these two bodies inside of the company: this thing called the Office of Responsible AI, which sits in our legal team, and this thing called Aether, which is the AI and ethics committee inside of the company.

What we do with both of these bodies is we try to have both the lawyers and the scientists thinking about how you inspect both the artifacts that you’re building in your AI research and their uses. And we have a very clearly defined notion of a sensitive use. And depending on how sensitive a use a particular model is being deployed in, we have different standards of auditing and scrutiny that go along with it.

And for a criminal justice application, for instance, you may say that a model can only advise; we do not condone it making a final decision. You know, just so that there’s always human review in the loop.

OREN ETZIONI: I think that’s smart. And I also think that this relates to another key principle when we think about both regulatory frameworks and ethical issues: whose responsibility is it? The responsibility and the liability have to ultimately rest with a person. You can’t say, “Hey, you know, look, my car ran you over, it’s an AI car, I don’t know what it did, it’s not my fault, right?” It’s you as the driver, or maybe the manufacturer if there’s some malfunction, but people have to be responsible for the behavior of the machines.

The same way that, look, I’ve got – the car’s already a complex machine with 150 CPUs and so on, I can’t say, “Oh, well, the car ran you over, I had very little to do with it.” The same is true when I have an AI system. I have to be the one who’s responsible for an ethical decision. So, very much agree with you there.

[MUSIC]

CHRISTINA WARREN: That was the CEO of the Allen Institute for AI, Dr. Oren Etzioni. Now, here’s the part of the show that I’ve most been looking forward to, because I get to ask Kevin some questions. But before that, let’s hear from this past year’s episode recorded right before the launch of Kevin’s book, Reprogramming the American Dream: From Rural America to Silicon Valley – Making AI Serve Us All.

Kevin got together with his co-author, Greg Shaw, to talk about the book. Greg is a former journalist and has worked with Bill and Melinda Gates both at Microsoft and at the Bill and Melinda Gates Foundation. He is also co-author of Satya Nadella’s book, Hit Refresh. We’ll start with Kevin reading an excerpt from the book.

[MUSIC]

KEVIN SCOTT: [reading] “With modern technology, with more of our time spent online and on our devices, and with more and more of our connections with one another mediated by social networks, it’s hard to avoid becoming trapped in self-reinforcing filter bubbles, and then not to have those bubbles exert their influence on other parts of our lives.

Many of my friends and colleagues see those living in rural communities, people who live outside of the urban innovation centers where the economic engines are thrumming, right now, in a very different light than I do.

That’s not just unfortunate; it’s an impediment to making the American dream real for everyone. The folks I know in rural America are some of the hardest working, most entrepreneurial, cleverest folks around.

They can do anything they set their minds to, and have the same hopes for their futures, and the futures of their families and communities, as those of us who live in Silicon Valley and other urban innovation centers all do.

They want their careers and their families to flourish, just like everyone else. Where we choose to live shouldn’t become a dividing line, an impediment to a good job and a promising future. That’s the American dream, and it’s on all of us to make sure that it works. Because in a certain very real sense, if it doesn’t work for all of us, it won’t work for any of us.”

[MUSIC]

CHRISTINA WARREN: Fantastic. You know, Kevin, now that the book has been out there for a few months, I wanted to ask you, how’s the book been received?

KEVIN SCOTT: I think it’s been received pretty well. I mean, it’s obviously an unusual time to launch a book, and people have an awful lot of things to think about. But the reason that I wrote the book in the first place was to try to help enrich the conversation that we were having about artificial intelligence – how it could be used and how it shouldn’t be used. And I think it has done a better job than even I was hoping for in informing those conversations. And it’s certainly been wonderful to have conversations around the book with people I never would have had the opportunity to chat with before.

CHRISTINA WARREN: You know, Kevin, on this podcast, you’ve shown a lot of curiosity about our guests’ origin stories – you know, where do they come from, how did they get started in their field? And in your book, you talk about your grandfather, Shorty, who is this remarkable character. Why did you feel that it was important to include these anecdotes about your grandfather, your mom, your dad? Why is it important for today’s leaders and innovators to share parts of their origin story?

KEVIN SCOTT: Well, I think there are a couple of reasons. One of the things that I wrote in the book is that I really do believe that we are the stories that we tell. Storytelling really does shape who we become. Our stories sort of give us a roadmap for the future that we aspire to have for ourselves and for each other.

And I think it’s really important for all of us to sort of share our stories because, you know, there are interesting lessons in there. But maybe most importantly, understanding each other a little bit better shows us how I do honestly believe that we are vastly, vastly more similar than we are different, and that sometimes, these stereotypes that we have of each other are just obstacles that stand in the way of each of us achieving our full potential.

CHRISTINA WARREN: Yeah, yeah, I like that. I like that. Here’s more from Kevin and Greg’s conversation.

[MUSIC]

GREG SHAW: You know, you dedicate the book to your father. And in the book, you write a letter to your grandfather, Shorty, explaining AI to him. He was, obviously, a craftsman and someone who would have been fascinated by AI.

I mention this because, you know, the book is titled Reprogramming the American Dream, and you had your family and other families in mind. What’s involved in reprogramming the American dream, and what do you mean by the American dream?

KEVIN SCOTT: So, I think that we have an opportunity, with better investment in advanced technology, and with making those investments in a way where they’re accessible to as many people as humanly possible, to have people in rural and middle America create really interesting new businesses that create jobs and economic opportunity, and that help them realize their creative vision.

And that, you know, serves as a platform. In the same way that industrial technology served as a platform for these communities to build their economies in the early to mid-20th century, AI can have a similar sort of effect in these communities today.

GREG SHAW: Yeah, you offer a number of different suggestions related to education and skilling, and that sort of thing. I’m curious, what would you say is your advice to young people who might be growing up in rural central Virginia, or Oklahoma, where I’m from? You know, how should they prepare for jobs of the future?

KEVIN SCOTT: Yeah. I think, I’ve chatted with a bunch of people about this over the past few weeks, and you know, when I get this question about what we need to do to make AI accessible to those kids in rural and middle America, yeah, some of the things that we need to do are just very prosaic, I think.

So, the tools themselves have never been more powerful. Like the really interesting thing to me is that first machine learning project that I did, 16 years ago, now, required me to sit down with like a couple of graduate-level statistical machine learning textbooks, and a whole stack full of fairly complicated research papers.

And then I spent six months writing a bunch of code, from scratch, to use machine learning to solve the particular problem I was trying to solve at the time.

If I look at the state of open-source software and cloud platforms, and just the online training materials that are available for free to everyone, a motivated high school student could do that same project that I did 16 years ago, probably in a weekend, using modern tools.

And so, you know, I think that the thing that we really need to be doing is figuring out how to take these tools that are now very accessible, and like we shouldn’t feel intimidated by them, in any shape, form or fashion, and figure out how to get those into high school curricula so that we are teaching kids in a project-oriented way, like how to use these tools to solve real world problems.

I think getting kids those skills is super important. Like the other thing that we need to think about is just how we’re connecting people to the digital infrastructure that is going to increasingly be running our future.

And so, you know, there are things like the availability of broadband that are a huge, huge deal. You know, I think we write in the book about my visit to our datacenter in Boydton, which is in Mecklenburg County, about an hour and a half to two hours away from where I grew up.

And this is one of the most sophisticated technology installations anywhere in the world. Like there’s an enormous amount of network bandwidth coming into this facility, and like the amount of compute power that is just in this sort of acres of datacenter infrastructure that we have there is just staggering.

And we have a bunch of high-skilled technology workers who are building and operating this infrastructure on behalf of all of Microsoft’s cloud customers. And some of those people who are living in that community struggle to get access from their local telecommunications providers to the high-speed broadband that they expect. They’re information workers; they expect to have good broadband connectivity in their homes.

For students, it’s even more critical. If you don’t have a good broadband connection available to you somewhere as a student, you’re never going to be able to go find these open-source tools, to use these free or cheap cloud platforms, to go learn all of this very accessible knowledge that is on YouTube.

And so, sometimes, I think it’s the, you know, the prosaic things that like we’re making more complicated than the complicated things.

[MUSIC]

CHRISTINA WARREN: That was Kevin speaking with his co-author, Greg Shaw, about their latest book, Reprogramming the American Dream. Now, let’s switch gears a bit and meet one of Kevin’s favorite science fiction authors, Charlie Stross. Kevin, tell us a bit about Charlie and why you invited him on the show.

KEVIN SCOTT: Well, I obviously love Charlie’s fiction. He’s a phenomenal writer. And, you know, I feel like I have been really, really shaped by the books that I read from early in childhood. And I’ve been, I think, especially inspired and motivated by the science fiction that I’ve read. And Charlie’s is certainly really, really wonderful, wonderful fiction.

I first found out about Charlie as a writer when I read his books on the singularity, which is this sort of notion that computers become intelligent and self-evolving, and so rapidly self-evolving that they become this sort of unknowable thing, and weirdness happens, right? Like, once you cross over the singularity, things become very unpredictable. And so having someone with an amazing, brilliant imagination – and in Charlie’s case, an actual background in the technology industry, because he was a programmer and a technical writer – really can shine a light on these sort of crazy circumstances that we can set up for ourselves as acts of imagination.

And so, I just thought it would be really interesting to hear Charlie’s take on the current state of affairs in the world, where technology has become an increasingly important part of all of our lives, and an increasingly important factor shaping what’s going to be happening in the future. And as someone whose job it is to literally imagine the future, I thought he might have something interesting to say for sure.

CHRISTINA WARREN: For sure, for sure. Well, let’s hear from Kevin’s conversation with science fiction writer and Hugo Award winner Charlie Stross.

[MUSIC]

KEVIN SCOTT: How do you start this process of trying to imagine what the future might be like so you can have a foundation for the stories that you’re telling?

CHARLIE STROSS: Okay. I don’t always start from a point — from the perspective of the world building itself. I usually start from the point of view of the characters, because fiction is essentially the study of the human condition under circumstances that don’t currently apply. And, you know, if you’re going to talk about the human condition, you have to start by talking about people.

Having said that, there are a couple of books I wrote in 2006 and 2009 which were very tightly focused on the world 10 years in the future. It was going to be a trilogy, but unfortunately the third book in the trilogy has been persistently derailed by political developments in the real world.

I mean, I just can’t write it. I’ve had two or three different plots for it destroyed; the most recent one was killed by COVID-19, because I do not want to write a book about a viral pandemic at this point.

KEVIN SCOTT: Yeah.

CHARLIE STROSS: Those books were “Halting State” and “Rule 34.” And the idea for “Halting State” I got in 2005, when I was at a science fiction convention, at a panel discussion about massively multiplayer online role-playing games like World of Warcraft, at that point.

And a member of the panel who was on the top table at that point came up with a couple of points. The first was that MMOs were the first commercially successful virtual reality environment, one in which you have lots of people with avatars meeting each other. Forget the lack of headsets or tactile feedback or head tracking and so on; it’s still a window into a virtual world.

The second thing he came up with was, there’s economics involved. He gave, as an example, an anecdote of an incident that happened in London a couple of years earlier. And a guy walked into a police station to report a crime. Somebody he met on the internet had sold him a magic sword, and it wasn’t magic.

KEVIN SCOTT: (Laughter.) That’s great.

CHARLIE STROSS: It turned out to be fraud. You know, he bought a weapon inside a game via an eBay auction, and it wasn’t as described. It did actually get written up as a fraud. And I suddenly realized at this point, hang on, I need to do some digging here.

And I did some digging and discovered some economic studies, including one paper that found that if you take in-game currencies and convert them to real-world currencies, using whatever players are using as an exchange rate, then by about 1999 there was one game which had an economy with about the same value as the GDP of Austria. Well, no, you can’t really do a real-world conversion like that, because it’s just fatuous. You’d crash the in-game economy if you tried anyway.

But there was something going on here. And you know, economics is the study of how human beings allocate resources under conditions of scarcity. I began to ask myself, what’s the world going to look like in ten years’ time if we really do get augmented-reality goggles and self-driving cars and computer games everywhere, and MMOs and live-action role play combined with high-bandwidth, always-on stuff?

So, I started designing what I thought the world of 2017 would look like. And I got this book written and had a bit of a hard time selling it in the U.K., although it sold well enough in the U.S. The problem is, it’s a crime novel set in 2017 in an independent Scotland, and it opens with a cop being summoned to the boardroom of a startup company she doesn’t understand, in a converted former nuclear bunker, to be told there has been a bank robbery.

A gang of Orcs with a dragon for fire support had robbed a central bank inside an MMO. And she gradually — well, various consultants are called in, including forensic accountants and a computer guy, because you need a computer guy for this sort of stuff, and so on. And we gradually discover that somebody has come up with an exploit for compromising the private keys of a company whose basic specialty is arbitrage between the economies of competing MMOs, because one games company after another is trying to poach their competitors’ customers. And there’s now a capture-the-flag game in progress between rival teams of Chinese hackers who are trying to hijack the economy of a small European state.

Now, to get to where this was going to go, I tried to do some rigorous extrapolation, and came up with a couple of rules of thumb. And the first is if you’re looking 10 years in the future, 70% of that world is here today. About half the cars on the street, they’re already there. You know, they’re going to be there in 10 years’ time, they’re still going to be driving. They’re going to be a bit more decrepit, but they’re out there.

Buildings, the average house in the U.K. is 75 years old. I know American dwellings tend to be a lot younger. But you know, 10 years’ time, there’s not going to be much turnover. There will be a few new office buildings, a few new developments, but most of what we see is there.

The people, everybody’s going to be 10 years older. The people at the top of the age range will — well, they won’t be visible anymore. The kids, they’re going to be teenagers, but it’s the same stuff. 70% of it is there today.

You then get another 20% — no, actually, it’s about 80% that’s there today. You then get about another 15% that is pretty much predictable. It’s on roadmaps.

We knew back in 2006, 2007, that by 2017 we’d be looking at — we’d have 3G cellular telephony as standard, and something called 4G would almost certainly be out there, but not universal by then. You know, I had no idea what the 4G standards would be, but 3G was pretty much visible. Everything back then was running on GSM.

The state of the phones we were using, again, it was fairly obvious that they would be connected devices and they’d be very smart, pocket computers. I missed the call on that by going for artificial reality goggles, shades of Google Glass, which as we know, kind of crashed in the market for social reasons rather than technological feasibility. It may eventually happen.

There’s always, though, an element of a couple of percent which is, “Who ordered that?” You know, stuff that comes out of left field completely and is completely unpredicted. The CCD image sensor that we have in all our cameras today was, I think, developed in the 1980s and actually commercialized, and people realized that these things were literally cheap as chips.

What are we going to do with them? Where are we going to put them? The idea that everybody would be carrying a decent quality camera around with them at all times, though, a video camera that could upload to the internet, that was not something most people were prepared to grapple with.

And the idea that there’d be a craze for “happy slapping,” whereby teenagers would find a random stranger and video one of their mates going up to them and beating the crap out of them, and then put it on YouTube. Yeah, luckily, that was a short-lived craze. Most of the people who did it didn’t realize they were basically preparing evidence for their own prosecution. (Laughter.)

But it’s a second-order consequence. As Frederik Pohl once said, “Anyone can predict the automobile, the difficult bit is predicting the traffic jam.”

[MUSIC]

CHRISTINA WARREN: That was science fiction writer Charlie Stross.

KEVIN SCOTT: Yeah, and don’t forget to grab a copy of Charlie’s new book, Dead Lies Dreaming, it just came out in October and it is fantastic.

As mentioned, I’m intrigued to learn about those first sparks of curiosity that led our guests into their professional pursuits. There are so many common threads between them. One of my favorite stories this year was from Microsoft’s own Eric Horvitz.

CHRISTINA WARREN: Yeah. Eric is a Microsoft technical fellow and our very first Chief Scientific Officer. And he provides expertise on a broad range of scientific and technical areas, from AI and biology and medicine to a whole host of issues that lie at the intersection of technology, people and society. Let’s hear a bit from Kevin’s conversation with Eric.

[MUSIC]

KEVIN SCOTT: So I’d love to start, as we always do, by understanding how it is you first got interested in science and technology. Presumably, that was when you were a kid. So can you tell us a little bit about that?

ERIC HORVITZ: Yeah, it’s – I just know that I’ve always been sort of inspired to understand things. And I didn’t distinguish between human creations, artifacts, and stuff I would see in the world. So I was confused and intrigued and interested in living things, in space and time. I remember being very, very young asking my first-grade teacher if I could know more about time. She ended up bringing me to the library at Birch Elementary School and showing me a book about clocks.

And I said, “No, I don’t really mean clocks. I mean time.” And I’m also intrigued by light. I had this really beautiful phosphorescent – phospholuminescent nightlight in the ‘60s, beautiful green light would wash the room at night in this glow. I was curious, what the heck was light?

So, I had these basic questions. I remember having a discourse with my father about some – I heard a lot about god. I was curious what god was made of, and I couldn’t get a good answer from adults about that.

And when it comes to machines and mechanism, I took apart a flashlight – I think it was like the summer after kindergarten or so, because I remember in first grade, I was already into this and talking to friends about this. But I realized that there was a circuit there. I found some wire and I think I impressed my family more than myself when I ran around the house with a battery and a wire with a light bulb lighting up in my finger – under my finger.

And I think this was also around the time that – again, mid ‘60s when there was a lot of – you know, a lot of cartoons we were watching back then had electronic robots and Astro Boy flying around and very helpful entities. I was curious about electronic brains. I don’t know where I got that idea.

But I remember having a bag of parts – a peanut can, wires, light bulbs – on the way to my grandmother’s house in the back of the station wagon; maybe this was around second or third grade. I thought I could assemble an electronic brain back there. And didn’t get –

KEVIN SCOTT: That’s so cool.

ERIC HORVITZ: You know, still working on that today, basically.

KEVIN SCOTT: That’s really awesome. And were your parents scientists or technical engineers?

ERIC HORVITZ: My parents were both schoolteachers. My mother was a kindergarten teacher. I remember being very proud of that in kindergarten. I would tell everybody at a time when the kindergarten teacher was like the person you most looked up to that, by the way, my mom was a kindergarten teacher, too. That was considered awesome by my peers at the time. My father was a high school teacher. He did science as well as history.

KEVIN SCOTT: So, where – I mean, it sounds like you had a bunch of innate curiosity, which is awesome and, like, one of the themes I think we see with a lot of people who chose careers in science and technology, but did you have any role models when you were a little kid or things that were in the popular media that were inspiring you or did this just really come out of, you know, from your perspective, nowhere?

ERIC HORVITZ: Lots of books. My parents had a home library filled with lots of books. We had the Merrick Library – Merrick, Long Island – where I would spend lots of time. I got to know the science sections as well as the pet section of the library pretty intensively.

Mostly, books at the time. And friends – some of whom had aligned interests. It’s hard to think of the idea of being in the first or second or third grade having a scientific support team, but we sort of had peers that were interested as well. In third grade, I became – I was elected to be the chairperson of the science club, I remember. We had all sorts of projects involving wind speed and solar energy back in those days.

But I’m not sure, you know, where some of the interest came from. It was largely curiosity and books. And later in life, of course, I had some fabulous mentors. You know, we all think back to our various teachers in elementary school, you know? You start in kindergarten, go to sixth grade, each teacher has a major influence on people.

And, you know, I can remember sitting at this desk in sort of a – what I thought was kind of a militaristic setting. And I asked myself on the first day of first grade, “Is this what school’s going to be like? I have to sit at this desk, like, for like 12 years?” (Laughter.)

And the way that first grade went, I was really unimpressed. I would have given it all up if it wasn’t for – and I’ll call out a name – Mrs. Frank, my second-grade teacher, who, like, completely opened the world to me. She was open to science and interested in answering questions. You know, and then you jump forward to fifth grade, Mrs. O’Hara – these people were just brilliant teachers – and Mr. Wilmott in sixth grade, who celebrated my interests. We had science fairs, and I actually won the science fair that year. And you have a few teachers like that who really are like large planets that spin you up into their gravitational field and off into new directions.

[MUSIC]

KEVIN SCOTT: That was Dr. Eric Horvitz, Chief Scientific Officer at Microsoft. And I really recommend listening to the entire podcast. It was a fascinating conversation that delved into, among many other things, the intersection of biology, AI and high-performance computing that’s been one of the themes this year.

CHRISTINA WARREN: Yeah, it was a great episode. They all are, if I do say so myself. And, you know, in that clip we just heard, I really loved that Eric gave a shout-out to the teachers who inspired him and helped him become who he is today. So, in the spirit of celebrating all of our invaluable educators out there, I would like to give a shout-out to Ms. Cohen. Kevin, what about you?

KEVIN SCOTT: Oh, that is an extremely difficult question, just to name one. (Laughter.) I had so many teachers who had such a phenomenal impact on me. Maybe I’ll give a shout out to Dr. Tom Morgan, who taught me my first real bits of computer science when I was in high school at the Central Virginia Governor’s School.

CHRISTINA WARREN: Shout out to Tom, that’s awesome. Well, the show would not be complete if we did not hear from at least one of our guests about what they do in their spare time. So, to wrap things up, we’ll hear from Dr. Percy Liang. Percy is an associate professor of computer science at Stanford University and one of the great minds in AI, specifically in machine learning and natural language processing.

[MUSIC]

KEVIN SCOTT: So one last question. So, just curious what you like to do outside of work. I understand that you are a classical pianist, which is very cool.

PERCY LIANG: Yeah, so, piano has been something that’s always been with me since I was a young boy, and I think it’s also been kind of a counterbalance to all the other tech-heavy activities that I have been doing.

KEVIN SCOTT: What’s your favorite bit of repertoire?

PERCY LIANG: I like many things, but late Beethoven is something I really enjoy. I think this is where he becomes very reflective, and his music has a kind of inner depth, so I really enjoy that.

KEVIN SCOTT: What particular piece is your favorite?

PERCY LIANG: So, the Beethoven sonatas. I’ve played the last three Beethoven sonatas – Opus 109, 110, and 111 –

KEVIN SCOTT: Wonderful pieces.

PERCY LIANG: Yeah, and one of the things, actually – one of the challenges has been that it’s incredibly hard to make time for a serious hobby. And actually, in graduate school, there was a period of time when I was really trying to enter this competition and see how well I could do.

KEVIN SCOTT: Which competition?

PERCY LIANG: It was called the International Russian Music and Piano Competition. It was in San Jose – I don’t know why they had this name – but, you know, I practiced a lot. Some days I practiced like eight hours a day, and then I was just like, this is just too hard. I can’t compete with all these people who are basically professionals.

And then I was thinking about what the bottleneck is. Often, I have these musical ideas and I know what it should sound like, but you have to do the hard work of actually practicing. And I was thinking, maybe wistfully, that machine learning and AI could actually help me in this endeavor.

Because I think it’s analogous to the idea of, you know, having a desire and having a program synthesized for you, or an assistant doing something for you. I have a musical idea – how can computers be a useful tool to compensate for my inability to find time to practice?

KEVIN SCOTT: Yeah, and I think we are going to have a world where computers, and machine learning in particular, are going to help with that human creativity. But one of the things I find fascinating about classical piano is that it’s one of those disciplines – and there are several of them – where the difference between expertise and non-expertise is just blindingly obvious, no matter how much I understand –

And I’m not a classical pianist – I’m just an enormous fan. Even though I understand harmony, I understand music theory, I can read sheet music, I can understand all of these things, and I can appreciate Martha Argerich playing, you know, Liszt Piano Concerto No. 2 at the Proms.

There’s no way that I could sit down at the piano and do what she does, because she has put in an obscene amount of work training her neuromuscular system to be able to play, and then years and years and years of thinking about how she turns notes on paper into something that communicates a feeling to her audience. And it’s really just stunning to me, because there’s no shortcutting it – you can’t cheat.

PERCY LIANG: Yeah, it’s kind of interesting because, in computer science, there’s sometimes an equivalence between the ability to generate and the ability to discriminate and classify, right? If you can recognize whether something is good or bad, you can use that as an objective function to hill climb. But it seems like in music, we’re not at the stage where we have that equivalence.
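[The idea Percy describes – a recognizer serving as an objective function for generation – can be sketched as a simple hill-climbing loop. This is a minimal illustration, not anything from the conversation; the `score` function here is a toy stand-in for what would, in a real system, be a learned discriminator.]

```python
import random

def score(melody):
    # Toy "discriminator": rewards smooth, stepwise motion by
    # penalizing large leaps between consecutive pitches.
    # A real system would substitute a learned model of quality.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def hill_climb(melody, steps=1000, seed=0):
    rng = random.Random(seed)
    best, best_score = list(melody), score(melody)
    for _ in range(steps):
        candidate = list(best)
        i = rng.randrange(len(candidate))
        candidate[i] += rng.choice([-2, -1, 1, 2])  # perturb one note
        s = score(candidate)
        if s > best_score:  # keep the change only if the "critic" prefers it
            best, best_score = candidate, s
    return best, best_score

notes = [60, 72, 55, 67, 48]  # MIDI pitches, deliberately full of leaps
improved, improved_score = hill_climb(notes)
```

[The loop generates by repeatedly proposing small mutations and letting the recognizer accept or reject them – exactly the equivalence Percy notes is missing for a human: being able to hear that something is good does not, by itself, let you produce it.]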

You know, I can recognize when something is good or bad, but I don’t have the means of producing it. Some of that is physical, but I don’t know – this is something that’s in the back of my mind, in my back pocket, and maybe in a decade or so I’ll revisit it.

KEVIN SCOTT: The other thing I really do wonder about, with performance, is that there’s just something about it – for me, it happens to be classical music. I know other people have these sorts of emotional reactions to rock or jazz or country music, or whatever it is that they listen to. But I can listen to the right performance of, say, Chopin’s G Minor Ballade, and there are people who can play it where I’m like, this is very nice, and I can appreciate it.

And there are some people who can play it where, every time I listen to it, 100% of the time I get goosebumps – it provokes a very intense emotional reaction. And I just wonder whether part of that is because I know there’s a person on the other end, in some sort of emotional state as they play, that resonates with mine – and whether you’ll ever have a computer be able to do that.

PERCY LIANG: Yeah, I mean, this gets to be kind of a philosophical question at some point: if you didn’t know whether it was a human or a computer, what kind of effect would it have? But –

KEVIN SCOTT: Yeah. And actually, you know, I had a philosophy professor in undergrad who asked the question: would it make you any less appreciative of a Chopin composition knowing that he was being insincere when he composed it – that he was doing it for some flippant reason? And I was like, yeah, I don’t know.

PERCY LIANG: Well, one of my piano teachers used to say it’s kind of like theater. You have to convey your emotions, but even when you go wild, there has to be some element of control in the back, because you need to continue the thread, and –

KEVIN SCOTT: Yeah, yeah, for sure.

PERCY LIANG: Yeah. But for me, coming back to playing, it’s also just the pleasure of it – it’s not just about having a recording that sounds good to me.

KEVIN SCOTT: Yeah, I’m very jealous that you had the discipline and did all the hard work to put this power into your fingers. It’s awesome.

[MUSIC]

CHRISTINA WARREN: Well, I think that’s a great note to end on.

KEVIN SCOTT: Did I detect a little pun there?

CHRISTINA WARREN: Yes. Yes, you did, Kevin. (Laughter.)

KEVIN SCOTT: (Laughter.) Well, before we close, I just want to say thank you again to our guests this past year. I’m so grateful to all of the folks who shared their time and their vision with us.

CHRISTINA WARREN: Yes. And as always, thank you for listening. As 2020 draws to a close – and a fond farewell, might I add – (laughter) – please take a minute to drop us a note at [email protected], and tell us about your hopes for 2021. Be well.

KEVIN SCOTT: See you next time.