Behind The Tech with Kevin Scott - Year in Review 2021

KEVIN SCOTT: Hi, everyone. Welcome to Behind the Tech. I’m your host, Kevin Scott, Chief Technology Officer for Microsoft.

In this podcast, we’re going to get behind the tech. We’ll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did. So, join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what’s happening today. Stick around.


CHRISTINA WARREN: Hello and welcome to a special episode of Behind the Tech. I’m Christina Warren, Senior Cloud Advocate at Microsoft.

KEVIN SCOTT: And I’m Kevin Scott.

CHRISTINA WARREN: And today, we’re doing our Year in Review episode, and this means that we’re going to revisit a few fascinating conversations with our guests from 2021 and earlier years, with topics ranging from protein design to robotics to the importance of getting more girls in STEM.

You know, Kevin, we called 2020 an “unprecedented year.” My words fail me when I think about summing up the events of 2021. It makes me think of our previous guest, science fiction writer Charlie Stross, who said (and I quote): “This is just trying to front-run the insanity in my fiction, and I’m having great difficulty making elder gods more horrifying than what’s happening around the world today.”

KEVIN SCOTT: Indeed, the world today is stranger than fiction.

CHRISTINA WARREN: It really is. But for today’s show, we’re focusing on Behind The Tech, which produced a fantastic year of conversations. As always, we were enlightened, humbled and inspired by the incredible guests we had the honor of speaking with.

KEVIN SCOTT: Yeah, we had a line-up of awesome guests this last year, including AI experts, bioengineers, entrepreneurs and digital influencers. We met Mae Jemison, the first African-American woman in space. We chatted with scientist Ashley Llorens about the future of AI and robotics. And we spent time with the Grammy Award-winning Jacob Collier and his collaborator, Ben Bloomberg.

CHRISTINA WARREN: Yeah, and the exciting thing about this episode is that we’re also going to check in with some folks from earlier years of the podcast. So, we’ll revisit some bits of conversation from your interviews with them, and we’ll also share some news about what they’ve been up to recently.

KEVIN SCOTT: Great. So, who’s up first?

CHRISTINA WARREN: Okay, so, first, we thought it was fitting to check in with Anders Hejlsberg because Anders was our first-ever guest on the podcast.

KEVIN SCOTT: Anders is one of my legitimate coding heroes. He built Turbo Pascal when he was at Borland, which is this tool that inspired me as a teenager to take computer science seriously. He eventually moved over to Microsoft, where he helped create the C# programming language. And just a few months ago, he and the team released TypeScript 4.4, which offers smarter control flow analysis, stricter checks and speed improvements, and a bunch of other cool programming language awesomeness.
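To make that “smarter control flow analysis” concrete: TypeScript 4.4 started tracking type guards that are saved into a constant, so a check stored in a variable still narrows the type when it’s used later. Here’s a minimal sketch (the function and variable names are just for illustration, not from the release notes):

```typescript
// TypeScript 4.4 narrows types through "aliased conditions": a typeof
// check saved into a const is remembered by control flow analysis.
function describeValue(value: string | number): string {
  const isString = typeof value === "string"; // the aliased condition

  if (isString) {
    // Before 4.4, `value` was still `string | number` here and this call
    // was a compile error; 4.4 narrows it to `string`.
    return value.toUpperCase();
  }
  // The true branch returned, so `value` is narrowed to `number` here.
  return value.toFixed(2);
}

console.log(describeValue("hello")); // "HELLO"
console.log(describeValue(3));       // "3.00"
```

Before 4.4, you had to inline the `typeof` check in the `if` for the compiler to narrow the type; extracting it into a named constant lost the narrowing.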

Here’s a snip from our conversation where we’re reminiscing about the fact that Anders and his team came up with one of the first integrated development environments written in Z-80 Assembly language.


KEVIN SCOTT: I just want to double-click on this point again. Coming up with one of the first integrated development environments that you have written in Z-80 Assembly language at that point, that’s an unbelievable breakthrough.

ANDERS HEJLSBERG: I suppose, in retrospect, yes. I’ve never really thought of it that way. But, you know, it’s-

KEVIN SCOTT: It’s just incredible. It’s the first. What the-

ANDERS HEJLSBERG: It just seemed like, heck, this is going to be so much better than having to, first, load an editor and then load (inaudible). Why not just put it all together? I mean, I don’t know. I never really-

KEVIN SCOTT: And especially at the time, because, I mean, again, like more framing, these are not windowed systems, can’t have multiple things opened at the same time. It’s super tedious to switch from one program to another. So, like, having everything in one place is just a huge productivity win.

ANDERS HEJLSBERG: Oh, it was, totally. The edit, compile, run, debug cycle just shrunk by many orders of magnitude.

KEVIN SCOTT: Yeah, and I’m embarrassed to say I’ve forgotten, what was it, F9 to compile and run, or was it F5?

ANDERS HEJLSBERG: You know, I don’t even remember what it was. I think it was F5, yeah, but-

KEVIN SCOTT: It was, like, miraculous.

ANDERS HEJLSBERG: Or maybe it was F3, but yeah, no, it was great. There were all sorts of tricks in there, like the runtime library was the first 12K of the system. And then when producing code, I’d just copy the first 12K into the (XE?) we were producing. There’s your runtime library, right? (Laughter.) Then generate code from there on out, and you could compile to memory and we’d put the code in memory and run it, right, or, in the original implementation, compile to tape, to floppy tape. (Laughter.) And then you could – sorry, to tape recorder interface, right? Then you could load that machine code up because, I mean, there was only 64K of memory. I mean, it was crazy.

KEVIN SCOTT: Yeah. So, I bought a copy of Turbo Pascal 5.5 out of a catalog called Programmer’s Paradise. This is just sort of how you used to buy software. And so, I forked over my $200, or whatever it was.

ANDERS HEJLSBERG: Oh no, it wasn’t even that. It was $49, like $49.95.

KEVIN SCOTT: Yeah, it was affordable because I was poor. So thank you for making cheap software. (Laughter.)


CHRISTINA WARREN: That was an excerpt from Kevin’s conversation with Anders Hejlsberg, computer scientist and Technical Fellow at Microsoft. That was Episode 1 of Behind the Tech from 2018. Gosh, that seems like a lifetime ago, doesn’t it?

KEVIN SCOTT: It does indeed. And on that note, let’s jump to one of this year’s guests, Dr. David Baker. David is a biochemist and computational biologist who has pioneered methods to predict and design three-dimensional structures of proteins.

CHRISTINA WARREN: That’s right. Dr. Baker is the director of the Institute for Protein Design and professor of biochemistry at the University of Washington. We spoke with him in the spring of 2021, and since then, his lab has partnered with the U.S. Agency for International Development on a $125 million project to detect emerging viruses.

KEVIN SCOTT: Here’s an excerpt from my conversation with David about the work at the Institute for Protein Design at the University of Washington.


KEVIN SCOTT: So, you know, maybe in terms of SARS coronavirus-2, like, can you describe–

DAVID BAKER: Yeah, that’s a great idea, really good suggestion. In fact, now, when I give talks, I explain protein design in the context of coronavirus. So, let me just spend a couple minutes describing what we’ve been doing at the Institute with regard to coronavirus.

So, the genome sequence was determined and made available last – at the beginning of last year. So, we took that amino acid sequence and used the methods we’ve been developing to predict the three-dimensional structure of the protein on the surface, the spike protein. Of course, you’re right, there’s higher literacy about this now than there ever was. And we knew that the spike protein bound the ACE-2 receptor on the target cells.

So, starting initially with that model, and then shifting over to the x-ray crystal structure when it was determined of the spike ACE-2 complex, the first thing that we did was to design small proteins that we predicted would fold up into such a way that they have a shape and chemical complementarity to the part of the spike protein called the receptor binding domain that binds the ACE-2.

So, these are like – I talked about sort of lock/key interactions. So, we – if you imagine the ACE-2 is the key and the RBD is the lock, so it’s sort of – the spike protein goes and binds to the ACE-2. We basically made things that would compete away that interaction, that is, bind more tightly to the virus than ACE-2.

And we were able to make compounds that bind to the virus about 1,000 times more tightly than ACE-2, and this was really cool. They were just completely made-up proteins, completely unrelated to anything that had been seen before. And with our collaborators, we were able to actually determine experimentally how these small proteins bind to the spike, and they bound basically exactly like in our computer model. So, we could then go – that meant we could go from, essentially from the sequence of a virus to these very, very tight, high-affinity binding proteins.

And the next thing we showed was that those proteins blocked the virus from getting into cells. And then we showed with collaborators that they protect animals from infection by the virus. And I think this was – this was kind of a real ah-ha moment for me, because we’d been developing these methods for designing proteins over the years, and here, in the midst of a pandemic, we were actually able to apply them to make therapeutic candidates.

And those are – those are now headed for clinical trials. It’s been slow, because this is a completely new modality, this whole idea of computationally designed proteins. So, there’s been a little bit of pushback because these are completely new things. No one knows exactly how they’ll behave.

But for the next pandemic, we’re – we’re going to be ready, so we have all the methods worked out and I think we’ve gotten over a lot of the sociological issues to actually using these as drugs. And there’s nothing really that can be as fast, if you can go from amino acid sequence to actually computing a protein which fits perfectly against the virus. So, that’s the first thing we did.

The second thing we did was to design, again, completely from scratch little molecular devices that emit light – luminesce – when they encounter the virus, and those are pretty neat. We’re – we’re developing those now for, not only for detecting the virus, but also for monitoring responses to vaccination, like, how good are my antibodies against the virus? And so, rather than that being just like a fixed key that fits into a lock, that’s actually a device that can undergo changes in its state when it encounters the virus.

And the third area, my colleague Neil King at the Institute has been developing sort of a next generation of coronavirus vaccines, using designed protein nanomaterials that we’ve created at the Institute, which self-assemble into big things that look like Death Stars. And we can put the parts of the coronavirus spike on the surface. And when Neil does that, he finds it gets very, very strong immune responses, stronger than with the current vaccines. These designed nanoparticle vaccines are now in clinical trials.

So, that sort of illustrates some of the key areas in protein design now, being able to design very precise shapes that can block, that can bind very tightly to targets, being able to design molecular devices that can undergo – that can basically do logic calculations, and being able to design nanomaterials like these protein Death Stars.


CHRISTINA WARREN: That was from Kevin’s conversation with Dr. David Baker from the spring of 2021. So unbelievably cool, the Death Star protein.

KEVIN SCOTT: So awesome. (Laughter.)

CHRISTINA WARREN: Okay, well now we’re going to jump back to the summer of 2020 and our conversation with Dr. Fei-Fei Li.

Dr. Li co-leads the Institute for Human-Centered Artificial Intelligence at Stanford University. And this past October, the Institute awarded $2 million in seed grants to 26 research teams with a focus on bias, diversity, healthcare, and cognitive science.

KEVIN SCOTT: Yeah, it’s super inspiring. The grantees are conducting research on projects like “Civics Education for a Just and Sustainable Future” and “Ultra-Fast MRI for Precision Radiotherapy.” It’s just incredible work.

CHRISTINA WARREN: It really is. Not only has Fei-Fei’s own research been ground-breaking, but her work as an educator is remarkable. In 2015, she cofounded a non-profit called AI4ALL, and that’s dedicated to nurturing new AI leaders. I just love their mission statement, which is, “Our vision for AI is a world where diverse perspectives, voices and experiences unlock AI’s potential to benefit humanity.”

KEVIN SCOTT: Here’s an excerpt from my conversation with Fei-Fei.


FEI-FEI LI: And also, just talking about the rural America, this is something I feel passionate about, and I have a story to share with you.

So, you probably know that I co-founded and chair this nonprofit educational organization called AI4ALL, right?


FEI-FEI LI: It started as a summer camp at Stanford about five years ago to encourage students from diverse backgrounds to get involved in AI, especially through human-centered AI study and research experience, to encourage them to stay in the field. And our goal is that in ten years, we would change the workforce composition.


FEI-FEI LI: Now, it’s become a national nonprofit, seed-funded by Melinda Gates and the Jen-Hsun Huang Foundation and –

KEVIN SCOTT: That’s awesome, I didn’t know Jen-Hsun was involved. That’s great.

FEI-FEI LI: Yeah. It’s the Jen-Hsun & Lori Huang Foundation. And this year, we’re on 11 campuses nationwide. One of the populations we put a lot of focus on, in addition to gender, race and income, is geographic diversity and serving rural communities. For example, our CMU campus is serving a rural community in Pennsylvania. We also have an Arizona campus.

One story that actually came out of our Stanford camp is Stephanie’s. Stephanie is still a high school junior now, and she grew up against the backdrop of strawberry fields in rural California, in a trailer park, with a Mexican mom. And she comes from that extremely rural community, but she’s such a talented student and has this knack and interest for computer science. And she came to our AI4ALL program at Stanford two years ago.

And after learning some basics about AI, one thing that really inspired her is she realized this technology is not cold-blooded, not just a bunch of code. It really can help people. So, she went back to her rural community and started thinking about what she could do using AI to help. And one of the things she came up with is water quality.


FEI-FEI LI: Really matters to her community. And so, she started to use machine learning techniques to look at water quality through water samples.

And that’s just such a beautiful example. I just love how her story shows that when we democratize this technology to diverse communities, especially communities that technology hasn’t reached enough, the young people, the leaders and the citizens of those communities will come up with such innovative and relevant ideas and solutions to help those communities.

KEVIN SCOTT: Yeah, and – and I think that getting this technology democratized is sort of a one-two punch.


CHRISTINA WARREN: So in this conversation, Kevin asked Fei-Fei about her earlier work on ImageNet. ImageNet consisted of 15 million images, organized into 22,000 categories of everyday English words, mostly nouns. And it was, at that time, in 2009, the largest database of natural object images in the world. And it really was, in Fei-Fei’s words, the “onset of the deep learning revolution.”

Here’s Kevin’s conversation with Fei-Fei about the continuation of that work.


KEVIN SCOTT: Tell me a little bit more about this work that you’re doing that sort of blends vision and language together, because that – that seems really quite exciting.

FEI-FEI LI: Yeah. So, it actually is a continuation or step forward from ImageNet. If you look at what ImageNet is, for every picture, we give one label of an object. Fine, that’s cool. You have 15 million of them. It becomes a large dataset to drive object recognition, but it’s such an impoverished representation of the visual world.

KEVIN SCOTT: Right. Yes.

FEI-FEI LI: So, the next step forward is, obviously, to look at multiple objects and, you know, be able to recognize more. But what’s even more fascinating to me is not the list of 10 or 20 objects in a thing, it’s really the story.

And so, right after the bunch of work we have done with ImageNet around 2014 when deep learning was, you know, showing its power, my students and I started to work on what we call image storytelling or captioning. And we show you a picture, you say that two people are sitting in a room having a conversation. That’s the storytelling. And that is a sentence or two, right?

And, honestly, I’ll tell you, Kevin, when I was in grad school in the early 2000s, I thought I wouldn’t see that happen in my lifetime, because it’s such an unbelievable capability humans have, to connect visual intelligence with language like that.


FEI-FEI LI: But in early 2015, my group – my students and I – published the first work that shows computers having the capability of seeing a picture and generating a sentence that describes it, and that’s the storytelling work. And we used, obviously, a lot of deep learning algorithms. Especially on the language side, we used recurrent models, like LSTMs, to train the language model, whereas on the image side, we used a convolutional neural network representation. But stitching those together and seeing the effect was really quite a “wow-wee” moment.


FEI-FEI LI: I couldn’t believe that I saw that in my lifetime, that capability.

KEVIN SCOTT: Yeah. I sort of wonder, like, whether or not these big, unsupervised language models right now, these transformer things that people are building, the models that come out of them, like, have such – they’re just very large and, like, there’s not much. You sort of barely have, like, any signal in the parameters at all. It’s, like, just diffuse across the entire model. I just wonder, like, whether getting, like a vision model coordinated with training these things is going to be the way that, like, they more concisely learn.

FEI-FEI LI: Oh, I see. Well, yeah, I mean, human intelligence is very multi-modal. So, multi-modality is definitely not only complementary, but sometimes is more efficient.


FEI-FEI LI: We should also just recognize that, by and large, these storytelling models are still fitting patterns. They lack the kind of comprehension, abstraction and deep understanding that humans have. They can say two people are sitting in a room having a conversation, but they lack the common-sense knowledge of the social interactions or, you know, why are we having eye contact, or whatever, right? So, there are a lot of deeper things going on that we don’t know how to do yet.


CHRISTINA WARREN: That was Stanford researcher Dr. Fei-Fei Li, talking about her organization AI4ALL and her work connecting vision and language.

Now let’s continue our conversation about AI with Sam Altman, who joined the show two years ago. Sam is the CEO of OpenAI and former president of Y Combinator.

KEVIN SCOTT: Yeah, Y Combinator is one of the most successful, if not the most successful startup incubators in existence. And OpenAI is a really interesting model. It’s an AI research and deployment company with a mission of ensuring that AGI (artificial general intelligence) is safe and benefits all of humanity.

CHRISTINA WARREN: And the recent news about OpenAI is that Microsoft announced in early November of 2021 the launch of the Azure OpenAI Service, which makes OpenAI’s machine learning models available on the Azure platform.

KEVIN SCOTT: Yeah, this is really exciting, because these models, like OpenAI’s GPT-3, are incredibly difficult to train. So, having them behind an API, available on a cloud like Azure, really helps democratize the power of those models, and gets them into the hands of people who can do the really interesting things with the models that the world needs them to do.

CHRISTINA WARREN: Let’s listen to Kevin’s chat with Sam Altman.


KEVIN SCOTT: And like, one of the interesting things that’s happening right now with these, you know, computers that we’re building to train very big models is that we are – like, computer architecture’s all of a sudden interesting again, and it hasn’t been for, you know, 20 years, maybe, 15, like a while.

SAM ALTMAN: Yeah. Yeah, that’s cool because there’s all these people that really want to work on that. They’ve had nothing to work on, which means we can get incredible talent focused on this.

KEVIN SCOTT: Yeah. Yeah, we’ve got all of these people who did high performance computing in the ‘90s who, you know, and like I – I was not an important person working on high performance computing in the ‘90s, but like, I was a compiler person. And like, I thought that none of the stuff that I learned in graduate school was ever going to be directly useful again. And like, here it is.

SAM ALTMAN: Here we are.

KEVIN SCOTT: It’s cool.

SAM ALTMAN: It’s really cool.

KEVIN SCOTT: It is really cool, and just a reminder of, like, how cyclical not just technology is, but history. I mean, like I don’t – how much do you think about, like, the historical corollaries for the disruption that we’re going through, like Industrial Revolution? You know, like, I think the steam engine’s a really fascinating example.

SAM ALTMAN: That’s a great one.

KEVIN SCOTT: Like, do you have any others that – because I know you’ve thought a lot about this?

SAM ALTMAN: Yeah. I mean, I think the analogs are the Agriculture Revolution, the Industrial Revolution, the Computer Revolution, and I think the AI revolution will be bigger than any of those three, bigger than all three of them together.

I love reading sort of firsthand accounts of people at the time as they were kind of going through those. There’s this great book called Pandemonium which is – it’s all primary source material of the Industrial Revolution as it was arriving. And many of the things that people say in that book could be said now about how people feel about AI. There’s no jobs; it’s going to take over; the machines are going to kill us; like, the future is going to be terrible; or like, it’s going to be utopia. It’s like, this is so amazing. Like, there’s nothing these machines can’t do.

KEVIN SCOTT: And the reality was some complicated thing in the middle.

SAM ALTMAN: And we always figure out something new to do. Like, the rate of – for instance, so one of the common themes in that book was, like, what are we all going to work on. The rate of job turnover is something like 50 percent of the jobs every 75 years, and this held remarkably constant. You know, it has, like, fits and spurts, but that’s held constant for hundreds of years. And like, we go – like, technology changes. Whole classes of jobs go away, and we find new ones. And they’re difficult to predict what they’re going to be, but like, I think the jobs this time will change a lot, but we’re going to find things to do, I’m pretty sure.
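Sam’s 50-percent-per-75-years figure can be read as a half-life for job categories. A quick back-of-the-envelope calculation (my framing, purely illustrative) shows how modest the implied year-over-year churn is:

```typescript
// If ~50% of job categories turn over every 75 years, treat 75 years as a
// half-life and solve for the steady annual turnover rate r:
//   (1 - r)^75 = 0.5  =>  r = 1 - 0.5^(1/75)
const halfLifeYears = 75;
const annualTurnover = 1 - Math.pow(0.5, 1 / halfLifeYears);

// ~0.92% of job categories turn over in an average year
console.log((annualTurnover * 100).toFixed(2) + "% per year");
```

Spread over any single year the disruption looks small, which may be part of why, as Sam notes, the rate has held remarkably constant even as whole classes of jobs disappeared and new ones emerged.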


KEVIN SCOTT: Yeah. So, what is the most exciting thing that you think is going to happen in AI over the next few years that you can talk about?

SAM ALTMAN: Well, I’ll give a few, because I think the interesting thing is the breadth of things that are going to happen. I think we’ll have language models where we can interact with computers with natural language in an amazing way that we just – that feels unimaginable, and that’s going to feel like intelligence. I think we’ll have robots that can do human dexterity levels of manipulation, and that’s going to be a huge impact on the world. I think computer games are going to get really good and fun to play. That’s a sort of small sample.

KEVIN SCOTT: Yeah. So, it’s exciting.

SAM ALTMAN: Totally. It’s amazing.

KEVIN SCOTT: And, you know, none of those things – and so, this is – this is sort of to my point, like, none of those things is like Commander Data from Star Trek: The Next Generation walking around, and still, useful stuff will happen.


KEVIN SCOTT: So, that’s the thing that makes me, like, super, super excited.

SAM ALTMAN: Totally.

KEVIN SCOTT: And if we get Commander Data, like, I’m excited about that, as well.

SAM ALTMAN: Might happen. (Laughter.) Probably not in the next couple years, but at some point.

KEVIN SCOTT: (Laughter.) Probably not.


CHRISTINA WARREN: That was the CEO of OpenAI, Sam Altman. Next up, Dr. Mae Jemison.

Dr. Jemison is a doctor, an engineer, a professor, a philanthropist, an entrepreneur, a writer, a dancer, and a NASA astronaut. She was the first African-American woman in space. Her list of accomplishments is long, and we encourage you to listen to her episode.

KEVIN SCOTT: Yes, she’s had an intimidatingly remarkable career. As an undergrad at Stanford, she majored in chemical engineering. She concurrently took graduate-level classes in biomedical instrumentation. And she also ended up majoring in African and African-American studies.

CHRISTINA WARREN: Right, and she also danced all the way through college. And so, Kevin, you asked her about this diversity of passions and academic pursuits, and how important it was to her, as a scientist, to draw upon this breadth of experience.

KEVIN SCOTT: Yeah, and she said it has been incredibly important, especially in helping her form her vision for the 100 Year Starship, which is an initiative to ensure human capability for travel beyond our solar system within the next 100 years.

Here’s Dr. Mae Jemison talking about interstellar human travel.


MAE JEMISON: What we do, what we see, even what we research and the questions that we ask, are based upon who we are and our experiences, right, what we’ve seen, what we’ve observed. So, coming back to some of the projects that I work on now, even 100 Year Starship, the title of the proposal that won this DARPA geek prize of the year, right, it was “An Inclusive Audacious Journey Transforms Life Here on Earth & Beyond.”

And that first word, inclusion, I doubt that it would have been there, if I was not the one leading the project, but the inclusion was not only across ethnicity, gender and geography. It was across disciplines, because you cannot solve a problem, like human interstellar flight, you cannot even start to approach it, without taking into account the full breadth and scope of human experiences.

So, 100 Year Starship is about making sure the capabilities for human interstellar flight exists within the next 100 years – capabilities, not building a starship or launching a starship, but having the capabilities. And the reason for that was the challenge that it requires, right, the radical leaps in knowledge that are required. We can’t ease up on this.

Why is that different than going to Mars? We’ve been to Mars a bunch of times, right? (Laughter.) There are some engineering challenges there, some life science challenges, but we can actually create a technology roadmap to get there.

I’m a little irritated I wasn’t on Mars, right? (Laughter.) That’s what I assumed when I was a little kid, growing up, right? At least I’d be on the moon. They’d just announced, you know, potential Artemis crew members, right, are going back to the moon for NASA. I’m trying to figure out how I missed the original one, and how I missed the other, 50 years later.

Interstellar is so different because of the vast distances, because of the enormous amounts of energy, for example, that you’d have to generate in order to cross those distances in any reasonable amount of time; the autonomy that has to be developed within a vehicle, within a system; what we have to know about the life systems, you know, from the microbiomes that help us to digest our food to the microbiome in the soil that helps plants grow. All of these kinds of things need to happen – even what makes us human, right?

So, people can come up with all these other things, or why is it important for humans to go, what do we learn by place, by physically being there, but even before you go, right, let’s not even think about that. How do you develop a public commitment and the will to support something like this? How do you develop the behavioral characteristics that are really needed on a starship, because I get to actually see the behavior becoming the long pole in the tent.

It’s not – it’s not going to be the tech. It’s going to be when I tell you I’m not going to do something after you wake me up out of suspended animation, right? (Laughter.) And I say, “Yeah, you know, how are we going to work as a team? I don’t know, but I’m not doing that.”

But it’s such a wide range of challenges, and each one of those challenges, if you think about it, and – and I did not go through all of them at all, but, you know, just think of the energy, how much energy, you know? So, we can’t do it through regular chemical propulsion. There’s not enough chemical propulsion in the solar system to get us there. We’ll have to do fission, fusion, antimatter.

We’re okay with fission, but we really don’t contain and control it really well, right? Fusion, we go back and forth with whether we can do fusion, right? And antimatter, we don’t know how to contain antimatter, but each one of them is an order of magnitude greater energy resource. And imagine what that would do to our world, if we learned how to generate, control and store that kind of energy, how it would impact us. The same thing with understanding the microbiome, the same thing with understanding investing financially in something like this – what is the return on investment?

So, when I look at all of this, and human behavior – don’t, let me not leave out human behavior, right? When I look at all of this, these are really the challenges that we face in the world today, in our world, on this planet. And if we don’t solve those, we have a problem.


CHRISTINA WARREN: That was Kevin’s interview with Dr. Mae Jemison, the first African-American woman in space. Now, let’s hear from a recent guest who also had a fascinating career path.

Ashley Llorens is a scientist, engineer, and hip-hop artist known as SoulStice. Ashley talks about his immersion in music growing up on the South Side of Chicago, and how a boom box with two tape decks served as his first recording studio. Those early efforts led Ashley to a career in the music industry, touring through Europe and Japan. And one of his songs was featured in the Oscar-nominated film, The Blind Side.

KEVIN SCOTT: Yeah, and simultaneously, Ashley pursued his career as a scientist. He enjoyed a 20-year career in research and development of AI technologies at Johns Hopkins Applied Physics Laboratory and recently joined Microsoft as a vice president, distinguished scientist, and managing director for Microsoft Research.

CHRISTINA WARREN: Here’s an excerpt of Kevin’s conversation with Ashley about the narratives surrounding artificial intelligence.


KEVIN SCOTT: The narrative is really important because it’s such an important technology, and – and it is having such a profound impact on what the future is looking like every day as it unfolds, that people need to be able to understand how to engage with it to sort of, like, what do I think about this technology? What do I think about policy about this technology? What do I think about, you know, like, my hopes and my fears for the future of this technology? So how – you know, have you thought much about, like, you know, the story of AI?

ASHLEY LLORENS: Yeah, absolutely. And maybe there’s a couple of sides – well, there’s many sides, but maybe two sides I’ll pick to explore there. One is absolutely the idea that AI is – is taking us in a bold new direction as a society. And I think it’s more important than ever that we can engage around these policy questions and really around the directions of AI, definitely outside of computer science and across disciplines.

And so, we do need to create narratives. Even more than that, I think we need to create directions that we agree on that we want to take this technology.

A lot of times, I think people are discussing AI as something separate from human beings and human intelligence, and I think we need to be thinking of these two things as complementary.

So, what are our goals for these things? You know, can we start to set some audacious goals around enabling as many people as possible on the planet to live a long, healthy life, creating an atmosphere of shared prosperity? And what is the role of AI in doing that? To me, these big societal narratives should be at the top level of abstraction in terms of what we’re talking about, and then everything else is derived from that.

I think when we – if we’re going to just let a thousand flowers bloom and see where we land on this thing, I think we could wind up with some really unintended consequences, you know, from that.

KEVIN SCOTT: Yeah, I really, really agree. And I think, you know, too, if you have the wrong narrative, you could have unintended consequences as well. Like, one of the things that I have been telling people over and over again over the past handful of years, as just sort of a useful device about thinking about the future of AI, is that AI, like especially its embodiment in machine learning, is a tool. And just like any other tool that people have invented, like, it’s a human-made thing and, like, humans use it to accomplish a whole wide variety of tasks.

And, you know, the tool is going to be as good or as bad as the uses to which we put it. And, you know, it’s just very, very important, I think, for us to, like, have a set of hopeful things that we’re thinking about for, you know, those uses of AI as, you know, we have our anxieties.

And both are important. Like, you have to – you know, it will certainly be used for bad things. But, you know, as with any technology, like, the hope is that there will be orders of magnitude more positive things and good things that people will attempt to do with it than the bad. And part of how we get to that balance of good versus bad is the stories that we’re telling ourselves right now about what it’s capable of and, like, what to be wary about.

ASHLEY LLORENS: I think – I think that’s right on point. And, you know, we can even ask ourselves, you know, what does it mean to behave intelligently as a species? I actually think we’re getting to the point where we can start asking ourselves and holding ourselves to, you know, to some standard there.

You know, if you just think about artificial intelligence at a low level, you know, from an agent standpoint, you know, I think intelligence, itself, is the ability to achieve goals, to set and achieve goals. And then what do you have to do? You have to be able to have some understanding of the world around you, you know, through some mechanisms of perception, whether that’s kind of our human modalities or other kinds of modalities. You have to decide on a course of action, you know, that best achieves your goals. And then you have to carry it out. Like, these are the things you do to be intelligent.

So, when you extrapolate that to us, as a species, because one of the hallmarks of human intelligence is our social intelligence, our ability to, you know, collectively set and pursue goals and things like that. So, I think –

I’m sort of – as you can see, I’m sort of cursed now to see everything through the lens of intelligence and, you know, artificial intelligence. This is just how I – my lens on the world, but I think it’s – I think it’s helpful. I think it’s useful. I think in order to behave intelligently as a species, we have to do some of these things that you’re talking about, setting some bold visions and directions and figuring out how to organize around those.


CHRISTINA WARREN: That was distinguished scientist and managing director for Microsoft Research, Ashley Llorens.

We have another story about the intersection of music and computer science. Earlier this year, we had the pleasure of meeting Kimberly Bryant, the founder and CEO of Black Girls Code, an organization that’s dedicated to promoting equal representation of black women and girls in the tech sector.

This October, in celebration of the “International Day of the Girl,” Black Girls Code announced their partnership with actress, singer and influencer Willow Smith to help amplify their message about the importance of getting girls into STEM.

KEVIN SCOTT: We talk a lot about the need to bring diverse voices and perspectives to build not only better technology, but a better democracy. Kimberly’s work is helping to do this by creating education and mentorship opportunities in areas like AI, robotics, virtual reality, gaming and blockchain.

CHRISTINA WARREN: Here’s an excerpt from Kevin’s conversation with Kimberly.


KEVIN SCOTT: You all are doing such great work. Like, I wonder, you know, if you could get the world to do anything that would, you know, get more women and more women of color into computing, like, what would that thing be? And, like, how can we all help support that?

KIMBERLY BRYANT: I think we need more organizational support, not just at Black Girls Code, but any organization that’s doing this work as a nonprofit. We can’t do it alone. So, for me, it’s always about how can we have this magnified effort of different organizations that are all working collectively to elevate girls in the STEM fields, particularly in computer science?

And so, that means, like, getting companies to volunteer to help support this. So, that means bringing in staff. Our classes are run by volunteers that work in the industry. So, getting folks to volunteer at organizations like Black Girls Code, getting individuals to both give as well as encourage their organizations and companies to give to support organizations like Black Girls Code, absolutely, positively, creating both mentorship and internship opportunities.

Those opportunities are transformational because it’s difficult to understand what a computer scientist does, if you haven’t done it. Like, very – I see this from my daughter. Like, when she’s in school, like, that’s totally different than when she’s in, you know, her internship and she’s on a team of engineers that are working on a product line, totally different experience and totally different way for her to develop this – this mindset of what a computer scientist really is and what that does.

And then I think really making sure that, once these girls are career ready, they’re graduating, that they can get a foot in the door, they can get an opportunity to work with a company like Microsoft and others, and have a fruitful career there, so, pushing on all those various levels, either via individuals that are just giving of their time and resources or really holding our companies accountable for providing these opportunities to get more women in the field.

KEVIN SCOTT: Yeah, it’s such a good and necessary push and, like, we should all be very, very grateful we have your leadership and your organization’s leadership out there, helping us all make progress on this.


CHRISTINA WARREN: That was the founder and CEO of Black Girls Code, Kimberly Bryant.

Now, let’s turn to another intersection of music and tech, this time with Grammy-award winning musician, Jacob Collier, and his collaborator, Ben Bloomberg.

Ben is a creative technologist who designs and builds everything from electroacoustic musical instruments to AI-driven performances and tours. Actually, we should call him Dr. Bloomberg, since he’s recently earned a PhD from MIT.

KEVIN SCOTT: That is such an accomplishment. And Jacob Collier is a multi-Grammy Award-winning instrumentalist, songwriter, arranger and producer based in London. Since we last spoke with Jacob, he’s added more Grammy awards for a total of 7 nominations and 5 wins.

CHRISTINA WARREN: Incredible. And since then, he’s been doing live performances with the likes of Coldplay, recording in Nashville for Djesse Vol. 4, and selling out venues in Europe and North America for his Djesse World Tour 2022.

I just saw a post of Jacob with Joni Mitchell. So, he got to hang out with her for what he called a “raucous evening of magic and music.” He said it was like being in the presence of Shakespeare.

KEVIN SCOTT: Yeah, me too. It’s just so awesome. And as for Ben, I had the opportunity to work with Ben Bloomberg on the Build Conference this year, Microsoft’s annual developers conference. He helped conference goers better understand how he’s using tech in his musical production work.

CHRISTINA WARREN: Well, I’m excited to revisit your conversation with Ben and Jacob. It’s such a great conversation that, in fact, we had to break it into two episodes. You talked about this technology that Ben designed and created called “the Harmoniser,” which allows Jacob to do these one-man live shows.

Here’s Kevin, Jacob and Ben talking about the impact of technological reliability and fallibility as it relates to creative and performative endeavors.


KEVIN SCOTT: You know, what I’ve done over the years with technology, like, you’re sort of building things, and you’ve got billions of people who use it. And so, like, you’re constantly worried about the fragility of things, and robustness, and fault tolerance and reliability, and whatnot, because, you know, the consequences of something failing are – like, you just impact a lot of people.

For you all, it seems to me that, you know, one of the special things about what you do with music is that, you know, done well, like, you are completely capturing someone in this immersive emotional state, and, like, mistakes, you know, like a cough, or, like, there are very easy ways to sort of pull you out of that immersion. And so, like in that sense, like, the stuff really has to be robust, right?

JACOB COLLIER: I think it’s a mixture of things being robust, and then leaving space for being spontaneous, you know. And I think that this is – this is something that I’m kind of forever indebted to Ben doing, was on that first Skype conversation, and in those kind of initial dreaming phases of the one-man show and the Harmoniser, and also other things, there was never a moment where it was kind of like, oh, no, no, that’s too – we can’t do that, that’s going to be too fragile, or that’s not going to work out, or that’s not reasonable for you to be expecting – you know, whatever. It was like, well, if it’s not possible, then we’ll find a way to make it possible.

And then I came to trust that process, not necessarily to end up where I was expecting it to, but, you know, there are a few different examples of things where we’d set out thinking, I want to be able to do this live, and then by the time we do it live it’s – it’s changed its nature.

I mean, I remember when we started with the one-man show, having about 10 different foot pedals across the whole stage. And I had to run around, hitting all the pedals as I would play each instrument, and then spring away from that instrument, but I figured out – well, what we figured out in trials was that if I hit that pedal even a fraction of a second after the downbeat, it would loop the following bar, you know. And so, there’s – there’s only so much processing that my mind can do in one go about when I hit a button, and also how there’s also only so much I can do physically with my body on stage, at one moment, and still be a human, and musical and give energy to a room.

And so, you know, that’s something that we – you know, we kind of looked at each other and said, “You know what, maybe we should just lose the pedals, and let’s have the loopers loop invisibly, and let’s just tell them when to start and stop looping.” And then I – my job would be to land, just to land in front of them at the right moment in the song, play them for the right length of time, and call that in my head, and run away and keep playing.

And so, you know, things – things do change, but I think that the thing about Ben is that there is always space for an idea to kind of be impossible for a little bit, because there’s a very important fragile moment when an idea is being had, where you can’t stamp on it and be too realistic. You have to dream. You have to say, this – no one could ever do this. Okay, let’s go and do it, you know.

And then, obviously, once you get started, that’s when kind of my wealth of experience of being guided by my idea of good and bad creatively steps in, and Ben’s massive wealth of experience about how things work the best, and what things work well, and what is a no-no, and what is a yes, those kind of come into fruition, you know, but there’s that, that lovely – there’s that lovely moment at the beginning where you just think, whoa, yeah, we could do this.

BEN BLOOMBERG: We can do it, let’s go for it. (Laughter.)

JACOB COLLIER: Yeah, let’s go for it. I’m curious, Kevin, actually, to ask you, as somebody who has so successfully sort of had ideas and implemented them, how do you have ideas, and how do you assemble people around you to help those ideas come to life, where the idea can be as sort of safe, but yet impossible, as it needs to be to be a good idea?

KEVIN SCOTT: Yeah, I think it’s really, really – I mean, it’s sort of the hardest thing about creativity, right, especially – I don’t know. Like, I – so, with engineers, I think there’s this mindset thing that maybe you’re even born with where you sort of look at the world in terms of, like, all of the things that are wrong. So, you’re just sort of constantly scanning things. It’s like, oh, this doesn’t work as well as it could, and, you know, like, this is broken and, like, needs to be fixed, which is both a good and a bad thing. It’s like a slightly jaundiced world view, but it’s also the thing that results in, like, you know, this determination and drive to go, like, make things better.

And I think there’s this moment when you get a bunch of technical people in the – or creative people in the room, and they come to a problem with this – they’ve got this set of tools that they have, like they have an understanding of, like, how the world works, which is just that understanding at a point in time, and they have – you know, and they have their experiences about what they’ve, you know, tried in the past and, like, which things have worked and which haven’t.

And like, it’s – it’s sort of hard at the beginning, I think, to overcome everyone’s past, so, because, you know, you’ll have a lot of people in the room who are like, oh, you can’t do that. That’s going to be too hard. That’s impossible. Like, I don’t know how to do that. And the thing that you have to do is figure out inside of those groups how you can – how you can give people the permission to, like, speak the daring, crazy thing and – and not immediately get shot down, where they feel safe. It’s like, oh, no, you just don’t want to tell people, oh, that idea is stupid.

So, like, part of it is about, you know, language and culture. Like, one of the ways that we – we really admire the growth mindset work out of this brilliant professor at Stanford, and, like, one of the things that we tell everyone is, we don’t want to be know-it-alls, we want to be learn-it-alls.

And so, if you think about all of this stuff as, like, a learning experience, it’s like you have an objective in mind, and, like, the process of, like, going towards the objective is learning how to get there, then you sort of wash away a bunch of this, you know, sort of cultural stuff that can blow ideas apart before you ever really understand whether they’re going to work or not. And I’m guessing that that sounds like you all sort of approach things very similarly.

JACOB COLLIER: Yeah, I would say – I would definitely say so, and it’s really – it’s lovely to hear you talk in those terms. When Ben and I, you know, for example, set out after doing this – this first show, which was at the Montreux Jazz Festival in Switzerland. We were opening up for Herbie Hancock and Chick Corea, and there were 3,000 people who had never heard of me or anything, or – which is completely new. It was completely new for me, and for everyone.

You know, having done that gig, we – we set out on this – on this tour, you know, our first tour ever, and it’s crazy. And – and the objective of that, if you – you know, if you think about it in those terms, was – was kind of unclear. You know, we wanted to have a good time and we wanted to – to play music, and – and we wanted to make people happy, but we didn’t really know past that, like, what exactly we wanted it – it to be like and feel and – and represent. You know, we didn’t know why we were doing it. There wasn’t really a reason that we – we were doing it, other than it just felt like the right moment to do it.

And so, Ben and I set out for a month of shows, maybe about – about 20 shows. And we had eight – eight bags, between the two of us, and that meant that we each had to carry four. (Laughter.) And so, I – I can clearly remember having a huge suitcase in my left hand, huge suitcase in my right hand, and then a great big, like, rucksack in my – on my back with all of the gear, all the computer stuff in it, and then, on my front, the – the stick bass, the – the, like, double-bass in a case that goes on a stand. And Ben was kind of equivalently bestowed.

And we would be waddling around the – the stage, and we waddled across the whole – the whole of the U.S. and then stayed on friends’ couches, and – and all sorts of things, and just feeling out what was good and what we loved the most, and then sort of building around that. And even over the course of that one tour, you know, there were lots of different things that we changed.

And I think, for me, having been used to a very kind of quick process of manifesting something that I like in a recording environment – you know, it takes me 2.5 seconds to change my mind and – and start something fresh. On – on the road, it happens at a different speed. And I guess, Ben, I’m interested to hear your thoughts on what this has been like for you and continues to be like.

But, you know, at the end of every show, I would say, “Right, we need to change these six things about these six songs, and we’re going to change the whole structure and remove the – let’s speed this one up, change the key of this one.” And – and Ben was really good at kind of, from my perspective, of letting me do that kind of processing, but also grounding me in the idea that we were building a show that had to run every night. And if we change everything about every show, every night, then you’re kind of starting from scratch.

And so, there was a real kind of mutual patience, I think, that we had to have about how that process evolved, and our – our sort of goal of working together sort of emerged slightly further, the – the more steps we put in the line, you know. But Ben, do you want – do you want to talk to that at all?

BEN BLOOMBERG: You know, I – I think what’s really interesting, especially sort of in the world of live performance, is generally, people are really risk averse. And so, every venue you go to, every house crew that you work with, people are – you know, people’s sort of reputations and everything is sort of dependent on how the – how the show goes.

And so, I think what we’ve evolved towards now is – is just starting the conversation, you know, every single time we go into a venue, saying, you know, look, let’s try, if we can, to set aside all of our sort of preconceived notions about what a show is and how you normally use your equipment, and things like that, and – and let’s, you know – and it’s actually, like, in big, bold letters on the front of the rider, you know, that says, like, “This isn’t a normal show. Let’s sort of work from base principles here.”

And so, I think that sort of mindset is – is what has really sort of pervaded and – and sort of evolved. And it took me a little bit. You know, I think – I think, Jacob, you know, you definitely stretched me a lot, because at the very beginning, it’d be like, oh my God, we can’t change that, you know, between the – between the shows.

I remember the – actually, the very first show at Ronnie Scott’s, which we actually did as a – as a rehearsal show before the big Montreux show. And it was like five minutes before the show started. And – and Jacob, you came up to me and said, “We need to change the playback,” or, “We need to change –” I forget exactly what it was. And I said, “No, no, we can’t change the playback right now.” (Laughter.)

Yeah, and it was – it was, like, the very first, you know – it was sort of the very first, like, sort of moment where we sort of had to say, okay, like, you know, we – we could change it, but every time we change something, there’s been a problem. And – and maybe we’ll get good at changing things down the road, you know, and really flexible, but so far, every time we’ve changed something, there’s been, like, a little hiccup. So, we could change it, but there might be a little hiccup. We just – we just don’t know, and we can’t test. So, like, what should we do, you know?

KEVIN SCOTT: Yeah. Well, what I’ve always tried to do with the technology products that my teams and I have built is you’ve sort of got these two things. One is, like, the – the faster and higher quality you can gather feedback, the quicker you can learn and the better you can make the thing that you’re trying to produce. And so, engineering your environment in a way where you can get that high-quality feedback as quickly as possible is, like, super important. So, like that’s one thing.

And then the other thing is, you know, if you are thoughtful enough, you can usually understand the sorts of risks that you need to be able to take. And then you build some systems to help you manage those risks, so that you can, like, walk right up to the edge of something, and even allow yourself to fail.

Like, a thing that we have in operations is this thing called MTTR, mean time to recovery. So, with software, I mean, like, both of you know this, like, there’s no way to produce bug-free software. Like, it is literally, from a theory of computation perspective it is – these are undecidable problems. Like, you can’t – you can’t compute a solution to them, no matter how powerful a machine you have. And so, you have to reconcile yourself to the fact that you’re going to produce things that will have errors in them.

And so, the – the question then becomes how do you catch as many errors as humanly possible before you throw something out into the world. And then, like, how, once, you know, knowing full well that things are going to get through, like how do you build your systems so that you can recover from failures quickly. And that is, for software, for products, like that’s a very useful way to think about the world. Like, it just lets you move faster than you otherwise would if you are constantly being crippled by the fear that you’re going to fail.
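The MTTR metric Kevin describes is just the average time between an incident starting and the system recovering. A minimal sketch of that calculation might look like this (the incident list and field names here are illustrative, not from any specific Microsoft tooling):

```python
def mean_time_to_recovery(incidents):
    """Compute MTTR in minutes from (start, recovered) timestamp pairs.

    `incidents` is a list of (start_minute, recovered_minute) tuples --
    a hypothetical, simplified incident log for illustration.
    """
    if not incidents:
        return 0.0
    # Downtime for each incident is recovery time minus start time.
    durations = [recovered - start for start, recovered in incidents]
    return sum(durations) / len(durations)


# Three illustrative outages: 30, 30, and 60 minutes of downtime.
log = [(0, 30), (100, 130), (200, 260)]
print(mean_time_to_recovery(log))  # 40.0
```

The point of tracking the average rather than chasing zero failures is exactly the one Kevin makes: since errors are unavoidable, optimizing how quickly you recover is the lever you actually control.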

BEN BLOOMBERG: (Laughter.) I think that’s really important, and what’s so special about Jacob is – is, you know, for all that the technology does, we really do have a lot of – we really do have a lot of wiggle room, because Jacob on stage can – can make just about anything, you know, feel amazing. And – and so, you know, there have been some pretty ridiculous sort of – I won’t call them failures, but moments where things didn’t quite go as expected. And if it were anybody else, it would – you know, it would have been sort of a train wreck.

And – and Jacob is able to take, you know, even the craziest things. I think, like one time in Germany, all the loopers started speeding up and going up in pitch because one of the video operators pushed – pushed the wrong button at the wrong time. And, like, nobody else can handle that stuff, but – but Jacob can make that sound musical and, you know, keep the audience having a good time. So, I think – I think, in that respect, we were sort of really uniquely positioned to try out some pretty risky stuff. You know, he kind of makes it possible. (Laughter.)

JACOB COLLIER: I guess from my – just one thing from my perspective on that would be that I think Ben and I have different kind of values, different – different experiential values of control, and when control is necessary.

And, you know, so I know, for example, that when it comes to precision of musical information, I’m quite – I’m quite controlled because I – I kind of tend to know the highest resolution position for this note to be for it to mean the most in the groove, for example. And I think less about, you know, things like – you know, is the flow from this element to this element within the tech going to work every night in – in a way that means I can – that we can all have a good time?

And I guess that means that there are elements of the tech which I’m stretching, from my perspective, and absolutely the same in reverse, where, you know, Ben for example will be very, very risk aware in some scenarios, but then will also highly encourage me to jump off my own creative rails, in a musical sense, and try new stuff the whole time, you know.

And I think if there’s one thing that comes from making music in one room for 10 years and then going on the road, having never done a gig, you know, I don’t tend to think about imperfections being great. You know, I tend to think about imperfections that aren’t to plan being things that I will kind of want to correct and make sure that they are right. It’s not that they’re going to be completely in a grid-based system all the time, but it’s where I want it, you know.

But one thing that I, you know, at first, was very kind of – I guess I was quite intimidated with touring, and now, I’m completely in love with about touring is that it’s one of the only moments of your life where you have no room to be anything other than just present. You just have to be present. And so, a lot of the mistakes and the imperfections, we’ve ended up designing the whole show to let those shine even more than we used to, you know. And so, I used to think that the best gigs we did were the one-man show or the gigs where I nailed all the instruments, but it’s just not true.

And I know you were saying about an environment where you can get instantaneous feedback, I mean, for me, that’s just – that’s going on tour.

BEN BLOOMBERG: It’s touring, yeah. (Laughter.)

JACOB COLLIER: And every – every night, we get a fresh round of feedback, and it doesn’t matter, it doesn’t really matter what the audience says to you after the show. You know, they might say, oh, that was a great show, we loved it, or it was rubbish, you know, but you tend to sense, just even standing on the stage and being you on stage, how that is going down. It was a real kind of quick learning process: people immediately responded to the moments where I wasn’t impossibly perfect at something, you know, impossibly good at something, the moments where there would be space for me to, you know, wiggle around, or something would go wrong.

I can think of gigs where, you know, someone would cough, and it would loop, every time, it would loop, as part of the loop [coughing] – you know, in the percussion loop because it was really quiet and someone would yell or scream or a plane would go overhead, or whatever. That would get, that would become part of the groove of the song. And that’s – that’s a fabulous challenge, musically, how do you make that make sense, but you have to be willing to look a little bit like a fool and just sort of embrace it.

And I think, for me, one thing I’ve learned really is how special it can be when everyone is kind of doing that together, you know, the audience and performer alike are both coping with the strange curveball, and alchemizing it into something that feels really, really great, you know?


CHRISTINA WARREN: That was Jacob Collier and Ben Bloomberg.

So, let’s end this way we end every episode of Behind the Tech, which is to ask guests what they do for fun.

KEVIN SCOTT: Yeah, this is really one of my favorite parts of the podcast. There is Dr. Peter Lee, who confessed to his love of simulated race car driving. Kimberly Bryant is a passionate gardener. And Dr. Mae Jemison loves cooking West African cuisine.

CHRISTINA WARREN: As for Justine Ezarik, it’s martial arts, specifically, Kali and jujitsu. Here’s iJustine:


JUSTINE EZARIK: When I walked into that gym, it was incredibly humbling because nobody cares who you are, what you do. It’s like when you’re on the mat, you’re there to learn, you’re there to train. And it’s just, for me, it kind of puts so many things into perspective, mostly because I wanted to learn everything so incredibly fast.

And, you know, as a white belt, like, you really – like, you think, yeah, I’m doing such a great job. And then somebody comes in and crushes you, and then you just go home crying. You’re like, oh, man, I know nothing. (Laughter.) And it’s like every single time, it’s like every time you level up and you learn something, that just causes another problem. But it’s just such a vast variety of things that you learn. And it’s so empowering when you actually are able to start leveling up and actually learning things. And it’s honestly one of the most rewarding things that I’ve ever done.

And I think I learned the most when I got injured for the first time because I was going too tough. I wanted to learn everything so fast. And that taught me to slow down and kind of enjoy the process. Like, you’re not going to learn everything in this first year. So, if you keep getting hurt, you’re not going to be able to progress. So, it’s like that injury taught me to kind of figure out what else I could do when I couldn’t be training.

And that’s when I started the other martial arts, Kali, which is more kind of, like, hand-to-hand combat with, like, various sticks and different weapons. I was doing that while I was injured. And then it’s kind of – that sort of has played into my life a lot, where if there’s a problem that I’m having, I’m like, okay, I, myself, cannot do this, but what else can I do to work around that issue to make it work? So, it’s like martial arts has taught me so, so much about, like, myself, and it’s just such a rewarding experience.

And I think from that, my biggest piece of advice to anyone is, like, have a hobby that you do outside of anything. And just go, set your phone down and just be. And I mean, that took me like 13 years to kind of figure that out. (Laughter.) And, you know, when I did, I was like, this really is life changing.


CHRISTINA WARREN: That was from Kevin’s conversation with actress, author and influencer, Justine Ezarik.

KEVIN SCOTT: Well, before we close, I just want to say thank you again to all our guests on Behind the Tech. Your ingenuity, compassion and dedication truly make an impact in the world. And we’re grateful that these folks take time away from that amazing work to chat with us.

CHRISTINA WARREN: Yes, thank you to all of our guests. And as always, thank you for listening. As 2021 draws to an end, please take a minute to drop us a note at [email protected], and tell us about who you’d like to hear from in 2022. Be well.

KEVIN SCOTT: See you next time.

