Behind The Tech with Kevin Scott - Behind the Tech: 2019 Year in Review


[MUSIC]

KEVIN SCOTT: All of us every day should be asking, like, how is it that the thing that I’m doing right now is going to positively accrue to the overall social good. Like, if you can’t answer that question in an affirmative way, then maybe you’re doing the wrong damn thing.

[MUSIC]

KEVIN SCOTT: Hi, everyone. Welcome to Behind the Tech. I’m your host, Kevin Scott, Chief Technology Officer for Microsoft.

In this podcast, we’re going to get behind the tech. We’ll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did. So, join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what’s happening today. Stick around.

CHRISTINA WARREN: Hello, and welcome to a special episode of Behind the Tech. I’m Christina Warren, Senior Cloud Advocate at Microsoft.

KEVIN SCOTT: And I’m Kevin Scott.

CHRISTINA WARREN: And today, we’re going to change things up a little bit with the format and do a Behind the Tech Year in Review.

KEVIN SCOTT: So, we had a bunch of incredibly good conversations this year with a bunch of really brilliant people. And we thought we would just spend some time sharing some of the highlights from those really interesting conversations with everyone.

CHRISTINA WARREN: Yeah, exactly. And so, what we’re going to do is we’re going to basically highlight a few of our fascinating guests. We’re going to replay some of our favorite quotes. We’re going to share some surprising moments, maybe even a few laughs.

First, we’re going to hear from Stanford University scholar Andrew Ng about topics like the democratization of AI. We’ll revisit our conversation with Jaron Lanier about challenges of social media and the, quote/unquote, “downfall of democracy,” and we’ll hear from a few more of our guests about the crazy things they do when they’re not, you know, like, changing the course of tech history or something.

[MUSIC]

ANDREW NG: With the rise of technology often comes greater concentration of power in smaller numbers of people’s hands, and I think that this creates greater risk of ever-growing wealth inequality as well. To be really candid, I think that with the rise of the last few waves of technology, we actually did a great job creating wealth on the East and West Coasts, but we actually did leave large parts of the country behind, and I would love for this next one to bring everyone along with us.

KEVIN SCOTT: So that was Andrew Ng. Andrew is founder and CEO of Landing AI, founding lead of the Google Brain Project, and a cofounder of Coursera. He really is a true leader in artificial intelligence and machine learning. And I can’t agree with him more.

CHRISTINA WARREN: He’s so smart. And that actually led to a really interesting discussion about computer literacy and bringing AI not just to tech companies, but to all industries. So, here’s more from Kevin’s conversation with Andrew Ng.

ANDREW NG: Once upon a time, society used to wonder if everyone needed to be literate. Maybe all we needed was for a few monks to read the Bible to us and we didn’t need to learn to read and write ourselves because we’d just go and listen to the priest or the monks. But we found that when a lot of us learned to read and write that really improved human-to-human communication.

I think that in the future, every person needs to be computer literate at the level of being able to write these simple programs. Because computers are becoming so important in our world, and coding is the deepest way for people and machines to communicate. There’s such a scarcity of computer programmers today that most computer programmers end up writing software for thousands or millions of people.

But in the future, if everyone knows how to code, I would love for the proprietors of a small mom-and-pop store on the corner to be able to program an LCD display to better advertise their weekly sales. So, just as with literacy, where we found that having everyone able to read and write improved human-to-human communication, I actually think everyone in the future should learn to code, because that’s how we get people and computers to communicate at the deepest levels.
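As a toy illustration of the kind of “simple program” Andrew has in mind (the display driver and the sale messages here are invented stand-ins, not a real sign’s API):

```python
# A toy version of Andrew's corner-store example: cycle weekly sales on a sign.
# print() stands in for whatever driver a real LCD sign would expose.
import itertools
import time

weekly_sales = [
    "Tomatoes 2-for-1 this week!",
    "Fresh bread half price after 5pm",
    "Loyalty card: 10th coffee free",
]

def run_sign(messages, seconds_per_message=5, cycles=2):
    """Show each message in turn, the way a small sign-controller loop might."""
    for msg in itertools.islice(itertools.cycle(messages),
                                cycles * len(messages)):
        print(f"[LCD] {msg}")           # stand-in for a real display driver
        time.sleep(seconds_per_message)

run_sign(weekly_sales, seconds_per_message=1, cycles=1)
```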

KEVIN SCOTT: I think that’s a really great segue into the main topic that I wanted to chat about today, AI, because I think even you have used this analogy that AI is going to be like electricity.

ANDREW NG: I think I came up with that.

KEVIN SCOTT: Yeah. I know this is your brilliant quote and it’s spot on. The push to literacy in many ways is a byproduct of the second and third industrial revolutions. They transformed society such that you actually had to be literate in order to function in this quickly industrializing world. So, I wonder how many analogues you see between the last industrial revolution and what’s happening with AI right now.

ANDREW NG: Yeah. The last industrial revolution changed so much human labor. I think one of the biggest differences between the last one and this one is that this one will happen faster, because the world is so much more connected today. So, wherever you are in the world, listening to this, there’s a good chance that there’s an AI algorithm that hasn’t even been invented yet as of today, but that will probably affect your life five years from now.

[MUSIC]

KEVIN SCOTT: I love this image that he evokes of a mom-and-pop store programming their own LCD sign. If we really are going to get to this vision of powerful technology shaping the future, we really have to have as many people as humanly possible involved in the creation of the technology.

And, you know, I think one of the ironic things about AI is that it can be one of those democratizing influences. Like, ironically, even though we sort of think about AI as this very arcane art that only a few people can practice, the reality over the next handful of years may be that AI itself facilitates more people being able to participate in the creation of technology than ever before. It might actually make programming easy.

CHRISTINA WARREN: Yeah, it’s really, really interesting to think about how, you know, instead of taking jobs away, AI could actually maybe even create new careers and new opportunities.

KEVIN SCOTT: Yeah, I mean, one of the things that we chatted about in our conversation is that we have a whole bunch of costs of subsistence like healthcare, for instance, that are growing faster than the gross domestic product. And, obviously, that’s not sustainable over time. The end state there is like we end up spending our entire national wealth on healthcare, which is probably not a great thing for society.

And one of the really interesting things, and we’re seeing this across a range of diagnostic applications, is that AI is allowing us to provide cheaper and more ubiquitous access to healthcare technologies. Like, for instance, there are AI apps right now that are able to read EKG charts, which is just a fraction of a cardiologist’s job, but able to read those charts with clinical levels of accuracy, which then doesn’t take away cardiologists’ jobs, but potentially frees up cardiologists to do higher-value things.

And, you know, the thing that I get really excited about with things like that is like whenever you go to your general practitioner and get a checkup every year, you don’t have a cardiologist like giving you like a full diagnostic and examination of your cardiovascular health.

But with these technologies, you could potentially have some sensors and a box sitting in your general practitioner’s office that could basically do a full cardiologic exam, which would be really awesome and something that you couldn’t do without the technology. Like it’s just economically infeasible to have that many cardiologists like spending their time that way.
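For the curious, here is a minimal sketch of the kind of model behind such EKG readers. It is an assumption-laden illustration, not any product’s actual system: the architecture, input shape, and labels are invented for the example.

```python
# Minimal sketch: a 1-D CNN that scores fixed-length ECG segments as
# "normal" vs. "abnormal". Everything here is illustrative, not clinical.
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),  # single-lead input
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool to one value per channel
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples), e.g. 10 s of single-lead ECG at 300 Hz
        return self.head(self.features(x).squeeze(-1))

model = ECGClassifier()
fake_batch = torch.randn(8, 1, 3000)   # stand-in for real recordings
logits = model(fake_batch)             # (8, 2) class scores
```

In practice a system like this would be trained on many labeled recordings and validated clinically; the point is only that the core pattern-recognition step is a small, cheap model rather than a cardiologist’s time.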

CHRISTINA WARREN: And you actually asked Andrew whether he’s seen those trends in other domains. Let’s take a listen.

[MUSIC]

ANDREW NG: I think there’ll be a lot of partnerships between AI teams and doctors that will be very valuable. You know, one thing that excites me these days with the theme of things like healthcare, agriculture, and manufacturing is helping great companies become great AI companies.

I was fortunate, really, to have led the Google Brain team, which became, I would say, probably the leading force in turning Google from what was already a great company into the great AI company it is today. Then, at Baidu, I was responsible for the company’s AI technology and strategy and team, and I think that helped transform Baidu from what was already a great company into a great AI company.

I think Satya really did a great job also transforming Microsoft from a great company to a great AI company. But for AI to reach its full potential, we can’t just transform tech companies; we need to pull other industries along, for it to create this GDP growth, for it to help people in healthcare, to deliver safer and more accessible food to people.

So, one thing I’m excited about, building on my experience helping with Google’s and Baidu’s transformations, is to look at other industries as well, to see whether, either by providing AI solutions or by engaging deeply in AI transformation programs, my team at Landing AI can help other industries also become great at AI.

KEVIN SCOTT: Well, talk a little bit more about what Landing AI’s mission is.

ANDREW NG: We want to empower businesses with AI. There is so much need for AI to enter industries other than technology, everything ranging from manufacturing to agriculture to healthcare, and so many more. For example, in manufacturing, there are today in factories sometimes hundreds of thousands of people using their eyes to inspect parts as they come off the assembly line, to check for scratches and things and so on.

We find that we can, for the most part, automate that with deep learning, and often do it at a level of reliability and consistency that’s greater than people can sustain. Squinting at something 20 centimeters away all day, it turns out, is actually not great for your eyesight, and I would love for computers, rather than often these young employees, to do it.

So, Landing AI is working with a few different industries to provide solutions like that. We also engage companies with broader transformation programs. For both Google and Baidu, it was not one thing; it’s not that you implement neural networks for ads and suddenly it’s a great AI company.

For a company to become a great AI company is much more than that. And having helped two great companies do that, we are trying to help other companies as well, especially ones outside tech, become leading AI entities in their industry vertical. So, I find that work very meaningful and very exciting. Several days ago, I tweeted that on Monday I literally woke up at 5:00 AM so excited about one of the Landing AI projects that I couldn’t go back to sleep. I got up and started scribbling in my notebook. So, I find this work really, really meaningful.

[MUSIC]

CHRISTINA WARREN: You know, when we talk about AI and you’ve had so many great conversations with people about AI, one of the big things that comes up is how to make sure that, you know, we’re doing this the right way.

KEVIN SCOTT: Yes, and we’ve had a bunch of conversations about that this year. So, one of them was with Fei-Fei Li.

[MUSIC]

FEI-FEI LI: Whenever humanity creates a technology as powerful and potentially useful as AI, we owe it to ourselves and our future generation to make it right.

CHRISTINA WARREN: So that was Fei-Fei Li, a pioneering researcher in AI.

KEVIN SCOTT: Fei-Fei is an incredible computer science researcher; she helped create, in large part, the entire discipline of modern deep neural networks. So, she created this thing called ImageNet back in 2009.

And ImageNet was the thing that allowed all of the breakthroughs in image classification that everybody’s so excited about now to actually happen. So, she’s a computer science professor at Stanford University, and right now perhaps she’s best known as the co-director of the Institute for Human-Centered AI at Stanford. I was actually lucky enough to be part of the advisory council that helped establish the institute. So, I asked her to tell us about some of the work that’s going on there.

[MUSIC]

FEI-FEI LI: So, first of all, I think the institute that both of us are involved in is really laying out a framework of thinking about this, and the framework is human centered. It’s that from the get-go, from the design and the basic science development of this technology all the way to the application and impact of this technology, we want to make it human benevolent.

And with this framework in mind, the institute we have at Stanford works on three founding principles that cover different aspects of human-centered AI.

The first principle, which is actually what we’ve been talking about, is to continue to develop AI technology, the basic science and technology, that is human-inspired, betting on the combination of cognitive science, psychology, behavioral science, and neuroscience to push AI forward, so that the technology we will be using has better coherency, or better capability, to serve human society.

KEVIN SCOTT: Right.

FEI-FEI LI: So, that’s the first principle. On the second principle, I would love to hear your thoughts. You know, you and I were trained as a generation of technologists for whom this technology was solidly considered an engineering field, or a computer science field. But I think AI really has turned a chapter. AI is no longer just a computer science field.

KEVIN SCOTT: Correct.

FEI-FEI LI: AI is so interdisciplinary today. In fact, some of the most interesting fields that AI should really contribute to, and also welcome to join forces with, are the social sciences and humanities. And at Stanford, we’re already seeing collaboration between AI researchers and economists, ethicists, philosophers, education experts, legal scholars, and all that.

To do this, our goal is to understand what this technology is really about, understand its impact, but also forecast: anticipate the perils, anticipate the pitfalls, anticipate unintended consequences, with the eventual goal of guiding and recommending policies that are good for all of us. So, that’s the second principle: really understand, anticipate, and guide AI’s human and societal impact.

The third, and last but not least, principle is something I know you and I feel passionate about: really to emphasize the word “enhance” instead of “replace.”

KEVIN SCOTT: Correct.

FEI-FEI LI: Because AI technology is often talked about as a technology to replace humans. I think we should stay vigilant about job displacement and the labor market, but the real potential is using this technology to enhance and augment human capability, to improve productivity, to increase safety, and really, eventually, to improve the wellbeing –

KEVIN SCOTT: Right.

FEI-FEI LI: –of humans. And that’s what this technology is about. And here, we’re talking about healthcare. Another vertical that we put a lot of passion and resources into is education. Sustainability, manufacturing and automation: these are really humanly and societally important areas of development.

KEVIN SCOTT: Yeah. Well, just sort of sticking with healthcare and like your eldercare example, like, this is something that I don’t think a whole lot of people spend time thinking about unless they’re taking care of an elderly parent or relative.

Like, we’re not thinking about, like, how systemically we can make the lives of elderly people better. And, like, we’re certainly not thinking about the big demographic shifts that are about to come – (Crosstalk.)

FEI-FEI LI: Oh, my god, it’s going to come globally.

KEVIN SCOTT: Yeah, globally. I mean, so, you and I have chatted about this before, but you know, we sort of see in almost all of the industrialized economies, but also in Japan, Korea, and China –

FEI-FEI LI: Yeah, absolutely.

KEVIN SCOTT: –you have this very large bubble of working-age population that’s getting older and older. And we just don’t have high enough fertility rates in these younger generations to replace it.

So, at some point, like, we – across the entire world – we’re going to have far more old people than we will have working-age people. And you have like a couple of big questions when that happens. Like, who takes care of all the old people, and like, who’s going to do all the work? And it’s actually not so far away that we can afford not to think about it.

FEI-FEI LI: 2035 is –

KEVIN SCOTT: Yes.

FEI-FEI LI: – I think – we’d have to find the actual number, but that’s when the last of the Baby Boomers, the youngest of them, join the aging population. So, we’re very close to that. And also, to do this research on the aging population, I spend a lot of time in senior homes and senior centers. One thing I learned as a technologist is that we should really develop the kind of empathy and understanding of what we really are working on and working for. For example, I cannot tell you how many Silicon Valley startups are out there to create robots as senior companions. And when some of them feel robots can replace family, nurses, friends, I really worry. And I really want to encourage these entrepreneurs to spend a lot of time with the seniors.

KEVIN SCOTT: Yeah.

FEI-FEI LI: One thing I learned about wellbeing in the aging population is that dignity and social connection are the biggest part of aging. And so my dream technology is something that you don’t notice, but that’s quietly there.

KEVIN SCOTT: Correct.

FEI-FEI LI: To help, to assist, to connect people, to ensure safety. Rather than this big robot, you know, sitting in the middle of the living room and replacing the human connectivity.

KEVIN SCOTT: Yeah, it’s really funny that you’re bringing all of this up. I’m writing a book right now on why I think people should be hopeful about the potential of AI, like, particularly in rural and middle America. And for the book, I went back to where I grew up in rural central Virginia, in, like, this, you know, very small town.

And I visited the nursing home where three of my grandparents spent the last chunk of their lives. And I was just chatting with some of the people there, the nurses and the managers in this place, and I asked them: what do you think of AI? And like, when I say AI, the vision that conjures is, like, oh, there’s going to be some human-equivalent android coming in. And they’d be like, no, the residents would be terrified by this thing. Whereas, like, they’ve got a bunch of things that need doing – like dispensing medicine, for instance.

FEI-FEI LI: Exactly.

KEVIN SCOTT: Like, you know, when you’re elderly, like, you’re taking this, like, complicated cocktail of medicines and, like, getting it dispensed in the right amounts at the right time through the day, making sure that you actually take the medicine.

Like, that’s a problem that we could solve with AI-like technologies, like, you know, combination of robotics and computer vision. But it wouldn’t be like this talking, walking, you know, robot. It would be, like, a set of things that sort of disappear into the background and just sort of become part of the operation of the place.

FEI-FEI LI: Absolutely.

KEVIN SCOTT: And, like, that I think we should have more ambition for that sort of thing rather than this –

FEI-FEI LI: Absolutely.

KEVIN SCOTT: You know?

FEI-FEI LI: That’s why Stanford HAI wants to encourage that. The best technology is you don’t notice the technology.

KEVIN SCOTT: Yes.

FEI-FEI LI: But your life is better.

KEVIN SCOTT: Yes.

FEI-FEI LI: That’s the best technology.

KEVIN SCOTT: I could not agree more.

[MUSIC]

KEVIN SCOTT: That conversation with Fei-Fei was amazing, and it makes me really happy that we have people as technically brilliant as her who also are like thinking in a deeply conscientious way about how AI technology and technology in general are developing.

CHRISTINA WARREN: Now, I couldn’t agree more. I think it’s really imperative that we don’t just have, as you said, the technical minds, but people who are thoughtful and are thinking about the implications and how to, as we said kind of at the top, do these things the right way.

KEVIN SCOTT: Yeah.

CHRISTINA WARREN: So next up, we’re going to revisit our conversation with Reid Hoffman. And Reid is an angel investor. He’s the cofounder and executive chairman of LinkedIn, he’s the author of three books, including his most recent one, which is called Blitzscaling, and he’s now a partner at the venture firm Greylock. And I know he’s also a good friend of yours, Kevin.

KEVIN SCOTT: Yeah, he’s one of my – one of my best friends. And so, in my conversations with Reid, which are always delightful, we talked about a book by Robert Wright called Non-Zero: The Logic of Human Destiny. It was sort of funny; I read this book a while ago when I was in grad school, and it was one of the things that Reid and I bonded over, because he, too, had read the book early on and was a big fan of it. And so, I asked Reid to explain to us this notion of non-zero sum.

[MUSIC]

REID HOFFMAN: So, fundamentally, it’s roughly does the pie grow or not? Right? And in a zero-sum game, it’s whatever I win, you lose, right?

KEVIN SCOTT: Right.

REID HOFFMAN: So, if there’s 100 – you know, units, if I get 52, you get 48. If I get 55, you get 45.

KEVIN SCOTT: Right.

REID HOFFMAN: A non-zero-sum game is one where we figure out a way that, well, actually, I may still get 55, but maybe you get 50. Right? And so, the pie grows.

KEVIN SCOTT: Right.

REID HOFFMAN: And I actually think one fundamental part of ethics that should go across all ethical systems, all value structures is we should prefer non-zero-sum games.

KEVIN SCOTT: Correct.

REID HOFFMAN: Right?

KEVIN SCOTT: Yeah, and like that’s one of the points that I make in this book that I’m writing right now is that like when you’re thinking about how you apply AI, like you should pick important zero-sum societal problems and try to figure out like how to put a set of incentives and policies in place to convert those zero-sum social good things into non-zero-sum games through the use of AI.

REID HOFFMAN: Yeah.

KEVIN SCOTT: And, like, health care potentially is one of them where, you know, like we have this like very interesting system right now where like we have a finite amount of our gross domestic product we can spend on health care. You can spend all of it – I think you’re the one who actually said this to me, you can spend 100 percent of GDP and still not solve the fundamental problem, which is we’re mortal.

And so, like it’s this terrible zero-sum game in a sense because you’re always having to decide, you know, what the tradeoffs are. And like anything that we can do technologically to like create abundance in this system to like get better diagnostics cheaper for everyone to like be able to sort of influence things in a way where you’re getting better healthcare outcomes for everyone, like, we should be doing.

And it’s like hard to do without some flavor of technology, like whether that’s AI or something else.

REID HOFFMAN: Yeah, that’s what progress looks like. That’s the universe we all want to be in because we’ll all be better off.
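To make the pie metaphor concrete, here is a toy rendering of the payoffs Reid used in the exchange (the numbers are his made-up examples, nothing more):

```python
# Zero-sum: 100 units split between two players; one's gain is the other's loss.
zero_sum = {"me": 55, "you": 45}
assert sum(zero_sum.values()) == 100     # the pie is fixed

# Non-zero-sum: cooperation grows the pie, so both payoffs can rise together.
non_zero_sum = {"me": 55, "you": 50}
assert sum(non_zero_sum.values()) > 100  # the pie grew to 105
```

The ethical preference Reid argues for is simply: among available games, choose the ones where the total can exceed 100.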

[MUSIC]

KEVIN SCOTT: So, Reid is also a real thought leader in AI. I asked him what he’s thinking about right now in terms of how we should be influencing AI in academia and industry to produce positive outcomes for society.

REID HOFFMAN: So, I think – let’s see, two things. So, one is to realize that while our natural reflexes from evolution always go to fear and worry first, there’s massive opportunity, and that we’re playing for that. And that opportunity could be like what you were mentioning earlier, medicine. It can be – actually, in fact, if we transform a whole bunch of productivity, that can do anything from what your book could be describing, which is to allow creativity and innovation in rural towns. Like, there’s an enormous amount of great things that can happen.

KEVIN SCOTT: Right.

REID HOFFMAN: We should play for those, and we should not lose sight of that’s what we’re playing for.

KEVIN SCOTT: Right.

REID HOFFMAN: And so that’s kind of a – look, how do we maximally benefit? Like, how do we become better, public intellectual, through this?

And then think about like, okay, how do we steer around – not stop, but how do we steer around the pitfalls? Like, what do we say, well, look, if we’re going to have a whole bunch of displaced driving because of autonomous vehicles, what are our ideas about what to do about that and have a healthy society where people have paths to better lives and what are the ways we help with that? What are the ways we facilitate?

And so part of the reason why, you know, you were part of the task force, thank you, for – and part of the advisory council for the Stanford Human-Centered AI is, like, well, how can we catalyze across industry, across government, across academia to say, “These are the ways that we steer towards good outcomes and avoid bad outcomes.”

[MUSIC]

CHRISTINA WARREN: That was such a great conversation. If you haven’t heard the whole episode, I’d definitely encourage anyone who’s listening now to go back and listen to that because there was so much good stuff there.

KEVIN SCOTT: Yeah, and conveniently, if you want to hear more from Reid, he has this amazing podcast called Masters of Scale, where he talks with a whole bunch of the really great entrepreneurs and technologists in Silicon Valley about how they have solved different aspects of the scaling problems for their companies as they were going from just an idea to something that was amazingly impactful.

CHRISTINA WARREN: Yeah, it’s a great podcast, so check out Masters of Scale as well.

KEVIN SCOTT: One of the themes that we’ve been grappling with over the past couple of years, and that played heavily in many of our podcast recordings, is this narrative we’re telling about AI.

You know, is it the portrait of Rosie in The Jetsons, or Data in Star Trek: The Next Generation? Or is it HAL in 2001: A Space Odyssey, or the Terminator in the Terminator movies?

CHRISTINA WARREN: No, exactly. I mean, and that’s – it is a really interesting conversation we’ve had, because we’ve talked about the need for positive stories about AI to balance some of the doom and gloom that is, arguably, maybe more narratively interesting, but maybe not so good for how we want to be thinking about this kind of game-changing technology that’s going to impact all of us.

KEVIN SCOTT: Yeah, we are, at the end of the day, the stories we tell. And like we need to be telling some positive stories to help push the development of AI in a positive direction.

So, I had this great conversation with Sam Altman, who’s the CEO of OpenAI. He ran Y Combinator before OpenAI and started his first company at the age of 19. I really think of him as an entrepreneurial prodigy.

He says one of the most valuable classes he took during college was a creative writing course, which is really interesting for someone who is the CEO of like one of the deepest technology companies in the world.

So the conversation that you’re about to hear is us chatting about the arc of the development of AI from the time when he first started programming to some of the most advanced stuff that’s happening today at the very cutting edge of AI research and development.

[MUSIC]

KEVIN SCOTT: So, when did you start thinking about AI?

SAM ALTMAN: Well, as an undergrad. When I was 18, I made this list of things I wanted to work on, and AI was the top. But I took the AI classes at Stanford, and it was, like, clearly not working.

KEVIN SCOTT: And why when you were 18? So, at 18, when this was 2000?

SAM ALTMAN: 2003. I was born in ‘85. But…

KEVIN SCOTT: So, AI in 2003 was not what it is now.

SAM ALTMAN: Well, I think everybody – like, most everyone who grew up reading sci-fi wanted to make AI. Like, this is kind of – it just feels like we’re all on this inevitable path, and that’s where it’s going. And it’s like the most interesting thing to work on, but it just didn’t feel like there was an attack vector.

And then in 2012, it started to work. And then, in 2015, which was when we started talking about creating OpenAI – we started in early ‘16 – it felt like not only was it going to work, but it might work much better and much faster than we thought, because there had been this one trend of just scaling things up that kept working. And again, as I’ve mentioned, this has been, like, the central learning of my career. The asterisk to that, though, is that humans have apparently not evolved to be good at guessing how exponential curves are going to play out.

KEVIN SCOTT: Yeah.

SAM ALTMAN: And so, when you scale these things up, if they’re, like, you know, doubling every year – in the case of AI, maybe 8x every year – we don’t have good intuition for that. And so, people are never bullish enough if the curve is going to continue.

KEVIN SCOTT: Yeah.

SAM ALTMAN: And so, it was like, huh, maybe this is really going to work.
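For a sense of why intuition fails here, you can just compound the two rates Sam mentions; the growth figures are his rough, off-the-cuff numbers, not measured values:

```python
# Compounding 2x/year vs. 8x/year over five years.
for rate in (2, 8):
    print(f"{rate}x per year for 5 years -> {rate ** 5:,}x total")
# 2x per year for 5 years -> 32x total
# 8x per year for 5 years -> 32,768x total
```

Both curves are “exponential,” but after five years one is a thousand times steeper than the other, which is exactly the gap human intuition tends to miss.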

KEVIN SCOTT: But AI is – AI is like a tricky – a tricky thing, you know, in the sense that we – the term artificial intelligence, like, wasn’t really coined until the Dartmouth workshop in, what, ‘55, ‘56?

SAM ALTMAN: Something like that, and they thought they were going to get it done that summer.

KEVIN SCOTT: Oh yeah, they were completely convinced. Like, if you read those documents, like, they had this list of things and they were just sort of convinced that the progress was going to be much faster than it actually was. And like, we have had a couple of booms and busts now, you know, where you can actually go to Wikipedia and look up AI Winter, and like, the bust has a name.

SAM ALTMAN: Yup.

KEVIN SCOTT: So yeah, one of the things – and I’m just, for what it’s worth, like, I am a – I’m in the optimist column here.

SAM ALTMAN: Booms and busts are the way of the world. Like, you know, we talked earlier about startups. Like, we’ve got a lot of booms and busts there, but the curve, though it squiggles, if you zoom out enough, it goes up and to the right.

KEVIN SCOTT: Yup.

SAM ALTMAN: And the curve of computers getting smarter does, too. Now, how much further we have to go, when we’re going to get there, very hard to say. What I can say with confidence is maybe the current trends don’t get us all the way to general intelligence, but they’re going to get us surprisingly far. They’re going to change the world in very meaningful ways, and maybe they go all the way.

KEVIN SCOTT: Yup. And so, like, I’m interested to go back to this whole creative writing thing because, like, I think the storytelling around AI is, like, one of the really, really interesting things right now, like getting – because you guys – so, OpenAI is a non-profit organization that is committed to, like, realizing artificial general intelligence, and for having the value that AGI creates sort of accrue to the public good.

SAM ALTMAN: To be clear, we have not figured out the storytelling yet. I agree it’s really important. I think about this stuff all day. I can barely, in my own head, think clearly about what the world really does look like if AGI happens. You know, all of the stories I can tell are either like too mundane or too grandiose.

KEVIN SCOTT: Yup.

SAM ALTMAN: It’s like, either like, oh, medicine gets better, or it’s like, sentient beings colonize the universe until the heat death. And sort of neither of those quite feel right.

KEVIN SCOTT: Right. And people get – and people get really, really – you know, I know one of the things that you have – you’ve said is something about, you know, “the light cone of all” –

SAM ALTMAN: People don’t like that.

KEVIN SCOTT: Yeah. And like, people get really upset about the grandiose things, which sort of makes them miss all of the, like, really concretely useful things that this stuff is going to do with 100 percent predictability over the next few years.

SAM ALTMAN: Yeah. If you’re going to – you’re going to – if you’re doing anything interesting, you’re going to have a lot of haters, and you may as well, like, say the thing you actually believe. (Laughter.) So, I could, like, try to sort of, like, you know, figure out exactly how to calibrate this sort of, like, somewhat dishonest version of what I believe the future’s going to look like, or I could just say, like, “Here’s what I actually think. I might be wrong, but here’s what I genuinely think,” and not try to under or oversell it. And that’s what I actually think.

KEVIN SCOTT: So why do you think that?

SAM ALTMAN: It is possible that there is a very – actually, I don’t even think it’s that unlikely. I think there is a reasonable chance that there is something very deep about consciousness that we don’t understand, or we’re all Boltzmann brains and none of this is real, or whatever. But if not, if physics as we understand it works and everything is just sort of an emergent property in our brains of this very large computer we have, then I think that will be replicable in silicon. And I don’t, like – I still think that’s the more likely outcome.

[MUSIC]

CHRISTINA WARREN: All right, so we’ve been talking a lot about AI in society, which is obviously really important and has been a big trend, I think, not just on our podcast, but at a more macro level in all of tech this past year. But let’s extend that conversation into social media and the Internet. And so here are a couple of sound clips from our guests, beginning with Danny Hillis.

DANNY HILLIS: Social media, you know, was never by itself the thing that was going to, you know, advance humanity. Nor is it the thing that’s going to destroy humanity. It’s actually a tool that we’re going to learn to use with time, like we learned to use fire.

JUDY ESTRIN: If you’re talking about failure that is not just inconvenient, but can be harmful, you can’t just say, “Oops.” Right? So, whether it is a self-driving car that hits somebody, or a social media system whose implications threaten democracy, we’re not allowed to just say, “Oops.” We have to be thinking about the consequences of the technologies that we are bringing to market, because we are now in the center of everything in our lives.

JARON LANIER: I’m worried about where we are right now. I just feel like our present Internet is destroying societies and democracies and economies. I think it’s bringing down civilization. It’s bad. We really screwed this thing up.

[MUSIC]

CHRISTINA WARREN: So, that was Danny Hillis, the pioneer of supercomputing who created the company Thinking Machines, and then we heard from Judy Estrin, who’s one of the original co-creators of the Internet, which is just amazing –

KEVIN SCOTT: Amazing.

CHRISTINA WARREN: And, like, I mean, that’s so cool. That’s – yeah, that’s unreal. And finally, we heard from Jaron Lanier, who is basically considered by many people to be the father of virtual reality. Like, I think he actually coined the term “virtual reality.”

KEVIN SCOTT: He did, and so looking at this list of guests, it’s sort of shocking the people that we got to talk to. Like, the person who invented a big chunk of modern computing and these amazing supercomputers. Like, one of the people in the labs creating the Internet protocols and the person who like played perhaps the biggest role in the creation of virtual reality, it’s sort of crazy.

CHRISTINA WARREN: It is. You have really good friends, and we’re all really lucky that we get to, like, basically leech off of your friendships for these great conversations.

KEVIN SCOTT: You know, so the interesting thing about Jaron in particular is that Jaron, in addition to being an amazing computer scientist and inventor and one of the people who helped create virtual reality, is also a musician and an author, and today spends a bunch of his time thinking about humanism and sustainable economics in the context of all of the technology that we’re using in our day-to-day lives.

[MUSIC]

JARON LANIER: I’m not sure if I coined “data dignity,” by the way. I think either Glen Weyl, or maybe even Satya Nadella did. “Digital dignity” was a term – it was going to be the title of Who Owns the Future, but the editor didn’t like it, so I changed it to Who Owns the Future.

At any rate, so this is a whole long tale, as well. In the ’80s and ’90s there were a couple of really vociferous, intense movements within hacker culture, within technical culture, about what networking should be like whenever it really comes about.

One of them was this idea that everything should be open and free, and that started from a number of sources. One of them was a guy who is a friend of mine, Richard Stallman, back in Boston. And there were a few other source points for that, as well. And then another was this kind of intense libertarian philosophy that governments shouldn’t be involved, and that we should leave everything to entrepreneurs.

And in the late ’80s and early ’90s, I ended up spending time with somebody named Al Gore, who was at that time a Senator from Tennessee. He eventually became Vice President. And he was really interested in these things, and he came up with this idea of throwing some government money at people with nascent packet-switched networks to bribe them to become interoperable. And that was the Internet. So that was funded by the Gore Bill.

And so, we used to debate what this thing should be, and because of those two extremely intense dogmas, there was this feeling: well, it will be minimalist. It won’t have accounts, for instance. It won’t represent people. That will be left to private industry. There won’t be any persistent data on it. That will be left to private industry. There won’t be any transactions on it. That will be left to private industry, and on and on and on. There won’t be any memory on it. There won’t be any contextualization on it. That will be left to private industry.

And I remember saying to him we’re creating this gift of many hundreds of billions of dollars to persons unknown, because there will be natural network monopolies that form to fill these obviously needed functions, but whatever. That was – there was just this feeling that that was the better way to do things. And since the experiment wasn’t run the other way, we don’t know.

But then the other one, “everything should be free,” I think sent us down a terrible path, because it feels all socialist at first. It feels like this friendly, socialist, lefty thing, but since it’s being mixed with this libertarian philosophy, you end up with only one possible business plan, which is advertising.

So, everything feels free, but actually the money is made by third parties who want to influence the users using user data. And it ends up – it starts cute and ends up evolving into this sort of monstrous, universal behavior-modification scheme. Anyway, this is the stuff I talk about all the time, where I think we’ve gone wrong and we’ve created a network that’s more about deception than it is about reality.

KEVIN SCOTT: So, what do you think we can do about that?

JARON LANIER: (Laughter) Well, we’re kind of in a pickle now, to use an expression from when I was a kid. It’s tricky. I mean there are a lot of schools of thought about it. I think we can’t combine socialism and libertarianism in the awkward way we did and expect to get anything useful. And I think we should just choose one of them. And I personally think we’re better off choosing markets.

KEVIN SCOTT: So, you’ve been working on a bunch of concrete things to try to figure out like how to introduce these new incentive structures. Can you elaborate on that a little bit more?

JARON LANIER: Yeah. Well, the problem is how to get from here to there. I kind of have in my head an image of what a society would be like with paid data. There are a few things to say about it. One is, there are a lot of people out there who pretend to be making a living online but aren’t, because they’re fakers. It’s all a big illusion. It’s what we used to call a Horatio Alger illusion, where you create this illusion that there’s this way of making a living when, in fact, there isn’t. It’s only there for a token, small number of people.

However, there’s another population of people out there who are making a living, not within the rules dictated by a central hub, but as independent actors. For instance, there are tens of millions – we don’t know the total number, but at least 50 million people in the world – who are actually making a living delivering online video lessons and counseling and guidance. This is anything from legal consulting to religious training, to yoga teachers, to musical instrument teachers.

All those people have sort of cobbled together something that has to fight against the grain of everything, because there’s no –

KEVIN SCOTT: There’s no infrastructure to support them.

JARON LANIER: There’s no infrastructure, so each one of them has had to invent their own infrastructure by cobbling together little pieces from the different digital companies. And that population interests me. In a way, I see them as the future. Those are the people who don’t have to worry about their jobs being taken by robots – unless, I mean, they could be: all we have to do is create some machine learning thing that steals all their data and makes a fake clarinet teacher, without paying them for their data, just stealing their value. And that’s what we’ve done in so many other areas.

So the future I would see is to, first of all, try to support, to identify those groups and support them and also identify those communities that are trying to create new structures to help people cooperate in decentralized ways. And here the blockchain community, not the get rich quick blockchain, but the other blockchain, the blockchain of people who are interested in new ways of cooperation that can be mediated by networks, those people could be really important and helpful.

I think we need to invent new structures. The reason that we treat data as being worthless, even though the companies that collect the data become the most valuable ones in the world, is that there’s no collective bargaining for people whose data is taken. So, as in any other economic example, in order to have a productive economy you have to invent some kind of structure so that people can cooperate, and have it not be this Hobbesian race to the bottom where each person is against each other person. If you believe more in capital than labor, you call that a corporation or a legal partnership or something, so these people are incentivized to cooperate instead of trying to kill each other. If you believe in labor over capital, you call it a union, and you call it collective bargaining. But on the Internet, the difference is academic.

I was playing around with terms like Unorp and Corporion and they’re terrible. So we just came up with – my research partner Glen Weyl and I came up with the term MID, actually my wife came up with that, mediator of individual data. So you’d have something that’s a way for people to band into a group so as to not have the value of their data descend to zero through interpersonal competition, but instead have a degree of local cooperation.

So we need to create those things. And MIDs can serve another function here. I’m talking fast, because I know we’re almost out of time. But, one of the things that’s really terrible about what’s happened in the world is we’ve been petitioning tech companies to become the arbiters of culture and politics.

But the thing is, do we really want tech companies to become the new de facto government? Is that what we want? I don’t think so. So the MIDs could also become brands in themselves where people who have bonded together to create a MID not only are collectively bargaining for the value of their data, but the MID itself has become a channel, like, if you like, like a guild or a union, or like a corporation or a brand that represents a certain thing.

It might say, whatever data comes through here is scientifically rigorous and has been checked, or whatever data comes through here is fashionista approved and is very beautiful, or whatever data comes through here is guaranteed to be really amusing and suitable for your whole family, or whatever. What it creates is these in-between sized structures that can take on this function of quality maintenance, because you don’t want a centralized source being the maintainer of quality. That’s a recipe for some kind of dysfunction or too much centralized power.

So, the MIDs both solve the economic problem and the quality problem. And we need to start creating them. So, there are fledgling attempts to create them. Right now, they have no infrastructure tools to help them along. I’d like to change that. And that’s just one little corner of the problem.

I’m mostly just trying to – honestly, I’m just trying to get the tech companies to see the light. And here some of them are better than others. (Laughter)

[MUSIC]

KEVIN SCOTT: So, let’s switch gears. I always ask guests what they’re excited about in new trends, innovations, and future of tech.

CHRISTINA WARREN: Yeah, and I love that, because it’s always so interesting to hear what people from these various backgrounds, who are so connected with what’s on the cutting edge, are excited about.

And so next, we’re going to revisit our show with danah boyd, and danah is a tech scholar and researcher, and she looks at the intersection of people and social practices and technology. And she’s a partner researcher at Microsoft, and she’s the founder and president of Data & Society, which is a nonprofit, New York City-based think tank.

KEVIN SCOTT: So, I asked danah about what she’s seeing right now at the intersection of tech and society that’s interesting and promising.

danah boyd: What I’m hopeful for – there are, like, small glimmers of it – is the various folks who are really starting to grapple with climate and tech and those intersections. Both in the ability to understand how to reduce the climate cost of technology, but also the possibilities that we can model, understand, and innovate, because we have a big, heavy challenge in front of us on that front.

But that’s like – those are the like glimmer stages as opposed to like here’s where we have tools.

KEVIN SCOTT: There’s so much opportunity there. I mean, it’s unbelievable. Like, if you could co-optimize production and consumption of power, there are probably on the order of one or two orders of magnitude of efficiencies that we could drive, which would be unbelievable.

And then, you know, that’s without sort of having the even bigger thoughts about like what could you do with some of these big machine learning models to like design better systems that are like fundamentally more efficient in and of themselves.

danah boyd: Well, so here’s an example of something that, you know, I have sort of mixed feelings on.

We also have the ability to model what land will be arable. And we can really think about the future of agriculture, the future of water supply. Who controls that information? Who controls the decision-making that happens from that information? So that’s that moment where I’m like, okay, we’re getting there. We actually have a decent understanding.

But if we’re at a point where that material gets coopted, it gets controlled, then I’m deeply concerned. So, like these are the contradictions I think we’re sitting in the middle of because if we can really understand – I mean, where did data analytics begin? Farming, right?

If we can really understand what this will do to ag, we’re going to be able to better build resilience. And those are the moments where I’m like, okay, you know, this is not just about NOAA – the National Oceanic and Atmospheric Administration. It’s not just about NOAA being able to model, but it’s also about being able to give that information publicly in a way where it doesn’t get perverted for political purposes.

And that’s a tricky thing right now.

KEVIN SCOTT: Yeah, and you know, on the hopeful side of things, what we’ve seen at Microsoft with some of the stuff that’s happening with this Farm Beats program at Microsoft Research is that you can take some of this data – so, like, the weather data, weather forecasts, all of the sort of historical information, stuff that you used to get embedded into a farmer’s almanac, which was almost, you know, a little bit like astrology. But there was real, you know, data and trending that people built into these almanacs that helped people decide very prosaic things, like when to put the seeds in the ground. And we know that if you apply technology to that process – to very simple things like when to plant in a particular location given historical and predicted weather trends – we can make huge improvements in crop productivity.

We see it in India where, you know, some of these very poor parts of India like when you put a little bit of technology in, like you can get double-digit percentage improvements, and like that is the difference between people starving and people getting fed.

danah boyd: Oh, absolutely.

KEVIN SCOTT: And it’s just great to see happening.
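As a minimal sketch of the planting-window idea Kevin describes – this is not Farm Beats’ actual method, and the weather records and crop thresholds below are invented for illustration:

```python
# Pick the week whose historical weather best matches a crop's preferred
# germination conditions. All numbers here are made up.
from statistics import mean

# Hypothetical multi-year records per week: (soil_temp_C, rainfall_mm)
history = {
    "week_14": [(9.5, 30), (10.1, 28), (9.8, 35)],
    "week_16": [(12.4, 22), (13.0, 25), (12.8, 20)],
    "week_18": [(15.9, 12), (16.3, 10), (15.5, 14)],
}

def planting_score(records, ideal_temp=13.0, ideal_rain=22.0):
    """Lower is better: distance from the crop's ideal conditions."""
    temp = mean(t for t, _ in records)
    rain = mean(r for _, r in records)
    return abs(temp - ideal_temp) + 0.1 * abs(rain - ideal_rain)

best = min(history, key=lambda wk: planting_score(history[wk]))
print("Recommended planting window:", best)   # week_16 for these numbers
```

Even something this crude beats guessing, which is the sense in which small amounts of data and compute can move crop yields by double-digit percentages in places that never had either.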

danah boyd: And the important thing about something like agriculture is it has to happen around the globe.

KEVIN SCOTT: It has to happen.

danah boyd: Right? It just has to. And same with water resources.

KEVIN SCOTT: Yep.

danah boyd: We need to understand and model out water resources, because, I mean, just take the continent of Africa, right? There are so many places across that continent where things are possibly fragile if we don’t work out where that is or how to deal with it.

KEVIN SCOTT: Yep.

danah boyd: And so it’s both the technology of desalination, which is wonderful, but it’s also the modeling to understand what the ripples are around that.

KEVIN SCOTT: And there’s so many ways you – I mean, this is the thing where I really want people to get, like, super excited about jumping in, because for all of these things, like making better use of your water resources, there are hundreds and hundreds of ways. So, for instance, one of the ways that you can make more efficient use of water in agriculture: all of the agricultural chemicals that we use – so, pesticides and fertilizers and whatnot – are massively diluted with water.

So, like, the chemical concentration – the active compound – is like a tiny part of the thing that gets sprayed over the crop, which means that you’re wasting all of this water, and the, you know, chemicals are going into places where they’re not needed. It’s just this hugely wasteful thing.

And there’s all sorts of like interesting new technology where you can very precisely deliver the chemicals to the crop without diluting them in water at all, so you’re not wasting any water, you don’t have any of this like chemical runoff into the water supply, like, it’s just fantastic.

And there are simple things, like using some of the cool new stuff that we’re seeing with computer vision, where you can fuse classical sensor data, like moisture meters, with vision models, and sort of infer soil moisture from pictures that you’re taking from above the crops with drones. Or, in places where drones are too expensive, the Farm Beats folks are literally tying little cheap cameras to balloons, and you have a human, like, walk the balloon over the crop, you know, tethered to a rope, because, you know, in some parts of the world, you can’t afford a drone to fly over them.

And from that, like, you can – if you know what your soil moisture is, like, you know exactly how much to water so you don’t have to worry about under or over watering a crop, which leads to like way more efficiency.

So, it’s just so damn cool what’s possible.
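A tiny sketch of the sensor-fusion idea – assumptions throughout: the “greenness” index, the probe readings, and the linear relationship are all invented stand-ins for whatever a real system would calibrate:

```python
# Calibrate a tiny model mapping image "greenness" to soil moisture using a
# few ground-truth probe readings, then estimate moisture from imagery alone.
import numpy as np

# Hypothetical calibration pairs: image greenness vs. probe moisture (%).
greenness = np.array([0.21, 0.35, 0.50, 0.62, 0.74])
probe_moisture = np.array([12.0, 18.5, 26.0, 31.5, 38.0])

# Least-squares fit of a line: moisture ~ slope * greenness + intercept.
slope, intercept = np.polyfit(greenness, probe_moisture, deg=1)

def estimate_moisture(g: float) -> float:
    """Estimate % soil moisture from an image-derived greenness value."""
    return slope * g + intercept

# Now a drone or balloon photo alone yields a watering decision.
print(f"greenness 0.40 -> ~{estimate_moisture(0.40):.1f}% soil moisture")
```

The fusion Kevin describes is exactly this pattern: expensive, sparse sensors provide the ground truth, and cheap, dense imagery generalizes it across the whole field.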

danah boyd: And that’s also, like, the technologist’s mind, which is, like – you know, I live in New York City. And one of the funny things about living in such a crazy urban environment is to wander around and be, like, I can see how this could become more efficient.

Oh, and if we did this and this and this – and that is that moment where you see the real hope and the real excitement, which is that like we can actually do things that would solve problems, especially like nothing to me, you know, is sort of more interesting than seeing all those infrastructure layers.

And I think the question for me is: How do we get not just the technology, but all of the things that are surrounding the technology to make that happen?

KEVIN SCOTT: Yeah.

danah boyd: And that’s where we have to realize that those technologies are only as powerful as the political and social processes surrounding them.

KEVIN SCOTT: Yeah.

danah boyd: You know, I can talk about how to make the, you know, building that I rent in more efficient, but if I can’t convince developers, if I can’t convince, you know, the city, who’s, you know, setting out the regulations, to set these things in motion, no amount of good technology can solve, you know, really valuable problems.

And that’s where I think that that coordination becomes so critical.

KEVIN SCOTT: Yeah.

danah boyd: Which is that technologies, in many ways – we’re at a point where they’re moving faster than the political and social structures needed to make them function well. And that is why I think we need – you know, even as we invest in having people build up technical skill, we need to invest in people building up the ability to think about the bridge, because without that, you can’t actually deploy at the levels that make a difference.

And that’s one of the reasons, like, I’m firmly a believer that we need societal kinds of regulation – and I’ll use that abstractly, rather than government – so that we can actually advance the inter – the development of these things.

KEVIN SCOTT: I think we all have very concrete roles that we can play in it. But like, the thing that I think we technology folks, like, have a special duty and obligation to– and you inherently get this, like, you’ve been doing this since the very beginning – is like, all of us every day should be asking, like, how is it that the thing that I’m doing right now is going to positively accrue to the overall social good. Like, if you can’t answer that question in an affirmative way, then maybe you’re doing the wrong damn thing.

danah boyd: Right. No, I agree, and I think this is also where I’m a big believer in putting yourself in networks where this is in conversation. So, like, one of the things that really struck me, especially when I was, you know, back in my old dev days: you can imagine the positiveness, but you actually need people around you who are thinking about it, and how to implement it, which is, like, everything from business to policy, etc.

You need people around you saying, “And what if this goes wrong?” You need to be doing this in networks, in communities, and you need to be thinking with all of the different affected communities, or the people that you’re trying to really engage and create possibilities for, because they need to be part of that conversation.

And I think, you know, one of the weirdest things right now, as I’m, you know, trying to do this exercise in coordination around differential privacy, it’s like the technology will get there, hopefully as fast as we need it to. But it will get there.

But we need that buy-in process. We need people understanding it. We need people really embracing and figuring out how to make it work, or we’re going to end up in a weird moment where we have this beautiful object sitting on a shelf that we’re going to look back, you know, 15 years, and go, “We had it. Why didn’t we put it out there?”

And so, that’s where it’s like, as you’re thinking about the goodness, think not just about, like, the goodness, you know, of that, but like, how to actually build your teams and your communities in ways that actually can make this really be part of it.

And I’ll say, one of the most powerful things that I learned from an old mentor is that there is nothing more successful than getting a community to think it’s their own idea, right? And so, this is one of those moments where, as an industry, we’ve gotten into a bad habit of telling people what the future should be, rather than inviting them to co-construct it with us.

KEVIN SCOTT: Correct. Right.

danah boyd: And that co-construction, I think, is what we need to make all of those beautiful things that we can imagine in our minds become truly real.

[MUSIC]

CHRISTINA WARREN: One of the things that I really love about danah is how inclusive she is. She’s always thinking about how we can bring more people into what we call “tech” and how we can use those perspectives to make technology better and more interesting.

KEVIN SCOTT: Yeah, I get asked a lot how I have the time to do this podcast, and conversations like the ones I have with danah are so energizing that doing this podcast, in a way, is almost like therapy for me. Like, I walk away from those conversations feeling so much better about the world, and so happy that we have people like danah who are pushing on things in a humane, ethical, principled way, that it helps me just go do my job.

CHRISTINA WARREN: I love that. I love that. So, one of the questions that you always ask of your guests, and I love this question, because we get so many different answers, is how they got started in tech.

KEVIN SCOTT: Yeah, like Danielle Feinberg, who is a Pixar engineer and director of photography who worked on “Coco” and a bunch of their other movies, although “Coco” is the one that most recently made me cry like a little baby.

CHRISTINA WARREN: “Coco” totally made me cry, too – I just had to say that. When you asked this question of Danielle Feinberg, she shared that it was her love of combining computers and art, and that actually started when she was just eight years old. And that eventually led to her getting a BA in computer science from Harvard.

[MUSIC]

KEVIN SCOTT: So, when did you decide that computer graphics was the thing?

DANIELLE FEINBERG: Well, you know, it’s that same thing of, like, I’m looking at these engineering classes. I don’t know, looking at the computer classes and I go, (gasps) “Look at that computer graphics class.” “That sounds awesome.” “I want to take that.” “How soon can I take that?” “Oh, well, it’s got this prerequisite here.” And then, “Oh, I can’t take it till junior year.”

And so, I’m such a nerd. Sophomore year, I emailed the professor, and I was, like, “Hey, I’m so excited to take your class, is there anything I could do to get ahead? Is there anything I could just play with now?” And I got the most confused email back from him. You know, I don’t think most Harvard students are like emailing professors for future classes and asking how they can do work for it. (Laughter)

He was, like, “I guess you could go buy the textbook.” But it must have made an impression. We’re still friends to this day. And he clearly knew my enthusiasm going into the class. And so, I go in, and it was really a class about programming – all the underpinnings of the programming to get to the 3-D world. But there was a day where he turned off the lights and he started playing these films. And it was the Pixar short films from the late ’80s and early ’90s. And this is, I think, ’94.

And I – still completely clearly etched in my mind – just watched those with my mouth hanging open. I was like, “That is what I have to do with my life.” Because it was all this math, science, and code I’d been learning, but it created worlds and stories and characters in a way that, to me, was just the most perfect combination of everything that I loved.

[MUSIC]

KEVIN SCOTT: Before we close, we’d love to revisit some moments when we asked our guests what they do when they’re not innovating, inventing, and, you know, otherwise being amazing and brilliant. Here are some of our favorite responses, first from Danny Hillis, when I asked him what he did outside of work.

DANNY HILLIS: Well, let’s see. I mean, really, I have to say that there is kind of a blend of fun and work for me. But I do some things that I have no excuse for doing at all at work, like, I make perfume.

KEVIN SCOTT: Oh, I didn’t know that.

DANNY HILLIS: And I do that really just because it uses a different part of my brain than everything else I do, because I tend to be an overthinker, very logical in my thinking. You can’t be logical about perfume. You can’t even really give names to the –

KEVIN SCOTT: Right.

DANNY HILLIS: So, it’s sort of a meditative thing for me, because it turns off what a neurophysiologist would call my default mode network.

KEVIN SCOTT: Right.

DANNY HILLIS: And so, my default mode is very analytical, but in that you really just have to be experiential. So, I look for excuses to do that – hanging out in nature, those sorts of things – to complement it.

KEVIN SCOTT: Yeah. That’s super cool.

CHRISTINA WARREN: And, again, we posed the same question to Sam Altman.

SAM ALTMAN: I’m very thankful that there’s so many things I could say here. Hmmm. One thing that has been surprisingly great over the last year is a lot of long meditations, and finding a group of people who have been nice enough to spend time with me and teach me, and that’s been sort of a – significantly changed my perspective on the world.

KEVIN SCOTT: In what way?

SAM ALTMAN: I think I just – like, I’m a very different person now. I think I’m so much more content, and grateful, and happier, and calm. And it’s something that I just really wouldn’t have expected me to get into.

KEVIN SCOTT: I know that, a few years ago, I think – so, I don’t meditate, but like a bunch of these sort of Buddhist practices around sort of compassion and mindfulness are really helpful. Like, the thing that I’ve latched onto that’s been really useful is just gratitude.

SAM ALTMAN: Totally.

KEVIN SCOTT: Like, trying to – trying to find in as many moments in as many days as possible something to be truly grateful for. And like, I surprise myself because I’m a – yeah, I think engineers are sort of pessimistic, and like, a little bit cynical by nature, like but abide by the – you’re sort of wired a little bit to sort of see all of the problems in the world because, like, that’s part of what motivates you to go out and, like, change them and make them better. But it is, like, sort of a jaundiced way of looking at the world sometimes. But like, I’ve just been shocked at how many things I’ve – I can find to be grateful for every day, and like, how much, like, calmer that makes me.

SAM ALTMAN: Totally.

KEVIN SCOTT: And here’s Neha Narkhede, who is one of the initial authors of Apache Kafka and cofounder and CTO of Confluent.

NEHA NARKHEDE: My favorite activity to strike a balance with is to travel to new countries and experience new cultures and that’s what me and my husband do, and our crazy hobby together is to go SCUBA diving.

KEVIN SCOTT: Oh, wow.

NEHA NARKHEDE: And in some of the, you know, crazy locations. Oftentimes, to see different varieties of sharks. I’ve been in one of those cages where great whites are on the outside and you’re on the inside. I can’t confirm if I was scared or not. (Laughter.) But I would say I survived it, and it turned out to be fun in a very weird kind of way.

KEVIN SCOTT: Yeah, you might be a little braver than I am.

CHRISTINA WARREN: And, of course, we couldn’t end without hearing from Jaron Lanier, who, as a musician, has collaborated with the likes of Yoko Ono, T Bone Burnett, and Philip Glass. And Kevin is asking Jaron about his musical instrument collection, which is kind of insane – like, “expansive” doesn’t really do it justice.

KEVIN SCOTT: It’s kind of insane in the best possible way.

CHRISTINA WARREN: In the best possible way.

JARON LANIER: I started just learning new instruments, and I have this voracious, perhaps not always healthy, need to always be learning a new instrument. And so, whether it’s the largest instrument collection, I’m a little doubtful, because there are some pretty big instrument museums. But in terms of a playable collection, I’m pretty sure it is. And I don’t know how many there are. But there are a lot of instruments, and I do – I can play them.

KEVIN SCOTT: And we’re talking like hundreds, if not thousands.

JARON LANIER: Certainly in the thousands, yeah.

KEVIN SCOTT: Which is sort of a mind-boggling, interesting thought in and of itself that there are like thousands of distinct instruments that one could collect.

JARON LANIER: Well, they’re the best user interfaces that have ever been created. They’re the ones that support peak human performance like no other invention ever. And they’re profoundly beautiful. And each one has a story. And each one is kind of a form of time travel, because you learn to move and breathe like the people who originally played it wherever it’s from. So, it’s a kind of a cultural record that’s unlike any other one. It’s a haptic record, if you like.

[MUSIC]

KEVIN SCOTT: Well, I think that’s a great note to end on.

CHRISTINA WARREN: Yes, and there were so many more interviews that we just didn’t have time to revisit. For instance, our conversation with Anders Hejlsberg, who’s the creator of Turbo Pascal and TypeScript, as well as our conversations with Wences Casares and Dio Gonzalez.

KEVIN SCOTT: And one of my personal heroes, Bill Coughran.

CHRISTINA WARREN: The good news is that you can listen to any of these interviews any time at your leisure, because that is the nature of podcasting. (Laughter). And before we bring 2019 to a close, I wanted to take a moment, Kevin, to congratulate you on your book, which is due out on April 7th and is called Reprogramming the American Dream.

KEVIN SCOTT: Yeah, it’s pretty exciting. It’s very exciting to have all of the writing over with and the book off into the publication process.

So, I’m excited to see how people react to the book. It basically tells the story of AI through my own personal journey – my life and career – and tries to get us all to think more about the decisions that we make right now: how we build the technology, how we deploy it, how we regulate it, and the incentive structures that we set up to guide its evolution over the next several years. The goal is to get people to have a more rigorous and robust debate about all of these things so that we end up with a form of AI that is actually beneficial for everyone.

CHRISTINA WARREN: I am so glad that you wrote the book. I’m also glad for you that you are done with the writing process. I’m excited to read the book. Let me ask you, now that you have all this podcasting experience, are you going to record the audio book or are you like letting someone else do that?

KEVIN SCOTT: I suspect I’m going to let someone else do that.

CHRISTINA WARREN: Yeah, I don’t blame you, I think that would probably be – take up too much of your very valuable time.

KEVIN SCOTT: Well, and I think it’s also – you know, what I’ve learned from podcasting is that this is a skill set you have to practice a lot to, like, even get okay at it. I would not even remotely presume that I’m nearly as good as some of the professionals who do audio recordings for books.

CHRISTINA WARREN: Well, I can’t wait to read your book next year. And as always, we would love to hear from you at [email protected]. Tell us what’s on your mind. Tell us what your favorite episodes of the last year were, what some of your favorite conversations were. Tell us about your tech heroes, and maybe we’ll invite them to be on the show.

And, of course, be sure to tell your friends, your colleagues, your parents, anybody that you’re going to be seeing over the holiday season, you know, the strangers on the street, the Uber or Lyft drivers, everybody, tell them about the show and thank you very much for listening.

KEVIN SCOTT: See you next time!

[MUSIC]