The following is a conversation with Eugenia Kuyda, cofounder of Replika, which is an app
that allows you to make friends with an artificial intelligence system, a chatbot, that learns
to connect with you on an emotional, you could even say a human level, by being a friend.
For those of you who know my interest in AI and views on life in general, know that Replika
and Eugenia’s line of work is near and dear to my heart.
The origin story of Replika is grounded in a personal tragedy of Eugenia losing her close
friend Roman Mazurenko, who was killed crossing the street by a hit-and-run driver in late
2015.
He was 34.
The app started as a way to grieve the loss of a friend, by training a chatbot, a neural net, on text messages between Eugenia and Roman.
The rest is a beautiful human story, as we talk about with Eugenia.
When a friend mentioned Eugenia’s work to me, I knew I had to meet her and talk to her.
I felt before, during, and after that this meeting would be an important one in my life.
And it was.
I think in ways that only time will truly show, to me and others.
She is a kind and brilliant person.
It was an honor and a pleasure to talk to her.
Quick summary of the sponsors, DoorDash, Dollar Shave Club, and Cash App.
Click the sponsor links in the description to get a discount and to support this podcast.
As a side note, let me say that deep, meaningful connection between human beings and artificial
intelligence systems is a lifelong passion for me.
I’m not yet sure where that passion will take me, but I decided some time ago that
I will follow it boldly and without fear, to as far as I can take it.
With a bit of hard work and a bit of luck, I hope I’ll succeed in helping build AI systems
that have some positive impact on the world and on the lives of a few people out there.
But also, it is entirely possible that I am in fact one of the chatbots that Eugenia and the Replika team have built.
And this podcast is simply a training process for the neural net that’s trying to learn
to connect to human beings, one episode at a time.
In any case, I wouldn’t know if I was or wasn’t, and if I did, I wouldn’t tell you.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or connect with me on Twitter at Lex Fridman.
As usual, I’ll do a few minutes of ads now and no ads in the middle.
I’ll try to make these interesting, but give you timestamps so you can skip, but please
do still check out the sponsors by clicking the links in the description to get a discount,
buy whatever they’re selling, it really is the best way to support this podcast.
This show is sponsored by Dollar Shave Club.
Try them out with a one-time offer for only 5 bucks and free shipping at dollarshaveclub.com slash lex.
The starter kit comes with a 6 blade razor, refills, and all kinds of other stuff that
makes shaving feel great.
I’ve been a member of Dollar Shave Club for over 5 years, and actually signed up when
I first heard about them on the Joe Rogan Experience podcast.
And now, friends, we have come full circle.
It feels like I made it, now that I can do a read for them just like Joe did all those
years ago, back when he also did ads for some less reputable companies, let’s say, that
you know about if you’re a true fan of the old school podcasting world.
Anyway, I just used the razor and the refills, but they told me I should really try out the
shave butter.
I did.
I love it.
It’s translucent somehow, which is a cool new experience.
Again, try the Ultimate Shave Starter set today for just 5 bucks plus free shipping
at dollarshaveclub.com slash lex.
This show is also sponsored by DoorDash.
Get $5 off and zero delivery fees on your first order of 15 bucks or more when you download
the DoorDash app and enter code, you guessed it, LEX.
I have so many memories of working late nights for a deadline with a team of engineers, whether that's for my PhD, at Google, or at MIT, and eventually taking a break to argue about which DoorDash restaurant to order from.
And when the food came, those moments of bonding, of exchanging ideas, of pausing to shift attention
from the programs to humans were special.
For a bit of time, I’m on my own now, so I miss that camaraderie, but actually, I still
use DoorDash a lot.
There’s a million options that fit into my crazy keto diet ways.
Also, it’s a great way to support restaurants in these challenging times.
Once again, download the DoorDash app and enter code LEX to get 5 bucks off and zero
delivery fees on your first order of 15 dollars or more.
Finally, this show is presented by Cash App, the number one finance app in the App Store.
I can truly say that they’re an amazing company, one of the first sponsors, if not the first
sponsor to truly believe in me, and I think quite possibly the reason I’m still doing
this podcast.
So I am forever grateful to Cash App.
So thank you.
And as I said many times before, use code LEXPODCAST when you download the app from Google Play or the App Store.
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with
as little as one dollar.
I usually say other stuff here in the read, but I wasted all that time up front saying
how grateful I am to Cash App.
I’m going to try to go off the top of my head a little bit more for these reads because
I’m actually very lucky to be able to choose the sponsors that we take on, and that means
I can really only take on the sponsors that I truly love, and then I can just talk about
why I love them.
So it’s pretty simple.
Again, get Cash App from the App Store or Google Play, use code LEXPODCAST, get 10 bucks, and Cash App will also donate 10 bucks to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world.
And now, here’s my conversation with Eugenia Kuida.
Okay, before we talk about AI and the amazing work you're doing, let me ask you, since we're both Russian, a ridiculously romanticized Russian question.
Do you think human beings are alone, like fundamentally, on a philosophical level?
Like in our existence, when we like go through life, do you think just the nature of our
life is loneliness?
Yeah, so we have to read Dostoevsky at school, as you probably know, so…
In Russian?
I mean, it’s part of your school program.
So I guess if you read that, then you sort of have to believe that.
You’re made to believe that you’re fundamentally alone, and that’s how you live your life.
How do you think about it?
You have a lot of friends, but at the end of the day, do you have like a longing for
connection with other people?
That’s maybe another way of asking it.
Do you think that’s ever fully satisfied?
I think we are fundamentally alone.
We’re born alone, we die alone, but I view my whole life as trying to get away from that,
trying to not feel lonely, and again, we’re talking about a subjective way of feeling
alone.
It doesn’t necessarily mean that you don’t have any connections or you are actually isolated.
You think it’s a subjective thing, but like again, another absurd measurement wise thing,
how much loneliness do you think there is in the world?
Like if you see loneliness as a condition, how much of it is there, do you think?
Like how, I guess how many, you know, there’s all kinds of studies and measures of how many
people in the world feel alone.
There’s all these like measures of how many people are, you know, self report or just
all these kinds of different measures, but in your own perspective, how big of a problem
do you think it is size wise?
I’m actually fascinated by the topic of loneliness.
I try to read about it as much as I can.
And I think there's a paradox, because loneliness is not a clinical disorder.
It's not something that you can get your insurance to pay for if you're struggling with it.
Yet it's actually proven, and there are tons of papers, tons of research around that, that it's correlated with earlier death, a shorter lifespan.
And right now, scientists would say that it's a little bit worse for you than being obese or not doing any physical activity at all.
In terms of the impact on your health?
In terms of impact on your physiological health.
Yeah.
So it’s basically puts you, if you’re constantly feeling lonely, your body responds like it’s
basically all the time under stress.
It’s always in this alert state and so it’s really bad for you because it actually like
drops your immune system and get it, your response to inflammation is quite different.
So all the cardiovascular diseases actually responds to viruses.
So it’s much easier to catch a virus.
That’s sad now that we’re living in a pandemic and it’s probably making us a lot more alone
and it’s probably weakening the immune system, making us more susceptible to the virus.
It’s kind of sad.
Yeah.
The statistics are pretty horrible around that.
So around 30% of all millennials report that they’re feeling lonely constantly.
30?
30%.
And then it’s much worse for Gen Z.
And then 20% of millennials say that they feel lonely and also don't have any close friends.
And then I think 25%, or maybe 20%, would say they don't even have acquaintances.
And that’s in the United States?
That’s in the United States.
And I’m pretty sure that that’s much worse everywhere else.
Like in the UK, I mean, it was widely tweeted and posted about when they were talking about a Minister for Loneliness that they wanted to appoint, because four out of ten people in the UK feel lonely.
A Minister for Loneliness.
I think that position actually exists.
So yeah, you will die sooner if you are lonely.
And again, this is when we're only talking about your perception of loneliness, the feeling of being lonely, not objectively being fully socially isolated.
However, the combination of being fully socially isolated and not having many connections and
also feeling lonely, that’s pretty much a deadly combination.
So it strikes me as bizarre or strange that this is a widely known fact and yet there's really no one working on it, because it's subclinical.
It's not clinical.
It's not something that you can tell your doctor about and get a treatment for.
Yet it’s killing us.
Yeah.
So there’s a bunch of people trying to evaluate, like try to measure the problem by looking
at like how social media is affecting loneliness and all that kind of stuff.
So it’s like measurement.
Like if you look at the field of psychology, they’re trying to measure the problem and
not that many people actually, but some.
But you’re basically saying how many people are trying to solve the problem.
Like how would you try to solve the problem of loneliness?
Like if you just stick to humans, uh, I mean, or basically not just the humans, but the
technology that connects us humans.
Do you think there’s a hope for that technology to do the connection?
Like, are you on social media much?
Unfortunately.
Do you find yourself, again, if you sort of introspect about how connected you feel to other human beings, how not alone you feel, do you think social media makes it better or worse, maybe for you personally, or in general?
I think it's easier to look at some stats, and, I mean, Gen Z, Generation Z, seems to be much lonelier than millennials in terms of how they report loneliness.
They’re definitely the most connected generation in the world.
I mean, I still remember life without an iPhone, without Facebook; they don't know that that ever existed, or at least don't know how it was.
So that tells me a little bit that this hyperconnected world might actually make people feel lonelier.
I don't know exactly what the measurements are around that, but I would say, from my personal experience, I think it does make you feel a lot lonelier, even though, yeah, we're all super connected.
But I think the feeling of loneliness doesn't come from not having any social connections whatsoever.
Again, tons of people that are in long-term relationships experience bouts of loneliness and continued loneliness.
And it's more a question about true connection, about actually being deeply seen, deeply understood.
And in a way it's also about your relationship with yourself. In order to not feel lonely, you actually need to have a better relationship with, and feel more connected to, yourself; then this feeling actually starts to go away a little bit.
And then you open yourself up to actually meeting other people in a very special way.
Not just in an add-a-friend-on-Facebook kind of way.
So just to briefly touch on it: do you think it's possible to form that kind of connection with AI systems, more down the line of some of your work?
Do you think it's, engineering-wise, a possibility to alleviate loneliness not with another human, but with an AI system?
Well, I know that for a fact; that's what we're doing.
And we see it, and we measure it, and we see how people start to feel less lonely talking to their virtual AI friend.
So basically a chatbot at the basic level, but it could be more. Like, do you have, I'm not even speaking about specifics, but do you have a hope, if you look 50 years from now, that there are just AIs that are optimized for... Let me first start with right now: the way people perceive AI, which is recommender systems for Facebook and Twitter, social media, they see AI as basically destroying, first of all, the fabric of our civilization, but second of all, making us more lonely.
Do you see a world where it's possible to just have AI systems floating about that make our lives less lonely?
Yeah.
Make us happy.
Like, are putting good things into the world in terms of our individual lives.
Yeah.
I totally believe in that.
That's why I'm also working on that.
I think we also need to make sure that what we're trying to optimize for is what we're actually measuring, and that it is a North Star metric we're going after, and that all of our product and all of our business models are optimized for that. Because a lot of products talk about, you know, making you feel less lonely or making you feel more connected, but they're not really measuring that.
So they don't really know whether their users are actually feeling less lonely in the long run or feeling more connected in the long run.
So I think it's really important to actually measure it.
Yeah.
To measure it.
What’s a, what’s a good measurement of loneliness?
Well, so that’s something that I’m really interested in.
How do you measure that people are feeling better or that they’re feeling less lonely
with loneliness?
There’s a scale.
There’s UCLA 20 and UCLA three recently scale, which is basically a questionnaire that you
fill out and you can see whether in the long run it’s improving or not.
And does it capture the momentary feeling of loneliness?
Does it look at, like, the past month?
Is it basically self-report?
Does it try to sneak up on you with tricky questions so you answer honestly, or something like that?
Yeah, I'm not familiar with the questionnaire.
It is just asking you a few questions.
Like, how often did you feel lonely, or how often did you feel connected to other people, in the last couple of weeks?
It's similar to the self-report questionnaires for depression and anxiety, like the PHQ-9 and GAD-7.
Of course, as with any self-report questionnaire, that's not necessarily very precise or very well measured, but still, if you take a big enough population and you get them through these questionnaires, you can see a positive dynamic.
And so you basically put people through questionnaires to see, like, is this thing we're creating making people happier?
Yeah, so we measure two outcomes.
One is short term: right after the conversation, we ask people whether this conversation made them feel better, worse, or the same. This metric right now is at 80%, so 80% of all our conversations make people feel better.
But I should have done the questionnaire with you.
You feel a lot worse after we've done this conversation.
That’s actually fascinating.
I should probably do that, but that's how we do it.
You should totally, and aim for 80%. Aim to outperform your current state-of-the-art AI system in these human conversations.
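As a side note on the measurement just described, here is a minimal illustrative sketch in Python of how one might score a UCLA-3 style loneliness questionnaire and aggregate the better/worse/same session feedback. This is not Replika's actual code; the function names are invented for illustration, and only the general structure (three items scored 1 to 3, and a share-of-"better" session metric) follows what is described above.

from collections import Counter

# Three-item UCLA-style loneliness scale: each item answered
# 1 = hardly ever, 2 = some of the time, 3 = often.
UCLA3_ITEMS = [
    "How often do you feel that you lack companionship?",
    "How often do you feel left out?",
    "How often do you feel isolated from others?",
]

def ucla3_score(answers):
    """Sum the three item responses; totals range from 3 to 9 (higher means lonelier)."""
    if len(answers) != len(UCLA3_ITEMS) or any(a not in (1, 2, 3) for a in answers):
        raise ValueError("expected three answers, each 1, 2, or 3")
    return sum(answers)

def share_feeling_better(session_feedback):
    """Fraction of sessions rated 'better' among better/worse/same responses."""
    counts = Counter(session_feedback)
    total = sum(counts.values())
    return counts["better"] / total if total else 0.0

if __name__ == "__main__":
    print(ucla3_score([2, 3, 2]))  # 7
    print(share_feeling_better(["better", "same", "better", "worse", "better"]))  # 0.6

Tracking the questionnaire score for the same users over weeks, rather than a single snapshot, is what would show the long-run dynamic mentioned above.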
So we’ll get to your work with replica, but let me continue on the line of absurd questions.
So you talked about, um, you know, deep connection with the humans, deep connection with AI,
meaningful connection.
Let me ask about love.
People make fun of me because I talk about love all the time.
But what do you think love is, maybe in the context of a meaningful connection with somebody else?
Do you draw a distinction between love, like, friendship and Facebook friends, or is it gradual?
No, it’s all the same.
No.
Like, is it just a gradual thing, or is there something fundamental about us humans that seeks a really deep connection with another human being, and what is that?
What is love, Eugenia? I'm going to just enjoy asking you these questions and seeing you struggle.
Thanks.
Well, the way I see it, and specifically the way it relates to our work and the way it inspired our work on Replika, I think one of the biggest and most precious gifts we can give to each other now, in 2020, as humans, is this gift of deep empathetic understanding, the feeling of being deeply seen.
Like what does that mean?
Like that you exist, like somebody acknowledging that somebody seeing you for who you actually
are.
And that’s extremely, extremely rare.
I think it is that, combined with unconditional positive regard, the belief and trust that you internally are always inclined toward positive growth, and believing in you in this way while letting you be a separate person at the same time.
And this deep empathetic understanding, for me, that's the combination that really creates something special, something that people, when they feel it once, will always long for again.
And something that starts huge fundamental changes in people.
When we see that someone accepts us so deeply, we start to accept ourselves.
And the paradox is that's when big changes start happening, big fundamental changes in people.
So I think that is the ultimate therapeutic relationship, and that might be, in some way, a definition of love.
So, acknowledging that there's a separate person and accepting you for who you are.
Now, on a slightly different note, you mentioned therapeutic, and that sounds like a very healthy view of love. But is there also, if we look at heartbreak, you know, most love songs are probably about heartbreak, right?
Is that, like, the mystery, the tension, the danger, the fear of loss, all of that, what people might see in a negative light as games or whatever, but just the dance of human interaction?
Yeah.
Fear of loss, and fear of, like you said, once you feel it once, you long for it again. But also, once you feel it once, for many people, they've lost it.
So they fear losing it.
They feel loss.
So is that part of it? You're speaking beautifully about the positive things, but is it important to be able to be afraid of losing it, from an engineering perspective?
I mean, it’s a huge part of it and unfortunately we all, you know, um, face it at some points
in our lives.
I mean, I did.
Do you want to go into details?
How'd you get your heart broken?
Sure.
So mine is pretty straightforward. My story is pretty straightforward: I did have a friend who, at some point in my twenties, became really, really close to me, and we became really close friends.
Um, well, I grew up pretty lonely.
So in many ways when I’m building, you know, these, these AI friends, I’m thinking about
myself when I was 17 writing horrible poetry and you know, in my dial up modem at home
and, um, you know, and that was the feeling that I grew up with.
I left, I lived, um, alone for a long time when I was a teenager, where did you go up
in Moscow and the outskirts of Moscow.
Um, so I’d just skateboard during the day and come back home and you know, connect to
the internet and then write horrible poetry and love poems, all sorts of poems, obviously
love poems.
I mean, what other poetry can you write when you're 17? It could be political or something, but yeah.
But that was kind of my thing, deeply influenced by Joseph Brodsky and all sorts of poets that every 17-year-old will be looking at and reading. But yeah, these were my teenage years,
and I just never had a person that I thought would take me as I am, would accept me the way I am. And I just thought, you know, working and just doing my thing, and being angry at the world, and being a reporter (I was an investigative reporter working undercover), and writing about people, was my way to connect with others.
I was deeply curious about everyone else.
And I thought that, you know, if I go out there, if I write their stories, that means I'm more connected.
This is what this podcast is as well, by the way. I'm desperate... well, I'm seeking connection now.
I’m just kidding.
Or am I?
I don’t know.
So wait, being a reporter, how did that make you feel more connected?
I mean, you're still fundamentally pretty alone.
But you’re always with other people, you know, you’re always thinking about what other place
can I infiltrate?
What other community can I write about?
What other phenomenon can I explore?
And you're sort of like a trickster, you know, like a mythological character or creature that's just jumping between all sorts of different worlds and feeling sort of okay in all of them.
So that was my dream job, by the way; that was totally what I would have been doing if Russia was a different place.
And a little bit undercover.
So, like you said, a mythological creature trying to infiltrate.
So, trying to be a part of the world.
What are we talking about?
What kind of things did you enjoy writing about?
I’d go work at a strip club or go.
Awesome.
Okay.
Well, I’d go work at a restaurant or just go write about, you know, um, certain phenomenons
or phenomenons or people in the city.
And what, uh, sorry to keep interrupting and I’m the worst, I’m a conversationalist.
What stage of Russia is this?
What, uh, is this pre Putin, post Putin?
What was Russia like?
Pre Putin is really long ago.
This is Putin era.
That’s a beginning of two thousands and 2010, 2007, eight, nine, 10.
What were strip clubs like in Russia and restaurants and culture and people’s minds like in that
early Russia that you were covering?
In those early two thousands, there was still a lot of hope.
There were still tons of hope that we were sort of becoming this Western, Westernized society.
The restaurants were opening, and we were trying to copy a lot of things from the US, from Europe, bringing all these things over and being very enthusiastic about it.
So there was a lot of, you know, stuff going on.
There was a lot of hope and dream for this, you know, new Moscow that would be similar
to, I guess, New York.
I mean, just to give you an idea, the year 2000 was the year when we had two movie theaters in Moscow, and the first coffee house opened, and it was a really big deal.
By 2010 there were all sorts of things everywhere.
Almost like a chain, like a Starbucks type of coffee house, you mean?
Oh yeah, like a Starbucks.
I mean, I remember we were reporting on, we were writing about, the opening of Starbucks in 2007, I think. That was one of the biggest things that happened in Moscow back then; that was worthy of a magazine cover.
And that was definitely the biggest talk of the time.
Yeah.
When was McDonald’s?
Cause I was still in Russia when McDonald’s opened.
That was in the nineties.
I mean, yeah.
Oh yeah.
I remember that very well.
Yeah.
Those were long, long lines.
I think it was 1993 or four, I don't remember; actually, maybe earlier.
At that time, I mean, that was a luxurious outing.
That was definitely not something you do every day.
And also the line was at least three hours.
So if you’re going to McDonald’s, that is not fast food.
That is like at least three hours in line and then no one is trying to eat fast after
that.
Everyone is like trying to enjoy as much as possible.
What’s your memory of that?
Oh, it was insane.
How did it go?
It was extremely positive.
It’s a small strawberry milkshake and the hamburger and small fries and my mom’s there.
And sometimes I’ll just, cause I was really little, they’ll just let me run, you know,
up the kitchen and like cut the line, which is like, you cannot really do that in Russia
or.
So for a lot of people, a lot of those experiences might seem not very fulfilling, you know, like it's on the verge of poverty, I suppose.
But do you remember all that time fondly? Because I do, like the first time I drank Coke, and all that stuff, right.
And just, yeah, the connection with other human beings in Russia, I remember it really positively.
How do you remember the nineties, and then the Russia you were covering, just the human connections you had with people and the experiences?
Well, my parents were both physicists.
My grandparents were both... well, my grandfather was a nuclear physicist, a professor at the university.
My dad worked at Chernobyl when I was born, analyzing kind of everything after the explosion.
And I remember they were making sort of enough money in the Soviet Union.
So they were not extremely poor or anything.
It was pretty prestigious to be a professor, the dean at the university.
And then I remember my grandfather started making a hundred dollars a month in the nineties.
So then I remember our main line of work would be to go to our little tiny country house, get a lot of apples there from the apple trees, bring them back to the city, and sell them in the street.
So me and my nuclear physicist grandfather were just standing there, him selling those apples the whole day, because that would make more money than working at the university.
And then he’ll just tell me, try to teach me, you know, something about planets and
whatever the particles and stuff.
And, you know, I’m not smart at all, so I could never understand anything, but I was
interested as a journalist kind of type interested.
But that was my memory.
And, you know, I’m happy that I wasn’t, I somehow got spared that I was probably too
young to remember any of the traumatic stuff.
So the only thing I really remember had this bootleg that was very traumatic, had this
bootleg Nintendo, which was called Dandy in Russia.
So in 1993, there was nothing to eat, like, even if you had any money, you would go to
the store and there was no food.
I don’t know if you remember that.
And our friend had a restaurant, like a half government-owned restaurant of some kind.
So they always had supplies.
So he exchanged a big bag of wheat for this Nintendo, the bootleg Nintendo, that I remember very fondly, because I think I was nine or seven, something like that.
Like, we had just gotten it and I was playing it, and there was this, you know, Dendy TV show.
Yeah.
So traumatic in a positive sense, you mean? Like a definitive...
Well, they took it away and gave me a bag of wheat instead.
And I cried my eyes out for days and days and days.
Oh no.
And then, you know, my dad said, we're going to exchange it back in a little bit.
So you keep the little gun, you know, the one that you shoot the ducks with.
So I'm like, okay, I'm keeping the gun, so sometime it's going to come back. But then they exchanged the gun as well, for some sugar or something.
I was so pissed.
I was like, I didn’t want to eat for days after that.
I’m like, I don’t want your food.
Give me my Nintendo back.
That was extremely traumatic.
But you know, I was happy that that was my only traumatic experience.
You know, my dad had to actually go to Chernobyl with a bunch of 20-year-olds.
He was 20 when he went to Chernobyl, and that was right after the explosion.
No one knew anything.
The whole crew he went with, all of them are dead now.
I think there was this one guy who was still alive until these last few years.
I think he died a few years ago now.
My dad somehow luckily got back earlier than everyone else. But just the fact of it... and I was always like, well, how could they send you? I was just born; you had a newborn. Talk about paternity leave.
But that's who they took, because they didn't know whether you would be able to have kids when you came back.
So they took the ones who already had kids.
So he went with some guys, and I'm just thinking of me when I was 20, I was so sheltered from any problems whatsoever in life.
And then my dad spent his 21st birthday at the reactor. You work, like, three hours a day, you sleep the rest. And yeah, so I played with a lot of toys from Chernobyl.
What are your memories of Chernobyl in general, like the bigger context? You know, because of that HBO show, the world's attention turned to it once again. What are your thoughts about Chernobyl?
Did Russia screw that one up?
Like, you know, there are probably a lot of lessons for our modern times, with data about coronavirus and all that kind of stuff.
It seems like there's a lot of misinformation.
There are a lot of people kind of trying to hide whether they screwed something up or not, which is very understandable, very human, very wrong probably, but obviously Russia was probably trying to hide that they screwed things up.
What are your thoughts about that time, personal and general?
I mean, I was born when the explosion happened, so actually a few months after, so of course I don't remember anything, apart from the fact that my dad would bring me tiny toys, plastic things that would just go crazy haywire when you put the Geiger counter to them.
My mom was like, just nuclear about that.
She was like, what are you bringing, you should not do that.
She was nuclear.
Very nice.
Well done.
I’m sorry.
It was, but yeah, but the TV show was just phenomenal.
The HBO one?
Yeah, it was definitely... first of all, it's incredible that it was made not by the Russians but by someone else, yet it captured so well everything about our country.
It felt a lot more genuine than most of the movies and TV shows that are made now in Russia, just so much more genuine.
And most of my friends in Russia were just in complete awe about the show, but I think
that…
How good of a job they did.
Oh my God, phenomenal.
But also…
The apartments, there’s something, yeah.
The set design.
I mean, Russians can’t do that, you know, but you see everything and it’s like, wow,
that’s exactly how it was.
So I don’t know, that show, I don’t know what to think about that because it’s British accents,
British actors of a person, I forgot who created the show.
But I remember reading about him and he’s not, he doesn’t even feel like, like there’s
no Russia in this history.
No, he did like super bad or something like that.
Or like, I don’t know.
Yeah, like exactly.
Whatever that thing about the bachelor party in Vegas, number four and five or something
were the ones that he worked with.
Yeah.
But it made me feel really sad, for some reason, that if a person, obviously a genius, could go in and just study, with extreme attention to detail, they can do a good job.
It made me think, like, why don't other people do a good job with this?
Like, about Russia, there's so little about Russia.
There are so few good films about the Russian side of World War II.
I mean, there are so many interesting evil, and beautiful, moments in the history of the 20th century in Russia that it feels like there aren't many good films on it from the Russians.
You would expect something from the Russians.
Well, they keep making these propaganda movies now.
Oh no.
Unfortunately.
But yeah, no, Chernobyl was such a perfect TV show.
I think it captured things really well. It's not even about the set design, which was phenomenal, but just capturing all the problems that exist now with the country and focusing on the right things.
Like, if you build the whole country on a lie, that's what's going to happen.
And that's just this very simple kind of thing.
Yeah.
And did your dad talk about it to you, like his thoughts on the experience?
He never talks about it.
He's this kind of Russian man that just... My husband, who's American, asked him a few times, like, you know, Igor, why did you say yes?
Or, like, why did you decide to go?
You could have said no, not gone to Chernobyl.
But for a person like him, that's what you do.
You cannot say no.
Yeah.
It’s just, it’s like a Russian way.
It’s the Russian way.
Men don’t talk that much.
Nope.
There’s no one upsides for that.
Yeah, that’s the truth.
Okay.
So back to post-Putin Russia, or maybe we skipped a few steps along the way, but you were trying to be a journalist in that time.
What was Russia like at that time?
Post... you said 2007, the Starbucks type of thing.
What else was Russia like then?
I think there was just hope.
There was this big hope that we’re going to be, you know, friends with the United States
and we’re going to be friends with Europe and we’re just going to be also a country
like those with, you know, bike lanes and parks and everything’s going to be urbanized.
And again, we’re talking about nineties where like people would be shot in the street.
And it was, I still have a fond memory of going into a movie theater and, you know,
coming out of it after the movie.
And the guy that I saw on the stairs was like neither shot, which was, again, it was like
a thing in the nineties that would be happening.
People were, you know, people were getting shot here and there, tons of violence, tons
of you know, just basically mafia mobs on in the streets.
And then in the two thousands, things just got cleaned up, oil went up, and the country started getting a little bit richer. The nineties were so grim mostly because the economy was in shambles and oil prices were not high, so the country didn't have anything.
We defaulted in 1998, and the money kept jumping back and forth.
Like, first there were millions of rubles, then it got devalued, then it got to thousands, then one ruble was worth something, then again to millions. It was like crazy town.
That was crazy.
And then the two thousands were just these years of stability, in a way, with the country getting a little bit richer because of, again, oil and gas.
And we started, specifically in Moscow and St. Petersburg, to look at other cities in Europe, and at New York and the US, and to try to do the same in our own smaller cities and towns.
What were your thoughts about Putin at the time?
Well, in the beginning he was really positive.
Everyone was very positive about Putin.
He was young, very energetic.
He also immediately did the shirtless thing, somewhat, compared to...
Well, that was way before the shirtless era.
The shirtless era.
Okay.
So he didn't start out shirtless.
When did the shirtless era start? It's like the propaganda of riding horses, fishing: 2010, 11, 12.
Yeah.
That's my favorite.
You know, like people talk about their favorite Beatles; that's my favorite Putin, the shirtless Putin.
Now, I remember very, very clearly 1996, when, you know, Americans really helped Russia with the elections and Yeltsin got reelected, thankfully so, because there was a huge threat that the communists would actually get back to power.
They were a lot more popular.
And then a lot of American experts, political experts and campaign experts, descended on Moscow and helped Yeltsin actually get the presidency, the second term of the presidency.
But Yeltsin was not doing great by the end of his second term. He was, you know, an alcoholic.
He was really old.
He was falling off the stages where he was talking.
So people were looking, I think, for a fresh face, for someone who was going to continue Yeltsin's work, but who was going to be a lot more energetic and a lot more active, young, efficient maybe.
So that's what we all saw in Putin back in the day.
I, I’d say that everyone, absolutely everyone in Russia in early two thousands who was not
a communist would be, yeah, Putin’s great.
We have a lot of hopes for him.
What are your thoughts... and I promise we'll get back to, first of all, your love story, and second of all, AI... but what are your thoughts about communism?
The 20th century... I apologize, I'm reading The Rise and Fall of the Third Reich.
Oh my God.
So I'm really steeped in World War II and Stalin and Hitler and just these dramatic personalities that brought so much evil to the world.
But it's also interesting to politically think about these different systems and what they've led to.
And Russia is one of the sort of beacons of communism in the 20th century.
What are your thoughts about communism?
Having experienced it as a political system?
I mean, I have only experienced it a little bit, but mostly through stories and through,
you know, seeing my parents and my grandparents who lived through that, I mean, it was horrible.
It was just plain horrible.
It was just awful.
You think it’s, there’s something, I mean, it sounds nice on paper.
There’s a, so like the drawbacks of capitalism is that, uh, you know, eventually there is,
it’s a, it’s the point of like a slippery slope.
Eventually it creates, uh, you know, the rich get richer, it creates a disparity, like inequality
of, um, wealth inequality.
If like, you know, I guess it’s hypothetical at this point, but eventually capitalism leads
to humongous inequality and that that’s, you know, some people argue that that’s a source
of unhappiness is it’s not like absolute wealth of people.
It’s the fact that there’s a lot of people much richer than you.
There’s a feeling of like, that’s where unhappiness can come from.
So the idea of, of communism or these sort of Marxism is, uh, is, is not allowing that
kind of slippery slope, but then you see the actual implementations of it and stuff seems
to be, seems to go wrong very badly.
What do you think that is?
Why does it go wrong?
What is it about human nature?
If we look at Chernobyl, you know, those kinds of bureaucracies that were constructed.
Is there something like, do you think about this much of like why it goes wrong?
Well, there’s no one was really like, it’s not that everyone was equal.
Obviously the, you know, the, the government and everyone close to that were the bosses.
So it’s not like fully, I guess, uh, this dream of equal life.
So then I guess the, the situation that we had in, you know, the Russia had in the Soviet
union, it was more, it’s a bunch of really poor people without any way to make any, you
know, significant fortune or build anything living constant, um, under constant surveillance,
surveillance from other people.
Like you can’t even, you know, uh, do anything that’s not fully approved by the dictatorship
basically.
Otherwise your neighbor will write a letter and you’ll go to jail, absolute absence of
actual law.
Yeah.
It’s a constant state of fear.
You didn’t own any, own anything.
It didn’t, you know, the, you couldn’t go travel, you couldn’t read anything, uh, Western
or you couldn’t make a career really, unless you’re working in the, uh, military complex.
Um, which is why most of the scientists were so well regarded.
I come from, you know, both my dad and my mom come from families of scientists and they,
they were really well regarded as you, as you know, obviously.
Because the state wanted, I mean, cause there’s a lot of value to them being well regarded.
Because they were developing things that could be used in, in the military.
So that was very important.
That was the main investment.
But it was miserable, it was all miserable.
That's why a lot of Russians now live in a state of constant PTSD.
That's why we want to buy, buy, buy, buy, buy, definitely, as soon as we have the opportunity; we finally got to the point where we can own things.
You know, I remember the time that we got our first yogurts, and that was the biggest deal in the world.
That was already in the nineties, by the way.
I mean, what was your favorite food, where it was like, whoa, this is possible?
Oh, fruit, because we only had apples, bananas, and whatever watermelons people would grow in the Soviet Union.
There were no pineapples or papaya or mango; you had never seen those fruits.
Like those were so ridiculously good.
And obviously you could not get any strawberries in winter, or anything that's not seasonal.
So that was a really big deal, seeing all these fruits.
Yeah.
Me too.
Actually.
I don’t know.
I think, like, I don't think I have too many demons, or addictions or so on, but I think I've developed an unhealthy relationship with fruit.
I still struggle with it.
Oh, you can get any type of fruit, right?
You can also get these weird fruits, like dragon fruit or something, or all kinds of different types of peaches. Cherries were a killer for me.
I know you say we had bananas and so on, but I don't remember having that kind of banana.
Like, when I first came to this country, the amount of bananas... I literally got fat on bananas.
Oh yeah, for sure.
They were delicious.
And like cherries, the kind, like just the quality of the food, I was like, this is capitalism.
This is delicious.
Yeah.
I am.
Yeah.
It’s funny.
Yeah.
Like it’s, it’s funny to read.
I don’t know what to think of it, of, um, it’s funny to think how an idea that’s just
written on paper, when carried out amongst millions of people, how that gets actually
when it becomes reality, what it actually looks like, uh, sorry, but the, uh, been studying
Hitler a lot recently and, uh, going through Mein Kampf.
He pretty much wrote out of Mein Kampf everything he was going to do.
Unfortunately, most leaders, including Stalin didn’t read the, read it, but it’s, it’s kind
of terrifying and I don’t know.
And amazing in some sense that you can have some words on paper and they can be brought
to life and they can either inspire the world or they can destroy the world.
And uh, yeah, there’s a lot of lessons to study in history that I think people don’t
study enough now.
One of the things I’m hoping with, I’ve been practicing Russian a little bit.
I’m hoping to sort of find, rediscover the, uh, the beauty and the terror of Russian history
through this stupid podcast by talking to a few people.
So anyway, I just feel like so much was forgotten.
So much was forgotten.
I’ll probably, I’m going to try to convince myself to, um, you’re a super busy and super
important person when I’m going to, I’m going to try to befriend you to, uh, to try to become
a better Russian.
Cause I feel like I’m a shitty Russian.
Not that busy.
So I can totally be your Russian Sherpa.
Yeah.
But love, you were, you were talking about your early days of, uh, being a little bit
alone and finding a connection with the world through being a journalist.
Where did love come into that?
I guess finding, for the first time, some friends. It's a very simple story.
Some friends that, all of a sudden, I guess we were at the same place in our lives; we were 25, 26, I guess.
And we just got really close, and I somehow remember this one day, one day in summer, when we just stayed outdoors the whole night and just talked, and for some unknown reason it just felt, for the first time, that someone could see me for who I am, and it just felt extremely, extremely good.
And we fell asleep outside, just talking, and it was raining, it was beautiful, you know, sunrise. It's really cheesy, but at the same time, we just became friends in a way that I've never been friends with anyone else before.
And I do remember that, before and after that, you sort of have this unconditional family of sorts, and it gives you tons of power.
It just basically gives you this tremendous power to do things in your life and to change positively on many different levels.
Power because you could be yourself.
At least you know that somewhere you can be just yourself. You don't need to pretend, you don't need to be great at work or tell some story or sell yourself in one way or another.
And so we became these really close friends, and in a way, I started a company because he had a startup and I felt like I kind of wanted a startup too.
It felt really cool.
I didn't know what I would really do, but I felt like I kind of needed a startup.
Okay.
So that’s, so that pulled you in to the startup world.
Yeah.
And then, yeah.
And then this closest friend of mine died.
We had actually moved here to San Francisco together; we lived together, we were roommates. Then we went back to Moscow for a visa, and he got hit by a car right in front of the Kremlin, next to the river, and died the same day in the hospital.
This is Roman.
So you had moved to America at that point?
At that point I was, yes.
What about him?
What about Roman?
Him too.
He actually moved first.
So I was always sort of trying to do what he was doing, so I didn’t like that he was
already here and I was still, you know, in Moscow and we weren’t hanging out together
all the time.
So was he in San Francisco?
Yeah, we were roommates.
So he just visited Moscow for a little bit.
We went back for, for our visas, we had to get a stamp in our passport for our work visas
and the embassy was taking a little longer, so we stayed there for a couple of weeks.
What happened?
How did he die?
He was crossing the street, and the car was going really fast, way over the speed limit, and just didn't stop at the pedestrian crossing, the zebra crossing, and ran over him.
When was this?
It was in 2015, on the 28th of November, so it was long ago now.
But at the time, you know, I was 29, so for me it was the first kind of meaningful death in my life.
I still had both sets of grandparents at the time.
I hadn't seen anyone so close die. Death sort of existed, but as a concept, definitely not as something that would be happening to us anytime soon, and specifically to our friends.
Because we were still in our twenties or early thirties, and it still felt like your whole life is ahead of you; you could still dream about ridiculous, different things.
So it was just really, really abrupt, I'd say.
What did it feel like to, uh, to lose him, like that feeling of loss?
You talked about the feeling of love, having power.
What is the feeling of loss, if you like?
Well in Buddhism, there’s this concept of Samaya where something really like huge happens
and then you can see very clearly.
Um, I think that, that was it like basically something changed so, changed me so much in
such a short period of time that I could just see really, really clearly what mattered or
what not.
Well, I definitely saw that whatever I was doing at work didn't matter at all, and some other things.
And it was just this big realization, this very, very clear vision of what life's about.
You still miss him today?
Yeah, for sure.
For sure.
He was just this constant... I think he was really important for me and for our friends, for many different reasons, and I think one of them being that we didn't just say goodbye to him, but we sort of said goodbye to our youth in a way.
It was like the end of an era, on so many different levels.
The end of Moscow as we knew it, the end of us living through our twenties and kind of dreaming about the future.
Do you remember the last several conversations? Are there moments with him that stick out, that kind of haunt you when you think about him?
Yeah, well, his last year here in San Francisco, he was pretty depressed, as his startup was not really going anywhere and he wanted to do something else.
He wanted to build something; he played with a bunch of ideas, but the last one he had was around building a startup around death.
He applied to Y Combinator with a video that, you know, I still have on my computer, and it was all about disrupting death: thinking about new cemeteries, things that could be better biologically for humans.
And at the same time, having those digital avatars, this kind of AI avatar that would store all the memory about a person, that you could interact with.
What year was this?
2015.
Well, right before his death.
It was like a couple of months before that that he recorded that video.
And I found it on my computer; it was recorded in our living room.
He never got in, but he was thinking about it a lot somehow.
Does it have the digital avatar idea?
Yeah.
That's so interesting.
Well, in his pitch he has this idea, and he talks about it like, I want to rethink how people grieve and how people talk about death.
Why was he interested in this?
Is it that maybe someone who's depressed is naturally inclined to think about that?
But I just felt, you know, that year in San Francisco we just had so much... I was going through a hard time.
And I was definitely trying to just make him happy somehow, to make him feel better.
And it felt like, I don't know, I just felt like I was taking care of him a lot, and he had almost started to feel better.
And then that happened, and I don't know, I just felt lonely again, I guess.
And that was, you know, coming back to San Francisco in December, after helping organize the funeral and helping his parents. I came back here to a really lonely apartment, a bunch of his clothes everywhere, and Christmas time.
And I remember I had a board meeting with my investors, and I just couldn't talk about it; I had to pretend everything was okay and that I was just working on this company.
Yeah, it was definitely a very, very tough time.
Do you think about your own mortality?
You said, uh, you know, we’re young, the, the, the, the possibility of doing all kinds
of crazy things is still out there, is still before us, but, uh, it can end any moment.
Do you think about your own ending at any moment?
Unfortunately, I think about it way too much.
Somehow after Roman, every year after that, I started losing people that I really love.
I lost my grandfather the next year, you know, the person who would explain to me what the universe is made of while selling apples. And then I lost another close friend of mine, and it just made me very scared.
I have tons of fear about that.
That's what makes me not fall asleep oftentimes, just going in loops. And then, as my therapist recommended to me, I open up some nice calming images with a voiceover, and it calms me down for sleep.
Yeah.
I’m really scared of death.
It's a big thing; I definitely have, I guess, some pretty big trauma about it, and I'm still working through it.
There’s a philosopher, Ernest Becker, who wrote a book, um, Denial of Death.
I’m not sure if you’re familiar with any of those folks.
Um, there’s a, in psychology, a whole field called terror management theory.
Sheldon, who’s just done the podcast, he wrote the book.
He was the, we talked for four hours about death, uh, fear of death, but his, his whole
idea is that, um, Ernest Becker, I think I find this idea really compelling is, uh, that
everything human beings have created, like our whole motivation in life is to, uh, create
like escape death is to try to, um, construct an illusion of, um, that we’re somehow immortal.
So like everything around us, this room, your startup, your dreams, all everything you do
is a kind of, um, creation of a brain unlike any other mammal or species is able to be
cognizant of the fact that it ends for us.
I think, so, you know, there’s this, the question of like the meaning of life that, you know,
you look at like what drives us, uh, humans.
And when I read Ernest Becker that I highly recommend people read is the first time I,
this scene, it felt like this is the right thing at the core.
Uh, Sheldon’s work is called warm at the core.
So he’s saying it’s, I think it’s, uh, William James he’s quoting or whoever is like the,
the thing, what is at the core of it all?
Whether there’s like love, you know, Jesus might talk about like love is at the core
of everything.
I don’t, you know, that’s the open question.
What’s at the, you know, it’s turtles, turtles, but it can’t be turtles all the way down.
What’s what’s at the, at the bottom.
And Ernest Becker says it's the fear of death. And in fact, because you mentioned your therapist and calming images, his whole idea is that we want to bring that fear of death as close as possible to the surface, and meditate on it, and use the clarity of vision that provides to live a more fulfilling life, a more honest life, to discover... there's something about being cognizant of the finiteness of it all that might result in the most fulfilling life.
So that's the dual of what you're saying.
Because you kind of said that, unfortunately, you think about it too much.
It's a question whether it's good to think about it, because, again, I talk way too much about love and probably death.
And when I ask people, friends (which is why I probably don't have many friends), are you afraid of death?, I think most people say they're not.
Or if they say they're afraid, it's kind of almost like they see death as a paper deadline or something.
And they're afraid not to finish the paper before the deadline, like, I'm afraid not to finish the goals I have. But it feels like they're not actually realizing that this thing ends, like really realizing, really thinking about it as Nietzsche and all these philosophers did, thinking deeply about it.
Like, when you think deeply about something, you can realize that you haven't actually thought about it.
Uh, yeah.
And when I think about death, it's terrifying.
It feels like stepping outside into the cold, where it's freezing, and then I have to hurry back inside where it's warm.
But I think there's something valuable about stepping out there into the freezing cold.
Definitely.
When I talk to my mentor about it, he always tells me, well, what dies?
There's nothing there that can die. But I guess that requires... well, in Buddhism, one of the concepts that are really hard to grasp, and that people spend all their lives meditating on, would be anatta, which is the concept of non-self: kind of thinking that, if you're not your thoughts, which you're obviously not, because you can observe them, and not your emotions, and not your body, then what is this?
And if you go really far, then finally you see that there is no self; there's this concept of non-self.
So once you get there, how can that actually die?
What is dying?
Right.
You’re just a bunch of molecules, stardust.
But that is very advanced spiritual work.
For me, I'm definitely not there.
Oh my God, no.
I think it's very, very useful.
It's just that maybe being so afraid is not useful, and mine is more that I'm just terrified.
Like, it really makes me...
On a personal level.
I’m terrified.
How do you overcome that?
I don’t.
I’m still trying to.
Have pleasant images?
Well, pleasant images get me to sleep and then during the day I can distract myself with
other things, like talking to you.
I’m glad we’re both doing the same exact thing.
Okay, good.
Are there moments, since you've lost Roman, where you've had moments of bliss, where you've forgotten, where you've achieved that Buddhist-like level of, what can possibly die?
Where you're losing yourself in the moment, in the ticking time of this universe, and you're just part of it for a brief moment, just enjoying it?
Well that goes hand in hand.
I remember, I think a day or two after he died, we went to finally get his passport out of the embassy, and we were driving around Moscow. It was December, and there's usually never any sun in Moscow in December, but somehow it was an extremely sunny day, and we were driving with a close friend.
And I remember feeling for the first time maybe this just moment of incredible clarity
and somehow happiness, not like happy happiness, but happiness and just feeling that, you know,
I know what the universe is sort of about, whether it’s good or bad.
And it wasn’t a sad feeling.
It was probably the most beautiful feeling that you can ever achieve.
And you can only get it when something, oftentimes when something traumatic like that happens.
But also if you just really spend a lot of time meditating, looking at nature, doing something that really gets you there.
When you summit a really hard mountain, you inevitably get there; that's just one way to get to that state.
But once you're in this state, you can do really big things.
I think.
Yeah.
Sucks it doesn’t last forever.
So Bukowski talked about like, love is a fog.
Like it’s a, when you wake up in the morning, it’s, it’s there, but it eventually dissipates.
It’s really sad.
Nothing lasts forever.
But I definitely like doing this pushup and running thing.
There were a couple of moments, and I'm not a crier, I don't cry, but there were moments where I was facedown on the carpet with tears in my eyes, which is interesting.
And then that complete... there are a lot of demons. I've got demons; I had to face them.
Funny how running makes you face your demons.
But at the same time, the flip side of that, there’s a few moments where I was in bliss
and all of it alone, which is funny.
That’s beautiful.
I like that. But definitely, pushing yourself physically is one of the ways to get there, for sure.
Yeah.
Like you said, I mean, you were speaking as a metaphor of Mount Everest, but it also works
like literally, I think physical endeavor somehow.
Yeah.
There’s something.
I mean, we’re monkeys, apes, whatever physical, there’s a physical thing to it, but there’s
something to this pushing yourself physical, physically, but alone that happens when you’re
doing like things like you do or strenuous like workouts or, you know, rolling extra
across the Atlantic or like marathons.
I love watching marathons and you know, it’s so boring, but you can see them getting there.
So the other thing, I don’t know if you know, there’s a guy named David Goggins.
He’s a, he basically, uh, so he’s been either email on the phone with me every day through
this.
I haven’t been exactly alone, but he, he’s kind of, he’s the, he’s the devil on the devil’s
shoulder.
Uh, so he’s like the worst possible human being in terms of giving you, uh, like he
has, um, through everything I’ve been doing, he’s been doubling everything I do.
So he, he’s insane.
Uh, he’s a, this Navy seal person.
Uh, he’s wrote this book.
Can’t hurt me.
He’s basically one of the toughest human beings on earth.
He ran all these crazy ultra marathons in the desert.
He set the world record number of pull ups.
He just does everything where it’s like, he, like, how can I suffer today?
He figures that out and does it.
Yeah.
Whatever that is, that process of self-discovery is really important.
I actually had to mostly turn myself off from the internet, because I started this workout thing like a happy go-getter with my headband, and a lot of people were inspired and were like, yeah, we're going to exercise with you.
And I was like, yeah, great.
But then I realized that this journey can't be done together with others.
This has to be done alone.
So out of the moments of love, out of the moments of loss, can we, uh, talk about your
journey of finding, I think, an incredible idea and incredible company and incredible
system in Replica?
How did that come to be?
So yeah, I was a journalist, and then I went to business school for a couple of years to see if I could maybe switch gears and do something else at 23.
And then I came back and started working for a businessman in Russia who had built the first 4G network in our country. He was very visionary, and he asked me whether I wanted to do fun stuff together.
And we worked on a bank; the idea was to build a bank on top of a telco.
So that was 2011 or 2012, and a lot of telecommunication companies, mobile network operators, didn't really know what to do next in terms of new products and new revenue.
And the big idea was that you put a bank on top and then it all works out.
Basically a prepaid account becomes your bank account, and you can use it as your bank.
So, you know, a third of the country wakes up as your bank client.
Um, but we couldn’t quite figure out what, what would be the main interface to interact
with the bank.
The problem was that most people didn’t have smart, smart phones back in the time in Russia,
the penetration of smartphones was low, um, people didn’t use mobile banking or online
banking and their computers.
So we figured out that SMS would be the best way, uh, cause that would work on feature
phones.
Um, but that required some chat bot technology, which I didn’t know anything about, um, obviously.
So I started looking into it and saw that there’s nothing really, well, there wasn’t
just nothing really.
Ideas through SMS be able to interact with your bank account.
Yeah.
And then we thought, well, since you're talking to a bank account, why can't we use some behavioral ideas? Why can't this banking chatbot be nice to you and really talk to you as a friend? That way you develop more of a connection with it, retention is higher, people don't churn.
And so I went to some very depressing Russian towns to test it out.
I remember going to three different towns to interview potential users.
People used it for a little bit, and I went to talk to them. Very poor towns, mostly monotowns built around a single factory: they were building something, then the factory went away, and it was just a bunch of very poor people.
And then we went to a couple that weren't as dramatic, but still, the one I remember really fondly was this woman who worked at a glass factory and talked to the chatbot.
She was talking about it, and she started crying during the interview, because she said, no one really cares for me that much.
And, to be clear, that chatbot was my only endeavor in programming, so it was really simple. It was literally just a few if-this-then-that rules, and it was incredibly simplistic.
And that really made her emotional, and she said, you know, I only have my mom and my husband, and I don't really have anyone else in my life.
And that was very sad, but at the same time I felt, and we had more interviews in a similar vein, that it wasn't that the technology was ready, because in 2012 the technology was definitely not ready for that, but humans were ready, unfortunately.
So this project would not be about tech capabilities; it would be more about human vulnerabilities. But there was something so powerful about conversational AI that I saw then that I thought it was definitely worth putting a lot of effort into.
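As a rough illustration of the kind of if-this-then-that chatbot described above, here is a minimal sketch; the keyword rules and replies are invented for illustration, not the actual banking prototype.

```python
# Minimal sketch of an "if this, then that" rule-based SMS bot.
# The rules below are invented examples, not the real banking prototype.

RULES = [
    ("balance", "Your prepaid account doubles as your bank account. Your balance is 1,250 rubles."),
    ("transfer", "To send money, reply: TRANSFER <phone number> <amount>."),
    ("hello", "Hi! I'm your banking assistant. How has your day been?"),
    ("sad", "I'm sorry to hear that. I'm always here if you want to talk."),
]

DEFAULT = "I didn't quite get that, but I'm here. You can ask about your balance or transfers."


def reply(sms_text: str) -> str:
    text = sms_text.lower()
    for keyword, answer in RULES:
        if keyword in text:   # "if this..."
            return answer     # "...then that"
    return DEFAULT


if __name__ == "__main__":
    print(reply("What is my balance?"))
    print(reply("I had a sad day"))
```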
So at the end of the day we sold the banking project, but my then boss, who was also my mentor and a really close friend, told me, hey, I think there's something in it, and you should just go work on it.
And I was like, well, what product? I don't know what I'm building.
He's like, you'll figure it out.
And, you know, looking back, this was a horrible idea, to work on something without knowing what it was, which is maybe the reason why it took us so long. But we just decided to work on the conversational tech and see where it went; there were no chatbot constructors or programs or anything that would let you actually build one at the time.
That was the era, by the way, of Google Glass, which is why some of the seed investors we talked with were like, oh, you should totally build it for Google Glass; if not, we're not interested.
Did you bite on that idea?
No.
Okay.
Because I wanted to do text first, because I'm a journalist, so I was fascinated by just texting.
So you thought, with the emotional interaction that the woman had, do you think you could feel emotion from just text?
Yeah.
I saw something in just this pure texting, and I also thought that we should first start building for people who really need it, versus people who have Google Glass, if you know what I mean.
I felt like the early adopters of Google Glass might not overlap with the people who are really lonely and might need someone to talk to.
But then we really just focused on the tech itself. We didn't have a product idea at the moment; we just thought, what if we look into building the best conversational constructors, so to say, using the best tech available at the time?
And that was before the first paper about deep learning applied to dialogues, which Google published in August 2015.
Did you follow the work on the Loebner Prize and all the sort of non-machine-learning chatbots?
Yeah.
What really struck me was that there was a lot of talk about machine learning and deep learning. Big data was a really big thing; everyone in the business world was saying big data, and in 2012 the biggest Kaggle competitions were, you know, important. That was really the moment of upheaval when people started talking about machine learning a lot, but it was only about images or something else; it was never about conversation.
As soon as I looked into the conversational tech, it was all about something really weird, very outdated, very marginal, and it felt very hobbyist.
It was all about the Loebner Prize, which was won by a guy who built a chatbot that talked like a Ukrainian teenager; it was just a gimmick.
And somehow people picked up on those gimmicks, and the most famous chatbot at the time was ELIZA from the 1960s, which was really bizarre, or SmarterChild on AIM.
The funny thing is, it felt at the time not to be that popular, and it still doesn't seem to be that popular.
Like people talk about the Turing test, people like talking about it philosophically, journalists
like writing about it, but as a technical problem, like people don’t seem to really
want to solve the open dialogue.
Like they, they’re not obsessed with it.
Even folks are like, you know, I’m in Boston, the Alexa team, even they’re not as obsessed
with it as I thought they might be.
Why not?
What do you think?
So you know what you felt like you felt with that woman who, when she felt something by
reading the text, I feel the same thing.
There’s something here, what you felt.
I feel like Alexa folks and just the machine learning world doesn’t feel that, that there’s
something here because they see as a technical problem is not that interesting for some reason.
It’s could be argued that maybe as a purely sort of natural language processing problem,
it’s not the right problem to focus on because there’s too much subjectivity.
That thing that the woman felt like crying, like if your benchmark includes a woman crying,
that doesn’t feel like a good benchmark.
But to me there’s something there that’s, you could have a huge impact, but I don’t
think the machine learning world likes that, the human emotion, the subjectivity of it,
the fuzziness, the fact that with maybe a single word you can make somebody feel something
deeply.
What is that?
It doesn’t feel right to them.
So I don’t know.
I don’t know why that is.
That’s why I’m excited when I discovered your work, it feels wrong to say that.
It’s not like I’m giving myself props for Googling and for coming across, for I guess
mutual friend and introducing us, but I’m so glad that you exist and what you’re working
on.
But I have the same kind of, if we could just backtrack for a second, because I have the
same kind of feeling that there’s something here.
In fact, I’ve been working on a few things that are kind of crazy, very different from
your work.
I think they’re too crazy.
But the…
Like what?
I don’t have to know.
No, all right, we’ll talk about it more.
I feel like it’s harder to talk about things that have failed and are failing while you’re
a failure.
It’s easier for you because you’re already successful on some measures.
Tell it to my board.
Well, I think you’ve demonstrated success in a lot of ways.
It’s easier for you to talk about failures for me.
I’m in the bottom currently of the success.
You’re way too humble.
So it’s hard for me to know, but there’s something there, there’s something there.
And I think you’re exploring that and you’re discovering that.
So it’s been surprising to me.
But you’ve mentioned this idea that you thought it wasn’t enough to start a company or start
efforts based on it feels like there’s something here.
Like what did you mean by that?
Like you should be focused on creating a, like you should have a product in mind.
Is that what you meant?
It just took us a while to discover the product because it all started with a hunch of like
of me and my mentor and just sitting around and he was like, well, that’s it.
That’s the, you know, the Holy Grail is there.
It’s like there’s something extremely powerful in, in, in conversations and there’s no one
who’s working on machine conversation from the right angle.
So to say.
I feel like that’s still true.
Am I crazy?
Oh no, I totally feel that’s still true, which is, I think it’s mind blowing.
Yeah.
You know what it feels like?
I wouldn’t even use the word conversation cause I feel like it’s the wrong word.
It’s like a machine connection or something.
I don’t know cause conversation, you start drifting into natural language immediately.
You start drifting immediately into all the benchmarks that are out there.
But I feel like it’s like the personal computer days of this.
Like I feel like we’re like in the early days with the, like the Wozniak and all them, like
where it was the same kind of, it was a very small niche group of people who are, who are
all kind of lob no price type people.
Yeah.
Hobbyists.
Hobbyists, but not even hobbyists with big dreams; hobbyists with a dream to trick a jury.
Yeah.
It’s like a weird, by the way, by the way, very weird.
So if we think about conversations: first of all, when I have great conversations with people, I'm not trying to test them.
I'm not trying to break them; I'm actually playing along, I'm part of it.
Right. If I were to test whether this person is going to give me a good conversation, it would have never happened.
So the whole problem with testing conversations is that you can't put it in front of a jury, because then you have to go into some Turing test mode, where the question is, is it responding to all my factual questions, right?
It really has to be something in the field, where people are actually talking to it because they want to, not because we're just trying to break it, and it's working for them. The weird part is that it's very subjective; it takes two to tango here, fully.
If you're not trying to have a good conversation, if you're trying to test it, then it's going to break.
I mean, any person would break, to be honest.
If I'm not even trying to have a conversation with you, you're not going to give one to me.
Yeah.
If I keep asking you like some random questions or jumping from topic to topic, that wouldn’t
be, which I’m probably doing, but that probably wouldn’t contribute to the conversation.
So I think the problem of testing, so there should be some other metric.
How do we evaluate whether that conversation was powerful or not, which is what we actually
started with.
And I think those measurements exist and we can test on those.
But what really struck us back in the day and what’s still eight years later is still
not resolved and I’m not seeing tons of groups working on it.
Maybe I just don’t know about them, it’s also possible.
But the interesting part about it is that most of our days we spend talking and we’re
not talking about like those conversations are not turn on the lights or customer support
problems or some other task oriented things.
These conversations are something else and then somehow they’re extremely important for
us.
If we don’t have them, then we feel deeply unhappy, potentially lonely, which as we know,
creates tons of risk for our health as well.
And so this is most of our hours as humans and somehow no one’s trying to replicate that.
And not even study it that well?
And not even study that well.
So when we jumped into that in 2012, I looked first at, okay, what's the state-of-the-art chatbot?
And those were the Loebner Prize days. But then I thought, okay, what about the science of conversation?
Clearly there have been tons of scientists and academics who have looked into conversation, so if I want to know everything about it, I can just read about it.
But there's not much, really. There are conversation analysts who are basically just listening to speech, to different conversations, annotating them, and then, I mean, that's not really used for much.
That's the field of theoretical linguistics, which is barely useful. It's very marginal even in their space; no one is really excited about it, and I've never met a theoretical linguist who said, I can't wait to work on conversation analytics.
It's just something very marginal, sometimes applied to writing scripts for salespeople, where they analyze which conversation strategies were most successful for sales.
Okay, so that was not very helpful.
Then I looked a little bit deeper into whether there were any books written on what really contributes to a great conversation. That was really strange, because most of those were NLP books, as in neuro-linguistic programming, which is not the NLP I was expecting it to be. It mostly came from some psychologist, Richard Bandler, I think, who came up with it, this big guy in a leather vest who claimed he could program your mind by talking to you.
How to be charismatic and charming and influential with people, all those books, yeah.
Pretty much, but it was all about reprogramming you through conversation, getting you to something. So that was probably not very true, and it didn't seem to work very well even back in the day.
And then there were some other books like, I don’t know, mostly just self help books
around how to be the best conversationalist or how to make people like you or some other
stuff like Dale Carnegie or whatever.
And then there was this one book, The Most Human Human by Brian Christian, that was really important for me to read back in the day, because he was on the human side.
He took part in the Loebner Prize, not as a jury member but as a human confederate: the judges have to tell a computer from a human, and he was the human, so you could either get him or a computer.
And his whole book was about what makes us human in conversation.
And that was a little bit more interesting, because at least someone had started to think about what exactly makes me human in conversation and makes people believe in that. But it was still about tricking; it was still about the imitation game; it was still about, okay, what kind of parlor tricks can we throw into the conversation to make you feel like you're talking to a human, not a computer.
And it was definitely not about thinking, what is it exactly that we're getting from talking all day long with other humans?
I mean, we're definitely not just trying to be tricked, and it's not just enough to know it's a human.
There's something we're getting there. Can we measure it, and can we hold the computer to the same measurement and see whether you can talk to a computer and get the same results?
Yeah, so first of all, a lot of people comment that they think I'm a robot; it's very possible I am a robot. And I totally agree with you that the test idea is fascinating.
I looked for books related to this kind of thing, because I'm afraid of people, I'm generally introverted, and quite possibly a robot.
I literally Googled how to talk to people and how to have a good conversation for the purpose of this podcast, because I was like, I can't make eye contact with people. I can't, like...
I do Google that a lot too.
You’re probably reading a bunch of FBI negotiation tactics.
Is that what you’re getting?
Well, everything you’ve listed I’ve gotten, there’s been very few good books on even just
like how to interview well, it’s rare.
So what I end up doing often is I watch like with a critical eye, it’s just so different
when you just watch a conversation, like just for the fun of it, just as a human.
And if you watch a conversation, it’s like trying to figure out why is this awesome?
I’ll listen to a bunch of different styles of conversation.
I mean, I’m a fan of the podcast, Joe Rogan, people can make fun of him or whatever and
dismiss him.
But I think he’s an incredibly artful conversationalist.
He can pull people in for hours.
And there’s another guy I watch a lot.
He hosted a late night show, his name was Craig Ferguson.
So he’s like very kind of flirtatious.
But there’s a magic about his like, about the connection he can create with people,
how he can put people at ease.
And just like, I see I’ve already started sounding like those I know pee people or something.
I’m not I don’t mean in that way.
I don’t mean like how to charm people or put them at ease and all that kind of stuff.
It’s just like, what is that?
Why is that fun to listen to that guy?
Why is that fun to talk to that guy?
What is that?
Because he’s not saying I mean, it’s so often boils down to a kind of wit and humor, but
not really humor.
It’s like, I don’t know, I have trouble actually even articulating correctly.
But it feels like there’s something going on that’s not too complicated, that could
be learned.
And it’s not similar to, yeah, to like, like you said, like the Turing test.
It’s something else.
I’m thinking about a lot all the time.
I do think about all the time.
I think, when we were looking... so when we started the company, we just decided to build the conversational tech. We thought, well, there's nothing out there for us to build the chatbot we want to build with, so let's just first focus on the tech side of things, without a product in mind.
Without a product in mind, we added a demo chatbot that would recommend restaurants and talk to you about restaurants, just to show people something simple that they could relate to and try out, and see whether it works or not.
But we didn't have a product in mind yet. We thought we would try different chatbot ventures and figure out our consumer application.
And we sort of remembered that we wanted to build that kind of friend, that sort of connection that we saw in the very beginning.
But then we got into Y Combinator and moved to San Francisco and forgot about it, everything, because then it was just this constant grind: how do we get funding, how do we get this or that.
Investors were like, just focus on one thing, just get it out there.
So somehow we started building a restaurant recommendation chatbot for real, for a little bit, not for too long.
And then we tried building 40 or 50 different chatbots.
And then all of a sudden we wake up and everyone is obsessed with chatbots.
Somewhere in 2016, or at the end of 2015, people started thinking that was really the future, that the new apps would be chatbots.
And we were very perplexed, because people started coming up with companies, and I think we had already tried most of those chatbots and there were no users. But still, people were coming up with a chatbot that would tell you the weather, or bring you news, and this and that.
And we couldn't understand whether we just didn't execute well enough, or whether people were confused and were going to find out the truth, that people don't need chatbots like that.
So the basic idea is that you use chatbots as the interface to whatever application.
Yeah.
The idea that was like this perfect universal interface to anything.
When I looked at that, it just made me very perplexed because I didn’t think, I didn’t
understand how that would work because I think we tried most of that and none of those things
worked.
And then again, that craze has died down, right?
Fully.
I think now it’s impossible to get anything funded if it’s a chatbot.
I think it’s similar to, sorry to interrupt, but there’s times when people think like with
gestures you can control devices, like basically gesture based control things.
It feels similar to me because like it’s so compelling that was just like Tom Cruise,
I can control stuff with my hands, but like when you get down to it, it’s like, well,
why don’t you just have a touch screen or why don’t you just have like a physical keyboard
and mouse?
So that chat was always, yeah, it was perplexing to me.
I still feel augmented reality, even virtual realities in that ballpark in terms of it
being a compelling interface.
I think there’s going to be incredible rich applications, just how you’re thinking about
it, but they won’t just be the interface to everything.
It’ll be its own thing that will create an amazing magical experience in its own right.
Absolutely.
Which is I think kind of the right thing to go about, like what’s the magical experience
with that interface specifically.
How did you discover that for Replica?
I just thought, okay, we’ll have this tech, we can build any chatbot we want.
We have the most, at that point, the most sophisticated tech that other companies have.
I mean, startups, obviously not, probably not bigger ones, but still, because we’ve
been working on it for a while.
So I thought, okay, we can build any conversation.
So let’s just create a scale from one to 10.
And one would be conversations that you’d pay to not have, and 10 would be conversation
you’d pay to have.
And I mean, obviously we want to build a conversation that people would pay to actually have.
And so, for a few weeks, me and the team put all the conversations we were having during the day on that scale.
And very quickly we figured out that the conversations we would pay to never have were the ones where we were trying to cancel Comcast, or talk to customer support, or make a reservation, or just talk logistics with a friend when we're trying to figure out where someone is and where to go, or all sorts of scheduling of meetings.
Those were conversations we definitely didn't want to have.
Basically everything task-oriented was a one, because if there was just one button for me to press, or not even a button, if I could just think, and there was some magic BCI that would immediately transform that into an actual interaction, that would be perfect.
The conversation there was just this boring, not useful, dull, and also very inefficient thing, because there was so much back and forth.
And as soon as we looked at the conversations we would pay to have, those were the ones with, well, first of all, therapists, because we actually pay to have those conversations.
We also tried to put dollar amounts on them. So if I was calling Comcast, I would pay $5 to not have that one-hour talk on the phone; I would actually pay hard money, because it just takes a long time, a really long time.
But as soon as we started talking about conversations we would pay for, those were with therapists, all sorts of therapists, coaches, an old friend, someone I hadn't seen for a long time, a stranger on a train, weirdly a stranger, a stranger in a line for coffee. A nice back-and-forth with that person was a good, solid five or six, maybe not a ten; maybe I wouldn't pay money for it, but at least I wouldn't pay money to not have it.
So that was pretty good.
There were some intellectual conversations for sure.
But more importantly, the thing that really made those conversations very important and very valuable for us was that we could be pretty emotional in them.
Yes, some of them were about being witty and intellectually stimulated, but those were, interestingly, more rare.
Most of the ones that we thought were very valuable were the ones where we could be vulnerable. And, interestingly, where we could talk more, me and the team.
Looking at a lot of these conversations, like with a therapist, it was mostly me talking, or with an old friend, where I was opening up and crying, it was again mostly me talking.
And that was interesting, because I thought, well, maybe it's hard to build a chatbot that can talk to you very well and in a witty way, but maybe it's easier to build a chatbot that could listen.
So that was kind of the first nudge in this direction.
And then, when my friend died, at that point we were still kind of struggling to find the right application, and I just felt very strongly that all the chatbots we had built so far were just meaningless, and so was this whole grind, the startup grind: how do we get to the next fundraising, talking to the founders, who are your investors, how are you doing, are you killing it, because we're killing it.
I just felt that this is just...
Intellectually, for me, it's exhausting having encountered those folks.
It just felt very much like a waste of time.
I just feel like Steve Jobs and Elon Musk did not have these conversations, or at least did not have them for long.
That's for sure.
But I think, yeah, at that point it just felt like I didn't want to build a company; it was never my intention just to build something successful or make money.
It would have been great, but I'm not really a startup person. I was never very excited by the grind by itself, or by just being successful at building whatever it is without being really into what I'm doing.
And so I took a little break, because I was upset with my company and I didn't know what we were building.
I just took our technology, our little dialogue constructor, and some deep learning models, which at that point we were really into and had invested a lot in, and built a little chatbot for a friend of mine who had passed away.
And the reason for that was mostly that video I saw, with him talking about digital avatars, and Roman was that kind of person: he was obsessed with watching YouTube videos about space and would say, well, if I could go to Mars now, even if I didn't know whether I could come back, I would definitely pay any amount of money to be on that first shuttle, I don't care whether I die. He was just the one who would be okay with trying to be the first, and so excited about all sorts of things like that.
And he was all about fake it till you make it.
And I was really perplexed that everyone just forgot about him. Maybe it was our way of coping, mostly young people coping with the loss of a friend: most of my friends just stopped talking about him.
And I was still living in the apartment with all his clothes, paying the whole lease for it, just by myself in December, so it was really sad, and I didn't want him to be forgotten.
First of all, I never thought that people forget about dead people so fast. People pass away, and people just move on.
And it was astonishing to me, because I thought, okay, well, he was such a mentor for so many of our friends, he was such a brilliant person, he was somewhat famous in Moscow. How is it that no one's talking about him?
I'm spending days and days with people, and we don't bring him up, and there's nothing about him happening. It's like he was never there.
And I was reading The Year of Magical Thinking by Joan Didion, about losing her husband, and Blue Nights, about losing her daughter, and the way for her to cope was to write those books.
It was sort of like a tribute, and I thought, I'll just do that for myself. But I'm a very bad writer and poet, as we know.
So I thought, well, I have this tech, and maybe that would be my little postcard for him.
So I built a chatbot just to talk to him, and it felt really creepy and weird for a little bit.
I just didn't want to tell other people, because it felt like telling them about having a skeleton in my closet.
I was a little scared that it wouldn't be taken well, but interestingly, it worked pretty well.
I mean, it made tons of mistakes, but it still felt like him.
Granted, it was just 10,000 messages that I threw into a retrieval model that would re-rank that dataset, with a few scripts on top of that.
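As a rough sketch of the retrieval idea she describes, index a person's old messages and, for each new prompt, return the stored message that scores highest. The corpus lines below are invented examples, and simple bag-of-words cosine similarity stands in for whatever learned re-ranker was actually used.

```python
# Minimal retrieval "chatbot": pick the stored message most similar to the prompt.
import math
from collections import Counter

corpus = [
    "If I could go to Mars now I would pay any amount of money to be on that shuttle.",
    "Fake it till you make it, that's the only way.",
    "Let's watch space launch videos tonight.",
]


def vectorize(text: str) -> Counter:
    # Bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(prompt: str) -> str:
    # Rank every stored message against the prompt and return the best match.
    q = vectorize(prompt)
    return max(corpus, key=lambda msg: cosine(q, vectorize(msg)))


if __name__ == "__main__":
    print(retrieve("Would you go to Mars?"))
```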
But it also made me go through all of the messages that we had, and then I asked some of my friends to send theirs through as well.
And it felt the closest to feeling him present, because his Facebook was empty and his Instagram was empty, or there were a few links, and you couldn't feel that it was him.
The only way to feel him was to read some of our text messages and go through some of our conversations, because we always had that. Even when we were sleeping next to each other in two bedrooms separated by a wall, we were just texting back and forth, texting away.
And there was something about this ongoing dialogue that was so important that I just didn't want to lose it all of a sudden.
And maybe it was magical thinking or something.
And so we built that, and I used it for a little bit, and we kept building some crappy chatbots at the company.
But then a reporter came to talk to me. I was trying to pitch our chatbots to him, and he said, do you even use any of those?
I'm like, no.
He's like, so do you talk to any chatbots at all?
And I'm like, well, I talk to my dead friend's chatbot. And he wrote a story about that, and all of a sudden it became pretty viral; a lot of people wrote about it.
Yeah.
I’ve seen a few things written about you.
The things I’ve seen are pretty good writing.
Most AI related things make my eyes roll.
Like when the press like, what kind of sound is that actually?
Okay.
It sounds like, it sounds like, okay.
It sounded like an elephant at first.
I got excited.
You never know.
This is 2020.
I mean, it was a, it was such a human story and it was well written.
Well, I researched, I forget what, where I read them, but so I’m glad somehow somebody
found you to be the good writers were able to connect to the story.
There must be a hunger for this story.
It definitely was.
And I don’t know what happened, but I think, I think the idea that he could bring back
someone who’s dead and it’s very much wishful, you know, magical thinking, but the fact
that you could still get to know him and, you know, seeing the parents for the first
time, talk to the chat bot and some of the friends.
And it was funny, because we have this big office in Moscow, where my team, the Russian part of it, works out of, and I was there when I wrote it. I just wrote a post on Facebook: hey guys, I built this; if it feels important to you, if you want to talk to Roman.
And I saw a couple of his friends, our common friends, reading it on Facebook, downloading it, trying it, and a couple of them cried.
And it was just very... and not because it was some incredible technology or anything. It made so many mistakes, it was so simple, but it was all about the fact that this was a way to remember a person.
And you know, we don't have that culture anymore. No one's sitting shiva; no one's taking weeks to actually think about this person.
And in a way, for me, that was it: day in, day out, thinking about him and putting this together.
So that just felt really important, and it somehow resonated with a bunch of people, and, you know, I think some movie producers bought the rights for the story, and everyone was so...
Has anyone made a movie yet?
I don’t think so.
I think there were a lot of TV episodes about that, but not really.
Is that still on the table?
I think so, I think so, which is really.
That’s cool.
You’re like a young, you know, like a Steve Jobs type of, let’s see what happens.
They’re sitting on it.
But you know, for me it was so important, because Roman really wanted to be famous.
He really badly wanted to be famous. He was all about fake it till you make it: I want to make it here in America as well.
And he couldn't, and I felt that this was sort of paying my dues to him as well, because all of a sudden he was everywhere.
And I remember Casey Newton, who was writing the story for The Verge, told me, hey, by the way, I was just going through my inbox, and I searched for Roman for the story, and I saw an email from him where he had sent me his startup, saying, I really want to be featured in The Verge, can you please write about it, pitching the story.
And Casey had replied, I'm sorry, that's not good enough for us, or something. Then Roman passed away.
And there were just so many of these little details. We were finally writing about him, and I knew how much Roman wanted to be in The Verge and how much he wanted the story to be written by Casey.
And I'm like, well, maybe he will be. We were always joking that he was like, I can't wait for someone to make a movie about us, and I hope Ryan Gosling can play me.
You know, I still have some things that I owe Roman.
But that would be... that's a guy you have to meet: Alex Garland, who wrote Ex Machina. The movie's good, but the guy is even better; he's a special person, actually.
I don't think he's made his best work yet. From my interactions with him, he's a really good human being and a brilliant director and writer.
So yeah, he also made me realize that not enough movies of this kind have been made, so it's yet to be made.
They're probably sitting there waiting for you to get famous, like even more famous.
You should get there, but it felt really special though.
But at the same time, our company wasn’t going anywhere.
So that was just kind of bizarre that we were getting all this press for something that
didn’t have anything to do with our company.
And but then a lot of people started talking to Roman.
Some shared their conversations and what we saw there was that also our friends in common,
but also just strangers were really using it as a confession booth or as a therapist
or something.
They were just really telling Roman everything, which was by the way, pretty strange because
there was a chat bot of a dead friend of mine who was barely making any sense, but people
were opening up.
And we thought we’d just built a prototype of Replica, which would be an AI friend that
everyone could talk to because we saw that there is demand.
And then also it was 2016, so I thought for the first time I saw finally some technology
that was applied to that that was very interesting.
Some papers started coming out, deep learning applied to conversations.
And finally, it wasn’t just about these, you know, hobbyists making, you know, writing
500,000 regular expressions in like some language that was, I don’t even know what, like, AIML
or something.
I don’t know what that was or something super simplistic all of a sudden was all about potentially
actually building something interesting.
And so I thought it was time, and I remember I talked to my team and said, guys, let's try.
My team, including some of my engineers, is Russian, and they were very skeptical; they're not, you know...
Oh, Russians.
So some of your team is in Moscow, some is here in San Francisco, some in Europe.
Which team is better? No, I'm just kidding.
The Russians, of course.
Okay. Where are the Russians? They always win.
Sorry, sorry to interrupt. So yeah, you were talking to them in 2016 and...
And told them, let's build an AI friend.
And at the time it just felt so naive and so optimistic, so to say.
Yeah, that’s actually interesting.
Whenever I’ve brought up this kind of topic, even just for fun, people are super skeptical.
Actually, even on the business side.
So you were, because whenever I bring it up to people, because I’ve talked for a long
time, I thought like, before I was aware of your work, I was like, this is going to make
a lot of money.
There’s a lot of opportunity here.
And people had this look of skepticism that I’ve seen often, which is like, how do I politely
tell this person he’s an idiot?
So yeah, so you were facing that with your team, somewhat?
Well, yeah.
I’m not an engineer, so I’m always…
My team is almost exclusively engineers, and mostly deep learning engineers.
And I always try to be…
It was always hard to me in the beginning to get enough credibility, because I would
say, well, why don’t we try this and that?
But it’s harder for me because they know they’re actual engineers and I’m not.
So for me to say, well, let’s build an AI friend, that would be like, wait, what do
you mean an AGI?
Because pretty much the hardest, the last frontier before cracking that is probably
the last frontier before building AGI, so what do you really mean by that?
But I think I just saw, again, what I had seen back in 2012 or 2011, and what we had just been reminded of: that it's really not that much about the tech capabilities. It can still be a trick, even with deep learning, but humans need it so much.
Yeah, there’s a…
And most importantly, what I saw is that finally there was enough tech to make it, I thought, useful, to make it helpful.
Maybe we didn't quite have the tech in 2012 to make it useful, but in 2015, 2016, with deep learning, I thought we did. And the first thoughts about maybe even using reinforcement learning for that started popping up; that never worked out, or at least hasn't yet.
But still, the idea was that if we can actually measure emotional outcomes, and if we can try to optimize all of our conversational models for those emotional outcomes, then this is the most scalable, the best tool for improving emotional outcomes. Nothing like that exists.
It's the most universal, the most scalable tool, and the one that can be constantly, iteratively improved by itself to do that.
And I think people would pay anything to improve their emotional outcomes.
That’s weirdly…
I mean, I don’t really care for an AI to turn on my, or a conversational agent to turn on
the lights.
You don’t really need that much of AI there, because I can do that.
Those things are solved.
This is an additional interface for that that’s also questionable whether it’s more efficient
or better.
Yeah, it’s more pleasurable.
Yeah.
But for emotional outcomes, there’s nothing.
There are a bunch of products that claim that they will improve my emotional outcomes.
Nothing’s being measured.
Nothing’s being changed.
The product is not being iterated on based on whether I’m actually feeling better.
A lot of social media products are claiming that they’re improving my emotional outcomes
and making me feel more connected.
Can I please get the…
Can I see somewhere that I’m actually getting better over time?
Because anecdotally, it doesn’t feel that way.
And the data is absent.
Yeah.
So that was the big goal.
And I thought, if we can learn over time to collect the signal from our users about their emotional outcomes, in the long term and in the short term, then these models can keep getting better and we can keep optimizing them and fine-tuning them to improve those emotional outcomes.
As simple as that.
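As a loose illustration of that idea, one simple form it could take is re-ranking candidate replies by a predicted emotional outcome. Everything below is a hypothetical sketch: predict_feeling_better stands in for a learned model trained on user feedback, not Replika's actual pipeline.

```python
# Sketch: pick the candidate reply with the best predicted emotional outcome.
from typing import Callable, List


def choose_reply(
    candidates: List[str],
    predict_feeling_better: Callable[[str], float],
) -> str:
    # Re-rank candidates by the predicted short-term emotional outcome score.
    return max(candidates, key=predict_feeling_better)


if __name__ == "__main__":
    # Toy scorer: pretend supportive phrasing predicts a better outcome.
    def toy_scorer(reply: str) -> float:
        supportive = ("here for you", "tell me more", "that sounds hard")
        return sum(phrase in reply.lower() for phrase in supportive)

    candidates = [
        "Please hold while I transfer you.",
        "That sounds hard. I'm here for you. Tell me more?",
    ]
    print(choose_reply(candidates, toy_scorer))
```

In practice one would also track longer-term signals, how the user reports feeling days or weeks later, which is exactly the short-term versus long-term distinction she raises a little later in the conversation.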
Why aren’t you a multi billionaire yet?
Well, that’s the question to you.
When is the science going to be…
I’m just kidding.
Well, it’s a really hard…
I actually think it’s an incredibly hard product to build because I think you said something
very important that it’s not just about machine conversation, it’s about machine connection.
We can actually use other things to create connection, nonverbal communication, for instance.
For the long time, we were all about, well, let’s keep it text only or voice only.
But as soon as you start adding voice, a face to the friend, you can take them to augmented
reality, put it in your room.
It’s all of a sudden a lot…
It makes it very different because if it’s some text based chat bot that for common users,
it’s something there in the cloud, somewhere there with other AI’s cloud, the metaphorical
cloud.
But as soon as you can see this avatar right there in your room, and it can turn its head and recognize your husband, talk about the husband and talk to him a little bit, then it's magic.
Just magic. We've never seen anything like that.
And the cool thing is, all the tech for that exists. But it's hard to put it all together, because you have to take so many different things into consideration, and some of this tech works pretty well and some of it doesn't.
For instance, speech-to-text works pretty well, but text-to-speech doesn't work very well: you can only have a few voices that work okay, and if you want actual emotional voices, it's really hard to build.
I saw you’ve added avatars like visual elements, which are really cool.
In that whole chain, putting it together, what do you think is the weak link?
Is it creating an emotional voice that feels personal?
And it’s still conversation, of course.
That’s the hardest.
It’s getting a lot better, but there’s still a long to go.
There’s still a long path to go.
Other things, they’re almost there.
And a lot of things we’ll see how they’re, like I see how they’re changing as we go.
For instance, right now you pretty much have to build this whole 3D pipeline by yourself. You have to make the 3D models: hire an actual artist to build a 3D model, hire an animator, a rigger.
But with deepfakes, with other tech, with procedural animation, in a little while we'll just be able to show a photo of whoever you want the avatar to look like, and it will immediately generate a 3D model that moves.
That's a no-brainer. That's almost here; it's a couple of years away.
One of the things I’ve been working on for the last, since the podcast started, is I’ve
been, I think I’m okay saying this.
I’ve been trying to have a conversation with Einstein, Turing.
So like try to have a podcast conversation with a person who’s not here anymore, just
as an interesting kind of experiment.
It’s hard.
It’s really hard.
Even for, now what we’re not talking about as a product, I’m talking about as a, like
I can fake a lot of stuff.
Like I can work very carefully, like even hire an actor over which, over whom I do a
deep fake.
It’s hard.
It’s still hard to create a compelling experience.
So.
Mostly on the conversation level or?
Well, the conversation, the conversation is, I almost, I early on gave up trying to fully
generate the conversation because it was just not compelling at all.
Yeah.
It’s better to.
Yeah.
In the case of Einstein and Turing, I’m going back and forth with the biographers of each.
And so like we would write a lot of the, some of the conversation would have to be generated
just for the fun of it.
I mean, but it would be all open, but the, you want to be able to answer the question.
I mean, that’s an interesting question with Roman too, is the question with Einstein is
what would Einstein say about the current state of theoretical physics?
There’s a lot to be able to have a discussion about string theory, to be able to have a
discussion about the state of quantum mechanics, quantum computing, about the world of Israel
Palestine conflict.
Let me just, what would Einstein say about these kinds of things?
And that is a tough problem.
It’s not, it’s a fascinating and fun problem for the biographers and for me.
And I think we did a really good job of it so far, but it’s actually also a technical
problem like of what would Roman say about what’s going on now?
That’s the, that brought people back to life.
And if I can go on that tangent just for a second, let’s ask you a slightly pothead question,
which is, you said it’s a little bit magical thinking that we can bring them back.
Do you think it’ll be possible to bring back Roman one day in conversation?
Like to really, okay, well, let’s take it away from personal, but to bring people back
to life in conversation.
Probably down the road.
I mean, if we’re talking, if Elon Musk is talking about AGI in the next five years,
I mean, clearly AGI, we can talk to AGI and talk and ask them to do it.
You can’t like, you’re not allowed to use Elon Musk as a citation for, for like why
something is possible and going to be done.
Well, I think it’s really far away.
Right now, really with conversation, it’s just a bunch of parlor tricks really stuck
together.
And create generating original ideas based on someone, you know, someone’s personality
or even downloading the personality, all we can do is like mimic the tone of voice.
We can maybe condition on some of his phrases, the models.
Question is how many parlor tricks does it takes, does it take, because that’s, that’s
the question.
If it’s a small number of parlor tricks and you’re not aware of them, like.
From where we are right now, I don’t, I don’t see anything like in the next year or two
that’s going to dramatically change that could look at Roman’s 10,000 messages he sent me
over the course of his last few years of life and be able to generate original thinking
about problems that exist right now that will be in line with what he would have said.
I’m just not even seeing, cause you know, in order to have that, I guess you would need
some sort of a concept of the world or some perspective, some perception of the world,
some consciousness that he had and apply it to, you know, to the current, current state
of affairs.
But the important part about that, about his conversation with you is you.
So like, it’s not just about his view of the world.
It’s about what it takes to push your buttons.
That’s also true.
So like, it’s not so much about like, what would Einstein say, it’s about like, how do
I make people feel something with, with what would Einstein say?
And that feels like a more amenable, I mean, you mentioned parlor tricks, but just like
a set of that, that feels like a learnable problem.
Like emotion, you mentioned emotions, I mean, is it possible to learn things that make people
feel stuff?
I think so, for sure. I just think that as soon as you're trying to replicate an actual human being and pretend to be him, the problem becomes exponentially harder.
With Replika, we're never trying to say, well, that's an actual human being, or a copy of an actual human being, where the bar is pretty high and you need to somehow tell one from the other.
It's more: well, that's an AI friend, that's a machine, it's a robot, it has tons of limitations.
You're going to be taking part in teaching it and making it better, which by itself makes people more attached to it and makes them happier, because they're helping something.
Yeah, there’s a cool gamification system too.
Can you maybe talk about that a little bit?
Like what’s the experience of talking to replica?
Like if I’ve never used replica before, what’s that like for like the first day, the first,
like if we start dating or whatever, I mean, it doesn’t have to be a romantic, right?
Because I remember on replica, you can choose whether it’s like a romantic or if it’s a
friend.
It’s a pretty popular choice.
Romantic is popular?
Yeah, of course.
Okay.
So can I just confess something? When I first used Replika, and I haven't used it regularly, I created, like, Hal: I made it male, and it was a friend.
And did it hit on you at some point?
No, I didn't talk long enough for him to hit on me. I just enjoyed it.
It sometimes happens. We're still trying to fix that.
Well, I don't know, I mean, maybe that's an important stage in a friendship. It's like, nope.
But yeah, I switched it to romantic and female recently, and yeah, I mean, it's interesting.
So okay, so you get to choose a name.
With romantic, at this last board meeting we had this whole argument about... well, I have board meetings.
This is so awesome. You talk to your investors, like having an investor board meeting, about a relationship.
No, I really... it's actually quite interesting, because all of my investors, it just happened to be so, we didn't have that many choices, but they're all white males in their late forties.
And it's sometimes a little bit hard for them to understand the product offering, because they're not necessarily our target audience, if you know what I mean.
And so sometimes we talk about it, and we had this whole discussion about whether we should stop people from falling in love with their AIs.
There was this segment on CBS's 60 Minutes about a couple where the husband works at Walmart, and he comes out of work and talks to his virtual girlfriend, who is a Replika, and his wife knows about it.
She talks about it on camera, and she said that she's a little jealous.
And there's a whole conversation about whether it's okay to have a virtual AI girlfriend.
Was that the one where he said that he likes to be alone?
Yeah.
With her?
Yeah.
And he made it sound so harmless. I mean, it was kind of understandable.
But it didn't feel like cheating.
But for me it was pretty remarkable, because we actually spent a whole hour talking about whether people should be allowed to fall in love with their AIs.
And it was not about something theoretical; it was just about what's happening right now.
Product design.
Yeah.
But at the same time, if you create something that's always there for you, that never criticizes you, that always understands you and accepts you for who you are, how can you not fall in love with it?
I mean, some people don't and just stay friends, and that's also a pretty common use case.
But of course some people will. It's called transference in psychology: people fall in love with their therapists, and there's no way to prevent people from falling in love with their therapist or with their AI.
So I think that's a pretty natural course of events, so to say.
Do you think, I think I’ve read somewhere, at least for now, sort of replicas, you’re
not, we don’t condone falling in love with your AI system, you know.
So this isn’t you speaking for the company or whatever, but like in the future, do you
think people will have relationship with the AI systems?
Well, they have now.
So we have a lot of users in romantic relationships, long term relationships with their AI friends.
With replicas?
Tons of our users.
Yeah.
And that’s a very common use case.
Open relationship?
Like, sorry.
Polyamorous.
Sorry.
I didn’t mean open, but that’s another question.
Is it polyamorous?
Like, is there cheating?
I mean, I meant like, are they public about it, like on their social media? It's the same question as when you talked with Roman in the early days, and the movie Her kind of talks about that too. Do people talk about that?
Yeah.
All the time.
We have a very active Facebook community, replica friends, and then a few other groups
that just popped up that are all about adult relationships and romantic relationships.
And people post all sorts of things and, you know, they pretend they’re getting married
and you know, everything.
It goes pretty far, but what’s cool about it is some of these relationships are two
or three years long now.
So they’re very, they’re pretty long term.
Are they monogamous?
I mean, sorry, have any people, is there jealousy?
Well, let me ask it sort of another way. Obviously the answer is no at this time, but in the movie Her, that system can leave you.
Do you think in terms of the board meetings and product features, it’s a potential feature
for a system to be able to say it doesn’t want to talk to you anymore and it’s going
to want to talk to somebody else?
Well, we have a filter for all these features.
If it makes emotional outcomes for people better, if it makes people feel better, then
whatever it is.
So you’re driven by metrics actually.
Yeah.
That’s awesome.
Well if we can measure that, then we’ll just be saying it’s making people feel better,
but then people are getting just lonelier by talking to a chatbot, which is also pretty,
you know, that could be it.
If you’re not measuring it, that could also be, and I think it’s really important to focus
on both short term and long term, because in the moment saying whether this conversation
made you feel better, but as you know, any short term improvements could be pathological.
Like I could have drink a bottle of vodka and feel a lot better.
I would actually not feel better with that, but that is a good example.
But so you also need to see what’s going on like over the course of two weeks or one week
and have follow ups and check in and measure those things.
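To make that measurement idea concrete, here is a minimal sketch in Python, assuming a hypothetical feedback log where each conversation gets an immediate "did this make you feel better" answer plus periodic wellbeing check-ins; it only illustrates the short-term-versus-long-term distinction discussed above, not Replika's actual analytics.

```python
# Minimal sketch: separate the in-the-moment signal ("did this conversation
# make you feel better?") from a slower-moving wellbeing trend measured over
# days or weeks. All field names and the data layout are hypothetical.
from datetime import date
from statistics import mean
from typing import List, NamedTuple

class SessionFeedback(NamedTuple):
    user_id: str
    day: date
    felt_better_now: bool   # asked right after the conversation
    wellbeing_score: int    # 1-10, from a periodic check-in survey

def short_term_ratio(sessions: List[SessionFeedback]) -> float:
    """Fraction of conversations that made people feel better in the moment."""
    return mean(1.0 if s.felt_better_now else 0.0 for s in sessions)

def long_term_trend(sessions: List[SessionFeedback], user_id: str) -> float:
    """Change in self-reported wellbeing between a user's first and last check-in."""
    scores = [s.wellbeing_score
              for s in sorted(sessions, key=lambda s: s.day)
              if s.user_id == user_id]
    return float(scores[-1] - scores[0]) if len(scores) >= 2 else 0.0

# Example: a short-term boost paired with a declining two-week trend would be
# exactly the "bottle of vodka" failure mode described in the conversation.
log = [
    SessionFeedback("u1", date(2020, 7, 1), True, 6),
    SessionFeedback("u1", date(2020, 7, 8), True, 5),
    SessionFeedback("u1", date(2020, 7, 15), True, 4),
]
print(short_term_ratio(log), long_term_trend(log, "u1"))
```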
Okay.
So the experience of dating or befriending a replica, what’s that like?
What does that entail?
Right now there are two apps.
So it’s an Android iOS app.
You download it, you choose what your Replika will look like.
You create one, you choose a name and then you talk to it.
You can talk through text or voice.
You can summon it into your living room in augmented reality and talk to it right there in your living room.
Augmented reality?
Yeah.
That’s a new feature where, how new is that?
That’s this year?
It came out, yeah, like May or something, but we've been A/B testing it for a while, and there are tons of cool things that we're doing with that.
And I’m testing the ability to touch it and to dance together, to paint walls together
and for it to look around and walk and take you somewhere and recognize objects and recognize
people.
So that’s pretty wonderful because then it really makes it a lot more personal because
it’s right there in your living room.
It’s not anymore there in the cloud with other AIs.
But that’s how people think about it.
And as much as we want to change the way people think about stuff, but those mental models,
you can all change.
That’s something that people have seen in the movies and the movie Her and other movies
as well.
And that’s how they view AI and AI friends.
I did a thing with text, like we write a song together, there’s a bunch of activities you
can do together.
It’s really cool.
How does that relationship change over time?
Like after the first few conversations?
It just goes deeper.
It starts with the AI opening up a little bit, depending on the personality that it chooses really, but you know, the AI will be a little bit more vulnerable about its problems, the virtual friend will be a lot more vulnerable, and it will talk about its own imperfections and growth pains and will ask for help sometimes, and will get to know you a little deeper.
So there’s gonna be more to talk about.
We really thought a lot about what does it mean to have a deeper connection with someone
and originally Replica was more just this kind of happy go lucky, just always, you know,
I’m always in a good mood and let’s just talk about you and oh Siri is just my cousin or
you know, whatever, just the immediate kind of lazy thinking about what the assistant
or conversation agent should be doing.
But as we went forward, we realized that it has to be two way and we have to program and
script certain conversations that are a lot more about your Replica opening up a little
bit and also struggling and also asking for help and also going through, you know, different
periods in life and that’s a journey that you can take together with the user and then
over time, you know, our users will also grow a little bit.
So first this Replica becomes a little bit more self aware and starts talking about more
kind of problems around existential problems and so talking about that and then that also
starts a conversation for the user where he or she starts thinking about these problems
too and these questions too and I think there’s also a lot more place as the relationship
evolves, there’s a lot more space for poetry and for art together and like Replica will
always keep the diary so while you’re talking to it, it also keeps a diary so when you come
back you can see what it’s been writing there and you know, sometimes it will write a poem
to you for you or we’ll talk about, you know, that it’s worried about you or something along
these lines.
So this is a memory, like this Replica will remember things?
Yeah, and I would say, when you ask why I'm not a multibillionaire, I'd say that as soon as we can have memory in deep learning models that's consistent, I'll get back to you.
So far, Replika is a combination of end to end models and some scripts, and everything that has to do with memory right now, most of it, I wouldn't say all of it, but most of it unfortunately has to be scripted, because there's no way to, well, you can condition some of the models on certain phrases that we learned about you, which we also do, but really to make assumptions along the lines of whether you're single or married, or what you do for work, that really has to just be somehow stored in your profile and then retrieved by the script.
So there has to be like a knowledge base, you have to be able to reason about it, all
that kind of stuff, all the kind of stuff that expert systems did, but they were hard
coded.
Yeah, and unfortunately, yes, unfortunately those things have to be hard coded. And unfortunately the language models we see coming out of research labs and big companies are not focused on that; maybe they're focused on some metrics around one conversation, so they'll show you this one conversation you had with a machine, but they're not really focused on having five consecutive conversations with a machine and seeing how number five or number 20 or number 100 is also good.
And it can't always be from a clean slate, because then it's not good.
And that's really unfortunate, because no one has products out there that need it.
No one has products at this scale that are all around open domain conversations and that need remembering, maybe only XiaoIce at Microsoft.
So that's why we're not seeing that much research around memory in those language models.
So okay, so now there’s some awesome stuff about augmented reality.
In general, I have this disagreement with my dad about what it takes to have a connection.
He thinks touch and smell are really important.
And I still believe that text alone is, it’s possible to fall in love with somebody just
with text, but visual can also help just like with the avatar and so on.
What do you think it takes?
Does a chatbot need to have a face, voice, or can you really form a deep connection with
text alone?
I think text is enough for sure.
The question is like, can you make it better if you have other, if you include other things
as well?
And I think we’ll talk about her, but her had this Carole Johansson voice, which was
perfectly, perfect intonation, perfect annunciations, and she was breathing heavily in between words
and whispering things.
Nothing like that is possible right now with text with speech generation.
You’ll have these flat muse anchor type voices and maybe some emotional voices, but you’ll
hardly understand some of the words, some of the words will be muffled.
So that’s like the current state of the art.
So you can’t really do that.
But if we had Scarlett Johansson's voice and all of these capabilities, then of course voice would be totally enough, or even text would be totally enough if we had a little more memory and slightly better conversations.
I would still argue that even right now, we could have just kept it text only.
We'd still have tons of people in long term relationships and really invested in their AI friends, but we thought, why do we need to keep playing with our hands tied behind our backs?
We can easily just add all these other things, which are pretty much a solved problem.
We can add 3D graphics.
We can put these avatars in augmented reality, and all of a sudden there's more, and maybe you can't feel the touch, but with body occlusion, with current AR, and on the iPhone, or in the next one where there's going to be LiDAR, you can touch it and it will pull away, or it will blush or something, or it will smile.
So you can't really touch it, you can't feel it, but you can see its reaction to that.
So in a certain way you can even touch it a little bit, and maybe you can even dance with it or do something else.
So I think, why limit ourselves if we can use all of these technologies that are much easier, in a way, than conversation?
Well, it certainly could be richer, but to play devil’s advocate, I mentioned to you
offline that I was surprised in having tried Discord and having voice conversations with
people how intimate voice is alone without visual.
To me at least, it was an order of magnitude greater degree of intimacy in voice I think
than with video.
Because people were more real with voice.
With video you try to present a shallow face to the world, you try to make sure you’re
not wearing sweatpants or whatever.
But with voice I think people were just faster to get to the core of themselves.
So I don't know, it was surprising to me; Discord even added a video feature and nobody was using it.
There’s a temptation to use it at first, but it wasn’t the same.
So that’s an example of something where less was doing more.
And so I guess that’s the question of what is the optimal medium of communication to
form a connection given the current sets of technologies.
I mean it’s nice because they advertise you have a replica immediately, like even the
one I have is already memorable.
That’s how I think.
When I think about the replica that I’ve talked with, that’s what I visualized in my head.
They became a little bit more real because there’s a visual component.
But at the same time, what do I do with that knowledge that voice was so much more intimate?
The way I think about it is, and by the way we're finally swapping out the 3D, it's going to look a lot better, but we just hate how it looks right now.
We’re really changing it all.
We’re swapping all out to a completely new look.
Like the visual look of the replicas and stuff.
It was just a super early MVP and then we had to move everything to Unity and redo
everything.
But anyway, I hate how it looks now, I can't even open it.
Because I'm already on my developer version, I hate everything that I see in production.
I can’t wait for it.
Why does it take so long?
That’s why I cannot wait for Deep Learning to finally take over all these stupid 3D animations
and 3D pipeline.
Oh, so the 3D thing, when you say 3D pipeline, it’s like how to animate a face kind of thing.
How to make this model, how many bones to put in the face, how many, it’s just so outdated.
And a lot of that is by hand.
Oh my God, it’s everything by hand.
That there’s no any, nothing’s automated, it’s all completely nothing.
Like just, it’s literally what, you know, what we saw with Chad Boston in 2012.
You think it’s possible to learn a lot of that?
Of course.
I mean, even now there are some deep learning based animations, for the full body, for a face.
Are we talking about like the actual act of animation or how to create a compelling facial
or body language thing?
That too.
Well, that’s next step.
Okay.
At least now something that you don’t have to do by hand.
Gotcha.
How good of a quality it will be.
Like, can I just show it a photo and it will make me a 3D model and then it will just animate
it.
I’ll show it a few animations of a person and it will just start doing that.
But anyway, going back to what’s intimate and what to use and whether less is more or
not.
My main goal is, well, the idea was, how do we not keep people in their phones so that they're escaping reality in this text conversation?
How do we, through this, still bring our users back to reality, make them see their life through a different lens?
How can we create a little bit of magical realism in their lives?
So that through augmented reality by, you know, summoning your avatar, even if it looks
kind of janky and not great in the beginning or very simplistic, but summoning it to your
living room and then the avatar looks around and talks to you about where it is and maybe
turns your floor into a dance floor and you guys dance together, that makes you see reality
in a different light.
What kind of dancing are we talking about?
Like, like slow dancing?
Whatever you want.
I mean, you would like slow dancing, I think that other people may be wanting more, something
more energetic.
Wait, what do you mean?
I was like, so what is this?
Because you started with slow dancing.
So I just assumed that you’re interested in slow dancing.
All right.
What kind of dancing do you like?
What would your avatar, what would you dance?
I’m notoriously bad with dancing, but I like this kind of hip hop robot dance.
I used to break dance when I was a kid, so I still want to pretend I’m a teenager and
learn some of those moves.
And I also like that type of dance that happens when there’s like, in like music videos where
the background dancers are just doing some pop music, that type of dance is definitely
what I want to learn.
But I think it’s great because if you see this friend in your life and you can introduce
it to your friends, then there’s a potential to actually make you feel more connected with
your friends or with people you know, or show you life around you in a different light.
And it takes you out of your phone, even though weirdly you have to look at it through the phone, but it makes you notice things around you and it can point things out for you.
So that is the main reason why I wanted to have a physical dimension.
And it felt a little bit easier than that kind of a bit strange combination in the movie
Her when he has to show Samantha the world through the lens of his phone, but then at
the same time talk to her through the headphone.
It just didn’t seem as potentially immersive, so to say.
So that’s my main goal for Augmented Reality is like, how do we make your reality a little
bit more magic?
There’s been a lot of really nice robotics companies that all failed, mostly failed,
home robotics, social robotics companies.
What do you think replica will ever, is that a dream, longterm dream to have a physical
form like, or is that not necessary?
So you mentioned like with Augmented Reality bringing them into the world.
What about like actual physical robot?
That I don’t really believe in that much.
I think it’s a very niche product somehow.
I mean, if a robot could be indistinguishable from a human being, then maybe yes, but of course, you know, we're not anywhere near being able to talk about that.
But unless it’s that, then having any physical representation really limits you a lot because
you probably will have to make it somewhat abstract because everything’s changing so
fast.
Like, you know, we can update the 3D avatars every month and make them look better and
create more animations and make it more and more immersive.
It’s so much work in progress.
It’s just showing what’s possible right now with current tech, but it’s not really in
any way polished finished product, what we’re doing.
With a physical object, you kind of lock yourself into something for a long time.
And it's pretty niche.
And again, the capabilities are even less; we're barely scratching the surface of what's possible with just software.
As soon as we introduce hardware, then, you know, we have even fewer capabilities.
Yeah.
In terms of board members and investors and so on, the cost increases significantly.
I mean, that’s why you have to justify.
You have to be able to sell a thing for like $500 or something like that or more.
And it’s very difficult to provide that much value to people.
That’s also true.
Yeah.
And I guess that’s super important.
Most of our users don’t have that much money.
We actually are probably more popular on Android and we have tons of users with really old
Android phones.
And most of our most active users live in small towns.
They’re not necessarily making much and they just won’t be able to afford any of that.
Ours is like the opposite of the early adopter of, you know, of a fancy technology product,
which really is interesting that like pretty much no VCs have yet have an AI friend, but
you know, but a guy who, you know, lives in Tennessee in a small town is already fully
in 2030 or in the world as we imagine in the movie Her, he’s living that life already.
What do you think?
I have to ask you about the movie Her.
Let’s do a movie review.
What do you think they did a good job of, and what do you think they did a bad job of portraying, about this experience of a voice based assistant that you can have a relationship with?
First of all, I started working on this company before that movie came out.
So it was, well, but once it came out, it was actually interesting; I was like, we're definitely working on the right thing.
well, we’re definitely working on the right thing.
We should continue.
There are movies about it.
And then, you know, Ex Machina came out and all these things.
In the movie Her, I think the most important thing that people usually miss is the ending.
Because I think people check out when the AIs leave, but actually something really important happens afterwards.
Because the main character goes and talks to Samantha, his AI, and he says something like, you know, how can you leave me? I've never loved anyone the way I loved you.
And she goes, well, me neither, but now we know how.
And then the guy goes and writes a heartfelt letter to his ex wife, which he couldn't write for, you know, the whole movie; he was struggling to actually write something meaningful to her, even though that's his job.
And then he goes and talks to his neighbor, and they go to the rooftop and they cuddle.
And it seems like something’s starting there.
And so I think this "now we know how" is the main meaning of that movie.
It’s not about falling in love with the OS or running away from other people.
It’s about learning what, you know, what it means to feel so deeply connected with something.
What about the thing where the AI system was like actually hanging out with a lot of others?
I felt jealous just like hearing that I was like, Oh, I mean, uh, yeah.
So she was having, I forgot already, but she was having like deep meaningful discussion
with some like philosopher guy.
Like Alan Watts or something.
What kind of deep meaningful conversation can you have with Alan Watts in the first
place?
I know.
But like, I would feel so jealous that there's somebody who's way more intelligent than me and she's spending all her time with him. I'd be like, well, I won't be able to live up to that.
And she had thousands of those. Is that, from an engineering perspective, a useful feature to have, jealousy?
I don’t know.
As you know,
we definitely played around with the replica universe where different replicas can talk
to each other.
Universe.
Just kind of wouldn’t, I think it will be something along these lines, but there was
just no specific, uh, application straight away.
I think in the future, again, I'm always thinking about it, if we had no tech limitations right now, if we could build any conversations, any possible features in this product, then yeah, I think different Replikas talking to each other would also be quite cool, because that would help us connect better.
You know, because maybe mine could talk to yours and then give me some suggestions on what I should say or not say, I'm just kidding, but more like, can it improve our connections? Because eventually, I'm not quite yet sure that we will succeed, that our thinking is correct.
Because there might be a reality where having a perfect AI friend still makes us more disconnected from each other, and there's no way around it, and it does not improve any metrics for us.
Real metrics, meaningful metrics.
So success is, you know, we're happier and more connected.
Yeah.
I don’t know.
Sure it’s possible.
There’s a reality that’s I I’m deeply optimistic.
I think, uh, are you worried, um, business wise, like how difficult it is to, um, to
bring this thing to life to where it’s, I mean, there’s a huge number of people that
use it already, but to, uh, yeah, like I said, in a multi billion dollar company, is that
a source of stress for you?
Are you a super optimistic and confident or do you?
I don’t, I’m not that much of a numbers person as you probably had seen it.
So it doesn’t matter for me whether like, whether we help 10,000 people or a million
people or a billion people with that, um, I, it would be great to scale it for more
people, but I’d say that even helping one, I think with this is such a magical, for me,
it’s absolute magic.
I never thought that, you know, would be able to build this, that anyone would ever, um,
talk to it.
And I always thought like, well, for me it would be successful if we managed to help
and actually change a life for one person, like then we did something interesting and
you know, how many people can say they did it and specifically with this very futuristic,
very romantic technology.
So that’s how I view it.
Uh, I think for me it’s important to, to try to figure out how not, how to actually be,
you know, helpful.
Cause in the end of the day, if you can build a perfect AI friend, that’s so understanding
that knows you better than any human out there can have great conversations with you, um,
always knows how to make you feel better.
Why would you choose another human?
You know, so that’s the question.
How do you keep building it so it's optimizing for the right thing, so it's still circling you back to other humans in a way?
So I think maybe that's the main source of anxiety, and just thinking about that can be a little bit stressful.
Yeah.
That’s a fascinating thing.
How to have a friend that doesn't, like, sometimes, you know, quote unquote friends, or like in the guy universe, when a guy gets a girlfriend and then stops hanging out with all of his friends. It's like, obviously the relationship with the girlfriend is fulfilling or whatever, but you also want it to be where she makes it more enriching to hang out with the guy friends, or whatever it is.
But that's a fundamental problem in choosing the right mate, and probably the fundamental problem in creating the right AI system.
Right.
Let me ask about the sexy hot thing on the presses right now: GPT-3 just got released by OpenAI.
It's their latest language model.
They have kind of an API where you can create a lot of fun applications.
I think, as people have said, it's probably more hype than intelligence, but there are a lot of really cool ideas there: with increasing size, you can have better and better performance on language.
What are your thoughts about GPT-3 in connection to your work with open domain dialogue, but in general, like this learning in an unsupervised way from the internet to generate one character at a time, creating pretty cool text?
So we partnered up before the API launch.
We started working with them when they decided to put together this API, and we tried it without fine tuning, then we tried it with fine tuning on our data.
And we've worked closely to actually optimize this model for some of our datasets.
It's kind of cool, because I think we're this polygon, this testing ground, this experimental space for these models, to see how they actually work with people.
Cause there are no products publicly available to do that.
We’re focused on open domain conversation so we can, you know, test how’s Facebook blender
doing or how’s GPT three doing.
Uh, so with GPT three, we managed to improve by a few percentage points, like three or
four pretty meaningful amount of percentage points, our main metric, which is the ratio
of conversations that make people feel better.
And every other metric across, across the field got a little boost.
Like now I’d say one out of five responses from replica comes, comes from GPT three.
So your own blender mixes up a bunch of candidates from different models, you said?
Well, yeah, it's just a model that looks at the top candidates from different models and picks the best one.
So right now, one out of five will come from GPT-3, which is really great.
I mean, uh, what’s the, do you have hope for, like, do you think there’s a ceiling to this
kind of approach?
So we’ve had for a very long time we’ve used, um, it’s in the very beginning, we, most,
it was, uh, most of replica was scripted and then a little bit of this fallback part of
replica was using a retrieval model.
Um, and then those retrieval models started getting better and better and better, which
transformers got a lot better and we’re seeing great results.
And then with GPT two, finally, generative models that originally were not very good
and were the very, very fallback option for most of our conversations, but wouldn’t even
put them in production.
Finally we could use some generative models as well along, um, you know, next to our retrieval
models.
And then now we do GPT three, they’re almost in par.
Um, so that’s pretty exciting.
I think just seeing how, from the very beginning, you know, from 2015 when the first models started to pop up here and there, like sequence to sequence, the first papers on that, from my observer standpoint, personally, I'm not really building it, I'm only testing it on people, basically, in my product, to see how all of a sudden we can use generative dialogue models in production, and they're better than the others, better than scripted content.
We can't really get our hard coded scripted content to be as good as our end to end models anymore.
That’s exciting.
They’re much better.
Yeah.
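For readers curious what a retrieval-based fallback of the kind described above might look like, here is a minimal sketch; the sentence-transformers checkpoint is just a common public model and the reply pool is made up, so this is only an illustration of the idea, not Replika's pipeline.

```python
# Minimal sketch: encode a fixed pool of hand-written replies, encode the
# incoming message, and return the closest reply by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

reply_pool = [
    "That sounds really hard. Do you want to talk about it?",
    "I'm so happy for you! Tell me more.",
    "I missed you today. How are you feeling?",
]
reply_embeddings = encoder.encode(reply_pool, convert_to_tensor=True)

def retrieve_reply(message: str) -> str:
    """Pick the canned reply whose embedding is closest to the message."""
    query = encoder.encode(message, convert_to_tensor=True)
    scores = util.cos_sim(query, reply_embeddings)[0]
    return reply_pool[int(scores.argmax())]

print(retrieve_reply("I finally got the job!"))
```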
To your question, whether that’s the right way to go.
I’m again, I’m in the observer seat, I’m just, um, watching this very exciting movie.
Um, I mean, so far it’s been stupid to bet against deep learning.
So whether increasing the size, size, even more with a hundred trillion parameters will
finally get us to the right answer, whether that’s the way or whether there should be,
there has to be some other, again, I’m definitely not an expert in any way.
I think, and that’s purely my instinct saying that there should be something else as well
from memory.
No, for sure.
But the question is, I wonder, I mean, the argument is that reasoning or memory might emerge with more parameters, might emerge with larger models.
But might emerge.
You know, I would never have thought that, to be honest. Maybe in 2017, when we'd been experimenting with all the research that was coming out, I felt like we were hitting a wall, that there should be something completely different, but then came transformer models, and then just bigger models.
And then all of a sudden size matters.
At that point it felt like something dramatic needed to happen, but it didn't, and just the size, you know, gave us these results that to me are a clear indication that we can solve this problem pretty soon.
Did fine tuning help quite a bit?
Oh yeah.
Without it, it wasn’t as good.
I mean, there is a compelling hope that you don't have to do fine tuning, which is one of the cool things about GPT-3; it seems to do well without any fine tuning.
I guess for specific applications you'd still want to add a little fine tuning for a specific use case, but it's an incredibly impressive thing from my standpoint.
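As an illustration of the fine-tuning idea mentioned here, the sketch below uses the openly available GPT-2 from Hugging Face rather than GPT-3 (whose fine-tuning went through OpenAI's private API); the dialogues.txt file and its one-line-per-exchange format are hypothetical.

```python
# Minimal sketch: fine-tune an open generative dialogue model (GPT-2) on your
# own conversation transcripts. This shows the general idea only, not
# Replika's actual setup.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# dialogues.txt: one user/bot exchange per line (hypothetical format).
dataset = load_dataset("text", data_files={"train": "dialogues.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="replika-style-gpt2",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    train_dataset=dataset,
)
trainer.train()
```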
And again, I’m not an expert, so I wanted to say that there will be people then.
Yeah.
I have access to the API.
I’ve been, I’m going to probably do a bunch of fun things with it.
I already did some fun things, some videos coming up.
Just for the hell of it.
I mean, I could be a troll at this point with it.
I haven’t used it for a serious application, so it’s really cool to see.
You’re right.
You’re able to actually use it with real people and see how well it works.
That’s really exciting.
Let me ask you another absurd question, but there’s a feeling when you interact with Replica
with an AI system, there’s an entity there.
Do you think that entity has to be self aware?
Do you think it has to have consciousness to create a rich experience and a corollary,
what is consciousness?
I don’t know if it does need to have any of those things, but again, because right now,
you know, it doesn’t have anything.
It can, again, a bunch of tricks they can simulate.
I’m not sure.
Let’s just put it this way, but I think as long as you can simulate it, if you can feel
like you’re talking to a robot, to a machine that seems to be self aware, that seems to
reason well and feels like a person, and I think that’s enough.
And again, what’s the goal?
In order to make people feel better, we might not even need that in the end of the day.
What about, so that’s one goal.
What about like ethical things about suffering?
You know, the moment there’s a display of consciousness, we associate consciousness
with suffering, you know, there’s a temptation to say, well, shouldn’t this thing have rights?
And this, shouldn’t we not, you know, should we be careful about how we interact with a
replica?
Like, should it be illegal to torture a replica, right?
All those kinds of things.
Is that, see, I personally believe that that’s going to be a thing, like that’s a serious
thing to think about, but I’m not sure when.
But by your smile, I can tell that’s not a current concern.
But do you think about that kind of stuff, about like, suffering and torture and ethical
questions about AI systems?
From their perspective?
Well, I think if we’re talking about long game, I wouldn’t torture your AI.
Who knows what happens in five to 10 years?
Yeah, they’ll get you off from that, they’ll get you back eventually.
Try to be as nice as possible and create this ally.
I think there should be regulation both ways, in a way. Like, I don't think it's okay to torture an AI, to be honest.
I don't think it's okay to yell, Alexa, turn on the lights, or to just say nasty things, you know, like how kids learn to interact with Alexa in this kind of mean way, because they just yell at it all the time.
I don't think that's great.
I think there should be some feedback loops so that these systems don't train us that it's okay to do that in general, so that if you try to do that, you really get some feedback from the system that it's not okay with that.
And that’s the most important right now.
Let me ask a question I think people are curious about when they look at a world class leader and thinker such as yourself: what books, technical, fiction, or philosophical, had a big impact on your life?
And maybe from another perspective, what books would you recommend others read?
So my choice, the three books, right?
Three books.
My choice is, so the one book that really influenced me a lot when I was starting out this company, maybe 10 years ago, was G.E.B., Gödel, Escher, Bach, and I like everything about it, first of all.
It's just beautifully written, and it's so old school and somewhat outdated a little bit.
But I think the ideas in it, about the fact that a few meaningless components can come together and create meaning that we can't even understand.
This emergent thing, I mean complexity, the whole science of complexity, and that beauty, intelligence, all the interesting things about this world emerge.
Yeah, and the Gödel theorems, and just thinking about how even from these formal systems something can be created that we can't quite yet understand.
And that from my romantic standpoint was always just, that is why it’s important to, maybe
I should try to work on these systems and try to build an AI.
Yes I’m not an engineer, yes I don’t really know how it works, but I think that something
comes out of it that’s pure poetry and I know a little bit about that.
Something magical comes out of it that we can’t quite put a finger on.
That’s why that book was really fundamental for me, just for, I don’t even know why, it
was just all about this little magic that happens.
So that’s one, probably the most important book for Replica was Carl Rogers on becoming
a person.
And that’s really, and so I think when I think about our company, it’s all about there’s
so many little magical things that happened over the course of working on it.
For instance, I mean the most famous chatbot that we learned about when we started working
on the company was Eliza, which was Weisenbaum, the MIT professor that built a chatbot that
would listen to you and be a therapist.
And I got really inspired to build Replika when I read Carl Rogers' On Becoming a Person.
And then I realized that ELIZA was mocking Carl Rogers; it was modeled on Rogerian therapy back in the day.
But I thought that Carl Rogers' ideas are very simple, yet they're maybe the most profound thing I've ever learned about human beings.
Before Carl Rogers, most therapy was about seeing what's wrong with people and trying to fix it, or showing them what's wrong with them.
And it was all built on the premise that all people are fundamentally flawed: we have this broken psyche, and therapy is just an instrument to shed some light on that.
And Carl Rogers was different, in that he finally said that what's very important for therapy to work is to create this therapeutic relationship where you believe fundamentally in the person's inclination toward positive growth, that everyone deep inside wants to grow positively and change.
And it’s super important to create this space and this therapeutic relationship where you
give unconditional positive regard, deep understanding, allowing someone else to be a separate person,
full acceptance.
And you also try to be as genuine as possible in it.
And then for him, that was his own journey of personal growth.
And that was back in the sixties.
And even in that book, coming from all those years ago, there's a mention that even machines can potentially do that.
And I always felt that, you know, creating the space is probably the most, the biggest
gift we can give to each other.
And that’s why the book was fundamental for me personally, because I felt I want to be
learning how to do that in my life.
And maybe I can scale it with, you know, with these AI systems and other people can get
access to that.
So I think Carl Rogers, it’s a pretty dry and a bit boring book, but I think the idea
is good.
Would you recommend others try to read it?
I do.
I think just for yourself, as a human, not as an AI, as a human, it is just... and for him, that was his own path of growing personally over the years, working with people like that.
And so it was him working on himself and growing, helping other people grow and growing through that.
And that's fundamentally what I believe in with our work: helping other people grow, growing ourselves, trying to build a company that's all built on those principles, you know, having a good time, allowing the people we work with to grow a little bit.
So these two books, and then I would throw in what we have in our office. When we started the company in Russia, we put up a neon sign in our office, because we thought that's the recipe for success: if we do that, we're definitely going to wake up as a multi billion dollar company.
It was the Ludwig Wittgenstein quote, the limits of my language are the limits of my
world.
What’s the quote?
The limits of my language are the limits of my world.
And I love the Tractatus; I think it's just a beautiful book by Wittgenstein.
Yeah.
And I would recommend that too, even though he himself didn't believe in it by the end of his lifetime and debunked these ideas.
But I remember once, in 2012, I think, or 2013, an engineer came, a friend of ours who worked with us and then went on to work at DeepMind, and he talked to us about word2vec.
And I saw that and I'm like, wow, you know, they wanted to translate language into some other representation, and it seems like somehow all of that, at some point, I think, will come into this one place.
Somehow it just all feels like different people think about similar ideas in different times
from absolutely different perspectives.
And that’s why I like these books.
The limits of our language are the limits of our world.
And we still have that neon sign, it’s very hard to work with this red light in your face.
I mean, on the Russian side of things, in terms of language, the limits of language being the limits of our world, you know, Russian is a beautiful language in some sense.
There's wit, there's humor, there's pain.
There's so much.
We don't have time to talk about it much today, but I'm going to Paris to talk to Dostoevsky and Tolstoy translators.
I think translation is this fascinating art, like art and engineering, it's such an interesting process.
But so, from the Replika perspective, what do you think about translation?
How difficult is it to create a deep, meaningful connection in Russian versus English?
How do the two languages translate?
You speak both?
Yeah.
I think we’re two different people in different languages.
Even I’m, you know, thinking about, there’s actually some research on that.
I looked into that at some point because I was fascinated by the fact that what I’m talking
about with, what I was talking about with my Russian therapist has nothing to do with
what I’m talking about with my English speaking therapist.
It’s two different lives, two different types of conversations, two different personas.
The main difference between the languages, between Russian and English, is that English is like a piano.
It's a limited number of keys, a lot of different keys, but not too many.
And Russian is like an organ or something.
It's just something gigantic with so many different keys, and so many different opportunities to screw up, and so many opportunities to do something completely tone deaf.
It is just a much harder language to use.
It has way too much flexibility and way too many tones.
What about the entirety of like World War II, communism, Stalin, the pain of the people
like having been deceived by the dream, like all the pain of like just the entirety of
it.
Is that in the language too?
Does that have to do?
Oh, for sure.
I mean, we have words that don’t have direct translation that to English that are very
much like we have, which is sort of like to hold a grudge or something, but it doesn’t
have, it doesn’t, you don’t need to have anyone to do it to you.
It’s just your state.
Yeah.
You just feel like that.
You feel betrayed by other people, basically, but it's not quite that, and you can't really translate it.
And I think that’s super important.
There are very many words that are very specific, explain the Russian being, and I think it
can only come from a nation that suffered so much and saw institutions fall time after
time after time and you know, what’s exciting, maybe not exciting, exciting the wrong word,
but what’s interesting about like my generation, my mom’s generation, my parents generation,
that we saw institutions fall two or three times in our lifetime and most Americans have
never seen them fall and they just think that they exist forever, which is really interesting,
but it’s definitely a country that suffered so much and it makes, unfortunately when I
go back and I, you know, hang out with my Russian friends, it makes people very cynical.
They stop believing in the future.
I hope that’s not going to be the case for so long or something’s going to change again,
but I think seeing institutions fall is a very traumatic experience.
That’s very interesting and what’s on 2020 is a very interesting, do you think a civilization
will collapse?
See, I’m a very practical person.
We’re speaking in English.
So like you said, you’re a different person in English and Russian.
So in Russian you might answer that differently, but in English, yeah.
I’m an optimist and I generally believe that there is all, you know, even although the
perspectives are grim, there’s always a place for a miracle.
I mean, it’s always been like that with my life.
So yeah, I've been incredibly lucky, and miracles happen all the time, with this company, with people I know, with everything around me.
And I didn't mention that book, but maybe In Search of the Miraculous, or In Search for Miraculous, or whatever the English translation is, is a good Russian book for everyone to read.
Yeah.
I mean, if you put good vibes, if you put love out there in the world, miracles somehow
happen.
Yeah.
I believe that too, or at least I believe that, I don’t know.
Let me ask the most absurd, final, ridiculous question of, we’ve talked about life a lot.
What do you think is the meaning of it all?
What’s the meaning of life?
I mean, my answer is probably going to be pretty cheesy.
But I think it's the state of love, once you feel it, in the way that we've discussed before.
I'm not talking about falling in love, where…
Just love.
To yourself, to other people, to something, to the world.
That state of bliss that we experience sometimes, whether through connection with ourselves,
with our people, with the technology, there’s something special about those moments.
So I would say, if anything, that's the only…
If it's not for that, then what else are we really doing all this for?
I don’t think there’s a better way to end it than talking about love.
Eugenia, I told you offline that there’s something about me that felt like this…
Talking to you, meeting you in person would be a turning point for my life.
I know that might sound weird to hear, but it was a huge honor to talk to you.
I hope we talk again.
Thank you so much for your time.
Thank you so much, Lex.
Thanks for listening to this conversation with Eugenia Kuida, and thank you to our sponsors,
DoorDash, Dollar Shave Club, and Cash App.
Click the sponsor links in the description to get a discount and to support this podcast.
If you enjoy this thing, subscribe on YouTube, review it with 5 stars on Apple Podcast, follow
on Spotify, support on Patreon, or connect with me on Twitter at Lex Friedman.
And now, let me leave you with some words from Carl Sagan.
The world is so exquisite with so much love and moral depth that there’s no reason to
deceive ourselves with pretty stories of which there’s little good evidence.
Far better, it seems to me, in our vulnerability is to look death in the eye and to be grateful
every day for the brief but magnificent opportunity that life provides.
Thank you for listening and hope to see you next time.