Lex Fridman Podcast - #4 - Yoshua Bengio: Deep Learning

What difference between biological neural networks and artificial neural networks

is most mysterious, captivating, and profound for you?

First of all, there’s so much we don’t know about biological neural networks,

and that’s very mysterious and captivating because maybe it holds the key to improving

artificial neural networks. One of the things I studied recently is something

that we don’t know how biological neural networks do, but that would be really useful for artificial ones:

the ability to do credit assignment through very long time spans. There are things that

we can in principle do with artificial neural nets, but it’s not very convenient and it’s

not biologically plausible. And this mismatch, I think this kind of mismatch

may be an interesting thing to study to, A, understand better how brains might do these

things because we don’t have good corresponding theories with artificial neural nets, and B,

maybe provide new ideas that we could explore about things that brains do differently and that

we could incorporate in artificial neural nets. So let’s break credit assignment up a little bit.

Yes. So what, it’s a beautifully technical term, but it could incorporate so many things. So is it

more on the RNN memory side, that thinking like that, or is it something about knowledge, building

up common sense knowledge over time? Or is it more in the reinforcement learning sense that you’re

picking up rewards over time for a particular, to achieve a certain kind of goal? So I was thinking

more about the first two meanings whereby we store all kinds of memories, episodic memories

in our brain, which we can access later in order to help us both infer causes of things that we

are observing now and assign credit to decisions or interpretations we came up with a while ago

when those memories were stored. And then we can change the way we would have reacted or interpreted

things in the past, and now that’s credit assignment used for learning.

So in which way do you think artificial neural networks, the current LSTMs, the current architectures,

are not able to capture that? Presumably you’re thinking of the very long term?

Yes. So the current nets are doing a fairly good job for sequences with dozens or

say hundreds of time steps. And then it gets harder and harder and depending on what you have

to remember and so on, as you consider longer durations. Whereas humans seem to be able to

do credit assignment through essentially arbitrary times, like I could remember something I did last

year. And then now because I see some new evidence, I’m going to change my mind about the way I was

thinking last year. And hopefully not make the same mistake again.
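
As a rough illustration of the long-range credit assignment problem being described, here is a minimal, hypothetical sketch (assuming a simple recall task and a small PyTorch LSTM, not anything specific from the conversation): the network must report the first token of a sequence after many filler steps, and the gradient that assigns credit for the final answer has to flow back through the whole sequence, which in practice gets harder as the span grows.

```python
# A minimal sketch (assumed setup, not from the conversation): an LSTM trained to
# report the very first token of a sequence after many filler steps. Credit for the
# final decision must flow back through the whole sequence, which is the long-range
# credit-assignment problem discussed above.
import torch
import torch.nn as nn

def make_batch(batch_size, seq_len, vocab=8):
    # First token is the "cue" to remember; the rest are distractors.
    x = torch.randint(0, vocab, (batch_size, seq_len))
    y = x[:, 0]                      # target: the token seen seq_len steps ago
    return x, y

class Recaller(nn.Module):
    def __init__(self, vocab=8, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1])   # predict from the final hidden state only

for seq_len in (10, 100, 1000):     # hypothetical lengths; longer spans are harder
    model = Recaller()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        x, y = make_batch(64, seq_len)
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"T={seq_len}: final training loss {loss.item():.3f}")
```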

I think a big part of that is probably forgetting. You’re only remembering the really important

things. It’s very efficient forgetting.

Yes. So there’s a selection of what we remember. And I think there are really cool connections to

higher level cognition here regarding consciousness and emotions,

so deciding what comes to consciousness and what gets stored in memory, which are not trivial either.

So you’ve been at the forefront there all along, showing some of the amazing things that neural

networks, deep neural networks, can do in the field of artificial intelligence, just broadly

in all kinds of applications. But we can talk about that forever. But what, in your view,

because we’re thinking towards the future, is the weakest aspect of the way deep neural networks

represent the world? What is that? What, in your view, is missing?

So current state of the art neural nets trained on large quantities of images or texts

have some level of understanding of, you know, what explains those data sets, but it’s very

basic, it’s very low level. And it’s not nearly as robust and abstract and general

as our understanding. Okay, so that doesn’t tell us how to fix things. But I think it encourages

us to think about how we can maybe train our neural nets differently, so that they would

focus, for example, on causal explanation, something that we don’t do currently with neural

net training. Also, one thing I’ll talk about in my talk this afternoon is the fact that

instead of learning separately from images and videos on one hand and from texts on the other

hand, we need to do a better job of jointly learning about language and about the world

to which it refers. So that, you know, both sides can help each other. We need to have good world

models in our neural nets for them to really understand sentences, which talk about what’s

going on in the world. And I think we need language input to help provide clues about

what high level concepts like semantic concepts should be represented at the top levels of our

neural nets. In fact, there is evidence that the purely unsupervised learning of representations

doesn’t give rise to high level representations that are as powerful as the ones we’re getting

from supervised learning. And so the clues we’re getting just with the labels, not even sentences,

are already very, very high level. And I think that’s a very important thing to keep in mind.

It’s already very powerful. Do you think that’s an architecture challenge or is it a data set challenge?

Neither. I’m tempted to just end it there. Can you elaborate slightly?

Of course, data sets and architectures are something you want to always play with. But

I think the crucial thing is more the training objectives, the training frameworks. For example,

going from passive observation of data to more active agents, which

learn, by intervening in the world, the relationships between causes and effects,

the sort of objective functions, which could be important to allow the highest level explanations

to rise from the learning, which I don’t think we have now, the kinds of objective functions,

which could be used to reward exploration, the right kind of exploration. So these kinds of

questions are neither in the data set nor in the architecture, but more in how we learn,

under what objectives and so on. Yeah, I’ve heard you mention in several contexts, the idea of sort

of the way children learn, they interact with objects in the world. And it seems fascinating

because in some sense, except with some cases in reinforcement learning, that idea

is not part of the learning process in artificial neural networks. So it’s almost like,

do you envision something like an objective function saying, you know what, if you

poke this object in this kind of way, it would be really helpful for me to further learn.

Right, right.

Sort of almost guiding some aspect of the learning.

Right, right, right. So I was talking to Rebecca Saxe just a few minutes ago,

and she was talking about lots and lots of evidence that infants seem to clearly pick

what interests them in a directed way. And so they’re not passive learners, they focus their

attention on aspects of the world which are most interesting, surprising in a non-trivial way.

That makes them change their theories of the world.

So that’s a fascinating view of the future progress. But on a more maybe boring question,

do you think going deeper and larger, so do you think just increasing the size of the things that

have been increasing a lot in the past few years, is going to be a big thing?

I think increasing the size of the things that have been increasing a lot in the past few years

will also make significant progress. So some of the representational issues that you mentioned,

they’re kind of shallow, in some sense.

Oh, shallow in the sense of abstraction.

In the sense of abstraction, they’re not getting some…

I don’t think that having more depth in the network, in the sense of, instead of 100 layers,

you’re going to have more layers, will be enough. I don’t think so. Is that obvious to you?

Yes. What is clear to me is that engineers and companies and labs and grad students will continue

to tune architectures and explore all kinds of tweaks to make the current state of the art

slightly, ever so slightly better. But I don’t think that’s going to be nearly enough. I think we need

changes in the way that we’re considering learning to achieve the goal that these learners actually

understand in a deep way the environment in which they are, you know, observing and acting.

But I guess I was trying to ask a question that’s more interesting than just more layers.

It’s basically, once you figure out a way to learn through interacting, how many parameters

it takes to store that information. So I think our brain is quite a bit bigger than most neural networks.

Right, right. Oh, I see what you mean. Oh, I’m with you there. So I agree that in order to

build neural nets with the kind of broad knowledge of the world that typical adult humans have,

probably the kind of computing power we have now is going to be insufficient.

So the good news is there are hardware companies building neural net chips. And so

it’s going to get better. However, the good news in a way, which is also bad news,

is that even our state of the art, deep learning methods fail to learn models that understand

even very simple environments, like some grid worlds that we have built.

Even these fairly simple environments, I mean, of course, if you train them with enough examples,

eventually they get it. But it’s just like, instead of what humans might need just

dozens of examples, these things will need millions for very, very, very simple tasks.

And so I think there’s an opportunity for academics who don’t have the kind of computing

power that, say, Google has to do really important and exciting research to advance

the state of the art in training frameworks, learning models, agent learning in even simple

environments that are synthetic, that seem trivial, but yet current machine learning fails on.
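
For a sense of scale, an environment of the kind being described can be only a few dozen lines of code; the sketch below is a generic, hypothetical grid world (not the specific environments built in the lab mentioned above), where an agent starting in one corner of a small grid must reach the opposite corner, and the open question is how few episodes a learner needs to master even this.

```python
# A hypothetical minimal grid world (illustrative only, not the specific
# environments referred to above): the agent starts in a corner of an N x N
# grid and must reach the opposite corner. Humans grasp such tasks from a
# handful of trials; current deep RL methods often need far more experience.
import random

class GridWorld:
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, size=5):
        self.size = size
        self.reset()

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        done = self.pos == (self.size - 1, self.size - 1)
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

# A random agent as a baseline: count how long it wanders before reaching the goal.
env = GridWorld(size=5)
state, done, steps = env.reset(), False, 0
while not done:
    state, reward, done = env.step(random.randrange(4))
    steps += 1
print(f"random agent reached the goal after {steps} steps")
```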

We talked about priors and common sense knowledge. It seems like

we humans take a lot of knowledge for granted. So what’s your view of these priors of forming

this broad view of the world, this accumulation of information and how we can teach neural networks

or learning systems to pick that knowledge up? So knowledge, for a while in artificial

intelligence, maybe in the 80s, like there was a time when knowledge representation, knowledge

acquisition, expert systems, I mean, symbolic AI, was a view, was an interesting problem set to

solve, and it was kind of put on hold a little bit, it seems like. Because it doesn’t work.

It doesn’t work. That’s right. But the goals of that remain important.

Yes. Remain important. And how do you think those goals can be addressed?

Right. So first of all, I believe that one reason why the classical expert systems approach failed

is because a lot of the knowledge we have, so you talked about common sense intuition,

there’s a lot of knowledge like this, which is not consciously accessible.

There are lots of decisions we’re taking that we can’t really explain, even if sometimes we make

up a story. And that knowledge is also necessary for machines to take good decisions. And that

knowledge is hard to codify in expert systems, rule based systems and classical AI formalism.

And there are other issues, of course, with the old AI, like not really good ways of handling

uncertainty. And, I would say, something more subtle, which we understand better now but I think still

isn’t appreciated enough in the minds of people: there’s something really powerful that comes from

distributed representations, the thing that really makes neural nets work so well.

And it’s hard to replicate that kind of power in a symbolic world. The knowledge in expert systems

and so on is nicely decomposed into like a bunch of rules. Whereas if you think about a neural net,

it’s the opposite. You have this big blob of parameters which work intensely together to

represent everything the network knows. And it’s not sufficiently factorized. It’s not

sufficiently factorized. And so I think this is one of the weaknesses of current neural nets,

that we have to take lessons from classical AI in order to bring in another kind of compositionality,

which is common in language, for example, and in these rules, but that isn’t so native to neural

nets. And on that line of thinking, disentangled representations. Yes. So let me connect with

disentangled representations, if you don’t mind. So for many years, I’ve thought,

and I still believe that it’s really important that we come up with learning algorithms,

either unsupervised or supervised or reinforcement, whatever, that build representations

in which the important factors, hopefully causal factors, are nicely separated and easy to pick up

from the representation. So that’s the idea of disentangled representations. It says transform

the data into a space where everything becomes easy. We can maybe just learn with linear models

about the things we care about. And I still think this is important, but I think this is missing out

on a very important ingredient, which classical AI systems can remind us of.

So let’s say we have these disentangled representations. You still need to learn about

the relationships between the variables, those high level semantic variables. They’re not going

to be independent. I mean, this is like too much of an assumption. They’re going to have some

interesting relationships that allow us to predict things in the future, to explain what happened

in the past. The kind of knowledge about those relationships in a classical AI system

is encoded in the rules. Like a rule is just like a little piece of knowledge that says,

oh, I have these two, three, four variables that are linked in this interesting way,

then I can say something about one or two of them given a couple of others, right?

In addition to disentangling the elements of the representation, which are like the variables

in a rule based system, you also need to disentangle the mechanisms that relate those

variables to each other. So like the rules. So the rules are neatly separated. Like each rule is,

you know, living on its own. And when I change a rule because I’m learning, it doesn’t need to

break other rules. Whereas current neural nets, for example, are very sensitive to what’s called

catastrophic forgetting, where after I’ve learned some things and then I learn new things,

they can destroy the old things that I had learned, right? If the knowledge was better

factorized and separated, disentangled, then you would avoid a lot of that.
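
To make the catastrophic forgetting point concrete, here is a small, hypothetical demonstration (a sketch, not from the conversation): a single logistic-regression model, one undifferentiated blob of parameters, is trained on task A, then fine-tuned on a task B with a conflicting decision boundary, and its accuracy on task A collapses because nothing in the model is factorized or protected.

```python
# A small, hypothetical demonstration of catastrophic forgetting (a sketch, not
# from the conversation): one logistic-regression "blob of parameters" is trained
# on task A, then fine-tuned on task B with a different decision boundary, and
# its accuracy on task A collapses because the knowledge is not factorized.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w, n=2000):
    X = rng.normal(size=(n, 2))
    y = (X @ true_w > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.1, epochs=200):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)           # gradient of cross-entropy loss
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0) == (y > 0.5)).mean()

task_a = make_task(np.array([1.0, 1.0]))     # boundary along x1 + x2 = 0
task_b = make_task(np.array([1.0, -1.0]))    # conflicting boundary x1 - x2 = 0

w = np.zeros(2)
w = train(w, *task_a)
print("after task A, accuracy on A:", accuracy(w, *task_a))   # near 1.0

w = train(w, *task_b)                         # keep training the SAME parameters
print("after task B, accuracy on A:", accuracy(w, *task_a))   # drops toward chance
print("after task B, accuracy on B:", accuracy(w, *task_b))   # near 1.0
```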

Now, you can’t do this in the sensory domain.

What do you mean by sensory domain?

Like in pixel space. But my idea is that when you project the data in the right semantic space,

it becomes possible to now represent this extra knowledge beyond the transformation from inputs

to representations, which is how representations act on each other and predict the future and so on

in a way that can be neatly disentangled. So now it’s the rules that are disentangled from each

other and not just the variables that are disentangled from each other.

And you draw a distinction between semantic space and pixel, like does there need to be

an architectural difference?

Well, yeah. So there’s the sensory space, like pixels, where everything is entangled.

The information, like the variables are completely interdependent in very complicated ways.

And also computation, like it’s not just the variables, it’s also how they are related to

each other that is all intertwined. But I’m hypothesizing that in the right high level

representation space, both the variables and how they relate to each other can be

disentangled. And that will provide a lot of generalization power.

Generalization power.

Yes.

The distribution of the test set is assumed to be the same as the distribution of the training set.

Right. This is where current machine learning is too weak. It doesn’t tell us anything, it’s

not able to tell us anything about how our neural nets, say, are going to generalize to

a new distribution. And, you know, people may think, well, but there’s nothing we can say

if we don’t know what the new distribution will be. The truth is humans are able to generalize

to new distributions.

Yeah. How are we able to do that?

Yeah. Because there is something, these new distributions, even though they could look

very different from the training distributions, they have things in common. So let me give you

a concrete example. You read a science fiction novel. The science fiction novel, maybe, you

know, brings you to some other planet where things look very different on the surface,

but it’s still the same laws of physics. And so you can read the book and you understand

what’s going on. So the distribution is very different. But because you can transport

a lot of the knowledge you had from Earth about the underlying cause and effect relationships

and physical mechanisms and all that, and maybe even social interactions, you can now

make sense of what is going on on this planet where, like, visually, for example,

things are totally different.

Taking that analogy further and distorting it, let’s enter a science fiction world of,

say, 2001: A Space Odyssey, with HAL, which is probably one of my favorite AI movies.

Me too.

And then there’s another one that a lot of people love that may be a little bit outside

of the AI community is Ex Machina. I don’t know if you’ve seen it.

Yes. Yes.

By the way, what are your views on that movie? Are you able to enjoy it?

There are things I like and things I hate.

So you could talk about that in the context of a question I want to ask, which is, there’s

quite a large community of people from different backgrounds, often outside of AI, who are concerned

about existential threat of artificial intelligence. You’ve seen this community

develop over time. You’ve seen it, you have a perspective. So what do you think is the best

way to talk about AI safety, to think about it, to have discourse about it within AI community

and outside and grounded in the fact that Ex Machina is one of the main sources of information

for the general public about AI?

So I think you’re putting it right. There’s a big difference between the sort of discussion

we ought to have within the AI community and the sort of discussion that really matters

in the general public. So I think the picture of Terminator and AI loose and killing people

and super intelligence that’s going to destroy us, whatever we try, isn’t really so useful

for the public discussion. Because for the public discussion, the things I believe really

matter are the short term and medium term, very likely negative impacts of AI on society,

whether it’s from security, like, you know, big brother scenarios with face recognition

or killer robots, or the impact on the job market, or concentration of power and discrimination,

all kinds of social issues, which could actually, some of them could really threaten democracy,

for example.

Just to clarify, when you said killer robots, you mean autonomous weapons, weapon systems.

Yes, I mean, that’s right.

So I think these short and medium term concerns should be important parts of the public debate.

Now, existential risk, for me, is a very unlikely consideration, but still worth academic investigation,

in the same way that you could say, should we study what could happen if a meteorite, you

know, came to Earth and destroyed it? So I think it’s very unlikely that this is going

to happen, or at least happen in a reasonable future. The sort of scenario of an AI getting loose

goes against my understanding of at least current machine learning and current neural

nets and so on. It’s not plausible to me. But of course, I don’t have a crystal ball

and who knows what AI will be 50 years from now. So I think it is worthwhile for scientists to

study those problems. It’s just not a pressing question as far as I’m concerned.

So before I continue down that line, I have a few questions there. But what do you like

and not like about Ex Machina as a movie? Because I actually watched it for the second

time and enjoyed it. I hated it the first time, and I enjoyed it quite a bit more the

second time when I sort of learned to accept certain pieces of it, see it as a concept

movie. What was your experience? What were your thoughts?

So the negative is the picture it paints of science is totally wrong. Science in general

and AI in particular. Science is not happening in some hidden place by some, you know, really

smart guy, one person. This is totally unrealistic. This is not how it happens. Even a team of

people in some isolated place will not make it. Science moves by small steps, thanks to

the collaboration and community of a large number of people interacting. And all the

scientists who are expert in their field kind of know what is going on, even in the industrial

labs. Information flows and leaks and so on. And the spirit of it is very different

from the way science is painted in this movie.

Yeah, let me ask on that point. It’s been the case to this point that kind of even if

the research happens inside Google or Facebook, inside companies, it still kind of comes out,

ideas come out. Do you think that will always be the case with AI? Is it possible to bottle

ideas to the point where there’s a set of breakthroughs that go completely undiscovered

by the general research community? Do you think that’s even possible?

It’s possible, but it’s unlikely. It’s not how it is done now. It’s not how I can foresee

it in the foreseeable future. But of course, I don’t have a crystal ball, and science isn’t

a crystal ball either. And so who knows? This is science fiction after all.

I think it’s ominous that the lights went off during that discussion.

So the problem, again, there’s one thing is the movie and you could imagine all kinds

of science fiction. The problem for me, maybe similar to the question about existential

risk, is that this kind of movie paints such a wrong picture of what is the actual science

and how it’s going on that it can have unfortunate effects on people’s understanding of current

science. And so that’s kind of sad.

There’s an important principle in research, which is diversity. So in other words, research

is exploration. Research is exploration in the space of ideas. And different people will

focus on different directions. And this is not just good, it’s essential. So I’m totally

fine with people exploring directions that are contrary to mine or look orthogonal to

mine. I am more than fine. I think it’s important. My friends and I don’t claim we have universal

truth, especially about what will happen in the future. Now that being

said, we have our intuitions and then we act according to where we think we

can be most useful and where society has the most to gain or to lose. We should have those

debates and not end up in a society where there’s only one voice and one way of thinking

and research money is spread out.

So disagreement is a sign of good research, good science.

Yes.

The idea of bias in the human sense of bias. How do you think about instilling in machine

learning something that’s aligned with human values in terms of bias? We intuitively as

human beings have a concept of what bias means, of what fundamental respect for other human

beings means. But how do we instill that into machine learning systems, do you think?

So I think there are short term things that are already happening and then there are long

term things that we need to do. In the short term, there are techniques that have been

proposed and I think will continue to be improved and maybe alternatives will come up to take

data sets in which we know there is bias, we can measure it. Pretty much any data set

where humans are being observed taking decisions will have some sort of bias, discrimination

against particular groups and so on.

And we can use machine learning techniques to try to build predictors, classifiers that

are going to be less biased. We can do it, for example, using adversarial methods to

make our systems less sensitive to these variables we should not be sensitive to.

So these are clear, well defined ways of trying to address the problem. Maybe they have weaknesses

and more research is needed and so on. But I think in fact they are sufficiently mature

that governments should start regulating companies where it matters, say like insurance companies,

so that they use those techniques. Because those techniques will probably reduce the

bias but at a cost. For example, maybe their predictions will be less accurate and so companies

will not do it until you force them.
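
The adversarial idea mentioned a moment ago can be sketched roughly as follows (a simplified, hypothetical example, not a description of any particular deployed system): a predictor is trained on its task while an adversary tries to recover the sensitive attribute from the predictor’s internal representation, and the predictor is additionally trained to make that adversary fail, which pushes it toward representations that carry less information about the attribute.

```python
# A simplified, hypothetical sketch of adversarial debiasing: an encoder and task
# head learn to predict the label y, while an adversary tries to predict a
# sensitive attribute s from the encoder's representation; the encoder is also
# trained to defeat the adversary, reducing how much information about s it keeps.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 4000, 10
s = torch.randint(0, 2, (n, 1)).float()                    # sensitive attribute
x = torch.randn(n, d) + 2.0 * s                            # features correlated with s
y = ((x[:, :1] + 0.5 * torch.randn(n, 1)) > 1.0).float()   # task label

encoder = nn.Sequential(nn.Linear(d, 16), nn.ReLU())
task_head = nn.Linear(16, 1)
adversary = nn.Linear(16, 1)

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                                   # hypothetical trade-off weight

for step in range(2000):
    z = encoder(x)

    # (1) adversary update: learn to predict s from a detached copy of the representation
    opt_adv.zero_grad()
    bce(adversary(z.detach()), s).backward()
    opt_adv.step()

    # (2) main update: predict y well while making the adversary's job as hard as possible
    opt_main.zero_grad()
    task_loss = bce(task_head(z), y)
    fool_loss = -bce(adversary(z), s)      # encoder is rewarded when the adversary errs
    (task_loss + lam * fool_loss).backward()
    opt_main.step()

with torch.no_grad():
    z = encoder(x)
    task_acc = ((task_head(z) > 0).float() == y).float().mean().item()
    adv_acc = ((adversary(z) > 0).float() == s).float().mean().item()
print(f"task accuracy {task_acc:.2f}, adversary accuracy on s {adv_acc:.2f} (0.5 would mean no leakage)")
```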

All right, so this is short term. Long term, I’m really interested in thinking how we can

instill moral values into computers. Obviously, this is not something we’ll achieve in the

next five or 10 years. How can we, you know, there’s already work in detecting emotions,

for example, in images, in sounds, in texts, and also studying how different agents interacting

in different ways may correspond to patterns of, say, injustice, which could trigger anger.

So these are things we can do in the medium term and eventually train computers to model,

for example, how humans react emotionally. I would say the simplest thing is unfair situations

which trigger anger. This is one of the most basic emotions that we share with other animals.

I think it’s quite feasible within the next few years that we can build systems that can

detect these kinds of things to the extent, unfortunately, that they understand enough

about the world around us, which is a long time away. But maybe we can initially do this

in virtual environments. So you can imagine a video game where agents interact in some

ways and then some situations trigger an emotion. I think we could train machines to detect

those situations and predict that the particular emotion will likely be felt if a human was

playing one of the characters.

You have shown excitement and done a lot of excellent work with unsupervised learning.

But there’s been a lot of success on the supervised learning side.

Yes, yes.

And one of the things I’m really passionate about is how humans and robots work together.

And in the context of supervised learning, that means the process of annotation. Do you

think about the problem of annotation put in a more interesting way as humans teaching

machines?

Yes.

Is there?

Yes. I think it’s an important subject. Reducing it to annotation may be useful for somebody

building a system tomorrow. But longer term, the process of teaching, I think, is something

that deserves a lot more attention from the machine learning community. So there are people

who have coined the term machine teaching. So what are good strategies for teaching a

learning agent? And can we design and train a system that is going to be a good teacher?

So in my group, we have a project called BabyAI, or the BabyAI game, where there is a game or scenario

where there’s a learning agent and a teaching agent. Presumably, the teaching agent would

eventually be a human. But we’re not there yet. And the role of the teacher is to use

its knowledge of the environment, which it can acquire using whatever way, brute force even,

to help the learner learn as quickly as possible. So the learner is going to try to learn by

itself, maybe using some exploration and whatever. But the teacher can choose, can have an influence

on the interaction with the learner, so as to guide the learner, maybe teach it the things

that the learner has most trouble with, or just at the boundary between what it knows

and doesn’t know, and so on. So there’s a tradition of these kinds of ideas from other

fields, like tutoring systems, for example, in AI. And of course, people in the humanities

have been thinking about these questions. But I think it’s time that machine learning

people look at this, because in the future, we’ll have more and more human machine interaction

with the human in the loop. And I think understanding how to make this work better, all the problems

around that are very interesting and not sufficiently addressed. You’ve done a lot of work with

language, too. What aspect of the traditionally formulated Turing test, a test of natural

language understanding and generation, in your eyes is the most difficult part of conversation?

What in your eyes is the hardest part of conversation to solve for machines? So I would say it’s

everything having to do with the non-linguistic knowledge, which implicitly you need in order

to make sense of sentences, things like the Winograd schemas. So these are sentences that are

semantically ambiguous. In other words, you need to understand enough about the world

in order to really interpret properly those sentences. I think these are interesting challenges

for machine learning, because they point in the direction of building systems that both

understand how the world works and the causal relationships in the world, and associate that

knowledge with how to express it in language, either for reading or writing.
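
For readers unfamiliar with the term, a Winograd schema is a pair of sentences that differ by a single word, where that word flips which noun an ambiguous pronoun refers to; resolving the pronoun takes knowledge about the world rather than grammar. The classic example, written out as a tiny data structure (the example is Winograd’s; the code is just illustrative):

```python
# The classic Winograd schema: changing a single word flips the referent of
# "they", and only world knowledge about councilmen and demonstrators tells you which.
schema = {
    "template": "The city councilmen refused the demonstrators a permit because they {} violence.",
    "variants": {
        "feared": "they = the city councilmen",
        "advocated": "they = the demonstrators",
    },
}

for word, resolution in schema["variants"].items():
    print(schema["template"].format(word), "->", resolution)
```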

You speak French?

Yes, it’s my mother tongue.

It’s one of the Romance languages. Do you think passing the Turing test and all the

underlying challenges we just mentioned depend on language? Do you think it might be easier

in French than it is in English, or is independent of language?

I think it’s independent of language. I would like to build systems that can use the same

principles, the same learning mechanisms to learn from human agents, whatever their language.

Well, certainly we humans can talk more beautifully and smoothly in poetry. Being Russian originally,

I know poetry in Russian is maybe better able to convey complex ideas than it is in English.

But maybe I’m showing my bias and some people could say that about French. But of course,

the point, ultimately, is that our human brain is able to utilize any of those languages,

to use them as tools to convey meaning.

Yeah, of course, there are differences between languages, and maybe some are slightly better

at some things, but in the grand scheme of things, where we’re trying to understand how

the brain works and language and so on, I think these differences are minute.

So you’ve lived perhaps through an AI winter of sorts?

Yes.

How did you stay warm and continue your research?

Stay warm with friends.

With friends. Okay, so it’s important to have friends. And what have you learned from the

experience?

Listen to your inner voice. Don’t, you know, be trying to just please the crowds and the

fashion. And if you have a strong intuition about something that is not contradicted by

actual evidence, go for it. I mean, it could be contradicted by people.

Not your own instinct, based on everything you’ve learned?

Of course, you have to adapt your beliefs when your experiments contradict those beliefs.

But otherwise you have to stick to your beliefs. It’s what allowed me to go through those years.

It’s what allowed me to persist in directions that, you know, whatever other

people think, took time to mature and bear fruit.

So the history of AI is, of course, marked with technical breakthroughs,

but it’s also marked with these seminal events that capture the imagination of the community.

Most recent, I would say, AlphaGo beating the world champion human Go player was one

of those moments. What do you think the next such moment might be?

Okay, so first of all, I think that these so called seminal events are overrated. As

I said, science really moves by small steps. Now what happens is you make one more small

step and it’s like the drop that, you know, that fills the bucket and then you have drastic

consequences because now you’re able to do something you were not able to do before.

Or now, say, the cost of building some device or solving a problem becomes cheaper than

what existed and you have a new market that opens up, right? So especially in the world

of commerce and applications, the impact of a small scientific progress could be huge.

But in the science itself, I think it’s very, very gradual.

And where are these steps being taken now? So there’s unsupervised learning.

So if I look at one trend that I like in my community, so for example, at Mila, my institute,

what are the two hottest topics? GANs and reinforcement learning. Even though in Montreal

in particular, reinforcement learning was something pretty much absent just two or three

years ago. So there’s really a big interest from students and there’s a big interest from

people like me. So I would say this is something where we’re going to see more progress, even

though it hasn’t yet provided much in terms of actual industrial fallout. Like even though

there’s AlphaGo, there’s no, like Google is not making money on this right now. But I

think over the long term, this is really, really important for many reasons.

So in other words, I would say reinforcement learning, or maybe more generally agent learning,

because it doesn’t have to be with rewards. It could be in all kinds of ways that an agent

is learning about its environment.

Now reinforcement learning you’re excited about, do you think GANs could provide something,

at the moment? Well, GANs or other generative models, I believe, will be crucial ingredients

in building agents that can understand the world. A lot of the successes in reinforcement

learning in the past have been with policy gradient, where you just learn a policy, you

don’t actually learn a model of the world. But there are lots of issues with that. And

we don’t know how to do model based RL right now. But I think this is where we have to

go in order to build models that can generalize faster and better, like to new distributions,

that capture, to some extent at least, the underlying causal mechanisms in the world.
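
To ground the distinction being drawn here, a minimal policy-gradient (REINFORCE) sketch on a toy two-armed bandit, purely hypothetical and not from the conversation: the agent nudges its policy parameters toward actions that happened to pay off, and at no point does it build a model of how the environment produces those rewards, which is exactly what a model-based approach would add.

```python
# A minimal REINFORCE sketch on a two-armed bandit (hypothetical illustration):
# the policy improves directly from sampled rewards; no model of the
# environment's dynamics is learned anywhere.
import numpy as np

rng = np.random.default_rng(0)
true_reward_prob = np.array([0.2, 0.8])   # arm 1 pays off more often; unknown to the agent
logits = np.zeros(2)                      # the policy's parameters
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(2000):
    probs = softmax(logits)
    action = rng.choice(2, p=probs)
    reward = float(rng.random() < true_reward_prob[action])

    # REINFORCE: gradient of log pi(action) w.r.t. the logits is one_hot(action) - probs
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    logits += lr * reward * grad_log_pi

print("learned action probabilities:", softmax(logits))   # should strongly favor arm 1
```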

Last question. What made you fall in love with artificial intelligence? If you look

back, what was the first moment in your life when you were fascinated by either the human

mind or the artificial mind?

You know, when I was an adolescent, I was reading a lot. And then I started reading

science fiction.

There you go.

That’s it. That’s where I got hooked. And then, you know, I had one of the first personal

computers and I got hooked on programming. And so it just, you know,

Start with fiction and then make it a reality.

That’s right.

Yoshua, thank you so much for talking to me.

My pleasure.
