[MUSIC PLAYING]
PRABHAKAR RAGHAVAN: Hello, everyone.
Bonjour.
Welcome.
Before we start, I’d like to join
[? Mack ?] in acknowledging the tragic loss of life
and the widespread destruction from the earthquakes in Turkey
and Syria.
Our hearts are with the people there.
Now, we're here in France, the birthplace of so many giants of science and mathematics: Blaise Pascal, Pierre de Fermat, Joseph Fourier, to name just a few on whose shoulders computer scientists stand today.
And for those on the livestream, we’re
coming to you from Google Paris, home of one of our premier AI
research centers and less than 5 kilometers
from the final resting place of Pascal,
my favorite mathematician.
What a fitting setting to talk about the next frontier
for our information products and how AI is powering the future.
The very first founder’s letter said that our goal at Google
is to “significantly improve the lives of as many people as
possible.”
That’s why our products have a singular focus,
to be helpful to you in moments big and small.
Think about Google Search, which will celebrate its 25th birthday this year.
Search was built on breakthroughs in language understanding.
That’s why we can take a complex, conversational query,
like delicious-looking, flaky French pastry
in the shape of a heart, and help you identify exactly
what you’re looking for.
But we didn’t stop there.
Through our ongoing investments in AI,
we can now understand information in its many forms,
from language to images and videos, even the real world.
With this deeper understanding, we’re
moving beyond the traditional notion of search
to help you make sense of information in new ways.
So now you can simply take a picture with Lens
to instantly learn that heart-shaped pastry is
a palmier cookie.
But advancements in AI are also why
if you need to fix your bike chain,
you can get directed to the exact point in a video that’s
relevant to you, like when they’re showing you
how to put the chain back on.
If you’re shopping for a new accent chair,
you can see it from all angles right on Search in 3D
and place it in your living room with AR, augmented reality,
to see how it looks.
Or if you pop out of the metro in an unfamiliar city, you can find arrows from Google Maps overlaid on the real world, pointing you to walk in the right direction.
All these examples are a far cry from the early days of Search.
But, as we always say, Search will never be a solved problem.
Although we are almost 25 years in,
Search is still our biggest moonshot.
The thing is, that moon keeps moving.
The perfect search remains elusive
because two things are constantly changing, first,
how people expect to engage with information naturally
and intuitively, and second, how technology
can empower that experience.
And so we're creating new Search experiences that work more like our minds, that reflect how we as people naturally make sense of the world.
As we enter this new era of Search,
you’ll be able to understand information,
no matter what language it originated in.
Search any way and anywhere, be it on your screen or out exploring the real world.
And express yourself and unlock your creativity in new ways.
Let’s start with understanding information.
We’ve seen time and time again that access to information
empowers people.
But for centuries, information was largely
confined to the language it was created or spoken in
and only accessible to people who understand that language.
With Google Translate, we can break down
language barriers and unlock information, regardless
of the language of its origin.
Over a billion people around the world
today use Translate across 133 languages
to understand conversations, online information,
and the real world.
For example, Translate has been a lifeline
to help those displaced from Ukraine
adjust to daily life in new countries.
In the war's early days, Google Translate queries between Ukrainian and Polish, German, and other European languages grew as Ukrainians seeking refuge turned to it for critical information in their own language.
We recently added 33 new languages
to Translate’s offline mode, including Corsican, Latin,
and Yiddish, to name just a few.
So even if you’re somewhere without access to the internet,
you’ll get the translation help you need.
And soon it will bring you a richer, more intuitive way
to translate words that have multiple meanings
and translations.
So whether you’re trying to buy a new novel
or celebrate a novel idea, you’ll
have the context you need to use the right turn of phrase.
We’ll begin rolling this out in several languages in the coming
weeks.
But there’s still more we can do to bridge language divides.
To bring the power of translate to even more languages,
we use zero-shot machine translation, an advanced AI
technique that learns to translate into another language
without ever seeing translation pairs.
Thanks to zero-shot machine translation,
we’ve added two dozen new languages
to Translate this past year.
In total, over 300 million people
speak these newly added languages.
That’s roughly the equivalent of bringing translation
to the entire United States.
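To make the underlying idea a bit more concrete: in a single multilingual translation model, the target language is just a control token the decoder is asked to start with, and that is the same mechanism zero-shot systems lean on for language pairs they were never explicitly trained on. Below is a minimal sketch using the open-source M2M100 model from Hugging Face as a stand-in; it is not Google's zero-shot system, and the model name and languages are purely illustrative.

```python
# Minimal sketch: one multilingual model serving many translation directions,
# selected by a target-language token (a stand-in for Google's own system).
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"  # source language of the input text
encoded = tokenizer("A flaky French pastry in the shape of a heart.", return_tensors="pt")

# forced_bos_token_id tells the decoder which language to produce;
# swapping this single token is all it takes to change the output language.
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

Because every translation direction shares one set of parameters, knowledge learned from high-resource pairs can transfer to directions with little or no parallel data, which is what makes adding new languages feasible.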
While language is at the heart of how we communicate
as people, another important way we make sense of information
is visually.
As we are fond of saying, your camera is the next keyboard.
That’s why back in 2017, we redefined
what it means to search by introducing Lens
so you can search what you see with your camera or photos.
We’ve since brought Lens directly to the Search Bar.
And we’ve continued to bring you new capabilities,
like shopping within an image and step-by-step homework help.
I’m excited to announce that we’ve just
reached a major new milestone.
People now use Lens more than 10 billion times a month.
This signals that visual search has moved from a novelty
to reality.
And as we predicted, the age of visual search is here.
In the context of translation, understanding isn’t just
about the languages we use.
It’s also about the visuals we see.
Often it's the words together with their context, like the background images, that create meaning.
And so in Lens, a new advancement
helps you translate the whole picture, not just
the text in it.
Before, when translating text in an image,
we’d block part of the background.
Now, instead of covering the text,
we erase it, recreate the pixels underneath
with an AI-generated background, and then overlay
the translated text back on top of the image,
all as if it was part of the original picture.
I’m pleased to share that this is now rolling out globally
on Android mobile.
So you can use Lens to start translating text in context.
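For readers curious about the mechanics, here is a rough sketch of that erase-and-overlay flow under simplifying assumptions: it uses classical OpenCV inpainting in place of the AI-generated background Lens produces, and it assumes you already have a mask marking the original text pixels plus the translated string. The file names are placeholders.

```python
import cv2
import numpy as np

def translate_in_context(image_path, text_mask_path, translated_text, out_path):
    """Erase the original text, reconstruct the background underneath,
    then overlay the translated text roughly where the original was."""
    img = cv2.imread(image_path)
    # Mask is white (255) wherever the original text pixels are.
    mask = cv2.imread(text_mask_path, cv2.IMREAD_GRAYSCALE)

    # Step 1: remove the text and fill the hole from surrounding pixels.
    # (Lens uses a generative model here; classical inpainting is a stand-in.)
    background = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)

    # Step 2: draw the translated text back over the reconstructed region.
    ys, xs = np.nonzero(mask)
    origin = (int(xs.min()), int(ys.max()))  # text baseline near the old text
    cv2.putText(background, translated_text, origin,
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (20, 20, 20), 2, cv2.LINE_AA)
    cv2.imwrite(out_path, background)

translate_in_context("sign.jpg", "sign_text_mask.png", "Fresh pastries", "sign_translated.jpg")
```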
As you can see with Lens, we want
to connect you to the world’s information one
visual at a time.
We’re continuing to build upon these capabilities.
So I’ll now turn to Liz to share more.
Liz?
[MUSIC PLAYING]
LIZ REID: Thanks, Prabhakar.
You can already use Lens to search
from your camera or photos right from the Search Bar.
But we’re introducing a major update
to help you search what’s on your mobile screen.
In the coming months, you’ll be able to use
Lens to search what you see in photos or videos
across the websites and apps you know and love on Android.
For example, let’s say you get a message from your friend who
sent a video exploring Paris.
You’d like to learn what the landmark is
that you see in the video.
So you long press the power button on your phone,
bring up Google Assistant, and tap Search Screen.
Assistant connects you to Lens, which identifies it
as Luxembourg Palace.
And you can tap to learn more.
Pretty awesome, huh?
Think about it like this.
With Lens, if you can see it, you can search it.
As Prabhakar touched upon, sometimes
it’s the combination of words and images
that communicate meaning.
That’s why last year we introduced Multisearch in Lens.
With Multisearch, you can search with a picture and text together, opening up entirely new ways to express yourself.
Say you see a stylish chair but you want it in a more muted color to match your style.
You can use Multisearch to find it in beige or another color of your choosing.
Or you spot a floral-pattern shirt but you want it in bleu or rouge instead.
You can use Multisearch for that, too.
Let’s see how that works with a live demo.
We are missing the–
we’re missing the phone.
[LAUGHS] We’ll have to–
we have no– we’ll move on.
We can’t find the phone.
Sorry.
We’ll do one later in the special Q&A.
OK, so what you can do is spot a cool pattern on the notebook, then swipe up to see the text and search in the Search Box, letting you find something like a rock if you just typed in [INAUDIBLE].
Or you can find similar wallpaper.
This unique ability allows us to mix modalities, like images and text.
And it opens up a whole world of possibilities.
And you can imagine a future where even more modalities
are at play.
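One simple way to picture how an image and a text refinement can be combined into a single query is with a joint image-text embedding space. The sketch below uses the open CLIP model as a stand-in for whatever Google runs in production, adds the image and text vectors together, and ranks a tiny made-up catalog by similarity; the file names are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path):
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        v = model.get_image_features(**inputs)
    return v / v.norm(dim=-1, keepdim=True)

def embed_text(text):
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        v = model.get_text_features(**inputs)
    return v / v.norm(dim=-1, keepdim=True)

# Query: a photo of a chair plus the text refinement "beige".
query = embed_image("chair_photo.jpg") + embed_text("beige")
query = query / query.norm(dim=-1, keepdim=True)

# Tiny stand-in catalog; a real system would search millions of items.
catalog = {name: embed_image(name) for name in ["chair_a.jpg", "chair_b.jpg", "chair_c.jpg"]}
scores = {name: float(query @ vec.T) for name, vec in catalog.items()}
print(max(scores, key=scores.get))  # best-matching catalog item
```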
I’m excited to share that Multisearch is now officially
live globally on mobile.
And that means Multisearch is now available in all of the more than 70 languages Lens supports around the world.
We’ve taken Multisearch a step further
by adding the ability to search locally on mobile in the US.
You can take a picture or screenshot of a food dish
or item and add Near Me to find where
to get it nearby from the millions of businesses
on Google.
In the next few months, we'll bring Multisearch Near Me to all the languages and countries where Lens is available.
So you’ll be able to use Near Me if you
want to support a neighborhood business
or if you just need to pick something up right away.
There are also times when you’re already searching
and you find something that just catches your eye
and it inspires you.
So in the next few months, you’ll
be able to use Multisearch globally
for any image you see on the Search results page on mobile.
Once you start using Multisearch, it's striking how natural it feels to be able to use multiple senses to search.
I hope you give it a try.
And with that, back to Prabhakar.
[APPLAUSE]
PRABHAKAR RAGHAVAN: Thanks, Liz.
And we’ll have to figure out who stole your phone.
So far today, we’ve talked about how
AI is helping us more deeply understand
the world’s information so we can help
you access it more naturally.
But we only scratched the surface
of what’s possible with AI.
We’ve long been pioneers in this space,
not just in our research, but also
in how we bring those breakthroughs
to the world and our products in a responsible way.
We’ve made significant contributions
to the scientific community, like developing
the Transformer, which set the stage for much
of the generative AI activity we see today.
And we’re committed to continuing
to bring these technologies to the world in a responsible way
that benefits everyone.
This is the journey we’ve been on with large language
models, which can make engaging with technology
more natural and conversational.
Back at I/O in 2021, we unveiled our LaMDA AI model,
a breakthrough in conversational technology.
Next, we’re bringing LaMDA to an experimental conversational AI
service, which we fondly call Bard.
You’ll be able to interact with Bard to explore complex topics,
collaborate in real time, and get creative new ideas.
For example, let’s say you’re in the market for a new car, one
that’s a good fit for your family.
Bard can help you think through different angles to consider, from budget to safety and more, and help you simplify and make sense of them.
Bard’s suggestion to consider fuel type
might spark your curiosity.
So you can ask it to explain the pros
and cons of buying an electric car and get helpful insights.
We all know that once you buy a new car,
you’ll have to plan a road trip.
Bard can help you plan your road trip
so you can take your new car out for a spin.
You might ask Bard to help you find
scenic routes, interesting places to stop along the way,
and fun things to do when you and your family
get to your destination.
Bard seeks to combine the breadth of the world’s
knowledge with the power, intelligence, and creativity
of large language models.
It draws on information from the web to provide fresh, high-quality responses.
We are releasing Bard initially with our lightweight model
version of LaMDA.
This much smaller model needs significantly less computing
power, which means we’ll be able to scale it to more users
and get more feedback.
We just took our next big step by opening Bard up
to trusted testers this week.
We'll continue to use feedback from internal and external testing to make sure it meets our high bar for quality, safety, and groundedness before we launch it more broadly.
Human curiosity is endless.
And for many years, we’ve helped remove roadblocks
to information so you can follow your curiosity wherever
it takes you, from learning more about a topic to understanding
a variety of viewpoints.
People often turn to Google for quick, factual answers,
like what is a constellation?
Already today, we give you fast answers
for straightforward queries like this.
But for many questions, there’s no one right answer,
what we call NORA queries, questions like,
what are the best constellations to look for when stargazing?
For questions like those, you probably
want to explore a diverse range of opinions or perspectives
and be connected to the expansive wisdom of the web.
That’s why we’re bringing the magic of generative AI
directly to your search results.
So soon, if you ask, what are the best constellations to look for when stargazing, new generative AI features will help us organize complex information and multiple viewpoints right in your search results.
With this, you’ll be able to quickly understand
the big picture and then go on to explore different angles.
So say this new information on constellations
piques your interest.
You can dig deeper, for instance,
to learn what time of year is best to see them and explore
further on the web.
Open access to information is core to our mission.
And we know people seek authentic voices
and diverse perspectives.
As we scale new generative AI features like this in our search results, we continue to prioritize approaches that allow us to send valuable traffic to a wide range of creators and support a healthy, open web.
In fact, we've sent more traffic to the web each year than the year prior.
The potential for generative AI goes
far beyond language and text.
As we mentioned earlier, one of the most natural ways people
engage with information is visually.
With generative AI, we can already
automate 360-degree spins of sneakers
from just a handful of still photos, something
that would have previously required merchants
to use hundreds of product photos and costly technology.
As we look ahead, you can imagine
how generative AI will enable people
to interact with visual information in entirely
new ways.
It might help a local baker collaborate on a cake design with a client or a toymaker dream up a new creation.
It might help someone envision what their kitchen would look like with green cabinets instead of wood, or describe and find the perfect complementary pocket square to match a new blazer.
In our quest to make Search more natural and intuitive, we've gone from enabling you to search with text, to voice, to images, to combinations of modalities, like you saw with Multisearch that Liz talked about today.
As we continue to bring generative AI technologies
into our products, the only limit to Search
will be your imagination.
Beyond our own products, it’s important to make
it easy, safe, and scalable for others
to benefit from these advances.
Next month, we'll start onboarding developers, creators, and enterprises so they can try a generative language API, initially powered by LaMDA, with a range of models to follow.
Over time, we'll create a suite of tools and APIs to make it easy for others to build applications with AI.
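As a purely hypothetical illustration of what calling such a generative language API might look like, here is a sketch with an invented endpoint, field names, and authentication; none of these details were announced, so treat everything below as assumption.

```python
import requests

# Hypothetical endpoint and request shape, purely for illustration;
# the real API's URL, auth scheme, and fields were not specified at this event.
API_URL = "https://example.googleapis.com/v1/models/lamda-text:generate"
API_KEY = "YOUR_API_KEY"

def generate(prompt, temperature=0.7, max_tokens=256):
    payload = {
        "prompt": prompt,
        "temperature": temperature,      # higher values give more varied wording
        "maxOutputTokens": max_tokens,   # cap on the length of the response
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["candidates"][0]["output"]

print(generate("Suggest three angles to consider when buying a family car."))
```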
From Bard to the new AI-powered features in Search, to image generation, APIs, and beyond, when it comes to AI, it's critical that we bring these experiences, rooted in these models, to the world responsibly.
That’s why we’ve been focusing on responsible AI
since the very beginning.
We were one of the first companies
to articulate AI principles.
We are also embracing the opportunity
to work with creative communities and partners
to develop these tools.
AI will be the most profound way to expand access to high-quality information and improve the lives of people around the world, and we're committed to setting a high standard for how we bring it to people in a way that's both bold and responsible.
So far, you’ve seen how we are applying state-of-the-art AI
technologies to help you understand
the world’s information across languages and modalities.
AI is also making it far more natural
to make sense of and explore the real world, like with Google
Maps.
Over to Chris to share more.
Come on up, Chris.
[MUSIC PLAYING]
CHRIS PHILLIPS: Thanks, Prabhakar.
For 18 years, Google Maps has transformed how
people make sense of the world.
It’s a valuable tool for over 1 billion people,
helping them avoid traffic jams on the way to work,
find restaurants in a new city, and so much more.
And the latest advancements in AI and computer vision
are powering the next generation of Google Maps,
making it more immersive and sustainable than ever before.
Let me show you what I mean.
Before Google Maps, getting directions
meant physically printing them out on a piece of paper.
But Google Maps reimagined what a map could be,
bringing live traffic and helpful information
about places right to your phone.
Now we’re transforming Google Maps once again,
evolving our 2D map into a multidimensional view
of the real world that comes alive,
starting with Immersive View.
Immersive View is a brand-new way
to explore that’s far more natural and intuitive.
It uses AI to fuse billions of Street View and aerial images
to create a rich digital model of the world,
letting you truly experience a place before you step inside.
Let’s take a look at the Rijksmuseum in Amsterdam.
If you're considering a visit, you can virtually soar over the building, find the entrances, and get a sense of what's in the area.
With the Time Slider, you can see what it looks like at different times of the day and what the weather will be like, so you'll know what to expect when you visit.
To help you avoid crowds, we want to point out
areas that tend to be busy.
So you have all the information you need to confidently make
a decision about where to go.
If you’re hungry, you can explore different restaurants
in the neighborhood.
You can even glide down the street, peek inside, and understand the vibe before you book a reservation.
This stunning, photorealistic indoor view
is powered by neural radiance fields.
It’s an advanced AI technique that
uses 2D images to generate a highly accurate 3D
representation that recreates the entire context of a place,
including its lighting, the texture of materials,
and what’s in the background.
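At the heart of a neural radiance field is a simple volume-rendering rule: march along each camera ray, ask a neural network for a density and a color at every sample point, and blend the samples by how much light survives to reach each one. Here is a minimal numpy sketch of just that blending step, with made-up densities and colors standing in for the network's outputs.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite color along one camera ray (NeRF-style quadrature).

    densities: (N,)   volume density sigma_i at each sample
    colors:    (N, 3) RGB predicted at each sample
    deltas:    (N,)   spacing between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach sample i.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 4 samples along one ray; in a real NeRF these values
# come from an MLP evaluated at each 3D sample position.
sigma = np.array([0.0, 0.5, 3.0, 0.1])
rgb = np.array([[0.9, 0.9, 0.9], [0.8, 0.6, 0.4], [0.6, 0.3, 0.2], [0.1, 0.1, 0.1]])
delta = np.full(4, 0.25)
print(render_ray(sigma, rgb, delta))
```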
You can also see if a restaurant's lighting is good for a date night or if the outdoor view at a cafe is the right place for lunch with friends.
Immersive View represents a completely new way
to interact with the map, using all the detailed information
in Google Maps today and visualizing it
in a more intuitive way.
We're excited that Immersive View starts rolling out today in London, Los Angeles, New York, San Francisco, and Tokyo.
And we’re bringing it to more European cities,
like Amsterdam, Dublin, Florence, and Venice
in the coming months.
Immersive View is just one example
of how artificial intelligence is powering a more
visual and intuitive map.
It also helps us reimagine how you find places
when you’re on the go.
You heard Prabhakar talk about how
your camera’s the new keyboard.
And that’s also true for the map.
Search with Live View uses AI paired with augmented reality
to help you visually find things nearby,
like ATMs, restaurants, and transit hubs
just by lifting up your phone.
We’ve recently launched Search with Live
View in several cities, including here in Paris.
In the coming months, we’ll start
expanding it to more places, like Barcelona, Dublin,
and Madrid.
Let’s head outside to where Rachel
will show us how it works.
Over to you, Rachel.
RACHEL: Thanks, Chris.
I’m out here scoping out the neighborhood.
Whenever I come to a new city, I’m always
on the hunt for great coffee.
So let’s see what I can find.
Tapping on the Camera icon in the Search Bar,
I’m able to see coffee shops, as well
as other categories of places, like restaurants, bars,
and stores.
I can even see places that are out of my field of view.
So I’m really able to get a sense of what this neighborhood
has to offer at a glance.
But let’s look at coffee shops specifically because I really
need some caffeine.
All right, so it looks like we have a few good coffee
options right around here.
I’m able to see if these places are open,
if they’re busy right now, and if they’re highly rated.
This one looks pretty good.
So I’m going to tap on it to learn more.
All right, this looks pretty good.
It has a high star rating.
This looks really tasty and cute.
All right, and it’s not too busy right now.
So I’m going to head over there and grab an espresso.
Back to you, Chris.
CHRIS PHILLIPS: Wow.
Thanks, Rachel.
[APPLAUSE]
As you can see, pairing our AI with AR
is transforming how we interact with the world.
Augmented reality can be especially helpful
when navigating tricky places indoors,
like airports, train stations, and shopping centers.
We launched Indoor Live View in select cities
to help you do just that.
It uses AR arrows to help you find things like the nearest elevators, baggage claim, and food courts.
Today we’re excited to announce that we’re
embarking on the largest expansion of Indoor Live
View to date.
We’re bringing it to 1,000 new venues in cities like Berlin,
London, New York, Tokyo, and right
here in Paris in the coming months.
Today, you’ve seen how the future of maps
is becoming more visual and immersive.
But we’re also making it more sustainable.
It's all about helping people make the sustainable choice the easy choice.
We recently launched eco-friendly routing in Europe to help you choose the most fuel-efficient or energy-efficient route to your destination, whether you drive a petrol, diesel, electric, or hybrid vehicle.
And as we’re seeing more drivers embrace electric vehicles,
we’re launching new Maps features for EVs
with Google built in to make sure you have enough charge,
no matter where you’re headed.
First, to help alleviate range anxiety,
we’ll use AI to suggest the best charging
stop, whether you’re taking a road trip
or just running errands nearby.
We’ll factor in traffic, charge level, and the energy
consumption of your trip.
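As a toy illustration of that kind of trade-off (not the Maps implementation), the sketch below picks a charging stop from a made-up list by combining remaining charge, per-kilometer energy use, and stop time; real routing would also fold in live traffic, elevation, and charger availability.

```python
from dataclasses import dataclass

@dataclass
class Charger:
    name: str
    distance_km: float           # distance from the start along the route
    wait_plus_charge_min: float  # estimated queue time plus charging time

def pick_charging_stop(chargers, battery_kwh, kwh_per_km, reserve_kwh=5.0):
    """Pick the charger that lets us drive farthest without dropping below
    a reserve, breaking ties by the shortest stop time (illustrative only)."""
    usable_kwh = battery_kwh - reserve_kwh
    max_range_km = usable_kwh / kwh_per_km
    reachable = [c for c in chargers if c.distance_km <= max_range_km]
    if not reachable:
        return None  # no safe stop on this route; the trip needs replanning
    return max(reachable, key=lambda c: (c.distance_km, -c.wait_plus_charge_min))

stops = [
    Charger("Supermarket fast charger", 80, 35),
    Charger("Highway station", 140, 25),
    Charger("City garage", 150, 60),
]
print(pick_charging_stop(stops, battery_kwh=40, kwh_per_km=0.18))
```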
If you’re in a rush, we’ll help you find stations
where you can charge your car quickly with our new very
fast charging filter.
For many cars, this can give you enough power
to fill up and get back on the road in less than 40 minutes.
Lastly, we’re making it easier to see
when places like supermarkets have charging stations
on site with a new EV icon.
So if you’re on your way to pick up groceries,
you can choose a store that also lets you charge your car.
Look out for these Maps features in the coming
months for cars with Google built in wherever
EV charging is available.
To help drivers make the shift to electric vehicles,
we’re focused on creating great EV experiences
across all of our products.
For instance, in Waze, we’ll soon
be making it easy for drivers to specify their EV plug
types so they can find the right charging
station along their route.
But we’re not just focused on driving.
In many places, people are choosing
more sustainable options, like walking, biking, or taking
transit.
On Google Maps, we’re making it even simpler to get around
with new glanceable directions.
For example, when you’re walking,
you can track your journey right from your route overview.
It’s perfect for those times when you need to see your path.
We’ll give you easy access to updated ETAs
and show you where to make the next turn, information that
was previously only available by using
our comprehensive Navigation Mode.
Glanceable directions start rolling out globally
on Android and iOS in the coming months.
Making a global impact requires everyone
to come together, including cities, people, and businesses.
That’s why we’ve worked with cities
for years to provide key insights through Environmental
Insights Explorer, or EIE, a free platform designed to help
cities measure emissions.
The Dublin City Council has been using
EIE to analyze bicycle usage across the city
and implement smart transportation policies.
And in Copenhagen, we’re using Street View cars
to measure hyperlocal air quality with Project Air View.
With this data, the city is designing low-emission zones
and exploring ways to build schools and playgrounds away
from high-pollution areas.
These are just a few ways that AI
is helping us reimagine the future of Google Maps,
making it more immersive and sustainable
for both people and cities around the world.
And now I’ll turn it over to Marzia
to talk about the work we’re doing in Europe with Google
Arts & Culture.
[APPLAUSE]
[MUSIC PLAYING]
MARZIA NICCOLAI: Thank you, Chris.
It’s exciting to see how Google Maps keeps
getting more helpful.
For the past decade, our daily work at Google Arts & Culture
has also been about finding new pathways, specifically
those at the intersection of technology and culture.
Together with our 3,000 partners from over 80 countries,
we brought dinosaurs to life in virtual reality,
digitized and preserved the famed Timbuktu Manuscripts,
recrafted a destroyed Mayan limestone staircase,
and found a way for us humans to find our four-legged friend’s
doppelganger and famous artworks.
As for the latter, at least one of Chris’s dogs
apparently spent a previous life in Renaissance Venice.
Perhaps you might also have heard of our work
through our popular Art Selfie feature, which helped over
200 million people find their doppelganger
in famous artworks.
But what you probably didn’t know
is that Art Selfie was actually the first on-device AI
application from Google and that we
have applied AI to cultural pursuits
in our Google Arts & Culture lab in Paris for over five years.
So today, I’d like to show you what
artificial intelligence in the hands of creatives and cultural
experts can achieve.
For our first example, I would like
to welcome the blobs to the digital stage.
[OPERA MUSIC]
[APPLAUSE]
Thank you, blobs.
Now, some of you might recognize the hallmarks of good opera
right away–
bass, tenor, mezzo soprano, and soprano.
And if you aren’t familiar with the world of opera singing,
this experiment, created in collaboration
with artist David Li, is for you and will
be your gateway to learn more.
For Blob Opera, we teamed up with four professional opera
singers whose voices trained a neural network,
essentially teaching the AI algorithm
how to sing and harmonize.
So when you conduct the blobs to create your very own opera,
what you hear aren’t the voices of the opera singers,
but instead the neural network’s interpretation of what opera
singing sounds like.
Give it a try and join the many people from around the world who have spent over 80 million minutes in this playful AI experiment learning about opera.
As you’ve seen and heard, AI can create new and even playful
ways for people to engage with culture.
But it can also be applied to preserve intangible heritage.
As Prabhakar shared earlier, access
to language and translation tools
is a powerful way to make the world’s information more
accessible to everyone.
But I was surprised to learn that out of the 7,000 languages spoken on Earth, more than 3,000 are currently under threat of disappearing, among them Maori, Louisiana Creole, Sanskrit, and Calabrian Greek.
To support these communities in preserving and sharing their languages, we created an easy-to-use language preservation tool called Woolaroo, which, by the way, is the word for photo in the Aboriginal language Yugambeh.
So how does it work?
Once you open Woolaroo in your mobile phone's browser, select one of the 17 languages currently featured and just take a photo of your surroundings.
Woolaroo, with the help of AI-powered object recognition,
will then try to identify what is in the frame
and match it against its growing library of words.
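The overall flow can be pictured with a short sketch: an off-the-shelf image classifier stands in for Woolaroo's object recognition, and its predicted label is looked up in a small word list for the selected language. The model choice, file path, and vocabulary entries below are placeholders, not Woolaroo's actual code or real Yugambeh words.

```python
from transformers import pipeline

# Off-the-shelf classifier as a stand-in for Woolaroo's object recognition.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Placeholder vocabulary keyed by English label; a real deployment would load
# the community-maintained word list for the selected language.
vocabulary = {
    "banana": "<word in selected language>",
    "umbrella": "<word in selected language>",
    "cup": "<word in selected language>",
}

def label_photo(photo_path):
    """Return (recognized object, word in the selected language), if any."""
    predictions = classifier(photo_path, top_k=5)
    for p in predictions:
        # ImageNet-style labels can be comma-separated synonyms, e.g. "cup, mug".
        for label in p["label"].split(", "):
            if label in vocabulary:
                return label, vocabulary[label]
    return None

print(label_photo("my_photo.jpg"))
```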
For me, this tool is special because it shows how AI can help make a tangible difference for communities and real people, like the ones shown here, in their struggle to preserve their unique heritage.
Now let's have a look at AI in the service of cultural institutions and how it can help uncover what has been lost or overlooked.
Women at the forefront of science have often not received proper credit or acknowledgment for their essential work.
To take another step to rectify this,
we teamed up with researchers at the Smithsonian American
Women’s History Initiative and developed
an experimental AI-driven research
tool that first compares archival records across history
by connecting different nodes in the metadata.
Secondly, it's able to identify women scientists based on variations in their names, because sometimes they had to do things like use their husband's name in a publication.
And third, it’s capable of analyzing image records
to cluster and recover female contributors.
The initial results have been extremely promising
and we can’t wait to apply this technology
to uncover even more accomplishments of women
in science.
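To give a flavor of that second capability, here is a small sketch that links archival records by fuzzy name similarity; the records are invented, and the real Smithsonian tool draws on much richer metadata than a single author string.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_matches(target, records, threshold=0.72):
    """Return records whose author field plausibly refers to the same person,
    e.g. initials-only or abbreviated forms of the name. Married-name forms
    like 'Mrs. John Smith' would need extra rules layered on top."""
    return [rec for rec in records if similarity(target, rec["author"]) >= threshold]

records = [  # invented examples
    {"author": "M. E. Smith", "year": 1921},
    {"author": "Mary Ellen Smith", "year": 1930},
    {"author": "Mrs. John Smith", "year": 1923},
    {"author": "Robert Jones", "year": 1925},
]
print(candidate_matches("Mary E. Smith", records))
```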
Preserving cultural heritage online is core to our mission.
We work hard to ensure that the knowledge and treasures provided by our cultural partners show up where they're of most benefit: when people are searching online.
Say you search for Artemisia Gentileschi,
the most successful yet often overlooked
female painter of the Baroque period.
You'll be able to explore many of her artworks, including "Self-Portrait as Saint Catherine of Alexandria," which our partner, the National Gallery in London, has provided in high resolution.
When you click on it, you’ll be able to zoom into the brush
stroke level to see all the rich detail of the work.
You’ll never be able to get that close in the museum.
What’s more, you’re actually able to bring
this and many other artworks right into your home.
Just click on the View in Augmented Reality button
on your mobile phone to teleport Artemisia’s masterpiece
in its original size right in front of you.
But culture doesn’t stop at classical art.
So keep your eyes open for a variety of 3D
and augmented reality assets provided
by cultural institutions.
One of my favorites, besides the James Webb Space Telescope,
is one of the most popular queries for students,
the periodic table, for which I’m
happy to announce we’ll triple the number
of available languages to include French, Spanish,
German, and more in the coming weeks.
3D and AR models in Google Search
really unlock people’s curiosity.
And in the past year alone, we’ve
seen an 8x increase in people engaging
with AR models contributed by Google Arts & Culture partners
to explore and learn.
Those are just some of the examples
of what awaits at the intersection
of artificial intelligence and culture
and how we work with our partners to make
more culture available online.
I invite you to discover all of that and much more
in the Google Arts & Culture App.
Thank you and back to Prabhakar.
[APPLAUSE]
PRABHAKAR RAGHAVAN: Thanks, Marzia.
Today you saw how we are applying state-of-the-art AI
technologies to make our information products more
helpful for you, to create experiences that are
as multidimensional as the people who rely on them.
We call this making Search more natural and intuitive.
But for you, we hope that it means
that when you next seek information,
you won’t be confined by the language it originated in.
You won’t be constrained to typing words in a search box.
And you won’t be beholden to a single way of searching.
Although we are 25 years into Search,
I daresay that our story has just begun.
We have even more exciting AI-enabled innovations
in the works that will change the way people search,
work, and play.
We are reinventing what it means to search.
And the best is yet to come.
Thank you all.
Merci.
[APPLAUSE]
[MUSIC PLAYING]