Google - Google Keynote (Google I/O '23)


Google I/O ‘23

[Cheers and Applause]. » Hello.

Hello. Hello.

Hello. Hello.

Hello. Hello.

Hello. Hello, everyone, my name is Dan

Deacon. Thanks for coming here early.

It’s a real pleasure to be sharing the stage with some of

my all-time favorites, Gmail and Google Calendar.

So we’re going to – I’m going to play some songs, and a lot of

the content is going to be made using Phenaki and Bard and

MusicLM, so I’m just going to get started.

This first one is a song of mine called “When I Was Done Dying,”

and the video was generated

thanks to the lyrics of this song.

Thanks so much. When I was done dying, my

conscience regained So I began my struggle, a

nothingness strained Out a flash made of time, my new

form blasted out And it startled me so and I

burst out a shout My legs ran frantic

like birds from a nest And I ran until drained, leaving

no choice but rest So I fell asleep softly at the

edge of a cave But I should have gone deeper

but I’m not so brave And like that I was torn out and

thrown in the sky And I said all my prayers

because surely I’ll die As I crashed down and smashed

into earth, into dirt How my skin did explode, leaving

only my shirt But from shirt grew a tree and

then tree grew a fruit And I became the seed and that

seed was a brute And I clawed through the ground

with my roots and my leaves Tore up the shirt and I

ate up the sleeves And they laughed out at me and

said “what is your plan?” But their question was foreign,

I could not understand When then suddenly I’m ripped up

Placed in a mouth And it swallowed me down at

which time I head south

No need to fear them, no reason to run.

Reached out to touch them faded too soon yet their mouth still

remained standing up towards the moon

And I looked up beyond that beautiful sky, and I

Oh, it greeted me kindly and then we all drank

And we drooled out together right onto the ground

And the ocean grew quickly right up all around

And the earth looked at me and said “Wasn’t that fun?”

And I replied “I’m sorry if I hurt anyone”

And without even thinking cast me into space

But before she did that she wiped off my own face

She said better luck next time don’t worry so much

Without ears I couldn’t hear I could just feel the touch

As I fell asleep softly at the edge of a cave

But I should have gone deeper but I’m not so brave

Thank you very much. [Cheers and Applause].

Whew! All right.

Now that all the mosh pits have subsided, we can return to our

seats. Let’s get centered in where we

are in the present, in the now, and I do a lot of shows like

this. Normally, they’re at night in

the dark, in unseated rooms, so I thought maybe we could just

all try to do a guided visualization so we can all get

in the same place. I’m sure a lot of you traveled

here today. So let’s just fully relax our

shoulders, feel our bodies in the seats, our feet on the

floor. I used Bard, and I was like how

can I structure a guided visualization

for this event? It gave me some suggestions.

So let’s envision now that we’re slowly, slowly lifting up from

our seats, floating into the air, and let’s imagine the air

is not really cold. Let’s imagine the air is a

wonderful, wonderful temperature.

We’re floating here above the Shoreline Amphitheatre.

We see everyone else below. They’re floating up with us.

We’re becoming mist. Vapor.

And there, we see a beautiful, beautiful entity come towards

us. And what is it?

It’s a bird. A bird with lips.

Don’t pay too much attention to the lips, but the bird has lips.

And the bird says to you, hey, I just got a two-bedroom

apartment. I’m looking for a roommate.

And you think, wow. I never thought about

relocating, but what neighborhood is it in?

And the bird says, it’s in this neighborhood that you’ve always

wanted to live in. You say that sounds great.

Do you live with anybody else? And you’re like, yes, I do love

playing board games. I have friends over.

We play board games. Great.

Maybe my friends can become your friends and we’ll start a whole

community. The bird’s like, great, the only

thing is it’s kind of a small kitchen, so I don’t like that

many kitchen appliances. And you’re like, that’s

wonderful. I normally eat out, but I’m

trying to cook more at home. They’re like, that’s wonderful.

We can make meals together. So now that we’re all on the

same page, we’re recentered here, let’s just think of how

wonderful the world would be if there was a bird with lips that

invited us to make meals with them, and we could make

wonderful, wonderful meals together.

And that’s sort of what this whole process has felt like.

It’s like being a bird and finding a new roommate and

making it work with them. I’ll explain more.

As a composer, it’s always interested me that composing

music is an endless and evolving system about finding ways and

communicating ideas about how to make sounds.

Music has always been expanding with ways to share ideas and

about how to make sounds while exploring what sounds are

available to us that can be called music.

Another thing that fascinates me as a composer is that throughout

music’s history, advancements in technology have always led to

and coincided with advancements in music thinking.

New tools lead to new instruments, which lead to new

music, which leads to new boundaries to break, which leads

to a restart for new tools. At one point in time, the

trombone was a reflection of cutting-edge musical technology.

And like many technologies, when the first trombones entered the

scene, they were viewed as highly controversial pieces of

music tech. The same could be said for

player pianos, synthesizers, synchronizing sound with film,

home recording studios and midi. I love all these things, and

when I started getting into making computer music as a

teenage nerd on my family computer, all those tools were

readily established and antiquated, but they were there

for people to make music with, and ever since those early

days of playing with the keyboard, I’ve been exploring

new ways of making new sounds. So when Google invited me to

come meet with some of their AI researchers and talk about what

they’ve been working on, I was pretty ecstatic to be one of the

first people to use these tools and see how they’re being shaped

and make new music with them. Last month I was pretty lucky to

spend time with the music researchers.

They showed me the tools that they’ve been making and

explained the concepts behind how they worked and allowed me

to experiment with making sounds, music and visuals with

them. It felt a lot like those early

days of sitting in front of a computer and having no idea

of what I’m doing, or taking out a keyboard and hitting buttons

and seeing what happens. I had no idea what happened, and

that was sort of the fun part, just seeing what it is and

making something new with it. It reminded me a lot of being a

child again. So you hear that drum beat, and

that would start, and the auto chords would start, and the

outputs from AI felt the same way.

They would spark new ideas, they created building blocks to make

new songs. So the music that accompanied

our guided visualization about becoming roommates with a duck

with lips, that was generated using a text-to-music prompt of

sad, moody, new age music, sad piano and

synthesizer. Once we found an output that we

liked, we started experimenting with it and processing it with

SingSong to create new variations to accompany the

text-to-music outputs. We fed those results into

themselves countless times, making dozens of iterations to

explore, and the results were the songs you heard while

I was rambling on about the birds with lips.

It is available.

You can sign up for the wait list now.

We also made a new track that we’re about to play using a

single-word prompt. Chip tune.

The name of the band is Chip Tune and our album is Chip Tune

and if you would like to also join the band, you’re welcome to

join the band. This is our first show.

We’re all members of the band, so thanks for joining the band.

You can also quit the band at any time, and let’s see how it goes.

So we put the word chip tune in, and this is the output that

we got. [Music playing].

And we realized we needed another section of the song, so

we fit it back into itself. And that gave us this B section.

We knew it needed to be a full song.

It couldn’t just be these two riffs, but as you know, every

song needs a chorus, so we made the opening part our refrain,

and here we are back at it. And we’re like, all right, well,

it’s a chip tune song so maybe we should add some sort of

adventure theme, so we were like all right, we know what comes

after this. And what comes after this was a

breakdown because every song needs a breakdown, and we’re

like, all right, we’re going to be at the Shoreline, people are

going to be nuts in their seats. There’s going to be mosh pits

everywhere, so when the breakdown is done, it’s just

going to be the full-on release.

So this part came as kind of an unexpected little gift.

We didn’t really know what we would get, and that was the fun

part of this discovery being like what we would put in would

be like finding out a new sound that we didn’t even expect to

create. But then, of course, we go back

to our beautiful refrain. Which we fell in love with at

the beginning. » Thank you very much.

That was our first show. [Applause].

For our band Chip Tune. Some of the other band members

are here in the audience, and some of our band members don’t

exist in the physical realm. I’ve never been in a band where

the band members aren’t physical entities.

So, all right. Now that we’ve been basking in

the Chip Tune, let’s return to our guided meditation.

We’ve got a long day of Google ahead of us, and now let’s place

ourselves in a beautiful woods, where the – our roommate’s

house is. We’re wandering through the

woods. We’re coming back from a long

day, and we’re thinking about all of the Google jokes that we

could make, and you’re excited to tell your roommate the joke

that you came up with and you’re like, and then they appear, and

you’re like, hey, duck with lips, I’m doing this event for

Google and I’m thinking about using this joke.

The duck’s like, what is it? You just get mesmerized by

the feathers. You’re pulled deeper and deeper

into the realm of pure feathers. The duck says, hey, I’m sorry,

but I finished the almond milk. You’re like, oh, I was really

looking forward to having that with my cereal.

You’re like, okay, I’ll go out and get more.

And you’re like no, I got this joke I want to tell you.

I know you’ve been making almond milk at home, and we

talked specifically about not having so many kitchen

appliances, it’s a small kitchen, but I think we can

expand the kitchen and your heart just fills with pure love

of the idea of blowing out the kitchen.

Adding more walls, adding more cabinetry, so you can make

almond milk at home, maybe even build a

little stable, so you could have a million milks, and the bird is

like this is the best roommate relationship I’ve ever had, and

you’re like, this is wonderful, bird with lips, thank you so

much, and then you go back to the house and you’re trying so

deeply to remember the joke that you were going to make, and you

remember, in the streets, it’s Google Maps, and in the sheets,

it’s Google Sheets, and the duck is like, I think you could have

delivered it earlier. It might have landed a little

better. Why don’t we just go relax back

in our home, drift away into the ethernet of tomorrow.

We’ll find a way to find that joke in our everyday life, and

you just wish so badly that the duck with lips was real.

So if we could all now just collectively, with all of our

ability, and all of our might, try to manifest a duck with

lips, and if we think hard enough, perhaps we can put this

creature into reality and we can share this utopian life with

this beautiful roommate, the duck with lips.

So this will be my last song. And then I’m going to do like a

five-hour set later, so don’t worry.

This song is called “Change Your

Life,” and it seemed like a good thing to do right before this

talk where I think all of our lives are going to drastically

change. And thanks so much.

Again, my name is Dan Deacon. And again, these visuals were

generated using Phenaki. Tonight’s the night, you’re

gonna change your life Tonight’s the night, you’re

gonna change your life Tonight’s the night, you’re

gonna change your life Tonight’s the night, you’re

gonna change your life Your life, your life, your life,

your life, your life, your life, your life

Tonight, tonight, tonight, tonight, tonight

You’re gonna change your life You’re gonna change your life

Tonight’s the night, you’re gonna change your life

Tonight’s the night, you’re gonna change your life

Tonight’s the night, you’re gonna change your life

Tonight’s the night, you’re gonna change your life

Your life, your life, your life, your life, your life, your life,

your life Tonight, tonight, tonight,

tonight, tonight You’re gonna change your life

You’re gonna change your life Yes you can you know you can

[Cheers and Applause]. [Music playing].

Ten. Nine.

Eight. Seven.

Six. Five.

Four. Three.

Two. One.

[Cheers and Applause]. » Since day one, we set out to

significantly improve the lives of as many people as possible.

And with a little help, you found new answers, discovered

new places. The right words came at just the

right time. And we even learned how to spell

the word epicurean. Life got a little easier.

Our photos got a little better. And we got closer to a world

where we all belong. » All stations ready to resume

count, 3-2-1… we have liftoff! » So as we stand on the cusp of

a new era, new breakthroughs in AI will reimagine the ways we

can help. We will have the chance to

improve the lives of billions of people.

We will give businesses the opportunity to thrive and grow,

and help society answer the toughest questions we have to

face. Now, we don’t take this for

granted. So while our ambition is bold,

our approach will always be responsible, because our goal is

to make AI helpful for everyone.

[Cheers and Applause]. » SUNDAR: Good morning,

everyone. Welcome to Google I/O.

[Cheers and Applause]. It’s great to see so many

of you here at Shoreline, so many developers,

and a huge thanks to the millions joining from around the

world, from Bangladesh to Brazil to our new Bay View Campus right

next door. It’s so great to have you, as

always. As you may have heard, AI is

having a very busy year, so we’ve got lots to talk about.

Let’s get started. Seven years into our journey, as

an AI-first company, we are at an exciting inflection point.

We have an opportunity to make AI even more helpful for people,

for businesses, for communities, for everyone.

We have been applying AI to make our products radically more

helpful for a while. With generative AI, we’re taking

the next step. With a bold and responsible

approach, we’re reimagining all our core products, including

Search. You will hear more later in the

keynote. Let me start with a few examples

of how generative AI is helping to evolve our products, starting

with Gmail. In 2017, we launched Smart

Reply, short responses you could select with just one click.

Next came Smart Compose, which offered writing suggestions as

you type. Smart Compose led to more

advanced writing features powered by AI.

They’ve been used in Workspace over 180 billion times in the

past year alone. And now, with a much more

powerful generative model, we are taking the next step in

Gmail with “Help me write.” Let’s say you got this E-mail

that your flight was cancelled. The airline has sent you a

voucher, but what you really want is a full refund.

You could reply and use Help Me Write.

Just type in the prompt of what you want, an E-mail to ask for

a full refund, hit create, and a full draft appears.

As you can see, it conveniently pulled in flight details from

the previous E-mail. And it looks pretty close to

what you want to send. Maybe you want to refine it

further. In this case, a more elaborate

E-mail might increase the chances of getting the refund.

[Laughter]. [Applause].

And there you go. I think it’s ready to send!

Help me write will start rolling out as part of our WorkSpace

updates. And just like with Smart

Compose, you will see it get better over time.

The next example is Maps. Since the early days of Street

View, AI has stitched together billions of panoramic images so

people can explore the world from their device.

At last year’s I/O, we introduced Immersive View, which

uses AI to create a high fidelity representation of a

place so you can experience it before you visit.

Now we are expanding that same technology to do what Maps does

best: Help you get where you want to go.

Google Maps provides 20 billion kilometers of directions every

day. That’s a lot of trips.

Imagine if you could see your whole trip in advance.

With Immersive View for routes, now you can, whether you’re

walking, cycling or driving. Let me show you what I mean.

Say I’m in New York City and I want to go on a bike ride.

Maps has given me a couple of options close to where I am.

I like the one on the waterfront, so let’s go with

that. Looks scenic.

And I want to get a feel for it first.

Click on immersive view for routes.

And it’s an entirely new way to look at my journey.

I can zoom in to get an incredible bird’s eye view of

the ride. And as we turn, we get on to a

great bike path. [Cheers and Applause].

It looks like it’s going to be a beautiful ride.

You can also check today’s air quality.

Looks like the AQI is 43. Pretty good.

And if I want to check traffic and weather and see how they

might change over the next few hours, I can do that.

Looks like it’s going to pour later, so maybe I want to get

going now. Immersive View for routes will

begin to roll out over the summer and launch in 15 cities

by the end of the year, including London, New York,

Tokyo and San Francisco. [Cheers and Applause].

Another product made better by AI is Google Photos.

We introduced it at I/O in 2015, and it was one of our first AI

native products. Breakthroughs in machine

learning made it possible to search your photos for things

like people, sunsets or waterfalls.

Of course, we want you to do more than just search photos.

We also want to help you make them better.

In fact, every month, 1.7 billion images are edited in

Google Photos. AI advancements give us more

powerful ways to do this. For example, Magic Eraser,

launched first on Pixel, uses AI-powered computational

photography to remove unwanted distractions.

And later this year, using a combination of semantic

understanding and generative AI, you can do much more with the

new experience called Magic Editor.

Let’s have a look. Say you’re on a hike and stop to

take a photo in front of a waterfall.

You wish you had taken your bag off for the photo, so let’s go

ahead and remove that bag strap. The photo feels a bit dark, so

you can improve the lighting. And maybe you want to even get

rid of some clouds to make it feel as sunny as you remember

it. [Laughter].

Looking even closer, you wish you had posed so it looks like

you’re really catching the water in your hand.

No problem. You can adjust that.

[Laughter]. [Applause].

There you go. Let’s look at one more photo.

This is a great photo, but as a parent, you always want your kid

at the center of it all. And it looks like the balloons

got cut off in this one. So you can go ahead and

reposition the birthday boy. Magic Editor automatically

re-creates parts of the bench and balloons that were not

captured in the original shot. And as a finishing touch, you

can punch up the sky. This also changes the lighting

in the rest of the photo so the edit feels consistent.

It’s truly magical. We are excited to roll out Magic

Editor in Google Photos later this year.

[Cheers and Applause]. From G-mail and Photos to Maps,

these are just a few examples of how AI can help you in moments

that matter. And there is so much more we can

do to deliver the full potential of AI across the products you

know and love. Today, we have 15 products that

each serve more than half a billion people and businesses,

and six of those products serve over two billion users each.

This gives us so many opportunities to deliver on our

mission, to organize the world’s information and make it

universally accessible and useful.

It’s a timeless mission that feels more relevant with each

passing year. And looking ahead, making AI

helpful for everyone, is the most profound way we will

advance our mission. And we are doing this in four

important ways. First, by improving your

knowledge and learning, and deepening your understanding of

the world. Second, by boosting creativity

and productivity so you can express yourself and get things

done. Third, by enabling developers

and businesses to build their own transformative products and

services. And finally, by building and

deploying AI responsibly so that everyone can benefit equally.

We are so excited by the opportunities ahead.

Our ability to make AI helpful for everyone relies on

continuously advancing our foundation models.

So I want to take a moment to share how we are approaching

them. Last year, you heard us talk

about PaLM, which led to many improvements across our

products. Today, we are ready to announce

our latest PaLM model in production, PaLM 2.

[Cheers and Applause]. PaLM 2 builds on our fundamental

research and our latest infrastructure.

It’s highly capable at a wide range of tasks, and easy to

deploy. We are announcing over 25

products and features powered by PaLM 2 today.

PaLM 2 models deliver excellent foundational capabilities across

a wide range of sizes. We have affectionately named

them Gecko, Otter, Bison and Unicorn.

Gecko is so light-weight that it can work on mobile devices, fast

enough for great interactive applications on device, even

when offline. PaLM 2 models are stronger in

logic and reasoning, thanks to broad training on scientific and

mathematical topics. It’s also trained on

multi-lingual texts, spanning over 100 languages so it

understands and generates nuanced results.

Combined with powerful coding capabilities, PaLM 2 can also

help developers collaborating around the world.

Let’s look at this example. Let’s say you’re working with a

colleague in Seoul and you’re debugging code.

You can ask it to fix a bug and help out your teammate by adding

comments in Korean to the code. It first recognizes the code is

recursive, suggests a fix and even explains the reasoning

behind the fix. And as you can see, it added

comments in Korean, just like you asked.

[Applause]. While PaLM 2 is highly capable,

it really shines when fine-tuned on domain-specific knowledge.

We recently released Sec-PaLM, a version of PaLM 2, fine-tuned

for security use cases. It uses AI to better detect

malicious scripts and can help security experts understand and

resolve threats. Another example is Med-PaLM 2.

In this case, it’s fine-tuned on medical knowledge.

This fine-tuning achieved a 9x reduction in inaccurate

reasoning when compared to the base model, approaching the

performance of clinician experts who answered the same set of

questions. In fact, Med-PaLM 2 was the

first language model to perform at “expert level” on medical

licensing exam-style questions, and is currently the

state-of-the-art. We are also working to add

capabilities to Med-PaLM 2 so that it can synthesize

information from medical imaging like plain films and mammograms.

You can imagine an AI collaborator that helps

radiologists interpret images and communicate the results.

These are some examples of PaLM 2 being used in specialized

domains. We can’t wait to see it used in

more. That’s why I’m pleased

to announce that it is now available in preview.

And I’ll let Thomas share more. [Applause].

PaLM 2 is the latest step in our decade-long journey to bring AI

in responsible ways to billions of people.

It builds on progress made by two world-class teams, the Brain

Team and DeepMind. Looking back at the defining AI

breakthroughs over the last decade, these teams have

contributed to a significant number of them.

AlphaGo, Transformers, sequence-to-sequence models, and

so on. All this helped set the stage

for the inflection point we are at today.

We recently brought these two teams together into a single

unit, Google DeepMind. Using the computational

resources of Google, they are focused on building more capable

systems safely and responsibly. This includes our next

generation foundation model, Gemini, which is still in

training. Gemini was created from the

ground up to be multi-modal, highly efficient at tool and API

integrations, and built to enable future innovations like

memory and planning. While still early, we’re already

seeing impressive multi-modal capabilities not seen in prior

models. Once fine-tuned and rigorously

tested for safety, Gemini will be available at various sizes

and capabilities just like PaLM 2.

As we invest in more advanced models, we are also deeply

investing in AI responsibility. This includes having the tools

to identify synthetically generated content whenever you

encounter it. Two important approaches are

watermarking and metadata. Watermarking embeds information

directly into content in ways that are maintained even through

modest image editing. Moving forward, we are building

our models to include watermarking and other

techniques from the start. So if you look at this synthetic

image, it’s impressive how real it looks, so you can imagine how

important this is going to be in the future.
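As a toy illustration of the general idea only: the sketch below hides bits in the least-significant bits of pixel values. This is not how Google’s production watermarking works; simple LSB embedding does not survive editing, whereas the keynote describes techniques that persist through modest edits.

```python
# Toy watermark: hide a bit string in the lowest bit of each pixel value.
# Purely illustrative of "embedding information directly into content".

def embed(pixels, bits):
    """Return a copy of pixels with `bits` written into the low bits."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the lowest bit, then set it
    return out

def extract(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 13, 77, 154, 90, 31]
marked = embed(pixels, [1, 0, 1, 1])
print(extract(marked, 4))  # -> [1, 0, 1, 1]
```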

Metadata allows content creators to associate additional context

with original files, giving you more information whenever you

encounter an image. We’ll ensure every one of our

AI-generated images has that metadata.

James will talk about our responsible approach to AI

later. As models get better and more

capable, one of the most exciting opportunities is making

them available for people to engage with directly.

That’s the opportunity we have with Bard, our experiment for

conversational AI. We are rapidly evolving Bard.

It now supports a wide range of programming capabilities, and

it’s gotten much smarter at reasoning and math prompts.

And as of today, it is now fully running on PaLM 2.

To share more about what’s coming, let me turn it over to

Sissie. [Cheers and Applause].

Sissie Hsiao: Thanks, Sundar. Large language models have

captured the world’s imagination, changing how we

think about the future of computing.

We launched Bard as a limited access experiment on a

lightweight large language model to get feedback and iterate.

Since then, the team has been working hard to make rapid

improvements, and launch them quickly.

With PaLM 2, Bard’s math, logic and reasoning skills made a huge

leap forward, underpinning its ability to help developers with

programming. Bard can now collaborate on

tasks like code generation, debugging and explaining code

snippets. Bard has already learned more

than 20 programming languages, including C++, Go, JavaScript,

Python, Kotlin, and even Google Sheets functions.

And we’re thrilled to see that coding has quickly become one of

the most popular things people are doing with Bard.

So let’s take a look at an example.

I’ve recently been learning chess, and, for fun, I thought

I’d see if I can program a move in Python.

How would I use Python to generate the “Scholar’s Mate”

move in chess? Okay.

Here, Bard created a script to re-create this chess move in

Python. And notice how it also formatted

the code nicely, making it easy to read.
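The generated script itself isn’t shown in the transcript. As a rough stdlib-only sketch of the idea (the later `chess.Board()` reference suggests the demo actually used the third-party python-chess package), here the Scholar’s Mate move sequence is played out on a simplified piece map, with no rule or checkmate validation:

```python
# Simplified sketch of the Scholar's Mate (1.e4 e5 2.Bc4 Nc6 3.Qh5 Nf6 4.Qxf7#).
# Pieces live in a dict mapping square -> piece code ("wQ" = white queen).

def scholars_mate():
    board = {
        "e2": "wP", "e7": "bP", "f1": "wB", "b8": "bN",
        "d1": "wQ", "g8": "bN", "f7": "bP",
    }
    moves = [
        ("e2", "e4"),  # 1. e4
        ("e7", "e5"),  # 1... e5
        ("f1", "c4"),  # 2. Bc4
        ("b8", "c6"),  # 2... Nc6
        ("d1", "h5"),  # 3. Qh5
        ("g8", "f6"),  # 3... Nf6
        ("h5", "f7"),  # 4. Qxf7#  (captures the f7 pawn)
    ]
    for src, dst in moves:
        board[dst] = board.pop(src)  # move the piece, capturing anything on dst
    return board

final = scholars_mate()
print(final["f7"])  # -> wQ  (the white queen delivers mate on f7)
```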

We’ve also heard great feedback from developers about how Bard

provides code citations. And starting next week, you’ll

notice something right here. We’re making code citations even

more precise. If Bard brings in a block of

code, just click this annotation and Bard will underline the

block and link to the source. Now, Bard can also help me

understand the code. Could you tell me what

‘chess.Board()’ does in this code?

Now, this is a super helpful explanation of what it’s doing

and makes things more clear. All right.

Let’s see if we can make this code a little better.

How would I improve this code? Okay.

Let’s see, there’s a list comprehension, creating a

function, and using a generator. Those are great suggestions!
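The transcript doesn’t show Bard’s actual suggestions, but the three techniques it names can be illustrated generically (the `moves` list and helper names here are hypothetical, not from the demo):

```python
# Generic illustration of the three refactors named in the demo:
# a list comprehension, extracting a function, and a generator.
moves = ["e4", "e5", "Bc4", "Nc6", "Qh5", "Nf6", "Qxf7#"]

# 1. List comprehension: collect White's moves (even indices).
white_moves = [m for i, m in enumerate(moves) if i % 2 == 0]

# 2. Extracting a function: give the filtering logic a name.
def moves_for(color, all_moves):
    offset = 0 if color == "white" else 1
    return [m for i, m in enumerate(all_moves) if i % 2 == offset]

# 3. Generator: yield moves lazily instead of building a list up front.
def move_stream(all_moves):
    for move in all_moves:
        yield move

print(white_moves)  # -> ['e4', 'Bc4', 'Qh5', 'Qxf7#']
```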

Now, could you join them into one single Python code block?

Okay. Bard is rebuilding the code with

these improvements. Okay.

Great. How easy was that?

And in a couple clicks, I can move this directly into Colab.

Developers love the ability to bring code from Bard into their

workflow, like Colab, so coming soon, we’re adding the

ability to export and run code with our partner Replit,

starting with Python. [Cheers and Applause].

We’ve also heard that you want Dark theme, so starting today,

you can activate it – [Cheers and Applause].

You can activate it right in Bard or let it follow your OS

settings. Speaking of exporting things,

people often ask Bard for a head start drafting E-mails and

documents. So today we’re launching two

more Export Actions, making it easy to move Bard’s responses

right in to G-mail and Docs. [Cheers and Applause].

So we’re excited by how quickly Bard and the underlying models

are improving, but we’re not stopping there.

We want to bring more capabilities to Bard to fuel

your curiosity and imagination. And so, I’m excited to announce

that tools are coming to Bard. [Cheers and Applause].

As you collaborate with Bard, you’ll be able to tap into

services from Google and extensions with partners to let

you do things never before possible.

And of course, we’ll approach this responsibly, in a secure

and private way, letting you always stay in control.

We’re starting with some of the Google apps that people love and

use every day. It’s incredible what Bard can

already do with text, but images are such a fundamental part of

how we learn and express. So in the next few weeks, Bard

will become more visual, both in its responses, and your prompts.

So if you ask, What are some must-see sights in New Orleans?

Bard’s going to use Google Search and the Knowledge Graph

to find the most relevant images.

Okay. Here we go.

Hmmm, the French Quarter, the Garden District.

These images really give me a much better sense of what I’m

exploring. We’ll also make it easy for you

to prompt Bard with images, giving you even more ways to

explore and create. People love Google Lens, and in

coming months, we’re bringing the powers of Lens to Bard.

[Cheers and Applause]. So if you’re looking to have

some fun with your fur babies, you might upload an image and

ask Bard to “Write a funny caption about these two.”

Lens detects this is a photo of a goofy German Shepherd and a

Golden Retriever. And then Bard uses that to

create some funny captions. If you ask me, I think they’re

both good boys. Okay.

Let’s do another one. Imagine I’m 18 and I need to

apply to college. I won’t date myself with how

long it’s been, but it’s still an overwhelming process.

So I’m thinking about colleges, but I’m not sure what I want to

focus on. I’m into video games.

And what kinds of programs might be interesting?

Okay. This is a really helpful head

start. Hmmm, animation looks pretty

interesting. Now I could ask, help me find

colleges with animation programs in Pennsylvania.

Okay. Great.

That’s a good list of schools. Now, to see where these are, I

might now say, show these on a map.

Here, Bard’s using Google Maps to visualize where these schools

are. [Cheers and Applause].

This is super helpful, and exciting to see that plenty of

options are not too far from home.

Now, let’s start organizing things a bit.

Show these options as a table. Nice.

Structured and organized. But there’s more I want to know.

Add a column showing whether they are public or private

schools. [Applause].

Perfect. This is a great start to build

on. And now, let’s move this to

Google Sheets so my family can jump in later to help me with my

search. [Cheers and Applause].

You can see how easy it will be to get a jump start in Bard and

quickly have something useful to move over to apps like Docs or

Sheets to build on with others. Okay.

Now, that’s a taste of what’s possible when Bard meets some of

Google’s apps, but that’s just the start.

Bard will be able to tap into all kinds of services from

across the Web, with extensions from incredible partners like

Instacart, Indeed, Khan Academy and many more.

Here’s a look at one coming in the next couple months.

With Adobe Firefly, you’ll be able to generate completely new

images from your imagination, right in Bard.

Now, let’s say I’m planning a birthday party for my

seven-year-old who loves unicorns.

I want a fun image to send out with the invitations.

Make an image of a unicorn and a cake at a kid’s party.

Okay. Now Bard is working with Firefly

to bring what I imagined to life.

[Cheers and Applause]. How amazing is that?

This will unlock all kinds of ways that you can take your

creativity further and faster. And we are so excited for this

partnership. Bard continues to rapidly

improve and learn new abilities, and we want to let people around

the world try it out and share their feedback.

And so today, we are removing the wait list and opening up

Bard to over 180 countries and territories.

[Cheers and Applause].

With more coming soon. Bard is also becoming available

in more languages. Beyond English, starting today,

you’ll be able to talk to Bard in Japanese and Korean.

[Cheers and Applause]. Adding languages responsibly

involves deep work to get things like quality and local nuances

right, and we’re pleased to share that we’re on track to

support 40 languages soon! It’s amazing to see the rate of

progress so far. More advanced models, so many

new capabilities, and the ability for even more people to

collaborate with Bard. And when we’re ready to move

Bard to our Gemini model, I’m really excited about more

advancements to come. So that’s where we’re going with

Bard, connecting tools from Google and amazing services

across the Web, to help you do and create anything you can

imagine, through a fluid collaboration

with our most capable large language models.

There’s so much to share in the days ahead.

And now, to hear more about how large language models are

enabling next generation productivity features right in

Workspace, I’ll hand it over to

Aparna. [Cheers and Applause].

APARNA PAPPU: From the very beginning, Workspace was built

to allow you to collaborate in realtime with other people.

Now you can collaborate in real time with AI.

AI can act as a coach, a thought partner, source of inspiration,

as well as a productivity booster across all of the apps

of Workspace. Our first steps with AI as a

collaborator were via the help me write feature in G-mail and

Docs, which launched to trusted testers in March.

We’ve been truly blown away by the clever and creative ways

these features are being used, from writing essays, sales

pitches, project plans, client outreach, and so much more.

Since then, we’ve been busy expanding these helpful features

across more surfaces. Let me show you a few examples.

One of our most popular use cases is the trusty job

description. Every business, big or small,

needs to hire people. A good job description can make

all the difference. Here’s how Docs has been

helping. Say you run a fashion boutique

and need to hire a textile designer.

To get started, you enter just a few words as a prompt, “Senior

level job description for textile designer.”

Docs will take that prompt and send it to our PaLM 2 based

model. And let’s see what I get back.

Not bad. With just seven words, the model

came back with a good starting point, written out really

nicely. Now, you can take that and

customize it for the kind of experience, education and skill

set that this role needs, saving you a ton of time and effort.

[Applause]. Next, let me show you how you

can get more organized with Sheets.

Imagine you run a dog-walking business and need to keep track

of things like your clients, logistics about the dogs, like

what time they need to be walked, for how long, et cetera.

Sheets can help you get organized.

In a new Sheet, simply type something like, “Client and pet

roster for a dog-walking business with rates,” and hit

Create. Sheets sends this input to a

fine-tuned model that we’ve been training with all sorts of

Sheets-specific use cases. Look at that!

[Cheers and Applause]. The model figured out what you

might need. The generated table has things

like the dog’s name, client info, notes, et cetera.

This is a good start for you to tinker with.

Sheets made it easy for you to get started, so you can get back

to doing what you love. Speaking of getting back to

things you love, let’s talk about Google Slides.

People use Slides for storytelling all the time,

whether at work or in their personal lives.

For example, you get your extended family to collect

anecdotes/haikus/jokes for your parents’ 50th wedding

anniversary in a slide deck. Everyone does their bit, but

maybe this deck could have more pizzazz.

Let’s pick one of the slides and use the poem on there as a

prompt for image generation. “Mom loves her pizza cheesy and

true, while Dad’s favorite treat is a warm pot of fondue.”

Let’s hit create and see what it comes up with.

Behind the scenes, that quote is sent as input to our

text-to-image models. And we know it’s unlikely that

the user will be happy with just one option so we generate six to

eight images so that you have the ability to choose and

refine. Whoa!

I have some oddly delicious-looking fondue pizza

images! Now, the style is a little too

cartoony for me. So I’m going to ask it to try

again. Let’s change the style to

“photography.” And give it a whirl.

Just as weird, but it works for me.

[Cheers and Applause]. You can have endless fun with

this, with no limits on cheesiness or creativity.

Starting next month, trusted testers will be able to try this

and six more generative AI features across Workspace.

And later this year, all of this will be generally available to

business and consumer Workspace users via a new service called

Duet AI for Workspace. [Cheers and Applause].

Stepping back a bit, I showed you a few powerful examples of

how Workspace can help you get more done with just a few words

as prompts. Prompts are a powerful way of

collaborating with AI. The right prompt can unlock far

more from these models. However, it can be daunting for

many of us to even know where to start.

Well, what if we could solve that for you?

What if AI could proactively offer you prompts?

Even better, what if these prompts were actually contextual

and changed based on what you are working on?

I am super excited to show you a preview of just that.

This is how we see the future of collaboration with AI coming to

life. Let’s switch to a live demo so I

can show you what I mean. Tony’s here to help with that.

Hey, Tony. » TONY: Hey, Aparna.

[Cheers and Applause]. »APARNA PAPPU: My niece, Meera,

and I are working on a spooky story

together for summer camp. We’ve already written a few

paragraphs, but now we’re stuck. Let’s get some help.

As you can see, we launched a side panel, something the team

fondly calls Sidekick. Sidekick instantly reads and

processes the document, and offers some really neat

suggestions, along with an open-prompt dialogue.

If we look closely, we can see some of the suggestions, like,

what happened to the golden seashell?

What are common mystery plot twists?

Let’s try the seashell option and see what it comes back with.

What’s happening behind the scenes is that we’ve provided

the document as context to the model, along with the suggested

prompt. Let’s see what we got.

The golden seashell was eaten by a giant squid that lives in the

cove? This is a good start.

Let’s insert these ideas as notes so we can continue our

little project. Now, one of the interesting

observations we have is that it’s actually easier to react to

something or perhaps use that to say, hmmm, I want to go

in a different direction. And this is exactly what AI can

help with. I see a new suggestion for

generating images. Let’s see what this does.

This story has a village, a golden seashell and some other

details. Instead of having to type all

that out, the model picks up these details from the document

and generates images. These are some cool pictures,

and I bet my niece will love these.

Let’s insert them into the doc

Thank you, Tony! [Cheers and Applause].

I’m going to walk you through some more examples, and this

will help you see how this powerful new contextual

collaboration is such a remarkable boost to productivity

and creativity. Say you are writing to your

neighbors about an upcoming potluck.

Now, as you can see, Sidekick has summarized what this

conversation is about. Last year, everyone brought

hummus. Who doesn’t love hummus!

But this year, you want a little more variety.

Let’s see what people signed up to bring.

Now, somewhere in this thread is a Google sheet, where you’ve

collected that information. You can get some help by typing,

“Write a note about the main dishes people are bringing.”

And let’s see what we get back. Awesome!

It found the right sheet and cited the source, in the “Found

in” section, giving you confidence that this is not made

up. Looks good.

You can insert it directly into your E-mail.

Let’s end with an example of how this can help you at work.

Say you are about to give an important presentation, and

you’ve been so focused on the content that you forgot to

prepare speaker notes. The presentation is in an hour.

Uh-oh. No need to panic.

Look at what one of the suggestions is: Create speaker

notes for each slide. [Cheers and Applause].

Let’s see what happens. What happened behind the scenes

here is that the presentation and other

relevant context is sent to the model to help create

these notes. Once you’ve reviewed them, you

can hit insert and edit the notes to convey what you

intended. So you can now deliver the

presentation without worrying about the notes.

As you can see, we’ve been having a ton of fun playing with

this. We can see the true potential of

AI as a collaborator, and will be bringing this experience to

Duet AI for Workspace. With that, I’ll hand it back to

Sundar. [Cheers and Applause].

SUNDAR: Thanks, Aparna. It’s exciting to see all the

innovation coming to Google Workspace.

As AI continues to improve rapidly, we are focused on

giving helpful features to our users.

And starting today, we are giving you a new way to preview

some of the experiences across Workspace and other products.

It’s called Labs. I say new, but Google has a long

history of previewing experiments through Labs, and, you know,

we’re bringing that back.

You can check it out at

Next up, we’re going to talk about Search.

Search has been our founding product from our earliest days.

And we’ve always approached it placing user trust above

everything else. To give you a sense of how we’re

bringing generative AI into Search, I’m going to invite

Cathy onto the stage. Cathy.

[Cheers and Applause]. »CATHY EDWARDS: Thanks, Sundar.

I’ve been working on Search for many years, and what inspires me

so much is how it continues to be an unsolved problem, and

that’s why I’m just so excited by the potential of bringing

generative AI in to Search. Let’s give it a whirl.

So let’s start with a search for what’s better for a family with

kids under three and a dog, Bryce Canyon or Arches.

Now, although this is the question you have, you probably

wouldn’t ask it in this way today.

You’d break it down into smaller ones, sift through information,

and then piece things together yourself.

Now, Search does the heavy lifting for you.

What you see here looks pretty different, so let me first give

you a quick tour. You’ll notice a new integrated

search results page so you can get even more out of a single

search. There’s an AI-powered snapshot

that quickly gives you the lay of the land on a topic.

And so here you can see that while both parks are

kid-friendly, only Bryce Canyon has more options for your furry

friend. If you want to dig deeper, there

are links included in the snapshot.

You can also click to expand your view.

And you’ll see how information is corroborated, so you can

check out more details and really explore the topic.

This new experience builds on Google’s ranking and safety

systems that we’ve been fine-tuning for decades.

Search will continue to be your jumping-off point to what makes

the Web so special, its diverse range of content, from

publishers to creators, businesses, and even people like

you and me. So you can check out

recommendations from experts like the National Park Service,

and learn from authentic, first-hand experiences like the

MOM Trotter blog. Because even in a world where AI

can provide insights, we know that people will always value

the input of other people, and a thriving Web is essential to

that.

[Cheers and Applause]. Thank you!

These new generative AI capabilities will make Search

smarter, and searching simpler. And as you’ve seen, this is

especially helpful when you need to make sense of something

complex, with multiple angles to explore.

You know, those times when even your question has questions.

So, for example, let’s say you’re searching for a good bike

for a five-mile commute with hills.

This can be a big purchase. So you want to do some research.

In the AI-powered snapshot, you’ll see important

considerations like motor and battery for taking on those

hills, and suspension for a comfortable ride.

Right below, you’ll see products that fit the bill, each

with images, reviews, helpful descriptions and current

pricing. This is built on Google’s

Shopping Graph, the world’s most comprehensive dataset of

constantly changing products, sellers, brands, reviews and

inventory out there, with over 35 billion listings.

In fact, there are 1.8 billion live updates to our Shopping

Graph every hour. So you can shop with confidence

in this new experience, knowing that you’ll get fresh, relevant

results. And for commercial queries like

this, we also know that ads can be especially helpful to connect

people with useful information and help businesses get

discovered online. They’re here, clearly labeled,

and we’re exploring different ways to integrate them as we

roll out new experiences in Search.

And now that you’ve done some research, you might want to

explore more. So right under the snapshot,

you’ll see the option to ask a follow-up question, or select a

suggested next step. Tapping any of these options

will bring you into our brand new conversational mode.

[Cheers and Applause]. In this case, maybe you want to

ask a follow-up about eBikes, so you look for one in your

favorite color, red. And without having to go back to

square one, Google Search understands your full intent,

and that you’re looking specifically for ebikes in red

that would be good for a five- mile commute with hills.

And even when you’re in this conversational mode, it’s an

integrated experience, so you can simply scroll to see other

search results. Now, maybe this eBike seems to

be a good fit for your commute. With just a click, you’re

able to see a variety of retailers that have it in stock,

and some that offer free delivery or returns.

You’ll also see current prices, including deals, and can

seamlessly go to a merchant’s site, check out, and turn your

attention to what really matters, getting ready to ride.

These new generative AI capabilities also unlock a whole

new category of experiences on Search.

It could help you create a clever name for your cycling

club, craft the perfect social post to show off your new

wheels, or even test your knowledge on bicycle hand

signals. These are thnings you may never

have thought to ask Search for before.

Shopping is just one example of where this can be helpful.

Let’s walk through another one in a live demo.

What do you say? [Cheers and Applause].

Yeah. So special shout-out to my

three-year-old daughter, who is obsessed with whales.

I wanted to teach her about whale songs, so let me go to the

Google app and ask, why do whales like to sing?

And so here I see a snapshot that organizes the Web

results and gets me to key things I want to know so I can

understand quickly that oh, they sing for a lot of different

reasons, like to communicate with other whales, but also to

find food. And I can click “see more” to

expand here as well. Now, if I was actually with my

daughter, and not on stage in front of thousands of people,

I’d be checking out some of these Web results right now.

They look pretty good. Now I’m thinking she would get a

kick out of seeing one up close, so let me ask, can I see whales

in California? And so the LLMs right now are

working behind the scenes to generate my snapshot,

distilling insights and perspectives from across the

Web. It looks like in northern

California, I can see humpbacks around this time of year.

That’s cool. I’ll have to plan to take her on

a trip soon. And again, I can see some really

great results from across the Web.

And if I want to refer to the results from my previous

question, I can just scroll right up.

Now, she’s got a birthday coming up, so I can follow up

with plush ones for kids under $40.

Again, the LLMs are organizing this information for me, and

this process will get faster over time.

These seem like some great options.

I think she’ll really like the second one.

She’s into orcas as well. Phew!

Live demos are always nerve-racking.

I’m really glad that one went whale!

[Cheers and Applause]. What you’ve seen today is just a

first look at how we’re experimenting with generative AI

in Search, and we’re excited to keep improving with your

feedback through our Search Labs Program.

This new Search generative experience, also known as SGE,

will be available in labs, along with some other experiments.

And they’ll be rolling out in the coming weeks.

If you’re in the U.S., you can join the wait list today by

tapping the labs icon in the latest versions of the Google

app or Chrome desktop. This new experience reflects the

beginning of a new chapter, and you can think of this evolution

as Search, super charged. Search has been at the core of

our timeless mission for 25 years.

And as we build for the future, we’re so excited for you to turn

to Google for things you never dreamed you could.

Here’s an early look at what’s to come for AI in Search.

[Music playing] » Hey!

I’mma make’em talk like whoa Move fast, move slow

Catch me on a roll Come say hello, 3-2-1 let’s go

Yes. Yes.

Yes! Come say hello, 3-2-1 let’s go

You got this, let’s go! » Is a hotdog a sandwich?

And the answer is… » Yes.

No. » Yes!

No! [Music playing]

I’mma make’em talk like whoa, whoa, whoa, whoa, whoa.

I’mma make’em talk like. I’mma make’em talk like whoa.

[Cheers and Applause]. »SUNDAR: Is a hotdog a

sandwich? I think it’s more like a taco

because the bread goes around it.

[Laughter]. That comes from the expert

viewpoint of a vegetarian. [Laughter].

Thanks, Cathy. It’s so exciting to see how we

are evolving Search and look forward to building it with you.

So far today, we have shared how AI can help unlock creativity,

productivity and knowledge. As you can see, AI is not only a

powerful enabler, it’s also a big platform shift.

Every business and organization is thinking about how to drive

transformation. That’s why we are focused on

making it easy and scalable for others to innovate with AI.

That means providing the most advanced computing

infrastructure, including state-of-the-art TPUs and GPUs,

and expanding access to Google’s latest foundation models that

have been rigorously tested in our own products.

We are also working to provide world-class tooling so customers

can train, fine-tune and run their own models, with

enterprise-grade safety, security and privacy.

To tell you more about how we are doing this with Google

Cloud, please welcome Thomas. [Applause].

THOMAS KURIAN: All of the AI investments you’ve heard about

today are also coming to businesses.

So whether you’re an individual developer or a full-scale

enterprise, Google is using the power of AI to transform the way

you work. There are already thousands of

companies using our generative AI platform to create amazing

content, to synthesize and organize information, to

automate processes, and to build incredible customer experiences.

And yes, each and every one of you can, too.

There are three ways Google Cloud can help you take

advantage of the massive opportunity in front of you.

First, you can quickly build generative applications using

our AI platform, Vertex AI. With Vertex you can access

foundation models for chat, text and image.

You just select the model you want to use, create prompts to

tune the model, and you can even fine-tune the model’s weights on

your own dedicated compute clusters.
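The flow Thomas describes for Vertex AI, pick a hosted foundation model and send it a prompt, can be sketched with a few lines of Python. This is a minimal sketch under stated assumptions: the `TextGenerationModel` class, the `text-bison@001` model name, and the `predict` parameters reflect the later public Vertex AI SDK and are not part of the keynote itself.

```python
# Minimal sketch of calling a Vertex AI text foundation model.
# Assumes `pip install google-cloud-aiplatform` plus GCP credentials;
# model name and SDK calls are assumptions based on the public SDK.

def build_prompt(task: str, audience: str = "a general audience") -> str:
    """Pure helper: turn a short task description into a fuller prompt."""
    return f"You are writing for {audience}. {task.strip()}"

def generate(task: str) -> str:
    """Sends the prompt to a hosted model (requires cloud credentials)."""
    from vertexai.language_models import TextGenerationModel
    model = TextGenerationModel.from_pretrained("text-bison@001")
    response = model.predict(build_prompt(task),
                             temperature=0.2,
                             max_output_tokens=512)
    return response.text
```

Fine-tuning the model's weights on your own data, as mentioned above, would happen as a separate tuning job before `from_pretrained` points at your tuned model.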

To help you retrieve fresh and factual information from your

company’s databases, your corporate internet, your Web

site and enterprise applications, we offer

Enterprise Search. Our AI platform is so compelling

for businesses because it guarantees the privacy of your

data. With both Vertex and Enterprise

Search, you have sole control of your data and the costs of using

generative AI models. In other words, your data is

your data, and no one else’s. You can also choose the best

model for your specific needs across many sizes that have been

optimized for cost, latency and quality.

Many leading companies are using our generative AI technologies

to build super-cool applications, and we’ve been

blown away by what they’re doing.

Let’s hear from a few of them. »AMJAD MASAD: The unique thing

about Google Cloud is the expansive offering.

TODD PENEGOR: The Google partnership has taught us to

lean in, to iterate, to test and learn and have the courage to

fail fast where we need to. »STEVE JARRETT: But also Google

is a really AI centric company, and so there’s a lot for us to

learn directly from the engineering team.

BERND LEUKERT: Now with generative AI, we can have a

much smarter conversation with our customers.

MEL PERKINS: We have been really enjoying taking the

latest and greatest technology and making that accessible to

our entire community. »KAMRAN ZARGAHI: Getting early

access to Vertex APIs opens a lot of doors for us to be most

efficient and productive in the way we create experiences for

Uber customers. »AMJAD MASAD: The act of making

software is suddenly opening up to everyone.

Now you can talk to the AI on the Replit app and tell it,

“Make me a workout program.” And with one click, we can

deploy it to a Google Cloud VM and you have an app that you

just talked into existence. »MEL PERKINS: We have an

extraordinarily exciting feature in the pipeline.

It’s called Magic Video, and it enables you to take your videos

and images, and with just a couple of clicks, turn that into

a cohesive story. It is powered by Google’s PaLM

technology, and it truly empowers everyone to be able to

create a video with absolute ease.

TODD PENEGOR: Folks come to a Wendy’s, and a lot of times they

use some of our acronyms. The Junior Bacon Cheeseburger,

they’ll come in and, “Give me a JBC.”

We need to understand what that really means, and voice AI can

help make sure that order is accurate every single time.

BERND LEUKERT: Generative AI

business processes Deutsche Bank is running.

TODD PENEGOR: The partnership with Google has inspired us to

leverage technology, to truly transform the whole restaurant

experience. »BERND LEUKERT: There are no

limitations. »AMJAD MASAD: There’s no other

way to describe it. Google’s just living in the

future. [Applause].

THOMAS KURIAN: We’re also doing this with partners like

Character AI. We provide Character with the

world’s most performant and cost-efficient infrastructure

for training and serving their models.

By combining its own AI capabilities with those of

Google Cloud, consumers can create their own deeply

personalized characters and interact with them.

We’re also partnering with Salesforce to integrate Google

Cloud’s AI models and Big Query with their data cloud and

Einstein, their AI-infused CRM assistant.

In fact, we are working with many other incredible partners,

including consultancies, software as a service leaders,

consumer internet companies to build remarkable experiences

with our AI technologies. In addition to PaLM 2, we are

excited to introduce three new models in Vertex, including

Imagen, which powers image

generation, editing, and customization from text inputs.

Codey for code completion and generation, which you can train

on your own code base to help you build applications faster,

and Chirp, a universal speech model which brings

speech-to-text accuracy for over 300 languages.

We’re also introducing Reinforcement Learning From

Human Feedback into Vertex AI. You can fine-tune pre-trained

models by incorporating human feedback to further improve the

model’s results. You can also fine-tune a model

on domain or industry-specific data, as we have with Sec-PaLM

and Med-PaLM, so they become even more powerful.

All of these features are now in preview, and I encourage each

and every one of you to try them.

[Cheers and Applause]. The second way we’re helping you

take advantage of this opportunity is by introducing

Duet AI for Google Cloud. Earlier, Aparna told you about

Duet AI for Google Workspace and how it is an always-on AI

collaborator to help people get things done.

Well, the same thing is true with Duet AI for Google Cloud,

which serves as an AI expert pair programmer.

Duet uses generative AI to provide developers assistance

wherever you need it within the IDE, the Cloud console, or

directly within chat. It can provide contextual code

completion, offer suggestions tuned to your code base,

and generate entire functions in real time.

It can even assist you with code reviews and code inspection.

Chen will show you more in the developer keynote.

The third way we are helping you seize this moment is by building

all of these capabilities on our AI-optimized infrastructure.

This infrastructure makes large-scale training workloads

up to 80 percent faster, and up to 50 percent cheaper compared

to any alternatives out there. Look, when you nearly double

performance – [Applause].

When you nearly double performance for less than half

the cost, amazing things can happen.

Today, we are excited to announce a new addition to this

infrastructure family, the A3 Virtual Machines based on

NVIDIA’s H100 GPUs. We provide the widest choice of

compute options for leading AI companies like Anthropic and

Midjourney to build their future on Google Cloud.

And yes, there’s so much more to come.

Next, Josh is here to show you exactly how we’re making it easy

and scalable for every developer to innovate with AI and PaLM 2.

[Cheers and Applause]. »JOSH WOODWARD: Thanks, Thomas.

Our work is enabling businesses and it’s also empowering

developers. PaLM 2, our most capable next

generation language model that Sundar talked about, powers the

PaLM API. Since March, we’ve been running

a private preview with our PaLM API, and it’s been amazing to

see how quickly developers have used it in their applications.

Like CHAPTR, who are generating stories so you can choose your

own adventure, forever changing storytime.

Or GameOn Technology, a company that makes chat apps for sports

fans and retail brands to connect with their audiences.

And there’s also Wendy’s. They’re using the PaLM API to

help customers place that correct order for the junior

bacon they talked about in their talk to menu feature.

But I’m most excited by the response we’ve gotten from

the developer tools community. Developers want choice when it

comes to language models, and we’re working with

LangChain, Chroma, and many more to add PaLM API support.

We’ve also integrated it into Google developer tools like

Firebase and Colab. [Cheers and Applause].
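The Wendy's "talk to menu" use case mentioned earlier gives a concrete shape to a PaLM API call: expand a customer's shorthand, then hand the text to the model. This is an illustrative sketch, not Wendy's actual integration; the glossary contents and the `palm.generate_text` call pattern follow the public google-generativeai client and are assumptions.

```python
# Sketch of a "talk to menu" flow with the PaLM API: expand known
# acronyms locally, then ask the model to extract the menu items.
# Glossary and API usage are illustrative assumptions.

MENU_GLOSSARY = {
    "JBC": "Junior Bacon Cheeseburger",  # example from the keynote
}

def expand_order(order: str) -> str:
    """Pure helper: replace known acronyms with full menu item names.
    (Punctuation attached to a matched acronym is dropped in this sketch.)"""
    words = [MENU_GLOSSARY.get(w.strip(",.").upper(), w) for w in order.split()]
    return " ".join(words)

def parse_order_with_palm(order: str) -> str:
    """Sends the expanded order to the PaLM API (needs an API key)."""
    import google.generativeai as palm  # pip install google-generativeai
    palm.configure(api_key="YOUR_API_KEY")
    result = palm.generate_text(
        model="models/text-bison-001",
        prompt=f"Extract the menu items from this order: {expand_order(order)}",
    )
    return result.result
```

Doing the glossary expansion before the model call keeps domain shorthand from being misread, while the model handles the free-form rest of the order.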

You can hear a lot more about the PaLM API in the Developer

Keynote, and sign up today. Now, to show you just how

powerful the PaLM API is, I want to share one concept that five

engineers at Google put together over the last few weeks.

The idea is called Project Tailwind, and we think of it as

an AI-first Notebook that helps you learn faster.

Like a real notebook, your notes and your sources power Tailwind.

How it works is you can simply pick the files from Google Drive

and it effectively creates a personalized and private AI

model that has expertise in the information that you give it.

We’ve been developing this idea with authors like Steven

Johnson, and testing it at universities like Arizona State

and the University of Oklahoma, where I went to school.

Do you want to see how it works? Let’s do a live demo.

Now, imagine that I’m a student taking a computer history class.

I open up tailwind, and I can quickly see all my different

notes, assignments and reading. I can insert them and what will

happen when Tailwind loads up is you can see my different notes

and articles on the side. Here they are in the middle, and

it instantly creates a study guide on the right to give me

bearings. You can see it’s pulling out key

concepts and questions, grounded in the materials that I’ve given

it. Now, I can come over here and

quickly change it to go across all the different sources, and

type something like, create glossary for hopper.

And what’s going to happen behind the scenes is it will

automatically compile a glossary associated with all the

different notes and articles relating to Grace Hopper, the

computer science history pioneer.

Look at this. FLOW-MATIC, COBOL, Compiler, all

created based on my notes. Now, let’s try one more.

I’m going to try something else called different viewpoints on

Dynabook. So the Dynabook, this was a

concept from Alan Kay. Again, Tailwind, going out,

finding all the different things.

You can see how quickly it comes back.

There it is. And what’s interesting here is

it’s helping me think through the concepts so it’s giving me

different viewpoints. It was a visionary product.

It was a missed opportunity. But my favorite part is it shows

its work. You can see the citations here.

When I hover over, here’s something from my class notes.

Here’s something from an article the teacher assigned.

It’s all right here, grounded in my sources.
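The "grounded in my sources" behavior, where every answer can point back to a specific note or article, can be illustrated with a toy retrieval step. Real Tailwind presumably uses embedding-based retrieval; the plain word-overlap scoring below is an illustrative stand-in, and all the names are made up.

```python
# Toy sketch of source grounding: rank the user's own sources by word
# overlap with a question, so an answer can cite which note it came from.
from collections import Counter

def score(question: str, snippet: str) -> float:
    """Word-overlap score, lightly length-normalized."""
    q = Counter(question.lower().split())
    s = Counter(snippet.lower().split())
    overlap = sum((q & s).values())
    return overlap / ((len(snippet.split()) ** 0.5) or 1.0)

def top_sources(question: str, sources: dict, k: int = 2) -> list:
    """Returns the names of the k sources most relevant to the question."""
    ranked = sorted(sources.items(),
                    key=lambda item: score(question, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

A generation step would then answer only from the top-ranked snippets and attach their names as citations, much like the "Found in" links shown on stage.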

[Cheers and Applause]. Now, Project Tailwind is still

in its early days, but we’ve had so much fun making this

prototype, and we realized it’s not just for students.

It’s helpful for anyone synthesizing information from

many different sources that you choose.

Like writers researching an article, or analysts going

through earning calls, or even lawyers preparing for a case.

Imagine collaborating with an AI that’s grounded in what you’ve

read and all your notes. We want to make it available for

you to try out.

[Cheers and Applause]. There’s a lot more you can do

with PaLM 2, and we can’t wait to see what you build using the

PaLM API. Because generative AI is

changing what it means to develop new products.

At Google, we offer the best ML infrastructure, with powerful

models including those in Vertex, and APIs and tools to

quickly generate your own applications.

And building bold AI also requires a responsible approach,

so let me hand it over to James to share more.

Thanks. [Cheers and Applause].

JAMES MANYIKA: Hi, everyone. I’m James.

In addition to Research, I lead a new area at Google called

Technology and Society. Growing up in Zimbabwe, I could

not have imagined all the amazing and ground-breaking

innovations that have been presented on this stage today.

And while I feel it’s important to celebrate the incredible

progress in AI and the immense potential it has for people in

society everywhere, we must also acknowledge that it’s an

emerging technology that is still being developed, and

there’s still so much more to do.

Earlier, you heard Sundar say that our approach to AI must be

both bold and responsible. While there is a natural tension

between the two, we believe it’s not only possible, but, in fact,

critical to embrace that tension productively.

The only way to be truly bold in the long-term is to be

responsible from the start. Our field-defining research is

helping scientists make bold advances in many scientific

fields, including medical breakthroughs.

Take, for example, Google DeepMind’s AlphaFold program,

which can accurately predict the 3D shapes of 200 million

proteins. That’s nearly all the cataloged

proteins known to science. AlphaFold gave us the equivalent

of nearly 400 million years of progress in just weeks.

[Applause]. So far, more than one million

researchers around the world have used AlphaFold’s

predictions, including Feng Zhang’s pioneering lab at the

Broad Institute of MIT and Harvard.

In fact, in March this year, Zhang and his colleagues at MIT

announced they used AlphaFold to develop a novel molecular

syringe which could deliver drugs to help improve the

effectiveness of treatments for diseases like cancer.

[Cheers and Applause]. While it’s exhilarating to see

such bold and beneficial breakthroughs, AI also has the

potential to worsen existing societal challenges like unfair

bias, as well as pose new challenges as it becomes more

advanced, and new uses emerge. That’s why we believe it’s

imperative to take a responsible approach to AI.

This work centers around our AI Principles that we first

established in 2018. These principles guide product

development, and they help us assess every AI application.

They prompt questions like, “will it be socially

beneficial?” Or, “could it lead to harm in

any way?” One area that is top of mind for

us is misinformation. Generative AI makes it easier

than ever to create new content. But it also raises additional

questions about its trustworthiness.

This is why we’re developing and providing people with tools to

evaluate online information. For example, have you come

across a photo on a Web site, or one shared by a friend, with

very little context, like this one of the moon landing, and

found yourself wondering, is this reliable?

I have. And I’m sure many of you have as

well. In the coming months, we are

adding two new ways for people to evaluate images.

First, with our “About this Image” tool in Google Search.

You will be able to see important information such as

when and where similar images may have first appeared, where

else the image has been seen online, including news, fact-checking

and social sites, all providing you with helpful

context to determine if it’s reliable.

Later this year, you’ll also be able to use it if you search for

an image or screenshot using Google Lens, or when you’re on

websites in Chrome. And as we begin to roll out

generative image capabilities, like Sundar mentioned, we will

ensure that every one of our AI-generated images has

metadata, a markup in the original file, to give you

context if you come across it outside of our platforms.

Not only that, Creators and Publishers will be able to add

similar metadata, so you’ll be able to see a label in images in

Google Search, marking them as AI-generated.
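Google didn’t spell out the markup on stage, but the IPTC photo-metadata standard already defines a term for exactly this purpose. As a minimal sketch, assuming metadata is handled as a flat dictionary of XMP properties (the property name and URI follow the IPTC spec; the helper functions are illustrative, not Google’s implementation):

```python
# IPTC's DigitalSourceType vocabulary includes a "trainedAlgorithmicMedia"
# term used to mark images created by generative models.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def mark_ai_generated(metadata: dict) -> dict:
    """Return a copy of image metadata labeled as AI-generated."""
    out = dict(metadata)
    out["Iptc4xmpExt:DigitalSourceType"] = TRAINED_ALGORITHMIC_MEDIA
    return out

def is_ai_generated(metadata: dict) -> bool:
    """Check whether metadata carries the AI-generated label."""
    return metadata.get("Iptc4xmpExt:DigitalSourceType") == TRAINED_ALGORITHMIC_MEDIA
```

A consumer like an image search tool could then surface a label whenever `is_ai_generated` returns true for a file’s embedded metadata.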

[Applause]. As we apply our AI principles,

we also start to see potential tensions when it comes to being

bold and responsible. Here is an example.

Universal Translator is an experimental AI video dubbing

service that helps experts translate a speaker’s voice

while also matching their lip movements.

Let me show you how it works with an online college course,

created in partnership with Arizona State University.

What many college students don’t realize is that knowing

when to ask for help, and then following through on using

helpful resources is actually a hallmark of becoming a

productive adult. [Foreign language].

[Cheers and Applause]. »JAMES MANYIKA: We use next

generation translation models to translate what the speaker is

saying, models to replicate the style and the tone, and then

match the speaker’s lip

movements, and then we bring it all together.

This is an enormous step forward for learning comprehension, and

we’re seeing promising results with course completion rates.

But there’s an inherent tension here.

You can see how this can be incredibly beneficial, but some

of the same underlying technology could be misused by

bad actors to create deep fakes. So we’ve built this service with

guardrails to help prevent misuse, and we make it

accessible only to authorized partners.

[Cheers and Applause]. And, as Sundar mentioned, soon

we’ll be integrating our new innovations in watermarking into

our latest generative models to also help with the challenge of

misinformation. Our AI principles also help

guide us on what not to do. For instance, years ago, we were

the first major company to decide not to make a

general-purpose facial recognition API commercially

available. We felt there weren’t adequate

safeguards in place. Another way we live up to our AI

principles is with innovations to tackle challenges as they

emerge, like reducing the risk of problematic outputs that may

be generated by our models. We are one of the first in the

industry to develop and launch automated adversarial testing

using large language model technology.

We do this for queries like this, to help us uncover and

reduce inaccurate outputs, like the one on the left, and make

them better, like the one on the right.
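Google’s actual testing system isn’t public, but the loop described above can be sketched in a few lines: generate adversarial variants of a query, run them through the model under test, score the outputs automatically, and surface only the failures for human review. Everything here (function names, the mutate/score callables) is our own illustrative scaffolding:

```python
from typing import Callable, Iterable, List, Tuple

def adversarial_sweep(
    seed_queries: Iterable[str],
    mutate: Callable[[str], List[str]],     # e.g. an LLM proposing tricky rephrasings
    model: Callable[[str], str],            # the system under test
    is_problematic: Callable[[str], bool],  # automated scorer for bad outputs
) -> List[Tuple[str, str]]:
    """Return (query, output) pairs flagged for human safety review."""
    flagged = []
    for seed in seed_queries:
        # Test the original query plus every adversarial variant of it.
        for query in [seed, *mutate(seed)]:
            output = model(query)
            if is_problematic(output):
                flagged.append((query, output))
    return flagged
```

The point of the design is the division of labor: cheap automated sweeps cover breadth, so human safety experts only see the flagged, genuinely difficult cases.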

We’re doing this at a scale that’s never been done before at

Google, significantly improving the speed, quality and coverage

of testing, allowing safety experts to focus on the most

difficult cases. And we’re sharing these

innovations with others. For example, our “Perspective

API”, originally created to help publishers mitigate toxicity, is

now being used in large language models.

Academic researchers have used our perspective API to create an

industry evaluation standard. And today, all significant large

language models, including those from OpenAI and Anthropic,

incorporate this standard to evaluate toxicity generated by

their own models. [Applause].
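Perspective exposes a simple REST endpoint, so scoring a model’s output for toxicity amounts to one POST per text sample. A minimal sketch follows; the endpoint URL and attribute names match the public Perspective API documentation, while the helper functions themselves are our own illustration, not Google’s evaluation harness:

```python
# Perspective API comments:analyze endpoint (requires an API key).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text, attributes=("TOXICITY",)):
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    """Extract the 0..1 summary score for one attribute from a response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]
```

An actual call would POST `build_analyze_request(generated_text)` as JSON to `ANALYZE_URL` with the API key as a query parameter, then read `summary_score` off the response; running that over a benchmark of generations is the shape of the toxicity evaluation described above.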

Building AI responsibly must be a collective effort involving

researchers, social scientists, industry experts, governments

and everyday people, as well as creators and publishers.

Everyone benefits from a vibrant content ecosystem.

Today and in the future. That’s why we’re getting

feedback and will be working with the Web community on ways

to give publishers choice and control over their Web content.

It’s such an exciting time. There’s so much we can

accomplish, and so much we must get right together.

We look forward to working with all of you.

And now I’ll hand it off to Sameer, who will speak to you

about all the exciting developments we’re bringing to

Android. Thank you.

[Cheers and Applause]. »SAMEER: Hi, everyone!

It’s great to be back at Google I/O.

As you’ve heard today, our bold and responsible approach to AI

can unlock people’s creativity and potential.

But how can all this helpfulness reach as many people as

possible? At Google, our computing

platforms and hardware products have been integral to that

mission. From the beginning of Android,

we believed that an open OS would enable a whole ecosystem

and bring smartphones to everyone.

And as we all add more devices to our lives, like tablets, TVs,

cars and more, this openness creates the freedom to choose

the devices that are best for you.

With more than three billion Android devices, we’ve now seen

the benefits of using AI to improve experiences at scale.

For example, this past year, Android used AI models to

protect users from more than 100 billion suspected spam messages

and calls. [Cheers and Applause].

We can all agree, that’s pretty useful!

There are so many opportunities where AI can just make things

better. Today, we’ll talk about two big

ways Android is bringing that benefit of computing to

everyone. First, continuing to connect you

to the most complete ecosystem of devices, where everything

works better together. And second, using AI to make

things you love about Android even better, starting with

customization and expression. Let’s begin by talking about

Android’s ecosystem of devices, starting with two of the most

important: Tablets and watches. Over the last two years, we’ve

redesigned the experience on large screens, including tablets

and foldables. We introduced a new system for

multi-tasking that makes it so much easier to take advantage of

all that extra screen real estate and seamlessly move

between apps. We’ve made huge investments to

optimize more than 50 Google apps, including Gmail, Photos

and Meet. And we’re working closely with

partners such as Minecraft, Spotify and Disney+ to build

beautiful experiences that feel intuitive on larger screens.

People are falling in love with Android tablets and there are

more great devices to pick from than ever.

Stay tuned for our hardware announcements, where you just

might see some of the awesome new features we’re building for

tablets in action. It’s really exciting to see the

– [Cheers and Applause].

It’s really exciting to see the momentum in smart watches as

well. WearOS is now the

fastest-growing watch platform, just two years after launching

WearOS 3 with Samsung. A top ask from fans has been for

more native messaging apps on the watch.

I’m excited to share that WhatsApp is bringing their

first-ever watch app to Wear this summer.

[Cheers and Applause]. I’m really enjoying using

WhatsApp on my wrist. I can start a new conversation,

reply to messages by voice and even take calls.

I can’t wait for you to try it. Our partnership on WearOS with

Samsung has been amazing, and I’m excited about our new

Android collaboration on immersive XR.

We’ll share more later this year.

Now, we all know that to get the best experience, all these

devices need to work seamlessly together.

It’s got to be simple. That’s why we built Fast Pair,

which lets you easily connect more than 300 headphones.

And it’s why we have Nearby Share to easily move files

between your phone, tablet, or Windows or ChromeOS computer.

And Cast, to make streaming video and audio to your devices

ultra simple, with support for over 3,000 apps.

It’s great to have all of our devices connected, but if you’re

anything like me, it can be hard to keep track of all the stuff.

Just ask my family. I misplace my earbuds at least

three times a day, which is why we’re launching a major update

to our Find My Device experience to support a wide range of

devices in your life, including headphones, tablets and more.

It’s powered by a network of billions of Android devices

around the world, so if you leave your earbuds at the gym,

other nearby Android devices can help you locate them.

And for other important things in your life, like your bicycle

or suitcase, Tile, Chipolo, and others, will have tracker tags

that work with the Find My Device Network as well.

[Cheers and Applause]. Now, we took some time to really

get this right, because protecting your privacy and

safety is vital. From the start, we designed the

network in a privacy-preserving way, where location information

is encrypted. No one else can tell where your

devices are located, not even Google.

This is also why we are introducing Unknown Tracker

Alerts. Your phone will tell you if an

unrecognized tracking tag is moving with you, and help you

locate it. [Cheers and Applause].

It’s important that these warnings work on your Android

phone, but on other types of phones as well.

That’s why last week, we published a new industry standard

with Apple, outlining how unknown tracker alerts will work

across all smartphones. [Cheers and Applause].

Both the new Find My Device experience and Unknown Tracker

Alerts are coming later this summer.

[Applause]. Now, we’ve talked a lot about

connecting devices, but Android is also about connecting people.

After all, phones were created for us to communicate with our

friends and family. When you are texting in a group

chat, you shouldn’t have to worry about whether everyone is

using the same type of phone. Sending high-quality images and

video, getting typing notifications and end-to-end

encryption should all just work. That’s why we’ve worked with our

partners on upgrading old SMS and MMS technology to a modern

standard called RCS. That makes all of this possible,

and there are now over 800 million people with RCS.

On our way to over a billion by the end of the year.

We hope every mobile operating system – [Laughter] – gets the

message and adopts RCS. [Cheers and Applause].

So we can all hang out in the group chat together, no matter

what device we’re using. Whether it’s connecting with

your loved ones or connecting all of your devices, Android’s

complete ecosystem makes it easy.

Another thing people love about Android is the ability to

customize their devices and express themselves.

Here’s Dave to tell you how we are taking this to the next

level with generative AI.

DAVE: Thanks, Sameer. And hello, everyone.

With Google’s advances in generative AI, your phone can

feel even more personal. So let me show you what this

looks like. To start, messages and

conversation can be so much more expressive, fun and playful with

Magic Compose. It’s a new feature coming to

Google Messages powered by generative AI that helps you add

that extra spark of personality to your conversation.

Just type your message like you normally would, and choose how

you want to sound. Magic Compose will do the

rest. So your messages give off more

positivity, more rhymes, more professionalism.

Or if you want, in the style of a certain playwright.

To try or not to try this feature, that is the question.
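Google hasn’t published how Magic Compose works internally. Purely as an illustration, a style-rewrite feature of this kind can be sketched as a prompt template handed to any text-generation model; the style names and instruction wording below are invented for the example:

```python
# Hypothetical style presets for a "rewrite my message" feature.
STYLES = {
    "positive": "Rewrite the message with an upbeat, friendly tone.",
    "rhyming": "Rewrite the message so that it rhymes.",
    "professional": "Rewrite the message in a formal, professional tone.",
    "playwright": "Rewrite the message in the style of Shakespeare.",
}

def compose_prompt(message: str, style: str) -> str:
    """Build a rewrite instruction to hand to a text-generation model."""
    if style not in STYLES:
        raise ValueError(f"unknown style: {style!r}")
    return f"{STYLES[style]}\n\nMessage: {message}\n\nRewritten message:"
```

The user keeps typing normally; only the chosen style changes the instruction prepended to their text before the model rewrites it.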

Now, we also have new personalizations coming to the

OS layer. At Google I/O two years ago, we

introduced Material You. It’s a design system which

combines user inspiration with dynamic color science for a

fully personalized experience. We’re continuing to expand on

this in Android 14 with all new customization options coming to

your lockscreen. Now, I can add my own

personalized style to the lockscreen clock so that it

looks just the way I want. And what’s more, with the new

customizable lockscreen shortcuts, I can instantly jump

in to my most frequent activities.

Of course, what really makes your lockscreen and home screen

yours is the wallpaper, and it’s the first thing that many of us

set when we get a new phone. Now, emojis are such a fun and

simple way of expressing yourself, so we thought, wouldn’t

it be cool to bring them to your wallpaper?

With Emoji wallpapers, you can choose your favorite combination

of emojis, pick the perfect pattern, and find just the right

color to bring it all together. So let’s take a look.

And I’m not going to use the laptops.

I’m going to use a phone. All right.

So let’s see. I’m going to go into the

wallpaper picker, and I’m going to tap on the new option for

emojis. And I’m feeling in a kind of, I

don’t know, zany mood, with all you people looking at me, so I’m

going to pick this guy and this guy, and let’s see, who else is

in here? This one looks pretty cool.

I like the ape fit one, and obviously that one.

And somebody said there was a duck on stage earlier, so let’s

go find a duck. Hello, duck.

Where’s the duck. Anyone see a duck?

Where’s the duck? There’s the duck.

All right. There it is.

We got some ducks. Okay.

Cool. And then pattern-wise, we got a

bunch of different patterns you can pick.

I’m going to pick mosaic. That’s my favorite.

I’m going to play with the Zoom. Let’s see.

We’ll get this just right. Okay.

I got enough ducks in there. Okay.

Cool. And then colors, let’s see, ooh,

let’s go with the more muted one.

That one. No, that looks good.

I like that one. All right.

Select that, set the wallpaper, and then I go, boom!

Looks pretty cool, huh? [Cheers and Applause].

And the little emojis, they react when you tap them, which I

find unusually satisfying. How much time have I got?

Okay. Now, of course, so many of us

like to use a favorite photo for our wallpaper, and so with the

new cinematic wallpaper feature you can create a stunning 3D

image from any regular photo and then use it as your wallpaper.

So let’s take a look. So this time I’m going to go

into my photos. And I really like this photo of

my daughter, so let me select that.

And you’ll notice there’s a sparkle icon at the top.

So if I tap that, I then get an option for cinematic wallpaper.

So let me tap that, and wait for it.

Boom. Now, under the hood, we’re using

an on-device convolutional neural network to estimate

depth, and a generative adversarial network for

inpainting as the background moves.

The result is a beautiful cinematic 3D photo.

So now let me set the wallpaper. And then I’m going to return

home. And check out the parallax

effect as I tilt the device. It literally jumps off the

screen. [Cheers and Applause].
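The real pipeline pairs on-device depth estimation with generative inpainting, as described above. As a toy sketch of just the final parallax step, the idea is to warp each pixel horizontally by an offset scaled by its depth, so nearer pixels move more as the device tilts (here, holes that a real implementation would inpaint are simply filled by clamping at the image edge):

```python
import numpy as np

def parallax_shift(image: np.ndarray, depth: np.ndarray, tilt: float,
                   max_shift: int = 8) -> np.ndarray:
    """Shift pixels by a depth-scaled offset to fake a parallax effect.

    image: (H, W, 3) array; depth: (H, W) in [0, 1], 1 = nearest;
    tilt: device tilt in [-1, 1]. Nearer pixels move farther, which
    is what makes the subject appear to pop off the screen.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Per-pixel horizontal offset, larger for nearer (higher-depth) pixels.
        offset = np.round(tilt * max_shift * depth[y]).astype(int)
        src = np.clip(cols - offset, 0, w - 1)  # clamp instead of inpainting
        out[y] = image[y, src]
    return out
```

Re-rendering this warp as the tilt sensor updates is what produces the moving-3D impression from a single still photo and its depth map.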

Both Cinematic Wallpapers and Emoji Wallpapers are coming

first to Pixel devices next month.

[Cheers and Applause]. So let’s say you don’t have

the perfect wallpaper photo handy, or you just want to have

fun and create something new. With our new Generative AI

wallpapers, you choose what inspires you, and we create a

beautiful wallpaper to fit your vision.

So let’s take a look. So this time, I’m going to go

and select create a wallpaper with AI.

And I like classic art, so let me tap that.

Now, you’ll notice at the bottom we use structured prompts to

make it easier to create. So for example, I can pick –

what am I going to do? City by the bay in a

post-impressionist style. Cool.

And I type – tap create wallpaper.

Nice. Now, behind the scenes, we’re

using Google’s text-to-image diffusion models to generate

completely new and original wallpapers.

And I can swipe through and see different options that it’s

created. And some of these look really

cool, right? [Applause].

So let me pick this one. I like this one.

So I’ll select that. Set the wallpaper.

And then return home. Cool.

So now, out of the billions of Android phones in the world, no

other phone will be quite like mine.

And thanks to Material You, you can see the system’s color

palette automatically adapts to match the wallpaper I created.
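Material You actually derives its tonal palettes in a perceptual color space, which isn’t reproduced here; but the first step, picking a seed color from the wallpaper, can be sketched crudely as finding the most common coarsely-quantized pixel color. The function below is our own simplified illustration, not Google’s algorithm:

```python
import numpy as np

def seed_color(image: np.ndarray, levels: int = 8) -> tuple:
    """Pick a crude palette 'seed': the most common quantized RGB color.

    image: (H, W, 3) uint8 array. Quantizing to a few levels per channel
    groups similar pixels so one dominant color wins the count.
    """
    step = 256 // levels
    q = (image // step) * step              # snap each channel to a coarse grid
    flat = q.reshape(-1, 3)
    colors, counts = np.unique(flat, axis=0, return_counts=True)
    r, g, b = colors[counts.argmax()]       # most frequent quantized color
    return int(r), int(g), int(b)
```

A real dynamic-color system would then expand this seed into light and dark tonal ramps for buttons, backgrounds and accents across the OS.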

Generative AI Wallpapers will be coming this fall.

[Cheers and Applause]. So from a thriving ecosystem of

devices to AI-powered expression, there’s so much

going on right now in Android. Okay.

Rick is up next to show you how this Android innovation is

coming to life in the Pixel family of devices.

Thank you. [Cheers and Applause].

RICK OSTERLOH: The pace of AI innovation over the past year

has been astounding. As you heard Sundar talk about

earlier, new advances are transforming everything from

creativity and productivity to knowledge and learning.

Now, let’s talk about what that innovation means for Pixel,

which has been leading the way in AI-driven hardware

experiences for years. Now, from the beginning, Pixel

was conceived as an AI-first mobile computer, bringing

together all the amazing breakthroughs across the

company, and putting them in to a Google device you can hold in

your hand. Other phones have AI features,

but Pixel is the only phone with AI at the center.

And I mean that literally. The Google Tensor G2 chip is

custom designed to put Google’s leading-edge AI research to work

in our Pixel devices. By combining Tensor’s on-device

intelligence with Google’s AI in the cloud, Pixel delivers truly

personal AI. Your device adapts to your own

needs and preferences, and anticipates how it can help you

save time and get more done. This Personal AI enables all

those helpful experiences that Pixel is known for that aren’t

available on any other mobile device.

Like Pixel Call Assist, which

helps you avoid long hold times, navigate phone tree menus,

ignore the calls you don’t want and get better sound quality on

the calls you do want. [Laughter].

Personal AI also enables helpful Pixel Speech capabilities.

On-device machine learning translates different languages

for you, transcribes conversations in real time, and

understands how you talk and type.

And you’re protected with Pixel Safe, a collection of features

that keep you safe online and in the real world.

And of course, there’s Pixel Camera –

[Cheers and Applause]. It understands faces,

expressions and skin tones to better depict you and the people

you care about, so your photos will always look amazing.

We’re also constantly working to make Pixel Camera more inclusive

and more accessible, with features like Real Tone and

Guided Frame. [Cheers and Applause].

Pixel experiences continue to be completely unique in mobile

computing, and that’s because Pixel is the only phone

engineered end-to-end by Google, and the only phone that combines

Google Tensor, Android and AI. [Cheers and Applause].

With this combination of hardware and software, Pixel

lets you experience all those incredible new AI-powered

features you saw today in one place.

For example, the new Magic Editor in Google Photos that

Sundar showed you will be available for early access to

select Pixel phones later this year, opening up a whole new

avenue of creativity with your photos.

Dave just showed you how Android is adding depth to how you can

express yourself with new Generative AI wallpapers, and

across Search, Workspace and Bard, new features powered by

large language models can spark your imagination, make big tasks

more manageable, and help you find better answers to everyday

questions, all from your Pixel device.

We have so many more exciting developments in this space.

And we can’t wait to show you more in the coming months.

Now, it’s probably no surprise that as AI keeps getting more

and more helpful, our Pixel portfolio keeps growing in

popularity. Last year’s Pixel devices are

our most popular generation yet, with both users and respected

reviewers and analysts. [Applause].

Our Pixel phones won multiple Phone of the Year awards.

[Cheers and Applause]. Yes.

Thank you. And in the premium smartphone

category, Google is the fastest-growing OEM in our

markets. [Cheers and Applause].

One of our more popular products is the Pixel A-series, which

delivers incredible – [Cheers and Applause].

Thank you. I’m glad you like it.

It delivers incredible Pixel performance in a very affordable

device. And to continue the I/O

tradition, let me show you the newest member of our A-series.

[Cheers and Applause]. Today, we’re completely

upgrading everything you love about our A-series with the

gorgeous new Pixel 7a. [Cheers and Applause].

Like all Pixel 7 series devices, the Pixel 7a is powered by our

flagship Google Tensor G2 chip, and it’s paired with eight

gigabytes of RAM, which ensures Pixel 7a delivers best-in-class

performance and intelligence. And you’re going to love the

camera. The 7a takes the crown from the 6a

as the highest-rated camera in its class, with the biggest

upgrade ever to our A-series camera hardware, including a 72

percent bigger main camera sensor.

[Applause]. Now, here’s the best part.

Pixel 7a is available today, starting at $499.

[Cheers and Applause]. It’s an unbeatable combination

of design, performance and photography, all at a great

value. You can check out the entire

Pixel 7a lineup on the Google Store, including our exclusive

Coral color. Now, next up, we’re going to

show you how we’re continuing to expand the Pixel portfolio into

new form factors. [Cheers and Applause].

Yeah. Like foldables and tablets.

[Cheers and Applause]. You can see them right there.

It’s a complete ecosystem of AI-powered devices engineered by

Google. Here’s Rose to share what a

larger-screen Pixel can do for you.

[Applause]. »ROSE: Okay.

Let’s talk tablets. Which have been a little bit

frustrating. It has always been hard to know

where they fit in, and they haven’t really changed in the past

ten years. A lot of the time, they are

sitting, forgotten in a drawer, and that one moment you

need it, it is out of battery. [Laughter].

We believe tablets, and large screens in general, still have a

lot of potential. So we set out to build something

different, making big investments across Google apps,

Android and Pixel, to reimagine how large screens can deliver a

more helpful experience. Pixel Tablet is the only tablet

engineered by Google and designed specifically to be

helpful in your hand and in the place they are used the most,

the home. We designed the Pixel Tablet to

uniquely deliver helpful Pixel experiences, and that starts

with great hardware. A beautiful 11-inch,

high-resolution display with crisp audio from the four

built-in speakers. A premium aluminum enclosure

with a nanoceramic coating that feels great in the hand and is

cool to the touch. The world’s best Android

experience on a tablet, powered by Google Tensor G2, for

long-lasting battery life and cutting-edge personal AI.

For example, with Tensor G2, we optimize the Pixel Camera

specifically for video calling. Tablets are fantastic video

calling devices. And with Pixel Tablet, you are

always in frame, in focus and looking your best.

The large screen makes Pixel Tablet the best Pixel device for

editing photos, with AI-powered tools like Magic Eraser, and

Photo Unblur. Now, typing on a tablet can be

so frustrating. With Pixel Speech and Tensor G2,

we have the best voice recognition, making voice typing

nearly three times faster than tapping.

And as Sameer mentioned, we’ve been making huge investments to

create great app experiences for larger screens, including more

than 50 of our own apps. With Pixel tablet, you’re

getting great tablet hardware with great tablet apps, but we

saw an opportunity to make the tablet even more helpful in the

home. So we engineered a first of its

kind charging speaker dock. [Cheers and Applause].

It gives the tablet a home. And now, you never have to worry

about it being charged. Pixel Tablet is always ready to

help, 24/7. When it’s docked, the new Hub

Mode turns Pixel Tablet into a beautiful digital photo frame, a

powerful smart home controller, a voice activated helper, and a

shared entertainment device. It feels like a smart display,

but it has one huge advantage. With the ultra fast fingerprint

sensor, I can quickly unlock the device and get immediate access

to all of my favorite Android apps.

So I can quickly find a recipe with Side Chef, or discover a

new podcast on Spotify, or find something to watch with the

tablet-optimized Google TV app. Your media is going to look and

sound great with room-filling sound from the charging speaker

dock. Pixel Tablet is also the

ultimate way to control your smart home.

And that starts with the new, redesigned Google Home App.

It looks great on Pixel Tablet, and it brings together over

80,000 supported smart home devices, including all of your

Matter-Enabled devices. [Cheers and Applause].

We also made it really easy to access your smart home controls

directly from Hub Mode. With the new home panel, any

family member can quickly adjust the lights, lock the doors or

see if a package was delivered. Or, if you’re lazy, like me, you

can just use your voice. Now, we know that tablets are

often shared, so a tablet for the home needs to support

multiple users. Pixel Tablet makes switching

between users super easy, so you get your own apps and your own

content while maintaining your privacy.

[Cheers and Applause]. And my favorite part, it is so

easy to move content between devices.

Pixel Tablet is the first tablet with Chromecast built in, so

with a few taps – [Applause].

– I can easily cast some music or my favorite show from the

phone to the tablet. And then I can just take the

tablet off the dock and keep listening or watching all around

the house. We designed a new type of case

for Pixel Tablet that solves the pain of flimsy tablet cases.

It has a built-in stand that provides continuous flexibility

and is sturdy at all angles, so you can confidently use your

tablet anywhere, on a plane, in bed or in the kitchen.

The case easily docks, so you never have to take it off to

charge. It’s just another example of how

we can make the tablet experience even more helpful.

[Cheers and Applause]. The new Pixel Tablet

comes in three colors. It is available for pre-order

today and ships next month, starting at just $499.

[Applause]. And the best part, every Pixel

Tablet comes bundled with a $129 charging speaker dock for free!

[Cheers and Applause]. It is truly the best tablet in

your hand and in your home. To give you an idea of just how

helpful Pixel Tablet can be, we asked TV personality Michelle

Buteau to put it to the test. Let’s see how that went.

MICHELLE BUTEAU: When Google asked me to spend the day with

this tablet, I was a little apprehensive, because I’m not a

tech person. I don’t know how things work all

the time. But I’m a woman in STEM now.

Some days I could barely find the floor, let alone the charger

for something, so when the Google folks said something

about a tablet that docks, I was like, okay then, Google, prove

it! [Music playing].

[Laughter]. I’m on average two to five

meetings a day. Today I got stuck on all these

features, honey, the 360 of it all!

The last time I was around this much sand, some of it got caught

in my belly button, and I had a pearl two weeks later.

Look, it’s a bird! So this is what I love about my

me time today. Six shows just popped up based

off of my preferences. And they were like, hey, girl!

[Laughter]. I would have made it funnier,

but that was good. My husband is actually a

photographer, so I have to rely on him to make everything nice

and pretty. But now, I love this picture of

me and my son, but there’s a boom mic there.

Look. It’s right here.

You see this one? Get this mic.

You see that? Magic Eraser, I can circle or

brush. I’m going to do both.

Boom! How cute is that!

And so I hope not only you guys are happy with me reviewing

this, but that you’ll also give me one, because, I mean –

[Laughter]. You’re getting tired, right?

No, I’m not! » You’re not?

Okay. ‘Cause I am.

[Applause]. »RICK: That’s a pretty good

first review. Now, tablets aren’t the only

large-screen device we want to show you today.

It’s been really exciting to see foldables take off over the past

few years. Android’s driven so much

innovation in this new form-factor, and we see

tremendous potential here. We’ve heard from our users

that the dream foldable should have a versatile form factor,

making it great to use both folded and unfolded.

It should also have a flagship-level camera system

that truly takes advantage of the unique design.

And an app experience that’s fluid and seamless across both

screens. Creating a foldable like that

really means pushing the envelope with state-of-the-art

technology, and that means an ultra-premium $1,799 device.

Now, to get there, we’ve been working closely with our Android

colleagues to create a new standard for foldable

technology. Introducing Google Pixel Fold.

[Cheers and Applause]. It combines Tensor G2, Android

innovation, and AI for an incredible phone that unfolds

into an incredibly compact tablet.

It’s the only foldable engineered by Google to adapt to

how you want to use it, with a familiar front display that

works great when it’s folded, and when it’s unfolded, it’s our

thinnest phone yet and the thinnest foldable on the market.

[Applause]. Now, to get there, we had to

pack a flagship-level phone into nearly half the thickness, which

meant completely redesigning components like the telephoto

lens and the battery, and a lot more.

So it can fold up and it can fit in your pocket, and retain that

familiar smartphone silhouette when it’s in your hand, but

Pixel Fold has three times the screen space of a normal phone.

You unfold it and you’re treated to an expansive 7.6-inch

display that opens flat with a custom 180-degree fluid-friction

hinge. So you’re getting the best of

both worlds. It’s a powerful smartphone when

it’s convenient and an immersive tablet when you need one.

And like every phone we make, Pixel Fold is built to last.

We’ve extensively tested the hinge to be the most durable of

any foldable. Corning Gorilla Glass Victus

protects it from exterior scratches, while the IPX8 water

resistant design safeguards against the weather.

And as you’d expect from a Pixel device, Pixel Fold gives you

entirely new ways to take stunning photos and videos with

Pixel Camera. Put the camera in tabletop

mode to capture the stars. And you can get closer with the

best zoom on a foldable. And use the best camera on the

phone for your selfies. The unique combination of form

factor, triple rear camera hardware, and Personal AI with

Tensor G2 make it the best foldable camera system.

[Cheers and Applause]. Now, there are so many

experiences that feel even more natural with a Pixel Fold.

One is the Dual Screen Interpreter Mode.

Your Pixel Fold can use both displays, both displays, to

provide a live translation to you and the person you’re

talking to. So it’s really easy to connect

across languages. [Applause].

And powering all of this is Google Tensor G2.

Pixel Fold has all of the Personal AI features you would

expect from a top of the line Pixel device, across safety,

speech and call assist. Plus, great performance for

on-the-go multi-tasking and entertainment.

And the entire foldable experience is built on Android.

Let’s get Dave back out here to show you the latest improvements

to Android that you’ll get to experience on Pixel Fold.

[Applause]. »DAVE: Thanks, Rick.

From new form factors and customizability to biometrics

and computational photography, Android has always been at the

forefront of mobile industry breakthroughs.

Recently, we’ve been working on a ton of features and

improvements for large-screen devices like tablets and

foldables. So who thinks we should try a

bunch of live demos on the new Pixel Fold?

[Cheers and Applause]. All right.

It starts the second I unfold the device with this stunning

wallpaper animation. The hinge sensor is actually

driving the animation, and it’s a subtle thing, but it makes the

device feel so dynamic and alive.

Yeah, I just love that. All right.

So let’s go back to the folded state.

And I’m looking through Google photos at a recent snowboarding

trip. Now, the scenery is really

beautiful so I want to show you on the big screen.

I just open my phone, and the video instantly expands into

this gorgeous full-screen view. [Cheers and Applause].

We call this continuity, and we’ve obsessed over every

millisecond it takes for apps to seamlessly adapt from the

smaller screen to the larger screen.

Now, all work and no play makes Davey a dull boy, so I’m going

to message my buddy about getting back out on the

mountain. I can just swipe to bring up the

new Android taskbar, and then drag Google messages to the side

to enter split screen mode like so.

I’ll send my buddy a photo to try to inspire him.

I just drag and drop straight from Google Photos right into my

message. Like so.

And thanks to the new Jetpack drag and drop library, this is

now supported in a wide variety of apps, from Workspace to

WhatsApp. You’ll notice we’ve made a bunch

of improvements throughout the OS to take advantage of the

larger screen. So for example, here’s the new

split keyboard for faster typing.

And if I pull down from the top, you’ll notice the new two-panel

shade showing my notifications and my quick

settings at the same time. Now, Pixel Fold is great for

productivity on the go. And if I swipe up into Overview,

you’ll notice that we now keep the multi-tasking windows

paired. And for example, I was working

in Google Docs and Slides earlier to prep for this

keynote, and I think I’ve – I think I’ve followed most of

these tips so far, but I’m not quite done yet.

I can even adjust the split to suit the content that I’m

viewing, and, you know, working this way, it’s like having a

dual-monitor setup in the palm of my hand, allowing me to do

two things at once. Which reminds me, I should

probably send Rick a quick note, so I’ll open Gmail, and I don’t

have a lot of time, so I’m going to use the new Help Me Write

feature, so let’s try this out. Don’t cheer yet.

Let’s see if it works. Okay.

Rick, congrats on – what are we going to call this, Pixel Fold’s

launch, amazing with Android. Okay.

And then I probably should say, Dave – not Andrew, Android.

Dave. It’s hard to type with all you

people looking at me. All right.

Now, by the power of large language models, allow me to

elaborate. Dear Rick, congratulations on

the successful launch of Pixel Fold, I’m really impressed with

the device and how well it works with Android.

The foldable screen is a game changer, and I can’t wait to see

what you do – [Cheers and Applause].

All right. That’s productivity.

But there’s more. Pixel Fold is also an

awesome entertainment device, and YouTube is just a really

great showcase for this so let’s start watching this video on the

big screen. Now, look what happens when I

fold the device at a right angle. YouTube enters what we call

tabletop mode so that the video plays on the top half and we’re

working on adding playback controls on the bottom half for

an awesome single-handed lean-back experience.

And the video just keeps playing fluidly through these

transitions without losing a beat.
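The tabletop behavior described here comes down to mapping the hinge angle to a coarse UI posture. The sketch below is an illustration only, not Android code: on device this would observe fold state through Jetpack WindowManager, and the angle thresholds here are assumptions.

```python
# Illustrative posture logic for a foldable, based on the behavior described
# on stage. NOT the actual Android implementation (which reads fold state via
# Jetpack WindowManager); the angle thresholds are assumed for illustration.

def posture(hinge_angle_degrees: float) -> str:
    """Map a hinge angle to a coarse device posture."""
    if hinge_angle_degrees < 30:
        return "closed"      # folded shut: use the outer display
    if hinge_angle_degrees < 150:
        return "tabletop"    # half-open: video on top, controls on bottom
    return "flat"            # fully open: immersive full-screen playback

print(posture(90))  # a right angle puts YouTube into tabletop mode
```

The point of keeping the mapping coarse is that apps only need a small number of layouts; fluid playback across transitions then reduces to re-laying-out the same surface as the posture changes.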

Okay. One last thing.

We’re adding support for switching displays within an

app, and Pixel Fold’s camera is a really great example of that.

Now, by the way, say hi to Julie behind me.

She’s the real star of the show. [Cheers and Applause].

So Pixel Fold has this new button on the bottom right so

I’m going to tap this and it means I can move the viewfinder

to the outside screen. So let me turn the device

around. Okay.

So why is this interesting? Well, it means that the

viewfinder is now beside the rear camera system and that means I

can get a high-quality, ultra-wide, amazing selfie with

the best camera on the device. Speaking of which, and you knew

where this was going! Smile, everybody!

You look awesome! [Cheers and Applause].

I always wanted to do that at a Google I/O keynote.

All right. So what you’re seeing here is

the culmination of several years of work, in fact, on large

screens, spanning the Android OS and the most popular apps on the

Play Store. To see all of this come alive on

the amazing new Pixel Tablet and Pixel Fold, check out this

video. Thank you.

They ain’t never seen it like this.

[Cheers and Applause]. » That demo is awesome.

Across Pixel and Android, we’re making huge strides with

large-screen devices, and we can’t wait to get Pixel Tablet

and Pixel Fold into your hands. And you’re not going to have to

wait too long. You can pre-order Pixel Fold

starting today and it will ship next month.

[Cheers and Applause]. And you’ll get the most out of

our first ultra-premium foldable by pairing it with Pixel Watch.

So when you pre-order a Pixel Fold, you’ll also get a Pixel

Watch on us. [Cheers and Applause].

The Pixel family continues to grow into the most dynamic

mobile hardware portfolio in the market today.

From a broad selection of smartphones to watches, earbuds,

and now tablets and foldables, there are more ways than ever to

experience the helpfulness Pixel is known for whenever and

wherever you need it. Now let me pass it back to

Sundar. Thanks, everyone!

[Cheers and Applause]. »SUNDAR: Thanks, Rick.

I’m really enjoying the new tablet and the first Pixel

foldable phone, and am proud of the progress Android is driving

across the ecosystem. As we wrap up, I’ve been

reflecting on the big technology shifts that we have all been a

part of. The shift with AI is as big as

they come, and that’s why it’s so important that we make AI

helpful for everyone. We are approaching it boldly

with a sense of excitement, because as we look ahead,

Google’s deep understanding of information, combined with the

capabilities of generative AI, can transform Search and all of

our products yet again. And we are doing this

responsibly in a way that underscores the deep commitment

we feel to get it right. No one company can do this

alone. Our developer community will be

key to unlocking the enormous opportunities ahead.

We look forward to working together and building together.

So on behalf of all of us at Google, thank you, and enjoy the

rest of I/O. [Cheers and Applause].

[Music playing].

CHLOE CONDON: Well, that’s a wrap on the Google keynote.

We’ve got much more I/O content coming up.

So before you take a break or grab a snack, you want to make

sure you’re right back here in just a few minutes.

CRAIG LABENZ: That’s right. I’m Craig, and she’s Chloe, and

we’ll be showing off I/O Flip, a classic card game

with an AI twist. We released the game yesterday

to show you what’s possible with generative AI and other Google

tools like Flutter, Firebase and Google Cloud.

So come see how the game is played, or play along with us at

home. For now, check out our new I/O

Flip game trailer.

IREM: My name is Irem. I’m an engineer at Google, and

I’ve been working on Project Starline for a little over a

year. Project Starline’s mission is to

bring people together and make them feel present with each

other, even if physically they are miles apart.

Our earlier prototype relied on several cameras and sensors to

produce a live 3D image of the remote person.

In our new prototype, we have developed a breakthrough AI

technique that learns to create a person’s 3D likeness and image

using only a few standard cameras.

MELANIE: And I’m Melanie Lowe, and I’m the Global Workplace

Design Lead here at Salesforce. You know, you were so used to

seeing a two-dimensional little, you know, box, and then we’re

connecting like this, and that feeling of being in front of a

person is now replicated in Starline.

I’m more than happy to be a part of the setup.

I was kind of curious about the collaboration.

I was like, is this possible? You felt like someone was right

there. Thanks for having me.

Yeah. Of course.

The first meeting I had on Starline, I said, wow, you’ve

got blue eyes. And this is a person I’d been

meeting with for a year. Just to see a person in 3D, it

was really astounding. » His smile was the same smile.

Exactly how it was when I first met him.

Oh, Elaine is in Atlanta, but actually it feels like she’s

sitting in front of me. » Starline is really about the

person you’re talking to. All the technology sort of falls

by the wayside.

As a Black woman in tech, no matter what, I’m Black.

Think about all of the technologies that we use all the

time and how few of them are designed from a headspace that

considers an identity like mine, like Black people.

I provide material for Black artists to create authentic

depictions of their own community.

You have to use your imagination.

How do you build your own technology?

I want to challenge what people want and provide

something new.

There’s no way to change somebody’s life more than to

give them a good education. Generation Games makes games and

the tools to make them. » We started focused on closing

the math gap in underserved communities.

You talk about how do we get kids engaged?

Make it relevant because it’s really powerful when they see

someone who looks like them reflected the very first time on

their device. » Music was always part of our

life. » When I worked in my previous

job, I injured my back. » During his rehabilitation, he

started his long walks, and he wanted to listen to music.

I started to develop the Equalizer app, and this is the

story. » The app allows you to have

your own music experience, to hear the music you like, the way

you like. » I started my career as a

historian. Game development was never a

plan. But then games came into my

life, and I decided to combine history and entertainment to

touch people’s hearts. Our first game, Arida, is

influenced by a conflict that happened in Canudos, the

backlands of Bahia, in the 19th century.

I think the game could be a good way to make this memorable for

many generations. » You can only imagine how

liberated I feel, no longer being defined by my stutter.

It was only when I realized that with the right tools, with the

right training, stuttering is something that I could control.

Access to speech therapy is a global problem, but it’s

particularly acute in the developing countries.

Straight away we realized we need to code this into an app,

and then maybe we can help other people as well.


CHLOE CONDON: Hi. I’m Chloe.

» And I’m Craig, and we’re super excited to introduce you

to I/O Flip, a classic card game with a modern AI twist, powered

by Google, built for I/O and featuring a number of our

favorite products. » We just released it

yesterday, and a lot of you have already checked it out at

flip.with. If you haven’t, make sure you

do. For now, we are here to give you

a real-time demo and details about some of the tech we used

like Flutter, Firebase and Google Cloud.

You’ll want to stay tuned for the developer keynote coming up

after this to see how the game was made.

For this demo, Chloe and I, well, we happen to know the

folks that made this game. Hey, Flip team, so we’re able

to play against each other just for today, and the winner gets

to take home that trophy right over there.

Wow! » I know the perfect place for

that trophy. » Chloe, how do you know what

my trophy case looks like? » Okay.

So to get started on I/O Flip, you get to build your own team,

customizing your cards with classes and special powers along

the way, and there are some extra bonuses that add to your

strength, like holographic cards and elemental powers, more on

those later. You win when your cards are more

powerful than your opponent’s cards.
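The round rules as described on stage (higher power wins, with an elemental advantage tipping the result) can be sketched roughly as follows. This is a hypothetical illustration, not the game's actual code: only water-over-fire and fire-over-metal are mentioned in the demo, so the rest of the advantage cycle, the bonus size, and all names and numbers are invented.

```python
# Hypothetical sketch of I/O Flip round resolution. Only the water > fire and
# fire > metal matchups come from the keynote; everything else is assumed.

# element -> the element it has an advantage over (cycle partly invented)
ADVANTAGE = {
    "water": "fire",
    "fire": "metal",
    "metal": "air",
    "air": "earth",
    "earth": "water",
}

ELEMENT_BONUS = 10  # assumed bonus when a card's element counters its opponent's

def effective_power(card: dict, opponent: dict) -> int:
    """Card power, plus a bonus if its element counters the opponent's."""
    power = card["power"]
    if ADVANTAGE.get(card["element"]) == opponent["element"]:
        power += ELEMENT_BONUS
    return power

def resolve_round(a: dict, b: dict) -> str:
    """Return 'a', 'b', or 'draw' for one card flip."""
    pa, pb = effective_power(a, b), effective_power(b, a)
    if pa == pb:
        return "draw"
    return "a" if pa > pb else "b"

sparky = {"name": "Sparky the Pirate", "element": "fire", "power": 55}
dash = {"name": "Dash the Fairy", "element": "water", "power": 50}
print(resolve_round(dash, sparky))  # water counters fire, as in the demo
```

Best-of-three over the players' chosen teams, as played in the demo, is then just three calls to `resolve_round`.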

All right, Craig. We’re going to try this out on

our new Pixel 7as right here. Are you ready to open a pack and

start building our teams? » Let’s do it.

Let’s play I/O Flip. [Bell dinging].

Okay. First we’re going to build an

AI-designed pack of cards featuring some of our beloved Google

characters. Dash, Sparky, Android and Dino.

Then we’re going to assign them special powers and see what we

get. Let’s see.

What do I want for my team? Ooh, I’m going to go with Fairy,

because an army of pixies sounds like a crew that I want to hang

out with. And let’s see, for my special

power, I’m going to choose break dancing because nothing is more

powerful than the power of dance. » Hmmm.

All right. Well, I’ve chosen Pirate as my

class, and Chloe, you do know how to tell if someone’s a

pirate, right? Well, they’re always talking

about plunder. All right.

So now I need a special power. Let’s see.

Astrology.

If only that had been astronomy, the pirates might have actually

been able to use it. Let’s see, break dancing

pirates. What is this, pirates of

Penzance? Oh, fake crying!

That’s good. Be careful, Chloe.

My pirates are also good at emotional manipulation.

They’ve never seen a guilt trip they wouldn’t take.

Fake crying. Huh.

I didn’t realize that I had a super power.

We each get 12 cards in a pack, and from here we’ll be able to

swipe through and strategize and decide which three we think will

be our strongest competitors. Those three become our teams,

and they’re the cards that will compete with our opponent’s

team. » Ooh, here’s a good

description. Sparky the pirate fake cries to

get out of trouble, but he always laughs it off.

Pretty childish, Sparky, but hopefully pretty good flavor

text. Now, MakerSuite helped us

prototype all of those prompts, and then the PaLM API generated

thousands of unique descriptions for all these cards.
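The keynote doesn't show the actual prompts prototyped in MakerSuite, but the batch generation it describes rests on simple prompt templating: one template, filled per card, sent to a text-generation model. The template wording, field names, and function below are invented for illustration.

```python
# Hypothetical sketch of per-card prompt assembly of the kind prototyped in
# MakerSuite. The template text and fields are invented; the real I/O Flip
# prompts are not shown in the keynote.

FLAVOR_PROMPT = (
    "Write one playful sentence of card-game flavor text about "
    "{character} the {card_class}, whose special power is {power}. "
    "Mention the power and keep it family-friendly."
)

def build_flavor_prompt(character: str, card_class: str, power: str) -> str:
    """Fill the template for one card; the result would be sent to a
    text-generation model such as the PaLM API."""
    return FLAVOR_PROMPT.format(
        character=character, card_class=card_class, power=power
    )

prompt = build_flavor_prompt("Sparky", "Pirate", "fake crying")
print(prompt)
```

Looping this over every character, class, and power combination is what would yield thousands of unique descriptions from a single prototyped prompt.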

And those animations are silky smooth with Flutter.

And what’s even cooler is because those animations are

powered by code, they’re really flexible, and because all of

this is made in Flutter, we don’t only have a Web app here,

we’re most of the way toward a mobile app on Android and iOS as

well. » And all the images were

created with Google AI tools. We’re committed to using AI

responsibly here at Google, so we collaborated with artists to

train our AI models which then generated the thousands of

images that you see in I/O Flip. » All the gameplay

communication, like matchmaking and results, was easy to

implement with Firestore, and with Cloud Run we’re able to

deploy and scale our all-Dart backend.

» That’s right. I/O Flip is full-stack Dart.
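The backend is only described at a high level: matchmaking state in Firestore, served from Cloud Run. The pairing logic at its core can be sketched in-memory like this; the class, method names, and first-come-first-served policy are all invented for illustration, and in the real game this state would live in Firestore documents behind a Dart backend.

```python
# Hypothetical in-memory sketch of the matchmaking flow mentioned on stage.
# In the real game this state lives in Firestore and the logic runs on a
# Cloud Run backend written in Dart; everything below is illustrative.
from collections import deque

class Matchmaker:
    """Pair waiting players first-come, first-served."""

    def __init__(self):
        self.waiting = deque()  # players with no opponent yet
        self.matches = []       # completed (player_a, player_b) pairs

    def join(self, player_id: str):
        """Add a player; return a match as soon as two are available."""
        if self.waiting:
            opponent = self.waiting.popleft()
            match = (opponent, player_id)
            self.matches.append(match)
            return match
        self.waiting.append(player_id)
        return None  # still waiting for an opponent

mm = Matchmaker()
print(mm.join("chloe"))  # no opponent yet
print(mm.join("craig"))  # pairs with chloe
```

Moving this to Firestore would mean replacing the queue with a "waiting" collection and using a transaction to claim an opponent, so two players can't both grab the same match.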

Okay, Chloe, I think it’s time to flip.

Okay. Now, this is a fast-paced game.

Things happen pretty quickly, so pay attention.

Okay. We’re in.

Okay. I’ve got my card.

All right, fairies, let’s break dance.

All right, pirates. Yar.

Okay. Moment of truth here.

Yes! » Oh!

And your water elemental power further beating my fire, as if

you even needed it. Okay.

Round two, pressure’s on. I think I’ve got a winner.

We’ll see about that. Right into my trap.

Oh, these are – these are real tears, Chloe.

That is my lowest card. » Woo-hoo!

This is for all the marbles. » I feel good about this one.

I think I’m going to get that trophy.

Oh! Fire is about to get your metal!

But not enough! Chloe!

You’ve taken it! Agh!

Well-played, Chloe. I suppose if anyone deserved

that trophy other than me, it would be you.

Thanks, Craig. Once again, the power of dance

prevails. Super fun, super easy to play.

I love being able to customize my cards and characters and I

can play quick games if I’m short on time, or I can play

again to extend my winning streak.

Want to play I/O Flip yourself?

Go to the I/O Flip site to play on your laptop, desktop or

mobile device. » Thanks for hanging out with

us and checking out I/O Flip, and you can learn more about the

AI technology actually used to make the game and so much more

coming up next in the developer keynote.

See you there!

Ten. Nine.

Eight. Seven.

Six. Five.

Four. Three.