DeepLearningAI - ChatGPT Prompt Engineering for Developers - Inferring


This next video is on inferring. I like to think of these as tasks where the model takes a text as input and performs some kind of analysis. So this could be extracting labels, extracting names, kind of understanding the sentiment of a text, that kind of thing.

So if you want to extract a sentiment, positive or negative, from a piece of text, in the traditional machine learning workflow, you'd have to collect a labeled data set, train

the model, figure out how to deploy the model somewhere in

the cloud and make inferences. And that can work pretty well, but

it was just a lot of work to go through that process. And

also for every task, such as sentiment versus

extracting names versus something else, you

have to train and deploy a separate model. One

of the really nice things about a large

language model is that for many tasks like these, you

can just write a prompt and have it

start generating results pretty much right away. And

that gives tremendous speed in terms of application development. And

you can also just use one model, one API, to do many different tasks

rather than needing to figure out how to

train and deploy a lot of different models. And

so with that, let’s jump into the code to see how you can

take advantage of this. So here's the usual starter code. I'll just run that. And the main example I'm going to use is a review for a lamp: needed a nice lamp for the bedroom, and this one had additional storage, and so on.
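For reference, here is a minimal sketch of that starter code. It assumes the openai Python package (the pre-1.0 ChatCompletion interface), an OPENAI_API_KEY in the environment, and a get_completion helper like the one used throughout this course; the lamp_review text is an abridged paraphrase of the lesson's review, not the verbatim text.

    import os
    import openai

    # Assumes OPENAI_API_KEY is set in the environment (e.g. loaded from a .env file).
    openai.api_key = os.getenv("OPENAI_API_KEY")

    def get_completion(prompt, model="gpt-3.5-turbo"):
        """Send a single user message and return the model's text reply."""
        messages = [{"role": "user", "content": prompt}]
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=0,  # keep the output deterministic-ish for analysis tasks
        )
        return response.choices[0].message["content"]

    # Abridged paraphrase of the lesson's lamp review (not verbatim).
    lamp_review = """
    Needed a nice lamp for my bedroom, and this one had additional storage
    at a reasonable price. One part was missing, but the company's support
    team quickly sent over the replacement. Luminar seems to me to be a great
    company that cares about their customers and products.
    """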

So

let me write a prompt to classify the sentiment of this.

And if I want the system to tell me, you know, what is the sentiment,

I can just write what is the sentiment

of the following

product review,

with the usual delimiter and the review text and so on. And let’s

run that.
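A sketch of that prompt, using the get_completion helper and lamp_review from the setup above (the exact wording in the lesson's notebook may differ slightly):

    prompt = f"""
    What is the sentiment of the following product review,
    which is delimited with triple backticks?

    Review text: ```{lamp_review}```
    """
    response = get_completion(prompt)
    print(response)  # e.g. "The sentiment of the product review is positive."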

And this says the sentiment of the product review is positive, which actually seems pretty right. This lamp isn't perfect, but this customer seems pretty happy. It seems to be a great company that cares about its customers and products. I think positive sentiment is the right answer.

Now, this prints out the entire sentence, "The sentiment of the product review is positive." If you wanted a more concise response to make post-processing easier, I can take this prompt and add another instruction: give your answer as a single word, either positive or negative. So it just prints out "positive" like this, which makes it easier for a piece of code to take this output and process it and do something with it.
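A sketch of that more concise variant, with the same assumptions as above:

    prompt = f"""
    What is the sentiment of the following product review,
    which is delimited with triple backticks?

    Give your answer as a single word, either "positive" or "negative".

    Review text: ```{lamp_review}```
    """
    print(get_completion(prompt))  # e.g. "positive"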

Let's look at another prompt, again still using the lamp review. Here, I have it identify a list of emotions that the writer of the following review is expressing, including no more than five items in the list.
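Roughly, that prompt looks like this (a sketch; the exact wording and the requested output format are my own phrasing):

    prompt = f"""
    Identify a list of emotions that the writer of the following review
    is expressing. Include no more than five items in the list.
    Format your answer as a list of lower-case words separated by commas.

    Review text: ```{lamp_review}```
    """
    print(get_completion(prompt))  # e.g. "satisfied, grateful, impressed, content, happy"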

So, large language models are pretty good at extracting

specific things out of a piece of text. In this case, we're extracting the emotions. And this could be useful for understanding

how your customers think about a

particular product.

For a lot of customer support organizations, it’s important to understand

if a particular user is extremely upset. So you might have

a different classification problem like this. Is

the writer of the following review expressing anger?

Because if someone is really angry, it might merit paying extra attention to that customer review, and having customer support or customer success reach out to figure out what's going on and make things right for the customer. In this case, the customer is not angry.
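A sketch of that anger classifier, reusing the same helper and review:

    prompt = f"""
    Is the writer of the following review expressing anger?
    The review is delimited with triple backticks.
    Give your answer as either yes or no.

    Review text: ```{lamp_review}```
    """
    print(get_completion(prompt))  # e.g. "No"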

And notice that with supervised learning, if I had wanted to build all of these classifiers, there's no way I would have been able to do it in the few minutes you saw me take in this video. I'd encourage you

to pause this video and try changing some

of these prompts. Maybe ask if the customer is expressing

delight or ask if there are any missing

parts and see if you can get a prompt to make different

inferences about this lamp review.

Let me show some more things that you

can do with this system, specifically extracting

richer information from a customer review.

So, information extraction is the part of NLP,

of natural language processing, that relates to taking

a piece of text and extracting certain things

that you want to know from the text. So, in this prompt, I'm asking it to identify the following items: the item purchased and the name of the company that made the item. Again, if you are trying to summarize many reviews from an online shopping e-commerce website, it might be useful, across your large collection of reviews, to figure out what the items were, who made them, and whether the sentiment was positive or negative, so that you can track trends in sentiment for specific items or specific manufacturers. And in this example, I'm going to ask it to format the response as a JSON object with "Item" and "Brand" as the keys.
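A sketch of that extraction prompt; the "unknown" fallback and the exact key names are my own choices, not necessarily the lesson's exact wording:

    import json

    prompt = f"""
    Identify the following items from the review text:
    - Item purchased by the reviewer
    - Company that made the item

    The review is delimited with triple backticks.
    Format your response as a JSON object with "Item" and "Brand" as the keys.
    If the information isn't present, use "unknown" as the value.
    Make your response as short as possible.

    Review text: ```{lamp_review}```
    """
    response = get_completion(prompt)
    print(response)  # e.g. {"Item": "lamp", "Brand": "Luminar"}

    # Assuming the model returned valid JSON, parse it into a Python dictionary.
    info = json.loads(response)
    print(info["Item"], info["Brand"])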

If I run that, it says the item is a lamp and the brand is Luminar, and you can easily load this into a Python dictionary to do additional processing on the output. In the examples we've gone through, you

saw how to write a prompt to recognize

the sentiment, figure out if someone is angry, and then also extract

the item and the brand.

One way to extract all of this information would be to use three or four prompts and call get_completion, you know, three or four times, and extract these different fields one at a time, but it turns out you can actually write a single prompt to extract all of this information at the same time. So, let's say: identify the following items, extract the sentiment, whether the reviewer is expressing anger, the item purchased, and the company that made it. And then here, I'm also going to tell it to format the anger value as a boolean value, and let me run that, and this

outputs a JSON where the sentiment is positive, anger is false, and there are no quotes around false because I asked it to output that as a boolean value. It extracted the item as "lamp with additional storage" instead of just "lamp", which seems okay,

but this way, you can extract multiple

fields out of a piece of text with just a single prompt.
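A sketch of that combined prompt, with the same assumptions as before:

    prompt = f"""
    Identify the following items from the review text:
    - Sentiment (positive or negative)
    - Is the reviewer expressing anger? (true or false)
    - Item purchased by the reviewer
    - Company that made the item

    The review is delimited with triple backticks.
    Format your response as a JSON object with "Sentiment", "Anger",
    "Item" and "Brand" as the keys.
    If the information isn't present, use "unknown" as the value.
    Make your response as short as possible.
    Format the Anger value as a boolean.

    Review text: ```{lamp_review}```
    """
    print(get_completion(prompt))
    # e.g. {"Sentiment": "positive", "Anger": false,
    #       "Item": "lamp with additional storage", "Brand": "Luminar"}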

And as usual, please feel free to pause the video and play

with different variations on this yourself, or maybe even try

typing in a totally different review to see

if you can still extract these things accurately.

Now, one of the cool applications I’ve seen of large language

models is inferring topics. Given a long piece of text, you

know, what is this piece of text about? What

are the topics? Here’s a fictitious newspaper article about

how government workers feel about the agency they

work for. So, "In a recent survey conducted by the government," and so on, "results revealed that NASA was a popular department with a high satisfaction rating." I am a fan of NASA, I love the work they do, but this is a fictitious article. And so, given an article like this, we can

ask it,

with this prompt, determine five topics

that are being discussed in the following text. Let’s

make each item one or two words long, format your response in a comma-separated list,

and so if we run that, we get out that this article is about a government survey, it's about job satisfaction, it's about NASA, and so on. So, overall, I think that's a pretty nice extraction of a list of topics, and of course, you can also split the output so you get a Python list with the five topics that this article was about.
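Here is a sketch of that topic prompt; the story text below is an abridged paraphrase of the lesson's fictitious article, not the verbatim text:

    # Abridged paraphrase of the lesson's fictitious article (not verbatim).
    story = """
    In a recent survey conducted by the government, public sector employees
    were asked to rate their level of satisfaction with the department they
    work at. The results revealed that NASA was a popular department with a
    high satisfaction rating, while other departments scored lower and plan
    to address employee concerns.
    """

    prompt = f"""
    Determine five topics that are being discussed in the
    following text, which is delimited by triple backticks.

    Make each item one or two words long.

    Format your response as a list of items separated by commas.

    Text sample: ```{story}```
    """
    response = get_completion(prompt)
    print(response)             # e.g. "government survey, job satisfaction, NASA, ..."
    print(response.split(","))  # split into a Python list of topics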

And if you have a collection of articles and extract

topics, you can then also use a large language

model to help you index into different topics. So,

let me use a slightly different topic list. Let's say that we're a news website or something, and these are the topics we track: NASA, local government, engineering, employee satisfaction, federal government.

And let’s say you want to figure out, given a news

article, which of these topics are covered in that

news article.

So, here’s a prompt that I can use.

I’m going to say, determine whether each item in

the following list of topics is a topic in the text below.

Give your answer as a list with 0 or 1 for each topic.
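A sketch of that prompt, reusing the story text from the previous sketch; asking explicitly for one "topic: 0 or 1" pair per line is my own addition to make the output easier to parse later:

    topic_list = [
        "nasa", "local government", "engineering",
        "employee satisfaction", "federal government",
    ]
    topics_str = ", ".join(topic_list)

    prompt = f"""
    Determine whether each item in the following list of topics
    is a topic in the text below, which is delimited with triple backticks.

    Give your answer as a list with 0 or 1 for each topic,
    one "topic: 0 or 1" pair per line.

    List of topics: {topics_str}

    Text sample: ```{story}```
    """
    response = get_completion(prompt)
    print(response)
    # e.g.
    # nasa: 1
    # local government: 0
    # engineering: 0
    # employee satisfaction: 1
    # federal government: 1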

And so, great. This is the same story text as before, and this says the story is about NASA; it's not about local government, not about engineering; it is about employee satisfaction, and it is about federal government. In machine learning, this is sometimes called a zero-shot learning algorithm, because we didn't give it any training data that was labeled. So, that's zero-shot. And with just a prompt, it was able to determine which of these topics are covered in that news article. And so, if you

want to generate a news alert, say, as part of a system that processes news articles, and, you know, I really like a lot of the work that NASA does. So, if you want to build a system that can take this output, put this information into a dictionary, and whenever a NASA story pops up, print "ALERT: new NASA story!", you can use this to very quickly take any article, figure out what topics it is about, and if the topics include NASA, have it print out the alert. Just one thing: I use this topic dictionary down here, and the prompt that I used up here isn't very robust.
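Here is a sketch of that dictionary-and-alert step, assuming the reply follows the "topic: 0 or 1" per-line format requested in the previous sketch; this string parsing is exactly the brittle part being described:

    # Parse lines like "nasa: 1" into a dictionary mapping topic -> 0/1.
    # This breaks easily if the model formats its answer differently.
    topic_dict = {
        line.split(": ")[0]: int(line.split(": ")[1])
        for line in response.split("\n")
        if ": " in line
    }

    if topic_dict.get("nasa") == 1:
        print("ALERT: New NASA story!")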

If I were building a production system, I would probably

have it output the answer

in JSON format rather than as a list

because the output of the large language model

can be a little bit inconsistent. So, this is actually a

pretty brittle piece of code. But if you want, when you’re

done watching this video, feel free to see if you can figure out

how to modify this prompt to have it

output JSON instead of a list like this, and then have a more robust way to tell whether a given article is a story about NASA.

So, that's it for inferring. In just a few minutes, you can build multiple systems for making inferences about text, something that previously would have taken days or even weeks for a skilled machine learning developer. And so, I find this very exciting, because both skilled machine learning developers and people who are newer to machine learning can now use prompting to very quickly build and start making inferences on pretty complicated natural language processing tasks like these. In

the next video, we’ll continue to talk about exciting

things you can do with large language models

and we’ll go on to transforming. How can you

take one piece of text and transform it into a different piece of text, such as translating it into a different language? Let's go on to the next video.