MakerSuite is essentially a developer tool that enables everyone with a Google Account to access the power of the PaLM API, with a focus on building products and services on top of it. The MakerSuite interface allows rapid prototyping and testing of the configurations used while interacting with the PaLM API. Once users are satisfied with a configuration, they can easily port it to their backend codebases.
We’re now ready to dive into exploring the MakerSuite interface. To get started, head over to https://makersuite.google.com/ on your browser. Make sure you’re logged in to your Google Account to be able to access the interface. You’ll be able to see the welcome dashboard.
The options available on MakerSuite as of writing this article are Text Prompts, Data Prompts, and Chat Prompts. Let’s take a brief look at what each of these does.
Text prompts are the most basic and customizable form of prompt that can be provided to the models. You can set them to any task or ask any question in a stateless manner: the prompt and input are ingested by the model every time it is run, and the model itself holds no context between runs. This makes text prompts a great starting point, and their output can be made as deterministic or as creative as the user requires.
Let us create a Text prompt in MakerSuite. Click on the Create button on the Text prompt card and you’ll be presented with the prompt testing UI.
At the top, MakerSuite allows users to save their prompts by name. It also provides starter samples that let you quickly test and understand how the product works. Below that is the main working area, where users can define their own prompts, adjust the model’s configuration parameters at the bottom, and run the prompts to produce an output.
First, click on the Pencil icon on the top left to give this prompt a suitable name. For our example, we’ll be building a prompt that asks the model to produce the etymology of any given word. We’re using the following values:
field | value |
---|---|
name | Word Etymology |
description | Asking PaLM API to provide word etymologies. |
Click on “Save” to save these values and close the input modal. Kindly note that these values do not affect the model in any manner and are simply present for user convenience.
Now, in the main working area below, we’ll write the required prompt. For our example, we write the prompt given below:
For any given word that follows, provide its etymology in no more than 300 words.
Aeroplane.
Etymology:
Now, let’s adjust the model parameters. Click on the button next to the Run button to change the model settings.
For our example, we shall set the following values to the parameters:
field | value | remark |
---|---|---|
model | Text Bison | Use default |
Temperature | 0 | Since word etymologies are dry facts and are not expected to be creative |
Add stop sequence | | Use default |
Max outputs | 1 | Word etymologies are unlikely to benefit from multiple output variations |
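Although we will make direct API calls only later in this article, it can be helpful to see how these UI settings map onto code. Below is a minimal sketch, assuming the google-generativeai Python SDK and a placeholder API key; the parameters temperature, candidate_count (the UI’s Max outputs), and max_output_tokens are those exposed by the SDK’s generate_text call.

```python
# Minimal sketch: the same etymology prompt and settings via the PaLM API,
# using the google-generativeai SDK (pip install google-generativeai).
# "YOUR_API_KEY" is a placeholder for a key generated from MakerSuite.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")

prompt = (
    "For any given word that follows, provide its etymology "
    "in no more than 300 words.\n"
    "Aeroplane.\n"
    "Etymology:"
)

completion = palm.generate_text(
    model="models/text-bison-001",  # Text Bison, the default text model
    prompt=prompt,
    temperature=0.0,       # 0 keeps the output as deterministic as possible
    candidate_count=1,     # corresponds to "Max outputs" in the UI
    max_output_tokens=300,
)

print(completion.result)  # the text of the top-ranked candidate
```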
Depending on the use case you’re building your generative AI-backed software for, you may wish to change the Safety settings of the model response. To do so, click on the Edit safety settings button. You can see the following options and can change them as per your requirement. For our use case, we shall leave it to default.
At the bottom of the configuration menu, you can choose to adjust further advanced settings of the model output. Let’s see what these options are:
We shall leave these options on default for now.
Great, we’re now all set to run the prompt. Click on the Run button on the bottom and wait for the model to produce the output. In our case, the model outputs:
The word "aeroplane" is derived from the Greek words "aēr" (air) and "planē" (to wander). The term was first used in the 1860s to describe a type of flying machine that was powered by a steam engine. In 1903, the Wright brothers made the first successful flight of a powered aeroplane.
Note that your response might come out slightly different due to the inherently non-deterministic nature of generative AI.
At this point, you might want to experiment by erasing the model output and running the prompt again. Does the output change? Re-run it several times to observe changes in the model output, then try adjusting the values of the model configuration and see how that affects the output. If you set the temperature to 0, you will likely notice that the model produces the same output across many runs. Try increasing it to 1 and running the model a few times. Does the output generated in each iteration remain the same now? Most likely, you’ll observe that the model output changes every time you re-run the prompt.
It is interesting to note here that the prompt you provide to the model does not contain any examples of how the model should respond. This method of using the model is called Zero-shot learning in which the trained model is asked to produce predictions for an input that it may not have seen before. In our example, it is the task of providing word etymologies, which the model may or may not have been trained on.
This makes us wonder: if we gave the model an input that it has definitely not been trained on, is it likely to produce the correct response? Let us try this out. Change the word in our etymology prompt example to “xakoozifictation” and hit the Run button to see what the model outputs. Instead of telling us that the word does not exist and thus has no meaning, the model attempts to produce an etymology of the word. The output we got was:
Xakoozifictation is a portmanteau of the words "xakooz" and "ification". Xakooz is a nonsense word created by combining the sounds of the words "chaos" and "ooze". ification is a suffix that can be added to verbs to create nouns that describe the process of doing something. In this case, xakoozifictation means the process of making something chaotic or oozy.
What we observe here is called “model hallucination”, a phenomenon common among large language models wherein the model produces output that runs contrary to common logic or is inaccurate with respect to real-world knowledge. It is highly recommended to read more about model hallucinations in the “Challenges in working with LLMs” section.
Let us continue our discussion about Zero-shot learning. We saw that when we provide only a prompt to the model and no examples of how to produce responses, the model tries its best to produce a response, and in most cases it succeeds. However, if we were to provide some examples of the expected input-output pairs, could we steer the model to perform more accurately and do away with the model hallucinations? Let us give this a try by providing some input-output examples to the model. Update your model prompt to the following:
For any given word that follows, provide its etymology in no more than 300 words.
Examples:
Word: aeroplane
Reasoning: Since it's a valid English word, produce an output.
Etymology: Aeroplane is a compound word formed from the Greek roots "aer" (air) and "planus" (flat).
Word: balloon
Reasoning: Since it's a valid English word, produce an output.
Etymology: The word balloon comes from the Italian word pallone, which means ball. The Italian word is derived from the Latin word ballare, which means to dance.
Word: oungopoloctous
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available
Word: kaploxicating
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available
Word: xakoozifictation
Etymology:
In the above prompt, we have provided 2 examples of words that exist and 2 examples of words that do not exist. We expect the model to learn from these examples and produce output accordingly. Hit Run to see the output of the model, remember to set the temperature configuration of the model back to 0.
You will see that the model responds with the “Not available” output for non-existent words now and with etymologies only for words that exist in the English dictionary. Hence, by providing a few examples of how we expect the model to behave, we were able to stop the model hallucination problem.
This method of providing some samples of the expected input-output to the model in the prompt is called Few-shot learning. In Few-shot learning, the model is expected to predict the output for an unknown input based on a few similar samples it has received prior to the task of prediction. In special cases, the number of samples might be exactly one, which is termed “One-shot learning”.
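To see how such a few-shot prompt can be ported out of MakerSuite, here is a minimal sketch using the google-generativeai Python SDK. The get_etymology helper and the template wrapping are our own illustrative additions; the examples themselves are the ones used above.

```python
# A hedged sketch of the same few-shot prompt sent directly to the PaLM API.
# The FEW_SHOT_PROMPT template and get_etymology() helper are illustrative only.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")

FEW_SHOT_PROMPT = """For any given word that follows, provide its etymology in no more than 300 words.

Examples:
Word: aeroplane
Reasoning: Since it's a valid English word, produce an output.
Etymology: Aeroplane is a compound word formed from the Greek roots "aer" (air) and "planus" (flat).

Word: balloon
Reasoning: Since it's a valid English word, produce an output.
Etymology: The word balloon comes from the Italian word pallone, which means ball.

Word: oungopoloctous
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available

Word: kaploxicating
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available

Word: {word}
Etymology:"""


def get_etymology(word: str) -> str:
    """Run the few-shot prompt; the examples steer the model away from hallucinating."""
    completion = palm.generate_text(
        model="models/text-bison-001",
        prompt=FEW_SHOT_PROMPT.format(word=word),
        temperature=0.0,  # keep the output deterministic, as in the UI
    )
    return completion.result


print(get_etymology("xakoozifictation"))  # expected: Not available
print(get_etymology("telescope"))         # expected: an actual etymology
```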
Now, let us explore the next type of prompt available on the MakerSuite - Data Prompt.
In Data prompts, the user is expected to use the model to generate more samples of data based on provided samples. The MakerSuite Data prompt interface defines two sections: the prompt itself, which is now optional, and the samples of the data that the prompt has to work on, which is a required section.
It is important to note that, at the bottom of the page, the model is still the Text Bison model. Thus, Data prompts can be understood as a specific use case of text generation using the Text Bison model.
Further, there is no way to test a Data prompt without specifying the inputs as one or more columns of the to-be-generated rows of the dataset. Let us build a prompt for this interface. Since providing prompt text is not necessary here, we’ll skip it and instead fill the table as shown below:
In order to add more columns than the number of columns present by default, use the Add button on the top right.
Once this is done, we are now ready to provide the input column for the test inputs below. In the Test your prompt section at the bottom, fill in only the INPUT number column as shown below:
Now, click on the Run button to see how the model produces outputs for this prompt.
We see that the model produces the rest of the data for those rows correctly, in the format we provided it with. This makes us wonder: if we provide historical data to the Data prompt, will it be able to predict future trends? Let us give this a try.
Create a new Data prompt and, on the data examples table, click Add -> Import examples on the top right. You may choose any existing Google Sheet from the dialog box, or upload any supported file. We choose to upload a CSV file, namely the Iris flower dataset’s CSV, using the one found at https://gist.github.com/netj/8836201/
On selecting the file, the interface will ask you to assign the columns in the CSV to columns in your data examples. We choose to create new input columns for all the feature columns of the Iris dataset, and keep the labels column as an output column, as shown below:
After importing the examples, let us manually move a few examples to the Test your prompt section. Remember to remove these examples from the data examples section above to ensure the model is not training on the same data that it is being tested on. Now, click the Run button to get the model’s output.
We observe that the model is able to correctly output the label column values as per the examples it has received. Hence, besides generating more examples for a given dataset, the model is also capable of making predictions about the inputs to a degree. Much more extensive testing would be required to determine the accuracy of the model, which is beyond the scope of this article.
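Since Data prompts are, under the hood, text generation with Text Bison, the same experiment can be approximated in code by serializing the tabular examples into a few-shot text prompt. The sketch below does this for the Iris data; the row-to-text format is our own assumption rather than MakerSuite’s exact internal representation, and the example rows are taken directly from the linked CSV.

```python
# Hedged sketch: flattening tabular examples into a few-shot text prompt,
# approximating what a Data prompt does with the Text Bison model.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")

# (sepal.length, sepal.width, petal.length, petal.width) -> variety,
# rows taken from the Iris CSV linked above.
examples = [
    ((5.1, 3.5, 1.4, 0.2), "Setosa"),
    ((7.0, 3.2, 4.7, 1.4), "Versicolor"),
    ((6.3, 3.3, 6.0, 2.5), "Virginica"),
]
test_row = (5.9, 3.0, 5.1, 1.8)  # held out of the examples, as done in the UI

columns = ["sepal.length", "sepal.width", "petal.length", "petal.width"]

lines = ["Given the flower measurements, predict the variety.", ""]
for features, variety in examples:
    pairs = ", ".join(f"{c}: {v}" for c, v in zip(columns, features))
    lines.append(f"{pairs} -> variety: {variety}")
pairs = ", ".join(f"{c}: {v}" for c, v in zip(columns, test_row))
lines.append(f"{pairs} -> variety:")

completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="\n".join(lines),
    temperature=0.0,
)
print(completion.result)  # expected to be close to "Virginica" for this row
```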
Finally, let us explore the Chat prompts.
Chatting is the form in which most people have first interacted with generative AI models. Made popular once more by the advent of ChatGPT, the concept of AI being able to hold intelligent conversations has been around for a very long time and has regularly been part of popular culture. One of the most well-known examples of an AI able to take instructions and produce output accordingly is JARVIS from the Iron Man series of comics. With the latest possibilities enabled by generative AI, building such systems is very much a realistic task, with efforts already underway.
In this section, we shall see how we can have conversations with generative AI models that mimic human-like understanding and decision making skills.
First, click on the Create New button on the top left of the MakerSuite interface and select the Chat prompt. You will be able to see the blank interface for designing a Chat prompt, as shown below:
One immediate change to notice is that there is no longer a Run button at the bottom of the UI. This has been shifted to the Test your prompt section with a chat box-like interface, and the message send button for the chat box functions as the Run button.
On the left, the interface section reads “Write your prompt examples”; we’ll call this the prompt examples section. Also take note of the Context field available in this section, which can be used to set the rules of the interaction and the format in which the output is expected.
Now, let us design a chat prompt, as shown in the table below:
field | value |
Context | You're a banker at the Gringotts bank, set in the Wizarding world of Harry Potter. |
User | I wish to access my account |
Model | Very well, please present your key. |
User | How safe are the vaults at Gringotts? |
Model | Gringotts' vaults are considered extremely safe. Protected by complex magic, various creatures, and intricate security measures, these vaults are nearly impenetrable. Unauthorized access is extraordinarily challenging and dangerous, as demonstrated multiple times in the series. This reputation contributes to the bank's trustworthiness among wizards. |
We expect the model to pretend to be a banker at the Gringotts bank, referenced from the popular Harry Potter book series. Since it’s a fictional world and we expect the conversation to similarly not be bound to the real world, we should increase the model temperature, allowing it to be more creative. For this example, let’s set the model temperature to 0.7.
Let us try having a conversation with the model. This is the conversation that happens with the model, in our case:
We observe that although we have not provided the model with an example of how to respond when the user says they do not have the key, it correctly handles the response based on its existing knowledge about Gringotts Bank’s policies.
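For completeness, here is a minimal sketch of the same chat setup made directly against the PaLM API’s chat model with the google-generativeai SDK; the context and example turn mirror the table above, while the follow-up lost-key message is a hypothetical turn of our own for illustration.

```python
# Hedged sketch: the Gringotts chat prompt via the PaLM API chat model.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")

response = palm.chat(
    model="models/chat-bison-001",
    context=("You're a banker at the Gringotts bank, set in the "
             "Wizarding world of Harry Potter."),
    examples=[
        # (user message, model reply) pairs, as entered in the prompt examples section
        ("I wish to access my account", "Very well, please present your key."),
    ],
    messages=["How safe are the vaults at Gringotts?"],
    temperature=0.7,  # a more creative, in-character banker
)
print(response.last)  # the model's latest reply

# The returned object keeps the conversation state, so a follow-up turn such as
# the lost-key scenario can be sent with reply():
response = response.reply("I'm afraid I do not have my key with me.")
print(response.last)
```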
Now that we have covered the different types of prompts available in MakerSuite, let’s explore how we can use them via code, making direct calls to the PaLM API.
Anubhav Singh, Co-founder of Dynopii and a Google Developer Expert in Google Cloud, has been a developer since the pre-Bootstrap era and has extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." He co-organizes the TFUG Kolkata community and formerly led the team at GDG Cloud Kolkata. Anubhav is often found discussing system architecture, machine learning, and web technologies.