In this article, we’ll explore the basics of Generative AI Studio and how to run a language model within this suite, with a practical example.
Generative AI Studio is Google Cloud’s all-encompassing offering of generative AI-based services. It includes models of different types, allowing users to generate text, image, or audio content. In Generative AI Studio, or Gen AI Studio for short, users can rapidly prototype and test different prompts against the different model types to figure out which parameters and settings work best for their use cases. They can then easily move the tested configurations into the code bases of their solutions.
Model Garden, on the other hand, provides a collection of foundation and customized generative AI models that can be used directly as models in code or as APIs. The foundation models have been trained by Google itself, whereas the fine-tuned/task-specific models include models developed and trained by third parties.
Packaged within Vertex AI, Generative AI Studio on Google Cloud Platform provides low-code solutions for developing and testing invocations of Google’s AI models, which can then be used within customers’ solutions. As of August 2023, the following solutions are a part of the Generative AI Studio -
Let’s explore each one of these in detail.
The language models in Gen AI Studio are based on the PaLM 2 for Text models and currently come in two forms: “text-bison” and “chat-bison”. The first is the base model, which can perform any kind of text understanding and generation task. “Chat-bison” models, on the other hand, provide a conversational interface for interacting with the model, making them more suitable for tasks that require a back-and-forth conversation between the user and the model.
Another form of the PaLM 2 models is available as “code-bison”, which powers the Codey product suite. It deals with programming languages instead of human languages.
Let’s take a look at how we can use a language model in Gen AI Studio. Follow the steps below:
1. First, head over to https://console.cloud.google.com/vertex-ai/generative in your browser with a billing-enabled Google Cloud account. You will be able to see the Generative AI Studio dashboard.
2. Next, click “Open” in the card titled “Language”.
3. Then, click on “Text Prompt” to open the prompt builder interface. The interface should look similar to the image below, however, being an actively developed product, it may change in several ways in the future.
4. Now, let us write a prompt. For our example, we’ll instruct the model to fact check whatever is passed to it. Here’s a sample prompt:
You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User:
5. Now, write the user’s part as well and hit the Submit button. The last line of the prompt would now be:
User: I am eating an apple.
6. Observe the response. Then, change the user’s part to “I am an apple” and “I am a human”. Observe the response in each case. The following output table is expected:
Once we’re satisfied with the model’s responses to our prompt, we can move the model invocation into code. In our example, we’ll do it on Google Colaboratory. Follow the steps below:
1. Open Google Colaboratory by visiting: https://colab.research.google.com/
2. In the first cell, we’ll install the required libraries for using Gen AI Studio models:
%%capture
!pip install "shapely<2.0.0"
!pip install google-cloud-aiplatform --upgrade
3. Next, we’ll authenticate the Colab notebook so that it can access the Google Cloud resources available to the currently logged-in user.
from google.colab import auth as google_auth
google_auth.authenticate_user()
4. Then we import the required libraries.
import vertexai
from vertexai.language_models import TextGenerationModel
5. Now, we instantiate the Vertex AI client to work with the project. Be sure to replace PROJECT_ID with your own project’s ID on Google Cloud.
PROJECT_ID = "your-project-id"  # replace with your Google Cloud project ID
vertexai.init(project=PROJECT_ID, location="us-central1")
6. Let us now set the configurations the model will use when answering our prompts, and initialize the model client:
parameters = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 0,
    "top_p": 0.8,
    "top_k": 40
}
model = TextGenerationModel.from_pretrained("text-bison@001")
7. Now, we can call the model and observe the response by printing it:
response = model.predict(
    """You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User: I am a human""",
    **parameters
)
print(f"Response from Model: {response.text}")
You can similarly work with the other models available in Gen AI Studio. In this notebook, we’ve provided one example each of Language, Vision, and Speech model usage: GenAIStudio&ModelGarden.ipynb
Anubhav Singh, co-founder of Dynopii and a Google Developer Expert in Google Cloud, has been a developer since the pre-Bootstrap era, with extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." He co-organizes the TFUG Kolkata community and formerly led the team at GDG Cloud Kolkata. Anubhav is often found discussing system architecture, machine learning, and web technologies.