Building Applications in the AI Era

1. Overview

In this lab, you will use Google's generative AI products to build infrastructure in Google Cloud with the aid of Gemini Cloud Assist, query BigQuery data using natural language to SQL features of Data Canvas, write code in Colab Enterprise Jupyter notebooks and in Eclipse Theia (Visual Studio Code) with the help of Gemini Code Assist, and integrate AI search and chat features built on Cloud Storage and BigQuery grounding sources in Vertex AI Agent Builder.

Our goal is to create a recipes and cooking website called AI Recipe Haven. The site will be built in Python and Streamlit and will contain two major pages. Cooking Advice will host a chatbot we will create using Gemini and a Vertex AI Agent Builder grounded source tied to a group of cookbooks, and it will offer cooking advice and answer cooking related questions. Recipe Search will be a search engine fed by Gemini, this time grounded in a BigQuery recipe database.

If you get hung up on any of the code in this exercise, solutions for all code files are located in the exercise GitHub repo on the solution branch.

Objectives

In this lab, you learn how to perform the following tasks:

  • Activate and use Gemini Cloud Assist
  • Create a search app in Vertex AI Agent Builder for the cooking advice chatbot
  • Load and clean data in a Colab Enterprise notebook, with help from Gemini Code Assist
  • Create a search app in Vertex AI Agent Builder for the recipe generator
  • Frame out the core Python and Streamlit web application, with a little Gemini help
  • Deploy the web application to Cloud Run
  • Connect the Cooking Advice page to our cookbook-search Agent Builder app
  • (Optional) Connect the Recipe Search page to the recipe-search Agent Builder app
  • (Optional) Explore the final application

2. Prerequisites

  1. If you do not already have a Google account, you must create a Google account.
    • Use a personal account instead of a work or school account. Work and school accounts may have restrictions that prevent you from enabling the APIs needed for this lab.

3. Project setup

  1. Sign in to the Google Cloud Console.
  2. Enable billing in the Cloud Console.
    • Completing this lab should cost less than $1 USD in Cloud resources.
    • You can follow the steps at the end of this lab to delete resources to avoid further charges.
    • New users are eligible for the $300 USD Free Trial.
    • Attending a virtual hands-on lab event? A $5 USD credit may be available.
  3. Create a new project or choose to reuse an existing project.
  4. Confirm billing is enabled in My projects in Cloud Billing
    • If your new project says Billing is disabled in the Billing account column:
      1. Click the three dots in the Actions column
      2. Click Change billing
      3. Select the billing account you would like to use
    • If you are attending a live event, the account will likely be named Google Cloud Platform Trial Billing Account

4. Activate and use Gemini Cloud Assist

In this task we will activate and use Gemini Cloud Assist. While working in the Google Cloud Console, Gemini Cloud Assist can offer advice, help you through building, configuring, and monitoring your Google Cloud infrastructure, and can even suggest gcloud commands and write Terraform scripts.

  1. To activate Cloud Assist for use, click into the Search box at the top of the Cloud Console UI and select Ask Gemini or Ask Gemini for Cloud console.
  2. Scroll to the Required API section of the page and Enable the Gemini for Google Cloud API.
  3. If you don't immediately see a chat interface, click Start chatting. Start by asking Gemini to explain some of the benefits of using Cloud Shell Editor. Take a few minutes to explore the generated response.
  4. Next, ask about the benefits of Agent Builder and how it can help ground generative responses.
  5. Finally, let's look at a comparison. In the Gemini chat window of Google Cloud Console, ask the following question:
    What are the major steps to creating a search app grounded in a GCS data source using Vertex AI Agent builder?
    

5. Create a search app in Vertex AI Agent Builder for the cooking advice chatbot

The web site we are building will have a cooking advice page containing a chatbot designed to help users find answers to cooking related questions. It will be powered by Gemini grounded in a source containing 70 public-domain cookbooks. The cookbooks will act as the source of truth Gemini uses when answering questions.

  1. Use the Cloud Console search box to navigate to Vertex AI. From the Dashboard, click Enable All Recommended APIs. This may take a few minutes. If you get a popup box about the Vertex AI API itself needing enabling, please Enable it as well. Once the APIs are enabled, you can move to the next step.
  2. Use search to navigate to Agent Builder then Continue and Activate the API.
  3. As Gemini suggested in our earlier advice seeking, creating a search app in Agent Builder starts with the creation of an authoritative data source. When the user searches, Gemini understands the question and how to compose intelligent responses, but it will look to the grounded source for the information used in that response, rather than pulling from its innate knowledge. From the left-hand menu, navigate to Data Stores and Create Data Store.
  4. The public domain cookbooks we are using to ground our cooking advice page are currently in a Cloud Storage bucket in an external project. Select the Cloud Storage source type.
  5. Examine but don't change the default options related to the type of information we are importing. Leave the import type set to Folder and for the bucket path use: labs.roitraining.com/labs/old-cookbooks, then Continue.
  6. Name the data store: old-cookbooks. Click EDIT and change the ID to old-cookbooks-id, then Create the data store.

Vertex AI Agent Builder supports several app types, and the Data Store acts as the source of truth for each. Search apps are good for general use and search. Chat apps are for generative flows in Dialogflow-driven chatbot/voicebot applications. Recommendations apps help create better recommendation engines. And Agent apps are for creating GenAI-driven agents. Eventually, the Agent type would probably serve us best for what we want to do, but since that product is currently in Preview, we'll stick with the Search app type.

  1. Use the left-side menu to navigate to Apps, then click Create A New App.
  2. Click Create on the Search for your website card. Name the app cookbook-search. Click Edit and set the app ID to cookbook-search-id. Set the company to Google and click Continue.
  3. Check the old-cookbooks data store you created a few steps ago and Create the Search App.

If you examine the Activity tab, you'll likely see that the cookbooks are still importing and indexing. It will take 5+ minutes for Agent Builder to index the thousands of pages contained in the 70 cookbooks we've given it. While it's working, let's load and clean some recipe database data for our recipe generator.

6. Load and clean data in a Colab Enterprise notebook, with help from Gemini Code Assist

Google Cloud offers a couple of major ways you can work with Jupyter notebooks. We are going to use Google's newest offering, Colab Enterprise. Some of you may be familiar with Google's Colab product, commonly used by individuals and organizations who would like to experiment with Jupyter notebooks in a free environment. Colab Enterprise is a commercial Google Cloud offering that's fully integrated with the rest of Google's cloud products and which takes full advantage of the security and compliance capabilities of the GCP environment.

One of the features Colab Enterprise offers is integration with Google's Gemini Code Assist. Code Assist may be used in a number of different code editors and can offer advice as well as seamless inline suggestions while you code. We will leverage this generative assistant while we wrangle our recipe data.

  1. Use search to navigate to Colab Enterprise and click Create notebook. If you get an offer to experiment with new Colab features, dismiss it. To get the runtime, the compute power behind the notebook, up and going, click Connect in the upper right corner of your new notebook.
  2. Click File > Rename to rename the notebook to Data Wrangling.
  3. Click + Text to create a new text box, and use the up arrow to move it so it's the first cell on the page.
  4. Edit the text box and enter:
    # Data Wrangling
    
    Import the Pandas library
    
  5. In the code block below the text block you just created, start typing imp and Gemini Code Assist should suggest the rest of the import in grey. Press tab to accept the suggestion.
    import pandas as pd
    
  6. Below the import code box, create another text box and enter:
    Create a Pandas DataFrame from: gs://labs.roitraining.com/labs/recipes/recipe_dataset.csv. View the first few records.
    
  7. Create and edit another code block. Again, start typing df = and examine the code Gemini Code Assist generates. If you see an autocomplete droplist of Python keywords over the generated suggestion, hit Escape to see the light grey suggested code. Again, Tab to accept the suggestion. If your suggestion didn't contain the head() function call, add it.
    df = pd.read_csv('gs://labs.roitraining.com/labs/recipes/recipe_dataset.csv')
    df.head()
    
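    The gs:// read works because pandas hands Cloud Storage paths off to the gcsfs filesystem library behind the scenes. If you want to see the same read_csv pattern locally, you can feed it an in-memory file instead; the columns below are stand-ins for illustration, not the dataset's guaranteed schema:

    ```python
    import io
    import pandas as pd

    # Locally, any file-like object works the same way a gs:// path does;
    # these column names are placeholders, not the real dataset schema.
    csv_text = """Unnamed: 0,title,ingredients,link
    0,Simple Toast,"bread, butter",example.com/toast
    1,Basic Rice,"rice, water, salt",example.com/rice
    """

    df = pd.read_csv(io.StringIO(csv_text))
    print(df.head())
    print(df.shape)  # (rows, columns)
    ```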
  8. Click into your first code cell, where you imported Pandas, and use the Commands menu or keyboard to run the selected cell. On the keyboard, Shift+Enter will run the cell and shift focus to the next cell, creating one if needed. Wait for the cell to execute before moving on.
    NOTE: You will see [ ] just to the left of a cell that hasn't been executed. While a cell is executing, you'll see a spinning, working animation. Once the cell finishes, a number will appear, like [13].
  9. Execute the cell that loads the CSV into the DataFrame. Wait for the file to load and examine the first five rows of data. This is the recipe data we will load into BigQuery and we'll eventually use it to ground our recipe generator.
  10. Create a new code block and enter the below comment. After typing the comment, move to the next code line and you should receive the suggestion df.columns. Accept it then run the cell.
    # List the current DataFrame column names
    
    We've just demonstrated that you really have two choices for how you get help from Gemini Code Assist in a Jupyter notebook: text cells above code cells, or comments inside the code cell itself. Comments inside code cells work well in Jupyter notebooks, and this approach will also work in any other IDE that supports Gemini Code Assist.
  11. Let's do a little column cleanup. Rename the column Unnamed: 0 to id, and link to uri. Use your choice of prompt > code techniques to create the code, then run the cell when satisfied.
    # Rename the column 'Unnamed: 0' to 'id' and 'link' to 'uri'
    df.rename(columns={'Unnamed: 0': 'id', 'link': 'uri'}, inplace=True)
    
  12. Remove the source and NER columns and use head() to view the first few rows. Again, get Gemini to help. Run the last two lines and examine the results.
    # Remove the source and NER columns
    df.drop(columns=['source', 'NER'], inplace=True)
    df.head()
    
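    If you'd like to sanity-check the rename and drop steps outside the lab environment, the same two calls can be run on a toy DataFrame. The title and ingredients columns here are assumptions about the dataset, included only to make the sketch self-contained:

    ```python
    import pandas as pd

    # Toy frame mirroring the columns the lab manipulates; title and
    # ingredients are assumed columns, not guaranteed by the dataset.
    df = pd.DataFrame({
        'Unnamed: 0': [0, 1],
        'title': ['Toast', 'Rice'],
        'ingredients': ['bread; butter', 'rice; water'],
        'link': ['example.com/1', 'example.com/2'],
        'source': ['siteA', 'siteB'],
        'NER': ['bread', 'rice'],
    })

    # Same cleanup as the lab: rename two columns, then drop two others.
    df.rename(columns={'Unnamed: 0': 'id', 'link': 'uri'}, inplace=True)
    df.drop(columns=['source', 'NER'], inplace=True)

    print(list(df.columns))  # ['id', 'title', 'ingredients', 'uri']
    ```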
  13. Let's see how many records are in our dataset. Again, start with your choice of prompting technique and see if you can get Gemini to help you generate the code.
    # Count the records in the DataFrame
    df.shape # count() will also work
    
  14. 2.23 million records is probably more recipes than we have time for. The indexing process in Agent Builder would likely take too long for our exercise today. As a compromise, let's sample out 150,000 recipes and work with that. Use your prompt > code approach to take the sample and store it in a new DataFrame named dfs (s for small).
    # Sample out 150,000 records into a DataFrame named dfs
    dfs = df.sample(n=150000)
    
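    A note on sample(): it draws rows without replacement, and by default the sample changes every time the cell runs. If you want a reproducible sample, pass random_state, as in this small local sketch (n=100 here instead of 150,000):

    ```python
    import pandas as pd

    df = pd.DataFrame({'id': range(1000), 'title': ['recipe'] * 1000})

    # random_state makes the draw repeatable; sampling is without
    # replacement, so no row appears twice.
    dfs = df.sample(n=100, random_state=42)

    print(len(dfs))             # 100
    print(dfs['id'].is_unique)  # True
    ```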
  15. Our recipe source data is ready to load into BigQuery. Before we do the load, let's head over to BigQuery and prep a dataset to hold our table. In the Google Cloud Console use the Search Box to navigate to BigQuery. You might right-click BigQuery and open it in a new browser tab.
  16. If it's not already visible, open the Gemini AI Chat panel using the Gemini logo in the upper right of the Cloud Console. If you are asked to enable the API again, either click Enable or refresh the page. Run the prompt: What is a dataset used for in BigQuery? After you've explored the response ask, How can I create a dataset named recipe_data using the Cloud Console? Compare the results to the following few steps.
  17. In the BigQuery Explorer pane, click the triple dot View actions menu next to your project ID. Then select Create dataset.
  18. Give the dataset an ID of recipe_data. Leave the location type set to US and Create Dataset. If you receive an error that the dataset already exists, simply move on. With the dataset created in BigQuery, let's switch back to our notebook and do the insert.
  19. Switch back to your Data Wrangling notebook in Colab Enterprise. In a new code cell, create a variable named project_id and use it to hold your current project ID. Look in the upper left of these instructions, below the End Lab button, and you'll find the current project ID. It's also on the Cloud Console home page if you like. Assign the value into your project_id variable and run the cell.
    # Create a variable to hold the current project_id
    project_id='YOUR_PROJECT_ID'
    
  20. Use the prompt > code approach to create a block of code that will insert the DataFrame dfs into a table named recipes in the dataset we just created recipe_data. Run the cell.
    dfs.to_gbq(destination_table='recipe_data.recipes', project_id=project_id, if_exists='replace')
    

7. Create a search app in Vertex AI Agent Builder for the recipe generator

Excellent, with our table of recipe data created, let's use it to build a grounded data source for our recipe generator. The approach we will use will be similar to what we did for our cooking chatbot. We will use Vertex AI Agent Builder to create a Data Store, and then use that as the source of truth for a Search App.

If you like, feel free to ask Gemini in the Google Cloud Console to remind you of the steps to create an Agent Builder search app, or you can follow the steps listed below.

  1. Use Search to navigate to Agent Builder. Open Data Stores and Create Data Store. This time, select the BigQuery data store type.
  2. In the table selection cell, press Browse and search for recipes. Select the radio button next to your table. If you see recipes from other qwiklabs-gcp-... projects, make sure to select the one that belongs to you.
    NOTE: If you click on recipes instead of selecting the radio button next to it, a new browser tab will open and take you to the table overview page in BigQuery. Just close that tab and select the radio button in Agent Builder.
  3. Examine but don't change the rest of the default options, then Continue.
  4. In the schema review page, examine the initial default configurations, but don't change anything. Continue
  5. Name the datastore recipe-data. Edit the datastore ID and set it to recipe-data-id. Create the Data Store.
  6. Navigate to Apps using the left hand navigation menu and Create A New App.
  7. Click Create on the Search for your website card. Name the app recipe-search and click EDIT to set the ID to recipe-search-id. Set the company name to Google and Continue.
  8. This time, check the recipe-data data store. Create the app.

It will take a while for our database table to index. While it does, let's experiment with BigQuery's new Data Canvas and see if we can find an interesting recipe or two.

  1. Use the search box to navigate to BigQuery. At the top of BigQuery Studio, click the down arrow next to the right-most tab and select Data canvas. Set the region to us-central1.
  2. Click Search for data. In the Data canvas search box, search for recipes, press Enter/Return to search, and click the Add to canvas button next to your table name.
  3. A visual representation of your recipes table will be loaded into the BigQuery Data canvas. You can explore the table's schema, preview the data in the table, and examine other details. Below the table representation, click Query.
  4. The canvas will load a more or less typical BigQuery query dialog with one addition: above the query window is a text box you can use to prompt Gemini for help. Let's see if we can find some cake recipes in our sample. Run the following prompt (by typing the text and pressing Enter/Return to trigger the SQL generation):
    Please select the title and ingredients for all the recipes with a title that contains the word cake.
    
  5. Look at the SQL generated. Once you're satisfied, Run the query.
  6. Not too shabby! Feel free to experiment with a few other prompts and queries before moving on. When you experiment, try less specific prompts to see what works, and what doesn't. As an example, this prompt:
    Do I have any chili recipes?
    
    (Don't forget to run the new query.) It returned a list of chili recipes but left out the ingredients, until I modified it to:
    Do I have any chili recipes?  Please include their title and ingredients.
    
    (Yes, I say please when I prompt. My Mama would be so proud.) I noticed that one chili recipe contained mushrooms, and who wants that in chili? I asked Gemini to help me exclude those recipes.
    Do I have any chili recipes?  Please include their title and ingredients, and ignore any recipes with mushrooms as an ingredient.
    
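    The filtering logic Gemini generates in SQL is easy to mirror in pandas if you'd rather test it on the DataFrame side first. This sketch, on toy data, applies the same chili-with-no-mushrooms condition from the last prompt:

    ```python
    import pandas as pd

    # Toy data standing in for the recipes table.
    recipes = pd.DataFrame({
        'title': ['White Chili', 'Chocolate Cake', 'Mushroom Chili'],
        'ingredients': ['beans, chicken', 'flour, cocoa', 'mushrooms, beef'],
    })

    # Title contains 'chili', and ingredients do not mention mushrooms.
    mask = (recipes['title'].str.contains('chili', case=False)
            & ~recipes['ingredients'].str.contains('mushroom', case=False))
    chili = recipes.loc[mask, ['title', 'ingredients']]
    print(chili)  # only 'White Chili' survives both conditions
    ```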

8. Open Cloud Shell Editor

  1. Navigate to Cloud Shell Editor
  2. If the terminal doesn't appear on the bottom of the screen, open it:
    • Click the hamburger menu
    • Click Terminal
    • Click New Terminal
  3. In the terminal, set your project with this command:
    • Format:
      gcloud config set project [PROJECT_ID]
      
    • Example:
      gcloud config set project lab-project-id-example
      
    • If you can't remember your project ID:
      • You can list all your project IDs with:
        gcloud projects list | awk '/PROJECT_ID/{print $2}'
        
  4. If prompted to authorize, click Authorize to continue.
  5. You should see this message:
    Updated property [core/project].
    
    If you see a WARNING and are asked Do you want to continue (Y/N)?, then you have likely entered the project ID incorrectly. Press N, press Enter, and try to run the gcloud config set project command again.

9. Enable APIs

In the terminal, enable the APIs:

gcloud services enable \
  compute.googleapis.com \
  sqladmin.googleapis.com \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com \
  networkconnectivity.googleapis.com \
  servicenetworking.googleapis.com \
  cloudaicompanion.googleapis.com

If prompted to authorize, click Authorize to continue.

This command may take a few minutes to complete, but it should eventually produce a successful message similar to this one:

Operation "operations/acf.p2-73d90d00-47ee-447a-b600" finished successfully.

10. Frame out the core Python and Streamlit web application, with a little Gemini help

With both of our Vertex AI Agent Builder data stores indexing and with our search apps just about ready to roll, let's get to building our web application.

We will be leveraging Gemini Code Assist while we work. For more information on using Gemini Code Assist in Visual Studio Code, see the documentation here.

  1. In the Cloud Shell Editor terminal, run this command to clone the recipe app repository.
    git clone https://github.com/haggman/recipe-app
    
  2. Run this command to open the application folder in Cloud Shell Editor.
    cloudshell open-workspace recipe-app/
    
  3. Before we explore the cloned folder and start working on our web application, we need to get the editor's Cloud Code plugin logged into Google Cloud and we need to enable Gemini. Let's do that now. In the bottom left of your editor, click Cloud Code - Sign in. If you don't see the link, wait a minute and check again.
  4. The terminal window will display a long URL. Open the URL in the browser and run through the steps to grant Cloud Code access to your Google Cloud environment. In the final dialog, Copy the verification code and paste it back into the waiting terminal window in your Cloud Shell Editor browser tab.
  5. After a few moments, the Cloud Code link at the bottom left of your editor will change to Cloud Code - No Project. Click the new link to select a project. The command palette should open at the top of the editor. Click Select a Google Cloud project and select your project. After a few moments, the link in the lower left of your editor will update to display your project ID. This indicates that Cloud Code is successfully attached to your working project.
  6. With Cloud Code connected to your project, you can now activate Gemini Code Assist. In the lower right of your editor interface, click the crossed-out Gemini logo. The Gemini Chat pane will open on the left of the editor. Click Select a Google Cloud Project. When the command palette opens, select your project. If you've followed the steps correctly (and Google hasn't changed anything), you should now see an active Gemini chat window.
  7. Excellent, with our terminal, Gemini chat, and Cloud Code configurations all set, open the Explorer tab and take a few minutes to explore the files in the current project.
  8. In the Explorer open your requirements.txt file for editing. Switch to the Gemini chat pane and ask:
    From the dependencies specified in the requirements.txt file, what type of application are we building?
    
  9. So, we are building an interactive web application using Python and Streamlit that interacts with Vertex AI and Discovery Engine, nice. For now, let's focus on the web application components. As Gemini says, Streamlit is a framework for building data-driven web applications in Python. Now ask:
    Does the current project's folder structure seem appropriate for a Streamlit app?
    
    This is where Gemini tends to have issues. Gemini can access the file you have currently open in the editor, but it can't actually see the whole project. Try asking this:
    Given the below, does the current project's file and folder structure seem appropriate for a Streamlit app?
    - build.sh
    - Home.py
    - requirements.txt
    - pages
    -- Cooking_Advice.py
    -- Recipe_Search.py
    
    Get a better answer?
  10. Let's get some more information about Streamlit:
    What can you tell me about Streamlit?
    
    Nice, so we can see Gemini is offering us a nice overview including pros and cons.
  11. If you wanted to explore the cons, you could ask:
    What are the major downsides or shortcomings?
    
    Notice, we didn't have to say, "of Streamlit," because Gemini chat is conversational (multi-turn). Gemini knows what we've been talking about because we are in a chat session. If at any point you want to wipe the Gemini chat history clean, use the trashcan icon at the top of the Gemini code chat window.

11. Deploy the web application to Cloud Run

Excellent, we have our core application structure in place, but will it all work? Better yet, where should we host it in Google Cloud?

  1. In the Gemini chat window, ask:
    If I containerize this application, what compute technologies in Google Cloud would be best for hosting it?
    
  2. Remember, if you weren't already working in your IDE, you could also ask Gemini Cloud Assist. Open the Google Cloud Console, then open Gemini Cloud Assist and ask:
    If I have a containerized web application, where would be the best place to run it in Google Cloud?
    
    Were the two sets of advice the same? Do you agree/disagree with any of the advice? Remember, Gemini is a generative AI assistant, and like a human assistant, you won't always agree with everything it says. Still, having that helper always at your side while you work in Google Cloud and in your code editor can make you much more efficient.
  3. For a stateless short-lived containerized web application, Cloud Run would be a great option. In the Gemini chat window of your code editor, try the prompt:
    What steps would be required to run this application in Cloud Run?
    
  4. It looks like the first thing we need to do is create a Dockerfile. Using the editor, create a file named Dockerfile in the root of your project folder. Make sure you don't accidentally place it in the pages folder. Open the file for editing.
  5. Let's use the side Gemini chat panel to create our Dockerfile. Use a prompt like the one below. When the results are displayed in chat, use the + next to the copy icon just above the suggested Dockerfile to insert the suggested code into the Dockerfile.
    Create a Dockerfile for the application in the current folder. The dependencies are defined in requirements.txt and I want you to use the Python 3 slim bookworm base image.
    
    Gemini doesn't always return the same response to the same prompt. The first time I asked Gemini for a Dockerfile I got the exact file I'm going to suggest you use. Just now I received the suggestion:
    # Base image
    FROM python:3-bookworm-slim
    
    # Set working directory
    WORKDIR /app
    
    # Install dependencies
    RUN apt-get update && apt-get install -y \
        build-essential \
        libpq-dev \
        gcc \
        python3-dev \
        && rm -rf /var/lib/apt/lists/*
    
    # Install pip and virtualenv
    RUN pip install --upgrade pip virtualenv
    
    # Create virtual environment
    RUN python3 -m venv venv
    
    # Activate virtual environment
    WORKDIR /app/venv/bin
    RUN . activate
    
    # Install Streamlit and libraries from requirements.txt
    RUN pip install -r requirements.txt
    
    # Copy application files
    COPY . /app
    
    # Expose port 8501 for Streamlit
    EXPOSE 8501
    
    # Start Streamlit app
    CMD ["streamlit", "run", "main.py"]
    
    That's a heck of a Dockerfile. I'd simplify it a bit. We don't need the apt-get section as anything needed for Python is already in our base image. Also, using a virtual environment in a Python container is a waste of space, so I'd remove that. The expose command isn't strictly necessary, but it's fine. Also, it's trying to start main.py which I don't have.
  6. In the recipe-app folder, create a file called Dockerfile and paste these contents:
    FROM python:3.11-slim-bookworm
    
    WORKDIR /app
    
    COPY requirements.txt .
    RUN pip install --no-cache-dir --upgrade pip && \
        pip install --no-cache-dir -r requirements.txt
    
    COPY . .
    
    CMD ["streamlit", "run", "Home.py"]
    
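    One optional refinement: gcloud run deploy --source uploads the entire folder to Cloud Build. A .dockerignore file in the project root keeps files like Git history and local caches out of the build context and the image. The entries below are illustrative, not required by the lab:

    ```
    # Keep the Cloud Build upload and the image small; entries are examples.
    .git
    __pycache__/
    *.pyc
    venv/
    ```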
  7. Gemini can operate via the chat window, but it can also work directly in your code file using comments, like we used in the Data Wrangling notebook, and it can also be invoked with Ctrl+I on Windows or Command+I on Mac. Click somewhere in the Dockerfile, then activate Gemini using the appropriate keyboard shortcut.
  8. At the prompt enter the below. Examine and Accept the change.
    Please comment the current file.
    
    How cool is that?! How many times have you had to work with someone else's code, only to have to waste time gaining a base understanding of their commentless work before you can even start making your changes. Gemini to the rescue!
  9. Now ask Gemini how you could use Cloud Run to build and deploy a new image named recipe-web-app from the Dockerfile in the current folder.
    How could I use gcloud to build a new Cloud Run service named recipe-web-app from the current directory?
    
  10. Let's build and deploy our application. In the terminal window execute the gcloud run deploy command
    gcloud run deploy recipe-web-app \
        --allow-unauthenticated \
        --source=. \
        --region=us-central1 \
        --port=8501
    
    If you are presented with a prompt asking to create an Artifact Registry repository, press enter/return
    Deploying from source requires an Artifact Registry Docker repository to store built containers. A repository
    named cloud-run-source-deploy in region us-central1 will be created.
    
    Do you want to continue (Y/n)?
    
  11. If you watch the build process, first it creates the Artifact Registry Docker repository. Then it uses Cloud Build to create the container image from the Dockerfile in the local folder. Lastly, the Docker image is deployed into a new Cloud Run service. At the end of the script you'll get a Cloud Run test URL to use.

Open the returned link in a new tab of your browser. Take a moment and explore the application's structure and pages. Nice, now we need to hook in our generative AI functionality.

12. Connect the Cooking Advice page to our cookbook-search Agent Builder app

We have the framework for the web application running, but we need to connect the two work pages to our two Vertex AI Agent Builder search apps. Let's start with Cooking Advice.

  1. Leave your Cloud Shell Editor tab open. In the Google Cloud console, use search to navigate to Chat in Vertex AI.
  2. In the right-hand settings pane, set the model to gemini-1.5-flash-002. Slide the output token limit up to the max so the model can return longer answers if needed. Open the Safety Filter Settings. Set Hate speech, Sexually explicit content, and Harassment content to Block some. Set Dangerous content to Block few and Save. We're setting Dangerous content a bit lower because talking about knives and cutting can be misinterpreted by Gemini as violence.
  3. Slide on the toggle to enable Grounding, then click Customize. Set the grounding source to Vertex AI search, and for the datastore path use the following. Change YOUR_PROJECT_ID to the project ID found up near the End Lab button in these instructions, then Save the grounding settings.
    projects/YOUR_PROJECT_ID/locations/global/collections/default_collection/dataStores/old-cookbooks-id
    
    NOTE: If you get an error then you either didn't change the project ID to your actual project ID, or you may have missed the step where you changed the old-cookbooks Agent Builder Data Store ID. Check your Agent Builder > Data Stores > old-cookbooks for its actual Data store ID.
  4. Test a couple of chat messages. Perhaps start with the below. Try a few others if you like.
    How can I tell if a tomato is ripe?
    
  5. The model works, now let's experiment with the code. Click Clear Conversation so our conversations don't become part of the code, then click Get Code.
  6. At the top of the code window, press Open Notebook so we can experiment and perfect the code in Colab Enterprise before integrating it into our app.
  7. Take a few minutes to familiarize yourself with the code. Let's make a couple of changes to adapt it to what we want. Before we start, run the first code cell to connect to the compute and install the AI Platform SDK. After the block runs you will be prompted to restart the session. Go ahead and do that.
  8. Move to the code we pulled out of Vertex AI Studio. Rename the method multiturn_generate_content to start_chat_session.
  9. Scroll to the model = GenerativeModel( method call. The existing code defines the generation_config and safety_settings but doesn't actually use them. Modify the creation of the GenerativeModel so it resembles:
    model = GenerativeModel(
        "gemini-1.5-flash-002",
        tools=tools,
        generation_config=generation_config,
        safety_settings=safety_settings,
    )
    
  10. Lastly, add a final line to the method, just below chat = model.start_chat(), so the function returns the chat object. The finished function should look like the code below. NOTE: DO NOT COPY this code into your notebook; it is here simply as a sanity check.
    def start_chat_session():
        vertexai.init(project="qwiklabs-gcp-02-9a7298ceaaec", location="us-central1")
        tools = [
            Tool.from_retrieval(
                retrieval=grounding.Retrieval(
                    source=grounding.VertexAISearch(datastore="projects/qwiklabs-gcp-02-9a7298ceaaec/locations/global/collections/default_collection/dataStores/old-cookbooks-id"),
                )
            ),
        ]
        model = GenerativeModel(
            "gemini-1.5-flash-002",
            tools=tools,
            generation_config=generation_config,
            safety_settings=safety_settings,
        )
        chat = model.start_chat()
        return chat
    
  11. Scroll to the bottom of the code cell and change the final line so it calls the new function name and stores the returned object in a variable named chat. Once you are satisfied with your changes, run the cell.
    chat = start_chat_session()
    
  12. Create a new code cell and add the comment # Use chat to invoke Gemini and print out the response. Move to the next line, type resp, and Gemini should autocomplete the block for you. Update the prompt to How can I tell if a tomato is ripe?, then run the cell.
    response = chat.send_message("How can I tell if a tomato is ripe?")
    print(response)
    
  13. That's the response all right, but the part we really want is the nested text field. Modify the code block to print just that section, like so:
    response = chat.send_message("How can I tell if a tomato is ripe?")
    print(response.candidates[0].content.parts[0].text)
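The chained lookup works because it mirrors the nesting of the response object: a list of candidates, each with content, which holds a list of parts. As a rough illustration (a mock of the assumed shape, not the real SDK objects):

```python
# Mock of the assumed response shape, illustrating why
# response.candidates[0].content.parts[0].text reaches the answer.
from types import SimpleNamespace

part = SimpleNamespace(text="Look for deep, even color and a slight give when squeezed.")
candidate = SimpleNamespace(content=SimpleNamespace(parts=[part]))
response = SimpleNamespace(candidates=[candidate])

# Same drill-down as in the notebook cell:
answer = response.candidates[0].content.parts[0].text
print(answer)
```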
    
  14. Good, now that we have working chat code, let's integrate it into our web application. Copy the entire contents of the code cell that creates the start_chat_session function (we won't need the test cell). If you click into the cell, you can use the triple-dot menu in its upper right corner to copy from there.
  15. Switch to your Cloud Shell Editor tab and open pages/Cooking_Advice.py for editing.
  16. Locate the comment:
    #
    # Add the code you copied from your notebook below this message
    #
    
  17. Paste your copied code just below the Add the code comment. Nice, now we have the section that drives the chat engine via a grounded call to Gemini. Next, let's integrate it into Streamlit.
  18. Locate the section of commented code directly below the comment:
    #
    # Here's the code to setup your session variables
    # Uncomment this block when instructed
    #
    
  19. Uncomment this section of code (up to the next Setup done, let's build the page UI comment) and explore it. It creates or retrieves the chat and history session variables.
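Conceptually, the create-or-retrieve pattern works because st.session_state behaves like a dictionary that survives Streamlit's script reruns. A minimal sketch using a plain dict as a stand-in (hypothetical, not the lab's exact code):

```python
# Plain-dict stand-in for st.session_state, illustrating the
# create-or-retrieve pattern used for the chat and history variables.
session_state = {}

def get_or_create(state, key, factory):
    # Build the value only on the first run; later reruns reuse it.
    if key not in state:
        state[key] = factory()
    return state[key]

chat = get_or_create(session_state, "chat", lambda: "chat-session-object")
history = get_or_create(session_state, "history", list)

# A simulated second rerun returns the same object instead of rebuilding it.
assert get_or_create(session_state, "chat", lambda: "new") == "chat-session-object"
```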
  20. Next, we need to integrate the history and chat functionality into the UI. Scroll through the code until you locate the comment below.
    #
    # Here's the code to create the chat interface
    # Uncomment the below code when instructed
    #
    
  21. Uncomment the rest of the code below the comment and take a moment to explore it. If you like, highlight it and get Gemini to explain its functionality.
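In broad strokes, a Streamlit chat page renders the saved history, reads a new prompt, sends it to the model, and records both turns. A minimal stand-in sketch of that loop, with a fake send_message in place of the real chat object (hypothetical names, not the lab's exact code):

```python
# Sketch of one chat turn: record the user's prompt, call the model,
# record the model's reply, and hand the reply back for display.
def run_turn(history, prompt, send_message):
    history.append({"role": "user", "content": prompt})
    reply = send_message(prompt)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
# A fake send_message standing in for chat.send_message(...).
reply = run_turn(history, "How can I tell if a tomato is ripe?",
                 lambda p: f"(model answer to: {p})")
print(len(history))  # the question and the answer
```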
  22. Excellent, now let's build and deploy the application. When the URL comes back, launch the application and give the Cooking Advice page a try. Perhaps ask it about ripe tomatoes, or whether the bot knows a good way to prepare Brussels sprouts.
    gcloud run deploy recipe-web-app \
        --allow-unauthenticated \
        --source=. \
        --region=us-central1 \
        --port=8501
    

How cool is that! Your own personal AI cooking advisor :-)

13. (Optional) Connect the Recipe Search page to the recipe-search Agent Builder app

When we connected the Cooking Advice page to its grounded source, we did so using the Gemini API directly. For Recipe Search, let's call the Vertex AI Agent Builder search app itself.

  1. In your Cloud Shell Editor, open the pages/Recipe_Search.py page for editing. Investigate the structure of the page.
  2. Towards the top of the file, set your project ID.
  3. Examine the search_sample function. This code comes more or less directly from the Discovery Engine documentation here, and you can find a working copy in this notebook here. The only change I made was to return response.results instead of the full response. Without this change, the return type is a pager object designed to page through results, which our basic application doesn't need.
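The pager point is worth seeing concretely: the client's return value lazily walks pages of results, while .results is just the first page as a plain list. A toy stand-in (not the real Discovery Engine client) illustrates the difference:

```python
# Toy stand-in for a paging search response (not the real Discovery Engine
# client), illustrating why the lab returns response.results directly.
class FakeSearchPager:
    def __init__(self, pages):
        self.pages = pages

    @property
    def results(self):
        # Just the first page, as a plain list.
        return self.pages[0]

    def __iter__(self):
        # Iterating the pager walks through every page.
        for page in self.pages:
            yield from page

pager = FakeSearchPager([["recipe-1", "recipe-2"], ["recipe-3"]])
print(pager.results)  # first page only
print(list(pager))    # all results across pages
```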
  4. Scroll to the very end of the file and uncomment the entire section below Here are the first 5 recipes I found.
  5. Highlight the whole section you just uncommented and open Gemini Code Assist chat. Ask, Explain the highlighted code. If you don't have anything selected, Gemini can explain the whole file; if you highlight a section and ask Gemini to explain, comment, or improve it, Gemini will. Take a moment to read through the explanation. For what it's worth, a Colab Enterprise notebook is a great way to explore the Gemini APIs before you integrate them into your application. It's especially helpful for exploring some of the newer APIs, which may not be documented as well as they could be.
  6. At your editor terminal window, run build.sh to deploy the final application. Wait until the new version is deployed before moving to the next step.

14. (Optional) Explore the final application

Take a few minutes to explore the final application.

  1. In the Google Cloud console, use search to navigate to Cloud Run, then click into your recipe-web-app.
  2. Locate the application test URL (towards the top) and open it in a new browser tab.
  3. The application home page should appear. Note the basic layout and navigation provided by Streamlit, with the Python files from the pages folder displayed as navigation choices and Home.py loaded as the home page. Navigate to the Cooking Advice page.
  4. After a few moments the chat interface will appear. Again, note the nice core layout provided by Streamlit.
  5. Try a few cooking-related questions and see how the bot responds. Something like:
    Do you have any advice for preparing broccoli?
    
    How about a classic chicken soup recipe?
    
    Tell me about meringue.
    
  6. Now let's find a recipe or two. Navigate to the Recipe Search page and try a few searches. Something like:
    Chili con carne
    
    Chili, corn, rice
    
    Lemon Meringue Pie
    
    A dessert containing strawberries
    

15. Congratulations!

You have created an application leveraging Vertex AI Agent Builder applications. Along the way you've explored Gemini Cloud Assist, Gemini Code Assist, and the natural language to SQL features of BigQuery's Data Canvas. Fantastic job!

Clean up

Some of the resources used in this lab do not have a free tier and will continue to incur charges if you keep them. You can delete your Cloud project to avoid incurring additional charges.

While Cloud Run does not charge when the service is not in use, you might still be charged for storing the container image in Artifact Registry. Deleting your Cloud project stops billing for all the resources used within that project.

If you would like, delete the project:

gcloud projects delete $GOOGLE_CLOUD_PROJECT

You may also wish to delete unnecessary resources from your cloudshell disk. You can:

  1. Delete the codelab project directory:
    rm -rf ~/task-app
    
  2. Warning! This next action can't be undone! If you would like to delete everything in your Cloud Shell to free up space, you can delete your whole home directory. Make sure everything you want to keep is saved somewhere else first.
    sudo rm -rf $HOME