Conversation

@junaidaslam2006

Context
This tutorial serves as a developer-focused complement to the native MCP gateway work in google-deepmind/gemma. It teaches you how to leverage Gemma 3's official native control tokens (like <start_function_call>) to interact with standardized tool environments via MCP.

Key Features:
Native Implementation: Demonstrates how you can use Gemma 3's native tokens and reasoning flow for precise tool selection.

MCP Integration: Shows you how to define and use tools following the Model Context Protocol standard.

Interactive Agent Loop: Provides a Colab-ready simulation of an autonomous tool-calling session.

Verification
Verified in Google Colab (Python 3.10+).

Follows the Google Gemini Cookbook style guide (second-person perspective, H1 headers, and # @title usage).

@review-notebook-app
Copy link

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@github-actions github-actions bot added status:awaiting review PR awaiting review from a maintainer component:examples Issues/PR referencing examples folder labels Jan 7, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @junaidaslam2006, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive tutorial that guides developers through leveraging Gemma 3's native control tokens for tool-calling within the Model Context Protocol (MCP) framework. The tutorial focuses on practical implementation, covering tool definition, integration with Gemma 3's specific token format, and simulating an end-to-end agentic workflow, thereby enabling Gemma 3 to interact with external environments in a standardized manner.

Highlights

  • New Tutorial: Gemma 3 Native Tool-Calling via MCP: A new Jupyter notebook tutorial has been added, demonstrating how to integrate Gemma 3's native function calling capabilities with the Model Context Protocol (MCP).
  • MCP Tool Definition and Integration: The tutorial illustrates how to define local tools using the MCP standard and map these definitions to Gemma 3's native function calling format, utilizing tokens like <start_function_declaration>.
  • Autonomous Agent Loop Simulation: It provides a Colab-ready simulation of an autonomous tool-calling session, showcasing the full lifecycle of an MCP-enabled Gemma 3 interaction, including parsing model output and executing tools.
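The highlights above describe mapping an MCP tool definition into Gemma 3's token-wrapped declaration format. A minimal sketch of what such a mapping could look like follows; the helper name and the exact schema layout are assumptions, not the notebook's actual code (only the token names appear in this PR):

```python
import json

# Hypothetical sketch: wrapping an MCP-style tool definition in the
# declaration tokens mentioned in the tutorial. The schema layout here
# is an assumption for illustration only.
def format_tool_declaration(name: str, description: str, parameters: dict) -> str:
    schema = json.dumps(
        {"name": name, "description": description, "parameters": parameters}
    )
    return f"<start_function_declaration>{schema}<end_function_declaration>"

decl = format_tool_declaration(
    "write_file",
    "Create a text file with the given content.",
    {
        "type": "object",
        "properties": {
            "path": {"type": "string"},
            "content": {"type": "string"},
        },
    },
)
print(decl)
```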
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a new tutorial notebook on using Gemma 3's native tool-calling with MCP. The notebook is a good starting point, but it has several issues related to style guide adherence and correctness that should be addressed. Key issues include missing standard notebook elements like a license and Colab badge, style violations in text and code, and misleading simulation logic where key variables are created but not used. Additionally, the notebook outputs have not been saved, which is contrary to the repository's style guide. The update to the README is good, but the new entry could be placed alphabetically for better organization.

" \n",
" # 1. Initialize System Instruction\n",
" tool_defs = gateway.get_tool_definitions()\n",
" system_instr = GEMMA_SYSTEM_PROMPT.format(tool_definitions=tool_defs)\n",

high

The system_instr variable is created but never used in the simulation, which can be confusing for readers. To make the tutorial clearer, please print this variable to show what would be sent to the model in a real API call.

    system_instr = GEMMA_SYSTEM_PROMPT.format(tool_definitions=tool_defs)
    print(f"SYSTEM INSTRUCTION:\n{system_instr}\n")
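For context, the `GEMMA_SYSTEM_PROMPT` template the suggestion formats presumably resembles the following sketch; the actual prompt wording lives in the notebook and may differ, and the `tool_defs` value here stands in for `gateway.get_tool_definitions()`:

```python
# Hypothetical stand-in for the notebook's GEMMA_SYSTEM_PROMPT template.
# The real prompt text is defined in the tutorial and may differ.
GEMMA_SYSTEM_PROMPT = (
    "You are a helpful assistant with access to the following tools.\n"
    "Tools:\n{tool_definitions}\n"
    "To call a tool, emit a <start_function_call> block."
)

# Stand-in for gateway.get_tool_definitions()
tool_defs = '{"name": "write_file", "description": "Create a text file."}'

system_instr = GEMMA_SYSTEM_PROMPT.format(tool_definitions=tool_defs)
print(f"SYSTEM INSTRUCTION:\n{system_instr}\n")
```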

Comment on lines 247 to 251
" response_token = format_response_token(result)\n",
" \n",
" # Simulate Final Model Response\n",
" final_output = \"I have successfully created 'hello.txt' with the greeting you requested.\"\n",
" print(f\"MODEL FINAL ANSWER: {final_output}\")\n",

high

The response_token is created but never used, and the simulation jumps to a hardcoded final answer. This is misleading as it skips a crucial step in the agent loop. Please print the response_token and update the comments to clarify that this token would be sent back to the model to get the final response.

        response_token = format_response_token(result)
        print(f"FEEDING BACK TO MODEL:\n{response_token}\n")
        
        # 5. Simulate Final Model Response (In reality, this is another API call with the history including response_token)
        final_output = "I have successfully created 'hello.txt' with the greeting you requested."
        print(f"MODEL FINAL ANSWER: {final_output}")

Comment on lines 56 to 57
"from mcp import server\n",
"from mcp.types import Tool, TextContent, CallToolRequest\n",

medium

These imports (server, Tool, TextContent, CallToolRequest) do not appear to be used in the notebook. Please remove them to improve code clarity.

Additionally, the style guide (line 56) suggests placing imports where they are first used, rather than in a large block at the beginning. Consider moving the remaining imports to the cells where they are needed.

"id": "intro"
},
"source": [
"# Gemma 3: Native Tool-Calling via Model Context Protocol (MCP)\n",

medium

According to the repository style guide, this notebook should include:

  • A collapsed license header at the very top (Style Guide, line 48).
  • An 'Open in Colab' badge immediately after the H1 title (Style Guide, lines 50-53).

Please add these for consistency with other cookbook examples. You can copy them from another notebook like quickstarts/Function_calling.ipynb.

"id": "mcp-logic-markdown"
},
"source": [
"## 1. Defining MCP Tool Logic\n",

medium

The style guide requires using sentence case for headings (Style Guide, line 88). Please update this heading and others in the notebook (e.g., 'Gemma 3 Integration Logic', 'The Execution Loop') to follow this convention.

## 1. Define MCP tool logic

Comment on lines 193 to 197
"def format_response_token(result: str) -> str:\n",
" \"\"\"\n",
" Wraps a tool result in official Gemma 3 response tokens.\n",
" \"\"\"\n",
" return f\"<start_function_response>{result}<end_function_response>\""

medium

This helper function is very simple and only used once. According to the style guide (lines 70-71), it's often better to inline such short, single-use functions to improve readability by avoiding the need to look up its definition. Consider inlining this call in run_gemma_mcp_session.
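Inlined, the call site would read roughly as follows; `result` stands in for the tool's return value inside `run_gemma_mcp_session`:

```python
# Sketch of the inlined version: the f-string replaces the single-use
# format_response_token helper without changing behavior.
result = "File 'hello.txt' created."
response_token = f"<start_function_response>{result}<end_function_response>"
print(response_token)
```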

},
"outputs": [],
"source": [
"%pip install -U mcp httpx"

medium

The style guide recommends using the -q flag for quiet installation (Style Guide, line 25). Also, since the previous cell mentions using google-genai for a real implementation, it would be helpful to include it in the installation command for users who want to extend this tutorial.

%pip install -q -U mcp httpx 'google-genai>=1.0.0'

"source": [
"## Setup\n",
"\n",
"First, we install the necessary libraries. We use the `mcp` SDK for the protocol logic and `google-genai` for model interaction (if using hosted Gemma via Vertex AI) or standard Python libraries for local simulation."

medium

The style guide recommends using the second person ('you') instead of 'we' (Style Guide, line 91). Please update the text to maintain a consistent voice.

First, you install the necessary libraries. You use the `mcp` SDK for the protocol logic and `google-genai` for model interaction (if using hosted Gemma via Vertex AI) or standard Python libraries for local simulation.

" print(\"No tool call detected.\")\n",
"\n",
"# Run the demonstration\n",
"await run_gemma_mcp_session(\"Create a file named 'hello.txt' with a greeting.\")"

medium

While top-level `await` works in Colab, `asyncio.run()` is the standard way to run an async function from a synchronous script. Note, however, that `asyncio.run()` raises a `RuntimeError` when called from an already-running event loop, as it is inside Jupyter/Colab cells, so keep the top-level `await` for the notebook or guard the call if the code should also run unchanged as a script.

asyncio.run(run_gemma_mcp_session("Create a file named 'hello.txt' with a greeting."))
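One way to stay portable across plain scripts and notebook cells is to fall back to the running loop when one exists. A sketch, where the session function below is a trivial stand-in for the notebook's agent loop:

```python
import asyncio

async def run_gemma_mcp_session(prompt: str) -> str:
    # Stand-in for the notebook's agent loop.
    return f"handled: {prompt}"

def run_portably(coro):
    """Run a coroutine from sync code; schedule it if a loop is already running."""
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # Plain script: no event loop yet, asyncio.run is safe.
        return asyncio.run(coro)
    # Notebook/Colab: a loop is already running, so schedule a task instead.
    return loop.create_task(coro)

print(run_portably(run_gemma_mcp_session("Create a file named 'hello.txt'.")))
```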

| [Virtual Try-on](./Virtual_Try_On.ipynb) | A Virtual Try-On application that utilizes Gemini 2.5 to create segmentation masks for identifying outfits in images, and Imagen 3 for generating and inpainting new outfits. | Spatial Understanding | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/Virtual_Try_On.ipynb) |
| [Talk to documents](./Talk_to_documents_with_embeddings.ipynb) | Use embeddings to search through a custom database. | Embeddings | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/Talk_to_documents_with_embeddings.ipynb) |
| [Entity extraction](./Entity_Extraction.ipynb) | Use Gemini API to speed up some of your tasks, such as searching through text to extract needed information. Entity extraction with a Gemini model is a simple query, and you can ask it to retrieve its answer in the form that you prefer. | Embeddings | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/Entity_Extraction.ipynb) |
| [Gemma 3: Native Tool-Calling via MCP](./Gemma_3_Native_Tool_Calling_via_MCP.ipynb) | Use Gemma 3's native function calling tokens to interact with local system environments via the Model Context Protocol (MCP). | Gemma 3, MCP, Tools | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/Gemma_3_Native_Tool_Calling_via_MCP.ipynb) |

medium

For better maintainability and ease of finding examples, please keep the table sorted alphabetically by the cookbook title. This new entry should be placed between 'Entity extraction' and 'Google I/O 2025 Live coding session'.

@junaidaslam2006
Author

To provide more context for the reviewers: this notebook is designed as a developer-focused implementation of the features I am currently contributing to the core google-deepmind/gemma repository.

It demonstrates the end-to-end workflow for:

Native MCP Integration: Gemma PR #482

Model Intelligence Logic: Gemma PR #487

Linking these here to show how this tutorial serves as a practical application of the core engine features I've proposed.
