strands.models.gemini

Google Gemini model provider.

class GeminiModel(Model)

Defined in: src/strands/models/gemini.py:30

Google Gemini model provider implementation.

class GeminiConfig(TypedDict)

Defined in: src/strands/models/gemini.py:36

Configuration options for Gemini models.

Attributes:

def __init__(*,
client: genai.Client | None = None,
client_args: dict[str, Any] | None = None,
**model_config: Unpack[GeminiConfig]) -> None

Defined in: src/strands/models/gemini.py:57

Initialize provider instance.

Arguments:

  • client - Pre-configured Gemini client to reuse across requests. When provided, this client is reused for all requests and is NOT closed by the model; the caller is responsible for managing its lifecycle, and the client should not be shared across different asyncio event loops. This is useful for:
    • Injecting custom client wrappers
    • Reusing connection pools within a single event loop/worker
    • Centralizing observability, retries, and networking policy
  • client_args - Arguments for the underlying Gemini client (e.g., api_key). For a complete list of supported arguments, see https://googleapis.github.io/python-genai/.
  • **model_config - Configuration options for the Gemini model.

Raises:

  • ValueError - If both client and client_args are provided.
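The mutual-exclusion contract above can be sketched with a stand-in class (this is not the real `GeminiModel` from `strands.models.gemini`, and the `model_id` key used below is only an assumed configuration option for illustration):

```python
from __future__ import annotations

from typing import Any


class GeminiModelSketch:
    """Stand-in sketching the documented __init__ contract."""

    def __init__(
        self,
        *,
        client: Any | None = None,
        client_args: dict[str, Any] | None = None,
        **model_config: Any,
    ) -> None:
        # A pre-built client and client construction arguments are
        # mutually exclusive, per the documented ValueError.
        if client is not None and client_args is not None:
            raise ValueError("Provide either 'client' or 'client_args', not both.")
        self.client = client
        self.client_args = client_args or {}
        self.config = dict(model_config)
```

In the real provider, passing only `client_args` (e.g. `{"api_key": ...}`) lets the model construct and own its own client, while passing only `client` hands lifecycle management to the caller.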
@override
def update_config(**model_config: Unpack[GeminiConfig]) -> None

Defined in: src/strands/models/gemini.py:99

Update the Gemini model configuration with the provided arguments.

Arguments:

  • **model_config - Configuration overrides.
@override
def get_config() -> GeminiConfig

Defined in: src/strands/models/gemini.py:112

Get the Gemini model configuration.

Returns:

The Gemini model configuration.
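A minimal sketch of how the update_config/get_config pair typically behaves, assuming the provider stores GeminiConfig as a dict and merges overrides shallowly (the key names below are illustrative, not confirmed attributes of GeminiConfig):

```python
from typing import Any


class ConfigSketch:
    """Stand-in for the GeminiModel configuration accessors."""

    def __init__(self, **model_config: Any) -> None:
        self._config: dict[str, Any] = dict(model_config)

    def update_config(self, **model_config: Any) -> None:
        # Overrides replace matching keys; unspecified keys are preserved.
        self._config.update(model_config)

    def get_config(self) -> dict[str, Any]:
        return self._config
```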

async def stream(messages: Messages,
tool_specs: list[ToolSpec] | None = None,
system_prompt: str | None = None,
tool_choice: ToolChoice | None = None,
**kwargs: Any) -> AsyncGenerator[StreamEvent, None]

Defined in: src/strands/models/gemini.py:437

Stream conversation with the Gemini model.

Arguments:

  • messages - List of message objects to be processed by the model.
  • tool_specs - List of tool specifications to make available to the model.
  • system_prompt - System prompt to provide context to the model.
  • tool_choice - Selection strategy for tool invocation. Note: currently unused.
  • **kwargs - Additional keyword arguments for future extensibility.

Yields:

Formatted message chunks from the model.

Raises:

  • ModelThrottledException - If the request is throttled by Gemini.
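Since stream returns an async generator, callers iterate it with `async for`. A self-contained sketch with a stand-in event source (the exact StreamEvent shapes below, such as `messageStart` and `contentBlockDelta`, are assumptions of this sketch, not confirmed by the source):

```python
import asyncio
from typing import Any, AsyncGenerator


# Stand-in for GeminiModel.stream: yields formatted message chunks.
async def stream(
    messages: list[dict[str, Any]],
) -> AsyncGenerator[dict[str, Any], None]:
    yield {"messageStart": {"role": "assistant"}}
    yield {"contentBlockDelta": {"delta": {"text": "Hello"}}}
    yield {"messageStop": {"stopReason": "end_turn"}}


async def main() -> list[dict[str, Any]]:
    events = []
    async for event in stream([{"role": "user", "content": [{"text": "Hi"}]}]):
        events.append(event)
    return events


events = asyncio.run(main())
```

With the real provider, a ModelThrottledException raised mid-iteration would surface inside the `async for` loop, so retry logic typically wraps the whole iteration.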
@override
async def structured_output(
output_model: type[T],
prompt: Messages,
system_prompt: str | None = None,
**kwargs: Any) -> AsyncGenerator[dict[str, T | Any], None]

Defined in: src/strands/models/gemini.py:535

Get structured output from the model using Gemini’s native structured output.

Arguments:

  • output_model - The output model to use for the agent.
  • prompt - The prompt messages to use for the agent.
  • system_prompt - System prompt to provide context to the model.
  • **kwargs - Additional keyword arguments for future extensibility.

Yields:

Model events with the last being the structured output.
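The "last event carries the structured output" pattern can be sketched as follows; this is a stand-in, and the `"output"` key plus the dataclass output model are assumptions of the sketch (the real API accepts a typed `output_model` and yields `dict[str, T | Any]` events):

```python
import asyncio
from dataclasses import dataclass
from typing import Any, AsyncGenerator


@dataclass
class Person:
    name: str
    age: int


# Stand-in for structured_output: yields model events, with the final
# event carrying the parsed instance under an assumed "output" key.
async def structured_output(
    output_model: type,
    prompt: list[dict[str, Any]],
) -> AsyncGenerator[dict[str, Any], None]:
    yield {"contentBlockDelta": {"delta": {"text": '{"name": "Ada", "age": 36}'}}}
    yield {"output": output_model(name="Ada", age=36)}


async def main() -> Person:
    result = None
    async for event in structured_output(
        Person, [{"role": "user", "content": [{"text": "Who?"}]}]
    ):
        if "output" in event:
            result = event["output"]
    return result


person = asyncio.run(main())
```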