strands.models.model
Abstract base class for Agent model providers.
CacheConfig

@dataclass
class CacheConfig()

Defined in: src/strands/models/model.py:21
Configuration for prompt caching.
Attributes:
strategy - Caching strategy to use.
- "auto": Automatically detect model support and inject cachePoint to maximize cache coverage.
- "anthropic": Inject cachePoint in Anthropic-compatible format without checking model support.
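As a minimal sketch of how such a configuration object is constructed and read, the snippet below mirrors CacheConfig as a plain dataclass without importing the SDK; the `strategy` field name comes from the docs above, while the default value is an assumption:

```python
from dataclasses import dataclass


# Stand-in mirroring CacheConfig from the docs; defaulting to "auto" is an
# assumption for illustration, not necessarily the SDK's actual default.
@dataclass
class CacheConfig:
    strategy: str = "auto"  # "auto" or "anthropic"


# Select the Anthropic-compatible injection strategy explicitly.
cfg = CacheConfig(strategy="anthropic")
print(cfg.strategy)
```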
class Model(abc.ABC)

Defined in: src/strands/models/model.py:33
Abstract base class for Agent model providers.
This class defines the interface for all model implementations in the Strands Agents SDK. It provides a standardized way to configure and process requests for different AI model providers.
update_config

@abc.abstractmethod
def update_config(**model_config: Any) -> None

Defined in: src/strands/models/model.py:42
Update the model configuration with the provided arguments.
Arguments:
**model_config - Configuration overrides.
get_config

@abc.abstractmethod
def get_config() -> Any

Defined in: src/strands/models/model.py:52
Return the model configuration.
Returns:
The model’s configuration.
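To illustrate the update_config/get_config contract, here is a hypothetical concrete provider (EchoModel and its config keys are illustrative, not part of the SDK) that merges keyword overrides into a stored configuration dict:

```python
from typing import Any


# Hypothetical provider sketching the configuration contract above; a real
# subclass would also implement structured_output and stream.
class EchoModel:
    def __init__(self, **config: Any) -> None:
        self._config: dict[str, Any] = dict(config)

    def update_config(self, **model_config: Any) -> None:
        # Merge the provided overrides into the existing configuration.
        self._config.update(model_config)

    def get_config(self) -> dict[str, Any]:
        return self._config


model = EchoModel(model_id="demo", temperature=0.7)
model.update_config(temperature=0.2)  # override a single setting
print(model.get_config())
```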
structured_output

@abc.abstractmethod
def structured_output(
    output_model: type[T],
    prompt: Messages,
    system_prompt: str | None = None,
    **kwargs: Any,
) -> AsyncGenerator[dict[str, T | Any], None]

Defined in: src/strands/models/model.py:62
Get structured output from the model.
Arguments:
output_model - The output model to use for the agent.
prompt - The prompt messages to use for the agent.
system_prompt - System prompt to provide context to the model.
**kwargs - Additional keyword arguments for future extensibility.
Yields:
Model events with the last being the structured output.
Raises:
ValidationException - The response format from the model does not match the output_model.
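The "model events with the last being the structured output" contract can be sketched as a toy async generator without importing the SDK; the "partial" event key and the trivial parsing step are assumptions for illustration, not the SDK's actual event schema:

```python
import asyncio
from typing import Any, AsyncGenerator, TypeVar

T = TypeVar("T")


# Toy generator with the same shape as structured_output: intermediate event
# dicts are yielded first, and the final event carries the parsed output.
async def structured_output(
    output_model: type[T], prompt: str
) -> AsyncGenerator[dict[str, Any], None]:
    yield {"partial": "..."}                # illustrative progress event
    yield {"output": output_model(prompt)}  # last event holds the structured output


async def collect(prompt: str) -> Any:
    events = [event async for event in structured_output(str, prompt)]
    return events[-1]["output"]  # callers read the structured result last


result = asyncio.run(collect("hello"))
print(result)
```

A real caller would iterate with `async for` and keep only the final event's output.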
stream

@abc.abstractmethod
def stream(
    messages: Messages,
    tool_specs: list[ToolSpec] | None = None,
    system_prompt: str | None = None,
    *,
    tool_choice: ToolChoice | None = None,
    system_prompt_content: list[SystemContentBlock] | None = None,
    invocation_state: dict[str, Any] | None = None,
    **kwargs: Any,
) -> AsyncIterable[StreamEvent]

Defined in: src/strands/models/model.py:83
Stream conversation with the model.
This method handles the full lifecycle of conversing with the model:
- Format the messages, tool specs, and configuration into a streaming request
- Send the request to the model
- Yield the formatted message chunks
Arguments:
messages - List of message objects to be processed by the model.
tool_specs - List of tool specifications to make available to the model.
system_prompt - System prompt to provide context to the model.
tool_choice - Selection strategy for tool invocation.
system_prompt_content - System prompt content blocks for advanced features like caching.
invocation_state - Caller-provided state/context that was passed to the agent when it was invoked.
**kwargs - Additional keyword arguments for future extensibility.
Yields:
Formatted message chunks from the model.
Raises:
ModelThrottledException - When the model service is throttling requests from the client.
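The consumption pattern for stream can be sketched as a toy async iterable, again without importing the SDK; the contentBlockDelta chunk shape below mimics Bedrock-style events but is an assumption here, not a guarantee about StreamEvent:

```python
import asyncio
from typing import Any, AsyncIterable


# Toy stream with the same calling pattern as Model.stream: an async iterable
# of formatted chunk dicts that the caller drains with `async for`.
async def stream(messages: list[dict[str, Any]]) -> AsyncIterable[dict[str, Any]]:
    for word in ("hello", "world"):
        yield {"contentBlockDelta": {"delta": {"text": word}}}


async def run() -> str:
    parts: list[str] = []
    async for chunk in stream([{"role": "user", "content": "hi"}]):
        # Accumulate text deltas into the final message.
        parts.append(chunk["contentBlockDelta"]["delta"]["text"])
    return " ".join(parts)


text = asyncio.run(run())
print(text)
```

In a real provider, the loop would also need to handle tool-use chunks and throttling errors (ModelThrottledException) raised mid-iteration.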