strands.models.ollama
Ollama model provider.
- Docs: https://ollama.com/
OllamaModel
class OllamaModel(Model)
Defined in: src/strands/models/ollama.py:26
Ollama model provider implementation.
The implementation handles Ollama-specific features such as:
- Local model invocation
- Streaming responses
- Tool/function calling
OllamaConfig
class OllamaConfig(TypedDict)
Defined in: src/strands/models/ollama.py:36
Configuration parameters for Ollama models.
Attributes:
- additional_args: Any additional arguments to include in the request.
- keep_alive: Controls how long the model will stay loaded in memory following the request (default: “5m”).
- max_tokens: Maximum number of tokens to generate in the response.
- model_id: Ollama model ID (e.g., “llama3”, “mistral”, “phi3”).
- options: Additional model parameters (e.g., top_k).
- stop_sequences: List of sequences that will stop generation when encountered.
- temperature: Controls randomness in generation (higher = more random).
- top_p: Controls diversity via nucleus sampling (alternative to temperature).
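Since OllamaConfig is a TypedDict, a configuration is an ordinary dict checked statically against these keys. A minimal sketch of building one, using a local re-creation of the documented fields (the real class lives in src/strands/models/ollama.py; treating every field as optional via total=False is an assumption here):

```python
from typing import Any, TypedDict


# Local stand-in mirroring the documented attributes; optionality is assumed.
class OllamaConfig(TypedDict, total=False):
    additional_args: dict[str, Any]
    keep_alive: str
    max_tokens: int
    model_id: str
    options: dict[str, Any]
    stop_sequences: list[str]
    temperature: float
    top_p: float


# A plain dict literal is valid as long as its keys match the TypedDict.
config: OllamaConfig = {
    "model_id": "llama3",
    "temperature": 0.7,
    "keep_alive": "5m",
    "options": {"top_k": 40},
}
```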
__init__
def __init__(host: str | None, *, ollama_client_args: dict[str, Any] | None = None, **model_config: Unpack[OllamaConfig]) -> None
Defined in: src/strands/models/ollama.py:59
Initialize provider instance.
Arguments:
- host: The address of the Ollama server hosting the model.
- ollama_client_args: Additional arguments for the Ollama client.
- **model_config: Configuration options for the Ollama model.
update_config
@override
def update_config(**model_config: Unpack[OllamaConfig]) -> None
Defined in: src/strands/models/ollama.py:81
Update the Ollama Model configuration with the provided arguments.
Arguments:
- **model_config: Configuration overrides.
get_config
@override
def get_config() -> OllamaConfig
Defined in: src/strands/models/ollama.py:91
Get the Ollama model configuration.
Returns:
The Ollama model configuration.
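A plain-dict sketch of the presumed interplay between the two methods: overrides passed to update_config replace matching keys and leave the rest of the stored configuration untouched, and get_config returns the current state. The merge semantics are an assumption based on the "Configuration overrides" wording:

```python
# Stored configuration, as it might look after __init__.
config = {"model_id": "llama3", "temperature": 0.7}


def update_config(**overrides):
    # Shallow merge: only the keys present in the overrides change.
    config.update(overrides)


def get_config():
    return config


update_config(temperature=0.2, max_tokens=512)
```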
format_request
def format_request(messages: Messages, tool_specs: list[ToolSpec] | None = None, system_prompt: str | None = None) -> dict[str, Any]
Defined in: src/strands/models/ollama.py:174
Format an Ollama chat streaming request.
Arguments:
- messages: List of message objects to be processed by the model.
- tool_specs: List of tool specifications to make available to the model.
- system_prompt: System prompt to provide context to the model.
Returns:
An Ollama chat streaming request.
Raises:
- TypeError: If a message contains a content block type that cannot be converted to an Ollama-compatible format.
format_chunk
def format_chunk(event: dict[str, Any]) -> StreamEvent
Defined in: src/strands/models/ollama.py:227
Format the Ollama response events into standardized message chunks.
Arguments:
- event: A response event from the Ollama model.
Returns:
The formatted chunk.
Raises:
- RuntimeError: If chunk_type is not recognized. This error should never be encountered, as chunk_type is controlled in the stream method.
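An illustrative dispatch showing the pattern: the event carries a chunk_type tag set by the stream method, recognized types map to standardized chunk dicts, and anything else raises the documented RuntimeError. The chunk-type names and output field names here are assumptions, not the provider's actual schema:

```python
from typing import Any


def format_chunk_sketch(event: dict[str, Any]) -> dict[str, Any]:
    """Map a tagged internal event to a standardized chunk (illustrative only)."""
    chunk_type = event["chunk_type"]
    if chunk_type == "content_delta":
        # Incremental text from the model.
        return {"contentBlockDelta": {"delta": {"text": event["data"]}}}
    if chunk_type == "message_stop":
        # End of the assistant message.
        return {"messageStop": {"stopReason": event.get("data", "end_turn")}}
    # Unreachable in practice if the caller only emits the types above.
    raise RuntimeError(f"chunk_type=<{chunk_type}> | unknown type")
```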
stream
@override
async def stream(messages: Messages, tool_specs: list[ToolSpec] | None = None, system_prompt: str | None = None, *, tool_choice: ToolChoice | None = None, **kwargs: Any) -> AsyncGenerator[StreamEvent, None]
Defined in: src/strands/models/ollama.py:290
Stream conversation with the Ollama model.
Arguments:
- messages: List of message objects to be processed by the model.
- tool_specs: List of tool specifications to make available to the model.
- system_prompt: System prompt to provide context to the model.
- tool_choice: Selection strategy for tool invocation. Note: this parameter is accepted for interface consistency but is currently ignored by this model provider.
- **kwargs: Additional keyword arguments for future extensibility.
Yields:
Formatted message chunks from the model.
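Because stream is an async generator, callers consume it with async for. A self-contained sketch of that consumption pattern, using a fake generator in place of a live OllamaModel (the chunk field names follow the assumed schema above, not a confirmed one):

```python
import asyncio
from typing import Any, AsyncGenerator


async def fake_stream(
    messages: list[dict[str, Any]],
) -> AsyncGenerator[dict[str, Any], None]:
    """Stand-in for OllamaModel.stream; yields pre-canned formatted chunks."""
    for text in ("Hel", "lo"):
        yield {"contentBlockDelta": {"delta": {"text": text}}}
    yield {"messageStop": {"stopReason": "end_turn"}}


async def collect_text() -> str:
    # Accumulate the text deltas into the full response.
    parts: list[str] = []
    async for event in fake_stream([{"role": "user", "content": [{"text": "Hi"}]}]):
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        parts.append(delta.get("text", ""))
    return "".join(parts)


result = asyncio.run(collect_text())
```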
structured_output
@override
async def structured_output(output_model: type[T], prompt: Messages, system_prompt: str | None = None, **kwargs: Any) -> AsyncGenerator[dict[str, T | Any], None]
Defined in: src/strands/models/ollama.py:346
Get structured output from the model.
Arguments:
- output_model: The output model to use for the agent.
- prompt: The prompt messages to use for the agent.
- system_prompt: System prompt to provide context to the model.
- **kwargs: Additional keyword arguments for future extensibility.
Yields:
Model events with the last being the structured output.