# Function calling with Ollama
Explore function calling in Ollama, a powerful feature that allows LLMs to interact with external functions and APIs. This guide includes a step-by-step example using Python and Docker to demonstrate practical implementation and real-world use cases.
## Introduction
Ollama enables function calling by allowing language models to determine when and how to invoke external functions based on user input. The model generates a structured request containing the function name and arguments, which the application then executes. The function's output is fed back to the model to generate a final response.
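As an illustration using the weather lookup built later in this guide, a tool call from the model has roughly this shape (the values here are examples only):

```python
# Roughly the structure the model returns when it decides to call a tool;
# the application is responsible for actually executing the function.
tool_call = {
    "function": {
        "name": "get_weather",
        "arguments": {"city": "Paris"},
    }
}
```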
## Supported models
The following models currently support function calling:
- Command R
- Llama
- Mistral
- Qwen
For the latest information on supported models, please refer to the Ollama documentation: https://ollama.com/search?c=tools&o=newest
## Example
This example demonstrates how to use function calling with Ollama to retrieve weather information from a simple function.
### Start the Ollama container
We'll use Docker Compose to manage the Ollama container and a separate Python container for the function calling script. Here's the `compose.yml` file:
```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama

  python:
    build:
      context: .
    volumes:
      - ./function-calling.py:/function-calling.py

volumes:
  ollama:
```
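The `python` service builds from a local Dockerfile in the same directory, which the compose file references but which isn't shown in this guide. A minimal sketch might look like the following; the base image, installed package, and entrypoint are assumptions, so adjust them to your setup:

```dockerfile
# Dockerfile (sketch) -- assumed setup for the python service
FROM python:3.12-slim

# The script below uses the ollama Python library
RUN pip install ollama

WORKDIR /

# With "python" as the entrypoint, `docker compose run python function-calling.py`
# executes the mounted script
ENTRYPOINT ["python"]
```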
Start the Ollama container using the following command:
```sh
docker compose up -d ollama
```
### Pull models
Next, pull the desired model (in this example, `llama3.2`) in the Ollama container:
```sh
docker compose exec ollama ollama pull llama3.2
```
### Send request
The Python script (`function-calling.py`) defines a simple `get_weather` function that retrieves dummy weather data for a given city. The script then interacts with the Ollama API using the `ollama` library. Here's the code:
```python
# function-calling.py
import json

from ollama import Client


# Define a simple function to retrieve dummy weather data for a given city
def get_weather(city):
    weather_data = {
        "London": "Cloudy, 15°C",
        "New York": "Rainy, 10°C",
        "Paris": "Partly cloudy, 18°C",
        "Sydney": "Sunny, 22°C",
        "Tokyo": "Sunny, 25°C",
    }
    return weather_data.get(city, "Weather information not available for this city.")


# Create a client to interact with the Ollama API
client = Client(host="http://ollama:11434")
model = "llama3.2"
messages = [{"role": "user", "content": "What's the weather like in Paris?"}]

# Send a chat request to the Ollama server
response = client.chat(
    model,
    messages=messages,
    tools=[get_weather],
)

# Update the conversation history with the LLM's response
messages.append(response.message)

# Extract details about the first tool call made by the LLM
tool_call = response.message.tool_calls[0]

# Call the `get_weather` function with arguments from the LLM's request
weather = get_weather(**tool_call.function.arguments)

# Update conversation history with the tool's response (weather information)
messages.append({"role": "tool", "content": weather, "name": tool_call.function.name})

# Send another chat request with the updated conversation history (including tool output)
response = client.chat(
    model,
    messages=messages,
    stream=True,  # Request a streaming response from the LLM
)

# Print the LLM's final response chunk by chunk
for chunk in response:
    print(chunk.message.content, end="", flush=True)
```
Run the script using Docker Compose:
```sh
docker compose run --rm python function-calling.py
```
This script sends a chat request to the LLM asking about the weather in Paris. If the LLM identifies the need for weather information, it responds with a tool call, and the script then runs the `get_weather` function with `"Paris"` as the argument.
Next, the script incorporates the function's output (weather information) into a new message and sends it back to the LLM for final processing.
Finally, the LLM uses this information to generate a response like:
> The current weather in Paris is partly cloudy with a temperature of 18°C (64°F).
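Note that the script assumes the model always decides to request the tool. In practice, `response.message.tool_calls` can be empty when the model answers directly, so a more defensive version of this step (a sketch, not part of the original script) might guard the call:

```python
# Only execute the tool if the model actually requested it
if response.message.tool_calls:
    tool_call = response.message.tool_calls[0]
    weather = get_weather(**tool_call.function.arguments)
    messages.append({"role": "tool", "content": weather, "name": tool_call.function.name})
else:
    # The model answered directly without requesting a tool call
    print(response.message.content)
```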
> **Note**
>
> This is a simplified example using dummy data. In a real-world scenario, the `get_weather` function would likely interact with an actual weather API to retrieve real-time weather information.
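For instance, a real implementation might look something like the sketch below. The endpoint, query parameter, and response fields are hypothetical placeholders, so substitute a real weather provider and its actual schema, and remember that `requests` would need to be installed alongside `ollama` in the Python image:

```python
import requests


def get_weather(city):
    # Hypothetical endpoint and query parameter -- replace with a real weather API
    resp = requests.get(
        "https://api.example.com/current-weather",
        params={"city": city},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # "condition" and "temperature_c" are placeholder field names
    return f"{data['condition']}, {data['temperature_c']}°C"
```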