The Responses API supports comprehensive tool calling capabilities, allowing models to call functions, execute tools in parallel, and handle complex multi-step workflows.
Stateless API: Remember that this API is stateless. When handling multi-turn tool calls, you must include the complete conversation history (including previous tool calls and results) in each request.

Basic Tool Definition

Define tools using the OpenAI function calling format:
import requests
import json

weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather in a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
            }
        },
        "required": ["location"]
    }
}

response = requests.post(
    "https://api.anannas.ai/api/v1/responses",
    headers={
        "Authorization": "Bearer <ANANNAS_API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini",
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "What is the weather in San Francisco?"
                    }
                ]
            }
        ],
        "tools": [weather_tool],
        "tool_choice": "auto",
        "max_output_tokens": 9000,
    },
)

result = response.json()
print(json.dumps(result, indent=2))

Tool Choice Options

Control when and how tools are called using the tool_choice parameter:
Tool Choice                                  Description
"auto"                                       Model decides whether to call tools (default)
"none"                                       Model will not call any tools
{"type": "function", "name": "tool_name"}    Force a specific tool call
Force Specific Tool:
response = requests.post(
    "https://api.anannas.ai/api/v1/responses",
    headers={
        "Authorization": "Bearer <ANANNAS_API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini",
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "Get the weather for New York"
                    }
                ]
            }
        ],
        "tools": [weather_tool],
        "tool_choice": {
            "type": "function",
            "name": "get_weather"
        },
        "max_output_tokens": 9000,
    },
)

Handling Tool Calls

When a model calls a tool, the response contains function call information in the output:
response = requests.post(
    "https://api.anannas.ai/api/v1/responses",
    headers={
        "Authorization": "Bearer <ANANNAS_API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini",
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "What is the weather in Paris?"
                    }
                ]
            }
        ],
        "tools": [weather_tool],
        "tool_choice": "auto",
        "max_output_tokens": 9000,
    },
)

result = response.json()

# Extract function calls from output
for output_item in result["output"]:
    for content_part in output_item.get("content", []):
        if content_part.get("type") == "function_call":
            func_call = content_part["function_call"]
            print(f"Function: {func_call['name']}")
            print(f"Arguments: {func_call['arguments']}")
            
            # Execute the function
            args = json.loads(func_call["arguments"])
            # ... execute your function with args ...
            
            # Then make another request with the tool result
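
The step commented above ("execute your function with args") needs a local implementation to dispatch to. Below is a minimal sketch; the get_weather implementation, its hard-coded return value, and the TOOL_REGISTRY/execute_tool helpers are illustrative assumptions, not part of the API.
import json

def get_weather(location, unit="celsius"):
    """Placeholder weather lookup. Replace with a call to a real weather provider."""
    return {"location": location, "temperature": 18, "unit": unit, "condition": "sunny"}

# Map tool names to local implementations so the handler loop can dispatch calls.
TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool(name, arguments):
    """Run a locally registered tool and return its result as a JSON string."""
    args = json.loads(arguments)
    result = TOOL_REGISTRY[name](**args)
    return json.dumps(result)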

Multi-Turn Tool Calling

Stateless Design: Since the API is stateless, you must include the complete conversation history in each request, including all previous tool calls and their results. The API does not remember previous tool interactions.
For multi-turn conversations with tool calls, include the tool results in subsequent requests:
# First request - model calls tool
first_response = requests.post(
    "https://api.anannas.ai/api/v1/responses",
    headers={
        "Authorization": "Bearer <ANANNAS_API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini",
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "What's the weather in Boston? Then recommend what to wear."
                    }
                ]
            }
        ],
        "tools": [weather_tool],
        "max_output_tokens": 9000,
    },
)

first_result = first_response.json()

# Extract function call
function_call = None
for output_item in first_result["output"]:
    for content_part in output_item.get("content", []):
        if content_part.get("type") == "function_call":
            function_call = content_part["function_call"]
            break
    if function_call:
        break

if function_call:
    # Execute the function
    args = json.loads(function_call["arguments"])
    # Simulate weather API call
    weather_result = {"temperature": 45, "condition": "rainy", "humidity": 85}
    
    # Second request - include tool result
    second_response = requests.post(
        "https://api.anannas.ai/api/v1/responses",
        headers={
            "Authorization": "Bearer <ANANNAS_API_KEY>",
            "Content-Type": "application/json",
        },
        json={
            "model": "openai/gpt-5-mini",
            "input": [
                {
                    "type": "message",
                    "role": "user",
                    "content": [
                        {
                            "type": "input_text",
                            "text": "What's the weather in Boston? Then recommend what to wear."
                        }
                    ]
                },
                {
                    "type": "message",
                    "role": "assistant",
                    "status": "completed",
                    "content": first_result["output"][0]["content"]
                },
                {
                    "type": "message",
                    "role": "tool",
                    "content": [
                        {
                            "type": "tool_result",
                            "tool_call_id": function_call["id"],
                            "content": json.dumps(weather_result)
                        }
                    ]
                }
            ],
            "tools": [weather_tool],
            "max_output_tokens": 9000,
        },
    )
    
    second_result = second_response.json()
    print(second_result["output"][0]["content"][0]["text"])

Parallel Tool Calls

Enable parallel tool execution with parallel_tool_calls:
response = requests.post(
    "https://api.anannas.ai/api/v1/responses",
    headers={
        "Authorization": "Bearer <ANANNAS_API_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini",
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "Get the weather for both New York and London"
                    }
                ]
            }
        ],
        "tools": [weather_tool],
        "parallel_tool_calls": True,
        "max_output_tokens": 9000,
    },
)
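
With parallel tool calls enabled, the model may return several function calls in a single response. Below is a minimal sketch of collecting and answering all of them; it follows the same output and tool-result structures used elsewhere on this page, and the placeholder tool_result value is an assumption for illustration.
import json

result = response.json()

# Collect every function call the model made in this turn.
pending_calls = []
for output_item in result["output"]:
    for content_part in output_item.get("content", []):
        if content_part.get("type") == "function_call":
            pending_calls.append(content_part["function_call"])

# Execute each call and build one tool message per result. Independent calls
# could also run concurrently, e.g. with concurrent.futures.ThreadPoolExecutor.
tool_messages = []
for call in pending_calls:
    args = json.loads(call["arguments"])
    tool_result = {"status": "ok", "echo": args}  # placeholder result for illustration
    tool_messages.append({
        "type": "message",
        "role": "tool",
        "content": [
            {
                "type": "tool_result",
                "tool_call_id": call["id"],
                "content": json.dumps(tool_result),
            }
        ],
    })

# Append tool_messages to the conversation history and send a follow-up request,
# as shown in the Multi-Turn Tool Calling example above.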

Best Practices

  1. Clear tool descriptions: Provide detailed descriptions for better tool selection
  2. Validate tool results: Always validate and sanitize tool execution results
  3. Handle errors gracefully: Implement error handling for tool execution failures (see the sketch after this list)
  4. Use parallel calls: Enable parallel_tool_calls when multiple independent tools can run simultaneously
  5. Tool result format: Return tool results as JSON strings for consistency
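
One way to handle point 3 is to catch tool failures and report them to the model as a structured error payload instead of aborting the request loop, which also keeps every tool result a JSON string (point 5). A minimal sketch; the registry argument and the error field names are illustrative assumptions:
import json

def safe_execute_tool(name, arguments, registry):
    """Run a tool from a local registry and always return a JSON string, even on failure."""
    try:
        args = json.loads(arguments)
        result = registry[name](**args)
        return json.dumps(result)
    except json.JSONDecodeError:
        return json.dumps({"error": "invalid_arguments", "detail": "arguments were not valid JSON"})
    except KeyError:
        return json.dumps({"error": "unknown_tool", "detail": f"no local implementation for {name}"})
    except Exception as exc:
        # Any other failure is reported back to the model so it can recover or retry.
        return json.dumps({"error": "tool_execution_failed", "detail": str(exc)})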