Anannas provides a unified Chat Completions API across OpenAI and Anthropic. The schema is designed to be OpenAI-compatible while supporting Anannas-specific routing and model options.
Completions Request Format
The main endpoint for completions is: POST /chat/completions
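For illustration, here is a minimal sketch of calling the endpoint directly over HTTP. It assumes the https://api.anannas.ai/v1 base URL shown in the SDK example further down; the payload fields follow the request schema in the next section.

```python
import requests

# Minimal sketch: call POST /chat/completions directly.
# Assumes the https://api.anannas.ai/v1 base URL used in the SDK example below.
response = requests.post(
    "https://api.anannas.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_ANANNAS_API_KEY"},
    json={
        "model": "openai/gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```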
Here’s the request schema in TypeScript:
```typescript
// Definitions of subtypes are below
type Request = {
  // Either "messages" or "prompt" is required
  messages?: Message[];
  prompt?: string;

  // Model selection (defaults to user/org default if unspecified)
  model?: string; // See "Supported Models" section

  response_format?: { type: 'json_object' };
  stop?: string | string[];
  stream?: boolean;
  max_tokens?: number;
  temperature?: number;

  // Tool calling (OpenAI-compatible)
  tools?: Tool[];
  tool_choice?: ToolChoice;

  // Advanced parameters
  seed?: number;
  top_p?: number;
  top_k?: number;
  frequency_penalty?: number;
  presence_penalty?: number;
  repetition_penalty?: number;
  logit_bias?: { [key: number]: number };
  min_p?: number;
  top_a?: number;

  // Anannas-only parameters
  models?: string[]; // For model routing
  route?: 'fallback'; // Smart routing fallback
  provider?: ProviderPreferences; // Provider routing
  user?: string; // Stable identifier for your end-users
};
```
Example Request
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ANANNAS_API_KEY",
    base_url="https://api.anannas.ai/v1"
)

completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "user", "content": "What is the meaning of life?"}
    ]
)

print(completion.choices[0].message)
```
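The same call accepts the optional parameters from the request schema above. As a sketch, the snippet below passes sampling parameters and an OpenAI-compatible tool definition; the get_weather tool is a hypothetical example, not something provided by Anannas.

```python
# Sketch: optional sampling parameters plus an OpenAI-compatible tool
# definition. The get_weather tool is a hypothetical example.
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    temperature=0.7,
    max_tokens=256,
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    tool_choice="auto",
)

message = completion.choices[0].message
if message.tool_calls:
    # The model asked to call the tool; arguments arrive as a JSON string.
    print(message.tool_calls[0].function.arguments)
else:
    print(message.content)
```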
Headers
You can set optional headers for discoverability, attached per request.
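The exact header names are not listed in this section. As a sketch only, the example below forwards attribution-style headers through the OpenAI SDK's extra_headers option; HTTP-Referer and X-Title are assumed names borrowed from other OpenAI-compatible gateways, so substitute whatever headers Anannas actually documents.

```python
# Sketch: attaching optional discoverability headers per request.
# HTTP-Referer / X-Title are assumed names; replace them with the
# header names Anannas actually documents.
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={
        "HTTP-Referer": "https://yourapp.example",  # your app's URL
        "X-Title": "Your App Name",                 # your app's display name
    },
)
```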
If the model parameter is omitted, Anannas will select the default for the user/org. If multiple providers/models are available, Anannas’s routing system automatically selects the best option (based on price, availability, and latency) and falls back if a provider fails.
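To make the fallback behavior explicit, you can pass the Anannas-only models and route fields from the schema above. Since the OpenAI SDK does not know these fields, the sketch below forwards them with extra_body; the model IDs are placeholders.

```python
# Sketch: explicit model routing with fallback, using the Anannas-only
# "models" and "route" fields from the request schema. Model IDs are
# placeholders; extra_body forwards fields the OpenAI SDK doesn't know.
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Summarize this in one line."}],
    extra_body={
        "models": ["openai/gpt-4o", "anthropic/claude-3-5-sonnet"],  # ordered fallbacks
        "route": "fallback",
    },
)
print(completion.model)  # the model that actually served the request
```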
Responses
Anannas normalizes responses to comply with the OpenAI Chat API schema.
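Because responses follow the OpenAI Chat Completions schema, the standard fields are available regardless of which provider served the request. A minimal sketch:

```python
# Sketch: reading the normalized, OpenAI-compatible response fields.
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Say hello."}],
)

print(completion.id)                          # request/generation id
print(completion.model)                       # model that produced the reply
print(completion.choices[0].message.content)  # assistant message text
print(completion.choices[0].finish_reason)    # e.g. "stop", "length", "tool_calls"
print(completion.usage.prompt_tokens, completion.usage.completion_tokens)
```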
Anannas ensures your requests remain provider-agnostic, resilient, and cost-optimized, while staying fully OpenAI-compatible.