Documentation

Everything you need to integrate with the ozeye API.

Quick start

1. Get an API key

Sign up and create an API key from the dashboard.

2. Make a request

The ozeye API is compatible with the OpenAI chat completions format.

curl https://api.ozeye.ai/v1/chat/completions \
  -H "Authorization: Bearer $OZEYE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-small-latest",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

API endpoints

POST /v1/chat/completions

Create a chat completion. Supports streaming via SSE. Compatible with OpenAI client libraries.
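With streaming enabled, responses arrive as OpenAI-style SSE chunks: `data: {...}` lines carrying a `choices[0].delta`, terminated by `data: [DONE]`. A minimal sketch of parsing one such line, assuming the standard OpenAI streaming chunk shape (the helper name is ours, not part of the API):

```python
import json

def extract_delta(sse_line: str) -> str:
    """Pull the text delta out of one SSE data line; returns "" for [DONE]."""
    body = sse_line.removeprefix("data:").strip()
    if not body or body == "[DONE]":
        return ""
    chunk = json.loads(body)
    # OpenAI-style streaming chunk: choices[0].delta.content holds the new text
    return chunk["choices"][0]["delta"].get("content", "")

line = 'data: {"choices":[{"delta":{"content":"Hello"}}]}'
print(extract_delta(line))  # Hello
```

In practice an OpenAI-compatible client library handles this parsing for you; the sketch only shows what is on the wire.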

GET /v1/models

List available models. Returns models from all configured providers.

GET /v1/models/catalog

Full model catalog with pricing, benchmarks, and provider availability. Public, no auth required.
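Since /v1/models requires a key while /v1/models/catalog is public, a thin client needs to attach the bearer token selectively. A stdlib-only sketch (the base URL and auth requirements come from the docs above; the helper itself is an illustrative assumption):

```python
import os
import urllib.request

BASE_URL = "https://api.ozeye.ai/v1"

def build_request(path: str, *, auth: bool) -> urllib.request.Request:
    """Build a GET request; attach the bearer token only when auth is needed."""
    headers = {}
    if auth:
        # Falls back to a placeholder so the sketch runs without a real key
        token = os.environ.get("OZEYE_API_KEY", "sk-placeholder")
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(f"{BASE_URL}{path}", headers=headers)

models_req = build_request("/models", auth=True)            # authenticated
catalog_req = build_request("/models/catalog", auth=False)  # public, no auth
```

Pass either request to `urllib.request.urlopen` to execute it.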

GET /v1/account

Account info including credit balance and status.

GET /v1/account/usage

Usage summary for the current billing period. Token counts, cost, and request totals.
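The response schema isn't reproduced here, so the field names below (`prompt_tokens`, `completion_tokens`, `cost_usd`, `requests`) are illustrative assumptions rather than the documented payload. The sketch just shows turning a usage response of that hypothetical shape into a one-line summary:

```python
def summarize_usage(usage: dict) -> str:
    """Format a hypothetical /v1/account/usage payload into one line.

    Field names are assumptions, not the documented schema.
    """
    tokens = usage["prompt_tokens"] + usage["completion_tokens"]
    return (f"{usage['requests']} requests, {tokens} tokens, "
            f"${usage['cost_usd']:.2f} this period")

example = {"prompt_tokens": 1200, "completion_tokens": 300,
           "cost_usd": 0.42, "requests": 18}
print(summarize_usage(example))  # 18 requests, 1500 tokens, $0.42 this period
```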

Features

Presets

Configure reusable presets that bundle a model, a system prompt, and a routing strategy. Apply a preset via the X-Preset header or the preset field in the request body.
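Both ways of applying a preset, sketched as request shapes. The X-Preset header and the preset body field come from the docs; the preset name "prod-chat" is made up, and we assume the preset supplies the model and system prompt so the request omits them:

```python
preset = "prod-chat"  # hypothetical preset name

# Option 1: apply via the X-Preset header
headers = {
    "Authorization": "Bearer $OZEYE_API_KEY",  # substitute your real key
    "Content-Type": "application/json",
    "X-Preset": preset,
}
body = {"messages": [{"role": "user", "content": "Hello!"}]}

# Option 2: apply via the preset field in the request body
body_with_preset = {
    "preset": preset,
    "messages": [{"role": "user", "content": "Hello!"}],
}
```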

Routing strategies

Route requests to optimize for cost, latency, or availability, or define an explicit fallback order. Models available on multiple providers fail over automatically.

Zero data retention

No prompts or completions are stored. Audit logs record only metadata (token counts, model, latency, cost). Enabled by default.

Client libraries

Any OpenAI-compatible client library works with ozeye. Just change the base URL.

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ozeye.ai/v1",
    api_key=os.environ["OZEYE_API_KEY"],  # same key as in the curl example
)

response = client.chat.completions.create(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)