📖 Documentation

oxyy API Documentation

oxyy API provides unified access to 200+ AI models through a single, OpenAI-compatible API. Use Chat, Image, Video, Audio, and Embedding models — all with one API key, at up to 60% lower cost.

OpenAI Compatible: Just change your base_url to https://api.oxyy.ai/v1 — no other code changes needed.
🔒 Authentication

API Key Setup

All API requests require authentication using a Bearer token in the Authorization header.

Get your API key from oxyy Dashboard after registering.
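Concretely, every call carries the same two headers. A minimal sketch (the `auth_headers` helper is our illustration, not part of any SDK):

```python
# Every oxyy request authenticates with a Bearer token in the
# Authorization header; JSON endpoints also need a Content-Type.

def auth_headers(api_key: str) -> dict:
    """Build the headers used by every JSON request in this doc."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers("YOUR_API_KEY")
print(headers["Authorization"])  # → Bearer YOUR_API_KEY

# Usage with the requests library (sketch):
# requests.post("https://api.oxyy.ai/v1/chat/completions",
#               headers=headers, json=payload)
```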

Quick Start

If you already use the OpenAI SDK, migration is one line of code. Just change your base URL:

Python:

# pip install openai
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.oxyy.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
JavaScript:

// npm install openai
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.oxyy.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
cURL:

curl https://api.oxyy.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "gpt-5", "messages": [{"role": "user", "content": "Hello!"}]}'
💬 Chat API

Chat Completions

Generate text using 145+ language models — GPT-5, Claude Opus 4.6, Gemini 3 Pro, DeepSeek, Grok 4, and more.

POST https://api.oxyy.ai/v1/chat/completions

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., gpt-5, claude-opus-4.6) |
| messages | array | Required | Array of message objects with role and content |
| temperature | float | Optional | Sampling temperature 0–2. Default: 0.7 |
| max_tokens | integer | Optional | Max tokens to generate. Default: 4096 |

Code Examples

Python:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.oxyy.ai/v1"
)

response = client.chat.completions.create(
    model="chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=4096
)

print(response.choices[0].message.content)
JavaScript:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.oxyy.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'chatgpt-4o-latest',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7, max_tokens: 4096
});

console.log(response.choices[0].message.content);
cURL:

curl https://api.oxyy.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "chatgpt-4o-latest", "messages": [{"role": "user", "content": "Hello!"}], "temperature": 0.7}'
PHP:

$client = OpenAI::factory()
    ->withApiKey('YOUR_API_KEY')
    ->withBaseUri('https://api.oxyy.ai/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'chatgpt-4o-latest',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'temperature' => 0.7,
]);

echo $response->choices[0]->message->content;

Example Response

{
  "id": "chatcmpl-abc123def456",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "chatgpt-4o-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}

Available Models

See the Models page for the current list of available chat models.
🎨 Image API

Image Generation

Create images using DALL·E 3, Flux Pro, Imagen 4, Sora 2, and 30+ image models.

POST https://api.oxyy.ai/v1/images/generations

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., dall-e-3, flux.1-pro) |
| prompt | string | Required | Text description of the image to generate |
| n | integer | Optional | Number of images. Default: 1 |
| size | string | Optional | Image size (e.g., 1024x1024) |

Code Examples

Python:

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.images.generate(
    model="dall-e-3",
    prompt="A beautiful sunset over the ocean",
    n=1, size="1024x1024"
)
print(response.data[0].url)
JavaScript:

import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.images.generate({
  model: 'dall-e-3', prompt: 'A beautiful sunset over the ocean', n: 1, size: '1024x1024'
});
console.log(response.data[0].url);
cURL:

curl https://api.oxyy.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "dall-e-3", "prompt": "A beautiful sunset over the ocean", "n": 1, "size": "1024x1024"}'

Example Response

{
  "created": 1700000000,
  "data": [
    {
      "url": "https://cdn.oxyy.ai/images/2025/01/abc123.png",
      "revised_prompt": "A beautiful sunset over a calm ocean..."
    }
  ]
}

Available Models

See the Models page for the current list of available image models.
🎬 Video API 🔥 HOT

Video Generation

Generate AI videos using Google Veo 2, Veo 3, and Veo 3.1. Video generation is asynchronous.

POST https://api.oxyy.ai/v1/videos/generations

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., veo-3, veo-3.1-fast) |
| prompt | string | Required | Text description for the video |
| duration | integer | Optional | Duration in seconds. Default: 4 |
| resolution | string | Optional | Video resolution (e.g., 1080p) |

Code Examples

Python:

import requests

headers = { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" }

response = requests.post("https://api.oxyy.ai/v1/videos/generations", headers=headers,
    json={'model': 'veo-3', 'prompt': 'A serene lake with mountains', 'duration': 4})
job = response.json()
print("Job ID:", job["id"])
cURL:

curl https://api.oxyy.ai/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "veo-3", "prompt": "A serene lake with mountains", "duration": 4, "resolution": "1080p"}'

Submit Response

{
  "id": "job_vid_abc123def456",
  "status": "queued",
  "model": "veo-3",
  "created_at": "2025-01-15T12:00:00Z",
  "estimated_wait": "30-60s"
}

Polling for Results

Video generation is asynchronous. After submitting a job, poll the status endpoint until the job completes.

GET https://api.oxyy.ai/v1/videos/generations/:job_id
Python:

import requests, time

headers = { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" }

# Step 1: Submit video generation job
response = requests.post("https://api.oxyy.ai/v1/videos/generations",
    headers=headers,
    json={'model': 'veo-3', 'prompt': 'A serene lake', 'duration': 4})
job = response.json()
job_id = job["id"]
print(f"Job submitted: {job_id}")

# Step 2: Poll for completion
while True:
    status = requests.get(
        f"https://api.oxyy.ai/v1/videos/generations/{job_id}",
        headers=headers
    ).json()

    if status["status"] == "completed":
        print("Video ready:", status["video_url"])
        break
    elif status["status"] == "failed":
        print("Generation failed:", status.get("error"))
        break
    else:
        print(f"Status: {status['status']}... waiting")
        time.sleep(5)
JavaScript:

const headers = {
  'Authorization': 'Bearer YOUR_API_KEY',
  'Content-Type': 'application/json'
};

// Step 1: Submit video generation job
const job = await fetch('https://api.oxyy.ai/v1/videos/generations', {
  method: 'POST', headers,
  body: JSON.stringify({ model: 'veo-3', prompt: 'A serene lake', duration: 4 })
}).then(r => r.json());

console.log('Job submitted:', job.id);

// Step 2: Poll for completion
while (true) {
  const status = await fetch(
    `https://api.oxyy.ai/v1/videos/generations/${job.id}`,
    { headers }
  ).then(r => r.json());

  if (status.status === 'completed') {
    console.log('Video ready:', status.video_url);
    break;
  } else if (status.status === 'failed') {
    console.error('Failed:', status.error);
    break;
  }
  console.log(`Status: ${status.status}... waiting`);
  await new Promise(r => setTimeout(r, 5000));
}
cURL:

# Step 1: Submit job
curl -X POST https://api.oxyy.ai/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "veo-3", "prompt": "A serene lake", "duration": 4}'

# Step 2: Poll using the returned job ID
curl https://api.oxyy.ai/v1/videos/generations/job_vid_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

Completed Response

{
  "id": "job_vid_abc123def456",
  "status": "completed",
  "model": "veo-3",
  "video_url": "https://cdn.oxyy.ai/videos/2025/01/abc123.mp4",
  "duration": 4,
  "resolution": "1080p",
  "created_at": "2025-01-15T12:00:00Z",
  "completed_at": "2025-01-15T12:00:45Z"
}
Polling Tip: Use a 5-second interval between polls. Most videos complete within 30–90 seconds depending on model and duration.
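The polling loop can also be bounded so a stuck job cannot spin forever. A sketch using the 5-second interval from the tip above (the helper name and the 300 s timeout are our choices, not API values):

```python
import time

POLL_INTERVAL = 5    # seconds between status checks, per the tip above
POLL_TIMEOUT = 300   # give up after 5 minutes (our choice, not an API limit)

def wait_for_video(job_id: str, fetch_status) -> dict:
    """Poll fetch_status(job_id) until the job completes, fails, or times out.

    fetch_status is any callable returning the status JSON, e.g. a wrapper
    around requests.get(f".../videos/generations/{job_id}").json().
    """
    deadline = time.monotonic() + POLL_TIMEOUT
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(POLL_INTERVAL)
    raise TimeoutError(f"Job {job_id} still pending after {POLL_TIMEOUT}s")
```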

Available Models

See the Models page for the current list of available video models.
📁 File Uploads

File Uploads & Input Methods

Several oxyy API endpoints accept file inputs for image editing, image-to-video generation, and speech-to-text transcription. You can provide files using three different methods:

| Method | Content-Type | Description |
|---|---|---|
| File Upload | multipart/form-data | Upload a file directly from disk using multipart/form-data |
| URL | application/json | Pass a URL to a hosted file via the image or file_url field |
| Base64 | application/json | Send base64-encoded data via the image_base64 or file_base64 field |
Content-Type: Use multipart/form-data for file uploads (the OpenAI SDK handles this automatically). For URL and Base64 methods, use application/json.
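The two JSON methods can be sketched as plain request bodies, here for the /v1/images/edits endpoint documented below (field names come from the tables in this section; the `to_base64` helper is our illustration):

```python
import base64

def to_base64(data: bytes) -> str:
    """Encode raw file bytes as the base64 string the JSON methods expect."""
    return base64.b64encode(data).decode("ascii")

# URL method: point at a hosted file via the image field.
url_payload = {
    "model": "flux.1-kontext-max",
    "prompt": "Change the background to a beach sunset",
    "image": "https://example.com/source.png",  # hypothetical hosted file
}

# Base64 method: embed the file bytes via the image_base64 field.
image_bytes = b"..."  # normally: open("source.png", "rb").read()
b64_payload = {
    "model": "flux.1-kontext-max",
    "prompt": "Change the background to a beach sunset",
    "image_base64": to_base64(image_bytes),
}

# Both are sent with Content-Type: application/json, e.g.:
# requests.post("https://api.oxyy.ai/v1/images/edits",
#               headers=headers, json=b64_payload)
```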

Image Editing (Image-to-Image)

Upload a source image and provide a prompt to edit or transform it.

POST https://api.oxyy.ai/v1/images/edits
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., flux.1-kontext-max) |
| image | file/string | Required* | Source image — file upload or URL string |
| image_base64 | string | Required* | Base64-encoded image (alternative to image) |
| prompt | string | Required | Description of the desired edit |
| n | integer | Optional | Number of images. Default: 1 |
| size | string | Optional | Output size (e.g., 1024x1024) |
*One of image (file or URL) or image_base64 is required.
Python:

# Image-to-Image / Image Edit — upload a source image
import requests

headers = { "Authorization": "Bearer YOUR_API_KEY" }

# Use multipart/form-data for file uploads
files = { "image": open("source.png", "rb") }
data = {
    "model": "flux.1-kontext-max",
    "prompt": "Change the background to a beach sunset",
    "n": "1",
    "size": "1024x1024"
}

response = requests.post(
    "https://api.oxyy.ai/v1/images/edits",
    headers=headers, files=files, data=data
)
print(response.json())
JavaScript:

import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.images.edit({
  model: 'flux.1-kontext-max',
  image: fs.createReadStream('source.png'),
  prompt: 'Change the background to a beach sunset',
  n: 1, size: '1024x1024'
});

console.log(response.data[0].url);
cURL:

curl https://api.oxyy.ai/v1/images/edits \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=flux.1-kontext-max" \
  -F "image=@source.png" \
  -F "prompt=Change the background to a beach sunset" \
  -F "n=1" \
  -F "size=1024x1024"

Example Response

{
  "created": 1700000000,
  "data": [
    {
      "url": "https://cdn.oxyy.ai/images/2025/01/edited_abc123.png"
    }
  ]
}

Image-to-Video

Upload an image and animate it into a video clip.

POST https://api.oxyy.ai/v1/videos/generations
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Video model (e.g., veo-3) |
| image | file/string | Required* | Source image — file upload or URL string |
| image_base64 | string | Required* | Base64-encoded image (alternative to image) |
| prompt | string | Required | Motion / animation description |
| duration | integer | Optional | Duration in seconds. Default: 4 |
| resolution | string | Optional | Output resolution (e.g., 1080p) |
*One of image (file or URL) or image_base64 is required.
Python:

# Image-to-Video — upload an image to animate
import requests

headers = { "Authorization": "Bearer YOUR_API_KEY" }

files = { "image": open("photo.jpg", "rb") }
data = {
    "model": "veo-3",
    "prompt": "Slowly pan across the scene with gentle motion",
    "duration": "4",
    "resolution": "1080p"
}

response = requests.post(
    "https://api.oxyy.ai/v1/videos/generations",
    headers=headers, files=files, data=data
)
job = response.json()
print("Job ID:", job["id"])
cURL:

curl https://api.oxyy.ai/v1/videos/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=veo-3" \
  -F "image=@photo.jpg" \
  -F "prompt=Slowly pan across the scene with gentle motion" \
  -F "duration=4" \
  -F "resolution=1080p"

Example Response

{
  "id": "job_vid_img2v_abc123",
  "status": "queued",
  "model": "veo-3",
  "created_at": "2025-01-15T12:00:00Z",
  "estimated_wait": "30-60s"
}
Image-to-video is also asynchronous — poll the status endpoint the same way as text-to-video above.

Audio Transcription (STT)

Upload an audio file for speech-to-text transcription with optional timestamps.

POST https://api.oxyy.ai/v1/audio/transcriptions
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | STT model (e.g., gpt-4o-mini-transcribe) |
| file | file | Required* | Audio file — mp3, mp4, wav, m4a, webm, mpeg, mpga |
| file_url | string | Required* | URL of a hosted audio file (alternative to file) |
| file_base64 | string | Required* | Base64-encoded audio data (alternative to file) |
| response_format | string | Optional | json, text, verbose_json, srt, vtt |
| language | string | Optional | ISO-639-1 code (e.g., en, ja, bn) |
*One of file, file_url, or file_base64 is required.
Python:

# Speech-to-Text — upload audio for transcription
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

# Supported: mp3, mp4, mpeg, mpga, m4a, wav, webm
with open("interview.mp3", "rb") as audio_file:
    response = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe",
        file=audio_file,
        response_format="verbose_json",
        language="en"
    )

print("Text:", response.text)
for seg in response.segments:
    print(f"{seg.start:.1f}s - {seg.end:.1f}s: {seg.text}")
JavaScript:

import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.audio.transcriptions.create({
  model: 'gpt-4o-mini-transcribe',
  file: fs.createReadStream('interview.mp3'),
  response_format: 'verbose_json',
  language: 'en'
});

console.log(response.text);
cURL:

curl https://api.oxyy.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=gpt-4o-mini-transcribe" \
  -F "file=@interview.mp3" \
  -F "response_format=verbose_json" \
  -F "language=en"

Example Response (verbose_json)

{
  "text": "Hello, this is a transcription test.",
  "language": "en",
  "duration": 5.42,
  "segments": [
    {
      "start": 0.0,
      "end": 2.1,
      "text": "Hello, this is"
    },
    {
      "start": 2.1,
      "end": 5.42,
      "text": "a transcription test."
    }
  ]
}
File Size Limits: Audio files up to 25 MB, images up to 20 MB. For larger files, consider compressing or splitting.
🔊 Audio API

Text-to-Speech (TTS)

Convert text into natural-sounding speech using ElevenLabs, OpenAI TTS, and Gemini TTS models.

POST https://api.oxyy.ai/v1/audio/speech

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | TTS model (e.g., eleven_v3, tts-1-hd) |
| voice | string | Required | Voice ID (e.g., alloy, echo) |
| input | string | Required | The text to convert to speech |

Code Examples

Python:

from openai import OpenAI
from pathlib import Path

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.audio.speech.create(
    model="eleven_v3", voice="alloy",
    input="Hello! This is a text-to-speech test."
)
response.stream_to_file(Path("output.mp3"))

Response

// Response: Binary audio data (MP3/WAV/OGG)
//
// Content-Type: audio/mpeg
// Content-Length: 45321
//
// The response body is raw audio bytes.
// Save directly to a file or stream to an audio player.
// Supported formats: mp3, opus, aac, flac, wav, pcm

Available TTS Models

See the Models page for the current list of available TTS models.

Speech-to-Text (STT)

Transcribe audio files into text with high accuracy. Supports timestamps, language detection, and multiple output formats.

POST https://api.oxyy.ai/v1/audio/transcriptions

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | STT model (e.g., whisper-1, scribe_v1) |
| file | file | Required | Audio file (mp3, mp4, wav, etc.) |
| response_format | string | Optional | json, text, verbose_json, srt, vtt |

Code Examples

Python:

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

with open("audio.mp3", "rb") as f:
    response = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe", file=f, response_format="verbose_json"
    )
print(response.text)
cURL:

curl https://api.oxyy.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=gpt-4o-mini-transcribe" \
  -F "file=@audio.mp3"

Example Response

{
  "text": "Hello, this is a transcription test.",
  "language": "en",
  "duration": 5.42,
  "segments": [
    {
      "start": 0.0,
      "end": 2.1,
      "text": "Hello, this is"
    },
    {
      "start": 2.1,
      "end": 5.42,
      "text": "a transcription test."
    }
  ]
}

Available STT Models

See the Models page for the current list of available STT models.
🌐 Embeddings API

Embeddings

Generate vector embeddings for semantic search, clustering, recommendations, and RAG applications.

POST https://api.oxyy.ai/v1/embeddings

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Embedding model (e.g., text-embedding-3-large) |
| input | string\|array | Required | Text to embed (string or array of strings) |

Code Examples

Python:

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.embeddings.create(
    model="gemini-embedding-001",
    input="The quick brown fox jumps over the lazy dog"
)
embedding = response.data[0].embedding
print(len(embedding), "dimensions")

Example Response

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.0091, 0.0156, -0.0042, ...]
    }
  ],
  "model": "gemini-embedding-001",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
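For the semantic-search use case, returned vectors can be ranked with cosine similarity. A sketch that needs only the plain float lists the API returns (the sample vectors are made up for illustration):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """1.0 means identical direction; values near 0.0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In practice the vectors come from the API, e.g.:
# resp = client.embeddings.create(model="gemini-embedding-001",
#                                 input=["query", "doc one", "doc two"])
# vectors = [d.embedding for d in resp.data]
query = [0.1, 0.9, 0.0]   # made-up example vectors
doc = [0.2, 0.8, 0.1]
print(round(cosine_similarity(query, doc), 3))
```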

Available Models

See the Models page for the current list of available embedding models.

SDKs & Libraries

oxyy API is OpenAI-compatible — use the official OpenAI SDK in any language. No custom SDK needed.

| Language | Package | Install |
|---|---|---|
| Python | openai | pip install openai |
| JavaScript | openai | npm install openai |
| PHP | openai-php/client | composer require openai-php/client |
| Go | sashabaranov/go-openai | go get github.com/sashabaranov/go-openai |
| Ruby | ruby-openai | gem install ruby-openai |

Error Codes

Standard HTTP error codes with descriptive error messages.

Error Response Format

{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
| Code | Meaning | Resolution |
|---|---|---|
| 401 | Invalid or missing API key | Check your Authorization header |
| 403 | Insufficient permissions | Upgrade your plan or check key scope |
| 404 | Model not found | Verify the model ID exists |
| 429 | Rate limit exceeded | Slow down requests or upgrade plan |
| 500 | Internal server error | Retry after a short delay |
| 503 | Model temporarily unavailable | Try again or use a different model |
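Transient codes (429, 500, 503) are worth retrying with backoff. A hedged sketch using the OpenAI SDK's APIStatusError; the helper names and backoff schedule are our choices, not part of the API:

```python
import time

RETRYABLE = {429, 500, 503}  # transient codes from the table above

def backoff_delays(attempts: int = 3, base: float = 1.0) -> list[float]:
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(attempts)]

def chat_with_retry(client, **kwargs):
    """Call chat.completions.create, retrying retryable HTTP statuses."""
    from openai import APIStatusError  # openai-python >= 1.x
    last_err = None
    for delay in backoff_delays():
        try:
            return client.chat.completions.create(**kwargs)
        except APIStatusError as err:
            if err.status_code not in RETRYABLE:
                raise  # 401/403/404: retrying will not help
            last_err = err
            time.sleep(delay)
    raise last_err
```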