
oxyy API Documentation

oxyy API provides unified access to 200+ AI models through a single, OpenAI-compatible API. Use Chat, Image, Video, Audio, and Embedding models — all with one API key, at up to 60% lower cost.

OpenAI Compatible: Just change your base_url to https://api.oxyy.ai/v1 — no other code changes needed.
🔒 Authentication

API Key Setup

All API requests require authentication using a Bearer token in the Authorization header.

Get your API key from oxyy Dashboard after registering.
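Every request carries the key as a Bearer token. A minimal sketch of the required headers (the header names are exactly what the API expects; the key value is a placeholder):

```python
API_KEY = "YOUR_API_KEY"  # from the oxyy Dashboard

# Every request must send the key as a Bearer token; a missing or
# malformed value returns 401 (see Error Codes below).
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
print(headers["Authorization"])
```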

Quick Start

If you already use the OpenAI SDK, migrating is a one-line change. Just update your base URL:

# pip install openai
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.oxyy.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
// npm install openai
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.oxyy.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
curl https://api.oxyy.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "gpt-5", "messages": [{"role": "user", "content": "Hello!"}]}'
💬 Chat API

Chat Completions

Generate text using 145+ language models — GPT-5, Claude Opus 4.6, Gemini 3 Pro, DeepSeek, Grok 4, and more.

POST https://api.oxyy.ai/v1/chat/completions

Parameters

Parameter | Type | Required | Description
model | string | Required | Model ID (e.g., gpt-5, claude-opus-4.6)
messages | array | Required | Array of message objects with role and content
temperature | float | Optional | Sampling temperature 0–2. Default: 0.7
max_tokens | integer | Optional | Max tokens to generate. Default: 4096

Code Examples

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.oxyy.ai/v1"
)

response = client.chat.completions.create(
    model="chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=4096
)

print(response.choices[0].message.content)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.oxyy.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'chatgpt-4o-latest',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7, max_tokens: 4096
});

console.log(response.choices[0].message.content);
curl https://api.oxyy.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "chatgpt-4o-latest", "messages": [{"role": "user", "content": "Hello!"}], "temperature": 0.7}'
// composer require openai-php/client
$client = OpenAI::factory()
    ->withApiKey('YOUR_API_KEY')
    ->withBaseUri('https://api.oxyy.ai/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'chatgpt-4o-latest',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'temperature' => 0.7,
]);

echo $response->choices[0]->message->content;

Example Response

{
  "id": "chatcmpl-abc123def456",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "chatgpt-4o-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}

Available Models

145 models
Model
Model ID
chatgpt-4o-latest
chatgpt-4o-latest
claude-haiku-3
claude-haiku-3
claude-haiku-3.5
claude-haiku-3.5
claude-haiku-4.5
claude-haiku-4.5
claude-opus-4
claude-opus-4
claude-opus-4.1
claude-opus-4.1
claude-opus-4.5
claude-opus-4.5
claude-opus-4.6
claude-opus-4.6
claude-sonnet-3.5
claude-sonnet-3.5
claude-sonnet-3.7
claude-sonnet-3.7
claude-sonnet-4
claude-sonnet-4
claude-sonnet-4.5
claude-sonnet-4.5
codestral-2508
codestral-2508
codestral-latest
codestral-latest
command-a-03-2025
command-a-03-2025
command-a-reasoning-08-2025
command-a-reasoning-08-2025
command-a-vision-07-2025
command-a-vision-07-2025
command-r-08-2024
command-r-08-2024
command-r-plus-08-2024
command-r-plus-08-2024
command-r7b-12-2024
command-r7b-12-2024
compound
compound
compound-mini
compound-mini
deepseek-chat
deepseek-chat
deepseek-r1-0528
deepseek-r1-0528
deepseek-reasoner
deepseek-reasoner
deepseek-v3
deepseek-v3
deepseek-v3-0324
deepseek-v3-0324
deepseek-v3.1
deepseek-v3.1
deepseek-v3.1-terminus
deepseek-v3.1-terminus
deepseek-v3.2
deepseek-v3.2
devious-uncensored
devious-uncensored
devious-uncensored2
devious-uncensored2
devstral-medium-2507
devstral-medium-2507
devstral-medium-latest
devstral-medium-latest
devstral-small-2512
devstral-small-2512
devstral-small-latest
devstral-small-latest
emotional-36b
emotional-36b
emotional-8b
emotional-8b
gemini-2.0-flash
gemini-2.0-flash
gemini-2.0-flash-lite
gemini-2.0-flash-lite
Gemini 2.5 Flash
gemini-2.5-flash
gemini-2.5-flash-image
gemini-2.5-flash-image
gemini-2.5-flash-lite
gemini-2.5-flash-lite
gemini-2.5-flash-lite-preview-09-2025
gemini-2.5-flash-lite-preview-09-2025
gemini-2.5-flash-preview-09-2025
gemini-2.5-flash-preview-09-2025
gemini-2.5-flash-preview-09-2025-thinking
gemini-2.5-flash-preview-09-2025-thinking
gemini-2.5-flash-thinking
gemini-2.5-flash-thinking
Gemini 2.5 Pro
gemini-2.5-pro
Gemini 3 Flash
gemini-3-flash
gemini-3-flash-preview
gemini-3-flash-preview
gemini-3-flash-preview-thinking
gemini-3-flash-preview-thinking
Gemini 3 Pro
gemini-3-pro
gemini-3-pro-image-preview
gemini-3-pro-image-preview
gemini-3-pro-preview
gemini-3-pro-preview
gemma-3-27b-it
gemma-3-27b-it
gemma-3n-e4b-it
gemma-3n-e4b-it
glm-4.5
glm-4.5
glm-4.5-air
glm-4.5-air
glm-4.6
glm-4.6
glm-4.6v
glm-4.6v
glm-4.7
glm-4.7
glm-4.7-fast
glm-4.7-fast
glm-5
glm-5
gpt-3.5-turbo
gpt-3.5-turbo
gpt-4.1
gpt-4.1
gpt-4.1-mini
gpt-4.1-mini
gpt-4.1-nano
gpt-4.1-nano
gpt-4o
gpt-4o
gpt-4o-mini
gpt-4o-mini
gpt-4o-mini-search-preview
gpt-4o-mini-search-preview
gpt-4o-search-preview
gpt-4o-search-preview
gpt-5
gpt-5
gpt-5-chat-latest
gpt-5-chat-latest
gpt-5-mini
gpt-5-mini
gpt-5-nano
gpt-5-nano
gpt-5-search-api
gpt-5-search-api
gpt-5.1
gpt-5.1
gpt-5.2
gpt-5.2
gpt-laborratse
gpt-laborratse
gpt-laborratse-de
gpt-laborratse-de
gpt-oss-120b
gpt-oss-120b
gpt-oss-20b
gpt-oss-20b
gpt5-laborratse
gpt5-laborratse
gpt5-laborratse-de
gpt5-laborratse-de
grok-2
grok-2
grok-2-vision
grok-2-vision
grok-3
grok-3
grok-3-fast
grok-3-fast
grok-3-mini
grok-3-mini
grok-3-mini-fast
grok-3-mini-fast
grok-4
grok-4
grok-4-fast-non-reasoning
grok-4-fast-non-reasoning
grok-4-fast-reasoning
grok-4-fast-reasoning
grok-4.1-fast-non-reasoning
grok-4.1-fast-non-reasoning
grok-4.1-fast-reasoning
grok-4.1-fast-reasoning
grok-code-fast-1
grok-code-fast-1
hermes-4-405b
hermes-4-405b
hermes-4-70b
hermes-4-70b
kimi-k2
kimi-k2
kimi-k2-0905
kimi-k2-0905
kimi-k2-0905-fast
kimi-k2-0905-fast
kimi-k2-fast
kimi-k2-fast
kimi-k2-thinking
kimi-k2-thinking
kimi-k2.5
kimi-k2.5
laborratse-de-uncensored
laborratse-de-uncensored
laborratse-uncensored
laborratse-uncensored
labs-mistral-small-creative
labs-mistral-small-creative
llama-3.1-8b-instruct
llama-3.1-8b-instruct
llama-3.3-70b-instruct
llama-3.3-70b-instruct
llama-4-maverick
llama-4-maverick
llama-4-scout
llama-4-scout
magistral-medium-2509
magistral-medium-2509
magistral-medium-latest
magistral-medium-latest
magistral-small-2509
magistral-small-2509
magistral-small-latest
magistral-small-latest
minimax-m2
minimax-m2
minimax-m2.1
minimax-m2.1
mistral-large-2512
mistral-large-2512
mistral-large-latest
mistral-large-latest
mistral-medium-2508
mistral-medium-2508
mistral-medium-latest
mistral-medium-latest
mistral-small-2506
mistral-small-2506
mistral-small-latest
mistral-small-latest
moirai-agent
moirai-agent
o3
o3
o3-mini
o3-mini
o4-mini
o4-mini
omni-moderation-latest
omni-moderation-latest
phi-4
phi-4
pixtral-large-2411
pixtral-large-2411
pixtral-large-latest
pixtral-large-latest
qwen2.5-coder-32b
qwen2.5-coder-32b
qwen3-235b-a22b
qwen3-235b-a22b
qwen3-235b-a22b-2507
qwen3-235b-a22b-2507
qwen3-235b-a22b-thinking-2507
qwen3-235b-a22b-thinking-2507
qwen3-32b
qwen3-32b
qwen3-coder
qwen3-coder
revenant-uncensored
revenant-uncensored
sonar
sonar
sonar-deep-research
sonar-deep-research
sonar-pro
sonar-pro
sonar-reasoning-pro
sonar-reasoning-pro
text-moderation-latest
text-moderation-latest
text-moderation-stable
text-moderation-stable
wizardlm-2-8x22b
wizardlm-2-8x22b
🎨 Image API

Image Generation

Create images using DALL·E 3, Flux Pro, Imagen 4, Sora 2, and 30+ image models.

POST https://api.oxyy.ai/v1/images/generations

Parameters

Parameter | Type | Required | Description
model | string | Required | Model ID (e.g., dall-e-3, flux.1-pro)
prompt | string | Required | Text description of the image to generate
n | integer | Optional | Number of images. Default: 1
size | string | Optional | Image size (e.g., 1024x1024)

Code Examples

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.images.generate(
    model="dall-e-3",
    prompt="A beautiful sunset over the ocean",
    n=1, size="1024x1024"
)
print(response.data[0].url)
const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.images.generate({
  model: 'dall-e-3', prompt: 'A beautiful sunset over the ocean', n: 1, size: '1024x1024'
});
console.log(response.data[0].url);
curl https://api.oxyy.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "dall-e-3", "prompt": "A beautiful sunset over the ocean", "n": 1, "size": "1024x1024"}'

Example Response

{
  "created": 1700000000,
  "data": [
    {
      "url": "https://cdn.oxyy.ai/images/2025/01/abc123.png",
      "revised_prompt": "A beautiful sunset over a calm ocean..."
    }
  ]
}

Available Models

30 models
Model
Model ID
cogvideox-flash
cogvideox-flash
dall-e-2
dall-e-2
dall-e-3
dall-e-3
flux
flux
flux.1-dev
flux.1-dev
flux.1-kontext
flux.1-kontext
flux.1-pro
flux.1-pro
flux.1-schnell
flux.1-schnell
flux.1-schnell-uncensored
flux.1-schnell-uncensored
flux.2-dev
flux.2-dev
flux.2-flex
flux.2-flex
flux.2-klein
flux.2-klein
flux.2-pro
flux.2-pro
Gemini 3 Pro Image
gemini-3-pro-image
gpt-image-1.5
gpt-image-1.5
grok-imagine-video
grok-imagine-video
imagen-3
imagen-3
imagen-4
imagen-4
Imagen 4 Fast
imagen-4-fast
Imagen 4 Ultra
imagen-4-ultra
nano-banana-pro
nano-banana-pro
p-image
p-image
Pollinations Flux
pollinations-flux
Pollinations GPT Image
pollinations-gptimage
Pollinations Seedream
pollinations-seedream
Pollinations Turbo
pollinations-turbo
qwen-image
qwen-image
sora-2
sora-2
veo-3.1
veo-3.1
z-image
z-image
🎬 Video API

Video Generation

Generate AI videos using Google Veo 2, Veo 3, and Veo 3.1. Video generation is asynchronous.

POST https://api.oxyy.ai/v1/videos/generations

Parameters

Parameter | Type | Required | Description
model | string | Required | Model ID (e.g., veo-3, veo-3.1-fast)
prompt | string | Required | Text description for the video
duration | integer | Optional | Duration in seconds. Default: 4
resolution | string | Optional | Video resolution (e.g., 1080p)

Code Examples

import requests

headers = { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" }

response = requests.post("https://api.oxyy.ai/v1/videos/generations", headers=headers,
    json={'model': 'veo-3', 'prompt': 'A serene lake with mountains', 'duration': 4})
job = response.json()
print("Job ID:", job["id"])
curl https://api.oxyy.ai/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "veo-3", "prompt": "A serene lake with mountains", "duration": 4, "resolution": "1080p"}'

Submit Response

{
  "id": "job_vid_abc123def456",
  "status": "queued",
  "model": "veo-3",
  "created_at": "2025-01-15T12:00:00Z",
  "estimated_wait": "30-60s"
}

Polling for Results

Video generation is asynchronous. After submitting a job, poll the status endpoint until the job completes.

GET https://api.oxyy.ai/v1/videos/generations/:job_id
import requests, time

headers = { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" }

# Step 1: Submit video generation job
response = requests.post("https://api.oxyy.ai/v1/videos/generations",
    headers=headers,
    json={'model': 'veo-3', 'prompt': 'A serene lake', 'duration': 4})
job = response.json()
job_id = job["id"]
print(f"Job submitted: {job_id}")

# Step 2: Poll for completion
while True:
    status = requests.get(
        f"https://api.oxyy.ai/v1/videos/generations/{job_id}",
        headers=headers
    ).json()

    if status["status"] == "completed":
        print("Video ready:", status["video_url"])
        break
    elif status["status"] == "failed":
        print("Generation failed:", status.get("error"))
        break
    else:
        print(f"Status: {status['status']}... waiting")
        time.sleep(5)
const headers = {
  'Authorization': 'Bearer YOUR_API_KEY',
  'Content-Type': 'application/json'
};

// Step 1: Submit video generation job
const job = await fetch('https://api.oxyy.ai/v1/videos/generations', {
  method: 'POST', headers,
  body: JSON.stringify({ model: 'veo-3', prompt: 'A serene lake', duration: 4 })
}).then(r => r.json());

console.log('Job submitted:', job.id);

// Step 2: Poll for completion
while (true) {
  const status = await fetch(
    `https://api.oxyy.ai/v1/videos/generations/${job.id}`,
    { headers }
  ).then(r => r.json());

  if (status.status === 'completed') {
    console.log('Video ready:', status.video_url);
    break;
  } else if (status.status === 'failed') {
    console.error('Failed:', status.error);
    break;
  }
  console.log(`Status: ${status.status}... waiting`);
  await new Promise(r => setTimeout(r, 5000));
}
# Step 1: Submit job
curl -X POST https://api.oxyy.ai/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "veo-3", "prompt": "A serene lake", "duration": 4}'

# Step 2: Poll using the returned job ID
curl https://api.oxyy.ai/v1/videos/generations/job_vid_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

Completed Response

{
  "id": "job_vid_abc123def456",
  "status": "completed",
  "model": "veo-3",
  "video_url": "https://cdn.oxyy.ai/videos/2025/01/abc123.mp4",
  "duration": 4,
  "resolution": "1080p",
  "created_at": "2025-01-15T12:00:00Z",
  "completed_at": "2025-01-15T12:00:45Z"
}
Polling Tip: Use a 5-second interval between polls. Most videos complete within 30–90 seconds depending on model and duration.
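The polling loop above can be wrapped in a small helper with a timeout so a stuck job doesn't poll forever. A sketch (fetch_status is any callable returning the status dict shown above, e.g. a wrapper around the GET request; the 300-second ceiling is an assumption, chosen to comfortably cover the typical 30–90 second window):

```python
import time

def poll_job(fetch_status, interval=5.0, timeout=300.0):
    """Poll until the job reaches a terminal state or the timeout expires.

    fetch_status: callable returning the job status dict, e.g.
      lambda: requests.get(f"https://api.oxyy.ai/v1/videos/generations/{job_id}",
                           headers=headers).json()
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)  # recommended 5-second interval between polls
    raise TimeoutError(f"video job still pending after {timeout:.0f}s")
```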

Available Models

5 models
Model
Model ID
Pollinations Veo
pollinations-veo
Veo 2
veo-2
Veo 3
veo-3
Veo 3 Fast
veo-3-fast
Veo 3.1 Fast
veo-3.1-fast
📁 File Uploads

File Uploads & Input Methods

Several oxyy API endpoints accept file inputs for image editing, image-to-video generation, and speech-to-text transcription. You can provide files using three different methods:

Method | Content-Type | Description
File Upload | multipart/form-data | Upload a file directly from disk using multipart/form-data
URL | application/json | Pass a URL to a hosted file via the image or file_url field
Base64 | application/json | Send base64-encoded data via the image_base64 or file_base64 field
Content-Type: Use multipart/form-data for file uploads (the OpenAI SDK handles this automatically). For URL and Base64 methods, use application/json.
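For the JSON-based methods, the file content travels inside the request body rather than as a multipart part. A sketch of the Base64 method for an image edit (field names as in the table above; the bytes are a stand-in for a real file read with open("source.png", "rb").read()):

```python
import base64
import json

# Stand-in for real image bytes, e.g. open("source.png", "rb").read()
image_bytes = b"\x89PNG\r\n\x1a\n...truncated example..."

payload = {
    "model": "flux.1-kontext-max",
    "prompt": "Change the background to a beach sunset",
    "image_base64": base64.b64encode(image_bytes).decode("ascii"),
}

# POST this as the request body with Content-Type: application/json
body = json.dumps(payload)
```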

Image Editing (Image-to-Image)

Upload a source image and provide a prompt to edit or transform it.

POST https://api.oxyy.ai/v1/images/edits

Parameter | Type | Required | Description
model | string | Required | Model ID (e.g., flux.1-kontext-max)
image | file/string | Required* | Source image — file upload or URL string
image_base64 | string | Required* | Base64-encoded image (alternative to image)
prompt | string | Required | Description of the desired edit
n | integer | Optional | Number of images. Default: 1
size | string | Optional | Output size (e.g., 1024x1024)

*One of image (file or URL) or image_base64 is required.
# Image-to-Image / Image Edit — upload a source image
import requests

headers = { "Authorization": "Bearer YOUR_API_KEY" }

# Use multipart/form-data for file uploads
files = { "image": open("source.png", "rb") }
data = {
    "model": "flux.1-kontext-max",
    "prompt": "Change the background to a beach sunset",
    "n": "1",
    "size": "1024x1024"
}

response = requests.post(
    "https://api.oxyy.ai/v1/images/edits",
    headers=headers, files=files, data=data
)
print(response.json())
import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.images.edit({
  model: 'flux.1-kontext-max',
  image: fs.createReadStream('source.png'),
  prompt: 'Change the background to a beach sunset',
  n: 1, size: '1024x1024'
});

console.log(response.data[0].url);
curl https://api.oxyy.ai/v1/images/edits \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=flux.1-kontext-max" \
  -F "image=@source.png" \
  -F "prompt=Change the background to a beach sunset" \
  -F "n=1" \
  -F "size=1024x1024"

Example Response

{
  "created": 1700000000,
  "data": [
    {
      "url": "https://cdn.oxyy.ai/images/2025/01/edited_abc123.png"
    }
  ]
}

Image-to-Video

Upload an image and animate it into a video clip.

POST https://api.oxyy.ai/v1/videos/generations

Parameter | Type | Required | Description
model | string | Required | Video model (e.g., veo-3)
image | file/string | Required* | Source image — file upload or URL string
image_base64 | string | Required* | Base64-encoded image (alternative to image)
prompt | string | Required | Motion / animation description
duration | integer | Optional | Duration in seconds. Default: 4
resolution | string | Optional | Output resolution (e.g., 1080p)

*One of image (file or URL) or image_base64 is required.
# Image-to-Video — upload an image to animate
import requests

headers = { "Authorization": "Bearer YOUR_API_KEY" }

files = { "image": open("photo.jpg", "rb") }
data = {
    "model": "veo-3",
    "prompt": "Slowly pan across the scene with gentle motion",
    "duration": "4",
    "resolution": "1080p"
}

response = requests.post(
    "https://api.oxyy.ai/v1/videos/generations",
    headers=headers, files=files, data=data
)
job = response.json()
print("Job ID:", job["id"])
curl https://api.oxyy.ai/v1/videos/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=veo-3" \
  -F "image=@photo.jpg" \
  -F "prompt=Slowly pan across the scene with gentle motion" \
  -F "duration=4" \
  -F "resolution=1080p"

Example Response

{
  "id": "job_vid_img2v_abc123",
  "status": "queued",
  "model": "veo-3",
  "created_at": "2025-01-15T12:00:00Z",
  "estimated_wait": "30-60s"
}
Image-to-video is also asynchronous — poll the status endpoint the same way as text-to-video above.

Audio Transcription (STT)

Upload an audio file for speech-to-text transcription with optional timestamps.

POST https://api.oxyy.ai/v1/audio/transcriptions

Parameter | Type | Required | Description
model | string | Required | STT model (e.g., gpt-4o-mini-transcribe)
file | file | Required* | Audio file — mp3, mp4, wav, m4a, webm, mpeg, mpga
file_url | string | Required* | URL of a hosted audio file (alternative to file)
file_base64 | string | Required* | Base64-encoded audio data (alternative to file)
response_format | string | Optional | json, text, verbose_json, srt, vtt
language | string | Optional | ISO-639-1 code (e.g., en, ja, bn)

*One of file, file_url, or file_base64 is required.
# Speech-to-Text — upload audio for transcription
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

# Supported: mp3, mp4, mpeg, mpga, m4a, wav, webm
with open("interview.mp3", "rb") as audio_file:
    response = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe",
        file=audio_file,
        response_format="verbose_json",
        language="en"
    )

print("Text:", response.text)
for seg in response.segments:
    print(f"{seg.start:.1f}s - {seg.end:.1f}s: {seg.text}")
import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.audio.transcriptions.create({
  model: 'gpt-4o-mini-transcribe',
  file: fs.createReadStream('interview.mp3'),
  response_format: 'verbose_json',
  language: 'en'
});

console.log(response.text);
curl https://api.oxyy.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=gpt-4o-mini-transcribe" \
  -F "file=@interview.mp3" \
  -F "response_format=verbose_json" \
  -F "language=en"

Example Response (verbose_json)

{
  "text": "Hello, this is a transcription test.",
  "language": "en",
  "duration": 5.42,
  "segments": [
    {
      "start": 0.0,
      "end": 2.1,
      "text": "Hello, this is"
    },
    {
      "start": 2.1,
      "end": 5.42,
      "text": "a transcription test."
    }
  ]
}
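The same endpoint also accepts the JSON input methods from the parameter table: a hosted file can be referenced by URL instead of uploaded. A sketch (the audio URL is a placeholder for your own hosted file):

```python
import json

payload = {
    "model": "gpt-4o-mini-transcribe",
    # file_url replaces the multipart "file" field
    "file_url": "https://example.com/audio/interview.mp3",
    "response_format": "verbose_json",
    "language": "en",
}
# POST to /v1/audio/transcriptions with Content-Type: application/json
body = json.dumps(payload)
```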
File Size Limits: Audio files up to 25 MB, images up to 20 MB. For larger files, consider compressing or splitting.
🔊 Audio API

Text-to-Speech (TTS)

Convert text into natural-sounding speech using ElevenLabs, OpenAI TTS, and Gemini TTS models.

POST https://api.oxyy.ai/v1/audio/speech

Parameters

Parameter | Type | Required | Description
model | string | Required | TTS model (e.g., eleven_v3, tts-1-hd)
voice | string | Required | Voice ID (e.g., alloy, echo)
input | string | Required | The text to convert to speech

Code Examples

from openai import OpenAI
from pathlib import Path

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.audio.speech.create(
    model="eleven_english_sts_v2", voice="alloy",
    input="Hello! This is a text-to-speech test."
)
response.stream_to_file(Path("output.mp3"))

Response

// Response: Binary audio data (MP3/WAV/OGG)
//
// Content-Type: audio/mpeg
// Content-Length: 45321
//
// The response body is raw audio bytes.
// Save directly to a file or stream to an audio player.
// Supported formats: mp3, opus, aac, flac, wav, pcm
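Outside the SDK, the response body can be written to disk byte-for-byte; nothing needs decoding. A sketch (audio_bytes stands in for the raw body, e.g. response.content from a requests call):

```python
# Stand-in for the raw response body, e.g. requests.post(...).content
audio_bytes = b"ID3\x04\x00..."  # placeholder MP3 bytes

# Open in binary mode ("wb"): the body is audio bytes, not text
with open("output.mp3", "wb") as f:
    f.write(audio_bytes)
```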

Available TTS Models

14 models
Model
Model ID
Eleven English STS v2
eleven_english_sts_v2
Eleven Flash v2
eleven_flash_v2
Eleven Flash v2.5
eleven_flash_v2_5
Eleven Multilingual STS v2
eleven_multilingual_sts_v2
Eleven Multilingual v2
eleven_multilingual_v2
Eleven Turbo v2
eleven_turbo_v2
Eleven Turbo v2.5
eleven_turbo_v2_5
Eleven v3
eleven_v3
gemini-2.5-flash-preview-tts
gemini-2.5-flash-preview-tts
Gemini 2.5 Flash TTS
gemini-2.5-flash-tts
Gemini 2.5 Pro TTS
gemini-2.5-pro-tts
gpt-4o-mini-tts
gpt-4o-mini-tts
tts-1
tts-1
tts-1-hd
tts-1-hd

Speech-to-Text (STT)

Transcribe audio files into text with high accuracy. Supports timestamps, language detection, and multiple output formats.

POST https://api.oxyy.ai/v1/audio/transcriptions

Parameters

Parameter | Type | Required | Description
model | string | Required | STT model (e.g., whisper-1, scribe_v1)
file | file | Required | Audio file (mp3, mp4, wav, etc.)
response_format | string | Optional | json, text, verbose_json, srt, vtt

Code Examples

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

with open("audio.mp3", "rb") as f:
    response = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe", file=f, response_format="verbose_json"
    )
print(response.text)
curl https://api.oxyy.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=gpt-4o-mini-transcribe" \
  -F "file=@audio.mp3"

Example Response

{
  "text": "Hello, this is a transcription test.",
  "language": "en",
  "duration": 5.42,
  "segments": [
    {
      "start": 0.0,
      "end": 2.1,
      "text": "Hello, this is"
    },
    {
      "start": 2.1,
      "end": 5.42,
      "text": "a transcription test."
    }
  ]
}

Available STT Models

5 models
Model
Model ID
gpt-4o-mini-transcribe
gpt-4o-mini-transcribe
gpt-4o-transcribe
gpt-4o-transcribe
Scribe v1
scribe_v1
Scribe v2
scribe_v2
whisper-1
whisper-1
🌐 Embeddings API

Embeddings

Generate vector embeddings for semantic search, clustering, recommendations, and RAG applications.

POST https://api.oxyy.ai/v1/embeddings

Parameters

Parameter | Type | Required | Description
model | string | Required | Embedding model (e.g., text-embedding-3-large)
input | string or array | Required | Text to embed (string or array of strings)

Code Examples

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.embeddings.create(
    model="gemini-embedding-001",
    input="The quick brown fox jumps over the lazy dog"
)
embedding = response.data[0].embedding
print(len(embedding), "dimensions")

Example Response

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.0091, 0.0156, -0.0042, ...]
    }
  ],
  "model": "gemini-embedding-001",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
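For the semantic-search use case mentioned above, similarity between two embedding vectors is typically measured with cosine similarity. A minimal, dependency-free sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Rank documents against a query embedding:
# scores = [cosine_similarity(query_emb, d) for d in doc_embeddings]
```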

Available Models

4 models
Model
Model ID
Gemini Embedding 001
gemini-embedding-001
text-embedding-3-large
text-embedding-3-large
text-embedding-3-small
text-embedding-3-small
text-embedding-ada-002
text-embedding-ada-002

SDKs & Libraries

oxyy API is OpenAI-compatible — use the official OpenAI SDK in any language. No custom SDK needed.

Language | Package | Install
Python | openai | pip install openai
JavaScript | openai | npm install openai
PHP | openai-php/client | composer require openai-php/client
Go | sashabaranov/go-openai | go get github.com/sashabaranov/go-openai
Ruby | ruby-openai | gem install ruby-openai

Error Codes

Standard HTTP error codes with descriptive error messages.

Error Response Format

{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
Code | Meaning | Resolution
401 | Invalid or missing API key | Check your Authorization header
403 | Insufficient permissions | Upgrade your plan or check key scope
404 | Model not found | Verify the model ID exists
429 | Rate limit exceeded | Slow down requests or upgrade plan
500 | Internal server error | Retry after a short delay
503 | Model temporarily unavailable | Try again or use a different model
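For the transient codes (429, 500, 503), exponential backoff with jitter on the client side is the usual pattern. A sketch (send is any callable performing the request and returning a response with a status_code, e.g. a wrapped requests.post; the attempt cap and delay ceiling are assumptions):

```python
import random
import time

RETRYABLE = {429, 500, 503}

def send_with_retries(send, max_attempts=5):
    """Call send() until a non-retryable status is returned or attempts run out.

    send: callable performing the HTTP request and returning an object
    with a .status_code, e.g. functools.partial(requests.post, url, ...).
    """
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code not in RETRYABLE:
            return resp
        # Exponential backoff (1s, 2s, 4s, ... capped at 30s) with jitter
        time.sleep(min(2 ** attempt, 30) * random.uniform(0.5, 1.0))
    return resp  # last retryable response after exhausting attempts
```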