oxyy API Documentation
oxyy API provides unified access to 200+ AI models through a single, OpenAI-compatible API. Use Chat, Image, Video, Audio, and Embedding models — all with one API key, at up to 60% lower cost.
Point your base_url to https://api.oxyy.ai/v1 — no other code changes are needed.

API Key Setup
All API requests require authentication using a Bearer token in the Authorization header.
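For reference, a raw request only needs that header. A minimal standard-library sketch (the /v1/models listing endpoint is an assumption here, following the usual OpenAI-compatible layout; it is not documented above):

```python
import urllib.request

def auth_headers(api_key: str) -> dict:
    """Headers required on every oxyy API request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def list_models(api_key: str) -> bytes:
    # Hypothetical endpoint, assumed from the OpenAI-compatible convention.
    req = urllib.request.Request("https://api.oxyy.ai/v1/models",
                                 headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```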
Quick Start
If you already use the OpenAI SDK, migration is one line of code. Just change your base URL:
```python
# pip install openai
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.oxyy.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
```javascript
// npm install openai
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.oxyy.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(response.choices[0].message.content);
```
```shell
curl https://api.oxyy.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "gpt-5", "messages": [{"role": "user", "content": "Hello!"}]}'
```
Chat Completions
Generate text using 145+ language models — GPT-5, Claude Opus 4.6, Gemini 3 Pro, DeepSeek, Grok 4, and more.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., gpt-5, claude-opus-4.6) |
| messages | array | Required | Array of message objects with role and content |
| temperature | float | Optional | Sampling temperature 0–2. Default: 0.7 |
| max_tokens | integer | Optional | Max tokens to generate. Default: 4096 |
Code Examples
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.oxyy.ai/v1"
)

response = client.chat.completions.create(
    model="chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=4096
)
print(response.choices[0].message.content)
```
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.oxyy.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'chatgpt-4o-latest',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,
  max_tokens: 4096
});
console.log(response.choices[0].message.content);
```
```shell
curl https://api.oxyy.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "chatgpt-4o-latest", "messages": [{"role": "user", "content": "Hello!"}], "temperature": 0.7}'
```
```php
$client = OpenAI::factory()
    ->withApiKey('YOUR_API_KEY')
    ->withBaseUri('https://api.oxyy.ai/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'chatgpt-4o-latest',
    'messages' => [['role' => 'user', 'content' => 'Hello!']],
    'temperature' => 0.7,
]);

echo $response->choices[0]->message->content;
```
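The endpoint is stateless, so multi-turn chat means resending the whole messages array on every call. A minimal sketch of maintaining history between turns (the helper names are illustrative; no request is made until you pass a real client):

```python
def append_turn(history: list, role: str, content: str) -> list:
    """Append one message dict; the full history is resent every request."""
    history.append({"role": role, "content": content})
    return history

def chat_round(client, history, user_msg, model="gpt-5"):
    # The API is stateless: send the entire conversation each time,
    # then append the assistant's reply before the next turn.
    append_turn(history, "user", user_msg)
    resp = client.chat.completions.create(model=model, messages=history)
    return append_turn(history, "assistant", resp.choices[0].message.content)

# Usage (requires the openai package and a real key):
# from openai import OpenAI
# client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")
# history = [{"role": "system", "content": "You are a concise assistant."}]
# chat_round(client, history, "What is the capital of France?")
# chat_round(client, history, "And its population?")
```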
Example Response
```json
{
  "id": "chatcmpl-abc123def456",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "chatgpt-4o-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```

Available Models
Image Generation
Create images using DALL·E 3, Flux Pro, Imagen 4, Sora 2, and 30+ image models.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., dall-e-3, flux.1-pro) |
| prompt | string | Required | Text description of the image to generate |
| n | integer | Optional | Number of images. Default: 1 |
| size | string | Optional | Image size (e.g., 1024x1024) |
Code Examples
```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.images.generate(
    model="dall-e-3",
    prompt="A beautiful sunset over the ocean",
    n=1,
    size="1024x1024"
)
print(response.data[0].url)
```
```javascript
const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.images.generate({
  model: 'dall-e-3',
  prompt: 'A beautiful sunset over the ocean',
  n: 1,
  size: '1024x1024'
});
console.log(response.data[0].url);
```
```shell
curl https://api.oxyy.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "dall-e-3", "prompt": "A beautiful sunset over the ocean", "n": 1, "size": "1024x1024"}'
```
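When calling the endpoint without an SDK, the response is plain JSON with a data array of image objects. A small helper to pull the URLs out of a raw response body (sketch):

```python
import json

def extract_urls(response_json: str) -> list:
    """Pull every image URL out of a raw /v1/images/generations response."""
    payload = json.loads(response_json)
    return [item["url"] for item in payload["data"]]
```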
Example Response
```json
{
  "created": 1700000000,
  "data": [
    {
      "url": "https://cdn.oxyy.ai/images/2025/01/abc123.png",
      "revised_prompt": "A beautiful sunset over a calm ocean..."
    }
  ]
}
```

Available Models
Video Generation
Generate AI videos using Google Veo 2, Veo 3, and Veo 3.1. Video generation is asynchronous.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., veo-3, veo-3.1-fast) |
| prompt | string | Required | Text description for the video |
| duration | integer | Optional | Duration in seconds. Default: 4 |
| resolution | string | Optional | Video resolution (e.g., 1080p) |
Code Examples
```python
import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

response = requests.post(
    "https://api.oxyy.ai/v1/videos/generations",
    headers=headers,
    json={"model": "veo-3", "prompt": "A serene lake with mountains", "duration": 4}
)
job = response.json()
print("Job ID:", job["id"])
```
```shell
curl https://api.oxyy.ai/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "veo-3", "prompt": "A serene lake with mountains", "duration": 4, "resolution": "1080p"}'
```
Submit Response
```json
{
  "id": "job_vid_abc123def456",
  "status": "queued",
  "model": "veo-3",
  "created_at": "2025-01-15T12:00:00Z",
  "estimated_wait": "30-60s"
}
```

Polling for Results
Video generation is asynchronous. After submitting a job, poll the status endpoint until the job completes.
```python
import requests
import time

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Step 1: Submit video generation job
response = requests.post(
    "https://api.oxyy.ai/v1/videos/generations",
    headers=headers,
    json={"model": "veo-3", "prompt": "A serene lake", "duration": 4}
)
job = response.json()
job_id = job["id"]
print(f"Job submitted: {job_id}")

# Step 2: Poll for completion
while True:
    status = requests.get(
        f"https://api.oxyy.ai/v1/videos/generations/{job_id}",
        headers=headers
    ).json()
    if status["status"] == "completed":
        print("Video ready:", status["video_url"])
        break
    elif status["status"] == "failed":
        print("Generation failed:", status.get("error"))
        break
    else:
        print(f"Status: {status['status']}... waiting")
        time.sleep(5)
```
```javascript
const headers = {
  'Authorization': 'Bearer YOUR_API_KEY',
  'Content-Type': 'application/json'
};

// Step 1: Submit video generation job
const job = await fetch('https://api.oxyy.ai/v1/videos/generations', {
  method: 'POST',
  headers,
  body: JSON.stringify({ model: 'veo-3', prompt: 'A serene lake', duration: 4 })
}).then(r => r.json());
console.log('Job submitted:', job.id);

// Step 2: Poll for completion
while (true) {
  const status = await fetch(
    `https://api.oxyy.ai/v1/videos/generations/${job.id}`,
    { headers }
  ).then(r => r.json());
  if (status.status === 'completed') {
    console.log('Video ready:', status.video_url);
    break;
  } else if (status.status === 'failed') {
    console.error('Failed:', status.error);
    break;
  }
  console.log(`Status: ${status.status}... waiting`);
  await new Promise(r => setTimeout(r, 5000));
}
```
```shell
# Step 1: Submit job
curl -X POST https://api.oxyy.ai/v1/videos/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "veo-3", "prompt": "A serene lake", "duration": 4}'

# Step 2: Poll using the returned job ID
curl https://api.oxyy.ai/v1/videos/generations/job_vid_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"
```
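The inline polling loops above can be factored into a reusable helper with an overall timeout, so a stuck job cannot poll forever. A sketch, where fetch_status is any callable returning the job JSON:

```python
import time

def poll_until_done(fetch_status, timeout_s: int = 300, interval_s: int = 5):
    """Call fetch_status() until the job reaches a terminal state or times out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = fetch_status()
        # "completed" and "failed" are the terminal states shown in these docs.
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(interval_s)
    raise TimeoutError("video generation did not finish within the timeout")
```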
Completed Response
```json
{
  "id": "job_vid_abc123def456",
  "status": "completed",
  "model": "veo-3",
  "video_url": "https://cdn.oxyy.ai/videos/2025/01/abc123.mp4",
  "duration": 4,
  "resolution": "1080p",
  "created_at": "2025-01-15T12:00:00Z",
  "completed_at": "2025-01-15T12:00:45Z"
}
```

Available Models
File Uploads & Input Methods
Several oxyy API endpoints accept file inputs for image editing, image-to-video generation, and speech-to-text transcription. You can provide files using three different methods:
| Method | Content-Type | Description |
|---|---|---|
| File Upload | multipart/form-data | Upload a file directly from disk using multipart/form-data |
| URL | application/json | Pass a URL to a hosted file via the image or file_url field |
| Base64 | application/json | Send base64-encoded data via the image_base64 or file_base64 field |
Use multipart/form-data for file uploads (the OpenAI SDK handles this automatically). For the URL and Base64 methods, use application/json.

Image Editing (Image-to-Image)
Upload a source image and provide a prompt to edit or transform it.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID (e.g., flux.1-kontext-max) |
| image | file/string | Required* | Source image — file upload or URL string |
| image_base64 | string | Required* | Base64-encoded image (alternative to image) |
| prompt | string | Required | Description of the desired edit |
| n | integer | Optional | Number of images. Default: 1 |
| size | string | Optional | Output size (e.g., 1024x1024) |
*Either image (file or URL) or image_base64 is required.

```python
# Image-to-Image / Image Edit — upload a source image
import requests

headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Use multipart/form-data for file uploads
files = {"image": open("source.png", "rb")}
data = {
    "model": "flux.1-kontext-max",
    "prompt": "Change the background to a beach sunset",
    "n": "1",
    "size": "1024x1024"
}

response = requests.post(
    "https://api.oxyy.ai/v1/images/edits",
    headers=headers,
    files=files,
    data=data
)
print(response.json())
```
```javascript
import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.images.edit({
  model: 'flux.1-kontext-max',
  image: fs.createReadStream('source.png'),
  prompt: 'Change the background to a beach sunset',
  n: 1,
  size: '1024x1024'
});
console.log(response.data[0].url);
```
```shell
curl https://api.oxyy.ai/v1/images/edits \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=flux.1-kontext-max" \
  -F "image=@source.png" \
  -F "prompt=Change the background to a beach sunset" \
  -F "n=1" \
  -F "size=1024x1024"
```
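As an alternative to multipart upload, the Base64 input method sends a plain JSON body. A sketch of building that body (the image_base64 field name follows the input-methods table above; the helper name is illustrative):

```python
import base64
import json

def b64_edit_payload(path: str, model: str, prompt: str) -> str:
    """Build the JSON body for the Base64 input method of /v1/images/edits."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "image_base64": encoded,  # field name from the input-methods table
    })
```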
Example Response
```json
{
  "created": 1700000000,
  "data": [
    {
      "url": "https://cdn.oxyy.ai/images/2025/01/edited_abc123.png"
    }
  ]
}
```

Image-to-Video
Upload an image and animate it into a video clip.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Video model (e.g., veo-3) |
| image | file/string | Required* | Source image — file upload or URL string |
| image_base64 | string | Required* | Base64-encoded image (alternative to image) |
| prompt | string | Required | Motion / animation description |
| duration | integer | Optional | Duration in seconds. Default: 4 |
| resolution | string | Optional | Output resolution (e.g., 1080p) |
*Either image (file or URL) or image_base64 is required.

```python
# Image-to-Video — upload an image to animate
import requests

headers = {"Authorization": "Bearer YOUR_API_KEY"}

files = {"image": open("photo.jpg", "rb")}
data = {
    "model": "veo-3",
    "prompt": "Slowly pan across the scene with gentle motion",
    "duration": "4",
    "resolution": "1080p"
}

response = requests.post(
    "https://api.oxyy.ai/v1/videos/generations",
    headers=headers,
    files=files,
    data=data
)
job = response.json()
print("Job ID:", job["id"])
```
```shell
curl https://api.oxyy.ai/v1/videos/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=veo-3" \
  -F "image=@photo.jpg" \
  -F "prompt=Slowly pan across the scene with gentle motion" \
  -F "duration=4" \
  -F "resolution=1080p"
```
Example Response
```json
{
  "id": "job_vid_img2v_abc123",
  "status": "queued",
  "model": "veo-3",
  "created_at": "2025-01-15T12:00:00Z",
  "estimated_wait": "30-60s"
}
```

Audio Transcription (STT)
Upload an audio file for speech-to-text transcription with optional timestamps.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | STT model (e.g., gpt-4o-mini-transcribe) |
| file | file | Required* | Audio file — mp3, mp4, wav, m4a, webm, mpeg, mpga |
| file_url | string | Required* | URL of a hosted audio file (alternative to file) |
| file_base64 | string | Required* | Base64-encoded audio data (alternative to file) |
| response_format | string | Optional | json, text, verbose_json, srt, vtt |
| language | string | Optional | ISO-639-1 code (e.g., en, ja, bn) |
*One of file, file_url, or file_base64 is required.

```python
# Speech-to-Text — upload audio for transcription
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

# Supported: mp3, mp4, mpeg, mpga, m4a, wav, webm
with open("interview.mp3", "rb") as audio_file:
    response = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe",
        file=audio_file,
        response_format="verbose_json",
        language="en"
    )

print("Text:", response.text)
for seg in response.segments:
    print(f"{seg.start:.1f}s - {seg.end:.1f}s: {seg.text}")
```
```javascript
import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'YOUR_API_KEY', baseURL: 'https://api.oxyy.ai/v1' });

const response = await client.audio.transcriptions.create({
  model: 'gpt-4o-mini-transcribe',
  file: fs.createReadStream('interview.mp3'),
  response_format: 'verbose_json',
  language: 'en'
});
console.log(response.text);
```
```shell
curl https://api.oxyy.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=gpt-4o-mini-transcribe" \
  -F "file=@interview.mp3" \
  -F "response_format=verbose_json" \
  -F "language=en"
```
Example Response (verbose_json)
```json
{
  "text": "Hello, this is a transcription test.",
  "language": "en",
  "duration": 5.42,
  "segments": [
    {
      "start": 0.0,
      "end": 2.1,
      "text": "Hello, this is"
    },
    {
      "start": 2.1,
      "end": 5.42,
      "text": "a transcription test."
    }
  ]
}
```

Text-to-Speech (TTS)
Convert text into natural-sounding speech using ElevenLabs, OpenAI TTS, and Gemini TTS models.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | TTS model (e.g., eleven_v3, tts-1-hd) |
| voice | string | Required | Voice ID (e.g., alloy, echo) |
| input | string | Required | The text to convert to speech |
Code Examples
```python
from openai import OpenAI
from pathlib import Path

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.audio.speech.create(
    model="eleven_english_sts_v2",
    voice="alloy",
    input="Hello! This is a text-to-speech test."
)
response.stream_to_file(Path("output.mp3"))
```
Response
```text
Response: Binary audio data (MP3/WAV/OGG)

Content-Type: audio/mpeg
Content-Length: 45321

The response body is raw audio bytes.
Save directly to a file or stream to an audio player.
Supported formats: mp3, opus, aac, flac, wav, pcm
```
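Without an SDK, the binary body can be saved straight from a raw HTTP call. A standard-library sketch (the /v1/audio/speech path is an assumption, mirroring the client.audio.speech.create call above; the function name is illustrative):

```python
import json
import urllib.request

def synthesize(api_key: str, text: str, out_path: str = "output.mp3") -> str:
    # Assumed endpoint path, following the OpenAI-compatible convention.
    body = json.dumps({"model": "tts-1-hd", "voice": "alloy",
                       "input": text}).encode("utf-8")
    req = urllib.request.Request(
        "https://api.oxyy.ai/v1/audio/speech",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        audio = resp.read()          # raw audio bytes (e.g. audio/mpeg)
    with open(out_path, "wb") as f:
        f.write(audio)
    return out_path
```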
Available TTS Models
Speech-to-Text (STT)
Transcribe audio files into text with high accuracy. Supports timestamps, language detection, and multiple output formats.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | STT model (e.g., whisper-1, scribe_v1) |
| file | file | Required | Audio file (mp3, mp4, wav, etc.) |
| response_format | string | Optional | json, text, verbose_json, srt, vtt |
Code Examples
```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

with open("audio.mp3", "rb") as f:
    response = client.audio.transcriptions.create(
        model="gpt-4o-mini-transcribe",
        file=f,
        response_format="verbose_json"
    )
print(response.text)
```
```shell
curl https://api.oxyy.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=gpt-4o-mini-transcribe" \
  -F "file=@audio.mp3"
```
Example Response
```json
{
  "text": "Hello, this is a transcription test.",
  "language": "en",
  "duration": 5.42,
  "segments": [
    {
      "start": 0.0,
      "end": 2.1,
      "text": "Hello, this is"
    },
    {
      "start": 2.1,
      "end": 5.42,
      "text": "a transcription test."
    }
  ]
}
```

Available STT Models
Embeddings
Generate vector embeddings for semantic search, clustering, recommendations, and RAG applications.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Embedding model (e.g., text-embedding-3-large) |
| input | string or array | Required | Text to embed (a single string or an array of strings) |
Code Examples
```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.oxyy.ai/v1")

response = client.embeddings.create(
    model="gemini-embedding-001",
    input="The quick brown fox jumps over the lazy dog"
)
embedding = response.data[0].embedding
print(len(embedding), "dimensions")
```
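The returned vectors are typically compared with cosine similarity for semantic search and RAG ranking. A small, dependency-free helper:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical directions score 1.0, orthogonal vectors 0.0:
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```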
Example Response
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.0091, 0.0156, -0.0042, ...]
    }
  ],
  "model": "gemini-embedding-001",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
```

Available Models
SDKs & Libraries
oxyy API is OpenAI-compatible — use the OpenAI SDK or a community OpenAI client in your language of choice. No custom SDK is needed.
| Language | Package | Install |
|---|---|---|
| Python | openai | pip install openai |
| JavaScript | openai | npm install openai |
| PHP | openai-php/client | composer require openai-php/client |
| Go | sashabaranov/go-openai | go get github.com/sashabaranov/go-openai |
| Ruby | ruby-openai | gem install ruby-openai |
Error Codes
Standard HTTP error codes with descriptive error messages.
Error Response Format
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

| Code | Meaning | Resolution |
|---|---|---|
| 401 | Invalid or missing API key | Check your Authorization header |
| 403 | Insufficient permissions | Upgrade your plan or check key scope |
| 404 | Model not found | Verify the model ID exists |
| 429 | Rate limit exceeded | Slow down requests or upgrade plan |
| 500 | Internal server error | Retry after a short delay |
| 503 | Model temporarily unavailable | Try again or use a different model |
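For the retryable codes (429, 500, 503), clients typically retry with exponential backoff plus jitter. A sketch, where with_backoff and the (status, body) return shape are illustrative, not part of the API:

```python
import random
import time

RETRYABLE = {429, 500, 503}

def with_backoff(call, max_attempts: int = 5):
    """Retry a callable returning (status, body) on retryable HTTP codes."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE:
            return status, body
        # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
        time.sleep(2 ** attempt + random.random())
    return status, body
```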