
Quick Start

Make your first API request in under 5 minutes. No provider accounts, no additional configuration.

Prerequisites

All you need is a Proxyify API key (keys start with prx-).

Install SDK

Proxyify uses the standard OpenAI SDK. No custom package needed.

Python:

```bash
pip install openai
```

Node:

```bash
npm install openai
```

First request

The only change from a standard OpenAI setup is the base_url. Everything else — method names, parameters, response shapes — stays identical.

Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="prx-xxxxxxxxxxxxxxxx",  # your Proxyify key
    base_url="https://proxyify.dev/v1",
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain REST APIs in one sentence."}],
)
print(response.choices[0].message.content)
```
JavaScript / Node

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "prx-xxxxxxxxxxxxxxxx",
  baseURL: "https://proxyify.dev/v1",
});

const res = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Explain REST APIs in one sentence." }],
});
console.log(res.choices[0].message.content);
```
curl

```bash
curl https://proxyify.dev/v1/chat/completions \
  -H "Authorization: Bearer prx-xxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Explain REST APIs in one sentence."}]
  }'
```

Every response includes a _balancer field with credits_used, cost_usd, and latency_ms. You can see all requests in Dashboard → Logs.
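As a sketch of what that metadata might look like, here is a hypothetical response payload (the field names credits_used, cost_usd, and latency_ms come from above; the values and the rest of the envelope are placeholders) and how you could read it from the raw JSON:

```python
import json

# Hypothetical response body -- values are illustrative, not real billing data.
raw = """
{
  "id": "chatcmpl-123",
  "choices": [{"message": {"role": "assistant", "content": "..."}}],
  "_balancer": {"credits_used": 1, "cost_usd": 0.0001, "latency_ms": 420}
}
"""

sample = json.loads(raw)
meta = sample["_balancer"]
print(meta["cost_usd"], meta["latency_ms"])  # per-request cost and latency
```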

Switch models

Changing models is a one-line edit. Proxyify supports 300+ models across every major provider:

examples

```python
# OpenAI
model="openai/gpt-4o"
model="openai/o3-mini"

# Anthropic
model="anthropic/claude-3-5-sonnet"
model="anthropic/claude-3-5-haiku"

# Google
model="google/gemini-2.0-flash"
model="google/gemini-2.5-pro"

# Meta
model="meta-llama/llama-4-scout"
```

Browse all available models on the Models page; click any slug to copy it.

Next steps