
Introduction

Proxyify is a unified AI gateway that gives you access to 300+ models across every provider — text, image, audio, video — through a single OpenAI-compatible endpoint. One API key, one bill, no provider accounts.

What is Proxyify?

Most AI gateway products require you to bring your own API keys (BYOK) — meaning you still need separate accounts at OpenAI, Anthropic, Google, and every other provider you want to use. Proxyify eliminates this entirely.

You register once on Proxyify, buy credits, and get a single API key. That key works with every supported model out of the box. Credits are deducted per request based on the model's pricing, and unused credits never expire.

Proxyify is in beta. Core features are being actively built. Feedback and bug reports are welcome.

How it works

Getting started takes less than two minutes:

1. Create an account

Sign in with Google. No credit card required: you get 500 free credits immediately to try every model.

2. Generate an API key

From the dashboard, create a key. Optionally lock it to specific origins, IP addresses, or model allowlists for extra security.

3. Point your SDK to Proxyify

Set base_url to https://proxyify.dev/v1 in your OpenAI client. No other code changes are required.

Core concepts

Credits

Credits are Proxyify's billing unit. Each API call deducts credits based on the model's pricing — token-based for text, character-based for TTS, and second-based for STT. You can see the exact cost of every request in the response metadata and in your dashboard. Credits never expire and never reset.
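As a rough illustration of how token-based deduction works for text models, here is a sketch using made-up rates. The rate values and the estimate_credits helper are illustrative only; actual per-model pricing is shown on the dashboard and in response metadata.

```python
# Hypothetical per-1K-token rates for illustration; real rates vary per model.
RATES = {"openai/gpt-4o-mini": {"input": 0.15, "output": 0.60}}

def estimate_credits(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the credit cost of a text request under token-based pricing."""
    rate = RATES[model]
    return (input_tokens / 1000) * rate["input"] + (output_tokens / 1000) * rate["output"]

# 2000 input tokens and 500 output tokens at the rates above:
print(estimate_credits("openai/gpt-4o-mini", 2000, 500))  # 0.3 + 0.3 = 0.6 credits
```

The same idea applies to the other modalities, with characters (TTS) or seconds (STT) in place of tokens.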

API Keys

Every key you create is scoped to your account balance. You can create multiple keys for different projects. Each key can have its own spending cap, model allowlist, IP/origin restriction, country block, time-based access window, and key expiry — making them safe to embed directly in frontend code when properly restricted.
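Conceptually, each key carries a policy that the gateway checks before forwarding a request. A minimal sketch of that idea, with illustrative field names that are not Proxyify's actual schema:

```python
# Hypothetical per-key policy; field names are illustrative, not the real dashboard schema.
policy = {
    "spending_cap": 100.0,                        # max credits this key may spend
    "model_allowlist": ["openai/gpt-4o-mini"],    # models this key may call
    "allowed_origins": ["https://app.example.com"],
}

def request_allowed(policy: dict, model: str, origin: str, spent: float) -> bool:
    """Check a request against the key's restrictions before it is forwarded."""
    return (
        model in policy["model_allowlist"]
        and origin in policy["allowed_origins"]
        and spent < policy["spending_cap"]
    )

print(request_allowed(policy, "openai/gpt-4o-mini", "https://app.example.com", 10.0))  # True
print(request_allowed(policy, "openai/gpt-4o-mini", "https://evil.example.com", 10.0))  # False
```

Layering several of these restrictions on one key is what makes it reasonable to ship in frontend code: even if the key leaks, it can only call the models you chose, from the origins you chose, up to the cap you chose.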

Endpoint

All requests go to a single base URL regardless of model or modality:

Base URL
https://proxyify.dev/v1

Quick example

The request format is identical to OpenAI's Chat Completions API:

Python
from openai import OpenAI

client = OpenAI(
    api_key="prx-xxxxxxxxxxxxxxxx",
    base_url="https://proxyify.dev/v1",
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)

You can switch to any supported model by changing only the model field. No configuration changes, no SDK swaps.
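Because the request body is identical for every provider, switching providers is a one-string change. A small sketch of what that means at the payload level (the second model ID is illustrative):

```python
def chat_payload(model: str, prompt: str) -> dict:
    """Build a Chat Completions request body; only the model field varies."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

a = chat_payload("openai/gpt-4o-mini", "Hello!")
b = chat_payload("anthropic/claude-3-5-haiku", "Hello!")  # hypothetical model ID

# Everything except the model field is identical between providers.
assert {k: v for k, v in a.items() if k != "model"} == {k: v for k, v in b.items() if k != "model"}
```

The same property is what makes existing OpenAI SDK code work unchanged once base_url points at Proxyify.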

Next steps

Once you have your API key, explore the rest of the documentation: