OpenRouter: One API for 300+ AI Models

Unified API gateway for 300+ LLMs from OpenAI, Anthropic, Google & 60+ providers. Transparent pricing, automatic failover, enterprise compliance in one platform.

OpenRouter is a unified API platform that aggregates access to 300+ large language models from major providers (OpenAI, Anthropic, Google, Mistral, Meta, and 60+ others) through a single, OpenAI-compatible endpoint. It passes through provider pricing without markup, charges a small fee only on credit purchases, routes requests intelligently across providers for cost and performance optimization, and automatically falls back to alternative providers if one goes down. Available globally through edge-deployed infrastructure with ~25ms overhead. Used by 4.2M+ users across 250k+ applications.

Pricing

Free tier with 50 requests/day and 20 requests/min (no credit required). Pay-as-you-go: purchase credits via card or crypto (a 5.5% fee is added at purchase), then spend per token at provider rates—no minimums, no lock-in. Enterprise: custom volume discounts, annual commits, and prepaid invoicing available.

Frequently Asked Questions

What's the difference between OpenRouter and using OpenAI/Anthropic/Google APIs directly?

OpenRouter provides unified access to 300+ models from 60+ providers through a single API endpoint and billing account, eliminating the need to manage separate integrations. It offers transparent pricing (zero markup on inference), automatic failover across providers, and intelligent routing optimized for cost/latency/availability. However, there's a 5.5% fee when purchasing credits, and ~25ms of overhead from the routing layer.

How does OpenRouter pricing work?

OpenRouter charges per token at the provider's listed rates (zero markup). When you buy credits, a 5.5% fee is added at purchase time, so $100 of spendable credit costs $105.50. Free models are available within the free-tier rate limits (20 requests/min, 50 requests/day). BYOK (Bring Your Own Keys) incurs an additional 5% fee on usage. Enterprise plans offer custom pricing with volume discounts and annual commits.
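The fee arithmetic is simple enough to sketch in a few lines; the 5.5% figure is taken from this page, so check the live pricing page for current rates:

```python
def total_charge(credits: float, fee_rate: float = 0.055) -> float:
    """Amount charged at purchase time: credits plus the purchase fee."""
    return credits * (1 + fee_rate)

def effective_token_rate(provider_rate: float, fee_rate: float = 0.055) -> float:
    """Per-token cost once the purchase fee is amortized into the rate."""
    return provider_rate * (1 + fee_rate)

# Buying $100 of spendable credit costs $105.50 up front.
print(f"${total_charge(100):.2f}")  # → $105.50
```

Since the fee applies only at purchase, the effective markup on inference is a flat 5.5% regardless of which model or provider the credit is spent on.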

Is OpenRouter HIPAA/SOC 2 compliant?

Yes. OpenRouter is SOC 2, HIPAA, PCI, and GDPR compliant, ISO 27001 certified, FedRAMP compliant, and CSA STAR Level 1 compliant. For enterprise customers, EU in-region routing is available so data stays within the European Union, and fine-grained provider selection controls ensure prompts only reach compliant providers.
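Provider selection controls are expressed per request. A sketch of what such a routing-restriction payload might look like; the field names under "provider" (order, allow_fallbacks, data_collection) are assumptions based on OpenRouter's provider-routing options, and the model id is illustrative, so verify both against the current API docs:

```python
import json

# Hypothetical request body restricting routing for compliance:
# only the named providers, no fallback outside that list, and
# providers that retain prompt data excluded. Field names are
# assumptions -- check the provider-routing docs before relying on them.
payload = {
    "model": "anthropic/claude-opus-4",  # illustrative model id
    "messages": [{"role": "user", "content": "Summarize this record."}],
    "provider": {
        "order": ["anthropic", "google-vertex"],
        "allow_fallbacks": False,
        "data_collection": "deny",
    },
}
print(json.dumps(payload, indent=2))
```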

What happens if a provider goes down?

OpenRouter's automatic failover reroutes requests to alternative providers serving the same model. You're billed only for successful requests ("zero completion insurance"). The platform maintains 99.9%+ uptime by continuously monitoring provider health across its 60+ providers and shifting traffic based on real-time uptime metrics.

Can I use OpenRouter with the OpenAI SDK?

Yes. OpenRouter is fully OpenAI SDK compatible. Point the SDK's base URL to https://openrouter.ai/api/v1 and use your OpenRouter API key in place of the provider key. All OpenAI SDKs (Python, Node.js, etc.) work without further code changes, as do frameworks like LangChain, the Vercel AI SDK, and others that support OpenAI-compatible endpoints.
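Because the wire format is OpenAI's, even a stdlib-only client works; a minimal sketch of the same call without any SDK (the model id is illustrative):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at OpenRouter."""
    body = json.dumps({
        "model": model,  # OpenRouter ids are "provider/model", e.g. "openai/gpt-4o"
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is one more line (requires a live key, so commented out):
# with urllib.request.urlopen(build_request("sk-or-...", "openai/gpt-4o", "Hi")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

With the official OpenAI SDKs, the equivalent switch is just setting base_url to https://openrouter.ai/api/v1 and passing the OpenRouter key.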

What models are available on OpenRouter?

OpenRouter provides access to 300+ models from 60+ providers, ranging from frontier models like GPT-5, Claude Opus 4.6, Gemini 3.1 Pro, DeepSeek V3, and Grok 4 to hundreds of open-source and specialized models. The full catalog is browsable at https://openrouter.ai/models, with real-time latency and pricing data per provider.
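The catalog is also queryable programmatically (a GET to https://openrouter.ai/api/v1/models returns per-model metadata). A sketch of filtering such a listing by prompt price, using made-up sample entries; the "pricing" field layout is an assumption about the response shape, and the prices here are invented:

```python
# Sample entries mimicking the models-listing response shape.
# Prices are invented and the field layout is an assumption.
catalog = [
    {"id": "openai/gpt-4o", "pricing": {"prompt": "0.0000025"}},
    {"id": "deepseek/deepseek-chat", "pricing": {"prompt": "0.00000027"}},
    {"id": "meta-llama/llama-3.1-8b-instruct", "pricing": {"prompt": "0.00000005"}},
]

def cheaper_than(models, max_prompt_price: float):
    """Ids of models whose per-token prompt price is below the threshold."""
    return [m["id"] for m in models
            if float(m["pricing"]["prompt"]) < max_prompt_price]

print(cheaper_than(catalog, 1e-6))
# → ['deepseek/deepseek-chat', 'meta-llama/llama-3.1-8b-instruct']
```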

How much latency does OpenRouter add?

OpenRouter's edge infrastructure (via Cloudflare) adds roughly 25ms of overhead for routing and request handling. Total latency then depends on the routed provider's performance; TTFT (time to first token) and throughput metrics are published per provider per model. To optimize for speed, use the :nitro variant, which routes to the fastest available providers.
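Variants are suffixes on the model id, so selecting the speed-optimized route is just string concatenation. A tiny sketch; :nitro is named on this page, while the model id below is illustrative:

```python
def with_variant(model_id: str, variant: str) -> str:
    """Append a routing-variant suffix to an OpenRouter model id."""
    return f"{model_id}:{variant}"

# Route this (illustrative) model to the fastest available providers.
print(with_variant("meta-llama/llama-3.1-70b-instruct", "nitro"))
# → meta-llama/llama-3.1-70b-instruct:nitro
```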