Get Started
Start using free LLM models in your app in 3 steps.
1. What are you building?
Filter models by capability to find the best fit for your use case.
2. What matters most?
Order results by your priority.
3. Ready to code?
Copy the snippet and start building.
For example, sorting by context length produces this endpoint:
https://free-models-api.pages.dev/api/v1/models/openrouter?sort=contextLength
1. Get an OpenRouter API Key
OpenRouter provides a unified API for accessing many LLM providers. Sign up for free and create an API key.
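A minimal sketch, assuming the key is stored in an environment variable (the examples below read process.env.OPENROUTER_API_KEY):
// Read the key from the environment; fail fast if it is missing.
const apiKey = process.env.OPENROUTER_API_KEY;
if (!apiKey) throw new Error('Set OPENROUTER_API_KEY before running');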
2. Fetch Free Models
After setting up OpenRouter, add this to fetch available free models.
const res = await fetch('https://free-models-api.pages.dev/api/v1/models/openrouter?sort=contextLength');
const { models } = await res.json();
// Pass to OpenRouter; it falls back through the list automatically
const modelIds = models.map(m => m.id);
3. Pass Model IDs
Replace your model config with the fetched IDs. OpenRouter automatically falls back through the list.
models: modelIds
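For example, with OpenRouter's OpenAI-compatible chat completions endpoint, the fetched IDs go straight into the request body (a sketch; adapt it to whichever client or SDK you use):
// Sketch: call OpenRouter directly; the models array makes it fall back through the list in order.
const completion = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    models: modelIds,
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
}).then(r => r.json());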
Free Models
Select filters to find models that match your needs. Copy the generated code snippet to use in your app.
API Reference
GET /api/v1/models/openrouter
Returns the list of currently available free models.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| filter | string | Comma-separated: chat, vision, coding, longContext, reasoning |
| sort | string | One of: contextLength, maxOutput, name, provider, capable |
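Filters can be combined by comma-separating values. A quick sketch of building such a request with URLSearchParams, using the filter and sort values listed above:
// Vision-capable, long-context models, sorted by maximum output tokens.
const params = new URLSearchParams({ filter: 'vision,longContext', sort: 'maxOutput' });
const { models } = await fetch(
  `https://free-models-api.pages.dev/api/v1/models/openrouter?${params}`
).then(r => r.json());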
Response
{
"models": [
{
"id": "google/gemini-2.0-flash-exp:free",
"name": "Gemini 2.0 Flash",
"contextLength": 1000000,
"maxCompletionTokens": 8192,
"description": "...",
"modality": "text->text",
"inputModalities": ["text", "image"],
"outputModalities": ["text"],
"supportedParameters": ["tools", "reasoning"],
"isModerated": false
}
],
"feedbackCounts": {
"model-id": { "rateLimited": 0, "unavailable": 0, "error": 0 }
},
"lastUpdated": "2024-12-29T10:00:00Z",
"filters": ["vision"],
"sort": "contextLength",
"count": 15
}
Caching: 15-minute stale threshold (Cache-Control: public, s-maxage=900).
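For TypeScript consumers, here is a sketch of the response shape inferred from the example above (not an official type definition):
// Types inferred from the sample response; adjust if the API adds fields.
interface FreeModel {
  id: string;
  name: string;
  contextLength: number;
  maxCompletionTokens: number;
  description: string;
  modality: string;
  inputModalities: string[];
  outputModalities: string[];
  supportedParameters: string[];
  isModerated: boolean;
}
interface ModelsResponse {
  models: FreeModel[];
  feedbackCounts: Record<string, { rateLimited: number; unavailable: number; error: number }>;
  lastUpdated: string;
  filters: string[];
  sort: string;
  count: number;
}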
POST /api/feedback
Report issues with a model (rate limiting, errors, unavailability).
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| modelId | string | Yes | The model ID to report |
| issue | string | Yes | One of: rate_limited, unavailable, error |
| details | string | No | Optional description of the issue |
| source | string | No | Your app identifier (default: "anonymous") |
Request Example
{
"modelId": "google/gemini-2.0-flash-exp:free",
"issue": "rate_limited",
"details": "Getting 429 after ~10 requests",
"source": "my-app"
}
Response
{ "received": true }Errors
400 - Missing modelId or invalid issue type
500 - Server error
Code Examples
Basic Fetch
Fetch the list of free models and log the count.
const response = await fetch('https://free-models-api.pages.dev/api/v1/models/openrouter');
const { models, count } = await response.json();
console.log(`Found ${count} free models`);
With Filters
Filter models by capability and sort order.
// Get only vision-capable models, sorted by context length
const url = 'https://free-models-api.pages.dev/api/v1/models/openrouter?filter=vision&sort=contextLength';
const { models } = await fetch(url).then(r => r.json());
Full Integration with OpenRouter
Complete example with model fallback for reliable responses.
import { OpenRouter } from '@openrouter/sdk';
async function chat(message: string) {
// 1. Get current free models
const { models } = await fetch(
'https://free-models-api.pages.dev/api/v1/models/openrouter?sort=capable'
).then(r => r.json());
// 2. Create OpenRouter client
const openRouter = new OpenRouter({
apiKey: process.env.OPENROUTER_API_KEY,
});
// 3. Send message with automatic fallback
const completion = await openRouter.chat.send({
models: models.map(m => m.id),
messages: [{ role: 'user', content: message }],
});
return completion.choices[0].message.content;
}
Report an Issue
Help improve model availability data by reporting issues.
await fetch('https://free-models-api.pages.dev/api/feedback', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
modelId: 'google/gemini-2.0-flash-exp:free',
issue: 'rate_limited',
details: 'Getting 429 after ~10 requests',
source: 'my-app',
}),
});
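The endpoint answers with the error codes listed above, so a minimal status check catches a rejected report (a sketch; the variable name is illustrative):
// 400 = missing modelId or invalid issue type, 500 = server error.
const feedbackRes = await fetch('https://free-models-api.pages.dev/api/feedback', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ modelId: 'google/gemini-2.0-flash-exp:free', issue: 'error' }),
});
if (!feedbackRes.ok) console.error(`Feedback rejected with status ${feedbackRes.status}`);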