API Reference
OpenAI-compatible · REST · /v1/chat/completions
The MAIG gateway exposes a single chat completions endpoint that is fully compatible with the OpenAI API. Any client or library that works with OpenAI can be pointed at MAIG with a URL and key swap.
The Endpoint
All chat completion requests go to:
POST https://api.maig.dev/v1/chat/completions
Authentication
Pass your MAIG API key in the Authorization header using the Bearer scheme:
Authorization: Bearer maig_your_key_here
API keys are prefixed with maig_ and are created in the MAIG dashboard under your project settings. Each project has its own key. Requests without a valid key return 401.
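For clients that don't use an SDK, the header can be set directly on the request. A minimal sketch using Python's standard library, with the placeholder key and example body from this page (the request is only constructed here, not sent):

```python
import json
import urllib.request

def build_request(api_key: str, body: dict) -> urllib.request.Request:
    """Construct a chat completions request with the Bearer auth header."""
    return urllib.request.Request(
        "https://api.maig.dev/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("maig_your_key_here", {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
})
```

Sending it is then a single `urllib.request.urlopen(req)` call, or the equivalent in any HTTP client.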
Request Fields
The request body is JSON and follows the OpenAI chat completions format. Fields not supported by the selected provider are silently dropped — see the API Compatibility matrix for the full per-provider breakdown.
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name or MAIG route name. Model names starting with claude- route to Anthropic; gemini- routes to Google; all others route to OpenAI. |
| messages | array | Yes | Array of message objects with role (system, user, assistant) and content fields. |
| stream | boolean | No | If true, the response is streamed as Server-Sent Events. Defaults to false. |
| max_tokens | integer | No | Maximum number of tokens in the completion. Defaults to 1024 when using Anthropic if omitted. |
| temperature | number | No | Sampling temperature between 0 and 2. Higher values produce more random output. Supported by all providers. |
| top_p | number | No | Nucleus sampling parameter. Supported by all providers. |
| stop | string or array | No | Stop sequences. Supported by OpenAI and Anthropic. Silently dropped for Google models. |
| frequency_penalty | number | No | Penalizes repeated tokens by frequency. Supported by OpenAI only. Silently dropped for Anthropic and Google. |
| presence_penalty | number | No | Penalizes tokens that have already appeared. Supported by OpenAI only. Silently dropped for Anthropic and Google. |
| seed | integer | No | Seed for deterministic sampling. Supported by OpenAI only. Silently dropped for Anthropic and Google. |
| response_format | object | No | Output format, e.g. {"type": "json_object"}. Fully supported by OpenAI; approximated via system prompt injection for Anthropic; silently dropped for Google. |
For the full per-provider field support matrix, see the API Compatibility page.
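The prefix-based routing described in the model field can be sketched as a small helper. This is an illustration of the documented rules, not the gateway's actual implementation:

```python
def route_provider(model: str) -> str:
    """Pick a provider from the model-name prefix, per the routing rules above."""
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith("gemini-"):
        return "google"
    return "openai"  # all other model names route to OpenAI
```

For example, route_provider("claude-3-5-sonnet-20241022") resolves to Anthropic, while route_provider("gpt-4o") resolves to OpenAI.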
Example Request (non-streaming)
curl https://api.maig.dev/v1/chat/completions \
-H "Authorization: Bearer maig_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Example Response
All gateway responses are returned in the standard OpenAI chat completions format, regardless of which provider handled the request.
{
"id": "chatcmpl-a1b2c3d4e5f6g7h8i9j0k",
"object": "chat.completion",
"created": 1735689600,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 9,
"total_tokens": 19
}
}
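Because every response uses the OpenAI shape regardless of provider, extraction code can stay provider-agnostic. A minimal sketch that pulls the assistant reply and token usage out of the example response above:

```python
import json

# The example response above, as returned by the gateway.
body = '''{
  "id": "chatcmpl-a1b2c3d4e5f6g7h8i9j0k",
  "object": "chat.completion",
  "created": 1735689600,
  "model": "gpt-4o",
  "choices": [{"index": 0,
               "message": {"role": "assistant",
                           "content": "Hello! How can I help you today?"},
               "finish_reason": "stop"}],
  "usage": {"prompt_tokens": 10, "completion_tokens": 9, "total_tokens": 19}
}'''

data = json.loads(body)
reply = data["choices"][0]["message"]["content"]  # the assistant's text
usage = data["usage"]                              # token accounting
```

The same two lookups work unchanged for Claude and Gemini responses.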
Streaming
Set "stream": true to receive tokens as Server-Sent Events. Use --no-buffer with curl to see chunks as they arrive:
curl https://api.maig.dev/v1/chat/completions \
--no-buffer \
-H "Authorization: Bearer maig_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": true
}'
The response is a sequence of data: lines, each containing a JSON chunk. The stream ends with data: [DONE]:
data: {"id":"chatcmpl-a1b2c3d4","object":"chat.completion.chunk","created":1735689600,"model":"gpt-4o","choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-a1b2c3d4","object":"chat.completion.chunk","created":1735689600,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"! How"},"finish_reason":null}]}
data: {"id":"chatcmpl-a1b2c3d4","object":"chat.completion.chunk","created":1735689600,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":" can I help?"},"finish_reason":"stop"}]}
data: [DONE]
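A stream consumer only needs to strip the data: prefix, stop at [DONE], and concatenate the content deltas. A minimal sketch over chunks shaped like the example above (abridged to the choices field for brevity):

```python
import json

# Chunks shaped like the example stream above, abridged to "choices".
lines = [
    'data: {"choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"! How"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":" can I help?"},"finish_reason":"stop"}]}',
    'data: [DONE]',
]

def assemble(sse_lines):
    """Concatenate content deltas from data: lines, stopping at [DONE]."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # first delta has no content key for some providers
    return "".join(parts)
```

Note that the role appears only in the first chunk's delta, and delta.get("content", "") guards against chunks that carry no text.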
Using Claude
Pass any Anthropic model name; model names starting with claude- are automatically routed to Anthropic. The request and response formats remain identical.
curl https://api.maig.dev/v1/chat/completions \
-H "Authorization: Bearer maig_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Using Gemini
Pass any Google model name; model names starting with gemini- are automatically routed to Google. The request and response formats remain identical.
curl https://api.maig.dev/v1/chat/completions \
-H "Authorization: Bearer maig_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.0-flash",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Error Codes
The gateway returns standard HTTP status codes. Error responses include a JSON body with an error object.
| Code | Meaning | What to do |
|---|---|---|
| 401 | Invalid or missing API key | Check that the Authorization header is present and the key starts with maig_. Verify the key is active in the dashboard. |
| 402 | Subscription inactive or payment failed | Check your billing status in the dashboard. Update your payment method or reactivate your subscription. |
| 422 | Malformed request body | Inspect the error message for the specific field that failed validation. Ensure model and messages are present and correctly typed. |
| 429 | Rate limit or monthly plan limit exceeded | Back off and retry with exponential delay for rate limits. If you are consistently hitting your monthly limit, consider upgrading your plan. See Rate Limits. |
| 502 | All configured providers failed | The primary provider and any fallback providers all returned errors. Check provider status pages. The gateway retried automatically; this error means all attempts were exhausted. |
| 503 | No provider configured for this model | Your project does not have a provider configured that supports the requested model. Add the appropriate provider credentials in the dashboard. |
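For 429s, the back-off-and-retry advice above can be sketched as a simple delay schedule. The base delay and cap here are illustrative defaults, not values mandated by the gateway:

```python
def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Exponential backoff schedule in seconds: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]
```

A retry loop would sleep for each delay in turn before reissuing the request, giving up once the schedule is exhausted. Adding random jitter to each delay is a common refinement that avoids synchronized retries across clients.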