Magnitude supports a wide range of LLM providers. You can configure them through the `/settings` command, the `/provider` command, or by setting environment variables.
## Supported providers
| Provider | Auth method | Environment variable |
|---|---|---|
| OpenAI | Subscription (ChatGPT Plus/Pro), API key | `OPENAI_API_KEY` |
| Anthropic | API key | `ANTHROPIC_API_KEY` |
| GitHub Copilot | Subscription (GitHub) | — |
| Google (Gemini) | API key | `GOOGLE_API_KEY` or `GEMINI_API_KEY` |
| OpenRouter | API key | `OPENROUTER_API_KEY` |
| Vercel AI Gateway | API key | `VERCEL_API_KEY` |
| Google Vertex AI | GCP service account | `GOOGLE_APPLICATION_CREDENTIALS` |
| Vertex AI (Anthropic) | GCP service account | `GOOGLE_APPLICATION_CREDENTIALS` |
| Amazon Bedrock | AWS credentials | `AWS_ACCESS_KEY_ID`, `AWS_PROFILE`, `AWS_DEFAULT_REGION` |
| MiniMax | API key | `MINIMAX_API_KEY` |
| Z.AI (Zhipu AI) | API key | `ZHIPU_API_KEY` |
| Cerebras | API key | `CEREBRAS_API_KEY` |
| Local | None required | — |
## Setting up a provider
### API key
The simplest way to connect is to set an environment variable:

```shell
export ANTHROPIC_API_KEY=sk-ant-...
```
You can also enter API keys through the `/settings` overlay — they’ll be stored locally in your Magnitude config.
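Before launching Magnitude, you can confirm the variable is actually visible to child processes, which is how any program started from your shell will see it. The key value below is a placeholder, not a real key:

```shell
# Set a placeholder key for the current shell session (bash/zsh syntax).
export ANTHROPIC_API_KEY=sk-ant-example-key

# Any program launched from this shell inherits the variable:
printenv ANTHROPIC_API_KEY
```

To persist the key across sessions, add the `export` line to your shell profile (`~/.bashrc` or `~/.zshrc`).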
### Subscriptions
#### OpenAI
If you have a ChatGPT Plus or Pro subscription:
- Run `/provider` or `/settings`
- Select OpenAI
- Choose “ChatGPT Plus/Pro”
- Pick your preferred method:
  - Browser — opens a browser window to authenticate
  - Device code — gives you a URL and a code to enter manually, useful for headless environments
#### GitHub Copilot
- Run `/provider` or `/settings`
- Select GitHub Copilot
- Follow the GitHub device flow to authenticate
### Cloud providers (Bedrock, Vertex)
Amazon Bedrock and Google Vertex AI use your existing cloud credentials. Once those are set, Magnitude auto-detects them from your environment.
#### Amazon Bedrock
Configure AWS credentials using `aws configure`, or set environment variables:

```shell
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
```
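If you work with several AWS accounts, a named profile (as written by `aws configure --profile`) works instead of raw keys. The profile name below is an example — use one that exists in your own `~/.aws/credentials`:

```shell
# Select a named profile instead of exporting raw keys
# ("magnitude" is an example profile name):
export AWS_PROFILE=magnitude
export AWS_DEFAULT_REGION=us-east-1

printenv AWS_PROFILE
```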
#### Google Vertex AI
Run `gcloud auth application-default login`, or set the credentials path:

```shell
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```
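A common failure mode with key files is exporting a path that doesn't exist or isn't readable, so it's worth checking the path before launching. The sketch below writes a stand-in file purely so the check has something to find — in real use, point the variable at the JSON key you downloaded from GCP:

```shell
# Stand-in key file for illustration only (real use: your downloaded GCP key):
mkdir -p /tmp/gcp-example
echo '{}' > /tmp/gcp-example/service-account.json

export GOOGLE_APPLICATION_CREDENTIALS=/tmp/gcp-example/service-account.json

# The variable should name a readable file:
test -r "$GOOGLE_APPLICATION_CREDENTIALS" && echo "credentials file readable"
```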
### Local models
Magnitude supports local models via Ollama, LM Studio, llama.cpp, vLLM, or any OpenAI-compatible local server.
The default endpoint is `http://localhost:1234/v1` (LM Studio’s default). To connect:
- Run `/provider` or `/settings`
- Select “Local”
- Configure the base URL for your local server if it differs from the default
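Before pointing Magnitude at a local server, you can probe its `/v1/models` endpoint to confirm something is actually listening (OpenAI-compatible servers expose this route; port 1234 is LM Studio's default, Ollama's is 11434):

```shell
# A healthy server answers with a JSON model list; if nothing is
# listening, curl fails and the fallback message prints instead.
curl -s --max-time 2 http://localhost:1234/v1/models \
  || echo "no OpenAI-compatible server listening on :1234"
```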
Local models vary significantly in quality. For best results, use a near-frontier model as your lead model.
## Model slots
Magnitude uses seven model slots — one for each agent role:
- Lead — the team lead you interact with directly. Use your strongest model here.
- Explorer — codebase investigation. Fast and capable.
- Planner — converts findings into implementation plans. Fast and capable.
- Builder — performs code changes. Fast and capable.
- Reviewer — validates quality and completion. Fast and capable.
- Debugger — diagnoses failures. Fast and capable.
- Browser — visual browser tasks. Must be a visually grounded model; see the “Visually grounded models” section below for the full list of supported browser agent models.
Configure each slot independently via `/settings`. The setup wizard sets sensible defaults based on your provider: a strong model for the lead, a fast capable model for subagents (explorer, planner, builder, reviewer, debugger), and a visually capable model for the browser agent.
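As a purely illustrative sketch of how the seven slots relate — every key and model identifier below is an assumption, not Magnitude's documented config schema — a per-slot assignment could look like:

```json
{
  "models": {
    "lead": "claude-opus-4-6",
    "explorer": "claude-sonnet-4-6",
    "planner": "claude-sonnet-4-6",
    "builder": "claude-sonnet-4-6",
    "reviewer": "claude-sonnet-4-6",
    "debugger": "claude-sonnet-4-6",
    "browser": "claude-haiku-4-5"
  }
}
```

In practice, prefer `/settings`, which writes the configuration for you rather than requiring hand-editing.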
## Default models by provider
| Provider | Lead | Subagents | Browser |
|---|---|---|---|
| OpenAI (OAuth) | GPT-5.4 | GPT-5.3 Codex | GPT-5.3 Codex |
| OpenAI (API key) | GPT-5.4 | GPT-5.3 Codex | GPT-5.3 Codex |
| Anthropic | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| GitHub Copilot | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Google (Gemini) | Gemini 3.1 Pro | Gemini 3 Flash | Gemini 3 Flash |
| OpenRouter | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Vercel | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Vertex AI (Gemini) | Gemini 3.1 Pro | Gemini 3 Flash | Gemini 3 Flash |
| Vertex AI (Anthropic) | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Amazon Bedrock | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Cerebras | ZAI GLM-4.7 | ZAI GLM-4.7 | ZAI GLM-4.7 |
| MiniMax | MiniMax M2.7 | MiniMax M2.7 | MiniMax M2.7 |
| Z.AI | GLM-5 | GLM-5 | GLM-5 |
## Switching models
Use `/settings` to connect providers and change models. You can also use `/provider` or `/model` directly.
## Visually grounded models
Most modern frontier models support visual grounding, so there are plenty of options to choose from. Below is the full list of models compatible with the browser agent, organized by provider.
| Provider | Model |
|---|---|
| OpenAI | GPT-5.3 Codex |
| | GPT-5.2 Codex |
| Anthropic | Claude Opus 4.5, 4.6 |
| | Claude Sonnet 4.5, 4.6 |
| | Claude Haiku 4.5 |
| GitHub Copilot | Claude Opus 4.5, 4.6 |
| | Claude Sonnet 4.5, 4.6 |
| | Claude Haiku 4.5 |
| | GPT-5.2 Codex |
| | Grok Code Fast 1 |
| Google (Gemini) | Gemini 3.1 Pro Preview |
| | Gemini 3 Pro Preview |
| | Gemini 3 Flash Preview |
| OpenRouter | Claude Opus 4.5, 4.6 |
| | Claude Sonnet 4.5, 4.6 |
| | Claude Haiku 4.5 |
| | Qwen 3.5 (397B, 122B, 35B, 27B) |
| | Qwen 3 Max Thinking, Coder Next |
| | Qwen 3.5 Plus, Flash |
| | Kimi K2.5, K2 Thinking |
| | DeepSeek V3.2, V3.2 Speciale |
| | MiniMax M2.5, M2.1 |
| | GLM-5, GLM-4.7, GLM-4.6V, GLM-4.7 Flash |
| | Grok 4, Grok 4.1 Fast |
| | GPT-OSS 120B, 20B |
| | Mistral Large 2512, Devstral 2512 |
| | Arcee Trinity Large Preview |
| Vercel | Claude Opus 4.5, 4.6 |
| | Claude Sonnet 4.5, 4.6 |
| | Claude Haiku 4.5 |
| | GPT-5.3 Codex, 5.2 Codex |
| | GPT-OSS 120B, 20B |
| | Qwen 3.5 Flash, Plus |
| | Qwen 3 Max Thinking, Coder Next |
| | Kimi K2.5, K2 Thinking |
| | DeepSeek V3.2 Exp, Thinking |
| | MiniMax M2.5, M2.1 |
| | GLM-5, GLM-4.7, GLM-4.6V |
| | Grok 4, Grok 4.1 Fast Non-Reasoning |
| | Mistral Large 3, Devstral 2 |
| | Arcee Trinity Large Preview |
| Vertex AI (Gemini) | Gemini 3.1 Pro Preview |
| | Gemini 3 Pro Preview |
| | Gemini 3 Flash Preview |
| Vertex AI (Anthropic) | Claude Opus 4.5, 4.6 |
| | Claude Sonnet 4.5, 4.6 |
| | Claude Haiku 4.5 |
| Amazon Bedrock | Claude Opus 4.5, 4.6 |
| | Claude Sonnet 4.5, 4.6 |
| | Claude Haiku 4.5 |
| Cerebras | GPT-OSS 120B |
| | Qwen 3 235B A22B Instruct |
| | ZAI GLM-4.7 |
| MiniMax | MiniMax M2.7 |
| | MiniMax M2.5 |
| | MiniMax M2.1 |
| Z.AI | GLM-5 |
| | GLM-4.7 |
| | GLM-4.6V |
| | GLM-4.7 Flash |