Magnitude supports a wide range of LLM providers. You can configure them through the /settings command, the /provider command, or by setting environment variables.

Supported providers

| Provider | Auth method | Environment variable |
| --- | --- | --- |
| OpenAI | Subscription (ChatGPT Plus/Pro), API key | OPENAI_API_KEY |
| Anthropic | API key | ANTHROPIC_API_KEY |
| GitHub Copilot | Subscription (GitHub) | |
| Google (Gemini) | API key | GOOGLE_API_KEY or GEMINI_API_KEY |
| OpenRouter | API key | OPENROUTER_API_KEY |
| Vercel AI Gateway | API key | VERCEL_API_KEY |
| Google Vertex AI | GCP service account | GOOGLE_APPLICATION_CREDENTIALS |
| Vertex AI (Anthropic) | GCP service account | GOOGLE_APPLICATION_CREDENTIALS |
| Amazon Bedrock | AWS credentials | AWS_ACCESS_KEY_ID, AWS_PROFILE, AWS_DEFAULT_REGION |
| MiniMax | API key | MINIMAX_API_KEY |
| Z.AI (Zhipu AI) | API key | ZHIPU_API_KEY |
| Cerebras | API key | CEREBRAS_API_KEY |
| Local | None required | |

Setting up a provider

API key

The simplest way to connect is to set an environment variable:
export ANTHROPIC_API_KEY=sk-ant-...
You can also enter API keys through the /settings overlay — they’ll be stored locally in your Magnitude config.

Subscriptions

OpenAI

If you have a ChatGPT Plus or Pro subscription:
  1. Run /provider or /settings
  2. Select OpenAI
  3. Choose “ChatGPT Plus/Pro”
  4. Pick your preferred method:
    • Browser — opens a browser window to authenticate
    • Device code — gives you a URL and code to enter manually, useful for headless environments

GitHub Copilot

  1. Run /provider or /settings
  2. Select GitHub Copilot
  3. Follow the GitHub device flow to authenticate

Cloud providers (Bedrock, Vertex)

Amazon Bedrock and Google Vertex AI use your existing cloud credentials. Magnitude auto-detects them from your environment after they are set.

Amazon Bedrock

Configure AWS credentials using aws configure, or set environment variables:
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
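Before launching Magnitude, you can sanity-check that the variables are actually exported. A minimal sketch with example values (if you have the AWS CLI installed, `aws sts get-caller-identity` is the usual end-to-end credentials test):

```shell
# Example values only; substitute your real credentials.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecret
export AWS_DEFAULT_REGION=us-east-1

# Confirm each variable Bedrock needs is set and non-empty
for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
  eval "val=\$$var"
  if [ -n "$val" ]; then echo "$var: set"; else echo "$var: MISSING"; fi
done
```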

Google Vertex AI

Run gcloud auth application-default login, or set the credentials path:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json

Local models

Magnitude supports local models via Ollama, LM Studio, llama.cpp, vLLM, or any OpenAI-compatible local server. The default endpoint is http://localhost:1234/v1 (LM Studio’s default). To connect:
  1. Run /provider or /settings
  2. Select “Local”
  3. Configure the base URL for your local server if different from default
Local models vary significantly in quality. For best results, use a near-frontier model as your lead model.

Model slots

Magnitude uses seven model slots — one for each agent role:
  • Lead — the team lead you interact with directly. Use your strongest model here.
  • Explorer — codebase investigation. Fast and capable.
  • Planner — converts findings into implementation plans. Fast and capable.
  • Builder — performs code changes. Fast and capable.
  • Reviewer — validates quality and completion. Fast and capable.
  • Debugger — diagnoses failures. Fast and capable.
  • Browser — visual browser tasks. Must be a visually grounded model. See the Visually grounded models section below for the full list of supported browser agent models.
Configure each slot independently via /settings. The setup wizard sets sensible defaults based on your provider: a strong model for the lead, a fast capable model for subagents (explorer, planner, builder, reviewer, debugger), and a visually capable model for the browser agent.

Default models by provider

| Provider | Lead | Subagents | Browser |
| --- | --- | --- | --- |
| OpenAI (OAuth) | GPT-5.4 | GPT-5.3 Codex | GPT-5.3 Codex |
| OpenAI (API key) | GPT-5.4 | GPT-5.3 Codex | GPT-5.3 Codex |
| Anthropic | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| GitHub Copilot | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Google (Gemini) | Gemini 3.1 Pro | Gemini 3 Flash | Gemini 3 Flash |
| OpenRouter | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Vercel | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Vertex AI (Gemini) | Gemini 3.1 Pro | Gemini 3 Flash | Gemini 3 Flash |
| Vertex AI (Anthropic) | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Amazon Bedrock | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
| Cerebras | ZAI GLM-4.7 | ZAI GLM-4.7 | ZAI GLM-4.7 |
| MiniMax | MiniMax M2.7 | MiniMax M2.7 | MiniMax M2.7 |
| Z.AI | GLM-5 | GLM-5 | GLM-5 |

Switching models

Use /settings to connect providers and change models. You can also use /provider or /model directly.

Visually grounded models

Most modern frontier models support visual grounding, so there are plenty of options to choose from. Below is the full list of models compatible with the browser agent, organized by provider.
| Provider | Models |
| --- | --- |
| OpenAI | GPT-5.3 Codex; GPT-5.2 Codex |
| Anthropic | Claude Opus 4.5, 4.6; Claude Sonnet 4.5, 4.6; Claude Haiku 4.5 |
| GitHub Copilot | Claude Opus 4.5, 4.6; Claude Sonnet 4.5, 4.6; Claude Haiku 4.5; GPT-5.2 Codex; Grok Code Fast 1 |
| Google (Gemini) | Gemini 3.1 Pro Preview; Gemini 3 Pro Preview; Gemini 3 Flash Preview |
| OpenRouter | Claude Opus 4.5, 4.6; Claude Sonnet 4.5, 4.6; Claude Haiku 4.5; Qwen 3.5 (397B, 122B, 35B, 27B); Qwen 3 Max Thinking, Coder Next; Qwen 3.5 Plus, Flash; Kimi K2.5, K2 Thinking; DeepSeek V3.2, V3.2 Speciale; MiniMax M2.5, M2.1; GLM-5, GLM-4.7, GLM-4.6V, GLM-4.7 Flash; Grok 4, Grok 4.1 Fast; GPT-OSS 120B, 20B; Mistral Large 2512, Devstral 2512; Arcee Trinity Large Preview |
| Vercel | Claude Opus 4.5, 4.6; Claude Sonnet 4.5, 4.6; Claude Haiku 4.5; GPT-5.3 Codex, 5.2 Codex; GPT-OSS 120B, 20B; Qwen 3.5 Flash, Plus; Qwen 3 Max Thinking, Coder Next; Kimi K2.5, K2 Thinking; DeepSeek V3.2 Exp, Thinking; MiniMax M2.5, M2.1; GLM-5, GLM-4.7, GLM-4.6V; Grok 4, Grok 4.1 Fast Non-Reasoning; Mistral Large 3, Devstral 2; Arcee Trinity Large Preview |
| Vertex AI (Gemini) | Gemini 3.1 Pro Preview; Gemini 3 Pro Preview; Gemini 3 Flash Preview |
| Vertex AI (Anthropic) | Claude Opus 4.5, 4.6; Claude Sonnet 4.5, 4.6; Claude Haiku 4.5 |
| Amazon Bedrock | Claude Opus 4.5, 4.6; Claude Sonnet 4.5, 4.6; Claude Haiku 4.5 |
| Cerebras | GPT-OSS 120B; Qwen 3 235B A22B Instruct; ZAI GLM-4.7 |
| MiniMax | MiniMax M2.7; MiniMax M2.5; MiniMax M2.1 |
| Z.AI | GLM-5; GLM-4.7; GLM-4.6V; GLM-4.7 Flash |