Providers & Routing¶
LoopForge loads provider config from `~/.loopforge/config.toml` and routes each task kind (planning, coding, summary) to a `(provider, model)` pair.
For field-by-field config details, see Config Reference. For the runtime path that consumes routing decisions, see Runtime Architecture.
Mental model¶
Think about provider setup in three layers:
- `providers.*` defines how to talk to one provider: API kind, base URL, credentials, and default model.
- `router.*` decides which `(provider, model)` pair each task kind should use.
- The runtime reads that routing choice before each model turn, so you can keep planning local while moving coding to a stronger cloud model.
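As a concrete sketch of those layers, a minimal config might look like this (the `base_url` and `provider` field names are illustrative assumptions, as is the model id; see the Config Reference for the authoritative schema):

```toml
# Layer 1: how to talk to one provider
[providers.ollama]
kind = "openai_compatible"
base_url = "http://localhost:11434/v1"  # assumed field name; Ollama's OpenAI-compatible endpoint
default_model = "qwen2.5-coder"         # illustrative model

# Layer 2: which (provider, model) pair handles a task kind
[router.planning]
provider = "ollama"   # assumed field name
model = "default"     # follow providers.ollama.default_model
```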
In practice, the safest workflow is to validate the config and run diagnostics after every routing change.
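That validation step might look like the sketch below; the subcommand names are hypothetical, so substitute whatever validation and diagnostics commands your LoopForge build actually ships:

```shell
# Hypothetical subcommand names — check `loopforge --help` for the real ones
loopforge config validate   # does config.toml parse, and do all routes resolve?
loopforge doctor            # can each routed provider be reached?
```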
Recommended rollout path¶
A good default progression is:
- Local-first — point all routes at `ollama` until your workflow is stable.
- Hybrid — keep `planning` and `summary` local, move `coding` to a stronger cloud provider.
- Cloud-heavy — only after you trust cost, latency, and security posture, move more task kinds to hosted providers.
That path keeps iteration cheap while making it easy to upgrade coding quality later.
Built-in presets (out of the box)¶
After `loopforge init`, your `~/.loopforge/config.toml` already includes common providers and sensible defaults:
- Local: `ollama`
- OpenAI-compatible: `deepseek`, `kimi` / `kimi_cn`, `qwen` / `qwen_cn` / `qwen_sg`, `glm`, `minimax`, `nvidia`
- Provider-native: `qwen_native*`, `glm_native`, `minimax_native`
- Gateways: `minimax_anthropic`
- First-party APIs: `anthropic`, `gemini`
- AWS: `bedrock`
You usually only need to:

- set the corresponding API key env var (if any)
- point one or more `[router.*]` entries at the provider you want
How to choose a provider kind¶
- `openai_compatible` — best when the provider exposes an OpenAI-style Chat Completions API; this covers Ollama and many hosted gateways.
- `dashscope_native` — use when you want Alibaba DashScope native behavior.
- `zhipu_native` — use GLM native APIs when you want Zhipu-specific auth and semantics.
- `minimax_native` — use MiniMax native APIs directly.
- `anthropic` — use Claude directly or through compatible gateways supported by the driver.
- `gemini` — use Google Gemini directly.
- `bedrock` — use AWS Bedrock via the Converse API (native AWS SDK; uses standard AWS credential resolution).
If you are unsure, prefer the preset already generated by `loopforge init` instead of inventing a custom provider entry from scratch.
Safe switch checklist¶
When switching providers or models, do it incrementally:
- set or update the provider API key env var
- update `providers.<name>` if the endpoint or default model changes
- change one `router.*` entry at a time
- run validation and diagnostics
- run one small agent task before moving all routes
A practical smoke path looks like this:
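The sketch below illustrates such a path; the subcommand names are hypothetical, so adjust them to your build:

```shell
loopforge config validate             # 1. config still parses and routes resolve
loopforge doctor                      # 2. the newly routed provider answers
loopforge run "rename one variable"   # 3. one small, low-stakes agent task
```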
If you are enabling a new hosted provider for the first time, also consider running the provider-specific smoke tests documented below.
Provider kinds¶
- `openai_compatible`: OpenAI-compatible Chat Completions APIs (Ollama, DeepSeek, Kimi, many gateways)
- `dashscope_native`: Alibaba DashScope native API (Qwen native)
- `zhipu_native`: Zhipu GLM native API (JWT auth handled)
- `minimax_native`: MiniMax native `text/chatcompletion_v2`
- `anthropic`: Claude API (and compatible gateways)
- `gemini`: Google Gemini API
- `bedrock`: AWS Bedrock (Converse API)
Example: Ollama (local)¶
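A sketch of an Ollama provider entry, assuming the preset shape generated by `loopforge init` (`base_url` is an assumed field name; the model id is illustrative):

```toml
[providers.ollama]
kind = "openai_compatible"
base_url = "http://localhost:11434/v1"  # Ollama's local OpenAI-compatible endpoint
default_model = "qwen2.5-coder"         # any model you have pulled locally
# no api_key_env: local Ollama needs no API key
```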
Example: GLM (Zhipu native)¶
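A sketch of a GLM native entry, following the same assumed shape as the other examples (the model id is illustrative):

```toml
[providers.glm_native]
kind = "zhipu_native"
api_key_env = "ZHIPUAI_API_KEY"
default_model = "glm-4"   # illustrative model id
```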
**Zhipu auth format.** If `ZHIPUAI_API_KEY` looks like `id.secret`, LoopForge will sign a short-lived JWT automatically.
Example: MiniMax (native)¶
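A sketch of a MiniMax native entry, under the same assumed config shape (the model id is illustrative):

```toml
[providers.minimax_native]
kind = "minimax_native"
api_key_env = "MINIMAX_API_KEY"
default_model = "MiniMax-Text-01"   # illustrative model id
```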
Example: NVIDIA NIM (OpenAI-compatible)¶
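A sketch of an NVIDIA NIM entry via the OpenAI-compatible kind (`base_url` is an assumed field name; the model id is illustrative):

```toml
[providers.nvidia]
kind = "openai_compatible"
base_url = "https://integrate.api.nvidia.com/v1"  # NIM's OpenAI-compatible endpoint
api_key_env = "NVIDIA_API_KEY"
default_model = "meta/llama-3.1-70b-instruct"     # illustrative model id
```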
Example: AWS Bedrock (Converse API)¶
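A sketch of a Bedrock entry; `profile` is mentioned in the credentials note below, while `region` is an assumed field name and the model id is illustrative:

```toml
[providers.bedrock]
kind = "bedrock"
region = "us-east-1"       # assumed field name
# profile = "my-profile"   # optional; passed through to the AWS SDK
default_model = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # illustrative model id
```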
**Credentials.** Bedrock uses the AWS SDK credential chain (env vars, shared config, profiles, instance role, etc.). If you set `profile` in config, LoopForge will pass it to the SDK.
Routing patterns¶
All-local starter¶
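A sketch of the all-local pattern (the `provider` field name is an assumption; `model = "default"` follows the provider's `default_model`):

```toml
[router.planning]
provider = "ollama"
model = "default"

[router.coding]
provider = "ollama"
model = "default"

[router.summary]
provider = "ollama"
model = "default"
```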
Local planning, stronger cloud coding¶
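A sketch of the hybrid pattern, assuming the `anthropic` preset for coding (field names as in the other examples):

```toml
[router.planning]
provider = "ollama"
model = "default"

[router.summary]
provider = "ollama"
model = "default"

[router.coding]
provider = "anthropic"
model = "default"   # follows providers.anthropic.default_model
```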
Use `model = "default"` when you want routing to follow `providers.<name>.default_model`.
API keys (env vars)¶
LoopForge reads provider keys from the env var referenced by `api_key_env`.
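For example, for the hosted presets mentioned in this guide (replace the placeholders with your real keys):

```shell
# Each providers.<name> entry names its key variable via api_key_env
export ZHIPUAI_API_KEY="..."   # glm / glm_native
export MINIMAX_API_KEY="..."   # minimax / minimax_native
export NVIDIA_API_KEY="..."    # nvidia
```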
Optional smoke tests (real providers)¶
These tests hit real provider endpoints and are `#[ignore]` by default:
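In a Rust workspace, `#[ignore]`-gated tests have to be requested explicitly; the test filter below is a hypothetical name, but `-- --ignored` is standard cargo:

```shell
# `provider_smoke` is an illustrative filter — use your crate's actual test names
cargo test provider_smoke -- --ignored
```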
Provider health report (nightly-friendly)¶
To generate a provider quality report (JSON + Markdown):
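The exact invocation depends on how the report task is wired into the workspace; a hypothetical sketch:

```shell
# Hypothetical binary name — substitute the actual task or binary
cargo run --bin provider-health
```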
Artifacts:
- `.tmp/provider-health/provider-health.json`
- `.tmp/provider-health/provider-health.md`
Tips:
- Set `ZHIPUAI_API_KEY` / `MINIMAX_API_KEY` / `NVIDIA_API_KEY` to include those provider checks.
- For CI environments without local Ollama, set:
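The variable name below is a hypothetical placeholder — check your build's docs for the real one:

```shell
# Hypothetical variable name — skip local-Ollama checks when no daemon is available
export LOOPFORGE_SKIP_OLLAMA=1
```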