# Quickstart (Ollama)
This tutorial runs RexOS locally using Ollama’s OpenAI-compatible endpoint.
## Prerequisites
- Ollama is installed and running.
- You have at least one chat model available (for example, `qwen3:4b` or `llama3.2`).
## 1) Start Ollama
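If Ollama is not already running as a background service, start it and make sure at least one chat model is pulled. These are standard Ollama CLI commands; the model name is just an example:

```shell
# Start the Ollama server (skip if it already runs as a system service)
ollama serve &

# Pull a chat model if you don't have one yet
ollama pull qwen3:4b

# Verify the server is reachable and the model is available
ollama list
curl http://localhost:11434/api/tags
```

Ollama's OpenAI-compatible endpoint is served at `http://localhost:11434/v1`.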
## 2) Initialize RexOS
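A plausible invocation sketch, assuming the CLI binary is named `rexos` and exposes an `init` subcommand (not confirmed by this page; check `rexos --help` for the exact command):

```shell
# Hypothetical: create the default config and local memory database
rexos init
```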
This creates:
- ~/.rexos/config.toml (provider config + routing)
- ~/.rexos/rexos.db (SQLite memory)
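The generated config presumably points a provider at Ollama's OpenAI-compatible endpoint. A sketch of what `~/.rexos/config.toml` might contain; the table and key names here are assumptions, only the endpoint URL is an Ollama fact:

```toml
# Illustrative sketch -- actual keys may differ from the generated file
[provider.ollama]
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
api_key  = "ollama"                     # any non-empty string; Ollama ignores it

[routing]
default_model = "qwen3:4b"
```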
## 3) Run your first agent session
Pick a workspace directory (tools are sandboxed to this root):
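A hypothetical invocation; the flag names and prompt are assumptions, so verify against `rexos --help`:

```shell
# Hypothetical: run an agent turn sandboxed to a workspace directory
mkdir -p ~/rexos-workspace
rexos run --workspace ~/rexos-workspace "Summarize the files in this directory"
```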
RexOS prints the final assistant output and logs a `session_id` to stderr for later reuse.
## 4) Re-run with the same session id (optional)
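To continue a conversation with its memory intact, pass the `session_id` logged in the previous step. The flag name is an assumption; `<session_id>` is a placeholder for the value from your stderr log:

```shell
# Hypothetical: resume the earlier session in the same workspace
rexos run --workspace ~/rexos-workspace --session <session_id> "Continue where we left off"
```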
## Next steps
- Use the harness for long tasks: see “Long Task With Harness”.
- Switch providers/models: see “Providers & Routing”.