
# Ollama

Provider value: `ollama`

Run models locally with Ollama. No API key required.

## Setup

1. Install [Ollama](https://ollama.com)
2. Pull a model: `ollama pull llama3`
3. Run ra:

```bash
ra --provider ollama --model llama3 "Write a haiku"
```
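
If ra can't reach the model, it can help to confirm that the Ollama server is running and the model is present before running ra. A minimal check, assuming the default local host and the `llama3` model pulled above:

```bash
# Verify the server is reachable and see which models it has pulled
curl -s http://localhost:11434/api/tags

# Equivalent check via the Ollama CLI
ollama list
```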

## Environment variables

| Variable | Required | Description |
| --- | --- | --- |
| `RA_OLLAMA_HOST` | No | Ollama host (default: `http://localhost:11434`) |

## Remote Ollama

Point ra at an Ollama instance running on another machine:

```bash
export RA_OLLAMA_HOST=http://my-server:11434
ra --provider ollama --model llama3 "Hello"
```

Or via CLI flag:

```bash
ra --provider ollama --ollama-host http://my-server:11434 --model llama3 "Hello"
```
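
For either form to work, Ollama on the remote machine must listen on an external interface; by default it binds to localhost only. The usual fix is to set Ollama's own `OLLAMA_HOST` variable (distinct from `RA_OLLAMA_HOST`) when starting the server:

```bash
# On the remote machine: bind Ollama to all interfaces so other hosts can connect
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```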

