OpenAI Completions

Provider value: `openai-completions`

Uses the Chat Completions API (`POST /v1/chat/completions`). This is the right provider for OpenAI-compatible services like Together AI, Fireworks, Groq, vLLM, and llama.cpp.

For full details, see the OpenAI provider page.

Quick start

```bash
export OPENAI_API_KEY=sk-...
ra --provider openai-completions --model gpt-4.1 "Hello"
```

With OpenAI-compatible services

```bash
# Together AI
export OPENAI_API_KEY=your-together-key
ra --provider openai-completions \
  --openai-base-url https://api.together.xyz/v1 \
  --model meta-llama/Llama-3-70b-chat-hf "Hello"

# Groq
export OPENAI_API_KEY=your-groq-key
ra --provider openai-completions \
  --openai-base-url https://api.groq.com/openai/v1 \
  --model llama-3.3-70b-versatile "Hello"
```

Or in a config file:

```yaml
app:
  providers:
    openai-completions:
      baseURL: https://api.together.xyz/v1
      apiKey: ${OPENAI_API_KEY}

agent:
  provider: openai-completions
  model: meta-llama/Llama-3-70b-chat-hf
```

When to use this instead of `openai`

The default `openai` provider uses the newer Responses API, which most third-party services don't support. If you're connecting to anything other than OpenAI directly, use `openai-completions`.
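The reason these services are interchangeable is that they all accept the same Chat Completions wire format. A minimal sketch of that request shape (the `build_chat_request` helper is illustrative, not part of `ra`):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a minimal Chat Completions payload -- the shared wire format
    that OpenAI, Together AI, Groq, vLLM, and llama.cpp all accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The provider POSTs this JSON to {baseURL}/chat/completions, so swapping
# services is just a matter of changing the base URL and model name.
payload = build_chat_request("meta-llama/Llama-3-70b-chat-hf", "Hello")
print(json.dumps(payload))
```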

| Scenario | Provider |
| --- | --- |
| OpenAI directly | `openai` |
| Together AI, Groq, Fireworks | `openai-completions` |
| vLLM, llama.cpp, Ollama-compatible | `openai-completions` |
| OpenAI proxy/gateway (Responses API) | `openai` |

Environment variables

| Variable | Required | Description |
| --- | --- | --- |
| `OPENAI_API_KEY` | Yes | API key for the service |

Extended thinking

Supported modes: `off`, `low`, `medium`, `high`, and `adaptive` (if the model supports it).

```bash
ra --provider openai-completions --thinking high "Solve this step by step"
```
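One plausible way a client could translate these levels onto the Chat Completions API is via its `reasoning_effort` parameter, which reasoning-capable models accept with values `low`, `medium`, and `high`. The mapping below is a hypothetical sketch, not `ra`'s actual implementation:

```python
def thinking_to_request_fields(mode: str) -> dict:
    """Hypothetical mapping from a --thinking level to Chat Completions
    request fields. Assumes `off` and `adaptive` simply omit the
    `reasoning_effort` field, leaving the model's default behavior."""
    if mode in ("off", "adaptive"):
        return {}
    if mode in ("low", "medium", "high"):
        return {"reasoning_effort": mode}
    raise ValueError(f"unknown thinking mode: {mode}")
```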

Released under the MIT License.