# OpenAI

`ra` ships two OpenAI providers: one for the Responses API (default) and one for the Chat Completions API.

| Provider value | API | When to use |
| --- | --- | --- |
| `openai` | Responses API | Default. Use with OpenAI directly. Supports native file inputs and structured streaming events. |
| `openai-completions` | Chat Completions API | Use with OpenAI-compatible services (Together AI, Fireworks, Groq, etc.) or if you specifically need the Chat Completions endpoint. |

## Setup

```bash
export OPENAI_API_KEY=sk-...
ra --provider openai "Hello"
```
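Before pointing `ra` at the key, it can help to fail fast when the variable is missing. A minimal shell helper (hypothetical, not part of `ra`):

```shell
# require_openai_key: return non-zero with a message if OPENAI_API_KEY
# is unset or empty, so scripts fail before any API call is attempted.
require_openai_key() {
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "error: OPENAI_API_KEY is not set" >&2
    return 1
  fi
}

# With the key set, a direct curl against OpenAI's models endpoint
# confirms it is valid (an HTTP 200 means the key works):
#   curl -s -o /dev/null -w "%{http_code}\n" \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     https://api.openai.com/v1/models
```

The curl line hits the real `GET /v1/models` endpoint; it is left commented out so the helper itself has no network dependency.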

### Environment variables

| Variable | Required | Description |
| --- | --- | --- |
| `OPENAI_API_KEY` | Yes | OpenAI API key (used by both providers) |

## Models

| Model | Notes |
| --- | --- |
| `gpt-4.1` | Flagship model |
| `gpt-4.1-mini` | Faster, cheaper |
| `o3` | Reasoning model |
| `o4-mini` | Reasoning, fast |

```bash
ra --provider openai --model gpt-4.1 "Explain this error"
ra --provider openai --model o3 "Solve this step by step"
```

## Extended thinking

Supported modes: `off`, `low`, `medium`, `high`, `adaptive`. Works with both providers.

```bash
ra --provider openai --thinking high "Solve this step by step"
```

## OpenAI-compatible APIs (Chat Completions)

Most OpenAI-compatible services (Together AI, Fireworks, Groq, etc.) implement the Chat Completions endpoint, not the Responses API. Use `openai-completions` for these:

```bash
# Together AI
export OPENAI_API_KEY=your-together-key
ra --provider openai-completions \
  --openai-base-url https://api.together.xyz/v1 \
  --model meta-llama/Llama-3-70b-chat-hf "Hello"

# Groq
export OPENAI_API_KEY=your-groq-key
ra --provider openai-completions \
  --openai-base-url https://api.groq.com/openai/v1 \
  --model llama-3.3-70b-versatile "Hello"

# Fireworks
export OPENAI_API_KEY=your-fireworks-key
ra --provider openai-completions \
  --openai-base-url https://api.fireworks.ai/inference/v1 \
  --model accounts/fireworks/models/llama-v3p1-70b-instruct "Hello"
```
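The three base URLs above follow no common pattern, so a small lookup helper can keep them in one place (a sketch; `base_url_for` is not part of `ra`):

```shell
# base_url_for: print the OpenAI-compatible base URL for a known service.
# Covers only the services shown above; anything else is an error.
base_url_for() {
  case "$1" in
    together)  echo "https://api.together.xyz/v1" ;;
    groq)      echo "https://api.groq.com/openai/v1" ;;
    fireworks) echo "https://api.fireworks.ai/inference/v1" ;;
    *)         echo "unknown service: $1" >&2; return 1 ;;
  esac
}
```

Used together with the flag: `ra --provider openai-completions --openai-base-url "$(base_url_for groq)" --model llama-3.3-70b-versatile "Hello"`.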

Or in a config file:

```yaml
app:
  providers:
    openai-completions:
      baseURL: https://api.together.xyz/v1
      apiKey: ${OPENAI_API_KEY}

agent:
  provider: openai-completions
  model: meta-llama/Llama-3-70b-chat-hf
```
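The same config shape works for a local OpenAI-compatible server. A sketch for vLLM, whose OpenAI-compatible API listens on port 8000 by default (the model name is an example; local servers typically accept any non-empty API key):

```yaml
app:
  providers:
    openai-completions:
      baseURL: http://localhost:8000/v1
      apiKey: dummy

agent:
  provider: openai-completions
  model: meta-llama/Llama-3.1-8B-Instruct
```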

> **TIP**
>
> If you use `--provider openai` with a third-party base URL and get errors, switch to `openai-completions`; the service likely doesn't support the Responses API.

## Custom base URL (Responses API)

If you're using a proxy or gateway that supports the OpenAI Responses API:

```bash
ra --provider openai --openai-base-url https://my-proxy.example.com/v1 "Hello"
```

## Choosing between the two providers

Use `openai` (Responses API) when:

- Calling OpenAI directly
- Using a proxy that forwards to OpenAI's Responses endpoint
- You need native file attachment support

Use `openai-completions` (Chat Completions API) when:

- Using a third-party OpenAI-compatible service (Together, Groq, Fireworks, etc.)
- Calling an OpenAI-compatible local server (vLLM, llama.cpp server, etc.)
- The endpoint only supports `/v1/chat/completions`
