@chinmaymk/aikit / OpenAIResponsesOptions
Interface: OpenAIResponsesOptions
Defined in: types.ts:364
OpenAI Responses API configuration and generation options. These can be provided at construction time or generation time. Generation time options will override construction time options.
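A minimal sketch of that precedence rule. The option names come from this page; the split into two objects and the shallow merge are illustrative, not the library's actual call signature:

```typescript
// Options set once, at construction time.
const constructionOptions = {
  apiKey: "YOUR_API_KEY", // assumption: in real code, read this from the environment
  model: "gpt-4o",
  temperature: 0.2,
  maxOutputTokens: 1024,
};

// Options passed at generation time override construction-time values.
const generationOptions = {
  temperature: 0.8,
};

// The documented precedence behaves like a shallow merge:
const effective = { ...constructionOptions, ...generationOptions };
// effective.temperature comes from generation time; everything else survives
```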
Extends: ProviderOptions
Properties
model?
optional model: string
Defined in: types.ts:305
The specific model you want to use. e.g., 'gpt-4o' or 'claude-3-5-sonnet-20240620'.
Inherited from: ProviderOptions.model
maxOutputTokens?
optional maxOutputTokens: number
Defined in: types.ts:307
The maximum number of output tokens to generate. Don't want it to ramble on forever, do you?
Inherited from: ProviderOptions.maxOutputTokens
temperature?
optional temperature: number
Defined in: types.ts:313
The sampling temperature. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. A bit like adjusting the chaos knob.
Inherited from: ProviderOptions.temperature
topP?
optional topP: number
Defined in: types.ts:318
Top-p sampling. It's a way to control the randomness of the output by only considering the most likely tokens. It's like telling the AI to only pick from the top of the deck.
Inherited from: ProviderOptions.topP
topK?
optional topK: number
Defined in: types.ts:323
Top-k sampling. Similar to top-p, but it considers a fixed number of top tokens. Not all providers support this, because life isn't fair.
Inherited from: ProviderOptions.topK
stopSequences?
optional stopSequences: string[]
Defined in: types.ts:325
A list of sequences that will stop the generation. A safe word, if you will.
Inherited from: ProviderOptions.stopSequences
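To show how the sampling knobs above combine, here is an illustrative options fragment (the values are example choices, not recommendations; topK is omitted since not every provider supports it):

```typescript
// Extraction-style task: low temperature, tight nucleus, stop at a blank line.
const extractionOptions = {
  temperature: 0.2,
  topP: 0.9,
  stopSequences: ["\n\n"],
};

// Brainstorming task: turn the chaos knob up.
const brainstormOptions = {
  temperature: 0.9,
  topP: 1.0,
};
```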
tools?
optional tools: Tool[]
Defined in: types.ts:327
The list of tools you're making available to the model.
Inherited from: ProviderOptions.tools
toolChoice?
optional toolChoice: { name: string } | "auto" | "required" | "none"
Defined in: types.ts:335
How the model should choose which tool to use.
'auto': The model decides.
'required': The model must use a tool.
'none': The model can't use any tools.
{ name: 'my_tool' }: Force the model to use a specific tool.
Inherited from: ProviderOptions.toolChoice
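The four toolChoice shapes, sketched with a local type alias (the tool name is hypothetical, and ToolChoice here is a stand-in rather than an import from the library):

```typescript
type ToolChoice = { name: string } | "auto" | "required" | "none";

const letModelDecide: ToolChoice = "auto";      // model decides whether to call a tool
const mustUseSomeTool: ToolChoice = "required"; // model must call some tool
const noTools: ToolChoice = "none";             // tools are disabled for this turn
const forceOne = { name: "get_weather" } satisfies ToolChoice; // force one specific (hypothetical) tool
```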
apiKey?
optional apiKey: string
Defined in: types.ts:344
API key for authentication with the provider.
Inherited from: ProviderOptions.apiKey
baseURL?
optional baseURL: string
Defined in: types.ts:346
Custom base URL for the API endpoint.
Inherited from: ProviderOptions.baseURL
timeout?
optional timeout: number
Defined in: types.ts:348
Request timeout in milliseconds.
Inherited from: ProviderOptions.timeout
maxRetries?
optional maxRetries: number
Defined in: types.ts:350
Maximum number of retry attempts for failed requests.
Inherited from: ProviderOptions.maxRetries
mutateHeaders()?
optional mutateHeaders: (headers) => void
Defined in: types.ts:356
A function that allows you to modify the headers before a request is sent. This is useful for adding custom headers or modifying existing ones.
Parameters
headers: Record<string, string>
The original headers object to mutate directly.
Returns: void
Inherited from: ProviderOptions.mutateHeaders
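A sketch of a mutateHeaders implementation (the custom header name is hypothetical; only the signature comes from this page):

```typescript
// Mutate the outgoing headers in place; the return value is ignored (void).
const mutateHeaders = (headers: Record<string, string>): void => {
  headers["X-Request-Source"] = "docs-example"; // add a custom header (hypothetical name)
  if (headers["Authorization"]) {
    headers["Authorization"] = headers["Authorization"].trim(); // tidy an existing header
  }
};

// Usage: the provider would call this right before sending the request.
const headers: Record<string, string> = { Authorization: " Bearer abc123 " };
mutateHeaders(headers);
// headers now carries the custom header and a trimmed Authorization value
```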
organization?
optional organization: string
Defined in: types.ts:366
Your OpenAI organization ID. For when you're part of a fancy club.
project?
optional project: string
Defined in: types.ts:368
Your OpenAI project ID. For even fancier clubs.
background?
optional background: boolean
Defined in: types.ts:373
Whether to run the model response in the background. When true, the request is processed asynchronously and can be polled for status.
include?
optional include: string[]
Defined in: types.ts:378
Specify additional output data to include in the model response. For example: ["reasoning.encrypted_content"] to include encrypted reasoning traces.
instructions?
optional instructions: string
Defined in: types.ts:383
Inserts a system (or developer) message as the first item in the model's context. This provides high-level instructions that take precedence over user messages.
metadata?
optional metadata: Record<string, string>
Defined in: types.ts:387
Set of key-value pairs that can be attached to an object for metadata purposes.
parallelToolCalls?
optional parallelToolCalls: boolean
Defined in: types.ts:391
Whether to allow the model to run tool calls in parallel.
previousResponseId?
optional previousResponseId: string
Defined in: types.ts:396
The unique ID of a previous response, used for multi-turn conversations. Chaining responses together by ID enables conversation state management without resending the full history.
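A chaining sketch using plain option objects (the response ID value is illustrative; in real use it would come back from the first API call):

```typescript
// Turn 1: no previous response, and store the result so it can be chained.
const turn1Options = { model: "gpt-4o", store: true };

// Suppose the first response came back with this ID (illustrative value):
const firstResponseId = "resp_abc123";

// Turn 2: chain onto the stored response instead of resending the history.
const turn2Options = {
  model: "gpt-4o",
  previousResponseId: firstResponseId,
};
```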
reasoning?
optional reasoning: object
Defined in: types.ts:401
Configuration options for reasoning models (o-series models only). Controls the reasoning effort level for enhanced problem-solving capabilities.
effort?
optional effort: "low" | "medium" | "high"
serviceTier?
optional serviceTier: "auto" | "default" | "flex"
Defined in: types.ts:408
Specifies the latency tier to use for processing the request. 'auto' lets OpenAI choose, 'default' uses standard tier, 'flex' uses flexible tier.
store?
optional store: boolean
Defined in: types.ts:413
Whether to store the generated model response for later retrieval via API. Defaults to true. Set to false for stateless requests.
text?
optional text: object
Defined in: types.ts:418
Configuration options for a text response from the model. Controls the format and structure of text outputs.
format?
optional format: object
format.type
type: "text" | "json_object" | "json_schema"
format.json_schema?
optional json_schema: object
format.json_schema.name?
optional name: string
format.json_schema.description?
optional description: string
format.json_schema.schema?
optional schema: Record<string, unknown>
format.json_schema.strict?
optional strict: boolean
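A text configuration sketch requesting structured JSON output (the schema name and fields are invented for illustration; the property shapes follow the type above):

```typescript
const textOptions = {
  format: {
    type: "json_schema" as const,
    json_schema: {
      name: "weather_report", // hypothetical schema name
      description: "A simple weather summary",
      strict: true, // reject outputs that don't match the schema
      schema: {
        type: "object",
        properties: {
          city: { type: "string" },
          tempC: { type: "number" },
        },
        required: ["city", "tempC"],
      } as Record<string, unknown>,
    },
  },
};
```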
truncation?
optional truncation: "auto" | "disabled"
Defined in: types.ts:433
The truncation strategy to use for the model response. 'auto' lets the model decide, 'disabled' prevents truncation.
user?
optional user: string
Defined in: types.ts:438
A stable identifier for your end-users. Helps OpenAI monitor and detect abuse.
includeUsage?
optional includeUsage: boolean
Defined in: types.ts:443
Whether to include usage information (token counts and timing) in the response. When true, usage data will be included in the final stream chunk.