@chinmaymk/aikit / OpenAIResponsesOptions
Interface: OpenAIResponsesOptions
Defined in: types.ts:364
OpenAI Responses API configuration and generation options. These can be provided at construction time or generation time. Generation time options will override construction time options.
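The override behavior works like a shallow merge where later options win. A minimal sketch (the option names come from this interface; the spread merge below only illustrates the documented precedence, not the library's internals):

```typescript
// Construction-time defaults for the provider.
const constructionOptions = {
  model: 'gpt-4o',
  temperature: 0.2,
  maxOutputTokens: 1024,
};

// Options passed at generation time take precedence.
const generationOptions = {
  temperature: 0.8,
};

// Equivalent to a shallow merge where later options win.
const effective = { ...constructionOptions, ...generationOptions };
// effective.model is 'gpt-4o' (kept), effective.temperature is 0.8 (overridden)
```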
Extends
ProviderOptions
Properties
model?
optional model: string
Defined in: types.ts:305
The specific model you want to use. e.g., 'gpt-4o' or 'claude-3-5-sonnet-20240620'.
Inherited from
ProviderOptions.model
maxOutputTokens?
optional maxOutputTokens: number
Defined in: types.ts:307
The maximum number of output tokens to generate. Don't want it to ramble on forever, do you?
Inherited from
ProviderOptions.maxOutputTokens
temperature?
optional temperature: number
Defined in: types.ts:313
The sampling temperature. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. A bit like adjusting the chaos knob.
Inherited from
ProviderOptions.temperature
topP?
optional topP: number
Defined in: types.ts:318
Top-p sampling. It's a way to control the randomness of the output by only considering the most likely tokens. It's like telling the AI to only pick from the top of the deck.
Inherited from
ProviderOptions.topP
topK?
optional topK: number
Defined in: types.ts:323
Top-k sampling. Similar to top-p, but it considers a fixed number of top tokens. Not all providers support this, because life isn't fair.
Inherited from
ProviderOptions.topK
stopSequences?
optional stopSequences: string[]
Defined in: types.ts:325
A list of sequences that will stop the generation. A safe word, if you will.
Inherited from
ProviderOptions.stopSequences
tools?
optional tools: Tool[]
Defined in: types.ts:327
The list of tools you're making available to the model.
Inherited from
ProviderOptions.tools
toolChoice?
optional toolChoice: { name: string } | "auto" | "required" | "none"
Defined in: types.ts:335
How the model should choose which tool to use. 'auto': The model decides. 'required': The model must use a tool. 'none': The model can't use any tools. { name: 'my_tool' }: Force the model to use a specific tool.
Inherited from
ProviderOptions.toolChoice
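The four accepted shapes, side by side. A sketch only: the `ToolChoice` alias mirrors the union above, and `'my_tool'` is a placeholder tool name.

```typescript
// Illustrative alias mirroring the documented union.
type ToolChoice = { name: string } | 'auto' | 'required' | 'none';

const decideFreely: ToolChoice = 'auto';          // the model decides
const mustCallSomething: ToolChoice = 'required'; // the model must use some tool
const textOnly: ToolChoice = 'none';              // no tool use allowed
const forced: ToolChoice = { name: 'my_tool' };   // force one specific tool (placeholder name)
```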
apiKey?
optional apiKey: string
Defined in: types.ts:344
API key for authentication with the provider.
Inherited from
ProviderOptions.apiKey
baseURL?
optional baseURL: string
Defined in: types.ts:346
Custom base URL for the API endpoint.
Inherited from
ProviderOptions.baseURL
timeout?
optional timeout: number
Defined in: types.ts:348
Request timeout in milliseconds.
Inherited from
ProviderOptions.timeout
maxRetries?
optional maxRetries: number
Defined in: types.ts:350
Maximum number of retry attempts for failed requests.
Inherited from
ProviderOptions.maxRetries
mutateHeaders()?
optional mutateHeaders: (headers) => void
Defined in: types.ts:356
A function that allows you to modify the headers before a request is sent. This is useful for adding custom headers or modifying existing ones.
Parameters
headers
Record<string, string>
The original headers object to mutate directly.
Returns
void
Inherited from
ProviderOptions.mutateHeaders
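A minimal sketch of a `mutateHeaders` callback. It mutates the headers object in place and returns nothing; the header names here are hypothetical examples, not library requirements.

```typescript
// Mutate the headers object directly; the return value is ignored.
const mutateHeaders = (headers: Record<string, string>): void => {
  headers['X-Trace-Id'] = 'example-123'; // add a custom header (hypothetical name)
  delete headers['X-Unwanted'];          // remove an existing one
};

const headers: Record<string, string> = { 'X-Unwanted': '1' };
mutateHeaders(headers);
// headers now contains X-Trace-Id and no longer contains X-Unwanted
```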
organization?
optional organization: string
Defined in: types.ts:366
Your OpenAI organization ID. For when you're part of a fancy club.
project?
optional project: string
Defined in: types.ts:368
Your OpenAI project ID. For even fancier clubs.
background?
optional background: boolean
Defined in: types.ts:373
Whether to run the model response in the background. When true, the request is processed asynchronously and can be polled for status.
include?
optional include: string[]
Defined in: types.ts:378
Specify additional output data to include in the model response. For example: ["reasoning.encrypted_content"] to include encrypted reasoning traces.
instructions?
optional instructions: string
Defined in: types.ts:383
Inserts a system (or developer) message as the first item in the model's context. This provides high-level instructions that take precedence over user messages.
metadata?
optional metadata: Record<string, string>
Defined in: types.ts:387
Set of key-value pairs that can be attached to an object for metadata purposes.
parallelToolCalls?
optional parallelToolCalls: boolean
Defined in: types.ts:391
Whether to allow the model to run tool calls in parallel.
previousResponseId?
optional previousResponseId: string
Defined in: types.ts:396
The unique ID of the previous model response, used for multi-turn conversations. This enables conversation state management by chaining responses together.
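A sketch of the options fragment for a follow-up turn. The ID here is a placeholder; a real value comes from an earlier response.

```typescript
// Chain a follow-up turn to an earlier response.
// 'resp_abc123' is a hypothetical ID, not a real one.
const followUpOptions = {
  previousResponseId: 'resp_abc123',
  // Chaining generally requires the earlier response to have been stored
  // (see the store option documented further down).
  store: true,
};
```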
reasoning?
optional reasoning: object
Defined in: types.ts:401
Configuration options for reasoning models (o-series models only). Controls the reasoning effort level for enhanced problem-solving capabilities.
effort?
optional effort: "low" | "medium" | "high"
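A small sketch combining the two: the model name is illustrative, and the effort value is one of the three documented levels.

```typescript
// Reasoning configuration for an o-series model (illustrative model name).
const reasoningOptions = {
  model: 'o3-mini',
  reasoning: { effort: 'high' as const }, // trade latency for more deliberate answers
};
```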
serviceTier?
optional serviceTier: "auto" | "default" | "flex"
Defined in: types.ts:408
Specifies the latency tier to use for processing the request. 'auto' lets OpenAI choose, 'default' uses standard tier, 'flex' uses flexible tier.
store?
optional store: boolean
Defined in: types.ts:413
Whether to store the generated model response for later retrieval via API. Defaults to true. Set to false for stateless requests.
text?
optional text: object
Defined in: types.ts:418
Configuration options for a text response from the model. Controls the format and structure of text outputs.
format?
optional format: object
format.type
type: "text" | "json_object" | "json_schema"
format.json_schema?
optional json_schema: object
format.json_schema.name?
optional name: string
format.json_schema.description?
optional description: string
format.json_schema.schema?
optional schema: Record<string, unknown>
format.json_schema.strict?
optional strict: boolean
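Putting the pieces together, a sketch of a `text.format` configuration requesting schema-constrained JSON output. The schema name and fields are hypothetical examples.

```typescript
// Request JSON output constrained by a schema.
const text = {
  format: {
    type: 'json_schema' as const,
    json_schema: {
      name: 'weather_report',                // hypothetical schema name
      description: 'A minimal forecast object',
      schema: {
        type: 'object',
        properties: {
          city: { type: 'string' },
          tempC: { type: 'number' },
        },
        required: ['city', 'tempC'],
      },
      strict: true, // require the output to match the schema exactly
    },
  },
};
```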
truncation?
optional truncation: "auto" | "disabled"
Defined in: types.ts:433
The truncation strategy to use for the model response. 'auto' lets the model decide, 'disabled' prevents truncation.
user?
optional user: string
Defined in: types.ts:438
A stable identifier for your end-users. Helps OpenAI monitor and detect abuse.
includeUsage?
optional includeUsage: boolean
Defined in: types.ts:443
Whether to include usage information (token counts and timing) in the response. When true, usage data will be included in the final stream chunk.