## Options
`createAILogger(log, options?)` accepts a single options bag. Every option is opt-in; defaults stay safe and quiet.
| Option | Type | Default | Description |
|---|---|---|---|
| `toolInputs` | `boolean \| ToolInputsOptions` | `false` | Capture tool call inputs alongside their names (off by default to avoid leaking sensitive data). |
| `cost` | `Record<string, ModelCost>` | `undefined` | Pricing map. Keys are model IDs, values are `{ input, output }` in dollars per 1M tokens. |
## Tool Inputs
By default, `ai.toolCalls` is a `string[]` of tool names. Enable `toolInputs` to capture inputs too — useful for debugging agent behaviour or auditing what data the model reached for.
> Prefer `maxLength` and `transform` rather than enabling raw capture in production.

### Capture everything

```ts
const ai = createAILogger(log, { toolInputs: true })
```
### Truncate long inputs

```ts
const ai = createAILogger(log, { toolInputs: { maxLength: 200 } })
```
### Redact sensitive fields

```ts
const ai = createAILogger(log, {
  toolInputs: {
    maxLength: 500,
    transform: (input, toolName) => {
      if (toolName === 'queryDB') return { sql: '***' }
      return input
    },
  },
})
```
| Sub-option | Type | Description |
|---|---|---|
| `maxLength` | `number` | Truncate stringified inputs exceeding this character length (appends `…`). |
| `transform` | `(input, toolName) => unknown` | Custom transform applied before `maxLength`. Use it to redact fields or reshape data. |
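The two sub-options compose in a fixed order: `transform` runs first, then truncation. A minimal sketch of that pipeline — `captureToolInput` and its internals are illustrative, not the library's actual implementation:

```typescript
// Illustrative sketch only: shows how transform and maxLength compose.
type ToolInputsOptions = {
  maxLength?: number
  transform?: (input: unknown, toolName: string) => unknown
}

function captureToolInput(
  input: unknown,
  toolName: string,
  opts: ToolInputsOptions,
): string {
  // transform runs first, so redaction happens before truncation
  const shaped = opts.transform ? opts.transform(input, toolName) : input
  const text = typeof shaped === 'string' ? shaped : JSON.stringify(shaped)
  if (opts.maxLength !== undefined && text.length > opts.maxLength) {
    return text.slice(0, opts.maxLength) + '…'
  }
  return text
}

const captured = captureToolInput({ sql: 'SELECT * FROM users' }, 'queryDB', {
  maxLength: 500,
  transform: (input, name) => (name === 'queryDB' ? { sql: '***' } : input),
})
// captured → '{"sql":"***"}'
```

Because redaction precedes truncation, a `transform` can safely drop secrets before any part of the raw input is stringified for the log.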
When `toolInputs` is enabled, `ai.toolCalls` becomes an `Array<{ name, input }>` instead of a plain string array.
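Because the element type depends on configuration, code that reads `ai.toolCalls` in both modes can narrow on the element shape. A small hypothetical helper (not part of the library) that extracts names either way:

```typescript
// Hypothetical helper: accepts both shapes of ai.toolCalls.
type ToolCall = string | { name: string; input: unknown }

function toolNames(calls: ToolCall[]): string[] {
  // plain mode yields strings; toolInputs mode yields { name, input } objects
  return calls.map((call) => (typeof call === 'string' ? call : call.name))
}

const plain = toolNames(['search', 'queryDB'])
const withInputs = toolNames([{ name: 'queryDB', input: { sql: '***' } }])
// plain → ['search', 'queryDB']; withInputs → ['queryDB']
```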
## Cost Estimation
Pass a `cost` map to compute an estimated dollar cost per call. The middleware multiplies token usage by the per-million rates and sets `ai.estimatedCost` on the wide event.
```ts
const ai = createAILogger(log, {
  cost: {
    'claude-sonnet-4.6': { input: 3, output: 15 },
    'gpt-4o': { input: 2.5, output: 10 },
  },
})
```
Read the result from your handler with `ai.getEstimatedCost()` — useful for billing dashboards or warning users before expensive calls.
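The arithmetic behind the estimate is plain per-million scaling. A sketch of the calculation — the function and field names here are assumptions for illustration, not the library's API:

```typescript
// Illustrative: token usage × dollars-per-1M-tokens, per direction.
type ModelCost = { input: number; output: number }

function estimateCost(
  usage: { inputTokens: number; outputTokens: number },
  rate: ModelCost,
): number {
  return (
    (usage.inputTokens / 1_000_000) * rate.input +
    (usage.outputTokens / 1_000_000) * rate.output
  )
}

// 2,000 input + 500 output tokens at the claude-sonnet-4.6 rates above ≈ $0.0135
const cost = estimateCost(
  { inputTokens: 2000, outputTokens: 500 },
  { input: 3, output: 15 },
)
```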
> Keep the `cost` map in one file alongside model selection, so renaming a model in production also updates pricing. Avoid hardcoding per-route maps.

## Error Handling
If a model call fails, the middleware captures the error into the wide event before re-throwing:
```json
{
  "ai": {
    "calls": 1,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "finishReason": "error",
    "error": "API rate limit exceeded"
  }
}
```
Stream errors (e.g. content filter) are also captured from the stream's error chunks. Your error-handling code (`try`/`catch`, route-level error handlers) keeps working as usual — the middleware only observes.
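The observe-and-rethrow behaviour can be pictured as a small wrapper. This is a sketch of the pattern, not the middleware's actual code — `observed` and the event field names mirror the JSON example above but are illustrative:

```typescript
// Sketch: record the failure on the wide event, then let it propagate
// unchanged so the caller's own error handling still fires.
async function observed<T>(
  event: Record<string, unknown>,
  call: () => Promise<T>,
): Promise<T> {
  try {
    return await call()
  } catch (err) {
    event.finishReason = 'error'
    event.error = err instanceof Error ? err.message : String(err)
    throw err // re-throw: the middleware only observes
  }
}
```

The caller receives the original error object untouched, so existing `try`/`catch` logic and route-level handlers behave exactly as they did without the logger.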