
Installation

npm install @raindrop-ai/openai-agents @openai/agents

Quick Start

import { createRaindropOpenAIAgents } from "@raindrop-ai/openai-agents";
import { Agent, run, addTraceProcessor } from "@openai/agents";

const raindrop = createRaindropOpenAIAgents({
  writeKey: "rk_...",
  userId: "user-123",
});

addTraceProcessor(raindrop.processor);

const agent = new Agent({
  name: "Assistant",
  model: "gpt-4o",
  instructions: "Be helpful",
});

const result = await run(agent, "What is the capital of France?");
console.log(result.finalOutput);

await raindrop.flush();

What Gets Traced

The integration captures events from the OpenAI Agents SDK’s built-in tracing system:
  • Agent runs — trace-level events with workflow name and metadata
  • LLM generations — model, input messages, output, token usage (input_tokens/output_tokens)
  • Tool/function calls — function name, input arguments, output (TypeScript only)
  • Handoffs — from/to agent names for multi-agent workflows (TypeScript only)
  • Errors — captured with error status on spans (TypeScript only)
In TypeScript, all spans are linked with parent-child relationships for full trace visibility. In Python, only LLM generation data is captured per trace.
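To make the list above concrete, here is an illustrative sketch of what a captured LLM generation might look like. The field names mirror the list above but are assumptions for illustration, not the SDK's actual event schema:

```typescript
// Illustrative only — field names are assumptions, not the SDK's real schema.
interface TracedGeneration {
  model: string;                                          // e.g. "gpt-4o"
  input: { role: string; content: string }[];             // input messages
  output: string;                                         // model output
  usage: { input_tokens: number; output_tokens: number }; // token usage
}

const example: TracedGeneration = {
  model: "gpt-4o",
  input: [{ role: "user", content: "What is the capital of France?" }],
  output: "Paris.",
  usage: { input_tokens: 12, output_tokens: 3 },
};
```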

Configuration

const raindrop = createRaindropOpenAIAgents({
  writeKey: "rk_...",      // Optional: your Raindrop write key (omit to disable telemetry)
  endpoint: "...",         // Optional: custom API endpoint
  userId: "user-123",      // Optional: associate events with a user
  convoId: "convo-456",    // Optional: conversation/thread ID
  debug: false,            // Optional: enable verbose logging
});
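Because omitting writeKey disables telemetry entirely, one common pattern is to drive the options from environment variables so development machines without a key skip telemetry automatically. A minimal sketch, assuming RAINDROP_WRITE_KEY is your chosen variable name:

```typescript
// Sketch: env-driven options. RAINDROP_WRITE_KEY is an assumed variable name,
// not one the SDK reads on its own. An undefined writeKey disables telemetry.
const raindropOptions = {
  writeKey: process.env.RAINDROP_WRITE_KEY,
  userId: process.env.RAINDROP_USER_ID,
  debug: process.env.NODE_ENV !== "production",
};
```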

Multi-Agent Workflows

The integration automatically captures handoffs between agents:
const specialist = new Agent({
  name: "Billing Specialist",
  model: "gpt-4o",
  instructions: "Resolve billing questions.",
});

const triage = new Agent({
  name: "Triage",
  model: "gpt-4o-mini",
  instructions: "Route to the appropriate specialist.",
  handoffs: [specialist],
});

const result = await run(triage, "I need help with billing");
await raindrop.flush();

Flushing and Shutdown

Always call flush() before your process exits to ensure all telemetry is shipped:
await raindrop.flush();     // flush pending data
await raindrop.shutdown();  // flush + release resources
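In long-running services, a crash or a Ctrl-C can skip that final flush. One way to cover those paths is a small helper that wires shutdown() to process signals. This is a hypothetical helper, not part of the SDK, and it assumes shutdown() is safe to guard with a flag:

```typescript
// Hypothetical helper (not part of the SDK): flush telemetry even when the
// process exits via SIGINT (Ctrl-C) or SIGTERM (e.g. from an orchestrator).
type Shutdownable = { shutdown(): Promise<void> };

function registerShutdownHooks(client: Shutdownable): void {
  let called = false;
  const once = async () => {
    if (called) return; // guard against shutdown() running twice
    called = true;
    await client.shutdown();
  };
  process.on("SIGINT", () => { void once().then(() => process.exit(130)); });
  process.on("SIGTERM", () => { void once().then(() => process.exit(143)); });
  process.on("beforeExit", () => { void once(); });
}
```

Call registerShutdownHooks(raindrop) once after creating the client; the guard flag keeps shutdown() from running twice if multiple signals arrive.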

Known Limitations (Python)

  • Tool calls and handoffs are not captured in the Python SDK. Only LLM generation spans (model, input, output, tokens) are tracked. The TypeScript SDK captures full span trees including tools and handoffs.
  • Multi-response traces: in multi-agent workflows with handoffs, only the final response’s data is recorded in the single track_ai call; earlier responses in the same trace are dropped.