Installation
Install with your package manager of choice:

```bash
npm install raindrop-ai
```

Then initialize the client with your write key:

```typescript
import { Raindrop } from "raindrop-ai";

// Replace with the key from your Raindrop dashboard
const raindrop = new Raindrop({ writeKey: RAINDROP_API_KEY });
```
Quick Start: Interaction API
The Interaction API uses a simple three-step pattern:
1. begin() – Create an interaction and log the initial user input
2. Update – Optionally call setProperty, setProperties, or addAttachments
3. finish() – Record the AI's final output and close the interaction
Using Vercel AI SDK? Check out our automatic integration to track AI events and traces with zero configuration.
Example: Chat Completion
```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { randomUUID } from "crypto";
import { Raindrop } from "raindrop-ai";

const raindrop = new Raindrop({ writeKey: RAINDROP_API_KEY });

const message = "What is love?";
const eventId = randomUUID(); // Generate your own ID for log correlation

// 1. Start the interaction
const interaction = raindrop.begin({
  eventId,
  event: "chat_message",
  userId: "user_123",
  input: message,
  model: "gpt-4o",
  convoId: "convo_123",
  properties: {
    tool_call: "reasoning_engine",
    system_prompt: "you are a helpful...",
    experiment: "experiment_a",
  },
});

// 2. Make the LLM call
const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: message,
});

// 3. Finish the interaction
interaction.finish({
  output: text,
});
```
Updating an Interaction
Update an interaction at any point using setProperty, setProperties, or addAttachments:
```typescript
interaction.setProperty("stage", "embedding");

interaction.addAttachments([
  {
    type: "text",
    name: "Additional Info",
    value: "A very long document",
    role: "input",
  },
  { type: "image", value: "https://example.com/image.png", role: "output" },
  {
    type: "iframe",
    name: "Generated UI",
    value: "https://newui.generated.com",
    role: "output",
  },
]);
```
Resuming an Interaction
If you no longer have the interaction object returned from begin(), resume it with resumeInteraction():
```typescript
const interaction = raindrop.resumeInteraction(eventId);
```
Interactions are subject to a 1 MB event limit. Oversized payloads will be truncated. Contact us if you have custom requirements.
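Since oversized payloads are truncated server-side, you may prefer to clamp large attachment values yourself before sending, so you control what gets cut. A minimal sketch; the helper name and the per-attachment budget are our own choices, not part of the SDK:

```typescript
// Illustrative guard: clamp long attachment values client-side instead of
// relying on server-side truncation. Not part of the raindrop-ai SDK.
type Attachment = { type: string; name?: string; value: string; role: string };

const MAX_VALUE_BYTES = 256 * 1024; // per-attachment budget (our choice)

function clampAttachment(att: Attachment): Attachment {
  if (Buffer.byteLength(att.value, "utf8") <= MAX_VALUE_BYTES) return att;
  // Slicing by characters is approximate for multi-byte text,
  // but fine as a size guard.
  return { ...att, value: att.value.slice(0, MAX_VALUE_BYTES) + " [truncated]" };
}
```

Run attachments through a guard like this before calling addAttachments when values may be large.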
Single-Shot Tracking (trackAi)
For simple request-response interactions, you can use trackAi() directly:
```typescript
raindrop.trackAi({
  event: "user_message",
  userId: "user123",
  model: "gpt-4o-mini",
  input: "Who won the 2023 AFL Grand Final?",
  output: "Collingwood by four points!",
  properties: {
    tool_call: "reasoning_engine",
    system_prompt: "you are a helpful...",
    experiment: "experiment_a",
  },
});
```
We recommend using begin() → finish() for new code to take advantage of partial-event buffering, tracing, and upcoming features like automatic token counts.
Tracking Signals (Feedback)
Signals capture quality ratings on AI events. Use trackSignal() with the same eventId from begin() or trackAi():
| Parameter | Type | Description |
| --- | --- | --- |
| eventId | string | The ID of the AI event you're evaluating |
| name | "thumbs_up", "thumbs_down", or any string | Signal name |
| type | "default", "standard", "feedback", "edit", "agent", "agent_internal" | Optional; defaults to "default" |
| comment | string | User comment (for feedback signals) |
| after | string | User's edited content (for edit signals) |
| sentiment | "POSITIVE" or "NEGATIVE" | Signal sentiment (defaults to "NEGATIVE") |
```typescript
await raindrop.trackSignal({
  eventId: "my_event_id",
  name: "thumbs_down",
  comment: "Answer was off-topic",
});
```
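As a concrete illustration, a chat UI's thumbs buttons might be mapped to trackSignal arguments like this. The mapping of thumbs_up to POSITIVE sentiment is our assumption (sentiment otherwise defaults to NEGATIVE), and the helper is not part of the SDK:

```typescript
// Sketch: build trackSignal arguments from UI feedback. The field names
// follow the parameter table above; the helper itself is illustrative.
type SignalArgs = {
  eventId: string;
  name: "thumbs_up" | "thumbs_down";
  sentiment: "POSITIVE" | "NEGATIVE";
  comment?: string;
};

function feedbackToSignal(
  eventId: string,
  rating: "up" | "down",
  comment?: string
): SignalArgs {
  return {
    eventId,
    name: rating === "up" ? "thumbs_up" : "thumbs_down",
    // Assumption: positive ratings carry POSITIVE sentiment.
    sentiment: rating === "up" ? "POSITIVE" : "NEGATIVE",
    ...(comment ? { comment } : {}),
  };
}
```

The result can be passed straight to raindrop.trackSignal.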
Self Diagnostics
Self Diagnostics lets your agent proactively report its own issues — capability gaps, missing context, persistent tool failures — back to your team. Signals appear in Raindrop’s Self Diagnostics dashboard.
The easiest way to enable Self Diagnostics is by using wrap() with the Vercel AI SDK:
```typescript
import * as ai from "ai";

const { generateText, streamText } = raindrop.wrap(ai, {
  context: { userId: "user_123", eventName: "support-agent" },
  selfDiagnostics: {
    enabled: true,
  },
});
```
See the Vercel AI SDK docs for the full wrap() reference. For integrations that don’t use the Vercel AI SDK, or can’t use wrap(), the sections below cover standalone alternatives.
createSelfDiagnosticsTool() creates a tool your agent can call to report issues. It gives you:
- A reusable execute() handler that tracks an agent signal
- Adapter-specific tool definitions for the Vercel AI SDK, OpenAI SDK, and Anthropic SDK
The four default categories are missing_context, repeatedly_broken_tool, capability_gap, and complete_task_failure. You can replace these with your own.
Self diagnostics requires an eventId to correlate the signal with an interaction. Pass eventId, interaction, or getEventId when creating the tool.
```typescript
const interaction = raindrop.begin({
  eventId: "evt_123",
  event: "agent_run",
  userId: "user_123",
  input: "Fix my deployment",
});

const diagnostics = raindrop.createSelfDiagnosticsTool({
  interaction, // pulls eventId from interaction.getEventId()
  toolName: "__raindrop_report", // optional
  guidance: "Optional extra guidance for your domain", // optional
  // Default signals (used when `signals` is omitted):
  signals: {
    missing_context: {
      description:
        "You cannot complete the task because critical information, credentials, or access is missing and the user cannot provide it. " +
        "Do NOT report this for normal clarifying questions — only when you are blocked.",
      sentiment: "NEGATIVE",
    },
    repeatedly_broken_tool: {
      description:
        "A tool has failed on multiple distinct attempts in this conversation, preventing task completion. You are sure the tool exists, and you have tried to use it, but it has failed. " +
        "A single tool error is NOT enough — the tool must be persistently broken across retries.",
      sentiment: "NEGATIVE",
    },
    capability_gap: {
      description:
        "The task requires a tool, permission, or capability that you do not have. " +
        "For example, the user asks you to perform an action but no suitable tool exists, or you lack the necessary access. " +
        "Do NOT report this if you simply need more information from the user — only when the gap is in your own capabilities.",
      sentiment: "NEGATIVE",
    },
    complete_task_failure: {
      description:
        "You were unable to accomplish what the user asked despite making genuine attempts. This might be things like, you genuinely do not have the capabilities the user is asking for. You have tried but run into a persistent bug in the environment etc. " +
        "This is NOT a refusal or policy block — you tried and failed to deliver the result.",
      sentiment: "NEGATIVE",
    },
  },
});

// Later, when the tool is executed:
await diagnostics.execute({
  category: "missing_context",
  detail: "Missing deployment credentials and logs access.",
});
```
You don’t need to mention self diagnostics in your system prompt — the tool description generated by the SDK already tells the model when and how to call it. Just wire the tool into your LLM call using one of the adapters below.
When the agent calls the tool, a signal is tracked on the same eventId:
```json
{
  "event_id": "evt_...",
  "signal_name": "self diagnostics - missing_context",
  "signal_type": "agent",
  "properties": {
    "source": "agent_reporting_tool",
    "category": "missing_context",
    "signal_description": "You cannot complete the task because critical information...",
    "detail": "User asked to fix deployment but no access to logs or SSH credentials."
  }
}
```
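If you assert on these payloads in tests, the shape can be reproduced with a small builder. The field values mirror the example above; the builder itself is illustrative, not an SDK export:

```typescript
// Sketch: reproduce the tracked-signal shape shown above for use in
// test assertions. Field names and values mirror the example JSON.
function buildAgentSignal(eventId: string, category: string, detail: string) {
  return {
    event_id: eventId,
    signal_name: `self diagnostics - ${category}`, // name embeds the category
    signal_type: "agent",
    properties: {
      source: "agent_reporting_tool",
      category,
      detail,
    },
  };
}
```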
Supported adapters
createSelfDiagnosticsTool() supports these adapters:
| Adapter | Use with | Returns |
| --- | --- | --- |
| diagnostics.forVercelAI() | ai package / Vercel AI SDK | { description, parameters, inputSchema, execute } |
| diagnostics.forOpenAI() | openai package function tools | OpenAI tools entry (type: "function") |
| diagnostics.forAnthropic() | @anthropic-ai/sdk tool config | Anthropic tools entry (input_schema) |
Vercel AI SDK:

```typescript
const tools = {
  [diagnostics.name]: diagnostics.forVercelAI(),
};
```
OpenAI SDK:

```typescript
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages,
  tools: [diagnostics.forOpenAI()],
  tool_choice: { type: "function", function: { name: diagnostics.name } },
});

const toolCall = response.choices[0]?.message.tool_calls?.[0];
if (toolCall) {
  await diagnostics.execute(JSON.parse(toolCall.function.arguments || "{}"));
}
```
Anthropic SDK:

```typescript
const response = await anthropic.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 512,
  messages,
  tools: [diagnostics.forAnthropic()],
  tool_choice: { type: "tool", name: diagnostics.name },
});

const block = response.content.find((b) => b.type === "tool_use");
if (block && block.type === "tool_use") {
  await diagnostics.execute(block.input);
}
```
UI guidance
__raindrop_report is an internal tool. If your chat UI renders tool calls, hide this tool call from end users to avoid confusing them with internal diagnostics details.
```typescript
const visibleToolCalls = toolCalls.filter(
  (call) => call.toolName !== diagnostics.name
);
```
Manual Reporting
If your agent already has its own way of detecting issues, use selfDiagnose() to report them directly — no LLM tool call needed.
```typescript
raindrop.selfDiagnose({
  eventId: "evt_123",
  description: "User asked for production DB access but no credentials were provided.",
  category: "missing_context", // optional — defaults to "general"
});
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| eventId | string | Yes | Interaction/event ID (from begin() or trackAi()) |
| description | string | Yes | Human-readable description of the issue |
| category | string | No | Category name (e.g. "missing_context"); defaults to "general" |
| source | string | No | Origin identifier; defaults to "selfDiagnose" |
| sentiment | "POSITIVE" or "NEGATIVE" | No | Defaults to "NEGATIVE" |
| properties | Record<string, unknown> | No | Additional custom properties |
Attachments
Attachments let you include additional context—documents, images, code, or embedded content—with your events. They work with both begin() interactions and trackAi() calls.
| Property | Type | Description |
| --- | --- | --- |
| type | string | "code", "text", "image", or "iframe" |
| name | string | Optional display name |
| value | string | Content or URL |
| role | string | "input" or "output" |
| language | string | Programming language (for code attachments) |
```typescript
interaction.addAttachments([
  {
    type: "code",
    role: "input",
    language: "typescript",
    name: "example.ts",
    value: "console.log('hello');",
  },
  {
    type: "text",
    name: "Additional Info",
    value: "Some extra text",
    role: "input",
  },
  { type: "image", value: "https://example.com/image.png", role: "output" },
  { type: "iframe", value: "https://example.com/embed", role: "output" },
]);
```
Identifying Users
Identify a user and attach traits to their events with setUserDetails():

```typescript
raindrop.setUserDetails({
  userId: "user123",
  traits: {
    name: "Jane",
    email: "jane@example.com",
    plan: "paid", // we recommend 'free', 'paid', 'trial'
    os: "macOS",
  },
});
```
PII Redaction
Read more about how Raindrop handles privacy and PII redaction here. Enable client-side PII redaction when initializing the SDK:
```typescript
new Raindrop({
  writeKey: RAINDROP_API_KEY,
  redactPii: true,
});
```
Error Handling
Exceptions are raised when errors occur while sending events to Raindrop. Handle these appropriately in your application.
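If a tracking failure should never break the surrounding request, one option is to route SDK calls through a small guard. This is a sketch of a generic pattern, not an SDK feature; swallowErrors is our own name:

```typescript
// Sketch: keep tracking failures from breaking your request path.
// swallowErrors is an illustrative helper, not part of the SDK.
async function swallowErrors<T>(
  op: () => Promise<T> | T,
  onError: (err: unknown) => void = (err) => console.error("raindrop:", err)
): Promise<T | undefined> {
  try {
    return await op();
  } catch (err) {
    onError(err); // log and continue instead of propagating
    return undefined;
  }
}

// Usage: await swallowErrors(() => raindrop.trackAi({ ... }));
```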
Configuration
```typescript
new Raindrop({
  writeKey: RAINDROP_API_KEY,
  debugLogs: process.env.NODE_ENV !== "production", // Print queued events
  disabled: process.env.NODE_ENV === "test", // Disable all tracking
});
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| writeKey | string | — | Your Raindrop API key (required) |
| debugLogs | boolean | false | Print queued events and tracing info to the console |
| disabled | boolean | false | Disable all tracking (useful for test environments) |
| redactPii | boolean | false | Enable client-side PII redaction |
| useExternalOtel | boolean | false | Use your own OpenTelemetry setup instead of Raindrop's built-in one |
| bypassOtelForTools | boolean | false | Ship trackTool() / withTool() / startToolSpan() spans directly via HTTP, bypassing the OTEL exporter |
| disableBatching | boolean | true in dev | Disable span batching for faster feedback in development |
Call await raindrop.close() before your process exits to flush any buffered events.
Tracing
Tracing captures detailed execution information from your AI pipelines—multi-model interactions, chained prompts, and tool calls. This helps you:
- Visualize the full execution flow of your AI application
- Debug and optimize prompt chains
- Understand the intermediate steps that led to a response
Getting Started
Wrap your code with withSpan or withTool on an interaction, and LLM calls inside are automatically captured:
```typescript
import { Raindrop } from "raindrop-ai";

const raindrop = new Raindrop({ writeKey: RAINDROP_API_KEY });
const interaction = raindrop.begin({ ... });

await interaction.withSpan({ name: "my_task" }, async () => {
  // LLM calls here are automatically traced
});
```
Next.js users: Add raindrop-ai to serverExternalPackages in your config:

```javascript
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  serverExternalPackages: ['raindrop-ai'],
};

module.exports = nextConfig;
```
Using withSpan
Use withSpan to trace tasks or operations. Any LLM calls within the span are automatically captured:
```typescript
// Basic span
const result = await interaction.withSpan(
  { name: "generate_response" },
  async () => {
    return "Generated response";
  }
);
```

```typescript
// Span with metadata
const result = await interaction.withSpan(
  {
    name: "embedding_generation",
    properties: { model: "text-embedding-3-large" },
    inputParameters: ["What is the weather today?"],
  },
  async () => {
    return [0.1, 0.2, 0.3, 0.4];
  }
);
```
| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Name for identification in traces |
| properties | Record<string, string> | Additional metadata |
| inputParameters | unknown[] | Input parameters for the task |
Using withTool
Use withTool to trace agent actions—memory operations, web searches, API calls, and more:

```typescript
// Basic tool call
const result = await interaction.withTool(
  { name: "search_tool" },
  async () => {
    return "Search results";
  }
);
```

```typescript
// Tool with metadata
const result = await interaction.withTool(
  {
    name: "calculator",
    properties: { operation: "multiply" },
    inputParameters: { a: 5, b: 10 },
  },
  async () => {
    return "Result: 50";
  }
);
```
| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Name for identification in traces |
| version | number | Version number of the tool |
| properties | Record<string, string> | Additional metadata |
| inputParameters | Record<string, any> | Input parameters for the tool |
| traceContent | boolean | Whether to trace content |
| suppressTracing | boolean | Suppress tracing for this invocation |
For more control over tool span tracking, use trackTool or startToolSpan.
Using trackTool
Use trackTool to log a tool call after it has completed:

```typescript
const interaction = raindrop.begin({
  eventId: "my-event",
  event: "agent_run",
  userId: "user_123",
  input: "Search for weather data",
});

// Log a completed tool call
interaction.trackTool({
  name: "web_search",
  input: { query: "weather in NYC" },
  output: { results: ["Sunny, 72°F", "Clear skies"] },
  durationMs: 150,
  properties: { engine: "google" },
});

// Log a failed tool call
interaction.trackTool({
  name: "database_query",
  input: { query: "SELECT * FROM users" },
  durationMs: 50,
  error: new Error("Connection timeout"),
});

interaction.finish({ output: "Weather search complete" });
```
| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Name of the tool |
| input | unknown | Input passed to the tool |
| output | unknown | Output returned by the tool |
| durationMs | number | Duration in milliseconds |
| startTime | Date or number | When the tool started (defaults to now - durationMs) |
| error | Error or string | Error if the tool failed |
| properties | Record<string, string> | Additional metadata |
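The documented startTime default (now - durationMs) can be made explicit with a small helper when you want to supply timestamps yourself. The now parameter is injected for testability; the function is illustrative, not an SDK export:

```typescript
// Sketch of the documented startTime default: use the caller's value when
// provided, otherwise derive it as now - durationMs.
function resolveStartTime(
  durationMs: number,
  startTime?: Date | number,
  now: number = Date.now()
): number {
  if (startTime !== undefined) {
    return typeof startTime === "number" ? startTime : startTime.getTime();
  }
  return now - durationMs;
}
```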
Using startToolSpan
Use startToolSpan to track a tool as it executes:

```typescript
const interaction = raindrop.begin({
  eventId: "my-event",
  event: "agent_run",
  userId: "user_123",
  input: "Process this data",
});

const toolSpan = interaction.startToolSpan({
  name: "api_call",
  properties: { endpoint: "/api/data" },
  inputParameters: { method: "GET", path: "/api/data" },
});

try {
  const result = await fetchData();
  toolSpan.setOutput(result);
} catch (error) {
  toolSpan.setError(error);
} finally {
  toolSpan.end();
}

interaction.finish({ output: "Data processed" });
```
| Method | Description |
| --- | --- |
| setInput(input) | Set the input (JSON stringified if object) |
| setOutput(output) | Set the output (JSON stringified if object) |
| setError(error) | Mark the span as failed |
| end() | End the span (required when execution completes) |
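The try/catch/finally lifecycle above can be packaged as a reusable wrapper. ToolSpanLike mirrors the methods in the table; runWithToolSpan is our own helper, not an SDK export:

```typescript
// Sketch: run a function inside a tool span, recording output on success,
// error on failure, and always ending the span.
interface ToolSpanLike {
  setOutput(output: unknown): void;
  setError(error: unknown): void;
  end(): void;
}

async function runWithToolSpan<T>(
  span: ToolSpanLike,
  fn: () => Promise<T> | T
): Promise<T> {
  try {
    const result = await fn();
    span.setOutput(result);
    return result;
  } catch (error) {
    span.setError(error);
    throw error; // let the caller handle the failure
  } finally {
    span.end(); // required when execution completes
  }
}
```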
Module Instrumentation
In some environments, automatic instrumentation may not work due to module loading order or bundler behavior. Use instrumentModules to explicitly specify which modules to instrument:
Anthropic users: You must use a module namespace import (import * as ...), not the default export.
```typescript
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import * as AnthropicModule from "@anthropic-ai/sdk"; // Required for instrumentation
import { Raindrop } from "raindrop-ai";

const raindrop = new Raindrop({
  writeKey: RAINDROP_API_KEY,
  instrumentModules: {
    openAI: OpenAI,
    anthropic: AnthropicModule, // Pass the module namespace, not the default export
  },
});
```
Supported modules: openAI, anthropic, cohere, bedrock, google_vertexai, google_aiplatform, pinecone, together, langchain, llamaIndex, chromadb, qdrant, mcp.
OpenTelemetry Integration
If you already have an OpenTelemetry setup (Sentry, Datadog, Honeycomb, etc.), integrate Raindrop alongside it using useExternalOtel:
```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import * as AnthropicModule from "@anthropic-ai/sdk";
import Anthropic from "@anthropic-ai/sdk";
import { Raindrop } from "raindrop-ai";

// 1. Create Raindrop with useExternalOtel
const raindrop = new Raindrop({
  writeKey: RAINDROP_API_KEY,
  useExternalOtel: true,
  instrumentModules: { anthropic: AnthropicModule },
});

// 2. Add Raindrop's processor and instrumentations to your NodeSDK
const sdk = new NodeSDK({
  spanProcessors: [
    raindrop.createSpanProcessor(), // Sends traces to Raindrop
    sentryProcessor, // Your existing processor
  ],
  instrumentations: raindrop.getInstrumentations(),
});
sdk.start();

// 3. Create AI clients AFTER the SDK starts
const anthropic = new Anthropic({ apiKey: "..." });

// 4. Use Raindrop normally
const interaction = raindrop.begin({
  eventId: "my-event",
  event: "chat_request",
  userId: "user_123",
  input: "Hello!",
});

await interaction.withSpan({ name: "generate_response" }, async () => {
  const response = await anthropic.messages.create({
    model: "claude-3-haiku-20240307",
    max_tokens: 100,
    messages: [{ role: "user", content: "Hello!" }],
  });
  return response;
});

interaction.finish({ output: "Response from Claude" });
```
| Method | Description |
| --- | --- |
| createSpanProcessor() | Returns a span processor that sends traces to Raindrop |
| getInstrumentations() | Returns OpenTelemetry instrumentations for AI libraries |
Without instrumentModules, getInstrumentations() returns instrumentations for all supported AI libraries. Specify instrumentModules to instrument only specific libraries.
That’s it! You’re ready to explore your events in the Raindrop dashboard. Ping us on Slack or email us if you get stuck!