
Documentation Index

Fetch the complete documentation index at: https://raindrop.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Beta. The Rust SDK is at 0.0.1. The wire contract against the Raindrop ingestion API is stable and verified end-to-end against the live backend on every push, but the crate API may still change in minor ways before 0.1.0. We recommend pinning the git tag in your Cargo.toml.

Installation

The Rust SDK is hosted on GitHub (not on crates.io). Add it to your Cargo.toml:
[dependencies]
raindrop-ai = { git = "https://github.com/raindrop-ai/raindrop-rust", tag = "v0.0.1" }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde_json = "1"
Smallest possible program — drops in as src/main.rs and runs:
use raindrop::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Empty / missing key → SDK becomes a no-op (zero HTTP), so you can integrate
    // the SDK code first and add the key later without crashing your app.
    let client = Client::builder()
        .write_key(std::env::var("RAINDROP_WRITE_KEY").unwrap_or_default())
        .build()?;

    // ... use client ...

    client.close().await?;
    Ok(())
}
Source code and releases live in raindrop-ai/raindrop-rust.
The Rust SDK requires Rust 1.88+ (MSRV). It is async-first and uses tokio. Most fallible methods return Result<_, Error>: track_ai, track_event, identify, track_signal, the Interaction mutators (set_input, set_property, set_properties, add_attachments, patch, finish), and Client::flush / Client::close. Propagate errors with ? as you would for any fallible call. The two constructors, Client::begin(...).await and Client::resume_interaction(...), are infallible: they always return an Interaction (a no-op handle when the client is disabled), so don’t put a ? on those.

Quick Start: Interaction API

The Interaction API uses a simple three-step pattern:
  1. begin() – Create an interaction and log the initial user input
  2. Update – Optionally call set_property, set_properties, set_input, or add_attachments
  3. finish() – Record the AI’s final output and close the interaction

Example: Chat Completion

use std::collections::BTreeMap;

use raindrop::{BeginOptions, Client, FinishOptions};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .write_key(std::env::var("RAINDROP_WRITE_KEY").unwrap_or_default())
        .build()?;

    let mut props = BTreeMap::new();
    props.insert("system_prompt".into(), json!("you are a helpful..."));

    // 1. Start the interaction
    let interaction = client
        .begin(BeginOptions {
            event_id: "evt_123".into(),
            user_id: "user-123".into(),
            event: "chat_message".into(),
            input: "Can you suggest a calm Saturday morning in San Francisco?".into(),
            model: "gpt-4o".into(),
            convo_id: "conv-123".into(),
            properties: props,
            ..Default::default()
        })
        .await;

    // 2. Make the LLM call
    let output = call_llm().await;

    // 3. Finish the interaction
    interaction
        .finish(FinishOptions {
            output,
            ..Default::default()
        })
        .await?;

    client.close().await?;
    Ok(())
}

async fn call_llm() -> String {
    // Your LLM call here.
    String::new()
}

Updating an Interaction

Update an interaction at any point using set_property, set_properties, set_input, or add_attachments:
use std::collections::BTreeMap;

use raindrop::Attachment;
use serde_json::json;

interaction.set_property("stage", "retrieving").await?;
interaction
    .set_properties(BTreeMap::from([
        ("surface".into(), json!("chat")),
        ("region".into(), json!("us-west")),
    ]))
    .await?;
interaction.set_input("Can you make it a little more local?").await?;
interaction
    .add_attachments(vec![Attachment {
        kind: "text".into(),
        role: "input".into(),
        name: "preferences".into(),
        value: "Prefers coffee, a quiet walk, and no museum stops.".into(),
        ..Default::default()
    }])
    .await?;

Resuming an Interaction

If you no longer have the interaction object returned from begin(), resume it with resume_interaction():
let interaction = client.resume_interaction("evt_123");
interaction.set_property("stage", "follow-up").await?;
interaction
    .finish(FinishOptions {
        output: "Here is a shorter version of that itinerary.".into(),
        ..Default::default()
    })
    .await?;
resume_interaction() recovers an active in-memory interaction created by begin() in the same process. It is not a cross-process restore mechanism. If the event ID is not found in memory, a new interaction handle is created for that ID.
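The lookup-or-create behavior can be pictured with a plain map. This is an illustration of the documented semantics only, not the SDK's internals (the `registry`, `resume` names, and string handles here are hypothetical stand-ins):

```rust
use std::collections::HashMap;

// Illustration only: resume returns the live entry for an event ID if one
// exists in memory, otherwise it creates a fresh handle for that ID.
fn resume(registry: &mut HashMap<String, String>, event_id: &str) -> String {
    registry
        .entry(event_id.to_string())
        .or_insert_with(|| format!("handle:{event_id}"))
        .clone()
}

fn main() {
    let mut registry = HashMap::new();
    registry.insert("evt_123".to_string(), "live-handle".to_string());

    // Found in memory: the existing handle comes back.
    println!("{}", resume(&mut registry, "evt_123"));
    // Not found: a new handle is created for that ID.
    println!("{}", resume(&mut registry, "evt_999"));
}
```

This is also why resuming in a different process never recovers prior state: the registry only ever lives in the process that called begin().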

Single-Shot Tracking (track_ai)

For simple request-response interactions, you can use track_ai() directly:
use std::collections::BTreeMap;

use raindrop::AiEvent;
use serde_json::json;

client
    .track_ai(AiEvent {
        user_id: "user-123".into(),
        event: "chat_message".into(),
        input: "Who won the 2023 AFL Grand Final?".into(),
        output: "Collingwood by four points!".into(),
        model: "gpt-4o".into(),
        convo_id: "conv-123".into(),
        properties: BTreeMap::from([
            ("ai.usage.prompt_tokens".into(), json!(10)),
            ("ai.usage.completion_tokens".into(), json!(5)),
        ]),
        ..Default::default()
    })
    .await?;
We recommend using begin() / finish() for new code to take advantage of partial-event buffering and tracing.
Use track_event() for non-AI events:
use std::collections::BTreeMap;

use raindrop::Event;
use serde_json::json;

client
    .track_event(Event {
        user_id: "user-123".into(),
        event: "session_started".into(),
        properties: BTreeMap::from([("entrypoint".into(), json!("dashboard"))]),
        ..Default::default()
    })
    .await?;

Tracking Signals (Feedback)

Signals capture quality ratings on AI events. Use track_signal() with the same event ID from begin() or track_ai():
Field | Type | Description
event_id | String | The ID of the AI event you’re evaluating
name | String | Signal name (e.g. "thumbs_up", "thumbs_down")
kind | String | One of SignalKind::{DEFAULT, STANDARD, FEEDBACK, EDIT, AGENT, AGENT_INTERNAL}. Defaults to "default".
sentiment | String | "POSITIVE" or "NEGATIVE"
comment | String | Merged into properties.comment for feedback signals
after | String | Merged into properties.after for edit signals
attachment_id | String | Optional attachment ID to associate the signal with
properties | BTreeMap<String, _> | Additional metadata
use raindrop::{Signal, SignalKind};

client
    .track_signal(Signal {
        event_id: "evt_123".into(),
        name: "thumbs_down".into(),
        kind: SignalKind::FEEDBACK.into(),
        sentiment: "NEGATIVE".into(),
        comment: "Answer was off-topic".into(),
        ..Default::default()
    })
    .await?;

Identifying Users

use std::collections::BTreeMap;

use raindrop::User;
use serde_json::json;

client
    .identify(User {
        user_id: "user-123".into(),
        traits: BTreeMap::from([
            ("name".into(), json!("Jane")),
            ("email".into(), json!("jane@example.com")),
            ("plan".into(), json!("paid")), // we recommend "free", "paid", "trial"
        ]),
    })
    .await?;

Attachments

Attachments let you include additional context — documents, images, code, or embedded content — with your events. They work with both begin() interactions and track_ai() calls.
Field | Type | Description
kind | String | "code", "text", "image", or "iframe" (serialized as type on the wire)
role | String | "input" or "output"
name | String | Optional display name
value | String | Content or URL
language | String | Programming language (only meaningful for "code" attachments)
attachment_id | String | Optional UUID. Backend auto-assigns one if empty. Set explicitly to round-trip with Signal::attachment_id.
use raindrop::Attachment;

interaction
    .add_attachments(vec![
        Attachment {
            kind: "code".into(),
            role: "input".into(),
            language: "rust".into(),
            name: "example.rs".into(),
            value: "println!(\"hello\");".into(),
            ..Default::default()
        },
        Attachment {
            kind: "text".into(),
            role: "input".into(),
            name: "Additional Info".into(),
            value: "Some extra text".into(),
            ..Default::default()
        },
        Attachment {
            kind: "image".into(),
            role: "output".into(),
            value: "https://example.com/image.png".into(),
            ..Default::default()
        },
        Attachment {
            kind: "iframe".into(),
            role: "output".into(),
            value: "https://example.com/embed".into(),
            ..Default::default()
        },
    ])
    .await?;
The dashboard’s attachment viewer renders text, image, and iframe attachments. code attachments survive ingestion and are searchable, but are not currently displayed in the visual attachments tab.

Configuration

use std::time::Duration;

let client = Client::builder()
    .write_key(std::env::var("RAINDROP_WRITE_KEY").unwrap_or_default())
    .debug(std::env::var("ENV").as_deref() != Ok("production"))
    .partial_flush_interval(Duration::from_secs(1))
    .trace_flush_interval(Duration::from_secs(1))
    .build()?;
Builder method | Description | Default
.write_key(&str) | Your Raindrop API key. Empty/missing key → SDK becomes a no-op. | (none)
.endpoint(&str) | Override the API endpoint | https://api.raindrop.ai/v1/
.debug(bool) | Verbose debug logging via tracing | false
.partial_flush_interval(Duration) | Periodic event flush. Duration::ZERO disables periodic flush | 1s
.trace_flush_interval(Duration) | Periodic span flush. Duration::ZERO disables periodic flush | 1s
.trace_max_batch_size(usize) | Max spans per trace export request | 50
.trace_max_queue_size(usize) | Max spans buffered before back-pressuring | 5000
.max_attempts(u32) | HTTP retries. 1 disables retries. | 3
.base_delay(Duration) | Backoff base (exponential, ±20% jitter) | 1s
.jitter_fraction(f64) | Backoff jitter fraction (0.0–1.0) | 0.2
.service_name(&str) | OTLP resource.service.name | "raindrop.rust-sdk"
.library_name(&str) | $context.library.name reported with each event | "raindrop-rust"
.library_version(&str) | $context.library.version | crate version
.http_client(reqwest::Client) | Inject a custom reqwest::Client | new client w/ 10s timeout
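To make the retry defaults concrete, here is a sketch of the pre-jitter delay schedule they imply, assuming the standard doubling formula (delay = base_delay × 2^retry); the exact formula and jitter application live inside the SDK and are not shown here:

```rust
use std::time::Duration;

// Sketch only: exponential backoff doubling from the base delay.
// The real client additionally applies ±20% jitter (jitter_fraction 0.2).
fn pre_jitter_delay(base: Duration, retry: u32) -> Duration {
    base * 2u32.pow(retry)
}

fn main() {
    let base = Duration::from_secs(1);
    // With the default max_attempts = 3, there are at most 2 retries
    // after the initial attempt: waits of 1s and 2s before jitter.
    for retry in 0..2 {
        println!("retry {} waits {:?} before jitter", retry + 1, pre_jitter_delay(base, retry));
    }
}
```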
Call client.close().await? before your process exits to flush buffered events and spans. If write_key is empty, the client becomes a no-op (zero HTTP calls) instead of failing.

Tracing

Tracing captures detailed execution information from your AI pipelines — multi-model interactions, chained prompts, and tool calls. This helps you:
  • Visualize the full execution flow of your AI application
  • Debug and optimize prompt chains
  • Understand the intermediate steps that led to a response

Manual Spans

Build trees of spans by passing a parent into SpanOptions::parent:
use raindrop::{Attribute, SpanOptions};

let parent = client.start_span(SpanOptions {
    name: "agent.run".into(),
    event_id: "evt_123".into(),
    operation_id: "ai.workflow".into(),
    ..Default::default()
});

let child = client.start_span(SpanOptions {
    name: "llm.call".into(),
    event_id: "evt_123".into(),
    parent: Some(parent.clone()),
    ..Default::default()
});

child.set_attributes([
    Attribute::string("ai.model.id", "gpt-4o"),
    Attribute::int("ai.usage.prompt_tokens", 10),
]);

// For the canonical OpenTelemetry GenAI shape that the Raindrop backend reads
// to populate per-event token totals on the dashboard:
child.set_token_usage("gpt-4o", /* input */ 47, /* output */ 11);
child.end();

parent.end();
Spans started from an Interaction automatically inherit its user_id, convo_id, and event as traceloop.association.properties.* attributes, so the dashboard groups them under the same user, conversation, and event:
let interaction = client
    .begin(BeginOptions {
        user_id: "user-123".into(),
        convo_id: "conv-456".into(),
        event: "agent_run".into(),
        ..Default::default()
    })
    .await;

let span = interaction.start_span(SpanOptions {
    name: "rag.retrieve".into(),
    ..Default::default()
});
// ... do work ...
span.end();
Plain client.start_span(...) calls only need name + event_id. The SDK automatically emits traceloop.association.properties.event_id so the span survives the backend’s ingestion filter. For non-event-bound spans, set operation_id (e.g. "ai.workflow") or pass properties so the span has at least one of ai.operationId, traceloop.span.kind, traceloop.workflow.name, traceloop.association.properties.*, or gen_ai.* — otherwise it will be dropped server-side.
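The keep/drop rule above can be sketched as a predicate over a span's attribute keys. This is an illustration of the documented filter, not the backend's actual code (the `survives_filter` name is hypothetical):

```rust
// Illustration of the documented server-side rule: a span survives
// ingestion only if at least one attribute key matches this set.
fn survives_filter(attribute_keys: &[&str]) -> bool {
    attribute_keys.iter().any(|k| {
        *k == "ai.operationId"
            || *k == "traceloop.span.kind"
            || *k == "traceloop.workflow.name"
            || k.starts_with("traceloop.association.properties.")
            || k.starts_with("gen_ai.")
    })
}

fn main() {
    // Spans begun through the SDK carry the association event_id key, so they pass.
    println!("{}", survives_filter(&["traceloop.association.properties.event_id"]));
    // A span with none of the recognized keys would be dropped server-side.
    println!("{}", survives_filter(&["my.custom.attribute"]));
}
```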

Closure-style helpers

If you prefer scoped instrumentation, with_span runs a closure inside a span and automatically marks the span as failed on Err:
interaction
    .with_span::<_, _, _, std::io::Error>(
        SpanOptions { name: "summarize".into(), ..Default::default() },
        |span| async move {
            span.set_attributes([Attribute::string("phase", "draft")]);
            Ok::<_, std::io::Error>(())
        },
    )
    .await?;

Tool Spans

Tool spans use the dedicated wire format (traceloop.span.kind=tool) so they surface in the dashboard’s event.toolCalls[] array.
use raindrop::ToolOptions;
use serde_json::json;

let tool = interaction.start_tool_span("weather_lookup", ToolOptions {
    input: Some(json!({ "location": "San Francisco" })),
    ..Default::default()
});
tool.set_output(&json!({ "forecast": "sunny" }));
tool.end();
For retroactive logging of an already-completed call:
use std::collections::BTreeMap;
use std::time::Duration;

use raindrop::TrackToolOptions;
use serde_json::json;

interaction.track_tool(TrackToolOptions {
    name: "web_search".into(),
    input: Some(json!({ "query": "weather in NYC" })),
    output: Some(json!({ "results": ["Sunny, 72°F"] })),
    duration: Some(Duration::from_millis(150)),
    properties: BTreeMap::from([("engine".into(), json!("google"))]),
    ..Default::default()
});

// Failed tool calls (status=ERROR on the dashboard, with the message in output_payload)
interaction.track_tool(TrackToolOptions {
    name: "database_query".into(),
    input: Some(json!({ "query": "SELECT * FROM users" })),
    duration: Some(Duration::from_millis(50)),
    error: Some("connection timeout".into()),
    ..Default::default()
});
Field | Type | Description
name | String | Tool name
input | Option<Value> | JSON input
output | Option<Value> | JSON output
duration | Option<Duration> | Total duration
start_time | Option<OffsetDateTime> | When the tool started (defaults to now - duration)
end_time | Option<OffsetDateTime> | When the tool ended
error | Option<String> | Error message; sets status=ERROR
parent | Option<Span> | Parent span for nesting
properties | BTreeMap<String, _> | Additional metadata
For functional wrapping, the SDK exposes with_tool and with_tool_async free helpers that run a closure inside a tool span and JSON-serialize the result onto traceloop.entity.output:
let result = raindrop::with_tool::<_, _, std::io::Error>(
    &interaction,
    "park_check",
    ToolOptions {
        input: Some(json!({ "location": "Dolores Park" })),
        ..Default::default()
    },
    || Ok(json!({ "recommendation": "yes" })),
)?;

Standalone Tracer

Use Client::tracer() for batch jobs or non-conversation work where you still want spans and tool traces:
let tracer = client.tracer(BTreeMap::from([("job_id".into(), json!("batch-123"))]));

let span = tracer.start_span(SpanOptions { name: "embed".into(), ..Default::default() });
span.end();

tracer.track_tool(TrackToolOptions {
    name: "vector_lookup".into(),
    input: Some(json!({ "query": "mission coffee" })),
    output: Some(json!({ "winner": "Ritual Coffee Roasters" })),
    properties: BTreeMap::from([("step".into(), json!("retrieve"))]),
    ..Default::default()
});

Span Attributes

The SDK provides typed helpers for OTLP-compatible attributes:
span.set_attributes([
    Attribute::string("ai.model.id", "gpt-4o"),
    Attribute::int("ai.usage.prompt_tokens", 150),
    Attribute::float("ai.latency_seconds", 1.23),
    Attribute::bool("ai.stream", true),
    Attribute::string_array("ai.tools", vec!["search".into(), "calculator".into()]),
]);

Known Limitations

  • No automatic LLM-client instrumentation. Unlike the Python and TypeScript SDKs, the Rust SDK does not auto-hook into LLM frameworks. Create spans manually via start_span, start_tool_span, with_span, with_tool, or track_tool.
  • No PII redaction. The Python SDK exposes set_redact_pii and the TypeScript SDK has redactPii. The Rust SDK does not yet implement client-side redaction. Redact at the call site or upstream of track_ai / track_event if needed.
  • No local debugger mirroring. The TypeScript SDK supports RAINDROP_LOCAL_DEBUGGER to mirror traces and partial events to a local Workshop instance. The Rust SDK currently ships only to the configured endpoint.
  • Oversized payload guard. Payloads larger than 1 MiB after JSON serialization are dropped client-side (matching the JS / Python SDKs) to avoid 413s on the gateway. The drop is logged via tracing::warn! so production callers can detect it.
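The 1 MiB guard amounts to a size check on the serialized payload. The sketch below illustrates the documented behavior only; it is not the SDK's code, and the `should_send` name is hypothetical:

```rust
// Illustration of the documented guard: payloads over 1 MiB after JSON
// serialization are dropped client-side rather than sent to the gateway.
const MAX_PAYLOAD_BYTES: usize = 1024 * 1024; // 1 MiB

fn should_send(serialized_json: &str) -> bool {
    serialized_json.len() <= MAX_PAYLOAD_BYTES
}

fn main() {
    let small = r#"{"event":"chat_message","userId":"user-123"}"#;
    println!("small payload sent: {}", should_send(small));

    let oversized = "x".repeat(MAX_PAYLOAD_BYTES + 1);
    println!("oversized payload sent: {}", should_send(&oversized));
}
```

If you log very large attachments or outputs, truncate them before tracking so the whole event stays under the limit.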

That’s it! You’re ready to explore your events in the Raindrop dashboard. Ping us on Slack or email us if you get stuck!