Documentation Index
Fetch the complete documentation index at: https://raindrop.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Installation
Quick Start
What Gets Traced
The CrewAI integration automatically captures:

- Crew kickoff invocations — input variables, crew name, process type (sequential/hierarchical)
- Agent metadata — agent roles and count
- Task metadata — task descriptions and count
- Task outputs — per-task name, agent, and summary (truncated)
- Token usage — `ai.usage.prompt_tokens`, `ai.usage.completion_tokens`, `ai.usage.cached_tokens`, `ai.usage.total_tokens`
- Model name — extracted from the first agent’s `llm.model`; the per-LLM model also appears on the corresponding trace span
- Crew output — raw text output of the crew execution
- Errors — captured as `error.type`/`error.message` properties on the event; the original exception is always re-raised
- Async support — `kickoff()`, `kickoff_async()`, `kickoff_for_each()`, `kickoff_for_each_async()` are all instrumented
- Nested OTel trace spans — crew workflow → agent → task → LLM, via the `opentelemetry-instrumentation-crewai` instrumentor (see Tracing; per-tool spans are not produced — see Known Limitations)
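Putting the list above together, a shipped flat event carries properties along these lines. This is an illustrative sketch only: the property names come from the list above, but the values and the overall payload shape are assumptions, not the SDK's actual wire format.

```python
# Hypothetical flat-event payload assembled from the properties listed
# above. The real SDK builds and ships this internally; the schema shown
# here is an illustration, not the actual wire format.
event = {
    "ai.usage.prompt_tokens": 1420,
    "ai.usage.completion_tokens": 310,
    "ai.usage.cached_tokens": 0,
    "ai.usage.total_tokens": 1730,
    "model": "gpt-4o",             # from the first agent's llm.model (example value)
    "crew.process": "sequential",  # or "hierarchical"
    "error.type": None,            # set only when the crew raises
    "error.message": None,
}

# For a non-cached run, total tokens are prompt plus completion tokens.
assert event["ai.usage.total_tokens"] == (
    event["ai.usage.prompt_tokens"] + event["ai.usage.completion_tokens"]
)
```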
Configuration
Auto-instrumentation shortcut
Manual wrapping (no monkey-patch)
Debug Mode
Enable verbose logging when troubleshooting: set the `raindrop_crewai` logger to DEBUG, surfacing telemetry-side failures that are otherwise swallowed (so the user’s pipeline never crashes due to instrumentation).
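Since `raindrop_crewai` is a regular Python logger name, the standard library alone is enough to raise its level; everything below is plain `logging`, with no SDK-specific API assumed:

```python
import logging

# Raise the integration's logger (named "raindrop_crewai", per the docs
# above) to DEBUG so telemetry-side failures are printed instead of
# being silently swallowed.
logging.basicConfig(level=logging.INFO)
logging.getLogger("raindrop_crewai").setLevel(logging.DEBUG)

assert logging.getLogger("raindrop_crewai").isEnabledFor(logging.DEBUG)
```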
Multi-Agent Crews
CrewAI is designed for multi-agent collaboration. Agent roles, task descriptions, and the overall crew structure are all captured as event properties:
Tool Calls
CrewAI agents can use tools; the integration captures them indirectly:

- The tool’s output flows through the agent’s response text and lands on the event’s `aiData.output`.
- The agent and task spans inside the workflow trace include the time spent in tool calls.
- `event.toolCalls` on the dashboard will be empty (see Known Limitations).
Batch Processing
Use `kickoff_for_each()` to process multiple inputs in one call. The integration tracks the entire batch as a single event with aggregated token counts:
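"Aggregated token counts" means the per-input usage is summed into one set of `ai.usage.*` totals on the single batch event. A minimal sketch of that aggregation, using made-up per-input usage numbers for illustration:

```python
# Hypothetical per-input token usage as each kickoff in the batch completes.
per_input_usage = [
    {"prompt_tokens": 900, "completion_tokens": 120},
    {"prompt_tokens": 750, "completion_tokens": 95},
    {"prompt_tokens": 880, "completion_tokens": 140},
]

# One aggregated total for the whole kickoff_for_each() batch.
batch_usage = {
    key: sum(usage[key] for usage in per_input_usage)
    for key in ("prompt_tokens", "completion_tokens")
}
batch_usage["total_tokens"] = (
    batch_usage["prompt_tokens"] + batch_usage["completion_tokens"]
)

assert batch_usage == {
    "prompt_tokens": 2530,
    "completion_tokens": 355,
    "total_tokens": 2885,
}
```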
Async Usage
Tracing
When `tracing_enabled=True` (the default), the integration activates the `opentelemetry-instrumentation-crewai` instrumentor via `traceloop-sdk`, producing nested OTel spans for every crew execution:
- Crew workflow — root span covering the entire `kickoff()` call (`crewai.workflow`)
- Agent execution — child span per agent (`{role}.agent`)
- Task execution — child span per task (`{name}.task`)
- LLM calls — leaf spans for each underlying LLM call, with model name and token usage

Spans are exported to the `/v1/traces` endpoint and link back to the flat event via the `trace_id` property the SDK sets in `raindrop.begin()`. They appear in the dashboard’s Traces tab after async ingestion (typically 30–120s).
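The span names follow the `crewai.workflow` / `{role}.agent` / `{name}.task` patterns listed above. A small sketch of the resulting hierarchy for a two-agent sequential crew (the agent roles and task names here are invented for illustration):

```python
# Build the span-name tree the instrumentor produces for a hypothetical
# two-agent sequential crew, using the naming patterns listed above.
agents = ["researcher", "writer"]
tasks = {"researcher": "gather_sources", "writer": "draft_report"}

trace = {
    "crewai.workflow": {              # root span: the kickoff() call
        f"{role}.agent": {            # child span per agent
            f"{tasks[role]}.task": [  # child span per task
                "LLM call",           # leaf spans for each LLM call
            ]
        }
        for role in agents
    }
}

assert "researcher.agent" in trace["crewai.workflow"]
assert "draft_report.task" in trace["crewai.workflow"]["writer.agent"]
```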
To ship flat events only without OTel spans:
Identify Users
Associate events with a user identity after initialization:
Track Signals
Attach feedback, edits, or other custom signals to a previously-shipped event by its `event_id`. The Python SDK does not currently expose a `lastEventId` accessor like the TypeScript client does, so to attach a signal you need either (a) the `event_id` returned to your application out-of-band (e.g. logged by `debug=True`), or (b) the public ID surfaced on the dashboard’s event detail page.
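The flow is: ship the event, record its `event_id` out-of-band, then attach the signal later by that id. A stand-alone sketch of that bookkeeping; the `signals_by_event` store, the `attach_signal` helper, and the `evt_123` id are all hypothetical stand-ins, not SDK API:

```python
# Hypothetical bookkeeping for attaching signals after the fact. In the
# real SDK you would pass the event_id to its track-signal function;
# here a plain dict stands in for the backend.
signals_by_event = {}

def attach_signal(event_id, name, value):
    """Record a signal (e.g. feedback or an edit) against a shipped event."""
    signals_by_event.setdefault(event_id, []).append({name: value})

# event_id captured out-of-band, e.g. from debug=True logging.
event_id = "evt_123"  # placeholder id
attach_signal(event_id, "thumbs_up", True)
attach_signal(event_id, "edited", False)

assert signals_by_event["evt_123"] == [{"thumbs_up": True}, {"edited": False}]
```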
Flushing and Shutdown
Always call `shutdown()` (which flushes pending data) before your process exits to ensure all telemetry is shipped:
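To guarantee the flush even on error paths, wrap the run in `try/finally`, or register the shutdown call with `atexit` once at startup. In this sketch `shutdown` is a local stand-in for the SDK's function, and the crew run is elided; real code would pick one of the two options, not both:

```python
import atexit

flushed = []

def shutdown():
    # Stand-in for the SDK's shutdown(): flush pending telemetry.
    flushed.append(True)

# Option 1: register once at startup so shutdown() runs at interpreter exit.
atexit.register(shutdown)

# Option 2: explicit try/finally around the crew run.
try:
    pass  # crew.kickoff(...) would go here
finally:
    shutdown()

assert flushed == [True]
```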
Known Limitations
- No per-tool spans / empty `event.toolCalls` — the underlying `opentelemetry-instrumentation-crewai` instrumentor only wraps `Crew.kickoff` (workflow), `Agent.execute_task` (`.agent`), `Task.execute_sync` (`.task`), and `LLM.call` (LLM generation). It does NOT produce a span per tool invocation, so `event.toolCalls` on the dashboard will be empty for CrewAI runs. Tool usage surfaces only in the agent’s final output text and indirectly in the agent/task span durations.
- Streaming — CrewAI’s `CrewStreamingOutput` is returned to the caller as-is; the flat Raindrop event is shipped after the stream completes.
- `finish_reason` is per-LLM, not per-crew — `CrewOutput` does not expose a `finish_reason`. Per-LLM finish reasons appear on the LLM trace spans inside the workflow trace, not on the flat event.
- Python SDK feature surface — the Python SDK is module-level and does not expose `EventShipper`/`TraceShipper` classes. The `identify()` and `track_signal()` methods are pass-through wrappers around the `raindrop.analytics.*` module functions.
- `wrapt<2` required for tracing — the OTel CrewAI instrumentor uses `wrapt.wrap_function_wrapper(..., module=...)`, which was removed in wrapt 2.0. The package’s dev dependencies pin `wrapt<2`; production environments running wrapt 2.x will see no trace spans (the flat event still ships).
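Given the `wrapt<2` constraint, a defensive startup check can flag environments where trace spans will silently be absent. The version comparison itself is ordinary stdlib code; the `tracing_compatible`/`check_wrapt` helpers are illustrative, not part of the SDK:

```python
from importlib.metadata import PackageNotFoundError, version

def tracing_compatible(wrapt_version: str) -> bool:
    """True when the given wrapt version is <2, i.e. trace spans can be produced."""
    major = int(wrapt_version.split(".")[0])
    return major < 2

def check_wrapt() -> bool:
    """Check the locally installed wrapt; False means no spans (flat events still ship)."""
    try:
        return tracing_compatible(version("wrapt"))
    except PackageNotFoundError:
        return False  # no wrapt at all: the instrumentor cannot load

# Version-string behaviour, independent of what is installed locally:
assert tracing_compatible("1.16.0") is True
assert tracing_compatible("2.0.1") is False
```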