Documentation Index

Fetch the complete documentation index at: https://raindrop.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Installation

pip install raindrop-crewai crewai

Quick Start

from raindrop_crewai import RaindropCrewAI
from crewai import Agent, Crew, Task

raindrop = RaindropCrewAI(
    api_key="your-write-key",
    user_id="user-123",
)
raindrop.setup()  # auto-patch all Crew.kickoff* methods

agent = Agent(
    role="Senior Researcher",
    goal="Find the most interesting facts about {topic}",
    backstory="You are an experienced researcher.",
)
task = Task(
    description="Identify 3 interesting facts about {topic}.",
    expected_output="A bulleted list of facts.",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff(inputs={"topic": "AI safety"})
print(result.raw)

raindrop.shutdown()

What Gets Traced

The CrewAI integration automatically captures:
  • Crew kickoff invocations — input variables, crew name, process type (sequential/hierarchical)
  • Agent metadata — agent roles and count
  • Task metadata — task descriptions and count
  • Task outputs — per-task name, agent, and summary (truncated)
  • Token usage — ai.usage.prompt_tokens, ai.usage.completion_tokens, ai.usage.cached_tokens, ai.usage.total_tokens
  • Model name — extracted from the first agent’s llm.model; per-LLM model also appears on the corresponding trace span
  • Crew output — raw text output of the crew execution
  • Errors — captured as error.type / error.message properties on the event; the original exception is always re-raised
  • Async support — kickoff(), kickoff_async(), kickoff_for_each(), kickoff_for_each_async() are all instrumented
  • Nested OTel trace spans — crew workflow → agent → task → LLM, via the opentelemetry-instrumentation-crewai instrumentor (see Tracing; per-tool spans are not produced — see Known Limitations)
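
Concretely, the flat event's property bag looks roughly like the sketch below. This is illustrative only: the key names come from the list above, the values are invented, and the surrounding payload shape may differ.

# Illustrative property sketch, not a verbatim payload
{
    "crewai.crew_name": "crew",
    "crewai.process": "sequential",
    "crewai.agent_roles": ["Senior Researcher"],
    "crewai.agent_count": 1,
    "crewai.task_count": 1,
    "ai.usage.prompt_tokens": 512,
    "ai.usage.completion_tokens": 128,
    "ai.usage.cached_tokens": 0,
    "ai.usage.total_tokens": 640,
}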

Configuration

raindrop = RaindropCrewAI(
    api_key="your-write-key",            # Optional: omit to disable telemetry shipping
    user_id="user-123",          # Optional: default user ID for all events
    convo_id="convo-456",        # Optional: conversation/thread ID
    tracing_enabled=True,        # Optional: enable OTel CrewAI instrumentor (default True)
    bypass_otel_for_tools=True,  # Optional: forwarded to raindrop.init() (default True; no observable effect on CrewAI events)
    debug=False,                 # Optional: enable debug logging
)

# Auto-patch every Crew.kickoff* on the Crew class:
raindrop.setup()

# Or wrap a specific Crew instance (no global monkey-patch):
wrapped = raindrop.wrap(crew)

Auto-instrumentation shortcut

from raindrop_crewai import setup_crewai

raindrop = setup_crewai(api_key="your-write-key", user_id="user-123")
# Every Crew.kickoff* call is now automatically traced.

Manual wrapping (no monkey-patch)

from raindrop_crewai import create_raindrop_crewai

raindrop = create_raindrop_crewai(api_key="your-write-key", user_id="user-123")
wrapped = raindrop.wrap(crew)
result = wrapped.kickoff(inputs={"topic": "AI safety"})
raindrop.shutdown()

Debug Mode

Enable verbose logging when troubleshooting:
raindrop = RaindropCrewAI(api_key="your-write-key", debug=True)
This sets the raindrop_crewai logger to DEBUG, surfacing telemetry-side failures that are otherwise swallowed (so the user’s pipeline never crashes due to instrumentation).
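
Note that DEBUG records are only visible if your application has a logging handler configured; a minimal sketch using the standard library:
import logging
logging.basicConfig(level=logging.DEBUG)  # attach a handler so raindrop_crewai DEBUG output is visible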

Multi-Agent Crews

CrewAI is designed for multi-agent collaboration. Agent roles, task descriptions, and the overall crew structure are all captured as event properties:
from crewai import Agent, Crew, Task

researcher = Agent(role="Senior Researcher", goal="Research the topic", backstory="...")
analyst = Agent(role="Data Analyst", goal="Analyze findings", backstory="...")
writer = Agent(role="Technical Writer", goal="Write a clear report", backstory="...")

research_task = Task(description="Research {topic}.", expected_output="Raw research notes.", agent=researcher)
analysis_task = Task(description="Analyze the research notes.", expected_output="Key insights.", agent=analyst)
writing_task = Task(description="Write a short report on {topic}.", expected_output="A clear report.", agent=writer)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process="sequential",  # or "hierarchical"
)

result = crew.kickoff(inputs={"topic": "quantum computing"})
# Captured properties include:
#   crewai.crew_name, crewai.process, crewai.agent_roles,
#   crewai.agent_count, crewai.task_count, crewai.tasks_output

Tool Calls

CrewAI agents can use tools; the integration captures them indirectly:
  • The tool’s output flows through the agent’s response text and lands on the event’s aiData.output.
  • The agent and task spans inside the workflow trace include the time spent in tool calls.
CrewAI’s OTel instrumentor does not emit per-tool spans, so event.toolCalls on the dashboard will be empty (see Known Limitations). Tools themselves are defined and attached as usual:
from crewai.tools import tool

@tool("get_weather")
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 72F in {city}"

reporter = Agent(
    role="Weather Reporter",
    goal="Report the weather for the requested city",
    backstory="You always use the get_weather tool.",
    tools=[get_weather],
)

Batch Processing

Use kickoff_for_each() to process multiple inputs in one call. The integration tracks the entire batch as a single event with aggregated token counts:
inputs = [
    {"topic": "AI safety"},
    {"topic": "quantum computing"},
    {"topic": "climate change"},
]

results = crew.kickoff_for_each(inputs=inputs)
# Captured as one ai_generation event with:
#   crewai.kickoff_type = "batch"
#   crewai.batch_size = 3
#   ai.usage.* aggregated across all runs

Async Usage

import asyncio

async def main():
    result = await crew.kickoff_async(inputs={"topic": "AI safety"})
    print(result.raw)

    results = await crew.kickoff_for_each_async(
        inputs=[{"topic": "AI"}, {"topic": "ML"}],
    )

asyncio.run(main())

Tracing

When tracing_enabled=True (the default), the integration activates the opentelemetry-instrumentation-crewai instrumentor via traceloop-sdk, producing nested OTel spans for every crew execution:
  • Crew workflow — root span covering the entire kickoff() call (crewai.workflow)
  • Agent execution — child span per agent ({role}.agent)
  • Task execution — child span per task ({name}.task)
  • LLM calls — leaf spans for each underlying LLM call, with model name and token usage
Spans are exported to the Raindrop /v1/traces endpoint and link back to the flat event via the trace_id property the SDK sets in raindrop.begin(). They appear in the dashboard’s Traces tab after async ingestion (typically 30–120s). To ship flat events only without OTel spans:
raindrop = RaindropCrewAI(api_key="your-write-key", tracing_enabled=False)
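
For a single-agent, single-task crew, the resulting trace is shaped roughly like this (span names follow the patterns above; exact attributes vary by version):
crewai.workflow              # root span for the kickoff() call
└── {role}.agent             # one child span per agent
    └── {name}.task          # one child span per task
        └── LLM call         # leaf span with model name and token usage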

Identify Users

Associate events with a user identity after initialization:
raindrop.identify("user-123", {"name": "Alice", "plan": "pro"})

Track Signals

Attach feedback, edits, or other custom signals to a previously-shipped event by its event_id. The Python SDK does not currently expose a lastEventId accessor like the TypeScript client does, so to attach a signal you need either (a) the event_id returned to your application out-of-band (e.g. logged by debug=True), or (b) the public ID surfaced on the dashboard’s event detail page.
raindrop.track_signal(
    event_id="<event-id-from-dashboard-or-debug-log>",
    name="thumbs_up",
    signal_type="feedback",
    sentiment="POSITIVE",
    comment="Great answer!",
)

Flushing and Shutdown

Always call shutdown() (which flushes pending data) before your process exits to ensure all telemetry is shipped:
raindrop.flush()     # flush pending data
raindrop.shutdown()  # flush + release resources
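
In scripts, one way to guarantee the final flush is to register shutdown() as an exit hook (a sketch; assumes raindrop is constructed at startup):
import atexit

atexit.register(raindrop.shutdown)  # flush + release resources at interpreter exit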

Known Limitations

  • No per-tool spans / empty event.toolCalls — the underlying opentelemetry-instrumentation-crewai instrumentor only wraps Crew.kickoff (workflow), Agent.execute_task (.agent), Task.execute_sync (.task), and LLM.call (LLM generation). It does NOT produce a span per tool invocation, so event.toolCalls on the dashboard will be empty for CrewAI runs. Tool usage surfaces only in the agent’s final output text and indirectly in the agent/task span durations.
  • Streaming — CrewAI’s CrewStreamingOutput is returned to the caller as-is; the flat Raindrop event is shipped after the stream completes.
  • finish_reason is per-LLM, not per-crew — CrewOutput does not expose a finish_reason. Per-LLM finish reasons appear on the LLM trace spans inside the workflow trace, not on the flat event.
  • Python SDK feature surface — the Python SDK is module-level and does not expose EventShipper / TraceShipper classes. The identify() and track_signal() methods are pass-through wrappers around the raindrop.analytics.* module functions.
  • wrapt<2 required for tracing — the OTel CrewAI instrumentor uses wrapt.wrap_function_wrapper(..., module=...) which was removed in wrapt 2.0. The package’s dev dependencies pin wrapt<2; production environments running wrapt 2.x will see no trace spans (the flat event still ships).