LangGraph Example with Reflight
This example demonstrates how to integrate Reflight with LangGraph workflows to track and observe your AI operations.
📦 Package Version: This documentation is written for @reflight/langgraph version 0.6.0. If you encounter issues, ensure you're using this version or check the changelog for breaking changes.
Installation
Install the Reflight SDK for LangGraph:
```bash
npm install @reflight/langgraph@0.6.0
```
⚠️ Version Compatibility: This example uses @reflight/langgraph@0.6.0. If you're using a different version, some APIs may differ. Always check the version-specific documentation for your SDK version.
Setup
- Get your Reflight API key from your Reflight dashboard
- Set it as an environment variable:
```bash
export REFLIGHT_API_KEY="your-api-key-here"
```
💡 Tip: Keep your API key secure and never commit it to version control. Use environment variables or a secrets manager.
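If you keep the key in a local .env file, here is a minimal sketch of loading and validating it at startup (the dotenv package is an assumption; any secrets mechanism works):

```ts
import "dotenv/config"; // loads .env into process.env (assumes the dotenv package)

const apiKey = process.env.REFLIGHT_API_KEY;
if (!apiKey) {
  throw new Error("REFLIGHT_API_KEY is not set");
}
```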
Usage
1. Initialize Reflight
Call init() at the start of your application to set up Reflight tracking:
```ts
import { init } from "@reflight/langgraph";

init(process.env.REFLIGHT_API_KEY);
```
2. Use the Tracer Callback
Add the Tracer callback to your LangGraph workflow's RunnableConfig to automatically track all operations:
```ts
import { Tracer } from "@reflight/langgraph";
import { runPublishingWorkflow } from "./workflows/publishing-workflow";

await runPublishingWorkflow(
  {
    topic: "New feature launch",
    audience: "developers",
    channel: "email",
  },
  {
    callbacks: [new Tracer()], // Add Reflight tracer
  }
);
```
📝 Note: The Tracer automatically captures:
- All LLM calls
- Tool invocations
- Workflow state transitions
- Message flow
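Because Tracer is a standard LangChain callback, you can pass it alongside other RunnableConfig fields such as runName and tags to make runs easier to find (these are standard LangChain options; whether Reflight surfaces them in the dashboard is an assumption to verify). Continuing the snippet above:

```ts
await runPublishingWorkflow(
  {
    topic: "New feature launch",
    audience: "developers",
    channel: "email",
  },
  {
    callbacks: [new Tracer()],
    runName: "publishing-email", // standard LangChain RunnableConfig fields
    tags: ["marketing", "email"],
  }
);
```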
3. What runPublishingWorkflow Looks Like
For reference, here is the shape of the workflow function used in the examples above. It wraps a compiled LangGraph StateGraph that orchestrates a copywriter → editor → publisher tool chain; the prompt construction and graph definition are elided, so buildOrchestrationPrompt and publishingWorkflow below are stand-ins for code defined elsewhere in the module.
```ts
// langgraph/src/workflows/publishing-workflow.ts
import { HumanMessage, type BaseMessage } from "@langchain/core/messages";
import type { RunnableConfig } from "@langchain/core/runnables";
import type {
  PublishingChannel,
  PublishingTone,
} from "../tools/publishing-tools";

// Simplified here; the real module infers this LangGraph state
// from the graph's annotations.
type PublishingState = {
  messages: BaseMessage[];
  llmCalls: number;
};

export type PublishingRequest = {
  topic: string;
  audience: string;
  tone?: PublishingTone;
  channel: PublishingChannel;
  length?: number;
  styleGuide?: string;
  ctaUrl?: string;
};

export async function runPublishingWorkflow(
  input: PublishingRequest,
  config?: RunnableConfig
): Promise<PublishingState> {
  // Build an orchestration prompt that enforces
  // copywriter -> editor -> publisher and passes through
  // topic, audience, tone, length, styleGuide, channel, ctaUrl.
  // buildOrchestrationPrompt stands in for the real prompt builder (elided).
  const userPrompt = buildOrchestrationPrompt(input);

  // Merge workflow metadata (workflow, topic, channel) into the config.
  const runnableConfig: RunnableConfig = {
    ...config,
    metadata: {
      ...config?.metadata,
      workflow: "publishing",
      topic: input.topic,
      channel: input.channel,
    },
  };

  // Invoke the compiled LangGraph StateGraph (publishingWorkflow,
  // defined elsewhere in this module) with the initial HumanMessage
  // and the merged RunnableConfig.
  return publishingWorkflow.invoke(
    {
      messages: [
        new HumanMessage({
          content: userPrompt,
        }),
      ],
      llmCalls: 0,
    },
    runnableConfig
  );
}
```
4. Shutdown (Optional)
Call shutdown() when your application is done to ensure all traces are flushed:
📝 Note: This step is optional. If you don't call shutdown(), traces will still be sent, but calling it ensures all pending traces are flushed before your application exits.

```ts
import { shutdown } from "@reflight/langgraph";

// At the end of your script
await shutdown();
```
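If the workflow can throw, a try/finally guard (a minimal sketch) ensures the flush still runs:

```ts
import { shutdown, Tracer } from "@reflight/langgraph";
import { runPublishingWorkflow } from "./workflows/publishing-workflow";

try {
  await runPublishingWorkflow(
    { topic: "New feature launch", audience: "developers", channel: "email" },
    { callbacks: [new Tracer()] }
  );
} finally {
  // Flush pending traces even if the workflow throws.
  await shutdown();
}
```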
Complete Example
Here's a complete example showing all three steps:
```ts
import { init, shutdown, Tracer } from "@reflight/langgraph";
import {
  runPublishingWorkflow,
  type PublishingRequest,
} from "./workflows/publishing-workflow";

// 1. Initialize Reflight
init(process.env.REFLIGHT_API_KEY);

// 2. Run your workflow with Tracer
const request: PublishingRequest = {
  topic: "New feature launch: AI-powered content generation",
  audience: "product managers and marketing teams",
  tone: "professional",
  channel: "email",
  length: 150,
  ctaUrl: "https://example.com/learn-more",
};

const result = await runPublishingWorkflow(request, {
  callbacks: [new Tracer()], // Reflight automatically tracks everything
});

// 3. Shutdown when done
await shutdown();
```
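📝 Note: This example uses top-level await, so run it as an ES module (for example with tsx, or with "type": "module" in your package.json).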
Trigger.dev Integration
When using Reflight with Trigger.dev, you can use createFilteredExporter to export only LangGraph workflow traces to Reflight, filtering out traces from other services.
💡 Tip: The filtered exporter is particularly useful in Trigger.dev environments where multiple services may be generating traces, ensuring only relevant LangGraph workflow traces are sent to Reflight.
Setup
Configure your trigger.config.ts to use the filtered OTLP exporter:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { createFilteredExporter } from "@reflight/langgraph";

export default defineConfig({
  project: "your-project-id",
  runtime: "node",
  logLevel: "log",
  // Export only LangGraph workflow traces to Reflight (filtered to exclude other services)
  telemetry: {
    exporters: [
      createFilteredExporter({ apiKey: process.env.REFLIGHT_API_KEY! }),
    ],
  },
  // ... other config
});
```
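💡 Tip: The config above reads process.env.REFLIGHT_API_KEY with a non-null assertion, so make sure the variable is set in your Trigger.dev environment (for example via the dashboard's environment variables) before deploying.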
Using Tracer in Trigger.dev Tasks
In your Trigger.dev tasks, use the Tracer callback just like in regular workflows.
📝 Note: You don't need to call init() or shutdown() when using Trigger.dev - Trigger.dev handles the OpenTelemetry SDK setup automatically.
```ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { Tracer } from "@reflight/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { graph } from "../workflows/publishing-workflow";

export const publishingTask = task({
  id: "publishing-workflow",
  run: async (payload, { ctx }) => {
    logger.log("Running publishing workflow...");

    const input = {
      topic: "New feature launch: AI-powered content generation",
      audience: "product managers and marketing teams",
      tone: "professional",
      channel: "email",
      length: 150,
      ctaUrl: "https://example.com/learn-more",
    };

    // Build the user prompt from the request (simplified here; see the
    // workflow module for the full orchestration prompt).
    const userPrompt = `Write ${input.channel} copy about "${input.topic}" for ${input.audience}.`;

    // No init()/shutdown() needed - Trigger.dev handles OTel SDK
    // Tracer callback still creates the LangGraph spans
    const result = await graph.invoke(
      {
        messages: [new HumanMessage({ content: userPrompt })],
        llmCalls: 0,
      },
      {
        callbacks: [new Tracer()],
      }
    );

    logger.log("Workflow complete");
    return result;
  },
});
```
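To run the task, trigger it from your backend as usual; a minimal sketch (the import path is hypothetical, and the empty payload reflects that the example task builds its own input):

```ts
import { publishingTask } from "./trigger/publishing-task"; // hypothetical path

// Trigger the task; spans from the run are exported to Reflight
// by the filtered exporter configured in trigger.config.ts.
const handle = await publishingTask.trigger({});
console.log("Triggered run:", handle.id);
```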
Filtered OTLP Exporter
Trigger.dev is not a requirement: you can use createFilteredExporter in any OpenTelemetry setup where you want to export only LangGraph workflow traces to Reflight while filtering out traces from other services.
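For example, here is a minimal sketch of wiring it into a plain Node OpenTelemetry setup (this assumes createFilteredExporter returns a standard OTLP span exporter, as the Trigger.dev config above suggests):

```ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { createFilteredExporter } from "@reflight/langgraph";

// Register the filtered exporter for this process; only LangGraph
// workflow spans are forwarded to Reflight.
const sdk = new NodeSDK({
  traceExporter: createFilteredExporter({
    apiKey: process.env.REFLIGHT_API_KEY!,
  }),
});

sdk.start();
```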
What Gets Tracked
With the Tracer callback, Reflight automatically tracks:
- LLM Calls: Every model invocation with inputs, outputs, and metadata
- Tool Usage: All tool calls with parameters and results
- Workflow Execution: State transitions and message flow
- Performance Metrics: Latency, token usage, and costs
All of this data appears in your Reflight dashboard for analysis and debugging.