# LangGraph Example with Reflight
This example demonstrates how to integrate Reflight with LangGraph workflows to track and observe your AI operations.
## Installation

Install the Reflight SDK for LangChain:
```bash
# Using npm
npm install @reflight/langchain

# Using pnpm
pnpm add @reflight/langchain

# Using yarn
yarn add @reflight/langchain

# Using bun
bun add @reflight/langchain
```

## Setup
- Get your Reflight API key from your Reflight dashboard
- Set it as an environment variable:
```bash
export REFLIGHT_API_KEY="your-api-key-here"
```

## Usage
The Reflight SDK provides three main entry points for LangGraph workflows: `init()`, the `Tracer` callback, and `shutdown()`.
### 1. Initialize Reflight

Call `init()` at the start of your application to set up Reflight tracking:
```typescript
import { init } from "@reflight/langchain";

init(process.env.REFLIGHT_API_KEY);
```

### 2. Use the Tracer Callback
Add the `Tracer` callback to your LangGraph workflow's `RunnableConfig` to automatically track all operations:
```typescript
import { Tracer } from "@reflight/langchain";
import { runPublishingWorkflow } from "./workflows/publishing-workflow";

const result = await runPublishingWorkflow(
  {
    topic: "New feature launch",
    audience: "developers",
    channel: "email",
  },
  {
    callbacks: [new Tracer()], // Add Reflight tracer
  }
);
```

The `Tracer` automatically captures:
- All LLM calls
- Tool invocations
- Workflow state transitions
- Message flow
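The snippet above imports `runPublishingWorkflow` from a local module that isn't shown in this example. For reference, here is a minimal sketch of what such a LangGraph workflow might look like; the state fields, model choice, and node logic are illustrative assumptions, but note how each node forwards its `RunnableConfig` so the `Tracer` callback reaches every nested LLM call:

```typescript
// workflows/publishing-workflow.ts -- illustrative sketch, not the actual module
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import type { RunnableConfig } from "@langchain/core/runnables";

// Hypothetical workflow state; the real module may define more fields.
const State = Annotation.Root({
  topic: Annotation<string>,
  audience: Annotation<string>,
  channel: Annotation<string>,
  draft: Annotation<string>,
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

async function draftNode(state: typeof State.State, config?: RunnableConfig) {
  // Forwarding `config` is what lets callbacks (such as Reflight's Tracer)
  // attach to this nested LLM call.
  const response = await model.invoke(
    [["user", `Draft a ${state.channel} post about "${state.topic}" for ${state.audience}.`]],
    config
  );
  return { draft: response.content as string };
}

const graph = new StateGraph(State)
  .addNode("draft", draftNode)
  .addEdge(START, "draft")
  .addEdge("draft", END)
  .compile();

export async function runPublishingWorkflow(
  input: { topic: string; audience: string; channel: string },
  config?: RunnableConfig
) {
  // LangGraph propagates `config.callbacks` to every node and child run.
  return graph.invoke(input, config);
}
```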
### 3. Shutdown (Optional)

Call `shutdown()` when your application is done to ensure all traces are flushed:
```typescript
import { shutdown } from "@reflight/langchain";

// At the end of your script
await shutdown();
```
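Since `shutdown()` is what flushes buffered traces, one pattern worth considering is a `try`/`finally` wrapper so traces are flushed even when the workflow throws. A sketch using only the functions shown above:

```typescript
import { init, shutdown, Tracer } from "@reflight/langchain";
import { runPublishingWorkflow } from "./workflows/publishing-workflow";

init(process.env.REFLIGHT_API_KEY);

try {
  await runPublishingWorkflow(
    { topic: "New feature launch", audience: "developers", channel: "email" },
    { callbacks: [new Tracer()] }
  );
} finally {
  // Flush pending traces even if the workflow throws.
  await shutdown();
}
```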
## Complete Example

Here's a complete example showing all three steps:
```typescript
import { init, shutdown, Tracer } from "@reflight/langchain";
import { runPublishingWorkflow, type PublishingRequest } from "./workflows/publishing-workflow";

// 1. Initialize Reflight
init(process.env.REFLIGHT_API_KEY);

// 2. Run your workflow with Tracer
const request: PublishingRequest = {
  topic: "New feature launch: AI-powered content generation",
  audience: "product managers and marketing teams",
  tone: "professional",
  channel: "email",
  length: 150,
  ctaUrl: "https://example.com/learn-more",
};

const result = await runPublishingWorkflow(request, {
  callbacks: [new Tracer()], // Reflight automatically tracks everything
});

// 3. Shutdown when done
await shutdown();
```

## What Gets Tracked
With the `Tracer` callback, Reflight automatically tracks:

- **LLM Calls**: Every model invocation with inputs, outputs, and metadata
- **Tool Usage**: All tool calls with parameters and results
- **Workflow Execution**: State transitions and message flow
- **Performance Metrics**: Latency, token usage, and costs
All of this data appears in your Reflight dashboard for analysis and debugging.
Running the Example
# Make sure you have your API keys set
export OPENAI_API_KEY="your-openai-key"
export REFLIGHT_API_KEY="your-reflight-key"
# Run the workflow
bun run scripts/run-workflow.ts
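The `scripts/run-workflow.ts` entry point isn't shown here; if you're recreating the example from scratch, a minimal version could wrap the complete example above with a guard for the missing API key, e.g.:

```typescript
// scripts/run-workflow.ts -- assumed entry point, shown as a sketch
import { init, shutdown, Tracer } from "@reflight/langchain";
import { runPublishingWorkflow } from "../workflows/publishing-workflow";

// Fail fast with a clear message instead of sending unauthenticated traces.
const apiKey = process.env.REFLIGHT_API_KEY;
if (!apiKey) {
  throw new Error("REFLIGHT_API_KEY is not set");
}

init(apiKey);

const result = await runPublishingWorkflow(
  { topic: "New feature launch", audience: "developers", channel: "email" },
  { callbacks: [new Tracer()] }
);
console.log(result);

// Flush traces before the process exits.
await shutdown();
```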