Vercel AI SDK¶
Logfire works well with AI applications built with the Vercel AI SDK. Track LLM calls, token usage, tool invocations, and response times across any supported model provider.
Node.js Scripts¶
For standalone Node.js scripts, use the @pydantic/logfire-node package combined with the Vercel AI SDK.
Installation¶
npm install @pydantic/logfire-node ai @ai-sdk/your-provider
Replace @ai-sdk/your-provider with the provider package you're using (e.g., @ai-sdk/openai, @ai-sdk/anthropic, @ai-sdk/google).
Setup¶
1. Create an instrumentation file
Create an instrumentation.ts file that configures Logfire:
import logfire from "@pydantic/logfire-node";

logfire.configure({
  token: "your-write-token",
  serviceName: "my-ai-app",
  serviceVersion: "1.0.0",
});
You can also use the LOGFIRE_TOKEN environment variable instead of passing the token directly.
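As a sketch, if LOGFIRE_TOKEN is set in your environment, the configure call can omit the token:

import logfire from "@pydantic/logfire-node";

// Assumes LOGFIRE_TOKEN is set in the environment (e.g. via a .env file or your
// deployment configuration); the remaining options are unchanged.
logfire.configure({
  serviceName: "my-ai-app",
  serviceVersion: "1.0.0",
});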
2. Import instrumentation first
In your main script, import the instrumentation file before other imports:
import "./instrumentation.ts";
import { generateText } from "ai";
import { yourProvider } from "@ai-sdk/your-provider";
// Your AI code here
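Putting the pieces together, a minimal script might look like the sketch below. It assumes the @ai-sdk/openai provider, a placeholder model name, and an ESM setup (e.g. run with tsx); swap in whichever provider and model you actually use. The experimental_telemetry option is covered in the Enabling Telemetry section below.

import "./instrumentation.ts";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // assumption: using the OpenAI provider

// Generate a short completion with tracing enabled so the call shows up in Logfire.
const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model name
  prompt: "Write a haiku about observability.",
  experimental_telemetry: { isEnabled: true },
});

console.log(text);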
Next.js¶
For Next.js applications, use Vercel's built-in OpenTelemetry support with environment variables pointing to Logfire.
Installation¶
npm install @vercel/otel @opentelemetry/api ai @ai-sdk/your-provider
Setup¶
1. Add environment variables
Add these to your .env.local file (or your deployment environment):
OTEL_EXPORTER_OTLP_ENDPOINT=https://logfire-api.pydantic.dev
OTEL_EXPORTER_OTLP_HEADERS='Authorization=your-write-token'
2. Create the instrumentation file
Create instrumentation.ts in your project root (or src directory if using that structure):
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "my-nextjs-app" });
}
This file must be in the root directory, not inside app or pages. See the Vercel instrumentation docs for more configuration options.
3. Enable telemetry on AI SDK calls
See the Enabling Telemetry section below.
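As an illustration, a route handler with telemetry enabled might look like the sketch below. It assumes the @ai-sdk/openai provider and a hypothetical app/api/generate/route.ts path; adapt it to your provider and routing.

// app/api/generate/route.ts (hypothetical path)
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // assumption: using the OpenAI provider

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Traces for this call are exported to Logfire via @vercel/otel.
  const { text } = await generateText({
    model: openai("gpt-4o-mini"), // placeholder model name
    prompt,
    experimental_telemetry: { isEnabled: true },
  });

  return Response.json({ text });
}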
Enabling Telemetry¶
The Vercel AI SDK emits its telemetry through OpenTelemetry. To capture traces in Logfire, add the experimental_telemetry option to your AI SDK function calls:
const result = await generateText({
  model: yourProvider("model-name"),
  prompt: "Your prompt here",
  experimental_telemetry: { isEnabled: true },
});
This option works with all AI SDK core functions:
- generateText / streamText
- generateObject / streamObject
- embed / embedMany
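For instance, a sketch of streamText in a Node.js script with telemetry enabled (the provider, model, and prompt are placeholders):

import { streamText } from "ai";
import { yourProvider } from "@ai-sdk/your-provider";

const result = streamText({
  model: yourProvider("model-name"),
  prompt: "Stream a short poem.",
  experimental_telemetry: { isEnabled: true },
});

// Print the stream as it arrives; the whole call is still traced as one span tree.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}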
Example: Text Generation with Tools¶
Here's a complete example showing text generation with a tool and telemetry enabled:
import { generateText, tool } from "ai";
import { yourProvider } from "@ai-sdk/your-provider";
import { z } from "zod";

const result = await generateText({
  model: yourProvider("model-name"),
  experimental_telemetry: { isEnabled: true },
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      inputSchema: z.object({
        location: z.string().describe("The location to get the weather for"),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: "What is the weather in San Francisco?",
});

console.log(result.text);
For Node.js scripts, remember to import your instrumentation file at the top of your entry point.
What You'll See in Logfire¶
When telemetry is enabled, Logfire captures a hierarchical trace of your AI operations:
- Parent span for the AI operation (e.g., ai.generateText)
- Provider call spans showing the actual LLM API calls
- Tool call spans for each tool invocation
The captured data includes:
- Prompts and responses
- Model information and provider details
- Token usage (input and output tokens)
- Timing information
- Tool call arguments and results
Advanced Options¶
The experimental_telemetry option accepts additional configuration:
experimental_telemetry: {
  isEnabled: true,
  functionId: "weather-lookup",
  metadata: {
    userId: "user-123",
    environment: "production",
  },
}
- functionId - A custom identifier that appears in span names, useful for distinguishing different use cases
- metadata - Custom key-value pairs attached to the telemetry spans
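In context, these options sit alongside the rest of the call. A sketch using the weather prompt from above (the provider and model are placeholders):

const result = await generateText({
  model: yourProvider("model-name"),
  prompt: "What is the weather in San Francisco?",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "weather-lookup",     // appears in span names in Logfire
    metadata: { userId: "user-123" }, // attached to the telemetry spans
  },
});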