Anthropic¶
Logfire supports instrumenting calls to Anthropic with one extra line of code.
```python
import anthropic
import logfire

client = anthropic.Anthropic()

logfire.configure()
logfire.instrument_anthropic(client)  # (1)!

response = client.messages.create(
    max_tokens=1000,
    model='claude-3-haiku-20240307',
    system='You are a helpful assistant.',
    messages=[{'role': 'user', 'content': 'Please write me a limerick about Python logging.'}],
)
print(response.content[0].text)
```
- If you don't have access to the client instance, you can pass a class (e.g. `logfire.instrument_anthropic(anthropic.Anthropic)`), or pass no arguments at all (i.e. `logfire.instrument_anthropic()`) to instrument both the `anthropic.Anthropic` and `anthropic.AsyncAnthropic` classes.

For more information, see the `instrument_anthropic()` API reference.
With that you get:
- a span around the call to Anthropic which records duration and captures any exceptions that might occur
- Human-readable display of the conversation with the agent
- details of the response, including the number of tokens used
Methods covered¶
The following Anthropic methods are covered:
All methods are covered with both `anthropic.Anthropic` and `anthropic.AsyncAnthropic`.
Streaming Responses¶
When instrumenting streaming responses, Logfire creates two spans — one around the initial request and one around the streamed response.
Here we also use Rich's `Live` and `Markdown` types to render the response in the terminal in real time.
```python
import anthropic
import logfire
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown

client = anthropic.AsyncAnthropic()

logfire.configure()
logfire.instrument_anthropic(client)


async def main():
    console = Console()
    with logfire.span('Asking Anthropic to write some code'):
        response = client.messages.stream(
            max_tokens=1000,
            model='claude-3-haiku-20240307',
            system='Reply in markdown only.',
            messages=[{'role': 'user', 'content': 'Write Python to show a tree of files 🤞.'}],
        )
        content = ''
        with Live('', refresh_per_second=15, console=console) as live:
            async with response as stream:
                async for chunk in stream:
                    if chunk.type == 'content_block_delta':
                        content += chunk.delta.text
                        live.update(Markdown(content))


if __name__ == '__main__':
    import asyncio

    asyncio.run(main())
```
Shows up like this in Logfire: