Wrap your AI client with Braintrust to trace all LLM requests. This approach gives you maximum flexibility and control, and works with any integration. Run the code below, and all API calls are automatically logged to Braintrust.
1. Sign up
If you’re new to Braintrust, sign up free at braintrust.dev.

2. Get API keys
Create API keys for:

- Braintrust
- Your AI provider or framework (OpenAI, Anthropic, Gemini, etc.)
3. Install SDKs
Install the Braintrust SDK and the AI provider SDK for your programming language.

4. Trace LLM calls
Make a simple LLM request and see it automatically traced in Braintrust. Initialize Braintrust and wrap your OpenAI client:

- TypeScript & Python: use the `wrapOpenAI`/`wrap_openai` wrapper functions
- Go: use the tracing middleware with the OpenAI client
- Ruby: use `Braintrust::Trace::OpenAI.wrap` to wrap the OpenAI client
- Java: use the tracing interceptor with the OpenAI client
- C#: use `BraintrustOpenAI.WrapOpenAI()` to wrap the OpenAI client
quickstart.ts
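A minimal sketch of what `quickstart.ts` could contain, assuming the TypeScript SDK's `initLogger` and `wrapOpenAI` helpers; the model name is a placeholder, and `BRAINTRUST_API_KEY` and `OPENAI_API_KEY` are expected in the environment:

```typescript
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

// Send logs to the "Tracing quickstart" project
// (reads BRAINTRUST_API_KEY from the environment).
initLogger({ projectName: "Tracing quickstart" });

// Wrap the client so every request made through it is traced automatically.
const client = wrapOpenAI(new OpenAI());

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder: use any model your provider supports
    messages: [{ role: "user", content: "What is tracing?" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```

Running this once should produce a single trace in the Logs view of the project.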
5. View traces
In the Braintrust UI, go to the “Tracing quickstart” project and select Logs. You’ll see a trace for each request. Click into any trace to see:

- Complete input prompt and model output
- Token counts, latency, and cost
- Model configuration (temperature, max tokens, etc.)
- Request and response metadata
Troubleshoot
Not seeing traces in the UI?
**Check your API key:** Make sure it’s set and starts with `sk-`.

**Verify the project name:** Check that you’re looking at the correct project in the UI.

**Look for errors:** Check your console output for any error messages from Braintrust. Common issues:

- Invalid API key
- Network connectivity issues
- Firewall blocking requests to `api.braintrust.dev`
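As a quick programmatic sanity check, a sketch like the following can catch a missing or malformed key before the logger is initialized (the `checkBraintrustKey` helper is hypothetical, not part of the SDK):

```typescript
// Hypothetical helper (not part of the Braintrust SDK): verifies the key
// is present and has the expected sk- prefix before initialization.
function checkBraintrustKey(key: string | undefined): void {
  if (!key) {
    throw new Error("BRAINTRUST_API_KEY is not set");
  }
  if (!key.startsWith("sk-")) {
    throw new Error("API key does not start with sk-");
  }
}

// Typically called with process.env.BRAINTRUST_API_KEY at startup.
checkBraintrustKey("sk-example");
```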
Traces look incomplete or missing data?
**Check wrapper coverage:** Make sure you’re wrapping the client before making API calls. Calls made with an unwrapped client won’t be traced.

**Verify async/await:** If using async functions, ensure you’re awaiting API calls properly. Unawaited promises may not be fully traced.

**Check for errors:** If your LLM call throws an error, the trace may be incomplete. Check your logs for error messages.
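The async/await point can be illustrated without any network calls; `slowCall` below is a stand-in for a wrapped LLM request, and the fire-and-forget version shows how a result can resolve after the surrounding function (and its span) has already finished:

```typescript
// Stand-in for an LLM request that resolves asynchronously.
function slowCall(): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve("done"), 10));
}

async function withAwait(): Promise<string | undefined> {
  let result: string | undefined;
  await slowCall().then((r) => (result = r)); // awaited: result is captured
  return result;
}

async function withoutAwait(): Promise<string | undefined> {
  let result: string | undefined;
  slowCall().then((r) => (result = r)); // fire-and-forget: still pending
  return result; // returns before the call resolves, so the output is lost
}
```

The same timing applies to traces: an unawaited request may complete after the span has been closed and flushed.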
Need help?
- Join our Discord
- Email us at [email protected]
- Use the Loop feature in the Braintrust UI
Next steps
- Explore the full Braintrust workflow
- Go deeper with tracing:
  - Explore integrations with AI providers, SDKs, and developer tools
  - Add custom tracing for application logic
  - Capture user feedback like thumbs up/down
  - Analyze logs for patterns and issues