Braintrust's latest SDK release adds native Agent tool call tracing and experiment parameter persistence, reducing debugging friction for AI SDK users.

Reduce debugging latency and configuration drift by embedding tool tracing and parameter versioning directly into your experiment workflow.
Signal analysis
Here at Lead AI Dot Dev, we tracked the release of Braintrust JavaScript SDK v3.5.0, and the headline feature is clear: Agent tool call tracing for AI SDK v5/v6 now has automatic instrumentation built in. This means when you're using Vercel's AI SDK, your tool calls get traced automatically without boilerplate setup. The second addition is the ability to attach saved parameters directly to experiments, letting you version your prompt configurations alongside your test results.
For developers, this is about reducing setup overhead. Previously, capturing tool execution traces required manual instrumentation or middleware configuration. Now it's handled by the SDK automatically when you're on the supported AI SDK versions. This changes the debugging workflow - you go from 'I need to add tracing code' to 'tracing is just there.'
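For a sense of what that eliminated boilerplate looked like, here is a hand-rolled tool-call tracer in plain TypeScript. This is an illustrative sketch only: the `ToolSpan` shape and `traced` helper are hypothetical stand-ins for the kind of manual wrapping developers wrote themselves, not the Braintrust API, which now performs this instrumentation automatically on AI SDK v5/v6.

```typescript
// Hypothetical manual instrumentation -- the pattern the SDK update removes.
interface ToolSpan {
  tool: string;
  input: unknown;
  output?: unknown;
  error?: string;
  durationMs: number;
}

const spans: ToolSpan[] = [];

// Wrap any async tool function so every call records a span,
// capturing both successful outputs and thrown errors.
function traced<I, O>(tool: string, fn: (input: I) => Promise<O>) {
  return async (input: I): Promise<O> => {
    const start = Date.now();
    try {
      const output = await fn(input);
      spans.push({ tool, input, output, durationMs: Date.now() - start });
      return output;
    } catch (err) {
      spans.push({ tool, input, error: String(err), durationMs: Date.now() - start });
      throw err;
    }
  };
}
```

Multiply this wrapper across every tool in an agent and you get the setup overhead the release removes.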
The elimination of manual tracing setup accelerates the debugging cycle. When a tool call fails or produces unexpected results, you now have the execution trace already in your experiment record. This collapses what used to be a multi-step investigation - checking logs, correlating timestamps, reconstructing the call sequence - into a direct view of what happened.
Parameter attachment changes how you manage experiments. Instead of storing your prompts separately from your test results, they're now bundled. This matters because it forces version alignment: you always know exactly which prompt configuration produced which result. No more searching through git history to find 'what version was this test using?' The parameter becomes part of the experiment's immutable record.
The combination creates a cleaner observability posture. Your experiment record becomes self-contained - it includes the parameters, the execution traces, and the results. This is valuable for teams doing iterative prompt optimization where configuration state matters as much as performance metrics.
If you're already on Braintrust and using AI SDK v5 or v6, update the Braintrust SDK to v3.5.0 and tracing should activate automatically. The primary consideration is that this only applies to those AI SDK versions: teams still on AI SDK v3 or v4 won't see automatic tracing and should plan migration timelines if tool observability is a priority.

For parameter attachment, the workflow is straightforward: pass your saved parameters to the experiment constructor. This is particularly useful if you're running A/B tests on different prompt versions or configurations. The saved parameters become queryable, letting you slice experiment results by configuration after the fact.
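Once parameters travel with each run, slicing by configuration is an ordinary group-by. A minimal sketch, assuming a hypothetical `Run` shape where each run carries its `promptVersion` parameter and a numeric score:

```typescript
// Hypothetical run shape -- the real queryable record comes from Braintrust.
interface Run {
  params: { promptVersion: string };
  score: number;
}

// Average score per prompt version: the "slice by configuration" query.
function meanScoreByPromptVersion(runs: Run[]): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const run of runs) {
    const scores = buckets.get(run.params.promptVersion) ?? [];
    scores.push(run.score);
    buckets.set(run.params.promptVersion, scores);
  }
  const means = new Map<string, number>();
  for (const [version, scores] of buckets) {
    means.set(version, scores.reduce((a, b) => a + b, 0) / scores.length);
  }
  return means;
}
```

In practice the Braintrust UI or API would run this aggregation for you; the sketch just shows why attached parameters make the query trivial.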
One operational detail: ensure your AI SDK version in package.json aligns with what Braintrust is optimized for. Version mismatches won't break things, but you'll miss the automatic instrumentation. Verify your lockfile and consider a minor version bump as part of the adoption cycle.
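An adoption-time sanity check might look like the following sketch. The supported-majors list reflects the v5/v6 support described above; the version string would normally come from your package.json or lockfile rather than being hard-coded, and the parsing here is a simplification of full semver range handling.

```typescript
// AI SDK majors covered by the automatic instrumentation, per this release.
const SUPPORTED_AI_SDK_MAJORS = [5, 6];

// Extract the major version from a dependency specifier like "^5.0.3",
// "~6.1.0", or "5.2.1" and check it against the supported list.
function isAutoTraced(aiSdkVersionSpec: string): boolean {
  const major = parseInt(aiSdkVersionSpec.replace(/^[^\d]*/, ""), 10);
  return SUPPORTED_AI_SDK_MAJORS.includes(major);
}
```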
Braintrust is moving toward automated observability by default rather than optional instrumentation. This aligns with where the market is heading - developers increasingly expect tracing to be embedded in SDKs rather than bolted on. The move also signals confidence in the AI SDK v5/v6 ecosystem, which is now mature enough to become the instrumentation baseline.
The parameter versioning feature directly addresses a common operational pain point in LLM development: prompt management. Tools like LangSmith and Phoenix focus on execution tracing; Braintrust is now explicitly bundling configuration management with it. This is a differentiation point - not just 'what happened,' but 'what was running when it happened.'
For builders evaluating observability solutions, this release emphasizes what to look for: automatic instrumentation coverage (not manual), version alignment with your SDK choices, and integration of configuration state alongside traces. When choosing tools, verify automatic coverage for your specific SDK versions and ask whether configuration versioning is built in or an afterthought. Thank you for listening, from Lead AI Dot Dev.