Google's Gemini API now lets developers combine built-in tools and custom function calling in a single request. This reduces friction for multi-step workflows.

Combine built-in and custom tools in one request, cutting latency and code complexity for multi-step AI workflows.
Signal analysis
Here at Lead AI Dot Dev, we tracked this release because it directly addresses a common workflow bottleneck. The Gemini API previously required developers to choose between built-in tools (like code execution or search) and custom functions - not both simultaneously. The update removes this constraint, allowing you to invoke built-in tools and your own custom functions in the same API call.
This is a mechanical improvement, not a conceptual one. The Gemini API now handles tool orchestration at the framework level rather than pushing that responsibility to your application code. For teams building multi-step AI workflows, this means fewer conditional branches and less request overhead.
The practical consequence is simpler code and faster execution. If you're building a research agent that needs to search the web (built-in), fetch data from your database (custom), and execute code (built-in), you now submit one request instead of three or four. The model handles sequencing.
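That single-request shape can be sketched as a request body combining all three tool types. This is a minimal sketch of the payload structure: the field names follow the Gemini API's tool configuration at the time of writing but should be checked against current docs, and `fetch_orders` is a hypothetical custom function used for illustration.

```python
# Sketch of one Gemini generateContent request mixing built-in and custom
# tools. Field names (google_search, code_execution, function_declarations)
# are assumptions based on the public API docs; verify against current docs.

def build_research_request(question: str) -> dict:
    """Assemble a single request body combining three tools."""
    return {
        "contents": [{"role": "user", "parts": [{"text": question}]}],
        "tools": [
            {"google_search": {}},       # built-in: web search
            {"code_execution": {}},      # built-in: sandboxed code runs
            {"function_declarations": [  # custom: hypothetical DB fetch
                {
                    "name": "fetch_orders",
                    "description": "Fetch recent orders from our database.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "customer_id": {"type": "string"},
                        },
                        "required": ["customer_id"],
                    },
                }
            ]},
        ],
    }

request = build_research_request("Summarize order trends for customer 42.")
print(len(request["tools"]))  # three tool entries travel in one request
```

The point of the sketch is the `tools` list: search, code execution, and your own function declaration sit side by side in one call, and the model decides when to invoke each.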
This also reduces the surface area for failure. Fewer round-trips means fewer timeout possibilities and less state to manage between calls. Your error handling becomes more straightforward because you're not orchestrating tool composition at the application level.
For latency-sensitive applications - financial analysis, real-time data processing, customer support automation - this is material. You're shaving off network overhead on every request.
First, audit your existing Gemini implementations. If you're currently making sequential calls to handle tool composition, this update is a refactor opportunity. The effort is usually low - you're mainly consolidating your tool definitions and removing orchestration logic.
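The consolidation step above can be sketched as a before/after comparison. This is illustrative only: the dicts mirror the tool-config shape assumed earlier, and `fetch_orders` is a hypothetical custom function.

```python
# Hypothetical refactor sketch: before the update, tool composition lived in
# application code as sequential requests; after, one tool list replaces it.

def tool_lists_before() -> list[list[dict]]:
    """Old shape: one tool list per request, sequenced by your own code."""
    return [
        [{"google_search": {}}],                                   # request 1
        [{"function_declarations": [{"name": "fetch_orders"}]}],   # request 2
    ]

def tool_list_after() -> list[dict]:
    """New shape: one consolidated tool list, sequenced by the model."""
    return [
        {"google_search": {}},
        {"function_declarations": [{"name": "fetch_orders"}]},
    ]

# The refactor is mostly mechanical: merge the per-request lists into one
# and delete the branching logic that decided which request to send next.
```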
Second, test the update in your development environment before rolling to production. Verify that the model is composing tools in the order you expect and that your custom functions are receiving the right context from built-in tool outputs. The behavior should be transparent, but verify it matches your assumptions.
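One way to make that verification testable is a small dispatch table for the function-calling round trip: feed it a canned function call and assert on the response shape before wiring it to live requests. This is a sketch under assumptions - the handler name `fetch_orders` is hypothetical, and the exact response-part field names should be checked against the Gemini function-calling docs.

```python
# Minimal dispatcher sketch for the function-calling round trip: when the
# model emits a function call for one of your custom tools, route it to a
# local handler and wrap the result for the follow-up turn.

HANDLERS = {
    # Hypothetical handler; replace with your real database fetch.
    "fetch_orders": lambda args: {"orders": [], "customer_id": args["customer_id"]},
}

def dispatch(function_call: dict) -> dict:
    """Run the named handler and wrap its result as a response part."""
    name = function_call["name"]
    result = HANDLERS[name](function_call.get("args", {}))
    return {"function_response": {"name": name, "response": result}}

# In development, exercise it with a canned call and check the context
# your function actually received:
reply = dispatch({"name": "fetch_orders", "args": {"customer_id": "42"}})
print(reply["function_response"]["name"])
```

Keeping the dispatcher separate from the API client makes it easy to assert, in tests, that each custom function received the arguments you expected from the preceding built-in tool outputs.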
Third, consider this when designing new features. If you're building something that requires multiple tool invocations (data retrieval, transformation, analysis), design it as a single tool set from the start. This avoids the technical debt of discovering later that you could have simplified the architecture.