Google's Gemini 3 now lets you combine built-in tools and custom function calling in a single request. Here's what builders need to know to simplify their tool orchestration.

Combine built-in and custom tools in one request to eliminate sequential tool calls and let Gemini handle routing logic.
Signal analysis
Here at Lead AI Dot Dev, we tracked this release closely because it directly addresses a friction point developers have faced: choosing between Google's pre-built tools or writing custom functions. The new Gemini 3 implementation removes that forced choice. You can now invoke both built-in tools (like Google Search or code execution) and your own custom functions in the same API call, letting the model determine which tool to use based on context.
This is an efficiency play, not a feature expansion. Previously, builders had to structure requests around tool availability - either commit to the SDK's built-in options or build custom wrappers. The combination capability means less request overhead and cleaner routing logic on your end.
The practical lift here is minimal if you're already using function calling. You define your custom functions as before, but now you can pass them alongside Google's built-in tools in a single tools parameter. The API documentation has been updated to show the combined structure.
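As a sketch of that combined structure: the request body below mixes a built-in tool with a custom function declaration in one `tools` list. Field names follow the public Gemini REST API shape; `get_inventory` and its schema are hypothetical stand-ins for your own function.

```python
# Sketch of a single Gemini request body that passes a built-in tool
# (google_search) alongside a custom function declaration.
# `get_inventory` is a hypothetical domain function, not a real API.

def build_request(prompt: str) -> dict:
    custom_function = {
        "name": "get_inventory",
        "description": "Look up current stock for a SKU in the warehouse system.",
        "parameters": {
            "type": "object",
            "properties": {
                "sku": {"type": "string", "description": "Product SKU."},
            },
            "required": ["sku"],
        },
    }
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "tools": [
            {"google_search": {}},                         # built-in tool
            {"function_declarations": [custom_function]},  # custom function
        ],
    }

request = build_request("Is SKU A-1042 in stock, and what do reviews say about it?")
```

From here the model routes: a grounding query goes to search, while a stock lookup comes back as a function call for your code to execute.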
The critical decision point: audit your current tool setup. If you have multiple function-calling requests happening sequentially to work around tool limitations, this is your consolidation opportunity. Map out which built-in tools solve standard problems (search, computation, etc.) and which require custom logic. Then restructure your prompts to let Gemini handle the routing.
One consideration: response latency can increase when the model has more tools to choose between. Test your specific use cases. A chatbot with 15 custom functions plus 3 built-in tools may see latency differences compared to the previous approach. Profile before and after if sub-100ms response times matter for your application.
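A minimal before/after profiling harness might look like this. It is deliberately generic: `old_sequential_request` and `combined_tools_request` in the commented usage are hypothetical names for your own wrappers around the Gemini call.

```python
import statistics
import time

def profile_latency(call, n: int = 20) -> dict:
    """Time n invocations of `call` and return p50/p95 in milliseconds.

    `call` is any zero-argument callable - in practice, a closure
    around your actual Gemini request.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    p50 = statistics.median(samples)
    p95 = samples[min(len(samples) - 1, int(0.95 * len(samples)))]
    return {"p50_ms": p50, "p95_ms": p95}

# Hypothetical usage against your own request wrappers:
# before = profile_latency(lambda: old_sequential_request(prompt))
# after = profile_latency(lambda: combined_tools_request(prompt))
```

Comparing p95 rather than just the median matters here, since tool-selection overhead tends to show up in the tail.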
Builders should think about this feature in two scenarios: greenfield projects and legacy migrations. For new work, the path is clear - design your tool set as a unified registry. Built-in tools for standard operations, custom functions for domain logic. That's your default starting point.
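One way to sketch that unified registry, assuming the Gemini REST `tools` shape: built-in tools are flagged so the request builder emits them as native entries, while custom functions become declarations. All names here (`lookup_order_status`, the registry layout) are hypothetical design choices, not a prescribed API.

```python
# A unified tool registry: one place that holds both built-in tool
# names and custom function handlers, rendered into a single `tools`
# list per request. Names and layout are illustrative.

BUILT_IN = {"google_search", "code_execution"}

def lookup_order_status(order_id: str) -> dict:
    """Return shipping status for an order (stubbed domain logic)."""
    return {"order_id": order_id, "status": "shipped"}

REGISTRY = {
    "google_search": None,  # handled entirely by Gemini, no local handler
    "lookup_order_status": lookup_order_status,
}

def to_tools_param(registry: dict) -> list:
    """Render the registry as the `tools` list for one request."""
    tools = []
    declarations = []
    for name, handler in registry.items():
        if name in BUILT_IN:
            tools.append({name: {}})
        else:
            declarations.append({
                "name": name,
                "description": (handler.__doc__ or name).strip(),
            })
    if declarations:
        tools.append({"function_declarations": declarations})
    return tools
```

The payoff is that adding a tool, built-in or custom, is one registry entry rather than a new branch in your request-building code.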
Legacy systems are more complex. If you've built abstractions around the old single-tool-category limitation, weigh the refactoring cost against the latency/request savings. Sometimes keeping your existing structure is the right call, especially if it's battle-tested in production. This update is powerful but not mandatory.
One architectural shift worth considering: move tool composition logic out of your application code and into the prompt itself. Instead of orchestrating which tool to call before sending to Gemini, let the model decide. This works best when your custom functions are well-documented in function descriptions and when tool interactions are straightforward.
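With the model doing the routing, application code can shrink to a thin dispatcher over whatever function call the response contains. The `name`/`args` shape below follows Gemini's function-call response format; `get_weather` and its stub body are hypothetical.

```python
# Minimal dispatcher: the model decides which tool to invoke and
# returns a function call; application code just routes it.
# `get_weather` is a hypothetical custom function with stubbed logic.

def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny"}  # stubbed domain logic

HANDLERS = {"get_weather": get_weather}

def dispatch(function_call: dict) -> dict:
    """Route a model-issued function call to the matching handler."""
    name = function_call["name"]
    if name not in HANDLERS:
        raise ValueError(f"Model requested unknown tool: {name}")
    return HANDLERS[name](**function_call.get("args", {}))

# Shape of the function-call part when the model picks a custom tool:
part = {"name": "get_weather", "args": {"city": "Austin"}}
result = dispatch(part)  # returned to the model as a function response
```

This pattern holds up only when function descriptions are precise enough for the model to choose correctly, which is why well-documented declarations are the prerequisite the paragraph above names.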
This release reflects Google's push toward a unified tool system rather than fragmented API categories. The company is moving toward 'tool orchestration as a solved problem' at the SDK level. That's a strategic shift - it positions Gemini's function-calling capability as a true alternative to building your own tool management layer.
The timing matters. As Claude and other models add competing function-calling features, Google is consolidating developer experience. Built-in plus custom in one call is table stakes for serious AI platforms now. This update is catching up to where the competitive landscape expects you to be.
Thank you for listening, Lead AI Dot Dev