Together AI expanded fine-tuning with native tool calling, reasoning, and vision support. Builders can now train 100B+ parameter models at up to 6x higher throughput, with upfront cost and ETA estimates.

Together AI's fine-tuning expansion lets builders customize tool calling, reasoning, and vision models at scale with predictable costs, cutting the operational friction that previously blocked fine-tuning adoption.
Signal analysis
Here at Lead AI Dot Dev, we tracked Together AI's fine-tuning expansion closely. The platform now natively supports tool calling, reasoning models, and vision-language models in its fine-tuning pipeline. This means builders can take complex model behaviors - function calling, chain-of-thought reasoning, and multimodal perception - and customize them for specific tasks without rebuilding infrastructure.
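To make the tool-calling piece concrete, here is what one training example might look like. This sketch assumes the OpenAI-style chat schema with `tool_calls` that many fine-tuning pipelines accept; the exact field names Together expects are an assumption here, so check their data-format docs before uploading.

```python
import json

# One training example for tool-calling fine-tuning. The schema below
# (messages with role/content/tool_calls) is an assumed convention,
# not confirmed against Together AI's documentation.
example = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "Paris"}),
                },
            }],
        },
        {"role": "tool", "content": json.dumps({"temp_c": 18})},
        {"role": "assistant", "content": "It's 18 degrees C in Paris right now."},
    ]
}

# Fine-tuning data is typically uploaded as JSONL: one example per line.
line = json.dumps(example)
print(line[:60])
```

Encoding the full loop (request, tool call, tool result, final answer) in each example is what lets the fine-tuned model learn your calling patterns end to end.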
The throughput gains matter operationally. Together AI reports up to 6x higher training throughput compared to previous versions. They've also added support for training models with 100B+ parameters, breaking into the scale territory previously reserved for proprietary services or custom setups. Cost and ETA estimates are now baked into the platform, letting you forecast training spend and timeline before committing resources.
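The kind of upfront estimate the platform now surfaces can be approximated with back-of-envelope math. The prices and throughput figures below are hypothetical placeholders, not Together AI's actual rates.

```python
def estimate_finetune(tokens_per_epoch, epochs, price_per_m_tokens, tokens_per_sec):
    """Rough cost and ETA for a fine-tuning run.

    All rate inputs are illustrative assumptions; substitute the
    provider's published pricing and your observed throughput.
    """
    total_tokens = tokens_per_epoch * epochs
    cost_usd = total_tokens / 1_000_000 * price_per_m_tokens
    eta_hours = total_tokens / tokens_per_sec / 3600
    return cost_usd, eta_hours

cost, eta = estimate_finetune(
    tokens_per_epoch=50_000_000,   # 50M tokens of training data
    epochs=3,
    price_per_m_tokens=2.50,       # hypothetical $/1M training tokens
    tokens_per_sec=40_000,         # hypothetical training throughput
)
print(f"~${cost:,.0f} over ~{eta:.2f} h")
```

A 6x throughput gain shows up directly in the `tokens_per_sec` term, shrinking the ETA by the same factor at unchanged token pricing.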
If you're building agents or systems that rely on function calling, this update removes a major constraint. Previously, fine-tuning tool-calling behavior required custom workflows or accepting pre-trained model limitations. Now you can encode your exact tool set and calling patterns directly into a fine-tuned model, improving consistency and reducing hallucination on function selection.
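One way to quantify that consistency gain is to check model outputs against your allowed tool set before and after fine-tuning. This helper is illustrative, not part of any SDK, and assumes the same `tool_calls` message shape as above.

```python
# Hypothetical allowed tool set for an agent.
ALLOWED_TOOLS = {"get_weather", "search_orders", "create_ticket"}

def invalid_tool_calls(assistant_message: dict) -> list:
    """Return names of tool calls that reference a function outside the
    allowed set -- a simple metric for function-selection hallucination."""
    calls = assistant_message.get("tool_calls") or []
    return [
        c["function"]["name"]
        for c in calls
        if c["function"]["name"] not in ALLOWED_TOOLS
    ]

msg = {"tool_calls": [
    {"function": {"name": "get_weather", "arguments": "{}"}},
    {"function": {"name": "lookup_wether", "arguments": "{}"}},  # hallucinated name
]}
print(invalid_tool_calls(msg))  # ['lookup_wether']
```

Running a check like this over an eval set gives you a before/after hallucination rate to justify the fine-tuning spend.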
The reasoning support matters if your product depends on interpretability or multi-step decision-making. You can now fine-tune models that explicitly show their work - useful for compliance, debugging, or building user-facing explainability. Pair this with the vision additions and you're looking at a single fine-tuning interface for agents that see, reason, and act.
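For the reasoning case, "showing their work" usually means training targets that separate an explicit trace from the final answer. The `<think>...</think>` delimiter below is a common convention for reasoning models, but the exact format Together expects is an assumption; confirm against their docs.

```python
import json

def reasoning_example(question, steps, answer):
    """Build one training example whose assistant target carries an
    explicit reasoning trace ahead of the final answer. The delimiter
    format is an assumed convention, not a documented requirement."""
    trace = "\n".join(f"- {s}" for s in steps)
    target = f"<think>\n{trace}\n</think>\n{answer}"
    return {"messages": [
        {"role": "user", "content": question},
        {"role": "assistant", "content": target},
    ]}

ex = reasoning_example(
    "A train travels 120 km in 1.5 h. What is its average speed?",
    ["distance = 120 km", "time = 1.5 h", "speed = 120 / 1.5 = 80 km/h"],
    "80 km/h",
)
print(json.dumps(ex)[:60])
```

Keeping the trace machine-delimited is what makes it usable downstream: compliance logging and user-facing explanations can strip or surface it without parsing free text.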
Cost predictability changes your project economics. With upfront ETA and cost estimates, you can now make go/no-go decisions on fine-tuning projects with actual numbers instead of guesses. This unlocks fine-tuning for cost-sensitive builders who previously couldn't justify the experimentation cycle.
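A go/no-go gate on those upfront numbers can be as simple as a budget-and-deadline check with a safety margin, since estimates can drift under load. The thresholds and margin here are illustrative assumptions.

```python
def go_no_go(est_cost_usd, est_hours, budget_usd, deadline_hours, margin=1.25):
    """Approve a fine-tuning run only if the padded estimates fit both
    the budget and the deadline. The 25% margin is an arbitrary choice."""
    return (est_cost_usd * margin <= budget_usd
            and est_hours * margin <= deadline_hours)

print(go_no_go(375.0, 1.1, budget_usd=500, deadline_hours=4))  # True
print(go_no_go(375.0, 1.1, budget_usd=400, deadline_hours=4))  # False: over budget
```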
Together AI is competing directly with Anthropic's fine-tuning, OpenAI's custom training, and infrastructure platforms like Lambda Labs or Modal. The tool calling and reasoning support closes a gap: most platforms offer fine-tuning, but few make it safe and straightforward for complex model behaviors. Together's transparent pricing and throughput gains make it the logical choice for builders who need control without operational overhead.
The 100B+ support is strategic. It signals Together is attacking the scale problem that kept many teams dependent on proprietary vendors. Combined with their open-source partnerships and API-first design, this positions them as the independent alternative for production fine-tuning at meaningful scale.
Watch for two things: whether the vision support matches the quality of dedicated vision fine-tuning services, and whether the cost estimates actually hold up under load. If both are solid, Together becomes a genuine platform play rather than a supplementary tool. That's the inflection point. Thank you for listening to Lead AI Dot Dev.