GPT-3.5 Turbo fine-tuning and API updates

OpenAI Blog Products

Summary

OpenAI has released fine-tuning capabilities for GPT-3.5 Turbo, allowing developers to customize models for specific use cases with improved performance, steerability, and output formatting. The update enables fine-tuned GPT-3.5 Turbo to match GPT-4 performance on certain tasks while reducing prompt sizes by up to 90%.

Developers can now bring their own data to customize GPT-3.5 Turbo for their use cases.

Cached at: 04/20/26, 02:54 PM

# GPT-3.5 Turbo fine-tuning and API updates

Source: [https://openai.com/index/gpt-3-5-turbo-fine-tuning-and-api-updates/](https://openai.com/index/gpt-3-5-turbo-fine-tuning-and-api-updates/)

Fine-tuning for GPT-3.5 Turbo is now available, with fine-tuning for GPT-4 coming this fall. This update gives developers the ability to customize models that perform better for their use cases and to run these custom models at scale. Early tests have shown that a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks. As with all our APIs, data sent in and out of the fine-tuning API is owned by the customer and is [not used by OpenAI](https://openai.com/api-data-privacy/), or any other organization, to train other models.

Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users. With this launch, developers can now run supervised fine-tuning to make this model perform better for their use cases. In our private beta, fine-tuning customers have been able to meaningfully improve model performance across common use cases, such as:

- **Improved steerability:** Fine-tuning allows businesses to make the model follow instructions better, such as keeping outputs terse or always responding in a given language. For instance, developers can use fine-tuning to ensure that the model always responds in German when prompted to use that language.
- **Reliable output formatting:** Fine-tuning improves the model's ability to consistently format responses, a crucial aspect for applications that demand a specific response format, such as code completion or composing API calls. A developer can use fine-tuning to more reliably convert user prompts into high-quality JSON snippets that can be used with their own systems.
- **Custom tone:** Fine-tuning is a great way to hone the qualitative feel of the model output, such as its tone, so it better fits the voice of a business's brand. A business with a recognizable brand voice can use fine-tuning to make the model more consistent with that tone.

In addition to increased performance, fine-tuning also enables businesses to **shorten their prompts** while maintaining similar performance. Fine-tuning with GPT-3.5 Turbo can also handle 4k tokens, double the capacity of our previous fine-tuned models. Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs.

Fine-tuning is most powerful when combined with [other techniques](https://platform.openai.com/docs/guides/gpt-best-practices) such as prompt engineering, information retrieval, and function calling. Check out our [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) to learn more. Support for fine-tuning with function calling and `gpt-3.5-turbo-16k` will be coming later this fall.
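As a concrete illustration of the supervised fine-tuning workflow described above, the sketch below prepares a small JSONL training file in the chat-message format the fine-tuning API expects. The German-answer examples are invented for illustration; the commented-out upload and job-creation calls at the end require the `openai` Python package, an API key, and billing, so only the file-preparation step runs here.

```python
import json

# Each supervised fine-tuning example is one JSON object per line (JSONL),
# containing a list of chat messages. These steerability examples teach the
# model to always answer in German (illustrative data, not from OpenAI).
examples = [
    {
        "messages": [
            {"role": "system", "content": "Antworte immer auf Deutsch."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "Die Hauptstadt von Frankreich ist Paris."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "Antworte immer auf Deutsch."},
            {"role": "user", "content": "Name a primary color."},
            {"role": "assistant", "content": "Eine Grundfarbe ist Blau."},
        ]
    },
]

def write_jsonl(path, rows):
    """Write one JSON object per line, as the fine-tuning API expects."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_jsonl("train.jsonl", examples)

# Uploading the file and launching the job would then look roughly like
# (requires the `openai` package and an API key; training is billed):
#
#   from openai import OpenAI
#   client = OpenAI()
#   f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=f.id, model="gpt-3.5-turbo")
```

Once the job completes, the resulting model name can be passed to the chat completions API in place of the base model, which is how the "fine-tune instructions into the model itself" prompt-size savings mentioned above are realized.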

Similar Articles

Customizing GPT-3 for your application

OpenAI Blog

OpenAI has launched fine-tuning capabilities for GPT-3, allowing developers to customize the model on their own data via a single CLI command, resulting in improved accuracy, reduced costs, and lower latency for production use cases. Early customers like Keeper Tax, Viable, and Sana Labs report significant accuracy improvements after fine-tuning.

Fine-tuning now available for GPT-4o

OpenAI Blog

OpenAI launches fine-tuning for GPT-4o and GPT-4o mini, allowing developers to customize models with their own datasets at lower costs. The feature includes free training tokens (1M/day for GPT-4o and 2M/day for GPT-4o mini through September 23) and is available to all paid-tier developers.

Introducing GPT-4.1 in the API

OpenAI Blog

OpenAI launches GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano models via API with major improvements in coding (54.6% on SWE-bench), instruction following, and 1M token context windows at lower costs. GPT-4.5 Preview will be deprecated on July 14, 2025.

GPT-4 API general availability and deprecation of older models in the Completions API

OpenAI Blog

OpenAI announced GPT-4 API general availability and deprecated older completion models (GPT-3 base models and text-davinci-003), requiring developers to migrate to newer models such as gpt-3.5-turbo-instruct by January 4, 2024. Fine-tuned models will need to be retrained on new base models, with priority access to GPT-3.5 Turbo and GPT-4 fine-tuning offered to affected developers.

Introducing GPT-5.1 for developers

OpenAI Blog

OpenAI releases GPT-5.1, a new model in the GPT-5 series that dynamically adapts thinking time based on task complexity, offering 2-3x faster performance than GPT-5 while maintaining frontier intelligence. The release includes extended prompt caching (24-hour retention), new coding tools (apply_patch and shell), and a 'no reasoning' mode for latency-sensitive applications.