Fine-tuning now available for GPT-4o

OpenAI Blog · Products

Summary

OpenAI launches fine-tuning for GPT-4o and GPT-4o mini, allowing developers to customize models with their own datasets at lower costs. The feature includes free training tokens (1M/day for GPT-4o and 2M/day for GPT-4o mini through September 23) and is available to all paid-tier developers.


Cached at: 04/20/26, 02:43 PM

# Fine-tuning now available for GPT-4o

Source: [https://openai.com/index/gpt-4o-fine-tuning/](https://openai.com/index/gpt-4o-fine-tuning/)

Fine-tune custom versions of GPT-4o to increase performance and accuracy for your applications.

Today, we're launching fine-tuning for [GPT-4o](https://openai.com/index/hello-gpt-4o/), one of the most requested features from developers. We are also offering 1M training tokens per day for free for every organization through September 23. Developers can now fine-tune GPT-4o with custom datasets to get higher performance at a lower cost for their specific use cases.

Fine-tuning enables the model to customize the structure and tone of responses, or to follow complex domain-specific instructions. Developers can already produce strong results for their applications with as few as a few dozen examples in their training dataset. From coding to creative writing, fine-tuning can have a large impact on model performance across a variety of domains. This is just the start; we'll continue to invest in expanding our [model customization](https://openai.com/index/introducing-improvements-to-the-fine-tuning-api-and-expanding-our-custom-models-program/) options for developers.

GPT-4o fine-tuning is available today to all developers on all paid [usage tiers](https://platform.openai.com/docs/guides/rate-limits/usage-tiers). To get started, visit the [fine-tuning dashboard](https://platform.openai.com/finetune), click `create`, and select `gpt-4o-2024-08-06` from the base model drop-down. GPT-4o fine-tuning training costs $25 per million tokens, and inference costs $3.75 per million input tokens and $15 per million output tokens.

GPT-4o mini fine-tuning is also available to all developers on all paid usage tiers. Visit the fine-tuning dashboard and select `gpt-4o-mini-2024-07-18` from the base model drop-down.
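To see how those prices combine for a given workload, here is a minimal sketch of a cost estimator. The function name and example token counts are illustrative, not part of any SDK; the rates are the GPT-4o fine-tuning prices quoted above ($25/1M training tokens, $3.75/1M input tokens, $15/1M output tokens).

```python
# Hypothetical helper (not part of the OpenAI SDK): estimate the cost of
# fine-tuning GPT-4o and then serving the fine-tuned model, using the
# per-million-token prices from the announcement.

def estimate_gpt4o_ft_cost(training_tokens: int,
                           input_tokens: int,
                           output_tokens: int) -> float:
    """Return the estimated total cost in USD, rounded to cents."""
    training_cost = training_tokens / 1_000_000 * 25.00
    inference_cost = (input_tokens / 1_000_000 * 3.75
                      + output_tokens / 1_000_000 * 15.00)
    return round(training_cost + inference_cost, 2)

# Example: 2M training tokens, then 4M input / 1M output tokens of inference.
print(estimate_gpt4o_ft_cost(2_000_000, 4_000_000, 1_000_000))  # 80.0
```

Note that with the free-token offer through September 23, up to 1M training tokens per day would not be billed, so the training term above can drop to zero for small jobs.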
For GPT-4o mini, we're offering 2M training tokens per day for free through September 23. To learn more about how to use fine-tuning, visit our [docs](https://platform.openai.com/docs/guides/fine-tuning).

Over the past couple of months, we've worked with a handful of trusted partners to test fine-tuning on GPT-4o and learn about their use cases. Here are a couple of success stories:

**Cosine achieves state-of-the-art results on the SWE-bench benchmark**

[Cosine](https://cosine.sh/)'s Genie is an AI software engineering assistant that can autonomously identify and resolve bugs, build features, and refactor code in collaboration with users. It can reason across complex technical problems and make changes to code with higher accuracy and fewer tokens needed. Genie is powered by a fine-tuned GPT-4o model trained on examples of real software engineers at work, enabling the model to learn to respond in a specific way. The model was also trained to output in specific formats, such as patches that can be committed easily to codebases.

With a fine-tuned GPT-4o model, Genie achieves a state-of-the-art (SOTA) score of 43.8% on the new [SWE-bench](https://www.swebench.com/) Verified benchmark, [announced](https://openai.com/index/introducing-swe-bench-verified/) last Tuesday. Genie also holds a SOTA score of 30.08% on SWE-bench Full, beating its previous SOTA score of 19.27%, the largest improvement ever on this benchmark.

**Distyl ranks 1st on the BIRD-SQL benchmark**

[Distyl](https://distyl.ai/), an AI solutions partner to Fortune 500 companies, recently placed 1st on the [BIRD-SQL](https://bird-bench.github.io/) benchmark, the leading text-to-SQL benchmark.
Distyl's fine-tuned GPT-4o achieved an execution accuracy of 71.83% on the leaderboard and excelled across tasks like query reformulation, intent classification, chain-of-thought, and self-correction, with particularly high performance in SQL generation.

Fine-tuned models remain entirely under your control, with full ownership of your business data, including all inputs and outputs. This ensures your data is never shared or used to train other models. We've also implemented layered safety mitigations for fine-tuned models to ensure they aren't being misused. For example, we continuously run automated safety evals on fine-tuned models and monitor usage to ensure applications adhere to our usage policies.

We're excited to see what you build by fine-tuning GPT-4o. If you'd like to explore more model customization options, please [reach out](https://openai.com/form/custom-models/) to our team; we'd be happy to help!
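The fine-tuning docs linked above expect training data as a JSONL file of chat-formatted examples, one JSON object per line. As a minimal sketch of preparing such a file, assuming the standard chat `messages` format (the example content and filename are illustrative):

```python
import json

# Each training example is one JSON object per line ("JSONL"), in the chat
# format used by the fine-tuning API. The example content is illustrative.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a SQL assistant."},
        {"role": "user", "content": "List all customers in France."},
        {"role": "assistant",
         "content": "SELECT * FROM customers WHERE country = 'France';"},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Read the file back; each line round-trips to the original example.
with open("training_data.jsonl") as f:
    lines = [json.loads(line) for line in f]
```

A file like this can then be uploaded and selected when creating a job from the fine-tuning dashboard described earlier.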

Similar Articles

GPT-3.5 Turbo fine-tuning and API updates

OpenAI Blog

OpenAI has released fine-tuning capabilities for GPT-3.5 Turbo, allowing developers to customize models for specific use cases with improved performance, steerability, and output formatting. The update enables fine-tuned GPT-3.5 Turbo to match GPT-4 performance on certain tasks while reducing prompt sizes by up to 90%.

Customizing GPT-3 for your application

OpenAI Blog

OpenAI has launched fine-tuning capabilities for GPT-3, allowing developers to customize the model on their own data via a single CLI command, resulting in improved accuracy, reduced costs, and lower latency for production use cases. Early customers like Keeper Tax, Viable, and Sana Labs report significant accuracy improvements after fine-tuning.

Fine-tuning GPT-4o webinar

OpenAI Blog

OpenAI hosted a webinar on August 26, 2024, focused on fine-tuning GPT-4o models for business applications.

GPT-4o mini: advancing cost-efficient intelligence

OpenAI Blog

OpenAI releases GPT-4o mini, a cost-efficient small model priced at 15 cents per million input tokens, more than 60% cheaper than GPT-3.5 Turbo, with strong performance on MMLU (82%) and outperforming competitors like Gemini Flash and Claude Haiku on reasoning, math, and coding tasks.

Introducing vision to the fine-tuning API

OpenAI Blog

OpenAI introduces vision fine-tuning capabilities for GPT-4o, allowing developers to customize the model with image data in addition to text for improved performance on vision tasks like visual search, object detection, and medical image analysis.