Introducing vision to the fine-tuning API
Summary
OpenAI introduces vision fine-tuning for GPT-4o, allowing developers to customize the model with image data in addition to text, improving performance on vision tasks such as visual search, object detection, and medical image analysis.
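As a rough sketch of what "fine-tuning with image data" looks like in practice: training examples are chat-style records written one per line to a JSONL file, with images referenced inside the user message. The exact field names follow the chat-message schema the fine-tuning API uses for vision data, but the file name, prompt, image URL, and label below are all hypothetical.

```python
import json

# One hypothetical vision fine-tuning example: a chat exchange where the
# user message carries both text and an image reference.
example = {
    "messages": [
        {"role": "system", "content": "You identify traffic signs."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What sign is shown?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/stop_sign.jpg"},
                },
            ],
        },
        {"role": "assistant", "content": "A stop sign."},
    ]
}

# The training file is JSONL: one JSON object like the above per line.
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```

The resulting file would then be uploaded and referenced when creating a fine-tuning job, the same workflow as text-only fine-tuning.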
Similar Articles
Fine-tuning now available for GPT-4o
OpenAI launches fine-tuning for GPT-4o and GPT-4o mini, allowing developers to customize models with their own datasets at lower costs. The feature includes free training tokens (1M/day for GPT-4o and 2M/day for GPT-4o mini through September 23) and is available to all paid-tier developers.
GPT-3.5 Turbo fine-tuning and API updates
OpenAI has released fine-tuning capabilities for GPT-3.5 Turbo, allowing developers to customize models for specific use cases with improved performance, steerability, and output formatting. The update enables fine-tuned GPT-3.5 Turbo to match GPT-4 performance on certain tasks while reducing prompt sizes by up to 90%.
Introducing improvements to the fine-tuning API and expanding our custom models program
OpenAI introduces improvements to its fine-tuning API with new features including epoch-based checkpoints, comparative playground for model evaluation, third-party integrations, and enhanced dashboard capabilities. The company also expands its custom models program to give developers more control and flexibility in building domain-specific AI solutions.
Fine-tuning GPT-4o webinar
OpenAI hosted a webinar on August 26, 2024, focused on fine-tuning GPT-4o models for business applications.
GPT-4V(ision) system card
OpenAI releases a system card detailing the safety properties and evaluations of GPT-4V(ision), which adds image input capabilities to GPT-4, enabling multimodal instruction-following and vision analysis.