Experiment with Gemini 2.0 Flash native image generation
Summary
Google expands Gemini 2.0 Flash native image generation capabilities to all developers, enabling multimodal text and image output for storytelling, conversational image editing, and applications requiring world understanding and text rendering.
Similar Articles
Gemini 2.0 is now available to everyone
Google announces general availability of Gemini 2.0 Flash via API, introduces experimental Gemini 2.0 Pro for advanced coding and reasoning tasks, and releases Gemini 2.0 Flash-Lite as a cost-efficient option. All models support multimodal input with text output and are available through Google AI Studio, Vertex AI, and the Gemini app.
Start building with Gemini 2.0 Flash and Flash-Lite
Google announces general availability of Gemini 2.0 Flash-Lite with improved performance over 1.5 Flash, simplified pricing, and a 1 million token context window. The model is now available in Google AI Studio and Vertex AI for production use, with developers already building voice AI, data analytics, and video editing applications.
Improved Gemini audio models for powerful voice experiences
Google has updated Gemini 2.5 Flash Native Audio to improve live voice agent capabilities, including sharper function calling, better instruction following, and smoother conversation context retrieval. The update also introduces live speech translation in the Google Translate app beta, preserving intonation across 70+ languages.
Introducing Gemini 2.5 Flash
Google announces Gemini 2.5 Flash, a new hybrid reasoning model available in preview through the Gemini API. The model features toggleable thinking capabilities, fine-grained thinking budgets for quality-cost-latency tradeoffs, and maintains fast inference speeds while improving performance over 2.0 Flash.
Gemini 2.5 Flash-Lite is now ready for scaled production use
Google releases Gemini 2.5 Flash-Lite as stable and generally available, the fastest and lowest-cost model in the Gemini 2.5 family at $0.10 per 1M input tokens and $0.40 per 1M output tokens, featuring native reasoning capabilities and full support for native tools.
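The quoted rates make per-request cost easy to estimate. A minimal sketch, using only the $0.10/$0.40 per 1M token prices stated above (the function name and example token counts are illustrative, not part of any SDK):

```python
def gemini_25_flash_lite_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD at the quoted Gemini 2.5 Flash-Lite
    rates: $0.10 per 1M input tokens, $0.40 per 1M output tokens."""
    INPUT_RATE = 0.10 / 1_000_000   # USD per input token
    OUTPUT_RATE = 0.40 / 1_000_000  # USD per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a request with 50k input tokens and 2k output tokens
print(f"${gemini_25_flash_lite_cost(50_000, 2_000):.4f}")  # → $0.0058
```

At these rates a full 1M input tokens plus 1M output tokens comes to $0.50, which is where the model's "lowest-cost in the family" positioning shows up in practice.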