@heyshrutimishra:
Summary
Luma AI launches its API with pay-as-you-go and provisioned throughput pricing for image and video generation models like Ray3.14 and Photon.
Cached at: 05/10/26, 02:30 AM
Try here → https://t.co/MNspcqbsbP
Intelligence you can direct. Aesthetic you can ship. | Luma
Source: https://lumalabs.ai/api

Build
Drop into any stack. No custom infra. No waitlist. Everything you need to evaluate. No retry tax.
Two endpoints. Generate. Modify.
Up to nine references per generation.
Python and JavaScript SDKs.
All aspect ratios. All output formats.
You own everything you make.
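As a hedged sketch of what "two endpoints, up to nine references, Python SDK" could look like in practice: the package name `lumaai`, the `LumaAI` client, the `generations.image.create` method, and the `image_ref` parameter shape below are assumptions drawn from Luma's published SDK, not from this page; check the API reference before relying on them.

```python
# Hedged sketch of image generation with Luma's Python SDK.
# Package/client/method names (`lumaai`, `LumaAI`, generations.image.create)
# and the payload shape are assumptions; verify against the SDK docs.
import os


def build_generation_request(prompt, aspect_ratio="16:9", image_refs=None):
    """Assemble a request payload, enforcing the page's stated limit of
    up to nine references per generation."""
    refs = image_refs or []
    if len(refs) > 9:
        raise ValueError("at most nine references per generation")
    payload = {"prompt": prompt, "aspect_ratio": aspect_ratio}
    if refs:
        # Assumed shape: a list of {"url": ...} reference objects.
        payload["image_ref"] = [{"url": u} for u in refs]
    return payload


if os.environ.get("LUMAAI_API_KEY"):
    from lumaai import LumaAI  # pip install lumaai

    client = LumaAI()  # reads LUMAAI_API_KEY from the environment
    generation = client.generations.image.create(
        **build_generation_request("a lighthouse at dusk, volumetric fog")
    )
```

The payload builder is pure Python, so the reference limit can be validated before any network call is made.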
Scale
Production at volume. Support to match. Volume pricing that finally works at scale.
Everything in Build, with:
Dedicated engineering support.
For Building: Pay-as-you-go
Pay per image. No commitment. Ideal for prototyping and early production.
| Task | Resolution | Uni-1.1 | Uni-1.1 Max |
| --- | --- | --- | --- |
| Text to image | 2048px (2K) | $0.0404 | $0.1000 |
| Image edit | 2048px (2K) | $0.0434 | $0.1030 |
| 1 image reference | 2048px (2K) | $0.0434 | $0.1030 |
| 2 image references | 2048px (2K) | $0.0464 | $0.1060 |
| 8 image references | 2048px (2K) | $0.0644 | $0.1240 |
- Prices are approximate and based on billing tokens
- No minimum commitment
- Rate limits apply, no latency SLA
For Scaling: Provisioned throughput
Dedicated capacity with guaranteed throughput and latency. Built for production workloads at scale.
| Commitment | Price / unit / month | Cost per image (Base / Max) |
| --- | --- | --- |
| 1 month | $3,800 | $0.0880 / $0.2200 |
| 3 months | $2,800 | $0.0650 / $0.1625 |
| 1 year | $2,100 | $0.0490 / $0.1225 |
- 1 unit = 1 request per minute (Base) or 0.4 RPM (Max)
- Minimum 8 units
- Includes SLA, moderation, and prompt enhancement
- No-train guarantee
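The cost-per-image column follows from the unit definition: 1 unit sustains 1 request per minute (Base) or 0.4 RPM (Max), so a 30-day month yields 43,200 Base images per unit. A small sketch of that arithmetic (the 30-day month is an assumption; the table's figures match to within rounding):

```python
# Sketch: derive the provisioned-throughput cost-per-image figures from
# the unit definition (1 unit = 1 RPM Base, 0.4 RPM Max), assuming a
# 30-day billing month. Table figures agree to within rounding.
MINUTES_PER_MONTH = 60 * 24 * 30  # 43,200


def cost_per_image(unit_price_per_month, rpm):
    """Dollars per image for one fully utilized unit at the given RPM."""
    images_per_month = rpm * MINUTES_PER_MONTH
    return unit_price_per_month / images_per_month


# 1-month commitment at $3,800/unit:
#   Base (1.0 RPM) ≈ $0.088, Max (0.4 RPM) ≈ $0.22 — as in the table.
```

Note these figures assume the unit runs at full utilization; idle capacity raises the effective per-image cost.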
Explore Other Models
Ray3.14
A new generation of video model capable of producing fast, coherent motion, ultra-realistic detail, and logical event sequences.
Ray3
World’s first reasoning video model. World’s first HDR model. A model designed to tell stories.
Ray2
Video generation model for text-to-video and image-to-video.
Supports keyframes, looping, and cinematic motion.
Photon
Image generation model for high-quality stills and visual assets, with flexible aspect ratios.