Image-blaster: Creates 3D environments, SFX, and meshes from a single image
Summary
A CLI tool that transforms a single image into a fully meshed 3D environment with objects, ambient sound, and physics SFX in under five minutes by leveraging Claude skills, World Labs, and FAL models.
Source: https://github.com/neilsonnn/image-blaster
image-blaster
Creates 3D environments, SFX, and meshes from a single image using Claude skills, World Labs, and FAL.
Can take you from an image to a fully meshed 3D environment in < 5 minutes, great for jumpstarting 3D work. Go full blast.
Quickstart
- Open a Terminal and enter `git clone https://github.com/neilsonnn/image-blaster`
- Enter the directory with `cd image-blaster`
- Run `claude` (install with `curl -fsSL https://claude.ai/install.sh | bash`)
- Say hello to Claude, and give them your API keys for World Labs and FAL.
- Put an image into the `input/` directory and ask Claude to "blast it and confirm each step with me".
Description
By default image-blaster will use your input image to create:
- 3D models (`.glb`, `.obj`) of all dynamic objects
- a Gaussian splat (`.spz`) of the static environment
- ambient looping sound and object-specific physics SFX (`.mp3`)
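A quick way to confirm a blast produced every asset type is to scan the result directory for the extensions listed above. This is an illustrative sketch, not part of image-blaster; the idea of a flat result directory (and the function name) are assumptions.

```python
from pathlib import Path

# File extensions image-blaster emits by default: object meshes,
# an environment splat, and audio.
EXPECTED = {".glb", ".obj", ".spz", ".mp3"}

def missing_outputs(output_dir: str) -> set[str]:
    """Return the expected extensions with no matching file under output_dir."""
    found = {p.suffix.lower() for p in Path(output_dir).rglob("*") if p.is_file()}
    return EXPECTED - found
```

Running this after a blast and getting an empty set back means meshes, splat, and SFX all landed.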
Extensions
You can embed image-blaster under the assets of any game engine, DCC software, or web app.
- Unity, Unreal, or Godot game engine
- Blender, 3ds Max, Maya, or other DCC software
- Three.js web app or Electron app
Advanced
IMAGE-BLASTER uses a few generation models:
- `marble-1.1`: World Labs Marble model creates the explorable environment.
- `nano-banana`: default image-edit preference for source cleanup, clean plates, and object reference images.
- `gpt-image-2`: alternate image-edit provider when the edit skill is asked to prefer it.
- `hunyuan-3d`: Hunyuan 3D model creates 3D object models through FAL.
- `elevenlabs-sfx`: ElevenLabs sound effects model creates ambient and object-specific sounds.
3D model creation supports these Hunyuan parameters:
- `--face-count <40000-1500000>`: target face count. IMAGE-BLASTER defaults to `50000`; Hunyuan's API default is `500000`.
- `--enable-pbr true|false`: enable PBR material generation. Defaults to `true`.
- `--generate-type Normal|LowPoly|Geometry`: `Normal` creates a textured model, `LowPoly` applies polygon reduction, and `Geometry` creates a white geometry-only model. Defaults to `Normal`.
- `--polygon-type triangle|quadrilateral`: polygon type for `LowPoly`. Defaults to `triangle`.
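The flag grammar above can be mirrored in a small argparse sketch. This is an illustrative parser built only from the defaults and ranges stated in this README, not the tool's actual implementation.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser mirroring the documented Hunyuan 3D-generation flags."""
    def face_count(value: str) -> int:
        n = int(value)
        if not 40000 <= n <= 1500000:
            raise argparse.ArgumentTypeError("face count must be in 40000-1500000")
        return n

    p = argparse.ArgumentParser(prog="image-blaster")
    # IMAGE-BLASTER's default is 50000; Hunyuan's own API default is 500000.
    p.add_argument("--face-count", type=face_count, default=50000)
    p.add_argument("--enable-pbr", choices=["true", "false"], default="true")
    p.add_argument("--generate-type", choices=["Normal", "LowPoly", "Geometry"],
                   default="Normal")
    # Only meaningful when --generate-type is LowPoly.
    p.add_argument("--polygon-type", choices=["triangle", "quadrilateral"],
                   default="triangle")
    return p
```

With no flags given, the parser yields the README's stated defaults; out-of-range face counts are rejected at parse time.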
Examples
- Video game level concepts? IMAGE-BLAST it.
- Your childhood bedroom? IMAGE-BLAST it.
- Need an environment for a robot? IMAGE-BLAST it.
- A film location scout? IMAGE-BLAST it.
- An architectural rendering? IMAGE-BLAST it.
Development
- Remove `/app` from the `.claudeignore` file to give Claude the ability to change the React viewer.
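That edit is a one-line removal and can be scripted. A minimal sketch, assuming `.claudeignore` holds one pattern per line; the helper name is hypothetical.

```python
from pathlib import Path

def unignore_app(ignore_file: str = ".claudeignore") -> None:
    """Delete the /app entry so Claude may modify the React viewer."""
    path = Path(ignore_file)
    kept = [line for line in path.read_text().splitlines()
            if line.strip() != "/app"]
    path.write_text("\n".join(kept) + ("\n" if kept else ""))
```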