World Labs Integration: Photorealistic Environments for Simulation

Bridging the gap between synthetic training data and real-world visual complexity.

Training robots in simulation has a fundamental problem: simulated environments don't look like the real world. Policies trained on flat textures and simple lighting often fail when deployed on hardware that sees shadows, reflections, and cluttered backgrounds.

We integrated World Labs into RebelAI to solve this. World Labs generates photorealistic 3D scenes from text descriptions, giving you training environments that match real-world visual complexity without manual asset creation.

How it works

The pipeline is straightforward. You describe the environment you want—"industrial warehouse with metal shelving and concrete floors"—and World Labs generates a complete 3D scene. RebelAI imports this scene into MuJoCo, handles the coordinate transforms, and sets up the rendering pipeline. Your robot spawns in a photorealistic environment ready for training.
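
As a rough sketch, the flow looks something like this. The rebelai.worldlabs module, the generate_scene helper, and the scene.mjcf_path attribute below are illustrative assumptions, not the actual RebelAI API; see the documentation for the real interface.

```python
# Hypothetical sketch of the generate-and-import pipeline.
# rebelai.worldlabs, generate_scene, and scene.mjcf_path are assumed names
# for illustration only.
import mujoco

from rebelai.worldlabs import generate_scene  # assumed helper

# 1. Generate a photorealistic 3D scene from a text description.
scene = generate_scene(
    prompt="industrial warehouse with metal shelving and concrete floors",
)

# 2. Import the scene into MuJoCo. In practice this step handles
#    coordinate transforms and rendering setup before returning an MJCF model.
model = mujoco.MjModel.from_xml_path(scene.mjcf_path)  # mjcf_path is assumed
data = mujoco.MjData(model)

# 3. Spawn the robot and step the simulation as usual.
mujoco.mj_step(model, data)
```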

The generated environments include proper lighting, material properties, and geometric detail. Objects have realistic textures. Surfaces reflect and scatter light correctly. The visual gap between simulation and reality shrinks dramatically.

Why this matters for sim-to-real

Sim-to-real transfer failures often trace back to the visual domain gap. A policy learns to recognize objects by their simplified simulation appearance, then fails when those same objects look different under real lighting conditions.

Photorealistic training environments reduce this gap at the source. Instead of learning features that only exist in simulation, policies learn from visual inputs that resemble deployment conditions. Combined with domain randomization over the generated scenes, you get robust perception without manual environment design.
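
One way this combination could look in practice, again assuming the illustrative generate_scene helper from above; the prompt fragments and sampling scheme here are examples, not a prescribed recipe:

```python
# Sketch of domain randomization over generated scenes.
# generate_scene and the prompt fragments are assumptions for illustration.
import random

from rebelai.worldlabs import generate_scene  # assumed helper

BASE_PROMPT = "industrial warehouse with metal shelving and concrete floors"
LIGHTING = [
    "overhead fluorescent lighting",
    "dim evening light through skylights",
    "bright daylight from open loading doors",
]
CLUTTER = [
    "a clean floor",
    "scattered pallets and boxes",
    "cables and tools on the floor",
]

def sample_training_scene():
    """Draw a randomized scene description and generate it."""
    prompt = f"{BASE_PROMPT}, {random.choice(LIGHTING)}, {random.choice(CLUTTER)}"
    return generate_scene(prompt=prompt)

# Each training episode can start from a freshly randomized environment.
scene = sample_training_scene()
```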

Text-driven iteration

The text interface changes how you iterate on training environments. Instead of editing 3D assets or writing scene configuration files, you modify a prompt. Need a cluttered tabletop instead of a clean one? Change the description. Want outdoor lighting instead of fluorescent? Update the prompt and regenerate.
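
For example, swapping a clean tabletop for a cluttered one is a one-line prompt change (still assuming the illustrative generate_scene helper):

```python
# Sketch of prompt-driven iteration; generate_scene is an assumed helper.
from rebelai.worldlabs import generate_scene

# First attempt: a clean tabletop under fluorescent light.
scene_v1 = generate_scene(prompt="clean tabletop workspace, fluorescent lighting")

# Need clutter and outdoor lighting instead? Edit the prompt and regenerate.
# No asset editing or scene configuration files involved.
scene_v2 = generate_scene(
    prompt="cluttered tabletop with tools and parts, natural outdoor lighting"
)
```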

This speed matters when you're searching for the right training distribution. You can test hypotheses about what visual conditions your policy needs to handle without the overhead of traditional environment authoring.

Getting started

The World Labs integration is available in the latest RebelAI release. Generate your first photorealistic environment with a single function call. The documentation covers scene customization, camera placement, and tips for effective prompt engineering.