One Shot Prompt — World Generation in Unity

One prompt. An LLM agent builds a complete procedural 3D landscape inside the Unity Editor. Terrain, splat maps, water, props, lighting, clouds, and a horizon ring — all from a single message.

World procedurally generated by Claude Opus 4.6

What is One Shot Prompt — World Generation?

One Shot Prompt — World Generation in Unity asks an LLM agent to build a complete procedural 3D landscape inside the Unity Editor across seven ordered layers. The agent starts from an empty scene and ends with terrain, materials, water, vegetation, lighting, sky, and a horizon ring. Every layer only adds to the scene. Nothing is allowed to modify what an earlier layer produced.

Why one-shot world generation?

I built this because I wanted to see what an LLM can actually produce inside the Unity Editor from a single prompt — not chat answers, but real engine work. Terrain, materials, water, props, lighting, and sky that hold up inside a real project. Paste the prompt, let the agent run, and see the world it builds. The outcome that matters is the scene, not the reply.

The seven layers

A run executes all seven layers in order, without stopping between them. Each run is isolated under its own folder and every layer only adds to the scene. Layer 3 cannot rewrite Layer 1's terrain. Layer 4 cannot move Layer 3's water. That isolation is the core rule.
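The additive rule can be pictured as a tiny append-only registry: each layer records what it produced, and later layers may read but never rewrite earlier entries. Everything below (class and method names) is an illustrative sketch of mine, not part of the prompt itself.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the isolation rule: layers append, never mutate.
public static class RunManifest
{
    static readonly Dictionary<int, List<string>> entries =
        new Dictionary<int, List<string>>();

    // A layer registers each object it creates under its own layer number.
    public static void Register(int layer, string objectName)
    {
        if (!entries.TryGetValue(layer, out var list))
            entries[layer] = list = new List<string>();
        list.Add(objectName);
    }

    // Later layers can only read what earlier layers produced.
    public static IReadOnlyList<string> ProducedBy(int layer) =>
        entries.TryGetValue(layer, out var list)
            ? list
            : (IReadOnlyList<string>)Array.Empty<string>();
}
```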

1. Terrain & surface PBR: Custom mesh terrain with mountains, hills, plains, and a meandering river valley. Domain-warped continental noise picks zones, and baked tint, height, and normal maps land on a URP material.
2. Splat mapping: RGBA splat map (grass / rock / dirt / snow) plus tileable procedural PBR texture sets, composited into the terrain material with sun-exposure tinting.
3. Water system: A continuous river surface that follows the carved channel and extends off the map edges, plus optional lakes and ponds on larger maps. Records water exclusion zones.
4. Props: Recursive procedural trees with bark and cross-billboard leaves, plus scattered rocks. Each one uniquely seeded, placed via downward raycast, with density noise and water exclusion respected.
5. Lighting & post-processing: Directional sun, gradient ambient, procedural skybox, and a Global Volume with ACES tonemapping, bloom, color grading, vignette, and SSAO.
6. Sky & clouds: Sphere-cluster clouds floating at least 120 m above the highest terrain point. Cumulus, wisp, and puff variants, with shadow casting disabled.
7. Horizon closure: A continuous mountain ring outside the playable terrain that welds to the terrain edges, hides the void, and gets a smaller ring forest of its own.
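To make "domain-warped continental noise picks zones" in Layer 1 concrete: warp the sample coordinates with one noise pass, sample again at the warped position, and bucket the result into zones. The hash function, frequencies, warp strength, and thresholds below are my own illustrative guesses, not the prompt's actual constants.

```csharp
using System;

// Hedged sketch of zone selection via domain-warped value noise.
public static class ZoneNoise
{
    // Deterministic 2D integer hash -> [0, 1); seeded for reproducible runs.
    static float Hash(int x, int z, int seed)
    {
        unchecked
        {
            uint h = (uint)(x * 374761393 + z * 668265263 + seed * 1442695041);
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h ^ (h >> 16)) / 4294967296f;
        }
    }

    // Bilinearly interpolated value noise in [0, 1).
    static float ValueNoise(float x, float z, int seed)
    {
        int xi = (int)MathF.Floor(x), zi = (int)MathF.Floor(z);
        float tx = x - xi, tz = z - zi;
        float a = Hash(xi, zi, seed),     b = Hash(xi + 1, zi, seed);
        float c = Hash(xi, zi + 1, seed), d = Hash(xi + 1, zi + 1, seed);
        float top = a + (b - a) * tx, bot = c + (d - c) * tx;
        return top + (bot - top) * tz;
    }

    // Domain warp: offset the sample position by noise, then sample again.
    public static float Continental(float x, float z, int seed)
    {
        float wx = ValueNoise(x * 0.05f, z * 0.05f, seed + 1) * 8f;
        float wz = ValueNoise(x * 0.05f, z * 0.05f, seed + 2) * 8f;
        return ValueNoise((x + wx) * 0.02f, (z + wz) * 0.02f, seed);
    }

    // Bucket the warped noise into a zone label (thresholds illustrative).
    public static string Zone(float x, float z, int seed)
    {
        float n = Continental(x, z, seed);
        if (n < 0.35f) return "plains";
        if (n < 0.55f) return "hills";
        return "mountains";
    }
}
```

The warp step is what breaks up the blobby look of plain value noise: zone borders meander instead of following the noise grid.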

A partial run still produces a usable scene. An agent that completes Layers 1 through 4 but stalls at lighting still leaves you with terrain, textures, water, and props — a playable starting point.
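Layer 4's placement rules (density noise plus the water exclusion zones recorded by Layer 3) can be sketched as a pure accept/reject filter; the downward raycast that snaps survivors onto the surface is Unity-side and omitted here. The circular zone shape and all names are my assumptions.

```csharp
using System;
using System.Collections.Generic;

// A recorded water exclusion zone, approximated as a circle on the XZ plane.
public readonly struct ExclusionCircle
{
    public readonly float X, Z, Radius;
    public ExclusionCircle(float x, float z, float radius) { X = x; Z = z; Radius = radius; }

    public bool Contains(float x, float z)
    {
        float dx = x - X, dz = z - Z;
        return dx * dx + dz * dz <= Radius * Radius;
    }
}

public static class PropPlacement
{
    // A candidate point survives only if the density noise passes the
    // threshold and it lies outside every water exclusion zone.
    public static bool Accept(float x, float z, float densityNoise, float threshold,
                              IEnumerable<ExclusionCircle> waterZones)
    {
        if (densityNoise < threshold) return false;    // too sparse here
        foreach (var zone in waterZones)
            if (zone.Contains(x, z)) return false;     // respect water exclusion
        return true;                                   // ok: raycast down and place
    }
}
```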

How to use

The prompt ships as text you copy once and paste into your LLM tool, so every run sees the same instructions and constraints.

GitHub is the source of truth for the prompt, layers, and tooling. Open issues for new ideas or edge cases, and send pull requests for tasks, fixtures, and docs.

You need the full prompt text in the model's context. The Copy button pulls the latest prompt straight from the repo and puts it on your clipboard. Paste it into Claude Code, Cursor, ChatGPT, your own API stack, or any tool that accepts a large first message or system block. That keeps the layers and wording identical between runs and between people.

Before you click Copy, set up the Unity side: any modern Unity version with the Universal Render Pipeline, an empty scene as your starting point, and (strongly recommended) a Unity MCP server exposing an execute_code tool. I built the prompt around runtime editor manipulation through MCP. Without MCP, the agent falls back to writing C# Editor scripts under the run folder that you trigger via [MenuItem] entries.

Everything that counts must be procedurally generated by the agent itself. Nothing is treated as if you dropped it into the editor by hand beforehand. Each run lives in its own folder (Assets/Runs/{agent}-{date}/), so multiple agents can share the same project without stepping on each other.
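The run-folder convention and the [MenuItem] fallback might look like the sketch below. The date format, menu path, and method names are my assumptions, not the prompt's actual output; only the Assets/Runs/{agent}-{date}/ shape comes from the convention above.

```csharp
using System;

public static class RunPaths
{
    // Pure helper so the folder convention is testable outside Unity.
    // The yyyy-MM-dd date format is an assumption.
    public static string RunFolder(string agent, DateTime date) =>
        $"Assets/Runs/{agent}-{date:yyyy-MM-dd}/";
}

#if UNITY_EDITOR
// Sketch of the non-MCP fallback: an Editor script saved under the run
// folder and triggered from the Unity menu bar.
public static class WorldGenMenu
{
    [UnityEditor.MenuItem("Tools/World Generation/Run All Layers")]
    static void RunAllLayers()
    {
        string folder = RunPaths.RunFolder("claude", DateTime.UtcNow);
        UnityEngine.Debug.Log($"Generating into {folder}");
        // ...entry points for Layers 1-7 would go here...
    }
}
#endif
```

Keeping the path logic in a plain static class means two agents sharing one project write to disjoint folders by construction.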

How to contribute

I want this to grow in public. The GitHub card above points at the repository that holds task definitions, helper scripts, and documentation.

Open an issue for new layer ideas, unclear wording, edge cases, or Unity version notes. Pull requests are welcome for tasks, fixtures, docs, and tooling. See CONTRIBUTING.md in the repo for branch names and review expectations.

If you publish results or comparisons, tie them to a prompt revision and a short methodology note so others can reproduce your runs.