For my project Dreamscape, I wanted to build an abstract, dream-like aesthetic. I decided to go with an unlit, textureless 3D approach - I’d been experimenting with gradients and wanted to carry that look into a game environment.
But I had a problem - without any lighting, objects in the environment tended to blend together, especially the walls of the dream world. It was nearly impossible to navigate. How was I going to maintain my vision without compromising the actual gameplay?
I came to realize that all lighting does is make some areas brighter (closer to white) and some areas darker (closer to black). So if I could do that in a weird and abstract way, I could keep my aesthetic while having a readable environment. While I considered ambient occlusion, fake lighting, and other options, I decided to go with a depth-based approach. What follows is my implementation in Unity, along with a demo to see the effect for yourself.
The Unity implementation is fairly straightforward. For this demo I’m using world space coordinates for the gradient, instead of using uv coordinates like I did in the actual game. Either way, the gradient is a linear interpolation between two colors set in the inspector. The depth is a value from 0 to 1 that will be multiplied by the gradient to get the final color.
I have a _Height parameter and an _Offset parameter to change the size and start point of the gradient, respectively. The _DepthStrength parameter, which you can change in the demo GUI, affects how much the depth darkens the final color.
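Putting that together, the shader properties and the core of the fragment logic might look something like this. This is a sketch, not the project's exact code: the color names (_ColorA, _ColorB) and the struct fields are assumptions, and only _Height, _Offset, and _DepthStrength come from the text above.

```hlsl
// Sketch of the inspector-facing properties (ShaderLab).
// _ColorA / _ColorB are hypothetical names for the two gradient colors.
Properties
{
    _ColorA ("Bottom Color", Color) = (0, 0, 0, 1)
    _ColorB ("Top Color", Color) = (1, 1, 1, 1)
    _Height ("Gradient Height", Float) = 10
    _Offset ("Gradient Offset", Float) = 0
    _DepthStrength ("Depth Strength", Range(0, 1)) = 0.5
}
```

```hlsl
// Sketch of the fragment shader: a world-space gradient darkened by depth.
fixed4 frag (v2f i) : SV_Target
{
    // Map world-space height into a 0-1 position along the gradient.
    float t = saturate((i.worldPos.y - _Offset) / _Height);
    fixed4 gradient = lerp(_ColorA, _ColorB, t);

    // i.depth is 0-1, with 1 meaning "close" (see the inversion below).
    // _DepthStrength blends between no darkening (1) and full darkening.
    return gradient * lerp(1.0, i.depth, _DepthStrength);
}
```

With _DepthStrength at 0 the depth term is ignored entirely, which matches the demo behavior described later: the room becomes a flat gradient with no depth cue.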
You may be wondering why copying the fragment position into a texture coordinate output in the vertex shader is necessary.
While writing this shader, I was frustrated that I couldn’t just get the depth position from the fragment position itself. It turns out that the SV_POSITION semantic offsets the clip space coordinates by 0.5, to solve the “half-texel” offset issue, which distorts the depth data. I could have taken the standard approach and used Unity’s built-in depth texture generation, but I wanted to do it all in the shader so I experimented until I found this workaround. If you assign the fragment position to a texture coordinate type (before leaving the vertex shader), the half-texel offset won’t be applied.
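In practice the workaround looks roughly like this. It's a sketch under my assumptions about the struct layout; the field names are made up, but UnityObjectToClipPos and unity_ObjectToWorld are Unity's standard built-ins.

```hlsl
struct v2f
{
    // The rasterizer applies the half-texel offset to SV_POSITION...
    float4 pos : SV_POSITION;
    // ...but leaves TEXCOORD outputs alone, so this copy of the
    // clip-space position survives interpolation undistorted.
    float4 screenPos : TEXCOORD0;
    float3 worldPos : TEXCOORD1;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.screenPos = o.pos;           // the workaround: duplicate into a TEXCOORD
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}
```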
Additionally, the pixel depth and the effect of the depth are inverted: closer objects (which have low depth values) should be nearer to 1 (full gradient color), not 0 (black). That’s why the depth needs to be inverted when it’s calculated.
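With the clip-space position available in the fragment shader, the inverted depth could be computed along these lines. Treat this as a sketch: platform depth conventions differ (some graphics APIs use a reversed Z range), so the exact expression may need flipping or rescaling on your target.

```hlsl
// Perspective divide recovers the normalized depth from the
// clip-space position we carried through TEXCOORD0.
float rawDepth = i.screenPos.z / i.screenPos.w;

// Invert so near surfaces stay at full gradient color (1)
// and distant surfaces fall toward black (0).
float depth = saturate(1.0 - rawDepth);
```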
This demo is a player camera in a room with a cube in the center. You can move, look around, and change the strength of the depth effect. Try it out for a bit:
If you set the depth strength all the way to zero, you’ll see how hard it is to tell where you are in the room. If you set the depth strength to max, you’ll see the effect break down and pure black show up in the corners of the room. Notice how for the walls, the depth strength has a large effect on the scene, whereas for the cube, there’s barely any impact.
To take this effect further, more parameters could be added to tune the depth more finely. For example, a maximum depth could be added if the game needed to support longer view distances. In Dreamscape I changed the gradients over time between different color sets. The effect could even be tied into gameplay, with the player’s vision determining where depth is applied and where it is not.
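The maximum-depth idea, for instance, could be a one-line change. This is a hypothetical extension (the _MaxDepth parameter does not exist in the project): dividing by a tunable range before inverting keeps far geometry from clipping to pure black.

```hlsl
// Hypothetical _MaxDepth parameter: depths at or beyond _MaxDepth
// clamp to 0 (fully dark) instead of overshooting into negatives,
// letting the gradient remain readable over longer distances.
float rawDepth = i.screenPos.z / i.screenPos.w;
float depth = saturate(1.0 - rawDepth / _MaxDepth);
```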
I really liked this effect in the game because of how dynamic it was. By walking up to a wall, the world seems to fade in and out of existence. The implementation is simple but has a lot of adjustable parameters to change the effect. But most importantly, it sets the right tone and aesthetic for the game, which was my goal in the first place.
You can download a copy of the project I used to make the demo here.