Tuesday, November 21, 2017

A better depth buffer for raymarching

When doing any type of raymarching over a depth buffer, it is very easy to determine that there is no occluder – the depth in the buffer is farther away than the current point on the ray. However, when the depth in the buffer is closer, you might be occluded or you might not, depending on a) the thickness of the occluder and b) whether there are any other occluders behind the first one, and their thickness. It seems most people assume a) is either infinite or a constant value, while b) is ignored altogether.
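To make this concrete, here is a minimal C++ sketch of the conventional test (the function and parameter names are mine, not the renderer's); assuming an infinite thickness simply means passing a very large value:

```cpp
// Minimal sketch of the usual occlusion test during screen space raymarching.
// sceneDepth is the depth buffer value at the ray sample's screen position,
// rayDepth is the depth of the current point along the ray, and thickness is
// the assumed occluder thickness.
bool isOccluded(float rayDepth, float sceneDepth, float thickness)
{
    // No occluder: the surface in the buffer is farther away than the ray point.
    if (rayDepth < sceneDepth)
        return false;

    // The ray point is behind the first surface; count it as occluded only
    // while it is within the assumed thickness of that surface.
    return rayDepth < sceneDepth + thickness;
}
```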

Since my new renderer is entirely based around screen space raymarching, I wanted to improve on this to make it more accurate. This has been done before, but mostly in the context of order independent transparency (I think).

Let's look at a scene where the occluders are assumed to have infinite depth (I have tweaked the lighting for more distinct shadows to get a better look at raymarching artefacts, so the lighting does not exactly match the environment in these screenshots).


At first glance it may look okay, but at certain angles it is very evident that something is off:


Even an object that is visibly thin will receive a shadow as if it were infinitely thick. The go-to trick in this situation is to hardcode a thickness and tweak until it looks acceptable:


Still artefacts, but much better. However, for most scenes it's just not possible to find one single thickness that works for everything. What we ideally want is the actual object thickness per pixel. One relatively cheap way of approximating thickness is to render a depth buffer for back faces. As long as objects don't overlap, are closed and reasonably convex, the difference between front face depth and back face depth is actually a pretty accurate representation of the object thickness.
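As an illustration of the extra pass, something along these lines could work in OpenGL (the post doesn't name a graphics API, so both the API and the helper names here are my assumptions):

```cpp
#include <GL/gl.h>

// Hypothetical back face depth pass. With front faces culled and a regular
// less-than depth test, the nearest back face wins, which for closed,
// non-overlapping, reasonably convex objects is the back side of the same
// object the front face pass saw. The back face depth target is assumed to
// be bound already; drawSceneDepthOnly stands in for the renderer's own
// depth-only draw call.
void renderBackFaceDepth(void (*drawSceneDepthOnly)())
{
    glClear(GL_DEPTH_BUFFER_BIT);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);   // rasterize back faces only
    glDepthFunc(GL_LESS);   // keep the nearest back face
    drawSceneDepthOnly();
    glCullFace(GL_BACK);    // restore default culling
}
```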


I store front face and back face depth in different channels of the same texture, so I just fetch RG instead of R for each pixel and compare the ray depth to both values when raymarching, making it really cheap. This removes a lot of artefacts, but there is still room for improvement.
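With both values in hand, the per-sample test becomes a simple interval check; a sketch in the same spirit as before:

```cpp
// Sketch of the occlusion test with front face depth in R and back face
// depth in G, as described above (illustrative names).
bool isOccludedFrontBack(float rayDepth, float frontDepth, float backDepth)
{
    // Occluded only while the ray point lies between the front and back
    // faces, i.e. inside the object's approximate thickness.
    return rayDepth > frontDepth && rayDepth < backDepth;
}
```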

It is hard to visualize in a still image, but with a moving camera it becomes very clear that shadows are only visible for the first layer of objects. As soon as an object disappears behind something, its shadow is also gone. This is of course particularly evident with long shadows from, say, a sunset.

Creating another layer of depth information is called depth peeling and there are several ways to do it. I use the stencil buffer, but it can also be done by discarding fragments in a shader. I already mentioned that I store front and back face depth in two different channels of the same texture, so why not add another layer of front and back face depth and make it a full, four channel texture? All four depth values (first front, first back, second front, second back) can still be fetched as a single texture read, making it really fast.
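Under that layout, the test just extends to two intervals per texel; a sketch:

```cpp
// Sketch of the same test extended to two peeled layers, fetched as one RGBA
// texel: (first front, first back, second front, second back).
bool isOccludedTwoLayers(float rayDepth,
                         float front0, float back0,
                         float front1, float back1)
{
    // The ray point counts as occluded if it is inside either layer.
    return (rayDepth > front0 && rayDepth < back0) ||
           (rayDepth > front1 && rayDepth < back1);
}
```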

One could imagine doing even more depth layers, but the visual improvement would be hard to notice.

Thursday, November 9, 2017

Upscaling half resolution screen space effects

When working with diffuse lighting and ambient occlusion in screen space, it is often very tempting to do computations at a lower resolution. Most of it is blurry anyway, and for any kind of GI/path tracing, diffuse lighting is undoubtedly the bottleneck. Here is a test scene with all colours set to white and no textures.



With only the diffuse lighting enabled, the image looks strangely familiar.



You quickly realise that diffuse lighting makes up the lion's share of the entire image. Since everything is the same colour, two overlapping objects can be told apart only because they differ in diffuse lighting. Therefore, lowering the resolution of the diffuse lighting also means that a lot of edges will effectively be rendered at half resolution, and the same diffuse lighting suddenly looks like this.



Not acceptable (click on image to view full resolution), but note that the image looks perfectly fine over larger areas where there are no edges, and also at the contours towards the skybox. I've come to think of two solutions to this problem:

1) Render at half resolution. Detect edges and re-render pixels near edges during upsampling. This would probably work very well, but I haven't tried it yet.

2) A cheaper solution would be to cover up faulty pixels on the edges using neighbouring pixels from the same surface (it's all blurry, remember?), practically retouching the edges much the same way you retouch images in Photoshop.

I decided to try the latter and got some interesting results. First I create a 2D "retouching" vector field. It is basically just a distance offset, telling each pixel where to fetch its samples. In the middle of a surface this will be (0,0), and near an edge it will point away from the edge. If you have any way of classifying surfaces in a shader this is actually really cheap to do. I just use a unique number for each smoothing group to identify smooth surfaces, and for each pixel I check the eight neighbouring pixels and average the offsets to the ones that are in the same smoothing group. Ta-da, the average offset will now point in a direction away from each edge, and the retouch vector field looks something like this (here visualized upscaled and with absolute values):



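For illustration, here is a rough CPU-style C++ sketch of how such a field could be built; the buffer layout and names are my own assumptions, not the actual shader code:

```cpp
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Build the retouch vector field on the half resolution buffers. groupId
// holds a unique smoothing group number per pixel; the resulting offset is
// measured in pixels and stays (0,0) in the interior of a surface.
std::vector<Vec2> buildRetouchField(const std::vector<uint32_t>& groupId,
                                    int width, int height)
{
    std::vector<Vec2> field(width * height, Vec2{0.0f, 0.0f});
    for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        const uint32_t center = groupId[y * width + x];
        float sumX = 0.0f, sumY = 0.0f;
        int count = 0;
        // Average the offsets to the eight neighbours that belong to the
        // same smoothing group. Near an edge these all lie on the interior
        // side, so the average points away from the edge; in the middle of
        // a surface they cancel out.
        for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            if (dx == 0 && dy == 0) continue;
            const int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            if (groupId[ny * width + nx] == center)
            {
                sumX += (float)dx;
                sumY += (float)dy;
                count++;
            }
        }
        if (count > 0)
            field[y * width + x] = Vec2{sumX / count, sumY / count};
    }
    return field;
}
```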
Now if you process the downscaled, half resolution diffuse lighting through this retouch field during upscaling, the resulting image will magically look like this:



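The upscaling lookup itself then just offsets the fetch position by the field before reading the half resolution lighting. Here is a point-sampled, single-channel CPU sketch (in practice this would be a couple of lines with a bilinear fetch in the upscale shader):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };   // same layout as in the field-building sketch

// Fetch half resolution diffuse lighting for a full resolution pixel,
// displaced by the retouch vector so pixels near an edge read from the
// interior of their own surface instead of bleeding across the edge.
float upscaleDiffuse(const std::vector<float>& halfResDiffuse,
                     const std::vector<Vec2>& retouchField,
                     int halfWidth, int halfHeight,
                     int fullX, int fullY)
{
    // Map the full resolution pixel to its half resolution counterpart.
    const int x = fullX / 2;
    const int y = fullY / 2;

    const Vec2 offset = retouchField[y * halfWidth + x];
    const int sx = std::clamp(x + (int)std::lround(offset.x), 0, halfWidth - 1);
    const int sy = std::clamp(y + (int)std::lround(offset.y), 0, halfHeight - 1);

    return halfResDiffuse[sy * halfWidth + sx];
}
```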
Congratulations, you just saved ~75% of the processing time for your diffuse lighting. There are artefacts, as always, but I found the results to be acceptable in most situations. Computing diffuse lighting at half resolution (a quarter of the pixel count) allowed me to do eight samples per pixel instead of two, resulting in more accurate lighting and less noise.

Another really nice property of the retouch vector field is that once you've created it, you can reuse it for any screen space upscaling you might do. For instance, I reuse the same field when upscaling screen space reflections, and I'm hoping to use it for smoke particles as well once I get there.