The standard occlusion shaders sample the environment by sending out a number of rays, and return a color based on the percentage of rays that hit an object. Mathematically speaking, they integrate over the hemisphere (or the cone) centered on the normal with the given number of samples. As we know, a higher sampling rate gives better results: the more samples, the better the integral is estimated. Unfortunately, this integration is a “brute force” one, i.e. the same number of samples is sent over the whole occluded surface, without exploiting any of the standard integration optimization techniques.
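To make the idea concrete, here is a minimal sketch of the Monte Carlo estimate such shaders compute. The names `ambient_occlusion` and `trace` are hypothetical, not part of any XSI API; `trace(direction)` stands in for the renderer's ray cast and returns True on a hit.

```python
import math
import random

def ambient_occlusion(n_samples, trace):
    """Monte Carlo estimate of hemisphere occlusion.

    trace(direction) returns True if a ray in that direction hits an object.
    Returns 1.0 for fully unoccluded (bright), 0.0 for fully occluded (dark).
    """
    hits = 0
    for _ in range(n_samples):
        # Pick a uniform direction on the hemisphere above the normal (z up).
        z = random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        if trace((r * math.cos(phi), r * math.sin(phi), z)):
            hits += 1
    return 1.0 - hits / n_samples
```

Every sample costs the same here regardless of how obvious the result is, which is exactly the “brute force” behavior described above.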
So I tried to help the XSI ambient occlusion shader save as many samples as possible while still producing a nice result.
Double occlusion shaders to reduce sample count
The basic idea is: let's first run the occlusion with a few samples (say 16) and, only if needed, resample with a higher sample count (say 160). After the first occlusion is done and has returned its result, there are three possible outcomes:
- result == bright_color: no ray hit an object, the point is not occluded at all
- result == dark_color: all rays hit some objects, the point is fully occluded
- dark_color < result < bright_color: some rays hit an object, the point is partially occluded
In cases 1 and 2, it’s very likely that a higher sampling rate would have returned the same result, so the output of our rendertree can be set to bright_color (case 1) or dark_color (case 2) respectively.
In case 3 we’re in a mixed area, so we evaluate a second occlusion shader with more samples. This is very similar to what mental ray does to reduce aliasing: a pixel is sampled adaptively between its min and max samples, and more samples (up to the value defined through max samples) are taken only where needed.
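A back-of-envelope estimate shows why this pays off. Every pixel pays for the cheap probe, but only the mixed-area pixels pay for the refinement; the 20% mixed-area share below is an assumption for illustration, not a measurement.

```python
low, high = 16, 160        # probe samples vs. refinement samples
mixed_fraction = 0.20      # assumed share of pixels in the mixed area

# Average rays per pixel with the two-pass scheme: the probe everywhere,
# plus the refinement only where the probe was inconclusive.
adaptive_avg = low + mixed_fraction * high   # 48 rays per pixel on average
flat_cost = high                             # classic single-shader cost: 160
```

Under that assumption the two-pass tree shoots roughly a third of the rays of a flat 160-sample occlusion.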
In our XSI rendertree, we basically need to distinguish the three scenarios.
This picture shows how to find out if the occlusion output is equal to bright_color or dark_color. The two colors are subtracted from the output, and if the absolute value of the difference is smaller than a threshold (~0.01) then one of the two rightmost nodes will output true.
In order to switch between the three colors, we use a Mix_8colors node with its weights set to white, so that it outputs:
Case 1: bright_color, because IsBright==true, IsDark==false, BothFalse==false
Case 2: dark_color, because IsBright==false, IsDark==true, BothFalse==false
Case 3: a better occlusion, because IsBright==false, IsDark==false, BothFalse==true
By “better occlusion” we mean a clone of the previous occlusion shader, with a much higher sampling rate.
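The test-and-select logic of the tree can be sketched in code. The names `close` and `select_color` are mine, not XSI nodes; colors are RGB tuples, and `better_occlusion` stands in for the high-sample clone evaluated only in case 3.

```python
def close(a, b, eps=0.01):
    """Subtract the two colors and compare the absolute difference of each
    channel to a threshold, like the subtract + threshold branch of the tree."""
    return all(abs(x - y) < eps for x, y in zip(a, b))

def select_color(result, bright, dark, better_occlusion):
    """Emulates the Mix_8colors selection driven by the three booleans."""
    is_bright = close(result, bright)
    is_dark = close(result, dark)
    if is_bright:                  # case 1: IsBright == true
        return bright
    if is_dark:                    # case 2: IsDark == true
        return dark
    return better_occlusion()     # case 3: BothFalse == true
```

Note how `better_occlusion` is passed as a callable, mirroring the fact that the expensive shader is only evaluated when the mixer actually needs its branch.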
Note that we’re not considering the case where bright_color == dark_color, which can happen if you are texturing the occlusion inputs.
This is the full rendertree (get the preset here)
Compared to the classic approach, this brings the overhead of the occlusion shader used to perform the test and of the nodes leading to the final mixer, so you should avoid it when the whole scene is only partially occluded (the second shader would then run almost everywhere).
You may notice some noise in the “almost bright” areas: there the low-sample test occlusion sometimes misses every occluder and wrongly classifies the point as fully bright, so it never gets resampled.