Adaptive Occlusion

November 6th, 2006 by Stefano Jannuzzo

The standard occlusion shaders sample the environment by sending out a bunch of rays, and return a color based on the percentage of rays that hit an object. Mathematically speaking, they integrate over the hemisphere (or the cone) centered on the normal with the given number of samples. As we know, a higher sampling rate gives better results, because the more samples there are, the better the integral is computed. Unfortunately, this integration is a “brute force” one, i.e. the same number of samples is sent from every point of the occluded surface, without exploiting any of the standard integration optimization techniques.

So I tried to help the XSI ambient occlusion shader save as many samples as possible while still producing a nice result.

Double occlusion shaders to reduce sample count

The basic idea is: let’s first “run” the occlusion with a few (say 16) samples, and resample with a higher sample count (say 160) only if needed. After the first occlusion is done and has returned its result, there are three possible outcomes:

  1. result == bright_color: no ray hit an object, the point is not occluded at all
  2. result == dark_color: all rays hit some objects, the point is fully occluded
  3. dark_color < result < bright_color: the point is partially occluded

In cases 1 and 2, it’s very likely that with a higher sampling rate the result would have been the same, so the result of our rendertree can be set to bright_color (1) and dark_color (2) respectively.

In case 3 we’re in a mixed area, so we evaluate a second occlusion shader with more samples. This is very similar to what mental ray does to reduce aliasing: each pixel is sampled adaptively between min samples and max samples, and more samples (up to the max) are taken only where needed.
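The two-pass strategy can be sketched in plain Python. This is an illustration of the idea, not the actual rendertree: `occlusion` is a hypothetical callable standing in for the occlusion shader, and colors are plain floats for simplicity.

```python
# Sketch of the adaptive two-pass strategy (illustrative, not the actual
# XSI rendertree). `occlusion` is a hypothetical callable standing in for
# the occlusion shader; colors are plain floats for simplicity.

def adaptive_occlusion(occlusion, bright_color, dark_color,
                       low_samples=16, high_samples=160, eps=0.01):
    """Run a cheap pass; resample at a higher rate only in mixed areas."""
    result = occlusion(low_samples)
    if abs(result - bright_color) < eps:   # case 1: no ray hit anything
        return bright_color
    if abs(result - dark_color) < eps:     # case 2: fully occluded
        return dark_color
    return occlusion(high_samples)         # case 3: mixed area, resample
```

The expensive pass only runs in the mixed case, which is where all the savings come from.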

In our XSI rendertree, we basically need to distinguish the three scenarios.

This picture shows how to find out if the occlusion output is equal to bright_color or dark_color. The two colors are subtracted from the output, and if the absolute value of the difference is smaller than a threshold (~0.01) then one of the two rightmost nodes will output true.
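The subtract/absolute-value/compare node chain amounts to a per-channel threshold test. A minimal sketch, with colors represented as illustrative (r, g, b) tuples:

```python
# Per-channel version of the "is the result equal to bright/dark?" test:
# subtract the two colors, take the absolute value of each channel, and
# compare against a small threshold. The tuple representation is illustrative.

THRESHOLD = 0.01

def colors_match(a, b, eps=THRESHOLD):
    """True when every channel of `a` is within `eps` of `b`."""
    return all(abs(ca - cb) < eps for ca, cb in zip(a, b))
```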

In order to switch between three colors, we take a Mix_8colors node, set the weights to white, and wire the three boolean flags so that the output is:

Case 1: bright_color, because IsBright==true, IsDark==false, BothFalse==false
Case 2: dark_color, because IsBright==false, IsDark==true, BothFalse==false
Case 3: a better occlusion, because IsBright==false, IsDark==false, BothFalse==true

By “better occlusion” we mean a clone of the previous occlusion shader with a much higher sampling rate.

Note that we’re not considering the case where bright_color == dark_color, which is a possible case if you are texturing the occlusion inputs.
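Reduced to code, the Mix_8colors selection behaves like the following sketch, under the assumption that at most one flag is true (which holds as long as bright_color and dark_color differ, per the note above). The names are illustrative, not XSI node names:

```python
# The three-way switch: each flag selects its color; when both are false,
# the high-sample clone is evaluated. `better_occlusion` is a callable so
# the expensive shader only runs in case 3 (names are illustrative).

def mix_three(is_bright, is_dark, bright_color, dark_color, better_occlusion):
    if is_bright:                # case 1: fully unoccluded
        return bright_color
    if is_dark:                  # case 2: fully occluded
        return dark_color
    return better_occlusion()    # case 3: BothFalse, resample
```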

This is the full rendertree (get the preset here)

This technique is a good time saver in scenes where there is a significant percentage of surfaces either fully occluded or fully lit, or when you’re using a narrow occlusion spread.

Compared to the classic approach, it adds the overhead of the occlusion shader used to perform the test and of the nodes leading to the final mixer, so you should avoid it when the whole scene is only partially occluded.

You may notice some noise in the “almost bright” areas. This is where the occlusion used for testing was less efficient.

We can fix the problem by filtering off the near-bright values with a linear gradient node, as shown in the picture.
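One way to picture what the gradient does is as a remap that snaps near-bright values to pure bright, flattening the noisy band. This is a hypothetical sketch, and the 0.9 knee is an assumed value, not taken from the preset:

```python
# Hypothetical sketch of the linear-gradient fix: remap near-bright
# occlusion values to pure bright so the noisy "almost bright" band is
# flattened. The 0.9 knee is an illustrative choice, not from the preset.

def filter_brightness(value, knee=0.9):
    if value >= knee:
        return 1.0           # snap near-bright values to fully bright
    return value / knee      # linear ramp over the remaining range
```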

Finally, a (lucky) test
Standard occlusion (132 samples): 48s

Standard occlusion (16 samples): 10s

Our method (16 + 132 samples): 21s
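As a rough sanity check on these numbers, assume render time scales linearly with rays cast: the adaptive time should be about the test time plus the mixed-area fraction times the full-quality time. Solving for that fraction from the timings above:

```python
# Back-of-the-envelope model of the timings above (assumes render time is
# roughly linear in rays cast):
#   t_adaptive ~= t_test + f_mixed * t_full
# Solving for f_mixed, the fraction of the image that fell into case 3.

t_test, t_full, t_adaptive = 10.0, 48.0, 21.0
f_mixed = (t_adaptive - t_test) / t_full
print(f"implied mixed-area fraction: {f_mixed:.0%}")  # about 23%
```

So in this (lucky) scene roughly three quarters of the pixels were caught by the cheap pass, which is why the method pays off here.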

21 Responses to “Adaptive Occlusion”

  1. As a new user of XSI, this sort of stuff is great. I’ve been very pleased with XSI’s structure, so learning new ways of tuning it further just makes for a brighter day.

  2. lola says:

    Is this really different from the result obtained with the ctrl_occlusion shader and its integrated adaptive sampling technique?

  3. I never used ctrl_occlusion, and its approach is probably clever. However, I like playing with the standard nodes as much as possible, as we don’t rely on 3rd parties here, due to the different operating systems and such that we use.

  4. LowJacK says:

    This is very clever. Obviously, the higher the sample rate, the longer the render. What I would like to know is: what are your render times, and how much do they improve using your method?

  5. LowJack,

    Renders A, B, C refer to the three torus images in order of appearance.
    Render A (standard occlusion, 132 samples): 48s.
    Render B (standard occlusion, 16 samples): 10s.
    Render C (technique, 16 followed by 132 samples): 21s.

    Render C matches A’s quality in less than half the time. It is slower than B, but the quality of the output can’t even be compared.

  6. Harry bardak says:

    hi stefano.

    Did you try it on more complex objects to compare the times? I remember that an old version of dirtmap had this adaptive sampling. The problem was that you gained render time only when you had a setup similar to yours, and then Daniel removed it.

    Can you confirm ?

  7. As mentioned, the method is worthwhile only under specific circumstances, i.e. when you know in advance that a given area of the scene is either fully bright or fully dark.
    If for instance the test occlusion has 20 samples and the real one 100, the preset will bring an overhead of 20% in terms of sample rays, so you should know in advance that at least 20% of the area is “safe”.
    On top of that you have the other nodes (especially the gradient), so I would raise that threshold by another 10%.
    The scenario that best ensures “safe” areas is when you set a narrow spread. With a large spread (say 180 degrees, I’m not sure how it maps to the actual parameter), there is almost no safe zone at all.

  8. Stefano, what about having a similar system for area lights? I mean, in a shadow pass you could retrieve where a shadow is completely black (white, to be correct), where it is completely absent (black), and where area calculations are performed. Could we build a tree to perform such operations on a light shader?

    Hope this makes sense..

    Great job anyway, as always…


  9. Gianfranco, I’m afraid it is not possible, because the uv samples belong to the light primitive node, so they can’t be textured, nor can the primitive itself be used as input to anything.

  10. I see… thanks for your answer anyway! Unfortunately I really can’t go deep into mental ray’s functionalities… it just came up as an idea.

  11. Jakob Schindegger says:

    Thanks Stefano for this interesting work on speeding up the AO process.
    Though, could you check the saved rendertree preset? I think the download isn’t working anymore.

  12. Hi Jakob,

    I tried to download the preset this morning from the following address and everything went fine for me. Could you further explain the problem you are having?

    This is the link to the preset:
    or you can right click the link in the article and choose ‘save link as…’

  13. Moods says:

    I can’t download the preset either.
    Right click, save as, and I get an HTML file with a .preset filename extension (I opened it with Notepad).

  14. Jakob, Moods,

    The preset should be fine now. I hadn’t realized that the download was working but the content of the file was bad. All fixed now. Do leave a comment to confirm, please.

    Sorry for the inconvenience.

  15. Robert Cole says:

    Hi Patrick, nice work, thank you for sharing.
    I also cannot get the preset loaded from
    …Softimage\XSI_5.11\Data\DSPresets\Shaders\Material. I have tried loading the preset from the “material” node, the “illumination” node, the “mix8colors” node, and by simply dragging and dropping it onto the object,
    but still no luck. The preset looks fine as far as I can tell, but I have had no luck figuring out which node will load it.

  16. Jakob Schindegger says:

    I swear it didn’t work the time I posted. Thanks Patrick for answering.

  17. [...] IBlog
    People and thoughts behind XSI in production…

    « Adaptive Occlusion Motion Vector Driven Occlusion November 14th, 2006 by Guillaume Laforge [...]

  18. Jakob Schindegger says:

    Works fine now. I also didn’t know how to load it, but the drag and drop onto the object worked for me. Thanks again for fixing it.

  19. Moods says:

    OK, OK, thank you so very much for fixing it.
    Now I can download the real .preset file ^_^. Though I can’t load it either, the drag & drop works for me.

  20. George R says:

    I saw a paper somewhere on PRMan occlusion optimisation where someone combined this (light/dark sampling) with distance-based sampling (close to and further away from the camera) and incidence-to-camera based sampling. It would make for a mess of a render tree, but I’m guessing good results under certain circumstances.

  21. George R says:

    Oh here it is. [url][/url]

    How come you can’t directly control the sampling with an integer input?