What you need to reproduce these examples:
I worked for two years at Framestore-CFC in London, which means I used Maya and PRMan as my main applications. During those years I learned a lot of things, but I also heard a lot of false assumptions about rendering in general. Typically the comment was that PRMan was more suitable for rendering complex characters and that mental ray was very slow at this kind of job. In fact, the people arguing this don't really know the state of modern renderers such as mental ray, V-Ray or any other: they assume everything has been frozen for the last five years, or they just repeat what the veterans keep repeating.
But I am not starting a new flame war between renderers. Actually I don't care which is the best: today, in 2008, you can produce beautiful pictures with any renderer on the market (and yes, that includes Maya's software renderer, well known to be a piece of crap ;) ). What matters most is the people behind the scenes. But I need to put everything in its proper context to elaborate, and maybe justify, what I will describe later and why.
Actually mental ray is very fast, if you do things correctly. If you don't do things correctly (such as massive spatial and/or temporal oversampling), it, like any other renderer, will be very slow. But I always get this "PRMan displaces faster" stuff. Of course it does… until you actually trace a ray.
You have to know that PRMan lives in a mindset where raytracing is so slow that you avoid it at all costs. The thing is, if you try to shoot a ray in PRMan, it too has to do all the things a raytracer does by nature. So the minute you actually shoot a single ray, PRMan has to do what mental ray always does… and the comparison suddenly isn't so much in favour of PRMan any more.
So people using PRMan have developed a completely different approach to the same problem, lighting and rendering a scene, while avoiding raytracing at all costs. That is what is interesting. What happens if I use these approaches with mental ray inside XSI? Maybe I can get a hybrid solution that keeps the best of both worlds to render my character(s)?
Diffuse Convolution on an Environment Map
What's that? To keep it simple, you can think of a diffuse convolved map as an envmap with a smart blur applied: one that gives you the result you would get by sampling the original envmap with an infinite number of Final Gathering rays on a perfect Lambertian surface. For more information you can refer to Paul Debevec's research (the father of HDR images).
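If you want to see what the convolution actually computes, here is a minimal brute-force sketch in Python with numpy. It is my own illustration, not HDRShop's code, and the lat-long mapping convention is an assumption: for every output direction it integrates the envmap against Lambert's cosine lobe.

    import numpy as np

    def latlong_dirs(width, height):
        # Unit direction for each texel of a longitude/latitude map.
        # Convention (assumed): u wraps longitude, v goes from the +Y
        # pole (top) to the -Y pole (bottom).
        u = (np.arange(width) + 0.5) / width
        v = (np.arange(height) + 0.5) / height
        phi, theta = np.meshgrid((u - 0.5) * 2.0 * np.pi, v * np.pi)
        return np.stack([np.sin(theta) * np.sin(phi),
                         np.cos(theta),
                         np.sin(theta) * np.cos(phi)], axis=-1)

    def diffuse_convolve(env, out_w=64, out_h=32):
        # Brute-force cosine-weighted convolution of a lat-long HDR map.
        # env: (H, W, 3) float array. The result is a small map where a
        # single lookup along the surface normal returns the diffuse
        # illumination of a perfect Lambertian surface.
        h, w, _ = env.shape
        dirs = latlong_dirs(w, h).reshape(-1, 3)
        theta = (np.arange(h) + 0.5) / h * np.pi
        # Solid angle of each source texel: sin(theta) * dtheta * dphi.
        dw = np.broadcast_to(np.sin(theta)[:, None] * (np.pi / h)
                             * (2.0 * np.pi / w), (h, w)).reshape(-1)
        radiance = env.reshape(-1, 3)
        normals = latlong_dirs(out_w, out_h).reshape(-1, 3)
        out = np.empty((out_h * out_w, 3))
        for i, n in enumerate(normals):
            cosine = np.clip(dirs @ n, 0.0, None)  # Lambert's cosine term
            # Divide by pi so a constant environment stays constant.
            out[i] = (radiance * (cosine * dw)[:, None]).sum(axis=0) / np.pi
        return out.reshape(out_h, out_w, 3)

This loop visits every source texel once per output texel, which is exactly why you precompute the convolution once instead of doing it at render time.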
Having your envmap pre-blurred is a great advantage because you don't need to sample it several times to get the correct illumination. A single sample suffices to get the illumination of your surface. In fact you must cast only one ray: otherwise you will average the map twice and the illumination will no longer be correct.
The ray you cast must also go in the same direction as your surface normal. XSI's Ambient Occlusion shader, set up correctly, will do the job perfectly: one sample, a very small spread so the ray doesn't deviate from the normal (0.01), and the mode set to environment sampling. Obviously you could code a shader that does the correct environment lookup; I am using XSI's AO shader because it's available out of the box.
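For the curious, that single lookup boils down to something like this (a sketch in Python, assuming the same lat-long convention as the sketch above; the function names are mine):

    import numpy as np

    def env_lookup(conv_map, normal):
        # One lookup into the pre-convolved lat-long map, straight along
        # the surface normal -- the one "ray" the AO shader casts for us.
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        phi = np.arctan2(n[0], n[2])                  # longitude
        theta = np.arccos(np.clip(n[1], -1.0, 1.0))   # angle from +Y
        h, w, _ = conv_map.shape
        x = min(int((phi / (2.0 * np.pi) + 0.5) * w), w - 1)
        y = min(int((theta / np.pi) * h), h - 1)
        return conv_map[y, x]                         # nearest-texel lookup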
Render Balls: a simple test.
The way I set up the scene is very simple. I tried to isolate the contribution of the convolved envmap and only that, so there is no illumination model, just the different approaches.
The HDR used for this test is the one called beach.hdr that you can find on Debevec's website. The map you download is an angular map, so you have to convert it to a spherical coordinate system. The easiest way is to open it in HDRShop and convert the angular map to a longitude/latitude map. Once the map is ready, I put it in my scene as an environment map.
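If you are wondering what that panoramic transform does, here is a rough Python sketch of the angular-to-lat-long resampling, reusing latlong_dirs from the first sketch. The angular mapping convention is assumed (distance from the image centre proportional to the angle from the forward axis); matching Debevec's exact orientation may need a flip or a rotation.

    import numpy as np

    def angular_to_latlong(probe, out_w=1024, out_h=512):
        # probe: square (N, N, 3) float array holding the angular map.
        # Forward axis assumed along +Z of our lat-long frame.
        n = probe.shape[0]
        dirs = latlong_dirs(out_w, out_h).reshape(-1, 3)
        dx, dy, dz = dirs[:, 0], dirs[:, 1], dirs[:, 2]
        r = np.arccos(np.clip(dz, -1.0, 1.0)) / np.pi   # radius in [0, 1]
        denom = np.sqrt(dx * dx + dy * dy)
        denom[denom == 0.0] = 1.0
        u = r * dx / denom                              # in [-1, 1]
        v = r * dy / denom
        px = np.clip(((u + 1.0) * 0.5 * n).astype(int), 0, n - 1)
        py = np.clip(((1.0 - (v + 1.0) * 0.5) * n).astype(int), 0, n - 1)
        return probe[py, px].reshape(out_h, out_w, 3)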
The first sphere is the result of the AO shader set to environment sampling mode with a spread of 0.8. I had to crank the sampling up to 1024 to get a smooth solution. AO environment sampling is a brute-force approach: there is no importance sampling, which means you have to cast a lot of rays to get a smooth result, especially with a high dynamic range image. And obviously that is very slow; it's the slowest render.
With the second sphere I used FG to sample the environment. Same settings as before, except that I used a shader that returns irradiance and activated FG. FG is a lot faster than the previous method because it doesn't sample every point with 1024 rays. Instead, FG samples a certain number of points and interpolates the result between the points calculated. For this comparison I pushed the number of rays to 4096 to make sure I had enough accuracy for a fair comparison with the quality of the next approach. Even with the number of rays pushed to 4096, it was still faster to render with FG.
With the last sphere I made a diffuse convolution in HDRShop (using the SH_diffuse plug-in because it's a lot faster than HDRShop's built-in function), applied the resulting image as an environment map and used XSI's AO shader with the sampling set to 1 and the spread to 0.01.
The render time is ultra fast and matches the FG solution. It's practically realtime, because the convolution has already been calculated once in HDRShop.
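Why is SH_diffuse so much faster? Instead of integrating the whole map once per output texel, it projects the map onto nine spherical-harmonic coefficients in a single pass and applies the cosine lobe analytically, the trick described by Ramamoorthi and Hanrahan. Here is a sketch of that approach, again reusing latlong_dirs from the first sketch; this is my reconstruction of the technique, not the plug-in's actual code.

    import numpy as np

    SH_NORM = np.array([0.282095,                      # Y00
                        0.488603, 0.488603, 0.488603,  # Y1-1, Y10, Y11
                        1.092548, 1.092548, 0.315392,  # Y2-2, Y2-1, Y20
                        1.092548, 0.546274])           # Y21, Y22
    A_HAT = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3
                     + [np.pi / 4.0] * 5)              # cosine-lobe factors

    def sh_basis(d):
        # First nine real spherical-harmonic basis functions, evaluated
        # for an array of unit directions d with shape (..., 3).
        x, y, z = d[..., 0], d[..., 1], d[..., 2]
        return np.stack([np.ones_like(x), y, z, x,
                         x * y, y * z, 3.0 * z * z - 1.0,
                         x * z, x * x - y * y], axis=-1) * SH_NORM

    def sh_diffuse_convolve(env, out_w=64, out_h=32):
        # One pass over the source texels to get 9 coefficients per
        # channel, then an analytic cosine-lobe evaluation per output
        # texel -- O(pixels) instead of O(pixels squared).
        h, w, _ = env.shape
        theta = (np.arange(h) + 0.5) / h * np.pi
        dw = np.broadcast_to(np.sin(theta)[:, None] * (np.pi / h)
                             * (2.0 * np.pi / w), (h, w))
        coeffs = np.einsum('hwc,hwk,hw->kc', env,
                           sh_basis(latlong_dirs(w, h)), dw)
        recon = sh_basis(latlong_dirs(out_w, out_h)) @ (coeffs * A_HAT[:, None])
        return np.clip(recon / np.pi, 0.0, None)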
Render Balls too simple? Let's try something that's got balls.
We get an exact match with FG in a fraction of the time. We can conclude that the diffuse convolution is good enough to simulate FG. But in our test we were using a simple sphere. This is an ideal case, but we needed to check that the lookup was correct before moving on to something more serious.
I like the Buddha. I like him because it's a one-million-polygon model that can stand in for any displaced model pushed out of ZBrush or Mudbox.
I simply applied what I did with the spheres.
The first render uses the diffuse convolved envmap. It took approximately 50 seconds to render, a large part of which was spent preprocessing the scene. With a realtime shader I could get the same result (minus anti-aliasing) in realtime. Obviously no occlusion is calculated, but that is because I asked only for an environment lookup along the normal of the surface.
The second image is what you get with FG, and the third one is a difference done in Shake to highlight the differences between the two renders.
As you can see, only the occlusion and the FG colored bounce are missing. If you apply a simple occlusion pass on top, you will end up with an image that is fairly close to the FG solution, for only a fraction of the time involved.
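In comp terms, "applying occlusion on top" is nothing more than a per-pixel multiply of the two passes, something like this (with stand-in arrays in place of the real renders):

    import numpy as np

    # Stand-in arrays; in practice these are the two rendered passes
    # loaded with whatever HDR reader you have at hand.
    irradiance = np.ones((480, 640, 3), dtype=np.float32)  # env-lookup pass
    occlusion = np.ones((480, 640, 3), dtype=np.float32)   # AO pass
    # "Occlusion on top" is a straight per-pixel multiply, exactly what
    # a Mult node does in Shake.
    approx_fg = irradiance * occlusion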
OK, but is it worth using in production?
Well, I will say yes and no. It depends on the number of shots you have to do and the time you have to complete them. This technique involves a bit of setup at the shading level, while using FG is straightforward. So if you are working on a movie that needs a 2K (or more) render, then yes: the memory footprint is very low and it's damn fast to render, especially with very heavy objects like displaced geometry or hair.
At the moment you need to set this up in the shader for every object. Ideally you would set it as a global ambience, but unfortunately you can't plug anything into the global ambience parameter. So the best option is to use a light that casts only ambient light. And for that you need to code it, so ask your favourite shader writer to do it.