The graphics pipeline from source art to final output is complicated, and requires the artist to work in several different colour spaces along the way. In this article I’ll give a brief overview of colour spaces, and then detail a commonly overlooked area in the texture pipeline where gamma is important.
The sRGB Standard
The sRGB colour space is based on the monitor characteristics expected in a dimly lit office, and has been standardised by the IEC (as IEC 61966-2-1). This colour space has been widely adopted by the industry, and is used universally for CRT, LCD and projector displays. Modern 8-bit image file formats (such as JPEG 2000 or PNG) default to the sRGB colour space.
A value in the sRGB colour space is a floating-point triple, with each value between 0.0 and 1.0. Values outside of this range are clipped. An sRGB colour from this [0, 1] interval is commonly encoded as an 8-bit unsigned integer between 0 and 255.
The pivotal fact to remember about sRGB is that it is non-linear. It roughly follows the curve y = x^2.2, although the actual standard curve is slightly more complicated (and is listed at the end of this article). A graph of sRGB against gamma 2.2 looks as follows:
This mapping has the nice property that more resolution is given to low-luminance RGB values, which fits the human visual model well.
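To make this concrete, here is a small Python sketch of my own (using the gamma 2.2 approximation rather than the exact sRGB curve, and an arbitrary 5% threshold) that counts how many of the 256 8-bit codes land in the darkest part of the linear range under each encoding:

```python
# Sketch: compare how many 8-bit codes describe dark colours under a
# linear encoding versus a gamma 2.2 (approximately sRGB) encoding.
def codes_below(limit, decode, steps=256):
    # decode each 8-bit code to linear light and count those below `limit`
    return sum(1 for i in range(steps) if decode(i / 255.0) < limit)

linear_codes = codes_below(0.05, lambda v: v)       # codes stored linearly
srgb_codes = codes_below(0.05, lambda v: v ** 2.2)  # codes stored gamma 2.2

print(linear_codes, srgb_codes)
```

Under these assumptions roughly five times as many codes cover the darkest tones when stored with the gamma curve, which is why 8-bit sRGB shows far less banding in shadows than 8-bit linear would.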
The Gamma Function As An Approximation
As can be seen from the above graph, the sRGB standard is very close to the gamma 2.2 curve. For this reason, the full sRGB conversion function is often approximated by the much simpler gamma function.
Please note that the value associated with the word gamma is the power p used in the function y = x^p. Unfortunately gamma is often associated with brightness, which is not exactly what it does: the full [0, 1] interval is always mapped back onto the full [0, 1] interval.
What Maths Work In This Colour Space?
In general your lighting pipeline should operate in linear space, so that all lighting is accumulated linearly. This is the approach taken in many film pipelines, and is the only way to ensure that you are being physically correct.
However, assuming that the gamma function approximation is good enough, you can still perform modulate operations. In this case we have some constant A that we wish to modulate our sRGB source data x with, and store the result in sRGB as y. In linear space this would be written as:
y^2.2 = A x^2.2 = ( A^(1/2.2) x )^2.2
Since we are working only in the [0, 1] interval, we can remove the power from both sides and work in the sRGB space itself. In which case:
y = A^(1/2.2) x
So if we convert our constants into sRGB, modulate operations can still be performed. However, very few operations work this way. Additive operations (which are used in additive lighting models, or for alpha-blending) cannot be reformulated to work in a gamma 2.2 space, simply because the space is non-linear. If you wish to have a correct additive lighting model, then you have to work in a linear space, which means you need a higher-precision framebuffer to at least match the low-luminance granularity of sRGB.
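As a sanity check of the algebra above, here is a small Python sketch (my own illustration, using the gamma 2.2 approximation; the function names are mine) showing that modulating in sRGB space with a pre-converted constant matches doing the whole computation in linear space:

```python
GAMMA = 2.2  # gamma approximation of the sRGB curve

def modulate_linear(a, x_srgb):
    # reference path: decode to linear, modulate by A, re-encode to sRGB
    y_linear = a * (x_srgb ** GAMMA)
    return y_linear ** (1.0 / GAMMA)

def modulate_srgb(a, x_srgb):
    # shortcut: convert the constant A once, then multiply in sRGB space
    return (a ** (1.0 / GAMMA)) * x_srgb
```

The shortcut only works because (A^(1/2.2) x)^2.2 factors back into A x^2.2; a sum like A + x^2.2 has no such factorisation, which is why additive blending cannot be fixed up this way.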
sRGB to linear RGB: rgb (sRGB), RGB (linear RGB)

R = r / 12.92                     for r <= 0.04045
R = ( (r + 0.055) / 1.055 )^2.4   for r > 0.04045

G = g / 12.92                     for g <= 0.04045
G = ( (g + 0.055) / 1.055 )^2.4   for g > 0.04045

B = b / 12.92                     for b <= 0.04045
B = ( (b + 0.055) / 1.055 )^2.4   for b > 0.04045
This is commonly approximated as X = x^2.2 for all channels.
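The exact piecewise curve above translates directly into code; a per-channel sketch (the function name is mine):

```python
def srgb_to_linear(c):
    # exact sRGB decode for one channel value in [0, 1]
    if c <= 0.04045:
        return c / 12.92          # linear toe near black
    return ((c + 0.055) / 1.055) ** 2.4
```

Note that the linear segment near zero avoids the infinite slope a pure power curve would have at the origin.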
linear RGB to sRGB: RGB (linear RGB), rgb (sRGB)

r = 12.92 R                   for R <= 0.0031308
r = 1.055 R^(1/2.4) - 0.055   for R > 0.0031308

g = 12.92 G                   for G <= 0.0031308
g = 1.055 G^(1/2.4) - 0.055   for G > 0.0031308

b = 12.92 B                   for B <= 0.0031308
b = 1.055 B^(1/2.4) - 0.055   for B > 0.0031308
This is commonly approximated as x = X^(1/2.2) for all channels.
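The inverse transform as a per-channel sketch (again, names mine), together with the common gamma approximation for comparison:

```python
def linear_to_srgb(c):
    # exact sRGB encode for one channel value in [0, 1]
    if c <= 0.0031308:
        return 12.92 * c          # linear toe near black
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def approx(c):
    # the common shortcut: x = X^(1/2.2)
    return c ** (1.0 / 2.2)
```

For mid-grey the two differ by less than 0.01, which is why the approximation is often considered acceptable; the gap is larger in the darks.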
OK, But How Do We Do That in XSI?
Before the release of XSI v6, we could change the output gamma of Mental Ray and approximately match sRGB by using a gamma of 2.2 (put 1/2.2 in the active effects tab). In version 6 this option is unfortunately missing, but don't worry: there are third-party add-ons that can do the job for you.
I wrote one of these, and you can find it here.
This add-on includes two shaders: a texture node that converts sRGB images to linear space, and a lens shader that converts the linear render to the sRGB colour space.
These shaders are intended to be used in a context where you don't output anything in a float format. I haven't tried them with float output, so there is no warranty there.
The idea is to convert every texture you create to linear space, and to convert every render to sRGB space. By texture I mean any image that is not used to drive data, such as a displacement map, bump map or normal map. I also exclude floating-point images, which are considered linear de facto.
Let's see these shaders in action.
I will first focus on the lens shader and then on the texture node.
As I said, we need to view our results according to the monitor profile (sRGB), so to convert our render we need to apply the lens shader. It's really easy to use: just add it to the lens shader stack.
My first example is simple: a grid with a Phong shader and a light, specifically an area light with a realistic falloff. To do this I am using the D2S_light with a temperature of 6500 K (white). I set the light intensity to get the same result in the red area.
The left picture is rendered as-is in XSI, while the other is rendered with the Lin_to_sRGB lens shader. As you can see, the left picture is over-lit (with an intensity of around 4000), while the right one behaves nicely (with an intensity of only 750).
Note that even the specular highlight from the Phong shader looks correct: it looks like the reflection of the area light.
My second example is a simple FG scene with two spheres and a plane. The plane and one of the spheres have a DGS shader with only a diffuse term, set to a neutral grey. The second sphere is fully reflective, to show the environment map.
I used an HDR image from Paul Debevec’s website called beach.hdr to light my scene with Final Gathering.
The first test is a render as-is:
As you can see, the colours of the environment map are more contrasty and darker than what we see in HDRShop. The natural fix would be to change the exposure, but that affects the colours and gives a more saturated image, which is wrong.
If we use the lens shader we get this result:
The result this time is what we expected: the colours match what we see in HDRShop. We can start to work safely because we have the right illumination.
In my next example I introduce a texture on the grey plane by plugging an image node into the diffuse slot. I keep the same HDR illumination and the lens shader.
As you can see, the texture is washed out. This doesn't come from the illumination: this wood texture was generated in the sRGB colour space, and with the lens shader we simply apply the sRGB conversion a second time. What we need is to convert this texture to linear space before rendering. That is the purpose of the sRGB_to_Lin texture node: just plug it between your image node and the diffuse slot and re-render.
As you can see, our texture is now corrected and looks natural.
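The wash-out from double conversion is easy to reproduce numerically. A small sketch of my own, using the gamma 2.2 approximation and an arbitrary mid-grey value:

```python
encode = lambda c: c ** (1.0 / 2.2)  # approximate linear -> sRGB encode

mid = 0.5             # a mid-grey value in linear space
once = encode(mid)    # correctly encoded once for display
twice = encode(once)  # already-sRGB data encoded a second time
```

Every extra encode pushes values further toward 1.0, so a texture authored in sRGB and then pushed through the lens shader again comes out visibly lighter, exactly the symptom seen on the wood texture.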
If you need to change your setup and include these two nodes in an existing scene, you can use the script written by Guillaume Laforge and included in the first archive. It will insert an sRGB_to_Lin node after each of your image nodes.
If you have any questions, criticisms or corrections, please don't hesitate: email@example.com
Edit: The gradient example has been removed because it seemed to confuse more people than it helped. The shaders' source code has been fixed; I hadn't realised that this article was being read by non-XSI users.