mental ray – In the Lab

This is the first post of a new category: NVIDIA mental ray – In the Lab. We want to give peeks into research projects and features we are working on for future versions of mental ray. Here we would like to talk about our GPU-accelerated Ambient Occlusion solution, in short: AO GPU.

The video demonstrates three new features of AO GPU: Progressive Rendering, High Quality Anti-Aliasing and Object Instancing. It was generated with a standalone test application running on an NVIDIA Quadro K6000 GPU.

Progressive Rendering

Previously, AO GPU used a sampling mode optimized for batch rendering. This sampling scheme, based on Fibonacci numbers, converges quickly and efficiently to good results. The caveat is that you have to know the number of samples in advance, and only certain sample counts are allowed (1, 2, 3, 5, 8, 13, …). If you stop rendering in between, the image will look wrong.
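To give an idea of why the sample count is restricted, here is a minimal sketch of a Fibonacci lattice on the hemisphere. This illustrates the general technique, not mental ray's actual implementation: each sample is rotated by the ratio of two consecutive Fibonacci numbers, so the points are only evenly distributed when the total count is itself a Fibonacci number.

```python
import math

def fibonacci_hemisphere(f_m, f_m_prev):
    """Fibonacci lattice of f_m directions on the unit hemisphere.

    f_m and f_m_prev must be consecutive Fibonacci numbers (e.g. 13 and 8);
    the points are only well distributed for exactly these sample counts.
    """
    dirs = []
    for i in range(f_m):
        z = (i + 0.5) / f_m                              # uniform in cos(theta)
        phi = 2.0 * math.pi * ((i * f_m_prev / f_m) % 1.0)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

samples = fibonacci_hemisphere(13, 8)  # stopping early, at say 10, leaves gaps
```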

Now, sometimes you are not sure how many samples are needed to achieve the desired quality. The usual solution is progressive rendering: watching intermediate rendering results and stopping the render process when the quality is good enough. As you can see in the video, Fibonacci sampling is not suited for progressive rendering; intermediate images show strange wobbling shadow effects (we moved the camera to trigger a re-rendering in the application). Switching to our new progressive sampling scheme fixes the wobbling artifacts; you just see the noise disappearing over time.

This progressive sampling scheme does not converge as fast, but it is well suited for time-constrained situations, where you want to stop rendering after a certain time limit and are not sure what the best sample count setting would be.
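The key property of a progressive scheme is that every intermediate image is already a valid estimate, just a noisier one. A minimal sketch of how such an accumulation loop can work (illustrative only, not mental ray code); render_pass stands for any function that returns one independent AO sample per pixel:

```python
def progressive_render(render_pass, max_passes):
    """Keep a running per-pixel average so that stopping after any pass
    yields a correct, if noisy, image."""
    accum = None
    for n in range(1, max_passes + 1):
        img = render_pass(seed=n)   # one independent AO sample per pixel
        if accum is None:
            accum = list(img)
        else:
            # Incremental mean: accum = ((n - 1) * accum + img) / n
            accum = [a + (s - a) / n for a, s in zip(accum, img)]
        yield accum                 # display or inspect the intermediate image
```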

High Quality Anti-Aliasing

The existing AO GPU solution used a very simple anti-aliasing scheme to filter pixel jaggies at geometric edges. Essentially, this filtering was constrained to the area of a single pixel, and every sample was weighted equally. Of course, for some scenes this simple Box filtering is not good enough. In the video, we magnified a detail of such a scene to show the problem. Look at the dark vertical gap between the doors of the cupboard; it looks stair-stepped even with many anti-aliasing samples.

We added new filters to improve this. Anti-aliasing now samples a larger area than a single pixel, and the samples are weighted according to a filter curve. In the video, we switch to a special Gauss filter curve with a sampling area of 3 pixels in diameter, and you should see that the lines look much better now. Other filter curves, such as ‘Triangle’, are available as well.
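For illustration, here is what such per-sample weighting can look like. The box filter weights every sample inside the pixel equally, while a truncated Gaussian over a 3-pixel diameter emphasizes samples near the pixel center. The Gaussian shape and the alpha parameter below are assumptions for the sketch, not mental ray's exact curve:

```python
import math

def box_weight(dx, dy):
    # dx, dy: sample offset from the pixel center, in pixels.
    return 1.0 if abs(dx) <= 0.5 and abs(dy) <= 0.5 else 0.0

def gauss_weight(dx, dy, diameter=3.0, alpha=2.0):
    # Gaussian falloff, shifted so the weight reaches exactly zero at the
    # filter radius instead of being cut off abruptly.
    radius = diameter / 2.0
    d2 = dx * dx + dy * dy
    if d2 >= radius * radius:
        return 0.0
    return math.exp(-alpha * d2) - math.exp(-alpha * radius * radius)

# The filtered pixel value is then sum(w_i * sample_i) / sum(w_i).
```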

Object Instancing

For complex scenes, memory consumption is always a concern. The model shown in the video has 21 million triangles, using about 3 GB of GPU memory. If we want to render more objects, we might lose GPU acceleration, because the additional models no longer fit into GPU memory. The current AO GPU solution will then switch to CPU rendering automatically, but rendering will take much longer. If the additional objects are copies of the same model, however, the geometry can be reused in the rendering without taking much more memory. This model-replication technique is called instancing.
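Conceptually, an instance is just a lightweight reference to shared geometry plus its own placement in the scene. A minimal sketch of the data layout (not the actual AO GPU structures):

```python
class Mesh:
    """Heavy payload: the vertex and triangle buffers, uploaded to GPU
    memory once (the 21M-triangle model above takes about 3 GB)."""
    def __init__(self, vertices, triangles):
        self.vertices = vertices
        self.triangles = triangles

class Instance:
    """Lightweight reference: shares the mesh and stores only its own
    placement, so a million replicas add megabytes, not terabytes."""
    __slots__ = ("mesh", "translation")

    def __init__(self, mesh, translation):
        self.mesh = mesh               # shared reference, not a copy
        self.translation = translation

base = Mesh(vertices=[], triangles=[])  # stands in for the real model
scene = [Instance(base, (x * 10.0, 0.0, z * 10.0))
         for x in range(1000) for z in range(1000)]  # one million replicas
```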

We implemented instancing in the AO GPU prototype to see how far we can get with this technique on a GPU. As the video shows, a million replicas are no problem.

We hope you enjoyed reading this post and watching the video.

XGen Hair and custom shaders

Today we want to talk about how to use a custom hair shader to render XGen hair in mental ray for Maya. We will see how such a shader can be assigned, how various helper nodes can be used to alter the appearance of the hair, and finally, we will make use of Disney’s expression language SeExpr to create some interesting effects. For this purpose we use this little doll with a simple XGen haircut:

XGen Hair Scene

By default, XGen uses the xgen_hair_phen shader. This shader mimics the appearance of hair by using a classical Phong model with two specular lobes. The shader’s diffuse color parameter can be fed by the custom XGen shader parameters root_color and tip_color. Those need to be defined in the “XGen Window” and allow us to make use of SeExpr. We won’t go into details on xgen_hair_phen here but simply link to a good overview of all its parameters and an interesting tutorial on how to use it instead.

We start by assigning a different hair shader. For this, we select the XGen hair collection node in the “Outliner”, then choose “Assign New Material” under the “Lighting/Shading” menu and finally apply mib_illum_hair_x:

Assigning a mentalray shader to the xgen hair collection
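The same assignment can also be scripted. Here is a rough maya.cmds sketch, assuming the mental ray plug-in is loaded; the hair shape name is made up and would be your scene’s XGen description node:

```python
import maya.cmds as cmds

hair_shape = "doll_hair_description_Shape"  # hypothetical XGen node name

# Create the mental ray hair shader and a shading group for it.
shader = cmds.shadingNode("mib_illum_hair_x", asShader=True)
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name=shader + "SG")
cmds.connectAttr(shader + ".outValue", sg + ".miMaterialShader", force=True)

# Assign the shading group to the hair geometry.
cmds.sets(hair_shape, edit=True, forceElement=sg)
```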

After some parameter tweaking, we end up with the following rendering:

doll with mib_illum_hair_x shader

Let’s add some root-to-tip color variation. In “Hypershade”, we create an xgen_root_tip_color_blend node and feed it into both the diffuse and the secondary color slots of the mib_illum_hair_x shader. The blend node provides two color inputs which are blended together over the length of the hair. We choose the root color to be the same as we had in mib_illum_hair_x before, and orange for the tip:

doll with root-to-tip color blend
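In script form, the node creation and connections could look roughly like the following; the attribute names are assumptions derived from the UI labels, and the color values are simply our scene’s choice:

```python
import maya.cmds as cmds

blend = cmds.shadingNode("xgen_root_tip_color_blend", asUtility=True)

# Root color as before, orange for the tips.
cmds.setAttr(blend + ".root_color", 0.30, 0.18, 0.08, type="double3")
cmds.setAttr(blend + ".tip_color", 1.00, 0.50, 0.10, type="double3")

# Feed the blend into both color slots of the hair shader.
for slot in (".diffuse", ".secondary_color"):
    cmds.connectAttr(blend + ".outValue", "mib_illum_hair_x1" + slot,
                     force=True)
```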

To get some color variation not only along the length of the hair but also across the head, we now apply a texture to the root_color by connecting an xgen_ptex_lookup node. We load the XGen region map, for want of a better texture. Here is the result:

doll with the region map textured onto the root color
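Scripted, the texture hookup might look like this; the attribute and file names are again assumptions for illustration:

```python
import maya.cmds as cmds

ptex = cmds.shadingNode("xgen_ptex_lookup", asTexture=True)
cmds.setAttr(ptex + ".filename", "doll_region.ptx", type="string")  # region map
cmds.connectAttr(ptex + ".outValue",
                 "xgen_root_tip_color_blend1.root_color", force=True)
```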

So far so good. But still no expressions involved.

In the “Output/Preview” section of the “XGen Window”, we add a new parameter of type color and name it custom_color (any name will do here):

Create a custom shader parameter

The custom_color parameter appears in the UI, but it does not look like a color yet. The little icon next to the new parameter brings up the “Expression Editor”. We open it and type in the expression seen below to make it a color (red, for now; in SeExpr, a color is simply a vector such as [1,0,0]):

Color Expression

Now let’s connect this newly created variable to our shading network. We create an xgen_seexpr node and connect it to the tip_color input of the xgen_root_tip_color_blend shader:

Create xgen_seexpr node

To make use of our custom_color parameter in the xgen_seexpr shader, we then enter its name in the shader’s Customs attribute and type the expression $custom_color; into the Expression field:

Connecting the custom shader parameter
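The equivalent script could look as follows, with the usual caveat that the attribute names (“customs”, “expression”) are assumptions based on the UI labels:

```python
import maya.cmds as cmds

seexpr = cmds.shadingNode("xgen_seexpr", asUtility=True)

# Expose the XGen parameter to the expression, then return it as the result.
cmds.setAttr(seexpr + ".customs", "custom_color", type="string")
cmds.setAttr(seexpr + ".expression", "$custom_color;", type="string")

cmds.connectAttr(seexpr + ".outValue",
                 "xgen_root_tip_color_blend1.tip_color", force=True)
```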

The rendering shows that the custom_color parameter has been picked up correctly:

doll with the custom_color parameter applied to the hair tips

Finally, let’s replace this simple color with a more complex expression. XGen provides quite a long list of expression samples which can be accessed as shown in the image below. From the “Color” subset we choose “NoiseC”.

Load Expression

A bunch of new parameters appears below our custom_color parameter. Those parameters define the noise function we just selected. Instead of modifying them directly in the XGen window, let’s open the “Expression Editor” for immediate visual feedback. After some value tuning we get a quite colorful map:

Expression Editor

And here is the rendering we get out of this:

Rendering

We could alter the expression to change the look even further, or create more custom parameters and combine them in the xgen_seexpr shader. The possibilities are endless, so we simply leave further experiments to you and your imagination!