Troubleshooting mental ray renders of Maya XGen

Got a nice XGen groom? Everything looks decent in VP2, but in your mental ray renders parts of the hair are missing or, even worse, the hair is not rendered at all?
Don’t panic, these issues can be addressed.

Hair geometry shader setup

In mental ray, XGen hair and other procedurals like cards or randomly instanced archives are handled by a geometry shader. This geometry shader needs to be set up correctly. Usually, this is done automatically by Maya when a new XGen description is created. In some cases, though, the geometry shader can get lost or may never have been applied. As a result, the XGen primitives are not rendered. To reapply the shader, open the Preview/Output tab of the XGen window and click the “Setup” button to the right of “Setup Geo Shaders” in the output settings section:

[Image: xgen-setup-geo]
After that, your XGen description should render.

Adjusting render output settings

If your hair looks clipped or thinned out in a rendering, check the render output settings. One important setting that usually needs manual adjustment is the region that is supposed to contain all generated XGen primitives. By default, this region is computed from the bounding box of the XGen base mesh, plus a padding value of 1 for the XGen primitives. This padding value is only suitable for a default XGen description with hair of length 1. Real-world grooms rarely meet this condition, which is why renders come out with clipped hair.

To change the padding, open the Preview/Output tab of the XGen window and click the “Auto Set” button beneath “Primitive Bound” in the output settings section to compute a proper padding value:

[Image: xgen-primitive-bound]

This ensures that all hair (or whatever primitives you are working with) is rendered correctly.
As your groom evolves, the padding value may need to be adjusted again.
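As a rule of thumb, the padding must be at least as large as the farthest distance any primitive reaches from the base mesh surface. Here is a conceptual sketch of what “Auto Set” has to compute; this is illustrative only, not XGen's actual implementation:

    import math

    def required_padding(hairs):
        """hairs: a list of polylines, one list of (x, y, z) points per hair.

        The total length of the longest hair is an upper bound on how far
        any primitive can reach from the base mesh, so it is a safe padding
        value. The default of 1 only covers hair of length 1.
        """
        return max(
            sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
            for pts in hairs
        )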

In case the hair appears less dense in the rendering than in the viewport, make sure the “Percent” slider has been set to 100:

[Image: xgen-percentage]

A lower value will cause the number of generated primitives to be reduced accordingly. This slider can be a nice tool to get fast preview renders with fewer primitives.
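These output settings can also be checked from script before a final render. Below is a minimal sketch using XGen's Python API; note that the object and attribute names used here (“RendermanRenderer”, “percent”) are assumptions that vary between XGen versions, so verify them against the attributes shown in the XGen window:

    # Sketch: force full primitive density for final renders. The object
    # and attribute names ("RendermanRenderer", "percent") are assumptions;
    # check them against your XGen version before relying on this.
    import xgenm as xg

    for palette in xg.palettes():
        for desc in xg.descriptions(palette):
            current = xg.getAttr("percent", palette, desc, "RendermanRenderer")
            print(palette, desc, "percent was", current)
            xg.setAttr("percent", "100.0", palette, desc, "RendermanRenderer")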

Batch rendering XGen primitives

For batch rendering, one additional step is necessary. The XGen collections used in your scene need to be exported to disk to ensure that mental ray can find them. To do so, click on File -> Export Patches for Batch Render in the XGen window:

[Image: xgen-batch]

Happy hair rendering,
Sandra

New version of Alembic import shader

Here’s a new version of the abcimport shader that is compatible with mental ray 3.12 and mental ray for Maya 2015. It can be downloaded from this link. The package contains a version for Windows, Linux and Mac OS X, as well as the .mi file with the new shader declaration.

The new features:

  • Support for hair import
    Alembic curve objects are translated into mental ray hair geometry. The curves can be defined with linear segments, quadratic Bezier, or cubic B-spline bases (either uniform or with a given knot vector). Hair approximation quality can be set with the shader parameter “subdivisions” (default 0), which controls the number of subdivisions per hair segment. The picture shows a rendering of an Alembic file generated by Ornatrix 3, with linear hair segments, using the shader “mib_illum_hair_x”.
    [Image: hair from abcimport. Hair Alembic file courtesy of Ephere Inc.]
  • Faceset material support
    In Maya, facesets are assigned material names. Our abcimport shader now uses this information to reference materials with these names. Facesets can be defined on both triangle and polygon meshes. (Facesets for subdivision surfaces will be added later.) Faceset translation is enabled by default and can be turned off by setting the shader parameter “facesetmaterials” to off.
  • User data support
    Triangle meshes can now reference user data types such as color3, color4, point, normal, float, and integer. The user data must be attached as an “arbitrary geometry parameter property” to objects in the abc file. Please contact us for more details.
  • Subdivision control for hair, NURBS and subdivision surfaces
    The “subdivisions” parameter is now also applied to hair, NURBS trim curves, NURBS surfaces, and subdivision surfaces.
  • Motion blur issues fixed
    Motion blur for topology-changing abc files is now possible. We fixed an issue where velocities stored in the Alembic file were not interpreted correctly.

These features will be incorporated into the upcoming versions of mental ray this year.

mental ray – In the Lab

This is the first post in a new category: NVIDIA mental ray – In the Lab. We want to give peeks into research projects and features we are working on for future versions of mental ray. Here we would like to talk about our GPU-accelerated Ambient Occlusion solution, in short: AO GPU.

The video demonstrates three new features of AO GPU: Progressive Rendering, High Quality Anti-Aliasing, and Object Instancing. It was generated with a standalone test application running on an NVIDIA Quadro K6000 GPU.

Progressive Rendering

Previously, AO GPU used a sampling mode optimized for batch rendering. This sampling scheme, based on Fibonacci numbers, converges very fast and efficiently to good results. The caveat is that you have to know the number of samples in advance, plus there are only certain sample numbers you are allowed to use (1, 2, 3, 5, 8, 13, …). If you stop rendering ‘in-between’, the image will look wrong.
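To illustrate why the sample count must be known up front, here is a sketch of a closely related golden-angle point set on the hemisphere; this is a generic construction, not mental ray's internal code. Every point's position depends on the total count n, so taking only the first k < n points covers the hemisphere unevenly:

    import math

    def fibonacci_hemisphere(n):
        """Generate n well-distributed direction samples on the unit hemisphere.

        Each point's position depends on the total count n, which is why
        stopping 'in-between' yields an uneven, biased sample set.
        """
        golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad
        samples = []
        for i in range(n):
            z = (i + 0.5) / n            # uniform in (0, 1); depends on n
            r = math.sqrt(1.0 - z * z)   # radius of the circle at height z
            phi = i * golden_angle       # azimuth advances by the golden angle
            samples.append((r * math.cos(phi), r * math.sin(phi), z))
        return samples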

Now, sometimes you are not sure how many samples are needed to achieve the desired quality. The usual solution is progressive rendering: watch intermediate rendering results and stop the render process when the quality is good enough. As you can see in the video, Fibonacci sampling is not suited for progressive rendering; intermediate images show strange wobbling shadow effects (we moved the camera to trigger a re-rendering in the application). Switching to our new progressive sampling scheme fixes the wobbling artifacts; you simply see the noise disappear over time.

This progressive sampling scheme does not converge as quickly, but it is well suited for ‘time-constrained’ situations, where you want to stop rendering after a certain time limit and are not sure what the best sample count setting would be.
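Conceptually, a progressive scheme keeps a running estimate that is valid after every sample, so you can stop at any point. A small sketch of the idea, not mental ray code:

    import time

    def progressive_ao(pixel_sampler, max_seconds):
        """Accumulate an unbiased running mean; stop whenever time runs out."""
        start, mean, n = time.time(), 0.0, 0
        while time.time() - start < max_seconds:
            n += 1
            sample = pixel_sampler()     # one new AO sample for this pixel
            mean += (sample - mean) / n  # incremental mean update
            # 'mean' is a valid estimate after every iteration: no wobbling,
            # the image just gets less noisy the longer you let it run.
        return mean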

High Quality Anti-Aliasing

The existing AO GPU solution used a very simple anti-aliasing scheme to filter pixel jaggies at geometric edges. Essentially, this filtering was constrained to the area of a single pixel, and every sample was equally weighted. Of course, for some scenes this simple Box filtering is not good enough. In the video, we magnified a detail of such a scene to show the problem. Look at the dark vertical gap between the doors of the cupboard; it looks stair-stepped even with many anti-aliasing samples.

We added new filters to improve this. Anti-aliasing now samples a larger area than a single pixel, and the samples are weighted according to a filter curve. In the video, we switch to a special Gauss filter curve with a sampling area of 3 pixels in diameter, and you should see that the lines look much better now. Other available filter curves are ‘Triangle’ and ‘Gauss’.
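For illustration, here is a sketch of such a filter weight over a 3-pixel-diameter footprint; the exact curve and falloff that mental ray uses are not specified here, so treat them as assumptions:

    import math

    def gauss_weight(dx, dy, diameter=3.0):
        """Weight of a sample at offset (dx, dy) from the pixel center.

        Samples outside the filter radius contribute nothing; inside, the
        weight falls off smoothly, so edges like the cupboard gap resolve
        without stair-stepping. The sigma choice below is an assumption.
        """
        radius = diameter / 2.0
        d2 = dx * dx + dy * dy
        if d2 > radius * radius:
            return 0.0
        sigma = radius / 3.0  # assumed: curve is close to zero at the radius
        return math.exp(-d2 / (2.0 * sigma * sigma))

The filtered pixel value is then the weighted average sum(w_i * s_i) / sum(w_i) over all samples whose footprint covers the pixel.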

Object Instancing

For complex scenes, memory consumption is always a concern. The model shown in the video has 21 million triangles, using about 3 GBytes of GPU memory. If we want to render more objects, then we might lose GPU acceleration, because additional models do not fit anymore into GPU memory. The current AO GPU solution will switch to CPU rendering automatically, but rendering will take much longer. If the additional objects consist of the same model, then the model can be reused in the rendering without taking much more memory. This model-replication technique is called instancing.
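To put the numbers in perspective: 21 million triangles in about 3 GB works out to roughly 140 bytes per triangle, so even a single extra copy of the model would overflow most GPUs, while an instance needs only a transform plus a reference to the shared geometry. A minimal sketch of the idea, illustrative rather than mental ray's internal data structures:

    # Sketch: geometry is stored once; each replica stores only a 4x4
    # transform (16 floats = 64 bytes) plus a reference to the mesh.
    class Mesh:
        def __init__(self, triangles):
            self.triangles = triangles  # the 21M-triangle model: ~3 GB, once

    class Instance:
        def __init__(self, mesh, transform):
            self.mesh = mesh            # shared reference, no geometry copy
            self.transform = transform  # placement of this replica

    IDENTITY = [[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]]

    model = Mesh(triangles=[])          # placeholder for the real geometry
    scene = [Instance(model, IDENTITY) for _ in range(1_000_000)]
    # One million replicas cost on the order of 64 MB in transforms,
    # whereas one million geometry copies would need about 3 petabytes.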

We implemented instancing in the AO GPU prototype to see how far we can get with this technique on a GPU. As the video shows, a million replicas are no problem.

We hope you enjoyed reading this post and watching the video.

XGen Hair and custom shaders

Today we want to talk about how to use a custom hair shader to render XGen hair in mental ray for Maya. We will see how such a shader can be assigned, how various helper nodes can be used to alter the appearance of the hair, and finally, we will make use of Disney’s expression language SeExpr to create some interesting effects. For this purpose we use this little doll with a simple XGen haircut:

[Image: XGen Hair Scene]

By default, XGen uses the xgen_hair_phen shader. This shader mimics the appearance of hair with a classical Phong model with two specular lobes. The shader’s diffuse color parameter can be fed by the custom XGen shader parameters root_color and tip_color. Those need to be defined in the “XGen Window” and allow us to make use of SeExpr. We won’t go into details on xgen_hair_phen here but simply link to a good overview of all its parameters and an interesting tutorial on how to use it instead.

We start by assigning a different hair shader. For this, we select the XGen hair collection node in the “Outliner”, then choose “Assign New Material” under the “Lighting/Shading” menu and finally apply mib_illum_hair_x:

[Image: Assigning a mental ray shader to the XGen hair collection]

After some parameter tweaking, we end up with the following rendering:

[Image: doll rendered with the mib_illum_hair_x shader]

Let’s add some root-to-tip color variation. In “Hypershade”, we create an xgen_root_tip_color_blend node and feed it into both the diffuse and the secondary color slots of the mib_illum_hair_x shader. The blend node provides two color inputs which are blended together over the length of the hair. We choose the root color to be the same one we had in mib_illum_hair_x before, and orange for the tip:

[Image: rendering with the root-to-tip color blend]
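The same wiring can be done from script. Here is a sketch with maya.cmds, where the shader node name (mib_illum_hair_x1) and the plug names (outValue, diffuse, secondaryColor, rootColor, tipColor) are assumptions to verify with cmds.listAttr on your actual nodes:

    import maya.cmds as cmds

    # Sketch: wire a root/tip blend into the hair shader. The plug names
    # below are assumptions; confirm them in the Attribute Editor.
    blend = cmds.createNode("xgen_root_tip_color_blend")
    cmds.connectAttr(blend + ".outValue", "mib_illum_hair_x1.diffuse", force=True)
    cmds.connectAttr(blend + ".outValue", "mib_illum_hair_x1.secondaryColor", force=True)
    cmds.setAttr(blend + ".rootColor", 0.25, 0.15, 0.08, type="double3")  # brown root
    cmds.setAttr(blend + ".tipColor", 1.0, 0.5, 0.1, type="double3")      # orange tip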

To get some color variation not only along the length of the hair but also across the head, we now apply a texture to the root_color by connecting an xgen_ptex_lookup node. We load the XGen region map, for want of a better texture. Here is the result:

[Image: rendering with the Ptex-textured root color]

So far so good. But still no expressions involved.

In the Preview/Output tab of the “XGen Window”, we add a new parameter of type color and name it custom_color (any name will do here):

[Image: Create a custom shader parameter]

The custom_color parameter appears in the UI, but it does not look like a color yet. The little icon next to the new parameter brings up the “Expression Editor”. We open it and type in the expression seen below to make it a color (red, for now):

[Image: Color Expression]
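In case the screenshot is hard to read: a SeExpr color is simply a vector literal, so a constant red can be written as:

    [1.0, 0.0, 0.0]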

Now let’s connect this newly created variable to our shading network. We create an xgen_seexpr node and connect it to the tip_color input of the xgen_root_tip_color_blend shader:

[Image: Create xgen_seexpr node]

To make use of our custom_color parameter in the xgen_seexpr shader, we then type its name into the shader’s Customs attribute and enter the expression $custom_color; into the Expression field:

[Image: Connecting the custom shader parameter]

The rendering shows that the custom_color parameter has been picked up correctly:

[Image: rendering with the custom_color parameter]

Finally, let’s replace this simple color with a more complex expression. XGen provides quite a long list of expression samples, which can be accessed as shown in the image below. From the “Color” subset we choose “NoiseC”.

[Image: Load Expression]

A bunch of new parameters will be added below our custom_color parameter. Those parameters define the noise function we just selected. Instead of modifying them directly in the XGen window, let’s open the Expression Editor for immediate visual feedback. After some value tuning we get a quite colorful map:

[Image: Expression Editor]
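If you prefer typing expressions by hand, a rough equivalent built from SeExpr’s standard noise functions might look like the sketch below; $P (the primitive position) and the exact formula are assumptions, as the bundled NoiseC sample may differ:

    $freq = 10.0;        # spatial frequency of the noise, tune to taste
    cnoise($P * $freq)   # SeExpr's built-in color noise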

And here is the rendering we get out of this:

[Image: Rendering]

We could alter the expression to change the look even further, or create more custom parameters and combine them in the xgen_seexpr shader. The possibilities are endless, so we simply leave further experiments to you and your imagination!


iray Maxwell support in 3ds Max 2015 SP2

Maxwell GPUs

Maxwell is NVIDIA’s next-generation GPU architecture for CUDA compute applications.

Please check the current list of Maxwell GPUs.

In order for iray to render on Maxwell GPUs, you need to patch the iray library that is distributed with 3ds Max 2015 SP2.

Without this patch, iray in 3ds Max 2015 SP2 will fall back to CPU-only rendering if you have a Maxwell GPU.

Step by Step Procedure

Backup iray Library

The iray library is typically located at:

C:\Program Files\Autodesk\3ds Max 2015\libiray.dll

The version that ships with 3ds Max 2015 SP2 is 3.12.1.17:

[Image: file version 3.12.1.17]

Rename or move your original iray library in case you want to revert to it.

Update iray Library

Download iray library version 3.12.1.18 and place it at:

C:\Program Files\Autodesk\3ds Max 2015\libiray.dll

[Image: file version 3.12.1.18]

Restart 3ds Max 2015

Future patches and releases of 3ds Max will automatically support Maxwell generation GPUs.

More information on the Maxwell architecture.

mental ray for 3ds Max 2015 SP2

mental ray for 3ds Max 2015 SP2 was released several weeks ago.

It ships with mental ray version 3.12.1.17.

Several mental ray issues were fixed:

  • mental ray hang when rendering an empty scene
  • precision problem in the computation of glossy samples when using Arch & Design
  • subset pixel rendering not showing up in the rendered frame window when using mental ray

The new NVIDIA GPU Maxwell architecture is not yet supported, but we plan to release a patch very soon. Stay tuned!


Export your scene as .mi file from 3ds Max

Let’s pick up on the topic “Let .mi export…” and show how to export to .mi from 3ds Max.

Step by Step Procedure

You can simply export your scene to an .mi file in 3ds Max when using mental ray as the Production renderer.

  • Go to “Rendering/Render Setup” or press F10.
  • Choose NVIDIA mental ray as the Production renderer.
  • You will find “Export to .mi File” in the “Processing” panel in the “Translator Options” rollout.
    Note: All the controls (except the “Browse…” button) are initially greyed out until a filename is specified.

[Image: export]

  • Press the “Browse…” button to specify the .mi file output.
  • Render or press Shift+Q. Instead of actually rendering, the scene is exported to the .mi file.

You can use mental ray Standalone to render the exported .mi file.
Note: When rendering with Standalone you might see errors about missing SubstanceShader.dll and PointCloudShader.dll. These errors can usually be ignored and do not prevent proper rendering. You might want to delete the corresponding link statements in the .mi file to prevent these errors from showing up (see the link directives at the top of the .mi file).
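A few lines of script can comment the offending statements out. Here is a sketch; the file path is a placeholder, and you should back up the file first:

    # Sketch: comment out 'link' statements for shader DLLs that mental ray
    # Standalone cannot find. '#' starts a comment in the .mi format.
    unwanted = ("SubstanceShader.dll", "PointCloudShader.dll")

    with open("scene.mi") as f:              # placeholder path
        lines = f.readlines()

    with open("scene.mi", "w") as f:
        for line in lines:
            if line.lstrip().startswith("link") and any(d in line for d in unwanted):
                f.write("# " + line)
            else:
                f.write(line)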

User Interface

[Image: 3dsmax_exportmifile_detail]

“Export on Render”

Uncheck “Export on Render” if you want to render to the viewport instead of exporting to an .mi file.

“Un-compressed”

If you choose to export “Un-compressed” (the default), the three-dimensional vectors in objects are written in ASCII format. If you uncheck this option and export compressed, vectors in objects are exported in binary format, resulting in a smaller .mi file size and exact floating point precision. See also the section “Export Binary vs. Ascii” in the post about exporting to .mi files in Maya.

“Incremental (Single File)”

If you select the “Incremental (Single File)” option, animations are exported as a single .mi file that contains a definition of the first frame and descriptors of the incremental changes from frame to frame. If you uncheck “Incremental (Single File)”, each frame is exported as a separate .mi file.

“Browse …”

Lets you specify the output file name.
