mental ray’s GPU accelerated GI engine prototype

Autodesk 3ds Max 2015 and Autodesk Maya 2015 ship with mental ray 3.12, which includes a prototype of our new global illumination engine accelerated by the GPU. We encourage our 3ds Max and Maya users to try it out. Your feedback will help us make this a big step forward into the future of rendering with mental ray. While the current version is in prototype status and not yet feature complete, we are constantly improving the algorithms and adding new features. Your input is most welcome in this process.

The key idea of the new GI engine is a full and exact simulation of the lighting interactions in a scene. This way, we overcome the drawbacks of caching and interpolation techniques, and make mental ray more interactive and predictable. The brute-force ray tracing approach is accelerated on CUDA-capable NVIDIA GPUs, which makes it particularly attractive in this setup. Its result is combined seamlessly and automatically with the primary rendering done on the CPU. This ensures full compatibility with existing custom shaders, which do not need to be touched in order to benefit from the new GI engine.

The following Maya scene is rendered using the new GPU GI prototype in 11 min 34 sec (2 x quad-core Xeon E5620 @ 2.4 GHz with hyper-threading, 8 GB RAM, Quadro K5000).

Mosque - mental ray 3.12 GI GPU diffuse

Autodesk Maya scene courtesy Lee Anderson, environment from openfootage


For comparison, this image is rendered with the classical finalgather automatic mode in 20 min 52 sec (2 x quad-core Xeon E5620 @ 2.4 GHz with hyper-threading, 8 GB RAM, Quadro K5000).



In the current version, the GI GPU mode considers diffuse-diffuse bounces only, similar to what final gathering typically computes. In fact, if this mode is enabled without setting further parameters, then the finalgather settings are used to derive reasonable defaults that render toward the same quality. If prominent ray tracing effects like mirror reflections or transparent windows are not used in a scene, then this fastest diffuse mode is the best fit. For current limitations, see below.


The following image is rendered with the GI prototype in diffuse mode in 37 minutes (Core i7-3930K (6 cores), 16 GB RAM, Quadro K5000)

Medieval - mental ray 3.12 GI GPU diffuse

Autodesk 3ds Max scene courtesy David Ferreira


The following 3ds Max scene is rendered on the CPU with finalgather force in 13 hours (Core i7-3930K (6 cores), 16 GB RAM, Quadro K5000).

Medieval - mental ray 3.12 CPU


The GI GPU mode can be enabled and controlled with scene options or on the command line of the mental ray Standalone. We also provide scripts for Maya and 3ds Max that offer a simple GUI for enabling GI GPU (see screenshot). Please note that this is by no means how we envision it to be integrated into the applications; rather, it provides easy access for users who would like to test the prototype.



Script Download

The scripts to easily enable and access the GI GPU prototype can be downloaded directly from us here:


Current limitations

GI GPU transfers the scene geometry, presampled shader data, and a constant amount of buffer memory onto the GPU. Textures are not needed on the GPU. If the GPU memory is not sufficient, there is an equivalent CPU mode: the new GI engine can still be used, but the GPU acceleration must be disabled (uncheck the ‘Use GPU’ checkbox). There is also an absolute limit of 25 million triangles.

For GI GPU to be effective, finalgather must be turned on. Some features are not yet supported: distorting lens shaders, motion blur, particles, volume shaders, camera clipping planes, progressive rendering. There is only limited support for scattering shaders, emissive materials, and hair rendering.

Before testing GI GPU, we recommend installing a recent version of the NVIDIA graphics card driver.


Feedback and discussion

We would like to hear your feedback and see your renderings using this prototype. Please join our NVIDIA Advanced Rendering forum, if you have not already, to send us your comments and discuss the mental ray GI GPU prototype. You can find the dedicated mental ray GI GPU Prototype forum topic here.

mental ray for Maya 2015 SP2

mental ray for Maya 2015 SP2 is now available and provides important fixes around XGen hair rendering, layering shaders, massive assembly scenes, native IBL, and more. Here’s a short list of fixes that come with the new mental ray and the new version of the mental ray plugin.
Please consult the mental ray and the Maya 2015 SP1 and SP2 release notes for the complete list.

  • XGen
    The performance of the XGen hair shader has been significantly improved for scenes with dense hair.
  • Layering shaders (Mila)
    Several fixes in the shaders and the UI – namely, possible crash with subsurface scattering initialization, possible NaN values and a too bright contribution of the user IBL environment shader in glossy components, reordering layers with bump or weight connection.
  • Binary Maya file compatibility with Maya 2014 restored
    Some mental ray shader nodes saved in .mb files were not compatible between 2014 and 2015. This is now fixed.
  • GI GPU
    Improved performance of the GPU and multi-CPU rendering, and added support for NVIDIA Maxwell GPUs.
  • Large scene with assemblies
    Fixed possible crash when rendering large scenes with multiple assemblies exceeding physical memory.
  • Native environment lighting
    IPR now automatically updates when changing the new “emit light” features which control the native environment lighting. Fixed brightness difference in IPR versus Render Current Frame.

Happy rendering,


Load ’em On Demand

Recently, we were asked about a not-so-well-known feature to help rendering heavy scenes with mental ray inside Maya, or 3ds Max.

You may skip the introduction and jump down to the setting at the end of this article. For those who are into details, here is a little background.

As you might know, mental ray is based on the concept of “loading on demand”, which helps to cope with huge amounts of data that won’t ever fit into the available memory at once. Buying more memory for your rendering machine will help, but by then your scene size and texture needs will have grown again. Typically, you should not notice, since mental ray handles these cases automatically. It delays all operations that are memory-intensive or expensive to compute to the latest possible point in time, and only executes them when really needed to render the current pixel. That is true for scene data, when loading elements from an assembly or Alembic archive, but also for textures, reading and decoding only those textures needed, possibly even keeping just pieces of them in memory, so-called “tiles”. Most importantly, the tessellation of source geometry into triangle data is done on demand only, which is absolutely critical when working with very detailed displacement. Finally, this mental ray machinery of demand loading is also exposed to shader writers.

Let’s look at mental ray for Maya and its use of this technique for scene translation. Normally, when you do a preview render in Maya (like “Render Current Frame“), the whole scene is converted to the mental ray database before the actual rendering starts, often referred to as the “translation” step. This includes every scene element, independent of its contribution to the rendering of the frame or animation. What is usually fine for most scenarios might become a bottleneck in extreme situations with many large pieces of geometry or several big chunks of hair, especially if they are not actively involved when rendering the current view. Like in the example below (a quickly painted cityscape in Maya, but you get the idea).

City Far Away - rendered with mental ray
City Asset
City Street View - rendered with mental ray
City Rendered View

The usual answer to this problem would be: create a mental ray assembly for those scene parts and reference that in your master scene. But, there is an easier way right from within Maya:

Enable “Render Settings > Options > Translation > Performance > Export Objects On Demand“, as marked below.

mental ray for Maya - Export On Demand
Render Settings – Export Objects On Demand

What does it do?

It does not pre-translate geometry before rendering starts, as Maya usually does, but delays it to rendering time. The translation just creates so-called “placeholders” – basically bounding boxes around the pieces of geometry – that trigger the actual translation only when a ray hits that box (or a certain feature requests the actual geometry). Because translation becomes very fast, it finishes almost unnoticed, so that the Maya progress bar typically starts with “mental ray rendering…”. Leaving the “threshold” setting at 0 (zero) causes all objects to be demand-loaded, even tiny ones, which may be inefficient. If you increase it, only objects whose number of points/vertices exceeds that value are considered for demand loading; the rest gets pre-translated.
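The placeholder mechanism described above can be sketched in a few lines of Python. This is a conceptual illustration only, not the mental ray API; all names here are hypothetical:

```python
class Placeholder:
    """Conceptual sketch of demand loading: geometry is translated only
    when a ray first hits the bounding box."""

    def __init__(self, bbox_min, bbox_max, load_geometry):
        self.bbox_min = bbox_min          # (x, y, z) corner of the box
        self.bbox_max = bbox_max
        self._load = load_geometry        # the expensive translation step
        self._geometry = None             # filled on demand

    def _hits_box(self, origin, direction):
        # Standard slab test for an axis-aligned bounding box.
        tmin, tmax = 0.0, float("inf")
        for o, d, lo, hi in zip(origin, direction, self.bbox_min, self.bbox_max):
            if abs(d) < 1e-12:
                if o < lo or o > hi:
                    return False
            else:
                t1, t2 = (lo - o) / d, (hi - o) / d
                tmin = max(tmin, min(t1, t2))
                tmax = min(tmax, max(t1, t2))
                if tmin > tmax:
                    return False
        return True

    def intersect(self, origin, direction):
        if not self._hits_box(origin, direction):
            return None                   # cheap reject: nothing translated
        if self._geometry is None:
            self._geometry = self._load() # translate on the first hit only
        return self._geometry


loads = []
ph = Placeholder((0, 0, 0), (1, 1, 1),
                 lambda: loads.append(1) or "triangles")
ph.intersect((5, 5, 5), (1, 0, 0))        # misses the box: no translation
assert ph._geometry is None
ph.intersect((-1, 0.5, 0.5), (1, 0, 0))   # hits: geometry translated once
ph.intersect((-1, 0.5, 0.5), (1, 0, 0))   # second hit reuses the result
assert loads == [1]
```

The point of the sketch is the cost profile: rays that never touch the box pay only the slab test, and the expensive translation runs at most once.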

Please remember that in a ray tracing or global illumination context, all objects may be demand-translated immediately anyway, even if out of sight! In that case there may be no real benefit to using this mode. Also, this translation mode has a certain runtime overhead attached to it, so it may pay off only in certain cases, and only with an appropriately chosen threshold.

This setting is saved with your scene. That means, it will also work with “Maya Batch”.

Are you working with 3ds Max? The same feature is available there too, enabled with “Render Setup > Processing > Translator Options > Memory Options > Use Placeholder Objects“, as shown below.

mental ray for 3ds Max - Use Placeholder
Render Setup – Use Placeholder Objects

Just give it a try, and leave a comment if you find it useful.

The mental ray Standalone has a similar feature if bounding boxes are given in the .mi file. It is enabled on the command line with:

> ray -reload

Happy rendering,

mental ray’s progressive rendering is now available in 3ds Max 2015 ActiveShade

You can now preview mental ray rendering interactively in 3ds Max: camera navigation, adjustments to light and material parameters, and object creation are immediately reflected in the ActiveShade window using mental ray. The rendering provides an accurate look over time – just like what you will get in the final frame.

mental ray takes advantage of the ActiveShade improvements released with 3ds Max 2015: many changes are captured more frequently, offering finer-grained updates. These changes include viewport navigation, switching between viewports, adjustments to light parameters, and certain other scene changes (creating, moving, or deleting objects).


[vimeo 94391571 w=800 h=600]


Layering in Maya2015

The Layering shader library (MILA) in mental ray 3.12 provides a flexible, component-based set of shaders designed to work with each other to accommodate most look development needs. It is optimized for modern rendering usage with unified sampling and quality control. It is more efficient with light sampling, and built to take advantage of light importance sampling. It also provides a lightmap-less subsurface scattering component.

In this post, we show how to use the Layering (MILA) shaders in the UI for Maya 2015. This is a first step, but the main concepts will carry through further steps and across DCC applications. We are looking into providing a similar workflow for 3ds Max.

Components – base and layers

The first concept is that of a base component with layer components placed over the top of the base. We provide elemental components for layers, and base combination components for the base layer. You may hear us use the term Phenomenon for a pre-combined network of shader nodes. The base components, except for pure Diffuse, are these combination components called Phenomena.

A newly created mila_material always starts with a Diffuse base component by default. Here is what you would see in the Attribute Editor (AE) for the mila_material if you wish to select a different base component.


On the left below, we show a head (courtesy of 3D Scan Store) using only Diffuse, and on the right, the Diffuse (Scatter) base component.

head_diff        head_diff_scatter


As mentioned, we have a set of elemental components for layering over the base. These provide elemental material characteristics such as diffuse, glossy, and specular reflection and transmission. Below left is the diagram we typically show to illustrate the light path representation of these elemental components. On the right, we render each of these components isolated on its own sphere to correlate with that diagram.

dgs mila_dgs


On the left below, we show the Diffuse (Scatter) to compare with the right, where we have layered two glossy reflection components on top of that base.

head_diff_scatter  head_glossy_diff_scatter

elemental_components

On the right, we show the elemental components you may currently select in the UI, categorized by Reflection, Transmission, Subsurface Scattering, and Emission. All of these can be layered on top of the base or other layers.

To add a new layer on top of the current layers, click the +layer tab/button at the top of the mila_material AE UI; you are then presented with this elemental component selection menu.


There are three different types of layers, for which we need to explain weight.

What is weight?

For a given layer, the weight represents a percentage of the incoming energy. A layer can be weighted simply, or can have some directional dependency. For example, a Fresnel weight curve derived from an index of refraction can be multiplied with the weight, so that the layer has a higher weight at grazing angles.




A Weighted layer is simple, as the weight represents the incoming light energy used.

A Fresnel layer uses an index of refraction input for directional dependency.

A Custom layer uses a Schlick approximation for directional dependency. Those familiar with the mia_material may recognize the controls for weight at facing and grazing angles and a curve exponent.
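As a rough illustration of the directional dependency described above, here is a small Python sketch of Schlick's approximation and a facing/grazing weight curve. The exact formulas the shaders use internally are an assumption here; this only shows the shape of the curves:

```python
def schlick(cos_theta, f0):
    """Schlick's approximation to Fresnel reflectance:
    F = f0 + (1 - f0) * (1 - cos_theta)^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def custom_weight(cos_theta, facing, grazing, exponent):
    """mia_material-style facing/grazing controls with a curve exponent
    (assumed form): blend from the facing weight toward the grazing
    weight as the viewing angle becomes shallow."""
    return facing + (grazing - facing) * (1.0 - cos_theta) ** exponent

# At normal incidence (cos_theta = 1) the layer keeps its facing weight...
assert custom_weight(1.0, 0.05, 1.0, 5.0) == 0.05
# ...and approaches the grazing weight at shallow angles (cos_theta -> 0).
assert abs(custom_weight(0.0, 0.05, 1.0, 5.0) - 1.0) < 1e-9

# f0 can be derived from an index of refraction, e.g. IOR 1.5 gives
# f0 = ((n - 1) / (n + 1))^2 = 0.04, a typical dielectric reflectance.
n = 1.5
f0 = ((n - 1) / (n + 1)) ** 2
assert abs(f0 - 0.04) < 1e-9
assert abs(schlick(0.0, f0) - 1.0) < 1e-9   # full reflectance at grazing
```

Both curves rise toward grazing angles, which is why a Fresnel or Custom layer takes more energy away from the layers beneath it near silhouettes.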


The mila_material AE UI presents layers in a top-down fashion, so one could imagine how the energy comes in at 100% from the top, while each layer takes away a percentage of that energy. It gives you a notion of the material as if looking at a cross-section. The base on the bottom is always considered as 100% of whatever is leftover, and therefore we don’t expose the weight for the base layer in the UI.


Above, the top layer receives 20% of the incoming light energy. The middle layer receives 25% of the 80% leftover energy from the top layer. One quarter of 80% makes another 20% of the overall incoming energy, and the base receives the leftover energy from the middle layer, which is now 60% of the initial incoming energy.
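The top-down energy split described above amounts to a simple running product, sketched here in Python:

```python
def layer_energy(weights):
    """Split 100% of the incoming energy across layers top-down; each
    layer takes its weight of whatever remains, and the base receives
    the final leftover."""
    remaining = 1.0
    shares = []
    for w in weights:
        shares.append(remaining * w)   # this layer's share of the total
        remaining *= (1.0 - w)         # pass the rest down
    shares.append(remaining)           # base gets whatever is left
    return shares

# Top layer weighted 20%, middle layer 25%; the base takes the rest.
top, middle, base = layer_energy([0.20, 0.25])
assert abs(top - 0.20) < 1e-9
assert abs(middle - 0.20) < 1e-9   # 25% of the remaining 80%
assert abs(base - 0.60) < 1e-9
```

The numbers match the example above: 20% to the top layer, 20% to the middle, and 60% left for the base.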

Mixing components

The UI also provides for mixing components in a given layer. One can think of a mix like a blend of two paints. This diagram depicts three layers, with the middle layer containing a mix of two components. The mix is built as the layer components are placed on top of each other.


In the Maya AE UI for mila_material, when there is a layer on top of the base component, a user can  click on the +mix tab/button to mix a component into the top layer. The weight of the mix will take energy away within that layer. There will still be an overall layer weight for the mixed components. In the example below, this is actually a Fresnel layer mixing two different roughness variations of glossy reflection in a 50-50 mix.

mix_add  mix_glossy

Masked layers

The weight may also be thought of conceptually as a masking function for a material placed on top of another material, as shown with the following image on top of the head we’ve already seen above. Now he’s ready for paintball.

head_paint_glossy_diff_scatter  head_layers


Look Development

So what does this mean for look development in terms of how to approach layering for something you want to match?

In our GTC presentation, David Hackett from The Mill used the following example from a Norfolk Southern commercial (assets used for demonstration here are courtesy of Norfolk Southern Corp., “City of Possibilities”).

Here we see the final layered material rendered with a background plate and reference mirror and gray balls.


To start, we can now think about the look of the original object based on its material properties, rather than which shader node attributes to tweak. We examine a variety of references. This one obviously has bright light hitting it.


How shiny is it? How shiny are the various layers? Is it a blend of shiny characteristics?

Where do other properties like rust or dust show up as potential layers?

Note the rust is not as shiny, and has noisier color variation.

NorfolkSouthern3


This translates into various component layers for the mila_material, including masks for weighting. These masks can be generated in Mudbox or Mari using UV tiles.

NorfolkSouthern5 NorfolkSouthern6 NorfolkSouthern7 NorfolkSouthern8 NorfolkSouthern9

We think about the individual parts starting with the base paint layer which is primarily diffuse below left. Then, add a glossy reflection layer using the Fresnel layer. The roughness has a texture mapped to it below right.

NS_layer1  NS_layer3

Now we use a mask to weight a Grime layer. On the left below, we show it more obviously with a red color, and on the right we use the diffuse reflection component we intend.

NS_layer3_red NS_layer4

At this point we have three layers in the UI: a base diffuse with a glossy layer, and on top of that, the masked diffuse layer for the rust. That is what we see on the left below. For further detail, we add a more subtle dust component and mix it in with the rust. On the right below, we see the resulting mila_material AE UI, noting the slight gray color shift for each level down in the shader network hierarchy. Note also that we’ve collapsed the original rust diffuse reflection component displayed underneath the new dust diffuse reflection.


For more subtle additions, we again make the dust component red in order to spot it, on the left below. The final look development image, with extra tweaks, is shown on the right.

NS_layer4_red  NS_layer4_final

This workflow will continue to be developed, as there are already plans for enabling more complex components while making the selection of those components easier. For a simple example, we could have created the dust layer first on a separate material, then later chosen that material from a menu of existing mila_materials as a component to be layered.

The layering shaders provide a glimpse into NVIDIA’s Material Definition Language (MDL), which is designed to provide material sharing across rendering platforms. For better compatibility with MDL, many of the optimizing shader controls of the past have been moved out of the MILA shaders and into string option controls. This also allows the renderer to optimize more automatically, and makes it possible to apply sweeping quality control changes over a large scene full of many materials.

More to come

This post provides a brief glimpse into what the layering shaders provide, and there will be more posts to discuss the global string options available for quality control, the base Phenomena in more detail, and the render passes provided as emulation of Light Path Expressions (LPE), another forward looking technology coupled with MDL.

mental ray for Maya 2015

In this post, we give a short intro to the newest features of the mental ray for Maya 2015 plugin that is delivered with Autodesk Maya 2015. This is an overview: we will dig deeper into some of the topics in later posts.

Rendering ptex made as simple as possible

We have streamlined the workflow significantly for rendering ptex in Maya 2015.  You can simply specify your ptex file in the file node and render.  The ptex filter parameters are exposed in the file node.  You can render color, as well as scalar and vector displacement, and normal maps.


Support for UV-tiles

UV-tiled images are output from Mudbox, ZBrush, and Mari. A gigantic texture is decomposed into a grid of small textures, and the positioning of the tiles is specified by a naming scheme.
Some of you might have used the scripts that are available to render UV tiles with mental ray for Maya, which create a shader chain to do the placement of all tiles of a UV-tile image. Now, however, you can render as if it were a simple texture in the regular file node. The user only has to specify the first tile and the naming scheme.
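As an illustration of such a naming scheme, the common Mari/Mudbox UDIM convention maps a tile at integer UV offset (u, v) to the token 1001 + u + 10*v. The `<UDIM>` placeholder syntax below is for illustration only and may differ from the exact tokens the file node accepts:

```python
def udim_token(u_tile, v_tile):
    """UDIM convention: tiles are numbered 1001, 1002, ... across U
    (10 columns per row), stepping by 10 per row in V."""
    return 1001 + u_tile + 10 * v_tile

def tile_filename(pattern, u_tile, v_tile):
    # e.g. pattern = "color.<UDIM>.tif" (placeholder syntax assumed here)
    return pattern.replace("<UDIM>", str(udim_token(u_tile, v_tile)))

assert udim_token(0, 0) == 1001    # the first tile
assert udim_token(9, 0) == 1010    # last column of the first row
assert udim_token(0, 1) == 1011    # first column of the second row
assert tile_filename("color.<UDIM>.tif", 0, 0) == "color.1001.tif"
```

This is why specifying the first tile plus the scheme is enough: every other tile's filename can be derived from its position in the grid.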

AO GPU exposed

The AO GPU plugin has been a part of mental ray since version 3.11. It allows one to render the AO pass at very high quality in parallel to the beauty pass. With a recent GPU, the user gets the AO pass for free. AO GPU is now enabled in mental ray for Maya 2015 as a parameter of the AO render pass.
The new ‘useGPU’ checkbox enables it. You might want to start by setting the number of rays to 8 or 16; this usually results in sufficient quality. Please let us know on our forum if you have other experiences.


Native IBL exposed

mental ray’s native environment lighting delivers higher quality at better performance compared to the classical shader-based solution. It can now be enabled for image-based lighting by selecting the ’emitLight’ checkbox in the IBL shape node. The only control exposed at the basic level is Quality. Advanced controls expose ‘Resolution’ and ‘Resolution Samples’ to control the building of the acceleration structure, as well as a ‘Color Gain’ control.
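Conceptually, an acceleration structure for environment lighting precomputes a discrete distribution over the environment map so that bright directions are sampled more often. A minimal Python sketch of that idea (not mental ray's actual implementation):

```python
import bisect

def build_env_cdf(luminance_rows):
    """Flatten a small luminance grid into a normalized discrete CDF so
    bright texels are picked proportionally more often -- a conceptual
    sketch of what an importance-sampling structure precomputes."""
    flat = [l for row in luminance_rows for l in row]
    total = sum(flat)
    cdf, acc = [], 0.0
    for l in flat:
        acc += l / total
        cdf.append(acc)
    return cdf

def sample_texel(cdf, xi):
    """Map a uniform random number xi in (0, 1] to a texel index with
    probability proportional to that texel's luminance."""
    return bisect.bisect_left(cdf, xi)

lum = [[0.0, 0.0],
       [0.0, 9.0]]        # one very bright texel (e.g. the sun)
cdf = build_env_cdf(lum)
# Every random number lands on the single bright texel (flat index 3).
assert all(sample_texel(cdf, xi) == 3 for xi in (0.1, 0.5, 0.99))
```

A higher-resolution grid resolves small bright features (like a sun disk) more precisely, at the cost of more precomputation, which is the trade-off a resolution control would expose.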


Xgen Hair

Autodesk Maya 2015 provides XGen for the generation of hair, fur, and other primitives. The XGen shader for mental ray allows rendering XGen-created content with mental ray.

mila shaders coming with a UI now!

The layering shaders will be discussed in depth in a later post.

Happy Rendering!



Hi mental ray users, friends, rendering beginners and pros:

We are very pleased to announce this blog as a new, direct source of information about mental ray, one of the most widely used 3D rendering technologies in the world. We would like to use this channel to reach out to you and tell you about the exciting new features we are currently working on, but also about efficient workflows and best practices inside the various applications that have mental ray integrated or as plugins, like Autodesk Maya, 3ds Max, Softimage, and Cinema4D, or mental ray Standalone.

These are exciting times for rendering!  Traditional and well established lighting and shading techniques meet with physically based rendering approaches in order to compute highly realistic imagery with the push of a button while taking advantage of all available hardware.  We believe mental ray is flexible enough and continuously progressing to be used successfully in both of those domains, and anything in between.

We aim to accompany and support you in your daily work with mental ray, show the best and easiest ways to use it, or achieve a certain look or visual effect, and spread the word about the latest and greatest additions and changes.

We are looking forward to your feedback and comments, and invite you to follow us.

Happy rendering,

The mental ray development team.
Steffen Roemer, Product Manager Rendering Software, mental ray