Here is an overview of the mental ray and iray features which were integrated in 3ds Max 2016.
NVIDIA Material Definition Language (MDL)
The Material Definition Language (MDL) is an NVIDIA initiative to standardize physically based material designs in a common format, see http://www.nvidia.com/MDL. mental ray for 3ds Max 2016 is capable of rendering pre-packaged MDL materials. We will create a dedicated blog post to explain how to enable MDL in 3ds Max 2016.
Rendering MDL with mental ray
Light Importance Sampling (LIS)
The new Light Importance Sampling mechanism in mental ray allows sampling the whole set of lights as if it were one single light, placing more samples on the lights that contribute more to the part of the scene being rendered. It is an importance-driven mechanism controlled by a simple set of parameters. Both area and point lights are importance-sampled, and no fundamental change is required in material and light shaders. This mechanism is typically useful in scenes with many lights, but can also be beneficial in simpler cases.
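The core idea can be sketched in a few lines: treat all lights as one distribution and draw samples in proportion to each light's contribution. This is a minimal illustration of the principle, not mental ray's actual implementation; the light powers and sample counts are made up.

```python
import bisect
import random

def build_light_cdf(light_powers):
    """Cumulative distribution over the lights, proportional to power."""
    total = sum(light_powers)
    cdf, running = [], 0.0
    for power in light_powers:
        running += power
        cdf.append(running / total)
    return cdf

def pick_light(cdf, u):
    """Map a uniform random number u in [0, 1) to a light index."""
    return bisect.bisect_right(cdf, u)

# Three lights with very different contributions: the importance-driven
# pick places most of the samples on the brightest light (index 1).
powers = [1.0, 100.0, 5.0]
cdf = build_light_cdf(powers)
random.seed(42)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[pick_light(cdf, random.random())] += 1
```

In a real renderer each sample would additionally be weighted by the inverse of its selection probability, so the lighting estimate stays unbiased.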
Light Importance Sampling parameters
Ambient Occlusion, GPU accelerated
mental ray offers a new, efficient, GPU accelerated “mr Ambient Occlusion” render element.
mr Ambient Occlusion Render Element
mr Ambient Occlusion parameters
“Max Distance” controls the maximum length of occlusion probe rays (note: a value of 0 for “Max Distance” means infinite distance). “Falloff” controls how much the occlusion fades out with distance.
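As a rough sketch of how these two parameters could interact — the exact falloff formula used by mental ray is not documented here, so this curve is an assumption for illustration only:

```python
def occlusion_weight(hit_distance, max_distance, falloff):
    """Hypothetical occlusion weight for a probe-ray hit:
    full occlusion at distance 0, fading to none at max_distance.
    A max_distance of 0 is treated as infinite, so every hit occludes fully."""
    if max_distance == 0:
        return 1.0
    if hit_distance >= max_distance:
        return 0.0
    # Higher falloff values make the occlusion fade out faster with distance.
    return (1.0 - hit_distance / max_distance) ** falloff
```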
The mental ray and the iray renderers now offer the “Parametric” approximation method which can help to troubleshoot scenes where the “Length” method exhibits artifacts, for example scenes with very regular and flat geometry. This method is available from the “Render Setup/Renderer” tab and from the “Object Properties/mental ray” tab.
Object Properties/mental ray tab – Displacement Settings
The parametric approximation method regularly subdivides each triangle of the surface. The “Subdivision Level” specifies how many times each input triangle should be subdivided. A higher “Subdivision Level” results in a higher triangle count: each input triangle is subdivided into 4^(Subdivision Level) triangles. Note: there is an internal limit of 8 million triangles per object.
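The triangle growth is easy to compute; here is a quick sketch (the 8-million-triangle limit comes straight from the note above):

```python
def subdivided_triangle_count(input_triangles, level):
    """Each input triangle is split into 4**level triangles."""
    return input_triangles * 4 ** level

def max_level_under_limit(input_triangles, limit=8_000_000):
    """Highest Subdivision Level that keeps the object under the
    internal per-object triangle limit."""
    level = 0
    while subdivided_triangle_count(input_triangles, level + 1) <= limit:
        level += 1
    return level
```

For example, a 1000-triangle object at level 3 already produces 64,000 triangles, and level 6 is the highest setting that stays below the 8-million limit.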
The iray renderer offers a new helper object, “iray Section”. The “iray Section” behaves similarly to the “Grid” helper and is used to cut off geometry in the rendered image. Section planes can either cut off the geometry completely (letting the light in), or let the viewer take a peek inside; see the “Clip Light” parameter. You can define up to 8 section planes.
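Conceptually, a section plane is just a signed-distance test: geometry on one side of the plane is kept, the other side is cut away, and several planes combine by intersection. A minimal sketch — the point/normal plane representation is an assumption for illustration; the actual helper exposes position and orientation through its transform:

```python
def in_front(point, plane_point, plane_normal):
    """True if point lies on the kept side of the section plane
    (sign of the dot product with the plane normal)."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return d >= 0.0

def visible(point, planes):
    """A point survives only if it is on the kept side of every
    section plane (up to 8 in the iray Section helper)."""
    return all(in_front(point, pp, pn) for pp, pn in planes)
```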
In this post, we introduce the newest features of the mental ray for Maya 2016 plugin that is delivered with Autodesk Maya 2016 and can be downloaded here. Stay tuned for more in-depth posts on the features.
Render Settings Redesign
This version comes with a completely new layout of the Render Settings. Our goal is to make the rendering experience with mental ray straightforward and easy. Settings are greatly simplified and grouped together, and five tabs sort them by topic. For advanced users, each tab provides the ‘Advanced Settings’ mode with more detailed controls to fine-tune the rendering.
The Scene tab contains a simplified render passes system for standard utility passes
as well as MILA Light Path and Matte Pass passes.
A more thorough introduction to the new Render Settings will follow shortly.
NVIDIA Material Definition Language
mental ray 3.13 renders materials defined by the NVIDIA Material Definition Language (MDL). MDL is an NVIDIA initiative to standardize physically based material designs in a common format, see http://www.nvidia.com/MDL. Prepackaged MDL materials can be applied in mental ray for Maya 2016. We will provide you with an introduction on how to use MDL in Maya 2016 and with examples for download in this blog soon.
Light Importance Sampling By Default
Light Importance Sampling is now enabled by default. It gives a significant speed/quality advantage out of the box, especially with modern and complex lighting setups, emissive objects, and very many light sources. In addition, new heuristics automatically determine which light sources in the scene are physically plausible and would benefit from importance sampling. This way, traditional idealized light sources and simple lighting setups can be detected and handled separately; for example, they may be excluded from importance sampling so that an overall benefit is retained even though the feature is generally enabled. Custom light shaders are fully supported and will be included in importance sampling if they adhere to physically plausible emission and distribution rules.
mental ray 3.13 adds support for generating ‘deep’ data and output to OpenEXR files. The resulting image is saved in the DeepTile form of the OpenEXR 2.0 file format, storing additional information of the pixel colors along the Z axis. It is possible to save deep and simple 2D data into different frame buffers during the same rendering.
UV Tiles Optimized
Rendering UV-tiled textures is faster and more memory efficient in this version because it is based on a native mental ray shader. It auto-creates and loads the tile textures into mental ray on demand, making sure that only those tiles that are actually accessed get loaded into memory. The shader is part of a new package called coreutil, which collects essential mental ray utilities and helper functions.
Create Lights menu
There is a new section in the Create|Lights menu for mental ray lights showing modern mental ray lights in a prominent and easily accessible place.
Using custom geometry to light your scene is now possible with mental ray for Maya 2016. You can assign the new ‘Object Light’ material to your geometry or, with the geometry selected, you can choose ‘Object Light’ from the new mental ray section in the Create|Lights menu. This will turn your geometry into a light.
Light bulb model courtesy of David Hackett.
Rendering Bifrost with mental ray
Bifröst is a procedural framework that can create simulated effects ranging from liquid to foam, bubbles, spray and mist. These effects can be rendered using the bifrost geometry shader delivered with mental ray for Maya 2016.
Autodesk published videos on Bifröst that show mental ray rendering at the end of each.
Rendering XGen with mental ray has been improved and enhanced with new features.
The default XGen hair shader for mental ray is now xgen_hair_physical. It is based on the mental ray human hair shader mib_illum_hair_x which has been improved with mental ray 3.13. It now adds contributions from indirect lighting to the shading. New parameters have been added to tune the tube shading look and to control the internal color noise effect.
Displacing sphere and dart primitives is now possible allowing for a much wider use-case for these primitives.
Texture Filtering based on Ray-Differentials
For advanced ‘elliptical’ texture filtering in Maya’s file node, we are now using ray differentials provided by mental ray core. This introduces more accurate and artifact-free texture filtering even across ray traced reflections and refractions.
To enable it, select a file node, choose ‘Mipmap’ as the filter type, go to the mental ray section, and enable ‘Elliptical filtering’. You can choose between the ‘Bilinear’ and ‘Bicubic’ filter modes.
Here’s a new version of the abcimport shader that is compatible with mental ray 3.12 and mental ray for Maya 2015. It can be downloaded from this link. The package contains a version for Windows, Linux and Mac OS X, as well as the .mi file with the new shader declaration.
The new features:
Support for hair import
Alembic curve objects are translated into mental ray hair geometry. The curves can be defined with linear segments, quadratic Bezier, or cubic b-spline (either uniform or with a given knot vector). Hair approximation quality can be set with the shader parameter “subdivisions” (default 0), which controls the number of subdivisions for hair segments. The picture shows a rendering of an Alembic file generated by Ornatrix 3, with linear hair segments, using the shader “mib_illum_hair_x”. Hair Alembic file courtesy of Ephere Inc.
Faceset material support
In Maya, the facesets are assigned with material names. Our abcimport shader now uses this information to reference materials with these names. Facesets can be either triangle or polygon mesh. (Facesets for subdivision surfaces will be added later.) Faceset translation is enabled by default and can be turned off by setting the shader parameter “facesetmaterials” to off.
User data support
Triangle meshes can now reference user data such as color3, color4, point, normal, float, and integer. The user data must be attached as an “arbitrary geometry parameter property” to objects in the abc file. Please contact us for more details.
Subdivision control for hair, NURBS and subdivision surfaces
The “subdivision” parameter is now also applied to hair, NURBS trim curves, NURBS, and subdivision surfaces.
Motion blur issues fixed
Motion blur for topology-changing abc files is now possible. We fixed an issue with velocities in Alembic files, which were not interpreted correctly.
These features will be incorporated into the upcoming versions of mental ray this year.
This is the first post of a new category: NVIDIA mental ray – In the Lab. We want to give peeks into research projects and features we are working on for future versions of mental ray. Here we like to talk about our GPU accelerated Ambient Occlusion solution, in short: AO GPU.
The video demonstrates three new features of AO GPU: Progressive Rendering, High Quality Anti-Aliasing, and Object Instancing. It was generated with a standalone test application running on an NVIDIA Quadro K6000 GPU.
Previously, AO GPU used a sampling mode optimized for batch rendering. This sampling scheme, based on Fibonacci numbers, converges very fast and efficiently to good results. The caveat is that you have to know the number of samples in advance, plus there are only certain sample numbers you are allowed to use (1, 2, 3, 5, 8, 13, …). If you stop rendering ‘in-between’, the image will look wrong.
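To see why stopping ‘in-between’ looks wrong, here is an illustrative golden-ratio lattice — a generic construction for demonstration, not mental ray's exact sampler. The key property: the first points of a larger set are not a valid smaller set.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def fibonacci_points(n):
    """Golden-ratio lattice on the unit square: well distributed,
    but only as the complete set of n points."""
    return [((i + 0.5) / n, (i * PHI) % 1.0) for i in range(n)]

# Stopping 'in-between' means taking a prefix of a larger set.
# The prefix clusters in one part of the domain instead of covering it.
complete_5 = fibonacci_points(5)
prefix_5 = fibonacci_points(13)[:5]
```

The complete 5-point set spreads across the whole square, while the 5-point prefix of the 13-point set covers less than half of it in the first coordinate — which is why intermediate images look wrong until the full sample count is reached.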
Now, sometimes you are not sure how many samples are needed to achieve the desired quality. The usual solution is progressive rendering: watching intermediate rendering results and stopping the render process when the quality is good enough. As you can see in the video, Fibonacci sampling is not suited for progressive rendering; intermediate images show strange wobbling shadow effects (we moved the camera to trigger a re-rendering in the application). Switching to our new progressive sampling scheme fixes the wobbling artifacts; you just see the noise disappear over time.
This progressive sampling scheme does not converge as fast, but it is well suited for ‘time-constrained’ situations where you want to stop rendering after a certain time limit and are not sure what the best sample count setting would be.
High Quality Anti-Aliasing
The existing AO GPU solution used a very simple anti-aliasing scheme to filter pixel jaggies at geometric edges. Essentially, this filtering was constrained to the area of a single pixel, and every sample was equally weighted. Of course, for some scenes this simple Box filtering is not good enough. In the video, we magnified a detail of such a scene to show the problem. Look at the dark vertical gap between the doors of the cupboard; it looks staggered even with many anti-aliasing samples.
We added new filters to improve this. Anti-aliasing now samples a larger area than a single pixel, and the samples are weighted according to a filter curve. In the video, we switch to a special Gauss filter curve with a sampling area of 3 pixels in diameter, and you should see that the lines look much better now. Other available filter curves are ‘Triangle’ and ‘Gauss’.
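A truncated Gaussian over a 3-pixel-diameter area can be sketched like this (the sigma value is chosen arbitrarily for illustration; the actual filter parameters are not specified here):

```python
import math

def gauss_weight(dx, dy, radius=1.5, sigma=0.5):
    """Truncated Gaussian filter weight for a sample at offset (dx, dy)
    from the pixel center; zero outside the filter radius (1.5 pixels,
    i.e. a 3-pixel diameter)."""
    r2 = dx * dx + dy * dy
    if r2 > radius * radius:
        return 0.0
    return math.exp(-r2 / (2.0 * sigma * sigma))
```

A sample at the pixel center gets full weight, samples farther away contribute less, and samples outside the 3-pixel-diameter area contribute nothing — unlike the old box filter, where every in-pixel sample was weighted equally.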
For complex scenes, memory consumption is always a concern. The model shown in the video has 21 million triangles, using about 3 GB of GPU memory. If we want to render more objects, we might lose GPU acceleration, because the additional models no longer fit into GPU memory. The current AO GPU solution will switch to CPU rendering automatically, but rendering will take much longer. If the additional objects are copies of the same model, the model can be reused in the rendering without taking much more memory. This model-replication technique is called instancing.
We implemented instancing in the AO GPU prototype to see how far we can get with this technique on a GPU. As the video shows, a million replicas are no problem.
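The memory argument behind instancing is simple: store the heavy geometry once and reference it many times with lightweight transforms. A conceptual sketch, reduced to ten thousand placements for brevity (the idea scales to the million replicas shown in the video):

```python
class Mesh:
    """Heavy geometry: stored once, however often it is placed."""
    def __init__(self, triangle_count):
        self.triangle_count = triangle_count

class Instance:
    """Lightweight reference: shares the mesh, stores only a transform."""
    def __init__(self, mesh, translation):
        self.mesh = mesh
        self.translation = translation

# One 21M-triangle model (as in the video), placed on a 100 x 100 grid.
model = Mesh(21_000_000)
instances = [Instance(model, (float(x), 0.0, float(z)))
             for x in range(100) for z in range(100)]

# Every instance points at the very same geometry object.
shared = all(inst.mesh is model for inst in instances)
```

The per-instance cost is just a transform, so memory grows with the number of placements, not with the triangle count of the model.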
We hope you enjoyed reading this post and watching the video.
Today we want to talk about how to use a custom hair shader to render XGen hair in mental ray for Maya. We will see how such a shader can be assigned, how various helper nodes can be used to alter the appearance of the hair, and finally, we will make use of Disney’s expression language SeExpr to create some interesting effects. For this purpose we use this little doll with a simple XGen hair cut:
By default, XGen uses the xgen_hair_phen shader. This shader mimics the appearance of hair using a classical Phong model with 2 specular lobes. The shader’s diffuse color parameter can be fed by the custom XGen shader parameters root_color and tip_color. These need to be defined in the “XGen Window” and allow us to make use of SeExpr. We won’t go into details on xgen_hair_phen here but simply link to a good overview of all its parameters and an interesting tutorial on how to use it instead.
We start by assigning a different hair shader. For this, we select the XGen hair collection node in the “Outliner”, then choose “Assign New Material” under the “Lighting/Shading” menu and finally apply mib_illum_hair_x:
After some parameter tweaking, we end up with the following rendering:
Let’s add some root-to-tip color variation. In “Hypershade”, we create an xgen_root_tip_color_blend node and feed it into both the diffuse and the secondary color slots of the mib_illum_hair_x shader. The shader provides two color inputs which are blended together over the length of the hair. We choose the root color to be the same we had in mib_illum_hair_x before and orange for the tip:
To get some color variation not only along the length of the hair but also across the head, we now apply a texture to the root_color by connecting an xgen_ptex_lookup. We load the XGen region map for want of a better texture. Here is the result:
So far so good. But still no expressions involved.
In the “Output/Preview” section of the “XGen Window”, we add a new parameter of type color and name it custom_color (any name will do here):
The custom_color parameter appears in the UI but it does not look like a color yet. The little icon next to the new parameter will bring up the “Expression Editor”. We open it and type in the expression seen below to make it a color (red, for now):
Now let’s connect this newly created variable to our shading network. We create an xgen_seexpr node and connect it to the tip_color input of the xgen_root_tip_color_blend shader:
To make use of our custom_color parameter in the xgen_seexpr shader, we then type its name in the shader’s Customs attribute and type the expression $custom_color; into the Expression field:
The rendering shows that the custom_color parameter has been picked up correctly:
Finally, let’s replace this simple color with a more complex expression. XGen provides quite a long list of expression samples, which can be accessed as shown in the image below. From the “Color” subset we choose “NoiseC”.
A bunch of new parameters will be added below our custom_color parameter; those parameters define the noise function we just selected. Instead of modifying them directly in the XGen window, let’s open the expression editor for immediate visual feedback. After some value tuning we get a quite colorful map:
And here the rendering we get out of this:
We could alter the expression to change the look even further; we could create more custom parameters and combine them in the xgen_seexpr shader. The possibilities are endless, so we simply leave further experiments to you and your imagination!
Let’s pick up on the topic of .mi export and show how to export to .mi from 3ds Max.
Step by Step Procedure
You can simply export your scene to an .mi file in 3ds Max when using mental ray as Production renderer.
Go to “Rendering/Render Setup” or press F10.
Choose NVIDIA mental ray as Production renderer.
You find “Export to .mi File” in the “Processing” panel in the “Translator Options” rollout.
Note: All the controls (except the “Browse…” button) are initially greyed out until a filename is specified.
Press the “Browse…” button to specify the .mi file output.
Render, or press Shift+Q. Instead of actually rendering, the scene is exported to the .mi file.
You can use mental ray Standalone to render the exported .mi file.
Note: When rendering with Standalone you might see errors about missing SubstanceShader.dll and PointCloudShader.dll. These errors can usually be ignored and do not prevent proper rendering. You might want to delete the corresponding link lines in the .mi file to prevent these errors from showing up (see the link directives at the top of the .mi file).
“Export on Render”
Uncheck “Export on Render” if you want to render to the viewport instead of export to .mi file.
If you choose to export “Un-compressed” (the default), the three-dimensional vectors in objects are echoed in ASCII format. If you uncheck this option and export compressed, vectors in objects are written in binary format, resulting in a smaller .mi file and exact floating-point precision. See also the section “Export Binary vs. Ascii” in the post about exporting to .mi files in Maya.
“Incremental (Single File)”
If you select the “Incremental (Single File)” option, animations are exported as a single .mi file that contains a definition of the first frame and descriptors of the incremental changes from frame to frame. If you uncheck “Incremental (Single File)”, each frame is exported as a separate .mi file.
For those who use mental ray Standalone to render on a remote computer or in a render farm, the creation of a scene file in the proprietary .mi format is a necessary step. Most content creation tools are able to “echo” the full scene into a .mi representation on disk using mental ray’s built-in capability. But mental ray for Maya implements a much more flexible approach with extra functionality to ease render pipeline integration. In this post we would like to take a closer look at some of these features.
The mental ray export options can be opened in Maya’s “Export All” or “Export Selected” windows by sliding the separator handle to the left.
Or, click the little options box next to “Export All” or “Export Selected” menu items.
Last but not least, the file export can be triggered and controlled via scripting, like in the Script Editor. We show an example towards the end of this post.
Export Binary vs. Ascii
Similar to Maya’s ascii (.ma) vs. binary (.mb) scene formats, the .mi file can also come in two flavors: ascii or binary. The binary variant is the preferred choice when the precision of the scene data should be retained. The reason is, floating-point numbers are written in a binary form with no loss of bits and precision. Maya’s mental ray export enables binary by default.
That’s perfectly fine if you just want to feed that .mi file to Standalone for final rendering and not touch it. In a production studio environment this is rarely the case, though. Typically, all the assets (textures, materials, geometry, …) are collected from different sources or departments, and may undergo steps of post-processing and editing before being passed to the renderer. The ascii version of the .mi export is often better suited in such situations since it allows simple text editing and easier scripting. On the other hand, it may result in larger file sizes and can lead to precision problems due to lossy ascii-binary conversions. There are ways to solve that with mental ray for Maya. Let’s take a closer look in the next sections.
The mental ray for Maya export engine can be tuned to use a more exact ascii conversion of extreme values that reside close to the edge of the floating-point precision range. Especially single-precision floating-point data are prone to this problem. Guess what, this is (still) most widely used to represent 3D scene geometry and transformation operations. The standard ascii conversion, which is using a maximum of 6 significant digits for single-precision values (15 digits for double-precision), can be increased up to 9 (or 18, respectively), by creating the following dynamic attributes in your scene prior to export.
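The effect of the digit count is easy to reproduce: 6 significant decimal digits cannot round-trip every single-precision value, while 9 digits always can. A small demonstration using Python's struct module to emulate single-precision storage:

```python
import math
import struct

def to_single(x):
    """Round a Python float (double) to the nearest IEEE single-precision value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# A single-precision value with a long decimal expansion: 3.14159274...
v = to_single(math.pi)

six = to_single(float("%.6g" % v))   # standard 6-digit ascii echo: lossy
nine = to_single(float("%.9g" % v))  # increased to 9 digits: exact round-trip
```

This is exactly why binary export, or the increased digit count described above, matters for geometry and transformation data.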
Large scene dimensions and hierarchies, but also repetitive use of certain Maya modeling operations, can lead to such extreme values, see hints in the older Autodesk Knowledge Base Article.
Export File Path
A typical .mi scene contains various references to external files, like textures, finalgather maps, or output images. When rendering on a remote machine or in the render farm, the directory path to these files may be totally different than what was used on the exporting machine. Therefore, it is often desirable to not write an absolute file path into the .mi file, but just the relative one rooted at the project directory, or no path at all, and then use mental ray Standalone’s search path configuration on the remote computer to point to the right folders, or shared network drives. For example, using the “-T <path>” command line option, or the _MI_REG_TEXTURE registry value.
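The idea of stripping paths at export time and resolving them at render time can be sketched as follows — resolve_texture is a hypothetical helper that mimics a search-path lookup in the spirit of Standalone's “-T <path>” option:

```python
import os

def resolve_texture(filename, search_paths):
    """Hypothetical search-path lookup: ignore any directory baked into
    the exported reference and try each configured directory in order,
    returning the first existing match."""
    for directory in search_paths:
        candidate = os.path.join(directory, os.path.basename(filename))
        if os.path.isfile(candidate):
            return candidate
    return None
```

With this scheme, the .mi file can reference just “wood.tif”, and the render farm supplies the right directories instead of relying on paths from the exporting machine.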
The mental ray for Maya export options provide selective control for each type of file.
This section of the export options lets you create .mi files that only contain elements of a certain scene type, like all “textures” or all “materials”. It translates the full scene as usual but writes only the pieces selected in the “filter”. This way it is possible to split up a large scene into manageable .mi parts that fit together perfectly. Here is an example. It will export three .mi files:
The “textures.mi” file contains just the textures references, all with absolute file path. The “scene.mi” file carries most of the scene data, geometry, lights and materials, and gets compressed to save disk space. Finally, the last one “render.mi” stores the render options, render cameras, and the final render command. Now, the first two .mi files can be included into the last one “render.mi” to create a renderable “master” .mi file, see this .mi snippet:
The resulting .mi file can be given to mental ray Standalone for final rendering. Arrgh, I got a new version of one of the textures last minute! Well, no problem! Just update the “textures.mi” file, no need to touch any of the heavy data. And re-render. Voila!
While this works great in general, it does not help the renderer to load certain scene parts on demand only, like when it is actually “seen” during rendering. That’s where mental ray “assemblies” come into play.
This export option helps to create a valid mental ray “assembly” from a Maya scene. It writes a normal .mi file with a few special properties. A mental ray assembly is used to store a larger part of the scene that belongs together spatially, like a character or a whole building. Normally, it comprises the geometry details of the sub-scene and related local properties like materials.
Such a .mi file can be referenced from a “stand-in” bounding box object in the “master” scene, which defines the ultimate location and scale in global space.
In contrast to Maya file references, such an assembly is not loaded or shown in Maya immediately, thus not filling up memory or impacting your modeling operations. Instead, mental ray will load these parts only on demand during rendering, and even unload a piece in case room for other assemblies is needed. This mechanism enables the renderer to handle massive amounts of scene elements more efficiently and keep memory consumption under control. The resulting rendering is no different from a regular render.
If you want to build a scene that contains huge amounts of similar-looking objects that can share the actual assembly geometry, we suggest using instances of the same assembly rather than copies (which is the default “duplicate” operation in Maya). The placement and scale of the stand-in “proxy” element referencing the assembly determine the final location and size in the final rendering.
This technique of using mental ray assemblies is also available to shader writers and integrators through the programming API. In fact, it is utilized in the mental ray implementation of crowd simulation engines like Massive for Maya, and in Maya’s native procedural generator XGen, to handle incredible amounts of elements.
We hope these tips will help you in your daily work.
BTW, there are more possibilities to tweak .mi file export even further (“text boxes”). But they are rarely used, and would justify another blog post in case there is interest. Just let us know.
You love to stay on your Mac and use mental ray for Maya?
No problem, since Maya offers the same Qt user interface and experience as on Windows or Linux. Maya scene files and mental ray .mi files that have been generated on other platforms can simply be used on the Mac. Even the configuration files, “Maya.env” and “maya.rayrc”, work identically. All true, but…
Several things are different. For example, the unique use of Mac specific keyboard shortcuts is puzzling, like in the “open scene” window. Switching to the “OS native” file dialog is possible in the Maya preferences. Although it might even be more confusing to use Qt and non-Qt side by side.
What about mental ray? Well, it only depends on the underlying Unix basis “Darwin”, with its stable and standardized interfaces. That keeps it quite independent from frequent Mac OS X updates, which typically just touch the application levels of the operating system.
Normally. But the most recent “Mavericks” update of OS X, version 10.9, seems to have changed some multi-threading behavior of the kernel. mental ray is obviously affected, showing unexpected interruptions or even getting blocked completely during rendering. Our developers are currently diving into it to come up with a solution as soon as possible.
Here is another little annoyance:
The message log for mental ray preview rendering disappeared a few Mac OS X versions back, also due to a system change in how console output is handled. Batch rendering is not affected since it writes the messages to a file. There is a workaround for preview rendering, though, using mental ray for Maya’s built-in log facility, which works separately from the system log. It can be enabled on the Maya command line like this (also useful for a Shelf button):
Mayatomr -preview -log
It will create a file with the fixed name “mentalray.log”, typically residing in the directory of the last opened scene file. Double-clicking the file in the Mac Finder will open the Console app, giving a similar experience to the message display on other platforms. If you enable the Console preference “bring log window to front”, it will pop up automatically with each mental ray rendering of that scene using the -log option.