
Normal mapping



Normal mapping used to re-detail simplified meshes.

In 3D computer graphics, normal mapping is an application of the technique known as bump mapping. Normal mapping is sometimes referred to as "Dot3 bump mapping". While bump mapping perturbs the existing normal (the way the surface is facing) of a model, normal mapping replaces the normal entirely. Like bump mapping, it is used to add detail to shading without using more polygons. But where a bump map is usually calculated from a single-channel (interpreted as grayscale) image, the source for the normals in normal mapping is usually a multichannel image (that is, channels for "red", "green" and "blue", as opposed to just a single color) derived from a more detailed version of the object. The values of the channels (colors) usually represent the X, Y and Z coordinates of the normal at the point corresponding to that texel.
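As an illustration of this encoding, the following sketch (an assumption for clarity, not code from any particular tool or engine; the type and function names are hypothetical) packs a unit normal into an 8-bit-per-channel RGB texel by remapping each component from [-1, 1] to [0, 255]:

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { unsigned char r, g, b; } Texel;

/* Remap one normal component from [-1, 1] to the [0, 255] byte range. */
static unsigned char pack_component(float c)
{
    return (unsigned char)((c * 0.5f + 0.5f) * 255.0f + 0.5f);
}

/* Pack a surface normal into a single RGB texel (normalized first). */
static Texel pack_normal(Vec3 n)
{
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    Texel t = { pack_component(n.x / len),
                pack_component(n.y / len),
                pack_component(n.z / len) };
    return t;
}

int main(void)
{
    Vec3 flat = { 0.0f, 0.0f, 1.0f };          /* unperturbed, facing straight out */
    Texel t = pack_normal(flat);
    printf("R=%u G=%u B=%u\n", t.r, t.g, t.b); /* prints R=128 G=128 B=255 */
    return 0;
}

Under this encoding an unperturbed normal (0, 0, 1) packs to (128, 128, 255), which is why tangent-space normal maps appear predominantly light blue.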

Normal mapping is usually found in two varieties: object-space and tangent-space normal mapping. They differ in the coordinate system in which the normals are measured and stored.
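A rough sketch of the practical difference (the vector type and helper below are hypothetical, not taken from the article): an object-space normal can be used for lighting directly, whereas a tangent-space normal must first be rotated into object space by the per-vertex tangent (T), bitangent (B) and geometric normal (N) basis.

typedef struct { float x, y, z; } Vec3;

/* Rotate a tangent-space normal n into object space using the per-vertex
 * tangent (t), bitangent (b) and geometric normal (nrm) basis vectors,
 * i.e. multiply by the TBN matrix whose columns are t, b and nrm. */
Vec3 tangent_to_object(Vec3 n, Vec3 t, Vec3 b, Vec3 nrm)
{
    Vec3 out = {
        n.x * t.x + n.y * b.x + n.z * nrm.x,
        n.x * t.y + n.y * b.y + n.z * nrm.y,
        n.x * t.z + n.y * b.z + n.z * nrm.z
    };
    return out;
}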

One of the most interesting uses of this technique is to greatly enhance the appearance of a low-poly model by exploiting a normal map generated from a high-resolution model. While the idea of taking geometric detail from a high-resolution model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996, where the approach was used for creating displacement maps over NURBS, its application to more common triangle meshes came later. In 1998 two papers were presented with the idea of transferring details as normal maps from high- to low-poly meshes: "Appearance Preserving Simplification" by Cohen et al., SIGGRAPH 1998, and "A general method for recovering attribute values on simplified meshes" by Cignoni et al., IEEE Visualization '98. The former presented a constrained simplification algorithm that tracks, during the simplification process, how the lost details should be mapped over the simplified mesh. The latter presented a simpler approach that decouples the high- and low-polygon meshes and allows the lost details to be recreated in a way that does not depend on how the low-poly model was created. This latter approach (with some minor variations) is still the one used by most currently available tools.

How it works

To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the result is the intensity of the light on that surface. Many other lighting models also involve some sort of dot product with the normal vector. Imagine a polygonal model of a sphere: the polygons can only approximate the shape of the surface. By using an RGB bitmap textured across the model, more detailed normal vector information can be encoded. Each color channel in the bitmap (red, green and blue) corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.
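The dot-product computation described above can be sketched as follows (the vector type and helpers are assumptions for illustration, not a definitive implementation):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize3(Vec3 v)
{
    float len = sqrtf(dot3(v, v));
    Vec3 out = { v.x / len, v.y / len, v.z / len };
    return out;
}

/* Diffuse intensity at a shading point with normal n, lit from light_pos:
 * the dot product of the unit normal and the unit direction toward the
 * light, clamped to zero so surfaces facing away receive no light. */
float lambert_intensity(Vec3 n, Vec3 point, Vec3 light_pos)
{
    Vec3 to_light = { light_pos.x - point.x,
                      light_pos.y - point.y,
                      light_pos.z - point.z };
    float d = dot3(normalize3(n), normalize3(to_light));
    return d > 0.0f ? d : 0.0f;
}

With a normal map, n is simply taken from the texel covering the shading point (transformed into the appropriate space first, in the tangent-space case) instead of being interpolated from the vertex normals alone.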

In the most common implementation of normal maps, used by Valve's Source engine and implemented in hardware on Nvidia cards, the red channel represents the relief of the material when lit from the right, the green channel the relief when lit from below, and the blue channel the relief when lit from the front (in practice, near full intensity everywhere except on steep slopes); or, to put it another way, the X, Y and Z coordinates of the normals are stored in the R, G and B values of the normal map. If a material is classified as reflective, the albedo is usually encoded in the alpha channel if one exists.
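A minimal sketch of reading such a map back at shading time, assuming the 8-bit RGB layout described above (the helper names are hypothetical): each byte is remapped from [0, 255] back to [-1, 1], and the result is renormalized to undo quantization error from 8-bit storage.

#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { unsigned char r, g, b; } Texel;

/* Remap one byte from [0, 255] back to the [-1, 1] range. */
static float unpack_component(unsigned char c)
{
    return (c / 255.0f) * 2.0f - 1.0f;
}

/* Reconstruct the normal stored in an RGB texel (X in red, Y in green,
 * Z in blue) and renormalize to undo 8-bit quantization error. */
Vec3 unpack_normal(Texel t)
{
    Vec3 n = { unpack_component(t.r),
               unpack_component(t.g),
               unpack_component(t.b) };
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}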

Normal mapping in videogames

Interactive normal map rendering was originally only possible on PixelFlow, a parallel graphics machine built at the University of North Carolina at Chapel Hill. It later became possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations, or on low-end PC hardware with some tricks using paletted textures. However, with the advent of shaders in home PCs and gaming consoles, normal mapping became widely used in proprietary commercial videogames starting in 2004, followed by open source games in later years. Normal mapping's popularity for real-time rendering is due to its favorable ratio of visual quality to processing cost compared with other methods of producing similar effects. Much of this efficiency comes from distance-indexed detail scaling, a technique which selectively decreases the detail of the normal map of a given texture (cf. mipmapping), so that more distant surfaces require less complex lighting simulation.

Microsoft's Xbox was the first home game console to support the effect on the GPU (other sixth-generation consoles use a CPU-based implementation). Developers on the Xbox 360 and the PlayStation 3 rely heavily on normal mapping and are beginning to implement parallax mapping. The Wii also has the ability to perform normal mapping, as is evident in the game Zack and Wiki.
