3DS Max Plug-In SDK

List of Image (G-Buffer) Channels

See Also: Class GBuffer, Class ImageFilter, Class Bitmap, Class MtlBase, Class Interface, Class INode, Structure RealPixel, Structure Color24.

Below is an overview of the image channels. The number of bits per pixel occupied by the channel is listed. The way the channel is accessed and cast to the appropriate data type is also shown.

Note: 3ds max users may store the G-Buffer data in offline storage in RLA or RPF files. For the definition of the RLA or RPF format you can look at the source code in \MAXSDK\SAMPLES\IO\RLA\RLA.CPP.

Also Note: The term 'fragment' is used in the descriptions of the channels below. A 'fragment' is the portion of a triangle of a mesh that's seen by a particular pixel being rendered. It's as if the pixel was a cookie-cutter and chopped a visible section of the triangle off for rendering -- that cut piece is called a fragment.

BMM_CHAN_Z

Z-buffer, stored as a float. The size is 32 bits per pixel. This is the channel that would be used, for instance, by a depth-of-field blur routine. The Z value is at the center of the fragment that is foremost in the sorted list of a-buffer fragments. The Z buffer is an array of float values giving the Z-coordinate in camera space of the point where a ray from the camera through the pixel center first intersects a surface. All Z values are negative, with more negative numbers representing points farther from the camera. The Z buffer is initialized with the value -1.0E30. Note that this is a change from 3ds max 1.x, where the Z buffer was initialized with 1.0E30; the negative value is more appropriate since more negative values represent points farther from the camera.

Note that for non-camera viewports (such as Front, User, Grid, Shape, etc.) the values may be both positive and negative. In such cases the developer may simply add a large constant to all the values to make them positive. The sign carries no particular meaning there; it is only the distance between values that matters.
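As a sketch of how a post effect might use this channel, the following scans a raw Z buffer for the nearest and farthest rendered depths, skipping pixels left at the -1.0E30 sentinel. The helper and its name are illustrative only, not part of the SDK:

```cpp
#include <cassert>
#include <cstddef>

// Sentinel value the renderer uses to initialize the Z buffer (see above).
const float Z_EMPTY = -1.0e30f;

// Find the nearest and farthest rendered depths, skipping empty pixels.
// Remember: Z values are negative, and more negative means farther away.
bool GetDepthRange(const float *zbuf, size_t count, float &zNear, float &zFar) {
    bool found = false;
    for (size_t i = 0; i < count; ++i) {
        if (zbuf[i] == Z_EMPTY) continue;     // pixel never hit a surface
        if (!found) { zNear = zFar = zbuf[i]; found = true; continue; }
        if (zbuf[i] > zNear) zNear = zbuf[i]; // closer to the camera
        if (zbuf[i] < zFar)  zFar  = zbuf[i]; // farther from the camera
    }
    return found;
}
```

A depth-of-field filter could then map each pixel's Z into the [zFar, zNear] range to compute a blur amount.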

As noted above, the Z values in the A buffer are in camera space. The projection for a point in camera space to a point in screen space is:

Point2 RenderInfo::MapCamToScreen(Point3 p) {
    return (projType==ProjPerspective) ?
        Point2(xc + kx*p.x/p.z, yc + ky*p.y/p.z) :
        Point2(xc + kx*p.x, yc + ky*p.y);
}

This function is supplied by the RenderInfo data structure which can be obtained from the bitmap output by the renderer using the function Bitmap::GetRenderInfo(). Note that this outputs a Point2. There is no projection for Z. As noted before, the Z buffer just uses the camera space Z.
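The perspective branch of the projection can be illustrated with a self-contained sketch. The Point2/Point3 stand-ins and the constants xc, yc, kx, ky below are hypothetical placeholder values for what RenderInfo would actually supply:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins for the SDK types, just to illustrate the math.
struct Point3 { float x, y, z; };
struct Point2 { float x, y; };

// Hypothetical projection constants; in the SDK these come from RenderInfo.
const float xc = 320.0f, yc = 240.0f;  // screen center, in pixels
const float kx = 500.0f, ky = -500.0f; // scale factors (ky typically negative)

// Perspective branch of RenderInfo::MapCamToScreen for a camera viewport.
// Camera-space Z is negative in front of the camera, so x/z flips sign.
Point2 MapCamToScreen(Point3 p) {
    return Point2{ xc + kx * p.x / p.z, yc + ky * p.y / p.z };
}
```

A point on the camera axis lands at the screen center (xc, yc), and points farther from the camera project closer to it, as expected from the division by p.z.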

float *zbuffer = (float *)GetChannel(BMM_CHAN_Z,type);

BMM_CHAN_MTL_ID

The ID assigned to the material via the Material Editor. The size is 8 bits per pixel. This channel is currently settable to a value between 0 and 8 by the 'Material Effects Channel' flyoff in the Material Editor. A plug-in material can generate up to 255 different material IDs (since this is an 8-bit quantity). This channel would be used to apply an effect (for example, a glow) to a specific material.

BYTE *bbuffer = (BYTE *)GetChannel(BMM_CHAN_MTL_ID,type);

BMM_CHAN_NODE_ID

This is the ID assigned to a node via the Object Properties / G-buffer ID spinner. The size is 16 bits per pixel. This channel would be used to perform an effect (for example a flare) on a specific node.

WORD *wbuffer = (WORD *)GetChannel(BMM_CHAN_NODE_ID,type);

BMM_CHAN_UV

UV coordinates, stored as a Point2. The size is 64 bits per pixel. If you have UV Coordinates on your object this channel provides access to them. This channel could be used by 3D paint programs or image processing routines to affect objects based on their UVs. The UV coordinate is stored as a Point2, using Point2::x for u and Point2::y for v. The UV coordinates are values prior to applying the offset, tiling, and rotation associated with specific texture maps.
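For illustration only, a map that applied simple tiling and offset to these raw UVs might derive its lookup coordinates as below. The helper and its parameters are hypothetical, not SDK calls; real texture maps also apply rotation, which is omitted here:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the SDK's Point2 (x holds u, y holds v).
struct Point2 { float x, y; };

// Apply per-map tiling and offset to a raw G-buffer UV coordinate.
// (Hypothetical sketch: actual maps also handle rotation and wrap modes.)
Point2 ApplyTileOffset(Point2 uv, float uTile, float vTile,
                       float uOffset, float vOffset) {
    Point2 out;
    out.x = uv.x * uTile + uOffset;
    out.y = uv.y * vTile + vOffset;
    return out;
}
```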

Point2 *pbuffer = (Point2 *)GetChannel(BMM_CHAN_UV,type);

BMM_CHAN_NORMAL

Normal vector in view space, compressed. The size is 32 bits per pixel. Object normals are available for image processing routines that take advantage of the normal vectors to do effects based on curvature (for example), as well as for 3D paint programs. The normal value is at the center of the fragment that is foremost in the sorted list of a-buffer fragments.

DWORD *dbuffer = (DWORD *)GetChannel(BMM_CHAN_NORMAL,type);

Note: The following function is available to decompress this value to a standard normalized Point3 value (DWORD and ULONG are both 32 bit quantities):

Point3 DeCompressNormal(ULONG n);

The decompressed vector has absolute error < 0.001 in each component.

BMM_CHAN_REALPIX

Non-clamped colors in "RealPixel" format. The size is 32 bits per pixel. See Structure RealPixel. These are 'real' colors that are available for physically-correct image processing routines to provide optical effects that duplicate the way the retina works.

RealPixel *rbuffer = (RealPixel *)GetChannel(BMM_CHAN_REALPIX,type);

BMM_CHAN_COVERAGE

Pixel coverage of the front surface. This provides an 8-bit value (0..255) that gives the coverage of the surface fragment from which the other G-buffer values are obtained. This channel is written to and read from RLA files, and shows up in the Virtual Frame Buffer. It may be used to greatly improve the antialiasing in 2.5D plug-ins such as Depth Of Field filters.

UBYTE *gbufCov = (UBYTE*)GetChannel(BMM_CHAN_COVERAGE,type);

BMM_CHAN_BG

The RGB color of what's behind the front object. The size is 24 bits per pixel.

If you have the image color at a pixel, and the Z coverage at the pixel, then when the Z coverage is < 255, this channel tells you the color of the object that was partially obscured by the foreground object. For example, this info will let you determine what the "real" color of the foreground object was before it was blended (antialiased) into the background.
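Assuming the blend is the usual final = fg*a + bg*(1-a) with a = coverage/255 (an assumption about the compositing model, not something the SDK states), the foreground fragment's original color can be recovered roughly like this:

```cpp
#include <cassert>

// Minimal stand-in for Structure Color24 (8 bits per component).
struct Color24 { unsigned char r, g, b; };

// Given the blended pixel color, the background color behind it, and the
// 0..255 coverage value, estimate the foreground fragment's original color.
// Assumes the simple blend: final = fg*a + bg*(1-a), with a = coverage/255.
Color24 UnBlend(Color24 finalC, Color24 bg, unsigned char coverage) {
    if (coverage == 0) return bg;  // none of the foreground is visible
    float a = coverage / 255.0f;
    auto solve = [a](unsigned char f, unsigned char b) -> unsigned char {
        float v = (f - b * (1.0f - a)) / a;  // invert the blend equation
        if (v < 0.0f) v = 0.0f;
        if (v > 255.0f) v = 255.0f;
        return (unsigned char)(v + 0.5f);
    };
    return Color24{ solve(finalC.r, bg.r),
                    solve(finalC.g, bg.g),
                    solve(finalC.b, bg.b) };
}
```

With full coverage the pixel color passes through unchanged; with partial coverage the background contribution is subtracted out before dividing by the coverage fraction.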

Color24 *bgbuffer = (Color24 *)GetChannel(BMM_CHAN_BG,type);

BMM_CHAN_NODE_RENDER_ID

System node number (valid during a render). The size is 16 bits per pixel.

The renderer will set the RenderID of all rendered nodes, and will set all non-rendered nodes to 0xffff. Video Post plug-ins can use the Interface::GetINodeFromRenderID() method to get a node pointer from an ID in this channel. Note that this channel is NOT saved with RLA files, because the IDs would not be meaningful unless the scene was the one rendered.

UWORD *renderID = (UWORD *)GetChannel(BMM_CHAN_NODE_RENDER_ID,type);

INode *node = ip->GetINodeFromRenderID(*renderID);

BMM_CHAN_COLOR

This option is available in release 3.0 and later only.

This is the color returned by the material shader for the fragment. It is a 24 bit RGB color (3 bytes per pixel).

Color24 *c1 = (Color24 *)GetChannel(BMM_CHAN_COLOR,type);

BMM_CHAN_TRANSP

This option is available in release 3.0 and later only.

This is the transparency returned by the material shader for the fragment. It is a 24 bit RGB color (3 bytes per pixel).

Color24 *transp = (Color24 *)GetChannel(BMM_CHAN_TRANSP,type);

BMM_CHAN_VELOC

This option is available in release 3.0 and later only.

This gives the velocity vector of the fragment relative to the screen, in screen coordinates. It is a Point2 (8 bytes per pixel).

Point2 *src = (Point2 *)GetChannel(BMM_CHAN_VELOC,type);

BMM_CHAN_WEIGHT

This option is available in release 3.0 and later only.

This is the sub-pixel weight of a fragment. It is a 24 bit RGB color (3 bytes per pixel). It is the fraction of the total pixel color contributed by the fragment. The sum of (color * weight) for all the fragments should give the final pixel color. The weight (which is an RGB triple) for a given fragment takes into account the coverage of the fragment and the transparency of any fragments which are in front of the given fragment.

If c1, c2, c3, ... cN are the fragment colors, and w1, w2, w3, ... wN are the fragment weights, then

pixel color = c1*w1 + c2*w2 + c3*w3 + ... + cN*wN;

The purpose of the sub-pixel weight is to allow post processes to weight the contribution of a post-effect from a particular fragment. It may also be necessary to multiply by the fragment's own transparency, which is not included in its weight. Note that for fragments that have no transparent fragments in front of them, the weight will be equal to the coverage.
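The weighted recombination above can be sketched as follows. Interpreting each weight byte as a fraction of 255 is an assumption here; the SDK only specifies that the weight is stored as a Color24 triple:

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for Structure Color24.
struct Color24 { unsigned char r, g, b; };

// Recombine fragment colors using their RGB sub-pixel weights.
// Each weight byte is treated as a fraction of 255 (an assumption), so the
// per-channel sum is: pixel = sum(color_i * weight_i / 255).
Color24 CombineFragments(const Color24 *colors, const Color24 *weights,
                         size_t n) {
    float r = 0.0f, g = 0.0f, b = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        r += colors[i].r * (weights[i].r / 255.0f);
        g += colors[i].g * (weights[i].g / 255.0f);
        b += colors[i].b * (weights[i].b / 255.0f);
    }
    auto clamp = [](float v) -> unsigned char {
        return (unsigned char)(v > 255.0f ? 255.0f
                                          : (v < 0.0f ? 0.0f : v + 0.5f));
    };
    return Color24{ clamp(r), clamp(g), clamp(b) };
}
```

Two fragments whose weights sum to roughly full coverage reproduce the blended pixel color, matching the formula above.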

Color24 *w1 = (Color24 *)GetChannel(BMM_CHAN_WEIGHT,type);

BMM_CHAN_NONE

None of the channels above.

BMM_CHAN_MASK

This option is available in release 4.0 and later only.

The 4x4 (16 bits = 1 word) pixel coverage mask.