Class ShadeContext

3DS Max Plug-In SDK


See Also: Working with Materials, Class LightDesc, Class RenderGlobalContext, Class Color, Class ShadeOutput, Class AColor, Class UVGen, Class Box3, Class IPoint2, Class Point2, Class IRenderElement, Class Point3, Class Class_ID.

class ShadeContext : public InterfaceServer

Description:

This class is passed to materials and texture maps. It contains all the information necessary for shading the surface at a pixel.

Normally, the ShadeContext is provided by the 3ds max renderer. However, developers who need to create their own ShadeContext for passing to the texture and material evaluation functions can do so by deriving a class from ShadeContext and providing implementations of the virtual methods. Sample code demonstrating how this is done is available in \MAXSDK\SAMPLES\OBJECTS\LIGHT.CPP (see the code for class SCLight : public ShadeContext). The default implementations of these methods are shown for developers who need to create their own ShadeContext.

Note that raytracing (and all shading calculations in the default renderer) takes place in camera space.

For additional information on the methods DP(), Curve(), DUVW() and DPdUVW() see Additional Notes.

All methods are implemented by the system unless noted otherwise.

Data Members:

public:

ULONG mode;

One of the following values:

SCMODE_NORMAL

In normal mode, the material should do the entire illumination including transparency, refraction, etc.

SCMODE_SHADOW

In shadow mode, you are just trying to find out what color the shadow falling on an object is. In this case, all you care about is the transmitted color.

BOOL doMaps

Indicates if texture maps should be applied.

BOOL filterMaps;

Indicates if textures should be filtered.

BOOL shadow

Indicates if shadows should be applied.

BOOL backFace;

Indicates if we are on the back side of a 2-sided face.

int mtlNum

The material number of the face being shaded. This is the sub-material number for multi-materials.

Color ambientLight

This is the color of the ambient light.

int nLights;

This is the number of lights being used in a render: the number of active lights in the scene or, if there are none, 2 for the default lights. For example, the Standard material uses this in a loop like:

LightDesc *l;
for (int i = 0; i < sc.nLights; i++) {
    l = sc.Light(i);
    // ...etc.
}

int rayLevel;

This data member is available in release 2.0 and later only.

This is used to limit the number of reflections for raytracing. For instance, if you're rendering a hall of mirrors and a ray is reflecting back and forth, you don't want the raytracing to go on forever. Every time Texmap::EvalColor() is called again on a ray, you create a new ShadeContext and increment rayLevel by one. This lets you test the value to see if it has reached your limit of how deep to go (if it reaches the maximum level, you can return black, for example).

Note that it is conceivable that more than one raytrace material can be in effect at a time (from different developers). In such a case, where one surface might have one raytracer and another surface a different one, and a ray is bouncing back and forth between them, each needs to be aware of the other. This is why this value is here -- the two texmaps each modify and check it.
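As an illustration of the pattern (hypothetical names, not SDK code -- MAX_RAY_LEVEL, Color3 and TraceReflection are stand-ins), the depth cutoff might look like:

```cpp
#include <cassert>

// Minimal sketch of the rayLevel cutoff described above.
const int MAX_RAY_LEVEL = 5;

struct Color3 { float r, g, b; };

Color3 TraceReflection(int rayLevel) {
    if (rayLevel >= MAX_RAY_LEVEL)
        return Color3{0.0f, 0.0f, 0.0f};   // depth limit reached: return black
    // ... spawn the reflected ray; the new ShadeContext would carry
    // rayLevel + 1, so each bounce moves closer to the cutoff:
    return TraceReflection(rayLevel + 1);
}
```

In a real texmap the recursive call would shade the hit surface; the point here is only that each bounce carries an incremented rayLevel.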

int xshadeID;

This data member is available in release 2.0 and later only.

This is currently not used.

RenderGlobalContext *globContext;

This data member is available in release 2.0 and later only.

Points to an instance of RenderGlobalContext. This class describes the properties of the global rendering environment. This provides information such as the renderer in use, the project type for rendering, the output device width and height, several matrices for transforming between camera and world coordinates, the environment map, the atmospheric effects, the current time, field rendering information, and motion blur information.

LightDesc *atmosSkipLight;

The light description used to prevent self-shadowing by volumetric lights.

ShadeOutput out;

This is where the material should leave its results.

The following is a discussion of blending the ShadeContext.out.c and ShadeContext.out.t together to get the final color:

The (c,t) returned by shaders is interpreted as follows: t.r is the (premultiplied) alpha for the r-channel, etc.

So if you want to composite (c,t) over a background b,

color = b*t + c (where the multiplication of b and t multiplies the individual components).

When you want to convert a (c,t) pair to a simple R,G,B,Alpha, average the components of t to get Alpha, and use the r,g,b components of c directly.
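A minimal sketch of this blending rule, using a stand-in Col type rather than the SDK's Color class:

```cpp
#include <cassert>
#include <cmath>

struct Col { float r, g, b; };

// Composite a shader result (c, t) over a background b, per the rule
// above: color = b*t + c, with the multiplication done componentwise.
Col CompositeOver(const Col& c, const Col& t, const Col& bg) {
    return Col{ bg.r * t.r + c.r,
                bg.g * t.g + c.g,
                bg.b * t.b + c.b };
}

// Collapse (c, t) to a simple Alpha: average the components of t.
float AlphaFrom(const Col& t) {
    return (t.r + t.g + t.b) / 3.0f;
}
```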

Methods:

Prototype:

ShadeContext()

Remarks:

Constructor. The data members are initialized as follows:

mode = SCMODE_NORMAL; nLights = 0; shadow = TRUE; rayLevel = 0; globContext = NULL; atmosSkipLight = NULL;

Prototype:

void ResetOutput(int n = -1)

Remarks:

Sets the surface color output and surface transparency output to Black.

Parameters:

int n = -1

By supplying a negative value this method will clear elements but leave the number of elements unchanged.

Prototype:

Class_ID ClassID();

Remarks:

This method is available in release 2.0 and later only.

Returns the Class_ID of this ShadeContext. This is used to distinguish different ShadeContexts.

Default Implementation:

{ return Class_ID(0,0); }

Prototype:

virtual BOOL InMtlEditor()=0;

Remarks:

This method is available in release 2.0 and later only.

Returns TRUE if this rendering is for the material editor sample sphere (geometry); otherwise FALSE.

Prototype:

virtual int Antialias();

Remarks:

Returns the state of the antialiasing switch in the renderer dialog - TRUE if on; FALSE if off.

Default Implementation:

{return 0;}

Prototype:

virtual int ProjType();

Remarks:

This method returns the projection type.

Return Value:

A value of 0 indicates perspective projection; a value of 1 indicates parallel projection.

Default Implementation:

{return 0;}

Prototype:

virtual LightDesc* Light(int n)=0;

Remarks:

This method returns the 'n-th' light. Use the data member nLights to get the total number of lights.

Parameters:

int n

Specifies the light to return.

Prototype:

virtual TimeValue CurTime()=0;

Remarks:

Returns the current time value (the position of the frame slider).

Return Value:

The current time.

Prototype:

virtual int NodeID();

Remarks:

Returns the node ID for the item being rendered or -1 if not set. This ID is assigned when the scene is being rendered - each node is simply given an ID - 0, 1, 2, 3, etc.

Default Implementation:

{return -1;}

Prototype:

virtual INode *Node();

Remarks:

Returns the INode pointer of the node being rendered. This pointer allows a developer to access the properties of the node. See Class INode.

Default Implementation:

{ return NULL; }

Prototype:

virtual Object *GetEvalObject();

Remarks:

This method is available in release 2.0 and later only.

Returns the evaluated object for this node. When rendering, one usually calls GetRenderMesh() to get the mesh to render. However, at certain times you might want to get the object itself from the node. For example, you could call ClassID() on the object to determine its type, and then operate on the object procedurally (for instance, recognizing it as a true sphere, cylinder or torus). Note that this method will return NULL if the object is motion blurred.

For example, here is how you can check if the object is a particle system:

Object *ob = sc.GetEvalObject();
if (ob && ob->IsParticleSystem()) {
    // . . .
}

Default Implementation:

{ return NULL; }

Prototype:

virtual Point3 BarycentricCoords()

Remarks:

The coordinates relative to triangular face. The barycentric coordinates of a point p relative to a triangle describe that point as a weighted sum of the vertices of the triangle. If the barycentric coordinates are b0, b1, and b2, then:

p = b0*p0 + b1*p1 + b2*p2;

where p0, p1, and p2 are the vertices of the triangle. The Point3 returned by this method has the barycentric coordinates stored in its three coordinates. These coordinates are relative to the current triangular face being rendered. These barycentric coordinates can be used to interpolate any quantity whose value is known at the vertices of the triangle. For example, if a radiosity shader had available the illumination values at each of the three vertices, it could determine the illumination at the current point using the barycentric coordinates.

Default Implementation:

{ return Point3(0,0,0);}
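Interpolation with these weights can be sketched as follows (stand-in types, not SDK code):

```cpp
#include <cassert>
#include <cmath>

// The three barycentric weights as returned by BarycentricCoords().
struct Bary { float b0, b1, b2; };

// Interpolate a per-vertex quantity: val = b0*v0 + b1*v1 + b2*v2.
float InterpBary(const Bary& b, float v0, float v1, float v2) {
    return b.b0 * v0 + b.b1 * v1 + b.b2 * v2;
}
```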

Prototype:

virtual int FaceNumber()=0;

Remarks:

Returns the index of the face being rendered. For the scan-line renderer, which renders only triangle meshes, this is the index of the face in the Mesh data structure. This is meant for use in plug-in utilities such as a radiosity renderer, which stores a table of data, indexed by face number, in the node's AppData for use in a companion material.

Prototype:

virtual Point3 Normal()=0;

Remarks:

Returns the interpolated normal (in camera space). This is the value of the face normal facing towards the camera. This is affected by SetNormal() below.

Prototype:

virtual void SetNormal(Point3 p)

Remarks:

This method sets the value of the face normal facing towards the camera. It may be used to temporarily perturb the normal. The Standard material, for example, uses this to implement bump mapping: it changes the normal and then calls other lighting functions, which then see the changed normal value. When it is done it puts back the previous value.

Parameters:

Point3 p

The normal to set.
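The save/perturb/restore pattern described above can be sketched like this (MiniSC is a stand-in modeling only the normal, not the real ShadeContext):

```cpp
#include <cassert>

struct P3 { float x, y, z; };

// Stand-in shade context holding just the current normal.
struct MiniSC {
    P3 n{0.0f, 0.0f, 1.0f};
    P3 Normal() const { return n; }
    void SetNormal(P3 p) { n = p; }
};

void ShadeWithBump(MiniSC& sc, const P3& perturbed) {
    P3 saved = sc.Normal();      // remember the original normal
    sc.SetNormal(perturbed);     // perturb it for the bump effect
    // ... lighting functions called here see the perturbed normal
    sc.SetNormal(saved);         // put the previous value back
}
```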

Prototype:

virtual Point3 OrigNormal();

Remarks:

This method is available in release 2.0 and later only.

Returns the original surface normal (not affected by SetNormal() above.)

Default Implementation:

{ return Normal(); }

Prototype:

virtual float Curve();

Remarks:

This is an estimate of how fast the normal is varying. For example, if you are doing environment mapping, this value may be used to determine how big an area of the environment to sample. If the normal is changing very fast, a large area must be sampled or you'll get aliasing. This is an estimate of dN/dsx, dN/dsy put into a single value.

Prototype:

virtual Point3 Gnormal()=0

Remarks:

This returns the geometric normal. For triangular mesh objects this means the face normal. Normals are unit vectors.

Prototype:

virtual Point3 ReflectVector()=0;

Remarks:

This takes the current view vector and the current normal vector and calculates a vector that would result from reflecting the view vector in the surface. This returns the reflection vector.
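The standard formula for this reflection, assuming unit vectors, is r = v - 2(v.n)n. A self-contained sketch (not the SDK implementation, whose sign conventions may differ):

```cpp
#include <cassert>

struct V3 { float x, y, z; };

float Dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflect the (unit) view vector v in the surface with (unit) normal n.
V3 Reflect(const V3& v, const V3& n) {
    float d = 2.0f * Dot(v, n);
    return V3{ v.x - d*n.x, v.y - d*n.y, v.z - d*n.z };
}
```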

Prototype:

virtual Point3 RefractVector(float ior)=0;

Remarks:

This is similar to the method above; however, it calculates the view vector refracted at the surface, returning the refraction vector.

Parameters:

float ior

The relative index of refraction between the air and the material.
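A standard refraction computation consistent with this description (a Snell's-law sketch, not the SDK implementation; total internal reflection returns a zero vector here):

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

float Dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Refract the (unit) view vector v through the surface with (unit)
// normal n, for relative index of refraction eta.
V3 Refract(const V3& v, const V3& n, float eta) {
    float c1 = -Dot(v, n);                        // cos(theta_i)
    float k = 1.0f - eta*eta*(1.0f - c1*c1);
    if (k < 0.0f) return V3{0.0f, 0.0f, 0.0f};    // total internal reflection
    float f = eta*c1 - std::sqrt(k);
    return V3{ eta*v.x + f*n.x, eta*v.y + f*n.y, eta*v.z + f*n.z };
}
```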

Prototype:

virtual void SetIOR(float ior);

Remarks:

This method is available in release 2.0 and later only.

Set index of refraction.

Parameters:

float ior

The index of refraction to set. This value can be any positive (non-zero) value.

Default Implementation:

{}

Prototype:

virtual float GetIOR();

Remarks:

This method is available in release 2.0 and later only.

Returns the index of refraction.

Default Implementation:

{ return 1.0f; }

Prototype:

virtual Point3 CamPos()=0;

Remarks:

Returns the camera position in camera space. For the 3ds max renderer this will always be 0,0,0.

Prototype:

virtual Point3 V()=0

Remarks:

This method returns the unit view vector, from the camera towards P, in camera space.

Prototype:

virtual void SetView(Point3 p)=0;

Remarks:

This method is available in release 2.0 and later only.

Sets the view vector as returned by V().

Parameters:

Point3 p

The view vector to set.

Prototype:

virtual Point3 OrigView();

Remarks:

This method is available in release 2.0 and later only.

This is the original view vector that was not affected by ShadeContext::SetView().

Default Implementation:

{ return V(); }

Prototype:

virtual Point3 P()=0

Remarks:

Returns the point to be shaded in camera space.

Prototype:

virtual Point3 DP()=0

Remarks:

This returns the derivative of P, relative to the pixel. This gives the renderer or shader information about how fast the position is changing relative to the screen.

Prototype:

virtual void DP(Point3& dpdx, Point3& dpdy);

Remarks:

This returns the derivative of P, relative to the pixel - same as above. This method just breaks it down into x and y.

Prototype:

virtual Point3 PObj()=0

Remarks:

Returns the point to be shaded in object coordinates.

Prototype:

virtual Point3 DPObj()=0

Remarks:

Returns the derivative of PObj(), relative to the pixel.

Prototype:

virtual Box3 ObjectBox()=0

Remarks:

Returns the object extents bounding box in object coordinates.

Prototype:

virtual Point3 PObjRelBox()=0

Remarks:

Returns the point to be shaded relative to the object box where each component is in the range of -1 to +1.
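The exact formula is not spelled out here, but the standard normalize-to-box mapping consistent with this description is (stand-in types, not SDK code):

```cpp
#include <cassert>

struct V3 { float x, y, z; };
struct Box { V3 pmin, pmax; };

// Linear map: lo -> -1, hi -> +1.
float RelCoord(float p, float lo, float hi) {
    return 2.0f * (p - lo) / (hi - lo) - 1.0f;
}

// Map an object-space point into the object's bounding box so each
// component lies in the range [-1, +1].
V3 RelBox(const V3& p, const Box& b) {
    return V3{ RelCoord(p.x, b.pmin.x, b.pmax.x),
               RelCoord(p.y, b.pmin.y, b.pmax.y),
               RelCoord(p.z, b.pmin.z, b.pmax.z) };
}
```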

Prototype:

virtual Point3 DPObjRelBox()=0

Remarks:

Returns the derivative of PObjRelBox(). This is the derivative of the point relative to the object box where each component is in the range of -1 to +1.

Prototype:

virtual void ScreenUV(Point2& uv, Point2 &duv)=0;

Remarks:

Retrieves the point relative to the screen where the lower left corner is 0,0 and the upper right corner is 1,1.

Parameters:

Point2& uv

The point.

Point2 &duv

The derivative of the point.

Prototype:

virtual IPoint2 ScreenCoord()=0;

Remarks:

Returns the integer screen coordinate (from the upper left).

Prototype:

virtual Point2 SurfacePtScreen();

Remarks:

This method is available in release 3.0 and later only.

Returns the surface point at the center of the fragment in floating point screen coordinates. See the documentation for Sampler::DoSample() for an explanation of the use of this method. See Class Sampler.

Default Implementation:

{ return Point2(0.0,0.0); }

Prototype:

virtual Point3 UVW(int channel=0)=0;

Remarks:

Returns the UVW coordinates for the point.

Parameters:

int channel=0;

Specifies the channel for the values. One of the following:

0: Vertex Color Channel.

1 through 99: Mapping Channels.

Prototype:

virtual Point3 DUVW(int channel=0)=0

Remarks:

This method returns the UVW derivatives for the point. This is used for filtering texture maps and antialiasing procedurals that use UVW. Note that standard 3ds max textures use the UVGen class, which calls this method itself; see the method UVGen::GetBumpDP() for more details on using UVGen. If you are not using UVGen, you can use this method together with UVW(): UVW() gets the UVW coordinates of the point, and DUVW() gets the change in the UVWs across the pixel, i.e. a maximum change for each of U, V and W. This tells you how much of the area of the map to sample. So when you call the Bitmap method GetFiltered(float u, float v, float du, float dv, BMM_Color_64 *ptr), this determines how big the sample should be, letting you filter or average over this area to keep the map from aliasing.

Parameters:

int channel=0;

Specifies the channel for the values. One of the following:

0: Vertex Color Channel.

1 through 99: Mapping Channels.
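As an illustration of the idea (a toy procedural checker and box filter, not SDK code; Bitmap::GetFiltered() does the equivalent internally for bitmaps), the derivatives define the footprint over which to average:

```cpp
#include <cassert>
#include <cmath>

// A tiny procedural "texture": an 8x8 checker returning 1.0 or 0.0.
float Checker(float u, float v) {
    int iu = (int)std::floor(u * 8.0f), iv = (int)std::floor(v * 8.0f);
    return ((iu + iv) & 1) ? 1.0f : 0.0f;
}

// Box-filter the checker over the du x dv footprint reported by DUVW(),
// using an n x n grid of samples centered on (u, v).
float FilteredChecker(float u, float v, float du, float dv, int n = 4) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float su = u - du * 0.5f + du * (i + 0.5f) / n;
            float sv = v - dv * 0.5f + dv * (j + 0.5f) / n;
            sum += Checker(su, sv);
        }
    return sum / (n * n);
}
```

A point sample returns hard 0 or 1 and aliases; the filtered sample over a wide footprint converges to the checker's average of 0.5.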

Prototype:

virtual void DPdUVW(Point3 dP[3], int channel=0)=0

Remarks:

This returns the bump basis vectors for UVW in camera space. Note that if you want to retrieve these bump basis vectors that are altered by the UVGen instance use the method UVGen::GetBumpDP(). Also see the Advanced Topics section Working with Materials and Textures for more details on bump mapping.

Parameters:

Point3 dP[3]

The bump basis vectors. dP[0] is a vector corresponding to the U direction, dP[1] corresponds to V, and dP[2] corresponds to W.

int channel=0;

Specifies the channel for the values. One of the following:

0: Vertex Color Channel.

1 through 99: Mapping Channels.

Prototype:

virtual int BumpBasisVectors(Point3 dP[2], int axis, int channel=0);

Remarks:

This method is available in release 4.0 and later only.

This method should replace DPdUVW() over time, but that method is left in place so as not to break 3rd party plug-ins. If this method returns 1, it is assumed to be implemented and will be used instead of DPdUVW().

Parameters:

Point3 dP[2]

The bump basis vectors. dP[0] and dP[1] are the vectors corresponding to the two axes of the pair specified by the axis parameter.

int axis

Specifies the 2D axis pair: AXIS_UV, AXIS_VW, or AXIS_WU.

int channel=0;

Specifies the channel for the values. One of the following:

0: Vertex Color Channel.

1 through 99: Mapping Channels.

Default Implementation:

{ return 0; }

Prototype:

virtual Point3 UVWNormal(int channel=0);

Remarks:

This method is available in release 2.0 and later only.

This method returns a vector in UVW space normal to the face in UVW space. This can be computed as CrossProd(U[1]-U[0], U[2]-U[1]), where U[i] is the texture coordinate at the i-th vertex of the current face. This may be used for hiding textures on the back side of objects.

Parameters:

int channel=0;

Specifies the channel for the values. One of the following:

0: Vertex Color Channel.

1 through 99: Mapping Channels.

Default Implementation:

{ return Point3(0,0,1); }
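The stated cross-product construction can be sketched as (stand-in types, not SDK code):

```cpp
#include <cassert>

struct V3 { float x, y, z; };

V3 Sub(const V3& a, const V3& b) { return V3{a.x - b.x, a.y - b.y, a.z - b.z}; }

V3 Cross(const V3& a, const V3& b) {
    return V3{ a.y*b.z - a.z*b.y,
               a.z*b.x - a.x*b.z,
               a.x*b.y - a.y*b.x };
}

// Normal to the face in UVW space: CrossProd(U[1]-U[0], U[2]-U[1]).
// A negative W component indicates the texture is seen from its back side.
V3 UVWFaceNormal(const V3 U[3]) {
    return Cross(Sub(U[1], U[0]), Sub(U[2], U[1]));
}
```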

Prototype:

virtual float RayConeAngle();

Remarks:

This method is available in release 2.0 and later only.

Returns the angle of a ray cone hitting this point. It gets increased/decreased by curvature on reflection.

Visualize a small pyramid, with the top at the eye point, and its sides running through each corner of the pixel to be rendered, then onto the scene. Then visualize a small cone fitting inside this pyramid. This method returns the angle of that cone. When rendering, if the ray cone goes out and hits a flat surface, the angle of reflection will always be constant for each pixel. However, if the ray cone hits a curved surface, the angle will change between pixels. This change in value gives some indication of how fast the sample size is getting bigger.

Default Implementation:

{ return 0.0f; }

Prototype:

virtual float RayDiam();

Remarks:

This method is available in release 2.0 and later only.

Returns the diameter of the ray cone at the pixel point (the point where it intersects the surface being shaded). This is a dimension in world units. As a ray is propagated it is updated for each new surface that is encountered.

Default Implementation:

{ return Length(DP()); }

Prototype:

virtual AColor EvalEnvironMap(Texmap *map, Point3 view)

Remarks:

This is used by the Standard material to do the reflection maps and the refraction maps. Given the map and a direction from which you want to view it, this method changes the view vector to the specified vector and evaluates the map.

Parameters:

Texmap *map

The map to evaluate.

Point3 view

The view direction.

Return Value:

The color of the map, in r, g, b, alpha.

Prototype:

virtual void GetBGColor(class Color &bgCol, class Color &transp, int fogBG=TRUE)=0

Remarks:

Retrieves the background color and the background transparency.

Parameters:

class Color &bgCol

The returned background color.

class Color &transp

The returned transparency.

int fogBG

Specifies whether the current atmospheric shaders should be applied to the background color. If TRUE the shaders are applied; if FALSE they are not.

Prototype:

virtual float CamNearRange()

Remarks:

Returns the camera near range set by the user in the camera's user interface.

Prototype:

virtual float CamFarRange()

Remarks:

Returns the camera far range set by the user in the camera's user interface.

Prototype:

virtual Point3 PointTo(const Point3& p, RefFrame ito)=0;

Remarks:

Transforms the specified point from internal camera space to the specified space.

Parameters:

const Point3& p

The point to transform.

RefFrame ito

The space to transform the point to. One of the following values:

REF_CAMERA

REF_WORLD

REF_OBJECT

Return Value:

The transformed point, in the specified space.

Prototype:

virtual Point3 PointFrom(const Point3& p, RefFrame ifrom)=0;

Remarks:

Transforms the specified point from the specified coordinate system to internal camera space.

Parameters:

const Point3& p

The point to transform.

RefFrame ifrom

The space to transform the point from. One of the following values:

REF_CAMERA

REF_WORLD

REF_OBJECT

Return Value:

The transformed point in camera space.

Prototype:

virtual Point3 VectorTo(const Point3& p, RefFrame ito)=0;

Remarks:

Transform the vector from internal camera space to the specified space.

Parameters:

const Point3& p

The vector to transform.

RefFrame ito

The space to transform the vector to. One of the following values:

REF_CAMERA

REF_WORLD

REF_OBJECT

Prototype:

virtual Point3 VectorFrom(const Point3& p, RefFrame ifrom)=0;

Remarks:

Transform the vector from the specified space to internal camera space.

Parameters:

const Point3& p

The vector to transform.

RefFrame ifrom

The space to transform the vector from. One of the following values:

REF_CAMERA

REF_WORLD

REF_OBJECT

Prototype:

virtual Point3 VectorToNoScale(const Point3& p, RefFrame ito);

Remarks:

This method is available in release 3.0 and later only.

Transform the vector from internal camera space to the specified space without scaling.

Parameters:

const Point3& p

The vector to transform.

RefFrame ito

The space to transform the vector to. One of the following values:

REF_CAMERA

REF_WORLD

REF_OBJECT

Prototype:

virtual Point3 VectorFromNoScale(const Point3& p, RefFrame ifrom);

Remarks:

This method is available in release 3.0 and later only.

Transform the vector from the specified space to internal camera space without scaling.

Note: This method was added to correct a problem that occurred in 3D Textures when the bump perturbation vectors were transformed from object space to camera space so they stay correctly oriented as the object rotates. If the object has been scaled, that transformation causes the perturbation vectors to be scaled as well, which amplifies the bump effect. This method rotates the perturbation vectors so they are correctly oriented in space, without scaling them.

Parameters:

const Point3& p

The vector to transform.

RefFrame ifrom

The space to transform the vector from. One of the following values:

REF_CAMERA

REF_WORLD

REF_OBJECT

Prototype:

virtual void SetGBufferID(int gbid);

Remarks:

When a map or material is evaluated (in Shade(), EvalColor() or EvalMono()), if it has a non-zero gbufID, it should call this routine to store the gbid into the shade context.

Note: Normally a texmap calls this method so the index is set for all of the area covered by the texture. There is no reason this has to be done for every pixel, however. A texture could set the ID for particular pixels only. This would allow post-processing routines (for example, a glow) to process just part of a texture and not the entire thing. For example, at the beginning of a texmap's EvalColor() one typically has code that does:

if (gbufid) sc.SetGBufferID(gbufid);

This takes the gbufid (which is in MtlBase) and (if it is non-zero) stores it into the shade context. The renderer, after evaluating the Shade() function for the material at a pixel, looks at the gbufferID left in the shade context and stores it into the G-buffer at that pixel. So if the texmap adds another condition, like

if (inHotPortion)
    if (gbufid) sc.SetGBufferID(gbufid);

it will set the ID for just the chosen pixels.

Parameters:

int gbid

The ID to store.

Prototype:

virtual AColor EvalGlobalEnvironMap(Point3 dir);

Remarks:

This method is available in release 2.0 and later only.

Returns the color of the global environment map from the given view direction.

Parameters:

Point3 dir

Specifies the direction of view.

Default Implementation:

{ return AColor(0,0,0,0); }

Prototype:

LightDesc *GetAtmosSkipLight();

Remarks:

This method is available in release 3.0 and later only.

This method, along with SetAtmosSkipLight() below, is used by the lights to avoid self-shadowing when applying atmospheric shadows. It returns a pointer to the LightDesc instance currently calling the Atmosphere::Shade() method when computing atmospheric shadows.

Here's how they are used:

(1) When computing the atmospheric shadows (somewhere in LightDesc::Illuminate()), do the following:

sc.SetAtmosSkipLight(this);
sc.globContext->atmos->Shade(sc, lightPos, sc.P(), col, trans);
sc.SetAtmosSkipLight(NULL);

(2) In LightDesc::TraverseVolume() do the following:

if (sc.GetAtmosSkipLight() == this)
    return;

Default Implementation:

{ return atmosSkipLight; }

Prototype:

void SetAtmosSkipLight(LightDesc *lt);

Remarks:

This method is available in release 3.0 and later only.

This method sets the LightDesc instance currently calling the Atmosphere::Shade() method. See GetAtmosSkipLight() above.

Parameters:

LightDesc *lt

Points to the LightDesc to set.

Default Implementation:

{ atmosSkipLight = lt; }

Prototype:

virtual BOOL GetCache(Texmap *map, AColor &c);

Remarks:

This method is available in release 2.0 and later only.

This method is used with texture maps only. If a map is multiply instanced within the same material, say on the diffuse channel and on the shininess channel, it will return the same value each time it is evaluated. It is a waste of processor time to reevaluate the map twice. This method allows you to cache the value so it won't need to be computed more than once.

Note that the cache is automatically cleared after each ShadeContext call. This is used within one evaluation of a material hierarchy.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

AColor &c

Receives the cached color.

Return Value:

TRUE if the color was returned; otherwise FALSE.

Default Implementation:

{ return FALSE; }

Sample Code:

This code, from \MAXSDK\SAMPLES\MATERIALS\NOISE.CPP, shows how the cache is retrieved and stored:

RGBA Noise::EvalColor(ShadeContext& sc) {
    Point3 p, dp;
    if (!sc.doMaps) return black;

    AColor c;
    // If the cache exists, return the color
    if (sc.GetCache(this, c))
        return c;

    // Otherwise compute the color
    // . . .

    // At the end of the eval the cache is stored
    sc.PutCache(this, c);
    return c;
}

Prototype:

virtual BOOL GetCache(Texmap *map, float &f);

Remarks:

This method is available in release 2.0 and later only.

Retrieves a floating point value from the cache. See the AColor version above for details.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

float &f

Receives the cached value.

Return Value:

TRUE if the value was returned; otherwise FALSE.

Default Implementation:

{ return FALSE; }

Prototype:

virtual BOOL GetCache(Texmap *map, Point3 &p);

Remarks:

This method is available in release 2.0 and later only.

Retrieves a Point3 value from the cache. See the AColor version above for details.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

Point3 &p

Receives the cached point.

Return Value:

TRUE if the value was returned; otherwise FALSE.

Default Implementation:

{ return FALSE; }

Prototype:

virtual void PutCache(Texmap *map, const AColor &c);

Remarks:

This method is available in release 2.0 and later only.

Puts a color to the cache. See the method GetCache(Texmap *map, AColor &c) above for details.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

const AColor &c

The color to store.

Default Implementation:

{}

Prototype:

virtual void PutCache(Texmap *map, const float f);

Remarks:

This method is available in release 2.0 and later only.

Puts a floating point value to the cache. See the method GetCache(Texmap *map, AColor &c) above for details.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

const float f

The floating point value to store.

Default Implementation:

{}

Prototype:

virtual void PutCache(Texmap *map, const Point3 &p);

Remarks:

This method is available in release 2.0 and later only.

Puts a Point3 value to the cache. See the method GetCache(Texmap *map, AColor &c) above for details.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

const Point3 &p

The Point3 value to store.

Default Implementation:

{}

Prototype:

virtual void TossCache(Texmap *map);

Remarks:

This method is available in release 2.0 and later only.

Removes the specified cache.

Parameters:

Texmap *map

Points to the texmap storing the cache (usually the plug-in's this pointer).

Default Implementation:

{}

Prototype:

virtual FILE* DebugFile();

Remarks:

This method is used internally.

Prototype:

virtual INT_PTR Execute(int cmd, ULONG arg1=0, ULONG arg2=0, ULONG arg3=0);

Remarks:

This method is available in release 3.0 and later only.

This is a general purpose function that allows the API to be extended in the future. The 3ds max development team can assign new cmd numbers and continue to add functionality to this class without having to 'break' the API.

This is reserved for future use.

Parameters:

int cmd

The command to execute.

ULONG arg1=0

Optional argument 1 (defined uniquely for each cmd).

ULONG arg2=0

Optional argument 2.

ULONG arg3=0

Optional argument 3.

Return Value:

An integer return value (defined uniquely for each cmd).

Default Implementation:

{ return 0; }

Prototype:

bool IsPhysicalSpace() const;

Remarks:

This method is available in release 4.0 and later only.

This method returns TRUE if the operator really maps physical values to RGB, otherwise FALSE. This method is provided so shaders can determine whether the shading calculations are in physical or RGB space.

Prototype:

void ScaledToRGB(T& energy) const;

Remarks:

This method is available in release 4.0 and later only.

This method will map a scaled energy value into RGB. This converts a color value which will be stored in energy.

Parameters:

T& energy

The converted color value.

Prototype:

float ScaledToRGB(float energy) const;

Remarks:

This method is available in release 4.0 and later only.

This method will map a scaled energy value into RGB. This converts a monochrome value which will be returned.

Parameters:

float energy

The scaled energy value.

Prototype:

void ScaledToRGB();

Remarks:

This method is available in release 4.0 and later only.

This method will map an energy value in out.c into RGB. The converted value is stored in out.c.

Prototype:

void ScalePhysical(T& energy) const;

Remarks:

This method is available in release 4.0 and later only.

This method will scale physical values so they can be used in the renderer. This converts a color value which will be stored in energy.

Parameters:

T& energy

The converted color value.

Prototype:

float ScalePhysical(float energy) const;

Remarks:

This method is available in release 4.0 and later only.

This method will scale physical values so they can be used in the renderer. This converts a monochrome value which will be returned.

Parameters:

float energy

The energy value.

Prototype:

void ScaleRGB(T& energy) const;

Remarks:

This method is available in release 4.0 and later only.

This method will scale RGB values; it is supplied to invert ScalePhysical. This converts a color value which will be stored in energy.

Parameters:

T& energy

The converted color value.

Prototype:

float ScaleRGB(float energy) const;

Remarks:

This method is available in release 4.0 and later only.

This method scales RGB values; it is supplied to invert ScalePhysical(). The converted monochrome value is returned.

Parameters:

float energy

The energy value.
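Taken together, ScalePhysical() and ScaleRGB() form an inverse pair around the renderer's working space. The contract can be sketched with a hypothetical linear tone operator (the LinearToneOp class and its physScale factor are illustrative assumptions, not the SDK's actual mapping, which may be nonlinear):

```cpp
#include <cmath>

// Hypothetical linear tone operator. Only the contract that
// ScaleRGB() inverts ScalePhysical() is being illustrated here.
struct LinearToneOp {
    float physScale; // assumed physical-to-RGB conversion factor

    // Scale a physical value so it can be used in the renderer.
    float ScalePhysical(float energy) const { return energy * physScale; }

    // Inverse of ScalePhysical(): map a renderer-space value back
    // to a physical value.
    float ScaleRGB(float energy) const { return energy / physScale; }
};
```

Whatever the operator's actual mapping, the round trip ScaleRGB(ScalePhysical(e)) should recover e.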

Prototype:

LightDesc *GetAtmosSkipLight();

Remarks:

This method is available in release 4.0 and later only.

This method returns the volumetric light, if any, that should be prevented from generating self-shadows, as set by SetAtmosSkipLight().

Default Implementation:

{ return atmosSkipLight; }

 

Prototype:

void SetAtmosSkipLight(LightDesc *lt);

Remarks:

This method is available in release 4.0 and later only.

This method allows you to set the volumetric light that should be prevented from generating self-shadows.

Parameters:

LightDesc *lt

A pointer to the light to set.

Default Implementation:

{ atmosSkipLight = lt; }

 

Prototype:

virtual int NRenderElements();

Remarks:

This method is available in release 4.0 and later only.

Returns the number of render elements.

Default Implementation:

{ return globContext->NRenderElements(); }

Prototype:

virtual IRenderElement *GetRenderElement(int n);

Remarks:

This method is available in release 4.0 and later only.

Returns an interface to the n-th render element.

Parameters:

int n

The zero based index of the render element to return.

Default Implementation:

{ return globContext->GetRenderElement(n); }

Prototype:

virtual Color DiffuseIllum();

Remarks:

This method is available in release 4.0 and later only.

Computes and returns the incoming diffuse illumination color (for matte/shadow).

Notes on the functions DP(), Curve(), DUVW() and DPdUVW()

The functions DP(), Curve(), and DUVW() are all for the purposes of antialiasing. They give the amount of variation of various quantities across the pixel. DPdUVW() is for the purposes of bump mapping. This section describes the method that 3ds max's renderer uses for computing them.

 

(1) Point3 DP();

 

This gives the approximate dimensions of a 3D box bounding the fragment of the surface cut out by the current pixel being rendered. It can be used for antialiasing 3D textures.

 

First calculate:

dP/dx: the derivative of a point on the surface relative to the screen x-coordinate. 

dP/dy: the derivative of a point on the surface relative to the screen y-coordinate. 

Then take the sum of the absolute values of these:

DP = abs(dP/dx) + abs(dP/dy). 
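The sum-of-absolute-derivatives rule can be sketched in a self-contained form; a 3D texture would then prefilter (e.g. average a noise function) over a box of this size around the shade point. The Vec3 type, the ComputeDP name, and the derivative values are illustrative, not SDK code:

```cpp
#include <cmath>

// Minimal 3D vector standing in for the SDK's Point3.
struct Vec3 { float x, y, z; };

// DP = abs(dP/dx) + abs(dP/dy): per-axis size of a box bounding the
// surface fragment covered by the current pixel.
Vec3 ComputeDP(const Vec3& dPdx, const Vec3& dPdy) {
    return { std::fabs(dPdx.x) + std::fabs(dPdy.x),
             std::fabs(dPdx.y) + std::fabs(dPdy.y),
             std::fabs(dPdx.z) + std::fabs(dPdy.z) };
}
```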

 

(2) Point3 DUVW();

 

This is similar to DP(), but bounds the change in UVW space. It is used for texture filtering.

 

First calculate (where U = {u,v,w} )

dU/dx: the derivative of {u,v,w} relative to the screen x-coordinate. 

dU/dy: the derivative of {u,v,w} relative to the screen y-coordinate. 

Then take the sum of the absolute values of these:

DUVW = DU = abs(dU/dx) + abs(dU/dy). 
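One common use of this bound is picking a filter width in texels for a 2D texture. A self-contained sketch, where the mip-level formula, the function name, and the texture size are illustrative assumptions rather than the scanline renderer's actual filtering scheme:

```cpp
#include <algorithm>
#include <cmath>

// DUVW bounds the change in (u,v) across the pixel:
//   du = |du/dx| + |du/dy|,  dv = |dv/dx| + |dv/dy|.
// Multiplying by the texture size gives a filter width in texels;
// log2 of that width selects a mip level (clamped at the base level).
float MipLevelFromDUVW(float dudx, float dudy,
                       float dvdx, float dvdy,
                       float textureSize) {
    float du = std::fabs(dudx) + std::fabs(dudy); // DUVW.x
    float dv = std::fabs(dvdx) + std::fabs(dvdy); // DUVW.y
    float widthInTexels = std::max(du, dv) * textureSize;
    return widthInTexels > 1.0f ? std::log2(widthInTexels) : 0.0f;
}
```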

 

 

(3) float Curve();

 

This gives an approximation of the amount of curvature of the surface. It is basically a measure of the length of a vector representing the rate of change of the surface normal across the pixel. It is used to scale the sample size in reflection mapping (high curvature implies a larger sample).

 

Here is our implementation:

 

float SContext::Curve() {
  calc_derivs(); // calculate dsdx, dsdy, dtdx, dtdy
  Point3 nv0 = actf->GetVertexNormal(0); // surface normal at vertex 0
  Point3 nv1 = actf->GetVertexNormal(1); // surface normal at vertex 1
  Point3 nv2 = actf->GetVertexNormal(2); // surface normal at vertex 2
  nv1 -= nv0; // normal deltas along the two edges from vertex 0
  nv2 -= nv0;
  float dx2 = LengthSquared(dsdx*nv1 + dtdx*nv2); // |dN/dx|^2
  float dy2 = LengthSquared(dsdy*nv1 + dtdy*nv2); // |dN/dy|^2
  return (float)sqrt(0.5*(dx2+dy2)); // RMS change in normal across the pixel
  }

 

Notes:

calc_derivs() calculates the terms dsdx, dsdy, dtdx, dtdy.

(s,t) are the skew coordinates of the current point in the face relative to the face vertices (V0,V1,V2), i.e. the current point P = V0 + s*(V1-V0) + t*(V2-V0).

  dsdx is the derivative of s relative to screen x.
  dsdy is the derivative of s relative to screen y.
  dtdx is the derivative of t relative to screen x.
  dtdy is the derivative of t relative to screen y.

  dx2 is the squared length of the change in normal relative to screen x.
  dy2 is the squared length of the change in normal relative to screen y.

 

 

(4) void DPdUVW(Point3 dP[3]);

 

This method returns the 3 basis vectors that describe the U, V, and W axes in XYZ space.

 

The following function computes the U and V bump basis vectors for a triangle given the texture coordinates at the three vertices of the triangle ( tv[] ) and the 3D coordinates at the vertices ( v[] ). It is simply a solution using linear algebra for the U and V axes in terms of the XYZ coordinates. It returns

 

  b[0] = DP/DU

  b[1] = DP/DV

 

This function does not compute DP/DW, which at present is a shortcoming of the scanline renderer. It also assumes that the bump basis vectors are constant over a given face, an assumption that has worked well in practice.

 

void ComputeBumpVectors(const Point3 tv[3], const Point3 v[3], Point3 bvec[3]) {
 float uva,uvb,uvc,uvd,uvk;
 Point3 v1,v2;

 // UV deltas along the two triangle edges from vertex 0
 uva = tv[1].x-tv[0].x;
 uvb = tv[2].x-tv[0].x;
 uvc = tv[1].y-tv[0].y;
 uvd = tv[2].y-tv[0].y;

 // determinant of the 2x2 UV system
 uvk = uvb*uvc - uva*uvd;

 // 3D edge vectors from vertex 0
 v1 = v[1]-v[0];
 v2 = v[2]-v[0];

 if (uvk!=0) {
  // non-degenerate UVs: solve the linear system for both basis vectors
  bvec[0] = (uvc*v2-uvd*v1)/uvk;
  bvec[1] = (uva*v2-uvb*v1)/uvk;
  }
 else {
  // degenerate UVs: fall back to whichever edge carries UV variation
  if (uva!=0)
   bvec[0] = v1/uva;
  else if (uvb!=0)
   bvec[0] = v2/uvb;
  else
   bvec[0] = Point3(0.0f,0.0f,0.0f);
  if (uvc!=0)
   bvec[1] = v1/uvc;
  else if (uvd!=0)
   bvec[1] = v2/uvd;
  else
   bvec[1] = Point3(0.0f,0.0f,0.0f);
  }
 // DP/DW is not computed; the W axis is taken as constant
 bvec[2] = Point3(0,0,1);
 }
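As a sanity check, the non-degenerate branch can be exercised on a triangle whose texture coordinates coincide with its XY positions; the U basis vector then comes out as the world X axis, and the V basis as (0,-1,0) under the sign convention of the code above. A self-contained sketch (the P3 struct and the BumpBasis name are stand-ins for illustration, not SDK code):

```cpp
// Minimal stand-in for the SDK's Point3 with just the operators needed.
struct P3 { float x, y, z; };
static P3 operator-(P3 a, P3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static P3 operator*(float s, P3 a) { return { s * a.x, s * a.y, s * a.z }; }
static P3 operator/(P3 a, float s) { return { a.x / s, a.y / s, a.z / s }; }

// Non-degenerate branch of the computation above: solve the 2x2 system
//   v1 = uva*Bu + uvc*Bv,  v2 = uvb*Bu + uvd*Bv
// for the U and V bump basis vectors.
void BumpBasis(const P3 tv[3], const P3 v[3], P3 bvec[2]) {
    float uva = tv[1].x - tv[0].x, uvb = tv[2].x - tv[0].x;
    float uvc = tv[1].y - tv[0].y, uvd = tv[2].y - tv[0].y;
    float uvk = uvb * uvc - uva * uvd; // assumed nonzero (non-degenerate UVs)
    P3 v1 = v[1] - v[0], v2 = v[2] - v[0];
    bvec[0] = (uvc * v2 - uvd * v1) / uvk;
    bvec[1] = (uva * v2 - uvb * v1) / uvk;
}
```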