Runtime Interfaces

This chapter shows how to use the Clothing Module by explaining the essential parts of the SimpleClothing sample application in APEX/samples. It shows how to load an asset, instantiate an actor, keep it synchronized with the character, and render the simulated clothing through the clothing actor.

Initialization

An APEX application needs to first initialize PhysX. In the following code samples it is assumed that a PhysX SDK has been created (see the PhysX documentation):

PxPhysics* m_physxSDK;
PxFoundation* m_foundation;
m_foundation = PxCreateFoundation(PX_PHYSICS_VERSION, ...);
m_physxSDK = PxCreatePhysics( ... );
PxInitExtensions(*m_physxSDK);

and that we have a cooking library:

// Depending on the PhysX version, PxCookingParams may need a PxTolerancesScale.
PxCookingParams params(m_physxSDK->getTolerancesScale());
PxCooking* m_cooking;
m_cooking = PxCreateCooking(PX_PHYSICS_VERSION, m_physxSDK->getFoundation(), params);

After initializing the PhysX SDK, an APEX application needs to initialize the APEX SDK. See APEX SDK Documentation.

This is done as follows:

nvidia::apex::ApexSDKDesc apexSdkDesc;

// Let Apex know about our PhysX SDK and cooking library
apexSdkDesc.physXSDK = mPhysicsSDK;
apexSdkDesc.cooking = mCookingInterface;

// The output stream is used to report any errors back to the user
apexSdkDesc.outputStream = mErrorReport;

// Our custom render resource manager
apexSdkDesc.renderResourceManager = mRenderResourceManager;

// Our custom named resource handler (NRP)
apexSdkDesc.resourceCallback = mResourceCallback;

// Apex needs an allocator and error stream.
// By default it uses those of the PhysX SDK.
apexSdkDesc.allocator = mApexAllocator;

// Finally, create the Apex SDK
ApexCreateError errorCode;
mApexSDK = nvidia::apex::CreateApexSDK(apexSdkDesc, &errorCode);

The render resource manager SampleApexRenderResourceManager is used; it is common to all the APEX samples.

Also, a Scene has to be created to hold APEX actors and renderables. It can be initialized with a PhysX scene:

// Create an APEX scene with a PhysX scene
nvidia::apex::SceneDesc sceneDesc;
sceneDesc.scene = mPhysxScene;
nvidia::apex::Scene* apexScene = mApexSDK->createScene( sceneDesc );

An APEX application then needs to create the APEX modules it will use.

Loading the Clothing Module

First the clothing module ModuleClothing is created:

mApexClothingModule = static_cast<nvidia::apex::ModuleClothing*>(mApexSDK->createModule("Clothing"));

Then it is initialized with a ModuleClothingDesc:

#include <NvParamUtils.h>
if (mApexClothingModule != NULL)
{
    NvParameterized::Interface* moduleDesc = mApexClothingModule->getDefaultModuleDesc();

    // Know what you're doing when playing with these values!

    // Should not be 0 on any platform except PC.
    NvParameterized::setParamU32(*moduleDesc, "maxNumCompartments", 3);

    // Can be tuned for switching between more memory and more spikes.
    NvParameterized::setParamU32(*moduleDesc, "maxUnusedPhysXResources", 5);

    mApexClothingModule->init(moduleDesc);
}

ModuleClothingDesc holds the field maxNumCompartments, which defines the number of PxCompartments across which the simulated cloth instances are distributed. This parallelizes the CPU work of the cloth simulation, for both CPU and GPU cloth.

Loading Assets

First a clothing asset (ClothingAsset) needs to be created or loaded. The clothing authoring tool (ClothingTool) or the PhysX plug-in for Autodesk Max and Maya can be used to create and save ClothingAssets.

Loading Assets the NRP way

This triggers a callback to the NRP, just as if APEX itself had requested the resource, allowing you to gather all resource loading code in one place.

nvidia::apex::Asset* asset = reinterpret_cast<nvidia::apex::Asset*>(mApexSDK->getNamedResourceProvider()->getResource(nameSpace, fullname));
if( asset )
{
    mApexSDK->getNamedResourceProvider()->setResource(nameSpace, fullname, asset, true);
}

The NRP would look something like this:

void* MyResourceCallback::requestResource(const char *nameSpace, const char *filename)
{
    if( !strcmp(nameSpace, CLOTHING_AUTHORING_TYPE_NAME) ||
        !strcmp(nameSpace, <SOME OTHER TYPE>))
    {
        return loadParameterized(filename);
    }
    return NULL; // namespace not handled here
}

Loading Assets the NvParameterized way

nvidia::apex::Asset* loadParameterized(const char* fullpath)
{
    nvidia::apex::Asset* result = NULL;
    NvParameterized::Serializer::DeserializedData deserializedData;

    // A FileBuffer needs to be created.
    // This can also be a customized subclass of physx::PxFileBuf
    physx::PxFileBuf* fileBuf = mApexSDK->createStream(fullpath, physx::PxFileBuf::OPEN_READ_ONLY);

    if (fileBuf != NULL)
    {
        if (fileBuf->isOpen())
        {
            char peekData[32];
            fileBuf->peek(peekData, 32);

            // The method returns either Serializer::NST_BINARY or Serializer::NST_XML, or Serializer::NST_LAST for unknown files.
            NvParameterized::Serializer::SerializeType serType = mApexSDK->getSerializeType(peekData, 32);

            NvParameterized::Serializer* serializer = mApexSDK->createSerializer(serType);

            if (serializer != NULL)
            {
                NvParameterized::Serializer::ErrorType serError = NvParameterized::Serializer::ERROR_NONE;
                serError = serializer->deserialize(*fileBuf, deserializedData);

                // do some error checking here
                if (serError != NvParameterized::Serializer::ERROR_NONE)
                {
                    processError(serError);
                }

                serializer->release();
            }
        }
        fileBuf->release();
    }

    // Now deserializedData contains one or more NvParameterized objects.
    for (unsigned int i = 0; i < deserializedData.size(); i++)
    {
        NvParameterized::Interface* data = deserializedData[i];
        printf("Creating Asset of type %s\n", data->className());

        if (result == NULL)
        {
            result = mApexSDK->createAsset(data, "some unique name");
        }
        else
        {
            // files with multiple assets have to be handled differently; for now, just discard the extra objects
            deserializedData[i]->destroy();
        }
    }

    return result;
}

Loading Assets the old way

APEX versions older than 1.0 used per-asset file formats; for Clothing this was the .aca file. The sample contains the function loadClothingAsset to load a stored .aca file (APEX Clothing Asset) from disk. These files can no longer be loaded with APEX 1.1, but they can be converted to .apx/.apb files using the ParamTool from APEX 1.0.

Instantiating Actors

Clothing assets are instantiated by creating clothing actors. A descriptor has to be provided to the creation function; the descriptor is an NvParameterized object:

#include <NvParamUtils.h>
#define VERIFY_PARAM(_A) { NvParameterized::ErrorType error = _A; PX_ASSERT(error == NvParameterized::ERROR_NONE); }

// Get the (singleton!) default actor descriptor.
NvParameterized::Interface* actorDesc = mAsset->getDefaultActorDesc();
PX_ASSERT(actorDesc != NULL);

// Run Cloth on the GPU
NvParameterized::setParamBool(*actorDesc, "useHardwareCloth", mGpuSimulation));
NvParameterized::setParamBool(*actorDesc, "flags.ParallelPhysXMeshSkinning", true);

// Initialize the global pose
NvParameterized::setParamMat44(*actorDesc, "globalPose", currentPose);

{
    NvParameterized::Handle actorHandle(*actorDesc);

    // No util method for this
    VERIFY_PARAM(actorHandle.getParameter("boneMatrices"));
    VERIFY_PARAM(actorHandle.resizeArray(skinningMatrices.size()));
    VERIFY_PARAM(actorHandle.setParamMat44Array(&skinningMatrices[0], skinningMatrices.size()));
}

// create the actor
nvidia::apex::Actor* apexActor = mAsset->createApexActor(*actorDesc, *mApexScene);
nvidia::apex::ClothingActor* clothingActor = static_cast<nvidia::apex::ClothingActor*>(apexActor);

doSomething(clothingActor);

Clothing Preview

Instead of creating clothing actors, it is also possible to create clothing previews:

NvParameterized::Interface* previewDesc = mAsset->getDefaultAssetPreviewDesc();
PX_ASSERT(previewDesc != NULL);

// Initialize the global pose
NvParameterized::setParamMat44(*previewDesc, "globalPose", currentPose);

nvidia::apex::AssetPreview* apexPreview = mAsset->createApexAssetPreview(*previewDesc);
nvidia::apex::ClothingPreview* clothingPreview = static_cast<nvidia::apex::ClothingPreview*>(apexPreview);

The clothing preview is basically the same as a clothing actor, except that no simulation is running. This makes it possible to show the animated version of the clothing actor without having a PxScene around. Updating the animation state and rendering work just like with the normal clothing actor (updateState, updateRenderResources and dispatchRenderResources).
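A minimal sketch of a preview update and render pass could look as follows; the exact updateState signature on the preview is assumed here to mirror the clothing actor's (see Updating the Animation State below):

SampleApexRenderer apexRenderer;

// update only the animated pose; no simulation runs on a preview
// (signature assumed analogous to ClothingActor::updateState)
clothingPreview->updateState(globalPose, boneMatrices, sizeof(physx::PxMat44), numBoneMatrices);

clothingPreview->lockRenderResources();
clothingPreview->updateRenderResources();
clothingPreview->unlockRenderResources();
clothingPreview->dispatchRenderResources(apexRenderer);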

Stepping the Simulation

In an APEX application, the simulate and fetchResults calls on the PhysX scene need to be replaced by the corresponding simulate and fetchResults calls on the APEX scene. The APEX scene wraps the PhysX scene and additionally ticks all the APEX actors.

Note

This also applies to the checkResults method.

// start the simulation
mApexScene->simulate(dtime);

doSomethingInBetween();

// finish the simulation
physx::PxU32 errorState = 0;
mApexScene->fetchResults(true, &errorState);

Prior to the simulate call the actor state has to be updated.

Updating Actor Benefit for LoD

For each actor, the benefit is calculated according to this equation:

benefit = lodWeights.distanceWeight * angularImportance(distanceFromEye, assetRadius) + lodWeights.benefitsBias

The angularImportance function linearly blends from 1.0 down to 0.0 and essentially measures screen space size. The size is estimated as the fraction of the screen covered by the asset's diagonal (approximating the asset as a sphere).

If lodWeights.maxDistance is provided, the benefit reaches 0.0 once the actor is at that distance; otherwise it only reaches 0.0 for infinitely far away actors.

The lodWeights.distanceWeight marks the importance of a clothing actor relative to others. Actors with a lower weight are turned off sooner, even if they have the same maxDistance and distanceFromEye. For a character with both hair and a cape, the weight of the hair should be significantly lower than the weight of the cape, since whether or not it is simulated is much more noticeable on the cape.

distanceFromEye is the distance between the eye position, as set in Scene::setViewMatrix(), and the center of the bounding box of the clothing actor.

The lodWeights.benefitsBias should be a value between 0.0 and 1.0. Benefit is defined to lie in the [0.0, 1.0] range across all modules; larger values still work, they just claim proportionally larger amounts of simulation resources.
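To make the formula concrete, here is a minimal sketch of such a benefit computation. It is an illustration only, not the actual APEX implementation; the exact shape of angularImportance is internal to APEX:

#include <algorithm>

// Illustrative only: linear blend from 1.0 down to 0.0, approximating how much
// of the screen the asset's bounding sphere covers.
float angularImportance(float distanceFromEye, float assetRadius, float maxDistance)
{
    // screen space coverage estimate based on the asset's diagonal
    float coverage = std::min(1.0f, (2.0f * assetRadius) / std::max(distanceFromEye, 1e-3f));

    // if a maxDistance is given, force the blend to reach 0.0 at that distance
    if (maxDistance > 0.0f)
        coverage *= std::max(0.0f, 1.0f - distanceFromEye / maxDistance);

    return coverage;
}

float computeBenefit(float distanceWeight, float benefitsBias,
                     float distanceFromEye, float assetRadius, float maxDistance)
{
    return distanceWeight * angularImportance(distanceFromEye, assetRadius, maxDistance) + benefitsBias;
}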

Note

The maxDistance parameter can be changed based on whether a clothing actor is visible. Setting it to 0 for invisible characters is not recommended, however, since it takes a while to enable clothing again once it has been disabled. Reducing maxDistance to half for invisible actors can lead to good results, though.

// Request the actor's current configuration.
// The object does not have to be returned; it is modified directly.
NvParameterized::Interface* actorDesc = mActor->getActorDesc();

NvParameterized::setParamF32(*actorDesc, "lodWeights.maxDistance", maxDistance);
NvParameterized::setParamF32(*actorDesc, "lodWeights.distanceWeight", distanceWeight);
NvParameterized::setParamF32(*actorDesc, "lodWeights.benefitsBias", benefitBias);

Distribution of simulation resources

Simulation resources are an abstract unit, set through the Scene. Additionally, two parameters need to be chosen carefully: the first describes the inter-module importance, and the second describes how the abstract resources are turned into concrete ones, which lets the clothing module decide whether or not it should simulate:

Scene* scene = getApexScene();
// assuming the abstract unit is milliseconds, giving the simulation 15ms to run
scene->setLODResourceBudget(15.0f);

ModuleClothing* clothingModule = getClothingModule();

// no special importance to clothing module (relative to other modules)
clothingModule->setLODBenefitValue(1.0f);

// This value needs to be carefully tweaked and will differ for different platforms (or even PC processors).
// Here we assume that each millisecond we can simulate 10'000 vertices.
clothingModule->setLODUnitCost(0.0001f);

Turning off the automatic LOD

This can be done either for the entire clothing module, or for each clothing actor separately:

ModuleClothing* moduleClothing = getClothingModule();

// turn off LOD for all clothing actors
moduleClothing->setLODEnabled(false);

for (unsigned int i = 0; i < getNumClothingActors(); i++)
{
    ClothingActor* clothingActor = getClothingActor(i);

    // turn off automatic LOD for this actor only
    clothingActor->forceLod(0);

    // turn on automatic LOD for this actor only
    clothingActor->forceLod(-1);
}

Updating the Animation State

When a character is animated the animation transforms have to be provided to the Clothing Actor:

physx::PxMat44 globalPose = getGlobalPose();
physx::PxMat44* boneMatrices = getBoneMatrices();
actor->updateState(globalPose, boneMatrices, sizeof(physx::PxMat44), getNumBoneMatrices(), isContinuous);

In updateState a global pose and the current state of the animation bones are provided. In the SimpleClothing sample the asset doesn’t have an animation skeleton, so just the pose is updated. The matrices to provide here are the same as the ones needed to specify the initial state in the ClothingActorDesc if you have animation on that mesh.

The isContinuous flag tells the actor whether the motion is smooth. If it is set to true, the normal simulation procedure is applied. Setting it to false marks a special frame in the animation where the character is teleported to a different location or orientation; in that case the clothing is reset to its animated, skinned state, replacing the simulation result. If isContinuous is set to false for multiple subsequent frames, the simulation is forced to the animation every frame and will look very awkward. The flag helps a lot whenever the animation occasionally cannot be blended, or when a character is teleported.
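For illustration, the flag can be driven from a teleport event; wasTeleportedThisFrame() is an application-side query and not part of APEX:

// reset the cloth to its skinned state only on the single frame of the teleport
bool isContinuous = !character.wasTeleportedThisFrame();
actor->updateState(getGlobalPose(), getBoneMatrices(), sizeof(physx::PxMat44), getNumBoneMatrices(), isContinuous);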

Updating the Max Distance Scale

Each Clothing Actor can have an individually changing Max Distance Scale:

NvParameterized::Interface* actorDesc = mActor->getActorDesc();

NvParameterized::setParamF32(*actorDesc, "maxDistanceScale.Scale", scale);
NvParameterized::setParamBool(*actorDesc, "maxDistanceScale.Multipliable", multipliable);

The scale parameter must be in the [0,1] range and denotes the scale applied. The multipliable parameter switches between two modes: if it is set, all max distances are multiplied by the scale; otherwise the same fraction of the largest Max Distance is subtracted from all vertices, leading to some regions of the mesh not being simulated at all.

Remap Bone Indices

The function updateState takes an array of transformation matrices representing the current state of the animation skeleton. It is possible that the animation system and the internally stored bones in APEX do not have the same order. Therefore it can be necessary to map the list of animation matrices of the application to the bones stored in APEX using the remapBoneIndex function:

const std::vector<SkeletalBone>& bones = mSkeleton.getBones();
for (PxU32 i = 0; i < bones.size(); i++)
{
    mAsset->remapBoneIndex(bones[i].name.c_str(), i);
}

This needs to be done before the first updateState call.

Use Internal Bone Order

This is the counterpart to Bone Index Remapping: the game engine provides the skinning matrices in exactly the order the Clothing Actor expects them, which is the order in which the bones are stored in the Clothing Asset.

It needs to be activated in the Clothing Actor Descriptor:

NvParameterized::Interface* actorDesc = mAsset->getDefaultActorDesc();
PX_ASSERT(actorDesc != NULL);

NvParameterized::setParamBool(*actorDesc, "useInternalBoneOrder", true); // default is false

Then, the game engine needs to find out what bones are at what location:

const unsigned int boneNumber = mClothingAsset->getNumUsedBones();

for (unsigned int internalIndex = 0; internalIndex < boneNumber; internalIndex++)
{
    const char* boneName = mClothingAsset->getBoneName(internalIndex);

    // create a mapping table now
    // ...
}

This has the advantage that the updateState call can copy the matrices slightly faster. Also, the number of bone matrices that need to be handed to the clothing actor is reduced to the set of bones that are actually used by the clothing actor.
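A hedged sketch of building such a mapping table and then feeding the matrices in internal order; mSkeleton, findBoneIndexByName and getBoneMatrix are application-side helpers, not APEX API:

const unsigned int boneNumber = mClothingAsset->getNumUsedBones();

// build the mapping once, before the first updateState call
std::vector<unsigned int> internalToAnim(boneNumber);
for (unsigned int internalIndex = 0; internalIndex < boneNumber; internalIndex++)
{
    internalToAnim[internalIndex] = mSkeleton.findBoneIndexByName(mClothingAsset->getBoneName(internalIndex));
}

// each frame, gather only the bones the clothing actor actually uses
std::vector<physx::PxMat44> matrices(boneNumber);
for (unsigned int i = 0; i < boneNumber; i++)
{
    matrices[i] = mSkeleton.getBoneMatrix(internalToAnim[i]);
}
actor->updateState(globalPose, &matrices[0], sizeof(physx::PxMat44), boneNumber, true);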

Frame Delay

When the simulation is updated serially in the usual order:

clothingActor->updateState(...)
scene->simulate(...)
scene->fetchResults(...)
clothingActor->updateRenderResources(...)
clothingActor->dispatchRenderResources(...)

there is no frame delay. The clothing actor is rendered with the matrices provided in the updateState call.

A frame delay will occur when the provided matrices are based on a rigid body simulation and these rigid bodies are simulated in the same scene as the cloth. Examples of such a scenario are clothing on a ragdoll, or a simulation-based character controller.

There are two ways to handle such situations. One solution is to introduce a matching delay on all rendered objects, or at least on the character wearing the clothing. The second approach is to simulate the cloth in a separate scene later in the frame, as shown in the sketch below. The advantages of the former solution are more parallelism and the possibility to simulate interactions with other objects in the scene; however, such a frame delay is often hard to handle in a game engine, so the latter approach is more practicable. A selection of objects that need to interact with the cloth can be mirrored into the clothing scene.
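A sketch of the second approach: a dedicated clothing scene is stepped after the main scene has delivered its results, so this frame's ragdoll matrices are already available. Scene creation follows the Initialization section; updateClothingActorsFromRagdoll() stands in for application-specific mirroring and updateState calls:

// main scene: rigid bodies, ragdolls, character controllers
mMainApexScene->simulate(dt);
mMainApexScene->fetchResults(true, NULL);

// the ragdoll poses are final now, hand them to the clothing actors
updateClothingActorsFromRagdoll();

// clothing scene: simulated later in the frame, no frame delay
mClothingApexScene->simulate(dt);
mClothingApexScene->fetchResults(true, NULL);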

Wind

For clothing actors a wind vector can be set:

NvParameterized::Interface* actorDesc = mActor->getActorDesc();

NvParameterized::setParamVec3(*actorDesc, "windParams.Velocity", windVelocity);
NvParameterized::setParamF32(*actorDesc, "windParams.Adaption", windAdaption);

// This alternative code is deprecated with 1.1
mActor->setWind(windAdaption, windVelocity);

The windVelocity parameter represents the target velocity of the wind. Best results are achieved when the velocity has a small random noise on top. The windAdaption parameter represents how quickly the piece of cloth adapts to the wind velocity. With an adaption value of 1.0, it takes the piece of cloth at least 1s to fully adapt to the velocity; if it takes longer than 1s, the cloth is not perpendicular to the wind direction. Increasing the adaption to 10 reduces the time to 1/10s, whereas lowering the parameter to 0.5 increases it to 2s.

Since the wind velocity and adaption are part of the actor descriptor, they can be set before actor creation and can also be altered once the actor exists.
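For example, a small random variation can be layered on top of a base wind each frame; this is a sketch, with randomUnitVector() and randomFloat01() as hypothetical application-side helpers:

// jitter the wind direction by ~10% of the wind strength and the adaption by ~10% as well
physx::PxVec3 windVelocity = mBaseWindVelocity + randomUnitVector() * 0.1f * mBaseWindVelocity.magnitude();
float windAdaption = mBaseWindAdaption * (0.9f + 0.2f * randomFloat01());

NvParameterized::Interface* actorDesc = mActor->getActorDesc();
NvParameterized::setParamVec3(*actorDesc, "windParams.Velocity", windVelocity);
NvParameterized::setParamF32(*actorDesc, "windParams.Adaption", windAdaption);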

Note

For better quality it is very much recommended to slightly vary the wind velocity in strength as well as in direction.

Even wind with zero velocity can be simulated; this results in air resistance (still modulated by the dot product between vertex normal and wind direction).

Note

APEX Clothing has its own implementation of wind. It is independent of the Wind module and works entirely by itself.

Rendering With Render Proxy

A clothing actor can be rendered by acquiring a render proxy from it, generating the render data through updateRenderResources, and triggering the render callback with dispatchRenderResources. After all render calls are done, the render proxy needs to be released so that the memory can be reused. The acquirer of the render proxy has exclusive access to its data, so locking the render resources is not necessary. The render proxy can only be acquired once per frame; otherwise NULL is returned:

SampleApexRenderer apexRenderer;
ClothingRenderProxy* renderProxy = clothingActor->acquireRenderProxy();
if (renderProxy != NULL)
{
    renderProxy->updateRenderResources();
    renderProxy->dispatchRenderResources(apexRenderer);
    renderProxy->release();
}

The render proxy can be acquired at any time, even during simulate, and it contains the latest simulation result. Note that it is better to release the render proxy before the next simulate call so that the memory can be reused; otherwise, a new render proxy is allocated to double buffer the result. It is therefore good practice to acquire and release the render proxy after every fetchResults, even if the actor is not rendered that frame.
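The following sketch shows where the acquire/release pair fits into a frame; actorIsOnScreen is an application-side visibility test:

mApexScene->simulate(dt);
mApexScene->fetchResults(true, NULL);

// acquire and release every frame, even when the actor is not rendered,
// so the proxy memory can be reused instead of double buffered
ClothingRenderProxy* renderProxy = clothingActor->acquireRenderProxy();
if (renderProxy != NULL)
{
    if (actorIsOnScreen)
    {
        renderProxy->updateRenderResources();
        renderProxy->dispatchRenderResources(apexRenderer);
    }
    renderProxy->release(); // before the next simulate call
}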

An acquired render proxy contains valid data that stays unchanged until it is released. A render proxy also stays valid after its corresponding actor has been released. However, releasing the corresponding ClothingAsset will invalidate the render proxy.

It is safe to call updateRenderResources on the render proxy in parallel, for example in the render thread, if the APEX SDK parameter ‘renderMeshActorLoadMaterialsLazily’ is set to false.

Rendering

The old way of rendering a clothing actor is still supported. Note, however, that with this approach the render data is always double buffered, which allows updateRenderResources to be called during simulate, but may also waste memory if the double buffering is not needed. Use the render proxy and release it before the simulate call to prevent that.

A clothing actor is rendered by updating the render resources, setting up a renderer and calling dispatchRenderResources with it. The Renderer is the same as for any other APEX module:

SampleApexRenderer apexRenderer;
actor->lockRenderResources();
actor->updateRenderResources();
actor->unlockRenderResources();

// dispatch could also be called later on!
actor->dispatchRenderResources(apexRenderer);

For general information on how to use debug visualization within APEX, please see Debug Visualization.

Implementing the Render Interface

As mentioned above, a renderer needs to be set up for clothing actor rendering. Together with the renderer, a number of render interface classes need to be subclassed and implemented by the application. See the APEX Programmer's Guide for details about the APEX render interface. The SimpleClothing sample uses the SampleApexRenderer that is common to the samples.

A simple scenario

These are the buffers typically created when rendering a clothing actor. The order of creation can vary.

The formats specified in this document are examples. For normals it is also possible to use more compressed formats like BYTE_SNORM3, SHORT_SNORM3, or (in future versions) even a BYTE_SNORM4 containing a quaternion instead of the normal/tangent buffers.

As of APEX 1.2 the tangent is a FLOAT4 and the binormal semantic is gone. The binormal can be reconstructed as cross(normal, tangent.xyz) * tangent.w, so the binormal information has been reduced to a single float. This reduces computational overhead inside APEX and means less data goes through the render API.

The hint flags indicate whether a buffer is updated frequently or once.

The first vertex buffer is created from the clothing actor itself. It always contains a position and a normal, and it can contain tangents (the binormal is reconstructed as described above). The formats are identical to the formats used by the render mesh asset:

createVertexBuffer
  hint                = DYNAMIC
  uvOrigin            = ORIGIN_BOTTOM_LEFT
  Buffers Requested:
    POSITION          = FLOAT3
    NORMAL            = FLOAT3
    TANGENT           = FLOAT4

The next 3 vertex buffers are created by the render mesh asset within the clothing asset. They are split into 3 different ones because the clothing actor only ever needs a subset of them depending on its state.

The first buffer contains the same semantics as the buffer from the clothing actor. This static buffer is only used when the clothing actor is not being simulated and the fallback skinning flag is not set:

createVertexBuffer
  hint                = STATIC
  uvOrigin            = ORIGIN_BOTTOM_LEFT
  Buffers Requested:
    POSITION          = FLOAT3
    NORMAL            = FLOAT3
    TANGENT           = FLOAT4

The second buffer contains the bone indices and bone weights, if present. It is only used when the clothing actor is not simulated and the fallback skinning flag is not set:

createVertexBuffer
  hint                = STATIC
  uvOrigin            = ORIGIN_BOTTOM_LEFT
  Buffers Requested:
    BONE_INDEX        = USHORT4
    BONE_WEIGHT       = FLOAT4

The third buffer contains everything else, including custom vertex buffers. As an example, VERTEX_ORIGINAL_INDEX is a buffer that contains the old vertex indices from the authoring pipeline, i.e. the order in which the vertices were entered into the render mesh asset:

createVertexBuffer
  hint                = STATIC
  uvOrigin            = ORIGIN_BOTTOM_LEFT
  Buffers Requested:
    TEXCOORD0         = FLOAT2
  Custom Buffers Requested:
    VERTEX_ORIGINAL_INDEX = UINT1

The index buffer is fairly straightforward. The hint will remain STATIC as long as clothing does not support tearing. The format will be one of UBYTE1, USHORT1 or UINT1, the primitives will be TRIANGLES, and registerInCUDA will be false for the time being (with no interopContext either).

createIndexBuffer
  hint           = STATIC
  format         = UINT1
  primitives     = TRIANGLES
  registerInCUDA = false
  interopContext = 00000000

The render resource will then contain two or more of these vertex buffers. The vertex semantics of those buffers will always be mutually exclusive:

createRenderResource
  Vertex Buffers:
   TEXCOORD0 VERTEX_ORIGINAL_INDEX
   POSITION NORMAL TANGENT

Buffered calls & Multithreading

The API in the Clothing Module is not thread safe. It is not allowed at any time to call two methods concurrently, whether on the same APEX object or on two different ones.

Also, none of the calls inside the clothing module are buffered. Some methods are safe to call while the simulation is running, but those methods mention this fact explicitly. Any other method must not be called during simulation.

For example, it is allowed to load assets while the simulation is running, but it is not allowed to create actors from these assets while the simulation is running. It is also not allowed to load two assets at the same time.
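An illustrative timeline under these rules, reusing the loadParameterized function from earlier in this chapter (the file name is hypothetical):

mApexScene->simulate(dtime);

// allowed: loading one asset while the simulation is running
nvidia::apex::Asset* asset = loadParameterized("cape.apb");

mApexScene->fetchResults(true, NULL);

// only now is it safe to create an actor from the freshly loaded asset
nvidia::apex::Actor* actor = asset->createApexActor(*asset->getDefaultActorDesc(), *mApexScene);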

Using Morph Targets (also known as Blend Shapes)

Characters sometimes need to be customized after being exported from the DCC tool, typically for MMO-style character customization. This is done by blending between deformed meshes of the original character.

These changes have to be applied on the fly, as there is no way to store all possible configurations as pre-baked asset files. APEX Clothing supports this by allowing the game engine to provide the displacements of the original positions to each ClothingActor upon creation. They cannot be modified later on because the changes need to be cooked into the simulation mesh.

First, the asset needs to know in which order those displacements will be applied. To that end, the original positions are passed to the asset so that it can create the mapping from external to internal ordering. This requires that the original positions are identical to the ones provided by the DCC tool at export time, including the same animation pose:

ClothingAsset* clothingAsset = getClothingAsset();

physx::PxF32 epsilon = 0.0f;

physx::PxU32 numMapped = clothingAsset->prepareMorphTargetMapping(&morphVertices[0], morphVertices.size(), epsilon);

if (numMapped < morphVertices.size())
{
    // The mapping still worked, but some vertices were mapped to an original vertex with a difference larger than epsilon.
    // Either bump the epsilon or, if too many vertices failed, emit an error message about a potentially bad mapping.
}

Second, upon actor creation the displaced vertices need to be passed through the actor descriptor:

NvParameterized::Interface* actorDesc = clothingAsset->getDefaultActorDesc();

// [...] Some initialization of the actorDesc

physx::PxU32 numMorphDisplacements = 0;
physx::PxVec3* morphDisplacements = NULL;
getMorphDisplacements(morphDisplacements, numMorphDisplacements);

if (numMorphDisplacements > 0)
{
    NvParameterized::Handle md(*actorDesc, "morphDisplacements");
    md.resizeArray(numMorphDisplacements);
    md.setParamVec3Array(morphDisplacements, numMorphDisplacements);
}

nvidia::apex::Actor* actor = clothingAsset->createApexActor(*actorDesc, apexScene);

This actor will most likely not start simulating immediately. It will cook the simulation mesh during the first couple of simulation frames in a background thread. This behavior can be changed by setting the allowAsyncCooking bool in the Clothing Module parameters.

The convex volumes are also adapted as well as possible; the capsule volumes, however, are not. As long as the morph target only applies moderate changes, this is usually not a problem.

It is still not allowed to have scales on certain bones.

Colliding with the Environment

The PhysX 3.x cloth solver does not support automatic collision with the scene. Each clothing actor only collides with its own collision volumes, specified in the clothing asset. If additional collision is required, for example to collide with the ground, the clothing actor provides an interface to add it:

ClothingPlane*        createCollisionPlane(const PxPlane& plane)
ClothingConvex*       createCollisionConvex(ClothingPlane** planes, PxU32 numPlanes)
ClothingSphere*       createCollisionSphere(const PxVec3& position, PxF32 radius)
ClothingCapsule*      createCollisionCapsule(ClothingSphere& sphere1, ClothingSphere& sphere2)
ClothingTriangleMesh* createCollisionTriangleMesh()

Each of these collision objects provides functions to release, edit and position it.

To implement generic collision with the environment, a scene can be queried around the clothing actor and corresponding clothing collision objects can be created for the actor. Some bookkeeping is required to keep track of the mirrored objects, in order to update and release them in time.

ClothingTriangleMesh has a few limitations: it should be kept small for performance reasons, it should be closed (at least within the bounding box of the cloth), and it is currently not possible to dynamically remove triangles from it. Therefore it is recommended to use planes or convexes to approximate collision with the ground, as in the sketch below.
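As an example, the ground can be approximated with a single collision plane. This is a minimal sketch; PxPlane(n, d) describes the points p with n.dot(p) + d = 0, so the plane below is y = 0:

// add a ground plane at y = 0 with an upward facing normal
ClothingPlane* groundPlane = clothingActor->createCollisionPlane(physx::PxPlane(physx::PxVec3(0.0f, 1.0f, 0.0f), 0.0f));

// ... simulate ...

// release the plane once the actor no longer needs the extra collision
groundPlane->release();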