PhysX SDK 3.2 API Reference: pxtask::CudaContextManager Class Reference


pxtask::CudaContextManager Class Reference

Manages memory, thread locks, and task scheduling for a CUDA context. More...

#include <PxCudaContextManager.h>

List of all members.


Public Member Functions

virtual void acquireContext ()=0
 Acquire the CUDA context for the current thread.
virtual void releaseContext ()=0
 Release the CUDA context from the current thread.
virtual CudaMemoryManager * getMemoryManager ()=0
 Return the CudaMemoryManager instance associated with this CUDA context.
virtual class GpuDispatcher * getGpuDispatcher ()=0
 Return the GpuDispatcher instance associated with this CUDA context.
virtual bool contextIsValid () const =0
 Context manager has a valid CUDA context.
virtual bool supportsArchSM10 () const =0
 G80.
virtual bool supportsArchSM11 () const =0
 G92.
virtual bool supportsArchSM12 () const =0
 GT200.
virtual bool supportsArchSM13 () const =0
 GT260.
virtual bool supportsArchSM20 () const =0
 GF100.
virtual bool supportsArchSM30 () const =0
 GK100.
virtual bool supportsArchSM35 () const =0
 GK110.
virtual bool isIntegrated () const =0
 true if GPU is an integrated (MCP) part
virtual bool hasDMAEngines () const =0
 true if GPU can overlap kernels and copies
virtual bool canMapHostMemory () const =0
 true if GPU can map host memory (zero-copy)
virtual int getDriverVersion () const =0
 returns cached value of cuGetDriverVersion()
virtual size_t getDeviceTotalMemBytes () const =0
 returns cached value of device memory size
virtual int getMultiprocessorCount () const =0
 returns cached value of SM unit count
virtual unsigned int getClockRate () const =0
 returns cached value of SM clock frequency
virtual int getSharedMemPerBlock () const =0
 returns total amount of shared memory available per block in bytes
virtual const char * getDeviceName () const =0
 returns device name retrieved from driver
virtual CudaInteropMode::Enum getInteropMode () const =0
 interop mode the context was created with
virtual bool registerResourceInCudaGL (CUgraphicsResource &resource, PxU32 buffer)=0
 Register a rendering resource with CUDA.
virtual bool registerResourceInCudaD3D (CUgraphicsResource &resource, void *resourcePointer)=0
 Register a rendering resource with CUDA.
virtual bool unregisterResourceInCuda (CUgraphicsResource resource)=0
 Unregister a rendering resource with CUDA.
virtual int usingDedicatedPhysXGPU () const =0
 Determine if the user has configured a dedicated PhysX GPU in the NV Control Panel.
virtual void release ()=0
 Release the CudaContextManager.

Protected Member Functions

virtual ~CudaContextManager ()
 protected destructor, use release() method

Detailed Description

Manages memory, thread locks, and task scheduling for a CUDA context.

A CudaContextManager manages access to a single CUDA context, allowing it to be shared between multiple scenes. Memory allocations are dynamic: starting with an initial heap size and growing on demand by a configurable page size. The context must be acquired from the manager before using any CUDA APIs.

The CudaContextManager is based on the CUDA driver API and explicitly does not support the CUDA runtime API (aka CUDART).

To enable CUDA use by an APEX scene, create a CudaContextManager (either supplying your own CUDA context or allowing a new context to be allocated for you), retrieve the GpuDispatcher for that context via the getGpuDispatcher() method, and assign it to the TaskManager that is given to the scene via its NxApexSceneDesc.
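The wiring order described above can be sketched as follows. The mock types below are hypothetical stand-ins for the SDK classes (the real ones are created through the SDK, not defined by the user), and the `setGpuDispatcher` setter name is an assumption; only the call sequence (check validity, fetch the dispatcher, hand it to the task manager) reflects the documentation.

```cpp
#include <cassert>

// Hypothetical stand-ins for the SDK types, so the wiring order can be
// shown in isolation. None of these mock signatures are verbatim SDK API.
struct GpuDispatcher {};

struct CudaContextManager {
    bool valid;
    GpuDispatcher dispatcher;
    bool contextIsValid() const { return valid; }
    GpuDispatcher* getGpuDispatcher() { return &dispatcher; }
    void release() { /* real SDK: frees dispatcher, memory manager, context */ }
};

struct TaskManager {
    GpuDispatcher* gpu = nullptr;
    void setGpuDispatcher(GpuDispatcher& d) { gpu = &d; }  // assumed setter name
};

// Wire a context manager's dispatcher into a task manager, checking
// contextIsValid() first as the documentation recommends.
bool wireGpuDispatcher(CudaContextManager& mgr, TaskManager& tm) {
    if (!mgr.contextIsValid()) {
        mgr.release();  // no usable CUDA context: caller falls back to CPU dispatch
        return false;
    }
    tm.setGpuDispatcher(*mgr.getGpuDispatcher());
    return true;
}
```

If contextIsValid() returns false (see below), the manager's GpuDispatcher should never be assigned to a TaskManager, since it would be unable to execute GpuTasks.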


Constructor & Destructor Documentation

virtual pxtask::CudaContextManager::~CudaContextManager (  )  [inline, protected, virtual]

protected destructor, use release() method


Member Function Documentation

virtual void pxtask::CudaContextManager::acquireContext (  )  [pure virtual]

Acquire the CUDA context for the current thread.

Acquisitions are allowed to be recursive within a single thread: you may acquire the context multiple times, so long as you release it the same number of times.

The context must be acquired before using most CUDA functions.

It is not necessary to acquire the CUDA context inside GpuTask launch functions, because the GpuDispatcher will have already acquired the context for its worker thread. However, it is not harmful to (re)acquire the context in code that is shared between GpuTasks and non-task functions.
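The balanced, recursive acquire/release contract is a natural fit for an RAII guard. The guard and the counting mock below are not part of the SDK; the mock simply counts nested acquisitions the way the paragraph above describes (the real methods bind the CUDA context to the calling thread).

```cpp
#include <cassert>

// Hypothetical stand-in that counts nested acquisitions per the contract above.
struct MockContextManager {
    int depth = 0;
    void acquireContext() { ++depth; }               // real SDK: binds ctx to this thread
    void releaseContext() { assert(depth > 0); --depth; }
};

// RAII guard: every acquire is paired with exactly one release,
// even on early return or exception.
class ScopedCudaContext {
    MockContextManager& mgr;
public:
    explicit ScopedCudaContext(MockContextManager& m) : mgr(m) { mgr.acquireContext(); }
    ~ScopedCudaContext() { mgr.releaseContext(); }
    ScopedCudaContext(const ScopedCudaContext&) = delete;
    ScopedCudaContext& operator=(const ScopedCudaContext&) = delete;
};
```

Because acquisitions may nest, the guard can be used freely in helper functions shared between GpuTasks and non-task code without tracking whether the context is already held.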

virtual bool pxtask::CudaContextManager::canMapHostMemory (  )  const [pure virtual]

true if GPU can map host memory (zero-copy)

virtual bool pxtask::CudaContextManager::contextIsValid (  )  const [pure virtual]

Context manager has a valid CUDA context.

This method should be called after creating a CudaContextManager, especially if the manager was responsible for allocating its own CUDA context (desc.ctx == NULL). If it returns false, there is no point in assigning this manager's GpuDispatcher to a TaskManager as it will be unable to execute GpuTasks.

virtual unsigned int pxtask::CudaContextManager::getClockRate (  )  const [pure virtual]

returns cached value of SM clock frequency

virtual const char* pxtask::CudaContextManager::getDeviceName (  )  const [pure virtual]

returns device name retrieved from driver

virtual size_t pxtask::CudaContextManager::getDeviceTotalMemBytes (  )  const [pure virtual]

returns cached value of device memory size

virtual int pxtask::CudaContextManager::getDriverVersion (  )  const [pure virtual]

returns cached value of cuGetDriverVersion()

virtual class GpuDispatcher* pxtask::CudaContextManager::getGpuDispatcher (  )  [pure virtual]

Return the GpuDispatcher instance associated with this CUDA context.

virtual CudaInteropMode::Enum pxtask::CudaContextManager::getInteropMode (  )  const [pure virtual]

interop mode the context was created with

virtual CudaMemoryManager* pxtask::CudaContextManager::getMemoryManager (  )  [pure virtual]

Return the CudaMemoryManager instance associated with this CUDA context.

virtual int pxtask::CudaContextManager::getMultiprocessorCount (  )  const [pure virtual]

returns cached value of SM unit count

virtual int pxtask::CudaContextManager::getSharedMemPerBlock (  )  const [pure virtual]

returns total amount of shared memory available per block in bytes

virtual bool pxtask::CudaContextManager::hasDMAEngines (  )  const [pure virtual]

true if GPU can overlap kernels and copies

virtual bool pxtask::CudaContextManager::isIntegrated (  )  const [pure virtual]

true if GPU is an integrated (MCP) part

virtual bool pxtask::CudaContextManager::registerResourceInCudaD3D ( CUgraphicsResource &  resource,
void *  resourcePointer 
) [pure virtual]

Register a rendering resource with CUDA.

This function is called to register render resources (allocated from Direct3D) with CUDA so that the memory may be shared between the two systems. This is only required for render resources that are designed for interop use. In APEX, each render resource descriptor that could support interop has a 'registerInCUDA' boolean variable.

The function must be called again any time your graphics device is reset, to re-register the resource.

Returns true if the registration succeeded. A registered resource must be unregistered before it can be released.

Parameters:
resource [OUT] the handle to the resource that can be used with CUDA
resourcePointer [IN] A pointer to either IDirect3DResource9, or ID3D10Device, or ID3D11Resource to be registered.

virtual bool pxtask::CudaContextManager::registerResourceInCudaGL ( CUgraphicsResource &  resource,
PxU32  buffer 
) [pure virtual]

Register a rendering resource with CUDA.

This function is called to register render resources (allocated from OpenGL) with CUDA so that the memory may be shared between the two systems. This is only required for render resources that are designed for interop use. In APEX, each render resource descriptor that could support interop has a 'registerInCUDA' boolean variable.

The function must be called again any time your graphics device is reset, to re-register the resource.

Returns true if the registration succeeded. A registered resource must be unregistered before it can be released.

Parameters:
resource [OUT] the handle to the resource that can be used with CUDA
buffer [IN] GLuint buffer index to be mapped to cuda

virtual void pxtask::CudaContextManager::release (  )  [pure virtual]

Release the CudaContextManager.

When the manager instance is released, it also releases its GpuDispatcher and CudaMemoryManager instances. Before the memory manager is released, it frees all allocated memory pages. If the CudaContextManager allocated its own CUDA context, it also frees that context.

Do not release the CudaContextManager if there are any scenes using its GpuDispatcher. Those scenes must be released first since there is no safe way to remove a GpuDispatcher from a TaskManager once the TaskManager has been given to a scene.

virtual void pxtask::CudaContextManager::releaseContext (  )  [pure virtual]

Release the CUDA context from the current thread.

The CUDA context should be released as soon as practically possible, to allow other CPU threads (including the GpuDispatcher) to work efficiently.

virtual bool pxtask::CudaContextManager::supportsArchSM10 (  )  const [pure virtual]

G80.

virtual bool pxtask::CudaContextManager::supportsArchSM11 (  )  const [pure virtual]

G92.

virtual bool pxtask::CudaContextManager::supportsArchSM12 (  )  const [pure virtual]

GT200.

virtual bool pxtask::CudaContextManager::supportsArchSM13 (  )  const [pure virtual]

GT260.

virtual bool pxtask::CudaContextManager::supportsArchSM20 (  )  const [pure virtual]

GF100.

virtual bool pxtask::CudaContextManager::supportsArchSM30 (  )  const [pure virtual]

GK100.

virtual bool pxtask::CudaContextManager::supportsArchSM35 (  )  const [pure virtual]

GK110.

virtual bool pxtask::CudaContextManager::unregisterResourceInCuda ( CUgraphicsResource  resource  )  [pure virtual]

Unregister a rendering resource with CUDA.

If a render resource was successfully registered with CUDA using one of the registerResourceInCuda*() methods, this function must be called to unregister the resource before it can be released.

virtual int pxtask::CudaContextManager::usingDedicatedPhysXGPU (  )  const [pure virtual]

Determine if the user has configured a dedicated PhysX GPU in the NV Control Panel.

Note:
If using CUDA Interop, this will always return 0
Returns:
1 if there is a dedicated PhysX GPU

0 if there is NOT a dedicated PhysX GPU

-1 if the routine is not implemented
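Because the result is a tri-state int (and both 1 and -1 convert to true in a boolean context), callers should compare against the documented values explicitly. A small hypothetical helper, not part of the SDK, makes the three cases unambiguous:

```cpp
#include <cassert>

// Hypothetical helper naming the three documented return values;
// the enum is illustrative, not part of the SDK.
enum class DedicatedGpuStatus { Present, Absent, Unknown };

DedicatedGpuStatus interpretDedicatedGpu(int result) {
    if (result == 1) return DedicatedGpuStatus::Present;  //  1: dedicated PhysX GPU configured
    if (result == 0) return DedicatedGpuStatus::Absent;   //  0: no dedicated PhysX GPU
    return DedicatedGpuStatus::Unknown;                   // -1: routine not implemented
}
```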


The documentation for this class was generated from the following file:



Copyright © 2008-2012 NVIDIA Corporation, 2701 San Tomas Expressway, Santa Clara, CA 95050 U.S.A. All rights reserved. www.nvidia.com