Throttle
The Throttle sample demonstrates how to monitor the send queue and scale the rate of network communications.
Path
Source: (SDK root)\Samples\C++\DirectPlay\Throttle
Executable: (SDK root)\Samples\C++\DirectPlay\Bin
User's Guide
Start the Throttle Server by double-clicking ThrottleServer.exe in the Bin folder. Wait a moment while it connects to the network. When the server is ready to accept connections, the dialog user interface (UI) will appear. While the server is running, you can adjust the Server Load slider to simulate processing load on the server. The higher the load setting, the more slowly the server handles incoming messages.
After the server is running, launch the Throttle Client by double-clicking ThrottleClient.exe in the Bin folder. The client will prompt for the host name or Internet Protocol (IP) address of the computer where the server is running; the port number is fixed. When the client connects, the server window will indicate the added connection and show the amount of data received.
You can adjust the Send Interval slider on the Client window to set the delay between successive calls to IDirectPlay8Client::Send. With the default settings, the server's receive buffer quickly fills to capacity, and outgoing messages accumulate in the client's send queue. When the Regulate Outgoing Rate box is checked, the client attempts to scale the number of outgoing messages to keep the queue size below the Max Queue Size set by the slider.
Programming Notes
To understand why you might need to throttle outgoing data in your application, you need to understand the DirectPlay architecture and DirectPlay Service Providers.
DPN_SP_CAPS contains a list of capabilities and settings for service providers. This sample focuses on dwNumThreads, dwBuffersPerThread, and dwSystemBufferSize. During most Transmission Control Protocol/Internet Protocol (TCP/IP) sessions, DirectPlay immediately delivers messages to the receiver's system buffer. The service provider's worker threads take messages from the system buffer and store them in their own message buffers until they can be processed by the application's message handler.
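As a minimal sketch (assuming an already created and connected IDirectPlay8Client pointer named pClient and a project that links against the DirectPlay and DirectX GUID libraries; the function and variable names here are illustrative only), these values can be read for the TCP/IP service provider with IDirectPlay8Client::GetSPCaps:

#include <windows.h>
#include <stdio.h>
#include <dpnet.h>

// Sketch: print the TCP/IP service provider's buffer settings.
// pClient is assumed to be an initialized IDirectPlay8Client pointer.
HRESULT PrintProviderCaps( IDirectPlay8Client* pClient )
{
    DPN_SP_CAPS spCaps;
    ZeroMemory( &spCaps, sizeof( DPN_SP_CAPS ) );
    spCaps.dwSize = sizeof( DPN_SP_CAPS );    // dwSize must be set before the call

    HRESULT hr = pClient->GetSPCaps( &CLSID_DP8SP_TCPIP, &spCaps, 0 );
    if( FAILED( hr ) )
        return hr;

    // dwNumThreads:       worker threads used by the service provider
    // dwBuffersPerThread: message buffers owned by each of those threads
    // dwSystemBufferSize: size of the underlying system buffer, in bytes
    printf( "Threads: %lu  Buffers per thread: %lu  System buffer: %lu bytes\n",
            spCaps.dwNumThreads, spCaps.dwBuffersPerThread, spCaps.dwSystemBufferSize );
    return S_OK;
}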
When both the thread buffers and the system buffer fill up, DirectPlay stops accepting further messages for delivery. Any messages destined for that target are then stored in the local send queue until enough space frees up on the remote computer.
You can adjust these parameters to suit your application, but increases in buffer size usually translate to increases in game lag. Therefore, it's best to leave these values for the service provider to decide and concentrate instead on Optimizing Network Usage.
Usually, the send queue is needed only for temporary spikes in network traffic. However, if a player continues to send messages faster than the target can receive them, the send queue will continue to grow. If no precautions are taken, outgoing messages can take several seconds, possibly minutes, to work their way through all the queues, which effectively ends the game.
One easy way to combat this is to place a timeout value on outgoing messages. You can give critical messages a higher timeout value and a different priority. In extreme circumstances, you can still run into a problem where messages consistently time out before reaching the target. For the most flexibility, you should also monitor the send queue and adjust the rate of outgoing data accordingly.
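As a rough sketch (pClient, pData, and dwDataSize stand in for the application's own client interface and message buffer), a non-critical message could be handed to IDirectPlay8Client::Send with a short timeout and low priority, while critical traffic would use a longer timeout and DPNSEND_PRIORITY_HIGH:

#include <windows.h>
#include <dpnet.h>

// Sketch: send a non-critical update with a 250 ms timeout and low priority,
// so a stale message is dropped rather than left to age in the send queue.
HRESULT SendUpdate( IDirectPlay8Client* pClient, BYTE* pData, DWORD dwDataSize )
{
    DPN_BUFFER_DESC bufferDesc;
    bufferDesc.pBufferData  = pData;
    bufferDesc.dwBufferSize = dwDataSize;

    DPNHANDLE hAsyncHandle;

    // The third parameter is the timeout in milliseconds; if the message is
    // still waiting in the send queue when it expires, DirectPlay cancels it.
    return pClient->Send( &bufferDesc, 1, 250, NULL, &hAsyncHandle,
                          DPNSEND_PRIORITY_LOW );
}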
This sample takes the simple approach of blocking a portion of outgoing data, based on the current send queue size. Because the application is responsible for blocking the data, it would be possible to store a running total of blocked data and send an averaged block of data the next time space allows. That way, critical data is never lost and minor update data can be screened or combined to ease the output rate.
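A minimal sketch of such a check (dwMaxQueueBytes is a hypothetical, application-chosen threshold, comparable to the Max Queue Size slider in this sample) could call IDirectPlay8Client::GetSendQueueInfo before each non-critical send:

#include <windows.h>
#include <dpnet.h>

// Sketch: decide whether a non-critical message should be sent now or
// blocked, based on how many bytes are already waiting in the send queue.
BOOL ShouldSendNow( IDirectPlay8Client* pClient, DWORD dwMaxQueueBytes )
{
    DWORD dwNumMsgs  = 0;
    DWORD dwNumBytes = 0;

    // Query the current depth of the local send queue.
    if( FAILED( pClient->GetSendQueueInfo( &dwNumMsgs, &dwNumBytes, 0 ) ) )
        return TRUE;    // if the query fails, fall back to sending normally

    // Block (or coalesce for a later send) once the queue passes the threshold.
    return ( dwNumBytes < dwMaxQueueBytes );
}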