This property sets an upper limit on the number of concurrent worker threads to employ for compression. The implementation of this stream employs multiple threads from the .NET thread pool, via ThreadPool.QueueUserWorkItem(), to compress the incoming data by block. As each block of data is compressed, this stream re-orders the compressed blocks and writes them to the output stream.
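As a very rough sketch of that pattern (illustration only, not DotNetZip's actual code; QueueBlock and Compress are hypothetical placeholder names), the per-block hand-off to the thread pool might look like this:

    using System;
    using System.Threading;

    class BlockCompressionSketch
    {
        // Sketch only: hand one block to the thread pool; the caller collects
        // results keyed by ordinal and writes them out in order.
        static void QueueBlock(byte[] block, int ordinal, Action<int, byte[]> onCompressed)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                byte[] compressed = Compress(block);   // placeholder for the real compressor
                onCompressed(ordinal, compressed);     // re-ordering happens in the caller
            });
        }

        static byte[] Compress(byte[] block) => block; // stand-in; real code would compress the block
    }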
A higher number of workers enables a higher degree of parallelism, which tends to increase the speed of compression on multi-CPU computers. On the other hand, a higher number of workers also implies larger memory consumption, more active worker threads, and higher CPU utilization for any compression. This property enables the application to limit its memory consumption and CPU utilization according to its requirements.
By default, DotNetZip allocates 4 workers per CPU core, subject to the upper limit specified in this property. For example, suppose the application sets this property to 16. On a machine with 2 cores, DotNetZip will use 4 * 2 = 8 workers, since 8 does not exceed the limit. On a machine with 4 cores, it will use 16 workers; the limit is still not exceeded. On a machine with 8 cores, the uncapped figure would be 32, so DotNetZip uses only 16 workers, because of the limit.
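As a minimal sketch of that sizing rule (the names EffectiveWorkerCount and maxWorkers are placeholders, not part of the DotNetZip API; Environment.ProcessorCount is standard .NET):

    using System;

    static class WorkerSizingSketch
    {
        // Sketch of the default rule described above: 4 workers per core,
        // capped by the value of this property.
        static int EffectiveWorkerCount(int maxWorkers)
        {
            int uncapped = 4 * Environment.ProcessorCount; // e.g. 8 on 2 cores, 32 on 8 cores
            return Math.Min(uncapped, maxWorkers);         // e.g. Math.Min(32, 16) == 16
        }
    }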
For each compression "worker thread" that runs in parallel, up to 2 MB of memory is allocated for buffering and processing. The actual amount depends on the BlockSize property.
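A rough worst-case estimate of the buffering memory follows from that figure (a sketch under the stated 2 MB upper bound; the actual amount depends on BlockSize):

    // Hypothetical worst-case: number of workers times 2 MB each.
    int workers = 16;
    long upperBoundBytes = workers * 2L * 1024 * 1024;   // 32 MB of buffering for 16 workers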
CPU utilization will also go up with additional workers, because a larger number of workers allows a larger number of background threads to compress in parallel. If you find that parallel compression is consuming too much memory or CPU, you can adjust this value downward.
The default value is 16. Different values may deliver better or worse results, depending on your priorities and the dynamic performance characteristics of your storage and compute resources.
The application can set this value at any time, but it is effective only before the first call to Write(), which is when the buffers are allocated.
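For instance, assuming this section describes the MaxWorkers property on DotNetZip's Ionic.BZip2.ParallelBZip2OutputStream (an assumption, since the class and property are not named here), an application would set the value before writing any data:

    using System.IO;
    using Ionic.BZip2;

    // Assumption: the type and property names below reflect how this setting is
    // exposed; substitute the actual parallel stream type your application uses.
    using (Stream raw = File.Create("archive.bz2"))
    using (var compressor = new ParallelBZip2OutputStream(raw))
    {
        compressor.MaxWorkers = 8;               // effective only before the first Write()
        byte[] data = File.ReadAllBytes("input.dat");
        compressor.Write(data, 0, data.Length);  // workers and buffers are allocated here
    }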