11.26 Large Numbers - Approach with Caution

There are many examples of very complex transactions available on the IBM i. Some are even shipped with the operating system.

For example:

  • Signing on at a workstation.
  • Signing off from the system.
  • Starting a source edit session via STRSEU.
  • Exiting and updating an edit session via STRSEU.
  • Using the WRKACTJOB command.

These transactions exist and are used every day, and they do not usually have an adverse effect on overall system operations.

The main reason that they do not affect system operations is that they all have a fairly low FREQUENCY OF USE. This means that at any given instant there is a FAIRLY LOW PROBABILITY that anyone is ACTUALLY USING the transaction.

However, this is not always the case.

Some sites have suffered from the "sign-off syndrome" at around 5pm, when 150 - 200 users all attempt to sign off from the system within a 5 minute span. In these situations the users who are not signing off have been known to suffer seriously degraded response times.

The most significant thing to observe here is that if you are dealing with any application that has a MODERATE or HIGH frequency of use, then you need to be very, very careful that it does not overload the machine.

Sometimes some simple mathematics will show up an "impossible" application. Unfortunately the "impossibility" of the application is sometimes not discovered until after it has been created and put into production.

Imagine a high volume order entry transaction that typically does 100 database accesses to validate and store order transactions.

The 100 database accesses is not an unreasonably high figure given the functionality of some order entry systems.

It is to be used continuously by 200 users who typically take 20 seconds to key in a transaction received over the phone.

This application has an extremely high probability of use.

Under peak load, every 20 seconds each user will be "requesting" 100 database accesses. That averages out to 5 database accesses per second per user.

Now multiply that by 200 users and you have a "requirement" for 1000 database accesses per second.

It should also be noted that these 1000 database accesses are "logical" accesses. In fact, what is counted as 1 "logical" access via a keyed access path often results in many more "physical" disk drive I/Os as database file indexes are traversed and/or updated (e.g. how many "physical" I/Os are required to do one "logical" write to a file that has 20 logical views? The answer is almost impossible to predict on a busy machine, but it will probably be a LOT more than 20).

Add to that normal machine virtual memory paging, work and job management for 200 users, and so on, and the 1000 "logical" accesses per second probably becomes an overall application requirement for 5,000 to 10,000 actual disk accesses PER SECOND for this order entry system under peak load. (Note: the values 5,000 to 10,000 are used to illustrate a point - the actual figures on a busy machine would be virtually impossible to predict because of the hundreds of factors that can affect them - but they would certainly be much, much larger than a simple projection of the "logical" I/O counts).
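To make the arithmetic explicit, here is a minimal sketch in Python of the projection above. The figures are the illustrative ones from the example, and the physical I/O multipliers of 5 and 10 are assumptions used only to show the order of magnitude, not values that could be predicted for a real machine.

    # Rough peak-load projection for the order entry example above.
    users = 200              # operators keying orders at peak
    seconds_per_txn = 20     # keying time per order
    logical_per_txn = 100    # database accesses per order

    logical_per_sec = users * logical_per_txn / seconds_per_txn
    print(logical_per_sec)   # 1000.0 "logical" accesses per second

    # Each logical access fans out into an unpredictable number of physical
    # disk I/Os (index maintenance, paging, job management, and so on).
    # The multipliers below are purely illustrative.
    for multiplier in (5, 10):
        print(logical_per_sec * multiplier)   # 5000.0 and 10000.0 per second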

The overall problem here is that the transaction is far too complex for its probability of use.

The main thing to note in avoiding a situation like this is that the FREQUENCY OF USE of a transaction, or even of a whole system, is the most significant constraint on the level of functionality that it can provide.

If the FREQUENCY OF USE is moderate or high, extreme caution is required to ensure that transactions are not "over" functioned.

If the high level of functionality is a business requirement, then a larger computer may be part of the cost of providing the new system.

Similar thought should be given to batch transaction processing where very LARGE VOLUMES of information are usually processed.

For instance, a very complex batch transaction may be developed and tested on data sets of 10,000 records. The average elapsed time of the test runs is 30 minutes.

This is not a problem and the batch transaction goes into production. After a period of time in production it is processing runs of 1,000,000 records and producing extremely long elapsed run times.

The reason is again shown by simple mathematics. The production runs involve a data set 100 times larger than the test runs. Multiply 30 minutes by 100 and you have 50 hours.

While this appears a very simple problem, and one that is very easy to anticipate, you would be surprised at how often the simple equation 30 mins x 100 = 50 hours is overlooked during application design and development.
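The projection is simple enough to do on the back of an envelope, but as a minimal sketch in Python, using the figures from the example:

    # Scale the tested elapsed time up to the production data volume.
    test_records = 10_000
    test_minutes = 30
    production_records = 1_000_000

    scale = production_records / test_records   # 100 times more data
    projected_minutes = test_minutes * scale    # 3000 minutes
    print(projected_minutes / 60)               # 50.0 hours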

Another situation that is sometimes overlooked is the effect of a program change. Imagine a report that is produced by sequentially reading and summarizing 1,000,000 records.

It has a run time of 2 hours.

The user wants another piece of information on the report. This requires an access to another file for every record read. Result: the run time will increase to 4, or possibly even 5 or 6 hours.
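A minimal sketch in Python of the same projection. The assumption that the extra file access costs between 1 and 2 times the existing per-record work is purely illustrative; on a busy machine the real cost could be higher still.

    # Adding one access to another file for every record read increases the
    # per-record cost of a 1,000,000 record sequential report.
    records = 1_000_000
    base_hours = 2.0                      # current elapsed run time
    cost_per_record = base_hours / records

    # Assume the extra access costs 1x to 2x the existing per-record work -
    # illustrative figures only.
    for extra_factor in (1.0, 2.0):
        new_hours = base_hours + records * cost_per_record * extra_factor
        print(new_hours)                  # 4.0 and 6.0 hours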

The purpose of this section and these examples is simply to demonstrate two points:

  • If you are dealing with complex and/or heavily used online transactions, or with complex batch transactions processing many records, then caution is required.
  • The amount of functionality that an application can provide is ultimately limited by its probability of use, the volume of data that it processes and, most significantly, by the power of the computer that it is to be used on.