LANSA for i

Triggers - Some Do's and Don'ts

Some Do's

  • Do experiment with small test cases using triggers so that you are comfortable with what they are and how they work before attempting to implement a complex application involving triggers.
  • Do remember that when you change the type or length of a field in the data dictionary (one that has associated triggers), you should recompile:
      • All trigger functions associated with the field.
      • All I/O modules of files that contain the field as a real or virtual field.
      • All functions that make *DBOPTIMIZE references to file(s) containing the field.
    The list of objects to recompile is easily obtained by producing a full listing of the definition of the field.
  • Do remember that when you change the layout of a database file (one that has associated triggers), you should recompile:
      • The I/O module of the file.
      • All trigger functions associated with the file.
      • Any functions that make *DBOPTIMIZE references to the file.

The list of objects to recompile is easily obtained by producing a full listing of the definition of the file.

Some Don'ts

  • Do not do any I/O to the file with which the trigger is linked. Attempting such I/O, directly or indirectly, may cause a recursive call to the file's I/O module. Do not attempt to use *DBOPTIMIZE to circumvent this rule: such attempts will cause the file cursor of the active I/O module to become lost or corrupted.
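As an illustrative sketch of this rule (the file and field names ORDERS, CUSMST, #ORDNUM, #ORDVAL, #CUSNUM and #CUSNAM are hypothetical), a trigger function linked to file ORDERS must not read from ORDERS itself, but may read from other files:

```
* Trigger function linked to file ORDERS (names are illustrative)
*
* WRONG - I/O to the trigger's own file may recursively invoke
* the ORDERS I/O module:
*   FETCH FIELDS(#ORDVAL) FROM_FILE(ORDERS) WITH_KEY(#ORDNUM)
*
* OK - I/O to a different file is safe:
FETCH FIELDS(#CUSNAM) FROM_FILE(CUSMST) WITH_KEY(#CUSNUM)
```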
  • Do not use triggers on files that have more than 799 real and virtual fields (the 800th field position is reserved for the standard @@UPID field). 
  • Do not make triggers too expensive to execute. For example, an unconditioned trigger that executes after every read of a file, and that itself performs, say, 3 database accesses, will at least quadruple the time required to read the base file. Triggers are a very useful facility, but they are not magic: when you set up a trigger to do a lot of work, your throughput will be reduced accordingly. The use of triggers, and the estimation of the impact they exert on application throughput, is entirely your responsibility as an application designer.
  • Do not introduce dependencies between triggers. For example, suppose trigger A (before update) sets a value in field X. Setting up trigger B (also before update) to run after trigger A, relying on the "knowledge" that trigger A has already executed (and thus set field X), is not a good idea. This is "interdependence" between triggers and it is not a good way to use them. In this case the logic of trigger B should be inserted directly into trigger A, immediately after the point where it sets the value into field X.
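As a sketch of the recommended approach (the field names #X and #Y and the value NEWVAL are hypothetical), the logic that might have gone into a dependent trigger B is placed inline in trigger A, immediately after it sets field X:

```
* Trigger A (before update) - names are illustrative
CHANGE FIELD(#X) TO(NEWVAL)
* Logic that might otherwise have been placed in a dependent
* trigger B goes directly here, after field X has been set:
IF COND('#X *EQ NEWVAL')
CHANGE FIELD(#Y) TO(#X)
ENDIF
```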
  • Do not use ABORT when a user exit is called from a trigger function. When ABORT is issued in the trigger function itself, the I/O module is able to intercept the ABORT and pass a trigger error status back to the calling function. However, when the ABORT is issued in a (user exit) function called by the trigger, the ABORT is interpreted in the standard way, because that function is not aware that the call came from a trigger. Using ABORT in these situations (e.g. validations) is therefore not recommended.
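One way to respect this rule (a sketch only; the status field #VALSTS and its value ER are hypothetical) is for the user exit to pass a status back to the trigger function, and for the trigger function itself to issue the ABORT, where the I/O module can intercept it:

```
* In the user exit function called by the trigger: do NOT ABORT;
* pass a status back in an exchanged field (names are illustrative)
CHANGE FIELD(#VALSTS) TO(ER)
EXCHANGE FIELDS(#VALSTS)
RETURN
*
* Back in the trigger function, where the I/O module can intercept
* an ABORT and convert it into a trigger error status:
IF COND('#VALSTS *EQ ER')
ABORT MSGTXT('Validation failed in user exit')
ENDIF
```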
  • It is very strongly recommended that you do not design triggers in such a way that "normal" RDML functions doing I/O operations are "aware" of their existence and attempt to communicate with them directly in any way (e.g. via the *LDA, data areas, etc).

Where trigger "requests" are to be supported, introduce a virtual (or real) field into the file definition and use it to "fire" the trigger in the normal way.
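For example (a sketch only; the request field #TRGREQ, its value RECALC, and file ORDERS are hypothetical), the calling function sets the request field and performs a normal I/O operation, and a trigger conditioned on that field does the work:

```
* Calling function: request the action through a field on the file,
* not through the *LDA or a data area (names are illustrative)
CHANGE FIELD(#TRGREQ) TO(RECALC)
UPDATE FIELDS(#TRGREQ) IN_FILE(ORDERS)
*
* Trigger function (before update), conditioned on the request field:
IF COND('#TRGREQ *EQ RECALC')
* ... perform the requested processing here ...
ENDIF
```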