Transform Data Task Properties (Options Tab)
Use this tab to send rows to an exception file, determine the format of that file, and set the error count at which task execution should cease. The exception files are stored on a local or mapped drive. You can also specify items such as fast load, table locking, and constraint checking as data is moved.
Options
Exception file properties area
- Name
- Specify the path and name of the file to which exception records will be written. If the file does not exist at package run time, it is created. No default extension is assigned to the file.
- Browse
- Display the Select File dialog box, where you can search the local computer or mapped drives for an existing exception file. If an existing exception file is used, the status and error information for the package is appended to it. A programmatic sketch of these properties follows this area.
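In the DTS object model, the exception file is exposed on the DataPumpTask2 object. The following Python sketch uses the pywin32 COM bindings to set it; the file path is illustrative, and the snippet assumes SQL Server 2000 DTS is installed on the machine.

```python
# Minimal sketch: create a package and a Transform Data task, then point
# its exception file at a path. Assumes SQL Server 2000 DTS and pywin32.
import win32com.client

pkg = win32com.client.Dispatch("DTS.Package")   # new, empty DTS package
task = pkg.Tasks.New("DTSDataPumpTask")         # Transform Data task shell
pump = task.CustomTask                          # DataPumpTask2 interface

# Equivalent of the Name field on the Options tab. If the file does not
# exist at run time, DTS creates it; no extension is added automatically.
pump.ExceptionFileName = r"C:\dts\logs\customer_load.log"
```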
File type properties area
- 7.0 format
- Specify that the exception file be saved in 7.0 format. This backward-compatible format is useful if you have an exception file parser that was written for Microsoft® SQL Server™ version 7.0 exception files.
- Error text
- Specify that any errors encountered during task execution be recorded. Information such as the package name and the execution start and completion times is entered in the exception log.
- Source error rows
- Specify that a separate exception file be created to contain all the rows from the source data that were not written to the destination. The file is formatted according to the File format properties. Its name is the name specified in the Name field, with the extension .Source appended.
- Dest error rows
- Specify that a separate exception file be created to contain the rows that were rejected while being written to the destination. Its name is the name specified in the Name field, with the extension .Dest appended. A programmatic sketch of these options follows this area.
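These four check boxes appear to map to the ExceptionFileOptions bitmask on DataPumpTask2. The flag names below come from the DTSExceptionFileOptions enumeration in the DTS type library; the numeric values are assumptions and should be verified against your installation (for example, with makepy) before use.

```python
# Sketch: request an error log plus a .Source file of rejected source rows.
# Flag values are assumed; check DTSExceptionFileOptions in your type library.
import win32com.client

DTSExceptionFile_SingleFile70  = 1   # 7.0 format (assumed value)
DTSExceptionFile_ErrorFile     = 2   # Error text (assumed value)
DTSExceptionFile_SourceRowFile = 4   # Source error rows (assumed value)
DTSExceptionFile_DestRowFile   = 8   # Dest error rows (assumed value)

pkg = win32com.client.Dispatch("DTS.Package")
pump = pkg.Tasks.New("DTSDataPumpTask").CustomTask

pump.ExceptionFileName = r"C:\dts\logs\customer_load.log"
pump.ExceptionFileOptions = (DTSExceptionFile_ErrorFile |
                             DTSExceptionFile_SourceRowFile)
```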
File format properties area
- Row delimiter
- Select the delimiter used to separate rows of data in the exception file. A carriage return/line feed {CR}{LF} is used by default.
- Column delimiter
- Select the delimiter used to separate the columns of data in the exception file. A vertical bar is used by default.
- Text qualifier
- Specify the character used to qualify text in the delimited exception file. Choose from Double Quote {"}, Single Quote {'}, or <none>, or type another character to use as the text qualifier. A programmatic sketch of these properties follows this area.
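In the DTS object model these settings appear as string properties of DataPumpTask2. The sketch below sets the documented defaults explicitly; the property names are from the DTS object library, and the rest is illustrative.

```python
# Sketch: set the exception file format to the documented defaults.
import win32com.client

pkg = win32com.client.Dispatch("DTS.Package")
pump = pkg.Tasks.New("DTSDataPumpTask").CustomTask

pump.ExceptionFileRowDelimiter = "\r\n"   # Row delimiter: {CR}{LF}, the default
pump.ExceptionFileColumnDelimiter = "|"   # Column delimiter: vertical bar, the default
pump.ExceptionFileTextQualifier = '"'     # Text qualifier: double quote
```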
Data movement properties area
- Max error count
- Set a limit on the number of errors allowed before processing is terminated for the task. When the SQL Server fast load option is selected, the error count includes both row-level errors detected by the Transform Data task and batch failures. When the Max error count value is exceeded, task execution is terminated. The default is zero, which means that the task terminates upon the first error.
- Fetch buffer size
- Set the number of rows fetched from the source at a time during data movement. Generally, you should not need to adjust this value unless doing so is necessary to optimize the characteristics of the data provider.
- First row
- Specify the first row of data to be moved. This is useful if the first row consists of column headings, or if the first part of the data source has already been copied. For example, you can set this value to the row number at which processing stopped in an earlier data pump operation.
- Last row
- Specify the last row of data to move. A programmatic sketch of the data movement properties follows this area.
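These settings correspond to numeric properties of DataPumpTask2 in the DTS object model. The values in the sketch below are illustrative.

```python
# Sketch: cap the error count and restrict the range of source rows moved.
import win32com.client

pkg = win32com.client.Dispatch("DTS.Package")
pump = pkg.Tasks.New("DTSDataPumpTask").CustomTask

pump.MaximumErrorCount = 10   # terminate once the error count exceeds 10
                              # (default 0: terminate on the first error)
pump.FetchBufferSize = 100    # rows fetched from the source per round trip
pump.FirstRow = 2             # skip a column-heading row
pump.LastRow = 10000          # stop after source row 10,000
```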
SQL Server properties area
- Use fast load
- Specify that you want to use high-speed bulk-copy processing. The fast load option can be used only when the destination connection is the Microsoft OLE DB Provider for SQL Server. When you enable this option, the data pump can accept batches of transformed data. Batch sizes are controlled through the Insert batch size option in this area.
- Keep NULL values
- Specify that you want to keep the NULL value in the destination column, even if the destination table was created with a default value designated for the column. This option is available only if you enable Use fast load.
- Check constraints
- Specify whether constraints on the destination table are checked during the load. By default, constraints are ignored, which improves performance but allows data that violates existing constraints to be inserted into the table. This option is available only if you enable Use fast load.
- Table lock
- Specify how the table is locked during the load. When Table lock is not selected, the table is locked using row-level locks; when it is selected, a table-level lock is used. This option is available only if you enable Use fast load.
- Enable identity insert
- Allow explicit values to be inserted into the identity column of a table (SQL Server only). This option is available only if you enable Use fast load and an identity column is detected. An identity column is defined as having a data type of bigint, decimal, int, numeric, smallint, or tinyint, with the Identity property for the column set to Yes.
- Always commit final batch
- Select this option to commit all rows in the final batch that were processed successfully before an error occurred. When a transformation or insert error occurs during processing of the final batch, the rows in the batch prior to the error then do not have to be processed again. The setting is useful for large batch sizes. This option is available only if you enable Use fast load.
- Insert batch size
- Specify the number of rows in a batch. This option is available only if you enable Use fast load.
Values for Insert batch size work as follows:
If you set Insert batch size to 0, the data is loaded in one batch; the first row that fails causes the entire load to be canceled, and the step fails. This is the default setting.
If you set Insert batch size to 1, the data is loaded one row at a time. Each row that fails is counted as a batch failure, and the error count is incremented by one. Previously loaded rows are either committed or, if the step has joined the package transaction, retained in the transaction, subject to a later commit or rollback.
If you set Insert batch size to a value greater than 1, the data is loaded one batch at a time. The first row that fails in a batch fails the entire batch; loading stops, and the step fails. Rows in previously loaded batches are either committed or, if the step has joined the package transaction, retained in the transaction, subject to a later commit or rollback. A programmatic sketch of the fast-load settings follows.
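The fast-load settings map to the UseFastLoad, FastLoadOptions, AllowIdentityInserts, and InsertCommitSize properties of DataPumpTask2. The flag names below follow the DTSFastLoadOptions enumeration in the DTS object library; the numeric values are assumptions worth verifying against your installation, and the batch size shown is illustrative.

```python
# Sketch: enable fast load, keep NULLs, take a table lock, allow identity
# inserts, and commit every 1,000 rows. Flag values are assumed; check
# DTSFastLoadOptions in your type library.
import win32com.client

DTSFastLoad_NoOptions        = 0   # (assumed value)
DTSFastLoad_KeepNulls        = 1   # Keep NULL values (assumed value)
DTSFastLoad_CheckConstraints = 2   # Check constraints, the default (assumed value)
DTSFastLoad_TableLock        = 4   # Table lock (assumed value)

pkg = win32com.client.Dispatch("DTS.Package")
pump = pkg.Tasks.New("DTSDataPumpTask").CustomTask

pump.UseFastLoad = True
pump.FastLoadOptions = DTSFastLoad_KeepNulls | DTSFastLoad_TableLock
pump.AllowIdentityInserts = True   # Enable identity insert
pump.InsertCommitSize = 1000       # Insert batch size: commit every 1,000 rows
```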