Valid for: Oracle targets
Large transactions that are applied by application patches or other internal Oracle operations can be omitted from replication if they are not relevant to the data that user applications need. Such operations can translate into thousands or millions of individual UPDATE or DELETE statements that SharePlex must process and Post must apply. They can adversely affect Post performance and increase the latency between the source data and the target data that user applications rely on to perform their work. There may also be other reasons to prevent certain DML operations from being posted to a target database.
There are two ways you can handle such transactions:
- Assuming there are no referential relationships between those operations and the user data, configure those operations to process through a dedicated named post queue. For more information, see Configure named post queues.
- Configure Post to skip the operations, and then apply the SQL statements directly through Oracle. See the following instructions.
To skip maintenance DML
- On the source system, run the create_ignore.sql script from the util sub-directory in the SharePlex product directory. This script creates the SHAREPLEX_IGNORE_TRANS public procedure in the database. When executed at the start of the transaction, the procedure directs the Capture process to ignore the DML operations that occur until the transaction is committed or rolled back. Thus, the affected operations are not replicated. For more information about the script, its limitations, and how to run it, see create_ignore.sql in the SharePlex Reference Guide.
- Edit your patch script to call SHAREPLEX_IGNORE_TRANS before the UPDATE or DELETE operations, so that SharePlex ignores the transaction and does not send it to the target (see the example after the note below). Because the transaction is not replicated, the patch script must also be run on the target database to bring it back into sync.
Note: Only DML operations are affected by the SHAREPLEX_IGNORE_TRANS procedure. It does not cause SharePlex to skip DDL operations, including TRUNCATE. Because DDL operations are implicitly committed by Oracle, a DDL operation ends the transaction being ignored and, with it, the effect of the procedure.
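The following is a minimal sketch of how a patch script might invoke the procedure from SQL*Plus on the source. The schema name splex, the table names, and the assumption that the procedure takes no arguments are illustrative only; confirm the schema, invocation, and limitations in create_ignore.sql in the SharePlex Reference Guide.
-- Hypothetical patch-script excerpt (schema and table names are examples only).
-- Call the procedure first so that Capture ignores the DML that follows,
-- up to the commit that ends this transaction.
EXECUTE splex.shareplex_ignore_trans
UPDATE app.orders SET status = 'ARCHIVED' WHERE order_date < SYSDATE - 365;
DELETE FROM app.order_audit WHERE audit_date < SYSDATE - 365;
COMMIT;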
Valid for: Oracle and Open Target (as indicated per feature)
You can improve the speed of Post when it is processing mostly small transactions, such as those typical of OLTP workloads. There are two features you can use, depending on the type of target database:
- Increase the level of concurrency
- Reduce the number of commits
Together these features are called Post Enhanced Performance, or PEP.
Increase the level of concurrency
Valid for: Oracle, SQL Server, and PostgreSQL targets
The Transaction Concurrency feature configures a Post process to apply transactions in parallel to increase overall throughput. To use this feature, supplemental logging for primary and unique keys must be enabled on the source.
To enable Transaction Concurrency
- For an Oracle target database, set the SP_OPO_DEPENDENCY_CHECK parameter to 1.
- For SQL Server and PostgreSQL, set the SP_OPX_THREADS parameter to 2 or greater.
Note: The use of Transaction Concurrency may reduce or eliminate the need to run multiple Post processes, but you can still benefit from a multi-Post configuration because it eliminates a single point of failure: if one Post process fails, the other Post processes can continue, resulting in less recovery time after the problem is resolved. The Transaction Concurrency feature can be used in a multi-Post configuration as long as the rules for using multiple Post processes are followed (such as keeping all tables with referential integrity in the same process stream). For more information, see Configure named post queues.
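For example, you might enable the feature with the set param command in sp_ctrl, as sketched below. The thread count of 4 is only an illustration, and a restart of Post may be required before a parameter change takes effect; check each parameter's description in the SharePlex Reference Guide.
For an Oracle target:
set param SP_OPO_DEPENDENCY_CHECK 1
For a SQL Server or PostgreSQL target:
set param SP_OPX_THREADS 4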
Reduce the number of commits
Valid for: Oracle and all Open Target databases
The Commit Reduction feature of Post combines batches of small transactions into larger ones. One large transaction runs faster than multiple smaller ones by having fewer commits and acknowledgments to process.
Post skips the commits of small transactions until their combined size reaches the threshold specified by one of the following parameters:
- SP_OPO_COMMIT_REDUCE_MSGS (Oracle targets)
- SP_OPX_COMMIT_REDUCE_MSGS (Open Target)
The default batch transaction size is 100 messages. This value is approximate: if the last transaction in the batch pushes the combined size past the specified threshold, SharePlex waits for that transaction's remaining messages and its commit before applying the batched transaction to the target.
Commit reduction is enabled by default. To disable it, set the appropriate parameter to a value of 1.
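As an illustration, the following sp_ctrl commands raise the batch threshold for an Oracle target and then disable commit reduction. The value 500 is arbitrary, and whether the change takes effect immediately or only after a Post restart should be confirmed in the SharePlex Reference Guide.
set param SP_OPO_COMMIT_REDUCE_MSGS 500
set param SP_OPO_COMMIT_REDUCE_MSGS 1
For an Open Target database, use the SP_OPX_COMMIT_REDUCE_MSGS parameter instead.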
Valid for: JMS targets (currently the only supported target for this feature)
You can configure Post to split a large transaction into a series of smaller ones. This option can work around resource limits that affect large transactions, such as the number of row locks permitted per transaction.
To split a large transaction into smaller ones
Use the target command to set the commit_frequency parameter.
target r.database [queue queuename] set resources commit_frequency=number_of_operations
This parameter specifies a maximum number of operations after which Post issues a commit. It can be any integer greater than 1.
Example:
target r.mydb queue q1 set resources commit_frequency=10000
You can also improve the performance of Post by tuning the post queue itself, as described in the following sections.
Reduce queue contention
You can use the SharePlex queue contention reduction feature to ensure that shared memory is not swapped to disk when the post queue is becoming full. This feature is enabled by the SP_IMP_QUEUE_PAUSE parameter.
This parameter pauses the writing of data to the post queue when that queue contains the specified number of messages. Post stores queue messages in shared memory until it issues a checkpoint, after which it releases the data from memory.
If the post queue runs out of shared memory, the read and write functions start incurring file I/O to free up the memory buffers. By pausing the writing of data to the queue, this parameter helps Post maintain its performance by avoiding the need for disk storage and the resulting I/O slowdown.
Use the SP_IMP_QUEUE_RESUME parameter to set the number of messages at which Import resumes writing to the post queue. This parameter works in conjunction with SP_IMP_QUEUE_PAUSE. When the number of messages in the post queue falls to or below the value set with this parameter, Import resumes writing to the post queue.
To use this feature, both SP_IMP_QUEUE_PAUSE and SP_IMP_QUEUE_RESUME must be greater than zero, and SP_IMP_QUEUE_PAUSE must be greater than SP_IMP_QUEUE_RESUME.
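For example, the following sp_ctrl commands pause Import when the post queue holds 90000 messages and resume it once the backlog drains to 60000. The values are illustrative only; choose thresholds that suit your queue size and workload, and confirm the parameters' behavior in the SharePlex Reference Guide.
set param SP_IMP_QUEUE_PAUSE 90000
set param SP_IMP_QUEUE_RESUME 60000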
Tune subqueue indexing
You can improve post queue performance by enabling subqueue indexing, which speeds access to the subqueue structures that represent a transaction session. A message "Subqueue index enabled queuename" is written to the Event Log for every post queue for which the feature is enabled.
To enable this feature, set the SP_QUE_USE_SUBQUE_INDEX parameter to 1. Note that subqueue indexing does not support VARRAYs: if you are replicating VARRAYs, the parameter is ignored even when it is enabled.
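For example, you might enable the feature with the set param command in sp_ctrl and then check the Event Log for the confirmation message. Whether a process restart is required for the change to take effect should be confirmed in the SharePlex Reference Guide.
set param SP_QUE_USE_SUBQUE_INDEX 1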