Valid for: Oracle and Open Target (as indicated per feature)
You can improve the speed of Post when it is processing mostly small transactions, such as those typically found in OLTP workloads. There are two features you can use, depending on the target database:
Together these features are called Post Enhanced Performance, or PEP.
Valid for Oracle, SQL Server, and PostgreSQL targets
The Transaction Concurrency feature configures a Post process to apply transactions in parallel to increase overall throughput. To use this feature, supplemental logging for primary and unique keys must be enabled on the source.
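The following is a minimal sketch of how primary-key and unique-key supplemental logging is typically checked and enabled on an Oracle source. These are standard Oracle SQL statements run as a privileged user; confirm the exact requirements for your source database before applying them.

-- Check whether primary key and unique key supplemental logging is already enabled
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

-- Enable supplemental logging for primary key and unique key columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;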
To enable Transaction Concurrency
Note: The use of Transaction Concurrency may reduce or eliminate the need to run multiple Post processes, but you can still benefit from that configuration because it eliminates a single point of failure. If one Post process fails, the other Post processes can continue, which reduces recovery time after the problem is resolved. The Transaction Concurrency feature can be used in a multi-Post configuration, as long as the rules for using multiple Post processes are followed (such as including all tables with referential integrity in the same process stream). For more information, see Configure named post queues.
Valid for Oracle and all Open Target databases
The Commit Reduction feature of Post combines batches of small transactions into larger ones. One large transaction runs faster than multiple smaller ones because there are fewer commits and acknowledgments to process.
Post skips the commits of small transactions until their combined size reaches the threshold specified by one of the following parameters:
The default batch transaction size is 100 messages. This value is an approximation: if the last transaction in the batch causes the specified threshold to be exceeded, SharePlex waits for that transaction's remaining messages and its commit before applying the batch transaction to the target.
Commit reduction is enabled by default. To disable commit reduction, set the batch threshold parameter to a value of 1.
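As a rough illustration only, the batch threshold is set with the set param command in sp_ctrl. The parameter name below is a placeholder, because the actual name depends on whether the target is Oracle or Open Target:

set param <batch_transaction_size_parameter> 500

Setting the same parameter to 1 disables commit reduction, as described above.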
Valid for: Teradata (currently the only supported target for this feature)
You can configure Post to split a large transaction into a series of smaller ones. This option can work around resource limits that affect large transactions, such as the number of row locks permitted per transaction.
To split a large transaction into smaller ones
Use the target command to set the commit_frequency parameter.
target r.database [queue queuename] set resources commit_frequency=number_of_operations
This parameter specifies the maximum number of operations that Post processes before it issues a commit. It can be any integer greater than 1.
Example:
target r.mydb queue q1 set resources commit_frequency=10000
You can tune the performance of Post by tuning the post queue.
Use the SP_IMP_QUEUE_RESUME parameter to set the number of messages at which Import resumes writing to the post queue. This parameter works in conjunction with SP_IMP_QUEUE_PAUSE. When the number of messages in the post queue is lower than or equal to the value set with this parameter, Import resumes writing to the post queue.
To use this feature, both SP_IMP_QUEUE_PAUSE and SP_IMP_QUEUE_RESUME must be greater than zero, and SP_IMP_QUEUE_PAUSE must be greater than SP_IMP_QUEUE_RESUME.
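For example, using illustrative values only (appropriate thresholds depend on your transaction volume and queue capacity), you could set the pause threshold above the resume threshold with the set param command in sp_ctrl:

set param SP_IMP_QUEUE_PAUSE 90000
set param SP_IMP_QUEUE_RESUME 60000

With these values, Import would stop writing when the post queue reaches 90,000 messages and resume once the queue drains to 60,000 messages or fewer.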
You can improve Post queue performance by enabling subqueue indexing, which speeds access to the subqueue structures that represent transaction sessions. The message "Subqueue index enabled queuename" is written to the Event Log for every post queue for which this parameter is enabled.
To enable this feature, set the SP_QUE_USE_SUBQUE_INDEX parameter to 1. This feature does not support VARRAYs: if you are replicating VARRAYs and this parameter is enabled, the parameter is ignored.
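A minimal example of enabling subqueue indexing with the set param command in sp_ctrl:

set param SP_QUE_USE_SUBQUE_INDEX 1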
By default, hash-based horizontally partitioned replication uses a hash algorithm based on the rowid. You may be able to improve the processing of tables that use this type of replication by switching to a hash algorithm based on the block where the row resides.
Because changing the algorithm has the same effect as a routing change (the potential to switch partitions), you must reactivate the configuration file. The activation locks the tables that are affected by this change so that the hash change is applied when there are no open transactions. The locking eliminates the potential for out-of-sync conditions by preventing data that is processed under the new hashing algorithm from being posted before in-flight data that was processed under the old algorithm.
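As a sketch, the reactivation uses the standard activate config command in sp_ctrl; the configuration file name shown here is hypothetical:

activate config myconfig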
To switch to a block-based hash