SharePlex 9.4 - Administration Guide

Make small transactions faster

Valid for: Oracle and Open Target (as indicated per feature)

You can improve the speed of Post when it is processing mostly small transactions, such as those common in OLTP environments. There are two features you can use, depending on the target database:

  • Increase the level of concurrency
  • Reduce the number of commits

Together these features are called Post Enhanced Performance, or PEP.

Increase the level of concurrency

Valid for Oracle, SQL Server, and PostgreSQL targets

The Transaction Concurrency feature configures a Post process to apply transactions in parallel to increase overall throughput. To use this feature, supplemental logging for primary and unique keys must be enabled on the source.

To enable Transaction Concurrency

  • For an Oracle target database, set the SP_OPO_DEPENDENCY_CHECK parameter to 1.
  • For SQL Server and PostgreSQL, set the SP_OPX_THREADS parameter to 2 or greater.
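
For example, assuming the standard set param syntax in sp_ctrl, a minimal sketch looks like the following; the thread count of 4 is an illustrative value, not a recommendation.

On an Oracle target:

set param SP_OPO_DEPENDENCY_CHECK 1

On a SQL Server or PostgreSQL target:

set param SP_OPX_THREADS 4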

Note: The use of Transaction Concurrency may reduce or eliminate the need to run multiple Post processes, but a multi-Post configuration can still be beneficial because it eliminates a single point of failure: if one Post process fails, the others can continue, which reduces recovery time after the problem is resolved. Transaction Concurrency can be used in a multi-Post configuration as long as the rules for using multiple Post processes are followed (such as keeping all tables that have referential integrity to one another in the same process stream). For more information, see Configure named post queues.
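
As a sketch of a multi-Post layout, a configuration file can route tables that have referential integrity to one another through the same named post queue. The object and queue names below are hypothetical:

datasource:o.ORASRC
scott.orders        scott.orders        tgthost:postq1@o.ORATGT
scott.order_items   scott.order_items   tgthost:postq1@o.ORATGT
hr.employees        hr.employees        tgthost:postq2@o.ORATGT

Here scott.orders and scott.order_items (assumed to be linked by a foreign key) share postq1, while the unrelated hr.employees table is routed through postq2.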

Reduce the number of commits

Valid for Oracle and all Open Target databases

The Commit Reduction feature of Post combines batches of small transactions into larger ones. One large transaction runs faster than multiple smaller ones by having fewer commits and acknowledgments to process.

Post skips the commits of small transactions until their combined size reaches the threshold specified by one of the following parameters:

  • SP_OPO_COMMIT_REDUCE_MSGS (Oracle targets)
  • SP_OPX_COMMIT_REDUCE_MSGS (Open Target)

The default batch transaction size is 100 messages. This value is an approximation: if the last transaction in the batch crosses the specified threshold, SharePlex waits for the remaining messages of that transaction, and its commit, before applying the batch transaction to the target.

Commit reduction is enabled by default. To disable it, set the appropriate parameter to a value of 1.
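
For example, assuming the standard set param syntax in sp_ctrl, the following minimal sketch raises the batch size on an Oracle target and disables commit reduction on an Open Target database; the value 500 is illustrative only.

set param SP_OPO_COMMIT_REDUCE_MSGS 500
set param SP_OPX_COMMIT_REDUCE_MSGS 1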

Split a large transaction into smaller ones

Valid for: Teradata targets

You can configure Post to split a large transaction into a series of smaller ones. This option can work around resource limits that affect large transactions, such as the number of row locks permitted per transaction.

To split a large transaction into smaller ones

Use the target command to set the commit_frequency parameter.

target r.database [queue queuename] set resources commit_frequency=number_of_operations

This parameter specifies a maximum number of operations after which Post issues a commit. It can be any integer greater than 1.

Example:

target r.mydb queue q1 set resources commit_frequency=10000
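
To confirm the setting afterward, you can display the current target settings; this assumes the show option of the target command is available in your SharePlex version:

target r.mydb queue q1 show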

Tune queue performance

You can improve the performance of Post by tuning the post queue.

Reduce queue contention

You can use the SharePlex queue contention reduction feature to ensure that shared memory is not swapped to disk when the post queue is becoming full. This feature is controlled by the SP_IMP_QUEUE_PAUSE parameter, which pauses the writing of data to the post queue when that queue contains the specified number of messages. Post stores queue messages in shared memory until it issues a checkpoint, after which it releases the data from memory.

If the post queue runs out of shared memory, the read and write functions start incurring file I/O to free up the memory buffers. By pausing the queue writing, this parameter helps Post maintain its performance by avoiding the need for disk storage and the resulting slowdown in I/O.

Use the SP_IMP_QUEUE_RESUME parameter to set the number of messages at which Import resumes writing to the post queue. This parameter works in conjunction with SP_IMP_QUEUE_PAUSE: when the number of messages in the post queue falls to or below the value set with this parameter, Import resumes writing to the post queue.

To use this feature, both SP_IMP_QUEUE_PAUSE and SP_IMP_QUEUE_RESUME must be greater than zero, and SP_IMP_QUEUE_PAUSE must be greater than SP_IMP_QUEUE_RESUME.
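
For example, assuming the standard set param syntax in sp_ctrl, the following minimal sketch pauses Import when the post queue reaches 90000 messages and resumes it at 80000; both counts are illustrative values only and must keep SP_IMP_QUEUE_PAUSE greater than SP_IMP_QUEUE_RESUME.

set param SP_IMP_QUEUE_PAUSE 90000
set param SP_IMP_QUEUE_RESUME 80000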

Tune subqueue indexing

You can improve post queue performance by enabling subqueue indexing, which speeds access to the subqueue structures that represent a transaction session. A message "Subqueue index enabled queuename" is written to the Event Log for every post queue for which this parameter is enabled.

To enable this feature, set the SP_QUE_USE_SUBQUE_INDEX parameter to 1. This parameter does not support VARRAYs. If you are replicating VARRAYs and this parameter is enabled, the parameter is ignored.
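
For example, assuming the standard set param syntax in sp_ctrl:

set param SP_QUE_USE_SUBQUE_INDEX 1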

Tune hash-based horizontally partitioned replication

Hash-based horizontally partitioned replication uses a hash algorithm that is based on the rowid by default. You may be able to improve the processing of tables that use hash-based horizontally partitioned replication by switching the hash algorithm to one that is based on the block where the row resides.

Because changing the algorithm has the same effect as a routing change (the potential to switch partitions), you must reactivate the configuration file. The activation locks the tables that are affected by this change so that the hash change is applied when there are no open transactions. The locking eliminates the potential for out-of-sync conditions by preventing data that is processed under the new hashing algorithm from being posted before in-flight data that was processed under the old algorithm.

To switch to a block-based hash

1. Set the SP_OCF_HASH_BY_BLOCK parameter to 1.
2. Reactivate the configuration file.
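
A minimal sp_ctrl sketch of the two steps, where the configuration file name myconfig is hypothetical and should be replaced with the name of your active configuration:

set param SP_OCF_HASH_BY_BLOCK 1
activate config myconfig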