SharePlex 11.0 - SharePlex Administration Guide


Split a large transaction into smaller ones

Valid for: JMS targets (currently the only supported target for this option)

You can configure Post to split a large transaction into a series of smaller ones. This option can work around resource limits that affect large transactions, such as the number of row locks permitted per transaction.

To split a large transaction into smaller ones:

Use the target command to set the commit_frequency parameter.

target r.database [queue queuename] set resources commit_frequency=number_of_operations

This parameter specifies a maximum number of operations after which Post issues a commit. It can be any integer greater than 1.

Example:

target r.mydb queue q1 set resources commit_frequency=10000
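
To confirm the setting afterward, you can use the show option of the target command, as in the sketch below. This is illustrative only: r.mydb and q1 are the example names used above, and the output format may vary by SharePlex version.

target r.mydb queue q1 show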

Tune queue performance

You can improve the performance of Post by tuning the post queue.

Reduce queue contention

You can use the SharePlex queue contention reduction feature to ensure that shared memory is not swapped to disk when the post queue is becoming full. This feature is enabled by the SP_IMP_QUEUE_PAUSE parameter, which pauses the writing of data to the post queue when that queue contains the specified number of messages.

Post stores queue messages in shared memory until it issues a checkpoint, after which it releases the data from memory. If the post queue runs out of shared memory, the read and write functions start incurring file I/O to free up the memory buffers. By pausing queue writing, SP_IMP_QUEUE_PAUSE helps Post maintain its performance by avoiding the need for disk storage and the resulting slowdown in I/O.

Use the SP_IMP_QUEUE_RESUME parameter to set the number of messages at which Import resumes writing to the post queue. This parameter works in conjunction with SP_IMP_QUEUE_PAUSE: when the number of messages in the post queue is lower than or equal to the value of SP_IMP_QUEUE_RESUME, Import resumes writing to the post queue.

To use this feature, both SP_IMP_QUEUE_PAUSE and SP_IMP_QUEUE_RESUME must be greater than zero, and SP_IMP_QUEUE_PAUSE must be greater than SP_IMP_QUEUE_RESUME.
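
The following sp_ctrl sketch shows one way these two parameters might be set with the set param command. The values are illustrative assumptions only, chosen to satisfy the rule above; appropriate values depend on your queue sizing.

set param SP_IMP_QUEUE_PAUSE 90000
set param SP_IMP_QUEUE_RESUME 60000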

Tune subqueue indexing

You can improve post queue performance by enabling subqueue indexing, which allows Post to access the subqueue structures that represent a transaction session. The message "Subqueue index enabled queuename" is written to the Event Log for every post queue for which this parameter is enabled.

To enable this feature, set the SP_QUE_USE_SUBQUE_INDEX parameter to 1. This parameter does not support VARRAYs: if you are replicating VARRAYs while this parameter is enabled, the parameter is ignored.
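
For example, the parameter might be enabled from sp_ctrl as shown below. This is a minimal sketch; check the parameter reference for your release to confirm when the setting takes effect and whether any processes must be restarted.

set param SP_QUE_USE_SUBQUE_INDEX 1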

Tune hash-based horizontally partitioned replication

Hash-based horizontally partitioned replication uses a hash algorithm that is based on the rowid by default. You may be able to improve the processing of tables that use hash-based horizontally partitioned replication by switching the hash algorithm to one that is based on the block where the row resides.

Because changing the algorithm has the same effect as a routing change (the potential to switch partitions), you must reactivate the configuration file. The activation locks the tables that are affected by this change so that the hash change is applied when there are no open transactions. The locking eliminates the potential for out-of-sync conditions by preventing data that is processed under the new hashing algorithm from being posted before in-flight data that was processed under the old algorithm.

To switch to a block-based hash:

1. Set the SP_OCF_HASH_BY_BLOCK parameter to 1.
2. Reactivate the configuration file.
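
A minimal sp_ctrl sketch of these two steps follows. The configuration file name myconfig is a placeholder; substitute the name of your own active configuration file.

set param SP_OCF_HASH_BY_BLOCK 1
activate config myconfig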

Recover replication after Oracle failover

This chapter contains instructions for recovering Oracle replication along with the database and applications during a failover in a high-availability environment. To support these procedures, SharePlex must be properly configured to support high availability. See Configure Replication to Maintain High Availability.
