
SharePlex 8.6.6 - Administration Guide


Configure Replication to maintain high availability


These instructions show you how to set up SharePlex for the purpose of high availability: replicating to a secondary Oracle database that is a mirror of the source database. This strategy uses bi-directional replication with two SharePlex configurations that are the reverse of each other. The configuration on the secondary (standby) machine remains in an activated state with the Export process on that system stopped in readiness for failover if the primary machine fails.

This strategy supports business requirements such as the following:

  • Disaster recovery
  • Continuous operation of business applications throughout maintenance cycles or mechanical failures

In this strategy, SharePlex operates as follows:

  • Under normal conditions, SharePlex replicates changes from the primary database to the secondary database.
  • When the primary system or database is offline and users are transferred to the secondary system, SharePlex captures their changes and queues the data on that system until the primary system is restored.
  • When the primary system is restored, SharePlex updates it with those changes and then resumes capture and replication from the primary database.

Supported targets

Oracle

Capabilities

This replication strategy supports the use of named export and post queues.

Note: Column mapping and partitioned replication are not appropriate in a high availability configuration. Source and target objects can have different names, but different names make the management of a high-availability structure more complicated.

Requirements

  • Prepare the system, install SharePlex, and configure database accounts according to the instructions in the SharePlex Installation Guide.
  • Prepare the environment for replication. See Prepare an Oracle environment for replication.
  • All objects must exist in their entirety on both systems.
  • The target objects must have the same structure and qualified names as their source objects.
  • Enable archive logging on all systems.

  • Create a script that denies INSERT, UPDATE, and DELETE operations to all users except SharePlex. (A hedged example follows this list.)
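
For example, the deny script might resemble the following hypothetical sketch. It assumes that application users receive DML privileges through a role named app_role and that the tables shown are the replicated objects; adapt the names and generate the full table list from your own schemas.

-- Hypothetical deny script (run on the secondary system while it is passive)
REVOKE INSERT, UPDATE, DELETE ON scott.emp FROM app_role;
REVOKE INSERT, UPDATE, DELETE ON scott.cust FROM app_role;
-- Repeat for every replicated table. The SharePlex database user keeps its
-- own privileges, so the Post process can continue to apply changes.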

For failover purposes, the following are required:

  • Make the applications used on the primary system available on the secondary system.
  • Copy non-replicated database objects and critical files outside the instance to the secondary system.
  • Create a script that grants INSERT, UPDATE, and DELETE privileges to all users, to be run during a failover procedure.
  • Create a script that enables constraints on the secondary system, to be run during a failover procedure. (Hedged sketches of both scripts follow this list.)
  • Develop a failover procedure for relocating users to the secondary system.
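
The grant and constraint scripts might look like the following hedged sketch, using the same hypothetical app_role and table names; the constraint name is also illustrative.

-- Hypothetical grant script (run during failover)
GRANT INSERT, UPDATE, DELETE ON scott.emp TO app_role;
GRANT INSERT, UPDATE, DELETE ON scott.cust TO app_role;

-- Hypothetical constraint script (run during failover)
ALTER TABLE scott.emp ENABLE CONSTRAINT emp_dept_fk;
-- Repeat for each constraint that is kept disabled on the secondary system.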

Note: If you use an Oracle hot backup to create the secondary instance, keep the script. It can be modified to re-create the primary instance.

Conventions used in the syntax

In the configuration syntax in this topic, the placeholders represent the following:

  • hostA is the primary system.

  • hostB is the secondary system.
  • ownerA.object is the fully qualified name of an object on hostA or a wildcarded specification.
  • ownerB.object is the fully qualified name of an object on hostB or a wildcarded specification.
  • oraA is the Oracle instance on hostA.
  • oraB is the Oracle instance on hostB.

Configuration

A high availability configuration uses two configurations that are the reverse of each other. To replicate all objects in the database, you can use the config.sql script to simplify the configuration process. See Build a configuration file using a script.

Configuration on the source system (primary system)

Datasource:o.oraA

   
ownerA.object ownerB.object hostB@o.oraB

Configuration on the target system (secondary system)

Datasource:o.oraB

   
ownerB.object ownerA.object hostA@o.oraA
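
As an illustration only, the following hypothetical pair of configuration files mirrors the scott schema between instances oraA on hostA and oraB on hostB, using a wildcarded specification. Verify the wildcard syntax for your SharePlex version before using it.

Configuration file on hostA:

Datasource:o.oraA
scott.% scott.% hostB@o.oraB

Configuration file on hostB (activated, with Export stopped):

Datasource:o.oraB
scott.% scott.% hostA@o.oraA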

Make the system ready for failover

  1. On the secondary system (initially the passive system), run sp_ctrl and issue the stop export command (shown in the sketch after these steps). Stopping the Export process on the secondary system ensures that nothing that happens accidentally on that system, such as a scheduled job changing data, is replicated back to the primary system. This is the required state of SharePlex on that system until there is a need for a role switch between systems.
  2. Perform initial synchronization and startup. You will activate the source configuration during this procedure. For more information, see Activate replication in your production environment.
  3. With the Export process still stopped on the secondary system, activate the configuration on that system. The configuration on the secondary machine remains in an activated state, but the stopped Export process and the lack of user activity keep the system static in readiness for failover.
  4. Monitor the SharePlex instance that is linked to the secondary Oracle instance to make sure that no non-SharePlex DDL or DML changes are performed. To do this, view the status of the export queue on the secondary system with the qstatus command in sp_ctrl. The queue should be empty, because the Capture process on a system ignores the changes applied by the Post process on that system. If there are any messages in the export queue, either those transactions originated on the secondary system or the SP_OCT_REPLICATE_POSTER parameter was mistakenly enabled. See the SharePlex Reference Guide for more information about SharePlex commands and parameters.
  5. Maintain backups of replication files.
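
The following sp_ctrl sketch summarizes steps 1 and 4 on the secondary system (the prompt text is illustrative):

sp_ctrl (hostB)> stop export
sp_ctrl (hostB)> qstatus

The stop export command holds any changes captured on the secondary system in its export queue, and qstatus should then report an empty export queue while the system remains passive.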

Perform recovery procedures

If a system fails in your high-availability environment, see Procedures to maintain Oracle high availability. These procedures help you to move replication to a secondary system and then move it back to the primary system when it is restored.

Configure partitioned replication

This chapter contains instructions for using the advanced SharePlex configuration options of horizontally partitioned and vertically partitioned replication. These options provide an additional level of flexibility to divide, parallelize, and filter data to meet specific requirements. Before proceeding, make certain you understand the concepts and processes in Create configuration files.


Configure horizontally partitioned replication


Use horizontally partitioned replication to divide the rows of a table into separate processing streams. You can use horizontally partitioned replication to:

  • Replicate a subset of rows to the target, while retaining the rest of the rows in the source.
  • Replicate different subsets of rows to different targets.
  • Divide the replication of a source table into parallel Post processes for faster posting to the target table.

Guidelines for using horizontally partitioned replication

The following guidelines apply if you are considering a configuration that combines horizontally partitioned replication with vertically partitioned or full-table replication.

  • You can combine horizontally partitioned and vertically partitioned replication for maximum control over which information is distributed, and to where.

    For example:

    A company has a headquarters and regional divisions. The headquarters maintains the corporate database, and each region maintains a regional database. The headquarters uses vertically partitioned replication to share some of the column data of a table to those locations, while retaining other sensitive data at headquarters. Row changes made to the shared columns are further partitioned horizontally, for replication to the appropriate regional database.
  • Horizontally partitioned replication can be used in conjunction with full-table replication for the same table, for example to route groups of rows to different reporting systems and all rows to a failover system. However, SharePlex does not support that combination if vertically partitioned replication is also used for the same table.

How it works

To implement horizontally partitioned replication, you do the following:

  1. Define partitions
  2. Define partition schemes
  3. Specify partition schemes in the configuration file

Define partitions

A partition is a subset of the rows in a table that you want to replicate. To create a partition, you specify a column condition or a hash value, depending on your objectives. The following explains these concepts.

Column condition: A column condition is based on a standard WHERE clause. Use one or more column conditions for the following purposes:

  • Use a column condition to replicate only a subset of the rows of a table. For example, you can replicate only those rows where the value of the YEAR column is greater than 2000.
  • Use multiple column conditions to partition the rows of a table so that each set of rows replicates to a different target. For example: replicate changes where the value of the REGION column is EAST to one location and rows where REGION is WEST to a different location.

  • Use multiple column conditions to divide the rows of a table into parallel processing streams (parallel Export-Import-Post streams) for faster posting to the target table. For example, you can improve the flow of replication to a heavily updated target table. Column conditions are appropriate for this purpose only if the table contains a column that enables you to split the processing evenly among the parallel Post processes.

Hash value: A hash partition (known as hash horizontal partitioning) directs SharePlex to create row partitions by using a hash algorithm. You can use a hash value to divide the rows of a table into parallel Post processing streams if it is not practical to use a column condition for this purpose. The advantage of using a hash partition rather than a column condition is that the rows are divided evenly and automatically, without the need for you to reference table columns in WHERE clauses.

Important: The following restrictions apply:

  • When hash partitioning is in use, the table cannot be compared or repaired with the compare and repair commands.
  • Hash horizontal partitioning cannot be enabled on an index-organized table (IOT) or tables that contain LOBs or LONGs.
  • Hash horizontal partitioning cannot be combined with horizontal partitioning that uses a column condition.
  • Hash horizontal partitioning is not supported for tables with operations that may cause rows to migrate. Examples of operations that cause rows to migrate are:

    • Export or import of the table
    • ALTER TABLE with the MOVE option
    • ALTER TABLE with the SHRINK SPACE option
    • FLASHBACK TABLE
    • Split a partition or combine two partitions
    • Update a value so that it moves to a new partition
    • Online table reorganization
    • Redefine a table by using dbms_redefinition
    • DML applied to a regular, non-partitioned table that can cause rows to shift: an UPDATE that changes the row size so that the data no longer fits into the current block, or a DELETE of a row followed by a re-insert.

Note: On a Windows system, the use of numerous queues may require the number of semaphores to be increased. If Post returns the error message "shs_SEMERR: an error occurred with the semaphore," see the Post stopped topic in Solve replication problems.

Define partition schemes

A partition scheme is a logical container for related row partitions and is the element that is used in the configuration file to direct SharePlex to use horizontal partitioning. For example, a simple partition scheme named CORPORATE.SALES could have four column conditions, each replicating the appropriate data subsets of the CORPORATE.SALES table to different regional sales offices. You must also specify a partition scheme if using the hash horizontal partitioning option, even though SharePlex creates the partitions.

Specify partition schemes in the configuration file

To direct SharePlex to use a specific partition scheme for a source table, you specify the partition scheme as the routing map in the configuration file. SharePlex obeys the specifications of the partition scheme to process the row subsets.

Define partitions and schemes

These instructions help you to define your partitions and link them to partition schemes. You define and link partitions by using the SHAREPLEX_PARTITION table.

Note: This table was installed in the SharePlex schema on the source system during the installation of SharePlex.

Use one SHAREPLEX_PARTITION table for all configurations that replicate data from the same Oracle instance. Enclose case-sensitive names in double quote marks, for example "Scott"."emp".

To define a partition based on a column condition

For each group of row subsets that you want to define as a partition, use a standard INSERT statement to enter the information described in "SHAREPLEX_PARTITION columns and input" into the SHAREPLEX_PARTITION table.
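
A hedged sketch of such an INSERT statement follows, assuming the table resides in a schema named splex and using illustrative scheme, route, and condition values. PRIORITY and ORDER are omitted because they must be NULL.

INSERT INTO splex.shareplex_partition
  (partition_scheme, description, target_table_owner, target_table_name, route, options, col_conditions)
VALUES
  ('CORPORATE.SALES', 'East region rows', NULL, NULL, 'sysB@o.oraB', NULL, '(region = ''EAST'') and region is not null');
COMMIT;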

To define a partition based on a hash

Use a standard INSERT statement to enter the information described in "SHAREPLEX_PARTITION columns and input" into one row of the SHAREPLEX_PARTITION table. One row captures the information that SharePlex needs to build the partitions automatically with a hash algorithm.
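
A hedged sketch of a hash-based row, again assuming a splex schema and illustrative names; with OPTIONS of HASH4 and this ROUTE, SharePlex would create the post queues hash00 through hash03 described later in this topic.

INSERT INTO splex.shareplex_partition
  (partition_scheme, description, route, options, col_conditions)
VALUES
  ('SCOTT.EMP_HASH', 'Parallel post split', 'hostB:hash|#o.oraB', 'HASH4', 'ROWID');
COMMIT;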

To route a partition to multiple targets

If a row partition must be sent to multiple target tables and the owner or table names are not all identical, do the following (a sketch follows this list):

  • Issue and commit a separate INSERT statement for each target table that has a different owner or name from the other target tables.
  • Each INSERT must be identical, except to specify the appropriate name values in the TARGET_TABLE_OWNER and TARGET_TABLE_NAME columns, in relation to the target system specified in the ROUTE column.
  • When you create an entry for this partition scheme in the configuration file, specify any target table. SharePlex will detect the other target names when the configuration is activated.
  • Set the SP_ORD_FIRST_FIND parameter to 0 so that SharePlex checks all of the column conditions in the partition scheme. By default SharePlex assumes that any given row change will satisfy only one column condition in the partition scheme. For more information, see the SharePlex Reference Guide.
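
The following hypothetical sketch shows two such INSERT statements for one partition. They are identical except for the target table name and the route; every name in them is an assumption.

INSERT INTO splex.shareplex_partition
  (partition_scheme, description, target_table_owner, target_table_name, route, options, col_conditions)
VALUES
  ('CORPORATE.SALES', 'West region rows', 'sales', 'orders_west', 'sysB@o.oraB', NULL, '(region = ''WEST'')');
COMMIT;

INSERT INTO splex.shareplex_partition
  (partition_scheme, description, target_table_owner, target_table_name, route, options, col_conditions)
VALUES
  ('CORPORATE.SALES', 'West region rows', 'sales', 'orders_west2', 'sysC@o.oraC', NULL, '(region = ''WEST'')');
COMMIT;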

Table 13: SHAREPLEX_PARTITION columns and input

Column Datatype Input
PARTITION_SCHEME VARCHAR2(30)

The name of the partition scheme for which you are creating a row partition.

You can:

  • Define only one partition scheme per table in the replication configuration.
  • Use any or all of your partition schemes in a configuration file. Only those that are listed in an active configuration file are active.
DESCRIPTION VARCHAR2(61) A description of the partition. You can enter a NULL value, but if you are creating numerous partitions, it helps to identify or describe each one, for example to give each partition a name.
TARGET_TABLE_OWNER VARCHAR2(30)

The owner of the target table. This can be one of the following:

  • A NULL value if there is only one target owner name across all routes, such as when you are using the partitioning to parallelize posting.
  • If you are sending the partition to multiple target tables with different owner names, specify one of the owner names. If case-sensitive, the name must be specified as required by the database. See To route a partition to multiple targets.
TARGET_TABLE_NAME VARCHAR2(30)

The name of the target table. This can be one of the following:

  • A NULL value if there is only one target table name across all routes, such as when you are using the partitioning to parallelize posting.
  • If you are sending the partition to multiple target tables with different names, specify one of the target names. If case-sensitive, the name must be specified as required by the database. See To route a partition to multiple targets.
ROUTE VARCHAR2(1024)

The routing map for this partition. This can be one of the following, depending on whether you are using a column condition or a hash to create the partitions.

If creating a partition with a column condition:

Use a standard routing map, such as host@o.database, or a route through a named post queue, such as host:queue@o.database.

If creating a partition with a hash:

Use the following format to direct SharePlex to automatically generate the named post queues:

host:name|#[o. | r.]database

where:

  • host is the name of the target system.
  • name is the base name of the post queues that SharePlex will create. Each name is appended with a sequential integer, starting at 0 and ending at one less than the number of partitions that you specify in the OPTIONS column.
  • |# directs SharePlex to sequentially number the queues.
  • database is the name or SID of the target database, if applicable.
PRIORITY NUMBER NULL. This column is reserved for use by SharePlex.
ORDER NUMBER

NULL. This column is reserved for use by SharePlex.

OPTIONS VARCHAR2(32)

Specify one of the following, depending on how you are defining the partitions:

If defining a partition based on a column condition:

NULL

If defining a partition based on a hash:

HASH#, where # is the number of partitions (post queues and processes) that you want SharePlex to use. The numbering starts at 0. For example, HASH4 creates the post queues hash00, hash01, hash02, and hash03.

COL_CONDITIONS VARCHAR2(1024)

Specify one of the following, depending on how you are defining the partitions:

If defining a partition based on a column condition:

Use standard WHERE conditional syntax, such as ((region_id = 'West') and region_id is not null). You can specify as many column conditions as needed. See How to create a column condition for additional instructions.

If defining a partition based on a hash:

Specify a value of ROWID. This directs SharePlex to create the hash based on the block where the row resides.

How to create a column condition

The following are guidelines for creating column conditions.

Choose appropriate columns

Use columns whose values will not change, such as PRIMARY or UNIQUE key columns. The objective is to avoid a partition shift, where changes made to the conditional columns of a partition can cause the underlying data to satisfy the conditions of a different (or no) partition.

Partition shift case 1: The column value is updated and the new value no longer satisfies any column condition:
  • SharePlex performs the operation, but future operations on that row are not replicated. The reason: the row no longer satisfies a column condition.
  • The source and target tables of the original partition are now out of synchronization, but Post returns no errors.
  • Example:

    Column condition is “C1=A and C2=B”.

    1. INSERT a row into the database. C1 and C2 values satisfy the column condition, so the row is replicated to the target system.

      C1 C2 C3 C4
      A B 1 2
    2. UPDATE ...set C1=B WHERE C1=A and C2=B

    3. The change is replicated. Now the row does not satisfy the column condition, and subsequent changes to data in the row are not replicated.

      C1 C2 C3 C4
      B B 1 2
Partition shift case 2: A row that satisfies one column condition gets updated to meet a different condition:
  • Post cannot find a matching target row. The reason: the original change was not replicated because it did not meet the column condition.
  • Post returns an out-of-sync error.
  • Example:

    Column condition is “C1=A and C2=B”.

    1. INSERT a row into the database. C1 and C2 values do not satisfy the column condition, so the row is not replicated to the target system.

      C1 C2 C3 C4
      B B 1 2
    2. UPDATE ...set C1=A WHERE C1=B and C2=B

    3. Now the row satisfies the column condition, so SharePlex replicates it, but Post cannot locate the corresponding row on the target (it was never there) and returns an out-of-sync error.

How to repair out-of-sync rows caused by changes to the values of column conditions

You can do either of the following to repair the out-of-sync rows (a command-line sketch follows this list):

  • Set the SP_ORD_HP_IN_SYNC parameter to a value of 1. This setting directs SharePlex to detect and correct the out-of-sync condition. Enabling this parameter causes some performance degradation, depending on how many tables are configured for horizontally partitioned replication. For more information about this parameter, see the SharePlex Reference Guide.
  • Use the compare command to repair the out-of-sync rows. For more information about this command, see the SharePlex Reference Guide.
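
For example, the parameter can be set from sp_ctrl (a sketch; see the SharePlex Reference Guide for the authoritative syntax):

sp_ctrl> set param SP_ORD_HP_IN_SYNC 1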

Note: If you are using a column other than a key to base the column condition on, and you notice reduced performance with horizontally partitioned replication enabled, add a log group for that column.
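
A hedged Oracle sketch of adding such a log group, assuming the column condition references a non-key column named region on scott.emp; the log group name is illustrative.

ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP splex_hp_region (region) ALWAYS;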

Use supported datatypes

SharePlex supports the following datatypes in column conditions:

CHAR

DATE

NUMBER

LONG VARCHAR

VARCHAR

VARCHAR2

Notes:

  • For the DATE datatype, SharePlex uses nls_date_format, with the syntax specified as MMDDSYYYYHH24MISS. For example:

    hiredate<‘1111 2011000000’

  • Horizontally partitioned replication does not support the following:

    • Oracle TO_DATE function.
    • VARRAYs and abstract datatypes.
    • UPDATEs or INSERTs on LONG columns larger than 100k.
    • LOB columns.
    • Sequences.
    • TRUNCATEs of an Oracle partition.

Use standard conditional syntax

The following list shows the conditional syntax that SharePlex supports in a column condition, where:

  • value can be a string or a number. Enclose strings and dates within single quote marks (‘west’). Do not use quote marks for numbers (10).
  • column is the name of a column in the table that you are configuring to use horizontally partitioned replication.
column = value
not (column = value)
column > value
value > column
column < value
column <= value
column >= value
column <> value
column != value
column like value
column between value1 and value2
not (column between value1 and value2)
column is null
column is not null

Conditions can be combined into nested expressions with parentheses and the AND, OR, and NOT logical connectives.

Example column conditions

not (col1 = 5)
(col2 = 5) and not (col3 = 6)
((col1 is not null) and (col2 = 5))

Additional guidelines

  • NULLs are replicated by SharePlex in cases such as this one: not (department_id = 90). If department_id is NULL, it is replicated. To avoid replicating records with NULLs, include the column is not null syntax as part of the condition, for example: not (department_id = 90) and department_id is not null.
  • If parentheses are not used to indicate operator precedence, SharePlex supports operator precedence in the same order that Oracle does. For example, a condition NOT x AND y behaves the same way as (NOT x) AND y. A condition x AND y OR z behaves the same as (x AND y) OR z. When a condition includes parentheses, the explicit precedence is respected.
  • Do not:

    • include references to other tables in the column condition.
    • exceed the 1024-byte maximum storage defined for the COL_CONDITIONS column.
  • During the activation of a configuration that refers to partition schemes, SharePlex verifies the syntax in the column conditions of those schemes. If any syntax is incorrect, the activation fails. SharePlex prints an error to the Event Log that indicates where the error occurred.

Specify partition schemes in the configuration file

Use one configuration file for all of the data that you want to replicate from a given datasource, including tables that will have full-table replication and those that will use partitioned replication. For general instructions on creating a configuration file, see Create configuration files. To configure entries for horizontally partitioned replication, use the following syntax.

datasource_specification    
src_owner.table tgt_owner.table

!partition_scheme

    !routing_map

Where:

  • src_owner.table and tgt_owner.table are the specifications for the source and target tables, respectively.

  • !partition_scheme is the name of the partition scheme to use for the specified source and target tables. The ! is required. The name is case-sensitive. Compound routing of multiple partition schemes is not supported, for example !schemeA+schemeB. Create a separate entry for each partition scheme that you want to use for the same source table. See Examples.
  • !routing_map is a placeholder routing map. It is required only if a route that you used in a partition scheme is not listed elsewhere in the configuration file. SharePlex requires every route to appear in the configuration file even if it is listed in a partition scheme.

    Notes:

    • This component applies only to partitions using a column condition.
    • This applies to different named queue routes as well as to routes to different target hosts.
    • You can use a compound routing map if the names of all target tables are identical. See Examples.

Examples

Configuration file entry to specify a partition scheme

Datasource:o.myora    
scott.emp scott.emp_2 !partition_emp
scott.cust scott.cust_2 !partition_cust

Correct way to specify multiple partition schemes for the same source table

Datasource:o.myora    
scott.emp scott.emp_2 !partition_schemeA
scott.emp scott.emp_3 !partition_schemeB

Correct way to specify placeholder routing map*

! targsys1
! targsys2@o.ora2+targsys3@o.ora3

*Required only for partitions based on column conditions.

Change the hash algorithm

When using hash-based horizontally partitioned replication, you can change the hash algorithm from the default of rowid-based to block-based. To enable the block-based hash option, set the SP_OCF_HASH_BY_BLOCK parameter to 1.
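
For example, from sp_ctrl (a sketch; confirm the parameter name and scope in the SharePlex Reference Guide):

sp_ctrl> set param SP_OCF_HASH_BY_BLOCK 1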

Configure vertically partitioned replication

Configure partitioned replication > Configure vertically partitioned replication

Use vertically partitioned replication to replicate a subset of the columns of a table. For example, you can replicate data changes made to columns C1, C2, C3, and C4, but not changes made to C5 and C6.

Supported targets

All

Guidelines for using vertically partitioned replication

Follow these guidelines when creating a configuration file that includes vertically partitioned replication.

  • Vertically partitioned replication is appropriate for reporting and other data sharing strategies, but it is not appropriate for high availability environments. Once you configure a table for vertically partitioned replication, SharePlex does not recognize the other columns, so data in those columns is not replicated.

  • You can combine horizontally partitioned and vertically partitioned replication for maximum control over which information is distributed, and to where.

    For example:

    A company has a headquarters and regional divisions. The headquarters maintains the corporate database, and each region maintains a regional database. The headquarters uses vertically partitioned replication to share some of the column data of a table to those locations, while retaining other sensitive data at headquarters. Row changes made to the shared columns are further partitioned horizontally, for replication to the appropriate regional database.
  • A table cannot be configured to replicate some columns to one target system and all columns to another (combination of vertically partitioned replication and full-table replication). You can, however, configure full-table replication to an identical table on one target, and then configure vertically partitioned replication from that target to a second target that contains the table that requires only the partition columns.
  • A target table can, but does not have to, contain all of the same columns as its source table. The target can contain just the columns being replicated from the source table. The names of corresponding source and target columns do not need to be the same. Corresponding columns must contain the same datatypes (same type, size, precision).
  • ALTER TABLE to add a column to a table configured for vertically partitioned replication is not supported.

To configure vertically partitioned replication, you specify either a column partition or an excluded column partition in the configuration file:

  • A column partition replicates data changes that are made to the specified columns.
  • An excluded column partition replicates all data changes except those made to the specified columns.

Use one configuration file for all of the data that you want to replicate from a given datasource, including tables that will have full-table replication and those that will use partitioned replication. For general instructions on creating a configuration file, see Create configuration files. To configure entries for vertically partitioned replication, use the following syntax.

datasource_specification    
src_owner.table (src_col,src_col,...) tgt_owner.table [(tgt_col,tgt_col,...)] routing_map
src_owner.table !(src_col,src_col,...) tgt_owner.table routing_map
Configuration component Description
src_owner.table and tgt_owner.table The specifications for the source and target tables, respectively.

(src_col, src_col,...)

and

!(src_col,src_col,...)

The source columns to be replicated. Can be one of the following:

  • (src_col, src_col,...) specifies a column partition that lists columns to replicate.
  • !(src_col,src_col,...) specifies an excluded column partition that lists columns not to replicate. The remaining columns comprise the actual column partition.

Note: When using an excluded column partition, the corresponding source and target column names must be identical, and the excluded columns cannot be used in a key definition. For more information, see Define a unique key.

Follow these rules to specify either type of column partition:

  • There can be one partition per source table.
  • A column list must be enclosed within parentheses.
  • Separate each column name with a comma. A space after the comma is optional.
  • The maximum length of a partition is 174820 bytes (the maximum line length allowed in a configuration file). Therefore, the actual number of columns that you can list depends on the length of each name.
  • The columns can be contiguous or non-contiguous in the source table. For example, you can replicate the first, third and seventh columns of a table.
  • Key columns are not required to be included in the partition.
  • If using horizontally partitioned and vertically partitioned replication together for this table, all of the columns in the partition scheme must be part of the column condition.
(tgt_col,tgt_col,...)

The target columns. Use this option to map source columns to target columns that have different names. If the source and target column names are identical, the target column list can be omitted.

To map source columns to target columns, follow these rules:

  • The syntax rules for the source column partition also apply to the target column list.
  • The target columns must have identical definitions as their source columns, except for their names.

  • List the target columns in the same logical order as their corresponding source columns. This is required regardless of the actual order of the target columns in the table, so that SharePlex builds the correct correlations in the object cache. For example, a change to the second column in the source list is replicated to the second column in the target list.
routing_map

The routing map for the column partition. The routing map can be one of the following:

  • If using horizontally partitioned replication for the source table, specify a partition scheme, as in: !partition_scheme. See Configure horizontally partitioned replication for more information.
  • If not using horizontally partitioned replication for the source table, specify a routing map as follows:

    • Use a simple routing map like sysB@o.myora if replicating the column partition to one target. A route with a named export or post queue is supported. See Configure named export queues and Configure named post queues for more information about named queues.
    • Use a compound routing map like sysB@o.myora+sysC@o.myora2 if replicating the column partition to multiple target systems. Important! A compound routing map must be used, rather than listing multiple targets in separate entries, because only one column condition per source table can be listed in the configuration file. To use a compound routing map, the owners and names of all of the target tables must be identical. For more information about using a compound routing map, see Routing specifications in a configuration file.

Configuration examples

The following is a vertically partitioned replication configuration replicating to multiple targets by using a compound routing map. To use a compound routing map for this source table, all targets must be named scott.sal.

Datasource:o.oraA
scott.emp (c1,c2) scott.sal

sysB@o.oraB+sysC@o.oraC

The following is a vertically partitioned replication configuration replicating to a single target where the target columns have different names from those of the source.

Datasource:o.oraA
scott.emp (c1,c2) scott.sal (c5,c6)

sysB@o.oraB

The following configuration file is not valid because it lists the same column partition of scott.emp (c1,c2) in two separate entries.

Datasource:o.oraA
scott.emp (c1,c2) scott.cust (c1,c2)

sysB@o.oraB

scott.emp (c1,c2) scott.sales (c1,c2) sysC@o.oraC