
SharePlex 8.6.6 - Administration Guide


Configure replication to a JMS target


Overview

The SharePlex Post process can connect and write to a JMS (Java Messaging Service) queue or topic. The data is written as XML records that include the data definitions, the operation type, and the changed column values. This data is written as a sequential series of operations as they occurred on the source, which can then be posted in sequential order to a target database or consumed by an external process or program.

Note: For the platforms, datatypes and operations that are supported when using SharePlex to replicate to JMS, see the SharePlex Release Notes.

Configure SharePlex on the source

When replicating data to JMS, configure the source database and SharePlex on the source system as follows.

Enable supplemental logging

On the source system, enable PK/UK supplemental logging in the Oracle source database. SharePlex must have the Oracle key information to build an appropriate key on the target.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

Set SP_OCT_USE_SUPP_KEYS parameter

On the source system, set the SP_OCT_USE_SUPP_KEYS parameter to a value of 1. This parameter directs SharePlex to use the columns set by Oracle's supplemental logging as the key columns when a row is updated or deleted. When both supplemental logging and this parameter are set, it ensures that SharePlex can always build a key and that the SharePlex key will match the Oracle key.

See the SharePlex Reference Guide for more information about this parameter.
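
For reference, the parameter can be set from sp_ctrl. The following is a sketch of the command, assuming the standard set param syntax; verify the exact syntax and any restart requirements in the SharePlex Reference Guide.

sp_ctrl> set param SP_OCT_USE_SUPP_KEYS 1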

Configure replication

On the source, create a SharePlex configuration file that specifies capture and routing information. The structure that is required in a configuration file varies, depending on your replication strategy, but this shows you the required syntax for routing data through a JMS queue or topic.

Datasource:o.SID
src_owner.table !jms[:tgt_owner.table] host

where:

  • SID is the Oracle SID of the source Oracle database.
  • src_owner.table is the owner and name of the source table.
  • !jms is a required keyword indicating SharePlex is posting to JMS.
  • :tgt_owner.table is optional and specifies the owner and name of the target table. Use if either component is different from that of the source table. Allow no spaces between !jms and :tgt_owner.table. Type case-sensitive names in the correct case and enclose them within double quotes, as in "MySchema"."MyTable"
  • host is the name of the target system.

Note:  For more information, see Create configuration files.

Source configuration example

Datasource:o.ora112

MY_SCHEMA.MY_TABLE !jms:"MySchema2"."MyTable2" sysprod

Configure SharePlex on the target

These instructions establish the Java provider on the target and configure the SharePlex Post process to connect to a JMS structure. SharePlex supports Active MQ and IBM MQ targets.

Configure SharePlex on an Active MQ target

These instructions configure the SharePlex Post process to post to an Active MQ queue or topic.

Notes:

  • Active MQ does not necessarily need to be installed on the same machine where SharePlex is running, but you will need to copy the ActiveMQ jar files to a location in the SharePlex installation directory according to these instructions.
  • SharePlex is compiled for Java Runtime Engine (JRE) 1.6, but ActiveMQ 5.13 and 5.14 use JRE 1.7. Therefore, these instructions include creating a soft link to point to the JRE 1.7 binary.

The following directory naming conventions are used in the examples in this procedure:

Name        Directory
SharePlex   The SharePlex installation directory
amq         The working directory for Active MQ within the SharePlex directory

To configure post to the amq target

  1. Install and configure JRE 1.7 on the SharePlex target server.
  2. Create an amq directory under SharePlex/lib/providers.

    shareplex/lib/providers/amq

  3. Copy the following provider JAR files and dependencies to the amq directory.

    shareplex/lib/providers/amq/activemq-all.jar

    shareplex/lib/providers/amq/slf4j.jar

  4. Configure the JAVA_HOME and JRE_HOME variables to point to the JRE 1.7 home directory.
  5. Add the new JAVA_HOME to the PATH variable.
  6. Create a soft link under the SharePlex/.app-modules/openbridge directory to point to the JRE 1.7 binary:

    1. Shut down SharePlex, if running.
    2. Change directories to SharePlex/.app-modules/openbridge.
    3. Issue the following command:

      mv jre/ jre1.6/

    4. Locate and symlink the JRE 1.7 to SharePlex/.app-modules/openbridge/jre
    5. Issue the following command:

      ln -s /home/xxx/java7/jre1.7.0_09 jre

      The following is an example:

      hostname/home/xxx/prod80/.app-modules/openbridge> ls -ltr

      total 24

      drwxr-xr-x 2 xxx dba 4096 Jan 11 14:04 temp/

      drwxr-xr-x 8 xxx dba 4096 Jan 11 14:04 jre1.6/

      drwxr-xr-x 2 xxx dba 4096 Jan 11 14:05 plugins/

      drwxr-xr-x 2 xxx dba 4096 Jan 11 14:05 configuration/

      -r—1 xxx dba 160 Jan 11 14:05 bridge.ini

      -rwxr-xr-x 1 xxx dba 711 Jan 11 14:05 bridge*

      lrwxrwxrwx 1 xxx dba 31 May 10 15:08 jre -> /home/xxx/java7/jre1.7.0_09/

  7. In the Active MQ configuration file (activemq.xml), set all values of producerFlowControl="true" to "false". Additional information about this setting is available at: http://activemq.apache.org/producer-flow-control.html.
  8. Start sp_cop. (Do not activate the configuration yet.)
  9. Run sp_ctrl.
  10. Issue the target command to configure Post to post to the JMS queue or topic. The following are example commands.

    See View and change JMS settings for command explanations and options.

    If posting to a JMS queue:

    sp_ctrl> target x.jms set jms factory_class=org.apache.activemq.jndi.ActiveMQInitialContextFactory

    sp_ctrl> target x.jms set jms provider_url=tcp://w2k3-64bit:61616

    sp_ctrl> target x.jms set jms lib_location=shareplex/lib/providers/amq

    sp_ctrl> target x.jms set jms queuename=SHAREPLEX.Q1

    If posting to a JMS topic:

    sp_ctrl> target x.jms set jms factory_class=org.apache.activemq.jndi.ActiveMQInitialContextFactory

    sp_ctrl> target x.jms set jms provider_url=tcp://w2k3-64bit:61616

    sp_ctrl> target x.jms set jms lib_location=shareplex/lib/providers/amq

    sp_ctrl> target x.jms set jms destination=topic

    sp_ctrl> target x.jms set jms queuename=SHAREPLEX.Q1

Configure SharePlex on an IBM MQ target

These instructions configure the SharePlex Post process to post to an IBM MQ queue or topic.

The following directory naming conventions are used in the examples in this procedure:

Name Directory
MQClient The IBM MQ client installation directory
SharePlex The SharePlex installation directory
ibmmq The working directory for IBM MQ within the SharePlex directory.

To configure post to the IBM MQ target

  1. Install and configure the latest version of Java Runtime Engine (JRE) on either the database server or an intermediate server where the SharePlex Post process will be running.
  2. Set the JAVA_HOME and JRE_HOME environment variables to the appropriate directories.
  3. Add the JAVA_HOME and JRE_HOME paths to the PATH variable as part of the SharePlex operating system user profile.
  4. Install the IBM MQ client into the MQClient directory on either the database server or an intermediate server where the SharePlex Post process will be running.
  5. Copy the .bindings file that is generated by the IBM MQ Explorer utility from the middle-tier server to the MQClient/lib directory on the SharePlex target server.

    Note: The Post process uses this file to obtain information about the IBM MQ middle-tier server.

  6. If using IBM MQ 8.x, create a soft link to the JRE that ships with the MQ 8.x distribution. (The JRE 1.6 that ships with SharePlex is compatible with MQ 7.x but not 8.x.)

    1. $ cd SharePlex/.app-modules/openbridge
    2. $ mv jre jre.old

    3. $ ln -s /MQClient/java/jre64/jre jre

  7. Create an ibmmq directory within the SharePlex/lib/providers directory.

  8. Copy the .bindings file and all of the .jar files from the MQClient/java/lib directory to the SharePlex/lib/providers/ibmmq directory.

    $ cp MQClient/java/lib/.bindings SharePlex/lib/providers/ibmmq

    $ cp MQClient/java/lib/*.jar SharePlex/lib/providers/ibmmq

  9. Create a soft link to the MQClient/lib directory from the ibmmq directory that you created.

    $ cd SharePlex/lib/providers/ibmmq

    $ ln -s MQClient/lib ibmmq

  10. Start sp_cop. (Do not activate the configuration yet.)
  11. Run sp_ctrl.
  12. Issue the target command to configure Post to post to the JMS queue or topic. The following are example commands.

    See View and change JMS settings for command explanations and additional options.

    If posting to a JMS queue:

    sp_ctrl> target x.jms set jms factory_name=SharePlexQcf

    sp_ctrl> target x.jms set jms lib_location=ibmmq

    sp_ctrl> target x.jms set jms provider_url=file:/MQClient/app/mqm83client/lib

    sp_ctrl> target x.jms set jms queuename=SHAREPLEX.Q1

    sp_ctrl> target x.jms set jms user=myusername

    sp_ctrl> target x.jms set jms password=mypassword

    If posting to a JMS topic:

    sp_ctrl> target x.jms set jms factory_name=SharePlexQcf

    sp_ctrl> target x.jms set jms lib_location=ibmmq

    sp_ctrl> target x.jms set jms provider_url=file:/MQClient/app/mqm83client/lib

    sp_ctrl> target x.jms set jms queuename=SHAREPLEX.T1

    sp_ctrl> target x.jms set jms destination=topic

    sp_ctrl> target x.jms set jms user=myusername

    sp_ctrl> target x.jms set jms password=mypassword

View and change JMS settings

To view current property settings for output to JMS, use the following command:

target x.jms show

To change a property setting, use the following command.

target x.jms [queue queuename] set jms property=value

where:

  • queue queuename is the name of a Post queue. Use this option if there are multiple Post processes.
  • property and value are shown in the following table.

Table 3: JMS target properties

Property Input Value Default

factory_class=factory_class

Required Fully qualified class name of the factory class. Sets the JNDI environmental property java.naming.factory.initial to specify the class name of the initial context factory for the provider.

None

provider_url=url

Required RMI URL with no object name component. This sets the JNDI environmental property java.naming.provider.url to specify the location of the registry that is being used as the initial context.

None

lib_location=path

Required Path to the directory where you installed the JAR files.

None

destination={queue | topic}

Messaging domain. Valid values are queue (point-to-point) or topic (publisher-subscriber model).

queue

factory_name=factory_name

Name of a JNDI connection factory lookup. You can specify multiple names with a comma-separated list, for example: (jndi.name1, jndi.name2).

None

user=user

Name of the user that is attaching to JMS. If authentication is not required, omit this and the password option.

None

password=password

Password of the JMS user.

None

queuename=JMS_queuename

Name of the JMS queue or topic.

OpenTarget

persistent={yes | no}

yes logs messages to disk storage as part of send operations.

no prevents logging.

yes
session_transacted={yes | no}

no directs Post to issue a JMS commit for every replicated message, making each one immediately visible and consumable. This is the default.

yes directs Post to operate in a transactional manner. In this mode, Post issues a JMS commit (to make messages visible and consumable) at intervals based on the following:

  • Issue a commit before the data is read-released from the Post queue, as controlled by the value set for the SP_OPX_READRELEASE_INTERVAL parameter.
  • Issue a commit at every JMS write, as controlled by the value set for the SP_OPX_CHECKPOINT_FREQ parameter, until the SP_OPX_READRELEASE_INTERVAL is reached.
no
properties

Use this option if the JMS provider that you are using cannot consume messages that contain the default set of properties supplied by SharePlex. It enables you to remove or add properties. Supply the properties as a comma-delimited list.

  • To add a property, specify it as name=value.
  • To remove a SharePlex property, prefix the name with a dash. For example, this string removes two SharePlex properties: -JMSXDeliveryCount,-JMSXGroupSeq.
None
client_id

Use this option if the JMS provider that you are using cannot consume messages that contain the default SharePlex client ID. Set this value to the client ID that your provider accepts.

None
commit_frequency

Use this option when Post is configured to post to a JMS server in transactional style (issue a JMS commit at intervals, rather than after every message, as directed by the session_transacted property of the target command). This parameter specifies the interval between JMS commits. It works in conjunction with the SP_OPX_READRELEASE_INTERVAL parameter. Valid values are any positive integer.

1500

Set recovery options

If the JMS process aborts suddenly, or if the machine that it is running on aborts, row changes may be written twice to the JMS target. The consumer must manage this by detecting and discarding duplicates.

Every record of every row-change operation in a transaction has the same transaction ID and is also marked with a sequence ID. These attributes are id and msgIdx, respectively, under the txn element in the XML output (see About the XML).

The transaction ID is the SCN at the time the transaction was committed and the sequence ID is the index of the row change in the transaction. These two values are guaranteed to be the same if they are re-written to the JMS queue in a recovery situation.
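
The following is a minimal sketch (not part of the SharePlex distribution) of a JMS consumer that uses these two values to discard duplicates. It assumes the ActiveMQ broker and queue from the earlier examples (tcp://w2k3-64bit:61616 and SHAREPLEX.Q1), that the ActiveMQ client library is on the classpath, and that Post delivers each XML record as a JMS TextMessage.

import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ShareplexJmsConsumer {
    // Matches the id and msgIdx attributes of the txn element in an operation record.
    private static final Pattern TXN = Pattern.compile("id=\"(\\d+)\"\\s+msgIdx=\"(\\d+)\"");

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://w2k3-64bit:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("SHAREPLEX.Q1"));

        // (transaction id, msgIdx) pairs already processed; a real consumer would bound or persist this.
        Set<String> seen = new HashSet<>();
        while (true) {
            Message message = consumer.receive();                 // blocks until a message arrives
            if (!(message instanceof TextMessage)) continue;      // assumes Post sends each record as a TextMessage
            String xml = ((TextMessage) message).getText();
            Matcher m = TXN.matcher(xml);
            if (m.find() && !seen.add(m.group(1) + ":" + m.group(2))) {
                continue;                                         // duplicate re-sent after a recovery; skip it
            }
            System.out.println("Received: " + xml);               // schema records have no msgIdx and pass through
        }
    }
}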

If desired, you can configure the target to include additional metadata with every row-change record by using the following command:

target x.jms [queue queuename] set metadata property[, property]

Table 4: Optional JMS metadata properties

Property Description
time The time the operation was applied on the source.
userid The ID of the database user that performed the operation.
trans The ID of the transaction that included the operation.
size The number of operations in the transaction.

Example

target x.jms set metadata time, userid, trans, size

To reset the metadata

target x.jms [queue queuename] reset metadata

To view the metadata

target x.jms [queue queuename] show metadata

Control memory settings

If you configured Post to process a large volume of transactions to JMS through multiple named Post queues, you may need to allocate more memory to the Post JMS bridge. The default settings are 256 MB for the initial heap size and 512 MB for the maximum heap size.

To increase the JMS bridge memory

Open the openbridge.ini file and set the following parameters in the format shown:

-Xms=512m

-Xmx=1024m

-XX:PermSize=512m

-XX:MaxPermSize=1024m

Configure debugging

You can configure Post to log whether each write to a queue or topic succeeded or failed. This is controlled with the debugging setting.

To enable debugging

Open the openbridge.ini file and set the -OB_debugToLog parameter to true.

-OB_debugToLog=true

About the XML

The XML format is separated into operation and schema "types" for easier consumption. They are actually the same when viewed from an XSD perspective and are not distinct types. The template XML represents all possible attributes and elements. The individual XML represents the bare minimum output for each supported operation.

After startup, the first time that Post writes a change record for any given table, it first writes a schema record for that table. Each schema record contains the table name and details of interest for each column. A schema record is written only once for each table during a Post run, unless there is a change to that schema, in which case a new schema record is written. If Post stops and starts, schema records are written again, once for each table as Post receives a change record for it.

Schema record template

<?xml version="1.0" encoding="UTF-8" ?>  
<?opentarget version="1.0" ?>  
<opentarget>  
    <txn  
        id="xs:integer"  
        oracleTxnId="xs:string"   
        commitTime="xs:dateTimeStamp" />  
    <tbl  
        name="xs:string"  
        utcOffset="xs:integer">
        <cmd ops="schema">  
            <schema>  
                <col  
                    name="xs:string"  
                    xmlType="xs:string"  
                    key="xs:boolean"  
                    nullable="xs:boolean"  
                    length="xs:integer"  
                  />  
            </schema>  
        </cmd>  
    </tbl>  
</opentarget>

Table 5: Explanation of schema template (* = optional)

Element Attribute Description
txn   Transaction metadata
  id ID of current transaction
  oracleTxnId* Oracle transaction ID
  commitTime* Transaction commit timestamp
tbl   Table metadata
  name Fully qualified name of the table
  utcOffset UTC offset in the log
cmd   Operation metadata (In the case of a schema, there are no operations.)
  ops Type of record generated for this table. For a schema, the value is schema.
schema   Column metadata
col   Metadata for a column (One of these elements appears for every column in the table.)
  name Name of the column
  xmlType XML data type
  key Key flag (true, false)
  nullable Nullable flag
  length Length of the column

Operation record template

<?xml version="1.0" encoding="UTF-8" ?>
<?opentarget version="1.1" ?>
<opentarget>
    <txn
        id="xs:integer" 
        msgIdx="xs:integer" 
        msgTot="xs:integer" 
        oracleTxnId="xs:string"
        commitTime="xs:dateTimeStamp"
        userId="xs:string" />
    <tbl 
        name="xs:string">
        <cmd ops="xs:string">
            <row id="xs:string">
                <col name="xs:string"></col>
                <lkup>
                    <col name="xs:string"></col>
                </lkup>
            </row>
        </cmd>
    </tbl>
</opentarget>

Table 6: Explanation of operation template (* = optional)

Element Attribute Description
txn   Transaction metadata for the operation
  id ID of current transaction
  msgIdx Index of current record in the transaction
  msgTot* Total number of messages in transaction
  oracleTxnId * Oracle transaction ID, taken from the System Change Number (SCN)
  commitTime* Transaction commit timestamp
  userId * User ID that performed the operation
tbl   Table metadata
  name Fully qualified table name
cmd   Operation metadata
  ops Operation type (insert, update, delete, truncate)
row   Metadata of the row that changed in the operation
  id Oracle ROWID
col   Change data for a column (One of these elements appears for every changed column in the operation.)
  name Column name with the after value for that column
lkup   Before image for use in update and delete operations
col   Before image of column (One of these elements appears for every changed column in the operation.)
  name Column name with the before value or the key value (depending on the operation) for that column

Note: The id and msgIdx attributes together uniquely identify an operation.

Supported datatypes

See the SharePlex Release Notes for a chart that shows how Oracle datatypes are converted to XML.

Sample XML records

Source table

This is the table for which the sample operations are generated.

SQL> desc products

Name          Null?      Type
PRODUCT_ID    NOT NULL   NUMBER
DESCRIPTION              VARCHAR2(600)
PRICE                    NUMBER

Source DML operations
insert into products values (230117, 'Hamsberry vintage tee, cherry', 4099);
commit;
update products set price=3599 where product_id=230117 and price=4099;
commit;
delete products where product_id=230117;
commit;
truncate table products;
Schema record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS" utcOffset="-5:00">
<cmd ops="schema">
<schema>
<col name="PRODUCT_ID" xmlType="decimal" key="true" nullable="false" length="22" />
<col name="DESCRIPTION" xmlType="string" key="false" nullable="true" length="600" />
<col name="PRICE" xmlType="decimal" key="false" nullable="true" length="22" />
</schema>
</cmd>
</tbl>
</opentarget>
Insert record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS">
<cmd ops="ins">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRODUCT_ID">230117</col>
<col name="DESCRIPTION">Hamsberry vintage tee, cherry</col>
<col name="PRICE">4099</col>
</row>
</cmd>
</tbl>
</opentarget>
Update record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218318728" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:12" userId="85" oracleTxnId="1.17.970754" />
<tbl name="MFG.PRODUCTS">
<cmd ops="upd">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRICE">3599</col>
<lkup>
<col name="PRODUCT_ID">230117</col>
<col name="PRICE">4099</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Delete record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319446" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:25" userId="85" oracleTxnId="5.23.1391276" />
<tbl name="MFG.PRODUCTS">
<cmd ops="del">
<row id="AAAmDbAAEAAApRrAAA">
<lkup>
<col name="PRODUCT_ID">230117</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Truncate record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319938" commitTime="1988-01-01T00:00:00" userId="85" oracleTxnId="11.4.939801" />
<tbl name="MFG.PRODUCTS">
<cmd ops="trunc" />
</tbl>
</opentarget>

Configure replication to a Kafka target


Overview

The SharePlex Post process can connect and write to a Kafka broker. The data is written as XML records that include the data definitions, the operation type, and the changed column values. This data is written as a sequential series of operations as they occurred on the source, which can then be posted in sequential order to a target database or consumed by an external process or program.

Guidelines for posting to Kafka

  • A SharePlex Post process acts as a Kafka producer. A SharePlex Post process writes to a single topic with the default partition of 0 in a single broker. To support multiple topics or multiple brokers, configure multiple Post processes using named Post queues. For more information, see Configure named post queues.
  • The SharePlex Post process does not create topics. You must create the topic through Kafka.
  • For performance reasons, Post batches messages together before sending them to the Kafka broker. Tuning parameters are available to control the batch size. These can be used to balance performance with latency.

Note: For the platforms, datatypes and operations that are supported when using SharePlex to replicate to Kafka, see the SharePlex Release Notes.
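
As noted in the guidelines, Post does not create topics. You can create the topic with the kafka-topics script that ships with Kafka, or programmatically. The following is a minimal sketch using the Kafka AdminClient API (available in Kafka client libraries 0.11 and later); the topic name, partition count, and replication factor shown here are examples only.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateShareplexTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address from the target command example; change to match your cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Example topic: 3 partitions, replication factor 1.
            NewTopic topic = new NewTopic("shareplex", 3, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();   // wait for the broker to confirm
        }
    }
}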

Configure SharePlex on the source

When replicating data to Kafka, configure the source database and SharePlex on the source system as follows.

Enable supplemental logging

On the source system, enable PK/UK supplemental logging in the Oracle source database. SharePlex must have the Oracle key information to build an appropriate key on the target.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

Set SP_OCT_USE_SUPP_KEYS parameter

On the source system, set the SP_OCT_USE_SUPP_KEYS parameter to a value of 1. This parameter directs SharePlex to use the columns set by Oracle's supplemental logging as the key columns when a row is updated or deleted. When both supplemental logging and this parameter are set, it ensures that SharePlex can always build a key and that the SharePlex key will match the Oracle key.

See the SharePlex Reference Guide for more information about this parameter.

Configure replication

On the source, create a SharePlex configuration file that specifies capture and routing information. The structure that is required in a configuration file varies, depending on your replication strategy, but this shows you the required syntax for routing data to Kafka.

Datasource:o.SID
src_owner.table !kafka[:tgt_owner.table] host

where:

  • SID is the Oracle SID of the source Oracle database.
  • src_owner.table is the owner and name of the source table.
  • !kafka is a required keyword indicating SharePlex is posting to Kafka.
  • :tgt_owner.table is optional and specifies the owner and name of the target table. Use if either component is different from that of the source table. Allow no spaces between !kafka and :tgt_owner.table. Type case-sensitive names in the correct case and enclose them within double quotes, as in "MySchema"."MyTable"
  • host is the name of the target system.

Note:  For more information, see Create configuration files.

Source configuration example

Datasource:o.ora112

MY_SCHEMA.MY_TABLE !kafka:"MySchema2"."MyTable2" sysprod

Configure SharePlex on the target

These instructions configure the SharePlex Post process to connect to Kafka. You must have a running Kafka broker.

To configure post to Kafka

  1. Create a Kafka topic.
  2. Start sp_cop. (Do not activate the configuration yet.)
  3. Run sp_ctrl.
  4. Issue the target command to configure posting to a Kafka broker and topic. The following are example commands.

    sp_ctrl> target x.kafka set kafka broker=localhost:9092

    sp_ctrl> target x.kafka set kafka topic=shareplex

    See View and change Kafka settings for command explanations and options.

View and change Kafka settings

To view current property settings for output to Kafka, use the following command:

target x.kafka show

To change a property setting, use the following command.

target x.kafka [queue queuename] set kafka property=value

where:

  • queue queuename is the name of a Post queue. Use this option if there are multiple Post processes.
  • property and value are shown in the following table.

Table 7: Kafka target properties

Property Input Value Default
broker

Required. The host and port number of the Kafka broker, or a comma delimited list of multiple brokers. This list is the bootstrap into the Kafka cluster. So long as Post can connect to one of these brokers, it will discover any other brokers in the cluster.

localhost:9092
compression.code

Optional. Controls whether data is compressed in Kafka. Options are none, gzip or snappy.

None
topic

Required. The name of the target Kafka topic.

This string may contain the special sequences %o or %t. The %o sequence is replaced by the owner name of the table that is being replicated. The %t sequence is replaced by the table name of the table that is being replicated. This feature may be used in conjunction with the Kafka server setting auto.create.topics.enable set to true. Also review your server settings for default.replication.factor and num.partitions, because these are used as defaults when topics are auto created.

shareplex
client_id

Optional. A user-defined string that Post will send in each request to help trace calls.

None
partition

Optional. One of the following:

  • A fixed partition number: Directs Post to post messages only to the specified partition number. For example, setting it to 0 directs Post to post only to partition 0. This option is suitable for use in testing or if the target has multiple channels of data posting to the same Kafka topic.
  • The keyword rotate: Directs Post to apply messages to all of the partitions of a topic in a round-robin fashion. The partition changes with each new message. For example if a topic has three partitions, the messages are posted to partitions 0,1,2,0,1,2, and so on in that order.
  • The keyword rotate trans: This is similar to the rotate option, except that the partition is incremented with each transaction rather than with each message. For example, if a topic has three partitions, the messages are posted to partition 0 until the commit, then to partition 1 until the commit, and so on in that order. This option is suitable if you are replicating multiple tables to a single topic. It allows you to distribute data across several partitions, while still preserving all of the operations of a transaction together in a single partition. This enables a consumer that reads from a single partition to receive a stream of complete transactions.
None
request.required.acks

Optional. This is a Kafka client parameter. By default it is set to a value of -1, which means all. Consult the Kafka documentation about this subject, because all really means all in-sync replicas. This parameter can be used in conjunction with the min.insync.replicas server parameter to tune behavior between availability and data consistency. Important: It is possible for data to be lost between a Kafka producer (SharePlex in this case) and a Kafka cluster, depending on these settings.

None
threshold_size

Optional. The approximate network packet size*, in kilobytes, that Post sends to the Kafka broker.

Notes:

  • Maximum packet size is 128000 KB.
  • Packet size is approximate.
10000KB

* To avoid latency, if Post detects no more incoming messages, it sends the packet to Kafka immediately without waiting for the threshold to be satisfied.

Set recovery options

If the Kafka process aborts suddenly, or if the machine that it is running on aborts, row changes may be written twice to the Kafka topic. The consumer must manage this by detecting and discarding duplicates.

Every record of every row-change operation in a transaction has the same transaction ID and is also marked with a sequence ID. These attributes are id and msgIdx, respectively, under the txn element in the XML output (see About the XML).

The transaction ID is the SCN at the time the transaction was committed, and the sequence ID is the index of the row change in the transaction. These two values are guaranteed to be the same if they are re-written to the Kafka topic in a recovery situation.
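
The following is a minimal sketch (not part of the SharePlex distribution) of a Kafka consumer that uses these two values to discard duplicates. It assumes a recent Kafka client library, that each XML record arrives as the message value and is readable as a string, and the default broker and topic from the examples above (localhost:9092 and shareplex).

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ShareplexKafkaConsumer {
    // Matches the id and msgIdx attributes of the txn element in an operation record.
    private static final Pattern TXN = Pattern.compile("id=\"(\\d+)\"\\s+msgIdx=\"(\\d+)\"");

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "shareplex-readers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // (transaction id, msgIdx) pairs already applied; a real consumer would bound or persist this.
        Set<String> seen = new HashSet<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("shareplex"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    Matcher m = TXN.matcher(record.value());
                    if (m.find() && !seen.add(m.group(1) + ":" + m.group(2))) {
                        continue;   // duplicate written again after a recovery; skip it
                    }
                    System.out.println("partition " + record.partition() + ": " + record.value());
                }
            }
        }
    }
}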

If desired, you can configure Post to include additional metadata with every row-change record by using the following command:

target x.kafka [queue queuename] set metadata property[, property]

Table 8: Optional metadata properties

Property Description
time The time the operation was applied on the source.
userid The ID of the database user that performed the operation.
trans The ID of the transaction that included the operation.
size The number of operations in the transaction.

Example

target x.kafka set metadata time, userid, trans, size

To reset the metadata

target x.kafka [queue queuename] reset metadata

To view the metadata

target x.kafka [queue queuename] show metadata

About the XML

The XML format is separated into operation and schema "types" for easier consumption. They are actually the same when viewed from an XSD perspective and are not distinct types. The template XML represents all possible attributes and elements. The individual XML represents the bare minimum output for each supported operation.

After startup, the first time that Post writes a change record for any given table, it first writes a schema record for that table. Each schema record contains the table name and details of interest for each column. A schema record is written only once for each table during a Post run, unless there is a change to that schema, in which case a new schema record is written. If Post stops and starts, schema records are written again, once for each table as Post receives a change record for it.

Schema record template

<?xml version="1.0" encoding="UTF-8" ?>  
<?opentarget version="1.0" ?>  
<opentarget>  
    <txn  
        id="xs:integer"  
        oracleTxnId="xs:string"   
        commitTime="xs:dateTimeStamp" />  
    <tbl  
        name="xs:string"  
        utcOffset="xs:integer">
        <cmd ops="schema">  
            <schema>  
                <col  
                    name="xs:string"  
                    xmlType="xs:string"  
                    key="xs:boolean"  
                    nullable="xs:boolean"  
                    length="xs:integer"  
                  />  
            </schema>  
        </cmd>  
    </tbl>  
</opentarget>

Table 9: Explanation of schema template (* = optional)

Element Attribute Description
txn   Transaction metadata
  id ID of current transaction
  oracleTxnId* Oracle transaction ID
  commitTime* Transaction commit timestamp
tbl   Table metadata
  name Fully qualified name of the table
  utcOffset UTC offset in the log
cmd   Operation metadata (In the case of a schema, there are no operations.)
  ops Type of record generated for this table. For a schema, the value is schema.
schema   Column metadata
col   Metadata for a column (One of these elements appears for every column in the table.)
  name Name of the column
  xmlType XML data type
  key Key flag (true, false)
  nullable Nullable flag
  length Length of the column

Operation record template

<?xml version="1.0" encoding="UTF-8" ?>
<?opentarget version="1.1" ?>
<opentarget>
    <txn
        id="xs:integer" 
        msgIdx="xs:integer" 
        msgTot="xs:integer" 
        oracleTxnId="xs:string"
        commitTime="xs:dateTimeStamp"
        userId="xs:string" />
    <tbl 
        name="xs:string">
        <cmd ops="xs:string">
            <row id="xs:string">
                <col name="xs:string"></col>
                <lkup>
                    <col name="xs:string"></col>
                </lkup>
            </row>
        </cmd>
    </tbl>
</opentarget>

Table 10: Explanation of operation template (* = optional)

Element Attribute Description
txn   Transaction metadata for the operation
  id ID of current transaction
  msgIdx Index of current record in the transaction
  msgTot* Total number of messages in transaction
  oracleTxnId * Oracle transaction ID, taken from the System Change Number (SCN)
  commitTime* Transaction commit timestamp
  userId * User ID that performed the operation
tbl   Table metadata
  name Fully qualified table name
cmd   Operation metadata
  ops Operation type (insert, update, delete, truncate)
row   Metadata of the row that changed in the operation
  id Oracle ROWID
col   Change data for a column (One of these elements appears for every changed column in the operation.)
  name Column name with the after value for that column
lkup   Before image for use in update and delete operations
col   Before image of column (One of these elements appears for every changed column in the operation.)
  name Column name with the before value or the key value (depending on the operation) for that column

Note: The id and msgIdx attributes together uniquely identify an operation.

Supported datatypes

See the SharePlex Release Notes for a chart that shows how Oracle datatypes are converted to XML.

Sample XML records

Source table

This is the table for which the sample operations are generated.

SQL> desc products

Name          Null?      Type
PRODUCT_ID    NOT NULL   NUMBER
DESCRIPTION              VARCHAR2(600)
PRICE                    NUMBER

Source DML operations
insert into products values (230117, 'Hamsberry vintage tee, cherry', 4099);
commit;
update products set price=3599 where product_id=230117 and price=4099;
commit;
delete products where product_id=230117;
commit;
truncate table products;
Schema record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS" utcOffset="-5:00">
<cmd ops="schema">
<schema>
<col name="PRODUCT_ID" xmlType="decimal" key="true" nullable="false" length="22" />
<col name="DESCRIPTION" xmlType="string" key="false" nullable="true" length="600" />
<col name="PRICE" xmlType="decimal" key="false" nullable="true" length="22" />
</schema>
</cmd>
</tbl>
</opentarget>
Insert record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS">
<cmd ops="ins">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRODUCT_ID">230117</col>
<col name="DESCRIPTION">Hamsberry vintage tee, cherry</col>
<col name="PRICE">4099</col>
</row>
</cmd>
</tbl>
</opentarget>
Update record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218318728" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:12" userId="85" oracleTxnId="1.17.970754" />
<tbl name="MFG.PRODUCTS">
<cmd ops="upd">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRICE">3599</col>
<lkup>
<col name="PRODUCT_ID">230117</col>
<col name="PRICE">4099</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Delete record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319446" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:25" userId="85" oracleTxnId="5.23.1391276" />
<tbl name="MFG.PRODUCTS">
<cmd ops="del">
<row id="AAAmDbAAEAAApRrAAA">
<lkup>
<col name="PRODUCT_ID">230117</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Truncate record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319938" commitTime="1988-01-01T00:00:00" userId="85" oracleTxnId="11.4.939801" />
<tbl name="MFG.PRODUCTS">
<cmd ops="trunc" />
</tbl>
</opentarget>

Configure replication to a SQL or XML file target


Overview

SharePlex can post replicated Oracle data to a file formatted as SQL or XML. This data is written as a sequential series of operations as they occurred on the source, which can then be posted in sequential order to a target database or consumed by an external process or program.

This chapter guides you through the configuration process to support replication to this target.

Note: For the platforms, datatypes and operations that are supported when using SharePlex to replicate to a SQL or XML file, see the SharePlex Release Notes.

Configure SharePlex on the source

On the source, create a SharePlex configuration file that specifies capture and routing information. The structure that is required in a configuration file varies, depending on your replication strategy, but this shows you the required syntax for routing data to a SQL or XML file.

Datasource:o.SID
src_owner.table !file[:tgt_owner.table] host

where:

  • SID is the Oracle SID of the source Oracle database.
  • src_owner.table is the owner and name of the source table.

  • !file is a required keyword that directs Post to write to a file.
  • tgt_owner.table is optional and specifies the owner and name of the target table. Use if either component is different from that of the source table.
  • host is the name of the target system.

Note:  For more information, see Create configuration files.

Source configuration example:

The following example replicates the parts table in schema PROD from Oracle instance ora112 to a file on target system sysprod.

Datasource:o.ora112

PROD.parts !file sysprod

Configure SharePlex on the target

To output data in XML format

By default, SharePlex formats data that it writes to a file in XML format, and you do not need to issue any target configuration commands unless you want to change the properties of the output file (see View and change target settings).

To output data in SQL format

  1. Start sp_cop.
  2. Start sp_ctrl.
  3. Issue the following required target commands to output the records in SQL. Note: Use all lower-case characters.

    target x.file [queue queuename] set format record=sql

    target x.file [queue queuename] set sql legacy=yes

    where: queue queuename constrains the action of the command to the SharePlex Post process that is associated with the specified queue.

    See View and change target settings for descriptions of these settings and other optional properties that you can set.

View and change target settings

To view current property settings for output to a file, use the following command:

target x.file show

To change a setting, use the following target command.

target x.file [queue queuename] set [category] property=value


About the SQL

Every transaction in a SQL-formatted file is headed by a comment that includes the transaction sequence within the SQL file and a unique transaction ID. A comment line at the end of the SQL file has the number of lines in the file. For example, the following is a SQL file with one transaction. In this example the transaction id is 2-113319. The file has nine lines.

/installed/vardir> cat opx/0000000010_20140305140820_legacy.sql

-- 0000000001 2-113319 03052014140813 03052014140813

DELETE FROM "ROBIN"."TEST_TYPES" WHERE ORA_NUMBER = '22345' AND ROWNUM = 1;

INSERT INTO "ROBIN"."TEST_TYPES" (ORA_NUMBER, ORA_DATE, ORA_RAW, ORA_ROWID,

ORA_FLOAT, ORA_CHAR, ORA_VARCHAR2, ORA_TIMESTAMP, ORA_TIMESTAMP_TZ,

ORA_TIMESTAMP_LTZ) VALUES('22345', '08132066000000', '0123456789ABCDEF'

, 'AAAAAAAAAAAAAAAAAA', '12350', 'Character ', 'Variable data'

, '10201998021300.22000', '06172002080000.00000', '06172002160000.00000');

COMMIT;

-- EOF 0000000009
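
Because the SQL files are plain DML scripts, they can be applied with any SQL client. The following is a minimal sketch (not a SharePlex utility) that applies one legacy SQL file to a target database over JDBC. It assumes an appropriate JDBC driver is on the classpath, and it uses a deliberately naive statement splitter that treats a trailing semicolon as the statement terminator and relies on the file's own COMMIT lines.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

public class ApplySqlFile {
    public static void main(String[] args) throws Exception {
        // args: <path to *_legacy.sql file> <JDBC URL> <user> <password>
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        try (Connection conn = DriverManager.getConnection(args[1], args[2], args[3]);
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);                      // the file's own COMMIT statements control commits
            StringBuilder current = new StringBuilder();
            for (String line : lines) {
                if (line.startsWith("--")) continue;        // skip the transaction header and EOF comments
                current.append(line).append('\n');
                if (line.trim().endsWith(";")) {            // naive terminator; assumes no ';' inside data values
                    String sql = current.toString().trim();
                    current.setLength(0);
                    sql = sql.substring(0, sql.length() - 1);   // strip the trailing ';'
                    if (sql.equalsIgnoreCase("COMMIT")) {
                        conn.commit();
                    } else {
                        stmt.execute(sql);
                    }
                }
            }
        }
    }
}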

About the XML

The XML format is separated into operation and schema "types" for easier consumption. They are actually the same when viewed from an XSD perspective and are not distinct types. The template XML represents all possible attributes and elements. The individual XML represents the bare minimum output for each supported operation.

After startup, the first time that Post writes a change record for any given table, it first writes a schema record for that table. Each schema record contains the table name and details of interest for each column. A schema record is written only once for each table during a Post run, unless there is a change to that schema, in which case a new schema record is written. If Post stops and starts, schema records are written again, once for each table as Post receives a change record for it.

Schema record template

<?xml version="1.0" encoding="UTF-8" ?>  
<?opentarget version="1.0" ?>  
<opentarget>  
    <txn  
        id="xs:integer"  
        oracleTxnId="xs:string"   
        commitTime="xs:dateTimeStamp" />  
    <tbl  
        name="xs:string"  
        utcOffset="xs:integer">
        <cmd ops="schema">  
            <schema>  
                <col  
                    name="xs:string"  
                    xmlType="xs:string"  
                    key="xs:boolean"  
                    nullable="xs:boolean"  
                    length="xs:integer"  
                  />  
            </schema>  
        </cmd>  
    </tbl>  
</opentarget>

Table 11: Explanation of schema template (* = optional)

Element Attribute Description
txn   Transaction metadata
  id ID of current transaction
  oracleTxnId* Oracle transaction ID
  commitTime* Transaction commit timestamp
tbl   Table metadata
  name Fully qualified name of the table
  utcOffset UTC offset in the log
cmd   Operation metadata (In the case of a schema, there are no operations.)
  ops Type of record generated for this table. For a schema, the value is schema.
schema   Column metadata
col   Metadata for a column (One of these elements appears for every column in the table.)
  name Name of the column
  xmlType XML data type
  key Key flag (true, false)
  nullable Nullable flag
  length Length of the column

Operation record template

<?xml version="1.0" encoding="UTF-8" ?>
<?opentarget version="1.1" ?>
<opentarget>
    <txn
        id="xs:integer" 
        msgIdx="xs:integer" 
        msgTot="xs:integer" 
        oracleTxnId="xs:string"
        commitTime="xs:dateTimeStamp"
        userId="xs:string" />
    <tbl 
        name="xs:string">
        <cmd ops="xs:string">
            <row id="xs:string">
                <col name="xs:string"></col>
                <lkup>
                    <col name="xs:string"></col>
                </lkup>
            </row>
        </cmd>
    </tbl>
</opentarget>

Table 12: Explanation of operation template (* = optional)

Element Attribute Description
txn   Transaction metadata for the operation
  id ID of current transaction
  msgIdx Index of current record in the transaction
  msgTot* Total number of messages in transaction
  oracleTxnId * Oracle transaction ID, taken from the System Change Number (SCN)
  commitTime* Transaction commit timestamp
  userId * User ID that performed the operation
tbl   Table metadata
  name Fully qualified table name
cmd   Operation metadata
  ops Operation type (insert, update, delete, truncate)
row   Metadata of the row that changed in the operation
  id Oracle ROWID
col   Change data for a column (One of these elements appears for every changed column in the operation.)
  name Column name with the after value for that column
lkup   Before image for use in update and delete operations
col   Before image of column (One of these elements appears for every changed column in the operation.)
  name Column name with the before value or the key value (depending on the operation) for that column

Note: The id and msgIdx attributes together uniquely identify an operation.
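
The following is a minimal sketch showing how a consumer might read one opentarget operation record with the standard Java DOM parser. It assumes a single record (for example, one of the sample records shown below) has been saved to its own file, whose path is passed as the first argument.

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class OpenTargetXmlReader {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new File(args[0]));   // path to a file holding one opentarget record

        Element tbl = (Element) doc.getElementsByTagName("tbl").item(0);
        Element cmd = (Element) doc.getElementsByTagName("cmd").item(0);
        System.out.println("table=" + tbl.getAttribute("name") + " op=" + cmd.getAttribute("ops"));

        // After-image columns are direct children of <row>; before-image columns sit under <lkup>.
        NodeList cols = doc.getElementsByTagName("col");
        for (int i = 0; i < cols.getLength(); i++) {
            Element col = (Element) cols.item(i);
            boolean beforeImage = "lkup".equals(col.getParentNode().getNodeName());
            System.out.println((beforeImage ? "before " : "after  ")
                    + col.getAttribute("name") + " = " + col.getTextContent());
        }
    }
}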

Supported datatypes

See the SharePlex Release Notes for a chart that shows how Oracle datatypes are converted to XML.

Sample XML records

Source table

This is the table for which the sample operations are generated.

SQL> desc products

Name          Null?      Type
PRODUCT_ID    NOT NULL   NUMBER
DESCRIPTION              VARCHAR2(600)
PRICE                    NUMBER

Source DML operations
insert into products values (230117, 'Hamsberry vintage tee, cherry', 4099);
commit;
update products set price=3599 where product_id=230117 and price=4099;
commit;
delete products where product_id=230117;
commit;
truncate table products;
Schema record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS" utcOffset="-5:00">
<cmd ops="schema">
<schema>
<col name="PRODUCT_ID" xmlType="decimal" key="true" nullable="false" length="22" />
<col name="DESCRIPTION" xmlType="string" key="false" nullable="true" length="600" />
<col name="PRICE" xmlType="decimal" key="false" nullable="true" length="22" />
</schema>
</cmd>
</tbl>
</opentarget>
Insert record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS">
<cmd ops="ins">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRODUCT_ID">230117</col>
<col name="DESCRIPTION">Hamsberry vintage tee, cherry</col>
<col name="PRICE">4099</col>
</row>
</cmd>
</tbl>
</opentarget>
Update record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218318728" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:12" userId="85" oracleTxnId="1.17.970754" />
<tbl name="MFG.PRODUCTS">
<cmd ops="upd">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRICE">3599</col>
<lkup>
<col name="PRODUCT_ID">230117</col>
<col name="PRICE">4099</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Delete record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319446" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:25" userId="85" oracleTxnId="5.23.1391276" />
<tbl name="MFG.PRODUCTS">
<cmd ops="del">
<row id="AAAmDbAAEAAApRrAAA">
<lkup>
<col name="PRODUCT_ID">230117</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Truncate record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319938" commitTime="1988-01-01T00:00:00" userId="85" oracleTxnId="11.4.939801" />
<tbl name="MFG.PRODUCTS">
<cmd ops="trunc" />
</tbl>
</opentarget>

File storage and aging

Post writes to a series of files. The name of the active working file is prefixed with the label current_, and the file is stored in the opx/current subdirectory of the variable-data directory.

Output Format   Name of Current File
SQL             current_legacy.sql
XML             current_prodsys.XML

Important: Do not open or edit the current_ file.

Post uses the max_records, max_size and max_time parameters to determine the point at which to start a new active file. When this switch occurs, Post moves the processed data to a sequenced file in the opx subdirectory of the variable-data directory. The file names include the name of the post queue, the time and date, and an incrementing ID.

SQL files:

/installed/vardir> ls -1 opx

0000000000_20140305130858_legacy.sql

0000000001_20140305131130_legacy.sql

0000000002_20140305131212_legacy.sql

0000000003_20140305133835_legacy.sql

0000000004_20140305134028_legacy.sql

XML files:

/installed/vardir> ls -1 opx

0000000000_20140305130858_prodsys.XML

0000000001_20140305131130_prodsys.XML

0000000002_20140305131212_prodsys.XML

0000000003_20140305133835_prodsys.XML

0000000004_20140305134028_prodsys.XML

To force a file switch

The current file cannot be viewed or consumed without stopping Post. To access the data in the current file, you can use the target command with the switch option to move the data to a sequenced file, from which it can then be consumed or viewed. After issuing this command, the switch occurs after Post processes a new record.

target x.file [queue queuename] switch

Configure a replication strategy


This chapter contains instructions for configuring SharePlex to support different replication objectives. Production implementations can vary widely, from basic configurations with one source and one target to multiple instances of SharePlex with named queues, multiple targets, partitioned data, and more.

It is difficult to foresee and document every possible way that an organization may want to deploy SharePlex. The goal of this documentation is to present instructions for setting up the basic deployment types in a way that is clear enough for you to be able to combine them and expand upon them to suit your needs. Additional deployment assistance is available through our Professional Services organization.
