
SharePlex 9.4 - Reference Guide


stop

Use the stop command to stop a SharePlex replication process gracefully, allowing it to finish reading from, or writing to, its associated queues. When you stop a replication process, data accumulates safely in the queues.

A process stopped with the stop command remains stopped even when SharePlex is shut down and restarted. It can be started only with the start command. When the process starts again, it resumes processing at the point where it stopped, so the data remains synchronized.

Note: The stop command differs from the abort service command in that it stops a replication process after the process has finished reading from, or writing to, its associated queue. The abort service command immediately terminates the process, whether or not the process is processing data.

Stopping the Capture process

Be careful when you stop the Capture process. The source and target data can go out of synchronization if all of the following occur:

  • Users continue changing the source data while Capture is stopped.

  • The Oracle redo logs wrap during that time.

  • The archive logs become unavailable.

Stopping Post at a specific point in time

Use one of the [at sourcetime] options to control when the Post process stops. Either of those options automatically stops Post when it receives the first message stamped with a designated time, or time and date. Subsequent messages accumulate in the post queue until Post is restarted.

You can use an [at sourcetime] option when a job is scheduled on the source system, and you want to be certain that the operations were successful there before they are applied to the target database. Set the option to stop Post just before the job is scheduled to run.

You also can use this option when you are running reports on the target system and do not want Post operations competing for system resources.

Because SharePlex is not synchronous, the actual time on the target system when Post stops probably will be later than the operation’s timestamp. The timing depends on how long it takes to post preceding messages in the queue.

Usage

Supported sources: Oracle
Supported targets: All
Authorization level: Operator (2); Administrator (1) required to stop Capture
Issued for:
  • stop for Capture, Read and Export is issued for the source system.
  • stop for Import and Post is issued for the target system.
Related commands: abort service, start

Syntax

Basic command Command options Remote options
stop service

[to host]

[from host]

[for datasource]

[for datasource-datadest]

[queue queuename]

[at sourcetime hh:mm:ss]

[at sourcetime mm/dd/yyyy hh:mm:ss]

[ on host |

on host:portnumber |

on login/password@host |

on login/password@host:portnumber ]

Syntax description

Component Description
service

The SharePlex process you want to stop. Valid values are:

  • Capture
  • Read
  • Export
  • Import
  • Post

Example:

sp_ctrl(sysA)> stop export

to host

This option stops Export to a designated target system, while allowing Export to other systems to continue.

  • to is a required part of the syntax.
  • host is the name of the target system.

Example:

sp_ctrl(sysA)> stop export to sysB

In this example, Export is stopped from sysA to sysB, but Export from sysA to other target systems continues.

from host

This option stops Import from a designated source system, while allowing Import from other systems to continue.

  • from is a required part of the syntax.
  • host is the name of the source system.

Example:

sp_ctrl(sysD)> stop import from sysC

In this example, Import from sysC is stopped on sysD, but Import on sysD from other systems continues.

for datasource

This option stops Capture or Read for a designated datasource.

  • for is a required part of the syntax.
  • datasource is expressed as o.SID where SID is an ORACLE_SID.

Example:

sp_ctrl(sysA)> stop read for o.oraA

In this example, Read is stopped on sysA for instance oraA, but other Read processes for other instances on sysA continue processing.

for datasource-datadest

This option stops Post for a designated source-target data stream.

  • for is a required part of the syntax.
  • datasource is expressed as o.SID where SID is an ORACLE_SID.

  • datadest is expressed as o.SID or r.database, where SID is an ORACLE_SID or database is the name of an Open Target database.

Example:

sp_ctrl(sysC)> stop post for o.oraA-o.oraC

In this example, Post is stopped on sysC for instance oraA replicating to oraC, but other Post processes on sysC continue posting.

queue queuename

This option stops the Export, Import or Post process associated with a named queue. Replication through other named queues continues unaffected.

  • queue is a required part of the syntax.
  • queuename is the user-defined name of the queue, as shown in the configuration file.

Example:

sp_ctrl(sysA)> stop export queue QA

In this example, Export of data through named export queue QA is stopped on sysA, but Export continues for data assigned to all other named queues.

at sourcetime hh:mm:ss

This option stops the Post process when it receives the first message stamped with the designated time. Subsequent messages accumulate in the post queue until Post is restarted.

  • at sourcetime is a required part of the syntax.
  • hh:mm:ss is the time stamp at which Post stops, which must be specified using a 24-hour clock. Pad single-digit components with a zero (0). Allow no spaces.

Example:

sp_ctrl(sysC)> stop post at sourcetime 24:00:00

In this example, Post stops on sysC when it receives the first message with the timestamp of midnight.

at sourcetime mm/dd/yyyy hh:mm:ss

This option stops the Post process when it receives the first message stamped with the designated time and date. It has the same purpose and functionality as stop post at sourcetime hh:mm:ss, with an added date option.

  • at sourcetime is a required part of the syntax.
  • mm/dd/yyyy is the date on which you want Post to stop at the designated time. The date component must precede the time component in the syntax. The year must include all four digits. Pad single-digit components with a zero (0). Allow no spaces. Separate this component from the time component with a space.
  • hh:mm:ss is the time stamp at which Post stops, which must be specified using a 24-hour clock. Pad single-digit components with a zero (0).

Example:

sp_ctrl(sysC)> stop post at sourcetime 01/31/2002 24:00:00

In this example, Post stops on sysC when it receives the first message stamped midnight, January 31, 2002.
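The padding rules above can be sketched with Python's strftime, which zero-pads each component the same way (the date and time values below are hypothetical):

```python
from datetime import datetime

# Hypothetical scheduled-job time: January 5, 2002 at 08:05:07.
# %m, %d, %H, %M, and %S are zero-padded, matching the
# mm/dd/yyyy hh:mm:ss syntax described above.
t = datetime(2002, 1, 5, 8, 5, 7)
stamp = t.strftime("%m/%d/%Y %H:%M:%S")
print(stamp)  # 01/05/2002 08:05:07
```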

Remote options

These options enable you to issue the command on a remote machine and to script commands that include a login name, password, port number, or combination of those items.

Option Description
on host

Execute the command on a remote system (one other than the one where the current sp_ctrl session is running). You are prompted for login credentials for the remote system. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)> status on SysA

on host:portnumber

Execute the command on a remote system when a remote login and port number must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)> status on SysA:8304

on login/password@host

Execute the command on a remote system when a remote login, password, and host name must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)> status on john/spot5489@SysA

on login/password@host:portnumber

Execute the command on a remote system when a remote login, password, host name, and port number must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)> status on john/spot5489@SysA:8304

target

Use the target command to configure Post to support a specific type of target. This is known as the target configuration. The properties stored in the target configuration control how Post processes, formats, and outputs replicated data.

The target command also provides options to reset or view target configuration settings.

The target command can control a target configuration either at the global level or, if the queue option is used, at the per-process level.

 

Important: Make certain to stop and then restart the Post process after using this command.

Usage

Supported sources: Oracle
Supported targets: All
Authorization level: Operator (2)
Issued on: target system
Related commands: connection

Syntax

Basic command Command options Remote options

target {o. | r. | x. | c.}target

[queue queuename]

{command_option}

{

{ set category property |

reset [category [property] ] |

show [category ] } |

switch

}

Not available

Syntax Description

Component Description
{o. | r. | x. | c.}target

The target for which you are configuring Post. The letter preceding the name of the target identifies the type of target, whether an Oracle database, Open Target database, JMS, Kafka broker, file, or change-history target.

Possible targets are:

o.SID

where: SID is the ORACLE_SID of an Oracle database.

r.database

where: database is the name (not the DSN) of an Open Target database.*

x.jms

where: jms is a Java Message Service (JMS) queue or topic.

x.kafka

where: kafka is an Apache Kafka broker.

x.file

where: file is a structured file that contains data in one of the supported formats (see format category).

c.SID

where: SID is the ORACLE_SID of a target Oracle database that is configured as a change-history target.

* Note: The SharePlex-supported datastore types, data types, and operations are listed in the SharePlex Release Notes.

queue queuename

Constrains the action of the command to the SharePlex Post process that is associated with the specified queue. This action overrides any global settings for the target property you are setting. Without the queue option, the target command affects all Post processes in the local SharePlex instance.

Example: the following sets metadata properties for the Post process that is associated with the myjms1 queue.

target x.jms queue myjms1 set metadata time, userid, trans, size

{

set category property |

reset [category [property]] |

show [category] |

switch

}

  • set applies a configuration property to the target configuration.
  • reset clears configuration settings back to their defaults (if a default exists). It can be constrained to a specific category and (where applicable) a specific property. Note: reset does not reset one property value to another. Use another set command for that.
  • show displays configuration settings.
  • category is any category of configuration properties. The properties are grouped by category for efficient management through commands. See Category descriptions.
  • property is the configuration property that you want to set, reset, or show. There are different properties per category.
  • switch applies only to file targets (x.file). Use this option to create a new active output file and save the current one as a serially numbered aged file. After issuing this command, the switch occurs after Post processes a new record. This option takes no input and is not associated with any configuration categories.

Category descriptions

Category Description
cdc

Overrides the global settings for an Oracle change-history target that are set with the SP_OPO_TRACK_PREIMAGE parameter for a specific table. See cdc category.

characterset

Directs the Post process to convert replicated data to the specified character set before it is posted to an Open Target database. See characterset category.

file

Sets the properties of a file target. See file category for more information.

format

Sets the format of the data that is written to the target. See format category for more information.

json

Sets the properties of JSON output when format record=json. See json category for more information.

jms

Sets the properties of a JMS target. See jms category for more information.

kafka

Sets the properties of a Kafka target. See kafka category for more information.

metadata

Adds metadata properties to the data that is replicated by SharePlex. See metadata category for more information.

resources

Sets parameters that affect resources on the target system. See resources category

rule | filter

Specifies a conditional statement that applies a tracking rule or filter to a specific table in an Oracle change-history target. See rule | filter category

source

Sets source identifier properties for an Oracle change-history target. See source category for more information.

sql

Sets the properties of SQL output when format record=sql. See sql category for more information.

cdc category

Supported targets

Oracle change-history target

Command options

target c.SID [queue queuename]

table tablename

{

set cdc preimage={yes | no}

reset cdc

show cdc

}

Usage

Overrides the global setting for updates that is set with the SP_OPO_TRACK_PREIMAGE parameter for a specific table. This controls whether the before image of update operations is tracked in the history table. Valid values are yes to track before images of updates, or no to exclude them.
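The other categories below include an Example; a hypothetical one for cdc follows (the SID oraC and table SCOTT.EMP are assumed names, not values from this guide):

```shell
# Hypothetical: exclude update before images for one table only,
# overriding the global SP_OPO_TRACK_PREIMAGE setting.
target c.oraC table SCOTT.EMP set cdc preimage=no
```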

characterset category

Supported targets

Open Target databases

Command options

target r.database [queue queuename]

{

set characterset character_set |

reset characterset character_set |

show characterset character_set

}

Usage

Directs the Post process to send replicated data to the local Oracle client to be converted to the specified character set before being posted to the target database. The specified character set overrides the default character set of Unicode that is used by Post when posting character data to Open Target targets.

Note: To use this option, there must be an Oracle client on the target system. For more information, see the Open Target Checklist section in the SharePlex Installation and Setup Guide.

Property Input Value Default
character_set

The Oracle equivalent of the target character set, specified by its Oracle name.

Unicode
Example

To post the data in the GB2312 character set on the target, specify the Oracle ZHS16GBK character set.

target r.mymss set characterset ZHS16GBK

file category

Supported targets

File

Command options

target x.file [queue queuename]

{

set file property=value |

reset file [property] |

show file

}

Usage

Sets the properties of a file target.

Property Input Value Default

location=pathname

Path name under the SharePlex variable-data directory where you want the file to be created

opx

max_records=number

Maximum size of the active file, measured by the number of records, before switching files.

50,000

max_size=megabytes

Maximum size of the file, measured in megabytes, before switching files.

50

max_time=seconds

Maximum number of seconds to wait before switching files.

300

record_length=number

Maximum size of a record, in number of characters

132

Example

target x.file set file max_size=320

format category

Supported targets

File

Kafka

Command options

target x.{file | kafka} [queue queuename]

{

set format property=value |

reset format [property] |

show format

}

Usage

Sets the format of the data that is written to the target.

Property Input Value Default

date=format

Date format

yyyy-MM-ddTHH:mm:ss

decimal=character

Decimal character

. (period)

enotation=notation

Exponential notation

14

record=record_format

Format of the output records. Valid values are sql, xml, or json.

xml

timestamp=format

Timestamp format

yyyy-MM-ddTHH:mm:ss.ffffffffff

Example

target x.file set format record=sql

target x.kafka set format record=json

Descriptions of output formats

The following are descriptions of the output formats provided by format record=record_format.

SQL record format

Every transaction in a SQL-formatted file is headed by a comment that includes the transaction sequence within the SQL file and a unique transaction ID. A comment line at the end of the SQL file gives the number of lines in the file. For example, the following is a SQL file with one transaction. In this example the transaction ID is 2-113319 and the file has nine lines.

/installed/vardir> cat opx/0000000010_20140305140820_legacy.sql

-- 0000000001 2-113319 03052014140813 03052014140813

DELETE FROM "ROBIN"."TEST_TYPES" WHERE ORA_NUMBER = '22345' AND ROWNUM = 1;

INSERT INTO "ROBIN"."TEST_TYPES" (ORA_NUMBER, ORA_DATE, ORA_RAW, ORA_ROWID,

ORA_FLOAT, ORA_CHAR, ORA_VARCHAR2, ORA_TIMESTAMP, ORA_TIMESTAMP_TZ,

ORA_TIMESTAMP_LTZ) VALUES('22345', '08132066000000', '0123456789ABCDEF'

, 'AAAAAAAAAAAAAAAAAA', '12350', 'Character ', 'Variable data'

, '10201998021300.22000', '06172002080000.00000', '06172002160000.00000');

COMMIT;

-- EOF 0000000009
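Because the header and trailer comments have a fixed shape, a post-processing script can recover the transaction ID and line count. A minimal sketch in Python (not a SharePlex utility; the file content here is shortened from the example above):

```python
import re

# Shortened SQL file: header comment, statements, trailer comment.
sql_file = """-- 0000000001 2-113319 03052014140813 03052014140813
DELETE FROM "ROBIN"."TEST_TYPES" WHERE ORA_NUMBER = '22345' AND ROWNUM = 1;
COMMIT;
-- EOF 0000000004
"""

# Header: sequence within the file, transaction ID, then two timestamps.
header = re.match(r"-- (\d+) (\S+) (\d+) (\d+)", sql_file)
seq, txn_id = header.group(1), header.group(2)

# Trailer: number of lines in the file.
line_count = int(re.search(r"-- EOF (\d+)", sql_file).group(1))

print(seq, txn_id, line_count)  # 0000000001 2-113319 4
```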

XML record format

The XML format is separated into operation and schema "types" for easier consumption. They are actually the same when viewed from an XSD perspective and are not distinct types. The template XML represents all possible attributes and elements. The individual XML represents the bare minimum output for each supported operation.

After startup, the first time that Post writes a change record for any given table, it first writes a schema record for that table. Each schema record contains the table name and details of interest for each column. A schema record is written only once for each table during a Post run, unless there is a change to that schema, in which case a new schema record is written. If Post stops and starts, schema records are written again, once for each table as Post receives a change record for it.

Schema record template
<?xml version="1.0" encoding="UTF-8" ?>  
<?opentarget version="1.0" ?>  
<opentarget>  
    <txn  
        id="xs:integer"  
        oracleTxnId="xs:string"   
        commitTime="xs:dateTimeStamp" />  
    <tbl  
        name="xs:string"  
        utcOffset="xs:integer">
        <cmd ops="schema">  
            <schema>  
                <col  
                    name="xs:string"  
                    xmlType="xs:string"  
                    key="xs:boolean"  
                    nullable="xs:boolean"  
                    length="xs:integer"  
                  />  
            </schema>  
        </cmd>  
    </tbl>  
</opentarget>

Table 1: Explanation of schema template (* = optional)

Element Attribute Description

txn

 

Transaction metadata

 

id

ID of current transaction

 

oracleTxnId *

Oracle transaction ID

 

commitTime*

Transaction commit timestamp

tbl

 

Table metadata

 

name

Fully qualified name of the table

 

utcOffset

UTC offset in the log

cmd

 

Operation metadata (In the case of a schema, there are no operations.)

 

ops

Type of record generated for this table. For a schema, the value is schema.

schema   Column metadata
col   Metadata for a column (One of these elements appears for every record in the table.)

 

name

Name of the column

 

xmlType

XML data type

 

key

Key flag (true, false)

 

nullable

Nullable flag

 

length

Length of the column

Operation record template
<?xml version="1.0" encoding="UTF-8" ?>
<?opentarget version="1.1" ?>
<opentarget>
    <txn
        id="xs:integer" 
        msgIdx="xs:integer" 
        msgTot="xs:integer" 
        oracleTxnId="xs:string"
        commitTime="xs:dateTimeStamp"
        userId="xs:string" />
    <tbl 
        name="xs:string">
        <cmd ops="xs:string">
            <row id="xs:string">
                <col name="xs:string"></col>
                <lkup>
                    <col name="xs:string"></col>
                </lkup>
            </row>
        </cmd>
    </tbl>
</opentarget>

Table 2: Explanation of operation template (* = optional)

Element Attribute Description
txn   Transaction metadata for the operation
  id ID of current transaction
  msgIdx Index of current record in the transaction
  msgTot* Total number of messages in transaction
  oracleTxnId * Oracle transaction ID, taken from the System Change Number (SCN)
  commitTime* Transaction commit timestamp
  userId * User ID that performed the operation
tbl   Table metadata
  name Fully qualified table name
cmd  

Operation metadata

  ops Operation type (insert, update, delete, truncate)
row   Metadata of the row that changed in the operation
  id Oracle ROWID
col   Change data for a column (One of these elements appears for every changed column in the operation.)
  name Column name with the after value for that column
lkup   Before image for use in update and delete operations
col   Before image of column (One of these elements appears for every changed column in the operation.)
  name Column name with the before value or the key value (depending on the operation) for that column

Note: The id and msgIdx attributes together uniquely identify an operation.

Supported data types

See the SharePlex Release Notes for a chart that shows how Oracle data types are converted to XML.

Sample XML records
Source table

This is the table for which the sample operations are generated.

SQL> desc products

Name

Null?

Type

PRODUCT_ID

NOT NULL

NUMBER

DESCRIPTION

 

VARCHAR2(600)

PRICE

 

NUMBER

Source DML operations
insert into products values (230117, 'Hamsberry vintage tee, cherry', 4099);
commit;
update products set price=3599 where product_id=230117 and price=4099;
commit;
delete products where product_id=230117;
commit;
truncate table products;
Schema record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS" utcOffset="-5:00">
<cmd ops="schema">
<schema>
<col name="PRODUCT_ID" xmlType="decimal" key="true" nullable="false" length="22" />
<col name="DESCRIPTION" xmlType="string" key="false" nullable="true" length="600" />
<col name="PRICE" xmlType="decimal" key="false" nullable="true" length="22" />
</schema>
</cmd>
</tbl>
</opentarget>
Insert record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218316945" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:18:43" userId="85" oracleTxnId="3.10.1339425" />
<tbl name="MFG.PRODUCTS">
<cmd ops="ins">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRODUCT_ID">230117</col>
<col name="DESCRIPTION">Hamsberry vintage tee, cherry</col>
<col name="PRICE">4099</col>
</row>
</cmd>
</tbl>
</opentarget>
Update record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218318728" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:12" userId="85" oracleTxnId="1.17.970754" />
<tbl name="MFG.PRODUCTS">
<cmd ops="upd">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRICE">3599</col>
<lkup>
<col name="PRODUCT_ID">230117</col>
<col name="PRICE">4099</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Delete record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319446" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:25" userId="85" oracleTxnId="5.23.1391276" />
<tbl name="MFG.PRODUCTS">
<cmd ops="del">
<row id="AAAmDbAAEAAApRrAAA">
<lkup>
<col name="PRODUCT_ID">230117</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>
Truncate record
<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218319938" commitTime="1988-01-01T00:00:00" userId="85" oracleTxnId="11.4.939801" />
<tbl name="MFG.PRODUCTS">
<cmd ops="trunc" />
</tbl>
</opentarget>
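Any standard XML parser can consume these records. A minimal sketch using Python's xml.etree.ElementTree to pull the transaction ID, operation type, after image, and before image out of the update record shown above (the opentarget processing instruction in the prolog is ignored by the parser):

```python
import xml.etree.ElementTree as ET

# Update record from the sample above.
record = """<?xml version="1.0" encoding="UTF-8"?>
<?opentarget version="1.1"?>
<opentarget>
<txn id="2218318728" msgIdx="1" msgTot="1" commitTime="2014-10-10T13:19:12" userId="85" oracleTxnId="1.17.970754" />
<tbl name="MFG.PRODUCTS">
<cmd ops="upd">
<row id="AAAmDbAAEAAApRrAAA">
<col name="PRICE">3599</col>
<lkup>
<col name="PRODUCT_ID">230117</col>
<col name="PRICE">4099</col>
</lkup>
</row>
</cmd>
</tbl>
</opentarget>"""

root = ET.fromstring(record)
txn_id = root.find("txn").get("id")      # transaction ID
cmd = root.find("tbl/cmd")
row = cmd.find("row")

op = cmd.get("ops")                      # operation type, e.g. "upd"
# Direct col children of row carry the after values.
after = {c.get("name"): c.text for c in row.findall("col")}
# lkup/col children carry the before (or key) values.
before = {c.get("name"): c.text for c in row.findall("lkup/col")}
print(op, after, before)
```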

JSON record format

Note: This is the default output, except that target x.kafka set json indent=2 was used to make the output more readable.

This is the table for which the sample operations are generated.

SQL> desc products

Name

Null?

Type

PRODUCT_ID

NOT NULL

NUMBER

DESCRIPTION

 

VARCHAR2(600)

PRICE

 

NUMBER

Source DML operations
insert into products values (230117, 'Hamsberry vintage tee, cherry', 4099);
commit;
update products set price=3599 where product_id=230117 and price=4099;
commit;
delete products where product_id=230117;
commit;
truncate table products;
Schema record
{
  "meta":{
    "op":"schema",
    "table":""
  },
  "schema":{
    "name":"BILL.PRODUCTS",
    "utcOffset":"-7:00",
    "PRODUCT_ID":{
      "jsonType":"decimal",
      "num":1,
      "key":1,
      "nullable":1,
      "length":22,
      "precision":0,
      "scale":0,
      "src_name":"PRODUCT_ID"
    },
    "DESCRIPTION":{
      "jsonType":"string",
      "num":2,
      "key":1,
      "nullable":1,
      "length":600,
      "precision":0,
      "scale":0,
      "src_name":"DESCRIPTION"
    },
    "PRICE":{
      "jsonType":"decimal",
      "num":3,
      "key":1,
      "nullable":1,
      "length":22,
      "precision":0,
      "scale":0,
      "src_name":"PRICE"
    }
  }
}
Insert record
{
  "meta":{
    "op":"ins",
    "table":"BILL.PRODUCTS"
  },
  "data":{
    "PRODUCT_ID":"230117",
    "DESCRIPTION":"Hamsberry vintage tee, cherry",
    "PRICE":"4099"
  }
}
Update record
{
  "meta":{
    "op":"upd",
    "table":"BILL.PRODUCTS"
  },
  "data":{
    "PRICE":"3599"
  },
  "key":{
    "PRODUCT_ID":"230117",
    "DESCRIPTION":"Hamsberry vintage tee, cherry",
    "PRICE":"4099"
  }
}
Delete record
{
  "meta":{
    "op":"del",
    "table":"BILL.PRODUCTS"
  },
  "data":{
    "PRODUCT_ID":"230117",
    "DESCRIPTION":"Hamsberry vintage tee, cherry",
    "PRICE":"3599"
  }
}
Truncate record
{
  "meta":{
    "op":"truncate",
    "table":"BILL.PRODUCTS"
  }
}
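Because each JSON record is a self-contained document, a consumer can route on the meta section directly. A minimal sketch with Python's json module, using a shortened version of the update record above (consumer wiring for Kafka or files is out of scope):

```python
import json

# Update record, shortened from the sample above (one line per record).
record = ('{"meta":{"op":"upd","table":"BILL.PRODUCTS"},'
          '"data":{"PRICE":"3599"},'
          '"key":{"PRODUCT_ID":"230117","PRICE":"4099"}}')

doc = json.loads(record)
op = doc["meta"]["op"]          # operation type, e.g. "upd"
table = doc["meta"]["table"]    # fully qualified table name
after = doc["data"]             # after image of the changed columns
before = doc.get("key", {})     # before image (updates and deletes)
print(op, table, after, before)
```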

jms category

Supported targets

JMS

Command options

target x.jms [queue queuename]

{

set jms property=value |

reset jms [property] |

show jms

}

Usage

Sets the properties of a JMS target.

Property Input Value Default

factory_class=factory_class

Required Fully qualified class name of the factory class. Sets the JNDI environmental property java.naming.factory.initial to specify the class name of the initial context factory for the provider.

None

provider_url=url

Required RMI URL with no object name component. This sets the JNDI environmental property java.naming.provider.url to specify the location of the registry that is being used as the initial context.

Use the correct format depending on your JMS provider and type of URL. For example, if using LDAP your URL might be similar to the following:

ldap://hostname.company.com/contextName

Ask your JMS Provider Administrator for the JMS Provider URL.

None

lib_location=path

Required Path to the directory where you installed the client library files.

None

destination={queue | topic}

Messaging domain. Valid values are queue (point-to-point) or topic (publish-subscribe model).

queue

factory_name=factory_name

Name of a JNDI connection factory lookup. You can specify multiple names with a comma-separated list, for example: (jndi.name1, jndi.name2).

None

user=user

Name of the user that is attaching to JMS. If authentication is not required, omit this and the password option.

None

password=password

Password of the JMS user.

None

queuename=JMS_topic_queuename

Name of the JMS queue or topic.

OpenTarget

persistent={yes | no}

yes logs messages to disk storage as part of send operations.

no prevents logging.

yes
session_transacted={yes | no}

no directs Post to issue a JMS commit for every replicated message, making each one immediately visible and consumable. This is the default.

yes directs Post to operate in a transactional manner. In this mode, Post issues a JMS commit (to make messages visible and consumable) at intervals based on the following:

  • Issue a commit before the data is read-released from the Post queue, as controlled by the value set for the SP_OPX_READRELEASE_INTERVAL parameter.
  • Issue a commit at every JMS write, as controlled by the value set for the SP_OPX_CHECKPOINT_FREQ parameter, until the SP_OPX_READRELEASE_INTERVAL is reached.

no
properties

Use this option if the JMS provider that you are using cannot consume messages that contain the default set of properties supplied by SharePlex. It enables you to remove or add properties. Supply the properties as a comma-delimited list.

  • To add a property, specify it as name=value.
  • To remove a SharePlex property, prefix the name with a dash. For example this string removes two SharePlex properties: -JMSXDeliveryCount,-JMSXGroupSeq.
None
client_id

Use this option if the JMS provider that you are using cannot consume messages that contain the default SharePlex client ID. Set this value to the client ID that your provider accepts.

None
commit_frequency

Use this option when Post is configured to post to a JMS server in transactional style (issue a JMS commit at intervals, rather than after every message as directed by the session_transacted property). This parameter specifies the interval between JMS commits. It works in conjunction with the SP_OPX_READRELEASE_INTERVAL parameter. Valid values are 1 to any positive integer.

1500
Example

target x.jms set jms queuename=SharePlexJMS

json category

Supported targets

Kafka (format record=json)

Command options

target x.{file | kafka} [queue queuename]

{

set json property=value |

reset json [property] |

show json

}

Usage

Sets the properties of JSON output when format record=json.

Property Input Value Default

before

  • yes includes the before image of the changed row.
    It produces a record similar to the following, with the before image at the end of the line:

    {"meta":{"op":"upd","table":"BILL.JSON"},"data":{"C2":"Bye"},
    "key":{"C1":"1","C2":"Hello"}}
    {"meta":{"op":"upd","table":"BILL.JSON"},
    "data":{"C1":"2"},"key":{"C1":"1"}}
    
  • no excludes the before image of the changed row. It produces a record similar to the following:

    {"meta":{"op":"upd","table":"BILL.JSON"},"data":{"C2":"Bye"}}
    {"meta":{"op":"upd","table":"BILL.JSON"},"data":{"C1":"2"}}
    

yes

commit

  • yes includes the commit records. It produces a record similar to the following:

    {"meta":{"op":"commit","table":""}}
  • no omits commit records.

IMPORTANT! When commit is set to yes, there can only be one target topic.

yes

ddl
  • yes includes ALTER TABLE commands. It produces a record similar to the following:

    {"meta":{"op":"ddl","table":"BILL.JSON"},
    "sql":{"ddl":"alter table \"BILL\".\"JSON\""}
    
  • no omits ALTER TABLE commands.

yes

eol

yes includes an end-of-line character at the end of the JSON document.

no excludes an end-of-line character.

yes

indent

Controls the level of indentation of the records.

  • A value of 0 places the entire record on one line, as follows:

    {"meta":{"op":"ins","table":"BILL.JSON"},
    "data":{"C1":"1","C2":"Hello ",
    "C3":"There","C4":"2017-03-20T09:46:34",
    "C5":"2017-03-20T09:46:34.735370000"}}

  • A value greater than 0 indents the metadata (if included) and data lines by the specified number of characters.
    It splits the record logically onto multiple lines, as in the following example (indent=4):

    {
        "meta":{
            "op":"ins",
            "table":"BILL.JSON"
        },
        "data":{
            "C1":"1",
            "C2":"Hello",
            "C3":"There",
            "C4":"2017-03-20T10:02:37",
            "C5":"2017-03-20T10:02:37.456949000"
        }
    }
    

0

meta
  • yes includes the metadata section of the JSON record. It produces a record similar to the following:

    {"meta":{"op":"ins","table":"BILL.JSON"},
    "data":{"C1":"1","C2":"Hello","C3":"There",
    "C4":"2017-03-20T10:04:44",
    "C5":"2017-03-20T10:04:44.957758000"}}
    {"meta":{"op":"upd","table":"BILL.JSON"},"data":{"C2":"Bye"}}
    {"meta":{"op":"upd","table":"BILL.JSON"},"data":{"C1":"2"}}
    {"meta":{"op":"del","table":"BILL.JSON"},"data":{"C1":"2"}}
    {"meta":{"op":"commit","table":""}}
    
  • no omits the metadata section and produces a record similar to the following:

    {"data":{"C1":"1","C2":"Hello","C3":"There",
    "C4":"2017-03-20T10:05:09",
    "C5":"2017-03-20T10:05:09.268094000"}}
    {"data":{"C2":"Bye"}}
    {"data":{"C1":"2"}}
    {"data":{"C1":"2"}}
    {}

Note: To specify which metadata fields appear in the record, use the set metadata option of the target command. See metadata category.

Example:

target x.kafka set metadata time,userid,op,scn

yes
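As an illustration of consuming this output, the following Python sketch parses records like the ones above and dispatches on the meta.op field. It assumes eol=yes and indent=0, so each record arrives on its own line; the record strings are abridged from the examples in this section, and this is a hypothetical consumer, not SharePlex code.

```python
import json

# Hypothetical consumer-side sketch: with eol=yes and indent=0, each JSON
# record arrives on its own line. When meta=yes, dispatch on meta.op.
# Record strings are abridged from the examples in this section.
records = [
    '{"meta":{"op":"ins","table":"BILL.JSON"},"data":{"C1":"1","C2":"Hello","C3":"There"}}',
    '{"meta":{"op":"upd","table":"BILL.JSON"},"data":{"C2":"Bye"}}',
    '{"meta":{"op":"del","table":"BILL.JSON"},"data":{"C1":"2"}}',
    '{"meta":{"op":"commit","table":""}}',
]

def apply_record(line):
    rec = json.loads(line)
    op = rec["meta"]["op"]
    if op == "commit":
        return "commit"          # transaction boundary; no table or data
    return f"{op} on {rec['meta']['table']}: {rec['data']}"

results = [apply_record(r) for r in records]
```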

schema
  • yes includes schema records.
  • no omits schema records.

    A schema record for objects in replication is produced when a new SharePlex
    configuration is activated or when a table has a DDL change.
    A value of yes (with indent=4) produces a record similar to the following:

    {
        "meta":{
            "op":"schema",
            "table":""
        },
        "schema":{
            "name":"BILL.JSON",
            "utcOffset":"-7:00",
            "C1":{
                "jsonType":"decimal",
                "num":1,
                "key":1,
                "nullable":0,
                "length":22,
                "precision":0,
                "scale":0,
                "src_name":"C1"
            },
            "C2":{
                "jsonType":"string",
                "num":2,
                "key":0,
                "nullable":1,
                "length":10,
                "precision":0,
                "scale":0,
                "src_name":"C2"
            },
            "C3":{
                "jsonType":"string",
                "num":3,
                "key":0,
                "nullable":1,
                "length":10,
                "precision":0,
                "scale":0,
                "src_name":"C3"
            }
        }
    }

yes

Example

target x.kafka set json meta=no
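A consumer can use the schema record to discover column order and key columns before processing data records. The following sketch builds that mapping; the record structure and field names are taken from the schema example above (abridged), and the code itself is an illustration, not part of SharePlex.

```python
import json

# Sketch: derive column order and key columns from a schema record.
# Structure and field names follow the example above (abridged).
schema_record = json.loads('''
{"meta":{"op":"schema","table":""},
 "schema":{"name":"BILL.JSON","utcOffset":"-7:00",
  "C1":{"jsonType":"decimal","num":1,"key":1,"nullable":0,
        "length":22,"precision":0,"scale":0,"src_name":"C1"},
  "C2":{"jsonType":"string","num":2,"key":0,"nullable":1,
        "length":10,"precision":0,"scale":0,"src_name":"C2"}}}
''')

sch = schema_record["schema"]
# Column entries are the dict-valued fields; "name" and "utcOffset" are scalars.
columns = {k: v for k, v in sch.items() if isinstance(v, dict)}
ordered = sorted(columns, key=lambda c: columns[c]["num"])
key_columns = [c for c in ordered if columns[c]["key"] == 1]
```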

kafka category

Supported targets

Kafka

Command options

target x.kafka [queue queuename]

{

set kafka property=value |

reset kafka [property] |

show kafka

}

Usage

Sets the properties of a Kafka target.

Property Input Value Default
broker=broker

Required. The host and port number of the Kafka broker, or a comma-delimited list of multiple brokers. This list is the bootstrap into the Kafka cluster. As long as Post can connect to one of these brokers, it will discover any other brokers in the cluster.

localhost:9092
client_id=ID

Optional. A user-defined string that Post will send in each request to help trace calls.

None
compression.code={none, gzip, snappy} Optional. Controls whether data is compressed in Kafka. Options are none, gzip or snappy. None
partition={number | rotate | rotate trans}

Required. One of the following:

  • A fixed partition number: Directs Post to post messages only to the specified partition number. For example, setting it to 0 directs Post to post only to partition 0. This option is suitable for use in testing or if the target has multiple channels of data posting to the same Kafka topic.
  • The keyword rotate: Directs Post to apply messages to all of the partitions of a topic in a round-robin fashion. The partition changes with each new message. For example if a topic has three partitions, the messages are posted to partitions 0,1,2,0,1,2, and so on in that order.
  • The keyword rotate trans: This is similar to the rotate option, except that the partition is incremented with each transaction rather than with each message. For example, if a topic has three partitions, the messages are posted to partition 0 until the commit, then to partition 1 until the commit, and so on in that order. This option is suitable if you are replicating multiple tables to a single topic. It allows you to distribute data across several partitions, while still preserving all of the operations of a transaction together in a single partition. This enables a consumer that reads from a single partition to receive a stream of complete transactions.
0
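The three partition strategies described above can be sketched as follows. This is an illustration of the documented behavior, not SharePlex code; messages are modeled as (operation, is_commit) pairs, and the topic is assumed to have three partitions.

```python
# Sketch of the fixed, rotate, and rotate trans partition strategies.
def assign_partitions(messages, num_partitions, mode, fixed=0):
    out, p = [], 0
    for op, is_commit in messages:
        if mode == "fixed":
            out.append(fixed)                 # everything to one partition
        elif mode == "rotate":
            out.append(p)
            p = (p + 1) % num_partitions      # advance on every message
        elif mode == "rotate trans":
            out.append(p)
            if is_commit:                     # advance only at transaction end
                p = (p + 1) % num_partitions
    return out

# Two transactions: ins, upd, commit / ins, commit
msgs = [("ins", False), ("upd", False), ("commit", True),
        ("ins", False), ("commit", True)]
```

With rotate, the five messages land on partitions 0, 1, 2, 0, 1; with rotate trans, the first transaction stays on partition 0 and the second on partition 1.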
request.required.acks=value Optional. This is a Kafka client parameter. By default it is set to a value of -1, which means all. Consult the Kafka documentation about this subject, because all really means all in-sync replicas. This parameter can be used in conjunction with the min.insync.replicas broker parameter to tune behavior between availability and data consistency. Important: It is possible for data to be lost between a Kafka producer (SharePlex in this case) and a Kafka cluster, depending on these settings. -1
topic=topic_name

Required. The name of the target Kafka topic.

This string may contain the special sequences %o or %t. The %o sequence is replaced by the owner name of the table that is being replicated. The %t sequence is replaced by the name of the table that is being replicated. This feature may be used in conjunction with the Kafka server setting auto.create.topics.enable set to 'true'. Also check your server settings for default.replication.factor and num.partitions, because these are used as defaults when topics are auto-created.

Important! If using multiple topics, you must also set the following properties with the target command:

  • The output must be in JSON. Set the record property of the format category to json:

    target x.kafka set format record=json

  • Commits must be disabled. Set the commit property of the json category to no:

    target x.kafka set json commit=no

shareplex

* To avoid latency, if Post detects no more incoming messages, it sends the packet to Kafka immediately without waiting for the threshold to be satisfied.
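The %o/%t substitution can be illustrated with a short sketch. The template string and the owner and table names here are hypothetical; only the expansion rule follows the description above.

```python
# Sketch of the %o/%t topic substitution described above: the topic string
# is expanded per replicated table. Owner/table values are hypothetical.
def expand_topic(template, owner, table):
    return template.replace("%o", owner).replace("%t", table)

topic = expand_topic("shareplex.%o.%t", "BILL", "JSON")  # "shareplex.BILL.JSON"
```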

Example

sp_ctrl> target x.kafka set kafka broker=host1:9092,host2:9092,host3:9092

sp_ctrl> target x.kafka set kafka topic=shareplex

metadata category

Supported targets

Oracle replication targets

Oracle change-history targets

HANA replication targets

File

JMS

Kafka

Command options

target {o.SID | c.SID | r.database | x.file | x.jms | x.kafka} [queue queuename]

{

set metadata [colname:]property[, ...] |

reset metadata |

show metadata

}

Usage

The target command with set metadata can be used to override the default metadata properties that are set for a database target, an XML or JSON file target, a JMS target, or a Kafka target. All of these target types provide metadata in the Post output.

  • Oracle and HANA replication targets: Metadata can be configured for Oracle and HANA replication targets. The metadata columns must exist on the target. SharePlex does not add the metadata columns.
  • Oracle change-history target: If no target specification is made, Post updates the default metadata columns, which must exist under their default names before replication starts. In the following table, a default column name is listed for each property that is included in the change-history target by default. Use set metadata only to add additional metadata or to assign a custom name to a column (use the colname option).
  • For targets that support XML input (JMS, Kafka, file) the metadata is formatted per the XML schema. Certain metadata is included in the output by default, as shown in the following table, without the need to use set metadata. Use set metadata only to add additional metadata.
  • For targets that support JSON input (Kafka), the op and table properties are included by default unless you set json meta=no.

For more information about configuring SharePlex to support a change history target database, see the SharePlex Administrator Guide.

To set multiple metadata properties, specify them as a comma-separated list.

Property Value inserted by Post Valid for
time

Time the operation was applied on the source, as supplied by Oracle.

Important! In Oracle version 11g and later, the time appears only in transactional boundary records, for example the start of a transaction. Because both the Oracle redo logs and SharePlex replication preserve transactional order, SharePlex applies the timestamp contained in the originating record to subsequent records until it encounters a record with a different timestamp.

Default column name: SHAREPLEX_SOURCE_TIME

data type: TIMESTAMP

This property is included by default in XML output and change-history tables.

Oracle replication target

Change-history target

JSON and XML output

userid

User ID that performed the operation

Default column name: SHAREPLEX_SOURCE_USERID

data type: NUMBER

This property is included by default in XML output and change-history tables.

Oracle replication target

Change-history target

JSON and XML output

op

Type of operation (INSERT, UPDATE, DELETE, TRUNCATE, DROP COLUMN, UPDATE BEFORE, UPDATE AFTER). For JMS implementations, this field is always added to the XML output as a field named SPOps. To add it to the JMS properties, issue an explicit target command with the op option.

Default column name: SHAREPLEX_SOURCE_OPERATION

data type: VARCHAR2

This property is included by default in XML and JSON output and change-history tables.

Oracle replication target

Change-history target

JSON and XML output

scn

Source SCN for when the operation was applied

Default column name: SHAREPLEX_SOURCE_SCN

data type: NUMBER

This property is included by default in change-history tables.

Oracle replication target

Change-history target

HANA replication target

JSON output

rowid

ROWID of the row that changed

Default column name: SHAREPLEX_SOURCE_ROWID

data type: ROWID

This property is included by default in change-history tables.

Oracle replication target

Change-history target

JSON output

trans

Transaction ID for the operation. For JMS implementations, this field is added to both the XML output and the JMS properties as a field named SPTxnId.

Default column name: SHAREPLEX_SOURCE_TRANS

data type: VARCHAR2

This property is included by default in change-history tables.

Oracle replication target

Change-history target

JSON output

seq

Order of the operation within the transaction

Default column name: SHAREPLEX_OPERATION_SEQ

data type: NUMBER

This property is included by default in change-history tables.

Oracle replication target

Change-history target

JSON output

host

Name or IP address of the source host

Default column name: SHAREPLEX_SOURCE_HOST

data type: VARCHAR2

This property is included by default in change-history tables.

Oracle replication target

Change-history target

JSON output

posttime The time that the operation was posted to the target. JSON output
queue

Name of the SharePlex queue

Default column name: SHAREPLEX_QUEUENAME

data type: VARCHAR2

This property is included by default in change-history tables.

Oracle replication target

Change-history target

JSON output

source

User-defined source identifier that was set with the set source option of target. See source category.

Default column name: SHAREPLEX_SOURCE_ID

data type: VARCHAR2

This property is included by default in change-history tables.

Oracle replication target

Change-history target

JSON output

changeid

Unique sequential ID of the operation

Default column name: SHAREPLEX_CHANGE_ID

data type: NUMBER

This property is included by default in change-history tables.

Oracle replication target

Change-history target

size

Number of operations in the transaction.

 

JSON output

Kafka

table

The name of the target table. For JMS implementations, this name is added to the JMS properties as a field named SPTblName.

This property is included by default in JSON output.

JMS

JSON and XML output

idx The index of this operation within the transaction. For JMS implementations, this is added to the JMS properties as a field named SPMsgIdx, formatted as m/n, where m is the operation number within the transaction and n is the total number of operations in the transaction.

JMS

JMS and Kafka set metadata example

The following configures all Post processes to add the IP address or host name of the source, the name of the SharePlex queue, and the number of operations in the transaction to the output of a JMS or Kafka target.

target x.jms set metadata host, queue, size

or...

target x.kafka set metadata host, queue, size

Oracle set metadata example

The following example sets the Post process associated with the myqueue1 queue to add the time of the operation and the userid that executed it to the Timestamp and User columns, both of which are user-defined names that are case-sensitive.

target c.targSID queue myqueue1 set metadata time:"Timestamp", userid:"User"

resources category

Supported targets

All targets

Command options

target {o. | r. | x. | c.} target [queue queuename]

{

set resources property=value |

reset resources [property] |

show resources [for datastore]

}

Usage

Sets parameters that affect resources on the target system.

Property Input Value Default

commit_frequency=number_of_operations

Specifies a maximum number of operations after which Post issues a commit. Can be any integer greater than 1. Can be used to divide a large transaction into two or more smaller ones, each containing, at most, the specified number_of_operations. This option can work around resource limits that affect large transactions, such as the number of row locks permitted per transaction.

50000
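The effect of commit_frequency can be sketched as follows: a transaction of n operations is applied as ceil(n / commit_frequency) sub-transactions, each containing at most commit_frequency operations. This is an illustration of the arithmetic only, not SharePlex code.

```python
import math

# Sketch: split a transaction of n_operations into sub-transactions of at
# most commit_frequency operations each (illustration of the property above).
def sub_transactions(n_operations, commit_frequency):
    full, rest = divmod(n_operations, commit_frequency)
    sizes = [commit_frequency] * full + ([rest] if rest else [])
    assert len(sizes) == math.ceil(n_operations / commit_frequency)
    return sizes

# A 120,000-operation transaction with the default of 50,000 commits 3 times.
sizes = sub_transactions(120000, 50000)  # [50000, 50000, 20000]
```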

max_active_statements=number_of_cursors Specifies the number of concurrent active SQL statements that Post can process to a target database in one session. Can be any integer up to, and including, the maximum number of active statements permitted by the database or client driver. It is only used by Post when the SP_OPX_SQL_CACHE_DISABLE parameter is set to 0 (enabled).

16

Examples

target r.mydb queue q1 set resources commit_frequency=10000

target r.mydb queue q1 set resources max_active_statements=10

target r.mydb show resources for teradata

rule | filter category

Supported targets

Oracle change-history target

Command options

target c.SID [queue queuename]

table tablename

{

set {rule | filter} { 'column_condition' | !filename }

reset {rule | filter}

show {rule | filter}

}

Usage

Specifies a conditional statement that applies a tracking rule or filter to a specific target table. It applies to UPDATE operations and qualifies the data that you want to track to the target table while excluding data that is not of interest. There can be one rule or filter per target table, but you can combine nested expressions with parentheses and the AND, OR, and NOT logical connectives to create a wide variety of conditions.

Syntax element Description
rule | filter
  • rule directs Post to insert a row only if the UPDATE operation matches the column_condition. If the UPDATE does not match the column condition, the row is discarded.
  • filter directs Post to insert a row only if the UPDATE does not match the column_condition. If the UPDATE does match the column condition, the row is discarded.
table tablename The name of the target table for which you are setting the rule or filter.

column_condition

The conditional logic that must be satisfied by the UPDATE operation. column_condition represents the entire conditional statement, which must be enclosed within single quotes. A column condition can be one or any combination of the following expressions, each enclosed within parentheses:

(column is changed)

(column is not changed)

([column1, column2, ...] matches change_list)

([column1, column2, ...] contains change_list)

 

Where:

  • column is the name of a target column. Use a comma to separate column names in a list, and enclose a list within brackets.
  • is changed means that the specified column(s) are changed in the UPDATE operation.
  • is not changed means that the specified column(s) are not changed in the UPDATE operation.
  • change_list is a logical representation of the source columns that changed.
  • matches change_list is satisfied when the condition is an exact match to the change list (no fewer columns than the change list, no more columns than the change list, same names).
  • contains change_list is satisfied when the condition contains all of the columns in the change list. The change list can be a subset of the columns in the condition, but it cannot include columns that are not named in the condition. Names must match.
!filename Enables you to store a large column condition in a text file, and then specify the file, rather than the entire column condition syntax, when you run the target command.

Note: Batch and direct load operations are not supported by the rule/filter feature.

Examples

These commands show some examples of how to set different rules by combining expressions:

target c.mySID table mytable set rule '(col3 is not changed) and (col5 is not changed)'

target c.mySID table mytable set rule '([col1, col3] matches change_list)'

target c.mySID table mytable set rule 'not ([col2, col5] contains change_list)'

 

The following table shows the different ways that rule and filter work to include or discard rows.

Rule Updated columns Row inserted?
rule = '(c2 is not changed)' c2 no
rule = '(c2 is not changed)' c3 yes
rule = '(c2 is changed)' c2 yes
rule = '(c2 is changed or c4 is changed)' c1 no
rule = '(c2 is changed or c4 is changed)' c1,c3,c5 no
rule = '(c2 is changed or c4 is changed)' c2 yes
rule = '(c2 is changed or c4 is changed)' c3,c4,c5 yes
rule = '([c3,c6,c7,c8] matches change_list)' c3,c6,c7,c8 yes
rule = '([c3,c6] matches change_list)' c3,c6,c7 no
rule = '([c3,c6,c7] contains change_list)' c3,c6 yes
rule = '([c3,c6] contains change_list)' c3,c8 no
     
Filter Updated columns Row inserted?
filter = '(c2 is not changed)' c2 yes
filter = '(c2 is not changed)' c3 no
filter = '(c3 is changed and c4 is changed)' c1 yes
filter = '(c3 is changed and c4 is changed)' c3,c4 no
filter = '(c3 is changed and c4 is changed)' c1,c2,c3 yes
filter = '([c3,c6,c7] matches change_list)' c3,c6 yes
filter = '([c3,c6,c7] matches change_list)' c3,c6,c7 no
filter = '([c3,c6] contains change_list)' c3,c6 no
filter = '([c3,c6,c7] contains change_list)' c3,c8 yes
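The matches and contains semantics, and the way rule and filter invert each other, can be sketched as follows. Here change_list is modeled as the set of columns updated by the operation; this is an illustration checked against rows of the tables above, not SharePlex's evaluator.

```python
# Sketch of the matches/contains semantics for rule and filter conditions.
def matches(cond_cols, change_list):
    # Satisfied only when the changed columns are exactly the named columns.
    return set(cond_cols) == set(change_list)

def contains(cond_cols, change_list):
    # Satisfied when every changed column is named in the condition.
    return set(change_list) <= set(cond_cols)

def row_inserted(kind, satisfied):
    # rule inserts the row when the condition is satisfied;
    # filter discards the row when the condition is satisfied.
    return satisfied if kind == "rule" else not satisfied
```

For example, rule '([c3,c6,c7] contains change_list)' with updated columns c3,c6 inserts the row, while the same condition as a filter discards it.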

source category

Supported targets

Oracle change-history target

Command options

target c.SID [queue queuename]

{

set source sourceID |

reset source |

show source

}

Usage

Sets the user-defined source identifier for the SHAREPLEX_SOURCE_ID metadata column. Use target with the set metadata source option to configure Post to populate this column. Specify any single alphanumeric string.

Example

The following shows the use of set source to support a configuration where the data from multiple source databases is being replicated to the same target database. This target configuration causes each Post process to update the SHAREPLEX_SOURCE_ID column with a different source ID, depending on the source database, to differentiate the data in the target database. This example uses four Post processes.

target c.target1 queue source718 set source east

target c.target1 queue source92 set source south

target c.target1 queue source101 set source west

target c.target1 queue source75 set source north

The Post process for queue "source718" will update column SHAREPLEX_SOURCE_ID with the value "east" for every insert or update to a table in its stream. The Post process for queue "source92" will update column SHAREPLEX_SOURCE_ID with the value "south", and so on for the other two Post processes.

sql category

Supported targets

File (format record=sql)

Command options

target x.file [queue queuename]

{

set sql property=value |

reset sql [property] |

show sql

}

Usage

Sets the properties of SQL output when format record=sql.

Property Input Value Default

add_rownum={yes | no}

yes to include or no to exclude row numbers

yes

begin_transaction={yes | no}

yes to include or no to exclude begin transaction records

no

comment=character

Character that marks a comment

--

concatenate=character

Character that concatenates strings

||

end_transaction={yes | no}

yes to include or no to exclude end transaction records

no

legacy={yes | no}

Required in SharePlex 8.6. yes uses the legacy SQL date and timestamp formats:

MMDDYYYYHH24MISS and MMDDYYYYHH24MISS.FFFFFF

no

name_delimiter=character

Character that delimits SID, owner, table, and column names. This value overrides the value that is returned when Post queries the ODBC driver for the setting of SQL_DELIMITER.

none

record_terminator=character

Character that terminates the SQL

;

Example

target x.file set sql comment=#

trace capture

Use the trace capture command to output Capture processing and performance statistics to a trace file.

The Capture trace includes:

  • Details of the trace itself
  • Time that Capture spent on various tasks overall and per thread

Usage

Supported sources: Oracle
Supported targets: All
Authorization level: Operator (2)
Issues for: source system
Related commands: trace read, trace post

Syntax

Basic command Command options Remote options
trace capture

[minutes]

[for datasource]

[ on host |

on host:portnumber |

on login/password@host |

on login/password@host:portnumber ]

Syntax description

Component Description
minutes

The number of minutes for which you want to run the trace. The default is 15 minutes.

for datasource The datasource for which you want to run the Capture trace. Only required if running multiple Captures in the same SharePlex instance. Must follow minutes, if used.

Remote options

These options enable you to issue the command on a remote machine and to script commands that include a login name, password, port number, or combination of those items.

Option Description
on host

Execute the command on a remote system (one other than the one where the current sp_ctrl session is running). You are prompted for login credentials for the remote system. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on SysA

on host:portnumber

Execute the command on a remote system when a remote login and port number must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on SysA:8304

on login/password@host

Execute the command on a remote system when a remote login, password, and host name must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on john/spot5489@SysA

on login/password@host:portnumber

Execute the command on a remote system when a remote login, password, host name, and port number must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on john/spot5489@SysA:8304

Output

When the trace is complete, the process writes the trace data to a file in the log subdirectory of the variable-data directory in the following format:

process_id_trace_time

For example:

orcl_ocap_trace_Sep30_15_24_2014

where orcl is the datasource and ocap is the name of the Capture process.

Example

The following command runs the trace for datasource myora for one minute.

trace capture 1 for myora

This command produces trace output similar to the following:

 

trace post

Use the trace post command to output Post processing and performance statistics to a trace file.

The Post trace includes:

  • Details of the trace itself
  • Statistics on transaction activity
  • Detail statistics per operation type

Usage

Supported sources: Oracle
Supported targets: All
Authorization level: Operator (2)
Issues for: target system
Related commands: trace capture, trace read

Syntax

Basic command Command options Remote options
trace post

[minutes]

[for datasource-datadest]

[queue queuename]

[ on host |

on host:portnumber |

on login/password@host |

on login/password@host:portnumber ]

Syntax description

Component Description
minutes

The number of minutes for which you want to run the trace. The default is 15 minutes.

for datasource-datadest

Constrains the trace to a specific Post process, as identified by the source and target datastores.

  • Required if running multiple Post processes in the same SharePlex instance (replication from different sources) or if running multiple Post processes to different target databases. Must follow minutes, if used.
  • datasource is expressed as o.SID where SID is an ORACLE_SID.

  • datadest is expressed as one of the following, depending on the target:

    o.ORACLE_SID

    r.database_name

    x.kafka

    x.jms

    x.file

queue queuename

Constrains the trace to the Post process that is associated with the specified named queue.

Remote options

These options enable you to issue the command on a remote machine and to script commands that include a login name, password, port number, or combination of those items.

Option Description
on host

Execute the command on a remote system (one other than the one where the current sp_ctrl session is running). You are prompted for login credentials for the remote system. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on SysA

on host:portnumber

Execute the command on a remote system when a remote login and port number must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on SysA:8304

on login/password@host

Execute the command on a remote system when a remote login, password, and host name must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on john/spot5489@SysA

on login/password@host:portnumber

Execute the command on a remote system when a remote login, password, host name, and port number must be provided. If used, must be the last component of the command syntax.

Example: sp_ctrl(sysB)>status on john/spot5489@SysA:8304

Output

When the trace is complete, the process writes the trace data to a file in the log subdirectory of the variable-data directory in the following format:

process_id_trace_time

For example:

orcl_orcl2_opo_trace_Feb_5_17_24_2014

where orcl is the source datasource, orcl2 is the target datasource, and opo is the name of the Post process.

Example

The following command runs the trace for datasources myora and myora2 for one minute.

trace post 1 for myora-myora2

This command produces trace output similar to the following:

 
