
SharePlex Connector for Hadoop 8.5.6 - Installation Guide

conn_cleanup.sh

Use this script to delete all data (HDFS, HBase and Hive) from a table under replication by SharePlex Connector for Hadoop.

Shell Script Usage

conn_cleanup.sh -t <TABLE_OWNER.TABLE_NAME> [-h <HIVE_HOME_DIR>] [--help] [--version]

Options

Parameter

Description

-t <TABLE_OWNER.TABLE_NAME>

Owner and name of the table whose data is to be cleaned up.

-h <HIVE_HOME_DIR>

Path to the Hive home directory.

If not specified, the value of the HIVE_HOME environment variable is used. If neither this option nor the HIVE_HOME environment variable is set, the Hive home directory is assumed to be relative to HADOOP_HOME.

--help

Show this help and exit.

--version

Show version information and exit.

Example

[user@host bin]$ ./conn_cleanup.sh -t Schema.Table
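You can also pass the Hive home directory explicitly with -h. The path shown below is illustrative; substitute the Hive installation directory on your system.

[user@host bin]$ ./conn_cleanup.sh -t Schema.Table -h /usr/local/hive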

uninstall.sh

Use this script to uninstall SharePlex Connector for Hadoop.

TIP: It is recommended that you run the uninstall script from outside the SharePlex Connector for Hadoop home directory.

Shell Script Usage

uninstall.sh [-q] [-h <HIVE_HOME_DIR>] [-t <LIST_OF_TABLES> | -a] [--help] [--version]

Options

Parameter

Description

-q

Delete the point-to-point JMS Queue to which SharePlex Connector for Hadoop listens.

-h <HIVE_HOME_DIR>

Path to the Hive home directory. Use this parameter in conjunction with -t or -a when Hive tables are involved in the cleanup.

If not specified, the value of the HIVE_HOME environment variable is used. If neither this option nor the HIVE_HOME environment variable is set, the Hive home directory is assumed to be relative to HADOOP_HOME.

-t <LIST_OF_TABLES>

Delete all data (HDFS, HBase and Hive) from the list of tables under replication by SharePlex Connector for Hadoop.

Use a comma to delimit the list of tables.

NOTE: This option uses conn_cleanup.sh.

-a

Delete all data (HDFS, HBase and Hive) from all tables under replication by SharePlex Connector for Hadoop.

NOTE: This option uses conn_cleanup.sh.

--help

Show this help and exit.

--version

Show version information and exit.

Example

[user@host bin]$ ./uninstall.sh
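The options described above can be combined, for example to delete the JMS queue and clean up the data of specific tables in the same run. The table names below are placeholders.

[user@host bin]$ ./uninstall.sh -q -t Schema.Table1,Schema.Table2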

SharePlex Connector for Hadoop Log Files

SharePlex Connector for Hadoop uses the Apache log4j library as its logging utility. Logs are created in the logs directory under the SharePlex Connector for Hadoop home directory.

Log File Description

shareplex-connector.log

Contains all log messages logged with levels DEBUG and above.

shareplex-connector-alert.log

Contains ALERT log messages that may require action, such as:

  • Data inconsistency observed during replication
  • Replication for a table failed
  • Schema of a table changed (alter)
  • High-priority messages and unhandled critical exceptions

shareplex-connector-icm.log

Contains all installation, configuration and management related log messages from the SharePlex Connector for Hadoop shell scripts. For more information on these scripts see the Command Reference section.

This log file does not use the Apache log4j library as a logging utility.
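If you want to watch for alert conditions as they are logged, one option is to follow the alert log from the SharePlex Connector for Hadoop home directory; the prompt below is illustrative.

[user@host connector_home]$ tail -f logs/shareplex-connector-alert.log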

How to configure the logs

The log4j configuration file is located under the SharePlex Connector for Hadoop home directory:

conf/opentarget-log4j.xml

To change the size of the log file

The following parameters configure log rotation to keep up to 10 backup files, each with a maximum size of 20 MB.

<param name="MaxFileSize" value="20MB" />

<param name="MaxBackupIndex" value="10" />

To change the log level

The default log level is DEBUG. To change it, edit the level element of the following logger:

<logger name="com.quest.shareplex">
    <level value="DEBUG" />
</logger>
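For example, to log only informational messages and above, you could change the value to INFO (verify the logger name against your copy of opentarget-log4j.xml):

<logger name="com.quest.shareplex">
    <level value="INFO" />
</logger>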

TIP: For changes to the log4j configuration to take effect, restart SharePlex Connector for Hadoop. For more information, see conn_ctrl.sh.

How to Verify the JMS Queue

Show the configuration of the JMS queue

Command Description

sp_ctrl ()> target x.jms show

Show the current configuration of the JMS target.

sp_ctrl ()> help target x.jms

Display help for the JMS target. This lists all the options that can be configured for JMS and their usage.

Check if all the processes are running

Command Description

sp_ctrl()> show

sp_ctrl()> status

sp_ctrl()> qstatus

Provide status information on the JMS queue.

For example, when you run the show or status command, a Post process state of Pending may indicate that sp_cop had a problem starting the JMS bridge server. The bridge server log file is located in the SharePlex for Oracle installation directory at var/log/openbridge.log.

When you run the qstatus command, any messages in the backlog of the post queue have not yet been delivered to SharePlex Connector for Hadoop; this may mean that SharePlex Connector for Hadoop has stopped.
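If the Post process is Pending or a backlog persists, a quick check is to inspect the end of the bridge server log from the SharePlex for Oracle installation directory, for example:

tail -n 50 var/log/openbridge.log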

Post process writing to the JMS queue:

Command Description

sp_ctrl()> stop post

sp_ctrl()> start post

sp_ctrl()> show post

Use these commands to stop, start, and check the status of the Post process.
