This shell script installs/upgrades programs in the SharePlex Connector for Hadoop archive.
Important (upgrades only): You do not need to uninstall SharePlex Connector for Hadoop before upgrading. Install the upgrade over the existing version.
Shell Script Usage
[user@host bin]$ ./install.sh [-h <HADOOP_HOME_DIR>] [-c <HADOOP_CONF_DIR>] [-b <HBASE_HOME_DIR>] [-v <HIVE_HOME_DIR>] [--help] [--version]
Options
| Parameter | Description |
|---|---|
| -h <HADOOP_HOME_DIR> | The path to the Hadoop home directory. This option overrides HADOOP_HOME in the environment. If neither this option nor the HADOOP_HOME environment variable is set, this parameter defaults to /usr/lib/hadoop. |
| -c <HADOOP_CONF_DIR> | The path to the Hadoop conf directory. This option overrides HADOOP_CONF_DIR in the environment. If neither this option nor the HADOOP_CONF_DIR environment variable is set, this parameter is set to … |
| -b <HBASE_HOME_DIR> | The path to the HBase home directory. This option overrides HBASE_HOME in the environment. If neither this option nor the HBASE_HOME environment variable is set, this parameter is set relative to HADOOP_HOME. |
| -v <HIVE_HOME_DIR> | The path to the Hive home directory. This option overrides HIVE_HOME in the environment. |
| --help | Show this help and exit. |
| --version | Show version information and exit. |
Note: The optional parameters -h / -c / -b / -v apply during a fresh installation only. For an upgrade, the install script reads environment variables from bin/shareplex_hadoop_env.sh.
When the shell script has finished executing, install.sh starts the Apache Derby network server.
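The option/environment/default precedence described in the table can be sketched as follows. This is an illustrative sketch of the documented rule for HADOOP_HOME, not the actual install.sh code; the function name and example paths are assumptions.

```shell
# Sketch of the precedence install.sh applies for HADOOP_HOME:
# -h option value > HADOOP_HOME environment variable > /usr/lib/hadoop default.
resolve_hadoop_home() {
  opt="$1"
  if [ -n "$opt" ]; then
    printf '%s\n' "$opt"            # explicit -h value wins
  elif [ -n "${HADOOP_HOME:-}" ]; then
    printf '%s\n' "$HADOOP_HOME"    # fall back to the environment
  else
    printf '%s\n' /usr/lib/hadoop   # documented default
  fi
}

unset HADOOP_HOME
resolve_hadoop_home ""                  # prints /usr/lib/hadoop
export HADOOP_HOME=/opt/hadoop
resolve_hadoop_home ""                  # prints /opt/hadoop
resolve_hadoop_home /usr/local/hadoop   # prints /usr/local/hadoop
```

The same precedence applies to -c, -b, and -v with their respective environment variables.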
Run this script to set up SharePlex Connector for Hadoop and provide the necessary configuration details. This is usually a one-time activity.
Shell Script Usage
[user@host bin]$ ./conn_setup.sh
TIP: See conn_setup.sh for a complete description of this command. 
Configuration Parameters
The script prompts you to respond to each configuration parameter, one by one.
Note: Default values are shown in brackets. Press Enter to accept the default value.
| Categories of detail | You will be prompted to provide the following details. |
|---|---|
| HDFS | Do you want to enable Hadoop connector to copy data to HDFS? Answer yes to this question if you intend to replicate all (or most) tables by HDFS Near Real Time Replication. |
| HBase | Do you want to enable Hadoop connector to copy data to HBase? Answer yes to this question if HBase is set up in your environment and you intend to replicate all (or most) tables by HBase Real Time Replication. |
| CDC | Do you intend to capture change data? Answer yes to this question to enable Change Data Capture. Make sure you are using SharePlex for Oracle version 8.5 or later if you intend to enable the Change Data Capture feature. |
| HBase parameters | You will be prompted for this detail if you responded YES to replicate tables to HBase. … |
| HDFS parameters | You will be prompted for this detail if you responded YES to replicate tables to HDFS. How often do you want to copy data to HDFS? This is measured by time and by number of changes. The first question relates to time (in minutes): if you enter 10, for example, the table is replicated every 10 minutes. Do not set this below 10 minutes. The second question relates to the number of changes: if you enter 2, replication runs after 2 changes to the table. Replication runs on the first condition met: the given number of changes to the table or the set time period, whichever comes first. |
| CDC Parameters | You will be prompted for this detail if you responded YES to capture change data. SharePlex Connector for Hadoop maintains an internal change-count threshold of 1000 and an internal time threshold of 15 seconds (by default). So after every 1000 changes to the table, or after every 15 seconds, change data is written to the specified CDC destination directory on HDFS. For more information on each of these properties, see Configure ActiveMQ to work with SharePlex. |
| Oracle parameters | You will be prompted to enter the Oracle password when taking a snapshot. Refer to the use cases for more information. |
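Both the HDFS copy frequency and the CDC flush behavior follow the same dual-threshold rule: data is written out when either the change count or the elapsed time reaches its threshold, whichever comes first. A minimal sketch of that rule (illustrative only, not connector code; the function name is an assumption):

```shell
# Returns "yes" when either threshold is reached, "no" otherwise.
# Arguments: current change count, elapsed seconds,
#            change-count threshold, time threshold (seconds).
should_flush() {
  changes="$1"; elapsed="$2"; max_changes="$3"; max_elapsed="$4"
  if [ "$changes" -ge "$max_changes" ] || [ "$elapsed" -ge "$max_elapsed" ]; then
    echo yes
  else
    echo no
  fi
}

# Using the default CDC thresholds (1000 changes / 15 seconds):
should_flush 1000 5 1000 15   # prints yes (change threshold reached)
should_flush 3 15 1000 15     # prints yes (time threshold reached)
should_flush 3 5 1000 15      # prints no  (neither threshold reached)
```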
Configuration Complete
SharePlex Connector for Hadoop displays the following messages, indicating that it is ready for use.
connectorConfiguration.Xml updated successfully.
JMSConfiguration.xml updated successfully.
OraOopConfiguration.xml updated successfully.
Connector setup completed successfully.
SharePlex Connector for Hadoop supports the partitioning of data for the HDFS replication feature. It takes the --partition-key parameter from a snapshot script, which specifies the column name(s) of an Oracle table for which partitions are to be created on HDFS.
SharePlex Connector for Hadoop supports both custom and range partitioning.
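A minimal sketch of how a partition key could map rows to HDFS directories, one subdirectory per distinct column value. The path layout, function name, and table/column names here are assumptions for illustration; only the --partition-key parameter itself is documented.

```shell
# Hypothetical layout: each distinct value of the partition-key column
# gets its own subdirectory under the table's HDFS directory.
partition_path() {
  table="$1"; key="$2"; value="$3"
  printf '/shareplex/%s/%s=%s\n' "$table" "$key" "$value"
}

partition_path EMP DEPTNO 10   # prints /shareplex/EMP/DEPTNO=10
partition_path EMP DEPTNO 20   # prints /shareplex/EMP/DEPTNO=20
```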
Note: Partitioning is only supported for the Text and Avro file formats. Partitioning is not supported for the Sequence file format.
Partitioning support includes: