SharePlex Connector for Hadoop 8.5.5 - SharePlex Connector for Hadoop Installation Guide

Run install.sh

This shell script installs or upgrades the programs in the SharePlex Connector for Hadoop archive.

Important (upgrades only): You do not need to uninstall SharePlex Connector for Hadoop before upgrading. Install the upgrade over the existing version.

Shell Script Usage

[user@host bin]$ ./install.sh [-h <HADOOP_HOME_DIR>] [-c <HADOOP_CONF_DIR>] [-b <HBASE_HOME_DIR>] [-v <HIVE_HOME_DIR>] [--help] [--version]

Options

Parameter

Description

-h <HADOOP_HOME_DIR>

The path to the Hadoop home directory.

This option overrides HADOOP_HOME in the environment. If this option is not set and the HADOOP_HOME environment variable is also not set, this parameter defaults to /usr/lib/hadoop.

-c <HADOOP_CONF_DIR>

The path to the Hadoop conf directory.

This option overrides HADOOP_CONF_DIR in the environment. If this option is not set and the HADOOP_CONF_DIR environment variable is also not set, this parameter defaults to:

  • HADOOP_HOME/conf (for HDP / IDH / Apache / IBM BI)
  • HADOOP_HOME/etc/hadoop (for CDH4 or CDH5)

-b <HBASE_HOME_DIR>

The path to the HBase home directory.

This option overrides HBASE_HOME in the environment. If this option is not set and the HBASE_HOME environment variable is also not set, this parameter is set relative to HADOOP_HOME.

-v <HIVE_HOME_DIR>

The path to the Hive home directory.

This option overrides HIVE_HOME in the environment.

--help

Show this help and exit.

--version

Show version information and exit.

Note: The optional parameters -h, -c, -b, and -v apply to a fresh installation only. During an upgrade, the install script reads the environment variables from bin/shareplex_hadoop_env.sh.
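
For example, a fresh installation might be run as follows (the paths are illustrative; substitute the Hadoop, HBase, and Hive locations used by your distribution):

[user@host bin]$ ./install.sh -h /usr/lib/hadoop -c /etc/hadoop/conf -b /usr/lib/hbase -v /usr/lib/hive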

When install.sh has finished executing, it starts the Apache Derby network server.
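
To confirm that the Derby network server is up, you can ping it with Derby's NetworkServerControl script, which ships in the bundled Derby distribution (a sketch assuming the default Derby port of 1527 and Java on your PATH; replace <version> with the actual directory name):

[user@host shareplex_hadoop_connector]$ db-derby-<version>-bin/bin/NetworkServerControl ping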

About the new shareplex_hadoop_connector directory

Files and Directories

Description

bin

SharePlex Connector for Hadoop shell scripts as documented in this guide.

In addition, this directory contains shareplex_hadoop_env.sh, which is used by SharePlex Connector for Hadoop. You can set the environment variables by sourcing it, as shown in the example after this table.

conf

SharePlex Connector for Hadoop configuration files.

db-derby-version-bin

Apache Derby application.

lib

SharePlex Connector for Hadoop required dependencies.

logs

SharePlex Connector for Hadoop log files.

shareplex_hadoop_connector.jar

The Java archive file containing SharePlex Connector for Hadoop application code.

oraoop-version

Data Connector for Oracle and Hadoop application.

sqoop-version.bin

Apache Sqoop application.
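
As noted in the description of the bin directory above, you can load the connector's environment variables into your current shell before running the other scripts:

[user@host bin]$ source shareplex_hadoop_env.sh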

Run conn_setup.sh

Run this script to set up SharePlex Connector for Hadoop and provide the necessary configuration details. This is usually a one-time activity.

Shell Script Usage

[user@host bin]$ ./conn_setup.sh

TIP: See conn_setup.sh for a complete description of this command. 

Configuration Parameters

The script prompts you to respond to each configuration parameter, one by one.

Note: Default values are provided within brackets. Press Enter to select the default value.

Categories of detail

You will be prompted to provide the following details.

HDFS

Do you want to enable Hadoop connector to copy data to HDFS?

Answer yes to this question if you intend to replicate all (or most) tables by HDFS Near Real Time Replication.

HBase

Do you want to enable Hadoop connector to copy data to HBase?

Answer yes to this question if HBase is set up in your environment and you intend to replicate all (or most) tables by HBase Real Time Replication.

CDC

Do you intend to capture change data?

Answer yes to this question to enable Change Data Capture.

Make sure you are using SharePlex for Oracle version 8.5 or later if you intend to enable the Change Data Capture feature.

HBase parameters

You will be prompted for this detail if you have responded YES to replicate tables to HBase.

  • The HBase column family name

HDFS parameters

You will be prompted for this detail if you have responded YES to replicate tables to HDFS.

  • The HDFS destination directory. You should consider this directory as used exclusively by SharePlex Connector for Hadoop. This directory may be cleaned up by conn_cleanup.sh and uninstall.sh.

    Note that /hdfs_replication is appended to the HDFS destination directory you enter, and the resulting path is displayed on the console.

How often do you want to copy data to HDFS? This is measured by time and by the number of changes.

The first question relates to time (in minutes). If you answer 10, for example, the table is replicated every 10 minutes. Do not set this to less than 10 minutes.

The second question relates to the number of changes. If you answer 2, replication is executed after every 2 changes to the table.

Replication is triggered by whichever condition is met first: the given number of changes to the table or the end of the set time period.
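
For example, if you enter /user/shareplex as the HDFS destination directory (an illustrative path, not a default), the connector replicates into /user/shareplex/hdfs_replication, which you can inspect once the first replication cycle has run:

[user@host ~]$ hadoop fs -ls /user/shareplex/hdfs_replication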

CDC Parameters

You will be prompted for this detail if you have responded YES to capture change data.

  • The CDC destination directory. You should consider this directory as used exclusively by SharePlex Connector for Hadoop.

    Note that /change_data_capture is appended to the CDC destination directory you enter, and the resulting path is displayed on the console.

By default, SharePlex Connector for Hadoop maintains an internal change threshold of 1000 and an internal time threshold of 15 seconds. After every 1000 changes to the table, or after every 15 seconds, whichever comes first, change data is written to the specified CDC destination directory on HDFS.
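
As with the HDFS destination directory, you can list the captured change data on HDFS; for example, using the illustrative /user/shareplex path from above:

[user@host ~]$ hadoop fs -ls /user/shareplex/change_data_capture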

JMS parameters

  • The name of the JMS queue. By default: OpenTarget
  • The name of the host running ActiveMQ. For example: localhost
  • The port number used by the JNDI provider.url property. By default: 61616
  • The port number used to access the ActiveMQ admin web site. By default: 8161

For more information on each of these properties, see Configure ActiveMQ to work with SharePlex.
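
Before running the setup, you can verify that the ActiveMQ broker is reachable on these ports. A quick sketch, assuming the default ports above and a broker on the local host:

[user@host ~]$ nc -z localhost 61616 && echo "JMS port open"
[user@host ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8161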

Oracle parameters
  • Host name (or TCP/IP address) of the Oracle server
  • The port to connect to the Oracle server. By default: 1521
  • Oracle instance (SID). By default: ORCL
  • Oracle username

You will be prompted to enter the Oracle password when taking a snapshot. Refer to the use cases for more information.
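
If you want to confirm these connection details before taking a snapshot, one option is a quick SQL*Plus check (a sketch assuming an installed Oracle client and that the SID is also registered as a service name; dbhost and scott are placeholders):

[user@host ~]$ sqlplus scott@//dbhost:1521/ORCL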

Configuration Complete

SharePlex Connector for Hadoop shows the following messages indicating that it is ready for use.

connectorConfiguration.Xml updated successfully.
JMSConfiguration.xml updated successfully.
OraOopConfiguration.xml updated successfully.
Connector setup completed successfully.

Configure Partitions on HDFS

About Support for Partitions in HDFS Replication

SharePlex Connector for Hadoop supports partitioning data for the HDFS replication feature. It takes the --partition-key parameter from a snapshot script; this parameter specifies the column name(s) of an Oracle table for which partition(s) are to be created on HDFS.

SharePlex Connector for Hadoop supports both custom and range partitioning.

Note: Partitioning is only supported for Text and Avro file formats. Partitioning is not supported for the Sequence file format.
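
As a hypothetical illustration of the parameter (only --partition-key is documented here; the conn_snapshot.sh script name, its other options, and the DEPTNO column are assumptions, so check the usage of your snapshot script):

[user@host bin]$ ./conn_snapshot.sh <other snapshot options> --partition-key DEPTNO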

Partitioning support includes:

  • Configure Custom Partitioning
  • Configure Range Partitioning
  • Create Hive External Tables on Partitioned Data
