Version 8.5.6
Release Notes
April 2015
SharePlex™ Connector for Hadoop® enables log-based replication of tables from Oracle to Hadoop (HDFS and HBase).
SharePlex Connector for Hadoop also supports capturing change history data (CDC) for a table on HDFS/Hive. Tables can further be replicated to Hive (Hive over HDFS, Hive over HBase, and/or Hive over CDC).
SharePlex™ Connector for Hadoop® operates in conjunction with SharePlex™ for Oracle®.
The following is a list of issues addressed in this release.
Functional Area | Resolved Issue / Enhancement | Defect ID |
---|---|---|
SharePlex Connector for Hadoop | Added support for organizing data on HDFS based on the Oracle table owner name for near real time replication. | SPH-448 |
SharePlex Connector for Hadoop | The Derby connection is now re-established if it goes down. | SPH-444 |
SharePlex Connector for Hadoop | The Snapshot script can now run in parallel for multiple tables. | SPH-440 |
SharePlex Connector for Hadoop | A single JMS ActiveMQ instance can now serve multiple Connector instances. | SPH-441 |
SharePlex Connector for Hadoop | Enhanced the Cleanup script to clean data from Derby. | SPH-442 |
SharePlex Connector for Hadoop | The Setup script now validates the Connector Controller queue name entered by the user. | SPH-450 |
SharePlex Connector for Hadoop | The Cleanup script now removes table-specific entries from the Connector configuration. | SPH-451 |
SharePlex Connector for Hadoop | The Cleanup script now shows a WARN message if the CDC use case is not enabled for the table. | SPH-452 |
SharePlex Connector for Hadoop | The Uninstall script now deletes the internal JMS queue used by the Connector. | SPH-459 |
SharePlex Connector for Hadoop | Researched Hive behavior during data merge. | SPH-447 |
SharePlex Connector for Hadoop | The Setup script now accepts all Oracle parameters at once. | SPH-462 |
The following is a list of issues known to exist at the time of this release.
Table 1: SharePlex for Oracle known issues
Known Issue | Defect ID |
---|---|
For the boundary value of the NUMBER data type and the maximum value of the FLOAT data type, SharePlex sends data in exponential format. | SPH-232 |
SharePlex sends only schema messages for maximum value of CHAR, VARCHAR, NCHAR, and NVARCHAR data types. | SPH-293 |
For an Oracle table with no primary key, SharePlex sends SCHEMA XML having the "key" attribute set to "true" for all columns. | SPH-313 |
For the TRUNCATE operation, SharePlex sends commitTime="1988-01-01T00:00:00". | SPH-325 |
SharePlex sends SCHEMA XML with the default value when the user initially adds a column with a default value, then later drops that column and re-adds the same column without a default value. | SPH-357 |
SharePlex does not send an empty tag in change history (CDC) XML for inserting a NULL value in an Oracle table. | SPH-131 |
If an UPDATE is followed by a TRUNCATE operation on an Oracle table, messages are stuck in the SharePlex Post queue and are not forwarded to the JMS queue. The SharePlex console shows the error "Index out of bounds". | SPH-127 |
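Consumers of the SharePlex XML feed may want to defend against two of the formatting issues above: exponential-format numerics (SPH-232) and the fixed TRUNCATE commitTime (SPH-325). The sketch below is illustrative only; the helper names and the sentinel constant are assumptions derived from these notes, not part of any SharePlex API.

```python
from decimal import Decimal

# Fixed commitTime these notes report for TRUNCATE operations (SPH-325).
# Treating it as a sentinel on the consumer side is our own workaround idea.
SENTINEL_COMMIT_TIME = "1988-01-01T00:00:00"

def normalize_numeric(value: str) -> str:
    """Convert an exponential-format numeric string (e.g. '1E+3', per
    SPH-232) to plain decimal notation before loading it downstream."""
    return format(Decimal(value), "f")

def is_truncate_sentinel(commit_time: str) -> bool:
    """Detect the placeholder commitTime rather than treating it as a
    real transaction timestamp."""
    return commit_time == SENTINEL_COMMIT_TIME
```

A consumer could call `normalize_numeric` on each numeric column value and skip timestamp-based ordering for messages whose commitTime matches the sentinel.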
Table 2: SharePlex Connector for Hadoop known issues
Known Issue | Defect ID |
---|---|
Inconsistencies can occur when multiple UPDATE/DELETE operations are applied in a single batch to a single row of an Oracle table that has no primary key. | SPH-457 |
The SharePlex Connector for Hadoop conn_setup script, when run with -c, provides no error messages when the configuration parameters are incorrect. | SPH-123 |
A Hive table cannot access HDFS data if a negative value (a BC date) is present in a DATE or TIMESTAMP column. | SPH-207 |
If the user adds a column with a DEFAULT value to an Oracle table, the DEFAULT value is not captured by the change history (CDC) use case. | SPH-359 |
Single command to sync all tables in the replication configuration. | SPH-14 |
The Cleanup script does not delete the external Hive table if the -e option is toggled while the Connector is running. | SPH-460 |
Derby should be updated only after Hive query executed by Connector is complete. | SPH-461 |
Near real time (HDFS) replication with the Avro file format fails for Oracle tables with the INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, and ROWID data types. | SPH-439 |
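Several of the issues above are tied to specific Oracle column types (SPH-439, SPH-207), so a pre-flight check of a table's column metadata can flag them before replication is configured. The following is a minimal sketch; the function name and structure are our own, while the type list comes directly from SPH-439 in these notes.

```python
# Oracle column types these release notes report as breaking near real time
# Avro replication (SPH-439). The check itself is an illustrative assumption,
# not a Connector feature.
UNSUPPORTED_AVRO_TYPES = {
    "INTERVAL YEAR TO MONTH",
    "INTERVAL DAY TO SECOND",
    "ROWID",
}

def blocking_columns(columns):
    """Given (name, oracle_type) pairs, return the column names that
    would make near real time Avro replication fail per SPH-439."""
    return [name for name, otype in columns
            if otype.upper() in UNSUPPORTED_AVRO_TYPES]
```

Running such a check against the output of a dictionary query (e.g. ALL_TAB_COLUMNS) would let an operator exclude or convert the affected columns up front.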
© 2024 Quest Software Inc. ALL RIGHTS RESERVED.