These instructions help you set up SharePlex to replicate to a target cloud database from an intermediary server that runs the SharePlex Import and Post components.
See also:
Post to PaaS cloud from the source server for PostgreSQL
Overview of SharePlex setup on cloud databases.
Complete the SharePlex preinstallation checklist.
On the source server, install SharePlex for the source database. See: Install SharePlex on Linux/Unix for PostgreSQL Database.
On the source server, run the appropriate database setup utility for the source database. See: Database Setup for PostgreSQL.
On the intermediary server, install the appropriate ODBC client of the target cloud database.
On the intermediary server, install SharePlex for the target cloud database. See: Install SharePlex on Linux/Unix for PostgreSQL Database.
On the intermediary server, run the appropriate database setup utility for the target cloud database. See Database Setup Utilities.
Specify the following in the routing map of the SharePlex configuration file:
the name of the intermediary server as the target host.
the name of the cloud database as the target database.
In the following example using a MySQL target, intermediary3 is the intermediary system and sptest3 is the target cloud database.
datasource:r.orcldbprim

#source tables          target tables           routing map
HR.EMP                  "sptest3"."emp"         intermediary3@r.sptest3
For more information, see Configure SharePlex to replicate data in the SharePlex Administration Guide.
Using PostgreSQL as a source with remote capture requires that the database instances be accessible from the servers on which SharePlex is installed.
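Before running the setup steps, it can be useful to confirm that the database instance is actually reachable from the SharePlex server. The following is a minimal Python sketch of such a connectivity check; the host name and port shown in the usage comment are placeholders, not values from this guide:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False

# Example (placeholder host): check the default PostgreSQL port.
# is_reachable("mydb.example.com", 5432)
```

A successful TCP connection only proves network reachability; authentication and pg_hba.conf rules are verified separately when the database setup utility connects.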
To set up SharePlex for remote capture:
Run PG Setup on the SharePlex system according to the instructions provided in the Database Setup Utilities for PostgreSQL section of the SharePlex Reference Guide.
When prompted with "Is DB hosted over Azure Single Server?", reply "No"; Azure Single Server is not supported as a source.
Complete all of the setup tasks described in Database Setup for PostgreSQL Database as a Service.
After completing the setup, start sp_cop and sp_ctrl.
For configuring and starting replication, refer to the SharePlex Administration Guide.
SharePlex supports the CrunchyData High Availability cluster environment.
Follow these configuration steps:
Set up the CrunchyData High Availability cluster environment according to the CrunchyData setup documentation.
Install or upgrade to SharePlex 11.1.
Run the pg_setup utility and enter a slot name.
Activate the configuration. The slot name you entered is created in the database after successful activation.
Add the slot name to the respective CrunchyData configuration (YAML) file so that it is monitored in failover and switchover scenarios.
To remove the dedicated slot name from the database, deactivate the configuration or run the cleanup utility (pg_cleansp).
Then remove the SharePlex dedicated slot name from the CrunchyData config file.
Example of CrunchyData config command: patronictl -c /etc/patroni/crunchy-demo.yml edit-config
NOTE: Users need to add the SharePlex dedicated slot name to the respective CrunchyData config.
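As a sketch, the Patroni dynamic configuration opened by the edit-config command above could declare the SharePlex slot as a permanent logical slot. The slot name, database name, and decoding plugin below are illustrative assumptions; use the slot name entered during pg_setup and your actual database:

```yaml
# Hypothetical Patroni (CrunchyData) dynamic configuration fragment.
# "shareplex_slot", "mydb", and the plugin are placeholders.
slots:
  shareplex_slot:
    type: logical
    database: mydb
    plugin: test_decoding
```

Declaring the slot here tells Patroni to maintain it across failover and switchover, which is the monitoring behavior the step above requires.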
Limitation: SharePlex internally uses PostgreSQL logical replication with PostgreSQL databases hosted on cloud services. In the event of a failover to the standby server, the logical replication slots are not copied over to the standby on cloud database services; consequently, SharePlex does not handle logical slot re-creation and maintenance on cloud database services. This applies to the AWS Multi-AZ cluster setup of RDS for PostgreSQL and to Aurora PostgreSQL databases.
SharePlex supports HA with logical replication on PostgreSQL Azure Flexible Server.
Follow these configuration steps:
Enable high availability setup on Azure Flexible Server using the steps provided in the link below:
Important: You must be able to access the database using the primary server name (host name).
Create pg_failover_slots extension setup using the steps provided in the link below:
Note: The pg_failover_slots extension is supported for PostgreSQL versions 11 through 15.
Set Servername to the host name under the DSN in the odbc.ini file, and use this DSN during pg_setup.
Example:
[DSN]
Servername=pslflexihaserver01.postgres.database.azure.com
Note: Do not use the IP address of the primary database server, as it may change after failover. Because the host name always points to the current primary database server, use only the host name.
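Putting these pieces together, a complete DSN entry in odbc.ini might look like the following sketch. The DSN name, driver path, database name, port, and user are placeholders (assumptions for illustration); only the Servername value is taken from the example above:

```ini
; Hypothetical odbc.ini entry for the Azure Flexible Server DSN.
; Driver path, Database, Port, and Username are placeholders.
[sp_flexiha]
Driver=/usr/lib64/psqlodbcw.so
Servername=pslflexihaserver01.postgres.database.azure.com
Database=mydb
Port=5432
Username=shareplex_user
```

Pass this DSN name (here, sp_flexiha) when prompted by pg_setup.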
In the event of a planned failover, stop the Capture process before the failover and restart it afterward.
In the event of an unplanned failover, the Capture process stops in an error state after the failover and must be restarted manually.
Limitation: If a second failover occurs before Capture has been started following the initial failover, the pg_failover_slots extension removes the logical slots from both the primary and standby servers. The reason is that after the first failover, the slot on the standby is marked as active and the slot on the primary is marked as inactive. An active state of 'true' on the standby indicates that the slot has not yet synchronized and is not safe to use; hence, when failover happens again, the slot on the new primary is lost. To avoid the removal of slots on the primary and standby servers, start Capture after each failover. Ideally, the extension should mark the slot on the standby as inactive (inactive means it is safe to replicate). For additional information, see https://github.com/EnterpriseDB/pg_failover_slots/issues/25.