
SharePlex 11.4 - Installation and Setup Guide


Install SharePlex on PostgreSQL High Availability Cluster

SharePlex supports the CrunchyData High Availability cluster environment.

Follow the configuration steps below:

  1. Set up the CrunchyData High Availability cluster environment according to the CrunchyData setup documentation.

  2. Install or upgrade to SharePlex 11.1.

  3. Run the pg_setup utility and enter a slot name.

  4. Activate the configuration. After a successful activation, the slot name you entered is created in the database; you can verify this with the example query after these steps.

  5. Add the slot name to the respective CrunchyData configuration (YML/YAML) file so that it is monitored in failover or switchover scenarios; see the example configuration entry below.

  6. To remove the dedicated slot name from the database, deactivate the configuration or run the cleanup (pg_cleansp) utility.

  7. Remove the SharePlex dedicated slot name from the CrunchyData config file.
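
To verify that the slot was created after activation (step 4), you can query the PostgreSQL catalog on the source database. This is a minimal sketch; replace <slot_name> with the slot name you entered during pg_setup.

Example:

SELECT slot_name, plugin, active FROM pg_replication_slots WHERE slot_name = '<slot_name>';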

Example of CrunchyData config command: patronictl -c /etc/patroni/crunchy-demo.yml edit-config

NOTE: Users need to add the SharePlex dedicated slot name to the respective CrunchyData configuration file.
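
The following is a minimal sketch of the permanent slot entry that might be added through the edit-config command shown above (Patroni dynamic configuration). The slot name, database name, and decoding plugin below are placeholders; use the slot name entered during pg_setup and the database and plugin values that pg_replication_slots reports for that slot.

Example:

slots:
  splex_slot:
    type: logical
    database: demo_db
    plugin: test_decoding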

Limitation: SharePlex internally uses PostgreSQL logical replication with PostgreSQL databases hosted on cloud services. In the event of a failover to the standby server, logical replication slots are not copied over to the standby server on cloud database services; hence, SharePlex does not handle logical slot re-creation and maintenance on cloud database services. This applies to AWS Multi-AZ cluster setups of RDS PostgreSQL and to Aurora PostgreSQL databases.

Configure SharePlex on PostgreSQL Azure Flexible Server with High Availability Using Logical Replication

SharePlex supports HA with logical replication on PostgreSQL Azure Flexible Server.

Follow the configuration steps below:

  1. Enable high availability on the Azure Flexible Server using the steps provided at the link below:

    https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-manage-high-availability-portal#enable-high-availability-post-server-creation

    IMPORTANT: Users should be able to access the database using the primary server name (host name).

  2. Set up the pg_failover_slots extension using the steps provided at the link below:

    https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#pg_failover_slots-preview

    Note: The pg_failover_slots extension is supported for PostgreSQL versions 11 through 15.

  3. Set Servername to the primary server's host name under the DSN in the odbc.ini file. Use this DSN during pg_setup.

    Example:

    [DSN]

    Servername=pslflexihaserver01.postgres.database.azure.com

    Note: Do not use the IP address of the primary database server, as it may change after a failover. Use only the host name, which always points to the current primary database server.

  4. In the event of a planned failover, stop the Capture process before the failover and restart it afterward; see the example commands below.

    In the event of an unplanned failover, the Capture process will stop due to an error state after the failover and will need to be manually restarted.
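
    The Capture process can be stopped and restarted from sp_ctrl. The commands below are illustrative; see the SharePlex Reference Guide for the exact syntax.

    Example:

    sp_ctrl> stop capture
    sp_ctrl> start capture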

Limitation: If consecutive failovers occur before Capture is started following the initial failover, the pg_failover_slots extension removes the logical slots from both the primary and standby servers. The reason is that after the first failover, the slot on the standby is marked as active and the slot on the primary is marked as inactive; an active state of 'true' on the standby indicates that the slot has not yet synchronized and is not safe to use. Hence, when the failover happens again, the slot on the new primary is lost. To avoid the removal of slots on the primary and standby servers, start Capture after each failover. (Ideally, the extension should mark the slot on the standby as inactive, since an inactive slot is safe to replicate from.) For additional information, see https://github.com/EnterpriseDB/pg_failover_slots/issues/25. You can inspect the slot state with the example query below.
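
To inspect the slot state, you can query pg_replication_slots on both the primary and standby servers and compare the active flag. This is a minimal sketch:

Example:

SELECT slot_name, slot_type, active, restart_lsn FROM pg_replication_slots;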

Generic SharePlex Demonstration for PostgreSQL


This chapter demonstrates the basics of SharePlex replication. This demonstration can be run on Unix or Linux from a PostgreSQL source to supported target databases.

    Notes:

    • These demonstrations are for use with databases. They do not support replication to a file or a messaging container.
    • These are only demonstrations. Do not use them as the basis for deployment in a production environment. To properly implement replication in your environment, follow the instructions in the SharePlex Installation and Setup Guide and the SharePlex Administration Guide.
    • For more information about the commands used in the demonstrations, see the SharePlex Reference Guide.
    • The demonstrations assume that SharePlex is fully installed on a source system and one target system, and that any pre- and post-installation setup steps were performed.

    What you will learn

    • How to activate a configuration
    • How SharePlex replicates smoothly from source to target systems
    • How SharePlex quickly and accurately replicates large transactions
    • How SharePlex queues the data if the target system is unavailable
    • How SharePlex resumes from its stopping point when the target system is recovered
    • How SharePlex recovers after a primary instance interruption
    • How to use named queues to spread the processing of different tables across parallel Post processes
Prework for the demonstrations

    Before you run the basic demonstrations, have the following items available.

    Tables used in the demonstrations

    You will replicate splex.demo_src from the source system to splex.demo_dest on the target system. These tables are installed by default into the SharePlex schema, which in these demonstrations is "splex." Your SharePlex schema may be different. Verify that these tables exist.

    Description of the demo tables:

    Column Name    Data Type       Null?
    NAME           varchar2(30)
    ADDRESS        varchar2(60)
    PHONE          varchar2(12)
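
    One way to verify that the demo tables exist is to query information_schema on each system: run the query below on the source system for demo_src and on the target system for demo_dest. This is a minimal sketch, assuming the SharePlex schema is splex; substitute your schema name if it differs.

    Example:

    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema = 'splex'
      AND table_name IN ('demo_src', 'demo_dest');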

     

    INSERT scripts

    • Create a SQL script named insert_demo_src that inserts and commits 500 rows into the splex.demo_src table. You will run this script during some of the demonstrations; see the example script after this list.
    • If you will be using the named post queues demonstration, create a SQL script named insert_demo_dest that inserts and commits 500 rows into the splex.demo_dest table. You will run this script during some of the demonstrations.
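
    The following is a minimal sketch of such a script for the source table, assuming the SharePlex schema is splex; the generated values are placeholders, and PostgreSQL's generate_series is used to produce the 500 rows. A script for splex.demo_dest can be written the same way.

    Example (insert_demo_src):

    BEGIN;
    INSERT INTO splex.demo_src (name, address, phone)
    SELECT 'name_' || n, 'address_' || n, '555-0100'
    FROM generate_series(1, 500) AS n;
    COMMIT;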