
QoreStor 7.1.2 - User Guide


Creating an archive tiering schedule

After you create an archive tier, you can schedule when you want data to transfer to the tier.

To add an archive tiering schedule

  1. In the navigation menu, expand Cloud Storage, and then click Archive Tier.

    The Archive Tier page displays.

  2. On the Archive Tier page, to reveal the scheduling options, click Schedule.
  3. For each day of the week, select a time range by completing the following steps:
    1. Click the From box, select a time to begin the schedule, and then click Set.
    2. Click the To box, select a time for the schedule to end, and then click Set.
  4. When finished, click Save.
  5. To hide the schedule options, click Schedule.

Editing an archive tiering schedule

After you create an archive tiering schedule, you can edit the schedule by completing the following steps.

To edit an archive tiering schedule

  1. In the navigation menu, expand Cloud Storage, and then click Archive Tier.

    The Archive Tier page displays.

  2. On the Archive Tier page, to reveal the scheduling options, click Schedule.
  3. For each time you want to change, do either of the following steps:
    • To delete a time, click the trash can symbol.
    • To change a time, click the box, select a new time, and then click Set.
  4. When finished, click Save.
  5. To hide the schedule options, click Schedule.

Restoring from an archive tier

Depending on the container type, data is sent to the archive tier by different methods. For RDA and Object containers, data is archived based on the Archive Tiering Policy. For VTL containers, exporting a cartridge from the backup application triggers the movement of the cartridge data to the cloud.

Restoring data from an archive tier differs in some ways from a standard restore process. There are two possible methods for restoring:

  • Selectively restoring backups based on need, and
  • Performing a full disaster recovery by completely restoring all data from Glacier to AWS S3.

In both cases, when restoring from an archive tier, no files are saved to on-prem storage. Instead, files are copied from the archive storage (Glacier or Amazon S3 Glacier Deep Archive) to warm AWS S3 storage for a period of time specified by the Archive Retention in Warm Cloud setting. When restoring from an archive tier, consider the following:

  • Restoring from archive storage is a two-step operation. First, archive data is restored to standard AWS S3 storage, then the objects are read from there.
  • There are two options for restoring from Glacier storage: Batch operations and Lambda with batch operations. According to AWS, "Lambda is a compute service that lets you run code without provisioning or managing servers." Using Lambda with batch operations can help avoid certain restore failures. For more information, see What is AWS Lambda? in the AWS documentation.
  • Restored objects will be ready for readback after 4-6 hours for Amazon S3 Glacier (10-12 hours for Amazon S3 Glacier Deep Archive). No notification is issued when restored objects are available. You may view the status of restore operations in the AWS Console. Refer to the Amazon S3 document Checking Archive Restore Status and Expiration Date for more information. To perform a batch restore for disaster recovery purposes, refer to Manually restoring datastores from Amazon S3 Glacier.
  • When restoring objects from archive, you are charged for both the archive copy and the restore copy in warm storage. Use the Archive Retention in Warm Cloud value to minimize the duration objects are kept in warm storage.
  • For restoring VTL cartridge data, the command vtl --import must be run on the QoreStor server.

Restoring files from RDS Container backups replicated to AWS S3 Glacier or Deep Archive

Backups written to an RDA container are replicated to an archive tier and stubbed from on-premises storage based on the Archive Tiering Policy.

In the case of an RDA container, the process of restoring files from an archive tier differs based on the data management application (DMA). For more information, see the respective DMA setup guide on the Quest QoreStor support portal.

Restoring selective objects from Object Container backups replicated to AWS S3 Glacier or Deep Archive

Backups written to an object container replicate to an archive tier based on the Archive Tiering Policy.

To selectively retrieve the objects required for restoring files from the archive tier, use the following procedure to bring the data back from AWS S3 Glacier or Deep Archive so that it can be read.

To restore selective objects from Object Container backups replicated to AWS S3 Glacier or Deep Archive

  1. To list the objects in the bucket from which you want to restore, and to identify the individual files by their "Key" values, use the following command:
    Restoring selective objects from object container command
    [root@qorestor-c8-v2 ~]# aws s3api list-objects  --bucket default-bucket --endpoint-url=https://10.230.48.153:9000 --no-verify-ssl
     
    {
        "Contents": [
            {
                "Key": "filex",
                "LastModified": "2021-06-15T04:24:43+00:00",
                "ETag": "\"0e2bec8c5b89eb953ded4baa10d0b084\"",
                "Size": 459740522,
                "StorageClass": "GLACIER",
                "Owner": {
                    "DisplayName": "",
                    "ID": "02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4"
                }
            }
        ]
    }
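When a bucket holds many objects, picking out the archived keys by eye is error-prone. The following sketch, assuming the list-objects output above has been saved to a file named objects.json (a hypothetical filename), filters out only the keys still in the GLACIER storage class:

```shell
# Sample list-objects output; in practice, redirect the output of
# `aws s3api list-objects ...` into objects.json instead.
aws_output='{"Contents":[{"Key":"filex","StorageClass":"GLACIER"},{"Key":"filey","StorageClass":"STANDARD"}]}'
printf '%s' "$aws_output" > objects.json

# Extract the keys of objects still in GLACIER (stdlib python parses the JSON).
glacier_keys=$(python3 -c '
import json
data = json.load(open("objects.json"))
for obj in data.get("Contents", []):
    if obj.get("StorageClass") == "GLACIER":
        print(obj["Key"])
')
echo "$glacier_keys"
```

Alternatively, the AWS CLI can filter client-side in one step with its JMESPath option, for example --query "Contents[?StorageClass=='GLACIER'].Key".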
  2. To restore the object from Glacier, use the following command:

    $ aws s3api restore-object --bucket default-bucket --key filex --restore-request '{"Days":220}' --endpoint-url=https://10.230.48.175:9000 --no-verify-ssl

  3. To check the restore status, use the following command:

    $ aws s3api head-object --bucket default-bucket --key <object_key> --endpoint-url=https://10.230.48.175:9000 --no-verify-ssl

  4. Run the head-object command again to confirm completion. While the restore is in progress, the output shows "Restore": "ongoing-request=\"true\"", as in the following example; when the restore is complete, the status changes to "Restore": "ongoing-request=\"false\", expiry-date=\"2020-08-24 00:00:00 +0000 GMT\"":

    Viewing the restore status

    [root@qorestor-c8-v2 ~]# aws s3api head-object --bucket default-bucket --key deep.sh --endpoint-url=https://10.230.48.153:9000 --no-verify-ssl
     
    {
    "AcceptRanges": "bytes",
    "Restore": "ongoing-request=\"true\"",
    "LastModified": "2021-06-14T09:55:15+00:00",
    "ContentLength": 540,
    "ETag": "\"edfc67818cfec9e9d76772a02be41a8f\"",
    "ContentType": "application/x-sh",
    "Metadata": {},
    "StorageClass": "STANDARD-S3"
    }

NOTE: Using the option --no-verify-ssl is only necessary if you are using a self-signed certificate on QoreStor.
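Because no notification is issued when a restore finishes, checking the status by hand can be tedious. The following sketch wraps the check in a small helper, assuming the head-object output shape shown above; the bucket, key, and endpoint in the commented loop are placeholders for your environment:

```shell
# Succeeds once the head-object JSON reports the restore as finished.
restore_done() {
  # The Restore field embeds escaped quotes, so match the literal \"false\" text.
  echo "$1" | grep -qF 'ongoing-request=\"false\"'
}

# Example polling loop (requires AWS CLI access to your QoreStor endpoint):
# while ! restore_done "$(aws s3api head-object --bucket default-bucket \
#       --key filex --endpoint-url=https://10.230.48.175:9000 --no-verify-ssl)"; do
#     sleep 300    # re-check every five minutes
# done
```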

Restoring selective tapes of VTL backups replicated to AWS S3 Glacier or Deep Archive

In the case of VTL containers, no individual backups are replicated to cloud storage; instead, the entire cartridge is exported to the cloud. You can use VTL export to move cartridges that are no longer required in on-premises storage to the cloud.

For detailed instructions, refer to the respective DMA setup guide on the Quest Support Portal.

After you replicate and stub VTL tapes to an archive tier, to bring the data to Warm Cloud storage and restore it, use the following procedure.

To restore selective tapes of VTL backups replicated to AWS S3 Glacier or Deep Archive

  1. To bring data to Warm Cloud storage and initiate a restore from AWS S3 Glacier to S3 Standard storage, use the following command:

    vtl --restore --name <cont_name> --barcode <barcode_of_media>

    NOTE: This process typically takes approximately four hours when performed from AWS S3 Glacier storage and as many as eight to 12 hours from Glacier Deep Archive.

  2. To check the Glacier to Standard S3 restore status, use the following command:

    vtl --show --verbose

    The restore status appears at the end of the command output in the format [Cart Name] [Restore current State] [Restored Data available until]. Possible states include:

      • GV758Q001 Restore process initiated Tue 2020-03-31 09:23:50 EAT
      • GV758Q001 Restore initiation successful Tue 2020-03-31 09:23:50 EAT
      • GV758Q001 Restore Process InProgress Wed 2020-04-08 03:00:00 EAT
      • GV758Q001 Restore SUCCESSFUL Wed 2020-04-08 03:00:00 EAT

  3. After the restore is done, import carts using the following command:

    vtl --import_cart --name <container-name> --barcode <comma-separated-barcodes>

    You can provide multiple carts during the VTL import operation.

  4. To view the status of the selected carts moving from the cloud to the storage_slot of the VTL tape drive, use the following command:

    vtl --show --verbose

  5. To rescan the media and update the VTL, perform an Inventory Robot operation.

    NOTE: Before you attempt to restore to a local disk, bring the newly added media online.
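When several cartridges need to come back at once, the restore-then-import sequence above can be batched. A minimal sketch follows; the container name and barcodes are placeholders, and the echo prefix makes this a dry run — remove it to execute the real vtl commands on the QoreStor server:

```shell
# Placeholder container and cartridge barcodes for illustration only.
CONTAINER="vtl-container"
BARCODES="GV758Q001 GV758Q002"

# Initiate a Glacier-to-S3 restore for each cartridge (step 1 above).
for bc in $BARCODES; do
  echo vtl --restore --name "$CONTAINER" --barcode "$bc"
done

# After `vtl --show --verbose` reports Restore SUCCESSFUL for every cart,
# import them in a single call with comma-separated barcodes (step 3 above).
echo vtl --import_cart --name "$CONTAINER" --barcode "$(echo $BARCODES | tr ' ' ',')"
```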

Performing a disaster recovery from the cloud

There are two ways to perform a recovery from the cloud, also known as a disaster recovery. You can recover data by creating a new QoreStor instance and transferring the data, or you can perform a quick recovery, which provides a read-only view of your current QoreStor instance's RDA container data. To recover your QoreStor configuration and cloud-replicated data from the cloud, perform the steps below.

To perform a disaster recovery from the cloud

  1. On a newly installed QoreStor server, run the following recovery command using the definitions provided in the table:
    maintenance --disaster_recovery --cloud_string <name> --container_name <name> --cloud_provider_type <name> --passphrase <name> [--logfile <name>]
    Table 7: Recovery command definitions
    Command option Definition

    --cloud_string

    Cloud connection string, to connect to the cloud bucket.

    --container_name

    Name of the cloud bucket from where data is to be recovered. Valid values are [a-z, 0-9, '-', '.'].

    --cloud_provider_type

    Name of the cloud service provider, such as <AWS-S3 | Azure | Wasabi-S3 | Google-S3 | IBM-S3 | S3-Compatible>.

    --passphrase

    Passphrase used on original machine for encrypting the data in the cloud bucket.

    --logfile

    Log file path to capture the ongoing recovery activity.

    This will regenerate configuration data and populate the metadata from the cloud copy.

    When completed, you will see the following message:

    Filesystem  disaster recovery started successfully.
    Please see the /var/log/oca/qsdr.log and the logfile given in the command.
  2. After the data recovery process is complete, perform a filesystem repair with the command
    maintenance --filesystem --repair_now

    When the file system repair is finished, the process is complete.
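Putting the options from the table together, a full recovery invocation might look like the following. All values here are placeholders for illustration — the connection string, bucket name, and passphrase must come from your own environment — and the echo prefix makes this a dry run:

```shell
# Placeholder values; substitute your real cloud credentials and bucket.
CLOUD_STRING="<your-cloud-connection-string>"
BUCKET="qorestor-dr-bucket"
PROVIDER="AWS-S3"

# Dry run: print the recovery command that would run on the new QoreStor server.
echo maintenance --disaster_recovery \
  --cloud_string "$CLOUD_STRING" \
  --container_name "$BUCKET" \
  --cloud_provider_type "$PROVIDER" \
  --passphrase "<passphrase-from-original-machine>" \
  --logfile /var/log/oca/dr-recovery.log
```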

To perform a quick recovery

  1. On a newly installed QoreStor server of the same install mode and configuration as your previous QoreStor server, run the following recovery command using the definitions provided in the table:
    maintenance --disaster_recovery [--cloud_string <name>] [--container_name <name>] [--cloud_provider_type <name>] [--passphrase <name>] [--logfile <name>] [--quick_ro_recovery <[yes | no]>]
    Table 8: Quick recovery command definitions
    Command option Definition

    --cloud_string

    Cloud connection string, to connect to the cloud bucket.

    --container_name

    Name of the cloud bucket from where data is to be recovered. Valid values are [a-z, 0-9, '-', '.'].

    --cloud_provider_type

    Name of the cloud service provider, such as <AWS-S3 | Azure | Wasabi-S3 | Google-S3 | IBM-S3 | S3-Compatible>.

    --passphrase

    Passphrase used on original machine for encrypting the data in the cloud bucket.

    --logfile

    Log file path to capture the ongoing recovery activity.
    --quick_ro_recovery

    Fast disaster recovery, with data in the cloud bucket accessible in read-only (RO) mode.

    This will regenerate configuration data and populate the metadata from the cloud copy. When used with a newly deployed QoreStor instance, the --quick_ro_recovery option lets you attach a cloud bucket and perform read-only restores without disturbing the cloud connection of the existing QoreStor instance.

    When completed, you will see the following message:

    Filesystem  disaster recovery started successfully.
    Please see the /var/log/oca/qsdr.log and the logfile given in the command.
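The quick read-only variant differs from the full recovery mainly by the --quick_ro_recovery flag. A dry-run sketch with placeholder values (the echo prefix prints the command instead of executing it):

```shell
# Placeholder values for illustration only; all restores against the attached
# bucket will be read-only with --quick_ro_recovery yes.
echo maintenance --disaster_recovery \
  --cloud_string "<your-cloud-connection-string>" \
  --container_name "qorestor-dr-bucket" \
  --cloud_provider_type "AWS-S3" \
  --passphrase "<passphrase-from-original-machine>" \
  --quick_ro_recovery yes
```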

Next steps

Depending on your configuration, there may be several steps required after recovering your QoreStor server. Some actions to consider are:

  • If you are using QoreStor with NetVault, you will need to add the new QoreStor as a target device and add the container.
  • Depending on your DMA, you may need to reconfigure DMA or client connections to reference the new QoreStor server.
  • Once a disaster recovery completes, the recovered source containers are unencrypted. Before ingesting new data into the recovered containers, you must enable encryption on the recovered storage groups.

    NOTE: The recovered containers contain only stub files. The data remains encrypted in the cloud tier.

 

 
