
QoreStor 7.0.1 - User Guide


Adding an archive tier

To add an archive tier

  1. In the navigation menu, click Cloud Storage to expand the menu, then click Archive Tier.
  2. In the Archive Tier pane, click Configure to add an archive tier.
  3. In the archive provider drop-down, select AWS S3.
  4. Provide the name for your S3 bucket.
  5. Enter your Connection String using one of the two methods below:
    • Default - this option assembles your connection string in the correct format from the inputs below.
      • Access key - The access key is typically 20 upper-case English characters
      • Secret key - The secret key is generated automatically by AWS. It is typically 40 characters, including mixed upper and lower-case and special symbols.
      • Region - The region specifies the Amazon-specific region in which you want to deploy your backup solution. Your region name can be obtained from https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
    • Custom - this option allows you to enter your connection string with additional parameters.
      • Your connection string uses the following syntax:
        "accesskey=<ABDCEWERS>;secretkey=< >; loglevel=warn; region=<aws-region>;"

        Please note the following:

        1. The access key is typically 20 upper-case English characters
        2. The secret key is generated automatically by AWS. It is typically 40 characters, including mixed upper and lower-case and special symbols.
        3. The region specifies the Amazon-specific region in which you want to deploy your backup solution. Your region name can be obtained from https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

        An example of a connection string using this syntax follows. Note that each connection string is unique.

        accesskey=AKIARERFUCFODHFJUCWK;secretkey=p+8/T+o5WeZkX11QbuPazHX1IdWbwgFplxuVlO8J;loglevel=warn;region=eu-central-1;
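
    NOTE: Before configuring the tier, you may want to confirm that the access key, secret key, and region are valid. A minimal sketch using the AWS CLI follows, assuming the CLI is installed and configured with the same credentials; bucket_name is a placeholder for your bucket name. This check is optional and is not part of the QoreStor procedure. The first command confirms the key pair is valid and shows the account it belongs to; the second confirms the bucket is reachable with those credentials.

    # aws sts get-caller-identity
    # aws s3 ls s3://bucket_name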
  6. To apply encryption, in the Archive Tier Encryption section enter the following:

    • Passphrase — the passphrase is user-defined and is used to generate a passphrase key that encrypts the file in which the content encryption keys are kept. The passphrase is a human readable key, which can be up to 255 bytes in length. It is mandatory to define a passphrase to enable encryption.

      IMPORTANT: If the passphrase is compromised or lost, the administrator should change it immediately so that the content encryption keys do not become vulnerable. If the passphrase is lost or forgotten, data in the cloud will be unrecoverable.

    • Confirm Passphrase — re-enter the passphrase used above.

  7. In the Archive Tier Options section, enter the following:
    • Archive Retention in Warm Cloud - When a restore operation succeeds, a temporary copy of the Glacier object is created in standard S3 storage. This setting specifies the number of days this temporary copy is held in S3 before it is deleted. Valid values are integers from 1 through 365.
    • Archive Role ARN - S3 must have permission to perform batch operations on behalf of the user. An IAM role must be created that has "Create Job", "Pass Role", and other permissions needed to access the buckets and perform the batch operations. The account admin is expected to create such roles; a sketch of creating one follows this list.

      NOTE: For more information on required permissions and S3 batch operations, refer to Required permissions to restore from Archive Tier and the AWS documents Granting permissions for Amazon S3 Batch Operations and The basics: S3 Batch Operations.

    • Archive Service Name - Select either S3-Glacier or S3 Deep Archive.
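
    NOTE: A minimal sketch of creating such a role with the AWS CLI follows. It assumes the role name BatchOperationUser (used in the batch restore example later in this topic) and shows only the trust policy that lets S3 Batch Operations assume the role; you must still attach a permissions policy granting "Create Job", "Pass Role", and bucket access as described in the AWS documents referenced above.

    # aws iam create-role --role-name BatchOperationUser \
        --assume-role-policy-document '{
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "batchoperations.s3.amazonaws.com"},
                "Action": "sts:AssumeRole"
            }]
        }'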
  8. Click Configure. A Cloud Storage Group will be created.
  9. To enable replication to the cloud, you must link a local container to the cloud using the procedures in Adding a Cloud Tiering policy.

Deleting an archive tier

Before deleting an archive tier, review the details below:

  • The metadata for the files archived to the cloud will be removed locally. This makes those files unrecoverable.
  • Data in the cloud bucket must be deleted manually (see the example after this list).
  • Archive policy settings on the source containers are unaffected.
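
For example, assuming the AWS CLI is configured for the account that owns the bucket, the bucket contents can be removed with a command such as the following (bucket_name is a placeholder for your bucket name):

  # aws s3 rm s3://bucket_name --recursive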

Deleting an archive tier from the GUI

To delete an archive tier, complete the following steps.

  1. In the navigation menu, click Cloud Storage to expand the menu, then click Archive Tier.
  2. Click Delete.
  3. When prompted to confirm, click Delete.
  4. In the Passphrase field, enter the passphrase used for Archive Tier encryption. This provides validation that the person deleting the archive tier has the appropriate authorization.
  5. Review the containers linked to the archive tier and confirm that data in these containers can be deleted.
  6. Click Delete.

Deleting an archive tier from the CLI

  1. Access the QoreStor CLI. Refer to Using the QoreStor command line for more information.
  2. Delete your archive tier using the command below. Refer to the QoreStor Command Line Reference Guide for more information.
    cloud_tier --delete --cloud_archive
    
  3. At the prompt, enter y for yes and press [Enter].

Restoring from archive tier

Depending on the container type, data is sent to the archive tier by different methods. For RDA and Object containers, data is archived based on the Archive Tiering Policy. For VTL containers, exporting the cartridge from the backup application triggers the movement of cartridge data to the cloud. Restoring data from an archive tier differs in some ways from a standard restore process. When restoring from an archive tier, no files are saved to on-prem storage. Instead, files are copied from the archive storage (Amazon S3 Glacier or Amazon S3 Glacier Deep Archive) to warm AWS S3 storage for the period of time specified by the Archive Retention in Warm Cloud setting. When restoring from an archive tier, consider the following:

  • Restoring from archive storage is a two-step operation. First, archive data is restored to standard AWS S3 storage, then the objects are read from there (see the sketch following this list).
  • Restored objects will be ready for readback after 4-6 hours for Amazon S3 Glacier (10-12 hours for Amazon S3 Glacier Deep Archive). No notification is issued when restored objects are available. You may view the status of restore operations in the AWS Console. Refer to the Amazon S3 document Checking Archive Restore Status and Expiration Date for more information. To perform a batch restore for disaster recovery purposes, refer to Manually restoring datastores from Amazon S3 Glacier.
  • When restoring objects from archive, you are charged for both the archive copy and the restore copy in warm storage. Use the Archive Retention in Warm Cloud value to minimize the duration objects are kept in warm storage.
  • For restoring VTL cartridge data, the command vtl --import must be run on the QoreStor server.
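
As an illustration of the two-step model, the following sketch restores a single archived object to warm S3 storage with the s3api restore-object call. QoreStor issues these requests for you during a normal restore; the bucket name and object key below are placeholders, reusing values from the manifest example later in this topic.

  # aws s3api restore-object --bucket bucket_name \
      --key cds/0000/0000/0000/0001/158/158_0 \
      --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'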

Manually restoring datastores from Amazon S3 Glacier

To perform a disaster recovery from an archive tier, you must first restore all datastores to standard Amazon S3 storage. The procedure below guides you through performing this action in the AWS Console.

  1. Using the ls command, recursively get the contents of the cds folder in the cloud bucket you wish to restore.

    NOTE: bucket_name is a placeholder for the bucket name. Replace it with your actual bucket name.

    # aws s3 ls s3://bucket_name/cds --recursive | awk '{print "bucket_name,"$NF}' > manifest.csv

    This command creates a manifest.csv file. A sample manifest.csv file is shown below:

    # head manifest.csv
    bucket_name,cds/0000/0000/0000/0001/158/158.imap
    bucket_name,cds/0000/0000/0000/0001/158/158_0
    bucket_name,cds/0000/0000/0000/0003/357/357.imap
    bucket_name,cds/0000/0000/0000/0003/357/357_0
    bucket_name,cds/0000/0000/0000/0004/439/439.imap
    bucket_name,cds/0000/0000/0000/0004/439/439_0
    bucket_name,cds/0000/0000/0000/0008/869/869.imap
    bucket_name,cds/0000/0000/0000/0008/869/869_0
    bucket_name,cds/0000/0000/0000/0010/1035/1035.imap
    bucket_name,cds/0000/0000/0000/0010/1035/1035_0
  2. To ensure the manifest file can be uploaded successfully, generate an MD5 hash for it:
    # openssl md5 -binary manifest.csv| base64
    BBpU3xTS3/XKnmDm+uqdng==
  3. Paste the MD5 hash into the put-object command. The resulting ETag can be used to initiate the restore job.

    NOTE: --key <path> can be used to upload the manifest to any prefixed location in that bucket.

    # aws s3api put-object --bucket bucket_name --key manifest.csv --body manifest.csv --content-md5 BBpU3xTS3/XKnmDm+uqdng==
    {
    "ETag": "\"041a54df14d2dff5ca9e60e6faea9d9e\""
    }
  4. The uploaded manifest, its location, and the ETag can be used to initiate a batch operation restore job through the AWS Console. Alternatively, you may use the create-job API as follows:
    # aws s3control create-job --no-confirmation-required --account-id 177436582181 \
        --operation '{"S3InitiateRestoreObject": {"ExpirationInDays": 7, "GlacierJobTier": "STANDARD"}}' \
        --report '{"Bucket": "arn:aws:s3:::bucket_name",
            "Prefix": "batch",
            "Format": "Report_CSV_20180820",
            "Enabled": true,
            "ReportScope": "AllTasks"}' \
        --manifest '{"Spec": {"Format": "S3BatchOperations_CSV_20180820", "Fields": ["Bucket", "Key"]},
            "Location": {"ObjectArn": "arn:aws:s3:::bucket_name/manifest.csv", "ETag": "\"041a54df14d2dff5ca9e60e6faea9d9e\""}}' \
        --role-arn arn:aws:iam::177436582181:role/BatchOperationUser \
        --priority 10
    {
    "JobId": "ea4915fc-2cbb-4d00-895e-a363743b9c8c"
    }
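
    NOTE: Optionally, you can monitor the batch job from the CLI with describe-job, reusing the account ID and the JobId returned above; the job has finished when its Status field reports Complete.

    # aws s3control describe-job --account-id 177436582181 \
        --job-id ea4915fc-2cbb-4d00-895e-a363743b9c8c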
  5. Restored objects will be ready for readback after 4-6 hours for Amazon S3 Glacier (10-12 hours for Amazon S3 Glacier Deep Archive). You can click on an individual CDS object to make sure it is ready for readback (ongoing-request = false, expiry-date = <some future date>) before attempting to restore from QoreStor. Optionally, you can perform a head-object inquiry on any one of the CDS entries from the uploaded manifest file:
    # aws s3api head-object --bucket bucket_name --key cds/0000/0000/0000/0001/158/158_0
    {
    	"AcceptRanges": "bytes",
    	"Restore": "ongoing-request=\"false\", expiry-date=\"Sat, 05 Sep 2020 00:00:00 GMT\"",
    	"LastModified": "2020-09-03T08:20:42+00:00",
    	"ContentLength": 4245203,
    	"ETag": "\"98b38ff76a2dc04f423ba954af44c052\"",
    	"ContentType": "binary/octet-stream",
    	"Metadata": {},
    	"StorageClass": "GLACIER"	
    }
    

    NOTE: The time required to run the ls command can be quite long for a large number of objects. In the example below, the ls operation for a 1 TB file took approximately 43 minutes. If the command fails due to network or connection issues, restart it.

    # time aws s3 ls s3://st-6300-19-archive/cds --recursive | awk '{print "st-6300-19-archive,"$NF}' > st-6300-19-archive_manifest.csv
    real    43m24.906s
    user    39m1.575s
    sys     1m11.745s
    
    # ls -al st-6300-19-archive_manifest.csv
    -rw-rw-r-- 1 quser quser 163138124 Sep  3 05:15 st-6300-19-archive_manifest.csv
    
    # wc -l st-6300-19-archive_manifest.csv
    2680848 st-6300-19-archive_manifest.csv
    
    # time aws s3api put-object --bucket bucket_name --key st_manifest.csv --body ./st-6300-19-archive_manifest.csv --content-md5  ni/tJMLCpB0l9A1m3Te3sg==
    {
    "ETag": "\"9e2fed24c2c2a41d25f40d66dd37b7b2\""
    }
    
    real    0m5.281s
    user    0m1.930s
    sys     0m0.560s