
Rapid Recovery 6.8 - User Guide


Importing an archive

When you want to recover archived data, you can import the entire archive to a specified location.

You can use this procedure to import an archive one time, or to schedule archive imports on a recurring basis.

Caution: Perform this step only after careful consideration. Importing an archive repopulates the repository with the contents of the archive, replacing any new data in the repository since the archive was captured.

To import an archive, complete the steps in the following procedure.

  1. On the menu bar of the Rapid Recovery Core Console, click the Archive drop-down menu, and then select Import Archive.

    The Import Archive Wizard opens.

  2. On the Import Type page of the wizard, select one of the following options:
    • One-time import
    • Continual import (by schedule)
  3. Click Next.
  4. On the Location page, select the location of the archive you want to import from the drop-down list, and then enter the information as described in the following table:
    Table 163: Imported archive location type options

    Local
      Location: Enter the local path where the archive resides; for example, d:\work\archive.

    Network
      Location: Enter the network path where the archive resides; for example, \\servername\sharename.
      User name: Enter the user name for a user with access to the network share.
      Password: Enter the password for that user.

    Cloud
      Account: Select a cloud account from the drop-down list.

      NOTE: To select a cloud account, you must first have added it in the Core Console. For more information, see Adding a cloud account.

      Container: Select a container associated with your account from the drop-down list.
      Folder name: Enter a name for the folder in which the archived data is saved.
  5. Click Next.
  6. On the Archive Information page of the wizard, if you want to import every machine included in the archive, select Import all machines.
  7. Complete one of the following options based on your selection:
    • If you selected One-time import in step 2 and Import all machines in step 6, and all of the machines are already present on the Core (as protected, replicated, or recovery points only machines), go to step 12.
    • If you did not select Import all machines in step 6, click Next, and then continue to step 8.
  8. On the Machines page, select the machines that you want to import from the archive.
    • If you selected One-time import in step 2 and at least one selected machine is not already present on the Core (as a protected, replicated, or recovery points only machine), use the drop-down lists to select a repository for each machine you want to import, and then go to step 12.
    • If all of the selected machines are already present on the Core (as protected, replicated, or recovery points only machines), go to step 12.
  9. Click Next.
  10. On the Repository page, complete one of the following options:
    • If a repository is associated with the Core, select one of the options in the following table.
      Table 164: Repository options

      Use an existing Repository: Select a repository currently associated with this Core from the drop-down list.
      Create a Repository: In the Server text box, enter the name of the server on which you want to save the new repository (for example, servername or localhost), and then see Creating a DVM repository.

    • If no repository is associated with the Core, enter the name of the server on which you want to save the new repository (for example, servername or localhost), and then see Creating a DVM repository or Connecting to an existing repository.
  11. If you selected Continual import (by schedule) in step 2, on the Schedule page, select the options described in the following table.
    Table 165: Schedule import options

    Daily: Click the clock icon and use the up and down arrows to select the time you want the archive job to begin. If you are using a 12-hour time system, click the AM or PM button to specify the time of day.
    Weekly: Select the day of the week and then the time you want the archive job to begin. If you are using a 12-hour time system, click the AM or PM button to specify the time of day.
    Monthly: Select the day of the month and the time you want the archive job to begin. If you are using a 12-hour time system, click the AM or PM button to specify the time of day.
    Pause initial importing: Select this option if you do not want the import job to begin at the next scheduled time after you complete the wizard.

    NOTE: You may want to pause the scheduled import if you need time to prepare the target location before importing resumes. If you do not select this option, importing begins at the scheduled time.

  12. Click Finish.
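The Daily, Weekly, and Monthly options on the Schedule page behave like simple recurrence rules. As an illustration only (this is not Rapid Recovery's scheduler, and the function name is hypothetical), the next run time for a daily schedule could be computed like this:

```python
from datetime import datetime, timedelta

def next_daily_run(now: datetime, hour: int, minute: int) -> datetime:
    """Return the next occurrence of a daily HH:MM schedule at or after `now`."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        # Today's scheduled time has already passed, so run tomorrow.
        candidate += timedelta(days=1)
    return candidate

# A job scheduled daily at 21:30, checked at 22:00, runs the next day.
next_run = next_daily_run(datetime(2024, 5, 1, 22, 0), hour=21, minute=30)
```

A weekly or monthly schedule would extend the same idea by also matching the day of the week or the day of the month.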

Cloud accounts

Rapid Recovery lets you define connections between existing cloud storage or cloud service providers and your Rapid Recovery Core. Compatible cloud services include Microsoft Azure, Amazon Web Services (AWS), any OpenStack-based provider (including Rackspace), and Google Cloud. US government-specific platforms include AWS GovCloud (US) and Azure Government. You can add any number of cloud accounts to the Core Console, including multiple accounts for the same provider.

The purpose of adding cloud accounts to your Core Console is to work with them as described in the topic About cloud accounts.

Once added, you can manage the connection between the Core and the cloud accounts. Specifically, you can edit the display name or credential information, configure the account connection options, or remove the account from Rapid Recovery. When you edit or remove cloud accounts in the Core Console, you do not change the cloud accounts themselves—just the linkage between those accounts and your ability to access them from the Core Console.

This section describes how to define links between existing cloud storage or cloud service provider accounts and the Rapid Recovery Core Console. It also describes how to manage those cloud accounts in Rapid Recovery.


About cloud accounts

Rapid Recovery works with cloud accounts in the following ways:

  • Archive. You can store a one-time archive or a continual scheduled archive in the cloud. This feature works with all supported cloud account types. When you archive to the cloud, you can later access the information in the archived recovery points by attaching the archive (for archives created in release 6.0.1 or later). For any archive, including archives created before Rapid Recovery 6.0.1, you can import the archive, which makes the imported data subject to your data retention policy. You can also perform a bare metal restore from an archive.
  • Virtual export. You can perform virtual export to an Azure cloud account. This includes one-time export of a virtual machine, or continual export for a virtual standby VM.

NOTE: When archiving to Azure using Rapid Recovery release 6.8, use the cloud account type Microsoft Azure Service Management (for Archive). When exporting a VM to Azure, use the cloud account type Microsoft Azure Resource Management (for Virtual Export).

For conceptual information regarding various cloud accounts, see Considering cloud storage options.

For information about configuring timeout settings between the Core and cloud accounts, see Configuring cloud account connection settings.

For information about performing virtual export to the Azure cloud, see Exporting data to an Azure virtual machine.

For information about adding a cloud account, see Adding a cloud account.

Considering cloud storage options

This topic discusses support for US Government cloud storage accounts. It also discusses tradeoffs between cost and other factors when selecting cloud accounts for archiving.

Secure cloud accounts for US Government

United States federal, state, and local government agencies and their partners have access to an increasing range of cloud account options. Rapid Recovery supports the following government and related cloud accounts:

  • AWS GovCloud (US). Amazon Web Services offers a service called AWS GovCloud (US). This is an isolated AWS region designed to meet specific regulatory and compliance requirements. Using this service lets United States government agencies and customers join private businesses in leveraging cloud accounts. Rapid Recovery supports archiving to Amazon S3 storage accounts in the AWS GovCloud.
  • Azure Government. Azure Government is a United States government-only cloud platform exclusively for US federal, state, local, and tribal government agencies and their partners. Rapid Recovery supports Azure Government in the same manner that we offer standard Azure support. For example:
    • Rapid Recovery supports archiving to Azure Government and standard Azure storage accounts.
    • Rapid Recovery supports virtual export to an Azure virtual machine (one-time, or virtual standby) on public and Azure Government platforms.
    • Rapid Recovery supports running a Rapid Recovery Core in an Azure VM in Azure Government or in a standard Azure account.
    • Rapid Recovery supports replication from a source Core (running on-premises or in Azure Government) to an Azure VM target Core. If your source Core is located in Azure Government, your replication target must also run in Azure Government.

Balancing access time, cost and convenience for archiving to cloud accounts

To offer users cost-effective cloud archiving and virtual export options, Rapid Recovery continues to expand support for cloud storage providers (and for the storage classes that leading providers offer). Informed users can leverage these options to balance data archive convenience, data access time, and cost.

When considering strategies for archiving or exporting to the cloud, Rapid Recovery users are encouraged to understand the tradeoffs between initial cost to store data, how frequently the data is expected to be used, the need to access that data within a prescribed period of time, and costs associated with retrieving the data.

Some providers (such as Amazon S3) offer different storage classes. Choosing the correct storage class can save you money if your assumptions about these factors are accurate. Quest recommends that Rapid Recovery users review their data storage policies at least annually to ensure they are using resources effectively. Similarly, administrators should periodically review the data being archived or exported to cloud accounts so they can update planning assumptions and migrate data accordingly.

For some vendors, the cost of storing data is extremely low, or in some cases free. However, cloud service providers often apply charges to your account when you access or retrieve that data, and fees often differ based on how quickly you need access. In some cases, using a more expensive storage class (such as Amazon S3 Standard) is more cost-effective if you plan to restore from recovery points than storing the data in Glacier and then needing to restore.
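To make the tradeoff concrete, the following sketch compares total cost for two storage classes over a short retention period. The per-GB rates are hypothetical values chosen for illustration, not any provider's actual pricing:

```python
def total_cost(gb_stored, months, gb_retrieved, storage_rate, retrieval_rate):
    # Total cost = monthly storage charges plus one-time retrieval fees.
    return gb_stored * months * storage_rate + gb_retrieved * retrieval_rate

# Hypothetical rates in USD per GB: a 500 GB archive kept 3 months, then
# fully restored once.
standard_cost = total_cost(500, 3, 500, storage_rate=0.023, retrieval_rate=0.0)
archive_cost = total_cost(500, 3, 500, storage_rate=0.004, retrieval_rate=0.09)

# With a full restore expected soon, the "cheaper" archive-style class
# ends up costing more overall than the standard class.
```

Rerunning the comparison with a longer retention period, or with no restore expected, reverses the outcome, which is why reviewing these assumptions regularly matters.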

Amazon lets you define data life cycle policies that move data between Amazon S3 storage classes over time. For example, you could store freshly uploaded data using the Standard storage class, move it to Standard – Infrequent Access 30 days later, and then to Reduced Redundancy Storage after another 60 days have passed. You can also explicitly archive data for any type of Amazon S3 cloud account to Glacier, using the Archive Wizard. This is recommended if data recovery is expected very infrequently. Before selecting this option, familiarize yourself with fees related to access, storage age, and so on. See the topic Amazon storage options and archiving.
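A life cycle policy of the kind described above can also be expressed programmatically. The sketch below builds, as a plain Python dictionary, the rule structure that Amazon's S3 lifecycle API expects, here transitioning objects to Standard-IA at 30 days and to Glacier at 90 days. The rule ID and key prefix are hypothetical, and applying the policy (for example with boto3's put_bucket_lifecycle_configuration) requires valid AWS credentials, so that call is shown only as a comment:

```python
# Life cycle rule: Standard -> Standard-IA after 30 days, then Glacier
# after another 60 days (day 90 overall).
lifecycle_policy = {
    "Rules": [
        {
            "ID": "archive-tiering",            # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "archives/"},  # hypothetical key prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With boto3 installed and AWS credentials configured, the policy could be
# applied to a bucket like this:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-archive-bucket",  # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_policy)
```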

Some Rapid Recovery features are designed specifically for the cloud. If you perform virtual export to the cloud using Azure, consider virtual standby. This process lets you create a fully bootable virtual machine in the Azure cloud. The VM files are continually updated with newly captured recovery points. Unlike virtual standby performed on-premises, the VM files are not deployed into a bootable VM until you need them. Your initial costs for virtual standby in Azure involve only storage. Compute costs (which in Azure can be considerable over the long term) are incurred only when you deploy the VM, which is required to spin it up and perform a restore.

You can run a Rapid Recovery Core in an Azure VM. You can also replicate an on-premises Core to a VM in the Azure cloud, or replicate a source Core in Azure to a target Core in Azure. Running a source or target Rapid Recovery Core in Azure uses compute resources for the active Core VM, and requires storage accounts to be created and associated with each Core VM for your repository, which incurs storage costs. For information about setting up a Core to run in Azure, see the Rapid Recovery Azure Setup Guide.

