
vRanger 7.7.1 - User Guide

Adding a Quest DR Series system as a CIFS repository

To add a Quest DR Series system as a CIFS repository:
1. In the My Repositories pane, right-click anywhere, and click Add > Windows Share (CIFS).
2. In the Add Windows Network Share Repository dialog box, complete the following fields:
Repository Name: Enter a name for the repository.
Description: [Optional] Enter a long-form description for the repository.
User Name and Password: Enter the credentials for accessing the CIFS share.
Security Protocol: Select a protocol, NTLM (default) or NTLMv2.
Server: Enter the UNC path to the desired repository directory. Alternatively, you may enter a partial path and click Browse to find the target directory.
IMPORTANT: Do not select Encrypt all backups to this repository. Using encryption or compression with deduplicated repositories limits or disables deduplication. Encryption and compression should not be used with any repository type that provides deduplication.
3. The connection to the repository is tested, and the repository is added to the My Repositories pane and the Repository Information dialog box.

4. vRanger checks the configured repository location for existing manifest data to identify existing savepoints. If existing savepoints are found, select one of the following options:

Import as Read-Only: To import all savepoint data into the vRanger database, but only for restores, click this button. You cannot back up data to this repository.
Import: To import all savepoint data into the vRanger database, click this button. vRanger is able to use the repository for backups and restores. vRanger requires read and write access to the directory.
Overwrite: To retain the savepoint data on the disk and not import it into vRanger, click this button. vRanger ignores the existence of the existing savepoint data and treats the repository as new.
5. Click Next.
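The Server field expects a UNC path to the share. As a rough illustration only (this is not part of vRanger), the following sketch checks that a string has the expected \\server\share[\subdirectory] shape before it is used; the server and share names shown are hypothetical:

```python
def is_valid_unc_path(path: str) -> bool:
    """Return True if the string looks like a UNC path of the form
    \\\\server\\share[\\subdirectory...], as the Server field expects."""
    # Must start with exactly two backslashes.
    if not path.startswith("\\\\"):
        return False
    # Split the remainder into path components, dropping empties.
    parts = [p for p in path[2:].split("\\") if p]
    # Need at least a server name and a share name, none containing
    # characters that are illegal in Windows path components.
    illegal = set('<>:"/|?*')
    return len(parts) >= 2 and all(not (illegal & set(p)) for p in parts)

print(is_valid_unc_path(r"\\dr-series01\backups\vranger"))  # True
print(is_valid_unc_path("dr-series01/backups"))             # False
```

A partial path that fails such a check is a good candidate for the Browse button mentioned in the Server field description.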

Adding a Quest DR Series system as an NFS repository


To add a Quest DR Series system as an NFS repository:
1. In the My Repositories pane, right-click anywhere, and click Add > NFS.
2. In the Add Network File Share Repository dialog box, complete the following fields:
Repository Name: Enter a descriptive name for the repository.
Description: [Optional] Enter a long-form description for the repository.
DNS Name or IP: Enter the IP or FQDN for the repository.
Export Directory: Specify the export directory, which is similar in concept to a network share. You must create a target subdirectory in the export directory.
Target Directory: Specify a subdirectory of the NFS export directory. This directory is the location to which savepoints are written.
IMPORTANT: Do not select Encrypt all backups to this repository. Using encryption or compression with deduplicated repositories limits or disables deduplication. Encryption and compression should not be used with any repository type that provides deduplication.
3. The connection to the repository is tested, and the repository is added to the My Repositories pane and the Repository Information dialog box.

4. vRanger checks the configured repository location for existing manifest data to identify existing savepoints. If existing savepoints are found, select one of the following options:

Import as Read-Only: To import all savepoint data into the vRanger database, but only for restores, click this button. You cannot back up data to this repository.
Import: To import all savepoint data into the vRanger database, click this button. vRanger is able to use the repository for backups and restores. vRanger requires read and write access to the directory.
Overwrite: To retain the savepoint data on the disk and not import it into vRanger, click this button. vRanger ignores the existence of the existing savepoint data and treats the repository as new.
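The relationship between the Export Directory and Target Directory fields can be pictured as path composition: the export is the NFS share itself, and the target must be a subdirectory created inside it. A minimal sketch (not vRanger code; the hostname and paths are hypothetical):

```python
from pathlib import PurePosixPath

def nfs_repository_path(host: str, export_dir: str, target_dir: str) -> str:
    """Compose the full NFS location from the dialog fields.

    The export directory is the NFS share; the target directory must be
    a subdirectory inside it, so it is rejected if given as absolute."""
    export = PurePosixPath(export_dir)
    target = PurePosixPath(target_dir)
    if target.is_absolute():
        raise ValueError("Target Directory must be relative to the export directory")
    return f"{host}:{export / target}"

print(nfs_repository_path("dr01.example.com", "/mnt/backups", "vranger"))
# dr01.example.com:/mnt/backups/vranger
```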
Understanding the vRanger virtual appliance (VA)


The vRanger VA is a small, pre-packaged Linux® distribution that serves as a platform for vRanger operations away from the vRanger server. With VAs, the workload can be spread across the other CPUs available to a host, which increases the reliability and scalability of operations.

vRanger uses the VA for several VMware® functions, most notably replication.

The VA must be deployed to any ESXi host that you want to configure for replication — either as a source or a destination. For hosts in a cluster, you may deploy just one VA to the cluster; the VA is shared among the cluster’s hosts. When deploying a VA to a cluster, you must choose a host in the cluster to which the VA should be associated.

In addition, VA-based replication requires that VAs be used in pairs: if a VA is used on one side of a replication job, a VA must also be used on the other side, whether the source and destination are hosts or clusters.

When configuring the VA, consider the amount of resources — CPU and RAM — allocated to the VA as the number of simultaneous tasks the VA can process is directly tied to available resources. In addition, if you want to perform replication tasks using a VA, carefully consider an appropriate size for the VA scratch disk. For more information, see The VA scratch disk.
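Because the number of simultaneous tasks is tied to the resources allocated to the VA, the relationship can be sketched as taking the tighter of the CPU and RAM constraints. The per-task figures below are illustrative assumptions only, not vRanger's published requirements:

```python
def max_simultaneous_tasks(va_vcpus: int, va_ram_mb: int,
                           cpus_per_task: int = 1,
                           ram_mb_per_task: int = 512) -> int:
    """Estimate how many tasks a VA can run at once.

    Takes the tighter of the CPU and RAM constraints; the default
    per-task figures are assumptions for illustration only."""
    return min(va_vcpus // cpus_per_task, va_ram_mb // ram_mb_per_task)

# With 4 vCPUs and 4 GB RAM, CPU is the limiting resource here.
print(max_simultaneous_tasks(va_vcpus=4, va_ram_mb=4096))  # 4
```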

For more information about vRanger VA configuration, see the following topics:

The VA scratch disk

Previous Next



The VA scratch disk

If you want to perform replication tasks with your VA, you need to add a second “scratch” disk to the VA. This scratch disk is used to store two types of files:

vzmap files: Block maps for the VMs replicated to the destination host. A vzmap file contains block map information, not actual data blocks. These maps are compared to the source VM during each replication to identify the data blocks that have changed since the last replication. The vzmap files make differential replication faster as they remove the need to scan the destination VM blocks for comparison with the source VM.
vzUndo files: As data is sent to the destination host, by using the VA, blocks in the destination disk are written to the undo file before they are overwritten by the changed data. If replication fails and an undo becomes necessary, the original destination disk blocks are read from the undo file and written to the destination disk to roll back the failed replication. This process is a key function designed to provide resiliency in the face of a network failure; if there is a network failure during the replication pass, the destination VM is not corrupted by incomplete data.

After the replication is complete, and all data has been received by the destination VA, the undo file is deleted. At that point, the storage space used by the undo file is returned to the VA for use. Undo files are not created during the first replication. During the first replication, the entire VM is sent to the destination host, but there is no existing data on the destination VMDKs, and therefore no risk of corruption. Data is streamed directly to the VMDK. You do not need to allocate scratch disk space for this scenario.
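The block-map comparison described above can be sketched in a few lines: record a fingerprint per fixed-size block, then on the next pass compare fingerprints to find only the blocks that need to be sent. This is an illustration of the general technique, not vRanger's vzmap format; the block size is an assumption:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not vRanger's actual value

def block_map(data: bytes) -> list[str]:
    """Hash each fixed-size block; analogous in spirit to a vzmap,
    which records block maps rather than the data itself."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_map: list[str], new_data: bytes) -> list[int]:
    """Return indices of blocks that differ from the stored map,
    so only those blocks need to be sent in a differential pass."""
    new_map = block_map(new_data)
    return [i for i, h in enumerate(new_map)
            if i >= len(old_map) or old_map[i] != h]

base = b"A" * BLOCK_SIZE * 3
stored = block_map(base)
# Change only the middle block; the differential pass finds just it.
modified = base[:BLOCK_SIZE] + b"B" * BLOCK_SIZE + base[2 * BLOCK_SIZE:]
print(changed_blocks(stored, modified))  # [1]
```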

While the vzmap files are trivial in size, on the order of a few MB, the undo file can potentially be as large as the VM itself. The scratch disk must be large enough to handle the data of concurrent replication tasks, but making it too large wastes valuable storage space. Use the following topics to guide you in determining the proper size for the scratch disk.
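Since each task's undo file can, in the worst case, grow to the size of the VM being replicated, a back-of-the-envelope estimate is to sum the sizes of the VMs replicated concurrently and add headroom. This is a sketch under stated assumptions, not a vRanger sizing guideline; the safety factor is illustrative:

```python
def scratch_disk_estimate_gb(concurrent_vm_sizes_gb, safety_factor=1.2):
    """Worst-case scratch-disk estimate for VA replication.

    Each concurrent task's undo file can grow as large as that VM's
    disks, so sum the concurrent VM sizes and add headroom. The
    safety factor is an assumption, not a vRanger guideline."""
    return sum(concurrent_vm_sizes_gb) * safety_factor

# Three concurrent replications of 100, 60, and 40 GB VMs:
print(scratch_disk_estimate_gb([100, 60, 40]))  # 240.0
```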
