
vRanger 7.6.4 - User Guide




Strategies for sizing the scratch disk

The scratch disk needs to be large enough only to hold the permanent vzmap files and the temporary vzUndo files, plus a small margin for safety. How large that is depends almost entirely on the amount of changed data you are replicating. The amount of changed data is itself a function of the number of VMs you are replicating, their total disk size, replication frequency, and the data change rate per VM. It is important to understand all this data when sizing the scratch disk.

If you are using one VA for a cluster, remember that you must account for all simultaneous replications for the cluster.

Use historical data

If you have previously replicated the source VMs, the most accurate method to size the scratch disk properly, without wasting storage space, is to use historical replication data. This data is available in the Replicate Task Reports, in the vRanger My Reports view, for the applicable VMs. This report shows the amount of data written during each replication task.

The safest method to size your scratch disk based on historical data is to record the highest amount of data written for each VM that you replicate at one time, and size the disk to accommodate those values.

To avoid filling your scratch disk, Quest recommends that you add a small margin, 10% or so, to the calculated scratch disk size for safety.
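
As a minimal sketch of this approach (the VM names and peak data-written figures below are hypothetical, not taken from your reports), you can sum the highest data-written value recorded for each VM and add the recommended 10 percent margin:

# Hypothetical peak "data written" values, in GB, taken from the
# Replicate Task Reports for the VMs you replicate at the same time.
$peakWrittenGB = @{
    'vm-app01' = 18
    'vm-db01'  = 42
    'vm-web01' = 9
}

# Sum the peak values and add a 10% safety margin, as recommended above.
$totalGB   = ($peakWrittenGB.Values | Measure-Object -Sum).Sum
$scratchGB = [math]::Ceiling($totalGB * 1.10)

Write-Output "Peak changed data: $totalGB GB"
Write-Output "Suggested scratch disk size: $scratchGB GB"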

Calculating

If you do not have information on the amount of changed data for each VM, you can estimate the appropriate size of the scratch disk based on the VM size and the number of VMs you plan to replicate at one time.

A general rule for sizing the scratch disk is to choose a percentage of the total VM size to represent the practical limit of changed data for a given replication. Only you can decide what percentage is appropriate for your environment; the numbers that follow are examples given to illustrate the concept.

For example, if you have four VMs that you want to replicate to a host or cluster at the same time, the minimum requirements for the VMs are described in the following table.

Table 2. Minimum requirements

VM    VM size    Change rate    Change size
1     100 GB     15%            15 GB
2     100 GB     10%            10 GB
3     100 GB     20%            20 GB
4     60 GB      5%             3 GB

For the preceding VMs, you would need approximately 48 GB of disk space for the undo files, plus a buffer of roughly 10 percent for safety. In this example, an appropriate estimate for the scratch disk size would be approximately 55 GB.

Bear in mind that this estimating exercise should be repeated for every set of VMs you want to replicate to that host or cluster, with the scratch disk sized to accommodate the largest value obtained.
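
The arithmetic in this example can be expressed as a short PowerShell sketch; the sizes and change rates below are the sample values from Table 2, and the 10 percent buffer is the safety margin recommended earlier. The script is an illustration only, not a vRanger cmdlet:

# Sample VMs from Table 2: disk size in GB and estimated change rate.
$vms = @(
    @{ Name = 'VM1'; SizeGB = 100; ChangeRate = 0.15 },
    @{ Name = 'VM2'; SizeGB = 100; ChangeRate = 0.10 },
    @{ Name = 'VM3'; SizeGB = 100; ChangeRate = 0.20 },
    @{ Name = 'VM4'; SizeGB = 60;  ChangeRate = 0.05 }
)

# Changed data per VM = VM size x change rate, summed across the set.
$changedGB = ($vms |
    ForEach-Object { $_.SizeGB * $_.ChangeRate } |
    Measure-Object -Sum).Sum                      # 15 + 10 + 20 + 3 = 48

# Add roughly 10% for safety (about 53 GB here; the example above rounds
# up further to approximately 55 GB).
$scratchGB = [math]::Ceiling($changedGB * 1.10)

Write-Output "Estimated changed data: $changedGB GB"
Write-Output "Suggested scratch disk size: $scratchGB GB"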


Options for a smaller scratch disk

As previously stated, the primary driver for the scratch disk size is the amount of changed data that needs to be replicated. If you need to reduce the storage requirements for your scratch disk, reduce the amount of changed data that each replication pass must stage, for example by replicating fewer VMs at the same time or by replicating more frequently so that less data changes between passes.


The scratch disk on the source host

Because the scratch disk is used primarily for staging changes before they are written to disk, an activity that occurs on the destination host or cluster, the scratch disk on the source side can be kept fairly small. However, if you need to fail over to the disaster recovery (DR) site, the replication job reverses direction and starts replicating changes back to the production site, that is, the original source host or cluster. For this process to occur, the scratch disk on the source side needs to be resized to accommodate the changed data.


Scratch disk location

When creating the second disk, make sure that you place the disk on a datastore with a block size large enough to support the expected VMDK. The following list shows the maximum file size available for each block size:

1 MB block size: 256 GB maximum file size
2 MB block size: 512 GB maximum file size
4 MB block size: 1024 GB maximum file size
8 MB block size: 2048 GB maximum file size
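
As a quick way to apply this list, the following minimal sketch (the block size and planned VMDK size are hypothetical placeholders) checks whether a planned scratch-disk VMDK fits within the maximum file size for a datastore's block size:

# VMFS block size (MB) mapped to maximum file size (GB), as listed above.
$maxFileGB = @{ 1 = 256; 2 = 512; 4 = 1024; 8 = 2048 }

# Hypothetical values: the datastore's block size and the scratch-disk
# VMDK size you plan to create.
$blockSizeMB   = 4
$plannedVmdkGB = 600

if ($plannedVmdkGB -le $maxFileGB[$blockSizeMB]) {
    Write-Output "OK: a $plannedVmdkGB GB VMDK fits on a datastore with a $blockSizeMB MB block size."
} else {
    Write-Output "Too large: the maximum file size for a $blockSizeMB MB block size is $($maxFileGB[$blockSizeMB]) GB."
}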
