
vRanger 7.8.6 - User Guide


The VA scratch disk

If you want to perform replication tasks with your VA, you need to add a second “scratch” disk to the VA. This scratch disk is used to store two types of files:

vzmap files: Block maps for the VMs replicated to the destination host. A vzmap file contains block map information only, not actual data blocks. These maps are compared to the source VM during each replication to identify the data blocks that have changed since the last replication. The vzmap files make differential replication faster because they remove the need to scan the destination VM blocks for comparison with the source VM.
vzUndo files: As the VA sends data to the destination host, blocks on the destination disk are written to the undo file before they are overwritten by the changed data. If replication fails and an undo becomes necessary, the original destination disk blocks are read from the undo file and written back to the destination disk to roll back the failed replication. This process provides resiliency in the face of a network failure; if the network fails during the replication pass, the destination VM is not corrupted by incomplete data.

After the replication is complete, and all data has been received by the destination VA, the undo file is deleted. At that point, the storage space used by the undo file is returned to the VA for use. Undo files are not created during the first replication. During the first replication, the entire VM is sent to the destination host, but there is no existing data on the destination VMDKs, and therefore no risk of corruption. Data is streamed directly to the VMDK. You do not need to allocate scratch disk space for this scenario.
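The rollback behavior described here follows a general copy-on-write undo pattern. The following Python sketch is a minimal illustration of that pattern only; it is not vRanger code, and the disk model and names are invented for clarity:

    # Illustrative sketch of the undo-before-overwrite pattern (not vRanger code).
    # A disk is modeled as a list of blocks; a None entry in `changes` means that
    # block has not changed since the last replication.
    def replicate_with_undo(dest_disk, changes):
        undo = {}  # block index -> original contents, saved before each overwrite
        try:
            for idx, new_block in enumerate(changes):
                if new_block is None:
                    continue  # unchanged block, nothing to send
                undo[idx] = dest_disk[idx]  # preserve the original block first
                dest_disk[idx] = new_block  # then apply the changed data
        except Exception:
            # Failure mid-pass: restore the saved blocks so the destination
            # VM is not left corrupted by incomplete data.
            for idx, original in undo.items():
                dest_disk[idx] = original
            raise
        undo.clear()  # success: undo data is discarded and its space reclaimed
        return dest_disk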

While the vzmap files are trivial in size, on the order of a few MB, the undo file can potentially grow as large as the VM itself. The scratch disk must be sized to handle the data of all concurrent replication tasks, but making it too large wastes valuable storage space. Use the following topics to guide you in determining the proper size for the scratch disk.

Strategies for sizing the scratch disk

The scratch disk only needs to be large enough to hold the permanent vzmap files and the temporary vzUndo files, plus a small margin for safety. How large that is depends almost entirely on the amount of changed data you are replicating, which is itself a function of the number of VMs you are replicating, their total disk size, the replication frequency, and the data change rate per VM. Understand all of these factors before sizing the scratch disk.

If you are using one VA for a cluster, remember that you must account for all simultaneous replications for the cluster.

If you have previously replicated the source VMs, the most accurate method to size the scratch disk properly, without wasting storage space, is to use historical replication data. This data is available in the Replicate Task Reports, in the vRanger My Reports view, for the applicable VMs. This report shows the amount of data written during each replication task.

The safest method to size your scratch disk based on historical data is to record the highest amount of data written for each VM that you replicate at one time, and size the disk to accommodate those values.

To avoid filling your scratch disk, Quest recommends that you add a small margin, 10% or so, to the calculated scratch disk size for safety.
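If you track those report figures, the calculation itself is simple. The following Python sketch shows one way to do it; the VM names and per-task figures are hypothetical placeholders that you would replace with values from your own Replicate Task Reports:

    # Hypothetical peak GB written per replication task, taken from the
    # Replicate Task Reports for the VMs replicated at one time.
    peak_gb_written = {
        "vm-app01": 15.0,
        "vm-app02": 10.0,
        "vm-db01": 20.0,
    }

    margin = 0.10  # the ~10% safety margin recommended above

    scratch_gb = sum(peak_gb_written.values()) * (1 + margin)
    print(f"Size the scratch disk to at least {scratch_gb:.1f} GB")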

If you do not have information on the amount of changed data for each VM, you can estimate the appropriate size of the scratch disk based on the VM size and the number of VMs you plan to replicate at one time.

A general rule for sizing the scratch disk is to choose a percentage of the total VM size to represent the practical limit of changed data for a given replication. Only you can decide what is appropriate for your environment. The following numbers are examples given to illustrate the concept:

For example, suppose you have four VMs that you want to replicate to a host or cluster at the same time, with the sizes and estimated change rates shown in the following table.

VM    Total VM size    Estimated change rate    Undo space required
1     100 GB           15%                      15 GB
2     100 GB           10%                      10 GB
3     100 GB           20%                      20 GB
4     60 GB            5%                       3 GB

For the preceding VMs, you would need approximately 48 GB of disk space for the undo files, plus a buffer of about 10% for safety. An appropriate estimate for the scratch disk size in this example would therefore be approximately 55 GB.

Repeat this estimate for every set of VMs you replicate to that host or cluster, and size the scratch disk to accommodate the largest value obtained, as illustrated in the sketch below.
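To make the arithmetic explicit, the following Python sketch works through the same estimate; the set name is a placeholder, and the sizes and change rates are the example values from the table, not recommendations:

    # (total VM size in GB, assumed changed-data rate) for each VM in a set of
    # VMs replicated to the same host or cluster at the same time. The values
    # are the example figures from the table above.
    replication_sets = {
        "set-1": [(100, 0.15), (100, 0.10), (100, 0.20), (60, 0.05)],
        # ...add an entry for every set of VMs replicated to this target.
    }

    margin = 0.10  # ~10% safety buffer

    def undo_space_gb(vms):
        return sum(size_gb * rate for size_gb, rate in vms)

    # Size the scratch disk for the largest set: set-1 needs 48 GB of undo
    # space, and the buffer brings the result to roughly 53 GB (the guide
    # rounds this up to approximately 55 GB).
    largest = max(undo_space_gb(vms) for vms in replication_sets.values())
    print(f"Scratch disk: at least {largest * (1 + margin):.0f} GB")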

Options for a smaller scratch disk

As previously stated, the primary driver for the scratch disk size is the amount of changed data that needs to be replicated. If you need to reduce the storage requirements for your scratch disk, you can reduce the amount of changed data processed in each pass, either by replicating fewer VMs at one time or by replicating more frequently so that less changed data accumulates between passes.

The scratch disk on the source host

Because the scratch disk is used primarily for staging changes before they are written to disk, activity which occurs on the destination host or cluster, the scratch disk on the source side can be kept fairly small. However, if you need to fail over to the disaster recovery (DR) site, the replication job reverses direction and starts replicating changes back to the production site (the original source host or cluster). Before this failback can occur, the scratch disk on the source side must be resized to accommodate the changed data.
