What should and should not cause a rehash of a replication job.
Should cause a "rehash" of target replica VMDKs (see the sketch after this list):
· A pre-seeded job with no existing map files (per the user guide).
· The vzmap file is moved or deleted from its folder. (The folder is named by the target VM UUID, so as long as the UUID hasn't changed the map files are still found.)
· The vzmap size does not match the VM.
· Replication fails, and a "repe undo" is executed through the vRanger command line; undoing the failed replication leaves the map files missing.
· A disk is resized.
· A disk is moved to a different datastore.
· The source or target VM is powered on or off AFTER the first replication has run.
· The VA scratch disk is removed or replaced.
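Taken together, the triggers above reduce to one decision: rehash whenever the map file is absent or inconsistent, or whenever the disk layout or power state has changed since the last run. The Python sketch below is only an illustration of that decision inferred from this list; the function name, parameters, and path handling are hypothetical, not vRanger's actual implementation.

    import os

    def needs_rehash(vzmap_path, expected_vzmap_size, disk_resized,
                     disk_moved, power_state_changed, scratch_disk_replaced):
        """Return True if any documented rehash trigger applies (illustrative only)."""
        # A missing map file covers: a pre-seeded job with no maps, a
        # moved/deleted vzmap, and a "repe undo" that removed the maps.
        if not os.path.exists(vzmap_path):
            return True
        # A vzmap whose size no longer matches the VM forces a rehash.
        if os.path.getsize(vzmap_path) != expected_vzmap_size:
            return True
        # Remaining documented triggers: disk resize, datastore move,
        # power-state change after the first run, scratch disk removed/replaced.
        return (disk_resized or disk_moved
                or power_state_changed or scratch_disk_replaced)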
Should not cause a "rehash" of target replica VMDKs (but did in the following observed cases):
· Recreating a job using pre-seed when map files DO exist. The rehash happens even with a powered-off VM with no changes, whether using COS or VA replication.
· Disabling and re-enabling a job. The rehash seems to happen more often with larger VMs (400 GB-1 TB).
· Upgrading to 5.4: the VA is upgraded but the scratch disk is maintained and attached to the new VA, so the map files are intact.
· Changing from COS to VA-based replication using pre-seed, again with no existing map files. Pre-creating the folder on a VA (using the UUID from vRanger) and moving the vzmap files there was also tested, with the same result: the vzmap files were recreated. The job used the pre-created folder structure without issue, so the disks were still rehashed, though it is unclear whether this approach is supported; see the sketch below.
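For reference, here is a minimal sketch of the manual test described in the last item: pre-creating the map-file folder (named by the target VM UUID, per the behavior noted earlier) and copying the existing vzmap files into it before re-running the job. The scratch_root path, function name, and file-handling details are placeholders, not vRanger defaults, and as noted above this sequence did not avoid the rehash in testing.

    import shutil
    from pathlib import Path

    def preseed_map_files(scratch_root, target_vm_uuid, existing_maps):
        """Copy existing vzmap files into the UUID-named folder on the scratch disk."""
        dest = Path(scratch_root) / target_vm_uuid  # folder keyed by target VM UUID
        dest.mkdir(parents=True, exist_ok=True)
        for map_file in existing_maps:
            shutil.copy2(map_file, dest)  # copy2 preserves file timestamps
        return dest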