When attempting a replication in vRanger 6.x using the Virtual Appliance (VA) version 1.8.0, the "2129 - can't write" error appears after some data has been written to the scratch disk. The job then fails with a "can't write .vzundo" message, even though the scratch disk is large enough to contain the .vzundo file(s) of the target replica.
Log into the Virtual Appliance via a VM console or SSH session and run the "df" command. Check what free space is available and take note of the mounted file systems. There should be a "/dev/sdb" mount, which is the scratch disk. This "sdb" mount can be missing even though the VA was deployed successfully via the Virtual Appliance Deployment Wizard within vRanger. If only a "/dev/sda" mount appears, perform the following (a sample "df" session is sketched after these steps):
1) Re-run the replication job.
2) While data is being written, run the "df" command again on the Virtual Appliance VM and check where the scratch data is actually going.
3) Even though the scratch disk is present, the scratch data may actually be written to the "/dev/sda" mount point on the VA. This is the VA's system disk and has only 10GB of total capacity, so the scratch data fills it up, culminating in the "2129 - can't write" error. Check the disk space via "df" during the replication attempt to confirm this; in this failure mode, the much larger scratch disk that was added receives no data at all.
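As a rough illustration, the console sketch below shows what this failure mode can look like. The sizes, percentages, and partition names are assumptions for the example, not output from a specific appliance; the telltale signs are a nearly full system disk and no "/dev/sdb" line at all.

    # If the watch utility is present on the appliance, refresh df every
    # few seconds while the job runs; otherwise simply re-run df by hand.
    watch -n 5 df -h

    # Illustrative output in the failing state (all figures assumed).
    # The 10GB system disk is almost full and no /dev/sdb mount appears:
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       9.9G  9.3G  0.6G  95% /
    tmpfs           498M     0  498M   0% /dev/shm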
1) If the cause is verified to be the system disk filling up with scratch data, delete the TARGET virtual appliance via vRanger.
2) Redeploy the Virtual Appliance.
3) After VMware Tools is installed on the new copy of the VA, log into it via a console session again and run "df" to double-check the presence of the "/dev/sdb" mount point. It may still not be present; if it is not, reboot the VA one more time.
4) Upon reboot, check via "df" on the console again. The second disk should now be visible and mounted as the scratch disk (see the sample output after these steps).
5) Re-seed any replications, or "Edit" each replication job that uses this target VA/host and re-run it. The replication should now complete.
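For reference, a healthy appliance shows the second disk mounted, roughly as sketched below. The mount point, device suffix, and sizes are assumptions for illustration; the key point is that a "/dev/sdb" line is present and that its Use% grows while the replication writes scratch data.

    # Illustrative healthy state (mount point and all figures assumed):
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       9.9G  3.1G  6.3G  33% /
    /dev/sdb1       200G   12G  188G   6% /scratch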