
NetVault Bare Metal Recovery 11.0 - User Guide for Plug-ins


Completing post-restore requirements for use with Plug-in Live Client for Linux

After a restore process completes on a target Linux® Client, the following points apply to that machine:
The “hosts” file for the target is modified: A restore modifies the target NetVault Bare Metal Recovery Client’s entry in its “…/etc/hosts” file; for example, after recovery, the host name no longer appears alongside the IP address and the alias for this client. The machine remains accessible through its IP address, but to make it accessible through its host name again, edit this file to restore the appropriate host-name entry, as shown below. For details on the “hosts” file format and how to edit it, see the relevant Linux documentation.
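For example, a re-added “hosts” entry might look like the following sketch, which uses a hypothetical IP address, host name, and alias; substitute the values recorded for your client:
192.168.1.25    bmrclient1.example.com    bmrclient1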
Perform a restore of the modified files backup (if applicable): With the recovery completed, you can now restore the files that were backed up with the Plug-in for FileSystem, as described in Recovering a DR image for use with Plug-in Live Client for Linux. This process restores these files to their state before the DR recovery.
Change to boot loader application: If the target Linux Client was running a version of the Linux boot loader utility other than GRUB, the boot loader utility is replaced with GRUB after a DR image is recovered.
GRUB entries: Storix® never assumes that you are reinstalling onto the same physical hardware or restoring to the same storage configuration. Therefore, it is never guaranteed that the previous GRUB entries are valid. The only GRUB entry guaranteed to be valid after restore is the entry created by Storix.
Volume labels and Volume UUIDs: For systems that use universally unique identifiers (UUIDs) for booting or mounting, review and edit the “/boot/grub/grub.conf” and “/etc/fstab” files so that they contain the correct device UUIDs. For more information, see Updating the UUID information manually.
Change in the start-end sector location for a DR restore: After a recovery of a DR image, the start-end sector for a restored partition may differ from its original backed-up location. The partition size remains the same, but no unallocated space is created after the Master Boot Record (MBR). Therefore, boot loaders that require this additional unallocated space, for example, GRUB, may not be usable. The Linux Loader (LILO) version of the boot loader utility does not require this unallocated space.
Change to swap partition: During a recovery, the NetVault Bare Metal Recovery for Linux module implicitly modifies the “/etc/fstab” file entry for the swap partition.
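For illustration, a typical swap entry in “/etc/fstab” looks like the following sketch (the device name is hypothetical; after recovery, verify that the entry matches the restored swap partition):
/dev/sda2    swap    swap    defaults    0 0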
File-system checking is enabled: A restore modifies the “Maximum mount count” and “Check interval” parameters, which enables file-system checking. For systems that should not run these checks based on the number of mounts or a specific time interval, use the following commands to disable the options manually:
# tune2fs -c -1 <deviceName>
# tune2fs -i 0 <deviceName>
The first command sets the maximum mount count to -1, which disables the mount-count-based check; the second sets the check interval to 0, which disables the time-based check.
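To confirm the change, you can inspect the current superblock values; the following is a sketch using the standard tune2fs -l listing:
# tune2fs -l <deviceName> | grep -Ei "maximum mount count|check interval"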

Updating the UUID information manually

The UUID of each file system is re-created when you use the Plug-in Live Client for Linux to restore data. If UUIDs are used in the “/boot/grub/grub.conf” and “/etc/fstab” files and those files are restored from a previous backup with the Plug-in for FileSystem, the system fails to boot because the UUID values in the files no longer match the values on the actual file systems. To work around this issue, you must update the files manually.
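To see the UUID that each restored file system now has, and the device name to use in its place, you can list them with blkid; the following is a sketch with hypothetical device names and placeholder values:
# blkid
/dev/sda1: UUID="<uuid-of-root>" TYPE="ext3"
/dev/sda2: UUID="<uuid-of-swap>" TYPE="swap"
In “grub.conf”, a kernel line containing “root=UUID=<uuid-of-root>” would then become “root=/dev/sda1”, and the matching “/etc/fstab” entries change the same way.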
2. Use the Plug-in for FileSystem and a previous backup to restore the “/boot/grub/grub.conf” and “/etc/fstab” files to a working directory.
7. Use a text editor to open the “grub.conf” file.
8. For the entry that contains “root=UUID=x-x-x-x-x”, match the “x-x-x-x-x” to the partition name, and then replace the UUID with the partition name.
10. Using the information noted in Step 3 and Step 4, change the UUID to the device partition name for all mount and swap partitions.
12. Use a text editor to open the “grub.conf” and “fstab” files, and verify that the UUIDs were replaced with their corresponding device names.
13. Make a backup copy of “/boot/grub/grub.conf” and “/etc/fstab”.
14. Copy the “grub.conf” and “fstab” files from the working directory to their original locations, and re-create the “menu.lst” symbolic link that points to “grub.conf”; see the example after this procedure.
If the system fails to boot, use a rescue disk to start the system in rescue mode, copy the backup files created in Step 13 back to their original locations, and restart the server. Then review the newly created “grub.conf” and “fstab” files again, make any necessary corrections, and repeat Step 13 through Step 15.
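A minimal sketch of Step 14, assuming the edited files are in a hypothetical working directory “/root/uuidfix” and a layout in which “menu.lst” is a symbolic link to “grub.conf”:
# cp /root/uuidfix/grub.conf /boot/grub/grub.conf
# cp /root/uuidfix/fstab /etc/fstab
# ln -sf grub.conf /boot/grub/menu.lst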

NetVault Bare Metal Recovery physical-to-virtual recovery

Installing SCSI and IDE device drivers on a physical machine

If you are using the Linux®-based Plug-in Offline Client to migrate a physical server running Windows® to a virtual environment, you must install the appropriate disk drivers in the OS before you back up the machine. Otherwise, the restored VM does not boot: the restored image contains the SCSI/IDE drivers for the source physical machine, not the drivers for the target VM’s SCSI/IDE controller, so Windows cannot find any disks and the boot fails with a blue-screen error.
The solution is to install an “.inf” file that instructs the Windows installer to load the appropriate drivers and make the correct registry entries every time Windows boots. The “.inf” file must be installed before the physical machine is backed up so that, after the restore, the correct driver is loaded and detects the VMware IDE/SCSI controller:
“vm_ide_2008.inf”: IDE device driver for Windows Server 2008
“vm_lsi_2008.inf”: SCSI device driver for Windows Server 2008/2008 R2
1. Copy the required device driver, for example, “vm_ide_2008.inf,” to the physical machine.
3. When the Hardware Installation warning message appears, click Continue Anyway.
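If you prefer to install the “.inf” file from a command prompt instead, Windows can run an .inf’s install section through setupapi. The following is a sketch, assuming the file was copied to the hypothetical path “C:\drivers” and that its install section is named DefaultInstall:
> rundll32.exe setupapi.dll,InstallHinfSection DefaultInstall 132 C:\drivers\vm_ide_2008.inf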