
DR Series Software 4.0.4 - Administration Guide


Removing a DR Series system from GlobalView

You can remove any DR Series system from GlobalView except the system to which you are currently logged on, which is the system that hosts the GlobalView.

Removing a DR Series system from GlobalView on one system does not remove it from any other GlobalView to which you may have added it on other systems.

To remove a DR Series system from GlobalView, complete the following steps:
  1. In the left navigation menu, click GlobalView.
  2. On the GlobalView page, in the appliance list, click the Delete icon next to the system you want to delete.

    NOTE: No Delete icon appears next to the system that contains the GlobalView; it is not available to be removed.

    A warning dialog box is displayed to confirm that you want to delete the system.
  3. Click OK to confirm.

Configuring and using Rapid NFS and Rapid CIFS

Rapid NFS and Rapid CIFS enable write operation acceleration on clients that use NFS and CIFS file system protocols. Similar to OST and RDS, these accelerators allow for better coordination and integration between DR Series system backup, restore, and optimized duplication operations with Data Management Applications (DMAs) such as CommVault, EMC Networker, and Tivoli Storage Manager. For the current list of supported DMAs, see the DR Series System Interoperability Guide.

Rapid NFS is a new client file system type that ensures that only unique data is written to the DR Series system. It uses user space components and file system in user space (FUSE) to accomplish this. Metadata operations such as file creates and permission changes go through the standard NFS protocol, whereas write operations go through Rapid NFS.

Rapid CIFS is a Windows-certified filter driver that also ensures that only unique data is written to the DR Series system. All chunking and hash computations are done at the client level.

NOTE: The supported DMAs listed in the DR Series System Interoperability Guide are the DMAs that have been tested and qualified with Rapid NFS and Rapid CIFS. You can use Rapid NFS and Rapid CIFS with other DMAs (such as Veritas products), but those products have not been tested and qualified with Rapid NFS or Rapid CIFS.

Rapid NFS and Rapid CIFS benefits

When Rapid NFS and Rapid CIFS are used with the DR Series system, they offer the following benefits:

  • Reduce network utilization and DMA backup time
    • Chunk data and perform hash computation on the client; transfer chunked hash files on the back-end
    • Reduce the amount of data that must be written across the wire
  • Improve performance
  • Support DMAs such as CommVault, EMC Networker, and Tivoli Storage Manager. For the current list of supported DMAs, see the DR Series System Interoperability Guide.
  • Are compatible with existing NFS and CIFS clients; you only need to install a plug-in (driver) on the client
    • Can accelerate I/O operations on any client, including a client that uses home-grown backup scripts
    • Can service multiple, concurrent media server backups

Best practices: Rapid NFS

This topic introduces some recommended best practices for using Rapid NFS operations with the DR Series system.

  • Containers must be of type NFS/CIFS

    RDA containers cannot use Rapid NFS. If you have existing NFS/CIFS containers, you do not need to create new containers to use Rapid NFS; you can install the plug-in (driver) on existing clients.

  • The Rapid NFS plug-in (driver) must be installed on client systems
     
    After the plug-in is installed, write operations will go through Rapid NFS while metadata operations such as file creates and permission changes will go through the standard NFS protocol. Rapid NFS can be disabled by uninstalling the plug-in.
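
    Because Rapid NFS is built on FUSE, a quick way to sanity-check a Linux client after installing the plug-in is to confirm that FUSE is available and that the plug-in registered its mount helper. This is a minimal sketch; the mount.rdnfs helper name and path are assumptions and may differ by plug-in version, so verify them against your plug-in package.

      # Confirm the FUSE kernel module is loaded (Rapid NFS uses FUSE)
      lsmod | grep fuse
      # Check for the rdnfs mount helper (name/path assumed; verify against your plug-in package)
      ls /sbin/mount.rdnfs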

  • Markers must be set on the client, not in the DR Series GUI

    If you are using a DMA that supports a marker, you should explicitly set it. Your containers should have a marker type of None until you set the marker using the mount command on the client (after installing the Rapid NFS plug-in).
    • For existing containers, re-set the marker by doing the following:

      For example, if you wanted to set the CommVault marker (cv):

      mount -t rdnfs 10.222.322.190:/containers/backup /mnt/backup -o marker=cv

      Mount command usage:

      rdnfs [nfs mount point] [roach mount point] -o marker=[marker]

      where:

      nfs mount point = Already mounted nfs mountpoint

      roach mount point = A new mount point

      marker = appassure, arcserve, auto, cv, dump, hdm, hpdp, nw, or tsm
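
      After mounting, you can confirm that the rdnfs layer is active by listing the mounted file systems on the client. This is a minimal check using the /mnt/backup mount point from the example above; the exact file system type string reported can vary with the plug-in version.

      # Show the file system type in use for the Rapid NFS mount point from the example above
      grep /mnt/backup /proc/mounts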

  • Your DR Series system must meet the minimum configuration

    Rapid NFS is available on a DR Series system and a client with a minimum of 4 CPU cores, at least 4 GHz of cumulative processing power, and 2 GB of memory. The client kernel must be version 2.6.14 or later. For a list of supported operating systems, see the DR Series System Interoperability Guide. If you update your operating system, you must also update your Rapid NFS plug-in. Updates are available on the Support site as well as within the GUI on the Clients page.
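
    As a quick sanity check of these client-side minimums on a Linux system, you can use standard commands such as the following (output formats vary by distribution):

      # CPU core count (minimum 4)
      nproc
      # Installed memory (minimum 2 GB)
      grep MemTotal /proc/meminfo
      # Kernel version (must be 2.6.14 or later)
      uname -r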

  • Rapid NFS is stateful

    If the DR Series system goes down, the connection will terminate. DMAs will restart from the last checkpoint.

  • Rapid NFS and passthrough mode

    If Rapid NFS mode fails for any reason, the DR Series system falls back to regular NFS mode automatically. For details, see Monitoring Performance.

  • Rapid NFS performance considerations

    When using Rapid NFS on your client, Quest recommends that you do not run other protocols to the DR Series system in parallel, as this will adversely affect your overall performance.

  • Rapid NFS acceleration constraints
    • Rapid NFS does not support:
      • Direct I/O memory
      • Mapped files
      • File path size greater than 4096 characters
      • File write locks across clients

    NOTE: If the client and server clocks are not synchronized, the file times seen on the client will not match typical NFS behavior, due to the nature of file system in user space (FUSE).
