DR Series Software 4.0.0.3 - Administration Guide

NDMP

The Network Data Management Protocol (NDMP) is used to control data backup and recovery between primary and secondary storage in a network environment. For example, a NAS server (filer) can talk to a tape drive for backup purposes.

You can use the protocol with a centralized data management application (DMA) to back up data on file servers running on different platforms to tape drives or tape libraries located elsewhere within the network. The protocol separates the data path from the control path and minimizes demands on network resources. With NDMP, a network file server can communicate directly to a network-attached tape drive or virtual tape library (VTL) for backup or recovery.

The DR Series system VTL container type is designed to work seamlessly with the NDMP protocol.
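To make the control-path/data-path split concrete, the following is a minimal Python sketch of the first step a DMA takes when talking to a filer: opening an NDMP control connection. The host name is a hypothetical example; TCP port 10000 is the well-known NDMP control port. A real DMA would then exchange XDR-encoded NDMP request and reply messages over this connection, while the backup data itself travels over a separate data connection to the tape device or VTL.

import socket

# The IANA-registered NDMP control port; the host name is a
# hypothetical example of a NAS filer running an NDMP server.
NDMP_PORT = 10000
FILER_HOST = "filer.example.com"

# Open the control connection. On connect, an NDMP server sends an
# unsolicited NOTIFY_CONNECTION_STATUS message, so receiving any bytes
# here confirms the control path is reachable. Backup data would flow
# over a separate data connection, not over this socket.
with socket.create_connection((FILER_HOST, NDMP_PORT), timeout=5) as sock:
    banner = sock.recv(4096)
    print(f"NDMP control connection up; received {len(banner)} bytes")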

iSCSI

iSCSI, or Internet Small Computer System Interface, is an Internet Protocol (IP)-based storage networking standard for storage subsystems. It is a carrier protocol for SCSI: SCSI commands are sent over IP networks by using iSCSI. iSCSI also facilitates data transfers over intranets and enables storage management over long distances, and it can be used to transmit data over LANs or WANs.

In iSCSI, clients are called initiators and SCSI storage devices are called targets. The protocol allows an initiator to send SCSI commands (CDBs) to targets on remote servers. It is a storage area network (SAN) protocol that lets organizations consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike traditional Fibre Channel, which requires dedicated cabling, iSCSI can be run over long distances using existing network infrastructure.

iSCSI is a low-cost alternative to Fibre Channel, which requires dedicated infrastructure except in FCoE (Fibre Channel over Ethernet) deployments. Note that the performance of an iSCSI SAN can be degraded if it is not operated on a dedicated network or subnet.

The VTL container type is designed to work seamlessly with the iSCSI protocol. For details, see the topic, Creating Storage Containers.
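As a small illustration of the initiator/target naming involved, the following Python sketch parses an iSCSI Qualified Name (IQN), the identifier format that initiators and targets exchange during login. The target name used here is a hypothetical example, not an actual DR Series default.

import re

# IQN format: iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>]
IQN_RE = re.compile(r"^iqn\.(\d{4}-\d{2})\.([^:]+)(?::(.+))?$")

def parse_iqn(name: str) -> dict:
    """Split an IQN into its date, naming-authority, and unique-name parts."""
    match = IQN_RE.match(name)
    if not match:
        raise ValueError(f"not a valid IQN: {name!r}")
    return {
        "date": match.group(1),       # month the naming authority was valid
        "authority": match.group(2),  # reversed DNS name of the authority
        "unique": match.group(3),     # optional target/initiator suffix
    }

# Hypothetical target name of the kind a VTL might expose:
print(parse_iqn("iqn.2016-04.com.example:dr-series-vtl0"))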

Fibre Channel

Fibre Channel (FC) is a high-speed network technology primarily used to connect computer data storage to servers in storage area networks (SANs) in enterprise storage. A Fibre Channel network is known as a fabric because its switches operate in unison, effectively as one big switch. Fibre Channel mainly runs on optical fiber cables within and between data centers. Virtual tape libraries (VTLs) can ingest data over a Fibre Channel interface, which enables seamless integration with many existing backup infrastructures and processes.

The DR Series system VTL container type is designed to work seamlessly with the FC interface. With FC, the DR Series system can be attached directly to NAS filers or to Fibre Channel switches, and it supports SAN devices.

Understanding the DR Series system hardware and data operations

Data resides on the DR Series system hardware appliances (two rack unit (RU) appliances), which have the DR Series system software pre-installed.

The DR Series system hardware consists of a total of 14 drives. Two of these drives are 2.5-inch drives that are configured as a Redundant Array of Independent Disks (RAID) 1 on the RAID controller, and this pair is considered to be volume 1. On the DR4000 system, these drives are internal, while on the DR4100, DR6000, DR4300e core and standard, DR4300, and DR6300 systems, they are accessible from the rear of the appliance. The data that is being backed up is stored on the 11 virtual disks that reside on the DR Series system. The DR Series system also supports additional storage in the form of external expansion shelf enclosures (see the DR Series expansion shelf section in this topic). The hot-swappable data drives that are attached to the RAID controller are configured as:

- 11 drives for data storage (slots 1–11)
- One dedicated hot spare (slot 0)

The DR Series system supports RAID 6, which allows the appliance to continue servicing read and write requests to the RAID array virtual disks even in the event of up to two concurrent disk failures, protecting your mission-critical data. In this way, the system design supports double data-drive failure survivability.
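To put a number on that trade-off, the short Python sketch below computes the usable capacity of a RAID 6 group. The 4 TB drive size is a hypothetical example; the point is simply that RAID 6 reserves the equivalent of two drives for dual parity, which is what makes two concurrent drive failures survivable.

def raid6_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    """Usable capacity of a RAID 6 group: (n - 2) drives' worth of data."""
    if drive_count < 4:
        raise ValueError("RAID 6 requires at least 4 drives")
    return (drive_count - 2) * drive_size_tb

# 11 data drives (slots 1-11) with a hypothetical 4 TB capacity each:
print(raid6_usable_tb(11, 4.0))  # -> 36.0 TB raw, before deduplication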

If the system detects that one of the 11 virtual drives has failed, the dedicated hot spare (drive slot 0) becomes an active member of the RAID group, and data is automatically copied to it as it acts as the replacement for the failed drive. The dedicated hot spare remains inactive until it is called upon to replace a failed drive; this scenario is usually encountered when a faulty data drive is replaced. The hot spare can act as a replacement for both the internal mirrored drives and the RAID 6 drive arrays.
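The takeover behavior can be pictured as a small state machine, sketched below in Python under the slot layout described above (slot 0 as the dedicated hot spare, slots 1 through 11 as active members); the failed slot chosen is an arbitrary example.

# Slot 0 is the dedicated hot spare (inactive); slots 1-11 are active
# RAID 6 members, mirroring the layout described above.
spares = {0}
active = set(range(1, 12))

def fail_drive(slot: int) -> None:
    """On a member failure, promote the hot spare to an active member."""
    active.discard(slot)
    if spares:
        replacement = spares.pop()
        active.add(replacement)  # data is then copied onto the new member
        print(f"slot {slot} failed; spare slot {replacement} is now active")
    else:
        print(f"slot {slot} failed; no spare left, array runs degraded")

fail_drive(7)  # arbitrary example failure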

Figure 1. DR Series System Drive Slot Locations

The figure shows the DR Series system drive slot locations: four columns of three drives, numbered top to bottom within each column.

  Top:     Drive 0    Drive 3    Drive 6    Drive 9
  Middle:  Drive 1    Drive 4    Drive 7    Drive 10
  Bottom:  Drive 2    Drive 5    Drive 8    Drive 11

DR Series expansion shelf

The DR Series system hardware appliance supports the installation and connection of Dell PowerVault MD1200 (for DR4000, DR4100, and DR6000 systems) and Dell PowerVault MD1400 (for DR4300e core and standard, DR4300, and DR6300 systems) data storage expansion shelf enclosures. Each expansion shelf contains 12 physical disks in an enclosure and provides additional data storage capacity for the base DR Series system. The supported expansion shelves can be added in a variety of capacities based on your DR Series system version; for details, see the DR Series System Interoperability Guide.

The physical disks in each expansion shelf must be Dell-certified Serial Attached SCSI (SAS) drives. The physical drives in the expansion shelf use slots 1–11, configured as RAID 6, with slot 0 being a global hot spare (GHS). When configured, the first expansion shelf is identified as Enclosure 1 (if two enclosures are added, they are identified as Enclosure 1 and Enclosure 2). Adding an expansion shelf to the DR Series system requires a license.
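Because each enclosure repeats the base system's data-drive layout (slot 0 as a global hot spare, slots 1–11 as a RAID 6 group), raw usable capacity grows linearly with the number of shelves. The Python sketch below illustrates the arithmetic; the 4 TB drive size is a hypothetical example, and actual supported capacities depend on your DR Series model (see the DR Series System Interoperability Guide).

# Each chassis or enclosure contributes one 11-drive RAID 6 group
# (slots 1-11); dual parity leaves the equivalent of 9 data drives.
# The drive size below is a hypothetical example.
DRIVE_SIZE_TB = 4.0
DATA_DRIVE_EQUIVALENTS = 11 - 2  # RAID 6 members minus two parity drives

def usable_tb(expansion_shelves: int) -> float:
    """Raw usable capacity for the base system plus N expansion shelves."""
    raid6_groups = 1 + expansion_shelves
    return raid6_groups * DATA_DRIVE_EQUIVALENTS * DRIVE_SIZE_TB

for shelves in (0, 1, 2):
    print(f"{shelves} expansion shelf(s): {usable_tb(shelves)} TB raw usable")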

Figure 2. DR Series System Expansion Shelf (MD1200) Drive Slot Locations

The figure shows the DR Series system expansion shelf (MD1200) drive slot locations: four columns of three drives, numbered top to bottom within each column.

  Top:     Drive 0    Drive 3    Drive 6    Drive 9
  Middle:  Drive 1    Drive 4    Drive 7    Drive 10
  Bottom:  Drive 2    Drive 5    Drive 8    Drive 11

Understanding the process for adding a DR Series expansion shelf

The process for adding an expansion shelf requires the following:

- Physically installing the expansion shelf and connecting (cabling) it to the DR Series system.
- Installing the license for the expansion shelf enclosure.
