Copyright © 2010 EMC Corporation. Do not Copy - All Rights Reserved.
Welcome to Celerra Unified QuickStart: NX4 Hardware Overview and Installation. EMC provides downloadable and printable versions of the student materials for your benefit, which can be accessed from the Supporting Materials tab.

Copyright © 2010 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC², EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution, Celerra, Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak, CodeLink, Connectrix, CoStandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare, HighRoad, InputAccel, InputAccel Express, Invista, ISIS, Max Retriever, Navisphere, NetWorker, nLayers, OpenScale, PixTools, Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security, SecurID, SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC Proven, EMC Snap, EMC Storage , Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation Technology, 
Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix, EDM, E-Lab, eInput, Enginuity, FarPoint, First, Fortress, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, MediaStor, MirrorView, Mozy, MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap, QuickScan, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, Mate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.
NX4 Hardware Overview and Installation – 1
This course is one in a series of courses designed to provide training that covers a range of topics: product overview, model-specific hardware information, installation, implementation, shared storage for Fibre Channel-connected hosts, and management and maintenance of the Celerra Unified Storage platforms. This course, NX4 Hardware Overview and Installation, provides an overview of the NX4 hardware.
The objectives for this course are shown here. Please take a moment to read them.
The objectives for this module are shown here. Please take a moment to read them.
The NX4 utilizes the CLARiiON AX4-5 architecture. It consists of a single- or dual-blade front end and an integrated disk array. The NX4 provides NAS and iSCSI connectivity through the Data Mover blades, with an available Fibre Channel option in the NX4FC for SAN host connectivity. There are always two storage processors (SPs) for the array. The NX4 offers significant ease-of-use enhancements, including a new installation wizard called Celerra Startup Assistant (CSA), which provides a fast and streamlined process to get a system up and running. Documentation is consolidated, simple, and readily available for the installation, maintenance, and troubleshooting tasks associated with an NX4 implementation. The software is factory installed, and the cables are clearly labeled and color-coded to make the storage available more quickly to users.
The NX4 can have one or two blades and a single Control Station. It supports one Disk Processor Enclosure (DPE) and up to four Disk Array Enclosures (DAEs), with four to 60 disks for a maximum of 45 TB of usable storage. The system uses SAS or SATA II drives; either type can be used for the Celerra Control Volumes. The drive types can be mixed within an enclosure but cannot be mixed within RAID Groups. Depending on the model, the NX4 supports NAS, iSCSI, or FC. The Celerra NX4FC (with the Fibre Channel option) allows external hosts to use available storage in a SAN configuration by connecting to the FC ports directly or through a SAN.
We will now identify the hardware components of a Celerra NX4. The illustration on the slide provides a basic view of the major components as they appear in the Celerra system. The DAE contains additional disk drives beyond the drives in the DPE; the NX4 supports one to four DAEs. The DPE contains the two storage processors and also holds the first twelve disk drives. The Control Station is positioned below the DPE. The Standby Power Supply (SPS), as the name suggests, provides sufficient power to protect cached data in the event of an unexpected power failure. The NX4 comes with one SPS; a second SPS is available as an option. The blade enclosure holds the blade servers (also known as Data Movers). The NX4 can be ordered with one or two blades.
The NX4 and NX4FC models use the same NX4-AUXF storage array. The storage processors’ Fibre Channel ports are illustrated on this slide. The auxiliary ports are used to connect the Celerra NX4 blades to the storage array. The Fibre Channel ports are used to connect Fibre Channel hosts to the storage array on NX4FC models. The Fibre Channel ports are present but not activated on the NX4 (non-FC enabled) model. The storage processors have a single back-end loop connection for connecting to additional DAEs.
The NX4 blade enclosure has dual power connections and holds two blades side-by-side. Each blade has two power supply/fan hot-pluggable modules. It also has internal management switches built into each blade to facilitate the private internal IP management network. A special enclosure mid-plane connects all the components together and provides hot swappable capabilities.
The NX4 blade enclosure, 1U in height, is shown on the slide. Note that it has four power supply/fan hot-pluggable modules, two blades, and two internal management switches.
This slide illustrates two views of an NX4 power and cooling module. Each blade has two power and cooling modules. If there are two blades in the enclosure, there will be four power and cooling modules.
This slide illustrates the NX4 blade. The Data Mover blade has dual Intel 2.8 gigahertz LV-Nocona processors (P4), 4 GB of memory, and an 800 megahertz front-side bus. The Agilent QX4 Fibre Channel ports connect to the storage processor ports. The RJ45 Ethernet ports that connect to the internal management switch are also shown. The blades have 1 GB of onboard USB Flash memory each, an integrated management switch, and two serial ports, COM1 and COM2, for DART console connection and debugging.
The control panel, with the power button and LEDs for status display, is visible from the front view of the Control Station. When you hold the power button down for a few seconds, the Control Station shuts down immediately. Note that there is no floppy drive on this Control Station.
The Control Station on the NX4 has a serial modem connection used for ConnectHome, ports for the external network (eth3), the internal network to management switch A (eth0), and the internal network to management switch B (eth2). Port eth1 is not used on the NX4.
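As a quick reference, the port assignments above can be captured in a simple lookup table. This is an illustrative sketch only; the port names and roles come from the narration, not from any EMC tool.

```python
# Illustrative map of the NX4 Control Station Ethernet ports, as
# described in the narration. Not an EMC utility; just a reference.
CONTROL_STATION_PORTS = {
    "eth0": "internal network to management switch A",
    "eth1": "not used on the NX4",
    "eth2": "internal network to management switch B",
    "eth3": "external (public) network",
}

def describe(port: str) -> str:
    """Return the role of a Control Station port, e.g. 'eth3'."""
    return CONTROL_STATION_PORTS.get(port, "unknown port")

print(describe("eth3"))   # external (public) network
```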
These are the key points covered in this module. Please take a moment to review them.
The objectives for this module are shown here. Please take a moment to read them.
The NX4 system is available in two models: the NX4 and the NX4FC. Both models are pre-installed with the DART operating system at the factory and can come either in an EMC rack or as field-installable units. The minimum DART code on the Celerra is 5.6.39.x, and the minimum FLARE code on the array is R23.x. Both models use AccessLogix and Storage Groups, making it easier to upgrade, and support both SAS and SATA II drives throughout the array. The SAS and SATA II drives can be in the same DAE or DPE, but not in the same RAID Group or storage pool. The NX4 model uses the NX4-AUXF captive array with two FC ports per SP enabled for the Data Movers. Celerra Startup Assistant (CSA) is used to configure and manage all storage on the back end. The FC model uses the NX4-AUXF array with two additional FC ports per SP enabled. These ports can be connected to a customer's SAN and be used to connect external hosts to any storage not used by the Celerra. Navisphere Express connects to the array through the IP addresses configured with the CSA, and is used to configure the storage and the host access.
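The minimum code levels above lend themselves to a simple check. The following is a hedged sketch, assuming dotted DART version strings such as "5.6.39.2"; the function name is illustrative and not part of any EMC tool.

```python
# Minimal sketch of a DART minimum-version check (assumption: dotted
# version strings like "5.6.39.2"). The 5.6.39.x minimum comes from
# the text; this is not an EMC-provided utility.
MIN_DART = (5, 6, 39)

def dart_meets_minimum(version: str) -> bool:
    """True if the first three fields of `version` are >= 5.6.39."""
    fields = tuple(int(p) for p in version.split(".") if p.isdigit())
    return fields[:3] >= MIN_DART

print(dart_meets_minimum("5.6.39.2"))   # True
print(dart_meets_minimum("5.6.38.1"))   # False
```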
The NX4 minimum configuration consumes 5U of rack space. At minimum, four drives are required in the DPE. One 3+1 RAID5 group houses the array vault, the Celerra Control Volumes, and approximately 0.5 TB of storage space. The DPE houses two AX4-5 storage processors, each with 2 GB of memory. The minimum configuration includes one Data Mover (blade) within the Data Mover enclosure, one Standby Power Supply, and one Control Station. The largest configuration footprint, 13U in size, includes four additional DAEs for a total of 60 disks ((1 DPE + 4 DAEs) x 12 disks each). This configuration represents the maximum capacity available, or about 45 TB of SATA II storage. A highly available configuration is represented here, in which dual NX4 blades are included. The dual blades can be implemented as a high-availability Active/Passive configuration, or as an Active/Active configuration with no standby Data Mover. An optional second Standby Power Supply is also installed in this configuration.
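The enclosure arithmetic above (60 disks from one DPE plus four DAEs) can be sketched as follows. The function name is illustrative; the drive counts come from the text.

```python
# Sketch of the NX4 drive-slot arithmetic from the text: the DPE and
# each DAE hold 12 drives, and up to four DAEs can be attached.
DISKS_PER_ENCLOSURE = 12

def max_disks(num_daes: int) -> int:
    """Total drive slots for one DPE plus `num_daes` DAEs (0-4)."""
    if not 0 <= num_daes <= 4:
        raise ValueError("the NX4 supports zero to four DAEs")
    return (1 + num_daes) * DISKS_PER_ENCLOSURE

print(max_disks(4))   # 60 drives in the maximum configuration
```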
With the introduction of Celerras using AX4 arrays, new Celerra disk types and AVM storage pools have been implemented. The first table on the slide lists two new disk types for CLARiiON standard disks and CLARiiON disks using MirrorView. The second table lists the AVM system-defined pool names and the RAID type associated with each one.
The NX4 can be ordered with a minimum configuration of only four drives. In this setup, the four drives are configured in a 3+1 RAID5 storage pool with no hot spare. If a fifth drive is installed later, it is configured as the hot spare. When six or more drives are ordered, the first six drives are configured as a 4+1 RAID5 storage pool with a hot spare. The storage pool will contain the six Celerra system LUNs; the remaining space is configured for Celerra LUNs.
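The drive-count rules above can be summarized as a small decision function. This is a sketch of the narration's rules only; the layout labels are illustrative, and the real bind templates are applied at the factory.

```python
# Sketch of the initial drive-layout rules described in the text.
# drive_count is the number of drives ordered; the returned labels
# are illustrative, not EMC template names.
def initial_layout(drive_count: int) -> dict:
    if drive_count < 4:
        raise ValueError("the NX4 requires at least four drives")
    if drive_count == 4:
        # Minimum configuration: 3+1 RAID 5 with no hot spare.
        return {"raid_group": "3+1 RAID5", "hot_spares": 0}
    if drive_count == 5:
        # A fifth drive becomes the hot spare.
        return {"raid_group": "3+1 RAID5", "hot_spares": 1}
    # Six or more drives: 4+1 RAID 5 plus a hot spare.
    return {"raid_group": "4+1 RAID5", "hot_spares": 1}

print(initial_layout(6))   # {'raid_group': '4+1 RAID5', 'hot_spares': 1}
```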
The NX4 uses a “best fit” LUN strategy when binding LUNs. Because an NX4 is likely to have a small number of storage pools of different sizes, due to varying drive types, this strategy tries to achieve the best performance by binding LUNs of the same size regardless of the spindle or storage pool size.
When a new LUN is to be created by the NX4, the system first identifies the largest LUN of the same disk type and storage pool configuration. If none is found, the system binds two equal-sized LUNs (or Virtual disks) from the storage pool. If a LUN is found on the system, the NX4 tries to create as many LUNs as it can of the same size and load balance across the SPs. If no matching LUNs can be made due to the small size of the storage pool, the system uses the default rules and creates two LUNs of equal size. If the NX4 can create any matching LUNs, it will do so. Then, it will use the remaining space to create a single LUN which it assigns to the SP with the least number of LUNs.
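The decision logic just described can be sketched as follows. This is a simplified model of the narration's rules, assuming sizes in GB; it is not EMC's implementation, and it omits disk-type matching and SP load balancing for brevity.

```python
# Simplified sketch of the "best fit" binding rules from the text.
# free_gb: free space in the storage pool; match_gb: size of the
# largest existing LUN of the same disk type and pool configuration
# (None if no such LUN exists).
def best_fit_bind(free_gb, match_gb=None):
    if match_gb is None or match_gb > free_gb:
        # Default rule: no matching LUN exists (or none would fit),
        # so bind two LUNs of equal size.
        return [free_gb / 2, free_gb / 2]
    # Bind as many LUNs of the matching size as will fit...
    count = int(free_gb // match_gb)
    luns = [match_gb] * count
    # ...then put any remaining space into one final LUN.
    remainder = free_gb - count * match_gb
    if remainder > 0:
        luns.append(remainder)
    return luns

print(best_fit_bind(5000, 1965))   # [1965, 1965, 1070]
```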
Shown on the slide is an example of how the NX4 Best Fit strategy works with a DPE fully loaded with twelve 1000 GB SATA drives using the template NX4_4+1R5_HS_5+1R5. Prior to the NX4, the 4+1 RAID Group would have had approximately 70 GB of its 4000 GB used by the CLARiiON and Celerra system, leaving 3930 GB of free space. This would be bound into two 1965 GB LUNs. The 5+1 RAID Group would have all of its 5000 GB free to be bound into two 2500 GB LUNs, which could not be striped with the smaller LUNs from the other RAID Group. With Best Fit, the system sees that the first two LUNs are 1965 GB in size and therefore binds two more LUNs of the same size, distributing them between SPA and SPB. This leaves 1070 GB free, which is bound into another LUN of that size and assigned to SPB, as SPB owns fewer LUNs.
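The slide's arithmetic can be checked directly (all figures in GB, taken from the narration):

```python
# Reproducing the example's numbers: a 4+1 pool with 3930 GB free and
# a 5+1 pool with 5000 GB free, bound with the Best Fit strategy.
pool_4p1_free = 3930
pool_5p1_free = 5000

lun_size = pool_4p1_free / 2                  # two 1965 GB LUNs
matching = int(pool_5p1_free // lun_size)     # matching LUNs that fit
remainder = pool_5p1_free - matching * lun_size

print(lun_size, matching, remainder)   # 1965.0 2 1070.0
```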
The table on the slide displays the complete list of RAID types that can be used in the NX4 as well as the default number of LUNs that will be bound if Best Fit is not used and the name of the AVM pool they will be used in.
The NS models based on the CX3 generation of hardware are the NS20, NS40, NS80, and NSX systems. In this generation of hardware, Data Movers are also called blades. For ease of installation and setup, the NS20, NS40, and NS80 are available as Unified Storage models that come with CLARiiON-based back-end storage. A Fibre Channel option is available for the Unified Storage systems to allow the back-end storage to be shared with other SAN-connected hosts. Note that the NS80 FC model comes with only two blades. Please take a moment to review the table on the slide.
The NX4, NS-120, and NS-480 models are based on the CX4 generation of hardware. In the CX3 and CX4 hardware generations, Data Movers are also called blades. For ease of installation and setup, the NX4, NS20, NS40, NS-120, NS-480, and NS-960 are available as Unified Storage models that come with CLARiiON-based back-end storage. A Fibre Channel option is available for the Unified Storage systems to allow the back-end storage to be shared with other SAN-connected hosts. Please take a moment to review the table on the slide.
You can add one or more DAEs, or add one blade without switching off the back-end array. Adding a second blade to a single-blade system requires modifying the Control Station network configuration. There is an upgrade for adding a second SPS. An NX4 can be upgraded to an NX4FC model, but there is no path for an NX4 or NX4FC to be upgraded to any other model. For example, it is not possible to upgrade an NX4 to an NS-120.
These are the key points covered in this module. Please take a moment to review them.
The objectives for this module are shown here. Please take a moment to read them.
When the Celerra unit arrives onsite, it is either already installed in an EMC rack or, if it is a field-installable unit, shipped in a disposable rack. Carefully inspect the box for damage before opening it.
Once the shipping boxes have been opened, locate the Document and Media Kit inventory lists. These documents guide you through the installation process. Note that all of the necessary system software has already been installed at the factory.
Carefully remove the components from the shipping container using the appropriate tool. Follow the instructions from the installation guide. Please note that for safety reasons, two people are required to install this equipment.
The diagram on the slide shows the removal of the rails from the disposable mini-rack.
The diagram on the slide shows the sequence to be followed for removing the rails and installing them into the site rack.
This diagram provides detailed images and the steps to be followed for installing the components, latches, and bezels.
Ports and cables are clearly labeled and color-coded to simplify the cabling process. For customers who will install the Celerra in a customer-provided rack, cabling diagrams and setup instructions are available on Powerlink. The cabling diagrams are clear and specific, making the process quick and easy.
Celerra NX4 systems ship from the factory with a comprehensive cabling guide. The guide clearly illustrates the cabling for all internal systems. The next few slides break down each of these internal systems in diagram form. If the Celerra is field-installed, you will need to perform the cabling and then verify that it is correct. If the Celerra is factory-installed, you only need to verify that the cabling is correct.
This illustration is taken from the NX4 Cabling Guide. This diagram shows the Fibre Channel cabling for a dual blade NX4.
The single blade NX4 has very similar Fibre Channel cabling to that of the dual blade. The only difference is the absence of connections for the second blade.
This illustration focuses on the Celerra NX4 cabling for the serial connections between the Celerra Control Station and its modem, and the serial connection between the array’s storage processors and the Standby Power Supplies.
Shown here is part of the NX4 Cabling Guide for the Celerra’s internal IP network. For clarity, only the Celerra Internal Network is illustrated. Be sure to follow each connection and compare it with the Cabling Guide. Notice that the Control Station has two internal network connections for the Celerra’s internal networks. In a dual blade system, the Control Station has a single network connection to each blade. The internal networks also connect to the array’s storage processors (SPs) through a connection from each blade to each of the array’s SPs.
The cabling for the Celerra internal private network on a single blade NX4 system is slightly different from the dual blade system. Since there is no second blade, the network cable for Storage Processor B connects directly to the Control Station. The other connections remain the same as a system with two Data Movers.
This illustration focuses on the Celerra NX4 cabling to the public network. The Ethernet connection from the Control Station ships with the system. The Ethernet connections for each blade must be supplied by the customer.
This slide illustrates the cabling for connecting DAEs to the storage processors. Each SP has a connection to the DAE. Additional DAEs connect in a daisy-chain fashion from the existing DAEs.
The power cabling for the NX4 system is illustrated here. The component power cables connect to the rack power distribution panels (PDPs). Note that when installing an NX4 system into an existing site rack, other equipment may be present and already powered on; do not turn off the rack circuit breakers. When the power cabling is complete, turn on the switch for SPS A and, if present, SPS B.
The NX4 system status LEDs are illustrated on this slide. When powered on, the LEDs illuminate to indicate the status of the system components.
These are the key points covered in this module. Please take a moment to review them.
These are the key points covered in this course. Please take a moment to review them. This concludes the training. Please proceed to the Celerra Unified QuickStart: NS-120 Hardware Overview and Installation course.