Comparing EMC, HDS and NetApp storage arrays – Part 1 (Block Features)

[Updated 11/03/14 – Added the Remote & Local Replication table, amended the QoS row and added a Back-end connectivity row to the Core Block table]

EMC, HDS and NetApp are leaders in storage technology, and the good news is that they are not carbon copies of each other. They have all made different architectural choices, which means they each have their own set of strengths and weaknesses.

It is never going to be the case that one of these vendors is always the best and one is always the weakest, so the challenge for the storage buyer is to find the platform that is the best match for their organisation's requirements.

This exercise is a real challenge: on paper all the products look fantastic, but as always the devil is in the detail. Just seeing that a product supports a particular feature is not enough; it all comes down to how well it has been implemented.

The general rule of thumb is that enterprise-class arrays (e.g. EMC VMAX and HDS VSP) implement a broad range of features to a very high standard. As you move to lower-cost mid-range platforms the feature set often remains more or less the same, but the individual features are not as sophisticated as the enterprise offerings.

Here at SNS we have spent many years designing, deploying and contrasting storage arrays from various vendors, which has put us in a good position to dig into the detail to see exactly how they compare.

We have decided to break this comparison into 3 sections – Block, NAS and Management & Integration. In this first post we will focus on comparing the Block features:

| Core Block | EMC VNX | HDS HUS 100 | HDS HUS VM | NetApp FAS |
|---|---|---|---|---|
| “Pure” Block Capable | Yes | Yes | Yes | No (Block on NAS) |
| Hardware Acceleration | No | ASIC | ASIC | No |
| Front-end connectivity | FC/iSCSI/FCoE | FC/iSCSI | FC | FC/iSCSI/FCoE |
| Back-end connectivity | SAS 6 Gb/s | SAS 6 Gb/s | SAS 6 Gb/s | SAS 6 Gb/s |
| Conventional Flash Drives | 100/200 GB SLC, 200/400 GB eMLC | 200/400 GB eMLC | 200/400 GB eMLC | 200/400/800/1,600 GB eMLC |
| Extreme Performance Flash | No | 1.6 TB Flash Module Drive (150 only) | 1.6/3.2 TB Flash Module Drive | No |
| RAID levels | 1, 0, 3, 5, 6 and 1/0 | 1, 0, 5, 6 and 1/0 | 1/0, 5 and 6 | 4, DP (Double Parity) |
| Controller LUN Balancing | Manual | Automated | Automated | No |
| Simultaneous LUN access from both controllers | Yes (Classic LUNs only) | Yes | Yes | No |
| Simultaneous LUN IO processing from both controllers | Yes (Classic LUNs only) | No | Yes | No |
| Quality of Service | Set upper limits (up to 32) and goals (up to 2) for I/O classes (groups of LUNs) for throughput (IOPS), bandwidth (MB/s) and response time (ms) | No | Set upper limits for throughput (IOPS) and bandwidth (MB/s) per array port/HBA, with thresholds that disable the limits when high-priority array port/HBA IO is low | Cluster Mode (limits only) |
| Data Encryption on Disk | Host Based (PowerPath) | Controller Based (150 only) | Controller Based | Self-Encrypting Drives (not SSDs) |
| Global hot-spares | Yes | Yes | Yes | No (per controller) |
| External array virtualisation | No | No | Up to 64 PB (diskless option available) | V-Series only |
| OS disks | Vault Drives (4 disks per system) | Not required | Not required | Root Volume (3 disks per controller) |
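
The RAID levels row has a direct impact on usable capacity. As a rough illustration (a minimal sketch, assuming a hypothetical 10-disk group of 900 GB drives and ignoring hot-spares, vault/root disks and drive right-sizing, so the figures are not from any vendor's sizing tools), the parity and mirroring overheads work out as follows:

```python
# Illustrative only: usable capacity of a 10-disk group of 900 GB drives
# under the RAID levels listed above. The disk count and drive size are
# hypothetical assumptions, and real layouts add hot-spares, vault/root
# disks and right-sizing overheads that this sketch ignores.

DISKS = 10
DRIVE_GB = 900

def usable_gb(raid_level: str, disks: int = DISKS, drive_gb: int = DRIVE_GB) -> int:
    """Rough usable capacity for a single RAID group."""
    if raid_level in ("raid4", "raid5"):      # one parity disk per group
        data_disks = disks - 1
    elif raid_level in ("raid6", "raid-dp"):  # two parity disks per group
        data_disks = disks - 2
    elif raid_level == "raid10":              # mirrored pairs
        data_disks = disks // 2
    else:
        raise ValueError(f"unknown RAID level: {raid_level}")
    return data_disks * drive_gb

for level in ("raid5", "raid6", "raid-dp", "raid10"):
    print(f"{level:8s}: {usable_gb(level):>6d} GB usable from {DISKS} x {DRIVE_GB} GB drives")
```

Single-parity schemes (RAID 4/5) give back the most space, dual-parity schemes (RAID 6/RAID-DP) trade one extra disk per group for protection against double drive failure, and mirroring (RAID 1/0) halves usable capacity.
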
| Cache & Pools | EMC VNX | HDS HUS 100 | HDS HUS VM | NetApp FAS |
|---|---|---|---|---|
| Shared Global Cache | No | No | Yes | No |
| Adaptive or Partitioned R/W Cache | Adaptive | Partitioned | Partitioned | No |
| SSD R/W Flash Cache | System-wide (SLC drives only) | No | No | Per Aggregate |
| SSD Flash Cache applies to | Entire Pools or Classic LUNs | N/A | N/A | Volumes in Aggregate |
| Controller Read-only Flash Cache | No | No | No | Applies to all Volumes (3200 & 6200 only) |
| Pools with Wide Striping | Yes (system-wide) | Yes (system-wide) | Yes (system-wide) | Yes (per controller) |
| Pool Auto Balancing | On expansion and 24-hour tiering schedule | On expansion, daily and manual | On expansion, daily and manual | Manual |
| Pool Shrinking (Remove RAID Groups) | No | Yes | Yes | No |
| Thin Provisioning page size | 8 KB | 32 MB | 42 MB | 4 KB |
| Thin Performance Overhead | Disk metadata lookups required (cannot hold all metadata in RAM) | Negligible (all metadata held in RAM) | Negligible (all metadata held in RAM) | Disk metadata lookups required (cannot hold all metadata in RAM) |
| Thin LUN Shrinking | Yes (Windows only) | Yes | Yes | Yes (with OS support) |
| Zero Space Reclaim | No | Manual | Manual | Manual |
| Thick Pool LUNs (Pre-allocated) | Yes | Yes | Yes | Yes |
| Thick Pool LUNs (Pre-zeroed) | Not required | Yes (free space can be pre-zeroed) | Yes (free space can be pre-zeroed) | Not required |
| High Performance non-pool LUNs | Yes | Yes | Yes | No |
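
The Thin Provisioning page size and Thin Performance Overhead rows are two sides of the same coin: the smaller the page, the more mapping entries the array must track, and beyond a certain point that metadata can no longer be held entirely in controller RAM. The back-of-the-envelope sketch below is ours, not a vendor calculation; the 100 TB of allocated capacity and 8 bytes per mapping entry are assumptions chosen purely to show the scale of the difference:

```python
# Illustrative only: how many mapping entries a thin-provisioning layer must
# track for a given amount of allocated capacity at each page size from the
# table above. The bytes-per-entry figure is a hypothetical assumption used
# to show scale, not a vendor-published number.

ALLOCATED_TB = 100
BYTES_PER_ENTRY = 8  # hypothetical size of one page-mapping record

page_sizes = {
    "NetApp FAS (4 KB)":   4 * 1024,
    "EMC VNX (8 KB)":      8 * 1024,
    "HDS HUS 100 (32 MB)": 32 * 1024 ** 2,
    "HDS HUS VM (42 MB)":  42 * 1024 ** 2,
}

allocated_bytes = ALLOCATED_TB * 1024 ** 4
for label, page_bytes in page_sizes.items():
    entries = allocated_bytes // page_bytes
    metadata_gb = entries * BYTES_PER_ENTRY / 1024 ** 3
    print(f"{label:22s}: {entries:>15,d} pages, ~{metadata_gb:,.2f} GB of mapping metadata")
```

At 4 KB or 8 KB pages the mapping table runs to hundreds of gigabytes for a large system, hence the disk metadata lookups, whereas at 32 MB or 42 MB it is small enough to hold entirely in RAM.
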
| Tiering & De-dupe | EMC VNX | HDS HUS 100 | HDS HUS VM | NetApp FAS |
|---|---|---|---|---|
| Automated Sub-LUN Tiering | 256 MB pages with a minimum movement frequency of once per day | 32 MB pages with a minimum movement frequency of every 30 minutes | 42 MB pages with a minimum movement frequency of every 30 minutes | No |
| Add Tiers | Yes | Yes | Yes | N/A |
| Remove Tiers | No | Yes | Yes | N/A |
| LUN Tier Placement Policies | Low, High, Auto, Start-high then Auto | All LUNs leaned to highest tiers | 6 built-in and 26 custom (specify LUN % per tier) | N/A |
| LUN New Page Placement Policies | Writes to tier specified in Tier Placement Policy | Can be allocated to any tier | Can be allocated to any tier | N/A |
| Movement Schedule Policies | Select start and end time | Select days of the week and 30-minute intervals each day | Select days of the week and 30-minute intervals each day | N/A |
| Adjustable Relocation Priority | High, Med, Low | High, Med, Low | Auto-adjusts to negate host impact | N/A |
| Performance Monitoring Policy Exclusion Periods | No | Any/all 30-minute periods in any defined day | Any/all 30-minute periods in any defined day | N/A |
| Relocation Policy Exclusion Periods | Entire days | Any/all 30-minute periods in any defined day | Any/all 30-minute periods in any defined day | N/A |
| Isolate LUN Movement | LUNs can be fixed/movement disabled | LUNs can be fixed/movement disabled | LUNs can be fixed/movement disabled | N/A |
| Post Process De-duplication | Thin LUNs only | No | No | Yes |
| Compression | Post-process (Thin LUNs only and cannot be combined with de-duplication) | No | No | In-line and post-process (can be combined with de-duplication) |
| Block Size | 8 KB | N/A | N/A | 4 KB |
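
Page size matters for sub-LUN tiering too: it sets both the number of pages the array has to monitor and the minimum amount of data promoted for each hot region. The sketch below is illustrative only; the 2 TB LUN and the 500 scattered 64 KB hot spots are our own assumptions, not a vendor workload profile:

```python
# Illustrative only: the tiering page size determines how many pages the
# array must monitor per LUN and the minimum data moved per "hot" region.
# The LUN size, hot-spot count and hot-spot size are hypothetical.

LUN_TB = 2
HOT_SPOTS = 500      # scattered hot regions, each smaller than any tiering page
HOT_SPOT_KB = 64

page_sizes_mb = {"EMC VNX": 256, "HDS HUS 100": 32, "HDS HUS VM": 42}

lun_mb = LUN_TB * 1024 * 1024
hot_data_gb = HOT_SPOTS * HOT_SPOT_KB / 1024 ** 2
for array, page_mb in page_sizes_mb.items():
    pages_per_lun = lun_mb // page_mb
    promoted_gb = HOT_SPOTS * page_mb / 1024  # worst case: one hot spot per page
    print(f"{array:12s}: {pages_per_lun:>6d} pages to monitor per {LUN_TB} TB LUN, "
          f"~{promoted_gb:,.1f} GB promoted for ~{hot_data_gb:.2f} GB of hot data")
```

Larger pages mean fewer objects to track, but each hot spot drags up to a full page of surrounding cold data into the flash tier with it, so the flash capacity is used less efficiently.
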
| Remote & Local Replication | EMC VNX | HDS HUS 100 | HDS HUS VM | NetApp FAS |
|---|---|---|---|---|
| Synchronous replication over FC and IP | Yes | Yes | Yes (requires FCIP bridges for IP) | Yes (7-mode only and data is sent twice) |
| VMware vSphere Metro Storage Cluster support | Yes (with external EMC VPLEX appliances) | No | Yes (High Availability Manager) | Yes (MetroCluster, 7-mode only) |
| Asynchronous replication over FC and IP | Yes | Yes | Yes (requires FCIP bridges for IP) | Yes |
| Asynchronous RPO of 1 second or less | No (MirrorView), Yes (RecoverPoint) | No | Yes | No |
| Asynchronous replication type | MirrorView (CoW snapshots), RecoverPoint (continuous with journals) | CoW snapshots | Continuous with cache and disk journals | RoW snapshots |
| Asynchronous replication adds read load to source LUNs | Yes (MirrorView), No (RecoverPoint) | Yes | No | Yes |
| Recover to “any point-in-time” | No (MirrorView), Yes (RecoverPoint) | No | No | No |
| Integrated or off-array replication | Integrated (MirrorView), Off-array (RecoverPoint) | Integrated | Integrated (target pulls the data from the primary system for asynchronous replication) | Integrated |
| WAN compression and de-duplication | Yes (asynchronous) | No | Compression only when used with FCIP bridges (Brocade 7800) | Compression (asynchronous, 7-mode only) |
| Snapshots (Read/Write) | CoW (SnapView), RoW (VNX Snapshots) | CoW | CoW or CaW | RoW |
| Automatic pool-based snapshots | No (SnapView), Yes (VNX Snapshots) | Yes | Separate pool required for Thin Image | Yes |
| Full Clones (Read/Write) | Yes | Yes | Yes | Yes |
| Quick Split Clone with background resync | No | Yes | Yes | No |

Notes:

1. Copy-on-Write (CoW) snapshots generate additional IO for each unique write
2. Redirect-on-Write (RoW) snapshots have an IO metadata overhead
3. Copy-after-Write (CaW) is the same as CoW, except the write completion status is returned to the host before the snapshot data is stored
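
To make the three notes above concrete, the sketch below counts the logical back-end IOs generated by a single host write to a block protected by an active snapshot under each technique. It is a simplified model of the general CoW/RoW/CaW behaviour described above, not any vendor's actual data path, and it ignores caching, coalescing and metadata persistence:

```python
# Illustrative only: logical back-end IOs for one host write to a block that
# is protected by an active snapshot, per snapshot technique in the notes.

def backend_ios(technique: str) -> dict:
    if technique == "CoW":
        # read the original block, copy it to the snapshot area, then overwrite in place
        return {"reads": 1, "writes": 2, "host_ack": "after the copy completes"}
    if technique == "CaW":
        # same copy as CoW, but the host is acknowledged before the copy is done
        return {"reads": 1, "writes": 2, "host_ack": "before the copy completes"}
    if technique == "RoW":
        # write the new data to a fresh location and update the block map
        return {"reads": 0, "writes": 1, "host_ack": "after the redirected write"}
    raise ValueError(f"unknown snapshot technique: {technique}")

for t in ("CoW", "CaW", "RoW"):
    print(t, backend_ios(t))
```
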

As software updates or new platforms are released we will revisit this post. As always, if you feel there are any inaccuracies please let us know and we will check them out and update as appropriate.

Related Posts

  1. What questions should you ask when you are looking to purchase new storage arrays?
  2. Comparing asynchronous remote replication technologies

About Mark Burgess

Mark Burgess has worked in IT since 1984, starting as a programmer on DEC VAX systems before moving into PC software development using Clipper and FoxPro. From there he moved into network administration using Novell NetWare, which kicked off his interest in storage. In 1999 he co-founded SNS, a consultancy firm initially focused on Novell technologies but over time increasingly on Virtualisation and Storage. Mark writes a popular blog and is a frequent contributor to Twitter and other popular Virtualisation and Storage blog sites.

9 thoughts on “Comparing EMC, HDS and NetApp storage arrays – Part 1 (Block Features)”

    • Hi,

      We are working on new versions of both the Block and NAS matrix which will include all the latest updates for NetApp Clustered Data ONTAP and E-Series – I hope it will be of interest.

      Regards
      Mark

  1. Hello,

    This is a comparison of features that means nothing, because the architectures of these arrays are so different. It is like comparing a PC with a tablet. It depends on what you need. If you need raw power, the HDS/EMC are more powerful. If you need more elasticity, the NetApp is better.

    • Hi,

      Absolutely, it is all about your requirements, but if you are looking for a block array with advanced tiering then clearly this plays to the strengths of the EMC/HDS architectures. On the other hand, if you want a unified block/NAS solution with integrated backup, this plays to the strengths of the NetApp architecture.

      What this Block feature post and our NAS feature post are trying to do is dig into the detail to explain the above statements, which I hope is a useful exercise.

      Regards
      Mark

      • I’m pretty sure the HNAS HDS combo is everything that FAS cDOT is and then some. I think NetApp is good for certain uses, especially where administrative know-how is limited.

        • Hi Joe,

          HNAS is pretty good, but like EMC’s current VNX NAS solution it is not managed in the same way as block and it does not include all the “snaps as backups” features that NetApp has.

          There are therefore a number of areas where NetApp has clear advantages.

          Regards
          Mark

    • I found this very useful. Each of these arrays has its own place. There are some specific features I was looking to compare and this helped me a lot.

      Thank you.
