NetApp FAS and E-Series – what’s new for 2016?

Most large IT infrastructure vendors are having a very busy time updating their products to better integrate with Public Cloud services and fully exploit the performance of flash. One such vendor is NetApp, so let’s take a quick look at what they have been up to when it comes to their core storage platforms:

FAS
New hardware platforms:
- FAS2600 (2U including drives) – 100,000 IOPS at <1ms latency; 1 TiB on-board Flash Cache per HA pair
- FAS8200 and All-Flash FAS A300 (3U) – 280,000 IOPS at <1ms latency; 2 TiB (max 4 TiB) on-board Flash Cache per HA pair
- FAS9000 and All-Flash FAS A700 (8U) – 500,000 IOPS at <1ms latency; 2 TiB (max 16 TiB) on-board Flash Cache per HA pair

With the following enhancements:
- Massive performance… Continue reading

Some interesting new ways to improve your IT infrastructure in 2016

It’s an exciting time in IT; there is so much innovation going on, often driven by start-ups, and many new ways of doing things compared to a few years ago. The good news is that the affordability of high-end technology is generally better than ever, bringing it within the reach of most organisations. So let’s take a look at what’s now available:

1. Availability – Improving your Disaster Recovery processes by enabling vMotion and HA between sites
I believe that current DR practices built around active-passive asynchronous replication and VMware Site Recovery Manager are just not good enough:
- Data is lost in the event of a fail-over (see the sketch after this excerpt)
- Local array failures cause downtime
- Failing over/back causes significant disruption to the organisation
- Testing is difficult, therefore… Continue reading
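To put a number on that first point: with active-passive asynchronous replication, anything written since the last completed replication cycle is lost when you fail over. Here is a minimal worst-case sketch in Python; the cycle length and write rate are purely hypothetical figures for illustration:

```python
def worst_case_loss_mb(cycle_minutes, write_rate_mb_per_s):
    """Upper bound on data lost at fail-over: one full async
    replication cycle's worth of unreplicated writes."""
    return cycle_minutes * 60 * write_rate_mb_per_s

# Hypothetical: replicate every 15 minutes, sustained 20 MB/s of writes
print(f"Up to {worst_case_loss_mb(15, 20):,.0f} MB lost on fail-over")
```

With synchronous replication across a stretched cluster the same calculation collapses to zero, which is exactly the appeal of vMotion and HA between sites.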

What’s the NetApp storage portfolio looking like for 2016?

Each vendor’s portfolio is ever evolving – driven by constant R&D and acquisitions. It is therefore useful to occasionally step back and see what a particular vendor has to offer and why you might choose a particular technology. Let’s take a quick look at NetApp; the acquisition of SolidFire has not yet been completed, but at this stage I am assuming it will be, so I am including it in this blog. In this first part I will be looking at SAN and Unified storage platforms:

SAN storage
E-Series and All-Flash E-Series (EF)
- Simple “Configure & Forget” provisioning with fast rebuilds – with no RAID groups or idle spares to manage
- Flexible modular design – including an ultra-dense 60-drive tray… Continue reading

NetApp FlashRay is dead – long live SolidFire, AFF and EF

Just prior to Christmas, NetApp announced that it will be buying SolidFire and its All-Flash Array (AFA); at the same time they also stated that they will not be bringing FlashRay to market. I think this is a good thing, as we have needed clarity with regard to the future of FlashRay for some time, and it has been evident that NetApp has been taking what they have learnt from the FlashRay project to enhance All-Flash FAS (AFF). FlashRay will therefore live on in AFF, which will continue to get further FlashRay-related features and performance enhancements over the next few years. So NetApp has lost one AFA and gained another, still leaving them with three platforms – surely that… Continue reading

Why NetApp E-Series SAN storage is really cool

On the face of it, E-Series has got to be one of the least cool storage arrays available, as its roots go back to the late nineties. Storage technology has massively moved on since then, with most modern platforms having a significant proportion of the following capabilities:
- Hyper-Converged – to simplify deployment, management and scaling, as compute and storage are combined into a single platform
- Software-Defined – to enable you to bring your own hardware
- All-Flash – to significantly improve application performance and the end-user experience
- Public Cloud integration – to leverage the cloud for backup, DR, and for test and development environments
- Stretched Cluster – to provide continuous availability in the event of local hardware and even complete site failures
- De-duplication… Continue reading

If your backup strategy is in need of an overhaul then Commvault and NetApp can help

Backup continues to be a challenge for most organisations: legacy systems are just too complex and expensive, and whilst hypervisor-centric solutions, such as Veeam, have done a great job of protecting virtualised environments, their weakness tends to be that their de-duplication engines do not scale. Purpose-built backup appliances (PBBAs), such as EMC Data Domain, ExaGrid EX, Quantum DXi or HP StoreOnce, provide much better de-duplication ratios and performance at scale (i.e. 150+ VMs). It is therefore often recommended to combine the likes of Veeam with a PBBA, but this significantly pushes up the cost of the solution. So what other options are available?

Option 1 – Software-Defined Data Protection with Enterprise-class De-dupe
By combining Commvault Simpana with NetApp E-Series you get… Continue reading
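Since de-duplication engines come up repeatedly here, a toy sketch may help show what one actually does: store each unique block once and keep only references for repeats. This is a minimal content-hash illustration in Python, not how Simpana, Veeam or any PBBA is actually implemented:

```python
import hashlib

def dedupe(blocks):
    """Store each unique block once; return (store, refs).

    store: SHA-256 digest -> block bytes (each unique block written once)
    refs:  ordered digests from which the original stream can be rebuilt
    """
    store, refs = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # only new content consumes capacity
            store[digest] = block
        refs.append(digest)
    return store, refs

# Example: four logical blocks, two unique -> 2:1 de-duplication ratio
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
store, refs = dedupe(blocks)
print(f"{len(refs)} logical blocks, {len(store)} stored -> "
      f"{len(refs) / len(store):.1f}:1 ratio")
```

The scaling problem the post alludes to is the index: as the VM count grows, the digest table outgrows memory, which is one reason hypervisor-centric engines can struggle where purpose-built appliances do not.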

Choosing between simple block and advanced storage platforms

When choosing storage you can either purchase a simple block platform or an advanced, feature-rich platform – let’s explore these options in more detail:

Simple block storage
Simple block storage platforms use large pages (measured in MBs) to store data, and therefore they deliver a lot of performance using minimal CPU and memory resources (see the sketch after this excerpt). To determine if this is right for you, consider the following list of requirements:
- Lowest cost per TB and per IO
- High capacity and performance scaling
- Fibre Channel or iSCSI connectivity
- Full support for vSphere and/or Hyper-V
- Simple “Configure and Forget” provisioning
- No need for:
  - Storage efficiency features, as they are provided by the applications (i.e. CommVault Simpana)
  - Significant replication, snapshotting or cloning

If your requirements match many of the above then NetApp E-Series will be… Continue reading
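A back-of-envelope illustration of why large pages keep CPU and memory usage low: the mapping metadata an array must hold shrinks in proportion to page size. The capacity and bytes-per-entry figures below are hypothetical, chosen only to show the scale difference:

```python
def map_entries(capacity_tb, granularity_bytes):
    """Number of mapping-table entries needed to address the capacity."""
    return capacity_tb * 2**40 // granularity_bytes

CAPACITY_TB = 100   # hypothetical array capacity
ENTRY_BYTES = 64    # hypothetical per-entry metadata overhead

for label, gran in [("4 KB block", 4 * 2**10), ("8 MB page", 8 * 2**20)]:
    n = map_entries(CAPACITY_TB, gran)
    print(f"{label}: {n:,} entries, about "
          f"{n * ENTRY_BYTES / 2**30:,.1f} GiB of metadata")
```

Roughly 1.6 TiB of metadata at 4 KB granularity versus under 1 GiB at 8 MB pages, for the same 100 TB: the coarser the mapping, the less the controller has to track.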

NetApp E-Series and CommVault Simpana – solving the data protection challenges of the 21st century

Data protection is a challenge for most organisations and many are very dissatisfied with their current solutions – common issues tend to be:
- Point solutions are used for VMware/Hyper-V, physical servers, laptops, remote offices and archiving, which increases complexity and costs
- Purpose-Built Backup Appliances (PBBAs) are used; whilst they work well, they further increase the costs
- Ever-increasing amounts of data are becoming difficult to back up in the amount of time available
- Tape backups can be unreliable, restores are slow and they are expensive to manage

To address the problem, NetApp and CommVault have created joint reference architectures that consist of NetApp E-Series storage and CommVault Simpana software – just add virtual or physical servers and away you go. So how do these joint reference architectures… Continue reading

Affordability means that 2015 will be the year that the All-Flash Array (AFA) goes mainstream

There is no doubt that SAS 10K and 15K drives are an “endangered species” – they have had a fantastic run, but we are reaching a tipping point whereby the cost per TB of flash drives will soon be on a par with them, and then it will be game over. Does this mean that the all-flash data centre is now viable for most organisations? Absolutely not; the future will consist of three tiers of storage, as it has done for the last 5 years:
- High-performance flash capable of enduring a high number of writes (eMLC)
- High-performance flash drives optimised for reads (cMLC)
- High-capacity HDDs optimised for data archiving and sequential workloads (NL-SAS)

They will be deployed in a… Continue reading
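One way to see the tipping point is to compare effective cost per usable TB once data-reduction features are factored in. The formula is simple; the prices and reduction ratio below are purely illustrative placeholders, not real quotes:

```python
def effective_cost_per_tb(raw_cost_per_tb, reduction_ratio):
    """Effective $/TB after de-duplication/compression."""
    return raw_cost_per_tb / reduction_ratio

# Illustrative numbers only – not actual street prices:
sas_10k = effective_cost_per_tb(400, 1.0)   # HDD workloads rarely reduce well
flash   = effective_cost_per_tb(1500, 4.0)  # a 4:1 ratio is often claimed for flash
print(f"10K SAS: ${sas_10k:.0f}/TB  vs  flash: ${flash:.0f}/TB")
```

Even before raw prices reach parity, data reduction can close most of the gap, which is what makes the 10K/15K tier the first casualty.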

Does it really matter if NetApp FAS is not “pure” block storage?

For many years traditional storage array vendors have claimed that their platforms are superior for block storage to NetApp FAS because they do not have the overhead of a Pointer-based Architecture – let’s explore this in more detail:

What do we mean by “pure” block storage?
It uses a Fixed Block Architecture, whereby data is always read from and written to a fixed location (i.e. each block has its own Logical Block Address). In reality, most block storage arrays provide the option to use pages (ranging from 5 MB to 1 GB) where the LBA is fixed within the page, but the page can be moved to facilitate tiering (a sketch of this scheme follows the excerpt). The advantages of this architecture are:
- No performance overhead – it is… Continue reading
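Here is a minimal sketch of that page scheme, under assumed sizes (8 MB pages and 512-byte blocks, both hypothetical): the LBA-to-offset arithmetic stays fixed, and tiering only ever touches the per-page map:

```python
PAGE_SIZE = 8 * 1024 * 1024   # hypothetical 8 MB page
BLOCK_SIZE = 512              # classic LBA sector size

# page_map: logical page number -> (tier, physical byte offset).
# The LBA is fixed *within* its page, but the whole page can be
# relocated (e.g. promoted to flash) by updating one map entry.
page_map = {0: ("flash", 0), 1: ("nl-sas", 0)}

def locate(lba):
    """Translate a fixed LBA to (tier, byte offset) via the page map."""
    byte_addr = lba * BLOCK_SIZE
    page_no, offset = divmod(byte_addr, PAGE_SIZE)
    tier, page_base = page_map[page_no]
    return tier, page_base + offset

print(locate(0))                         # block 0 lives on flash
page_map[1] = ("flash", PAGE_SIZE)       # tier page 1 up: move one pointer
print(locate(PAGE_SIZE // BLOCK_SIZE))   # first block of page 1, now on flash
```

Contrast this with a fully pointer-based architecture such as FAS, where every block can be placed independently – more flexibility, at the cost of more metadata, which is exactly the trade-off the post goes on to weigh.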