Are VMware VSAN, VVOLs and EVO:RAIL Software-Defined Storage and does it really matter?

Most IT vendors like to jump on the “latest bandwagon” to showcase their solutions, then use key IT buzzwords to position their products as cutting-edge. One of the most prevalent buzzwords in today’s technology world is “Software-Defined”, but there is significant ambiguity about exactly what it means. So what is my definition of a Software-Defined solution? You purchase software and hardware independently, more often than not from different vendors, and, most importantly, you can change the hardware without incurring additional licence fees – examples include VMware vSphere, Veeam Backup and Replication, and CommVault Simpana. And what is my definition of a non-Software-Defined solution? You purchase a hardware appliance that combines software and hardware into a single solution (you cannot move the Continue reading

VMware VVOLs on NetApp FAS is now available to deploy

More information on VVOLs is being released every week, and it is only now that we are getting a chance to play with the full release code that we are able to dig into the detail of how it works. Let’s start off by exploring the benefits of VVOLs that are likely to make it a game-changing technology:

Granular Control of VMs – enables VM granular storage operations on individual virtual disks for the first time, including control of the following capabilities:
- Auto Grow
- Compression
- De-duplication
- Disk Types: SATA, FCAL, SAS, SSD
- Flash Accelerated
- High Availability
- Maximum Throughput: IOPS & MBs
- Replication
- Protocol: NFS, iSCSI, FC, FCoE

Enhanced Efficiency and Performance – off-load VM snapshots, clones and moves to the array; automatically optimise I/O paths for all Continue reading
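To make the shift concrete, here is a minimal sketch of what per-virtual-disk granularity means in practice: before VVOLs, every disk inherited the capabilities of its datastore; with VVOLs, each disk of the same VM can carry its own profile. The classes and attribute names below are hypothetical illustrations (drawn from the capability list above), not a real vSphere or NetApp API.

```python
# Illustrative model only - not a real vSphere/NetApp API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiskCapabilities:
    disk_type: str = "SAS"            # SATA, FCAL, SAS or SSD
    dedupe: bool = False
    compression: bool = False
    max_iops: Optional[int] = None    # throughput cap, if any
    replicated: bool = False

@dataclass
class VirtualDisk:
    name: str
    caps: DiskCapabilities

# Pre-VVOL world: every disk placed in a datastore shares one capability set.
datastore_caps = DiskCapabilities(disk_type="SATA", dedupe=True)

# VVOL world: two disks of the same VM can differ per disk.
vm_disks = [
    VirtualDisk("os.vmdk",   DiskCapabilities(disk_type="SSD", compression=True)),
    VirtualDisk("logs.vmdk", DiskCapabilities(disk_type="SATA", dedupe=True,
                                              max_iops=500)),
]

for d in vm_disks:
    print(d.name, d.caps.disk_type, "IOPS cap:", d.caps.max_iops)
```

The point of the sketch is the data model: capability attributes hang off the individual disk, so operations such as throughput caps or de-duplication can be applied to one VMDK without affecting its neighbours.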

A deeper look into NetApp’s support for VMware Virtual Volumes

Virtual Volumes is the flagship feature of vSphere 6.0, as it enables VM granular storage management, and NetApp FAS running Clustered Data ONTAP 8.3 is one of the first platforms to support the technology. Today, storage administrators have to explain to VM administrators which datastores to use for each class of VM, which is typically achieved using a combination of documentation and datastore naming conventions – however, consistency and compliance are difficult to achieve. Virtual Volumes changes this by enabling the storage administrator to provide vCenter with detailed information on the capabilities of each datastore. VM Storage Policies, whilst they existed in previous versions of vSphere, were not sophisticated enough to query the actual storage for its capabilities, the Continue reading
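The policy-matching idea described above can be sketched in a few lines: the array advertises a set of capabilities per storage container, and a VM Storage Policy selects only the containers that satisfy every requirement. The names below are illustrative assumptions, not the real SPBM or VASA interfaces.

```python
# Hedged sketch of capability-based placement - names are hypothetical.
# Each storage container advertises its capabilities to vCenter.
datastore_capabilities = {
    "gold_container":   {"dedupe", "ssd", "replicated"},
    "silver_container": {"dedupe", "sata"},
}

def compliant(policy_requirements, capabilities):
    """A container is compliant if it offers every required capability."""
    return policy_requirements <= capabilities

# A policy for database VMs demanding SSD and replication.
db_policy = {"ssd", "replicated"}
matches = [name for name, caps in datastore_capabilities.items()
           if compliant(db_policy, caps)]
print(matches)  # → ['gold_container']
```

This is exactly the consistency win over naming conventions: placement is decided by machine-checkable capability matching rather than by administrators reading documentation.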

What’s next for Storage in 2015?

The storage market has changed significantly over the last few years: whereas five years ago the industry was dominated by six players (EMC, NetApp, HDS, HP, IBM and Dell) supplying solutions using only HDDs, today solutions are designed using a hybrid of SSDs and HDDs, and there is far more choice, with a number of start-ups entering the market (e.g. Pure, Nimble, Tintri and Tegile). What is clear to me is that the days of storage arrays being mysterious, complex and therefore expensive to deploy and maintain are over – customers want simplicity, so that a dedicated storage expert is not required. Gartner wrote an interesting article on the subject which is available at Predicts 2015: Midmarket CIOs Must Shed IT Debt to Invest Continue reading

What’s new in NetApp Clustered Data ONTAP 8.3?

As we move into the world of Software-Defined Storage it “sticks out like a sore thumb” when an array vendor only makes new software releases available on their next-generation hardware. The problem is that even if you purchase at the very beginning of a product’s life cycle, at best you will get one round of feature enhancements; after that, all software development is focused on the next-generation product. This often even extends to support for new drive types, which again are only supported on the latest-generation hardware. The problem is very evident when it comes to support for VMware Virtual Volumes – any array vendor that will be releasing new hardware next year is unlikely Continue reading

Does it really matter if NetApp FAS is not “pure” block storage?

For many years traditional storage array vendors have claimed that their platforms are superior to NetApp FAS for block storage because they do not have the overhead of a pointer-based architecture – let’s explore this in more detail. What do we mean by “pure” block storage? It uses a Fixed Block Architecture, whereby data is always read from and written to a fixed location (i.e. each block has its own Logical Block Address) – in reality, most block storage arrays provide the option to use pages (ranging from 5 MB to 1 GB) where the LBA is fixed within the page, but the page can be moved to facilitate tiering. The advantages of this architecture are: No performance overhead – it is Continue reading
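The architectural contrast above can be illustrated with a toy model: in a Fixed Block Architecture an overwrite lands in place at its LBA, whereas a pointer-based architecture (WAFL being the NetApp example) writes to fresh space and updates a pointer table, trading a level of indirection for features like snapshots. This is purely a conceptual sketch under those assumptions, not how any real array is implemented.

```python
# Toy contrast between the two write paths - illustrative only.

# Fixed Block Architecture: the LBA maps directly to a physical
# location, so an overwrite lands in place with no extra lookup.
fixed_media = {}
def fixed_write(lba, data):
    fixed_media[lba] = data           # physical location == LBA

# Pointer-based architecture: every write goes to a new location and
# the pointer table is updated; reads follow one extra indirection.
pointer_table = {}
log_media = []
def pointer_write(lba, data):
    log_media.append(data)            # always write to fresh space
    pointer_table[lba] = len(log_media) - 1

def pointer_read(lba):
    return log_media[pointer_table[lba]]

pointer_write(7, "v1")
pointer_write(7, "v2")                # the overwrite goes to a new block...
print(pointer_read(7))                # → v2  ...but reads follow the pointer
print(len(log_media))                 # → 2  (old block still on media)
```

Note how the old block survives the overwrite in the pointer-based model – that retained data is what makes array-side snapshots cheap, and it is the indirection on the read path that the "pure" block vendors characterise as overhead.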

An introduction to VMware Virtual Volumes Software-Defined Storage technology

Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network. For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation, NSX. In this, the second of a three-part series, we will take a look at Virtual Volumes (VVOLs). Each new release of vSphere delivers significant storage-related enhancements – from Storage DRS to vStorage APIs for Array Integration, VMware has consistently innovated. But one huge feature has always been missing: the ability to natively store a VMDK as an object directly on a Continue reading