The Chinese saying, “May you live in interesting times”, certainly applies here. So many new storage technologies are emerging right now that, if you are an infrastructure decision maker, you will have a hard time selecting the right one. The trick is to formulate a strategy that leverages the infrastructure you already have, yet provides agility and scale for the future at a reduced cost.
This post covers what is available on the market right now and what use cases you should consider when developing your future storage strategy:
- Traditional Monolithic Shared Storage
- Server-Side Flash-Cache Acceleration
- Scale-Out NAS
- Converged Infrastructure
- Hyper-Converged Infrastructure
Traditional Monolithic Shared Storage
99% of the people reading this will be in this boat: a single SAN disk array from IBM, Hitachi or EMC, connected via Fibre Channel to your redundant SAN fabrics and from there to your vSphere server infrastructure via redundant HBAs. If you have more than one SAN array, one of them is most likely several years old, since each array was considered a “major” investment that should last for three or more years. Features such as storage federation, auto-tiering and global cache will be what sold you on your last purchase.
Server-Side Flash-Cache Acceleration
If you have a substantial investment in SAN storage, then server-side flash acceleration products (host- or guest-based) could make sense if you want to extend the life of your infrastructure. Be warned: flash acceleration should only be implemented after analysis and design; do not buy it and expect your “pain-points” to magically disappear.
NOTE: Consider guest OS read/write I/O optimisation software as an alternative, e.g. Condusiv V-locity.
Scale-Out Network Attached Storage
Anyone at VMworld/TechEd in the past year would have noticed the plethora of vendors offering cheap NAS boxes loaded with SAS and SSD. The idea is that you initially buy enough NAS “nodes” for your immediate requirements, then buy more in the future, scaling out as you grow. This is cheaper than doing the same with Fibre Channel, since you reuse the existing 10GbE switched network rather than investing in additional FC infrastructure.
I heard an unconfirmed rumour that VMware may be implementing NFS v4.0/4.1 in the next major release of vSphere, so NAS may finally have a future in vSphere. Do not forget, Microsoft are pro-NAS with SMB 3.0 for Hyper-V.
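If you do go down the NAS path with vSphere today, datastores are mounted over NFS v3 from the ESXi shell or via vCenter. A minimal sketch using esxcli on an ESXi 5.x host follows; the NAS host name, export path and datastore name are hypothetical placeholders for your own environment:

```shell
# Mount an NFS export as a vSphere datastore (ESXi 5.x esxcli).
# "nas01", "/vol/vmware_ds1" and "nfs_ds1" are made-up example values.
esxcli storage nfs add --host nas01 --share /vol/vmware_ds1 --volume-name nfs_ds1

# Verify the datastore is mounted and accessible.
esxcli storage nfs list
```

Repeat the mount on every host in the cluster (or script it) so that vMotion and HA can see the same datastore everywhere.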
Converged Infrastructure
Pre-installed, pre-cabled, pre-configured, pre-tested solutions that provide a fixed building block of Compute, Network and Storage infrastructure. You buy enough to meet your needs and then purchase another “Pod” or “Block” when required. This means that each “Pod” is a separate island of compute, network and storage infrastructure that has to be connected to the core of the Data Center.
Hyper-Converged Infrastructure
Physical rack-mounted servers with JBOD (“Just a Bunch Of Disks”), where software provides shared-storage redundancy/replication and auto-tiering. If you have a policy of blade servers only, it will have to change to rack-mounted servers in order to scale.
Some very interesting Use Cases come to mind for Hyper-converged computing:
- Management, DMZ and PCI/HIPAA compliant clusters ideally require an Air-Gap for compliance and security. We currently get by with logical separation and compensating controls, but every auditor appreciates physical separation.
- VDI – hyper-convergence and VDI naturally go hand-in-hand, with scale-out growth. Nutanix have a program where they guarantee the design and will provide additional nodes for free if the implemented solution does not meet the design requirements.
- SMB/Regional Office/Branch Office – traditionally, vSphere server infrastructure deployed at a regional office would be two tower servers and one small SAN node, or, for the budget conscious, DAS on a single physical server. Hyper-convergence gives you redundancy at a very low price. NOTE: an alternative here is HP Moonshot.
Currently I would not use VMware VSAN for Business Critical Applications, since it has only just been released as GA; I would wait until Update 1 or higher is available. However, Nutanix and SimpliVity are considered mature products and are suitable for BCA.
So what should you do?
- Investigate the TCO and ROI of the technologies listed and see whether they make sense for your organisation’s use cases
- If it makes sense, get the competing vendors to initiate a PoC for those use cases; validate and choose which is best for you
- Stop investing in monolithic shared storage – set a date when the ROI will be recouped and then plan to decommission it
- Purchase server-side flash acceleration, after proper analysis and design, to address the “pain-points” in your existing shared storage solution
- Start investing in Scale-Out Storage, Converged Computing or Hyper-Converged Computing nodes, particularly for the VDI, DMZ, Compliance and vSphere Management clusters
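The TCO and payback-date exercise in the recommendations above can be sketched as a back-of-the-envelope calculation. Every figure below is a hypothetical placeholder, not vendor pricing; plug in your own quotes and maintenance contracts:

```python
# Back-of-the-envelope TCO/payback comparison: keep the existing SAN
# (its purchase price is sunk, but maintenance keeps recurring) versus
# buying scale-out/hyper-converged nodes. All numbers are illustrative.

def tco(upfront, annual_opex, years):
    """Total cost of ownership over the given number of years."""
    return upfront + annual_opex * years

san_opex = 90_000      # hypothetical: yearly SAN maintenance + FC fabric support
hci_upfront = 180_000  # hypothetical: initial hyper-converged nodes
hci_opex = 30_000      # hypothetical: yearly support for those nodes

# Payback date: years until the opex savings cover the new hardware.
payback = hci_upfront / (san_opex - hci_opex)
print(f"Payback period: {payback:.1f} years")

print(f"5-year cost  keep SAN: ${tco(0, san_opex, 5):,}"
      f"  switch to HCI: ${tco(hci_upfront, hci_opex, 5):,}")
```

With these made-up numbers the switch pays for itself in three years, which is exactly the kind of date you would set for decommissioning the monolithic array.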
Some other interesting posts on this subject:
- Steven Poitras – The Nutanix Bible
- Long White Virtual Clouds Nutanix VDI Example Architecture
- Frank Denneman PernixData posts
- Cormac Hogan VMware VSAN series
- Long White Virtual Clouds Gotcha with vFRC and vCenter 5.5
- WikiBon VMware VSAN versus the simplicity of Hyper Convergence
- Yellow-Bricks response to some of the statements made in the WikiBon post
- Data Center Zombie Nutanix Platform Review
- DataCenterDan vSphere 5.5 Performance Best Practices – Disk Alignment