You may have read my recent Next-Generation Data Center post, which summarised the Conceptual Model and Logical Design of my ideal Private Cloud solution. This post describes the vendors that I believe currently lead their respective markets and offer a compelling integration story with the highest chance of success. The backbone of the strategy relies upon VMware, who are, in my opinion, the current leaders in SDDC technology, professional services, training and support.
The vendor list is as follows:
- IT Service Management Layer – ServiceNow
- Infrastructure Management – Nutanix Prism, Cisco APIC, VCE Vision, Palo Alto Networks, F5 BIG-IQ, Trend Micro
- Cloud Management & Advanced Operations – VMware vCAC, VMware vCAD, VMware vC Ops, VMware vSphere Configuration Manager, VMware vCenter Server
- End User Computing – VMware Horizon Suite (View, Workspace, Mirage, AirWatch)
- Server Virtualisation – VMware vSphere
- Network Virtualisation – VMware NSX, F5 BIG-IQ LTM/GTM
- Continuous Availability – EMC VPLEX for Vblock, Nutanix Metro Availability
- Storage Virtualisation – EMC ViPR, VMware vVols, Nutanix NDFS
- Security Virtualisation – Trend Micro, Palo Alto Networks, F5 BIG-IQ APM
- Physical Compute and Storage – VCE Vblock, Nutanix Block
- Physical Network – Cisco ACI, Cisco Nexus 7000/5500/2000 (Core, OOB Management), F5 BIG-IQ
- Physical Security – Palo Alto Networks PA-7000, F5 BIG-IP 12000/VIPRION
- Backup / Recovery – EMC Avamar & Data Domain
The diagram below shows the summarised physical design:
Why not use VMware vCAC for the ITSM layer?
Who knows what will happen in the future? Maybe VMware will fall from grace. I want an SDDC that is modular and extensible, so that I can change the underlying components (e.g. the CMP, the virtualisation components, the physical components) without my customers being aware of it. This way I can design the future SDDC 2.0 with minimal impact to my business.
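To make that modularity concrete, here is a minimal sketch of the idea: the ITSM layer codes against an abstract interface rather than any one CMP, so the component behind it can be swapped without the customer-facing catalog changing. All class and function names here are hypothetical illustrations, not real product APIs; in practice the adapters would call the vendor's actual REST APIs.

```python
from abc import ABC, abstractmethod


class CloudManagementPlatform(ABC):
    """Hypothetical abstraction the ITSM layer codes against."""

    @abstractmethod
    def provision_vm(self, name: str, cpu: int, ram_gb: int) -> str:
        """Submit a provisioning request; return a request identifier."""


class VCACAdapter(CloudManagementPlatform):
    """Illustrative adapter for today's CMP (vCAC)."""

    def provision_vm(self, name: str, cpu: int, ram_gb: int) -> str:
        # In reality this would call the vCAC catalog API.
        return f"vcac-request:{name}"


class OpenStackAdapter(CloudManagementPlatform):
    """Illustrative adapter for a hypothetical future SDDC 2.0 CMP."""

    def provision_vm(self, name: str, cpu: int, ram_gb: int) -> str:
        # In reality this would call the replacement platform's API.
        return f"nova-request:{name}"


def fulfil_service_request(cmp: CloudManagementPlatform, name: str) -> str:
    # The ITSM workflow only ever sees the interface, so the CMP
    # behind it can change without customers being aware of it.
    return cmp.provision_vm(name, cpu=2, ram_gb=8)
```

Swapping the CMP then becomes a one-line change at the composition point (`fulfil_service_request(OpenStackAdapter(), "web01")`), which is exactly the decoupling that keeps the ITSM layer out of the CMP.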
Why Cisco ACI and VMware NSX?
Cisco ACI is a great choice for managing a single leaf-and-spine fabric from a central point: it removes the need to configure and manage individual switches, an operationally complex task that network engineers have been performing for decades. You could use Cisco ACI for network virtualisation as well, but I think that using the complete VMware SDDC stack reduces the number of 3rd-party integration points and provides the least risk and best chance for success. But what about the combined cost of ACI and NSX, I hear you ask? Yes, I would pay for that privilege.
Nutanix Block and VCE Vblock?
My preference is hyper-converged all the way, but until a four-socket hyper-converged solution is available, converged infrastructure is still required for Monster VM, Mission-Critical and Business-Critical services. This adds additional API integration points and another level of operational complexity to the solution, but at least it is with converged infrastructure, where the responsibility for design, testing, installation and support is pushed onto the vendor.
Why Hyper-Converged all the way?
I want the highest density of compute and storage to pack my data center to the rafters. I want to extract the maximum life out of my square footage, assuming the data center facility can handle the power and cooling demands. And I want my infrastructure to be as simple as possible.
Why not OpenStack?
It may be the lowest CAPEX, but it has the highest OPEX and highest risk, in my opinion. I would have to recruit expensive resources from the US, Australia and Europe, and I would have to ransack Rackspace for people. I would be completely responsible for the entire solution.
And if I were to take the OpenStack route, then I would also have to use Open Compute and OpenDaylight, because if reducing cost is my driving requirement, then do not be a lightweight – "in for a penny, in for a pound".
Why not Hybrid Cloud?
This architecture is for a Service Provider Private Cloud. On principle, hybrid cloud would never be used (unless it was my own public cloud).