This is part 2 of the Nutanix XCP Deep-Dive, covering the Nutanix platform hardware architecture.
This will be a multi-part series, describing how to design, install, configure and troubleshoot an advanced Nutanix XCP solution from start to finish for vSphere, AHV and Hyper-V deployments:
- Nutanix XCP Deep-Dive – Part 1 – Overview
- Nutanix XCP Deep-Dive – Part 2 – Hardware Architecture
- Nutanix XCP Deep-Dive – Part 3 – Platform Installation
- Nutanix XCP Deep-Dive – Part 4 – Building a Nutanix SE Toolkit
- Nutanix XCP Deep-Dive – Part 5 – Installing ESXi Manually with Phoenix
- Nutanix XCP Deep-Dive – Part 6 – Installing ESXi with Foundation
- Nutanix XCP Deep-Dive – Part 7 – Installing AHV Manually
- Nutanix XCP Deep-Dive – Part 8 – Installing AHV with Foundation
- Nutanix XCP Deep-Dive – Part 9 – Installing Hyper-V Manually with Phoenix
- Nutanix XCP Deep-Dive – Part 10 – Installing Hyper-V with Foundation
- Nutanix XCP Deep-Dive – Part 11 – Benchmark Performance Testing
- Nutanix XCP Deep-Dive – Part 12 – ESXi Design Considerations
- Nutanix XCP Deep-Dive – Part 13 – AHV Design Considerations
- Nutanix XCP Deep-Dive – Part 14 – Hyper-V Design Considerations
- Nutanix XCP Deep-Dive – Part 15 – Data Center Facility Design Considerations
- Nutanix XCP Deep-Dive – Part 16 – The Risks
- Nutanix XCP Deep-Dive – Part 17 – CVM Autopathing with ESXi
- Nutanix XCP Deep-Dive – Part 18 – more to come as the series evolves (Cloud Connect to AWS and Azure, Prism Central, APIs, Metro, DR, etc.)
The complete breakdown of the Nutanix (SuperMicro OEM) platform is described in the Hardware Administration and Reference document (a valid support contract is required to access it). The current Nutanix Hardware Platform page describes the most recent models, and the Dell website covers the Nutanix Dell OEM models as well. I have taken the information from all of these sources and created a quick reference spreadsheet. As my NPX design evolves and I gather more information, I will add performance and usable capacity columns for each model. Updated with corrections and Power/Cooling columns. Note: the NX-8150 datasheet does not list 10GBase-T; however, the Hardware Administration and Reference document does, so it has been included.
Nutanix XCP Hardware Architecture
The Nutanix Xtreme Computing Platform has the following hardware architecture (a simple data-model sketch of this layout follows the list):
- A single 2 Rack Unit chassis that is referred to as a Block
- Each Block contains one to four Nutanix Nodes (depending upon model)
- Each Node Mainboard contains CPU, RAM, Network and Storage resources that are completely separate from the rest of the nodes in the Block
- The storage resources are a mix of an InnoLite 64GB SATA DOM USB flash device (for Hypervisor boot and the CVM ISO & config files), SSD drives and SATA drives (depending upon model)
- The SSD and SATA drives plug into the Mid-Plane from the front of the chassis for easy access and replacement
- Each Node Mainboard sled is connected to the dedicated SSD and SATA drives via a SCSI controller which plugs into the Mid-Plane
- Each Node Mainboard has a 10/100Mb IPMI port, 2 to 4 x 1GbE ports and 1 to 6 x 10GbE ports (SFP+ or 10GBase-T, depending upon model)
- Redundant power modules supply power to the Mid-Plane
- The Mid-Plane distributes power to the fans and Node Mainboards via the same printed circuit board that provides the storage connectivity between each node's SCSI controller and its drives
- The performance fans are located between the Mid-Plane and Node sleds; they draw cold air in through the front of the Block and push heated air out of the rear of the chassis, cooling each Node sled in the process
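To make the Block/Node relationships above easier to picture, here is a minimal data-model sketch. This is purely illustrative: the class names, counts and capacities are my own assumptions for an NX-1050-style block, not an official Nutanix specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Nic:
    kind: str      # e.g. "IPMI 10/100Mb", "1GbE", "10GbE SFP+" or "10GBase-T"
    count: int

@dataclass
class Node:
    """One self-contained Node sled: CPU, RAM, network and storage."""
    name: str
    cpu_sockets: int
    ram_gb: int
    nics: List[Nic]
    sata_dom_gb: int = 64                            # hypervisor boot + CVM ISO/config
    ssds_gb: List[int] = field(default_factory=list) # capacity per SSD
    hdds_gb: List[int] = field(default_factory=list) # capacity per SATA drive

@dataclass
class Block:
    """A 2RU chassis holding 1 to 4 nodes, a mid-plane and redundant PSUs."""
    model: str
    rack_units: int = 2
    psus: int = 2
    nodes: List[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        if len(self.nodes) >= 4:
            raise ValueError("A block holds at most four nodes")
        self.nodes.append(node)

# Illustrative four-node block; all per-node values are assumptions.
block = Block(model="NX-1050")
for i in range(4):
    block.add_node(Node(
        name=f"node-{chr(ord('A') + i)}",
        cpu_sockets=2,
        ram_gb=256,
        nics=[Nic("IPMI 10/100Mb", 1), Nic("1GbE", 2), Nic("10GbE SFP+", 2)],
        ssds_gb=[400],
        hdds_gb=[1000] * 4,
    ))

print(f"{block.model}: {len(block.nodes)} nodes in {block.rack_units}RU, {block.psus} PSUs")
```

The key point the model captures is that each node is fully self-contained (its own CPU, RAM, NICs and dedicated drives), while the chassis only shares the mid-plane, fans and power supplies.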
Hardware Design Analysis of the NX-1050 (SuperMicro OEM)
I have to say, the SuperMicro OEM design is very subtle and elegant; they had to do some creative thinking to fit that amount of hardware into such a small space. I can just imagine the arguments within engineering as the design evolved. Be aware that high-density infrastructure has a plethora of power and cooling design issues that every vendor in this space must address. Here are some of my observations that I would like to draw your attention to:
- Notice how the length of the block is designed to allow up to 4 nodes plus associated hardware to fit into a 2 Rack Unit space, and still fit into a standard server rack?
- Notice how the ears of the block are also the Power and UID controls?
- Notice how the mid-plane is also the power distribution and cooling control plane?
- See how the mid-plane is fabricated as a single printed circuit board, but is actually 4 separate connection domains for storage (one for each node) and one power distribution plane?
- See how the redundant power supplies are stacked in the center of the block with the shortest power cables to the mid-plane?
- Notice how all of the internal cabling is neatly loomed and tied down?
- See how the SCSI controller is vertically mounted, with the end of its printed circuit board plugging directly into the mid-plane? The CVM takes control of this SCSI controller's PCIe slot via Pass-Through mode (a quick way to check this from inside the CVM is sketched after this list).
- Notice how the InnoLite SATA DOM 64GB Flash Drive is vertically mounted near the SCSI controller and plugged into a USB socket on the Node mainboard?
- Notice how the SSD and SATA drives plug directly into the mid-plane?
- Notice the perforation of the SSD/SATA drive bezels to allow air to be sucked into the front of the chassis, first cooling the SSD and SATA drives before continuing to the rear of the chassis.
- Notice how the mid-plane has cut-outs to allow the air to continue to the rear of the chassis.
- Notice how the side of the block has perforations for additional air to be sucked in.
- See how the fans are strategically placed to allow the optimum airflow from the front to the rear.
- Notice how the power supplies have their own fans for cooling?
- Notice the PVC wave guide that optimises the cooling of the CPUs, which sit directly inline for the cleanest airflow? It creates a Venturi effect (think of an airplane wing), so that the greatest suction and airflow occurs over the CPUs, the hottest part of the chassis.
- Notice how the 12 DIMM banks are split equally on either side of the CPU sockets, as close as possible for the lowest memory access times?
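As a quick sanity check of the pass-through arrangement called out above, the sketch below lists the PCI storage controllers visible from inside the CVM. This is a minimal sketch, assuming a Linux-based CVM with `lspci` available on the path; the keyword filter is illustrative, not an official Nutanix check. If pass-through is working, the SAS/SCSI HBA appears as a PCI device owned directly by the CVM rather than by the hypervisor.

```python
import subprocess
from typing import List

def visible_storage_controllers() -> List[str]:
    """Return lspci output lines that look like SAS/SCSI storage controllers.

    Intended to be run inside the CVM; assumes lspci is installed and on PATH.
    """
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
    keywords = ("SAS", "SCSI", "Serial Attached", "LSI")  # illustrative match list
    return [line for line in out.stdout.splitlines()
            if any(k in line for k in keywords)]

if __name__ == "__main__":
    controllers = visible_storage_controllers()
    if controllers:
        print("Storage controller(s) visible to this VM:")
        for line in controllers:
            print(" ", line)
    else:
        print("No SAS/SCSI controller visible - check the pass-through configuration.")
```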
Other Resources