NPX – Sizing a Nutanix Cluster

If you are used to sizing traditional 3-tier infrastructure, you will have to alter your methodology to correctly size a Nutanix Cluster.  The major differences are selecting the correct appliance hardware and sub-options for the cluster (Vendor, Sockets, Cores, GHz/core, RAM, SSD, HDD, SED, GPU, 1GbE, 10GbE, Rack Units, Power, Cooling) and allowing for the correct sizing of the Controller VM (vCPU and RAM) per Node/Host.

The NPX Link-O-Rama is a great resource for all things NPX, including the applicable articles from my VCDX Deep-Dive series (more than 70 posts).

Nutanix Cluster sizing is complicated and it is easy to get confused and miscalculate.  I use a spreadsheet to capture the requirements for how much I NEED (logical design) and a matching spreadsheet for the physical cluster size of how much I HAVE (physical design).
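
To make that NEED-versus-HAVE comparison concrete, here is a minimal sketch in Python; the resource names and figures are hypothetical examples, not taken from any real design:

```python
# Hypothetical sketch: compare logical requirements (NEED) against
# the physical cluster (HAVE) for a few key resources.

need = {"vcpu": 400, "ram_gb": 3072, "usable_tb": 80}   # logical design (NEED)
have = {"vcpu": 480, "ram_gb": 4096, "usable_tb": 96}   # physical design (HAVE)

for resource, required in need.items():
    available = have[resource]
    status = "OK" if available >= required else "SHORTFALL"
    print(f"{resource}: need {required}, have {available} -> {status}")
```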

Nutanix have their public online sizing tool, which is sufficient to get started but not enough for NPX-level designs.  Nutanix employees and partners have an internal sizing tool that they can use when designing clusters for you.  They also offer design guarantees for correctly sizing EUC solutions.  If in doubt, reach out to your Nutanix account manager for assistance.

The sliding scale of appliance selection:

[Image: cluster_sizing]

Summarised appliance list (Nutanix, Dell & Lenovo); green highlights are points of interest:

[Image: nx_dell_lenovo_model_list]

Homogeneous (same host configuration) or Heterogeneous (different host configuration) appliances?

For a new cluster, start with a homogeneous configuration; this keeps the cluster easy for operators and administrators to understand and simplifies the balancing calculations for the hypervisor and Acropolis Distributed Storage Fabric.  If you are adding different nodes to an existing cluster, make sure you follow the Intermix rules (eg. NX-6035C Capacity Nodes).

Final Block/Node Count: This will be the final number of appliances required for your cluster based upon your calculations for the target cluster size.  Make sure you take Compute Availability, Availability Domains, Redundancy Factor and Replication Factor into account.
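
As a rough illustration of that calculation, the sketch below derives a node count from compute demand and adds an N+1 spare for Compute Availability; the vCPU demand, overcommit ratio and core count are illustrative assumptions:

```python
import math

# Hypothetical sketch: derive a final node count from compute demand,
# then add a spare node so the cluster survives a host failure (N+1).
# All figures are illustrative, not from a real design.

required_vcpus = 400    # total vCPUs the workloads NEED
vcpu_per_core = 4.0     # assumed vCPU:pCore overcommit ratio
cores_per_node = 24     # physical cores per node left after the CVM

nodes_for_compute = math.ceil(required_vcpus / (vcpu_per_core * cores_per_node))
final_node_count = nodes_for_compute + 1    # N+1 for Compute Availability

print(f"Nodes for compute: {nodes_for_compute}, with N+1: {final_node_count}")
```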

Vendor selection: Nutanix, Dell or Lenovo

Sockets, Cores, GHz/Core, RAM: Recap here: Host Design – Scale Up or Scale Out?  Just make sure you include the Controller VM (this is your distributed storage processor) requirements and remember that the Controller VM has a 50% reservation for each vCPU.  Also increase the CVM RAM to 24GB or 32GB depending upon the Dedupe options you intend to configure or if Business Critical Apps will be running in the cluster.
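
A minimal sketch of the per-node arithmetic, assuming an 8-vCPU CVM (the 50% vCPU reservation and the 24GB/32GB RAM options come from the text; the node size and CVM vCPU count are assumptions):

```python
# Hypothetical sketch: per-node resources left for VMs after the CVM.

node_cores = 2 * 12     # assumed 2 sockets x 12 cores
node_ram_gb = 512

cvm_vcpus = 8                            # assumed CVM vCPU count
cvm_core_reservation = cvm_vcpus * 0.5   # 50% reservation per vCPU
cvm_ram_gb = 32                          # sized up for Dedupe / Business Critical Apps

cores_for_vms = node_cores - cvm_core_reservation
ram_for_vms = node_ram_gb - cvm_ram_gb
print(f"Per node: {cores_for_vms:.0f} cores and {ram_for_vms}GB RAM left for VMs")
```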

Also be wary of exceeding the 24 DIMM slot limit for dual Haswell processors (ie. more than 512GB RAM per 2 Socket Node); there is a performance tax with the Haswell architecture.

SSD: Consider a minimum of 2 per node for redundancy during an SSD failure.  Make sure your Active Working Set per node resides in SSD.
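
A sanity check along these lines might look as follows; the drive sizes, working-set figure and the overhead factor are all assumptions for illustration:

```python
# Hypothetical sketch: does the Active Working Set fit the SSD tier?

ssd_count = 2             # minimum of 2 SSDs per node for redundancy
ssd_size_gb = 800
overhead_factor = 0.75    # assume ~25% of raw SSD lost to CVM/metadata overhead
effective_ssd_gb = ssd_count * ssd_size_gb * overhead_factor

active_working_set_gb = 1000
verdict = "OK" if active_working_set_gb <= effective_ssd_gb else "spills to HDD"
print(f"Working set {active_working_set_gb}GB vs SSD tier {effective_ssd_gb:.0f}GB -> {verdict}")
```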

HDD: Start with the raw capacity and divide by 2 or 3 depending upon your Replication Factor selection.  Compression, Deduplication and Erasure Coding can also be used to optimise your capacity (choose wisely – there are pros and cons to each setting).
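
For example, a rough usable-capacity estimate (the raw capacity and efficiency multiplier are assumed figures; real savings from Compression, Deduplication and Erasure Coding vary by workload):

```python
# Hypothetical sketch: usable capacity from raw HDD, dividing raw by the
# Replication Factor as described above.

raw_tb = 240               # total raw HDD capacity in the cluster
replication_factor = 2     # RF2 -> divide by 2, RF3 -> divide by 3
efficiency = 1.5           # assumed gain from Compression/Dedupe/EC

usable_tb = raw_tb / replication_factor * efficiency
print(f"Approx. usable capacity: {usable_tb:.1f}TB")
```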

Storage Performance: This varies depending upon the type of workload, CVM resources, network and model selection.  You can make some assumptions during the design phase, but you MUST validate these during testing before going live.

Self-Encrypting Disks (SED): If PCI or HIPAA compliance is a requirement, SED may be an option (SAS drives only, requires KMS).

GPU: If the hypervisor supports vGPU and you are building an EUC solution, this may benefit resource-intensive desktop applications (eg. graphic design, scientific modelling).

Network: How many 10GbE links do you need?  If the hypervisor supports QoS, then dual 10GbE may be enough for all traffic; otherwise split CVM, Backup and VM traffic onto separate uplinks.  Fibre Channel is no longer used; your Data Center Switch Fabric now carries storage traffic and should be sized accordingly.  Consider collapsed core switches for small installs or leaf-and-spine fabrics for larger installations.
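
As a rough illustration of the dual-10GbE question, here is a small sketch; the per-node traffic estimates are hypothetical assumptions:

```python
# Hypothetical sketch: do two 10GbE uplinks cover the combined traffic?

uplinks = 2
link_gbps = 10
capacity_gbps = uplinks * link_gbps

traffic_gbps = {"CVM/storage": 8, "Backup": 3, "VM": 5}   # assumed peaks per node
total = sum(traffic_gbps.values())

verdict = "dual 10GbE may suffice" if total <= capacity_gbps else "split traffic onto separate uplinks"
print(f"Estimated peak {total}Gbps vs {capacity_gbps}Gbps capacity -> {verdict}")
```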

Data Center Facility (Rack Units, Power, Cooling): Each block consumes approximately 1,600 Watts of power (maximum) and 4,000 BTU/hr of cooling.  Make sure you have sufficient resources in your data center for these high-density workloads.  Not a big deal for your first cluster, but it will become critical as your Nutanix infrastructure grows.
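
Using the per-block figures above, a quick facility budget can be sketched like this (the block count is illustrative):

```python
# Sketch using the per-block figures from the text: ~1,600W maximum power
# and ~4,000 BTU/hr of cooling per block.

blocks = 4
watts_per_block = 1600
btu_per_block = 4000

print(f"Power budget:   {blocks * watts_per_block:,} W")
print(f"Cooling budget: {blocks * btu_per_block:,} BTU/hr")
```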
