IBM PowerVM Design for a VCDX

If you are unfortunate enough to be tasked with the design and implementation of an IBM PowerVM virtualisation environment for System p and you normally work with vSphere, then this is the post for you.  This entry has been written as a high-level overview of PowerVM and provides context via vSphere comparisons.

Compared with vSphere, PowerVM is convoluted, finicky and difficult to implement.  My PowerVM implementation (4 hosts) took four months from start to finish, whereas an equivalent project with vSphere takes mere days.  In addition, PowerVM is approximately 8-20 times the cost of a comparable vSphere solution.  As with most IBM products, the management interface exposes every single option available, and you have to work out which permutation or combination is best for you.

However, if you have a data centre full of physical System p servers (Frame or Blade), then PowerVM will allow you to consolidate and save money by virtualising those workloads.  With Systems Director and Tivoli, you can achieve automation and advanced operations as well.  And once you get it working, it is very stable and reliable.

Here is a comparison of vSphere and PowerVM terms.

  vSphere term              PowerVM equivalent
  ESXi Host                 PowerVM Host (Frame or Blade)
  vCenter Server            Hardware Management Console (HMC)
  Virtual Machine           Virtual Server (LPAR)
  vMotion                   Live Partition Mobility (LPM)
  Standard vSwitch (VSS)    Virtual Ethernet Switch plus Shared Ethernet Adapter
  Datastore                 Shared Storage Pool
  pRDM                      LUN mapped directly to a Virtual Server
  vShield                   PowerSC

Compute & Management Design

System p servers run the POWER Hypervisor in firmware, with out-of-band management exposed through the service processor’s “Advanced System Management” (ASM) interface.  System p is capable of running “Linux on Power” in addition to AIX as a Guest Operating System.  The “Hardware Management Console” (HMC) provides advanced virtualisation infrastructure management services such as “Live Partition Mobility” (LPM).  It is possible to run a budget PowerVM configuration with the “Integrated Virtualization Manager” (IVM), a web-based service on the Virtual I/O Server (no HMC required).  IBM Systems Director can also be deployed as a “cloud services” overlay to link multiple HMCs together.  PowerVM has no equivalent of automated vSphere HA or DRS.
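
The HMC is also scriptable over SSH, which is the closest you will get to PowerCLI.  A minimal sketch, assuming a hypothetical HMC named hmc01 (with the default hscroot administrator) and a managed system named frame01:

    # List the managed systems (Frames) and their state
    ssh hscroot@hmc01 "lssyscfg -r sys -F name,state"

    # List the Virtual Servers (LPARs) on one managed system
    ssh hscroot@hmc01 "lssyscfg -r lpar -m frame01 -F name,lpar_env,state"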

[Diagram: PowerVM compute and management design]

High-level decisions to be made:

  • Single or Dual HMCs or IVM only?
  • Frame or Blade?
  • Systems Director/Tivoli overlay?
  • PowerVM Licence Edition? (Enterprise Edition is required for LPM)

Network Design

PowerVM has no equivalent of the “VDS”; the closest is a “VSS” built by hand within each VIO Server and via the HMC.  This is the most complicated part of the puzzle, so bear with me.  The Virtual I/O Server (VIOS) is responsible for network and storage I/O virtualisation.  It provides network connectivity to Virtual Servers using “Shared Ethernet Adapters” within the VIOS and “Virtual Ethernet Switches” within the POWER Hypervisor.  For uplink redundancy, Link Aggregation binds two or more physical interfaces into one logical device.
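
To make that concrete, here is a minimal sketch of the build as run from the padmin shell of a VIOS.  All device names are assumptions for illustration: ent0/ent1 are the physical uplinks, ent2 is the virtual trunk adapter created via the HMC, and the Link Aggregation device created in the first step is assumed to come up as ent4:

    # Bind two physical uplinks into one Link Aggregation device (802.3ad)
    mkvdev -lnagg ent0,ent1 -attr mode=8023ad

    # Bridge the LAG device to the virtual trunk adapter with a Shared Ethernet Adapter
    mkvdev -sea ent4 -vadapter ent2 -default ent2 -defaultid 1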

[Diagram: PowerVM network design]

High-level decisions to be made:

  • Single or Dual VIOS, or Dedicated Uplinks to each Virtual Server? (for Dual VIOS, see the failover sketch after this list)
  • Redundant Uplinks (LAG) to each VIOS?
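
With Dual VIOS, the standard pattern is SEA failover between the two VIO Servers rather than anything resembling VDS teaming.  A sketch with the same hypothetical device names as above, plus ent5 as a dedicated control-channel virtual adapter; the trunk priority of the virtual adapter (set on the HMC side) determines which VIOS is primary:

    # On each VIOS: build the SEA with failover enabled and a control channel
    mkvdev -sea ent4 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5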

Storage Design

PowerVM has a version of “Datastores”, the Shared Storage Pool; the alternative is the equivalent of “pRDMs”, where a LUN is mapped directly to a Virtual Server.
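
As a sketch of both options from the padmin shell of a VIOS, where every name (cluster sspcl1, pool sp1, the hdisk numbers, vhost0) is an assumption for illustration:

    # Create a Shared Storage Pool cluster (the “Datastore” equivalent)
    cluster -create -clustername sspcl1 -repopvs hdisk2 -spname sp1 -sppvs hdisk3 hdisk4 -hostname vios1

    # Carve a 50 GB logical unit from the pool and map it to a Virtual Server
    mkbdsp -clustername sspcl1 -sp sp1 50G -bd lpar1_boot -vadapter vhost0

    # The “pRDM” equivalent: map a whole LUN straight through to a Virtual Server
    mkvdev -vdev hdisk5 -vadapter vhost0 -dev lpar1_data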

[Diagram: PowerVM storage design]

High-level decisions to be made:

  • FC or iSCSI?
  • Which Multi-Pathing software? (see the path check sketch after this list)
  • Raw Devices or Shared Storage Pools?
  • If Raw Devices: Dedicated HBAs to Virtual Server?
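
Whatever multi-pathing software you choose, verify the paths from the VIOS before go-live.  A quick sketch of the checks (hdisk4 is an assumed device name):

    # List the MPIO paths per disk (run as padmin on the VIOS)
    lspath

    # Inspect the attributes of one disk, including reserve_policy and the path algorithm
    lsdev -dev hdisk4 -attr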

Security Design

The “vShield” equivalent for PowerVM is PowerSC; otherwise, endpoint security can be implemented with agents from your favourite vendor that supports AIX.

Backup/Recovery

There is no “VADP” for PowerVM, so Backup/Recovery is implemented with agents from your favourite vendor that supports AIX.  However, AIX does have a “NIM” (Network Installation Manager) server function that provides image-level (mksysb) backups of AIX.
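
For example, an image-level (mksysb) backup can be driven from the NIM master.  A sketch assuming a working NIM master, a hypothetical client lpar1 that is already defined to NIM, and an exported /export/mksysb filesystem:

    # On the NIM master: define an mksysb resource and create the image from the running client
    nim -o define -t mksysb -a server=master -a source=lpar1 -a mk_image=yes \
        -a location=/export/mksysb/lpar1.mksysb lpar1_mksysb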

Other IBM Products

  • PowerHA (formerly HACMP) for OS clustering of the AIX Guest OS – the equivalent of Microsoft Cluster Service (MSCS)
  • Tivoli and Systems Director for automation, monitoring, patching, accounting, recovery and security

“Gotchas”

  • Firmware/software versions of IBM components: make sure you get a correct, matching set of Frame/Blade firmware, network/storage adapter firmware, VIOS software and HMC software
  • If using POWER7 blades, the ASMI is enabled from the AMM (BladeCenter) or CMM (PureFlex)
  • Enable the ASM on each PowerVM “Host”; otherwise it will appear as a “Server” and not a “Host” within the HMC
  • Set up the HMC management platform, then connect each Host via the ASMI
  • Deploy the VIO Servers from the HMC; this is especially important if “Dual VIOS” is a requirement (you have to assign ownership of hardware to the VIOS LPARs)
  • For LPM to work, each Virtual Server’s “RMC” connection to the HMC must be active, and each “hdisk” requires its SCSI “reserve_policy” attribute set to “no_reserve” (see the sketch after this list)
  • Operational procedures: before going into Production, ensure that your SOPs for adding, migrating, deleting and upgrading virtual servers, networks, storage, hosts, adapters and VIOS are written and tested.  Otherwise you will be burned in the future.  Be warned: PowerVM is a complicated beast to manage and operate
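
The LPM prerequisites above can be checked and fixed from the command line.  A sketch with hypothetical names (frames frame01/frame02, partition lpar1, disk hdisk4):

    # On the HMC: verify RMC/DLPAR connectivity for each partition
    lspartition -dlpar

    # On each VIOS (as padmin): release SCSI reservations on every LPM candidate disk
    # (use -perm if the disk is busy; the change then applies after a reboot)
    chdev -dev hdisk4 -attr reserve_policy=no_reserve

    # On the HMC: validate first (-o v), then migrate (-o m)
    migrlpar -o v -m frame01 -t frame02 -p lpar1
    migrlpar -o m -m frame01 -t frame02 -p lpar1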

Resources

As usual, IBM have loads of good documentation, but you have to plough through a lot of Redbooks, whitepapers and forums to find what you want.

Summary

If you are implementing a large PowerVM farm, then you must have a large budget, especially when you add Systems Director, Tivoli, Enterprise licensing and HMC/Frame hardware.  Seriously consider using IBM professional services and a Resident Engineer (for the first year of operations) to get the job done; it is too complicated to execute and operate on your own.

PowerVM is a valid option for the following use cases:

  • Consolidating a massive farm of legacy physical AIX servers to PowerVM, where you do not have the time or budget for Application transformation to Linux on Intel with vSphere
  • A customer requirement specifies a particular application that is only available on AIX, and the deployment is too big for a small number of physical AIX servers
