HP POD Generation 3 – Practicality On Demand

I certainly haven’t been shy in my love for HP’s Gen3 POD, and with good reason. The fact is that this is the first credible “shipping container” or containerized data center. As usual, I’ll be blunt: Sun’s “Black Box,” HP’s Gen1 and Gen2, and every other shipping-container-based “data center” weren’t. They were all completely useless for almost every business, and very strictly niche. Fixed configurations of boxes – often the lowest of the low-end – packed so densely you couldn’t touch the cabling, where maintenance often entailed removing a dozen servers to replace a single faulty link. That’s classified as an Epic Design Fail, period.

HP’s Wade “Podfather” Vinson put on a wonderful presentation for us back at VMworld earlier this year. So why’s it taken so long to get this blog done? Because, hello? Title: Practicality On Demand. In other words, this isn’t “oh look, it’s a cool box.” This is all about why and how the Gen3 POD is practical for your business – as in affordable and usable as a real data center. (Seriously, I cannot emphasize that data center part enough.)

So how’s this trucked-in box of neat technology practical for your business? What makes it a real data center in a box? Let’s start with the basics: the Gen3 POD comes in three flavors – a 20 foot unit with 10 x 50U racks, a 40 foot unit with 22 x 50U racks, and a 40 foot maximum density unit. Power (redundant, of course) is 145kW for the 20ft, 291kW for the 40ft, and 380kW for the 40ft at maximum density. That’s a lot of rackspace and a lot of power in very little space. But who cares about the physical space? It’s not about the physical space the POD takes up. Or the physical space the POD provides.
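
To put those power numbers in perspective, here’s a quick back-of-envelope density calculation – a sketch only, using the figures above; actual deliverable power per rack depends on your configuration and HP’s current spec sheets.

```python
# Rough per-rack power density for each POD flavor (illustrative only).
configs = {
    "20ft":             {"racks": 10, "kw": 145},
    "40ft":             {"racks": 22, "kw": 291},
    "40ft max density": {"racks": 22, "kw": 380},
}

for name, c in configs.items():
    print(f"{name}: {c['kw'] / c['racks']:.1f} kW per 50U rack")

# 20ft: 14.5 kW/rack, 40ft: 13.2 kW/rack, 40ft max density: 17.3 kW/rack --
# well above the few kW per rack a typical raised-floor room delivers.
```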

The HP POD gets trucked in. A semi pulls up and unloads 56,000lbs (20ft max) or 111,000lbs (40ft max). You connect power; HP has already installed the servers and given you network access for setup and acceptance testing. Connect it to the building network. Throw the switch and go. Now when’s the last time you did a technology refresh that went anything remotely like that? The answer, of course, is never. You built out a second data center, or you built a temporary data center, or you did the frustrating and infuriating equipment shuffle to make it all fit in the space you had. Technology refreshes suck – or rather, will now “have sucked in the past.”

More importantly, the Gen3 POD is built around standard 19″ racks which are, by default, empty. There are two requirements to put your hardware in them: no deeper than 36″, and front to rear airflow. That’s it. That means that better than 90% of systems, storage and networking equipment is compatible with the POD out of the box. No modifications, no long interoperability lists, no micro-detail work. It’s standard AC power, 220V at the racks via PDU, but if you ask very nicely (and pay quite a bit) you can even have -48VDC. Or you can have 208V or 120V by changing the PDUs. Yes, you can change the PDUs (but you’ll need to talk to HP to do that, for obvious reasons).

So given all these factors, why am I really excited about this thing, and why did it take so long? Because: I put an entire medium enterprise in a single 20 foot Gen3 POD. Let me say that again: entire IT operation, one 20 foot POD. And by entire, I mean everything. Multi-tiered storage including scale-out, a tape library, enterprise Unix systems, x86 systems, blades, 1600 VDI users, and core networking. Oh, and here’s the more important part: I didn’t use only HP equipment. That has been the biggest failing of containers for years: you get one vendor, you get what they pick, and that’s it. Period.

10 Racks of Equipment

So let’s talk about what you’re looking at here, besides a Visio of the racks. Going from left to right, we have Unix, x86 and VDI, network core, and storage. Unix takes up two racks, x86 and VDI take the next two, networking takes the next two, then storage takes the last four. Unfortunately, the rack stencils shown are only 47U – each POD rack actually has another 3U of space available. Our fictional enterprise runs PeopleSoft or SAP, a number of Unix applications for inventory or order control, IBM Tivoli Storage Manager for backups, VMware, a four node Exchange cluster using SAN storage, a fax system, thin client desktops for the main office, and connectivity to a number of branch offices. They need several hundred terabytes of storage, much of it with high performance requirements.

Unix Racks

Each Unix rack is identical, using HP ServiceGuard and IBM PowerHA + PowerVM for high availability services. From top to bottom, each rack has an HP Integrity rx8640, IBM POWER 560Q, IBM POWER 755 with 5802 expansion, and an IBM POWER 770. All systems are near maximum configuration, using partitioning and virtualization to maximize utilization. Not pictured are two IBM HMCs installed at the top of each rack in a redundant configuration.

The total hardware configuration provides: 64 Itanium2 cores (2 x 32) with 768GB (2 x 384GB) for Integrity applications. For AIX and Linux applications, 16 POWER6 cores (2 x 8) with 384GB (2 x 192); 64 POWER7 cores at 3.3GHz (2 x 32) with 256GB (2 x 128) and 96 POWER7 cores at 3.5GHz (2 x 48) with 1TB (2 x 512) are available. Because of PowerVM, the POWER systems are effectively a pool of resources, while Integrity SRPs treat the Integrity systems as a pool of resources. Another 32 Itanium2 cores and 768GB of memory are available from 4 BL870c i2 and 4 BL860c i2 blades installed in C7000 chassis in the x86 racks.
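
A quick tally of that compute pool, just to make the arithmetic explicit – the per-rack counts below are this example’s configuration, not any model’s maximum.

```python
# Per-rack compute inventory for one of the two identical Unix racks.
per_rack = {
    "Itanium2 cores (rx8640)":   32,
    "POWER6 cores (560Q)":        8,
    "POWER7 3.3GHz cores (755)": 32,
    "POWER7 3.5GHz cores (770)": 48,
    "Memory (GB)": 384 + 192 + 128 + 512,
}

# Two identical racks, so double everything.
for item, count in per_rack.items():
    print(f"{item}: {count * 2}")

# 64 Itanium2, 16 POWER6, 64 + 96 POWER7 cores and 2432GB of memory in two
# racks -- before counting the Integrity blades over in the x86 racks.
```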

Connectivity is a mix of 8Gb Fibre Channel, 10 Gigabit Ethernet and Gigabit Ethernet.

x86 Racks

Like the Unix racks, the x86 racks are identical. A variety of tools are used, including VMware vMotion, FT, Microsoft Cluster Services, and application clustering and load-balancing, to provide high availability. From top to bottom, each rack has two Dell PowerEdge R715s, two Dell PowerEdge R410s, an HP DL380 G6, three HP C7000 blade chassis, and an MDS600 providing storage for VDI based on HP’s reference design.

Detailed configuration doesn’t matter too much, because the x86 environment is heavily virtualized and very dynamic. However, the top C7000s contain two BL870c i2s and two BL460c G6s; the four BL460c G6s across both racks make up the four node, SAN-attached Exchange cluster. The middle C7000s contain two BL860c i2s and a mix of any 12 other blades to provide VMware and other services. What sort of blades? Any sort of blades. Those installed are primarily for illustration of a possible mixed blade environment.

The interesting part is the bottom C7000 and MDS600 pairing. This provides up to 1600 VDI thin client desktops, following an HP reference design found here (PDF) and previously discussed as being pretty damn awesome for taking the confusion out of VDI. Each rack can support 800 desktops standalone, combining for a total of 1600.

Connectivity is a mix of Fibre Channel, 10 Gigabit Ethernet and Gigabit Ethernet, with heavy leveraging of HP Virtual Connect.

Network Racks

Believe it or not, I’m still a mediocre network engineer – mediocre, but not incapable! From top to bottom, these racks are fitted with Juniper M10i edge routers, Juniper SA SSL VPN appliances, Cisco CSS11503 Content Services Switches with SSL, Brocade DCX-4S SAN directors, Juniper EX4200 Gigabit Ethernet switches with the 10 Gigabit uplink module, and Arista 7500 series 10 Gigabit core switches.

The M10i’s are modular and provide Internet connectivity as well as connectivity from remote branch offices and to and from DR sites. There’s not a whole lot to say past that. Similarly, the Juniper VPN appliances are, of course, VPN appliances. A CSS is a CSS is a CSS. Then it gets interesting. The Brocade DCX-4S directors provide 128 full bandwidth 8Gbit FC ports each. The EX4200s provide a total of 96 Gigabit Ethernet ports, but using stacking and Virtual Chassis you can double that in about five minutes of work. The Arista 7500s each provide up to 384 full line rate 10 Gigabit Ethernet ports and feature front to rear airflow.
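
Tallied up across the pair of network racks – assuming one of each device per rack, as described above – the aggregate port capacity looks roughly like this (an illustrative sketch, not a design rule):

```python
# Port capacity per network rack, one of each device per rack as drawn.
per_rack = {
    "8Gb FC (DCX-4S)": 128,          # full bandwidth ports per director
    "1GbE (EX4200)": 48,             # easily doubled via Virtual Chassis stacking
    "10GbE line rate (Arista 7500)": 384,
}

for port_type, count in per_rack.items():
    print(f"{port_type}: {count * 2} ports across both racks")

# Roughly 256 FC, 96 GbE and 768 line-rate 10GbE ports in two racks.
```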

This is, and I can’t emphasize this enough, just an example configuration of what you might do with the network racks. However, it is an absolute requirement that airflow be front to back. So no, you cannot use Cisco unless you buy some Catalyst 6506-NEBS used. In fact, most switches are right to left airflow, especially core class switches. The M10i’s are okay because of their low heat, but the remainder are all front to rear airflow. That’s part of the reason for the Arista 7500, besides its best in class power efficiency and amazing performance (and 384 ports of 10 Gigabit).

Storage Racks

Just to be a pest, storage virtualization is done by IBM SVC (SAN Volume Controller). This is partly so that there’s a single, consistent multipath driver for every single system despite the variety of storage vendors and tasks involved. Mostly, it’s to wring every last bit of performance out of very limited space. From left to right, we have an HP EVA6400 covering Tiers 2.5 and 3, Isilon 10000X-SSDs as a special backup tier, two IBM V7000s as Tier 2 (ignore the DS3400 stencils, the V7000 stencils aren’t available yet), a Hitachi AMS2500 as Tier 1, and a SpectraLogic T380 library. Yeah, this is a pretty complicated configuration.

The EVA6400 (2C16D) is configured with 64 x 600GB 10K and 128 x 1TB 7.2K drives for raw capacities of 38.4TB and 128TB respectively, at 3617.4 watts (using the EVA power calculator). The Isilon 10000X’s provide 120TB over 24 10 Gigabit Ethernet connections. The IBM Storwize V7000s are configured as two 8 node groups using 24 disk shelves, both fully configured with 450GB 10K drives for improved seek performance. Total raw capacity: 86.4TB. Our Hitachi AMS2500 is configured with 10 standard disk shelves using 300GB 15K drives for a raw capacity of 48TB. Finishing things out is our SpectraLogic T380, configured with full slots to give us 1.1PB of tape capacity. We’ll presume we’re using a good hierarchical storage management solution for unstructured data.

So the breakdown: 48TB Tier 1, 86.4TB Tier 2, 38.4TB Tier 2.5, 128TB Tier 3, 120TB Backup Tier. Total primary disk capacity: 300.8TB (420.8TB counting the backup tier). Raw IOPS potential? Best case, adding all disks and ignoring bottlenecks and controllers, over 100,000 IOPS excluding the backup tier. Remember that our VDI resides on the MDS600s, so no VDI load will be applied to these racks.
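
Here’s the capacity math laid out, using the per-array figures above (a sketch for illustration; these are raw numbers, not usable capacity after RAID and sparing):

```python
# Raw capacity tally for the storage racks in this example configuration.
tiers_tb = {
    "Tier 1   (AMS2500, 300GB 15K)":  48.0,
    "Tier 2   (2x V7000, 450GB 10K)": 86.4,
    "Tier 2.5 (EVA6400, 600GB 10K)":  38.4,
    "Tier 3   (EVA6400, 1TB 7.2K)":  128.0,
}
backup_tb = 120.0  # Isilon 10000X-SSD backup tier
tape_tb = 1100.0   # SpectraLogic T380, full slots

primary = sum(tiers_tb.values())
print(f"Primary disk: {primary:.1f} TB raw")                        # 300.8 TB
print(f"Disk including backup tier: {primary + backup_tb:.1f} TB")  # 420.8 TB
print(f"Tape: {tape_tb:.0f} TB")
```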

What’s It All Add Up To?

The Killer Hardware Offering of the year. It’s not even really a “hardware” offering per se, as much as a facilities offering. This box is near magical in what it can do for businesses considering building new data centers. No longer do you need to spend months with architects, contractors, and others trying to figure out where it can go in the building, where the power will come from, et cetera. The staff required to deploy an HP Generation 3 POD are an electrical contractor and a general contractor to verify the pad you plan to install it on. That’s it. There are no structural beam concerns, no load bearing walls, none of it. The most expensive part of building for a POD only occurs if you need to pour a new concrete pad for it. (Because it’s a heavy box.)

Not only that, but I just fit your entire infrastructure (nearly) into a 20 foot long box. One that you can build outside your offices with HP’s help, truck in, and turn on inside of a week. The turnaround on these PODs using all HP hardware is under 6 weeks – that’s time from order to on-site and running production. My last refresh project took 6 weeks just to evaluate the existing data center and come up with plans for resolving problems and improving cooling to handle twice as many servers – and the cutover schedule was 9 months. Nine months. HP probably won’t build all of the configuration I just showed you since, you know, I used other vendors’ products that they may not have a relationship with. But ask. HP Services can work some real magic.

Just the box will cost you around $600,000 list. Let me tell you, if I could do a whole data center for $600,000 all said and done, excluding the hardware? I would have jumped at it every time. I’ve done many data center buildouts, from NEBS4 to walk-in closets. This much data center at $600K is almost unbelievable. And again: I need only my general contractor to sign off on the pad, and my electrical contractor to connect the power. Oh, and by the way, you can install it outdoors. So if you’ve already got a POD, just drop your new data center off in the parking lot. Do your test and acceptance. Shut down for a day to swap positions. Done. Talk about a dream come true.

So yes, this is really, really cool stuff. And now you know how to get even more out of it than HP’s told you so far. Anyone want to hire me to work on theirs? ;)
