Better than Ever – it’s the BabyDragon II!

So, we’ve had new hardware come out, and people have been asking me for a while: “Phil, when are you going to update the BabyDragon? The Xeon X3450 and Intel 3420 platform are getting pretty long in the tooth.” Well, one doesn’t get to be the best by deploying new technology without testing (even if it’s other people doing the testing!). You don’t roll out brand-new beta software in production, and that’s how I handle the BabyDragon. I curate a hardware list that has to meet two key requirements. One, it needs to be fully compatible with VMware vSphere. Two, it needs to be true server grade – not just in terms of features, but in terms of reliability.

But finally, the wait is over.

Without further ado, let’s get right down to it. I’m in the middle of starting a business that will change the world of software (and enterprise storage!) forever, so I’ve been beyond swamped. I’m pulling 16-hour days on average and running on little to no sleep. And I realized that we’d need a new BabyDragon to get the Linux/FreeBSD side of things done. It was a necessity. We’d also need a more powerful BabyDragon. It needed to handle virtualizing high-impact databases, compiling software regularly, and all sorts of related nastiness. These systems would get absolutely hammered. And we needed them on the cheap. So how do I do this?

As usual, sure, I could call HP or IBM or Dell and order a micro server. But we already knew they’d collapse under our workload. We also knew they’d cost a lot more since we needed all those goodies like IPMI, virtual media, and so on. So let’s talk key features of the BabyDragon II versus the original BabyDragon.

IPMI? Check.
Virtual Media in IPMI? Check.
Dedicated IPMI port? Check.
Onboard USB A port for internal flash drive? Absolutely.
Dual Gigabit Ethernet? Upgraded to Intel i82574L and i82579LM.
SATA? Upgraded to 2x 6Gbit (SATA-III) and 4x SATA-II via Intel C204.
Memory? Still at 32GB, the physical RAM limit for the free ESXi license.
Power efficiency? Peak draw down by 20W, typical draw estimated down by as much as 30W. (Yes, that puts it in the sub-100W envelope!)
Internal disk? Upgraded to dual Solid State Drives. Because let’s be honest: you’re NOT going faster than solid state.

Sounds downright sexy when I put it that way, doesn’t it? There is no component in the system which has NOT been improved in some way – be it efficiency, clock, or, well, take your pick. When I said “nothing will beat BabyDragon II”? You’re darn right I meant every word of it, and the proof’s in the pudding. Some major changes have been made in what you need to do with the BabyDragon II versus the BabyDragon, though.

First and foremost, all the BIOS adjustments? Defaults. Leave ’em at defaults, unless VT-d/VT-x are disabled – those should be turned on for 4.1U1 and vSphere 5. Do NOT adjust the Power Profile either. Everything is fine as-is this time around, with no tweaks necessary. (Hooray!) Make sure to set a static IP for the IPMI controller, either via a DHCP reservation or directly in the controller. I recommend setting it in the controller; it’s the same controller as the BabyDragon, and the latest version horks up on DHCP failures – not unexpected, but somewhat annoying and dangerous. Resetting it requires completely disconnecting power.
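If you boot a Linux live environment first, you can set that static address from the host side with ipmitool rather than poking through the BIOS. A sketch only – it assumes the BMC sits on LAN channel 1 (typical for Supermicro boards), and the addresses are placeholders for your own network:

```shell
# Sketch: set a static IP on the IPMI controller with ipmitool.
# Assumes the BMC is on LAN channel 1 (typical for Supermicro);
# replace the addresses with values appropriate for your network.
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.20
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1

# Verify the settings took:
ipmitool lan print 1
```

The same settings are reachable through the BIOS/BMC web interface if you’d rather not boot anything first.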

Another change: your case options and fan options are different, and there are some new restrictions with certain recommended power supplies. The reason is pretty simple: fanless PSUs require very careful consideration of airflow. You can’t just throw them in any old chassis, even at 50% draw, and expect them not to melt. So when I get into which cases can accept the fanless PSU option, you really do need to take this as the word of god from on high, unless you like damaging hardware. It really is that important.

Okay. Enough blathering.

BabyDragon II – The All-Important Specs.

  • $240
    Intel Xeon E3-1230 “Sandy Bridge” – 3.2GHz, 4 Cores, 8 Threads, 8MB
  • $200
    Supermicro X9SCM-F – Intel C204, Dual GigE, IPMI w/Virtual Media, 2x SATA-3, 4x SATA-2
  • $160
    4x Kingston ValueRAM 4GB DDR3-1333 ECC Unbuffered Unregistered
  • $400
    2x Crucial M4 (CT128M4SSD2CCA) 128GB SATA-3 Solid State Drive (about $180 direct on Amazon, BTW!)
  • $20-30
    8-16GB Flash Drive; be careful of clearance! Smaller is better. I like the Lexar Echo ZX best. Wide/thick USB flash drives cannot be used as they may block SATA ports.
  • $300
    Chassis and power supply to suit. Lots of options here. And it probably will be closer to $200.
  • TOTAL: $1,220.00-$1,330.00 USD


Here’s where we get into the fun and I develop a god complex. You have two key recommended options for the power supply.

Option one, fanless. If you go fanless, you have one option: the Seasonic X400 Fanless. Do not use the 460, do not use any other fanless – the rest are junk. The 460 will run improperly because of load imbalance – the 400 hits the 20/50% load points almost exactly. I like fanless myself, because it’s a true zero-noise solution. However, it comes with extremely strict restrictions on which chassis and fans can be used. When I say “NO SUBSTITUTIONS” here, I am not joking. You cannot just change chassis or fans. You will damage the power supply. Airflow must go in a specific prescribed direction and at a specific rate to ensure the PSU is cooled properly.

  • Lian-Li PC-V351 with any two fans rated for >40CFM in the front 2 positions. (Stock fans are a no-no.)
  • Lian-Li PC-V352 with any two fans rated for >40CFM in the front 2 positions. (Stock fans are a no-no.)
  • Lian-Li PC-V353 with any two fans rated for >25CFM in the front 4 positions. (Stock fans are a no-no.)
  • Lian-Li PC-V351 and V353 with Panasonic FBA12G12H1BX or Delta WFB1212H-R00 in front positions.
    • The FBA12G12 is a 120x38mm deep fan, and can interfere with cables. Install fan grills ALWAYS.
    • This is a HIGH PERFORMANCE configuration for systems with frequent load/temp spikes.
  • Lian-Li PC-U6B Special Edition with Arctic Cooling F12 PWM High Performance fans in all positions configured to exhaust.
    • Negative air pressure design; do not locate on carpet or foam. Must be on wood/concrete/etc.
  • Silverstone SUGO SG03 (all colors) with any fan rated for >35CFM in the front positions. (Stock fans are a no-no.)
  • Lian-Li PC-V351 and V353 with Delta FFB1212EH-PWM.
    • These are very loud fans. BIOS power management settings may need to be changed.
    • Only for use in systems with average loads of over 70%! 
    • Seriously guys. These are extremely powerful and potentially loud fans (56dBA+).
    • Never install grills on the exhaust side of FFB series fans. It actually disrupts airflow severely.

It’s very important to note that if you’re loading your BabyDragon II past 70%? You are still miles and miles within spec for the power supply. Your absolute peak draw is around 220W (around 230-240W at the wall.) The reason for such high airflow is twofold. Every chassis except the PC-U6B Special Edition must be positively pressurized (pressure inside the chassis higher than outside) to force cooling air through the fanless PSU. Only the PC-U6B is able to operate as a negative pressure design (pressure dropped so that convection naturally draws air through the chassis). For any other chassis, you need to ensure a high positive air pressure and airflow rate relative to its size. For any fanless configuration, the smaller the chassis, the better. (Good news, everyone! The indicated fanless PSU is modular, making that easier!)

For PSU option two, a quiet fan-cooled PSU, there are no chassis restrictions. To be truthful, I’m still in love with my Lian-Li PC-V351B. However, you can absolutely use any chassis you like, and I’d tend to recommend the V351’s successor, the Lian-Li PC-V353. It will set you back a pretty penny ($100-140.) If you want something that’s less dust-prone, the Fractal Design Define chassis is a good option here, as the front fans have a washable filter built into the door. There are also the Lian-Li PC-V352 and PC-V354.

In terms of recommended PSUs? I suggest reading up on JonnyGuru for who’s hot and who’s not. You need something with typical ripple under 70mV and a rating in the range of 330 to 400 watts. The most important aspect is the ripple – all Supermicro boards are ripple sensitive. Power supplies over 400W will take a severe efficiency hit at these loads – you want your peak draw at ~50% of rated output and your typical draw at 20-25%.
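That sizing rule is easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch – the draw figures here are illustrative assumptions, not measurements from my systems:

```python
# Back-of-the-envelope PSU sizing: aim for peak draw near 50% of the
# rating and typical draw near 20-25%. Draw figures are illustrative.

def rating_for_peak(peak_w, target_fraction=0.5):
    """PSU rating that puts peak_w at target_fraction of rated output."""
    return peak_w / target_fraction

peak_draw = 180.0     # assumed peak DC draw in watts
typical_draw = 85.0   # assumed typical DC draw in watts

rating = rating_for_peak(peak_draw)        # 360W -> shop in the 330-400W band
typical_pct = typical_draw / rating * 100  # ~23.6%, inside the 20-25% window

print(f"Target rating: {rating:.0f}W, typical load: {typical_pct:.1f}%")
```

The same arithmetic shows why an oversized unit hurts: a 600W PSU with the same typical draw idles around 14% load, below the sweet spot of most efficiency curves.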

Speaking of the Lian-Li options, let’s break it down for you. The V352 is the updated V351. It adds USB 3.0 and a card reader, flips the PSU location horizontally on the rear panel, and adds a third hard drive bay. (Do not install three 10K RPM disks. You will melt things.) Airflow performance is more or less identical between the V351 and V352.
The V353 is similar to the V351, but relocates the disks entirely and steps up from 2 fans to 4. The entire front panel is perforated, giving it excellent airflow potential (thus the lower CFM requirements.) It also requires no adapter for the 2.5″ SSDs, and has washable fan filters. If you want to run quiet, this is the chassis to go with. But it will collect dust.
The V354 is an entirely new beast with some significant airflow potential, but slightly hobbled by its design when it comes to silence. The power supply is oriented horizontally across from the expansion slots – you cannot use a fanless PSU in this chassis. There is a strict orientation requirement for fanless PSUs for a reason; all the positive pressure in the world won’t help you when you’re heat-soaking anyway. That said, it offers the most disk space – seven 3.5″ disks possible. Like the V353, the 2.5″ bays are built in (so no adapters required) and are separate from the 3.5″ bays. It also adds a 140mm fan directly above the CPU. It’s your best option for large quantities of disks. (Which will be covered in “BabyDragon IID – MOAR DISK SPACE.” Spoiler alert: it has a lot more storage.)

No, you CANNOT substitute motherboards!

I cannot emphasize that enough. You cannot use a Tyan, an Asus, or anyone else. There is a reason: only the Intel C204 chipset is supported, and only the Supermicro combination is tested and verified for full functionality. You cannot use C202 chipset motherboards! The SATA controller on the C202 is different and entirely unsupported. The Seasonic X400 Fanless also only works properly with the Supermicro X9SCM-F; no other motherboard works properly with it.
Tyan’s boards are, bluntly, garbage. The IPMI doesn’t work with the machine turned off (completely defeating the purpose of it), shares an ethernet port (which causes problems), and power management just doesn’t work at all. It only works with a limited number of large EPS12V power supplies as well.
Asus? Don’t make me laugh. The PCI slots are a kludge (and they disable onboard video, which means you then have to buy a PCIe x16 video card – and it has to be a true x16 card.) The stock heatsink hits the bottom DIMM socket – way to fail mechanical engineering, guys. Need I go on about the extremely high failure rate? Didn’t think so.
That leaves one, and I DO MEAN ONE, motherboard capable of powering the BabyDragon II. Even if you’re not copying the rest, take away this message: for a MicroATX vSphere 5 system, the Supermicro X9SCM-F is your sole and only safe option.

Whoops, I forgot the Ethernet drivers?!

There’s ONE glitch here, and it’s unavoidable. The i82579LM Ethernet does not work on ESXi 4.1 out of the box. Sorry guys. This is entirely on VMware, because really, it’s not that complicated a driver to add. (Seriously guys. It’s adding PCI IDs to e1000e. C’mon.)

A gentleman by the name of Bill Fung comes to the rescue – click the link over there. It will take you to his blog, where he shows you how to add the i82579LM driver to 4.1 using the oem.tgz method. Even if you don’t want to go this route, the i82574L still works out of the box, so you won’t have an “oh gods, no network!!” moment. I highly recommend going the oem.tgz route for 4.1, and if you’re experienced, vm-help has the goodies. You only need the i82579LM driver for 4.1U1. Honestly though, I’d strongly recommend vSphere 5.
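If you want to confirm what the hypervisor actually sees before and after adding the driver, a couple of diagnostic commands from the ESXi shell do the trick. A sketch – device numbering will vary by board:

```shell
# Sketch: confirm which NICs ESXi has claimed, from the ESXi shell.
# On a stock 4.1 install only the i82574L will show up here; the
# i82579LM appears once the oem.tgz driver is in place.
esxcfg-nics -l            # lists vmnics with driver, link state, and speed

lspci | grep -i ethernet  # shows every Ethernet device, claimed or not
```

If the 82579LM shows up in lspci but not in esxcfg-nics, the hardware is fine and it’s purely the missing driver mapping.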

But, as mentioned, I’m buried in startup stuff – you can learn more about us over here, shameless plug – so that’s all for now. I’ll talk about the BabyDragon IID when I can find time, I promise. I just can’t promise that I’ll find time any time soon.

38 Responses to “Better than Ever – it’s the BabyDragon II!”

  1. Tony

    Hi Phil,
    I know you’re probably too busy to answer this, but…
    I really like this ESXi build and am preparing to make my own BabyDragon II.
    I have next to no experience building systems, but would like to give it a go.
    It will be running mostly Windows 2008 Server guests; I do a lot of development work and all my dev machines are currently in VMware Workstation 8.
    I plan to use the Fractal Design Define Mini tower case to start with to save some $, and will be using some existing SATA II drives, 4x 500GB Seagate drives.
    I have no idea about system building/power requirements and am not sure what PSU should be used for the system. I have priced all the components but am stuck on the PSU.
    Your help would be greatly appreciated. Thanks.

  2. Building ESXi 5 Whitebox Home Lab Servers « Wahl Network

    […] last year, who runs a tech website that contains the build list for a whitebox server called the Baby Dragon. One thing I learned about Phil is that he’s very passionate about server builds and really […]

  3. Scott

    I ordered the parts to build a Baby Dragon II. It’s been easier than I thought it would be. The only issue I am having at the moment is getting the second NIC to work. I tried the link and that’s not working. Looking at the motherboard manual, it appears that the second NIC is an 82579 PHY, which would explain why the link instructions don’t work.

    Any suggestions?

  4. Scott

    Tony, I put a Corsair Builder CX600 in mine. It was larger than suggested, but eventually I plan on filling all 6 drive bays in the tower I bought. I went with the Fractal Design Define Mini and I love it.

  5. Tony

    Hi Scott, thanks for the reply. I bit the bullet and took a guess: I put an Antec EarthWatts Platinum 430 PSU in. I managed to build it all myself, which is a first for me. It is all running perfectly. I have 4 SATA drives in at the moment, all running smoothly.
    I went with the Fractal Design case as well. Only running 2 VMs at the moment.
    My Windows Home Server system, serving media. I used DirectPath I/O to attach an external USB drive for media files and a USB TV adaptor so I can record TV on the Home Server system. It is working very well so far.

  6. Scott

    Did you try and get both NICs working and if so were you successful?

  7. Tony

    No, I did not try to get both NICs working. I did not have time to research it.

  8. Devin


    Thanks for the blog. Has helped me tremendously. Question:

    Do you have a CPU heatsink preference (size, cooling, etc) if I’m going with the Lian-Li PC-V353?



  9. Jerome

    I apologize in advance for what will sound like a really dumb question…

    Why do we need a flash drive in this box?



  10. Scott

    Jerome, the flash drive is where you will install ESXi. It has a really small footprint, and that leaves the physical hard drives open for local storage.

    I was never able to get the NIC on the MB working. I did order another NIC and got it working in the box, however.

  11. Jerome

    Thanks for your reply Scott.
    I’m thinking of building my ESXi server. Would it be possible for it to rely exclusively on an external NAS or should I add a disk (HDD or SSD) in the server to provide some local storage?



  12. Scott

    You don’t need any internal storage other than what you are going to install ESXi on, and that can be a flash drive.

  13. Kent

    I noticed that the spec in the top mentioned 32GB of RAM, while the parts list showed 16GB.

    I’m basing a new VMWare home lab box on your design, and I went with the new Kingston ValueRAM 8GB sticks. (KVR1333D3E9SK2/16G) – They ran me $250 a pair at

  14. Devin Akin

    Did you ever try to use the integrated RAID controllers with ESXi? If so, any issues with ESXi recognizing the RAID volume?



  15. Building ESXi 5 Whitebox Home Lab Servers | AD2UX Blog

    […] last year, who runs a tech website that contains the build list for a whitebox server called the Baby Dragon. One thing I learned about Phil is that he’s very passionate about server builds and really hates […]

  16. Home ESXi - Seite 2

    […] […]

  17. DP

    Very helpful! Any chance you could post the “BabyDragon IID – MOAR DISK SPACE.” info????

  18. Phillip

    Indeed, it’s in the works. Been blocked by several big things. First thing, some major changes to controllers necessitating a re-evaluation of the options. (There’s some pretty strict limits there to maintain compatibility.)
    Second, I’ve been working on writing that one up differently so that folks can get a view of how I actually engineer these systems. It’s not just throwing bits together in a chassis – weeks of thought and careful work go into it before there’s even the consideration of real world testing.
    Third, I’ve been busy with the startup, which hasn’t left much time for the personal blogging – certainly not enough to do the writeup of the IID and IIs justice.

    So, hopefully coming relatively soon! I’m working on it, promise. 🙂

  19. Tony S.

    Is the Supermicro board compatible with advanced RAID levels (i.e. RAID 50, RAID 60)?

  20. Lyle

    I went with the Kingston ValueRAM 8GB sticks (KVR1333D3E9SK2/16G), and from CentOS and other OSes I see all 32GB installed. But under VMware ESXi 4.1 and 5, I am only seeing 16GB. Anyone have an idea of what could be going on? I used memtest and it also tested all 32GB just fine.

  21. My New Lab Infrastructure « Virtualization Eh

    […]  I must admit much of the research/effort was previously done by Phillip Jaenke as part of his Baby Dragon architecture.  Another bonus is the IPMI port so I can run them without […]

  22. bw

    Once this is built, what are you using for disk storage? Are the Crucial M4s the disk storage where your VMs reside, and ESXi is on the flash drive, yes?

  23. Home labs - a scalable vSphere whitebox |

    […] The Baby Dragon II, (with a minor change via Chris Wahl) […]

  24. Building new whitebox servers for VMware home lab - Virtualization Tips

    […] baby dragon II build, there are a few revisions – rootWyrm or Chris Wahl About Brian Brian is a Technical Architect for a VMware partner and owner of this […]

  25. Ryan

    Phil – Hi, I have a question that I didn’t see answered above… what cooler or heatsink solution are you using on your Xeon?

    Just the stock that comes with it? How is that for sound?

    Or did you choose a separate HS/cooler solution?

  26. Phillip

    Whoops, I thought I had!
    Generally I recommend going with the stock Xeon heatsink as long as you’re keeping the CPU under an average 50% load. It’s pretty much dead silent at lower RPM and great for low/zero noise setups.
    For higher loads though, well, it depends entirely on the chassis and airflow. Most any compatible heatsink that will fit is fine – I don’t have any real favorite aftermarkets though. 🙂

  27. Stan


    How were you able to fit the Seasonic X400 into the V351B? Did you need to modify the case so the power connector would fit, or did you install it upside down?

  28. Phillip

    That’s interesting; can you explain where exactly you’re finding interference on the X400 with the V351? It should orient and mount correctly with no changes at all.

  29. Stan

    The interference was with the upper right corner of the plastic power connector. It does not fit properly with the case. What I did was shave it just a little bit so that it would fit. I have been up and running for about two weeks now. I cannot believe how quiet this is; the only noise is from the two 500GB drives I currently have for my datastore. I will be taking those out later once I get my FreeNAS system up and running.

  30. Phillip

    Yeah, I LITERALLY just got one in my hands today and I see the problem exactly. The problem is that the production X400 mounts upside down compared to a normal PSU! Shaving the chassis or PSU should be fine, but I definitely goofed on that one. Because the airflow in the V351B with Scythe Gentle Typhoons is set up as a positive pressure design, upside-down (compared to the arrows) mounting might be OK. I will check with Seasonic ASAP to confirm, though.

    Glad to hear you’re loving it. Getting the airflow and noise levels balanced is definitely the hardest part of any design like this. 🙂

  31. Hanging Chad

    I was wondering what you are planning on doing with the 2 SSD drives? Are you going to use a RAID controller? Are there benefits to using 2 SSD drives vs 1?

    I am planning on building a whitebox and only running 2 VMs for botting, and may run another 2 VMs in the future. I am thinking about running a single SSD drive for now and considering a NAS later. I do have a couple of SSD drives that I could dedicate to this build.

    Thanks for any help.

  32. Stan

    Phil – I just saw the TechNote about the Seasonic X400 Fanless PSU; thank you for checking into this. I will give this a try on my second node, which I just received the parts for. Once I have it built and running, I will then tear down my first node and switch the PSU around. Again, thanks for your efforts.

  33. Welcome to vSphere-land! » Home Lab Links

    […] Server Lab Setup Part 2 (Ray Heffer) VMware vSphere Whitebox Server Lab Setup Part 3 (Ray Heffer) Better than Ever – it’s the BabyDragon II! (Rootwyrm’s Corner) NetGear ReadyNAS review: A look at the ReadyNAS 2100 (SearchSMBStorage) […]

  34. Steven

    Hi Phil,

    Is the mentioned hardware of the BabyDragon II + X9SCM-iiF still compatible with the new ESXi 5.5 version?

  35. Phillip Jaenke

    Ayep, it is! No changes there – fully VMware ESXi 5.5 compatible. Also VSAN compatible/capable according to a number of reports. Working on designs for the BabyDragon III (E3-1200v3 CPUs) but no ETA yet.
    So far though, it looks like the BabyDragon III will be in two flavors. One is the one you all know and love, with a 7 year manufacturing life and other improvements. The other is specifically designed for VSAN use and abuse with a lot more disk ports. (Being held up in part due to issues with the BIOS. It’s likely the BabyDragon III will require a custom BIOS.)

  36. Colin

    I have a quick question.

    I was able to get the following equipment from my work place to use for my lab.

    Now the question is: how much can I really practice while running things on a monster lab server with the specs shown below?

    Or should I invest in building a physical 4-host server lab rather than going the nested route?

    The key things I’m testing and preparing for are the VCIX DC and NV tracks.


    1x SuperServer 7048R-C1RT4+ – 4U/Tower – 16x SATA/SAS – LSI 3108 12G SAS – Quad 10-Gigabit Ethernet – 1000W Redundant
    2 x Six-Core Intel® Xeon® Processor E5-2603 v3 1.60GHz 15MB Cache (85W)
    8 x 32GB PC4-17000 2133MHz DDR4 ECC Registered Load-Reduced DIMM
    2 x 64GB SATA 6.0Gb/s Disk on Module (MLC) (Vertical)
    2 x 240GB Intel® SSD DC S3500 Series 2.5″ SATA 6.0Gb/s Solid State Drive
    Integrated Video (Included with Motherboard)
    Integrated LSI 3108 SAS 3.0 12Gb/s 8-port RAID Controller with 2GB Cache
    Intel® Ethernet Server Adapter I350-T4V2 Quad Port (4x RJ-45)
    Included Supermicro Mobile Rack M28SACB-OEM

  37. Phillip Jaenke

    My thoughts on this are a little complicated – okay, maybe not so much, but certainly detailed.

    I’m not a fan of the 7048R because it is loud. Loud as hell. Obnoxiously so for no real good reason – especially not with 85W processors. Secondly, that’s only one box. To really comprehend things, you’re going to want a minimum of two. Since you’re doing the DC and NV tracks, more boxes are going to be much better – especially as nested ESXi is a “best effort, maybe” thing.

    What I would honestly recommend is going with a quartet of small or tiny boxes around 16-32GB of RAM each and put a bit more money out for a high end managed switch pair, e.g. Lab-in-a-Box style – which I’m still working on. The problem keeps being cost since LIAB (I REALLY need a better name for it) is supposed to be no more than $5000 for FOUR hosts and two switches.

    As far as specific hardware, my current favorite ITX board is the ASRock E3C224D2I. It’s vastly superior to the Supermicro ITX options because it takes standard DIMMs whereas the X10SLV requires impossible to find ECC SO-DIMMs. The drawback is that you can only get 16GB per system. But it is a cheap box to build.

    To be honest, I’ve moved mostly to ASRock these days as they’re putting out a better cost-feature set than Supermicro by a very wide margin. SM has regressed badly on the features with new models, and isn’t offering anything new or even mildly interesting for E3 v3s. Plus the AST IPMI is much, much better.

    As far as switches – I would invest in an HP ProCurve with Layer 3 (really any current model will work there) or the Juniper EX2200-12C. I’d recommend two switches as well for advanced NV configurations, but you can do most of it with VLANs and port segmenting too. Either way expect to spend at least $600 for a quality switch.

    Hope this helps! 🙂

  38. Installing VMware viclient 5.1.0 on Windows 10 - Notes and more

    […] was time to do some maintenance on my home ESXi server.  This server is a small variation of the Baby Dragon II and is so low maintenance that I even forgot what version of ESXi it was […]