I’ve been doing a lot of work on private (internal) clouds lately – it’s a result of my new job with Unisys. Part of that work has been spending time with customers on their plans for cloud computing — internal and external. There’s some very interesting work going on in the private cloud space, and the solutions available to enterprises to build their clouds are many.
Note – I make the (internal) distinction for a reason. The term “private cloud” is now starting to morph from purely internal, to internal and external clouds controlled closely by IT. Amazon’s Virtual Private Cloud is an example of a private cloud in an external provider setting.
I have seen charts from Gartner that show how private (internal) clouds will get more money from IT over the next few years than public clouds. I’ve also seen the benefits of a private cloud in the development/test workload scenario here at Unisys. The numbers are pretty staggering (we are publishing a paper on this).
I can imagine that IT folks are just a wee bit confused by all the cloud noise lately, especially when it comes to private clouds. You can choose from a wide range of approaches. These start with open source cloud projects like Eucalyptus and OpenNebula, which provide a good base-level cloud framework. Then there are the players who have been out there a while but are not really all that big — like Enomaly or 3tera (both of whom seem to see greener fields selling cloud solutions to hosting and telco providers than to enterprises).
In the 800-lb gorilla category you have VMware with their vCloud initiative. Not that most VMware customers aren’t already locked in until Armageddon, but vCloud is just one more way to remove any possibility in the future of getting out from under VMware’s thumb. vCloud=vLock! (note – as always on these pages, this is me talking, not my employer)
There’s also the big iron guys like IBM with CloudBurst and HP CloudAssure (is that even a product?). In this collection you can go from free (open source) to over $200,000 (HP, IBM) just to open the box. Oh, and it normally comes in a box, as in appliance (or appliance with cloud capacity in a rack).
So, what does a private internal cloud solution look like from an IT perspective anyway?
Well, it looks a bit like how Amazon’s EC2 and S3 look to Amazon’s internal tech crew. It starts with a bunch of hardware that can run the workloads. To this you add a level of virtualization and/or grid control to enable multiple workloads to run on the same box. This can be Xen (as in Amazon), or it can be VMware ESXi or Hyper-V, etc. Then you add layers of automation that allow you to manage tens, hundreds, thousands or even tens of thousands of boxes with relatively few people. My friend John Willis (a.k.a. @botchagalupe) calls this “zero touch” infrastructure (note, John is working with the Canonical folks on the Ubuntu Cloud bundle with Eucalyptus). “Zero touch” covers server repurposing, virtualization, image remediation (patch and release level management), provisioning, metering (for chargebacks) and more. Then you add self-service interfaces (portal, API, etc.) and voila! Private cloud. Note – I’m pretty sure that Eucalyptus and OpenNebula fall far short of this level of automation out of the box.
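To make the “zero touch” idea concrete, here’s a minimal sketch of that pipeline in Python. Everything in it (`Node`, `provision_node`, the image name) is hypothetical and illustrative — it’s not any vendor’s API — but it shows the point: a bare box goes from metal to metered cloud member with no human in the loop.

```python
# Hypothetical sketch of a "zero touch" enrollment pipeline.
# All names here are made up for illustration, not a real product's API.

from dataclasses import dataclass

@dataclass
class Node:
    hostname: str
    hypervisor: str = "none"   # e.g. Xen, ESXi, Hyper-V
    image: str = "none"        # patch/release level lives here
    metered: bool = False
    in_cloud: bool = False

def provision_node(node: Node, hypervisor: str, image: str) -> Node:
    """Repurpose a bare box into cloud capacity -- no manual steps."""
    node.hypervisor = hypervisor   # virtualization layer
    node.image = image             # image remediation (patch level)
    node.metered = True            # metering hook for chargebacks
    node.in_cloud = True           # joined to the cloud pool
    return node

box = provision_node(Node("rack7-blade3"), "Xen", "base-2009.07")
print(box.in_cloud)  # True -- bare metal to cloud member, zero touch
```

The real work, of course, is in what each of those one-line assignments hides; the sketch just shows why the layers have to be automated end to end.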
As you might imagine, this is bloody difficult to pull off. Amazon, Google and others worked their keisters off to get to where they are, and they are still building. Basically, the primary activity of the data center folks at Amazon is swapping servers (and that’s a big job). One fails, another is swapped in, and the system grabs the new box and joins it to the cloud. In most IT organizations, a lot of people get involved in installing the image on the machine, virtualizing it, and joining it to the network. For obvious reasons, that wouldn’t scale at Amazon.
One of the first things IT needs to let go of, to get a cloud running to its full potential, is the concept of manually approving anything. All of the rules for approvals need to exist in advance and the system just needs to follow them. Having a person “OK” the creation of a VM is just crazy, yet that’s what happens in many IT organizations. Someone requests a virtual machine instance for a project and it goes into a queue for manual approval, and then manual implementation. Yikes!! This isn’t to say that nothing is approved – as I said, you have to set up the rules in advance. How many instances of X can a department get? What do we charge them (IT is now really a business, no?)? Are there time limits on the VMs or servers we provide (e.g. maximum lease terms)? All of this needs to be there and ready to go.
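The rules-in-advance idea can be sketched in a few lines. This is a hypothetical policy check with made-up departments and numbers — the point is that every question (quota, rate, lease term) is answered up front, so the system can say yes or no instantly with no ticket queue.

```python
# Hypothetical pre-approved policy table: all rules exist in advance,
# so no human "OK" is needed at request time. Values are illustrative.

POLICY = {
    "engineering": {"max_instances": 20, "rate_per_hour": 0.25, "max_lease_days": 30},
    "marketing":   {"max_instances": 5,  "rate_per_hour": 0.25, "max_lease_days": 14},
}

def approve(dept: str, running: int, lease_days: int) -> bool:
    """Return True if the request fits the pre-set rules -- no queue, no ticket."""
    rules = POLICY.get(dept)
    if rules is None:
        return False                       # unknown department: no rule, no VM
    return (running < rules["max_instances"]
            and lease_days <= rules["max_lease_days"])

print(approve("engineering", running=3, lease_days=7))   # True
print(approve("marketing", running=5, lease_days=7))     # False: quota reached
```

A real implementation would pull these rules from IT’s service catalog and feed the rate into the chargeback system, but the shape is the same: policy as data, evaluated by the machine.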
Then you need to give people the portal. That’s the magic pixie dust that makes it all come together. Rather than “requesting” a server, you provision one. It might take a few minutes for the automation to do its work, but that’s a lot better than the hours, days, or even weeks (yes, weeks for a single VM!!) it used to take.
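Behind the portal, the difference between “requesting” and “provisioning” is roughly this hypothetical sketch (function and field names are mine, not any real cloud API): the call returns a live instance handle immediately, not a ticket number.

```python
# Hypothetical self-service provisioning call behind the portal.
# The caller gets an instance back right away -- no request queue,
# no manual implementation step.

import uuid

def self_provision(user: str, instance_type: str) -> dict:
    """Create a VM immediately under pre-approved rules."""
    return {
        "id": f"vm-{uuid.uuid4().hex[:8]}",
        "owner": user,
        "type": instance_type,
        "state": "booting",   # automation finishes in minutes, not weeks
    }

vm = self_provision("alice", "small")
print(vm["state"])  # booting
```

Compare that to the old path: form, approval queue, manual build, network join — the portal collapses all of it into one call governed by the rules set up in advance.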
So, to net it out — a private (internal) cloud = hardware + virtualization + automation + chargeback + self-service portal. It also = the promised land for enterprise IT modernization.
Sounds easy? As if!