Anatomy of private cloud


Photo by Makio Kusahara

Good capacity planning for computing resources (server hardware, software, and network) almost always means providing enough capacity to handle peak demand. Since peak demand occurs only for short periods, providing enough capacity for peak times automatically means over-capacity during non-peak times. How can the extra capacity be put to better use during non-peak times? How about selling it?

Before a company can sell its extra computing capacity during non-peak times, it needs a way of providing computing resources to users, internal or external, on demand. This means that the capacity assigned to users (number of processors, memory, and storage size) should be easy to adjust according to demand. But how do you do that with physical servers? It’s impractical to add or remove processors, memory modules, or hard discs as needs fluctuate. This is where cloud computing comes into play: it enables capacity adjustment not on a bank of physical servers, but on a bank of virtual servers.
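To make this concrete, here is a minimal sketch (hypothetical classes, not any real cloud API) of why capacity adjustment is easy on virtual servers: resizing a virtual server is just a change of bookkeeping on its host, not a trip to the server room.

```python
class PhysicalServer:
    """One physical host in the bank, carrying several virtual servers."""

    def __init__(self, cpus, memory_gb):
        self.cpus = cpus
        self.memory_gb = memory_gb
        self.vms = []

    def free_cpus(self):
        return self.cpus - sum(vm["cpus"] for vm in self.vms)

    def free_memory_gb(self):
        return self.memory_gb - sum(vm["memory_gb"] for vm in self.vms)

    def resize_vm(self, name, cpus, memory_gb):
        """Grow or shrink a virtual server, as long as the host has room."""
        vm = next(v for v in self.vms if v["name"] == name)
        if cpus - vm["cpus"] > self.free_cpus():
            raise RuntimeError("not enough spare CPUs on this host")
        if memory_gb - vm["memory_gb"] > self.free_memory_gb():
            raise RuntimeError("not enough spare memory on this host")
        vm["cpus"], vm["memory_gb"] = cpus, memory_gb


host = PhysicalServer(cpus=32, memory_gb=128)
host.vms.append({"name": "web-1", "cpus": 2, "memory_gb": 4})
host.resize_vm("web-1", cpus=8, memory_gb=16)   # peak demand: scale up
host.resize_vm("web-1", cpus=2, memory_gb=4)    # off-peak: scale back down
```

Doing the same on a physical server would mean opening the case and swapping components; here it is two method calls.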

Now, this on-demand resource allocation is a pretty cool set of technologies. So even when companies don’t want to sell their extra capacity, they might still want to use these technologies. When they use them and sell their extra capacity, it’s called public cloud. When they use them but don’t sell the extra capacity, it’s called private cloud.

Private cloud and physical server sizing

Server rack
Photo by Balázs Kovács

Physical server sizing makes sure that each physical server has enough computing capacity to provide the services it’s supposed to provide. Services requiring large computing capacity (e.g. a database) should run on large servers; services requiring small computing capacity (e.g. a portal) should run on small servers. But what to do when capacity assignment is performed on a bank of virtual servers, not physical servers, as in the case of cloud computing? Then the size of the physical servers doesn’t matter anymore. For practical purposes, it’s a good idea to just use a bank of identical large servers.
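A toy placement loop (invented names, not any particular scheduler) illustrates why physical size stops mattering: the cloud simply first-fits virtual servers, large and small alike, onto whichever identical host still has room.

```python
def place(vms, host_count, host_cpus):
    """First-fit placement of virtual servers onto identical hosts."""
    hosts = [{"free": host_cpus, "vms": []} for _ in range(host_count)]
    for name, cpus in vms:
        # Pick the first host with enough spare CPUs.
        target = next(h for h in hosts if h["free"] >= cpus)
        target["vms"].append(name)
        target["free"] -= cpus
    return hosts


# Large and small workloads mix freely on a bank of identical 32-CPU hosts;
# no host needs to be sized for any particular service.
hosts = place([("database", 16), ("portal", 2), ("reports", 16), ("wiki", 2)],
              host_count=2, host_cpus=32)
```

With physical sizing, the database and the portal would each need a machine matched to its workload; here both land wherever capacity happens to be free.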

Private cloud technology stack

Private cloud building blocks

Cloud controller

This is the control centre of the cloud. It is here that administrators add (also called contextualise), upgrade/downgrade, move, or remove virtual servers. It is basically an orchestrator of the hypervisors residing on the physical servers.

Image template

This part stores the templates on which new virtual servers will be based. Multiple templates should be stored here: plain Linux, plain Windows Server, Linux with a database server, Windows Server with a database server, Linux with a web server, Windows Server with a web server, and so on. When adding (contextualising) a new virtual server, a template is adjusted for processor, memory, and storage sizes, then given a virtual network address, before being copied to the destination hypervisor and activated for use.
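A sketch of that contextualisation step (field names invented for illustration; real stacks work on image files and metadata rather than dictionaries): copy the template, adjust the sizes, and attach a virtual network address before handing the result to a hypervisor.

```python
import copy

# A tiny template store; each entry plays the role of a stored image.
TEMPLATES = {
    "linux-web": {"os": "Linux", "services": ["web server"],
                  "cpus": 1, "memory_gb": 1, "disk_gb": 10},
}


def contextualise(template_name, cpus, memory_gb, disk_gb, ip_address):
    """Derive a new virtual server definition from a stored template."""
    vm = copy.deepcopy(TEMPLATES[template_name])  # never modify the template
    vm.update(cpus=cpus, memory_gb=memory_gb, disk_gb=disk_gb, ip=ip_address)
    return vm


vm = contextualise("linux-web", cpus=4, memory_gb=8, disk_gb=40,
                   ip_address="10.0.0.12")
```

The deep copy matters: the template itself stays pristine so it can be contextualised again for the next virtual server.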


Hypervisor

This is the underlying technology that makes cloud computing possible. A hypervisor makes a single large physical server appear as multiple small virtual servers, a process called hardware virtualisation. Depending on when the hypervisor is activated, there are 2 types of hypervisors:

  • type 1 hypervisors control the physical server hardware directly, by being the mini operating system that runs first when the physical server is turned on
  • type 2 hypervisors control the physical server hardware through another operating system that is already running before the hypervisor is activated

Of course, the task of dividing a single large physical machine into multiple small virtual machines consumes some computing capacity itself. Hence, understandably, virtual servers run a little slower than physical servers, although most of the time the performance degradation is negligible. To make this process a little more efficient, physical servers may have a built-in ability to understand that they are being used simultaneously by multiple virtual servers. This feature is called hardware-assisted virtualisation and can be seen in Intel processors with VT-x capability or AMD processors with AMD-V capability, among others.
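On Linux, these features show up as CPU flags in /proc/cpuinfo: "vmx" for Intel VT-x, "svm" for AMD-V. A small helper (hypothetical function name) that looks for them in a flags string:

```python
def virtualisation_support(cpu_flags):
    """Return which hardware-assisted virtualisation feature, if any,
    is present in a space-separated CPU flags string."""
    flags = set(cpu_flags.split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None


# On a real Linux machine you would feed it the flags line, e.g.:
#   line = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
print(virtualisation_support("fpu vme msr pae vmx sse2"))   # Intel VT-x
```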

Just like there are multiple physical hardware platforms (x86, SPARC, etc), there are multiple virtual hardware platforms (Xen, VMware, KVM, etc). Fortunately, where different physical hardware platforms are incompatible, there’s a certain degree of compatibility or convertibility between virtual hardware platforms.

Private cloud reliability

Server crash
Photo by stevenafc

Private cloud may boost service reliability a little bit because:

  • system administrators can simply move virtual servers from one physical server to another when planned maintenance on the original physical server is needed
  • if one virtual server crashes, all the other virtual servers on the same physical server keep working properly; system administrators need only restart the crashed virtual server. The data centre gets the benefits of server consolidation (fewer physical servers) and server deconsolidation (multiple smaller servers) at the same time
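The maintenance scenario above can be sketched as follows (a toy data model, not live migration): before taking one host down, the controller moves its virtual servers to another host, so the services never notice.

```python
def evacuate(hosts, source, destination):
    """Move every virtual server off `source` onto `destination`."""
    hosts[destination].extend(hosts[source])
    hosts[source] = []          # source is now free for maintenance
    return hosts


hosts = {"host-a": ["web-1", "db-1"], "host-b": ["web-2"]}
evacuate(hosts, "host-a", "host-b")   # host-a can now be powered off safely
```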

Private cloud on its own won’t guard against the risk of hardware failure, though. For that, private cloud should be coupled with clustering.
