One of the big buzzwords today is cloud. Server Cloud, Memory Cloud, Storage Cloud, Public Cloud, Private Cloud, clouds ad nauseam; we hear of a new “cloud” implementation almost daily. But what exactly is a cloud in a computing context?
A cloud is a way to present a computing resource so that the resource appears infinite to the user. For example, company X launches a new website expecting to need 10 servers, 1 terabyte of storage, and 100 Mb/s of bandwidth. Instead, it finds it needs 100 servers, 10 terabytes, and 1,000 Mb/s due to unprecedented demand for its cell phone antenna amplifier. In the not-so-long-ago days this could have been a disaster: by the time the new servers, storage, and bandwidth arrived weeks later, the demand would already have passed thanks to the hurried release of the next generation of phone. Enter the era of the cloud: when the monitoring staff notices the huge leap in access and resource requirements, they notify their cloud provider, and within a few minutes (not days or weeks) new servers, storage, and bandwidth are magically added to the application, keeping things running smoothly with no apparent issues for the users. That is how the cloud concept is supposed to work. Unfortunately, the cloud rarely works that way for huge increases in need.
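As a rough illustration of that elasticity idea, here is a minimal threshold-based scaling sketch in Python; the function, target utilization, and numbers are hypothetical and are not drawn from any particular provider's API.

def scale_decision(current_servers, avg_utilization, target_utilization=0.6):
    """Return the server count needed to bring utilization back to the target."""
    if avg_utilization <= target_utilization:
        return current_servers
    return int(current_servers * avg_utilization / target_utilization) + 1

# Example: 10 servers running at 95% load suggest scaling to roughly 16 servers.
print(scale_decision(10, 0.95))   # -> 16

In practice the provider's automation, not a human watching dashboards, runs this kind of check continuously, which is what makes the "minutes, not weeks" response time possible.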
The challenge is that cloud providers have to be able to scale out and up to meet the needs of all their subscribers. This means over-provisioning in all areas to allow for sudden peaks in demand. Recent papers show how these demand spikes can result in under-capacity issues for cloud providers, which in turn lead to lost clients, lost revenue, and of course negative publicity. Other concerns include perceived security issues, with many potential users stating that they would never put their sensitive corporate data “in the cloud.”
All the issues and potential issues aside, one area that really causes problems is the provisioning of storage resources. Unlike CPU resources, which can be easily allocated and deallocated at will using virtual machine technology as loads change, static data needs only grow for users in the cloud space, requiring larger and larger numbers of storage arrays. In addition to capacity as raw volume, capacity in terms of IOPS and latency is also an issue when trying to meet required service level agreements (SLAs).
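To see why IOPS and latency, not just raw volume, drive the provisioning problem, a back-of-the-envelope sizing sketch helps; the per-drive figures below (roughly 180 IOPS and 600 GB for a 15K RPM drive) are typical ballpark assumptions rather than numbers from this post.

import math

def drives_needed(capacity_tb, workload_iops, drive_tb=0.6, drive_iops=180):
    by_capacity = math.ceil(capacity_tb / drive_tb)   # drives to hold the data
    by_iops = math.ceil(workload_iops / drive_iops)   # drives to deliver the IOPS
    return max(by_capacity, by_iops)

# A 7 TB data set fits on about 12 drives, but a 20,000 IOPS workload
# forces roughly 112 drives; the SLA, not the capacity, sets the bill.
print(drives_needed(7, 20000))   # -> 112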
One solution for the storage capacity versus SLA dilemma in the cloud space is to utilize a tiered, performance-based storage cloud for the users of the overall cloud space. Utilizing fast SSD storage in the uppermost tiers allows maximum use of resources, since SSDs are not sensitive to data placement and there is no need to short-stroke them to get low-latency access. Thus clients with stringent latency requirements can be served from the SSD tier while less demanding workloads remain on conventional disk.
By using SSD technology for low-latency customers, greater overall storage efficiency is realized: SSDs can be used at 100% of their storage capacity, and with low-latency reads removed from the lower-tier disk assets, those disks can also be utilized at much higher capacity. For example, if an application requires 1-2 ms latency to meet its response time requirements, you would otherwise need a read-caching SAN with disks short-stroked to 20% of capacity. That means buying, at a minimum, 5 times the number of drives actually needed for the data. So a 7 TB database would require at least 35 TB of disk with no protection, up to 70 disks depending on the type of RAID used. Alternatively, if the application data is hosted on a tier 0 SSD system such as a RamSan-630, which provides 10 TB of storage, only one or two units (the second for redundancy) are required, for a large reduction in server room footprint, energy, and cooling requirements.
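The short-stroking arithmetic above can be reproduced in a few lines; the 500 GB per-drive size is an assumption chosen to be consistent with the "up to 70 disks" figure quoted in the paragraph.

import math

def short_stroked(data_tb, usable_fraction=0.20, drive_tb=0.5):
    raw_tb = data_tb / usable_fraction      # 7 TB of data -> 35 TB raw
    drives = math.ceil(raw_tb / drive_tb)   # 35 TB / 0.5 TB -> 70 drives
    return raw_tb, drives

print(short_stroked(7))   # -> (35.0, 70)

# By contrast, a 10 TB tier 0 SSD such as the RamSan-630 holds the same
# 7 TB at essentially 100% utilization on one unit (two for redundancy).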
In the server cloud space, SSDs can also make a huge difference. One of the largest uses of resources in the cloud is the instantiation of the virtual machines used to serve clients. In tests using a standard SAN, only 10-15 VMs could be instantiated simultaneously. When an SSD was substituted for the SAN, 40-50 VMs could be instantiated in the same time frame while taxing other resources far less. You can read more about this SSD implementation here: http://vknowledge.wordpress.com/2010/04/27/texas-memory-systems-ramsan-620/
Looks like the cloud's silver lining might just be SSDs.