With the recent unveiling of Cisco InterCloud, Cisco and a select group of cloud service providers and technology partners have taken a significant step toward delivering on the long-promised “hybrid cloud” computing model, enabling enterprises to extend their datacenters into the public cloud as needed.
What is a hybrid cloud? The concept, touted by many vendors and service providers as the “Next Big Thing,” enables IT to use both on-premises and cloud-based infrastructure seamlessly, for significant cost reduction, faster and simpler cloud bursting, disaster recovery and other use cases.
But let’s take a step back from the news of the day. Hybrid cloud may be here – but does your company need or want it? Is it really the “holy grail” of IT and will it actually make an enterprise’s data more secure, its business processes and operations more efficient and its customers more satisfied?
Hybrid cloud is interesting to IT departments and CIOs worldwide because it was born out of a need within IT departments themselves. Virtualization has brought new levels of efficiency and flexibility to IT operations, and that flexibility leaves both IT and end users wanting more. With a hybrid-cloud model, IT departments can scale their on-premises datacenters when they need to, and for the specific applications they want to scale. The ability to extend on-premises datacenters into the cloud takes datacenter flexibility to new levels of efficiency and cost savings.
Why do IT managers and CIOs need flexibility in their datacenters? Consider the examples below, from two of Zerto’s customers.
Flexibility in Times of Emergency
Affigent LLC is a government contractor with a large sales force spread across the United States. In 2012, as Hurricane Sandy gathered strength near the East Coast of the US, businesses in its path scrambled to address the complex issues of disaster recovery – and of moving their data and applications out of the hurricane’s way. Affigent, whose virtualized infrastructure is managed as an off-premises private cloud by Integrity Virtual IT, was able to continue working through the storm with no impact on business operations. As the storm approached, a failover to Integrity’s Chicago site was performed during Monday lunchtime: servers were shut down cleanly, the latest changes were replicated, and the servers were restarted at a secondary site also managed by Integrity. It took 35 minutes to move and another 20 minutes to fully test all the applications that were failed over to the Chicago site. Affigent was able to conduct business during the storm and provide its customers with timely responses across its entire sales force.
Flexibility for Business Operations
Consider Woodforest National Bank, with its “always on” approach to relationship banking, offering 24×7 live banking with tellers and personal bankers. Woodforest exceeds industry standards for hours of operation and customer service, so its application availability requirements are rigorous. The bank’s location in Houston, Texas puts it within range of seasonal threats from hurricanes and tornadoes. Therefore, each July, before the heart of hurricane season, all production applications are pre-emptively failed over to a secondary site, with a return to the primary site around January, avoiding hurricane season and protecting all Woodforest systems. Regularly failing over an entire datacenter would be a nightmare with a physical datacenter, but with a virtualized infrastructure it is a much simpler process; Woodforest is 99 percent virtualized on VMware vSphere.
As these two examples illustrate, one of the most significant organizational benefits of the modern, virtualized datacenter is the ability to easily move workloads – including data and applications – across different locations depending on business and IT requirements. This flexibility and workload portability are what allow organizations to cost-effectively select their optimal cloud strategy.
Businesses large and small have been at times both seduced and confused by the grand promise of the cloud. But when they talk about “hybrid cloud,” they are really just seeking the flexibility to use the cloud as an extension of their own datacenter. They also want levels of openness, transparency and ease of use – previously unattained in the enterprise – where production workloads can be easily moved, centrally managed and protected.
With InterCloud, the silos of public and private clouds – and the issues that come with moving data and applications between them – will be razed. Enterprises will benefit from an entirely new model of workload portability. They will be able to move on-premises production applications to clouds without incurring downtime or changing routing and firewall configurations between sites. They will be able to protect and recover applications without needing to purchase storage from the same manufacturer for both production and recovery sites. They will have one common management console for applications, no matter where those applications reside. This increased flexibility is what IT departments want, and it is the reason many are watching the hybrid-cloud space closely.
A version of this post first ran on Wired Innovation Insights.