“Cloud is not for us.” “We virtualize almost everything, so it’s like we already use the Cloud.” “The Cloud is too expensive.” So what’s your excuse? The Cloud isn’t going away. Yet instead of refocusing their negativity and fear of the Cloud onto something productive, a lot of companies are still content to waste time, money, and energy being vocal skeptics. So let us be the first to address these 5 large elephants in the room head on and coax you out of your skeptical shell.
Before you start writing hate emails, please allow me to explain. By far the easiest way to slow or halt a Cloud project is to raise security concerns. While many of these concerns are valid, they are not severe enough to completely derail your Cloud efforts. It amazes me that companies will put all their CRM contacts into Salesforce.com, use WebEx for their most sensitive customer meetings, provide Box.net or Dropbox accounts to share files, and rely on a private MPLS or Frame Relay network to interconnect their offices, yet they do not trust a Public or Hybrid Cloud provider. By applying the same basic philosophies that have been in place for ages, customers can secure their Cloud deployments. For IaaS, harden the operating systems, patch the applications, restrict access, and use any and all security features and tools available from the Cloud provider. For SaaS, consider the provider’s track record and demand that it support encryption, or switch to one that does!
Some weeks it seems all we read about is Cloud outages. One week it’s Amazon EC2, the next it’s VMware’s Cloud Foundry, and then Microsoft Azure. In this new world of Facebook, LinkedIn, and Twitter, instant feedback fuels a new type of overblown hype or panic I call iT (instant Terror). In the end, it has the ability to make IT miserable. As I’ve discussed before, how would your datacenter uptime compare against the Cloud providers’? Would you be willing to put up a dashboard showing your availability for the entire world to see? Could you survive in a world that expects zero-downtime upgrades? Reliability is a concern whether you are running legacy hardware or utilizing a private, public, or hybrid Cloud. The reality is that the compass is pointed in the right direction and reliability continues to improve. Finally, just because you are using a Cloud provider does not mean you can simply forget about disaster recovery and business continuity.
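To make that uptime comparison concrete, here is a quick back-of-the-envelope calculation (a sketch in Python, purely for illustration) of how much downtime per year a given number of availability “nines” actually allows. Compare these budgets against your own outage logs before pointing fingers at the Cloud providers.

```python
# How many minutes of downtime per year does an availability target allow?
# Illustrative math only -- measure your own environment before comparing.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.95, 99.99):
    print(f"{pct}% uptime allows {allowed_downtime_minutes(pct):.1f} min/year of downtime")
```

Even a respectable-sounding 99% availability still permits over three and a half days of downtime per year, which is why “our datacenter is reliable” deserves a number behind it.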
Ok, ok, yes, this is an extremely broad topic. You’ll hear arguments such as “you can’t go faster than the speed of light” and “that OS doesn’t perform in the Cloud.” Others will point to the risks of shared infrastructure: “I can’t control what’s utilizing the resources I need.” Finally, don’t forget about the network and the latencies that may be introduced by distance and load. My answer to all of this: what’s new? These are the same issues and challenges that you and I have been dealing with in our legacy architectures all along. In fact, this is why Cisco, VMware, and EMC formed VCE, Microsoft has Fast Track, and NetApp has FlexPod: a consistent architecture should deliver consistent performance. In the Cloud, however, this burden does not fall on the customer; it falls on the providers. That is why you see companies such as SAP and Oracle certifying their software on Amazon EC2. In the end, managing performance is simply part of providing services to internal and external customers.
Here, I’m talking about the ability to truly monitor and instrument your Cloud. Visibility allows you to understand the entire infrastructure, validate SLAs, and make informed decisions about the size and scope of your deployment. It is important to view your Cloud from multiple perspectives: as a user, as the provider, as the operator, and as the administrator. To attain these different perspectives, your monitoring solutions must be able to communicate with a variety of infrastructure and application components while remaining agnostic to the underlying technology. A huge driver of the Cloud is preventing vendor lock-in, and that includes your monitoring solutions. While Cloud providers offer API access to give you a view into their infrastructure, most remain too much of a black box. I believe this will change over time as customers demand more and more transparency from their Cloud providers. Don’t be fooled by legacy solutions that cannot handle dynamic environments, integrate poorly, or do not understand automation. A properly instrumented Cloud provides operations and customers (internal or external) with the information they need to maximize their Cloud potential.
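As a deliberately vendor-agnostic illustration of what “validate SLAs” means in practice, here is a minimal Python sketch: it takes health-check samples, which you might pull from any provider’s monitoring API, and compares measured availability against a target. The sample format and the 99.9% default are my own assumptions for illustration, not any particular vendor’s API.

```python
# Vendor-agnostic SLA validation sketch: the collection mechanism
# (provider API, synthetic probe, agent) is deliberately left out so
# the same check works regardless of the underlying technology.

from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: int   # epoch seconds when the health check ran
    healthy: bool    # did the service respond correctly?

def measured_availability(samples: list[Sample]) -> float:
    """Percentage of health checks that passed."""
    if not samples:
        raise ValueError("no samples collected")
    up = sum(1 for s in samples if s.healthy)
    return 100.0 * up / len(samples)

def meets_sla(samples: list[Sample], target_pct: float = 99.9) -> bool:
    """True if measured availability meets or beats the SLA target."""
    return measured_availability(samples) >= target_pct
```

The point is not the arithmetic; it is that you own the measurement. If you rely solely on the provider’s dashboard, you are grading the exam with the student’s own answer key.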
Let’s face it: if you are in the business of IT, you are probably overworked and underappreciated. Most of your time is spent keeping your infrastructure up and running, upgrading solutions, troubleshooting issues, and keeping users content. It is hard enough keeping up with the newest features from Cisco, NetApp, EMC, SAP, Microsoft, and others, let alone learning new technologies like Puppet, Chef, or OpenStack. And then there is that little topic of IT budgets! Finally, some IT organizations are wary of other fads that have come and gone: autonomic computing, grid computing, utility computing, Web 2.0, and more. In the end, Cloud computing has the potential to free IT from the mundane tasks while giving you the agility to create new and interesting services for your internal or external customers. You may not have the time to plan and implement Cloud technology, but you don’t have the time to wait, either; your competitors are doing it already!
There you have it. 5 elephants in the room have now been named, addressed, and discussed… and guess what? This room is a heck of a lot bigger than any teeny-tiny elephant.
Have other concerns that I didn’t address here? Let me know in the comments.