It has been an epic week for cloud news, with the giants of the industry all making big announcements. That being said, here are six great articles from the week of April 4th that are worth reading over the weekend:
Seventy-one percent of businesses are concerned about managing cloud computing, according to a survey. The companies are worried about the potential complexity of controlling a software-as-a-service setup, the survey found. Additionally, over half of IT directors said they feared they would lose control of their infrastructure if they shifted systems to the cloud.
Turning to diagnosis, it’s obvious that current operational and organizational models were developed for a much smaller scale, in which the cost and low productivity of manual provisioning were acceptable, in part because there were no alternatives. Today, however, lack of automation threatens the ability of established IT organizations to survive.
Asked when they would adopt cloud storage, 22% said they’d already adopted some form of cloud storage, 16% indicated they’d adopt it in the next six months, 27% said six months to one year, 17% said one year to 18 months, and 12% said 18 months to two years.
Cloud market segments, rogue IT, Starving the Beast, the trough of disillusionment… this one has a little something for everyone.
Conventional wisdom might have you believe that the systems we build are basically safe, and that all they need is protection from unreliable humans. This logically stems from the myth that all outages and degradations occur as the result of a change gone wrong, and I suspect this idea also comes from Root Cause Analysis write-ups ending with “human error” at the bottom of the page. But Dekker, Woods, and others in Behind Human Error suggest that listing human error as a root cause isn’t where you should end your investigation; it’s where you should start it.
While some enterprise applications can make use of additional computing power with relative ease, many cannot. Most client-server applications are built with baked-in assumptions about where they are installed and where their data lives. For example, how many applications can be run dynamically on new servers without prior configuration?
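To make that point concrete, here’s a minimal sketch (the hostnames, paths, and variable names are invented for illustration) contrasting a legacy-style application that hard-codes where it runs and where data lives against one that reads the same settings from its environment, so it can start on any freshly provisioned server:

```python
import os

# Legacy style: location assumptions are baked into the code, so the app
# breaks when deployed to a new server without hand-editing.
# (Hypothetical host and path, for illustration only.)
LEGACY_DB_HOST = "db01.corp.internal"
LEGACY_DATA_DIR = "/opt/app/data"

def legacy_db_host() -> str:
    """Returns the hard-coded database host the app assumes exists."""
    return LEGACY_DB_HOST

# Cloud-friendly style: the same setting comes from the environment, so
# a provisioning system can inject it when it spins up a new instance.
def cloud_db_host() -> str:
    """Returns the database host from the environment, with a fallback."""
    return os.environ.get("APP_DB_HOST", "localhost")
```

The difference looks trivial in ten lines, but multiplied across every install path, hostname, and port in a large client-server codebase, it is exactly what makes one application easy to scale onto new machines and another effectively immovable.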