Tuesday, November 23, 2010

Defending the Private Cloud

Phil - You know I love hyperbole as much as the next guy, but come on... discrediting the private cloud?? (and to those who aren't aware - I've corresponded with Phil for just over 7 years and have a sincere respect for him... but that doesn't mean I won't rip into his posts ;-)

Phil writes:
"Back in January, I made a controversial prediction that private clouds will be discredited by year end. Now, in the eleventh month of the year, the cavalry has arrived to support my prediction, in the form of a white paper published by a most unlikely ally, Microsoft."
A whitepaper from Microsoft is the cavalry? Wow - it must have been written by Bill Gates himself! Or... a couple of noobs with MBAs and banking backgrounds... But to be fair, the paper rocks. It's dead on. It says that the cloud model is a good one - and that *eventually* more and more applications will be a good fit for large-scale public clouds.

The cool stuff described in the paper is evolving; it will take time (like a decade). That said... let's take a look at the realities that my clients live with on a daily basis:

1. ALL (not some) of my large clients have a mixed computing environment including some combination of AIX, Solaris and Z. NONE (not some) of the public cloud providers have options for supporting all of these environments. I know, you're thinking to yourself... well, they should just port the applications to Linux/Wintel and all would be good. However, in the vast majority of cases, the applications are packaged software and my clients have little influence over the vendors who own them. So, to be clear - a significant portion of the applications are not targets for the current large-scale cloud providers (like Amazon, Microsoft, etc.)

2. Most applications are data-intensive and coupled together. This presents a problem when you want to move applications from your internal data center to a public cloud. I compare it to pulling a paper clip out of your desk drawer only to find it bound to a bunch of other paper clips. Enterprise applications are often glued together, with either low-latency requirements between them or a need to move large amounts of data between them (not good if you have remote data centers with thin pipes and ingress/egress fees). **Phil, it's about Loose Coupling ;-) (see the sketch after this list)

3. Hardware and software provisioning times in the enterprise are embarrassing. The amount of time and money wasted waiting for new environments to be procured, stood up, tested, secured, etc. would astound you. The pain is real TODAY - and waiting a decade for a public cloud to support the half-dozen hardware platforms, operating systems, COTS licenses, etc. that you need for integration testing isn't an option.

4. Mankind didn't suddenly change in 2010. It turns out that wholesale moves from one computing model to another are not in the corporate DNA. Enterprises that excel at mitigating risk are taking incremental steps to the cloud. First, they're interested in finding out simple things like "how will my business-critical application perform if we virtualize it?" or... "if we moved our data-intensive application off of our vertically scaled mainframe onto a horizontally scaled, shared-nothing commodity architecture, will it still perform?" You see... enterprise I.T. has lots of unknowns around cloud architectures. It will take some time for them to understand the basics. Once they answer the architectural questions, figuring out who hosts it is a rather simple problem (price, service & reliability).
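On the loose-coupling point in item 2, here's a minimal sketch of what decoupling looks like in practice: two components that never call each other directly, talking through a queue instead. The component names are hypothetical, and an in-process Python queue stands in for what would really be a durable message broker between data centers:

    import queue
    import threading

    # Two "applications" decoupled by a queue: the producer never calls
    # the consumer directly, so either side can be moved, replaced, or
    # scaled independently - the essence of loose coupling.
    orders = queue.Queue()

    def order_entry_app():
        for order_id in range(3):
            orders.put({"id": order_id, "sku": "WIDGET-42"})  # fire and forget
        orders.put(None)  # sentinel: no more work

    def fulfillment_app():
        while True:
            order = orders.get()
            if order is None:
                break
            print(f"fulfilling order {order['id']}")

    worker = threading.Thread(target=fulfillment_app)
    worker.start()
    order_entry_app()
    worker.join()

Tightly coupled applications are the ones bound together by synchronous calls and shared data; those are the paper clips that come out of the drawer stuck together.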

Private cloud is a natural stepping stone. Most I.T. professionals that I have met do not understand the architectures, processes and operating models (regardless of public or private). Pushing naive people to a public cloud where their mistakes will be hidden by a magically elastic service interface is *not a good idea*. Trust me... it shows up when they get the bill. Instead, I wholeheartedly recommend a stepwise approach to learning about horizontal scaling, sharding, MapReduce, BigData, multi-tenant services, etc. in an environment where they can observe actions and outcomes.
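That stepwise learning can start with toy exercises on a handful of private VMs. For instance, a word-count sketch shows the mechanics of MapReduce (map, shuffle, reduce) in a few lines; real frameworks like Hadoop run the same three phases across many machines:

    from collections import defaultdict

    # A toy MapReduce word count - the kind of exercise that teaches the
    # model's mechanics before anyone runs up a public cloud bill.

    def map_phase(documents):
        for doc in documents:
            for word in doc.split():
                yield (word.lower(), 1)

    def shuffle_phase(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        return {word: sum(counts) for word, counts in groups.items()}

    docs = ["private cloud", "public cloud", "cloud architectures"]
    print(reduce_phase(shuffle_phase(map_phase(docs))))
    # {'private': 1, 'cloud': 3, 'public': 1, 'architectures': 1}

Run it on one box, watch what the shuffle costs, then ask how the same job behaves when the data won't fit on one box. That's exactly the observe-actions-and-outcomes loop I'm talking about.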

Wednesday, August 25, 2010

MomentumSI Partners on Private / Hybrid Cloud

Why self-service private cloud?
  • Improved agility — Deployment cycles shrink from months to minutes, making IT far more responsive to business lines and other internal customers.
  • Reduced capital expense — Utilization of hardware capacity improves dramatically due to elastic provisioning and de-provisioning of services (see the back-of-the-envelope math after this list).
  • Reduced operating costs — Software infrastructure and provisioning processes are standardized and automated, and control is delegated to decentralized constituents.
  • Reduced risk — The controlled cloud provides an alternative to rogue deployments to the public cloud. The ability to move workloads between deployment environments (physical, virtual or cloud) avoids platform lock-in.
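To see why the capital-expense point holds, here's some back-of-the-envelope math. All numbers are illustrative assumptions, not measurements: a statically provisioned farm must be sized for peak demand, while an elastic pool can track demand with modest headroom:

    # Hypothetical hourly demand, in servers needed
    hourly_demand = [20, 15, 10, 10, 25, 60, 90, 100, 95, 70, 40, 25]

    static_capacity = max(hourly_demand)  # statically sized for the peak
    static_utilization = sum(hourly_demand) / (static_capacity * len(hourly_demand))

    elastic_hours = sum(d * 1.2 for d in hourly_demand)  # 20% headroom each hour
    elastic_utilization = sum(hourly_demand) / elastic_hours

    print(f"static:  {static_utilization:.0%} utilized")   # ~47%
    print(f"elastic: {elastic_utilization:.0%} utilized")  # ~83%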
The core technologies and services within the MomentumSI self-service private/hybrid cloud platform include:


Tuesday, June 08, 2010

Current challenges for Application Performance Engineering

Application performance engineering is a discipline encompassing the expertise, tools, and methodologies needed to ensure that applications meet their non-functional performance requirements. Performance engineering has understandably become more complex with the rise of multi-tier, distributed application architectures that include SOA, BPM, SaaS, PaaS, cloud and others. Although performance engineering ideally should be applied across the lifecycle, we're seeing more factors that unfortunately push it into the production phase, typically to resolve problems that have already gotten out of hand. That's clearly a tougher challenge, so how did we get to this point?

In the client-server past, performance optimization was something that folks in the IT department typically figured out through trial and error. Developers learned to write more efficient database queries, database administrators learned to index and cache, and system administrators monitored CPU and memory to upgrade when needed.

As application architectures grew more complex, dependencies increased and it became harder for any one team to track down problems without chasing its tail. More organizations adopted something previously used only by enterprises with highly scalable, reliable, mission-critical applications: the performance testing lab. Vendors like Mercury created popular load testing tools like LoadRunner, and organizations invested millions in lab hardware and software in an attempt to recreate production environments that they could control for testing purposes.
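Tools like LoadRunner wrap a great deal of sophistication around it, but the core idea of a load test fits in a few lines: fire concurrent requests at an endpoint and look at the latency distribution. A bare-bones sketch, where the URL and the concurrency/request counts are placeholders:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8080/health"  # hypothetical endpoint under test
    CONCURRENCY = 20
    REQUESTS = 200

    def timed_request(_):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
            ok = True
        except Exception:
            ok = False
        return ok, time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(timed_request, range(REQUESTS)))

    latencies = sorted(t for ok, t in results if ok)
    errors = sum(1 for ok, _ in results if not ok)
    print(f"errors: {errors}/{REQUESTS}")
    if latencies:
        p50 = latencies[len(latencies) // 2]
        p95 = latencies[int(len(latencies) * 0.95)]
        print(f"p50={p50 * 1000:.0f}ms p95={p95 * 1000:.0f}ms")

What the lab added on top of this, at great cost, was a controlled environment that made those numbers mean something.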

Unfortunately, these performance labs became very difficult to cost-justify. First, it always seemed to take too much time and money to set up the realistic test environments you'd like, particularly as apps became more distributed. Next, projects were often already behind schedule when it came time to test, so lab time often had to be cut short. Factors like these minimized the lab's value, but the real killer was the high maintenance cost of all that hardware and software, along with the data center and staff.

This put many IT organizations in a tough spot. With limited means to perform system-wide performance testing, and with more SaaS/PaaS/cloud services in their architectures, they had to make do with whatever subsystem-level performance testing they could get. After that, it's finger-crossing and resigning yourself to further optimization in production.

Unfortunately, production can be a very frustrating place to try to optimize performance, particularly when you have performance problems and growing complaints from customers, partners, etc. It's in these pressured environments that you need true performance engineers who follow a methodical, systematic, end-to-end approach. Performance bottlenecks can reside in a myriad of places in highly distributed architectures, and you need a disciplined methodology to analyze dependencies, isolate problem areas, and then leverage best-of-breed tools to trace, profile, and optimize each of the tiers and technologies in the application delivery path. This takes a lot of skill and expertise.
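At the code tier, for example, a profiler turns that discipline into data rather than guesswork. A small illustration using Python's built-in cProfile, where the two lookup functions are contrived stand-ins for a real hot path:

    import cProfile
    import pstats

    def slow_lookup(items, targets):
        return [t for t in targets if t in items]   # O(n) list scan per lookup

    def fast_lookup(items, targets):
        index = set(items)                          # build an index once
        return [t for t in targets if t in index]   # O(1) lookups

    items = list(range(5000))
    targets = list(range(0, 10000, 2))

    profiler = cProfile.Profile()
    profiler.enable()
    slow_lookup(items, targets)
    fast_lookup(items, targets)
    profiler.disable()

    # The stats show which function actually burns the time, so the
    # optimization effort lands on the real bottleneck.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

The same isolate-then-measure loop repeats at the network, database, and middleware tiers, each with its own tools.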

In short, the challenges faced by today's application performance engineers in production settings are a far cry from the client-server days of in-house tuning and experimentation. We expect the role of Performance Engineer to grow in importance as SOA, BPM, cloud, and SaaS/PaaS implementations increase, at least until more viable pre-production system performance testing options rise to the challenge.