Service Oriented Enterprise


Saturday, September 16, 2006

BPM 2.1.4.7.3.5.7.43.2.6.8.9.4.3.5.8.4.1  

I noticed some guys blogging about BPM 2.0. It reminds me of some thoughts I authored on the subject a few years ago.

http://www.looselycoupled.com/opinion/2003/schnei-bp0929.html

So - I hate the tag line BPM 2.0; it's so 2003. Ok, shame on me.

Here's why... it isn't about driving the software solution from a single angle. It isn't about the PROCESS. It isn't about the USER INTERFACE. It isn't about the SERVICES. It isn't about being MODEL DRIVEN. It's about integrating all of these concepts without your head exploding.

BPM 2.0 places too much emphasis on the process.
Web 2.0 places too much emphasis on the UI and social aspects.
UML 2.0 ;-) places too much emphasis on making models.
SOA 2.0 places too much emphasis on the services.
Enterprise 2.0 places too much emphasis on... hell, being a marketing term.
Did I forget any?

posted by jeff | 1:46 PM


Design Time Agility over Runtime Performance Cost  

Last week we were working with a client to define their core architectural principles. We had listed "Agility over Performance", and this created substantial debate. The first question was, "why were services going to perform slower?" - and couldn't we have "Agility and Performance"?

On occasion, architects get lucky and find new technologies and approaches that don't conflict. In my experience, that's the exception, not the norm. More common is the need to resolve competing interests such as 'agility' and 'performance'. And when they do compete, it is the job of the architect to give guidance.

The services found in your environment will likely fall victim to two primary performance-degrading factors (a sketch of both follows the list):
1. They will be remote, and will suffer all of the performance issues associated with distributed computing
2. They will likely be described and invoked via a fat stack, like the WS-I Basic Profile
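
For illustration only - none of this code is from anything above - here's a self-contained Java sketch of both costs: the same trivial operation invoked in-process, then over HTTP with a hand-rolled XML-ish envelope standing in for a SOAP stack. OverheadSketch, the /quote endpoint, and the envelope format are all invented for the example, and a real WS-I stack (full SOAP parsing, schema validation, security headers) would cost more, not less.

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class OverheadSketch {

    // The "business logic" both paths share.
    static int quote(int fareClass) {
        return 100 + fareClass;
    }

    public static void main(String[] args) throws Exception {
        // A tiny local HTTP endpoint standing in for a remote service.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/quote", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            // Naive "envelope parsing": strip everything but the digits.
            int fareClass = Integer.parseInt(new String(body).replaceAll("\\D+", ""));
            byte[] reply = ("<env><quote>" + quote(fareClass) + "</quote></env>").getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        int n = 500;

        long sum = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sum += quote(i % 4);                          // in-process call
        }
        long inProc = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {                     // same call over the wire
            HttpURLConnection c = (HttpURLConnection)
                    new URL("http://localhost:" + port + "/quote").openConnection();
            c.setDoOutput(true);                          // POST the request envelope
            c.getOutputStream().write(
                    ("<env><fareClass>" + (i % 4) + "</fareClass></env>").getBytes());
            try (InputStream in = c.getInputStream()) {
                in.readAllBytes();                        // read and discard the reply
            }
        }
        long remote = System.nanoTime() - t1;
        server.stop(0);

        System.out.printf("in-process: %d us, remote: %d us (checksum %d)%n",
                inProc / 1000, remote / 1000, sum);
    }
}

Run it and the remote loop will typically come out orders of magnitude slower per call - which is precisely the hit this principle asks you to accept with your eyes open.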

Now that we've described the performance issues, we have to ask ourselves, "will the system perform worse?" And the answer is, "not necessarily". You see, for the last few decades we've been making our software systems more agile from a design/development perspective. When we went from C to C++, we took a performance/cost hit. When we went to virtual machines, we took a hit. When we moved to fully managed containers, we took a hit. And when we move to distributed web services, we will take another hit. This is intentional.

A fundamental notion that I.T. embraces is that we must increase developer productivity to enable "development at the speed of business". The new abstraction layers that increase developer agility have a cost - and that cost is system performance. However, there is no need to frame it as an "agility over performance" issue; rather, it is a "design time agility over runtime performance cost" issue. By this I mean we can continue to see the same levels of runtime performance, but it will cost us more in terms of hardware (and other performance-increasing techniques).

Warning: this principle isn't a license to go write fat, bloated code. Balancing design time agility and runtime performance cost is a delicate matter. Many I.T. shops have implicitly embraced the opposite view (Runtime Performance Cost over Design Time Agility). These shops must rethink their core architectural value system.

Summary: The principle is "Design Time Agility over Runtime Performance Cost". This means that with SOA:
1. You should expect your time-to-deliver index to improve
2. You should not expect runtime performance to get worse - but you should plan on actively resolving the performance issues
3. You should expect your performance-per-cost ratio to go down (the same runtime performance will cost more in hardware)
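
To put made-up numbers on the third point: if the service stack adds, say, 25% CPU overhead per transaction, then holding throughput and response times constant takes roughly 25% more hardware. Runtime performance stays flat; each unit of performance simply costs more to buy.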

posted by jeff | 5:25 AM


Tuesday, September 12, 2006

MomentumSI on Reuse  

We published this a few months back but I thought I'd republish it since reuse seems to be a hot topic these days...

http://www.momentumsi.com/SOA_Reuse.html


Feel free to use (or reuse) the graphics! If you choose not to REUSE the graphics, you can SHARE this page by passing someone a link ;-)

posted by jeff | 6:06 PM


Monday, September 11, 2006

Lessons from Planet Sabre  

Back in the early days of Momentum I did quite a bit of hands-on architectural consulting. One of my clients was Sabre, the travel information company. One day I was reassigned from another Sabre project (I'll tell that story another day) to a project in distress called "Planet Sabre".

Planet Sabre was an application that focused on the needs of the Travel Agent. It allowed an agent to book flights, cars, hotels, etc. As you might imagine, these booking activities look quite similar to ones that might be done over the web (Travelocity) or through the internal call center. Hence, they were great candidates for services (they had a good client-to-service ratio).

I was assigned as the chief architect over a team of about 30 designers and developers. (BTW, I was something like the 4th chief architect on the project.) The developers were pissed that they had received yet another architect to 'help them out'. Regardless, they were good sports and we worked together nicely.

At Sabre, the services were mostly written in TPF (think Assembler) and client developers were given a client-side library (think CLI). The service development group owned the services - the funding, the maintenance, and the user support. They worked off a shared schedule: requests came in, they prioritized them, and they knocked them out as they could.

The (consuming) application development groups would receive a list of the services that were available, as well as availability estimates for new services and for changes to existing ones. All services were available from a 'test' system for us to develop against.

So, what were the issues?

The reason the project was considered 'distressed' was poor performance. Sounds simple, eh? Surely the services were missing their SLAs, right? Wrong. The services were measured on their ability to process a request and send the result back over the wire to the client, and here the system performed according to its SLAs. The issue we hit was that the client machine was very slow, and the client-side VM, the payload parser, and the connection to the box (often a modem) were slow as well.

We saw poor performance because the service designers assumed that the network wouldn't be the bottleneck, nor would the client-side parser - both incorrect. The response messages from the service were fat, and deserialization was complex enough to drag the client system down. In addition, the client application would perform 'eager acquisition' of data to increase perceived performance. This was a fine strategy, except that it caused 'random' CPU spikes where an all-or-nothing download of data would occur (despite our best attempts to manipulate threads). From our point of view, we needed the equivalent of a 'database cursor' to more accurately control the streaming of data back to the client.

Lesson: Client/consumer capabilities will vary significantly. Understand the potential bottlenecks and design your services accordingly. Common remedies include streaming with throttling, boxcarring (batching requests), multi-granular message formats, cursors, and efficient client-side libraries.
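
To make the 'cursor' remedy concrete, here's a minimal Java sketch. It is my illustration, not Planet Sabre's actual design - FareService, FarePage, the 50-row pages, and the 20 ms pause are all invented. The idea is that the service hands back small, bounded pages through a cursor-style handle, and the client throttles how fast it advances, so there is no all-or-nothing download to spike the CPU.

import java.util.ArrayList;
import java.util.List;

public class CursorSketch {

    // What the service hands back instead of one all-or-nothing response:
    // a bounded page plus the position of the next one.
    record FarePage(List<String> fares, int nextOffset, boolean done) {}

    static class FareService {
        private final List<String> all = new ArrayList<>();

        FareService() {
            for (int i = 0; i < 1000; i++) all.add("FARE-" + i);
        }

        // Return at most pageSize rows starting at offset. The client,
        // not the service, decides how much it can swallow at once.
        FarePage fetch(int offset, int pageSize) {
            int end = Math.min(offset + pageSize, all.size());
            return new FarePage(List.copyOf(all.subList(offset, end)), end, end == all.size());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FareService service = new FareService();
        int offset = 0;
        while (true) {
            FarePage page = service.fetch(offset, 50);  // small, bounded chunks
            render(page.fares());                       // UI stays responsive
            if (page.done()) break;
            offset = page.nextOffset();
            Thread.sleep(20);  // throttle: spread the load instead of spiking the CPU
        }
    }

    static void render(List<String> fares) {
        System.out.println("rendered " + fares.size() + " fares");
    }
}

The same shape works over any wire protocol; the key design decision is that the client, not the service, controls the pace and the chunk size.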

The second lesson was more 'organizational' in nature. The 'shared service group' provided us with about 85% of all of the services we would need. For the remaining 15% we had two options - ask the shared services group to build them, or build them on our own. The last 15% weren't really that reusable - and in some cases were application specific - but they just didn't belong in the client. So, who builds them? In our case, we did. The thing is, we had hired a bunch of UI guys (in this case Java Swing) who weren't trained in designing services. They did their best - but you get what you pay for.

The next question was, who maintains the services we built? Could we move them to the shared services group? Well, we didn't know how to program in TPF, so we built them in Java. The shared services group was not equipped to maintain our services, so we did. No big deal - but now it's time to move the services into production. The shared services group had a great process for managing the deployment and operational processes around services that THEY built. But what about ours? Doh!

Lesson: New services will be found on projects and in some cases they will be 'non-shared'. Understand who will build them, who will maintain them and how they will be supported in a production environment.

Planet Sabre had some SOA issues, but all in all I found the style quite successful. When people ask me who the most advanced SOA shop is, I still say Sabre. They hit issues but stuck with it and figured it out. The project I discussed happened almost 10 years ago, yet I see the same issues at clients today.

Lesson: SOA takes time to figure out. Once you do, you'll never, ever, ever go back. If your SOA effort has already been deemed a failure, it only means that your organization didn't have the leadership to 'do something hard'. Replace them.

posted by jeff | 5:25 AM
