Saturday, September 23, 2006

Reference Architecture Models


With the OASIS RM 1.0 up for vote, I've found myself discussing RM/RA vocabulary again... I dug up a Momentum view of how we differentiate between the views and their usage.

People have told me that the OASIS Reference Model has not been helpful in creating their customer-specific Reference Architecture. Well, that doesn't surprise me - it wasn't meant to be used that way. A reference model establishes a scope, goals and a vocabulary of abstract concepts. It is used by professionals who develop Conceptual Reference Architectures.

In RA land there is a process of moving from the very abstract to the somewhat concrete. It is an evolutionary process; however, if you skip layers you'll probably find yourself confused. Professionals from OASIS have informed me that they lost some contributors in the process of creating their specification. That doesn't surprise me either - RMs are theoretical work and not for everyone. I have also been told that they intend to publish a user's guide, which hopefully will set the stage for the RM's use patterns and anti-patterns.

Sunday, September 17, 2006

Inside-out or Outside-in

Just a snippet from a recent email:

Sun created a platform when they wrote an aggregated set of specifications. They made their specifications API-centric. It was an inside-out view. The next-generation platform must be an outside-in view centered on protocols, formats, identifiers and service descriptions. It MUST be written using RFC 2119 keywords - think (hypothetical example) "a conforming service MUST publish a service description and MUST accept messages that conform to the WS-I Basic Profile."


SOA, BPM, AJAX, etc. require a new integrated outside-in standardized platform. Until then, expect limited adoption or alternatively, enterprise rework.

SOA Acquisitions (revised list)

I've updated the list of acquisitions in the SOA space:
http://www.momentumsi.com/SOA_Acquisitions.html

A few interesting notes:
1. I couldn't be more disappointed in Cisco and their lack of acquisitions. AON failed (past tense). This could go down as one of the biggest blunders in software / hardware history. IMHO, Cisco should have revisited their executive leadership around AON a long time ago.

2. I've taken Service Integrity off the list. It appears as though they've shut down and didn't move the IP. This is disappointing as well - from what I've heard, many of the SOA ISVs were never even notified that the IP was up for sale.

3. WebMethods acquired Infravio. HP/Mercury/Systinet couldn't be happier. WEBM stock has been in the gutter for a long time. Infravio has a great product and a great team. It will be interesting to see if WEBM realizes that they need to move aside and let the Infravio team run their SOA direction.

4. I'm keeping SOA Software on both the 'buy side' and the 'sell side'. These guys have put together a pretty interesting package that keeps them in the pure-play SOA infrastructure space. It's too clean. Someone will grab them.

5. I added a couple of 'client-side' guys to the list: ActiveGrid and AboveAll Software.

6. I added RogueWave (strong SCA/SDO story); I will probably add some more SDO / data service providers in the near future.

7. I added testing specialists iTKO and Parasoft (although I don't know who will buy them).

8. I added Logic Library. With the Flashline acquisition, these guys become an obvious target.

Shai on Enterprise SOA

An excellent article identifies Shai's take on what the enterprise needs to do to prepare for enterprise SOA and the next generation SAP platform:

http://www.sda-asia.com/sda/features/psecom,id,595,srn,2,nodeid,4,_language,Singapore.html

Saturday, September 16, 2006

BPM 2.1.4.7.3.5.7.43.2.6.8.9.4.3.5.8.4.1

I noticed some guys blogging about BPM 2.0. It reminds me of some thoughts I authored on the subject a few years ago.

http://www.looselycoupled.com/opinion/2003/schnei-bp0929.html

So - I hate the tag line BPM 2.0; it's so 2003. Ok, shame on me.

Here's why... it isn't about driving the software solution from a single angle. It isn't about the PROCESS. It isn't about the USER INTERFACE. It isn't about the SERVICES. It isn't about MODEL DRIVEN. It's about integrating all of these concepts without your head exploding.

BPM 2.0 places too much emphasis on the process.
Web 2.0 places too much emphasis on the UI and social aspects.
UML 2.0 ;-) places too much emphasis on making models.
SOA 2.0 places too much emphasis on the services.
Enterprise 2.0 places too much emphasis on... hell, being a marketing term.
Did I forget any?

Design Time Agility over Runtime Performance Cost

Last week we were working with a client to define their core architectural principles. We had listed, "Agility over Performance" and this created substantial debate. The first question was, "why were services going to perform slower?" - and, couldn't we have "Agility and Performance"?

On occasion architects will get lucky and find new technologies and approaches that don't conflict. In my experience that is the exception, not the norm. More common is the need to resolve competing interests such as 'agility' and 'performance'. And when they do compete, it is the job of the architect to give guidance.

The services found in your environment will likely fall victim to two primary performance-degrading elements:
1. They will be remote and will suffer from all of the performance issues associated with distributed computing.
2. They will likely use a fat stack to describe the services, like the WS-I Basic Profile (the sketch below gives a feel for the overhead).
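
To make the 'fat stack' point a little more concrete, here is a minimal, illustrative Java sketch - my own made-up example, not an actual WS-I Basic Profile message - comparing the bytes needed for a raw integer with the same value wrapped in a SOAP-style envelope:

import java.nio.charset.StandardCharsets;

// Illustrative only: the envelope below is hand-rolled and hypothetical,
// not produced by any real SOAP stack.
public class PayloadOverhead {
    public static void main(String[] args) {
        int balance = 42;

        // Raw binary representation of an int: 4 bytes.
        int rawBytes = Integer.BYTES;

        // The same value wrapped in a minimal SOAP-style envelope.
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" +
            "<getBalanceResponse><balance>" + balance + "</balance></getBalanceResponse>" +
            "</soap:Body>" +
            "</soap:Envelope>";
        int envelopeBytes = envelope.getBytes(StandardCharsets.UTF_8).length;

        System.out.println("Raw payload:        " + rawBytes + " bytes");
        System.out.println("SOAP-style payload: " + envelopeBytes + " bytes");
    }
}

Multiply that kind of overhead (plus the parsing) across every call made by a chatty consumer and the performance concern becomes obvious.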

Now that we've described the performance issues we have to ask ourselves, "will the system perform worse?" And the answer is, "not necessarily". You see, for the last few decades we've been making our software systems more agile from a design/develop perspective. When we went from C to C++ we took a performance/cost hit. When we went to virtual machines we took a hit. When we moved to fully managed containers we took a hit. And when we move to distributed web services we will take another hit. This is intentional.

A fundamental notion that I.T. embraces is that we must increase developer productivity to enable "development at the speed of business". The new abstraction layers that increase developer agility have a cost - and that cost is system performance. However, there is no need to frame it as an "agility over performance" issue; rather, it is a "system agility over performance cost" issue. By this I mean we can continue to see the same levels of runtime performance, but it will cost us more in terms of hardware (and other performance-increasing techniques). Warning: this principle isn't a license to go write fat, bloated code. Balancing design-time agility and runtime performance cost is a delicate matter. Many I.T. shops have implicitly embraced the opposite view (Runtime Performance Cost over Design Time Agility). These shops must rethink their core architectural value system.

Summary: The principle is, "Design Time Agility over Runtime Performance Cost". This means that with SOA,
1. You should expect your time-to-deliver index to get better
2. You should not expect runtime performance to get worse. Instead, you should plan on resolving the performance issues.
3. You should expect (performance/cost) to go down - that is, the same runtime performance will cost you more in hardware.

Tuesday, September 12, 2006

MomentumSI on Reuse

We published this a few months back but I thought I'd republish it since reuse seems to be a hot topic these days...

http://www.momentumsi.com/SOA_Reuse.html


Feel free to use (or reuse) the graphics! If you choose not to REUSE the graphics you can SHARE this page by passing someone a link ;-)

Monday, September 11, 2006

Lessons from Planet Sabre

Back in the early days of Momentum I did quite a bit of hands-on architectural consulting. One of my clients was Sabre, the travel information company. One day I was reassigned from another Sabre project (I'll tell that story another day) to a project in distress called "Planet Sabre".

Planet Sabre was an application that focused on the needs of the Travel Agent. It allowed an agent to book flights, cars, hotels, etc. As you might imagine, these booking activities look quite similar to ones that might be done over the web (Travelocity) or through the internal call center. Hence, they were great candidates for services (they had a good client-to-service ratio).

I was assigned as the chief architect over a team of about 30 designers and developers. (BTW, I was something like the 4th chief architect on the project.) The developers were pissed that they received yet another architect to 'help them out'. Regardless, they were good sports and we worked together nicely.

At Sabre, the services were mostly written in TPF (think Assembler) and client developers were given a client side library (think CLI). The service development group owned the services (funded, maintained and supported users). They worked off a shared schedule - requests came in, they prioritized them and knocked them out as they could.

The (consuming) application development groups would receive a list of services that were available as well as availability estimates for new services and changes to existing services. All services were available from a 'test' system for us to develop off of.

So, what were the issues?

The project was considered 'distressed' due to poor performance. Sounds simple, eh? Surely the services were missing their SLAs, right? Wrong. The services were measured on their ability to process the request and send the result back over the wire to the client. Here, the system performed according to its SLAs. The issue we hit was that the client machine was very slow, and the client-side VM and payload parser were slow, as was the connection to the box (often a modem).

We saw poor performance because the service designers assumed that the network wouldn't be the bottleneck, nor would the client-side parser - both incorrect. The response messages from the service were fat and the deserialization was too complex, causing the client system to perform poorly. In addition, the client application would perform 'eager acquisition' of data to increase performance. This was a fine strategy, except that it would cause 'random' CPU spikes where an all-or-nothing download of data would occur (despite our best attempts to manipulate threads). From our point of view, we needed the equivalent of the 'database cursor' to more accurately control the streaming of data back to the client.

Lesson: Client / consumer capabilities will vary significantly. Understand the potential bottlenecks and design your services accordingly. Common remedies include streaming with throttling, box-carring, multi-granular message formats, cursors and efficient client side libraries.
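
For what it's worth, here is a minimal sketch of the kind of cursor-style contract we were after - a consumer-controlled way to page results back rather than an all-or-nothing download. The names (FareSearchService, FareCursor, Fare) are hypothetical, and the real Planet Sabre services were TPF, not Java:

import java.util.List;

// Hypothetical sketch: a cursor-style service contract that lets each
// consumer throttle how much data comes back per call.
interface FareSearchService {
    // Opens a server-side cursor for a query; no heavy payload returned yet.
    FareCursor openCursor(String query);
}

interface FareCursor {
    // Pulls the next page; pageSize is the consumer's throttle, sized to its
    // own CPU, parser speed and connection (e.g. small pages over a modem).
    List<Fare> next(int pageSize);

    boolean hasMore();

    // Releases server-side resources once the consumer has seen enough.
    void close();
}

// Hypothetical result type.
class Fare {
    final String origin;
    final String destination;
    final int priceInCents;

    Fare(String origin, String destination, int priceInCents) {
        this.origin = origin;
        this.destination = destination;
        this.priceInCents = priceInCents;
    }
}

A slow client over a modem asks for pages of 10; a fast client on the LAN asks for pages of 1,000 - either way the consumer, not the service designer, decides how hard to push the pipe.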

The second lesson was more 'organizational' in nature. The 'shared service group' provided us with about 85% of all of the services we would need. For the remaining 15% we had two options - ask the shared services group to build them - or build them on our own. The last 15% weren't really that reusable - and in some cases were application specific - but they just didn't belong in the client. So, who builds them? In our case, we did. The thing is, we had hired a bunch of UI guys (in this case Java Swing), who weren't trained in designing services. They did their best - but, you get what you pay for. The next question was, who maintains the services we built? Could we move them to the shared services group? Well, we didn't know how to program in TPF so we built them in Java. The shared services group was not equipped to maintain our services so we did. No big deal - but now it's time to move the services into production. The shared services group had a great process for managing the deployment and operational processes around services that THEY built. But what about ours? Doh!

Lesson: New services will be found on projects and in some cases they will be 'non-shared'. Understand who will build them, who will maintain them and how they will be supported in a production environment.

Planet Sabre had some SOA issues, but all in all I found the style quite successful. When people ask me who is the most advanced SOA shop, I'll still say Sabre. They hit issues but stuck with it and figured it out. The project I discussed happened almost 10 years ago yet I see the same issues at clients today.

Lesson: SOA takes time to figure out. Once you do, you'll never, ever, ever go back. If your SOA effort has already been deemed a failure, it only means that your organization didn't have the leadership to 'do something hard'. Replace them.