Service Oriented Enterprise


Saturday, December 07, 2002

Targeted Incrementalism?  

I read, "Net Gain" and thought it was a bunch of bull. Then I read, "Out of the Box", also by Hagel and was convinced that this was a well intentioned individual that just didn't get it. Now, I'm at that point where anything he says just kind of annoys me. His latest remarks in HBS Working Knowledge managed to do just that. He commented that:
"Web services replace "big bang" approaches with targeted incrementalism. The business proposition with this technology is much more compelling: Invest modest sums of money with relatively short lead-times (often six to twelve months) and generate tangible business benefits, particularly in the form of operating savings. In these challenging economic times, that's a powerful proposition.

"Targeted incrementalism"? Web services are a new breed of network but fall victim to the same old forces, i.e. Metcalfe's Law. For those of you who aren't familiar with this, it states, "the usefulness, or utility, of a network equals the square of the number of users." This law has been modified over the years to take into account the number and value of the resources on the network (i.e. available services). Now, you might be able to hook one business partner up to another with web services and claim that you saved x millions of dollars - but you probably could have done that with ANSI X.12 EDI as well. The value of web services is directly related to the number and value of the resources made available. And, my friends, this is a waiting game. Unfortunately, the Gartner Hype Cycle doesn't actually take into account Metcalfe's Law or the Tipping Point.

Now, Edwin Khodabakchian at Collaxa agrees with Hagel, commenting that "Incrementalism is the key to success. No doubt. The Web [the largest and most dynamic always-on application] was founded on incrementalism, allowing enterprises to build complex systems, one page at the time." Edwin has a valid point - you can start small and build your way up... just be prepared - the value increases with adoption and utility... just like any network. Don't plan on getting your ROI until you have a critical mass of valuable services available.

posted by jeff | 3:47 PM


Local Interfaces, Local Invocation  

SODA is largely about refactoring your component-based applications into service-based applications. This forces the developer to think about loose coupling and reusability.

When I talk with developers about the 'service-ification' of their application, the first thing they usually point out is that the performance will stink. They say that all of this serialization and deserialization between services is often unnecessary, wasting computing resources and making the application perform poorly. But I live in tomorrow-land, where this problem is resolved by the 'service engine'. Here is how it works:

If one service needs to call another service AND...
1. The two services exist in the same process, then...
... perform a regular method call - no serialization/deserialization, no network, all local. Use the heap.
2. The two services exist in different processes but on the same platform/language, then...
... perform the call using the native RPC mechanism (RMI, .Net Remoting, etc.)
3. The two services are on different platforms/languages, then...
... perform the call using SOAP over the network.

** You should be able to use ONE invocation model for all three of the aforementioned scenarios (a rough sketch of the idea follows).
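To make that concrete, here is a rough sketch of what a service engine's dispatch might look like. Every name in it (ServiceEngine, ServiceBinding, and friends) is invented for illustration - this is not the API of WSIF or any real product, just the general shape of the idea:

// Hypothetical sketch only: a single call site, three possible transports.
import java.util.Map;

interface ServiceBinding {
    Object invoke(String operation, Object[] args) throws Exception;
}

// Case 1: same process - a regular method call on the heap, no serialization.
interface LocalService {
    Object dispatch(String operation, Object[] args);
}

class LocalBinding implements ServiceBinding {
    private final LocalService target;
    LocalBinding(LocalService target) { this.target = target; }
    public Object invoke(String operation, Object[] args) {
        return target.dispatch(operation, args);
    }
}

// Case 2: different process, same platform - delegate to the native RPC (e.g. RMI).
class NativeRpcBinding implements ServiceBinding {
    public Object invoke(String operation, Object[] args) {
        throw new UnsupportedOperationException("RMI/.Net Remoting call would go here");
    }
}

// Case 3: different platform or language - serialize to SOAP and cross the network.
class SoapBinding implements ServiceBinding {
    public Object invoke(String operation, Object[] args) {
        throw new UnsupportedOperationException("SOAP call would go here");
    }
}

class ServiceEngine {
    private final Map<String, ServiceBinding> registry;  // service name -> best available binding
    ServiceEngine(Map<String, ServiceBinding> registry) { this.registry = registry; }

    // The caller always writes the same thing; the engine chose the binding at deployment time.
    Object call(String serviceName, String operation, Object... args) throws Exception {
        return registry.get(serviceName).invoke(operation, args);
    }
}

A caller would then write something like engine.call("CreditCheck", "score", customerId) - the same line of code whether CreditCheck lives in the same JVM, in the next process over, or on a .Net box across the wire.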

The key is that the actual invocation should look the same - syntactically identical - in all three situations. Java attempted to do this with RMI, but that obviously didn't hold true for the Java-to-.Net call. This dynamic resolution of invocation style has been evolving, and it appears to me that this is where IBM is starting to go with its WSIF (Web Services Invocation Framework). Here, the invocation is abstracted so that "developers can work with the same programming model regardless of how the Web service is implemented and accessed." Initially, I thought that WSIF was designed to get around the SOAP vs. REST dispute or to just abstract the developer from the JAX-RPC/JAXM stuff. But now I believe that the true value of WSIF is in its potential to be used in a SODA model, allowing the service engine to automatically select the underlying invocation implementation that performs best for the situation. Think HotSpot for services.

As we unglue our applications and refactor them into services, we rewrite many of the method invocations as service invocations. If it turns out that you deploy the services on the same box, the service engine will resolve the service invocation into a plain method call (performance problem solved). Did we add extra code? Yes, and it may run a bit slower, but in return we gained transparency.

Location transparency - platform transparency - language transparency - invocation model transparency. Bliss.

posted by jeff | 8:48 AM


 

An Introduction to SODA
(Coming Soon)

posted by jeff | 8:45 AM


Friday, December 06, 2002

You Know You're a Service Oriented Enterprise When....  

Foxworthy did a great job of letting people know exactly what constituted a redneck - the same is needed for the SOE ;-)

Your I.T. Department Supports SOE When:
- You demand that your EAI vendor use standardized integration logic like BPEL4WS.
- Your I.T. department has a role called the 'Service Architect'
- Your service architect uses an overly complicated rating system to assess the level of decoupling between all systems.
- Your developers have long debates on whether it should be a 'service' or a 'component'!
- You demand a list of WSDLs & orchestration scripts from any new package or ASP vendor before you buy.
- You believe that agility is more important than scalability.
- All the developers in I.T. have the "Eight Fallacies of Distributed Computing" memorized.
- The 'fine-grained vs. coarse-grained' argument resurfaces.
- It becomes uncool to talk about object-oriented design patterns, but cool to debate service-oriented design patterns
- You have an XML version of your job description and it's role based.
- You debate a business analyst on the difference between a 'business choreography' and a 'web service orchestration'.
- You really couldn't care less about the Java vs. C# debate or the .Net vs. J2EE debate - they'll both allow you to write your services...

You Know You're Succeeding as an SOE When:
- Business users create their own forms and use web services as the data source (e.g. XDocs, WSXL)
- The bottleneck for setting up a new trading partner is accounting or legal (not I.T.)
- The applications you use in an ASP model integrate with your internal back office systems.
- The majority of your ASP vendors integrate between each other via web services.
- Business users are afraid of losing their jobs to a "new level of automation".
- Your business processes drive your software rather than your software dictating a business process.
- The last of the I.T. group figures out that asynchronous document-based message passing WAS the way to go...
- You buy a pre-canned business process from a vendor.
- You work with your business partners to collaboratively design a new business process over the web in real time.
- You begin to view your orchestration scripts as your competitive advantage.

You're Scared of Your SOE When:
- Your poor performer fails his MBOs three quarters in a row and Monster.com is called via web services to begin the replacement process automatically.
- Your job performance evaluation is automatically transferred with you from company to company.
- You realize that the 'new level of automation' will enable your employer to expedite the outsourcing of your position to India.
- You're in I.T. and... you install a new software package and it dynamically discovers the other packages in your company and links to them without your help.

posted by jeff | 3:07 PM


Reusability Through Web Services  

I have been talking with people about the evolution of programming paradigms - moving from procedural designs to object-oriented, to component-based, and now to service-oriented. In each move, fundamental reasons can be found for why the developer community chose to make the transition. In looking at the evolution, it is apparent that a couple of common things drove it each time: 1) the desire to create small, manageable modules (division of labor), and 2) the desire to increase reusability.

In the early days of object-oriented programming, reuse held great promise, yet little came of it. A similar story can be told about component-based development (CBD).
Reusability through web services appears promising - but so did the others. Before saying why it will work this time, maybe it is worth identifying what the obstacles were in previous attempts. The first thing that pops to mind is that reuse in OO and CBD was tied to either a programming language or a vendor platform (e.g. COM, JavaBeans). It is clear that reuse will not work when there is a programming language dependency. It is equally clear that it won't work when the interface technology is promoted by a vendor minority.

Beyond language and interface dependencies, some other issues pop up:
- Reuse achieved only through an integrated framework (think Swing, AWT, etc.)
- Lack of knowledge on decoupling techniques
- Immature component registries

Web services will clearly take away the 'language' and 'platform' issues, but will the other problems be addressed as well?

posted by jeff | 12:23 AM


Tuesday, December 03, 2002

Next Generation of Programming in the Large  

I was attempting to explain Programming in the Large to a person who used to program. I roughly explained it as "the kind of stuff that you would have done with JCL, DOS batch files or REXX. It is the stuff that links modules together in a loose fashion. It redirects data from one module to another - piping information from the output of X to the input of Y."

Web service orchestration is (IMHO) a much more elegant way to interconnect modules (or services) than many of the aforementioned technologies. Yet, in many ways it has features similar to its predecessors. Some of the new stuff includes cross-platform interfaces, location transparency for modules or services, strong support for asynchronous messaging, and the concept of abstract workflows or business processes. You see, it has matured!
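As a toy illustration of the piping idea (the names are invented, and a real orchestration language would express this declaratively rather than in Java):

// Toy illustration of "programming in the large" as piping:
// the output of module X becomes the input of module Y.
// A real orchestration language (BPEL4WS, etc.) would describe this
// declaratively, outside of either module.
import java.util.function.Function;

public class PipeSketch {
    public static void main(String[] args) {
        Function<String, String> lookupCustomer = id -> "customer-record-for-" + id;            // module X
        Function<String, String> scoreCredit = customerRecord -> "credit-score-for-" + customerRecord;  // module Y

        // The interconnection layer: wire X's output into Y's input.
        Function<String, String> process = lookupCustomer.andThen(scoreCredit);

        System.out.println(process.apply("42"));
    }
}

The interesting bit is that neither module knows about the other - only the wiring does.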

posted by jeff | 10:03 AM


Monday, December 02, 2002

Orchestration in the Large versus Services in the Small  

My recent thoughts on 'refactoring' and 'collective code ownership' reminded me of the seminal white paper, "Programming in the Large versus Programming in the Small". I just went back and reread the document - which was a bit depressing. The authors, DeRemer and Kron, articulate a problem that continues to haunt development shops today: recognizing the difference between programming small modules and integrating those modules into large systems. Every once in a while I feel like my profession (software engineering) is making progress; then I read a paper like this one and realize that we are still working on problems that were clearly described in 1976.

For those of you who aren't familiar with the work, here are a few highlights: "We distinguish the activity of writing large programs versus the activity of writing small ones. By large programs, we mean systems consisting of many small programs (modules), possibly written by different people. We need languages for programming-in-the-small, i.e. not unlike the common programming languages of today, for writing modules. We also need a 'module interconnection language' for knitting modules together into an integrated whole and for providing an overview that formally records the intent of the programmer(s) and that can be checked for consistency by a compiler."

The authors go on to identify "languages for programming-in-the-small (LPSs)" as well as a "module interconnection language (MIL)". The LPS is described as a 3GL, while the MIL takes on a more abstract definition: "An MIL should provide a means for the programmer(s) of a large system to express their intent regarding the overall program structure in a concise, precise, and checkable form."

Moving this discussion into the 21st century, it is easy to recognize that the world of web services is tackling the dual-language challenge. For programming in the small, web services encourage you to use your favorite tightly bound language (Java, C#, etc.). For programming in the large, web services encourage you to use a loosely coupled orchestration language such as BPEL4WS.
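To give a flavor of what a 'checkable' interconnection description might look like in miniature (all names are invented; a real MIL, or a BPEL4WS process, would be far richer):

// Toy "module interconnection" description, kept separate from the modules themselves.
// The names are made up; the point is that the wiring is data that can be checked.
import java.util.List;
import java.util.Set;

public class WiringSketch {
    record Wire(String fromModule, String toModule) {}

    public static void main(String[] args) {
        Set<String> knownModules = Set.of("OrderEntry", "CreditCheck", "Fulfillment");
        List<Wire> wiring = List.of(
                new Wire("OrderEntry", "CreditCheck"),
                new Wire("CreditCheck", "Fulfillment"));

        // The "checked for consistency by a compiler" part, in miniature:
        // every wire must reference modules that actually exist.
        for (Wire w : wiring) {
            if (!knownModules.contains(w.fromModule()) || !knownModules.contains(w.toModule())) {
                throw new IllegalStateException("Unknown module in wire: " + w);
            }
        }
        System.out.println("Interconnection description is consistent.");
    }
}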

Concluding with thoughts from the authors,
"In summary, then, an MIL should:
(1) encourage the structuring of a system before starting to program the details;
(2) encourage the programming of modules assuming a correct environment, but without knowledge of the irrelevant details of that environment;
(3) encourage system hierarchy, while allowing flexible, if disciplined connections between modules;
(4) encourage information hiding and the construction of virtual machines, i.e. subsystems whose internal structure is hidden, but which provide desired resources; and
(5) encourage descriptions of module interconnectivity that are separate from the descriptions of the modules themselves."

I'm a huge fan of creating modern module interconnection languages (MIL75 is a bit dated). Although it is too early to tell if languages like BPEL4WS will survive, I sincerely hope that a new generation of programmers is introduced to the concept of programming in the large. More stuff here.

posted by jeff | 2:50 AM


Sunday, December 01, 2002

Martin Fowler on Flexibility and Complexity  

Bill Venners (JVM author & Jini guru) had some great discussions with Martin Fowler (OOD author) around designing applications for today rather than tomorrow. This is a core belief held by the XP community, but one that I have struggled to accept.

Fowler states, "The cost of flexibility is complexity. Every time you put extra stuff into your code to make it more flexible, you are usually adding more complexity. If your guess about the flexibility needs of your software is correct, then you are ahead of the game. You've gained. But if you get it wrong, you've only added complexity that makes it more difficult to change your software. You're obviously not getting the payback."

This is a 'no shit' kind of comment, but one worth inspecting. The opposite of creating flexible, reusable code is to create rigid, simple one-offs. The side effect of one-offs is a larger code base. Thus, the maintenance effort deals with the MASS of the code rather than the COMPLEXITY of the code... an interesting trade-off. Now, Fowler would argue that the mass of the code would be decreased on a just-in-time basis through the magic of refactoring. And he would be right if we were only dealing with a single system, or a small set of systems, where Collective Code Ownership is possible. When we move into enterprise systems where hundreds of applications exist (with millions of lines of code), it becomes very hard to ascertain whether a piece of functionality should be coded in a flexible way. The Fowler (and XP) solution is to throw in the towel and claim that it is too hard.

So in response to Fowler's comment, "The cost of flexibility is complexity", I have identified a few additional quips:
- The cost of inflexibility is increased mass in the code base.
- An increased code base leads to additional costs in enhancements and quality.
- Refactoring is a solution to 'flexibility is complexity' but only when Collective Code Ownership is possible.
- The return on flexibility increases as the reusability of the functions increases.
- Reusability increases as the FUNCTIONALITY is neutralized.
- Functionality is neutralized by making it independent of the operating system, network, programming language, hardware and physical location.
- Service Oriented Architectures that neutralize (like Web Services, not Jini) increase reusability on a linear scale that is significantly greater than anything we have seen to date.

The statements that Fowler made were correct for an object-oriented world but do not hold true in a neutralized, service-oriented world.

By the way, if you haven't read Fowler's work and you're an OO person, you are really missing out - it is wonderful. His book on refactoring is the standard. Other strong works include his books on XP and analysis patterns. Bill also did quality work on the JVM.

posted by jeff | 7:59 AM


Tyler Jewell on JCA versus Web Services  

I just ran across this article identifying the differences between JCA and Web Services. The lack of thought scared me - it really scared me since it was the BEA evangelist who was doing the speaking.

Tyler believes that, "The Biggest Difference Is Intrusion" - I'm guessing that Tyler hasn't spent much time with the BEA team that is working on web services.

Should I expect my business partners to integrate their processes via JCA? Should I expect business dialects to be created based on JCA? Is JCA a loosely coupled programming paradigm? Should I orchestrate JCAs? Is JCA the preferred method to connect to .Net? What if I don't even use Java - is it still a good idea?

Intrusion???

posted by jeff | 7:21 AM
