Saturday, December 23, 2006

The Blog Tag

I've been blog tagged by Paul Brown and Brenda Michaelson. So, here it goes... 5 things you probably didn't know about me:

1. I constantly listen to music - favoring "classic punk rock" and gospel... an odd combination.

2. Before I got fat, I was a long distance runner and a racquetball instructor.

3. Initial funding for MomentumSI came from personal savings and a $40k loan from my mom (10k at a time). "Uhhh... Mom... any chance..."

4. I intend to write at least one more book - most likely on "strategy digitization".

5. I was a double major in Computer Science and Psychology with the intent to pursue artificial intelligence.

I know I'm supposed to tag 5 more people, but it's the holiday season... :-)

Wednesday, December 13, 2006

New Podcast on SAP and SOA

Scott Campbell, SAP ESA guru, speaks on the state of SAP and SOA in the enterprise:
http://searchsap.techtarget.com/originalContent/0,289142,sid21_gci1234356,00.html


It's an excellent overview of the current state and future roadmap.

Friday, December 08, 2006

The Jon Udell Challenge

Jon Udell of InfoWorld has announced a career change. He is going to work for Microsoft. When I heard this I almost fainted. Jon at Microsoft? And then I remembered a conversation that I had with him a few months back. He spoke of 'effecting change'. His position was that the divide between the turbo geeks and the average consumer of technology has grown to an unacceptable level. We, the software development community, have gotten so wrapped up in the technology that we forgot about who it serves and why.

For Jon, I am happy - he deserves good things in life. For myself, I am sad. There is a part of me that enjoys going to read about Jon's latest geek adventure. I realize that I do so for one reason. He is so incredibly smart that it makes me feel stupid. That feeling of stupidity motivates me to learn more. I really hope that Jon doesn't become one of those mindless Microsoft snobs who views the world from a purely Microsoft perspective. This might sound insulting, but I've lost too many friends to the Microsoft brainwashing machine. It has taken down some good men.

That said, I am issuing a challenge to Jon:
1. Make a difference at Microsoft. Create a list of ten things that Microsoft has to change and then be ruthless in evangelizing what Microsoft must do to remedy their issues. Keep the list public and monitor the progress.

2. Eat your own dog food. If you are going to evangelize a new Microsoft technology, first show me how Microsoft uses it internally.

3. Be a good citizen. If you introduce a new concept, show me how it 'bridges cultures', as you mention in your podcast. The legacy of 'embrace and extend' will rightfully haunt Microsoft.


Congratulations to Jon for the new position. More importantly, congratulations to Microsoft for adding a team member who has the ability to actually make a difference.

Sunday, November 19, 2006

Linthicum on SOA Costs

David seems to have a formula for the cost of SOA:

Cost of Data Complexity = (((Number of Data Elements) * Complexity of the Data Storage Technology) * Labor Units)

Number of Data Elements being the number of semantics you're tracking in your domain, new or derived.
Complexity of the Data Storage Technology, expressed as a percentage between 0 and 1 (0% to 100%). For instance, Relational is a .3, Object-Oriented is a .6, and ISAM is a .8.

So, at $100 a labor unit, or the amount of money it takes to understand and refine one data element, we could have:

Cost of Data Complexity = (((3,000) * .5) * $100)

Or, Cost of Data Complexity = $150,000 USD. Or, the amount of money needed to both understand and refine the data so it fits into your SOA, which is a small part of the overall project, by the way.


------------------

I can't speak for David - but I can promise you, this is not how we estimate SOA efforts. I'm not even sure what David is attempting to estimate. This post has me so confused... I'd recommend deleting that post - real soon.
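For what it's worth, the arithmetic in the quoted post does check out. Here's a literal transcription of David's formula as stated - the function and variable names are mine, and this is not an endorsement of the model:

```python
# Literal transcription of the quoted formula:
#   cost = number_of_data_elements * storage_complexity * labor_unit_cost

# Complexity weights exactly as given in the quote.
COMPLEXITY = {"relational": 0.3, "object-oriented": 0.6, "isam": 0.8}

def cost_of_data_complexity(num_elements, complexity, labor_unit_cost=100):
    """Dollars to 'understand and refine' the data, per the quoted model."""
    return num_elements * complexity * labor_unit_cost

# The worked example from the post: 3,000 elements at 0.5 complexity, $100/unit.
print(cost_of_data_complexity(3000, 0.5))  # 150000.0
```

Note that the example uses a 0.5 complexity that doesn't correspond to any of the three listed storage technologies, which is part of what makes the post confusing.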

Saturday, November 11, 2006

WebLayers Executes Asset Governance

Last week I saw the latest demo from WebLayers on their 'Policy Based Governance Suite'. The demo hit home for one simple reason. They've done a great job of laying out the extended SDLC from a roles / assets perspective and determining 'what' needs to be governed at each stage.

Most of the vendors in the space have approached the governance problem from a registry perspective, which is an important aspect, but not exactly a holistic view. WebLayers takes a methodology / lifecycle perspective. Their tooling allows you to plug in your own process with roles (Business Analysis, Application Architecture, Service Design, etc.) and identify 'what' needs to be governed in each area - then, define the policies for each asset or artifact.
Example: The Design Stage includes a "Service Designer"; this person creates a "WSDL"; and all WSDLs have a policy that "Namespaces must be used".
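To make the example concrete, here's a minimal sketch of what a "Namespaces must be used" policy check might look like. The function name and checks are my own illustration, not WebLayers' actual product:

```python
import xml.etree.ElementTree as ET

# Hypothetical policy check: a WSDL passes if its root element is
# namespace-qualified and declares a targetNamespace.
def check_namespace_policy(wsdl_text):
    root = ET.fromstring(wsdl_text)
    # ElementTree renders a namespaced tag as "{uri}local".
    has_ns = root.tag.startswith("{")
    has_target = "targetNamespace" in root.attrib
    return has_ns and has_target

good_wsdl = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
    targetNamespace="http://example.com/orders"/>"""
bad_wsdl = "<definitions/>"

print(check_namespace_policy(good_wsdl))  # True
print(check_namespace_policy(bad_wsdl))   # False
```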

They do this by using an interceptor model. In essence, they've created a 'governance bus'. WebLayers provides intermediaries that sit between the asset creation tool (schema designer, IDE, etc.) and the repository that will store the asset (version control, CMDB, etc.). This allows their tool to inspect newly created assets just after they've been created, but before they've been sent to production. The policies are applied to the assets and the results (pass, fail, etc.) are displayed to the author.
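The interceptor model can be sketched in a few lines. Everything here - the class name, the policy shape, the repository - is illustrative only, not WebLayers' implementation:

```python
# Sketch of an interceptor on a 'governance bus': policies run on an
# asset after authoring, before it reaches the repository.

def no_todo_markers(asset_text):
    """Toy policy: reject assets with unresolved TODO markers."""
    return "TODO" not in asset_text

class GovernanceInterceptor:
    def __init__(self, policies, repository):
        self.policies = policies      # list of (name, check) pairs
        self.repository = repository  # anything with .append()

    def submit(self, asset_name, asset_text):
        results = {name: check(asset_text) for name, check in self.policies}
        if all(results.values()):
            # Only compliant assets reach the repository.
            self.repository.append((asset_name, asset_text))
        return results  # pass/fail report shown to the author

repo = []
bus = GovernanceInterceptor([("no TODOs", no_todo_markers)], repo)
report = bus.submit("order.wsdl", "<definitions/>")
print(report)  # {'no TODOs': True}
```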

I've been calling this type of governance "Asset Governance" because the emphasis is on looking at the final output that is created and determining if it complies with enterprise policies. IMHO, Asset Governance is an essential component of any SOA program that utilizes an offshore element ("WSDL is the Offshore Contract").

The product was lighter on the other two types of Governance that I look for: Process Governance and Portfolio Governance. We sum it up like this:
- Portfolio Governance focuses on finding the right problem (prioritization)
- Process Governance focuses on ensuring that all the right steps are taken
- Asset Governance focuses on ensuring that the output of each step complies with policy

I talked with the WebLayers team about the other two types of governance and received feedback that traditional I.T. Governance & Project Management packages might solve the problem (see http://www.niku.com/). Most of these vendors built their products prior to the SOA era and have not gone back and revisited the functionality. They have not killed the "application as the unit of work" and moved to "the service as the unit of work", nor have they updated ROI formulas based on "shared services" (thus reducing investment, increasing ROI).

IMHO, the SOA Governance space will eventually find a nice intersection that includes both classic I.T. Governance, and the more modern "asset & process governance". It will be interesting to see which of the vendors will have the courage to tackle the end-to-end governance problem.

Thursday, November 09, 2006

Rejected Four Years Ago...

I was at the Infoworld SOA event this week and someone asked me about the use of ontologies in SOA. I haven't been asked about SOA ontologies in a LONG time... I had an immediate flashback to getting rejected by Web Services Journal to write an article on the subject...


------------------
Hi Jeff,
I sent your proposal to Sean Rhody for his review. At this time, we regret that we will be unable to accept this article for the magazine.
Gail



Gail, here is the abstract:
=====================================
“Semantic Web Services”
The desire for computers to easily communicate has long been a goal of both computer scientists and businessmen, the latter recognizing the financial gain of seamless systems integration. Over time, this goal has been recognized through network standards like Ethernet, TCP/IP and HTTP. More recently the standardization has moved up the protocol stack. Now, XML is being used to add structure through tagging, which facilitates concept delineation and enumeration. Web Services build on this foundation and enhance the communication through additional features including object serialization/deserialization (SOAP), service registries (UDDI) and standardized service interfaces (WSDL).

Yet even with these advances computers still aren’t aware of the meaning of the text that is being sent, nor are they able to make any reasonable inferences about the data. Tim Berners-Lee and the W3C have been tackling this problem through an initiative dubbed the “Semantic Web”. This initiative uncovers the semantic meanings of transactions allowing companies to use a common dictionary and also enabling like terms to be disambiguated (“Automobile == Car”).

The use of the Semantic Web for concept delineation and Web Services for interoperability is enabling a new breed of applications known collectively as “Semantic Web Services”. This article will explore the state of semantic ontologies, business grammars and emerging commercial products.


=====================================
Ok, the year was 2002, and I did refer to SOAP as an object serialization mechanism - perhaps it was appropriate for them to reject the article ;-) Now that we're approaching 2007, I believe that we'll start to hear more and more on this subject - who knows, maybe I'll resubmit the abstract!

Friday, October 20, 2006

CIO Paul Coby is Waiting for VHS

British Airways CIO Paul Coby is quoted as saying, "We don't want to invest in Betamax when VHS becomes the standard. There's no point in BA trying to go its own way on that. We will wait and see what standards emerge." (referring to SOA)

ROTFL.

Uh, Paul - the VHS version of SOA came out 5 years ago. And SOA isn't about the stupid standards - it's a way of integrating business and I.T.

Sunday, October 01, 2006

OASIS SOA-RM Passes

The news is out:
The ballots for approval of Reference Model for Service Oriented Architecture v1.0 as an OASIS Standard (announced at [1]) has closed. There were sufficient affirmative votes to approve the specification. However, because there was a negative vote, the SOA Reference Model Technical Committee must decide how to proceed, as provided in the OASIS TC Process, at http://www.oasis-open.org/committees/process.php#3.4. A further announcement will be made to this list regarding their disposition of the vote.


IMHO, the real value of this passing is that people can quit working on it. The RM is an abstract document that is used by professional conceptual RA developers (of which there are about 10 in the world). This particular standard was... very popular with the people who wrote it... and not so popular with everyone else.

Initially, MomentumSI was the sole No Vote, but we pulled the vote so that the committee could just put it to bed and move on. Unfortunately, someone else voted no saying that the RM was so generic that it served no purpose. Well, they do have an interesting point... here's an interesting test, see if Client/Server architecture passes the SOA litmus test as defined by OASIS... Hmmm....

Again - 'architecture by committee' is a hard thing to do. I don't envy these guys. The next test that these guys have is to make up their mind on the RA. Will it be a Conceptual RA or a Profile Based RA?

Saturday, September 23, 2006

Reference Architecture Models


With the OASIS RM 1.0 up for vote, I've found myself discussing RM/RA vocabulary again... I found a Momentum view of how we differentiate between views and their usage.

I've heard people tell me that the OASIS Reference Model has not been helpful in creating their Customer Specific Reference Architecture. Well, that doesn't surprise me - it wasn't meant to be used in that way. A reference model establishes a scope, goals and a vocabulary of abstract concepts. It is used by professional Conceptual RA developers.

In RA land there is a process of moving from very abstract to somewhat concrete. It is an evolutionary process. However, if you skip layers you'll probably find yourself confused. Professionals from OASIS have informed me that they've lost some contributors in the process of creating their specification. It doesn't surprise me - RM's are theoretical work and not for everyone. I have also been told that they intend to publish a users guide which hopefully will set the stage for its use patterns and anti-patterns.

Sunday, September 17, 2006

Inside-out or Outside-in

Just a snippet from a recent email:

Sun created a platform when they wrote an aggregated set of specifications. They made their specifications API centric. It was an inside-out view. The next generation platform must be an outside-in view centered on protocols, formats, identifiers and service descriptions. It MUST be written using RFC 2119 format.


SOA, BPM, AJAX, etc. require a new integrated outside-in standardized platform. Until then, expect limited adoption or alternatively, enterprise rework.

SOA Acquisitions (revised list)

I've updated the list of acquisitions in the SOA space:
http://www.momentumsi.com/SOA_Acquisitions.html

A few interesting notes:
1. I couldn't be more disappointed in Cisco and their lack of acquisitions. AON failed (past tense). This could go down as one of the biggest blunders in software / hardware history. IMHO, Cisco should have revisited their executive leadership around AON a long time ago.

2. I've taken Service Integrity off the list. It appears as though they've shut down and didn't move the IP. This is disappointing as well - from what I've heard, many of the SOA ISV's were never even notified that the IP was up for sale.

3. WebMethods acquired Infravio. HP/Mercury/Systinet couldn't be happier. WEBM stock has been in the gutter for a long time. Infravio has a great product and a great team. It will be interesting to see if WEBM realizes that they need to move aside and let the Infravio team run their SOA direction.

4. I'm keeping SOA Software on both 'buy side' and 'sell side'. These guys have put together a pretty interesting package that keeps them in the pure play SOA infrastructure space. It's too clean. Someone will grab them.

5. I added a couple of 'client-side' guys to the list: ActiveGrid and AboveAll Software.

6. I added RogueWave (strong SCA/SDO story). I will probably add some more SDO / data service providers in the near future.

7. I added testing specialists iTKO and Parasoft (although I don't know who will buy them).

8. I added Logic Library. With the Flashline acquisition, these guys become an obvious target.

Shai on Enterprise SOA

An excellent article identifies Shai's take on what the enterprise needs to do to prepare for enterprise SOA and the next generation SAP platform:

http://www.sda-asia.com/sda/features/psecom,id,595,srn,2,nodeid,4,_language,Singapore.html

Saturday, September 16, 2006

BPM 2.1.4.7.3.5.7.43.2.6.8.9.4.3.5.8.4.1

I noticed some guys blogging about BPM 2.0. It reminds me of some thoughts I authored on the subject a few years ago.

http://www.looselycoupled.com/opinion/2003/schnei-bp0929.html

So - I hate the tag line BPM 2.0; it's so 2003. Ok, shame on me.

Here's why... it isn't about driving the software solution from a single angle. It isn't about the PROCESS. It isn't about the USER INTERFACE. It isn't about the SERVICES. It isn't about MODEL DRIVEN. It's about integrating all of these concepts without your head exploding.

BPM 2.0 places too much emphasis on the process.
Web 2.0 places too much emphasis on the UI and social aspects.
UML 2.0 ;-) places too much emphasis on making models.
SOA 2.0 places too much emphasis on the services.
Enterprise 2.0 places too much emphasis on... hell, being a marketing term.
Did I forget any?

Design Time Agility over Runtime Performance Cost

Last week we were working with a client to define their core architectural principles. We had listed, "Agility over Performance" and this created substantial debate. The first question was, "why were services going to perform slower?" - and, couldn't we have "Agility and Performance"?

On occasion architects will get lucky and find new technologies and approaches that are not conflicting. It has been my experience that this is the exception, not the norm. More common is the need to resolve competing interests such as 'agility' and 'performance'. And when they do compete, it is the job of the architect to give guidance.

The services found in your environment will likely be subject to two primary performance-degrading elements:
1. They will be remote and will fall victim to all of the performance issues associated with distributed computing
2. They will likely use a fat stack to describe the services, like the WS-I Basic Profile.

Now that we've described the performance issues we have to ask ourselves, "will the system perform worse?" And the answer is, "not necessarily". You see, for the last few decades we've been making our software systems more agile from a design/develop perspective. When we went from C to C++ we took a performance/cost hit. When we went to virtual machines we took a hit. When we moved to fully managed containers we took a hit. And when we move to distributed web services we will take another hit. This is intentional.

A fundamental notion that I.T. embraces is that we must increase developer productivity to enable "development at the speed of business". The new abstraction layers that enable us to increase developer agility have a cost - and that cost is system performance. However, there is no need to say that it is an "agility over performance" issue; rather, it is a "design time agility over runtime performance cost" issue. By this I mean we can continue to see the same levels of runtime performance, but it will cost us more in terms of hardware (and other performance increasing techniques).

Warning: This principle isn't a license to go write fat, bloated code. Balancing design time agility and runtime performance cost is a delicate matter. Many I.T. shops have implicitly embraced the opposite view (Runtime Performance Cost over Design Time Agility). These shops must rethink their core architectural value system.

Summary: The principle is, "Design Time Agility over Runtime Performance Cost". This means that with SOA,
1. You should expect your time-to-deliver index to get better
2. You should not expect runtime performance to get worse. Instead, you should plan on resolving the performance issues.
3. You should expect (performance/cost) to go down
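Point 3 is just arithmetic: hold performance constant, pay more for hardware, and the performance/cost ratio drops. The numbers below are made up purely to show the shape of the tradeoff, nothing measured:

```python
# Illustrative arithmetic only - invented numbers, not benchmarks.
baseline_perf = 100.0   # requests/sec on the old, tightly-coupled stack
baseline_cost = 10.0    # hardware spend (arbitrary units)

# SOA stack: same runtime performance, bought back with more hardware.
soa_perf = 100.0
soa_cost = 13.0         # assume ~30% more hardware to hold performance steady

print(baseline_perf / baseline_cost)  # 10.0
print(soa_perf / soa_cost)            # about 7.69 - performance/cost goes down
```

The point of the principle is that this drop is an accepted, budgeted cost of design time agility, not an accident.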

Tuesday, September 12, 2006

MomentumSI on Reuse

We published this a few months back but I thought I'd republish it since reuse seems to be a hot topic these days...

http://www.momentumsi.com/SOA_Reuse.html


Feel free to use (or reuse) the graphics! If you choose not to REUSE the graphics you can SHARE this page by passing someone a link ;-)

Monday, September 11, 2006

Lessons from Planet Sabre

Back in the early days of Momentum I did quite a bit of hands-on architectural consulting. One of my clients was Sabre, the travel information company. One day I was reassigned from another Sabre project (I'll tell that story another day) to a project in distress called "Planet Sabre".

Planet Sabre was an application that focused on the needs of the Travel Agent. It allowed an agent to book flights, cars, hotels, etc. As you might imagine, these booking activities look quite similar to ones that might be done over the web (Travelocity) or through the internal call center. Hence, they were great candidates for services (they had a good client-to-service ratio).

I was assigned as the chief architect over a team of about 30 designers and developers. (BTW, I was like the 4th chief architect on the project). The developers were pissed that they received yet-another architect to 'help them out'. Regardless, they were good sports and we worked together nicely.

At Sabre, the services were mostly written in TPF (think Assembler) and client developers were given a client side library (think CLI). The service development group owned the services (funded, maintained and supported users). They worked off a shared schedule - requests came in, they prioritized them and knocked them out as they could.

The (consuming) application development groups would receive a list of services that were available as well as availability estimates for new services and changes to existing services. All services were available from a 'test' system for us to develop off of.

So, what were the issues?

The reason why the project was considered 'distressed' was poor performance. Sounds simple, eh? Surely the services were missing their SLA's, right? Wrong. The services were measured on their ability to process the request and to send the result back over the wire to the client. Here, the system performed according to SLA's. The issue that we hit was that the client machine was very slow, and the client side VM and payload-parser were slow, as was the connection to the box (often a modem).

We saw poor performance because the service designers assumed that the network wouldn't be the bottleneck, nor would the client side parser - both incorrect. The response messages from the service were fatty and the deserialization was too complex causing the client system to perform poorly. In addition, the client application would perform 'eager acquisition' of data to increase performance. This was a fine strategy except it would cause 'random' CPU spikes where an all-or-nothing download of data would occur (despite our best attempts to manipulate threads). From our point of view, we needed the equivalent of the 'database cursor' to more accurately control the streaming of data back to the client.

Lesson: Client / consumer capabilities will vary significantly. Understand the potential bottlenecks and design your services accordingly. Common remedies include streaming with throttling, box-carring, multi-granular message formats, cursors and efficient client side libraries.
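The 'database cursor' remedy we wanted can be sketched in a few lines: the client pulls results in small pages at a controlled rate instead of taking an all-or-nothing download. The paging function and the stand-in service below are hypothetical, just to show the shape:

```python
import time

def paged_results(fetch_page, page_size=100, delay_s=0.0):
    """Yield rows page by page; the optional delay throttles the pull
    so the client never takes an all-or-nothing CPU/network spike."""
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            return
        yield from page
        offset += len(page)
        if delay_s:
            time.sleep(delay_s)  # spread the load over time

# A stand-in service returning 250 fake rows.
DATA = list(range(250))
def fetch_page(offset, limit):
    return DATA[offset:offset + limit]

rows = list(paged_results(fetch_page, page_size=100))
print(len(rows))  # 250
```

The same idea generalizes to the other remedies listed: box-carring batches many small requests into one page, and multi-granular formats let the client pick how much of each row it actually wants.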

The second lesson was more 'organizational' in nature. The 'shared service group' provided us with about 85% of all of the services we would need. For the remaining 15% we had two options - ask the shared services group to build them - or build them on our own. The last 15% weren't really that reusable - and in some cases were application specific - but they just didn't belong in the client. So, who builds them? In our case, we did. The thing is, we had hired a bunch of UI guys (in this case Java Swing), who weren't trained in designing services. They did their best - but, you get what you pay for. The next question was, who maintains the services we built? Could we move them to the shared services group? Well, we didn't know how to program in TPF so we built them in Java. The shared services group was not equipped to maintain our services so we did. No big deal - but now it's time to move the services into production. The shared services group had a great process for managing the deployment and operational processes around services that THEY built. But what about ours? Doh!

Lesson: New services will be found on projects and in some cases they will be 'non-shared'. Understand who will build them, who will maintain them and how they will be supported in a production environment.

Planet Sabre had some SOA issues, but all in all I found the style quite successful. When people ask me who is the most advanced SOA shop, I'll still say Sabre. They hit issues but stuck with it and figured it out. The project I discussed happened almost 10 years ago yet I see the same issues at clients today.

Lesson: SOA takes time to figure out. Once you do, you'll never, ever, ever go back. If your SOA effort has already been deemed a failure it only means that your organization didn't have the leadership to 'do something hard'. Replace them.

Thursday, August 31, 2006

Supply Side and Demand Side SOA

I come from a manufacturing background, hence I tend to think in terms of products, inventory (the supply) and demand. I do the same thing with SOA.

Most of the companies that I've consulted to start with a 'supply side SOA strategy'. That is, they create a strategy to create a supply of services. As everyone in the manufacturing world knows, creating supply without demand is a really bad thing. Inventory that is not used is considered bad for a few reasons, the primary ones being:
1. You prematurely spent your money
2. As inventory ages, new demand-side requirements will be generated causing the current inventory to become outdated

Most manufacturing companies have moved to some variation of just-in-time production. They wait for customer demand before they build the products. You'd think that this would work for SOA, but in many companies it doesn't. The reason is simple. These companies do not have a demand-side generator (the sales and marketing engines). Demand-side SOA is a discipline that doesn't exist in many corporations.

Demand-side SOA requires a change in philosophy and process. For starters, you have to begin thinking about all of your services as products or SKU's. Products must be managed in a product portfolio. Products must be marketed to potential 'buyers'. Demand side application builders should have well defined processes to shop for SKU's with cross-selling capabilities. We need to kill the 'service librarian' and replace it with the 'service shopping assistant'. Obviously, we have to quit thinking in terms of 'registries'; 'catalogs' are better - but 'shopping carts' are probably even closer to target.

Demand-side SOA is currently in its infancy. Our ISV vendor community has largely failed us to date, but it is only a matter of time before they catch up. In the meantime, it is the responsibility of the I.T. organization to begin changing philosophies and processes to think in terms of supply and demand economics.

Tuesday, August 29, 2006

Forking Web 2.0

Screw Web 2.0.

Seriously. I have no need for Web 2.0 as it is being defined.

A few months ago I attended the MS Web 2.0/SOA think & posture event where a bunch of smart people overloaded the term Web 2.0 to meet their own needs, myself included. I almost forgot about the event until I caught a post by Gregor Hohpe (who impresses the hell out of me). Gregor attended yet another 'what the hell is Web 2.0' event and blogged about some attributes and tenets:

I'm convinced that Gregor is a freakin genius, so I really doubt if he missed the conclusion. In fact, the thinking was very similar to the Spark conference so I'm not surprised.

What just occurred to me is Web 2.0, as the world defines it, bores the living hell out of me. It's simple - I don't work for a consumer company like Amazon, Google or Yahoo - and as I look across my clients most of them don't need Web 2.0 functionality (as people are defining it).

What do I need? How do I want to overload the term? Easy. I am a SOA dude. I need a bad ass client framework for my services. I was hoping that the Web 2.0 guys were going to create a new client model - but they aren't. They're creating a social computing model - good for them, but it doesn't meet my needs. I need... a Collaborative Composite Client Platform.



What are the characteristics of a CCCP?
1. Obviously, it's client-side, asynchronous, message/service oriented and highly interactive.
2. It's designed for the Web but merges the application and document paradigms successfully (like http://finance.google.com)
3. The componentized UI is self-describing and viewable by a user (think 'view source' meets 'portlets').
4. The services called by the clients can be identified and reused. If a user doesn't want the UI they should be able to identify the services the client calls and use them instead.
5. Collaboration is a core tenet, not a feature.
6. Services are pre-compiled and available as-is; however, client compositions can be changed at runtime and the new configuration can be permanently saved (typical with modern portals).

If the Web 2.0 guys create their manifesto and it's a bunch of e-tail crap where the customer is king - I'm out. My interests are in creating a new programming model for the client that serves as a foundation for any domain. It's time to fork Web 2.0.

Sunday, August 27, 2006

Decoupling the Client and Service Platforms

The realization seems to be hitting the masses that Service Oriented Architecture isn't just about the Services, rather it is about the consumption of those services. Dare I say, it is about the clients (and the services).

The SOA programs that I've seen stall out typically did so because they failed to identify the composite applications that would consume the services. It sounds rather obvious - but it isn't about building or buying services. Value is created when business people use clients that leverage the services. I guess that's one of the reasons why I always try to call this paradigm 'Client-Service Computing', rather than SOA.

Recently I reviewed a few enterprise SOA reference architectures and noticed an unpleasant pattern. Architects were forgetting to put the 'client' on the architecture. I know - sounds silly. Really smart architects get so caught up in identifying the patterns, domains, interactions, practices and standards associated with services that they forget about the clients!

So - we have clients and services... and we decoupled them. I'll say it again - we decoupled them! This takes me to my next point. For legacy reasons architects are continuing to insist that the client platforms be tightly aligned to the service platforms (.Net on both, etc.). This is nonsense.

Many of the last generation client platforms were not optimized for service oriented computing. By this I mean that they don't easily accommodate the Web Service standards nor do they embrace 'contract first design' and in general - many of them just plain stink. The reason we use them is because analysts like Gartner told us to go with a single platform. It's time to decouple the client and the service platforms. The client platforms should be optimized around UI capabilities including collaboration and human-computer-interactions. This might mean using a strong Web 2.0 platform. My bottom line is that there is no need to continue building UI's using the same ole platforms. It's time to optimize for this computing paradigm.