Service Oriented Enterprise

Saturday, November 15, 2003

Project Liberty & WS-Federation  

Project Liberty is a federated trust & identity scheme, created as a "Microsoft Passport killer". A couple of years ago, MS was pushing Hailstorm and Passport as a mechanism to centrally control identity and schema-based data. The fine folks over at Sun (and friends) came to the conclusion that they didn't want MS to control all of the user IDs in the world - and for good reason. So they came up with a specification to decentralize identity & trust. The program came to fruition just after the September 11th tragedy, and was given the very awkward name "Project Liberty" - I guess they felt that they were 'liberating identity' or something like that...

Well, Project Liberty did what it was supposed to do. It created an alternative means to accomplish the same goal as Passport, without handing over the family jewels to MS. However, Project Liberty was created before the WS-* specifications. This means that, for the most part, it overlaps with some of the newer specifications, like WS-Trust, WS-Privacy and WS-Metadata.

I'm a huge fan of "concern-based protocols". Thus, I like having 'trust' as its own protocol - and 'privacy' as another protocol. I don't like mixing concerns in a single protocol, which I believe Project Liberty is guilty of. From a cursory view, it appears as though WS-Federation covers the bulk of what is actually needed. I'm not an expert in this area - but so far, it looks 'good enough'.

The Project Liberty group recently published a paper comparing the approaches. Although the paper attempts to subtly convince the reader that their approach is better, for me, it has the opposite effect. They basically claim that they have successfully lumped a bunch of standalone concerns into one specification. In addition, they did it prior to the existence of the WS-* specifications, so the implementations that are available won't be technically aligned with the needs of the next-generation web service developer.

I'm not ready to say, "let's kill Project Liberty"... yet. But I am mentally preparing for the funeral. In my opinion, Project Liberty did what it was supposed to do: force Microsoft toward a standards-based, decentralized ID system. And this is exactly what happened... thus, I consider the project a raging success. But it has served its purpose, and now it may be time to move on.

posted by jeff | 7:40 AM

Thursday, November 13, 2003

New RFID Application - Knicker Surfing!  

According to the Chicago Sun-Times, "RFID chips could make your daily life easier, but they also could let anyone with a scanning device know what kind of underwear you have on and how much money is in your wallet".

At first I thought to myself - Wow, what an invasion of privacy! Then I realized that the Chicago Sun-Times may have just found the killer application for RFID. By targeting perverts, we will be able to sell millions of handheld readers to identify the 'kind of underwear' that people are wearing. This is genius!!! Unfortunately, I found out that there is already a growing population of what I am dubbing "knicker surfers":

Gary, a regular knicker surfer reports, "Yea, it's cool. Me and my buddies come out here all the time and knicker surf. I just hope Walmart pushes PML. Right now, I can only get the underwear brand... with PML I'll be able to get the size too!"

I had no idea. :-)

posted by jeff | 6:37 AM

Wednesday, November 12, 2003

Storing Transient Data  

Jon Udell reports, "Today, most IT shops can't store or process massive flows of transient data. But XML message traffic is a resource that creates strategic opportunity for those who learn to manage it well. Tools for doing that are on the way."

Hmmm... if I create a persistent store of transient data, is it still transient? Maybe there's a reason most IT shops don't store transient data - wouldn't it just be considered 'persistent data'?

Ok, I understand what he means - perhaps instead of 'transient' we could call it 'inter-service message data' or just 'message data'? Still, I'm not sure that I buy into the concept. Most I.T. shops do a significant amount of warehousing and reporting off the systems of record that generate the messages. The reliability side is currently taken care of by message queue journaling, and real-time inquiry on state is best handled through an inquiry to a business process engine or a BAM notification.
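To make the journaling point concrete, here's a minimal sketch (Python, all names invented) of what queue-side journaling already buys you: every message gets a durable copy before the business handler ever sees it, so there's no separate 'transient data store' to build.

```python
import json
import queue

def journal_and_forward(incoming, handler, journal):
    # Minimal queue-side journaling: append a durable copy of each
    # message (a file or DB in practice) before handing it to the
    # business handler. The 'transient' stream is captured as a side
    # effect of normal message processing.
    while not incoming.empty():
        msg = incoming.get()
        journal.append(json.dumps(msg))  # durable copy
        handler(msg)                     # normal processing

q = queue.Queue()
q.put({"order": 1, "sku": "A100"})

journal_log, handled = [], []
journal_and_forward(q, handled.append, journal_log)
```

The journal ends up holding exactly the message traffic Udell wants to mine, without any new infrastructure beyond what the queue already provides.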

Right now, collecting transient data sounds like a bad habit... I need a use-case, with strong, strong justification.

posted by jeff | 5:10 AM

Monday, November 10, 2003

The Realist and the Idealist  

I recently had the opportunity to engage in a technical discussion with Chris Sells. We found ourselves agreeing to disagree. He took on the role of the realist, I took on the role of the idealist.

First, Chris and I seemed to be in agreement that distributed computing was easy to screw up. As he stated, it was necessary for consultants to travel the world preaching about *round-trips* (overly chatty message exchanges).

The Realist
Now, I hope I don't screw this up (Chris, correct me if I do).
Chris is of the opinion that we shouldn't paper over the complexities of distributed computing. By extending the object paradigm into a distributed object paradigm, we unintentionally encourage developers to think in *local mode* when they should really be thinking in *distributed mode*. His point is that additional considerations must be addressed (latency, reliability, security, etc.), and that by forcing the developer to acknowledge these concerns, runtime disasters will be reduced. His feeling is that Indigo (from Microsoft) does a good job of making the developer acknowledge the distinction between local and distributed calls, thus meeting a need in the developer community.
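To illustrate the realist position, here's a quick sketch (Python, every name invented for the example) of the kind of distinction Chris wants the programming model to enforce: the remote variant cannot even be invoked without the developer confronting timeouts, retries and partial failure.

```python
class RemoteCallError(Exception):
    """Raised when a distributed call exhausts its retries.
    Local calls never raise this - that's the whole point."""

def local_get_price(sku):
    # In-process call: effectively instant, effectively reliable.
    return {"A100": 9.99}.get(sku)

def remote_get_price(sku, timeout_s=2.0, retries=2):
    # A distributed call forces the caller to acknowledge latency,
    # partial failure and retry policy right in the signature.
    for attempt in range(retries + 1):
        try:
            return simulated_network_fetch(sku, timeout_s)
        except TimeoutError:
            if attempt == retries:
                raise RemoteCallError(f"price lookup for {sku} failed")

def simulated_network_fetch(sku, timeout_s):
    # Stand-in for a real wire call (socket, HTTP, message queue, ...).
    return {"A100": 9.99}.get(sku)
```

Both calls return the same answer, but only one of them lets the developer pretend the network doesn't exist.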

The Idealist
On the other hand, I am the idealist. It is my opinion that we should continue to strive toward location transparency. Thus, we should continue to use one programming model (and invocation model) for both local and distributed calls. I believe that the SOA model largely facilitates location transparency, and this should be leveraged. However, Chris (and others) will be quick to point out that this is like being half-pregnant: either your system is working efficiently in distributed mode, or it isn't. And on virtually every occasion, computer scientists will tell me that the great hurdle in location transparency is the static nature of message exchange sequences between client and server. In local mode, people strive toward fine-grained calls; in remote mode, coarse-grained calls are preferred.

As an idealist, I am of the opinion that we shouldn't *dumb down* the programming model to reduce developer design errors. Rather, I feel that we should take the bull by the horns and look at the real issue: automating the granularity at run time (based on costing functions). However, to accomplish this, we need to give our runtime containers more knowledge about our *intent* (think 'use case based sequence diagrams'). Now, instead of asking for a single method to be called, we ask for a 'use case' to be fulfilled. IMHO, more emphasis needs to go toward writing smart software that fulfills an intent, rather than acting out a predetermined recipe.
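Here's a toy sketch of what I mean by automating granularity (hypothetical Python, invented names and made-up cost numbers): the developer issues fine-grained calls under a declared intent, and the runtime picks the cheaper granularity from a costing function instead of forcing the choice at design time.

```python
class IntentBatcher:
    """Collects the fine-grained calls issued under one declared intent
    and decides at run time whether to ship them as a single
    coarse-grained exchange, using a simple costing function."""

    def __init__(self, per_roundtrip_cost_ms=50, per_item_cost_ms=1):
        self.per_roundtrip_cost_ms = per_roundtrip_cost_ms
        self.per_item_cost_ms = per_item_cost_ms
        self.pending = []

    def call(self, operation, args):
        # The developer keeps writing fine-grained calls; the runtime
        # defers them until the intent is fulfilled.
        self.pending.append((operation, args))

    def cost_if_sent_individually(self):
        # One round trip per call.
        return len(self.pending) * (self.per_roundtrip_cost_ms
                                    + self.per_item_cost_ms)

    def cost_if_batched(self):
        # One round trip total, same per-item work.
        return (self.per_roundtrip_cost_ms
                + len(self.pending) * self.per_item_cost_ms)

    def fulfill(self, transport):
        # Choose the cheaper granularity at run time.
        if (len(self.pending) > 1
                and self.cost_if_batched() < self.cost_if_sent_individually()):
            return transport(self.pending)       # one coarse-grained exchange
        results = []
        for call in self.pending:                # fine-grained exchanges
            results.extend(transport([call]))
        return results

# Hypothetical transport that just echoes the calls it receives.
def echo_transport(calls):
    return [f"{op}:{args}" for op, args in calls]

batcher = IntentBatcher()
batcher.call("getName", 7)
batcher.call("getAddress", 7)
results = batcher.fulfill(echo_transport)  # two calls, one round trip
```

With a 50 ms round-trip cost, the two pending calls go out as one exchange; drop the round-trip cost toward zero (local mode) and the same code falls back to fine-grained calls. That's the costing-function idea in miniature.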

Does Indigo excite me? Not really - I see good concepts from P2P, AOP and Trust rolled together. The exciting part is that MS has the resources to pull it off and make it easy to use.

Chris is a smart guy - he might be right. I don't know.

posted by jeff | 6:08 PM

Sunday, November 09, 2003

This post is about the hottest enterprise technology.  

I bet you think I'm talking about web services. Well, I'm not. It's time for me to start blogging about RFID and more precisely EPC.

I've considered starting a new blog dedicated to RFID, but I think I'm going to keep the posts inside of this blog. Web services and RFID will likely settle into a symbiotic relationship.

About 6 years ago, I had sector-level responsibility for manufacturing systems at 3M. This involved building, buying and integrating all of the usual suspects (demand management, MRP, Lab BOM, Lab content mgmt., SCE - pick, pack, ship, label, optimize, capacity planning, etc.). Recently, I've had the pleasure of working on a supply chain project with Procter & Gamble. This has been a great experience. The first thing I noticed was that not much has really changed in the last decade. Sure, collaborative planning, forecasting, dynamic safety stocks, etc. are all improving. But for the most part, the changes are incremental.

RFID / EPC is not incremental. It is monumental. I'm going to blog more on this later. For now, if you want to get educated, go to the following sites:

posted by jeff | 2:40 PM