Sunday, November 30, 2003
Here's an interesting BPM twist. A company called "Clear Technology" claims:
"Tranzax is the industry’s first business process management platform designed specifically to capture and replicate the business processing behavior of an enterprise’s best employees, and then optimize and extend these best practices across the enterprise."
I have no idea if they can pull this off, but I like the sound of it. It does raise an interesting question. If you force everyone to use the same process, how can you expect process innovation to occur?
Saturday, November 29, 2003
RFID Chips used for Mind Control
According to the former Chief Medical Officer of Finland, RFID chips are now being used for mind control. Now, I know what you're thinking... the transmission and storage capabilities of RFID are so low, how does it work? Well, I don't know. Apparently, the trick is to hook the antenna up to the brain stem and the electricity wires up to the cerebrum and then 'think' real hard.
The application for the device is targeted at creating a more choreographed dance line. Joseph O'Shea, the producer of 'Riverdance', commented, "Gettin all these dancers to move together is a real pain. That's why we are switching to RFID Mind Control". However, early tests of the device were less than successful.
We determined that midgets and fat people are less susceptible to the radio waves, as well as people that have consumed large amounts of Mad-Dog 20/20. More recent tests have confirmed that the system works best with gay Irish men dressed in black.
The company behind this venture, "Coordinated Dancing Inc.", is considering new markets. Unfortunately, most of the senior management team has been infected with gangrene of the medulla, which has left them without control of their bladders and urinary tracts. CFO Jimmy McDonald commented, "We've got a think tank working on the issue now - everything from more company toilets to bulk purchases of 'Depends' - we will not let gangrene of the brain slow us down. Inserting 10-cent chips into the brain of every man, woman and child is the future!"
Friday, November 28, 2003
A Service Oriented Coupling Index
In December of last year, I issued a challenge (really to myself) to work on a 'service oriented coupling index'. I was searching for a quantitative way of determining how loosely or tightly coupled software entities are. I took a look at much of the academic literature but found that most of it was out of date or needed rethinking in the service oriented world. More recent work, such as that performed by Doug Kaye, was a great help in the pursuit.
In the end, I produced an initial report called "An Inter-Service Coupling Index for Lossless Exchanges".
I learned a couple of lessons in the process:
1. A coupling index is possible; however, the quantitative aspect still lies in the eye of the beholder (which Doug and others warned me about). Thus, in many ways the coupling index becomes a 'best practices in coupling' guide - see the sketch after this list.
2. The pursuit of the coupling index was extremely interesting. I am convinced that the value to be taken away from the report isn't the index but rather the insight on areas where coupling may still be reduced.
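To make lesson #1 concrete, here is a purely hypothetical scorecard for a single consumer/provider pair. The factor names, weights and scores below are my illustration for this post, not the contents of the report:
  <couplingAssessment consumer="OrderService" provider="CreditService">
    <!-- each factor is scored 0.0 (loose) to 1.0 (tight); the weights are judgment calls -->
    <factor name="transportDependency" weight="0.2" score="0.1"/> <!-- async, over a queue -->
    <factor name="messageFormat" weight="0.3" score="0.4"/> <!-- shared XML Schema -->
    <factor name="interfaceKnowledge" weight="0.3" score="0.8"/> <!-- caller knows operation semantics -->
    <factor name="temporalCoupling" weight="0.2" score="0.2"/> <!-- no synchronous reply required -->
    <!-- index = sum(weight x score) = 0.02 + 0.12 + 0.24 + 0.04 = 0.42 -->
  </couplingAssessment>
The arithmetic is trivial; the value is in arguing over the factor list and the scores - which is exactly the 'eye of the beholder' problem.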
Please feel free to send me your thoughts (good or bad). I plan on putting out an updated version on Jan 1. And thanks again to all of you who sent me early feedback. Have Fun! jeff
Thursday, November 27, 2003
MS Millennium Goals for OS
I just ran across an interesting paper from MS, see:
http://research.microsoft.com/sn/Millennium/mgoals.html
Perhaps Christian would be kind enough to blog on 'Longhorn' and how close it is to reaching some of the goals.
Wednesday, November 26, 2003
Service Data Objects
IBM & BEA released a new specification for Service Data Objects. In my opinion, this is a much needed and perhaps overdue specification. SDO introduces Data Objects and Data Graphs, which are self-describing data containers that can be manipulated, serialized and navigated. The feature that really got my attention was the ability to create a 'change log' of the data set. In essence, this feature mimics the "disconnected DataSets" functionality found in .Net.
It is also apparent that the specification writers have gone out of their way to plan for a service oriented world. XML Schema and a SOAP binding are both utilized. I'm getting the feeling that this will be an underlying workhorse for some follow-on specifications.
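For the curious, here is roughly how I'd expect a data graph and its change log to serialize - the element names below are my guess from a first read, not lifted verbatim from the spec:
  <sdo:datagraph xmlns:sdo="commonj.sdo">
    <changeSummary>
      <!-- the pre-change image: 'name' held "ACME Corp." before the client modified it -->
      <company sdo:ref="#/company" name="ACME Corp."/>
    </changeSummary>
    <company name="MegaCorp">
      <department name="Sales"/>
    </company>
  </sdo:datagraph>
Ship the graph to a disconnected client, let it edit away, and the change summary gives the server everything it needs to apply (or reject) the deltas.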
Tuesday, November 25, 2003
OpenStorm Demo
Starting in December, I will be giving some one-on-one demonstrations of the OpenStorm Suite to prospects.
Here are my initial travel plans:
Dec. 2 - Houston
Dec. 3 - Dallas
Dec. 4 - New York
Dec. 5 - Atlanta
Dec. 10-11 Philadelphia
Dec. 18 - St. Louis
Dec. 19 - Chicago
Most of these days are only partially booked. If you are considering a purchase in this space and the date/location works for you, shoot me a note! jschneider AT OpenStorm DOT com. The talk will focus on Service Oriented Integration techniques, the Service Network and using BPEL as an integration mechanism.
I'll be setting up a west coast visit in January.
Monday, November 24, 2003
OCL for Web Services
Radovan wants OCL for web services (or more precisely, he wants a Service Constraint Language). I do too. As far as I know, a web services variation of the Object Constraint Language doesn't exist yet - let's call it SCL.
But be careful: defining constraints (pre-conditions, post-conditions, message verification - and potentially even the order of service participants) doesn't give you a replacement for a fully defined business process. I love constraints - and I love fully described digital business processes - and I really love when the two are combined.
I've been having some offline conversations on the 'coupling index' - a means to quantitatively determine a 'loose or tight coupling factor'. One thing I noticed is that by having a centrally defined business process, we are able to have more fully encapsulated services (we shift knowledge out of the service and into the process). However, I've also noticed that the current state of web services fails dramatically in being 'fully encapsulated' - mostly due to the lack of constraints put on the operations. That is, a significant amount of knowledge beyond the interface is still required.
And yes, if we wanted... we could build an SCL as a web service... which means we could orchestrate the pre- and post-condition calls. :-0 (not that you would want to... I'm still in that mode where everything looks like an orchestration...)
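Since no SCL exists, let me invent some syntax just to show the flavor - a constraint block hanging off a WSDL operation, where every name below is a placeholder of mine:
  <operation name="ShipOrder">
    <scl:constraints xmlns:scl="urn:example:scl">
      <!-- must hold before the operation may be invoked -->
      <scl:precondition expr="order/status = 'PAID'"/>
      <scl:precondition expr="count(order/lineItem) &gt; 0"/>
      <!-- must hold in the response message -->
      <scl:postcondition expr="string-length(shipment/trackingNumber) &gt; 0"/>
    </scl:constraints>
  </operation>
XPath expressions evaluated against the input and output messages would be the obvious starting point for the expression language.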
p.s., I'm a fan of point-to-point integration as well!
Saturday, November 22, 2003
Service Hijacking
I've been playing with a BPEL concept that I'd like to share... I'm calling it 'service hijacking'. It goes like this:
Some company releases a web service; for example, Amazon. This web service has a WSDL describing all of the operations and messages. The WSDL is published in a public directory for consumption.
Some other company, "OnlineBooks", decides to launch a marketplace for book shopping. So they create a BPEL service that front-ends the Amazon service, but adds calls to "Barnes & Noble" and a few others. Perhaps they even kept their WSDL the same as the original Amazon WSDL in order to allow the 'consumers' to easily switch over. At this point, we have added new value for the consumer, so all is good.
Then, some other company, "MarketResearchBooks", launches a new service to front-end the "OnlineBooks" service. They support all of the same features, but also keep records of who had the lowest price and then sell that information back to the original retailer. And yes, to the best extent possible, they kept their WSDL looking like the original Amazon WSDL. At this point, we didn't *really* add new value for the consumer, but we didn't take any value away.
Then, another company, "OnlineSuckers", launches a service to front-end "MarketResearchBooks". It adds no new value, but asks people to pay for the service on a per-use basis. Now, we have a problem. The value of the service went down, not up.
Making information available online in a structured format for public consumption is a tricky proposition. In some cases, you don't care what people do with your information, while in other cases (OnlineSuckers), you might care. In my opinion, a service is hijacked when the value of the service goes down (rather than up) from the re-purposing of the call.
The earlier examples ("OnlineBooks" and "MarketResearchBooks") are what I call either 'service piping' or 'service chaining'. These are good things. They take information and add value, or at least maintain the same level of value. Of course, there is a different technical argument to be made against long chains of services, but I'll save that for another day.
---- after thought----
It just hit me that if you don't write BPEL scripts every day, you might be wondering what this concept has to do with BPEL. Yes, service hijacking can be accomplished through your favorite programming language (Java, C#, etc.). However, BPEL facilitates this functionality with VERY little effort. With the OpenStorm suite, I can service-chain Amazon in under a minute, add marketing statistics in a couple of minutes and hijack it with a payment service in under a minute. I guess my point is that BPEL makes this stuff very easy to do.
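To back that claim up, here is roughly what the "OnlineBooks" front-end looks like as a BPEL4WS 1.1 process - sketched from memory, with partner link and variable declarations omitted:
  <process name="OnlineBooks"
           xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
    <sequence>
      <!-- listen on the same portType Amazon publishes, so consumers can switch painlessly -->
      <receive partnerLink="consumer" portType="az:BookSearchPortType"
               operation="search" variable="request" createInstance="yes"/>
      <flow>
        <!-- fan out to the retailers in parallel -->
        <invoke partnerLink="amazon" portType="az:BookSearchPortType"
                operation="search" inputVariable="request" outputVariable="amazonResult"/>
        <invoke partnerLink="barnesAndNoble" portType="bn:BookSearchPortType"
                operation="search" inputVariable="request" outputVariable="bnResult"/>
      </flow>
      <!-- merge the results (assign activities omitted) and answer the caller -->
      <reply partnerLink="consumer" portType="az:BookSearchPortType"
             operation="search" variable="response"/>
    </sequence>
  </process>
The whole "hijack" is the receive/reply pair wrapped around somebody else's invokes - which is why it takes a minute, not a sprint.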
Friday, November 21, 2003
BPEL Validator :-)
It might be time to get that BPEL Validator routine up and running.... just in case any company were to release a few BPEL extensions...
:-)
And congratulations on the release.
Monday, November 17, 2003
Bean-counters to Join Programmers
VentureWire had great news today. More accounting and finance jobs will be moved offshore!!
It is clear that geeky American programmers have no idea how to influence Congress. Surely our accounting brothers can help out...
=============================
Ephinay Raises $10M Series B
By VentureWire Staff Reporters 11/17/2003
Charlotte, N.C. - Ephinay, a finance and accounting business process outsourcing firm, said it received $10 million in Series B financing. [full story]
http://www.ephinay.com
=============================
Outsource Partners International Raises a Potential $20M in Series B
By VentureWire Staff Reporters 11/17/2003
Outsource Partners International (OPI), a provider of finance and accounting outsourcing services, said it has raised $12 million in its Series B round. The round could potentially bring in $8 million more. [full story]
http://www.opiglobal.com
=============================
Two in one day!!! Kwe Kwe is going to get crowded!!
Sunday, November 16, 2003
Junglemen Hex American Programmers
In a rare move, the normally friendly tribesmen of the Zimbabwe jungle put a voodoo-like-hex on the American programmers working there.
Frank, who was laid off from General Electric's I.T. department some time back explained, "After training my Indian replacement at G.E., I decided I was going to beat the game. If the trans-national corporations were only interested in low-cost labor, it was clear that I'd have to reduce my cost-of-living expenses. That's why our whole development team moved to the jungles of Zimbabwe."
"Unfortunately, we were found by the local tribesman. I don't fully understand their language, although it appears to be a blend of Morse-code and ASCII. From what I've gathered, they are concerned about too many U.S. programmers coming here." In an effort to remedy the concerns, Frank intends to meet with the local governing council. "I will explain to the council that we can put 'caps' on the number of American programmers that can come to the jungle - AND... they will be forced to leave after a certain number of years. In essence, we are pitching them a variation of the H1 and L1 programs!!!"
The city of Kwe Kwe, Zimbabwe is quickly becoming a technology hotbed. In addition to American programmers, recently displaced developers from Hyderabad, India are also flocking over. Amit Maheshwari explains, "Yes, we used to specialize in convincing American companies to come to India... now, we make our money selling the U.S. trans-nationals infrastructure to create wireless zones in the jungle. We have also tried to get them to subsidize the mosquito repellent, but so far haven't had any luck."
Saturday, November 15, 2003
Project Liberty & WS-Federation
Project Liberty is a federated trust & identity scheme. It was created as a "Microsoft Passport Killer". A couple of years ago, MS was pushing Hailstorm and Passport as a mechanism to centrally control identity and schema-based data. The fine folks over at Sun (and friends) came to the conclusion that they didn't want MS to control all of the user IDs in the world - and for good reason. Thus, they came up with a specification to decentralize identity & trust. The program came to fruition just after the September 11th tragedy and was given the very awkward name "Project Liberty" - I guess they felt that they were 'liberating identity' or something like that...
Well, Project Liberty did what it was supposed to do. It created an alternative means to accomplish the same goal as Passport, without handing over the family jewels to MS. However, Project Liberty was created prior to the creation of the WS-* specifications. This means that for the most part, it has overlap with some of the newer specifications created, like WS-Trust, WS-Privacy and WS-Metadata.
I'm a huge fan of "concern-based protocols". Thus, I like having 'trust' as its own protocol - and 'privacy' as another protocol. I don't like mixing concerns in a single protocol, which I believe Project Liberty is guilty of. From a cursory view, it appears as though WS-Federation covers the bulk of what is actually needed. I'm not an expert in this area - but so far, it looks 'good enough'.
The Project Liberty group recently published a paper comparing the approaches. Although the paper attempts to subtly convince the reader that their approach is better, for me, it has the opposite effect. They basically claim that they have successfully lumped a bunch of standalone concerns into one specification. In addition, they did it prior to the existence of the WS-* specifications; thus, the implementations that are available won't be technically aligned with the needs of the next-generation web service developer.
I'm not ready to say, "let's kill Project Liberty"... yet. But, I am mentally preparing for the funeral. In my opinion, Project Liberty did what it was supposed to do: force Microsoft toward a standards-based, decentralized ID system. And this is exactly what happened... thus, I consider the project a raging success. But it has served its purpose and now it may be time to move on.
Thursday, November 13, 2003
New RFID Application - Knicker Surfing!
According to the Chicago Sun-Times, "RFID chips could make your daily life easier, but they also could let anyone with a scanning device know what kind of underwear you have on and how much money is in your wallet".
At first I thought to myself - Wow, what an invasion of privacy! Then, I realized that the Chicago Sun-Times may have just found the killer application for RFID. By targeting perverts, we will be able to sell millions of handheld readers to identify the 'kind of underwear' that people are wearing. This is genius!!! Unfortunately, I found out that there is already a growing population of what I am dubbing "knicker surfers":
Gary, a regular knicker surfer reports, "Yea, it's cool. Me and my buddies come out here all the time and knicker surf. I just hope Walmart pushes PML. Right now, I can only get the underwear brand... with PML I'll be able to get the size too!"
I had no idea. :-)
Wednesday, November 12, 2003
Storing Transient Data
Jon Udell reports, "Today, most IT shops can't store or process massive flows of transient data. But XML message traffic is a resource that creates strategic opportunity for those who learn to manage it well. Tools for doing that are on the way."
Hmmm... if I create a persistent store of transient data, is it still transient? Maybe there is a reason most IT shops don't store transient data - wouldn't that just be considered 'persistent data'??
:-)
Ok, I understand what he means - perhaps instead of 'transient' maybe we could call it 'inter-service message data' or just 'message data'? Still, I'm not sure that I buy into the concept. Most I.T. shops do a significant amount of warehousing and reporting off the systems of record that generate the messages. The reliability side is currently taken care of by message queue journaling, and real-time inquiry on state is best handled through an inquiry to a business process engine or BAM notification.
Right now, collecting transient data sounds like a bad habit... I need a use case, with strong, strong justification.
Monday, November 10, 2003
The Realist and the Idealist
I recently had the opportunity to engage in a technical discussion with Chris Sells. We found ourselves agreeing to disagree. He took on the role of the realist, I took on the role of the idealist.
First, Chris and I seemed to be in agreement that distributed computing was easy to screw up. As he stated, it was necessary for consultants to travel the world preaching about *round-trips* (overly chatty message exchanges).
The Realist
Now, I hope I don't screw this up (Chris, correct me if I do).
Chris is of the opinion that we shouldn't paper over the complexities of distributed computing. By extending the object paradigm into a distributed object paradigm, we unintentionally encourage developers to think in *local mode* when really they should be thinking in *distributed mode*. His point is that additional considerations must be met (time, reliability, security, etc.), and by forcing the developer to acknowledge these concerns, runtime disasters will be decreased. His feeling is that Indigo (from Microsoft) does a good job of making the developer acknowledge the distinction between local and distributed calls, thus meeting a need in the developer community.
The Idealist
On the other hand, I am the idealist. It is my opinion that we should continue to strive towards location transparency. Thus, we should continue to use one programming model (and invocation model) for both local and distributed calls. I believe that the SOA model largely facilitates location transparency and this should be leveraged. However, Chris (and others) will be quick to point out that this is like getting half-pregnant: either your system is working efficiently in distributed mode, or it isn't. And on virtually every occasion, computer scientists will tell me that the great hurdle in location transparency is the static nature of message exchange sequences between client and server. In local mode, people strive towards fine-grained calls; in remote mode, coarse-grained methods are preferred. As an idealist, I am of the opinion that we shouldn't *dumb down* the programming model to reduce developer design errors. Rather, I feel that we should take the bull by the horns and look at the real issue of automating the granularity at run time (based on costing functions). However, to accomplish this, we need to give our runtime containers more knowledge about our *intent* (think 'use-case-based sequence diagrams'). Now, instead of asking for a single method to be called, we ask for a 'use case' to be fulfilled. IMHO, more emphasis needs to go on writing smart software that fulfills an intent, rather than acting out a predetermined recipe.
Does Indigo excite me? Not really - I see good concepts from P2P, AOP and Trust rolled together. The exciting part is that MS has the resources to pull it off and make it easy to use.
Chris is a smart guy - he might be right. I don't know.
Sunday, November 09, 2003
This post is about the hottest enterprise technology.
I bet you think I'm talking about web services. Well, I'm not. It's time for me to start blogging about RFID and more precisely EPC.
I've considered starting a new blog dedicated to RFID, but I think I'm going to keep the posts inside of this blog. Web services and RFID will likely settle into a symbiotic relationship.
About 6 years ago, I had sector-level responsibility for manufacturing systems at 3M. This involved building, buying and integrating all of the usual suspects: demand management, MRP, lab BOM, lab content management, and SCE (pick, pack, ship, label, optimize, capacity planning, etc.). Recently, I've had the pleasure of working on a supply chain project with Procter & Gamble. This has been a great experience. The first thing I noticed was that not much has really changed in the last decade. Sure, collaborative planning, forecasting, dynamic safety stocks, etc. are all improving. But for the most part the changes are incremental.
RFID / EPC is not incremental. It is monumental. I'm going to blog more on this later. For now, if you want to get educated, go to the following sites:
http://www.rfidjournal.com/
http://www.autoidcenter.org/
http://www.epcglobalinc.org
Saturday, November 08, 2003
Chris Sells and the Royal *We*
I just saw where Chris Sells of Microsoft was being questioned about the use of the word 'we':
In the '90s, we invented component technologies, like COM and Java, to bring DLLs into memory and we were very proud of ourselves.
Some of the readers were confused, thinking that Microsoft was claiming that they invented Java. But I understood, instead of "we", he meant to say, "people other than me and my company".
But then he goes on...
Unfortunately, we were so proud that we stretched the metaphor too far with technologies like DCOM, CORBA, and Java RMI. The problem is the idea of a proxy, which was designed to serve as an in-process stand-in for the remote code, hiding the fact that each method call was a round-trip of uncertain duration and uneven reliability. Indigo, on the other hand, is a platform technology that breaks from this metaphor to use a decidedly different way of connecting applications together. Specifically, Indigo uses services, not components, to model reusable units of code.
Holy shit - MICROSOFT USES SERVICES - why didn't the CORBA people think of that??? ROTFL
That's great... unlike DCE and CORBA, we (Chris + Microsoft + Indigo) use services.
Chris, with all due respect, it might be worth reading a book on the history of distributed computing, then rewriting your article.
Friday, November 07, 2003
Don Box on XAML
Don recently posted on XAML.
Take a good look at the source and the build code. Now ask yourself, are you excited to go out and whip up some XAML?
I.T. Doesn't Matter - Business Processes Do
A few days back I mentioned that the new Howard Smith book came out, called "I.T. Doesn't Matter - Business Processes Do". I've had enough conversations with Howard to believe that he is one of the best minds in process-driven, service oriented thinking. However, this book was quite disappointing.
Don't get me wrong. I think that Nicholas Carr is an incompetent pansy with a silver spoon stuck up his ass. As far as I can tell, Nicholas has never worked in an I.T. department, been a vendor to an I.T. department, or been the user of an I.T. department. He is a professional writer who gets paid by the word, regardless of the truth in the word.
Unlike Nicholas, Howard is an innovator, a practitioner and an evangelist. However, his critical analysis of Nicholas Carr's work was shabby at best. It appears as though he cranked out 120 pages of material while his emotions had control of him - and bitter emotions at that. It was clear to me that he was racing the publication to press while the Carr episode remained hot in people's minds.
Save your money. Here are a couple great books:
For business dudes: "Designing and Managing the Supply Chain"
For geeks: "Non Functional Requirements in Software Engineering"
Wednesday, November 05, 2003
Is Jon Udell Confused?
Is Jon Udell Confused? If not, I am.
A while back, a librarian wrote to me asking how she could integrate her OPAC with LibraryLookup. I investigated and found that her vendor's implementation was based on a Java applet, and there was no way to link into it. As I mentioned to Eric Rudder and Don Box at a meeting in Redmond, this librarian later posted to a mailing list that her OPAC couldn't support LibraryLookup because it was built on the "wrong kind" of software, where "wrong" meant -- though she wouldn't have called it this -- non-RESTful. For her, the richer experience of that Java applet was a poor tradeoff, since it precluded LibraryLookup's lightweight style of integration.
Is Jon confusing open systems with "a RESTful" approach? I might be confused... but it sounds like the librarian had an open integration problem - not something that necessarily demanded a RESTful solution. Let's see... if the year was 1993, the librarian may have had a 'CORBAful' issue... or if it was 1990, she had a 'DCE-ful' issue. Maybe what Jon is trying to say is that we finally have a quick & easy way to store off profile information - kind of like the 'Win.ini' file in early versions of Windows. It smells like Jon wants to solve some problem with REST, or I could be confused.
Novell Wakes Up
After a long, long, long sleep - it appears as though the team at Novell has woken up and decided to get in the game.
First, Novell acquires Ximian, which gives them Mono, a platform for running .Net applications on Linux.
Now, Novell is acquiring SuSE Linux for $210 million in cash.
The way I see it, IBM will be encouraged to remain pure Java - while Microsoft will be encouraged to remain pure .Net. This leaves the 'neutral' ground in the middle wide open.
So, where does Novell go from here? I see more acquisitions. My conversations with people close to Novell lead me to believe that Novell really believes that web services are the *network operating system*. Thus, the Ximian and SuSE acquisitions were only laying the foundation for a distributed computing platform. I would look for Novell to continue down the M&A path, but this time moving up the stack - taking a hard look at web service platform vendors and perhaps even tool vendors like Borland.
But the real question deals with timing. Is it too late for Novell? Did they already lose the hearts and minds of the developer? I personally don't think that it is too late - remember, Novell was the company that forged programs like 'Gold Certified Novell Partner'. If Novell continues to make bold moves, they will attract the talent to create new developer-community offerings.
To Novell - Congratulations. I hope all that sleep you took gave you the rest needed to get into the next big battle.
Tuesday, November 04, 2003
Orchestrating BPEL Validation
This is a repost from the SOA news group:
We have been testing our implementations against some pretty large BPEL documents with good results. Also, it is easy enough to expose syntactic validation as a web service. Thus, each vendor (OpenStorm, Collaxa, Vitria, etc.) would expose a service to validate a BPEL document; then we would publish a simple "validation orchestration" that tested the syntax against each vendor's implementation. The customer builds the BPEL schedule and then calls the orchestration, which in turn calls each vendor's validator. The results are aggregated and returned to the customer. I cannot commit on behalf of any other vendors, but I can commit that OpenStorm will offer this as a service.
The service world (and orchestration) may force standards to remain, well... standards.
Jeff
I would bet that my esteemed colleagues at Collaxa are game for an open validation service / orchestration as well.
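Sketched as a BPEL4WS 1.1 fragment (partner links, variables and fault handling omitted - illustrative only), the validation orchestration is a trivial parallel fan-out:
  <sequence>
    <receive partnerLink="customer" portType="val:ValidatorPortType"
             operation="validate" variable="bpelDoc" createInstance="yes"/>
    <flow>
      <!-- each vendor exposes the same hypothetical validator portType -->
      <invoke partnerLink="openStorm" portType="val:ValidatorPortType"
              operation="validate" inputVariable="bpelDoc" outputVariable="openStormReport"/>
      <invoke partnerLink="collaxa" portType="val:ValidatorPortType"
              operation="validate" inputVariable="bpelDoc" outputVariable="collaxaReport"/>
      <invoke partnerLink="vitria" portType="val:ValidatorPortType"
              operation="validate" inputVariable="bpelDoc" outputVariable="vitriaReport"/>
    </flow>
    <!-- aggregate the three reports (assign omitted) and return them -->
    <reply partnerLink="customer" portType="val:ValidatorPortType"
           operation="validate" variable="aggregateReport"/>
  </sequence>
An orchestration that validates orchestrations - there's something pleasingly recursive about that.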
Saturday, November 01, 2003
Publishing SOAP Calls
WSDL doesn't completely suck. But it isn't the friendliest vehicle for giving a person access to some piece of information.
Let's get real. WSDL as it exists today was largely designed by distributed computing gurus who have added the art of aspect-oriented programming to the art of IDL design, while plopping it all on top of our latest markup language, XML.
The Interface
Now, one thing that I do like about WSDL is that I can define the contract (or interface) without specifying an implementer (the binding / port). This allows me to head down the 'contract-first' design path. It also allows me to create a service contract and easily share it with other people. Interfaces with aspect-oriented (or declarative) resolution of non-functional requirements are pretty damn cool too. Well done.
The Service
A service is created by binding an interface to an implementation (e.g., listen for the call on port 80 over HTTP). Thus, a service not only specifies the contract (Types, Messages, PortTypes, Operations & Faults), but also the aforementioned deployment considerations (Service, Port, Binding).
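A stripped-down WSDL skeleton makes the split obvious - everything above the binding is contract, everything below is deployment (namespaces and details trimmed for readability):
  <definitions name="Calendar">
    <!-- the contract: what can be said -->
    <types>...</types>
    <message name="AgendaRequest">...</message>
    <portType name="CalendarPortType">
      <operation name="GetAgenda">...</operation>
    </portType>
    <!-- the deployment: how and where to say it -->
    <binding name="CalendarSoapBinding" type="tns:CalendarPortType">
      <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
    </binding>
    <service name="CalendarService">
      <port name="CalendarPort" binding="tns:CalendarSoapBinding">
        <soap:address location="http://example.org/calendar"/>
      </port>
    </service>
  </definitions>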
Design Time: When you are designing your functional solution, you knock out your contracts (interfaces).
Deployment Time: When you are designing your non-functional solution (scalability, availability), you knock out your services.
Publish Time: Now, an interesting question arises when I want to publish my contract and its binding for others to consume. As developers, we tend to think of sticking a pointer to the WSDL in the UBR or shoving it into XMethods. Nothing wrong with this, as long as you realize that WSDL isn't easy to consume and that it is very likely that someone will give it a shot and eventually give up because the WSDL didn't give enough information to actually be used for its intended purpose. For that matter, they may just look at the 25 different operations specified in the PortType and realize that they don't even know which operation to call.
Calls
This leads me to Calls. A Call is an instance of an invocation of a Service. Put another way, it is the SOAP envelope all filled out. In many cases, this is what people really want. Consider this: what if instead of publishing my WSDL (with many operations), I merely publish a single operation with many of the parameters already filled in (default values)? Now, instead of exposing a WSDL that has a PortType holding 25 different operations to my Calendar Server, I simply publish a SOAP document that performs a call:
What: CalendarAgendaRequest
Who: Jeff Schneider
How: Use the SOAP format and send it to the WS-Addressing location (XYZ)
Prior to the WS-* specs, publishing a call didn't do any good. The calls were self-contained (contractually), but were not self-contained from a service perspective (binding). That has all changed. This means that for the first time, we can pre-populate SOAP calls, save them off and make them available to our end users. This is a HUGE leap forward in usability.
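Concretely, the published call might be nothing more than this envelope - the calendar schema and the exact WS-Addressing namespace date below are placeholders of mine:
  <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                 xmlns:wsa="http://schemas.xmlsoap.org/ws/2003/03/addressing">
    <soap:Header>
      <!-- the binding travels with the call - no WSDL needed to dispatch it -->
      <wsa:To>http://calendar.example.org/agenda</wsa:To>
      <wsa:Action>urn:example:CalendarAgendaRequest</wsa:Action>
    </soap:Header>
    <soap:Body>
      <cal:CalendarAgendaRequest xmlns:cal="urn:example:calendar">
        <cal:owner>Jeff Schneider</cal:owner>
        <!-- a pre-populated default the business user can override -->
        <cal:date>today</cal:date>
      </cal:CalendarAgendaRequest>
    </soap:Body>
  </soap:Envelope>
Hand that document (or a URL to it) to an end user, and the only thing left to do is send it.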
Now, developers will have two options: 1. Create a WSDL with all operations and combinations (for power developers), and 2. Create a SOAP message, partially pre-populated (for business users). Now, in order for this to work, we have to quit writing applications that only suck in WSDLs (like InfoPath, Excel, etc.). In addition, you will have the option to pass in a URL to the SOAP envelope (AKA, a SOAP Pointer).
Again, the reason for publishing a call is to make it EASY for a non-developer to gain access to some operation.