Lean Development Applied To SOA


I’ve been doing a little bit of musing (“Is SOA an Agile Enterprise Framework?“) about a development framework to support SOA. However, maybe Lean Production/Thinking would be a better fit for SOA. In an earlier entry I mused about “How Web 2.0 supports Lean Production“. Let’s turn this question around and ask: “How can Lean development be used in support of SOA development?”

Lean focuses on the elimination of waste in processes. Agile, in comparison, is tuned toward practices that adapt efficiently to change. There are, though, some commonalities between Lean and Agile development, specifically:

  • People centric approach
  • Empowered teams
  • Adaptive planning
  • Continuous improvement

The last two bullet points align directly with the SOA Manifesto. For reference:

  • Business value over technical strategy
  • Strategic goals over project-specific benefits
  • Intrinsic interoperability over custom integration
  • Shared services over specific-purpose implementations
  • Flexibility over optimization
  • Evolutionary refinement over pursuit of initial perfection

Lean differs from Agile in that Lean is purported to be designed to scale (see “Set-based concurrent engineering” and Scaling Lean and Agile Development):

One of the ideas in lean product development is the notion of set-based concurrent engineering: considering a solution as the intersection of a number of feasible parts, rather than iterating on a bunch of individual “point-based” solutions. This lets several groups work at the same time, as they converge on a solution.

In contrast, Agile methods were meant for smaller, more nimble development teams and projects. One would therefore think that in the context of enterprise-wide SOA activities, Lean principles may offer greater value than Agile practices. Let’s see if we can convince ourselves of this by exploring it in more detail.

Lean Software Development is defined by a set of “seven lean principles” for software development:

  1. Eliminate Waste – Spend time only on what adds real customer value.
  2. Create Knowledge – When you have tough problems, increase feedback.
  3. Defer Commitment – Keep your options open as long as practical, but no longer.
  4. Deliver Fast – Deliver value to customers as soon as they ask for it.
  5. Respect People – Let the people who add value use their full potential.
  6. Build Quality In – Don’t try to tack on quality after the fact – build it in.
  7. Optimize the Whole – Beware of the temptation to optimize parts at the expense of the whole.

Can we leverage these principles as a guide for a better approach to SOA development?

Where can we find waste in the context of software development? Poppendieck has the following list:

  • Overproduction = Extra Features.
  • In Process Inventory = Partially Done Work.
  • Extra Processing = Relearning.
  • Motion = Finding Information.
  • Defects = Defects Not Caught by Tests.
  • Waiting = Delays.
  • Transport = Handoffs.

What steps in SOA development can we take to eliminate waste? Here’s a proposed mapping from each waste to its Lean SOA countermeasure:

  • Extra Features – If there isn’t a clear and present economic need for a Service then it should not be developed.
  • Partially Done Work – Move to an integrated, tested, documented and deployable service rapidly.
  • Relearning – Reuse Services. Employ a Pattern Language. Employ Social Networking techniques to enhance Organizational Learning.
  • Finding Information – Have all SOA contracts documented and human-testable on a shared CMS. Manage Service evolution.
  • Defects Not Caught by Tests – Design testable Service interfaces. Practice test-driven integration.
  • Delays – Development is usually not the bottleneck. Map the value stream to identify real organizational bottlenecks.
  • Handoffs – Service developers work directly with Service consumers (i.e. developers, sysadmins, help desk).
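One concrete way to attack the “Defects Not Caught by Tests” waste is a consumer-driven contract test against a Service interface. Below is a minimal sketch; `get_order` and its fields are invented stand-ins for a real Service call, not any particular API:

```python
# A minimal sketch of test-driven integration for a Service contract.
# The service name and fields are hypothetical, purely for illustration.

def get_order(order_id):
    """Stand-in for a call to a hypothetical Order service."""
    return {"id": order_id, "status": "shipped", "total_cents": 1499}

def test_order_contract():
    """Consumer-side contract test: assert only the fields we depend on."""
    order = get_order("42")
    assert set(order) >= {"id", "status"}   # required fields are present
    assert order["status"] in {"pending", "shipped", "cancelled"}

test_order_contract()
print("contract holds")
```

The point of asserting only on the fields the consumer uses is that the Service stays free to evolve everything else, which is exactly the flexibility Lean SOA is after.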

The customers for a Service are similar to the customers for an API. I wrote years ago about how the design and management of APIs leads to the development of damn good software. The same principles can be applied to the lean development of services. Taking some wisdom from that, here are some recommended practices for Lean SOA:

  • Designing Services is a human factors problem.
  • Design Services to support an Architecture of Participation. Focus on “Organizational Learning”.
  • Focus on what a user of your Service will experience. Simplicity is the #1 objective. Only when this has been accomplished (at least on paper) do we talk about implementation details.
  • Services aren’t included in a release until they’re very simple to use.
  • In tradeoff situations, ease of use and quality win over feature count.
  • Useful subsets of standards are OK in the short term, but should be fully implemented in the longer term.

  • Continuous Integration – Fully automated build process and tests.
  • Always Beta – Each build is a release candidate; we expect it to work.

  • Community Involvement – Community needs to know what is going on to participate. Requires transparency.
  • Continuous Testing.
  • Collective Ownership.
  • Preserve Architectural Integrity – Deliver on time, every time, while preserving architectural integrity. Deliver quality with continuity.
  • Services First – When defining a new Service, there must be at least one client involved, preferably more.


Finally, there is one last principle in Lean development, “Decide as Late as Possible”, that is of high importance. The ability to decide as late as possible is enabled by modularity. The absence of modularity makes composing new solutions, and therefore new integrations, extremely cumbersome. The key, however, is not to become a Cargo Cult and practice Lean SOA without understanding how one achieves modularity (or, to use another phrase, “intrinsic interoperability”). That, of course, is the subject of the Design Language that I am in the process of formulating.

In conclusion, Lean SOA (a mashup of Lean and SOA) follows these principles:

  • Eliminate Waste – Spend time only on what adds business value.
  • Create Knowledge – Disseminate and share Service knowledge with the organization and its partners.
  • Defer Commitment – Be flexible before you optimize.
  • Deliver Fast – Deliver value quickly and incrementally. Don’t try to boil the ocean.
  • Respect People – Let the people who add value use their full potential.
  • Build Quality In – Don’t try to tack on quality after the fact – build it in.
  • Optimize the Whole – Strategic goals over project-specific benefits.
  • Build Interoperability In – Services should be designed to be modular.

    This is as simple as it gets. The devil of course is in the details.

    Notes: You can find an earlier and different take on this here: SOA Agility.



    In Search of a Pattern Language for SOA Intrinsic Interoperability


    Any good Pattern Language should be based on a well-defined set of primitives (i.e. basic building blocks). Architectures and Design Patterns (referred to in the GoF book as micro-architectures) require a clear definition of constraints to be of any real value. Roy Fielding defines ReST in the context of constraints. In stark contrast, most SOA definitions that one can find, including the OASIS standard definition, fail to define the architectural constraints.

    In previous posts I have formulated a set of attributes that provide the definition of Services. I further refined those to this current definition:

    A Service Oriented approach satisfies the following:

    1. Decomposability – The approach helps in the task of decomposing a business problem into a small number of less complex subproblems, connected by a simple structure, and independent enough to allow further work to proceed independently on each item.
    2. Composability – The approach favors the production of Services which may then be freely combined with each other and produce new systems, possibly in an environment quite different from the one in which they were initially developed.
    3. Understandability – The approach helps produce software in which a human reader can understand each Service without having to know the others, or, at worst, by having to examine only a few of the others.
    4. Continuity – The approach yields a software architecture in which a small change in the problem specification will trigger a change in just one Service, or a small number of Services.
    5. Protection – The approach yields a software architecture in which the effect of an abnormal condition occurring at run time in a Service will remain confined to that Service, or at worst will only propagate to a few neighboring Services.
    6. Introspection – The approach yields an architecture that supports the search and inspection of data about Services (i.e. Service Meta-data).
    7. Remoteability – The approach yields an architecture that enables interaction between Services that reside in separate physical environments.
    8. Asynchronicity – The approach yields an architecture that does not require an immediate response from a Service interaction. In other words, it assumes that latency exists in either the network or the invoked Service.
    9. Document Orientedness – The approach yields an architecture where the messages exchanged in Service-to-Service interactions are explicitly defined and shared, and there is no implicit state sharing between interactions.
    10. Decentralized Administration – The approach yields an architecture that does not assume a single administrator for all Services.
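Document Orientedness (point 9) can be illustrated with a tiny sketch: the Service handles a self-contained document and returns one, with no implicit state shared between interactions. The function and field names below are hypothetical:

```python
import json

# A sketch of Document Orientedness: each interaction carries an
# explicitly defined, self-contained message; no hidden session state.

def handle(request_doc: str) -> str:
    """The service sees only the document, never implicit shared state."""
    request = json.loads(request_doc)
    response = {"order_id": request["order_id"], "status": "accepted"}
    return json.dumps(response)

reply = handle(json.dumps({"order_id": "42", "items": ["ABC-1"]}))
print(reply)
```

Because everything the interaction needs travels in the message, either side can be replaced, retried, or relocated without the other noticing, which is what makes properties like Remoteability and Asynchronicity practical.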

    This is an extended definition of Bertrand Meyer’s definition of Modularity. You can look at my previous post entitled “SOA and Modularity” to see how this compares with other definitions of SOA.

    Now if we were to consult the “SOA Manifesto” and its value system, then we could derive the following goal: “We believe in building modular systems through intrinsic interoperability and evolutionary refinement to achieve business value and satisfy strategic goals”. The key ingredient in this statement that is left ambiguous is “Intrinsic Interoperability”. The key question for anyone employing SOA is how to achieve it. Modularity and Evolutionary Refinement are well understood principles; Intrinsic Interoperability is not. One may believe that interoperability can be achieved by simply mandating a global standard. This can work in theory, however it rarely does in practice. Centralized planning is rarely a scalable approach; evolutionary refinement in fact demands a decentralized approach. The question one needs to ponder is how to build interoperable systems employing a decentralized approach. In the literature I have surveyed I have yet to find a cohesive treatment of how this can be done.

    Over the past decade many Design Patterns have been proposed to address many of the concerns that are introduced within a Service Oriented Architecture. The most notable collections have been the following:

    I’ve taken the trouble to comb through these patterns and to identify which ones lead to improved intrinsic interoperability. One of the challenges in developing a pattern language is the creation of a categorization that covers the entire collection.

    Service Identification Patterns

    • Dynamic Discovery – When a Service joins a network it might not have any knowledge about which other Services are available.
    • Absolute Object Reference – The notion of an identifier to a service that can be exchanged by other services and used to invoke the original service is a key ingredient for Service mobility.
    • Lookup – A Service is selected based on the query of Services in a directory. Provides an additional layer of indirection in identifying services.
    • Referral – A Service is selected based on the consultation of another Service. The difference from Lookup is that another service is responsible for making the selection.
    • Proxy – A service communicates with another service whose identity it does not have, or which it cannot reach directly.
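The Lookup pattern above can be sketched as a toy in-memory directory; the capability names and endpoint URL below are purely illustrative, not any particular registry product:

```python
# A toy sketch of the Lookup pattern: a directory maps a query
# (here, a capability name) to a registered service endpoint.

class ServiceDirectory:
    def __init__(self):
        self._services = {}          # capability -> list of endpoints

    def register(self, capability, endpoint):
        self._services.setdefault(capability, []).append(endpoint)

    def lookup(self, capability):
        """Return any endpoint offering the capability, or None."""
        matches = self._services.get(capability, [])
        return matches[0] if matches else None

directory = ServiceDirectory()
directory.register("invoicing", "http://invoices.internal/api")
print(directory.lookup("invoicing"))
```

The extra layer of indirection is the point: consumers bind to a capability, not to an address, so endpoints can move or multiply without breaking them.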

    Service Dependency Patterns

    • Termination Notification – A mechanism to indicate when a Service becomes permanently unavailable is necessary to manage the evolution of Services.
    • Lease Renewal – This mechanism is similar to the previous one, however the onus is placed on the consuming service to renew its dependency.
    • Reminder – Removes the requirement for a Service to maintain its own scheduling service.
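Lease Renewal can be sketched in a few lines, assuming the consuming service calls `renew()` periodically; the class name and duration are illustrative:

```python
import time

# A sketch of the Lease Renewal pattern: a dependency stays alive only
# while the consumer keeps renewing it; unrenewed leases simply expire.

class Lease:
    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.expires_at = time.monotonic() + duration_s

    def renew(self):
        """Consumer re-asserts interest, pushing the expiry forward."""
        self.expires_at = time.monotonic() + self.duration_s

    def is_active(self):
        return time.monotonic() < self.expires_at

lease = Lease(duration_s=30.0)
lease.renew()
print(lease.is_active())
```

The design choice worth noting is that no termination message is ever required: a consumer that crashes or loses interest just stops renewing, and the dependency cleans itself up.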

    Service Extension Patterns

    • Invocation Interceptor – Provides the capability of dynamically introducing new Service functionality.
    • Invocation Context – Permits new Service functionality to be added that is dependent on invocation context rather than Service definition.
    • Protocol Plug-in – Provides an explicit mechanism for introducing a new communication protocol to an existing Service.
    • Location Forwarder – A specialization of Invocation Interceptor where the Forwarder sends an invocation to another Service.
    • Delegation – Where a Service allocates a task previously allocated to it to another Service.
    • Escalation – Where a Service attempts to progress a work item that has stalled by offering it to another Service.
    • Deallocation – Where a Service makes a previously started task available for offer and subsequent distribution.
    • Reallocation – Where a Service allocates a task that it has started to another Service. Can be stateful, where the current state of the task is retained, or stateless, where the task is restarted.
    • Suspension/Resumption – Where a Service temporarily suspends execution of a task or recommences execution of a previously suspended task.
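The Invocation Interceptor pattern maps naturally onto a decorator. The sketch below introduces logging around a hypothetical service operation without touching its definition; the operation and its fields are invented for illustration:

```python
import functools

# A sketch of the Invocation Interceptor pattern: wrap a service
# operation so cross-cutting behavior (here, logging) is introduced
# dynamically, without modifying the operation itself.

def intercept(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"invoking {fn.__name__} with {args}")
        result = fn(*args, **kwargs)
        print(f"{fn.__name__} returned {result}")
        return result
    return wrapper

@intercept
def quote_price(sku):
    return {"sku": sku, "price_cents": 250}   # hypothetical service logic

quote_price("ABC-1")
```

The same wrapping slot is where a Location Forwarder would sit: instead of logging and calling through, the interceptor would route the invocation to another Service entirely.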

    Service Negotiation Patterns : The Customer and Performer negotiate until they reach an agreement (commitment) about the work to be fulfilled.

    • Receiver Cancels – Receiving Service can cancel within a certain timeframe.
    • Sender Cancels / Contingent Request – Sending Service can cancel within a certain timeframe.
    • Binding Request – A sending party sends an offer that it will agree to if the receiving party accepts.
    • Binding Offer – A sending party requests an offer that will be responded to with a binding offer by the receiving party.
    • Resource-Initiated Allocation – The ability for a resource to commit to undertake a work item without needing to commence working on it immediately.
    • Resource-Initiated Execution – Offered Work Item – The ability for a resource to select a work item offered to it and commence work on it immediately.
    • Resource-Determined Work Queue Content – The ability for resources to specify the format and content of work items listed in the work queue for execution.
    • Selection Autonomy – The ability for resources to select a work item for execution based on its characteristics and their own preferences.
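The Sender Cancels / Contingent Request pattern can be sketched as an offer that is only acceptable within a timeframe and may be withdrawn by the sender; all names and durations below are illustrative:

```python
import time

# A sketch of Sender Cancels / Contingent Request: the sender's offer
# is valid only within a timeframe, and may be withdrawn before then.

class ContingentRequest:
    def __init__(self, payload, valid_for_s):
        self.payload = payload
        self.deadline = time.monotonic() + valid_for_s
        self.cancelled = False

    def cancel(self):
        """Sender withdraws the offer; effective before acceptance."""
        self.cancelled = True

    def accept(self):
        """Receiver may accept only while valid and not cancelled."""
        return (not self.cancelled) and time.monotonic() < self.deadline

req = ContingentRequest({"task": "review"}, valid_for_s=60.0)
print(req.accept())   # True while within the timeframe
```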

    Service Performance Patterns: The Performer fulfills the agreement.

    • Role-Based Distribution – The selection of a service to perform a task is based on the role of a service.
    • Deferred Distribution – The selection of a service to perform a task is deferred to the time of the request.
    • Case Handling – The selection of a service to perform a task is based on the case of the request.
    • Capability-Based Distribution – The selection of a service to perform a task is based on the capability of the service.
    • History-Based Distribution – The selection of a service to perform a task is based on a Service’s handling history.
    • Organisational Distribution – The selection of a service to perform a task is based on the relationship of the service with other services.
    • Two Phase Execution – A service sends plan information prior to the start of execution.
    • Prepare to Start / Start – A service waits for a permission to start prior to the start of execution.
    • Interleaved Parallel Routing – A partial ordering of tasks is defined, and the tasks can be executed in any order that conforms to the partial ordering.
    • Deferred Choice – A point in a process where one of several branches is chosen based on interaction with the operating environment.

    Service Reporting Patterns – The performer reports on the status of the execution of the agreement.

    • Fire-and-Forget – Invoke a Service without expecting a response.
    • Request-Response with Retry – Invoke a Service with the expectation that a retry does not alter the semantics of the previous invocation.
    • Polling – Periodically invoke a Service to derive status.
    • Subscribe-Notify – Subscribe to a Service to receive future notifications.
    • Quick Acknowledgment – Immediately acknowledge receipt of a request, deferring the actual processing.
    • Sync with Server – Provide a mechanism to synchronize with a Server’s data.
    • Result Callback – Provide a mechanism for the invoked Service to asynchronously return a response.
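Subscribe-Notify, for instance, reduces to a simple observer arrangement: consumers register callbacks and the Service pushes status changes to them. The names in this toy sketch are invented throughout:

```python
# A sketch of the Subscribe-Notify pattern: consumers register callbacks
# and the service pushes future status changes to every subscriber.

class OrderStatusFeed:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def notify(self, order_id, status):
        for callback in self._subscribers:
            callback(order_id, status)

feed = OrderStatusFeed()
received = []
feed.subscribe(lambda oid, st: received.append((oid, st)))
feed.notify("42", "shipped")
print(received)
```

Contrast this with Polling, where the same status would be pulled on a timer: push trades simplicity on the consumer side for the Service having to track its subscribers.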

    Service Acceptance (Satisfaction) – The Customer evaluates the work and either declares satisfaction or points out what remains to be done to fulfill the agreement.

    • Retry
    • Compensating Action

    Clearly there’s a lot of interesting literature out there that can provide a lot of insight into the interoperation of Services. The above list is just a rough sketch and I’m hoping to provide a more cohesive set over time.

    TBD: Conversation join, Conversation refactor, Initiate conversation, Follow conversation,
    Leave conversation, Atomic consumption.



    More Insightful SOA Design Patterns


    The website SOAPatterns.org has a wealth of SOA Design Patterns that is worthy of detailed study and exploration. Although I personally think the book SOA Design Patterns by Thomas Erl is two-thirds filled with vacuous patterns, the website, which was created while the book was being written, has a wealth of valuable contributions. In fact, if you ever read the book, you would immediately notice that the insightful contributions come from the contributors and not the original author. This is a bit unfortunate since, historically, Design Patterns books have mostly been written in collaboration. The five-volume series “Pattern-Oriented Software Architecture”, compiled from 1996 to 2007, is one of my favorites. My big beef with the Erl book is that most of the credit goes to Erl, while he has provided the most mediocre of contributions.

    However I digress. The topic of this entry is the other Design Patterns that unfortunately didn’t make it into the book. My objective is to give a brief summary of some of the more interesting ones. So with little fanfare, here’s my list of interesting SOA design patterns from the candidate list:

    1. Alternative Format – This pattern is quite prevalent today. Typically JSON and XML are formats that are supported. In Web 2.0 based systems JSON is extremely important in allowing pure javascript clients to consume services.
    2. Code on Demand – This pattern is simply the embedding of code in a response that a client can subsequently execute. It’s most prevalent today as Javascript in web browsers; Applets and Flash are other examples. This very capability is what enables AJAX and therefore a lot of Web 2.0 functionality. There is, however, another pattern that is a bit more unique and interesting, which I would call DSL on Demand, the difference being that the execution is performed on the server and not on the client.
    3. Consumer-processed Composition (Balasubramanian, Carlyle, Pautasso) – I would call this pattern “Integration at the Glass”. In general, by deferring execution logic to when it is bound to a consumer, one can achieve more flexibility.
    4. Enterprise Domain Repository (Lind) – I would prefer the name “Multi-source domain objects”. This is a common problem with legacy systems, and the handling of it should be in any SOA toolbox.
    5. Entity Endpoint (Balasubramanian, Carlyle, Pautasso ) – This pattern comes naturally in a ReSTful architecture. However even for WS-I or CORBA based systems, there needs to be a mechanism to exchange URLs or pointers so that one can resolve entities. I would call this pattern “Durable Reference”. It is indeed surprising that many SOA based initiatives treat this capability as an afterthought.
    6. Entity Linking (Balasubramanian, Webber, Erl, Booth) – This pattern again comes from ReSTful architecture. State exists in the clients and state transitions are achieved by URL linking. This I would call “Trampoline Control Flow”.
    7. Endpoint Redirection (Balasubramanian, Carlyle, Pautasso)
      - Another ReSTful pattern. This in general is about redirecting an invocation to another service. This mechanism is quite useful in managing the evolution of a system. For example, a service may redirect a request to an upgraded service.
    8. Forwards Compatibility (Orchard)
      – An interesting pattern that is useful for Service evolution in that it maintains backward compatibility and supports extensions in the communications between services.
    9. Idempotent Capability (Wilhelmsen, Pautasso) – The simple identification of the idempotency of services is a critical feature in building optimal distributed systems.
    10. Legacy Router (O’Brien) – There is a generalization of this pattern in that every dynamic system must begin from a well-known landmark. This pattern exists in many dynamic discovery based P2P networks.
    11. Message-Based State Deferral (Balasubramanian, Carlyle, Pautasso) – This again is a consequence of a ReSTful based approach. The idea however is relevant for distributed systems in that there are benefits in maintaining state in the client or in the message that is exchanged between services.
    12. Service Virtualization (Roy) – There seems to be a common theme among many patterns in that greater flexibility is achieved if the reference to a service is a virtual one whose behavior can change dynamically.
    13. Validation by Projection (Orchard) – Seems to be a subset of Partial Validation; I’m unsure as to what the difference is, however I will give the pattern the benefit of the doubt since, after all, both patterns are written by the same authors.

    Many of these patterns are clearly inspired by ReST architecture (a book is in the works to address this). Also, I’ve purposely ignored those patterns that involve optimizing resource allocation in distributed systems, with the exception of the Idempotent Capability pattern. For SOA, the emphasis is on pursuing flexible architectures in favor of optimal ones. So it’s best that focus be kept, lest we be distracted and prematurely optimize our architectures.
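To make the Idempotent Capability pattern concrete, here is a toy sketch in which repeated submissions with the same (hypothetical) idempotency key produce exactly one side effect; a real service would persist the key-to-result map rather than hold it in memory:

```python
# A sketch of the Idempotent Capability pattern: retries carrying the
# same idempotency key replay the stored result instead of re-charging.

class PaymentService:
    def __init__(self):
        self._processed = {}       # idempotency key -> prior result

    def charge(self, key, amount_cents):
        if key in self._processed:          # retry: replay stored result
            return self._processed[key]
        result = {"charged": amount_cents}  # the side effect happens once
        self._processed[key] = result
        return result

svc = PaymentService()
first = svc.charge("req-001", 500)
retry = svc.charge("req-001", 500)
print(first is retry)   # True: the retry did not charge again
```

This is what makes the Request-Response with Retry reporting pattern safe: a retry cannot alter the semantics of the previous invocation.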

    My hidden agenda in exhaustively covering the SOA Design Patterns is that I have a Pattern Language of my own in the works. Over time I will discuss this in more detail.



    Is SOA an Agile Enterprise Framework?


    Is SOA an Agile Enterprise Framework? Let’s examine that in this entry. The consensus definition of SOA is covered by the SOA Manifesto, here are its principles:

    • Business value over technical strategy
    • Strategic goals over project-specific benefits
    • Intrinsic interoperability over custom integration
    • Shared services over specific-purpose implementations
    • Flexibility over optimization
    • Evolutionary refinement over pursuit of initial perfection

    Now there probably doesn’t exist a consensus definition for Agile Enterprise Framework, however there have been a couple of proposals. Below are the principles described by one of them:

  • Focus on people, not technology or techniques
  • Keep it simple
  • Work iteratively and incrementally
  • Roll up your sleeves
  • Look at the whole picture
  • Make enterprise architecture attractive to your customers

    Making the comparison, SOA doesn’t appear to cover certain aspects of an Agile Enterprise Framework. These specifically are the following:

  • Focus on people, not technology or techniques
  • Keep it simple
  • Roll up your sleeves

    SOA’s other principles are compatible with an Agile Enterprise Framework. You can therefore have an embodiment of SOA that is indeed an Agile Enterprise Framework. “Agile SOA” may in fact be quite useful. There are a couple of articles addressing this: “Agile SOA: Mad Science or Solid Reality?” and “Agile Enterprise Architecture is not an Oxymoron!“.

    I have two ideas that can help fill the gap to achieve Agile SOA: focusing on Web 2.0 technologies to enhance communication and collaboration, and keeping it simple by employing ReST.



    The SOA Manifesto : Finally a Final Version for SOA


    I confess that I haven’t kept myself abreast of SOA developments. I mean, who really has the inclination to pursue this subject when its very definition keeps changing with the wind? I checked, and it appears that the last time I blogged about the subject was way back in early 2007. I have always been more inclined to supporting a Web Oriented Architecture. There was a time when I aggressively argued the advantages of ReST. Nevertheless, I have always felt exasperated debating with SOA advocates in that it was never easy to pin them down on a definition.

    Back in 2007, Oracle and Gartner decided to co-opt SOA and christen a new version called “SOA 2.0″. That immediately created a massive uproar among the SOA community, which claimed that it was senseless to give a version to a concept that even at that time remained quite nebulous and fleeting in its definition. At the time, however, I argued that SOA 2.0 may in fact be a good thing. It was an opportunity to pin down a SOA definition. My argument was that a 2.0 version implied that there indeed existed a 1.0 version, and that version was in need of a healthy revision. The SOA community however never acknowledged SOA 2.0, thus never acknowledging the existence of a 1.0 version. So it turned out there was little sense in revising something that doesn’t exist.

    In 2007, however, I conjectured a different revision to SOA. It turns out that Oracle and Gartner were not the only ones defining “SOA 2.0″. Richard Veryard in 2005 was defining something else: he wrote about a bottom-up, decentralized process of building SOA that could potentially leverage Web 2.0 ideas. Gregory Hohpe, of Enterprise Integration Patterns fame, had his own insight that there was likely some value in pursuing agile ideas in combination with SOA.

    By late 2009, the SOA community got together and finally agreed on what they called the SOA Manifesto. It took close to a decade for SOA proponents to finally draw the line in the sand and make a commitment to something that appears to be a definition. One cannot fail to notice the intellectual dishonesty here, in that these folks have pitched SOA for years without ever clearly defining what it meant. Nevertheless, this development is a very good event for the software development community: SOA now has an established definition and it is no longer a moving target.

    Let me attempt to deconstruct the SOA Manifesto in an attempt to glean some insight into its definition of SOA.

    SOA according to the manifesto is the architecture that arises from applying service orientation. It is indeed curious that “service orientation” here is not a proper noun. The manifesto states a value system and a collection of guiding principles.

    The SOA value system is paraphrased below as:

    1. Business Value. Technology for its own sake should be avoided.
    2. Strategic Goals. You can’t really talk about SOA if you aren’t looking at the big picture.
    3. Intrinsic Interoperability. There’s this notion of ‘intrinsic interoperability’ that needs to be achieved.
    4. Shared Services. Services are to be designed to be shared (note that reuse is not the term used here).
    5. Flexibility is preferred over optimization (typical trade off is flexibility versus efficiency).
    6. Evolutionary Refinement. SOA doesn’t attempt to boil the ocean; an incremental approach is preferred.

    The remainder of the document describes guiding principles. Here’s a quick take for each of the principles:

    • Respect the social and power structure of the organization. – Sounds like being mindful of Conway’s Law.

    • Recognize that SOA ultimately demands change on many levels. – If you can’t convince everyone, then you could fail. This statement seems to run counter to the idea of evolutionary refinement.

    • The scope of SOA adoption can vary. Keep efforts manageable and within meaningful boundaries. – Look for low hanging fruit and quick wins to show success.

    • Products and standards alone will neither give you SOA nor apply the service orientation paradigm for you. – Don’t just buy technology, hire consultants as well.

    • SOA can be realized through a variety of technologies and standards. – Breaks the bonds completely from WS-*.

    • Establish a uniform set of enterprise standards and policies based on industry, de facto, and community standards. – Don’t attempt to re-invent the wheel.

    • Pursue uniformity on the outside while allowing diversity on the inside. – Some kind of notion of polymorphism and encapsulation going on here.

    • Identify services through collaboration with business and technology stakeholders. – Don’t design in a vacuum; get people involved. What about the consumers, do they have a voice?

    • Maximize service usage by considering the current and future scope of utilization. – Plan for change.

    • Verify that services satisfy business requirements and goals. – Get customer feedback.

    • Evolve services and their organization in response to real use. – YAGNI principle.

    • Separate the different aspects of a system that change at different rates. – Probably the more interesting guideline here. Usually, architectural layering is the approach, however I am unclear if there are other mechanisms to do this. If this is meant to imply the notion of ‘separation of concerns’, then I’ll be really disappointed.

    • Reduce implicit dependencies and publish all external dependencies to increase robustness and reduce the impact of change. – One of the better guidelines; a generalization of the idea of the utility of documenting, or even exposing for machine consumption, a service’s dependencies.

    • At every level of abstraction, organize each service around a cohesive and manageable unit of functionality. – A.K.A. Cohesion.

    First observation: I would have liked to see more coherence between the value system and the guiding principles. Which guiding principles encourage ‘intrinsic interoperability’, ‘shared services’, ‘flexibility’ or even ‘evolutionary refinement’? I mean, it’s disingenuous to claim a preference for an objective and then provide very few principles to help you achieve it. Identifying separation of concerns and managing coupling and cohesion are all important, but could someone please provide some unique insight here?

    Second observation: most principles appear to be what any best-practice document for software development would recommend. Work in manageable chunks, technology doesn’t cure all ills, don’t re-invent the wheel, get customer feedback, layer your system, manage coupling and cohesion. The principles of the SOA Manifesto state the obvious and don’t lead to any better clarity or differentiation from any other software development methodology.

    So there it is, I’ve gone through the document and am feeling quite unimpressed. Most of what is contained in the guiding principles are practices that are well established. I can’t see why anyone would disagree with any of them. It is however disappointing that these are supposed to be guiding principles, in clear terms principles that help guide you to maintain the value system, yet the document says nearly nothing about four of the six values (i.e. Intrinsic Interoperability, Shared Services, Flexibility and Evolutionary Refinement).

    One would think this value set of Intrinsic Interoperability, Shared Services and Flexibility would be something new. Unfortunately, it isn’t new at all; it is as old as the concept of Modularity. I’ve written about this in “SOA and Modularity“. The value statement can thus be refined to a single statement: “We believe in building modular systems through evolutionary refinement (and intrinsic interoperability) to achieve business value and satisfy strategic goals“. Who can’t agree with this statement? Are we saying that there exist practitioners in software development who subscribe to the complete opposite (i.e. who believe in building monolithic systems through a big-bang approach (and incompatible modules) with the intent of avoiding business value and compromising strategic goals)?

    I could be asking too much of this document. To its credit, SOA finally has a clear definition: any architecture that arises from a process that intends to achieve business value and strategic alignment through the evolutionary refinement of modular systems. Although this statement may appear obvious, I wouldn’t be surprised if there are a few people out there still espousing a big-bang monolithic solution. SOA may just turn out to be one of the more agile enterprise architecture frameworks in circulation.

    The other beneficial conclusion I see is that SOA divorces itself from the requirement to support WS-* based standards in a SOA initiative. That in itself is reason to celebrate.



    SOA Design Patterns Book – A Review


    The problem with SOA is that it has always been too abstract. The SOA Manifesto that was signed in late 2009 confirms this. No longer is it a set of technologies or even a set of standards; it is simply the architecture that arises from applying service orientation, which is defined as building modular systems via evolutionary refinement to achieve business goals. This definition is a bit too abstract for me. Astonishingly, that’s as good a definition as you can find in the manifesto.

    So I decided to dig deeper; perhaps I can glean some knowledge by going through the patterns discovered in practice and documented in the book “SOA Design Patterns” by Thomas Erl. This is a massive book with over 800 pages; the patterns in the book can also be found at www.soapatterns.org. The problem I have with almost all SOA books is that SOA is discussed in a manner that reveals little differentiation from any other distributed processing model.

    The first hundred pages of the book are introductory material covering SOA and Design Patterns. There’s nothing new here that you can’t find in other books on those subjects. So let’s dive straight into the meat of the book, the Design Patterns themselves.

    I’m a big fan of Design Patterns; however, I abhor it when authors define a Design Pattern that is obviously implied by the domain the patterns are being defined for. For example, in the Object Oriented Programming (OOP) domain, Polymorphism is not a Design Pattern; it is an attribute of OOP. When I see these kinds of patterns, it’s an indicator to me of a lack of rigor in vetting the patterns.

    In the original GoF book, the OOP design patterns are categorized into three sets: Behavioral, Creational and Structural. In this book the categories are Service Inventory, Service Design and Service Composition. The “Service Inventory” category is the most difficult to grasp, simply because it is too abstract and its definitions are very weak. The Service Design category covers concerns revolving around the design of a service by itself. The Service Composition category covers how services are composed together and how they interact. Service Inventory, however, seems to describe services at a meta-level, that is, how one would describe services in the first place.

    The book is a very difficult read because it avoids the more concise terminology commonly used in other computer science texts. Furthermore, it employs pattern names that, although they sound familiar, can lead to a lot of confusion. In my attempt to understand the book, I will be relating its Design Patterns to more commonly understood computer science terminology.

    The first category of patterns, “Service Inventory Patterns”, covers ways in which services are to be described. It can be a bit confusing to discuss ideas at a meta-level, in other words to describe how you describe things. That’s the main flaw of this section: it is never made apparent what is actually being talked about.

    Chapter 6 covers “Foundational Inventory Patterns”. These patterns are simply about recording and categorizing services. The “Enterprise Inventory Pattern” says that services should be recorded in an inventory; said inventory can be further categorized into different domains (i.e. “Domain Inventory Pattern”) and into various interacting layers (“Service Layers”). Each service can be normalized to minimize overlap in functionality (i.e. “Service Normalization Pattern”) while making sure to avoid redundant logic (“Logic Centralization Pattern”). A standard protocol (“Canonical Protocol”) and standard schemas (“Canonical Schema”) may be defined in the inventories. Nothing really informative in this chapter; it’s all about bookkeeping. Maintaining a metadata repository to track a system’s artifacts is nothing new. I personally would have condensed this into a single “Meta Service Pattern” and shoved all the different aspects into it.
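    To show what I mean by a condensed “Meta Service Pattern”, here is a toy inventory sketch. The class and field names are assumptions of mine, not Erl’s vocabulary; it records services, groups them by domain, and rejects overlapping registrations in the spirit of Service Normalization:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal sketch of one registry that does the chapter's bookkeeping:
// record services, categorize them by domain and layer, forbid duplicates.
public class ServiceInventory {

    // Illustrative metadata record; a real inventory would carry much more.
    record ServiceEntry(String name, String domain, String layer) {}

    private final Map<String, ServiceEntry> entries = new HashMap<>();

    void register(ServiceEntry e) {
        // Service Normalization in miniature: no two entries for one name.
        if (entries.containsKey(e.name()))
            throw new IllegalStateException("duplicate service: " + e.name());
        entries.put(e.name(), e);
    }

    // Domain Inventory in miniature: query services by domain.
    List<ServiceEntry> byDomain(String domain) {
        return entries.values().stream()
                .filter(e -> e.domain().equals(domain))
                .toList();
    }

    public static void main(String[] args) {
        ServiceInventory inv = new ServiceInventory();
        inv.register(new ServiceEntry("invoice", "finance", "entity"));
        inv.register(new ServiceEntry("payroll", "finance", "process"));
        System.out.println(inv.byDomain("finance").size());
    }
}
```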

    Chapter 7 covers “Logical Inventory Layer Patterns” which in my opinion simply talks about the kinds of services that may be implemented (really just another categorization). That is one can talk about Utility, Entity and Process focused services.

    Chapter 8 covers “Inventory Centralization Patterns”. In general Processes, Schemas, Policies and Rules (which incidentally are all meta-data) can be positioned in a central location so as to avoid duplicate and inconsistent definitions. I would have just called this “Source of Truth Pattern”.

    Chapter 9 covers “Inventory Implementation Patterns”, which would mean something along the lines of how you implement ‘metadata’. Unfortunately I fail to see the logic behind why the patterns in this chapter are collected in this category. The category seems to consist mostly of patterns involving the sharing of compute resources across multiple services. The first pattern, “Dual Protocols”, doesn’t really belong here; it is about supporting more than one protocol for a given service. I would in fact rename this the “Service Virtualization” pattern. “Canonical Resources” is about providing standard interfaces to compute resources. “State Repository” is about providing a utility service for storing service state. “Stateful Services” is, well, about services that maintain their own state. “Service Grid” is some kind of service fabric that provides high scalability and fault tolerance for services that require state. I don’t know why this is a pattern; it seems to be more of a technology. “Inventory Endpoint” is a kind of service that acts as a facade to multiple services. “Cross Domain Utility Layer” provides utility services that span multiple domains, though this pattern seems to be a replay of a previously mentioned layering pattern.

    Chapter 10 covers “Inventory Governance Patterns”, which would mean managing ‘metadata’. “Canonical Expression” states that there should be a standard way of defining contracts. “Metadata Centralization” states that there should be a registry for storing services for discovery. I would rename this pattern “Metadata Discovery” to disambiguate it from the “Inventory Centralization Patterns”. The key point here is that metadata should be discoverable by the services within the system. “Canonical Versioning” states that there should be a standard way of defining versions of services; this pattern is in fact ambiguous with a later pattern describing the idea that there should be a language for versioning.

    The next set of chapters covers Service Design.

    Chapter 11 covers “Foundational Service Patterns”. The problem I have with this chapter is that it talks about fundamental concepts in a way that is apparently difficult to differentiate from talking about metadata. In other words, if I can describe my vocabulary then I am in essence defining the foundations of what I’m describing. The chapter includes patterns that one would assume are all too obvious. For example, the “Functional Decomposition” pattern states that a problem can be broken down into smaller problems. The inclusion of this kind of pattern is just plain absurd. There is “Service Encapsulation”, which has a misleading name but is about exposing existing logic as a service that can be used outside of its original context; again Erl continues to state the obvious through complex pattern definitions. Finally there are the “Agnostic Context” and “Non-Agnostic Context” patterns, which are all about identifying multi- or single-purpose services. This chapter seems completely pointless in my opinion.

    Chapter 12 covers “Service Implementation Patterns”. This is when there is finally some meat on the bones. However, some of the patterns here are miscategorized in that they are more about Service Composition; for example, “Service Facade”, “Redundant Implementation” and “Service Data Replication” should be in the Service Composition category. In fact, it would have made better sense to categorize those patterns and the ones in chapter 9 under “State Handling Patterns”. The “Partial State Deferral” pattern is indeed an implementation detail of a service, concerning how it manages its runtime state. I personally am a bit ambivalent about SOA design patterns that concern themselves with resource optimization; those kinds of patterns belong elsewhere. The “Partial Validation” pattern permits services to focus on what’s relevant in data and ignore the rest. This is a very useful capability that supports both versioning and interoperability. The “UI Mediator” pattern is likely the most unique pattern I’ve found in this book. It is about providing a mechanism for giving a user timely feedback on the progress of a service execution.
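    A minimal sketch of the Partial Validation idea, assuming messages arrive as simple key-value maps (a deliberate simplification of any real contract; the field names are invented). The consumer checks only the fields it depends on and ignores everything else, which is exactly why the pattern helps with versioning:

```java
import java.util.Map;
import java.util.Set;

// Sketch of "Partial Validation": validate only the fields this consumer
// cares about, so fields added by a newer service version don't break it.
public class PartialValidator {

    // The subset of the contract this particular consumer depends on.
    static final Set<String> REQUIRED = Set.of("orderId", "amount");

    static boolean isValid(Map<String, String> message) {
        // Unknown keys are deliberately ignored; only required keys matter.
        return message.keySet().containsAll(REQUIRED);
    }

    public static void main(String[] args) {
        // "currency" is a newer field this consumer never heard of: still valid.
        System.out.println(isValid(
                Map.of("orderId", "42", "amount", "9.99", "currency", "EUR")));
    }
}
```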

    Chapter 13 covers “Service Security Patterns”. This is a very coherent category in that it restricts itself to the concern of securing services. This is probably one of the better chapters, and the interesting coincidence is that none of its patterns were written by Erl. In fact, as a rule of thumb, the patterns written by someone other than Erl tend to be more valuable. I have particularly high regard for the patterns written by David Orchard; these are non-obvious and quite insightful. However, Erl at times creates a pattern that appears to be a duplicate of one of Orchard’s (i.e. Version Identifier) and does a very poor job of presenting it (i.e. Canonical Versioning).

    Chapter 14 covers Service Contract Design Patterns. This is actually a good categorization, although I would have chosen a different name: “Contract Coupling” patterns. “Decoupled Contract” states that a contract should be decoupled from its implementation; this is actually a good practice worth emphasizing. “Contract Centralization” states that all access to a service is through its contract; a better name would be Service Encapsulation. “Contract Denormalization” seems to run counter to chapter 7’s service normalization: the pattern states that redundancy in a contract may be required to reduce demands on consumers. Patterns like this, which appear to be in conflict with other patterns, are actually the ones that can be quite insightful. However, I would rename it “Contract Redundancy”. “Concurrent Contracts” is where a service defines different kinds of contracts depending on the target consumer, again running counter to service normalization. Finally, there is a very interesting pattern, “Validation Abstraction”, where validation logic is made portable from the service contract.

    Chapter 15 covers Legacy Encapsulation Patterns. This is yet another bad chapter. It covers the “Legacy Wrapper” pattern, which is clearly the same as “Service Encapsulation”. The “Multi Channel Endpoint” pattern is intended to support multiple user access channels (e.g. laptop, mobile, etc.), which again is expounding on the obvious: services are meant to be shareable across multiple contexts, so is it not blindingly obvious that multiple access channels would share the same service? Finally there’s the “File Gateway” pattern, which is the same thing as the “Protocol Bridging” pattern described in a later chapter. This chapter makes me wonder about the level of technical sophistication of the target audience.

    Chapter 16 covers Service Governance Patterns. The “Compatible Change” pattern, written by David Orchard, discusses how to change a contract without affecting legacy consumers. A good example of a well-written and insightful pattern. The same goes for the next pattern, “Version Identification”, which describes the need to define a version vocabulary that identifies the compatibility constraints between versions. The “Termination Notification” pattern is another one that is all too easy to forget: there should be a mechanism for contracts to express service termination information. This is the point where the chapter turns for the worse. The “Service Refactoring” pattern is an obvious consequence of the “Decoupled Contract” pattern described previously. The “Service Decomposition” pattern is just a refactoring technique, and the same goes for the “Proxy Capability” pattern. Now my head is beginning to hurt. Then you find “Decomposed Capability”, with the following description: “How can a service be designed to minimize the chances of capability logic deconstruction?”. I’ve simply got little patience left to figure out what is meant here. The same goes for the “Distributed Capability” pattern. I’ll likely make another effort some other day.
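    The spirit of Compatible Change can be illustrated in plain Java. The service names below are hypothetical, and in a real SOA the contract would live in WSDL or a schema rather than Java interfaces; the point is that the contract evolves by adding capability, never by altering an existing operation, so legacy consumers stay untouched:

```java
// Sketch of a backward-compatible contract change: V2 extends V1 instead of
// replacing it, so every legacy call site keeps working unchanged.
public class CompatibleChange {

    interface QuoteServiceV1 {
        double quote(String product);
    }

    // The new optional behavior is a separate operation with a default that
    // preserves the old semantics (the currency argument is ignored here,
    // purely to keep the sketch small).
    interface QuoteServiceV2 extends QuoteServiceV1 {
        default double quote(String product, String currency) {
            return quote(product);
        }
    }

    static class Impl implements QuoteServiceV2 {
        public double quote(String product) { return 10.0; }
    }

    public static void main(String[] args) {
        QuoteServiceV1 legacyView = new Impl();  // legacy consumer's view
        QuoteServiceV2 newView = new Impl();     // new consumer's view
        System.out.println(legacyView.quote("book"));
        System.out.println(newView.quote("book", "EUR"));
    }
}
```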

    The next section covers Service Composition Patterns, which discusses patterns on how to compose existing services with each other.

    Chapter 17 is a chapter bordering on the absurd. The “Capability Composition” pattern is about composing services out of other services. I’m a bit confused here; all this time I had thought that services by definition were composable. “Capability Recomposition” is again obscured by a description like “How can the same capability be used to help solve multiple problems?”. This is just plain and simple service instantiation; I can’t see how it is non-obvious. The book seems to repeatedly recast well-known computer science concepts as patterns, and furthermore recasts them as entire chapters. The entire chapter can be summarized in one sentence: “Services can be composed of services, and services can be instantiated and invoked in multiple contexts”. I’m beginning to get the feeling that Erl has trouble understanding basic concepts like ‘instantiation’.

    Chapter 18 covers “Service Messaging” patterns. Hohpe’s “Enterprise Integration Patterns” provides a much better treatment of this subject area, and I refer you to that excellent book if you’re interested. To be brief, the following patterns are discussed: “Service Messaging”, “Messaging Metadata”, “Service Agent”, “Intermediate Routing”, “State Messaging”, “Service Callback”, “Service Instance Routing”, “Asynchronous Queuing”, “Reliable Messaging” and “Event Driven Messaging”. The author again tries to redefine a common term: his “Service Messaging” pattern essentially recasts “Asynchronous Communication” as a pattern.
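    As a rough in-process analogy for the Asynchronous Queuing pattern mentioned above: a queue decouples the client’s send from the service’s receive, so the two sides need not be available at the same moment. In practice the queue would be a durable message broker, not an in-memory structure; this is only a sketch:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of "Asynchronous Queuing": the client enqueues and moves on; the
// service drains the queue on its own schedule.
public class AsyncQueuing {

    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    static void send(String message) {            // client side: fire and forget
        queue.offer(message);
    }

    static String receive() throws InterruptedException {
        return queue.take();                      // service side: blocks until a message arrives
    }

    public static void main(String[] args) throws Exception {
        send("order:42");                         // the client is done at this point
        Thread worker = new Thread(() -> {
            try {
                System.out.println("processed " + receive());
            } catch (InterruptedException ignored) { }
        });
        worker.start();
        worker.join();
    }
}
```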

    Chapter 19 covers “Composition Implementation Patterns”. I find the title confusing; I simply don’t understand what ‘Implementation’ means in this context. The chapter covers an incoherent collection of patterns: “Agnostic SubController”, “Composition Autonomy”, “Atomic Service Transaction” and “Compensating Service Transaction”. I would think the appropriate title could be “Scope of Work Patterns”. It is as if Erl structures his chapters by creating combinations of the words “foundational” and “implementation” without giving any thought to what they mean.

    Chapter 20 covers Service Interaction Security Patterns which covers an interesting collection of patterns not written by Erl.

    Chapter 21 covers Transformation Patterns. The chapter covers the “Data Model Transformation” and “Data Format Transformation” patterns, which were written by Erl, and the “Protocol Bridging” pattern, written by Mark Little. The latter pattern is clearly a generalization of the former two.
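    A minimal sketch of Data Model Transformation, with invented record types standing in for two services’ models. The mapper sits at the boundary so that neither service has to adopt the other’s schema:

```java
// Sketch of "Data Model Transformation": a boundary mapper converts the
// sender's model into the receiver's. All type and field names are invented.
public class DataModelTransformation {

    record CrmCustomer(String fullName, String phone) {}            // sender's model
    record BillingAccount(String accountHolder, String contact) {}  // receiver's model

    static BillingAccount transform(CrmCustomer c) {
        // The mapping itself is the pattern; keeping it in one place means
        // a schema change on either side touches only this method.
        return new BillingAccount(c.fullName(), c.phone());
    }

    public static void main(String[] args) {
        BillingAccount a = transform(new CrmCustomer("Ada Lovelace", "555-0100"));
        System.out.println(a.accountHolder());
    }
}
```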

    In summary, the SOA Design Patterns book isn’t structured with the same rigor and coherence as other Design Patterns books. The content is unusually wordy and repetitive. There are a lot of diagrams, but the majority of them provide little insight. The book takes well-known concepts in computer science and regurgitates them as design patterns, essentially taking what is obvious and making it obscure. Despite the poor quality of most of the book, its saving grace is the handful of patterns, submitted by contributors, that are of high quality.

    However, considering the pervasively poor quality of SOA books in general, I’m going to say this is one of the more valuable ones. Even if that bar is extremely low, this is one of the few SOA books where you can indeed find some true nuggets of wisdom. (The book’s website has a lot more interesting patterns that weren’t published with the book.) However, you have to dig very hard and long to find them, because the map provided is obscuring and more of a hindrance than an aid. Read it only if you know what to look for.

    So that you don’t have to waste your own valuable time, I’ve collected a reference list of the patterns from the book that are of some value. Included is a quick explanation of my own and an alternative, hopefully more concise, name.

    1. Canonical Protocol – It is always convenient to standardize on a common protocol to reduce bridging costs. Uniform Protocol.
    2. Dual Protocol – Supporting more than one protocol increases the number of compliant clients. Virtual Service.
    3. Canonical Expression – Specifications about meta-data should be standardized to avoid the cost of translation. Canonical Metadata Language.
    4. Metadata Centralization – SOA systems should support some kind of discovery of metadata services. Metadata Discovery.
    5. Partial Validation – Services should support non-strict validation of messages. Non-strict Validation.
    6. UI Mediator – Provide a capability to receive timely feedback when monitoring a service’s execution.
    7. Exception Shielding – To ensure security the implementation details of an exception should be hidden from a client.
    8. Message Screening
    9. Trusted Subsystem
    10. Service Perimeter Guard
    11. Partial State Deferral
    12. Contract Denormalization – Redundant specifications are sometimes necessary to reduce coupling. Redundant Contract.
    13. Validation Abstraction – A language for input validation should be introspect-able to permit flexibility in where validation is performed. Introspect-able Validation.
    14. Compatible Change – Service contract changes can be performed in a way to support backward compatibility.
    15. Version Identification – A versioning vocabulary should reveal the compatibility constraints between different versions of a services. Versioning Constraints.
    16. Termination Notification – A service should have a mechanism to express its availability and impending termination.
    17. Messaging Metadata – There should be a mechanism to parse information about a message without having to read the entire message. Message Envelope.
    18. Intermediate Routing
    19. State Messaging – Conversational state can be stored in messages. Conversation State Messages.
    20. Service Instance Routing – Communication between services may be routed using logic that is dependent on the content of the message. Content Based Routing.
    21. Asynchronous Queuing – Clients need not require the temporal availability of the services they require. Asynchronous Communication.
    22. Reliable Messaging – Clients should not have to manage the reliable delivery of a communication to its destination.
    23. Event Driven Messaging – A service may not require the knowledge of the identity of its clients. Publish and Subscribe.
    24. Compensating Service Transaction – Actions performed by services should be undoable.
    25. Data Confidentiality
    26. Data Origin Authentication – A mechanism for discovering the provenance of data is essential. Non-forgeable Provenance.
    27. Broker Authentication – An intermediate broker may be required when there is no trust between two interacting services. Trust Broker.
    28. Protocol Bridging – SOA should allow the inclusion of a protocol mediator to translate communication between services. Protocol Mediator.

    Everything else not listed here most likely consists of fluff and is best ignored. Do let me know, however, if I mistakenly ignored a good design pattern.
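    Pattern 24, Compensating Service Transaction, is worth a small sketch. The two “services” below are stand-in counters, and a real system would make the compensations durable; the point is simply that each completed step registers an undo action, and on failure the undo actions run in reverse order instead of relying on a distributed atomic transaction:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a compensating transaction: steps push their own undo actions;
// on failure, the completed steps are compensated in reverse order.
public class CompensatingTransaction {

    static int reserved = 0;   // toy state standing in for an inventory service
    static int charged = 0;    // toy state standing in for a payment service

    // Returns the committed work count: 0 if compensated, 2 if both steps held.
    static int run(boolean failAtCharge) {
        reserved = 0;
        charged = 0;
        Deque<Runnable> compensations = new ArrayDeque<>();
        try {
            reserved++;                               // step 1: reserve stock
            compensations.push(() -> reserved--);     // its compensation
            if (failAtCharge) throw new RuntimeException("payment declined");
            charged++;                                // step 2: charge the card
            compensations.push(() -> charged--);
        } catch (RuntimeException e) {
            while (!compensations.isEmpty())
                compensations.pop().run();            // undo in reverse order
        }
        return reserved + charged;
    }

    public static void main(String[] args) {
        System.out.println(run(true));    // the reservation was compensated
        System.out.println(run(false));   // both steps committed
    }
}
```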



    iPad’s Form Factor Demands User Profiles


    So, like the legion of Apple users out there, I pre-ordered my iPad and received it in the mail on Saturday, shipped and tracked directly from Shanghai, China. Now that I’ve had a couple of days playing with it, here are my brief impressions.

    There are still not a lot of apps (or at least free apps) in the store. Nothing really compellingly different from what’s out there for the iPhone/iPod touch, though I must say the book and comic readers are definitely suited to this form factor. It also clearly makes for a good photo viewer, and there’s definitely a lot of opportunity here to build something for the prosumer photographer. The Netflix streaming experience is quite good; of course, on its large screen, YouTube video artifacts become more apparent.

    Business-oriented apps like eBay and eTrade didn’t feel as compelling to me, in that they were simply larger versions of the same thing on the iPhone. It is definitely going to take time for developers to make better use of the real estate. The typical iPhone UI paradigm of sliding between screens, which works so well there, doesn’t have the same kind of utility on this device. As a matter of fact, it sometimes becomes irritating and disconcerting. Floating modeless windows seem to make more sense for the iPad.

    However, the Achilles’ heel of the iPad is the lack of support for multiple profiles or accounts on the same device. I seriously doubt that Apple is going to be able to fix this anytime soon, since it would be a massive departure from its iTunes setup.

    The iPad’s form factor makes it a device that you share much more often than the iPhone or iPod touch. It’s the size of a magazine or a book, and I would think that because of its bulk it would be left around the house for other household members to use. It only becomes a personal device when you either live by yourself or are constantly traveling and have the luxury of carry-on luggage. The iPad is a mobile device only if you find yourself carrying a bag with you everywhere you go.

    The other area where the iPad makes sense is for workers who walk a lot in their jobs (i.e. waiters, delivery people, medical personnel, etc.). The form factor allows them to read and input data more quickly. However, even in this context, a multi-user device seems to be a necessity.

    The more I think about it, the Kindle or Nook seems to be the right size for a tablet of this kind. These e-readers are comfortable to hold with one hand, unlike the iPad, which actually feels a bit heavy. In fact, typing feels a bit awkward and seems most comfortably done with the device on one’s lap (like a laptop) rather than held in the air.

    In summary, the door is still wide open for competitors like Android to step into this space by providing multi-user capability on a touchscreen tablet device. Also, people definitely need to re-think the interface. The larger iPhone-like UI just doesn’t work for me: the transitions are disorienting, and there’s an intuitive need to see more on the screen rather than less (as the iPhone requires).



    Open Source Rule Engines Written In Java


    I don’t recall seeing anyone put together a review of open source rule engines written in Java. Here’s the list I’ve accumulated so far:

    • Drools The JBoss Rules engine uses a modified form of the Rete algorithm called the Rete-OO algorithm. Internally it operates using the same concepts and methods as Forgy’s original but adds some node types required for seamless integration with an object-oriented language.
    • OFBiz Rule Engine Backward chaining is supported. Original code base from “Building Parsers in Java” by Steven John Metsker.
    • Mandarax Based on backward reasoning. It allows easy integration of all kinds of data sources; e.g., database records can easily be integrated as sets of facts, and reflection is used to integrate functionality available in the object model.
    • Algernon Efficient and concise KB traversal and retrieval. Straightforward access to ontology classes and instances. Supports both forward and backward chaining.
    • TyRuBa TyRuBa supports higher order logic programming: variables and compound terms are allowed everywhere in queries and rules, including in the position of a functor- or predicate-name. TyRuBa speeds up execution by making specialized copies of the rule base for each query in the program. It does so incrementally while executing a logic program, and it builds an index for fast access to rules and facts in the rule base, tuned to the program that is running. The indexing technique also works for higher-order logic. TyRuBa does ‘tabling’ of query results.
    • JTP Java Theorem Prover is based on a very simple and general reasoning architecture. The modular character of the architecture makes it easy to extend the system by adding new reasoning modules (reasoners), or by customizing or rearranging existing ones.
    • JEOPS JEOPS adds forward chaining, first-order production rules to Java through a set of classes designed to provide this language with some kind of declarative programming.
    • InfoSapient Semantics of business rules expressed using fuzzy logic.
    • RDFExpert RDF-driven expert system shell. The RDFExpert software uses Brian McBride’s JENA API and parser. A simple expert system shell that uses RDF for all of its input: knowledge base, inference rules and elements of the resolution strategy employed. It supports forward and backward chaining.
    • Jena 2 – Jena is a Java framework for writing Semantic Web applications. Jena2 has a reasoner subsystem which includes a generic rule based inference engine together with configured rule sets for RDFS and for the OWL/Lite subset of OWL Full. These reasoners can be used to construct inference models which show the RDF statements entailed by the data being reasoned over. The subsystem is designed to be extensible so that it should be possible to plug a range of external reasoners into Jena, though worked examples of doing so are left to a future release.

    • JLisa – JLisa is a powerful framework for building business rules accessible to Java, and it is compatible with JSR-94. JLisa is more powerful than Clips because it has the expanded benefit of having all the features of Common Lisp available. These features are essential for multi-paradigm software development.
    • Euler – Euler is a backward-chaining reasoner enhanced with Euler path detection and will tell you whether a given set of facts and rules supports a given conclusion. Things are described in N3.
    • JLog – JLog is an implementation of a Prolog interpreter, written in Java. JLog is a BSF-compatible language. It includes built-in source editor, query panels, online help, animation primitives, and a GUI debugger.
    • Pellet OWL Reasoner – Pellet is an open-source Java based OWL DL reasoner. It can be used in conjunction with either Jena or OWL API libraries. Pellet API provides functionalities to see the species validation, check consistency of ontologies, classify the taxonomy, check entailments and answer a subset of RDQL queries (known as ABox queries in DL terminology). Pellet is an OWL DL reasoner based on the tableaux algorithms developed for expressive Description Logics.
    • Prova – Prova is derived from Mandarax, a Java-based inference system developed by Jens Dietrich. Prova extends Mandarax by providing a proper language syntax, native syntax integration with Java, and agent messaging and reaction rules. The development of this language was supported by a grant provided within the EU project GeneStream. In the project, the language is used as a rules-based backbone for distributed web applications in biomedical data integration.
    • OpenRules – OpenRules is a powerful Business Rule Engine that has been designed to create, deploy, execute, and maintain decision services for complex real-world applications. OpenRules comes with a sophisticated user-friendly Rules Administrator that utilizes the power of MS Excel and Eclipse.
    • SweetRules – SweetRules is an integrated toolkit for semantic web rules, revolving around RuleML. SweetRules supports the powerful Situated Courteous Logic Programs extension of RuleML, including prioritized conflict handling and procedural attachments for actions and tests. SweetRules’ capabilities include semantics-preserving translation and interoperability between a variety of rule and ontology languages (including XSB Prolog, Jess, HP Jena-2, and IBM CommonRules), highly scalable backward and forward inferencing, and merging of rulebases/ontologies. The SweetRules project is a multi-institutional effort, originated and coordinated by the SweetRules group at MIT Sloan.
    • JShop2 – Simple Hierarchical Ordered Planner (SHOP) written in Java. JSHOP2 is a domain-independent automated-planning system. It is based on ordered task decomposition, which is a type of Hierarchical Task Network (HTN) planning. JSHOP2 uses a new “planner compilation” technique to achieve faster execution speed.
    • OpenLexicon – Lexicon is a business rules and business process management tool that rapidly develops applications for transaction and process-based applications. There are two main components of Lexicon: the metadata repository and the business rules engine. The Lexicon business rules engine is not Rete based. It has a predicate evaluator that includes a unique power: it can evaluate small, in-line Java expressions.
    • Hammurapi Rules – Hammurapi rules has the following distinguishing features: (1) Rules and facts are written in Java; (2) leverages Java language semantics to express relationships between facts and (3) builds derivation trees for validation, debugging and to detect logic loops.
    • MINS Reasoner – MINS stands for Mins Is Not Silri. MINS is a reasoner for Datalog programs with negation and function symbols. MINS supports the Well-Founded Semantics.
    • Zilonis – An extremely efficient, multithreaded rules engine based on a variation of the forward-chaining Rete algorithm. It has a unique scoping framework where you can define a scope for a user or group of users, with inheritance of rules between them. The rules language is similar to CLIPS.
    • JCHR – JCHR is an embedding of Constraint Handling Rules (CHR) in Java. The multi-paradigmatic integration of declarative, forward chaining CHR rules and constraint (logic) programming within the imperative, OO host language Java offers clear synergetic advantages to the software developer. High performance is achieved through an optimized compilation to Java code. The rule engine is suited for the high-level development of expert systems, incremental constraint solvers and constraint-based algorithms.

    • Esper – Esper enables rapid development of applications that process large volumes of incoming messages or events. Esper filters and analyzes events in various ways, and responds to conditions of interest in real time. Supports Event Stream Processing: time-based, interval-based, length-based and sorted windows; grouping, aggregation, sorting, filtering and merging of event streams; a tailored SQL-like query language using insert into, select, from, where, group-by, having and order-by clauses; inner joins and outer joins (left, right, full) of an unlimited number of windows; output rate limiting and stabilizing.

    • mProlog – mProlog is a sub-product of the 3APL-M project. It delivers a reduced Prolog engine, optimized for J2ME applications. The mProlog engine was developed based on the W-Prolog project from Michael Winikoff. The 3APL-M project is a platform for building applications using the Artificial Autonomous Agents Programming Language (3APL) as the enabling logic for deliberation cycles and internal knowledge representation.
    • OpenL Tablets – Create decision tables in Excel and use them in Java applications in a convenient, type-safe manner. Use data tables in Excel for data setup and testing. An Eclipse plugin checks the validity of the Excel tables.
    • Jamocha – Jamocha is a rule engine and expert system shell environment. Jamocha comes with a FIPA-compliant agent. The agent is based on the Multiagent System JADE that supports speech-acts and the FIPA agent interaction protocol.
    • JSL: Java Search Library – JSL is a library written in Java that provides a framework for general searching on graphs. Standard search algorithms depth-first, breadth-first and A* are provided by JSL, in addition you can plug in any other search algorithms.
    • Termware – TermWare is a term processing system. It is applicable to computational algebra systems and the analysis of various formal models.
    • Take – Take consists of a scripting language for defining rules, and a compiler that creates executable Java code. Take is based on Mandarax; it has a similar API but implements its own inference engine.
    • Datalog – Datalog is a logical query language. It exists somewhere between relational algebra (the formal theory behind SQL) and Prolog. Its primary addition to the semantics of databases is recursive queries.
    • IRIS Reasoner – Integrated Rule Inference System is an extensible reasoning engine for expressive rule-based languages. IRIS supports safe or unsafe Datalog, a comprehensive and extensible set of built-in predicates, and all the primitive XML Schema data types.
    • CEL – A polynomial-time Classifier for the description logic EL+. CEL is the first reasoner for the description logic EL+, supporting as its main reasoning task the computation of the subsumption hierarchy induced by EL+ ontologies. The most distinguishing feature of CEL is that, unlike other modern DL reasoners, it implements a polynomial-time algorithm. The supported description logic EL+ offers a selected set of expressive means that are tailored towards the formulation of medical and biological ontologies.
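    Several of the entries above (Datalog, IRIS, the Rete-based engines) share the same core mechanic: forward chaining to a fixpoint. As a rough, hand-rolled sketch of the recursive-query idea mentioned under the Datalog entry (plain Java, not the API of any library listed here), the transitive closure of an edge relation can be derived like so:

```java
import java.util.*;

public class TransitiveClosure {
    // Naive forward chaining: repeatedly apply the rule
    //   reachable(X, Z) :- edge(X, Y), reachable(Y, Z).
    // until no new facts can be derived (a fixpoint).
    static Set<List<String>> closure(Set<List<String>> edges) {
        Set<List<String>> reachable = new HashSet<>(edges);
        boolean changed = true;
        while (changed) {
            Set<List<String>> derived = new HashSet<>();
            for (List<String> e : edges)
                for (List<String> r : reachable)
                    if (e.get(1).equals(r.get(0)))             // join on the middle node
                        derived.add(Arrays.asList(e.get(0), r.get(1)));
            changed = reachable.addAll(derived);               // fixpoint when nothing new
        }
        return reachable;
    }

    public static void main(String[] args) {
        Set<List<String>> edges = new HashSet<>(Arrays.asList(
            Arrays.asList("a", "b"),
            Arrays.asList("b", "c"),
            Arrays.asList("c", "d")));
        System.out.println(closure(edges).contains(Arrays.asList("a", "d"))); // true
    }
}
```

    Production engines replace this naive loop with semi-naive evaluation or Rete networks so that each iteration only considers newly derived facts, but the declarative reading is the same.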

    Please let me know if I’ve missed something.



    Open Source Constraint Programming Solvers Written in Java


    In a past life I wrote a language that would serve as a DSL to a variety of solvers. These solvers included Simplex-based solvers, quadratic solvers and, in the most generalized form, constraint solvers. So I’ve had more than a passing interest in Constraint Programming (CP). Recently my interest in CP was rekindled: I was studying JavaFX and found its binding mechanism to be eerily familiar. So I looked back at my archives and to my surprise discovered a boatload of open source projects catering to Constraint Programming. Here is the list I compiled over the years.

    • Choco Solver – Choco Solver is a library for constraint satisfaction problems (CSP), constraint programming (CP) and explanation-based constraint solving (e-CP). Choco is built on an event-based propagation mechanism with backtrackable structures.
    • Cream – Cream is a library for developing intelligent programs that require constraint satisfaction or optimization on finite domains. Cream provides a natural description of constraints and allows extensions of constraints and satisfaction algorithms. It includes Simulated Annealing and Tabu Search, and provides an interface to OpenOffice Calc (spreadsheet).
    • JSL – Java Search Library (JSL) is a framework for general searching on graphs. The standard search algorithms depth-first, breadth-first and A* are provided. JSL also allows the plug in of other search algorithms.
    • JCHR – The K.U.Leuven JCHR System is an integration of Java and Constraint Handling Rules (CHR) designed with three aims in mind: user-friendliness, flexibility and efficiency. User-friendliness is achieved by providing a high-level, rule-based syntax that feels familiar to both Java programmers and users of other CHR embeddings, and by full compliance with the refined operational semantics. Flexibility is the result of a well-thought-out design, allowing e.g. an easy integration of built-in constraint solvers and variable types. An optimized compilation to Java code and the use of a very efficient constraint store make the performance of the K.U.Leuven JCHR System competitive with that of state-of-the-art CHR implementations in e.g. Prolog and HAL.
    • jOpt – jOpt is an open source implementation of the Optimization Programming Language (OPL). OPL is a modeling language for combinatorial optimization problems that combines mathematical programming and constraint programming into a single language. Job Scheduling (JS) is the first add-on to have been released. Typically, JS problems involve assigning a set of activities to a limited number of resources with consideration given to time and ordering limitations.
    • Drools Solver – Drools Solver is a library to help you solve planning problems using heuristic algorithms. Examples of planning problems include: employee shifts, freight routing, supply sorting, class scheduling and the traveling salesman problem. Drools Solver combines a search algorithm with the power of the Drools rule engine. It uses the Drools rule engine for score calculation, which greatly reduces the complexity and effort of writing very scalable constraints in a declarative manner. Drools Solver supports several search algorithms to efficiently wade through the incredibly large number of possible solutions, and implements local search, including tabu search and simulated annealing.
    • JCL – Java Constraints Library (JCL) was one of the first libraries to bring constraints satisfaction problem solving to Java. JCL deals with discrete, fine domains as well as with continuous domains. Constraints can be crisp (as in the classic CSP problems) or soft.
    • SAT4J – The SAT4J library provides a SATisfiability problem solver. Compared to the OpenSAT project, the SAT4J library targets first users of SAT “black boxes”, willing to embed SAT technologies into their applications without worrying about the details. SAT4J is an implementation of Eén and Sörensson’s MiniSat specification, “An extensible SAT-solver”. The original implementation was in C++.
    • Cassowary – Cassowary is an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities. The library appears to be designed to support UI applications.
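    To make the common thread among these libraries concrete, here is a minimal sketch, in plain Java and independent of any project above, of the chronological backtracking search that sits at the core of finite-domain constraint solving. The toy problem is map coloring; the listed solvers add propagation, explanations and search heuristics on top of this skeleton:

```java
import java.util.*;

public class MapColoring {
    // Variables: regions; domain: colors; constraints: adjacent regions must differ.
    static final String[] REGIONS = {"WA", "NT", "SA", "Q"};
    static final String[] COLORS = {"red", "green", "blue"};
    static final String[][] ADJACENT = {
        {"WA", "NT"}, {"WA", "SA"}, {"NT", "SA"}, {"NT", "Q"}, {"SA", "Q"}};

    // A partial assignment is consistent if no constraint is already violated.
    static boolean consistent(Map<String, String> assignment) {
        for (String[] pair : ADJACENT) {
            String a = assignment.get(pair[0]), b = assignment.get(pair[1]);
            if (a != null && a.equals(b)) return false;
        }
        return true;
    }

    // Plain chronological backtracking over the variables, in order.
    static Map<String, String> solve(int i, Map<String, String> assignment) {
        if (i == REGIONS.length) return assignment;   // all variables assigned
        for (String color : COLORS) {
            assignment.put(REGIONS[i], color);
            if (consistent(assignment)) {
                Map<String, String> result = solve(i + 1, assignment);
                if (result != null) return result;
            }
            assignment.remove(REGIONS[i]);            // undo and try the next value
        }
        return null;                                  // no color works: backtrack
    }

    public static void main(String[] args) {
        System.out.println(solve(0, new HashMap<>()));
    }
}
```

    What distinguishes a real CP solver from this sketch is constraint propagation: pruning values from the domains of unassigned variables after each choice, so that whole subtrees are never explored.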

    If you know of other projects in this area, please feel free to let me know.



    Push Driven Business Process Modeling Considered Harmful


    Several years ago, I had this intuition that ‘locking can’t be abstracted away’ and that ‘pessimistic locking was impractical’. These were somewhat controversial views back then. My arguments were grounded in the fact that transactions could be in conflict with what the business requires. It certainly fell on deaf ears for those developers who wanted to be isolated from the concerns of a business.

    Today, however, considerable consensus has been building that you can’t always require transactions in a purely technical sense. In fact, there is a formal conjecture (see Brewer’s Conjecture) that it may simply be an impossible task to have availability, consistency and partition tolerance all at the same time. In short, if you want massive scalability then be prepared to sacrifice transactions. In a physical world where instantaneousness doesn’t really exist (see: The night’s sky), the notion of relativism should be more the norm.

    The question, however, that my mind has been stewing over these days is the notion of explicit Business Process Modeling. Codifying knowledge is at the heart of software development. Defining and streamlining process is at the heart of a successful enterprise. I have no argument against the need to streamline the handling of physical goods. You can’t break the laws of physics: the movement of mass requires energy, and anyone who has filled up his gas tank lately knows that energy costs money. My question is whether explicit Business Process Modeling (BPM) makes sense when confined to the context of knowledge-based industries. That is, where value is created based on virtual goods.

    The general prescription of a workforce automation or re-engineering activity is to first uncover and make explicit the underlying business process. It is through this discovery phase that one can identify points of improvement. BPM streamlines the codification of the business process by allowing practitioners to build from process models a machine-executable rendition of them. It is an extremely appealing idea, especially for management: a modeled process is now changeable on a whim (so they claim), and there is better visibility and therefore control over what’s going on.

    Process is extremely important; however, process for process’ sake can be extremely harmful. Software development is one of the most knowledge-intensive industries known to man. We have painfully discovered the failures of rigid waterfall methodologies and have begun transitioning to more agile, lean and iterative methodologies. This evolution from rigid specification to a more iterative approach gives me a hint that codifying process models can take you only as far as big upfront design can. On the flip side, BPM tools should also place a premium on supporting the iterative development of process models.

    BPM tools, however, are best able to codify repetitive processes. These are the low-hanging fruit. The ‘not-happy-path’ or ‘exceptional-path’ process model is what’s difficult to capture, and in many knowledge-based industries this path is where most of the value comes from.

    The key to understanding business processes is that they are highly asynchronous and concurrent. Sequential dependencies exist not because they are defined by a control-flow process diagram but rather because the knowledge that’s being pushed around has intrinsic dependencies. The fatal conceptual flaw of control-flow is that it places artificial constraints on the movement of information. Furthermore, a control-flow driven process is dehumanizing because that’s simply not how humans do work. Mark Miller explains this best when he writes:

    Remarkably, human beings engage in concurrent distributed computing every day even though people are generally single threaded.


    How do we simple creatures pull off the feat of distributed computing without multiple threads? Why do we not see groups of people, deadlocked in frozen tableau around the coffee pot, one person holding the sugar waiting for milk, the other holding milk waiting for sugar?


    The answer is, we use a sophisticated computational mechanism known as a nonblocking promise.


    (read his piece for the details)


    human beings never get trapped in the thread-based deadlock situations described endlessly in books on concurrent programming. People don’t deadlock because they live in a concurrent world managed with a promise-based architecture.
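    Miller’s nonblocking promise is easy to demonstrate in plain Java. The sketch below uses java.util.concurrent (my choice of vehicle, not Miller’s E language): milk and sugar are requested concurrently as promises, and the coffee step is merely scheduled to run when both resolve, so no one stands holding one ingredient while waiting for the other:

```java
import java.util.concurrent.CompletableFuture;

public class CoffeePromises {
    public static void main(String[] args) {
        // Each request is a promise for a future result; neither blocks the other.
        CompletableFuture<String> milk  = CompletableFuture.supplyAsync(() -> "milk");
        CompletableFuture<String> sugar = CompletableFuture.supplyAsync(() -> "sugar");

        // Schedule the dependent step; it runs only once both promises resolve.
        CompletableFuture<String> coffee =
            milk.thenCombine(sugar, (m, s) -> "coffee with " + m + " and " + s);

        // join() is here only to keep the demo process alive until completion.
        System.out.println(coffee.join()); // coffee with milk and sugar
    }
}
```

    No lock ordering, no deadlock: the dependency between steps is expressed as data flow between promises, not as threads waiting on each other.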

    Requests are ‘pushed’ as signals into to-do lists, the responses are promises for future action, and the execution of these actions is pulled on demand from the to-do list. This precisely mirrors the Kanban practice of Lean and JIT manufacturing. As a matter of fact, any decent issue tracker would suffice, and a process would just be a template for creating a bunch of ‘signaling’ tasks. In short, to-do lists are all that’s essential to support human workflow. Richard Pawson of NakedObjects fame writes:

    I understand that all of this is a somewhat controversial view of business, but it is based on real experience. It reflects a very strong personal viewpoint that, to the extent that an information system involves human users, then the system should be designed to be the servant of the user, not the master. Most workflow systems attempt to be the master of the user.

    Pull-driven processes make sense for humans. Unfortunately, their essence is cumbersome to capture using control-based, push-driven diagramming notations. A formal BPM specification is best used as an active monitor that notifies users when events slip through the cracks. Beyond that, it should act more as a facilitator than as the manager.
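    The pull-driven to-do list described above can be sketched in a few lines of plain Java; the task names and the poison-pill convention below are illustrative assumptions, not features of any BPM product. A process merely pushes signaling tasks onto a shared queue, and a worker pulls work on demand instead of having it dispatched by a controlling workflow engine:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PullWorkflow {
    public static void main(String[] args) throws InterruptedException {
        // The shared to-do list: a process only enqueues signaling tasks.
        BlockingQueue<String> todoList = new LinkedBlockingQueue<>();
        todoList.put("review document");
        todoList.put("approve invoice");
        todoList.put("STOP"); // poison pill, used here just to end the demo

        // The worker pulls tasks at its own pace; nothing pushes work at it.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String task = todoList.take(); // blocks only when idle
                    if (task.equals("STOP")) return;
                    System.out.println("done: " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        worker.join();
    }
}
```

    The sequencing that a control-flow diagram would have legislated emerges here from the data itself: a task can only be pulled after something has signaled it into the list.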

    In fact, to support ‘exceptional-path’ processes, we can derive inspiration from the ideas and tools we employ in lean development. I’ve previously written about how ‘Web 2.0 enables Lean production‘. What becomes strikingly obvious is that software development is rarely ever driven by a BPM tool. It is, however, supported by tools like version control systems, issue trackers, wikis, automated build systems and continuous integration dashboards. That is, tools that are more focused on managing knowledge.

    A BPM tool apparently doesn’t add enough value to be of any use in the process of developing software. There seems to be little value in refactoring control flow out of knowledge (or documents). In other words, if there are any data dependencies, then those provide the constraints on the flow. Work exerted to refine exactly what that control flow should look like is an exercise in splitting hairs.

    I am forever dumbfounded when I stumble upon development organizations that lack even a rudimentary VCS and issue tracker. It is the height of hypocrisy to sell customers on the use of information technology to enhance processes while at the same time not knowing how to use information technology to enhance one’s own processes. So if a BPM tool isn’t used to drive one’s own process, then why in the world is it even sold to drive someone else’s? Perhaps someone else’s process isn’t as complicated as one’s own? Perhaps someone else’s business is less human-centric than software development? Perhaps a more advanced collaboration tool like IBM’s Jazz is the cure to our process ills?

    To conclude, as the saying goes, “practice what you preach”. Good software process is based on sound underpinnings and should also be supported by automation. If you are able to migrate this wisdom over to support your customers’ processes, then you will have added more value than just codifying their business processes. So if you asked me for a solution to re-engineer a business process, I would recommend at a bare minimum a document management system that supports versioning and an issue tracking system. Couple this with a good dose of design patterns (Issue Tracking Patterns, SCM Patterns).

