
Is High Scalability SOA an Oxymoron?


All too many Service Oriented Architecture (SOA) practitioners seem to believe that because SOA deals with distributed computing, scalability is a given. The reality, however, is that conventional SOA practices tend to work against the development of highly scalable applications. This article describes the properties of a system that can achieve high scalability and then contrasts them with conventional SOA practices.

The patterns found in a system that exhibits high scalability are the following:

  • State Routing
  • Behavior Routing
  • Behavior Partitioning
  • State Partitioning
  • State Replication
  • State Coordination
  • Behavior Coordination
  • Messaging

This has been discussed in a previous blog entry, “A Design Pattern for High Scalability”. SOA-based systems conventionally cover Routing, Coordination and Messaging. However, the patterns of Partitioning and Replication are inadequately addressed by SOA systems. For reference, see the SOA Patterns book that I covered in this review. The words “Partitioning” and “Replication”, unsurprisingly, can’t be found in the book’s index. Scalability apparently isn’t a concern to be addressed by SOA patterns.

What then are the patterns that we can introduce to SOA to ensure scalability? Here are some suggested patterns from the previous article:

  • Behavior Partitioning
    • Loop Parallelism
    • Fork/Join
    • Map/Reduce
    • Round Robin Allocation
    • Random Allocation
    • Weighted Allocation
  • State Partitioning (Favors Latency)
    • Distributed Caching
    • HTTP Caching
    • Sharding
  • State Replication (Favors Availability in Partition Failure)
    • Synchronous Replication with Distributed Locks and Local Transactions
    • Synchronous Replication with Local Locks and Distributed Transactions
    • Synchronous Replication with Local Locks and Local Transactions
    • Asynchronous Replication with Update Anywhere
    • Asynchronous Replication with Update at the Master Site only
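To make the Behavior Partitioning entries above concrete, here is a minimal, illustrative sketch (not from the original article) of Fork/Join and Round Robin Allocation:

```python
from concurrent.futures import ThreadPoolExecutor

def fork_join(task, inputs, workers=4):
    """Fork: run the task over each input in parallel; Join: collect results in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, inputs))

def round_robin(items, n_partitions):
    """Round Robin Allocation: deal work items out across a fixed set of partitions."""
    partitions = [[] for _ in range(n_partitions)]
    for i, item in enumerate(items):
        partitions[i % n_partitions].append(item)
    return partitions

squares = fork_join(lambda x: x * x, [1, 2, 3, 4])   # [1, 4, 9, 16]
buckets = round_robin(list(range(7)), 3)             # [[0, 3, 6], [1, 4], [2, 5]]
```

Weighted or random allocation differ only in how the partition index is chosen.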

How can these patterns be manifested in a SOA system?

To achieve Behavior Partitioning, the constructs of the Command pattern and the Functor pattern can be used. In a conventional SOA architecture, behavior (as in executable code) needs to be propagated through the network to be executed by receiving services. In lieu of a commonly agreed standard, one may employ XQuery as a stand-in for this capability: services can be defined to accept XQuery in a way analogous to how Semantic Web systems accept SPARQL. A key to achieving scalability is that behavior be allowed to move close to the data it acts on. Behavior that works on data through remote invocations is guaranteed to kill scalability. See “Hot Trend: Move Behavior To Data For A New Interactive Application Architecture”.
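As a hedged sketch of the Command idea, a named operation plus its arguments can be serialized and shipped to the node that holds the data, so only a small result travels back. The operation names and data below are invented for illustration:

```python
import json

# A registry of named operations plays the role of the Functor: only the
# command name and arguments travel over the wire; the code runs on the
# node that holds the data.
OPERATIONS = {
    "count_over": lambda rows, threshold: sum(1 for r in rows if r > threshold),
    "total": lambda rows: sum(rows),
}

def encode_command(name, **kwargs):
    return json.dumps({"op": name, "args": kwargs})

def execute_near_data(message, local_rows):
    cmd = json.loads(message)
    return OPERATIONS[cmd["op"]](local_rows, **cmd["args"])

# The client ships the behavior; only the small result comes back.
msg = encode_command("count_over", threshold=10)
result = execute_near_data(msg, [4, 12, 25, 7])  # 2
```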

To achieve State Partitioning, SOA-based systems need to adopt the notion of persistent identifiers for data. The WS-* stack has the notion of WS-Addressing, which is typically used to reference endpoints as opposed to actual entities. What is needed is for this addressing, or these persistent identifiers, to act analogously to Consistent Hashing, so that entities may be partitioned across, and accessible through, multiple endpoints. Identifier-based services would need to be stood up to perform the redirection to the endpoints.
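A consistent hash ring of the kind alluded to can be sketched as follows; the endpoint URLs and identifier scheme are illustrative assumptions, not part of any WS-* standard:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps persistent identifiers to endpoints; adding or removing an
    endpoint only remaps a small fraction of identifiers."""

    def __init__(self, endpoints, replicas=100):
        self._ring = []  # sorted list of (hash, endpoint) virtual nodes
        for ep in endpoints:
            for i in range(replicas):
                self._ring.append((self._hash(f"{ep}#{i}"), ep))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def endpoint_for(self, identifier):
        h = self._hash(identifier)
        idx = bisect.bisect(self._ring, (h, ""))
        return self._ring[idx % len(self._ring)][1]  # wrap around the ring

ring = ConsistentHashRing(["http://node-a", "http://node-b", "http://node-c"])
ep = ring.endpoint_for("urn:customer:12345")  # always the same endpoint
```

The identifier-based redirection service mentioned above would simply wrap `endpoint_for` behind its own interface.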

Finally, there is the issue of Replication to support availability and fail-over. The identifier-based services described earlier may function as routers to handle the fail-over. Alternatively, one may employ proxy servers in the manner described in “A New Kind of Tiered Architecture”. The replication capability, however, will require the exposure of new kinds of services that support a replication protocol, the most basic of which would provide a Publish and Subscribe interface.
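A minimal in-process sketch of such a Publish and Subscribe replication interface, with a master pushing each update to subscribed replicas (all names here are illustrative):

```python
class ReplicatedStore:
    """The master publishes each update; replica stores subscribe and
    apply updates, so reads can fail over to any replica."""

    def __init__(self):
        self._data = {}
        self._subscribers = []

    def subscribe(self, replica):
        self._subscribers.append(replica)

    def put(self, key, value):
        self._data[key] = value
        for replica in self._subscribers:   # publish the change
            replica.apply(key, value)

    def apply(self, key, value):            # replicas receive changes here
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

master = ReplicatedStore()
replica = ReplicatedStore()
master.subscribe(replica)
master.put("order:1", "shipped")
status = replica.get("order:1")  # "shipped" — readable even if the master is down
```

This corresponds to asynchronous replication with updates at the master site only; the other replication variants listed above change where locks and transactions are held.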

To conclude, high scalability in SOA may indeed be possible. It is a bass-ackwards way of achieving high scalability, but if your only option is to use SOA, then it may just be achievable.


The Ten Commandments for SOA Salvation


I stumbled upon an excellent short white paper from PolarLake. The paper lays out the “Seven Principles of SOA Success”. To summarize, the unnamed author lists these principles:

  1. Minimize costs of disruption.
  2. Integrate incrementally.
  3. Reduce coding.
  4. Use industry standards whenever possible.
  5. Accept the things you cannot change.
  6. Understand the strategic value.
  7. Buy – don’t build.

It’s a pragmatic whitepaper that’s worth a quick read. Too many integration projects fail because of the desire for perfection. A drive toward perfection leads to idealism and creates visions that can easily become unrealistic. That is the crux of the problem: the person who can envision the ideal world is in many cases the person who cannot see reality.

This reminds me of an entry I wrote, “Ten Fallacies of Software Analysis and Design”, where I outlined several intellectual sinkholes that many developers fall victim to:

  1. You can trust everyone
  2. Universal agreement is possible
  3. A perfect model of the real world can be created
  4. Change can be avoided
  5. Time doesn’t exist
  6. Everyone can respond immediately
  7. Concurrency can be abstracted away
  8. Side effects and non-linearity should be removed
  9. Systems can be proven to be correct
  10. Implementation details can be hidden

Now, armed with this knowledge, I hereby present the 10 Commandments for SOA1 Salvation:

  1. Thou shalt not disrupt the legacy system.
  2. Thou shalt avoid massive overhauls. Honor incremental partial solutions instead.
  3. Thou shalt worship configuration over customization.
  4. Thou shalt not re-invent the wheel.
  5. Thou shalt not fix what is not broken.
  6. Thou shalt intercept or adapt rather than re-write.
  7. Thou shalt build federations before attempting any integration.
  8. Thou shalt prefer simple recovery over complex prevention.
  9. Thou shalt avoid gratuitously complex standards.
  10. Thou shalt create an architecture of participation. The social aspects of successful SOA tend to dominate the technical aspects.

1. SOA is a nebulous term and too easily overloaded. I use it here in the sense of a legacy re-engineering project whose goal is to migrate to a more adaptive and flexible architecture.


Best Practices for Service API Definition


In recent days I have come across a few interesting articles on the web on how to define service APIs.

The first one, titled “Web API Documentation Best Practices”, is from ProgrammableWeb. The author writes about the importance of good documentation: it encourages and keeps developers interested in the service and also helps reduce support costs. The article describes some basic areas that should be covered by documentation, such as an overview, an introduction section, sample code, and references. The article further recommends the following best practices:

  • Auto-generate Documentation
  • Include Sample Code
  • Show Example Requests and Responses
  • Explain Authentication and Error Handling

Mark Blotny wrote that “Each Application Should Be Shipped With a Set of Diagnostics Tools”. He writes that developers typically have limited access to production servers; however, in the event that something goes wrong, developers need to be able to investigate in a reasonable time to uncover the causes of the problems. He writes that a service API should have the following:

  • Each integration point should include a diagnostic tool.
  • There should be accessible logs for each call to an external system.
  • Service performance data should be accessible by developers.
  • All unexpected errors should be logged and easily accessible.

Finally, there is an article by Juergen Brendel, “The Value of APIs that Can be Crawled”. He writes that a service API should be designed such that it can be discovered via a crawler. Although this requirement is common sense for anyone concerned with SEO, it unfortunately isn’t common among developers of service APIs.

The notion of a decentralized index whose data is populated by crawlers should in fact be a key technology component of any Service Oriented Architecture (SOA). Surprisingly, despite the success of search engine companies like Google, this component is absent from most SOA stacks I have seen. SOA stacks do have the notion of a service directory, but in most implementations the assumption is a centralized service, and the onus is on each service to register and provide appropriate and current information to the directory. It appears to be logically the same thing; however, what scales in practice is the decentralized index/crawler, not the centralized directory.
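The crawlable-API idea can be sketched with services that embed links to related resources, so an index is built by following links rather than by central registration. The dictionary below stands in for real HTTP endpoints, which are invented for illustration:

```python
# Each service response carries "links" to related resources, so a crawler
# can build the index by following them, instead of each service having to
# register itself with a central directory.
SERVICES = {  # stands in for HTTP GETs against real endpoints
    "/catalog": {"name": "catalog", "links": ["/orders", "/customers"]},
    "/orders": {"name": "orders", "links": ["/customers"]},
    "/customers": {"name": "customers", "links": []},
}

def crawl(root):
    index, frontier, seen = {}, [root], set()
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        doc = SERVICES[url]          # in practice: an HTTP GET returning JSON
        index[doc["name"]] = url
        frontier.extend(doc["links"])
    return index

service_index = crawl("/catalog")
# {'catalog': '/catalog', 'orders': '/orders', 'customers': '/customers'}
```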

These three articles show that there is in fact value in providing service functionality that goes beyond the documented functional requirements. There is in fact research that I came across that documents this in a more comprehensive manner. Here are the property groups that a service may provide:

  • Temporal
  • Locative
  • Availability
  • Obligations
  • Price
  • Payment
  • Discounts
  • Penalties
  • Rights
  • Language
  • Trust
  • Quality
  • Security

I love exhaustive lists like this because they remind me of what I may be missing. Speaking of which, it reminds me that the Web Services Modeling Ontology (WSMO) has something formal along similar lines. In fact, if you really want to go into the deep end with service contracts, you can read this.

It is interesting how these non-functional attributes (i.e. “ilities”) align well with the ideas of Aspect Oriented Programming and can be implemented in a proxy-like infrastructure. That is in fact what existing “API Management” firms (e.g. Mashery, Sonoa, WebServius, 3Scale) appear to provide. Here are some examples of the features that these API Management firms offer:

  • Reporting, Analytics and Visualization dashboard.
  • Traffic management and rate-limiting
  • Security, Access Control, Authorization.
  • Mediation – Protocol Bridging.
  • Monetization
  • User management and provisioning. Self service provisioning.
  • Community management. Portal, Access key management, FAQs, Wiki
  • Scalability. Clustering, Caching.
  • Threat Protection – Denial of service attacks.
  • Versioning
  • Operations Management. Root cause analysis. Logging.


Some More SOA Design Patterns


Did some more googling around and uncovered a few more noteworthy SOA patterns. These are from the following sources:

  • Agent Itinerary – Objectifies agent itineraries and routing among destinations.
  • Forward – Provides a way for a host to forward newly arrived agents automatically to another host
  • Ticket – Objectifies a destination address, and encapsulates the quality of service and permissions that are needed to dispatch an agent to a host address and execute it there
  • Delegation – The debtor of a commitment delegates it to a delegatee who may accept the delegation, thus creating
    a new commitment with the delegatee as the new debtor.
  • Escalation – Commitments may be canceled or otherwise violated. Under such circumstances, the creditor or the
    debtor of the commitment may send escalations to the context Org.
  • Preemption – To cancel a commitment based on conflicting demands.
  • Barrier – Guards an action and specifies (pre)conditions on its execution
  • Co-location – Two or more resources are to be co-located at a certain time and place for a specified duration.
  • Correspondence – Relating two pieces of information each owned by a different participant
  • Deadline – Some information is required for an action before a certain time after which an alternate action is taken
  • Expiration – Some information will become invalid at a certain point in time (not shown in figure)
  • Notification – On-state-change “pushing” of information to enforce Correspondence.
  • Query – On-demand periodic polling of information to enforce Correspondence
  • Retry – Retrying an action a number of times before resorting to an alternate action
  • Selection – Choosing from among similar service offerings from multiple participants according to some criteria
  • Solicitation – Gathering information about service offerings from participants
  • Token – Issuing a permission for executing an action to other participants
  • Saga – How to get transaction-like behavior for complex interactions between services without using transactions.
  • Obligation Management – Allow obligations relating to data processing to be transferred and
    managed when the data is shared
  • Sticky Policies – Bind policies to the data it refers to

A couple of them are redundant with other patterns in other texts. You can find these patterns here:


Lean Development Applied To SOA


I’ve been doing a little bit of musing (“Is SOA an Agile Enterprise Framework?”) about a development framework to support SOA. However, maybe Lean Production/Thinking would be a better fit for SOA. In an earlier entry I mused about “How Web 2.0 Supports Lean Production”. Let’s turn this question around and ask: “How can Lean development be used to support SOA development?”

Lean focuses on the elimination of waste in processes. Agile, in comparison, is tuned toward practices that adapt efficiently to change. There are, though, some commonalities between Lean and Agile Development. Specifically:

  • People centric approach
  • Empowered teams
  • Adaptive planning
  • Continuous improvement

The last two bullet points align directly with the SOA Manifesto. For reference:

  • Business value over technical strategy
  • Strategic goals over project-specific benefits
  • Intrinsic interoperability over custom integration
  • Shared services over specific-purpose implementations
  • Flexibility over optimization
  • Evolutionary refinement over pursuit of initial perfection

Lean differs from Agile in that Lean is purported to be designed to scale (see: “Set-based concurrent engineering” and Scaling Lean and Agile Development ):

One of the ideas in lean product development is the notion of set-based concurrent engineering: considering a solution as the intersection of a number of feasible parts, rather than iterating on a bunch of individual “point-based” solutions. This lets several groups work at the same time, as they converge on a solution.

In contrast, agile methods were meant for smaller more nimble development teams and projects. One would therefore think that in the context of enterprise wide SOA activities, Lean principles may offer greater value than the Agile practices. Well, let’s see if we can convince ourselves of this by exploring this in more elaborate detail.

Lean Software Development is defined by a set of “seven lean principles” for software development:

  1. Eliminate Waste – Spend time only on what adds real customer value.
  2. Create Knowledge – When you have tough problems, increase feedback.
  3. Defer Commitment – Keep your options open as long as practical, but no longer.
  4. Deliver Fast – Deliver value to customers as soon as they ask for it.
  5. Respect People – Let the people who add value use their full potential.
  6. Build Quality In – Don’t try to tack on quality after the fact – build it in.
  7. Optimize the Whole – Beware of the temptation to optimize parts at the expense of the whole.

Can we leverage these principles as a guide for a better approach to SOA development?

Where can we find waste in the context of software development? Poppendieck has the following list:

  • Overproduction = Extra Features.
  • In Process Inventory = Partially Done Work.
  • Extra Processing = Relearning.
  • Motion = Finding Information.
  • Defects = Defects Not Caught by Tests.
  • Waiting = Delays.
  • Transport = Handoffs.

What steps in SOA development can we take to eliminate waste? Here’s a proposed mapping from each form of waste to its Lean SOA counterpart:

  • Extra Features – If there isn’t a clear and present economic need for a Service, then it should not be developed.
  • Partially Done Work – Move to an integrated, tested, documented and deployable service rapidly.
  • Relearning – Reuse Services. Employ a Pattern Language. Employ Social Networking techniques to enhance Organizational Learning.
  • Finding Information – Have all SOA contracts documented and human-testable on a shared CMS. Manage Service evolution.
  • Defects Not Caught by Tests – Design testable Service interfaces. Test-driven integration.
  • Delays – Development is usually not the bottleneck. Map the value stream to identify real organizational bottlenecks.
  • Handoffs – Service developers work directly with Service consumers (i.e. developers, sysadmins, help desk).

The customers of a Service are similar to the customers of an API. I wrote years ago about how the design and management of APIs lead to the development of damn good software. The same principles can be applied to the lean development of services. Taking some wisdom from that, here are some recommended practices for Lean SOA:

  • Designing Services is a human factors problem.
  • Design Services to support an Architecture of Participation. Focus on “Organizational Learning”.
  • Focus on what a user of your Service will experience. Simplicity is the #1 objective. Only when this has been accomplished (at least on paper) do we talk about implementation details.
  • Services aren’t included in a release until they’re very simple to use.
  • In tradeoff situations, ease of use and quality win over feature count.
  • Useful subsets of standards are OK in the short term, but should be fully implemented in the longer term.
  • Continuous Integration – Fully automated build process and tests.
  • Always Beta – Each build is a release candidate; we expect it to work.
  • Community Involvement – The community needs to know what is going on to participate. Requires transparency.
  • Continuous Testing.
  • Collective Ownership.
  • Preserve Architectural Integrity – Deliver on time, every time, but preserve architectural integrity. Deliver quality with continuity.
  • Services First – When defining a new Service, there must be at least one client involved, preferably more.


Finally, there is one last principle in Lean development, “Decide as Late as Possible”, that is of high importance. The ability to decide as late as possible is enabled by modularity. The absence of modularity makes composing new solutions, and therefore new integrations, extremely cumbersome. The key, however, is not to become a cargo cult and practice Lean SOA without understanding how one achieves modularity (or, to use another phrase, “intrinsic interoperability”). That understanding is the subject of the Design Language that I am in the process of formulating.

In conclusion, Lean SOA (a mashup of Lean and SOA) follows these principles:

  • Eliminate Waste – Spend time only on what adds business value.
  • Create Knowledge – Disseminate and share Service knowledge with the organization and its partners.
  • Defer Commitment – Be flexible before you optimize.
  • Deliver Fast – Deliver value quickly and incrementally. Don’t try to boil the ocean.
  • Respect People – Let the people who add value use their full potential.
  • Build Quality In – Don’t try to tack on quality after the fact – build it in.
  • Optimize the Whole – Strategic goals over project-specific benefits.
  • Build Interoperability In – Services should be designed to be modular.

This is as simple as it gets. The devil, of course, is in the details.

    Notes: You can find an earlier and different take on this here: SOA Agility.


    In Search of a Pattern Language for SOA Intrinsic Interoperability


    Any good Pattern Language should be based on a well-defined set of primitives (i.e. basic building blocks). Architectures and Design Patterns (referred to in the GOF book as micro-architectures) require a clear definition of constraints to be of any real value. Roy Fielding defines REST in the context of constraints. In stark contrast, most SOA definitions that one can find, including the OASIS standard definition, fail to define the architectural constraints.

    In previous posts I have formulated a set of attributes that provide a definition of Services. I have further refined those into this current definition:

    A Service Oriented approach satisfies the following:

    1. Decomposability – The approach helps in the task of decomposing a business problem into a small number of less complex subproblems, connected by a simple structure, and independent enough to allow further work to proceed independently on each item.
    2. Composability – The approach favors the production of Services which may then be freely combined with each other to produce new systems, possibly in an environment quite different from the one in which they were initially developed.
    3. Understandability – The approach helps produce software in which a human reader can understand each Service without having to know the others, or, at worst, by having to examine only a few of the others.
    4. Continuity – The approach yields a software architecture in which a small change in the problem specification will trigger a change of just one Service, or a small number of Services.
    5. Protection – The approach yields a software architecture in which the effect of an abnormal condition occurring at run time in a Service will remain confined to that Service, or at worst will only propagate to a few neighboring Services.
    6. Introspection – The approach yields an architecture that supports the search and inspection of data about Services (i.e. Service metadata).
    7. Remoteability – The approach yields an architecture that enables interaction between Services that reside in separate physical environments.
    8. Asynchronicity – The approach yields an architecture that does not require an immediate response from a Service interaction. In other words, it assumes that latency exists in either the network or the invoked Service.
    9. Document Orientedness – The approach yields an architecture where the messages exchanged in Service-to-Service interactions are explicitly defined and shared, and there is no implicit state sharing between interactions.
    10. Decentralized Administration – The approach yields an architecture that does not assume a single administrator for all Services.

    This is an extended version of Bertrand Meyer’s definition of Modularity. You can look at my previous post entitled “SOA and Modularity” to see how this compares with other definitions of SOA.

    Now if we were to consult the “SOA Manifesto” and its value system, then we could derive the following goal: “We believe in building modular systems through intrinsic interoperability and evolutionary refinement to achieve business value and satisfy strategic goals”. The key ingredient of this statement that is left ambiguous is “Intrinsic Interoperability”. The key question for anyone employing SOA is how to achieve it. Modularity and Evolutionary Refinement are well-understood principles; Intrinsic Interoperability is not. One may believe that interoperability can be achieved by simply mandating a global standard. This can work in theory, but rarely does in practice. Centralized planning is rarely a scalable approach; evolutionary refinement in fact demands a decentralized approach. The question one needs to ponder is how to build interoperable systems using a decentralized approach. In the literature I have surveyed, I have yet to find a cohesive treatment of how this can be done.

    Over the past decade many Design Patterns have been proposed to address the concerns that arise within a Service Oriented Architecture. The most notable collections have been the following:

    I’ve taken the trouble to comb through these patterns and to identify which ones lead to improved intrinsic interoperability. One of the challenges in developing a pattern language is the creation of a categorization that covers the entire collection.

    Service Identification Patterns

    • Dynamic Discovery – When a Service joins a network it might not have any knowledge about which other Services are available.
    • Absolute Object Reference – The notion of an identifier to a service that can be exchanged by other services and used to invoke the original service is a key ingredient for Service mobility.
    • Lookup – A Service is selected based on the query of Services in a directory. Provides an additional layer of indirection in identifying services.
    • Referral – A Service is selected based on the consultation of another Service. The difference from the previous pattern is that another service is responsible for making the selection.
    • Proxy – A service communicates with another service whose identity it does not have, or which is unreachable.
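The Lookup pattern can be sketched as a small capability-keyed directory that adds a layer of indirection between consumer and endpoint; the capability names and endpoints below are illustrative assumptions:

```python
class ServiceDirectory:
    """Lookup pattern: consumers query by capability instead of holding
    a hard-wired endpoint, adding a layer of indirection."""

    def __init__(self):
        self._entries = []  # (capability, endpoint) pairs

    def register(self, capability, endpoint):
        self._entries.append((capability, endpoint))

    def lookup(self, capability):
        return [ep for cap, ep in self._entries if cap == capability]

directory = ServiceDirectory()
directory.register("payments", "http://pay-1:8080")
directory.register("payments", "http://pay-2:8080")
directory.register("shipping", "http://ship-1:8080")
endpoints = directory.lookup("payments")
# ['http://pay-1:8080', 'http://pay-2:8080']
```

Referral differs only in that another service, not the consumer, performs the `lookup` and makes the selection.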

    Service Dependency Patterns

    • Termination Notification – A mechanism to indicate when a Service becomes permanently unavailable is necessary to manage the evolution of Services.
    • Lease Renewal – This mechanism is similar to the previous one; however, the onus is placed on the consuming service to renew its dependency.
    • Reminder – Removes the requirement for a Service to maintain its own scheduling service.

    Service Extension Patterns

    • Invocation Interceptor – Provides the capability of dynamically introducing new Service functionality.
    • Invocation Context – Permits new Service functionality to be added that is dependent on invocation context rather than Service definition
    • Protocol Plug-in – Provides an explicit mechanism for introducing a new communication protocol to an existing Service.
    • Location Forwarder – A specialization of Invocation Interceptor where the Forwarder sends an invocation to another Service.
    • Delegation – Where a Service allocates a task previously allocated to it to another Service.
    • Escalation – Where a Service attempts to progress a work item that has stalled by offering it to another Service.
    • Deallocation – Where a Service makes a previously started task available for offer and subsequent distribution.
    • Reallocation – Where a Service allocates a task that it has started to another Service. Can be stateful, where the current state of the task is retained, or stateless, where the task is restarted.
    • Suspension/Resumption – Where a Service temporarily suspends execution of a task or recommences execution of a previously suspended task.

    Service Negotiation Patterns : The Customer and Performer negotiate until they reach an agreement (commitment) about the work to be fulfilled.

    • Receiver Cancels – The receiving Service can cancel within a certain timeframe.
    • Sender Cancels / Contingent Request – The sending Service can cancel within a certain timeframe.
    • Binding Request – A sending party sends an offer that it will agree to if the receiving party accepts.
    • Binding Offer – A sending party requests an offer that will be responded to with an offer by the receiving party.
    • Resource-Initiated Allocation – The ability for a resource to commit to undertake a work item without needing to commence working on it immediately.
    • Resource-Initiated Execution – Offered Work Item – The ability for a resource to select a work item offered to it and commence work on it immediately.
    • Resource-Determined Work Queue Content – The ability for resources to specify the format and content of work items listed in the work queue for execution.
    • Selection Autonomy – The ability for resources to select a work item for execution based on its characteristics and their own preferences.

    Service Performance Patterns: The Performer fulfills the agreement.

    • Role-Based Distribution – The selection of a service to perform a task is based on the role of a service.
    • Deferred Distribution – The selection of a service to perform a task is deferred to the time of the request.
    • Case Handling – The selection of a service to perform a task is based on the case of the request.
    • Capability-Based Distribution – The selection of a service to perform a task is based on the capability of the service.
    • History-Based Distribution – The selection of a service to perform a task is based on the Service’s handling history.
    • Organisational Distribution – The selection of a service to perform a task is based on the relationship of the service with other services.
    • Two Phase Execution – A service sends plan information prior to the start of execution.
    • Prepare to Start / Start – A service waits for a permission to start prior to the start of execution.
    • Interleaved Parallel Routing – A partial ordering of tasks are defined and can be executed in any order that conforms to the partial ordering.
    • Deferred Choice – A point in a process where one of several branches is chosen based on interaction with the operating environment.

    Service Reporting Patterns – The performer reports on the status of the execution of the agreement.

    • Fire-and-Forget – Invoke a Service without expecting a response.
    • Request-Response with Retry – Invoke a Service with the expectation that a retry does not alter the semantics of the previous invocation.
    • Polling – Periodically invoke a Service to derive status.
    • Subscribe-Notify – Subscribe to a Service to receive future notifications.
    • Quick Acknowledgment
    • Sync with Server – Provide a mechanism to synchronize with a Server’s data.
    • Result Callback – Provide a mechanism for the invoked Service to asynchronously return a response.
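Request-Response with Retry only works when a repeated invocation does not alter the outcome. A minimal sketch, with a simulated flaky service standing in for a real endpoint (the names and retry parameters are illustrative):

```python
import time

def request_with_retry(invoke, attempts=3, backoff=0.0):
    """Retries on transient failures; safe only if the invoked Service
    is idempotent, i.e. repeating the invocation does not change the outcome."""
    last_error = None
    for attempt in range(attempts):
        try:
            return invoke()
        except ConnectionError as err:
            last_error = err
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error

calls = {"n": 0}

def flaky_service():
    """Fails twice with a transient error, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = request_with_retry(flaky_service)  # "ok" after two transient failures
```

Fire-and-Forget is the degenerate case where the caller never looks at the return value at all.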

    Service Acceptance (Satisfaction) – The Customer evaluates the work and either declares satisfaction or points out what remains to be done to fulfill the agreement.

    • Retry
    • Compensating Action

    Clearly there’s a lot of interesting literature out there that can provide a lot of insight into the interoperation of Services. The above list is just a rough sketch and I’m hoping to provide a more cohesive set over time.

    TBD: Conversation join, Conversation refactor, Initiate conversation, Follow conversation,
    Leave conversation, Atomic consumption.


    SOA Design Patterns Book – A Review


    The problem with SOA is that it has always been too abstract. The SOA Manifesto that was signed in late 2009 confirms this. No longer is it a set of technologies or even a set of standards; it is simply the architecture that arises from applying service orientation, which is defined as building modular systems via evolutionary refinement to achieve business goals. This definition is a bit too abstract for me. Astonishingly, that’s as good a definition as you can find in the manifesto.

    So I decided to dig deeper; possibly I can glean some knowledge by going through patterns discovered in practice and documented in the book “SOA Design Patterns” by Thomas Erl. This is a massive book with over 800 pages; the patterns in the book can also be found at www.soapatterns.org. The problem I have with almost all SOA books is that SOA is discussed in a manner that reveals little differentiation from any other distributed processing model.

    The first hundred pages of the book covers introductory material covering SOA and Design Patterns. There’s nothing new here that you can’t find in other books on the subjects. So let’s dive straight into the meat of the book, the Design Patterns themselves.

    I’m a big fan of Design Patterns; however, I abhor it when authors define a Design Pattern that is obviously implied by the domain the patterns are defined for. For example, if we take the Object Oriented Programming (OOP) domain, Polymorphism is not a Design Pattern; it is an attribute of OOP. When I see these kinds of patterns, it’s an indicator to me of a lack of rigor in vetting them.

    In the original GoF book, the OOP design patterns are categorized into three sets: Behavioral, Creational and Structural. In this book the categories are Service Inventory, Service Design and Service Composition. The “Service Inventory” category is the most difficult to grasp, simply because it is too abstract and its definitions are very weak. The Service Design category covers concerns revolving around the design of a service by itself. The Service Composition category covers how services are composed together and how they interact. Service Inventory, however, seems to describe services at a meta-level; that is, how one would describe services.

    The book is a very difficult read because it avoids the more concise terminology commonly used in other computer science texts. Furthermore, it employs pattern names that, although they sound familiar, can lead to a lot of confusion. In my attempt to understand the book, I will be relating its Design Patterns to more commonly understood computer science terminology.

    The first category of patterns, “Service Inventory Patterns”, covers ways in which Services are to be described. It can be a bit confusing discussing ideas at a meta-level, or in other words attempting to describe how you describe things. That’s the main flaw of this section: it is not made apparent what is being talked about.

    Chapter 6 covers “Foundational Inventory Patterns”. These patterns simply cover recording and categorizing services. The “Enterprise Inventory Pattern” says that services should be recorded in an inventory; said inventory can be further categorized into different domains (i.e. “Domain Inventory Pattern”) and into various interacting layers (“Service Layers”). Each service can be normalized to minimize overlap in functionality (i.e. “Service Normalization Pattern”) while making sure to avoid redundant logic (“Logic Centralization Pattern”). A standard protocol (“Canonical Protocol Standard”) and standard schemas (“Canonical Schema”) may be defined in the inventories. There is nothing really informative in this chapter; it’s all about bookkeeping. Maintaining a metadata repository to track a system’s artifacts is nothing new. I personally would have condensed this into a “Meta Service Pattern” and shoved all the different aspects into a single pattern.

    Chapter 7 covers “Logical Inventory Layer Patterns”, which in my opinion simply talks about the kinds of services that may be implemented (really just another categorization). That is, one can talk about Utility, Entity and Process focused services.

    Chapter 8 covers “Inventory Centralization Patterns”. In general, Processes, Schemas, Policies and Rules (which incidentally are all metadata) can be positioned in a central location so as to avoid duplicate and inconsistent definitions. I would have just called this the “Source of Truth Pattern”.

    Chapter 9 covers “Inventory Implementation Patterns”, which should mean something along the lines of how you would implement ‘metadata’. Unfortunately I fail to see the logic behind why the patterns in this chapter are collected in this category. The category seems to consist mostly of patterns involving the sharing of compute resources across multiple services. The first pattern, “Dual Protocols”, doesn’t really belong here; it is really about supporting more than one protocol for a given service. I would in fact rename this the “Service Virtualization” pattern. “Canonical Resources” is about providing standard interfaces to compute resources. “State Repository” is about providing a utility service for storing service state. “Stateful Services” is about Services that maintain their own state. “Service Grid” is some kind of service fabric that provides high scalability and fault tolerance for services that require state. I don’t know why this is a pattern; it seems to be more of a technology. “Inventory Endpoint” is a kind of service that acts as a facade to multiple services. “Cross-Domain Utility Layer” provides utility services that span multiple domains, though this pattern seems to be a replay of a previously mentioned layering pattern.

    Chapter 10 covers “Inventory Governance Patterns”, which would mean managing ‘metadata’. “Canonical Expression” states that there should be a standard way of defining contracts. “Metadata Centralization” states that there should be a registry to store services for discovery. I would rename this pattern “Metadata Discovery” to disambiguate it from the “Inventory Centralization Patterns”. The key point here is that metadata should be discoverable by the services within the system. “Canonical Versioning” states that there should be a standard way of defining versions of services; this pattern is in fact ambiguous with a later pattern that describes the idea that there should be a language for versioning.

    The next set of chapters covers Service Design.

    Chapter 11 covers “Foundational Service Patterns”. The problem I have with this chapter is that it talks about fundamental concepts, which apparently is difficult to differentiate from talking about metadata. In other words, if I can describe my vocabulary then I am in essence defining the foundations of what I’m describing. The chapter includes patterns that one would assume to be all too obvious. For example, the “Functional Decomposition” pattern states that a problem can be broken down into smaller problems. The inclusion of this kind of pattern is plainly absurd. There is “Service Encapsulation”, which has a misleading name but is about designing existing logic as a service that can be used outside of its original context; again, Erl continues to state the obvious through complex pattern definitions. Finally there are the “Agnostic Context” and “Non-Agnostic Context” patterns, which are all about identifying multi- or single-purpose services. This chapter seems completely pointless in my opinion.

    Chapter 12 covers “Service Implementation Patterns”. Finally there is some meat on the bones. However, some of the patterns here are miscategorized in that they are more about Service Composition; for example, “Service Facade”, “Redundant Implementation” and “Service Data Replication” should be in the Service Composition category. In fact, it would have made better sense to categorize these patterns, and those in chapter 9, under “State Handling Patterns”. The “Partial State Deferral” pattern is indeed an implementation detail of a service, in how it manages its runtime state. I am personally a bit ambivalent about SOA design patterns that concern themselves with resource optimization; those kinds of patterns belong elsewhere. The “Partial Validation” pattern permits services to focus on what’s relevant in data and ignore the rest. This is a very useful capability that supports both versioning and interoperability. The “UI Mediator” pattern is likely the most unique pattern I’ve found in this book. It is about providing a mechanism for giving a user timely feedback on the progress of a service execution.
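    To see why Partial Validation aids versioning and interoperability, here's a rough sketch; the function, field names and message shape are all my own hypothetical choices. The service validates only the fields it depends on and deliberately ignores everything else, so a newer, extended contract still satisfies an older consumer's checks.

```python
def partial_validate(message, required):
    """Validate only the fields this service depends on; unknown fields
    are deliberately ignored, which is what lets an older service accept
    messages produced under a newer, extended contract."""
    missing = [field for field in required if field not in message]
    if missing:
        raise ValueError("missing fields: %s" % missing)
    for field, check in required.items():
        if not check(message[field]):
            raise ValueError("invalid value for %s" % field)
    return True

# A newer message with an extra field still passes the older validation.
msg = {"order_id": "A-42", "quantity": 3, "gift_wrap": True}
ok = partial_validate(msg, {
    "order_id": lambda v: isinstance(v, str),
    "quantity": lambda v: isinstance(v, int) and v > 0,
})
```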

    Chapter 13 covers “Service Security Patterns”. This is a very coherent category in that it restricts itself to the concern of handling the security of services. This is probably one of the better chapters, and the interesting coincidence is that none of the patterns were written by Erl. In fact, as a rule of thumb, patterns that were written by someone other than Erl tend to be more valuable. I have particularly high regard for the patterns written by David Orchard; these are non-obvious and quite insightful. Erl, however, at times creates a pattern that appears to be a duplicate of Orchard’s (i.e. Version Identifier) and does a very poor job of presenting it (i.e. Canonical Versioning).

    Chapter 14 covers Service Contract Design Patterns. This is actually a good categorization, though I would have chosen a different name: “Contract Coupling” patterns. “Decoupled Contract” states that a contract should be decoupled from its implementation; this is a good practice worth emphasizing. “Contract Centralization” states that all access to a service is through its contract; a better name would be Service Encapsulation. “Contract Denormalization” seems to run counter to chapter 7’s service normalization: the pattern states that redundancy in a contract may be required to reduce demands on consumers. Patterns like this, which appear to conflict with other patterns, are actually the ones that can be quite insightful. I would, however, rename it “Contract Redundancy”. “Concurrent Contracts” is where a service defines different kinds of contracts depending on the target consumer, again running counter to service normalization. Finally, there is a very interesting pattern, “Validation Abstraction”, where validation logic is made portable from the service contract.

    Chapter 15 covers Legacy Encapsulation Patterns. This is yet another bad chapter. It covers the “Legacy Wrapper” pattern, which is clearly the same as “Service Encapsulation”. The “Multi-Channel Endpoint” pattern is intended to support multiple user access channels (e.g. laptop, mobile, etc.), which again is expounding on the obvious. Services are meant to be shareable across multiple contexts; is it not blindingly obvious that multiple access channels would share the same service? Finally there’s the “File Gateway” pattern, which is the same thing as the “Protocol Bridging” pattern described in a later chapter. This chapter makes me wonder about the target audience’s level of technical sophistication.

    Chapter 16 covers Service Governance Patterns. The “Compatible Change” pattern, written by David Orchard, discusses how to change a contract without affecting legacy consumers. A good example of a well written and insightful pattern. The same goes for the next pattern, “Version Identification”, which describes the need to define a version vocabulary that identifies the compatibility constraints between versions. The “Termination Notification” pattern is another that is all too easy to forget: there should be a mechanism for contracts to express service termination information. This is the point where the chapter turns for the worse. The “Service Refactoring” pattern is an obvious consequence of the “Service Decoupling” described previously. The “Service Decomposition” pattern is just a refactoring technique, and the same goes for the “Proxy Capability” pattern. Now my head is beginning to hurt. Here you find “Decomposed Capability”, with the following description: “How can a service be designed to minimize the chances of capability logic deconstruction?” I have little patience left to figure out what is meant here. The same goes for the “Distributed Capability” pattern. I’ll likely make another effort some other day.
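    The Version Identification idea can be made concrete with a toy compatibility rule. The major/minor convention below is my own assumption for illustration, not something prescribed by the pattern: a version vocabulary whose constraint is that additive changes bump the minor number while breaking changes bump the major number.

```python
def is_compatible(provider_version, consumer_version):
    """Toy compatibility constraint: same major version is required,
    and the provider's minor version may only run ahead (i.e. changes
    within a major version are assumed additive and backward compatible)."""
    p_major, p_minor = provider_version
    c_major, c_minor = consumer_version
    return p_major == c_major and p_minor >= c_minor

compatible = is_compatible((2, 3), (2, 1))    # additive change: fine
breaking = not is_compatible((3, 0), (2, 9))  # major bump: breaking
```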

    The next section covers Service Composition Patterns, which discusses patterns on how to compose existing services with each other.

    Chapter 17 is a chapter bordering on the absurd. The “Capability Composition” pattern is about composing services out of other services. I’m a bit confused here; all this time I had thought that services by definition were composable. “Capability Recomposition” is again obscured by a description like “How can the same capability be used to help solve multiple problems?” This is plain and simple service instantiation; I can’t see how it is non-obvious. The book seems to repetitively recast well known computer science concepts as patterns, and furthermore recasts them as entire chapters. The entire chapter can be summarized in one sentence: “Services can be composed of services, and Services can be instantiated and invoked in multiple contexts.” I’m beginning to get the feeling that Erl has trouble understanding basic concepts like ‘instantiation’.

    Chapter 18 covers “Service Messaging” patterns. Hohpe’s “Enterprise Integration Patterns” provides a much better treatment of this subject area, and I refer you to his excellent book if you’re interested in it. To be brief, the following patterns are discussed: “Service Messaging”, “Messaging Metadata”, “Service Agent”, “Intermediate Routing”, “State Messaging”, “Service Callback”, “Service Instance Routing”, “Asynchronous Queuing”, “Reliable Messaging” and “Event Driven Messaging”. The author again tries to redefine a common term: the “Service Messaging” pattern is essentially “Asynchronous Communication” recast as a pattern.
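    As a taste of what Hohpe covers far better, here is a bare-bones sketch of “Event Driven Messaging” (publish and subscribe), where the publisher never learns the identity of its consumers. The `Broker` class and the topic name are my own toy constructions.

```python
from collections import defaultdict

class Broker:
    """Toy publish/subscribe broker: publishers hand events to a topic
    and never learn the identity of the subscribers."""
    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic, handler):
        self.topics[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.topics[topic]:
            handler(event)

# Usage: the publisher of "order.created" knows nothing about `received`.
broker = Broker()
received = []
broker.subscribe("order.created", received.append)
broker.publish("order.created", {"id": 1})
```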

    Chapter 19 covers “Composition Implementation Patterns”. I find the title confusing; I simply don’t understand what ‘Implementation’ means in this context. The chapter covers an incoherent collection of patterns: “Agnostic Sub-Controller”, “Composition Autonomy”, “Atomic Service Transaction” and “Compensating Service Transaction”. I would think the appropriate title could be “Scope of Work Patterns”. It is as if Erl structures his chapters by creating combinations of the words “foundational” and “implementation” without giving any thought to what they mean.

    Chapter 20 covers Service Interaction Security Patterns, an interesting collection of patterns not written by Erl.

    Chapter 21 covers Transformation Patterns. The chapter covers the “Data Model Transformation” pattern and the “Data Format Transformation” pattern which were written by Erl and the “Protocol Bridging” pattern written by Mark Little. The latter pattern is clearly the generalization of the former two.
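    Protocol Bridging as a generalization of the two transformation patterns can be illustrated with a trivial mediator; the pipe-delimited legacy format and field names below are invented purely for illustration. The bridge translates both the format and the data model of a legacy record into a canonical message.

```python
def bridge(legacy_record):
    """Toy protocol bridge: translate a pipe-delimited legacy record
    into the canonical message format. Data model and data format
    transformation fall out as special cases of this mediation."""
    name, amount = legacy_record.split("|")
    return {"customer": name, "amount": float(amount)}

canonical = bridge("ACME|19.99")
```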

    In summary, the SOA Design Patterns book isn’t structured with the same rigor and coherence as other Design Patterns books. The content is unusually wordy and repetitive. There are a lot of diagrams, but a majority of them provide little insight. The book takes well known concepts in computer science and regurgitates them as design patterns, essentially taking what is obvious and making it obscure. Despite the poor quality of most of the book, its saving grace is that a few of the patterns submitted by contributors are of high quality.

    However, considering the pervasively poor quality of SOA books in general, I’m going to say it is one of the more valuable SOA books. Even if this bar is extremely low, this is one of the few SOA books where you can indeed find some true nuggets of wisdom. (The book’s website has a lot more interesting patterns that weren’t published with the book.) However, you have to dig very hard and long to find them, because the map that is provided is obscuring and more of a hindrance than an aid. Read it only if you know what to look for.

    So that you don’t have to waste your own valuable time, I’ve collected a reference list of the patterns from the book that are of some value. Included is a quick explanation of my own and an alternative, hopefully more concise, name.

    1. Canonical Protocol – It is always convenient to standardize on a common protocol to reduce bridging costs. Uniform Protocol.
    2. Dual Protocol – Supporting more than one protocol increases the number of compliant clients. Virtual Service.
    3. Canonical Expression – Specifications about meta-data should be standardized to avoid the cost of translation. Canonical Metadata Language.
    4. Metadata Centralization – SOA systems should support some kind of discovery of metadata services. Metadata Discovery.
    5. Partial Validation – Services should support non-strict validation of messages. Non-strict Validation.
    6. UI Mediator – Provide a capability to receive timely feedback when monitoring a service’s execution.
    7. Exception Shielding – To ensure security the implementation details of an exception should be hidden from a client.
    8. Message Screening
    9. Trusted Subsystem
    10. Service Perimeter Guard
    11. Partial State Deferral
    12. Contract Denormalization – Redundant specifications are sometimes necessary to reduce coupling. Redundant Contract.
    13. Validation Abstraction – A language for input validation should be introspect-able to permit flexibility in where validation is performed. Introspect-able Validation.
    14. Compatible Change – Service contract changes can be performed in a way to support backward compatibility.
    15. Version Identification – A versioning vocabulary should reveal the compatibility constraints between different versions of a service. Versioning Constraints.
    16. Termination Notification – A service should have a mechanism to express its availability.
    17. Messaging Metadata – There should be a mechanism to parse information about a message without having to read the entire message. Message Envelope.
    18. Intermediate Routing
    19. State Messaging – Conversational state can be stored in messages. Conversation State Messages.
    20. Service Instance Routing – Communication between services may be routed using logic that is dependent on the content of the message. Content Based Routing.
    21. Asynchronous Queuing – Clients need not depend on the temporal availability of the services they invoke. Asynchronous Communication.
    22. Reliable Messaging – Clients should not have to manage the reliable delivery of a communication to its destination.
    23. Event Driven Messaging – A service may not require the knowledge of the identity of its clients. Publish and Subscribe.
    24. Compensating Service Transaction – Actions performed by services should be undoable.
    25. Data Confidentiality
    26. Data Origin Authentication – A mechanism for discovering the provenance of data is essential. Non-forgeable Provenance.
    27. Broker Authentication – An intermediate broker may be required when there is no trust between two interacting services. Trust Broker.
    28. Protocol Bridging – SOA should allow the inclusion of a protocol mediator to translate communication between services. Protocol Mediator.
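    A few of these patterns can be sketched in a handful of lines. For example, pattern 20 (Content Based Routing) reduces to selecting a destination from message content; the predicates, message fields and queue names here are hypothetical ones of my own choosing.

```python
def route(message, routes, default=None):
    """Content Based Routing (pattern 20): choose a destination from
    the content of the message itself. `routes` is an ordered list of
    (predicate, destination) pairs; the first match wins."""
    for predicate, destination in routes:
        if predicate(message):
            return destination
    return default

routes = [
    (lambda m: m.get("priority") == "high", "express-queue"),
    (lambda m: m.get("region") == "EU", "eu-queue"),
]
dest = route({"priority": "high", "region": "EU"}, routes, default="bulk-queue")
```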

    Everything else not listed here most likely consists of fluff and is best ignored. Do let me know, however, if I mistakenly ignored a good design pattern.


    Push Driven Business Process Modeling Considered Harmful


    Several years ago, I had this intuition that ‘locking can’t be abstracted away‘ and that ‘pessimistic locking was impractical‘. These were somewhat controversial views back then. My arguments were grounded in the fact that transactions could be in conflict with what the business requires. It certainly fell on deaf ears for those developers who wanted to be isolated from the concerns of a business.

    Today, however, considerable consensus has been building that you can’t always require transactions in a purely technical sense. In fact, there is a formal conjecture (see Brewer’s Conjecture) that it may simply be impossible to have availability, consistency and partition tolerance all at the same time. In short, if you want massive scalability then be prepared to sacrifice transactions. In a physical world where instantaneousness doesn’t really exist (see: the night sky), the notion of relativism should be more the norm.

    The question my mind has been stewing over these days is the notion of explicit Business Process Modeling. Codifying knowledge is at the heart of software development. Defining and streamlining process is at the heart of a successful enterprise. I have no argument against the need to streamline the handling of physical goods. You can’t break the laws of physics: the movement of mass requires energy, and anyone who has filled up his gas tank lately knows that energy costs money. My question is whether explicit Business Process Modeling (BPM) makes sense when confined to the context of knowledge based industries, that is, where value is created based on virtual goods.

    The general prescription of a workforce automation or re-engineering activity is to first uncover and make explicit the underlying business process. It is through this discovery phase that one can identify points of improvement. BPM streamlines the codification of the business process by allowing practitioners to build from process models a machine-executable rendition of them. It is an extremely appealing idea, especially for management: a modeled process is now changeable on a whim (so they claim) and there is better visibility, and therefore control, of what’s going on.

    Process is extremely important; however, process for process’ sake can be extremely harmful. Software development is one of the most knowledge intensive industries known to man. We have painfully discovered the failures of rigid waterfall methodologies and have begun transitioning to more agile, lean and iterative methodologies. This evolution from rigid specification to a more iterative approach gives me a hint that codifying process models can only take you as far as big upfront design does. On the flip side, BPM tools should also place a premium on supporting the iterative development of process models.

    BPM tools, however, are best able to codify repetitive processes. These, however, are low-hanging fruit. The ‘not-happy-path’ or ‘exceptional-path’ process model is what’s difficult to capture, and in many knowledge based industries this path is where most of the value comes from.

    The key to understanding business processes is that they are highly asynchronous and concurrent. Sequential dependencies exist not because they are defined by control-flow process diagrams but rather because the knowledge that’s being pushed around has intrinsic dependencies. The fatal conceptual flaw of control-flow is that it places artificial constraints on the movement of information. Furthermore, a control-flow driven process is dehumanizing because that’s simply not how humans do work. Mark Miller explains this best when he writes:

    Remarkably, human beings engage in concurrent distributed computing every day even though people are generally single threaded.

    How do we simple creatures pull off the feat of distributed computing without multiple threads? Why do we not see groups of people, deadlocked in frozen tableau around the coffee pot, one person holding the sugar waiting for milk, the other holding milk waiting for sugar?

    The answer is, we use a sophisticated computational mechanism known as a nonblocking promise.

    (read his piece for the details)

    human beings never get trapped in the thread-based deadlock situations described endlessly in books on concurrent programming. People don’t deadlock because they live in a concurrent world managed with a promise-based architecture.
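    Miller's nonblocking promise translates almost directly into futures. Here is a sketch, using Python's concurrent.futures purely as an illustration of the idea: each "person's" request is a promise, nobody blocks while holding a resource, and results are pulled on demand.

```python
from concurrent.futures import ThreadPoolExecutor

# Two "people" each need an ingredient held by someone else. With
# promises (futures), neither blocks while holding a resource: each
# request is a non-blocking promise, and results are pulled on demand,
# like items being worked off a to-do list.
with ThreadPoolExecutor(max_workers=2) as pool:
    milk = pool.submit(lambda: "milk")    # a promise for milk
    sugar = pool.submit(lambda: "sugar")  # a promise for sugar
    coffee = "coffee with %s and %s" % (milk.result(), sugar.result())
```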

    Requests are ‘pushed’ as signals into to-do lists, the responses are promises for future action, and the execution of these actions is pulled on demand from the to-do list. This precisely mirrors the Kanban practiced by Lean and JIT manufacturing. As a matter of fact, any decent Issue Tracker would suffice, and a process would just be a template for creating a bunch of ‘signaling’ tasks. In short, to-do lists are all that’s essential to support human workflow. Richard Pawson of NakedObjects fame writes:

    I understand that all of this is a somewhat controversial view of business, but it is based on real experience. It reflects a very strong personal viewpoint that, to the extent that an information system involves human users, then the system should be designed to be the servant of the user, not the master. Most workflow systems attempt to be the master of the user.

    Pull driven processes make sense for humans. Unfortunately, their essence is cumbersome to capture using push based, control-flow diagramming notations. A formal BPM specification is best used as an active monitor that notifies users when events slip through the cracks. Beyond that, it should act more as a facilitator than as the manager.

    In fact, to support ‘exceptional-path’ processes, we can derive inspiration from the ideas and tools we employ in lean development. I’ve previously written about how ‘Web 2.0 enables Lean production‘. What becomes strikingly obvious is that software development is rarely ever driven by a BPM tool. It is, however, supported by tools like version control systems, issue trackers, wikis, automated build systems and continuous integration dashboards; that is, tools that are more focused on managing knowledge.

    A BPM tool apparently doesn’t add enough value to be of any use in the process of developing software. There seems to be little value in refactoring control flow out of knowledge (or documents). In other words, if there are any data dependencies then those provide the constraints on the flow. Work exerted to refine exactly what that control flow should look like is an exercise in splitting hairs.

    I am forever dumbfounded when I stumble upon development organizations that lack even a rudimentary VCS and Issue Tracker. It is the height of hypocrisy to sell customers on the use of information technology to enhance their processes while not knowing how to use information technology to enhance one’s own. So if a BPM isn’t used to drive one’s own process, then why in the world is it sold to drive someone else’s? Perhaps someone else’s process isn’t as complicated as one’s own? Perhaps someone else’s business is less human centric than software development? Perhaps a more advanced collaboration tool like IBM’s Jazz is the cure to our process ills?

    To conclude, as the saying goes, “practice what you preach”. Good software process is based on sound underpinnings and should also be supported by automation. If you are able to migrate this wisdom over to support your customers’ processes then you will have added more value than just codifying their business processes. So if you asked me for a solution to re-engineer a business process, I would recommend, at a bare minimum, a document management system that supports versioning and an issue tracking system. Couple this with a good dose of design patterns (Issue Tracking Patterns, SCM Patterns).


    SOA Principles and Modularity


    There’s an ongoing argument in the blogosphere on the “importance of Cohesion in SOA“. Cohesion and Coupling are two features that should be present in good software. The paradox is that they’re opposing forces, and a balance between them has to be struck. The ongoing argument is that SOA tends to lean towards loose coupling and therefore cohesion must take a back seat. What both surprises and disgusts me is that SOA practitioners can’t even get a handle on this basic concept.

    Steve Vinoski wrote a survey paper in 2005 about Cohesion and Coupling. The paper revisits the concepts of coupling and cohesion that were introduced with structured programming; in other words, pre-object orientation. He concludes in his paper:

    Given that transitions to “new” computing styles are often accompanied by explicit disapproval of the outgoing style, it’s no surprise that today’s focus on SOA has created a bit of a backlash against distributed objects. What’s unfortunate is that many of the measures of quality for distributed object systems apply equally well to distributed services and SOA, so it’s a shame that some feel compelled to ignore them just to be trendy. But perhaps it doesn’t matter, because we can just go back to the days before objects, dig up measures like coupling and cohesion, and apply them all over again — for the first time, of course.

    Reality Check: the world has made a world of progress since the days of Structured Programming and has settled on Object Orientation. It is therefore imperative that we consult Object Oriented texts to glean an understanding of what makes for good software. One of the most authoritative books of that time was Bertrand Meyer’s “Object-Oriented Software Construction, 2nd edition“. The book was published back in 2000, weighing in at 4 lbs and 1,296 pages. In this book, Meyer presents five fundamental requirements that a design method worthy of being called “modular” must satisfy:

    1. Modular Decomposability – A software construction method satisfies Modular Decomposability if it helps in the task of decomposing a software problem into a small number of less complex subproblems, connected by a simple structure, and independent enough to allow further work to proceed separately on each item.
    2. Modular Composability – A method satisfies Modular Composability if it favors the production of software elements which may then be freely combined with each other to produce new systems, possibly in an environment quite different from the one in which they were initially developed.
    3. Modular Understandability – A method favors Modular Understandability if it helps produce software in which a human reader can understand each module without having to know the others or, at worst, by having to examine only a few of the others.
    4. Modular Continuity – A method satisfies Modular Continuity if, in the software architectures that it yields, a small change in the problem specification will trigger a change of just one module, or a small number of modules.
    5. Modular Protection – A method satisfies Modular Protection if it yields architectures in which the effect of an abnormal condition occurring at run time in a module will remain confined to that module, or at worst will only propagate to a few neighboring modules.

    Certainly a palatable and succinct list of requirements to guide the making of good modular software. I like lists like this, where each point is not only self-contained but orthogonal to the other points. Coupling and Cohesion are not the primary goals; rather, the goal is to achieve modularity, and Coupling and Cohesion are just artifacts of Modularity. Cohesion in particular simply boils down to our ability to understand the inner workings of a module. In the context of SOA, which concerns itself with inter-module interconnectivity, Cohesion’s intra-module concerns are of very little importance.

    However, what’s extremely disturbing is that SOA practitioners seem to be rediscovering tried and true principles of software design. Contrast Bertrand Meyer’s Requirements for Modularity (the 1st edition was circa 1997) with Thomas Erl’s recently published SOA principles (circa 2007):

    1. Loose Coupling – Service contracts impose low consumer coupling requirements and are themselves decoupled from their surrounding environment.
    2. Abstraction – Service contracts only contain essential information and information about services is limited to what is published in service contracts.
    3. Reusability – Services contain and express agnostic logic and can be positioned as reusable enterprise resources.
    4. Autonomy – Services exercise a high level of control over their underlying runtime execution environment.
    5. Statelessness – Services minimize resource consumption by deferring the management of state information when necessary.
    6. Discoverability – Services are supplemented with communicative meta data by which they can be effectively discovered and interpreted.
    7. Composability – Services are effective composition participants, regardless of the size and complexity of the composition.

    IMHO, this is a terrible set of principles because they are overlapping concerns. For instance, how is Composability different from Reusability? To illustrate this overlap, just perform a rough mapping of these principles to Meyer's Modularity requirements. We thus get:
    Loose Coupling maps to Decomposability, Protection and Continuity. Abstraction maps to Understandability. Reusability maps to Decomposability, Understandability and Composability. Autonomy maps to Protection. Finally, Composability maps, well, to Composability. What's depressing is that in 10 years of innovation (i.e. 1997 to 2007), SOA practitioners have added only Statelessness and Discoverability into the mix.

    To be completely fair, there are other SOA practitioners who lend more depth (or meat?) to the SOA discourse. One would expect book authors like Webber and Erl to have a more in-depth grasp of the subject; unfortunately, that's not the case.

    Don Box may have written the first set[1] of Four SOA Design Tenets in 2004:

    Tenet 1: Boundaries are Explicit

    Tenet 2: Services Are Autonomous

    Tenet 3: Services share schema and contract, not class

    Tenet 4: Service compatibility is based upon policy
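
    Tenet 3 in particular can be made concrete with a small sketch: a consumer validates the incoming wire document against a shared schema rather than deserializing it into a class shared with the provider. This is only an illustration under stated assumptions; the `ORDER_SCHEMA` contract and the `validates` helper are hypothetical names, not part of any standard API.

```python
import json

# Hypothetical shared contract: a minimal schema both parties publish.
# Consumers check the wire document against it instead of linking to a
# shared class from the provider's codebase.
ORDER_SCHEMA = {
    "order_id": str,
    "quantity": int,
    "sku": str,
}

def validates(document: dict, schema: dict) -> bool:
    """True if every field named in the schema is present with the expected type."""
    return all(
        field in document and isinstance(document[field], expected)
        for field, expected in schema.items()
    )

# The wire format is a document (here JSON), not a serialized object graph.
wire_message = '{"order_id": "A-1001", "quantity": 3, "sku": "WIDGET"}'
document = json.loads(wire_message)

print(validates(document, ORDER_SCHEMA))          # True
print(validates({"order_id": 42}, ORDER_SCHEMA))  # False
```

    The point of the design choice is that either side can evolve its internal classes freely, as long as the documents it emits still satisfy the published schema.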

    This was extended by Stefan Tilkov in his Ten Principles of SOA :

    1. Explicit Boundaries*
    2. Shared Contract and Schema not Class*
    3. Policy Driven*
    4. Autonomous*
    5. Wire Formats not APIs
    6. Document Oriented
    7. Loosely Coupled
    8. Standards Compliant
    9. Vendor Independent
    10. Metadata-driven

    With any new computing style, I'm always on the lookout for what differentiates it from a pre-existing style. Tilkov's principles, in my opinion, have more meat than Erl's. However, I question whether Loose Coupling is a SOA principle or the entire essence of SOA. After all, Wire Formats, Document Orientedness and Metadata-driven are all consequences of Loose Coupling. The same can be said of Don Box's original tenets. Not to be outdone, David Orchard has his own SOA principles that go like this:

    1. Trade-off Principle: SOA deployment is a trade-off between properties of interest.
    2. Defined message format Principle: Services have well defined message formats.
    3. Interface Principle: Services have described interfaces.
    4. Interface Fidelity Principle: The richness of the interface description relates directly to the amount of coupling.
    5. Composite Behavior Principle: The actual behavior is the sum of all the behaviors in the software.
    6. Distributed Principle: Services should be designed for existence in a widely distributed and heterogeneous computing environment.
    7. Decentralization Principle: Services should be designed and planned for decentralized administration.
    8. WS-* Technique: Services should make appropriate use of Web services specifications such as SOAP.
    9. State Technique: State location and management.
    10. Coarse Grained Technique: Coarse grained interfaces.
    11. Asynchronous Technique: Use asynchrony.
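
    Orchard's Asynchronous Technique can be sketched in a few lines: the caller fires a request document into a queue and collects the reply later, never assuming an immediate response. This is a minimal in-process illustration, not a definitive implementation; the `pricing_service` logic and document fields are invented for the example.

```python
import queue
import threading

# Request and reply channels decouple caller from service in time.
requests, replies = queue.Queue(), queue.Queue()

def pricing_service():
    # Hypothetical service: consumes request documents, posts priced replies.
    while True:
        doc = requests.get()
        if doc is None:  # shutdown sentinel
            break
        replies.put({"sku": doc["sku"], "price": 9.99})

worker = threading.Thread(target=pricing_service)
worker.start()

# Fire the request and continue with other work; the reply arrives later.
requests.put({"sku": "WIDGET"})
requests.put(None)
worker.join()

reply = replies.get()
print(reply)  # {'sku': 'WIDGET', 'price': 9.99}
```

    In a real deployment the queues would be durable middleware rather than in-memory structures, but the shape of the interaction, and the tolerance for latency it buys, is the same.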

    David Orchard’s list shows better balance, of course I do have to question the WS-* inclusion. Now if I could therefore come up with an aggregation of SOA principles that is formulated by extending Meyer’s Modularity principles and adding my definition of Services, it would then look like this:

    1. Decomposability – A method satisfies Decomposability if it helps in the task of decomposing a software problem into a small number of less complex subproblems, connected by a simple structure, and independent enough to allow further work to proceed independently on each item.
    2. Composability – A method satisfies Composability if it favors the products of software elements which may then be freely combined with each other and produce new systems, possibly in an environment quite different from the one in which they were initially developed.
    3. Understandability – A method favors Understandability if it helps produce software in which a human reader can understand each module without having to know the others, or, at worst, by having to examine only a few of the others.
    4. Continuity – A method satisfies Continuity if, in the software architectures that it yields, a small change in the problem specification will trigger a change of just one module, or a small number of modules.
    5. Protection – A method satisfies Protection if it yields architectures in which the effect of an abnormal condition occurring at run time in a module will remain confined to that module, or at worst will only propagate to a few neighboring modules.
    6. Introspection – A method satisfies Introspection if it yields architectures that support a mechanism that enables the structure of modules and the structure of their communication to be queried and examined at runtime.
    7. Remoteability – A method satisfies Remoteability if it yields architectures that support a mechanism that enables module communication by other modules that are hosted in separate physical environments.
    8. Asynchronicity – A method satisfies Asynchronicity if it yields architectures that do not assume an immediate response from a module invocation. In other words, it assumes that latency exists in either the network or the invoked module.
    9. Document Orientedness – A method satisfies Document Orientedness if it yields architectures where the messages of inter-module communication are explicitly defined and shared, and there is no implicit state sharing between invocations.
    10. Standardized Protocol Envelope[2] – A method satisfies a Standardized Protocol Envelope if it yields an architecture that requires the sharing of a common envelope message format across all module communications.
    11. Decentralized Administration – A method satisfies Decentralized Administration if it yields an architecture that does not require a single administrator for all modules.

    The ideas of yesteryear continue to be relevant today. What we must do is clearly identify what is truly new, rather than regurgitating old practices disguised as new ideas. This new set of SOA principles builds from what exists and extends it to distributed and decentralized environments. That is, the derivation begins with Meyer's modularity principles. To that I've added support for a component model in the Introspection requirement (note: Discoverability follows from this). Then I've required distributed computing while applying Deutsch's fallacies (see: Remoteability, Document Orientedness, Asynchronicity and Decentralized Administration).
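
    Document Orientedness and a Standardized Protocol Envelope can be sketched together: every message shares the same outer structure (headers plus body) regardless of which service it targets, and the body is a self-contained document with no implicit shared state. This is a hypothetical JSON-based envelope invented for illustration, not a real protocol; `wrap`, `unwrap` and the header fields are assumed names.

```python
import json
import uuid

def wrap(body: dict, service: str) -> str:
    """Place a self-contained document inside the common envelope format."""
    envelope = {
        "header": {
            "message_id": str(uuid.uuid4()),  # correlation id for the exchange
            "service": service,
            "version": "1.0",
        },
        "body": body,  # the document itself; no implicit state between calls
    }
    return json.dumps(envelope)

def unwrap(wire: str) -> dict:
    """Check the shared envelope version, then hand back the document."""
    envelope = json.loads(wire)
    assert envelope["header"]["version"] == "1.0"
    return envelope["body"]

wire = wrap({"customer": "ACME", "action": "renew"}, service="billing")
print(unwrap(wire))  # {'customer': 'ACME', 'action': 'renew'}
```

    SOAP's Envelope/Header/Body split is the best-known instance of this pattern; the sketch above just shows why it works: routing, versioning and correlation metadata live in the envelope, while the business document rides inside untouched.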

    Now it is important to realize that there's a vast gulf between a set of principles and effectively applying them in practice (see: "SOA Salvation"). The primary goal of SOA is to provide a more agile IT infrastructure. Even with the above SOA principles laid out, I have difficulty seeing how this helps achieve greater agility. There is a big social engineering aspect that seems to be required to achieve SOA success. Just as Web 1.0 evolved into Web 2.0 by virtue of systems that encourage participation, SOA will need to evolve into SOA 2.0. That is, a SOA architecture that encourages participation.

    Now regarding that all-encompassing term "Loose Coupling": one could essentially derive many of the above properties from it! To summarize, SOA is nothing but loosely coupled APIs.

    1. My own definition of services was published in 2003.

    2. Note that the “Standardized Protocol Envelope” is in contradiction with Peter Deutsch’s “8 Fallacies of Distributed Computing”.


    SOA 2.0 : Why a Revision is Really Necessary


    I guess I'm a little late to the debate about "SOA 2.0″. However, after going through the arguments, I will go squarely against the petitioners. The petitioners would have everyone believe that SOA is a well-defined idea that has worked wonders in practice. On the contrary, SOA is a term as nebulous as ever, and one in seven SOA endeavors ends up in failure. The ideas and concepts behind SOA, just like their WS-NonexistentStandards underpinnings, are careening towards a massive pileup. Joe McKendrick has a good summary of the current dismal state:

    But, if SOA really is so abstract and elusive, what else is there? What's the alternative? Enterprise computing requires a very deliberate methodology of planning that extends out for years and numerous budget cycles. For companies saddled with patchwork portfolios of various vendors' incompatible legacy systems, combined with home-grown systems, service-oriented architecture and Web services offer a path of least resistance.

    The reality is that the industry desperately needs an alternative, that is, a revision of the original SOA concepts.

    So that we can at least frame the argument, let's try to figure out exactly what SOA means. From there, we can propose a reformulation, something that goes beyond the simplistic Oracle/Gartner definition. The Oracle/Gartner definition leaves much to be desired in that it simply augments the original SOA with Event Driven Architecture (EDA). Now, every vendor has the right to hijack a term to hype up its existing product line. This tactic was practiced extensively for SOA 1.0, so I don't expect them to give it up for SOA 2.0.

    Nevertheless, I have yet to see a convincing definition of SOA 1.0 in all the years that it has been in existence. However, for argument's sake, let's take the definition cited by the petitioners. The first definition comes from the OASIS SOA Reference Model group, which defines SOA as:

    Service Oriented Architecture is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations.

    It's too generic a definition; however, it does point out the organizational and technical issues that the SOA paradigm addresses. Furthermore, it alludes to a "uniform means", which implies a standard implementation. A more technology-agnostic definition of SOA, quoted by many, is as follows:

    In Service-Oriented Architecture autonomous, loosely-coupled and coarse-grained services with well-defined interfaces provide business functionality and can be discovered and accessed through a supportive infrastructure. This allows internal and external system integration as well as the flexible reuse of application logic through the composition of services.

    I've struggled a bit to come up with a "definition of services" and what it means to be "loosely coupled" and to achieve "service composability". However, once you've nailed down these concepts, there are emerging concepts that give real substance to the need for "SOA 2.0″. Gregor Hohpe alludes to this emerging paradigm when he says in "SOA and Agility go together like Google and Search":

    The agile emphasis on evolution rather than complex upfront design is a perfect match for SOA because linking Web services into an application is so complex that architects and developers will not really know what they have until the project is complete.

    In an earlier piece, I wrote about the “10 Commandments for SOA Salvation“, and it goes as follows:

    1. Thou shalt not disrupt the legacy system.
    2. Thou shalt avoid massive overhauls. Honor incremental partial solutions instead.
    3. Thou shalt worship configuration over customization.
    4. Thou shalt not re-invent the wheel.
    5. Thou shalt not fix what is not broken.
    6. Thou shalt intercept or adapt rather than re-write.
    7. Thou shalt build federations before attempting any integration.
    8. Thou shalt prefer simple recovery over complex prevention.
    9. Thou shalt avoid gratuitously complex standards.
    10. Thou shalt create an architecture of participation. The social aspects of successful SOA tend to dominate the technical aspects.

    Successful SOA deployment, in other words, requires nimbly avoiding many of the organizational constraints of a legacy computing ecosystem. That is, success depends heavily on a more agile process and on technical underpinnings that facilitate this process. So "SOA 2.0″ goes beyond an architecture for services and moves into the realm of an architecture for participation. When Richard Veryard first coined the term and proposed his ideas, he was clearly headed in the right direction. That is, SOA 2.0 involves mechanisms for collaborative composition, uncontrolled reuse, internet-wide interoperability and a user-centric approach. The key distinction between SOA 1.0 and SOA 2.0 begins with the realization of the need to support a decentralized process:

    There are two competing visions of service-oriented architecture (SOA) circulating in the software industry, which we can label as SOA 1.0 and SOA 2.0. Our approach to governance is targeted at SOA 2.0. One of the central questions raised by Christopher Alexander in his latest four-volume work, The Nature of Order, is how to get order without imposing top-down (central) planning, or conversely how to permit and encourage bottom-up innovation without losing order.

    SOA 2.0 is about imposing an Architecture of Participation on top of the original service composition concepts of SOA 1.0. So when Mark Little writes, "Giving an architectural approach a version number is crazy: it makes no sense at all!", to me it makes perfect sense: the SOA 2.0 architectural approach is a different approach. It is in fact a more sensible approach to Enterprise Architecture. Roger Sessions writes in "A Better Path to Enterprise Architecture":

    …partitioned iteration delivers high-business-value technical solutions as quickly as possible, and at the lowest possible cost. High value. Minimum time. Lowest Cost. That is the bottom line.

    There is a lot of truth to the claim that the social aspects of successful SOA tend to dominate the technical aspects. Steve Vinoski writes in The Social Side of Services:

    Keep in mind, however, that building service-oriented systems is hard. Even if you get buy-in from the right people, it doesn't mean that actually building and deploying services will then be trivial. After all, you still have to deal with changes in development processes, training, tools, and perhaps new avenues of technical collaboration with other teams. Nevertheless, actively addressing the social side of the equation greatly increases your chances for success with SOA.

    There is certainly an untapped opportunity to apply Web 2.0 social networking technologies in the context and process of implementing SOA across an enterprise. There are a lot of interesting ideas that can emanate from this. As a starting point, one can begin with Lean Software Development principles and apply them to the SOA problem.
