A New Kind of Tiered Architecture

The 2-tier (web + database) and 3-tier (web + app + database) architectures are the most prevalent today. There is, however, an emerging architecture that is gaining traction, one I am coining the “Latency-tier architecture”. In this architecture, a cache sits between the customer-facing front-end web components and multiple internal back-end web components:

[Figure: the cache tier sitting between the front-end web tier and the back-end web tiers]

The architecture has four relevant tiers: a front-end web container, a middle cache which I will call the Latency Engine, a collection of silo’ed back-end web applications, and the data stores behind them. The responsibility of the front-end web container is to provide the aggregate user experience (or aggregate web API). The Latency Engine’s responsibility is to retrieve and cache the bits of data required by the front-end container. The back-end web applications are either Web UIs or Web Services.
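
To make the responsibilities concrete, here is a minimal Python sketch of the flow between the tiers (all names and URLs are hypothetical illustrations, not drawn from any real implementation): the front-end container asks the Latency Engine for a bit of data, and the engine either serves it from its cache or pulls it from the owning back-end silo.

    import time
    import urllib.request

    # Hypothetical sketch of the Latency Engine tier: a TTL cache that
    # pulls data from silo'ed back-end web applications on demand.
    class LatencyEngine:
        def __init__(self, ttl_seconds=60):
            self.ttl = ttl_seconds
            self.cache = {}  # url -> (fetched_at, body)

        def get(self, url):
            entry = self.cache.get(url)
            if entry and time.time() - entry[0] < self.ttl:
                return entry[1]                        # cache hit: no back-end call
            body = urllib.request.urlopen(url).read()  # cache miss: pull from silo
            self.cache[url] = (time.time(), body)
            return body

    # The front-end container aggregates bits of data through the engine.
    engine = LatencyEngine()
    # page_fragment = engine.get("http://crm.internal/customer/42")  # invented URL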

Let’s take a step back and look at some of the key motivators of this new architecture. I will consult the ‘10 Commandments of SOA Salvation’:

  1. Thou shalt not disrupt the legacy system.
  2. Thou shalt avoid massive overhauls. Honor incremental partial solutions instead.
  3. Thou shalt worship configuration over customization.
  4. Thou shalt not re-invent the wheel.
  5. Thou shalt not fix what is not broken.
  6. Thou shalt intercept or adapt rather than re-write.
  7. Thou shalt build federations before attempting any integration.
  8. Thou shalt prefer simple recovery over complex prevention.
  9. Thou shalt avoid gratuitously complex standards.
  10. Thou shalt create an architecture of participation. The social aspects of successful SOA tend to dominate the technical aspects.

What we are observing in the enterprise (and obviously on the web) is a growing legacy of web-based applications, and any critical application will require a web interface. These legacy applications will always be owned and operated by someone else’s fiefdom. In other words, don’t try to change the organization and its internal politics to fit a new architecture. Making real progress requires a non-disruptive, federated approach to architecture.

How does this solve the problem differently from a Portal server? The differentiator, and the reason the Latency Engine needs to be in a separate tier, is that there may be multiple kinds of front-end applications in the enterprise. Furthermore, to ensure usability one has to carefully manage the latency of the multiple downstream web services; every downstream web service has its own availability and latency characteristics. Finally, the Latency Engine acts as a mediator between the formats expected by the front end and those provided by the downstream web services. In Design Patterns parlance, this tier acts as a Facade.
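
One way to picture that Facade role in code is a fan-out with a per-back-end latency budget. The sketch below is illustrative only, with invented internal endpoints: the front end receives a partial result rather than waiting on the slowest silo.

    import concurrent.futures
    import urllib.request

    # Sketch of the Facade role: fan out to several back-end services in
    # parallel, each under a latency budget; slow or dead silos degrade
    # to None instead of stalling the aggregate page.
    BACKENDS = {                     # hypothetical internal endpoints
        "profile": "http://profile.internal/u/42",
        "orders":  "http://orders.internal/u/42",
    }

    def fetch(url, timeout):
        return urllib.request.urlopen(url, timeout=timeout).read()

    def aggregate(timeout=0.25):
        result = {}
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fetch, url, timeout)
                       for name, url in BACKENDS.items()}
            for name, future in futures.items():
                try:
                    result[name] = future.result(timeout=timeout)
                except Exception:        # timed out or back-end error
                    result[name] = None  # front end renders without this piece
        return result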

From the operational perspective, this architecture scales by adding capacity at any of the tiers. The cache replicates naturally because it is based on a pull model rather than a push model (as seen in database replication architectures). The Latency Engine serves as a central point where one can monitor the responses of back-end web services. Cache peering protocols may be employed to scale the Latency Engine tier.
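
A small sketch of why the pull model replicates naturally (again with made-up names): each Latency Engine node fills its own cache on demand, so adding a node requires no push-style synchronization, and the fetch path doubles as the monitoring point mentioned above.

    import time

    # Each node fills its own cache on demand (pull), so a newly added
    # node needs no state pushed to it; the fetch path is also a natural
    # place to record per-back-end response times.
    class MonitoredEngine:
        def __init__(self, fetch_fn):
            self.fetch_fn = fetch_fn   # how this node calls a back-end
            self.cache = {}
            self.timings = {}          # key -> observed back-end latencies

        def get(self, key):
            if key in self.cache:
                return self.cache[key]
            start = time.time()
            body = self.fetch_fn(key)  # pull on demand, on this node only
            self.timings.setdefault(key, []).append(time.time() - start)
            self.cache[key] = body
            return body

    # Two independently added nodes; neither pushes state to the other.
    node_a = MonitoredEngine(lambda key: "payload for " + key)
    node_b = MonitoredEngine(lambda key: "payload for " + key)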

Now, as much as I would like you to believe that this architecture is entirely new and springs uniquely from my creative mind, it actually draws directly from a talk by Mark Nottingham on leveraging the Web of Services at Yahoo.

Yahoo employs Squid for their Latency Engine, but what they do with it is far from pedestrian. They have a lot going on under the covers beyond basic content caching. Load-balancing logic selects the healthiest back-end. Multiple identical requests are coalesced into a single request to the back-end. When a back-end returns an error, either that error is cached or a stale cache entry is returned. Cached state is returned to reduce the latency of accessing the back-end, and stale state can be invalidated through out-of-band channels.
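
To make two of these tricks concrete, here is a rough Python sketch (my own reconstruction, not Yahoo’s or Squid’s actual code) of request coalescing combined with serving stale data on back-end errors.

    import threading

    class CoalescingCache:
        # Sketch of two tricks: identical concurrent requests ride a
        # single back-end call, and a back-end error falls back to the
        # last good (possibly stale) cache entry.
        def __init__(self, fetch_fn):
            self.fetch_fn = fetch_fn
            self.cache = {}        # key -> last good value (may be stale)
            self.inflight = {}     # key -> Event set when the fetch completes
            self.lock = threading.Lock()

        def get(self, key):
            with self.lock:
                event = self.inflight.get(key)
                if event is None:
                    event = self.inflight[key] = threading.Event()
                    leader = True
                else:
                    leader = False
            if not leader:
                event.wait()                         # coalesce: reuse the
                return self.cache.get(key)           # leader's single fetch
            try:
                self.cache[key] = self.fetch_fn(key)  # one back-end call
            except Exception:
                pass                 # on error, keep serving the stale entry
            finally:
                with self.lock:
                    del self.inflight[key]
                event.set()
            return self.cache.get(key)

Squid provides the equivalent behavior at the HTTP layer; the same ideas have since been written up as the stale-while-revalidate and stale-if-error Cache-Control extensions.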

One could use Squid or build a new kind of architectural component (I would of course prefer to explore the latter). This component would employ a generalization of caching logic using the kind of temporal logic typically found in Event Driven engines. Logic deployed on the Latency Engine can be considered as Aspects that contribute concerns. This addresses the reality that logic in the Latency Engine will require coordinated development and deployment across the front-end and Latency Engine tiers.
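
As a sketch of what “logic as Aspects” might look like, consider advice functions contributed around a single generic fetch path. The hooks below are hypothetical illustrations of concerns one might deploy on the Latency Engine.

    # Independent concerns (auditing, transformation, validation) contribute
    # advice around one generic fetch path instead of being hard-coded in it.
    def with_aspects(fetch_fn, before=(), after=()):
        def wrapped(key):
            for advice in before:
                advice(key)                  # e.g. audit or validate the request
            value = fetch_fn(key)
            for advice in after:
                value = advice(key, value)   # e.g. annotate or transform the data
            return value
        return wrapped

    # Hypothetical concerns deployed alongside the core caching logic.
    audit_log = []
    fetch = with_aspects(
        lambda key: "payload for " + key,
        before=[lambda key: audit_log.append(("requested", key))],
        after=[lambda key, value: value.upper()],
    )
    # fetch("inventory/42") -> "PAYLOAD FOR INVENTORY/42", plus an audit entry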

The Latency Engine may employ a mediation engine to manage divergences between the interfaces shared by the front-end and back-end tiers. A normalized data format, say one based on Atom, can be employed to tackle the majority of use cases so that front-end logic is insulated from a proliferation of formats. Finally, one can foresee support for SaaS-based plugins.
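
A minimal sketch of that mediation, assuming two invented back-end payload shapes, normalized into an Atom-flavored entry (title, updated, content) so the front end always sees a single shape:

    # Each silo returns its own shape; the mediator maps everything to an
    # Atom-flavored entry. Both payload shapes below are invented.
    def to_entry(source, payload):
        if source == "crm":        # e.g. {"cust_name": ..., "updated": ...}
            return {"title": payload["cust_name"],
                    "updated": payload["updated"],
                    "content": payload}
        if source == "billing":    # e.g. {"invoice_id": ..., "ts": ...}
            return {"title": "Invoice " + str(payload["invoice_id"]),
                    "updated": payload["ts"],
                    "content": payload}
        raise ValueError("unknown source: " + source)

    entry = to_entry("crm", {"cust_name": "Acme Corp", "updated": "2008-04-01"})
    # -> {"title": "Acme Corp", "updated": "2008-04-01", "content": {...}}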

Latency Engines already exist today, but it will take a while for the tools to reach the level of maturity needed to support this kind of architecture. A couple of vendors have a head start in this space (see Cisco Nexus 7000, Zeus ZXTM), and I fully expect that in the near future a Latency Engine will be a typical fixture in most enterprise deployments.

