Reusability, despite what many software practitioners lament, is ubiquitous. Unfortunately, our Computer Science education leaves us poorly equipped to observe it in the wild. From the Computer Science perspective, Reusability is framed by the programming language constructs that support it.
Structured Programming developed the construct of the callable procedure. Rather than leave similar, redundant code cluttered all over the place, structured programming factored it into procedures supported by the call stack. A less sophisticated version of this is the construct of macro expansion. Macro expansion can be quite powerful since it works at the meta-level; however, macros typically cannot make a self-recursive call. The next step in the evolution was to package a group of procedures and their corresponding data structures into a module. Reusability was further improved with the introduction of module inheritance and interface polymorphism. This begot the Object Oriented paradigm that is prevalent today.
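To make the contrast concrete, here is a small Python sketch: a single procedure replaces duplicated code at every call site, and a self-recursive call, which a simple textual macro expansion cannot express, handles arbitrarily nested input.

```python
# Duplicated logic factored into one reusable procedure: one definition,
# many call sites, with the call stack keeping each invocation's locals apart.
def total_price(prices, tax_rate):
    """Return the taxed total for a list of prices."""
    return sum(prices) * (1 + tax_rate)

# Recursion is what textual macro expansion lacks: a macro that expanded
# into a "call" to itself would expand forever. A procedure just calls itself.
def flatten(items):
    """Recursively flatten arbitrarily nested lists into one flat list."""
    flat = []
    for item in items:
        if isinstance(item, list):
            flat.extend(flatten(item))  # self-recursive call
        else:
            flat.append(item)
    return flat
```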
So this kind of Reusability is measured by the number of classes we either invoke or inherit from. It's a naive notion of Reusability. Fortunately, Object Oriented practice has led to the development of frameworks. These are rich Object Oriented constructions that employ multiple Design Patterns to drastically reduce the coding effort in a particular domain. Reusability has thus evolved beyond language constructs: Design Patterns capture the repetitive structures encountered in practice and distill them into reusable design guides.
Before Design Patterns, the popular approach to improving Reusability was the invention of new languages: Fourth Generation (4GL) or even Fifth Generation (5GL) languages that were billed as capturing the business model and logic at such a high level of abstraction that they were divorced from the minutiae of mundane coding. An elegant idea; unfortunately, this approach failed miserably in practice. The 4GLs and 5GLs were too inflexible and applicable only to very limited domains. Today, such languages are called 'Domain Specific Languages', a name that more precisely characterizes their applicability and therefore sets more realistic expectations of their capability.
Domain Specific Languages (DSLs), of course, are not really new. One may argue that the most prevalent form of reuse has been via DSLs. Take, for example, the ubiquitous Relational Database Management System (RDBMS). These are programmed exclusively via a DSL called SQL that is based on the Relational paradigm for managing data. The same can be said of Operating Systems (OS) like Unix, which employ the triad of Processes, Pipes and Files as the basis of their own DSL. There are very few enterprise systems that do not employ an RDBMS. There are even fewer systems that don't employ an OS.
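Python's standard library happens to ship an embedded RDBMS, sqlite3, which makes the point easy to demonstrate: the host language merely hands declarative SQL statements to the engine, and all of the actual data management is expressed in the DSL.

```python
import sqlite3

# The RDBMS is "programmed" entirely through the SQL DSL; the host
# language only ships declarative statements to the engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ada", "eng"), ("Grace", "eng"), ("Edgar", "ops")])

# Filtering, grouping and counting are written in the DSL, not
# re-implemented in the general-purpose host language.
rows = conn.execute(
    "SELECT dept, COUNT(*) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('eng', 2), ('ops', 1)]
conn.close()
```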
Practitioners, however, don't treat an OS like Linux or a database like Oracle as reusable components in the conventional sense. These mega-scale components are platforms that we build on top of. Such systems are very much like frameworks: they do all the heavy lifting, and as a programmer you have well defined plug-in points for customization. These plug-in points are defined by the respective DSLs provided by the platform.
So if we can so comfortably accept Linux or Oracle as part of a solution, shouldn't we be more accepting of other platforms? The answer may be a resounding no, because we've been burned too often by monolithic platforms that tried to do everything and the kitchen sink. The common observation goes like this: "you don't have SAP fit your business, rather you fit your business to SAP". Just like the 4GLs of old, there's a high probability that the kitchen-sink approach makes too many assumptions that render it inapplicable to your business.
The key to successful Reusability is selecting the "right level" of abstraction. The right level of abstraction is orthogonal to other abstractions. So, for example, I don't need an Oracle-specific OS to run Oracle. The process management infrastructure of an OS is independent of the database management constructs of an RDBMS. Some interplay between the abstractions is realistically unavoidable. However, a lower-layer abstraction makes possible such things as the OS portability layer found in most RDBMS implementations. Java, for example, provides such a portability layer, so that any Java-based RDBMS is portable to any OS that Java supports.
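The orthogonality argument can be sketched in a few lines of Python (the names here, FileStore, TinyTable and so on, are invented for illustration): the toy "database" below codes against an abstract storage interface, so swapping the OS-facing backend never touches the database logic.

```python
from abc import ABC, abstractmethod

class FileStore(ABC):
    """A minimal OS-portability interface (names are illustrative)."""
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class InMemoryStore(FileStore):
    """One concrete backend; a PosixStore or Win32Store would be others."""
    def __init__(self):
        self._blobs = {}
    def write(self, key, data):
        self._blobs[key] = data
    def read(self, key):
        return self._blobs[key]

class TinyTable:
    """A toy 'RDBMS' layered on the abstraction, oblivious to the OS."""
    def __init__(self, store: FileStore):
        self.store = store
    def put(self, row_id: str, value: str):
        self.store.write(row_id, value.encode())
    def get(self, row_id: str) -> str:
        return self.store.read(row_id).decode()

# TinyTable never mentions a concrete OS facility; only the backend does.
table = TinyTable(InMemoryStore())
table.put("1", "hello")
```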
There are many frameworks/platforms out there that provide their abstractions at the "right level". Here are a few worth considering: "Document Management Systems", "Workflow Engines", "Rules Engines" and "Identity Management Systems". These can be integrated with the many other frameworks out there that support integration, like "EAI", "ETL" and "Portals". Although the list here is exclusively open source, there are also many closed source alternatives that are equally malleable.
There are many key aspects to consider in the selection (see "How to Choose an Open Source Project") of a mega-component. The first item that needs to be nailed down, however, is the integration strategy (see "10 Commandments of SOA Salvation"). The integration strategy should preserve as much of what exists as possible while having sufficient flexibility to address the core problem at hand. In other words, it maintains isolation between components but is able to cohesively bind mega-components into a well coordinated solution. Fortunately, web based systems, by virtue of providing an introspectable web interface, offer a good enough component model to build a foundation on. This is in stark contrast with the database-integrated and client-server systems of yesteryear. Systems integrated via a database have too many implicit means of interaction that are usually intractable. Client-server systems have ossified user interfaces that cannot be introspected and therefore cannot be modified on demand. It is no surprise that mashable interfaces have been all the rage; in fact, you should consider the browser itself a mega-component.
A mega-component should be selected such that its integration is compatible with the integration strategy. This requires an understanding of the mega-component's component model. Does it have a component model at all? Does it have a framework to plug in customizations via code or DSLs? Does it expose public remoteable APIs? It is important to characterize the degree of reusability of each component. At a minimum, a remoteable API should exist or be achievable (see: Screenscraping).
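As a sketch of that minimal degree of reusability, the following Python uses only the standard library's html.parser to scrape values out of a status page, in effect manufacturing a remoteable API where none is exposed. The page markup and the "total" CSS class are invented for the example; a real scraper would fetch the page over HTTP first.

```python
from html.parser import HTMLParser

class TotalScraper(HTMLParser):
    """Pull order totals out of a (hypothetical) HTML status page."""
    def __init__(self):
        super().__init__()
        self._in_total = False
        self.totals = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag.
        if tag == "td" and ("class", "total") in attrs:
            self._in_total = True

    def handle_data(self, data):
        if self._in_total:
            self.totals.append(float(data))
            self._in_total = False

page = ('<table><tr><td class="total">19.99</td></tr>'
        '<tr><td class="total">5.00</td></tr></table>')
scraper = TotalScraper()
scraper.feed(page)
print(scraper.totals)  # [19.99, 5.0]
```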
To summarize, I am advocating a development approach that does not myopically assume the only mega-components worth considering are the database and the OS. There are other mega-components out there that one must perform due diligence on. Don't have the arrogance to believe that nobody else has built the functionality you are building. In all likelihood, someone already has, and it will take you less time learning and reusing what already exists than starting from scratch. The software industry is sufficiently mature that the mega-components are out there, if you are in fact willing to take a look.