Model-centric architectures aim to produce architectures which are simple, maintainable, flexible and fast. This is achieved by following the seven rules described below. For a quick overview, see Model centric architectures in brief.
Many J2EE architects advocate building applications with no central object model at all. This is bad advice:
Some argue that they don't need a model because they have no business logic, only data travelling between a GUI and a database. Fine, that may be true (although often it isn't), but then there should certainly be a model at the meta level. For instance, what kind of data are we working with? Is it forms? Then make a Form class owning Fields. Data are never just "any data".
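The meta-level model can be sketched in a few lines. The class and field names below (Form, Field) are illustrative assumptions, not part of any framework; the point is simply that "just data" still has a shape worth naming in code:

```java
import java.util.ArrayList;
import java.util.List;

// Meta-level model: the data is "forms", so the code says so.
class Field {
    final String name;
    String value;

    Field(String name) { this.name = name; }
}

class Form {
    final String title;
    private final List<Field> fields = new ArrayList<>();

    Form(String title) { this.title = title; }

    Field addField(String name) {
        Field f = new Field(name);
        fields.add(f);
        return f;
    }

    List<Field> fields() { return fields; }
}
```

Even this trivial model gives the application one place to attach validation, rendering and persistence behaviour later.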
Every enterprise application works with a spectrum of business information, ranging from very fast-changing transactional data to data that changes as slowly as the source code itself, and beyond. The slow-changing data is slow-changing precisely because it contains the definitions, types, categories, processes and rules of the application: the knowledge layer.
When there are types, there are instances of the types. Most of the data in an application is typically not knowledge but operational data. Such data is often fast-changing and transactional in nature.
There is usually a many-to-one relationship between the operational and knowledge layers: there are many employees of the same category, many items of the same type, and so on. The knowledge and operational levels may or may not be implemented by different classes.
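The many-to-one relationship can be made explicit in code. A minimal sketch, with hypothetical names, in which the knowledge level (ItemType) owns a rule that the many operational instances (Item) consult:

```java
// Knowledge level: slow-changing definitions and rules.
class ItemType {
    final String name;
    final double taxRate; // a "rule" owned by the knowledge level

    ItemType(String name, double taxRate) {
        this.name = name;
        this.taxRate = taxRate;
    }
}

// Operational level: fast-changing, transactional instances.
// Many Items share one ItemType.
class Item {
    final ItemType type;
    final double price;

    Item(ItemType type, double price) {
        this.type = type;
        this.price = price;
    }

    double priceWithTax() {
        return price * (1 + type.taxRate);
    }
}
```

Changing the tax rate on one ItemType changes the behaviour of every Item of that type, which is exactly the leverage the knowledge layer provides.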
Explicitly identifying the knowledge layer in an application has many benefits:
You will discover knowledge layer concepts if you look for
It is an unavoidable fact, in today's state of the art, that the number of objects reachable through hard (language-level) references must be limited. This is because the object graph is too large to be held in memory, and because parts of it must be distributed, edited, cached and persisted independently of other parts. Partitioning is done by replacing a hard (normal Java) reference with an id.
Object graphs are often partitioned in an unstructured fashion. Unstructured partitioning leads to fewer opportunities for simple lock management and caching, and generally less control over system state. One solution to this problem is to keep the unstructured partitioning but forbid cross-transaction state. This works sometimes, but it is always slower than the alternatives, often inconvenient, and does not work at all with fat clients. The other solution is to structure the object graph partitioning.
The goal of a structured partitioning is to gain control over the hard references to each object, so that we can control the contexts in which the objects participate in long (locking-based) and short (database-based) transactions, and make sure that all foreign references to objects refer to the correct version of those objects. The way to achieve this goal is to group hard references into edit bubbles.
An edit bubble is a graph of transactional objects reachable through hard references. An edit bubble has exactly one identity, which is used to reference the edit bubble from outside. The edit bubble object containing the id is the main object of the edit bubble, while the objects it can reach by hard references are the dependent objects in the edit bubble.
An object graph is partitioned into edit bubbles if:

This may sound wobbly and theoretical, but in practice it is pretty easy. Say we have a Person object. The Person has an id and an Address, which is an id-less object referred to by the Person. In this case, the Person and Address together form the edit bubble, the Person being the owner of the bubble and the Address being a dependent (of Person). Now, say we also have a Company, which has such persons as employees. The Company is a separate edit bubble which has no collection of Persons, but instead a collection of person ids, which we may resolve into Persons selectively as needed.
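The Person/Company example can be sketched directly. Hard references stay inside a bubble; references between bubbles go through ids. The names and the choice of long as the id type are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Dependent object: no id of its own, reachable only through its owner.
class Address {
    final String street;
    Address(String street) { this.street = street; }
}

// Main object of its edit bubble: owns the id and the dependents.
class Person {
    final long id;
    Address address; // hard reference, inside the bubble

    Person(long id, Address address) {
        this.id = id;
        this.address = address;
    }
}

// A separate edit bubble: refers to persons by id, never by hard reference.
class Company {
    final long id;
    final List<Long> employeeIds = new ArrayList<>();

    Company(long id) { this.id = id; }
}
```

Resolving an employee id into a Person is then an explicit, controllable step, which is where locking, caching and versioning can be enforced.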
Making a partition is about finding a reasonable match between the edit bubbles and the object pieces you want to:
Apart from these points, we want to make edit bubbles as large as possible, because the language-level references inside an edit bubble are more convenient to use than soft references. It is of course not possible to find a division which makes all three cases listed above optimal, but the alternative of making ad hoc partitions which are believed to be optimal for a particular usage does not scale in complexity as the number of cases to consider increases, and actually produces poorer performance overall because less caching is possible. Note also that it is possible to make optimizations with edit bubbles, for instance loading edit bubbles likely to be needed soon when one is loaded. In fact, a structured partitioning makes such optimizations safer and simpler.
Many architectures, J2EE and others, insist that the object model should not escape the server. This is because they mix up infrastructure layering and business logic layering. While this provides for good PowerPoint diagrams, it is bad architecture. Partitioning business logic along infrastructural layers instead of functional areas invariably leads to business logic duplication. This is unavoidable because all the layers really work with the same business entities. If there are accounts on the server, the client works with accounts as well. If the server's Account is not available, another one has to be made (or, more commonly, the same Account logic must be spread around in some other client-side objects).
A good architecture instead treats infrastructure layers and business areas as perpendicular: each business area spans all the infrastructural layers.
Why do customers accept that their business logic can only be run remotely inside an application server? Whether or not the code is run remotely has nothing to do with the business logic. And why do developers accept writing business code which cannot be unit tested without an application server and a database, both of which have nothing to do with the logic in question?
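Business logic that is free of infrastructure can be exercised with a plain assertion: no server, no database, no deployment step. A hedged sketch, assuming a hypothetical Account class (not from any J2EE API):

```java
// Pure business logic: nothing here knows about servers or persistence.
class Account {
    private long balanceCents;

    void deposit(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceCents += cents;
    }

    void withdraw(long cents) {
        if (cents > balanceCents) throw new IllegalStateException("insufficient funds");
        balanceCents -= cents;
    }

    long balanceCents() { return balanceCents; }
}
```

A test for this class starts in milliseconds and fails with a stack trace pointing at the logic, not at a container configuration.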
Infrastructure, in this context, is the part of an application which can be written without asking someone with business knowledge what it should do: persistence, distribution, transactions, caching, life-cycle management, pooling and so on. Isn't it obvious that such functionality should be independent of the business logic? Of course it is; here are some reasons why:
And please remember that J2EE is infrastructure. Every bit of it, unless your business is application servers.
Minimizing the interface between the business logic and the infrastructure (rule 5) without making this interface general and flexible would be a pretty lousy idea. This interface is the most important thing in any application, from an architectural point of view. To enable infrastructure changes without business logic changes, the interface (including its usage protocol) should accommodate not only today's needs but also the needs of tomorrow. Please get this right: the infrastructure should not support features prematurely, but the interface should not require changes to support new features. This is difficult to do the first time, but often no more work than less flexible interfaces. In any case, there is no better place to spend some quality thinking.
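One common way to keep this interface both small and stable is to let the business code see only an abstract store keyed by id, so that persistence, caching and distribution can change behind it. The names below (BubbleStore, InMemoryStore) are hypothetical, a sketch of the idea rather than a prescribed design:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The one interface business logic sees; infrastructure implements it.
// New infrastructure features (caching, remoting) should fit behind it
// without requiring changes to this interface.
interface BubbleStore<T> {
    Optional<T> find(long id);
    void store(long id, T bubble);
}

// The simplest possible implementation, e.g. for unit tests.
class InMemoryStore<T> implements BubbleStore<T> {
    private final Map<Long, T> map = new HashMap<>();

    public Optional<T> find(long id) { return Optional.ofNullable(map.get(id)); }
    public void store(long id, T bubble) { map.put(id, bubble); }
}
```

Because the business logic depends only on BubbleStore, the in-memory version serves unit tests while a database-backed or distributed version serves production, with no change on the business side.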
Flexibility which is never used is wasted work, but worse still is unnecessary flexibility which makes something more complicated to use. Design is about usability: the common tasks should be easy to accomplish. Too much flexibility often makes the common things as hard to do as the obscure ones. This is the case, for instance, with most interfaces in the J2EE spec. The general infrastructure does not need to accommodate the obscure cases at all; the extra work of handling such cases is outweighed many times over by having the best possible support for the common tasks.