A Good Architecture Is All About Probability - Or It Is Sufficient To Be Good Enough

If you managed to create a perfect architecture, you probably missed your customer's expectations, or at least unnecessarily burned some money. No customer pays you to build perfect architectures - it is sufficient to be good enough. Every application consists of an important, domain-specific kernel and some supportive, less interesting services like master data management or configuration (usually they are not the primary value for most applications). All domain-specific logic is the added value for your customer; everything else has to exist but is less important and can therefore be developed more efficiently, with less effort. The architectural decisions are influenced by non-functional requirements like scalability, performance, testability, flexibility, maintainability and dozens of other "-ilities".

Usually the customer implicitly expects a high fulfillment of all non-functional requirements, which is impossible in practice - non-functional requirements influence each other. E.g. layering negatively influences performance, scalability might hurt performance as well, and modularization can increase the overall complexity. Nothing comes for free. It is suboptimal to apply the same strategy to all subsystems (like the domain and the supportive services), because e.g. multiple layers for CRUD use cases not only increase the complexity and hurt performance, but even degrade maintainability. "Hacked", monolithic domain logic is even worse. The focus on a particular non-functional requirement, like e.g. scalability, can have a huge impact and has to be well thought out. The consequence of scalability is statelessness, and so a rather procedural programming model, which not only increases the complexity but even obfuscates the domain logic.
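
A minimal sketch of that scalability trade-off (the names OrderService, Order and Item are illustrative assumptions, not taken from a real project): a stateless boundary keeps no conversational state between calls, so every invocation has to carry its full context as parameters - a rather procedural style, even if the domain logic itself stays in the entities.

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // Scaling out pushes toward statelessness: nothing is remembered between
    // calls, so the caller has to re-identify the order on every invocation.
    @Stateless
    public class OrderService {

        @PersistenceContext
        EntityManager em;

        public void addItem(long orderId, Item item) {
            Order order = em.find(Order.class, orderId);
            order.add(item); // the domain logic still lives in the entity
        }
    }

Order and Item are assumed to be JPA entities; the point is only that the service API becomes id-plus-data oriented instead of object oriented.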

A reasonable architecture does not try to realize all subsystems of an application perfectly, but rather recommends pragmatic solutions for a given problem. There is another important factor: the likelihood of a certain event. E.g. DAOs were originally intended to abstract from different data stores, but what if your database will likely live longer than your application? Is it really beneficial to be totally client independent, knowing that it will always be a web client, a Flash or an iPhone application? How likely is a change in a certain part of your system?
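
To make the DAO question concrete, here is a sketch with hypothetical names (CustomerDao, JpaCustomerDao, Customer): the extra interface and its single JPA implementation only pay off if a switch of the data store is actually probable - otherwise injecting the EntityManager directly into the boundary is good enough.

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // The classic indirection: an interface whose only purpose is to hide JPA.
    public interface CustomerDao {
        Customer find(long id);
        Customer save(Customer customer);
    }

    // In most projects this remains the one and only implementation -
    // pure delegation to the EntityManager.
    @Stateless
    public class JpaCustomerDao implements CustomerDao {

        @PersistenceContext
        EntityManager em;

        @Override
        public Customer find(long id) {
            return em.find(Customer.class, id);
        }

        @Override
        public Customer save(Customer customer) {
            return em.merge(customer);
        }
    }

If the database outlives the application, the interface is dead weight; if a store change is genuinely likely, the indirection earns its keep.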

Unfortunately, to be able to estimate the probability of an event, you need domain knowledge and some experience.

The even more important question is: "What happens in the worst case?". How long will it take to introduce another type of UI, replace the database or switch the application server? If this can happen in a reasonable amount of time, it's ok. Whether the amount of time is reasonable or not should be decided by the customer, not the architect :-)...

Many J2EE architectures were entirely exaggerated. They were intended for all, even very unlikely, cases. The result was many dead layers with lots of transformations and indirections. This introduced additional complexity, obfuscated the actual business logic and missed the point. The problem was generic, stereotypical architectures, which were developed once and applied to every possible use case. Even a guestbook was developed with at least 15 layers :-). So keep it small, keep it simple, and focus on the essential capabilities of your application.
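
For illustration, such a guestbook could be good enough with a single entity and a single boundary. This is only a sketch with assumed names (GuestbookEntry, Guestbook), not a prescription:

    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    @Entity
    public class GuestbookEntry {
        @Id @GeneratedValue
        private Long id;
        private String author;
        private String message;
        // getters and setters omitted for brevity
    }

    // One boundary, no DTOs, no additional layers - good enough here.
    @Stateless
    public class Guestbook {

        @PersistenceContext
        EntityManager em;

        public void sign(GuestbookEntry entry) {
            em.persist(entry);
        }

        public List<GuestbookEntry> entries() {
            return em.createQuery("SELECT e FROM GuestbookEntry e", GuestbookEntry.class)
                     .getResultList();
        }
    }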

[In "Real World Java EE Patterns" I described pragmatic Java EE architectures with a minimal set of patterns] 

Comments:

A reasonable architecture evolves from the tests you have written after a user has specified some functionality in a story. You need to refactor mercilessly as you try to make your new tests pass.

Funnily enough, I think that non-functional requirements can be reached through additional stories or, otherwise, through constraints in development.

Can you demonstrate that an extra "layer" - particularly one that doesn't involve serialisation or remote access - actually always degrades performance?

Another thing: 80% of code's cost is maintenance. Increasing the complexity to maintain a class - and I contend that direct database access in the service class will always increase its complexity - will increase the cost of maintenance. Adding a functionally-driven layer that the service class can use increases intentional programming and aids readability without imposing any measurable performance penalty, unless you do something amazingly dumb.

Posted by Scot Mcphee on February 10, 2009 at 08:29 AM CET #

Hello Adam,

My question is pretty off-topic, but recently I had a discussion about whether using Entities is bad object-oriented design or not. The question was why Entities should not contain business logic. The main argument against this approach was that it leads to structured programming and not to clean object-oriented design. What do you think about this topic?

Thanks, Mike

Posted by Mike Gr. on February 11, 2009 at 10:27 AM CET #

Very nice article.

Posted by Slim Tebourbi on February 18, 2009 at 10:41 AM CET #

Good post. I agree, and think that in 99% of cases encapsulating business logic is more beneficial than harmful. A super-generic architecture is much more complicated.

Posted by Giovanni Silva on July 01, 2012 at 05:56 PM CEST #
