Adam Bien's Weblog

Java EE 6+ Significantly Increases Bugs Per 1000 LoC

The eight-year-old J2EE was designed before annotations and built on interfaces and lots of XML. An average J2EE application used several J2EE patterns to separate the business logic from the necessary plumbing. The (now ancient) J2EE technology also required thousands of lines of XML to tie the separate parts together. The XML, as well as some of the patterns (like Session Facade, Service Activator or Business Delegate), was automatically generated with tools like XDoclet. Some patterns, like Domain Store and Service Locator, were replaced by built-in APIs (e.g. EntityManager, EJB 3.1); others, like DAO, Business Object or DTO, became optional but are not hard to implement where still needed.

With a pragmatic migration from J2EE to Java EE you can delete more than 60% of the old "pattern" code and the entire XML configuration without sacrificing any functionality. The deleted code becomes superfluous because Java EE 6 is "clean" and does not require any further abstractions, indirections or wrapping. The deleted code was also obvious, trivial, mostly generated, and therefore mostly bug-free.

The remaining code usually contains non-obvious business logic and is therefore more likely to contain bugs. This significantly increases the "bugs per kLoC" metric. Although the quality and maintainability of a migrated Java EE 6 application are higher, the ancient J2EE code looks better from the statistical QA angle. So beware of metrics.

…in the worst case you could generate obvious interfaces, XML and indirections just to make the "bugs per kLoC" statistics look better :-).
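The metric effect described above can be illustrated with a back-of-the-envelope calculation (the numbers below are hypothetical, purely for illustration):

```java
public class BugDensity {

    // bugs per 1000 lines of code
    static double bugsPerKloc(int bugs, int linesOfCode) {
        return bugs * 1000.0 / linesOfCode;
    }

    public static void main(String[] args) {
        // Hypothetical J2EE app: 100k LoC (60% of it generated plumbing), 50 real bugs
        System.out.println(bugsPerKloc(50, 100_000)); // 0.5 bugs/kLoC

        // After a pragmatic Java EE 6 migration: the same 50 bugs, but only 40k LoC remain
        System.out.println(bugsPerKloc(50, 40_000));  // 1.25 bugs/kLoC
    }
}
```

The bug count is unchanged, but deleting the trivial, mostly bug-free code more than doubles the density metric.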

[See also chapter "Retired Patterns", page 131 in Real World Java EE Patterns - Rethinking Best Practices book]




Comments:

Hi Adam,
We would need something like
bugs per kloc / functionality per kloc
here. Something like the error density.
I like to look at some of those metrics from Sonar, but one should handle them carefully. Many times metrics are overestimated / misused...

Always a pleasure to read your blog!
Bernd

Posted by 87.79.75.185 on September 28, 2011 at 12:15 AM CEST #

@Bernd,

a "common sense metric" (CSM) would be really nice, but hard to implement. You are right: QA departments tend to rely on absolute numbers without questioning them or without considering the context / big picture.

Thanks for reading my blog + commenting!

adam

Posted by Adam Bien on September 29, 2011 at 01:46 AM CEST #

Adam,
for the past couple of days I have been reading your two books. Some really good views on Java EE. I wish I had the hands-on Java EE experience you have :)

The first time, I read the Night Hacks in "fast mode" to get a general idea of the book. Yes, it is that good that I decided to read it twice, to digest the contents better. The second read left me puzzled, though.

The confusing part is in "HTTPRequestRESTInterceptor at Large":

public class HTTPRequestRESTInterceptor implements Filter {

    private final static Logger LOG = Logger.getLogger(HTTPRequestRESTInterceptor.class.getName());
    // some code omitted for conciseness
    private static long xrayPerformance = -1;
    private static long worstXrayPerformance = -1;
    private static long applicationPerformance = -1;
    private static long worstApplicationPerformance = -1;

You declare the variables "static" to make them per-class, so that they are shared among instances of HTTPRequestRESTInterceptor.

But the Servlet 3.0 spec says that a single Filter implementation instance is created per <filter> declaration by the application server (paragraph 6.2.1). So you don't have many instances sharing those variables anyway.

Next, in the "private static void setupThreadPools()" class method you satisfy the "RejectedExecutionHandler" interface contract by providing an anonymous inner class that overrides the "rejectedExecution()" method:

@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
    int rejectedJobs = nrOfRejectedJobs.incrementAndGet();
    LOG.log(Level.SEVERE, "Job: {0} rejected. Number of rejected jobs: {1}", new Object[]{r, rejectedJobs});
}

All is fine when you call "nrOfRejectedJobs.incrementAndGet()", as that is an atomic access, but the access to the LOG variable is not synchronized, hence not thread-safe. Definitely a race condition, and just wrong.

The same Filter implementation instance is shared among many threads (client requests).
Now, the Servlet spec (servlet-3_0-pfd-spec, paragraph 2.3.3.1) says that the service() method of a Servlet might be accessed and executed concurrently.

I am not sure the spec says the same about the Filter's doFilter() method, but it requires the same thread executing doFilter() to eventually execute the target servlet (the spec, paragraph 6.2.3: "A Filter and the target servlet or resource at the end of the filter chain must execute in the same invocation thread.").

Then, in the doFilter() method you try to measure the performance of the execution:

public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException {
    // some code omitted
    long start = System.currentTimeMillis();
    chain.doFilter(request, response);
    applicationPerformance = (System.currentTimeMillis() - start);
    worstApplicationPerformance = Math.max(applicationPerformance, worstApplicationPerformance);
    sendAsync(uri, referer);
}

Here you get a timestamp with System.currentTimeMillis(), although you really should use System.nanoTime(), as per Josh Bloch:

"For interval timing, always use System.nanoTime in preference to System.currentTimeMillis. System.nanoTime is both more accurate and more precise, and it is not affected by adjustments to the system’s real-time clock."
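For illustration, a self-contained sketch of interval timing with System.nanoTime (the helper method and the stand-in workload are my own, not code from the book):

```java
import java.util.concurrent.TimeUnit;

public class Timing {

    // measure an interval with the monotonic clock, not the wall clock
    static long elapsedMillis(Runnable work) {
        long start = System.nanoTime();
        work.run();
        long elapsedNanos = System.nanoTime() - start;
        // convert only for reporting; keep the raw nanos for arithmetic
        return TimeUnit.NANOSECONDS.toMillis(elapsedNanos);
    }

    public static void main(String[] args) {
        long millis = elapsedMillis(() -> {
            // stand-in for chain.doFilter(request, response)
            double sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += Math.sqrt(i);
        });
        System.out.println("took " + millis + " ms");
    }
}
```

Unlike currentTimeMillis, nanoTime cannot jump backwards when NTP or the user adjusts the system clock, so the measured interval can never come out negative.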

But the confusing thing here is semantics. The accesses to "applicationPerformance" and "worstApplicationPerformance" are neither synchronized nor atomic. Since these are plain 64-bit longs, the code might even be prone to "word tearing", and it is just wrong from the Java Memory Model perspective.
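One possible fix (my own sketch, not code from the book) is to replace the plain static longs with java.util.concurrent.atomic.AtomicLong; accumulateAndGet maintains the running maximum consistently without locks:

```java
import java.util.concurrent.atomic.AtomicLong;

public class Performance {

    private static final AtomicLong applicationPerformance = new AtomicLong(-1);
    private static final AtomicLong worstApplicationPerformance = new AtomicLong(-1);

    static void record(long elapsedMillis) {
        // atomic 64-bit write: no torn reads, safely published to other threads
        applicationPerformance.set(elapsedMillis);
        // lock-free compare-and-set loop keeps the maximum correct under contention
        worstApplicationPerformance.accumulateAndGet(elapsedMillis, Math::max);
    }

    static long worst() {
        return worstApplicationPerformance.get();
    }
}
```

With the plain longs, two racing threads can interleave the read-then-write in Math.max and lose the true maximum; accumulateAndGet retries until its update wins.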

Next, here is the sendAsync( ):

public void sendAsync(final String uri, final String referer) {
    Runnable runnable = getInstrumentedRunnable(uri, referer);
    String actionName = createName(uri, referer);
    executor.execute(new ThreadNameTrackingRunnable(runnable, actionName));
}

My head hurts thinking about what happens when many threads (it is not even funny with 20-30) try to execute this piece of code. Here as well, the "executor" variable is shared among all the threads. What it does is not so deterministic to me :)

Here is the above-mentioned getInstrumentedRunnable():

public Runnable getInstrumentedRunnable(final String uri, final String referer) {
    return new Runnable() {
        @Override
        public void run() {
            long start = System.currentTimeMillis();
            send(uri, referer);
            xrayPerformance = (System.currentTimeMillis() - start);
            worstXrayPerformance = Math.max(xrayPerformance, worstXrayPerformance);
        }
    };
}

So, the executor gets these kinds of Runnables to run, each of them accessing the shared, unsynchronized "xrayPerformance" and "worstXrayPerformance" variables. And on top of the many request-serving threads, the executor spawns some more threads to execute its tasks. I really wouldn't bet on what gets sent by "send(uri, referer)" or on what values end up in those unsynchronized static variables.

And aren't you supposed to put:

    setupThreadPools();
    registerMonitoring();

in the Filter's init(FilterConfig filterConfig) method? That is the style the Servlet spec lays out for us to follow. I haven't had time to test whether static blocks are really free from race conditions on all implementations; it seems like they should be on Java versions from 1.5 onwards. But if you put those calls into init(), the container is required to initialize things consistently, and you get at least some effort by the vendor to do it right for you.
This is just a "style" thing, though, so it is OK I guess.
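For what it's worth, the JLS does guarantee that class initialization (including static blocks) runs exactly once, even when many threads trigger it simultaneously. A self-contained demonstration (my own example, unrelated to the book's code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StaticInitDemo {

    static class Holder {
        static final AtomicInteger initCount = new AtomicInteger();
        static {
            // the JVM runs this exactly once, under an implicit initialization lock
            initCount.incrementAndGet();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[20];
        for (int i = 0; i < threads.length; i++) {
            // the first access to Holder triggers initialization; all other threads
            // block until it completes (JLS 12.4.2)
            threads[i] = new Thread(() -> Holder.initCount.get());
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(Holder.initCount.get()); // prints 1
    }
}
```

So the static blocks themselves are race-free; the remaining argument for init() is lifecycle style and container-managed consistency, as the comment says.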

All in all, I find this a little confusing and a bit misguided. There are many ways this is broken from the Java Memory Model perspective.
I guess it is OK for a pet project, but really, this should be fixed, as some people will just copy-paste or reinterpret it and run it in 24/7 business apps, only to hit "surprising" results. Unfortunately, non-determinism and concurrency go hand in hand, and it is easy to miss the details.

I might be wrong in many ways, because I am just a newbie learning Java and some Java EE. I find your books very good guides to the powerful new features Java EE 5/6 brings. And I don't regret paying for these gems at all.

Thanks, Aidar.

Posted by Aidar on September 30, 2011 at 02:26 PM CEST #
