I read Greg Luck's interesting blog entry about issues in the Ruby on Rails framework. Most of the issues discussed in his article are non-functional problems in Ruby on Rails, like performance, scalability, the lack of prepared statements etc. Although these facts are critical in production, the Ruby on Rails platform will surely improve over time. I evaluated Ruby on Rails in March this year and found a more interesting issue:
Rails' persistence is based on the Active Record pattern: an object consists of state and behavior and is mapped directly to the database. The specific implementation in Rails does not seem to have a first-level cache (FLC). An FLC is used in most persistence frameworks like Hibernate, CMP 2.0, JDO etc. to ensure consistency inside a transaction.
Without an FLC you receive a new copy of an object on every read, even inside the SAME transaction:
instance1 = find(1); instance2 = find(1); instance3 = find(1);
// instance1, instance2, instance3 are copies of the same record in the database.
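To make this behavior concrete, here is a minimal Java sketch. The `Person` class, the `find()` helper and the in-memory `table` are hypothetical stand-ins for a persistence layer without an FLC, not the Rails or Hibernate API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a repository WITHOUT a first-level cache.
// Every find() materializes a fresh object from the stored row.
public class NoCacheDemo {

    static class Person {
        final int id;
        final String name;
        Person(int id, String name) { this.id = id; this.name = name; }
    }

    // simulated database table: id -> row data
    static final Map<Integer, String> table = new HashMap<>();

    static Person find(int id) {
        return new Person(id, table.get(id)); // a new copy on every read
    }

    static boolean copiesAreDistinct() {
        table.put(1, "Duke");
        Person instance1 = find(1);
        Person instance2 = find(1);
        Person instance3 = find(1);
        // three distinct objects represent the same database record
        return instance1 != instance2 && instance2 != instance3;
    }

    public static void main(String[] args) {
        System.out.println(copiesAreDistinct()); // prints true
    }
}
```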
This is not a big problem in "read only" or master data management applications. But as the business logic becomes more complex, you have to track the instances inside a transaction; otherwise the data becomes inconsistent and "lost updates" can happen inside a single transaction:
instance1.setA(newValue); update(instance1); // A is persisted
instance2.setB(newValue); update(instance2); // overrides A with instance2's old data
update(instance3); // overrides A and B with the initial (old) data
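The lost-update sequence above can be simulated end to end. This is a hypothetical sketch (the `Person` class, `find()`, `update()` and the in-memory `table` are illustrative only): without an FLC every copy carries the full row state, so a later `update()` writes stale columns back over fresh ones.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the lost-update scenario: without an FLC every copy
// carries the full row state, so a later update() writes back stale columns.
public class LostUpdateDemo {

    static class Person {
        final int id;
        String a, b;
        Person(int id, String a, String b) { this.id = id; this.a = a; this.b = b; }
        Person copy() { return new Person(id, a, b); }
    }

    static final Map<Integer, Person> table = new HashMap<>(); // simulated database

    static Person find(int id) { return table.get(id).copy(); } // no FLC: fresh copy
    static void update(Person p) { table.put(p.id, p.copy()); } // writes ALL columns

    static String runScenario() {
        table.put(1, new Person(1, "oldA", "oldB"));
        Person instance1 = find(1);
        Person instance2 = find(1);
        Person instance3 = find(1);

        instance1.a = "newA";
        update(instance1); // A is persisted
        instance2.b = "newB";
        update(instance2); // overrides A with instance2's stale "oldA"
        update(instance3); // overrides A and B with the initial (old) data

        Person row = table.get(1);
        return row.a + "/" + row.b;
    }

    public static void main(String[] args) {
        System.out.println(runScenario()); // prints oldA/oldB - both updates are lost
    }
}
```

The final row ends up with the initial values, even though two updates were written inside the same transaction.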
This can occur especially when several developers are working on the same component. There are some workarounds which ensure consistency inside a transaction:
- sorted transactions: you have to read first, then write (hard to establish in the real world)
- an intra-transactional cache: every object that is read has to be registered in this cache (actually our FLC). A read operation checks the cache first, then the persistent store. Problem: what happens if someone changes the database data directly? (Answer: your cache becomes stale :-))
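The second workaround can be sketched as an identity map in Java. This is again a hypothetical sketch (not Hibernate's actual Session implementation): every read registers the object in a per-transaction cache, so repeated reads return the SAME instance instead of a fresh copy.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the intra-transactional cache (identity map):
// every read registers the object, so repeated reads inside the same
// transaction return the SAME instance instead of a fresh copy.
public class IdentityMapDemo {

    static class Person {
        final int id;
        String name;
        Person(int id, String name) { this.id = id; this.name = name; }
    }

    static final Map<Integer, String> table = new HashMap<>();           // simulated database
    static final Map<Integer, Person> firstLevelCache = new HashMap<>(); // per-transaction cache

    static Person find(int id) {
        // read the cache first, then the persistent store
        return firstLevelCache.computeIfAbsent(id, key -> new Person(key, table.get(key)));
    }

    static boolean sameInstance() {
        table.put(1, "Duke");
        return find(1) == find(1); // one identity per record inside the transaction
    }

    public static void main(String[] args) {
        System.out.println(sameInstance()); // prints true
    }
}
```

The cache would be created at transaction begin and discarded at commit or rollback, which limits (but does not eliminate) the staleness problem mentioned above.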
I wonder whether these problems can also occur in Ruby on Rails, or whether they are specific to the J2EE (JEE) platform.