The cure for RoR Active Record and transactions

There still seems to be some confusion about the possible inconsistencies when using ActiveRecord. I got another good example from Behi (thank you):


Suppose that you have loaded an object from the database using an ORM tool and 
have modified one of its fields:

Employee e = ORM.findEmployee(1);
e.setSalary(e.getSalary() * 1.1);

Now in the same transaction, suppose you have loaded the same employee once
again, this time modifying another of its fields:

Employee e2 = ORM.findEmployee(1);

Now at the end of the transaction you save e and e2:

ORM.save(e);
ORM.save(e2);

Now depending on the ORM you use, the change to the salary field of the
employee might be lost! (In RoR it is lost; in EJB 3 it is not.)
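The scenario above can be reproduced with a tiny in-memory "ORM" that, like ActiveRecord, materializes a fresh instance on every find. All class and method names here are hypothetical; this is only a sketch of the lost-update mechanics, not any real framework's API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical employee entity with public fields for brevity.
class Employee {
    int id;
    double salary;
    Employee(int id, double salary) { this.id = id; this.salary = salary; }
}

// Hypothetical ORM without an identity map: every find returns a NEW copy.
class NaiveOrm {
    // Simulated database table: id -> salary
    private final Map<Integer, Double> table = new HashMap<>();

    NaiveOrm() { table.put(1, 1000.0); }

    Employee findEmployee(int id) {
        // A new, independent instance is created on each call.
        return new Employee(id, table.get(id));
    }

    void save(Employee e) {
        // Last write wins: the whole row is overwritten.
        table.put(e.id, e.salary);
    }

    double salaryOf(int id) { return table.get(id); }
}

public class LostUpdateDemo {
    public static void main(String[] args) {
        NaiveOrm orm = new NaiveOrm();

        Employee e = orm.findEmployee(1);
        e.salary *= 1.1;                   // raise: 1000 -> 1100

        Employee e2 = orm.findEmployee(1); // independent copy, still 1000

        orm.save(e);   // writes 1100
        orm.save(e2);  // overwrites with the stale 1000 - the raise is lost
        System.out.println(orm.salaryOf(1)); // prints 1000.0
    }
}
```

Because `findEmployee` hands out two unrelated instances, the second `save` silently clobbers the first, exactly the inconsistency described above.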


The solution to this problem is provided by the IdentityMap pattern, which is well explained by Martin Fowler. This pattern ensures that you see one and only one instance of a persistent object within a transaction. It works well for read access, but writing is more problematic. If you update or delete data in a table, you should also update, or at least evict, the corresponding instances in the IdentityMap, which is not easy to realize (the IdentityMap would then have to understand the SQL and behave the same way as the database).
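A minimal sketch of the pattern, under the assumption of one identity map per transaction (all names are hypothetical; a real platform such as Hibernate keeps this cache in its session, not in user code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical employee entity with public fields for brevity.
class Employee {
    int id;
    double salary;
    Employee(int id, double salary) { this.id = id; this.salary = salary; }
}

// Hypothetical ORM with a per-"transaction" identity map.
class IdentityMapOrm {
    // Simulated database table: id -> salary
    private final Map<Integer, Double> table = new HashMap<>();
    // One and only one instance per id within the transaction.
    private final Map<Integer, Employee> identityMap = new HashMap<>();

    IdentityMapOrm() { table.put(1, 1000.0); }

    Employee findEmployee(int id) {
        // Return the already-loaded instance if one exists,
        // otherwise materialize it once and cache it.
        return identityMap.computeIfAbsent(id,
                k -> new Employee(k, table.get(k)));
    }

    void commit() {
        // At the end of the transaction, flush all managed instances.
        identityMap.values().forEach(e -> table.put(e.id, e.salary));
        identityMap.clear();
    }

    double salaryOf(int id) { return table.get(id); }
}

public class IdentityMapDemo {
    public static void main(String[] args) {
        IdentityMapOrm orm = new IdentityMapOrm();

        Employee e = orm.findEmployee(1);
        e.salary *= 1.1;                   // raise: 1000 -> 1100

        Employee e2 = orm.findEmployee(1); // the SAME instance as e
        System.out.println(e == e2);       // prints true

        orm.commit();
        System.out.println(orm.salaryOf(1)); // the raise survived
    }
}
```

Because both finds return the same instance, there is no stale copy to overwrite the raise at commit time. The hard part, as noted above, is keeping this cache consistent with updates and deletes that bypass it.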

I haven't seen such problems with RoR, but I have in the Java EE space with some home-grown O/R mappers. The procedure was always the same:

  1. Developers built their own persistence framework (because it was easier than Java EE :-))
  2. Inconsistencies occurred.
  3. The framework was replaced by Hibernate, CMP 2.0, or JDO.

Sometimes Java EE is easy - compared to the alternatives :-)



Hi Adam,

I believe the reason this is not seen as a big problem in the RoR space is that applications seldom have the size of the typical 10-50 person EJB project. The most likely occurrence of this problem is of course not in the simple examples used to illustrate it, but in situations where business logic developed by different developers uses the same object.

Of course the question is legitimate whether you should have a project requiring this size in the first place ;-)

Posted by Stefan Tilkov on September 08, 2006 at 02:08 PM CEST #

So - How hard would it be to implement an IdentityMap (bound to the transaction and/or thread) in ActiveRecord?

Is it something we might expect in the core library (based on the comments on the issue by DHH, I would say no) - or is this something that should be solved by a plugin?

As applications scale, and more infrastructure code gets written (such as filters, authorization checks etc.), this issue gets more and more significant IMO.

Posted by Anders Engstrom on September 08, 2006 at 02:28 PM CEST #


"I believe the reason this is not seen as a big problem in the RoR space is that applications seldom have the size of the typical, 10-50 people EJB project."

I do not believe in efficient projects with more than 5 developers in one room :-). But the problem with Active Record can happen even when only 2 eager developers are involved. I like to be able to rely on the persistence layer.

Posted by Adam Bien on September 08, 2006 at 10:50 PM CEST #


I think it is not the responsibility of a project to implement the IdentityMap. This should be realized by the platform. For instance, Hibernate does this (first-level cache).
In more complex object graphs it is not very convenient (and often not even possible) to call "save" to sync the data. In more complex graphs this should happen automatically (at the end of the transaction).

Posted by Adam Bien on September 08, 2006 at 10:54 PM CEST #

Adam - I totally agree :) I'm also from a JEE/Hibernate/JDO background - and calling "save" on a previously persisted entity feels so... wrong :)

I'm guessing (and hoping) that RoR and server-side Java will learn from each other in the future. Sun hiring the two lead developers of JRuby is a nice move in that direction :)

Posted by Anders Engstrom on September 09, 2006 at 02:21 AM CEST #

We use TopLink as our persistence framework. There were no problems for the mentioned case. But we have other problems finding the correct objects that were created in an open transaction and not yet committed.

Posted by Adrian on September 11, 2006 at 02:58 AM CEST #


You are right. I also have some issues, especially with the cache within the same transaction. But after switching to CMP 2.1 all problems are gone (really).
I hope that the GlassFish persistence will work better than TopLink did some years ago :-)



Posted by Adam Bien on September 11, 2006 at 11:26 PM CEST #
