Hibernate Facts: Knowing flush operations order matters

Hibernate shifts the developer mindset from thinking in SQL to thinking in object state transitions. According to the Hibernate documentation, an entity may be in one of the following states:

  • new/transient: the entity is not associated with a persistence context, e.g. a newly created object the database doesn’t know anything about.
  • persistent: the entity is associated with a persistence context (residing in the 1st Level Cache) and there is a database row representing it.
  • detached: the entity was previously associated with a persistence context, but the persistence context was closed or the entity was manually evicted.
  • removed: the entity was marked as removed and the persistence context will delete it from the database at flush time.

Moving an object from one state to another is done by calling EntityManager methods such as:

  • persist()
  • merge()
  • remove()
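
These transitions can be pictured as a tiny state machine. The following is a plain-Java simulation for illustration only, not actual JPA; the `EntityLifecycle`, `State`, and `Context` names are made up:

```java
import java.util.HashSet;
import java.util.Set;

public class EntityLifecycle {

    enum State { TRANSIENT, PERSISTENT, DETACHED, REMOVED }

    // Toy stand-in for the persistence context (1st Level Cache)
    static class Context {
        private final Set<Object> managed = new HashSet<>();

        State persist(Object entity) { managed.add(entity); return State.PERSISTENT; }
        State remove(Object entity)  { /* actual deletion deferred to flush */ return State.REMOVED; }
        State close(Object entity)   { managed.remove(entity); return State.DETACHED; }
    }

    public static void main(String[] args) {
        Context ctx = new Context();
        Object product = new Object();

        State state = State.TRANSIENT;  // new object, unknown to the database
        state = ctx.persist(product);   // now managed by the persistence context
        System.out.println(state);      // PERSISTENT
        state = ctx.close(product);     // context closed -> entity detached
        System.out.println(state);      // DETACHED
    }
}
```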

Cascading allows propagating a given event from a parent to a child entity, easing entity relationship management.

During flush time, Hibernate will translate the changes recorded by the current Persistence Context into SQL queries.
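
A rough way to picture this deferral (a plain-Java sketch, not Hibernate’s actual internals; the `ToyContext` name and the SQL strings are illustrative): the context merely records the actions you perform, and only renders them into SQL when flush() runs:

```java
import java.util.ArrayList;
import java.util.List;

public class FlushSimulation {

    // Toy persistence context: records pending changes, renders SQL only at flush
    static class ToyContext {
        private final List<String> pendingInserts = new ArrayList<>();
        private final List<String> pendingDeletes = new ArrayList<>();

        void persist(String name) { pendingInserts.add(name); }
        void remove(String name)  { pendingDeletes.add(name); }

        List<String> flush() {
            List<String> sql = new ArrayList<>();
            // Inserts are rendered before deletes, mirroring Hibernate's fixed flush order
            for (String n : pendingInserts) sql.add("insert into Image (name) values ('" + n + "')");
            for (String n : pendingDeletes) sql.add("delete from Image where name = '" + n + "'");
            pendingInserts.clear();
            pendingDeletes.clear();
            return sql;
        }
    }

    public static void main(String[] args) {
        ToyContext ctx = new ToyContext();
        ctx.remove("side image");   // recorded, no SQL yet
        ctx.persist("back image");  // recorded, no SQL yet
        ctx.flush().forEach(System.out::println);
    }
}
```

Note that the call order (remove first, persist second) is not the execution order; that distinction is exactly what the rest of this article is about.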

Now, think about what happens in the following code (reduced for the sake of brevity):

@Entity
public class Product {

   @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "product", orphanRemoval = true)
   @OrderBy("index")
   private Set<Image> images = new LinkedHashSet<>();

   public Set<Image> getImages() {
      return images;
   }

   public void addImage(Image image) {
      images.add(image);
      image.setProduct(this);
   }

   public void removeImage(Image image) {
      images.remove(image);
      image.setProduct(null);
   }
}

@Entity
public class Image {

   @Column(unique = true)
   private int index;

   private String name;

   @ManyToOne
   private Product product;

   public int getIndex() {
      return index;
   }

   public void setIndex(int index) {
      this.index = index;
   }

   public String getName() {
      return name;
   }

   public void setName(String name) {
      this.name = name;
   }

   public Product getProduct() {
      return product;
   }

   public void setProduct(Product product) {
      this.product = product;
   }
}

final Long productId = transactionTemplate.execute(new TransactionCallback<Long>() {
   @Override
   public Long doInTransaction(TransactionStatus transactionStatus) {
      Product product = new Product();

      Image frontImage = new Image();
      frontImage.setIndex(0);

      Image sideImage = new Image();
      sideImage.setIndex(1);

      product.addImage(frontImage);
      product.addImage(sideImage);

      entityManager.persist(product);
      return product.getId();
   }
});

try {
   transactionTemplate.execute(new TransactionCallback<Void>() {
      @Override
      public Void doInTransaction(TransactionStatus transactionStatus) {
         Product product = entityManager.find(Product.class, productId);
         assertEquals(2, product.getImages().size());
         Iterator<Image> imageIterator = product.getImages().iterator();

         Image frontImage = imageIterator.next();
         assertEquals(0, frontImage.getIndex());
         Image sideImage = imageIterator.next();
         assertEquals(1, sideImage.getIndex());

         Image backImage = new Image();
         backImage.setName("back image");
         backImage.setIndex(1);

         product.removeImage(sideImage);
         product.addImage(backImage);

         entityManager.flush();
         return null;
      }
   });
   fail("Expected ConstraintViolationException");
} catch (PersistenceException expected) {
   assertEquals(ConstraintViolationException.class, expected.getCause().getClass());
}

Because of the Image.index unique constraint, we get a ConstraintViolationException at flush time.

You may wonder why this happens, since we remove the sideImage before adding the backImage with the same index. The answer is the order of flush operations.

According to the Hibernate JavaDoc, the SQL operation order is:

  • inserts
  • updates
  • deletions of collection elements
  • inserts of collection elements
  • deletes
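
That fixed ordering is what bites us here: the backImage insert is an entity insert (step 1), while the orphan-removal of sideImage is an entity delete (the last step). A minimal simulation (plain Java against an in-memory set standing in for the unique index; not Hibernate internals) of replaying the queued operations in that order:

```java
import java.util.HashSet;
import java.util.Set;

public class FlushOrderDemo {

    public static void main(String[] args) {
        // Existing rows: index values already present in the Image table
        Set<Integer> uniqueIndex = new HashSet<>();
        uniqueIndex.add(0); // frontImage
        uniqueIndex.add(1); // sideImage

        // The unit of work queued: remove sideImage (index 1), add backImage (index 1).
        // Hibernate executes entity inserts before entity deletes, so the
        // insert of index 1 runs while the old row still occupies that value.
        boolean inserted = uniqueIndex.add(1); // insert backImage -> rejected, 1 is taken
        if (!inserted) {
            System.out.println("ConstraintViolationException: index 1 already exists");
        }
        uniqueIndex.remove(1); // the sideImage delete arrives too late
    }
}
```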

Because our image collection is “mappedBy”, the Image entity controls the association, hence the “backImage” insert happens before the “sideImage” delete.

select product0_.id as id1_5_0_, product0_.name as name2_5_0_ from Product product0_ where product0_.id=?
select images0_.product_id as product_4_5_1_, images0_.id as id1_1_1_, images0_.id as id1_1_0_, images0_.index as index2_1_0_, images0_.name as name3_1_0_, images0_.product_id as product_4_1_0_ from Image images0_ where images0_.product_id=? order by images0_.index
insert into Image (id, index, name, product_id) values (default, ?, ?, ?)
ERROR: integrity constraint violation: unique constraint or index violation; UK_OQBG3YIU5I1E17SL0FEAWT8PE table: IMAGE

To fix this, you have to manually flush the Persistence Context after the remove operation:

transactionTemplate.execute(new TransactionCallback<Void>() {
   @Override
   public Void doInTransaction(TransactionStatus transactionStatus) {
      Product product = entityManager.find(Product.class, productId);
      assertEquals(2, product.getImages().size());
      Iterator<Image> imageIterator = product.getImages().iterator();

      Image frontImage = imageIterator.next();
      assertEquals(0, frontImage.getIndex());
      Image sideImage = imageIterator.next();
      assertEquals(1, sideImage.getIndex());

      Image backImage = new Image();
      backImage.setIndex(1);

      product.removeImage(sideImage);
      entityManager.flush();

      product.addImage(backImage);

      entityManager.flush();
      return null;
   }
});

This will output the desired behavior:

select versions0_.image_id as image_id3_1_1_, versions0_.id as id1_8_1_, versions0_.id as id1_8_0_, versions0_.image_id as image_id3_8_0_, versions0_.type as type2_8_0_ from Version versions0_ where versions0_.image_id=? order by versions0_.type
delete from Image where id=?
insert into Image (id, index, name, product_id) values (default, ?, ?, ?)
  • Source code available here.

 

Vlad Mihalcea

Vlad Mihalcea is a software architect passionate about software integration, high scalability and concurrency challenges.