Hibernate Collections: Optimistic Locking
Introduction

Hibernate provides an optimistic locking mechanism that prevents lost updates even for long conversations. When the entity state spans multiple user requests (extended persistence context or detached entities), Hibernate can guarantee application-level repeatable reads. The dirty checking mechanism detects entity state changes and increments the entity version. While basic property changes are always taken into consideration, Hibernate collections are more subtle in this regard.
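All the following examples assume the parent entity carries a JPA @Version attribute, along the lines of this minimal sketch (getters and setters omitted):

@Entity(name = "post")
public class Post {

    @Id
    private Long id;

    private String name;

    //the optimistic locking version attribute, incremented whenever
    //a dirty-checked state change is written to the database
    @Version
    private int version;
}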
Owned vs. Inverse Collections

In relational databases, two records are associated through a foreign key reference. In this relationship, the referenced record is the parent, while the referencing row (the foreign key side) is the child. A non-null foreign key may only reference an existing parent record.

In the object-oriented space, this association can be represented in both directions. We can have a many-to-one reference from a child to its parent, and the parent can also have a one-to-many children collection. Because both sides could potentially control the database foreign key state, we must ensure that only one side is the owner of this association. Only the owning-side state changes are propagated to the database. The non-owning side has traditionally been referred to as the inverse side.

Next I'll describe the most common ways of modelling this association.

The Unidirectional Parent-Owning-Side-Child Association Mapping

Only the parent side has a @OneToMany non-inverse children collection. The child entity doesn't reference the parent entity at all.

@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Comment> comments = new ArrayList<>();
    ...
}

The Unidirectional Parent-Owning-Side-Child Component Association Mapping

The child side doesn't always have to be an entity; we might model it as a component type instead. An Embeddable object (component type) may contain both basic types and association mappings, but it can never contain an @Id. The Embeddable object is persisted/removed along with its owning entity.

The parent has an @ElementCollection children association. The child entity may only reference the parent through the non-queryable, Hibernate-specific @Parent annotation.

@Entity(name = "post")
public class Post {
    ...
    @ElementCollection
    @JoinTable(name = "post_comments", joinColumns = @JoinColumn(name = "post_id"))
    @OrderColumn(name = "comment_index")
    private List<Comment> comments = new ArrayList<>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Embeddable
public class Comment {
    ...
    @Parent
    private Post post;
    ...
}

The Bidirectional Parent-Owning-Side-Child Association Mapping

The parent is the owning side, so it has a @OneToMany non-inverse (without a mappedBy directive) children collection. The child entity references the parent entity through a @ManyToOne association that's neither insertable nor updatable:

@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Comment> comments = new ArrayList<>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Entity(name = "comment")
public class Comment {
    ...
    @ManyToOne
    @JoinColumn(name = "post_id", insertable = false, updatable = false)
    private Post post;
    ...
}

The Bidirectional Child-Owning-Side-Parent Association Mapping

The child entity references the parent entity through a @ManyToOne association, and the parent has a mappedBy @OneToMany children collection. The parent side is the inverse side, so only the @ManyToOne state changes are propagated to the database. Even if there's only one owning side, it's always a good practice to keep both sides in sync by using the add/removeChild() methods.

@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, mappedBy = "post")
    private List<Comment> comments = new ArrayList<>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Entity(name = "comment")
public class Comment {
    ...
    @ManyToOne
    private Post post;
    ...
}
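To illustrate why the sync methods matter, here is a minimal usage sketch (assuming the entities above): calling addComment() wires both sides before persisting, so the in-memory model matches what ends up in the database.

Post post = new Post();
post.setId(1L);
post.setName("Hibernate training");

Comment comment = new Comment();
comment.setReview("Good post!");

//addComment() sets both the child's post reference
//and the parent's children collection
post.addComment(comment);

session.persist(post);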
The Unidirectional Child-Owning-Side-Parent Association Mapping

The child entity references the parent through a @ManyToOne association. The parent doesn't have a @OneToMany children collection, so the child entity becomes the owning side. This association mapping resembles the relational data foreign key linkage.

@Entity(name = "comment")
public class Comment {
    ...
    @ManyToOne
    private Post post;
    ...
}

Collection Versioning

Section 3.4.2 of the JPA 2.1 specification defines optimistic locking as:

The version attribute is updated by the persistence provider runtime when the object is written to the database. All non-relationship fields and properties and all relationships owned by the entity are included in version checks [35].

[35] This includes owned relationships maintained in join tables.

N.B. Only owning-side children collections can update the parent version.

Testing Time

Let's test how the parent-child association type affects the parent versioning. Because we are interested in the children collection dirty checking, the unidirectional child-owning-side-parent association is going to be skipped, as in that case the parent doesn't contain a children collection.

Test Case

The following test case is going to be used for all collection type use cases:
protected void simulateConcurrentTransactions(final boolean shouldIncrementParentVersion) {
    final ExecutorService executorService = Executors.newSingleThreadExecutor();

    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            try {
                P post = postClass.newInstance();
                post.setId(1L);
                post.setName("Hibernate training");
                session.persist(post);
                return null;
            } catch (Exception e) {
                throw new IllegalArgumentException(e);
            }
        }
    });

    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(final Session session) {
            final P post = (P) session.get(postClass, 1L);
            try {
                executorService.submit(new Callable<Void>() {
                    @Override
                    public Void call() throws Exception {
                        return doInTransaction(new TransactionCallable<Void>() {
                            @Override
                            public Void execute(Session _session) {
                                try {
                                    P otherThreadPost = (P) _session.get(postClass, 1L);
                                    int loadTimeVersion = otherThreadPost.getVersion();
                                    assertNotSame(post, otherThreadPost);
                                    assertEquals(0L, otherThreadPost.getVersion());
                                    C comment = commentClass.newInstance();
                                    comment.setReview("Good post!");
                                    otherThreadPost.addComment(comment);
                                    _session.flush();
                                    if (shouldIncrementParentVersion) {
                                        assertEquals(otherThreadPost.getVersion(), loadTimeVersion + 1);
                                    } else {
                                        assertEquals(otherThreadPost.getVersion(), loadTimeVersion);
                                    }
                                    return null;
                                } catch (Exception e) {
                                    throw new IllegalArgumentException(e);
                                }
                            }
                        });
                    }
                }).get();
            } catch (Exception e) {
                throw new IllegalArgumentException(e);
            }
            post.setName("Hibernate Master Class");
            session.flush();
            return null;
        }
    });
}

The Unidirectional Parent-Owning-Side-Child Association Testing

#create tables
Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), primary key (id))][]}
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null, comment_index integer not null, primary key (post_id, comment_index))][]}
Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]}
Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}

#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}

#select post in secondary transaction
Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}

#insert comment in secondary transaction
#optimistic locking post version update in secondary transaction
Query:{[insert into comment (id, review) values (default, ?)][Good post!]}
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]}
Query:{[insert into post_comment (post_id, comment_index, comments_id) values (?, ?, ?)][1,0,1]}

#optimistic locking exception in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnUnidirectionalCollectionTest$Post#1]

The Unidirectional Parent-Owning-Side-Child Component Association Testing

#create tables
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[create table post_comments (post_id bigint not null, review varchar(255), comment_index integer not null, primary key (post_id, comment_index))][]}
Query:{[alter table post_comments add constraint FK_gh9apqeduab8cs0ohcq1dgukp foreign key (post_id) references post][]}

#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}

#select post in secondary transaction
Query:{[select entityopti0_.id as id1_0_0_, entityopti0_.name as name2_0_0_, entityopti0_.version as version3_0_0_ from post entityopti0_ where entityopti0_.id=?][1]}
Query:{[select comments0_.post_id as post_id1_0_0_, comments0_.review as review2_1_0_, comments0_.comment_index as comment_3_0_ from post_comments comments0_ where comments0_.post_id=?][1]}

#insert comment in secondary transaction
#optimistic locking post version update in secondary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]}
Query:{[insert into post_comments (post_id, comment_index, review) values (?, ?, ?)][1,0,Good post!]}

#optimistic locking exception in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnComponentCollectionTest$Post#1]

The Bidirectional Parent-Owning-Side-Child Association Testing

#create tables
Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]}
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]}
Query:{[alter table post_comment add constraint UK_se9l149iyyao6va95afioxsrl unique (comments_id)][]}
Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]}
Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]}
Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}

#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}

#select post in secondary transaction
Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}
Query:{[select comments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.id as id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.id as id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ inner join comment entityopti1_ on comments0_.comments_id=entityopti1_.id left outer join post entityopti2_ on entityopti1_.post_id=entityopti2_.id where comments0_.post_id=?][1]}

#insert comment in secondary transaction
#optimistic locking post version update in secondary transaction
Query:{[insert into comment (id, review) values (default, ?)][Good post!]}
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]}
Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]}

#optimistic locking exception in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnBidirectionalParentOwningCollectionTest$Post#1]

The Bidirectional Child-Owning-Side-Parent Association Testing

#create tables
Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]}
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]}

#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}

#select post in secondary transaction
Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}

#insert comment in secondary transaction
#post version is not incremented in secondary transaction
Query:{[insert into comment (id, post_id, review) values (default, ?, ?)][1,Good post!]}
Query:{[select count(id) from comment where post_id =?][1]}

#update works in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}

Overruling Default Collection Versioning

If the default owning-side collection versioning is not suitable for your use case, you can always overrule it with the Hibernate @OptimisticLock annotation.

Let's overrule the default parent version update mechanism for the bidirectional parent-owning-side-child association:
@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @OptimisticLock(excluded = true)
    private List<Comment> comments = new ArrayList<>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Entity(name = "comment")
public class Comment {
    ...
    @ManyToOne
    @JoinColumn(name = "post_id", insertable = false, updatable = false)
    private Post post;
    ...
}

This time, the children collection changes won't trigger a parent version update:

#create tables
Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]}
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]}
Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]}
Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]}
Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}

#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}

#select post in secondary transaction
Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}
Query:{[select comments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.id as id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.id as id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ inner join comment entityopti1_ on comments0_.comments_id=entityopti1_.id left outer join post entityopti2_ on entityopti1_.post_id=entityopti2_.id where comments0_.post_id=?][1]}

#insert comment in secondary transaction
Query:{[insert into comment (id, review) values (default, ?)][Good post!]}
Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]}

#update works in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}

Conclusion

It's very important to understand how various modelling structures impact concurrency patterns. Owning-side collection changes are taken into consideration when incrementing the parent version number, and you can always bypass this mechanism using the @OptimisticLock annotation.

Code available on GitHub.
November 4, 2014
Comments
Mar 30, 2017 · Alon Segal
Indeed. My point was to show you that you can use the abstract class that Hibernate provides, not to take the test case as a drop-in replacement for your project. The DriverManager is not intended for production, but you can use the DataSourceConnectionProvider. Check out this article for more details.
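For reference, a minimal bootstrap sketch (assuming Hibernate 4.x+ and some pooled DataSource implementation; the registered entity is illustrative). Supplying a DataSource makes Hibernate pick the DataSource-based ConnectionProvider automatically:

import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.AvailableSettings;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

public class SessionFactoryProvider {

    public static SessionFactory newSessionFactory(DataSource dataSource) {
        Configuration configuration = new Configuration();
        configuration.addAnnotatedClass(Post.class); //register entities as needed
        ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
            //passing a DataSource selects the DataSource-based ConnectionProvider
            .applySetting(AvailableSettings.DATASOURCE, dataSource)
            .applySettings(configuration.getProperties())
            .build();
        return configuration.buildSessionFactory(serviceRegistry);
    }
}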
Mar 30, 2017 · Alon Segal
You can make it select from multiple Connection Providers. Check out this example on Hibernate test cases.
Mar 30, 2017 · Alon Segal
Looks good from a Hibernate perspective. However, I'd strongly suggest extending the abstract base class instead of implementing the interface directly.
Sep 26, 2016 · Eric Genesky
You should use an automated testing utility to detect the N+1 query issue during testing.
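One way to automate it is to wrap the test DataSource with a statement-counting proxy and assert the expected counts after exercising the data access code. Here is a sketch using the datasource-proxy library (an assumed dependency; names are illustrative):

import javax.sql.DataSource;
import net.ttddyy.dsproxy.QueryCount;
import net.ttddyy.dsproxy.QueryCountHolder;
import net.ttddyy.dsproxy.support.ProxyDataSourceBuilder;

public class NPlusOneDetector {

    //wrap the real DataSource so every executed statement is counted
    public static DataSource countingProxy(DataSource actual) {
        return ProxyDataSourceBuilder
            .create(actual)
            .name("test-ds")
            .countQuery()
            .build();
    }

    //fail the test when more SELECTs ran than expected (the N+1 signature)
    public static void assertSelectCount(long expected) {
        QueryCount queryCount = QueryCountHolder.get("test-ds");
        long recorded = queryCount.getSelect();
        if (recorded != expected) {
            throw new AssertionError(
                "Expected " + expected + " select statements but " + recorded + " were executed");
        }
    }
}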
Aug 10, 2016 · Michael Remijan
This is so wrong! When you have a data access layer, the only valuable tests are integration tests that you run against the same database engine type as the one you use in production.
You can run integration tests on MySQL or PostgreSQL or any other DB almost as fast as on H2 or HSQLDB. You just have to map the data directory in-memory, as explained in this article.
Mar 29, 2016 · Dave Fecak
Sure, it depends on the developers' skills. Nevertheless, we deployed one of the largest real estate platforms in Finland using Spring and JTA, and it worked like a charm.
Mar 29, 2016 · Dave Fecak
It's actually really simple. Check out this Java-based JTA configuration. In 100 lines of code I managed to set up the Bitronix PoolingDataSource, a datasource-proxy to intercept all statements, the Bitronix config, the JTA Spring transaction manager and the Hibernate entity manager factory classes.
So, it's actually pretty simple.
Mar 29, 2016 · Dave Fecak
The JtaTransactionManager allows you to have JTA transactions in Spring. This is just a wrapper because underneath it still needs an actual TM: Atomikos, Bitronix, Narayana, etc.
HibernateTransactionManager is more like a legacy component that was available before JPA 1.0 emerged. Nowadays, most users choose the JpaTransactionManager, which can be configured with or without a persistence.xml file.
You can have multiple resources without needing JTA, like when you set up multiple DataSources (one master and multiple slaves). In this case RESOURCE_LOCAL works fine and, using the read-only flag of the @Transactional annotation, you can be redirected to the right DataSource (see the sketch at the end of this comment).
You can even have multiple resources and JTA too.
Depending on the current application requirements, you get to choose what's best for your system.
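As a sketch of that read-only routing idea (class and key names are illustrative; this relies on Spring's AbstractRoutingDataSource):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

//routes to the replica when the current transaction is marked
//@Transactional(readOnly = true), otherwise to the master
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
            ? "replica"
            : "master";
    }
}

The master and replica DataSources are registered through setTargetDataSources(), and the routing DataSource is usually wrapped in a LazyConnectionDataSourceProxy so the lookup key is resolved after the transaction has started.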
Mar 29, 2016 · Dave Fecak
That's true. Actually, squeezing the last drop is more an exception than a rule, so you're right.
JTA is very valuable, and it's a must when coordinating multiple data sources too.
Mar 29, 2016 · Dave Fecak
Declarative transactions are indeed worth it, but both Java EE and Spring have them. As far as I know, Spring lets you set the transaction isolation level in the @Transactional annotation too.
I'd like to see such a benchmark because I'm very curious about the actual results.
In a high-performance application, every millisecond matters. Check out the aggressive connection release mode, which is used by default on any JTA deployment when using Hibernate as a JPA provider.
Mar 29, 2016 · Dave Fecak
Actually, if you use a single DataSource, JTA is not needed at all and you don't have to manage transactions manually. You can use the HibernateTransactionManager or JpaTransactionManager from Spring and there's nothing wrong with them.
In fact, they allow you to set the isolation level, the read-only flag (to route requests to read or write nodes) and the timeout.
Even with 1PC optimization, using JTA is still slower for a high-performance application.
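For instance, the declarative attributes mentioned above look like this (a minimal sketch; the service and repository names are illustrative):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PostService {

    //hypothetical repository abstraction, just for the sketch
    public interface PostRepository {
        Post findById(Long id);
    }

    private final PostRepository postRepository;

    public PostService(PostRepository postRepository) {
        this.postRepository = postRepository;
    }

    //isolation level, read-only hint and timeout, all declarative,
    //no JTA transaction manager required
    @Transactional(isolation = Isolation.READ_COMMITTED, readOnly = true, timeout = 5)
    public Post findPost(Long id) {
        return postRepository.findById(id);
    }
}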
Mar 07, 2016 · Ming Jiang
Great topic. You can also check my High-Performance Hibernate tutorial.
Jan 27, 2015 · Alvin Ashcraft
1. "readers don't block writers, writers don't block readers", but "writers block writers" and a DML statement is a writer, which will take a lock even with MVCC.
There's a very detailed explanation of every Oracle transaction isolation level behaviour on Oracle Tech Network:
http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o65asktom-082389.html
On READ_COMMITTED:
All DML statements use exclusive locks.
2. Every database has a pre-defined locking scheme for each transaction isolation level. It doesn't mean that all RDBMS comply with the SQL standard. Oracle, for instance, doesn't allow "dirty reads" in READ_UNCOMMITTED:
So, Oracle implements "READ UNCOMMITTED" as a non-blocking "READ COMMITTED" isolation level, which is not what the standard defined.
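If you want to see point 1 in action, here is a small JDBC sketch (the connection URL and the pre-existing post table with id = 1 are assumptions; any MVCC engine behaves the same way for two writers on one row):

import java.sql.Connection;
import java.sql.DriverManager;

public class WritersBlockWritersDemo {

    //hypothetical in-memory database holding a post table with id = 1
    private static final String URL = "jdbc:hsqldb:mem:demo";

    public static void main(String[] args) throws Exception {
        try (Connection alice = DriverManager.getConnection(URL, "sa", "")) {
            alice.setAutoCommit(false);
            //Alice takes the exclusive row-level lock
            alice.createStatement().executeUpdate(
                "update post set name = 'Alice' where id = 1");

            Thread bob = new Thread(() -> {
                try (Connection conn = DriverManager.getConnection(URL, "sa", "")) {
                    conn.setAutoCommit(false);
                    //blocks until Alice commits: writers block writers,
                    //even on MVCC engines
                    conn.createStatement().executeUpdate(
                        "update post set name = 'Bob' where id = 1");
                    conn.commit();
                } catch (Exception e) {
                    throw new IllegalStateException(e);
                }
            });
            bob.start();

            Thread.sleep(1000); //let Bob block on the row lock
            alice.commit();     //releases the lock so Bob's update can proceed
            bob.join();
        }
    }
}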
Jan 27, 2015 · Alvin Ashcraft
That's a very fine description of the MVCC inner workings. All in all, the database transaction isolation levels and their logical (MVCC) or physical (shared/exclusive) locks cannot prevent all data integrity anomalies.
In a multi-request logical transaction (a web application workflow), you need application-level optimistic locking anyway.
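As a sketch of what I mean by application-level optimistic locking (assuming a @Version-ed Post entity and Hibernate's native API):

import org.hibernate.Session;
import org.hibernate.StaleObjectStateException;

public class PostConversation {

    //reattach a Post loaded in a previous web request; the detached entity
    //still carries its load-time version, so the UPDATE runs as
    //"update post set ... where id = ? and version = ?" and a concurrent
    //modification surfaces as an exception instead of a lost update
    public void saveChanges(Session session, Post detachedPost) {
        try {
            session.update(detachedPost);
            session.flush();
        } catch (StaleObjectStateException e) {
            //lost update prevented: ask the user to reload and retry
            throw new IllegalStateException("The post was changed concurrently", e);
        }
    }
}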
I am writing a Hibernate Master Class tutorial and most of my writing efforts were channelled to the concurrency-control benefits of using an ORM tool.
With your thorough knowledge of this topic, I would be honoured to have your sincere review on my current articles.
Thanks, Vlad
Jan 02, 2015 · amiT jaiN
Thanks for pointing it out. Bitronix supports default isolation levels too. It's Spring's default JTA transaction manager that doesn't support them. But it's easy to extend it, as in the WebLogic example.
Dec 28, 2014 · Bogdan Mustiata
Aside from the personal insult, I can once again demonstrate you're wrong.
JPA is cache-centric, if only because it mandates session-level repeatable reads through the 1st-level cache. This is mandatory since state transitions are not immediately synchronized with the DB. For EAGER associations, if you rely on caching then you have to enable the 2nd-level cache, which is disabled by default. That's because the 1st-level cache is bound to the life cycle of one and only one Session, so you will always fetch the entities from the DB using JOINs or secondary selects, which leaves you with the 2nd-level cache variant. The 2nd-level cache doesn't solve it either when a JOIN is issued anyway, even if the entity is in the cache, like when loading the entity through the entity manager find method.
1. Did you know that the default AUTO flushing is not consistent with native queries in Hibernate? How are you going to resolve that with mere JPA spec logic?
2. The 2nd-level cache, like any other cache, introduces a consistency breaking point that you need to compensate for with application logic. Is this really a good reason for using the 2nd-level cache? Just for the default to-one associations?
3. Go ahead and ask this question on the JPA mailing list. What if they tell you that this behaviour dates back to the 1.0 spec, when LAZY wasn't a mandatory requirement? Could it be related to LAZY being only a hint?
Good luck with your EAGER associations!
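To make point 1 concrete, here is a minimal sketch (assuming Hibernate's native SQLQuery API): without synchronizing the native query with the entity's table space, the pending UPDATE may not be flushed before the query runs under the legacy AUTO flush mode.

import java.util.List;
import org.hibernate.Session;

public class NativeQueryFlushExample {

    public List<?> readPostName(Session session) {
        Post post = (Post) session.get(Post.class, 1L);
        post.setName("High-Performance Java Persistence");

        //legacy AUTO flush does not flush before a native query unless the
        //query is synchronized with the affected entity's tables
        return session
            .createSQLQuery("select name from post where id = :id")
            .addSynchronizedEntityClass(Post.class)
            .setParameter("id", 1L)
            .list();
    }
}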
Dec 27, 2014 · Bogdan Mustiata
Argumentum ad hominem
Dec 26, 2014 · Bogdan Mustiata
Thanks Valery for your suggestions. Indeed it's a dangerous zone. Most developers will sacrifice performance in the name of an illusory portability, be it JPA or the database server. In a mid-to-large-size enterprise project, it's not that simple to switch technologies, and you most likely have to run optimized native queries anyway.
I liked your two suggestions on pagination and projections. This article is just a small section of the larger "Hibernate Master Class" free online course I am writing. Each article focuses on one idea, so I will address the Collection fetching anti-patterns and the "only Entity fetching" misconceptions in some new posts.
Dec 25, 2014 · Bogdan Mustiata
And the call:
You'll get a wonderful Cartesian Product.
This is no micro-optimization or performance tuning strategy. This is proper design and common-sense from an SQL point of view.
Nov 13, 2014 · Mr B Loid
That was exactly my point. Thanks for appreciating my article.
Sep 14, 2014 · Mr B Loid
Great numbers. Thanks for sharing it.
Mar 05, 2014 · Mr B Loid
Very good article. When it comes to concurrency, consistency and replication, all DBs (NoSQL or SQL) share the same challenges. I think you might find this article (and the rest of the blog) really interesting.
Feb 11, 2014 · Amit Mehra
Hi Peter,
First of all, congrats on releasing your book. The last JEE book I read was Adam Bien's "Real World Java EE Patterns Rethinking Best Practices" and it changed my opinion about JEE, as I was previously looking at it from the J2EE heavyweight perspective.
I would like to read more about the new JMS 2.0, the JPA enhancements and WebSockets support, and I hope I get the chance to write a DZone book report after reading it.
Vlad
Jan 08, 2014 · Amit Mehra
Facebook still uses MySQL for its social graph and Cassandra for email searching. If we are talking about a relational data model, then RDBMS is the perfect choice and most projects don't really fit into a 'BigData' category anyway. Yes, NoSQL has evolved in the context of BigData, and therefore it offers sharding/horizontal scalability options, but then, you can horizontally scale a SQL solution too.
Dec 07, 2013 · Mr B Loid
Thanks for the tip, I am just an occasional Python developer, since I use it more as a universal bash scripting tool. I wanted to distribute the entries between a start and an end date, so I can further calculate some time series. In this example I wanted to generate 50,000,000 values for a one-year period (2012-2013).
Nov 25, 2013 · Subhash Chandran
Hi,
I like to check open-source projects' code bases. I am curious how they implemented some features I frequently use, and I got to learn a lot (applied design patterns, new Java features I haven't had the chance to use). But that's not how I evaluate tools. Like you said, I also take a pragmatic approach, and I weigh the benefit I get against the overhead it adds to my current application (development/deployment). There are many projects using Hibernate simply because everybody's using it, when they could do better with simple JDBC or jOOQ. Or you have a UI table loading all rows for every new page it displays, and people complain the database is too slow. The DB can be very fast, but you have to know more than select/insert/update/delete or B-Tree indexes, like SQL window functions for instance.
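For instance, here is the kind of window-function pagination I have in mind (an illustrative JDBC sketch, assuming a post table):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class WindowFunctionPagination {

    //fetch a single page with ROW_NUMBER(), so the database paginates
    //instead of the UI loading every row
    public void fetchPage(Connection connection, int page, int pageSize) throws SQLException {
        String sql =
            "select id, name from ( " +
            "  select id, name, row_number() over (order by id) as rn from post " +
            ") ranked " +
            "where rn between ? and ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setInt(1, page * pageSize + 1);
            statement.setInt(2, (page + 1) * pageSize);
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    System.out.println(resultSet.getLong("id") + " - " + resultSet.getString("name"));
                }
            }
        }
    }
}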
Vlad
Nov 08, 2013 · Mr B Loid
Hi,
Splunk seems like a very handy tool, I'll have to investigate it.
Vlad
Nov 05, 2013 · Mr B Loid
Hi Steven,
My vision is that file-based string logging is like using text files instead of a database. Usually logging is not taken too seriously until you move into production, when you realize logging/monitoring are as important as any other aspect of your application.
Having so many NoSql solutions nowadays simplifies implementing a system of smart logging, and if more people get interested in such an idea, I plan on starting a new open-source project to address this need.
The project goals should be quite straightforward:
- simple API to submit log objects
- asynchronous batch job to save the log objects into a NoSql storage
- support for handling a log object and updating the "current system state"
- support for exposing the "current system state" via JMX
I see it as a library on top of which you start implementing your own smart-logging solution based on your current project requirements, rather than a full-featured logging application which cannot foresee the complex requirements of any project you'd want to integrate with. A minimal API sketch follows below.
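Here is that sketch of the goals above (all names illustrative, not an existing project):

import java.util.Map;

public interface SmartLogger {

    //simple API to submit structured log objects; an implementation would
    //batch them asynchronously into a NoSql storage, update the
    //"current system state" and expose it through JMX
    void submit(LogEvent event);

    //a log event is data, not a pre-formatted string
    final class LogEvent {
        private final String type;
        private final Map<String, Object> attributes;
        private final long timestamp = System.currentTimeMillis();

        public LogEvent(String type, Map<String, Object> attributes) {
            this.type = type;
            this.attributes = attributes;
        }

        public String getType() { return type; }
        public Map<String, Object> getAttributes() { return attributes; }
        public long getTimestamp() { return timestamp; }
    }
}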
Vlad
Oct 26, 2011 · Mr B Loid
Yes, volatile fixes the issue in this case, but synchronized would have done the same thing.
The TimerTask uses a separate thread to set the expired=true variable, so if you change the
TimerTask to:
Then the expired flag would become visible to both the Timer thread and the worker thread.
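A minimal sketch of the pattern (names are illustrative):

import java.util.Timer;
import java.util.TimerTask;

public class ExpirationExample {

    //volatile guarantees the write made by the Timer thread
    //is visible to the worker thread's read
    private volatile boolean expired;

    public void run() throws InterruptedException {
        new Timer().schedule(new TimerTask() {
            @Override
            public void run() {
                expired = true; //runs on the Timer thread
            }
        }, 1000);

        //the worker thread spins until it observes the Timer thread's write;
        //without volatile (or synchronized) it might never see it
        while (!expired) {
        }
        System.out.println("expired flag observed");
    }

    public static void main(String[] args) throws InterruptedException {
        new ExpirationExample().run();
    }
}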