Vlad Mihalcea

CEO of Hypersistence at vladmihalcea.com

RO

Joined Oct 2011

http://vladmihalcea.com/

About

I have not contributed to DZone since November 2015. For my latest posts, visit my blog: vladmihalcea.com. Author of High-Performance Java Persistence: https://leanpub.com/high-performance-java-persistence

Stats

Reputation: 1392
Pageviews: 1.1M
Articles: 11
Comments: 56

Articles

MySQL: Server vs. Client-Side Prepared Statements in Java
There are two ways of preparing a statement: on the server side or on the client side (see the configuration sketch after this entry).
September 22, 2015
· 9,202 Views · 3 Likes
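The entry above contrasts server-side and client-side statement preparation. With MySQL Connector/J, the switch between the two modes is typically the useServerPrepStmts connection property (often paired with cachePrepStmts). The snippet below is a minimal sketch, not code from the article: the connection URL, credentials, and the post table are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedStatementModes {

    public static void main(String[] args) throws Exception {
        // useServerPrepStmts=true asks Connector/J to PREPARE the statement on the
        // MySQL server; with the default setting the driver emulates preparation
        // on the client side and sends plain-text statements instead.
        String url = "jdbc:mysql://localhost:3306/test"
            + "?useServerPrepStmts=true&cachePrepStmts=true";

        try (Connection connection = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement statement = connection.prepareStatement(
                 "SELECT id, title FROM post WHERE id = ?")) {
            statement.setLong(1, 1L);
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    System.out.println(resultSet.getString("title"));
                }
            }
        }
    }
}
```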
The High-Performance Java Persistence Book
It’s been a year since I started the quest for a highly effective Data Knowledge Stack, and the Hibernate Master Class already contains over fifty articles. Now that I have covered many aspects of database transactions, JDBC, and Java Persistence, it’s time to assemble all the pieces into the High-Performance Java Persistence book.

An Agile publishing experience
Writing a book is a time-consuming and stressful process, and the last thing I needed was a very tight schedule. After reading Antonio Goncalves’s story, I chose the self-publishing route. In the end, I settled for Leanpub because it allows me to publish the book incrementally. This leads to better engagement with readers and lets me adapt the book content along the way.

The content
At its core, the book is about getting the most out of your persistence layer, and that can only happen when your application resonates with the database system. Because concurrency is inherent to database processing, transactions play a very important role in this regard. The first part covers basic performance-related database concepts such as locking, batching, and connection pooling. In the second part, I explain how an ORM can actually improve DML performance; this part includes the Hibernate Master Class findings. The third part is about advanced querying techniques with jOOQ. If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well.

Get involved
The Agile methodologies are not just for software development. Writing a book in a Lean style shortens the feedback period and lets readers get involved along the way. If you have a specific request or you are interested in this project, you can join my newsletter and follow my progress.

Buy it!
The book is 100% done, and you can check out the full Table of Contents on Leanpub. If you enjoyed this article, I bet you are going to love my book as well.

The ebook
The PDF, ePUB, and Kindle (MOBI) versions can be bought on Leanpub.

The print version
The print version is available on Amazon, Amazon.co.uk, Amazon.de, or Amazon.fr.

Presentations
If you are not convinced, check out the following two presentations: High-Performance JDBC from Voxxed Days Bucharest and High-Performance Hibernate from JavaZone.
June 26, 2015
· 6,298 Views · 1 Like
How to Batch DELETE Statements with Hibernate
Introduction In my , I explained the Hibernate configurations required for batching INSERT and UPDATE statements. This post will continue this topic with DELETE statements batching. Domain model entities We’ll start with the following entity model: The Post entity has a one-to-many association to a Comment and a one-to-one relationship with the PostDetails entity: @OneToMany(cascade = CascadeType.ALL, mappedBy = "post", orphanRemoval = true) private List comments = new ArrayList<>(); @OneToOne(cascade = CascadeType.ALL, mappedBy = "post", orphanRemoval = true, fetch = FetchType.LAZY) private PostDetails details; The up-coming tests will be run against the following data: doInTransaction(session -> { int batchSize = batchSize(); for(int i = 0; i < itemsCount(); i++) { int j = 0; Post post = new Post(String.format( "Post no. %d", i)); post.addComment(new Comment( String.format( "Post comment %d:%d", i, j++))); post.addComment(new Comment(String.format( "Post comment %d:%d", i, j++))); post.addDetails(new PostDetails()); session.persist(post); if(i % batchSize == 0 && i > 0) { session.flush(); session.clear(); } } }); Hibernate Configuration As , the following properties are required for batching INSERT and UPDATE statements: properties.put("hibernate.jdbc.batch_size", String.valueOf(batchSize())); properties.put("hibernate.order_inserts", "true"); properties.put("hibernate.order_updates", "true"); properties.put("hibernate.jdbc.batch_versioned_data", "true"); Next, we are going to check if DELETE statements are batched as well. JPA Cascade Delete Because is convenient, I’m going to prove that CascadeType.DELETE and JDBC batching don’t mix well. The following tests is going to: Select some Posts along with Comments and PostDetails Delete the Posts, while propagating the delete event to Comments and PostDetails as well @Test public void testCascadeDelete() { LOGGER.info("Test batch delete with cascade"); final AtomicReference startNanos = new AtomicReference<>(); addDeleteBatchingRows(); doInTransaction(session -> { List posts = session.createQuery( "select distinct p " + "from Post p " + "join fetch p.details d " + "join fetch p.comments c") .list(); startNanos.set(System.nanoTime()); for (Post post : posts) { session.delete(post); } }); LOGGER.info("{}.testCascadeDelete took {} millis", getClass().getSimpleName(), TimeUnit.NANOSECONDS.toMillis( System.nanoTime() - startNanos.get() )); } Running this test gives the following output: Query:{[delete from Comment where id=? and version=?][55,0]} {[delete from Comment where id=? and version=?][56,0]} Query:{[delete from PostDetails where id=?][3]} Query:{[delete from Post where id=? and version=?][3,0]} Query:{[delete from Comment where id=? and version=?][54,0]} {[delete from Comment where id=? and version=?][53,0]} Query:{[delete from PostDetails where id=?][2]} Query:{[delete from Post where id=? and version=?][2,0]} Query:{[delete from Comment where id=? and version=?][52,0]} {[delete from Comment where id=? and version=?][51,0]} Query:{[delete from PostDetails where id=?][1]} Query:{[delete from Post where id=? and version=?][1,0]} Only the Comment DELETE statements were batched, the other entities being deleted in separate database round-trips. 
The reason for this behaviour is given by the ActionQueue sorting implementation: if ( session.getFactory().getSettings().isOrderUpdatesEnabled() ) { // sort the updates by pk updates.sort(); } if ( session.getFactory().getSettings().isOrderInsertsEnabled() ) { insertions.sort(); } While INSERTS and UPDATES are covered, DELETE statements are not sorted at all. A JDBC batch can only be reused when all statements belong to the same database table. When an incoming statement targets a different database table, the current batch has to be released, so that the new batch matches the current statement database table: public Batch getBatch(BatchKey key) { if ( currentBatch != null ) { if ( currentBatch.getKey().equals( key ) ) { return currentBatch; } else { currentBatch.execute(); currentBatch.release(); } } currentBatch = batchBuilder().buildBatch(key, this); return currentBatch; } If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well. Orphan removal and manual flushing A work-around is to dissociate all Child entities while manually flushing the HibernateSession before advancing to a new Child association: @Test public void testOrphanRemoval() { LOGGER.info("Test batch delete with orphan removal"); final AtomicReference startNanos = new AtomicReference<>(); addDeleteBatchingRows(); doInTransaction(session -> { List posts = session.createQuery( "select distinct p " + "from Post p " + "join fetch p.details d " + "join fetch p.comments c") .list(); startNanos.set(System.nanoTime()); posts.forEach(Post::removeDetails); session.flush(); posts.forEach(post -> { for (Iterator commentIterator = post.getComments().iterator(); commentIterator.hasNext(); ) { Comment comment = commentIterator.next(); comment.post = null; commentIterator.remove(); } }); session.flush(); posts.forEach(session::delete); }); LOGGER.info("{}.testOrphanRemoval took {} millis", getClass().getSimpleName(), TimeUnit.NANOSECONDS.toMillis( System.nanoTime() - startNanos.get() )); } This time all DELETE statements are properly batched: Query:{[delete from PostDetails where id=?][2]} {[delete from PostDetails where id=?][3]} {[delete from PostDetails where id=?][1]} Query:{[delete from Comment where id=? and version=?][53,0]} {[delete from Comment where id=? and version=?][54,0]} {[delete from Comment where id=? and version=?][56,0]} {[delete from Comment where id=? and version=?][55,0]} {[delete from Comment where id=? and version=?][52,0]} {[delete from Comment where id=? and version=?][51, Query:{[delete from Post where id=? and version=?][2,0]} {[delete from Post where id=? and version=?][3,0]} {[delete from Post where id=? and version=?][1,0]} SQL Cascade Delete A better solution is to use SQL cascade deletion, instead of JPA entity state propagation mechanism. This way, we can also reduce the DML statements count. Because Hibernate Session acts as a , we must be extra cautious when mixing entity state transitions with database-side automatic actions, as the Persistence Context might not reflect the latest database changes. 
The Post entity one-to-manyComment association is marked with the Hibernate specific @OnDelete annotation, so that the auto-generated database schema includes the ON DELETE CASCADE directive: @OneToMany(cascade = { CascadeType.PERSIST, CascadeType.MERGE}, mappedBy = "post") @OnDelete(action = OnDeleteAction.CASCADE) private List comments = new ArrayList<>(); Generating the following DDL: alter table Comment add constraint FK_apirq8ka64iidc18f3k6x5tc5 foreign key (post_id) references Post on delete cascade The same is done with the PostDetails entity one-to-one Post association: @OneToOne(fetch = FetchType.LAZY) @JoinColumn(name = "id") @MapsId @OnDelete(action = OnDeleteAction.CASCADE) private Post post; And the associated DDL: alter table PostDetails add constraint FK_h14un5v94coafqonc6medfpv8 foreign key (id) references Post on delete cascade The CascadeType.ALL and orphanRemoval were replaced with CascadeType.PERSIST and CascadeType.MERGE, because we no longer want Hibernate to propagate the entity removal event. The test only deletes the Post entities. doInTransaction(session -> { List posts = session.createQuery( "select p from Post p") .list(); startNanos.set(System.nanoTime()); for (Post post : posts) { session.delete(post); } }); The DELETE statements are properly batched as there’s only one target table. Query:{[delete from Post where id=? and version=?][1,0]} {[delete from Post where id=? and version=?][2,0]} {[delete from Post where id=? and version=?][3,0]} If you enjoyed this article, I bet you are going to love my book as well. Conclusion If INSERT and UPDATE statements batching is just a matter of configuration, DELETE statements require some additional steps, which may increase the data access layer complexity. Code available on GitHub. If you have enjoyed reading my article and you’re looking forward to getting instant email notifications of my latest posts, consider .
April 11, 2015
· 21,394 Views · 1 Like
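For reference, here is a condensed recap of the four batching-related Hibernate properties the article enables, gathered into one helper. Only the property keys and their purpose come from the article; the helper class, the batch size value, and the way the Properties object is fed into your SessionFactory or EntityManagerFactory bootstrap are illustrative.

```java
import java.util.Properties;

public class BatchingSettings {

    public static Properties batchingProperties(int batchSize) {
        Properties properties = new Properties();
        // Group statements into JDBC batches of the given size
        properties.put("hibernate.jdbc.batch_size", String.valueOf(batchSize));
        // Sort INSERTs and UPDATEs so consecutive statements target the same
        // table and can share a batch
        properties.put("hibernate.order_inserts", "true");
        properties.put("hibernate.order_updates", "true");
        // Allow batching for versioned (optimistically locked) entities
        properties.put("hibernate.jdbc.batch_versioned_data", "true");
        // Note: as the article shows, DELETE statements are not ordered,
        // so cascaded deletes can still break batching.
        return properties;
    }
}
```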
A Beginner's Guide to JPA and Hibernate Cascade Types
Introduction JPA translates entity state transitions to database DML statements. Because it’s common to operate on entity graphs, JPA allows us to propagate entity state changes from Parents to Child entities. This behavior is configured through the CascadeType mappings. JPA vs Hibernate Cascade Types Hibernate supports all JPA Cascade Types and some additional legacy cascading styles. The following table draws an association between JPA Cascade Types and their Hibernate native API equivalent: JPA EntityManager action JPA CascadeType Hibernate native Session action Hibernate native CascadeType Event Listener detach(entity) DETACH evict(entity) DETACH or EVICT Default Evict Event Listener merge(entity) MERGE merge(entity) MERGE Default Merge Event Listener persist(entity) PERSIST persist(entity) PERSIST Default Persist Event Listener refresh(entity) REFRESH refresh(entity) REFRESH Default Refresh Event Listener remove(entity) REMOVE delete(entity) REMOVE orDELETE Default Delete Event Listener saveOrUpdate(entity) SAVE_UPDATE Default Save Or Update Event Listener replicate(entity, replicationMode) REPLICATE Default Replicate Event Listener lock(entity, lockModeType) buildLockRequest(entity, lockOptions) LOCK Default Lock Event Listener All the above EntityManager methods ALL All the above Hibernate Session methods ALL From this table we can conclude that: There’s no difference between calling persist, merge or refresh on the JPAEntityManager or the Hibernate Session. The JPA remove and detach calls are delegated to Hibernate delete and evict native operations. Only Hibernate supports replicate and saveOrUpdate. While replicate is useful for some very specific scenarios (when the exact entity state needs to be mirrored between two distinct DataSources), the persist and merge combo is always a better alternative than the native saveOrUpdate operation. As a rule of thumb, you should always use persist for TRANSIENT entities and merge for DETACHED ones.The saveOrUpdate shortcomings (when passing a detached entity snapshot to aSession already managing this entity) had lead to the merge operation predecessor: the now extinct saveOrUpdateCopy operation. The JPA lock method shares the same behavior with Hibernate lock request method. The JPA CascadeType.ALL doesn’t only apply to EntityManager state change operations, but to all Hibernate CascadeTypes as well. So if you mapped your associations with CascadeType.ALL, you can still cascade Hibernate specific events. For example, you can cascade the JPA lock operation (although it behaves as reattaching, instead of an actual lock request propagation), even if JPA doesn’t define a LOCK CascadeType. Cascading best practices Cascading only makes sense only for Parent – Child associations (the Parent entity state transition being cascaded to its Child entities). Cascading from Child to Parent is not very useful and usually, it’s a mapping code smell. Next, I’m going to take analyse the cascading behaviour of all JPA Parent – Childassociations. 
One-To-One The most common One-To-One bidirectional association looks like this: @Entity public class Post { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String name; @OneToOne(mappedBy = "post", cascade = CascadeType.ALL, orphanRemoval = true) private PostDetails details; public Long getId() { return id; } public PostDetails getDetails() { return details; } public String getName() { return name; } public void setName(String name) { this.name = name; } public void addDetails(PostDetails details) { this.details = details; details.setPost(this); } public void removeDetails() { if (details != null) { details.setPost(null); } this.details = null; } } @Entity public class PostDetails { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column(name = "created_on") @Temporal(TemporalType.TIMESTAMP) private Date createdOn = new Date(); private boolean visible; @OneToOne @PrimaryKeyJoinColumn private Post post; public Long getId() { return id; } public void setVisible(boolean visible) { this.visible = visible; } public void setPost(Post post) { this.post = post; } } The Post entity plays the Parent role and the PostDetails is the Child. The bidirectional associations should always be updated on both sides, therefore the Parent side should contain the addChild andremoveChild combo. These methods ensure we always synchronize both sides of the association, to avoid Object or Relational data corruption issues. In this particular case, the CascadeType.ALL and orphan removal make sense because the PostDetails life-cycle is bound to that of its Post Parent entity. Cascading the one-to-one persist operation The CascadeType.PERSIST comes along with the CascadeType.ALL configuration, so we only have to persist the Post entity, and the associated PostDetails entity is persisted as well: Post post = new Post(); post.setName("Hibernate Master Class"); PostDetails details = new PostDetails(); post.addDetails(details); session.persist(post); Generating the following output: INSERT INTO post(id, NAME) VALUES (DEFAULT, Hibernate Master Class'') insert into PostDetails (id, created_on, visible) values (default, '2015-03-03 10:17:19.14', false) Cascading the one-to-one merge operation The CascadeType.MERGE is inherited from the CascadeType.ALL setting, so we only have to merge the Post entity and the associated PostDetails is merged as well: Post post = newPost(); post.setName("Hibernate Master Class Training Material"); post.getDetails().setVisible(true); doInTransaction(session -> { session.merge(post); }); The merge operation generates the following output: SELECT onetooneca0_.id AS id1_3_1_, onetooneca0_.NAME AS name2_3_1_, onetooneca1_.id AS id1_4_0_, onetooneca1_.created_on AS created_2_4_0_, onetooneca1_.visible AS visible3_4_0_ FROM post onetooneca0_ LEFT OUTER JOIN postdetails onetooneca1_ ON onetooneca0_.id = onetooneca1_.id WHERE onetooneca0_.id = 1 UPDATE postdetails SET created_on = '2015-03-03 10:20:53.874', visible = true WHERE id = 1 UPDATE post SET NAME = 'Hibernate Master Class Training Material' WHERE id = 1 Cascading the one-to-one delete operation The CascadeType.REMOVE is also inherited from the CascadeType.ALL configuration, so the Post entity deletion triggers a PostDetails entity removal too: Post post = newPost(); doInTransaction(session -> { session.delete(post); }); Generating the following output: delete from PostDetails where id = 1 delete from Post where id = 1 The one-to-one delete orphan cascading operation If a Child entity is 
dissociated from its Parent, the Child Foreign Key is set to NULL. If we want to have the Child row deleted as well, we have to use the orphan removalsupport. doInTransaction(session -> { Post post = (Post) session.get(Post.class, 1L); post.removeDetails(); }); The orphan removal generates this output: SELECT onetooneca0_.id AS id1_3_0_, onetooneca0_.NAME AS name2_3_0_, onetooneca1_.id AS id1_4_1_, onetooneca1_.created_on AS created_2_4_1_, onetooneca1_.visible AS visible3_4_1_ FROM post onetooneca0_ LEFT OUTER JOIN postdetails onetooneca1_ ON onetooneca0_.id = onetooneca1_.id WHERE onetooneca0_.id = 1 delete from PostDetails where id = 1 Unidirectional one-to-one association Most often, the Parent entity is the inverse side (e.g. mappedBy), the Child controling the association through its Foreign Key. But the cascade is not limited to bidirectional associations, we can also use it for unidirectional relationships: @Entity public class Commit { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String comment; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name = "Branch_Merge_Commit", joinColumns = @JoinColumn( name = "commit_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn( name = "branch_merge_id", referencedColumnName = "id") ) private BranchMerge branchMerge; public Commit() { } public Commit(String comment) { this.comment = comment; } public Long getId() { return id; } public void addBranchMerge( String fromBranch, String toBranch) { this.branchMerge = new BranchMerge( fromBranch, toBranch); } public void removeBranchMerge() { this.branchMerge = null; } } @Entity public class BranchMerge { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String fromBranch; private String toBranch; public BranchMerge() { } public BranchMerge( String fromBranch, String toBranch) { this.fromBranch = fromBranch; this.toBranch = toBranch; } public Long getId() { return id; } } Cascading consists in propagating the Parent entity state transition to one or more Child entities, and it can be used for both unidirectional and bidirectional associations. One-To-Many The most common Parent – Child association consists of a one-to-many and a many-to-one relationship, where the cascade being useful for the one-to-many side only: @Entity public class Post { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String name; @OneToMany(cascade = CascadeType.ALL, mappedBy = "post", orphanRemoval = true) private List comments = new ArrayList<>(); public void setName(String name) { this.name = name; } public List getComments() { return comments; } public void addComment(Comment comment) { comments.add(comment); comment.setPost(this); } public void removeComment(Comment comment) { comment.setPost(null); this.comments.remove(comment); } } @Entity public class Comment { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @ManyToOne private Post post; private String review; public void setPost(Post post) { this.post = post; } public String getReview() { return review; } public void setReview(String review) { this.review = review; } } Like in the one-to-one example, the CascadeType.ALL and orphan removal are suitable because the Comment life-cycle is bound to that of its Post Parent entity. 
Cascading the one-to-many persist operation We only have to persist the Post entity and all the associated Comment entities are persisted as well: Post post = new Post(); post.setName("Hibernate Master Class"); Comment comment1 = new Comment(); comment1.setReview("Good post!"); Comment comment2 = new Comment(); comment2.setReview("Nice post!"); post.addComment(comment1); post.addComment(comment2); session.persist(post); The persist operation generates the following output: insert into Post (id, name) values (default, 'Hibernate Master Class') insert into Comment (id, post_id, review) values (default, 1, 'Good post!') insert into Comment (id, post_id, review) values (default, 1, 'Nice post!') Cascading the one-to-many merge operation Merging the Post entity is going to merge all Comment entities as well: Post post = newPost(); post.setName("Hibernate Master Class Training Material"); post.getComments() .stream() .filter(comment -> comment.getReview().toLowerCase() .contains("nice")) .findAny() .ifPresent(comment -> comment.setReview("Keep up the good work!") ); doInTransaction(session -> { session.merge(post); }); Generating the following output: SELECT onetomanyc0_.id AS id1_1_1_, onetomanyc0_.NAME AS name2_1_1_, comments1_.post_id AS post_id3_1_3_, comments1_.id AS id1_0_3_, comments1_.id AS id1_0_0_, comments1_.post_id AS post_id3_0_0_, comments1_.review AS review2_0_0_ FROM post onetomanyc0_ LEFT OUTER JOIN comment comments1_ ON onetomanyc0_.id = comments1_.post_id WHERE onetomanyc0_.id = 1 update Post set name = 'Hibernate Master Class Training Material' where id = 1 update Comment set post_id = 1, review='Keep up the good work!' where id = 2 Cascading the one-to-many delete operation When the Post entity is deleted, the associated Comment entities are deleted as well: Post post = newPost(); doInTransaction(session -> { session.delete(post); }); Generating the following output: delete from Comment where id = 1 delete from Comment where id = 2 delete from Post where id = 1 The one-to-many delete orphan cascading operation The orphan-removal allows us to remove the Child entity whenever it’s no longer referenced by its Parent: newPost(); doInTransaction(session -> { Post post = (Post) session.createQuery( "select p " + "from Post p " + "join fetch p.comments " + "where p.id = :id") .setParameter("id", 1L) .uniqueResult(); post.removeComment(post.getComments().get(0)); }); The Comment is deleted, as we can see in the following output: SELECT onetomanyc0_.id AS id1_1_0_, comments1_.id AS id1_0_1_, onetomanyc0_.NAME AS name2_1_0_, comments1_.post_id AS post_id3_0_1_, comments1_.review AS review2_0_1_, comments1_.post_id AS post_id3_1_0__, comments1_.id AS id1_0_0__ FROM post onetomanyc0_ INNER JOIN comment comments1_ ON onetomanyc0_.id = comments1_.post_id WHERE onetomanyc0_.id = 1 delete from Comment where id = 1 If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well. Many-To-Many The many-to-many relationship is tricky because each side of this association plays both the Parent and the Child role. Still, we can identify one side from where we’d like to propagate the entity state changes. 
We shouldn’t default to CascadeType.ALL, because the CascadeTpe.REMOVE might end-up deleting more than we’re expecting (as you’ll soon find out): @Entity public class Author { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Column(name = "full_name", nullable = false) private String fullName; @ManyToMany(mappedBy = "authors", cascade = {CascadeType.PERSIST, CascadeType.MERGE}) private List books = new ArrayList<>(); private Author() {} public Author(String fullName) { this.fullName = fullName; } public Long getId() { return id; } public void addBook(Book book) { books.add(book); book.authors.add(this); } public void removeBook(Book book) { books.remove(book); book.authors.remove(this); } public void remove() { for(Book book : new ArrayList<>(books)) { removeBook(book); } } } @Entity public class Book { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Column(name = "title", nullable = false) private String title; @ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE}) @JoinTable(name = "Book_Author", joinColumns = { @JoinColumn( name = "book_id", referencedColumnName = "id" ) }, inverseJoinColumns = { @JoinColumn( name = "author_id", referencedColumnName = "id" ) } ) private List authors = new ArrayList<>(); private Book() {} public Book(String title) { this.title = title; } } Cascading the many-to-many persist operation Persisting the Author entities will persist the Books as well: Author _John_Smith = new Author("John Smith"); Author _Michelle_Diangello = new Author("Michelle Diangello"); Author _Mark_Armstrong = new Author("Mark Armstrong"); Book _Day_Dreaming = new Book("Day Dreaming"); Book _Day_Dreaming_2nd = new Book("Day Dreaming, Second Edition"); _John_Smith.addBook(_Day_Dreaming); _Michelle_Diangello.addBook(_Day_Dreaming); _John_Smith.addBook(_Day_Dreaming_2nd); _Michelle_Diangello.addBook(_Day_Dreaming_2nd); _Mark_Armstrong.addBook(_Day_Dreaming_2nd); session.persist(_John_Smith); session.persist(_Michelle_Diangello); session.persist(_Mark_Armstrong); The Book and the Book_Author rows are inserted along with the Authors: insert into Author (id, full_name) values (default, 'John Smith') insert into Book (id, title) values (default, 'Day Dreaming') insert into Author (id, full_name) values (default, 'Michelle Diangello') insert into Book (id, title) values (default, 'Day Dreaming, Second Edition') insert into Author (id, full_name) values (default, 'Mark Armstrong') insert into Book_Author (book_id, author_id) values (1, 1) insert into Book_Author (book_id, author_id) values (1, 2) insert into Book_Author (book_id, author_id) values (2, 1) insert into Book_Author (book_id, author_id) values (2, 2) insert into Book_Author (book_id, author_id) values (3, 1) Dissociating one side of the many-to-many association To delete an Author, we need to dissociate all Book_Author relations belonging to the removable entity: doInTransaction(session -> { Author _Mark_Armstrong = getByName(session, "Mark Armstrong"); _Mark_Armstrong.remove(); session.delete(_Mark_Armstrong); }); This use case generates the following output: SELECT manytomany0_.id AS id1_0_0_, manytomany2_.id AS id1_1_1_, manytomany0_.full_name AS full_nam2_0_0_, manytomany2_.title AS title2_1_1_, books1_.author_id AS author_i2_0_0__, books1_.book_id AS book_id1_2_0__ FROM author manytomany0_ INNER JOIN book_author books1_ ON manytomany0_.id = books1_.author_id INNER JOIN book manytomany2_ ON books1_.book_id = manytomany2_.id WHERE manytomany0_.full_name = 'Mark Armstrong' SELECT 
books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 2 delete from Book_Author where book_id = 2 insert into Book_Author (book_id, author_id) values (2, 1) insert into Book_Author (book_id, author_id) values (2, 2) delete from Author where id = 3 The many-to-many association generates way too many redundant SQL statements and often, they are very difficult to tune. Next, I’m going to demonstrate the many-to-many CascadeType.REMOVE hidden dangers. The many-to-many CascadeType.REMOVE gotchas The many-to-many CascadeType.ALL is another code smell, I often bump into while reviewing code. The CascadeType.REMOVE is automatically inherited when usingCascadeType.ALL, but the entity removal is not only applied to the link table, but to the other side of the association as well. Let’s change the Author entity books many-to-many association to use theCascadeType.ALL instead: @ManyToMany(mappedBy = "authors", cascade = CascadeType.ALL) private List books = new ArrayList<>(); When deleting one Author: doInTransaction(session -> { Author _Mark_Armstrong = getByName(session, "Mark Armstrong"); session.delete(_Mark_Armstrong); Author _John_Smith = getByName(session, "John Smith"); assertEquals(1, _John_Smith.books.size()); }); All books belonging to the deleted Author are getting deleted, even if other Authorswe’re still associated to the deleted Books: SELECT manytomany0_.id AS id1_0_, manytomany0_.full_name AS full_nam2_0_ FROM author manytomany0_ WHERE manytomany0_.full_name = 'Mark Armstrong' SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 3 delete from Book_Author where book_id=2 delete from Book where id=2 delete from Author where id=3 Most often, this behavior doesn’t match the business logic expectations, only being discovered upon the first entity removal. We can push this issue even further, if we set the CascadeType.ALL to the Book entity side as well: @ManyToMany(cascade = CascadeType.ALL) @JoinTable(name = "Book_Author", joinColumns = { @JoinColumn( name = "book_id", referencedColumnName = "id" ) }, inverseJoinColumns = { @JoinColumn( name = "author_id", referencedColumnName = "id" ) } ) This time, not only the Books are being deleted, but Authors are deleted as well: doInTransaction(session -> { Author _Mark_Armstrong = getByName(session, "Mark Armstrong"); session.delete(_Mark_Armstrong); Author _John_Smith = getByName(session, "John Smith"); assertNull(_John_Smith); }); The Author removal triggers the deletion of all associated Books, which further triggers the removal of all associated Authors. This is a very dangerous operation, resulting in a massive entity deletion that’s rarely the expected behavior. If you enjoyed this article, I bet you are going to love my book as well. 
SELECT manytomany0_.id AS id1_0_, manytomany0_.full_name AS full_nam2_0_ FROM author manytomany0_ WHERE manytomany0_.full_name = 'Mark Armstrong' SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 3 SELECT authors0_.book_id AS book_id1_1_0_, authors0_.author_id AS author_i2_2_0_, manytomany1_.id AS id1_0_1_, manytomany1_.full_name AS full_nam2_0_1_ FROM book_author authors0_ INNER JOIN author manytomany1_ ON authors0_.author_id = manytomany1_.id WHERE authors0_.book_id = 2 SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 1 SELECT authors0_.book_id AS book_id1_1_0_, authors0_.author_id AS author_i2_2_0_, manytomany1_.id AS id1_0_1_, manytomany1_.full_name AS full_nam2_0_1_ FROM book_author authors0_ INNER JOIN author manytomany1_ ON authors0_.author_id = manytomany1_.id WHERE authors0_.book_id = 1 SELECT books0_.author_id AS author_i2_0_0_, books0_.book_id AS book_id1_2_0_, manytomany1_.id AS id1_1_1_, manytomany1_.title AS title2_1_1_ FROM book_author books0_ INNER JOIN book manytomany1_ ON books0_.book_id = manytomany1_.id WHERE books0_.author_id = 2 delete from Book_Author where book_id=2 delete from Book_Author where book_id=1 delete from Author where id=2 delete from Book where id=1 delete from Author where id=1 delete from Book where id=2 delete from Author where id=3 This use case is wrong in so many ways. There are a plethora of unnecessary SELECT statements and eventually we end up deleting all Authors and all their Books. That’s why CascadeType.ALL should raise your eyebrow, whenever you spot it on a many-to-many association. When it comes to Hibernate mappings, you should always strive for simplicity. TheHibernate documentation confirms this assumption as well: Practical test cases for real many-to-many associations are rare. Most of the time you need additional information stored in the “link table”. In this case, it is much better to use two one-to-many associations to an intermediate link class. In fact, most associations are one-to-many and many-to-one. For this reason, you should proceed cautiously when using any other association style. Conclusion Cascading is a handy ORM feature, but it’s not free of issues. You should only cascade from Parent entities to Children and not the other way around. You should always use only the casacde operations that are demanded by your business logic requirements, and not turn the CascadeType.ALL into a default Parent-Child association entity state propagation configuration. Code available on GitHub.
March 13, 2015
· 95,746 Views · 8 Likes
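The Hibernate documentation quote in the article above recommends replacing a real many-to-many association with two one-to-many associations to an intermediate link entity. The sketch below illustrates that alternative; the BookAuthor link entity and its mappings are hypothetical and not part of the article's code.

```java
import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical link entity replacing the Book_Author @ManyToMany join table.
// Each side cascades only to its own link rows, never to the other side,
// so removing an Author can no longer delete shared Books.
@Entity
public class BookAuthor {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Book book;

    @ManyToOne(fetch = FetchType.LAZY)
    private Author author;

    // extra columns (e.g. a royalty percentage) fit naturally here
}

@Entity
class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String fullName;

    @OneToMany(mappedBy = "author",
               cascade = CascadeType.ALL, orphanRemoval = true)
    private List<BookAuthor> bookAuthors = new ArrayList<>();
}

@Entity
class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String title;

    @OneToMany(mappedBy = "book",
               cascade = CascadeType.ALL, orphanRemoval = true)
    private List<BookAuthor> bookAuthors = new ArrayList<>();
}
```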
The Downside of Version-less Optimistic Locking
Introduction In my previous post I demonstrated how you can scale optimistic locking through write-concerns splitting. Version-less optimistic locking is one lesser-known Hibernate feature. In this post I’ll explain both the good and the bad parts of this approach. Version-less optimistic locking Optimistic locking is commonly associated with a logical or physical clocking sequence, for both performance and consistency reasons. The clocking sequence points to an absolute entity state version for all entity state transitions. To support legacy database schema optimistic locking, Hibernate added a version-less concurrency control mechanism. To enable this feature you have to configure your entities with the @OptimisticLocking annotation that takes the following parameters: Optimistic Locking Type Description ALL All entity properties are going to be used to verify the entity version DIRTY Only current dirty properties are going to be used to verify the entity version NONE Disables optimistic locking VERSION Surrogate version column optimistic locking For version-less optimistic locking, you need to choose ALL or DIRTY. Use case We are going to rerun the Product update use case I covered in my previous optimistic locking scaling article. The Product entity looks like this: First thing to notice is the absence of a surrogate version column. For concurrency control, we’ll use DIRTY properties optimistic locking: @Entity(name = "product") @Table(name = "product") @OptimisticLocking(type = OptimisticLockType.DIRTY) @DynamicUpdate public class Product { //code omitted for brevity } By default, Hibernate includes all table columns in every entity update, therefore reusing cached prepared statements. For dirty properties optimistic locking, the changed columns are included in the update WHERE clause and that’s the reason for using the @DynamicUpdate annotation. This entity is going to be changed by three concurrent users (e.g. 
Alice, Bob and Vlad), each one updating a distinct entity properties subset, as you can see in the following The following sequence diagram: The SQL DML statement sequence goes like this: #create tables Query:{[create table product (id bigint not null, description varchar(255) not null, likes integer not null, name varchar(255) not null, price numeric(19,2) not null, quantity bigint not null, primary key (id))][]} Query:{[alter table product add constraint UK_jmivyxk9rmgysrmsqw15lqr5b unique (name)][]} #insert product Query:{[insert into product (description, likes, name, price, quantity, id) values (?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,1]} #Alice selects the product Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from product optimistic0_ where optimistic0_.id=?][1]} #Bob selects the product Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from product optimistic0_ where optimistic0_.id=?][1]} #Vlad selects the product Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from product optimistic0_ where optimistic0_.id=?][1]} #Alice updates the product Query:{[update product set quantity=? where id=? and quantity=?][6,1,7]} #Bob updates the product Query:{[update product set likes=? where id=? and likes=?][1,1,0]} #Vlad updates the product Query:{[update product set description=? where id=? and description=?][Plasma HDTV,1,Plasma TV]} Each UPDATE sets the latest changes and expects the current database snapshot to be exactly as it was at entity load time. As simple and straightforward as it may look, the version-less optimistic locking strategy suffers from a very inconvenient shortcoming. The detached entities anomaly The version-less optimistic locking is feasible as long as you don’t close the Persistence Context. All entity changes must happen inside an open Persistence Context, Hibernate translating [a href="http://vladmihalcea.com/2014/12/08/the-downside-of-version-less-optimistic-locking/2014/07/30/a-beginners-guide-to-jpahibernate-entity-state-transitions/"]entity state transitions into database DML statements. Detached entities changes can be only persisted if the entities rebecome managed in a new Hibernate Session, and for this we have two options: entity merging (using Session#merge(entity)) entity reattaching (using Session#update(entity)) Both operations require a database SELECT to retrieve the latest database snapshot, so changes will be applied against the latest entity version. Unfortunately, this can also lead to lost updates, as we can see in the following sequence diagram: Once the original Session is gone, we have no way of including the original entity state in the UPDATE WHERE clause. So newer changes might be overwritten by older ones and this is exactly what we wanted to avoid in the very first place. Let’s replicate this issue for both merging and reattaching. Merging The merge operation consists in loading and attaching a new entity object from the database and update it with the current given entity snapshot. 
Merging is supported by JPA too and it’s tolerant to already managed Persistence Context entity entries. If there’s an already managed entity then the select is not going to be issued, as Hibernate guarantees session-level repeatable reads. #Alice inserts a Product and her Session is closed Query:{[insert into Product (description, likes, name, price, quantity, id) values (?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,1]} #Bob selects the Product and changes the price to 21.22 Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from Product optimistic0_ where optimistic0_.id=?][1]} OptimisticLockingVersionlessTest - Updating product price to 21.22 Query:{[update Product set price=? where id=? and price=?][21.22,1,199.99]} #Alice changes the Product price to 1 and tries to merge the detached Product entity c.v.h.m.l.c.OptimisticLockingVersionlessTest - Merging product, price to be saved is 1 #A fresh copy is going to be fetched from the database Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from Product optimistic0_ where optimistic0_.id=?][1]} #Alice overwrites Bob therefore loosing an update Query:{[update Product set price=? where id=? and price=?][1,1,21.22]} If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well. Reattaching Reattaching is a Hibernate specific operation. As opposed to merging, the given detached entity must become managed in another Session. If there’s an already loaded entity, Hibernate will throw an exception. This operation also requires an SQL SELECT for loading the current database entity snapshot. The detached entity state will be copied on the freshly loaded entity snapshot and the dirty checking mechanism will trigger the actual DML update: #Alice inserts a Product and her Session is closed Query:{[insert into Product (description, likes, name, price, quantity, id) values (?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,1]} #Bob selects the Product and changes the price to 21.22 Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from Product optimistic0_ where optimistic0_.id=?][1]} OptimisticLockingVersionlessTest - Updating product price to 21.22 Query:{[update Product set price=? where id=? and price=?][21.22,1,199.99]} #Alice changes the Product price to 1 and tries to merge the detached Product entity c.v.h.m.l.c.OptimisticLockingVersionlessTest - Reattaching product, price to be saved is 10 #A fresh copy is going to be fetched from the database Query:{[select optimistic_.id, optimistic_.description as descript2_0_, optimistic_.likes as likes3_0_, optimistic_.name as name4_0_, optimistic_.price as price5_0_, optimistic_.quantity as quantity6_0_ from Product optimistic_ where optimistic_.id=?][1]} #Alice overwrites Bob therefore loosing an update Query:{[update Product set price=? where id=?][10,1]} If you enjoyed this article, I bet you are going to love my book as well. 
Conclusion The version-less optimistic locking is a viable alternative as long as you can stick to a non-detached entities policy. Combined with extended persistence contexts, this strategy can boost writing performance even for a legacy database schema. Code available on GitHub.
January 2, 2015
· 13,099 Views · 1 Like
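The Product mapping in the article above is abbreviated ("code omitted for brevity"). The sketch below is a plausible reconstruction based on the columns visible in the DDL and SQL the article logs (description, likes, name, price, quantity); the exact field types, the unique constraint on name, and the assigned identifier are assumptions inferred from that output.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import java.math.BigDecimal;

import org.hibernate.annotations.DynamicUpdate;
import org.hibernate.annotations.OptimisticLockType;
import org.hibernate.annotations.OptimisticLocking;

// Reconstructed from the DDL logged in the article. There is no @Version
// column: concurrency control relies solely on the dirty properties.
@Entity(name = "product")
@Table(name = "product")
@OptimisticLocking(type = OptimisticLockType.DIRTY)
@DynamicUpdate
public class Product {

    @Id
    private Long id;

    @Column(unique = true, nullable = false)
    private String name;

    @Column(nullable = false)
    private String description;

    @Column(nullable = false)
    private BigDecimal price;

    @Column(nullable = false)
    private long quantity;

    @Column(nullable = false)
    private int likes;

    // getters and setters omitted for brevity
}
```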
Hibernate Collections: Optimistic Locking
Introduction Hibernate provides an optimistic locking mechanism to prevent lost updates even for long-conversations. In conjunction with an entity storage, spanning over multiple user requests (extended persistence context or detached entities) Hibernate can guarantee application-level repeatable-reads. The dirty checking mechanism detects entity state changes and increments the entity version. While basic property changes are always taken into consideration, Hibernate collections are more subtle in this regard. Owned vs. Inverse Collections In relational databases, two records are associated through a foreign key reference. In this relationship, the referenced record is the parent while the referencing row (the foreign key side) is the child. A non-null foreign key may only reference an existing parent record. In the Object-oriented space this association can be represented in both directions. We can have a many-to-one reference from a child to parent and the parent can also have a one-to-many children collection. Because both sides could potentially control the database foreign key state, we must ensure that only one side is the owner of this association. Only the owningside state changes are propagated to the database. The non-owning side has been traditionally referred as the inverse side. Next I’ll describe the most common ways of modelling this association. The Unidirectional Parent-Owning-Side-Child Association Mapping Only the parent side has a @OneToMany non-inverse children collection. The child entity doesn’t reference the parent entity at all. @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) private List comments = new ArrayList (); ... } The Unidirectional Parent-Owning-Side-Child Component Association Mapping Mapping The child side doesn’t always have to be an entity and we might model it as acomponent type instead. An Embeddable object (component type) may contain both basic types and association mappings but it can never contain an @Id. The Embeddable object is persisted/removed along with its owning entity. The parent has an @ElementCollection children association. The child entity may only reference the parent through the non-queryable Hibernate specific @Parentannotation. @Entity(name = "post") public class Post { ... @ElementCollection @JoinTable(name = "post_comments", joinColumns = @JoinColumn(name = "post_id")) @OrderColumn(name = "comment_index") private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Embeddable public class Comment { ... @Parent private Post post; ... } The Bidirectional Parent-Owning-Side-Child Association Mapping The parent is the owning side so it has a @OneToMany non-inverse (without a mappedBy directive) children collection. The child entity references the parent entity through a @ManyToOne association that’s neither insertable nor updatable: @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Entity(name = "comment") public class Comment ... @ManyToOne @JoinColumn(name = "post_id", insertable = false, updatable = false) private Post post; ... 
} The Bidirectional Parent-Owning-Side-Child Association Mapping The child entity references the parent entity through a @ManyToOne association, and the parent has a mappedBy @OneToMany children collection. The parent side is the inverse side so only the @ManyToOne state changes are propagated to the database. Even if there’s only one owning side, it’s always a good practice to keep both sides in sync by using the add/removeChild() methods. @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, mappedBy = "post") private List comments = new ArrayList (); ... public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } } @Entity(name = "comment") public class Comment { ... @ManyToOne private Post post; ... } The Unidirectional Parent-Owning-Side-Child Association Mapping The child entity references the parent through a @ManyToOne association. The parent doesn’t have a @OneToMany children collection so the child entity becomes the owning side. This association mapping resembles the relational data foreign key linkage. @Entity(name = "comment") public class Comment { ... @ManyToOne private Post post; ... } Collection Versioning The 3.4.2 section of the JPA 2.1 specification defines optimistic locking as: The version attribute is updated by the persistence provider runtime when the object is written to the database. All non-relationship fields and proper ties and all relationships owned by the entity are included in version checks[35]. [35] This includes owned relationships maintained in join tables N.B. Only owning-side children collection can update the parent version. Testing Time Let’s test how the parent-child association type affects the parent versioning. Because we are interested in the children collection dirty checking, theunidirectional child-owning-side-parent association is going to be skipped, as in that case the parent doesn’t contain a children collection. 
Test Case The following test case is going to be used for all collection type use cases: protected void simulateConcurrentTransactions(final boolean shouldIncrementParentVersion) { final ExecutorService executorService = Executors.newSingleThreadExecutor(); doInTransaction(new TransactionCallable () { @Override public Void execute(Session session) { try { P post = postClass.newInstance(); post.setId(1L); post.setName("Hibernate training"); session.persist(post); return null; } catch (Exception e) { throw new IllegalArgumentException(e); } } }); doInTransaction(new TransactionCallable () { @Override public Void execute(final Session session) { final P post = (P) session.get(postClass, 1L); try { executorService.submit(new Callable () { @Override public Void call() throws Exception { return doInTransaction(new TransactionCallable () { @Override public Void execute(Session _session) { try { P otherThreadPost = (P) _session.get(postClass, 1L); int loadTimeVersion = otherThreadPost.getVersion(); assertNotSame(post, otherThreadPost); assertEquals(0L, otherThreadPost.getVersion()); C comment = commentClass.newInstance(); comment.setReview("Good post!"); otherThreadPost.addComment(comment); _session.flush(); if (shouldIncrementParentVersion) { assertEquals(otherThreadPost.getVersion(), loadTimeVersion + 1); } else { assertEquals(otherThreadPost.getVersion(), loadTimeVersion); } return null; } catch (Exception e) { throw new IllegalArgumentException(e); } } }); } }).get(); } catch (Exception e) { throw new IllegalArgumentException(e); } post.setName("Hibernate Master Class"); session.flush(); return null; } }); } The Unidirectional Parent-Owning-Side-Child Association Testing #create tables Query:{[create table comment (idbigint generated by default as identity (start with 1), review varchar(255), primary key (id))][]} Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null, comment_index integer not null, primary key (post_id, comment_index))][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} #insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[update post setname=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comment (post_id, comment_index, comments_id) values (?, ?, ?)][1,0,1]} #optimistic locking exception in primary transaction Query:{[update post setname=?, version=? where id=? 
and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnUnidirectionalCollectionTest$Post#1] The Unidirectional Parent-Owning-Side-Child Component Association Testing #create tables Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comments (post_id bigint not null, review varchar(255), comment_index integer not null, primary key (post_id, comment_index))][]} Query:{[alter table post_comments add constraint FK_gh9apqeduab8cs0ohcq1dgukp foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_0_0_, entityopti0_.name as name2_0_0_, entityopti0_.version as version3_0_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[selectcomments0_.post_id as post_id1_0_0_, comments0_.review as review2_1_0_, comments0_.comment_index as comment_3_0_ from post_comments comments0_ where comments0_.post_id=?][1]} #insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[update post setname=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comments (post_id, comment_index, review) values (?, ?, ?)][1,0,Good post!]} #optimistic locking exception in primary transaction Query:{[update post setname=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnComponentCollectionTest$Post#1] The Bidirectional Parent-Owning-Side-Child Association Testing #create tables Query:{[create table comment (idbigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (idbigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]} Query:{[alter table post_comment add constraint UK_se9l149iyyao6va95afioxsrl unique (comments_id)][]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]} #insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]} #select post in secondary transaction Query:{[selectentityopti0_.idas id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[selectcomments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.idas id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.idas id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ 
The Bidirectional Child-Owning-Side-Parent Association Testing

#create tables
Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]}
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]}
#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}
#select post in secondary transaction
Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}
#insert comment in secondary transaction
#post version is not incremented in secondary transaction
Query:{[insert into comment (id, post_id, review) values (default, ?, ?)][1,Good post!]}
Query:{[select count(id) from comment where post_id =?][1]}
#update works in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}

If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well.

Overruling Default Collection Versioning

If the default owning-side collection versioning is not suitable for your use case, you can always overrule it with the Hibernate @OptimisticLock annotation. Let's overrule the default parent version update mechanism for the bidirectional parent-owning-side-child association:

@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @OptimisticLock(excluded = true)
    private List<Comment> comments = new ArrayList<>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Entity(name = "comment")
public class Comment {
    ...
    @ManyToOne
    @JoinColumn(name = "post_id", insertable = false, updatable = false)
    private Post post;
    ...
}
This time, the children collection changes won't trigger a parent version update:

#create tables
Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]}
Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]}
Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]}
Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]}
Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]}
Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}
#insert post in primary transaction
Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}
#select post in secondary transaction
Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}
Query:{[select comments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.id as id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.id as id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ inner join comment entityopti1_ on comments0_.comments_id=entityopti1_.id left outer join post entityopti2_ on entityopti1_.post_id=entityopti2_.id where comments0_.post_id=?][1]}
#insert comment in secondary transaction
Query:{[insert into comment (id, review) values (default, ?)][Good post!]}
Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]}
#update works in primary transaction
Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]}

If you enjoyed this article, I bet you are going to love my book as well.

Conclusion

It's very important to understand how various modeling structures impact concurrency patterns. Changes to owning-side collections are taken into consideration when incrementing the parent version number, and you can always bypass this behavior using the @OptimisticLock annotation.

Code available on GitHub.

If you have enjoyed reading my article and you're looking forward to getting instant email notifications of my latest posts, you just need to follow my blog.
November 4, 2014
· 60,410 Views · 1 Like
article thumbnail
Hibernate Bytecode Enhancement
Now that you know the basics of Hibernate dirty checking, we can dig into enhanced dirty checking mechanisms.
September 10, 2014
· 20,715 Views · 1 Like
article thumbnail
The Dark Side of Hibernate Auto Flush
Introduction

Now that I described the basics of JPA and Hibernate flush strategies, I can continue unraveling the surprising behavior of Hibernate's AUTO flush mode.

Not all queries trigger a Session flush

Many would assume that Hibernate always flushes the Session before executing a query. While this might have been a more intuitive approach, and probably closer to JPA's AUTO FlushModeType, Hibernate tries to optimize it. If the currently executed query is not going to hit the pending SQL INSERT/UPDATE/DELETE statements, then the flush is not strictly required. As stated in the reference documentation, the AUTO flush strategy may sometimes synchronize the current persistence context prior to a query execution. It would have been more intuitive if the framework authors had chosen to name it FlushMode.SOMETIMES.

JPQL/HQL and SQL

Like many other ORM solutions, Hibernate offers a limited entity querying language (JPQL/HQL) that's very much based on SQL-92 syntax. The entity query language is translated to SQL by the current database dialect, so it must offer the same functionality across different database products. Since most database systems are SQL-92 compliant, the entity query language is an abstraction of the most common database querying syntax. While you can use the entity query language in many use cases (selecting entities and even projections), there are times when its limited capabilities are no match for an advanced querying request. Whenever we want to make use of some specific querying techniques, such as window functions, pivot tables, or Common Table Expressions, we have no other option but to run native SQL queries. Hibernate is a persistence framework; it was never meant to replace SQL. If some query is better expressed natively, then it's not worth sacrificing application performance on the altar of database portability.

AUTO flush and HQL/JPQL

First, we are going to test how the AUTO flush mode behaves when an HQL query is about to be executed. For this, we use two unrelated entities, Product and User (sketched a bit further below). The test will execute the following actions:

- A Product is going to be persisted.
- Selecting User(s) should not trigger the flush.
- Querying for Product, the AUTO flush should trigger the entity state transition synchronization (a Product INSERT should be executed prior to executing the select query).

Product product = new Product();
session.persist(product);
assertEquals(0L, session.createQuery("select count(id) from User").uniqueResult());
assertEquals(product.getId(), session.createQuery("select p.id from Product p").uniqueResult());

Giving the following SQL output:

[main]: o.h.e.i.AbstractSaveEventListener - Generated identifier: f76f61e2-f3e3-4ea4-8f44-82e9804ceed0, using strategy: org.hibernate.id.UUIDGenerator
Query:{[select count(user0_.id) as col_0_0_ from user user0_][]}
Query:{[insert into product (color, id) values (?, ?)][12,f76f61e2-f3e3-4ea4-8f44-82e9804ceed0]}
Query:{[select product0_.id as col_0_0_ from product product0_][]}

As you can see, the User select hasn't triggered the Session flush. This is because Hibernate inspects the current query space against the pending table statements. If the currently executing query doesn't overlap with the unflushed table statements, the flush can be safely ignored.
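For reference, the two unrelated entities used above could look like the following sketch. Only the id, color, and favoriteColor fields are grounded in the logged statements; the identifier mappings and everything else are assumptions:

// a sketch only: javax.persistence annotations plus Hibernate's @GenericGenerator,
// matching the org.hibernate.id.UUIDGenerator strategy seen in the log
@Entity(name = "Product")
public class Product {

    @Id
    @GeneratedValue(generator = "uuid2")
    @GenericGenerator(name = "uuid2", strategy = "uuid2")
    private String id;

    private String color;

    public String getId() {
        return id;
    }

    public void setColor(String color) {
        this.color = color;
    }
}

@Entity(name = "User")
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String favoriteColor;
}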
HQL can detect the Product flush even for:

Sub-selects

session.persist(product);
assertEquals(0L, session.createQuery(
    "select count(*) " +
    "from User u " +
    "where u.favoriteColor in (select distinct(p.color) from Product p)").uniqueResult());

Resulting in a proper flush call:

Query:{[insert into product (color, id) values (?, ?)][Blue,2d9d1b4f-eaee-45f1-a480-120eb66da9e8]}
Query:{[select count(*) as col_0_0_ from user user0_ where user0_.favoriteColor in (select distinct product1_.color from product product1_)][]}

Or theta-style joins

session.persist(product);
assertEquals(0L, session.createQuery(
    "select count(*) " +
    "from User u, Product p " +
    "where u.favoriteColor = p.color").uniqueResult());

Triggering the expected flush:

Query:{[insert into product (color, id) values (?, ?)][Blue,4af0b843-da3f-4b38-aa42-1e590db186a9]}
Query:{[select count(*) as col_0_0_ from user user0_ cross join product product1_ where user0_.favoriteColor=product1_.color][]}

The reason why this works is that entity queries are parsed and translated to SQL queries. Hibernate cannot reference a non-existing table, therefore it always knows which database tables an HQL/JPQL query will hit. But Hibernate is only aware of those tables we explicitly reference in our HQL query. If the currently pending DML statements imply database triggers or database-level cascading, Hibernate won't be aware of those. So even for HQL, the AUTO flush mode can cause consistency issues.

If you enjoy reading this article, you might want to subscribe to my newsletter and get a discount for my book as well.

AUTO flush and native SQL queries

When it comes to native SQL queries, things get much more complicated. Hibernate cannot parse SQL queries, because it only supports a limited database query syntax, and many database systems offer proprietary features that are beyond the entity query language capabilities. Querying the product table with a native SQL query is not going to trigger the flush, causing an inconsistency issue:

Product product = new Product();
session.persist(product);
assertNull(session.createSQLQuery("select id from product").uniqueResult());

DEBUG [main]: o.h.e.i.AbstractSaveEventListener - Generated identifier: 718b84d8-9270-48f3-86ff-0b8da7f9af7c, using strategy: org.hibernate.id.UUIDGenerator
Query:{[select id from product][]}
Query:{[insert into product (color, id) values (?, ?)][12,718b84d8-9270-48f3-86ff-0b8da7f9af7c]}

The newly persisted Product was only inserted during transaction commit, because the native SQL query didn't trigger the flush. This is a major consistency problem, one that's hard to debug or even foresee by many developers. That's one more reason for always inspecting auto-generated SQL statements.

The same behaviour is observed even for named native queries:

@NamedNativeQueries(
    @NamedNativeQuery(name = "product_ids", query = "select id from product")
)
assertNull(session.getNamedQuery("product_ids").uniqueResult());

So even if the SQL query is pre-loaded, Hibernate won't extract the associated query space for matching it against the pending DML statements.

Overruling the current flush strategy

Even if the current Session defines a default flush strategy, you can always override it on a query basis.

Query flush mode

The ALWAYS mode is going to flush the persistence context before any query execution (HQL or SQL). This time, Hibernate applies no optimization and all pending entity state transitions are going to be synchronized with the current database transaction.
assertEquals(product.getId(), session.createSQLQuery("select id from product").setFlushMode(FlushMode.ALWAYS).uniqueResult());

Instructing Hibernate which tables should be synchronized

You can also add a synchronization rule to your currently executing SQL query. Hibernate will then know which database tables need to be synchronized prior to executing the query. This is also useful for second-level caching.

assertEquals(product.getId(), session.createSQLQuery("select id from product").addSynchronizedEntityClass(Product.class).uniqueResult());

If you enjoyed this article, I bet you are going to love my book as well.

Conclusion

The AUTO flush mode is tricky, and fixing consistency issues on a query basis is a maintainer's nightmare. If you decide to add a database trigger, you'll have to check all Hibernate queries to make sure they won't end up running against stale data. My suggestion is to use the ALWAYS flush mode, even if the Hibernate authors warned us that "this strategy is almost always unnecessary and inefficient." Inconsistency is much more of an issue than some occasional premature flushes.

While mixing DML operations and queries may cause unnecessary flushing, this situation is not that difficult to mitigate. During a session transaction, it's best to execute queries at the beginning (when no pending entity state transitions are to be synchronized) and towards the end of the transaction (when the current persistence context is going to be flushed anyway). The entity state transition operations should be pushed towards the end of the transaction, trying to avoid interleaving them with query operations (therefore preventing a premature flush trigger).
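If overriding the flush mode query by query becomes tedious, the same setting can be applied to the whole Session; a minimal sketch, assuming the Hibernate-native Session API used throughout this article and an already built sessionFactory:

Session session = sessionFactory.openSession();
// every subsequent HQL or native SQL query on this Session flushes pending changes first
session.setFlushMode(FlushMode.ALWAYS);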
August 15, 2014
· 34,449 Views · 3 Likes
article thumbnail
From JPA to Hibernate's Legacy and Enhanced Identifier Generators
Read about JPA and Hibernate's legacy and enhanced identifier generators.
July 16, 2014
· 18,583 Views · 0 Likes
article thumbnail
Hibernate Identity, Sequence and Table (Sequence) Generator
Learn about Identity, Sequence, and Table in Hibernate.
July 9, 2014
· 176,389 Views · 2 Likes
article thumbnail
A Beginner's Guide to ACID and Database Transactions
Read the original article here.
January 7, 2014
· 19,559 Views · 0 Likes

Refcards

Refcard #171

MongoDB Essentials

Comments

Multi-Tenancy Implementation for Spring Boot + Hibernate Projects

Mar 30, 2017 · Alon Segal

Indeed. My point was to show you that you can use the abstract class that Hibernate provides, not to take the test case as a drop-in replacement for your project. The DriverManager is not intended for production, but you can use the DataSourceConnectionProvider. Check out this article for more details.

Multi-Tenancy Implementation for Spring Boot + Hibernate Projects

Mar 30, 2017 · Alon Segal

You can make it select from multiple Connection Providers. Check out this example on Hibernate test cases.

Multi-Tenancy Implementation for Spring Boot + Hibernate Projects

Mar 30, 2017 · Alon Segal

Looks good from a Hibernate perspective. However, I'd strongly suggest extending AbstractMultiTenantConnectionProvider instead of implementing the MultiTenantConnectionProvider interface.
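For illustration, a rough sketch of what extending AbstractMultiTenantConnectionProvider might look like; the per-tenant map, the default tenant key, and the class name are illustrative assumptions, not part of the original comment:

import java.util.Map;
import org.hibernate.engine.jdbc.connections.spi.AbstractMultiTenantConnectionProvider;
import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;

public class MapBasedMultiTenantConnectionProvider extends AbstractMultiTenantConnectionProvider {

    private final Map<String, ConnectionProvider> connectionProviders;
    private final String defaultTenantId;

    public MapBasedMultiTenantConnectionProvider(Map<String, ConnectionProvider> connectionProviders, String defaultTenantId) {
        this.connectionProviders = connectionProviders;
        this.defaultTenantId = defaultTenantId;
    }

    @Override
    protected ConnectionProvider getAnyConnectionProvider() {
        // used when no tenant identifier is available (e.g., during schema management)
        return connectionProviders.get(defaultTenantId);
    }

    @Override
    protected ConnectionProvider selectConnectionProvider(String tenantIdentifier) {
        // one ConnectionProvider (and therefore one pool) per tenant
        return connectionProviders.get(tenantIdentifier);
    }
}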

How to Identify and Resolve Hibernate N+1 SELECT's Problems

Sep 26, 2016 · Eric Genesky

You should use an automated testing utility to detect the N+1 query issue during testing.

Unit Testing JPA... Stop Integration Testing!

Aug 10, 2016 · Michael Remijan

This is so wrong! When you have a data access layer, the only valuable tests are integration tests that you run against the same database engine type as the one you use in production.

You can run integration tests on MySQL or PostgreSQL or any other DB almost as fast as on H2 or HSQLDB. You just have to map the data drive in memory, as explained in this article.

Who Are the Java EE Guardians and Why Should You Care?

Mar 29, 2016 · Dave Fecak

Sure, it depends on the developers' skills. Nevertheless, we deployed one of the largest real estate platforms in Finland using Spring and JTA, and it worked like a charm.

Who Are the Java EE Guardians and Why Should You Care?

Mar 29, 2016 · Dave Fecak

It's actually really simple. Check out this Java-based JTA configuration. In 100 lines of code I managed to set up the Bitronix PoolingDataSource, a datasource-proxy to intercept all statements, the Bitronix config, the JTA Spring transaction manager, and the Hibernate entity manager factory classes.

So, it's actually pretty simple.

Who Are the Java EE Guardians and Why Should You Care?

Mar 29, 2016 · Dave Fecak

The JtaTransactionManager allows you to have JTA transactions in Spring. It is just a wrapper, because underneath it still needs an actual TM: Atomikos, Bitronix, Narayana, etc.

HibernateTransactionManager is more of a legacy component that was available before JPA 1.0 emerged. Nowadays, most users choose the JpaTransactionManager, which can be configured with or without a persistence.xml file.

You can have multiple resources without needing JTA, like when you set up multiple DataSources (one master and multiple slaves). In this case RESOURCE_LOCAL works fine, and using the read-only flag of the @Transactional annotation you can be redirected to the right DataSource (see the sketch after this comment).

You can even have multiple resources and JTA too.

Depending on the current application requirements, you get to choose what's best for your system.
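A rough sketch of one way to do that read-only routing, assuming Spring's AbstractRoutingDataSource (ideally wrapped in a LazyConnectionDataSourceProxy so the read-only flag is already set when the physical connection is fetched); the "master" and "replica" lookup keys are illustrative:

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // route read-only transactions to the replica, everything else to the master
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                ? "replica"
                : "master";
    }
}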

Who Are the Java EE Guardians and Why Should You Care?

Mar 29, 2016 · Dave Fecak

That's true. Actually, squeezing the last drop is more an exception than a rule, so you're right.

JTA is very valuable, and it's a must when coordinating multiple sources of data.

Who Are the Java EE Guardians and Why Should You Care?

Mar 29, 2016 · Dave Fecak

Declarative transactions are indeed worthwhile, but both Java EE and Spring have them. As far as I know, Spring lets you set the transaction isolation level in the @Transactional annotation too.

I'd like to see such a benchmark because I'm very curious about the actual results.

In a high-performance application, every millisecond matters. Check out the impact of the aggressive connection release mode, which is used by default on any JTA deployment when using Hibernate as a JPA provider.

Who Are the Java EE Guardians and Why Should You Care?

Mar 29, 2016 · Dave Fecak

Actually, if you use a single DataSource, JTA is not needed at all and you don't have to manage transactions manually. You can use the HibernateTransactionManager or JpaTransactionManager from Spring, and there's nothing wrong with them.

In fact, they allow you to set the isolation level, the read-only flag, routing of requests by read or write, and the timeout.

Even with 1PC optimization, using JTA is still slower for a high-performance application.

Hibernate Performance Tuning

Mar 07, 2016 · Ming Jiang

Great topic. You can also check my High-Performance Hibernate tutorial.

Bosom Buddies: How to make Google Chrome use Microsoft Bing for Search

Jan 27, 2015 · Alvin Ashcraft

1. "readers don't block writers, writers don't block readers", but "writers block writers" and a DML statement is a writer, which will take a lock even with MVCC.

There's a very detailed explanation of every Oracle transaction isolation level behaviour on Oracle Tech Network:

http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o65asktom-082389.html

On READ_COMMITTED:

In Oracle Database, using multi-versioning and read-consistent queries, the answer I get from the ACCOUNTS query is the same in the READ COMMITTED example as it was in the READ UNCOMMITTED example. Oracle Database will reconstruct the modified data as it appeared when the query began, returning the answer that was in the database when the query started"

This means that MVCC allows readers to reconstruct data as of before any pending uncommitted changes. But that doesn't mean Oracle doesn't still use exclusive locks:

"Updates row 1 and puts an exclusive lock on row 1, preventing other updates and reads. Row 1 now has $100.00.

All DML statements use exclusive locks.

This session must wait on that row until the transaction holding the exclusive lock commits... The really bad news in this scenario is that I'm making the end user wait for the wrong answer. I still receive an answer that never existed in the database, as with the dirty read, but this time I made the user wait for the wrong answer."

2. Every database has a pre-defined locking scheme for each transaction isolation level. It doesn't mean that all RDBMS comply with the SQL standard. Oracle, for instance, doesn't allow "dirty reads" in READ_UNCOMMITTED:

The READ UNCOMMITTED isolation level allows dirty reads. Oracle Database doesn't use dirty reads, nor does it even allow them. The basic goal of a READ UNCOMMITTED isolation level is to provide a standards-based definition that allows for nonblocking reads. As you've seen, Oracle Database provides for nonblocking reads by default. You'd be hard-pressed to make a SELECT query block and wait in the database (as noted earlier, there is the special case of a distributed transaction). Every single query, be it a SELECT , INSERT , UPDATE , MERGE , or DELETE , executes in a read-consistent fashion. It might seem funny to refer to an UPDATE statement as a query, but it is. UPDATE statements have two components: a read component as defined by the WHERE clause, and a write component as defined by the SET clause. UPDATE statements read and write to the database, as do all DML statements. The case of a single row INSERT using the VALUES clause is the only exception to this, because such statements have no read component—just the write component."

So, Oracle implements "READ UNCOMMITTED" as a non-blocking "READ COMMITTED" isolation level, which is not what the standard defined.

Bosom Buddies: How to make Google Chrome use Microsoft Bing for Search

Jan 27, 2015 · Alvin Ashcraft

That's a very fine description of the MVCC inner workings. All in all, database transaction isolation levels and their logical (MVCC) or physical (shared/exclusive) locks cannot prevent all data integrity anomalies.

In multi-request logical transactions (web application workflows), you need application-level optimistic locking anyway.

I am writing a Hibernate Master Class tutorial, and most of my writing efforts have been channelled into the concurrency-control benefits of using an ORM tool.

With your thorough knowledge of this topic, I would be honoured to have your sincere review on my current articles.

Thanks, Vlad

Load Progress Image in Autocomplete textbox using javascript

Jan 02, 2015 · amiT jaiN

Thanks for pointing it out. Bitronix supports default isolation levels too. It's Spring's default JTA TM that doesn't support it. But it's easy to extend it, as in the WebLogic example.

Test Driven Development, How To?

Dec 28, 2014 · Bogdan Mustiata

Aside from the personal insult, I can once again demonstrate you're wrong.

JPA is cache-centric only in that it mandates session-level repeatable reads through the 1st level cache. This is mandatory since state transitions are not immediately synchronized with the DB. For EAGER associations, if you rely on caching then you have to enable the 2nd level cache, which is disabled by default. That's because the 1st level cache is bound to the life cycle of one and only one Session, so you will always fetch the entities from the DB using JOINs or secondary selects, which leaves you with the 2nd level cache variant. The 2nd level cache doesn't solve it either when a JOIN is issued anyway, even if the entity is in the cache, like when loading the entity through the entity manager find method.

1. Did you know that the default AUTO flushing is not consistent with native queries in Hibernate? How are you going to resolve that with mere JPA spec logic?

2. The 2nd level cache, like any other cache, introduces a consistency breaking point that you need to compensate for with application logic. Is this really a good reason for using the 2nd level cache? Just for the default to-one associations?

3. Go ahead and ask this question on the JPA mailing list. What if they tell you that this behaviour dates back to the 1.0 spec, when LAZY wasn't a mandatory requirement? Could it be related to LAZY being only a hint?

Good luck with your EAGER associations!

Test Driven Development, How To?

Dec 27, 2014 · Bogdan Mustiata

Argumentum ad hominem

Test Driven Development, How To?

Dec 26, 2014 · Bogdan Mustiata

Thanks, Valery, for your suggestions. Indeed, it's a dangerous zone. Most developers will sacrifice performance in the name of an illusory portability, be it JPA or the database server. In a mid-to-large-size enterprise project, it's not that simple to switch technologies, and you most likely have to run optimized native queries anyway.

I liked your two suggestions on pagination and projections. This article is just a small section of the larger "Hibernate Master Class" free online course I am writing. Each article focuses on one idea, so I will address the Collection fetching anti-patterns and the "only Entity fetching" misconceptions in some new posts.

Test Driven Development, How To?

Dec 25, 2014 · Bogdan Mustiata

Try adding EAGER on two collections:

@OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "product", orphanRemoval = true)
@OrderBy("index")
private Set<Image> images = new LinkedHashSet<>();

@OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "product", orphanRemoval = true)
@OrderBy("index")
private Set<Review> reviews = new LinkedHashSet<>();

And the call:

Product product = entityManager.find(Product.class, productId);

You'll get a wonderful Cartesian Product.

This is no micro-optimization or performance tuning strategy. This is proper design and common-sense from an SQL point of view.
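One possible way out, sketched here rather than taken from the original comment: keep both collections LAZY and fetch only the one you actually need with an explicit join fetch (the Image type name and the query shape are assumptions):

@OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "product", orphanRemoval = true)
@OrderBy("index")
private Set<Image> images = new LinkedHashSet<>();

// fetch the collection explicitly only where it is needed:
Product product = entityManager.createQuery(
        "select p from Product p join fetch p.images where p.id = :id", Product.class)
    .setParameter("id", productId)
    .getSingleResult();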

Using Verbs As Nouns in User Interfaces

Nov 13, 2014 · Mr B Loid

That was exactly my point. Thanks for appreciating my article.

Fully lazy sequences are here! - Clojure

Sep 14, 2014 · Mr B Loid

Great numbers. Thanks for sharing it.

Full GlassFish adoption questionnaire responses from GroovyBlogs.org's Glen Smith.

Mar 05, 2014 · Mr B Loid

Very good article. When it comes to concurrency and ensuring consistency and replications all DBs (NoSQL or SQL) share the same challenges. I think you might find this article (and the rest of the blog) really interesting.

Interfaces + Factory pattern = Decoupled architecture

Feb 11, 2014 · Amit Mehra

Hi Peter,

First of all, congrats on releasing your book. The last JEE book I read was Adam Bien's "Real World Java EE Patterns Rethinking Best Practices" and it changed my opinion about JEE, as I was previously looking at it from the J2EE heavyweight perspective.

I would like to read more about the new JMS 2.0, the JPA enhancements and WebSockets support, and I hope I get the chance to write a DZone book report after reading it.

Vlad

WPF Beginner FAQ

Jan 08, 2014 · Amit Mehra

Facebook still uses MySQL for its social graph and Cassandra for email searching. If we are talking about a relational data model, then RDBMS is the perfect choice and most projects don't really fit into a 'BigData' category anyway. Yes, NoSQL has evolved in the context of BigData, and therefore it offers sharding/horizontal scalability options, but then, you can horizontally scale a SQL solution too.

Fully lazy sequences are here! - Clojure

Dec 07, 2013 · Mr B Loid

Thanks for the tip. I am just an occasional Python developer, since I use it more as a universal bash-scripting tool. I wanted to distribute the entries between a start and an end date, so I can further calculate some time series. In this example I wanted to generate 50,000,000 values for a one-year period (2012-2013).

RESTClient: Version 2.3 Released. Tool for testing RESTful WebServices.

Nov 25, 2013 · Subhash Chandran

Hi,

I like to check open-source projects' code bases. I am curious how they implemented some features I frequently use, and I get to learn a lot (applied design patterns, new Java features I haven't had the chance to use). But that's not how I evaluate tools. Like you said, I also take a pragmatic approach, and I weigh the benefit I get against the overhead it adds to my current application (development/deployment). There are many projects using Hibernate simply because everybody's using it, when they could do better with simple JDBC or jOOQ. Or when you have a UI table loading all rows for every new page it displays, and people complain the database is too slow. The DB can be very fast, but you have to know more than select/insert/update/delete or B-Tree indexes, like SQL window functions for instance.

Vlad

Track memory allocations on Android

Nov 08, 2013 · Mr B Loid

Hi,

Splunk seems like a very handy tool, I'll have to investigate it.

Vlad

Track memory allocations on Android

Nov 05, 2013 · Mr B Loid

Hi Steven,

My vision is that file-based string logging is like using text files instead of a database. Usually logging is not taken too seriously until you move into production, when you realize logging/monitoring are as important as any other aspect of your application.

Having so many NoSql solutions nowadays simplifies implementing a smart logging system, and if more people get interested in such an idea, I plan on starting a new open-source project to address this need.

The project goals should be quite straightforward:

- simple API to submit log objects

- asynchronous batch job to save the log objects into a NoSql storage

- support for handling a log object and update the "current system state"

- support for exposing the "current system state" as JMX

I see it as a library on top of which you start implementing your own smart-logging solution based on your current project requirements, rather than a full-featured logging application which cannot foresee the complex requirements of any project you'd want to integrate with.

Vlad

The Java EE Application as an EJB/Spring/Hibernate Hybrid

Oct 26, 2011 · Mr B Loid

Yes, volatile fixes the issue in this case, but synchronized would have done the same thing.

The TimerTask uses a separate thread to set the expired=true variable, so if you change the

TimerTask to:

public void run() {
    synchronized (mutex) {
        expired = true;
    }
    System.out.println("Timer interrupted main thread ...");
    timer.cancel();
}

Then the expired flag would be visible from both the Timer thread and the worker thread.
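As a sketch of the volatile alternative mentioned above (the worker-loop shape is assumed, not taken from the original code):

private volatile boolean expired;

// TimerTask thread
public void run() {
    expired = true; // volatile write, immediately visible to the worker thread
    System.out.println("Timer interrupted main thread ...");
    timer.cancel();
}

// worker thread
while (!expired) {
    // keep working until the timer fires
}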
