
JPA

Locking

 http://stackoverflow.com/questions/33062635/difference-between-lockmodetype-jpa

 

Optimistic locking is fully controlled by JPA and only requires an additional version column in the database tables. It is completely independent of the underlying database engine used to store the relational data.
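The version-column mechanism can be sketched in plain Java. This is a simulation of what the provider does behind the scenes, not JPA code, and all names are made up:

```java
// A minimal sketch of optimistic locking: each record carries a version
// number; an update succeeds only if the version it originally read is
// still current, and every successful update increments the version.
public class OptimisticDemo {
    static String data = "initial";
    static int version = 0;                 // the extra version column

    // Returns true if the update won; a JPA provider would instead throw
    // OptimisticLockException in the losing transaction.
    static boolean update(int expectedVersion, String newData) {
        if (version != expectedVersion) {
            return false;                   // stale read: someone committed first
        }
        data = newData;
        version++;                          // version column incremented
        return true;
    }

    public static void main(String[] args) {
        int readA = version;                // tx A reads the entity
        int readB = version;                // tx B reads the same entity
        System.out.println(update(readA, "A's change")); // true: A commits first
        System.out.println(update(readB, "B's change")); // false: B read a stale version
    }
}
```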

Pessimistic locking, on the other hand, uses the locking mechanisms provided by the underlying database to lock existing records in tables. JPA needs to know how to trigger these locks, and some databases support them only partially or not at all.

Now to the list of lock types:

1.    LockModeType.OPTIMISTIC

·         This is really the default. It is usually ignored, as stated by ObjectDB. In my opinion it only exists so that you may compute the lock mode dynamically and pass it on, even if the lock ends up being OPTIMISTIC. Not a very probable use case, but it is always good API design to provide a way to reference even the default value.

·         Example:

LockModeType lockMode = resolveLockMode();
A a = em.find(A.class, 1, lockMode);

2.    LockModeType.OPTIMISTIC_FORCE_INCREMENT

·         This is a rarely used option. But it can be reasonable if you want to lock an entity that is referenced by another entity, i.e. you want to lock work with an entity even when it is not modified itself, but other entities may be modified in relation to it.

·         Example: we have the entities Book and Shelf. It is possible to add a Book to a Shelf, but the book does not hold any reference to its shelf. It is reasonable to lock the action of moving a book to a shelf, so that the book does not end up on another shelf (due to another transaction) before the end of this transaction. To lock this action, it is not sufficient to lock the book's current shelf entity, as the book does not have to be on a shelf yet. It also does not make sense to lock all target shelves, as they would probably be different in different transactions. The only thing that makes sense is to lock the book entity itself, even though in our case it does not get changed (it does not hold a reference to its shelf).

3.    LockModeType.PESSIMISTIC_READ

·         This mode is similar to LockModeType.PESSIMISTIC_WRITE, but differs in one thing: as long as no transaction holds a write lock on the entity, it does not block reading the entity. It also allows other transactions to lock it with LockModeType.PESSIMISTIC_READ. The differences between WRITE and READ locks are well explained here (ObjectDB) and here (OpenJPA).

4.    LockModeType.PESSIMISTIC_WRITE

·         This is a stronger version of LockModeType.PESSIMISTIC_READ. When a WRITE lock is in place, JPA, with the help of the database, prevents any other transaction from reading the entity, not only from writing it as with a READ lock.

·         How this is implemented in a JPA provider in cooperation with the underlying database is not prescribed. In your case with Oracle, I would say that Oracle does not provide anything close to a READ lock; SELECT ... FOR UPDATE is really rather a WRITE lock. It may be a bug in Hibernate, or simply a decision that, instead of implementing a custom "softer" READ lock, the "harder" WRITE lock is used. This mostly does not break consistency, but it does not obey all the rules of READ locks. You could run some simple tests with READ locks and long-running transactions to find out whether multiple transactions are able to acquire READ locks on the same entity. This should be possible with READ locks, but not with WRITE locks.
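·         The READ vs WRITE difference can be sketched with a classic shared/exclusive lock in plain Java. This is only an analogy (a JPA provider actually translates these modes into database statements such as SELECT ... FOR UPDATE), and the class name is made up:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shared/exclusive semantics, analogous to PESSIMISTIC_READ (shared)
// and PESSIMISTIC_WRITE (exclusive) on one entity.
public class SharedExclusiveDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock entityLock = new ReentrantReadWriteLock();

        // Several transactions may hold a "READ" (shared) lock at once:
        System.out.println(entityLock.readLock().tryLock());  // true
        System.out.println(entityLock.readLock().tryLock());  // true

        // A "WRITE" (exclusive) lock cannot be taken while readers hold it:
        System.out.println(entityLock.writeLock().tryLock()); // false
    }
}
```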

5.    LockModeType.PESSIMISTIC_FORCE_INCREMENT

·         This is another rarely used lock mode. However, it is an option when you need to combine PESSIMISTIC and OPTIMISTIC mechanisms. Using plain PESSIMISTIC_WRITE would fail in the following scenario:

1.    transaction A uses optimistic locking and reads entity E

2.    transaction B acquires WRITE lock on entity E

3.    transaction B commits and releases lock of E

4.    transaction A updates E and commits

·         In step 4, if the version column was not incremented by transaction B, nothing prevents A from overwriting the changes of B. The lock mode LockModeType.PESSIMISTIC_FORCE_INCREMENT forces transaction B to update the version number, causing transaction A to fail with an OptimisticLockException, even though B was using pessimistic locking.
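·         The four steps can be simulated in plain Java with a hand-rolled version counter. This is a sketch of the mechanism, not JPA code, and the names are made up:

```java
// Walk-through of the scenario: because PESSIMISTIC_FORCE_INCREMENT makes
// transaction B bump the version even without data changes, transaction A's
// later optimistic commit is detected as stale.
public class ForceIncrementDemo {
    static int version = 1;                   // E's version column in the DB

    // tx A's optimistic commit succeeds only if the version it read is
    // still current (otherwise JPA throws OptimisticLockException).
    static boolean commitAllowed(int readVersion) {
        return version == readVersion;
    }

    public static void main(String[] args) {
        int readByA = version;   // 1. tx A reads E using optimistic locking
        version++;               // 2.-3. tx B locks E with
                                 //       PESSIMISTIC_FORCE_INCREMENT and
                                 //       commits: the version is bumped even
                                 //       though B changed no data
        // 4. tx A tries to update E and commit:
        System.out.println(commitAllowed(readByA)
            ? "A silently overwrites B"
            : "A fails with OptimisticLockException"); // this branch runs
    }
}
```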

 

Cascade relations

 

http://howtodoinjava.com/hibernate/hibernate-jpa-cascade-types/

JPA Cascade Types

The cascade types supported by the Java Persistence API (JPA) are listed below:

1.      CascadeType.PERSIST : means that save() or persist() operations cascade to related entities.

2.      CascadeType.MERGE : means that related entities are merged into managed state when the owning entity is merged.

3.      CascadeType.REFRESH : means that the refresh() operation cascades to related entities.

4.      CascadeType.REMOVE : removes all related entities that have this setting when the owning entity is deleted.

5.      CascadeType.DETACH : detaches all related entities if a “manual detach” occurs.

6.      CascadeType.ALL : is shorthand for all of the above cascade operations.
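What cascading means can be sketched with a tiny in-memory simulation in plain Java. Order, Item and the database map are hypothetical stand-ins, not JPA classes:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A sketch of CascadeType.PERSIST: persisting the owning entity also
// persists its related entities.
public class CascadeDemo {
    static Map<String, Object> database = new LinkedHashMap<>();

    static class Item {
        final String id;
        Item(String id) { this.id = id; }
    }

    static class Order {
        final String id;
        final List<Item> items;
        Order(String id, List<Item> items) { this.id = id; this.items = items; }
    }

    // Like persist() on an entity whose relation is annotated with
    // @OneToMany(cascade = CascadeType.PERSIST): the children are saved too.
    static void persist(Order order) {
        database.put(order.id, order);
        for (Item item : order.items) {
            database.put(item.id, item);     // the cascaded persist
        }
    }

    public static void main(String[] args) {
        persist(new Order("o1", List.of(new Item("i1"), new Item("i2"))));
        System.out.println(database.keySet()); // [o1, i1, i2]
    }
}
```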


Others

 http://www.byteslounge.com/tutorials/jpa-entity-versioning-version-and-optimistic-locking


DDL Generation


Table 5-29 Valid Values for ddl-generation

Value / Description

create-tables

EclipseLink will attempt to execute a CREATE TABLE SQL for each table.

If the table already exists, EclipseLink will follow the default behavior of your specific database and JDBC driver combination when a CREATE TABLE statement is issued for an already existing table. In most cases an exception is thrown, the table is not created, and the existing table is used. EclipseLink then continues with the next statement.

create-or-extend-tables

EclipseLink will attempt to create tables. If the table exists, EclipseLink will add any missing columns.

drop-and-create-tables

EclipseLink will attempt to DROP all tables, then CREATE all tables. If any issues are encountered, EclipseLink will follow the default behavior of your specific database and JDBC driver combination, then continue with the next statement.

This is useful in development if the schema frequently changes or during testing when the existing data needs to be cleared.

Note: using drop-and-create-tables will remove all of the data in the tables when they are dropped. You should never use this option on a production schema that has valuable data in the database. If the schema has changed dramatically, there could be old constraints in the database that prevent the old tables from being dropped. This may require the old schema to be dropped through another mechanism.

none

(Default) No DDL generated; no schema generated.
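In EclipseLink these values are set through the eclipselink.ddl-generation persistence-unit property; a minimal persistence.xml sketch (the unit name is made up):

```xml
<!-- persistence.xml fragment: pick one of the values from the table above -->
<persistence-unit name="example">
  <properties>
    <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
  </properties>
</persistence-unit>
```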


