@Version column is not working out of the box with Spring Data JDBC - spring-data-jdbc

I have my version column defined like this:
@org.springframework.data.annotation.Version
protected long version;
With Spring Data JDBC it always tries to INSERT; updates never happen. When I debug I see that PersistentEntityIsNewStrategy, the default strategy, is being used. Its isNew() method determines the state of the entity being persisted, and I can see that both the version and the id are used for this determination.
But my question is: what is responsible for incrementing the version column after every save, so that the second time .save() is called, isNew() can return false?
Should we fire a BeforeSaveEvent and handle incrementing the version column ourselves? Would that be good enough to handle optimistic locking?
Edit
I added an ApplicationListener for BeforeSaveEvent like this:
public ApplicationListener<BeforeSaveEvent> incrementingVersion() {
    return event -> {
        Object entity = event.getEntity();
        if (BaseDataModel.class.isAssignableFrom(entity.getClass())) {
            BaseDataModel baseDataModel = (BaseDataModel) entity;
            Long version = baseDataModel.getVersion();
            if (version == null) {
                baseDataModel.setVersion(0L);
            } else {
                baseDataModel.setVersion(version + 1L);
            }
        }
    };
}
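For reference, I register this listener by exposing it as a bean from a configuration class; the class name below is just a placeholder and assumes a standard Spring Boot setup:
@Configuration
public class VersionListenerConfig {

    // Exposes the listener above as a bean so that Spring Data JDBC's event
    // publication picks it up (class and bean names are placeholders).
    @Bean
    public ApplicationListener<BeforeSaveEvent> incrementingVersion() {
        return event -> {
            Object entity = event.getEntity();
            if (entity instanceof BaseDataModel) {
                BaseDataModel model = (BaseDataModel) entity;
                Long version = model.getVersion();
                model.setVersion(version == null ? 0L : version + 1L);
            }
        };
    }
}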
So now the version column works, but the rest of the auditable fields @CreatedAt, @CreatedBy, @LastModifiedDate and @LastModifiedBy are not set!
Edit2
I created a new ApplicationListener like the one below. In this case both my custom listener and Spring's RelationalAuditingListener are called, but that still doesn't solve the problem: because of the listener order (my custom one followed by Spring's), markAudited ends up invoking markUpdated instead of markCreated, since the version column has already been incremented. I tried giving my listener LOWEST_PRECEDENCE, still no luck.
My custom listener is here:
public class CustomRelationalAuditingEventListener
        implements ApplicationListener<BeforeSaveEvent>, Ordered {

    @Override
    public void onApplicationEvent(BeforeSaveEvent event) {
        Object entity = event.getEntity();
        // handler.markAudited(entity);
        if (BaseDataModel.class.isAssignableFrom(entity.getClass())) {
            BaseDataModel baseDataModel = (BaseDataModel) entity;
            if (baseDataModel.getVersion() == null) {
                baseDataModel.setVersion(0L);
            } else {
                baseDataModel.setVersion(baseDataModel.getVersion() + 1L);
            }
        }
    }

    @Override
    public int getOrder() {
        return LOWEST_PRECEDENCE;
    }
}

Currently, you have to increment the version manually, and there is no optimistic locking; the version is only used for checking whether an entity is new.
There is an open issue for support of optimistic locking and there is even a PR open for it.
Therefore it is likely that this feature will be available with an upcoming 1.1 milestone.

Related

Writing blocking operations in reactor tests with Spring and State Machine

I'm completely new to Reactor programming and I'm really struggling to migrate old integration tests since upgrading to the latest Spring Boot / State Machine.
Most integration tests have the same basic steps:
1) Call a method that returns a Mono, starts a state machine, and returns an object containing a generated unique id as well as some other info related to the initial request.
2) With the returned object, call a method that verifies whether a value has been updated in the database (using the information from the object retrieved in step 1).
3) Poll the method that checks the database at a fixed interval until either the value has changed or a predefined timeout occurs.
4) Check another table in the database to see whether another object has been updated.
Below is an example:
@Test
void testEndToEnd() {
    var instance = ServiceInstance.buildDefault();
    var updateRequest = UpdateRequest.build(instance);
    // retrieve an update response related to the request,
    // since a unique id is generated when triggering the update request
    // before starting a state machine that goes through different steps
    var updateResponse = service.updateInstance(updateRequest).block();
    await().alias("Check if operation was successful")
            .atMost(Duration.ofSeconds(120))
            .pollInterval(Duration.ofSeconds(2))
            .until(() -> expectOperationState(updateResponse, OperationState.SUCCESS));
    // check if values are updated in secondary table
    assertValuesInTransaction(updateResponse);
}
This was working fine before, but since the latest update it fails with the exception:
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread parallel-6
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:83)
at reactor.core.publisher.Mono.block(Mono.java:1710)
I saw that it is good practice to test Reactor methods using StepVerifier, but I do not see how I can reproduce the polling done with Awaitility to check whether the value has changed in the DB, since the method that checks the DB returns a Mono and not a Flux that keeps emitting values.
Any idea on how to accomplish this or to make the spring stack accept blocking operations?
Thanks
My current stack:
Spring Boot 3.0.1
Spring State Machine 3.0.1
Spring 6
Junit 5.9.2
As discussed in the comments, here is an example with comments. I used flatMap to subscribe to what expectOperationState returns. Mono.fromCallable checks the value from some method, and if it fails to emit anything within 3 seconds a timeout exception is thrown. We could also get rid of the boolean value from expectOperationState and refactor the code to just return a Mono<Void> that completes, but this basically shows how you can achieve what you want.
class TestStateMachine {

    @Test
    void testUntilSomeOperationCompletes() {
        final Service service = new Service();
        final UpdateRequest updateRequest = new UpdateRequest();
        StepVerifier.create(service.updateInstance(updateRequest)
                        .flatMap(updateResponse -> expectOperationState(updateResponse, OperationState.SUCCESS))
                )
                .consumeNextWith(Assertions::assertTrue)
                .verifyComplete();
    }

    private Mono<Boolean> expectOperationState(final UpdateResponse updateResponse, final OperationState success) {
        return Mono.fromCallable(() -> {
                    while (true) {
                        boolean isInDb = checkValueFromDb(updateResponse);
                        if (isInDb) {
                            return true;
                        }
                    }
                })
                .publishOn(Schedulers.single())
                // time out if we do not receive any value from the callable within 3 seconds, so that we do not check forever
                .timeout(Duration.ofSeconds(3));
    }

    private boolean checkValueFromDb(final UpdateResponse updateResponse) {
        return true;
    }
}
class Service {

    Mono<UpdateResponse> updateInstance(final UpdateRequest updateRequest) {
        return Mono.just(new UpdateResponse());
    }
}
Here is an example without using Mono<Boolean>:
class TestStateMachine {

    @Test
    void test() {
        final Service service = new Service();
        final UpdateRequest updateRequest = new UpdateRequest();
        StepVerifier.create(service.updateInstance(updateRequest)
                        .flatMap(updateResponse -> expectOperationState(updateResponse, OperationState.SUCCESS)
                                .timeout(Duration.ofSeconds(3)))
                )
                .verifyComplete();
    }

    private Mono<Void> expectOperationState(final UpdateResponse updateResponse, final OperationState success) {
        return Mono.fromCallable(() -> {
                    while (true) {
                        boolean isInDb = checkValueFromDb(updateResponse);
                        if (isInDb) {
                            // return a completed Mono
                            return Mono.<Void>empty();
                        }
                    }
                })
                .publishOn(Schedulers.single())
                // time out if we do not receive any value from the callable within 3 seconds, so that we do not check forever
                .timeout(Duration.ofSeconds(3))
                .flatMap(objectMono -> objectMono);
    }

    private boolean checkValueFromDb(final UpdateResponse updateResponse) {
        return true;
    }
}
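If you want to keep the fixed poll interval from the original Awaitility test instead of a busy-wait loop, a non-blocking variant could look roughly like this; it is only a sketch that reuses the checkValueFromDb helper from above:
private Mono<Boolean> pollUntilInDb(final UpdateResponse updateResponse) {
    return Mono.fromCallable(() -> checkValueFromDb(updateResponse))
            .filter(Boolean::booleanValue)                                            // empty result means "not there yet"
            .repeatWhenEmpty(retries -> retries.delayElements(Duration.ofSeconds(2))) // re-check every 2 seconds
            .timeout(Duration.ofSeconds(120));                                        // give up after 2 minutes
}
It plugs into StepVerifier the same way as expectOperationState above.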

Hibernate flush optimization using `hibernate.ejb.use_class_enhancer`

I am trying to use the hibernate feature that enhances the flush performance without making code changes. I came across the option hibernate.ejb.use_class_enhancer.
I made the following changes.
1) I enabled the property hibernate.ejb.use_class_enhancer (set it to true).
The build failed with the error 'Cannot apply class transformer without LoadTimeWeaver specified'.
2) I added <context:load-time-weaver/> to the context files.
The build failed with the following error:
Specify a custom LoadTimeWeaver or start your Java virtual machine with Spring's agent: -javaagent:spring-agent.jar
3) I added the following to the maven-surefire-plugin:
-javaagent:${settings.localRepository}/org/springframework/spring-agent/2.5.6.SEC03/spring-agent-2.5.6.SEC03.jar
The build is successful now.
We have an interceptor that tracks the number of entities being flushed in a transaction.
After I made the above changes, I was expecting that number to come down significantly, but it did not.
My question is:
Are the above changes correct/enough for getting the 'entity flush optimization'?
How to verify that the application is indeed using the optimization?
Edit:
After debugging, I found the following.
At one point our DO class is submitted for transformation, but the logic that figures out whether a given class is supposed to be transformed does not handle the class names correctly (in my case); because of that, the DO class is never transformed.
Is there a way I can pass in my own logic instead?
The relevant code is below.
The return copyEntities.contains( className ); comes out false for the following inputs:
copyEntities contains the strings "com.x.y.abcDO" and "com.x.y.asxDO", whereas the className is "com.x.y.abcDO_$$_jvsteb8_48".
public InterceptFieldClassFileTransformer(List<String> entities) {
    final List<String> copyEntities = new ArrayList<String>( entities.size() );
    copyEntities.addAll( entities );
    classTransformer = Environment.getBytecodeProvider().getTransformer(
            //TODO change it to a static class to make it faster?
            new ClassFilter() {
                public boolean shouldInstrumentClass(String className) {
                    return copyEntities.contains( className );
                }
            },
            //TODO change it to a static class to make it faster?
            new FieldFilter() {
                @Override
                public boolean shouldInstrumentField(String className, String fieldName) {
                    return true;
                }
                @Override
                public boolean shouldTransformFieldAccess(
                        String transformingClassName, String fieldOwnerClassName, String fieldName
                ) {
                    return true;
                }
            }
    );
}
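For illustration, the kind of matching I would like to plug in would tolerate the generated suffix, roughly like below; the suffix-stripping rule is my assumption based on the class name above, not Hibernate's documented behaviour:
new ClassFilter() {
    public boolean shouldInstrumentClass(String className) {
        // strip a generated suffix such as "_$$_jvsteb8_48" before the lookup (assumption)
        int marker = className.indexOf("_$$_");
        String plainName = (marker >= 0) ? className.substring(0, marker) : className;
        return copyEntities.contains(plainName);
    }
}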
Edited on June 15th:
I updated my project to use Spring 4.0.5.RELEASE and hibernate to 4.3.5.Final
I started using org.hibernate.jpa.HibernatePersistenceProvider
and
org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver
and
hibernate.ejb.use_class_enhancer=true
With these changes, I am debugging the flush behavior. I have a question about this code block:
private boolean isUnequivocallyNonDirty(Object entity) {
    if (entity instanceof SelfDirtinessTracker)
        return ((SelfDirtinessTracker) entity).$$_hibernate_hasDirtyAttributes();
    final CustomEntityDirtinessStrategy customEntityDirtinessStrategy =
            persistenceContext.getSession().getFactory().getCustomEntityDirtinessStrategy();
    if ( customEntityDirtinessStrategy.canDirtyCheck( entity, getPersister(), (Session) persistenceContext.getSession() ) ) {
        return ! customEntityDirtinessStrategy.isDirty( entity, getPersister(), (Session) persistenceContext.getSession() );
    }
    if ( getPersister().hasMutableProperties() ) {
        return false;
    }
    if ( getPersister().getInstrumentationMetadata().isInstrumented() ) {
        // the entity must be instrumented (otherwise we cant check dirty flag) and the dirty flag is false
        return ! getPersister().getInstrumentationMetadata().extractInterceptor( entity ).isDirty();
    }
    return false;
}
In my case, the flow returns false because the persister says yes for hasMutableProperties; I think the interceptor never gets a chance to answer at all.
Shouldn't the bytecode transformer have installed an interceptor here? Or should the bytecode transformation make the entity a SelfDirtinessTracker?
Can anyone explain what behavior I should expect here from the bytecode transformation?
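One runtime check I am using to see whether a given entity instance was actually enhanced is to test for the SelfDirtinessTracker interface used in the snippet above; this is only a debugging sketch, and entityManager, AbcDO and log are placeholders for my actual setup:
// load (or take) an attached entity instance and see whether it was enhanced
Object entity = entityManager.find(AbcDO.class, someId);
boolean enhanced = entity instanceof org.hibernate.engine.spi.SelfDirtinessTracker;
log.info("Enhanced for dirty tracking: " + enhanced + " (" + entity.getClass().getName() + ")");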

OrmLiteSqliteOpenHelper onDowngrade

I'm using ORMLite for Android and I have a database helper class that extends OrmLiteSqliteOpenHelper. I've had some reports in Google Play that the application had a force close caused by:
android.database.sqlite.SQLiteException: Can't downgrade database from version 3 to 2
The users probably get to downgrade via a backup or something. The problem is I cannot implement the onDowngrade method that exists on SQLiteOpenHelper:
Does ormlite support the downgrade? Is there any work around for this? At least to avoid the force close.
Interesting. So the onDowngrade(...) method was added in API 11. I can't just add support for it into ORMLite. Unfortunately this means that you are going to have to make your own onDowngrade method which mirrors the onUpgrade(...) handling in OrmLiteSqliteOpenHelper. Something like the following:
public abstract void onDowngrade(SQLiteDatabase database, ConnectionSource connectionSource, int oldVersion,
        int newVersion); // your downgrade code goes in your implementation of this method
public final void onDowngrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    ConnectionSource cs = getConnectionSource();
    /*
     * The method is called by Android database helper's get-database calls when Android detects that we need to
     * create or update the database. So we have to use the database argument and save a connection to it on the
     * AndroidConnectionSource, otherwise it will go recursive if the subclass calls getConnectionSource().
     */
    DatabaseConnection conn = cs.getSpecialConnection();
    boolean clearSpecial = false;
    if (conn == null) {
        conn = new AndroidDatabaseConnection(db, true);
        try {
            cs.saveSpecialConnection(conn);
            clearSpecial = true;
        } catch (SQLException e) {
            throw new IllegalStateException("Could not save special connection", e);
        }
    }
    try {
        onDowngrade(db, cs, oldVersion, newVersion);
    } finally {
        if (clearSpecial) {
            cs.clearSpecialConnection(conn);
        }
    }
}
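A subclass would then implement the four-argument variant. The sketch below is only an illustration: the base class name, entity class, and the drop-and-recreate strategy are assumptions, not the only valid downgrade behaviour.
public class MyDatabaseHelper extends BaseDatabaseHelper {

    // onCreate() and the four-argument onUpgrade() are omitted for brevity

    @Override
    public void onDowngrade(SQLiteDatabase database, ConnectionSource connectionSource, int oldVersion,
            int newVersion) {
        try {
            // crude but safe: drop and recreate the tables instead of crashing with an SQLiteException
            TableUtils.dropTable(connectionSource, MyEntity.class, true);
            TableUtils.createTable(connectionSource, MyEntity.class);
        } catch (SQLException e) {
            throw new RuntimeException("Could not downgrade the database", e);
        }
    }
}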
For more information about the onDowngrade(...) method see below:
public void onDowngrade (SQLiteDatabase db, int oldVersion, int newVersion);
To quote from the javadocs:
Called when the database needs to be downgraded. This is strictly similar to onUpgrade(SQLiteDatabase, int, int) method, but is called whenever current version is newer than requested one. However, this method is not abstract, so it is not mandatory for a customer to implement it. If not overridden, default implementation will reject downgrade and throws SQLiteException
Also see:
Can't downgrade database from version 2 to 1

Getting DataContext error while saving form

I get this error when opening one specific form. The rest is working fine and I have no clue why this one isn't.
Error: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.
I get the error at _oDBConnection when I try to save. When I watch _oDBConnection while running through the code, it does not exist. Even when I open the main-window it does not exist. So this form is where the DataContext is built for the very first time.
Every class inherits from clsBase where the DataContext is built.
My colleague is the professional one who built it all. I am just expanding and using it (learned it by doing it). But now I'm stuck and he is on holiday. So keep it simple :-)
What can it be?
clsPermanency
namespace Reservation
{
    class clsPermanency : clsBase
    {
        private tblPermanency _oPermanency;

        public tblPermanency PermanencyData
        {
            get { return _oPermanency; }
            set { _oPermanency = value; }
        }

        public clsPermanency()
            : base()
        {
            _oPermanency = new tblPermanency();
        }

        public clsPermanency(int iID)
            : this()
        {
            _oPermanency = (from oPermanencyData in _oDBConnection.tblPermanencies
                            where oPermanencyData.ID == iID
                            select oPermanencyData).First();
            if (_oPermanency == null)
                throw new Exception("Permanentie niet gevonden");
        }

        public void save()
        {
            if (_oPermanency.ID == 0)
            {
                _oDBConnection.tblPermanencies.InsertOnSubmit(_oPermanency);
            }
            _oDBConnection.SubmitChanges();
        }
    }
}
clsBase
public class clsBase
{
    protected DBReservationDataContext _oDBConnection;
    protected int _iID;

    public int ID
    {
        get { return _iID; }
    }

    public DBReservationDataContext DBConnection
    {
        get { return _oDBConnection; }
    }

    public clsBase()
    {
        _oDBConnection = new DBReservationDataContext();
    }
}
Not a direct answer, but this is really bad design, sorry.
Issues:
One context instance per class instance. Pretty incredible. How are you going to manage units of work and transactions? And what about memory consumption and performance?
Indirection: every entity instance (prefixed o) is wrapped in a cls class. What a hassle to make classes cooperate, if necessary, or to access their properties.
DRY: far from it. Does each clsBase derivative have the same methods as clsPermanency?
Constructors: you always have to call the base constructor. The constructor with int iID always causes a redundant new object to be created, which will certainly be a noticeable performance hit when dealing with larger numbers. A minor change in constructor logic may cause the sequence of constructor invocations to change. (Nested and inherited constructors are always tricky).
Exception handling: you need a try-catch everywhere where classes are created. (BTW: First() will throw its own exception if the record is not there).
Finally, not a real issue, but class and variable name prefixes are sooo 19xx.
What to do?
I don't think you can change your colleague's design in his absence. But I'd really talk to him about it in due time. Just study some linq-to-sql examples out there to pick up some regular patterns.
The exception indicates that somewhere between fetching the _oPermanency instance (in the constructor that takes an id) and saving it, a new _oDBConnection is created. The code as shown does not reveal how this could happen, but I assume there is more code than this. When you debug and check GetHashCode() of the _oDBConnection instances, you should be able to find where it happens.

Non-Blocking Endpoint: Returning an operation ID to the caller - Would like to get your opinion on my implementation?

Boot Pros,
I recently started to program in Spring Boot and I stumbled upon a question that I would like to get your opinion on.
What I try to achieve:
I created a Controller that exposes a GET endpoint, named nonBlockingEndpoint. This nonBlockingEndpoint executes a pretty long operation that is resource heavy and can run between 20 and 40 seconds (in the attached code it is mocked by a Thread.sleep()).
Whenever the nonBlockingEndpoint is called, the Spring application should register that call and immediately return an operation ID to the caller.
The caller can then use this ID to query the status of this operation on another endpoint, queryOpStatus. At the beginning the status will be started, and once the controller is done serving the request it will change to a code such as SERVICE_OK. The caller then knows that his request was successfully completed on the server.
The solution that I found:
I have the following controller (note that it is explicitly not annotated with @Async).
It uses an APIOperationsManager to register that a new operation was started.
I use the Java CompletableFuture construct to run the long-running code as a new async task via CompletableFuture.supplyAsync(() -> { ... }).
I immediately return a response to the caller, saying that the operation is in progress.
Once the async task has finished, I use cf.thenRun() to update the operation status via the APIOperationsManager.
Here is the code:
@GetMapping(path="/nonBlockingEndpoint")
public @ResponseBody ResponseOperation nonBlocking() {

    // Register a new operation
    APIOperationsManager apiOpsManager = APIOperationsManager.getInstance();
    final int operationID = apiOpsManager.registerNewOperation(Constants.OpStatus.PROCESSING);

    ResponseOperation response = new ResponseOperation();
    response.setMessage("Triggered non-blocking call, use the operation id to check status");
    response.setOperationID(operationID);
    response.setOpRes(Constants.OpStatus.PROCESSING);

    CompletableFuture<Boolean> cf = CompletableFuture.supplyAsync(() -> {
        try {
            // Here we will
            Thread.sleep(10000L);
        } catch (InterruptedException e) {}
        // whatever the return value was
        return true;
    });

    cf.thenRun(() -> {
        // We are done with the super long process, so update our Operations Manager
        APIOperationsManager a = APIOperationsManager.getInstance();
        boolean asyncSuccess = false;
        try { asyncSuccess = cf.get(); }
        catch (Exception e) {}
        if (true == asyncSuccess) {
            a.updateOperationStatus(operationID, Constants.OpStatus.OK);
            a.updateOperationMessage(operationID, "success: The long running process has finished and this is your result: SOME RESULT");
        } else {
            a.updateOperationStatus(operationID, Constants.OpStatus.INTERNAL_ERROR);
            a.updateOperationMessage(operationID, "error: The long running process has failed.");
        }
    });

    return response;
}
Here is also the APIOperationsManager.java for completeness:
public class APIOperationsManager {

    private static APIOperationsManager instance = null;
    private Vector<Operation> operations;
    private int currentOperationId;
    private static final Logger log = LoggerFactory.getLogger(Application.class);

    protected APIOperationsManager() {}

    public static APIOperationsManager getInstance() {
        if (instance == null) {
            synchronized (APIOperationsManager.class) {
                if (instance == null) {
                    instance = new APIOperationsManager();
                    instance.operations = new Vector<Operation>();
                    instance.currentOperationId = 1;
                }
            }
        }
        return instance;
    }

    public synchronized int registerNewOperation(OpStatus status) {
        cleanOperationsList();
        currentOperationId = currentOperationId + 1;
        Operation newOperation = new Operation(currentOperationId, status);
        operations.add(newOperation);
        log.info("Registered new Operation to watch: " + newOperation.toString());
        return newOperation.getId();
    }

    public synchronized Operation getOperation(int id) {
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                return op;
            }
        }
        Operation notFound = new Operation(-1, OpStatus.INTERNAL_ERROR);
        notFound.setCrated(null);
        return notFound;
    }

    public synchronized void updateOperationStatus(int id, OpStatus newStatus) {
        iteration:
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setStatus(newStatus);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    public synchronized void updateOperationMessage(int id, String message) {
        iteration:
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setMessage(message);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    private synchronized void cleanOperationsList() {
        Date now = new Date();
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if ((now.getTime() - op.getCrated().getTime()) >= Constants.MIN_HOLD_DURATION_OPERATIONS) {
                log.info("Removed operation from watchlist: " + op.toString());
                iterator.remove();
            }
        }
    }
}
The questions that I have
Is that concept a valid one that also scales? What could be improved?
Will I run into concurrency issues / race conditions?
Is there a better way to achieve the same thing in Spring Boot that I just didn't find yet? (Maybe with the @Async annotation?)
I would be very happy to get your feedback.
Thank you so much,
Peter P
It is a valid pattern to submit a long running task with one request, returning an id that allows the client to ask for the result later.
But there are some things I would suggest you reconsider:
Do not use an integer as the id, as it allows an attacker to guess ids and get the results for those ids. Instead use a random UUID.
If you need to restart your application, all ids and their results will be lost. You should persist them to a database.
Your solution will not work in a cluster with many instances of your application, as each instance would only know its 'own' ids and results. This could also be solved by persisting them to a database or a Redis store.
The way you are using CompletableFuture gives you no control over the number of threads used for the asynchronous operation. It is possible to do this with standard Java, but I would suggest using Spring to configure the thread pool.
Annotating the controller method with @Async is not an option; that simply does not work. Instead, put all asynchronous operations into a simple service and annotate that with @Async (see the sketch at the end of this answer). This has some advantages:
You can use this service also synchronously, which makes testing a lot easier
You can configure the thread pool with Spring
The /nonBlockingEndpoint should not return just the id, but a complete link to queryOpStatus, including the id. The client can then directly use this link without any additional information.
Additionally, there are some low-level implementation issues which you may also want to change:
Do not use Vector, it synchronizes on every operation. Use a List instead. Iterating over a List is also much easier, you can use for-loops or streams.
If you need to lookup a value, do not iterate over a Vector or List, use a Map instead.
APIOperationsManager is a singleton. That makes no sense in a Spring application. Make it a normal POJO and create a bean of it; have it autowired into the controller. Spring beans are singletons by default.
You should avoid doing complicated operations in a controller method. Instead move everything into a service (which may be annotated with @Async). This makes testing easier, as you can test the service without a web context.
Hope this helps.
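A minimal sketch of the service-plus-thread-pool approach described above; all class, bean, and method names here are assumptions rather than part of your original code:
import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {

    // Explicit pool so the number of concurrent long-running operations stays bounded
    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.initialize();
        return executor;
    }
}

@Service
class LongRunningOperationService {

    // Runs on the pool above; the controller only triggers it and returns immediately
    @Async
    public void runOperation(String operationId) {
        // the 20-40 second work goes here, followed by updating the status store
    }
}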
Do I need to make database access transactional?
As long as you write/update only one row, there is no need to make this transactional, as that is indeed atomic.
If you write/update many rows at once, you should make it transactional to guarantee that either all rows are updated or none.
However, if two operations (possibly from two clients) update the same row, the last one will always win.
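For the multi-row case, a sketch of what such a transactional update could look like; the service, repository, and method names below are assumptions, not part of the original code:
@Service
public class OperationResultWriter {

    private final OperationRepository repository; // hypothetical Spring Data repository

    public OperationResultWriter(OperationRepository repository) {
        this.repository = repository;
    }

    // Both updates are committed together; on any failure, both are rolled back.
    @Transactional
    public void storeResult(UUID operationId, String status, String message) {
        repository.updateStatus(operationId, status);
        repository.updateMessage(operationId, message);
    }
}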
