GWT RequestFactory strange behavior: no error reporting - Spring

I have a problem, and I can't figure out why it happens...
GWT beginner, working on a personal project.
Environment:
maven project with two modules
one module is the 'model', and has Hibernate, HSQLDB and Spring dependencies. HSQLDB runs embedded, in memory, configured from spring applicationContext.xml
the other module is the 'web' and has all GWT dependencies
The application is built using some Spring Roo generated code as basis, later modified and extended.
The issue is that, when editing some entity fields and pressing save, nothing happens. There is no problem when creating a new entity instance; only when editing does changing some fields and pressing 'save' effectively discard the new values.
So I started to thoroughly debug the client code and enabled detailed Hibernate and Spring logging, but still... nothing.
Then I made a surprising (for me) discovery.
Inspecting the GWT response payload, I have seen this:
{"S":[false],"O": [{"T":"663_uruC_g7F5h5IXBGvTP3BBKM=","V":"MS4w","S":"IjMi","O":"UPDATE"}],"I":[{"F":true,"M":"Server Error: org.hibernate.PersistentObjectException: detached entity passed to persist: com.myvdm.server.domain.Document; nested exception is javax.persistence.PersistenceException: org.hibernate.PersistentObjectException: detached entity passed to persist: com.myvdm.server.domain.Document"}]}
Aha, detached entity passed to persist!
Please note that the GWT client code uses this snippet to call the service:
requestContext.persist().using(proxy);
Arguably this could trigger the exception, and calling merge() could solve the problem; however, read on to question 3...
Three questions arise now:
Why isn't this somehow sent to the client as an error/exception?
Why isn't this logged by Hibernate?
How come the Spring Roo generated code (as I said, used as basis) works without manifesting this problem?
Thanks a lot,
Awaiting some opinions/suggestions.
EDITED AFTER T. BROYER'S RESPONSE:
Hi Thomas, thanks for the response.
I have a custom class that implements RequestTransport and implements send(). This is how I collected the response payload. Implementation follows:
public void send(String payload, final TransportReceiver receiver) {
    TransportReceiver myReceiver = new TransportReceiver() {
        @Override
        public void onTransportSuccess(String payload) {
            try {
                receiver.onTransportSuccess(payload);
            } finally {
                eventBus.fireEvent(new RequestEvent(RequestEvent.State.RECEIVED));
            }
        }

        @Override
        public void onTransportFailure(ServerFailure failure) {
            try {
                receiver.onTransportFailure(failure);
            } finally {
                eventBus.fireEvent(new RequestEvent(RequestEvent.State.RECEIVED));
            }
        }
    };
    try {
        wrapped.send(payload, myReceiver);
    } finally {
        eventBus.fireEvent(new RequestEvent(RequestEvent.State.SENT));
    }
}
Here's the code that is executed when 'save' button is clicked in edit mode:
RequestContext requestContext = editorDriver.flush();
if (editorDriver.hasErrors()) {
    return;
}
requestContext.fire(new Receiver<Void>() {
    @Override
    public void onFailure(ServerFailure error) {
        if (editorDriver != null) {
            setWaiting(false);
            super.onFailure(error);
        }
    }

    @Override
    public void onSuccess(Void ignore) {
        if (editorDriver != null) {
            editorDriver = null;
            exit(true);
        }
    }

    @Override
    public void onConstraintViolation(Set<ConstraintViolation<?>> errors) {
        if (editorDriver != null) {
            setWaiting(false);
            editorDriver.setConstraintViolations(errors);
        }
    }
});
Based on what you said, onSuccess() should be called, and it is indeed called.
So how do I isolate exactly the code that creates the problem? I have this method that creates a fresh request context in order to persist the object:
@Override
protected RequestContext createSaveRequestContextFor(DocumentProxy proxy) {
    DocumentRequestContext request = requests.documentRequestContext();
    request.persist().using(proxy);
    return request;
}
and this is how it is called:
editorDriver.edit(getProxy(), createSaveRequestContextFor(getProxy()));
As for the Spring problem, you are saying that the JPA EntityManager should not be closed between the two subsequent requests, the find() and the persist(). I am still investigating this, but after I press the edit button I see the message 'org.springframework.orm.jpa.EntityManagerFactoryUtils - Closing JPA EntityManager', and that is not right; maybe the @Transactional annotation is not being applied...

Why isn't this somehow sent to the client as an error/exception?
It is. The "S": [false] indicates the first (and only) method invocation (remember, a RequestContext is a batch!) has failed. The onFailure method of the invocation's Receiver will be called.
The "F": true of the ServerFailure then says it's a fatal error, so the default implementation of Receiver#onFailure would throw a RuntimeException. However, as you do not use a Receiver at all, nothing happens and the error is silently ignored.
Note that the batch request in itself has succeeded, so the global Receiver (the one you'd pass to RequestContext#fire) would have its onSuccess method called.
Also note that Request#fire(Receiver) is a shorthand for Request#to(Receiver) followed by RequestContext#fire() (with no argument).
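To make the per-invocation failure visible, the invocation itself can be given a Receiver. A minimal sketch based on the snippets above (Receiver and ServerFailure come from com.google.web.bindery.requestfactory.shared; the method name is made up):
private void persistWithInvocationReceiver(DocumentProxy proxy) {
    DocumentRequestContext request = requests.documentRequestContext();
    request.persist().using(proxy).to(new Receiver<Void>() {
        @Override
        public void onSuccess(Void response) {
            // this particular invocation succeeded
        }

        @Override
        public void onFailure(ServerFailure failure) {
            // reached when the payload carries "S":[false] for this invocation,
            // e.g. the "detached entity passed to persist" error above
            GWT.log("persist failed: " + failure.getMessage());
        }
    });
    request.fire(); // a batch-level Receiver passed here would still get onSuccess
}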
Why isn't this logged by Hibernate?
This I don't know, sorry.
How come the Spring Roo generated code (as I said, used as basis) works without manifesting this problem?
OK, let's explore the underlying reason of the exception: the entity is loaded by your Locator (or the entity class's findXxx static method) and then the persist method is called on the instance. If you do not use the same JPA EntityManager / Hibernate session in the find and persist methods, then you'll have the issue.
Request Factory expects you to use the open session in view pattern to overcome this. I unfortunately do not know what kind of code Spring Roo generates.

Regarding the open session in view pattern Thomas mentioned, just add these filter definitions to your web.xml to turn on the pattern in your Spring application:
<filter>
    <filter-name>Spring OpenEntityManagerInViewFilter</filter-name>
    <filter-class>org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>Spring OpenEntityManagerInViewFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Related

JOOQ execution listener does not catch exception

I'm trying to implement a generic solution for optimistic locking. What I want to achieve is to have a specific piece of code run when a record's version changes. I have it implemented as an ExecuteListener instance that looks for DataChangedException. It's registered as a Spring bean.
import org.jooq.ExecuteContext
import org.jooq.exception.DataChangedException
import org.jooq.impl.DefaultExecuteListener
import org.jooq.impl.DefaultExecuteListenerProvider
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

class LockingListener : DefaultExecuteListener() {
    override fun exception(ctx: ExecuteContext) {
        val exception = ctx.exception()
        if (exception is DataChangedException) {
            ctx.exception(IllegalStateException("Accessed data has been altered mid-operation."))
        }
    }
}

@Configuration
class JooqConfig {
    @Bean
    fun lockingListenerProvider() = DefaultExecuteListenerProvider(LockingListener())
}
I had a breakpoint set in org.jooq.impl.ExecuteListeners#get and it does look like it gets picked up alongside LoggerListener and JooqExceptionTranslator.
When I try to run a test case though, DataChangedException does not get picked up on UpdatableRecord#update and I get the following stack trace instead, no IllegalStateException in sight.
org.jooq.exception.DataChangedException: Database record has been changed or doesn't exist any longer
at org.jooq.impl.UpdatableRecordImpl.checkIfChanged(UpdatableRecordImpl.java:540)
at org.jooq.impl.UpdatableRecordImpl.storeMergeOrUpdate0(UpdatableRecordImpl.java:349)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate0(UpdatableRecordImpl.java:241)
at org.jooq.impl.UpdatableRecordImpl.access$100(UpdatableRecordImpl.java:89)
at org.jooq.impl.UpdatableRecordImpl$2.operate(UpdatableRecordImpl.java:232)
at org.jooq.impl.RecordDelegate.operate(RecordDelegate.java:149)
at org.jooq.impl.UpdatableRecordImpl.storeUpdate(UpdatableRecordImpl.java:228)
at org.jooq.impl.UpdatableRecordImpl.update(UpdatableRecordImpl.java:165)
Debugging shows that LockingListener#exception is not even entered.
That exception is not part of the ExecuteListener lifecycle, i.e. the lifecycle that deals with interactions with the JDBC API. In other words, it's not a SQLException; it happens higher up the stack. Use the RecordListener.exception() callback instead.
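A minimal Java sketch of that suggestion (class names are made up; depending on your Spring Boot version you may have to attach the RecordListenerProvider to the jOOQ Configuration yourself, and translating the exception into a different one may still need to happen where update() is called):
import org.jooq.RecordContext;
import org.jooq.exception.DataChangedException;
import org.jooq.impl.DefaultRecordListener;
import org.jooq.impl.DefaultRecordListenerProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Record-level listener: UpdatableRecord#update() goes through the
// RecordListener lifecycle, so DataChangedException shows up here.
class LockingRecordListener extends DefaultRecordListener {

    @Override
    public void exception(RecordContext ctx) {
        if (ctx.exception() instanceof DataChangedException) {
            // run the "record version changed" handling here
            // (logging, metrics, marking the operation for retry, ...)
        }
    }
}

@Configuration
class JooqRecordListenerConfig {

    @Bean
    DefaultRecordListenerProvider lockingRecordListenerProvider() {
        return new DefaultRecordListenerProvider(new LockingRecordListener());
    }
}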

Jooq configuration per request

I'm struggling to find a way to define some settings in DSLContext per request.
What I want to achieve is the following:
I've got a Spring Boot API and a database with multiple schemas that share the same structure.
Depending on some parameters of each request I want to connect to one specific schema; if no parameter is set I don't want to connect to any schema and fail instead.
To not connect to any schema I wrote the following:
@Autowired
public DefaultConfiguration defaultConfiguration;

@PostConstruct
public void init() {
    Settings currentSettings = defaultConfiguration.settings();
    Settings newSettings = currentSettings.withRenderSchema(false);
    defaultConfiguration.setSettings(newSettings);
}
Which I think works fine.
Now I need a way to set the schema in DSLContext per request, so every time I use DSLContext during a request I automatically get a connection to that schema, without affecting other requests.
My idea is to intercept the request, get the parameters and do something like "DSLContext.setSchema()" but in a way that applies to all usage of DSLContext during the current request.
I tried to define a request-scoped bean of a custom ConnectionProvider as follows:
@Component
@RequestScope
public class ScopeConnectionProvider implements ConnectionProvider {

    @Autowired
    private DataSource dataSource; // injected; not shown in the original snippet

    @Override
    public Connection acquire() throws DataAccessException {
        try {
            Connection connection = dataSource.getConnection();
            String schemaName = getSchemaFromRequestContext();
            connection.setSchema(schemaName);
            return connection;
        } catch (SQLException e) {
            throw new DataAccessException("Error getting connection from data source " + dataSource, e);
        }
    }

    @Override
    public void release(Connection connection) throws DataAccessException {
        try {
            connection.setSchema(null);
            connection.close();
        } catch (SQLException e) {
            throw new DataAccessException("Error closing connection " + connection, e);
        }
    }
}
But this code only executes on the first request. Following requests don't execute this code and hence it uses the schema of the first request.
Any tips on how this can be done?
Thank you
Seems like your request-scoped bean is getting injected into a singleton.
You're already using @RequestScope, which is good, but you may have forgotten to add @EnableAspectJAutoProxy on your Spring configuration class.
@Configuration
@EnableAspectJAutoProxy
class Config {
}
This will make your bean run within a proxy inside of the singleton and therefore change per request.
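For completeness, a sketch of how the request-scoped provider could be plugged into the jOOQ configuration, assuming you build the Configuration yourself rather than relying on Spring Boot's auto-configuration (the dialect is an assumption):
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DefaultConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class JooqConfig {

    @Bean
    DSLContext dslContext(ScopeConnectionProvider scopeConnectionProvider) {
        // scopeConnectionProvider is the request-scoped proxy; every acquire()
        // is routed to the instance bound to the current request
        DefaultConfiguration configuration = new DefaultConfiguration();
        configuration.set(scopeConnectionProvider);
        configuration.set(SQLDialect.POSTGRES); // dialect is an assumption
        return DSL.using(configuration);
    }
}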
Never mind, it seems the problem I was having was caused by the unexpected behaviour of a cacheable function I defined. The function returns a value from the cache although the input is different, which is why no new connection is acquired. I still need to figure out what causes this unexpected behaviour, though.
For now, I'll stick with this approach since it seems fine at a conceptual level, although I expect there is a better way to do this.
*** UPDATE ***
I found out that this was the problem I had with the cache: Does java spring caching break reflection?
*** UPDATE 2 ***
Seems that setting the schema on the underlying data source is ignored. I'm currently trying this other approach I just found (https://github.com/LinkedList/spring-jooq-multitenancy).

Spring @Transactional with @Async - timeout value is not working

I have created an asynchronous service for a long-running stored procedure call. Things work well, but the transaction is not timed out after the value specified in the timeout attribute of the @Transactional annotation. The structure of the code is given below (not the real one... just a skeleton... ignore semantics/syntax):
// asynchronous service
@Override
@Async("myCustomTaskExecutor")
@Transactional(rollbackFor = Exception.class, timeout = 600)
public void serviceMethod() {
    // repository method is invoked
    repository.callStoredProcedure();
}

// repository method in the Repository class
@Transactional(rollbackFor = Exception.class, timeout = 600)
public void callStoredProcedure() {
    // the stored procedure is called from a private method using Hibernate's doWork implementation
    privateCallmethod();
}
private void privateCallmethod() throws ApplicationException {
    Session session = null;
    try {
        session = entityManager.unwrap(Session.class);
        session.doWork(new Work() {
            @Override
            public void execute(Connection connection) throws SQLException {
                OracleCallableStatement statement = null;
                try {
                    // using Hibernate 4.x and ref cursors, so went with this approach;
                    // suggest if there is a better one
                    String sqlString = "{begin storProcName(?,?)}";
                    statement = (OracleCallableStatement) connection.prepareCall(sqlString);
                    statement.setInt(1, 5);
                    statement.setString(2, "userName5");
                    statement.executeUpdate();
                } catch (Exception e) {
                    throw new RuntimeException(e.getMessage());
                } finally {
                    if (statement != null)
                        statement.close();
                }
            }
        });
    } catch (Exception e) {
        throw new ApplicationException(e.getMessage());
    }
    // Not using a finally block to close the session. Is that an issue?
}
The delay happens on the stored procedure side (Thread.sleep(700) is not used), yet the transaction is not timed out...
Questions:
I guess @Transactional is enough on the service method alone... give a little insight into the correct approach to using the @Transactional annotation for this code setup.
Will @Transactional work for the JDBC calls inside the doWork interface implementation... is that what the issue is?
Some articles suggest using oracle.jdbc.readTimeout or setQueryTimeout on the CallableStatement... Is that the right way to achieve this?
Kindly point out the mistakes and explain the causes.
If a @Transactional-annotated method is not the entry point to the class (i.e. it is invoked from within the same class), it will not be transactional unless you enable AspectJ weaving (Spring's default is proxy-based AOP): https://stackoverflow.com/a/17698587/6785908
You should invoke callStoredProcedure() from outside this class; then it will be transactional. If you invoke serviceMethod(), which in turn invokes callStoredProcedure(), then callStoredProcedure()'s own annotation will not be applied.
I used the setQueryTimeout() approach to resolve the issue, as the @Transactional timeout does not work with the Hibernate doWork() method... I guess it's because the Hibernate Work executes in a different thread and uses low-level JDBC methods to invoke the stored procedures...
NOTE: This particular application uses a Spring 3.x version and Hibernate 4.x with the JPA 2.0 spec... slightly outdated versions.
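For reference, a sketch of that setQueryTimeout() approach, mirroring the skeleton from the question and using the standard JDBC call escape syntax; the timeout is enforced by the JDBC driver on the statement itself, independently of the Spring-managed transaction:
session.doWork(new Work() {
    @Override
    public void execute(Connection connection) throws SQLException {
        try (CallableStatement statement = connection.prepareCall("{call storProcName(?,?)}")) {
            statement.setQueryTimeout(600); // seconds, mirrors the timeout attribute above
            statement.setInt(1, 5);
            statement.setString(2, "userName5");
            statement.executeUpdate();
        }
    }
});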

Server-side schema validation with JAX-WS

I have a JAX-WS container-less service (published via Endpoint.publish() right from main()). I want my service to validate input messages. I have tried the following annotation: @SchemaValidation(handler=MyErrorHandler.class) and implemented an appropriate class. When I start the service, I get the following:
Exception in thread "main" javax.xml.ws.WebServiceException:
Annotation @com.sun.xml.internal.ws.developer.SchemaValidation(outbound=true,
inbound=true, handler=class mypackage.MyErrorHandler) is not recognizable,
atleast one constructor of class
com.sun.xml.internal.ws.developer.SchemaValidationFeature
should be marked with @FeatureConstructor
I have found a few solutions on the internet; all of them imply the use of a WebLogic container. I can't use a container in my case, I need an embedded service. Can I still use schema validation?
The @SchemaValidation annotation is not defined in the JAX-WS spec; validation is left open to the implementation. This means you need something more than just the classes in the JDK.
As long as you are able to add some jars to your classpath, you can set this up pretty easily using Metro (which is also included in WebLogic; this is why you find solutions that use WebLogic as the container). To be more precise, you need to add two jars to your classpath. I'd suggest to:
download the most recent Metro release,
unzip it somewhere,
add jaxb-api.jar and jaxws-api.jar to your classpath. You can do this, for example, by putting them into JAVA_HOME/lib/endorsed or by manually adding them to your project; this largely depends on the IDE or whatever else you are using.
Once you have done this, your MyErrorHandler should work even if it is deployed via Endpoint.publish(). At least I have this setup locally and it compiles and works.
If you are not able to modify your classpath and need validation, you will have to validate the request manually using JAXB.
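For illustration, a minimal sketch of such a container-less setup with the Metro jars on the classpath (MyService, the echo operation and the URL are made up; MyErrorHandler is the handler class from the question):
import javax.jws.WebService;
import javax.xml.ws.Endpoint;
import com.sun.xml.ws.developer.SchemaValidation; // Metro package, not com.sun.xml.internal.ws

@WebService
@SchemaValidation(handler = MyErrorHandler.class)
public class MyService {

    public String echo(String input) {
        return input;
    }

    // container-less publishing, right from main()
    public static void main(String[] args) {
        Endpoint.publish("http://localhost:8000/ws/my-service", new MyService());
    }
}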
Old question, but I solved the problem using the correct package and minimal configuration, as well as using only the services provided by WebLogic. I was hitting the same problem as you.
Just make sure you use the correct Java type, as I described here.
As I am planning to expand this into a tracking mechanism, I also implemented a custom error handler.
Web Service with custom validation handler
import com.sun.xml.ws.developer.SchemaValidation;

@Stateless
@WebService(portName = "ValidatedService")
@SchemaValidation(handler = MyValidator.class)
public class ValidatedService {
    public ValidatedResponse operation(@WebParam(name = "ValidatedRequest") ValidatedRequest request) {
        /* do business logic */
        return response;
    }
}
Custom Handler to log and store error in database
public class MyValidator extends ValidationErrorHandler {

    private static java.util.logging.Logger log = LoggingHelper.getServerLogger();

    @Override
    public void warning(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    @Override
    public void error(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    @Override
    public void fatalError(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    private void handleException(SAXParseException e) throws SAXException {
        log.log(Level.SEVERE, "Validation error", e);
        // record in database for tracking etc.
        throw e;
    }
}

Controlling inner transaction settings from outer transaction with Spring 2.5

I'm using Spring 2.5 transaction management and I have the following set-up:
Bean1
@Transactional(noRollbackFor = { Exception.class })
public void execute() {
    try {
        bean2.execute();
    } catch (Exception e) {
        // persist failure in database (so the transaction shouldn't fail)
        // the exception is not re-thrown
    }
}
Bean2
@Transactional
public void execute() {
    // do something which throws a RuntimeException
}
The failure is never persisted into DB from Bean1 because the whole transaction is rolled back.
I don't want to add noRollbackFor in Bean2 because it's used in a lot of places which don't have logic to handle runtime exceptions properly.
Is there a way to avoid my transaction being rolled back only when Bean2.execute() is called from Bean1?
Otherwise, I guess my best option is to persist my failure within a new transaction? Anything else clean I can do?
This is one of the caveats of annotations... your class is not reusable!
If you had configured your transactions in XML, it would have been possible.
Assuming you use XML configuration: if it's not consuming expensive resources, you can create another instance of bean2 for the use of the code you specified. That is, you can configure one bean as you specified above, and one with no rollback for exceptions.
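If you instead go with the fallback mentioned in the question, persisting the failure in a new transaction, a minimal sketch could look like this (FailureLogService and recordFailure are made-up names; it only works when the method is called through the Spring proxy, i.e. from another bean such as Bean1):
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class FailureLogService {

    // Suspends the caller's transaction and commits this one independently,
    // so the failure record survives a rollback of the outer transaction.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void recordFailure(Exception e) {
        // persist the failure details here (DAO, JdbcTemplate, ...)
    }
}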
