I have a problem with async controllers in Grails. Consider the following controller:
@Transactional(readOnly=true)
class RentController {

    def myService
    UserProperties props

    def beforeInterceptor = {
        this.props = fetchUserProps()
    }

    //..other actions

    @Transactional
    def rent(Long id) {
        //check some preconditions here, calling various service methods...
        if (!allOk) {
            render status: 403, text: 'appropriate.message.key'
            return
        }
        //now we long poll because most of the time the result will be
        //success within a couple of seconds
        AsyncContext ctx = startAsync()
        ctx.timeout = 5 * 1000 * 60 + 5000
        ctx.start {
            try {
                //wait for external service to confirm - can take a long time or even time out
                //save appropriate domain objects if successful
                //placeRental is also marked with @Transactional (if that makes any difference)
                def result = myService.placeRental()
                if (result.success) {
                    render text: "OK", status: 200
                } else {
                    render status: 400, text: "rejection.reason.${result.rejectionCode}"
                }
            } catch (Throwable t) {
                log.error "Rental process failed", t
                render text: "Rental process failed with exception ${t?.message}", status: 500
            } finally {
                ctx.complete()
            }
        }
    }
}
The controller and service code appear to work fine (though the above code is simplified) but will sometimes cause a database session to get 'stuck in the past'.
Let's say I have a UserProperties instance whose accountId property is updated from 1 to 20 somewhere else in the application while a rent action is waiting in the async block. As the async block eventually terminates one way or another (it may succeed, fail or time out), the app will sometimes get a stale UserProperties instance with accountId: 1. If I refresh the updated user's properties page, I will see accountId: 1 about 1 time in 10 refreshes, while the rest of the time it will be 20 - and this is on my development machine where no one else is accessing the application (though the same behaviour can be observed in production). My connection pool also holds 10 connections, so I suspect there may be a correlation here.
Other strange things happen too - for example, I will get StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) from actions doing something as simple as render (UserProperties.list() as JSON) - after the response had already rendered (successfully, apart from the noise in the logs) and despite the action being annotated with @Transactional(readOnly=true).
A stale session doesn't appear every time, and so far our workaround has been to restart the server every evening (the app has few users for now), but the error is annoying and the cause was hard to pinpoint. My guess is that a DB transaction doesn't get committed or rolled back because of the async code, but GORM, Spring and Hibernate have many nooks and crannies where things could get stuck.
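To illustrate the mechanism I suspect, here is a plain-servlet sketch with no Grails in it (the comments describe my guess at what happens, not confirmed behaviour):

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/rent-sketch", asyncSupported = true)
public class RentSketchServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();
        ctx.start(new Runnable() {
            public void run() {
                try {
                    // This runs on a container-managed thread, after doGet has
                    // returned. Anything bound to the original request thread
                    // (e.g. a Hibernate session held in a ThreadLocal) is not
                    // bound here, and the request thread's cleanup has already
                    // run. If a session/connection stays open on this thread,
                    // it could go back to the pool mid-transaction and later
                    // serve other requests a stale snapshot.
                } finally {
                    ctx.complete();
                }
            }
        });
        // doGet returns here; per-request session handling
        // (OpenSessionInView-style) runs its after-request logic now
    }
}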
We're using Postgres 9.4.1 (9.2 on a dev machine, same problem), Grails 2.5.0, Hibernate plugin 4.3.8.1, Tomcat 8, Cache plugin 1.1.8, Hibernate Filter plugin 0.3.2 and the Audit Logging plugin 1.0.1 (other stuff too, obviously, but this feels like it could be relevant). My datasource config contains:
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = false
    cache.region.factory_class = 'org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory'
    singleSession = true
    flush.mode = 'manual'
    format_sql = true
}
A Grails bug. And a nasty one: everything seems OK until your app starts acting funny in completely unrelated parts of the app.
I have code similar to the one below. Every time a DB lock appears, I want Dynatrace to raise an alert that creates a problem, so that I can see it on the dashboard and possibly get an email notification as well. A stale DB lock is considered present if the update count is greater than 0 (see the check in the code).
private int removeDBLock(DataSource dataSource) {
    int updateCount = 0;
    final Timestamp lastAllowedDBLockTime = new Timestamp(System.currentTimeMillis() - (5 * 60 * 1000));
    final String query = format(RELEASE_DB_CHANGELOCK, lastAllowedDBLockTime.toString());
    // close the Connection as well as the Statement - closing only the Statement leaks the Connection
    try (Connection connection = dataSource.getConnection();
         Statement stmt = connection.createStatement()) {
        updateCount = stmt.executeUpdate(query);
        if (updateCount > 0) {
            log.error("Stale DB Lock found. Locks Removed Count is {}.", updateCount);
        }
    } catch (SQLException e) {
        log.error("Error while trying to find and remove Db Change Lock.", e);
    }
    return updateCount;
}
I tried using the Events API to trigger an event on my host, as described here, and was successful in raising a problem alert on my dashboard:
https://www.dynatrace.com/support/help/dynatrace-api/environment-api/events/post-event/?request-parameters%3C-%3Ejson-model=json-model
but this would mean injecting an API call into my code just for monitoring, and it may lead to more external dependencies and hence more chances of failure.
I also tried creating a custom service detection by adding the class containing this method, and the method itself, to the custom service. But I do not know how to link this to an alert or an event that creates a problem on the dashboard.
Are there any best practices or solutions for how I can do this in Dynatrace? Any leads would be helpful.
I would take a look at Custom Services for Java, which will cause invocations of the method to be monitored in more detail.
Maybe you can extract a method which actually throws the exception and an outer method which handles it; then it should be possible to alert on the exception (a sketch follows below).
There are also some more ways to configure the service via settings, e.g. raising an error based on a return value directly.
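For instance, a minimal sketch of that extraction (the class and exception names here are made up; only the shape matters):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: detectStaleLocks() becomes the entry point of a
// Dynatrace custom service, so the thrown exception (or the return value)
// can be used to raise an error on the service.
public class DbLockMonitor {

    private static final Logger log = LoggerFactory.getLogger(DbLockMonitor.class);

    /** Thrown purely so monitoring can see the stale-lock condition. */
    public static class StaleDbLockException extends RuntimeException {
        StaleDbLockException(int count) {
            super("Stale DB lock found. Locks removed count is " + count + ".");
        }
    }

    /** Inner method: throws when stale locks were removed. */
    void detectStaleLocks(int updateCount) {
        if (updateCount > 0) {
            throw new StaleDbLockException(updateCount);
        }
    }

    /** Outer method: keeps the original behavior of logging and moving on. */
    void handleLockCheck(int updateCount) {
        try {
            detectStaleLocks(updateCount);
        } catch (StaleDbLockException e) {
            log.error(e.getMessage(), e);
        }
    }
}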
See also documentation:
https://www.dynatrace.com/support/help/how-to-use-dynatrace/transactions-and-services/configuration/define-custom-services/
https://www.dynatrace.com/support/help/technology-support/application-software/java/configuration-and-analysis/define-custom-java-services/
I have a business application with the following versions:
spring-boot (2.2.0.RELEASE)
spring-kafka (2.3.1.RELEASE)
spring-cloud-stream-binder-kafka (2.2.1.RELEASE)
spring-cloud-stream-binder-kafka-core (3.0.3.RELEASE)
spring-cloud-stream-binder-kafka-streams (3.0.3.RELEASE)
We have around 20 batches. Each batch uses 6-7 topics to handle the business. Each service has its own state store to maintain the status of the batch, i.e. whether it's running or idle.
I'm using the code below to query the store:
@Autowired
private InteractiveQueryService interactiveQueryService;

public ReadOnlyKeyValueStore<String, String> fetchKeyValueStoreBy(String storeName) {
    while (true) {
        try {
            log.info("Waiting for state store");
            return new ReadOnlyKeyValueStoreWrapper<>(interactiveQueryService.getQueryableStore(storeName,
                    QueryableStoreTypes.<String, String> keyValueStore()));
        } catch (final IllegalStateException e) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e1) {
                Thread.currentThread().interrupt(); // restore the interrupt flag instead of swallowing it
                e1.printStackTrace();
            }
        }
    }
}
When deploying the application on one instance (a Linux machine), everything works fine. When deploying the application on two instances, we made the following observations:
The state store is available on one instance and the other doesn't have it.
When a request is processed by the instance which has the state store, everything is fine.
If a request falls to the instance which does not have the state store, the application waits in the while loop indefinitely (code snippet above).
While the instance without the store is waiting indefinitely, if we kill the other instance, the above code returns the store and processing continues perfectly.
We have no clue what we are missing.
When you have multiple Kafka Streams processors running with interactive queries, the code that you showed above will not work the way you expect. It only returns results if the key that you are querying is on the same server. In order to fix this, you need to add the property spring.cloud.stream.kafka.streams.binder.configuration.application.server: <server>:<port> on each instance. Make sure to change the server and port to the correct ones on each instance. Then you have to write code similar to the following:
org.apache.kafka.streams.state.HostInfo hostInfo = interactiveQueryService.getHostInfo("store-name",
        key, keySerializer);

if (interactiveQueryService.getCurrentHostInfo().equals(hostInfo)) {
    //query from the store that is locally available
} else {
    //query from the remote host
}
Please see the reference docs for more information.
Here is a sample application that demonstrates this.
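To make the two-instance routing concrete, here is a minimal sketch of the idea. Note that Kafka Streams does not ship an RPC layer, so the HTTP endpoint used for the remote branch (/store/{storeName}/{key}) is a made-up convention each instance would have to expose itself:

import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.web.client.RestTemplate;

public class StoreQueryService {

    private final InteractiveQueryService interactiveQueryService;
    private final RestTemplate restTemplate = new RestTemplate();

    public StoreQueryService(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public String findValue(String storeName, String key) {
        // Which instance hosts this key? Requires application.server to be set.
        HostInfo hostInfo = interactiveQueryService.getHostInfo(storeName, key, new StringSerializer());

        if (interactiveQueryService.getCurrentHostInfo().equals(hostInfo)) {
            // The key is in the local store.
            ReadOnlyKeyValueStore<String, String> store = interactiveQueryService
                    .getQueryableStore(storeName, QueryableStoreTypes.<String, String>keyValueStore());
            return store.get(key);
        }

        // The key lives on the other instance: delegate over HTTP to the
        // (made-up) endpoint that instance exposes over its local store.
        String url = String.format("http://%s:%d/store/%s/%s",
                hostInfo.host(), hostInfo.port(), storeName, key);
        return restTemplate.getForObject(url, String.class);
    }
}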
I'm prototyping some simple audit logging functionality. I have a mid-sized entity model (~50 entities) and I'd like to implement audit logging on about 5 or 6 of them. Ultimately I'd like to get this working on inserts and deletes as well, but for now I'm just focusing on updates.
The problem is, when I call session.Save (or SaveOrUpdate) for my audit log entry from within the EventListener, the original object is persisted (updated) correctly, but my AuditLog object never gets inserted.
I think the problem is that both the Pre and Post event listeners are called too late in the NHibernate save life cycle for the session to still be usable.
//in my ISessionFactory Build method
nHibernateConfiguration.EventListeners.PreUpdateEventListeners =
    new IPreUpdateEventListener[] { new AuditLogListener() };

//in my AuditLogListener
public class AuditLogListener : IPreUpdateEventListener
{
    public bool OnPreUpdate(PreUpdateEvent @event)
    {
        string message = //code to look at @event.Entity & build message - this works
        if (!string.IsNullOrEmpty(message))
            AuditLogHelper.Log(message, @event.Session); //Session is an IEventSource
        return false; //Don't veto the change
    }
}
//In my helper
public static void Log(string message, IEventSource session)
{
    var user = session.QueryOver<User>()
        .Where(x => x.Name == "John")
        .SingleOrDefault();
    //have confirmed a valid user is found

    var logItem = new AdministrationAuditLog
    {
        LogDate = DateTime.Now,
        Message = message,
        User = user
    };

    (session as ISession).SaveOrUpdate(logItem);
}
When it hits session.SaveOrUpdate() in the last method, no errors occur and no exceptions are thrown; it seems to succeed and moves on. But nothing happens: the audit log entry never appears in the database.
The only way I've been able to get this to work is to create a completely new session and transaction inside this method, but this isn't really ideal: the code proceeds back out of the listener method, hits the session.Transaction.Commit() in my main app, and if that transaction fails, then I've got an orphaned log message in my audit table for something that never happened.
Any pointers on where I might be going wrong?
EDIT
I've also tried to SaveOrUpdate the logItem using a child session from the event, based on some comments in this thread: http://ayende.com/blog/3987/nhibernate-ipreupdateeventlistener-ipreinserteventlistener
var childSession = session.GetSession(EntityMode.Poco);
var logItem = new AdministrationAuditLog
{
    LogDate = DateTime.Now,
    Message = message,
    User = databaseLogin.User
};

childSession.SaveOrUpdate(logItem);
Still nothing appears in my Log table in the db. No errors or exceptions.
You need to create a child session, currentSession.GetSession(EntityMode.Poco), in your OnPreUpdate method and use it in your log method. Depending on your flush mode setting, you might need to flush the child session as well.
Also, is there any particular reason you want to roll your own solution? FYI, NHibernate Envers is now a pretty mature library.
I have a session-scoped class that contains an object to manage user statistics. When a user logs in (through SSO), an application-scoped method checks the table for active sessions; if any are found, the session is invalidated using the session id in the table.
A row is added to a userStats table in the session-scoped class:
/**
 * get all the info needed for collecting user stats, add userStats to the
 * user's session and save to the userStats table
 * this happens after the session is created
 * @param request
 */
private void createUserStats(HttpServletRequest request) {
    if (!this.sessionExists) {
        this.userStats = new UserStats(this.user, request.getSession(true)
                .getId(), System.getProperty("atcots_host_name"));
        request.getSession().setAttribute("userstats", this.userStats);
        Events.instance().raiseEvent("userBoundToSession", this.userStats);
        this.sessionExists = true;
        log.info("user " + this.user + " is now logged on");
        // add this to the db
        try {
            this.saveUserStatsToDb();
        } catch (Exception e) {
            log.error("could not save " + this.userStats.getId().getPeoplesoftId() + " information to db");
            e.printStackTrace();
        }
    }
}
When the user's session is destroyed this row is updated with a log off time.
For reasons I can't explain or duplicate, two users in the last two weeks have logged in and locked the row. When that happens, no further database calls by that user are possible and the application is effectively unusable for that user.
[org.hibernate.util.JDBCExceptionReporter] (http-127.0.0.1-8180-3) SQL Error: 0, SQLState: null
2012-07-26 18:45:53,427 ERROR [org.hibernate.util.JDBCExceptionReporter] (http-127.0.0.1-8180-3) Transaction is not active: tx=TransactionImple < ac, BasicAction: -75805e7d:3300:5011c807:6a status: ActionStatus.ABORT_ONLY >; - nested throwable: (javax.resource.ResourceException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -75805e7d:3300:5011c807:6a status: ActionStatus.ABORT_ONLY >)
The gathering of these stats is important, but not life and death: if I can't get the information, I'd like to give up and keep things moving. But that's not happening. What is happening is that the entityManager marks the transaction for rollback, and any db call after that returns the above error. I originally saved the user stats at application scope, so when the row locked, it locked the entityManager for the ENTIRE APPLICATION (this did not go over well). When I moved the method to session scope, it only locks out the offending user.
I tried setting the entityManager to a lesser scope (I tried EVENT and METHOD):
((EntityManager) Component.getInstance("entityManager", ScopeType.EVENT)).persist(this.userStats);
((EntityManager) Component.getInstance("entityManager", ScopeType.EVENT)).flush();
This doesn't make db calls at all.
I've tried manually rolling back the transaction, but no joy.
When I lock a row in a table whose data is used at the conversation scope level, the results are not nearly as catastrophic: no data is saved, but it recovers.
ETA:
I tried raising an AsynchronousEvent. That works locally, but deployed to our remote test server - and this is odd - I get:
DEBUG [org.quartz.core.JobRunShell] (qtz_Worker-1) Calling execute on job DEFAULT.2d0badb3:139030aec6e:-7f34
INFO [com.mypkg.myapp.criteria.SessionCriteria] (qtz_Worker-1) observing predestroy for seam
DEBUG [com.mypkg.myapp.criteria.SessionCriteria] (qtz_Worker-1) destroy destroy destroy sessionCriteria
ERROR [org.jboss.seam.async.AsynchronousExceptionHandler] (qtz_Worker-1) Exeception thrown whilst executing asynchronous call
java.lang.IllegalArgumentException: attempt to create create event with null entity
at org.hibernate.event.PersistEvent.<init>(PersistEvent.java:45)
at org.hibernate.event.PersistEvent.<init>(PersistEvent.java:38)
at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:619)
at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:623)
...
The odd bit is that it appears to be going through the Quartz handler.
ETA again:
So, not so odd after all: I had set Quartz as the async handler (I thought it was only for scheduling jobs). Also, asynchronous methods don't have access to the session context, so I had to add a parameter to my observing method to actually have an object to persist:
#Observer("saveUserStatsEvent")
#Transactional
public void saveUserStatsToDb(UserStats userstats) throws Exception {
if(userstats != null){
log.debug("persisting userstats to db");
this.getEntityManager().persist(userstats);
this.getEntityManager().flush();
}
}
How do I recover from this?
First of all, specifying a scope in Component.getInstance() does not have the result of creating the component in the scope specified. EntityManager instances always live in the conversation context (be it temporary or long-running). The scope parameter of getInstance() serves the sole purpose of hinting the context in which the component should be, in order to avoid an expensive search in all contexts (which is what happens if you don't specify a context or specify the wrong context).
The transaction is being marked for rollback because of the previous error. If the entityManager were to commit regardless, it would not be transactional (a transaction in fact guarantees that if an error happens nothing is persisted). If you want to isolate the login transaction from stats gathering, the simplest solution is to perform the saveUserStatsToDb method inside an asynchronous event (transactions are bound to the thread, so using a different thread guarantees that the event is handled in a separate transaction).
Something like this:
#Observer("saveUserStatsEvent")
#Transactional
public void saveUserStatsToDb(UserStats stats) {
((EntityManager)Component.getInstance("entityManager")).persist(stats);
}
And in your createUserStats method:
Events.instance().raiseAsynchronousEvent("saveUserStatsEvent", this.userStats);
However, this just circumvents the problem by dividing the transactions in two. What you really want to solve is the locking condition at the base of the problem.
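If the row genuinely must be locked, one option is to acquire the lock with a short timeout, so a contended row fails fast instead of wedging the user's session. A minimal sketch, assuming JPA 2.0 is available (the javax.persistence.lock.timeout hint is standard, but support depends on the database and dialect; the UserStats lookup itself is made up):

import java.util.Collections;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.LockTimeoutException;

public class UserStatsLocker {

    // Illustrative only: fail fast instead of waiting on a locked row.
    // UserStats and the id parameter are placeholders for your own model.
    public UserStats lockStatsRow(EntityManager em, Long statsId) {
        Map<String, Object> hints =
                Collections.<String, Object>singletonMap("javax.persistence.lock.timeout", 2000); // ms
        try {
            return em.find(UserStats.class, statsId, LockModeType.PESSIMISTIC_WRITE, hints);
        } catch (LockTimeoutException e) {
            // stats are "important, but not life and death": give up and move on
            return null;
        }
    }
}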
Here I have one service called 'DataSaveService' which I use for saving objects:
class DataSaveService {

    static transactional = true

    def saveObject(object) {
        if (object != null) {
            try {
                if (!object.save()) {
                    println(' failed to save ! ')
                    System.err.println(object.errors)
                    return false
                } else {
                    println('saved...')
                    return true
                }
            } catch (Exception e) {
                System.err.println("Exception :" + e.getMessage())
                return false
            }
        } else {
            System.err.println("Object " + object + " is null...")
            return false
        }
    }
}
This service is common and is used by the objects of many classes for saving.
When there are multiple simultaneous requests, saving becomes very slow - you could say it gets congested - because of the default scope, i.e. singleton.
So, to reduce that contention, I thought of making this service session scoped, like:
static scope = 'session'
But then, when I access this service and its method in a controller, it throws an exception.
What do I need to do to use a session-scoped service? Any other ideas for implementing this scenario?
The main thing is that I need the best performance in the cloud - yes, I need an answer that applies to the cloud.
A singleton (as long as it's not marked as synchronized) can be called from different threads at the same time, in parallel, without any performance loss; it is not a bottleneck.
But if you really need thread safety (meaning you have some shared state that should be used inside one method call, or from different parts of the application during one HTTP request, or even across different requests from the same user, and you aren't going to run your app in the cloud), then you can use different scopes, like session or request scope. But I'm not sure that's a good architecture.
For your current example, there is no benefit to using a non-singleton scope. And you should also know that having several instances of the same service requires extra memory and CPU resources.
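To illustrate the thread-safety point with a generic (made-up) Java example: a singleton is safe to share as long as all per-call state lives in parameters and local variables; it only becomes unsafe when per-call state leaks into a shared field:

// Safe as a singleton: all per-call state is on the stack.
class StatelessSaver {
    boolean save(Object object) {
        if (object == null) {
            return false; // each thread works on its own parameter
        }
        // ... persist object ...
        return true;
    }
}

// NOT safe as a singleton: per-call state lives in a shared field.
class StatefulSaver {
    private Object current; // two threads can interleave on this field

    boolean save(Object object) {
        this.current = object; // thread B may overwrite thread A's value here
        return persistCurrent();
    }

    private boolean persistCurrent() {
        // may see another thread's object
        return current != null;
    }
}

Your DataSaveService keeps no state in fields, so it falls into the first category and gains nothing from a narrower scope.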