Session not committed before the end of function - session

I have a big action that takes 50 seconds to process.
At the same time, another action can be processed on the server (by clicking a link).
However, if my second action tries to access session attributes put there by my first action, they are not available until the end of the first action.
This is my big action:
public String bigAction() {
    HttpSession session = request.getSession();
    synchronized (session) {
        for (int i = 0; i < 100000; ++i)
            session.setAttribute("foo_" + i, "bar");
    }
    return SUCCESS;
}
And this is my smaller action:
public String smallAction() {
    HttpSession session = request.getSession();
    synchronized (session) {
        session.getAttribute("foo_1");
    }
    return SUCCESS;
}
First action: -----------------------------------------------
Second action: --- -- --- - ---
So, in this example, my second action needs session attributes created by the first action, but they don't exist yet.
How may I synchronize my session?

As per Servlet spec:
Multiple servlets executing request threads may have active access to the same session object at the same time. The container must ensure that manipulation of internal data structures representing the session attributes is performed in a threadsafe manner. The Developer has the responsibility for threadsafe access to the attribute objects themselves. This will protect the attribute collection inside the HttpSession object from concurrent access, eliminating the opportunity for an application to cause that collection to become corrupted.
This is safe:
request.getSession().setAttribute("bar", "foo");
This is not guaranteed to be safe:
HttpSession session = request.getSession();
synchronized (session) {
    String value = (String) session.getAttribute("bar");
}
Moreover, locks only work if they are taken on the same object, so don't rely on request.getSession() returning the same object. Nothing in the Servlet specification says an HttpSession instance can't be recreated as a facade object every time it is requested.
Read Java theory and practice: Are all stateful Web applications broken? and How HttpSession is not thread safe.
One approach is described here: Java-synchronizing-on-transient-id.
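The transient-id approach boils down to storing one dedicated lock object in the session at creation time and synchronizing on that, rather than on whatever request.getSession() returns. A minimal sketch of the idea, using a plain Map as a stand-in for HttpSession attributes (the names here are hypothetical, not from the question's code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionLockDemo {
    // Stand-in for HttpSession attributes; in a real app this would be the session,
    // and the mutex would be installed from an HttpSessionListener.
    static final Map<String, Object> session = new ConcurrentHashMap<>();
    private static final String MUTEX_KEY = "session.mutex";

    // One dedicated lock object per session; every caller gets the same instance.
    static Object mutex() {
        return session.computeIfAbsent(MUTEX_KEY, k -> new Object());
    }

    public static void main(String[] args) {
        // Both "actions" synchronize on the same mutex object, not on the session facade.
        synchronized (mutex()) {
            session.put("foo_1", "bar");
        }
        synchronized (mutex()) {
            System.out.println(session.get("foo_1")); // bar
        }
    }
}
```

The point is that the mutex is a single stable object for the lifetime of the session, so both actions contend on the same lock even if the container hands out different session facade instances.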

Changes for today:
I am using Struts 2, so I implemented SessionAware because I read it could be a good solution. But the result is the same.


java 8 parallel stream with ForkJoinPool and ThreadLocal

We are using a Java 8 parallel stream to process a task, and we are submitting the task through ForkJoinPool#submit. We are not using the JVM-wide ForkJoinPool.commonPool; instead we create our own custom pool to specify the parallelism, and store it in a static variable.
We have a validation framework in which we subject a list of tables to a list of Validators, and we submit this job through the custom ForkJoinPool as follows:
static ForkJoinPool forkJoinPool = new ForkJoinPool(4);

List<Table> tables = tableDAO.findAll();
ModelValidator<Table, ValidationResult> validator = ValidatorFactory
        .getInstance().getTableValidator();
List<ValidationResult> results = forkJoinPool.submit(
        () -> tables.stream()
                .parallel()
                .map(validator)
                .filter(r -> r.getValidationMessages().size() > 0)
                .collect(Collectors.toList())).get();
The problem we are having is that the individual validators, which run on threads from our static ForkJoinPool, rely on a tenant_id that is different for every request and is stored in an InheritableThreadLocal variable. Because the pool is static, its worker threads inherit the parent thread's value only once, when they are first created; they never learn the tenant_id of the current request. So for subsequent executions these pooled threads are using a stale tenant_id.
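The inherit-at-creation behaviour described above is easy to demonstrate in isolation. This sketch (hypothetical names, not the application's code; parallelism of 1 to keep the worker thread stable) shows a pooled worker keeping the value it inherited when it was first created:

```java
import java.util.concurrent.ForkJoinPool;

public class StaleInheritable {
    static final InheritableThreadLocal<String> TENANT = new InheritableThreadLocal<>();

    public static void main(String[] args) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(1);
        TENANT.set("tenant-A");
        // First submit: the worker thread is created now and inherits "tenant-A".
        String first = pool.submit(() -> TENANT.get()).get();
        TENANT.set("tenant-B");
        // Second submit: the pooled worker still holds the value inherited at creation,
        // not the parent thread's current value.
        String second = pool.submit(() -> TENANT.get()).get();
        System.out.println(first + " " + second); // typically prints: tenant-A tenant-A
        pool.shutdown();
    }
}
```

This is exactly the failure mode with a static pool serving many requests: whichever tenant_id happened to be set when a worker was spawned is the one it keeps.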
I tried creating a custom ForkJoinPool, specifying a ForkJoinWorkerThreadFactory in the constructor and overriding the onStart method to feed in the new tenant_id. But that doesn't work, since onStart is called only once, at thread creation time, and not for each individual execution.
It seems we need something like ThreadPoolExecutor#beforeExecute, which is not available on ForkJoinPool. So what alternative do we have if we want to pass the current thread-local value to the statically pooled threads?
One workaround would be to create a ForkJoinPool for each request rather than making it static, but we would rather not, to avoid the expense of thread creation.
What alternatives do we have?
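For comparison, this is roughly what the ThreadPoolExecutor#beforeExecute hook mentioned above looks like in use; the tenant ThreadLocal and the way its value is sourced are assumptions for illustration only:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class HookedExecutor extends ThreadPoolExecutor {
    static final ThreadLocal<String> TENANT = new ThreadLocal<>();
    // In a real app this would be captured per request; a volatile field stands in here.
    static volatile String currentTenant = "tenant-A";

    HookedExecutor() {
        super(2, 2, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        // Runs on the worker thread just before each task, so every task
        // sees a freshly installed tenant id, unlike a ForkJoinPool worker.
        TENANT.set(currentTenant);
    }

    public static void main(String[] args) throws Exception {
        HookedExecutor pool = new HookedExecutor();
        String seen = pool.submit(() -> TENANT.get()).get();
        System.out.println(seen); // tenant-A
        pool.shutdown();
    }
}
```

ForkJoinPool has no equivalent per-task hook, which is why the answers below wrap the task itself instead.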
I found the following solution, which works without changing any underlying code. The map method takes a functional interface, which I represent as a lambda expression. The lambda adds a preExecution hook that sets the new tenantId in the current ThreadLocal, and cleans it up in postExecution.
forkJoinPool.submit(() -> tables.stream()
        .parallel()
        .map(item -> {
            preExecution(tenantId);
            try {
                return validator.apply(item);
            } finally {
                postExecution();
            }
        })
        .filter(validationResult ->
                validationResult.getValidationMessages().size() > 0)
        .collect(Collectors.toList())).get();
The best option in my view would be to get rid of the thread local and pass it as an argument instead. I understand that this could be a massive undertaking though. Another option would be to use a wrapper.
Assuming that your validator has a validate method you could do something like:
public class WrappingModelValidator implements ModelValidator<Table, ValidationResult> {
    private final ModelValidator<Table, ValidationResult> v;
    private final String tenantId;

    public WrappingModelValidator(ModelValidator<Table, ValidationResult> v, String tenantId) {
        this.v = v;
        this.tenantId = tenantId;
    }

    public ValidationResult validate(Table t) {
        String oldValue = YourThreadLocal.get();
        YourThreadLocal.set(tenantId);
        try {
            return v.validate(t);
        } finally {
            YourThreadLocal.set(oldValue);
        }
    }
}
Then you simply wrap your old validator and it will set the thread local on entry and restore it when done.
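The same capture-and-restore idea can be sketched generically with java.util.function.Function; the TENANT thread local and all names here are hypothetical stand-ins for the ones in the question:

```java
import java.util.function.Function;

public class WrapperDemo {
    // Stand-in for the tenant ThreadLocal used by downstream validators.
    static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    // Wraps any function so the tenant id is set on entry and restored on exit,
    // no matter which pooled thread executes it.
    static <T, R> Function<T, R> withTenant(Function<T, R> delegate, String tenantId) {
        return input -> {
            String old = TENANT.get();
            TENANT.set(tenantId);
            try {
                return delegate.apply(input);
            } finally {
                TENANT.set(old);
            }
        };
    }

    public static void main(String[] args) {
        Function<String, String> validator = t -> TENANT.get() + ":" + t;
        Function<String, String> wrapped = withTenant(validator, "tenant-42");
        System.out.println(wrapped.apply("orders")); // tenant-42:orders
        System.out.println(TENANT.get());            // null (restored afterwards)
    }
}
```

Because the wrapping happens inside the task, it works with any executor, including a static ForkJoinPool.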

Concerned about the size of my Aggregate Root [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am new to DDD and have a concern about the size of my Aggregate Root. The object graph is like the image below (the children are collections). The problem is that all of the entities depend on the state of the AggregateRoot (Event). My question is: how do I break the aggregate into smaller aggregates? It's like I have a "God" aggregate root that just manages everything.
This is a very simplistic view of my domain:
and these are the rules:
An event has a number of different states (implemented via the State design pattern).
An event has a collection of sessions, but only one can be active at a time, and only if the event is in the correct state.
A session has two states: Active and Ended.
A session has a collection of Guests.
A session has a collection of photos (maximum of 10).
When a session is deleted, it should delete all its children.
When a session has ended and a photo is deleted, it should check whether any other photos belong to the session; if not, it should also delete the session.
When a session has ended and a photo is deleted, sometimes it should throw an exception, depending on the state of the event.
When a session is active and a photo is deleted, it should not worry about whether the session has any other photos.
When a session ends it must have at least 1 photo and at least 1 guest.
A photo can be updated, but only if the event is in the right state.
When an event is deleted it should delete all its children.
Edit: I have divided the one aggregate into smaller aggregates so that Event, Session and Photo are all ARs. The issue is that a session needs to perform a check on the Event AR before starting. Is it OK to inject an event object into the session's start method, Session.Start(Event @event), or will I have concurrency issues as outlined in some of the comments?
As a first step, the following 3 articles will be invaluable: http://dddcommunity.org/library/vernon_2011/
With DDD you are splitting the entities up into boundaries where the state is valid after a single operation from an external source completes (i.e. a method call).
Think in terms of the business problem you are trying to solve - you have used the word delete a lot...
Does delete even have a place in the wording of the business experts for whom you are designing the system? Thinking in terms of the real world rather than database infrastructure: unless you can build a time machine, travel back, and stop an event from starting (thereby changing history), the word delete has no real-world analogy.
If you force yourself to delete children on delete, that operation needs to become a transaction, so things that may not make sense inside the aggregate root get forced in (so that the state of the entity and all its children can be controlled and guaranteed valid once the method call completes). Yes, there are situations where a transaction across multiple aggregate roots is warranted, but they are very rare and to be avoided if possible.
Eventual consistency is used as an alternative to transactions and reduces complexity. If you speak to the person for whom the system is being designed, you will probably find that a delay of seconds or minutes is more than acceptable. That is plenty of time to fire off an event to which some other business logic is listening and takes the necessary action. Using eventual consistency removes the headaches that come with transactions.
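The fire-an-event-and-let-a-listener-react idea above can be shown with a tiny in-process publish/subscribe mechanism. This sketch is in Java (the surrounding samples are C#) and every name in it is hypothetical; real systems would typically use messaging middleware rather than an in-memory map:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class DomainEvents {
    // Hypothetical in-process event bus: event name -> list of handlers.
    static final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();

    static void subscribe(String event, Consumer<Object> handler) {
        handlers.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
    }

    static void publish(String event, Object payload) {
        handlers.getOrDefault(event, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        List<String> cleanedUp = new ArrayList<>();
        // Some other business logic listens and takes the necessary action,
        // e.g. scheduling photo cleanup, outside the aggregate's transaction.
        subscribe("SessionEnded", sessionId -> cleanedUp.add((String) sessionId));
        publish("SessionEnded", "session-7");
        System.out.println(cleanedUp); // [session-7]
    }
}
```

The aggregate only publishes the fact that something happened; the cleanup work runs separately, which is what makes the seconds-or-minutes delay acceptable.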
Photos could take up a lot of storage, yes, so you would probably need a cleanup mechanism that runs after an event is marked as finished. I would probably fire off an event once the session is marked closed; a different system somewhere else would listen for this event and, after 1 year (or whatever makes sense for you), remove the photos from a server... assuming you used an array of string[10] for your URLs.
If this is the maximum extent of your business logic, then don't focus only on DDD; this could be a good fit for Entity Framework, which is essentially CRUD and has cascade deletes built in.
Answer to the edits
What is a photo? Does it contain attributes? Or is it instead something like a URL to a photo, or a path to a picture file?
I'm not yet thinking of databases, that should be the very last thing that is thought of and the solution should be database/technology agnostic. I see the rules as:
An event has many sessions.
A Session has the following states: NotStarted, Started and Ended.
A Session has a collection of Guests, I'm going to assume these are unique (in that two guests with the same name are not the same, so a guest should be an aggregate root).
An Event has one active Session.
When there are no active Sessions, an Event can be marked as Finished.
No Sessions can be started once an Event is marked as Finished.
A session has a collection of up to 10 photos.
When a session has ended, a photo cannot be removed.
A Session cannot start if there are no Guests. A Session cannot end if there are no Photos.
You cannot return the Session directly, as a user of your code may call Start() on it; you need some way of checking with the Event whether it can be started, so you can chain up to the root. This is why I pass the Event into the Session. If you don't like this approach, put the methods that manipulate the Session on the Event instead (so everything is accessed via the Event, which enforces all the rules).
In the simplest case, I see the photo as a string (value object) in the Session entity. As a first stab I would do something like this:
// untested, do not know if it will compile!
public class Event
{
    List<Session> sessions = new List<Session>();
    bool isEventClosed = false;

    SessionId NewSession(string description, string speaker)
    {
        if (isEventClosed)
            throw new InvalidOperationException("cannot add session to closed event");
        // create a new session; what will you use for identity: string, guid, etc.?
        var sessionId = new SessionId(); // in this case autogenerate a guid inside this class
        this.sessions.Add(new Session(this, sessionId, description, speaker));
        return sessionId;
    }

    Session GetSession(SessionId id)
    {
        return this.sessions.FirstOrDefault(x => x.id == id);
    }

    bool CanStartSession(Session session)
    {
        // TODO: check that the session is in our collection!
        if (this.isEventClosed)
            return false;
        foreach (var s in sessions)
        {
            if (s.IsStarted())
                return false;
        }
        return true;
    }
}
public class Session
{
    List<GuestId> guests = new List<GuestId>(); // list of guests
    List<string> photos = new List<string>();   // strings of photo urls
    readonly SessionId id;
    DateTime? started = null;
    DateTime? ended = null;
    readonly Event parentEvent;

    public Session(Event parent, SessionId id, string description, string speaker)
    {
        this.id = id;
        this.parentEvent = parent;
        // store all the other params
    }

    void AddGuest(GuestId guestId)
    {
        this.guests.Add(guestId);
    }

    void RemoveGuest(GuestId guestId)
    {
        if (this.IsEnded())
            throw new InvalidOperationException("cannot remove guest after event has ended");
        this.guests.Remove(guestId);
    }

    void AddPhoto(string url)
    {
        if (this.photos.Count >= 10)
            throw new InvalidOperationException("cannot add more than 10 photos");
        this.photos.Add(url);
    }

    void Start()
    {
        if (this.guests.Count == 0)
            throw new InvalidOperationException("cant start session without guests");
        if (CanBeStarted() == false)
            throw new InvalidOperationException("already started");
        if (this.parentEvent.CanStartSession(this) == false)
            throw new InvalidOperationException("another session at our event is already underway or the event is closed");
        this.started = DateTime.UtcNow;
    }

    void End()
    {
        if (IsEnded())
            throw new InvalidOperationException("session already ended");
        if (this.photos.Count == 0)
            throw new InvalidOperationException("cant end session without photos");
        this.ended = DateTime.UtcNow;
        // could raise an event here that the session has ended; see mediator/event-handler pattern
    }

    bool CanBeStarted()
    {
        return (IsStarted() == false && IsEnded() == false);
    }

    bool IsStarted()
    {
        return this.started != null;
    }

    bool IsEnded()
    {
        return this.ended != null;
    }
}
No warranty on the above; it may well need to change over time as the understanding evolves and as you see better ways to refactor the code.
A guest cannot be removed once a session has ended - this logic has been added with a simple test.
On deletion of guests and leaving sessions with 0 guests: you have stated that guests cannot be removed once an event has ended. Allowing that to happen at any point would violate that business rule, so it can't ever happen. Besides, using the term "delete" for a person in your problem space makes no sense; people cannot be deleted, they existed and will always have a record that they existed. The database term "delete" belongs in the database, not in this domain model as you have described it.
Is this.parentEvent.CanStartSession()==false safe? No, it is not thread-safe on its own, but commands would be run independently, perhaps in parallel, each in its own thread:
void HandleStartSessionCommand(EventId eventId, SessionId sessionId)
{
    // repositories etc. have been provided in the constructor
    var @event = repository.GetById(eventId);
    var session = @event.GetSession(sessionId);
    session.Start();
    repository.Save(@event); // the Event is the aggregate root, so that is what we persist
}
If we were using event sourcing, then inside the repository the stream of changed events is written in a transaction, and the aggregate root's current version is used so we can detect conflicting changes. In event-sourcing terms, a change to the Session is indeed a change to its parent aggregate root, since it doesn't make sense to refer to a Session event in its own right (it will always be an Event event; it cannot exist independently). Obviously the code I have given in my example is not event sourced, but it could be written that way.
If event sourcing is not used then depending on the transaction implementation, you could wrap the command handler in a transaction as a cross cutting concern:
public class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private ICommandHandler<TCommand> decoratedHandler;

    public TransactionalCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratedHandler)
    {
        this.decoratedHandler = decoratedHandler;
    }

    public void Handle(TCommand command)
    {
        using (var scope = new TransactionScope())
        {
            this.decoratedHandler.Handle(command);
            scope.Complete();
        }
    }
}
In short, we are using the infrastructure implementation to provide concurrency safety.

NHibernate IPreUpdateEventListener doesn't insert to second table

I'm prototyping some simple audit logging functionality. I have a mid-sized entity model (~50 entities) and I'd like to implement audit logging on about 5 or 6 of them. Ultimately I'd like to get this working on Inserts & Deletes as well, but for now I'm just focusing on the updates.
The problem is, when I do session.Save (or SaveOrUpdate) to my auditLog table from within the EventListener, the original object is persisted (updated) correctly, but my AuditLog object never gets inserted.
I think the problem is that both the Pre and Post event listeners are called too late in the NHibernate save life cycle for the session to still be usable.
//in my ISessionFactory Build method
nHibernateConfiguration.EventListeners.PreUpdateEventListeners =
    new IPreUpdateEventListener[] { new AuditLogListener() };

//in my AuditLogListener
public class AuditLogListener : IPreUpdateEventListener
{
    public bool OnPreUpdate(PreUpdateEvent @event)
    {
        string message = //code to look at @event.Entity & build message - this works
        if (!string.IsNullOrEmpty(message))
            AuditLogHelper.Log(message, @event.Session); //Session is an IEventSource
        return false; //Don't veto the change
    }
}
//In my helper
public static void Log(string message, IEventSource session)
{
    var user = session.QueryOver<User>()
        .Where(x => x.Name == "John")
        .SingleOrDefault();
    //have confirmed a valid user is found
    var logItem = new AdministrationAuditLog
    {
        LogDate = DateTime.Now,
        Message = message,
        User = user
    };
    (session as ISession).SaveOrUpdate(logItem);
}
When it hits the session.SaveOrUpdate() in the last method, no errors occur and no exceptions are thrown. It seems to succeed and moves on. But nothing happens: the audit log entry never appears in the database.
The only way I've been able to get this to work is to create a completely new Session & Transaction inside this method, but that isn't ideal: the code proceeds back out of the listener method and hits the session.Transaction.Commit() in my main app, and if that transaction fails, I've got an orphaned log message in my audit table for something that never happened.
Any pointers where I might be going wrong ?
EDIT
I've also tried to SaveOrUpdate the LogItem using a child session from the event, based on some comments in this thread: http://ayende.com/blog/3987/nhibernate-ipreupdateeventlistener-ipreinserteventlistener
var childSession = session.GetSession(EntityMode.Poco);
var logItem = new AdministrationAuditLog
{
    LogDate = DateTime.Now,
    Message = message,
    User = databaseLogin.User
};
childSession.SaveOrUpdate(logItem);
Still nothing appears in my Log table in the db. No errors or exceptions.
You need to create a child session, currentSession.GetSession(EntityMode.Poco), in your OnPreUpdate method and use it in your log method. Depending on your flush mode setting, you might need to flush the child session as well.
Also, any particular reason you want to roll your own solution? FYI, NHibernate Envers is now a pretty mature library.

db table locked row and entityManager state

I have a session-scoped class that contains an object to manage user statistics. When a user logs in (through SSO), an application-scoped method checks the table for active sessions; if any are found, the session is invalidated using the session id in the table.
A row is added to a userStats table in the session scoped class:
/**
 * get all the info needed for collecting user stats, add userStats to the user's session
 * and save to the userStats table; this happens after the session is created
 * @param request
 */
private void createUserStats(HttpServletRequest request) {
    if (!this.sessionExists) {
        this.userStats = new UserStats(this.user, request.getSession(true)
                .getId(), System.getProperty("atcots_host_name"));
        request.getSession().setAttribute("userstats", this.userStats);
        Events.instance().raiseEvent("userBoundToSession", this.userStats);
        this.sessionExists = true;
        log.info("user " + this.user + " is now logged on");
        // add this to the db
        try {
            this.saveUserStatsToDb();
        } catch (Exception e) {
            log.error("could not save " + this.userStats.getId().getPeoplesoftId() + " information to db");
            e.printStackTrace();
        }
    }
}
When the user's session is destroyed this row is updated with a log off time.
For reasons I can't explain or duplicate, 2 users in the last 2 weeks have logged in and locked the row. When that happens, any database calls by that user are no longer possible and the application is effectively unusable for that user.
[org.hibernate.util.JDBCExceptionReporter] (http-127.0.0.1-8180-3) SQL Error: 0, SQLState: null
2012-07-26 18:45:53,427 ERROR [org.hibernate.util.JDBCExceptionReporter] (http-127.0.0.1-8180-3) Transaction is not active: tx=TransactionImple < ac, BasicAction: -75805e7d:3300:5011c807:6a status: ActionStatus.ABORT_ONLY >; - nested throwable: (javax.resource.ResourceException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -75805e7d:3300:5011c807:6a status: ActionStatus.ABORT_ONLY >)
The gathering of these stats is important, but not life and death; if I can't get the information I'd like to give up and move on. But that's not happening. What is happening is that the entityManager marks the transaction for rollback, and any db call after that returns the above error. I originally saved the user stats at application scope, so when the row locked it locked the entityManager for the ENTIRE APPLICATION (this did not go over well). When I moved the method to session scope it only locks out the offending user.
I tried setting the entityManager to a lesser scope (I tried EVENT and METHOD):
((EntityManager) Component.getInstance("entityManager", ScopeType.EVENT)).persist(this.userStats);
((EntityManager) Component.getInstance("entityManager", ScopeType.EVENT)).flush();
This doesn't make db calls at all.
I've tried manually rolling back the transaction, but no joy.
When I lock a row in a table whose data is used at conversation scope, the results are not nearly as catastrophic: no data is saved, but the application recovers.
ETA:
I tried raising an AsynchronousEvent; that works locally, but deployed to our remote test server (and this is odd) I get:
DEBUG [org.quartz.core.JobRunShell] (qtz_Worker-1) Calling execute on job DEFAULT.2d0badb3:139030aec6e:-7f34
INFO [com.mypkg.myapp.criteria.SessionCriteria] (qtz_Worker-1) observing predestroy for seam
DEBUG [com.mypkg.myapp.criteria.SessionCriteria] (qtz_Worker-1) destroy destroy destroy sessionCriteria
ERROR [org.jboss.seam.async.AsynchronousExceptionHandler] (qtz_Worker-1) Exeception thrown whilst executing asynchronous call
java.lang.IllegalArgumentException: attempt to create create event with null entity
at org.hibernate.event.PersistEvent.<init>(PersistEvent.java:45)
at org.hibernate.event.PersistEvent.<init>(PersistEvent.java:38)
at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:619)
at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:623)
...
The odd bit is that it appears to be going through the Quartz handler.
ETA again:
So, not so odd: I had set Quartz as the async handler (I thought it was only for scheduling jobs). Also, asynchronous methods don't have access to the session context, so I had to add a parameter to my observing method to actually have an object to persist:
@Observer("saveUserStatsEvent")
@Transactional
public void saveUserStatsToDb(UserStats userstats) throws Exception {
    if (userstats != null) {
        log.debug("persisting userstats to db");
        this.getEntityManager().persist(userstats);
        this.getEntityManager().flush();
    }
}
How do I recover from this?
First of all, specifying a scope in Component.getInstance() does not create the component in that scope. EntityManager instances always live in the conversation context (be it temporary or long-running). The scope parameter of getInstance() serves only to hint at the context in which the component should be found, to avoid an expensive search of all contexts (which is what happens if you don't specify a context, or specify the wrong one).
The transaction is being marked for rollback because of the previous error. If the entityManager committed regardless, it would not be transactional (a transaction guarantees precisely that nothing is persisted if an error happens). If you want to isolate the login transaction from stats gathering, the simplest solution is to run the saveUserStatsToDb method inside an asynchronous event (transactions are bound to the thread, so using a different thread guarantees the event is handled in a separate transaction).
Something like this:
@Observer("saveUserStatsEvent")
@Transactional
public void saveUserStatsToDb(UserStats stats) {
    ((EntityManager) Component.getInstance("entityManager")).persist(stats);
}
And in your createUserStats method:
Events.instance().raiseAsynchronousEvent("saveUserStatsEvent", this.userStats);
However, this just circumvents the problem by dividing the transaction in two. What you really want to solve is the locking condition at the root of the problem.

grails + singleton service for object saving to database for cloud application

Here I have one service called 'DataSaveService' which I use for saving objects:
class DataSaveService {
    static transactional = true

    def saveObject(object) {
        if (object != null) {
            try {
                if (!object.save()) {
                    println(' failed to save ! ')
                    System.err.println(object.errors)
                    return false
                } else {
                    println('saved...')
                    return true
                }
            } catch (Exception e) {
                System.err.println("Exception :" + e.getMessage())
                return false
            }
        } else {
            System.err.println("Object " + object + " is null...")
            return false
        }
    }
}
This service is shared and used by many classes to store their objects.
When there are multiple requests at the same time, saving is very slow; you could call it a bottleneck. I suspect this is because of the default scope, i.e. singleton.
So, to reduce the contention, I am considering making this service session-scoped, like:
static scope = 'session'
But when I then access this service and its method in a controller, it throws an exception.
What do I need to do to use a session-scoped service? Is there any other way to implement this scenario?
The main thing is that I need the best performance in the cloud; yes, I need an answer for the cloud.
A singleton (if it's not marked as synchronized) can be called from different threads at the same time, in parallel, without performance loss; it is not a bottleneck.
But if you really need thread safety (meaning you have some shared state that is used inside one method call, from different parts of the application during one http request, or even across requests from the same user, and you aren't going to run your app in the cloud), then you can use different scopes, like session or request scope. But I'm not sure that is a good architecture.
For your current example, there is no benefit to a non-singleton scope. Also, be aware that having several instances of the same service requires extra memory and CPU resources.
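To illustrate why a stateless singleton is not itself a bottleneck, here is a small Java sketch (hypothetical names) of many threads calling one shared, stateless instance concurrently; since the instance holds no mutable fields, there is nothing to contend on:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StatelessSingleton {
    // No mutable fields: one shared instance is safe for concurrent callers.
    static final StatelessSingleton INSTANCE = new StatelessSingleton();

    int square(int x) { return x * x; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        // 100 concurrent calls against the single shared instance.
        List<Future<Integer>> futures = IntStream.range(0, 100)
                .mapToObj(i -> pool.submit(() -> INSTANCE.square(i)))
                .collect(Collectors.toList());
        int sum = 0;
        for (Future<Integer> f : futures) sum += f.get();
        System.out.println(sum); // 328350 (sum of squares 0..99), regardless of interleaving
        pool.shutdown();
    }
}
```

If saving is slow under load in the Grails case, the cause is more likely database or transaction contention than the service's singleton scope.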
