If I add Spring Session JDBC to my Vaadin Spring Boot application, the application becomes very slow and does a full page reload after a few seconds. Everything else appears to work normally.
I have not been able to pinpoint the cause. I have been researching this issue for a few days and found this GitHub issue and the Vaadin microservices configuration, but neither gave me a suitable solution. Can anyone give me a working example of implementing Spring Session with Vaadin?
Regards.
Session replication schemes like spring-session assume that the session is relatively small and that its content isn't sensitive to concurrent modification from multiple request threads. Neither of those assumptions holds true for a typical Vaadin application.
The first problem is that there's typically between 100KB and 10MB of data in the session that needs to be fetched from the database, deserialized, updated, serialized again, and stored back in the database for each request. The second problem is that Vaadin stores a lock instance in the session and uses it to ensure there aren't multiple request threads using the same session concurrently.
To make serializing sessions to persistent storage workable, you thus need to ensure your load balancer uses sticky sessions, and you typically also need a high-performance solution such as Hazelcast rather than deserializing and serializing the whole session individually for each request.
For more details, you can have a look at these two posts:
https://vaadin.com/learn/tutorials/hazelcast
https://vaadin.com/blog/session-replication-in-the-world-of-vaadin
Does Spring Session management take care of asynchronous calls?
Say that we have multiple controllers and each one is reading/writing different session attributes. Will there be a concurrency issue as the session object is entirely written/read to/from external servers and not the attributes alone?
We are facing an issue where attributes set from one controller are not present in the next read... it is intermittent and depends on which other controllers are executing in parallel.
When we used the session object from the container, we never faced this issue... presumably because the attribute set/get happens directly on the in-memory session object.
The general use case for the session is storing some user-specific data. If I am understanding your context correctly, your issue describes the scenario in which a user, while being authenticated from two devices (for example a PC and a phone, hence within the bounds of the same session), is hitting your backend with requests so fast that you face concurrency issues around reading and writing the session data.
This is not a common (and IMHO reasonable) scenario for the session, so projects such as spring-data-redis or spring-data-gemfire won't support it out of the box.
The good news is that spring-session was built with flexibility in mind, so you could of course achieve what you want. You could implement your own version of SessionRepository and manually synchronize (for example via Redis distributed locks) the relevant methods. But, before doing that, check your design and make sure you are using session for the right data storage job.
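To make that concrete, here is a minimal sketch of such a wrapper, assuming Spring Session 2.x (SessionRepository with save/findById) and Spring Integration's RedisLockRegistry for the distributed lock; the class name and wiring are hypothetical, not a drop-in implementation:

    import java.util.concurrent.locks.Lock;
    import org.springframework.integration.redis.util.RedisLockRegistry;
    import org.springframework.session.Session;
    import org.springframework.session.SessionRepository;

    // Delegating repository: every write for a given session id goes through a
    // distributed lock, so concurrent requests cannot interleave their saves.
    public class LockingSessionRepository<S extends Session> implements SessionRepository<S> {

        private final SessionRepository<S> delegate;
        private final RedisLockRegistry lockRegistry;

        public LockingSessionRepository(SessionRepository<S> delegate, RedisLockRegistry lockRegistry) {
            this.delegate = delegate;
            this.lockRegistry = lockRegistry;
        }

        @Override
        public S createSession() {
            return delegate.createSession();
        }

        @Override
        public void save(S session) {
            // Hold a lock keyed by session id while writing this session's state.
            Lock lock = lockRegistry.obtain(session.getId());
            lock.lock();
            try {
                delegate.save(session);
            } finally {
                lock.unlock();
            }
        }

        @Override
        public S findById(String id) {
            return delegate.findById(id);
        }

        @Override
        public void deleteById(String id) {
            delegate.deleteById(id);
        }
    }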
This question is very similar in nature to your last question. And, you should read my answer to that question before reading my response/comments here.
The previous answer (and insight) posted by the anonymous user is fairly accurate.
Anytime you have a highly concurrent (Web) application/environment where many different, simultaneous HTTP requests are coming in, accessing the same HTTP session, there is always a possibility for lost updates caused by race conditions between competing concurrent HTTP requests. This is due to the very nature of a Servlet container (e.g. Apache Tomcat, or Eclipse Jetty) since each HTTP request is processed by, and in, a separate Thread.
Not only does the HTTP session object provided by the Servlet container need to be Thread-safe, but so too do all the application domain objects that your Web application puts into the HTTP session. So, be mindful of this.
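For illustration, a hypothetical session-stored domain object (the class and field names are made up, not from the question or answer) might guard its mutable state like this:

    import java.io.Serializable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical object destined for the HTTP session; ConcurrentHashMap keeps
    // concurrent updates from different request threads from corrupting its state.
    public class ShoppingCart implements Serializable {

        private final Map<String, Integer> quantitiesBySku = new ConcurrentHashMap<>();

        public void addItem(String sku, int quantity) {
            quantitiesBySku.merge(sku, quantity, Integer::sum);
        }

        public int itemCount() {
            return quantitiesBySku.values().stream().mapToInt(Integer::intValue).sum();
        }
    }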
In addition, most HTTP session implementations, such as Apache Tomcat's, or even Spring Session's session implementations backed by different session management providers (e.g. Spring Session Data Redis, or Spring Session Data GemFire), make extensive use of "deltas" to send only the changes (or differences) to the session state, thereby minimizing the chance of lost updates due to race conditions.
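As one concrete knob, Spring Session lets you control when attributes are written back through its SaveMode setting; the following is only a sketch of that configuration for the Redis module, and the chosen mode is illustrative rather than a recommendation:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.session.SaveMode;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    @Configuration
    @EnableRedisHttpSession(saveMode = SaveMode.ON_SET_ATTRIBUTE)
    public class SessionConfig {
        // ON_SET_ATTRIBUTE only writes back attributes that were explicitly set during
        // the request, keeping the per-request "delta" as small as possible.
    }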
For instance, suppose the HTTP session currently has an attribute key/value of 1/A. HTTP request 1 (processed by Thread 1) reads the HTTP session (seeing only 1/A) and adds an attribute 2/B, while another concurrent HTTP request 2 (processed by Thread 2) reads the same HTTP session by session ID (seeing the same initial session state with 1/A) and wants to add 3/C. As Web application developers, we expect the HTTP session state, after requests 1 & 2 in Threads 1 & 2 complete, to include the attributes [1/A, 2/B, 3/C].
However, if 2 (or even more) competing HTTP requests are both modifying, say, HTTP session attribute 1/A, and HTTP request/Thread 1 wants to set the attribute to 1/B while the competing HTTP request/Thread 2 wants to set the same attribute to 1/C, then who wins?
Well, it turns out the last one wins, or rather, the last Thread to write the HTTP session state wins, and the result could be either 1/B or 1/C, which is indeterminate and subject to the vagaries of scheduling, network latency, load, etc. In fact, it is nearly impossible to reason about which one will happen, much less rely on it always happening.
While our anonymous user provided some context involving, say, a single user on multiple devices (a Web browser and perhaps a smartphone or tablet) used concurrently, reproducing this sort of error with a single user, or even a few users, would not be impossible, just very improbable.
But if we think about this in a production context, where you might have, say, several hundred Web application instances spread across multiple physical machines, VMs, or containers, load balanced by some network load balancer/appliance, and then throw in the fact that many Web applications today are "single page apps", highly sophisticated, no longer thin but thick clients making JavaScript and AJAX calls, then we begin to understand that this scenario is much more likely, especially in a highly loaded Web application; think Amazon or Facebook. Not only many concurrent users, but many concurrent requests by a single user, given all the dynamic, asynchronous calls that a Web application can make.
Still, as our anonymous user pointed out, this does not excuse the Web application developer from responsibly designing and coding our Web application.
In general, I would say the HTTP session should only be used to track very minimal (i.e. in quantity) and necessary information to maintain a good user experience and preserve the proper interaction between the user and the application as the user transitions through different parts or phases of the Web app, like tracking preferences or items (in a shopping cart). In general, the HTTP session should not be used to store "transactional" data. To do so is to get yourself into trouble. The HTTP session should be primarily a read-heavy data structure (rather than write-heavy), particularly because the HTTP session can be, and most likely will be, accessed from multiple Threads.
Of course, different backing data stores (like Redis, and even GemFire) provide locking mechanisms. GemFire even provides cache-level transactions, which is very heavyweight and arguably not appropriate when processing Web interactions managed in and by an HTTP session object (not to be confused with transactions). Even locking is going to introduce serious contention and latency into the application.
Anyway, all of this is to say that you very much need to be conscious of the interactions and data access patterns, otherwise you will find yourself in hot water, so be careful, always!
Food for thought!
In a web application, I'm using JPA entities to persist (and retrieve) my domain objects to (and from) an underlying database.
These JPA entities are kept "hot" in an in-memory cache structure (think of a Map<UniqueID, Entity>) for the whole running time of the web application.
So I make a request to my web application, and an entity gets loaded from the repository. This entity gets put into the in-memory cache structure. For the whole lifetime of this request, I can happily access any fields of this entity. During this first request, lazily loading relationships to other entities also works fine, even in my View: I'm successfully using the Open-Session-in-View pattern (via Spring's OpenEntityManagerInViewInterceptor).
The first request has ended.
I make the next request to my web application. This request asks for another entity. This entity is already in the in-memory cache structure, so it gets loaded from there. From this entity, I try to access a field that should lazily load relationships to other entities. This unfortunately causes the obnoxious org.hibernate.LazyInitializationException: could not initialize proxy - no Session (I'm using Hibernate as my underlying JPA implementation).
To my understanding, this exception stems from the fact that after the first request has ended, JPA/Hibernate has closed its session, yet entities in my in-memory cache structure still expect that session to exist; the moment the next request triggers the lazy-loading mechanism, that mechanism cannot find the session because it no longer exists.
What are solutions to my problem?
One solution is to reattach the entity to the current session at the beginning of the second request using Session.update().
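A rough sketch of that re-attachment, assuming you can unwrap the Hibernate Session from the current EntityManager; the entity, cache, and accessor names are hypothetical:

    // Re-associate the detached, cached entity with the session of the current request.
    // org.hibernate.Session is unwrapped from the JPA EntityManager of this request.
    Session session = entityManager.unwrap(Session.class);

    MyEntity cached = entityCache.get(entityId);   // detached instance from the in-memory cache
    session.update(cached);                        // re-attach it to the current session
    cached.getChildren().size();                   // lazy loading now works again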
Another solution is to use the second level cache in Hibernate instead of your own solution. It should be much more reliable than any home-grown caching mechanism.
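For reference, enabling the second-level cache for an entity looks roughly like the following sketch; it assumes a cache provider is on the classpath and hibernate.cache.use_second_level_cache is turned on, and the entity itself is hypothetical:

    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cacheable
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // entity state cached between sessions
    public class Product {

        @Id
        private Long id;

        // other fields, getters and setters omitted
    }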
Basically, you cannot keep sessions alive across HTTP requests, because that would mean keeping the transaction open between requests.
I think the only solution (apart from detecting yourself what is loaded and what is not) is to fetch the whole entity before putting it in your cache. IMHO you shouldn't put partially loaded objects in a cache. If you don't want to load the whole object the first time, you might use separate caches for the object's relationships.
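As a sketch of fetching the whole entity up front, a JPQL fetch join loads the lazy association in the same query; the entity, association, and cache names below are hypothetical:

    // Load the entity together with its lazy association in a single query, so that
    // the cached copy is fully initialized and never needs an open session later.
    MyEntity entity = entityManager.createQuery(
            "select e from MyEntity e left join fetch e.children where e.id = :id",
            MyEntity.class)
        .setParameter("id", entityId)
        .getSingleResult();

    entityCache.put(entityId, entity);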
If you want, you might also consider enabling the Hibernate cache as proposed by @Adam, but I don't think it would work well with lazily-loaded fields.
You are getting this exception because your object is detached from the current session. You have to re-attach this object to the current session before accessing it:
session.update(object);
You can read details here
I have been given a requirement to persist user data once the user has authenticated initially. We don't want to hit the database to look up the user every time they navigate to a new view etc...
I have a User class that is [Serializable] so it could be stored in a session. I am using SQL Server for session state as well. I was thinking of storing the object in the session, but I really hate doing that.
How are developers handling this type of requirement these days?
Three ways:
Encrypt the data in cookies and send it to the client, decrypting it whenever you need it
Store it server-side by an Id (e.g. UserId) in Cache, Session, or any other storage (which is safer than a cookie)
Use a second-level caching strategy if you use an ORM
Assuming your user object is not huge and does not change often, I think it is acceptable to store it in the session.
Since you already have a SQL Server session, you will already be making stored procedure calls to pull/push the data, and adding a small object to that should have minimal performance impact compared to other options like persisting it down to the client and sending it back on every request.
I would also consider the server a much more secure place to keep this info.
You want to minimize the number of times you write to the session (which requests a lock) when it is stored in SQL Server, as it is implemented in a sealed class that exclusively locks the session. If any of your other requests in this session require write access to the SQL session, they will be blocked by the initial request until it releases the session lock. (There are some new hooks in .NET 4 for allowing you to change the SessionStateBehavior in the pipeline before the session is accessed.)
You might consider a session state server (AppFabric) if the performance of your SQL session store is an issue.
I would be interested in some timing details. For example, I place a container in the session which can hold different kinds of data, and I change the content of the container frequently. How can I ensure that the container's session value gets replicated across nodes for every change?
You don't need to make sure; that's the application server's job.
The J2EE specification doesn't deal with session-information synchronization amongst distributed components.
Theoretically, all you have to do is write thread-safe code. In your example, simply make sure that access to the container is synchronized. If your application server is bug-free, then you can safely assume that the session information is properly replicated across all nodes in a seamless manner; if your application server has bugs around session synchronization... well... then nothing is really safe anymore, now is it.
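A minimal sketch of what "synchronized access to the container" could look like inside a request handler; the attribute and variable names are made up:

    // Inside a servlet or Spring MVC controller method handling the request:
    HttpSession session = request.getSession();
    String itemId = request.getParameter("itemId");

    @SuppressWarnings("unchecked")
    Map<String, Object> container = (Map<String, Object>) session.getAttribute("container");

    // Guard the shared, mutable container so two request threads working on the
    // same session cannot interleave their updates.
    synchronized (container) {
        container.put("lastViewedItem", itemId);
    }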
Application servers use different strategies to synchronize session information between nodes. Session content can be considered dirty, and thus in need of synchronization, when you:
put data in the session
get data from the session
Getting data from the session falls into two categories:
get a structured object
get a scalar or immutable object
So if session data gets modified indirectly by modifying a structured object, then simply re-reading it from the session can ensure that the object's content gets replicated.
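In practice, a portable way to make such an indirect modification visible to the container is to put the object back into the session after changing it (attribute and value names below are hypothetical); on servers that treat reads of mutable attributes as dirtying, simply re-reading it has a similar effect:

    HttpSession session = request.getSession();

    @SuppressWarnings("unchecked")
    Map<String, Object> container = (Map<String, Object>) session.getAttribute("container");

    // Indirect modification: the container object changes, but the servlet container
    // may not notice that the attribute is dirty.
    container.put("visitCount", 42);

    // Explicitly putting the attribute back marks it dirty on virtually every server,
    // so the change is replicated to the other nodes.
    session.setAttribute("container", container);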