Can't use spring sessions on Vaadin - spring-boot

If I add spring-session-jdbc to my Vaadin Spring Boot application, the application becomes very slow and does a full page reload after a few seconds. Everything else looks like it is working normally.
I cannot work out what is causing this. I have been researching the issue for a few days and found this GitHub issue and Vaadin microservices configuration, but neither of them contains a suitable solution to the problem. Can anyone give me a working example of how to implement Spring Session with Vaadin?
Regards.

Session replication schemes like spring-session assume that the session is relatively small and that its content isn't sensitive to concurrent modification from multiple request threads. Neither of those assumptions holds true for a typical Vaadin application.
The first problem is that there is typically between 100 KB and 10 MB of data in the session, which needs to be fetched from the database, deserialized, updated, serialized again and stored back in the database for each request. The second problem is that Vaadin stores a lock instance in the session and uses it to ensure that multiple request threads don't use the same session concurrently.
To keep sessions in persistent storage, you therefore need to ensure your load balancer uses sticky sessions, and you typically also need a high-performance solution such as Hazelcast rather than deserializing and serializing the whole session individually for each request.
For more details, you can have a look at these two posts:
https://vaadin.com/learn/tutorials/hazelcast
https://vaadin.com/blog/session-replication-in-the-world-of-vaadin
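For orientation, one common way to back HTTP sessions with Hazelcast in a Spring Boot application is spring-session-hazelcast. This is only a sketch under assumptions (embedded Hazelcast instance, default map settings); it is not necessarily what the linked tutorial does, and sticky sessions are still required in front of the Vaadin nodes.

```java
// Sketch: Spring Session backed by an embedded Hazelcast instance.
// This replaces spring-session-jdbc; it does not remove the need for sticky sessions.
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

@Configuration
@EnableHazelcastHttpSession
public class SessionConfig {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        // Default config: cluster members discover each other automatically;
        // tune networking and the session map for your own environment.
        return Hazelcast.newHazelcastInstance(new Config());
    }
}
```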

Related

Microservices: Simultaneous cache updates

I am developing a microservice that will be deployed in Docker containers and managed by Kubernetes. I have to implement a caching solution using Hazelcast distributed cache. My requirements are:
Preload the cache on startup of this microservice. For around 3000 stores I have to fetch two specific attributes and cache them.
Every 24 hours refresh the cache.
I implemented a Spring @EventListener that, on startup, makes a database call for the two attributes and stores them in the cache with @CachePut.
I also have a Spring scheduler with a cron expression to refresh the cache every day at 6 AM.
So far so good.
But what I did not realize is that in a clustered environment, 10-15 instances of my microservice will be running and will try to do the above two steps almost simultaneously, creating a stampede effect on my database and cache. Does anyone know what to do in this scenario? Is there a good design, or even an average one, that I can follow?
Thanks.
You should look at Hazelcast's Loading and Storing Persistent Data mechanism, which gives you two options for writing (write-through and write-behind) and read-through for loading data into the cache.
Look at MapLoader and its methods: they let you warm up / preload your cluster, and you are free to do that with your own implementation, as in the sketch below.
For more details, see: https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#loading-and-storing-persistent-data
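A minimal MapLoader sketch for this use case, assuming Hazelcast 3.x (to match the linked 3.11 manual) and a hypothetical store table with two attribute columns:

```java
// Sketch of a Hazelcast 3.x MapLoader that warms the cluster-wide map from the database.
// Table and column names are hypothetical; the value type is a String[] pair for brevity.
import com.hazelcast.core.MapLoader;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StoreAttributeLoader implements MapLoader<String, String[]> {

    private final DataSource dataSource;

    public StoreAttributeLoader(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public String[] load(String storeId) {
        // Called on a cache miss; only the owning cluster member hits the database.
        String sql = "SELECT attr_one, attr_two FROM store WHERE store_id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, storeId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? new String[] { rs.getString(1), rs.getString(2) } : null;
            }
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public Map<String, String[]> loadAll(Collection<String> storeIds) {
        Map<String, String[]> result = new HashMap<>();
        for (String id : storeIds) {
            result.put(id, load(id));
        }
        return result;
    }

    @Override
    public Iterable<String> loadAllKeys() {
        // Returning the ~3000 store ids lets Hazelcast pre-populate the map once
        // for the whole cluster instead of once per microservice instance.
        List<String> keys = new ArrayList<>();
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT store_id FROM store");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                keys.add(rs.getString(1));
            }
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
        return keys;
    }
}
```

The loader is registered on the map through a MapStoreConfig, so the cluster, rather than each of the 10-15 service instances, decides when the database is hit; the periodic refresh can then be a TTL or a single scheduled reload instead of a stampede.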

Can I use the Grails session to store entire domain objects?

If I have a number of Grails domain objects that I do not want to save just yet, but still want to access throughout my application, is it wise to store them in the Grails / Hibernate session (especially as regards performance)? If not, what is the alternative?
What do you mean by the Grails / Hibernate session?
If you really mean the Hibernate session, adding an object to it will cause the object to be saved automatically when the session is flushed (unless the object doesn't validate, in which case it will be lost once the session is discarded). A session is created and discarded per request.
If you mean the session object that gets automatically injected into controllers and views, it's neither Grails nor Hibernate specific, but just the plain old HttpSession from the Servlet specification (see http://docs.oracle.com/javaee/7/api/javax/servlet/http/HttpServletRequest.html).
You can use that to store any kind of object if you need to access it across multiple requests from the same client. The session is private to a given client (who identifies it through the jsessionid cookie) and survives multiple requests. If you don't need the multiple-requests part, adding the objects as request attributes would suffice.
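A minimal sketch of that plain-HttpSession usage; the attribute name and helper class are made up, and in a Grails controller the same object is simply available as session:

```java
// Sketch: storing an unsaved object in the HttpSession so it survives later
// requests from the same client.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class DraftOrderHolder {

    static void remember(HttpServletRequest request, Object draftOrder) {
        HttpSession session = request.getSession();      // created lazily, one per client
        session.setAttribute("draftOrder", draftOrder);  // available on subsequent requests
    }

    static Object recall(HttpServletRequest request) {
        return request.getSession().getAttribute("draftOrder");
    }
}
```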
Putting things in the session is generally fine and fast (since by default it is kept in memory), but it will increase the memory footprint of the application if abused, and it will prevent horizontal scaling (i.e. deploying the same application on multiple instances) unless sticky-session mechanisms are used (or the session is persisted).
Bear in mind though that Grails uses a new Hibernate session per request (not an HTTP session :), so if you add objects that are attached to a Hibernate session to the HTTP session, and the Hibernate session is then closed, you might encounter problems. This shouldn't affect non-saved objects (they don't come from a Hibernate session), but it might affect their associations (other domain classes that do come from the database and therefore a Hibernate session). If that's the case, you might need to re-attach them. See https://grails.github.io/grails-doc/latest/ref/Domain%20Classes/attach.html
Also, if the session is invalidated (because the user logs out, or the server is re-deployed), everything that was stored in it will be gone.
If you don't want to rely on sessions at all, you can create your own MemoryBasedStoreService and use a ConcurrentHashMap or a similar mechanism to store and retrieve the objects. Since services are singletons in Grails, you can use it across the whole application, regardless of requests or clients - as long as your application is deployed on a single instance, of course :).
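A minimal sketch of that service idea, with made-up names; in Grails it would be a Groovy service class, which is a singleton by convention:

```java
// Sketch: an application-wide in-memory store, independent of HTTP sessions.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MemoryBasedStoreService {

    // Keyed by whatever identifies the draft object, e.g. "<userId>:<domainClass>:<tempId>".
    private final Map<String, Object> store = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        store.put(key, value);
    }

    public Object get(String key) {
        return store.get(key);
    }

    public void remove(String key) {
        store.remove(key);
    }
}
```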

Why do I need to support sticky sessions in Jetty when storing in data store?

I'm looking to store sessions in an external data store instead of in memory. I want to do this in order to have session data available to all my Jetty server instances.
So I read the Jetty docs:
Session Clustering with a Database
Session Clustering with MongoDB
Both docs have the following statement:
The persistent session mechanism works in conjunction with a load balancer that supports stickiness.
My question is: why do I need stickiness in this mechanism? The whole point is to allow any server in the cluster to serve any request, because the session data is stored externally.
The problem is that the servlet specification does not provide any transactional semantics around sessions. Thus, unless your sessions are sticky, it's possible for concurrent requests involving the same session to go to multiple servers and produce inconsistent results, depending on the interleaving of the writes. If your sessions are read-mostly you may get away with non-sticky sessions, although you may find that sessions time out a little sooner or later than you'd expect, due to having multiple servers trying to manage the same session.
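For illustration, here is the kind of read-modify-write on a session attribute that goes wrong without stickiness; the counter and class names are invented:

```java
// Sketch: a non-transactional session update. With non-sticky sessions, two nodes
// handling concurrent requests can interleave the read and the write, silently
// losing one of the increments.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class VisitCounter {

    static int increment(HttpServletRequest request) {
        HttpSession session = request.getSession();
        Integer count = (Integer) session.getAttribute("count"); // node A and node B may both read 5
        int next = (count == null) ? 1 : count + 1;
        session.setAttribute("count", next); // last writer wins; the other node's update is overwritten
        return next;
    }
}
```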

JSF Session management and tuning

All,
I am doing some investigation into how to shrink the amount of session memory our JSF application is consuming on a per user basis.
We are using MyFaces 1.1.7 and Tomahawk 1.1.5 running on IBM Websphere 7.0 patch 19. (Not able to upgrade either of these items at this time)
IBM's guideline is that the session size should be less than 5 KB (averaging around 2.5 KB) in order not to impact the performance of the server and session replication. We are currently using memory-to-memory replication but are looking at moving to a database, as suggested by IBM.
Our site was running at about 35 MB per user. We changed the number of view states from 100 to 10, and that dropped it down to around 4 MB.
We have several backing beans which are currently session scope and are looking at changing them to request scope.
I also found the following: http://www.econsulting.nl/images/pdf/Tuning%20JSF%20Applications-%20J-Spring%202008.pdf which seems to have a lot of information about how JSF handles certain content on the pages. This is still under investigation to make sure what is stated makes sense.
I have also read somewhere that regardless of whether a managed backing bean is session or request scoped, the view state will still contain the bean and its content, so the view state size will not change. I am looking for clarification on this one.
The question is: are others facing the same issue, in which JSF applications tend to consume a lot of memory for a given user's session?
What are some best practices, if any, to reduce this size, or is this just the way it is when using JSF?
Are there known issues with session replication on IBM WebSphere when running a JSF application?
Is there any documentation out there on how JSF/MyFaces makes use of heap memory (young vs. old generation), or should that even be considered in this scope? What about garbage collection tuning?
What we see as a result of this is that when a user hops to another server, the session data is not present, due to how large the data is and how long it takes to replicate. This causes user experience issues.
We had seen an issue in which changes to an object in the session appeared not to be updated correctly, and we have done some session-management tuning in which we customize the settings so that all session attributes are written out. Looking at the .jar file, it does appear that MyFaces is making the call correctly when the contents of an object in the session change, so the WebSphere session listener should be picking up that change.
You can try saving the view state on the client, but I am not sure whether MyFaces 1.1.7 already supports this; a sketch of the relevant configuration is below.
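If you do try it, the relevant switches live in web.xml as context parameters. This is a sketch only; verify both parameters against your MyFaces 1.1.x release (the second one is the MyFaces setting behind the 100-to-10 view-state change mentioned in the question):

```xml
<!-- Sketch: JSF/MyFaces context parameters in web.xml. -->
<context-param>
    <!-- Standard JSF switch: keep the view state on the client instead of in the session. -->
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>
<context-param>
    <!-- MyFaces-specific cap on the number of views kept in the session with server-side saving. -->
    <param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
    <param-value>10</param-value>
</context-param>
```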

How does session replication across containers work?

I would be interested in some timing details. For example, I place a container object in the session, which can hold different data, and I change the content of the container frequently. How can I make sure that the container's session value gets replicated across nodes for every change?
You don't need to make sure; that's the application server's job.
The J2EE specification doesn't deal with session-information synchronization amongst distributed components.
Theoretically, all you have to do is code thread-safe. In your example, simply make sure that access to the container is synchronized. If your application server is bug-free, then you can safely assume that the session information is properly replicated across all nodes in a seamless manner; if your application server has bugs around session synchronization... well... then nothing is really safe anymore, now is it.
Application servers use different strategies to synchronize session information between nodes. Session content can be considered dirty, and therefore in need of synchronization, when you:
put data into the session
get data from the session
Getting data from the session falls into two categories:
getting a structured object
getting a scalar or immutable object
So if session data gets modified indirectly, by modifying a structured object obtained from the session, then simply re-reading it from the session can ensure that the object's content gets replicated.
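As a concrete illustration of the indirectly-modified structured object, here is a hedged sketch with a made-up "cart" attribute. Whether a plain get is enough to mark the attribute dirty depends on the server's write policy, so the sketch also puts the attribute back explicitly, which is the more portable habit:

```java
// Sketch: modifying a structured object held in the session, then re-associating it
// so the container treats the attribute as dirty and replicates the new content.
import java.util.ArrayList;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class CartUpdater {

    static void addItem(HttpServletRequest request, String item) {
        HttpSession session = request.getSession();

        @SuppressWarnings("unchecked")
        List<String> cart = (List<String>) session.getAttribute("cart");
        if (cart == null) {
            cart = new ArrayList<>();
        }
        cart.add(item); // indirect change: the session still holds the same reference

        // Explicit re-put: signals the change to the container regardless of its write policy.
        session.setAttribute("cart", cart);
    }
}
```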

Resources