Let's say I have a Java Servlet+JSP app using the Spring framework and Tomcat 6. This app must be hosted on multiple machines. How can I share the HTTP session across those machines?
I usually get my session using this code:
HttpServletRequest httpRequest = (HttpServletRequest) request;
HttpSession session = httpRequest.getSession();
Should I use some other kind of session (a custom implementation of HttpSession) backed by a common MySQL database, or something similar? Any ideas?
If you want to use shared HTTP session storage, you need to override the session manager in the application server that you use. Here is a link for Tomcat 8 to give you an idea: http://tomcat.apache.org/tomcat-8.0-doc/config/manager.html
To answer the 'should' part of your question: you don't have to. You could use the session-cookie-based 'stickiness' option on your load balancer as an alternative to shared session storage.
Related
We have started using the Infinispan cache with WildFly 13 in our web application. The web application is deployed in WildFly domain mode in a cluster of two nodes, with one acting as master and the other as slave. In the application we have an admin feature, where the admin can terminate a user.
So what we want to do is add session objects to the Infinispan cache, retrieve them, and terminate them when required. I am aware that the HttpSession object is not serializable, hence it cannot be added to the cache, but every attribute added to the session object is serializable. So my question is: is there a workaround for this issue? Right now we get a NotSerializable error when we try to add the session to the cache, and it is also no longer possible to retrieve a session from its sessionId and terminate it, for security reasons.
There is no need to manually interact with the Infinispan cache: WildFly transparently supports full HTTP session clustering with Infinispan. See https://docs.jboss.org/author/display/WFLY10/High+Availability+Guide
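If you still need the admin "terminate a user" feature from the question, one plain Servlet API workaround (not part of WildFly's clustering, and the class name here is hypothetical) is to keep a registry of live sessions via an HttpSessionListener and invalidate them by id, instead of putting HttpSession itself into any cache. Note that such a registry is per node, so in a two-node domain the invalidation has to run on the node that owns the session:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Hypothetical sketch: track live sessions in memory instead of trying to
// serialize HttpSession into the Infinispan cache.
@WebListener
public class SessionRegistry implements HttpSessionListener {

    private static final Map<String, HttpSession> SESSIONS = new ConcurrentHashMap<>();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        SESSIONS.put(se.getSession().getId(), se.getSession());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        SESSIONS.remove(se.getSession().getId());
    }

    // Called by the admin feature to terminate a user's session by its id.
    public static void terminate(String sessionId) {
        HttpSession session = SESSIONS.get(sessionId);
        if (session != null) {
            session.invalidate();
        }
    }
}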
I am working on setting up a REST client using the JAX-RS 2 Client API.
In the API doc it says "Clients are heavy-weight objects that manage the client-side communication infrastructure. Initialization as well as disposal of a Client instance may be a rather expensive operation. It is therefore advised to construct only a small number of Client instances in the application." (https://docs.oracle.com/javaee/7/api/javax/ws/rs/client/Client.html). Based on this statement, it sounds like Client is not thread-safe and I should not be using a single Client instance for all requests.
I am using the CXF implementation; so far I haven't found a way to set up a pool of Client objects.
If anyone has any information regarding this, could you please share it?
Thanks in advance.
By default, CXF uses a transport based on the JDK's built-in HttpURLConnection to perform HTTP requests.
Connection pooling is performed, allowing persistent connections to reuse the underlying socket connection for multiple HTTP requests.
Set these system properties to configure the pool (default values shown):
http.keepalive=true
http.maxConnections=5
Increase the value of http.maxConnections to raise the maximum number of idle connections that will be kept alive simultaneously, per destination. See the complete list of properties in this link: properties.html
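These are plain JDK system properties, so they can also be set programmatically at application startup, ideally before the first HTTP request is made. A minimal sketch (the value 20 is just an example, not a recommendation):
// Sketch: tune the JDK HttpURLConnection keep-alive pool used by CXF's
// default transport. Run this early, before the first request is made.
public final class HttpPoolConfig {

    public static void configure() {
        // Keep persistent connections enabled (this is already the JDK default).
        System.setProperty("http.keepalive", "true");
        // Raise the number of idle connections kept alive per destination
        // from the default of 5 to an example value of 20.
        System.setProperty("http.maxConnections", "20");
    }
}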
How this works is explained in some detail in this post:
Java HttpURLConnection and pooling
Note also that the default JAX-RS client is not thread-safe. Check the limitations for proper use here.
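Given that limitation, one simple pattern (a hypothetical helper, not from the CXF docs) is to confine each Client instance to a single thread while still reusing it, since creating a new Client per request would be expensive:
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

// Sketch: one Client per thread, so instances are reused (they are expensive
// to create) without ever being shared across threads.
// (In a real application the Clients should also be closed on shutdown.)
public final class PerThreadClient {

    private static final ThreadLocal<Client> CLIENT =
            ThreadLocal.withInitial(ClientBuilder::newClient);

    private PerThreadClient() {
    }

    public static Client get() {
        return CLIENT.get();
    }
}
A caller would then do PerThreadClient.get().target(...).request().get(...) instead of building a new Client for every request.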
When you need many requests executed simultaneously, CXF can also use the asynchronous Apache HttpAsyncClient. See details here:
http://cxf.apache.org/docs/asynchronous-client-http-transport.html
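Using CXF's own WebClient API for brevity, the async conduit is typically selected through the "use.async.http.conduit" contextual property described on that page. Treat the exact wiring below as an assumption to verify against the linked docs; it also requires the cxf-rt-transports-http-hc module on the classpath:
import org.apache.cxf.jaxrs.client.WebClient;

// Sketch, assuming cxf-rt-transports-http-hc is available: ask CXF to use the
// Apache HttpAsyncClient based conduit for requests made through this client.
public class AsyncConduitExample {

    public static void main(String[] args) {
        WebClient client = WebClient.create("http://example.org/api"); // placeholder URL
        WebClient.getConfig(client).getRequestContext()
                .put("use.async.http.conduit", true);

        String body = client.accept("text/plain").get(String.class);
        System.out.println(body);
    }
}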
Currently we are deploying to a cluster scenario where we have 3 Tomcat nodes, all of which share their sessions via Hazelcast. We have an Apache server in front of these nodes as a load balancer.
Curling our application, I see that two session cookies are used:
1) the usual Tomcat session with a JSESSIONID
2) the Hazelcast session with a hazelcast.sessionId
Is there any way to omit the JSESSIONID?
or
Is there any way to somehow "join" both?
Thanks in advance.
No, both are required in the current implementation.
Hazelcast uses only hazelcast.sessionId as HttpSession.getId() and nearly everywhere else to identify the distributed session. But in some cases, such as failover, Hazelcast uses both session identifiers (hazelcast.sessionId and JSESSIONID) internally.
From Hazelcast documentation:
SessionId Generation
SessionId generation is done by the Hazelcast Web Session Module if session replication is configured in the web application. The default cookie name for the sessionId is hazelcast.sessionId, and this is configurable with the cookie-name parameter in the web.xml file of the application. hazelcast.sessionId is just a UUID prefixed with the "HZ" characters and without "-" characters, e.g. HZ6F2D036789E4404893E99C05D8CA70C7.
When called by the target application, the value of HttpSession.getId() is the same as the value of hazelcast.sessionId.
Can we extend the session through Java code?
I found code to extend the session through JavaScript. But I have a doubt: can we extend the session through Java code? I need to extend the session through Java only.
Can anyone help me?
Thanks.
The session is maintained by the application server. If you need longer sessions, you can configure them in your appserver.
The application server implicitly extends a session whenever it sees an HTTP request with that session id; that's why extending the session through JavaScript works: it just triggers some HTTP request and counts on the side effect of extending the session.
On an application server you typically don't have custom threads running anyway, so you shouldn't count on those to help you. It's the wrong place to solve the problem.
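To make the mechanism concrete: whatever URL the JavaScript pings, it is the request itself that refreshes the session on the server. A minimal sketch of such a keep-alive endpoint (the URL pattern and status codes are arbitrary choices for illustration):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch: the container updates the session's last-accessed time for every
// request that carries the session id; this servlet adds nothing beyond that.
@WebServlet("/keepalive")
public class KeepAliveServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(false); // do not create a new session
        response.setStatus(session != null
                ? HttpServletResponse.SC_NO_CONTENT
                : HttpServletResponse.SC_UNAUTHORIZED);
    }
}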
No, you cannot extend the session timeout through Java, because the session is maintained by the application server.
If you want to extend the session, you can configure it in the application server.
I set up two identical Tomcat servers that host the same web application (Sun RI JSF 2 / Tomahawk). For load balancing and fail-over scenarios I use an nginx server as a reverse proxy, delegating each request to one server or the other. Right now one Tomcat is defined as the backup, so that Tomcat server 1 handles all the requests. When I kill the process of Tomcat 1, nginx nicely delegates the following requests to Tomcat server 2. In order to reuse the session data I configured both Tomcat servers to use memcached as the session store. JSF is configured to store its state on the server.
According to the log files, this setup looks quite good, and session data is read and stored using the memcached server. For example, this makes it possible to use the web application without logging in again even after Tomcat 1 has been shut down.
Nevertheless, it seems as if my (session-scoped) backing beans are not stored, or are not used after the session is restored: form fields that are supposed to be filled with data from the session bean are left empty.
Is it possible to do such things with the mentioned technologies at all?
With memcached-session-manager and OWB you should use Tomcat < 7.0.22, as in that version the notification of ServletRequestListeners was changed (which is the mechanism used by OWB for failover support).
I'm currently working on a new version of msm that works with OWB and Tomcat >= 7.0.22.