Two Apache with two Tomcat servers, sticky sessions not working - mod_jk

We have an environment with two Apache web servers and two Tomcat servers connected via mod_jk. Both Apache servers are configured for load balancing as well as failover.
On top of the two Apache servers there is a load balancer configured with a round-robin algorithm.
The problem is that sessions randomly get invalidated when both Apache servers are running.
If only one of the Apache servers is running, load balancing and failover work perfectly fine.
Do I need to configure the two Apache web servers differently?
Do I need to change some configuration on the two Apache servers to make sticky sessions work?
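As background, mod_jk sticky sessions rely on the jvmRoute suffix in the JSESSIONID cookie matching a worker name, so both Apache servers need identical worker definitions and each Tomcat needs a matching jvmRoute. A minimal sketch of the kind of configuration involved (all worker names, hostnames, and ports here are placeholders):

    # workers.properties (identical on both Apache servers)
    worker.list=lb

    worker.tomcat1.type=ajp13
    worker.tomcat1.host=tomcat1.example.com
    worker.tomcat1.port=8009

    worker.tomcat2.type=ajp13
    worker.tomcat2.host=tomcat2.example.com
    worker.tomcat2.port=8009

    worker.lb.type=lb
    worker.lb.balance_workers=tomcat1,tomcat2
    worker.lb.sticky_session=True

    <!-- server.xml on the first Tomcat: jvmRoute must equal the worker name -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">

If the two Apache servers used different worker names, or the jvmRoute values did not match them, requests could be re-routed to the other Tomcat, which would look like random session invalidation.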

Related

Access to Redis Cluster via Single Endpoint

I have an application using Redis. The system is implemented with Java Spring and uses the Jedis package to connect to Redis, with the following configuration:
jedis.pool.host=redisServer-IP
So the application connects to the Redis server at redisServer-IP and works fine. However, because a single server lacks the memory and HA capability I need, I have to use a Redis cluster, which I created with Docker Compose.
The Redis cluster itself is working fine with three masters and three replicas.
I just need to understand: can the Redis cluster work with a single endpoint, given that I can only set a single endpoint in the jedis.pool.host configuration above, or do I need a proxy to deal with the Redis cluster?
NOTE: I cannot make any changes to my application.
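For illustration, the difference between the current single-endpoint setup and a cluster-aware client looks roughly like this with Jedis (a sketch only; hosts, ports, and keys are placeholders, and the cluster-aware variant would require the application change that is ruled out above):

    import redis.clients.jedis.HostAndPort;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisCluster;
    import redis.clients.jedis.JedisPool;

    public class JedisEndpointSketch {
        public static void main(String[] args) throws Exception {
            // Single endpoint, as implied by jedis.pool.host: a plain Jedis/JedisPool
            // client talks to one node only and does not follow the MOVED/ASK
            // redirects that Redis Cluster uses to spread keys across masters.
            try (JedisPool pool = new JedisPool("redisServer-IP", 6379);
                 Jedis jedis = pool.getResource()) {
                jedis.set("some-key", "some-value");
            }

            // Cluster-aware client: JedisCluster discovers the topology from a seed
            // node and routes each key to the right master, but needs a code change.
            try (JedisCluster cluster = new JedisCluster(new HostAndPort("redisServer-IP", 6379))) {
                cluster.set("some-key", "some-value");
            }
        }
    }

This only shows why pointing jedis.pool.host at one cluster node is not equivalent to cluster support; whether a proxy in front of the cluster is acceptable is exactly the open question.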

Load Balancing ActiveMQ Artemis in JBoss EAP 7.2.0

We are developing an application using Spring Boot and Apache Camel that reads a message from ActiveMQ Artemis, does some transformation, and sends it to ActiveMQ Artemis. Our application is deployed as a WAR file on an on-premises JBoss EAP 7.2.0. Both the source and target applications are remote to our application and are also deployed on JBoss EAP 7.2.0. The remote queues to which Camel connects are ActiveMQ Artemis queues created in JBoss and accessed using the http-remoting protocol. This setup was working when there was only one node of each application.
Now we are scaling the source and target applications to 3 nodes each (i.e., they will be deployed on multiple JBoss servers). The front ends of the source and target applications are configured behind and accessed through a load balancer.
Can we configure the load balancer to access the source and target brokers from the Camel layer? There will be 3 source and 3 target brokers. Or is clustering the brokers the only option in this case?
We are thinking of load balancing between the queues rather than clustering. Suppose we have three queues q1, q2, and q3 with corresponding brokers b1, b2, and b3. I would configure the load balancer URL in the Camel layer like http-remoting://<load-balancer-url>:<port> (much like we do when load balancing HTTP API requests). Any message coming in will hit the load balancer, and the load balancer will decide which queue to route the message to.
JMS connections are stateful. When a client creates a connection there is no indication of the queues to which it will send messages. The load-balancer will have to direct that client's connection to either b1, b2, or b3 and it will have no way to determine where it should go. A load-balancer working with messaging will almost certainly only be able to balance connections, not messages. It sounds like you want load-balancing at the message level instead. Perhaps you should look into something like Qpid Dispatch Router.
Messaging doesn't use HTTP so using an HTTP load balancer like you do with your HTTP API(s) won't work. It's easy for a load-balancer to inspect HTTP headers and route requests, especially since HTTP is stateless. However, messaging connections are stateful and the protocols are typically quite a bit more complex than HTTP. I don't know of any load-balancers that will work the way you are wanting for messaging.
You need your client not to use the cluster topology; you can do this with setUseTopologyForLoadBalancing on your ActiveMQConnectionFactory. If you get the connection factory from EAP, I think this has been configurable on the connection factory since EAP 7.3.
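A rough sketch of what that looks like with the standalone Artemis JMS client (the broker URL, host, and port are placeholders; when the factory comes from EAP via JNDI the equivalent flag would be set in the server configuration instead):

    import javax.jms.Connection;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class ArtemisClientSketch {
        public static void main(String[] args) throws Exception {
            // Point the factory at a fixed address, e.g. the load balancer ...
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://load-balancer-host:61616");

            // ... and stop the client from switching to brokers advertised in the
            // cluster topology it receives after connecting.
            factory.setUseTopologyForLoadBalancing(false);

            try (Connection connection = factory.createConnection()) {
                connection.start();
                // create sessions, producers and consumers as usual
            }
        }
    }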

How to run Spring Cloud Config Server in fault-tolerant mode?

In my project we have a requirement to run two instances of Spring Cloud Config Server, so that if one instance goes down, the other will take over the config server responsibilities.
Currently, you would need to put the config server behind a load balancer. It is stateless, so that wouldn't hurt. There is an open issue about configuring multiple config server URLs in the client, so it could do failover there.
If you are running multiple instances of the config server, you can have them all register themselves in Eureka, and look up the config server by its application name via Eureka in all the other microservices. This way, Zuul (and Ribbon) will take care of the load balancing.
Edit:
I guess spencergibb is right. It's best to use a load balancer, e.g. ELB, if you're going to deploy on AWS.
Consider multiple spring.cloud.config.uri entries for high availability.
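Both suggestions map to client-side configuration along these lines (a sketch; hostnames, ports, and the Eureka service id are placeholders, and the multiple-URI failover depends on the Spring Cloud version in use):

    # bootstrap.properties of a config client
    # Option 1: several config server URLs, tried in order for failover
    spring.cloud.config.uri=http://config-server-1:8888,http://config-server-2:8888

    # Option 2: discover config server instances through Eureka
    spring.cloud.config.discovery.enabled=true
    spring.cloud.config.discovery.service-id=configserver
    eureka.client.service-url.defaultZone=http://eureka-host:8761/eureka/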

I have a licence for WebSphere Application Server (base package); can that be used to create a cluster environment?

I have got the WebSphere Application Server base version and I need to deploy an application in a horizontal cluster. Can that be achieved without the Network Deployment version?
You cannot create and manage a cluster on the base version.
What you can do is so-called simple load balancing: you have two separate servers, onto which you manually and separately deploy the same application, and you configure IBM HTTP Server to do the load balancing between these two servers.
So you get failover and load balancing, but without centralized management.
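The usual way to express that is a hand-edited plugin-cfg.xml for the web server plugin, roughly along these lines (a sketch only; in practice the file is generated per server and then merged, and all names, hosts, and ports here are placeholders):

    <ServerCluster Name="SimpleLBCluster" LoadBalance="Round Robin">
        <Server Name="server1">
            <Transport Hostname="host1.example.com" Port="9080" Protocol="http"/>
        </Server>
        <Server Name="server2">
            <Transport Hostname="host2.example.com" Port="9080" Protocol="http"/>
        </Server>
    </ServerCluster>
    <UriGroup Name="SimpleLBUris">
        <Uri Name="/myapp/*"/>
    </UriGroup>
    <Route ServerCluster="SimpleLBCluster" UriGroup="SimpleLBUris"/>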

Restoring JSF application state using memcached as session fail-over

I set up two identical Tomcat servers that host the same web application (Sun RI JSF 2 / Tomahawk). For load balancing and fail-over scenarios I use an nginx server as a reverse proxy, delegating requests to one server or the other. Right now one Tomcat is defined as the backup, so that Tomcat server 1 handles all the requests. When I kill the Tomcat 1 process, nginx nicely delegates the following requests to Tomcat server 2. In order to reuse the session data I configured both Tomcat servers to use memcached as the session store. JSF is configured to store its state on the server.
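That setup corresponds roughly to the following configuration (a sketch only, assuming memcached-session-manager on the Tomcat side; all node names, hosts, and ports are placeholders):

    # nginx upstream: tomcat2 only receives traffic when tomcat1 is down
    upstream tomcats {
        server tomcat1.example.com:8080;
        server tomcat2.example.com:8080 backup;
    }

    <!-- conf/context.xml on both Tomcats: keep sessions in memcached -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:memcached-host:11211"/>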
Judging by the log files, this setup looks quite good: session data is read and stored via the memcached server. For example, the web application can be used without logging in again even after Tomcat 1 has been shut down.
Nevertheless, it seems as if my (session-scoped) backing beans are not stored, or are not used after the session is restored. Form fields that are supposed to be filled with data from the session bean are left empty.
Is it possible to do such things with the mentioned technologies at all?
With memcached-session-manager and OWB you should use Tomcat < 7.0.22, as in that version the notification of ServletRequestListeners was changed (which is the mechanism OWB uses for failover support).
I'm currently working on a new version of msm that works with OWB and Tomcat >= 7.0.22.

Resources