Understanding effects of Domino command to restart HTTP server

We have a Domino cluster which consists of two servers. Recently we have seen that one of the servers has memory problems and its HTTP service goes down after about 2 hours. So, until we find the cause of the memory leak, we plan to implement a scheduled server task which runs the command nserver -c "restart task http". The HTTP service restarts in, say, 15 seconds. But what would happen if a user submits data during this small window? Will the cluster manager automatically move the user's session to the other server and so load-balance the submit? We are not sure about this. Failover works fine in the normal case: when one of the servers goes down, the other server picks up the load. But we are not sure about the behavior of the "restart task http" command. Does the HTTP restart finish all pending threads first, or does the Domino cluster manager switch the request over to the other server?
Thanks in advance

The server should close out all HTTP requests prior to shutting down and restarting.
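For reference, the scheduled restart described in the question can be as simple as a one-line batch file run by the Windows Task Scheduler; this is only a sketch, and the Domino program directory path is an assumption, not from the original post:

REM restart_http.cmd - sketch only; adjust the path to your Domino program directory.
cd /d "C:\Program Files\IBM\Domino"
nserver -c "restart task http"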

Related

Why can Tomcat stop responding until the deployed app is stopped?

Tomcat 8 is running in a Docker container with a single app deployed there.
The app is mainly busy processing user requests and cron jobs (usually additional work needs to be done after a user request is finished).
What the problem looks like (from the logs):
The app (deployed under /mysoawesomeapp) is working as usual, processing requests and cron jobs.
Then there is a gap of a couple of minutes, as if the app had frozen.
Docker is running a health check on localhost:8080 every 30s, waiting up to 10s for a response, and restarts the container when the checks fail (settings sketched below).
I can see the shutdown request in the logs, and then I can also see those health-check responses with a 200 status. It doesn't really matter at that point, since the server is being shut down.
My question is: how is it possible that a localhost:8080 request, which would normally load the Tomcat home page, can be halted until the server shutdown occurs? How can mysoawesomeapp have such an impact, and how can I confirm it?
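The health check described above maps onto Docker's HEALTHCHECK settings; a minimal sketch, assuming curl is available in the image and that three consecutive failures mark the container unhealthy (the retries value and the exact URL are assumptions, not from the original question):

# Dockerfile fragment: probe the Tomcat root every 30s, allow 10s for a response.
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
    CMD curl -f http://localhost:8080/ || exit 1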

Keep process alive during shutdown

I have two desktop apps on the same machine, let's call them Client and Server. When Windows goes into shutdown I would like to have the Client do some short housecleaning with the Server. The Client knows it's closing time because in OnFormClosing the FormClosingEventArgs.CloseReason is CloseReason.WindowsShutDown. But in the meantime the Server may be forcefully killed by the OS. Is it possible to keep the Server alive for as long as possible, so that all the Clients can finish their jobs, but not halt the shutdown entirely?
The Server does not know which Clients are alive and in need of housecleaning.
Neither the Client nor the Server should cause Windows to show the message saying that an app is preventing Windows from shutting down.
I guess I'm asking for some Windows API calls that can negotiate with Windows to kill the process last if possible, but any working solution is welcome. The Client is written in C# and the Server is written in C++.
The Server should be keeping track of the Clients that are connected to it. So, if your apps are busy performing housecleaning, they ARE blocking shutdown, even if just momentarily. So what is wrong with letting Windows show a message to the user saying that?
When the Server gets notified of an imminent shutdown, have it call ShutdownBlockReasonCreate() if there are any Clients connected. Regardless of whether the Clients perform housecleaning or not, when the last Client disconnects then the Server can call ShutdownBlockReasonDestroy().
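A minimal Win32 sketch of that flow, in C++ since the Server is C++; the client-count bookkeeping and the hidden window setup are placeholders, not part of the original answer:

// Sketch only: the Server's connection-handling code is assumed to maintain the client count.
// ShutdownBlockReasonCreate/Destroy need Windows Vista or later; link against user32.
#include <windows.h>

static LONG g_connectedClients = 1;   // placeholder; updated by the Server's connection code

static LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_QUERYENDSESSION:
        if (g_connectedClients > 0)
        {
            // Register the reason Windows shows if it has to wait for this process.
            ShutdownBlockReasonCreate(hWnd, L"Waiting for clients to finish housecleaning");
            return FALSE;                    // hold the shutdown while Clients clean up
        }
        return TRUE;                         // nothing pending, shutdown may proceed
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hWnd, msg, wParam, lParam);
}

// The Server would call this when the last Client disconnects.
static void OnLastClientDisconnected(HWND hWnd)
{
    ShutdownBlockReasonDestroy(hWnd);        // clear the block so shutdown can continue
}

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = L"ShutdownAwareServer";
    RegisterClassW(&wc);

    // A plain top-level window (it can stay hidden) is enough to receive WM_QUERYENDSESSION.
    CreateWindowW(wc.lpszClassName, L"Server", 0, 0, 0, 0, 0, NULL, NULL, hInst, NULL);

    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0) > 0)
        DispatchMessageW(&msg);
    return 0;
}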
The obvious solution is to make the server a Windows service.
As a stop-gap solution you can try SetProcessShutdownParameters.
This function sets a shutdown order for a process relative to the other processes in the system.
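A minimal sketch of that stop-gap; the specific level is an assumption (0x100 is the bottom of the application-reserved range, and the system shuts processes down from high levels to low):

// Sketch only: ask Windows to terminate this process as late as possible during shutdown.
#include <windows.h>

int main()
{
    // Application levels run from 0x100 to 0x3FF and every process starts at 0x280;
    // lower levels are shut down later. SHUTDOWN_NORETRY skips the retry dialog if the
    // process does not exit in time.
    SetProcessShutdownParameters(0x100, SHUTDOWN_NORETRY);

    // ... the Server's normal work would run here ...
    return 0;
}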

Spamming logs for "Monitor thread successfully connected to server" from the MongoDB Java driver

I was given a Spring application to work on and, upon running it locally, I noticed continuous spamming (every 10s) of the message below on the console:
Monitor thread successfully connected to server with description ServerDescription {address=some_replica_member_name.mongodb.net:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED..
I am aware that it is used to poll the server to check its status, but in all the other applications I've worked with I've never seen this log spammed unless there was actual connection activity going on. What gives? The setup is just:
spring.data.mongodb.uri=mongodb+srv://user:xxx#some.mongo.host.net/some_db
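For what it's worth, those messages come from the driver's cluster monitor logger. If the goal is simply to quiet them, one hedged option, assuming Spring Boot's logging.level properties and the driver's org.mongodb.driver.cluster logger name, is to raise that logger's threshold:

# application.properties fragment: keep WARN and above from the cluster monitor, drop the INFO chatter.
logging.level.org.mongodb.driver.cluster=WARN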

WebSphere web plug-in to automatically propagate cluster node shutdown

Does the WebSphere web server plug-in automatically pick up the configuration change caused by a manual shutdown of a node in the application server cluster? I've been going through the documentation and it looks like the only way for the web server to act on this is by detecting the node state by itself.
Is there any workaround?
By default, the WAS Plug-in only detects that a JVM is down by failing to send it a request or failing to establish a new TCP connection.
If you use the "Intelligent Management for WebServers" features available in 8.5 and later, there is a control connection between the cell and the Plug-in that will proactively tell the Plugin that a server is down.
Backing up to the non-IM case, here's what happens during an unplanned shutdown of a JVM (from http://publib.boulder.ibm.com/httpserv/ihsdiag/plugin_questions.html#failover)
If an application server terminates unexpectedly, several things unfold. This is largely WebSphere edition independent.
The application server's operating system closes all open sockets.
Web server threads waiting for a response in the WAS Plug-in are notified of EOF or ECONNRESET.
If the error occurred on a new connection to the application server, it will be marked down in the current web server process. This server will not be retried until a configurable interval expires (RetryInterval).
If the error occurred on an existing connection to the application server, it will not be marked down.
Retryable requests that were in-flight are retried by the WAS Plug-in, as permitted.
If the backend servers use memory to memory session replication (ND only), the WLM component will tell the WAS Plug-in to use a specific replacement affinity server.
If the backend servers use any kind of session persistence, the failover is transparent. Session persistence is available in all websphere editions.
New requests, with or without affinity, are routed to the remaining servers.
After the RetryInterval expires, the WAS Plug-in will try to establish new connections to the server. If it remains down, failure will be relatively fast, and will put the server back into the marked-down state.
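For orientation, the RetryInterval mentioned above is an attribute of the ServerCluster element in plugin-cfg.xml; here is a trimmed sketch in which the cluster, server and host names, ports and timeouts are all illustrative:

<!-- Trimmed plugin-cfg.xml sketch; values are placeholders, not a generated configuration. -->
<ServerCluster Name="MyCluster" LoadBalance="Round Robin" RetryInterval="60">
   <Server Name="Node1_Member1" ConnectTimeout="5" ServerIOTimeout="60">
      <Transport Hostname="node1.example.com" Port="9080" Protocol="http"/>
   </Server>
   <Server Name="Node2_Member2" ConnectTimeout="5" ServerIOTimeout="60">
      <Transport Hostname="node2.example.com" Port="9080" Protocol="http"/>
   </Server>
   <PrimaryServers>
      <Server Name="Node1_Member1"/>
      <Server Name="Node2_Member2"/>
   </PrimaryServers>
</ServerCluster>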

Load balancing with nginx

I want to stop serving requests to my back-end servers if the load on those servers goes above a certain level. Anyone who is already surfing the site will still get routed, but new connections will be sent to a static "server busy" page until the load drops below a predetermined level.
I can use cookies to let the current customers in, but I can't find information on how to do routing based on a custom load metric.
Can anyone point me in the right direction?
Nginx has an HTTP Upstream module for load balancing. Checking the responsiveness of the backend servers is done with the max_fails and fail_timeout options. Routing to an alternate page when no backends are available is done with the backup option. I recommend translating your load metrics into the options that Nginx supplies.
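A minimal upstream sketch using those options; host names, ports and thresholds are illustrative, not from the original answer:

# Passive health checks: after 3 failed attempts a server is considered down for 30s.
upstream backend {
    server app1.example.com:8080 max_fails=3 fail_timeout=30s;
    server app2.example.com:8080 max_fails=3 fail_timeout=30s;
    # Receives traffic only when all regular servers are unavailable, e.g. a static "busy" page.
    server busy.example.com:8080 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}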
Let's say though that Nginx still sees the backend as being "up" when the load is higher than you want. You may be able to adjust that further by tuning the maximum connections of the backend servers. So, maybe the backend servers can only handle 5 connections before the load is too high, so you tune them to only allow 5 connections. Then on the front end, Nginx will time out immediately when trying to send a sixth connection, and mark that server as inoperative.
Another option is to handle this outside of Nginx. Software like Nagios can not only monitor load, but can also proactively trigger actions based on the monitoring it does.
You can generate your Nginx configs from a template that has options to mark each upstream node as up or down. When a monitor detects that the upstream load is too high, it could re-generate the Nginx config from the template as appropriate and then reload Nginx.
A lightweight version of the same idea could be done with a script that runs on the same machine as your Nagios server and performs simple monitoring as well as the config file updates.
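One way to picture the template approach: keep a placeholder per upstream node and have the monitoring script render it to either nothing or "down" before reloading Nginx. The file layout and placeholder syntax below are assumptions, not from the original answer:

# upstream.conf.template - {{STATE_APP1}} and {{STATE_APP2}} are rendered by the monitoring
# script to "" (node in rotation) or "down" (node taken out), after which Nginx is reloaded.
upstream backend {
    server app1.example.com:8080 {{STATE_APP1}};
    server app2.example.com:8080 {{STATE_APP2}};
    server busy.example.com:8080 backup;
}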

Resources