Tomcat 8 is running in a Docker container with a single app deployed there.
The app is mostly busy processing user requests and cron jobs (usually additional work needs to be done after a user request finishes).
Here is the problem (as seen in the logs):
The app (deployed under /mysoawesomeapp) is working as usual, processing requests and cron jobs.
Then there is a gap of a couple of minutes, as if the app had frozen.
Docker runs a health check against localhost:8080 every 30s, waiting up to 10s for a response; when the check fails, it restarts the container.
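For reference, the check is roughly equivalent to a Dockerfile HEALTHCHECK like this (a sketch: only the 30s/10s timings come from the setup above; the curl command, path, and retry count are assumptions):

    HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
      CMD curl -f http://localhost:8080/ || exit 1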
I can see the shutdown request in the logs, and then I can also see health check responses with status 200. That doesn't really matter at this point, since the server is already shutting down.
My question is: how is it possible that a request to localhost:8080, which would normally load the Tomcat home page, can be stalled until the server shutdown occurs? How can mysoawesomeapp have an impact on that, and how can I confirm it?
Related
I'm using Docker Swarm to orchestrate the containers of my microservices.
For one of the microservices I have 2 replicas, so requests are sent to one of them.
But when one of these 2 containers is stopped and then started again, the application inside the container needs some time to start.
However, requests are sent to the container as soon as it starts, and since the app is not up yet (it needs about 5 minutes to start), I get server connection errors.
Is there any configuration (maybe a parameter in docker-compose) for Swarm load balancing that keeps requests away from a container for some configured time after start, so the load balancer waits for the app?
I tried the healthcheck parameter in docker-compose, but it did not work.
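For context, the kind of block I mean looks like this in docker-compose (a sketch: the service name, image, test command, and timings are placeholders, and start_period needs compose file format 3.4+):

    version: "3.4"
    services:
      myservice:                 # hypothetical service name
        image: myservice:latest  # hypothetical image
        deploy:
          replicas: 2
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/"]  # assumed endpoint; curl must exist in the image
          interval: 30s
          timeout: 10s
          retries: 3
          start_period: 5m       # grace period matching the ~5 minute startup

As far as I understand, in swarm mode the routing mesh is supposed to hold traffic back until the container reports healthy, with start_period giving the app its boot time, but that is exactly the behavior I could not get to work.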
We have Spring Boot microservices working well.
Recently we wanted to add a mail notification feature, so we added the Spring Boot mail starter dependency.
As soon as we made this change, all our services started shutting down and coming back up continuously, and below is the text in the console log:
Saw local status change event StatusChangeEvent [timestamp=...... , current=DOWN, previous=UP]
Saw local status change event StatusChangeEvent [timestamp=...... , current=UP, previous=DOWN]
Also, after 4 lines like the above, there is one more line like:
Ignoring onDemand update due to rate limiter
I'm not sure what the issue could be, but it seems the service is pinging the mail server; if it gets no pulse it shuts itself down, and on the next pulse it gets a connection and comes back up again.
Has anyone faced such an issue?
Finally, after debugging the code, I found that the problem is the health check of the mail server, since Actuator is used for health checks across all our services.
Whenever a health check runs and there is no response from the mail server, the service goes down. The check runs every 30 seconds; I tried to find a parameter to reduce the frequency, but could not find one.
So for now I set management.health.mail.enabled to false, and the services no longer shut down in a loop.
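For reference, the change is this single property in application.properties (it disables the Actuator mail health indicator):

    # Stop the mail health indicator from flipping the service status
    # when the SMTP server does not respond in time
    management.health.mail.enabled=false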
I was given a Spring application to work on, and upon running it locally I noticed the line below being spammed continuously (every 10s) on the console:
Monitor thread successfully connected to server with description ServerDescription {address=some_replica_member_name.mongodb.net:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED..
I am aware that it is used to poll the server and check its status, but in all the other applications I've worked with, I've never seen this log spammed unless there was actual connection activity going on. What gives? The setup is just:
spring.data.mongodb.uri=mongodb+srv://user:xxx@some.mongo.host.net/some_db
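If the goal is only to silence the heartbeat chatter, one hedged option (assuming Spring Boot's default Logback setup and the usual logger name used by the MongoDB Java driver) is to raise the log level for the cluster monitor:

    # Assumption: the monitor messages come from the driver's cluster logger
    logging.level.org.mongodb.driver.cluster=WARN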
I wanted to run some services automatically at startup inside a Docker container, so I added the relevant commands to .bashrc to execute those services. They are running as expected, except for one service that sends HTTP requests to localhost: it fails to send the request to the server. But once I log in to the Docker container and execute the script manually through the shell, it works properly. Looking forward to your suggestions and answers.
This sounds like the service isn't completely started before the HTTP request hits it. I would suggest adding either a delay between the two (sleep 5 or so), or logic for performing some number of retries on initial connection failure.
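A minimal sketch of the retry variant (the URL, attempt count, and delay are placeholders):

    #!/bin/sh
    # Keep retrying until the service answers or the attempts run out.
    for i in 1 2 3 4 5; do
      curl -fsS http://localhost:8080/ && break   # assumed URL
      echo "attempt $i failed; retrying in 5s..." >&2
      sleep 5
    done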
We have a Domino cluster which consists of two servers. Recently we have seen that one of the servers has memory problems, and the HTTP service goes down after 2 hours. So we plan to implement a scheduled server task which runs the command nserver -c "restart task http" until we find a solution for the memory leak. The HTTP service restarts in, say, 15 seconds. But what happens if a user submits data during this small window? Will the cluster manager automatically move the user session to the other server, and hence load-balance the submit task? We're not sure about this. Failover works fine in the normal case: when one of the servers goes down, the other takes over the load. But we are not sure about the behavior of the "restart task http" command. Does the HTTP task finish all pending threads before restarting, or does the Domino cluster manager switch to the other server to load-balance the request?
Thanks in advance
The server should close out all HTTP requests prior to shutting down and restarting.