Waiting for service to start in ansible when wait_for is not enough

I have a Docker container being deployed by Ansible. I then use wait_for to wait until the port my program listens on is open. When wait_for returns, I do a GET against the REST API provided by the service as part of the final configuration steps.
My problem is that when I run everything at once, it fails with a connection refused from my service on the REST call. However, if I immediately re-run the configuration role, everything is fine; likewise, if I don't have to restart the Docker container, the full playbook runs fine. As best I can tell, there is a window between when my service starts listening on the port and when it is actually ready to authenticate and respond to GET requests, and Ansible manages to hit it during that brief period.
What is the cleanest way, from within Ansible, to keep waiting until a REST call actually responds with a 200?
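One common way to handle this, without guessing at a fixed extra delay, is to poll the API with Ansible's uri module and retry until it returns 200. A minimal sketch, assuming a hypothetical health endpoint over plain HTTP (adjust the URL, port, and timings to the real service):

    - name: Wait until the REST API actually answers with 200
      uri:
        url: "http://{{ inventory_hostname }}:8080/api/health"   # hypothetical endpoint
        status_code: 200
      register: api_result
      until: api_result.status == 200
      retries: 30          # up to 30 attempts...
      delay: 5             # ...5 seconds apart, i.e. about 150 seconds in total

Placed after the wait_for task, this keeps the play from reaching the real configuration GET until the service has answered successfully at least once, so the gap between "port open" and "application ready" no longer matters.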

Related

Why can Tomcat stop responding until the deployed app is stopped?

Tomcat 8 is running in a Docker container with a single app deployed to it.
The app is mainly busy processing user requests and cron jobs (usually additional work needs to be done after a user request finishes).
What the problem looks like (from the logs):
The app (deployed under /mysoawesomeapp) is working as usual, processing requests and cron jobs.
Then there is a gap of a couple of minutes, as if the app had frozen.
Docker is running a health check on localhost:8080 every 30s, waiting up to 10s for a response, and restarts the container when the check fails (see the sketch after this question).
I can see the shutdown request in the logs, and then I can also see those health check responses with 200 status. It doesn't really matter at that point, since the server is already shutting down.
My question is: how is it possible that a request to localhost:8080, which would normally load the Tomcat home page, can be stalled until server shutdown occurs? How can mysoawesomeapp have an impact on that, and how can I confirm it?
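For reference, a health check like the one described above (a probe of localhost:8080 every 30s with a 10s timeout, after which the container is restarted) could be written down roughly as the following Compose snippet; the curl-based test command, image, and retry count are assumptions, not details from the question:

    services:
      mysoawesomeapp:
        image: tomcat:8                  # assumed image
        ports:
          - "8080:8080"
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/"]   # assumed check command
          interval: 30s                  # probe every 30 seconds, as described
          timeout: 10s                   # wait up to 10 seconds for a response
          retries: 3                     # assumed; not stated in the question

Note that Docker by itself only marks a container unhealthy; the automatic restart described above would come from an orchestrator or a helper watching the health status.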

How do I find out what port another process is running on besides the web process on Heroku?

I have a webhook URL and a normal web server (running HapiJS).
I'd like to proxy certain requests in HapiJS to the webhook server that's running on a private port, but I need to know what the $PORT is on the other, non-web process.
Is there a way to find this port number?
There is no way to find that port number.
Heroku dynos run on different runtimes, so even if you did know the port, you would also need to figure out the IP address of that server, which changes with every deployment and once every 24 hours.
This would also not be very scalable, as the strength of Heroku is to let you boot more dynos easily. If you rely on knowing where the other dyno is, you lose that easy scaling.
You don't necessarily need this to communicate between processes, though. Using a Redis queue, you could enqueue asynchronous jobs to be processed by your worker process. Both processes would communicate without needing to know where the other one is.

Docker .bashrc service HTTP request fails

I wanted to run some services automatically at startup inside a Docker container, so I added the relevant commands to .bashrc to launch them. They run as expected, but one service, which sends HTTP requests to localhost, fails to reach the server. Once I log in to the Docker container and execute the script manually through the shell, it works properly. I'm looking forward to your suggestions and answers.
This sounds like the service isn't completely started before the HTTP request hits it. I would suggest adding either a delay between the two (sleep 5 or so), or logic for performing some number of retries on initial connection failure.

Understanding effects of Domino command to restart HTTP server

We have a Domino cluster which consists of two servers. Recently we have seen that one of the servers has memory problems and its HTTP service goes down after about 2 hours. Until we find the memory leak, we plan to implement a scheduled server task that runs the command nserver -c "restart task http". The HTTP task restarts in roughly 15 seconds. But what happens if a user submits data during that short period? Will the cluster manager automatically move the user's session to the other server and load-balance the submit? We are not sure about this. Failover works fine in the normal case: when one of the servers goes down, the other takes over the load. But we are not sure about the behaviour of the "restart task http" command. Does the restart let all pending threads finish, or does the Domino cluster manager switch the request to the other server?
Thanks in advance
The server should close out all HTTP requests prior to shutting down and restarting.

HTTP GET requests work but POST requests do not

Our Spring application is running on several different servers. On one of those servers, POST requests do not seem to be working. All site functionality that uses GET requests works completely fine; however, as soon as I hit something that uses a POST request (e.g. a form submit), the site just hangs permanently. The server never gives a response. We can see the requests in Tomcat Manager, but they don't time out.
Has anyone ever seen this?
We have found the problem. Our DBA accidentally deleted the MySQL database files on that particular server (/sigh). In our Spring application we use GET requests for record retrieval, and the records we were trying to retrieve must have been cached by MySQL, which made it seem as if GET requests were working. When trying to add new data to the database, which we do with POST requests, Tomcat would wait for a response from MySQL that never came.
In my experience, if you're getting a timeout error it's almost always because the correct ports aren't open for your application. For example, go into your virtual machine's rules and ensure that ports 8080 and 8443, or 80 and 443, are open for HTTP and HTTPS traffic.
In Google Cloud Platform this is under VPC network -> Firewall rules. Azure and AWS are similar.
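If you'd rather keep that rule in configuration instead of clicking through the console, it can be expressed, for example, as a Deployment Manager resource; a rough sketch, with the rule name and network as placeholders and the source range left wide open:

    resources:
    - name: allow-web-traffic              # placeholder rule name
      type: compute.v1.firewall
      properties:
        network: global/networks/default   # assumed default VPC network
        sourceRanges: ["0.0.0.0/0"]        # open to everyone; tighten as needed
        allowed:
        - IPProtocol: tcp
          ports: ["80", "443", "8080", "8443"]

The equivalent rules live in Azure as network security groups and in AWS as security groups.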
