Docker .bashrc service HTTP request fails - bash

I wanted to run some services automatically on startup inside a Docker container, so I added the relevant commands to .bashrc to execute those services. They run as expected, except for one service that sends HTTP requests to localhost: it fails to send the request to the server. But once I log in to the Docker container and execute the script manually through the shell, it works properly. Looking forward to your suggestions and answers.

This sounds like the service isn't completely started before the HTTP request hits it. I would suggest adding either a delay between the two (sleep 5 or so), or logic for performing some number of retries on initial connection failure.
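For example, a minimal retry sketch in bash (the URL, attempt count, and delay here are placeholders, not values from the question):

    #!/bin/bash
    # Keep retrying the request instead of assuming the service is ready
    # the moment .bashrc runs.
    url="http://localhost:8080/"   # placeholder endpoint
    for attempt in $(seq 1 10); do
        if curl --silent --fail "$url" > /dev/null; then
            echo "service is up (attempt $attempt)"
            break
        fi
        echo "service not ready yet, retrying..."
        sleep 2
    done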

Related

Why can Tomcat stop responding until the deployed app is stopped?

Tomcat 8 is running in a Docker container with a single app deployed there.
The app is mainly busy processing user requests and cron jobs (usually additional work needs to be done after a user request finishes).
What the problem looks like (from the logs):
The app (deployed under /mysoawesomeapp) is working as usual, processing requests and cron jobs.
Then there's a gap of a couple of minutes, as if the app had frozen.
Docker runs a health check against localhost:8080 every 30s, waiting 10s for a response, and restarts the container when the check fails.
I can see the shutdown request in the logs, and then I can also see health check responses with a 200 status. That doesn't really matter at that point, since the server is already shutting down.
My question is: how is it possible that a request to localhost:8080, which would normally load the Tomcat home page, can hang until the server shutdown occurs? How can mysoawesomeapp have an impact, and how can I confirm it?
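One way to confirm it would be a thread dump taken while the freeze is happening: if the HTTP connector threads are stuck inside mysoawesomeapp code rather than sitting idle in the pool, the app is holding Tomcat's request threads. A rough sketch (the container name is a placeholder, and it assumes a JDK with jstack inside the container):

    # Capture a thread dump from the Tomcat JVM while the app appears frozen.
    docker exec mytomcat bash -c 'jstack $(pgrep -f org.apache.catalina)' > dump.txt
    # Tomcat 8 NIO connector threads are named http-nio-8080-exec-*; if they
    # are all BLOCKED or WAITING inside application code, the app is the cause.
    grep -A 5 'http-nio-8080' dump.txt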

Waiting for a service to start in Ansible when wait_for is not enough

I have a Docker container being deployed by Ansible. I then use wait_for to wait for the port that the program I'm deploying listens on to be open. When wait_for returns, I do a GET on the REST API provided by the service as part of the final configuration steps.
My problem is that when I run everything at once, it fails with a connection refused from my service on the REST call. However, if I immediately re-run the configuration role, everything is fine; likewise, if I don't have to restart the Docker container, the full playbook runs fine. As best I can tell, there is a period of time between when my service starts listening on the port and when it is actually ready to authenticate and respond to GET requests, and Ansible manages to hit it during that brief window.
What is the cleanest way to add a further delay until the REST call actually responds with a 200 from within Ansible?
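The usual Ansible pattern is the uri module with register/until/retries/delay, so the task only succeeds once the call actually returns a 200. Here is the same idea as a plain bash poll, in case it helps to see the logic spelled out (the endpoint and limits are placeholders):

    # Poll the REST endpoint until it returns 200, not merely until the
    # port is open.
    url="http://localhost:8080/api/status"   # placeholder endpoint
    for i in $(seq 1 30); do
        code=$(curl --silent --output /dev/null --write-out '%{http_code}' "$url")
        [ "$code" = "200" ] && exit 0
        sleep 2
    done
    echo "service never returned 200" >&2
    exit 1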

Understanding effects of Domino command to restart HTTP server

We have a Domino cluster which consists of two servers. Recently we have seen that one of the servers has memory problems, and its HTTP service goes down after about 2 hours. So we plan to implement a scheduled server task which runs the command nserver -c "restart task http" until we find a solution for the memory leak. The HTTP service restarts in, say, 15 seconds. But what would happen if a user submits data during this short period? Will the cluster manager automatically move the user session to the other server, and hence load-balance the submitted task? We are not sure about this. Failover runs fine in the normal case: when one of the servers goes down, the other server picks up the load. But we are not sure about the behavior of the "restart task http" command. Does the restart finish all pending HTTP threads, or does the Domino cluster manager switch to the other server to load-balance the request?
Thanks in advance
The server should close out all HTTP requests prior to shutting down and restarting.
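As a side note on the scheduling itself: on a Unix-like Domino host the periodic restart could be as simple as a crontab entry, though a Domino program document is the more idiomatic way to schedule server commands. The path to nserver and the interval below are assumptions:

    # Crontab entry: restart the HTTP task every 2 hours, just ahead of the
    # observed failures (adjust the schedule and the nserver path as needed).
    0 */2 * * * /opt/ibm/domino/bin/nserver -c "restart task http"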

HTTP GET requests work but POST requests do not

Our Spring application runs on several different servers. On one of those servers, POST requests do not seem to be working. All site functionality that uses GET requests works completely fine; however, as soon as I hit something that uses a POST request (e.g. a form submit), the site just hangs permanently. The server never gives a response. We can see the requests in Tomcat Manager, but they don't time out.
Has anyone ever seen this?
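For anyone debugging something similar: a quick way to take the browser and the network path out of the equation is to issue the POST directly on the affected server with a timeout (the URL and form data are placeholders):

    # Reproduce the hang locally; --max-time turns the permanent hang into
    # an observable timeout (curl exits with code 28).
    curl --verbose --max-time 30 \
         --data 'field=value' \
         http://localhost:8080/app/form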
We have found the problem. Our DBA accidentally deleted the MySQL database files on that particular server (/sigh). Our Spring application uses GET requests for record retrieval, and the records we were trying to retrieve must have been cached by MySQL, which made it seem as if GET requests were working. When adding new data to the database, which we do via POST requests, Tomcat would wait for a response from MySQL that never came.
In my experience, if you're getting a timeout error it's almost always due to not having the correct ports open for your application. For example, go into your virtual machine's firewall rules and ensure ports 8080 and 8443, or 80 and 443, are open for HTTP and HTTPS traffic.
In Google Cloud Platform this is under VPC network -> Firewall rules. Azure and AWS are similar.
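On GCP, the command-line equivalent would be something along these lines (the rule name, target tag, and source range are assumptions):

    # Open HTTP/HTTPS and the common Tomcat ports to instances tagged "web".
    gcloud compute firewall-rules create allow-web \
        --allow tcp:80,tcp:443,tcp:8080,tcp:8443 \
        --target-tags web \
        --source-ranges 0.0.0.0/0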

tell haproxy or squid to execute a script or HTTP request before performing the proxying action

Is there a way to have haproxy or squid run a (bash) script (or another HTTP request) before proxying an incoming request?
I want to host a userX-specific HTTP server (and service) at userX.mydomain.com, but these kinds of services may or may not be running, depending on the load of the machine that hosts them.
So the first time in the day that userX accesses the URL userX.mydomain.com, the HTTP server hosting serviceX has to be started.
I already managed, thanks to haproxy, xinetd, some bash scripts, and the HTTP Refresh header directive, to perform a refresh after the HTTP server/service starts.
But now I would like to make it even better: the service startup should be transparent to the client issuing a GET, a PUT, or a POST, and the proxy should immediately reply with the correct service response, even on the first HTTP request.
So I need to start the service and then immediately proxy the request to the service just started.
I have already tried the "http-check" and "check" options in haproxy, but I don't think they can help me here, because the health checks are asynchronous to haproxy's request handling. Instead, I need this script to be executed for each request, before haproxy proxies the request.
If squid allows this kind of action, I could even have haproxy proxy the request to squid, which could then start the service and proxy the request.
Does anyone have an idea how to achieve this?
Thanks in advance.
This can be done using proxymachine - https://github.com/mojombo/proxymachine
Basically, proxymachine can intercept the HTTP request, parse the headers, run arbitrary Ruby code, and then forward the connection.
You would need to terminate SSL before proxymachine gets the connection, e.g. using haproxy (with its new SSL capability).
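Alternatively, staying with the haproxy + xinetd setup from the question: xinetd hands each accepted connection to its server program on stdin/stdout, so the per-request hook can be a plain bash wrapper that starts the backend on demand, waits for its port, and then splices the connection through with socat. A rough sketch, where the port and start command are placeholders:

    #!/bin/bash
    # Invoked by xinetd for each incoming connection; the client socket is
    # attached to stdin/stdout.
    port=9001                                   # placeholder backend port
    start_cmd="/usr/local/bin/start-servicex"   # placeholder start script

    # Start the backend only if nothing is listening on its port yet.
    if ! nc -z localhost "$port" 2>/dev/null; then
        "$start_cmd" &
        # Wait up to ~10s for the service to start accepting connections.
        for i in $(seq 1 20); do
            nc -z localhost "$port" 2>/dev/null && break
            sleep 0.5
        done
    fi

    # Relay the already-accepted connection to the backend.
    exec socat STDIO TCP:localhost:"$port"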
