Nginx slow static file serving after a period of inactivity

I have an Nginx server deployed as a reverse proxy. Everything works great as long as I use the service regularly.
The issue happens when the Nginx service I have deployed is inactive (no requests processed) for a few days.
When I then launch the application through Nginx, the static files take a long time to download, even though the files are only a few bytes in size.
The issue goes away after I restart my Nginx server.
I am using OpenResty version 1.15.8.3.
Any suggestions or help would be highly appreciated.

Related

Create React App - Proxy - Caching Requests when it shouldn't

I've set up a proxy in package.json which points to the staging server so all API calls are routed to that server.
Works fine and gets the response from the actual server as expected, however, the responses seem to be getting cached in the proxy.
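For reference, the proxy is just the standard Create React App field in package.json; roughly this (the name and staging URL below are placeholders, not the real ones):

    {
      "name": "my-app",
      "version": "0.1.0",
      "proxy": "https://staging.example.com"
    }

With this in place, the CRA dev server forwards the API calls it can't serve itself to that host.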
I've hit the staging site itself (which calls the same API) and I can see the updated response, but when hitting it on localhost via the proxy I'm getting a stale version. Even when I add a cache-busting querystring on the end of the URL, it still gives me the old versions.
I've tried stopping the dev server (which was started from npm run start) and restarting, but it's acting like the proxy server doesn't stop/start in the background and is caching the requests.
My question is:
Is there a way to blow away the proxy cache / temp files etc.? (Or any interface to see what it's doing?)

Web App gets wrong IP after Windows Server upgrade

We cloned the servers and upgraded to a newer version of the OS (Windows 2012) which is compatible with the web app. However, when we placed those servers in production, only the one with the load balancer was accessible through its IP. The other four were not, because the load balancer was trying to redirect traffic to the local IPs instead of the public ones. I don't know if this is enough information, but we can't seem to find the issue, since the config of the web app is the same and IIS didn't seem to have issues. Maybe the DNS? The IPs are the same.

Configure nginx API periodically

I am really new to the Nginx API and I have never done API configuration before.
I configured Nginx as a load balancer, in which I need to set a weight for each backend server. I decide the weight based on CPU utilization.
The thing is, I have no issue getting the server utilization, but I need to patch the server weight into Nginx via the API.
Please help me figure out how to configure this through the API.
Note: I am collecting server utilization periodically, and I have to change the server weights in Nginx dynamically. It has to happen atomically.
The ability to modify an Nginx configuration on the fly like this is, unfortunately, a feature that's only available in the commercial Nginx Plus variety. (E.g., this tutorial.)
As far as I'm aware, the only way to reconfigure vanilla open-source Nginx is to modify the configuration files and then do a reload or restart of the service.
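For the open-source route, the weights live in the upstream block on disk; a rough sketch (the upstream name, addresses and weights below are placeholders):

    upstream backend {
        # weights are rewritten on disk by whatever script consumes your CPU metrics
        server 10.0.0.1:8080 weight=5;
        server 10.0.0.2:8080 weight=1;
    }

After rewriting that block, nginx -s reload applies the new weights; the reload starts fresh workers with the new configuration and gracefully retires the old ones, so it is close to atomic from the client's point of view.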

Is running multiple web: processes possible?

Our PHP application runs on Apache; however, php-pm can be used to make the application persistent (no warm-up, the database connection remains open, caches are populated and the application is initialized).
I'd like to keep this behaviour when deploying to Heroku, that is, have Apache serve static content and php-pm serve the API layer. Locally this is handled using an .htaccess rewrite proxy rule that sends all traffic from /middleware to php-pm listening on e.g. port 8082.
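In .htaccess terms that rule looks roughly like this (a sketch; it assumes mod_rewrite and mod_proxy are enabled and that php-pm listens on 127.0.0.1:8082):

    RewriteEngine On
    # Proxy everything under /middleware to the persistent php-pm process
    RewriteRule ^middleware/(.*)$ http://127.0.0.1:8082/$1 [P,L]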
On Heroku it would mean running two processes in the web dyno. Is that possible?
If not, are there other alternatives that can be used to handle web traffic through different processes, or to make a persistent process listen to web traffic?

How do I go about setting up my Sinatra REST API on a server?

I'm an iOS developer primarily. In building my current app, I needed a server that would have a REST API with a couple of GET requests. I spent a little time learning Ruby, and landed on using Sinatra, a simple web framework. I can run my server script, and access it from a browser at localhost:4567, with a request then being localhost:4567/hello, as an example.
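The whole thing is roughly this (a sketch of the kind of app described; the file name and route are illustrative):

    # hello.rb
    require 'sinatra'

    get '/hello' do
      'hello from Sinatra'
    end

Running ruby hello.rb serves it at localhost:4567 in development mode.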
Here's where I feel out of my depth. I set up an Ubuntu droplet at DigitalOcean, and felt my way around setting up all the necessary tools via the command line, until I could again run my server, now on this droplet.
The problem is that I couldn't then access my server via droplet.ip.address:4567, and a bit of research led me to discover that I need Passenger and an Apache HTTP Server to be set up, and not with simple instructions.
I'm in way over my head here, and I don't feel comfortable. There must be a better way for me to take my small group of Ruby files and run them on a server than this. But I have no idea what I'm doing.
Any help or advice would be greatly appreciated.
a bit of research led me to discover that I need Passenger and an Apache HTTP Server to be set up, and not with simple instructions.
Ignore that for now. Take baby steps first. You should be able to run your Sinatra app from the command line on the DigitalOcean droplet, and then access it via droplet.ip.address:4567. If that doesn't work, something very fundamental is wrong.
When you start your app, you will see what address and port it is listening on. Make sure it's 0.0.0.0 and 4567. If it's 127.0.0.1 or localhost, that means it will only serve requests originating from the same machine.
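With a classic-style Sinatra app (like the hello.rb sketch above), that's two settings at the top of the file:

    require 'sinatra'

    set :bind, '0.0.0.0'   # listen on all interfaces, not just loopback
    set :port, 4567

    get '/hello' do
      'hello from Sinatra'
    end

Alternatively, the same thing can be done from the command line with ruby hello.rb -o 0.0.0.0 -p 4567.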
After you get this working, the next step is to make your Sinatra app into a service. Essentially this means the app runs in the background, and auto-starts when the system reboots. Look into Supervisor, which needs only a very simple configuration to get this running.
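A minimal supervisord sketch (the program name, paths and user below are assumptions about your setup):

    [program:sinatra]
    command=ruby /home/deploy/app/hello.rb -o 0.0.0.0 -p 4567
    directory=/home/deploy/app
    user=deploy
    autostart=true
    autorestart=true
    redirect_stderr=true
    stdout_logfile=/var/log/sinatra.log

Dropped into /etc/supervisor/conf.d/ and picked up with supervisorctl reread followed by supervisorctl update, this keeps the app running and restarts it after a crash or reboot.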
Later you can install Apache or Nginx to put in front of your Sinatra app. These are reverse proxies which simply forward requests from port 80 (the default HTTP port) to your Sinatra app, but they can do additional things such as SSL support, load balancing, custom error pages etc. - none of which you need right now.
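For the Nginx variant, the proxy part is a single server block; a sketch (the server_name and upstream port are assumptions):

    server {
        listen 80;
        server_name example.com;

        location / {
            # forward everything to the Sinatra app started above
            proxy_pass http://127.0.0.1:4567;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }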

Resources