Create React App - Proxy - Caching requests when it shouldn't

I've set up a proxy in package.json which points to the staging server so all API calls are routed to that server.
It works fine and returns the response from the actual server as expected; however, the responses seem to be getting cached by the proxy.
I've hit the staging site itself (which calls the same API) and I can see the updated response, but when hitting it on localhost via the proxy I get a stale version. Even when I add a cache-busting query string to the end of the URL it still gives me the old response.
I've tried stopping the dev server (started with npm run start) and restarting it, but it behaves as though the proxy keeps running in the background and caching the requests.
My question is:
Is there a way to clear the proxy's cache (temp files etc.), or is there any interface to see what it is doing?
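For reference, the setup described above is CRA's "proxy" field in package.json (e.g. "proxy": "https://staging.example.com"), which the dev server uses to forward unrecognised requests. One way to get some visibility into what the proxy is doing, and to strip caching headers along the way, is to swap the single proxy field for a src/setupProxy.js based on http-proxy-middleware. The sketch below is only an illustration of that approach; the /api path, the staging URL and the header tweaks are assumptions, not details from the question.

    // src/setupProxy.js - hypothetical sketch, not taken from the original question.
    // Assumes http-proxy-middleware is installed and that API calls are made under /api.
    const { createProxyMiddleware } = require('http-proxy-middleware');

    module.exports = function (app) {
      app.use(
        '/api',
        createProxyMiddleware({
          target: 'https://staging.example.com', // placeholder staging server
          changeOrigin: true,
          logLevel: 'debug', // verbose logging of what the proxy is doing
          onProxyReq(proxyReq) {
            // ask the upstream not to serve a cached copy
            proxyReq.setHeader('Cache-Control', 'no-cache');
          },
          onProxyRes(proxyRes) {
            // stop the browser from caching the proxied responses
            proxyRes.headers['cache-control'] = 'no-store';
          },
        })
      );
    };

With logLevel set to 'debug', http-proxy-middleware prints its proxying activity to the dev-server console, which at least gives an interface for seeing where each request is actually being sent.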

Related

Axios doesn't resolve subdomains when making requests to a Local Area Network address (e.g. 'http://us.192.168.1.25:8080/auth')

I use React Native for the app and Spring Boot for the backend. For making requests from the app, I use Axios.
I am developing a new backend service and I wanted to test it in the app before deploying the backend. Usually I would simply use something like axios.post('http://192.168.1.25:8080/resource') to reach the server running on my PC, which is connected to the smartphone via LAN. That works.
This new service, however, depends on the subdomain sent with the HTTP request. For instance, in the previous example I would have to make a POST to 'http://english.192.168.1.25:8080/resource'. Making an axios.post() call to that address, however, doesn't work: Axios gives me the error "can't resolve english.192.168.1.25".
Does anyone know how to solve this? Testing with Postman from another machine, the endpoint 'http://english.192.168.1.25:8080/resource' works just fine (only the Axios call from the React Native app fails).
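To make the failure mode concrete, the two calls described above boil down to the following sketch (the payload and the import are placeholders; the IP and paths are the ones from the question). The second call fails because "english.192.168.1.25" is a name that has to go through DNS resolution, which is exactly where the rebind protection discussed in the answer below gets in the way.

    import axios from 'axios';

    const payload = { /* request body */ };

    // Plain LAN IP: no DNS lookup is needed, so this works as expected.
    axios.post('http://192.168.1.25:8080/resource', payload);

    // Subdomain prefixed onto the IP: treated as a hostname that must be resolved,
    // so Axios fails with "can't resolve english.192.168.1.25".
    axios.post('http://english.192.168.1.25:8080/resource', payload);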
TL;DR:
Try disabling the DNS rebind protection in your router by adding an exception for your subdomain.localhost.
FritzBox -> Home Network -> Network settings -> DNS Rebind Protection
The other way would be to run your own DNS server on your system that does the same thing, e.g. dnsmasq or https://github.com/hubdotcom/marlon-tools/blob/master/tools/dnsproxy/dnsproxy.py
I think I found a solution for this problem. something.localhost is routed to your primary DNS server; in a typical installation that is your router. Most modern routers have a DNS rebind protection mechanism: https://en.wikipedia.org/wiki/DNS_rebinding
I ran into the same issue at home, but when I tried to get foo.localhost running at work it worked as expected. At home we use a FritzBox 7560 with DNS rebind protection enabled; at work I have a no-name router from a "magenta company" without rebind protection.

Nginx slow static file serving after a period of inactivity

I have an nginx server deployed as a reverse proxy. Everything works great if I use the service regularly.
The issue happens when the nginx service is inactive, i.e. no requests are processed, for a few days.
When I then try to load the application through nginx, the static files take a long time to download even though they are only a few bytes in size.
The issue goes away after I restart my nginx server.
I am using OpenResty version 1.15.8.3.
Any suggestions/help would be highly appreciated.

Random 502/503 error on Nginx running behind Docker (on ECS cluster + ALB)

I have set up a Laravel application hosted in a Docker container, which in turn is hosted on an AWS ECS cluster running behind an ALB.
So far the application is up and running as expected; everything works just the way it should (e.g. sessions are stored in Memcached and working, static assets are in an S3 bucket, etc.).
Right now I have one problem with stability, and I am not quite sure where exactly the problem is. When I hit my URL/website, it sometimes (randomly) returns a 502/503 HTTP error. When this happens I have to wait for a minute or two before the app returns a 200 HTTP code again.
Here's the result of tailing my Docker container (i.e. the nginx log):
At this point I am totally lost and not sure where else I should check. I've tried the following:
Run it locally, with the same Docker image / nginx >> works just fine.
Run it without the ALB (i.e. using just one EC2 instance) >> similar problem.
Run it with the ALB on two different EC2 instance types (i.e. t2.small and micro) >> both have a similar problem.
Run it with the ALB on just one EC2 instance >> similar problem.
According to your logs, nginx is answering 401 Unauthorized to the ALB health check request. You have to return 200 OK from the / endpoint, or configure a different endpoint such as /ping in your ALB target group.
To check the health of your targets using the console:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
On the Targets tab, the Status column indicates the status of each target.
If the status is any value other than Healthy, view the tooltip for more information.
More info: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html
I have had a similar issue in the past, for one of a couple of possible reasons:
Health checks configured for the ALB, e.g. the ALB is waiting for the configured number of checks to go green (for example, hitting an endpoint every 30 seconds and expecting a 200 on 4 out of 5 attempts). During the "unhealthy" phase the instance may be marked offline. This happens most often immediately after a restart or deployment, or if an instance goes unhealthy.
DNS within NGINX. If the DNS records of the downstream service that NGINX is proxying have changed, it might be that NGINX has cached the old record (either according to the TTL or for much longer, depending on your configuration) and is therefore unable to connect to the downstream service.
To help fully debug, it might be worth determining whether the 502/503 is coming from the ALB or from NGINX. You might be able to determine this from the access log of the ALB or the /var/log/nginx/access|error.log in the container.
It may also help to check whether there was a body on the error response.

Node app (Meteor) does not accept XHR connections

I have just moved an old Node app (Meteor, to be honest), which was running on the Red Hat OpenShift PaaS, onto a new Linux VPS box.
The problem is that the Node server seems to refuse (block, not serve) the XHR-type connections from the browser directed at the port usually defined by the
DDP_DEFAULT_CONNECTION_URL
environment variable.
As I understand it, this is used for the Ajax-like responsiveness built into Meteor apps.
From the browser's point of view, I just see failed XHR connections to the DDP URL.
The firewall seems to be set up OK.
HTTP communication (port 80) works OK, so I can get the static part of the web page and even navigate to other static pages, but I get no dynamic data (i.e. nothing from the DB).
Any ideas?
You forgot to put export before setting the environment variable.
Run this command and I hope that will solve your problem.
export DDP_DEFAULT_CONNECTION_URL
So it was just the DDP_DEFAULT_CONNECTION_URL setting. When the app was deployed on the RH OpenShift PaaS, the value used there was :8000. My mistake was assuming it had to be the same everywhere; changing it to :8080 (the port used by Node) made the app work.
I had just assumed they had to be separate ports (one for WWW and one for DDP).

Listening to breakpoints from a different host with Xdebug

I'm building an application in Ember.js and I am debugging the PHP API remotely. It is working when I enable the cookie for XDebug, set a breakpoint, and then run the code. The breakpoint hits and I get all the debugger data in my IDE (PHPStorm) correctly.
In order to access the API from Ember, I have to use ember-cli with a proxy. The XDebug cookie is getting passed through the proxy, but requests that I make through Ember do not hit breakpoints in my IDE. I think this is because XDebug sees the ember-cli request as coming from the remote server rather than from my development machine. Is there a way for me to get debugging to work for requests that go through the ember-cli proxy? I have to use the proxy instead of going directly to the API from the browser because of cross-site browser security restrictions (ember-cli is running on port 4200 on the server and the API is on port 80).
I have pydbgpproxy running and PHPStorm works with that as well, but even though the requests going through ember-cli and the requests going directly from my machine to the API use the same session key, I think it is still differentiating based on the requesting machine's address.
Thank you!
Never mind, I realized that I needed to configure the contentSecurityPolicy in environment.js in Ember and then change my PHP API to return the 'Access-Control-Allow-Origin' header by using nelmio/cors-bundle.
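For anyone landing here, the kind of change described above looks roughly like the sketch below. It assumes the ember-cli-content-security-policy addon's ENV.contentSecurityPolicy key; the directives and the API host/port are placeholders rather than values from the original post, and the server side still needs to send Access-Control-Allow-Origin (handled by nelmio/cors-bundle, as the author mentions).

    // config/environment.js - illustrative sketch only
    module.exports = function (environment) {
      const ENV = {
        // ... the app's existing settings (modulePrefix, environment, etc.) ...

        contentSecurityPolicy: {
          'default-src': "'none'",
          'script-src': "'self'",
          'style-src': "'self'",
          // allow XHR connections from the Ember app (served on port 4200) to the API host,
          // so the CSP that ember-cli injects does not block the proxied requests
          'connect-src': "'self' http://localhost:80",
        },
      };

      return ENV;
    };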
