What is your Ext.Direct data caching strategy?

I have started using Ext.Direct a lot recently, but I am not clear on how to integrate a caching strategy with directFn requests, e.g. returning 304 when the data has not changed.

Related

Prevent truncated HTTP response from being cached

We saw this issue on one of our dev machines - the vendor.js bundle in our Angular project had somehow gotten cached, while truncated, which breaks the web app until you clear the cache.
We do use browser caching (together with URL-hashing so caching doesn't prevent app updates).
Is there any way to prevent the browser from caching a truncated response? Actually, I would have thought that browsers have this built in (i.e. they won't cache a response where the Content-Length header does not match the number of bytes actually downloaded).
The browser where we reproduced the problem was Chrome.
I think I found the issue: for whatever reason, our HTTP response was missing the "Content-Length" header in the response headers.
The response passes through 2 proxies, so one of them might be stripping the "Content-Length" header.
What we did in such a case was to add a cache-busting parameter to the URL of the lib.
You just need to raise the number, and the next time the browser and the caches in between will fetch a fresh copy from the server:
e.g. www.myserver.com/libs/vendor.js?t=12254565
www.myserver.com/libs/vendor.js?t=12254566
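A minimal sketch of that cache-busting idea, assuming (hypothetically) that the page emitting the script tag is PHP, with the bundle's modification time used as the version number:

    <?php
    // The query parameter changes whenever vendor.js is redeployed, so the
    // browser and any caches in between cannot reuse a stale or truncated copy.
    $bundle  = $_SERVER['DOCUMENT_ROOT'] . '/libs/vendor.js'; // hypothetical path
    $version = is_file($bundle) ? filemtime($bundle) : time();
    ?>
    <script src="/libs/vendor.js?t=<?= $version ?>"></script>

Using the file's mtime (or a content hash) instead of a hand-bumped number means the URL updates automatically on every deploy.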

413 request entity too large jetty server

I am trying to make a POST request to an endpoint served by a Jetty server. The request errors out with "413 Request entity too large", but the Content-Length is only 70 KB, which I see is way below the default limit of 200 KB.
I have tried serving via an nginx server and setting client_max_body_size to the desired level, but that didn't work. I have set setMaxFormContentSize on the WebAppContext, and that didn't help either. I have followed https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size, and that didn't help me either.
Does anyone have any solution to offer?
wiki.eclipse.org is OLD and only covers Jetty 7 and Jetty 8 (both long past EOL / End of Life). The giant red box at the top of the page you linked even tells you this and gives you a link to the up-to-date documentation.
If you see a "413 Request entity too large" from Jetty, then it refers to the Request URI and Request Headers, not the request body.
Note: some 3rd-party libraries outside of Jetty's control can also call HttpServletResponse.sendError(413), which would result in the same response status message as you reported.
Judging by your screenshot, which does not include all of the details (it's really better to copy/paste the text when asking questions on Stack Overflow; screenshots often hide details that are critical to getting a direct answer), your Cookie header is massive and is causing the 413 by pushing the Request Headers over the 8 KiB default (Jetty's requestHeaderSize).

Guzzlehttp slow performance

I am making a website using Laravel 5.4 that gets data from an API using guzzlehttp. I am making 96 requests; most of them (around 94) only return a few lines of JSON. This makes the website very slow to load (55 seconds). Am I doing something wrong?
Most of the requests can probably be done in parallel. Try using Guzzle's async requests for that.
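A minimal sketch of that approach, assuming Guzzle 6 (the Laravel 5.4 era); the base URI and endpoint paths are hypothetical:

    <?php
    use GuzzleHttp\Client;
    use GuzzleHttp\Promise;

    $client = new Client(['base_uri' => 'https://api.example.com/']);

    // Hypothetical stand-in for the ~96 endpoints the page needs.
    $endpoints = ['profile' => 'user/profile', 'stats' => 'user/stats' /* ... */];

    // Fire every request up front: getAsync() returns a promise immediately
    // instead of blocking until the response arrives.
    $promises = [];
    foreach ($endpoints as $key => $path) {
        $promises[$key] = $client->getAsync($path);
    }

    // settle() waits for all promises without throwing on individual failures.
    $results = Promise\settle($promises)->wait();

    $data = [];
    foreach ($results as $key => $result) {
        if ($result['state'] === 'fulfilled') {
            $data[$key] = json_decode((string) $result['value']->getBody(), true);
        }
    }

With the requests in flight concurrently, the page load is bounded by the slowest single round trip rather than the sum of all 96; if the API rate-limits you, GuzzleHttp\Pool offers the same pattern with a concurrency cap.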

curl php returning status code 0 for golang api

I have created a getList API in golang. Now I am trying to call the getList API from my PHP function using php-curl.
I am making thousands of requests from my PHP function. Around 15k requests are served properly, but after 15k-20k (the number varies) requests,
curl's CURLINFO_HTTP_CODE returns 0, the response is "", curl_error returns an empty string, and curl_errno returns 7.
My golang getList API is simple. It takes data from the DB and returns it. It does not contain any goroutines.
I don't understand why after 15k-20k requests it starts giving me empty responses. I don't know whether it is a php-curl problem or a golang API problem. It could also be that my golang API is refusing to serve the requests.
Please help.
Have you tried testing it with HTTP load-testing tools like ab, httperf, jmeter, or the like?
Try running them with different numbers of total and concurrent requests.
First put a static file on the webserver and try to fetch it in the same manner. Do you see the same problems? If yes, there may be problems with the network configuration: too few buffers, socket limits, max open files, and so on (one common client-side cause is sketched after this checklist).
If not, try serving that same static file from the golang app. If you see problems, investigate them in the golang settings.
If not, check your app together with its DB config. If there are problems, check the DB connections; maybe they're not properly closed and get exhausted.
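One common client-side cause of curl_errno 7 (CURLE_COULDNT_CONNECT) after many thousands of calls is ephemeral-port exhaustion from opening a fresh TCP connection per request. A hedged sketch (the endpoint URL is hypothetical): reuse a single cURL handle so php-curl can keep the connection alive:

    <?php
    // One handle for all requests: php-curl reuses the keep-alive TCP
    // connection instead of opening a new socket per call, which otherwise
    // piles up TIME_WAIT sockets and can exhaust local ports.
    $ch = curl_init('http://localhost:8080/getList'); // hypothetical endpoint
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    for ($i = 0; $i < 20000; $i++) {
        $body = curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        if ($body === false || $code !== 200) {
            // Record the failure and back off briefly instead of hammering
            // a server that may itself be out of sockets or DB connections.
            error_log("request $i failed: errno=" . curl_errno($ch) . " http=$code");
            usleep(100000); // 100 ms
        }
    }
    curl_close($ch);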

Ember 1.13 store.query always from cache in IE11

I upgraded my Ember web application from ember 1.12.2 to 1.13.13 with ember-cli 0.2.7 and ember-data v1.13.16.
Now, for some models, store.query('modelname', {'something': this.get('id')}) always gives old data in IE11. When I check the REST call in IE11, I also see 'from cache'. Why? In Chrome it does get the data from the server, NOT from the cache.
How can I tell in my Ember code it must NEVER get the data from cache?
You don't.
Well, you could hack something together, but the right way to fix this is to specify the Cache-Control: no-cache header on the response.
The only way to solve this from the client is to add a unique id (like a timestamp) to every request so that each request looks unique. But why choose a hacky solution when the right path is clear?
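For illustration, a minimal sketch of that server-side fix, assuming (hypothetically) that the REST endpoint is PHP; any backend can send the same headers:

    <?php
    // Sent before the JSON body. 'no-store' and 'must-revalidate' cover
    // aggressive caches like IE11's; Pragma/Expires cover old HTTP/1.0 caches.
    header('Cache-Control: no-cache, no-store, must-revalidate');
    header('Pragma: no-cache');
    header('Expires: 0');
    header('Content-Type: application/json');

    echo json_encode(['modelname' => $records]); // $records: hypothetical payload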
