Symfony2 AppCache performance boost

We're trying to figure out what effect enabling AppCache in the front controller has on caching when no cache directives are called on the response object.
I had presumed that simply adding the following line and setting default_ttl to 1:
$kernel = new AppCache($kernel);
would not change the behaviour of the application without calling a cache directive on the response. But as soon as we add this line (and cache:clear) our server is able to handle far more requests per second, which suggests that there is some caching going on.
With debug turned on and default_ttl set to an hour, all we see in the HTTP headers is
X-Symfony-Cache: GET /: miss
Does this mean that there is no reverse proxy caching going on? If so, what explains the performance increase?
Any clarification on what happens in this situation would be awesome.

This line
$kernel = new AppCache($kernel);
enables the Symfony2 reverse proxy. Because default_ttl is set, responses that declare no cache directives are still cached for that many seconds, which is what explains the extra requests per second. For further explanation see: http://symfony.com/doc/current/book/http_cache.html#symfony2-reverse-proxy.
The header means that the Symfony cache received a "GET" request for / and found no cached data ("miss"). If you call the same page multiple times in a row, the header should change to something like:
X-Symfony-Cache: GET /: fresh
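For reference, those options live in app/AppCache.php in a stock Symfony2 app. A minimal sketch, following the linked docs, of overriding getOptions() to set debug and default_ttl:
use Symfony\Bundle\FrameworkBundle\HttpCache\HttpCache;

class AppCache extends HttpCache
{
    protected function getOptions()
    {
        return array(
            'debug'       => true,  // adds the X-Symfony-Cache trace header
            'default_ttl' => 3600,  // TTL used when the response sets no cache directives
        );
    }
}
With default_ttl greater than zero, even responses without explicit cache directives are served from the cache until the TTL expires, which is where the extra requests per second come from.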

Related

Is there a way to delay cache revalidation in service worker?

I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done so I started looking into more advanced stuff such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically runs different SQL queries against the database, hence the number of calls). The data in the DB is not updated too often, so we have a local cache for the responses, but it is obviously lost when the user refreshes the page.
This is where I wanted to use a service worker: to keep the responses either in the cache store or in IndexedDB (I went with the second option). Of course, a cache-first approach does not fit too well here, as there is still a chance that the data becomes stale. So I tried to implement the stale-while-revalidate strategy: if the response for a given request is already in the cache, return it, but also make the real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems like the app still waits for the real requests to resolve even when there is a cache entry to return (I can see those responses in the Network tab).
Basically the sequence seems to be: request > cache entry found! > update the cache > only then show the data. Updating immediately is unnecessary in my case, so I was wondering: is there any way to delay that, or alternatively not to wait for the "real" response to resolve?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions I use to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  event.respondWith(
    serializeRequest(event.request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(event.request).then((networkResponse) => {
          // store the fresh network response (not the stale cached one)
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Can this be due to the fact that I put stuff into IndexedDB instead of cache? I am sort of forced to use IndexedDB instead of the cache because those "magic endpoints" are POST instead of GET (because of the fact they require the body) and POST cannot be inserted into the cache...
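One way to get the delayed revalidation asked about above, sketched under the assumption that the question's serializeRequest, cacheMatch and cachePut helpers behave as described: hand the network update to event.waitUntil so that respondWith can resolve with the cached entry immediately instead of racing the fetch.
self.addEventListener('fetch', (event) => {
  // same guard checks as in the question would go here...
  event.respondWith(
    serializeRequest(event.request).then((serializedRequest) =>
      cacheMatch(serializedRequest, db.post_cache).then((cachedResponse) => {
        // start the network update, but don't let the response wait on it
        const revalidate = fetch(event.request.clone()).then((networkResponse) => {
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        if (cachedResponse) {
          // keep the worker alive until the background update settles
          event.waitUntil(revalidate.catch(() => {}));
          return cachedResponse;
        }
        return revalidate;
      })
    )
  );
});
With this shape, the page gets the cached data as soon as IndexedDB returns it; the real request still happens, but only to refresh the cache for next time.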

Why must $time in $lock = Cache::lock('name', $time) be greater than the time the cache update takes?

I placed this code inside a Route::get() method just to test it quickly. This is how it looks:
use Illuminate\Support\Facades\Cache;

Route::get('/cache', function () {
    $lock = Cache::lock('test', 4);
    if ($lock->get()) {
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        // dump('inside get');
    } else {
        dump('locked');
    }
    // $lock->release();
});
If you hit this route from two browsers at (almost) the same time, both will respond with the result of dump(Cache::get('name')). Shouldn't the second browser respond with "locked"? When it calls $lock->get(), that should return false, because the lock should still be held when the second browser reaches the route.
The same code works just fine if the code after $lock = Cache::lock('test', 4) takes less than 4 seconds to execute. If you use sleep($sec) with $sec < 4, the first browser to reach the route responds with the result of Cache::get('name') and the second responds with "locked", as expected.
Can anyone explain why this happens? Isn't every get() call on that lock, except the first one, supposed to return false for as long as the lock is held? I used two different browsers, but it behaves the same with two tabs of the same browser.
Quote from the 5.6 docs https://laravel.com/docs/5.6/cache#atomic-locks:
To utilize this feature, your application must be using the memcached or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 5.8 docs https://laravel.com/docs/5.8/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, dynamodb, or redis cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Quote from the 8.0 docs https://laravel.com/docs/8.x/cache#atomic-locks:
To utilize this feature, your application must be using the memcached, redis, dynamodb, database, file, or array cache driver as your application's default cache driver. In addition, all servers must be communicating with the same central cache server.
Apparently, they have been adding lock support for more drivers over time. Check which cache driver you are using and whether it is in the supported list for your Laravel version.
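For example (a minimal sketch; it assumes the stock config/cache.php, which reads the default driver from the environment), switching to a supported driver is a one-line change in .env:
# .env — atomic locks need one of the drivers supported by your Laravel version
CACHE_DRIVER=redis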
There is likely an atomicity issue here: the cache driver you are using is not able to lock a file atomically. What should happen is that while one process (i.e. a PHP request) is writing to the lock file, all other processes requiring the lock should wait at least until the lock file is available to be read again. If not, they read the lock file before it has been written to, which obviously causes a race condition.
Looking back at this question I asked, I can now say that the problem I was trying to solve was not caused by the atomic lock. The problem is the sleep() call: if the time given to sleep() is longer than the lock's lifetime, the lock will already have expired (been released) by the time the next request is able to hit the route. To see why, say you have defined a route like this:
Route::get('case/{value}', function ($value) {
    if ($value) {
        dump('hit-1');
    } else {
        sleep(5);
        dump('hit-0');
    }
});
Now open two browser tabs with URLs that hit this route, something like:
127.0.0.1:8000/case/0
and
127.0.0.1:8000/case/1
You will see that the first request takes 5 seconds to finish, and even though the second request is sent at almost the same time, it still waits for the first one to finish before it runs. So the second request takes the first request's 5 seconds plus its own execution time.
Back to the original question: the lock will already have expired by the time the second request gets to run its $lock->get() statement.
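Tying that together, here is the test route again with the race removed, simply by giving the lock a TTL longer than the longest critical section (the 5-second sleep); a minimal sketch:
use Illuminate\Support\Facades\Cache;

Route::get('/cache', function () {
    // The 10s TTL now outlives the 5s critical section, so the lock
    // cannot expire while the first request is still inside it.
    $lock = Cache::lock('test', 10);

    if ($lock->get()) {
        Cache::put('name', 'SomeName'.now());
        dump(Cache::get('name'));
        sleep(5);
        $lock->release(); // release explicitly rather than waiting for the TTL
    } else {
        dump('locked');
    }
});
With this version the second request dumps 'locked' as originally expected.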

Cypress not retrying assertion

My Cypress test is acting inconsistently due to an assertion set on header text. Here is my code:
cy.get('.heading-large').should('contain', 'dashboard') // passes
cy.contains('View details').first().click()
cy.get('.heading-large').should('contain', 'Registration details') // sometimes fails
If it fails, it is because the heading still contains 'dashboard'. Cypress appears not to have retried, and gives the error Timed out retrying: expected '<h1.heading-large>' to contain 'Registration details'.
From reading about Cypress retry-ability, my understanding is that the should assertion keeps retrying until the timeout, which is set as "defaultCommandTimeout": 5000. That should hold even though an element with the same identifier exists on both pages. There are no major performance issues with the app I'm testing.
The test seems more likely to fail if I am not watching the window and this issue looks like a possible cause.
Can anyone help determine: is there an issue with my test or Cypress, and how might I improve the test? I'm using Cypress 5.1.0 and Chrome 85 on MacOS Catalina.
It is failing occasionally because the request that fills the header with information has not resolved by the time the timeout is reached.
You can solve this by setting up a route with a route alias and waiting for the exact response of that request to resolve before asserting on the new heading.
In other words, when you click(), a request is fired that fetches the information you are checking for in the next get(). The response to this request has sometimes not resolved by the time your get() reaches its timeout. You could increase the timeout, but that is not recommended and not good practice here. Instead, wait for that specific response with a route and a route alias. That way, the last get() is not called until the information it is looking for has resolved.
I don't know your request, but it would work something like this:
// set up the route and alias
cy.server()
cy.route("/yourRequestUrlHere").as("myLovelyAlias")
// first get
cy.get('.heading-large').should('contain', 'dashboard')
// this click fires the request url from route() above
cy.contains('View details').first().click()
// wait for the route to resolve using the route alias
cy.wait("@myLovelyAlias").then((response) => {
  // next get called after the response resolves
  cy.get('.heading-large').should('contain', 'Registration details')
})
Reference:
Route & alias
Route
Best Practice - get()
Network Request - wait()
edit:
As mentioned above, you could also cheat and raise the timeout, but that is not recommended because you could still run into cases where the response takes longer than whatever timeout you set. The route/wait pattern is the better, more stable approach.
Just in case you want to know how it's done though, you would change your get() to something like:
cy.get('.heading-large', { timeout: 60000 }).should('contain', 'Registration details')
(Note that the per-command option is timeout; defaultCommandTimeout is the name of the global setting in the Cypress configuration.)
Again, the other way is much better.
Reference:
Cypress configuration
It looks like we need to wait for the Cypress bug "Some tests flake only if test runner's browser loses focus (or run headlessly)" to be fixed. I have tried the alternative, helpful answers, but I still consistently hit the original issue when the window is out of focus.
Thank you to those who have answered and commented.

How to conditional cache results using OSB?

Good morning,
I'm implementing an OSB-Coherence integration and I would like to cache results only when a certain condition is met.
Example:
I have an OSB Business Service interface that returns stateCode = 0 in the success case, otherwise it returns an error code. I want to cache only the success case.
There are a few tricks you can do to switch the result cache on or off based on the request by inserting custom values of <tp:cache-token> into $outbound/ctx:transport/ctx:request.
However, doing conditional caching based on the result requires a different approach.
What I would do is create a cache wrapper proxy.
Let's say your proxy flow is currently
MyProxy-http.proxy -> MyBusinessService-http-cached.biz
You instead insert a bizref to a second proxy, like so:
MyProxy-http.proxy -> MyBusinessService-sb-cached.biz -> MyBusinessService-wrapper-sb.proxy -> MyBusinessService-http.biz
The new bizref has the cache, so you remove results caching from the old bizref.
(So far you haven't changed anything, so the next bit is the step that does the conditional caching)
You modify MyBusinessService-wrapper-sb.proxy to read the $body of the response. If it detects from the response that it shouldn't be cached, you return failure.
This failure will suppress result caching via the bizref.
If it's important to you, you can reconstitute the original message in MyProxy-http.proxy via a fault handler, and return it as an uncached 'success' message.
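For the detection step, the response-pipeline check can be as small as an If Then action whose condition inspects the state code, with a Raise Error action in the true branch; the raised error is what suppresses result caching on the bizref. A hedged sketch of the condition (the ns prefix and element names are illustrative, not from the question):
(: condition for the If Then action in MyBusinessService-wrapper-sb.proxy :)
$body/ns:Response/ns:stateCode != 0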

ServerXMLHTTP Timeout in less than 2 seconds using default timeouts

I have a question involving a "timeout" when sending an HTTPS "GET" request using the ServerXMLHTTP object.
To trick the object into sending the request with the logged-in user's id and password, I set it up to use a dummy proxy and then excluded the domain of the URL (on the intranet). So the variable url_to_get contains .mydomain.com, while the proxy address is actually "not.used.com".
// JScript source code
HTTP_RequestObject = new ActiveXObject("Msxml2.ServerXMLHTTP.6.0");
// Using the logged-in user's authentication
HTTP_RequestObject.open("GET", url_to_get, false);
HTTP_RequestObject.setProxy(2, "not.used.com", "*.mydomain.com");
try
{
    HTTP_RequestObject.send();
}
catch (e)
{
    // logs "(0x80072EE2) The operation timed out" (see below)
}
In the catch block, I log an exception of "(0x80072EE2) The operation timed out". This is timestamped 1 to 2 seconds after a log message right before the open.
A retry will work as expected, and it can do it over and over again. Is this something on the server side? Or is it a result of the proxy?
Well, this was painful and embarrassing, but I figured out the root cause of the timeout. I had set the timeouts to "defaults" that were three orders of magnitude smaller than the real defaults. So even when I increased them to what I thought were really large values, they were still shorter than the defaults.
I had skimmed the wording on the Microsoft page describing the setTimeouts() method and misinterpreted the time base of the parameters: I assumed seconds, while it is actually milliseconds.
During the process of debugging this issue, I duplicated the code using the alternative COM object, "WinHttp.WinHttpRequest.5.1", and in verifying the equivalent API, SetTimeouts(), discovered the faux pas.
I did learn a few things in the process, so all is not lost, "WinHttp.WinHttpRequest.5.1" has a SetAutoLogonPolicy() [3] method that allows me to skip the hokey "proxy" foolishness required with "Msxml2.ServerXMLHTTP.6.0" to force it to send the user's credentials to an intranet server. I also fiddled with Fiddler [4] and learned enough to be dangerous!
Hopefully someone else can learn from my mistake and find this useful in debugging their own issue in the future.
Here are some inline links since I don't have enough rep to post more than two:
[3] : msdn.microsoft.com/en-us/library/windows/desktop/aa384050%28v=vs.85%29.aspx
[4] : fiddler2.com
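For completeness, a minimal sketch of the corrected call; the four setTimeouts() parameters are the resolve, connect, send, and receive timeouts, all in milliseconds:
// JScript - values are milliseconds, not seconds
var HTTP_RequestObject = new ActiveXObject("Msxml2.ServerXMLHTTP.6.0");
HTTP_RequestObject.setTimeouts(5000, 5000, 30000, 30000); // resolve, connect, send, receive
HTTP_RequestObject.open("GET", url_to_get, false);
HTTP_RequestObject.send();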
Msxml2.ServerXMLHTTP.6.0 can be used via a proxy. It took me a while, but persistence paid off. I moved from WinHttp.WinHttpRequest.5.1 to Msxml2.ServerXMLHTTP.6.0 on the advice of Microsoft.
Where varHTTP is your object reference to Msxml2.ServerXMLHTTP.6.0 you can use
varHTTP.setproxy 2, ProxyServerName, "BypassList"
Hope this helps and you pursue onwards with Msxml2.ServerXMLHTTP.6.0.
