We're running into a transaction issue with Grails.
During a performance test we have a scenario where a single API is called multiple times for the same user.
During each call something is changed on the domain object and it is saved to the database.
We have discovered that the database update may be committed only after the response has been sent to the client, so it can still be pending when another request for the same API arrives at the server.
So we end up with a second API call that selects data from the database before the first call's update has been committed, and we get a StaleObjectStateException when the second request tries to save its changes.
We were relying on Grails' automatic flushing, which saves everything when the transaction finishes. So our first decision was to start calling .save() before render() in the controller.
That is fine for simple APIs, but we have some more complex APIs where we would have to keep track of quite a lot of objects and save them explicitly. Currently it seems to work without flush: true, but we are still testing.
So my question is: is there any way to make sure that the response is not sent before the transaction is committed in Grails?
This is probably due to caching; if you require guarantees about the state of the database, you need to call .save(flush: true).
Do note that this flushes everything in your session, so it might affect performance negatively.
Edit:
Not sure how I managed to read your question without seeing the part about you already doing flush:true.
Anyways, that is indeed the way you must go.
This was due to a tool we used for testing: SoapUI. It can send a duplicate request before the response comes back. This made us think it was Grails' fault. It was not.
I am trying to implement different cache strategies using a ServiceWorker. For the following strategies the way to implement them is completely clear:
Cache first
Cache only
Network first
Network only
For example, to implement the cache-first strategy, in the fetch handler of the service worker I first ask the CacheStorage (or any other cache) for the requested URL; if it exists I respondWith it, and if not I respondWith the result of a network request.
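A minimal sketch of that cache-first flow, assuming a cache named 'app-cache' (the cache name and the choice to store network responses for later are my own additions, not from the question):

```javascript
// Cache-first: try the cache, fall back to the network.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) {
        return cached;
      }
      // Not cached yet: go to the network and (optionally) store a copy.
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open('app-cache').then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```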
But for the stale-while-revalidate strategy, according to this definition from Workbox, I have the following questions:
First, about the mechanism itself: does stale-while-revalidate mean "use the cache until the network responds and then use the network data", or "just use the network response to renew the cached data for the next time"?
If the network response is only cached for the next time, what scenarios are a real use case for that?
And if the network response should replace the cached one immediately in the app, how could that be done in a service worker? The fetch event will already have been resolved with the cached data, so the network data can no longer be returned via respondWith.
Yes, it means the latter. The idea is simple: respond immediately from the cache, then refresh the cache in the background for the next time.
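Hand-rolled, that looks roughly like the sketch below (Workbox ships this ready-made as its StaleWhileRevalidate strategy; the 'swr-cache' name is just an example):

```javascript
// Stale-while-revalidate: answer from the cache right away,
// and refresh the cached copy in the background for next time.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('swr-cache').then((cache) =>
      cache.match(event.request).then((cached) => {
        // Kick off the revalidation whether or not we had a cached copy.
        const networkFetch = fetch(event.request).then((response) => {
          cache.put(event.request, response.clone());
          return response;
        });
        // Keep the worker alive until the background update completes.
        event.waitUntil(networkFetch.catch(() => {}));
        // Serve the stale copy if we have one; otherwise wait for the network.
        return cached || networkFetch;
      })
    )
  );
});
```

Note how the cached copy is returned immediately when present, while event.waitUntil keeps the worker alive long enough for the background refresh to land in the cache.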
All scenarios where it is not important to always get the very latest version of the page/app =) I'm using the stale-while-revalidate strategy on two different web applications, one for public transportation services and one for displaying restaurant menu information. Many sites/apps are just fine with this, but of course not all.
One very important thing to note about #2:
You could, for example, use stale-while-revalidate only for static assets. This way your HTML, JS, CSS, images, etc. would be cached and quickly served to the user, while data fetched dynamically from an API could still be fresh. For some apps this works, for others not so well; it depends completely on the app. Of course you have to remember not to change the semantics of your API while a user may still be running a previous version of the app, etc.
Not possible in any automatic way. What you could do, however, is implement a message channel between the Service Worker and the regular JS code on the page using the postMessage API. You could listen for certain messages on the page and then, from the Service Worker, send a message when an important change has happened and the cache has been updated. Then you could either show the user a prompt saying that the page really needs to be reloaded right now, or even force a reload from JS. You would of course need to put the logic for determining when an important update has happened into the Service Worker.
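A rough sketch of that channel (the 'cache-updated' message type is made up for illustration):

```javascript
// In the Service Worker: tell every controlled page that new content is cached.
function notifyPagesOfUpdate(url) {
  return self.clients.matchAll({ type: 'window' }).then((clients) => {
    clients.forEach((client) => client.postMessage({ type: 'cache-updated', url }));
  });
}

// On the page: listen for the message and decide what to do.
navigator.serviceWorker.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'cache-updated') {
    // e.g. show a "new version available, reload?" prompt,
    // or call window.location.reload() to force the update.
  }
});
```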
I'm considering using Falcor in an app project I'm currently working on. I've started reading the docs, but there's still one issue that is not entirely clear to me.
Let's take this example.
At time zero client A performs a request to a Falcor model, which in turn retrieves the needed data from a server DataSource and stores it in the client's cache.
At time one the same server data is changed by operations performed by client B.
At time two client A performs the same request to the Falcor model, which finds a cached value and serves the now outdated data.
Is there any way to notify client A after time one that its Falcor cache for that data is outdated and that it should instead perform a new request to the server DataSource?
You can use web sockets to send messages to the client. On the client you can call invalidate to manually invalidate the cache. You can also set an expires time on values to cause them to expire after a certain amount of time.
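A rough sketch of the push-to-invalidate idea, assuming client A keeps a WebSocket open on which the server announces changed paths (the URL and message shape are made up; model.invalidate and the $expires metadata are the Falcor features mentioned above):

```javascript
// Assumes `model` is the falcor.Model instance client A already uses.
const socket = new WebSocket('wss://example.com/falcor-changes'); // hypothetical channel

socket.onmessage = (event) => {
  const { path } = JSON.parse(event.data); // e.g. ['items', 17, 'price']
  // Drop the stale value from the local cache so the next get()
  // for this path goes back to the server DataSource.
  model.invalidate(path);
};

// Alternatively, the server can return values that age out on their own,
// e.g. an atom that Falcor treats as stale 60 seconds after it was fetched
// (negative $expires values are relative TTLs in milliseconds):
// { $type: 'atom', value: 42, $expires: -60000 }
```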
I've been building some apps that connect to a SQL backend. I use AJAX calls to hit WebMethods, a Web API, etc.
I notice that the initial call to the SQL backend retrieves the data fairly slowly. I can only assume this is because it must first negotiate credentials before retrieving the data. It probably caches this somewhere, and thus any calls made afterwards come back very fast.
I'm wondering if there's an ideal, or optimal, way to initialize this connection.
My thought was to make a simple GET call right when the page loads (grabbing something very small, like a single entry). I probably wouldn't be using the returned data in any useful way, other than to ensure that any calls afterwards come back faster.
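If you go that route, the warm-up call can be tiny and fire-and-forget, something like the sketch below (the /api/ping endpoint is just a placeholder for whatever small resource you grab):

```javascript
// Fire a throwaway request as soon as the page loads so the server-side
// connection/credential work happens before the user actually needs data.
window.addEventListener('load', () => {
  fetch('/api/ping', { method: 'GET' })
    .then(() => { /* response is ignored; this call only warms things up */ })
    .catch(() => { /* a failed warm-up is harmless */ });
});
```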
Is this an okay way to approach fixing the initial delay? I'd love to hear how others handle this.
Cheers!
There are a number of reasons that your first call could be slower than subsequent ones:
Depending on your server platform, code may be compiled when first executed
You may not have an active DB connection in your connection pool
The database may not have cached indices or data on the first call
Some VM platforms may take a while to allocate sufficient resources to your server if it has been idle for a while.
One way I deal with those types of issues on the server side is to add startup code to my web service that fetches data likely to be used by many callers when the service first initializes (e.g. lookup tables, user credential tables, etc).
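Purely as an illustration of that idea in a Node-style service (the db module, the queries, and the port are all hypothetical; the same pattern fits an ASP.NET Application_Start or similar):

```javascript
const express = require('express');
const db = require('./db'); // hypothetical data-access module

const app = express();
const lookupCache = {};

async function warmUp() {
  // Opens a pooled connection and primes caches before the first real request.
  lookupCache.countries = await db.query('SELECT id, name FROM countries');
  lookupCache.roles = await db.query('SELECT id, name FROM user_roles');
}

warmUp()
  .then(() => app.listen(3000, () => console.log('warm and listening')))
  .catch((err) => {
    console.error('warm-up failed, starting anyway', err);
    app.listen(3000);
  });
```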
If you only control the client, consider that you may well wish to monitor server health (I use the open-source monitoring platform Zabbix; there are also many commercial web-based monitoring solutions). Exercising the server outside of end-user code is probably better than making an extra GET call from a page that an end user has loaded.
I have noticed that some of my AJAX-heavy sites (ones I visit, not ones I have built) have certain auto-refresh features. For example, in Gmail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any Java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. So by my best guess, it's done in one of two ways:
The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring any other heavy-duty pages to a crawl), or
The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is some kind of AJAX function that still pings, but all it asks is "any updates?" and the server-side script has a simple boolean that says "nope" or "I'm glad you asked." But if that is the case, any data change would need to call the script directly so that it has the change ready and flips that boolean.
So is that possible/feasible? Is that how it works? I imagine something like:
Someone sends an email/IM/DB update to the server; the server calls the script using the script's URL plus some relevant GET variable; the script notes the change and updates the "updates available" variable; the AJAX gets the response that there are in fact updates; the AJAX runs its normal "update page" functions, which execute the normal update scripts and output them to the browser.
I ask because it seems really inefficient that the JS is just doing a constant check, which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds, just so that on my end I can say "Oh boy, I got an IM! Just like a real IM client!"
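Your option 1, the steady ping, would look roughly like this on the client (the /check-updates endpoint, the updatesAvailable flag, and refreshInbox are all placeholders):

```javascript
// Naive polling: ask "any updates?" every 1.5 seconds.
setInterval(() => {
  fetch('/check-updates')
    .then((response) => response.json())
    .then((data) => {
      if (data.updatesAvailable) {
        // Run the normal "update page" functions.
        refreshInbox(); // hypothetical
      }
    });
}, 1500);
```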
Read about Comet
I've actually been working on a small .NET web app that uses the AJAX long-polling technique described.
Depending on what technology you're using, you could use thread signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that's registered with the Producer. If the Consumer is in long-polling mode, it has an AutoResetEvent which is signaled whenever it receives new data; whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
But you mentioned something about PHP: as far as I know, persistence there is maintained through serialization, not by actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading, so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.
I'm trying to create a small, basic AJAX-based multiplayer game. Coordinates of objects are provided by a PHP "handler". This handler.php file is polled every 200 ms using AJAX.
Since there is no need to poll when nothing happens, I wonder: is there something that could do the same thing without frequent polling? E.g. Comet, though I've heard that you need to configure server-side applications for Comet, and it's a shared web server, so I can't do that.
Maybe prevent the handler.php file from even returning a response if nothing has changed for the client; is that possible? Then again, you'd still have the client uselessly asking for a response even though nothing has changed yet. Basically, it should only use bandwidth and server resources if something needs to be told to the client, e.g. the change of an object's coordinates.
Comet is generally used for this kind of thing, and it can be a fragile setup: it's not a particularly common technology, so it can be easy not to "get it right." That said, there are more resources available now than when I last tried it ~2 years ago.
I don't think you can do what you're thinking and have handler.php simply not return anything and stop execution: The web server will keep the connection open and prevent any further polling until handler.php does something (terminates or provides output). When it does, you're still handling a response.
You can try a long-polling technique, where your AJAX allows a very large timeout (e.g. 30 seconds) and handler.php spins without responding until it has something to report, then returns. (You'll want to make sure the spinning is not resource-intensive.) If handler.php "expires" and nothing happens, have it exit and let the AJAX poll again. Since that only happens every 30 seconds, it will be a huge improvement over ~5 times a second, and it would keep your polling to a minimum; a rough client-side sketch follows.
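On the client it could look roughly like this (assuming handler.php returns JSON once it finally has something; applyCoordinates is a placeholder for your game's update code):

```javascript
// Long polling: issue a request, let the server hold it until there is news
// (or until it times out), handle the reply, then immediately poll again.
function longPoll() {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 30000); // give up after ~30s

  fetch('handler.php', { signal: controller.signal })
    .then((response) => response.json())
    .then((update) => applyCoordinates(update)) // hypothetical game-side function
    .catch(() => { /* timed out or failed: just try again */ })
    .finally(() => {
      clearTimeout(timer);
      longPoll();
    });
}

longPoll();
```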
But that's the sort of thing Comet is designed for.
As AJAX only offers you a client-server request model (normally termed pull, rather than push), the only way to get data from the server is via requests. However, a common technique to get around this is for the server to respond only when it has new data. So the client makes a request, and the server hangs on to that request until something happens, then replies. This gets around the need for frequent polling even when the data hasn't changed, as the client only needs to send a new request after it gets a response.
Since you are using PHP, one simple method might be to have the PHP code sleep for 200 ms at a time between checks for data changes and then return the data to the client when it does change.
EDIT: I would also recommend having a timeout on the request, so that if nothing happens for, say, 2 seconds, a "no change" message is sent back. That way the client knows the server is still alive and processing its request.
Since this is tagged "html5": HTML5 has <eventsource> (server-sent events) and WebSocket, but in practice implementations are still mostly in the future.
Opera implemented an old version of <eventsource> called <event-source>.
Here's a solution - use a SaaS comet provider, such as WebSync On-Demand. No server resources to worry about, shared hosting or not, since it's all offloaded, and you can push out the information as needed.
Since it's SaaS, it'll work with any server language. For PHP, there's already a publisher written and ready to go.
The server must take part in this. Check with the hosting provider what modules are available. Or try to convince them to support Comet.
Maybe you should consider a small Virtual Private Server (VPS) for this.
One thing to add to the long-polling suggestions: if you're on a shared server, this solution will have limited scalability, as each active long poll keeps a connection (and a server-side process to service that connection) active. Your provider most likely has limits (either policy-defined or de facto) on the number of connections you can have open at a time, so you'll hit a wall if you have more sessions/windows than that playing concurrently.