Does Parse's JS client set a timeout?

Recently I've been running worker machines that are brought up and down to perform tasks against Parse collections, but a small percentage of those machines are never torn down. I understand that Node.js itself doesn't set a timeout on connections, so if a server becomes unresponsive, my Node.js code may sit there indefinitely, doing nothing, and the worker machine never gets torn down.
My question is: does the Parse JS client have a timeout set?
Update #1
Looking through Parse's Node.js client, I saw that it uses https://github.com/driverdan/node-XMLHttpRequest, which has no concept of a timeout:
https://github.com/driverdan/node-XMLHttpRequest/pull/67
Could this be the culprit when a connection gets lost in limbo while talking to Parse, leaving the code sitting there until the worker times out?
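For reference, this is the sort of guard Node leaves entirely to the caller. A minimal sketch with a plain https request (the URL is a placeholder), not the Parse SDK itself:

```typescript
// Minimal sketch, not the Parse SDK: a bare Node https request never times out
// on its own, so the caller has to opt in. The URL is a placeholder.
import * as https from "https";

const req = https.get("https://example.com/health", (res) => {
  res.on("data", () => { /* consume the body */ });
  res.on("end", () => console.log("finished with status", res.statusCode));
});

// Without this, a silent peer leaves the request (and the worker) hanging forever.
req.setTimeout(10_000, () => {
  req.destroy(new Error("request timed out"));
});

req.on("error", (err) => console.error("request failed:", err.message));
```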

So, as I discovered, the answer is NO: there isn't a default timeout.
I updated the underlying library for the Parse JS SDK to provide a timeout:
https://github.com/ShoppinPal/node-XMLHttpRequest.git
and patched the most current version of the Parse SDK to use it:
https://github.com/ShoppinPal/parse-js
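For anyone who doesn't want to maintain a fork, an application-level guard also works: race the SDK call against a timer. This is only a sketch; the "Job" class name and the 15-second budget are made up, and older SDK versions return a thenable Parse.Promise rather than a native Promise, which Promise.race still accepts.

```typescript
import Parse from "parse/node";

// Race the SDK call against a timer so a hung connection can't stall the worker.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Hypothetical usage: "Job" is a made-up class name, 15 s a made-up budget.
async function fetchJobs() {
  const query = new Parse.Query("Job");
  return withTimeout(query.find(), 15_000);
}
```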

Related

SignalR combined with load balancer missing messages

I have 2 web servers (IIS 8.5) behind a hardware firewall, and our application uses SignalR for some real-time updates. We are using SQL Server as the backplane to help us work in this load-balanced environment. Additionally, we are using sticky sessions on the load balancer to keep users on the same web server during their session. When we run in this hardware configuration we lose at least 1/3 of our messages. Sometimes we get all the expected messages, but more often than not we are missing plenty.
When we are running on a single web server all messages are received. Does anyone have any suggestions for troubleshooting this problem? We've turned on logs (both client & server) and nothing looks like it's missing or broken. We're really stumped.
EDIT:
Some additional details that I hope will shed light on the situation.
Server-to-client messages are getting lost; pretty much all of our communication is server to client.
We are using sticky sessions based just on IP and limited to 5 minutes, but we're losing messages within those 5 minutes.
This is some old SignalR code that has only been minimally touched since SignalR 1 (or even older). We keep an in-memory list of users along with their connections and use that list to send notices back to the client. It seems most likely that this is the cause of the trouble, but with sticky sessions the user should be stuck to the same server for at least those 5 minutes, right?
This list maps username to connection ID, which is useful when our backend services (on another machine) send a message back with the username rather than the connection ID.
Finally resolved this. There were really 2 issues. The first was that we were using an in-memory list of users, as mentioned in the edit above. Once we realized that wasn't going to work across machines, we removed it. That led us to the second issue, which was how SignalR 2 uses the IUserIdProvider: our call should have been
Clients.User(userId).send(message)
instead of
context.Clients.Client(connection)
This code had existed since we first started using SignalR many years ago and never got properly updated as we upgraded SignalR versions.
Have the same machineKey specified in your web.config on both servers.

How to limit Couchbase client from trying to connect to Couchbase server when it's down?

I'm trying to handle Couchbase bootstrap failure gracefully and not fail the application startup. The idea is to use "Couchbase as a service", so that if I can't connect to it, I should still be able to return a degraded response. I've been able to somewhat achieve this by using the Couchbase async API; RxJava FTW.
The problem is, when the server is down, the Couchbase Java client goes crazy and keeps trying to connect to the server; from what I see, the class that does this is ConfigEndpoint, and there's no limit to how many times it tries before giving up. This floods the logs with java.net.ConnectException: Connection refused errors. What I'd like is for it to try a few times and then stop.
Got any ideas that can help?
Edit:
Here's a sample app.
Steps to reproduce the problem:
svn export https://github.com/asarkar/spring/trunk/beer-demo.
From the beer-demo directory, run ./gradlew bootRun. Wait for the application to start up.
From another console, run curl -H "Accept: application/json" "http://localhost:8080/beers". The client request is going to time out due to the failure to connect to Couchbase, but the Couchbase client is going to flood the console continuously.
The reason we chose to have the client keep trying to connect is that Couchbase is typically deployed in high-availability, clustered situations. Most people who run our SDK want it to keep trying to work. We do it pretty intelligently, I think: we use exponential backoff and have tuneables, so it's reasonable out of the box and can be adjusted to your environment.
As to what you're trying to do, one of the tuneables is related to retry. By adjusting the timeout value and the retry behavior, you can keep the client referenceable by the application and simply fast-fail if it can't service the request.
The other option is that we do have a way to let your application know which node would handle the request (or null if the bootstrap hasn't been done), and you can use this to implement circuit-breaker-like functionality. For a future release, we're looking to add circuit breakers directly to the SDK.
All of that said, these are not the normal path as the intent is that your Couchbase Cluster is up, running and accessible most of the time. Failures trigger failovers through auto-failover, which brings things back to availability. By design, Couchbase trades off some availability for consistency of data being accessed, with replica reads from exception handlers and other intentionally stale reads for you to buy into if you need them.
Hope that helps and glad to get any feedback on what you think we should do differently.
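For the degraded-response side of this, a circuit-breaker-like wrapper can also live entirely in application code. A rough sketch (not a Couchbase SDK API; `lookup` stands in for whatever call your data layer makes):

```typescript
// Rough sketch of the circuit-breaker-like idea described above; "lookup" is a
// stand-in for whatever call your data layer makes, not a Couchbase SDK API.
class FastFailWrapper<T> {
  private consecutiveFailures = 0;
  private openUntil = 0; // while "open", calls fail immediately

  constructor(
    private readonly lookup: (key: string) => Promise<T>,
    private readonly maxFailures = 3,
    private readonly cooldownMs = 30_000,
  ) {}

  async get(key: string): Promise<T | null> {
    if (Date.now() < this.openUntil) return null; // degraded response, no network call
    try {
      const value = await this.lookup(key);
      this.consecutiveFailures = 0;
      return value;
    } catch {
      if (++this.consecutiveFailures >= this.maxFailures) {
        this.openUntil = Date.now() + this.cooldownMs; // stop hammering the server
      }
      return null;
    }
  }
}
```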
Solved this issue myself. The client I designed handles the following use cases:
The client startup must be resilient to CB failure/unavailability.
The client must not fail the request, but return a degraded response instead, if CB is not available.
The client must reconnect should a CB failover happen.
I've created a blog post here. I understand it's preferable to copy-paste rather than linking to an external URL, but the content is too big for an SO answer.
Start a separate thread and keep calling ping on it every 10 or 20 seconds. Once CB is down, ping will start failing; have a check like "if ping fails 5-6 times in a row, then close all the CB connections/resources".
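A sketch of that suggestion, where ping and closeAllConnections are placeholders for whatever your client actually exposes:

```typescript
// Sketch of the "ping in a background loop" suggestion; ping() and
// closeAllConnections() are placeholders for whatever your client exposes.
function watchHealth(
  ping: () => Promise<void>,
  closeAllConnections: () => void,
  intervalMs = 15_000,
  maxConsecutiveFailures = 5,
): NodeJS.Timeout {
  let failures = 0;
  return setInterval(async () => {
    try {
      await ping();
      failures = 0; // healthy again, reset the counter
    } catch {
      failures += 1;
      if (failures >= maxConsecutiveFailures) {
        console.warn(`ping failed ${failures} times in a row, releasing resources`);
        closeAllConnections();
        failures = 0;
      }
    }
  }, intervalMs);
}
```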

How can I efficiently poll a lot of servers?

I am looking for a good way to poll a lot of servers for their status over TCP. I am currently using synchronous code and the Minecraft Query Protocol, but whenever a server is offline, the rest of the queue gets held up.
Another problem I am experiencing with my current code is that some servers tend to block the server I use for polling in their firewall, and thus they appear offline in my server list.
I am using a Ruby rake task with an infinite loop in which every Minecraft server in my MongoDB database gets checked and updated roughly every 10 minutes (I try to set this interval by letting the loop sleep (600 / s.count.to_i).ceil seconds).
Is there any way I can do this task efficiently (and prevent servers from blacklisting my IP in their firewalls), preferably with async code in Ruby?
You need to use non-blocking sockets or multithreading to do the checks. The best thing to do is spawn several threads at once to check several servers simultaneously; that way your main thread won't get held up.
This question contains a lot of information about multithreading in Ruby; you should be able to spawn multiple concurrent threads, or at least use non-blocking sockets.
Another point, given by @Lie Ryan: you can use IO.select to poll an array of servers all at once. It will return an array of "online" servers when it's done, which could be more elegant than spawning multiple threads.
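The question is about Ruby, so treat this TypeScript/Node sketch purely as an illustration of the same idea: start every check concurrently, give each connection its own timeout, and let slow or offline hosts fail without holding up the rest.

```typescript
// Illustration only (the question is Ruby): concurrent TCP checks with
// per-connection timeouts, so one dead host can't block the whole queue.
import * as net from "net";

function checkOnline(host: string, port: number, timeoutMs = 3_000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    const finish = (ok: boolean) => { socket.destroy(); resolve(ok); };
    socket.setTimeout(timeoutMs, () => finish(false)); // unresponsive host
    socket.on("connect", () => finish(true));
    socket.on("error", () => finish(false));
  });
}

async function pollAll(servers: Array<{ host: string; port: number }>) {
  // Run all checks at once; none of them can hold up the others.
  const results = await Promise.all(servers.map((s) => checkOnline(s.host, s.port)));
  return servers.filter((_, i) => results[i]);
}
```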

How to get a stack trace on all running ruby threads on passenger

I have a production Ruby Sinatra app running on nginx/Passenger, and I frequently see requests get inexplicably stalled. I wrote a script to call passenger-status on my cluster of machines every ten seconds and plot the results on a graph. This is what I see:
The blue line shows the global queue wait spiking constantly to 60. This is an average across 4 machines, so when the blue line hits 60, it means every machine is maxed out. I have passenger_max_pool_size set to 20, so it's getting to 3x the max pool size and then presumably dropping subsequent requests.
My app depends on two key external resources - an Amazon RDS mysql backend and a Redis instance. Perhaps one of these is periodically becoming slow or unresponsive and thereby causing this behavior?
Can anyone advise me on how to get a stack trace to see if the bottleneck here is Amazon RDS, Redis, or something else?
Thanks!
I figured it out: I had a SAVE config parameter in Redis that was firing once a minute. Evidently the forking/saving operations of Redis are blocking for my app. I changed the config param to "3600 1", meaning I only save the database once an hour, which is OK because I am using it as a cache (the data is persisted in MySQL).
To answer your original question, it is possible to get "all stack traces" for the running Ruby processes that Passenger is shepherding. Basically, send a SIGQUIT signal to each one, and they'll spit out all their backtraces into the Apache/nginx log file, e.g.:
https://gist.github.com/rdp/905759f88134229c2969b9f242188615

Is there an alternative of ajax that does not require polling without server side modifications?

I'm trying to create a small and basic "Ajax"-based multiplayer game. Coordinates of objects are provided by a PHP "handler". This handler.php file is polled every 200 ms using Ajax.
Since there is no need to poll when nothing happens, I wonder: is there something that could do the same thing without frequent polling? E.g. Comet, though I've heard that you need to configure server-side applications for Comet. It's a shared web server, so I can't do that.
Maybe I could prevent the handler.php file from even returning a response if nothing has changed for the client; is that possible? Then again, you'd still have the client uselessly asking for a response even though nothing has changed yet. Basically, it should only use bandwidth and server resources if something needs to be told to the client, e.g. the change of an object's coordinates.
Comet is generally used for this kind of thing, and it can be a fragile setup, as it's not a particularly common technology and it's easy not to "get it right." That said, there are more resources available now than when I last tried it ~2 years ago.
I don't think you can do what you're thinking and have handler.php simply not return anything and stop execution: the web server will keep the connection open and prevent any further polling until handler.php does something (terminates or provides output), and when it does, you're still handling a response.
You can try a long-polling technique, where your AJAX call allows a very large timeout (e.g. 30 seconds) and handler.php spins without responding until it has something to report, then returns. (You'll want to make sure the spinning is not resource-intensive.) If handler.php "expires" and nothing has happened, have it exit and let AJAX poll again. Since this only happens every 30 seconds, it will be a huge improvement over ~5 times a second. That would keep your polling to a minimum.
But that's the sort of thing Comet is designed for.
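For what it's worth, the client side of that long-poll loop is small. A sketch using fetch with an AbortController as the client-side timeout (the 30-second budget matches the suggestion above; handler.php is the asker's endpoint):

```typescript
// Browser-side long-poll loop: ask, wait up to 30 s, handle whatever comes back,
// then immediately ask again.
async function longPoll(onUpdate: (data: unknown) => void): Promise<void> {
  while (true) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 30_000);
    try {
      const res = await fetch("handler.php", { signal: controller.signal });
      if (res.ok) onUpdate(await res.json());
    } catch {
      // timed out or network error: just loop and ask again
    } finally {
      clearTimeout(timer);
    }
  }
}
```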
As Ajax only offers you a client-server request model (normally termed pull, rather than push), the only way to get data from the server is via requests. However, a common technique to get around this is for the server to respond only when it has new data. So the client makes a request, the server hangs on to that request until something happens, and then replies. This gets around the need for frequent polling even when the data hasn't changed, as the client only needs to send a new request after it gets a response.
Since you are using PHP, one simple method might be to have the PHP code sleep for 200 ms at a time between checks for data changes and then return the data to the client when it does change.
EDIT: I would also recommend having a timeout on the request, so that if nothing happens for, say, 2 seconds, a "no change" message is sent back. That way the client knows the server is still alive and processing its request.
Since this is tagged "html5": HTML5 has <eventsource> (server-sent events) and WebSocket, but implementation support is still largely in the future tense in practice.
Opera implemented an old version of <eventsource> called <event-source>.
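If the host ever does allow it, the client side of server-sent events is tiny. A sketch, with "events.php" as a hypothetical endpoint that keeps the response open and writes "data: ..." lines when something changes:

```typescript
// "events.php" is a hypothetical SSE endpoint; the browser keeps the connection
// open and delivers each "data: ..." line as a message event.
const source = new EventSource("events.php");

source.onmessage = (event) => {
  const coords = JSON.parse(event.data); // e.g. updated object coordinates
  console.log("update from server:", coords);
};

source.onerror = () => {
  console.warn("event stream interrupted; the browser will retry automatically");
};
```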
Here's a solution: use a SaaS Comet provider, such as WebSync On-Demand. There are no server resources to worry about, shared hosting or not, since it's all offloaded, and you can push out the information as needed.
Since it's SaaS, it'll work with any server language. For PHP, there's already a publisher written and ready to go.
The server must take part in this. Check with the hosting provider what modules are available, or try to convince them to support Comet.
Maybe you should consider a small Virtual Private Server (VPS) for this.
One thing to add to the long-polling suggestions: if you're on a shared server, this solution will have limited scalability, as each active long poll keeps a connection (and a server-side process to service that connection) active. Your provider most likely has limits (either policy-defined or de facto) on the number of connections you can have open at a time, so you'll hit a wall if you have more sessions/windows than that playing concurrently.