AJAX or WebSockets for a Hearthstone-like game?

Game is a one-vs-one turn based 2D card management game to be played in a browser.
It is very much like Hearthstone, where a player plays a number of cards, observes the effects and then passes the turn to the opponent.
Game mechanics and prototype are ready and I need to decide on technology.
The server is PHP + MySQL; I have heard of Node.js but have no experience with it.
I cannot tolerate packet loss, so I guess I need to use HTTP.
My initial idea is to have each client make a scheduled AJAX call every 5 seconds to fetch the game state and check for:
end of turn
change of game state (and render animation based on it)
Obviously I would also need to validate every action of an active player on the server.
I am concerned about the number of calls to my server (it is not expensive hosting) and how many calls a modest server would be capable of handling.
As a plus of AJAX I see guaranteed delivery and no issues with proxies (which may cut persistent connections).
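To make it concrete, here is a minimal sketch of what I have in mind, assuming jQuery and a hypothetical game-state.php endpoint that returns the current turn and a state version (all names are illustrative):

var gameId = 42, lastTurn = null, lastVersion = null;   // illustrative client-side state
setInterval(function(){
    $.getJSON('game-state.php', { game: gameId }, function(state){
        if (state.turn !== lastTurn) {            // opponent ended their turn
            lastTurn = state.turn;
        }
        if (state.version !== lastVersion) {      // game state changed: render the animation
            lastVersion = state.version;
            renderAnimations(state);              // hypothetical rendering hook
        }
    });
}, 5000);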

WebSockets reduce latency and server workload (no need to open a new connection for every exchange, which in the case of HTTPS also means a new key exchange), provided that you interact frequently.
A great advantage is that the server is able to 'push' a message to the client (as opposed to the client having to 'pull' via AJAX every few seconds).
The server language shouldn't be a problem, but if you plan to maintain or extend the game, you should choose carefully (I'm guessing you're a rather new programmer, so gaining experience in a better-suited environment would not be a huge amount of work).
Edit: just to clarify, I would recommend using a WebSocket for your use case.
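To illustrate, a minimal sketch of the push approach on the client, assuming the server exposes a WebSocket endpoint (the URL and message format here are made up):

var socket = new WebSocket('wss://example.com/game');        // illustrative endpoint
socket.onmessage = function(event){
    var msg = JSON.parse(event.data);                        // e.g. { type: 'turnEnded' } or { type: 'stateChanged', state: {...} }
    if (msg.type === 'turnEnded')   { startMyTurn(); }       // hypothetical handlers
    if (msg.type === 'stateChanged'){ renderAnimations(msg.state); }
};
socket.onopen = function(){
    // the active player's actions still go through the server so it can validate them
    socket.send(JSON.stringify({ type: 'playCard', cardId: 7 }));
};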

Related

Is the mux in this golang socket.io example necessary?

In an app that I'm making, a user is always part of a 'game'. I'd like to set up a socket.io server to communicate with users in a game. I'm planning to use go-socket.io (http://godoc.org/github.com/madari/go-socket.io), which defines the newSocketIO function to create a new socket.io instance.
Instead of creating one socket.io instance, I thought it might be possible to create a map from game IDs to socket.io instances, and configure them so that each listens on a URL that represents its game ID.
This way, I can use methods such as broadcast and broadcastExcept to broadcast to all players within a single game. However, I'd have to start a new goroutine for every game, and I don't know enough about goroutine performance characteristics to know whether this is scalable: the request rate for a single socket.io instance will be very low (about 1/second at peak times), and the connection might be idle for tens of seconds at other times (except for heartbeats and any other communication specified by the socket.io protocol).
Would I be better off creating 1 socket.io instance, and tracking which connections belong to which games?
I'd have to start a new goroutine for every game, and I don't know enough about their performance characteristics to know if this is scalable
Fire away, the Go scheduler is built to efficiently handle thousands and even millions of goroutines.
The default net/http server in the Go standard library, for instance, spawns a goroutine for every client.
Just remember to return from your goroutines once they're done working; otherwise you'll end up with a lot of stale ones.
Would I be better off creating 1 socket.io instance, and tracking which connections belong to which games?
I'm not involved in the project but if it follows Go's "get sh*t done" philosophy, then it shouldn't matter. You can find out what works better by profiling both approaches though.
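For comparison, the single-instance idea expressed with the Node.js socket.io API (the go-socket.io API differs, but the principle of tracking game membership on one instance is the same; names are illustrative):

var io = require('socket.io')(3000);                  // one socket.io instance for every game
io.on('connection', function(socket){
    socket.on('joinGame', function(gameId){
        socket.join('game-' + gameId);                // remember which game this connection belongs to
    });
    socket.on('move', function(gameId, move){
        // broadcast to everyone in the same game except the sender
        socket.broadcast.to('game-' + gameId).emit('move', move);
    });
});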

what's the skinny on long polling with ajax and webapi...is it going to kill my server? and string comparisons

I have a very simple long polling ajax call like this:
(function poll(){
    $.ajax({
        url: "myserver",
        dataType: "json",
        timeout: 30000,                        // abort and retry if no response arrives within 30s
        success: function(data){ /* do my stuff here */ },
        complete: poll                         // immediately issue the next poll when this one finishes
    });
})();
I just picked this example up this afternoon and it seems to work great. I'm using it to build out some HTML on my page and it's nearly instantaneous as best I can tell. I'm a little worried, though, that this is going to keep worker threads open on my server and that if I have too big a load on the server, it's going to stop entirely. Can someone shed some light on this theory? On the back end I have a WebAPI service (.NET MVC 4) that calls a database, builds the object, then passes the object back down. It also seems to me that in order for this to work, the server would have to be calling the database constantly... and that can't be good, right?
My next question is: what is the best way on the client to determine whether I need to update the HTML on my page? Currently I'm using JSON.stringify() to turn my object into a string and comparing the string that comes down to the old string; if there's a delta, it re-writes the HTML on the page. Right now there's not a whole lot in the object coming down, but it could potentially get very large, and I think doing that string comparison could be pretty resource-intensive on the client... especially if it's doing it nearly constantly.
Bottom line for me is this: I'm not exactly sure how long polling works. I just googled it, found the above sample code and implemented it, and, on the surface, it's awesome. I just fear that it's going to bog things down on the server, and that my way of comparing old results to new is going to bog things down on the client.
any and all information you can provide is greatly appreciated.
TIA.
OK, my two cents:
As others said, SignalR is tried and tested code so I would really consider using that instead of writing my own.
SignalR does change some of the IIS settings to optimise IIS for this sort of work. So if you are looking to implement your own, have a look at IIS setting changes done in SignalR
I suppose you are doing long polling so that your server can implement some form of server push. Just bear in mind that this will turn your stateless HTTP machine into a stateful machine, which is not good if you want to scale. Long polling behind a load balancer is not nice :) For me this is the worst thing about server push.
ASP.NET uses ThreadPool for serving requests. A long poll will hog a ThreadPool thread. If you have too many of these threads you might end up in thread starvation (and tears). As a ballpark figure, 100 is not too many but +1000 is.
Even the SignalR team say that an IIS box optimised for SignalR is probably not optimised for normal ASP.NET, and they recommend separating these boxes. So this means cost and overhead.
At the end of the day, I recommend using long polling only if it solves a real business problem (and not just because it is cool), because then it will pay for its costs, overheads and headaches.
I agree with SLaks - i.e. use SignalR if you need real-time web with WebAPI: http://www.asp.net/signalr. Long polling is difficult to implement well; let someone else handle that complexity, i.e. use SignalR (the natural choice for WebAPI) or Comet.
SignalR attempts three other forms of communication before resorting to long polling: WebSockets, server-sent events and forever frame (here).
In some circumstances you may be better off with simple polling, i.e. a hit every second or so to update... take a look at this article. But here is a quote:
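For illustration, the client side of a SignalR 2.x hub looks roughly like this (the hub name and its methods are made up):

var hub = $.connection.gameHub;                   // generated proxy; "gameHub" is illustrative
hub.client.update = function(data){
    // the server pushes data into this callback - no polling loop on the client
    rebuildHtml(data);                            // hypothetical DOM-update function
};
$.connection.hub.start().done(function(){
    hub.server.subscribe();                       // hypothetical hub method to register interest
});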
when you have a high message volume, long-polling does not provide any substantial
performance improvements over traditional polling. In fact, it could be worse,
because the long-polling might spin out of control into an
unthrottled, continuous loop of immediate polls.
The fear is that with any significant load on your web page your 30 second ajax query could end up being your own denial of service attack.
Even Bayeux (CometD) will resort to simple polling if the load gets too much:
Increased server load and resource starvation are addressed by using
the reconnect and interval advice fields to throttle clients, which in
the worst-case degenerate to traditional polling behaviour.
As for the second part of your question:
If you are using long polling then your server should ideally only be returning an update if something actually has changed thus your UI should probably "trust" the response and assume that a response means new data. The same goes for any of the Server Push type approaches.
If you did move back towards a simple polling (pull) method, you could use the built-in HTTP mechanisms for detecting an update via the If-Modified-Since header, which allows the server to return a 304 Not Modified: the server checks the timestamp of the object and only returns a 200 with the object if it has been modified since the last request.
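A rough sketch of that pull variant with jQuery, which can send If-Modified-Since for you via the ifModified option (the endpoint name is illustrative):

setInterval(function(){
    $.ajax({
        url: "/api/gamestate",                    // illustrative WebAPI endpoint
        dataType: "json",
        ifModified: true,                         // jQuery sends If-Modified-Since and treats 304 specially
        success: function(data, textStatus){
            if (textStatus !== "notmodified" && data) {
                rebuildHtml(data);                // hypothetical: only touch the DOM when something changed
            }
        }
    });
}, 5000);

On the WebAPI side the action would set a Last-Modified header on 200 responses and return 304 when the object's timestamp hasn't changed, so most polls carry no body at all.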

ajax request for real-time monitor

We have to monitor our company's cars, which have GPS installed, and draw their positions on a map.
We use Google Maps and render each car with a google.maps.Marker with a custom icon.
Whenever the position of a car changes, we reset the position of its marker.
Now we have problems implementing the real-time updates.
In order to keep the car positions real-time we have to refresh them at a small interval.
We tried this kind of solution:
function refresh(){
    // note: $.getJSONP is not a jQuery method; use $.getJSON (or $.ajax with dataType: "jsonp" for a cross-domain endpoint)
    $.getJSON(url, function(data){
        resetLocation(data);
    });
}
setInterval(refresh, delay);
Now how to set the delay?
In the client's opinion, the smaller the better, since it will make the cars on the map move smoothly. For example, set the delay to 500 milliseconds.
However, this will cause frequent requests to the server. Can the server and the browser afford this?
Is there an alternative way to implement our requirement?
It would be best to use WebSockets or a Meteor stream and maintain a connection for a while, if you're going for high-resolution updates.
As for whether your server can afford this, that's for you to say. A typical MMO sends way more data much more often, but they use a data center. So it depends on how much infrastructure you have, how many clients you're expecting, and how much processing the server side needs to do to compile the data before sending.
It would be advantageous to use an event-based server such as Node.js if you don't have much processing. Even if you do, I'd still serve it from Node or EventMachine, and delegate heavy lifting to other processes.
Look into socket.io if you're going for Node.
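A rough sketch of that push model with socket.io (server address, event and variable names are made up):

// server (Node.js): push every new GPS fix to all connected browsers
var io = require('socket.io')(3000);
function onGpsFix(car){
    io.emit('position', { id: car.id, lat: car.lat, lng: car.lng });
}

// client (browser): update the marker whenever the server pushes a position
var socket = io('http://example.com:3000');           // illustrative server address
socket.on('position', function(p){
    resetLocation(p);                                 // reuse the existing marker-update code
});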

Algorithm for Client-Server Games

For standalone games, the basic game loop is (source: Wikipedia):
while( user doesn't exit )
    check for user input
    run AI
    move enemies
    resolve collisions
    draw graphics
    play sounds
end while
But what if I develop client-server games, like Quake, Ragnarok, Trackmania, etc.?
What is the loop/algorithm for the client and the server parts of the game?
It would be something like
Client:
while( user does not exit )
    check for user input
    send commands to the server
    receive updates about the game from the server
    draw graphics
    play sounds
end

Server:
while( true )
    check for client commands
    run AI
    move all entities
    resolve collisions
    send updates about the game to the clients
end
Client:
connect to server
while( user does not exit && connection live)
    check for user input
    send commands to the server
    estimate outcome and update world data with 'best guess'
    draw graphics
    play sounds
    receive updates about the game from the server
    correct any errors in world data
    draw graphics
    play sounds
end

Server:
while( true )
    check for and handle new player connections
    check for client commands
    sanity check client commands
    run AI
    move all entities
    resolve collisions
    sanity check world data
    send updates about the game to the clients
    handle client disconnects
end
The sanity checks on the client commands and world data are to remove any 'impossible' situations caused either by deliberate cheating (moving too fast, through walls etc) or lag (going through a door that the client thinks is open, but the server knows is closed, etc).
In order to handle lag between the client and server, the client has to make a best guess about what will happen next (using its current world data and the client commands); the client then has to handle any discrepancies between what it predicted would happen and what the server later tells it actually happened. Normally this will be close enough that the player doesn't notice the difference, but if lag is significant, or the client and server are out of sync (for example due to cheating), the client will need to make an abrupt correction when it receives data back from the server.
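A very rough JavaScript sketch of that predict-then-correct idea (every name here is illustrative, not from any particular engine):

var localState = {}, pendingInputs = [];
function applyInput(input){
    predict(localState, input);                      // hypothetical: apply the command locally right away
    pendingInputs.push(input);
    sendToServer(input);                             // hypothetical network send
}
function onServerState(serverState){
    localState = serverState;                        // the authoritative snapshot wins
    // drop inputs the server has already processed, then replay the rest on top of the snapshot
    pendingInputs = pendingInputs.filter(function(i){ return i.seq > serverState.lastProcessedSeq; });
    pendingInputs.forEach(function(i){ predict(localState, i); });
}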
There are also lots of issues regarding splitting sections of these processes out into separate threads to optimise response times.
One of the best ways to start is to grab an SDK from one of the games that has an active modding community - delving into how that works will provide a good overview of how it should be done.
It really isn't a simple problem. At a most basic level you could say that the network provides the same data that the MoveEnemies part of the original loop did. So you could simply replace your loop with:
while( user doesn't exit )
    check for user input
    run AI
    send location to server
    get locations from server
    resolve collisions
    draw graphics
    play sounds
end while
However you need to take into account latency so you don't really want to pause your main loop with calls to the network. To overcome this it is not unusual to see the networking engine sitting on a second thread, polling for data from the server as quickly as it can and placing the new locations of objects into a shared memory space:
while(connectedToNetwork)
    Read player location
    Post player location to server
    Read enemy locations from server
    Post enemy locations into shared memory
Then your main loop would look like:
while( user doesn't exit )
    check for user input
    run AI
    read/write shared memory
    resolve collisions
    draw graphics
    play sounds
end while
The advantage of this method is that your game loop will run as fast as it can, but the information from the server will only be updated when a full post to and from the server has been completed. Of course, you now have issues with sharing objects across threads and the fun with locks etc that comes with it.
On the server side the loop is much the same. There is one connection per player (quite often each player is also on a separate thread, so that the latency of one won't affect the others), and for each connection it will run a loop like:
while (PlayerConnected)
    Wait for player to post location
    Place new location in shared memory
When the client machine requests the locations of enemies the server reads all the other players locations from the shared block of memory and sends it back.
This is a hugely simplified overview and there are many more tweaks that will improve performance (for instance, it may be worth the server sending the enemy positions to the client rather than the client requesting them), and you need to decide where certain logical decisions are made (does the client decide whether he has been shot, because he has the most up-to-date position for himself, or does the server, to stop cheating?).
The client part is basically the same, except replace
run AI
move enemies
resolve collisions
with
upload client data to server
download server updates
And the server just does:
while (game is running)
{
    get all clients data
    run AI
    resolve collisions
    update all clients
}
You can use almost the same thing, but most of your logic would be on the server; you can put timers, sounds, graphics, and other UI components in the client app.
Any business rule (AI, movement) goes on the server side.
A very useful and I would argue pertinent paper to read is this one: Client-Server Architectures
I gave it a read and learned a lot from it, a lot of sense was made. By separating out your game into strategically defined components or layers, you can create a more maintainable architecture. The program is easier to code, and more robust than a conventional linear program model like the one you've described.
That thought process came out in a previous post here about using a "Shared Memory" to talk between different parts of the program, and so overcoming the limitations of having a single thread and step-followed-step game logic.
You can spend months working on the perfect architecture and program flow, read a single paper and realise you've been barking up the wrong tree.
tldr; read it.

Distributed time synchronization and web applications

I'm currently trying to build an application that inherently needs good time synchronization across the server and every client. There are alternative designs for my application that can do away with this need for synchronization, but my application quickly begins to suck when it's not present.
In case I am missing something, my basic problem is this: firing an event in multiple locations at exactly the same moment. As best I can tell, the only way of doing this requires some kind of time synchronization, but I may be wrong. I've tried modeling the problem differently, but it all comes back to either a) a sucky app, or b) requiring time synchronization.
Let's assume I Really Really Do Need synchronized time.
My application is built on Google AppEngine. While AppEngine makes no guarantees about the state of time synchronization across its servers, usually it is quite good, on the order of a few seconds (i.e. better than NTP), however sometimes it sucks badly, say, on the order of 10 seconds out of sync. My application can handle 2-3 seconds out of sync, but 10 seconds is out of the question with regards to user experience. So basically, my chosen server platform does not provide a very reliable concept of time.
The client part of my application is written in JavaScript. Again we have a situation where the client has no reliable concept of time either. I have done no measurements, but I fully expect some of my eventual users to have computer clocks that are set to 1901, 1970, 2024, and so on. So basically, my client platform does not provide a reliable concept of time.
This issue is starting to drive me a little mad. So far the best thing I can think to do is implement something like NTP on top of HTTP (this is not as crazy as it may sound). This would work by commissioning 2 or 3 servers in different parts of the Internet, and using traditional means (PTP, NTP) to try to ensure their sync is at least on the order of hundreds of milliseconds.
I'd then create a JavaScript class that implemented the NTP intersection algorithm using these HTTP time sources (and the associated roundtrip information that is available from XMLHTTPRequest).
As you can tell, this solution also sucks big time. Not only is it horribly complex, but only solves one half the problem, namely giving the clients a good notion of the current time. I then have to compromise on the server, either by allowing the clients to tell the server the current time according to them when they make a request (big security no-no, but I can mitigate some of the more obvious abuses of this), or having the server make a single request to one of my magic HTTP-over-NTP servers, and hoping that request completes speedily enough.
These solutions all suck, and I'm lost.
Reminder: I want a bunch of web browsers, hopefully as many as 100 or more, to be able to fire an event at exactly the same time.
Let me summarize, to make sure I understand the question.
You have an app that has a client and server component. There are multiple servers that can each be servicing many (hundreds) of clients. The servers are more or less synced with each other; the clients are not. You want a large number of clients to execute the same event at approximately the same time, regardless of which server happens to be the one they connected to initially.
Assuming that I described the situation more or less accurately:
Could you have the servers keep certain state for each client (such as initial time of connection -- server time), and when the time of the event that will need to happen is known, notify the client with a message containing the number of milliseconds after the beginning value that need to elapse before firing the event?
To illustrate:
client A connects to server S at time t0 = 0
client B connects to server S at time t1 = 120
server S decides an event needs to happen at time t3 = 500
server S sends a message to A:
S->A : {eventName, 500}
server S sends a message to B:
S->B : {eventName, 380}
This does not rely on the client time at all; just on the client's ability to keep track of time for some reasonably short period (a single session).
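A sketch of that scheme on the client, assuming a WebSocket-style message channel (field names follow the illustration above):

var socket = new WebSocket('wss://example.com/events');    // illustrative message channel
var connectedAt = Date.now();                               // measures elapsed time since connecting; no wall clock needed
socket.onmessage = function(event){
    var msg = JSON.parse(event.data);                       // e.g. { eventName: "start", fireAfterMs: 500 }
    var remaining = msg.fireAfterMs - (Date.now() - connectedAt);
    setTimeout(function(){ fireEvent(msg.eventName); }, Math.max(0, remaining));   // fireEvent is hypothetical
};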
It seems to me like you need to listen to a broadcast event from a server in many different places. Since you can accept 2-3 seconds of variation, you could just put all your clients into long-lived comet-style requests and get the response from the server? Sounds to me like the clients wouldn't need to deal with time at all this way?
You could use ajax to do this, so you'd be avoiding any client-side lockups while waiting for new data.
I may be missing something totally here.
If you can assume that the clocks are reasonably stable - that is, they are set wrong, but ticking at more or less the right rate:
Have the servers get their offset from a single defined source (e.g. one of your servers, or a database server or something).
Then have each client calculate its offset from its server (with possible round-trip complications if you want lots of accuracy).
Store that, then use the combined offset on each client to trigger the event at the right time.
(client-time-to-trigger-event) = (scheduled-time) + (client-to-server-difference) + (server-to-reference-difference)
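A sketch of estimating the client-to-server difference with one round trip and applying the combined offset (the /time endpoint and its fields are illustrative):

// one round trip to estimate (server clock - client clock), assuming roughly symmetric latency
function estimateOffset(done){
    var t0 = Date.now();
    $.getJSON('/time', function(resp){                   // illustrative response: { serverNow: ..., serverToReference: ... }
        var t1 = Date.now();
        var clientToServer = resp.serverNow - (t0 + t1) / 2;
        done(clientToServer + resp.serverToReference);    // combined offset: client -> server -> reference clock
    });
}
function scheduleAt(scheduledTime){                       // scheduledTime expressed in the reference clock
    estimateOffset(function(offset){
        setTimeout(fireEvent, scheduledTime - (Date.now() + offset));   // fireEvent is hypothetical
    });
}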
Time synchronization is very hard to get right and in my opinion the wrong way to go about it. You need an event system which can notify registered observers every time an event is dispatched (observer pattern). All observers will be notified simultaneously (or as close as possible to that), removing the need for time synchronization.
To accommodate latency, the browser should be sent the timestamp of the event dispatch, and it should wait a little longer than what you expect the maximum latency to be. This way all events will be fired up at the same time on all browsers.
Google found a way to define time as absolute. That sounds heretical to a physicist and with respect to General Relativity: time flows at a different pace depending on your position in space and time, on Earth, in the Universe...
You may want to have a look at Google Spanner database: http://en.wikipedia.org/wiki/Spanner_(database)
I guess it is used now by Google and will be available through Google Cloud Platform.
