Data stream, which way to go? - ajax

I am about to start an HTML5 game with heavy logic in JavaScript. I want to keep some logic on the server side, so that I can guarantee my game will only run on my server.
I decided to choose node.js, as it's very fast. I thought about two approaches:
Use AJAX: the client calls a server-side method that returns the calculated numbers needed to refresh the game scene; this call is made every 2 seconds.
Open a socket using node.js, so the client doesn't have to call the server each time; instead, it keeps listening to data streamed over the open socket, which refreshes the data every x seconds.
The calculated data is not big, about 0.5 kB per second. The client also needs to tell the server its status, so data sent from the client is about 0.1 kB every x seconds, depending on gameplay.
It seems that the second approach is better, but I will need hundreds of ports to handle concurrent players...
So, in terms of performance and minimizing bandwidth, which way should I choose? Or is there an even better way? Can anyone help?

Since you mentioned you are creating a web-based JavaScript application that regularly sends information to, or retrieves updates from, a server, in my opinion you should use WebSocket (especially as you are developing in HTML5), which reduces the amount of bandwidth your application uses.
In terms of performance, I would choose WebSocket as well. In measurement experiments, e.g. averaging the round-trip time over 100 requests at a time, WebSocket shows a lower round-trip time. Here is a link to a performance test that illustrates the result: http://www.peterbe.com/plog/are-websockets-faster-than-ajax
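As a rough illustration of the push-style setup described in the question, here is a minimal sketch using the ws package for Node.js. The port, the 1-second interval, and the computeGameState/handleClientStatus stubs are placeholders, not part of the original question:

    // server.js - assumes `npm install ws`
    const { WebSocketServer } = require('ws');

    // Stand-ins for the real game logic.
    function computeGameState() { return { x: 0, y: 0 }; }
    function handleClientStatus(status) { console.log('client status:', status); }

    const wss = new WebSocketServer({ port: 8080 }); // illustrative port

    wss.on('connection', (socket) => {
      // Push the computed game state every second instead of waiting for the client to poll.
      const timer = setInterval(() => {
        socket.send(JSON.stringify(computeGameState()));
      }, 1000);

      // The client reports its status over the same connection.
      socket.on('message', (data) => handleClientStatus(JSON.parse(data.toString())));
      socket.on('close', () => clearInterval(timer));
    });

    // browser side
    // const ws = new WebSocket('ws://localhost:8080');
    // ws.onmessage = (event) => refreshScene(JSON.parse(event.data)); // refreshScene is your code

Note that a single WebSocket server on one port handles all concurrent connections, so you do not need hundreds of ports for concurrent players.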

Related

Multiplayer game server: How much is too much communication from the client to the server

I am making a multiplayer game (server/client) with Unity and a Colyseus backend. Currently the backend sends 20 updates per second to each client. I want each client to also send approximately 20 messages to the server each second. Is this too much communication? (The messages are very small: a JSON object with 5 string fields.)
I don't want to build the game and find out it is not scalable :(. So, the question is: is each client sending a small message to the server 20 times a second too much?
As mentioned by Slugart, it is best to benchmark and go from there.
That being said, there are a few things you can do if you find the performance to be a bottleneck:
Lower the number of messages - generally, 20 messages per second per client might be a bit too much - games usually go with less than half of that (6-12 msg/s).
Use a binary format instead of JSON - if the server needs to act as a relay, you could encode your messages using a binary protocol. Look into protobuf or MessagePack (see the sketch below).
There are some other options, but they are not available for JavaScript (as far as I know).
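As a rough illustration of the binary-format point, here is a sketch using the @msgpack/msgpack package to compare encoded sizes for the kind of 5-string-field message described in the question (the field names are invented):

    // assumes `npm install @msgpack/msgpack`
    const { encode, decode } = require('@msgpack/msgpack');

    // Hypothetical message with 5 string fields, as in the question.
    const msg = { playerId: 'p1', action: 'move', x: '10', y: '20', ts: '1700000000' };

    const asJson = Buffer.from(JSON.stringify(msg));
    const asMsgpack = Buffer.from(encode(msg));

    console.log('JSON bytes:   ', asJson.length);
    console.log('msgpack bytes:', asMsgpack.length);
    console.log('round-trips ok:', JSON.stringify(decode(asMsgpack)) === JSON.stringify(msg));

For payloads this small the saving per message may be modest, but it adds up at thousands of messages per second.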
If you are expecting a large number of players and ever want to optimize as much as possible, I would suggest switching to a backend that supports multithreading, object pooling (to reduce garbage collection time), etc., to gain the most performance.
Disclaimer: I am a co-founder of ServerBytes - we help you make games faster.
You can also try ServerBytes for free - a platform which supports high concurrency, high throughput, custom C# backend code and more.
This depends on many things that you haven't specified, first among them how many simultaneous players and how many server instances you expect to have.
I would recommend you quickly benchmark how long the (de)serialisation of your message takes and then multiply it by the actual message volume you expect to see.
You could also create a proof of concept that does nothing except send messages at different message rates, to see for yourself how it would scale.
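A quick (de)serialisation benchmark along those lines might look like this (the message shape and iteration count are arbitrary):

    // Rough benchmark of JSON encode/decode cost for one message shape.
    const msg = { a: 'up', b: 'left', c: 'fire', d: 'p42', e: 'room7' }; // 5 string fields
    const iterations = 100000;

    const start = process.hrtime.bigint();
    for (let i = 0; i < iterations; i++) {
      JSON.parse(JSON.stringify(msg)); // serialise + deserialise
    }
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

    console.log(`${iterations} round trips in ${elapsedMs.toFixed(1)} ms`);
    console.log(`~${((elapsedMs / iterations) * 1000).toFixed(2)} µs per message`);

Multiply the per-message cost by the number of clients times 20 messages per second to get a feel for the CPU budget on the server.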

Is combining REST API calls to reduce # of requests worth doing?

My server used to handle bursts of 700+ users, and now it is failing at around 200 users.
(Users are connecting to the server almost at the same time after clicking a push message)
I think the regression is due to a change in how the requests are made.
Back then, the web server collected all the information into a single HTML response.
Now, each section of a page makes its own REST API request, resulting in probably 10+ more requests.
I'm considering making an API endpoint that aggregates those requests for the pages users open when they click on a push notification.
Another solution I'm thinking of is caching those frequently used REST API responses.
Is it a good idea to combine API calls to reduce the number of requests?
It is always a good idea to reduce API calls. The optimal solution is to get all the necessary data in one go without any unused information.
This results in less traffic, fewer requests (and less load) on the server, lower RAM and CPU usage, and fewer concurrent DB operations.
Caching is also a great choice. You can consider caching both the entire response and separate parts of it.
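A minimal sketch of the caching idea (the cache key, TTL, and fetchSection helper are all invented for illustration):

    // Tiny in-memory TTL cache for frequently requested response fragments.
    const cache = new Map();

    async function cached(key, ttlMs, loader) {
      const hit = cache.get(key);
      if (hit && hit.expires > Date.now()) return hit.value; // serve from cache
      const value = await loader();                          // otherwise recompute
      cache.set(key, { value, expires: Date.now() + ttlMs });
      return value;
    }

    // Example: cache one section of the page for 30 seconds.
    // const news = await cached('news-section', 30000, () => fetchSection('news'));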
A combined API response means that there will be just one response, which will reduce the pre-execution time (where the app is loading everything), but will increase the processing time, because it's doing everything in one thread. This will result in less traffic, but a slightly slower response time.
From the user's perspective, this means that if you combine everything, the page will take slightly longer to respond, but when it does, it will load entirely in one go.
It's a matter of finding the balance.
As for whether it's worth doing - it depends on your setup. You should measure the application's start-up time and execution time and do the math.
Another thing you should consider is the amount of time the change might require. There is also the option of increasing server capacity, for example creating a clustered cache and using a load balancer to split the load. You should compare the time needed for each approach and work from there.
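For illustration, a combined endpoint might look roughly like this, assuming an Express-style Node server (the route name and the three section loaders are hypothetical):

    const express = require('express'); // assumes `npm install express`
    const app = express();

    // Stand-ins for the per-section REST calls the page currently makes.
    const loadProfile = async () => ({ name: 'demo' });
    const loadFeed = async () => [];
    const loadNotifications = async () => [];

    // One aggregate endpoint instead of 10+ separate requests per page view.
    app.get('/api/push-landing', async (req, res) => {
      const [profile, feed, notifications] = await Promise.all([
        loadProfile(),
        loadFeed(),
        loadNotifications(),
      ]);
      res.json({ profile, feed, notifications });
    });

    app.listen(3000); // illustrative port

Note that the individual sections can still be loaded concurrently on the server (Promise.all above), so combining the requests does not force the work to run strictly one after another.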

How to measure the time it takes to send something from a server to a browser and the reverse

I want to time how long it takes to send a few things from the browser to a server and the reverse (but not the round trip). I know there are various timings available for each, but the only thing I think would work is a very accurate system time on both sides (they are both local). Something like the browser's performance.now() would be perfect, but I don't know how to compare it with the server side. I'm using Node as the server.
There are many profilers for nodejs that time the execution of blocks of your choice.
Example: https://www.npmjs.com/package/exectimer
https://www.npmjs.com/browse/keyword/profiler
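Since the browser and the Node server are on the same machine (as stated in the question), one rough sketch is to compare wall-clock timestamps on both sides: performance.timeOrigin + performance.now() gives a high-resolution timestamp on the same epoch as Date.now(). The /measure endpoint below is invented, and a cross-clock comparison like this is only approximately accurate:

    // browser: tag the payload with a high-resolution send time (ms since the Unix epoch)
    // const sentAt = performance.timeOrigin + performance.now();
    // fetch('/measure', {
    //   method: 'POST',
    //   headers: { 'Content-Type': 'application/json' },
    //   body: JSON.stringify({ sentAt, payload: 'hello' }),
    // });

    // node server: compare against its own clock when the request arrives
    const http = require('http');
    http.createServer((req, res) => {
      let body = '';
      req.on('data', (chunk) => (body += chunk));
      req.on('end', () => {
        const { sentAt } = JSON.parse(body);
        console.log(`browser -> server: ~${(Date.now() - sentAt).toFixed(1)} ms`);
        res.end(JSON.stringify({ serverSentAt: Date.now() })); // lets the browser time the reverse leg
      });
    }).listen(3000); // illustrative port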

What additional overheads are there to sending a packet over a WebSocket connection?

When performing AJAX requests, I have always tried to do as few as possible, since there is an overhead to each request having to open the HTTP connection to send the data. Since a WebSocket connection is constantly open, is there any cost, outside of the obvious packet bandwidth, to sending a request?
For example: over the space of 1 minute, a client will send 100 kB of data to the server. Assuming the client does not need a response to any of these requests, is there any advantage to queuing packets and sending them in one big burst vs. sending them as they are ready?
In other words, is there an overhead to stopping and starting data transfer on a connection that is constantly open?
I want to make a multiplayer browser game that is as real-time as possible, but I don't want to find that hundreds of tiny requests per minute, compared to a larger consolidated request, are causing the server additional stress. I understand that if the client needs a response it will be slower, as there is a lot of waiting in the back and forth; I will consider this and only consolidate when it is appropriate. The more small requests per minute, the better the user experience, but I don't know what toll that will take on the server.
You are correct that a WebSocket message will have lower overhead for a given message transmission than sending the same message via an Ajax call, because the WebSocket connection is already established and because a WebSocket message has lower overhead than an HTTP request.
First off, there's always less overhead in sending one larger transmission vs. sending lots of smaller transmissions. That's just the nature of TCP. Every TCP packet gets separately processed and acknowledged so sending more of them costs a bit more overhead. Whether that difference is relevant or significant and worth writing extra code for or worth sacrificing some element of your user experience (because of the delay for batching) depends entirely upon the specifics of a given situation.
Since you've described a situation where your client gets the best experience if there is no delay and no batching of packets, then it seems that what you should do is not implement the batching and test out how your server handles the load with lots of smaller packets when it gets pretty busy. If that works just fine, then stay with the better user experience. If you have issues keeping up with the load, then seriously profile your server and find out where the main bottleneck to performance is (you will probably be surprised about where the bottleneck actually is as it is often not where you think it will be - that's why you have to profile and measure to know where to concentrate your energy for improving the scalability).
FYI, due to the implementation of Nagle's algorithm in most TCP stacks, TCP itself does small amounts of batching for you if you are sending multiple requests fairly closely spaced in time or if sending over a slower link.
It's also possible to implement a dynamic system where, as long as your server is able to keep up, you stick with the smaller, more responsive packets, but if your server starts to get busy, you start batching in order to reduce the number of separate transmissions.
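A simple client-side version of that kind of batching might look like this (the flush interval, URL, and message shape are arbitrary):

    // Queue small messages and flush them over the WebSocket as one array.
    const socket = new WebSocket('ws://example.com/game'); // illustrative URL
    const queue = [];

    function sendBatched(message) {
      queue.push(message); // e.g. sendBatched({ type: 'move', x: 1, y: 2 }) instead of socket.send(...)
    }

    // Flush every 50 ms; shrink the interval (or send immediately) while the
    // server keeps up, and grow it when the server reports that it is busy.
    setInterval(() => {
      if (queue.length === 0 || socket.readyState !== WebSocket.OPEN) return;
      socket.send(JSON.stringify(queue.splice(0, queue.length))); // one frame, many messages
    }, 50);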

How do UDP sockets actually work internally?

I am trying to reduce packet manipulation to a minimum in order to improve the efficiency of a specific program I am working on, but I am struggling with the time it takes to send data through a UDP socket using sendto/recvfrom. I am using 2 very basic processes (applications): one is sending, the other one receiving.
I would like to understand how Linux works internally when using these function calls...
Here are my observations:
when sending packets at:
10Kbps, the time it takes for the messages to go from one application to the other is about 28us
400Kbps, the time it takes for the messages to go from one application to the other is about 25us
4Mbps, the time it takes for the messages to go from one application to the other is about 20us
40Mbps, the time it takes for the messages to go from one application to the other is about 18us
When using different CPUs, the time is obviously different but consistent with those observations. There must be some sort of setting that enables queue reads to be done faster depending on the traffic flow on a socket... how can that be controlled?
When using a node as a forwarding node only, going in and out takes about 8us with a 400Kbps flow; I want to converge to this value as much as I can. 25us is not acceptable and is deemed too slow (it is obviously way less than the delay between each packet anyway... but the point is to eventually be able to process a greater number of packets, hence this time needs to be shortened!). Is there anything faster than sendto/recvfrom? (I must use 2 different applications (processes); I know I cannot use a monolithic block, so the information needs to be sent over a socket.)
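For reference, here is a minimal Node.js sketch of the kind of two-process sender/receiver measurement described above (the original programs presumably call sendto/recvfrom from C directly; the port and send rate are placeholders, and timestamps compared across two processes on the same machine are only roughly accurate):

    // receiver.js
    const dgram = require('dgram');
    const { performance } = require('perf_hooks');

    const receiver = dgram.createSocket('udp4');
    receiver.on('message', (msg) => {
      const sentAt = Number(msg.toString());                    // sender's epoch time in ms
      const now = performance.timeOrigin + performance.now();   // receiver's epoch time in ms
      console.log(`one-way: ~${((now - sentAt) * 1000).toFixed(1)} µs`);
    });
    receiver.bind(41234); // illustrative port

    // sender.js (run as a separate process on the same machine)
    // const dgram = require('dgram');
    // const { performance } = require('perf_hooks');
    // const sender = dgram.createSocket('udp4');
    // setInterval(() => {
    //   const sentAt = performance.timeOrigin + performance.now();
    //   sender.send(Buffer.from(String(sentAt)), 41234, '127.0.0.1');
    // }, 1); // ~1000 packets/s; vary the rate and payload size to approximate the flows above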

Resources