I'm attempting to make my first multiplayer game (I'm doing this in Ruby with Gosu) and I'm wondering what information to send to the server and how many, if any, of the calculations should be done on the server.
Should the client be used simply for input gathering and drawing while leaving the server to compute everything else? Or should it be more evenly distributed than that?
I'm going to answer my own question with some more experience under my belt for the sake of anyone who might be interested or in need of an answer.
It will depend on what you're doing, but for most games, best practice is to have the client gather inputs and send them to the server, which does all the required calculations. This makes it much harder for players to cheat with tools such as Cheat Engine, because the only values they could change would be local variables, which have no bearing on the actual game state.
However, when sending game state from the server back to the client, be careful not to send too much, as it can create a lot of network overhead. Keep the data you transfer to the bare minimum needed. On that same note, though, don't be afraid of adding data to your packets; just make sure you're being efficient.
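As a rough sketch of what I mean (the method names and packet fields here are invented for illustration, not part of Gosu or any particular library):

```ruby
require "json"

# Hypothetical client tick: gather inputs only and send them to the server.
def client_input_packet(keys_down)
  { type: "input", keys: keys_down, sent_at: Time.now.to_f }.to_json
end

# Hypothetical server side: the server owns all game state and applies the
# inputs itself, so a modified client can't simply set its own position.
def apply_input(state, packet)
  input = JSON.parse(packet)
  state[:x] += 1 if input["keys"].include?("right")
  state[:x] -= 1 if input["keys"].include?("left")
  state
end

state  = { x: 0 }
packet = client_input_packet(["right"])
state  = apply_input(state, packet)
# state[:x] was computed on the server, not the client
```

The point is that the client never tells the server "I am at x = 57"; it only says which keys were down, and the server works out the rest.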
Good luck with your projects everyone, and feel free to add to or debate my answer if something isn't up to scratch.
Let's say I make a simple Snake game, and on top of that add a high-score function.
What would be the best way to make sure that players can't cheat their scores? The most straightforward way I can think of is to validate everything, or at least simulate the game on the server: the client says "pressed right", the server moves the snake right, and so on.
Is this sort of very server-heavy approach the only way? Or am I missing something?
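To make the idea concrete, the kind of server-side replay I'm imagining would look something like this (the grid, movement, and scoring rules here are invented purely for illustration):

```ruby
# The client sends its input stream; the server replays it and computes the
# score itself, so a client-claimed score is never trusted.
MOVES = { "up" => [0, -1], "down" => [0, 1],
          "left" => [-1, 0], "right" => [1, 0] }

def replay_score(inputs, food_cells)
  x, y, score = 0, 0, 0
  food = food_cells.dup
  inputs.each do |move|
    dx, dy = MOVES.fetch(move)   # raises on an unknown (tampered) input
    x += dx
    y += dy
    score += 1 if food.delete([x, y])  # ate a food pellet at this cell
  end
  score
end

# The server compares its own replayed result against the client's claim:
replay_score(%w[right right down], [[2, 0], [2, 1]]) # => 2
```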
Not sure how many games use a shared client/server model, but it's probably a fair share of action games these days. Basically the server and client share most of the codebase and both simulate the game simultaneously. (It helps if your methods are all deterministic, though that doesn't matter for purely graphical effects such as sparks and explosions, which don't affect the core gameplay.)
The server sends updates to the client to tell it what's actually happening and the client corrects for errors. This is why in some action games when you have a network spike where you drop some packets, you can carry on running around shooting at people, and then suddenly the server re-establishes communications and puts you back where you were 5 seconds ago!
The server controls all important interactions such as loss of health, death, etc., so that you can't boost your health points by a billion whenever you feel like it.
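A bare-bones sketch of that prediction-and-correction idea (positions reduced to one dimension, all names invented for illustration):

```ruby
# Client-side prediction: the client advances its own position from the
# inputs it has already applied locally, without waiting for the server.
def predict(position, input_history)
  input_history.reduce(position) { |pos, dx| pos + dx }
end

# Reconciliation: when an authoritative server update arrives, keep the
# prediction if it agrees (within a tolerance), otherwise snap to the
# server's position -- the "teleport back 5 seconds" effect after a spike.
def reconcile(predicted, server_position, tolerance = 0.01)
  (predicted - server_position).abs <= tolerance ? predicted : server_position
end

predicted = predict(0.0, [1.0, 1.0, 1.0])  # client thinks it is at 3.0
reconcile(predicted, 3.0)                  # server agrees: stay at 3.0
reconcile(predicted, 0.5)                  # packets dropped: snap back to 0.5
```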
There's a good article here:
http://gamelix.com/resources/Unity3D%20Forum/Interpolation%20Explained.pdf
Which explains the old "server is boss" model and newer shared programming models with client-side prediction etc. It's a fair few years old but most of it still applies as far as I know.
I've got a Rails app where users can send messages to other users. The problem is, it's the type of site that draws many spammers who send bogus messages.
I'm already aware of a couple spam services like Akismet (via rakismet) and Defensio (via defender). The problem with these is that it looks like they don't take into account messages the user has already sent. The type of spam I'm seeing on my site is where the user sends the same (or very similar) messages to many other users. As such, I'd like to be able to compare to at least a handful of past messages to ensure they're different enough to not be considered spam.
So far, the best thing I've come across is the Text::Levenshtein distance implementation, which calculates the number of differences between two strings. I suppose I could divide the number of differences by the string length, and if the result is above a certain threshold, the message isn't considered spam.
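For example, a plain-Ruby sketch of that idea (the distance function reimplements what the Text::Levenshtein gem provides, just so the snippet is self-contained; the 0.2 threshold is an arbitrary placeholder, not a recommendation):

```ruby
# Single-row dynamic-programming Levenshtein distance.
def levenshtein(a, b)
  row = (0..b.length).to_a
  a.each_char.with_index(1) do |ca, i|
    prev = row[0]
    row[0] = i
    b.each_char.with_index(1) do |cb, j|
      cur = row[j]
      row[j] = [row[j] + 1, row[j - 1] + 1, prev + (ca == cb ? 0 : 1)].min
      prev = cur
    end
  end
  row[b.length]
end

# Normalized difference: 0.0 means identical, 1.0 means totally different.
def difference_ratio(a, b)
  return 0.0 if a.empty? && b.empty?
  levenshtein(a, b).to_f / [a.length, b.length].max
end

# A message is suspicious if it's too close to any recent past message.
def similar_to_past?(message, past_messages, threshold = 0.2)
  past_messages.any? { |m| difference_ratio(message, m) < threshold }
end
```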
One other thing I've come across is Classifier::Bayes, which makes a best guess as to what category something falls into. Still pondering on this one.
I feel like I might just be looking in the wrong place, and maybe there's already a better solution for something like this out there. Perhaps I'm searching for the wrong words to find something a little more useful.
Don't try to roll your own solution for this; it's much more complex than you would expect. It is in fact one of those things, like encryption, where it is a much better idea to farm it out to someone/something that is really good at it. Here is some background for you.
Levenshtein distance is certainly a good thing to be aware of (you never know when a similarity metric will come in handy), but it is not the right thing to use for this particular problem.
A Bayesian classifier is much closer to what you're after. In fact, spam detection is pretty much the canonical example of where a naive Bayesian classifier can do a tremendous job. Having said that, you'd have to find a large collection of data (messages) that has been classified as spam and non-spam and that's similar to the types of messages you get on your site. You would then need to train your classifier, measure its performance, tweak it, make sure you don't overfit it, etc. While Classifier::Bayes is a decent basic implementation, it will not give you a lot of support for this. In fact, Ruby does suffer from a lack of good natural language processing libraries; there is nothing in Ruby to compare to Python's NLTK.
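To make that concrete, here's a toy naive Bayes classifier along the lines of what Classifier::Bayes does internally (word counts with Laplace smoothing; the training data below is obviously far too small to mean anything, it's purely an illustration):

```ruby
class ToyBayes
  def initialize
    @word_counts = Hash.new { |h, k| h[k] = Hash.new(0) }
    @doc_counts  = Hash.new(0)
  end

  def train(category, text)
    @doc_counts[category] += 1
    text.downcase.scan(/\w+/).each { |w| @word_counts[category][w] += 1 }
  end

  # Pick the category with the highest log-probability; Laplace (+1)
  # smoothing keeps unseen words from zeroing out a category.
  def classify(text)
    words = text.downcase.scan(/\w+/)
    total_docs = @doc_counts.values.sum.to_f
    vocab = @word_counts.values.flat_map(&:keys).uniq.size
    @doc_counts.keys.max_by do |cat|
      total = @word_counts[cat].values.sum
      score = Math.log(@doc_counts[cat] / total_docs)
      words.each do |w|
        score += Math.log((@word_counts[cat][w] + 1.0) / (total + vocab))
      end
      score
    end
  end
end

bayes = ToyBayes.new
bayes.train(:spam, "buy cheap pills now")
bayes.train(:spam, "cheap pills buy buy")
bayes.train(:ham,  "meeting tomorrow about the project")
bayes.classify("cheap pills") # most likely :spam
```

A real deployment needs a large labeled corpus, a held-out test set, and ongoing retraining, which is exactly the support a hosted service already has.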
Having said all of that, services like Akismet will certainly have a Bayesian classifier as one of the tools they use to determine whether what you send them is spam. This classifier will likely be much more sophisticated than anything you could build yourself, if for no other reason than that they have access to so much more data. They likely also have other types of classifiers/algorithms in use; this is their core business, after all.
Long story short, if I were you I would give something like Akismet another look. If you build a facility into your site where you or your users can flag messages as spam (for example via rakismet's spam! method), you'll be able to send this data to Akismet, and the service should learn pretty quickly that a particular kind of message is spammy. So if your users are sending many similar spammy messages, even if Akismet doesn't pick this up straight away, after you flag a couple of them the rest should be caught automatically. If I were you I would concentrate my efforts on experimenting in this direction rather than trying to roll my own solution.
I am currently integrating a server functionality into a software that runs a complicated measuring system.
The client will be a software from another company that will periodically ask my software for the current state of the system.
Now my question is: What is the best way to design the protocol to provide this state information? There are many different states that have to be transmitted.
I have seen solutions where they define different state flags and then transfer only, for example, a 32-bit number where each bit stands for a different state.
Example:
Bit 0 - System Is online
Bit 1 - Measurement in Progress
Bit 2 - Temperature stabilized
... and so on.
This solution produces very little traffic, though it seems very inflexible to me and also very hard to debug.
The other way I think it could be done is to transfer each state preceded by the name of the state:
Example:
#SystemOnline#1#MeasurementInProgress#0#TemperatureStabilized#0#.....
This solution produces a lot more traffic, but it appears a lot more flexible because the order in which each state is transferred is irrelevant. It should also be a lot easier to debug.
Does anybody know from experience a good way to solve this problem, or a good source where I can find best practices? I just want to avoid reinventing the wheel.
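To illustrate the two options side by side (bit positions and field names follow the examples above; everything else is invented for the sketch):

```ruby
require "json"

# Compact form: one integer, each bit a flag.
ONLINE      = 1 << 0   # Bit 0 - System is online
MEASURING   = 1 << 1   # Bit 1 - Measurement in progress
TEMP_STABLE = 1 << 2   # Bit 2 - Temperature stabilized

def encode_bits(online:, measuring:, temp_stable:)
  (online ? ONLINE : 0) |
    (measuring ? MEASURING : 0) |
    (temp_stable ? TEMP_STABLE : 0)
end

# Self-describing form: named fields, order-independent, easy to debug.
def encode_named(online:, measuring:, temp_stable:)
  { "SystemOnline"          => online,
    "MeasurementInProgress" => measuring,
    "TemperatureStabilized" => temp_stable }.to_json
end

bits = encode_bits(online: true, measuring: false, temp_stable: true)
bits & ONLINE != 0   # decode: system is online
encode_named(online: true, measuring: false, temp_stable: true)
```

Note that the named form also lets you add new states later without breaking old clients, which is much harder with fixed bit positions.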
Once you've made a network request to a remote system, waited for the response, and received and decoded the response, it hardly matters whether the response is 32 bits or 32K. And how many times a second will you be generating this traffic? If less than 1, it matters even less. So use whatever is easiest to implement and most natural for the client, be it a string, or XML.
A friend and I were having a discussion about how a FPS server updates the clients connected to it. We watched a video of a guy cheating in Battlefield: Bad Company 2 and saw how it highlighted the position of enemies on the screen and it got us thinking.
His contention was that the server only updates the client with information that is immediately relevant to the client. I.e. the server won't send information about enemy players if they are too far away from the client or out of the client's line of sight for reasons of efficiency. He was unsure though - he brought up the example of someone hiding behind a rock, not able to see anyone. If the player were suddenly to pop up where he had three players in his line of sight, there would be a 50ms delay before they were rendered on his screen while the server transmitted the necessary information.
My contention was the opposite: that the server sends the client all the information about every player and lets the client sort out what is allowed and what isn't. I figured it would actually be less expensive computationally for the server to just send everything to the client and let the client do the heavy lifting, so to speak. I also figured this is how cheat programs work - they intercept the server packets, get the location of enemies, then show them on the client's view.
So the question: What are some general policies or strategies a modern first person shooter server employs to keep its clients updated?
It's a compromise between your position and your friend's position, and each game will make a slightly different decision to achieve its desired trade-off. The server may try not to send more information than it needs to, e.g. performing the distance check, but it will inevitably send some information that can be exploited, such as the position of an enemy behind a rock, both because it is too expensive for the server to calculate exact line of sight every time and because of the latency issue you mention.
Typically you'll find that FPS games do tend to 'leak' more information than others because they are more concerned with a smooth game experience that requires a faster and more regular rate of updates. Additionally, unlike MMOs, an FPS player is usually at liberty to move to a different server if they're finding their game ruined by cheats.
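A minimal sketch of that kind of interest management, i.e. a server-side distance check before building a client's update (the radius and field names are illustrative, not from any particular engine):

```ruby
VIEW_RADIUS = 100.0  # arbitrary cutoff for this illustration

# The server filters the entity list per client, so players far outside
# the radius never appear in that client's packets at all.
def visible_entities(player, entities, radius = VIEW_RADIUS)
  entities.select do |e|
    Math.hypot(e[:x] - player[:x], e[:y] - player[:y]) <= radius
  end
end

player  = { x: 0.0, y: 0.0 }
enemies = [{ id: 1, x: 50.0, y: 0.0 }, { id: 2, x: 500.0, y: 0.0 }]
visible_entities(player, enemies).map { |e| e[:id] } # => [1]
```

Exact occlusion (the rock example) would need per-pair line-of-sight tests on top of this, which is precisely the part most servers skip for cost reasons.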
Some additional reading:
Multiplayer and Network Programming Forum FAQ (Gamedev.net)
Networking for Game Programmers (GafferOnGames)
Source Multiplayer Networking (Valvesoftware.com)
The general policy should be not to trust the clients, in the sense that the developer should assume that anyone is able to rewrite the client from scratch.
Having said that, I think it's hard to avoid sending this kind of information to the client (and prevent this kind of cheating). Even if there is no line of sight, you may still need to send positions (indirectly) since clients may want to utilize some surround-sound system which requires 3D locations of sound-sources etc.
I'm biased towards writing fool-proof applications. For example, with a PHP site, I validate all inputs on the client side using JS, then validate again on the server side. On both sides I check for emptiness and for patterns (email, phone, URL, number, etc.). Then I strip malicious tags or characters and trim the input (server-side). Later I convert the input into the desired formats/data types (string, int, float, etc.). If the library is meant for server-side use only, I even give developers a chance at graceful degradation: I tolerate the worst inputs and normalize them to acceptable ones (I have a predefined set of acceptable values).
Now I'm rereading a library that I wrote one and a half years ago, and I'm wondering whether developers are really so evil or careless that I need to do this much graceful degradation, finding every possible chance to make them right even when they gave crappy input, which seriously harms performance. Or should I do minimal checking and expect developers to be able and willing to give proper input? I have no hope for end users, but should I trust developers more and give them an application/library with better performance?
Common policy is to validate on the server anything sent from the client because you can't be totally sure it really was your client that sent it. You don't want to "trust developers more" and in the process find that you've "trusted hackers of your site more".
Fixing invalid input automatically can be as much a curse as a blessing: you've essentially committed to accepting the invalid input as a valid part of your protocol (i.e., if a future version makes a change that breaks the invalid input you were correcting, it is no longer backwards compatible with the client code that has been written). In extremis, you might paint yourself into a corner that way. Also, invalid calls tend to propagate to new code: people often copy and paste example code and then modify it to meet their needs. If they've copied bad code that you've been silently correcting at the server, you might find you start getting proportionally more and more bad data coming in, as well as confusing new programmers who think "that just doesn't look like it should be right, but it's the example everyone is using -- maybe I don't understand this after all".
Never expect diligence from developers. Always validate, if you can, any input that comes into your code, especially if it comes across a network.
End users (whether they're programmers using your tool, or non-programmers using your application) don't have to be stupid or evil to type the wrong thing in. As programmers we all too often make wrong assumptions about what's obvious for them.
That's the first thing, which justifies comprehensive validation all on its own. But validation isn't the same as guessing what they meant from what they typed, and inferring correct input from incorrect - unless the inference rules are also well known to the users (like Word's auto-correct, for instance).
But what is this performance you seek? There's no bit of client-side (or server-side, for that matter) validation that takes longer to run than the second or so that is an acceptable response time.
Validate, and ensure it doesn't break as the first priority. Then worry about making it clever enough to know (reliably) what they meant. After that, worry about how fast it is. In the real world, syntax validation doesn't make a measurable difference to anything where user input takes most of the total time.
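A small sketch of that priority order, i.e. validate strictly and apply only fixes that are safe and documented, rather than guessing what the caller meant (the pattern and messages here are illustrative):

```ruby
# Simplified email shape check -- illustrative, not RFC-complete.
EMAIL_RE = /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/

def validate_email!(input)
  raise ArgumentError, "email must be a String" unless input.is_a?(String)
  email = input.strip  # trimming whitespace: a safe, well-known correction
  unless email.match?(EMAIL_RE)
    # Reject with a clear error instead of silently "repairing" the value.
    raise ArgumentError, "invalid email: #{email.inspect}"
  end
  email
end

validate_email!("  user@example.com ")  # => "user@example.com"
# validate_email!("not-an-email")       # raises ArgumentError
```

The trim is the kind of inference rule that's fine because users expect it; anything beyond that should fail loudly so callers fix their input instead of depending on your guesses.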
Microsoft made the mistake of trusting programmers to do the right thing back in the days of Windows 3.1 and to a lesser extent Windows 95. You need only read a few posts from Raymond Chen to see where that road ultimately leads.
(P.S. This is not a dig against Microsoft - it's a statement of fact about how programmers abused the more liberal Win16, either deliberately or through ignorance.)
I think you are right to be biased toward fool-proof applications. I would not assume that it degrades performance enough to be of much concern. Rather, I would address performance concerns separately, starting with profiling or, my favorite method, stackshots. There must be a way to get those in PHP.