I was wondering how the Half-Life 2 multiplayer protocol works in mods like Counter-Strike: Source or Day of Defeat: Source. I believe that they use some kind of obfuscation and a proprietary compression algorithm. I would like to know how the different kinds of messages are encoded in a packet.
Half-Life 2, Counter-Strike: Source, etc. all use Valve's Source engine. Valve has a developer wiki which covers a lot of this stuff (it's pretty cool, check it out!)...
These articles might interest you:
Latency Compensating Methods in Client/Server In-game Protocol, Design and Optimization
Source Multiplayer Networking
You should check out Luigi Auriemma's papers on Half-Life. You'll find a packet decoder and some disassembled algorithms there, too.
Reverse engineering information on Half-Life 2 may be hard to come by, because of its relevance for cheating. I'd guess boards like mpcforum are your best bet.
This is a really complicated question, my suggestion would be to look at some of the open source network game engines:
http://www.hawksoft.com/hawknl/
http://www.zoidcom.com/
http://sourceforge.net/projects/opentnl
http://www.gillius.org/gne/
You could also look at the source code for the Quake series, upon which the original Half-Life engine is based.
Though details might differ, the general framework is pretty old. Here's a quick overview:
In early FPS games such as Doom and Quake, the player's position was updated only on the server's response to your move command. That is, you pressed the move-forward button, the client communicated that to the server, the server updated your position in its memory and then relayed a new game state to your client with your new position. This led to very laggy play: shooting, and even moving in narrow corridors, was a game of predicting lag.
Newer games let the client handle the player's shooting and movement by itself. Though this allowed lag-free movement and firing, it opened up more possibilities for cheating by hacking the client code. Now every player moves and fires independently on their own computer and communicates to the server what they have done. This only breaks down when two players bump into one another or try to grab a power-up at the same time.
Now the server has this stream of client state coming from each player and has to sync it all into a coherent game. The trick is to measure each player's latency. The ultimate goal is to be able to fire a very low-latency weapon (such as a sniper rifle or railgun) at an enemy moving sideways and have it hit correctly. If the latency of each player is known, then suppose player A (latency 50ms) fires a gun at B (latency 60ms): to make a hit, the shot has to hit B where B was 60ms ago, fired from where A was 50ms ago.
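A minimal sketch of that rewind idea (usually called lag compensation), assuming the server keeps a short position history per player. The types and numbers here are hypothetical, and a real engine would interpolate between snapshots rather than pick the nearest one:

```cpp
#include <cstdint>
#include <deque>

// Hypothetical types, for illustration only.
struct Vec3 { float x, y, z; };

struct Snapshot {
    uint64_t serverTimeMs;  // when the server recorded this position
    Vec3     position;
};

class PlayerHistory {
    std::deque<Snapshot> snapshots;  // oldest at the front, newest at the back
public:
    void record(uint64_t now, const Vec3& pos) {
        snapshots.push_back({now, pos});
        // Keep roughly one second of history.
        while (!snapshots.empty() && now - snapshots.front().serverTimeMs > 1000)
            snapshots.pop_front();
    }

    // Where was this player at the given (past) server time?
    Vec3 rewind(uint64_t pastTimeMs) const {
        if (snapshots.empty()) return {};
        for (auto it = snapshots.rbegin(); it != snapshots.rend(); ++it)
            if (it->serverTimeMs <= pastTimeMs)
                return it->position;  // a real engine would interpolate here
        return snapshots.front().position;  // older than our whole buffer
    }
};

// When A's "fire" command arrives, test the shot against where the target
// was when A actually pulled the trigger: the current time minus A's latency.
Vec3 targetPositionForShot(const PlayerHistory& target,
                           uint64_t serverNowMs, uint64_t shooterLatencyMs) {
    return target.rewind(serverNowMs - shooterLatencyMs);
}
```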
That's a very rough overview but should give you the general idea.
I suggest that you look into the Quake 1-3 engines. They are available with source code. Half-Life's protocol might be a bit different, but most likely close enough.
Let's say I make a simple Snake game, and on top of that add a high-score function.
What would be the best way to make sure that players can't cheat their scores? The most straightforward way I can think of is, well, just to validate everything, or at least simulate the game on the server. Pressed right? Well, your snake moves right, and so on.
Is this sort of very server-heavy approach the only way, or am I missing something?
I'm not sure how many games use a shared client/server model, but it's probably a fair share of action games these days. Basically, the server and client share most of the codebase and both simulate the game simultaneously. (It helps if your methods are all deterministic, though that doesn't matter for graphical effects such as sparks and explosions, which don't affect the core gameplay.)
The server sends updates to the client to tell it what's actually happening and the client corrects for errors. This is why in some action games when you have a network spike where you drop some packets, you can carry on running around shooting at people, and then suddenly the server re-establishes communications and puts you back where you were 5 seconds ago!
The server controls all important interactions such as loss of health, death, etc., so that you can't boost your health points by a billion whenever you feel like it.
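To tie this back to the snake question: here's a minimal sketch of that server-side validation, assuming the game rules are fully deterministic (all names are hypothetical):

```cpp
#include <vector>

enum class Input { Up, Down, Left, Right };

struct GameState {
    int x = 0, y = 0;
    int score = 0;
    bool alive = true;
};

// The same deterministic step function is shared by client and server.
void step(GameState& s, Input in) {
    switch (in) {
        case Input::Up:    s.y -= 1; break;
        case Input::Down:  s.y += 1; break;
        case Input::Left:  s.x -= 1; break;
        case Input::Right: s.x += 1; break;
    }
    // ... collision checks, food pickup, score increments go here ...
}

// Server-side: never trust a submitted score; recompute it from the inputs.
int validatedScore(const std::vector<Input>& inputs) {
    GameState s;
    for (Input in : inputs) {
        if (!s.alive) break;
        step(s, in);
    }
    return s.score;
}
```

Anything random, such as food placement, has to be driven by a server-chosen seed so the replay matches what the player actually saw.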
There's a good article here:
http://gamelix.com/resources/Unity3D%20Forum/Interpolation%20Explained.pdf
It explains the old "server is boss" model and the newer shared programming models with client-side prediction etc. It's a fair few years old, but most of it still applies as far as I know.
How can we access the point cloud in the Leap Motion API? One feature that led me to purchase it was the point cloud demo from their promo video, but I can't seem to locate documentation regarding it and user replies on the forums seem mixed. Am I just missing something?
I'm looking to use the Leap Motion as a sort of cheap 3D scanner.
That demo was clearly a mock-up which simulated a 3-D model of the human hand, not actual point cloud data. You can tell by the fact that points were displayed which could not possibly have been read by the sensor, due to obstruction.
orion78fr points to one forum post on this, but the transcript of an interview by the founders provides more information direct from the source:
Can you please allow access to cloud points in SDK?
David: So I think sometimes people have a misperception as to really how things work in our hardware. It's very different from other things like the Kinect, and in normal device operation we have very different priorities than most other technologies. Our priority is precision, small movements, very low latency, very low CPU usage - so in order to do that we will often be making sacrifices that make what the device is doing completely not applicable to what I think you're getting at, which is 3D scanning.

What we're working on are sort of alternative device modes that will let you use it for those sorts of purposes, but that's not what it was originally built for. You know, it's our goal to let it be able to do those things, and the hardware can do many things. But our priority right now is of course human computer interaction, which we think is really the missing component in technology, and that's our core passion.

Michael: We really believe in trying to squeeze every ounce of optimization and performance out of the devices for the purpose they were built. So in this case the Leap today is intended to be a great human computer interface. And we have made thousands of little optimizations along the way to make it better, that might sacrifice things in the process that might be useful for things like 3D scanning objects. But those are intentional decisions, and they don't mean that we think 3D scanning isn't exciting and isn't a good use case. There will be other things we build as a company in the future, and other devices that might be able to do both, or maybe there will be two different devices: one that is fully optimized for 3D scanning, and one that continues to be optimized and as great as it can be at tracking fingers and hands.

If we haven't done a good job communicating that the device isn't about 3D scanning or isn't going to be able to 3D scan, that's unfortunate and it's a mistake on our part - but that's something that we've had to sacrifice. The good news is that those sacrifices have made the main device really exceptional at tracking hands and fingers.
I have developed with the Leap Motion Controller as well as several other 3-D scanning systems, and from what I've seen I seriously doubt we're ever going to get point cloud data out of the currently shipping hardware. If we do, the fidelity will be far below what we see for gross finger and hand tracking from that device.
There are some low-cost alternatives for 3-D scanning that have started to emerge. SoftKinetic has their DepthSense 325 camera for $250 (which is effectively the same as the Creative Gesture Camera that is only $150 right now). The DS 325 is a time-of-flight IR camera that gives you a 320x240 point cloud map of the 3-D space in front of it. In my tests, it worked well with opaque materials, but anything with a little gloss or shininess gave it trouble.
The PrimeSense Carmine 1.09 ($200) uses structured light to get point cloud data in front of it, as an advancement of the technology they supplied for the original Kinect. It has a lower effective spatial resolution than the SoftKinetic cameras, but it seems to produce less depth noise and to work on a wider variety of materials.
The DUO was also a promising project, but unfortunately its Kickstarter campaign failed. It used stereoscopic imaging of an IR source to return a point cloud from a couple of PS3 Eye cameras. They may restart the project at some point in the future.
While the Leap may not do what you want, it looks like more and more devices are coming out in the consumer price range to enable 3-D scanning.
See this link
It says that yes, the Leap Motion can theoretically handle a point cloud, and it was temporarily part of the visualiser during the beta; and no, you can't access it using the Leap Motion APIs right now.
It may appear in the future, but it's not a priority for the Leap Motion team.
As of Leap Motion SDK 2.x, you can at least access the stereo camera images! In my experience this is a convenient solution for many of the tasks that point cloud data was being asked for, which is why I mention it here. It does not expose the point cloud the driver generates internally to extract the pointer metadata, but you now have the capability to generate your own point cloud from the images, which is why I think it is strongly related to the question.
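For example, image access in the 2.x C++ API looks roughly like this (from memory, so treat the exact names as approximate):

```cpp
#include <iostream>
#include "Leap.h"  // Leap Motion SDK 2.x C++ header

int main() {
    Leap::Controller controller;
    // Raw image access must be opted into via a policy flag.
    controller.setPolicy(Leap::Controller::POLICY_IMAGES);

    // ... wait until the controller is connected and frames arrive ...

    Leap::ImageList images = controller.frame().images();
    if (images.count() >= 2) {
        Leap::Image left = images[0];   // raw IR image from one camera
        std::cout << left.width() << "x" << left.height() << std::endl;
        const unsigned char* pixels = left.data();  // 8-bit brightness values
        // Feed both images into your own stereo-matching step to
        // triangulate a point cloud (the SDK won't do this for you).
        (void)pixels;
    }
    return 0;
}
```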
Currently there is no access to the point cloud in the public API. But I don't think this video is a mock-up, so there should be a possibility:
http://www.youtube.com/watch?v=MYgsAMKLu7s#t=40s
Roadtovr recently reviewed the Nimble Sense Kickstarter, which uses a point cloud.
It’s the same technology that the Kinect 2 uses, and it’s supposed to have some advantages over the Leap Motion.
Because it's a depth-sensing camera, you can point the camera top-down like the Touch+, although their product will not ship until next year.
I am wondering if there exists a site where people can upload their AIs to compete against each other in different board games: Chess, Gomoku, etc.
The site would accept source code of programs (written in some common language), compile it and run the programs against each other. All the programs would have to use some common communication technique.
My motivation is that I have seen many different Gomoku programs on Stack Overflow, and I would like to test the different algorithms against each other. But each one uses a different language and interface, and I have no way to make them play against each other.
A common dedicated server that would play the AIs against each other and keep a global scoreboard would be tons of fun :)
Does such server exist?
The best I could find is http://wawrzak.com/megagomoku/, but it is still something that I have to download and run on my own computer - I would prefer an existing site where anyone can contribute.
EDIT: Also interesting is http://gomocup.wz.cz/gomoku/download.php . It is a Gomoku contest held each year, and it features a common interface for communication and a lot of existing Gomoku programs. I wish it were run more often than once a year, though :) The immediate feedback of uploading your program and seeing the results would be very good.
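The common interface there boils down to a line-based protocol over stdin/stdout. A minimal sketch of that style, with hypothetical commands (loosely in the spirit of the Gomocup protocol, which differs in its details):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical line protocol: "START <size>", "TURN <x> <y>", "END".
// The engine answers each TURN with its own move as "x y".
int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        std::istringstream cmd(line);
        std::string op;
        cmd >> op;
        if (op == "START") {
            int size; cmd >> size;
            // ... allocate the board ...
            std::cout << "OK\n" << std::flush;
        } else if (op == "TURN") {
            int x, y; cmd >> x >> y;
            // ... record the opponent's move, run the search ...
            std::cout << 0 << " " << 0 << "\n" << std::flush;  // our move
        } else if (op == "END") {
            break;
        }
    }
}
```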
There is a yearly competition for things like this! This field of AI is called general game playing (GGP), a relatively new area of study started by Prof. Michael Genesereth of Stanford University. Each year at AAAI there is a competition to determine the best GGP program after a variety of games have been played.
Outside of the competition, there are several live servers where you can play against research universities and amateur hobbyists. The website ggp.org is relatively new but open and distributed, while the Technical University of Dresden maintains a more active server.
To play there, you will need to build a player that conforms to the standard GGP protocol. For help getting started, there is a project called ggp-base, maintained by the current GGP world champion, which provides a simple infrastructure for creating GGP players. It might be a great place to start.
Hope this helps!
A common server with a mostly common interface (aside from necessary game-specific differences) would be nice. Uploading your source code and having the server compile and run it has the nice effect of completely eliminating cheating (entering human moves as if a bot came up with them). But it is only practical for games with very low time limits because of the high CPU requirements -- each game engine will pin the CPU for the majority of its allotted time. The lower the time limits, the more games you can run per day per CPU core.
But still, I like the idea. Even with low time limits, it would be fun. Hmm, maybe I'll start this project...
A friend and I were having a discussion about how a FPS server updates the clients connected to it. We watched a video of a guy cheating in Battlefield: Bad Company 2 and saw how it highlighted the position of enemies on the screen and it got us thinking.
His contention was that the server only updates the client with information that is immediately relevant to it: the server won't send information about enemy players if they are too far away from the client or out of the client's line of sight, for reasons of efficiency. He was unsure, though - he brought up the example of someone hiding behind a rock, unable to see anyone. If that player were suddenly to pop up where he had three players in his line of sight, there would be a 50ms delay before they were rendered on his screen while the server transmitted the necessary information.
My contention was the opposite: that the server sends the client all the information about every player and lets the client sort out what is allowed and what isn't. I figured it would actually be less expensive computationally for the server to just send everything to the client and let the client do the heavy lifting, so to speak. I also figured this is how cheat programs work - they intercept the server packets, get the location of enemies, then show them on the client's view.
So the question: What are some general policies or strategies a modern first person shooter server employs to keep its clients updated?
It's a compromise between your position and your friend's, and each game will make a slightly different decision to achieve its desired trade-off. The server may try not to send more information than it needs to, e.g. by performing the distance check, but it will inevitably send some information that can be exploited, such as the position of an enemy behind a rock, both because it is too expensive for the server to calculate exact line of sight every time and because of the latency issue you mention.
Typically you'll find that FPS games tend to 'leak' more information than other genres because they are more concerned with a smooth game experience, which requires a faster and more regular rate of updates. Additionally, unlike in MMOs, an FPS player is usually at liberty to move to a different server if they find their game ruined by cheats.
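As a minimal sketch of the cheap end of that compromise, here is a pure distance check for deciding which entities to replicate (hypothetical types; real engines layer visibility cells and update-rate priorities on top of this):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Player {
    int  id;
    Vec3 pos;
};

// Cheap relevancy test: only replicate entities within a radius.
// A real line-of-sight test would be more precise but costs far more
// server CPU, which is exactly the compromise described above.
bool isRelevant(const Player& viewer, const Player& other, float radius) {
    float dx = viewer.pos.x - other.pos.x;
    float dy = viewer.pos.y - other.pos.y;
    float dz = viewer.pos.z - other.pos.z;
    return dx*dx + dy*dy + dz*dz <= radius*radius;
}

std::vector<int> entitiesToSend(const Player& viewer,
                                const std::vector<Player>& all,
                                float radius) {
    std::vector<int> ids;
    for (const Player& p : all)
        if (p.id != viewer.id && isRelevant(viewer, p, radius))
            ids.push_back(p.id);
    return ids;
}
```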
Some additional reading:
Multiplayer and Network Programming Forum FAQ @ Gamedev.net
Networking for Game Programmers @ GafferOnGames
Source Multiplayer Networking @ Valvesoftware.com
The general policy should be not to trust the clients, in the sense that the developer should assume that anyone is able to rewrite the client from scratch.
Having said that, I think it's hard to avoid sending this kind of information to the client (and to prevent this kind of cheating). Even if there is no line of sight, you may still need to send positions (indirectly), since clients may want to use a surround-sound system that requires the 3D locations of sound sources, etc.
I have been asked to investigate porting Wii games and some (Sony) PSOne games to OpenGL ES (can you guess what platform?).
I have never undertaken a game port like this before (and will be hiring someone to do it) but I'd like to understand the process.
Does the Wii use OpenGL? If not what does it use and how easy is it to port to OpenGL / OpenGL ES?
Are there any resources/books/blogs that will help me in understanding the process?
Will my company have to become an official Wii developer? If so where do I start that process?
Porting from the Wii or the PSOne is a complex and involved task that can be broken down into multiple separate engineering efforts working in parallel to produce a working end product. The best possible thing you can do before moving to the target hardware is to compartmentalize all of the non-portable code while ensuring that the game continues to run as expected. When you commit to moving to the new platform, your effort switches to reimplementing the non-portable compartmentalized parts.
So, to answer your question, yes, you will need to become or work with a Sony and Nintendo licensed developer in order to take this approach. In the case of Sony, I don't even know if they offer a PSOne development program anymore which presents issues. Your Sony account rep can help clarify.
The major subsystems that are likely to be the focus of your porting effort are:
Rendering: Graphics code contains fundamental assumptions about the hardware it runs on in order to perform optimally. API-level compatibility is superficial compatibility and does not get you as much as you may hope. Plan on finding the entry point to the renderer, determining what data you need to render a scene, and rewriting all the render code from there for your target hardware.
Game Saving: Game state serialization and archival will need to be separated out. Older games often fwrite() structs with #pragma pack'ed fields. Is that still going to work for you? (See the serialization sketch after this list.)
Networking: Wii games write to high-level services that are unavailable on your target hardware. At the low level, sockets are still sockets. What network services do your Wii games rely on?
Controls: Given where you are coming from and where you are going, anything short of a full redesign or reimagining of the input scheme will result in poor reviews of the software.
Memory Management: Console games often make fundamental assumptions about the rate at which the system software returns memory from the heap, how much fragmentation it will cause, and the duration the game needs to operate under these conditions. These memory management assumptions are obsolete on the new platform, so it is wise to write your own memory manager that provides a cushion from the operating system. Also, console games compiled for release are stripped of most error handling and don't gracefully handle running out of memory -- just a heads-up.
Content: Your bottleneck will be system memory. Can you fit the necessary assets into memory? With textures, you can reduce mip levels where necessary, and with graphics hardware timing, you can pull in the far clipping plane. With assets resident in memory, you may need a technical artist to go through and reduce the face density of your models, or an animation programmer to implement a more size-friendly animation codec. This is very game specific.
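On the game saving point above, the usual fix for fwrite()-ed packed structs is to serialize each field explicitly in a fixed byte order. A minimal sketch with a hypothetical save struct:

```cpp
#include <cstdint>
#include <cstdio>

struct SaveGame {          // hypothetical example
    uint32_t level;
    int32_t  health;
};

// Write one field at a time in little-endian order, instead of
// fwrite(&save, sizeof(save), 1, f), which bakes in the old
// platform's padding, endianness, and type sizes.
static void writeU32LE(std::FILE* f, uint32_t v) {
    unsigned char b[4] = {
        (unsigned char)(v        & 0xff),
        (unsigned char)(v >> 8   & 0xff),
        (unsigned char)(v >> 16  & 0xff),
        (unsigned char)(v >> 24  & 0xff),
    };
    std::fwrite(b, 1, 4, f);
}

void saveGame(std::FILE* f, const SaveGame& s) {
    writeU32LE(f, s.level);
    writeU32LE(f, (uint32_t)s.health);
}
```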
You also run into the standard set of problems with things like bit compatibility (though the Wii and PSOne are both 32-bit), compiler idiosyncrasies, build script incompatibilities and proprietary compiler extensions.
Games are relatively challenging to test. A good rule of thumb is you want to have enough testers on your team to run through the game in a maximum of two days, covering all major aspects of play. In games that take a long time to beat (RPGs with 30+ hours of gameplay), your testing team needs to be quite large to offer full coverage. Because you are just doing a port, you can come up with a testing plan that maximizes coverage of your new code without having a testing team punch every wall in your game to make sure it (still) has clipping. The game shipped once.
Becoming a licensed developer requires you to apply. The turnaround time, from experience, is not good. Generally speaking, priority is given to studios with shipped titles and organized offices with reasonably good security and the ability to buy the (relatively) expensive development kits. You may be better off working with a licensed developer if you do not meet these criteria.
Console and game development is challenging for people already experienced in it. There is no book that covers it all. My recommendation is to attempt to recruit an expert who has experience shipping titles in a position of systems or engine programmer. What types of programmers and skillsets exist in games is a whole different question for Stack, though.
Game consoles don't use OpenGL but their own custom libraries. The main reason is that they are pretty slow and have little RAM, so you need to squeeze out every drop of performance you can get. And that means custom code. Usually, you get a framework with the developer kit which gets you started, and then you build your code from that. Eventually, you'll start replacing parts of the developer kit with your own special code to get all the speed and special effects you need.
There is a reason why PSOne games are so ugly on the PS3 despite the fact that the developers have access to the sources: the revenue just doesn't justify touching the code.
Which is one reason why game development is so expensive: Every game is (more or less) a completely new product. Sometimes, game companies can reuse a bit of code from the last version but more often than not, they have to develop everything again. They also don't talk much with each other.
In recent years, kits have become more complex and powerful and you can get complete game engines (with all kinds of effects and 3D support) but each engine is a completely different kind of beast, so you can't even copy code from engine A to B.
Today, media content (video, audio and render sequences) is so expensive that the actual game engine is often a minor detail, so this isn't going to change any time soon.
Net result: If you want to port a game, write an emulator for the hardware (which is usually pretty simple and allows you to run all kinds of games).
[EDIT] To develop software for the Wii, see here: http://www.warioworld.com/
For a Wii emulator, see http://wiiemulator.net/
I ported a couple of games, when I was a new game programmer, from working with one version of our engine to a newer version (where backwards compatibility was neither ignored nor pursued). Even copying (and possibly renaming) the files and placing them in a home in the new project was a bit of work. Following that, the procedure was:
recompile
fix many of the hundreds of errors [in many places, with the same error occurring over and over again]
"wire up" calls from the new game engine to the appropriate calls in the old code
"wire up" function calls from the old code into the new game engine
deal with other oddities (e.g. in the old game engine, the 2D game would "swizzle" textures itself; in the new version, the engine did it (on specific platforms))
and, while I don't recall this clearly, it was probably mixed in with a bunch of #ifdeffing out portions of code so the thing would actually compile, and possibly creating function stubs to be filled in later.
As I recall, it was three or four days until I had something that compiled. (But, it did help when we ported other games from the old version to the new one!)
The magnitude of the task comes down to what the code you are getting is like. If it has generic 3D calls that you can intercept -- add a thunking layer to -- then you are in business. It depends on the level of abstraction in the code. If it is well-behaved and has things like "RenderModel" and "RenderWorld" calls, you can replace those functions, and even the structures that they work with. If drawing occurs all over the place, and the calls are more like "Draw Polygon", "Draw Line", or "Draw using this highly optimised data structure", then you are likely in for a long slog.
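To illustrate the thunking-layer idea with a toy example (the names here are hypothetical): keep the old high-level call signatures and route them into the new renderer behind an interface.

```cpp
// Hypothetical high-level types the old code base might expose.
struct Model;
struct World;

// The thunking layer: old call shapes, swappable implementations.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void renderModel(const Model& m) = 0;
    virtual void renderWorld(const World& w) = 0;
};

class GLESRenderer : public Renderer {
public:
    void renderModel(const Model& m) override {
        // ... translate the model's data into OpenGL ES draw calls ...
    }
    void renderWorld(const World& w) override {
        // ... likewise for the world geometry ...
    }
};

// The old engine code keeps calling RenderModel(); only the body changed.
static Renderer* g_renderer = nullptr;

void RenderModel(const Model& m) { g_renderer->renderModel(m); }
void RenderWorld(const World& w) { g_renderer->renderWorld(w); }
```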
You shouldn't need a Wii dev kit. Sometimes it is nice to verify that the code you are given does indeed compile in the original environment (and matches the shipping code!), but sometimes you can just take it on faith and make it work in its new environment.
Lastly, I don't think the Wii uses OpenGL, and I really don't know where to point you for further help.
What you may want to do is to start with designing the architecture of the game, write up a detailed specification for what the new game is like.
Once you have this, since you will be rewriting the code, you may find that some of the business logic that doesn't deal with the console can be ported over. But, anything dealing with I/O, user interaction or graphics/sounds will be rewritten, so you might as well do that from scratch.
A specification is very important, to make certain that you know how the current game is working so that the new port will give the same user experience, if that is what is desired.
You may want to keep the same bugs, if that is part of the experience: if I know that on the Wii I can jump down and bounce off the wall to land safely, then not being able to do that in the new version may be bothersome.
Well, porting a PS1 game to an iPhone would be quite a task; the two work in very different ways. I'm sure it's doable, but it will be a LOT of work to replace all the fixed-point maths and the lack of Z-buffer-based rendering with code for a real graphics chip.
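To give a flavour of what replacing the fixed-point maths involves: the PS1 had no FPU, so games stored numbers as scaled integers. A minimal sketch of the common 16.16 format and its float equivalent (the exact format varies by game):

```cpp
#include <cstdint>
#include <cstdio>

// 16.16 fixed point: the low 16 bits are the fraction.
using fixed16 = int32_t;
const fixed16 FIXED_ONE = 1 << 16;

fixed16 fixedMul(fixed16 a, fixed16 b) {
    return (fixed16)(((int64_t)a * b) >> 16);  // widen to avoid overflow
}

float toFloat(fixed16 v) { return (float)v / FIXED_ONE; }

int main() {
    fixed16 half = FIXED_ONE / 2;            // 0.5
    fixed16 quarter = fixedMul(half, half);  // 0.25
    std::printf("%f\n", toFloat(quarter));   // prints 0.250000
}
```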
The Wii would be a lot easier. The Wii API is very similar to OpenGL. However, the Wii has some very nice fixed-function features that just are not available on any other GL-based platform. Should be doable, though...
I'm not really sure I can say anything more than that. Have signed far too many NDAs over the years to be 100% sure of what I can and cannot say ;)
Still if you want to hire someone to do some porting work and are prepared to supply the required hardware then I might be free ;)