I am building a set of programs that consists of multiple clients and a single server.
The clients frequently push small packets of data to the server, which validates the information (returning an error if the data is invalid) and processes it. The processed information may then trigger events that clients are subscribed to, allowing clients to be notified instantly (or as close to instantly as possible), along with a small amount of data.
I have some ideas about how to do this, but I am trying to avoid creating a protocol of my own, mainly because I'm sure it would take forever and I would probably make a few errors. So I was wondering if there are any existing protocols that I could adopt in my system to provide such functionality.
The number of clients will initially be quite small, but will grow over time to potentially include thousands of clients (each with their own subscriptions), and several front-end servers (each one handling a subset of subscriptions) passing the information back and forth with back-end servers for improved capacity.
So, if anyone knows of any existing protocols that meet these requirements and provide this functionality, that would be fantastic.
EDIT
I am currently looking at the XMPP protocol and the JXTA protocol suite (for reference, and to implement in another language). Both seem quite good and provide the necessary connectivity, but I have not yet had the opportunity to test either of them in my environment, or to confirm whether they are even suitable for what I am attempting.
Additionally, some of the network clients will be outside of the local network and operating over a WAN. Security is not so much of an issue, but I need to take into account the increased latency this brings, as well as firewall rules (both local to the connection that is hosting the application and at the ISP) that could be blocking certain ports or transport protocols. (I have read that some ISPs were blocking UDP packets, but I'm not sure how widespread this is. I can send UDP from home, the office, mobile connections, friends' houses, etc. and have yet to experience it myself.)
I'm sorry if the following is not exactly what you're after, but I am slightly confused by your use of the word 'protocol'. I understand a protocol to be a 'communication specification' only, where the implementation is left entirely to you. If that is the case, I always find the following graphic useful: link.
If, on the other hand, you are looking for a solution which allows you to easily implement the networking side of your application, helping to save time, then check out the following network libraries, each of which implements its own custom protocol:
NetworkComms.Net
Lidgren
ZeroMQ
Disclaimer: I'm a developer for NetworkComms.Net
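As a rough illustration of the publish/subscribe pattern the question describes, here is a minimal sketch using ZeroMQ's Python bindings (pyzmq); the port, topic name, and payload are invented for the example, and a real deployment would add validation and error handling:

    # publisher side (the server): after validating/processing a client's data,
    # notify all interested subscribers
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")                       # port number is a placeholder
    pub.send_multipart([b"orders.created", b"small payload here"])

    # subscriber side (a client): receive only the topics it cares about
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://server-host:5556")
    sub.setsockopt(zmq.SUBSCRIBE, b"orders.")      # subscribe to a topic prefix
    topic, payload = sub.recv_multipart()          # blocks until an event arrives

The client-to-server push/validate path would typically use a separate REQ/REP socket pair alongside the PUB/SUB sockets, so errors can be returned to the sender.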
Related
What is the best way for multiple client programs to communicate with a single server program, all running on a single Windows computer? All written in VB6. I'd appreciate recommendations of how you might solve this problem.
NOTE: we are working on a transition to .NET, but have to add a capability to the VB6 version before the .NET version will be ready.
The possibilities include TCP connections, named pipes, shared memory, messages, files, and more.
A client passes the server a string as input, and the server combines it with data known only to the server to generate another string, which is returned to the client. Both strings are only about 100 characters long. The server is contacted only when a new file needs to be opened, so it is a very low volume of communication... probably a flurry of 10 calls within 15 seconds, followed by an hour of idle time.
But it is possible that two clients would choose about the same time to request information. Blocking/locking are certainly acceptable, as the server will be done with each request in well under a second, and several seconds of delay is unimportant to any of the programs.
The server's algorithm is complex, and for several reasons important to the application it should not be replicated in each helper program. That is the reason for needing a server.
Background:
I am adding capability to a large existing legacy program. This single program has several other legacy programs which act as helpers and are run when the user makes certain choices. These programs are started with a shell command, and are not just separate threads. For instance, one helper loads new data from a DVD drive onto the hard drive. Another helper just displays a chart of the current positions of the planets.
This is a LARGE commercial legacy program that happens to be written in VB6. We are working to convert it and all the helper programs to .NET, but must first release a new version under VB6 with this added capability. (Please don't tell me not to use VB6, as we are already moving elsewhere.)
We need a temporary VB6 solution.
VB6 does TCP and UDP extremely well via the standard Winsock Control component included in the Pro and Enterprise Editions, although a lot of shade-tree coders do seem to struggle with it. This is probably the most obvious route, since the only other native IPC in VB6 would be COM/DCOM and DDE; however, MSMQ provides excellent support for VB6 as well.
The downside of IP-based protocols is their limited namespace and the resulting high probability of collisions (64K port numbers, many set aside for standard applications, ephemeral port ranges, etc.). They're also somewhat "heavyweight", but considering the vast resources of even the oldest PCs still in service and your light traffic requirements, you can ignore that when deciding.
Another option you've considered is Named Pipes.
This offers a number of advantages in your situation. For one thing, the namespace is much larger, requiring only a unique name, which in the post-Win9x era can be up to 256 characters long, making uniqueness fairly easy to achieve. For another, as long as your firewalls permit "File and Print Sharing" you're all set on that front.
Also, for your application you only seem to require an RPC-style mechanism rather than arbitrary bidirectional streams or messages. TransactNamedPipe() calls in your clients might be ideal. Named Pipes work over a LAN, but within one PC they are quite fast and lightweight.
While VB6 doesn't come with a Named Pipe component such a thing is fairly easy to create as long as extremely high performance isn't required. You can use Timer-based polling in the server instead of trying to implement overlapped I/O to get asynchronicity. I put one together a couple of years ago and have had good luck with this approach.
I published a fairly stable rendition of this a while back at PipeRPC - RPC Over Named Pipes. There is an older and a somewhat newer version there, with examples of use and documentation. As designed, clients make "calls" passing a Byte array of request parameters and receiving back a Byte array of response results. You can also shove Unicode Strings through with no changes, letting the compiler coerce the types.
Just one "drop in" UserControl for both clients and servers.
Looking back at this question:
The server's algorithm is complex, and for several reasons important to the application it should not be replicated in each helper program. That is the reason for needing a server.
If that's really the concern why not just create a shared DLL that all programs use?
For a one-off upgrade release to an existing VB6 application being moved to a newer platform, I would stress keeping the modification as simple and straightforward as possible. As a result, I wouldn't go down any routes involving shared memory or anything relatively unusual.
A few options, none perfectly simple, but at least some ideas:
Expose a COM object in the server code that performs the translation, and can be consumed by the client apps. The clients instantiate the object from the server as an out-of-process object, and let COM handle all the marshalling, etc.
Does the server have any network awareness? VB6 doesn't do sockets/TCP natively very well, but if you've had a reason to add that in, you might be able to leverage it to perform a socket-based connection and data exchange.
The server and client could each poll a common resource folder for the presence of a specific file that constitutes an inbound/outbound request for the translation service you describe. Not very elegant, but it might be the simplest (see the sketch below).
Just a few ideas to give you some things to think about. Hope that's helpful in some way. Good luck!
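To make the third (file-drop) option concrete, here is a very rough Python sketch of the polling idea; the folder layout and file names are invented, and a real version would need locking or atomic renames to avoid reading half-written files:

    # crude file-drop exchange: a client writes request.txt, the server replies with response.txt
    import os, time

    EXCHANGE_DIR = r"C:\SharedExchange"   # placeholder path both programs can see

    def server_poll_loop():
        while True:
            req_path = os.path.join(EXCHANGE_DIR, "request.txt")
            if os.path.exists(req_path):
                with open(req_path) as f:
                    request = f.read()
                result = request.upper()              # stand-in for the real algorithm
                with open(os.path.join(EXCHANGE_DIR, "response.txt"), "w") as f:
                    f.write(result)
                os.remove(req_path)
            time.sleep(0.5)                           # traffic is low volume, so polling is fine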
I'm interested in how you would approach implementing a BitTorrent-like social network. It might have a central server, but it must be able to run in a peer-to-peer manner without communicating with it:
If a whole region's network is disconnected from the internet, it should be able to pass updates from users inside the region to each other
However, if some computer gets the posts from the central server, it should be able to pass them around.
There is some reasonable level of identification; some computers might be disseminating incomplete/incorrect posts or performing DoS attacks. It should be able to describe some information as coming from more trusted computers and some from less trusted.
It should theoretically be able to use any computer as a server, while dynamically optimizing the network so that typically only fast computers with ample internet bandwidth work as seeders.
The network should be able to scale to hundreds of millions of users; however, each particular person is interested in less than a thousand feeds.
It should include some Tor-like privacy features.
Purely theoretical question, though inspired by recent events :) I do hope somebody implements it.
Interesting question. With the use of already existing Tor, P2P, and darknet features, and by using some public/private key infrastructure, you could possibly come up with some great things. It would be nice to see something like this in action. However, I see a major problem: not so much people using it for file sharing, but flooding the network with useless information. I would therefore suggest using a Twitter-like approach where you can ban and subscribe to certain people, and starting with a very reduced set of functions at the beginning.
Incidentally, we programmers could make a good start toward that goal by NOT saving and analyzing too much information about the users, and by using safe ways of storing and accessing user-related data!
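As a tiny illustration of the public/private key idea mentioned above (so peers can decide how much to trust a post based on who signed it), here is a sketch using the PyNaCl library; the post content and key handling are purely illustrative:

    # each user signs their posts; peers verify signatures before relaying or trusting them
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    signing_key = SigningKey.generate()          # the author's private key
    verify_key = signing_key.verify_key          # published alongside the author's feed

    signed_post = signing_key.sign(b"status update: hello from inside the region")

    try:
        original = verify_key.verify(signed_post)   # raises if the post was tampered with
        print("post verified:", original)
    except BadSignatureError:
        print("dropping forged/corrupted post")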
Interesting: the Rendezvous protocol does something similar to this (it grabs "buddies" on the local network).
BitTorrent is a means of transferring static information; it's not intended to have everyone become producers of new content. Also, BitTorrent requires that the producer act as a dedicated seeder until all of the clients are able to grab the information.
Diaspora claims to be one such thing.
What methods exist for capturing and analyzing undocumented client/server communication for the information you want, and then having your program look for this information in real time? For example, programs that look at online game client/server communication, extract information, and use it to do things like show a location on a 3rd-party map, etc.
Wireshark will allow you to inspect communication between the client and server (assuming you're running one of them on your machine). As you want to perform this snooping in your own application, look at WinPcap. Being able to reverse engineer the protocol is a whole other kettle of fish, mind.
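If you end up doing the capture from your own code, a minimal sniffing sketch with the Python library Scapy (which sits on top of libpcap/WinPcap) might look like this; the port filter and parsing are placeholders, since the real work is decoding the game's undocumented protocol:

    # capture UDP game traffic on a guessed port and hand each packet to a parser
    from scapy.all import sniff, UDP, Raw

    def handle_packet(pkt):
        if pkt.haslayer(UDP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            # protocol-specific decoding (positions, chat, etc.) would go here
            print(len(payload), "bytes:", payload[:16].hex())

    sniff(filter="udp and port 27015", prn=handle_packet, store=False)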
In general, Wireshark is an excellent recommendation for traffic/protocol analysis; however, you seem to be looking for something else:
For example, programs that look at online game client/server communication and get information and use it to do things like show location on a 3rd party map, etc.
I assume you are referring to multiplayer games and game servers?
If so, these programs usually use a dedicated service connection to query the corresponding server for positional updates and other meta information on a different port; they don't actually intercept or inspect client/server communications in real time, and they don't really interfere with these updates, either.
So, you'll find that most game servers provide support for a very simple passive connection (i.e. output only), which is merely there for getting certain runtime state and which in turn is often simply polled by a corresponding external script/webpage.
Similarly, there's often also a dedicated administration interface provided on a different port, as well as another one that publishes server statistics, so that these can be easily queried for embedding neat stats in webpages.
Depending on the type of game server, these may offer public/anonymous use, or they may require certain credentials to access such a data port.
More complex systems will also allow you to subscribe only to specific state and updates, so that you can dynamically configure what data you are interested in.
So, even if you had complete documentation on the underlying protocol, you wouldn't really be able to directly inspect client/server communications without being in between those communications, which cannot be achieved easily. In theory, it would basically require a SOCKS proxy server to be set up and used by all clients, so that you can actually inspect the communications going on.
Programs like Wireshark will generally only provide useful information about communications on your own machine/network, and will not provide any information about communications between machines that you do not have access to.
In other words, even if you used Wireshark to a) reverse engineer the protocol, b) come up with a way to inspect the traffic, and c) create a positional map, all this would only work for those communications that you have access to, i.e. those taking place on your own machine/network. So a corresponding online map would only show your own position.
Of course, there's an alternative: you can emulate a client, so that you are provided with server-side updates about other clients; this will mostly have to be in spectator mode.
This in turn means that you are a passive client that just consumes server-side state without providing any, and you can then use all these updates to populate an online map or for whatever else is on your mind.
This will however require your spectator/client to be connected to the server all the time, possibly taking up precious game server slots.
Some game servers provide dedicated spectator modes, so that you can observe the whole game play using a live feed. Most game servers will however automatically kick spectators after a certain idle timeout.
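To illustrate the "dedicated query port" idea described above, here is a hedged sketch of polling a game server's public status interface over UDP; the A2S_INFO request shown is the classic Source engine info query, and note that newer servers may additionally require a challenge round-trip before answering:

    # poll a game server's public query port instead of sniffing game traffic
    import socket

    A2S_INFO = b"\xFF\xFF\xFF\xFFTSource Engine Query\x00"   # classic info request

    def query_server(host, port=27015, timeout=2.0):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(A2S_INFO, (host, port))
            data, _ = s.recvfrom(4096)     # raw reply: server name, map, player counts, ...
            return data

    print(query_server("game.example.com"))   # hostname is a placeholder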
If you want to open a two-way connection between the browser and server, the only choice is to poll (hammer the server), or use comet (crufty and prone to disconnects).
Why don't browsers just let you open up a plain TCP connection? Is there any practical benefit to not having this ability?
HTTP, the underlying protocol, is basically a half-duplex, stateless communication protocol and does not support full-duplex communication. However, with HTML5 WebSockets things are going to change. WebSockets is a new standard being considered as part of the HTML5 specification. Once the specification has been finalized and all the browser vendors have adopted it, you may be able to use WebSockets to establish a dedicated TCP connection through the browser itself.
We must also keep in mind that HTTP was basically designed to deliver documents and share information between geographically distributed teams, and it was not intended to be a communication protocol as such.
Having said that, there are already companies which have built some messaging gateways to enable you to implement full duplex communication.
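For a feel of what the WebSocket programming model looks like outside the browser, here is a small client sketch using the third-party Python `websockets` package; the URL and the application-level messages are placeholders:

    # open a full-duplex connection, send a message, and wait for server pushes
    import asyncio
    import websockets

    async def main():
        async with websockets.connect("ws://example.com:8765/feed") as ws:
            await ws.send("subscribe:quotes")     # application-level protocol is made up
            while True:
                update = await ws.recv()          # the server can push at any time
                print("server pushed:", update)

    asyncio.run(main())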
Given that this functionality is effectively available through Flash, there's no real security rationale; but these days no browser wants to be the first to implement a non-standard extension like that. Moreover, there's no easy way to do threads, which could make using a socket rather awkward.
Over the years, many aspects or elements of the web have been hijacked in order to deliver richer experiences. Comet is but one example, where long-lived connections were exploited in order to allow server-side push. Originally, web pages were just meant to be hyperlinked documents of text, not the rich applications we often see today. Hacks and abuses of what was originally intended will continue, until one day these things become more standardised.
The answer to your question is essentially no, there is no tangible advantage to not being able to open a two-way connection between client and server in a browser. The reason it can't be done is simply that this was not the intention of web browsers, which were developed to poll/retrieve documents. With the advent of Rich Internet Applications, it has become desirable to have such functionality, but previously this had never been the goal of a browser. Currently there is a void to be filled by an eventual protocol or implementation of an existing protocol which will govern two-way communication between a browser and the server. There are existing techniques used to simulate this behavior to different degrees (AJAX, Comet, etc.) or it can be accomplished with embedded objects (Java, Flash, ActiveX Controls in IE) but these are simply paths around the void, not bridges over it.
We will simply have to wait (or act) for the standard to be written and the implementation to follow. More than likely, the implementation will actually come first, and we will have a fistful of new cross-browser compatibility issues to enjoy :) Oh, the bleeding edge!
Firewalls. Non-HTTP traffic is often blocked by firewalls, so opening up a random TCP port for communication will often fail.
I recently created a turn-based game server that can accept 10s of thousands of simultaneous client connections (long story short - epoll on Linux). Communication is based on a simple, custom, line-based protocol. This server allows clients to connect, seek for other players in game matches, play said games (send moves, chat messages, etc.), and be notified when the game has ended.
What I'm looking to do now is test the server by simulating client connections. I'm hoping to support 10s of thousands of simultaneous connections, so this testing is very important to me. What do you guys use for your own testing?
Some things I'm researching now are: pexpect (python expect lib for the functional testing) and tsung for load testing.
I'd like to be able to just test from my laptop since I do not have a cluster of client machines to connect from. Perhaps I'd need to use IP aliasing or some such in order to generate hundreds of thousands of outbound sockets (the limit is roughly 64K ephemeral ports per source IP address, AFAIK).
Anyway, it seems to me like I need something fairly custom but I thought I'd ask before I went down that path.
Thanks!
I've used JMeter with custom sampler and assertion components before to do automated regression/load testing for a banking application with a custom protocol (Java RMI based API).
It's not exactly lightweight though, and you'll end up doing a lot of extra coding in the JMeter components to support your custom protocol. I'm guessing you'd have to code your own Java socket based client in this case.
But it gives you a lot of flexibility in defining the logic for testing the components, so you can do whatever you want inside there. It scales nicely as well, and allows you to throw a lot of concurrent connections at the system under test.
I decided it was best to "roll my own" to start with.
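A roll-your-own harness for a line-based protocol could start from something like this asyncio sketch; the host, port, and commands are placeholders, and a real test would add timing, error counting, and gradual ramp-up:

    # open many concurrent client connections and drive a simple line-based protocol
    import asyncio

    HOST, PORT = "127.0.0.1", 9000      # placeholders for the game server
    NUM_CLIENTS = 5000

    async def one_client(i):
        reader, writer = await asyncio.open_connection(HOST, PORT)
        writer.write(f"LOGIN player{i}\n".encode())   # command names are invented
        await writer.drain()
        await reader.readline()                       # server's reply line
        writer.write(b"SEEK_MATCH\n")
        await writer.drain()
        await reader.readline()
        writer.close()
        await writer.wait_closed()

    async def main():
        await asyncio.gather(*(one_client(i) for i in range(NUM_CLIENTS)))

    asyncio.run(main())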
We are using HP LoadRunner; it's a state-of-the-art load testing product (but also an expensive one). It can simulate thousands of requests to the server and provides metrics on response time, etc.