Verifying clients when using interprocess communication - Windows

I'm building an application that will provide a service to other applications (let's pretend it solves differential equations). So my DifEq service will be running all the time, and a client application can send it requests to solve DifEqs at any point.
This would be trivial using sockets or pipes.
The problem is some applications nefariously want to send linear equations instead of differential equations, so I want to register applications that I know are sending proper DifEqs to my application.
Traditional sockets break down here, as far as I know.
Ideally, I'd like to be able to look at some information about the application that is making a request of me and (either through some meta-data on that application, through communication with my web site, or through some other, unknown method) determine it is an acceptable DifEq app. Furthermore, this ideal method would not be spoofable without a root/admin-level compromise of the underlying OS. If the linear equation app is also a root kit, I'll concede to being broken. :)
I need to be able to do this on Windows, OS X, and Linux (and maybe Android); but I recognize that it may not be the same solution on all platforms. So, how would you accomplish this (specify the platform you are focusing on, if appropriate)? I've done a lot of server-side development, but it's been way too many years since I've done any client-side development outside the browser and the world is very different today than it was then.

I think your question is a little confusing when it comes to talking about DifEq vs. LinearEq.
It sounds to me like you are just looking for a routine way to verify that clients are authorized to connect. There is a lot to read up on this subject. Common methods would be to use SSL certificates to verify the identity of clients. You can also tunnel over SSH, or use OAuth, etc., etc.
You'll have to do some more digging around the web to see what kind of authentication fits your scenario. You mention 'not spoofable'. I think that people generally end up compiling a certificate or private key into their application. This will stop all but the very dedicated and experienced hackers.
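To make that concrete on one platform: on Linux you can use a Unix-domain socket and ask the kernel who connected via SO_PEERCRED, which can't be spoofed without root. A rough sketch (error handling trimmed; is_whitelisted is a hypothetical stub you would implement yourself, e.g. by hashing the executable and checking it against a registered list):

    // Identify the connecting process on Linux via SO_PEERCRED.
    // Assumes client_fd is an accepted AF_UNIX socket; error handling trimmed.
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <string>

    bool is_whitelisted(const char* exe_path);  // hypothetical stub, not a real API

    bool verify_peer(int client_fd) {
        struct ucred cred;
        socklen_t len = sizeof(cred);
        if (getsockopt(client_fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) != 0)
            return false;

        // Resolve the peer's executable from /proc. Caveat: a process can
        // exec() something else after connecting, so treat this as one
        // signal, not absolute proof.
        std::string link = "/proc/" + std::to_string(cred.pid) + "/exe";
        char exe_path[4096];
        ssize_t n = readlink(link.c_str(), exe_path, sizeof(exe_path) - 1);
        if (n < 0) return false;
        exe_path[n] = '\0';

        std::printf("peer pid=%d uid=%d exe=%s\n", (int)cred.pid, (int)cred.uid, exe_path);
        return is_whitelisted(exe_path);
    }

On Windows, named pipes give you a similar hook via GetNamedPipeClientProcessId, after which you can open the process and validate its image path or Authenticode signature; OS X supports the analogous LOCAL_PEERCRED/getpeereid on Unix-domain sockets.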

Related

Simplest C++ library that supports distributed messaging - Observer Pattern

I need to do something relatively simple, and I don't really want to install a MOM like RabbitMQ, etc.
There are several programs that "register" with a central "service" server through TCP. The only function of the server is to call back all the registered clients when they all in turn say "DONE". So it is a kind of "join" (edit: Barrier) for distributed client processes. When all clients say "DONE" (they can be done at totally different times), the central server messages them all saying "ALL-COMPLETE". The clients "block" until asynchronously called back.
So this is a kind of distributed asynchronous Observer Pattern. The server has to keep track of where the clients are somehow. It is ok for the client to pass its IP address to the server, etc. It is constructable with things like Boost::Signal, Boost::Asio, Boost::Dataflow, etc., but I don't want to reinvent the wheel if something simple already exists. I got very close with ZeroMQ, but none of their patterns support this use-case very well, AFAIK.
Is there a very simple system that does this? Notice that the server can be written in any language. I just need C++ bindings for the clients.
After much searching, I used this library:
https://github.com/actor-framework
It turns out that doing this with this framework is relatively straightforward. The only real "impediment" to using it is that the library seems to have gone through an API transition recently and the documentation .pdf has not completely caught up with the source. No biggie, since the example programs and the source (.hpp) files get you over this hump. However, they need to bring the docs in sync with the source. In addition, IMO they need to provide more interesting examples of how to use C++ actors for extreme performance. For my case it is not needed, but the idea of actors (shared-nothing) in this use-case is one of the reasons people use them instead of shared-memory communication when using threads.
Also, the syntax the library enforces (get used to lambdas!) can be a bit of a mind-twister at first if one is not used to state-of-the-art C++11 programs. Beyond that, the only other caveat was remembering all the clients that registered with the server, which is trivial.
STRONGLY RECOMMENDED.
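For anyone who wants to see the shape of the problem before reaching for a framework: the barrier logic itself is tiny. Below is a minimal in-process model of the semantics (threads stand in for remote clients; CAF or plain sockets would supply the actual transport):

    // In-process model of the distributed barrier described in the question.
    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <vector>
    #include <cstdio>

    class Barrier {
    public:
        explicit Barrier(int registered_clients) : remaining_(registered_clients) {}

        // Called by each client when it says "DONE"; blocks until "ALL-COMPLETE".
        void done_and_wait() {
            std::unique_lock<std::mutex> lock(mutex_);
            if (--remaining_ == 0)
                all_complete_.notify_all();  // last client releases everyone
            else
                all_complete_.wait(lock, [this] { return remaining_ == 0; });
        }

    private:
        std::mutex mutex_;
        std::condition_variable all_complete_;
        int remaining_;
    };

    int main() {
        Barrier barrier(3);
        std::vector<std::thread> clients;
        for (int i = 0; i < 3; ++i)
            clients.emplace_back([&barrier, i] {
                std::printf("client %d: DONE\n", i);
                barrier.done_and_wait();
                std::printf("client %d: got ALL-COMPLETE\n", i);
            });
        for (auto& t : clients) t.join();
    }

The distributed version is this plus bookkeeping: the server remembers each registered client's address and sends "ALL-COMPLETE" over the wire instead of notifying a condition variable.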

SPA: using websockets only. Why not?

I am redesigning a web application, which previously had been rendered server side, into a Single Page Application, and I have started to read about WebSockets. The web application will be using sockets to have new records and/or messages pushed to the client. I have been wondering why most pages that make use of sockets don't handle all their communication over the socket. Most of the time there is a RESTful backend in addition to the WebSocket. Would it be a bad idea to have the client query for new resources over the socket? If so, why - other than that a RESTful API might be easier to use with other devices?
I can imagine that using WebSockets would probably not be the best idea when the network connection is poor, as on mobile devices, but it should work quite well with a reasonable connection to the web.
I found this related question, however it is from 2011 and seems a little outdated:
websocket api to replace rest api?
No, it won't be a bad idea. Actually, I work on an application that uses a WebSocket connection for all data interaction; the web server only handles requests for resources, views in different languages, dimensions, etc.
The problem may be the lack of frameworks/tools based on a persistent connection. For many years, most frameworks, front and back end, have been designed and built around the request/response model. The shift in approach may not be so easy to accept.
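To make the "query over the socket" idea concrete: without a request/response framework you end up hand-rolling the correlation layer yourself. A rough, transport-agnostic sketch (send_raw stands in for the actual socket write; the framing format and all names are made up):

    // Request/response correlation over a message-oriented channel
    // (e.g. a WebSocket). Assumes well-formed "id|payload" messages.
    #include <functional>
    #include <map>
    #include <string>
    #include <utility>

    class RpcChannel {
    public:
        using Callback = std::function<void(const std::string&)>;

        explicit RpcChannel(std::function<void(const std::string&)> send_raw)
            : send_raw_(std::move(send_raw)) {}

        // Tag each outgoing request with an id and remember its callback.
        void request(const std::string& payload, Callback on_reply) {
            int id = next_id_++;
            pending_[id] = std::move(on_reply);
            send_raw_(std::to_string(id) + "|" + payload);
        }

        // Feed every inbound message here; replies carry the id they answer.
        void on_message(const std::string& msg) {
            auto sep = msg.find('|');
            int id = std::stoi(msg.substr(0, sep));
            auto it = pending_.find(id);
            if (it != pending_.end()) {
                it->second(msg.substr(sep + 1));
                pending_.erase(it);
            }
        }

    private:
        std::function<void(const std::string&)> send_raw_;
        std::map<int, Callback> pending_;
        int next_id_ = 0;
    };

A real implementation also needs timeouts, error replies and reconnection; HTTP gives you all of that (plus caching) for free, which is a big part of why the hybrid REST-plus-socket setup is so common.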
Coming back to this question a few years later, I would like to point out a few aspects to illustrate that having all your communication go through WebSockets does have its drawbacks:
There is no common support for compression. You can easily configure your web server to compress HTTP responses, and browsers have been known to happily accept compressed responses for years; for WebSockets, however, it is still not that easy (even though the situation has improved with the permessage-deflate extension).
Client frameworks are often built upon commonly used standards like REST. The further away you move from a framework's expectations, the fewer add-ons or features will be available.
Caching in the browser is not as easy. With HTTP, this by now goes a long way, reaching into the realm of offline availability and PWAs.
When using technology that is only used by a subset of users, you are more likely to find new bugs, or bugs might take longer to fix. And if it's not bugs, there might be an edge case somewhere around the corner. This isn't an issue per se - but something to be aware of. If one runs into those things, they can easily take up quite some time to fix or work around.

Sending messages between computers

I'd like to start investigating client/server communication. I've started to look at Distributed Objects and a tad at CFNetwork. Let's just say I'm looking for something more my speed (which is slower).
I'd like to be able to send a message from one computer to another, possibly carrying a string or some other type of data. I'm thinking of building a simple student response system where one computer is acting as a server and the clients are connecting and sending data to it.
I'm looking for resources that might help me out as well as suggestions of where to start understanding the concepts involved. I've been teaching myself Objective-C and am a relative newbie to programming, so I know I have holes in my understanding.
"Sockets" is the canonical answer.
If you're interested, here's a great introduction to socket programming (biased toward C, but still very informative):
Beej's Guide to Network Programming
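For a taste of what Beej covers, a bare-bones TCP client that connects and sends one string looks roughly like this (POSIX sockets, minimal error handling; the host, port and payload are made up, and Cocoa's higher-level networking sits on top of these same primitives):

    // Bare-bones TCP client: connect to a server and send one string.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in server{};
        server.sin_family = AF_INET;
        server.sin_port = htons(9000);              // example port
        inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

        if (connect(fd, (sockaddr*)&server, sizeof(server)) < 0) {
            perror("connect");
            return 1;
        }

        const char* msg = "student-42: answer B\n"; // made-up payload
        send(fd, msg, strlen(msg), 0);
        close(fd);
    }

The server side is the mirror image (socket, bind, listen, accept, recv), which Beej walks through step by step.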
Another really simple way of doing it is to have the server embed a local HTTP server (inside itself) and then let the clients simply make HTTP requests. By doing that you let the HTTP layer do all the fancy socket work. Simpler, with more overhead, but it may be suitable for your case. It is also a lot easier to debug, since you can use your browser to test the connection. There are many ways of implementing an HTTP server in Cocoa; I can't remember which one I've used, but a quick Google search pointed me at this one, for example.

Is it reasonable to protect DRM'd content client side

Update: this question is specifically about protecting (enciphering/obfuscating) the content client side vs. doing it before transmission from the server. What are the pros/cons of going with an approach like iTunes's, in which the files aren't ciphered/obfuscated before transmission?
As I added in my note in the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content-provider deals are on it, but that doesn't free us of obligations already in place.
I recently read some information regarding how iTunes/FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep honest people honest". If we go further than this, only two things happen:
1. We fight a battle we cannot win. Those who want to cheat will succeed.
2. We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client or server side. This does give another chance to those in 1.
An extra bit of info: the client environment is Adobe AIR, with multiple content types involved (music, video, Flash apps, images).
So, is it reasonable to do like iTunes's FairPlay and protect the media client side?
Note: I think unbreakable DRM is an unsolvable problem, and as for most people looking for an answer to this, the need relates to it already being in a contract with content providers ... in the likes of reasonable best effort.
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client or server side. This does give another chance to those in 1.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as Wal-Mart did)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied
As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem. Protecting code on the client isn't. All current approaches come with stringent restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all the follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in Marketing, Public Relations and Legal. As long as they don't stop alienating users, you don't need to bother.
To answer the question "is it reasonable", you have to be clear when you use the word "protect" what you're trying to protect against...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to ...
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requests songs from Apple's servers and then downloads the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, Wireshark will foil your best-laid plans.
Encryption alone is usually just as good as sending a boolean telling you whether you're allowed to use the content, since the bypass is usually just changing the input/output of one encryption API call...
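To illustrate with a toy sketch (XOR standing in for a real cipher; every name here is made up): the whole scheme typically funnels through one call site, and that call site, not the cipher, is what gets hooked or patched:

    #include <string>

    void render(const std::string& media);  // hypothetical player entry point

    // Placeholder for the real decryption routine.
    static std::string decrypt(const std::string& blob, char key) {
        std::string out = blob;
        for (char& c : out) c ^= key;
        return out;
    }

    void play(const std::string& blob, char user_key) {
        std::string media = decrypt(blob, user_key);
        // <-- an attacker hooks or patches right here and dumps `media`;
        //     the strength of the cipher above never mattered.
        render(media);
    }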
You want to use heavy binary obfuscation on the client side if you want the protection to literally hold for more than 5 minutes. If you use decryption on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse engineer the entire binary protection scheme. Properly done, this will stop all the kids.
On another note, if this is a product to be run on an operating system, don't use processor-specific or operating-system-specific anomalies such as the Windows PEB/TEB/syscalls and processor bugs; those will only make the program even less portable than DRM already is.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.

How to implement a secure distributed social network?

I'm interested in how you would approach implementing a BitTorrent-like social network. It might have a central server, but it must be able to run in a peer-to-peer manner, without communication with it:
If a whole region's network is disconnected from the internet, it should be able to pass updates from users inside the region to each other
However, if some computer gets the posts from the central server, it should be able to pass them around.
There is some reasonable level of identification; some computers might be disseminating incomplete/incorrect posts or performing DoS attacks. It should be able to describe some information as coming from more trusted computers and some from less trusted.
It should theoretically be able to use any computer as a server, while dynamically optimizing the network so that typically only fast computers with ample bandwidth work as seeders.
The network should be able to scale to hundreds of millions of users; however, each particular person is interested in less than a thousand feeds.
It should include some Tor-like privacy features.
Purely theoretical question, though inspired by recent events :) I do hope somebody implements it.
Interesting question. With the use of already existing Tor, P2P and darknet features, and by using some public/private key infrastructure, you could possibly come up with some great things. It would be nice to see something like this in action. However, I see a major problem: not with some people using it for file sharing, BUT with flooding the network with useless information. I therefore would suggest using a Twitter-like approach where you can ban and subscribe to certain people, and starting with a very reduced set of functions at the beginning.
Incidentally, we programmers could make a good start toward accomplishing that goal by NOT saving and analyzing too much information about the users, and by using safe ways of storing and accessing user-related data!
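On the public/private key point: the usual building block is to have every node sign its posts, so that trust attaches to keys rather than to IP addresses. A minimal sketch using libsodium (key distribution and trust weighting, the genuinely hard parts, are not shown):

    // Sign posts so peers can judge "who said this" by key, not by IP.
    #include <sodium.h>
    #include <cstdio>

    int main() {
        if (sodium_init() < 0) return 1;

        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(pk, sk);  // each node generates its pair once

        const unsigned char post[] = "status: our region is still online";
        unsigned char sig[crypto_sign_BYTES];
        crypto_sign_detached(sig, nullptr, post, sizeof post - 1, sk);

        // A receiving peer verifies against the sender's public key before
        // relaying; unverifiable posts get dropped or down-weighted.
        bool ok = crypto_sign_verify_detached(sig, post, sizeof post - 1, pk) == 0;
        std::printf("post %s\n", ok ? "verified" : "rejected");
    }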
Interesting: the Rendezvous protocol does something similar to this (it grabs "buddies" on the local network).
BitTorrent is a means of transferring static information; it's not intended to have everyone become producers of new content. Also, BitTorrent requires that the producer be a dedicated server until all of the clients are able to grab the information.
Diaspora claims to be one such thing.
