Windows Named Pipes connections - windows

I have searched high and low for this answer. Can one code up a Named Pipes server where the connection a client makes is persistent until you close the applications? This would be in C/C++. I'm not asking anyone to actually do this, as I am capable. To explain in a little more detail, in case my question is not clear: I want the client to connect to the server and then be able to pass data back and forth without having to kill the connection at the end of each data transaction and start a new one for the next. It seems that in every example I have seen or read, the connection only lasts for a single data exchange, which looks wasteful and extremely time consuming. Then I want to thread it so I can have up to 8 clients on the same named pipe. If you know of example code that does this, that would be great too. I have already read the Microsoft examples, and they appear to be single data exchanges with a new connection every time.
My confusion lies with the ReadFile() and WriteFile() functions. They need the pipe handle and pointers to the data buffers, just like file reads and writes on the hard drive. Those files can be opened at program start, used, and then finally closed just before you exit your application. There are risks to doing this, but it is sometimes necessary. So I want my server application to be in control, not the clients.
Thanks in advance. I will answer any questions if this is not clear to you.
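
For illustration, here is a minimal, untested sketch of exactly this pattern: one pipe name with up to 8 instances, and a per-client thread whose ReadFile()/WriteFile() loop keeps the connection open until the client disconnects. The pipe name \\.\pipe\demo, the buffer sizes, and the echo behaviour are all placeholders.

    // Untested sketch: a named pipe server that keeps every client
    // connection open and serves up to 8 clients on one pipe name.
    #include <windows.h>
    #include <thread>

    static void serve_client(HANDLE pipe)
    {
        char buf[4096];
        DWORD got = 0, put = 0;
        // The connection persists across many exchanges: loop until the
        // client closes its end, then clean up this instance.
        while (ReadFile(pipe, buf, sizeof(buf), &got, nullptr) && got > 0) {
            if (!WriteFile(pipe, buf, got, &put, nullptr))   // echo back
                break;
        }
        DisconnectNamedPipe(pipe);
        CloseHandle(pipe);
    }

    int main()
    {
        for (;;) {
            HANDLE pipe = CreateNamedPipeA(
                "\\\\.\\pipe\\demo",                 // placeholder name
                PIPE_ACCESS_DUPLEX,
                PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                8,                                   // at most 8 instances
                4096, 4096, 0, nullptr);
            if (pipe == INVALID_HANDLE_VALUE)
                return 1;                            // e.g. all 8 are busy
            // Blocks until a client opens the pipe with CreateFile().
            if (ConnectNamedPipe(pipe, nullptr) ||
                GetLastError() == ERROR_PIPE_CONNECTED)
                std::thread(serve_client, pipe).detach();  // one thread/client
            else
                CloseHandle(pipe);
        }
    }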

I am not surprised at your answers. I was really wanting a way to keep the connection open per instance, but since this is not how pipes work, I get it. I have devised a better way to make my applications talk to each other. I originally had one server and many clients, so I turned that around and now have one client and many servers, each with a different pipe name. Since my client (formerly the server) did most of the initiating of the messages, I can now better manage the way I send and request data via messages/pipes. The only drawback is not giving the data to all of them at once, but what is a few microseconds among friends? Please let me know if this will not work as I expect before I spend a lot of time on the code. Thanks. Any suggestions are welcome.

How do live sessions work (with multiple users)?

This has been on my mind for a while now, and I guess I am asking it now. My question is: how do live sessions work? For instance, a live chat session, or the live multi-user updater on JSfiddle.net. How do both items update instantly? In the case of the live chat, is it sending an AJAX request to the server every second?
Sorry if my question is misunderstood, but my question is simply, how do live sessions work with multiple users?
EDIT
How does Stack Overflow do it? Every time something happens I get a notification. Is it polling the database every second to see if something has happened, or is there a better (more efficient) way of going about this?
There are a couple of ways of doing it.
The most common way nowadays is WebSockets. You can just google that term and learn about it. Basically, the web server notifies you through a socket whenever it decides to.
Another way is polling. People used to do it like this back in the day. Polling is pretty much the brute-force way: constantly (or every second or so) sending an AJAX request to the web server asking if there is any new content.
Another interesting way is sending a GET request that the server holds open for a certain amount of time before answering. It functions a bit like a stream you opened to a file or connection: it stays open until it is closed (or until some other condition is met). This technique is usually called long polling. I'm not too familiar with it, but I know Google Drive uses it for its multi-user file editing. Just open two sessions to the same Google Drive document and inspect the page. You'll see in the console that every time you type a block of text it sends a POST, and you'll have at least one GET request pending at all times. At some point it closes, and right away a new one starts.
So in short: WebSockets, polling, and long polling.
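
To make the last method concrete, here is a rough sketch of a long-polling client loop, written against plain BSD sockets in C++ (an HTTP library would be used in practice; example.com and the /updates path are invented):

    // Rough sketch of client-side long polling over plain sockets.
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        for (;;) {                               // one request per iteration
            addrinfo hints{}, *res = nullptr;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo("example.com", "80", &hints, &res) != 0)
                return 1;
            int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            connect(fd, res->ai_addr, res->ai_addrlen);  // error checks elided
            freeaddrinfo(res);

            const char req[] = "GET /updates HTTP/1.1\r\n"
                               "Host: example.com\r\n"
                               "Connection: close\r\n\r\n";
            send(fd, req, sizeof(req) - 1, 0);

            // The server holds the request open until it has news (or a
            // timeout); recv() simply blocks here in the meantime.
            char buf[4096];
            ssize_t n;
            while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
                fwrite(buf, 1, (size_t)n, stdout);       // handle the update

            close(fd);                           // then immediately re-poll
        }
    }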

Multi-threaded Windows Service - Erlang

I am going to describe the problem I have to solve, and I need some suggestions on whether I am on the right path.
The problem is:
I need to create a Windows service application that receives a request and performs some action (socket communication). The action is to execute a script (maybe in Lua or Perl). This script models the business rules of the client: querying databases, making requests to websites, and then sending a response back to the client.
There are 3 mandatory requirements:
The service will receive a lot of requests at the same time, so I am thinking of using a worker-thread model.
The service must have high throughput. I will have many requests in the same second.
Low latency: I must respond to these requests very quickly.
Every request will generate log entries. I can't write these log entries to the physical disk while the scripts execute, because of the I/O time involved. I will probably build a queue in memory and have other threads consume this queue and write to disk.
In the future, it is possible that two worker threads will have to exchange messages.
I have to design a protocol for this service. I was thinking of using Thrift, but I don't know the overhead involved. Maybe I will make my own protocol.
To write the Windows service, I was thinking of Erlang. Is that a good idea?
Does anyone have suggestions/hints for solving this problem? Which is the best language to write this service in?
Yes, Erlang is a good choice if you know it or are ready to learn it. With Erlang you don't need worker threads; just implement your server in the Erlang style and you'll get a multithreaded solution automatically.
Not sure how to turn an Erlang program into a Windows service, but it is probably doable (Erlang/OTP ships an erlsrv tool for running Erlang nodes as Windows services).
Writing to the same log file from many threads is suboptimal because it requires locking. It's better to have a log-entries queue (lock-free?) and a separate thread (Erlang process?) that writes them to the file. BTW, are you sure that executing an external script in another language is much faster than writing a log record to a file?
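
A sketch of that queue-plus-writer-thread idea, in C++ for concreteness (mutex-based rather than lock-free, and the log file name is invented):

    // Sketch: worker threads push log entries into an in-memory queue;
    // one dedicated thread drains it and does the disk I/O.
    #include <condition_variable>
    #include <fstream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class LogQueue {
    public:
        void push(std::string entry) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(entry)); }
            cv_.notify_one();
        }
        void run(const std::string& path) {      // writer thread body
            std::ofstream out(path, std::ios::app);
            for (;;) {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !q_.empty(); });
                std::string e = std::move(q_.front());
                q_.pop();
                lk.unlock();                     // write without holding lock
                out << e << '\n';
            }
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::string> q_;
    };

    int main() {
        LogQueue log;
        std::thread writer(&LogQueue::run, &log, "service.log");
        log.push("request handled");             // called from worker threads
        writer.join();                           // runs forever in this sketch
    }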
It's doubtful you'll get much better performance from your own serialization library than Thrift provides for free. Another option is Google Protocol Buffers; some people claim it's faster.
Theoretically (!) it's possible that an Erlang solution won't deliver the required performance. In that case, consider a compiled language, e.g. C++, and asynchronous networking, e.g. Boost.Asio. But be prepared for it to be much more complicated than the Erlang way.
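
To give a feel for that complexity, here is roughly what a minimal asynchronous echo server looks like in the usual Boost.Asio style (assuming a reasonably recent Boost; the port number is a placeholder):

    // Minimal async echo server in the conventional Boost.Asio shape.
    #include <boost/asio.hpp>
    #include <cstddef>
    #include <memory>
    using boost::asio::ip::tcp;

    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(tcp::socket s) : socket_(std::move(s)) {}
        void start() { read(); }
    private:
        void read() {
            auto self = shared_from_this();      // keep session alive
            socket_.async_read_some(boost::asio::buffer(buf_),
                [this, self](boost::system::error_code ec, std::size_t n) {
                    if (!ec) write(n);
                });
        }
        void write(std::size_t n) {
            auto self = shared_from_this();
            boost::asio::async_write(socket_, boost::asio::buffer(buf_, n),
                [this, self](boost::system::error_code ec, std::size_t) {
                    if (!ec) read();             // echo, then read again
                });
        }
        tcp::socket socket_;
        char buf_[1024];
    };

    void accept_loop(tcp::acceptor& acc) {
        acc.async_accept([&acc](boost::system::error_code ec, tcp::socket s) {
            if (!ec) std::make_shared<Session>(std::move(s))->start();
            accept_loop(acc);                    // accept the next client
        });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acc(io, tcp::endpoint(tcp::v4(), 5555));
        accept_loop(acc);
        io.run();                                // single-threaded event loop
    }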

Creating a proxy between application queries and the Internet

Is it possible (for example with C++, but it does not really matter) to create a bridge/proxy application to capture the data requested by another application? To be more specific, I'm talking about an Adobe AIR based game. (I want to create a report with stats based on the data acquired, but that is not actually part of this question.)
Rather than a simple "boolean" answer, please provide a link to an example or documentation. Thanks.
It would always be possible, but depending on your target operating system it may require a fair amount of effort, which raises the question: is there a reason you cannot use Fiddler or other packet-sniffing software for your target OS?
You can write a proxy by hand; in Python it can be quite easy. All you have to do is set localhost as the proxy, then forward each request and pass the response back to the calling socket.
I started writing something like this some time ago. The idea was to write a simple replacement for dansguardian.
I've uploaded it to GitHub, so you can give it a look if it helps.
I do not remember it well (I started writing it last year), but maybe with some modification it can fit your requirements.
Conceptually, this is your configuration:
app_client -> [app_channel] -> proxy -> [server_channel] -> app_server
Your proxy starts a server socket and the app_client connects to it. This is your app_channel. Now your proxy creates a connection to the app_server. This is your server_channel.
Now start two threads: one reads from the app_channel and writes to the server_channel, the other reads from the server_channel and writes to the app_channel.
This will create a transparent connection to the app_server via your proxy. You can extract the data as you wish. If the data is encrypted though, there's very little you can actually do by way of analysis.
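
A minimal sketch of that two-thread pump, using POSIX sockets (app_fd and server_fd stand for the two already-connected sockets; error handling is mostly elided):

    // Sketch of the two-thread byte pump described above.
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    // Copy bytes one way until either side closes.
    static void pump(int from_fd, int to_fd)
    {
        char buf[4096];
        ssize_t n;
        while ((n = recv(from_fd, buf, sizeof(buf), 0)) > 0) {
            // This is the interception point: inspect or log buf here.
            if (send(to_fd, buf, (size_t)n, 0) != n)
                break;
        }
        shutdown(to_fd, SHUT_WR);   // propagate EOF to the other side
    }

    void relay(int app_fd, int server_fd)
    {
        std::thread t1(pump, app_fd, server_fd);  // app_channel -> server_channel
        std::thread t2(pump, server_fd, app_fd);  // server_channel -> app_channel
        t1.join();
        t2.join();
        close(app_fd);
        close(server_fd);
    }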

How do I check the destination that a socket is connected to?

If, for example, the socket in my compiled application is designed to connect to 123.456.789.0,
how do I check that it is actually connected to 123.456.789.0? Is there a way to do this?
The idea is this: I want to prevent other people from editing my program and changing the address to, for example, 127.0.0.1 to make it connect through a proxy.
Is there any function/way/trick to check the address after the socket is connected?
Use the getpeername function to retrieve the address of the remote host.
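
A minimal sketch of such a check with BSD-style sockets (IPv4 only; on Windows the Winsock getpeername call has the same shape):

    // Ask the OS which address a connected socket actually points at.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <cstring>

    // 'sock' is an already-connected TCP socket.
    bool connected_to(int sock, const char* expected_ip)
    {
        sockaddr_in peer{};
        socklen_t len = sizeof(peer);
        if (getpeername(sock, reinterpret_cast<sockaddr*>(&peer), &len) != 0)
            return false;                        // not connected / error
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
        return std::strcmp(ip, expected_ip) == 0;
    }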
If someone edits your program like you mention, they'll probably alter such a check as well though.
nos's comment about the insecurity of this approach is correct, but incomplete. You wouldn't even need to change the program's code to circumvent your proposed mechanism.
The easiest way around it would be to add an IP alias to one of the machine's network interfaces. Then a program can bind to that interface on the port your program connects to, and the OS's network stack will happily send connections to the attacker's local program, not your remote one.
So, now you say you want to know how to list the computer's interfaces so you can detect this sort of subversion. Your opponent counterattacks, launching your program as a sub-process of theirs after installing a Winsock hook that routes Winsock calls back through the parent process.
We then expect to find you asking how to read the executable code section of a particular DLL loaded into your process space, so you can check that the code is what you expect. Now your opponent drops the Winsock shim, switching to an NDIS layer filter, rewriting packets from your program right before they hit the NIC.
Next we find you looking for someone to tell how to list the drivers installed on a Windows system, so you can check that one of these filters isn't present. Your opponent thinks for about 6 seconds and decides to start screwing with packet routing, selecting one of at least three different attacks I can think of off the top of my head. (No, wait, four.)
I'm not a security expert. Yet, I've spent five minutes on this and already have your security beat seven different ways.
Are you doomed? Maybe, maybe not.
Instead of you coming up with fixes to the risks you can see, better to post a new question saying what it is you're trying to protect, and have the experts comment on risks and possible fixes. (Don't add it here. Your question is already answered, correctly, by nos. This is a different question.)
Security is hard. Expertise counts for far more in that discipline than in most other areas of computer science.

Looking for pattern/approach/suggestions for handling long-running operation tied to web app

I'm working on a consumer web app that needs to run a long-running background process tied to each customer request. By long-running, I mean anywhere between 1 and 3 minutes.
Here is an example flow. The object/widget doesn't really matter.
Customer comes to the site and specifies object/widget they are looking for.
We search/clean/filter for widgets matching some initial criteria. <-- long running process
Customer further configures more detail about the widget they are looking for.
When the long running process is complete the customer is able to complete the last few steps before conversion.
Steps 3 and 4 aren't really important. I just mention them because we can buy some time while we are doing the long running process.
The environment we are working in is a LAMP stack, currently using PHP. It doesn't seem like a good design to have the long-running process tie up an Apache thread in mod_php (or a FastCGI process). The Apache layer of our app should be focused on serving up content, not data processing, IMO.
A few questions:
Is our thinking right in that we should separate this "long running" part out of the apache/web app layer?
Is there a standard/typical way to break this out under Linux/Apache/MySQL/PHP (we're open to using a different language for the processing if appropriate)?
Any suggestions on how to go about breaking it out? E.g., do we create a daemon that churns through a FIFO queue?
Edit: Just to clarify, only about 1/4 of the long running process is database centric. We're working on optimizing that part. There is some work that we could potentially do, but we are limited in the amount we can do right now.
Thanks!
Consider providing the search results via AJAX from a web service instead of your application. Presumably you could offload this to another server and let your web application deal with the content as you desire.
Just curious: 1-3 minutes seems like a long time for a lookup query. Have you looked at indexes on the columns you are querying to improve the speed? Or do you need to do some algorithmic process -- perhaps you could perform some of this offline and prepopulate some common searches with hints?
As Jonnii suggested, you can start a child process to carry out background processing. However, this needs to be done with some care:
Make sure that any parameters passed through are escaped correctly
Ensure that more than one copy of the process does not run at once
If several copies of the process run, there's nothing stopping a (not even malicious, just impatient) user from hitting reload on the page that kicks it off, eventually starting so many copies that the machine runs out of RAM and grinds to a halt.
So you can use a subprocess, but do it carefully, in a controlled manner, and test it properly.
Another option is to have a daemon permanently running waiting for requests, which processes them and then records the results somewhere (perhaps in a database)
This is the poor man's solution:
exec ("/usr/bin/php long_running_process.php > /dev/null &");
Alternatively you could:
Insert a row into your database with details of the background request, which a daemon can then read and process.
Write a message to a message queue, which a daemon then reads and processes.
Here's some discussion on the Java version of this problem.
See java: what are the best techniques for communicating with a batch server
Two important things you might do:
Switch to Java and use JMS.
Read up on JMS but use another queue manager. Unix named pipes, for instance, might be an acceptable implementation.
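
As a sketch of that named-pipe-as-queue idea (the /tmp/jobs.fifo path and the one-job-per-line format are invented; a real system would need locking and durability guarantees):

    // A Unix named pipe (FIFO) as a crude job queue: the web layer writes
    // one request per line; this daemon blocks reading and processing them.
    #include <sys/stat.h>
    #include <fstream>
    #include <iostream>
    #include <string>

    int main()
    {
        const char* path = "/tmp/jobs.fifo";    // placeholder path
        mkfifo(path, 0666);                     // ignore error if it exists

        for (;;) {
            std::ifstream fifo(path);           // blocks until a writer opens
            std::string job;
            while (std::getline(fifo, job)) {
                // ... run the long process for 'job', store result in the DB ...
                std::cout << "processing: " << job << '\n';
            }
            // All writers closed; reopen and wait for the next batch.
        }
    }

The producer side can be as simple as the PHP layer writing a line to /tmp/jobs.fifo and returning immediately.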
Java servlets can do background processing. You could do something similar in any web technology with threading support. I don't know about PHP, though.
Not a complete answer, but I would think about using AJAX and handing the 2nd step off to something faster than PHP (C, C++, C#), then having a PHP function pick the results off of some queue, most likely just a database.
