Can a named-pipe client write to multiple instances? (Windows)

After creating multiple instances of a named pipe (using CreateNamedPipe()), I use CreateFile() to open a client connection to the pipe.
When the client writes a message to the pipe, only one server instance gets it.
Is there a way for the client to write a message to all instances?

When a client connects to an instance of a named pipe, the way the operating system chooses which server instance to connect it to is, as far as I know, undocumented. Empirically, however, it appears to be done on a round-robin basis.
If you are prepared to rely on undocumented behaviour which may change with service packs and QFE patches, your client can keep closing its pipe handle and calling CreateFile again to get a new one; each time it will attach to a new server instance of the pipe. However, there is a problem with this: the client would not know when to stop. You could invent some mechanism involving a response from the server to break the loop, but it is far from satisfactory. This isn't what named pipes were designed for.
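If you do decide to rely on that behaviour, the loop might look something like this minimal sketch (the pipe name and instance count are hypothetical, and the round-robin attachment is assumed, not guaranteed):

```cpp
// Sketch only: relies on the undocumented round-robin behaviour
// described above. Pipe name and instance count are hypothetical.
#include <windows.h>
#include <cstdio>

int main()
{
    const wchar_t* pipeName = L"\\\\.\\pipe\\demo_pipe"; // hypothetical name
    const char msg[] = "notify";
    const int instanceCount = 4; // assumes the client somehow knows this

    for (int i = 0; i < instanceCount; ++i)
    {
        // Each CreateFile call connects to one available server instance.
        HANDLE h = CreateFileW(pipeName, GENERIC_WRITE, 0, nullptr,
                               OPEN_EXISTING, 0, nullptr);
        if (h == INVALID_HANDLE_VALUE)
        {
            // ERROR_PIPE_BUSY means all instances are taken; a real client
            // would call WaitNamedPipeW and retry.
            printf("CreateFile failed: %lu\n", GetLastError());
            break;
        }
        DWORD written = 0;
        WriteFile(h, msg, sizeof(msg), &written, nullptr);
        // Closing the handle frees this instance, and empirically the next
        // CreateFile attaches to a different one.
        CloseHandle(h);
    }
    return 0;
}
```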
The real purpose of multiple server instances of a pipe is to enable pipe servers to handle multiple clients concurrently. Usually, the same server process manages all the instances.
You really want to turn things around: what you think of as your client should be the server, and should create and manage the pipe. Processes which want notification would then connect as clients of the named pipe. This is a pattern which can be implemented quite easily using WCF, with a duplex contract and the NetNamedPipeBinding, if that's an option.

No, a pipe has two ends. You will have to loop through the pipe instances. A mailslot supports broadcasts, but delivery isn't guaranteed.
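If broadcast semantics are acceptable, a minimal mailslot sketch (names hypothetical) might look like the following. Note the caveats: only one process per machine can create a given mailslot name, and broadcast delivery is datagram-style, unacknowledged, and limited to 424 bytes per message.

```cpp
// Minimal mailslot sketch; the mailslot name is hypothetical.
#include <windows.h>
#include <cstdio>

void reader()
{
    // Reader side: at most one process per machine can own this name.
    HANDLE slot = CreateMailslotW(L"\\\\.\\mailslot\\demo_slot", 0,
                                  MAILSLOT_WAIT_FOREVER, nullptr);
    char buf[424];
    DWORD bytesRead = 0;
    if (ReadFile(slot, buf, sizeof(buf), &bytesRead, nullptr))
        printf("got %lu bytes\n", bytesRead);
    CloseHandle(slot);
}

void broadcaster()
{
    // "*" addresses every mailslot with this name in the domain;
    // delivery is best-effort, with no acknowledgement.
    HANDLE h = CreateFileW(L"\\\\*\\mailslot\\demo_slot", GENERIC_WRITE,
                           FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h != INVALID_HANDLE_VALUE)
    {
        DWORD written = 0;
        WriteFile(h, "ping", 5, &written, nullptr);
        CloseHandle(h);
    }
}
```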

Related

Preventing data loss in client authoritative database writes

A project I'm working on requires users to insert themselves into a list on a server. We expect a few hundred users over a weekend and, while very unlikely, a collision could happen in which two users submit the list concurrently and one of them is lost. The server has no validation; it simply allows you to get and put data.
I was pointed in the direction of "optimistic locking" but I'm having trouble grasping when exactly the data should be validated and how it prevents this from happening. If one of the clients reads the data, adds itself and then checks again to ensure that the data is the same with the use of an index or timestamp, how does this prevent the other client from doing the same and then one overwriting the other?
I'm trying to understand the flow in the context of two clients getting data and putting data.
The point of optimistic locking is that the decision to accept or reject a write is taken on the server, and is protected against concurrency by a pessimistic transaction or some sort of hardware protection, such as compare-and-swap. So a client requests a write together with some sort of timestamp or version identifier, and the server only accepts the write if the timestamp is still accurate. If it isn't, the client gets some sort of rejection code and will have to try again. If it is, the client is told that its write succeeded.
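As a minimal sketch of that server-side check (names illustrative, in C++ for concreteness):

```cpp
// Sketch of the server-side version check described above: the write is
// accepted only if the client's version still matches.
#include <mutex>
#include <string>
#include <vector>

struct ListState {
    std::vector<std::string> entries;
    long version = 0;        // bumped on every successful write
};

ListState state;
std::mutex stateMutex;       // the "pessimistic" protection on the server

// Returns true if the write was accepted; false means the client must
// re-read the list and retry with the new version.
bool tryAppend(const std::string& user, long clientVersion)
{
    std::lock_guard<std::mutex> lock(stateMutex);
    if (clientVersion != state.version)
        return false;        // someone else wrote in the meantime
    state.entries.push_back(user);
    ++state.version;
    return true;
}
```

The client flow is then a loop: read the list and its version, call tryAppend with that version, and on false re-read and retry. If two clients both read version 0 concurrently, only the first write is accepted; the second gets a rejection instead of silently overwriting.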
This is not the only way to handle receiving data from multiple clients. One popular alternative is to use a reliable messaging system - for example, the Java Message Service specifies an interface for such systems, for which you can find open source implementations. Clients write into the messaging system and can go away as soon as their message is accepted. The server reads requests from the messaging system and acts on them. If the server or the network goes down, it's no big deal: the messages will still be there to be read when they come back. Typically they are written to disk and have the same level of protection as database data, although if you look at a reliable message queue implementation you may find that it is not, in fact, built on top of a standard database table.
One example of a write-up of the details of optimistic locking is the HTTP ETag specification, e.g. https://en.wikipedia.org/wiki/HTTP_ETag

Can I call the same RPC function on many servers at the same time?

I'm trying to find a fast method of interprocess communication.
One thing I need is the ability to send one command to multiple application instances at the same time. I spent a day trying to find out whether I can start many instances of the same app (a local RPC server) and call them from one client. I use the ncalrpc protocol for this purpose.
I just want to start several instances of the server and one instance of the client, then make a single RPC call on the client and have that function evaluated on every running server.
Yes, you can either use multiple client threads (each making a separate server call) or modify the .acf and mark the call with the [async] attribute. If you go the latter route you can then make multiple calls on a single client thread. Note that asynchronous RPC is a fair bit more complicated than synchronous RPC due to needing to deal with call completions.
Making calls to multiple server instances (even local instances) is also made more complicated by the fact that you will have to somehow discover those endpoints, and the RPC namespace functions (RpcNs*) are no longer available as of Windows Vista.
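If each server instance registers its own well-known endpoint name, the discovery problem goes away and the client can simply loop over the endpoints. A rough sketch (the endpoint names and the Ping stub are hypothetical; Ping would need to be declared with an explicit binding handle in the IDL):

```cpp
// Sketch: calling the same RPC function on several local server
// instances over ncalrpc, one explicit binding per endpoint.
#include <rpc.h>
#include <cstdio>
#pragma comment(lib, "Rpcrt4.lib")

void Ping(RPC_BINDING_HANDLE binding); // hypothetical MIDL-generated stub

void callAllServers()
{
    const wchar_t* endpoints[] = { L"demo_ep_1", L"demo_ep_2", L"demo_ep_3" };

    for (const wchar_t* ep : endpoints)
    {
        RPC_WSTR stringBinding = nullptr;
        RPC_BINDING_HANDLE binding = nullptr;

        // Build "ncalrpc:[demo_ep_N]" and turn it into a binding handle.
        if (RpcStringBindingComposeW(nullptr, (RPC_WSTR)L"ncalrpc", nullptr,
                                     (RPC_WSTR)ep, nullptr,
                                     &stringBinding) != RPC_S_OK)
            continue;

        if (RpcBindingFromStringBindingW(stringBinding, &binding) == RPC_S_OK)
        {
            RpcTryExcept
            {
                Ping(binding); // same call, executed on this instance
            }
            RpcExcept(1)
            {
                printf("call to %ws failed: %lu\n", ep, RpcExceptionCode());
            }
            RpcEndExcept
            RpcBindingFree(&binding);
        }
        RpcStringFreeW(&stringBinding);
    }
}
```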

Window messages vs. COM connection points

I would like to communicate between two processes running on the same machine.
I do not have the luxury of using any sort of general IPC (e.g. shared memory, pipes, sockets, etc.).
I am able to use window messages to communicate between the two processes.
Please advise: would it be faster to use COM connection points rather than window messages?
Are COM connection points also based on the window message queue?
Please advise: would it be faster to use COM connection points rather than window messages?
It largely depends on how you use Windows messages to communicate between processes.
For simple cases like calling a COM method without arguments, a synchronous inter-process call will not be faster than using SendMessage directly, because of the reason explained below.
Are COM connection points also based on the window message queue?
They are not based on the window message queue. A COM connection point is just a convention for implementing outgoing COM interfaces. However, the COM inter-process marshaller does indeed use hidden windows and private messages to marshal calls when it comes to making an out-of-proc call on a connection-point interface.
This is not specific to connection points and applies to any COM proxy interface you may have cached. Normally, you need to have a functional message loop inside both client and server processes for this to work properly.
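As a minimal illustration, the pump both processes need might look like this (a bare STA sketch, not a complete program):

```cpp
// Minimal message pump sketch: the thread that owns the COM apartment
// needs something like this, or cross-process COM calls (including
// connection-point callbacks) will not be dispatched.
#include <windows.h>
#include <objbase.h>

int run()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED); // STA

    // ... create or connect to COM objects here ...

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessageW(&msg); // routes COM's private messages to its hidden windows
    }

    CoUninitialize();
    return (int)msg.wParam;
}
```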

Web server and ZeroMQ patterns

I am running an Apache server that receives HTTP requests and connects to a daemon script over ZeroMQ. The script implements the Multithreaded Server pattern (http://zguide.zeromq.org/page:all#header-73): it successfully receives the request, dispatches it to one of its worker threads, performs the action, and responds back to the server, and the server responds back to the client. Everything is done synchronously, as the client needs to receive a success or failure response to its request.
As the number of users grows into the few thousands, I am looking into potentially improving this. The first thing I looked at is the different ZeroMQ patterns, and whether what I am using is optimal for my scenario. I've read the guide, but I find it challenging to understand all the details and differences across patterns. I was looking, for example, at the Load Balancing Message Broker pattern (http://zguide.zeromq.org/page:all#header-73). It seems quite a bit more complicated to implement than what I am currently using, and if I understand things correctly, its advantages are:
Actual load balancing vs the round-robin task distribution that I currently have
Asynchronous requests/replies
Is that everything? Am I missing something? Given the description of my problem and its synchronous requirement, what would you say is the best pattern to use? Lastly, how would the answer change if I want to make my setup distributed (i.e. having the Apache server load-balance the requests across different machines)? I was thinking of doing that by simply creating yet another layer, based on the Multithreaded Server pattern, and having that layer bridge the communication between the web server and my workers.
Some thoughts about the subject...
Keep it simple
I would try to keep things simple and "plain" ZeroMQ as long as possible. To increase performance, I would simply change your backend script to send requests out from a DEALER socket and move the request-handling code into its own program. Then you could run multiple worker servers on different machines to get more requests handled.
I assume this was the approach you took:
I was thinking of doing that by simply creating yet another layer, based on the Multithreaded Server pattern, and have that layer bridge the communication between the web server and my workers.
The only problem here is that there is no request retry in the backend. If a worker fails to handle a given task, it is lost forever. However, you could write the worker servers so that they handle all the requests they have received before shutting down. With this kind of setup it is possible to update backend workers without clients noticing any outages. This will not save requests that are lost if a server crashes.
I have the feeling that in common scenarios this kind of approach would be more than enough.
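As a concrete sketch of that extra layer (endpoints hypothetical), the classic ROUTER/DEALER device from the guide is enough to fan requests out to workers on other machines and route the replies back:

```cpp
// Sketch of the "yet another layer" idea: a ROUTER/DEALER device sitting
// between the web-facing script and worker servers on other machines.
// Endpoints are hypothetical; uses the plain libzmq C API.
#include <zmq.h>

int main()
{
    void* ctx = zmq_ctx_new();

    // Frontend: the web-facing script connects here.
    void* frontend = zmq_socket(ctx, ZMQ_ROUTER);
    zmq_bind(frontend, "tcp://*:5559");

    // Backend: worker servers on any machine connect here.
    void* backend = zmq_socket(ctx, ZMQ_DEALER);
    zmq_bind(backend, "tcp://*:5560");

    // Blocks forever, shuttling requests out to workers round-robin
    // and routing replies back to the right requester.
    zmq_proxy(frontend, backend, nullptr);

    zmq_close(frontend);
    zmq_close(backend);
    zmq_ctx_destroy(ctx);
    return 0;
}
```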
Mongrel2
Mongrel2 seems to handle many of the things you have already implemented. It might be worthwhile to check it out. It probably does not completely solve your problem, but it provides tested infrastructure for distributing the workload. It could be used to deliver requests to multithreaded servers running on different machines.
Broker
One solution to increase the robustness of the setup is a broker. In this scenario the broker's main role would be to provide robustness by implementing a queue for the requests. I understood that all the requests the workers handle are basically of the same type. If requests had different types, the broker could also do lookups to find the correct server for each request.
Using a queue provides a way to ensure that every request is handled by some worker even if worker servers crash. This does not come without a price: the broker is itself a single point of failure. If it crashes or is restarted, all messages could be lost.
These problems can be avoided, but it requires quite a lot of work: requests could be persisted to disk, and the broker servers could be clustered. The need has to be weighed against the payoff. Do you want to spend your time writing a message broker or the actual system?
If a message broker seems like a good idea, the time required to implement one can be reduced by using an existing product (like RabbitMQ). The negative side effect is that it may come with a lot of unwanted features, and adding new things is not as straightforward as with a self-made broker.
Writing your own broker could turn into reinventing the wheel. Many brokers provide similar things: security, logging, a management interface, and so on. It seems likely that these will eventually be needed in a home-made solution as well. But if not, a single home-made broker that does one thing and does it well can be a good choice.
Even if a broker product is chosen, I think it is a good idea to hide the broker behind a ZeroMQ proxy: a dedicated piece of code that sends/receives messages from the broker. Then no other part of the system has to know anything about the broker, and it can easily be replaced.
Using a broker is somewhat heavy on developer time. You either need time to implement the broker or time to get used to a product. I would avoid this route until it is clearly needed.
Some links
Comparison between broker and brokerless
RabbitMQ
Mongrel2

Spring Batch or JMS for long running jobs

I have the problem that I have to run very long-running processes from my web service, and now I'm looking for a good way to handle the results. The scenario: a user executes such a long-running process via the UI. He then gets the message that his request was accepted and that he should return some time later, so there's no need to display the status of his request or anything like that. I'm just looking for a way to handle the result of the long-running process properly. Since the processes are external programs, my application server is not aware of them; therefore I have to wait for these programs to terminate. Of course I don't want to use EJBs for this, because they would block for as long as no result is available. Instead I thought of using JMS or Spring Batch. Has anyone had the same problem, or advice on which solution would be better?
It really depends on what forms of communication your external programs have available. JMS is a very good approach and immediately available in your app server, but it might not be the best option if your external program is a long-running DB query which dumps its result into a text file...
The main advantage of Spring Batch over "just" using JMS as an asynchronous communications channel is its transactional properties, allowing the infrastructure to retry failed jobs, group jobs together, and such. Without knowing more about your specific setup, it is hard to give detailed advice.
I had a similar design requirement: users were sending XML files and I had to generate documents from them. Using JMS in this case is advantageous, since you can always add new instances of the processes that consume and execute the jobs in parallel.
You can use a timer task to check the status of, or monitor, these processes. You can also publish a message to a JMS queue once the processes are completed.
