What is the relationship between RPC, IPC, and named pipes? - windows

In essence, how exactly do RPC, IPC, and named pipes work together within a network? I am currently looking at how processes on different Microsoft hosts can communicate with each other using named pipes, but I do not understand what is happening over the network. Some articles talk about "RPC over SMB", but how does that relate to named pipes? And how does this communication relate to the use of filesystem shares?

IPC is a general term, InterProcess Communication. It encompasses any method for one process to communicate with another, sometimes on the same machine, sometimes over a network.
Named pipes are simply a particular means of IPC. They are in many ways akin to TCP/IP, although generally only used on local networks rather than on the global internet.
RPC (Remote Procedure Call) is a protocol layered atop a particular IPC implementation. It enables a calling process to issue a function call that looks like any ordinary call in the given language and have that call handled by another process (again, whether on the same machine or over the network). RPC can be implemented on top of named pipes, TCP/IP, and other lower-level network protocols. It can also be implemented on a local machine using shared-memory facilities provided by the operating system.
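To make the named-pipe layer concrete, here is a minimal Win32 server sketch in C++; the pipe name "demo" and the ping/pong exchange are made up for illustration, and error handling is trimmed. A local client opens \\.\pipe\demo, while a client on another host opens \\server\pipe\demo, and that remote open is carried over SMB; "RPC over SMB" simply means an RPC runtime is using such a remote named pipe as its transport.

    // Minimal Win32 named-pipe server sketch (illustrative names, trimmed error handling).
    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE pipe = CreateNamedPipeA(
            "\\\\.\\pipe\\demo",                 // remote clients see \\server\pipe\demo
            PIPE_ACCESS_DUPLEX,                  // read and write
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            1,                                   // single instance
            4096, 4096,                          // output/input buffer sizes
            0,                                   // default timeout
            nullptr);                            // default security attributes
        if (pipe == INVALID_HANDLE_VALUE) return 1;

        // Block until a client connects; a remote client's open arrives over SMB.
        if (ConnectNamedPipe(pipe, nullptr) || GetLastError() == ERROR_PIPE_CONNECTED) {
            char buf[256];
            DWORD got = 0;
            if (ReadFile(pipe, buf, sizeof(buf) - 1, &got, nullptr)) {
                buf[got] = '\0';
                std::printf("request: %s\n", buf);
                const char reply[] = "pong";
                DWORD wrote = 0;
                WriteFile(pipe, reply, sizeof(reply), &wrote, nullptr);
            }
        }
        DisconnectNamedPipe(pipe);
        CloseHandle(pipe);
        return 0;
    }

A client just calls CreateFileA on the same pipe path and then uses ReadFile/WriteFile as if it were a file; an RPC runtime layers its call/reply marshalling on top of exactly this kind of byte stream. That also answers the filesystem-share part of the question: remotely, the pipe namespace is exposed through the special IPC$ share over SMB, not through a regular file share.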

Related

ruby scrape with multiple ip addresses

I would like to know if it is possible for a Ruby program to possess multiple IP addresses? I am trying to download a lot of data from a site, but it is very slow with only 1 connection at a time.
I intend to multi-thread my program with each thread using its own IP address, but I do not know if it is possible in the first place. Any help or hints would be greatly appreciated.
It is definitely possible for a machine or a program to have multiple IP addresses. You can even have multiple network adapters, and tie each of them to different physical connections.
However, it can get really hairy to maintain. The challenge for that is partly in the code, partly in the system maintenance, and partly in the networking required to make that happen.
A better approach that you can take is to design your program so that it can run distributed. As such, you can have several copies of it synchronized and doing the work in parallel. You can then scale it horizontally (build more copies) as required, and over different machines and connections if required.
EDIT: You mentioned that you cannot scale horizontally, and that you prefer to use multiple connections from the same machine.
It's very likely that for this you'll have to go a little lower in the network stack, building the connection yourself with sockets so that you can bind it to a specific network interface (see the sketch after the links below for the underlying idea).
Check out an introduction to Ruby sockets.
Also, check out these related questions:
How does a socket know which network interface controller to use?
Binding to networking interfaces in ruby
Ruby: Binding a listening socket to a specific interface
Can I make ruby send network traffic over a specific iface?
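All of the links above come down to the same underlying mechanism: before connecting, bind the socket to the local address of the interface you want the traffic to leave through. As a language-neutral illustration, here is a minimal C++/POSIX sketch of that idea; the local address 192.0.2.10, remote address 203.0.113.5, and port 80 are placeholders.

    // Sketch: make an outgoing TCP connection use a specific local interface by
    // binding to that interface's address before connect(). Addresses are placeholders.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        // Bind to the address assigned to the interface we want to go out on.
        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_port = 0;                                  // any ephemeral port
        inet_pton(AF_INET, "192.0.2.10", &local.sin_addr);
        if (bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local)) < 0) {
            perror("bind"); return 1;
        }

        // Connect as usual; the kernel now sources the connection from that address.
        sockaddr_in remote{};
        remote.sin_family = AF_INET;
        remote.sin_port = htons(80);
        inet_pton(AF_INET, "203.0.113.5", &remote.sin_addr);
        if (connect(fd, reinterpret_cast<sockaddr*>(&remote), sizeof(remote)) < 0) {
            perror("connect"); return 1;
        }
        std::puts("connected via the chosen interface");
        close(fd);
        return 0;
    }

Ruby's Socket class exposes the same bind-then-connect sequence, so each of your threads can create its own socket bound to a different local address.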

Simplest C++ library that supports distributed messaging - Observer Pattern

I need to do something relatively simple, and I don't really want to install a message-oriented middleware (MOM) like RabbitMQ.
There are several programs that "register" with a central "service" server through TCP. The only function of the server is to call back all the registered clients when they all in turn say "DONE". So it is a kind of "join" (edit: barrier) for distributed client processes. When all clients say "DONE" (they can be done at totally different times), the central server messages them all saying "ALL-COMPLETE". The clients "block" until asynchronously called back.
So this is a kind of distributed asynchronous Observer pattern. The server has to keep track of where the clients are somehow; it is OK for the client to pass its IP address to the server, etc. It is constructable with things like Boost::Signal, Boost::Asio, Boost::Dataflow, etc., but I don't want to reinvent the wheel if something simple already exists. I got very close with ZeroMQ, but none of their patterns support this use case very well, AFAIK.
Is there a very simple system that does this? Notice that the server can be written in any language. I just need C++ bindings for the clients.
After much searching, I used this library: https://github.com/actor-framework
It turns out that doing this with this framework is relatively straightforward. The only real "impediment" to using it is that the library seems to have gone through an API transition recently and the documentation PDF has not completely caught up with the source. No biggie, since the example programs and the source (.hpp) files get you over this hump; still, they need to bring the docs in sync with the source. In addition, IMO they need to provide more interesting examples of how to use C++ actors for extreme performance. For my case it is not needed, but the shared-nothing idea behind actors is one of the reasons people use them instead of shared-memory communication between threads.
Also, the syntax the library enforces (get used to lambdas!) can be a bit of a mind-twister at first if you are not used to state-of-the-art C++11 code. Beyond that, the only other caveat was the trivial matter of keeping track of all the clients that registered with the server.
STRONGLY RECOMMENDED.
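For what it's worth, the bookkeeping the barrier server has to do is small whichever messaging layer carries the traffic. Here is a transport-agnostic C++ sketch of that logic; ClientId and the deliver callback are placeholders for whatever CAF, Asio, or plain sockets would provide in a real implementation.

    // Transport-agnostic sketch of the distributed "join"/barrier logic.
    #include <cstdio>
    #include <functional>
    #include <set>
    #include <string>
    #include <utility>

    using ClientId = std::string;  // e.g. "ip:port" reported at registration

    class BarrierServer {
    public:
        explicit BarrierServer(std::function<void(const ClientId&, const std::string&)> deliver)
            : deliver_(std::move(deliver)) {}

        void on_register(const ClientId& c) { registered_.insert(c); }

        void on_done(const ClientId& c) {
            done_.insert(c);
            // Barrier condition: every registered client has reported DONE.
            if (!registered_.empty() && done_ == registered_) {
                for (const auto& client : registered_)
                    deliver_(client, "ALL-COMPLETE");
                done_.clear();  // ready for the next round
            }
        }

    private:
        std::function<void(const ClientId&, const std::string&)> deliver_;
        std::set<ClientId> registered_;
        std::set<ClientId> done_;
    };

    int main() {
        BarrierServer server([](const ClientId& c, const std::string& msg) {
            std::printf("-> %s : %s\n", c.c_str(), msg.c_str());
        });
        server.on_register("10.0.0.1:5000");
        server.on_register("10.0.0.2:5000");
        server.on_done("10.0.0.1:5000");   // nothing happens yet
        server.on_done("10.0.0.2:5000");   // both done: ALL-COMPLETE is broadcast
        return 0;
    }

In an actor-based implementation the same state lives inside a single server actor and on_register/on_done become message handlers, which is why a framework like the one above makes this problem feel straightforward.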

Does Mac OS X have a Microsoft RPC equivalent?

Microsoft RPC provides an IPC mechanism that can be done in a function-calling manner. This has been extremely helpful for my project where my main service delegates tasks to a child process, and functions in the child process can be called as if they were implemented in the main service. That takes away the burden of having to serialize abstract data and define custom protocols when using other IPC mechanisms such as named pipes, sockets, protobuf, etc. I'm aware that RPC does use them internally.
I've read an article on implementing COM for Mac OS X, which is probably the closest thing to what I need. If I find no other way of implementing the type of IPC I need, I'm probably going to go with COM, but I thought I'd make sure that I'm not missing anything.
Have a look at "XPC Services". From the documentation:
XPC services are managed by launchd and provide services to a single application. They are typically used to divide an application into smaller parts. This can be used to improve reliability by limiting the impact if a process crashes, and to improve security by limiting the impact if a process is compromised.
And later in that guide:
The NSXPCConnection API is an Objective-C-based API that provides a remote procedure call mechanism, allowing the client application to call methods on proxy objects that transparently relay those calls to corresponding objects in the service helper and vice-versa.

Bypassing the TCP-IP stack

I realise this is a somewhat open ended question...
In the context of low-latency applications I've heard references to bypassing the TCP/IP stack.
What does this really mean? And assuming you have two processes on a network that need to exchange messages, what are the various options (and associated trade-offs) for doing so?
Typically the first steps are using a TCP offload engine (TOE) or a user-space TCP/IP stack such as OpenOnload.
Completely skipping TCP/IP usually means looking at InfiniBand and using RDMA verbs, or even implementing custom protocols on top of raw Ethernet.
Generally, anything that goes through the kernel adds latency, so user-space mechanisms are ideal; beyond that, the TCP/IP stack is an overhead in itself, considering all the layers and the complexity with which it can be arranged: IP families, subnetting, VLANs, IPsec, etc.
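As a rough illustration of what "custom protocols on top of raw Ethernet" looks like at the API level, here is a Linux AF_PACKET sketch in C++ that sends a frame with a hand-picked EtherType; the interface name eth0, EtherType 0x88B5 and the destination MAC are placeholders, and it needs CAP_NET_RAW. Note that a packet socket still goes through the kernel; true kernel bypass means OpenOnload, DPDK, or RDMA verbs.

    // Sketch: send one frame with a custom EtherType, skipping IP/TCP entirely.
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // SOCK_DGRAM packet sockets let the kernel build the Ethernet header for us.
        int fd = socket(AF_PACKET, SOCK_DGRAM, htons(0x88B5));
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_ll addr{};
        addr.sll_family   = AF_PACKET;
        addr.sll_protocol = htons(0x88B5);              // custom EtherType
        addr.sll_ifindex  = if_nametoindex("eth0");     // NIC to send on
        addr.sll_halen    = ETH_ALEN;
        const unsigned char dst[ETH_ALEN] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01};
        std::memcpy(addr.sll_addr, dst, ETH_ALEN);

        const char payload[] = "application-defined message, no IP or TCP headers";
        if (sendto(fd, payload, sizeof(payload), 0,
                   reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) < 0) {
            perror("sendto");
        }
        close(fd);
        return 0;
    }

Everything TCP normally gives you (ordering, retransmission, flow control) becomes your application's problem, which is the main trade-off of going this low.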
This is not a direct answer to your question, but I thought it might give you another view on this topic.
Before trying to bypass TCP-IP stack I would suggest researching proven real-time communication middleware.
One good solution for real-time communication is the Data Distribution Service (DDS) from the OMG (Object Management Group).
DDS offers a dozen or so quality-of-service attributes and has bindings for various languages.
It has LATENCY_BUDGET, TRANSPORT_PRIORITY and many other quality-of-service attributes that make data distribution very easy and fast.
Check out an implementation of the DDS standard by PrismTech. It is called OpenSplice and works well at LAN scale.
Depends on the nature of your protocol really.
If by low-latency applications you mean electronic trading systems, then they normally use UDP/IP multicast for market data, for example via Pragmatic General Multicast. This is mostly because there is one sender and many receivers of the data: using TCP would require sending a copy of the data to each recipient individually, requiring more bandwidth and increasing latency.
Trading connections traditionally use TCP with application-level heartbeats because the connection needs to be reliable and connection loss must be detected promptly.
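To make the market-data side concrete, joining a multicast group is only a few lines of socket code; the group 239.1.1.1 and port 30001 below are placeholders (real feeds publish theirs in the exchange specifications), and this sketch shows plain UDP multicast rather than the reliability layer something like PGM adds on top.

    // Sketch: receive one datagram from a UDP multicast market-data feed.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int reuse = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));

        // Bind to the feed's port on any local address.
        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(30001);
        if (bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local)) < 0) {
            perror("bind"); return 1;
        }

        // Join the group: one sender, any number of receivers, no per-receiver copies.
        ip_mreq mreq{};
        inet_pton(AF_INET, "239.1.1.1", &mreq.imr_multiaddr);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
            perror("IP_ADD_MEMBERSHIP"); return 1;
        }

        char buf[1500];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        std::printf("received %zd bytes of market data\n", n);
        close(fd);
        return 0;
    }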

For interprocess communication, which of the following techniques is used: message passing or remote procedure call? Is the answer the same irrespective of the architecture?
You can use either for interprocess communication on most systems without much trouble at all. Often message passing can be implemented with less overhead, but that tends to be system-specific.
