Is it possible to use the netlink API to get the link status immediately (without waiting for the kernel to emit a notification)? I have been using ioctl SIOCGIFFLAGS to get the status of the link. Is it possible to do the same with netlink, or is it a purely event-based mechanism? All of the examples I could find used recvmsg() in a while() loop or with select(); all of these are event-driven, but I want to query the status on demand. My pseudocode for netlink would look like this:
int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
bind(fd, (struct sockaddr*)&local_addr, sizeof(local_addr));
// send a request so the kernel responds with an answer (which contains the link status)
ssize_t sent = sendmsg(fd, &msg, 0);
// read the answer
ssize_t received = recvmsg(fd, &msg, MSG_DONTWAIT);
// parse the message to get the link status
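To make this a bit more concrete, this is roughly the request/response exchange I have in mind (only a sketch, assuming an RTM_GETLINK request for a single interface; untested):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <net/if.h>

// Query the kernel for one interface and return its ifi_flags
// (IFF_UP, IFF_RUNNING, ...), or -1 on error.
// fd is the already bound NETLINK_ROUTE socket from above.
static int get_link_flags(int fd, const char *ifname)
{
    struct {
        struct nlmsghdr  nlh;
        struct ifinfomsg ifi;
    } req;
    char buf[8192];

    memset(&req, 0, sizeof(req));
    req.nlh.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg));
    req.nlh.nlmsg_type  = RTM_GETLINK;
    req.nlh.nlmsg_flags = NLM_F_REQUEST;           // ask for an immediate answer
    req.ifi.ifi_family  = AF_UNSPEC;
    req.ifi.ifi_index   = if_nametoindex(ifname);

    if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0)
        return -1;

    ssize_t len = recv(fd, buf, sizeof(buf), 0);   // read the reply
    if (len < 0)
        return -1;

    // the answer is an RTM_NEWLINK message whose payload carries the flags
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    if (NLMSG_OK(nlh, (unsigned int)len) && nlh->nlmsg_type == RTM_NEWLINK)
        return ((struct ifinfomsg *)NLMSG_DATA(nlh))->ifi_flags;

    return -1;
}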
Is it possible with the netlink API? Thanks!
I am trying to implement an inter-process communication.
The model: Part A -> Sends messages to Part B.
I have implemented this using the Client-Server example from the ZMQ tutorial (code attached below), but I am facing an issue where the process gets blocked.
What is the best practice to implement this kind of model?
It is not a classic "Client-Server" setup; really just one part sends data to the second part, and the second part uses it.
Is there an option to send a message with a timeout, so that it will not block the process?
Any input / example would be much appreciated!
Server:
zmq::context_t context(1);
zmq::socket_t socket(context, ZMQ_REP);
socket.bind("tcp://*:5555");
..
socket.recv(&request);  // server receives first
socket.send(reply);     // then sends the reply to the client
..                      // analyze the received data
Client:
requester = context.socket(ZMQ.REQ);
requester.connect("tcp://localhost:5555");
requester.send(str.getBytes(), 0); // CLIENT.sends
byte[] reply = requester.recv(0); // CLIENT.receives
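To illustrate what I mean by "not locking the process": something like a per-socket timeout on send/receive is what I am after. A sketch with the plain C API (I am guessing the ZMQ_SNDTIMEO / ZMQ_RCVTIMEO socket options are the relevant knobs here, but I am not sure this is the right approach):

#include <zmq.h>
#include <string.h>

int main(void)
{
    void *ctx  = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_REQ);

    // Give up after 100 ms instead of blocking the process forever.
    int timeout_ms = 100;
    zmq_setsockopt(sock, ZMQ_SNDTIMEO, &timeout_ms, sizeof(timeout_ms));
    zmq_setsockopt(sock, ZMQ_RCVTIMEO, &timeout_ms, sizeof(timeout_ms));

    zmq_connect(sock, "tcp://localhost:5555");

    const char *msg = "hello";
    if (zmq_send(sock, msg, strlen(msg), 0) == -1)
    {
        // timed out (or failed) -- handle it here instead of staying blocked
    }

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}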
I am working on a project that uses MPI routines and multiple threads for sending and receiving messages. I would like each receiving thread to focus on a different incoming message instead of having two or more trying to receive the same one. Is there a way to achieve this?
I don't know if this helps but I am currently using Iprobe() to check for incoming messages and Irecv() with Test() to check if the thread has received the whole message.
Starting with version 3 of the standard, MPI allows for the removal of matched messages from the message queue so that they are no longer visible to subsequent probes/receives. This is done using the so-called matched probes. Just replace MPI_Iprobe with MPI_Improbe, which is the non-blocking matched probe operation:
int flag;
MPI_Status status;
MPI_Message msg;
MPI_Improbe(source, tag, comm, &flag, &msg, &status);
Once MPI_Improbe returns 1 in flag, a message matching (source, tag, comm) has arrived. A handle to the message is stored into msg and the message is removed from the queue. Subsequent probes or receives with a matching (source, tag, comm) triplet - by the same thread or in another - won't see the same message again and therefore won't interfere with its reception by the thread that matched it originally.
To receive a matched message, use MPI_Imrecv (or the blocking MPI_Mrecv):
MPI_Request req;
MPI_Imrecv(buffer, count, dtype, &msg, &req);
do
{
...
MPI_Test(&req, &flag, &status);
}
while (!flag);
Versions of MPI before 3.0 do not provide similar functionality. But, if I understand you correctly, you only need to guarantee that no matching probe will be posted before MPI_Irecv has had the opportunity to remove the message from the queue (which is what matched probe+receive is meant to prevent). If you are probing in a master thread and then dispatching the messages to different threads, then you could use a semaphore to delay the execution of the next probe by the main thread until after the worker has issued MPI_Irecv. If you have multiple threads doing probe+receive, then you may simply issue the MPI_Irecv call in the same critical section (or whatever synchronisation primitive you use to achieve the serialisation of the MPI calls as required by MPI_THREAD_SERIALIZED) as MPI_Iprobe once the probe turns out successful:
// Worker thread
CRITICAL(mpi)
{
MPI_Iprobe(source, tag, comm, &flag, &status);
if (flag)
MPI_Irecv(buffer, count, dtype, status.MPI_SOURCE, status.MPI_TAG, comm, &req);
}
Replace the CRITICAL(name) { ... } notation with whatever primitives your programming environment provides.
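For example, if the serialisation is implemented with POSIX threads, the critical section could simply be a mutex shared by all threads that make MPI calls (a minimal sketch; the mutex name mpi_mutex is my own, and the variables mirror the snippet above):

#include <pthread.h>
#include <mpi.h>

static pthread_mutex_t mpi_mutex = PTHREAD_MUTEX_INITIALIZER;

// Worker thread: probe and, if a message is pending, post the matching
// receive inside the same locked region so that no other thread can
// probe and pick up the same message in between.
void try_receive(void *buffer, int count, MPI_Datatype dtype,
                 int source, int tag, MPI_Comm comm, MPI_Request *req)
{
    int flag;
    MPI_Status status;

    pthread_mutex_lock(&mpi_mutex);
    MPI_Iprobe(source, tag, comm, &flag, &status);
    if (flag)
        MPI_Irecv(buffer, count, dtype, status.MPI_SOURCE, status.MPI_TAG,
                  comm, req);
    pthread_mutex_unlock(&mpi_mutex);
}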
If I understand you correctly, it's not a matter of how you receive messages but of how you send them. As you can see below, the MPI_Send function has a destination parameter which defines which rank (process) the message will be sent to.
MPI_Send(
void* data,
int count,
MPI_Datatype datatype,
int destination,
int tag,
MPI_Comm communicator)
So if you want certain threads to receive certain messages, you have to send those messages only to the corresponding destination.
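A sketch of one way to map this onto threads (the tag convention here is my own assumption, not something stated above: give each receiving thread its own tag, so that its receive only ever matches messages meant for it):

#include <mpi.h>

// Tags that both sides agree on; one tag per receiving thread
// (these particular values are just an example).
enum { THREAD_0_TAG = 100, THREAD_1_TAG = 101 };

// Sender side: address the message to a rank, and use the tag to say
// which of that rank's receiving threads the message is meant for.
void send_to_thread0(int dest_rank, const int *data, int count)
{
    MPI_Send((void *)data, count, MPI_INT, dest_rank, THREAD_0_TAG, MPI_COMM_WORLD);
}

// Receiver side, executed only by the thread that owns THREAD_0_TAG.
void recv_in_thread0(int *buf, int count)
{
    MPI_Status status;
    MPI_Recv(buf, count, MPI_INT, MPI_ANY_SOURCE, THREAD_0_TAG, MPI_COMM_WORLD, &status);
}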
I created an SSH agent (similar to PuTTY's pageant.exe) which has a predefined protocol: Authentication requests are sent to the agent window via WM_COPYDATA containing the name of a file mapping:
// mapname is supplied via WM_COPYDATA
HANDLE filemap = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, mapname);
Is it possible to find out which process (ultimately, the process name) created a particular file mapping?
I can use GetSecurityInfo on "filemap" to get the security attributes (SID, GID, ...), but how do I get the process itself?
Important note: It is NOT possible to change the protocol (e.g. add information about the sender to WM_COPYDATA) because this is the predefined protocol used by all PuTTY-like applications!
Don't try to find the process by file handle; that is complicated, because you would have to enumerate every process and inspect its open handles. The WM_COPYDATA message sends you the handle of the sender window, and a call to GetWindowThreadProcessId on that handle should give you your answer. Keep in mind that WM_COPYDATA can be used to communicate between 32-bit and 64-bit processes, so your process may live in a different address space than the caller.
Edit:
You receive the sender's HWND in the wParam of WM_COPYDATA, so you only have to use that HWND to get the process ID:
switch (uiMsg)
{
case WM_COPYDATA:
{
DWORD theProcessID;
GetWindowThreadProcessId((HWND) wParam, &theProcessID);
COPYDATASTRUCT *pMyCDS = (PCOPYDATASTRUCT) lParam;
/*...*/
}
/*...*/
}
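To go from the process ID to the process name (the ultimate goal in the question), one possible continuation is OpenProcess followed by QueryFullProcessImageName (Vista and later); a sketch with minimal error handling:

#include <windows.h>

// Resolve a process ID to the full path of its executable.
// Returns TRUE on success and fills path (pathLen = size of path in characters).
BOOL GetProcessImagePath(DWORD processID, WCHAR *path, DWORD pathLen)
{
    HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, processID);
    if (hProcess == NULL)
        return FALSE;

    BOOL ok = QueryFullProcessImageNameW(hProcess, 0, path, &pathLen);
    CloseHandle(hProcess);
    return ok;
}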
I am working on a TCP client/server application in C++. Third-party libraries are not allowed in this project.
The exchange between client and server uses a well-defined protocol format. Once the client receives a packet it hands it over for parsing; I have a protocol manager which takes care of the parsing activity.
I have the following doubt:
When data arrives at the client from the network, the OS buffers it until the application calls recv().
So if two messages msg1 and msg2 arrive in that buffer, a single call to recv() will return msg1+msg2, which may make the parsing fail.
My queries:
1. Is the above assumption correct?
2. If it is correct, how can I resolve this issue?
Revathy,
What you need to do here is use a fixed-length packet, or at least a fixed-length header followed by variable-length data.
The header should contain the size of the packet. So in your receive logic you always read the header bytes first, decode the packet size from them, and then read the rest of the packet with another recv() call.
This way, even when the TCP layer has buffered any number of packets, you will be able to read them correctly:
unsigned char* pBuffer = new unsigned char[MESSAGE_HEADER_LENGTH];
// reading the fixed-size header from the socket
int nRet = recv(sock, (char*)pBuffer, MESSAGE_HEADER_LENGTH, 0);
// decode the packet length from the header (protocol-specific)
int nDataLen = 0; // e.g. an integer field stored in pBuffer
// reading the variable-size body from the socket
unsigned char* pPacket = new unsigned char[nDataLen];
nRet = recv(sock, (char*)pPacket, nDataLen, 0);
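One caveat (my addition): on a stream socket recv() may return fewer bytes than asked for, so each of the reads above should really loop until the expected number of bytes has arrived. A POSIX-flavoured sketch of such a helper:

#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

// Keep calling recv() until exactly len bytes have been read.
// Returns 0 on success, -1 on error or if the peer closed the connection.
static int recv_all(int sock, unsigned char *buf, size_t len)
{
    size_t got = 0;
    while (got < len)
    {
        ssize_t n = recv(sock, (char *)buf + got, len - got, 0);
        if (n <= 0)          // 0 means the peer closed the connection
            return -1;
        got += (size_t)n;
    }
    return 0;
}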
In TCP, you cannot see packet boundaries, so if both packets arrive before you get to call recv(), you will get both packets' contents in a single go.
In UDP, packet boundaries are preserved, so each call to recv() gives back one packet.
I'm interested in the behavior of send function when using a blocking socket.
The manual specifies nothing about this case explicitly.
From my tests (and the documentation) it appears that when using send() on a blocking socket there are two cases:
all the data is sent
an error is returned and nothing is sent
In code (C, for example) this translates to:
// everything is allocated and initialized
int socket_fd;
char *buffer;
size_t buffer_len;
ssize_t nret;
nret = send(socket_fd, buffer, buffer_len, 0);
if(nret < 0)
{
// error - nothing was sent (at least we cannot assume anything)
}
else
{
// in the case of a blocking socket everything is sent (nret == buffer_len)
}
Am I right?
I'm interested in this behavior on all platforms (Windows, Linux, *nix).
From the man page (http://linux.die.net/man/2/send):
"On success, these calls return the number of characters sent. On error, -1 is returned, and errno is set appropriately. "
You have three conditions.
-1 is a local error in the socket or its binding.
Some number < the length: not all the bytes were sent. This is usually the case when the socket is marked non-blocking and the requested operation would block; the errno value is EAGAIN.
You probably won't see this because you're doing blocking I/O.
However, the other end of the socket could close the connection prematurely, which may lead to this. The errno value would probably be EPIPE.
Some number == the length: all the bytes were sent.
My understanding is that a blocking send need not be atomic, see for example the Solaris send man page:
For socket types such as SOCK_DGRAM and SOCK_RAW that require atomic messages,
the error EMSGSIZE is returned and the message is not transmitted when it is
too long to pass atomically through the underlying protocol. The same
restrictions do not apply to SOCK_STREAM sockets.
And also look at the EINTR error code there:
The operation was interrupted by delivery of a signal before any data could
be buffered to be sent.
Which indicates that send can be interrupted after some data has been buffered to be sent - but in that case send would return the number of bytes that have already been buffered to be sent (instead of an EINTR error code).
In practice I would only expect to see this behaviour for large messages (that can not be handled atomically by the operating system) on SOCK_STREAM sockets.
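Given that, the usual defensive pattern for SOCK_STREAM sockets (my addition, not part of either answer above) is to loop until the whole buffer has been handed to the kernel; a POSIX-flavoured sketch:

#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

// Call send() repeatedly until the whole buffer has been accepted by the
// kernel. Returns 0 on success, -1 on error (errno is left as send() set it).
static int send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;
        sent += (size_t)n;
    }
    return 0;
}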