I have 2 cases:
The client connects, sends no bytes, and waits for a server response.
The client connects, sends 1 or more bytes, and waits for a server response.
The problem is this:
In the 1st case I should read no bytes and then send some server response.
In the 2nd case I should read at least 1 byte and only then send the server response.
If I try to read at least 0 bytes, like this:
async_read(sock, boost::asio::buffer(data),
boost::asio::transfer_at_least(0),
boost::bind(&server::read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
I will never get a proper server response in the 2nd case.
But if I read at least 1 byte, then in the 1st case this async_read operation will never complete.
So, how can I handle these cases?
Update 1:
I'm still searching for a solution that does not rely on a time limit.
How do you expect this to work? Does the response vary between the first and second case? If it does vary, you cannot do this reliably because there is a race condition and you should fix the protocol. If it does not vary, the server should just send the response.
The solution to this problem lies in the protocol, not in asio.
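For example (this is only an illustration of "fixing the protocol", not from the original posts): if the client always sends a fixed one-byte header first, saying whether any payload follows, the server can always start with a read of exactly one byte and the ambiguity disappears. A minimal sketch, where header_ is an assumed 1-byte member buffer and on_header is a hypothetical handler:

// Hypothetical: on_header replies immediately when header_ == 0 (case 1)
// and reads the payload first when header_ != 0 (case 2).
boost::asio::async_read(sock, boost::asio::buffer(&header_, 1),
    boost::asio::transfer_exactly(1),
    boost::bind(&server::on_header, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));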
Here is what I did:
boost::system::error_code ec;
while (ec != boost::asio::error::eof) {
    vector<char> socketBuffer(messageSize);
    size_t buffersize = socket->read_some(
        boost::asio::buffer(socketBuffer), ec);
    if (buffersize > 0) {
        cout << "received" << endl;
        for (unsigned int i = 0; i < buffersize; ++i) {
            cout << socketBuffer.at(i);
        }
        cout << endl;
    }
}
if (ec && ec != boost::asio::error::eof) {
    cout << "error " << ec.message() << endl;
}
socket->close();
That is a small snippet of my code; I hope it helps. You can read a set amount of data, append it to a vector of bytes if you like, and process it later.
Hope this helps
I guess you need to use the data the client sends to build the proper server response in case 2, and maybe send a default response in case 1.
So there is no time limit on how long after connecting the client may wait before sending data? Maybe you should start a timer when the server accepts the connection. If the server receives data before the timer expires, it's case 2; if the timer expires first, it's case 1.
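A rough sketch of that timer idea with asio (only an illustration, not from the original post; socket_, timer_ and data_ are assumed to be members of the session object, and send_default_response / handle_client_data are hypothetical helpers):

timer_.expires_from_now(boost::posix_time::seconds(2));   // assumed time limit
timer_.async_wait([this](const boost::system::error_code& ec) {
    if (!ec)                      // timer fired and was not cancelled: case 1
        send_default_response();
});
socket_.async_read_some(boost::asio::buffer(data_),
    [this](const boost::system::error_code& ec, std::size_t n) {
        timer_.cancel();          // data arrived before the timer: case 2
        if (!ec && n > 0)
            handle_client_data(n);
    });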
Related
In OMNeT++ with the INET framework, I want to find out how many packets are received from each sending node. I found the code below. Can anyone tell me what the function of the "it->second++" statement is in this code?
std::map<L3Address, uint64_t> recPkt;
auto it = recPkt.find(senderAddr);
if (it == recPkt.end())
recPkt[senderAddr] = 1;
else
it->second++;
Also, can anyone suggest how to display the number of received packets per node?
it is an iterator to an element of the std::map. An iterator is something like a pointer; here it points to a pair <L3Address, uint64_t>. The sender's address is the first element of this pair, and the second element is the number of packets received from that sender.
The first element of the pair is obtained with it->first and the second with it->second.
The operation recPkt.find(senderAddr) checks whether recPkt already contains an entry for the address senderAddr:
if not, it points to recPkt.end(); in that case a new entry is created and its value is set to 1 (because the first packet from that sender has just been received),
if an entry for senderAddr already exists, its counter (the second element) is incremented using it->second++.
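Side note: because std::map::operator[] value-initializes a missing counter to 0, the whole find/increment logic above can also be collapsed into a single line:

recPkt[senderAddr]++;   // creates the entry with 0 on first use, then increments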
To show the current number of received packets in the internal log window one may use:
for (auto it : recPkt) {
EV << "From address " << it.first << " received " << it.second << " packets." << std::endl;
}
However, the better way is to record these values as statistics. The best place for that is the finish() method of your module:
void YourModule::finish() {
    // ...
    for (auto it : recPkt) {
        std::string name = "Received packet from ";
        name += it.first.str();                 // address
        recordScalar(name.c_str(), it.second);  // recordScalar() takes a const char*
    }
}
Reference: C++ Reference, std::map
EDIT
One more thing. The declaration of recPkt, i.e. the line:
std::map<L3Address, uint64_t> recPkt;
must be a member of your class; recPkt cannot be declared just before use, otherwise the counts would be lost every time the handler returns.
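A minimal sketch of where that declaration belongs, assuming a typical OMNeT++ simple module (the class and method names here are only an example):

class YourModule : public cSimpleModule
{
  protected:
    std::map<L3Address, uint64_t> recPkt;   // persists across handleMessage() calls
    virtual void handleMessage(cMessage *msg) override;
    virtual void finish() override;
};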
I have a winsock2 application, running Windows on both ends. MSVC++ 2017 is my development environment. Without getting too much into the details, the two ends of it (both written by me) send small messages back and forth. Both ends run as a Windows service, and the connection is meant to stay up for hours or days.
The problem is that on one end (call it machine A), after running normally for an hour or more, the connection seems to fail. Machine A sends a message to machine B, which is received, then machine B sends a response back. Machine A has the socket in non-blocking mode, so it goes into a loop of calling recv() until data shows up. If it gets WSAEWOULDBLOCK, it checks whether a timeout interval has elapsed; if not, it loops back to the recv(). The error happens because the timeout value (3 minutes) has passed. There is no way the data could have been delayed that long; something has been hosed. A subsequent send() from machine A results in error 10054, connection reset.
As I said, this can run for hours with no problems. Other times I've seen it fail after 45 minutes or so. Both machines are configured not to go into power save mode or anything like that. Can someone suggest how I would go about diagnosing this problem?
Update: sample code. The goal is to wait until a buffer of a certain length (tlen) has been received, and to allow the routine to time out if something has gone wrong:
while (TRUE) {
Sleep(1);
iret = recv(isocket, s, rlen, 0);
err = WSAGetLastError();
if (iret == SOCKET_ERROR) {
if (err == WSAEWOULDBLOCK) {
time(&tcur);
if (tcur > tend) {
com_Log("Timeout on recv 1");
return(FALSE);
}
continue;
}
com_Log("Error on recv, error=%lu", err);
return(FALSE);
}
time(&tend);
tend = tend + gTimeOut;
clen = clen + iret;
if (clen >= tlen) break;
s = s + iret;
rlen = rlen - iret;
}
I use MPI non-blocking communication (MPI_Irecv, MPI_Isend) to monitor the slaves' idle states; the code is like below.
rank 0:
int dest = -1;
while( dest <= 0){
int i;
for(i=1;i<=slaves_num;i++){
printf("slave %d, now is %d \n",i,idle_node[i]);
if (idle_node[i]== 1) {
idle_node[i] = 0;
dest = i;
break;
}
}
if(dest <= 0){
MPI_Irecv(&idle_node[1],1,MPI_INT,1,MSG_IDLE,MPI_COMM_WORLD,&request);
MPI_Irecv(&idle_node[2],1,MPI_INT,2,MSG_IDLE,MPI_COMM_WORLD,&request);
MPI_Irecv(&idle_node[3],1,MPI_INT,3,MSG_IDLE,MPI_COMM_WORLD,&request);
// MPI_Wait(&request,&status);
}
usleep(100000);
}
idle_node[dest] = 0;//indicates this slave is busy now
rank 1,2,3:
while(1)
{
...//do something
MPI_Isend(&idle,1,MPI_INT,0,MSG_IDLE,MPI_COMM_WORLD,&request);
MPI_Wait(&request,&status);
}
It works, but I want it to be faster, so I deleted the line:
usleep(100000);
then rank 0 goes into an endless loop like this:
slave 1, now is 0
slave 2, now is 0
slave 3, now is 0
slave 1, now is 0
slave 2, now is 0
slave 3, now is 0
...
So does it indicate that when I use MPI_Irecv, it just tells MPI that I want to receive a message here (but the message hasn't actually been received yet), and MPI needs more time to receive the real data? Or is there some other reason?
The use of non-blocking operations has been discussed over and over again here. From the MPI specification (section Nonblocking Communication):
Similarly, a nonblocking receive start call initiates the receive operation, but does not complete it. The call can return before a message is stored into the receive buffer. A separate receive complete call is needed to complete the receive operation and verify that the data has been received into the receive buffer. With suitable hardware, the transfer of data into the receiver memory may proceed concurrently with computations done after the receive was initiated and before it completed.
(the bold text is copied verbatim from the standard; the emphasis in italic is mine)
The key sentence is the last one. The standard does not give any guarantee that a non-blocking receive operation will ever complete (or even start) unless MPI_WAIT[ALL|SOME|ANY] or MPI_TEST[ALL|SOME|ANY] was called (with MPI_TEST* setting a value of true for the completion flag).
By default Open MPI comes as a single-threaded library and without special hardware acceleration the only way to progress non-blocking operations is to either call periodically into some non-blocking calls (with the primary example of MPI_TEST*) or call into a blocking one (with the primary example being MPI_WAIT*).
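As an illustration of that point (a sketch only, reusing the variable names from the question and assuming slaves_num == 3): posting one request per worker and polling them with MPI_Testany both progresses the receives and tells you which worker reported idle:

MPI_Request reqs[3];                 /* one outstanding receive per worker */
int i, flag, index, dest = -1;

for (i = 1; i <= slaves_num; i++)
    MPI_Irecv(&idle_node[i], 1, MPI_INT, i, MSG_IDLE, MPI_COMM_WORLD, &reqs[i - 1]);

while (dest <= 0) {
    /* MPI_Testany lets the library progress the receives and reports a completion */
    MPI_Testany(slaves_num, reqs, &index, &flag, MPI_STATUS_IGNORE);
    if (flag && index != MPI_UNDEFINED) {
        dest = index + 1;            /* worker ranks start at 1 */
        /* re-post the receive so the next idle message from this worker is caught */
        MPI_Irecv(&idle_node[dest], 1, MPI_INT, dest, MSG_IDLE,
                  MPI_COMM_WORLD, &reqs[index]);
    }
}
idle_node[dest] = 0;                 /* the chosen worker is now busy */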
Also your code leads to a nasty leak that will sooner or later result in resource exhaustion: you are calling MPI_Irecv multiple times with the same request variable, effectively overwriting its value and losing the reference to the previously started requests. Requests that are not waited upon are never freed and therefore remain in memory.
There is absolutely no need to use non-blocking operations in your case. If I understand the logic correctly, you can achieve what you want with code as simple as:
MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, MSG_IDLE, MPI_COMM_WORLD, &status);
idle_node[status.MPI_SOURCE] = 0;
If you'd like to process more than one worker process at the same time, it is a bit more involved:
MPI_Request reqs[slaves_num];
int indices[slaves_num], num_completed;
for (i = 0; i < slaves_num; i++)
reqs[i] = MPI_REQUEST_NULL;
while (1)
{
// Repost all completed (or never started) receives
for (i = 1; i <= slaves_num; i++)
if (reqs[i-1] == MPI_REQUEST_NULL)
MPI_Irecv(&idle_node[i], 1, MPI_INT, i, MSG_IDLE,
MPI_COMM_WORLD, &reqs[i-1]);
MPI_Waitsome(slaves_num, reqs, &num_completed, indices, MPI_STATUSES_IGNORE);
// Examine num_completed and indices and feed the workers with data
...
}
After the call to MPI_Waitsome there will be one or more completed requests. The exact number will be in num_completed and the indices of the completed requests will be filled in the first num_completed elements of indices[]. The completed requests will be freed and the corresponding elements of reqs[] will be set to MPI_REQUEST_NULL.
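For completeness, one possible way to fill in the "Examine ..." part of the loop above (send_work_to is a hypothetical helper; indices[] holds zero-based positions in reqs[], so the worker rank is the index plus one):

for (i = 0; i < num_completed; i++) {
    int worker = indices[i] + 1;   /* rank of the worker that reported idle */
    idle_node[worker] = 0;         /* mark it busy again */
    send_work_to(worker);          /* hypothetical: hand it the next task */
}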
Also, there appears to be a common misconception about using non-blocking operations. A non-blocking send can be matched by a blocking receive and also a blocking send can be equally matched by a non-blocking receive. That makes such constructs nonsensical:
// Receiver
MPI_Irecv(..., &request);
... do something ...
MPI_Wait(&request, &status);
// Sender
MPI_Isend(..., &request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
MPI_Isend immediately followed by MPI_Wait is equivalent to MPI_Send and the following code is perfectly valid (and easier to understand):
// Receiver
MPI_Irecv(..., &request);
... do something ...
MPI_Wait(&request, &status);
// Sender
MPI_Send(...);
I have a C++ pipe server app and a C# pipe client app communicating via Windows named pipe (duplex, message mode, wait/blocking in separate read thread).
It all works fine (both sending and receiving data via the pipe) until I try to write to the pipe from the client in response to a form's TextChanged event. When I do this, the client hangs on the pipe write call (or on the flush call if autoflush is off). Breaking into the server app reveals it's also waiting on the pipe ReadFile call and not returning.
I tried running the client write on another thread -- same result.
Suspect some sort of deadlock or race condition but can't see where... don't think I'm writing to the pipe simultaneously.
Update1: tried pipes in byte mode instead of message mode - same lockup.
Update2: Strangely, if (and only if) I pump lots of data from the server to the client, it cures the lockup!?
Server code:
DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
DWORD byteCount;
if (ReadFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
{
aBytesRead = (int)byteCount;
aBuff[byteCount] = 0;
return ERROR_SUCCESS;
}
return GetLastError();
}
DWORD SendMsg(const char* aBuff, unsigned int aBuffLen)
{
DWORD byteCount;
if (WriteFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
{
return ERROR_SUCCESS;
}
mClientConnected = false;
return GetLastError();
}
DWORD CommsThread()
{
while (1)
{
std::string fullPipeName = std::string("\\\\.\\pipe\\") + mPipeName;
mPipe = CreateNamedPipeA(fullPipeName.c_str(),
PIPE_ACCESS_DUPLEX,
PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
PIPE_UNLIMITED_INSTANCES,
KTxBuffSize, // output buffer size
KRxBuffSize, // input buffer size
5000, // client time-out ms
NULL); // no security attribute
if (mPipe == INVALID_HANDLE_VALUE)
return 1;
mClientConnected = ConnectNamedPipe(mPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
if (!mClientConnected)
return 1;
char rxBuff[KRxBuffSize+1];
DWORD error=0;
while (mClientConnected)
{
Sleep(1);
int bytesRead = 0;
error = ReadMsg(rxBuff, KRxBuffSize, bytesRead);
if (error == ERROR_SUCCESS)
{
rxBuff[bytesRead] = 0; // terminate string.
if (mMsgCallback && bytesRead>0)
mMsgCallback(rxBuff, bytesRead, mCallbackContext);
}
else
{
mClientConnected = false;
}
}
Close();
Sleep(1000);
}
return 0;
}
Client code:
public void Start(string aPipeName)
{
mPipeName = aPipeName;
mPipeStream = new NamedPipeClientStream(".", mPipeName, PipeDirection.InOut, PipeOptions.None);
Console.Write("Attempting to connect to pipe...");
mPipeStream.Connect();
Console.WriteLine("Connected to pipe '{0}' ({1} server instances open)", mPipeName, mPipeStream.NumberOfServerInstances);
mPipeStream.ReadMode = PipeTransmissionMode.Message;
mPipeWriter = new StreamWriter(mPipeStream);
mPipeWriter.AutoFlush = true;
mReadThread = new Thread(new ThreadStart(ReadThread));
mReadThread.IsBackground = true;
mReadThread.Start();
if (mConnectionEventCallback != null)
{
mConnectionEventCallback(true);
}
}
private void ReadThread()
{
byte[] buffer = new byte[1024 * 400];
while (true)
{
int len = 0;
do
{
len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len>0 && !mPipeStream.IsMessageComplete);
if (len==0)
{
OnPipeBroken();
return;
}
if (mMessageCallback != null)
{
mMessageCallback(buffer, len);
}
Thread.Sleep(1);
}
}
public void Write(string aMsg)
{
try
{
mPipeWriter.Write(aMsg);
mPipeWriter.Flush();
}
catch (Exception)
{
OnPipeBroken();
}
}
If you are using separate threads, you will be unable to read from the pipe at the same time you write to it. For example, if one thread is doing a blocking read from the pipe and another thread then issues a blocking write, the write call will block until the read call has completed; in many cases, if this behavior is unexpected, your program will become deadlocked.
I have not tested overlapped I/O, but it MAY be able to resolve this issue. However, if you are determined to use synchronous calls, then the models below may help you solve the problem.
Master/Slave
You could implement a master/slave model in which the client or the server is the master and the other end only responds, which is generally what you will find in the MSDN examples.
In some cases you may find this problematic in the event the slave periodically needs to send data to the master. You must either use an external signaling mechanism (outside of the pipe), have the master periodically query/poll the slave, or swap the roles so that the client is the master and the server is the slave.
Writer/Reader
You could use a writer/reader model with two different pipes. However, you must associate those two pipes somehow if you have multiple clients, since each pipe will have a different handle. You could do this by having the client send a unique identifier value on connection to each pipe, which would then let the server associate the two pipes. This value could be the current system time or even an identifier that is globally or locally unique.
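A minimal sketch of that association step from the client side (only an illustration; the pipe names and identifier scheme are placeholders, and it assumes the server creates both pipe instances as PIPE_ACCESS_DUPLEX so the client can write the identifier on each):

HANDLE hOut = CreateFileA("\\\\.\\pipe\\myapp_c2s", GENERIC_READ | GENERIC_WRITE,
                          0, NULL, OPEN_EXISTING, 0, NULL);
HANDLE hIn  = CreateFileA("\\\\.\\pipe\\myapp_s2c", GENERIC_READ | GENERIC_WRITE,
                          0, NULL, OPEN_EXISTING, 0, NULL);
DWORD id = GetCurrentProcessId();                    // any value unique enough per client
DWORD written = 0;
WriteFile(hOut, &id, sizeof(id), &written, NULL);    // same id on both pipes...
WriteFile(hIn,  &id, sizeof(id), &written, NULL);    // ...lets the server pair the handles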
Threads
If you are determined to use the synchronous API, you can use threads with the master/slave model if you do not want to be blocked while waiting for a message on the slave side. You will, however, want to lock the reader after it reads a message (or encounters the end of a series of messages), then write the response (as the slave should), and finally unlock the reader. You can lock and unlock the reader using locking mechanisms that put the thread to sleep, as these are most efficient.
Security Problem With TCP
The biggest potential problem with going to TCP instead of named pipes is security. A TCP stream does not provide any security natively, so if security is a concern you will have to implement it yourself, and you risk creating a security hole since you would have to handle authentication on your own. A named pipe can provide security if you set its parameters properly. Also, to note again more clearly: security is no simple matter, and generally you will want to use existing facilities that have been designed to provide it.
I think you may be running into problems with named pipe message mode. In this mode, each write to the kernel pipe handle constitutes a message. This doesn't necessarily correspond to what your application regards as a message, and a message may be bigger than your read buffer.
This means that your pipe reading code needs two loops: the inner one reading until the current [named pipe] message has been completely received, and the outer one looping until your [application-level] message has been received.
Your C# client code does have a correct inner loop, reading again if IsMessageComplete is false:
do
{
len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len>0 && !mPipeStream.IsMessageComplete);
Your C++ server code doesn't have such a loop; the equivalent at the Win32 API level is testing for the ERROR_MORE_DATA return code.
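A sketch of such a server-side loop (only an illustration; it reuses the KRxBuffSize constant from the code above and collects the whole message into a std::string):

DWORD ReadWholeMsg(HANDLE pipe, std::string& msg)
{
    char chunk[KRxBuffSize];
    DWORD byteCount = 0;
    msg.clear();
    for (;;)
    {
        if (ReadFile(pipe, chunk, sizeof(chunk), &byteCount, NULL))
        {
            msg.append(chunk, byteCount);
            return ERROR_SUCCESS;           // whole [named pipe] message received
        }
        DWORD err = GetLastError();
        if (err != ERROR_MORE_DATA)
            return err;                     // genuine error (or pipe closed)
        msg.append(chunk, byteCount);       // partial message, keep reading
    }
}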
My guess is that somehow this is leading to the client waiting for the server to read on one pipe instance, whilst the server is waiting for the client to write on another pipe instance.
It seems to me that what you are trying to do will not work as expected.
Some time ago I was trying to do something similar to your code and got similar results: the pipe just hung,
and it was difficult to establish what had gone wrong.
I would rather suggest using the client in a very simple way, as sketched below:
CreateFile
Write request
Read answer
Close pipe.
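A minimal sketch of that pattern (pipe name, request text, and buffer size are placeholders; error handling omitted):

HANDLE h = CreateFileA("\\\\.\\pipe\\mypipe",
                       GENERIC_READ | GENERIC_WRITE, 0, NULL,
                       OPEN_EXISTING, 0, NULL);
if (h != INVALID_HANDLE_VALUE)
{
    const char request[] = "STATUS?";
    char reply[512];
    DWORD written = 0, readBytes = 0;
    WriteFile(h, request, sizeof(request) - 1, &written, NULL);   // write request
    ReadFile(h, reply, sizeof(reply), &readBytes, NULL);          // read answer
    CloseHandle(h);                                               // close pipe
}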
If you want to have two-way communication with clients that are also able to receive unrequested data from the server, you should
rather implement two servers. This was the workaround I used; here you can find the sources.
I want to read from and write to a serial port using events/interrupts.
Currently, I have it in a while loop, and it continuously reads and writes through the serial port. I want it to read only when something comes in from the serial port. How do I implement this in C++?
This is my current code:
while(true)
{
//read
if(!ReadFile(hSerial, szBuff, n, &dwBytesRead, NULL)){
//error occurred. Report to user.
}
//write
if(!WriteFile(hSerial, szBuff, n, &dwBytesRead, NULL)){
//error occurred. Report to user.
}
//print what you are reading
printf("%s\n", szBuff);
}
Use the select() call, which will check the read and write buffers without blocking and return their status, so you only need to read when you know the port has data, or write when you know there's room in the output buffer.
The third example at http://www.developerweb.net/forum/showthread.php?t=2933 and the associated comments may be helpful.
Edit: The man page for select has a simpler and more complete example near the end. You can find it at http://linux.die.net/man/2/select if man 2 select doesn't work on your system.
Note: Mastering select() will allow you to work with both serial ports and sockets; it's at the heart of many network clients and servers.
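As an illustration, here is a POSIX sketch only (the device name, timeout, and function name are placeholders). Note that select() on Windows works only on sockets, so the HANDLE-based code in the question would use WaitCommEvent or overlapped I/O instead:

#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

int wait_for_serial_data(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY | O_NONBLOCK);
    fd_set readfds;
    struct timeval tv = { 5, 0 };              /* wait at most 5 seconds */

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    if (select(fd + 1, &readfds, NULL, NULL, &tv) > 0 && FD_ISSET(fd, &readfds)) {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));   /* data ready; read won't block */
        /* process n bytes from buf */
    }
    close(fd);
    return 0;
}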
For a Windows environment the more native approach would be to use asynchronous I/O. In this mode you still use calls to ReadFile and WriteFile, but instead of blocking you pass in a callback function that will be invoked when the operation completes.
It is fairly tricky to get all the details right though.
Here is a copy of an article that was published in the C/C++ Users Journal a few years ago. It goes into detail on the Win32 API.
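A bare-bones sketch of the overlapped pattern, using an event in the OVERLAPPED structure rather than a completion routine (hSerial is assumed to have been opened with FILE_FLAG_OVERLAPPED):

OVERLAPPED ov = {};
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset event
char buf[256];
DWORD bytesRead = 0;

if (!ReadFile(hSerial, buf, sizeof(buf), NULL, &ov) &&
    GetLastError() != ERROR_IO_PENDING)
{
    // report the error
}
else
{
    // Do other work here; then wait for the read to finish (bWait = TRUE blocks
    // on ov.hEvent until the operation completes and fills in bytesRead).
    GetOverlappedResult(hSerial, &ov, &bytesRead, TRUE);
}
CloseHandle(ov.hEvent);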
Here is some code that reads incoming serial data using event notification on Windows.
You can see the time elapsed while waiting for the event.
int pollComport(int comport_number, LPBYTE buffer, int size)
{
    DWORD dwCommModemStatus;
    int n = 0;
    double TimeA, TimeB;

    // Specify a set of events to be monitored for the port.
    SetCommMask(m_comPortHandle[comport_number], EV_RXCHAR);

    while (m_comPortHandle[comport_number] != INVALID_HANDLE_VALUE)
    {
        // Wait for an event to occur for the port.
        TimeA = clock();
        WaitCommEvent(m_comPortHandle[comport_number], &dwCommModemStatus, 0);
        TimeB = clock();
        if (TimeB - TimeA > 0)
            cout << " ok " << TimeB - TimeA << endl;

        // Re-specify the set of events to be monitored for the port.
        SetCommMask(m_comPortHandle[comport_number], EV_RXCHAR);

        if (dwCommModemStatus & EV_RXCHAR)
        {
            // Read until no more data is returned.
            do
            {
                ReadFile(m_comPortHandle[comport_number], buffer, size,
                         (LPDWORD)((void *)&n), NULL);
                // Display the data read; the buffer is not null-terminated,
                // so write exactly n bytes.
                if (n > 0)
                    cout.write(reinterpret_cast<const char*>(buffer), n) << endl;
            } while (n > 0);
        }
        return 0;
    }
    return 0;
}
}