socket "read" hanging if the MacBook sleeps more than 10 minutes - macos

I am writing an app in which a socket connects to a host and downloads a file.
The application runs on macOS.
While the app is downloading, if I put the MacBook into sleep mode for more than 10 minutes, the app hangs about 60% of the time when the computer wakes up.
The stack trace shows that it is hung in the "read" call. I can reproduce this with a sample program as well. Below I have pasted the code of the sample program and the stack where it hangs. How can I fix this hang?
Also, this is not just TCP/IP waiting that will resolve itself in a few minutes: I have waited for more than 12 hours, and the read never returned.
The stack trace:
Call graph:
2466 Thread_2507
2466 start
2466 read$UNIX2003
2466 read$UNIX2003
The program:
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#define buflen 131072

unsigned int portno = 80;
char hostname[] = "192.168.1.9";

int main()
{
    int sd = socket(AF_INET, SOCK_STREAM, 0); /* init socket descriptor */
    struct sockaddr_in sin;
    struct hostent *host = gethostbyname(hostname);
    char buf[buflen];
    int len;
    int ret;
    FILE *fp;
    int i;

    if (sd == -1) {
        printf("Could not create client socket\n");
        return 1;
    }
    if (host == NULL) {
        printf("Could not resolve host\n");
        return 1;
    }

    /* set keep-alive */
    int optval = 1;
    socklen_t optlen = sizeof(optval);
    ret = setsockopt(sd, SOL_SOCKET, SO_KEEPALIVE, &optval, optlen);
    if (ret != 0) {
        printf("could not set socket option.\n");
        return 1;
    }

    /*** PLACE DATA IN sockaddr_in struct ***/
    memcpy(&sin.sin_addr.s_addr, host->h_addr, host->h_length);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(portno);

    /*** CONNECT SOCKET TO THE SERVICE DESCRIBED BY sockaddr_in struct ***/
    if (connect(sd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("connecting");
        return 1;
    }

    char *str = "GET /general-log.exe HTTP/1.0\r\n\r\n";
    ret = write(sd, str, strlen(str));
    if (ret < 0) {
        printf("error while writing\n");
        return 1;
    }

    fp = fopen("downloaded.file", "wb+");
    if (fp == NULL) {
        printf("not able to open the file.\n");
        return 1;
    }

    i = 0;
    while ((len = read(sd, buf, buflen)) > 0) {
        printf("%d\t%d\n", i++, len);
        fwrite(buf, len, 1, fp); /* the return value should be checked */
    }
    if (len < 0) {
        printf("Error while reading\n");
    }
    fclose(fp);
    close(sd);
    return 0;
}
Update: apparently SO_RCVTIMEO solves the problem.
struct timeval tv;
tv.tv_sec = 10;
tv.tv_usec = 0;
setsockopt(m_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(tv));
Is it okay to use SO_RCVTIMEO?

TCP/IP connections don't survive sleep mode. SO_KEEPALIVE doesn't help in this case, since it has no effect on the server side. Just wait two minutes and the read will time out. After the timeout, you can connect again.
And that sleep(1) is unnecessary. The server will respond as soon as the data is available. If you don't fetch it right away, you'll hold a connection open on the server for longer than you need.
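To illustrate the reconnect-after-timeout approach, here is a minimal sketch (my illustration, not from the original answer; the helper name and the 10-second timeout are assumptions): SO_RCVTIMEO makes a dead connection surface as read() failing with EAGAIN/EWOULDBLOCK instead of blocking forever, at which point the caller can tear the socket down and connect again.

#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Read with a receive timeout; returns bytes read, 0 on EOF,
 * or -1 if the read timed out and the caller should reconnect. */
ssize_t read_with_timeout(int sd, void *buf, size_t len)
{
    struct timeval tv;
    tv.tv_sec = 10;   /* assumed timeout; tune for your application */
    tv.tv_usec = 0;
    if (setsockopt(sd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) != 0)
        return -1;

    ssize_t n = read(sd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Timed out: the connection may not have survived sleep.
         * Close the socket (here or in the caller), then reconnect. */
        return -1;
    }
    return n;
}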

I couldn't solve it using blocking sockets. I had to change the IMAP library to non-blocking sockets.
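For reference, a non-blocking read along those lines might look like the following sketch (my own illustration, not the poster's IMAP code; the helper name is an assumption): the socket is put into non-blocking mode and select() supplies the timeout, so a connection that died during sleep shows up as a timeout rather than a permanent block.

#include <fcntl.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Wait up to `seconds` for data, then read; returns -1 on timeout or error. */
ssize_t nonblocking_read(int sd, void *buf, size_t len, int seconds)
{
    /* Switch the socket to non-blocking mode. */
    int flags = fcntl(sd, F_GETFL, 0);
    fcntl(sd, F_SETFL, flags | O_NONBLOCK);

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sd, &rfds);

    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;

    int ready = select(sd + 1, &rfds, NULL, NULL, &tv);
    if (ready <= 0)
        return -1;   /* timeout (0) or error (-1): consider reconnecting */

    return read(sd, buf, len);
}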

Related

Can't accept incoming connections on c socket on Mac OS (Mojave) due to tcp RST packet

I have a problem with a server on macOS using POSIX socket functions. When my client tries to connect to the server with the connect() function, the server (macOS) sends a TCP RST packet and closes the connection. I tried disabling the firewall, but the problem is still there.
I have included only what I think is useful for understanding the problem. The exact same code works fine on Linux (Ubuntu). I suspect this is a problem with security policy on macOS.
Server code:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <iostream>
using namespace std;

/* nESP and debug are defined elsewhere in the original program;
 * placeholder values are shown here so the excerpt compiles. */
const int nESP = 2;
const bool debug = true;

int main(int argc, char *argv[])
{
    int s;
    struct sockaddr_in saddr, caddr;
    socklen_t addrlen = sizeof(struct sockaddr_in);
    inet_aton(argv[1], &saddr.sin_addr);
    uint16_t port = htons(atoi(argv[2]));
    int i;
    int bklog = 10;
    int *sockVett;
    sockVett = new int[nESP];
    while (1) {
        s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == -1) {
            cout << "socket() failed\n";
            return -1;
        }
        for (i = 0; i < nESP; i++) {
            sockVett[i] = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
        }
        saddr.sin_family = AF_INET;
        saddr.sin_port = port;
        saddr.sin_addr.s_addr = INADDR_ANY;
        bind(s, (struct sockaddr *) &saddr, sizeof(saddr));
        listen(s, bklog);
        for (i = 0; i < nESP; i++) {
            if (debug) printf("SockVett[%d] is waiting for connection..\n", i+1);
            sockVett[i] = accept(s, (struct sockaddr *) &caddr, &addrlen);
            if (debug) printf("SockVett[%d] connected\n", i+1);
        }
        if (debug) printf("All ESP connected\n\n ---Start program---\n");
    }
}
My server app blocks at the first accept() call in the for(...) loop, and with Wireshark I observed that my server sends a TCP RST packet and so closes the connection.
Thanks everybody!
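One observation on the code as posted (an editorial note, not part of the original question): the return values of bind() and listen() are ignored, so a failed bind, for example because the previous iteration of the while(1) loop left the port in use, would go unnoticed and accept() would then misbehave. A minimal sketch of checked calls, as a drop-in for the two lines above:

/* Check bind() and listen() so a failure is visible instead of silent. */
if (bind(s, (struct sockaddr *) &saddr, sizeof(saddr)) != 0) {
    perror("bind");
    return -1;
}
if (listen(s, bklog) != 0) {
    perror("listen");
    return -1;
}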

Why does ioctl FIONREAD from /dev/null return 0 on Mac OS X and a random number on Linux?

I've come across something that seems strange while adding tests to a project I'm working on - I have been using /dev/null as a serial port, and I am not expecting any data to be available for reading.
However, on Linux there is always data available, and on Mac OS X there is data available after a call to srand().
Can someone help explain this behaviour?
Here is a minimal test case in C++:
#include <stdio.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>

int open_serial(const char *device) {
    speed_t bd = B115200;
    int fd;
    int state;
    struct termios config;
    if ((fd = open(device, O_NDELAY | O_NOCTTY | O_NONBLOCK | O_RDWR)) == -1)
        return -1;
    fcntl(fd, F_SETFL, O_RDWR);
    tcgetattr(fd, &config);
    cfmakeraw(&config);
    cfsetispeed(&config, bd);
    cfsetospeed(&config, bd);
    config.c_cflag |= (CLOCAL | CREAD);
    config.c_cflag &= ~(CSTOPB | CSIZE | PARENB);
    config.c_cflag |= CS8;
    config.c_lflag &= ~(ECHO | ECHOE | ICANON | ISIG);
    config.c_oflag &= ~OPOST;
    config.c_cc[VMIN] = 0;
    config.c_cc[VTIME] = 50; // 5 seconds reception timeout
    tcsetattr(fd, TCSANOW, &config);
    ioctl(fd, TIOCMGET, &state);
    state |= (TIOCM_DTR | TIOCM_RTS);
    ioctl(fd, TIOCMSET, &state);
    usleep(10000); // Sleep for 10 milliseconds
    return fd;
}

int serial_data_available(const int fd) {
    int result;
    ioctl(fd, FIONREAD, &result);
    return result;
}

int main() {
    int fd = open_serial("/dev/null");
    printf("Opened /dev/null - FD: %d\n", fd);
    printf("Serial data available : %d\n", serial_data_available(fd));
    printf("Serial data available : %d\n", serial_data_available(fd));
    printf("Calling srand()\n");
    srand(1234);
    printf("Serial data available : %d\n", serial_data_available(fd));
    printf("Serial data available : %d\n", serial_data_available(fd));
    return 0;
}
Under Mac OS X the output is as follows:
Opened /dev/null - FD: 3
Serial data available : 0
Serial data available : 0
Calling srand()
Serial data available : 148561936
Serial data available : 0
On Linux I get the following:
Opened /dev/null - FD: 3
Serial data available : 32720
Serial data available : 32720
Calling srand()
Serial data available : 32720
Serial data available : 32720
Two questions:
Shouldn't /dev/null always have 0 bytes available for reading?
Why does calling srand() on Mac OS X cause the bytes available for reading from /dev/null to change?
The problem was obvious (in hindsight!): the result int is not initialised, so when ioctl fails, the function returns whatever uninitialised value happens to be in result, even though no data may be available.
int serial_data_available(const int fd) {
    int result;
    ioctl(fd, FIONREAD, &result);
    return result;
}
The corrected code should be:
int serial_data_available(const int fd) {
    int result = 0;
    ioctl(fd, FIONREAD, &result);
    return result;
}
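A stricter variant (a suggestion going slightly beyond the fix above) would also check the return value of ioctl itself, so an unsupported FIONREAD reports as an explicit error rather than as zero bytes available:

#include <sys/ioctl.h>

int serial_data_available(const int fd) {
    int result = 0;
    if (ioctl(fd, FIONREAD, &result) == -1)
        return -1;   /* FIONREAD not supported, or some other error */
    return result;
}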

TCP buffer parameters not being honoured on Win7 machine

Note: I have tagged this with both programming and Windows networking tags, so please don't shout; I'm just trying to expose this to as many people as may be able to help!
I am trying to set the receive and send buffers for a small client and server I have written, so that when I perform a network capture, I see the window size I have set in the TCP handshake.
For the programmers, please consider the following very simple code for a client and server.
For the non-programmers, please skip past this section to my image.
Client:
#include <WinSock2.h>
#include <mstcpip.h>
#include <Ws2tcpip.h>
#include <thread>
#include <iostream>
using namespace std;

int OutputWindowSize(SOCKET s, unsigned int nType)
{
    int buflen = 0;
    int nSize = sizeof(buflen);
    if (getsockopt(s, SOL_SOCKET, nType, (char *)&buflen, &nSize) == 0)
        return buflen;
    return -1;
}

bool SetWindowSizeVal(SOCKET s, unsigned int nSize)
{
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&nSize, sizeof(nSize)) == 0)
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&nSize, sizeof(nSize)) == 0)
            return true;
    return false;
}

int main(int argc, char** argv)
{
    if (argc != 3) { cout << "not enough args!\n"; return 0; }
    const char* pszHost = argv[1];
    const int nPort = atoi(argv[2]);

    WSADATA wsaData;
    DWORD Ret = 0;
    if ((Ret = WSAStartup(MAKEWORD(2, 2), &wsaData)) != 0)
    {
        printf("WSAStartup() failed with error %d\n", Ret);
        return 1;
    }

    struct sockaddr_in sockaddr_IPv4;
    memset(&sockaddr_IPv4, 0, sizeof(struct sockaddr_in));
    sockaddr_IPv4.sin_family = AF_INET;
    sockaddr_IPv4.sin_port = htons(nPort);
    if (!InetPtonA(AF_INET, pszHost, &sockaddr_IPv4.sin_addr)) { return 0; }

    SOCKET clientSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); // Create active socket: one which is passed to connect().
    if (!SetWindowSizeVal(clientSock, 12345))
    {
        cout << "Failed to set window size " << endl;
        return -1;
    }
    cout << "Set window size on client socket as: RECV" << OutputWindowSize(clientSock, SO_RCVBUF) <<
        " SEND: " << OutputWindowSize(clientSock, SO_SNDBUF) << endl;

    int nRet = connect(clientSock, (sockaddr*)&sockaddr_IPv4, sizeof(sockaddr_in));
    if (nRet != 0) { return 0; }

    char buf[100] = { 0 };
    nRet = recv(clientSock, buf, 100, 0);
    cout << "Received " << buf << " from the server!" << endl;
    nRet = send(clientSock, "Hello from the client!\n", strlen("Hello from the client!\n"), 0);

    closesocket(clientSock);
    return 0;
}
Server:
#include <WinSock2.h>
#include <mstcpip.h>
#include <Ws2tcpip.h>
#include <iostream>
using namespace std;

int OutputWindowSize(SOCKET s, unsigned int nType)
{
    int buflen = 0;
    int nSize = sizeof(buflen);
    if (getsockopt(s, SOL_SOCKET, nType, (char *)&buflen, &nSize) == 0)
        return buflen;
    return -1;
}

bool SetWindowSizeVal(SOCKET s, unsigned int nSize)
{
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&nSize, sizeof(nSize)) == 0)
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&nSize, sizeof(nSize)) == 0)
            return true;
    return false;
}

int main()
{
    WSADATA wsaData;
    DWORD Ret = 0;
    if ((Ret = WSAStartup(MAKEWORD(2, 2), &wsaData)) != 0)
    {
        printf("WSAStartup() failed with error %d\n", Ret);
        return 1;
    }

    struct sockaddr_in sockaddr_IPv4;
    memset(&sockaddr_IPv4, 0, sizeof(struct sockaddr_in));
    sockaddr_IPv4.sin_family = AF_INET;
    sockaddr_IPv4.sin_port = htons(19982);
    int y = InetPton(AF_INET, L"127.0.0.1", &sockaddr_IPv4.sin_addr);
    if (y != 1) return 0;
    socklen_t addrlen = sizeof(sockaddr_IPv4);

    SOCKET sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (!SetWindowSizeVal(sock, 12345))
    {
        cout << "Failed to set window size " << endl;
        return -1;
    }
    cout << "Set window size on listen socket as: RECV" << OutputWindowSize(sock, SO_RCVBUF) <<
        " SEND: " << OutputWindowSize(sock, SO_SNDBUF) << endl;

    if (bind(sock, (sockaddr*)&sockaddr_IPv4, sizeof(sockaddr_IPv4)) != 0) { /* error */ }
    if (listen(sock, SOMAXCONN) != 0) { return 0; }

    while (1)
    {
        SOCKET sockAccept = accept(sock, (struct sockaddr *) &sockaddr_IPv4, &addrlen);
        if (sockAccept == INVALID_SOCKET) return 0;
        if (!SetWindowSizeVal(sockAccept, 12345))
        {
            cout << "Failed to set window size " << endl;
            return -1;
        }
        cout << "Set window size on accepted socket as: RECV" << OutputWindowSize(sockAccept, SO_RCVBUF) <<
            " SEND: " << OutputWindowSize(sockAccept, SO_SNDBUF) << endl;

        int nRet = send(sockAccept, "Hello from the server!\n", strlen("Hello from the server!\n"), 0);
        if (!nRet) return 0;
        char buf[100] = { 0 };
        nRet = recv(sockAccept, buf, 100, 0);
        cout << "Received " << buf << " from the client!" << endl;
        if (nRet == 0) { cout << "client disconnected!" << endl; }
        closesocket(sockAccept);
    }
    return 0;
}
The output from my program states that the window sizes have been set successfully:
Set window size on listen socket as: RECV12345 SEND: 12345
Set window size on accepted socket as: RECV12345 SEND: 12345
for the server, and for the client:
Set window size on client socket as: RECV12345 SEND: 12345
However, when I capture the traffic using RawCap, I see that the client window size is set fine, but the server's window size is not what I set it to be; it is 8192:
Now, I have read this MS link and it says to add a registry value; I did this, adding the value 0x00001234, but it still made no difference.
The interesting thing is that the same code works fine on a Windows 10 machine, which makes me think it is Windows 7 specific. However, I'm not 100% sure of my code; there might be some errors in it.
Can anyone suggest how I can get Windows to honour my requested parameters, please?
These are not 'window sizes'. They are send and receive buffer sizes.
There is no such thing as 'output window size'. There is a receive window and a congestion window, and the latter is not relevant to your question.
The send buffer size has exactly nothing to do with the receive window size, and the receive buffer size only determines the maximum receive window size.
The actual receive window size is adjusted dynamically by the protocol. It is the actual size that you are seeing in Wireshark.
The platform is entitled by the specification to adjust the supplied values for the send and receive buffers up or down, and the documentation advises you to get the corresponding values if you want to be sure what they really are.
There is no problem here to solve.
NB You don't have to set the buffer sizes on an accepted socket if you already set them on the listening socket. They are inherited.
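As a side note (an addition, not part of the answer above): since the platform may adjust the requested values, reading the option back after setting it is the reliable way to learn the effective buffer size, and the receive buffer generally needs to be set before connect() or listen() so it can influence the window scaling negotiated in the handshake. A minimal sketch, written here in POSIX C (the Winsock calls are analogous; the helper name is an assumption):

#include <stdio.h>
#include <sys/socket.h>

/* Request a buffer size, then read back what the stack actually granted.
 * Call this before connect()/listen() so it can affect the handshake. */
int set_and_verify_rcvbuf(int s, int requested)
{
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested)) != 0)
        return -1;

    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &granted, &len) != 0)
        return -1;

    printf("requested %d, granted %d\n", requested, granted);
    return granted;
}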

How to work with NETLINK_KOBJECT_UEVENT protocol in user space?

Let's consider this example code:
#include <linux/netlink.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

#define BUF_SIZE 4096

int main() {
    int fd, res;
    size_t i;
    ssize_t len;
    char buf[BUF_SIZE];
    struct sockaddr_nl nls;

    fd = socket(PF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);
    if (fd == -1) {
        return 1;
    }

    memset(&nls, 0, sizeof(nls));
    nls.nl_family = AF_NETLINK;
    nls.nl_pid = getpid();
    nls.nl_groups = 1;

    res = bind(fd, (struct sockaddr *)&nls, sizeof(nls));
    if (res == -1) {
        return 2;
    }

    while (1) {
        len = recv(fd, buf, sizeof(buf), 0);
        if (len < 0) {
            return 3;
        }
        printf("============== Received %zd bytes\n", len);
        for (i = 0; i < (size_t)len; ++i) {
            if (buf[i] == 0) {
                printf("[0x00]\n");
            } else if (buf[i] < 33 || buf[i] > 126) {
                printf("[0x%02hhx]", buf[i]);
            } else {
                printf("%c", buf[i]);
            }
        }
        printf("<END>\n");
    }
    close(fd);
    return 0;
}
It listens on a netlink socket for events related to hotplug. Basically, it works. However, some parts remain unclear to me even after spending a whole evening googling, reading various pieces of documentation and manuals, and working through examples.
Basically, I have two questions.
What do the different values of sockaddr_nl.nl_groups mean, at least for the NETLINK_KOBJECT_UEVENT protocol?
If the buffer allocated for the message is too small, the message is simply truncated (you can play with BUF_SIZE to see that). How large should this buffer be so that no data is lost? Is it possible, in user space, to learn the length of the incoming message so that enough space can be allocated?
I would appreciate either direct answers or references to kernel code.
The values represent different multicast groups. A netlink socket can join up to 31 different multicast groups (0 means unicast) to which multicast messages can be sent. For NETLINK_KOBJECT_UEVENT it looks like it is fixed to 1; see, for example, here.
You should be able to use getsockopt with level set to SOL_SOCKET and optname set to SO_RCVBUF.
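A minimal sketch of that getsockopt call (an illustration of the suggestion above; the helper names are assumptions), plus an alternative technique not mentioned in the answer, recv with MSG_PEEK | MSG_TRUNC, which on Linux reports the full length of the pending message without consuming it:

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Query the receive buffer size of the (netlink) socket. */
int print_rcvbuf(int fd)
{
    int rcvbuf = 0;
    socklen_t optlen = sizeof(rcvbuf);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen) != 0)
        return -1;
    printf("SO_RCVBUF: %d bytes\n", rcvbuf);
    return rcvbuf;
}

/* Peek at the next message: MSG_TRUNC makes recv return the message's
 * real length, and MSG_PEEK leaves it queued, so a buffer of exactly
 * that size can then be allocated before the real read. */
ssize_t next_message_size(int fd)
{
    char tmp;
    return recv(fd, &tmp, sizeof(tmp), MSG_PEEK | MSG_TRUNC);
}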

how to fix this MPI program

This program demonstrates unsafe MPI usage: sometimes it executes fine, and other times it fails. The program fails or hangs due to buffer exhaustion on the receiving task's side, a consequence of the way the MPI library implements an eager protocol for messages under a certain size. One possible solution is to include an MPI_Barrier call in both the send and receive loops.
How can this program be corrected?
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#define MSGSIZE 2000
int main (int argc, char *argv[])
{
int numtasks, rank, i, tag=111, dest=1, source=0, count=0;
char data[MSGSIZE];
double start, end, result;
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) {
printf ("mpi_bug5 has started...\n");
if (numtasks > 2)
printf("INFO: Number of tasks= %d. Only using 2 tasks.\n", numtasks);
}
/******************************* Send task **********************************/
if (rank == 0) {
/* Initialize send data */
for(i=0; i<MSGSIZE; i++)
data[i] = 'x';
start = MPI_Wtime();
while (1) {
MPI_Send(data, MSGSIZE, MPI_BYTE, dest, tag, MPI_COMM_WORLD);
count++;
if (count % 10 == 0) {
end = MPI_Wtime();
printf("Count= %d Time= %f sec.\n", count, end-start);
start = MPI_Wtime();
}
}
}
/****************************** Receive task ********************************/
if (rank == 1) {
while (1) {
MPI_Recv(data, MSGSIZE, MPI_BYTE, source, tag, MPI_COMM_WORLD, &status);
/* Do some work - at least more than the send task */
result = 0.0;
for (i=0; i < 1000000; i++)
result = result + (double)random();
}
}
MPI_Finalize();
}
Ways to improve this code so that the receiver doesn't end up with an unlimited number of unexpected messages include:
Synchronization - you mentioned MPI_Barrier, but even using MPI_Ssend instead of MPI_Send would work.
Explicit buffering - using MPI_Bsend, with a buffer attached via MPI_Buffer_attach, to ensure adequate buffering exists.
Posted receives - the receiving process posts MPI_Irecvs before starting work to ensure that the messages are received into the buffers meant to hold the data, rather than into system buffers.
In this pedagogical case, since the number of messages is unlimited, only the first (synchronization) would reliably work.
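To make the synchronization option concrete, here is a minimal sketch (my illustration, not from the original answer): swapping MPI_Send for MPI_Ssend turns each send into a synchronous-mode send that completes only once the matching receive has been posted, so the sender can never run unboundedly ahead of the receiver.

#include "mpi.h"
#include <stdio.h>

#define MSGSIZE 2000

/* Same send/receive pattern as the program above, reduced to the
 * essentials, but with MPI_Ssend: each send blocks until the matching
 * receive has started, so the receiver's buffers cannot be exhausted. */
int main(int argc, char *argv[])
{
    int rank, i, tag = 111;
    char data[MSGSIZE];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < MSGSIZE; i++)
        data[i] = 'x';

    if (rank == 0) {
        while (1)
            MPI_Ssend(data, MSGSIZE, MPI_BYTE, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        while (1)
            MPI_Recv(data, MSGSIZE, MPI_BYTE, 0, tag, MPI_COMM_WORLD, &status);
    }

    MPI_Finalize();
    return 0;
}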
