I wrote a program that reads data, filters and processes it and writes it to stdout. If stdout is piped to another process, and the piped process terminates, I get SIGPIPEd, which is great, because the program terminates, and the pipeline comes to a timely end.
Depending on the filter parameters, however, there may not be a single write for tens of seconds, and during that time there won't be a SIGPIPE, even though the downstream process has long since finished. How can I detect this without actually writing something to stdout? Currently, the pipeline just hangs until my program terminates of natural causes.
I tried writing a zero-length slice
if _, err := os.Stdout.Write([]byte{}); err != nil
but unfortunately that does not result in an error.
N.B. Ideally, this should work regardless of the platform, but if it works on Linux only, that's already an improvement.
This doesn't answer it in Go, but you can likely find a way to use this.
If you can apply poll(2) to the write end of your pipe, you will get a notification when it becomes un-writable. How to integrate that into your Go code depends on your program; hopefully it is useful:
#include <errno.h>
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

void sp(int sno) {
    write(2, "sigpipe!\n", 9);
    _exit(1);
}

int waitfd(int fd) {
    int n;
    struct pollfd p;
    p.fd = fd;
    p.events = POLLOUT | POLLRDBAND;
    /* RDBAND is for what looks like a bug in illumos fifovnops.c */
    p.revents = 0;
    if ((n = poll(&p, 1, -1)) == 1) {
        if (p.revents & POLLOUT) {
            return fd;
        }
        if (p.revents & (POLLERR|POLLHUP)) {
            return -1;
        }
    }
    fprintf(stderr, "poll=%d (%d:%s), r=%#x\n",
            n, errno, strerror(errno), p.revents);
    return -1;
}

int main() {
    int count = 0;
    char c;
    signal(SIGPIPE, sp);
    while (read(0, &c, 1) > 0) {
        int w;
        while ((w = waitfd(1)) != -1 &&
               write(1, &c, 1) != 1) {
        }
        if (w == -1) {
            break;
        }
        count++;
    }
    fprintf(stderr, "wrote %d\n", count);
    return 0;
}
On Linux, you can run this program as ./a.out < /dev/zero | sleep 1 and it will print something like: wrote 61441. You can change it to sleep for 3s, and it will print the same thing. That is pretty good evidence that it has filled the pipe and is waiting for space.
Sleep will never read from the pipe, so when its time is up, it closes the read side, which wakes up poll(2) with a POLLERR event.
If you change the poll event to not include POLLOUT, you get the simpler program:
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int waitfd(int fd) {
int n;
struct pollfd p;
p.fd = fd;
p.events = POLLRDBAND;
p.revents = 0;
if ((n=poll(&p, 1, -1)) == 1) {
if (p.revents & (POLLERR|POLLHUP)) {
return -1;
}
}
fprintf(stderr, "poll=%d (%d:%s), r=%#x\n",
n, errno, strerror(errno), p.revents);
return -1;
}
int main() {
if (waitfd(1) == -1) {
fprintf(stderr, "Got an error!\n");
}
return 0;
}
where "Got an error!" indicates the pipe was closed. I don't know how portable this is, as poll(2) documentation is kinda sketchy.
Without the POLLRDBAND (so events is 0), this works on Linux, but wouldn't on UNIX (at least Solaris and macOS). Again, the docs were useless, but having the kernel source answers many questions :)
This example, using threads, can be directly mapped to Go:
#include <pthread.h>
#include <errno.h>
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int Events = POLLRDBAND;

void sp(int sno) {
    char buf[64];
    write(2, buf, snprintf(buf, sizeof buf, "%d: sig%s(%d)\n",
            getpid(), sys_siglist[sno], sno));
    _exit(1);
}

int waitfd(int fd) {
    int n;
    struct pollfd p;
    p.fd = fd;
    p.events = Events;
    /* RDBAND is for what looks like a bug in illumos fifovnops.c */
    p.revents = 0;
    if ((n = poll(&p, 1, -1)) == 1) {
        if (p.revents & (POLLERR|POLLHUP)) {
            return -1;
        }
        return fd;
    }
    return -1;
}

void *waitpipe(void *t) {
    int x = (int)(intptr_t)t; /* gcc braindead */
    waitfd(x);
    kill(getpid(), SIGUSR1);
    return NULL;
}

int main(int ac) {
    pthread_t killer;
    int count = 0;
    char c;
    Events |= (ac > 1) ? POLLOUT : 0;
    signal(SIGPIPE, sp);
    signal(SIGUSR1, sp);
    pthread_create(&killer, 0, waitpipe, (int *)1);
    while (read(0, &c, 1) > 0) {
        write(1, &c, 1);
        count++;
    }
    fprintf(stderr, "wrote %d\n", count);
    return 0;
}
Note that it parks a thread on poll, and that thread raises SIGUSR1 once the poll wakes up. Here is a sample run:
mcloud:pipe $ ./spthr < /dev/zero | hexdump -n80
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000050
185965: sigUser defined signal 1(10)
mcloud:pipe $ ./spthr < /dev/zero | sleep 1
185969: sigUser defined signal 1(10)
mcloud:pipe $ ./spthr | sleep 1
185972: sigUser defined signal 1(10)
mcloud:pipe $ ./spthr < /dev/zero | hexdump -n800000
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
00c3500
185976: sigBroken pipe(13)
In the first command, hexdump quits after 80 bytes; the poll is fundamentally racing with the read+write loop, so it could have generated either a SIGPIPE or a SIGUSR1.
The second and third commands demonstrate that sleep causes a SIGUSR1 (poll returned an exception event) whether or not the write side of the pipe is full when the pipe reader exits.
The fourth uses hexdump to read a lot of data, far more than the pipe capacity, which more deterministically causes a SIGPIPE.
You can write test programs which model it more exactly, but the point is that the program is notified as soon as the pipe is closed, rather than having to wait until its next write.
Not a real solution to the problem - namely, detecting if the process down the pipe has terminated without writing to it - but here is a workaround, suggested in a comment by Daniel Farrell: (Define and) use a heartbeat signal that will get ignored downstream.
As this workaround is not transparent, it may not be possible if you don't control all processes involved.
Here's an example that uses the NUL byte as heartbeat signal for text based data:
my-cmd | head -1 | tr -d '\000' > file
my-cmd would send NUL bytes in times of inactivity to get a timely EPIPE / SIGPIPE.
Note the use of tr to strip off the heartbeats again once they have served their purpose - otherwise they would end up in file.
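For illustration, here is a minimal C sketch of what the writer side (my-cmd above) could do, assuming the consumer strips NUL bytes as shown; this is not the OP's actual program. It copies stdin to stdout and, after about a second of inactivity, writes a single NUL byte, so a closed pipe triggers SIGPIPE/EPIPE on the next heartbeat rather than much later.

#include <errno.h>
#include <poll.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    char nul = '\0';
    struct pollfd p;
    p.fd = 0;              /* watch stdin for data */
    p.events = POLLIN;
    for (;;) {
        int n = poll(&p, 1, 1000);          /* wait up to 1s for input */
        if (n == -1) {
            if (errno == EINTR)
                continue;
            return 1;
        }
        if (n == 0) {                       /* idle: emit a NUL heartbeat */
            if (write(1, &nul, 1) != 1)
                return 1;                   /* SIGPIPE (or EPIPE if ignored) ends us here */
            continue;
        }
        ssize_t r = read(0, buf, sizeof buf);
        if (r <= 0)
            return r == 0 ? 0 : 1;          /* EOF or read error */
        if (write(1, buf, (size_t)r) != r)
            return 1;
    }
}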
Related
I've come across something that seems strange while adding tests to a project I'm working on - I have been using /dev/null as a serial port and not expecting any data to be available for reading.
However, on Linux there is always data available, and on Mac OS X, after a call to srand(), there is data available.
Can someone help explain this behaviour?
Here is a minimal test program (C++):
#include <stdio.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>

int open_serial(const char *device) {
    speed_t bd = B115200;
    int fd;
    int state;
    struct termios config;
    if ((fd = open(device, O_NDELAY | O_NOCTTY | O_NONBLOCK | O_RDWR)) == -1)
        return -1;
    fcntl(fd, F_SETFL, O_RDWR);
    tcgetattr(fd, &config);
    cfmakeraw(&config);
    cfsetispeed(&config, bd);
    cfsetospeed(&config, bd);
    config.c_cflag |= (CLOCAL | CREAD);
    config.c_cflag &= ~(CSTOPB | CSIZE | PARENB);
    config.c_cflag |= CS8;
    config.c_lflag &= ~(ECHO | ECHOE | ICANON | ISIG);
    config.c_oflag &= ~OPOST;
    config.c_cc[VMIN] = 0;
    config.c_cc[VTIME] = 50; // 5 seconds reception timeout
    tcsetattr(fd, TCSANOW, &config);
    ioctl(fd, TIOCMGET, &state);
    state |= (TIOCM_DTR | TIOCM_RTS);
    ioctl(fd, TIOCMSET, &state);
    usleep(10000); // Sleep for 10 milliseconds
    return fd;
};

int serial_data_available(const int fd) {
    int result;
    ioctl(fd, FIONREAD, &result);
    return result;
};

int main() {
    int fd = open_serial("/dev/null");
    printf("Opened /dev/null - FD: %d\n", fd);
    printf("Serial data available : %d\n", serial_data_available(fd));
    printf("Serial data available : %d\n", serial_data_available(fd));
    printf("Calling srand()\n");
    srand(1234);
    printf("Serial data available : %d\n", serial_data_available(fd));
    printf("Serial data available : %d\n", serial_data_available(fd));
    return 0;
}
Under Mac OS X the output is as follows :-
Opened /dev/null - FD: 3
Serial data available : 0
Serial data available : 0
Calling srand()
Serial data available : 148561936
Serial data available : 0
On Linux I get the following :-
Opened /dev/null - FD: 3
Serial data available : 32720
Serial data available : 32720
Calling srand()
Serial data available : 32720
Serial data available : 32720
Two questions -
Shouldn't /dev/null always have 0 bytes available for reading?
Why does calling srand() on Mac OS X cause the bytes available for reading from /dev/null to change?
The problem was obvious (in hindsight!) - the result int is not initialised, so when ioctl fails, the function returns whatever garbage happened to be in that memory, even though no data is available.
int serial_data_available(const int fd) {
    int result;
    ioctl(fd, FIONREAD, &result);
    return result;
};

The correct code should be:

int serial_data_available(const int fd) {
    int result = 0;
    ioctl(fd, FIONREAD, &result);
    return result;
};
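For extra robustness (a suggestion beyond the original fix), the ioctl return value can also be checked, so a failing FIONREAD is never mistaken for pending data. The helper name below is just illustrative:

#include <sys/ioctl.h>

/* Returns the number of bytes available to read, or 0 if FIONREAD fails
 * (e.g. it is not supported on this descriptor, as with /dev/null). */
int serial_data_available_checked(const int fd) {
    int result = 0;
    if (ioctl(fd, FIONREAD, &result) == -1)
        return 0;
    return result;
}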
Let's consider this example code:
#include <linux/netlink.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

#define BUF_SIZE 4096

int main() {
    int fd, res;
    unsigned int i, len;
    char buf[BUF_SIZE];
    struct sockaddr_nl nls;

    fd = socket(PF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);
    if (fd == -1) {
        return 1;
    }

    memset(&nls, 0, sizeof(nls));
    nls.nl_family = AF_NETLINK;
    nls.nl_pid = getpid();
    nls.nl_groups = 1;

    res = bind(fd, (struct sockaddr *)&nls, sizeof(nls));
    if (res == -1) {
        return 2;
    }

    while (1) {
        len = recv(fd, buf, sizeof(buf), 0);
        printf("============== Received %d bytes\n", len);
        for (i = 0; i < len; ++i) {
            if (buf[i] == 0) {
                printf("[0x00]\n");
            } else if (buf[i] < 33 || buf[i] > 126) {
                printf("[0x%02hhx]", buf[i]);
            } else {
                printf("%c", buf[i]);
            }
        }
        printf("<END>\n");
    }

    close(fd);
    return 0;
}
It listens on a netlink socket for hotplug-related events. Basically, it works. However, some parts are still unclear to me even after spending a whole evening googling, reading various pieces of documentation and manuals, and working through examples.
Basically, I have two questions.
What do the different values of sockaddr_nl.nl_groups mean, at least for the NETLINK_KOBJECT_UEVENT protocol?
If the buffer allocated for the message is too small, the message is simply truncated (you can play with BUF_SIZE to see that). How large should the buffer be so that no data is lost? Is it possible to learn the length of the incoming message in user space, so that enough space can be allocated?
I would appreciate either direct answers or references to kernel code.
The values represent different multicast groups. A netlink socket can have 31 different multicast groups (0 means unicast) that multicast messages can be sent to. For NETLINK_KOBJECT_UEVENT it looks like it is fixed to 1; see, for example, here.
You should be able to use getsockopt with level set to SOL_SOCKET and optname set to SO_RCVBUF.
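A minimal sketch of that query, assuming fd is the netlink socket created in the code above (the helper name is only for illustration):

#include <stdio.h>
#include <sys/socket.h>

/* Print and return the socket's receive-buffer size; a datagram larger
 * than this could not be queued in full anyway. Returns -1 on error. */
int print_rcvbuf(int fd) {
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == -1) {
        perror("getsockopt");
        return -1;
    }
    printf("SO_RCVBUF = %d bytes\n", rcvbuf);
    return rcvbuf;
}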
I am following a piece of source code from my documents, but I get an error when I try to use MPI_Send() and MPI_Recv() from the Open MPI library.
I have googled and read some threads on this site, but I cannot find a solution to my error.
This is my error:
mca_oob_tcp_msg_recv: readv failed : Unknown error (108)
And this is the code that I'm following:
#include <stdio.h>
#include <string.h>
#include <conio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, mesg, tag = 123;
    MPI_Status status;
    MPI_Init(&argv, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (size < 2) {
        printf("Need at least 2 processes!\n");
    } else if (rank == 0) {
        mesg = 11;
        MPI_Send(&mesg,1,MPI_INT,1,tag,MPI_COMM_WORLD);
        MPI_Recv(&mesg,1,MPI_INT,1,tag,MPI_COMM_WORLD,&status);
        printf("Rank 0 received %d from rank 1\n",mesg);
    } else if (rank == 1) {
        MPI_Recv(&mesg,1,MPI_INT,0,tag,MPI_COMM_WORLD,&status);
        printf("Rank 1 received %d from rank 0/n",mesg);
        mesg = 42;
        MPI_Send(&mesg,1,MPI_INT,0,tag,MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
When I commented out all of the MPI_Send() and MPI_Recv() calls, my program worked. On the other hand, when I commented out either MPI_Send() or MPI_Recv() alone, I still got that error. So I think the problem is with the MPI_Send() and MPI_Recv() functions.
P.S.: I'm using Open MPI v1.6 on Windows 8.1 OS.
You are passing the wrong arguments to MPI_Init (argv twice, instead of argc and argv).
The sends and receives themselves look fine, I think. But there is also a typo in one of your prints: /n instead of \n.
Here is what works for me (on Mac OS X, though):
int main(int argc, char **argv) {
    int rank, size, mesg, tag = 123;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (size < 2) {
        printf("Need at least 2 processes!\n");
    } else if (rank == 0) {
        mesg = 11;
        MPI_Send(&mesg,1,MPI_INT,1,tag,MPI_COMM_WORLD);
        MPI_Recv(&mesg,1,MPI_INT,1,tag,MPI_COMM_WORLD,&status);
        printf("Rank 0 received %d from rank 1\n",mesg);
    } else if (rank == 1) {
        MPI_Recv(&mesg,1,MPI_INT,0,tag,MPI_COMM_WORLD,&status);
        printf("Rank 1 received %d from rank 0\n",mesg);
        mesg = 42;
        MPI_Send(&mesg,1,MPI_INT,0,tag,MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
If this does not work, I'd guess your OS does not let the processes communicate with each other via the method chosen by OpenMPI.
Alternatively, since the status is never inspected, you can pass MPI_STATUS_IGNORE instead of &status to MPI_Recv in both places.
I am using the gcc compiler to implement a random-number generator using only getpid() and gettimeofday(). Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct timeval tv;
    int count;
    int i;
    int INPUT_MAX = 10;
    int NO_OF_SAMPLES = 10;

    gettimeofday(&tv, NULL);
    printf("Enter Max: \n");
    scanf("%d", &INPUT_MAX);
    printf("Enter No. of samples needed: \n");
    scanf("%d", &NO_OF_SAMPLES);
    /*printf("%ld\n", tv.tv_usec);
    printf("PID :%d\n", getpid());*/
    for (count = 0; count < NO_OF_SAMPLES; count++) {
        printf("%ld\n", (getpid() * tv.tv_usec) % INPUT_MAX + 1);
        for (i = 0; i < 1000000; ++i)
        {
            /* code */
        }
    }
    return 0;
}
I added the inner for loop for delay purposes, but the result I am getting is always the same number, like this:
./a.out
Enter Max:
10
Enter No. of samples needed:
10
1
1
1
1
1
1
1
1
1
1
Please correct me: what am I doing wrong?
getpid() is constant during the program's execution, so you get constant values, too.
But even if you use gettimeofday() inside the loop, this likely won't help:
gcc will likely optimize away your delay loop.
even if it's not optimized away, the delays will be very similar, so your values won't be very random.
I'd suggest you look up "linear congruential generator" for a simple way to generate more random numbers.
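As a sketch of that suggestion (not part of the original answer), here is a simple linear congruential generator seeded from getpid() and gettimeofday(); the multiplier and increment are the widely used Numerical Recipes constants, chosen here purely as an example:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static unsigned long lcg_state;

/* Seed the generator from the pid and the current microsecond count. */
static void lcg_seed(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    lcg_state = (unsigned long)getpid() ^ (unsigned long)tv.tv_usec;
}

/* One LCG step: state = state * a + c (mod 2^32). */
static unsigned long lcg_next(void) {
    lcg_state = (1664525UL * lcg_state + 1013904223UL) & 0xffffffffUL;
    return lcg_state;
}

int main(void) {
    int i;
    lcg_seed();
    for (i = 0; i < 10; i++)
        printf("%lu\n", lcg_next() % 10 + 1); /* values in 1..10 */
    return 0;
}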
Put gettimeofday() inside the loop. Also note that if getpid() is divisible by INPUT_MAX, you will always get the same answer. Instead, you can add getpid() to tv.tv_usec (though that does not make much sense either).
I am trying to benchmark file system I/O on Mac OS X using mmap.
#include <unistd.h>
#include <fcntl.h>
#include <dirent.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

char c;

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        printf("no files\n");
        exit(1);
    }
    int fd = open(argv[1], O_RDONLY);
    fcntl(fd, F_NOCACHE, 1);
    int offset = 0;
    int size = 0x100000;
    int pagesize = getpagesize();
    struct stat stats;
    fstat(fd, &stats);
    int filesize = stats.st_size;
    printf("%d byte pages\n", pagesize);
    printf("file %s # %d bytes\n", argv[1], filesize);
    while (offset < filesize)
    {
        if (offset + size > filesize)
        {
            int pages = ceil((filesize - offset) / (double)pagesize);
            size = pages * pagesize;
        }
        printf("mapping offset %x with size %x\n", offset, size);
        void *mem = mmap(0, size, PROT_READ, 0, fd, offset);
        if (mem == -1)
            return 0;
        offset += size;
        int i = 0;
        for (; i < size; i += pagesize)
        {
            c = *((char *)mem + i);
        }
        munmap(mem, size);
    }
    return 0;
}
The idea is that I'll map a file, or a portion of it, and then cause page faults by dereferencing the mapping. I am slowly losing my sanity, since this doesn't work at all, and I've done similar things on Linux before.
Change this line
void * mem = mmap(0, size, PROT_READ, 0, fd, offset);
to
void * mem = mmap(0, size, PROT_READ, MAP_PRIVATE, fd, offset);
And, don't compare mem with -1. Use this instead:
if(mem == MAP_FAILED) { ... }
It's both more readable and more portable.
General advice: if you're on a different UNIX platform from what you're used to, it's a good idea to open the man page. For mmap on OS X, it can be found here. It says
Conforming applications must specify either MAP_PRIVATE or MAP_SHARED.
So, specifying 0 as the fourth argument is not OK on OS X. I believe this is true for BSD in general.
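Putting both fixes together, a minimal self-contained sketch (assuming the input file is non-empty, so touching the first page does not SIGBUS) might look like this:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    size_t size = (size_t)getpagesize();   /* map a single page for brevity */
    /* MAP_PRIVATE (or MAP_SHARED) is required on OS X; 0 is not accepted. */
    void *mem = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    volatile char c = *(char *)mem;        /* touch the page to fault it in */
    (void)c;
    munmap(mem, size);
    close(fd);
    return 0;
}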