getline blocking when reading from a FIFO

I have two processes. One is a producer that writes data to a FIFO opened in O_RDWR mode. The other is a consumer. It opens the FIFO in "read mode" using a file pointer.
When the producer writes data to the FIFO, the consumer reads it using getline. When the producer stops writing data, getline returns -1 with the stream's error flag set (ferror). After 2-3 hours of the producer not writing any data to the FIFO, the getline call blocks the consumer process.
Can anyone explain why this is happening?
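For reference, the consumer is essentially doing this (a simplified sketch; the FIFO path and the per-line processing are illustrative, not from the question):

/* Simplified sketch of the consumer described above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(void) {
    FILE *fp = fopen("/tmp/myfifo", "r"); /* open the FIFO in read mode */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    char *line = NULL;
    size_t cap = 0;
    ssize_t n;
    while ((n = getline(&line, &cap, fp)) != -1) {
        /* process one line written by the producer */
        fputs(line, stdout);
    }
    if (ferror(fp))
        perror("getline"); /* the -1 with the error flag set */
    free(line);
    fclose(fp);
    return 0;
}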

Related

Writing messages to a shared log file from different MPI processes independently

I have a parallel MPI program in which I would like to collect log messages for debugging into a single log text file. The length, number, and timing of messages vary for each process, and I cannot use the collective write_shared function. Is there a way to check whether a process is currently writing to the shared file and hold the others until that write has completed, to prevent messages from being overwritten by one another? I'm imagining functionality like a mutex or a lock.
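For reference, a minimal sketch of per-process logging through the shared file pointer. Note that MPI_File_write_shared itself is non-collective (the collective shared-pointer variant is MPI_File_write_ordered), and each call advances the shared pointer atomically, so records from different ranks land in disjoint regions; whether it fits depends on the constraint above. The file name and message are illustrative:

/* Hedged sketch: each rank appends its own record via the shared
   file pointer; names are illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "debug.log",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    char msg[128];
    int len = snprintf(msg, sizeof msg, "rank %d: checkpoint reached\n", rank);
    MPI_File_write_shared(fh, msg, len, MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}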

Batching in golang

I want my program to handle messages in batches, which can improve the efficiency of the process.
I am confused: if I set a function to wait for a few seconds and then process all the messages received from a client, how can I do that without interrupting the infinite loop? e.g.
for {
    msg := <-listenUDP // receive the next message from the UDP listener channel
    batching(msg)
}
Also, even if the batching function can wait without interrupting the for loop, what happens when a new 'msg' is received while the batching function is still running? Will the system call a new batching function? If so, how can I force it to use the existing batching function rather than calling a new one?
Based on the details you have given, one possible approach is to maintain an in-memory list of events yet to be processed and then invoke a separate goroutine to process each batch after a specified time or once the list reaches a specific size.
That way your infinite loop can continue to receive messages while the batches are processed. Depending on the requirements, you can add communication between the main goroutine and the batch goroutines.
It really depends on the implementation. For example, you can spawn a worker goroutine and then forget about it, i.e. the main goroutine just continues to receive messages. The pseudo-code could look like the following (a runnable sketch comes after it):
for each message received:
    if the time limit has passed or the message list has reached the batch size:
        spawn a new goroutine to process the batch and forget about it
    else:
        push the message onto the list and continue to the next message
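A minimal runnable sketch of that idea in Go; batchSize, flushInterval, processBatch, and the channel standing in for the UDP listener are all illustrative names, not from the question:

package main

import (
    "fmt"
    "time"
)

const batchSize = 50
const flushInterval = 2 * time.Second

// processBatch handles one batch in its own goroutine ("spawn and forget").
func processBatch(batch []string) {
    fmt.Printf("processing %d messages\n", len(batch))
}

func main() {
    msgs := make(chan string) // stands in for the UDP listener
    go func() {               // fake producer so the sketch runs
        for i := 0; ; i++ {
            msgs <- fmt.Sprintf("msg-%d", i)
            time.Sleep(10 * time.Millisecond)
        }
    }()

    var batch []string
    ticker := time.NewTicker(flushInterval)
    defer ticker.Stop()
    for {
        select {
        case m := <-msgs:
            batch = append(batch, m)
            if len(batch) < batchSize {
                continue // keep filling the current batch
            }
        case <-ticker.C:
            if len(batch) == 0 {
                continue // nothing to flush yet
            }
        }
        go processBatch(batch) // spawn and forget; the loop keeps receiving
        batch = nil            // start a fresh batch
    }
}

Because the flush hands the slice to the new goroutine and then resets the local variable, the existing batch goroutine keeps its own data and the loop never calls into a "running" batching function; each batch gets exactly one goroutine.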

How can I create a Spring Integration flow that reads from Kafka with a queue limit and parallel processing?

I would like to create a flow that reads Kafka messages asynchronously and uses a queue channel to accumulate a number of messages to process, and only when it finishes processing those messages (50 messages, for example) can it take another 50, or as space frees up in that queue.
I tried a flow that reads from Kafka and delegates to another flow with a QueueChannel and PollerMetadata (Pollers.fixedDelay(500).maxMessagesPerPoll(50)), but the poller uses a single thread to read the messages there, so I can't process the 50 messages in parallel. If I put an executor on the poller, it works like a normal executor: it keeps accumulating messages instead of stopping at 50 until a thread frees up to fetch another message from Kafka.
The goal is to parallelize the processing of up to 50 Kafka messages, but to read from Kafka again (consumer.poll) only when that queue frees up. Instead, it keeps reading from Kafka endlessly and processing within whatever limit the executor or poller allows. How can I achieve this goal using a Spring Integration flow with Kafka?
Is this configuration alone enough for each consumer topic? The log always prints the same thread, [ntainer#0-1-C-1], even though I set the concurrency to 10:
Kafka.messageDrivenChannelAdapter(consumerFactory, topic)
    .configureListenerContainer { kafkaMessageListenerContainer ->
        kafkaMessageListenerContainer.concurrency(concurrency)
        kafkaMessageListenerContainer.ackMode(ContainerProperties.AckMode.RECORD)
    }
    .errorChannel(IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
You should never use a queue channel or perform any async processing with Kafka. It's too difficult to keep track of the offsets within the topic/partitions. You will risk losing messages.
Instead, to increase concurrency, increase the number of partitions in the topic and set the listener container concurrency to get the number of consumers you need (e.g. 50).
You should generally have more partitions than consumers, but you need at least as many partitions as consumers, because each partition can be consumed by only one consumer in the group.

Windows: What happens to data in a pipe when writer process exits?

The setup:
process A creates process B and correctly attaches stdin, stdout, stderr to anonymous pipes between the processes.
process B generates a small amount of data over stdout, then terminates.
process A was busy and didn't get a chance to read the pipe until sometime after process B terminated.
Is the small amount of data still readable by process A?

Deadlock on closing a pipe while another thread reads this pipe?

I'm having a deadlock on closing a pipe:
close(myPipeFD);
Another thread, the reading thread, is in a blocking read state from this exact same pipe:
ssize_t sizeRead = read(myPipeFD, buffer, bufferSize);
Could this be the cause of such a deadlock? I thought that read would have immediately returned with sizeRead == 0. Should I send an interruption to this reading thread?
It is not safe to close a file descriptor when another thread may be using it, for several reasons.
As you've discovered, some system calls which can block waiting on the file descriptor may behave in unexpected ways if that file descriptor is closed.
But there are other problems. Let's suppose that the first thread closes a file descriptor just before a second thread enters a read() call on it. Let's also suppose that a third thread happens to be opening a file or a socket at the same time. The new file descriptor will get the same number as the one that was just closed. The second thread will read from the wrong file descriptor!
In general, you need to make sure that only one thread is operating on a file descriptor at a time. Threads should "own" file descriptors. You can pass ownership from one thread to another, but only one should own each at a time.
If you need to cancel operations, you need to use non-blocking I/O and things like select() for when you need to block waiting for data. Furthermore, you need to include a cross-thread communication channel (e.g. pipe) in the select() call which will be the mechanism by which one thread submits a request to the other to close one of its file descriptors.
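A minimal sketch of that pattern, assuming a second pipe (cancelFDs) serves as the cross-thread communication channel; all names are illustrative:

/* The reading thread blocks in select() on both the data pipe and a
   "cancel" pipe, so another thread can wake it up instead of closing
   the descriptor out from under it. */
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

int dataFD;       /* the pipe being read */
int cancelFDs[2]; /* from pipe(cancelFDs): [0] read end, [1] write end */

void *readerThread(void *arg) {
    (void)arg;
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(dataFD, &rfds);
        FD_SET(cancelFDs[0], &rfds);
        int maxfd = dataFD > cancelFDs[0] ? dataFD : cancelFDs[0];
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        if (FD_ISSET(cancelFDs[0], &rfds)) {
            /* Another thread asked us to stop: the owning thread closes
               dataFD itself, so nobody closes it while we are using it. */
            close(dataFD);
            break;
        }
        if (FD_ISSET(dataFD, &rfds)) {
            char buf[4096];
            ssize_t n = read(dataFD, buf, sizeof buf); /* won't block now */
            if (n <= 0)
                break; /* 0 = writer closed; <0 = error */
            /* ... process n bytes ... */
        }
    }
    return NULL;
}

/* To request cancellation from another thread: write(cancelFDs[1], "x", 1); */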
You should also look into Dispatch I/O or asynchronous mechanisms like run-loop driven NSFileHandle.
