How can I control two processes so that they run alternately, each in its own terminal window?
For example, I start the code for each in a separate terminal window at 11:59, and both wait for the time to reach 12:00. At that moment process one starts executing while process two waits for, say, 10 seconds. Then they switch: process two executes and process one waits.
They take turns this way until the work is complete.
Pipes, or named pipes? Each process waits in a read for the other to write a byte to it.
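For example, here is a minimal sketch of the named-pipe version in C. It assumes two FIFOs, /tmp/turn_a and /tmp/turn_b, have already been created with mkfifo; the paths, the five-turn count, and the O_RDWR trick (which stops open from blocking, but is Linux behaviour rather than something POSIX guarantees for FIFOs) are all illustrative. Run it with argument a in one terminal and b in the other:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int am_a = (argc > 1 && strcmp(argv[1], "a") == 0);
    const char *mine  = am_a ? "/tmp/turn_a" : "/tmp/turn_b";
    const char *yours = am_a ? "/tmp/turn_b" : "/tmp/turn_a";

    /* O_RDWR so neither open blocks waiting for its peer (works on
     * Linux; POSIX leaves O_RDWR on a FIFO unspecified) */
    int my_fd   = open(mine,  O_RDWR);
    int your_fd = open(yours, O_RDWR);

    char token = 'x';
    if (am_a)
        write(my_fd, &token, 1);       /* seed the token: A goes first */

    for (int turn = 0; turn < 5; turn++) {
        read(my_fd, &token, 1);        /* wait in a read for our turn  */
        printf("%s: turn %d\n", am_a ? "A" : "B", turn);
        sleep(10);                     /* this process's time slice    */
        write(your_fd, &token, 1);     /* pass the turn to the peer    */
    }
    return 0;
}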
Also, possibly use signal files. Process B sleeps for 100 ms, checks for file Foo, and repeats. When process A creates the file, process B deletes it, and proceeds. Then the reverse happens with file Bar.
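A sketch of process B's half of that handshake, with the 100 ms interval and the Foo/Bar file names taken from the description (the fixed turn count is illustrative, and a real version should check for errors):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int turn = 0; turn < 5; turn++) {
        while (access("Foo", F_OK) != 0)
            usleep(100 * 1000);        /* sleep 100 ms, check again */
        remove("Foo");                 /* consume the signal file   */
        /* ... do this turn's work ... */
        printf("B: turn %d\n", turn);
        FILE *f = fopen("Bar", "w");   /* signal A: its turn now    */
        if (f)
            fclose(f);
    }
    return 0;
}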
You can use System V semaphores or Windows named mutexes (via CreateMutex). You could even resort to file locks. Which OS are you on and what are your restrictions?
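For the System V route, a sketch using two semaphores as turn tokens might look like the following. The key, the turn count, and the assumption that newly created semaphores start at zero (true on Linux, not guaranteed by POSIX) are all illustrative, and the SETVAL call is racy if both processes start at exactly the same moment:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; };              /* POSIX says the caller defines this */

int main(int argc, char **argv)
{
    int am_a = (argc > 1 && strcmp(argv[1], "a") == 0);

    /* semaphore 0 is A's turn token, semaphore 1 is B's */
    int sems = semget(0x53454D, 2, IPC_CREAT | 0600);

    if (am_a) {
        union semun arg = { 1 };
        semctl(sems, 0, SETVAL, arg);  /* hand A the first turn */
    }

    int mine = am_a ? 0 : 1, yours = am_a ? 1 : 0;
    for (int turn = 0; turn < 5; turn++) {
        struct sembuf take = { mine, -1, 0 };   /* P: wait for our token  */
        struct sembuf give = { yours, 1, 0 };   /* V: pass it to the peer */
        semop(sems, &take, 1);
        printf("%c: turn %d\n", am_a ? 'A' : 'B', turn);
        sleep(10);
        semop(sems, &give, 1);
    }
    return 0;
}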
I faced a concurrency problem when writing to the same named pipe (created with mkfifo) from multiple processes at the same time: some writes got lost. Since the number of writing processes is limited, I want to switch from "writing to 1 pipe from n processes and reading from 1 separate process" to "writing to n pipes from n processes and reading from 1 separate process".
Currently I'm reading via read line <"$pipe" in a loop until a condition is met. read blocks here until a line has been read.
How can I read from multiple pipes ($pipe1, $pipe2 … $pipeN) in one loop until a condition is met, while honouring newly written lines on all pipes equally?
One way to deal with the initially described problem of multiple children writing to a single FIFO is to have a process open the FIFO for reading but never actually read it. This will allow writers to write unless the FIFO is full. I don't think there's a standard program that simply goes to sleep forever until signalled. I use a home-brew program pause, which is a pretty minimal C program:
#include <unistd.h>

int main(void)
{
    pause();    /* sleep until a signal arrives */
}
It never exits until signalled. In your shell script, first launch the children, telling them to write to $FIFO, then run:
pause <$FIFO &
pid=$!
Note that the pause-like command will not be launched into the background until the redirection completes, and the open of the FIFO won't complete until there is a process to write to it, so at least one child needs to be launched into the background before the pause-like process is executed. Alternatively, write a variant of pause (I call mine sleepon) which opens the files named in its argument list. The command line then becomes simply sleepon $FIFO &; the backgrounding operation completes at once, and the pause-like program blocks until it is able to open the FIFO (which happens when one of the children opens the FIFO for writing) and then goes to sleep indefinitely. The code for sleepon is a lot more complex than the code for pause, though.
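The answerer's sleepon isn't shown, but a rough approximation of what it must do, judging from the description above, is:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        /* opening a FIFO read-only blocks until a writer appears */
        if (open(argv[i], O_RDONLY) < 0) {
            perror(argv[i]);
            exit(EXIT_FAILURE);
        }
    }
    pause();                           /* then sleep until signalled */
    return 0;
}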
Once the children and the pause-like process are launched, the parent can continue with the main processing loop.
while read line
do
    …
done < $FIFO
The main thing to be aware of is that the parent loop will exit whenever the FIFO is emptied. You need to know when it should terminate, if ever; at the point where it does terminate, it should kill the pause process with kill $pid. You may need to wrap a while true; do … done loop around the line-reading loop, or perhaps something cleverer than that. It depends, in part, on what your "until a condition is met" requirement is.
Your requirement to 'read from multiple FIFOs, all of which may intermittently have data on them' is not easy to do. It's not particularly trivial in C; I don't think there's a standard (POSIX) shell command to assist with that. In C, you'd end up using POSIX select() or poll() or one of their many variants, some of which are platform-specific. There might be a platform-specific command that will help; I have my doubts, though.
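To give an idea of the C version, here is a sketch using poll() over several FIFOs, with illustrative paths and error handling omitted. Opening with O_RDWR keeps each descriptor readable even after all writers close, which stops poll() from spinning on end-of-file; as noted earlier, that behaviour is Linux-specific:

#include <stdio.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

#define NFIFOS 3

int main(void)
{
    const char *paths[NFIFOS] = { "/tmp/pipe1", "/tmp/pipe2", "/tmp/pipe3" };
    struct pollfd fds[NFIFOS];

    for (int i = 0; i < NFIFOS; i++) {
        fds[i].fd = open(paths[i], O_RDWR);   /* see O_RDWR caveat above */
        fds[i].events = POLLIN;
    }

    for (;;) {                                /* until your condition is met */
        if (poll(fds, NFIFOS, -1) < 0)
            break;
        for (int i = 0; i < NFIFOS; i++) {
            if (fds[i].revents & POLLIN) {
                char buf[512];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n > 0)
                    fwrite(buf, 1, n, stdout);   /* handle the lines read */
            }
        }
    }
    return 0;
}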
I have a shell script which runs very large simulation binaries. This becomes problematic when I want to print some status information from the script. For instance, when I run 10 large simulations, I want to be able to print which iteration I am on without having to wait a minute or two for the current simulation to terminate.
Currently I am using the trap command. However, the script does not react immediately to signals: it only executes the bound function once the current iteration terminates. I will post the code if anyone needs it.
You should start a thread for each large thing you're going to run. Have those threads dump their results somewhere shared; your main thread is then free to interrogate the results on the fly.
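Your script is shell, but as a sketch of the structure this answer describes, here it is in C with POSIX threads (compile with -pthread; the job stub, sleep times, and counts are placeholders):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NJOBS 10

static int done_count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *run_job(void *arg)
{
    int id = *(int *)arg;
    sleep(2 + id % 3);                 /* stand-in for one large simulation */
    pthread_mutex_lock(&lock);
    done_count++;                      /* dump the result somewhere shared  */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tids[NJOBS];
    int ids[NJOBS];

    for (int i = 0; i < NJOBS; i++) {
        ids[i] = i;
        pthread_create(&tids[i], NULL, run_job, &ids[i]);
    }

    for (;;) {                         /* main thread reports on the fly */
        pthread_mutex_lock(&lock);
        int n = done_count;
        pthread_mutex_unlock(&lock);
        printf("finished %d of %d simulations\n", n, NJOBS);
        if (n == NJOBS)
            break;
        sleep(1);
    }
    for (int i = 0; i < NJOBS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}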
Specifically, if the following events take place in the given order:
1. Process 1 opens a file in append mode.
2. Process 2 opens the same file in append mode.
3. Process 2 gets an exclusive lock using flock(2) on the file descriptor.
4. Process 1 attempts to write to the file.
What happens?
Will the write return immediately with a code indicating failure? Will it hang until the lock is released, then write and return success? Does the behavior vary by kernel? It seems odd that the documentation doesn't cover this case.
(I could write a couple of processes to test it on my system, but I don't know whether my test would be representative of the general case; if anyone does know, the answer could save a lot of other people a lot of time.)
The write proceeds as normal. flock provides advisory locking. Locking a file exclusively only prevents others from getting a shared or exclusive lock on the same file. Calls other than flock are not affected.
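Since you mentioned writing a couple of processes to test it, here is roughly what such a test could look like on a POSIX system (the file name and timings are arbitrary):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>
#include <sys/wait.h>

int main(void)
{
    int fd = open("demo.log", O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (fork() == 0) {                  /* process 2: lock and hold */
        int cfd = open("demo.log", O_WRONLY | O_APPEND);
        flock(cfd, LOCK_EX);
        sleep(2);
        flock(cfd, LOCK_UN);
        _exit(0);
    }

    sleep(1);                           /* let the child take the lock */
    /* returns immediately: the advisory lock does not block write() */
    ssize_t n = write(fd, "written despite the lock\n", 25);
    printf("write returned %zd while the lock was held\n", n);

    wait(NULL);
    return 0;
}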
I have to create a script (ksh or Perl) that starts a certain number of parallel jobs (other scripts), each of which runs as a foreground process in a separate session. I also start a monitoring job that has to determine whether any of those scripts is expecting input from the operator, and switch to the corresponding session if necessary.
My problem is that I have not found a good way to determine that a process is expecting input. For a background process it's pretty easy: the process state is "stopped", and this can easily be checked with the ps command. In the case of a foreground process this does not work.
So far I have tried attaching to the process with dbx or truss to see if it's hanging on a read, but this approach seems too heavyweight.
Could you suggest a better solution? Perl, shell, C, Java, etc. is OK, as long as it's standard and does not require extra third-party or OS-specific stuff to install.
Thank you.
What you're asking isn't possible, at least not reliably. The process may be using select() or some other polling method rather than blocking on a read call. You can't know whether it's waiting for operator input or busy doing other stuff, and in general it could be both (doing work in the background while remaining responsive to operator input).
The normal way for a program to signal that it's waiting for operator input is to print a prompt. Thus you should consider a session to be active if it's displayed a prompt since the last time you fed it input.
If your programs don't behave this way, you'll need to find some other program-specific way to know that these processes are waiting for input.
I would like to run arbitrary console-based sub-processes and manage them from a single master process. The console based sub-processes communicate via stdin, stdout and stderr, and if you run them in a genuine console they terminate cleanly when you press CTRL+C. Some of them may in fact be a tree of processes, such as a batch script that runs an executable which may in turn run another executable to do some work. I would like to redirect their standard I/O (for example, so that I can show their output in a GUI window) and in certain circumstances to send them a CTRL+C event so that they will give up and terminate cleanly.
The following two diagrams show, first, the normal structure: one master process has four worker sub-processes, and some of those workers have their own subprocesses. Second, what should happen when one of the workers needs to be stopped: it and all of its children should get the CTRL+C event, but no other process should.
Additionally, I would much prefer that there are no extra windows visible to the user.
Here's what I've tried (note that I'm working in Python, but solutions for C would still be helpful):
Spawning an extra intermediate process with CREATE_NEW_CONSOLE, and then having it spawn the worker process. Then have it call GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0) when we want to kill the worker. Unfortunately, CREATE_NEW_CONSOLE seems to prevent me from redirecting the standard I/O channels, so I'm left with no easy way to get the output back to the main program.
Spawning an extra intermediate process with CREATE_NEW_PROCESS_GROUP, and then having it spawn the worker process. Then have it call GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0) when we want to kill the worker. Somehow this manages to send the CTRL+C only to the master process, which is completely useless. On closer inspection, the documentation for GenerateConsoleCtrlEvent says that CTRL+C cannot be sent to process groups.
Spawning the subprocess with CREATE_NEW_PROCESS_GROUP. Then call GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pid) to kill the worker. This is not ideal, because CTRL+BREAK is less friendly than CTRL+C and will probably result in a messier termination. (E.g. if it's a Python process, no KeyboardInterrupt can be caught and no finally blocks run.)
Is there any good way to do what I want? I can see that I could theoretically build on the first attempt and find some other way to communicate between the processes, but I am worried it will turn out to be extremely awkward. Are there good examples of other programs that achieve the same effect? It seems so simple that it can't be all that uncommon a requirement.
I don't know about managing/redirecting stdin et al., but for managing the subprocess tree, have you considered using the Windows Job Objects API?
There are several other questions about managing process trees (How do I automatically destroy child processes in Windows? Performing equivalent of “Kill Process Tree” in c++ on windows) and it looks like the cleanest method if you can use it.
Chapter 5 of Windows via C/C++ by Jeffrey Richter has a good discussion of using CreateJobObject and the related APIs.
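As a rough C sketch of the Job Object approach (error handling omitted, and worker.exe is a placeholder). One caveat against the original requirement: Job Objects give you reliable tree-wide termination via TerminateJobObject, but that is a hard kill rather than a CTRL+C, so the workers get no chance to clean up:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = { 0 };
    HANDLE job = CreateJobObject(NULL, NULL);

    /* kill everything in the job when the last handle to it closes */
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &info, sizeof info);

    STARTUPINFO si = { sizeof si };
    PROCESS_INFORMATION pi;
    char cmd[] = "worker.exe";          /* placeholder command line */

    /* CREATE_SUSPENDED puts the worker into the job before it can
     * spawn grandchildren outside of it */
    CreateProcess(NULL, cmd, NULL, NULL, FALSE,
                  CREATE_SUSPENDED, NULL, NULL, &si, &pi);
    AssignProcessToJobObject(job, pi.hProcess);
    ResumeThread(pi.hThread);

    /* ... later, to stop the worker and all of its descendants: */
    TerminateJobObject(job, 1);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(job);
    return 0;
}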