This is a simple question. If I use the read command in a bash script, what really happens while the script is waiting for input? Is the memory consumption reduced, as in a sleep state, like when we use the sleep command?
The memory consumption is not affected at all. What happens in both cases is that the shell process changes its state from runnable to sleeping.
In the case of read, the shell process enters kernel space to read the user's input and is rescheduled whenever data becomes available.
sleep voluntarily yields to kernel space, where the process is suspended; it is rescheduled after the timeout has passed.
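You can observe this from outside the process. A small sketch (it assumes a ps that supports -o stat=, as on Linux or macOS): a background sleep sits in state "S" (interruptible sleep), which is the same state a process blocked in read() sits in, and it consumes no CPU while there.

```shell
# Observe the state of a blocked process from outside.
sleep 30 &                       # stand-in for a process blocked in read()
pid=$!
state=$(ps -o stat= -p "$pid")
echo "state: $state"             # typically "S": sleeping, not using CPU
kill "$pid"
```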
I faced a concurrency problem when writing to the same named pipe (created with mkfifo) from multiple processes at the same time: some writes got lost. Since the number of writing processes is limited, I want to switch from "writing to 1 pipe from n processes and reading from 1 separate process" to "writing to n pipes from n processes and reading from 1 separate process".
Currently I'm reading via read line <"$pipe" in a loop until a condition is met. read blocks here until a line has been read.
How can I read from multiple pipes ($pipe1, $pipe2 … $pipeN) in one loop until a condition is met, while honouring newly written lines on all pipes equally?
One way to deal with the initially described problem of multiple children writing to a single FIFO is to have a process open the FIFO for reading but never actually read it. This will allow writers to write unless the FIFO is full. I don't think there's a standard program that simply goes to sleep forever until signalled. I use a home-brew program pause, which is a pretty minimal C program:
#include <unistd.h>

int main(void)
{
    pause();
}
It never exits until signalled. In your shell script, first launch the children, telling them to write to $FIFO, then run:
pause <$FIFO &
pid=$!
Note that the pause-like command will not be launched into the background until the redirection completes, and the open of the FIFO won't complete until there is a process writing to the FIFO; so at least one child needs to be launched in the background before the pause-like process is executed. Alternatively, write a variant of pause (I call mine sleepon) which opens the files named in its argument list. Then the command line is simply sleepon $FIFO &: the backgrounding operation completes at once, the pause-like program blocks until it is able to open the FIFO (which will be when one of the children opens the FIFO for writing), and then it sleeps indefinitely. The code for sleepon is, however, rather more complex than the code for pause.
Once the children and the pause-like process are launched, the parent can continue with the main processing loop.
while read line
do
    …
done < $FIFO
The main thing to be aware of is that the parent loop will exit whenever the FIFO is emptied and every write end has been closed (read then sees EOF). You need to know when it should terminate, if ever. At the point where it does terminate, it should kill the pause process: kill $pid. You may need to wrap a while true; do … done loop around the line-reading loop, but you may need something cleverer than that. It depends, in part, on what your "until a condition is met" requirement is.
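A related variant, sketched below: instead of holding a read end open (the pause trick) and restarting the loop on EOF, hold a write end open so the reader never sees EOF and exits only on an explicit condition. The sentinel line "DONE" and the long background sleep used as the write-end holder are illustrative choices, not part of the original scheme.

```shell
#!/bin/sh
# Hold a write end open so the read loop never sees EOF, and stop
# only when a sentinel line arrives.
FIFO=$(mktemp -u); mkfifo "$FIFO"

sleep 1000 > "$FIFO" &           # holds the write end open
holder=$!

{ echo "result 1"; echo "DONE"; } > "$FIFO" &   # a sample child

while read -r line; do
    [ "$line" = "DONE" ] && break
    echo "got: $line"
done < "$FIFO"

kill "$holder"
rm -f "$FIFO"
```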
Your requirement to 'read from multiple FIFOs, all of which may intermittently have data on them' is not easy to satisfy. It's not particularly trivial even in C, and I don't think there's a standard (POSIX) shell command to assist with it. In C, you'd end up using POSIX select() or poll() or one of their many variants, some of which are platform-specific. There might be a platform-specific command that will help; I have my doubts, though.
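That said, bash (not POSIX sh) can approximate a poll()-style loop with timed reads on separate file descriptors. A sketch, where the FIFO names and the "STOP" termination condition are made up:

```shell
#!/usr/bin/env bash
# Bash-specific sketch of a poll()-style loop over two FIFOs.
dir=$(mktemp -d)
mkfifo "$dir/p1" "$dir/p2"

# Open each FIFO read-write: the open never blocks, and the fd
# never reports EOF while writers come and go.
exec 3<>"$dir/p1" 4<>"$dir/p2"

echo "from p1" > "$dir/p1" &
echo "STOP"    > "$dir/p2" &
wait

finished=false
while ! $finished; do
    for fd in 3 4; do
        # Timed read: fails after 0.2 s with no data on this fd.
        while IFS= read -r -t 0.2 -u "$fd" line; do
            [ "$line" = "STOP" ] && { finished=true; continue; }
            echo "got: $line"
        done
    done
done
rm -rf "$dir"
```

The read-write open (<>) is the key trick: opening a FIFO O_RDWR never blocks, so the script does not deadlock waiting for writers to appear.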
I'm in the process of writing my own shell and I have support for job control[1] (akin to jobs). I can stop processes, resume them in the foreground and also background[2] them. That all largely works as expected - at least from the user's perspective.
However, the issue I have is that when resuming a job in the background, any application that reads from STDIN will compete with readline (I've actually written my own readline API, for reasons outside the scope of this question), and this breaks the usability of the shell.
These instances are quite rare, I'll admit, but my understanding is that what should normally happen is that any backgrounded process which requires reading from STDIN[3] is sent a SIGTTIN[4] signal.
My issue is how to detect that an application is reading from STDIN, so that I can send SIGTTIN when required.
This is where my research has come to a dead end. So I'm interested in how other shells handle this kind of problem.
Below are some references to help explain what I'm trying to do in case my description above wasn't very clear:
[1] https://en.wikipedia.org/wiki/Job_control_(Unix)#Overview
[2] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/bg.html
[3] https://en.wikipedia.org/wiki/Job_control_(Unix)#Implementation
[4] https://en.wikipedia.org/wiki/Signal_(IPC)#SIGTTIN
My understanding:
The shell is usually the session leader, which acquires the controlling terminal.
Each job (may have multiple processes, like ls | wc -l) is a process group.
Only the foreground pgrp can read from the controlling terminal. (The fg pgrp may have multiple processes, and these processes CAN read from the controlling terminal at the same time.)
The shell calls tcsetpgrp() to set the foreground pgrp (e.g. when we start a new job, or put a bg job back to fg with fg).
It's the kernel (the tty driver) that sends SIGTTIN to a background process which tries to read from the controlling terminal.
The shell does not know when a process would read from the controlling terminal. Instead, the shell monitors the job's status change. When a process is stopped by SIGTTIN, the shell would receive SIGCHLD and then it can call waitpid() to get more info.
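The last point can be demonstrated from a script. In the sketch below, SIGSTOP stands in for the SIGTTIN the tty driver would deliver (a non-interactive script has no controlling terminal to trigger the real thing); the point is the shell-side view: the child shows up as stopped, which is what a shell's SIGCHLD handler and waitpid(WUNTRACED) would report.

```shell
#!/bin/sh
# Stop a background child and observe its state from the parent,
# the way a shell's SIGCHLD/waitpid() path would.
sleep 30 &
pid=$!
kill -STOP "$pid"                # stand-in for the tty driver's SIGTTIN
sleep 0.2                        # give the kernel time to stop it
ps -o stat= -p "$pid"            # "T": stopped by a job-control signal
kill -CONT "$pid"
kill "$pid"
```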
I have a shell script which runs very large simulation binaries. This becomes problematic when I want to request some output of variables in the script. For instance, when I run 10 large simulations, I want to be able to print which iteration I am on without having to wait a minute or two for the current simulation to terminate.
Currently, I am using the trap command. However, the script does not react to signals immediately: the function bound to the signal only runs once the current iteration (the foreground simulation) terminates. I will post the code if anyone needs it.
You should start a background process for each large simulation you're going to run. Have those processes dump their results somewhere; that leaves your main loop free to interrogate the results, and report progress, on the fly.
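This works because the shell runs a trap handler as soon as `wait` is interrupted by the signal, rather than after the foreground command finishes. A sketch (the sleep stands in for the simulation binary, and USR1 is an assumed "report progress" signal):

```shell
#!/bin/sh
# Run the simulation in the background; the trap fires mid-run.
i=0
trap 'echo "progress: on iteration $i"' USR1

( sleep 0.3; kill -USR1 $$ ) &   # someone asks for progress mid-run
notifier=$!

for i in 1 2; do
    sleep 1 &                    # stand-in for the simulation binary
    sim=$!
    while kill -0 "$sim" 2>/dev/null; do
        wait "$sim" 2>/dev/null  # interrupted by USR1: trap runs now
    done
done
wait "$notifier" 2>/dev/null
echo "all iterations done"
```

The inner kill -0 loop simply re-waits on the simulation if the wait was interrupted by a trap, so each iteration still runs to completion.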
How would I create a simple shell script that doesn't do anything, just runs forever, without overloading the CPU? I am using an appify script to make it into an app, so that I can have an app that just runs forever. The reason I do this is so that I can always have an app running and can therefore quit Finder without it opening back up again.
Note: to allow quitting Finder, run the command defaults write com.apple.finder QuitMenuItem -bool yes in Terminal.
Ideally, you could create a job that would sleep forever and just wait on it:
sleep forever &
wait
but in reality you have to pick a finite amount of time to sleep.
while :; do
    sleep 65535 &
    wait
done
This will only use minimal CPU every 18 hours or so to restart the sleep process. There is probably an upper limit to the size of the argument you can give to sleep, but I don't know what that is (and it is probably implementation-dependent). You can experiment; a larger number will reduce total CPU usage over the life of the program, but even calling sleep once an hour (every 3600 seconds) will use very little CPU.
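For completeness, on many systems there are one-liners that avoid the periodic wake-up entirely; availability varies, and "sleep infinity" in particular assumes GNU coreutils. A sketch that backgrounds both so it can clean up after itself:

```shell
# Two common ways to block forever with no periodic wake-up.
sleep infinity &       # GNU sleep parses "infinity" as a float
pid1=$!
tail -f /dev/null &    # widely available alternative
pid2=$!

sleep 1                # both are still alive, using no CPU
kill -0 "$pid1" && kill -0 "$pid2" && echo "both still sleeping"
kill "$pid1" "$pid2"
```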
How can I control two processes so that they run alternately in separate terminal windows?
For example, I run the code for each in a separate terminal window at 11:59, and both of them wait for the time to be 12:00. At that moment process one starts execution and process two waits for, say, 10 seconds. Then they switch: process two executes and process one waits.
In this way they take turns until the work is complete.
Pipes, or named pipes? Each process waits in a read for the other to write a byte to it.
Also, possibly use signal files. Process B sleeps for 100 ms, checks for file Foo, and repeats. When process A creates the file, process B deletes it, and proceeds. Then the reverse happens with file Bar.
You can use System V semaphores or Windows named mutexes (via CreateMutex). You could even resort to file locks. Which OS are you on and what are your restrictions?
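The named-pipe suggestion can be sketched as baton passing: each side blocks in a read until the other writes a token. Both sides are background subshells here so the example is self-contained; in practice each loop would be its own script running in its own terminal.

```shell
#!/bin/sh
# Two processes alternate by handing a token back and forth
# through a pair of FIFOs.
dir=$(mktemp -d)
mkfifo "$dir/a2b" "$dir/b2a"

(
    for turn in 1 2 3; do
        echo "A works (turn $turn)"
        echo go > "$dir/a2b"     # hand the baton to B
        read _ < "$dir/b2a"      # block until B hands it back
    done
) &

(
    for turn in 1 2 3; do
        read _ < "$dir/a2b"      # block until A hands over the baton
        echo "B works (turn $turn)"
        echo go > "$dir/b2a"
    done
) &

wait
rm -rf "$dir"
```

Because each read blocks until the other side writes, the interleaving is strictly A, B, A, B, … with no polling and no CPU spent while waiting.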