Make bash script wait for system calls made in Fortran

My bash script looks something like this
mpiexec ./fortran_bin |& tee text_file
wait
./process_output_files
My MPI-based Fortran program makes several asynchronous system calls with call execute_command_line(cmd, wait=.false.).
My problem is that process_output_files only waits for fortran_bin to finish, but some system commands (cmd) are not yet done, and this messes up my output files.
How do I make process_output_files wait for cmd to finish?
NOTES
I'm not sure where best to solve this problem (if there is a solution):
within Fortran, with MPI, within Bash ...
cmd is of the form cat out_{1..n} > out && rm -f out_{1..n}.
I would like it to run asynchronously (wait=.false.), because cmd can be time-consuming and is unrelated to the rest of the Fortran program.
The wait line in the bash script seems to have no effect.
I suppose you could ask the equivalent question for a C/C++ program that calls system(some_script).
But I can only find questions about waiting within a C/C++ program, where the same program needs the result of the called command (e.g., here and here).
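A minimal shell sketch of the race, with hypothetical stand-ins for fortran_bin and cmd:
sh -c '(sleep 2; touch out_done) & exit 0'  # the child detaches a background job, like cmd with wait=.false.
wait                                        # returns immediately: the shell has no children left to wait for
ls out_done                                 # usually fails; the detached job is still running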

From the notes above, it looks like the subcommands inherit the stdout/stderr of the calling process and do not leave any background processes behind.
If those assumptions are true, you can impose a wait until there is no more output coming from fortran_bin and its children by piping the output into cat (or similar). The cat program will not terminate until all fortran_bin children (those that did not redirect their output) have finished:
mpiexec ./fortran_bin 2>&1 3>&1 | cat
It is possible to use tee (or other similar programs) instead of cat.
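For example, keeping the original logging behaviour while still waiting for the children:
mpiexec ./fortran_bin 2>&1 | tee text_file
./process_output_files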

Related

Redirect entire bash session to log file

I am familiar with the ability to pipe and redirect the IO of individual processes when running them in bash. However, is there a way to redirect stdio for an entire bash session?
Ideally, I would like to transparently pipe all stdout and stderr of all processes spawned by bash into tee, duplicating into a file the output displayed to the user. No matter what processes are run within that bash session, I could then go back later and look over the output.
Even more ideally, this should be the case for simple interactive programs that take options from stdin, but not for heavily interactive programs like vim.
The best I've found so far is: whenever the user opens a new terminal, run the command:
bash --login -i > >(tee ~/bash_$$.log) 2>&1
This will immediately start an interactive child shell in that new shell, and tee all stdout and stderr to a logfile named with the new parent shell's PID (to avoid overwriting).
This works, but vim fails to start with Vim: Warning: Output is not to a terminal. Are there any known solutions, up to and including patching the shell, to do this?
Background: vim is failing because isatty() is returning false when given the file descriptor for stdout; this is a safeguard to prevent uses such as vim >file that generally don't make sense. (Also, there are operating system calls available for interacting with PTYs that are useful to graphical, cursor-oriented programs that aren't available with a simple FIFO; this is why tools like ssh go to the trouble to provide a pseudoterminal during interactive use).
What's important for your purposes is that vim is directly inspecting the file descriptor it's passed as stdout. The shell is not a party to this -- it's literally vim running a standard-C-library call that gets some details about an open file descriptor -- so it's nothing that patching or reconfiguring the shell can fix.
To avoid this, then, you need to find a different way to redirect your output for logging such that stdout and stderr are still pointed at PTYs.
That said, for your real goal (logging all activity, vs redirecting stdout in-place), what you want is probably script:
if [ -z "$redirection_done" ]; then
redirection_done=1 exec script shell.log bash --login -i
fi
Using logging support from another tool which simulates a TTY, such as screen or tmux, will likewise suffice. (unbuffer, from the expect toolkit, can be used with similar effect).
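For instance, a sketch of the same idea with those tools (exact flag spellings vary by version):
screen -L -Logfile shell.log            # GNU screen: log the whole session to shell.log
tmux pipe-pane -o 'cat >> shell.log'    # inside tmux: copy the current pane's output to a file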
Back to your literal question... (since while it may not be what you want to know, it is what you asked):
In all POSIX shells, including bash,
exec >wherever
...will immediately redirect stdout for the current shell to wherever. This can be a process substitution in bash, as anywhere else; thus, in an already-running shell, you can execute
exec > >(tee shell.log) 2>&1

Are tee and script essentially equivalent?

In the context where I want to capture the stdout of a process in a file but still want to have this output displayed in the terminal I can choose between script and tee. In this context, are these tools essentially equivalent or is there a – possibly subtle – reason to prefer one over the other?
The programs script and tee are designed for different purposes:
script -- make typescript of terminal session
tee -- pipe fitting
Important differences between script and tee are:
script transmits the exit status of the process it supervises, while tee, being a filter, does not even know about it.
script captures stdin, stdout, stderr of the process it supervises while tee only catches the stream it filters.
None of these differences are relevant in the given context.
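Still, to see the exit-status difference in isolation, here is a sketch assuming util-linux script, whose -e flag returns the child's exit code:
script -qec 'exit 3' /dev/null; echo $?   # prints 3: script passes the status through
false | tee /dev/null; echo $?            # prints 0: the pipeline reports tee's status (without pipefail)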
They have a very different purpose and the usage is quite different as well.
Script is to record what you are doing in a shell session. Handy to show a professor what you did, to show co-workers how to do something, etc...
Tee is just an application to write to both your screen and a file. Very handy when installing something or running a command that generates a lot of output and wanting to see the output realtime while still saving it to disk.
A notable difference between the two is that you can use script to create an interactive shell that logs everything (e.g. script commands.log zsh), including colors and such. tee won't register as a tty, so in that regard it's quite different.
I found script to be useful for making control sequences work when piping to tee:
script -q -c 'python -c "import pdb, sys; pdb.set_trace()"' /dev/null \
| tee -a /tmp/tmp.txt
With only the following, Ctrl-A would be displayed as ^A etc:
python -c "import pdb, sys; pdb.set_trace()" | tee -a /tmp/tmp.txt
This is a minimal example. I am using tee here to capture the output from a pytest test run, but sometimes there might be a debugger in there, and cursor keys etc should work then.
Via https://unix.stackexchange.com/a/61833/1920.

Piping input to a shell command and keeping the created shell alive

My overarching program is a shell script. This shell script calls a C program that I need to pipe input to, and ultimately the C program will create a shell.
However, when I pipe my input into the C program within the shell script
Do_Other_Stuff
./my_prog < file1
I can't get the shell to stay alive. Running just,
Do_Other_Stuff
./my_prog
works, as I type the input myself, and the shell correctly spawns when my_prog exits. I'm pretty sure wrapping the ./my_prog call in another C program, then compiling and running that, would work, but I'm curious whether there's a cleaner way with the shell.
I've tried several combinations of using cat file1 | ./my_prog and using & in different situations, and haven't had any success.
Thanks!
Try:
cat file1 - | ./my_prog
Many programs recognize the "filename" - to mean stdin.
Do you have access to the C program source code? My guess is that the C program is using isatty(0) to determine whether stdin is coming from a terminal. It probably only creates an interactive shell when that is the case. Using stdin redirection, whether from a file or a pipe, means that isatty(0) returns false.
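You can observe the same distinction from the shell: [ -t 0 ] is true only when stdin is a terminal, just like isatty(0) in C.
[ -t 0 ] && echo "stdin is a tty" || echo "stdin is redirected"
echo hi | { [ -t 0 ] && echo "tty" || echo "not a tty"; }   # prints "not a tty"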

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read in commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read in the next command, dropping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
Same as Idelic's answer with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n +1 to pipe the file from its beginning, since tail -f alone starts from only the last 10 lines.

How do I pause to inspect results of sh commands run by a Makefile?

So, I have a Makefile which runs the various commands to build the software. I execute make from within an MSYS / MinGW environment.
I found, for example, the sleep <seconds> command, but this only delays the execution. How can I make it wait for a key being pressed instead?
You can use the read command. When you are done, you press Enter and your script/makefile continues. It's a bash builtin, so it should also work on MinGW.
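A minimal sketch with a hypothetical target (recipe lines must start with a tab, and read -p needs bash, so SHELL is set explicitly):
SHELL := bash

inspect:
	@echo "Build step finished; check the output above."
	@read -rp "Press Enter to continue..." dummy
	@echo "Continuing."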
My suggestion doesn't stop execution, but it halts and resumes the display on capable terminals:
Use ctrl-S for halting display, and ctrl-Q for resuming.
You don't need to modify your Makefile.
Pipe the output of the build through more (or less)
e.g.
make <make command line> | more
Or output everything to a file while still watching make progress on screen with your friend tee. I normally prefer this to less or more for bulkier projects.
make <make input arguments> 2>&1 | tee /some/path/build.log
