Why does "read -t" block in scripts launched from xcodebuild? - macos

I have a script that creates a FIFO and launches a program that writes output to the FIFO. I then read and parse the output until the program exits.
MYFIFO=/tmp/myfifo.$$
mkfifo "$MYFIFO"
MYFD=3
eval "exec $MYFD<> $MYFIFO"
external_program >&"$MYFD" 2>&"$MYFD" &
EXT_PID=$!
while kill -0 "$EXT_PID"; do
    read -t 1 LINE <&"$MYFD"
    # Do stuff with $LINE
done
This works fine for reading input while the program is still running, but it looks like the timeout on read is ignored, and the read call hangs after the external program exits.
I've used read with a timeout successfully in other scripts, and a simple test script that leaves out the external program times out correctly. What am I doing wrong here?
EDIT: It looks like read -t functions as expected when I run my script from the command line, but when I run it as part of an xcodebuild build process, the timeout does not function. What is different about these two environments?

I don't think -t will work with redirection.
From the man page here:
-t timeout
Cause read to time out and return failure if a complete line
of input is not read within timeout seconds. This option has no
effect if read is not reading input from the terminal or a pipe.
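That said, when the timeout does fire, bash's read signals it through the exit status: a status greater than 128 means -t expired rather than an ordinary read failure. A sketch of the question's loop with that distinction made explicit (the status handling is illustrative, not the asker's original code):
while kill -0 "$EXT_PID" 2>/dev/null; do
    if read -t 1 LINE <&"$MYFD"; then
        # got a complete line within the timeout
        printf 'got: %s\n' "$LINE"
    elif [ $? -gt 128 ]; then
        # read timed out (bash returns >128 when -t expires); re-check the child
        continue
    fi
done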

Related

The difference between "-D" and "&" in bash script

According to this docker tutorial
What's the difference between
./my_first_process -D
./my_main_process &
They both seem non-blocking to the bash script and run in the background.
& tells the shell to put the command that precedes it into the background. -D is simply a flag that is passed to my_first_process and is interpreted by it; it has absolutely nothing whatsoever to do with the shell.
You will have to look into the documentation of my_first_process to see what -D does … it could mean anything. E.g. in npm, -D means "development", whereas in some other tools, it may mean "directory". In diff, it means "Output merged file to show `#ifdef NAME' diffs."
Some programs, by convention, take -D as an instruction to self-daemonize. Doing this looks something like the following:
Call fork(), and exit in the parent, where fork() returns a nonzero PID (so only the child, where fork() returns 0, survives).
Close stdin, stdout and stderr if they are attached to the console (ideally, replacing their file descriptors with handles on /dev/null, so writes don't trigger an error).
Call setsid() to create a new session.
Call fork() again, and again exit in the parent, so the surviving process is not a session leader and cannot reacquire a controlling terminal.
That's a lot more work than what just someprogram & does! A program that has self-daemonized can no longer log to the terminal, and will no longer be impacted if the terminal itself closes. That's not true of a program that's just started in the background.
To get roughly the same behavior from bash, correct code would be something like:
someprogram </dev/null >/dev/null 2>&1 & disown -h
...wherein disown -h tells the shell not to pass along a SIGHUP to that process. It's also not uncommon to see the external tool nohup used for this purpose. By default, nohup redirects stdout and stderr to a file called nohup.out if they're pointed at the TTY, but the end purpose -- making sure they're not pointed at the terminal, so writes to them don't start failing if the terminal goes away -- is still achieved:
nohup someprogram >/dev/null &
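For a closer approximation of real daemonization from a script, the setsid(1) utility (shipped with util-linux on most Linux systems; its availability is an assumption, it is not POSIX) can create the new session for you:
setsid someprogram </dev/null >/dev/null 2>&1 &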

How to run a command after a certain time while another command is running?

I have a bash script which will be running a main command, let's say for one hour. I would like to execute another command after a certain time since the main command has been started (at t_x). Something like this:
main starts ----------------> main ends
                |
                |
    at time t_x, second command is executed
At the moment I have something like this:
mpirun main_command & sleep 1m && second_command
and the problem is that after the second command is executed, the main command is killed. Can anyone help me? Thanks!
The first command is failing to lock the console, as another process is also using it. You will need to redirect the standard I/O pipelines:
0<&- mpirun main_command >/dev/null 2>/dev/null
If this still does not work, use
sh -c 'mpirun main_command' & sleep 1m; second_command
You can use ; instead of &&, unless you need a failing exit code when someone interrupts the sleep.
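Another way to keep the main command alive is to background both pipelines and explicitly wait on the main one. A minimal sketch using the question's own command names (the structure, not the question's code):
mpirun main_command &
MAIN_PID=$!
( sleep 1m && second_command ) &   # fires second_command at t_x = 1 minute
wait "$MAIN_PID"                   # block until the main command ends
Because nothing in the script depends on second_command's exit, the main command runs to completion regardless of what the second one does.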

How can I tell if a script was run in the background and with nohup?

I've got a script that takes quite a long time to run, as it has to handle many thousands of files. I want to make this script as foolproof as possible. To this end, I want to check if the user ran the script using nohup and '&', e.g.
me#myHost:/home/me/bin $ nohup doAlotOfStuff.sh &. I want to make 100% sure the script was run with nohup and '&', because it's a very painful recovery process if the script dies in the middle for whatever reason.
How can I check those two key parameters inside the script itself? And if they are missing, how can I stop the script before it gets any farther and complain to the user that they ran the script wrong? Better yet, is there a way I can force the script to run with nohup and &?
Edit: the server environment is AIX 7.1
The ps utility can get the process state. The process state code will contain the character + when running in the foreground. Absence of + means the script is running in the background.
However, it will be hard to tell whether the background script was invoked using nohup. It's also almost impossible to rely on the presence of nohup.out, as output can be redirected by the user elsewhere at will.
There are 2 ways to accomplish what you want to do. Either bail out and warn the user or automatically restart the script in background.
#!/bin/bash
mypid=$$
if [[ $(ps -o stat= -p "$mypid") =~ "+" ]]; then
    echo "Running in foreground."
    exec nohup "$0" "$@" &
    exit
fi
# the rest of the script
...
In this code, if the process has a state code +, it will print a warning and then restart the process in the background. If the process was started in the background, it will just proceed to the rest of the code.
If you prefer to bail out and just warn the user, you can remove the exec line. Note that because of the trailing &, the exec runs in a background subshell, so the exit is still needed to stop the foreground copy of the script.
One good way to find out if a script is logging to nohup is to first check that nohup.out exists, and then echo a marker to it and ensure that you can read it back there. For example:
echo "complextag"
if ! grep -q "complextag" nohup.out 2>/dev/null; then
    # various commands complaining to the user, then exiting
    exit 1
fi
This works because if the script's stdout is going to nohup.out, where it should be going (or whatever out file you specified), then when you echo that phrase, it should be appended to the file nohup.out. If it doesn't appear there, then the script was not run using nohup, and you can scold the user, perhaps by using a wall command on a temporary broadcast file (if you want me to elaborate on that, I can).
As for being run in the background, if it's not, you should know from the nohup check above.
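Another common idiom (not from the answers above, so treat it as an assumption about what suffices here) is to test whether stdout is still a terminal with [ -t 1 ]. Under nohup, stdout has been redirected away from the TTY, so the test fails and the script can refuse to continue otherwise:
#!/bin/bash
if [ -t 1 ]; then
    # stdout is still a terminal: nohup (or a redirect) was not used
    echo "Please rerun me as: nohup $0 &" >&2
    exit 1
fi
# the rest of the script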

How to make bash interpreter stop until a command is finished?

I have a bash script with a loop that calls a hard calculation routine every iteration. I use the results from every calculation as input to the next. I need to make bash pause its reading of the script until each calculation is finished.
for i in $(cat calculation-list.txt)
do
    ./calculation
    (other commands)
done
I know the sleep program, and I used to use it, but now the time of the calculations varies greatly.
Thanks for any help you can give.
P.S.
The "./calculation" is another program, and a subprocess is opened. The script then passes instantly to the next step, but I get an error in the calculation because the previous one is not finished yet.
If your calculation daemon will work with a precreated empty logfile, then the inotify-tools package might serve:
touch "$logfile"
inotifywait -qqe close "$logfile" & ipid=$!
./calculation
wait "$ipid"
(edit: stripped a stray semicolon)
if it closes the file just once.
If it's doing an open/write/close loop, perhaps you can mod the daemon process to wrap some other filesystem event around the execution?
#!/bin/sh
# Uglier, but handles logfile being closed multiple times before exit:
# Have the ./calculation start this shell script, perhaps by substituting
# this for the program it's starting
trap 'echo >closed-on-calculation-exit' 0 1 2 3 15
./real-calculation-daemon-program
Well, guys, I've solved my problem with a different approach. When the calculation is finished, a logfile is created. I then wrote a simple until loop with a sleep command. Although this is very ugly, it works for me and it's enough.
for i in $(cat calculation-list.txt)
do
    (calculations routine)
    until [[ -f $logfile ]]; do
        sleep 60
    done
    (other commands)
done
Easy. Get the process ID (PID) via some awk magic and then use wait to wait for that PID to end. Here are the details on wait from the Advanced Bash-Scripting Guide:
Suspend script execution until all jobs running in background have
terminated, or until the job number or process ID specified as an
option terminates. Returns the exit status of waited-for command.
You may use the wait command to prevent a script from exiting before a
background job finishes executing (this would create a dreaded orphan
process).
And using it within your code should work like this:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 & CALCULATION_PID=$(jobs -l | awk '{print $2}')
    wait ${CALCULATION_PID}
    (other commands)
done
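Since bash already records the PID of the most recent background job in $!, the jobs/awk step can be skipped; the same loop, simplified:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 &
    wait $!
    (other commands)
done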

Send command to a background process

I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# At least one process must have the fifo open for writing,
# so the server does not receive an EOF.
cat > /tmp/srv-input &
# The PID of this command is saved in the /tmp/srv-input-cat-pid file
# for a later kill.
#
# To send an EOF to your server, you need to kill the `cat > /tmp/srv-input`
# process whose PID has been saved in the `/tmp/srv-input-cat-pid` file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace the `./hlds_run -console -game czero +port 27015` by your application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat "$half_life_folder/$half_life_pid_tail_file_name")"
if ps -p "$half_life_pid_tail" > /dev/null
then
    echo "$half_life_pid_tail is running"
else
    echo "Starting the tailing..."
    tail -2f "$half_life_folder/my_logs.txt" &
    echo $! > "$half_life_folder/$half_life_pid_tail_file_name"
fi
echo "$@" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do on the terminal:
./send.sh mp_timelimit 30
This script allows me to keep tailing the process on the current terminal, because every time I send a command, it checks whether there is a tail process already running in the background. If not, it just starts one, and every time the process sends output I can see it on the terminal I used to send the command, just like applications you run with the & operator appended.
You can always keep another terminal open just to listen to the server console. To do that, use the tail command with the -f flag to follow the server console output:
tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the Inter-Process Communication methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
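A minimal sketch of the socket idea, assuming an nc build (OpenBSD netcat) that supports -U for Unix domain sockets; the socket path and the command text are hypothetical:
# In the long-running script: listen on a Unix socket and handle lines
nc -lU /tmp/process1.sock | while read -r line; do
    echo "received: $line"    # dispatch on $line here
done
# From anywhere else: send a command to the running process
echo "command option1 option2" | nc -U /tmp/process1.sock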
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
You can use bash's coproc command (available only in 4.0+); it's like ksh's |&.
Check this for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
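A minimal coproc sketch; cat here is just a stand-in for any line-oriented program you want to talk to:
coproc cat                                 # stdin/stdout wired to the COPROC fd array
echo "hello coprocess" >&"${COPROC[1]}"    # write to the coprocess's stdin
read -r reply <&"${COPROC[0]}"             # read its reply from its stdout
echo "got back: $reply"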
You can't send new args to a running process. But if you are implementing this process, or it's a process that can take args from a pipe, then the other answers would help.
