Send a sequence of commands to a new terminal from a script - bash

I have a script to which I want to add a shutdown timer at the end. I'd like the countdown to run in a new terminal window so I can cancel it, since the script will usually be run in the background.
Here's the problem:
a simple script containing only the following
secs=$((60))
while [ $secs -gt 0 ]; do
echo -ne "$secs\033[0K\r"
sleep 1
: $((secs--))
done
shutdown now
works fine, but if I try to send it to a new terminal like this
gnome-terminal -e "bash -c
'secs=$((60))
while [ $secs -gt 0 ]; do
echo -ne \"$secs\033[0K\r\"
sleep 1
: $((secs--))
done
shutdown now'"
it fails and just shuts down. If I remove the shutdown line I get this error:
Option "-e" is deprecated and might be removed in a later version of gnome-terminal.
Use "-- " to terminate the options and put the command line to execute after it.
Does anyone know how I could fix this?
Thanks.

The easy way to do this is to export a function:
countdown() {
secs=60
while (( secs > 0 )); do
printf '%s\033[0K\r' "$secs"
sleep 1
((secs--))
done
shutdown now
}
export -f countdown
gnome-terminal -- bash -c countdown
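For what it's worth, the original one-liner shuts down immediately because everything inside the outer double quotes ($((60)), $secs, $((secs--))) is expanded by the calling shell before gnome-terminal even starts, so the child's loop never runs and it falls straight through to shutdown now. If you prefer not to export a function, here is a minimal sketch of the direct form, using the -- separator the deprecation notice suggests and single quotes so the parent shell leaves the script untouched:
gnome-terminal -- bash -c '
secs=60
while (( secs > 0 )); do
printf "%s\033[0K\r" "$secs"   # overwrite the same line with the remaining seconds
sleep 1
((secs--))
done
shutdown now'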

Related

Bash script not exiting once a process is running

How should I modify my bash script logic so it exits the while loop and exits the script itself once a process named custom_app is running on my local Ubuntu 18.04? I've tried using break and exit inside an if statement with no luck.
Once custom_app is running from, say, the 1st attempt and I then quit the app, run_custom_app.sh lingers in the background and resumes retrying a 2nd, 3rd, 4th and 5th time. It should be doing nothing at this point, since the app already ran successfully and the user intentionally quit it.
Below is run_custom_app.sh used to run my custom app triggered from a website button click.
Script logic
Check if custom_app process is running already. If so, don't run the commands in the while code block. Do nothing. Exit run_custom_app.sh.
While custom_app process is NOT running, retry up to 5 times.
Once custom_app process is running, stop while loop and exit run_custom_app.sh as well.
In cases where 5 run retries have been attempted but custom_app process is still not running, display a message to the user.
#!/bin/sh
RETRYCOUNT=0
PROCESS_RUNNING=`ps cax | grep custom_app`
# Try to connect until process is running. Retry up to 5 times. Wait 10 secs between each retry.
while [ ! "$PROCESS_RUNNING" ] && [ "$RETRYCOUNT" -le 5 ]; do
RETRYCOUNT="`expr $RETRYCOUNT + 1`"
commands
sleep 10
PROCESS_RUNNING=`ps cax | grep custom_app`
if [ "$PROCESS_RUNNING" ]; then
break
fi
done
# Display an error message if not connected after 5 connection attempts
if [ ! "$PROCESS_RUNNING" ]; then
echo "Failed to connect, please try again in about 2 minutes" # I need to modify this later so it opens a Terminal window displaying the echo statement, not yet sure how.
fi
I have tested this code with VirtualBox standing in for your custom_app; a previous post was using an until loop and pgrep instead of ps. As suggested by DavidC.Rankin, pidof is more correct, but if you want to use ps then I suggest ps -C custom_app -o pid=
#!/bin/sh
retrycount=0
until my_app_pid=$(ps -C VirtualBox -o pid=); do ##: save the output of ps in a variable so we can check/test it later.
echo commands ##: just echoed here; not sure which commands you are running.
if [ "$retrycount" -eq 4 ]; then ##: We started at 0 so the fifth count is 4
break ##: exit the loop
fi
sleep 10
retrycount=$((retrycount+1)) ##: increment by one using shell syntax without expr
done
if [ -n "$my_app_pid" ]; then ##: if $my_app_pid is not empty
echo "app is running"
else
echo "Failed to connect, please try again in about 2 minutes" >&2 ##: print the message to stderr
exit 1 ##: exit with a failure which is not 0
fi
The my_app_pid=$(ps -C VirtualBox -o pid=) variable assignment has a useful exit status so we can use it.
Basically the until loop is just the opposite of the while loop.
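If you want to follow the pidof suggestion instead of ps, the check drops in the same way; a minimal sketch, assuming the process really shows up to pidof as custom_app:
if my_app_pid=$(pidof custom_app); then ##: pidof prints the PID(s) and exits non-zero when nothing matches
echo "app is running with PID(s): $my_app_pid"
else
echo "app is not running"
fi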

On closing the Terminal the nohupped shell script (with &) is stopped

I'm developing a simple screenshot spyware which takes a screenshot every 5 seconds from the start of the script. I want it to keep running after the terminal is closed. Even after nohupping the script along with '&', my script exits when I close the terminal.
screenshotScriptWOSleep.sh
#!/bin/bash
echo "Starting Screenshot Capture Script."
echo "Process ID: $$"
directory=$(date "+%Y-%m-%d-%H:%M")
mkdir ${directory}
cd ${directory}
shotName=$(date "+%s")
while true
do
if [ $( date "+%Y-%m-%d-%H:%M" ) != ${directory} ]
then
directory=$(date "
+%Y-%m-%d-%H:%M")
cd ..
mkdir ${directory}
cd ${directory}
fi
if [ $(( ${shotName} + 5 )) -eq $(date "+%s" ) ]
then
shotName=$(date "+%s" )
screencapture -x $(date "+%Y-%m-%d-%H:%M:%S" )
fi
done
I ran the script with,
nohup ./screenshotScriptWOSleep.sh &
On closing the terminal window, it warns with,
"Closing this tab will terminate the running processes: bash, date."
I have read that nohup applies to the child processes too, but I'm stuck here. Thanks.
Either you're doing something really weird or that's referring to other processes.
nohup bash -c 'sleep 500' &
Shut down that terminal; open another one:
ps aux | grep sleep
409370294 26120 1 0 2:43AM ?? 0:00.01 sleep 500
409370294 26330 26191 0 2:45AM ttys005 0:00.00 grep -i sleep
As you can see, sleep is still running.
Just ignore that warning; your process is not terminated. Verify with
watch wc -l nohup.out
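If you want something more direct than watching nohup.out, a minimal sketch that saves the background PID and checks it later from a new terminal (the screenshot.log and screenshot.pid names are only examples):
nohup ./screenshotScriptWOSleep.sh > screenshot.log 2>&1 &
echo $! > screenshot.pid   # $! is the PID of the nohupped script
# later, from another terminal:
if kill -0 "$(cat screenshot.pid)" 2>/dev/null; then
echo "still running"
else
echo "not running"
fi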

Protect the program before turning it back on

My script is executed by cron and every 2 minutes it checks whether xxx is running. If it is not in the process list, the script starts it. The problem is that sometimes it starts it several times.
My problem is: how do I detect that the program is running several times?
How can bash detect that pidof returns several PIDs rather than just one?
#!/bin/bash
PID=`pidof xxx`
if [ "$PID" = "" ];
then
cd
cd /home/pi
sudo ./xxx
echo "OK"
else
echo "program is running"
fi
You can use this script to do the same. It will make sure the script is executed only once.
#!/bin/bash
ID=`ps -ef | grep scriptname | grep -v grep | wc -l`
if [ "$ID" -eq 0 ];
then
cd /home/pi && sudo ./xxx   # run the script; the if branch needs at least one command
else
echo "script is running"
fi
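As for detecting several instances: pidof prints all matching PIDs on one line, so counting the words it returns is enough. A minimal sketch, assuming the binary is named xxx as in the question:
#!/bin/bash
pids=$(pidof xxx)
count=$(echo "$pids" | wc -w)   # number of PIDs pidof printed
if [ "$count" -gt 1 ]; then
echo "xxx is running $count times: $pids"
elif [ "$count" -eq 1 ]; then
echo "xxx is running once: $pids"
else
echo "xxx is not running"
fi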

Output of a background process to a shell variable

I want to get the output of a command/script into a variable, but the process is triggered to run in the background. I tried it as below; a few servers ran it correctly and I got the response, but on a few I am getting i_res as empty.
I am trying to run it in the background because the command has a chance of hanging, and I don't want to hang the parent script.
Hope I will get a response soon.
#!/bin/ksh
x_cmd="ls -l"
i_res=$(eval $x_cmd 2>&1 &)
k_pid=$(pgrep -P $$ | head -1)
sleep 5
c_errm="$(kill -0 $k_pid 2>&1 )"; c_prs=$?
if [ $c_prs -eq 0 ]; then
c_errm=$(kill -9 $k_pid)
fi
wait $k_pid
echo "Result : $i_res"
Try something like this:
#!/bin/ksh
pid=$$ # parent process
(sleep 5 && kill $pid) & # this will sleep and wake up after 5 seconds
# and kill off the parent.
termpid=$! # remember the timebomb pid
# put the command that can hang here
result=$( ls -l )
# if we got here in less than 5 five seconds:
kill $termpid # kill off the timebomb
echo "$result" # disply result
exit 0
Add whatever messages you need to the code. On average this will complete much faster than always having a sleep statement. You can see what it does by making the command sleep 6 instead of ls -l.
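If GNU coreutils is available on your servers, timeout wraps the same timebomb idea into one command; a minimal sketch (timeout exits with status 124 when it had to kill the command):
#!/bin/ksh
result=$(timeout 5 ls -l 2>&1)   # kill the command if it runs longer than 5 seconds
status=$?
if [ "$status" -eq 124 ]; then
echo "command hung and was killed" >&2
else
echo "Result : $result"
fi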

Why is my shell script stopped in the background until I bring it back to the foreground?

I have a shell script which is executing a php script (worker for beanstalkd).
Here is the script:
#!/bin/bash
if [ $# -eq 0 ]
then
echo "You need to specify an argument"
exit 0;
fi
CMD="/var/webserver/user/bin/console $#";
echo "$CMD";
nice $CMD;
ERR=$?
## Possibilities
# 97 - planned pause/restart
# 98 - planned restart
# 99 - planned stop, exit.
# 0 - unplanned restart (as returned by "exit;")
# - Anything else is also unplanned paused/restart
if [ $ERR -eq 97 ]
then
# a planned pause, then restart
echo "97: PLANNED_PAUSE - wait 1";
sleep 1;
exec $0 $@;
fi
if [ $ERR -eq 98 ]
then
# a planned restart - instantly
echo "98: PLANNED_RESTART";
exec $0 $@;
fi
if [ $ERR -eq 99 ]
then
# planned complete exit
echo "99: PLANNED_SHUTDOWN";
exit 0;
fi
If I execute the script manually, like this:
[user@host]$ ./workers.sh
It's working perfectly, I can see the output of my PHP script.
But if I detach the process from the console, like this:
[user#host]$ ./workers.sh &
It's not working anymore. However I can see the process in the background.
[user#host]$ jobs
[1]+ Stopped ./workers.sh email
The queue jobs server is filling with jobs and none of them are processed until I bring the detached script to the foreground, like this:
[user@host]$ fg
At this moment I see all the jobs being processed by my PHP script. I have no idea why this is happening. Could you help, please?
Thanks, Maxime
EDIT:
I've created a shell script to run x workers; I'm sharing it here. Not sure it's the best way to do it, but it's working well at the moment:
#!/bin/bash
WORKER_PATH="/var/webserver/user/workers.sh"
declare -A Queue
Queue[email]=2
Queue[process-images]=5
for key in "${!Queue[@]}"
do
echo "Launching ${Queue[$key]} instance(s) of $key Worker..."
CMD="$WORKER_PATH $key"
for (( l=1; l<=${Queue[$key]}; l++ ))
do
INSTANCE="$CMD $l"
echo "lnch instance $INSTANCE"
nice $INSTANCE > /dev/null 2> /dev/null &
done
done
Background processes are not allowed to write to the terminal, which your script tries to do with its echo statements. You just need to redirect standard output to a file when you put it in the background.
[user#host]$ ./workers.sh > workers.output 2> workers.error &
(I've redirected standard error as well, just to be safe.)
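If you also want the workers to survive closing the terminal entirely, the same redirection combines naturally with nohup; a sketch (the log file name is only an example):
[user@host]$ nohup ./workers.sh email > workers-email.log 2>&1 &
[user@host]$ disown   # optional: also remove the job from the shell's job table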

Resources