bash commands in parallel - bash

I want to have two youtube-dl processes (or as many as possible) running in parallel. Please show me how. Thanks in advance.
#!/bin/bash
#package: youtube-dl axel
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"
#number of lines in FILE
COUNTER=`wc -l $FILE | cut -f1 -d' '`
#download destination
cd /srv/backup/transmission/completed
if [[ -s $FILE ]]; then
    while [ $COUNTER -gt 0 ]; do
        #get video link
        URL=`head -n 1 $FILE`
        #get video name
        NAME=`youtube-dl --get-filename -o "%(title)s.%(ext)s" "$URL" --restrict-filenames`
        #real video url
        vURL=`youtube-dl --get-url $URL`
        #remove first link
        sed -i 1d $FILE
        #download file
        axel -n 10 -o "$NAME" $vURL &
        #update number of lines
        COUNTER=`wc -l $FILE | cut -f1 -d' '`
    done
else
    break
fi

This ought to work with GNU Parallel:
cd /srv/backup/transmission/completed
parallel -j0 'axel -n 10 -o $(youtube-dl --get-filename -o "%(title)s.%(ext)s" "{}" --restrict-filenames) $(youtube-dl --get-url {})' :::: /srv/backup/temp/youtube.txt
Learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
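If you specifically want at most two downloads running at a time rather than -j0 (which runs as many jobs as possible), the same command with -j2 should do it. This variant is untested, but it only swaps the standard GNU Parallel job-limit option:
cd /srv/backup/transmission/completed
parallel -j2 'axel -n 10 -o $(youtube-dl --get-filename -o "%(title)s.%(ext)s" "{}" --restrict-filenames) $(youtube-dl --get-url {})' :::: /srv/backup/temp/youtube.txt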

Solution
You need to run your command in a subshell, i.e. put your command into ( cmd ) &.
Definition
A shell script can itself launch subprocesses. These subshells let the
script do parallel processing, in effect executing multiple subtasks
simultaneously.
Code
For you it would look like this, I guess (I added quotes around $vURL):
( axel -n 10 -o "$NAME" "$vURL" ) &
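If you only want a fixed number of downloads running at once (the question mentions two), one rough, untested sketch is to count the running background jobs before launching another. It reuses the FILE, NAME and vURL variables from the script in the question:
#!/bin/bash
# sketch only: cap concurrent downloads at 2 (adjust MAX_JOBS as needed)
FILE="/srv/backup/temp/youtube.txt"
MAX_JOBS=2
cd /srv/backup/transmission/completed
while read -r URL; do
    [ -z "$URL" ] && continue
    ( NAME=$(youtube-dl --get-filename -o "%(title)s.%(ext)s" "$URL" --restrict-filenames)
      vURL=$(youtube-dl --get-url "$URL")
      axel -n 10 -o "$NAME" "$vURL" ) &
    # wait while MAX_JOBS downloads are still running
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do
        sleep 1
    done
done < "$FILE"
wait   # let the last downloads finish before the script exits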

I don't know if it is the best way, but you can define a function and then call it in the background, something like this:
#!/bin/bash
#package: youtube-dl axel
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"

# define a function
download_video() {
    sleep 3
    echo "$1"
}

while read -r line; do
    # call it in background, with &
    download_video "$line" &
done < "$FILE"
The script finishes quickly, but the functions keep running in the background; after 3 seconds each one prints its echo. I also used read in a while loop to simplify reading the file.
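Applied to the actual task from the question, the same pattern might look something like this (untested sketch; it keeps the youtube-dl/axel commands from the original script):
#!/bin/bash
#package: youtube-dl axel
FILE="/srv/backup/temp/youtube.txt"
cd /srv/backup/transmission/completed

download_video() {
    local url="$1" name vurl
    name=$(youtube-dl --get-filename -o "%(title)s.%(ext)s" "$url" --restrict-filenames) || return 1
    vurl=$(youtube-dl --get-url "$url") || return 1
    axel -n 10 -o "$name" "$vurl"
}

while read -r line; do
    # skip blank lines, run each download in the background
    [ -n "$line" ] && download_video "$line" &
done < "$FILE"
wait   # keep the script alive until every background download is done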

Here's my take on it. By avoiding several commands you should see a minor improvement in speed, though it might not be noticeable. I did add error checking, which can save you time on broken URLs.
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"

while read URL ; do
    [ -z "$URL" ] && continue
    #get video name
    if NAME=$(youtube-dl --get-filename -o "%(title)s.%(ext)s" "$URL" --restrict-filenames) ; then
        #real video url
        if vURL=$(youtube-dl --get-url "$URL") ; then
            #download file
            axel -n 10 -o "$NAME" "$vURL" &
        else
            echo "Could not get vURL from $URL"
        fi
    else
        echo "Could not get NAME from $URL"
    fi
done < "$FILE"
By request, here's my proposal for parallelizing the vURL and NAME fetching as well as the download.
Note: since the download depends on both vURL and NAME, there is no point in creating three processes; two gives you about the best return. Below I've put the NAME fetch in its own process, but if it turned out that vURL was consistently faster, there might be a small payoff in swapping it with the NAME fetch. (That way the while loop in the download process won't waste even a second sleeping.)
Note 2: this is fairly crude and untested; it's just off the cuff and probably needs work. And there's probably a much cooler way in any case. Be afraid...
#!/bin/bash
#file that contains youtube links
FILE="/srv/backup/temp/youtube.txt"

GetName () { # URL, filename
    if NAME=$(youtube-dl --get-filename -o "%(title)s.%(ext)s" "$1" --restrict-filenames) ; then
        # Create a sourceable file with NAME value
        echo "NAME='$NAME'" > "$2"
    else
        echo "Could not get NAME from $1"
    fi
}

Download () { # URL, filename
    if vURL=$(youtube-dl --get-url "$1") ; then
        # Wait to see if GetName's file appears
        timeout=300 # Wait up to 5 minutes, adjust this if needed
        while (( timeout-- )) ; do
            if [ -f "$2" ] ; then
                source "$2"
                rm "$2"
                #download file
                if axel -n 10 -o "$NAME" "$vURL" ; then
                    echo "Download of $NAME from $1 finished"
                    return 0
                else
                    echo "Download of $NAME from $1 failed"
                fi
            fi
            sleep 1
        done
        echo "Download timed out waiting for file $2"
    else
        echo "Could not get vURL from $1"
    fi
    return 1
}

filebase="tempfile${$}_"
filecount=0
while read URL ; do
    [ -z "$URL" ] && continue
    filename="$filebase$filecount"
    [ -f "$filename" ] && rm "$filename" # Just in case
    (( filecount++ ))
    ( GetName "$URL" "$filename" ) &
    ( Download "$URL" "$filename" ) &
done < "$FILE"

Related

Passing pre-prepared string of input arguments as input arguments to program (mailx)

I'm creating a tool that parses an input file to re-construct a mailx command (the server that creates the data for mailx cannot send emails, so I need to store the data in a file so another server can rebuild the command and send it). I could output the whole command to a file and execute that file on the other server, but that's hardly secure or safe: anyone could intercept the file and insert malicious stuff that would be run as root, since this parsing tool checks every minute for files to parse and email, using a systemd timer and service.
I have created the file, using 'markers / separators' with this format:
-------MESSAGE START-------
Email Body Text
Goes Here
-------MESSAGE END-------
-------SUBJECT START-------
Email Subject
-------SUBJECT END-------
-------ATTACHEMENT START-------
path to file to attach if supplied
-------ATTACHEMENT END-------
-------S OPTS START-------
list of mailx '-S' options, e.g. from=EMAILNAME <email@b.c> or sendwait, etc., each one on a new line
-------S OPTS END-------
-------EMAIL LIST START-------
string of recipient emails comma separated eg. email1,email2,email3 etc..
-------EMAIL LIST END-------
And I have a program to parse this file, rebuild the mailx command, and run it:
#!/bin/bash
## Using systemd logging to journal for this as its now being called as part of a service
## See: https://serverfault.com/questions/573946/how-can-i-send-a-message-to-the-systemd-journal-froma-the-command-line (kkm answer)
start_time="$(date +[%c])"
exec 4>&2 2> >(while read -r REPLY; do printf >&4 '<3>%s\n' "$REPLY"; done)
echo >&4 "<5>$start_time -- Started gpfs_flag_email.sh"
trap_exit(){
exec >2&
}
trap 'trap_exit' EXIT
email_flag_path="<PATH TO LOCATION>/email_flags/"
mailx_message_start="-------MESSAGE START-------"
mailx_message_end="-------MESSAGE END-------"
mailx_subject_start="-------SUBJECT START-------"
mailx_subject_end="-------SUBJECT END-------"
mailx_attachement_start="-------ATTACHEMENT START-------"
mailx_attachement_end="-------ATTACHEMENT END-------"
mailx_s_opts_start="-------S OPTS START-------"
mailx_s_opts_end="-------S OPTS END-------"
mailx_to_email_start="-------EMAIL LIST START-------"
mailx_to_email_end="-------EMAIL LIST END-------"
no_attachment=false
no_additional_opts=false
additional_args_switch="-S "
num_files_in_flag_path="$(find $email_flag_path -type f | wc -l)"
if [[ $num_files_in_flag_path -gt 0 ]]; then
for file in $email_flag_path*; do
email_message="$(awk "/$mailx_message_start/,/$mailx_message_end/" $file | egrep -v -- "$mailx_message_start|$mailx_message_end")"
email_subject="$(awk "/$mailx_subject_start/,/$mailx_subject_end/" $file | egrep -v -- "$mailx_subject_start|$mailx_subject_end")"
email_attachment="$(awk "/$mailx_attachement_start/,/$mailx_attachement_end/" $file | egrep -v -- "$mailx_attachement_start|$mailx_attachement_end")"
email_additional_opts="$(awk "/$mailx_s_opts_start/,/$mailx_s_opts_end/" $file | egrep -v -- "$mailx_s_opts_start|$mailx_s_opts_end")"
email_addresses="$(awk "/$mailx_to_email_start/,/$mailx_to_email_end/" $file | egrep -v -- "$mailx_to_email_start|$mailx_to_email_end" | tr -d '\n')"
if [[ -z "$email_message" || -z "$email_subject" || -z "$email_addresses" ]]; then
echo >&2 "MISSING DETAILS IN INPUT FILE $file.... Exiting With Error"
exit 1
fi
if [[ -z "$email_attachment" ]]; then
no_attachment=true
fi
if [[ -z "$email_additional_opts" ]]; then
no_additional_opts=true
else
additional_opts_string=""
while read -r line; do
if [[ ! $line =~ [^[:space:]] ]]; then
continue
else
additional_opts_string="$additional_opts_string \"${additional_args_switch} '$line'\""
fi
done <<<"$(echo "$email_additional_opts")"
additional_opts_string="$(echo ${additional_opts_string:1} | tr -d '\n')"
fi
if [[ $no_attachment = true ]]; then
if [[ $no_additional_opts = true ]]; then
echo "$email_message" | mailx -s "$email_subject" $email_addresses
else
echo "$email_message" | mailx -s "$email_subject" $additional_opts_string $email_addresses
fi
else
if [[ $no_additional_opts = true ]]; then
echo "$email_message" | mailx -s "$email_subject" -a $email_attachment $email_addresses
else
echo "$email_message" | mailx -s "$email_subject" -a $email_attachment $additional_opts_string $email_addresses
fi
fi
done
fi
find $email_flag_path -type f -delete
exit 0
There is, however, an issue with the above that I just can't work out: the -S opts completely screw up the email headers and I end up with emails being sent to the wrong people (I have set reply-to and from options, but the email header is jumbled and the reply-to address ends up in the To: field), like this:
To: Me <a@b.com>, sendwait@a.lan , -S@a.lan <-s@a.lan>, another-email <another@b.com>
All I'm trying to do is rebuild the command as if I'd typed it in the CLI:
echo "EMAIL BODY MESSAGE" | mailx -s "EMAIL SUBJECT" -S "from=EMAILNAME <email#b.c>" -S "replyto=EMAILNAME <email#b.c>" -S sendwait my.email#b.com
I've tried quoting in ' ' and " ", quoting the other mailx parameters around it, and so on. I have written other tools that pass variables as input arguments, so I just cannot understand how I'm screwing this up.
Any help would be appreciated.
EDIT
Thanks to Gordon Davisson's really helpful comments I was able not only to fix it but to understand the fix as well, using an array and appropriately quoting the variables. The tip about using printf was really helpful in showing me what I was doing wrong and how to correct it :P
declare -a mailx_args_array
...
num_files_in_flag_path="$(find $email_flag_path -type f | wc -l)"
if [[ $num_files_in_flag_path -gt 0 ]]; then
for file in $email_flag_path*; do
....
mailx_args_array+=( -s "$email_subject" )
if [[ ! -z "$email_attachment" ]]; then
mailx_args_array+=( -a "$email_attachment" )
fi
if [[ ! -z "$email_additional_s_opts" ]]; then
while read -r s_opt_line; do
mailx_args_array+=( -S "$s_opt_line" )
done < <(echo "$email_additional_s_opts")
fi
mailx_args_array+=( "$email_addresses" )
echo "$email_message" | mailx "${mailx_args_array[#]}"
done
fi
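For anyone hitting a similar issue, the printf trick referred to above is roughly this (my reconstruction, not the exact comment): print each array element on its own line so you can see exactly how the arguments will be split before handing them to mailx.
# debugging aid: show each argument mailx will receive, wrapped in <> so
# stray spaces and empty elements are easy to spot
printf '<%s>\n' "${mailx_args_array[@]}"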

Bash script to download from google images

Some weeks ago I found on this site a very useful bash script that downloads images from Google image results (download images from google with command line).
Although the script is quite complicated for me, I made some simple modifications so that the results keep their original names instead of being renamed.
However, since last week the script has stopped working. Probably Google updated the code or something, and the script's regexes no longer parse the results. I don't know enough about Google's code, web programming or regexes to see what is wrong; I made some educated guesses, but they still didn't work.
My (non-working) tweaked script is this:
#! /bin/bash
# function to create all dirs til file can be made
function mkdirs {
file="$1"
dir="/"
# convert to full path
if [ "${file##/*}" ]; then
file="${PWD}/${file}"
fi
# dir name of following dir
next="${file#/}"
# while not filename
while [ "${next//[^\/]/}" ]; do
# create dir if doesn't exist
[ -d "${dir}" ] || mkdir "${dir}"
dir="${dir}/${next%%/*}"
next="${next#*/}"
done
# last directory to make
[ -d "${dir}" ] || mkdir "${dir}"
}
# get optional 'o' flag, this will open the image after download
getopts 'o' option
[[ $option = 'o' ]] && shift
# parse arguments
count=${1}
shift
query="$#"
[ -z "$query" ] && exit 1 # insufficient arguments
# set user agent, customize this by visiting http://whatsmyuseragent.com/
useragent='Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:31.0) Gecko/20100101 Firefox/31.0'
# construct google link
link="www.google.cz/search?q=${query}\&tbm=isch"
# fetch link for download
imagelink=$(wget -e robots=off --user-agent "$useragent" -qO - "$link" | sed 's/</\n</g' | grep '<a href.*\(png\|jpg\|jpeg\)' | sed 's/.*imgurl=\([^&]*\)\&.*/\1/' | head -n $count | tail -n1)
imagelink="${imagelink%\%*}"
# get file extension (.png, .jpg, .jpeg)
ext=$(echo $imagelink | sed "s/.*\(\.[^\.]*\)$/\1/")
# set default save location and file name change this!!
dir="$PWD"
file="google image"
# get optional second argument, which defines the file name or dir
if [[ $# -eq 2 ]]; then
if [ -d "$2" ]; then
dir="$2"
else
file="${2}"
mkdirs "${dir}"
dir=""
fi
fi
# construct image link: add 'echo "${google_image}"'
# after this line for debug output
google_image="${dir}/${file}"
# construct name, append number if file exists
if [[ -e "${google_image}${ext}" ]] ; then
i=0
while [[ -e "${google_image}(${i})${ext}" ]] ; do
((i++))
done
google_image="${google_image}(${i})${ext}"
else
google_image="${google_image}${ext}"
fi
# get actual picture and store in google_image.$ext
wget --max-redirect 0 -q "${imagelink}"
# if 'o' flag supplied: open image
[[ $option = "o" ]] && gnome-open "${google_image}"
# successful execution, exit code 0
exit 0
One way to investigate: pass the -x option to bash so you get a trace of your script; that is, change /bin/bash to /bin/bash -x in your script, or simply invoke your script with
bash -x <yourscript>
You can also annotate your script with echo commands to track some variables.
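A minimal sketch of that kind of annotation, using only variables that already exist in the script above (for example, placed right after imagelink is computed):
set -x                           # trace every command from here on
echo "link=$link" >&2            # the constructed Google search URL
echo "imagelink=$imagelink" >&2  # empty here means the regexes no longer match the page
set +x                           # stop tracing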

Curl not downloading files correctly

So I have been struggling with this task for an eternity and still don't get what went wrong. This program doesn't seem to download ANY PDFs. At the same time, I checked the file that stores the final links, and everything is stored correctly. I also checked $PDFURL; it stores the correct values. Any bash fans ready to help?
#!/bin/sh
#create a temporary directory where all the work will be conducted
TMPDIR=`mktemp -d /tmp/chiheisen.XXXXXXXXXX`
echo $TMPDIR
#no arguments given - error
if [ "$#" == "0" ]; then
exit 1
fi
# argument given, but wrong format
URL="$1"
#URL regex
URL_REG='(https?|ftp|file)://[-A-Za-z0-9\+&@#/%?=~_|!:,.;]*[-A-Za-z0-9\+&@#/%=~_|]'
if [[ ! $URL =~ $URL_REG ]]; then
exit 1
fi
# go to directory created
cd $TMPDIR
#download the html page
curl -s "$1" > htmlfile.html
#grep only links into temp.txt
cat htmlfile.html | grep -o -E 'href="([^"#]+)\.pdf"' | cut -d'"' -f2 > temp.txt
# iterate through lines in the file and try to download
# the pdf files that are there
cat temp.txt | while read PDFURL; do
#if this is an absolute URL, download the file directly
if [[ $PDFURL == *http* ]]
then
curl -s -f -O $PDFURL
err="$?"
if [ "$err" -ne 0 ]
then
echo ERROR "$(basename $PDFURL)">&2
else
echo "$(basename $PDFURL)"
fi
else
#update url - it is always relative to the first parameter in script
PDFURLU="$1""/""$(basename $PDFURL)"
curl -s -f -O $PDFURLU
err="$?"
if [ "$err" -ne 0 ]
then
echo ERROR "$(basename $PDFURLU)">&2
else
echo "$(basename $PDFURLU)"
fi
fi
done
#delete the files
rm htmlfile.html
rm temp.txt
P.S. Another minor problem I have just spotted. Maybe the problem is with the regex in the if? I would much rather use something like this there:
if [[ $PDFURL =~ (https?|ftp|file):// ]]
but this doesn't work. I don't see any unwanted parentheses there, so why?
P.P.S. I also ran this script on URLs beginning with http, and the program gave the desired output. However, it still doesn't pass the test.
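Not a full answer, but one thing worth checking (an assumption on my part, based only on the script as posted): [[ ... =~ ... ]] is a bash feature while the script's shebang is #!/bin/sh, and the usual way to avoid quoting surprises with =~ is to keep the pattern in a variable. A minimal sketch:
#!/bin/bash
# keep the ERE in a variable and expand it unquoted on the right of =~
url_re='^(https?|ftp|file)://'
PDFURL='http://example.com/some.pdf'   # hypothetical test value
if [[ $PDFURL =~ $url_re ]]; then
    echo "absolute URL"
else
    echo "relative URL"
fi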

Comparing an existing file with the result of a heavy process using named pipes

I'm trying to figure out a way to compare an existing file with the result of a process (a heavy one, not to be repeated) and clobber the existing file with that result, without having to write it to a temp file (it would be a large temp file, about the same size as the existing file; let's try to be efficient and not take up twice the space we should).
I would like to replace the normal file /tmp/replace_with_that (see below) with a fifo, but of course doing so with the code below would just lock up the script, since the /tmp/replace_with_that fifo cannot be read until the existing file has been compared with the named pipe /tmp/test_against_this.
#!/bin/bash
mkfifo /tmp/test_against_this
: > /tmp/replace_with_that
echo 'A B C D' >/some/existing/file
{
#A very heavy process not to repeat;
#Solved: we used a named pipe.
#Its large output should not be sent to a file
#To solve: using this code, we write the output to a regular file
for LETTER in "A B C D E"
do
echo $LETTER
done
} | tee /tmp/test_against_this /tmp/replace_with_that >/dev/null &
if cmp -s /some/existing/file /tmp/test_against_this
then
echo Exact copy
#Don't do a thing to /some/existing/file
else
echo Differs
#Clobber /some/existing/file with /tmp/replace_with_that
cat /tmp/replace_with_that >/some/existing/file
fi
rm -f /tmp/test_against_this
rm -f /tmp/replace_with_that
I think I would recommend a different approach (a rough sketch follows the list):
Generate an MD5/SHA1/SHA256/whatever hash of the existing file
Run your heavy process and replace the output file
Generate a hash of the new file
If the hashes match, the files were the same; if not, the new file is different
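A rough sketch of that approach, assuming md5sum is acceptable and with heavy_process standing in for whatever actually generates the data (both of those names are placeholders, not part of the original answer):
#!/bin/bash
old_sum=$(md5sum /some/existing/file | cut -d' ' -f1)
heavy_process > /some/existing/file          # placeholder for the real generator
new_sum=$(md5sum /some/existing/file | cut -d' ' -f1)
if [ "$old_sum" = "$new_sum" ]; then
    echo "Exact copy"
else
    echo "Differs"
fi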
Just for completeness, my answer (wanted to explore the use of pipes):
Was trying to find a way to compare, on the fly, a stream and an existing file, without overwriting the existing file unnecessarily (leaving it as is if the stream and the file are exact copies), and without creating sometimes-large temp files (the product of a heavy process like mysqldump, for instance). The solution had to rely on pipes only (named and anonymous), and maybe a few very small temp files.
The checksum solution suggested by twalberg is just fine, but md5sum calls on large files are processor intensive (and processing time grows linearly with file size). cmp is faster.
Example call of the function listed below:
#!/bin/bash
mkfifo /tmp/fifo
mysqldump --skip-comments $HOST $USER $PASSWORD $DB >/tmp/fifo &
create_or_replace /some/existing/dump /tmp/fifo
#This also works, but depending on the anonymous fifo setup, seems less robust
create_or_replace /some/existing/dump <(mysqldump --skip-comments $HOST $USER $PASSWORD $DB)
The functions:
#!/bin/bash
checkdiff(){
local originalfilepath="$1"
local differs="$2"
local streamsize="$3"
local timeoutseconds="$4"
local originalfilesize=$(stat -c '%s' "$originalfilepath")
local starttime
local stoptime
#Hackish: we can't know for sure when the wc subprocess will have produced the streamsize file
starttime=$(date +%s)
stoptime=$(( $starttime + $timeoutseconds ))
while ([[ ! -f "$streamsize" ]] && (( $stoptime > $(date +%s) ))); do :; done;
if ([[ ! -f "$streamsize" ]] || (( $originalfilesize == $(cat "$streamsize" | head -1) )))
then
#Using streams that were exact copies of files to compare with,
#on average, with just a few test runs:
#diff slowest, md5sum 2% faster than diff, and cmp method 5% faster than md5sum
#Did not test, but on large unequal files,
#cmp method should be way ahead of the 2 other methods
#since equal files is the worst case scenario for cmp
#diff -q --speed-large-files <(sort "$originalfilepath") <(sort -) >"$differs"
#( [[ $(md5sum "$originalfilepath" | cut -b-32) = $(md5sum - | cut -b-32) ]] && : || echo -n '1' ) >"$differs"
( cmp -s "$originalfilepath" - && : || echo -n '1' ) >"$differs"
else
echo -n '1' >"$differs"
fi
}
create_or_replace(){
local originalfilepath="$1"
local newfilepath="$2" #Should be a pipe, but could be a regular file
local differs="$originalfilepath.differs"
local streamsize="$originalfilepath.size"
local timeoutseconds=30
local starttime
local stoptime
if [[ -f "$originalfilepath" ]]
then
#Cleanup
[[ -f "$differs" ]] && rm -f "$differs"
[[ -f "$streamsize" ]] && rm -f "$streamsize"
#cat the pipe, get its size, check for differences between the stream and the file and pipe the stream into the original file if all checks show a diff
cat "$newfilepath" |
tee >(wc -m - | cut -f1 -d' ' >"$streamsize") >(checkdiff "$originalfilepath" "$differs" "$streamsize" "$timeoutseconds") | {
#Hackish: we can't know for sure when the checkdiff subprocess will have produced the differs file
starttime=$(date +%s)
stoptime=$(( $starttime + $timeoutseconds ))
while ([[ ! -f "$differs" ]] && (( $stoptime > $(date +%s) ))); do :; done;
[[ ! -f "$differs" ]] || [[ ! -z $(cat "$differs" | head -1) ]] && cat - >"$originalfilepath"
}
#Cleanup
[[ -f "$differs" ]] && rm -f "$differs"
[[ -f "$streamsize" ]] && rm -f "$streamsize"
else
cat "$newfilepath" >"$originalfilepath"
fi
}

Loop script until file has equal size for a minute

I have a cronjob that runs a script every day at a specific time. The script converts a large file (about 2 GB) in a specific folder. The problem is that my colleague does not always put the file in the folder before the time written in the cronjob.
Please help me to add commands in the script or to write second script for:
Check if the file exists in the folder.
If the previous action is true, check the file size every minute (I would like to avoid converting a large file that is still incoming).
If filesize stays unchanged for 2 minutes, start the script for conversion.
I give you the important lines of the script so far:
cd /path-to-folder
for i in *.mpg; do avconv -i "$i" "out-$i.mp4" ; done
10x for the help!
NEW CODE AFTER COMMENTS:
There is a file in the folder!
#! /bin/bash
cdate=$(date +%Y%m%d)
dump="/path/folder1"
base=$(ls "$dump")
if [ -n "$file"]
then
file="$dump/$base"
size=$(stat -c '%s' "$file")
count=0
while sleep 10
do
size0=$(stat -c '%s' "$file")
if [ $size=$size0 ]
then $((count++))
count=0
fi
if [ $count = 2 ]
then break
fi
done
# file has been stable for two minutes. Start conversion.
CONVERSION CODE
fi
MESSAGE IN TERMINAL (maybe an error?):
script.sh: 17: script.sh: arithmetic expression: expecting primary: "count++"
file=/work/daily/dump/name_of_dump_file
if [ -f "$file" ]
then
# size=$(ls -l "$file" | awk '{print $5}')
size=$(stat -c '%s' "$file")
count=0
while sleep 60
do
size0=$(stat -c '%s' "$file")
if [ $size = $size0 ]
then : $((count++))
else size=$size0
count=0
fi
if [ $count = 2 ]
then break
fi
done
# File has been stable for 2 minutes — start conversion
fi
Given the slightly revised requirements (described in the comments), and assuming that the file names do not contain spaces or newlines or other similarly awkward characters, then you can use:
dump="/work/daily/dump" # folder 1
base=$(ls "$dump")
if [ -n "$file" ]
then
file="$dump/$base"
...code as before...
# File has been stable for 2 minutes - start conversion
dir2="/work/daily/conversion" # folder 2
file2="$dir2/$(basename $base .mpg).xyz"
convert -i "$file" -o "$file2"
mv "$file" "/work/daily/originals" # folder 3
ncftpput other.coast.example.com /work/daily/input "$file2"
mv "$file2" "/work/daily/converted" # folder 4
fi
If there's nothing in the folder, the process exits. If you want it to wait until there is a file to convert, then you need a loop around the file test:
while file=$(ls "$dump")
[ -z "$file" ]
do sleep 60
done
This uses a little-known feature of shell loops; you can stack the commands in the control, but it is the exit status of the last one that controls the loop.
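If the stacked-command form looks too clever, the same loop can be written with until; this is my own equivalent rewrite, with the test simply inverted:
until file=$(ls "$dump")
      [ -n "$file" ]
do sleep 60
done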
Well, I finally made some working code as follows:
#!/bin/bash
cdate=$(date +%Y%m%d)
folder1="/path-to-folder1"
cd $folder1
while file=$(ls "$folder1")
[ -z "$file" ]
do sleep 5 && echo "There is no file in the folder at $cdate."
done
echo "There is a file in folder at $cdate"
size1=$(stat -c '%s' "$file")
echo "The size1 is $size1 at $cdate"
size2=$(stat -c '%s' "$file")
echo "The size2 is $size2 at $cdate"
if [ $size1 = $size2 ]
then
echo "file is stable at $cdate. Do conversion."
Is the next line the right way to loop the same script?
else sh /home/user/bin/exist-stable.sh
fi
The right code (following the comments) uses exec, which replaces the current shell with the new invocation instead of stacking an extra shell process on every run:
else exec /home/user/bin/exist-stable.sh
fi
