What would be the correct format for the following, where I want to execute two scripts? The following is only executing the first one for me:
if ps aux | grep -E "[a]ffiliate_download.py|[g]oogle_download.py" > /dev/null
then
    echo "Script is already running. Skipping"
else
    exec "$DIR/affiliate_download.py"
    exec "$DIR/google_download.py"
fi
The exec command replaces the current shell process with the program it runs. Since the shell is no longer running, it can't run commands after that.
Just execute the commands normally:
else
    "$DIR/affiliate_download.py"
    "$DIR/google_download.py"
fi
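If you still want exec's property of not leaving an idle parent shell behind, a common variation (an addition, not part of the original fix) is to run the first script normally and exec only the last one:

else
    "$DIR/affiliate_download.py"      # runs as a child; the shell waits for it
    exec "$DIR/google_download.py"    # replaces the shell for the final command
fi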
The following script.sh is executed:
#!/bin/bash
set -eu
# code ...
su buser
mkdir /does/not/work
echo $?
echo This should not be printed
Output:
1
This should not be printed
How I execute the script:
docker exec -i fancy_container bash < script.sh
Question: Why does the script not terminate after the failing command even though set -e was defined, and how can I get the script to exit on any failing command? I think the key point is the '<' operator; I do not understand exactly how it executes the script.
Notes:
-e means: Abort the script at the first error, i.e. when a command exits with a non-zero status (except in until or while loops, if-tests, and list constructs)
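Those exceptions are easy to see in a short sketch (an illustration, not from the original post):

#!/bin/bash
set -e
if ! mkdir /does/not/work; then    # a failure inside an if-test does not abort
    echo "mkdir failed, but the script keeps going"
fi
false && true                      # failing left-hand side of && is also exempt
echo "still running"               # reached despite both failures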
Possible solution:
docker exec -i fancy_container bash -c "cat > tmp.sh; bash tmp.sh" < script.sh
How it works:
< script.sh - Redirect the contents of this file from the host into the stdin of the docker exec command.
cat > tmp.sh - Save the incoming piped content to a file inside the container.
bash tmp.sh - Execute the file as a whole inside the container, which means -e works again as expected!
But I still don't know why the initial approach isn't working.
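The likely cause (my reading, not stated in the original post): when bash reads a script from stdin, every command it starts inherits that same stdin, so a command that itself reads stdin, such as the shell started by su buser, consumes the remaining lines of the script before the outer bash (and its set -e) ever sees them. A self-contained sketch:

# demo.sh -- run with:  bash < demo.sh
set -eu
cat    # stands in for the shell `su buser` starts: it reads the rest of stdin
echo "the outer bash never executes this line"

Run this way, cat prints the echo line as literal text instead of bash executing it.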
I have to run a command using exec in a shell script, and I need to trap the exit code in case of error and run another command e.g
#!/bin/sh
set +e
exec command_that_will_fail
if [ $? -eq 1 ]; then
    echo "command failed, running another command"
fi
I understand that exec replaces the current shell and carries on; my problem is that I need to run another command if the exec is not successful.
Your code works if there's some immediate error when it tries to run the process:
$ echo 1
1
$ echo $?
0
$ exec asd123
-bash: exec: asd123: not found
$ echo $?
127
If the executable file was found and started, then exec will not return, because the new program replaces the whole shell process and control never comes back to bash.
For example this never returns:
$ exec grep asd /dev/null
(the exit code of grep is 1, but the parent shell has been replaced, so nothing is left to check it)
If you want to get an exit code from the process in this case, you have to start it as a subprocess, i.e. not using exec (just command_that_will_fail). In this case the bash process will act as a supervisor that waits until the subprocess finishes and can inspect the exit code.
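A minimal sketch of that supervisor pattern, reusing the asker's placeholder command_that_will_fail:

#!/bin/sh
command_that_will_fail    # run as a child process, not via exec
status=$?
if [ "$status" -ne 0 ]; then
    echo "command failed with status $status, running another command"
fi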
I am trying to do the following:
if ps aux | grep "[t]ransporter_pulldown.py" > /dev/null
then
    echo "Script is already running. Skipping"
else
    exec "sudo STAGE=production $DIR/transporter_pulldown.py" # this line errors
fi
$ sudo STAGE=production $DIR/transporter_pulldown.py works on the command line, but in a bash script it gives me:
./transporter_pulldown.sh: line 9:
exec: /Users/david/Desktop/Avails/scripts/STAGE=production
/Users/david/Desktop/Avails/scripts/transporter_pulldown.py:
cannot execute: No such file or directory
What would be the correct syntax here?
sudo isn't a command interpreter, so it tries to execute its first argument as a command.
Instead try this:
exec sudo bash -c "STAGE=production $DIR/transporter_pulldown.py"
This uses a new bash process to interpret the variables and execute your Python script. Also note that $DIR will be expanded by the shell you're typing in rather than by the shell that is being executed; to force it to be expanded in the new bash process, use single quotes.
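If you would rather not involve an intermediate shell at all, env(1) is a standard alternative (a variant I'm adding, not part of the original answer); here $DIR is still expanded by the calling script before sudo runs:

# env sets STAGE=production for the python process only
exec sudo env STAGE=production "$DIR/transporter_pulldown.py"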
I'm trying to make a script to execute a set of commands from a file.
The file, for example, has a set of 3 commands (perl script-a, perl script-b, perl script-c), each command on a new line, and I made this script:
#!/bin/bash
for command in `cat file.txt`
do
    echo $command
    perl $command
done
The problem is that some scripts get stuck or take too long to finish, and I want to see their outputs. Is it possible, when I send Ctrl+C to the command currently being executed, for the bash script to jump to the next command in the txt file instead of cancelling the whole bash script?
Thank you
You can use trap 'continue' SIGINT to ignore Ctrl+c:
#!/bin/bash
# Ignore Ctrl+C (SIGINT) and continue with the next command
trap 'continue' SIGINT
while read -r command
do
    echo "$command"
    perl "$command"
done < file.txt
# Restore the default Ctrl+C behaviour
trap - SIGINT
Also you don't need to call cat to read a file's contents.
#!/bin/bash
for scr in $(cat file.txt)
do
    echo "$scr"
    # Only if you have a few lines in your file.txt:
    # execute each perl command in the background and save its output.
    # From your question it seems each of these scripts is independent.
    perl "$scr" &> "${scr}_perl_execution.out" &
done
You can check each of the output files to see whether the commands are doing what you expect. If not, you can use kill to terminate any of them.
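A short sketch of that supervision using the shell's job control (run in the same shell as the loop above):

jobs -p    # list the PIDs of the background perl jobs
kill %1    # terminate the first background job if it misbehaves
wait       # block until the remaining jobs finish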
I am attempting to run a couple of commands in a bash script, however it hangs on one command, waiting for it to complete (which it won't). This script is simply making sure it's running.
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running."
else
    echo "Process is not running... Starting..."
    python likebot.py
    echo $(ps aux | grep python | grep -v color | awk '{print $2}')
fi
Once it gets to the python command it hangs while that command executes; it's not until I Ctrl+C that it prints the PID. Is there any way I can have it run this bash script and exit once the commands have been started (without waiting for them to complete)?
In general, if you want to execute a command and not wait for it, you can simply use & as the delimiter rather than ; or a newline. When doing so, the pid of that process is available to the shell in the special variable $!. If you want to wait for that process to complete, you can use wait. If you do not wish to wait for it, then simply omit the wait. In your case:
python likebot.py & # Start command asynchronously
echo $! # echo the pid of the most recent asynchronous process
Since it looks like likebot should always be running, you might want to consider nohup as well; with a bare &, the job is still a child of your login process and will die if that dies.
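A sketch of the nohup variant (the log file name is just an example):

nohup python likebot.py > likebot.log 2>&1 &    # survives the login shell exiting
echo $!                                         # pid of the detached process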