I've only been writing actual .sh scripts since sometime this morning, and I'm a bit stuck. I'm trying to write a script to check to see if a process is running, and to start it if it isn't. (I plan to run this script once every 10 to 15 minutes with cron.)
Here's what I have so far:
#!/bin/bash
APPCHK=$(ps aux | grep -c "/usr/bin/rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv@192.168.0.140:~/webapps/pavlick_container/public/images
")
RUNSYNC=$(rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv@192.168.0.140:~/webapps/pavlick_container/public/images)
if [ $APPCHK < '2' ];
then
$RUNSYNC
fi
exit
Here's the error that I'm getting:
$ ./image_sync.sh
rsync: mkdir "/home/i/webapps/pavlick_container/public/images" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(595) [Receiver=3.0.7]
rsync: connection unexpectedly closed (9 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]
./image_sync.sh: line 8: 2: No such file or directory
The really frustrating part is that
rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv@192.168.0.140:~/webapps/pavlick_container/public/images
runs just fine from a terminal window.
What am I doing wrong?
Your grep call is wrong on two counts. The pattern shouldn't include a newline. To look for an exact string, use grep -F 'substring' or grep -xF 'exact whole line'.
Finding if a process is running with ps | grep is highly brittle. On most unices (at least Solaris, Linux and *BSD), use pgrep: pgrep -f 'PATTERN' returns true if there's a running process whose command line matches PATTERN.
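For example, a minimal sketch (the pattern here is just illustrative):

# exit status 0 (true) if any process's full command line matches the pattern
if pgrep -f 'rsync .*organized_pics' >/dev/null; then
    echo "an rsync for organized_pics is already running"
fi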
Every program returns a status code, either 0 to indicate success or a number between 1 and 255 to indicate failure. In the shell, any command is a valid boolean expression; the status code 0 is treated as true and anything else as false.
$(…) means run the command inside the parentheses and capture its output. So rsync is executed as soon as the shell hits the definition of the RUNSYNC variable. To store a block of shell code, use a function (example below, although you don't actually need a function here, you could just write the code directly).
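A quick illustration of the difference:

output=$(date)   # runs date immediately and captures its output
run_later () {   # stores code; nothing runs until the function is called
    date
}
run_later        # date actually runs here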
Your test [ $APPCHK < 2 ] should be [ $APPCHK -lt 2 ]: < means input redirection. (In bash, you can also write [[ foo < bar ]], but that's string comparison, not numeric comparison.)
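Side by side, with the last variant being the source of your "2: No such file or directory" error:

APPCHK=1
if [ "$APPCHK" -lt 2 ]; then echo "numeric: less than 2"; fi      # numeric comparison
if [[ abc < abd ]]; then echo "string: abc sorts before abd"; fi  # string comparison
# [ "$APPCHK" < 2 ] makes the shell try to redirect input from a file named 2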
~/ at the beginning of the remote rsync path is optional. Also, -e ssh is the default unless your version of rsync is really old.
exit at the end of the script is useless; the script will exit anyway.
Here's a script taking the above into account:
#!/bin/bash
run_rsync () {
rsync -rvz '/home/e-smith/files/ibays/drive-i/files/Warehouse Pics/organized_pics' \
imgserv@192.168.0.140:webapps/pavlick_container/public/images
}
process_pattern='/usr/bin/rsync -rvz /home/e-smith/files/ibays/drive-i/files/Warehouse Pics/organized_pics imgserv@192\.168\.0\.140:webapps/pavlick_container/public/images'
if ! pgrep -xf "$process_pattern" > /dev/null; then
  run_rsync
fi
Looks like some directory along the path in your rsync command is wrong: ~/webapps/pavlick_container/public/images
Have you checked on the server 192.168.0.140, in imgserv's home directory, whether "pavlick_container/public" exists? That's my guess.
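A quick way to check (assuming you can ssh in as imgserv; paths as in your command):

ssh imgserv@192.168.0.140 'ls -ld ~/webapps/pavlick_container/public'
# if that fails, create the missing directories first:
ssh imgserv@192.168.0.140 'mkdir -p ~/webapps/pavlick_container/public/images'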
You have a number of problems. First, you are running the commands instead of storing them in variables. There is also a much easier way.
RUNSYNC="rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv#192.168.0.140:~/webapps/pavlick_container/public/images"
if ! pgrep -f "rsync.*organized_pics"; then $RUNSYNC; fi
First of all, this way of checking whether the program is running is fragile; it may or may not work. You should instead rely on a special file that you create when your script starts and delete when it ends. Checking whether that file exists then tells you whether the script is already running.
Then, try either putting a \ before the ~ or removing the ~/ completely. If cron runs as another user, the tilde will be expanded to that user's home directory. It works from the command line because the home directories of your user on both machines happen to match, but that is not true for the user cron runs as. This is a guess at this point, but again, try removing the ~/ and see if it works.
If your real code is missing a closing double-quote on the grep target, you're going to get weird results from the get-go.
Also, ps aux will not list a complete command line the way you show (at least not on any of the ps implementations I have used).
You need to make it ps auxwww. Often you will see people add | grep -v grep | to filter out the grep process itself, which would otherwise match its own pattern. This can be reduced to changing your static search target slightly, e.g. from "/usr/bin/rsync" to "/usr/bin/[r]sync ": the bracket expression still matches the rsync command line, but no longer matches the grep command line that literally contains the brackets.
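Both variants, for comparison:

# classic: filter out the grep process itself
ps auxwww | grep '/usr/bin/rsync ' | grep -v grep

# bracket trick: the pattern no longer matches its own grep command line
ps auxwww | grep '/usr/bin/[r]sync '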
Other users are also helping with their comments. Using a flag file as @DiegoSevilla mentions is workable but not ideal. Use a mkdir /tmp/MyWatcher_flagDir for your flag instead. Directory creation is an atomic operation (whereas file creation is not), and this will eliminate any errors you might encounter when two copies of your monitor try to create a flag file at the same time. Only one process will succeed in making or removing a flag directory.
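A minimal sketch of that locking pattern (the directory name is just an example):

#!/bin/bash
lockdir=/tmp/MyWatcher_flagDir

# mkdir is atomic: exactly one concurrent invocation can succeed
if ! mkdir "$lockdir" 2>/dev/null; then
    echo "another instance is already running" >&2
    exit 1
fi
# remove the lock on exit, even if the script is interrupted
trap 'rmdir "$lockdir"' EXIT

# ... do the actual monitoring/rsync work here ...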
I hope this helps.
I am quite new to bash (barely any experience at all) and I need some help with a bash script.
I am using docker-compose to create multiple containers - for this example let's say 2 containers. The 2nd container will execute a bash command, but before that, I need to check that the 1st container is operational and fully configured. Instead of using a sleep command I want to create a bash script that will be located in the 2nd container and once executed do the following:
Execute a command and log the console output in a file
Read that file and check if a String is present. The command that I execute in the previous step takes a few (5 - 10) seconds to complete, and I need to read the file after it has finished executing. I suppose I can add a sleep to make sure the command has finished, or is there a better way to do this?
If the string is not present I want to execute the same command again until I find the String I am looking for
Once I find the string I am looking for I want to exit the loop and execute a different command
I found out how to do this in Java, but I need to do it in a bash script.
The docker-containers have alpine as an operating system, but I updated the Dockerfile to install bash.
I tried this solution, but it does not work.
#!/bin/bash
[command to be executed] > allout.txt 2>&1
until
    tail -n 0 -F /path/to/file | \
    while read LINE
    do
        if echo "$LINE" | grep -q $string
        then
            echo -e "$string found in the console output"
        fi
    done
do
    echo "String is not present. Executing command again"
    sleep 5
    [command to be executed] > allout.txt 2>&1
done
echo -e "String is found"
In your docker-compose file, make use of the depends_on option.
depends_on will take care of the startup and shutdown sequence of your multiple containers.
But it does not check whether a container is ready before moving on to the next container's startup. To handle this scenario, check this out.
As described in this link,
You can use tools such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
OR
Alternatively, write your own wrapper script to perform a more application-specific health check.
In case you don't want to make use of the above tools, check this out. Here they use a combination of HEALTHCHECK and the service_healthy condition, as shown here. For a complete example, check this.
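If you'd rather roll your own, here's a minimal wait-for-style sketch in bash (the host name and port are hypothetical placeholders):

#!/bin/bash
host=container1   # the service to wait for (assumed name)
port=8080         # the port it is expected to listen on (assumed)

# poll until the TCP port accepts connections, using bash's built-in /dev/tcp
until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    echo "waiting for $host:$port ..."
    sleep 1
done

echo "$host:$port is up, continuing"
# ... run the dependent command here ...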
Just:
while :; do
    # 1. Execute a command and log the console output in a file
    command > output.log
    # TODO: handle errors, etc.
    # 2. Read that file and check if a String is present.
    if grep -q "searched_string" output.log; then
        # Once I find the string I am looking for I want to exit the loop
        break
    fi
    # 3. If the string is not present, the loop executes the same command again
    sleep 0.1   # small delay so the loop does not spin at 100% CPU
done
# ...and execute a different command
different_command
You can timeout a command with timeout.
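For example (a sketch; the 30-second limit is arbitrary):

# give up if the command takes longer than 30 seconds
timeout 30 command > output.log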
Notes:
colon is a utility that returns a zero exit status, much like true. I prefer while : over while true; they mean the same thing.
The code presented should work in any POSIX shell.
I have a shell script that runs the following two commands, connecting to a remote server and putting files via SFTP. Let's call it "execute.sh":
sftp -b /usr/local/outbox/send.sh username@example.com
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
Then in my "send.sh" I have the following commands to be executed on the remote server:
cd ExampleFolder/outbox
put Files_*
bye
Now my problem is:
If the first command "sftp -b" fails due to a remote connection error or some network problem, it still moves the files into the "completed" folder, which is incorrect. I want the next command "mv" to be executed only if the first "sftp" command connects successfully.
Can we do this by enhancing this shell script, or with some workaround?
My shell is Bash.
Simply insert && between the two commands:
sftp -b /usr/local/outbox/send.sh username@example.com && \
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
If the first fails, the second one will not run.
Alternatively, you can check the exit code of the first command explicitly. The exit code of the last command is always saved in $?, and it is 0 if the command succeeded:
sftp -b /usr/local/outbox/send.sh username@example.com
if [ $? -eq 0 ]
then
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
fi
If you really wanted to capture the output of the first command, you could run it in $(...) and store the value in a variable:
sftpOutput="$(sftp -b /usr/local/outbox/send.sh username@example.com)"
and then use this variable in further checks, e.g. match it against a pattern in the next if.
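For instance, a sketch that matches the captured output against a pattern (the "Connection refused" string is just an assumption about what your sftp prints on failure):

sftpOutput="$(sftp -b /usr/local/outbox/send.sh username@example.com 2>&1)"
if [[ $sftpOutput == *"Connection refused"* ]]; then
    echo "sftp could not connect" >&2
fi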
When I'm looking at bash script code, I sometimes see | and sometimes see ||, but I don't know which is preferable.
I'm trying to do something like ..
set -e;
ret=0 && { which ansible || ret=$?; }
if [[ ${ret} -ne 0 ]]; then
# install ansible here
fi
Please advise which OR operator is preferred in this scenario.
| isn't an OR operator at all. You could use ||, though:
which ansible || {
true # put your code to install ansible here
}
This is equivalent to an if:
if ! which ansible; then
true # put your code to install ansible here
fi
By the way -- consider making a habit of using type (a shell builtin) rather than which (an external command). type is both faster and has a better understanding of shell behavior: If you have an ansible command that's provided by, say, a shell function invoking the real command, which won't know that it's there, but type will correctly detect it as available.
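A sketch of that substitution:

# type also finds shell functions, aliases and builtins, not just $PATH entries
if ! type ansible >/dev/null 2>&1; then
    echo "installing ansible..."   # put your install code here
fi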
There is a big difference between using a single pipe (pipe output from one command to be used as input for the next command) and a process control OR (double pipe).
cat /etc/issue | less
This runs the cat command on the /etc/issue file, and instead of immediately sending the output to stdout it is piped to be the input for the less command. Yes, this isn't a great example, since you could instead simply do less /etc/issue - but at least you can see how it works
touch /etc/testing || echo Did not work
For this one, the touch command is run, or attempted to run. If it has a non-zero exit status, then the double pipe OR kicks in, and tries to execute the echo command. If the touch command worked, then whatever the other choice is (our echo command in this case) is never attempted...
I have a series of bash commands, some with interactive prompts, that I need to run on a remote machine. I have to call them in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but keep stdin connected to the user's local terminal so they can interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there is a command that will prompt for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, never finding a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX gave > 2 MB where I looked). And this got me thinking about how I could use this and mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
I am writing a script that among other things runs a shell command several times. This command doesn't handle exit codes very well and I need to know if the process ended successfully or not.
So what I was thinking is to analyze the stderr to look for the word "error" (using grep). I know this is not the best thing to do; I'm working on it....
Anyway, the only way I can imagine is to put the stderr of that program in a variable and then use grep to, well, "grep" it and store the result in another variable. Then I can see if that variable is set, meaning that there was an error, and do my work.
The question is: how can I do this?
I don't really want to run the program inside a variable, because it has got a lot of arguments (with special characters such as backslash, quotes, doublequotes...) and it's a memory and I/O intensive program.
Awaiting your reply, thanks.
Redirect the stderr of that command to a temporary file and check if the word "error" is present in that file.
mycommand 2> /tmp/temp.txt
grep error /tmp/temp.txt
Thanks @Jdamian, this was my answer too, in the end.
I asked whether I was allowed to write a temp file, and I was, so this is the end result:
... script
command to be launched -argument -other "argument" -other "other" argument 2>&1 | tee $TEMPFILE
ERRORCODE=( `grep -i error "$TEMPFILE" `)
if [ -z $ERRORCODE ] ;
then
some actions ....
I haven't tested this yet because there are some other scripts involved that I need to write first.
What I'm trying to do is:
run the command, having its stderr redirected to stdout;
using tee, have the above output printed on screen and also written to the temp file;
have grep store the string "error" found in the temp file (if any) in a variable called ERRORCODE;
if that variable is populated (which means it was set by grep), the script stops, quitting with status 1; otherwise it continues.
What do you think ?
If you don't need the standard output, note that the order of redirections matters: 2>&1 first duplicates stderr onto the pipe, and >/dev/null then discards stdout, so grep sees only the error stream:
if mycommand 2>&1 >/dev/null | grep -q error; then
echo an error occurred
fi