tee -a: No such file or directory - bash

Please have a look at the following code.
#!/bin/sh
LOG_FILE=/home/admin/scriptLogs.log
rm -f ${LOG_FILE}
echo "`date`:: Script Execution Started" | tee -a ${LOG_FILE}
DATABASE ACCESS CODE 2>&1
echo "`date`:: Script Execution Successful " | tee -a {$LOG_FILE}
exit 0
It produces following output:
> Tue Feb 7 12:14:49 IST 2017:: Script Execution Started
> tee:{/home/admin/scriptLogs.log}: No such file or directory
> Tue Feb 7 12:14:49 IST 2017:: Script Execution Successfull
However, the file is present at the specified location, and it does get appended to by every line except the last echo statement. Why does this happen?

Make sure to always quote your variables when expanding them, to avoid reinterpretation (read about it here: http://www.tldp.org/LDP/abs/html/quotingvar.html)
Also, you should define LOG_FILE as a string (note the quotes below):
LOG_FILE="/home/admin/scriptLogs.log"
With that said, you have a typo in your script as mentioned by @codeforester
Also, you print that the execution was successful, without checking that it really was.
So your code should look like this:
#!/bin/sh
LOG_FILE="/home/admin/scriptLogs.log"
rm -f "$LOG_FILE"
echo "`date`:: Script Execution Started" | tee -a "$LOG_FILE"
DATABASE ACCESS CODE 2>&1
if [ $? -eq 0 ]; then
    echo "`date`:: Script Execution Successful " | tee -a "$LOG_FILE"
else
    ...
fi
exit 0
Note: I have removed the curly brackets, as they are not needed here (although it is not a mistake to keep them)
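For completeness, here is a small sketch of the one case where the braces do matter: when the variable name would otherwise run into the text that follows it.
LOG_FILE="/home/admin/scriptLogs.log"
cp "$LOG_FILE" "$LOG_FILE.bak"     # fine: '.' cannot be part of a variable name
cp "$LOG_FILE" "${LOG_FILE}_old"   # braces required: $LOG_FILE_old would be a different (empty) variable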

As pointed out by @codeforester, use:
#!/bin/sh
LOG_FILE=/home/admin/scriptLogs.log
rm -f ${LOG_FILE}
echo "`date`:: Script Execution Started" | tee -a ${LOG_FILE}
DATABASE ACCESS CODE 2>&1
echo "`date`:: Script Execution Successful " | tee -a ${LOG_FILE}
exit 0

Related

Cron + nohup = script in cron cannot find command?

There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv
if [ ! -f $NPID ]
then
echo "Not started"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
NUM=$(ps ax | grep $(cat $NPID) | grep -v grep | wc -l)
if [ $NUM -lt 1 ]
then
echo "Not working"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
ps ax | grep $(cat $NPID) | grep -v grep
echo "All Ok"
fi
fi
websrv gets JSON from the user and runs the work.sh script itself.
The problem is that the sh script invoked by websrv "does not see" commands and stops with exit 1.
The script work.sh is like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Using full paths does not help.
If the script is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup home/user/server/websrv & it works as well.
However, if this whole chain of tools is started by cron on boot, work.sh is not able to perform any command except cp, pwd, which, etc. Invoking ssh-add, git, cp, rm, make, etc. forces an exit 1 status from the script. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log that might explain the particular errors.
Try adding the path from a session that runs the script correctly to the cron entry (or inside the script).
Get the current path (in a session where the script runs fine) with echo $PATH, and add it to the crontab, replacing the placeholder below with that output:
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.)...
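Alternatively, you can set PATH at the top of run.sh itself, so the cron entry stays simple. A sketch (adjust the directory list to whatever echo $PATH reports in your working session):
#!/usr/bin/env bash
# Give the script the PATH an interactive shell would have, so ssh-add,
# git, etc. are found when cron runs it at @reboot.
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH"
# ... rest of run.sh unchanged ...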

grep command inside EOF doesn't seem to be executing on remote hosts [UNIX BASH]

Here is the chunk of code for reference.
I have checked the variable values using echo and they look fine.
What I want to achieve is searching logs on remote hosts using grep, but the grep does not produce any output.
for dir in ${log_path}
do
for host in ${Host}
do
if [[ "${userinputserverhost}" == "${host}" ]]
then
ssh -q -T username#userinputserverhost "bash -s" <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
`\$(grep -A 5 -s "\${ID}" "\${dir}"/archive/*.log)`
EOF
fi
break
done
done
First, remove all the crap around the grep.
Second, you're overquoting your vars.
Third, skip the "bash -s" if you can.
ssh -q -T username@userinputserverhost <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
Fourth, I don't see where $ID is set...so if that's being loaded on the remote system by the login or something, then that one would need the dollar sign backslashed.
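To illustrate the difference (REMOTE_VAR here is a hypothetical variable that only exists on the remote host):
# Quoted delimiter ('EOF'): nothing expands locally, so ${REMOTE_VAR}
# reaches the remote shell untouched and is expanded there.
ssh username@userinputserverhost <<-'EOF'
echo "${REMOTE_VAR}"
EOF
# Unquoted delimiter (EOF): ${ID} is expanded locally before ssh runs;
# anything that must expand remotely needs its dollar sign escaped (\${REMOTE_VAR}).
ssh username@userinputserverhost <<-EOF
echo "${ID} \${REMOTE_VAR}"
EOF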
Finally, be aware that here-docs are great, but sometimes here-strings are simpler if you can spare the quotes.
$: ssh 2>&1 dudeling@sandbox-server '
> date
> whoami
> ' | tee -a foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
$: cat foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
That one is more a matter of taste. Even better, if you can, write your remote-script to a local file & use that. And of course, you can always add set -vx into the script to see what gets remotely executed.
cat >tmpScript <<-'EOF'
echo -e "Fetching details: \n"
set -vx
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
ssh <tmpScript 2>&1 -q -T username@userinputserverhost | tee -a ${LogFile}
Now you have an exact copy of what was issued for debugging.
Thanks Paul for spending time and coming up with suggestions/solutions.
I managed to get it working a couple of days ago. I would have been happy to say that your solution worked 100%, but I'm still satisfied that I sorted it out on my own, as it helped me learn some new stuff.
FYI - grep -A 5 -s "${ID}" "${dir}"/archive/*.log - this will work, but only by using the shell built-in 'declare -p' to declare the variables within the EOF block. Also, I read that it is recommended to use an unquoted EOF, as it allows variable expansion to be carried to the remote hosts without any trouble.
The piece of code below works for me in bash:
ssh -q -T username@userinputserverhost <<-EOF 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
$(declare -p ID)
$(declare -p dir)
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF

shell script - run list of commands

for i in `cat foo.txt`
do
$i
done
And I have an input file "foo.txt" with a list of commands:
ls -ltr | tail
ps -ef | tail
mysql -e STATUS | grep "^Uptime"
When I run the shell script, it executes, but splits the commands in each line at spaces; i.e., for the first line it executes only "ls", then "-ltr", for which I get a "command not found" error.
How can I run each line as one command?
Why am I doing this?
I execute a lot of arbitrary shell commands, including DB commands. I need error handling as I execute each command (each line from foo.txt). I can't anticipate everything that can go wrong, so the idea is to put all the commands in order, call them in a loop, check for an error ($?) at each line, and stop on error.
Why not just do this?
set -e
. ./foo.txt
set -e causes the shell script to abort if a command exits with a non-zero exit code, and . ./foo.txt executes commands from foo.txt in the current shell.
But I guess I can't send a notification (email).
Sure you can. Just run the script in a subshell, and then respond to the result code:
#!/bin/sh
(
set -e
. ./foo.txt
)
if [ "$?" -ne 0 ]; then
echo "The world is on fire!" | mail -s 'Doom is upon us' you#youremail.com
fi
The code mentioned:
for i in `cat foo.txt`
do
$i
done
Please use https://www.shellcheck.net/
This will result in:
$ shellcheck myscript
Line 1:
for i in `cat foo.txt`
^-- SC2148: Tips depend on target shell and yours is unknown. Add a shebang.
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
^-- SC2006: Use $(...) notation instead of legacy backticked `...`.
Did you mean: (apply this, apply all SC2006)
for i in $(cat foo.txt)
$
Let's try a while loop instead; for testing purposes, the content of foo.txt is shown below:
cat foo.txt
ls -l /tmp/test
ABC
pwd
while read -r line; do $line; if [ "$?" -ne 0 ]; then echo "Send email Notification stating $line Command reported error "; fi; done < foo.txt
total 0
-rw-r--r--. 1 root root 0 Dec 24 11:41 test.txt
bash: ABC: command not found...
Send email Notification stating ABC Command reported error
/tmp
If an error is reported, you can break out of the loop.
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_05.html
while read -r line; do $line; if [ "$?" -ne 0 ]; then echo "Send email Notification stating $line Command reported error "; break; fi; done < foo.txt
total 0
-rw-r--r--. 1 root root 0 Dec 24 11:41 test.txt
bash: ABC: command not found...
Send email Notification stating ABC Command reported error
while read -r line; do eval $line; if [ "$?" -ne 0 ]; then echo "Send email Notification stating $line Command reported error "; break; fi; done < foo.txt
total 0
-rw-r--r--. 1 root root 0 Dec 24 11:41 test.txt
bash: ABC: command not found...
Send email Notification stating ABC Command reported error
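Putting it together, a commented sketch of the loop as a standalone script (the mail address and subject are placeholders):
#!/bin/bash
# Run each line of foo.txt as a complete command; eval keeps pipes,
# quotes and redirections intact instead of splitting on whitespace.
while read -r line; do
    [ -z "$line" ] && continue          # skip empty lines
    if ! eval "$line"; then
        echo "Command reported error: $line" | mail -s "Command reported error" you@example.com
        break                            # stop at the first failure
    fi
done < foo.txt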

false | true; echo $? [duplicate]

I currently have a script that does something like
./a | ./b | ./c
I want to modify it so that if any of a, b, or c exit with an error code I print an error message and stop instead of piping bad output forward.
What would be the simplest/cleanest way to do so?
In bash you can use set -e and set -o pipefail at the beginning of your file. A subsequent command ./a | ./b | ./c will fail when any of the three scripts fails. The return code will be the return code of the rightmost script that failed.
Note that pipefail isn't available in standard sh.
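A minimal sketch of that approach:
#!/bin/bash
set -e           # abort the script as soon as any command fails
set -o pipefail  # make a pipeline fail if any stage fails, not just the last one
./a | ./b | ./c            # the script stops here if any of the three fails
echo "pipeline succeeded"  # only reached when all stages exit 0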
You can also check the ${PIPESTATUS[@]} array after the full execution, e.g. if you run:
./a | ./b | ./c
Then ${PIPESTATUS[@]} will be an array of exit codes from each command in the pipe, so if the middle command failed, echo "${PIPESTATUS[@]}" would print something like:
0 1 0
and something like this run after the command:
test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0
will allow you to check that all commands in the pipe succeeded.
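For example, a small sketch that reports every failed stage:
./a | ./b | ./c
status=("${PIPESTATUS[@]}")   # copy immediately; the next command overwrites PIPESTATUS
for i in "${!status[@]}"; do
    if [ "${status[$i]}" -ne 0 ]; then
        echo "stage $i exited with ${status[$i]}" >&2
    fi
done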
If you really don't want the second command to proceed until the first is known to be successful, then you probably need to use temporary files. The simple version of that is:
tmp=${TMPDIR:-/tmp}/mine.$$
if ./a > $tmp.1
then
    if ./b <$tmp.1 >$tmp.2
    then
        if ./c <$tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi
rm -f $tmp.[12]
The '1>&2' redirection can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1' so I've used that unambiguous notation for reliability for ages.
This leaks files if you interrupt something. Bomb-proof (more or less) shell programming uses:
tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
...if statement as before...
rm -f $tmp.[12]
trap 0 1 2 3 13 15
The first trap line says: run the commands rm -f $tmp.[12]; exit 1 when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE, or 15 SIGTERM occurs, or on 0 (when the shell exits for any reason).
If you're writing a shell script, the final trap only needs to remove the trap on 0, which is the shell exit trap (you can leave the other signals in place since the process is about to terminate anyway).
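In that case the tail of the script only needs:
rm -f $tmp.[12]
trap 0    # reset just the EXIT trap; the signal traps can stay, since the script is finishing anyway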
In the original pipeline, it is feasible for 'c' to be reading data from 'b' before 'a' has finished - this is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, then this won't apply - 'b' has to see all its input before it can generate any of its output.
If you want to detect which command(s) fail, you can use:
(./a || echo "./a exited with $?" 1>&2) |
(./b || echo "./b exited with $?" 1>&2) |
(./c || echo "./c exited with $?" 1>&2)
This is simple and symmetric - it is trivial to extend to a 4-part or N-part pipeline.
Simple experimentation with 'set -e' didn't help.
Unfortunately, the answer by Johnathan requires temporary files, and the answers by Michel and Imron require bash (even though this question is tagged shell). As pointed out by others already, it is not possible to abort the pipe before later processes are started. All processes are started at once and will thus all run before any errors can be communicated. But the title of the question was also asking about error codes. These can be retrieved and investigated after the pipe has finished, to figure out whether any of the involved processes failed.
Here is a solution that catches all errors in the pipe and not only errors of the last component. So this is like bash's pipefail, just more powerful in the sense that you can retrieve all the error codes.
res=$( { (./a 2>&1 || echo "1st failed with $?" >&2) |
        (./b 2>&1 || echo "2nd failed with $?" >&2) |
        (./c 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 >/dev/null )
if [ -n "$res" ]; then
    echo pipe failed
fi
To detect whether anything failed, an echo command prints on standard error in case any command fails. Then the combined standard error output is saved in $res and investigated later. This is also why standard error of all processes is redirected to standard output. You can also send that output to /dev/null, or leave it as yet another indicator that something went wrong. You can replace the last redirect to /dev/null with a file if you need to store the output of the last command anywhere.
To play more with this construct and to convince yourself that this really does what it should, I replaced ./a, ./b and ./c by subshells which execute echo, cat and exit. You can use this to check that this construct really forwards all the output from one process to another and that the error codes get recorded correctly.
res=$( (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
(sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
(sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2) > /dev/null 2>&1)
if [ -n "$res" ]; then
echo pipe failed
fi
This answer is in the spirit of the accepted answer, but using shell variables instead of temporary files.
if TMP_A="$(./a)"
then
if TMP_B="$(echo "TMP_A" | ./b)"
then
if TMP_C="$(echo "TMP_B" | ./c)"
then
echo "$TMP_C"
else
echo "./c failed"
fi
else
echo "./b failed"
fi
else
echo "./a failed"
fi

stop bash script from outputting in terminal

I believe I have everything set up correctly for my if/else statement; however, it keeps outputting content into my shell terminal as if I ran the command myself. Is there any way I can suppress this, so I can run these commands without them populating my terminal with text from the results?
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
echo "Process is running." &
echo $!
else
echo "Process is not running... Starting..."
python likebot.py &
echo $!
fi
Here is what the output looks like a few minutes after running my bash script
[~]# sh check.sh
Process is not running... Starting...
12359
[~]# Your account has been rated. Sleeping on kranze for 1 minute(s). Liked 0 photo(s)...
Your account has been rated. Sleeping on kranze for 2 minute(s). Liked 0 photo(s)...
If you want to redirect output from within the shell script, you use exec:
exec 1>/dev/null 2>&1
This will redirect everything from now on. If you want to output to a log:
exec 1>/tmp/logfile 2>&1
To append a log:
exec 1>>/tmp/logfile 2>&1
To backup your handles so you can restore them:
exec 3>&1 4>&2
exec 1>/dev/null 2>&1
# Do some stuff
# Restore descriptors
exec 1>&3 2>&4
# Close the descriptors.
exec 3>&- 4>&-
If there is a particular section of a script you want to silence:
#!/bin/bash
echo Hey, check me out, I can make noise!
{
echo Thats not fair, I am being silenced!
mv -v /tmp/a /tmp/b
echo Me too.
} 1>/dev/null 2>&1
If you want to redirect the "normal (stdout)" output use >/dev/null if you also want to redirect the error output as well use 2>&1 >/dev/null
eg
$ command 2>&1 >/dev/null
I think you have to redirect STDOUT (and maybe STDERR) of the python interpreter:
...
echo "Process is not running... Starting..."
python likebot.py >/dev/null 2>&1 &
...
For further details, please have a look at Bash IO-Redirection.
Hope that helped a bit.
You have two options:
You can redirect standard output to a log file using > /path/to/file
You can redirect standard output to /dev/null to get rid of it completely using > /dev/null
If you want error output redirected as well, use &>.
Also, not relevant to this particular example, but some bash commands support a 'quiet' or 'silent' flag.
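For instance, grep has a -q flag that suppresses all of its output and communicates only through its exit status:
# -q makes grep print nothing; the if statement only looks at the exit status.
if ps cax | grep -q python; then
    echo "Process is running."
else
    echo "Process is not running... Starting..."
fi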
Append >> /path/to/outputfile/outputfile.txt to the end of every echo statement
echo "Process is running." >> /path/to/outputfile/outputfile.txt
Alternatively, send the output to the file when you run the script from the shell
[~]# sh check.sh >> /path/to/outputfile/outputfile.txt
