Redirecting the output of a cron job - bash

I have the following entry in crontab:
0 5 * * * /bin/bash -l -c 'export RAILS_ENV=my_env; cd /my_folder; ./script/my_script.rb 2>&1 > ./log/my_log.log'
The result of this is that I am receiving the output of ./script/my_script.rb in ./log/my_log.log. This behavior is desired. What is curious is that I am also receiving the output in my local mail. I am wondering how the output of my script is being captured by mail. Since I am redirecting the output to a log file, I would expect that my cron job would have no output, and thus I would receive no mail when the cron job runs. Can anyone shed some light as to how mail is able to get the output of ./script/my_script.rb?

Your redirection order is incorrect. Stderr is not being redirected to the file, but is being sent to stdout. That's what you must be receiving in your mail.
Fix the redirection by changing your cron job to:
0 5 * * * /bin/bash -l -c 'export RAILS_ENV=my_env; cd /my_folder; ./script/my_script.rb > ./log/my_log.log 2>&1'
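Redirections are processed left to right, which is easy to verify with a throwaway function that writes to both streams (`emit` here is a made-up stand-in for the script):

```shell
# A throwaway command that writes to both streams:
emit() { echo "to stdout"; echo "to stderr" >&2; }

# Wrong order: 2>&1 duplicates stderr onto the CURRENT stdout (the terminal,
# or cron's mail pipe) BEFORE stdout is redirected, so stderr escapes the file.
emit 2>&1 > wrong.log     # "to stderr" still reaches the caller

# Right order: stdout goes to the file first, then stderr is duplicated onto it.
emit > right.log 2>&1     # both lines land in right.log
```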

Try swapping 2>&1 with > ./log/my_log.log.

Judging by this answer you just need to switch the order of the redirects:
0 5 * * * /bin/bash -l -c 'export RAILS_ENV=my_env; cd /my_folder; ./script/my_script.rb > ./log/my_log.log 2>&1'

How to check if the cronjob succeeded

I am trying to add a cronjob for patching and I just wanted to know if it has been successful.
I have performed the following:
echo "0 0 * * * root yum -d 0 -y update > /dev/null 2>&1 && shutdown -r +1" >> /etc/cron.d/patch
Now, when I look in /var/log/cron, I would expect all the cron jobs to be listed there, but I also cannot find any /var/log/syslog. How can I check whether my script file (added as patch under /etc/cron.d) has run successfully?
Thanks
"Edits to a user's crontab and the cron jobs run are all logged by default to /var/log/syslog and that's the first place to check if things are not running as you expect."
$ awk '/clearme/ { print $1 }' /var/log/syslog
(Note the single quotes; with double quotes the shell would expand $1 before awk ever sees it.)
Also, set the shebang (the first line):
#!/bin/bash
And ensure you use absolute paths to executables, e.g. a command like datetime should be written as /usr/bin/datetime.
So in your case, the command could be:
/usr/bin/yum -d 0 -y update > /dev/null 2>&1 && /usr/sbin/shutdown -r +1
Cron mails any output on stdout or stderr. To know whether a cron job succeeded or not, make the output dependent on success: all good → no output. Something fishy → echo fishy.
Don't write a long command with such logic in the crontab; put it in a script and start that script from cron.
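A minimal sketch of that advice (`cron_quiet` is a hypothetical helper name, not part of yum or cron): run the job silently, and produce output, hence mail, only on failure:

```shell
#!/bin/bash
# cron_quiet: run a command, stay silent on success, print everything on failure.
cron_quiet() {
    local output status
    output=$("$@" 2>&1)    # capture both streams
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "cron job failed (exit $status): $*"
        printf '%s\n' "$output"
    fi
    return "$status"
}

# In /etc/cron.d/patch this might become (assuming the function is saved
# as a standalone script at a path of your choosing):
# 0 0 * * * root /usr/local/bin/cron_quiet /usr/bin/yum -d 0 -y update && /usr/sbin/shutdown -r +1
```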

Run a bash script via cron and save variable to be used in the cronjob

I have the following cronjob running on RHEL 7.5:
5 12 * * * root /mydir/myscript.sh 2>&1 | mailx -s "My Script: myscript.sh has run" root@mycompany.com
The script myscript.sh basically will output a result at the end of it - 0 for Success and 1 for failures. This is stored in the variable $result.
My question: is it possible to have $result be read in the cronjob so I can change it to something like:
5 12 * * * root /mydir/myscript.sh 2>&1 | mailx -s "My Script: myscript.sh has run with error code $result" root@mycompany.com
This way I can tell from the subject whether the script has run successfully or not.
So far I haven't found a way to save $result into a variable that keeps its value and can be read by cron. Is this possible? (I'm sure some of you geniuses out there will have a solution!)
UPDATE:
I know that I can send an email from the script itself but there is a requirement that prevents me from doing this so it has to be done from the cronjob itself.
Thanks J
One way to do it is like this:
5 12 * * * /bin/bash -l -c 'result=$(/mydir/myscript.sh 2>&1); printf "%s\n" "$result" | mailx -s "My Script: myscript.sh has run with error code $result" root@mycompany.com'
Note the semicolon: the script's output is captured first, and only then piped to mailx, so $result is set when the subject is built.
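One pitfall to be aware of: piping the assignment itself (`result=$(...) | mailx ...`) would not work, because each stage of a pipeline runs in its own subshell, so the assignment never reaches the stage that builds the mail subject. A quick sketch, with `cat` standing in for mailx:

```shell
result=""
result=$(echo 0) | cat          # the assignment happens in a subshell...
echo "after pipe: '$result'"    # ...so result is still empty here

result=$(echo 0)                # sequence with ';' or a newline instead
echo "now captured: '$result'"  # result survives and can go into the subject
```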
Another way is just to update myscript to send the email

How to detect in bash script where stdout and stderr logs go?

I have a bash script called from cron multiple times with different parameters and redirecting their outputs to different logs approximately like this:
* * * * * /home/bob/bin/somescript.sh someparameters >> /home/bob/log/param1.log 2>&1
I need my script to obtain, in some variable, the value "/home/bob/log/param1.log" in this case. The log file name could just as well contain a calculated date instead of "param1". The main reason is to reuse the same script for similar purposes, and to be able to inform a user via a monitored folder where to look for more info: i.e. leave a warning file containing the log file name.
How do I detect to which log the output (&1 or both &1 and &2) goes?
If you are running Linux, you can read the information from the proc file system. Assume you have the following program in stdout.sh.
#! /bin/bash
readlink -f /proc/$$/fd/1 >&2
Interactively it shows your terminal.
$ ./stdout.sh
/dev/pts/0
And with a redirection it shows the destination.
$ ./stdout.sh > nix
/home/ceving/nix
At runtime, lsof (typically /usr/bin/lsof or /usr/sbin/lsof) reports the open files of a process, including where descriptors 1 and 2 point:
lsof -p $$ -a -d 1
lsof -p $$ -a -d 2
# -F n produces parseable output; strip everything up to the leading "n" marker
filename1=$(lsof -p $$ -a -d 1 -F n)
filename1=${filename1#*$'\n'n}
filename2=$(lsof -p $$ -a -d 2 -F n)
filename2=${filename2#*$'\n'n}
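Putting the /proc approach into a reusable form (a sketch, Linux-only; the function and variable names are my own):

```shell
#!/bin/bash
# Resolve where this shell's stdout and stderr currently point (Linux /proc).
where_logs() {
    stdout_target=$(readlink -f /proc/$$/fd/1)
    stderr_target=$(readlink -f /proc/$$/fd/2)
}

# somescript.sh runs with cron's redirection already applied, so calling
# where_logs with no redirection reports the log files cron gave us, e.g.:
#   where_logs
#   echo "see $stdout_target for details" > /monitored/warning.txt  # hypothetical path
```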

pidof in cron not finding process?

I'd like to restart my daemon if it's not running (crashed etc). inittab is not applicable for various reasons. This snippet works fine in bash but not from cron as it keeps starting multiple processes:
*/1 * * * * /bin/bash if [ ! $(pidof vzlogger) ]; then sudo vzlogger -d; fi;
Is the subshell "eating" the exit code of pidof? The alternative
*/1 * * * * /bin/bash if [ -z "$(pidof vzlogger)" ]; then sudo vzlogger -d; fi;
has the same problem: multiple processes.
The way to run Bash commands is not bash commands but bash -c 'commands'.
*/1 * * * * /bin/bash -c 'pidof vzlogger >/dev/null || sudo vzlogger -d'
Of course, the /1 is redundant, and you don't need Bash for any of this.
* * * * * pidof vzlogger >/dev/null || sudo vzlogger -d
The if test wasn't incorrect per se, but it can very often be avoided. pidof conveniently returns failure if it did not find a PID, and success otherwise, so you can use the shortcut syntax. (Most properly maintained Unix tools report this way.) Because the PID is no longer captured in a (superfluous) command substitution, we redirect the output from pidof to /dev/null; otherwise you will receive email from the cron daemon every time it succeeds and prints the PID.
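The same short-circuit idiom works with any check that reports through its exit status, which is easy to try without pidof (`check_then_act` is a made-up name for illustration):

```shell
# Run the "starter" branch only when the check command fails, mirroring
#   pidof vzlogger >/dev/null || sudo vzlogger -d
check_then_act() {
    "$@" >/dev/null 2>&1 || echo "starting daemon"
}
```

So `check_then_act true` prints nothing (the daemon is "running"), while `check_then_act false` triggers the start branch.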
/bin/bash if will search for a file named if in the current directory (which for a cron job is your home directory), and attempt to execute it as a Bash script.
You should have received an email from the cron daemon with an error message:
bash: if: No such file or directory

What is /dev/null 2>&1? [duplicate]

I found this piece of code in /etc/cron.daily/apf
#!/bin/bash
/etc/apf/apf -f >> /dev/null 2>&1
/etc/apf/apf -s >> /dev/null 2>&1
It's flushing and reloading the firewall.
I don't understand the >> /dev/null 2>&1 part.
What is the purpose of having this in the cron? It's overriding my firewall rules.
Can I safely remove this cron job?
>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.
(The >> seems sort of superfluous, since >> means append while > means truncate and write, and either appending to or writing to /dev/null has the same net effect. I usually just use > for that reason.)
2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
Let's break >> /dev/null 2>&1 statement into parts:
Part 1: >> output redirection
This is used to redirect the program's output, appending it to the end of the file.
Part 2: /dev/null special file
This is a Pseudo-devices special file.
Command ls -l /dev/null will give you details of this file:
crw-rw-rw-. 1 root root 1, 3 Mar 20 18:37 /dev/null
Did you notice the leading c in crw? It marks a character special file: a pseudo-device that is accessed as a stream of characters.
/dev/null accepts and discards all input; produces no output (always returns an end-of-file indication on a read). Reference: Wikipedia
Part 3: 2>&1 (Merges output from stream 2 with stream 1)
Whenever you execute a program, the operating system opens three files for it: standard input, standard output, and standard error. Whenever a file is opened, the kernel returns a non-negative integer called a file descriptor; for these three files the descriptors are 0, 1, and 2, respectively.
So 2>&1 simply says redirect standard error to standard output.
& means whatever follows is a file descriptor, not a filename.
In short, by using this command you are telling your program not to shout while executing.
What is the importance of using 2>&1?
Suppose you don't want any output in the terminal at all, even when the command fails. Consider the following example:
$ ls -l > /dev/null
For the above command, no output was printed in the terminal, but what if this command produces an error:
$ ls -l file_doesnot_exists > /dev/null
ls: cannot access file_doesnot_exists: No such file or directory
Even though I'm redirecting output to /dev/null, the error is still printed in the terminal, because we are not redirecting the error output to /dev/null. To redirect the error output as well, add 2>&1:
$ ls -l file_doesnot_exists > /dev/null 2>&1
This is the way to execute a program quietly, and hide all its output.
/dev/null is a special filesystem object that discards everything written into it. Redirecting a stream into it means hiding your program's output.
The 2>&1 part means "redirect the error stream into the output stream", so when you redirect the output stream, error stream gets redirected as well. Even if your program writes to stderr now, that output would be discarded as well.
Let me explain it bit by bit.
0,1,2
0: standard input
1: standard output
2: standard error
>>
>> in command >> /dev/null 2>&1 appends the command output to /dev/null.
command >> /dev/null 2>&1
After command:
command
=> 1 output on the terminal screen
=> 2 output on the terminal screen
After redirect:
command >> /dev/null
=> 1 output to /dev/null
=> 2 output on the terminal screen
After >> /dev/null 2>&1:
command >> /dev/null 2>&1
=> 1 output to /dev/null
=> 2 output is redirected to 1 which is now to /dev/null
/dev/null is a standard file that discards all you write to it, but reports that the write operation succeeded.
1 is standard output and 2 is standard error.
2>&1 redirects standard error to standard output. &1 indicates file descriptor (standard output), otherwise (if you use just 1) you will redirect standard error to a file named 1. [any command] >>/dev/null 2>&1 redirects all standard error to standard output, and writes all of that to /dev/null.
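The "file named 1" trap is easy to demonstrate:

```shell
# Without '&', the 1 is treated as a filename: this creates a file called "1".
sh -c 'echo oops >&2' 2>1
cat 1        # the error text went into the file "1"
rm 1

# With '&', fd 2 is duplicated onto fd 1 and no file named "1" appears;
# here the joined streams flow into the pipe together.
sh -c 'echo oops >&2' 2>&1 | grep -c oops
```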
I use >> /dev/null 2>&1 for a silent cronjob. A cronjob will do the job, but not send a report to my email.
As for removal: don't remove /dev/null itself. It's useful, especially when you run cPanel; it can be used for throw-away cronjob reports.
As described by the others, writing to /dev/null eliminates the output of a program. Usually cron sends an email for every output from the process started with a cronjob, so by writing the output to /dev/null you prevent being spammed if you have specified your address in the crontab.
Instead of using >/dev/null 2>&1, you could use: wget -O /dev/null -o /dev/null example.com
As another forum puts it: "Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all."
Another solution is: wget -q --spider mysite.com
https://serverfault.com/questions/619542/piping-wget-output-to-dev-null-in-cron/619546#619546
I normally use the command in connection with log files; the purpose is to catch any errors so I can evaluate/troubleshoot issues when running scripts on multiple servers simultaneously.
sh -vxe cmd > cmd.logfile 2>&1
Edit /etc/conf.apf. Set DEVEL_MODE="0". DEVEL_MODE set to 1 will add a cron job to stop apf after 5 minutes.
