I have some sort of learning block for cron, and no matter what I read, I can never get a good understanding of it. I asked my webhost for help creating a cron job that runs a Python script every two hours.
This is what he sent back:
0 */2 * * * python /path/to/file.py >> /dev/null 2>&1
I get that the first bit is saying every hour evenly divisible by two, the second part is using python to execute my file, and the rest I don't really know.
The support guy sent me an email back saying
That means that stdout and stderr will be redirected nowhere to keep you clean of garbled messages, and command outputs if any (useful and common in cron).
To test script functionality, use the same without redirection.
Which makes sense, because I remember >> being used in the command prompt to write output to files. I still don't get two things though. First, what does 2>&1 do? And second, by redirection, is he talking about sending the output to /dev/null? If it didn't go there, and I did want to confirm it was working, where would it go?
2 represents the stderr stream, and it's saying to redirect it to the same place that stream 1 (stdout) is directed, which is /dev/null (sometimes referred to as the "bit bucket").
If you didn't want the output to go to /dev/null, you could put, for example, a filename there, and the output of stderr and stdout would go there.
Ex:
0 */2 * * * python /path/to/file.py >> your_filename 2>&1
Finally, the >> (as opposed to >) means append, so in the case of a filename, the output would be appended instead of overwriting the file. With /dev/null, it doesn't matter though since you are throwing away the output anyway.
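A quick way to see the difference for yourself in a shell (demo.log is just a scratch file name):

echo first  > demo.log    # demo.log now contains: first
echo second > demo.log    # > truncates: demo.log now contains only: second
echo third >> demo.log    # >> appends: demo.log now contains: second, third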
2>&1 redirects all error output to the same stream as the standard output (i.e. in your case to /dev/null = nowhere)
If you run python /path/to/file.py in a console window (i.e. removing the output redirection starting with >>) the output will be printed on your console (so you can read it visually)
Note: By default the output of cron jobs will be sent as an e-mail to the user owning the job. For that reason it is very common to always direct standard and error output to /dev/null.
>> is unnecessary there: /dev/null simply discards whatever is written to it, so it doesn't matter whether you use > or >>
2>&1 means send STDERR to the same place as STDOUT, i.e. /dev/null
The man page for cron explains what it does if you don't have the redirect; in general, it emails the admin with the output.
If you wanted to check it was working, you'd replace /dev/null with an actual file, say /tmp/log, and check that file. This is why there's a >> in the command: when logging to a real file, you want to append each time rather than overwrite it.
The >> appends standard output to /dev/null; the 2>&1 sends file descriptor 2 (standard error) to the same place that file descriptor 1 (standard output) is going.
The append is unusual but not actually harmful; you'd normally just write >. If you were dealing with real files instead of /dev/null, the append is probably better, so this reduces the chances of running into problems when you change from /dev/null to /tmp/cron/job.log or whatever.
Throwing away errors is not necessarily a good idea, but if the command is 'chatty', that output will typically end up in an email to the user whose cron job it is.
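If you want to silence routine chatter but still hear about failures, one variation (a sketch based on the question's job) is to discard only stdout; anything on stderr then still reaches you by cron's usual email:

0 */2 * * * python /path/to/file.py > /dev/null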
I've spent several hours on this: adding timestamps to the output lines of cronjobs, redirecting stdout and stderr to separate processes for that and then to separate files, and stumbling upon several pitfalls. So I've decided to share my knowledge as a recipe.
The code is the following, and I will explain its peculiarities below:
crontab sample
MAILTO=""
* * * * * ((/home/ilya/stdoutanderr.sh | ts "\%FT\%T\%z:" >> /srv/data/log/ilya-test-crontab-ts/stdout.log) 2>&1 | ts "\%FT\%T\%z:" >> /srv/data/log/ilya-test-crontab-ts/ts-stderr.log) 2>&1 | logger -t CRONOUT
Use crontab -e to edit your crontab.
First of all, the MAILTO="" line is used to suppress mailing the cron output. If you're not reading these mails, it's better to suppress them.
Then you see the sample cronjob, where stdoutanderr.sh is the cronjob itself; the rest is needed for timestamping, for logging stderr and stdout separately, and for sending any errors from the command invocation itself to syslog.
stdoutanderr.sh
#!/bin/bash
echo "Sample stdout at " $(date)
echo "Sample stderr at " $(date) >&2
As you can see, the sample job just outputs one line to stderr and another to stdout; nothing fancy here.
ts is a utility from the moreutils package; you need it installed to use this approach (or you can mock it as a function, as sketched below). The stdout of the job is piped into ts and then appended to the designated file. All this happens inside the inner parentheses, so what comes out of them is just the stderr.
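If moreutils isn't available, ts can be approximated with a small shell function defined in your script; this is an untested sketch that prefixes each input line with a date(1) timestamp built from the given format:

ts() {
  fmt="${1:-%FT%T%z:}"                  # default mirrors the format used above
  while IFS= read -r line; do
    printf '%s %s\n' "$(date "+$fmt")" "$line"
  done
}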
Here comes the first tricky part I didn't know about: one has to additionally escape % symbols in cronjob command lines, otherwise the rest of the line is treated as input to the command, so at best it won't run as intended. As you can see, each % symbol in the ts format is preceded by a backslash.
Otherwise, this time format has proven to be robust and informative, and produces lines like the following:
2021-12-02T12:25:01+0000: Sample stdout at Thu Dec 2 12:25:01 UTC 2021
One can play with the format and add subsecond resolution, or whatever one needs. Just remember to escape percent symbols in the crontab.
Then, for simplicity, we redirect stderr to stdout so it can be piped into another ts invocation and appended to another file. This is achieved by the 2>&1 redirection. Be warned, though, that cron runs the job with dash, not bash, and |& just won't work as expected, so use the long form of the redirection.
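To make the difference concrete (some_command stands in for the pipeline stage):

some_command 2>&1 | logger -t CRONOUT    # portable long form: works under dash and bash
some_command |& logger -t CRONOUT        # bash-only shorthand; misparsed by dash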
Then the outer parentheses are closed, and any remaining output, which should only exist if dash couldn't process the command at all, lands in cron.log or syslog under the CRONOUT tag. I haven't tested this part, though. If you left mails enabled and/or have other means of crontab logging, you probably don't need this step.
Have fun!
I want to know how I can see exactly what the cron jobs are doing on each execution. Where are the log files located? Or can I send the output to my email? I have set the email address to send the log when the cron job runs but I haven't received anything yet.
* * * * * myjob.sh >> /var/log/myjob.log 2>&1
will log all output from the cron job to /var/log/myjob.log
You might use mail to send emails. Most systems will send unhandled cron job output by email to root or the corresponding user.
By default cron logs to /var/log/syslog so you can see cron related entries by using:
grep CRON /var/log/syslog
https://askubuntu.com/questions/56683/where-is-the-cron-crontab-log
There are at least three different types of logging:
The logging BEFORE the program is executed, which only logs IF the cronjob TRIED to execute the command. That one is located in /var/log/syslog, as already mentioned by @Matthew Lock.
The logging of errors AFTER the program tried to execute, which can be sent to an email or to a file, as mentioned by @Spliffster. I prefer logging to a file, because email adds a NEW source of problems: checking whether email sending and reception are working properly. Sometimes they are, sometimes they aren't. For example, on a simple desktop machine where you are not interested in configuring an SMTP server, you will sometimes prefer logging to a file:
* * * * * COMMAND_ABSOLUTE_PATH > /ABSOLUTE_PATH_TO_LOG 2>&1
I would also consider checking the permissions of /ABSOLUTE_PATH_TO_LOG, and running the command with that user's permissions, just to verify whether permissions might be a potential source of problems while you test.
The logging of the program itself, with its own error-handling and logging for tracking purposes.
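For that third kind, a minimal sketch of a job that keeps its own timestamped log (the log path and /path/to/real_work are placeholders):

#!/bin/bash
# Hypothetical self-logging job: independent of what cron does with stdout/stderr.
LOGFILE=/var/log/myjob.log
log() { echo "$(date '+%F %T') $*" >> "$LOGFILE"; }

log "job started"
if /path/to/real_work; then     # stands in for the job's actual work
    log "job finished OK"
else
    log "job FAILED with status $?"
fi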
There are some common sources of problems with cronjobs:
* The ABSOLUTE PATH of the binary to be executed. When you run it from your shell it might work, but the cron process seems to use a different environment, and hence it doesn't always find binaries if you don't use the absolute path.
* The LIBRARIES used by a binary. This is more or less the same as the previous point: make sure that, if you simply put the NAME of the command, it refers to exactly the binary that uses the very same libraries, or better, check that the binary you refer to by absolute path is the very same one you use in the console directly. Binaries can be found using the locate command, for example:
$ locate python
Be sure that the binary you refer to is the very same binary you call in your shell, or simply test again in your shell using the absolute path that you plan to put in the cronjob.
Another common source of problems is the syntax in the cronjob. Remember that there are special characters for lists (commas), ranges (dashes), increments of ranges (slashes), and so on. Take a look:
http://www.softpanorama.org/Utilities/cron.shtml
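A few illustrative lines showing those characters in the minute and hour fields (the commands are placeholders):

0,30 * * * * /path/to/job      # list: at minutes 0 and 30
0 9-17 * * * /path/to/job      # range: on the hour from 09:00 to 17:00
*/15 * * * * /path/to/job      # step: every 15 minutes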
Here is my code:
* * * * * your_script_fullpath >> your_log_path 2>&1
On Ubuntu you can enable a cron.log file to contain just the CRON entries.
Uncomment the line that mentions cron in the /etc/rsyslog.d/50-default.conf file:
# Default rules for rsyslog.
#
# For more information see rsyslog.conf(5) and /etc/rsyslog.conf
#
# First some standard log files. Log by facility.
#
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
Save and close the file and then restart the rsyslog service:
sudo systemctl restart rsyslog
You can now see cron log entries in its own file:
sudo tail -f /var/log/cron.log
Sample outputs:
Jul 18 07:05:01 machine-host-name CRON[13638]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
However, you will not see more information about what scripts were actually run inside /etc/cron.daily or /etc/cron.hourly, unless those scripts direct output to the cron.log (or perhaps to some other log file).
If you want to verify that a crontab is running, without having to search for it in cron.log or syslog, create a crontab that redirects output to a log file of your choice, something like:
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
30 2 * * 1 /usr/local/sbin/certbot-auto renew >> /var/log/le-renew.log 2>&1
Steps taken from: https://www.cyberciti.biz/faq/howto-create-cron-log-file-to-log-crontab-logs-in-ubuntu-linux/
cron already sends the standard output and standard error of every job it runs by mail to the owner of the cron job.
You can use MAILTO=recipient in the crontab file to have the emails sent to a different account.
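For example, at the top of the crontab (the address is a placeholder):

MAILTO=ops@example.com
42 17 * * * /path/to/script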
For this to work, you need to have mail working properly. Delivering to a local mailbox is usually not a problem (in fact, chances are ls -l "$MAIL" will reveal that you have already been receiving some) but getting it off the box and out onto the internet requires the MTA (Postfix, Sendmail, what have you) to be properly configured to connect to the world.
If there is no output, no email will be generated.
A common arrangement is to redirect output to a file, in which case of course the cron daemon won't see the job return any output. A variant is to redirect standard output to a file (or write the script so it never prints anything - perhaps it stores results in a database instead, or performs maintenance tasks which simply don't output anything?) and only receive an email if there is an error message.
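A sketch of that variant (paths are placeholders): stdout is appended to a log, while anything on stderr still triggers cron's email.

42 17 * * * /path/to/script >>/var/log/script.log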
To redirect both output streams, the syntax is
42 17 * * * script >>stdout.log 2>>stderr.log
Notice how we append (double >>) instead of overwriting, so that any previous job's output is not replaced by the next one's.
As suggested in many answers here, you can have both output streams be sent to a single file; replace the second redirection with 2>&1 to say "standard error should go wherever standard output is going". (But I don't particularly endorse this practice. It mainly makes sense if you don't really expect anything on standard output, but may have overlooked something, perhaps coming from an external tool which is called from your script.)
cron jobs run in your home directory, so any relative file names should be relative to that. If you want to write outside of your home directory, you obviously need to separately make sure you have write access to that destination file.
A common antipattern is to redirect everything to /dev/null (and then ask Stack Overflow to help you figure out what went wrong when something is not working; but we can't see the lost output, either!)
From within your script, make sure to keep regular output (actual results, ideally in machine-readable form) and diagnostics (usually formatted for a human reader) separate. In a shell script,
echo "$results" # regular results go to stdout
echo "$0: something went wrong" >&2
Some platforms (and e.g. GNU Awk) allow you to use the file name /dev/stderr for error messages, but this is not properly portable; in Perl, warn and die print to standard error; in Python, write to sys.stderr, or use logging; in Ruby, try $stderr.puts. Notice also how error messages should include the name of the script which produced the diagnostic message.
Use the command crontab -e, and then edit the cron jobs as
* * * * * /path/file.sh > /pathToKeepLogs/logFileName.log 2>&1
Here, 2>&1 indicates that standard error (2>) is redirected to the same file descriptor that standard output (&1) points to.
If you'd still like to check your cron jobs, you should provide a valid email account when setting up the cron jobs in cPanel.
When you specify a valid email you will receive the output of the cron job that is executed. Thus you will be able to check it and make sure everything has been executed correctly. Note that you will not receive an email if there is no output from the cron job command.
Please bear in mind that you will receive an email for each of the executed cron jobs. This may flood your inbox if your crons run too often.
In case you're running some command with sudo, it won't be allowed: sudo needs a tty.
I need to start up a Golang web server and leave it running in the background from a bash script. If the script in question is syntactically correct (as it will be most of the time), this is simply a matter of issuing a
go run /path/to/index.go &
However, I have to allow for the possibility that index.go is somehow erroneous. I should explain that in Golang this can happen for something as "trivial" as importing a module that you then fail to use. In this case the go run /path/to/index.go bit will return an error message. In the terminal this would be something along the lines of
index.go:4:10: expected...
What I need to be able to do is to somehow change that command above so I can funnel any error messages into a file for examination at a later stage. I tried variants on go run /path/to/index.go >> errors.txt with the terminating & in different positions but to no avail.
I suspect that there is a bash way to do this by altering the priority of evaluation of the command via some judiciously used braces/brackets etc. However, that is way beyond my bash capabilities. I would be most obliged to anyone who might be able to help.
Update
A few minutes later... After a few more experiments I have found that this works
go run /path/to/index.go &> errors.txt &
Quite apart from the fact that I don't actually understand why it works, there remains the issue that it produces a 0-byte errors.txt file when the command runs to completion without Golang throwing up any error messages. Can someone shed light on what is going on and how it might be improved?
Taken from man bash.
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
Appending Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word.
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
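Applied to the question's command, the appending form also avoids truncating errors.txt when a later run succeeds (an untested sketch):

go run /path/to/index.go &>> errors.txt &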
Narūnas K's answer covers why the &> redirection works.
The reason why the file is created anyway is because the shell creates the file before it even runs the command in question.
You can see this by trying no-such-command > file.out and seeing that even though the shell errors because no-such-command doesn't exist the file gets created (using &> on that test will get the shell's error in the file).
This is why you can't do things like sed 'pattern' file > file to edit a file in place.
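The usual workaround is a temporary file (the sed expression is just a placeholder):

sed 's/old/new/' file > file.tmp && mv file.tmp file   # write elsewhere, then replace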
I'm using a script which calls another, like this:
# stuff...
OUT="$(./scriptB)"
# do stuff with the variable OUT
Basically, the scriptB script displays text over time: it displays a line, 2s later another, 3s later another, and so on.
With the snippet I use, I only get the first output of my command; I miss a lot.
How can I get the whole output, by capturing stdout for a given time? Something like:
begin capture
./scriptB
stop capture
I don't mind if the output is not shown on screen.
Thanks.
If I understand your question, then I believe you can use the tee command, like
./scriptB | tee $HOME/scriptB.log
It will display the stdout from scriptB and write stdout to the log file at the same time.
Some of your output seems to be coming on the STDERR stream. So we have to redirect that as needed. As in my comment, you can do
{ ./scriptB ; } > /tmp/scriptB.log 2>&1
Which can almost certainly be reduced to
./scriptB > /tmp/scriptB.log 2>&1
And in newer versions of bash, it can be further reduced to
./scriptB >& /tmp/scriptB.log
AND finally, as your original question involved storing the output to a variable, you can do
OUT=$(./scriptB 2>&1)
The notation 2>&1 says, take the file descriptor 2 of this process (STDERR) and tie it (&) into the file descriptor 1 of the process (STDOUT).
The alternate notation provided ( ... >& file) is shorthand for > file 2>&1.
Personally, I'd recommend using the 2>&1 syntax, as this is understood by all Bourne derived shells (not [t]csh).
As an aside, all processes by default have 3 file descriptors created when the process is created, 0=STDIN, 1=STDOUT, 2=STDERR. Manipulation of those streams is usually as simple as illustrated here. More advanced (rare) manipulations are possible. Post a separate question if you need to know more.
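As a tiny illustration of such manipulation (nothing you need for the case above):

exec 3>&1              # duplicate: fd 3 now points wherever STDOUT points
echo "via fd 3" >&3    # lands at the same destination as a plain echo
exec 3>&-              # close fd 3 again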
IHTH
I know I can save the result of a command to a variable using last_output=$(my_cmd) but what I'd really want is for $last_output to get updated every time I run a command. Is there a variable, zsh module, or plugin that I could install?
I guess my question is: does stdout get permanently written somewhere (at least before the next command)? That way I could manipulate the results of the previous command without having to re-run it. This would be really useful for commands that take a long time to run.
If you run the following:
exec > >(tee save.txt)
# ... stuff here...
exec >/dev/tty
...then your stdout for everything run between the two commands will go both to stdout, and to save.txt.
You could, of course, write a shell function which does this for you:
with_saved_output() {
  "$@" \
    2> >(tee "$HOME/.last-command.err" >&2) \
    | tee "$HOME/.last-command.out"
}
...and then use it at will:
with_saved_output some-command-here
...and zsh almost certainly will provide a mechanism to wrap interactively-entered commands. (In bash, which I can speak to more directly, you could do the same thing with a DEBUG trap).
However, even though you can, you shouldn't do this: When you split stdout and stderr into two streams, information about the exact ordering of writes is lost, even if those streams are recombined later.
Thus, the output
O: this is written to stdout first
E: this is written to stderr second
could become:
E: this is written to stderr second
O: this is written to stdout first
when these streams are individually passed through tee subprocesses to have copies written to disk. There are also buffering concerns created, and differences in behavior caused by software which checks whether it's outputting to a TTY and changes its behavior (for instance, software which turns color-coded output on when writing directly to console, and off when writing to a file or pipeline).
stdout is just a file handle that by default is connected to the console, but could be redirected.
yourcommand > save.txt
If you want to display the output on the console and save it to a file at the same time, you could pipe the output to tee, a command that writes everything it receives on stdin both to stdout and to a file of your choice:
yourcommand | tee save.txt