Log output and error from cron script to custom file with timestamp - bash

I'm trying to log both stdout and stderr from my cron task to a custom log file. If there is no output or error, I do not want to log anything. If there is either output or error, I want to log it to a new line and prefix it with the current timestamp.
I've tried the following
myscript 2>&1 | echo "$(cat -)" | ts >> cron.log
This gets me almost what I want. It will log both output and errors from myscript, prefix them with the current timestamp and put them on a new line. The problem is that if myscript produces no output, then because echo produces a new line, I'll get a log entry with just the timestamp on a new line.
I want to do this all on the cron line. I do not want to have to modify myscript.

I suggest using sed:
myscript 2>&1 | ts | sed '$a\' >> cron.log
This appends a \n at the end of the file only if it doesn't already end with a newline. -- l0b0
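The key property this relies on is that ts, being line-oriented, emits nothing for empty input. A quick sketch to check that behaviour, using a hypothetical emulate_ts function as a stand-in (the real ts comes from moreutils and may not be installed everywhere):

```shell
# emulate_ts is a made-up stand-in for moreutils' ts: it prefixes each
# incoming line with a timestamp, and emits nothing for an empty stream.
log=/tmp/cron_demo.log
: > "$log"

emulate_ts() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%b %d %H:%M:%S')" "$line"
    done
}

printf '' | emulate_ts >> "$log"               # silent script: nothing logged
echo "backup finished" | emulate_ts >> "$log"  # one timestamped entry

wc -l < "$log"    # 1
```

So with `myscript 2>&1 | ts >> cron.log` (no intermediate echo), a silent run adds no line at all.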

How to compare the last words of the lines in a file

I'm using a Raspberry Pi as a backup server. I use cron to run each backup job nightly and log the output to a file specific to each job. So each morning I have a bunch of log files (job1.log .. jobN.log). The log files are overwritten each time the job runs. I have another cron job (that runs after all the backup jobs) that sends me an email showing the last line of each log file. This all works as expected.
I'd like to be able to get a status in the subject of the email based on the last lines of the log files. When a backup job is successfully completed, the last line of the log file has some info followed by the word "completed" (which isn't included if the job fails). In my script that sends the email, I use "tail -1 >> summary.txt" for each log file, so summary.txt is a collection containing the last line of each logfile (and is included in the body of the email sent to me).
What I'd like to do is to check the last word of each line in summary.txt to see if all jobs completed successfully, and set the subject of the email appropriately (a simple "backup succeeded" or "backup failed" would be sufficient).
What would be the best way to do this? I know one possibility would be to use awk '{print $NF}' to get the last word of each line, but I'm not sure how to use that.
EDIT: As requested, here is the simplified code I'm currently using to send the "status" email to myself:
#!/bin/sh
tail -1 job1.log > summary.txt
tail -1 job2.log >> summary.txt
tail -1 job3.log >> summary.txt
mail -s "PI Backup Report" myemail@myhost < summary.txt
I know I could create an additional file with just the last lines by adding
awk '{print $NF}' summary.txt > results.txt
to the above script before the "mail" line, but then I still need to parse the results.txt file. How would I determine the status based on that file? Thanks again!
Measure total vs success lines in summary.txt.
xargs echo to trim excess whitespace from the result
grep with regex specifying the line should end in "completed"
wc -l for line count
Set the title using an if statement
TOTAL=$(wc -l < summary.txt | xargs echo)
SUCCESS=$(grep -e 'completed$' summary.txt | wc -l | xargs echo)
title=$(if [ "$TOTAL" = "$SUCCESS" ]; then echo 'All Succeeded'; else echo "$SUCCESS/$TOTAL succeeded"; fi)
echo "$title" # or pass into mail command as subject
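Put together with a synthetic summary.txt (the job names are from the question; the line contents are made up for illustration), the whole check looks like:

```shell
# Hypothetical last lines of three jobs; only job2 lacks the trailing "completed".
cat > /tmp/summary.txt <<'EOF'
job1: 120 files transferred, completed
job2: error: disk full
job3: 80 files transferred, completed
EOF

TOTAL=$(wc -l < /tmp/summary.txt | xargs echo)
SUCCESS=$(grep -e 'completed$' /tmp/summary.txt | wc -l | xargs echo)
if [ "$TOTAL" = "$SUCCESS" ]; then
    title='All Succeeded'
else
    title="$SUCCESS/$TOTAL succeeded"
fi
echo "$title"    # 2/3 succeeded
# mail -s "PI Backup Report: $title" myemail@myhost < /tmp/summary.txt
```

The anchored `completed$` ensures only lines that actually end in the word count as successes.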

How to separate month's worth of timestamped data by day

I have a .log file which restarts at the beginning of each month, each message beginning with the following timestamp format: 01-07-2016 00:00:00:868|
There are thousands of messages per day and I'd like to create a short script which can figure out when the date increments and output each date to a new file with just that day's data. I'm not proficient in bash but I'd like to use sed or awk, as it's very useful for automating processes at my job and creating reports.
The script below splits the input log file into multiple files, with the date added as a suffix to the input file name:
split_logfile_by_date
#!/bin/bash
exec < "$1"
while IFS= read -r line
do
  date=$(echo "$line" | cut -d" " -f 1)
  echo "$line" >> "$1.$date"
done
Example:
$ ls
log
$ split_logfile_by_date log
$ ls
log log.01-07-2016 log.02-07-2016 log.03-07-2016
awk '{log = FILENAME "." $1; print > log}' logfile
This will write all the 01-07-2016 records to the file logfile.01-07-2016
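One hedge on the awk one-liner: some awk implementations (mawk, for instance) cap the number of simultaneously open output files, so for a month's worth of dates a close() after each print is safer. A sketch, using a throwaway directory and made-up log lines:

```shell
# Throwaway working directory with a tiny sample log (timestamps made up).
d=/tmp/splitdemo; rm -rf "$d"; mkdir -p "$d"; cd "$d"
printf '%s\n' \
  '01-07-2016 00:00:00:868| start' \
  '01-07-2016 12:30:00:100| midday' \
  '02-07-2016 00:00:01:002| next day' > logfile

# >> appends across repeated opens; close() keeps at most one
# output file open at a time, avoiding FD-limit problems.
awk '{ f = FILENAME "." $1; print >> f; close(f) }' logfile

wc -l < logfile.01-07-2016    # 2
wc -l < logfile.02-07-2016    # 1
```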

Sending ANSI-Colored codes text to 3 outputs: screen, file and file filtering ANSI codes

My program sends color-coded text to a log file:
echo -e "\\033[38;5;2m-->\\033[0m Starting program." | tee $LogFile -a
Resulting in a perfectly colored log line, but I would like to simultaneously create another log file without ANSI codes, because I need to browse this log on Windows (I know there are some ANSI viewers for Windows, but I prefer to browse the log files using Total Commander, which has no decent plugin for that).
So, I need three outputs in my log line:
Some colored --> Starting program. line to screen.
The same colored --> Starting program. line to the logfile with ANSI codes.
The same --> Starting program. line, not colored, to another logfile (that is, without the ANSI codes).
The above line was fine thanks to the tee command, which solves points 1 and 2, but I don't know how to add an option to save to another file without the ANSI codes, short of replicating the full line.
Maybe using redirectors, file descriptor, with the help of mkfifo?
My workaround for now is duplicating the output, which gets a bit awkward:
echo -e "\\033[38;5;2m--> Starting program.\\033[0m" | tee $LogFile -a
echo "--> Starting program." >> $NotANSILogFile
To send the ansi codes to a log file, ansi.log, and to the screen, while also sending a non-ansi version to a log file called nonansi.log, use:
echo -e "\\033[38;5;2m-->\\033[0m Starting program." | tee -a ansi.log | tee /dev/tty | sed $'s/\E[^m]*m//g' >>nonansi.log
How it works
tee -a ansi.log
The first tee command appends the ANSI-encoded string to the log file ansi.log.
tee /dev/tty
The second tee command sends the ansi-encoded string to the screen. (/dev/tty is the file name of the current screen.)
sed $'s/\E[^m]*m//g' >>nonansi.log
The final command, sed removes the ansi sequences and the result is appended to nonansi.log.
The sed command is contained in a bash $'...' string so that the escape character can be represented simply as \E. This substitute command looks for sequences that start with escape, \E, and end with m and removes them. The final g tells sed to perform this substitution on every escape sequence on the line, not just the first one.
If you have GNU sed, an alternative is to use GNU's \o notation instead:
sed 's/\o033[^m]*m//g' >>nonansi.log
If you have neither GNU sed nor bash, then you need to see what facilities your sed or your shell provide for the Esc character.
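A quick sanity check of the stripping expression (this needs bash, since the $'…' quoting is what turns \E and \033 into the literal escape byte):

```shell
# Build a colored string, strip it, and show the plain result.
colored=$'\033[38;5;2m-->\033[0m Start.'
plain=$(printf '%s\n' "$colored" | sed $'s/\E[^m]*m//g')
printf '%s\n' "$plain"    # --> Start.
```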
Hiding the details in a shell function
If you need to make multiple log entries, it might be easiest to create a shell function, logger, to hold all the messy details:
logger() { echo -e "$*" | tee -a ansi.log | tee /dev/tty | sed $'s/\E[^m]*m//g' >>nonansi.log; }
With this function defined, then any log entry can be performed by providing the ansi-encoded string as an argument:
logger "\\033[38;5;2m-->\\033[0m Start."
This method (perl must be installed) sends output to the screen (ANSI) and to two files: $LogFile (ANSI) and $LogFile.plain (ANSI-free):
echo -e "\\033[38;5;2m--> Starting program.\\033[0m" | tee >(perl -pe 's/\e\[?.*?[\#-~]//g'>>"$LogFile".plain) $LogFile -a
Details:
The tee command splits output to both the screen (stdout) and the perl command.
The perl line filters out the ANSI codes.
The output of perl is redirected to the plain, non-ANSI log file (LogFile.plain is used as its filename).
To send output only to the files (not to the screen), append a classic >/dev/null:
echo -e "\\033[38;5;2m--> Starting program.\\033[0m" | tee >(perl -pe 's/\e\[?.*?[\#-~]//g'>>"$LogFile".plain) $LogFile -a >/dev/null
Notes:
Used >> instead of >, as expected for an incremental log file.
Used -a for the tee command for the same reason above.
I would prefer to use sed instead of perl, but all the sed examples I have found so far do not work. If someone knows one, please report.
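For what it's worth, one sed variant that can stand in for the perl filter, with caveats: the \x1b escape is a GNU sed extension (BSD sed needs the literal escape byte), and this pattern only strips SGR color sequences ending in m, not the broader set the perl regex covers:

```shell
# Demo file names are made up; tee -a appends the colored line while
# GNU sed strips the color sequences for the plain copy.
LogFile=/tmp/ansi_demo.log
: > "$LogFile"; : > "$LogFile".plain

printf '\033[38;5;2m--> Starting program.\033[0m\n' \
  | tee -a "$LogFile" \
  | sed 's/\x1b\[[0-9;]*m//g' >> "$LogFile".plain

cat "$LogFile".plain    # --> Starting program.
```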

File contents not deleted in Linux CentOS

I have the program running using this command
command 2> sample.txt
Now that file is growing continuously. The command will exit in 5-6 days, and I believe the file size won't reach GBs.
I tried
echo "" > sample.txt
but that's not making any difference and the file size keeps growing.
I was thinking of setting up a cron job to empty its contents every hour.
How can I empty the contents of the file?
Try the following command; it will write the console output to a file (your console will also get the messages printed). Because tee -a opens the file in append mode, emptying the file from outside actually works: every subsequent write lands at the new end of file instead of at the writer's old offset.
command | tee -a file.log
and you can empty the contents with
> file.log
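This is why the original `echo "" > sample.txt` appeared to do nothing: a writer opened with plain `>` or `2>` keeps its old file offset, so its next write re-creates the old size as a sparse hole, while an append-mode writer (`tee -a`, `>>`, `2>>`) always seeks to the current end first. A minimal sketch of the difference (the file names are made up):

```shell
# Writer holds the fd in append mode (like `tee -a` or `2>> log`):
a=/tmp/trunc_append.log
rm -f "$a"
{ echo "line 1"; : > "$a"; echo "line 2"; } >> "$a"
wc -c < "$a"    # 7: just "line 2\n" -- the truncation stuck

# Writer holds the fd in plain write mode (like `2> log`):
n=/tmp/trunc_plain.log
rm -f "$n"
{ echo "line 1"; : > "$n"; echo "line 2"; } > "$n"
wc -c < "$n"    # 14: a 7-byte NUL hole plus "line 2\n" -- size sprang back
```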

Bash: How to redirect the output of set of commands piped together to a file?

perf record | perf inject -b | perf report > tempfile 2>&1
I am running the above set of commands and trying to capture the output of each command to tempfile, but sometimes the output doesn't get fully appended to tempfile. To be more precise, I am running this command from a script, and I tried wrapping the pipeline in parentheses like
(perf record | perf inject -b | perf report) > tempfile 2>&1
but this also didn't work.
A pipe redirects the output of one program to another. To log the output to a file while also passing it on to the next program, use the tee command:
http://en.wikipedia.org/wiki/Tee_(command)
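For example, with hypothetical stand-ins for the perf stages (printf and tr here, since the real pipeline needs perf data), tee snapshots each intermediate stage to its own file while the final redirection still collects the end result:

```shell
# printf stands in for `perf record`, tr for `perf inject`/`perf report`;
# tee logs each stage's output and forwards it down the pipe unchanged.
printf 'cycles: 1234\n' \
  | tee /tmp/pipe_stage1.log \
  | tr 'a-z' 'A-Z' \
  | tee /tmp/pipe_stage2.log > /tmp/pipe_tempfile 2>&1

cat /tmp/pipe_tempfile    # CYCLES: 1234
```

With this shape you can inspect pipe_stage1.log afterwards to see whether an earlier stage, rather than the final redirection, is what dropped output.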
