I am working on a shell script with exiftool to automatically change some exif tags on pictures contained in a certain folder and I would like to use the output to get a notification on my NAS (a QNAP) when the job is completed.
Everything works already, but - as the notification system truncates the message - I would like to receive just the information I need, i.e. the last line of the shell output. The end of the output currently looks, for example, like this:
Warning: [minor] Entries in IFD0 were out of sequence. Fixed. - 2015-07-12 15.41.06.jpg
4512 files failed condition
177 image files updated
The problem is that currently I only receive the following notification:
Exiftool cronjob completed on Camera: 4512 files failed condition
What I would like to get instead is:
Exiftool cronjob completed on Camera: 177 image files updated
The script is the following:
#!/bin/sh
# exiftool script for 2002 problem
dir="/share/Multimedia/Camera"
cd "$dir"
FOLDER="$(printf '%s\n' "${PWD##*/}")"
OUTPUT="$(exiftool -overwrite_original -r '-CreateDate<DateTimeOriginal' -if '$CreateDate eq "2002:12:08 12:00:00"' -if '$DateTimeOriginal ne $CreateDate' *.[Jj][Pp][Gg])"
/sbin/notice_log_tool -a "Exiftool cronjob completed on ${FOLDER}: ${OUTPUT}" --severity=5
exit 0
To do that I played with the $OUTPUT variable using | tail -1, but I am probably making some basic error, because I receive something like:
Exiftool cronjob completed on Camera: 4512 files failed condition | tail -1
What is the right way to do this? Thanks
Put the tail inside the capturing parens.
OUTPUT=$(exif ... | tail -1)
You don't need the double quotes here. I'm guessing that you tried
OUTPUT="$(exif ...) | tail -1"
Probably an old post to be answering now, but try using the -n flag (see tail --help) and wrap the command in backticks.
OUTPUT=`exif ... | tail -n 1`
(user464502's answer did not work for me as the tail command does not recognize the parameter "-1")
Related
I use tail -f to show the contents of a logfile.
What I want is when the logfile content changes, instead of appending the new lines to my screen, only the newly added lines should be shown on my screen.
So as if a clearscreen was made every time before printing the new lines.
I tried to find a solution by web search but couldn't find anything useful.
edit:
In my case it happens that several lines will be added at once (it is a php error logfile). So I am looking for a solution where more than the single last line can be shown on screen.
The watch command in combination with tail shows the last line of a log file, refreshing every 2 seconds by default. It doesn't refresh the moment a new line is appended to the log file, but since you can specify the interval it might still help for your use case.
watch -t tail -1 <path_to_logfile>
If you need a faster interval, like every 0.5 seconds, you can specify it with the -n option, i.e.:
watch -t -n 0.5 tail -1 <path_to_logfile>
Try
$ watch 'tac FILE | grep -m1 -C2 PATTERN | tac'
where
PATTERN is any keyword (or regexp) to identify errors you seek in the log,
tac prints the lines in reverse,
-m is a max count of matching lines to grep,
-C is any number of lines of context (before and after the match) to show (optional).
That would be similar to
$ tail -f FILE | grep -C2 PATTERN
if you didn't mind just appending occurrences to the output in real-time.
But if you don't know any generic PATTERN to look for at all,
you'd have to just follow all the updates as the logfile grows:
$ tail -n0 -f FILE
Or even, create a copy of the logfile and then do a diff:
Copy: cp file.log{,.old}
Refresh the webpage with your .php code (or whatever, to trigger the error)
Run: diff file.log{,.old}
(or, if you prefer sort to diff: $ sort file.log{,.old} | uniq -u)
The curly braces are shorthand for both filenames (see Brace Expansion in $ man bash)
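For repeated use, those copy-and-diff steps could be wrapped in a small helper function, for example (a sketch; the name newlines is made up, and it assumes the log only ever grows by appending):
newlines() {
    local log=$1
    touch "$log.old"                          # first run: nothing to compare against yet
    diff "$log.old" "$log" | sed -n 's/^> //p' # print only the lines added since last time
    cp "$log" "$log.old"                      # remember the current state for the next call
}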
If you must avoid any temp copies, store the line count in memory:
z=$(grep -c ^ file.log)
Refresh the webpage to trigger an error
tail -n +$((z + 1)) file.log
The latter approach can be built upon, to create a custom scripting solution more suitable for your needs (check timestamps, clear screen, filter specific errors, etc). For example, to only show the lines that belong to the last error message in the log file updated in real-time:
$ clear; z=$(grep -c ^ FILE); while true; do d=$(date -r FILE); sleep 1; b=$(date -r FILE); if [ "$d" != "$b" ]; then clear; tail -n +$((z + 1)) FILE; z=$(grep -c ^ FILE); fi; done
where
FILE is, obviously, your log file name;
grep -c ^ FILE counts all lines in the file (almost, but not entirely, unlike cat FILE|wc -l, which only counts newlines and so misses a final line without a trailing newline);
sleep 1 sets the pause/delay between checking the file timestamps to 1 second, but you could change it to even a floating point number (the less the interval, the higher the CPU usage).
To simplify any repetitive invocations in future, you could save this compound command in a Bash script that could take a target logfile name as an argument, or define a shell function, or create an alias in your shell, or just reverse-search your bash history with CTRL+R. Hope it helps!
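For instance, saved as a small script that takes the logfile as an argument, the compound command above might look like this (just a sketch; the script name is made up):
#!/bin/bash
# Usage (hypothetical name): ./follow-errors.sh /path/to/FILE
FILE=$1
clear
z=$(grep -c ^ "$FILE")                # remember how many lines the file has right now
while true; do
    d=$(date -r "$FILE")              # modification time before the pause
    sleep 1
    b=$(date -r "$FILE")              # modification time after the pause
    if [ "$d" != "$b" ]; then
        clear
        tail -n +$((z + 1)) "$FILE"   # print only the lines added since the last check
        z=$(grep -c ^ "$FILE")
    fi
done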
I've been trying to read, in real time, a log file which sometimes contains one or more different error messages. A colleague sent me a command that, once the word "error" appears, should update another log (a status log) with a specific number, in this case the number 2.
Here's the command:
$(tail -n0 -f /Users/user/Desktop/file | grep error | echo 2 > /Users/user/Desktop/error.log) &
The problem is: this command always updates error.log with the number 2, whether or not my file's last line contains the word "error". Moreover, if I try the following shorter command
$ tail -n0 -f /Users/user/Desktop/file | grep error
and echo a random word into my file, that word still shows up in the output, so the pipeline definitely isn't filtering new lines in real time. Can you help?
Thanks
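For what it's worth, such a watcher is usually restructured so that a read loop consumes grep's matches, since echo never reads its standard input (which is why the original pipeline writes 2 immediately when it starts). A sketch only, assuming your grep supports --line-buffered:
tail -n0 -f /Users/user/Desktop/file \
  | grep --line-buffered error \
  | while read -r line; do
        echo 2 > /Users/user/Desktop/error.log   # runs once for every matching line
    done &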
I am trying to prepend a message to the output of rsstail, this is what I have right now:
rsstail -o -i 15 --initial 0 http://feeds.bbci.co.uk/news/world/europe/rss.xml | awk -v time=$( date +\[%H:%M:%S_%d/%m/%Y\] ) '{print time,$0}' | tee someFile.txt
which should give me the following:
[23:46:49_23/10/2014] Title: someTitle
After the command I have a | while read line; do ... done which never gets called, because the above command does not output a single thing. What am I doing wrong?
PS: I am using the python version of rsstail, since the other one kept on crashing (https://github.com/gvalkov/rsstail.py)
EDIT:
As requested in the comments the command:
rsstail -o -i 15 --initial 0 http://feeds.bbci.co.uk/news/world/europe/rss.xml
Will give back a message like the following when a new article is found
Title: Sweden calls off search for sub
It seems that my rsstail is different from yours, but mine supports the option
-Z x add heading 'x'
so that
rsstail -Z"$( date +\[%H:%M:%S_%d/%m/%Y\] ) " ...
does the job without awk. On the other hand, you do seem to have a buffering problem; is it possible to ask rsstail to stop after a given number of titles?
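If the awk version is preferred, one common workaround for the buffering is to flush awk's output after every line; a sketch (fflush() is available in gawk and mawk, treat it as an assumption for other awks, and note that rsstail itself may still buffer when writing into a pipe):
rsstail -o -i 15 --initial 0 http://feeds.bbci.co.uk/news/world/europe/rss.xml \
  | awk -v time="$( date +\[%H:%M:%S_%d/%m/%Y\] )" '{ print time, $0; fflush() }' \
  | tee someFile.txt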
I'm testing mobile Android devices and I would like to redirect the device log on a file whose name indicates both the date and time of my test, and the device model that is being tested.
For the first issue, I have already resolved with
now=$(date +"%b_%d_%Y_%k_%M");adb logcat -c;adb logcat|tee $now
So:
$ echo $now
Jan_03_2012_13_09
and the tee command creates a file with this filename.
As for the device model I have written two bash lines that obtain it from adb shell, namely
device=$(adb shell cat /system/build.prop | grep "^ro.product.device=")
deviceshortname=$(echo $device | sed 's/ro.product.device=//g')
(not optimal, as I am not very good at bash programming... :) but I manage to get
$ echo $deviceshortname
LT15i
My problem is how to combine $now and $deviceshortname to obtain a filename such as:
LT15i_Jan_03_2012_13_19
I tried to set another variable:
filename=($(echo $deviceshortname"_"$now))
and got:
$ echo $filename
LT15i_Jan_03_2012_13_19
but if I try redirecting the log:
$ adb logcat | tee $filename
I obtain such file:
-rw-r--r--+ 1 ele None 293 Jan 3 13:21 ?[01;31m?[K?[m?[KLT15i_Jan_03_2012_13_19
I don't know why these strange characters and what I'm doing wrong.
Something is adding color to your output. It might be grep(1), it might be adb, it might be baked into the /system/build.prop resource that you're reading.
If you're lucky, it is being added by grep(1) -- because that is supremely easy to disable with --color=no:
device=$(adb shell cat /system/build.prop | grep --color=no "^ro.product.device=")
deviceshortname=$(echo $device | sed 's/ro.product.device=//g')
If the colors are being added by adb, then perhaps it has a command line option that asks it to avoid colorizing the output.
If the colors are hard-coded into the /system/build.prop resource in some way, then you'll need some little tool that filters out the color codes. I don't have one handy (and it's bedtime) but you can probably build one starting with tr(1) to delete \033 ASCII ESC characters.
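For example, a sed-based filter (instead of tr) that strips the colour escape sequences could look like this; the \x1b escape for ESC is a GNU sed feature, so treat this as a sketch:
adb shell cat /system/build.prop \
  | grep "^ro.product.device=" \
  | sed 's/\x1b\[[0-9;]*[mK]//g'        # drops sequences like ESC[01;31m and ESC[K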
Looks like an ANSI sequence used by adb to color the output.
I'm not sure if I'm missing something, but this works for me
p1=foo
p2=$(date +%d_%m_%Y)
cat sample_file.txt | tee $p1"_"$p2
Just type: echo ${deviceshortname}_${now} and it will do the trick.
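Putting the pieces from the question together, the whole thing might look something like this (a sketch with quoting added; --color=no assumes the colour really does come from grep, as discussed above):
device=$(adb shell cat /system/build.prop | grep --color=no "^ro.product.device=")
deviceshortname=$(echo "$device" | sed 's/ro.product.device=//g')
now=$(date +"%b_%d_%Y_%k_%M")
filename="${deviceshortname}_${now}"
adb logcat -c
adb logcat | tee "$filename"            # e.g. LT15i_Jan_03_2012_13_19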
How to run the first process from a list of processes stored in a file and immediately delete the first line, as if the file were a queue and I called "pop"?
I'd like to call the first command listed in a simple text file with \n as the separator in a pop-like fashion:
Figure 1:
cmdqueue.lst :
proc_C1
proc_C2
proc_C3
.
.
Figure 2:
Pop the first command via popcmd:
proc_A | proc_B | popcmd cmdqueue.lst | proc_D
Figure 3:
cmdqueue.lst :
proc_C2
proc_C3
proc_C4
.
.
Ooh, that's an amusing one-liner.
Okay, here's the deal. What you want is a program that, when called, prints the first line of the file to stdout, then deletes that line from the file. Sounds like a job for sed(1).
Try
proc_A | proc_B | `(head -1 cmdstack.lst; sed -i -e '1d' cmdstack.lst)` | proc_D
I'm sure that someone who had already had their coffee could change the sed program to not need the head(1) call, but that works, and shows off using a subshell ("( foo )" runs in a sub-process.)
pop-cmd.py:
#!/usr/bin/env python
import os, shlex, sys
from subprocess import call
filename = sys.argv[1]
lines = open(filename).readlines()
if lines:
    command = lines[0].rstrip()
    open(filename, "w").writelines(lines[1:])
    if command:
        sys.exit(call(shlex.split(command) + sys.argv[2:]))
Example:
proc_A | proc_B | python pop-cmd.py cmdstack.lst | proc_D
I assume that you are also constantly appending to the file, so rewriting it puts you in danger of overwriting data. For this type of task I think you would be better off using individual files for each queue entry, using date/time to determine order; then, as you process each file, you could append its data to a log file and delete the trigger file.
Really need more information in order to suggest a good solution. It's important to know how the file is getting updated: is it a lot of separate processes, just one process, etc.?
I think you would need to rewrite the file - e.g. run a command to list all lines but the first, write that to a temporary file and rename it to the original. That could be done using tail or awk or perl depending on the commands you have available.
If you want to treat a file like a stack, then a better approach would be to have the top of the stack at the end of the file.
Thus you can easily cut off the file at the beginning of the last line (= pop), and simply append to the file as you push.
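A sketch of that pop, assuming GNU sed for the in-place edit:
cmd=$(tail -n 1 cmdqueue.lst)   # peek at the top of the stack (the last line)
sed -i -e '$d' cmdqueue.lst     # cut that line off the end of the file
$cmd                            # run the popped command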
You can use a little bash script; name it "popcmd":
#!/bin/bash
cmd=`head -n 1 $1`
tail -n +2 $1 > ~tmp~
mv -f ~tmp~ $1
$cmd
edit: Using sed for the middle two lines, like Charlie Martin showed, is much more elegant, of course:
#!/bin/bash
cmd=`head -n 1 $1`
sed -i -e '1d' $1
$cmd
edit: You can use this exactly as in your example usage code:
proc_A | proc_B | popcmd cmdstack.lst | proc_D
You can't remove data from the beginning of a file, so cutting out line 1 would be a lot of work (rewriting the rest of the file, which isn't actually that much work for the programmer: it's what every other answer here has written for you :) ).
I'd recommend keeping the whole thing in memory and using a classic stack rather than a file.
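A sketch of that in-memory approach in bash (mapfile needs bash 4; the push/pop function names are made up for illustration):
#!/bin/bash
# Load the queue file into an array once, then treat the array as the stack.
mapfile -t stack < cmdqueue.lst

push() { stack+=("$*"); }              # push a command onto the top of the stack

pop() {                                # run and remove the command on top
    local n=${#stack[@]}
    [ "$n" -eq 0 ] && return 1         # nothing left to pop
    local cmd=${stack[n-1]}
    unset 'stack[n-1]'
    $cmd
}
Note that this runs the entries in LIFO order, in line with the stack recommendation; for the queue order shown in the question you would pop from index 0 instead.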