Continuously read the last line of a log file in a bash script

I have a log file in which new lines are continuously written.
I would like a bash script that continuously reads the last line of this log file, so that I can process the line (e.g. execute a specific command if the line contains the word "error").
I've tried:
while true
do
    if tail -n1 -f file.log | grep -q ERROR
    then
        echo "$(date) : ERROR detected"
    fi
done
But it's spamming:
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
(a new line is added every minute in this example)
How can I read only the last line without spamming the output?

I suggest this, with GNU grep (--line-buffered makes grep flush each matching line immediately instead of block-buffering it into the pipe):
tail -n1 -f file.log | grep --line-buffered ERROR | while read; do echo "$(date) : ERROR detected"; done

This is exactly why tail -f was invented:
tail -f <logfile>
will show the last lines of your logfile and keep following it, so you can see what gets added.
This can be combined with a grep:
tail -f <logfile> | grep <text_to_be_searched>
In your case:
tail -f file.log | grep "ERROR"
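If the goal is to run a specific command whenever an ERROR line appears, one common pattern looks like this (a sketch, not the only way; tail -F survives log rotation and -n0 skips lines already in the file):
tail -Fn0 file.log | while read -r line; do
    case "$line" in
        *ERROR*)
            echo "$(date) : ERROR detected"
            # place your specific command here (placeholder), e.g. acting on "$line"
            ;;
    esac
done
Because the while loop runs in a subshell of the pipeline, any variables it sets are not visible after the loop ends.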

Related

Looping with a specific step in a long datetime string in bash

I have a list of files with the substring YYYYMMDDHH in them (example: 2016112200 means 22 November 2016 at 00 hours). These files are: temp_2016102200.data, temp_2016102212.data, temp_2016102300.data, temp_2016102312.data, ..., temp_20170301.data. I also have another family of files with temp replaced by wind.
For each YYYYMMDDHH string I want to create a tar with the temp file and its corresponding wind file. I don't want this process to stop if one or both files are missing.
My idea was to loop in steps of 12 hours, but I am having problems specifying the date. I did b=$(date -d '2016111400' +'%Y%m%d%H'), but bash informs me that this is not a valid date...
Thanks.
It's not bash telling you the date format is wrong: date is telling you. Not everything you type is a bash command.
As Kamil comments, you have to split it up so that date can parse it. The YYYY-mm-dd HH:MM:SS format is parsable. Using bash parameter expansion to extract the relevant substrings:
$ d=2016111400
$ date -d "${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00"
Mon Nov 14 00:00:00 EST 2016
Now, when you want to add 12 hours, you have to be careful to do it in the right place in the datetime string: if you add a + character after the time, it will be parsed as a timezone offset, so put the relative part either first or between the date and the time.
$ date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00"
Mon Nov 14 12:00:00 EST 2016
As a loop, you could do:
d=2016111400
for ((i=1; i<=10; i++)); do
    # print this datetime
    date -d "${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00"
    # add 12 hours
    d=$( date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00" "+%Y%m%d%H" )
done
outputs:
Mon Nov 14 00:00:00 EST 2016
Mon Nov 14 12:00:00 EST 2016
Tue Nov 15 00:00:00 EST 2016
Tue Nov 15 12:00:00 EST 2016
Wed Nov 16 00:00:00 EST 2016
Wed Nov 16 12:00:00 EST 2016
Thu Nov 17 00:00:00 EST 2016
Thu Nov 17 12:00:00 EST 2016
Fri Nov 18 00:00:00 EST 2016
Fri Nov 18 12:00:00 EST 2016
OK, here is a "nicer" way to loop:
start=2019043000
end=2019050300
plus12hours() {
    local d=$1
    date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00" "+%Y%m%d%H"
}
for (( d = start; d <= end; d = $(plus12hours "$d") )); do
    printf "%d\t%s\n" "$d" "$(date -d "${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00")"
done
2019043000 Tue Apr 30 00:00:00 EDT 2019
2019043012 Tue Apr 30 12:00:00 EDT 2019
2019050100 Wed May 1 00:00:00 EDT 2019
2019050112 Wed May 1 12:00:00 EDT 2019
2019050200 Thu May 2 00:00:00 EDT 2019
2019050212 Thu May 2 12:00:00 EDT 2019
2019050300 Fri May 3 00:00:00 EDT 2019
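To tie this back to the tar task in the question, here is a minimal sketch reusing the plus12hours function above. It assumes file names of the form temp_YYYYMMDDHH.data and wind_YYYYMMDDHH.data, an archive name of my own choosing, and an example range; missing files are simply skipped rather than stopping the loop:
start=2016102200
end=2016112212
for (( d = start; d <= end; d = $(plus12hours "$d") )); do
    files=()
    for f in "temp_${d}.data" "wind_${d}.data"; do
        [ -e "$f" ] && files+=("$f")    # keep only the files that actually exist
    done
    # create a tar only when at least one of the pair is present
    if (( ${#files[@]} > 0 )); then
        tar -cf "archive_${d}.tar" "${files[@]}"
    fi
done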

awk : convert first line to first column and second line to second column

I have a file with data something like this :
{
MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018
}
I want the output as :
{
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
}
and so on. Any help is appreciated.
What I tried was:
cat <filename> | awk '{printf "%s" (NR%2==0? RS:FS), $1}'
Could you please try the following and let me know if it helps.
awk '/{/||/}/{print;next} /MAG/{val=$0;getline;print val OFS $0}' OFS=" | " Input_file
Solution with sed:
echo "MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018" | tr "\n" "|" | sed 's/|/ | /g' | sed -r 's/([^|]+\|[^|]+)\| /\1\n/g'
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
MAG 121/004 | Wed Mar 14 00:00:00 2018
Read and echo:
echo "MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018" | while read line ; do case $line in MAG*) echo -n $line "| " ;; *) echo $line ;; esac ; done
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
MAG 121/004 | Wed Mar 14 00:00:00 2018
The same code, formatted:
while read line
do
    case $line in
        MAG*) echo -n $line "| " ;;
        *) echo $line ;;
    esac
done
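For completeness, the same two-line pairing can be done with paste (a sketch of my own, treating the braces as decoration and assuming the data lines contain no | character):
grep -v '^[{}]' Input_file | paste -d'|' - - | sed 's/|/ | /'
For the sample above this again prints MAG 121/002 | Wed Mar 14 00:00:00 2018 and so on.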

Add seconds to current time in Bash [duplicate]

This question already has answers here: Bash script/command to print out date 5 min before/after (4 answers). Closed 5 years ago.
I want to add 10 seconds, 10 times, but I don't know how to add time to the value.
This is my code.
./time.sh
time=$(date)
counter=1
while [ $counter -le 10 ]
do
echo "$time"
time=$('$time + 10 seconds') //error occurred.
((counter++))
done
echo All done
Using GNU Date
Assuming GNU date, replace:
time=$('$time + 10 seconds')
with:
time=$(date -d "$time + 10 seconds")
Putting it all together, try:
$ cat a.sh
t=$(date)
counter=1
while [ "$counter" -le 10 ]
do
echo "$t"
t=$(date -d "$t + 10 seconds")
((counter++))
done
echo All done
(I renamed time to t because time is also a bash built-in command and it is best to avoid potential confusion.)
When run, the output looks like:
$ bash a.sh
Tue Jan 16 19:19:44 PST 2018
Tue Jan 16 19:19:54 PST 2018
Tue Jan 16 19:20:04 PST 2018
Tue Jan 16 19:20:14 PST 2018
Tue Jan 16 19:20:24 PST 2018
Tue Jan 16 19:20:34 PST 2018
Tue Jan 16 19:20:44 PST 2018
Tue Jan 16 19:20:54 PST 2018
Tue Jan 16 19:21:04 PST 2018
Tue Jan 16 19:21:14 PST 2018
All done
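As an aside (a variant of my own, still assuming GNU date), the same ten timestamps can be produced from a single fixed start time by using date's relative offsets, with no reassignment of t inside the loop:
start=$(date)
for i in {0..9}; do
    date -d "$start + $((10 * i)) seconds"
done
echo All done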
Using Bash (>4.2)
Recent versions of bash can format times without external utilities (printf's %(...)T specifier), so the calculation reduces to integer arithmetic on epoch seconds. Try:
$ cat b.sh
#!/bin/bash
printf -v t '%(%s)T' -1
counter=1
while [ "$counter" -le 10 ]
do
    ((t=t+10))
    printf '%(%c)T\n' "$t"
    ((counter++))
done
echo All done
Here, t is time since epoch in seconds.
When run, the output looks like:
$ bash b.sh
Tue 16 Jan 2018 07:31:44 PM PST
Tue 16 Jan 2018 07:31:54 PM PST
Tue 16 Jan 2018 07:32:04 PM PST
Tue 16 Jan 2018 07:32:14 PM PST
Tue 16 Jan 2018 07:32:24 PM PST
Tue 16 Jan 2018 07:32:34 PM PST
Tue 16 Jan 2018 07:32:44 PM PST
Tue 16 Jan 2018 07:32:54 PM PST
Tue 16 Jan 2018 07:33:04 PM PST
Tue 16 Jan 2018 07:33:14 PM PST
All done
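For reference, the same idea fits in a single C-style for loop, since t is just an integer number of epoch seconds (a compact sketch of my own, bash 4.2 or later):
#!/bin/bash
printf -v t '%(%s)T' -1               # current time as epoch seconds
for (( i = 1; i <= 10; i++ )); do
    printf '%(%c)T\n' "$(( t + 10 * i ))"
done
echo All done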

Export information from a text file in Linux with a bash script

I have this specific file:
Client 1: MAC 00:03:52:49:99:55
First : Fri Nov 7 09:50:11 2014
Last : Fri Nov 7 09:51:06 2014
--
Client 1: MAC 00:03:52:04:88:55
First : Fri Nov 7 09:51:44 2014
Last : Fri Nov 7 09:51:44 2014
--
Client 2: MAC 00:03:52:49:99:55
First : Fri Nov 7 10:50:10 2014
Last : Fri Nov 7 10:50:10 2014
--
Client 3: MAC 00:03:52:04:66:55
First : Fri Nov 7 09:51:30 2014
Last : Fri Nov 7 09:51:30 2014
--
From this file, which contains many duplicate entries, I'd like to create a new file like this:
00:03:52:49:99:55
First : Fri Nov 7 09:50:11 2014
Last : Fri Nov 7 09:51:06 2014
First : Fri Nov 7 09:50:11 2014
Last : Fri Nov 7 09:51:06 2014
00:03:52:04:88:55
First : Fri Nov 7 09:51:44 2014
Last : Fri Nov 7 09:51:44 2014
00:03:52:04:66:55
First : Fri Nov 7 09:51:30 2014
Last : Fri Nov 7 09:51:30 2014
How can I search the file with a bash script using a for loop? It is important that the loop does not create more than one entry per MAC address; the MACs should be unique.
Yes, I have been trying all day :/
#!/bin/bash
array=$(cat Kismet-20141107-09-48-19-1.nettxt | grep Client -A 3 | grep -v Manuf)
echo "Array size: ${#array[#]}"
echo "Array items:"
for item in ${array[*]}
do
    if [ $item -eq 3 ]; then
        echo "$array[$item]"
    fi
done
No, it's not a requirement to use bash. If you have other tools, I will try them!
Try this out:
Shell> cat test1
#!/bin/bash
MACS=(`grep Client file|awk '{print $4}'|sort|uniq|xargs`)
for i in `echo ${MACS[*]}`; do
    echo $i
    grep $i file -A 2 | grep -vE 'MAC|--'
done
Shell> cat file
Client 1: MAC 00:03:52:49:99:55
First : Fri Nov 7 09:50:11 2014
Last : Fri Nov 7 09:51:06 2014
--
Client 1: MAC 00:03:52:04:88:55
First : Fri Nov 7 09:51:44 2014
Last : Fri Nov 7 09:51:44 2014
--
Client 2: MAC 00:03:52:49:99:55
First : Fri Nov 7 10:50:10 2014
Last : Fri Nov 7 10:50:10 2014
--
Client 3: MAC 00:03:52:04:66:55
First : Fri Nov 7 09:51:30 2014
Last : Fri Nov 7 09:51:30 2014
--
Output:
Shell> ./test1
00:03:52:04:66:55
First : Fri Nov 7 09:51:30 2014
Last : Fri Nov 7 09:51:30 2014
00:03:52:04:88:55
First : Fri Nov 7 09:51:44 2014
Last : Fri Nov 7 09:51:44 2014
00:03:52:49:99:55
First : Fri Nov 7 09:50:11 2014
Last : Fri Nov 7 09:51:06 2014
First : Fri Nov 7 10:50:10 2014
Last : Fri Nov 7 10:50:10 2014
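If an external tool is acceptable, a single-pass awk sketch (my own variant, assuming the simplified layout shown in file above) groups the First/Last lines under each unique MAC in order of first appearance:
awk '
    /^Client/ {                                   # "Client N: MAC xx:xx:xx:xx:xx:xx" header lines
        mac = $NF
        if (!seen[mac]++) order[++n] = mac        # remember each MAC once, in first-seen order
        cur = mac
        next
    }
    /^First|^Last/ { lines[cur] = lines[cur] $0 ORS }   # collect timestamp lines per MAC
    END {
        for (i = 1; i <= n; i++) {
            print order[i]
            printf "%s", lines[order[i]]
        }
    }
' file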

Cat a file, find epochs, and then convert the epochs to dates

I have a file with:
....1342477599376
1342479596867
1342480248580
1342480501995
1342481198309
1342492256524
1342506099378....
The ... in these lines stands for various other characters. I'd like to read this file with cat (it is essential that I use it), extract these lines with sed, and then convert the epochs to dates...
cat myfile.log | sed '...*//' | sed 's/...*//' | date -d #$1
Unfortunately this doesn't work.
One way, using sed:
cat file.txt | sed "s/^.*\([0-9]\{13\}\).*/date -d #\1/" | sh
Results:
Thu Jun 4 14:16:16 EST 44511
Sat Jun 27 17:07:47 EST 44511
Sun Jul 5 06:09:40 EST 44511
Wed Jul 8 04:33:15 EST 44511
Thu Jul 16 05:58:29 EST 44511
Sat Nov 21 05:42:04 EST 44511
Fri Apr 29 10:56:18 EST 44512
HTH
This is a similar solution, but it will find the timestamps anywhere in the stream:
cat test.txt | sed 's/^/echo "/; s/\([0-9]\{13\}\)/`date -d #\1`/; s/$/"/' | bash
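One caveat worth adding (my own observation, not part of the original answers): 13-digit values like these are usually millisecond epochs, which is why the dates above land in the year 44511. Keeping only the first 10 digits (i.e. dividing by 1000) gives plausible dates, for example:
sed -n 's/.*\([0-9]\{10\}\)[0-9]\{3\}.*/\1/p' file.txt | while read -r s; do
    date -d "@$s"                # the first value above then falls in July 2012
done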
