I'm trying to search through some logs while grepping for a specific line. How can I reduce the following logs even further by date and time? For example, all lines between 2018/02/27 13:10:31 and 2018/02/27 13:17:34. I've tried using sed but I can't get it to work correctly on either date column.
grep "Eps=" file.log
INFO | jvm 3 | 2018/02/27 13:02:27 | [Tue Feb 27 13:02:27 EST 2018] [INFO ] {Eps=5618.819672131148, Evts=2077762260}
INFO | jvm 3 | 2018/02/27 13:03:27 | [Tue Feb 27 13:03:27 EST 2018] [INFO ] {Eps=5288.8, Evts=2078079588}
INFO | jvm 3 | 2018/02/27 13:04:27 | [Tue Feb 27 13:04:27 EST 2018] [INFO ] {Eps=5176.633333333333, Evts=2078390186}
INFO | jvm 3 | 2018/02/27 13:05:28 | [Tue Feb 27 13:05:28 EST 2018] [INFO ] {Eps=5031.633333333333, Evts=2078692084}
INFO | jvm 3 | 2018/02/27 13:06:28 | [Tue Feb 27 13:06:28 EST 2018] [INFO ] {Eps=5047.433333333333, Evts=2078994930}
INFO | jvm 3 | 2018/02/27 13:07:30 | [Tue Feb 27 13:07:29 EST 2018] [INFO ] {Eps=5314.183333333333, Evts=2079313781}
INFO | jvm 3 | 2018/02/27 13:08:31 | [Tue Feb 27 13:08:31 EST 2018] [INFO ] {Eps=5182.934426229508, Evts=2079629940}
INFO | jvm 3 | 2018/02/27 13:09:31 | [Tue Feb 27 13:09:31 EST 2018] [INFO ] {Eps=5143.459016393443, Evts=2079943691}
INFO | jvm 3 | 2018/02/27 13:10:31 | [Tue Feb 27 13:10:31 EST 2018] [INFO ] {Eps=5519.266666666666, Evts=2080274847}
INFO | jvm 3 | 2018/02/27 13:11:31 | [Tue Feb 27 13:11:31 EST 2018] [INFO ] {Eps=5342.8, Evts=2080595415}
INFO | jvm 3 | 2018/02/27 13:12:32 | [Tue Feb 27 13:12:32 EST 2018] [INFO ] {Eps=5230.183333333333, Evts=2080909226}
INFO | jvm 3 | 2018/02/27 13:13:32 | [Tue Feb 27 13:13:32 EST 2018] [INFO ] {Eps=4975.533333333334, Evts=2081207758}
INFO | jvm 3 | 2018/02/27 13:14:32 | [Tue Feb 27 13:14:32 EST 2018] [INFO ] {Eps=5225.283333333334, Evts=2081521275}
INFO | jvm 3 | 2018/02/27 13:15:33 | [Tue Feb 27 13:15:33 EST 2018] [INFO ] {Eps=5261.766666666666, Evts=2081836981}
INFO | jvm 3 | 2018/02/27 13:16:34 | [Tue Feb 27 13:16:34 EST 2018] [INFO ] {Eps=5257.688524590164, Evts=2082157700}
INFO | jvm 3 | 2018/02/27 13:17:34 | [Tue Feb 27 13:17:34 EST 2018] [INFO ] {Eps=5634.133333333333, Evts=2082495748}
INFO | jvm 3 | 2018/02/27 13:18:34 | [Tue Feb 27 13:18:34 EST 2018] [INFO ] {Eps=5490.5, Evts=2082825178}
INFO | jvm 3 | 2018/02/27 13:19:35 | [Tue Feb 27 13:19:35 EST 2018] [INFO ] {Eps=5351.05, Evts=2083146241}
INFO | jvm 3 | 2018/02/27 13:20:37 | [Tue Feb 27 13:20:37 EST 2018] [INFO ] {Eps=5022.983606557377, Evts=2083452643}
INFO | jvm 3 | 2018/02/27 13:21:37 | [Tue Feb 27 13:21:37 EST 2018] [INFO ] {Eps=5302.196721311476, Evts=2083776077}
INFO | jvm 3 | 2018/02/27 13:22:37 | [Tue Feb 27 13:22:37 EST 2018] [INFO ] {Eps=5096.2, Evts=2084081849}
INFO | jvm 3 | 2018/02/27 13:23:37 | [Tue Feb 27 13:23:37 EST 2018] [INFO ] {Eps=5074.45, Evts=2084386316}
INFO | jvm 3 | 2018/02/27 13:24:38 | [Tue Feb 27 13:24:38 EST 2018] [INFO ] {Eps=5264.566666666667, Evts=2084702190}
Tools like sed and grep operate on strings, and you can do very sophisticated things with regular expressions. But these tools lack the ability to do "range queries" on things like dates.
You will find various solutions to this question; mine is a small Python snippet:
#!/usr/bin/env python
import sys
from datetime import datetime

begin = datetime(2018, 2, 27, 13, 10, 31)
end = datetime(2018, 2, 27, 13, 17, 34)

for line in sys.stdin:
    # the third pipe-separated column holds the "YYYY/MM/DD HH:MM:SS" timestamp
    if begin <= datetime.strptime(line.split('|')[2].strip(), '%Y/%m/%d %H:%M:%S') <= end:
        print(line.rstrip('\n'))
That snippet, saved as filter.py and made executable (e.g. chmod +x filter.py), can then be called like this:
grep "Eps=" file.log | ./filter.py
Something like the following will do the job in a shell (it assumes each boundary timestamp occurs exactly once in the file), but as Stefan Sonnenberg-Carstens said in his answer, consider using Python for this job:
#!/usr/bin/env sh
# line numbers of the first and last lines of the range
from=$(grep -n '2018/02/27 13:10:31' file.log | cut -d: -f1)
to=$(grep -n '2018/02/27 13:17:34' file.log | cut -d: -f1)
head -n "$to" file.log | tail -n "+$from"
Output:
INFO | jvm 3 | 2018/02/27 13:10:31 | [Tue Feb 27 13:10:31 EST 2018] [INFO ] {Eps=5519.266666666666, Evts=2080274847}
INFO | jvm 3 | 2018/02/27 13:11:31 | [Tue Feb 27 13:11:31 EST 2018] [INFO ] {Eps=5342.8, Evts=2080595415}
INFO | jvm 3 | 2018/02/27 13:12:32 | [Tue Feb 27 13:12:32 EST 2018] [INFO ] {Eps=5230.183333333333, Evts=2080909226}
INFO | jvm 3 | 2018/02/27 13:13:32 | [Tue Feb 27 13:13:32 EST 2018] [INFO ] {Eps=4975.533333333334, Evts=2081207758}
INFO | jvm 3 | 2018/02/27 13:14:32 | [Tue Feb 27 13:14:32 EST 2018] [INFO ] {Eps=5225.283333333334, Evts=2081521275}
INFO | jvm 3 | 2018/02/27 13:15:33 | [Tue Feb 27 13:15:33 EST 2018] [INFO ] {Eps=5261.766666666666, Evts=2081836981}
INFO | jvm 3 | 2018/02/27 13:16:34 | [Tue Feb 27 13:16:34 EST 2018] [INFO ] {Eps=5257.688524590164, Evts=2082157700}
INFO | jvm 3 | 2018/02/27 13:17:34 | [Tue Feb 27 13:17:34 EST 2018] [INFO ] {Eps=5634.133333333333, Evts=2082495748}
Using a Perl one-liner; try to find a more concise and clear way :) The flip-flop operator .. prints from the line matching the start timestamp through the line matching the end timestamp, so both boundary lines must exist in the file.
perl -ne 'print if m|2018/02/27 13:10:31| .. m|2018/02/27 13:17:34|' file
Output:
INFO | jvm 3 | 2018/02/27 13:10:31 | [Tue Feb 27 13:10:31 EST 2018] [INFO ] {Eps=5519.266666666666, Evts=2080274847}
INFO | jvm 3 | 2018/02/27 13:11:31 | [Tue Feb 27 13:11:31 EST 2018] [INFO ] {Eps=5342.8, Evts=2080595415}
INFO | jvm 3 | 2018/02/27 13:12:32 | [Tue Feb 27 13:12:32 EST 2018] [INFO ] {Eps=5230.183333333333, Evts=2080909226}
INFO | jvm 3 | 2018/02/27 13:13:32 | [Tue Feb 27 13:13:32 EST 2018] [INFO ] {Eps=4975.533333333334, Evts=2081207758}
INFO | jvm 3 | 2018/02/27 13:14:32 | [Tue Feb 27 13:14:32 EST 2018] [INFO ] {Eps=5225.283333333334, Evts=2081521275}
INFO | jvm 3 | 2018/02/27 13:15:33 | [Tue Feb 27 13:15:33 EST 2018] [INFO ] {Eps=5261.766666666666, Evts=2081836981}
INFO | jvm 3 | 2018/02/27 13:16:34 | [Tue Feb 27 13:16:34 EST 2018] [INFO ] {Eps=5257.688524590164, Evts=2082157700}
INFO | jvm 3 | 2018/02/27 13:17:34 | [Tue Feb 27 13:17:34 EST 2018] [INFO ] {Eps=5634.133333333333, Evts=2082495748}
Related
I have a log file in which new lines are continuously written.
I would like a bash script that continuously reads the last line of this log file, so that I can process the line (e.g. execute a specific command if the line contains the word "error").
I've tried:
while true
do
    if tail -n1 -f file.log | grep -q ERROR
    then
        echo "$(date) : ERROR detected"
    fi
done
But it's spamming:
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
(a new line is added every minute in this example)
How can I read only the last line and avoid this spam in the output?
I suggest the following, using GNU grep:
tail -n1 -f file.log | grep --line-buffered ERROR | while read; do echo "$(date) : ERROR detected"; done
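Spelled out over several lines (the --line-buffered flag stops grep from buffering its output while writing into a pipe, so the loop body runs once per matching line):
tail -n1 -f file.log \
    | grep --line-buffered ERROR \
    | while read -r line; do
          # one iteration per newly appended ERROR line
          echo "$(date) : ERROR detected"
      done
The matching line itself is available in $line if the action needs it.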
This is exactly why tail -f was invented:
tail -f <logfile>
will show the last lines of your logfile and keep following it, so you can see what gets added.
This can be combined with a grep:
tail -f <logfile> | grep <text_to_be_searched>
In your case:
tail -f file.log | grep "ERROR"
I have a file with data something like this:
{
MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018
}
I want the output as :
{
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
}
and so on. Any help is appreciated.
What I tried was:
cat <filename> | awk '{printf "%s" (NR%2==0? RS:FS), $1}'
Could you please try the following and let me know if this helps.
awk '/{/||/}/{print;next} /MAG/{val=$0;getline;print val OFS $0}' OFS=" | " Input_file
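The same command spelled out with comments (Input_file stands for your actual file name):
awk -v OFS=" | " '
    /{/ || /}/ { print; next }    # pass the brace lines through unchanged
    /MAG/ {
        val = $0                  # remember the MAG line,
        getline                   # read the following date line,
        print val OFS $0          # and print both joined by " | "
    }
' Input_file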
Solution with sed:
echo "MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018" | tr "\n" "|" | sed 's/|/ | /g' | sed -r 's/([^|]+\|[^|]+)\| /\1\n/g'
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
MAG 121/004 | Wed Mar 14 00:00:00 2018
Read and echo:
echo "MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018" | while read line ; do case $line in MAG*) echo -n $line "| " ;; *) echo $line ;; esac ; done
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
MAG 121/004 | Wed Mar 14 00:00:00 2018
code formatted:
while read line
do
    case $line in
        MAG*) echo -n $line "| " ;;
        *)    echo $line ;;
    esac
done
I would like to know if there is a way to get the list of changes from a developer for a given time period in an SVN repo.
I know the command, but is there a way to script it? For example, if you have 60 SVN repos, run through all of them and get the list of changes from a developer (xyz) for a given time period.
If someone has a script they use and can share, that would be a great help.
kurt@CMSPPLAB2 ~/src/myApp $ svn log -r {2014-06-01}:{2014-06-11} |grep n243215
r1131 | n243215 | 2014-06-02 14:28:15 -0500 (Mon, 02 Jun 2014) | 1 line
r1132 | n243215 | 2014-06-02 14:28:39 -0500 (Mon, 02 Jun 2014) | 1 line
r1136 | n243215 | 2014-06-03 09:02:44 -0500 (Tue, 03 Jun 2014) | 2 lines
r1137 | n243215 | 2014-06-03 09:06:16 -0500 (Tue, 03 Jun 2014) | 2 lines
r1141 | n243215 | 2014-06-04 13:25:24 -0500 (Wed, 04 Jun 2014) | 2 lines
r1142 | n243215 | 2014-06-04 13:26:15 -0500 (Wed, 04 Jun 2014) | 2 lines
r1149 | n243215 | 2014-06-05 14:54:21 -0500 (Thu, 05 Jun 2014) | 2 lines
r1150 | n243215 | 2014-06-05 14:54:59 -0500 (Thu, 05 Jun 2014) | 2 lines
r1160 | n243215 | 2014-06-09 10:24:07 -0500 (Mon, 09 Jun 2014) | 2 lines
r1161 | n243215 | 2014-06-09 10:25:00 -0500 (Mon, 09 Jun 2014) | 2 lines
You can run svn log -r with a pair of dates and grep for the user. It's pretty simple to have this loop through any number of repositories, as sketched below.
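A minimal sketch of such a loop (the ~/src/*/ layout and the author name are assumptions; any list of working-copy paths or repository URLs works the same way):
#!/usr/bin/env sh
# Report one developer's commits in a date range across several working copies.
AUTHOR=n243215
RANGE='{2014-06-01}:{2014-06-11}'

for repo in ~/src/*/; do
    echo "=== $repo ==="
    ( cd "$repo" && svn log -r "$RANGE" | grep "$AUTHOR" )
done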
Right now, I am running the following command:
rpm -qa --queryformat '%{name}\t%{installtime:date}\n' | sort -nr
and getting some output like this:
dhclient Fri 07 Feb 2014 01:37:47 PM EST
device-mapper-persistent-data Fri 07 Feb 2014 01:27:37 PM EST
device-mapper-libs Fri 07 Feb 2014 01:34:44 PM EST
device-mapper Fri 07 Feb 2014 01:34:46 PM EST
device-mapper-event-libs Fri 07 Feb 2014 01:34:48 PM EST
device-mapper-event Fri 07 Feb 2014 01:34:50 PM EST
dbus-libs Fri 07 Feb 2014 01:25:28 PM EST
dbus-glib Fri 07 Feb 2014 01:33:48 PM EST
db4-utils Fri 07 Feb 2014 01:30:05 PM EST
db4 Fri 07 Feb 2014 01:24:58 PM EST
dash Fri 07 Feb 2014 01:30:19 PM EST
cyrus-sasl-lib Fri 07 Feb 2014 01:25:48 PM EST
(note the odd tabs)
How do I tell the command to output a table with consistent column spacing, instead of having to specify the number of tabs myself?
Extra Question:
What I'm trying to do is find out what has been installed and when, so I can uninstall everything that I installed recently. Is there a better way to do that than what I'm doing?
rpm -qa --queryformat '%-40{name} %{installtime:date}\n' | sort -nr
The %-40{name} will left-align the name and pad it to 40 characters.
If you want to order by time, you could print the numeric time first so it's easy to sort by.
$ rpm -qa --queryformat '%-10{installtime} %{installtime:date} %{name}\n' | sort -n
...
1375369678 Thu 01 Aug 2013 11:07:58 AM EDT xorg-x11-util-macros
1375886901 Wed 07 Aug 2013 10:48:21 AM EDT libdc1394
1378148462 Mon 02 Sep 2013 03:01:02 PM EDT gnome-system-monitor
1384526666 Fri 15 Nov 2013 09:44:26 AM EST perl-File-Next
1384526667 Fri 15 Nov 2013 09:44:27 AM EST ack
1385065567 Thu 21 Nov 2013 03:26:07 PM EST trousers
1385065568 Thu 21 Nov 2013 03:26:08 PM EST tpm-tools
1387405750 Wed 18 Dec 2013 05:29:10 PM EST libusb1
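For the extra question, the numeric %{installtime} also makes it easy to list only the packages installed after a given moment (a sketch; the cutoff value is an assumed example):
# list packages whose install time is newer than the cutoff (epoch seconds)
cutoff=1384500000    # assumed example value; pick your own threshold
rpm -qa --queryformat '%{installtime} %{name}\n' \
    | awk -v c="$cutoff" '$1 > c { print $2 }' \
    | sort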
I have a file with:
....1342477599376
1342479596867
1342480248580
1342480501995
1342481198309
1342492256524
1342506099378....
The ... in these lines stands for various other characters. I'd like to read this file with cat (it is essential that I use cat), pull out these lines with sed, and then convert the epoch values to dates...
cat myfile.log | sed '...*//' | sed 's/...*//' | date -d #$1
Unfortunately, this doesn't work.
One way, using sed:
cat file.txt | sed "s/^.*\([0-9]\{13\}\).*/date -d @\1/" | sh
Results:
Thu Jun 4 14:16:16 EST 44511
Sat Jun 27 17:07:47 EST 44511
Sun Jul 5 06:09:40 EST 44511
Wed Jul 8 04:33:15 EST 44511
Thu Jul 16 05:58:29 EST 44511
Sat Nov 21 05:42:04 EST 44511
Fri Apr 29 10:56:18 EST 44512
HTH
This is a similar solution, but it will find the timestamp anywhere in the stream:
cat test.txt | sed 's/^/echo "/; s/\([0-9]\{13\}\)/`date -d @\1`/; s/$/"/' | bash
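Note that both one-liners feed the 13-digit values to date as if they were seconds, which is why the years come out around 44511. If those values are actually epoch milliseconds (an assumption based on their length), stripping the last three digits first gives dates in the expected range:
# treat the 13-digit values as epoch milliseconds: drop the last three
# digits (divide by 1000) before handing them to GNU date
grep -o '[0-9]\{13\}' myfile.log | while read -r ms; do
    date -d "@${ms%???}"
done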