Cat a file, find epochs and then convert the epochs to dates - bash

i have a file with:
....1342477599376
1342479596867
1342480248580
1342480501995
1342481198309
1342492256524
1342506099378....
The ... in these lines stands for various other characters. I need to read this file with cat (it is essential that I do it with cat), extract these lines with sed commands, and then convert each epoch to a date...
cat myfile.log | sed '...*//' | sed 's/...*//' | date -d #$1
Unfortunately, this doesn't work.

One way, using sed:
cat file.txt | sed "s/^.*\([0-9]\{13\}\).*/date -d @\1/" | sh
Results:
Thu Jun 4 14:16:16 EST 44511
Sat Jun 27 17:07:47 EST 44511
Sun Jul 5 06:09:40 EST 44511
Wed Jul 8 04:33:15 EST 44511
Thu Jul 16 05:58:29 EST 44511
Sat Nov 21 05:42:04 EST 44511
Fri Apr 29 10:56:18 EST 44512
HTH

This is a similar solution, but it will find the timestamp anywhere in the stream:
cat test.txt | sed 's/^/echo "/; s/\([0-9]\{13\}\)/`date -d @\1`/; s/$/"/' | bash
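Note that these 13-digit values are millisecond epochs, which is why the years above come out around 44511: GNU date's @N syntax treats N as seconds since the epoch. A sketch that drops the milliseconds first (sample data is inlined; replace the printf with cat myfile.log for the real file):

```shell
# Extract 13-digit millisecond epochs and convert them to dates.
# date -d @N expects seconds, so divide by 1000 first.
printf '....1342477599376\n1342479596867....\n' |
grep -o '[0-9]\{13\}' |
while read -r ms; do
    date -d "@$((ms / 1000))"
done
```

This prints dates in July 2012 (in the local timezone) instead of year 44511.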

Related

Continuously read the last line of log file in bash script

I have a log file in which new lines are continuously written.
I would like a bash script that continuously reads the last line of this log file, so that I can process the line (e.g. execute a specific command if the line contains the word "error").
I've tried:
while true
do
    if tail -n1 -f file.log | grep -q ERROR
    then
        echo "$(date) : ERROR detected"
    fi
done
But it's spamming:
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
sun 21 mar 2021 18:32:41 CET : ERROR detected
(a new line is added every minute in this example)
How can I read only the last line and avoid this spam in the result?
With GNU grep, I suggest:
tail -n1 -f file.log | grep --line-buffered ERROR | while read; do echo "$(date) : ERROR detected"; done
This is exactly what tail -f was invented for:
tail -f <logfile>
will show the last line of your logfile, so you can follow what gets added.
This can be combined with a grep:
tail -f <logfile> | grep <text_to_be_searched>
In your case:
tail -f file.log | grep "ERROR"
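If the goal is to run a specific command per matching line (not just print grep's matches), the stream can feed a while read loop directly. A sketch, assuming file.log exists; -n0 is used so only lines added after startup are processed:

```shell
# Follow new lines only (-n0 skips the file's existing content) and
# run a command for each line containing ERROR. Ctrl-C to stop.
tail -n0 -f file.log | while read -r line; do
    case $line in
        *ERROR*) printf '%s : ERROR detected\n' "$(date)" ;;
    esac
done
```

The case statement is where you would plug in your own handling, e.g. sending a notification instead of echoing.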

Looping with a specific step in a long datetime string in bash

I have a list of files with the substring YYYYMMDDHH in them (example: 2016112200 means 2016 November 22nd at 00 hours). These files are: temp_2016102200.data, temp_2016102212.data, temp_2016102300.data, temp_2016102312.data, ..., temp_20170301.data. And I also have another family of files, substituting temp by wind.
For each string YYYYMMDDHH I want to create a tar with the temp and its correspondent wind file. I don't want this process to stop if one or both files are missing.
My idea was to loop in steps of 12 hours, but I am having some problems, because to specify the date I did: b=$(date -d '2016111400' +'%Y%m%d%H') but bash informs me that that is not a valid date...
Thanks.
It's not bash telling you the date format is wrong: date is telling you. Not everything you type is a bash command.
As Kamil comments, you have to split it up so that date can parse it. The YYYY-mm-dd HH:MM:SS format is parsable. Using bash parameter expansion to extract the relevant substrings:
$ d=2016111400
$ date -d "${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00"
Mon Nov 14 00:00:00 EST 2016
Now, when you want to add 12 hours, you have to be careful to do it in the right place in the datetime string: if you add a + character after the time, it will be parsed as a timezone offset, so put the relative part either first or between the date and the time.
$ date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00"
Mon Nov 14 12:00:00 EST 2016
As a loop, you could do:
d=2016111400
for ((i=1; i<=10; i++)); do
    # print this datetime
    date -d "${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00"
    # add 12 hours
    d=$( date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00" "+%Y%m%d%H" )
done
outputs:
Mon Nov 14 00:00:00 EST 2016
Mon Nov 14 12:00:00 EST 2016
Tue Nov 15 00:00:00 EST 2016
Tue Nov 15 12:00:00 EST 2016
Wed Nov 16 00:00:00 EST 2016
Wed Nov 16 12:00:00 EST 2016
Thu Nov 17 00:00:00 EST 2016
Thu Nov 17 12:00:00 EST 2016
Fri Nov 18 00:00:00 EST 2016
Fri Nov 18 12:00:00 EST 2016
OK, a "nicer" way to loop
start=2019043000
end=2019050300

plus12hours() {
    local d=$1
    date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00" "+%Y%m%d%H"
}

for (( d = start; d <= end; d = $(plus12hours "$d") )); do
    printf "%d\t%s\n" "$d" "$(date -d "${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00")"
done
2019043000 Tue Apr 30 00:00:00 EDT 2019
2019043012 Tue Apr 30 12:00:00 EDT 2019
2019050100 Wed May 1 00:00:00 EDT 2019
2019050112 Wed May 1 12:00:00 EDT 2019
2019050200 Thu May 2 00:00:00 EDT 2019
2019050212 Thu May 2 12:00:00 EDT 2019
2019050300 Fri May 3 00:00:00 EDT 2019
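To connect this back to the original tar task: the same plus12hours helper can drive archiving each temp/wind pair without stopping on missing files. A hedged sketch; the range and the archive_*.tar naming are placeholders, not from the question:

```shell
#!/bin/bash
# For each 12-hour step, tar the matching temp/wind files if present.
# Missing files are simply skipped, not fatal (GNU date assumed).
plus12hours() {
    local d=$1
    date -d "+12 hours ${d:0:4}-${d:4:2}-${d:6:2} ${d:8:2}:00:00" "+%Y%m%d%H"
}

d=2016102200
end=2016102312
while (( d <= end )); do
    files=()
    for f in "temp_${d}.data" "wind_${d}.data"; do
        if [ -e "$f" ]; then files+=("$f"); fi
    done
    # only create an archive when at least one of the pair exists
    if [ "${#files[@]}" -gt 0 ]; then
        tar -cf "archive_${d}.tar" "${files[@]}"
    fi
    d=$(plus12hours "$d")
done
```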

awk : convert first line to first column and second line to second column

I have a file with data something like this:
{
MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018
}
I want the output as :
{
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
}
and so on. Any help is appreciated.
What I tried was:
cat <filename> | awk '{printf "%s" (NR%2==0? RS:FS), $1}'
Could you please try the following and let me know if this helps:
awk '/{/||/}/{print;next} /MAG/{val=$0;getline;print val OFS $0}' OFS=" | " Input_file
Solution with sed:
echo "MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018" | tr "\n" "|" | sed 's/|/ | /g' | sed -r 's/([^|]+\|[^|]+)\| /\1\n/g'
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
MAG 121/004 | Wed Mar 14 00:00:00 2018
Read and echo:
echo "MAG 121/002
Wed Mar 14 00:00:00 2018
MAG 121/003
Wed Mar 14 00:00:00 2018
MAG 121/004
Wed Mar 14 00:00:00 2018" | while read line ; do case $line in MAG*) echo -n $line "| " ;; *) echo $line ;; esac ; done
MAG 121/002 | Wed Mar 14 00:00:00 2018
MAG 121/003 | Wed Mar 14 00:00:00 2018
MAG 121/004 | Wed Mar 14 00:00:00 2018
code formatted:
while read line
do
    case $line in
        MAG*) echo -n $line "| " ;;
        *) echo $line ;;
    esac
done
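For completeness, the pairing can also be done in awk alone, without spawning a shell per line. A sketch assuming the file alternates MAG/date lines between literal { and } delimiters (sample data inlined; point awk at the real file instead):

```shell
# MAG lines are printed without a trailing newline, so the following
# date line joins them; the { and } lines pass through untouched.
printf '{\nMAG 121/002\nWed Mar 14 00:00:00 2018\nMAG 121/003\nWed Mar 14 00:00:00 2018\n}\n' |
awk '/^[{}]/ { print; next }
     /^MAG/  { printf "%s | ", $0; next }
     { print }'
```

This prints each MAG line joined with its date line by " | ", with the braces on their own lines.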

How to get the changes for a given period by developer in SVN

I would like to know if there is a way to get the list of changes from a developer for a time period in an SVN repo.
I know the command, but is there a way to script it? For example, if you have 60 repos, you could run through all of them and get the list of changes from a developer (xyz) for a given time period.
If someone has a script which they use and can share, that would be a great help.
kurt#CMSPPLAB2 ~/src/myApp $ svn log -r {2014-06-01}:{2014-06-11} |grep n243215
r1131 | n243215 | 2014-06-02 14:28:15 -0500 (Mon, 02 Jun 2014) | 1 line
r1132 | n243215 | 2014-06-02 14:28:39 -0500 (Mon, 02 Jun 2014) | 1 line
r1136 | n243215 | 2014-06-03 09:02:44 -0500 (Tue, 03 Jun 2014) | 2 lines
r1137 | n243215 | 2014-06-03 09:06:16 -0500 (Tue, 03 Jun 2014) | 2 lines
r1141 | n243215 | 2014-06-04 13:25:24 -0500 (Wed, 04 Jun 2014) | 2 lines
r1142 | n243215 | 2014-06-04 13:26:15 -0500 (Wed, 04 Jun 2014) | 2 lines
r1149 | n243215 | 2014-06-05 14:54:21 -0500 (Thu, 05 Jun 2014) | 2 lines
r1150 | n243215 | 2014-06-05 14:54:59 -0500 (Thu, 05 Jun 2014) | 2 lines
r1160 | n243215 | 2014-06-09 10:24:07 -0500 (Mon, 09 Jun 2014) | 2 lines
r1161 | n243215 | 2014-06-09 10:25:00 -0500 (Mon, 09 Jun 2014) | 2 lines
You can run svn log -r with a couple of dates and grep for the user. It's pretty simple to have this loop through X repositories.
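A minimal sketch of that loop, with the repository URLs or working copies passed as arguments; the developer name "xyz" and the dates are placeholders from the question:

```shell
#!/bin/sh
# For each repo given on the command line, list revisions by one
# developer in a date range, using svn log's "rN | author | date" header.
for repo in "$@"; do
    echo "== $repo =="
    svn log -r '{2014-06-01}:{2014-06-11}' "$repo" | grep ' | xyz | '
done
```

Grepping for the author surrounded by the " | " separators avoids false matches on commit messages that merely mention the name.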

How can I get a cleaner file information with print command?

I have this command to dump all java and xml files:
find . -name '*.*' -print -ls
I get the following output:
./auth-jaas/pom.xml
562949954141667 2 ---------- 1 John ???????? 1282 Feb 14 2011 ./auth-jaas/pom.xml
Is there a way to get something smaller like this:
1282 Feb 14 2011 ./auth-jaas/pom.xml
I'm only interested in file size and timestamp.
I think what you're after is something like:
find . -name '*.*' -exec stat -f "%10z %Sm %N" {} +
I got this as part of the output in one of my directories:
534 Mar 2 20:17:16 2013 ./so.6964747
835 Mar 2 20:17:16 2013 ./so.6965001
25048 Jun 25 21:29:46 2012 ./so.8854855.sql
7710 Feb 13 07:17:01 2013 ./sortAtt.c
1565 Sep 4 19:15:30 2010 ./strandsort.c
7224 Sep 22 13:42:17 2012 ./streplace.c
3033 Jan 28 23:16:46 2013 ./substr.c
139 Mar 20 12:48:24 2013 ./sum.sh
6833 Sep 21 07:57:53 2012 ./timezeromoves.c
614 Feb 21 09:23:00 2013 ./travAsm.c
347 Feb 21 09:23:00 2013 ./traverse.c
1277 Jul 26 09:30:12 2012 ./uint128.c
793 Aug 19 00:47:48 2012 ./unwrap.c
1906 Jul 28 08:41:22 2012 ./xxx.sql
1904 Sep 22 21:30:09 2011 ./yyy.sql
Reading up on the options might tell you how to drop the time from the 'string format for the modification time' (%Sm).
Just for the record, this was using /usr/bin/stat on Mac OS X 10.7.5, not GNU stat. You will need to scrutinize what's available there.
find . -name '*.*' -exec /usr/gnu/bin/stat --format "%s %y %N" {} +
And the same part of the output was:
534 2013-03-02 20:17:16.000000000 -0800 ./so.6964747
835 2013-03-02 20:17:16.000000000 -0800 ./so.6965001
25048 2012-06-25 21:29:46.000000000 -0700 ./so.8854855.sql
7710 2013-02-13 07:17:01.000000000 -0800 ./sortAtt.c
1565 2010-09-04 19:15:30.000000000 -0700 ./strandsort.c
7224 2012-09-22 13:42:17.000000000 -0700 ./streplace.c
3033 2013-01-28 23:16:46.000000000 -0800 ./substr.c
139 2013-03-20 12:48:24.000000000 -0700 ./sum.sh
6833 2012-09-21 07:57:53.000000000 -0700 ./timezeromoves.c
614 2013-02-21 09:23:00.000000000 -0800 ./travAsm.c
347 2013-02-21 09:23:00.000000000 -0800 ./traverse.c
1277 2012-07-26 09:30:12.000000000 -0700 ./uint128.c
793 2012-08-19 00:47:48.000000000 -0700 ./unwrap.c
1906 2012-07-28 08:41:22.000000000 -0700 ./xxx.sql
1904 2011-09-22 21:30:09.000000000 -0700 ./yyy.sql
If you have a Linux-style -printf in your find command, use %a for access time, %t for mod-time (and uppercase variants for more specific formatting) and %s for file size in bytes.
Alternatively, pipe the output from -ls through cut.
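With GNU find, -printf can produce exactly the "size, timestamp, name" layout asked for in one step; a sketch (the letters after %T follow the date/strftime directives):

```shell
# %s = size in bytes; %Tb %Td %TY = abbreviated month, day, year of
# the modification time; %p = the path.
find . -name '*.*' -printf '%s %Tb %Td %TY %p\n'
```

For a pom.xml like the one in the question, this yields lines of the form "1282 Feb 14 2011 ./auth-jaas/pom.xml".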
Use du instead:
$ du --time file
4 2013-03-20 19:49 file
With find:
$ find . -name 'file' -exec du --time {} +
Normally I would say cut, but not knowing if they are tab or space separators:
find . -name '*.*' -print -ls | awk '{$0=substr($0,index($0,FS)+4); print}'
