how to calculate elapsed time based on
start time=
[user001a@dev51 logs]# grep 'Recovery Manager' refresh_03Jun2019_0250.log | head -1 | awk -F'on ' '{print $NF}'
Jun 3 02:50:02 2019
[user001a@dev51 logs]#
end time=
[user001a@dev51 logs]# ls -l refresh_03Jun2019_0250.log
-rw-r--r--. 1 user001a grp001a 170050 Jun 3 05:06 refresh_03Jun2019_0250.log
[user001a@dev51 logs]#
Note: stat does not show a birth (creation) time here, so stat might not be a good option to calculate the file's create and modify times:
[user001a@dev51 logs]# stat refresh_03Jun2019_0250.log
File: `refresh_03Jun2019_0250.log'
Size: 170050 Blocks: 344 IO Block: 4096 regular file
Device: 811h/2065d Inode: 1474545 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 219/ user001a) Gid: ( 219/grp001a)
Access: 2019-06-03 05:06:40.830829026 -0400
Modify: 2019-06-03 05:06:40.827828883 -0400
Change: 2019-06-03 05:06:40.827828883 -0400
[user001a@dev51 logs]#
Sample1 output:
StartTime=June 3, 2019 at 2:50:02 am
EndTime=June 3, 2019 at 5:06:40 am
ElapsedTime=2 hours, 16 minutes and 38 seconds
Sample2 output:
ElapsedTime=2 hours, 16 minutes and 38 seconds
Limitation of this solution: it handles at most 23 hours; for longer spans, days need to be added.
StartTime="June 3, 2019 at 2:50:02 am"
EndTime="June 3, 2019 at 5:06:40 am"
StartTimeInEpoch=$(echo "$StartTime" | sed 's/at //' | date -f- +%s)
EndTimeInEpoch=$(echo "$EndTime" | sed 's/at //' | date -f- +%s)
echo "$EndTimeInEpoch - $StartTimeInEpoch" | bc | sed 's/^/@/' | date -u -f- '+%_H hours %_M minutes %_S seconds'
Output:
2 hours 16 minutes 38 seconds
Assuming you've got your dates in the variables StartTime and EndTime, it's necessary to remove "at" from them; sed does this. Then both dates are converted to epoch time; +%s does the trick. -f- tells date to read its input from stdin (the pipe). Then we can subtract the dates, add @ to the beginning, and format with date. -u means UTC, so there is no time-zone shift.
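To go past the 23-hour limit mentioned above, here is a sketch (assuming GNU date and bash; the StartTime/EndTime values are the ones from the samples) that splits the elapsed seconds into days, hours, minutes and seconds with plain arithmetic instead of formatting through date:

```shell
# Sketch: elapsed time that also reports days.
# Assumes GNU date and bash; StartTime/EndTime as in the samples above.
StartTime="June 3, 2019 at 2:50:02 am"
EndTime="June 3, 2019 at 5:06:40 am"
start=$(date -d "${StartTime/at /}" +%s)   # strip "at ", convert to epoch
end=$(date -d "${EndTime/at /}" +%s)
diff=$(( end - start ))
printf '%d days, %d hours, %d minutes and %d seconds\n' \
  $(( diff / 86400 )) $(( diff % 86400 / 3600 )) \
  $(( diff % 3600 / 60 )) $(( diff % 60 ))
```

Since the split is pure integer arithmetic, it works for any span, not just under 24 hours.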
Related
I have a directory with a lot of files in it.
Each day, new files are added automatically.
The filenames are formatted like this:
[GROUP_ID]_[RANDOM_NUMBER].txt
Example: 012_1234.txt
For every day, for every GROUP_ID (032, 024, 044...etc), I want to keep only the biggest file of the day.
So, for example, for the two days March 27 and 28, I have:
March 27 - 012_1234.txt - 12ko
March 27 - 012_0243.txt - 3000ko
March 27 - 016_5647.txt - 25ko
March 27 - 024_4354.txt - 20ko
March 27 - 032_8745.txt - 40ko
March 28 - 032_1254.txt - 16ko
March 28 - 036_0456.txt - 30ko
March 28 - 042_7645.txt - 500ko
March 28 - 042_2310.txt - 25ko
March 28 - 042_2125.txt - 34ko
March 28 - 044_4510.txt - 35ko
And I want to have:
March 27 - 012_0243.txt - 3000ko
March 27 - 016_5647.txt - 25ko
March 27 - 024_4354.txt - 20ko
March 27 - 032_8745.txt - 40ko
March 28 - 032_1254.txt - 16ko
March 28 - 036_0456.txt - 30ko
March 28 - 042_7645.txt - 500ko
March 28 - 044_4510.txt - 35ko
I can't find the right bash ls/find command to do that; does somebody have an idea?
With this command, I can display the biggest file for each day.
ls -l *.txt --time-style=+%s |
awk '{$6 = int($6/86400); print}' |
sort -nk6,6 -nrk5,5 | sort -sunk6,6
But I want the biggest file for each GROUP_ID on each day.
So, if there is only one file for the "012" group_id, of 10ko, I want to display it, even if there are bigger files for other group_ids...
I found the solution myself:
ls -l | tail -n+2 |
awk '{ split($0,var,"_"); group_id=var[5]; print $0" "group_id }' |
sort -k9,9 -k5,5nr |
awk '$10 != x { print } { x = $10 }'
This gives me the biggest file for each group_id, so now I just need to add handling for the day part.
For information:
tail -n+2: hide the "total" part of the ls command's output
First awk: get the group_id part (012, 036...) and display it after the original line ($0)
Sort: sort on filename and size
Second awk: take the line with the biggest size for each group_id (column 10, added by the first awk)
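For completeness, here is a sketch that keys on both the day and the GROUP_ID in a single awk pass, assuming GNU ls (for --time-style) and names with no spaces; the demo files and sizes below are hypothetical, mirroring the question:

```shell
# Demo setup (hypothetical files mirroring the question's sizes):
cd "$(mktemp -d)"
head -c 12   /dev/zero > 012_1234.txt
head -c 3000 /dev/zero > 012_0243.txt
head -c 25   /dev/zero > 016_5647.txt

# Keep the biggest file per (day, GROUP_ID); assumes GNU ls --time-style
# so that $5 = size, $6 = YYYY-MM-DD date, $7 = filename.
ls -l --time-style=+%F *.txt |
awk 'NF >= 7 {
  split($7, p, "_")              # p[1] is the GROUP_ID
  key = $6 "/" p[1]              # one key per day per group
  if ($5 > max[key]) { max[key] = $5; best[key] = $0 }
}
END { for (k in best) print best[k] }'
```

One line comes out per (day, group) pair, so a lone 10ko file for group 012 still appears even when other groups have bigger files.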
I'm trying to write a shell script that displays unique Names, user name and Date using finger command.
Right now when I enter finger, it displays..
Login Name Tty Idle Login Time Office
1xyz xyz pts/13 Dec 2 18:24 (76.126.34.32)
1xyz xyz pts/13 Dec 2 18:24 (76.126.34.32)
2xxxx xxxx pts/23 2 Dec 2 21:35 (108.252.136.12)
2zzzz zzzz pts/61 13 Dec 2 20:46 (24.4.205.223)
2yyyy yyyy pts/32 57 Dec 2 21:06 (205.154.255.145)
1zzz zzz pts/35 37 Dec 2 20:56 (71.198.36.189)
1zzz zzz pts/48 12 Dec 2 20:56 (71.198.36.189)
I would like the script to eliminate the duplicate usernames and display it like:
xyz (1xyz) Dec 2 18:24
xxxx (2xxxx) Dec 2 21:35
zzzz (2zzzz) Dec 2 20:46
yyyy (2yyyy) Dec 2 21:06
zzz (1zzz) Dec 2 20:56
The Name is in the first column, the user name is in (), and the Date is the last column.
Thanks in advance!
Ugly but should work.
finger | sed 's/\t/ /' | sed 's/pts\/[0-9]* *[0-9]*//' | awk '{print $2"\t("$1")\t"$3" "$4" "$5}' | sort | uniq
Unique names with sort -u is the easy part.
If you only want to parse the data in your example, you can try matching everything in one sed command:
finger | sed 's/^\([^ ]*\) *\([^ ]*\) *pts[^A-Z]*\([^(]*\).*/\2\t(\1)\t\3/'
However, this is fragile and waiting to fail. My finger returns
Login Name Tty Idle Login Time Where
notroot notroot *:0 - Nov 26 15:30 console
notroot notroot pts/0 7d Nov 26 15:30
notroot notroot *pts/1 - Nov 26 15:30
You can try to improve the sed command, good luck with that!
I think the only way is looking at the columns: read the finger output one line at a time and slice each line with ${line:start:len} into parts (and remove the spaces afterwards). Count the columns carefully (and be wary of that_user_with_a_long_name).
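That column-slicing idea might look like the following bash sketch. The offsets are assumptions for illustration only; they must be adjusted to the actual column layout of your finger build:

```shell
# Sketch: slice finger output by column position (bash substring expansion).
# The offsets Login=0-8, Name=9-20, Tty/Idle=21-33, Login Time=34-45 are
# hypothetical; measure your own finger output and adjust them.
parse_finger() {
  tail -n +2 | while IFS= read -r line; do
    login=${line:0:9};  login=${login// /}   # strip padding spaces
    name=${line:9:12};  name=${name// /}
    when=${line:34:12}
    printf '%s\t(%s)\t%s\n' "$name" "$login" "$when"
  done | sort -u
}
# usage: finger | parse_finger
```

Unlike the sed approach, this keeps working when a field is empty (e.g. no Idle value), as long as the columns stay aligned.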
I have this data:
`date +%Y-%m-%d`" 00:00:00"
which returns 2015-10-08 00:00:00
I would like to subtract 5 minutes:
2015-10-07 23:55:00
Many thanks
You need to subtract 5 minutes from a known point in time:
$ date -d "00:00:00 today"
Thu Oct 8 00:00:00 EDT 2015
$ date -d "00:00:00 today -5 minutes"
Wed Oct 7 23:55:00 EDT 2015
You just need to add your format string.
There's more than one way to subtract a value from the current time, although this should match the format shown in your question:
date -d "-5 min" "+%Y-%m-%d %H:%M:%S"
Result:
2015-10-08 15:26:13
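If your date does not parse relative strings like "-5 min", a sketch using epoch arithmetic instead (it still assumes the date -d @SECONDS input form, which GNU date and BusyBox support; verify on your platform):

```shell
# Subtract 5 minutes (300 seconds) via epoch arithmetic.
# Assumes date -d @SECONDS is supported (GNU date, BusyBox).
now=$(date +%s)
date -d "@$(( now - 300 ))" '+%Y-%m-%d %H:%M:%S'
```

The arithmetic happens in the shell, so only the epoch-to-string conversion is left to date.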
I'm new to AWK and am trying to work out how to get all the results where the first column equals a variable and the date is greater than another variable formatted as a Unix timestamp. I'm using 'last' as my command. Example output is:
bob pts/2 172.6.14.37 Fri July 24 12:43 - 12:17 (9+23:34)
bob pts/2 172.6.14.37 Fri July 24 10:03 - 12:17 (5+23:34)
bob pts/2 172.6.14.37 Tue June 4 17:55 - 09:42 (8+15:46)
bob pts/2 172.6.14.37 Tue Mar 4 17:55 - 09:42 (8+15:46)
tim pts/1 172.6.14.37 Mon Mar 3 16:22 - 17:30 (1+01:08)
root pts/1 172.6.14.37 Thu Feb 27 09:38 - 09:56 (4+00:18)
and so I want all the results where 'bob' is in the first column. I've got
last -f /var/log/btmp | awk '$1 == "bob"'
Which gives me all of bob's failed logins. Now I need to filter again where the date field is greater than, say, '20140723145100', something like:
last -f /var/log/btmp | awk '$1 == "bob" && $4 >= $DATE'
Assuming $DATE = 20140723145100, the result I would want would be:
bob pts/2 172.6.14.37 Fri July 24 12:43 - 12:17 (9+23:34)
bob pts/2 172.6.14.37 Fri July 24 10:03 - 12:17 (5+23:34)
bash:
user=bob
since=20140623145100
last -Fa -f /var/log/btmp |
while read line; do
  set -- $line   # word-split the line into $1..$n; no quotes here
  [[ $1 == "$user" ]] || continue
  # -F prints the full date in fields 3-7; render both sides as
  # sortable YYYYmmddHHMMSS strings and compare
  [[ $(date -d "$3 $4 $5 $6 $7" +%Y%m%d%H%M%S) > $since ]] && echo "$line"
done
Use the -s option in last:
last -s 20140723145100
From man last:
-s, --since time
Display the state of logins since specified time. This is useful,
e.g., to determine easily who was logged in at a particular time. The
option is often combined with --until.
And then grep for the user:
last -s 20140723145100 | grep "^bob"
As you do not have the -s option, you can use this workaround: store all the last output, and also the output up to a certain time (using the -t option). Then compare the two outputs:
last -f /var/log/btmp | grep "^bob" > everything
last -f /var/log/btmp -t "20140723145100" | grep "^bob" > upto_20140723145100
grep -vf upto_20140723145100 everything
Using GNU Awk:
gawk -v user=bob -v date=20140723145100 -F '[[:space:]]{3,}| - ' '$1 == user { cmd = "exec date -d \"" $4 "\" +%Y%m%d%H%M%S"; cmd | getline d; close(cmd); if (d >= date) print }' sample
Output:
bob pts/2 172.6.14.37 Fri July 24 12:43 - 12:17 (9+23:34)
bob pts/2 172.6.14.37 Fri July 24 10:03 - 12:17 (5+23:34)
Of course the actual command is last -f /var/log/btmp | gawk -v user=bob -v date=20140723145100 ....
And here's a script version:
#!/usr/bin/gawk -f
BEGIN {
FS = "[[:space:]]{3,}| - "
}
$1 == user {
cmd = "exec date -d \"" $4 "\" +%Y%m%d%H%M%S"
cmd | getline d
close(cmd)
if (d >= date)
print
}
Usage:
last -f /var/log/btmp | gawk -v user=bob -v date=20140723145100 -f script.awk
How to get epoch time in shell script (for ksh)?
I am interested in getting epoch time for the start of day (so e.g. now is July 28th, 2011 ~ 14:25:00 EST, I need time at midnight).
If you have GNU date,
epoch=$( date -d 00:00 +%s )
Otherwise, if you have tclsh,
epoch=$( echo 'puts [clock scan 00:00]' | tclsh )
Otherwise,
epoch=$( perl -MTime::Local -le 'print timelocal(0,0,0,(localtime)[3..8])' )
ksh's printf '%(fmt)T' supports date calculation. For example:
$ printf '%T\n' now
Mon Mar 18 15:11:46 CST 2013
$ printf '%T\n' '2 days ago'
Sat Mar 16 15:11:55 CST 2013
$ printf '%T\n' 'midnight today'
Mon Mar 18 00:00:00 CST 2013
$
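To get the epoch for midnight directly, the same mechanism accepts %s inside the format in ksh93; a sketch (the ksh line is the assumed form to check on your system, shown with its GNU date equivalent):

```shell
# ksh93 (assumed form, prints the epoch for midnight directly):
#   printf '%(%s)T\n' 'midnight today'
# Equivalent with GNU date:
midnight=$(date -d '00:00 today' +%s)
date -d "@$midnight" '+%Y-%m-%d %H:%M:%S'   # round-trip to confirm 00:00:00
```

Both give the same value when run in the same time zone, since each resolves "midnight today" against local time before converting to epoch seconds.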