The date command returns the current date and time. I want the nearest 5-minute interval. For example:
# date
Thu Mar 15 16:06:42 IST 2012
In this case I want to return ...
Mar 15 16:05:00
Is this possible in a shell script, or is there a one-liner for it?
Update:
The date is in this format:
2012-03-10 12:59:59
Latest update:
The following command works as expected. Thanks for the response.
head r_SERVER_2012-03-10-12-55-00 | awk -F'^' '{print $7}' | awk '{split($2, a, ":"); printf "%s %s:%02d:00\n", $1, a[1],int(a[2]/5)*5}'
Correct result:
2012-03-10 12:55:00
But I want to show other fields as well, not just the date. The following does not work:
head r_SERVER_2012-03-10-12-55-00 | awk -F'^' '{print $1, $2, $7, $8}' | awk '{split($2, a, ":"); printf "%s %s:%02d:00\n", $1, a[1],int(a[2]/5)*5}'
Wrong result:
565 14718:00:00
It should be ...
565 123 2012-03-10 12:55:00 country
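The second awk no longer sees the date and time in $1 and $2: once the extra columns are printed, the date string from $7 (which itself contains a space) lands in $3 and $4. A hedged adjustment along these lines, assuming fields 1, 2 and 8 contain no spaces, should give the expected output:
head r_SERVER_2012-03-10-12-55-00 | awk -F'^' '{print $1, $2, $7, $8}' | awk '{split($4, a, ":"); printf "%s %s %s %s:%02d:00 %s\n", $1, $2, $3, a[1], int(a[2]/5)*5, $5}'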
date | awk '{split($4, a, ":"); printf "%s %s %s:%02d:00", $2, $3, a[1],int(a[2]/5)*5}'
$ date="2012-03-10 12:59:59"
$ read d h m s < <(IFS=:; echo $date)
$ printf -v var "%s %s:%02d:00" $d $h $(( 10#$m - (10#$m % 5) ))
$ echo "$var"
2012-03-10 12:55:00
I use process substitution in the read command to isolate changes to IFS in a subshell.
If you have GNU AWK available, you could use this:
| gawk '{t=mktime(gensub(/[-:]/," ","g")); print strftime("%Y-%m-%d %H:%M:%S",int(t/5)*5);}'
This uses the int() function, which truncates, which sort of means "round down". If you decide you'd prefer to "round" (i.e. go to the "nearest" 5 second increment), replace int(t/5) with int((t+2.5)/5).
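Applied to the timestamp format from the question, a sketch (GNU awk assumed; dividing by 300 instead of 5 gives 5-minute rather than 5-second granularity):
$ echo "2012-03-10 12:59:59" | gawk '{t=mktime(gensub(/[-:]/," ","g")); print strftime("%Y-%m-%d %H:%M:%S", int(t/300)*300)}'
2012-03-10 12:55:00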
Of course, if you're feeling masochistic, you can do this in pure shell. This one only truncates rather than rounding up.
[ghoti@pc ~]$ fmt="%Y-%m-%d %H:%M:%S"
[ghoti@pc ~]$ date "+$fmt"
2012-03-15 07:53:37
[ghoti@pc ~]$ date "+$fmt" | while read date; do stamp="`date -jf \"$fmt\" \"$date\" '+%s'`"; date -r `dc -e "$stamp 5/ 5* p"` "+$fmt"; done
2012-03-15 07:53:35
Note that I'm using FreeBSD. If you're using Linux, then you might need to use different options for the date command (in particular, the -r and -f options, I think). I'm running this in bash, but it should work in pure Bourne shell if that's what you need.
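On GNU/Linux a rough equivalent might look like this (a sketch, not tested against every date version; GNU date parses a string with -d and formats an epoch value with -d @SECONDS instead of BSD's -jf and -r):
$ fmt="%Y-%m-%d %H:%M:%S"
$ date "+$fmt" | while read -r d; do stamp=$(date -d "$d" +%s); date -d "@$((stamp / 5 * 5))" "+$fmt"; done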
Related
Writing a shell script that receives 3 arguments but within the script one of the commands needs to check the first value after a delimiter is applied
#!/bin/bash
awk -F'\t' '$1 ~ /$1/&&/$2/ {print $1FS$3}' $3
The script is called like this:
bash search.sh 5 AM filename.txt
And should execute as follows:
awk -F'\t' '$1 ~ /5/&&/AM/ {print $1FS$3}' filename.txt
The command works properly outside of the shell script, but right now returns nothing when used inside the shell script.
filename.txt :
03:00:00 AM John-James Hayward Evalyn Howell Chyna Mercado
04:00:00 AM Chyna Mercado Cleveland Hanna Katey Bean
05:00:00 AM Katey Bean Billy Jones Evalyn Howell
06:00:00 AM Evalyn Howell Saima Mcdermott Cleveland Hanna
07:00:00 AM Cleveland Hanna Abigale Rich Billy Jones
Expected output:
05:00:00 AM Billy Jones
Your arguments are not being expanded by the shell when you single quote them. You can use awk variables (as suggested by @JohnKugelman) to more clearly separate shell and awk code:
#!/bin/bash
awk -F'\t' -v p="$1" -v p2="$2" '($0 ~ p && $0 ~ p2) {print $1FS$3}' "$3"
I use generic variable names here (p and p2) to emphasize that you are not anchoring your regexes, so they really match anywhere in the whole line instead of against the hour and AM/PM fields as intended.
Don't embed shell variables in an awk script.
Here's a solution with some explanatory comments:
#!/bin/bash
[[ $# -lt 2 ]] && exit 1 ## two args required, plus files or piped/redirected input
hour="$(printf '%02d' "$1")" ## add a leading zero if neccesary
pm=${2^^} ## capitalise
shift 2
time="^$hour:.* $pm\$" ## match the whole time field
awk -F '\t' -v time="$time" \
'$1 ~ time {print $1,$3}' "$@" ## if it matches, print fields 1 and 3 (time, second name)
Usage is bash search.bash HOUR PM [FILE] ..., or ./search HOUR PM [FILE] ... if you make it executable. For example ./search 5 am file.txt or ./search 05 AM file.txt.
I'm assuming that every field is delimited by tabs.
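Assuming the sample file really is tab-delimited, with the time and AM/PM marker together in the first field, a run might look like this:
$ bash search.bash 5 am filename.txt
05:00:00 AM Billy Jones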
This is probably what you're trying to do (untested, using any awk):
#!/usr/bin/env bash
awk -v hour="$1" -v ampm="$2" '
BEGIN {
FS = OFS = "\t"
time = sprintf("%02d:00:00", hour)
}
($1 == time) && ($2 == ampm) {
print $1, $3
}
' "${3:--}"
Note that the above would work even if your input file contained 10:00:00 AM and the arguments used were 1 AM. Some of the other solutions would fail in that case because they use a partial regexp comparison, so the argument 1 would match the 1s in 10:00:00, 11:00:00, or 12:00:00.
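A quick sketch of that pitfall with a made-up input line:
$ printf '10:00:00\tAM\tBilly Jones\n' | awk -F'\t' -v hour='1' '$1 ~ hour {print "matched"}'
matched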
I have a txt file with 1000 rows of various epoch times (1396848990 = Sun Apr 6 22:36:30 PDT 2014). How can I count the number of rows whose times fall between 8 PM and midnight?
You can use awk to do the following:
awk 'int(strftime("%H", $1)) >= 20 {print $1}' $input_file | wc -l
It uses strftime() to convert Unix epoch timestamps to hours (%H), casts the result to an integer (int()) and compares it to 20. If the hour is 20 or greater, the timestamp is printed.
On the outside, wc can take care of counting the lines printed.
Of course, you can count with awk, too:
awk 'int(strftime("%H", $1)) >= 20 {n+=1} END{print n}' $input_file
It will silently initialize the variable n with zero and print the result at the end.
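For instance, the sample timestamp from the question falls at 22:36 local time (PDT), so it is counted; a sketch, keeping in mind that strftime() formats in the local time zone, so the result depends on TZ:
$ echo 1396848990 | gawk 'int(strftime("%H", $1)) >= 20 {n+=1} END{print n}'
1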
Edit: strftime() seems to exist in GNU awk:
$ awk -V
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
While awk surely is faster and easier to read, I think you can do this using only (modern) bash.
Something like this should work:
let counter=0; (while read -ra epoch; \
do [ $(printf "%(%H)T" ${epoch[0]}) -ge 20 ] && \
let "counter++"; done; echo $counter) <inputfile
It works like Pavel's answer but uses bash's printf to format the date (probably still strftime() behind the scenes).
reading to an array (-a) and let counter++ only works in modern bash (4?)
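A slightly tidier sketch of the same idea (assumes bash 4.2 or newer for printf's %(...)T format; the 10# prefix keeps hours like 08 from being read as octal):
counter=0
while read -r epoch _; do
    (( 10#$(printf '%(%H)T' "$epoch") >= 20 )) && (( ++counter ))
done < inputfile
echo "$counter"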
If your date command supports the -d (--date) option, how about:
from=$(date +%s -d "Apr 6 20:00:00 PDT 2014")
to=$(date +%s -d "Apr 7 0:00:00 PDT 2014")
awk -v from=$from -v to=$to '$0 >= from && $0 <= to' file | wc -l
I assume that there may be a better way to do it but the only one I came up with was using AWK.
I have a file with a naming convention like the following:
testfile_2016_03_01.txt
Using one command I am trying to shift the date by one day, giving testfile_20160229.txt
I started from using:
file=testfile_2016_03_01.txt
IFS="_"
arr=($file)
datepart=$(echo ${arr[1]}-${arr[2]}-${arr[3]} | sed 's/.txt//')
date -d "$datepart - 1 days" +%Y%m%d
The above works fine, but I really wanted to do it in AWK. The only thing I found was how to use date inside AWK:
new_name=$(echo ${file##.*} | awk -F'_' ' {
"date '+%Y%m%d'" | getline date;
print date
}')
echo $new_name
Okay, so two things happen here. For some reason $4 also contains .txt even though I removed it(?) with ##.*
And the main problem is that I don't know how to pass the variables to that date command; the below doesn't work:
awk -F'_' '{"date '-d "2016-01-01"' '+%Y%m%d'" | getline date; print date}')
Ideally I want 2016-01-01 to be variables coming from the file name ($2-$3-$4) and to subtract 1 day, but I think I'm getting way too many single and double quotes here and I'm getting lost.
Equivalent awk command:
file='testfile_2016_03_01.txt'
echo "${file%.*}" |
awk -F_ '{cmd="date -d \"" $2"-"$3"-"$4 " -1 days\"" " +%Y%m%d";
cmd | getline date; close(cmd); print date}'
20160229
With GNU awk for time functions:
$ file=testfile_2016_03_01.txt
$ awk -v file="$file" 'BEGIN{ split(file,d,/[_.]/); print strftime(d[1]"_%Y%m%d."d[5],mktime(d[2]" "d[3]" "d[4]" 12 00 00")-(24*60*60)) }'
testfile_20160229.txt
This might work for you:
file='testfile_2016_03_01.txt'
IFS='_.' read -ra a <<< "$file"
date -d "${a[1]}${a[2]}${a[3]} -1 day" "+${a[0]}_%Y%m%d.${a[4]}"
Content of the file is
Feb-01-2014 one two
Mar-02-2001 three four
I'd like to format the first field (the date) to %Y%m%d format
I'm trying to use a combination of awk and the date command, but somehow this is failing even though I get the feeling I'm almost there:
cat infile | awk -F"\t" '{$1=system("date -d " $1 " +%Y%m%d");print $1"\t"$2"\t"$3}' > test
This prints out date's usage page, which makes me think that the date command is triggered properly but there is something wrong with the argument. Do you see the issue somewhere? I'm not that familiar with awk.
You don't need date for this; it's simply a matter of rearranging the date string:
$ awk 'BEGIN{FS=OFS="\t"} {
split($1,t,/-/)
$1 = sprintf("%s%02d%s", t[3], (match("JanFebMarAprMayJunJulAugSepOctNovDec",t[1])+2)/3, t[2])
}1' file
20140201 one two
20010302 three four
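The month conversion works because match() returns the 1-based position of the month name in that string, so "Feb" is found at position 4 and (4+2)/3 gives 2. A quick check:
$ awk 'BEGIN{print (match("JanFebMarAprMayJunJulAugSepOctNovDec","Feb")+2)/3}'
2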
You can use:
while read -r a _; do
date -d "$a" '+%Y%m%d'
done < file
20140201
20010302
system() returns the exit code of the command, not its output, so the formatted date never ends up in $1.
Instead:
cat infile | awk -F"\t" '{"date -d " $1 " +%Y%m%d" | getline d;print d"\t"$2"\t"$3}'
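Much the same, but closing the command after each line so a long input file does not accumulate open pipes (a sketch):
awk -F"\t" '{cmd = "date -d " $1 " +%Y%m%d"; cmd | getline d; close(cmd); print d"\t"$2"\t"$3}' infile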
$ awk '{var=system("date -d "$1" +%Y%m%d | tr -d \"\\n\"");printf "%s\t%s\t%s\n", var, $2, $3}' file
201402010 one two
200103020 three four
I have a simple shell script, shown below, and I want to put a line break after each line returned by it.
#!/bin/bash
vcount=`db2 connect to db_lexus > /dev/null; db2 list tablespaces | grep -i "Tablespace ID" | wc -l`
db2pd -d db_lexus -tablespaces | grep -i "Tablespace Statistics" -A $vcount | awk '{printf ($2 $7)}'
The output is:
Statistics:IdFreePgs0537610230083224460850d
and I want the output to be something like this:
Statistics:
Id FreePgs
0 5376
1 0
2 3008
3 224
4 608
5 0
Is that possible to do with shell scripting?
Your problem can be reduced to the following:
$ cat infile
11 12
21 22
$ awk '{ printf ($1 $2) }' infile
11122122
printf is for formatted printing. I'm not even sure the behaviour of the above usage is defined, but it's not how printf is meant to be used. Consider:
$ awk '{ printf ("%d %d\n", $1, $2) }' infile
11 12
21 22
"%d %d\n" is an expression that describes how to format the output: "a decimal integer, a space, a decimal integer and a newline", followed by the numbers that go where the %d are. printf is very flexible, see the manual for what it can do.
In this case, we don't really need the power of printf, we can just use print:
$ awk '{ print $1, $2 }' infile
11 12
21 22
This prints the first and second field, separated by a space [1] – and print does add a newline without us telling it to.
[1] More precisely, "separated by the value of the output field separator OFS", which defaults to a space and is printed wherever we use , between two arguments. Forgetting the comma is a popular mistake that leads to no space between the fields.
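For contrast, dropping the comma keeps the newline that print adds but loses the space between the fields:
$ awk '{ print $1 $2 }' infile
1112
2122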
It looks like you just want to print columns 2 and 7 of whatever is passed to AWK. Try changing your AWK command to
awk '{print $2, $7}'
This will also add a line break at the end of each output line.
I realize you are asking about how to do something in a shell script, but it would certainly be a LOT easier to get this from the database using SQL:
#!/bin/bash
export DB2DBDFT=db_lexus
db2 "select tbsp_id, tbsp_free_pages \
from table(mon_get_tablespace('',-2)) as T \
order by tbsp_id"