Check if given strftime format matches a date - bash

I have a strftime time format, let's say (%Y-%m-%d %H:%M:%S), and a file which should contain this kind of data, e.g. (2012-02-11 17:15:00). I need to check whether the given pattern actually matches the data.
How to approach this? awk, date?
EDIT:
More info: The user enters the strftime format on input, then supplies a file which should contain those dates. I need to make sure the data are valid (that the user didn't make a mistake), so I need to check the rows in the input file and see whether they match the given pattern. Example:
user enters strftime format: (%Y-%m-%d %H:%M:%S)
input file: (2012-02-11 17:15:00) long sentence
VALID
user enters strftime format: Date[%Y.%m.%d %H:%M:%S]
input file: Date-2012.02.11 17:15:00- long sentence
INVALID

If you allow an external helper binary, I've written dateutils to batch process date and time data.
dconv -q -i '(%Y-%m-%d %H:%M:%S)' <<EOF
not a match: 2012-04-10 12:00:00
a match: (2012-04-10 13:00:00)
EOF
will give
2012-04-10T13:00:00
-i specifies the input format and -q suppresses warnings. dconv tries to convert input lines to output lines (in this case it converts matching lines to the ISO standard format).
So using this, a file matches completely if the number of input lines equals the number of output lines.
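A minimal sketch of that check, assuming dconv is on PATH and the data sits in a hypothetical input.txt:
in_lines=$(wc -l < input.txt)
out_lines=$(dconv -q -i '(%Y-%m-%d %H:%M:%S)' < input.txt | wc -l)
# every line matched the format iff the two counts agree
if [ "$in_lines" -eq "$out_lines" ]; then echo VALID; else echo INVALID; fi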

If you want to check against the current datetime:
echo "(2012-02-11 17:15:00)" | grep "$(date "+%Y-%m-%d %H:%M:%S")"
If you need some other datetime, you need GNU date (the -d option). This works for me:
echo "(2012-02-11 17:15:00)" |
grep "$(date -d "2012-02-11 17:15:00" "+%Y-%m-%d %H:%M:%S")"

I would take a brute force approach to this: replace any %X specifier with a corresponding regular expression, then you can filter out lines that don't match the resulting generated regex:
user_format="%Y-%m-%d"
awk -v fmt_string="$user_format" '
BEGIN {
gsub(/[][(){}?|*+.]/, "\\\\&", fmt_string) # protect any regex-special chars
gsub(/%Y/, "([0-9]{4})", fmt_string)
gsub(/%m/, "(0[1-9]|1[012])", fmt_string)
gsub(/%d/, "(0[1-9]|[12][0-9]|3[01])", fmt_string)
# and so on
}
$0 !~ "^" fmt_string {print "line " NR " does not match: " $0}
' filename
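Filling in the time specifiers the same way, here is a self-contained sketch run against the question's sample lines (GNU awk; the sub-regexes are illustrative, e.g. %S allows 60 for a leap second):
user_format='(%Y-%m-%d %H:%M:%S)'
printf '%s\n' '(2012-02-11 17:15:00) long sentence' 'Date-2012.02.11 17:15:00- long sentence' |
awk -v fmt_string="$user_format" '
BEGIN {
    gsub(/[][(){}?|*+.]/, "\\\\&", fmt_string)   # protect any regex-special chars
    gsub(/%Y/, "[0-9]{4}", fmt_string)
    gsub(/%m/, "(0[1-9]|1[012])", fmt_string)
    gsub(/%d/, "(0[1-9]|[12][0-9]|3[01])", fmt_string)
    gsub(/%H/, "([01][0-9]|2[0-3])", fmt_string)
    gsub(/%M/, "[0-5][0-9]", fmt_string)
    gsub(/%S/, "([0-5][0-9]|60)", fmt_string)
}
$0 !~ "^" fmt_string { print "INVALID: " $0 }
'
This prints INVALID: Date-2012.02.11 17:15:00- long sentence and stays silent on the matching first line.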

Related

AWK post-processing of multi-column data

I am working with a set of txt files containing multi-column information on one line. Within my bash script I use the following AWK expression to take the filename from each of the txt files as well as the number from the 5th column, and save it in 2-column format in a results.CSV file (piped to SED, which removes the path of the file and its extension from the final CSV file):
awk '-F, *' '{if(FNR==2) printf("%s| %s \n", FILENAME,$5) }' ${tmp}/*.txt | sed 's|\/Users/gleb/Desktop/scripts/clusterizator/tmp/||; s|\.txt||' >> ${home}/"${experiment}".csv
obtaining something (for 5 txt files) like this as CSV:
lig177_cl_5.2| -0.1400
lig331_cl_3.5| -8.0000
lig394_cl_1.9| -4.3600
lig420_cl_3.8| -5.5200
lig550_cl_2.0| -4.3200
How would it be possible to modify my AWK expression to exclude "_cl_x.x" from the name of each txt file, as well as add the name of the CSV as a comment on the first line of the resulting CSV file:
# results.CSV
lig177| -0.1400
lig331| -8.0000
lig394| -4.3600
lig420| -5.5200
lig550| -4.3200
Based on the rest of the pipe, I think you want to do something like this and get rid of the sed invocations:
awk -F', *' 'FNR==2 {f=FILENAME;
sub(/.*\//,"",f);
sub(/_.*/ ,"",f);
printf("%s| %s\n", f, $5) }' "${tmp}"/*.txt >> "${home}/${experiment}.csv"
this will convert
/Users/gleb/Desktop/scripts/clusterizator/tmp/lig177_cl_5.2.txt
to
lig177
The pattern replacement is generic:
/path/to/the/file/filename_otherstringshere...
will be reduced to just filename, from the last / char to the first _ char. This relies on the greedy matching of regex patterns.
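A quick demo of the two substitutions:
$ echo "/Users/gleb/Desktop/scripts/clusterizator/tmp/lig177_cl_5.2.txt" |
    awk '{ sub(/.*\//, ""); sub(/_.*/, ""); print }'
lig177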
For the output filename, it's easier to do it before the awk call, since it's only one line:
$ echo "${experiment}.csv" > "${home}/${experiment}.csv"
$ awk ... >> "${home}/${experiment}.csv"

How to sort array of strings by function in shell script

I have the following list of strings in shell script:
something-7-5-2020.dump
another-7-5-2020.dump
anoter2-6-5-2020.dump
another-4-5-2020.dump
another2-4-5-2020.dump
something-2-5-2020.dump
another-2-5-2020.dump
8-1-2021
26-1-2021
20-1-2021
19-1-2021
3-9-2020
29-9-2020
28-9-2020
24-9-2020
1-9-2020
6-8-2020
20-8-2020
18-8-2020
12-8-2020
10-8-2020
7-7-2020
5-7-2020
27-7-2020
7-6-2020
5-6-2020
23-6-2020
18-6-2020
28-5-2020
26-5-2020
9-12-2020
28-12-2020
15-12-2020
1-12-2020
27-11-2020
20-11-2020
19-11-2020
18-11-2020
1-11-2020
11-11-2020
31-10-2020
29-10-2020
27-10-2020
23-10-2020
21-10-2020
15-10-2020
23-09-2020
So my goal is to sort them by date, but the date is in dd-mm-yyyy and d-m-yyyy format, and sometimes there's a word in front, like word-dd-mm-yyyy. I would like to create a function to sort the values like in any other language, so that it ignores the first word, casts the date to a common format, and compares that format. In JavaScript it would be something like:
arrayOfStrings.sort((a, b) => functionToOrderStrings())
My code to obtain the array is the following:
dumps=$(gsutil ls gs://organization-dumps/ambient | sed "s:gs\://organization-dumps/ambient/::" | sed '/^$/d' | sed 's:/$::' | sort --reverse --key=3 --key=2 --key=1 --field-separator=-)
echo "$dumps"
I would like to say that I've already searched Stack Overflow and none of the answers helped me, because all of them are oriented to sorting dates already in a correct format, and that's not my case.
If you have the results in a pipeline, involving an array seems completely superfluous here.
You can apply a technique called a Schwartzian transform: add a prefix to each line with a normalized version of the data so it can be easily sorted, then sort, then discard the prefix.
I'm guessing something like the following:
gsutil ls gs://organization-dumps/ambient |
awk '{ sub("gs:\/\/organization-dumps/ambient/", "");
if (! $0) next;
sub("/$", "");
d = $0;
sub(/^[^0-9][^-]*-/, "", d);
sub(/[^0-9]*$/, "", d);
split(d, w, "-");
printf "%04i-%02i-%02i\t%s\n", w[3], w[2], w[1], $0 }' |
sort -n | cut -f2-
In so many words, we are adding a tab-delimited field in front of every line, then sorting on that, then discarding the first field with cut -f2-. The field extraction contains some assumptions which seem to be valid for your test data, but may need additional tweaking for corner cases in real data, e.g. if the label before the date could itself contain a number surrounded by dashes.
If you want to capture the result in a variable, like in your original code, that's easy to do; but usually, you should just run everything in a pipeline.
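For example (a sketch; normalize.awk is a hypothetical file holding the Awk script above):
dumps=$(gsutil ls gs://organization-dumps/ambient |
    awk -f normalize.awk | sort -n | cut -f2-)
echo "$dumps"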
Notice that I factored your multiple sed scripts into the Awk script, too, some of that with a fair amount of guessing as to what the input looks like and what the sed scripts were supposed to accomplish. (Perhaps also note that sed, like Awk, is a scripting language; to run several sed commands on the same input, just put them after each other in the same sed script.)
1. Preprocess input to be in the format you want it to be for sorting.
2. Sort.
3. Remove the artifacts from step 1.
The following:
sed -E '
# extract the date and put it in first column separated by tab
# this could be better, it's just an example
s/(.*-)?([0-9]?[0-9]-[0-9]?[0-9]-[0-9]{4})/\2\t&/;
# If day is a single digit, add a zero in front
s/^([0-9]-)/0\1/;
# If month is a single digit, add a zero in front
s/^([0-9][0-9]-)([0-9]-)/\10\2/
# year in front? no idea - shuffle the way you want
s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\2-\1/
' input.txt | sort | cut -f2-
outputs:
another-2-5-2020.dump
something-2-5-2020.dump
another-4-5-2020.dump
another2-4-5-2020.dump
anoter2-6-5-2020.dump
another-7-5-2020.dump
something-7-5-2020.dump
26-5-2020
28-5-2020
5-6-2020
7-6-2020
18-6-2020
23-6-2020
5-7-2020
7-7-2020
27-7-2020
6-8-2020
10-8-2020
12-8-2020
18-8-2020
20-8-2020
1-9-2020
3-9-2020
23-09-2020
24-9-2020
28-9-2020
29-9-2020
15-10-2020
21-10-2020
23-10-2020
27-10-2020
29-10-2020
31-10-2020
1-11-2020
11-11-2020
18-11-2020
19-11-2020
20-11-2020
27-11-2020
1-12-2020
9-12-2020
15-12-2020
28-12-2020
8-1-2021
19-1-2021
20-1-2021
26-1-2021
Using GNU awk:
gsutil ls gs://organization-dumps/ambient | awk '{ match($0,/[[:digit:]]{1,2}-[[:digit:]]{1,2}-[[:digit:]]{4}/);dayt=substr($0,RSTART,RLENGTH);split(dayt,map,"-");length(map[1])==1?map[1]="0"map[1]:map[1]=map[1];length(map[2])==1?map[2]="0"map[2]:map[2]=map[2];map1[mktime(map[3]" "map[2]" "map[1]" 00 00 00")]=$0 } END { PROCINFO["sorted_in"]="#ind_num_asc";for (i in map1) { print map1[i] } }'
Explanation:
gsutil ls gs://organization-dumps/ambient | awk '{
    match($0,/[[:digit:]]{1,2}-[[:digit:]]{1,2}-[[:digit:]]{4}/)   # Locate the date within the line
    dayt=substr($0,RSTART,RLENGTH)                                 # Extract the date
    split(dayt,map,"-")                                            # Split the date into the array map on "-"
    length(map[1])==1?map[1]="0"map[1]:map[1]=map[1]               # Pad the day with "0" if required
    length(map[2])==1?map[2]="0"map[2]:map[2]=map[2]               # Pad the month with "0" if required
    map1[mktime(map[3]" "map[2]" "map[1]" 00 00 00")]=$0           # Use the epoch date built from the map array as the index of map1, with the line as the value
}
END {
    PROCINFO["sorted_in"]="#ind_num_asc"                           # Traverse the array indices in ascending numeric order
    for (i in map1) {
        print map1[i]                                              # Print the values (lines) of map1 in that order
    }
}'
Using GNU awk, you can do this fairly easily:
awk 'BEGIN{PROCINFO["sorted_in"]="#ind_num_asc"; FS="."}
{n=split($1,t,"-"); a[t[n]*10000 + t[n-1]*100 + t[n-2]]=$0}
END {for(i in a) print a[i]}' file
Essentially, we are asking GNU awk to traverse the array by index in ascending numeric order. For each line read, we extract the date. The date is always located before the <dot> character and thus always in field 1 if the dot is the field separator (FS="."). We split the first field on the hyphen and use the field count n to pick out the last three parts (t[n] is the year, t[n-1] the month, t[n-2] the day). We convert the date simplistically to a single number (YYYY*10000 + MM*100 + DD, which is unambiguous since DD < 100 and MM*100 < 10000); for example, 7-5-2020 becomes 2020*10000 + 5*100 + 7 = 20200507. We then ask awk to sort by that number.
It is now possible to combine the full pipeline into a single awk:
$ gsutil ls gs://organization-dumps/ambient \
| awk 'BEGIN{PROCINFO["sorted_in"]="#ind_num_asc"; FS="."}
{sub("gs://organization-dumps/ambient/",""); sub("/$","")}
(NF==0){next}
{n=split($1,t,"-"); a[t[n]*10000 + t[n-1]*100 + t[n-2]]=$0}
END {for(i in a) print a[i]}'

Split file by date and keep header in Bash

I need to split a TSV file by date using whatever standard CLI tools come with OS X 10.10; e.g. sed, awk, etc. FYI the shell is Bash
The input file has a header row and follows a tab separated format (the date and time is in the first column) — I'm adding "\t" below to show the tabs, and "…" to indicate the rows have many more columns:
Transaction Date\t Account Number\t…
9/16/2004 12:00:00 AM\t ABC00147223\t…
9/17/2004 12:00:00 AM\t ABC00147223\t…
10/05/2004 12:00:00 AM\t ABC00147223\t…
The output should be:
A separate file for each unique year AND month (based on the example above I would get 2 output files: 9/2004 and 10/2004)
Maintain the first/header row of the original file
Filename in the form YYYYMM.txt
Thank you for your help.
If you want to do it purely in the bash shell, do as below...
#!/bin/bash
datafile=inputdatafile.dat
ctr=0
while IFS= read -r line
do
    # counter to keep track of the line number
    ctr=$((ctr + 1))
    # skip the header line
    if [[ $ctr -gt 1 ]]
    then
        # build the filename from the date field of the record
        vdate=${line%% *}                   # e.g. 9/16/2004
        vmonth1=${vdate%%/*}                # month is the first part of M/D/YYYY
        vmonth=$(printf "%02d" "$vmonth1")  # month with padding 0
        vyear=${vdate##*/}                  # year
        vfilename="${vyear}${vmonth}.txt"   # filename in YYYYMM.txt format
        # if the file doesn't exist yet, put the header record in it first
        if [ ! -f "$vfilename" ]; then
            head -1 "$datafile" > "$vfilename"
        fi
        # append the record to that file
        echo "$line" >> "$vfilename"
    fi
done < "$datafile"
Not sure how big your data files are, but it's never a good idea to parse large files in a shell read loop; use utilities like awk, sed, grep, etc. for that instead.
For big files, a nawk/gawk one-liner like the one below will do all you need.
# use nawk or gawk if you don't get the expected results using awk
$ nawk '{if(NR==1)h=$0;} {if(NR>1){ split($1,a,"/"); fn=sprintf("%04d%02d.txt",a[3],a[1]); if(system( "[ ! -f " fn " ] ")==0)print h >> fn; print >> fn;} }' inputdatafile.dat
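The same logic expanded for readability (a sketch only, no behavioral change):
nawk '{
    if (NR==1) h=$0                              # remember the header line
    if (NR>1) {
        split($1,a,"/")                          # a[1]=month, a[2]=day, a[3]=year
        fn=sprintf("%04d%02d.txt", a[3], a[1])   # target file, YYYYMM.txt
        if (system("[ ! -f " fn " ] ")==0)       # the file does not exist yet...
            print h >> fn                        # ...so write the header into it once
        print >> fn                              # append the record itself
    }
}' inputdatafile.dat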

Shell script to extract data from file between two date ranges

I have a huge file, with each line starting with a timestamp as shown below. I need a way to grep lines between two dates. Is there an easy way to do this using sed or awk, instead of extracting the date fields from each line and comparing day/month/year?
For example, I need to extract the data between 2013-06-01 and 2013-06-15 by checking the timestamp in the first field.
File contents:
2013-06-02T19:44:59;(3305,3308,2338,102116);aaaa;xxxx
2013-06-14T20:01:58;(2338);aaaa;xxxx
2013-06-12T20:01:58;(3305,3308,2338);bbbb;xxxx
2013-06-13T20:01:59;(3305,3308,2338,102116);bbbb;xxxx
2013-06-13T20:02:53;(2338);bbbb;xxxx
2013-06-13T20:02:53;(3305,3308,2338);aaaa2;xxxx
2013-06-13T20:02:54;(3305,3308,2338,102116);aaaa2;xxxx
2013-06-14T20:31:58;(2338);aaaa2;xxxx
2013-06-14T20:31:58;(3305,3308,2338);aaaa;xxxx
2013-06-15T20:31:59;(3305,3308,2338,102116);bbbb;xxxx
2013-06-16T20:32:53;(2338);aaaa;xxxx
2013-06-16T20:32:53;(3305,3308,2338);aaaa2;xxxx
2013-06-16T20:32:54;(3305,3308,2338,102116);bbbb;xxxx
It may not have been your first choice, but Perl is great for this task.
perl -ne "print if ( m/2013-06-02/ .. m/2013-06-15/ )" myfile.txt
The way this works is that once the first trigger is matched (i.e. m/2013-06-02/), the condition (print) is executed on each line until the second trigger is matched (i.e. m/2013-06-15/).
However, this trick won't work if you specify m/2013-06-01/ as a trigger, because that date never appears in your file.
A less exciting technique is to extract some text from each line and test that:
perl -ne 'if ( m/^([0-9-]+)/ ) { $date = $1; print if ( $date ge "2013-06-01" and $date le "2013-06-15" ) }' myfile.txt
(Tested both expressions and working).
You can try something like:
awk -F'-|T' '$1==2013 && $2==06 && $3>=01 && $3<=15' hugefile
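A more general sketch compares the ISO date prefix as a string, which works for any range, even one crossing a month boundary, because the timestamps sort lexically (dates assumed zero-padded):
awk -F'T' '$1 >= "2013-06-01" && $1 <= "2013-06-15"' hugefile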
You can use sed to print all lines between two patterns. In this case, you will have to sort the file first because the dates are interleaved. Note that the range ends at the first line matching the end pattern, so if several lines shared the end date, only the first of them would be printed:
$ sort file | sed -n '/2013-06-12/,/2013-06-15/p'
2013-06-12T20:01:58;(3305,3308,2338);bbbb;xxxx
2013-06-13T20:01:59;(3305,3308,2338,102116);bbbb;xxxx
2013-06-13T20:02:53;(2338);bbbb;xxxx
2013-06-13T20:02:53;(3305,3308,2338);aaaa2;xxxx
2013-06-13T20:02:54;(3305,3308,2338,102116);aaaa2;xxxx
2013-06-14T20:01:58;(2338);aaaa;xxxx
2013-06-14T20:31:58;(2338);aaaa2;xxxx
2013-06-14T20:31:58;(3305,3308,2338);aaaa;xxxx
2013-06-15T20:31:59;(3305,3308,2338,102116);bbbb;xxxx

Bash script to convert a date and time column to unix timestamp in .csv

I am trying to create a script to convert two columns in a .csv file, date and time, into Unix timestamps. So I need to get the date and time columns from each row, convert them, and insert an additional column at the end containing the timestamp.
Could anyone help me? So far I have discovered the Unix command to convert any given date and time to a Unix timestamp:
date -d "2011/11/25 10:00:00" "+%s"
1322215200
I have no experience with bash scripting; could anyone get me started?
Examples of my columns and rows:
Columns: Date, Time,
Row 1: 25/10/2011, 10:54:36,
Row 2: 25/10/2011, 11:15:17,
Row 3: 26/10/2011, 01:04:39,
Thanks so much in advance!
You don't provide an excerpt from your csv file, so I'm using this one:
[foo.csv]
2011/11/25;12:00:00
2010/11/25;13:00:00
2009/11/25;19:00:00
Here's one way to solve your problem:
$ cat foo.csv | while read line ; do echo $line\;$(date -d "${line//;/ }" "+%s") ; done
2011/11/25;12:00:00;1322218800
2010/11/25;13:00:00;1290686400
2009/11/25;19:00:00;1259172000
(EDIT: Removed an uneccessary variable.)
(EDIT2: Altered the date command so the script actually works.)
This should do the job:
awk 'BEGIN{FS=OFS=", "}{t=$1" "$2; "date -d \""t"\" +%s"|getline d; print $1,$2,d}' yourCSV.csv
Note: you didn't give any example, and you mentioned csv, so I assume that the column separator in your file is a comma.
test
kent$ echo "2011/11/25, 10:00:00"|awk 'BEGIN{FS=OFS=", "}{t=$1" "$2; "date -d \""t"\" +%s"|getline d; print $1,$2,d}'
2011/11/25, 10:00:00, 1322211600
Now two improvements:
First: No need for cat foo.csv, just stream that via < foo.csv into the while loop.
Second: No need for echo & tr to create the date string format. Just use bash's internal pattern substitution and do it in place:
while read line ; do echo ${line}\;$(date -d "${line//;/ }" +'%s'); done < foo.csv
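If your file really uses the DD/MM/YYYY order shown in the question, GNU date won't parse that directly, so rearrange the date first; a hypothetical sketch (input.csv stands in for your comma-separated file):
while IFS=', ' read -r d t _ ; do
    IFS=/ read -r dd mm yyyy <<< "$d"                 # split DD/MM/YYYY
    echo "$d, $t, $(date -d "$yyyy-$mm-$dd $t" +%s)"  # ISO order for date -d
done < input.csv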
