Insert a date in a column using awk - bash

I'm trying to format a date in a column of a csv.
The input is something like: 28 April 1966
And I'd like this output: 1966-04-28
which can be obtained with this command:
date -d "28 April 1966" +%F
So I thought of combining awk and this command to format the entire column, but I can't figure out how.
Edit:
Example of input (the "|" separators are in fact tabs):
1 | 28 April 1966
2 | null
3 | null
4 | 30 June 1987
Expected output :
1 | 1966-04-28
2 | null
3 | null
4 | 1987-06-30

A simple way is
awk -F '\\| ' -v OFS='| ' '{ cmd = "date -d \"" $3 "\" +%F 2> /dev/null"; cmd | getline $3; close(cmd) } 1' filename
That is:
{
    cmd = "date -d \"" $3 "\" +%F 2> /dev/null"   # build shell command
    cmd | getline $3                              # run, capture output
    close(cmd)                                    # close pipe
}
1                                                 # print
This works because date doesn't print anything to its stdout if the date is invalid, so the getline fails and $3 is not changed.
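You can check that behavior directly; GNU date sends the complaint to stderr and writes nothing to stdout (the exact error wording may vary):
$ date -d "null" +%F
date: invalid date 'null'
$ date -d "null" +%F 2> /dev/null | wc -c
0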
Caveats to consider:
For very large files, this will spawn a lot of shells and processes in those shells (one each per line). This can become a noticeable performance drag.
Be wary of code injection. If the CSV file comes from an untrustworthy source, this approach is difficult to defend against an attacker, and you're probably better off going the long way around, parsing the date manually with gawk's mktime and strftime.
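For reference, a minimal sketch of that manual route, assuming GNU awk (mktime and strftime are gawk extensions) and the tab-separated layout used below:
# gawk only: parse "28 April 1966" in $3 without spawning a shell
BEGIN {
    FS = OFS = "\t"
    split("January February March April May June July August September October November December", mn, " ")
    for (i in mn) m[mn[i]] = i              # month name -> month number
}
$3 != "null" {
    split($3, a, " ")                       # a[1]=day, a[2]=month name, a[3]=year
    ts = mktime(a[3] " " m[a[2]] " " a[1] " 12 0 0")   # noon avoids DST edge cases
    if (ts != -1) $3 = strftime("%F", ts)   # mktime returns -1 on invalid input
}
1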
EDIT re: comment: To use tabs as delimiters, the command can be changed to
awk -F '\t' -v OFS='\t' '{ cmd = "date -d \"" $3 "\" +%F 2> /dev/null"; cmd | getline $3; close(cmd) } 1' filename
EDIT re: comment 2: If performance is a worry, as it appears to be, spawning processes for every line is not a good approach. In that case, you'll have to do the parsing manually. For example:
BEGIN {
    OFS = FS
    m["January"]   =  1
    m["February"]  =  2
    m["March"]     =  3
    m["April"]     =  4
    m["May"]       =  5
    m["June"]      =  6
    m["July"]      =  7
    m["August"]    =  8
    m["September"] =  9
    m["October"]   = 10
    m["November"]  = 11
    m["December"]  = 12
}
$3 !~ /null/ {
    split($3, a, " ")
    $3 = sprintf("%04d-%02d-%02d", a[3], m[a[2]], a[1])
}
1
Put that in a file, say foo.awk, and run awk -F '\t' -f foo.awk filename.csv.
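With the sample input from the question (again showing the tab separators as "|"), that run should print:
1 | 1966-04-28
2 | null
3 | null
4 | 1987-06-30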

This should work with your given input
awk -F'\\|' -vOFS="|" '!/null/{cmd="date -d \""$3"\" +%F";cmd | getline $3;close(cmd)}1' file
Output
| 1 |1966-04-28
| 2 | null
| 3 | null
| 4 |1987-06-30

I would suggest using a language that supports parsing dates, like perl:
$ cat file
1 28 April 1966
2 null
3 null
4 30 June 1987
$ perl -F'\t' -MTime::Piece -lane 'print "$F[0]\t",
$F[1] eq "null" ? $F[1] : Time::Piece->strptime($F[1], "%d %B %Y")->strftime("%F")' file
1 1966-04-28
2 null
3 null
4 1987-06-30
The Time::Piece core module allows you to parse and format dates, using the standard format specifiers of strftime. This solution splits the input on a tab character and modifies the format if the second field is not "null".
This approach will be much faster than using system() calls or spawning a subprocess per line, as everything is done in native Perl.

Here is how you can do this in pure bash and avoid calling system() or getline from awk:
while IFS=$'\t' read -ra arr; do
    [[ ${arr[1]} != "null" ]] && arr[1]=$(date -d "${arr[1]}" +%F)
    printf "%s\t%s\n" "${arr[0]}" "${arr[1]}"
done < file
1 1966-04-28
2 null
3 null
4 1987-06-30

Only one date call is needed, and no code injection is possible; see the following:
This script extracts the dates (using awk) into a temporary file, processes them with a single date call, and merges the results back (using awk).
Code
awk -F '\t' 'match($3,/null/) { $3 = "0000-01-01" } { print $3 }' input > temp.$$
date --file=temp.$$ +%F > dates.$$
awk -F '\t' -v OFS='\t' 'BEGIN {
    while ( getline < "'"dates.$$"'" > 0 )
    {
        f1_counter++
        if ($0 == "0000-01-01") { $0 = "null" }
        date[f1_counter] = $0
    }
}
{ $3 = date[NR] }
1' input
One-liner using bash process substitution (no temporary files):
inputfile=/path/to/input
awk -F '\t' -v OFS='\t' 'BEGIN {while ( getline < "'<(date -f <(awk -F '\t' 'match($3,/null/) { $3 = "0000-01-01" } { print $3 }' "$inputfile") +%F)'" > 0 ){f1_counter++; if ($0 == "0000-01-01") {$0 = "null"}; date[f1_counter] = $0}}{$3 = date[NR]}1' "$inputfile"
Details
Here is how it can be used:
# configuration
input=/path/to/input
temp1=temp.$$
temp2=dates.$$
output=output.$$
# create the sample file (optional)
#printf "\t%s\n" $'1\t28 April 1966' $'2\tnull' $'3\tnull' $'4\t30 June 1987' > "$input"
# Extract all dates
awk -F '\t' 'match($3,/null/) { $3 = "0000-01-01" } { print $3 }' "$input" > "$temp1"
# transform the dates
date --file="$temp1" +%F > "$temp2"
# merge csv with transformed date
awk -F '\t' -v OFS='\t' 'BEGIN {while ( getline < "'"$temp2"'" > 0 ){f1_counter++; if ($0 == "0000-01-01") {$0 = "null"}; date[f1_counter] = $0}}{$3 = date[NR]}1' "$input" > "$output"
# print the output
cat "$output"
# cleanup
rm "$temp1" "$temp2" "$output"
#rm "$input"
Caveats
Using "0000-01-01" as a temporary placeholder for invalid (null) dates
The code should be faster than the other methods that call date once per line, but it reads the input file twice.

Related

convert table into comma separated in text file using bash

I have a text file like this:
+------------------+------------+----------+
| col_name | data_type | comment |
+------------------+------------+----------+
| _id | bigint | |
| starttime | string | |
+------------------+------------+----------+
How can I get a result like this using bash:
(_id bigint, starttime string )
So just the column names and types.
# remove first 3 lines
sed -e '1,3d' < columnnames.txt > clean.txt
# remove first character from each line (in place; "< clean.txt > clean.txt" would truncate the file)
sed -i 's/^.//' clean.txt
# remove last character from each line
sed -i 's/.$//' clean.txt
# remove certain characters ('-' placed last so it is not read as a range)
sed -i 's/[+|-]//g' clean.txt
# remove last line
sed -i '$ d' clean.txt
So this is what I have so far; if there is a better implementation, let me know!
Something similar, using only awk:
awk -F ' *[|]' 'BEGIN {printf("(")} NR>3 && NF>1 {printf("%s%s%s", NR>4 ? "," : "", $2, $3)} END {printf(" )\n")}' columnnames.txt
# Set the field separator to vertical bar surrounded by any number of spaces.
# BEGIN and END blocks print the opening and closing parens
# The line between skips the header lines and any line starting with '+'
$ awk -F"[[:space:]]*[|][[[:space:]]*" '
BEGIN { printf "%s", "( "}
NR > 3 && $0 !~ /^[+]/ { printf("%s%s %s", c, $2, $3); c = ", " }
END { print " )" }' file
( _id bigint, starttime string )
$ awk -F'[| ]+' 'NR>3 && NF>1{v=v s $2" "$3; s=", "} END{print "("v")"}' file
(_id bigint, starttime string)
I would do this:
cat input.txt \
| tail -n +4 \
| awk -F'[^a-zA-Z_]+' '{ for(i=1;i<=NF;i++) { printf $i" " }}'
It's a little bit shorter.
Another way to implement Diego Torres Milano's solution as a stand-alone awk program:
tableconvert
#!/usr/bin/env -S awk -f
BEGIN {
    FS = "[[:space:]]*[|][[:space:]]*"
    printf "%s", "( "
}
{
    if (FNR <= 3 || match($0, /^[+]/))
        next
    else {
        printf("%s%s %s", c, $2, $3)
        c = ", "
    }
}
END {
    print " )"
}
Make tableconvert an executable:
chmod +x tableconvert
Run tableconvert on intablefile.txt
./tableconvert intablefile.txt
( _id bigint, starttime string )
With the added bonus that using FNR instead of NR allows the awk program to process multiple input files given as arguments:
./tableconvert infille1.txt infile2.txt infile3.txt ...
A variation on the other answers: use awk with the field separator set to '|' with optional spaces on either side (as GNU awk allows), take fields 2 and 3 as the wanted fields from each record, and format the output as described in the question, with the closing " )" provided in the END rule:
$ awk -F' *\\| *' '
NR>3 && $1~/^[+]/{exit} # exit condition first line w/^+
NR==4{$1=$1; printf "(%s %s", $2,$3} # 1st data record is 4
NR>4{$1=$1; printf ", %s %s", $2,$3} # process all remainng records
END{print " )"} # output closing " )"
' table
(_id bigint, starttime string )
(note: if you don't want the space before the closing ")", just remove it from the print in the END rule)
Rather than using a BEGIN rule, the first record of interest (NR==4) is used to provide the opening "(". Look things over and let me know if you have questions.

add output in command

I started with scripting only a few weeks ago, or at least I am trying...
bash-4.3# /usr/openv/netbackup/bin/admincmd/bperror -backstat -hoursago 72 \
| grep xxx1 \
| awk '{ print $1 "\t" $19 "\t" $12 "\t" $14 "\t" $16 }' >> test
bash-4.3# cat test
1535229470 0 xxx1 policy1 sched1
1535314239 0 xxx1 policy1 sched1
1535400749 0 xxx1 policy1 sched1
Now I want to transform the first entry (timestamp) into a readable date
date=$(awk 'NR == 1 {print $1}' test); bpdbm -ctime $date |awk '{ print $3 " " $4 " " $5 " " $6 " " $8 }'
Sat Aug 25 22:37:50 2018
How can I now replace the first entry on each line with this output, or change the first command to do it directly?
Thank you very much!
Using GNU awk:
awk '$1~/[0-9]+/{$1=strftime(PROCINFO["strftime"],$1)}1' file
This replaces the timestamp in the first field of the line with the associated readable date using the function strftime.
The date format is the default one, PROCINFO["strftime"], as described in the gawk man page.
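If you want a specific layout rather than the default, pass an explicit format string to strftime; for example (a hedged variant using the standard %F %T specifiers):
awk '$1 ~ /^[0-9]+$/ { $1 = strftime("%F %T", $1) } 1' file
This would turn the first line into something like 2018-08-25 22:37:50 0 xxx1 policy1 sched1 (reassigning $1 rebuilds the record, so the tabs are replaced by the default OFS, a single space).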

Multiple condition in nawk command

I have a nawk command where I need to format the data based on its length. I always need to keep the first 6 digits and the last 4 digits and put x's in the middle. Can you help me fine-tune the script below?
#!/bin/bash
FILES=/export/home/input.txt
cat $FILES | nawk -F '|' '{
    if (length($3) >= 13)
        print $1 "|" $2 "|" substr($3,1,6) "xxxxxx" substr($3,13,4) "|" $4 "|" $5
    else
        print $1 "|" $2 "|" $3 "|" $4 "|" $5
}' > output.txt
input.txt
"2"|"X"|"A"|"ST"|"245552544555201"|"1111-11-11"|75.00
"6"|"Y"|"D"|"VT"|"245652544555200"|"1111-11-11"|95.00
"5"|"X"|"G"|"ST"|"3445625445552023"|"1111-11-11"|75.00
"3"|"Y"|"S"|"VT"|"24532254455524"|"1111-11-11"|95.00
output.txt
"X"|"ST"|"245552544555201"|"245552xxxxx5201"
"Y"|"VT"|"245652544555200"|"245652xxxxx5200"
"X"|"ST"|"3445625445552023"|"344562xxxxxx2023"
"Y"|"VT"|"24532254455524"|"245322xxxx5524"
Try this:
$ awk '
BEGIN { FS = OFS = "|" }
length($5) >= 13 {
    fld5  = $5
    start = substr($5,1,7)
    end   = substr($5,length($5)-4)
    gsub(/./,"x",fld5)
    sub(/^......./,start,fld5)
    sub(/.....$/,end,fld5)
    $1=$2; $2=$4; $3=$5; $4=fld5; NF-=3
}1' file
"X"|"ST"|"245552544555201"|"245552xxxxx5201"
"Y"|"VT"|"245652544555200"|"245652xxxxx5200"
"X"|"ST"|"3445625445552023"|"344562xxxxxx2023"
"Y"|"VT"|"24532254455524"|"245322xxxx5524"

create an Excel file using shell script

I have a bunch of text files in a directory, and I need to read them, extract some information, and keep it in an Excel or text file:
name1_1.txt
count: 10
totalcount: 30
percentage:33
total no of a's: 20
total no of b's: 20
etc...
name2_2.txt
count: 20
totalcount: 40
percentage:50
total no of a's: 10
total no of b's: 30
etc...
etc...
output
name1 name2
count 10 20
totalcount 30 40
percentage 33 50
I want to keep the output in a file (example.txt or .csv) in the same directory.
Can I get help with this?
Here is what I tried in a shell script, but I can't create the tab-separated output file I need:
#$ -S /bin/bash
for sample in *.txt; do
    header=$(echo ${sample} | awk '{sub(/_/," ")}1' | awk '{print $1}')
    echo -en $header"\t"
done
echo -e ' \t '
echo "count"
for sample in *.txt; do
    grep "count:" $sample | awk -F: $'\t''{print $2}'
done
echo "totalcount"
for sample in *.txt; do
    grep "totalcount:" $sample | awk -F: $'\t''{print $2}'
done
echo "percentage"
for sample in *.txt; do
    grep "percentage:" $sample | awk -F: $'\t''{print $2}'
done
You can see if this does what you want:
awk -F":" 'BEGIN { DELIM="\t" } \
last_filename != FILENAME { \
split( FILENAME, farr, "_" ); header = header DELIM farr[1]; \
last_filename = FILENAME; i=0 } \
$1 ~ /count/ || $1 ~ /totalcount/ || $1 ~/percentage/ \
{ a[i++]= NR==FNR ? $1DELIM$2 : a[i]DELIM$2 } \
END { print header; for( j in a ) { print a[j] } }' name*.txt
where I've tried to break it up into multiple lines for "easier" reading. You can just remove the trailing "\" from each line and concatenate the lines to re-make it as a one-liner. If I edit this answer one more time, I'll just make it an executable awk file.
The awk is setting a DELIM for the output to tab in the BEGIN block.
The FILENAME is cleaned up and appended to the header
It takes the column names from the first file, as well as the data, and puts them into an array at index i. For each subsequent file, it just appends the data.
At the END, the header is output, and then the contents of the array are output.
I get the following output then:
name1 name2
count 10 20
totalcount 30 40
percentage 33 50
This will now only take the rows indicated in the data, provided $1 matches count, totalcount, or percentage.
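If the labels are fixed, a shorter variant is possible. Here is a sketch that keys the array on the label itself, assuming every file contains all three labels and follows the name1_1.txt naming scheme:
awk -F': *' '
FNR == 1 { split(FILENAME, f, "_"); header = header "\t" f[1] }   # file name up to "_" becomes a column header
$1 == "count" || $1 == "totalcount" || $1 == "percentage" { row[$1] = row[$1] "\t" $2 }
END {
    print header
    n = split("count totalcount percentage", order, " ")          # fixed row order
    for (i = 1; i <= n; i++)
        print order[i] row[order[i]]
}' name*.txt
The exact string comparisons on $1 also sidestep the substring overlap between count and totalcount.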

AWK: Compare two CSV files

I have two CSV files and I want to compare them using AWK and generate a new file.
file1.csv:
"no","loc"
"abc121","C:/pro/in"
"abc122","C:/pro/abc"
"abc123","C:/pro/xyz"
"abc124","C:/pro/in"
file2.csv:
"no","loc"
"abc121","C:/pro/in"
"abc122","C:/pro/abc"
"abc125","C:/pro/xyz"
"abc126","C:/pro/in"
output.csv:
"file1","file2","Diff"
"abc121","abc121","Match"
"abc122","abc122","Match"
"abc123","","Unmatch"
"abc124","","Unmatch"
"","abc125","Unmatch"
"","abc126","Unmatch"
One way with awk:
script.awk:
BEGIN {
    FS = ","
}
NR>1 && NR==FNR {
    a[$1] = $2
    next
}
FNR>1 {
    print ($1 in a) ? $1 FS $1 FS "Match" : "\"\"" FS $1 FS "Unmatch"
    delete a[$1]
}
END {
    for (x in a) {
        print x FS "\"\"" FS "Unmatch"
    }
}
Output:
$ awk -f script.awk file1.csv file2.csv
"abc121","abc121",Match
"abc122","abc122",Match
"","abc125",Unmatch
"","abc126",Unmatch
"abc124","",Unmatch
"abc123","",Unmatch
I didn't use awk alone, but if I understood the gist of what you're asking correctly, I think this long one-liner should do it...
join -t, -a 1 -a 2 -o 1.1 2.1 1.2 2.2 file1.csv file2.csv | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' | sed -e '1d' -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'
Description:
The join portion takes the two CSV files, joins them on the first column (default behavior of join) and outputs all four fields (-o 1.1 2.1 1.2 2.2), making sure to include rows that are unmatched for both files (-a 1 -a 2).
The awk portion takes that output and replaces the combination of the 3rd and 4th columns with either "Match" or "Unmatch", depending on whether they do in fact match. I had to make an assumption on this behavior based on your example.
The sed portion deletes the "no","loc" header from the output (-e '1d') and replaces empty fields with open-close quote marks (-e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'). This last part might not be necessary for you.
EDIT:
As tripleee points out, the above fails if the two initial files are unsorted. Here's an updated command to fix that. It punts the header line and sorts each file before passing them to join...
join -t, -a 1 -a 2 -o 1.1 2.1 1.2 2.2 <( sed 1d file1.csv | sort ) <( sed 1d file2.csv | sort ) | awk -F, '{ if ( $3 == $4 ) var = "\"Match\""; else var = "\"Unmatch\"" ; print $1","$2","var }' | sed -e 's/^,/"",/' -e 's/,$/,""/' -e 's/,,/,"",/g'
