Adding a single date to the first column of a file - bash

I have a file that looks like
1234-00AA12 .02
5678-11BB34 .03
In a bash script I have an expression like
day=$(...)
that greps a date in the format YYYY/MM/DD (if this matters), let's say 2014/01/21 for specificity.
I want to produce the following:
2014/01/21,1,1,1234,00AA12,.02
2014/01/21,1,1,5678,11BB34,.03
(The first column is the day, the second and third columns are fixed as "1").
After a bit of googling I tried:
cat file|awk -F "-" '{split($2,array," "); printf "%s,%s,%s,%s,%s,%s\n",$day,"1","1",$1,array[1],array[2]}'> output.csv
but $day isn't working with awk.
Any help would be appreciated.

Try this awk:
awk -v d="$(date '+%Y/%m/%d')" '{print d,1,1,$1,$2}' OFS=, file
2014/02/07,1,1,1234-00AA12,.02
2014/02/07,1,1,5678-11BB34,.03

$ awk -v day="$day" 'BEGIN{FS="[ -]";OFS=","} {print day,1,1,$1,$2,$3}' file
2014/01/21,1,1,1234,00AA12,.02
2014/01/21,1,1,5678,11BB34,.03

awk doesn't understand shell variables; you need to pass them to it:
awk -v dd="$day" -F "-" '{split($2,array," "); printf "%s,%s,%s,%s,%s,%s\n",dd,"1","1",$1,array[1],array[2]}'
Moreover, rather than saying:
cat file | awk ...
avoid the useless use of cat:
awk '...' file
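Putting both fixes together, your original command becomes (a sketch, writing to the same output.csv):
awk -v dd="$day" -F "-" '{split($2,array," "); printf "%s,%s,%s,%s,%s,%s\n",dd,"1","1",$1,array[1],array[2]}' file > output.csv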

With bash
day="2014/01/21"
(
IFS=,
while IFS=" -" read -ra fields; do
new=( "$day" 1 1 "${fields[#]}" )
echo "${new[*]}"
done < file
)
2014/01/21,1,1,1234,00AA12,.02
2014/01/21,1,1,5678,11BB34,.03
I run the while loop in a subshell just to keep changes to IFS localized.
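If you prefer, a function with a local IFS gives the same isolation without a subshell (a minimal sketch; the name prepend_day is just for illustration, and bash's local is assumed):
prepend_day() {
    local IFS=, new fields      # all three revert when the function returns
    while IFS=" -" read -ra fields; do
        new=( "$day" 1 1 "${fields[@]}" )
        echo "${new[*]}"        # [*] joins on the first character of IFS, i.e. ","
    done < "$1"
}
prepend_day file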

Related

Reformat date in text file (.csv) with sed and date

This is the input .csv file
"item1","10/11/2017 2:10pm",1,2, ...
"item2","10/12/2017 3:10pm",3,4, ...
.
.
.
Now, I want to convert the second column (date) to this specific format
date -d '10/12/2017 2:10pm' +'%Y/%m/%d %H:%M:%S', so that "10/12/2017 2:10pm" converts to "2017/10/12 14:10:00"
Expecting output file
"item1","2017/10/11 14:10:00",1,2, ...
"item2","2017/10/12 15:10:00",3,4, ...
.
.
.
I know it can be done by using bash or python, but I want to do it in one-line command. Any ideas? Is there a way to pass date result to sed?
One-liner awk approach.
awk -F',' '{gsub(/"/,"",$2); cmd="date -d\""$2"\" +\\\"%Y/%m/%d\\ %T\\\"";
cmd |getline $2; close(cmd) }1' OFS=, infile #>>outfile
"item1","2017/10/11 14:10:00",1,2, ...
"item2","2017/10/12 15:10:00",3,4, ...
This prints the changes to your terminal; redirect the output to a file if you want to keep it, or use FILENAME to write the output back to the input file infile itself:
awk -F',' '{gsub(/"/,"",$2); cmd="date -d\""$2"\" +\\\"%Y/%m/%d\\ %T\\\"";
cmd |getline $2; close(cmd); print >FILENAME }' OFS=, infile
Or use GNU awk, which supports the -i inplace option for in-place editing; see 'awk' save modifications in place.
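That would look something like this (a sketch; the inplace extension needs gawk 4.1 or later):
gawk -i inplace -F',' '{gsub(/"/,"",$2); cmd="date -d\""$2"\" +\\\"%Y/%m/%d\\ %T\\\"";
cmd | getline $2; close(cmd) }1' OFS=, infile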
You can do it in one line, but that begs the question: how long of a line do you want? Since the question is tagged 'shell' rather than bash, etc., you are a bit limited in your string handling, but POSIX shell provides enough to do what you want; it just isn't the speediest remedy. You will either end up with an awk or sed solution that calls date, or a shell solution that calls awk or sed to parse the old date out of the original file and feed the result to date for the new one. You will have to work out which provides the most efficient remedy.
As far as the one-liner goes, you can do something similar to the following while remaining POSIX compliant. It uses awk to get the 2nd field from the file and pipes the result to a while loop, which uses expr length "$field" to get the length, expr substr "$field" 2 <length - 2> to chop the double quotes off the ends of the original date olddt, date -d "$olddt" +'%Y/%m/%d %H:%M:%S' to get newdt, and finally sed -i "s;$olddt;$newdt;" to perform the substitution in place. Your one-liner (shown wrapped for readability):
$ awk -F, '{print $2}' timefile.txt |
while read -r field; do
olddt="$(expr substr "$field" "2" "$(($(expr length "$field") - 2))")";
newdt=$(date -d "$olddt" +'%Y/%m/%d %H:%M:%S');
sed -i "s;$olddt;$newdt;" timefile.txt; done
Example Input File
$ cat timefile.txt
"item1","10/11/2017 2:10pm",1,2, ...
"item2","10/12/2017 3:10pm",3,4, ...
Resulting File
$ cat timefile.txt
"item1","2017/10/11 14:10:00",1,2, ...
"item2","2017/10/12 15:10:00",3,4, ...
There are probably faster ways to do it, but this is a reasonable length one-liner (relatively speaking).
Revised less ugly sed method:
sed 's/^.*,"\|",.*//g;h;s#.*#date "+%Y/%m/%d %T" -d "&"#e;H;g;s#\n\|$#,#g;s/^/s,/' input.csv | sed -f - input.csv
Spread out, (it works the same):
sed 's/^.*,"\|",.*//g
h;
s#.*#date "+%Y/%m/%d %T" -d "&"#e;
H;
g;
s#\n\|$#,#g;
s/^/s,/' input.csv | sed -f - input.csv
Output:
"item1","2017/10/11 14:10:00",1,2, ...
"item2","2017/10/12 15:10:00",3,4, ...
How it works:
The first sed block uses the evaluate command to run date, and its output is used to generate new sed substitute commands. To see the generated s commands, temporarily replace the pipe to the second sed with a # comment:
s,10/11/2017 2:10pm,2017/10/11 14:10:00,
s,10/12/2017 3:10pm,2017/10/12 15:10:00,
These are piped to the second sed.

Using and manipulating date variable inside AWK

I assume that there may be a better way to do it but the only one I came up with was using AWK.
I have a file with name convention like following:
testfile_2016_03_01.txt
Using one command, I am trying to shift the date by one day, to get testfile_20160229.txt.
I started from using:
file=testfile_2016_03_01.txt
IFS="_"
arr=($file)
datepart=$(echo ${arr[1]}-${arr[2]}-${arr[3]} | sed 's/.txt//')
date -d "$datepart - 1 days" +%Y%m%d
the above works fine, but I really wanted to do it in AWK. The only thing I found was how to use "date" inside AWK:
new_name=$(echo ${file##.*} | awk -F'_' ' {
"date '+%Y%m%d'" | getline date;
print date
}')
echo $new_name
Okay, so two things happen here. For some reason $4 still contains .txt even though I (thought I) removed it with ##.*
And the main problem is that I don't know how to pass the variables to that date; the below doesn't work:
awk -F'_' '{"date '-d "2016-01-01"' '+%Y%m%d'" | getline date; print date}'
Ideally I want 2016-01-01 to come from the file name as variables ($2-$3-$4) and to subtract 1 day, but I think I'm getting way too many single and double quotes here and my brain is giving up.
Equivalent awk command:
file='testfile_2016_03_01.txt'
echo "${file%.*}" |
awk -F_ '{cmd="date -d \"" $2"-"$3"-"$4 " -1 days\"" " +%Y%m%d";
cmd | getline date; close(cmd); print date}'
20160229
With GNU awk for time functions:
$ file=testfile_2016_03_01.txt
$ awk -v file="$file" 'BEGIN{ split(file,d,/[_.]/); print strftime(d[1]"_%Y%m%d."d[5],mktime(d[2]" "d[3]" "d[4]" 12 00 00")-(24*60*60)) }'
testfile_20160229.txt
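Spread out with comments, the same logic reads (a sketch; GNU awk is assumed for mktime and strftime):
awk -v file="$file" 'BEGIN{
    split(file, d, /[_.]/)                            # d[1]="testfile" d[2]="2016" d[3]="03" d[4]="01" d[5]="txt"
    t = mktime(d[2] " " d[3] " " d[4] " 12 00 00")    # noon avoids DST edge cases
    print strftime(d[1] "_%Y%m%d." d[5], t - 24*60*60)  # subtract one day, reassemble the name
}'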
This might work for you:
file='testfile_2016_03_01.txt'
IFS='_.' read -ra a <<< "$file"
date -d "${a[1]}${a[2]}${a[3]} -1 day" "+${a[0]}_%Y%m%d.${a[4]}"

format date in file using awk

Content of the file is
Feb-01-2014 one two
Mar-02-2001 three four
I'd like to format the first field (the date) to %Y%m%d format
I'm trying to use a combination of awk and the date command, but somehow this is failing even though I've got the feeling I'm almost there:
cat infile | awk -F"\t" '{$1=system("date -d " $1 " +%Y%m%d");print $1"\t"$2"\t"$3}' > test
this prints out date's usage page, which makes me think that the date command is triggered properly but there is something wrong with the argument. Do you see the issue somewhere?
I'm not that familiar with awk.
You don't need date for this; it's simply rearranging the date string:
$ awk 'BEGIN{FS=OFS="\t"} {
split($1,t,/-/)
$1 = sprintf("%s%02d%s", t[3], (match("JanFebMarAprMayJunJulAugSepOctNovDec",t[1])+2)/3, t[2])
}1' file
20140201 one two
20010302 three four
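Why the month lookup works: each month name occupies three characters of the lookup string, so match returns position 1 for Jan, 4 for Feb, 7 for Mar, and (position+2)/3 maps that to the month number. A quick check:
$ awk 'BEGIN{print (match("JanFebMarAprMayJunJulAugSepOctNovDec","Mar")+2)/3}'
3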
You can use:
while read -r a _; do
date -d "$a" '+%Y%m%d'
done < file
20140201
20010302
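If you also need the remaining columns, a small extension of the same loop (a sketch, keeping tab-separated output):
while read -r a rest; do
    printf '%s\t%s\n' "$(date -d "$a" '+%Y%m%d')" "$rest"   # rest keeps "one two" etc.
done < file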
system() returns the exit code of the command, not its output.
Use getline instead:
cat infile | awk -F"\t" '{"date -d " $1 " +%Y%m%d" | getline d;print d"\t"$2"\t"$3}'
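One caveat with this getline approach: every distinct command string opens a new pipe, so close it after reading or a large file can exhaust file descriptors. A hedged variant:
awk -F"\t" '{cmd="date -d " $1 " +%Y%m%d"; cmd | getline d; close(cmd); print d"\t"$2"\t"$3}' infile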
$ awk '{var=system("date -d "$1" +%Y%m%d | tr -d \"\\n\"");printf "%s\t%s\t%s\n", var, $2, $3}' file
201402010 one two
200103020 three four
Note the stray trailing 0 after each date: it is system()'s exit status, printed via var, which is exactly the pitfall described above.

How to execute awk command in shell script

I have an awk command that extracts the 16th column from 3rd line in a csv file and prints the first 4 characters.
awk -F"," 'NR==3{print $16}' sample.csv|sed -e 's/^[ \t]*//'|awk '{print substr($0,0,4)}'
This works fine.
But when I execute it from a shell script, I get an error
#!/bin/ksh
YEAR=awk -F"," 'NR==3{print $16}' sample.csv|sed -e 's/^[ \t]*//'|awk '{print substr($0,0,4)}'
Error message:
-F,: not found
Use command substitution to assign the output of a command to a variable, as shown below:
YEAR=$(awk -F"," 'NR==3{print $16}' sample.csv|sed -e 's/^[ \t]*//'|awk '{print substr($0,0,4)}')
You are asking the shell to do:
VAR=value command [arguments...]
which means: launch command, but pass it the VAR=value environment first
(e.g. LC_ALL=C grep '[0-9]*' /some/file.txt greps a number in file.txt, with the LC_ALL variable set to C just for the duration of the grep call).
So here: you ask the shell to launch the -F"," command (i.e. -F, once the shell interprets the "," into ,) with arguments 'NR==3'... and with the variable YEAR set to the value awk for the duration of the command invocation.
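A quick demonstration of that prefix-assignment behavior (VAR is a hypothetical name, assumed unset beforehand):
$ VAR=hello sh -c 'echo "$VAR"'   # VAR exists only in the environment of this one command
hello
$ echo "$VAR"                     # back in the calling shell, VAR is still unset
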
Just replace it with:
#!/bin/ksh
YEAR="$(awk -F',' 'NR==3{print $16}' sample.csv|sed -e 's/^[ \t]*//'|awk '{print substr($0,1,4)}')"
(I didn't try it, but I hope it works for you and your sample.csv file.)
(Note that you used "0" for character position 1, which works in many awk implementations but not all; most, but not all, assume 1 when you write 0.)
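To illustrate that note (the second result is what GNU awk does; other implementations may print abcd for both):
$ echo abcdef | gawk '{print substr($0, 1, 4)}'   # portable: positions start at 1
abcd
$ echo abcdef | gawk '{print substr($0, 0, 4)}'   # gawk counts the length from position 0, losing a character
abc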
From your description, it looks like you want to extract the year from the 16th field, which might contain leading spaces. You can accomplish it by calling AWK once:
YEAR=$(awk -F, 'NR==3{sub(/^[ \t]*/, "", $16); print substr($16,1,4)}' sample.csv)
Better yet, you don't even have to use awk. Since you are already writing shell script, let's do it all in shell script:
{ read line; read line; read line; } < sample.csv # Get the third line
IFS=,; set -- $line # Break the line into comma-separated fields
IFS=" "; set -- ${16} # Trick to remove leading spaces, field 16 becomes field 1
YEAR=${1:0:4} # Extract the first 4 chars from field 1
Do this:
year=$(awk -F, 'NR==3{sub(/^[ \t]+/,"",$16); print substr($16,1,4); exit }' sample.csv)

Explode to Array

I put together this shell script to do two things:
Change the delimiters in a data file ('::' to ',' in this case)
Select the columns and I want and append them to a new file
It works but I want a better way to do this. I specifically want to find an alternative method for exploding each line into an array. Using command line arguments doesn't seem like the way to go. ANY COMMENTS ARE WELCOME.
# Takes :: separated file as 1st parameters
SOURCE=$1
# create csv target file
TARGET=${SOURCE/dat/csv}
touch $TARGET
echo "#userId,itemId" > $TARGET
IFS=","
while read LINE
do
# Replaces all matches of :: with a ,
CSV_LINE=${LINE//::/,}
set -- $CSV_LINE
echo "$1,$2" >> $TARGET
done < $SOURCE
Instead of set, you can use an array:
arr=($CSV_LINE)
echo "${arr[0]},${arr[1]}"
The following would print columns 1 and 2 from infile.dat. Replace $1, $2 with
a comma-separated list of the numbered columns you do want.
awk 'BEGIN { FS="::"; OFS="," } { print $1, $2 }' infile.dat > infile.csv
Perl probably has a one-liner to do it.
Awk can probably do it easily too.
My first reaction is a combination of awk and sed:
Sed to convert the delimiters
Awk to process specific columns
cat inputfile | sed -e 's/::/,/g' | awk -F, '{print $1, $2}'
# Or, to avoid a UUOC award (and prolong the life of your keyboard by 3 characters):
sed -e 's/::/,/g' inputfile | awk -F, '{print $1, $2}'
awk is indeed the right tool for the job here, it's a simple one-liner.
$ cat test.in
a::b::c
d::e::f
g::h::i
$ awk -F:: -v OFS=, '{$1=$1;print;print $2,$3 >> "altfile"}' test.in
a,b,c
d,e,f
g,h,i
$ cat altfile
b,c
e,f
h,i
$
