I'm trying to take the last value in the third column of a CSV file and then replace the whole third column with this value.
I've been trying this:
var=$(tail -n 1 math_ready.csv | awk -F"," '{print $3}'); awk -F, '{$3="$var";}1' OFS=, math_ready.csv > math1.csv
But it's not working and I don't understand why...
Please help!
The following reads the file twice (by duplicating its name in ARGV): the first pass grabs the last line's third field, the second pass rewrites the column:
awk '
BEGIN { ARGV[2]=ARGV[1]; ARGC++; FS=OFS="," }
NR==FNR { last = $3; next }
{ $3 = last; print }
' math_ready.csv > math1.csv
The main problem with your script was trying to access a shell variable ($var) inside your awk script. Awk is not shell; it is a completely separate language/tool with its own namespace and variables. You cannot directly access a shell variable in awk, just like you couldn't access it in C. To access the VALUE of a shell variable you'd do:
shellvar=27
awk -v awkvar="$shellvar" 'BEGIN{ print awkvar }'
Some additional cleanup:
When FS and OFS have the same value, don't assign them each to that value separately; use BEGIN{ FS=OFS="," } instead for clarity and maintainability.
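For example, these two one-liners behave identically, but the second states the separator once ($1=$1 just forces the record to be rebuilt with OFS):
awk 'BEGIN{ FS=","; OFS="," } { $1=$1; print }' file
awk 'BEGIN{ FS=OFS="," } { $1=$1; print }' file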
Do not initialize variables AFTER the script that uses them unless you have a very specific reason to do so. Use awk -F... -v OFS=... 'script' to initialize those variables to separate values, not awk -F... 'script' OFS=...: it's very unnatural to initialize variables in the argument list AFTER the code that uses them, and variables initialized there are not yet set when the BEGIN section executes, which can cause bugs.
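A quick demonstration of the timing difference: OFS assigned in the argument list is still the default single space when BEGIN runs:
$ awk -v OFS=',' 'BEGIN{ print "x", "y" }'
x,y
$ awk 'BEGIN{ print "x", "y" }' OFS=','
x y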
A shell variable is not expanded inside an awk script. You can do this instead:
awk -F, -v var="$var" '{ $3 = var } 1' OFS=, math_ready.csv > math1.csv
And you can probably simplify your code with this:
awk -F, 'NR == FNR { r = $3; next } { $3 = r } 1' OFS=, math_ready.csv math_ready.csv > math1.csv
Example input:
1,2,1
1,2,2
1,2,3
1,2,4
1,2,5
Output:
1,2,5
1,2,5
1,2,5
1,2,5
1,2,5
Try this one-liner. It doesn't depend on the column count:
var=$(tail -1 sample.csv | perl -ne 'm/([^,]+)$/; print "$1";'); cat sample.csv | while IFS= read -r line; do echo "$line" | perl -ne "s/[^,]*$/$var\n/; print $_;"; done
cat sample.csv
24,1,2,30,12
33,4,5,61,3333
66,7,8,91111,1
76,10,11,32,678
Out:
24,1,2,30,678
33,4,5,61,678
66,7,8,91111,678
76,10,11,32,678
I have a big CSV file that I need to cut into different pieces based on the value in one of the columns. My input file dataset.csv is something like this:
NOTE: edited to clarify that the data is comma-delimited with no spaces around the commas.
action,action_type, Result
up,1,stringA
down,1,strinB
left,2,stringC
So, to split by action_type I simply do (I need the whole matching line in the resulting file):
awk -F, '$2 ~ /^1$/ {print}' dataset.csv >> 1_dataset.csv
awk -F, '$2 ~ /^2$/ {print}' dataset.csv >> 2_dataset.csv
This works as expected, but I am basically traversing my original dataset twice. My original dataset is about 5GB and I have 30 action_type categories. I need to do this every day, so I need to script the thing to run on its own efficiently.
I tried the following but it does not work:
# This is a file called myFilter.awk
{
action_type=$2;
if (action_type=="1") print $0 >> 1_dataset.csv;
else if (action_type=="2") print $0 >> 2_dataset.csv;
}
Then I run it as:
awk -f myFilter.awk dataset.csv
But I get nothing. Literally nothing, not even errors. Which sort of tells me that my code is simply not matching anything or my print / pipe statement is wrong.
You may try this awk to do this in a single command:
awk -F, 'NR > 1{fn = $2 "_dataset.csv"; print >> fn; close(fn)}' file
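With the sample input above (saved as file), this creates:
$ cat 1_dataset.csv
up,1,stringA
down,1,strinB
$ cat 2_dataset.csv
left,2,stringC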
With GNU awk to handle many concurrently open files, and without replicating the header line in each output file (note the header line itself will land in a file named after its own second field, i.e. action_type_dataset.csv):
awk -F',' '{print > ($2 "_dataset.csv")}' dataset.csv
or if you also want the header line to show up in each output file then with GNU awk:
awk -F',' '
NR==1 { hdr = $0; next }
!seen[$2]++ { print hdr > ($2 "_dataset.csv") }
{ print > ($2 "_dataset.csv") }
' dataset.csv
or the same with any awk:
awk -F',' '
NR==1 { hdr = $0; next }
{ out = $2 "_dataset.csv" }
!seen[$2]++ { print hdr > out }
{ print >> out; close(out) }
' dataset.csv
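With the sample input above, both header-replicating variants produce:
$ cat 1_dataset.csv
action,action_type, Result
up,1,stringA
down,1,strinB
$ cat 2_dataset.csv
action,action_type, Result
left,2,stringC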
As currently coded, the input field separator has not been defined.
Current:
$ cat myfilter.awk
{
action_type=$2;
if (action_type=="1") print $0 >> 1_dataset.csv;
else if (action_type=="2") print $0 >> 2_dataset.csv;
}
Invocation:
$ awk -f myfilter.awk dataset.csv
There are a couple of ways to address this:
$ awk -v FS="," -f myfilter.awk dataset.csv
or
$ cat myfilter.awk
BEGIN {FS=","}
{
action_type=$2
if (action_type=="1") print $0 >> "1_dataset.csv";
else if (action_type=="2") print $0 >> "2_dataset.csv";
}
$ awk -f myfilter.awk dataset.csv
Note that the output file names must also be quoted as strings ("1_dataset.csv"); unquoted, awk tries to parse them as expressions.
Input:
"prefix_foo,prefix_bar"
Expected Output:
foo
bar
This is what I have so far.
$ echo "PREFIX_foo,PREFIX_bar" | awk '/PREFIX_/{x=gsub("PREFIX_", ""); print $0 }'
foo,bar
I'm unable to figure out how to print foo and bar separated by a newline. Thanks in advance!
EDIT:
The length of the input is unknown, so there can be more than 2 words separated by commas.
This question is more about learning the awk language, not alternative GNU utils.
You may not need awk for this. Here is a pure bash solution:
s="prefix_foo,prefix_bar"
s="${s//prefix_/}"
s="${s//,/$'\n'}"
echo "$s"
foo
bar
Here is a GNU sed one-liner for the same:
sed 's/prefix_//g; s/,/\n/g' <<< "$s"
foo
bar
EDIT (2nd solution): Adding a more generic solution here as per the OP's comments. It checks every field for the prefix and, for each match, prints the part of that field after the underscore.
echo "prefix_foo,etc,bla,prefix_bar" |
awk '
BEGIN{
FS=OFS=","
}
{
for(i=1;i<=NF;i++){
if($i~/prefix/){
split($i,array,"_")
val=(val?val OFS:"")array[2]
}
}
if(val){
print val
}
val=""
}'
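For the sample input, this prints:
foo,bar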
To print each output field value on its own line, try:
echo "prefix_foo,etc,bla,prefix_bar" |
awk '
BEGIN{
FS=OFS=","
}
{
for(i=1;i<=NF;i++){
if($i~/prefix/){
split($i,array,"_")
print array[2]
}
}
}
'
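which prints:
foo
bar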
1st solution (specific to the samples shown): for the simple case, you could try the following.
awk -F'[_,]' '/prefix_/{print $2,$4}' Input_file
OR
echo "prefix_foo,prefix_bar" | awk -F'[_,]' '/prefix_/{print $2,$4}'
Just trying out awk:
echo "PREFIX_foo,PREFIX_bar" | awk -F, -v OFS="\n" '{gsub(/PREFIX_/,""); $1=$1}1'
I'm trying to edit 3 columns in a file if the value in column 1 equals a specific string. This is my current attempt:
cp file file.copy
awk -F':' 'OFS=":" { if ($1 == "root1") $2="test"; print}' file.copy>file
rm file.copy
I've only been able to get the awk command working with one column being changed; I want to be able to edit $3 and $8 as well. Is this possible in the same command, or is it only possible with separate awk commands or with a different command altogether?
Edit note: in the real command I'll be passing variables to the columns, i.e. $2=$var.
It'll be used to edit the /etc/passwd file, sample input/output:
root:$6$fR7Vrjyp$irnF38R/htMSuk0efLSnAten/epf.5v7gfs0q.NcjKcFPeJmB/4TnnmgaAoTUE9.n4p4UyWOgFwB1guJau8AL.:17976::::::
You can create multiple statements for the if condition with a block {}.
awk -F':' 'OFS=":" { if ($1 == "root1") {$2="test"; $3="test2";} print}' file.copy>file
You can also improve your command by using awk's default condition{commands} workflow. For this you need to set OFS up front via the -v flag:
awk -F':' -v OFS=":" '$1=="root1"{$2="test"; $3="test2"; print}' file.copy>file
Note that this prints only the matching lines; to pass the other lines through unchanged, drop the print and append a bare 1 pattern: '$1=="root1"{$2="test"; $3="test2"}1'.
You may use
# Fake sample values
v1=pass1
v2=pass2
awk -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2}1' file > tmp && mv tmp file
See the online awk demo:
s="root1:xxxx:yyyy
root11:xxxx:yyyy
root1:zzzz:cccc"
v1=pass1
v2=pass2
awk -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2}1' <<< "$s"
Output:
root1:pass1:pass2
root11:xxxx:yyyy
root1:pass1:pass2
Note:
-v var1="$v1" -v var2="$v2" pass the variables you need to use in the awk command
BEGIN{FS=OFS=":"} set the field separator
$1 == "root1" check if Field 1 is equal to some value
{ $2 = var1; $3 = var2 } set Field 2 and 3 values
1 triggers the default action, printing the (possibly modified) record
file > tmp && mv tmp file writes the result to a temporary file and then moves it over the original, emulating an in-place edit.
I have a csv file stored as a temporary variable in a shell script (*.sh).
Let's say the data looks like this:
Account,Symbol,Price
100,AAPL US,200
102,SPY US,500
I want to add a fourth column, "Type", which is the result of a shell function "foobar". Run from the command line or a shell script itself:
$ foobar "AAPL US"
"Stock"
$ foobar "SPY US"
"ETF"
How do I add this column to my csv, and populate it with calls to foobar which take the second column as an argument? To clarify, this is my ideal result post-script:
Account,Symbol,Price,Type
100,AAPL US,200,Common Stock
102,SPY US,500,ETF
I see many examples online involving such a column addition using awk, and populating the new column with fixed values, conditional values, mathematical derivations from other columns, etc. - but nothing that calls a function on another field and stores its output.
You may use this awk:
export -f foobar
awk 'BEGIN{FS=OFS=","} NR==1{print $0, "Type"; next} {
cmd = "foobar \"" $2 "\""; cmd | getline line; close(cmd);
print $0, line
}' file.csv
Output:
Account,Symbol,Price,Type
100,AAPL US,200,Common Stock
102,SPY US,500,ETF
@anubhava's answer is a good approach, so please don't change the accepted answer. I'm only posting this as an answer because it's too big and in need of formatting to fit in a comment.
FWIW I'd write his awk script as:
awk '
BEGIN { FS=OFS="," }
NR==1 { type = "Type" }
NR > 1 {
cmd = "foobar \047" $2 "\047"
type = ((cmd | getline line) > 0 ? line : "ERROR")
close(cmd)
}
{ print $0, type }
' file.csv
to:
better protect $2 from shell expansion, and
protect from silently printing the previous value if/when cmd | getline fails (see the sketch after this list), and
consolidate the print statements to 1 line so it's easy to change for all output lines if/when necessary
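As a minimal illustration of the second point, using a deliberately nonexistent command, the guarded getline yields ERROR instead of silently reusing a stale value:
$ awk 'BEGIN {
    cmd = "no_such_command 2>/dev/null"
    val = ((cmd | getline line) > 0 ? line : "ERROR")
    close(cmd)
    print val
}'
ERROR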
awk to the rescue!
$ echo "Account,Symbol,Price
100,AAPL US,200
102,SPY US,500" |
awk -F, 'NR>1{cmd="foobar "$2; cmd | getline type} {print $0 FS (NR==1?"Type":type)}'
Not sure you need to quote the input to foobar.
Another way, not using awk: the process substitution skips the header line with a bare read, emits the Type header, then runs foobar on each symbol; paste -d, glues that new column onto the original file:
paste -d, input.csv <({ read; printf "Type\n"; while IFS=, read -r _ s _; do foobar "$s"; done; } < input.csv)
I have this simple awk code:
awk -F, 'BEGIN{OFS=FS} {print $2,$1,$3}' $1
Works great, except I've hardcoded how I want to reorder the comma-delimited fields of my plaintext file. I want to be able to specify the field order at run time.
One hacky way I thought of doing this:
read first
read second
read third
TOTAL=$first","$second","$third
awk -F, 'BEGIN{OFS=FS} {print $TOTAL}' $1
But this doesn't actually work:
awk: illegal field $(), name "TOTAL"
Also, I know a bit about awk's ability to accept user input:
BEGIN {
getline first < "-"
}
$1 == first {
}
But I wonder whether the variables created that way can in turn be used as field references in the original print command. Is there a better way?
You have to let bash expand $TOTAL before awk is called, so that awk sees the value of $TOTAL, not the literal string $TOTAL. This means using double, not single, quotes.
read first
read second
read third
# Dynamically construct the awk script to run
TOTAL="\$$first,\$$second,\$$third"
SCRIPT="BEGIN{OFS=FS} {print $TOTAL}"
awk -F, "$SCRIPT" "$1"
A safer method is to pass the field numbers as awk variables.
awk -F, -v c1="$first" -v c2="$second" -v c3="$third" 'BEGIN{OFS=FS} {print $c1, $c2, $c3}' "$1"
All you need is:
awk -v order='3 1 2' 'BEGIN{split(order,o)} {for (i=1;i<=NF;i++) printf "%s%s", $(o[i]), (i<NF?OFS:ORS)}'
e.g.:
$ echo 'a b c' | awk -v order='3 1 2' 'BEGIN{split(order,o)} {for (i=1;i<=NF;i++) printf "%s%s", $(o[i]), (i<NF?OFS:ORS)}'
c a b
$ echo 'a b c' | awk -v order='2 3 1' 'BEGIN{split(order,o)} {for (i=1;i<=NF;i++) printf "%s%s", $(o[i]), (i<NF?OFS:ORS)}'
b c a