Trying to execute unix command in awk but receiving error - bash
I have a file with:
1|siva
2|krishna
3| syz 5
I am trying to find the ASCII values of field 2, but the command below gives me an error:
awk 'BEGIN{FS="|"; } {print $2|"od -An -vtu1"| tr -d "\n"}' test1.txt
awk: BEGIN{FS="|"; } {print $2|"od -An -vtu1 tr -d "\n"}
awk: ^ backslash not last character on line
Expected output
115 105 118 97
107 114 105 115 104 110 97
32 115 121 122 32 53
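For reference, the command fails because the `tr -d "\n"` part sits outside the quoted command string, so awk tries to parse it as awk code; the pipe target after `|` must be a single quoted string. A minimal sketch of the awk-pipe form (omitting the tr step, since od prints one line per short field anyway):

```shell
# Sketch: pipe field 2 of each record into its own od invocation from inside awk.
awk -F'|' '{
    cmd = "od -An -vtu1"      # external command, run once per record
    printf "%s", $2 | cmd     # send only field 2, without a trailing newline
    close(cmd)                # close so od flushes before the next record
}' test1.txt
```

Closing the pipe after every record is what keeps each field's byte dump on its own line.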
You're not really using any awk; perhaps this is easier:
$ while IFS='|' read -r _ f;
do echo -n "$f" | od -An -vtu1;
done < file
115 105 118 97
107 114 105 115 104 110 97
32 32 115 121 122 32 53
It sounds like this is what you're trying to do:
$ awk '
BEGIN { FS="|" }
{
cmd = "printf \047%s\047 \047" $2 "\047 | od -An -vtu1"
system(cmd)
}
' file
115 105 118 97
107 114 105 115 104 110 97
32 32 115 121 122 32 53
Or an alternative syntax, so the output comes from awk rather than from the shell called by system():
$ awk '
BEGIN { FS=OFS="|" }
{
cmd = "printf \047%s\047 \047" $2 "\047 | od -An -vtu1"
rslt = ( (cmd | getline line) > 0 ? line : "N/A" )
close(cmd)
print $0, rslt
}
' file
1|siva| 115 105 118 97
2|krishna| 107 114 105 115 104 110 97
3| syz 5| 32 32 115 121 122 32 53
Massage to suit. You don't NEED to save the result in a variable, you could just print it, but I figured you'll want to know how to do that at some point, and you don't NEED to print $0 of course.
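For example, the "just print it" variant could look like this sketch (same command string as above, result printed directly instead of saved):

```shell
awk '
BEGIN { FS="|" }
{
    # build and read the same printf | od pipeline, then print the line directly
    cmd = "printf \047%s\047 \047" $2 "\047 | od -An -vtu1"
    if ( (cmd | getline line) > 0 )
        print line
    else
        print "N/A"
    close(cmd)
}
' file
```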
I also assume you have some reason for wanting to do this in awk, e.g. it's part of a larger script, otherwise using awk to call system to call shell to execute shell commands is just a bad idea vs using shell to execute shell commands.
Having said that, the best shell command I can come up with to do what you want is this, using GNU awk for multi-char RS:
$ awk -F'|' -v ORS='\0' '{print $2}' file |
od -An -vtu1 |
awk -v RS=' 0\\s' '{gsub(/\n/,"")}1'
115 105 118 97
107 114 105 115 104 110 97
32 32 115 121 122 32 53
See the comments below for how that's more robust than the first awk approach if the input contains '$(command)', but it does assume there are no NUL chars in your input.
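To make the robustness point concrete, here is a small sketch (the `$(pwd)` field is a made-up example): because the field's bytes go straight through the pipe, the command-substitution text is dumped as data and never handed to a shell:

```shell
# A field containing shell syntax is treated as plain bytes.
printf '1|$(pwd)\n' |
awk -F'|' '{ printf "%s", $2 }' |   # extract field 2 without a newline
od -An -vtu1                        # byte values of the literal text "$(pwd)"
```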
Related
Apply multiple subtract commands between two columns in text file in bash
I would like to subtract two pairs of columns in a tab-delimited text file and add the results as two new columns, using awk in bash. I would like to subtract column 3 (h3) - column 1 (h1) and name the new column "count1", and subtract column 4 (h4) - column 2 (h2) and name that column "count2". I don't want to build a new text file, but edit the old one.

My text file:

h1 h2 h3 h4 h5
343 100 856 216 536
283 96 858 220 539
346 111 858 220 539
283 89 860 220 540
280 89 862 220 541
76 32 860 220 540
352 105 856 220 538
57 16 860 220 540
144 31 858 220 539
222 63 860 220 540
305 81 858 220 539

My command at the moment looks like this:

awk '{$6 = $3 - $1}1' file.txt
awk '{$6 = $4 - $2}1' file.txt

But I don't know how to name the new columns, and maybe there is a smarter way to run both computations in a single awk command?
Pretty simple in awk. Use NR==1 to modify the first (header) line:

awk -F '\t' -v OFS='\t' '
    NR==1 { print $0, "count1", "count2" }
    NR!=1 { print $0, $3-$1, $4-$2 }
' file.txt > tmp && mv tmp file.txt
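A quick sanity check of that pattern on a made-up two-row sample:

```shell
# Header line gets the new column names; data rows get the differences appended.
printf 'h1\th2\th3\th4\n343\t100\t856\t216\n' |
awk -F'\t' -v OFS='\t' '
    NR==1 { print $0, "count1", "count2" }
    NR!=1 { print $0, $3-$1, $4-$2 }
'
# data row gains 856-343=513 and 216-100=116
```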
calculate percentage between columns in bash?
I have a long tab-formatted file with many columns. I would like to calculate the percentage between two columns (the 3rd and 4th) and print this percentage next to the corresponding numbers, in this format (e.g. %46.00).

input:

file1 323 434 45 767 254235 275 2345 467
file1 294 584 43 7457 254565 345 235445 4635
file1 224 524 4343 12457 2542165 345 124445 41257

Desired output:

file1 323 434(134.37%) 45(13.93%) 767 254235 275 2345 467
file1 294 584(198.64%) 43(14.63%) 7457 254565 345 235445 4635
file1 224 524(233.93%) 4343(1938.84%) 12457 2542165 345 124445 41257

I tried:

cat test_file.txt |
awk '{printf "%s (%.2f%)\n",$0,($4/$2)*100}' OFS="\t" |
awk '{printf "%s (%.2f%)\n",$0,($3/$2)*100}' |
awk '{print $1,$2,$3,$11,$4,$10,$5,$6,$7,$8,$9}' - |
sed 's/ (/(/g' | sed 's/ /\t/g' > out.txt

It works, but I want something shorter than this.
I would say:

$ awk '{$3=sprintf("%d(%.2f%%)", $3, ($3/$2)*100); $4=sprintf("%d(%.2f%%)", $4, ($4/$2)*100)}1' file
file1 323 434(134.37%) 45(13.93%) 767 254235 275 2345 467
file1 294 584(198.64%) 43(14.63%) 7457 254565 345 235445 4635
file1 224 524(233.93%) 4343(1938.84%) 12457 2542165 345 124445 41257

With a function to avoid duplication:

awk '
function print_nice (num1, num2) {
    return sprintf("%d(%.2f%%)", num1, (num1/num2)*100)
}
{ $3=print_nice($3,$2); $4=print_nice($4,$2) }1' file

This uses sprintf to apply a specific format and store the result in a field; the calculations are the obvious ones. Note that a literal percent sign in a printf/sprintf format should be written %%, not a bare %.
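A quick check of the one-liner on the first input row (with %% for the literal percent sign):

```shell
printf 'file1 323 434 45\n' |
awk '{ $3 = sprintf("%d(%.2f%%)", $3, ($3/$2)*100)
       $4 = sprintf("%d(%.2f%%)", $4, ($4/$2)*100) }1'
# -> file1 323 434(134.37%) 45(13.93%)
```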
convert comma separated list in text file into columns in bash
I've managed to extract data (from an html page) that goes into a table, and I've isolated the columns of said table into a text file that contains the lines below:

[30,30,32,35,34,43,52,68,88,97,105,107,107,105,101,93,88,80,69,55],
[28,6,6,50,58,56,64,87,99,110,116,119,120,117,114,113,103,82,6,47],
[-7,,,43,71,30,23,28,13,13,10,11,12,11,13,22,17,3,,-15,-20,,38,71],
[0,,,3,5,1.5,1,1.5,0.5,0.5,0,0.5,0.5,0.5,0.5,1,0.5,0,-0.5,-0.5,2.5]

Each bracketed list of numbers represents a column. What I'd like to do is turn these lists into actual columns that I can work with in different data formats. I'd also like to be sure to include the blank parts of these lists (i.e., "[,,,]"). This is basically what I'm trying to accomplish:

30 28 -7 0
30 6
32 6
35 50 43 3
34 58 71 5
43 56 30 1.5
52 64 23 1
. . . .
. . . .

I'm parsing data from a web page, and ultimately planning to make the process as automated as possible so I can easily work with the data after I output it to a nice format. Does anyone know how to do this, or have any suggestions or thoughts on scripting it?
Since you have your lists in python, just do it in python:

l=[["30", "30", "32"], ["28","6","6"], ["-7", "", ""], ["0", "", ""]]
for i in zip(*l):
    print "\t".join(i)

produces

30	28	-7	0
30	6
32	6
awk based solution:

awk -F, '{
    gsub(/\[|\]/, "")
    for (i=1; i<=NF; i++) a[i] = a[i] ? a[i] OFS $i : $i
}
END {
    for (i=1; i<=NF; i++) print a[i]
}' file
30 28 -7 0
30 6
32 6
35 50 43 3
34 58 71 5
43 56 30 1.5
52 64 23 1
..........
..........
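A minimal check of the transpose idea on a made-up two-line sample:

```shell
# Strip brackets, accumulate the i-th field of each line into row a[i],
# then print the accumulated rows: columns become rows.
printf '[1,2,3],\n[4,5,6]\n' |
awk -F, '{
    gsub(/\[|\]/, "")
    for (i=1; i<=NF; i++) a[i] = a[i] ? a[i] OFS $i : $i
}
END { for (i=1; i<=NF; i++) print a[i] }'
# -> three lines: "1 4", "2 5", "3 6"
```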
Another solution, but it works only for a file with 4 lines:

$ paste \
    <(sed -n '1{s,\[,,g;s,\],,g;s|,|\n|g;p}' t) \
    <(sed -n '2{s,\[,,g;s,\],,g;s|,|\n|g;p}' t) \
    <(sed -n '3{s,\[,,g;s,\],,g;s|,|\n|g;p}' t) \
    <(sed -n '4{s,\[,,g;s,\],,g;s|,|\n|g;p}' t)
30    28    -7    0
30    6
32    6
35    50    43    3
34    58    71    5
43    56    30    1.5
52    64    23    1
68    87    28    1.5
88    99    13    0.5
97    110   13    0.5
105   116   10    0
107   119   11    0.5
107   120   12    0.5
105   117   11    0.5
101   114   13    0.5
93    113   22    1
88    103   17    0.5
80    82    3     0
69    6           -0.5
55    47    -15   -0.5
            -20   2.5
            38
            71

Updated: or another version with preprocessing:

$ sed 's|\[||;s|\][,]\?||' t > t2
$ paste \
    <(sed -n '1{s|,|\n|g;p}' t2) \
    <(sed -n '2{s|,|\n|g;p}' t2) \
    <(sed -n '3{s|,|\n|g;p}' t2) \
    <(sed -n '4{s|,|\n|g;p}' t2)
If a file named data contains the data given in the problem (exactly as defined above), then the following bash command line will produce the requested output:

$ sed -e 's/\[//' -e 's/\]//' -e 's/,/ /g' <data | rs -T

Example:

$ cat data
[30,30,32,35,34,43,52,68,88,97,105,107,107,105,101,93,88,80,69,55],
[28,6,6,50,58,56,64,87,99,110,116,119,120,117,114,113,103,82,6,47],
[-7,,,43,71,30,23,28,13,13,10,11,12,11,13,22,17,3,,-15,-20,,38,71],
[0,,,3,5,1.5,1,1.5,0.5,0.5,0,0.5,0.5,0.5,0.5,1,0.5,0,-0.5,-0.5,2.5]
$ sed -e 's/\[//' -e 's/\]//' -e 's/,/ /g' <data | rs -T
30    28    -7    0
30    6     43    3
32    6     71    5
35    50    30    1.5
34    58    23    1
43    56    28    1.5
52    64    13    0.5
68    87    13    0.5
88    99    10    0
97    110   11    0.5
105   116   12    0.5
107   119   11    0.5
107   120   13    0.5
105   117   22    1
101   114   17    0.5
93    113   3     0
88    103   -15   -0.5
80    82    -20   -0.5
69    6     38    2.5
55    47    71

(Note that the empty fields are dropped by the whitespace split, so the values in the last two columns shift up.)
Using bash to read elements on a diagonal on a matrix and redirecting it to another file
So, currently I have created code to do this, as shown below. This code works and does what it is supposed to do after I echo the variables:

a=`awk 'NR==2 {print $1}' $coor`
b=`awk 'NR==3 {print $2}' $coor`
c=`awk 'NR==4 {print $3}' $coor`

...but I have to do this for many more lines, and I want a more general expression. So I have attempted to create the loop shown below. Syntax-wise I don't think anything is wrong with the code, but it is not outputting anything to the file "Cmain". I was wondering if anyone could help me; I'm kind of new at scripting. If it helps any, I can also post what I am trying to read.

for (( i=1; i <= 4; i++ )); do
    for (( j=0; j <= 3; j++ )); do
        B="`grep -n "cell" "$coor" | awk 'NR=="$i" {print $j}'`"
    done
done
echo "$B" >> Cmain
You can replace your lines of awk with this one:

awk '{ for (i=1; i<=NF; i++) if (NR >= 2 && NR == i) print $(i - 1) }' file.txt

Tested input:

1 2 3 4 5 6 7 8 9 10
11 12 13 14 15 16 17 18 19 20
21 22 23 24 25 26 27 28 29 30
31 32 33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48 49 50
51 52 53 54 55 56 57 58 59 60
61 62 63 64 65 66 67 68 69 70
71 72 73 74 75 76 77 78 79 80

Output:

11
22
33
44
55
66
77
awk 'BEGIN {f=1} {print $f; f=f+1}' infile > outfile
An alternative using sed and coreutils, assuming space-separated input is in infile:

n=$(wc -l < infile)
for i in $(seq 1 $n); do
    sed -n "${i} {p; q}" infile | cut -d' ' -f$i
done

(Reading from stdin, wc -l < infile, keeps the filename and any padding out of wc's output.)
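The pattern in the question (NR==2 takes $1, NR==3 takes $2, and so on) can also be written as a single awk rule; a minimal sketch, assuming the relevant lines are already in infile:

```shell
# Print field NR-1 of each line from line 2 on: $1 of line 2, $2 of line 3, ...
awk 'NR >= 2 { print $(NR - 1) }' infile
```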
using sort command in shell scripting
I execute the following code:

for i in {1..12}; do
    printf "%s %s\n" "${edate1[$i]}" "${etime1[$i]}"
done

(I retrieve the values of edate1 and etime1 from my database and store them in arrays, which works fine.) I receive the output as:

97 16
97 16
97 12
107 16
97 16
97 16
97 16
97 16
97 16
97 16
97 16
100 15

I need to sort on the first column using the sort command. Expected output:

107 16
100 16
97 12
97 16
97 16
97 16
97 16
97 16
97 16
97 16
97 16
97 15
This is what I did to find your solution: copy your original input to in.txt, then run this code, which uses awk, sort, and paste:

awk '{print $1}' in.txt | sort -g -r -s > tmp.txt
paste tmp.txt in.txt | awk '{print $1 " " $3}' > out.txt

Then out.txt matches the expected output in your original post. To see how it works, look at this:

$ paste tmp.txt in.txt
107	97 16
100	97 16
97	97 12
97	107 16
97	97 16
97	97 16
97	97 16
97	97 16
97	97 16
97	97 16
97	97 16
97	100 15

So you're getting the first column sorted, then the original columns in place. Awk makes it easy to print out the columns (fields) you're interested in, i.e., the first and third.
This is the simplest way to sort your data:

<OUTPUT> | sort -nrk1

That is, sort numerically (-n), in reverse order (-r), keyed on the first field (-k1). See the sort man page for more on its options.
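For example, on a small made-up sample:

```shell
# -n numeric compare, -r reverse (descending), -k1 key starting at field 1
printf '97 16\n107 16\n100 15\n' | sort -nrk1
# -> 107 16, then 100 15, then 97 16
```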