Bash loop for multiple variables over the same data - bash

I am trying to create models with multiple variables using a bash loop. I need to run several predictions with different r2 and p-value cutoffs on the same data. The r2 and p-value parameters are:
cat parameters
0.2 1
0.2 5e-1
0.2 5e-2
0.2 5e-4
0.2 5e-6
0.2 5e-8
0.4 1
0.4 5e-1
0.4 5e-2
0.4 5e-4
0.4 5e-6
0.4 5e-8
0.6 1
0.6 5e-1
0.6 5e-2
0.6 5e-4
0.6 5e-6
0.6 5e-8
0.8 1
0.8 5e-1
0.8 5e-2
0.8 5e-4
0.8 5e-6
0.8 5e-8
The bash loop script I am using, test.sh:
RSQ=$(cat parameters | awk '{print $1}')
PVAL=$(cat parameters | awk '{print $2}')
season=("spring summer fall winter")
for i in $season;
do
echo prediction_${i}_${RSQ}_${PVAL}
done
The present output is:
prediction_spring_0.2 0.2 0.2 0.2 0.2 0.2 0.4 0.4 0.4 0.4 0.4 0.4 0.6 0.6 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.8 0.8 0.8_1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8
prediction_summer_0.2 0.2 0.2 0.2 0.2 0.2 0.4 0.4 0.4 0.4 0.4 0.4 0.6 0.6 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.8 0.8 0.8_1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8
prediction_fall_0.2 0.2 0.2 0.2 0.2 0.2 0.4 0.4 0.4 0.4 0.4 0.4 0.6 0.6 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.8 0.8 0.8_1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8
prediction_winter_0.2 0.2 0.2 0.2 0.2 0.2 0.4 0.4 0.4 0.4 0.4 0.4 0.6 0.6 0.6 0.6 0.6 0.6 0.8 0.8 0.8 0.8 0.8 0.8_1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8 1 5e-1 5e-2 5e-4 5e-6 5e-8
The desired output is
prediction_spring_0.2_1
prediction_spring_0.2_5e-1
prediction_spring_0.2_5e-2
prediction_spring_0.2_5e-4
prediction_spring_0.2_5e-6
prediction_spring_0.2_5e-8
prediction_spring_0.4_1
.......
prediction_winter_0.2_1
prediction_winter_0.2_5e-1
prediction_winter_0.2_5e-2
prediction_winter_0.2_5e-4
prediction_winter_0.2_5e-6
prediction_winter_0.2_5e-8
prediction_winter_0.4_1
..........

Your sample output is not complete enough to be sure what you want. I can imagine two interpretations: 1) you intend every season to be paired with every RSQ value and every PVAL value; or 2) you want the stated R/P pairs from the file to be matched with each season.
Solution for #1: you need to loop over the R & P lists
for i in $season; do
for r in $RSQ; do
for p in $PVAL; do
echo prediction_${i}_${r}_${p}
done
done
done
Solution for #2: read the file line by line
for i in $season; do
while read r p; do
echo prediction_${i}_${r}_${p}
done < parameters
done
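If interpretation #2 is what you want, the pieces above combine into a complete test.sh (a minimal sketch, assuming the parameters file shown in the question). Note that season=("spring summer fall winter") actually declares a one-element array whose single string only works here because of word splitting; a plain string (or a real array iterated as "${season[@]}") is clearer:

```shell
#!/usr/bin/env bash
# For each season, read the r2/p-value pairs line by line from 'parameters'.
seasons="spring summer fall winter"
for i in $seasons; do
    while read -r r p; do
        echo "prediction_${i}_${r}_${p}"
    done < parameters
done
```

For the 24-line parameters file above this prints 4 x 24 = 96 prediction names, one per season and parameter pair.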

Related

Is there a way to rotate objects about a tilted axis in X3D?

I have wireframe cubes in an X3D file. I am trying to tilt the cubes at a 15 degree axis and rotate them continuously. However, when I try rotating the object, it returns to its original (untilted) position and begins to rotate.
This is the snippet of code that I am using to rotate one of the cubes.
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='Content-Type' content='text/html;charset=utf-8'></meta>
<link rel='stylesheet' type='text/css' href='http://www.x3dom.org/x3dom/release/x3dom.css'></link>
<script type='text/javascript' src='http://www.x3dom.org/x3dom/release/x3dom.js'></script>
</head>
<body>
<x3d id='someUniqueId' showStat='false' showLog='false' x='0px' y='0px' width='400px' height='400px'>
<scene>
<navigationInfo avatarSize='0.25 1.75 0.75' headlight='false' type='"EXAMINE" "ANY"'></navigationInfo>
<background DEF='WO_World' groundColor='0.051 0.051 0.051' skyColor='0.051 0.051 0.051'></background>
<transform DEF='Cube_TRANSFORM' rotation='-0.735729 0.478906 0.478906 1.87298'>
<transform DEF='Cube_ifs_TRANSFORM'>
<group DEF='group_ME_Cube'>
<shape>
<appearance></appearance>
<indexedFaceSet solid='false' texCoordIndex='0 1 2 3 -1 4 5 6 7 -1 8 9 10 11 -1 12 13 14 15 -1 16 17 18 19 -1 20 21 22 23 -1 24 25 26 27 -1 28 29 30 31 -1 32 33 34 35 -1 36 37 38 39 -1 40 41 42 43 -1 44 45 46 47 -1 48 49 50 51 -1 52 53 54 55 -1 56 57 58 59 -1 60 61 62 63 -1 64 65 66 67 -1 68 69 70 71 -1 72 73 74 75 -1 76 77 78 79 -1 80 81 82 83 -1 84 85 86 87 -1 88 89 90 91 -1 92 93 94 95 -1 96 97 98 99 -1 100 101 102 103 -1 104 105 106 107 -1 108 109 110 111 -1 112 113 114 115 -1 116 117 118 119 -1 120 121 122 123 -1 124 125 126 127 -1 128 129 130 131 -1 132 133 134 135 -1 136 137 138 139 -1 140 141 142 143 -1 144 145 146 147 -1 148 149 150 151 -1 152 153 154 155 -1 156 157 158 159 -1 160 161 162 163 -1 164 165 166 167 -1 168 169 170 171 -1 172 173 174 175 -1 176 177 178 179 -1 180 181 182 183 -1 184 185 186 187 -1 188 189 190 191 -1' coordIndex='16 17 8 0 -1 17 16 1 9 -1 17 18 12 8 -1 18 17 9 13 -1 18 19 4 12 -1 19 18 13 5 -1 19 16 0 4 -1 16 19 5 1 -1 20 21 4 6 -1 21 20 7 5 -1 21 22 12 4 -1 22 21 5 13 -1 22 23 14 12 -1 23 22 13 15 -1 23 20 6 14 -1 20 23 15 7 -1 24 25 12 14 -1 25 24 15 13 -1 25 26 8 12 -1 26 25 13 9 -1 26 27 10 8 -1 27 26 9 11 -1 27 24 14 10 -1 24 27 11 15 -1 28 29 2 10 -1 29 28 11 3 -1 29 30 6 2 -1 30 29 3 7 -1 30 31 14 6 -1 31 30 7 15 -1 31 28 10 14 -1 28 31 15 11 -1 32 33 0 2 -1 33 32 3 1 -1 33 34 4 0 -1 34 33 1 5 -1 34 35 6 4 -1 35 34 5 7 -1 35 32 2 6 -1 32 35 7 3 -1 36 37 8 10 -1 37 36 11 9 -1 37 38 0 8 -1 38 37 9 1 -1 38 39 2 0 -1 39 38 1 3 -1 39 36 10 2 -1 36 39 3 11 -1'>
<coordinate DEF='coords_ME_Cube' point='0.971133 0.971133 0.971133 1.02887 1.02887 1.02887 0.971133 0.971133 -0.971133 1.02887 1.02887 -1.02887 0.971133 -0.971133 0.971133 1.02887 -1.02887 1.02887 0.971133 -0.971133 -0.971133 1.02887 -1.02887 -1.02887 -0.971133 0.971133 0.971133 -1.02887 1.02887 1.02887 -0.971133 0.971133 -0.971133 -1.02887 1.02887 -1.02887 -0.971133 -0.971133 0.971133 -1.02887 -1.02887 1.02887 -0.971133 -0.971133 -0.971133 -1.02887 -1.02887 -1.02887 0.95 0.95 1 -0.95 0.95 1 -0.95 -0.95 1 0.95 -0.95 1 0.95 -1 -0.95 0.95 -1 0.95 -0.95 -1 0.95 -0.95 -1 -0.95 -1 -0.95 -0.95 -1 -0.95 0.95 -1 0.95 0.95 -1 0.95 -0.95 -0.95 0.95 -1 0.95 0.95 -1 0.95 -0.95 -1 -0.95 -0.95 -1 1 0.95 -0.95 1 0.95 0.95 1 -0.95 0.95 1 -0.95 -0.95 -0.95 1 -0.95 -0.95 1 0.95 0.95 1 0.95 0.95 1 -0.95'></coordinate>
<textureCoordinate point='0.625 0.5 0.875 0.5 0.875 0.5 0.625 0.5 0.875 0.5 0.625 0.5 0.625 0.5 0.875 0.5 0.875 0.5 0.875 0.75 0.875 0.75 0.875 0.5 0.875 0.75 0.875 0.5 0.875 0.5 0.875 0.75 0.875 0.75 0.625 0.75 0.625 0.75 0.875 0.75 0.625 0.75 0.875 0.75 0.875 0.75 0.625 0.75 0.625 0.75 0.625 0.5 0.625 0.5 0.625 0.75 0.625 0.5 0.625 0.75 0.625 0.75 0.625 0.5 0.375 0.75 0.625 0.75 0.625 0.75 0.375 0.75 0.625 0.75 0.375 0.75 0.375 0.75 0.625 0.75 0.625 0.75 0.625 1 0.625 1 0.625 0.75 0.625 1 0.625 0.75 0.625 0.75 0.625 1 0.625 1 0.375 1 0.375 1 0.625 1 0.375 1 0.625 1 0.625 1 0.375 1 0.375 1 0.375 0.75 0.375 0.75 0.375 1 0.375 0.75 0.375 1 0.375 1 0.375 0.75 0.375 0 0.625 0 0.625 0 0.375 0 0.625 0 0.375 0 0.375 0 0.625 0 0.625 0 0.625 0.25 0.625 0.25 0.625 0 0.625 0.25 0.625 0 0.625 0 0.625 0.25 0.625 0.25 0.375 0.25 0.375 0.25 0.625 0.25 0.375 0.25 0.625 0.25 0.625 0.25 0.375 0.25 0.375 0.25 0.375 0 0.375 0 0.375 0.25 0.375 0 0.375 0.25 0.375 0.25 0.375 0 0.125 0.5 0.375 0.5 0.375 0.5 0.125 0.5 0.375 0.5 0.125 0.5 0.125 0.5 0.375 0.5 0.375 0.5 0.375 0.75 0.375 0.75 0.375 0.5 0.375 0.75 0.375 0.5 0.375 0.5 0.375 0.75 0.375 0.75 0.125 0.75 0.125 0.75 0.375 0.75 0.125 0.75 0.375 0.75 0.375 0.75 0.125 0.75 0.125 0.75 0.125 0.5 0.125 0.5 0.125 0.75 0.125 0.5 0.125 0.75 0.125 0.75 0.125 0.5 0.375 0.5 0.625 0.5 0.625 0.5 0.375 0.5 0.625 0.5 0.375 0.5 0.375 0.5 0.625 0.5 0.625 0.5 0.625 0.75 0.625 0.75 0.625 0.5 0.625 0.75 0.625 0.5 0.625 0.5 0.625 0.75 0.625 0.75 0.375 0.75 0.375 0.75 0.625 0.75 0.375 0.75 0.625 0.75 0.625 0.75 0.375 0.75 0.375 0.75 0.375 0.5 0.375 0.5 0.375 0.75 0.375 0.5 0.375 0.75 0.375 0.75 0.375 0.5 0.375 0.25 0.625 0.25 0.625 0.25 0.375 0.25 0.625 0.25 0.375 0.25 0.375 0.25 0.625 0.25 0.625 0.25 0.625 0.5 0.625 0.5 0.625 0.25 0.625 0.5 0.625 0.25 0.625 0.25 0.625 0.5 0.625 0.5 0.375 0.5 0.375 0.5 0.625 0.5 0.375 0.5 0.625 0.5 0.625 0.5 0.375 0.5 0.375 0.5 0.375 0.25 0.375 0.25 0.375 0.5 0.375 0.25 0.375 0.5 0.375 0.5 0.375 0.25'></textureCoordinate>
</indexedFaceSet>
</shape>
</group>
</transform>
</transform>
<transform DEF='Light_TRANSFORM' rotation='-0.498084 -0.762016 -0.413815 1.51388' translation='-4.07624 5.90386 1.00545'>
<pointLight DEF='LA_Light' radius='30'></pointLight>
</transform>
<transform DEF='Camera_TRANSFORM' rotation='-0.098233 -0.968789 -0.227591 2.34949' translation='-7.35889 4.95831 -6.92579'>
<viewpoint DEF='CA_Camera' position='-0 -0 100' fieldOfView='0.05'></viewpoint>
</transform>
<timeSensor DEF='clock' cycleInterval='8' loop='true'></timeSensor>
<orientationInterpolator DEF='spinThings' key='0 0.25 0.5 0.75 1' keyValue='0 1 0 0 0 1 0 1.57079 0 1 0 3.14159 0 1 0 4.71239 0 1 0 6.28317'></orientationInterpolator>
<ROUTE fromNode='clock' fromField='fraction_changed' toNode='spinThings' toField='set_fraction'></ROUTE>
<ROUTE fromNode='spinThings' fromField='value_changed' toNode='Cube_TRANSFORM' toField='set_rotation'></ROUTE>
</scene>
</x3d>
</body>
</html>
From the code it is apparent that the transform node with the DEF value of Cube_TRANSFORM is the node that tilts the cubes on the 15 degree axis. However, the second ROUTE statement replaces this rotation with the interpolated rotation, whose axis is the y axis for all interpolated values.
If you change the second ROUTE statement so that the toNode value is Cube_ifs_TRANSFORM then the static tilt by 15 degrees will be preserved, and this might be the visual animation you want.
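Concretely, only the toNode value of the second ROUTE needs to change, so the outer Cube_TRANSFORM keeps its static tilt while the animation drives the inner transform:

```xml
<ROUTE fromNode='spinThings' fromField='value_changed' toNode='Cube_ifs_TRANSFORM' toField='set_rotation'></ROUTE>
```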

Grouping data based on two logics

I have a huge text file of 4 columns. The first column is a serial number, the second and third columns are coordinates, and the 4th column is a value. These are the values of a variable at cell nodes. I would like to average the 4 nodal values to get the cell value to be read by my code. For example, consider a 3 by 3 Cartesian mesh of cells (a 4 by 4 grid of nodes) with the following data:
1 0. 0. 5e-4
2 0.1 0. 5e-3
3 0.2 0. 5e-4
4 0.3 0. 5e-3
5 0. 0.1 5e-5
6 0.1 0.1 5e-7
7 0.2 0.1 5e-5
8 0.3 0.1 5e-2
9 0. 0.2 5e-4
10 0.1 0.2 5e-3
11 0.2 0.2 5e-4
12 0.3 0.2 5e-3
13 0. 0.3 5e-5
14 0.1 0.3 5e-7
15 0.2 0.3 5e-5
16 0.3 0.3 5e-2
I would like to group lines in the following order:
1 0. 0. 5e-4
2 0.1 0. 5e-3
5 0. 0.1 5e-5
6 0.1 0.1 5e-7
2 0.1 0. 5e-3
3 0.2 0. 5e-4
6 0.1 0.1 5e-7
7 0.2 0.1 5e-5
3 0.2 0. 5e-4
4 0.3 0. 5e-3
7 0.2 0.1 5e-5
8 0.3 0.1 5e-2
5 0. 0.1 5e-5
6 0.1 0.1 5e-7
9 0. 0.2 5e-4
10 0.1 0.2 5e-3
6 0.1 0.1 5e-7
7 0.2 0.1 5e-5
10 0.1 0.2 5e-3
11 0.2 0.2 5e-4
and so on ...
There are two patterns in the above example. First, the lines (1,2,5,6), (2,3,6,7) and (3,4,7,8) form the sets for the first row of my mesh. This is followed by lines (5,6,9,10), where we move on to the next row of data. Then the first pattern continues again ((6,7,10,11), (7,8,11,12) and so on).
I used the following 'sed' command to extract groups of lines, but doing this individually is cumbersome given the size of the data I have to handle:
sed -n -e 1,2p -e 5,6p fileName
How can I create a loop that handles both of the patterns I mentioned above?
This might work for you (GNU sed):
sed -n ':a;N;s/\n/&/5;Ta;P;s/[^\n]*\n//;h;P;s/.*\n\(.*\n.*\)/\1/p;g;ba' file |
sed '13~12,+3d'
This follows the pattern uniformly, i.e. lines 1,2 followed by lines 5,6, then lines 2,3 followed by lines 6,7, etc. The result is passed to a second invocation of sed that removes 4 lines every 12 lines, starting at line 13.
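The grouping can also be generated directly by indexing the node lines, without hard-coding sed ranges. A sketch in awk, assuming (as in the example) the nodes are listed row by row with N=4 nodes per mesh row:

```shell
# Emit the four corner nodes of each cell: for cell (r,c),
# the node lines are base, base+1, base+N, base+N+1.
awk -v N=4 '
  { line[NR] = $0 }
  END {
    rows = NR / N                    # number of node rows
    for (r = 0; r < rows - 1; r++)   # cell rows
      for (c = 0; c < N - 1; c++) {  # cell columns
        base = r * N + c + 1
        print line[base]
        print line[base + 1]
        print line[base + N]
        print line[base + N + 1]
      }
  }' fileName
```

For the 16-line example this prints 9 groups of 4 lines in exactly the requested order (1,2,5,6; 2,3,6,7; 3,4,7,8; 5,6,9,10; ...).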

Finding zeros and replacing them with another number in a matrix file by awk

I have a matrix where I want to replace every 0 with 0.1; depending on how many zeros are replaced, the max score in that line should be reduced by the total of the 0.1s added. No line will contain only zeros, since this is a probability matrix where each line adds up to 1. If the highest number occurs more than once (0.5 in this case), then any one of them can be changed, and the first line will always be the only one with letters in it. The matrix below should go from
>ACTTT ASB 0.098
0 0 1 0
0.75 0 0.25 0
0 0 0 1
0 1 0 0
1 0 0 0
1 0 0 0
0 1 0 0
0 1 0 0
to
>ACTTT ASB 0.098
0.1 0.1 0.7 0.1
0.55 0.1 0.25 0.1
0.1 0.1 0.1 0.7
0.1 0.7 0.1 0.1
0.7 0.1 0.1 0.1
0.7 0.1 0.1 0.1
0.1 0.7 0.1 0.1
0.1 0.7 0.1 0.1
I tried to use something like this in a loop, based on previous answers here:
while read line ; do echo $line | awk 'NR>1{print gsub(/(^|[[:space:]])0([[:space:]]|$)/,"&")}'; echo $line | awk '{max=$2;for(i=3;i<=NF;i++)if($i>max)max=$i}END{print max}'; done < matrix_file
awk to the rescue!
$ awk -v eps=0.01 'function maxIx() {mI=1;
for(i=1;i<=NF;i++)
if($mI<$i)mI=i;
return mI}
NR>1{mX=maxIx();
for(i=1;i<=NF;i++)
if($i==0) {$i=eps;$mX-=eps}}1' file
>ACTTT ASB 0.098
0.01 0.01 0.97 0.01
0.73 0.01 0.25 0.01
0.01 0.01 0.01 0.97
0.01 0.97 0.01 0.01
0.97 0.01 0.01 0.01
0.97 0.01 0.01 0.01
0.01 0.97 0.01 0.01
0.01 0.97 0.01 0.01
eps is defined on the command line (-v eps=0.01 here). As long as you use a sensible value it should work fine, but it doesn't check for the maximum going below zero.
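To reproduce the exact matrix asked for in the question, run the same script with -v eps=0.1:

```shell
# eps=0.1 turns every 0 into 0.1 and deducts 0.1 per replaced zero
# from that row's maximum, so each row still sums to 1.
awk -v eps=0.1 'function maxIx() {mI=1;
  for(i=1;i<=NF;i++)
    if($mI<$i)mI=i;
  return mI}
NR>1{mX=maxIx();
  for(i=1;i<=NF;i++)
    if($i==0) {$i=eps;$mX-=eps}}1' file
```

which prints 0.1 0.1 0.7 0.1, then 0.55 0.1 0.25 0.1, and so on, matching the desired output.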

How to round float in Bash? (to a decimal)

I want to round my float variables so that the sum of these variables equals 1. Here is my program:
for float in 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25; do
w1=`echo "1.0 - $float" | bc -l`
w2=`echo "$w1/3" | bc -l`
echo "$w2 0.0 $w2 0.0 0.0 0.0 $w2 $float 0.0 0.0 0.0 0.0"
done
Where the sum 3*$w2 + $float has to be 1.00.
I'm a beginner, but I need this to compute some results.
I already tried what I found on the internet to round w2, but I didn't manage to make it work. It has to be rounded, not truncated, for the final result to be 1.00.
bc lets you use variables, so you can say:
for float in 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25; do
{ read w2; read f; } < <(
bc -l <<< "scale=5; w2=(1.0-$float)/3; w2; 1.0-3*w2"
)
echo "$w2 0.0 $w2 0.0 0.0 0.0 $w2 $f 0.0 0.0 0.0 0.0"
done
.33333 0.0 .33333 0.0 0.0 0.0 .33333 .00001 0.0 0.0 0.0 0.0
.33300 0.0 .33300 0.0 0.0 0.0 .33300 .00100 0.0 0.0 0.0 0.0
.33000 0.0 .33000 0.0 0.0 0.0 .33000 .01000 0.0 0.0 0.0 0.0
.32500 0.0 .32500 0.0 0.0 0.0 .32500 .02500 0.0 0.0 0.0 0.0
.31666 0.0 .31666 0.0 0.0 0.0 .31666 .05002 0.0 0.0 0.0 0.0
.30833 0.0 .30833 0.0 0.0 0.0 .30833 .07501 0.0 0.0 0.0 0.0
.30000 0.0 .30000 0.0 0.0 0.0 .30000 .10000 0.0 0.0 0.0 0.0
.29166 0.0 .29166 0.0 0.0 0.0 .29166 .12502 0.0 0.0 0.0 0.0
.28333 0.0 .28333 0.0 0.0 0.0 .28333 .15001 0.0 0.0 0.0 0.0
.27500 0.0 .27500 0.0 0.0 0.0 .27500 .17500 0.0 0.0 0.0 0.0
.26666 0.0 .26666 0.0 0.0 0.0 .26666 .20002 0.0 0.0 0.0 0.0
.25833 0.0 .25833 0.0 0.0 0.0 .25833 .22501 0.0 0.0 0.0 0.0
.25000 0.0 .25000 0.0 0.0 0.0 .25000 .25000 0.0 0.0 0.0 0.0
Adjust scale=? as required.
From the comment on your OP, you say it's acceptable to alter the float variable so that the sum equals 1. In this case, first compute w2 and then re-compute float from it:
w2=$(bc -l <<< "(1-($float))/3")
float=$(bc -l <<< "1-3*($w2)")
The whole thing, written in a better style:
floats=( 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25 )
for float in "${floats[@]}"; do
w2=$(bc -l <<< "(1-($float))/3")
float=$(bc -l <<< "1-3*($w2)")
printf "%s 0.0 %s 0.0 0.0 0.0 %s %s 0.0 0.0 0.0 0.0\n" "$w2" "$w2" "$w2" "$float"
done
This uses the precision provided by bc -l (20 decimal digits after the decimal point). If you don't want that accuracy, you may round the w2 before recomputing float as so:
floats=( 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25 )
for float in "${floats[@]}"; do
w2=$(bc -l <<< "scale=3; (1-($float))/3")
float=$(bc <<< "1-3*($w2)")
printf "%s 0.0 %s 0.0 0.0 0.0 %s %s 0.0 0.0 0.0 0.0\n" "$w2" "$w2" "$w2" "$float"
done
Note that the last bc isn't called with the -l option: it will use whatever significant digits are in w2. Change the scale to suit your needs. Proceeding thus will guarantee that your numbers add up to 1, as you can check from the output of the previous snippet:
.333 0.0 .333 0.0 0.0 0.0 .333 .001 0.0 0.0 0.0 0.0
.333 0.0 .333 0.0 0.0 0.0 .333 .001 0.0 0.0 0.0 0.0
.330 0.0 .330 0.0 0.0 0.0 .330 .010 0.0 0.0 0.0 0.0
.325 0.0 .325 0.0 0.0 0.0 .325 .025 0.0 0.0 0.0 0.0
.316 0.0 .316 0.0 0.0 0.0 .316 .052 0.0 0.0 0.0 0.0
.308 0.0 .308 0.0 0.0 0.0 .308 .076 0.0 0.0 0.0 0.0
.300 0.0 .300 0.0 0.0 0.0 .300 .100 0.0 0.0 0.0 0.0
.291 0.0 .291 0.0 0.0 0.0 .291 .127 0.0 0.0 0.0 0.0
.283 0.0 .283 0.0 0.0 0.0 .283 .151 0.0 0.0 0.0 0.0
.275 0.0 .275 0.0 0.0 0.0 .275 .175 0.0 0.0 0.0 0.0
.266 0.0 .266 0.0 0.0 0.0 .266 .202 0.0 0.0 0.0 0.0
.258 0.0 .258 0.0 0.0 0.0 .258 .226 0.0 0.0 0.0 0.0
.250 0.0 .250 0.0 0.0 0.0 .250 .250 0.0 0.0 0.0 0.0
You have to use the bc utility to process floating point numbers in bash.
For example, consider the code given below:
a=15
b=2
echo $((a / b))
will give you 7 as the result, because bash arithmetic is integer-only. Whereas
a=15
b=2
echo "$a / $b" | bc -l
will give 7.50000000000000000000 as the result.
You can use printf to round the output of bc:
printf '%.2f\n' $( bc -l <<< "3 * $w2 + $float" )

Merge files with scientific notation data in the first column and how to use uniq

Two questions concerning the uniq command, please help.
First question
Say I have two files;
$ cat 1.dat
0.1 1.23
0.2 1.45
0.3 1.67
$ cat 2.dat
0.3 1.67
0.4 1.78
0.5 1.89
Using cat 1.dat 2.dat | sort -n | uniq > 3.dat, I am able to merge the two files into one. The result is:
0.1 1.23
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
But if I have a scientific notation in 1.dat file,
$ cat 1.dat
1e-1 1.23
0.2 1.45
0.3 1.67
the result would be:
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
1e-1 1.23
which is not what I want. How can I make uniq understand that 1e-1 is a number, not a string?
Second question
Same as above, but this time, let the second file 2.dat's first row be slightly different (from 0.3 1.67 to 0.3 1.57)
$ cat 2.dat
0.3 1.57
0.4 1.78
0.5 1.89
Then the result would be:
0.1 1.23
0.2 1.45
0.3 1.67
0.3 1.57
0.4 1.78
0.5 1.89
My question is: how can I keep the value from the first file and detect repetition based only on the first column, so that the result is still:
0.1 1.23
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
Thanks
A more complex test case:
$ cat 1.dat
1e-6 -1.23
0.2 -1.45
110.7 1.55
0.3 1.67e-3
One awk (GNU awk) one-liner solves both of your problems:
awk '{a[$1*1];b[$1*1]=$0}END{asorti(a);for(i=1;i<=length(a);i++)print b[a[i]];}' file2 file1
Test with data (note: I made file1 unsorted and used 1.57 in file2, as you wanted):
kent$ head *
==> file1 <==
0.3 1.67
0.2 1.45
1e-1 1.23
==> file2 <==
0.3 1.57
0.4 1.78
0.5 1.89
kent$ awk '{a[$1*1];b[$1*1]=$0}END{asorti(a);for(i=1;i<=length(a);i++)print b[a[i]];}' file2 file1
1e-1 1.23
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
edit
display 0.1 instead of 1e-1:
kent$ awk '{a[$1*1];b[$1*1]=$2}END{asorti(a);for(i=1;i<=length(a);i++)print a[i],b[a[i]];}' file2 file1
0.1 1.23
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
edit 2
For the precision, awk's default (OFMT) is %.6g; you could change it. But if you want to display a different precision per line, we need a bit of a trick:
(I added 1e-9 in file1)
kent$ awk '{id=sprintf("%.9f",$1*1);sub(/0*$/,"",id);a[id];b[id]=$2}END{asorti(a);for(i=1;i<=length(a);i++)print a[i],b[a[i]];}' file2 file1
0.000000001 1.23
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
If you want to display the same precision for all lines:
kent$ awk '{id=sprintf("%.9f",$1*1);a[id];b[id]=$2}END{asorti(a);for(i=1;i<=length(a);i++)print a[i],b[a[i]];}' file2 file1
0.000000001 1.23
0.200000000 1.45
0.300000000 1.67
0.400000000 1.78
0.500000000 1.89
The first part only:
cat 1.dat 2.dat | sort -g -u
1e-1 1.23
0.2 1.45
0.3 1.67
0.4 1.78
0.5 1.89
man sort
-g, --general-numeric-sort
compare according to general numerical value
-u, --unique
with -c, check for strict ordering; without -c, output only the first of an equal run
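Note that because -g compares lines by their leading general-numeric value, sort -g -u also handles the second question: the two 0.3 lines compare equal, and only the first of the equal run is kept (GNU sort performs a stable merge when -u is given, so list 1.dat first):

```shell
printf '1e-1 1.23\n0.2 1.45\n0.3 1.67\n' > 1.dat
printf '0.3 1.57\n0.4 1.78\n0.5 1.89\n' > 2.dat
sort -g -u 1.dat 2.dat
```

This prints five lines, keeping 0.3 1.67 from 1.dat and dropping 0.3 1.57 (behavior of GNU sort).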
To convert the scientific notation to decimal, I resorted to Python:
#!/usr/bin/env python
import sys
import glob
infiles = []
for a in sys.argv[1:]:          # skip the script name itself
    infiles.extend(glob.glob(a))
for f in infiles:
    with open(f) as fd:
        for line in fd:
            data = [float(x) for x in line.strip().split()]
            print(data[0], data[1])
output:
$ ./sn.py 1.dat 2.dat
0.1 1.23
0.2 1.45
0.3 1.67
0.3 1.67
0.4 1.78
0.5 1.89
