I want to plot some data from a spray simulation. There is a variable called the vaporpenetrationlength, which describes the distance from the injector to the position where the mass fraction is 0.1%. The simulation created many folders, one for each time step. Inside each of those folders there is one file which contains the mass fraction and the distance.
I want to create a script which goes through all the time step folders, searches inside this one file, and prints out the distance where the 0.1% was measured and the time step in which it occurred.
I found a script, but I don't understand it because I just started to learn shell scripting.
Could someone please help me step by step in building such a script? I am interested in learning it, and therefore I want to understand every line of the code.
Thanks in advance :)
This little script outputs Time<TAB>Length<TAB>Mass lines based on the value of the "mass fraction":
printf '%s\t%s\t%s\n' 'Time' 'Length' 'Mass'
awk '
BEGIN { FS = OFS = "\t"}
FNR == 1 {
n = split(FILENAME,path,"/")
time = sprintf("%0.7f",path[n-1])
}
NF != 2 {next}
0.001 <= $2 && $2 < 0.00101 { print time,$1,$2 }
' postProcessing/singleGraphVapPen/*/*
remark: In fact, printing the header could be done within the awk program, but doing it with a separate printf command allows you to post-process the output of awk (for ex. if you need to sort the times and/or lengths and/or masses).
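For example, a minimal sketch of such post-processing, sorting the data rows numerically by time while the header stays on top:
printf '%s\t%s\t%s\n' 'Time' 'Length' 'Mass'
awk '
BEGIN { FS = OFS = "\t"}
FNR == 1 {
n = split(FILENAME,path,"/")
time = sprintf("%0.7f",path[n-1])
}
NF != 2 {next}
0.001 <= $2 && $2 < 0.00101 { print time,$1,$2 }
' postProcessing/singleGraphVapPen/*/* |
sort -n -k1,1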
notes:
FNR == 1 is true for the first line of each input file. In the corresponding block, I extract the time value from the directory name.
NF != 2 {next} is for filtering out the gnuplot commands that are at the beginning of the input files. In words, this statement means "if the number of (tab-delimited) fields in the line isn't 2, then skip"
0.001 <= $2 && $2 < 0.00101 selects the lines based on the value of their second field, which is referred to as yheptane in your script. IDK the margin of error of your "0.1% of mass fraction" so I chose convenient conditions for the sample output below.
With the sample data, the output will be:
Time Length Mass
0.0001500 0.0895768 0.00100839
0.0002000 0.102057 0.00100301
0.0002000 0.0877939 0.00100832
0.0003500 0.0827694 0.00100114
0.0009000 0.0657509 0.00100015
0.0015000 0.0501911 0.00100016
0.0016500 0.0469495 0.00100594
0.0018000 0.0436538 0.00100853
0.0021500 0.0369005 0.00100809
0.0023000 0.100328 0.00100751
As an aside, here's a script for replacing your original code:
#!/bin/bash
set -- postProcessing/singleGraphVapPen/*/*
if ! [ -f VapPen.txt ]
then
{
printf '%s\t%s\n' 'Time [s]' 'VapPen [m]'
awk '
BEGIN {FS = OFS = "\t"}
FNR == 1 {
if (NR > 1)
print time,vappen
vappen = 0
n = split(FILENAME,path,"/")
time = sprintf("%0.7f",path[n-1])
}
NF != 2 {next}
$2 >= 0.001 { vappen = $1 }
END { if (NR) print time,vappen }
' "$#" |
sort -n -k1,1
} > VapPen.txt
fi
gnuplot -e '
set title "Verdunstungspenetration";
set xlabel "Zeit [s]";
set ylabel "Verdunstungspenetrationslänge [m]";
set grid;
plot "VapPen.txt" using 1:2 with linespoints title "Vapor penetraion 0,1% mass";
pause -1 "Hit return to continue";
'
With the provided data, it reduces the execution time from several minutes to 0.15s on my computer.
I've recently been working on some lab assignments, and in order to collect and analyze the results I prepared a bash script to automate my job. It was my first attempt at creating such a script, so it is not perfect, and my question is about improving it.
Example output of the program is shown below, but I would like to make the script more general so it can serve more purposes.
>>> VARIANT 1 <<<
Random number generator seed is 0xea3495cc76b34acc
Generate matrix 128 x 128 (16 KiB)
Performing 1024 random walks of 4096 steps.
> Total instructions: 170620482
> Instructions per cycle: 3.386
Time elapsed: 0.042127 seconds
Walks accrued elements worth: 534351478
Each piece of data I want to collect is always on a different line. My first attempt was to run the same program twice (or more times, depending on the amount of data) and grep each run for a keyword to extract the value I need. This is very inefficient; there must be a way to parse the whole output of a single run, but I could not come up with one. At the moment the script is:
#!/bin/bash
write() {
o1=$(./progname args | grep "Time" | grep -o -E '[0-9]+.[0-9]+')
o2=$(./progname args | grep "cycle" | grep -o -E '[0-9]+.[0-9]+')
o3=$(./progname args | grep "Total" | grep -o -E '[0-9]+.[0-9]+')
echo "$1 $o1 $o2 $o3"
}
for ((i = 1; i <= 10; i++)); do
write $i >> times.dat
done
It is worth mentioning that echoing results in one line is crucial, as I am using gnuplot later and having data in columns is perfect for that use. Sample output should be:
1 0.019306 3.369 170620476
2 0.019559 3.375 170620475
3 0.021971 3.334 170620478
4 0.020536 3.378 170620480
5 0.019692 3.390 170620475
6 0.020833 3.375 170620477
7 0.019951 3.450 170620477
8 0.019417 3.381 170620476
9 0.020105 3.374 170620476
10 0.020255 3.402 170620475
My question is: how could I improve the script to collect such data in just one program execution?
You could use awk here to get the values into an array and later access them by index 0, 1 and 2, in case you want to do this in a single command.
myarr=($(your_program args | awk '/Total/{print $NF;next} /cycle/{print $NF;next} /Time/{print $(NF-1)}'))
Or use the following to force all the values onto a single line, so that they will not appear on separate lines if someone quotes the command substitution to preserve newlines in the values.
myarr=($(your_program args | awk '/Total/{val=$NF;next} /cycle/{val=(val?val OFS:"")$NF;next} /Time/{print val OFS $(NF-1)}'))
Explanation: here is a detailed explanation of the first awk program above.
awk ' ##Starting awk program from here.
/Total/{ ##Checking if a line has Total keyword in it then do following.
print $NF ##Printing last field of that line which has Total in it here.
next ##next keyword will skip all further statements from here.
}
/cycle/{ ##Checking if a line has cycle in it then do following.
print $NF ##Printing last field of that line which has cycle in it here.
next ##next keyword will skip all further statements from here.
}
/Time/{ ##Checking if a line has Time in it then do following.
print $(NF-1) ##Printing 2nd last field of that line which has Time in it here.
}'
To access individual items you could use:
echo ${myarr[0]}, echo ${myarr[1]} and echo ${myarr[2]} for Total, cycle and time respectively.
An example of accessing all the elements in a loop, in case you need it:
for i in "${myarr[#]}"
do
echo $i
done
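With the sample program output shown in the question and the first awk variant, that loop prints the three captured values:
170620482
3.386
0.042127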
You can execute your program once and save the output in a variable.
o0=$(./progname args)
Then you can grep that saved string as many times as you need, like this:
o1=$(echo "$o0" | grep "Time" | grep -o -E '[0-9]+.[0-9]+')
Assumptions:
each of the 3 search patterns (Time, cycle, Total) occurs just once in a set of output from ./progname
format of ./progname output is always the same (i.e., the same number of space-separated items on each line of output)
I've created my own progname script that just does an echo of the sample output:
$ cat progname
echo ">>> VARIANT 1 <<<
Random number generator seed is 0xea3495cc76b34acc
Generate matrix 128 x 128 (16 KiB)
Performing 1024 random walks of 4096 steps.
> Total instructions: 170620482
> Instructions per cycle: 3.386
Time elapsed: 0.042127 seconds
Walks accrued elements worth: 534351478"
One awk solution to parse and print the desired values:
$ i=1
$ ./progname | awk -v i=${i} ' # assign awk variable "i" = ${i}
/Time/ { o1 = $3 } # o1 = field 3 of line that contains string "Time"
/cycle/ { o2 = $5 } # o2 = field 5 of line that contains string "cycle"
/Total/ { o3 = $4 } # o3 = field 4 of line that contains string "Total"
END { printf "%s %s %s %s\n", i, o1, o2, o3 } # print 4x variables to stdout
'
1 0.042127 3.386 170620482
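Hooked into the original loop, a sketch that reuses the same awk program might look like:
for ((i = 1; i <= 10; i++)); do
./progname args | awk -v i=${i} '
/Time/ { o1 = $3 }
/cycle/ { o2 = $5 }
/Total/ { o3 = $4 }
END { printf "%s %s %s %s\n", i, o1, o2, o3 }
' >> times.dat
done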
I have the following script that creates multiple objects.
I tried simply running it in my terminal, but it takes very long. How can I run this with GNU Parallel?
The script below creates the object: it loops iy from 1 to niy = 800, and for every increment of iy it loops jx from 1 to njx = 675.
#!/bin/csh
set njx = 675 ### Number of grids in X
set niy = 800 ### Number of grids in Y
set ll_x = -337500
set ll_y = -400000 ### (63 / 2) * 1000 ### This is the coordinate at lower right corner
set del_x = 1000
set del_y = 1000
rm -f out.shp
rm -f out.shx
rm -f out.dbf
rm -f out.prj
shpcreate out polygon
dbfcreate out -n ID1 10 0
@ n = 0 ### initialization of counter (n) to count grid cells in the loop
@ iy = 1 ### initialization of counter (iy) to count grid cells along the north-south direction
echo ### empty line on screen
while ($iy <= $niy) ### start the loop for the north-south direction
echo ' south-north' $iy '/' $niy ### print a notification on screen
@ jx = 1
while ($jx <= $njx) ### start the loop for the east-west direction
@ n++
set x = `echo $ll_x $jx $del_x | awk '{print $1 + ($2 - 1) * $3}'`
set y = `echo $ll_y $iy $del_y | awk '{print $1 + ($2 - 1) * $3}'`
set txt = `echo $x $y $del_x $del_y | awk '{print $1, $2, $1, $2 + $4, $1 + $3, $2 + $4, $1 + $3, $2, $1, $2}'`
shpadd out `echo $txt`
dbfadd out $n
@ jx++
end ### close the second loop
@ iy++
end ### close the first loop
echo
### lines below create a projection file for the created shapefile using
cat > out.prj << eof
PROJCS["Asia_Lambert_Conformal_Conic",GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Lambert_Conformal_Conic"],PARAMETER["False_Easting",0.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",120.98],PARAMETER["Standard_Parallel_1",5.0],PARAMETER["Standard_Parallel_2",20.0],PARAMETER["Latitude_Of_Origin",14.59998],UNIT["Meter",1.0]]
eof
###
###
###
The inner part gets executed 540,000 times and on each iteration you invoke 3 awk processes to do 3 simple bits of maths... that's 1.6 million awks.
Rather than that, I have written a single awk to generate all the loops and do all the maths and this can then be fed into bash or csh to actually execute it.
I wrote this and ran it completely in the time the original version got to 16% through. I have not checked it extremely thoroughly, but you should be able to readily correct any minor errors:
#!/bin/bash
awk -v njx=675 -v niy=800 -v ll_x=-337500 -v ll_y=-400000 -v del_x=1000 -v del_y=1000 '
BEGIN{
print "shpcreate out polygon"
print "dbfcreate out -n ID1 10 0"
n=0
for(iy=1;iy<=niy;iy++){
for(jx=1;jx<=njx;jx++){
n++
x = ll_x + (jx-1)*del_x
y = ll_y + (iy-1)*del_y
txt = sprintf("%d %d %d %d %d %d %d %d %d %d",x,y,x, y+del_y, x+del_x, y+del_y, x+del_x,y,x,y)
print "shpadd out",txt
print "dbfadd out",n
}
}
}' /dev/null
If the output looks good, you can then run it through bash or csh like this:
./MyAwk | csh
Note that I don't know anything about these Shapefile (?) tools, shpadd or dbfadd. They may or may not be able to run in parallel; if they are anything like sqlite, running them in parallel will not help you much. I am guessing the changes above are enough to make a massive improvement to your runtime. If not, here are some other things you could think about.
You could append an ampersand (&) to each line that starts dbfadd or shpadd so that several start in parallel, and then print a wait after every 8 lines so that you run 8 things in parallel in chunks.
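For example, a sketch of that idea as a post-processing filter on the generated command stream (MyAwk being the generator script above; whether shpadd/dbfadd tolerate running concurrently is an open question):
./MyAwk |
awk '/^(shpadd|dbfadd) / { print $0 " &"; if (++c % 8 == 0) print "wait"; next } { print }' |
csh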
You could feed the output of the script directly into GNU Parallel, but I have no idea if the ordering of the lines is critical.
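A hedged sketch of that, assuming the ordering of the add commands does not matter and running the two setup commands first:
./MyAwk > commands.txt
head -n 2 commands.txt | csh            # shpcreate/dbfcreate must finish first
tail -n +3 commands.txt | parallel -j 8 -k   # 8 jobs at a time, keep output order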
I presume this is creating some sort of database. It may be faster if you run it on a RAM-backed filesystem, such as /tmp.
I notice there is a Python module for manipulating Shapefiles here. I can't help thinking that would be many, many times faster still.
The following works well and captures all 2nd column values for S_nn. The goal is to add numbers in the 2nd column.
awk -F "," '/s_/ {cons = cons + $2} END {print cons}' G.csv
How can I change this to add only when nn is between N1 and N2, e.g. s_23 and s_24?
Also, is it possible to count the value as 1 if a line has junk instead of a number in the 2nd column?
S_22, 1
S_23, 0
S_24, 1
S_25, 1
S_26, ?
Sample input: sum s_24 to s_26
Sample output: 1+1+1=3 (the last 1 stands in for the junk value)
The solution is rather simple: all you need to do is perform a numeric test.
awk -v start=24 -v stop=26 '
BEGIN { FS="[_,]" }
(start <= $2 ) && ($2 <= stop) { s = s + (($3==$3+0)?$3:1) }
END{ print s+0 }' <file>
which outputs
3
How does it work:
line 1: defines the start and stop values as awk variables
the BEGIN statement redefines the field separator as an _ or a , so now we have 3 fields.
the second line checks if field 2 (the number) is between start and stop; if so, it performs the sum.
field 3 is checked to be a number by testing the condition $3==$3+0; if this fails, the value is assumed to be 1
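A quick stand-alone way to see that numeric test in action on two of the sample rows (just an illustrative one-liner):
printf 'S_25, 1\nS_26, ?\n' | awk -F'[_,]' '{ v = ($3 == $3+0) ? $3 : 1; print $0, "->", v+0 }'
which prints:
S_25, 1 -> 1
S_26, ? -> 1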
If you want to see the numbers printed, you can do:
awk -v start=24 -v stop=26 '
BEGIN{ FS="[_,]" }
(start <= $2 ) && ($2 <= stop) {
v = ($3==$3+0)?$3:1
s = s + v
printf "%s%d", (c++?"+":""), v
}
END{ printf "=%d\n", s }' <file>
output:
1+1+1=3
The printf statement prints a "+" before each value except the first one. This is done by keeping track of a counter c, whose value is zero by default. The expression (c++?"+":"") determines whether we are printing the first entry or not. c++ returns the value of c and afterwards sets c to c+1; this is called the post-increment operator. Thus, the first time, c=0 and (c++?"+":"") returns "" and sets c to 1. The second time, (c++?"+":"") returns "+" and sets c to 2.
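A tiny stand-alone illustration of that post-increment trick:
awk 'BEGIN { for (i = 1; i <= 3; i++) printf "%s%d", (c++ ? "+" : ""), 1; print "" }'
which prints 1+1+1.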
I need to split a big file into smaller chunks based on the last occurrence of a pattern in the big file, using a shell script. For example:
Sample.txt (the file is sorted on the third field, which is the field the pattern is searched in)
NORTH EAST|0004|00001|Fost|Weaather|<br/>
NORTH EAST|0004|00001|Fost|Weaather|<br/>
SOUTH|0003|00003|Haet|Summer|<br/>
SOUTH|0003|00003|Haet|Summer|<br/>
SOUTH|0003|00003|Haet|Summer|<br/>
EAST|0007|00016|uytr|kert|<br/>
EAST|0007|00016|uytr|kert|<br/>
WEST|0002|00112|WERT|fersg|<br/>
WEST|0002|00112|WERT|fersg|<br/>
SOUTHWEST|3456|01134|GDFSG|EWRER|<br/>
"Pattern 1 = 00003 " to be searched output file must contain sample_00003.txt
NORTH EAST|0004|00001|Fost|Weaather|<br/>
NORTH EAST|0004|00001|Fost|Weaather|<br/>
SOUTH|0003|00003|Haet|Summer|<br/>
SOUTH|0003|00003|Haet|Summer|<br/>
SOUTH|0003|00003|Haet|Summer|<br/>
"Pattren 2 = 00112" to be searched output file must contain sample_00112.txt
EAST|0007|00016|uytr|kert|<br/>
EAST|0007|00016|uytr|kert|<br/>
WEST|0002|00112|WERT|fersg|<br/>
WEST|0002|00112|WERT|fersg|<br/>
I used
awk -F'|' -v 'pattern="00003"' '$3~pattern big_file' > smallfile
and grep commands, but it was very time consuming since the file is 300+ MB in size.
Not sure if you'll find a faster tool than awk, but here's a variant that fixes your own attempt and also speeds things up a little by using string matching rather than regex matching.
It processes lookup values in a loop, and outputs everything from where the previous iteration left off through the last occurrence of the value at hand to a file named smallfile<n>, where <n> is an index starting with 1.
ndx=0; fromRow=1
for val in '00003' '00112' '|'; do # 2 sample values to match, plus dummy value
chunkFile="smallfile$(( ++ndx ))"
fromRow=$(awk -F'|' -v fromRow="$fromRow" -v outFile="$chunkFile" -v val="$val" '
NR < fromRow { next }
{ if ($3 != val) { if (p) { print NR; exit } } else { p=1 } } { print > outFile }
' big_file)
done
Note that the dummy value | ensures that any rows remaining after the last real value to match are saved to a chunk file too.
Note that moving all the logic into a single awk script should be much faster, because big_file would only have to be read once:
awk -F'|' -v vals='00003|00112' '
BEGIN { split(vals, val); outFile="smallfile" ++ndx }
{
if ($3 != val[ndx]) {
if (p) { p=0; close(outFile); outFile="smallfile" ++ndx }
} else {
p=1
}
print > outFile
}
' big_file
You can try with Perl:
perl -ne '/00003/ && print' big_file > small_file
and compare its timing with other solutions...
EDIT
Limiting my answer to the tools you didn't try already... you can also use:
sed -n '/00003/p' big_file > small_file
But I tend to believe perl will be faster. Again... I'd suggest you measure the elapsed time of the different solutions on your own.
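For example, a rough way to compare them on your file (a sketch; the awk line is a fixed-up version of your own attempt, and output is discarded so only processing time is measured):
time awk -F'|' '$3 == "00003"' big_file > /dev/null
time sed -n '/00003/p' big_file > /dev/null
time perl -ne '/00003/ && print' big_file > /dev/null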
Reading a text file into an array, extracting elements and sorting them is taking a very long time.
The text file is ffmpeg console output for R128 audio analysis. I need to get the highest M and S values. Example:
[Parsed_ebur128_0 @ 0x7fd32a60caa0] t: 4.49998 M: -22.2 S: -29.9 I: -27.0 LUFS LRA: 9.8 LU FTPK: -12.4 dBFS TPK: -9.7 dBFS
[Parsed_ebur128_0 @ 0x7fd32a60caa0] t: 4.69998 M: -22.5 S: -28.6 I: -25.9 LUFS LRA: 11.3 LU FTPK: -12.7 dBFS TPK: -9.7 dBFS
The text file can be hundreds or thousands of lines long depending on the duration of the audio file being analysed
I want to find the highest M value (-22.2) and the highest S value (-28.6) and assign them to the variables M and S.
This is what I am using currently:
ARRAY=()
while read LINE
do
ARRAY+=("$LINE")
done < $tempDir/text.txt
for LINE in "${ARRAY[#]}"
do
echo "$LINE" | sed -n ‘/B:/p' | sed 's/S:.*//' | sed -n -e 's/^.*M://p' | sed -n -e 's/-//p' >>/$tempDir/R128M.txt
done
for LINE in "${ARRAY[#]}"
do
echo "$LINE" | sed -n '/M:/p' | sed 's/I:.*//' | sed -n -e 's/^.*S://p' | sed -n -e 's/-//p' >>$tempDir/R128S.txt
done
cat $tempDir/R128M.txt
M=( $(sort $tempDir/R128M.txt) )
cat $tempDir/R128S.txt
S=( $(sort $tempDir/R128S.txt) )
Is there a faster way of doing this?
Rather than reading the whole file into memory, writing bits of it out to separate files, and reading those in again, just parse it and pick out the largest values:
$ awk '$7 > m || m == "" { m = $7 } $9 > s || s == "" { s = $9 } END { print m, s }' data
-22.2 -28.6
In your data, fields 7 and 9 contain the values of M and S. The awk script will update its m and s variables if it finds larger values in these fields and then print the largest found at the end. The m == "" and s == "" tests are needed to trigger initialization of the values when no value has been read yet.
Another way with awk, which may look cleaner:
$ awk 'FNR == 1 { m = $7; s = $9; next } $7 > m { m = $7 } $9 > s { s = $9 } END { print m, s }' data
To assign them to M and S in the shell:
$ declare $( awk 'FNR == 1 { m = $7; s = $9; next } $7 > m { m = $7 } $9 > s { s = $9 } END { printf("M=%f S=%f\n", m, s) }' data )
$ echo $M $S
-22.200000 -28.600000
Adjust the printf() format to use %s instead of %f if you want the original strings instead of float values, or set the number of decimals you might want with, e.g., %.2f in place of %f.
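For example, the %s variant of the same command keeps the values exactly as they appear in the file:
$ declare $( awk 'FNR == 1 { m = $7; s = $9; next } $7 > m { m = $7 } $9 > s { s = $9 } END { printf("M=%s S=%s\n", m, s) }' data )
$ echo $M $S
-22.2 -28.6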
First of all, a multi-stage pipeline is a bit redundant for extracting a single value, especially taking into account that you instantiate it anew for every line.
Next, you save all the values into a file and then sort that file, when all you need is the maximum value. You can easily find it during the very first (value-extraction) loop, in O(N) running time, instead of paying the I/O overhead plus the O(N log N) cost of sorting. See ARITHMETIC EXPANSION and conditional expressions in the bash manual.
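The bash shell only does integer arithmetic, so here is a minimal single-pass sketch of that idea which compares the readings as integer tenths (assuming ffmpeg's one-decimal M/S values and the $tempDir/text.txt input from the original loop; the tenths helper and the maxM/maxS names are illustrative):
# "-22.2" -> -222 in the global _t, so (( )) can compare the readings as integers
tenths() {
    local v=${1/./}                              # drop the decimal point
    if [[ $v == -* ]]; then _t=$(( -10#${v#-} )); else _t=$(( 10#$v )); fi
}
maxM= maxS=
while IFS= read -r line; do
    [[ $line == *" M: "* ]] || continue          # skip lines without measurements
    m=${line#* M: }; m=${m%% *}                  # extract M via parameter expansion
    s=${line#* S: }; s=${s%% *}                  # extract S
    tenths "$m"; mi=$_t
    tenths "$s"; si=$_t
    if [[ -z $maxM ]] || (( mi > maxMi )); then maxM=$m maxMi=$mi; fi
    if [[ -z $maxS ]] || (( si > maxSi )); then maxS=$s maxSi=$si; fi
done < "$tempDir/text.txt"
M=$maxM S=$maxS
echo "M=$M S=$S"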