I have a very basic shell script here:
for file in Alt_moabit Book_arrival Door_flowers Leaving_laptop
do
    for qp in 10 12 15 19 22 25 32 39 45 60
    do
        for i in 0 1
        do
            echo "$file\t$qp\t$i" >> psnr.txt
            ./command > $file-$qp-psnr.txt 2>> psnr.txt
        done
    done
done
command calculates some PSNR values and writes a detailed summary to a file for each combination of file, qp and i. That's fine.
The output captured by 2>> is one line of information that I really need. But when executed, I get:
Alt_moabit 10 0
total 47,8221 50,6329 50,1031
Alt_moabit 10 1
total 47,8408 49,9973 49,8197
Alt_moabit 12 0
total 47,0665 50,1457 49,6755
Alt_moabit 12 1
total 47,1193 49,4284 49,3476
What I want, however, is this:
Alt_moabit 10 0 total 47,8221 50,6329 50,1031
Alt_moabit 10 1 total 47,8408 49,9973 49,8197
Alt_moabit 12 0 total 47,0665 50,1457 49,6755
Alt_moabit 12 1 total 47,1193 49,4284 49,3476
How can I achieve that?
You could pass the -n option to your first echo command, so it doesn't output a newline.
As a quick demonstration, this:
echo "test : " ; echo "blah"
will get you:
test :
blah
With a newline between the two outputs.
While this, with a -n for the first echo:
echo -n "test : " ; echo "blah"
will get you the following output:
test : blah
Without any newline between the two outputs.
The (GNU version of) echo utility has a -n option to omit the trailing newline. Use that on your first echo. You'll probably have to put some space after the first line or before the second for readability.
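Applied to the loop in the question, that could look something like this sketch (adding -e as well, so that bash's builtin echo interprets the \t escapes, and ending with a tab so the stderr line from ./command lands on the same row):
echo -ne "$file\t$qp\t$i\t" >> psnr.txt
./command > $file-$qp-psnr.txt 2>> psnr.txt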
You can use printf instead of echo, which is better for portability reasons.
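For example, the two lines inside the innermost loop of the question could be replaced with something like this sketch, where printf writes the tab-separated prefix without a trailing newline:
printf '%s\t%s\t%s\t' "$file" "$qp" "$i" >> psnr.txt
./command > $file-$qp-psnr.txt 2>> psnr.txt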
printf is the correct way to solve your problem (+1 kurumi), but for completeness, you can also do:
echo "$file\t$qp\t$i $( ./command 2>&1 > $file-$qp-psnr.txt )" >> psnr.txt
While echo -n may work if you just want to print the output to the console, it won't work if you want the output redirected to a file.
If you want the concatenated output to be redirected to a file, this will work:
echo "Str1: `echo "Str2"`" >> file
I was facing the same problem. To describe it: I had a script that used echo like this:
echo -n "Some text here"
echo -n "Some text here"
The output I was getting is like this:
-n Some text here
-n Some text here
I wanted the text on the same line, but the -n option itself was also being printed in the output.
Note: according to the man page, the -n option suppresses the trailing newline character.
I solved it by adding a shebang at the start of the script file, like this:
#!/bin/bash
echo -n "Some text here"
echo -n "Some text here"
This will print the desired output like this:
Some text hereSome text here
Hope this helps!
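If changing the shebang is not an option (for instance, the script really has to run under a plain POSIX sh whose echo does not understand -n), printf is a portable alternative that never appends a newline unless you ask for one:
printf '%s' "Some text here"
printf '%s' "Some text here"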
Related
I have a large directory of data files which I am in the process of manipulating to get them in a desired format. They each begin and end 15 lines too soon, meaning I need to strip the first 15 lines off one file and paste them to the end of the previous file in the sequence.
To begin, I have written the following code to separate the relevant data into easy chunks:
#!/bin/bash
destination='media/user/directory/'
for file1 in `ls $destination*.ascii`
do
    echo $file1
    file2="${file1}.end"
    file3="${file1}.snip"
    sed -e '16,$d' $file1 > $file2
    sed -e '1,15d' $file1 > $file3
done
This worked perfectly, so the next step is the world's simplest cat command:
cat $file3 $file2 > outfile
However, what I need to do is stitch file2 to the previous file3. For a better understanding, the files in the directory are all sequential over time:
*_20090412T235945_20090413T235944_* ### April 13
*_20090413T235945_20090414T235944_* ### April 14
So I need to take the 15 lines snipped off the April 14 example above and paste it to the end of the April 13 example.
This doesn't have to be part of the original code, in fact it would be probably best if it weren't. I was just hoping someone would be able to help me get this going.
Thanks in advance! If there is anything I have been unclear about and needs further explanation please let me know.
"I need to strip the first 15 lines off one file and paste them to the end of the previous file in the sequence."
If I understand what you want correctly, it can be done with one line of code:
awk 'NR==1 || FNR==16{close(f); f=FILENAME ".new"} {print>f}' file1 file2 file3
When this has run, the files file1.new, file2.new, and file3.new will be in the new form with the lines transferred. Of course, you are not limited to three files: you may specify as many as you like on the command line.
Example
To keep our example short, let's just strip the first 2 lines instead of 15. Consider these test files:
$ cat file1
1
2
3
$ cat file2
4
5
6
7
8
$ cat file3
9
10
11
12
13
14
15
Here is the result of running our command:
$ awk 'NR==1 || FNR==3{close(f); f=FILENAME ".new"} {print>f}' file1 file2 file3
$ cat file1.new
1
2
3
4
5
$ cat file2.new
6
7
8
9
10
$ cat file3.new
11
12
13
14
15
As you can see, the first two lines of each file (after the first) have been transferred to the preceding file.
How it works
awk implicitly reads each file line-by-line. The job of our code is to choose which new file a line should be written to based on its line number. The variable f will contain the name of the file that we are writing to.
NR==1 || FNR==16{f=FILENAME ".new"}
When we are reading the first line of the first file, NR==1, or when we are reading the 16th line of whatever file we are on, FNR==16, we update f to be the name of the current file with .new added to the end.
For the short example, which transferred 2 lines instead of 15, we used the same code but with FNR==16 replaced with FNR==3.
print>f
This prints the current line to file f.
(If this were a shell script, we would use >>. This is not a shell script. This is awk.)
Using a glob to specify the file names
destination='media/user/directory/'
awk 'NR==1 || FNR==16{close(f); f=FILENAME ".new"} {print>f}' "$destination"*.ascii
Your task is not that difficult at all. You want to gather a list of all _end files in the directory (using a for loop and globbing, NOT looping over the results of ls). Once you have the _end files, you simply parse the two dates out of each filename using parameter expansion with substring removal, say into d1 and d2 for date1 and date2 in:
stuff_20090413T235945_20090414T235944_end
      |  d1  |        |  d2  |
Then you simply subtract 1 from d1 (call it date0 or d0) and construct the previous filename out of d0 and d1, using _snip instead of _end. Then just test for the existence of that previous _snip filename, and if it exists, paste your info from the current _end file onto the previous _snip file. e.g.
#!/bin/bash
for i in *end; do               ## find all _end files
    d1="${i#*stuff_}"           ## isolate first date in filename
    d1="${d1%%T*}"
    d2="${i%T*}"                ## isolate second date
    d2="${d2##*_}"
    d0=$((d1 - 1))              ## subtract 1 from first, get snip d1
    prev="${i/$d1/$d0}"         ## create previous 'snip' filename
    prev="${prev/$d2/$d1}"
    prev="${prev%end}snip"
    if [ -f "$prev" ]           ## test that prev snip file exists
    then
        printf "paste to : %s\n" "$prev"
        printf " from : %s\n\n" "$i"
    fi
done
Test Input Files
$ ls -1
stuff_20090413T235945_20090414T235944_end
stuff_20090413T235945_20090414T235944_snip
stuff_20090414T235945_20090415T235944_end
stuff_20090414T235945_20090415T235944_snip
stuff_20090415T235945_20090416T235944_end
stuff_20090415T235945_20090416T235944_snip
stuff_20090416T235945_20090417T235944_end
stuff_20090416T235945_20090417T235944_snip
stuff_20090417T235945_20090418T235944_end
stuff_20090417T235945_20090418T235944_snip
stuff_20090418T235945_20090419T235944_end
stuff_20090418T235945_20090419T235944_snip
Example Use/Output
$ bash endsnip.sh
paste to : stuff_20090413T235945_20090414T235944_snip
from : stuff_20090414T235945_20090415T235944_end
paste to : stuff_20090414T235945_20090415T235944_snip
from : stuff_20090415T235945_20090416T235944_end
paste to : stuff_20090415T235945_20090416T235944_snip
from : stuff_20090416T235945_20090417T235944_end
paste to : stuff_20090416T235945_20090417T235944_snip
from : stuff_20090417T235945_20090418T235944_end
paste to : stuff_20090417T235945_20090418T235944_snip
from : stuff_20090418T235945_20090419T235944_end
(of course replace stuff_ with your actual prefix)
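To do the actual stitching rather than just report it, the two printf lines could be replaced with something along these lines (a sketch: it appends the current _end file onto the matching previous _snip file):
cat "$i" >> "$prev"    ## append the first-15-lines chunk onto the previous snipped file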
Let me know if you have questions.
You could store the previous $file3 value in a variable (and check with -n that this is not the first run):
#!/bin/bash
destination='media/user/directory/'
prev=""
for file1 in $destination*.ascii
do
    echo $file1
    file2="${file1}.end"
    file3="${file1}.snip"
    sed -e '16,$d' $file1 > $file2
    sed -e '1,15d' $file1 > $file3
    if [ -n "$prev" ]; then    # a previous .snip exists, so this is not the first run
        cat $prev $file2 > outfile
    fi
    prev=$file3
done
I am currently building a bash script for class, and I am trying to use the grep command to grab the values from a simple calculator program and store them in the variables I assign, but I keep receiving a syntax error message when I try to run the script. Any advice on how to fix it? My script looks like this:
#!/bin/bash
addanwser=$(grep -o "num1 + num2" Lab9 -a 5 2)
echo "addanwser"
subanwser=$(grep -o "num1 - num2" Lab9 -s 10 15)
echo "subanwser"
multianwser=$(grep -o "num1 * num2" Lab9 -m 3 10)
echo "multianwser"
divanwser=$(grep -o "num1 / num2" Lab9 -d 100 4)
echo "divanwser"
modanwser=$(grep -o "num1 % num2" Lab9 -r 300 7)
echo "modawser"`
You want to grep the output of a command.
grep searches either a file or standard input. So you can use any of these equivalent forms:
grep X file # 1. from a file
... things ... | grep X # 2. from stdin
grep X <<< "content" # 3. using here-strings
For this case, you want to use the last one, so that you execute the program and its output feeds grep directly:
grep <something> <<< "$(Lab9 -s 10 15)"
Which is the same as saying:
Lab9 -s 10 15 | grep <something>
So that grep will act on the output of your program. Since I don't know how Lab9 works, let's use a simple example with seq, which prints the numbers from 5 to 15:
$ grep 5 <<< "$(seq 5 15)"
5
15
grep is usually used for finding matching lines of a text file. To actually grab a part of the matched line, other tools such as awk are used.
Assuming the output looks like "num1 + num2 = 54" (i.e. fields are separated by space), this should do your job:
addanwser=$(Lab9 -a 5 2 | awk '{print $NF}')
echo "$addanwser"
Make sure you don't miss the '$' sign before addanwser when echo'ing it.
$NF selects the last field. You may select the nth field using $n.
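Putting it together, a minimal sketch of the whole script under this approach might look like the following (it assumes Lab9 is on the PATH and prints a line whose last field is the result, e.g. "num1 + num2 = 7"; the original variable spellings are kept):
#!/bin/bash
addanwser=$(Lab9 -a 5 2 | awk '{print $NF}')
echo "$addanwser"
subanwser=$(Lab9 -s 10 15 | awk '{print $NF}')
echo "$subanwser"
multianwser=$(Lab9 -m 3 10 | awk '{print $NF}')
echo "$multianwser"
divanwser=$(Lab9 -d 100 4 | awk '{print $NF}')
echo "$divanwser"
modanwser=$(Lab9 -r 300 7 | awk '{print $NF}')
echo "$modanwser"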
Suppose I have a file (sizes.txt)
daveclark#foo.com 0 23252 0
mikeclark#foo.com 0 45131 1
clark#foo.com 0 55235 0
joeclark#bar.net 33632 1
maryclark#bar.net 0 55523 0
clark#bar.net 0 99356 0
Now I have another file (users.txt)
clark#foo.com
clark#bar.net
What I want to do is find each line in sizes.txt for the specific email addresses in users.txt - using a loop, bash or a one-liner in CentOS. Here's the key point: I need to find the lines whose address is exactly clark#foo.com and then clark#bar.net - meaning this should be one line only for each.
The most simple way that comes to mind...
for i in `cat users.txt`; do grep $i sizes.txt; done
...but this does not work, because processing the first line of users.txt will return the lines containing daveclark#foo.com, mikeclark#foo.com and clark#foo.com, while I explicitly want the line containing "clark#foo.com" (the third line of sizes.txt). Processing the second line of users.txt has the same problem (it will return the maryclark#bar.net and clark#bar.net lines). I know this has to be something totally simple that I'm overlooking.
What you are looking for is the exact match with grep. In your case that would be the -w option.
So
for i in $(cat users.txt); do
    grep -w "^$i" sizes.txt
done
should do the trick.
Cheers.
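For what it's worth, GNU grep can also read the patterns straight from users.txt with -f, which avoids the shell loop entirely; combined with -F (fixed strings) and -w it gives essentially the same one-line-per-address result here:
grep -Fw -f users.txt sizes.txt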
You can try something like this using only bash built-in functions and syntax:
while read -r user ; do
    while read -r s_user s_column_2 s_column_3 s_column_4 ; do
        [ "${s_user}" = "${user}" ] && printf "%b\t%b\t%b\t%b\n" "${s_user}" "${s_column_2}" "${s_column_3}" "${s_column_4}"
    done < sizes.txt
done < users.txt
This nested while loop could be slow with big sizes.txt files. In those cases you could use awk instead.
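For example, a small awk sketch of the same exact-match idea (it loads users.txt into an array first, then prints only those sizes.txt lines whose first field is in that array):
awk 'NR==FNR { users[$1]; next } $1 in users' users.txt sizes.txt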
I have a loop, in a bash script. It runs a programme that by default outputs a text file when it works, and no file if it doesn't. I'm running it a large number of times (> 500K) so I want to merge the output files, row by row. If one iteration of the loop creates a file, I want to take the LAST line of that file, append it to a master output file, then delete the original so I don't end up with 1000s of files in one directory. The Loop I have so far is:
oFile=/path/output/outputFile_
oFinal=/path/output.final
for counter in {101..200}
do
    $programme $counter -out $oFile$counter
    if [ -s $oFile$counter ]    ## This returns TRUE if file isn't empty, right?
    then
        out=$(tail -1 $oFile$counter)
        final=$out$oFile$counter
        $final >> $oFinal
    fi
done
However, it doesn't work properly, as it seems to not return all the files I want. So is the conditional wrong?
You can be clever and pass the programme a process substitution instead of a "real" file:
oFinal=/path/output.final
for counter in {101..200}
do
$programme $counter -out >(tail -n 1)
done > $oFinal
$programme will treat the process substitution as a file, and all the lines written to it will be processed by tail
Testing: my "programme" outputs 2 lines if the given counter is even
$ cat programme
#!/bin/bash
if (( $1 % 2 == 0 )); then
{
echo ignore this line
echo $1
} > $2
fi
$ ./programme 101 /dev/stdout
$ ./programme 102 /dev/stdout
ignore this line
102
So, this loop should output only the even numbers between 101 and 200
$ for counter in {101..200}; do ./programme $counter >(tail -1); done
102
104
[... snipped ...]
198
200
Success.
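As a side note on the original question: the -s test itself is fine - it is true when the file exists and is not empty. The real problem is in the last two lines of the loop, which build a string and then try to execute it instead of writing it. A sketch closer to the original structure (assuming the per-run files really should be deleted afterwards, as the question describes) could be:
for counter in {101..200}
do
    $programme $counter -out $oFile$counter
    if [ -s $oFile$counter ]        # true if the file exists and is not empty
    then
        tail -1 $oFile$counter >> $oFinal    # append only the last line
        rm $oFile$counter                    # remove the per-run file
    fi
done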
I'm trying to use a Bash script to run a large number of calculations (just over 2 million) using a terminal-based program called uvspec. But I've hit a serious barrier following the latest addition to the calculation...
The script opens an input file with roughly 2e6 lines looking like this:
0 66.3426 -9.999 -9999
0 66.6192 -9.999 -9999
0 61.9212 1.655 1655
0 61.9999 1.655 1655
...
Each of these values is a value I want to substitute into the input file (using sed), so I read each line into an array. Many of these lines contain negative values in the 4th column, e.g. -9999, which cause errors in the program, so I would like to omit those lines and write a standard output instead - I'm doing this with the if statement. The problem is that something terribly wrong is coming out of my output, and I'm 99.9% sure the problem is a mistake in the following script, as I'm fairly new to bash.
Can anyone spot anything here that doesn't make sense or is bad syntax?
Any comments on the script in general would also be useful feedback.
cat ".../Maps/dniinput" | while IFS=$' ' read -r -a myArray
do
if [ "${myArray[3]}" -gt 0 ]
then
sed s/TAU/"${myArray[0]}"/ x.template x.template > a.template
sed s/SZA/"${myArray[1]}"/ a.template a.template > b.template
sed s/ALT/"${myArray[2]}"/ b.template b.template > x.inp
../bin/uvspec < x.inp >> dni.out
else
echo "0 -9999" >> dnijul.out
fi
done
Sed can do all three substitutions in one go and you can pipe the output straight into your analysis program without creating any intermediate a.template and b.template files...
sed -e "s/.../.../" -e "s/.../.../" -e "s/.../.../" x.template | ../bin/uvspec
By the way, you can also get rid of the "cat" at the start, and replace your array with variables whose names better match what they are, if you use a loop like this:
while IFS=$' ' read tau sza alt p4
do
    echo $tau $sza $alt $p4
done < a
0 66.3426 -9.999 -9999
0 66.6192 -9.999 -9999
0 61.9212 1.655 1655
0 61.9999 1.655 1655
I named the fourth element "p4" because you refer to the 4th one as the altitude in your comment, but in your code you replace the word "ALT" with the third column - so I am not really sure what your parameters are, but you should hopefully get the idea from the example above.
You might want to combine those "sed" lines into something more like:
sed -e "s/TAU/${myArray[0]}/" -e "s/SZA/${myArray[1]}/" \
-e "s/ALT/${myArray[2]}/" < x.template \
| ../bin/uvspec >> dni.out
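Putting the pieces together, the whole loop might end up as something like this sketch (named variables from read, the negative-value filter, and a single sed feeding uvspec directly, with no intermediate template files):
while IFS=$' ' read -r tau sza alt p4
do
    if [ "$p4" -gt 0 ]
    then
        sed -e "s/TAU/$tau/" -e "s/SZA/$sza/" -e "s/ALT/$alt/" x.template \
            | ../bin/uvspec >> dni.out
    else
        echo "0 -9999" >> dnijul.out
    fi
done < ".../Maps/dniinput"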