The result of bash's time (run 5 times) is stored in a text file as a decimal.
I then read the values back in and compute the average using bc.
Finally, I output the resulting average as a decimal to a file.
My script seems to work (no errors in MATE Terminal on Linux Mint,
and both .txt files are created), except that the final output to file is "0".
TIMEFORMAT=%R
tsum=0
for i in {1..5}
do
(time sh -c \
'openssl des3 -e -nosalt -k 0123456789012345 -in orig.jpg -out encr.enc; '\
'openssl des3 -d -nosalt -k 0123456789012345 -in encr.enc -out decr.dec'\
) 2>&1 | grep 0 >> outtime.txt
done
avgDES3=0
cat outtime.txt | \
while read num
do
tsum=`echo $tsum + $num | bc -l`
done
avgDES3=`echo "$tsum / 5" | bc -l`
echo "DES3 average is: " $avgDES3 >> results.txt
I've also tried replacing the last line with:
printf "DESCBC average is: " $avgDESCBC >> results.txt
The outtime.txt is:
0.220
0.218
0.226
0.223
0.217
and results.txt is:
DES3 average is: 0
I'd appreciate help getting the resulting average to be a decimal. Perhaps I'm not using the correct value of the tsum variable in the next-to-last line (e.g., if the global tsum isn't changed by the expression within the loop)?
EDIT: The issue (as pointed out by rbong and Arun) was piping to a subshell (the global variable isn't changed after the loop's expression). Originally the script was producing the appropriate outtime.txt on my system (no errors; it just didn't get the tsum value from the loop).
Executing your script with the bash -x option for debugging reveals that the tsum variable is acting as expected inside your while loop, but its value reverts to zero after the loop exits.
This happens because you are creating a new subprocess when you use the | operator just before while, and the subprocess has its own copy of the variable. You can avoid this by not piping the output from cat into a loop, but instead using a redirect operator to achieve the same result without creating a subprocess.
This is done by changing this
cat outtime.txt | \
while read num
do
tsum=`echo $tsum + $num | bc -l`
done
to this
while read num
do
tsum=`echo $tsum + $num | bc -l`
done < outtime.txt
With this simple change, your new output becomes
DES3 average is: .22080000000000000000
To learn more, read here: http://www.gnu.org/software/bash/manual/html_node/Redirections.html
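As an aside, and not part of the answer above: if your bash is 4.2 or newer, the lastpipe option makes the last command of a pipeline run in the current shell, so even the original cat | while form would keep tsum. A minimal sketch (lastpipe only takes effect when job control is off, as it is in scripts):
#!/bin/bash
shopt -s lastpipe
tsum=0
# The while loop now runs in the current shell, so tsum survives it.
cat outtime.txt | while read num
do
tsum=`echo $tsum + $num | bc -l`
done
echo "tsum is $tsum"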
Try this script. Using { } around the time command, you can capture the output of time, and the "2" (stderr) file descriptor lets you create the outtime.txt file in append mode. Before starting the script, this file should be created fresh, or you can comment out the ">outtime.txt" line. A ';' character just before the closing '}' brace is important, or the opening '{' won't be terminated. This will fix your outtime.txt contents/data issue.
The other issue is with the while loop: because you are using "|" before it, a new subshell is created for the loop, and the tsum variable loses its value once the loop exits. Feed the while loop the outtime.txt file as shown below.
#!/bin/bash
TIMEFORMAT=%R
tsum=0
#create fresh outtime.txt and results.txt files
>outtime.txt
#if you want this file to be appended for future runs, comment the following line.
>results.txt
for i in {1..5}
do
{ time { sh -c \
'openssl des3 -e -nosalt -k 0123456789012345 -in orig.jpg -out encr.enc; '\
'openssl des3 -d -nosalt -k 0123456789012345 -in encr.enc -out decr.dec'; \
} 1>/dev/null 2>&1; } 2>> outtime.txt
done
## Now at this point, outtime.txt will contain only x.xxx time entries per line for 5 runs of "for" loop.
echo File outtime.txt looks like:
echo ----------
cat outtime.txt
echo;
## Now lets calculate average.
avgDES3=0
while read num
do
tsum=`echo $tsum + $num | bc -l`
done < outtime.txt; ## Feed while loop this outtime.txt file
avgDES3=`echo "$tsum / 5" | bc -l`
echo "DES3 average is: " $avgDES3 > results.txt
## Logically you should use a single > redirect for the results file, instead of >>, which appends to the file.
#Show the output/average
echo; echo File results.txt looks like:
cat results.txt
OUTPUT:
File outtime.txt looks like:
----------
0.112
0.108
0.095
0.084
0.110
File results.txt looks like:
DES3 average is: .10180000000000000000
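As a further aside, the whole read-and-average loop could be replaced by a single awk pass; a minimal sketch, not part of the script above:
# Sketch: sum the times and divide by the number of lines in one pass.
avgDES3=$(awk '{ sum += $1 } END { printf "%.4f\n", sum / NR }' outtime.txt)
echo "DES3 average is: $avgDES3" > results.txt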
Related
I have a long-running command which outputs periodically. To demonstrate, let's assume it is:
function my_cmd()
{
for i in {1..9}; do
echo -n $i
for j in {1..$i}
echo -n " "
echo $i
sleep 1
done
}
The output will be:
1 1
2  2
3   3
4    4
5     5
6      6
7       7
8        8
9         9
I want to display the command output and save it to a file at the same time.
This can be done by my_cmd | tee -a res.txt.
Now I want to display the output to the terminal as-is, but save it to the file in a transformed flavor, say with sed "s/ //g".
so the res.txt becomes:
11
22
33
44
55
66
77
88
99
How can I do this transformation on the fly, without waiting for the command to exit and then reading the file again?
Note that in your original code, {1..$i} is an error because sequences can't contain variables. I've replaced it with seq. Also, you're missing a do and a done for the inner for loop.
At any rate, I would use process substitution.
#!/usr/bin/env bash
function my_cmd {
for i in {1..9}; do
printf '%d' "$i"
for j in $(seq 1 $i); do
printf ' '
done
printf '%d\n' "$j"
sleep 1
done
}
my_cmd | tee >(tr -d ' ' >> res.txt)
Process substitution usually causes bash to create an entry in /dev/fd which is fed to the command in question. The contents of the substitution run asynchronously, so it doesn't block the process sending data to it.
Note that the process substitution isn't a REAL file, so the -a option for tee is meaningless. If you really want to append to your output file, >> within the substitution is the way to go.
If you don't like process substitution, another option would be to redirect to alternate file descriptors. For example, instead of the last line in the script above, you could use:
exec 5>&1
my_cmd | tee /dev/fd/5 | tr -d ' ' > res.txt
exec 5>&-
This creates a file descriptor, /dev/fd/5, which redirects to your real stdout, the terminal. It then tells tee to write to this, allowing the normal stdout from tee to be processed by additional pipe elements before final redirection to your log file.
The method you choose is up to you. I find process substitution clearer.
There are some things you need to modify in your function, and you can use tee in the for loop to print and write the file at the same time. The following script should get the result you desire.
#!/bin/bash
filename="a.txt"
[ -f $filename ] && rm $filename
for i in {1..9}; do
echo -n $i | tee -a $filename
for((j=1;j<=$i;j++)); do
echo -n " "
done
echo $i | tee -a $filename
sleep 1
done
Instead of a double loop, I would use printf and its field-width capability (%Xs) to pad with blank characters.
Moreover, I would print twice (once to stdout and once to your file) rather than using a pipe and starting new processes.
So your function could look like this:
function my_cmd() {
for i in {1..9}; do
printf "%s %${i}s\n" $i $i
printf "%s%s\n" $i $i >> res.txt
done
}
Getting into bash, I love it, but it seems there are lots of subtleties that end up making a big difference in functionality. Anyway, here is my question:
I know this works:
total=0
for i in $(grep number some.txt | cut -d " " -f 1); do
(( total+=i ))
done
But why doesn't this?:
grep number some.txt | cut -d " " -f 1 | while read i; do (( total+=i )); done
some.txt:
1 number
2 number
50 number
Both the for loop and the while loop receive 1, 2, and 50 separately, but the for loop shows the total variable being 53 at the end, while in the while loop version it just stays at zero. I know there's some fundamental knowledge I'm lacking here; please help me.
I also don't get the differences in piping, for example
If I run
grep number some.txt | cut -d " " -f 1 | while read i; do echo "-> $i"; done
I get the expected output
-> 1
-> 2
-> 50
But if run like so
while read i; do echo "-> $i"; done <<< $(grep number some.txt | cut -d " " -f 1)
then the output changes to
-> 1 2 50
This seems weird to me since grep outputs the results on separate lines. As if this weren't confusing enough, if I had a file with only the numbers 1, 2, and 3 on separate lines, and I ran
while read i; do echo "-> $i"; done < someother.txt
then the output would be printed by the echo on different lines, as expected from the previous example. I know < is for files and <<< is for strings/command output, but why does that difference in line handling exist?
Anyway, I was hoping someone could shed some light on the matter. Thank you for your time!
grep number some.txt | cut -d " " -f 1 | while read i; do (( total+=i )); done
Each command in a pipeline is run in a subshell. That means when you put the while read loop in a pipeline any variable assignments are lost.
See: BashFAQ 024 - "I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?"
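For completeness, a minimal sketch of one standard fix: feed the loop with process substitution (bash-specific), so the loop runs in the current shell and total survives:
total=0
while read -r i; do (( total += i )); done < <(grep number some.txt | cut -d " " -f 1)
echo $total   # prints 53 for the sample some.txt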
while read i; do echo "-> $i"; done <<< "$(grep number some.txt | cut -d " " -f 1)"
To preserve grep's newlines, add double quotes. Otherwise the result of $(...) is subject to word splitting which collapses all the whitespace into single spaces.
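A quick way to see the difference, using printf as a stand-in for the grep | cut pipeline:
$ while read i; do echo "-> $i"; done <<< "$(printf '1\n2\n50')"
-> 1
-> 2
-> 50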
I have a directory (output) in unix (SUN). There are two types of files created with a timestamp prefix in the file name. These files are created at a regular interval of 10 minutes.
e.g.:
1. 20140129_170343_fail.csv (some lines are there)
2. 20140129_170343_success.csv (some lines are there)
Now I have to search for a particular string in all the files present in the output directory. If the string is found in the fail and success files, I have to count the number of lines present in those files and save the counts to the cnt_succ and cnt_fail variables. If the string is not found, I search the same directory again after a sleep timer of 20 seconds.
here is my code
#!/usr/bin/ksh
for i in 1 2
do
grep -l 0140127_123933_part_hg_log_status.csv /osp/local/var/log/tool2/final_logs/* >log_t.txt; ### log_t.txt will contain all the matching file list
while read line ### reading the log_t.txt
do
echo "$line has following count"
CNT=`wc -l $line|tr -s " "|cut -d" " -f2`
CNT=`expr $CNT - 1`
echo $CNT
done <log_t.txt
if [ $CNT > 0 ]
then
exit
fi
echo "waiitng"
sleep 20
done
The problem I'm facing is that I'm not able to get the _success and _fail files in line and check their counts.
I'm not sure about ksh, but in bash a piped while ...; do ...; done is notorious for running off with whatever variables you're using. ksh might be similar.
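(For what it's worth, ksh93 runs the last part of a pipeline in the current shell, so the variables do survive there; a minimal check, assuming ksh93 rather than pdksh:)
$ print '1\n2\n50' | while read i; do (( total += i )); done; echo $total
53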
If I've understood your question right, SunOS has grep, uniq and sort AFAIK, so a possible alternative might be...
First of all:
$ cat fail.txt
W34523TERG
ADFLKJ
W34523TERG
WER
ASDTQ34T
DBVSER6
W34523TERG
ASDTQ34T
DBVSER6
$ cat success.txt
abcde
defgh
234523452
vxczvzxc
jkl
vxczvzxc
asdf
234523452
vxczvzxc
dlkjhgl
jkl
wer
234523452
vxczvzxc
And now:
egrep "W34523TERG|ASDTQ34T" fail.txt | sort | uniq -c
2 ASDTQ34T
3 W34523TERG
egrep "234523452|vxczvzxc|jkl" success.txt | sort | uniq -c
3 234523452
2 jkl
4 vxczvzxc
Depending on the input data, you may want to see what options sort has on your system. Examining uniq's options may prove useful too (it can do more than just count duplicates).
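If all you ultimately need are the two counts from the question, here is a minimal sketch (assuming a grep with the POSIX -q option; the filenames and search string are placeholders taken from the question):
# Sketch: count all lines in each file, but only if it contains the string.
if grep -q "somestring" 20140129_170343_fail.csv; then
    cnt_fail=$(wc -l < 20140129_170343_fail.csv)
fi
if grep -q "somestring" 20140129_170343_success.csv; then
    cnt_succ=$(wc -l < 20140129_170343_success.csv)
fi
echo "cnt_fail=$cnt_fail cnt_succ=$cnt_succ"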
I think you want something like this (it will work in both bash and ksh):
#!/bin/ksh
while read -r file; do
lines=$(wc -l < "$file")
((sum+=$lines))
done < <(grep -Rl --include="[12]*_fail.csv" "somestring")
echo "$sum"
Note this will match files starting with 1 or 2 and ending in _fail.csv; it's not exactly clear whether that's what you want.
e.g. Let's say I have two files, one starting with 1 (containing 4 lines) and one starting with 2 (containing 3 lines), both ending in _fail.csv, somewhere under my current working directory:
> abovescript
7
It's important to understand the grep options used here:
-R, --dereference-recursive
Read all files under each directory, recursively. Follow all
symbolic links, unlike -r.
and
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)
Finally I was able to find the solution. Here is the complete code:
#!/usr/bin/ksh
file_name="0140127_123933.csv"
for i in 1 2
do
grep -l $file_name /osp/local/var/log/tool2/final_logs/* >log_t.txt;
while read line
do
if [ $(echo "$line" |awk '/success/') ] ## will check the success file
then
CNT_SUCC=`wc -l $line|tr -s " "|cut -d" " -f2`
CNT_SUCC=`expr $CNT_SUCC - 1`
fi
if [ $(echo "$line" |awk '/fail/') ] ## will check the fail file
then
CNT_FAIL=`wc -l $line|tr -s " "|cut -d" " -f2`
CNT_FAIL=`expr $CNT_FAIL - 1`
fi
done <log_t.txt
if [ $CNT_SUCC -gt 0 ] && [ $CNT_FAIL -gt 0 ]
then
echo " Fail count = $CNT_FAIL"
echo " Success count = $CNT_SUCC"
exit
fi
echo "waitng for next search..."
sleep 10
done
Thanks everyone for your help.
I don't think I'm getting it right, but you can't differentiate the files?
maybe try:
#...
CNT=`expr $CNT - 1`
if [ $(echo $line | grep -o "fail") ]
then
#do something with fail count
else
#do something with success count
fi
I am new to programming altogether and am trying to write my first bash script.
I have a file called NUMBERS.txt that has various numbers in it, as such:
1000
1001
1001
1000
1002
1001
etc.
I would like to write a script to count the occurrence of each number, save it as a variable and print it into a new text file as such:
1001= 3
1000= 2
etc.
I am completely stuck.
Here's what I have so far:
#!/bin/bash
for Count in `grep -c '1000' /NUMBERS.txt `
do
echo 'Count = '${Count}
done
for Count in `grep -c '1001' /NUMBERS.txt `
do
echo 'Count = '${Count}
done
Sort the file then count how many times each unique line occurs:
sort NUMBERS.txt | uniq -c
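With the sample NUMBERS.txt above, that prints each count in front of the unique value:
$ sort NUMBERS.txt | uniq -c
      2 1000
      3 1001
      1 1002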
Since your file already has one number on each line, it is simpler:
for i in `sort -u NUMBERS.txt ` ; do count=`grep -c "$i" NUMBERS.txt ` ; echo "$i=$count" ; done > your_result.txt
or in a different format
for i in `sort -u NUMBERS.txt `
do
count=`grep -c "$i" NUMBERS.txt `
echo "$i=$count"
done > your_result.txt
As pointed out in a comment, the performance is not very good. Here is a much better one:
sort NUMBERS.txt | uniq -c | awk '{print $2"=",$1}'
Basically you go through NUMBERS.txt twice. In the first pass, you get the unique numbers;
in the second pass, you count the occurrences of each unique number.
I'm not the best at shell scripting, but here is a solution that works, using bash and grep -c:
#!/bin/bash
INPUT="./numbers.txt"
OUTPUT="./result.txt"
rm -f ${OUTPUT}
# you might want to change the values
for i in {1000..2000}; do
for Count in `grep -c ${i} ${INPUT}`; do
echo "${i} = ${Count}" >> ${OUTPUT}
done
done
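One caveat, as my own note rather than the answer's: grep -c ${i} also counts lines where the number appears as a substring (searching for 1000 would also count a line containing 10001). Since NUMBERS.txt has one number per line, adding -x to match whole lines only is safer, so the inner loop could become:
# Safer variant of the inner grep: -x matches whole lines only.
for Count in `grep -cx "${i}" ${INPUT}`; do
    echo "${i} = ${Count}" >> ${OUTPUT}
done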
I have a file (fasta) that I am using awk to extract the needed fields from (sequences with their headers). I then pipe it to a BLAST program and finally I pipe it to qsub in order to submit a job.
the file:
>sequence_1
ACTGACTGACTGACTG
>sequence_2
ACTGGTCAGTCAGTAA
>sequence_3
CCGTTGAGTAGAAGAA
and the command (which works):
awk < fasta.fasta '/^>/ { print $0 } $0 !~ /^>/' | echo "/Local/ncbi-blast-2.2.25+/bin/blastx -db blastdb.fa -outfmt 5 >> /User/blastresult.xml" | qsub -q S
What I would like to do is add a condition that samples the number of jobs I am running (using qstat); if it is below a certain threshold, the job will be submitted.
For example:
allowed_jobs=200 #for example
awk < fasta.fasta '/^>/ { print $0 } $0 !~ /^>/' | echo "/Local/ncbi-blast-2.2.25+/bin/blastx -db blastdb.fa -outfmt 5 >> /User/blastresult.xml" | cmd=$(qstat -u User | grep -c ".") | if [ $cmd -lt $allowed_jobs ]; then qsub -q S
Unfortunately (for me anyway) I have failed in all my attempts to do that.
I'd be grateful for any help.
EDIT: Elaborating a bit:
what I am trying to do is to extract from the fasta file this:
>sequene_x
ACTATATATATA
or basically >HEADER\nSEQUENCE,
one pair at a time, and pipe it to the BLAST program, which can take stdin. I want to create a unique job for each sequence, and this is the reason I want to pipe to qsub for each sequence.
To put it plainly, the qsub submission would have looked something like this:
qsub -q S /Local/ncbi-blast-2.2.25+/bin/blastx -db blastdb.fa -query FASTA_SEQUENCE -outfmt 5 >> /User/blastresult.xml
Note that the -query flag is unnecessary if the sequence is piped to stdin.
However, the main problem for me is how to incorporate the condition I mentioned above, so that a sequence is piped to qsub only if the qstat result is below the threshold. Ideally, if the qstat result is above the threshold, it'll sleep until it goes below and then pass the sequence forward.
thanks.
Hello, I guess this was answered long ago now.
I'll just provide a way to solve this by counting the lines that should be processed (sequences) before passing them over to awk; the awk piece would go where 'echo time to work' is.
#!/bin/bash
ct=`grep -c '^>' fasta.fasta`
if [ $ct -lt 201 ] ; then
echo time to work
else
echo too much
fi
This bit of shell reads two lines at a time, prints them to stdout, and pipes them into your qsub command:
while IFS= read -r header; do
IFS= read -r sequence
printf "%s\n" "$header" "$sequence" |
qsub -q S /Local/ncbi-blast-2.2.25+/bin/blastx -db blastdb.fa -outfmt 5 >> /User/blastresult.xml
done < fasta.fasta
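To fold in the threshold from your question, a hedged sketch: wait before each submission until the qstat count drops below the limit (the qstat -u User | grep -c . job count is carried over from your own attempt and is an assumption about your queue's output):
#!/bin/bash
allowed_jobs=200
while IFS= read -r header; do
    IFS= read -r sequence
    # Wait until the number of jobs reported by qstat is below the limit.
    while [ "$(qstat -u User | grep -c .)" -ge "$allowed_jobs" ]; do
        sleep 20
    done
    printf "%s\n" "$header" "$sequence" |
    qsub -q S /Local/ncbi-blast-2.2.25+/bin/blastx -db blastdb.fa -outfmt 5 >> /User/blastresult.xml
done < fasta.fasta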