Bash: increment input file name

I have been trying to write a better bash script to run a specified program repeatedly with different input files. This is the basic brute-force version, which works, but I want to use a loop to change the argument before ".txt".
#!/bin/bash
./a.out 256.txt >> output.txt
./a.out 512.txt >> output.txt
./a.out 1024.txt >> output.txt
./a.out 2048.txt >> output.txt
./a.out 4096.txt >> output.txt
./a.out 8192.txt >> output.txt
./a.out 16384.txt >> output.txt
./a.out 32768.txt >> output.txt
./a.out 65536.txt >> output.txt
./a.out 131072.txt >> output.txt
./a.out 262144.txt >> output.txt
./a.out 524288.txt >> output.txt
I attempted to make a for loop and change the argument:
#!/bin/bash
arg=256
for((i=1; i<12; i++))
{
#need to raise $args to a power of i
./a.out $args.txt << output.txt
}
but I get an error from ./a.out stating that ".txt" does not exist. What is the proper way to raise arg to a power of i and use that as the argument to ./a.out?

This is all that you need to do:
for ((i=256; i<=524288; i*=2)); do ./a.out "$i.txt"; done > output.txt
Every time the loop iterates, i is multiplied by 2, producing the sequence you want. Rather than redirecting the output of each iteration to the file separately, I have also moved the redirection outside the loop, so the file contains only the output produced by the loop.
In your question, $args is empty (I guess that you meant to put $arg), which is why your filename is just .txt. Also, you have used << rather than >>, which I assumed was a typo.
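If you specifically want the exponent form the question asks about, bash arithmetic supports the ** operator. A minimal sketch (it only echoes the commands; swap echo for the real ./a.out call once the filenames look right):

```shell
#!/bin/bash
# Sketch of the exponent form: n = 256 * 2^i for i = 0..11.
for ((i=0; i<12; i++)); do
  n=$((256 * 2**i))          # 256, 512, ..., 524288
  echo "./a.out $n.txt"      # replace echo with the real ./a.out invocation
done > output.txt
```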

Check this (using the portable -I{} form of xargs, since plain -i is deprecated in GNU xargs):
seq 12 | xargs -I{} echo "256 * 2 ^ ({} - 1)" | bc | xargs -I{} echo ./a.out {}.txt
If the printed commands look right, drop the echo and add >> output.txt:
seq 12 | xargs -I{} echo "256 * 2 ^ ({} - 1)" | bc | xargs -I{} ./a.out {}.txt >> output.txt

Loop through a list both as a variable and as a string

I have a long text file from which I want to grep all the lines that are with a given id (id123), and save them in a new text file.
In order to do this I can just execute the command:
zgrep -Hx id123 *.vcf.gz > /home/Roy/id123.txt
However, I have not one but twenty files, so I would need to write this command 20 times.
These files are big and take a long time to process, so I would prefer to have it running in the background.
That's why I would like to create a script that iterates through a list of the ids, substituting each one into the command.
For example purposes, something like:
list=("id123" "id124" "id125" "id126")
for i in "${list[@]}"
do
zgrep -Hx $i *.vcf.gz > /home/Roy/$i.txt
done
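As for the background requirement: a minimal sketch, assuming the loop above already works, is to launch one zgrep job per id with & and then wait for all of them (the ids and paths are the question's and are not verified here):

```shell
#!/bin/bash
# Run each extraction as a background job, then block until all have finished.
list=("id123" "id124" "id125" "id126")
for i in "${list[@]}"; do
  zgrep -Hx "$i" *.vcf.gz > "/home/Roy/$i.txt" &
done
wait   # all jobs done; the output files are now complete
```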
How about using awk for dispatching the output of zgrep?
#!/bin/bash
ids=(id123 id124 id125 id126)
zgrep -Hxf <(printf '%s\n' "${ids[@]}") *.vcf.gz |
awk -F ':' '{print > ("/home/Roy/" $NF ".txt")}'
#!/bin/bash -x
find . | sed -n '/id[0-9][0-9][0-9]\.txt/p' > stack
cat > ed1 <<EOF
1p
q
EOF
cat > ed2 <<EOF
1d
wq
EOF
next () {
[[ -s stack ]] && main
exit 0
}
main () {
line=$(ed -s stack < ed1)
zgrep -Hx "${line}" *.vcf.gz > /home/Roy/"${line}".txt
ed -s stack < ed2
next
}
next

Concatenate the output of 2 commands in the same line in Unix

I have a command like below
md5sum test1.txt | cut -f 1 -d " " >> test.txt
I want output of the above result prefixed with File_CheckSum:
Expected output: File_CheckSum: <checksumvalue>
I tried as follows
echo 'File_Checksum:' >> test.txt | md5sum test.txt | cut -f 1 -d " " >> test.txt
but getting result as
File_Checksum:
adbch345wjlfjsafhals
I want the entire output in 1 line
File_Checksum: adbch345wjlfjsafhals
echo writes a newline after it finishes writing its arguments. Some versions of echo allow a -n option to suppress this, but it's better to use printf instead.
You can use a command group to concatenate the standard output of your two commands:
{ printf 'File_Checksum: '; md5sum test.txt | cut -f 1 -d " "; } >> test.txt
Note that there is a race condition here: you can theoretically write to test.txt before md5sum is done reading from it, causing you to checksum more data than you intended. (Your original command mentions test1.txt and test.txt as separate files, so it's not clear if you are really reading from and writing to the same file.)
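If you really are reading from and writing to the same file, one sketch that avoids the race is to capture the hash in a variable before touching the output file (filenames here mirror the question):

```shell
#!/bin/bash
# Compute the checksum first; only then open the output file for append.
sum=$(md5sum test1.txt | cut -d ' ' -f 1)
printf 'File_Checksum: %s\n' "$sum" >> test.txt
```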
You can use command grouping to have a list of commands executed as a unit and redirect the output of the group at once:
{ printf 'File_Checksum: '; md5sum test1.txt | cut -f 1 -d " "; } >> test.txt
printf "%s %s\n" "File_Checksum:" "$(md5sum < test1.txt | cut ...)" > test.txt
Note that if you are trying to compute the hash of test.txt (the same file you are trying to write to), this changes things significantly.
Another option is:
{
printf "File_Checksum: "
md5sum ...
} > test.txt
Or:
exec > test.txt
printf "File_Checksum: "
md5sum ...
but be aware that all subsequent commands will also write their output to test.txt. The typical way to restore stdout is:
exec 3>&1
exec > test.txt # Redirect all subsequent commands to `test.txt`
printf "File_Checksum: "
md5sum ...
exec >&3 # Restore original stdout
You can also chain commands with the && operator, which runs the second command only if the first one succeeds, e.g. mkdir example && cd example

Output float variable as decimal to file in BASH

The result of bash time (run 5 times) is stored in a text file as a decimal.
I then read the values back in and compute the average using bc.
Finally, I output the resulting average as a decimal to a file.
My script seems to work (no errors in MATE Terminal on Linux Mint; both
.txt files are created), except that the final output to the file is "0".
TIMEFORMAT=%R
tsum=0
for i in {1..5}
do
(time sh -c \
'openssl des3 -e -nosalt -k 0123456789012345 -in orig.jpg -out encr.enc; '\
'openssl des3 -d -nosalt -k 0123456789012345 -in encr.enc -out decr.dec'\
) 2>&1 | grep 0 >> outtime.txt
done
avgDES3=0
cat outtime.txt | \
while read num
do
tsum=`echo $tsum + $num | bc -l`
done
avgDES3=`echo "$tsum / 5" | bc -l`
echo "DES3 average is: " $avgDES3 >> results.txt
I've also tried replacing the last line with:
printf "DESCBC average is: " $avgDESCBC >> results.txt
the outtime.txt is:
0.220
0.218
0.226
0.223
0.217
and results.txt is:
DES3 average is: 0
I'd appreciate help getting the resulting average to be a decimal. Perhaps I'm not using the correct value of the tsum variable in the next-to-last line (e.g. if the global tsum isn't changed by the expression within the loop)?
EDIT: The issue (as pointed out by rbong and Arun) was piping to a subshell (the global variable is not changed after the loop expression). Originally the script was producing the appropriate outtime.txt on my system (no errors; it just didn't get the tsum value from the loop).
Executing your script with the bash -x option for debugging reveals that the tsum variable is acting as expected in your while loop, then its value is reset to zero after the loop exits.
This happens because you are creating a new subprocess when you use the | operator just before while, and the subprocess has its own copy of the variable. You can avoid this by not piping the output from cat into a loop, but instead using a redirect operator to achieve the same result without creating a subprocess.
This is done by changing this
cat outtime.txt | \
while read num
do
tsum=`echo $tsum + $num | bc -l`
done
to this
while read num
do
tsum=`echo $tsum + $num | bc -l`
done < outtime.txt
With this simple change, your new output becomes
DES3 average is: .22080000000000000000
To learn more, see the Bash manual's section on redirections:
http://www.gnu.org/software/bash/manual/html_node/Redirections.html
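As a side note (not part of the original answer), the whole read/bc loop can be replaced by a single awk pass; a sketch assuming outtime.txt holds one time value per line:

```shell
#!/bin/bash
# One-pass average: awk sums the first field and divides by the line count (NR).
avg=$(awk '{ sum += $1 } END { printf "%.4f", sum / NR }' outtime.txt)
echo "DES3 average is: $avg" >> results.txt
```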
Try this script. Wrapping the time command in { ...; } lets you capture the output of time itself, and redirecting file descriptor 2 creates outtime.txt in append mode. Create this file fresh before starting the script, or comment out the >outtime.txt line if you want to keep results from earlier runs. The ';' just before the closing '}' is important; without it the opening '{' is never terminated. This fixes the outtime.txt contents.
The other issue is the while loop: the '|' before while creates a subshell for the loop, so tsum loses its value once the loop exits. Instead, feed outtime.txt to the while loop with input redirection, as in the script below.
#!/bin/bash
TIMEFORMAT=%R
tsum=0
#create fresh outtime.txt and results.txt files
>outtime.txt
#if you want this file to be appended for future runs, comment the following line.
>results.txt
for i in {1..5}
do
{ time sh -c \
'openssl des3 -e -nosalt -k 0123456789012345 -in orig.jpg -out encr.enc; '\
'openssl des3 -d -nosalt -k 0123456789012345 -in encr.enc -out decr.dec' \
>/dev/null 2>&1; } 2>> outtime.txt
done
## At this point, outtime.txt will contain one x.xxx time entry per line for the 5 runs of the for loop.
echo File outtime.txt looks like:
echo ----------
cat outtime.txt
echo;
## Now lets calculate average.
avgDES3=0
while read num
do
tsum=`echo $tsum + $num | bc -l`
done < outtime.txt; ## Feed while loop this outtime.txt file
avgDES3=`echo "$tsum / 5" | bc -l`
echo "DES3 average is: " $avgDES3 > results.txt
## Use a single > redirection here to create results.txt, rather than >>, which appends to the file.
#Show the output/average
echo; echo File results.txt looks like:
cat results.txt
OUTPUT:
File outtime.txt looks like:
----------
0.112
0.108
0.095
0.084
0.110
File results.txt looks like:
DES3 average is: .10180000000000000000

Bash: Append command output and timestamp to file

Normally in my bash scripts I'm used to doing some_command >> log.log. This works fine; however, how can I append more data, like the time and the command name?
My goal is to have a log like this
2012-01-01 00:00:01 [some_command] => some command output...
2012-01-01 00:01:01 [other_command] => other command output...
The processes should run and write to the file concurrently.
The final solution, pointed out by William Pursell, in my case would be:
some_command 2>&1 | perl -ne '$|=1; print localtime . ": [some_command] $_"' >> /log.log &
I also added 2>&1 to redirect STDOUT and STDERR to the file, and an & at the end to keep the program in the background.
Thank you!
Given your comments, it seems that you want multiple processes to be writing to the file concurrently, and have a timestamp on each individual line. Something like this might suffice:
some_cmd | perl -ne '$|=1; print localtime . ": [some_cmd] $_"' >> logfile
If you want to massage the format of the date, use POSIX::strftime
some_cmd | perl -MPOSIX -ne 'BEGIN{ $|=1 }
print strftime( "%Y-%m-%d %H:%M:%S", localtime ) . " [some_cmd] $_"' >> logfile
An alternative solution using sed would be:
some_command 2>&1 | sed "s/^/`date '+%Y-%m-%d %H:%M:%S'`: [some_command] /" >> log.log &
It works by replacing the beginning-of-line anchor (^). Note that the backticks are expanded once, when the pipeline starts, so every line gets the same timestamp; still, it might come in handy if you don't want to depend on Perl.
On Ubuntu:
sudo apt-get install moreutils
echo "cat" | ts
Mar 26 09:43:00 cat
something like this:
(echo -n $(date); echo -n " ls => "; ls) >> /tmp/log
However, if your command output spans multiple lines, it will not match the format shown above. In that case you may want to replace the newlines in the output with some other character, using a command like tr or sed.
One approach is to use logger(1).
Another might be something like this:
stamp () {
( echo -n "`date +%T` "
"$@" ) >> logfile
}
stamp echo how now brown cow
A better alternative using GNU sed would be:
some_command 2>&1 | sed 'h; s/.*/date "+%Y-%m-%d %H:%M:%S"/e; G; s/\n/ [some_command]: /'
Breaking down how this sed program works:
# store the current line (a.k.a. "pattern space") in the "hold space"
h;
# replace the current line with the date command and execute it
# note the e command is available in GNU sed, but not in some other version
s/.*/date "+%Y-%m-%d %H:%M:%S"/e;
# Append a newline and the contents of the "hold space" to the "pattern space"
G;
# Replace that newline inserted by the G command with whatever text you want
s/\n/ [some_command]: /

Opening a file in write mode

I have a file called a.txt. with values like
1
2
3
...
I want to overwrite this file but
echo "$var" >> a.txt
echo "$var1" >> a.txt
echo "$var2" >> a.txt
...
just appends. Using > is not useful either. How can I overwrite the file while still using the >> operator in a shell script?
You may want to use > for the first redirection and >> for subsequent redirections:
echo "$var" > a.txt
echo "$var1" >> a.txt
echo "$var2" >> a.txt
> truncates the file if it exists, and would do what you originally asked.
>> appends to the file if it exists.
If you want to overwrite the content of a file (not truncate it), use 1<>
e.g.:
[23:58:27 0 ~/tmp] $ echo foobar >a
[23:58:28 0 ~/tmp] $ cat a
foobar
[23:58:50 0 ~/tmp] $ echo -n bar 1<>a
[23:58:53 0 ~/tmp] $ cat a
barbar
In what way is using > not useful? It explicitly does what you want by overwriting the file, so use > for the first value and then >> to append subsequent values.
echo "$var
$var1
$var2" > a.txt
or
echo -e "$var\n$var1\n$var2" > a.txt
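A further option (not in the original answers): printf repeats its format string for each argument, so a single call can write all values while truncating the file once. The variable values below are samples for illustration only:

```shell
#!/bin/bash
# One printf call: the '%s\n' format is applied to each argument in turn.
var=1 var1=2 var2=3            # sample values for illustration
printf '%s\n' "$var" "$var1" "$var2" > a.txt
```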
