How to input only the first time in a while loop - bash

I have a while-read loop that runs my script in Terminal. If I insert an echo and read command pair into the script, I get prompted for input for each file in the directory the script is looping through.
I obviously want to avoid that, but at the same time I don't want to hard-code the target directory my script generates CSVs into, which is an inelegant solution and means the script has to be tweaked again for each new target directory.
This is my while loop command in Terminal:
while read MS; do (cd "$MS" && bash script && cd ..); done <whichMSS.txt
And /targetDirectory/ is the part of the script that needs inputting:
exiftool -csv -Title -Source $PWD > /targetDirectory/${PWD##*/}".csv"
The actual result is that I get prompted for input for each file as my script iterates over them, which rather defeats the purpose of the while loop. The ideal result would be to input /targetDirectory/ only the first time and not be prompted again until all the files have been looped through. I would appreciate any help!
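One way to do this, sketched here on the assumption that the script can read an environment variable (TARGET_DIR is an illustrative name, not something from the original script): prompt once before the loop, export the value, and have the script use the variable instead of a hard-coded path.
# prompt once, before the loop starts
read -r -p "Target directory: " TARGET_DIR
export TARGET_DIR
while read MS; do (cd "$MS" && bash script && cd ..); done <whichMSS.txt
Inside the script, the exiftool line would then become:
exiftool -csv -Title -Source "$PWD" > "$TARGET_DIR/${PWD##*/}.csv"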

Related

Passing output of tree through pipeline (BASH)

Sorry if this has been asked before but I couldn't find anything.
I have an issue with an (example) script that needs to do two things:
echo the output of the tree command, and echo the same output again, but this time passing it through a pipe to another script.
The first part works fine; I get the expected tree of directories and files output on the terminal.
However, when passing the output through the pipe, all I get on the other side is the first line. Why is this?
I have tried writing the output to a temporary file and then using cat on that before passing it through, but with no success.
Thanks
example_script:
tree Folder
tree Folder > test.pipe
...
#The other script reads from the pipe like so:
read thisthing < test.pipe
echo $thisthing #I have also tried cat $thisthing
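For what it's worth, read stops after a single line by design, which is why only the first line comes through. A minimal sketch that drains the whole pipe instead (reusing the test.pipe name from the question):
# read every line the writer sends, not just the first
while read -r thisthing
do
    echo "$thisthing"
done < test.pipe
A plain cat test.pipe would also work, since cat keeps reading until the writer closes its end of the pipe.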

run a program avoiding overwriting output

I have 1000 inputs for a program whose output I have no control over.
I can run the program over each file as below. The program takes an input file (input1, input2, input3, and so on), runs, and saves several outputs, but each run overwrites the outputs of the previous one.
for i in {1..3}; do
    myprogram input"$i"
done
I thought I'd generate 3 folders, put one input file in each, and run the program there, hoping the program would write its output there too, but that was still not successful.
for i in {1..3}; do
    myprogram "$i"/input"$i"
done
Basically I want to run the program, save the output produced for each input file, and then move on to the next one.
Is there any way to cope with this?
Thanks
If it is overwriting the input file, as indicated in your comment, you can save the original input file by copying and renaming/moving it before calling the program. Then, if you really want them in a subdirectory, make a directory and move the input and/or output file(s) into it.
for i in {1..3}
do
    cp infile$i outfile$i
    ./myprogram outfile$i
    mkdir programRun-$i
    mv infile$i outfile$i programRun-$i
done
If it leaves the input file alone and just writes output to a consistent file name, then something like
for i in {1..3}
do
    ./myprogram infile$i
    mkdir programRun-$i
    mv outfile programRun-$i/outfile-$i
done
Note that in either case, I'd consider using something other than $i to identify a given run of the program - perhaps a date/time in YYYYMMDDHHMMSS form, or just a Unix timestamp. Just for organization purposes, so that all output files from a given run end up together... but whatever fits your needs.
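As a rough sketch of that suggestion (run_id and the directory layout are illustrative, not from the question):
# one identifier shared by every file from this run
run_id=$(date +%Y%m%d%H%M%S)
for i in {1..3}
do
    dir=programRun-$run_id-$i
    mkdir "$dir"
    cp infile$i "$dir"/
    ( cd "$dir" && ../myprogram infile$i )
done
Running the program from inside the per-run directory means anything it writes to its working directory lands there as well.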
If myprogram always creates the same file names, you could move them out of the way before the next loop iteration. In this example the output is assumed to be files called out*.txt.
for i in {1..3}; do ./myprogram input"$i"; mkdir output"$i"; mv out*.txt output"$i"/; done
If the file names created differ you could create new directories and cd into those prior to executing the application.
for i in {1..3}; do mkdir output"$i"; cd output"$i"; ../myprogram ../input"$i"; cd ..; done

bash while loop through a file doesn't end when the file is deleted

I have a while loop in a bash script:
Example:
while read LINE
do
    echo $LINE >> $log_file
done < ./sample_file
My question is: why, when I delete sample_file while the script is running, doesn't the loop end? I can see that log_file is still being updated. How can the loop continue when there is no input?
In unix, a file isn't truly deleted until the last directory entry for it is removed (e.g. with rm) and the last open file handle for it is closed. See this question (especially MarkR's answer) for more info. In the case of your script, the file is opened as stdin for the while read loop, and until that loop exits (or closes its stdin), rming the file will not actually delete it off disk.
You can see this effect pretty easily if you want. Open three terminal windows. In the first, run the command cat >/tmp/deleteme. In the second, run tail -f /tmp/deleteme. In the third, after running the other two commands, run rm /tmp/deleteme. At this point, the file has been unlinked, but both the cat and tail processes have open file handles for it, so it hasn't actually been deleted. You can prove this by typing into the first terminal window (running cat): every time you hit return, tail will see the new line added to the file and display it in the second window.
The file will not actually be deleted until you end those two commands (Control-D will end cat, but you need Control-C to kill tail).
See "Why file is accessible after deleting in unix?" for an excellent explanation of what you are observing here.
In short...
Underlying rm and any other command that may appear to delete a file there is the system call unlink. And it's called unlink, not remove or deletefile or anything similar, because it doesn't remove a file. It removes a link (a.k.a. directory entry), which is an association between a file and a name in a directory.
You can use the truncate command to destroy the actual contents (or shred if you need to be more secure), which would immediately halt the execution of your example loop.
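For example, from another terminal while the loop is running (truncate is part of GNU coreutils):
truncate -s 0 ./sample_file    # empty the file; the loop's next read hits end-of-file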
The moment the shell starts the while loop, it opens sample_file and holds that file descriptor for the duration, so it does not matter whether the file's directory entry still exists after that point.
Test script:
$ cat test.sh
#!/bin/bash
while read line
do
    echo $line
    sleep 1
done < data_file
Test file:
$ seq 1 10 > data_file
Now run the script in one terminal and, in another terminal, delete data_file; you will still see the numbers 1 to 10 printed by the script.
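You can also confirm from the outside that the script is holding the unlinked file open; a quick check, assuming lsof is installed:
./test.sh &                  # start the loop in the background
rm data_file                 # unlink the file while the loop is reading it
lsof -p $! | grep data_file  # the open file shows up flagged "(deleted)"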

Making a command loop in shell with a script

How can one loop a command/program in a Unix shell without writing the loop into a script or other application?
For example, I wrote a script that outputs a light sensor value, but I'm still testing it right now, so I want to run it in a loop by running the executable repeatedly.
Maybe I'd also like to run ls or df in a loop. I know I can do this easily in a few lines of bash code, but being able to type a one-off command in the terminal for any given command would be just as useful to me.
You can write the exact same loop you would write in a shell script on a single line by putting semicolons instead of newlines, as in
for NAME [in LIST ]; do COMMANDS; done
At that point you could write a shell script called, for example, repeat that, given a command, runs it N times, by simply changing COMMANDS to $1.
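A minimal sketch of such a repeat script (the name and interface are illustrative; it uses "$@" rather than $1 so the command's arguments survive intact):
#!/bin/bash
# usage: repeat N command [args...]
n=$1
shift
for ((i = 0; i < n; i++))
do
    "$@"    # run the command with its original arguments
done
Then something like repeat 10 ./my_script.sh would run the script ten times.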
I recommend the use of watch; it does exactly what you want, and it clears the terminal before each execution of the command, so it's easy to monitor changes.
You probably have it already; just try watch ls or watch ./my_script.sh. You can even control how long to wait between executions, in seconds, with the -n option, and you can use -d to highlight differences in the output of consecutive runs.
Try:
Run ls every second:
watch -n 1 ls
Run my_script.sh every 3 seconds, highlighting differences:
watch -n 3 -d ./my_script.sh
watch program man page:
http://linux.die.net/man/1/watch
This doesn't exactly answer your question, but I felt it was relevant. One of the great things about shell looping is that some commands return lists of items. That is obvious, of course, but something you can do using the for loop is execute a command on each item in that list.
for file in $(find . -name '*.wma'); do cp "$file" ./new/location/; done
You can get creative and do some very powerful stuff.
Aside from accepting arguments, anything you can do in a script can be done on the command line. Earlier I typed this directly into bash to watch a directory fill up as I transferred files:
while sleep 5s
do
    ls photos
done

Bash script to edit a bunch of files

To process a bunch of data and get it ready to be inserted into our database, we generate a bunch of shell scripts. Each of them has about 15 lines, one for each table the data is going into. On a recent import batch, some of the import files failed going into one particular table. So I have a bunch of shell scripts (about 600) where I need to comment out the first 7 lines and then rerun the file. There are about 6000 shell scripts in this folder, and nothing about a particular file tells me whether it needs the edit; I've got a list of the affected files, which I pulled from the database output.
So how do I write a bash script (or anything else that would work better) to take this list of file names and for each of them, comment out the first 7 lines, and run the script?
EDIT:
#!/usr/bin/env sh
cmd1
cmd2
cmd3
cmd4
cmd5
cmd6
cmd7
cmd8
Not sure how readable that is. Basically, the first 7 lines (not counting the first line) need to have a # added to the beginning of them. Note: the files shown have been edited to make each line shorter, and lines were partially cut off when copying out of Vim. In the main part of each file there is a line starting with echo, followed by a line starting with sqlldr.
Using sed, you can specify a line number range in the file to be changed.
#!/bin/bash
while read line
do
    # comment out lines 3 - 9 and write the edited script to a new file
    sed '3,9 s/^/#/' "$line" > "$line.new"
    bash "$line.new"
done < "filelist.txt"
You may wish to test this before running it on all of those scripts...
EDIT: changed the lines numbers to reflect comments.
Roughly speaking:
#!/bin/sh
for file in "$#"
do
out=/tmp/$file.$$
sed '2,8s/^/#/' < $file > $out
$SHELL $out
rm -f $out
done
Assuming you don't care about checking for race conditions etc.
ex seems made for what you want to do.
For instance, for editing one file, with a here document:
#!/bin/sh
ex test.txt << END
1,12s/^/#/
wq
END
That'll comment out the first 12 lines in "test.txt". For your example you could try "$FILE" or similar (including quotes!).
Then run them the usual way, i.e. ./"$FILE"
edit: $SHELL "$FILE" is probably a better approach to run them (from one of the above commenters).
Ultimately you're going to want to use the sed command. Whatever surrounding logic you need goes in your script, but the actual edit will come down to a sed call. http://lowfatlinux.com/linux-sed.html
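For instance, with GNU sed's -i (in-place) flag, the per-file edit collapses to a one-liner (the 2,8 range matches the earlier answers; adjust it to your files):
sed -i '2,8 s/^/#/' "$file" && $SHELL "$file"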
