I have a while-read loop that runs my script in Terminal. If I inserted an echo and read command pair into the script, I'd get prompted for input for each file in the directory the script is looping through.
Obviously I want to avoid that, but at the same time I don't want to hard-code the target directory my script writes its CSVs into: that's an inelegant solution, and it means the script has to be tweaked again for each new target directory.
This is my while loop command in Terminal:
while read MS; do (cd "$MS" && bash script && cd ..); done <whichMSS.txt
And /targetDirectory/ is the part of the script that needs to be supplied:
exiftool -csv -Title -Source "$PWD" > "/targetDirectory/${PWD##*/}.csv"
The actual result is that I'd get prompted for input for each file as my script iterates over them, which rather defeats the purpose of the while loop. The ideal result would be to input /targetDirectory/ once at the start and not be prompted again until all the files have been looped through. I would appreciate any help!
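A minimal sketch of one approach, assuming the script can be changed to read the target directory from an environment variable (the name TARGET_DIR is illustrative): prompt once before the loop and export the value, so every iteration reuses it:
read -r -p "Target directory: " TARGET_DIR   # asked exactly once
export TARGET_DIR                            # make it visible to the child script
while read -r MS; do
  (cd "$MS" && bash script)                  # subshell: no explicit 'cd ..' needed
done < whichMSS.txt
and inside the script:
exiftool -csv -Title -Source "$PWD" > "$TARGET_DIR/${PWD##*/}.csv"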
Recently I tried to list all of the images located in a directory I had (several hundred) and put their names into a file. I used a very simple command:
ls > "image_names.txt"
Out of curiosity I looked inside the file and noticed that image_names.txt itself was listed in it. Then I realized the order of operations was not what I thought: I had read the command left to right, as two separate steps:
ls (First list all the file names)
> "image_names.txt" (Then create this file and pipe it here)
Why is it creating the file first then listing all of the files in the directory, despite the ls command coming first?
When you use output redirection, the shell needs a place to put your output before the command runs (if the output were very long, it could be lost when the command terminates, or exhaust all working memory), so the first step is to open the output file and connect the executed command's stdout to it.
This is especially important to know for commands like this one:
cat a.txt | grep "foo" > a.txt
Since a.txt is opened first, and not in append mode, it is truncated immediately, so there is no input left for cat. The behaviour you might expect (the matching lines being filtered out of a.txt and written back to it) will not happen; instead you will simply lose the contents of a.txt.
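A common workaround, sketched here, is to write to a temporary file and move it into place afterwards (the name a.txt.tmp is illustrative); note that grep exits non-zero when nothing matches, so with && the mv only runs if at least one line matched:
grep "foo" a.txt > a.txt.tmp && mv a.txt.tmp a.txt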
Because the redirection > "image_names.txt" is performed before the ls command runs.
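An illustrative session (the file names are made up) showing the effect:
$ ls
a.jpg  b.jpg
$ ls > image_names.txt
$ cat image_names.txt
a.jpg
b.jpg
image_names.txt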
With the open() system call, if I open with O_CREAT | O_EXCL, the file will only be created if it does not already exist; the atomicity is guaranteed by the system call. Is there a similar way to create a file atomically from a bash script?
UPDATE:
I found two different atomic ways:
Use set -o noclobber. Then you can use the > operator atomically.
Just use mkdir. mkdir is atomic.
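A minimal sketch of the mkdir approach (the lock directory name is illustrative); the existence check and the creation happen in a single system call, so exactly one caller succeeds:
if mkdir /tmp/mylock 2>/dev/null; then
    echo "acquired the lock"
    # ... do the work ...
    rmdir /tmp/mylock            # release the lock
else
    echo "another process holds the lock" >&2
fi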
A 100% pure bash solution:
set -o noclobber
{ > file ; } &> /dev/null
This command creates a file named file if no file of that name exists. If the file already exists, it does nothing (but returns a non-zero exit code).
Pros of > over the touch command:
Doesn't update the timestamp if the file already existed
100% bash builtin
Return code as expected: fail if file already existed or if file couldn't be created; success if file didn't exist and was created.
Cons:
You need to set the noclobber option (which is fine in a script if you're careful with other redirections, or you can unset it afterwards).
I guess this solution is really the bash counterpart of the open system call with O_CREAT | O_EXCL.
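An illustrative way to branch on the result (assuming noclobber has been set as above):
set -o noclobber
if { > file ; } 2>/dev/null; then
    echo "file created"
else
    echo "file already existed or couldn't be created" >&2
fi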
Here's a bash function using the mv -n trick:
function mkatomic() {
  local f
  f="$(mktemp)"                    # create a unique temporary file (in $TMPDIR)
  mv -n "$f" "$1"                  # -n: never overwrite; atomic only within one filesystem
  if [ -e "$f" ]; then             # temp file still present => mv refused to overwrite
    rm "$f"
    echo "ERROR: file exists:" "$1" >&2
    return 1
  fi
}
Examples:
$ mkatomic foo
$ wc -c foo
0 foo
$ mkatomic foo
ERROR: file exists: foo
You could create the file under a randomly generated name, then rename it into place with mv -n random desired. The rename will be refused if the destination already exists, leaving the randomly named file behind.
Like this:
#!/bin/bash
touch randomFileName                # in a real script, use mktemp for a truly random name
mv -n randomFileName lockFile       # -n: do not overwrite an existing lockFile
if [ -e randomFileName ] ; then     # source still present => the rename was refused
    echo "Failed to acquire lock"
else
    echo "Acquired lock"
fi
Just to be clear, ensuring the file will only be created if it doesn't exist is not the same thing as atomicity. The operation is atomic if and only if, when two or more separate threads attempt to do the same thing at the same time, exactly one will succeed and all others will fail.
The best way I know of to create a file atomically in a shell script follows this pattern (and it's not perfect):
create a file that has an extremely high chance of not existing (using a decent random number selection or something in the file name), and place some unique content in it (something that no other thread would have - again, a random number or something)
verify that the file exists and contains the contents you expect it to
create a hard link from that file to the desired file
verify that the desired file contains the expected contents
In particular, touch is not atomic, since it will create the file if it's not there, or simply update the timestamp if it is. You might be able to play games with timestamps, but reading and parsing a timestamp to see whether you "won" the race is harder than the above. mkdir can be atomic, but you would have to check the return code, because otherwise all you can tell is that the directory exists, not which thread created it. If you're on a file system that doesn't support hard links, you might have to settle for a less ideal solution.
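A minimal sketch of the hard-link pattern described above (all names are illustrative and error handling is kept to a bare minimum):
unique="attempt.$$.$RANDOM"                 # a name that is very unlikely to collide
echo "$$ $RANDOM" > "$unique"               # unique content identifying this attempt
content=$(cat "$unique")                    # verify our write actually landed
if ln "$unique" desiredfile 2>/dev/null &&
   [ "$(cat desiredfile)" = "$content" ]; then   # desired file holds our content: we won
    echo "we won the race"
fi
rm -f "$unique"                             # drop the extra name; the winner's link survives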
Another way to do this is to use umask: try to create the file and open it for writing, but create it without write permissions, like this:
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)                  # save the current umask
umask 777                       # new files will be created with mode 000
echo "$$" > "$LOCK_FILE"        # succeeds only if the file doesn't exist yet
umask "$UMASK"                  # restore the original umask
trap "rm -f '$LOCK_FILE'" EXIT  # -f: the file has mode 000, so plain rm could prompt
If the file is missing, the script will succeed in creating it and opening it for writing, even though the file is created without write permissions. If it already exists, the script won't be able to open the file for writing. It would be possible to use exec to open the file and keep the file descriptor around.
rm only requires write permission on the directory itself, regardless of the file's own permissions.
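A sketch of the exec variant mentioned above; fd 9 is an arbitrary choice, and since some shells abort a non-interactive script on a failed exec redirection, the explicit exit is belt and braces:
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)
umask 777                        # the lock file will be created with mode 000
exec 9> "$LOCK_FILE" || exit 1   # fails if a mode-000 lock file already exists
umask "$UMASK"
echo "$$" >&9                    # fd 9 stays writable even though the file is mode 000
trap "rm -f '$LOCK_FILE'" EXIT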
touch is the command you are looking for. It updates the timestamps of the provided file if it exists, or creates it if it doesn't.
I'm writing commands that do something like ./script > output.txt so that I can use the files in later scripts, like ./script2 output.txt otherFile.txt > output2.txt. I remove them all at the end of the script, but when I'm testing or debugging, it's tricky to search through all the subdirectories and files that the script has created.
Is the best option just to create a hidden file?
As always, there are numerous ways to do so. If you want to avoid files altogether, you can save the output (stdout) of a command in a variable and pass it to the next command as a file using process substitution, <():
output=$(cat /usr/include/stdio.h)
cat <(echo "$output")
Alternatively, you can do so in a single command line:
cat <(cat /usr/include/stdio.h)
This assumes that the next command strictly requires a file for input.
Unless large amounts of data have to be processed, I tend to avoid temporary files, which eliminates the need for a cleanup step that has to run in all cases.
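When a temporary file is unavoidable (e.g. for large amounts of data), a common pattern, sketched here, is mktemp plus a trap, so the cleanup runs on any exit, including failures:
tmpfile=$(mktemp)                # create a unique temporary file
trap 'rm -f "$tmpfile"' EXIT     # delete it on any exit, success or failure
./script > "$tmpfile"
./script2 "$tmpfile" otherFile.txt > output2.txt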
I have around 50 input files to a terminal program. The program takes one file as input at a time, prints some data and terminates.
When it has terminated, I run the program again with the next file and so on.
Is there a way to make this automatic, since the whole run will take several hours (some files take a few minutes and some can take an hour), and save each data print in a file output_inputfile.txt?
I was thinking to have a file like
myprogram file-1
myprogram file-2
myprogram file-3
and execute it in some way.
You can accomplish that with shell scripting; have a look at this, for example: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html. You could just put all the files in one directory and use this simple script:
#!/bin/bash
cd /path/to/your/files || exit 1                 # go to the directory; stop if that fails
for i in *; do                                   # for every file in the directory (don't parse ls)
    /path/to/your/program "$i" > "output_$i.txt" # call your program, saving its output per file
done
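Since the whole run can take hours, you may want to start it unattended. Assuming you saved the script above as run_all.sh (an illustrative name) and made it executable, a common approach is:
nohup ./run_all.sh > run.log 2>&1 &
nohup keeps the run alive if the terminal closes, and run.log collects any messages the programs print to the terminal itself.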