Locking Files in Bash

I'm having trouble finding a good approach to locking files in bash.
Basically I want to achieve the following:
1. Lock the file
2. Read in the data in the file (multiple times)
3. Do stuff with the data
4. Write new stuff to the file (not necessarily at the end)
5. Unlock the file
Doing this with flock doesn't seem possible to me, because the file descriptor will just move once to the end of the file.
Creating a temp file also fails, because I might overwrite lines that have already been read, which is not acceptable either.
Edit:
Also note that other scripts I do not control might try to write to that file.
So my question is: how can I create a lock in step 1 that spans steps 2, 3, and 4, until I unlock the file again in step 5?

You can do this with the flock utility. You just need to get flock to use a separate read-only file descriptor, i.e. open the file twice. For example, to sort a file in place using an intermediate temporary file:
(
    flock -x -w 10 100 || exit 1   # exclusive lock on fd 100; give up after 10 seconds
    tmp=$(mktemp)
    sort <"$file" >"$tmp"
    cat "$tmp" > "$file"           # overwrite in place; the lock stays on the same inode
    rm -f "$tmp"
) 100<"$file"                      # open the file a second time, read-only, as fd 100
flock will issue the flock() system call for your file and block if it is already locked. If the timeout is exceeded then the script will just abort with an error code.
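Applied to the steps in the question, the same pattern covers an in-place edit as well. A rough sketch (assuming $file is already set, and using a sed substitution purely as a stand-in for whatever "do stuff" means in steps 3 and 4):
(
    flock -x -w 10 100 || exit 1       # step 1: take the exclusive lock, give up after 10 seconds
    data=$(<"$file")                   # step 2: read the data as many times as you like
    # step 3: do stuff with "$data" ...
    tmp=$(mktemp)
    sed 's/old/new/' "$file" > "$tmp"  # step 4: write new stuff anywhere in the file
    cat "$tmp" > "$file"               # overwrite in place so the locked inode stays the same
    rm -f "$tmp"
) 100<"$file"                          # step 5: the lock is released when fd 100 closes here
Note that flock is only advisory: the other scripts mentioned in the edit also have to take the lock, otherwise nothing stops them from writing while you hold it.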

Related

Is lockfile necessary for reading and writing the same file of two processes

I'm working with bash scripts and have run into the following situation:
one script will write things into a file, and another script will read things from the same file.
In this case, is a lockfile necessary? I think I don't need one because there is only one reading process and only one writing process, but I'm not sure.
Bash write.sh:
#!/bin/bash
echo 'success' > tmp.log
Bash read.sh:
#!/bin/bash
while :
do
    line=$(head -n 1 ./tmp.log)
    if [[ "$line" == "success" ]]; then
        echo 'done'
        break
    else
        sleep 3
    fi
done
BTW, write.sh could write several keywords, such as success, fail, etc.
While many programmers ignore this, you can potentially run into a problem because writing to the file is not atomic. When the writer does
echo success > tmp.log
it could be split into two (or more) parts: first it writes suc, then it writes cess\n.
If the reader executes between those steps, it might get just suc rather than the whole success line. Using a lockfile would prevent this race condition.
This is unlikely to happen with short writes from a shell echo command, which is why most programmers don't worry about it. However, if the writer is a C program using buffered output, the buffer could be flushed at arbitrary times, which would likely end with a partial line.
Also, since the reader is reading the file from the beginning each time, you don't have to worry about starting the read where the previous one left off.
Another way to do this is for the writer to write into a file with a different name, then rename the file to what the reader is looking for. Renaming is atomic, so you're guaranteed to read all of it or nothing.
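A minimal sketch of that pattern for write.sh (tmp.log.new is just an illustrative name; it only needs to live on the same filesystem as tmp.log so the rename stays atomic):
#!/bin/bash
echo 'success' > tmp.log.new   # write the complete content under a temporary name
mv tmp.log.new tmp.log         # atomic rename: the reader sees either the old file or the whole new one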
At least from your example, it doesn't look like read.sh really cares about what gets written to tmp.log, only that write.sh has created the file. In that case, all read.sh needs to check is that the file exists.
write.sh can simply be
: > tmp.log
and read.sh becomes
until [ -e tmp.log ]; do
sleep 3
done
echo "done"

Create file, but fail if it exists, with bash [duplicate]

In system call open(), if I open with O_CREAT | O_EXCL, the system call ensures that the file will only be created if it does not exist. The atomicity is guaranteed by the system call. Is there a similar way to create a file in an atomic fashion from a bash script?
UPDATE:
I found two different atomic ways:
1. Use set -o noclobber. Then you can use the > operator atomically.
2. Just use mkdir. mkdir is atomic.
A 100% pure bash solution:
set -o noclobber
{ > file ; } &> /dev/null
This command creates a file named file if no file named file exists yet. If there is already a file named file, it does nothing (but returns a non-zero return code).
Pros of > over the touch command:
Doesn't update timestamp if file already existed
100% bash builtin
Return code as expected: fail if file already existed or if file couldn't be created; success if file didn't exist and was created.
Cons:
You need to set the noclobber option (but that's fine in a script if you're careful with redirections, or you can unset it afterwards).
I guess this solution is really the bash counterpart of the open system call with O_CREAT | O_EXCL.
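For example, to use it as a simple lock without leaving noclobber switched on for the rest of the script, the whole attempt can go in a subshell (the lock file path here is just an example):
if ( set -o noclobber; > /tmp/mylock ) 2>/dev/null; then
    echo "acquired lock"
    # ... critical section ...
    rm -f /tmp/mylock
else
    echo "lock already taken"
fi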
Here's a bash function using the mv -n trick:
function mkatomic() {
    f="$(mktemp)"
    mv -n "$f" "$1"
    if [ -e "$f" ]; then
        rm "$f"
        echo "ERROR: file exists:" "$1" >&2
        return 1
    fi
}
Examples:
$ mkatomic foo
$ wc -c foo
0 foo
$ mkatomic foo
ERROR: file exists: foo
You could create it under a randomly-generated name, then rename it into place with mv -n random desired. With -n, mv won't overwrite an existing destination; the source file is simply left behind, which is how you detect that the name was already taken.
Like this:
#!/bin/bash
touch randomFileName
mv -n randomFileName lockFile
if [ -e randomFileName ] ; then
    echo "Failed to acquire lock"
else
    echo "Acquired lock"
fi
Just to be clear, ensuring the file will only be created if it doesn't exist is not the same thing as atomicity. The operation is atomic if and only if, when two or more separate threads attempt to do the same thing at the same time, exactly one will succeed and all others will fail.
The best way I know of to create a file atomically in a shell script follows this pattern (and it's not perfect):
create a file that has an extremely high chance of not existing (using a decent random number selection or something in the file name), and place some unique content in it (something that no other thread would have - again, a random number or something)
verify that the file exists and contains the contents you expect it to
create a hard link from that file to the desired file
verify that the desired file contains the expected contents
In particular, touch is not atomic, since it will create the file if it's not there, or simply update the timestamp. You might be able to play games with different timestamps, but reading and parsing a timestamp to see if you "won" the race is harder than the above. mkdir can be atomic, but you would have to check the return code, because otherwise, you can only tell that "yes, the directory was created, but I don't know which thread won". If you're on a file system that doesn't support hard links, you might have to settle for a less ideal solution.
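As a rough sketch of the mkdir variant mentioned above (the directory name is just an example; checking the return code is what tells you which process won):
if mkdir /tmp/mylock.d 2>/dev/null; then
    echo "this process created the lock"
    # ... do the work ...
    rmdir /tmp/mylock.d
else
    echo "another process got there first"
fi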
Another way to do this is to use umask to try to create the file and open it for writing, without creating it with write permissions, like this:
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)
umask 777                                # any file we create gets mode 000
echo "$$" > "$LOCK_FILE" || exit 1       # fails if an (unwritable) lock file already exists
umask "$UMASK"
trap "rm '$LOCK_FILE'" EXIT
If the file is missing, the script will succeed at creating and opening it for writing, despite the file being created without writing permissions. If it already exists, the script won't be able to open the file for writing. It would be possible to use exec to open the file and keep the file descriptor around.
rm requires you to have write permissions to the directory itself, without regards to file permissions.
touch is the command you are looking for. It updates the timestamps of the provided file if it exists, or creates it if it doesn't.

bash while loop through a file doesn't end when the file is deleted

I have a while loop in a bash script:
Example:
while read LINE
do
    echo $LINE >> $log_file
done < ./sample_file
My question is: why, when I delete sample_file while the script is running, doesn't the loop end? I can see that the log_file keeps updating. How can the loop continue when there is no input?
In unix, a file isn't truly deleted until the last directory entry for it is removed (e.g. with rm) and the last open file handle for it is closed. See this question (especially MarkR's answer) for more info. In the case of your script, the file is opened as stdin for the while read loop, and until that loop exits (or closes its stdin), rming the file will not actually delete it off disk.
You can see this effect pretty easily if you want. Open three terminal windows. In the first, run the command cat >/tmp/deleteme. In the second, run tail -f /tmp/deleteme. In the third, after running the other two commands, run rm /tmp/deleteme. At this point, the file has been unlinked, but both the cat and tail processes have open file handles for it, so it hasn't actually been deleted. You can prove this by typing into the first terminal window (running cat), and every time you hit return, tail will see the new line added to the file and display it in the second window.
The file will not actually be deleted until you end those two commands (Control-D will end cat, but you need Control-C to kill tail).
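The same demonstration, command by command:
# terminal 1: create the file and keep writing to it
cat > /tmp/deleteme

# terminal 2: follow the file
tail -f /tmp/deleteme

# terminal 3: unlink the name; terminals 1 and 2 keep their open handles
rm /tmp/deleteme

# now type into terminal 1: each line still shows up in terminal 2,
# even though the directory entry is gone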
See "Why file is accessible after deleting in unix?" for an excellent explanation of what you are observing here.
In short...
Underlying rm and any other command that may appear to delete a file there is the system call unlink. And it's called unlink, not remove or deletefile or anything similar, because it doesn't remove a file. It removes a link (a.k.a. directory entry) which is an association between a file and a name in a directory.
You can use the truncate command to destroy the actual contents (or shred if you need to be more secure), which would immediately halt the execution of your example loop.
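For example, run while the loop from the question is still sitting on sample_file:
truncate -s 0 ./sample_file   # empty the file; the loop's next read hits end-of-file and the loop ends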
The moment the shell starts the while loop it opens sample_file and keeps reading from that open file descriptor, so it does not matter whether the name still exists after that point.
Test script:
$ cat test.sh
#!/bin/bash
while read line
do
    echo $line
    sleep 1
done < data_file
Test file:
$ seq 1 10 > data_file
Now run the script in one terminal; in another terminal, delete data_file. You will still see the numbers 1 to 10 printed by the script.

Lock a file in bash using flock and lockfile

I spent the better part of the day looking for a solution to this problem and I think I am nearing the brink... What I need to do in bash is: write one script that will periodically read your inputs and write them into a file, and a second script that will periodically print out the complete file, BUT only when something new gets written to it, meaning it will never print the same output twice in a row. The two scripts need to communicate by means of a lock: script 1 will lock the file so that script 2 can't print anything from it, then script 1 will write something new into the file and unlock it (and then script 2 can print the updated file).
The only hints we got were to use flock and lockfile. We didn't get any hints on how to use them, except that the problem MUST be solved with flock or lockfile.
Edit: When I said I was looking for a solution, I meant I tried every single combination of flock with those flags and I just couldn't get it to work.
I will write pseudocode of what I want to do. A thing to note here is that this pseudocode is basically how it would be done in C... it's so simple, I don't know why everything has to be so complicated in bash.
script 1:
place a lock on file text.txt (no one else can read it or write to it)
read input
place that input into file ( not deleting previous text )
remove lock on file text.txt
repeat
script 2:
print out the complete text.txt (but only if it is not locked; if it is locked, obviously you can't)
repeat
And since script 2 is repeating all the time, it should print the complete text.txt ONLY when something new was written to it.
I have about 100 other commands like flock that I have to learn in a very short time, and I spent a whole day on just one of them. It would be kind of you to at least give me a hint. As for the man page ...
I tried to do something like flock -x text.txt -c read > text.txt, and tried every other combination as well, but nothing works. It takes only one command and won't accept arguments. I don't even know why there is an option for a command. I just want it to place a lock on the file, write into it, and then unlock it. In C it only takes flock("text.txt", ..).
Let's look at what this does:
flock -x text.txt -c read > text.txt
First, it opens text.txt for write (and truncates all contents) -- before doing anything else, including calling flock!
Second, it tells flock to get an exclusive lock on the file and run the command read.
However, read is a shell builtin, not an external command -- so it can't be called by a non-shell process at all, mooting any effect that it might otherwise have had.
Now, let's try using flock the way the man page suggests using it:
{
    flock -x 3                       # grab a lock on file descriptor #3
    printf "Input to add to file: "  # Prompt user
    read -r new_input                # Read input from user
    printf '%s\n' "$new_input" >&3   # Write new content to the FD
} 3>>text.txt                        # do all this with FD 3 open to text.txt
...and, on the read end:
{
    flock -s 3    # wait for a read lock
    cat <&3       # read contents of the file from FD 3
} 3<text.txt      # all of this with text.txt open to FD 3
You'll notice some differences from what you were trying before:
The file descriptor used to grab the lock is in append mode (when writing to the end), or in read mode (when reading), so you aren't overwriting the file before you even grab the lock.
We're running the read command (which, again, is a shell builtin, and so can only be run by the shell) in the shell directly, rather than telling flock to invoke it via the execve syscall (which is, again, impossible).
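The locking above doesn't by itself satisfy the "only print when something new was written" part of the question. One hedged way to bolt that on (the cksum fingerprint and the polling loop are additions of mine, not something flock provides):
last=''
while :; do
    {
        flock -s 3                 # shared lock: the writer can't change the file mid-read
        current=$(cksum <&3)       # cheap fingerprint of the current contents
        if [ "$current" != "$last" ]; then
            cat text.txt           # print the whole file; we still hold the lock via fd 3
            last=$current
        fi
    } 3<text.txt
    sleep 1
done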

shell - keep writing to the same file[name] even if externally changed

There's a process, which should not be interrupted, and I need to capture the stdout from it.
prg > debug.log
I can't quite modify the way the process outputs its data, though I'm free to wrap its launch as I see fit. Background, piping the output to other commands, etc, that's all fair game. The process, once started, must run till the end of times (and can't be blocked, say, waiting for a fifo to be emptied). The writing isn't very fast, and the file can be cut at an arbitrary place if it exceeds a predefined size.
Now, the problem is the log would grow to fill all available space, and so it must be rotated, oldest instances deleted/overwritten. And now there's the problem...
If I do
mv debug.log debug.log.1
the file debug.log vanishes forever while debug.log.1 keeps growing.
If I do
cp debug.log debug.log.1
rm debug.log
the file debug.log.1 doesn't grow, but debug.log vanishes forever, and all consecutive output from the program is lost.
Is there some way to make the stdout redirect behave like typical log writing - if the file vanished, got renamed or such, create it again?
(this is all working under busybox, so lightweight solutions are preferred.)
If the application in question holds the log file open all the time and cannot be told to close and re-open the log file (as many applications can) then the only option I can think of is to truncate the file in place.
Something like this:
cp debug.log debug.log.1
: > debug.log
You may use the split command, or any other command that opens a new output file on its own:
prg | split --numeric-suffixes --lines=100 debug.log.
The reason is that redirected output is tied to a file handle, not to the file name.
So the process has to close the file handle and open a new one.
You may use rotatelogs to do it in Apache HTTPD style, if you like:
http://httpd.apache.org/docs/2.2/programs/rotatelogs.html
In bash style, you can use a script:
fn='debug.log'
i=0
prg | while IFS='' read -r in; do
    if ...; then              # time or number of lines read or file size or ...
        i=$((i+1))
        mv "$fn" "$fn.$i"     # rename the file
        : > "$fn"             # make it empty
    fi
    echo "$in" >> "$fn"       # fill the file again
done
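As one concrete (and entirely arbitrary) way to fill in the if ...; placeholder, rotating after every 1000 lines could look roughly like this; the threshold and the single .1 suffix are assumptions, not part of the original:
fn='debug.log'
max_lines=1000
count=0
prg | while IFS='' read -r in; do
    if [ "$count" -ge "$max_lines" ]; then
        mv "$fn" "$fn.1"      # keep one old generation
        : > "$fn"             # start a fresh, empty debug.log
        count=0
    fi
    echo "$in" >> "$fn"
    count=$((count+1))
done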
