How to use the flock utility to prevent forever locks - bash

I currently have the following script, which runs every second via a cron job:
#!/bin/bash
LOCK_FILE=/src/lock-file.lock
exec 99>"$LOCK_FILE"
flock -xn 99 -c "node /src/cron.js" || exit 1
exec 99<&- #close
The code is based on some tutorial sites I found, linked below.
https://dev.to/rrampage/ensuring-that-a-shell-script-runs-exactly-once-3d3f
https://www.baeldung.com/linux/file-locking
After about a few days the script gets stuck on the lock. If I run lslocks I see the following, which shows it's stuck on the lock even if I stop the cron job. This is a problem because the script then effectively stops running every second.
COMMAND      PID  TYPE    SIZE  MODE   M  START  END  PATH
(undefined)   -1  OFDLCK        READ   0      0    0
flock        261  FLOCK         WRITE  0      0    0  /root/99
As you can see above, the file is still locked and it never seems to unlock for whatever reason. I read that I can use the -u option to unlock, but I also read that this is not a good idea and that closing the file with exec 99<&- is preferred. The Node.js script should be incapable of getting into an infinite loop the way it's written.
For that matter, the 'file descriptors' are very confusing to me as well. What exactly does 99>"$LOCK_FILE" really do to the file /src/lock-file.lock? I can't find any explanation online; I just hear that you are supposed to do it. I'm aware that 99 is supposed to be a file descriptor that doesn't interfere with the more common descriptors like 0 and 1, but what it all means is still very confusing, which might be where my problem is.
What is really weird is that it creates a file in my directory called 99. I thought a file descriptor was supposed to be just an integer, like an open port on a socket, not an actual file that gets created. I'm not sure if this is part of the problem either.
My question is: does anyone see anything wrong with the code I have above that could cause a 'forever' lock situation?
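For reference while reading the answers below, flock(1) has two calling conventions that are easy to mix up: one takes a lock-file path plus a command, the other takes only the number of an already-open file descriptor and no command. A minimal sketch of each form, reusing the same paths from the script above:
# The path-plus-command form: flock opens and locks the named file itself
flock -xn /src/lock-file.lock -c "node /src/cron.js" || exit 1

# The descriptor form: you open the lock file yourself, then lock that descriptor
exec 99>/src/lock-file.lock
flock -xn 99 || exit 1   # lock FD 99; this form takes a number only, no command
node /src/cron.js
exec 99>&-               # closing the descriptor releases the lock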

Related

Blocking a bash script running with &

I may have inadvertently launched a bash script containing an infinite loop whose exit condition may be met next century, if ever. The fact is that I launched the script, as I would with a nohup program, with
bash [scriptname].sh &
so that (as I get it, which is most probably wrong) I can close the terminal and still keep the script running, as was my intention in developing it. The script should run calculation programmes in my absence and let me gather the results after some time.
Now I want to stop it, but nothing seems to do the trick: I killed the programmes the script had launched, I removed the input file the script was getting orders from and - last and most perfect of accomplishments - I accidentally closed the terminal trying to "exit" the script, which was still giving me error messages.
How can I check whether the script is running (as it does not appear in "top")? Is the '&' relevant? Should I just ask permission to reboot the pc, if that will work and kill everything?
Thank you.
[I put a "Hi everyone" at the beginning but the editor won't let me show it. Oh, well. It's that kind of day.]
OK, I'll put it right here to prove my stupidity: I wandered the internet a little longer (after a long wander before writing this post) and found that the line:
kill -9 $(pgrep -f [SCRIPTNAME].sh)
does the trick from any terminal window.
I write this answer to help anyone in the same situation, but feel free to remove the thread if unnecessary (and excuse me for disturbing).
Good that you found it. Here is another way, if you run the script in the current shell rather than in a separate shell via bash -c:
# put a job in background
sleep 100 &
# save the last PID of background job
MY_PID=$!
# later
kill $MY_PID
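If you just want to check whether the backgrounded job is still alive before killing it, kill -0 sends no signal at all; it only tests that the process exists and that you are allowed to signal it. A small sketch along those lines:
# kill -0 sends no signal; it only checks that the saved PID is still a live process
if kill -0 "$MY_PID" 2>/dev/null; then
    echo "job is still running, stopping it"
    kill "$MY_PID"
else
    echo "job has already exited"
fi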

Lock a file in bash using flock and lockfile

I spent the better part of the day looking for a solution to this problem and I think I am nearing the brink... What I need to do in bash is write one script that will periodically read your inputs and write them into a file, and a second script that will periodically print out the complete file, BUT only when something new gets written to it, meaning it will never print the same output twice in a row. The two scripts need to communicate by means of a lock: script 1 will lock the file so that script 2 can't print anything out of it, then script 1 will write something new into that file and unlock it (and then script 2 can print the updated file).
The only hints we got were to use flock and lockfile - we didn't get any hints on how to use them, except that the problem MUST be solved with flock or lockfile.
Edit: When I said I was looking for a solution, I meant I tried every single combination of flock with those flags and I just couldn't get it to work.
I will write pseudocode for what I want to do. A thing to note here is that this pseudocode is basically the same as how it is done in C... it's so simple; I don't know why everything has to be so complicated in bash.
script 1:
place a lock on file text.txt (no one else can read it or write to it)
read input
append that input to the file (not deleting the previous text)
remove the lock on file text.txt
repeat
script 2:
print out the complete text.txt (but only if it is not locked; if it is locked, obviously you can't)
repeat
And since script 2 repeats all the time, it should print the complete text.txt ONLY when something new was written to it.
I have about 100 other commands like flock that I have to learn in a very short time, and I spent a whole day on just this one. It would be kind of you to at least give me a hint. As for the man page ...
I tried something like flock -x text.txt -c read > text.txt, and every other combination as well, but nothing works. It takes only one command and won't accept arguments. I don't even know why there is an option for a command. I just want it to place a lock on the file, write into it, and then unlock it. In C it only takes flock("text.txt", ..).
Let's look at what this does:
flock -x text.txt -c read > text.txt
First, it opens text.txt for writing (and truncates all its contents) -- before doing anything else, including calling flock!
Second, it tells flock to get an exclusive lock on the file and run the command read.
However, read is a shell builtin, not an external command -- so it can't be called by a non-shell process at all, mooting any effect that it might otherwise have had.
Now, let's try using flock the way the man page suggests using it:
{
    flock -x 3                       # grab a lock on file descriptor #3
    printf "Input to add to file: "  # Prompt user
    read -r new_input                # Read input from user
    printf '%s\n' "$new_input" >&3   # Write new content to the FD
} 3>>text.txt                        # do all this with FD 3 open to text.txt
...and, on the read end:
{
    flock -s 3   # wait for a read lock
    cat <&3      # read contents of the file from FD 3
} 3<text.txt     # all of this with text.txt open to FD 3
You'll notice some differences from what you were trying before:
The file descriptor used to grab the lock is in append mode (when writing to the end), or in read mode (when reading), so you aren't overwriting the file before you even grab the lock.
We run the read command directly in the shell (it is, again, a shell builtin, so only the shell can run it), rather than telling the flock command to invoke it via the execve syscall (which is, again, impossible).
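To cover the "repeat" steps from the pseudocode above, both blocks can simply be wrapped in loops; the only extra piece is remembering what was printed last so the reader only prints when something new has arrived. A rough sketch under those assumptions (the snapshot.txt name, the temp file, and the cmp-based change check are my own additions, not part of the answer above):
# script 1: append one line per pass, holding the lock only while writing
while true; do
    printf "Input to add to file: "
    read -r new_input
    {
        flock -x 3                      # exclusive lock while appending
        printf '%s\n' "$new_input" >&3
    } 3>>text.txt
done

# script 2: snapshot the file under a shared lock, print it only when it changed
prev=$(mktemp)
while true; do
    {
        flock -s 3                      # shared lock while reading
        cat <&3 >snapshot.txt
    } 3<text.txt
    if ! cmp -s snapshot.txt "$prev"; then
        cat snapshot.txt                # something new was written; print it all
        cp snapshot.txt "$prev"
    fi
    sleep 1
done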

What does 200>"$somefile" accomplish? [duplicate]

I've found boilerplate flock(1) code which looks promising. Now I want to understand the components before blindly using it.
Seems like these functions are using the third form of flock
flock [-sxun] [-w timeout] fd
The third form is convenient inside shell scripts, and is usually used in the following manner:
(
flock -s 200
# ... commands executed under lock ...
) 200>/var/lock/mylockfile
The piece I'm lost on (from the sample wrapper functions) is this notation
eval "exec $LOCKFD>\"$LOCKFILE\""
or in shorthand from the flock manpage
200>/var/lock/mylockfile
What does that accomplish?
I notice that subsequent flock commands passed a value other than the one in the initial redirect cause flock to complain:
flock: 50: Bad file descriptor
It seems like flock is using the file descriptors as a map to know which file to operate on. In order for that to work though, those descriptors would have to still be around and associated with the file, right?
After the redirect is finished, and the lock file is created, isn't the file closed, and file descriptors associated with the open file vaporized? I thought file descriptors were only associated with open files.
What's going on here?
200>/var/lock/mylockfile
This creates a file /var/lock/mylockfile which can be written to via file descriptor 200 inside the sub-shell. The number 200 is an arbitrary one. Picking a high number reduces the chance of any of the commands inside the sub-shell "noticing" the extra file descriptor.
(Typically, file descriptors 0, 1, and 2 are used by stdin, stdout, and stderr, respectively. This number could have been as low as 3.)
flock -s 200
Then flock is used to lock the file via the previously created file descriptor. It needs write access to the file, which the > in 200> provided. Note that this happens after the redirection above.
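Putting those two pieces together, here is a minimal self-contained sketch of the same pattern (using an exclusive lock rather than the shared -s shown above; the sleep is only there so the lock is visibly held for a moment):
(
    flock -x 200 || exit 1    # block until we hold an exclusive lock on FD 200
    echo "doing work while holding the lock"
    sleep 5                   # the lock is held for the lifetime of the subshell
) 200>/var/lock/mylockfile    # FD 200 stays open, and locked, until the subshell exits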

How can I capture output of background process

What is the best way of running a process in the background and receiving its output only when needed?
Intended usage: make a prompt-outputting script with heavy initialization be initialized once per session rather than on each prompt run. Note: two-way communication is needed: the shell needs to tell the script when a new prompt is needed and what the last command's status was.
Known solutions:
some explicitly created files on the filesystem (FIFO files, UNIX sockets): I would rather avoid this, as it means I need to choose a file name, make sure it is garbage-collected on exit, and add something to clean up no-longer-used files in case of a crash.
zsh/zpty module: it is a bit of overkill for this job and does not work in bash.
coprocesses: does not work in bash and AFAIK only one coprocess per session is allowed.
Bash has supported coprocesses since 4.0, but multiple coprocesses are still experimental.
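For completeness, a tiny sketch of a bash 4.0+ coprocess; the name MYSRV and the echo-back loop are made up purely for illustration:
# start a coprocess; bash exposes its stdout and stdin as ${MYSRV[0]} and ${MYSRV[1]}
coproc MYSRV { while read -r line; do echo "got: $line"; done; }

echo "hello" >&"${MYSRV[1]}"    # send a line to the coprocess
read -r reply <&"${MYSRV[0]}"   # read its answer back
echo "$reply"                   # prints: got: hello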
I would have gone with some explicitly created files, naming them ~/.myThing-$HOSTNAME/fifo if they're per user and host. You can use flock to relatively easily determine whether the command is still running, and optionally start it:
(
flock -n 123 || exit 1
rm/mkfifo ..
exec yourServer < .. > ..
) 123> ~/".myThing-$HOSTNAME/lockfile"
If the command or server dies, the lock is automatically released and you only have a few zero length files lying around. The next time the server starts, it deletes and sets them up again.
Querying the server would be similar, but exiting if the lock is not in use (and optionally using a wait lock to avoid contention).
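A sketch of what that query side might look like, reusing the hypothetical lock file path from the answer above: if we can take the lock ourselves, the server is not holding it, i.e. it is not running.
{
    if flock -n 123; then
        # we got the lock ourselves, so the server is not holding it (not running)
        echo "server is not running" >&2
        exit 1
    fi
    # ... otherwise the server holds the lock; talk to it over its FIFOs here ...
} 123> ~/".myThing-$HOSTNAME/lockfile"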

Choice of filehandle for flock utility

The flock utility man page gives the following usage example:
(
flock -s 200
# ... commands executed under lock ...
) 200>/var/lock/mylockfile
Assuming 200 is the file handle of the lockfile, is there a possibility that during some run it fails because the same file handle is already in use by another process? If so, are there any tricks to make sure locking with flock works reliably?
It doesn't matter in the slightest whether another process is also using file descriptor 200. Think about it; every process on the system is entitled to have file descriptors 0, 1, 2 pointing to somewhere, and they do not all point to the same place. All that matters is that your processes won't get upset about file descriptor 200 being used, and very few processes will notice, much less care.
Given that, there aren't any tricks needed - you simply have to ensure that all the processes that need to use the lock file actually do use it.
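As a quick illustration that the number is per-process, two unrelated subshells can each use descriptor 200 on different files at the same time (the /tmp lock file names here are made up):
# each subshell has its own descriptor table, so both can use 200 without clashing
( exec 200>/tmp/job-a.lock; flock -x 200; sleep 2 ) &
( exec 200>/tmp/job-b.lock; flock -x 200; sleep 2 ) &
wait    # both acquire their locks immediately; neither blocks the other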
