Bash differences in changes to running scripts

I'm scratching my head about two seemingly different behaviors of bash when editing a running script.
This is not a place to discuss WHY one would do this (you probably shouldn't). I would only like to try to understand what happens and why.
Example A:
$ echo "echo 'echo hi' >> script.sh" > script.sh
$ cat script.sh
echo 'echo hi' >> script.sh
$ chmod +x script.sh
$ ./script.sh
hi
$ cat script.sh
echo 'echo hi' >> script.sh
echo hi
The script edits itself, and the change (extra echo line) is directly executed. Multiple executions lead to more lines of "hi".
Example B:
Create a script infLoop.sh and run it.
$ cat infLoop.sh
while true
do
x=1
echo $x
done
$ ./infLoop.sh
1
1
1
...
Now open a second shell and edit the file changing the value of x. E.g. like this:
$ sed --in-place 's/x=1/x=2/' infLoop.sh
$ cat infLoop.sh
while true
do
x=2
echo $x
done
However, we observe that the output in the first terminal is still 1. Doing the same with only one terminal, interrupting infLoop.sh through Ctrl+Z, editing, and then continuing it via fg yields the same result.
The Question
Why does the change in example A have an immediate effect but the change in example B not?
PS: I know there are questions out there showing similar examples but none of those I saw have answers explaining the difference between the scenarios.

There are actually two different reasons that example B is different, either one of which is enough to prevent the change from taking effect. They're due to some subtleties of how sed and bash interact with files (and how unix-like OSes treat files), and might well be different with slightly different programs, etc.
Overall, I'd say this is a good example of how hard it is to understand & predict what'll happen if you modify a file while also running it (or reading etc from it), and therefore why it's a bad idea to do things like this. Basically, it's the computer equivalent of sawing off the branch you're standing on.
Reason 1: Despite the option's name, sed --in-place does not actually modify the existing file in place. What it actually does is create a new file with a temporary name, then when it's finished that it deletes the original and renames the new file into its place. It has the same name, but it's not actually the same file. You can tell this by looking at the file's inode number with ls -li:
$ ls -li infLoop.sh
88 -rwxr-xr-x 1 pi pi 39 Aug 4 22:04 infLoop.sh
$ sed --in-place 's/x=1/x=2/' infLoop.sh
$ ls -li infLoop.sh
4073 -rwxr-xr-x 1 pi pi 39 Aug 4 22:05 infLoop.sh
But bash still has the old file open (strictly speaking, it has an open file handle pointing to the old file), so it's going to continue getting the old contents no matter what changed in the new file.
Note that this doesn't apply to all programs that edit files. vim, for example, will rewrite the contents of existing files (unless file permissions forbid it, in which case it switches to the delete&replace method). Appending with >> will always append to the existing file rather than creating a new one.
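A minimal way to see the difference for yourself (the file name and contents here are arbitrary) is to compare inode numbers before and after each kind of edit:
printf 'x=1\n' > demo.sh
ls -i demo.sh                         # note the inode number
echo 'echo hi' >> demo.sh
ls -i demo.sh                         # same inode: >> wrote into the existing file
sed --in-place 's/x=1/x=2/' demo.sh
ls -i demo.sh                         # different inode: sed swapped in a new file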
(BTW, if it seems weird that bash could have a file open after it's been "deleted", that's just part of how unix-like OSes treat their files. Files are not truly deleted until their last directory entry is removed and the last open file handle referring to them is closed. Some programs actually take advantage of this for security by opening(/creating) a file and then immediately "deleting" it, so that the open file handle is the only way to reach the file.)
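A small sketch of that last point (the file name is arbitrary):
echo 'some data' > /tmp/demo
exec 3< /tmp/demo    # open a read handle on the file
rm /tmp/demo         # unlink the name; the data is still reachable through FD 3
ls /tmp/demo         # fails: No such file or directory
cat <&3              # still prints "some data"
exec 3<&-            # closing the last handle finally frees the file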
Reason 2: Even if you used a tool that actually modified the existing file in place, bash still wouldn't see the change. The reason for this is that bash reads from the file (parsing as it goes) until it has something it can execute, runs that, then goes back and reads more until it has another executable chunk, etc. It does not go back and re-read the same chunk to see if it's changed, so it'll only ever notice changes in parts of the file it hasn't read yet.
In example B, it has to read and parse the entire while true ... done loop before it can start executing it. Therefore, changes in that part (or possibly before it) will not be noticed once the loop has been read & started executing. Changes in the file after the loop would be noticed after the loop exited (if it ever did).
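A rough sketch of that last sentence (the script name and timings are mine, and exactly when bash notices an appended line can depend on its read-ahead buffering):
cat > finite.sh <<'EOF'
for i in 1 2 3; do
echo "$i"
sleep 1
done
EOF
chmod +x finite.sh
./finite.sh &
echo 'echo appended while the loop ran' >> finite.sh   # lands after everything bash has read so far
wait    # prints 1 2 3, then "appended while the loop ran" once the loop is done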
See my answer to this question (and Don Hatch's comment on it) for more info about this.

Related

bash while loop through a file doesn't end when the file is deleted

I have a while loop in a bash script:
Example:
while read LINE
do
echo $LINE >> $log_file
done < ./sample_file
My question is: why, when I delete sample_file while the script is running, doesn't the loop end? I can see that the log_file is still being updated. How does the loop continue when there is no input?
In unix, a file isn't truly deleted until the last directory entry for it is removed (e.g. with rm) and the last open file handle for it is closed. See this question (especially MarkR's answer) for more info. In the case of your script, the file is opened as stdin for the while read loop, and until that loop exits (or closes its stdin), rming the file will not actually delete it off disk.
You can see this effect pretty easily if you want. Open three terminal windows. In the first, run the command cat >/tmp/deleteme. In the second, run tail -f /tmp/deleteme. In the third, after running the other two commands, run rm /tmp/deleteme. At this point, the file has been unlinked, but both the cat and tail processes have open file handles for it, so it hasn't actually been deleted. You can prove this by typing into the first terminal window (running cat), and every time your hit return, tail will see the new line added to the file and display it in the second window.
The file will not actually be deleted until you end those two commands (Control-D will end cat, but you need Control-C to kill tail).
See "Why file is accessible after deleting in unix?" for an excellent explanation of what you are observing here.
In short...
Underlying rm and any other command that may appear to delete a file there is the system call unlink. And it's called unlink, not remove or deletefile or anything similar, because it doesn't remove a file. It removes a link (a.k.a. directory entry) which is an association between a file and a name in a directory.
You can use the truncate command to destroy the actual contents (or shred if you need to be more secure), which would immediately halt the execution of your example loop.
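A rough sketch of the difference (assuming the question's loop is saved as ./loop.sh with log_file set, and that sample_file is big enough for the effect to be visible):
seq 1 1000000 > sample_file
./loop.sh &                 # starts copying lines into the log file
truncate -s 0 sample_file   # the loop's open handle now sees an empty file,
                            # so read hits end-of-file and the loop exits
# contrast with: rm sample_file, which only removes the name and lets the loop run on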
The moment the shell starts the while loop it has already opened sample_file, so from then on it does not matter whether the file's name still exists or not.
Test script:
$ cat test.sh
#!/bin/bash
while read line
do
echo $line
sleep 1
done < data_file
Test file:
$ seq 1 10 > data_file
Now run the script in one terminal; in another terminal, delete data_file. You will still see the numbers 1 to 10 printed by the script.

Lock a file in bash using flock and lockfile

I spent the better part of the day looking for a solution to this problem and I think I am nearing the brink... What I need to do in bash is: write one script that will periodically read your input and write it into a file, and a second script that will periodically print out the complete file, BUT only when something new gets written to it, meaning it will never print the same output twice in a row. The two scripts need to communicate by means of a lock, meaning script 1 will lock the file so that script 2 can't print anything from it, then script 1 will write something new into the file and unlock it (and then script 2 can print the updated file).
The only hints we got were to use flock and lockfile; we got no hints on how to use them, except that the problem MUST be solved with flock or lockfile.
Edit: when I said I was looking for a solution, I meant I tried every single combination of flock with those flags and I just couldn't get it to work.
I will write pseudocode of what I want to do. Note that this pseudocode is basically the same as how it is done in C... it's so simple, I don't know why everything has to be so complicated in bash.
script 1:
place a lock on file text.txt (no one else can read it or write to it)
read input
place that input into file ( not deleting previous text )
remove lock on file text.txt
repeat
script 2:
print out complete text.txt (but only if it is not locked; if it is locked, obviously you can't)
repeat
And since script 2 is repeating all the time, it should print the complete text.txt ONLY when something new was written to it.
I have about 100 other commands like flock that I have to learn in a very short time, and I spent a whole day on just this one. It would be kind of you to at least give me a hint. As for the man page...
I tried something like flock -x text.txt -c read > text.txt, and every other combination as well, but nothing works. It takes only one command and won't accept arguments. I don't even know why there is an option for a command. I just want it to place a lock on the file, write into it, and then unlock it. In C it only takes flock("text.txt", ..).
Let's look at what this does:
flock -x text.txt -c read > text.txt
First, it opens text.txt for write (and truncates all contents) -- before doing anything else, including calling flock!
Second, it tells flock to get an exclusive lock on the file and run the command read.
However, read is a shell builtin, not an external command -- so it can't be called by a non-shell process at all, mooting any effect that it might otherwise have had.
Now, let's try using flock the way the man page suggests using it:
{
flock -x 3 # grab a lock on file descriptor #3
printf "Input to add to file: " # Prompt user
read -r new_input # Read input from user
printf '%s\n' "$new_input" >&3 # Write new content to the FD
} 3>>text.txt # do all this with FD 3 open to text.txt
...and, on the read end:
{
flock -s 3 # wait for a read lock
cat <&3 # read contents of the file from FD 3
} 3<text.txt # all of this with text.txt open to FD 3
You'll notice some differences from what you were trying before:
The file descriptor used to grab the lock is in append mode (when writing to the end), or in read mode (when reading), so you aren't overwriting the file before you even grab the lock.
We run the read command (which, again, is a shell builtin, so it can only be run by the shell itself) directly in the shell, rather than telling the flock command to invoke it via the execve syscall (which, again, is impossible).
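Putting those pieces together, a rough sketch of the two looping scripts might look like this (the script names, the size-based check for "something new", and the one-second polling interval are illustrative choices, not anything required by flock). The writer:
#!/bin/bash
# writer: append user input to text.txt while holding an exclusive lock
while true; do
{
flock -x 3                          # block until we hold the lock
printf 'Input to add to file: ' >&2
IFS= read -r new_input || exit 0    # stop cleanly when input ends
printf '%s\n' "$new_input" >&3
} 3>>text.txt                       # FD 3 open to text.txt in append mode
done
...and the reader:
#!/bin/bash
# reader: print the whole file, but only when it has grown since last time
touch text.txt                      # make sure the file exists before opening it
last_size=-1
while true; do
{
flock -s 3                          # shared lock: wait out any writer
size=$(wc -c < text.txt)
if [ "$size" != "$last_size" ]; then
cat text.txt
last_size=$size
fi
} 3<text.txt
sleep 1
done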

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file, and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script 2 is what I posted, but then the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file, and for > but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why, but I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this I got the problem between 1 and 4 times for every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved. Although I'm still not sure why it would be a problem. I thought that the ssh command would not exit before all the remove commands were executed.
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. Redirecting to an existing file with > will then fail with an error instead of silently truncating it, which makes such a typo easy to spot.
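For example (the file name is arbitrary):
set -o noclobber
echo hi > output.txt     # first time: creates the file
echo hi > output.txt     # now fails: "cannot overwrite existing file"
echo hi >> output.txt    # appending is still allowed
echo hi >| output.txt    # >| deliberately overrides noclobber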
Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it (look for commands using this file without >>). Or it is used by some command both as input and output, and the two step on each other (look for the file used with <).
Lastly, you could have a race condition with a command writing to the file in the background, started before that echo.
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely holds. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash sends the program's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
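A quick way to see what the outer command actually receives (the exact /dev/fd number varies):
echo <(true)               # prints a path such as /dev/fd/63
seq 3 | tee >(wc -l >&2)   # >(...) example: tee also feeds a copy of its input to wc -l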
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs):
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution using the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The docker command outputs its help to stderr (i.e. file descriptor 2)
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1, which means run the command one before this one (i.e. the last command)
!-2 means run the command two before this one
2>&1 means send file descriptor 2's output (stderr) to the same place as file descriptor 1's output (stdout)
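Spelled out without history expansion, that is equivalent to:
diff <(docker attach 2>&1) <(docker attach --help 2>&1)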
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of the file itself. With ordinary command substitution, $(command) is replaced with the output of the command.
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare) since bcomp launches the comparison and waits for it to complete. If you use bcompare, that launches comparison and immediately exits due to which the temporary files created to store the output of the commands disappear.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following, which is the difference between <(...) and =(...):
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution based on how you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
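You can watch the inodes while doing this with ls -i:
ls -i script              # note the running script's inode
mv script script-old      # same inode, now under the old name
cp script-old script      # a brand-new inode under the original name
ls -i script script-old   # two different inodes; the shell still reads the old one
rm script-old             # safe: the shell's open handle keeps the data alive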
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
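A minimal sketch of such a wrapper, assuming the read-the-whole-script-first approach mentioned above (so the command-line length caveat from the question still applies):
#!/bin/bash
# hypothetical /usr/local/fastbash: slurp the script into memory, then run it,
# so later edits to the file on disk cannot affect this invocation
script_file=$1
shift
exec bash -c "$(cat "$script_file")" "$script_file" "$@"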
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self-contained way to make a script resistant to this problem is to have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
rm -f /tmp/copy-$$
cp "$0" /tmp/copy-$$
exec /tmp/copy-$$ "$@"
echo "error copying and execing script"
exit 1
fi
rm $0
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)
