Replace file only if not being accessed in bash

My requirement is to replace file only when it is not being accessed. I have following snippet:
if [ -f file ]
then
    while true
    do
        if [ -n "$(fuser "file")" ]
        then
            echo "file is in use..."
        else
            echo "file is free..."
            break
        fi
    done
fi
{
    flock -x 3
    mv newfile file
} 3>file
But I doubt that I am handling concurrency properly. Please give some insight and a possible way to achieve this.
Thanks.

My requirement is to replace file only when it is not being accessed.
Getting requirements right can be hard. In case your actual requirement is the following, you can boil down the whole script to just one command.
My guess on the actual requirement (not as strict as the original):
Replace file without disturbing any programs reading/writing file.
If this is the case, you can use a very neat behavior: on Unix-like systems, file descriptors always point to the file (not the path) for which they were opened. You can move or even delete the corresponding path. See also How do the UNIX commands mv and rm work with open files?
Example:
Open a terminal and enter
i=1; while true; do echo $((i++)); sleep 1; done > file &
tail -f file
The first command writes output to file and runs in the background. The second command reads the file and continues to print its changing content.
Open another terminal and move or delete file, for instance with
mv file file2
echo overwritten > otherFile
mv otherFile file2
rm file2
echo overwritten > file
echo overwritten > file2
While executing these commands, have a look at the output of tail -f in the first terminal: it won't be affected by any of them, and you will never see overwritten.
Solution For New Requirement:
Because of this behavior you can replace the whole script with just one mv command:
mv newfile file
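If newfile might be created on a different filesystem, the mv degrades to a copy and stops being atomic. A minimal sketch of a safer variant, assuming the newfile/file names from the question:

# Sketch, not the answer's exact command: stage the new content in the target
# directory so the final mv is an atomic rename (a cross-filesystem mv would
# copy-and-delete instead). Note the result gets mktemp's default permissions.
tmp=$(mktemp "$(dirname file)/file.XXXXXX")
cp newfile "$tmp"
mv "$tmp" file    # readers that already have the old file open are undisturbed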

Consider lsof.
mvWhenClear() {
    local delay=${delay:-1}                    # seconds between checks (was undefined)
    while [[ -f "$1" ]] && lsof "$1" >/dev/null # lsof exits 0 while something holds $1 open
    do
        sleep "$delay"
    done
    mv "$1" "$2"    # still allows a race condition
}
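A possible way to call it, assuming the newfile/file names from the question and the delay variable the function reads:

# Hypothetical usage: wait until nothing holds newfile open, then rename it.
# To wait on the destination instead, check "$2" in the loop.
delay=5                  # poll every 5 seconds
mvWhenClear newfile file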

Related

How to check if a file exists or not and create/delete it if it does/does not exist in shell

In shell, I want to check if a file exists or not then create if it doesn't exist or delete if it exists. For this I need a one liner and am trying to do something like:
ls | awk '\filename\' <if exist delete else create>
I need the ls because my actual problem has some command that outputs a list of strings that need to be piped to awk and then possibly touch/mkdir.
#!/bin/bash
if [ -n "$1" ] && [ -f "$1" ]   # $1 is the input filename; -f checks that it is a regular file
then
    rm "$1"     # the file exists: delete it
else
    touch "$1"  # the file does not exist: create it
fi
Save the file as filecreator.sh.
Make it executable with chmod a+rx filecreator.sh.
Run the script with ./filecreator.sh yourfile.extension
You can see the file in your directory.
Using oc projects and oc new-project instead of ls and touch, as indicated in a comment:
oc projects |
while read -r proj; do
    if [ -d "$proj" ]; then
        rm -rf "$proj"
    else
        oc new-project "$proj"
    fi
done
I don't think there is a useful way to write this as a one-liner. If you like, you can replace the newlines with semicolons, except after then and else.
You really should put your actual requirements in the question itself. ls is a superbly useless example because it cannot list a file which doesn't already exist, and you should not use ls in scripts at all.
rm yourfile 2>/dev/null || touch yourfile
If the file existed before, rm will succeed and erase the file, and the touch won't be executed. You end up with no file afterwards.
If the file did not exist before, rm will fail (but the error message is not visible, since it is directed to the bitbucket), and due to the non-zero exit code of rm, the touch will be executed. You end up with an empty file afterwards.
Caveat: If the file exists, but you don't have permissions to remove it, you won't notice this error, due to the redirection of stderr. Hence, for debugging and later diagnosis, it might be better to redirect stderr to some file instead.
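A hedged variant of the same one-liner that keeps rm's error output for later diagnosis (the log file name is just an example):

# Same idea, but errors are appended to a log instead of being discarded.
# "rm_errors.log" is an arbitrary example name.
rm yourfile 2>>rm_errors.log || touch yourfile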

loop over files with bash script

I have a js script that converts kml location history files to csv. I wrote a bash script to loop through all the files in a directory. The script works when I execute it from the command line: ./index.js filename.kml > filename.csv
But nothing happens when I execute the bash file that is supposed to loop through all files.
I know it probably is a simple mistake but I can't spot it.
#!/bin/bash
# file: foo.sh
for f in *.kml; do
    test -e "${f%.kml}" && continue
    ./index.js "$f" > "${f%.kml}.csv"
done
Just delete the "&& continue". If I'm not wrong, you're skipping the current iteration with the "continue" keyword; that's why nothing happens.
EDIT
Also, you shouldn't need to test whether the file exists; the for loop is enough to be sure that "f" will be a valid .kml file. Anyway, if you still want to do it, you have to do it like this:
#!/bin/bash
# file: foo.sh
for f in *.kml; do
    if [ -e "$f" ]; then
        ./index.js "$f" > "$f.csv"
    fi
done
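If the intent of the original && continue was to skip files that have already been converted, a hedged sketch of that variant (the output naming is an assumption):

#!/bin/bash
# Skip any .kml whose .csv output already exists; convert the rest.
# The "${f%.kml}.csv" output name is an assumption.
for f in *.kml; do
    out="${f%.kml}.csv"
    [ -e "$out" ] && continue
    ./index.js "$f" > "$out"
done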

Checking if a Variable File is in another directory

I'm looking to check if a variable file is in another directory, and if it is, stop the script from running any further. So far I have this:
#! /bin/bash
for file in /directory/of/variable/file/*.cp;
do
    test -f /directory/to/be/checked/$file;
    echo $?
done
I ran an echo of $file and see that it includes the full path, which would explain why my test doesn't see the file, but I am at a loss for how to move forward so that I can check.
Any help would be greatly appreciated!
Thanks
I think you want
#! /bin/bash
for file in /directory/of/variable/file/*.cp ; do
    newFile="${file##*/}"
    if test -f /directory/to/be/checked/"$newFile" ; then
        echo "/directory/to/be/checked/$newFile already exists, updating ..."
    else
        echo "/directory/to/be/checked/$newFile not found, copying ..."
    fi
    cp -i "$file" /directory/to/be/checked/"$newFile"
done
Note that you can replace cp -i with mv -i and move the file, leaving no file left behind in /directory/of/variable/file/.
The -i option means interactive: if the file is already there, the command will ask something like overwrite /directory/to/be/checked/"$newFile"? to which you must reply y. This will only happen if the file already exists in the new location.
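For the move variant mentioned above, only the copy line changes; a one-line illustration using the same names:

# Moves instead of copies, leaving nothing behind in the source directory.
mv -i "$file" /directory/to/be/checked/"$newFile"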
IHTH
The command basename will give you just the file (or directory) name without the rest of the path.
#! /bin/bash
for file in /directory/of/variable/file/*.cp;
do
    test -f /directory/to/be/checked/"$(basename "$file")";
    echo $?
done
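For comparison, the "${file##*/}" expansion from the previous answer does the same job without an external command; a small illustrative sketch (the example path is made up):

# Illustrative only: both lines strip the directory part of the path.
file=/directory/of/variable/file/example.cp   # made-up example path
echo "$(basename "$file")"    # -> example.cp (external command)
echo "${file##*/}"            # -> example.cp (shell parameter expansion)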

Shell Script to write to a file up to a certain point and then keep overwriting the file

I am trying to write a shell script which will write the output of another script to a file, keep writing to it up to a certain point, and then overwrite the file so that the file size stays within a well-bounded range.
while true
do
    ./runscript.sh > test.txt
    sleep 1
done
I have tried using an infinite loop with sleep so that it keeps overwriting the file.
But it shows a different behaviour: while the command is running, the file size keeps increasing; only when I stop the command does the file size go down.
How can I keep overwriting the same file and keep its size bounded at the same time?
Use truncate -s <size> <file> to shrink the file when its size goes beyond your boundary.
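A minimal sketch of that idea, assuming the test.txt name from the question and an arbitrary 100000-byte limit:

# Hedged sketch: append continuously, but cut the file back down whenever it
# grows past MAX bytes. The file name and limit are assumptions.
MAX=100000
while true; do
    ./runscript.sh >> test.txt
    [ "$(wc -c < test.txt)" -gt "$MAX" ] && truncate -s "$MAX" test.txt
    sleep 1
done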
I would do it with the script below:
#!/bin/bash
Logfile=test.txt
minimumsize=100000                    # define the size limit you want, in bytes
actualsize=$(wc -c <"$Logfile")
if [[ $actualsize -ge $minimumsize ]]; then
    rm -f "$Logfile"                  # start over once the limit is reached
    sh ./runscript.sh >> "$Logfile"
else
    #current_date_time="`date +%Y%m%d%H%M%S`"; #add this to runscript.sh to track when it was written
    #echo "********Added at :$current_date_time ********" #add this to runscript.sh to track when it was written
    sh ./runscript.sh >> "$Logfile"
fi
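To keep this running continuously, the size check can be wrapped in the question's own loop; a hedged sketch using the same names and limit:

#!/bin/bash
# Hedged sketch combining the question's loop with the size check above.
Logfile=test.txt
minimumsize=100000
while true; do
    if [ -f "$Logfile" ] && [ "$(wc -c <"$Logfile")" -ge "$minimumsize" ]; then
        : > "$Logfile"                # empty the log instead of deleting it
    fi
    ./runscript.sh >> "$Logfile"
    sleep 1
done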
I can try with the option of generating a new file once the old one
is full. … How can I make the
script generate the new file and write to it?
The following script, let's call it chop.sh, does that; you use it by feeding the output to it, specifying the desired file size and name as arguments, e.g. ./runscript.sh | chop.sh 999999 test.txt.
File=${2?usage: $0 Size File}
Size=$1
while
    set -- `ls -l "$File" 2>/dev/null`          # 5th column is the file size
    [ "$5" -lt "$Size" ] || mv "$File" "$File"-old
    read -r && echo "$REPLY" >>"$File"
do :
done
The old (full) file would then be named test.txt-old.

shell script to remove a file if it already exists

I am working on some stuff where I am storing data in a file.
But each time I run the script it gets appended to the previous file.
I want help on how I can remove the file if it already exists.
Don't bother checking if the file exists, just try to remove it.
rm -f /p/a/t/h
# or
rm /p/a/t/h 2> /dev/null
Note that the second command will fail (return a non-zero exit status) if the file did not exist, but the first will succeed owing to the -f (short for --force) option. Depending on the situation, this may be an important detail.
But more likely, if you are appending to the file, it is because your script is using >> to redirect something into the file. Just replace >> with >. It's hard to say since you've provided no code.
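A tiny illustration of the difference (the script and file names are placeholders):

# Hypothetical names, for illustration only.
./myscript.sh >> data.txt   # appends: the file grows on every run
./myscript.sh >  data.txt   # truncates first: each run starts from an empty file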
Note that you can do something like test -f /p/a/t/h && rm /p/a/t/h, but doing so is completely pointless. It is quite possible that the test will return true yet /p/a/t/h will cease to exist before you try to remove it, or worse, the test will fail and /p/a/t/h will be created before you execute the next command, which expects it not to exist. Attempting this is a classic race condition. Don't do it.
Another one line command I used is:
[ -e file ] && rm file
You can use this:
#!/bin/bash
file="file_you_want_to_delete"
if [ -f "$file" ] ; then
    rm "$file"
fi
If you want to skip the explicit existence check, you can use a fairly simple command which deletes the file if it exists and does not throw an error if it does not:
rm -f xyz.csv
A one-liner shell script to remove a file if it already exists (based on Jindra Helcl's answer):
[ -f file ] && rm file
or with a variable:
#!/bin/bash
file="/path/to/file.ext"
[ -f "$file" ] && rm "$file"
Something like this would work
#!/bin/sh
if [ -f FILE ] && [ -e FILE ]
then
    rm FILE
fi
-f checks if it's a regular file
-e checks if the file exist
Introduction to if for more information
EDIT: -e used with -f is redundant, so using -f alone should work too.
if [ $( ls <file> ) ]; then rm <file>; fi
Also, if you redirect your output with > instead of >>, it will overwrite the previous file.
So in my case I wanted to remove a FIFO file before I create it again, so this worked for me:
#!/bin/bash
file="/tmp/test"
rm -rf "$file" || true
mkfifo "$file"
|| true will keep the script going even if the file is not found.
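A hedged alternative sketch: create the FIFO only when it is not already there, instead of removing and recreating it (the path is the one from the answer):

# -p tests for a named pipe (FIFO); create it only if it is missing.
file="/tmp/test"
[ -p "$file" ] || mkfifo "$file"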
