When does a file check in Linux fail - shell

I am using a -f check in my code to see whether a particular file is present.
I suspect that sometimes (maybe in 1 out of 10 cases) it doesn't work, as I see some strange errors in that situation.
The code I have is: if [-f /folder/file] do something
else remove something.
In my deployment, /folder/file is always present, so the above file check should always succeed, but in some very rare cases I see that "remove something" gets called instead, which is not right: "remove something" should not get called if /folder/file is present.
If /folder/file is present, are there cases where a -f check can still fail? For instance, if either the folder or the file is read-only, or because of permissions?

You must add a space after the [, i.e. between [ and -f:
if [ -f "/folder/file" ] ; then
    echo "do something"
else
    echo "do something else"
fi
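Beyond the missing space, there is one situation where the question's scenario can genuinely occur: [ -f ] returns false whenever the underlying stat() call fails, not only when the file is absent. In particular, if a parent directory loses execute (search) permission for the user running the script, the file is still there but the check reports false. A minimal sketch to tell those cases apart, using the paths from the question:

if [ -f /folder/file ]; then
    echo "do something"
elif [ -d /folder ] && [ ! -x /folder ]; then
    # The file may well exist, but this user cannot search /folder,
    # so stat() fails and -f reports false.
    echo "cannot look inside /folder (missing execute permission)" >&2
else
    echo "remove something"
fi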

Related

Bash output to same line while preserving columns

Ok, this is going to seem like a basic question at first, but please hear me out. It's more complex than the title makes it seem!
Here is the goal of what I am trying to do. I would like to output to console similar to Linux boot.
Operating system is doing something... [ OK ]
Now this would seem to be obvious... Just use printf and set columns. Here is the first problem. The console needs to first print the action
Operating system is doing something...
Then it needs to actually do the work and then continue by outputting to the same line with the [ OK ].
This would again seem easy to do using printf. Simply do the work (in this case, call a function), return a conditional check, and then finish running the printf to output either [ OK ] or [ FAIL ]. This technically works, but I ran into LOTS of complications doing it this way, because the function must be called inside a subshell and I can't pass certain variables that I need. So printf is out.
How about just using echo -n? That should work, right? Echo the first part, run the function, then continue echoing to the same line based on the return value. The problem with this solution is that I can no longer preserve the column formatting that I get with printf.
Operating system is doing something... [ OK ]
Operating system is doing something else... [ OK ]
Short example... [ OK ]
Any suggestions on how I can fix any of these problems to get a working solution? Thanks.
Here is another way I tried with printf. This gives the illusion of working, but the method is actually flawed because it does not give you any progress indication, i.e. the function runs to completion before it ever prints out that the function is running. The "hey, I'm doing stuff" message prints at the same time as the "hey, I'm done" message. As a result, it's pointless.
VALIDATE $HOST; printf "%-50s %10s\n" " Validating and sanitizing input..." "$(if [ -z "$ERROR" ]; then echo "[$GREEN OK $RESET]"; else echo "[$RED FAIL $RESET] - $ERROR"; echo; exit; fi)"
There's no particular reason all the printf strings have to be printed together, unless you're worried some code you call is going to move the cursor.
Reordering your example:
printf "%-50s " " Validating and sanitizing input..."
VALIDATE $HOST
if [ -z "$ERROR" ]; then
printf "%10s\n" "[$GREEN OK $RESET]";
else
printf "%10s\n" "[$RED FAIL $RESET] - $ERROR"
echo
exit
fi
I have no idea what $ERROR contains or where it is supposed to display.
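If you want to reuse that pattern across many steps, one possible wrapper is sketched below. It assumes each step reports success through its exit status rather than an $ERROR variable, and the run_step name and 50-column width are illustrative choices, not part of the original answer. Because the command runs in the current shell (no subshell), any variables it sets remain visible afterwards.

run_step() {
    local label=$1
    shift
    printf "%-50s " "$label"      # print the action first, without a newline
    if "$@"; then                 # run the remaining arguments as a command
        printf "%10s\n" "[ OK ]"
    else
        printf "%10s\n" "[ FAIL ]"
        return 1
    fi
}

# Usage (hypothetical step):
# run_step " Validating and sanitizing input..." VALIDATE "$HOST"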

How to check if PDF files are online?

I would like to iterate through a number of PDFs, starting from 18001.pdf up to N.pdf (adding 1 to the basename each time), and stop the loop as soon as a file is not available online. Below is the code that I guess is closest to what a solution might look like, but several things are not working properly; the command in the while condition causes a syntax error, for example.
#!/bin/bash
path=http://dip21.bundestag.de/dip21/btp/18/
n=18001
while [ wget -q --spider $path$n.pdf ]
do
n=$(($n+1))
done
echo $n
HST - my question is not about debugging this specific code - it mostly serves the purpose of illustrating what I would like to do. Then again, I would appreciate a solution using a loop and wget.
If you want to test the success of a command, don't put it inside [ -- that's used to test the value of a conditional expression.
while wget -q --spider $path$n.pdf
do
...
done
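Putting that together with the rest of the question's script, the whole loop might look like this; the URL and starting number are the ones from the question, with quoting added around the expansions:

#!/bin/bash
path=http://dip21.bundestag.de/dip21/btp/18/
n=18001
# wget --spider only checks that the URL exists; -q keeps it quiet.
# The loop ends on the first number whose PDF is not found.
while wget -q --spider "$path$n.pdf"
do
    n=$((n + 1))
done
echo "$n"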

Creating a 'yes' or 'no' menu in UNIX bash shell scripting

I'm currently writing a script, and at one point I want it to check whether a file already exists. If the file doesn't exist, then it should do nothing. However, if the file does exist, I want a 'y' or 'n' (yes or no) menu to appear. It should ask "Do you want to overwrite this file?".
So far I've tried writing something similar to this. Take into account that a function called
therestore
is defined before this point; I want that function to run if they type "y". Anyway, this is what I tried:
If [ -f directorypathANDfilename ] ; then
read -p "A file with the same name exists, Overwrite it? Type y/n?" yesorno
case $yesorno in
y*) therestore ;;
n*) echo "File has not been restored" ;;
esac
fi
For some reason though, the menu always pops up, even if the file DOESN'T exist, and it doesn't restore properly if I type yes! (But I know the "therestore" function works fine, because I've tested it plenty of times.)
Apologies for the long-winded question. If you need any more details let me know - thanks in advance!
Does your script even run? It doesn't look like a valid bash script to me. If is not a valid keyword, but if is. Also, tests go inside square brackets [ ]; those are not optional. Moreover, you forgot the closing fi.
And another thing, it's not quite clear to me what you're testing for. Is directorypathANDfilename a variable? In that case you have to reference it with the $.
The snippet would probably work better like this:
#!/bin/bash
if [ -f "$directorypathANDfilename" ] ; then
read -p "A file with the same name exists, Overwrite it? Type y/n?" yesorno
case "$yesorno" in
y*) therestore ;;
n*) echo "File has not been restored" ;;
esac
fi
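If you also want to keep asking until the user actually types something starting with y or n, a slightly extended sketch follows; the while loop and the catch-all branch are additions for illustration, not part of the original answer, and therestore is assumed to be defined elsewhere, as in the question.

if [ -f "$directorypathANDfilename" ] ; then
    while true; do
        read -r -p "A file with the same name exists. Overwrite it? (y/n) " yesorno
        case "$yesorno" in
            [Yy]*) therestore; break ;;
            [Nn]*) echo "File has not been restored"; break ;;
            *)     echo "Please answer y or n." ;;
        esac
    done
fi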

How to parametrize verbosity of debug output (BASH)?

During the process of writing a script, I will use the command's output in varying ways, and to different degrees, in order to troubleshoot the task at hand. For example, in this snippet, which reads an Application's icon resource and returns whether or not it has the typical .icns extension...
icns=`defaults read /$application/Contents/Info CFBundleIconFile`
if ! [[ $icns =~ ^(.*)(\.icns)$ ]]; then
echo -e $icns "is NOT OK YOU IDIOT! **** You need to add .icns to "$icns"."
else
echo -e $icns "\t Homey, it's cool. That shits got its .icns, proper."
fi
Inevitably, as each bug is squashed, and the stdout starts relating more to the actual function vs. the debugging process, this feedback is usually either commented out, silenced, or deleted - for obvious reasons.
However, if one wanted to provide a simple option - either hardcoded, or passed as a parameter - to optionally show some, all, or none of "this kind" of message at runtime, what is the best way to provide that simple functionality? I am looking to basically duplicate the functionality of set -x, but instead of a line-by-line rundown, it would only print the notifications that I had architected specifically.
It seems excessive to replace each and every echo with an if that checks for a debug=1|0, yet I've been unable to find a concise explanation of how to implement a getopts/getopt scheme (never can remember which one is the built-in), etc. in my own scripts. This little expression seemed promising, but there is very little documentation re: 2>&1 out there (although I'm sure this is key to this puzzle):
[ $DBG ] && DEBUG="" || DEBUG='</dev/null'
check_errs() {
    # Parameter 1 is the return code. Parameter 2 is text to display on failure.
    if [ "${1}" -ne "0" ]; then
        echo "ERROR # ${1} : ${2}"
    else
        echo "SUCCESS"
    fi
}
Any concise and reusable tricks to this trade would be welcomed, and if I'm totally missing the boat, or if it was a snake, and it would be biting me - I apologize.
One easy trick is to simply replace your "logging" echo command with a variable, i.e.
TRACE=:
if test "$1" = "-v"; then
    TRACE=echo
    shift
fi
$TRACE "You passed the -v option"
You can have any number of these for different types of messages if you wish so.
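Since the question also mentions getopts (that one is the shell builtin; getopt is the external program), here is a hedged sketch of wiring the same idea to a -v flag. The option letter and the debug function name are illustrative choices, not anything from the answer above.

#!/bin/bash
# Parse -v with the getopts builtin and route debug messages through one
# function, instead of wrapping every echo in its own if.
verbose=0
while getopts "v" opt; do
    case "$opt" in
        v) verbose=1 ;;
        *) echo "usage: $0 [-v]" >&2; exit 2 ;;
    esac
done
shift $((OPTIND - 1))

debug() {
    # Print only when -v was given; send to stderr so normal output stays clean.
    if [ "$verbose" -eq 1 ]; then
        echo "DEBUG: $*" >&2
    fi
}

debug "this only appears when the script is run with -v"
echo "normal output"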
You may check out a common open-source trace library with support for bash:
http://sourceforge.net/projects/utalm/
https://github.com/ArnoCan/utalm
WKR
Arno-Can Uestuensoez

How to deal with NFS latency in shell scripts

I'm writing shell scripts where quite regularly some stuff is written to a file, after which an application is executed that reads that file. I find that the network latency differs vastly throughout our company, so a simple sleep 2, for example, will not be robust enough.
I tried to write a (configurable) timeout loop like this:
waitLoop()
{
    local timeout=$1
    local test="$2"
    if ! $test
    then
        local counter=0
        while ! $test && [ $counter -lt $timeout ]
        do
            sleep 1
            ((counter++))
        done
        if ! $test
        then
            exit 1
        fi
    fi
}
This works for test="[ -e $somefilename ]". However, testing existence is not enough, I sometimes need to test whether a certain string was written to the file. I tried
test="grep -sq \"^sometext$\" $somefilename", but this did not work. Can someone tell me why?
Are there other, less verbose options to perform such a test?
Keep the test variable as a string, the way you already have it:
test="grep -sq \"^sometext$\" $somefilename"
The reason it isn't working is that the quotes inside the variable are not re-parsed when it is expanded; grep receives them as literal characters in the pattern. You need to run the string through eval:
if ! eval $test
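Applied to the waitLoop from the question, that means running $test through eval everywhere it is tested. A sketch, slightly simplified from the original function:

waitLoop()
{
    local timeout=$1
    local test="$2"
    local counter=0
    # eval re-parses the string, so quotes inside $test behave as intended.
    while ! eval "$test" && [ "$counter" -lt "$timeout" ]
    do
        sleep 1
        ((counter++))
    done
    eval "$test" || exit 1
}

# e.g. waitLoop 30 "grep -sq '^sometext$' $somefilename"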
I'd say the way to check for a string in a text file is grep.
What's your exact problem with it?
Also, you might adjust your NFS mount parameters to address the root cause. A sync might also help. See the NFS docs.
If you're wanting to use waitLoop in an "if", you might want to change the "exit" to a "return", so the rest of the script can handle the error situation (there's not even a message to the user about what failed before the script dies otherwise).
The other issue is that using "$test" to hold a command means the quoting inside it is not re-parsed when the variable is expanded; you only get word splitting. So if you say test="grep \"foo\" \"bar baz\"", rather than looking for the three-letter string foo in the file with the seven-character name bar baz, it will pass grep the five-character pattern "foo" (quote characters included) and split the file name on the space, giving the two names "bar and baz".
So you can either decide you don't need the shell magic, and set test='grep -sq ^sometext$ somefilename', or you can get the shell to handle the quoting explicitly with something like:
if /bin/sh -c "$test"
then
...
Try using the file modification time to detect when it is written without opening it. Something like
old_mtime=`stat --format="%Z" file`
# Write to file.
new_mtime=$old_mtime
while [[ "$old_mtime" -eq "$new_mtime" ]]; do
    sleep 2
    new_mtime=`stat --format="%Z" file`
done
This won't work, however, if multiple processes try to access the file at the same time.
I just had the exact same problem. I used a similar approach to the timeout wait that you include in your OP; however, I also included a file-size check. I reset my timeout timer if the file had increased in size since last it was checked. The files I'm writing can be a few gig, so they take a while to write across NFS.
This may be overkill for your particular case, but I also had my writing process calculate a hash of the file after it was done writing. I used md5, but something like crc32 would work, too. This hash was broadcast from the writer to the (multiple) readers, and the reader waits until a) the file size stops increasing and b) the (freshly computed) hash of the file matches the hash sent by the writer.
We have a similar issue, but for different reasons. We are reading a file which is sent to an SFTP server. The machine running the script is not the SFTP server.
What I have done is set it up in cron (although a loop with a sleep would work too) to do a cksum of the file. When the old cksum matches the current cksum (the file has not changed for the determined amount of time) we know that the writes are complete, and transfer the file.
Just to be extra safe, we never overwrite a local file before making a backup, and only transfer at all when the remote file has two cksums in a row that match, and that cksum does not match the local file.
If you need code examples, I am sure I can dig them up.
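For reference, a hedged sketch of that cksum idea; the path, the 30-second interval, and the plain loop (instead of cron) are illustrative, not the poster's actual setup:

# Wait until the file's cksum stops changing between two consecutive checks,
# then treat the transfer as complete.
file=/data/incoming/upload.dat     # illustrative path
old_sum=""
while true; do
    new_sum=$(cksum "$file" 2>/dev/null)
    if [ -n "$old_sum" ] && [ "$new_sum" = "$old_sum" ]; then
        break                      # unchanged since the last check
    fi
    old_sum=$new_sum
    sleep 30                       # illustrative interval
done
echo "file appears stable, safe to transfer"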
The shell was splitting your predicate into words. Grab it all with $* as in the code below:
#! /bin/bash
waitFor()
{
    local tries=$1
    shift
    local predicate="$*"
    while [ $tries -ge 1 ]; do
        (( tries-- ))
        if $predicate >/dev/null 2>&1; then
            return
        else
            [ $tries -gt 0 ] && sleep 1
        fi
    done
    exit 1
}
pred='[ -e /etc/passwd ]'
waitFor 5 $pred
echo "$pred satisfied"
rm -f /tmp/baz
(sleep 2; echo blahblah >>/tmp/baz) &
(sleep 4; echo hasfoo >>/tmp/baz) &
pred='grep ^hasfoo /tmp/baz'
waitFor 5 $pred
echo "$pred satisfied"
Output:
$ ./waitngo
[ -e /etc/passwd ] satisfied
grep ^hasfoo /tmp/baz satisfied
Too bad the typescript isn't as interesting as watching it in real time.
Ok...this is a bit whacky...
If you have control over the file: you might be able to create a 'named pipe' here.
So (depending on how the writing program works) you can monitor the file in a synchronized fashion.
At its simplest:
Create the named pipe:
mkfifo file.txt
Set up the sync'd receiver:
while :
do
    process.sh < file.txt
done
Create a test sender:
echo "Hello There" > file.txt
The 'process.sh' is where your logic goes: this will block until the sender has written its output. In theory the writer program won't need modifying...
WARNING: if the receiver is not running for some reason, you may end up blocking the sender!
Not sure it fits your requirement here, but might be worth looking into.
Or, to avoid synchronizing, try 'lsof'?
http://en.wikipedia.org/wiki/Lsof
Assuming that you only want to read from the file when nothing else is writing to it (i.e., the writing process has finished), you could check whether anything else still has a file handle open on it.
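A small sketch of that lsof idea; note the caveat that an open handle only tells you something has the file open right now, not whether the writer will open it again later. The path is illustrative:

file=/folder/file                  # illustrative path
# lsof exits with status 0 when at least one process has the file open.
if lsof "$file" >/dev/null 2>&1; then
    echo "something still has $file open, waiting..."
else
    echo "no open handles, safe to read"
fi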
