What does -f mean in bash

I was looking at how to use runit to run gunicorn. In the shell script below, I don't know what -f $PID does:
#!/bin/sh
GUNICORN=/usr/local/bin/gunicorn
ROOT=/path/to/project
PID=/var/run/gunicorn.pid
APP=main:application
if [ -f $PID ]; then rm $PID; fi
cd $ROOT
exec $GUNICORN -c $ROOT/gunicorn.conf.py --pid=$PID $APP
Google is not much help here because it's hard to search for single-character flags.

Fortunately, the Bash Reference Manual is available online, at http://www.gnu.org/software/bash/manual/bashref.html. It's the first hit when you Google for "Bash manual". §6.4 "Bash Conditional Expressions" says:
-f file
True if file exists and is a regular file.

-f - file is a regular file (not a directory or device file)
Check this out for all file test operators:
http://tldp.org/LDP/abs/html/fto.html

The [ is the same as the test command, which lets you check various conditions. Run help test to see what the flags do. One thing to be careful with is spacing - [ needs a space after it.
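For instance (a quick illustration, not from the script in the question):
if [ -f /etc/passwd ]; then echo "regular file"; fi
# if [-f /etc/passwd]; then ...   # would fail: the shell looks for a command named '[-f'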

-f checks if the file exists and is a regular file.

[ -f "$var" ]
Checks whether $var names an existing regular file. A symbolic link that points to a regular file passes this test too.
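A quick way to see this (a throwaway example in /tmp, not part of the original question):
touch /tmp/regular
mkdir -p /tmp/adir
ln -sf /tmp/regular /tmp/alink
[ -f /tmp/regular ] && echo "regular: yes"   # prints
[ -f /tmp/adir ] && echo "dir: yes"          # prints nothing: directories fail -f
[ -f /tmp/alink ] && echo "link: yes"        # prints: the link resolves to a regular file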

Related

"Standardized" docstring/self-documentation of bash scripts

Background
Python scripts, for example, can have several "levels" of documentation via docstrings. What's neat about them is that they can be defined per function, per method, per class, and, most importantly in the context of my question, per file. For example, the top of a file may look like this:
#!/usr/bin/env python
"""
#brief A script that does cool stuff.
"""
What's especially useful about this feature is that it's easy to extract and print at run-time.
Question
Do bash scripts support such a feature? That is, is there a "standardized" approach to writing a file-level block of documentation (a human-readable description of the script's purpose, usage syntax, etc.) so that another script can easily parse/extract that information? My goal is to create several debug scripts that are self-documenting, and if there's already a standard or de-facto best way to do this, I'd like to avoid reinventing the wheel.
The "File Header" section of Google's Shell Style Guide is one way to add a 'docstring' to your bash scripts.
Basically, the answer is to use #, rather than quotes like you would with Python.
You can do this easily in Bash; it is a little trickier if you need to stay compatible with POSIX-only shells like /bin/sh or BusyBox-based systems like Alpine.
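A minimal sketch of that approach (the script name, description text, and the sed-based extraction are my own illustration, not something prescribed by the style guide):
#!/bin/bash
#
# Does cool stuff with debug output.
#
# Usage: ./coolstuff.sh [logfile]

# Print the header comment block (line 2 up to the first blank line) as help text.
if [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
    sed -n '2,/^$/s/^# *//p' "$0"
    exit 0
fi

echo "doing cool stuff..."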
The Linux Documentation Project has some great examples.
http://tldp.org/LDP/abs/html/here-docs.html
Yet another twist of this nifty trick makes "self-documenting" scripts possible.
Example 19-12. A self-documenting script
#!/bin/bash
# self-document.sh: self-documenting script
# Modification of "colm.sh".

DOC_REQUEST=70

if [ "$1" = "-h" -o "$1" = "--help" ]   # Request help.
then
  echo; echo "Usage: $0 [directory-name]"; echo
  sed --silent -e '/DOCUMENTATIONXX$/,/^DOCUMENTATIONXX$/p' "$0" |
  sed -e '/DOCUMENTATIONXX$/d'; exit $DOC_REQUEST; fi

: <<DOCUMENTATIONXX
List the statistics of a specified directory in tabular format.
---------------------------------------------------------------
The command-line parameter gives the directory to be listed.
If no directory specified or directory specified cannot be read,
then list the current working directory.
DOCUMENTATIONXX

if [ -z "$1" -o ! -r "$1" ]
then
  directory=.
else
  directory="$1"
fi

echo "Listing of "$directory":"; echo

(printf "PERMISSIONS LINKS OWNER GROUP SIZE MONTH DAY HH:MM PROG-NAME\n" \
; ls -l "$directory" | sed 1d) | column -t

exit 0
Using a cat script is an alternate way of accomplishing this.
DOC_REQUEST=70

if [ "$1" = "-h" -o "$1" = "--help" ]   # Request help.
then                                    # Use a "cat script" . . .
  cat <<DOCUMENTATIONXX
List the statistics of a specified directory in tabular format.
---------------------------------------------------------------
The command-line parameter gives the directory to be listed.
If no directory specified or directory specified cannot be read,
then list the current working directory.
DOCUMENTATIONXX
  exit $DOC_REQUEST
fi
A slightly more elegant example using functions to handle the documentation and error messages.
#!/bin/sh
usage() {
    cat << EOF
Usage:
    $0 [-u [username]] [-p]
Options:
    -u <username> : Optionally specify the new username to set password for.
    -p : Prompt for a new password.
EOF
}

die() {
    echo
    echo "$1, so giving up. Sorry."
    echo
    exit 2
}

if [ -z "$USER" ] ; then
    die "Could not identify the existing user"
fi

if $PSET ; then
    passwd $USER || die "Busybox didn't like your password"
fi
https://github.com/jyellick/mficli/blob/master/util/changecreds.sh
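The excerpt above leaves out the option parsing that sets $USER and $PSET and actually calls usage; one plausible way to wire it up (an illustrative sketch, not a quote from the linked file) is with getopts:
PSET=false
while getopts "u:ph" opt; do
    case "$opt" in
        u) USER=$OPTARG ;;
        p) PSET=true ;;
        h) usage; exit 0 ;;
        *) usage; exit 1 ;;
    esac
done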
There is no standard for docstrings in bash. It's always nice to have man pages, though (e.g. https://www.cyberciti.biz/faq/linux-unix-creating-a-manpage/), or info pages (https://unix.stackexchange.com/questions/164443/how-to-create-info-documentation).

Shell script to remove a file if it already exists

I am working on a script that stores data in a file, but each time I run the script the new output gets appended to the existing file. How can I remove the file if it already exists?
Don't bother checking if the file exists, just try to remove it.
rm -f /p/a/t/h
# or
rm /p/a/t/h 2> /dev/null
Note that the second command will fail (return a non-zero exit status) if the file did not exist, but the first will succeed owing to the -f (short for --force) option. Depending on the situation, this may be an important detail.
But more likely, if you are appending to the file it is because your script is using >> to redirect something into the file. Just replace >> with >. It's hard to say since you've provided no code.
Note that you can do something like test -f /p/a/t/h && rm /p/a/t/h, but doing so is completely pointless. It is quite possible that the test will return true but the /p/a/t/h will fail to exist before you try to remove it, or worse the test will fail and the /p/a/t/h will be created before you execute the next command which expects it to not exist. Attempting this is a classic race condition. Don't do it.
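To illustrate the redirection point above (a toy example, not the asker's script):
echo "run data" >> /tmp/out.log   # appends, so the file grows on every run
echo "run data" > /tmp/out.log    # truncates first, so each run starts with a fresh file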
Another one line command I used is:
[ -e file ] && rm file
You can use this:
#!/bin/bash
file="file_you_want_to_delete"
if [ -f "$file" ] ; then
    rm "$file"
fi
If you want to skip checking whether the file exists, you can use a single command that deletes the file if it exists and does not throw an error if it doesn't:
rm -f xyz.csv
A one-liner to remove a file if it already exists (based on Jindra Helcl's answer):
[ -f file ] && rm file
or with a variable:
#!/bin/bash
file="/path/to/file.ext"
[ -f "$file" ] && rm "$file"
Something like this would work
#!/bin/sh
if [ -f FILE ]
then
    rm FILE
fi
-f checks that FILE is a regular file
-e checks that the file exists
See Introduction to if for more information.
EDIT: -e combined with -f is redundant, so using -f alone (as above) works.
if [ $( ls <file> ) ]; then rm <file>; fi
Also, if you redirect your output with > instead of >> it will overwrite the previous file
So in my case I wanted to remove a FIFO file before I create it again, so this worked for me:
#!/bin/bash
file="/tmp/test"
rm -rf "$file" || true
mkfifo "$file"
|| true keeps the script going even if the rm fails (for example, if the file is not found).
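An alternative that skips the delete entirely is to test for the FIFO with -p (a sketch using the same placeholder path; note mkfifo will still fail if the path exists as something other than a FIFO):
file="/tmp/test"
[ -p "$file" ] || mkfifo "$file"   # create the FIFO only if it isn't already there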

Shell script file existence test fails for broken symbolic link

This Bourne shell script fails to detect the existence of a broken symbolic link: the test returns false and nothing is echoed, yet /usr/bin/firefox.real does exist, as a broken symbolic link. Why?
FIREFOX="/usr/bin/firefox.real"
[ -e "$FIREFOX" ] && echo "exists"
The reason is that internally, bash calls stat(), not lstat(), when you test with -e, so it checks the file the link points to, not the symbolic link itself.
Use -h to check for existence of a link (even broken):
[ -h "$FIREFOX" ] && echo "exists"
As per man test:
-h FILE
FILE exists and is a symbolic link (same as -L)
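A quick way to see both tests side by side (a throwaway example, not the asker's setup):
ln -sf /nonexistent/target /tmp/broken
[ -e /tmp/broken ] && echo "-e says it exists"   # prints nothing: -e follows the link and finds no target
[ -h /tmp/broken ] && echo "-h says it exists"   # prints: the link itself is there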

Difference between file tests in Bash

I am troubleshooting an existing Bash script and in the script it has two tests:
if [ ! -s <file_location> ] ; then
    # copy the file to the file_location
    if [ -s <file_location> ] ; then
        # operate on the file
    fi
fi
According to the Bash Tutorial, -s tests if the file is not of zero size. Would it be better to replace the ! -s test with a -e ? I could understand the second, nested test being a -s but the first one looks like it could be replaced with -e. What is the advantage here of ! -s vs -e? Am I missing something?
If the file exists but is empty, a plain existence test would pass even though the file is likely useless. Testing ! -s triggers the copy whenever the file is missing or empty, so by the time the inner -s test runs, the file should be present and actually contain content.
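A small demonstration of the difference (throwaway files, not the script under discussion):
touch /tmp/empty
echo "data" > /tmp/nonempty
[ -e /tmp/empty ] && echo "empty file passes -e"        # prints
[ -s /tmp/empty ] && echo "empty file passes -s"        # prints nothing: zero size
[ -s /tmp/nonempty ] && echo "nonempty file passes -s"  # prints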

Shell Script to load multiple FTP files

I am trying to upload multiple files from one folder to an FTP site and wrote this script:
#!/bin/bash
for i in '/dir/*'
do
if [-f /dir/$i]; then
HOST='x.x.x.x'
USER='username'
PASSWD='password'
DIR=archives
File=$i
ftp -n $HOST << END_SCRIPT
quote USER $USER
quote PASS $PASSWD
ascii
put $FILE
quit
END_SCRIPT
fi
It is giving me following error when I try to execute:
username#host:~/Documents/Python$ ./script.sh
./script.sh: line 22: syntax error: unexpected end of file
I can't seem to get this to work. Any help is much appreciated.
Thanks,
Mayank
It's complaining because your for loop does not have a done marker to indicate the end of the loop. You also need more spaces in your if:
if [ -f "$i" ]; then
Recall that [ is actually a command, and it won't be recognized if it doesn't appear as such.
Also, if you single-quote your glob in the for line like that, it won't be expanded: use no quotes there, but double quotes when you use $i. You probably also don't want to prepend /dir/ when you use $i, since the glob already includes it.
If I'm not mistaken, ncftp can take wildcard arguments:
ncftpput -u username -p password x.x.x.x archives /dir/*
If you don't already have it installed, it's likely available in the standard repo for your OS.
First, the literal, fixing-your-script answer:
#!/bin/bash
# no reason to set variables that don't change inside the loop
host='x.x.x.x'
user='username'
password='password'
dir=archives

for i in /dir/*; do          # no quotes if you want the wildcard to be expanded!
  if [ -f "$i" ]; then       # need double quotes and whitespace here!
    file=$i
    ftp -n "$host" <<END_SCRIPT
quote USER $user
quote PASS $password
ascii
put $file $dir/$file
quit
END_SCRIPT
  fi
done
Next, the easy way:
lftp -e 'mput -a *.i' -u "$user,$password" "ftp://$host/"
(yes, lftp expands the wildcard internally, rather than expecting this to be done by the outer shell).
First of all, my apologies for not making myself clear in the question. My actual task was to copy a file from a local folder to an SFTP site and then move the file to an archive folder. Since the SFTP site is hosted by a vendor, I cannot use key sharing (a vendor limitation). Also, scp will prompt for a password when used in a shell script, so I have to use sshpass. sshpass is in the Ubuntu repo; for CentOS it needs to be installed from here
This thread and How to run the sftp command with a password from Bash script? gave me a better understanding of how to write the script, and I will share my solution here:
#!/bin/bash
for i in /dir/*; do
  if [ -f "$i" ]; then
    file=$i
    export SSHPASS=password
    sshpass -e sftp -oBatchMode=no -b - user@ftp.com << !
cd foldername/foldername
put $file
bye
!
    mv "$file" /somedir/test
  fi
done
Thanks everyone for all the responses!
--Mayank
