How can I see if a file exists using test -f and with a wildcard in the path?
This works:
test -f $PREFIX/lib/python3.6/some_file
This does not work (what am I doing wrong here?):
test -f $PREFIX/lib/python*/some_file
I need a non-zero exit code if the file does not exist.
Expand the wildcard into an array and then check the first element:
f=($PREFIX/lib/python*/some_file)
if [[ -f "${f[0]}" ]]; then echo "found"; else echo "not found"; fi
unset f
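Since the question asks for a non-zero exit code, note that the test itself already provides one; a minimal sketch, assuming $PREFIX is set as in the question:
f=("$PREFIX"/lib/python*/some_file)
[[ -f "${f[0]}" ]]
echo "exit status: $?"    # 0 if the file exists, 1 otherwise
This works because an unmatched glob is left as the literal pattern (unless nullglob is set), and -f on that literal string fails.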
You need to iterate over the files as test -f only works with a single file. I would use a shell function for that:
#!/bin/sh
# test-f.sh
test_f() {
    for fname; do
        if test -f "$fname"; then
            return 0
        fi
    done
    return 1   # no argument was an existing regular file
}
test_f "$@"
Then a test run could be
$ sh -x test-f.sh
$ sh -x test-f.sh doesnotexist*
$ sh -x test-f.sh *
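With the explicit return 1 after the loop, the exit status can be used directly for the original problem; a small sketch, assuming the test_f function is available in the current shell (e.g. the script is sourced) and $PREFIX is set as in the question:
if test_f "$PREFIX"/lib/python*/some_file; then
    echo "found"
else
    echo "not found"    # exit status of test_f was non-zero
fi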
From the man page of test:
-f file True if file exists and is a regular file
meaning that test -f <arg> expects arg to be a single file. If the wildcard in the path expands to more than one file, test receives several arguments and reports an error.
Try iterating when using a wildcard :)
Related
I want to write some wrappers around the sha1sum function in bash. From the manpage:
SHA1SUM(1) User Commands SHA1SUM(1)
NAME
sha1sum - compute and check SHA1 message digest
SYNOPSIS
sha1sum [OPTION]... [FILE]...
DESCRIPTION
Print or check SHA1 (160-bit) checksums.
With no FILE, or when FILE is -, read standard input.
How can I set up my wrapper so that it works in the same way? I.e.:
my_wrapper(){
# some code here
}
that could work both as:
my_wrapper PATH_TO_FILE
and
echo -n "blabla" | my_wrapper
I think this is somehow related to Redirect standard input dynamically in a bash script, but I'm not sure how to do it nicely.
Edit 1
I program quite defensively, so I use the following in my whole script:
# exit if a command fails
set -o errexit
# make sure to show the error code of the first failing command
set -o pipefail
# do not overwrite files too easily
set -o noclobber
# exit if try to use undefined variable
set -o nounset
Anything that works with that?
You can use this simple wrapper:
args=("$#") # save arguments into an array
set -o noclobber nounset pipefail errexit
set -- "${args[#]}" # set positional arguments from array
my_wrapper() {
    # ${1-} avoids an "unbound variable" error under `set -o nounset`
    [[ -f "${1-}" ]] && sha1sum "$1" || sha1sum
}
my_wrapper "$@"
Note that you can use:
my_wrapper PATH_TO_FILE
or:
echo -n "blabla" | my_wrapper
This code works for me; put it in a file named wrapper:
#!/bin/bash
my_wrapper(){
    if [[ -z "$1" ]]; then
        read PARAM
    else
        PARAM="$1"
    fi
    echo "PARAM:$PARAM"
}
Load the function into your environment:
. ./wrapper
Test the function with piped input:
root@51ce582167d0:~# echo hello | my_wrapper
PARAM:hello
Test the function with a parameter:
root@51ce582167d0:~# my_wrapper bybye
PARAM:bybye
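One caveat with the version above: read only captures the first line of piped input. A hedged variant that preserves multi-line stdin by reading it with cat instead of read (and uses ${1-} so it also works under set -o nounset):
my_wrapper(){
    if [[ -z "${1-}" ]]; then
        PARAM=$(cat)    # keep all lines from standard input
    else
        PARAM="$1"
    fi
    echo "PARAM:$PARAM"
}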
OK, so the answers posted here often work fine, but in my case, with the defensive programming options:
# exit if a command fails
set -o errexit
# exit if try to use undefined variable
set -o nounset
things do not work as well, so I am now using something like this:
digest_function(){
    # argument is either a filename or read from std input,
    # similar to the sha*sum utilities
    if [[ "$#" = "1" ]]
    then
        # this needs to be a file that exists
        if [ ! -f "$1" ]
        then
            echo "File not found! Aborting..."
            exit 1
        else
            local ARGTYPE="Filename"
            local PARAM="$1"
        fi
    else
        local ARGTYPE="StdInput"
        local PARAM=$(cat)
    fi
    if [[ "${ARGTYPE}" = "Filename" ]]
    then
        local DIGEST=$(sha1sum "${PARAM}")
    else
        local DIGEST=$(echo -n "${PARAM}" | sha1sum)
    fi
}
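A small usage sketch, under the assumption that the function above is extended to end with echo "${DIGEST}" (the posted snippet stores the digest but does not output it), and with the file name as a placeholder: sha1sum FILE appends the filename and echo -n ... | sha1sum appends "-", so a caller may want to keep only the first field:
digest=$(digest_function some_file | awk '{print $1}')    # keep only the hash
echo "${digest}"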
awkOut1="awkOut1.csv"
awkOut2="awkOut2.csv"
if [[ "$(-s $awkOut1)" || "$(-s $awkOut2)" ]]
The above 'if' check in my shell script gives me the error below:
-bash: -s: command not found
Suggestions anyone?
If you just have 2 files, I would do:
if [[ -e "$awkOut1" && ! -s "$awkOut1" ]] &&
[[ -e "$awkOut2" && ! -s "$awkOut2" ]]
then
echo both files exist and are empty
fi
Since [[ is a command, you can chain the exit statuses together with && to ensure they are all true. Also, within [[ (but not [), you can use && to chain tests together.
Note that -s tests for "file exists and is not empty", so I'm explicitly adding the -e tests: that way ! -s means "the file is empty" rather than "the file is missing or empty".
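A quick, hypothetical check of that logic (file names are placeholders):
touch empty.csv
echo data > full.csv
[[ -e empty.csv && ! -s empty.csv ]] && echo "empty.csv exists and is empty"
[[ -e full.csv && ! -s full.csv ]] || echo "full.csv is missing or non-empty"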
If you have more than 2:
files=( awkOut1.csv awkOut2.csv ... )
sum=$( stat -c '%s' "${files[#]}" | awk '{sum += $1} END {print sum}' )
if (( sum == 0 )); then
echo all the files are empty
fi
This one does not test for existence of the files.
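One possible extension (an assumption, not part of the original answer): with pipefail set, a missing file makes stat, and therefore the whole pipeline, fail, so the check also catches nonexistent files:
set -o pipefail
files=( awkOut1.csv awkOut2.csv )
if sum=$( stat -c '%s' "${files[@]}" | awk '{s += $1} END {print s}' ) && (( sum == 0 )); then
    echo "all the files exist and are empty"
fi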
You can use basic Bourne shell syntax and the test command (a single left bracket) to find out if either file is non-empty:
if [ -s "$awkOut1" -o -s "$awkOut2" ]; then
echo "One of the files is non-empty."
fi
When using single brackets, the -o means "or", so this expression is checking to see if awkOut1 or awkOut2 is non-empty.
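Note that POSIX marks -a and -o inside test as obsolescent; an equivalent form that avoids them chains two separate test commands:
if [ -s "$awkOut1" ] || [ -s "$awkOut2" ]; then
    echo "One of the files is non-empty."
fi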
If you have a whole directory full of files and you want to find out if any of them is empty, you could do something like this (again with basic Bourne syntax and standard utilities):
find . -empty | grep -q . && echo "some are empty" || echo "no file is empty"
In this line, find will print any files in the current directory (and recursively in any subdirectories) that are empty; grep will turn that into an exit status; and then you can take action based on success or failure to find empties. In an if statement, it would look like this:
if find . -empty | grep -q .; then
echo "some are empty"
else
echo "no file is empty"
fi
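If you only care about regular files directly in the current directory (no recursion), a variant using find's -maxdepth (available in GNU and BSD find):
if find . -maxdepth 1 -type f -empty | grep -q .; then
    echo "some files in the current directory are empty"
else
    echo "no file in the current directory is empty"
fi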
Here is one for GNU awk and the filefuncs extension. It checks all files given as parameters and exits as soon as it finds an empty one:
$ touch foo
$ awk '
#load "filefuncs" # enable
END {
for(i=1;i<ARGC;i++) { # all given files
if(stat(ARGV[i], fdata)<0) { # use stat
printf("could not stat %s: %s\n", # nonexists n exits
ARGV[i], ERRNO) > "/dev/stderr"
exit 1
}
if(fdata["size"]==0) { # file size check
printf("%s is empty\n",
ARGV[i]) > "/dev/stderr"
exit 2
}
}
exit
}' foo
Output:
foo is empty
I've been trying to loop through lists of files and apply an operation to them depending on their number. I first tried using the ls command, but the output is not a list:
data="data2/Scerevisiae-Pho4/"
results="results3/"
samples=( "GSM730517" "GSM730528" )
if [ ! -d $results ]
then
mkdir $results
fi
for sam in ${samples[@]}
do
if [ ! -d $results$sam ]
then
mkdir $results$sam
fi
echo -e "Reading $sam directory $data$sam... \n"
files=$(ls $data$sam)
echo ${files[0]}
done
outputs
echo ${files[@]}
SRR217304.sra SRR217305.sra
echo ${files[0]}
SRR217304.sra SRR217305.sra
I tried this different syntax:
files=($data$sam/*)
It worked fine locally, but when I used it as shell code in a Snakemake workflow, it threw a syntax error:
syntax error near unexpected token `('
I'm guessing it's a problem with the bash interpreter? Any clue how else I could loop through these files?
Thanks
Edit:
I've also tried
files=$data$sam/*
which outputs
echo ${files[0]}
data2/Scerevisiae-Pho4/GSM730517/*
You don't need all these checks: mkdir -p creates the directory path if needed and does not complain if it already exists. You also don't need arrays at all:
for d in GSM730517 GSM730528; do
    p=results3/$d
    mkdir -p "$p"
    for f in "$p"/*; do
        echo "$f"
    done
done
This iterates over all existing files; replace echo with your actual command.
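If the goal is to iterate over the existing input files rather than the freshly created results directories, here is a sketch using the variable names from the question (the paths are assumptions taken from the question, not verified):
data="data2/Scerevisiae-Pho4/"
results="results3/"
for sam in GSM730517 GSM730528; do
    mkdir -p "$results$sam"
    for f in "$data$sam"/*; do
        echo "$f"    # replace echo with the real operation
    done
done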
I'm having trouble understanding the if syntax of a line in a shell script:
if [ ! -f *file1.txt* -a ! -f *file2.txt* -a ! -f *file3.txt* ]; then
sbatch file.sh
fi
The * is used because my files are backed up to #file.txt.1# format.
As far as I know, the ! means "if not" and -f means "if this is a regular file", but I haven't found any documentation for the -a flag.
I want to submit the file.sh only if all these files are NOT present.
Could anyone help?
One easy implementation, compatible with any POSIX shell:
exists_any() {
while [ "$#" -gt 0 ]; do # as long as we have command-line arguments...
[ -e "$1" ] && return 0 # if first argument names a file that exists, success
shift # remove first argument from the list
done
return 1 # nothing matched; report failure
}
if ! exists_any *file1.txt* *file2.txt* *file3.txt*; then
sbatch file.sh
fi
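A note on why the globs behave as intended here: with bash's default globbing (no nullglob), a pattern that matches nothing is passed to exists_any as the literal string, -e on that string is false, and the function keeps checking the remaining arguments. A quick usage sketch:
exists_any *file1.txt* *file2.txt* *file3.txt*
echo "exit status: $?"    # 0 if any match exists, 1 otherwise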
I have a little bash script where I compare two files. If one doesn't exist and the second one exists, I copy/replace the backup into the main folder.
Somehow this doesn't seem to work. I hope someone can give me a hand with this one:
#!/bin/bash
if [ ! -f "/Folder1/$1.jpg" ] && [ -f "/BU_Folder2/$1_BU.jpg" ]; then
cp -fp /BU_Folder2/$1_BU.jpg /Folder1/$1.jpg
cp -fp /BU_Folder2/$1_BU.mp4 /Folder1/$1.mp4
fi
At the prompt, run the following commands:
$ set -- FILENAME # FILENAME is the value you think $1 is supposed to have
$ [ ! -f "/Folder1/$1.jpg" ] && [ -f "/BU_Folder2/$1_BU.jpg" ] && echo success
If the last command does not print "success", then your script probably does not have the value for $1 that you think it does. Add echo $1 to the top of your script to confirm.
If it does print "success", and your script has no error output from cp, I'm not sure what to suggest.
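One more debugging sketch (the script name and argument are placeholders): running the script with tracing prints each test and cp command with $1 already expanded, which usually makes any path mismatch obvious:
bash -x ./your_backup_script.sh FILENAME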