File count in a folder not showing accurately - bash

I am writing a shell script to check two things at one time. The first condition is to check for the existence of a specific file and the second condition is to confirm that there is only one file in that directory.
I am using the following code:
conf_file=ls -1 /opt/files/conf.json 2>/dev/null | wc -l
total_file=ls -1 /opt/files/* 2>/dev/null| wc -l
if [ $conf_file -eq 1 ] && [ $total_file -eq 1 ]
then
echo "done"
else
echo "Not Done"
fi
It is returning the following error
0
0
./ifexist.sh: 4: [: -eq: unexpected operator
Not Done
I am probably making a very silly mistake. Can anyone help me a little bit?

One of the reasons you should normally not parse ls is that you can get strange results when file names contain newlines. In your case that won't be an issue, because any file other than conf.json should make the test fail anyway. Still, you should make the code that counts the files future-proof. You can use find for this.
Your code can be changed to:
jsonfile="/opt/files/conf.json"
countfiles=$(find /opt/files -maxdepth 1 -type f -exec printf '.\n' \; | wc -l)
if [[ -f "${jsonfile}" ]] && (( "${countfiles}" == 1)); then
echo "Done"
else
echo "Not Done"
fi
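For comparison, here is a glob-based sketch of the same check (an assumption on my part: it requires bash, and unlike find -type f the glob also counts subdirectories, so it only works if /opt/files contains none):
jsonfile="/opt/files/conf.json"
shopt -s nullglob                 # a glob with no matches expands to nothing
entries=(/opt/files/*)            # every entry in the directory, files and dirs alike
if [[ -f "${jsonfile}" ]] && (( ${#entries[@]} == 1 )); then
echo "Done"
else
echo "Not Done"
fi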

When you say this:
conf_file=ls -1 /opt/files/conf.json 2>/dev/null | wc -l
That assigns the value "ls" to the variable conf_file (for that one command only), and then tries to run a command called "-1" and pipe the result to wc. If you want to run a pipe sequence and capture its output, you have to enclose it in $( ):
conf_file=$(ls -1 /opt/files/conf.json 2>/dev/null | wc -l)
Next, when combining clauses in the test command ([), do it inside the command:
if [ $conf_file -eq 1 -a $total_file -eq 1 ]
However, there are better ways to do this. You can check if a file exists with "-f", and you can just check whether the output of ls matches what you expect, without creating variables or running other commands:
if [ -f /opt/files/conf.json -a "$(ls /opt/files/conf.*)" = "/opt/files/conf.json" ]
However, it is not a friendly practice to prohibit other files. In many cases, people might want to leave backup or test copies (conf.json.bak or conf.json.test), and there's no reason for you to block that.

Related

how to edit this code to show multiple files (it currently works with 1 file)?

I have created a shell script in order to find 2 files. While it works with 1 file, it does not work with 2 or more. Any help?
#!/bin/bash
FILENAME="abc"
if [ -f "${FILENAME}"* ]
then
echo "EXISTS"
else
echo "NOT EXISTS"
fi
Expected: EXISTS
Error:
./test.sh: line 5: [: abc1.sh: binary operator expected
NOT EXISTS
Error is here:
if [ -f "${FILENAME}"* ]
The -f option accepts a single file. If more than one file starts with
$FILENAME, the * is expanded and more than one file is passed
to -f. This is also reported by shellcheck:
$ ~/.cabal/bin/shellcheck test.sh
In test.sh line 5:
if [ -f "${FILENAME}"* ]
^-- SC2144: -f doesn't work with globs. Use a for loop.
If you want to check whether there is at least one file that starts with
$FILENAME without using external tools such as find, you need to use a
for loop, like this:
#!/bin/bash
FILENAME="abc"
for file in "${FILENAME}"*
do
if [ -f "$file" ]
then
echo File exists
exit 0
fi
done
echo File does not exist.
exit 1
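A loop-free sketch of the same idea, assuming bash: with nullglob set, a glob that matches nothing expands to an empty list instead of the literal pattern, so counting the matches is enough (note that, unlike -f, this also counts matching directories):
#!/bin/bash
shopt -s nullglob
FILENAME="abc"
matches=( "${FILENAME}"* )
if (( ${#matches[@]} > 0 ))
then
echo "EXISTS"
else
echo "NOT EXISTS"
fi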
A simple way is to check whether there are fewer than 2 files whose names match abc*:
#!/bin/bash
FILENAME="abc"
COUNT_FILES=$(find . -maxdepth 1 -name "$FILENAME*" -type f | wc -l)
if [[ $COUNT_FILES -lt 2 ]]
then
echo "NOT EXISTS"
else
echo "EXISTS"
fi
if ls /path/to/your/files* 1> /dev/null 2>&1
then
echo "files do exist"
else
echo "files do not exist"
fi
This is what I was looking for. What I wanted was a function that looks for single OR multiple files, which the code above performed perfectly. Thanks for the previous answers, much help.
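For completeness, another bash-only sketch: compgen -G expands a glob and returns a non-zero status when nothing matches, so it can be used as the test directly (it also lists directories, not just regular files):
#!/bin/bash
FILENAME="abc"
if compgen -G "${FILENAME}*" > /dev/null
then
echo "EXISTS"
else
echo "NOT EXISTS"
fi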

ps command in sh script does not include the top command

I have written a script to check whether a process is running or not. It works fine, but while testing it I found that it does not include the count of top commands running in other terminals.
check-process.sh
#!/bin/sh
OK=1
CRITICAL=0
PROCESS_NUM=$( ps -ef | grep $1 | grep -v "grep "|grep -v "sh"|wc -l )
#echo $PROCESS_NUM
if [ $PROCESS_NUM = $OK ]
then
echo "OK"
elif [ $PROCESS_NUM = $CRITICAL ]
then
echo "CRITICAL"
elif [ $PROCESS_NUM > $OK ]
then
echo "MULTIPLE process are runing"
else
echo "error"
fi
And I run the top command in two terminals and run this script as follows:
./check-process.sh top
and the output is 0 CRITICAL, but when I run the plain command ps -ef | grep -v "grep " | wc -l it gives a count of two.
That mess of greps just has to go.
One "trick" for finding processes by name without finding your grep is to use a regular expression. That is, after all, what the Global Regular Expression Print command is for. You can use parameter expansion to construct a safe regular expression based on your input string, perhaps like this:
#!/bin/sh
if [ -z "$1" ]; then
echo "I'd love me an option." >&2
exit 1
fi
OK=1
CRITICAL=0
x="${1#?}" # make a temporary string missing the 1st character,
re="[${1%$x}]$x" # then place the 1st character within square brackets.
PROC_COUNT=$( ps -ef | grep -w -c "$re" ) # yay, just one pipe.
if [ "$PROC_COUNT" -eq "$OK" ]; then
echo "OK"
elif [ "$PROC_COUNT" -eq "$CRITICAL" ]; then
echo "CRITICAL"
elif [ "$PROC_COUNT" -gt "$OK" ]; then
echo "MULTIPLE process are running"
else
echo "error"
fi
There are a few notable changes here:
I added something to fail with a better explanation if no option is given.
The pipeline, of course. And the lines that create $re.
We're using -gt and -eq to test numeric values. man test for details.
I renamed your count variable to be clearer. What is a "PROCESS_NUM" really? Sounds like a PID to me.
All variables are quoted. I don't need to tell you why, you have the Google.
That said, you should also consider using pgrep instead of any sort of counting pipe, if it's available on your system. Try running pgrep and see what your OS tells you.
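As a rough sketch, and assuming your pgrep supports the -c (count) and -x (exact name match) options, the whole pipeline collapses to one command:
#!/bin/sh
PROC_COUNT=$( pgrep -c -x "$1" )     # number of processes whose name is exactly $1
if [ "$PROC_COUNT" -eq 1 ]; then
echo "OK"
elif [ "$PROC_COUNT" -eq 0 ]; then
echo "CRITICAL"
else
echo "MULTIPLE process are running"
fi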

Grep issues with if statement in shell script

I'm having an issue with tail & grep in a shell script if statement. If I run tail -5 mylog.log | grep -c "Transferred: 0" in the shell, it runs as it should, but in this shell script if statement:
# Parse log for results
if [ tail -1 "$LOGFILE" | grep -c "Failed" ] ; then
RESULT=$(tail -1 "$LOGFILE")
elif [ tail -5 "$LOGFILE" | grep -c "Transferred: 0" ] ; then
RESULT=""
else
RESULT=$(tail -5 "$LOGFILE")
fi
I get
... [: missing `]'
grep: ]: No such file or directory
for both of the grep lines.
It's clearly to do with the closing ] being seen as part of the grep (and thus missing) but I'm using the correct whitespace so I can't figure out what's going on? What am I doing wrong here?
Thanks,
PJ
Immediate Issue / Immediate Fix
Take out the brackets.
if tail -1 "$logfile" | grep -q "Failed" ; then
[ is not part of if syntax. Rather, it's a synonym for the command named test (which is typically both available as a shell builtin and an external binary, like /bin/test or /usr/bin/test).
Thus, your original code was running [ tail -1 "$logfile", and piping its result to grep -c "Failed" ]. The first [ was failing because it didn't see an ending ] -- which is mandatory when it is invoked by that name rather than as test -- and also because its parameters weren't a test it knew how to parse; and grep didn't know what to do with the trailing ] argument, so it tried to open a file by that name.
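You can see this for yourself in bash: [ is just another command name, so these forms are interchangeable (a small illustration using the same $LOGFILE):
type [                                          # prints: [ is a shell builtin
type test                                       # prints: test is a shell builtin
if test -f "$LOGFILE"; then echo "exists"; fi
if [ -f "$LOGFILE" ]; then echo "exists"; fi    # same test; ] is just the last argument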
Other Notes
Try to run external commands -- like tail -- as little as possible. There's a very significant startup cost.
Consider the following, which runs tail only once:
#!/bin/bash
# ^^^^- IMPORTANT: bash, not /bin/sh
last_5_lines="$(tail -5 "$logfile")"
last_line="${last_5_lines##*$'\n'}"
if [[ $last_line = *Failed* ]]; then
result=$last_line
elif [[ $last_5_lines =~ 'Transferred:'[[:space:]]+'0' ]]; then
result=''
else
result=$last_5_lines
fi

Avoid statements inside if statements raising errors when false bash

To keep my data in structured form I often end up with deep folder-trees that can be a little annoying to go back and forward in, so I wrote a function to generate an HTML tree of the cwd:
function toc_date {
# Get the date in _YYYY-MM-DD format
D=`date +_%F`
# Build the tree, -H flag to output as HTML,
# . to use current dir in <a href> link
tree -H . >> toc$D.html
}
The problem arose when I wrote a follow-up function to remove old folder-tree files, specifically in the first expression of [ "$(ls -b toc_* | wc -l)" -gt "0" ], which prints an error when no files are found even though its value is correctly set to 0, which should make it skip the if statement (right?). The code works as intended, but error messages are usually not a sign of good code, so I was hoping someone might be able to suggest improvements.
function old_stuff {
# Number of lines i.e. files matching toc_*
if [ "$(ls -b toc_* | wc -l)" -gt "0" ]
then
# Show files/directories so we don't remove stuff unintended
ls toc_*
while true; do
read -p "Do you wish to remove old TOCs? [Y/n]" yn
case $yn in
[Nn]* ) break;;
[Yy]* ) rm toc_*; break;;
* ) echo "Please answer yes or no.";;
esac
done
fi
# Go ahead and generate the html tree
toc_date
}
Change it to:
if [ "$(ls -b toc_* 2>/dev/null | wc -l)" -gt "0" ]
Another, slightly more readable way (with the unnecessary quotes stripped off):
if [ $(ls | grep -c '^toc_') -gt 0 ]
Just a piece of advice: take "$(ls -b toc_* | wc -l)" out of the if and put it in a variable:
fileCount=$(ls -b toc_* 2>/dev/null | wc -l)
The 2>/dev/null sends stderr, i.e. any error messages, to /dev/null.
It is also advisable to always use [[ ]] when writing an if statement in bash, to avoid errors like "unary operator expected".
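For instance, a minimal sketch of the failure mode being described, with an empty variable:
count=""
[ $count -gt 0 ] && echo "yes"     # error: unary operator expected ($count expands to nothing)
[[ $count -gt 0 ]] && echo "yes"   # no error; the empty value is treated as 0 and the test is simply false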

Shell while loop not stopping

I'm writing a routine that will identify if a process stops running and will do something once the targeted process is gone.
I came up with this code (as a test for my future code):
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
echo $?
done;
echo DONE
My problem is that for some reason, the loop never stops and echoes 1 after I delete the file "aaa".
0
0 >>> I delete the file at that point (in another terminal)
1
1
1
1
.
.
.
I would expect the output to be "DONE" as soon as I delete the file...
What's the problem?
SOLUTION:
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
done;
echo DONE
The value of $? changes very easily. In the current version of your code, this line:
echo $?
prints the status of the previous command (grep) -- but then it sets $? to 0, the status of the echo command.
Save the value of $? in another variable, one that won't be clobbered next time you execute a command:
#!/bin/bash
value="aaa"
ls | grep $value
status=$?
while [ $status = 0 ];
do
sleep 5
ls | grep $value
status=$?
echo $status
done;
echo DONE
If the ls | grep aaa is intended to check whether a file named aaa exists, this:
while [ -f aaa ] ; ...
is a cleaner way to do it.
$? is the return code of the last command executed -- in your loop that is the echo, not the grep you care about.
You can rewrite that loop in a much simpler way, like this:
while [ -f aaa ]; do
sleep 5;
echo "sleeping...";
done
You ought not duplicate the command to be tested. You can always write:
while cmd; do ...; done
instead of
cmd
while [ $? = 0 ]; do ...; cmd; done
In your case, you mention in a comment that the command you are testing is parsing the output of ps. Although there are very good arguments that you ought not do that, and that the followon processing should be done by the parent of the command for which you are waiting, we'll ignore that issue at the moment. You can simply write:
while ps -ef | grep -v "grep mysqldump" |
grep mysqldump > /dev/null; do sleep 1200; done
Note that I changed the order of your pipe, since grep -v will return true if it
matches anything. In this case, I think it is not necessary, but I believe it is more
readable. I've also discarded the output to clean things up a bit.
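If you prefer, grep -q does the discarding for you, since it suppresses output and only reports an exit status -- a minor variation on the same loop:
while ps -ef | grep -v "grep mysqldump" | grep -q mysqldump; do
sleep 1200
done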
Presumably your objective is to wait until no filename containing the string $value remains in the current directory -- not just the one exact filename.
try:
#!/bin/bash
value="aaa"
while ls *"$value"* >/dev/null 2>&1; do
sleep 5
done
echo DONE
Your original code failed because $? is filled with the return code of the echo command upon every iteration following the first.
BTW, if you intend to use ps instead of ls in the future, you will pick up your own grep unless you are clever. Use ps -ef | grep [m]ysqldump.
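The bracket trick works because the pattern [m]ysqldump matches the literal text mysqldump, but not the grep's own command line, which contains the brackets. A quick sketch:
ps -ef | grep '[m]ysqldump'        # lists mysqldump processes, but never this grep itself
ps -ef | grep -c '[m]ysqldump'     # count them without needing grep -v or wc -l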
