Why is "if [ -z "ls -l /non_existing_file 2> /dev/null" ] not true [duplicate] - shell

This question already has answers here:
Test if a command outputs an empty string
(13 answers)
Closed 2 years ago.
I want to check if an external drive is still plugged in by checking /dev/disk/by-uuid/1234-5678.
However, I know that this could be done much more easily with:
if ! [ -e "/non_existing_file" ]; then
echo "File doesn't exist anymore"
fi
But I still want to know why the test in the title doesn't work. Is it because of the exit code of ls?
Thanks in advance.

In the title's test, ls -l /non_existing_file 2> /dev/null is a literal string, not a command that gets executed; since the string is never empty, -z is never true. It works as intended once the command is actually executed in a command substitution.
Compare the example in the title:
if [[ -z "ls -l something.txt 2> /dev/null" ]]; then
echo "file does not exist"
fi
... with this version that returns as expected:
if [[ -z "$(ls -l something.txt 2> /dev/null)" ]]; then
echo "file does not exist"
fi
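Since the question also asks about exit codes: you can skip output parsing entirely and test the exit status of ls itself. A minimal sketch, using the same example file:
# Rely on the exit status of ls: non-zero means the file is missing.
if ! ls -l something.txt > /dev/null 2>&1; then
echo "file does not exist"
fi
With this form the output never needs to be captured at all.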


Output of "df -h" to while loop, for list of host [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 6 years ago.
I use this bash-code to upload files to a remote server, for normal files this works fine:
for i in `find devel/ -newer $UPLOAD_FILE`
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
The only problem is that for files with a space in the name, the for-loop fails, so I replaced the first line like this:
find devel/ -newer $UPLOAD_FILE | while read i
do
echo "Upload:" $i
if [ -d $i ]
then
echo "Creating directory" $i
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
if scp -Cp $i $USER@$SERVER:$REMOTE_PATH/$i
then
echo "$i OK"
else
echo "$i NOK"
rm ${UPLOAD_FILE}_tmp
fi
done
For some strange reason, the ssh-command breaks out of the while-loop, therefore the first missing directory is created fine, but all subsequent missing files/directories are ignored.
I guess this has something to do with ssh writing something to stdout which confuses the "read" command. Commenting out the ssh-command makes the loop work as it should.
Does anybody know why this happens and how one can prevent ssh from breaking the while-loop?
The problem is that ssh reads from standard input, therefore it eats all your remaining lines. You can just connect its standard input to nowhere:
ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null
You can also use ssh -n instead of the redirection.
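Put together, here is a minimal sketch of the question's loop with ssh's stdin detached, using the question's own variables (robust quoting and filename handling are covered in the next answer):
find devel/ -newer "$UPLOAD_FILE" | while read -r i
do
if [ -d "$i" ]
then
# -n stops ssh from consuming the loop's remaining input lines
ssh -n "$USER@$SERVER" "cd ${REMOTE_PATH}; mkdir -p $i"
continue
fi
scp -Cp "$i" "$USER@$SERVER:$REMOTE_PATH/$i"
done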
Another approach is to loop over a FD other than stdin:
while IFS= read -u 3 -r -d '' filename; do
if [[ -d $filename ]]; then
printf -v cmd_str 'cd %q; mkdir -p %q' "$REMOTE_PATH" "$filename"
ssh "$USER#$SERVER" "$cmd_str"
else
printf -v remote_path_str '%q@%q:%q/%q' "$USER" "$SERVER" "$REMOTE_PATH" "$filename"
scp -Cp "$filename" "$remote_path_str"
fi
done 3< <(find devel/ -newer "$UPLOAD_FILE" -print0)
The -u 3 and 3< operators are critical here, using FD 3 rather than the default FD 0 (stdin).
The approach given here -- using -print0, a cleared IFS value, and the like -- is also less buggy than the original code and the existing answer, which can't handle interesting filenames correctly. (Glenn Jackman's answer is close, but even that can't deal with filenames with newlines or filenames with trailing whitespace).
The use of printf %q is critical to generate commands which can't be used to attack the remote machine. Consider what would happen with a file named devel/$(rm -rf /)/hello with code which didn't have this paranoia.
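A quick illustration of what %q does with the hostile name from the previous paragraph (the exact escaping may vary by bash version):
# The single quotes keep $(rm -rf /) literal in the assignment.
f='devel/$(rm -rf /)/hello'
printf -v safe '%q' "$f"
echo "$safe"
# prints something like: devel/\$\(rm\ -rf\ /\)/hello
The command substitution arrives at the remote shell as inert escaped text rather than something it would execute.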

Actual return code for SCP

I am writing a bash script that goes through a list of filenames and attempts to copy each file using scp from two servers into a local folder. The script then compares the local files to each other. Sometimes however, the file will not exist on one server or the other or both.
At first, I was using this code:
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
error=$(</tmp/Error) # error catching
if [[ -n "$error" ]]; then echo -e "$file not found on $host"; fi
But I found that some (corporate) servers output a (legalese) message (to stderr I guess) every time a user connects via scp or ssh. So I started looking into utilizing exit codes.
I could simply use
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -ne 0 ]]; then echo -e "$file not found on $host"; fi
but since the exit code for "file does not exist" is supposed to be 6, I would rather have a more precise
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -eq 6 ]]; then echo -e "$file not found on $host"; fi
The problem is that I seem to be getting an exit code of 1 no matter what went wrong. This question is similar to this one, but that answer does not help me in Bash.
Another solution I am considering is
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
error=$(</tmp/Error) # error catching
if [[ ${error: -25} = "No such file or directory" ]]; then echo -e "$file not found on $host"; fi
But I am concerned that different versions of scp could have different error messages for the same error.
Is there a way to get the actual exit code of scp in a Bash script?
Per the comments (@gniourf_gniourf, @shelter, @Wintermute) I decided to simply switch tools to rsync. Thankfully the syntax doesn't need to be changed at all.
23 was the error code I was getting when files didn't exist, so here is the code I ended up with:
rsync -q $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -eq 23 ]]; then echo -e "$file not found on $host"; continue; fi
I'm seeing exit status 1 for "file not found". You can test these sorts of things against localhost; if you need to differentiate between errors, capture the diagnostic output instead:
if err=`scp $host:$file 2>&1`
then
echo "copied successfully"
else
case "$err" in
*"file not found"* )
echo "$file Not Found on $host"
;;
*"Could not resolve hostname"* )
echo "Host not found: $host"
;;
*"Permission denied"* )
echo "perm-denied! $host"
;;
* )
echo "other scp error $err"
;;
esac
fi
This isn't going to work if you have a different locale with different messages, though.
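One hedged, partial mitigation for the locale caveat: pin the client's locale to C for the scp call, which keeps client-side strerror-based messages (resolution and connection errors, for instance) untranslated. Messages generated by the remote end are unaffected, so this is a sketch under that assumption:
# Assumption: only client-side messages need normalizing here.
if err=$(LC_ALL=C scp "$host:$file" . 2>&1)
then
echo "copied successfully"
else
echo "scp failed: $err"
fi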

BASH - Check that the same script is not already running

I have a script /root/data/myscript
and when I run /root/data/myscript
I do not know how to determine whether another instance of it is already running.
Does anyone know?
I tried
if [[ "$(pidof -x /root/data/myscript | wc -w)" > "1" ]]
then echo "This script is already running!"
fi
thank you
This should work.
if [[ "$(pgrep myscript)" ]]
then echo "This script is already running!"
fi
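One caveat worth noting: pgrep matches against the process name, and a script started as bash /root/data/myscript shows up with the name bash, so matching the full command line with -f may be needed. A sketch, counting matches because -f will also match the current instance:
# -f matches the full command line; -c prints the match count.
# A count above 1 means an instance other than this one exists.
if [ "$(pgrep -fc "/root/data/myscript")" -gt 1 ]
then echo "This script is already running!"
fi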
This could work to check whether the script is already running:
if [[ "$(ps -ef | grep "/root/data/myscript" | grep -v "grep")" ]] ; then
echo "This script is already running!"
fi
Try this one.

Lynx is stopping the loop?

I'll just apologize beforehand; this is my first ever post, so I'm sorry if I'm not specific enough, if the question has already been answered and I just didn't look hard enough, and if I use incorrect formatting of some kind.
That said, here is my issue: In bash, I am trying to create a script that will read a file that lists several dozen URL's. Once it reads each line, I need it to run a set of actions on that, the first being to use lynx to navigate to the website. However, in practice, it will run once perfectly on the first line. Lynx goes, the download works, and then the subsequent renaming and organizing of that file go through as well. But then it skips all the other lines and acts like it has finished the whole file.
I have tested to see if it was lynx causing the issue by eliminating all the other parts of the code, and then by just eliminating lynx. It works without Lynx, but, of course, I need lynx for the rest of the output to be of any use to me. Let me just post the code:
#!/bin/bash
while read line; do
echo $line
lynx -accept_all_cookies $line
echo "lynx done"
od -N 2 -h *.zip | grep "4b50"
echo "od done, if 1 starting..."
if [[ $? -eq 0 ]]
then ls *.*>>logs/zips.log
else
od -N 2 -h *.exe | grep "5a4d"
echo "if 2 starting..."
if [[ $? -eq 0 ]]
then ls *.*>>logs/exes.log
else
od -N 2 -h *.exe | grep "5a4d, 4b50"
echo "if 3 starting..."
if [[ $? -eq 1 ]]
then
ls *.*>>logs/failed.log
fi
echo "if 3 done"
fi
echo "if 2 done"
fi
echo "if 1 done..."
FILE=`(ls -tr *.* | head -1)`
NOW=$(date +"%m_%d_%Y")
echo "vars set"
mv $FILE "criticalfreepri/${FILE%%.*}(ZCH,$NOW).${FILE#*.}" -u
echo "file moved"
rm *.zip *.exe
echo "file removed"
done < "lynx"
$SHELL
Just to be sure: I do have a file called "lynx" that contains the URLs, one per line. Also, I used all those echoes to do my own sort of debugging, but I have tried it with and without them. When I execute the script, the echoes all show up...
Any help is appreciated, and thank you all so much! Hope I didn't break any rules on this post!
PS: I'm on Linux Mint running things through the "terminal" program. I'm scripting with bash in Gedit, if any of that info is relevant. Thanks!
EDIT: Actually, the echo tests repeat for all three lines. So it would appear that lynx simply can't start again in the same loop?
Here is a simplified version of the script, as requested:
#!/bin/bash
while read -r line; do
echo $line
lynx $line
echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Note that I have changed the sites the "url" file goes to:
www.google.com
www.majorgeeks.com
http://www.sophos.com/en-us/products/free-tools/virus-removal-tool.aspx
Lynx is not designed to be used in scripts; it is an interactive console browser that takes over the terminal.
If you want to access URLs in a script use wget, for example:
wget http://www.google.com/
For exit codes see: http://www.gnu.org/software/wget/manual/html_node/Exit-Status.html
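For example, a minimal sketch of branching on that exit status (per the manual linked above, 8 means the server issued an error response such as 404):
wget -q "http://www.google.com/"
rc=$?
# 0 means success; see the manual page above for the other codes.
if [ "$rc" -ne 0 ]; then
echo "wget failed with exit status $rc"
fi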
To parse the HTML content, use:
VAR=`wget -qO- http://www.google.com/`
echo $VAR
I found a way which may fulfill your requirement to run the lynx command in a loop with a different URL substituted each time.
Use
echo `lynx $line`
(echo the output of lynx $line, captured in backquotes)
instead of lynx $line. Compare the two versions below:
Your code:
#!/bin/bash
while read -r line; do
echo $line
lynx $line
echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Try this instead:
#!/bin/bash
while read -r line; do
echo $line
echo `lynx $line`
echo "lynx done"
done < "ref/url"
I should have answered this question a long time ago. I got the program working, it's now on Github!
Anyway, I simply had to wrap the loop inside a function. Something like this:
progdownload () {
printlog "attmpting download from ${URL}"
if echo "${URL}" | grep -q "http://www.majorgeeks.com/" ; then
lynx -cmd_script="${WORKINGDIR}/support/mgcmd.txt" --accept-all-cookies ${URL}
else wget ${URL}
fi
}
URL="something.com"
progdownload

Continue script if only one instance is running? [duplicate]

This question already has answers here:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
(43 answers)
Closed 5 years ago.
Now this is embarrassing: I'm writing a quick script and I can't figure out why this statement doesn't work.
if [ $(pidof -x test.sh | wc -w) -eq 1 ]; then echo Passed; fi
I also tried using back-ticks instead of $() but it still wouldn't work.
Can you see what is wrong with it? pidof -x test.sh | wc -w returns 1 if I run it inside the script, so I don't see any reason why what is essentially if [ 1 -eq 1 ] wouldn't pass.
Thanks a lot!
Jefromi is correct; here is the logic I think you want:
#!/bin/bash
# this is "test.sh"
if [ $(pidof -x test.sh | wc -w) -gt 2 ]; then
echo "More than 1"
exit
fi
echo "Only one; doing whatever..."
Ah, the real answer: when you use a pipeline, you force the creation of a subshell, and that subshell is another copy of the script, so pidof always reports one extra PID:
#!/bin/bash
echo "subshell:"
np=$(pidof -x foo.bash | wc -w)
echo "$np processes" # two processes
echo "no subshell:"
np=$(pidof -x foo.bash)
np=$(echo $np | wc -w)
echo "$np processes" # one process
I'm honestly not sure what the shortest way is to do what you really want. You could avoid it all by creating a lockfile; otherwise you probably have to trace back via ppid to all the top-level processes and count them.
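A minimal sketch of the lockfile idea using flock(1); the lock path /tmp/test.sh.lock and descriptor 200 are arbitrary choices for illustration:
#!/bin/bash
# Open a file descriptor on the lock file, then take a non-blocking
# exclusive lock; failure means another instance already holds it.
exec 200> /tmp/test.sh.lock
if ! flock -n 200; then
echo "This script is already running!"
exit 1
fi
echo "Only one; doing whatever..."
The lock is released automatically when the script exits, so no cleanup is needed even on a crash.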
You don't have to pipe the result of pidof to wc to count how many there are; use the shell's positional parameters:
r=$(pidof -x -o $$ test.sh)
set -- $r
if [ "${##}" -eq 1 ];then
echo "passed"
else
echo "no"
fi
If you use the -o option to omit the PID of the script ($$), then only the PID of the subshell and any other instances of the script (and any subshells they might spawn) will be considered, so the test will pass when there's only one instance:
if [ $(pidof -x -o $$ test.sh | wc -w) -eq 1 ]; then echo Passed; fi
Here's how I would do it:
if [ "`pgrep -c someprocess`" -gt "1" ]; then
echo "More than one process running"
else
echo "Multiple processes not running"
fi
If you don't want to use a lockfile ... you can try this:
#!/bin/bash
if [[ "$(ps -N -p $$ -o comm,pid)" =~ $'\n'"${0##*/}"[[:space:]] ]]; then
echo "aready running!"
exit 1
fi
PS: it might need adjustment for a weird ${0##*/}
Just check for the existence of any one (or more) process identified as test.sh, the return code will be 1 if none are found:
pidof -x test.sh >/dev/null && echo "Passed"
