Conditional lsof loop in a bash script - macOS

I found this example of a conditional lsof loop and want to adapt it to my situation.
typeset fSrc="/path/to/sourcedir"
typeset fTgt="/path/to/targetdir"
while : ; do
    ls "$fSrc" | while read -r file ; do
        # lsof prints a header line plus one line per process holding the file open,
        # so more than one line of output means something still has the file open.
        if [ "$(lsof "$fSrc/$file" | wc -l)" -gt 1 ] ; then
            echo "file $file still loading, skipping it"
        else
            mv "$fSrc/$file" "$fTgt/$file"
            echo "file $file completed upload, moving it"
        fi
    done
done
My example would be more like this:
while any files are present in "/pathto/sourcedir"; do
if [ lsof "any file" in "/pathto/sourcedir" is being written or modified ]; then
echo "Files being written or modified, exiting"
exit
else
do something
fi
done
Can this be done? Is my logic somewhat close to correct?
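A minimal sketch of that logic, assuming "being written or modified" can be approximated by "held open by some process" (which is all lsof can really tell you); the loop structure and the placeholder action are hypothetical:
#!/bin/bash
srcdir="/pathto/sourcedir"

# Keep going as long as the source directory contains anything.
while [ -n "$(ls -A "$srcdir")" ]; do
    # lsof +D exits 0 if any process has a file under $srcdir open.
    if lsof +D "$srcdir" >/dev/null 2>&1; then
        echo "Files being written or modified, exiting"
        exit 0
    else
        : # do something with the files here; whatever it is must eventually
          # empty the directory, or this loop never ends
    fi
done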

Related

I don't know if my shell script is correct

I have a homework assignment using a for loop, but I don't quite understand the task I'm being asked to do. I wrote a script, but I feel like it's not correct. Please help!
Here is the question:
Write a shell script to list out the contents of any directory, and indicate for each file (including invisible ones) whether the file is a directory, a plain file, and whether it is public and/or executable to this process
#!/bin/bash
if [ $# -lt 1 ] ; then
echo " file doesn't exist"
echo
echo " variable needed to run a command"
fi
echo ---------------------------------------------
echo ---------------------------------------------
for i in $*
do
if [ -f $i ]; then
echo " it's a file";
echo "THIS IS A LIST OF FILE and DIRECTORY in $i"
ls -a $i
fi
done
echo -----------------------------------------
if [ -d $i ]; then
echo "directory" ;
echo "THIS IS A LIST OF FILES AND DIRETORY in $i"
ls -a $i
fi
echo ------------------------------------------
if [ -x $i ]; then
echo "executable"
echo "THIS IS A LIST OF EXECUTABLE FILE IN $i"
ls -x $i
fi
echo -----------------------------------------
if [ -r $i ]; then
echo "this file is a public file"
else "this is a private file"
fi
Poorly written specifications are the bane of education. "Public" sounds like the wrong word here. I'll assume it means "readable".
You check if there's an argument, but you don't exit the program if there is not. I'd also confirm it's a directory, and readable.
The manual will do you a lot of good. Expect to do a lot of reading till you learn this stuff, and then reference it a lot to be sure.
Read this section carefully, create some tests for yourself to prove they work and that you understand them, and your job will be more than half done.
Don't use [. Generally it's just better to always use [[ instead, unless you are using (( or case or some other construct.
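For a quick, hypothetical illustration of the difference: with an unquoted variable containing spaces, [ word-splits the value and errors out, while [[ does not split at all:
$: f="name with spaces"
$: [ -f $f ] && echo found     # errors: $f splits into several words inside [
$: [[ -f $f ]] && echo found   # fine: [[ does not word-split, it tests the literal name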
I don't see that a for loop was specified, but it ought to be fine. Just be aware that you might have to specify $1/* and $1/.* separately.
Put all your tests in one loop, though. For each file, test for whether it's a directory - if it is, report it. Test if it's a plain file - if it is, report it.
I do NOT like doing homework for someone, but it looks like you could use an example that simplifies this. I recommend you not use this as written - break it out and make it clearer, but this is a template for the general logic.
#!/usr/bin/env bash
(( $# )) && [[ -d "$1" ]] && [[ -r "$1" ]] || {
echo "use: $0 <dir>" >&2
exit 1
}
for e in "$1"/.* "$1"/*
do echo "$e:"
[[ -d "$e" ]] && echo " is a directory"
[[ -f "$e" ]] && echo " is a plain file"
[[ -r "$e" ]] && echo " is readable"
[[ -x "$e" ]] && echo " is executable"
done
If you read the links I provided you should be able to break this apart and understand it.
Generally, your script is long and a bit convoluted. Simpler is easier to understand and maintain. For example, be very careful about block indentation to understand scope.
$: for i in 1 2 3
> do echo $i
> done
1
2
3
$: echo $i
3
Compare this to -
for i in $*
do if [ -f $i ]; then
echo " it's a file";
echo "THIS IS A LIST OF FILE and DIRECTORY in $i"
ls -a $i
fi
done
echo -----------------------------------------
if [ -d $i ]; then
echo "directory" ;
echo "THIS IS A LIST OF FILES AND DIRETORY in $i"
ls -a $i
fi
You are testing each entry to see if it is a file, and if it is, reporting "THIS IS A LIST OF FILE and DIRECTORY in $i" every time...
but then only testing the last one to see if it's a directory, because the [ -d $i ] is after the done.
...did you run this somewhere to try it, and look at the results?

Conditional statement bash script

I need help replacing the following script with a different format that uses a configuration file and a loop.
[FedoraC]$ cat script.sh
#!/bin/bash
grep -q /tmp /etc/fstab
if [ $? -eq 0 ]; then
echo "True"
else
echo "False"
fi
mount | grep ' /tmp' | grep nodev
if [ $? -eq 0 ]; then
echo "True"
else
echo "False"
fi
mount | grep /tmp | grep nosuid
if [ $? -eq 0 ]; then
echo "True"
else
echo "False"
fi
So far I have the following script, which should take the values from a source/conf file and run each command found in the conf file one by one. After each command is executed, the output should be "True" or "False".
The conf file consists of Unix commands: /opt/conf1
[FedoraC]$ cat conf1
grep -q /tmp /etc/fstab
mount | grep /tmp | grep nodev
mount | grep /tmp | grep nosuid
mount | grep /tmp | grep noexec
[FedoraC]$ cat new_script.sh
#!/bin/bash
. conf1
for i in $#;
do $i
if [ $i -eq 0 ]; then
echo "Passed"
else
echo "Failed"
fi
done
Instead of displaying the output based on the conditional statement, the script runs each line from conf1 one by one, and no echo messages are seen.
Can I get some help, please?
try this:
#!/bin/bash
while read -r L; do
    # run each line from conf1 in its own shell and check its exit status
    echo "$L" | sh
    if [ $? -eq 0 ]; then
        echo Pass
    else
        echo Failed
    fi
done < conf1
The more robust and canonical way to do this would be to have a directory /opt/conf1.d/, and put each of your lines as an executable script in this directory. You can then do
for file in /opt/conf1.d/*
do
[[ -x $file ]] || continue
if "$file"
then
echo "Passed"
else
echo "Failed"
fi
done
This has the advantages of supporting multi-line scripts, or scripts with more complex logic. It also lets you write the check script in any language, and lets scripts and packages add and remove contents easily and non-interactively.
If you really want to stick with your design, you can do it with:
while IFS= read -r line
do
if ( eval "$line" )
then
echo "Passed"
else
echo "Failed"
fi
done < /opt/conf1
The parentheses in the if statement run eval in a subshell, so that lines can't interfere with each other by setting variables or by exiting your entire loop.
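As a small, invented illustration of why the subshell matters: a config line such as exit 0 would otherwise terminate new_script.sh itself, whereas in a subshell it only ends that one evaluation:
# without the parentheses, "exit 0" here would end the whole script;
# with them it only exits the subshell, the test succeeds, and the loop moves on
if ( eval "exit 0" ); then echo "Passed"; else echo "Failed"; fi   # prints "Passed"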

How can I avoid starting a bash script multiple times?

I wrote a little bash script called "wp", which uploads files to an FTP server. It uses the wput utility. It takes the list of files from a text file. When an upload finishes, it comments out that line in the text file with a hash mark. The success of the upload is detected from the last line of the logfile. My question is: how can I avoid starting my script multiple times? I am trying to detect a running instance with pgrep, but it doesn't work correctly:
#!/bin/bash
if [ "$(pgrep ^wp$|wc -l)" -eq "2" ]
then
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is starting..."
else
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is already running!"
exit
fi
server="ftp://username:password#ftp.ftpserver.com"
logfile=~/uploads.log
listfile=~/uploads.txt
list_backup=~/uploads_bak000.txt
while read f;
do
ret=""
if [ "${f:0:1}" = "#" -o "$f"1 = 1 ]
then
if [ "$f"1 = 1 ]
then
:
#echo "invalid string: "$f
else
#first character is remark sign # then empty command -> :
echo "remark line skipped: "$f
fi
else
#while string $ret is empty
while [ -z "$ret" ]
do
wput "$f" --tries=-1 "$server" 2>&1|tee -a $logfile #> /dev/null
ret=$(tail -n 1 "$logfile"|grep "FINISHED\|Nothing\|Skipped\|Transfered")
done
if [ -n "$ret" ]
then
cat $listfile > $list_backup
awk -v f="$f" '{if ($0==f && $0!~/#/) print "#" $0; else print $0;}' $list_backup > $listfile
fi
fi
done < $listfile
There are quick-n-dirty solutions that use ps with grep (don't do this).
It is better to use a lock file as a "mutex". A nice way of doing this is by using a directory as a lock file (http://mywiki.wooledge.org/BashFAQ/045).
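For illustration, a minimal sketch of that mkdir-based lock (the lock path is just a placeholder):
#!/bin/bash
lockdir=/tmp/wp.lock

# mkdir is atomic: exactly one process can create the directory, so it acts as a mutex.
if mkdir "$lockdir" 2>/dev/null; then
    # release the lock when the script exits, however it exits
    trap 'rmdir "$lockdir"' EXIT
    echo "wp script is starting..."
    # ... the upload loop from your script goes here ...
else
    echo "wp script is already running!"
    exit 1
fi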
I would also suggest taking a look at http://mywiki.wooledge.org/ProcessManagement#How_do_I_make_sure_only_one_copy_of_my_script_can_run_at_a_time.3F, which mentions the use of setlock (http://cr.yp.to/daemontools/setlock.html), a tool that abstracts the lock-file handling for you.

How to test if multiple files exist using a Bash script

How can I use the test command for an arbitrary number of files, passed in using an argument with a wildcard?
For example:
test -f /var/log/apache2/access.log.* && echo "exists one or more files"
Currently, it prints
error: bash: test: too many arguments
This solution seems more intuitive to me:
if [ `ls -1 /var/log/apache2/access.log.* 2>/dev/null | wc -l ` -gt 0 ];
then
echo "ok"
else
echo "ko"
fi
To avoid "too many arguments error", you need xargs. Unfortunately, test -f doesn't support multiple files. The following one-liner should work:
for i in /var/log/apache2/access.log.*; do test -f "$i" && echo "exists one or more files" && break; done
By the way, /var/log/apache2/access.log.* is called shell-globbing, not regexp. Please see Confusion with shell-globbing wildcards and Regex for more information.
First, store files in the directory as an array:
logfiles=(/var/log/apache2/access.log.*)
Then perform a test on the count of the array:
if [[ ${#logfiles[@]} -gt 0 ]]; then
echo 'At least one file found'
fi
This one is suitable for use with the Unofficial Bash Strict Mode, since nothing in it returns a non-zero exit status when no files are found.
The array logfiles=(/var/log/apache2/access.log.*) will always contain at least the unexpanded glob, so one can simply test for existence of the first element:
logfiles=(/var/log/apache2/access.log.*)
if [[ -f ${logfiles[0]} ]]
then
echo 'At least one file found'
else
echo 'No file found'
fi
If you wanted a list of files to process as a batch, as opposed to doing a separate action for each file, you could use find, store the results in a variable, and then check if the variable was not empty. For example, I use the following to compile all the .java files in a source directory.
SRC=`find src -name "*.java"`
if [ ! -z "$SRC" ]; then
javac -classpath $CLASSPATH -d obj $SRC
# stop if compilation fails
if [ $? != 0 ]; then exit; fi
fi
You just need to test if ls has something to list:
ls /var/log/apache2/access.log.* >/dev/null 2>&1 && echo "exists one or more files"
Variation on a theme:
if ls /var/log/apache2/access.log.* >/dev/null 2>&1
then
echo 'At least one file found'
else
echo 'No file found'
fi
ls -1 /var/log/apache2/access.log.* | grep . && echo "One or more files exist."
Or using find
if [ $(find /var/log/apache2/ -type f -name "access.log.*" | wc -l) -gt 0 ]; then
echo "ok"
else
echo "ko"
fi
The ls in the condition below complains on stderr when no files match; redirecting the whole [[ ]] test to /dev/null sends that complaint into a black hole. So I suggest this code:
if [[ $(ls -1 /var/log/apache2/access.log.* | wc -l ) -gt 0 ]] 2> /dev/null
then
echo "exists one or more files."
fi
More simply:
if ls /var/log/apache2/access.log.* 2>/dev/null 1>&2; then
echo "ok"
else
echo "ko"
fi

Bash about repeat until

I want to know whether this syntax is correct. I can't test it right now, sorry, but it's important to me. It's an FTP script. The file name is a.txt, and I would like to create a script that keeps uploading the file until it succeeds. Will it work or not? Can anyone help me build the correct one, please?
LOGFILE=/home/transfer_logs/$a.log
DIR=/home/send
Search=`ls /home/send`
firstline=`egrep "Connected" $LOGFILE`
secondline=`egrep "File successfully transferred" $LOGFILE`
if [ -z "$Search" ]; then
cd $DIR
ftp -p -v -i 192.163.3.3 < ../../example.script > ../../$LOGFILE 2>&1
fi
if
egrep "Not connected" $LOGFILE; then
repeat
ftp -p -v -i 192.163.3.3 < ../../example.script > ../../$LOGFILE 2>&1
until
[[ -n $firstline && $secondline ]];
done
fi
example.script contains:
binary
mput a.txt
quit
Does ftp not return a reasonable result? It would be easiest to write:
while ! ftp ...; do sleep 1; done
If you insist on searching the log file, do something like:
while :; do
ftp ... > $LOGFILE
grep -qF "File successfully transferred" $LOGFILE && break
done
Or
while ! test -e $LOGFILE || grep -qF "Not connected" $LOGFILE; do
ftp ... > $LOGFILE
done
Will it work or not?
No, it won't work. According to §3.2.4.1 "Looping Constructs" of the Bash Reference Manual, these are the kinds of loops that exist:
until test-commands; do consequent-commands; done
while test-commands; do consequent-commands; done
for name [ [in [words …] ] ; ] do commands; done
for (( expr1 ; expr2 ; expr3 )) ; do commands ; done
You'll notice that none of them begins with repeat.
Additionally, these two lines:
firstline=`egrep "Connected" $LOGFILE`
secondline=`egrep "File successfully transferred" $LOGFILE`
run egrep immediately, and set their variables accordingly. This command:
[[ -n $firstline && $secondline ]]
will always give the same return value, because nothing in the loop ever modifies $firstline and $secondline. You need to actually put an egrep command inside the loop.
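A minimal sketch of that fix, reusing the pieces from the question (assume $LOGFILE and example.script are set up as above; the relative paths are simplified, and this is an illustration, not a tested script):
#!/bin/bash
# keep retrying until the log shows both a connection and a successful transfer;
# the egrep checks run inside the loop, so they re-read the log after every attempt
until egrep -q "Connected" "$LOGFILE" 2>/dev/null &&
      egrep -q "File successfully transferred" "$LOGFILE" 2>/dev/null
do
    ftp -p -v -i 192.163.3.3 < example.script > "$LOGFILE" 2>&1
done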

Resources