I want to know whether this syntax is correct or not. I can't test it right now, sorry, but it's important to me. It's an FTP script. The file name is a.txt, and I would like to create a script that keeps uploading the file until it succeeds. Will it work or not? Can anyone help me build a correct version, please?
LOGFILE=/home/transfer_logs/$a.log
DIR=/home/send
Search=`ls /home/send`
firstline=`egrep "Connected" $LOGFILE`
secondline=`egrep "File successfully transferred" $LOGFILE`
if [ -z "$Search" ]; then
cd $DIR
ftp -p -v -i 192.163.3.3 < ../../example.script > ../../$LOGFILE 2>&1
fi
if
egrep "Not connected" $LOGFILE; then
repeat
ftp -p -v -i 192.163.3.3 < ../../example.script > ../../$LOGFILE 2>&1
until
[[ -n $firstline && $secondline ]];
done
fi
example.script contains:
binary
mput a.txt
quit
Does ftp not return a reasonable result? It would be easiest to write:
while ! ftp ...; do sleep 1; done
If you insist on searching the log file, do something like:
while :; do
ftp ... > $LOGFILE
grep -qF "File successfully transferred" $LOGFILE && break
done
Or
while ! test -e $LOGFILE || grep -qF "Not connected" $LOGFILE; do
ftp ... > $LOGFILE
done
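Putting those pieces together with the actual ftp invocation from the question gives something like the sketch below. The host, file names, and paths come from the question (with the odd ../../ prefixes dropped), and the success string must match whatever your ftp client actually writes to the log:
#!/bin/bash
LOGFILE=/home/transfer_logs/a.log

cd /home/send || exit 1

while :; do
    # -p passive mode, -v verbose, -i no per-file prompting for mput
    ftp -p -v -i 192.163.3.3 < example.script > "$LOGFILE" 2>&1
    # stop retrying once the log reports success
    grep -qF "File successfully transferred" "$LOGFILE" && break
    sleep 1   # don't hammer the server between attempts
done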
Will it work or not?
No, it won't work. According to §3.2.4.1 "Looping Constructs" of the Bash Reference Manual, these are the kinds of loops that exist:
until test-commands; do consequent-commands; done
while test-commands; do consequent-commands; done
for name [ [in [words …] ] ; ] do commands; done
for (( expr1 ; expr2 ; expr3 )) ; do commands ; done
You'll notice that none of them begins with repeat.
Additionally, these two lines:
firstline=`egrep "Connected" $LOGFILE`
secondline=`egrep "File successfully transferred" $LOGFILE`
run egrep immediately, and set their variables accordingly. This command:
[[ -n $firstline && $secondline ]]
will always give the same return value, because nothing in the loop ever modifies $firstline or $secondline. You need to actually put an egrep command inside the loop.
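For example, a corrected version of the question's loop would re-run the checks on every iteration, roughly like this sketch (using the same log strings the question greps for):
until egrep -q "Connected" "$LOGFILE" &&
      egrep -q "File successfully transferred" "$LOGFILE"
do
    ftp -p -v -i 192.163.3.3 < example.script > "$LOGFILE" 2>&1
done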
I can't tell if something I'm trying here is simply impossible or if I'm really lacking knowledge in bash's syntax. This is the first script I've written.
I've got a Nextcloud instance that I am backing up daily using a script. I want to log the output of the script as it runs to a log file. This is working fine, but I wanted to see if I could also pipe the Nextcloud occ command's output to the log file too.
I've got an if statement here checking if the file scan fails:
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
This works fine and I am able to handle the error if the system cannot execute the command. The error string above is sent to this function:
Print()
{
if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
echo "$1" | tee -a "$log_file"
elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
echo "$1" >> "$log_file"
elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
echo "$1"
fi
}
How can I make it so the output of the occ command is also piped to the Print() function so it can be logged to the console and log file?
I've tried piping the command after ! using | Print without success.
Any help would be appreciated, cheers!
The Print function doesn't read standard input so there's no point piping data to it. One possible way to do what you want with the current implementation of Print is:
if ! occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
Print "'occ' output: $occ_output"
Since there is only one line in the body of the if statement you could use || instead:
occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1) \
|| Print "Error: Failed to scan files. Are you in maintenance mode?"
Print "'occ' output: $occ_output"
The 2>&1 causes both standard output and error output of occ to be captured to occ_output.
Note that the body of the Print function could be simplified to:
[[ $quiet_mode == No ]] && printf '%s\n' "$1"
(( logging )) && printf '%s\n' "$1" >> "$log_file"
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I replaced echo "$1" with printf '%s\n' "$1".
How's this? A bit unorthodox perhaps.
Print()
{
case $# in
0) cat;;
*) echo "$#";;
esac |
if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
tee -a "$log_file"
elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
cat >> "$log_file"
elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
cat
fi
}
With this, you can either
echo "hello mom" | Print
or
Print "hello mom"
and so your invocation could be refactored to
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
echo "Error: Failed to scan files. Are you in maintenance mode?"
fi |
Print
The obvious drawback is that piping into a function loses the exit code of any failure earlier in the pipeline.
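If that lost exit code matters, bash can recover it from the PIPESTATUS array, or you can make the whole pipeline fail with set -o pipefail. A tiny sketch using the Print above:
logging=0 quiet_mode=No     # so Print echoes to the console only
false | Print               # the pipeline's $? is Print's, i.e. 0
echo "first stage exited with ${PIPESTATUS[0]}"   # prints 1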
For a more traditional approach, keep your original Print definition and refactor the calling code to
if output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
: nothing
else
Print "error $?: $output"
Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
I would imagine that the error message will be printed to standard error, not standard output; hence the addition of 2>&1
I included the error code $? in the error message in case that would be useful.
The sending and receiving ends of a pipe must be processes, typically represented by an executable command. An if statement is not a process. You can, of course, put such a statement into a process. For example,
echo a | (
if true
then
cat
fi )
causes cat to write a to stdout, because the parentheses put it into a child process.
UPDATE: As was pointed out in a comment, the explicit subprocess is not needed. One can also do:
echo a | if true
then
cat
fi
I found this example of a conditional lsof loop and want to adapt it to my situation.
typeset fSrc="/path/to/sourcedir"
typeset fTgt="/path/to/targetdir"
while : ; do
ls /path/to/sourcedir | while read file ; do
if [ $(lsof $fSrc/$file | wc -l) -gt 1 ] ; then
echo "file $file still loading, skipping it"
else
mv $fSrc/$file $fTgt/$file
echo "file $file completed upload, moving it"
fi
done
done
My example would be more like this:
while any files are present in "/pathto/sourcedir"; do
if [ lsof "any file" in "/pathto/sourcedir" is being written or modified ]; then
echo "Files being written or modified, exiting"
exit
else
do something
fi
done
Can this be done? Is my logic somewhat close to correct?
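For what it's worth, a literal translation of that pseudocode might look like the sketch below. The paths are the question's placeholders, and it relies on lsof exiting with status 0 when some process currently has the file open:
#!/bin/bash
src="/pathto/sourcedir"

# keep going while the directory still contains anything
while [ -n "$(ls -A "$src")" ]; do
    for f in "$src"/*; do
        # lsof exits 0 if some process currently has the file open
        if lsof "$f" >/dev/null 2>&1; then
            echo "Files being written or modified, exiting"
            exit 1
        fi
        # do something with "$f" here (presumably move it away)
    done
done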
I am running a function in bash that compares two files, $SRC and $DEST, to verify that a command worked. Here is my working function. However, if I remove the echo " " then it returns unsuccessful even when the command worked; if I keep it in, it adds an extra blank output line and reports success. I've tried the following:
if [[ cmp -s "$SRC" "$DEST" >/dev/null 2>&1 ]]
and that returns errors. Any ideas?
copysuccess()
{
#Variable to track command
local COM=$1
if cmp -s "$SRC" "$DEST" >/dev/null 2>&1
echo " "
then
echo "$COM was successful"
else
echo "$COM was unsuccessful"
fi
}
Update:
I tried the following code and now it outputs
cmp: file: is a directory.
I should have noted that this was for files AND directories. Sorry.
Also, if I move a directory so that it overwrites an existing one, it outputs nothing.
Does not work:
cmp -s "$SRC" "$DEST"; [[ "$?" = 0 ]] && echo "$COM was successful" || echo "$COM was unsuccessful"
Does not work:
cmp -s "$SRC" "$DEST" && echo "$COM was successful" || echo "$COM was unsuccessful"
You should check the return value of the executed command instead of your current approach. This is how I would have done it:
cmp -s "$SRC" "$DEST" >/dev/null 2>&1 && echo "success" || echo "failure"
Since the -s option suppresses all output, the redirections can be removed.
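Note that cmp only compares regular files, which is why the update above shows cmp: file: is a directory. If directories need handling too, one option would be diff -r, which also recurses into directories (a sketch, adjust for your layout):
if diff -rq "$SRC" "$DEST" >/dev/null 2>&1
then
    echo "$COM was successful"
else
    echo "$COM was unsuccessful"
fi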
Ended up going with Jdamian's advice and using $?
I created a global that got reset every time a major function was entered, and when I got to the copy portion,
cp -f $SRC $DEST
TEST=$?
was added. Then
copysuccess()
{
#Variable to track command
local COM=$1
if [[ $TEST == 0 ]]
then
echo "$COM was successful"
else
echo "$COM was unsuccessful"
fi
}
was the best option. I basically wanted to verify that the command ran successfully, which I wasn't clear about, and to verify that the copy worked; too bad my way was asinine and I didn't know about $?.
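Since copysuccess only inspects an exit status, the intermediate $TEST variable isn't strictly necessary; testing the cp directly works just as well (a minimal sketch):
if cp -f "$SRC" "$DEST"
then
    echo "copy was successful"
else
    echo "copy was unsuccessful"
fi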
I wrote a little bash script called "wp", which uploads files to an FTP server using the wput utility. It takes the list of files from a text file. When an upload finishes, the script comments out that line in the text file with a hash sign. The success of the upload is detected from the last line of the logfile. My question is: how can I prevent multiple instances of my script from starting? I am trying to detect a running instance with pgrep, but it doesn't work correctly:
#!/bin/bash
if [ "$(pgrep ^wp$|wc -l)" -eq "2" ]
then
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is starting..."
else
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is already running!"
exit
fi
server="ftp://username:password#ftp.ftpserver.com"
logfile=~/uploads.log
listfile=~/uploads.txt
list_backup=~/uploads_bak000.txt
while read f;
do
ret=""
if [ "${f:0:1}" = "#" -o "$f"1 = 1 ]
then
if [ "$f"1 = 1 ]
then
:
#echo "invalid string: "$f
else
#first character is remark sign # then empty command -> :
echo "remark line skipped: "$f
fi
else
#while string $ret is empty
while [ -z "$ret" ]
do
wput "$f" --tries=-1 "$server" 2>&1|tee -a $logfile #> /dev/null
ret=$(tail -n 1 "$logfile"|grep "FINISHED\|Nothing\|Skipped\|Transfered")
done
if [ -n "$ret" ]
then
cat $listfile > $list_backup
awk -v f="$f" '{if ($0==f && $0!~/#/) print "#" $0; else print $0;}' $list_backup > $listfile
fi
fi
done < $listfile
There are quick-n-dirty solutions that use ps with grep (don't do this).
It is better to use a lock file as a "mutex". A nice way of doing this is by using a directory as a lock file (http://mywiki.wooledge.org/BashFAQ/045).
I would also suggest taking a look at http://mywiki.wooledge.org/ProcessManagement#How_do_I_make_sure_only_one_copy_of_my_script_can_run_at_a_time.3F, which mentions setlock (http://cr.yp.to/daemontools/setlock.html), a tool that abstracts the lock-file handling for you.
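A minimal sketch of the directory-as-lock idea from the BashFAQ link (the lock path is an assumption):
#!/bin/bash
lockdir=/tmp/wp.lock

# mkdir is atomic: it fails if the directory already exists
if ! mkdir "$lockdir" 2>/dev/null; then
    echo "wp script is already running!"
    exit 1
fi
# remove the lock on any exit, including interrupts
trap 'rmdir "$lockdir"' EXIT

echo "wp script is starting..."
# ... rest of the upload loop goes here ...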
How can I use the test command for an arbitrary number of files, passed in using an argument with a wildcard?
For example:
test -f /var/log/apache2/access.log.* && echo "exists one or more files"
Currently, it prints
error: bash: test: too many arguments
This solution seems more intuitive to me:
if [ `ls -1 /var/log/apache2/access.log.* 2>/dev/null | wc -l ` -gt 0 ];
then
echo "ok"
else
echo "ko"
fi
To avoid "too many arguments error", you need xargs. Unfortunately, test -f doesn't support multiple files. The following one-liner should work:
for i in /var/log/apache2/access.log.*; do test -f "$i" && echo "exists one or more files" && break; done
By the way, /var/log/apache2/access.log.* is a shell glob, not a regexp. Please see Confusion with shell-globbing wildcards and Regex for more information.
First, store files in the directory as an array:
logfiles=(/var/log/apache2/access.log.*)
Then perform a test on the count of the array:
if [[ ${#logfiles[@]} -gt 0 ]]; then
echo 'At least one file found'
fi
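One caveat: without nullglob, a non-matching pattern stays in the array unexpanded, so the count is still 1. Enabling nullglob first makes the count honest:
shopt -s nullglob
logfiles=(/var/log/apache2/access.log.*)
if [[ ${#logfiles[@]} -gt 0 ]]; then
    echo 'At least one file found'
fi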
This one is suitable for use with the Unofficial Bash Strict Mode, as it does not produce a non-zero exit status when no files are found.
The array logfiles=(/var/log/apache2/access.log.*) will always contain at least the unexpanded glob, so one can simply test for existence of the first element:
logfiles=(/var/log/apache2/access.log.*)
if [[ -f ${logfiles[0]} ]]
then
echo 'At least one file found'
else
echo 'No file found'
fi
If you wanted a list of files to process as a batch, as opposed to doing a separate action for each file, you could use find, store the results in a variable, and then check if the variable was not empty. For example, I use the following to compile all the .java files in a source directory.
SRC=`find src -name "*.java"`
if [ -n "$SRC" ]; then
javac -classpath $CLASSPATH -d obj $SRC
# stop if compilation fails
if [ $? != 0 ]; then exit; fi
fi
You just need to test if ls has something to list:
ls /var/log/apache2/access.log.* >/dev/null 2>&1 && echo "exists one or more files"
Variation on a theme:
if ls /var/log/apache2/access.log.* >/dev/null 2>&1
then
echo 'At least one file found'
else
echo 'No file found'
fi
ls -1 /var/log/apache2/access.log.* 2>/dev/null | grep . && echo "One or more files exist."
Or using find
if [ $(find /var/log/apache2/ -type f -name "access.log.*" | wc -l) -gt 0 ]; then
echo "ok"
else
echo "ko"
fi
The redirection below is applied to the [[ ]] condition itself, so the stderr from the ls inside the command substitution goes to /dev/null and nothing leaks out when no files match. Therefore I suggest this code:
if [[ $(ls -1 /var/log/apache2/access.log.* | wc -l ) -gt 0 ]] 2> /dev/null
then
echo "exists one or more files."
fi
Or, more simply:
if ls /var/log/apache2/access.log.* 2>/dev/null 1>&2; then
echo "ok"
else
echo "ko"
fi