Bash curl returns 0 whether the copy has finished or not - bash

I'm calling curl from bash to copy a file from a mounted SD card, with the option to resume the copy later if the device gets unmounted. I get the same exit status, 0, when I interrupt the copy by unmounting the volume as when the file actually gets copied in full. Any suggestions on how to catch the case where the file has not been copied completely?
I'm copying only one file at a time.
This is the command:
curl -C - -O file:///mnt/sdcard/DCIM/100/0044.MP4

I came up with a solution that is not as clean as I would like, but it works: I run the command twice in a row. When the first command returns 0 because of an unmount, the second one then tries to copy the file and returns exit code 37 because the source is unreachable. If the second command also returns 0, I consider the file copied.
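For illustration, the double-invocation check boils down to something like this (a sketch using the example path from the question; the echo messages are just placeholders):
curl -C - -O "file:///mnt/sdcard/DCIM/100/0044.MP4" &&
curl -C - -O "file:///mnt/sdcard/DCIM/100/0044.MP4" &&
echo "file fully copied" ||
echo "copy incomplete or source unreachable (curl exit code $?)"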

Following your concept, you could have a script like this:
#!/bin/bash
# Copies files persistently.
#
# Usage: pc <filepath> [<filepath2>] ...
#
function pc {
    local FILE
    for FILE; do
        echo "Copying $FILE."
        until curl -C - -O "file://${FILE}" && curl -C - -O "file://${FILE}"; do
            if [[ -e $FILE ]]; then
                echo "File $FILE can't be copied."
                break
            else
                echo "Waiting for $FILE."
                until
                    sleep 5
                    [[ -e $FILE ]]
                do
                    continue
                done
            fi
        done
    done
}
pc "$@"
You could also just embed the function in a bash startup script if you like.
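For example, after sourcing the function from ~/.bashrc (or whichever startup file you use), you would call it with one or more absolute paths, such as the file from the question:
pc /mnt/sdcard/DCIM/100/0044.MP4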

Related

Unable to break out of While loop

I would like to break out of a loop when it gets to a blank line in a file. The issue is that the regexps I use to condition my data create a line that still contains characters, so I need something at the beginning of each iteration to check whether a line is empty or not so I can break out. What am I missing?
#!/bin/bash
#NOTES: chmod this script with chmod 755 to run as regular local user
#This line allows for passing in a source file as an argument to the script (i.e: ./script.sh source_file.txt)
input_file="$1"
#This creates the folder structure used to mount the SMB Share and copy the assets over to the local machines
SOURCE_FILES_ROOT_DIR="${HOME}/operations/source"
DESTINATION_FILES_ROOT_DIR="${HOME}/operations/copied_files"
#This creates the fileshare mount point and place to copy files over to on the local machine.
echo "Creating initial folders..."
mkdir -p "${SOURCE_FILES_ROOT_DIR}"
mkdir -p "${DESTINATION_FILES_ROOT_DIR}"
echo "Folders Created! Destination files will be copied to ${DESTINATION_FILES_ROOT_DIR}/SHARE_NAME"
while read -r line;
do
    if [ -n "$line" ]; then
        continue
    fi
    line=${line/\\\\///}
    line=${line//\\//}
    line=${line%%\"*\"}
    SERVER_NAME=$(echo "$line" | cut -d / -f 4);
    SHARE_NAME=$(echo "$line" | cut -d / -f 5);
    ASSET_LOC=$(echo "$line" | cut -d / -f 6-);
    SMB_MOUNT_PATH="//$(whoami)#${SERVER_NAME}/${SHARE_NAME}";
    if df -h | grep -q "${SMB_MOUNT_PATH}"; then
        echo "${SHARE_NAME} is already mounted. Copying files..."
    else
        echo "Mounting it"
        mount_smbfs "${SMB_MOUNT_PATH}" "${SOURCE_FILES_ROOT_DIR}"
    fi
    cp -a ${SOURCE_FILES_ROOT_DIR}/${ASSET_LOC} ${DESTINATION_FILES_ROOT_DIR}
done < $input_file
# cleanup
hdiutil unmount ${SOURCE_FILES_ROOT_DIR}
exit 0
The expected result was for the script to notice when it gets to a blank line and then stop. The script works when I remove the
if [ -n "$line" ]; then
    continue
fi
block. With it in place, the script runs and pulls assets but just keeps going and never breaks out, and I get:
Creating initial folders...
Folders Created! Destination files will be copied to /Users/baguiar/operations/copied_files
Mounting it
mount_smbfs: server connection failed: No route to host
hdiutil: unmount: "/Users/baguiar/operations/source" failed to unmount due to error 16.
hdiutil: unmount failed - Resource busy
cat test.txt
This is some file
There are lines in it
And empty lines
etc
while read -r line; do
    if [[ -n "$line" ]]; then
        continue
    fi
    echo "$line"
done < "test.txt"
will print out nothing but the blank lines.
That's because -n matches strings that are not null, i.e., non-empty.
It sounds like you have a misunderstanding of what continue means. It does not mean "continue on in this step of the loop", it means "continue to the next step of the loop", i.e., go to the top of the while loop and run it starting with the next line in the file.
Right now, your script says "go line by line, and if the line is not empty, skip the rest of the processing". I think your goal is actually "go line by line, and if the line is empty, skip the rest of the processing". This would be achieved by if [[ -z "$line" ]]; then continue; fi
TL;DR You are skipping all the non-empty lines. Use -z to check if your variable is empty instead of -n.
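A minimal sketch of the corrected loop, assuming the goal from the question is to stop entirely at the first blank line (swap break for continue if you only want to skip blank lines):
while read -r line; do
    if [[ -z "$line" ]]; then
        break    # first empty line reached: stop
    fi
    echo "$line"
done < "test.txt"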

Make script that reads argument from command line

I am running quantum chemical calculations by issuing the command molcas -f file.input. I now need to put the molcas -f call into a script that also tails the last 100 lines of the generated file.log, so I can quickly confirm that everything finished the way it's supposed to. So I want to run the script run.sh:
#!/bin/bash
molcas -f [here read the file.input from command line]
tail -100 [here read the file.log]
The question is how I can make the script read the argument I give it, and then find the output file (which has the same filename but a different extension) on its own.
Follow-up
Say I have a bunch of numbered files file-1, file-2, ..., file-n. I would save time if, instead of running
./run.sh file-1.input file-1.log
I could run
./run.sh n n
or
./run.sh n.input n.log
assuming that the actual filename and the placement of the number n are given in the script. Can that be done?
With this code:
#!/bin/bash
molcas -f "$1"
tail -100 "$2"
You will need to execute the script run.sh as follows:
./run.sh file.input file.log
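If the log file always shares the input file's base name, you can also derive it from the first argument instead of passing it twice. A minimal sketch, assuming the .input/.log naming from the question (the file- prefix used for the bare-number form is an assumption):
#!/bin/bash
arg="$1"
case "$arg" in
    *.input) base="${arg%.input}" ;;   # full name given, e.g. file-1.input
    *)       base="file-${arg}" ;;     # bare number given, e.g. 1 (assumed file-n naming)
esac
molcas -f "${base}.input"
tail -100 "${base}.log"
With that, ./run.sh file-1.input and ./run.sh 1 would both run molcas on file-1.input and then tail file-1.log.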
To be honest, I have/had no clue about molcas, so I jumped over to their site to get a basic understanding.
The syntax should look something like this:
#!/bin/bash
# wait for input
read -p "Enter a filename (sample.txt): " FILE
# check that the file exists
if [ -e "$FILE" ]; then
    read -p "Enter a command for molcas: " CMD
else
    echo "Sorry, \"${FILE}\" was not found. Exiting program."
    exit 1
fi
# I am not sure how this command works.
# You may have to edit this line yourself.
molcas "$FILE" -f "$CMD"
# check for program errors
ERRNO=$?
if [ "$ERRNO" -gt 0 ]; then
    echo "Molcas returned an error. Number: ${ERRNO}"
    exit 1
fi
# cut off the file extension (for example: sample.txt becomes sample)
FILENAME="${FILE%.*}"
# tail the other generated files
ERRFILE="${FILENAME}.err"
tail -n 100 "$ERRFILE"
LOGFILE="${FILENAME}.log"
tail -n 100 "$LOGFILE"
exit 0
I would have posted more, but it's not clear what to do with this data.
Hope this helps a bit.

Monitor and ftp newly-added files on Linux -- modify existing code

The OS is CentOS 7. I have a small application that needs to implement the functionality below:
1. Read information from config.ini, which looks like this:
# Configuration file for ftpxml service
# Remote FTP server information
ftpadress=1.2.3.4
username=test
password=test
# Local folders configuration
# folderA: folder for incoming files
folderA=/u02/dump
# folderB: successfully transferred files are copied here
folderB=/u02/dump_bak
# retrydir: when ftp upload fails, store failed files in this
# directory
retrydir=/u02/dump_retry
2. Monitor folder A. If there are any newly-added files in A, go to step 3.
3. FTP these new files to the remote FTP server in the order of their creation time. When an upload finishes, copy the uploaded file to folder B and delete it from folder A.
4. If the FTP upload fails, store the affected files in retrydir and try to FTP them again later.
5. Record every operation in a log file.
Detailed setup instructions for the application:
Install the ncftp package: yum install ncftp -y. It is not a service or a daemon, just a client tool that the bash script invokes for the FTP transfers.
Customize these files to suit your setup using vi: config.ini, ftpmon.path and ftpmon.service.
Copy ftpmon.path and ftpmon.service to /etc/systemd/system/, copy config.ini and ftpxml.sh to /u02/ftpxml/, and run: chmod +x ftpxml.sh
Start the monitoring tool:
sudo systemctl start ftpmon.path
If you want to enable it at boot time, just enter: sudo systemctl enable ftpmon.path
Set up a cron task to purge queued files (add option -p):
*/5 * * * * /u02/ftpxml/ftpxml.sh -p
The application now seems to work well, except in one particular situation:
When we put several files into folder A in quick succession, for instance 1.txt, 2.txt and 3.txt one after another within a short time, we usually find that 1.txt uploads fine, but the following files fail to upload and remain in folder A.
I am now trying to fix this problem. I suspect the cause is that while the first file is being uploaded, the second file may already have been created in folder A, so the code never notices it.
Below is the code of ftpxml.sh:
#!/bin/bash
# ! Please read the README.txt file !
# Copy files from the upload dir to the remote FTP server, then save them
# to folderB
# figure out our location
SCRIPT=$(readlink -f $0)
# Absolute path to this script
SCRIPTPATH=`dirname $SCRIPT`
PIDFILE=${SCRIPTPATH}/ftpmon_prog.lock
# load config.ini
if [ -f $SCRIPTPATH/config.ini ]; then
    source $SCRIPTPATH/config.ini
else
    echo "No config found. Exiting"
fi
# Lock to avoid multiple instances
if [ -f $PIDFILE ]; then
    kill -0 $(cat $PIDFILE) 2> /dev/null
    if [ $? == 0 ]; then
        exit
    fi
fi
# Store PID in lock file
echo $$ > $PIDFILE
# Parse cmdline arguments
while getopts ":ph" opt; do
    case $opt in
        p)
            # set purge mode (cron mode)
            purge_only=1
            ;;
        \?|h)
            echo "Help text"
            exit 1
            ;;
    esac
done
# declare useful functions
# common logging function
function logmsg() {
    LOGFILE=ftp_upload_`date +%Y%m%d`.log
    echo $(date +%m-%d-%Y\ %H:%M:%S) $* >> $SCRIPTPATH/log/${LOGFILE}
}
# Upload to remote FTP
# we use ncftpput to batch silently
# $1: file to upload; the exit status is the result
function upload() {
    ncftpput -V -u $username -p $password $ftpadress /prog/ $1
    return $?
}
function purge_retry() {
    failed_files=$(ls -1 -rt ${retrydir}/*)
    if [ -z $failed_files ]; then
        return
    fi
    while read line
    do
        #upload ${retrydir}/$line
        upload $line
        if [ $? != 0 ]; then
            # upload failed, we exit
            exit
        fi
        logmsg File $line Uploaded from ${retrydir}
        mv $line $folderB
        logmsg File $line Copied from ${retrydir}
    done <<< "$failed_files"
}
# first check the 'queue' directory
purge_retry
# if called from a cron task we are done
if [ $purge_only ]; then
    rm -f $PIDFILE
    exit 0
fi
# look in the incoming dir
new_files=$(ls -1 -rt ${folderA}/*)
while read line
do
    # launch upload
    if [ Z$line == 'Z' ]; then
        break
    fi
    #upload ${folderA}/$line
    upload $line
    if [ $? == 0 ]; then
        logmsg File $line Uploaded from ${folderA}
    else
        # upload failed, cp to retry folder
        echo upload failed
        cp $line $retrydir
    fi
    # whether the upload succeeded or failed, ALWAYS move the file to folderB
    mv $line $folderB
    logmsg File $line Copied from ${folderA}
done <<< "$new_files"
# clean exit
rm -f $PIDFILE
exit 0
Below is the content of ftpmon.path:
[Unit]
Description= Triggers the service that logs changes.
Documentation= man:systemd.path
[Path]
# Enter the path to monitor (/u02/dump)
PathModified=/u02/dump/
[Install]
WantedBy=multi-user.target
Below is the content of ftpmon.service:
[Unit]
Description= Starts File Upload monitoring
Documentation= man:systemd.service
[Service]
Type=oneshot
#Set here the user that ftpxml.sh will run as
User=root
#Set the exact path to the script
ExecStart=/u02/ftpxml/ftpxml.sh
[Install]
WantedBy=default.target
Thanks in advance; I hope someone can give me some suggestions.
Since you remove the successfully transferred files from A, you can leave files with transfer errors in A. So I am dealing only with files in one folder.
List your files by creation time with
find . -maxdepth 1 -type f -print0 | xargs -r0 stat -c '%y %n' | sort
if you want hidden files to be included, or, if not,
find . -maxdepth 1 -type f ! -name '.*' -print0 | xargs -r0 stat -c '%y %n' | sort
You'll get something like
2016-02-19 18:53:41.000000000 ./.dockerenv
2016-02-19 18:53:41.000000000 ./.dockerinit
2016-02-19 18:56:09.000000000 ./versions.txt
2016-02-19 19:01:44.000000000 ./test.sh
Now cut off the timestamps to get the filenames (or use xargs -r0 stat -c %n if it does not matter that the files are then ordered by name instead of by timestamp) and
do the transfer
check the success
move successfully transferred files to B
As you stated above, there are situations where newly stored files are not transferred successfully. This can happen when a file is still being written to after you have started the transfer. So filter on the timestamp so that a file has to be at least some time old. Add -mmin +1 to the find command for "more than one minute old":
find . -maxdepth 1 -type f -mmin +1 -print0 | xargs -r0 stat -c %n | sort
If you don't want to rely on a one-minute file age, you'll have to check whether the file is still open, e.g. lsof | grep ./testfile, but this may have issues if you have tmpfs file systems mounted:
lsof | grep ./testfile
lsof: WARNING: can't stat() tmpfs file system /var/lib/docker/containers/8596cd310292a54652c7f50d7315c8390703b4816442146b340946779a72a40c/shm
Output information may be incomplete.
lsof: WARNING: can't stat() proc file system /run/docker/netns/fb9323486c44
Output information may be incomplete.
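A rough sketch of that lsof check, relying on lsof's exit status instead of grep (lsof exits non-zero when no process has the file open); folderA comes from the script above:
for f in "${folderA}"/*; do
    if lsof "$f" > /dev/null 2>&1; then
        continue   # some process still has the file open; retry on the next run
    fi
    # ... safe to upload "$f" here ...
done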
Alternatively, add %s to the stat format to record the file size, check it twice within a few seconds, and if the size is constant the file has probably been written completely. Probably, because the writing process may merely be stalled.
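A minimal sketch of that size-stability check, assuming GNU stat and an arbitrary two-second pause (upload and folderA are the function and variable from ftpxml.sh above):
# returns 0 if the file's size did not change over a short interval
is_complete() {
    local f="$1" s1 s2
    s1=$(stat -c %s "$f") || return 1
    sleep 2
    s2=$(stat -c %s "$f") || return 1
    [ "$s1" = "$s2" ]
}
# example: only upload files whose size has stopped growing
for f in "${folderA}"/*; do
    is_complete "$f" && upload "$f"
done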

Getting continue behavior (not redownloading files already present) using lftp.

So, I have a script which downloads stuff from a seedbox. It works great for new files which exist on the remote server and are then mirrored to my local server. The problem is that when I remove files locally, for example files I no longer need, running the script again re-downloads them. I went through the man page for mirror but it wasn't helpful. Here is the script which mirrors the files:
#!/bin/bash
login=XXXX
pass=XXXXXX
host=XXXXX
remote_dir=/files/
local_dir=/home/XXX/XXX
trap "rm -f /tmp/seedroots.lock" SIGINT SIGTERM
if [ -e /tmp/seedroots.lock ]; then
    echo "Synctorrent is running already."
    exit 1
else
    touch /tmp/seedroots.lock
    lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mirror -c -P5 --log=synctorrents.log $remote_dir $local_dir
EOF
    rm -f /tmp/seedroots.lock
    exit 0
fi
Is there an option for mirror which I am missing that doesn't re-download the locally deleted file(s) again?
The mirror command in lftp has a --continue flag which will result in the behavior you want.
You could give my version of your script a try (not tested):
#!/bin/bash
login=XXXX
pass=XXXXXX
host=XXXXX
remote_dir=/files/
local_dir=/home/XXX/XXX
files=$local_dir/*
trap "rmdir /tmp/seedroots.lock" 0 1 2 3 15
if [[ -d /tmp/seedroots.lock ]]; then
    echo "Synctorrent is running already."
    exit 1
else
    mkdir /tmp/seedroots.lock
    lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mget $files
EOF
fi
What it does:
I build a local list of files and, subsequently, mget all of those files from the FTP server via the $files variable.
I replaced the lock file with a lock directory: creating a directory with mkdir is an atomic test-and-set, whereas checking for a file and then creating it is not (search the web for lock-file atomicity).
The trap runs on normal exit as well as on the listed signals.
If you are using bash, [[ ]] tests are more powerful.
Indentation is not just an option ;)
If you are just leeching the files (not seeding), you can use lftp's mirror with the --Remove-source-files option to remove the files at the source after transfer, so there are no duplicates and no re-downloads.
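For example, the heredoc in the script above could be changed to something like this (a sketch that keeps the question's variables and log file):
lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mirror -c -P5 --Remove-source-files --log=synctorrents.log $remote_dir $local_dir
EOF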

Shell script help

I need help combining two scripts into one. There are two different ways to detect whether there is an issue with a bad NFS mount. One is that, when there is an issue, a df will hang; the other is that df works but there are other issues with the mount which a find <mount name> -type d will catch.
I'm trying to combine the scripts to catch both issues: run the find -type d and, if there is an issue, return an error. If the second kind of NFS issue occurs and the find hangs, kill the find command after 2 seconds, then run the second part of the script, and if that NFS issue is occurring, return an error. If neither type of NFS issue is occurring, return an OK.
MOUNTS="egrep -v '(^#)' /etc/fstab | grep nfs | awk '{print $2}'"
MOUNT_EXCLUDE=()
if [[ -z "${NFSdir}" ]] ; then
    echo "Please define a mount point to be checked"
    exit 3
fi
if [[ ! -d "${NFSdir}" ]] ; then
    echo "NFS CRITICAL: mount point ${NFSdir} status: stale"
    exit 2
fi
cat > "/tmp/.nfs" << EOF
#!/bin/sh
cd \$1 || { exit 2; }
exit 0;
EOF
chmod +x /tmp/.nfs
for i in ${NFSdir}; do
    CHECK="ps -ef | grep "/tmp/.nfs $i" | grep -v grep | wc -l"
    if [ $CHECK -gt 0 ]; then
        echo "NFS CRITICAL : Stale NFS mount point $i"
        exit $STATE_CRITICAL;
    else
        echo "NFS OK : NFS mount point $i status: healthy"
        exit $STATE_OK;
    fi
done
The MOUNTS and MOUNT_EXCLUDE lines are immaterial to this script as shown.
You've not clearly identified where ${NFSdir} is being set.
The first part of the script assumes ${NFSdir} contains a single directory value; the second part (the loop) assumes it may contain several values. Maybe this doesn't matter since the loop unconditionally exits the script on the first iteration, but it isn't the clear, clean way to write it.
You create the script /tmp/.nfs but:
You don't execute it.
You don't delete it.
You don't allow for multiple concurrent executions of this script by making a per-process file name (such as /tmp/.nfs.$$).
It is not clear why you hide the script in the /tmp directory with the . prefix to the name. It probably isn't a good idea.
Use:
tmpcmd=${TMPDIR:-/tmp}/nfs.$$
trap "rm -f $tmpcmd; exit 1" 0 1 2 3 13 15
...rest of script - modified to use the generated script...
rm -f $tmpcmd
trap 0
This gives you the maximum chance of cleaning up the temporary script.
There is no df left in the script, whereas the question implies there should be one. You should also look into the timeout command (though commands hung because NFS is not responding are generally very difficult to kill).
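A rough sketch of that timeout idea (assuming GNU coreutils timeout, the single mount point in ${NFSdir}, and the 2-second limit from the question); note that, as said above, a process stuck in uninterruptible NFS I/O may not die even when the timeout fires:
if ! timeout 2 find "${NFSdir}" -maxdepth 1 -type d > /dev/null 2>&1; then
    echo "NFS CRITICAL : Stale or hanging NFS mount point ${NFSdir}"
    exit 2
fi
if ! timeout 2 df "${NFSdir}" > /dev/null 2>&1; then
    echo "NFS CRITICAL : df hangs on mount point ${NFSdir}"
    exit 2
fi
echo "NFS OK : NFS mount point ${NFSdir} status: healthy"
exit 0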
