I would like to break out of a loop when it gets to a blank line in a file. The problem is that the regexes I use to condition my data leave characters on the line, so I need a check at the top of the loop to tell whether a line is empty, so I can break out. What am I missing?
#!/bin/bash
#NOTES: chmod this script with chmod 755 to run as regular local user
#This line allows for passing in a source file as an argument to the script (i.e: ./script.sh source_file.txt)
input_file="$1"
#This creates the folder structure used to mount the SMB Share and copy the assets over to the local machines
SOURCE_FILES_ROOT_DIR="${HOME}/operations/source"
DESTINATION_FILES_ROOT_DIR="${HOME}/operations/copied_files"
#This creates the fileshare mount point and place to copy files over to on the local machine.
echo "Creating initial folders..."
mkdir -p "${SOURCE_FILES_ROOT_DIR}"
mkdir -p "${DESTINATION_FILES_ROOT_DIR}"
echo "Folders Created! Destination files will be copied to ${DESTINATION_FILES_ROOT_DIR}/SHARE_NAME"
while read -r line;
do
if [ -n "$line" ]; then
continue
fi
line=${line/\\\\///}
line=${line//\\//}
line=${line%%\"*\"}
SERVER_NAME=$(echo "$line" | cut -d / -f 4);
SHARE_NAME=$(echo "$line" | cut -d / -f 5);
ASSET_LOC=$(echo "$line" | cut -d / -f 6-);
SMB_MOUNT_PATH="//$(whoami)@${SERVER_NAME}/${SHARE_NAME}";
if df -h | grep -q "${SMB_MOUNT_PATH}"; then
echo "${SHARE_NAME} is already mounted. Copying files..."
else
echo "Mounting it"
mount_smbfs "${SMB_MOUNT_PATH}" "${SOURCE_FILES_ROOT_DIR}"
fi
cp -a "${SOURCE_FILES_ROOT_DIR}/${ASSET_LOC}" "${DESTINATION_FILES_ROOT_DIR}"
done < "$input_file"
# cleanup
hdiutil unmount ${SOURCE_FILES_ROOT_DIR}
exit 0
The expected result was for the script to notice when it gets to a blank line and then stop. The script works when I remove the
if [ -n "$line" ]; then
continue
fi
The script runs and pulls assets but just keeps on going and never breaks out. When I run it as-is, I get:
Creating initial folders...
Folders Created! Destination files will be copied to /Users/baguiar/operations/copied_files
Mounting it
mount_smbfs: server connection failed: No route to host
hdiutil: unmount: "/Users/baguiar/operations/source" failed to unmount due to error 16.
hdiutil: unmount failed - Resource busy
cat test.txt
This is some file
There are lines in it
And empty lines
etc
while read -r line; do
if [[ -n "$line" ]]; then
continue
fi
echo "$line"
done < "test.txt"
will print out only the blank lines.
That's because -n matches strings that are not null, i.e., non-empty.
It sounds like you have a misunderstanding of what continue means. It does not mean "continue on in this step of the loop"; it means "continue to the next iteration of the loop", i.e., go back to the top of the while loop and run it starting with the next line in the file.
Right now, your script says "go line by line, and if the line is not empty, skip the rest of the processing". I think your goal is actually "go line by line, and if the line is empty, skip the rest of the processing". This would be achieved by if [[ -z "$line" ]]; then continue; fi
TL;DR You are skipping all the non-empty lines. Use -z to check if your variable is empty instead of -n.
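And since the original goal was to stop at the first blank line rather than skip blank lines, break (not continue) does that; a minimal sketch:
while read -r line; do
    if [[ -z "$line" ]]; then
        break    # first blank line reached: leave the loop entirely
    fi
    echo "$line" # process non-empty lines here
done < "test.txt"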
Related
I am running quantum chemical calculations with the command molcas -f file.input. I now need to put the molcas -f call into a script that also tails the last 100 lines of the generated file.log, so I can quickly confirm that everything finished the way it's supposed to. So I want to run the script run.sh:
#!/bin/bash
molcas -f [here read the file.input from command line]
tail -100 [here read the file.log]
The question is how I can make the script read the argument I give, and then find on its own the output file (which has the same filename, but with a different extension).
Follow-up
Say I have a bunch of numbered files file-1, file-2, ..., file-n. It would save time if, instead of running
./run.sh file-1.input file-1.log
I could run
./run.sh n n
or
./run.sh n.input n.log
assuming that the actual filename and placement of the number n is given in the script. Can that be done?
With this code:
#!/bin/bash
molcas -f "$1"
tail -100 "$2"
You will need to execute the script run.sh as follows:
./run.sh file.input file.log
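For the follow-up: yes, the script can build both filenames from a single number. A sketch, assuming the file-n naming from the question (the file- prefix is an assumption; adjust it to your real names):
#!/bin/bash
# Usage: ./run.sh n  -- n expands to file-n.input and file-n.log
molcas -f "file-$1.input"
tail -100 "file-$1.log"
Then ./run.sh 7 would run molcas on file-7.input and tail file-7.log.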
To be honest, I have no real knowledge of molcas, so I looked it up to get a basic understanding.
The syntax should look like this ...
#!/bin/bash
# waiting for input
read -p "Enter a filename (sample.txt): " FILE
# checking for existing file
if [ -e "$FILE" ]; then
read -p "Enter a command for moculas: " CMD
else
echo "Sorry, \"${FILE}\" was not found. Exit prgramm."
exit 1
fi
# I am not sure how this command works;
# you may have to edit this line yourself.
molcas "$FILE" -f "$CMD"
# checking for program errors ($? is always set, so testing -gt 0 suffices)
ERRNO=$?
if [ "$ERRNO" -gt 0 ]; then
echo "Molcas returned an error. Number: ${ERRNO}"
exit 1
fi
# cuts off the file extension (for example: sample.txt becomes sample)
FILENAME="${FILE%.*}"
# checking other files
ERRFILE="${FILENAME}.err"
tail -n 100 "$ERRFILE"
LOGFILE="${FILENAME}.log"
tail -n 100 "$LOGFILE"
exit 0
I would have posted more, but it's not clear what to do with this data.
Hope this helps a bit.
I have an FTP server with thousands of directories. What I want to do is download a specific number of them (for example, 500 directories) using a shell script. How can I do that? I tried wget with the -Q option, for example "wget -Q25MB", which gives me 25MB of data. The problem is that each folder has a different size; therefore, using this command will stop the download in the middle of getting a specific folder.
Assuming wget returns an error when the download gets interrupted:
#!/bin/bash
to_del=() # empty to_del array in case you want to copy-paste this to a terminal instead of using a file
username=blablabla
password=blablabla
server=blablabla
printf -v today '%(%Y_%m_%d)T'
# Get the first 500 directory names to download (grep drops the . and .. entries)
ftp -n "$server" << EOF | grep -v '^\.\.\?$' | head -n 500 > "to_download_$today.txt"
user $username $password
ls
bye
EOF
# Then, you can download each folder one by one:
while read -r dir; do
if [[ -e $dir ]]; then
echo >&2 "WARNING: '$dir' already exists!"
continue # We don't download or remove it. Manual action needed
fi
if wget "ftp://$username:$password@$server/$dir"; then
to_del+=("$dir")
else
# The directory was not successfully downloaded; delete any temporary files
echo >&2 "WARNING: '$dir' download failed, skipping..."
rm -rf "$dir"
fi
done < "to_download_$today.txt"
# Now, delete the successfully downloaded folders using a single FTP connection
{
printf 'user %s %s\n' "$username" "$password"
for dir in "${to_del[@]}"; do
printf 'del %s\n' "$dir"
done
printf 'bye\n'
} | ftp -i -n "$server"
I'm trying to understand what these two commands are doing:
config=$(date +%s)
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
These lines appear in a bigger script, script.sh, which looks like this:
#! /bin/bash
config=$(date +%s)
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
countC=0
countS=`wc -l /var/lib/tomcat7/webapps/ROOT/DataMining/$config | sed 's/^\([0-9]*\).*$/\1/'`
let countS--
let countS--
let countS--
while read LINEC #read line
do
if [ "$countC" -gt 0 ]; then
if [ "$countC" -lt "$countS" ]; then
FILENAME="/var/lib/tomcat7/webapps/ROOT/DataMining/target/"$LINEC
count=0
countW=0
while read LINE
do
for word in $LINE;
do
echo "INSERT INTO data_mining.data (word, line, numWordLine, file) VALUES ('$word', '$count', '$countW', '$FILENAME');" >> /var/lib/tomcat7/webapps/ROOT/DataMining/query
mysql -u root -Alaba1515< /var/lib/tomcat7/webapps/ROOT/DataMining/query
echo > /var/lib/tomcat7/webapps/ROOT/DataMining/query
let countW++
done
countW=0
let count++
done < $FILENAME
count=0
rm -f /var/lib/tomcat7/webapps/ROOT/DataMining/query
rm -f /var/lib/tomcat7/webapps/ROOT/DataMining/$config
fi
fi
let countC++
done < /var/lib/tomcat7/webapps/ROOT/DataMining/$config #finish while
I was able to find lots of documentation about rsync and what it does, but I don't understand what the rest of the command does. Any help, please?
The first command assigns the current time (in seconds since epoch) to the shell variable config. For example:
$ config=$(date +%s)
$ echo $config
1446506996
rsync is a file copying utility. The second command thus makes a backup copy of the directory given in argument 1 (referred to as $1). The backup copy is placed in /var/lib/tomcat7/webapps/ROOT/DataMining/target. A log of what was copied is saved in /var/lib/tomcat7/webapps/ROOT/DataMining/$config:
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
The rsync options mean:
-r tells rsync to copy files, recursing into subdirectories.
-v tells it to be verbose so that it shows what is copied.
-z tells it to compress files during their transfer from one location to the other.
-h tells it to show any numbers in the output in human-readable format.
Note that because $1 is not inside double-quotes, this script will fail if the name of directory $1 contains whitespace.
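A hardened variant with the quoting fixed might look like this (a sketch; the options and paths are unchanged):
#!/bin/bash
config=$(date +%s)
# "$1" and the log path are quoted so names containing whitespace survive
rsync -rvzh "$1" /var/lib/tomcat7/webapps/ROOT/DataMining/target > "/var/lib/tomcat7/webapps/ROOT/DataMining/$config"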
I am working on a horribly old machine without logrotate.
[ Actually it has busybox 0.6, which is 'void of form' for most purposes. ]
I have openvpn running and I'd like to be able to see what it's been up to. The openvpn I'm using can output progress info to stdout or to a named log file.
I tried and failed to find a way to stop it using one log file and start it on another. Maybe some SIGUSR or something will make it close and re-open the output file, but I can't find it.
So I wrote a script which reads from stdin, and directs output to a rotating log file.
So now all I need to do is pipe the output from openvpn to it.
Job done.
Except that if I kill openvpn, the script which is processing its output just runs forever. There's nothing more it can do, so I'd like it to die automatically.
Is there any way to trap the situation in the script "EOF on STDIN" or something using "find the process ID which is feeding my stdin", or whatever?
I see that this resembles the question
"Tee does not exit after pipeline it's on has finished"
but it's not quite the same, in that I have no control over the behaviour of openvpn (save that I can kill it). I do have control over the script that receives the output of openvpn, but I can't work out how to detect the death of openvpn, or of the pipe from it to me.
My upper-level script is roughly:
vpn_command="openvpn --writepid ${sole_vpn_pid_file} \
--config /etc/openvpn/openvpn.conf \
--remote ${VPN_HOST} ${VPN_PORT} "
# collapse sequences of multiple spaces to one space
vpn_command_tight=$(echo -e ${vpn_command}) # must not quote the parameter
# We pass the pid file over explicitly in case we ever want to use multiple VPNs.
( ./${launchAndWaitScriptFullName} "${vpn_command_tight}" "${sole_vpn_pid_file}" 2>&1 | \
./vpn-log-rotate.sh 10000 /var/log/openvpn/openvpn.log ) &
If I kill the openvpn process, the "vpn-log-rotate.sh" one stays running.
That is, vpn-log-rotate.sh is:
#!/bin/sh
# @file vpn-log-rotate.sh
#
# @brief rotates stdin out to 2 levels of log files
#
# @param linesPerFile Number of lines to be placed in each log file.
# @param logFile Name of the primary log file.
#
# Archives the last log files on entry to .last files, then starts clean.
#
# @FIXME DGC 28-Nov-2014
# Note that this script does not die if the previous stage of the pipeline dies.
# It is possible that use of "trap SIGPIPE" or similar might fix that.
#
# @copyright Copyright Dexdyne Ltd 2014. All rights reserved.
#
# @author DGC
linesPerFile="$1"
logFile="$2"
# The location of this script as an absolute path. ( e.g. /home/Scripts )
scriptHomePathAndDirName="$(dirname "$(readlink -f "$0")")"
# The name of this script
scriptName="$( basename "$0" )"
. "${scriptHomePathAndDirName}/vpn-common.inc.sh"
# Includes /sbin/script_start.inc.sh
# Reads config file
# Sets up vpn_temp_directory
# Sets up functions to obtain process id, and check if process is running.
# includes vpn-script-macros
# Remember our PID, to make it easier for a supervisor script to locate and kill us.
echo $$ > ${vpn_log_rotate_pid_file}
onExit()
{
echo "vpn-log-rotate.sh is exiting now"
rm -f ${vpn_log_rotate_pid_file}
}
trap "( onExit )" EXIT
logFileRotate1="${logFile}.1"
# Currently remember the 2 previous logs, in a rather knife-and-fork manner.
logFileMinus1="${logFile}.minus1"
logFileMinus2="${logFile}.minus2"
logFileRotate1Minus1="${logFileRotate1}.minus1"
logFileRotate1Minus2="${logFileRotate1}.minus2"
# If the primary log file exists, rename it to be the rotated version.
rotateLogs()
{
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileRotate1}"
fi
}
# If the log files exist, rename them to be the archived copies.
archiveLogs()
{
if [ -f "${logFileMinus1}" ]
then
mv -f "${logFileMinus1}" "${logFileMinus2}"
fi
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileMinus1}"
fi
if [ -f "${logFileRotate1Minus1}" ]
then
mv -f "${logFileRotate1Minus1}" "${logFileRotate1Minus2}"
fi
if [ -f "${logFileRotate1}" ]
then
mv -f "${logFileRotate1}" "${logFileRotate1Minus1}"
fi
}
archiveLogs
rm -f "${LogFile}"
rm -f "${logFileRotate1}"
while true
do
lines=0
while [ ${lines} -lt ${linesPerFile} ]
do
read line
lines=$(( ${lines} + 1 ))
#echo $lines
echo ${line} >> ${logFile}
done
mv -f "${logFile}" "${logFileRotate1}"
done
exit_0
Change this:
read line
to this:
read line || exit
so that if read-ing fails (because you've reached EOF), you exit.
Better yet, change it to this:
IFS= read -r line || exit
so that you don't discard leading whitespace, and don't treat backslashes as special.
And while you're at it, be sure to change this:
echo ${line} >> ${logFile}
to this:
printf '%s\n' "$line" >> "$logFile"
so that you don't run into problems if $line has a leading -, or contains * or ?, or whatnot.
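Putting all three changes together, the inner loop of vpn-log-rotate.sh would become (a sketch of just that loop):
while [ ${lines} -lt ${linesPerFile} ]
do
    # exit the whole script once stdin hits EOF, i.e. openvpn has died
    IFS= read -r line || exit
    lines=$(( lines + 1 ))
    printf '%s\n' "$line" >> "$logFile"
done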
I'm calling curl from bash to copy a file from a mounted SD card, with the option to resume the copy later if the device gets unmounted. I receive the same exit status 0 when I interrupt the copy by unmounting the volume as when the file actually gets copied. Any suggestions on how to catch the case where the file has not been copied?
I'm copying only one file at a time.
This is the command:
curl -C - -O file:///mnt/sdcard/DCIM/100/0044.MP4
I came to a solution which is not as clean as I want, but it works. I execute the command twice, one run after the other, so when the first command returns 0 upon unmount, the second then tries to copy the file and returns error code 37 because of the unreachable source. If the second command returns 0, I consider the file copied.
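That double-run idea can be sketched directly with the command from the question; curl's exit code 37 means it couldn't read the source file:
src="file:///mnt/sdcard/DCIM/100/0044.MP4"
curl -C - -O "$src"               # can still return 0 if the card unmounts mid-copy
if curl -C - -O "$src"; then      # resume attempt: fails (37) if the source is gone
    echo "file copied"
else
    echo "copy failed" >&2
fi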
Following your concept you could have a script like this:
#!/bin/bash
# Copies files persistently.
#
# Usage: pc <filepath> [<filepath2>] ...
#
function pc {
local FILE
for FILE; do
echo "Copying $FILE."
until curl -C - -O "file://${FILE}" && curl -C - -O "file://${FILE}"; do
if [[ -e $FILE ]]; then
echo "File $FILE can't be copied."
break
else
echo "Waiting for $FILE."
until
sleep 5
[[ -e $FILE ]]
do
continue
done
fi
done
done
}
pc "$#"
You could also just embed the function in a bash startup script if you like.
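If you go that route, sourcing the file from your startup script would look like this (the path is only an assumption for illustration; you would also drop the trailing pc "$@" line so that sourcing defines the function without running it):
# in ~/.bashrc -- assumes the pc function above was saved to ~/bin/pc.sh
. ~/bin/pc.sh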