Bash syntax error - bash

Just learning bash and trying to implement a function in a script.
The script below runs through ShellCheck fine, but I get a syntax error when I run it with bash from the command line.
It must be the way I've defined my function, but I can't figure out the right way to do it, especially since it passes ShellCheck.
The error is on line 24:
syntax error near unexpected token `${\r''
`autounload() {
Script:
#!/bin/bash
# /home/pi/scripts/usb-unloader.sh
#
# Called from {SCRIPT_DIR}/usb-initloader.sh
# make sure to chmod 0755 on file
#
# UnMounts usb device on /media/<device>
# Logs changes to /var/log/syslog and local log folder
# use tail /var/log/syslog to look at latest events in log
#
# CONFIGURATION
#
LOG_FILE="$1"
MOUNT_DIR="$2"
DEVICE="$3" # USB device name (from kernel parameter passed from rule)
#
# check for defined log file
if [ -z "$LOG_FILE" ]; then
exit 1
fi
#
# autounload function to unmount USB device and remove mount folder
#
autounload() {
if [ -z "$MOUNT_DIR" ]; then
exit 1
fi
if [ -z "$DEVICE" ]; then
exit 1
fi
dt=$(date '+%Y-%m-%d %H:%M:%S')
echo "--- USB Auto UnLoader --- $dt"
sudo umount "/dev/$DEVICE"
sudo rmdir "$MOUNT_DIR/$DEVICE"
# test that this device isn't already mounted
device_mounted=$(grep "$DEVICE" /etc/mtab)
if ! "$device_mounted"; then
echo "/dev/$DEVICE successfully Un-Mounted"
exit 1
fi
}
autounload >> "$LOG_FILE" 2>&1
This is the part of the code where I start defining the function:
#
autounload() {
if [ -z "$MOUNT_DIR" ]; then
exit 1
fi
I've tried moving the function above and below where it gets called, but it seems to make no difference.

As discovered in the comments, you have a file with Windows (DOS) line endings that is then run on a UNIX machine. The problem is that DOS line endings are \r\n while UNIX line endings are just \n, so there is a superfluous \r at the end of every line.
To get rid of those you can use either of these:
In UNIX: dos2unix
In Windows: any of the tools described in Windows command to convert Unix line endings?
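For example, assuming the script lives at /home/pi/scripts/usb-unloader.sh (the path from its own header comment), something along these lines should confirm and fix the problem:
file /home/pi/scripts/usb-unloader.sh              # reports "CRLF line terminators" if the file is affected
grep -c $'\r' /home/pi/scripts/usb-unloader.sh     # counts the lines that end in \r
sed -i 's/\r$//' /home/pi/scripts/usb-unloader.sh  # strips the \r in place if dos2unix is not available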

Related

How do I get USB device info from a bash script running within a Docker container?

I made a mount.sh file inspired by balena-storage. It works when I log in to the container via the balena.io dashboard where I'm deploying (it could be the same elsewhere) and run the script manually. It hangs, with unpopulated variables, when the container runs the script at startup (a script that runs the script). I think it's a permissions issue or something about a script running a script. I'm not sure how to proceed with reading the USB device variables.
mount.sh:
# Automatically mount a USB drive by specified volume name.
# Note: make sure to have USB_VOLUME_NAME set in env vars.
# Thanks: https://github.com/balena-io-playground/balena-storage
echo "Checking for USB_VOLUME_NAME..."
echo "A"
if [[ -z $USB_VOLUME_NAME ]]; then
echo "Make sure to set environment variable USB_VOLUME_NAME in order to find a connected USB drive by that label and connect to it. Exiting..." >> /usr/src/app/mount.log
exit 1
fi
echo "B"
# Get device by label env var set in balena.io dashboard device env vars
USB_DEVICE=$(blkid -L $USB_VOLUME_NAME)
if [[ -z $USB_DEVICE ]]; then
echo "Invalid USB_DEVICE name: $USB_DEVICE" >> /usr/src/app/mount.log
exit 1
fi
echo $USB_DEVICE
echo "C"
# Get extra device info
ID_FS_TYPE=${ID_FS_TYPE:=$(/bin/udevadm info -n $USB_DEVICE | /usr/bin/awk -F "=" '/ID_FS_TYPE/{ print $2 }')}
ID_FS_UUID_ENC=${ID_FS_UUID_ENC:=$(/bin/udevadm info -n $USB_DEVICE | /usr/bin/awk -F "=" '/ID_FS_UUID_ENC/{ print $2 }')}
ID_FS_LABEL_ENC=${ID_FS_LABEL_ENC:=$(/bin/udevadm info -n $USB_DEVICE | /usr/bin/awk -F "=" '/ID_FS_LABEL_ENC/{ print $2 }')}
MOUNT_POINT=/mnt/$USB_VOLUME_NAME
echo $ID_FS_TYPE
echo $ID_FS_UUID_ENC
echo $ID_FS_LABEL_ENC
echo $MOUNT_POINT
echo "D"
# Bail if file system is not supported by the kernel
if ! /bin/grep -qw $ID_FS_TYPE /proc/filesystems; then
echo "File system not supported: $ID_FS_TYPE" >> /usr/src/app/mount.log
exit 1
fi
echo "E"
# Mount device
if /bin/findmnt -rno SOURCE,TARGET $USB_DEVICE >/dev/null; then
echo "Device $USB_DEVICE is already mounted!" >> /usr/src/mount.log
else
echo "Mounting - Source: $USB_DEVICE - Destination: $MOUNT_POINT" >> /usr/src/app/mount.log
/bin/mkdir -p $MOUNT_POINT
/bin/mount -t $ID_FS_TYPE -o rw $USB_DEVICE $MOUNT_POINT
fi
echo "F"
When the container runs the script, it gets stuck after "D", with ID_FS_TYPE, ID_FS_UUID_ENC and ID_FS_LABEL_ENC being empty (a good reason to hang).
output:
Checking for USB_VOLUME_NAME...
A
B
/dev/sda1
C
/mnt/MYDRIVE
D
My dockerfile.template:
FROM balenalib/%%BALENA_MACHINE_NAME%%-node
# Enable udev for detection of dynamically plugged devices
ENV UDEV=on
COPY udev/usb.rules /etc/udev/rules.d/usb.rules
# Install dependencies
RUN install_packages util-linux
WORKDIR /usr/src/app
# Move scripts used for mounting USB
COPY scripts scripts
RUN chmod +x scripts/*
# server.js will run when container starts up on the device
CMD ["/bin/bash", "/usr/src/app/scripts/start.sh"]
start.sh:
echo "Mounting USB drive..."
cd /usr/src/app/scripts
/bin/bash mount.sh
# It won't get this far while the script above hangs.
echo "Starting server..."
cd /usr/src/app
/usr/local/bin/yarn run serve
I can confirm that everything works when running from within the container manually:
cd /usr/src/app/scripts
/bin/bash mount.sh
Output:
Checking for USB_VOLUME_NAME...
A
B
/dev/sda1
C
vfat
BE23-31BA
MYDRIVE
/mnt/MYDRIVE
D
E
F
(and the drive mounted)
How would I resolve the empty variables?
Always quote every shell variable you use. (Unless you're absolutely sure of what you're doing, and what you expect to happen if the variable value is empty or includes spaces.)
Without quoting, when you
/bin/grep -qw $ID_FS_TYPE /proc/filesystems
and $ID_FS_TYPE is empty, that word just gets omitted from the command line, so you get
/bin/grep -qw /proc/filesystems
which uses /proc/filesystems as a regexp, and tries to grep over its stdin; this leads to the apparent hang you see.
If instead you quote it:
/bin/grep -qw "$ID_FS_TYPE" /proc/filesystems
it will get an empty string as the regexp parameter and a filename as the input parameter, which will trivially succeed (but not hang).
For similar reasons, I'd expect a shell syntax error if $USB_VOLUME_NAME is unset, and the whole script will act oddly if that variable's value contains a space.
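Applied to the relevant lines of mount.sh, the quoted versions would look roughly like this (same logic, only the expansions are quoted):
USB_DEVICE=$(blkid -L "$USB_VOLUME_NAME")
ID_FS_TYPE=${ID_FS_TYPE:=$(/bin/udevadm info -n "$USB_DEVICE" | /usr/bin/awk -F "=" '/ID_FS_TYPE/{ print $2 }')}
MOUNT_POINT="/mnt/$USB_VOLUME_NAME"
/bin/grep -qw "$ID_FS_TYPE" /proc/filesystems
/bin/mount -t "$ID_FS_TYPE" -o rw "$USB_DEVICE" "$MOUNT_POINT"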

How can a bash script in a pipe detect that its data source has died?

I am working on a horribly old machine without logrotate.
[ Actually it has busybox 0.6, which is 'void of form' for most purposes. ]
I have openvpn running and I'd like to be able to see what it's been up to. The openvpn I'm using can output progress info to stdout or to a named log file.
I tried and failed to find a way to stop it using one log file and start it on another. Maybe some SIGUSR or something will make it close and re-open the output file, but I can't find it.
So I wrote a script which reads from stdin, and directs output to a rotating log file.
So now all I need to do is pipe the output from openvpn to it.
Job done.
Except that if I kill openvpn, the script which is processing its output just runs forever. There's nothing more it can do, so I'd like it to die automatically.
Is there any way to trap the situation in the script "EOF on STDIN" or something using "find the process ID which is feeding my stdin", or whatever?
I see that this resembles the question
"Tee does not exit after pipeline it's on has finished"
but it's not quite that in that I have no control over the behaviour of openvpn ( save that I can kill it ). I do have control over the script that receives the output of openvpn, but can't work out how to detect the death of openvpn, or the pipe from it to me.
My upper-level script is roughly:
vpn_command="openvpn --writepid ${sole_vpn_pid_file} \
--config /etc/openvpn/openvpn.conf \
--remote ${VPN_HOST} ${VPN_PORT} "
# collapse sequences of multiple spaces to one space
vpn_command_tight=$(echo -e ${vpn_command}) # must not quote the parameter
# We pass the pid file over explicitly in case we ever want to use multiple VPNs.
( ./${launchAndWaitScriptFullName} "${vpn_command_tight}" "${sole_vpn_pid_file}" 2>&1 | \
./vpn-log-rotate.sh 10000 /var/log/openvpn/openvpn.log ) &
If I kill the openvpn process, the "vpn-log-rotate.sh" one stays running.
that is:
#!/bin/sh
# #file vpn-log-rotate.sh
#
# #brief rotates stdin out to 2 levels of log files
#
# #param linesPerFile Number of lines to be placed in each log file.
# #param logFile Name of the primary log file.
#
# Archives the last log files on entry to .last files, then starts clean.
#
# #FIXME DGC 28-Nov-2014
# Note that this script does not die if the previous stage of the pipeline dies.
# It is possible that use of "trap SIGPIPE" or similar might fix that.
#
# #copyright Copyright Dexdyne Ltd 2014. All rights reserved.
#
# #author DGC
linesPerFile="$1"
logFile="$2"
# The location of this script as an absolute path. ( e.g. /home/Scripts )
scriptHomePathAndDirName="$(dirname "$(readlink -f $0)")"
# The name of this script
scriptName="$( basename $0 )"
. ${scriptHomePathAndDirName}/vpn-common.inc.sh
# Includes /sbin/script_start.inc.sh
# Reads config file
# Sets up vpn_temp_directory
# Sets up functions to obtain process id, and check if process is running.
# includes vpn-script-macros
# Remember our PID, to make it easier for a supervisor script to locate and kill us.
echo $$ > ${vpn_log_rotate_pid_file}
onExit()
{
echo "vpn-log-rotate.sh is exiting now"
rm -f ${vpn_log_rotate_pid_file}
}
trap "( onExit )" EXIT
logFileRotate1="${logFile}.1"
# Currently remember the 2 previous logs, in a rather knife-and-fork manner.
logFileMinus1="${logfile}.minus1"
logFileMinus2="${logfile}.minus2"
logFileRotate1Minus1="${logFileRotate1}.minus1"
logFileRotate1Minus2="${logFileRotate1}.minus2"
# The primary log file exist, rename it to be the rotated version.
rotateLogs()
{
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileRotate1}"
fi
}
# The log files exist, rename them to be the archived copies.
archiveLogs()
{
if [ -f "${logFileMinus1}" ]
then
mv -f "${logFileMinus1}" "${logFileMinus2}"
fi
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileMinus1}"
fi
if [ -f "${logFileRotate1Minus1}" ]
then
mv -f "${logFileRotate1Minus1}" "${logFileRotate1Minus2}"
fi
if [ -f "${logFileRotate1}" ]
then
mv -f "${logFileRotate1}" "${logFileRotate1Minus1}"
fi
}
archiveLogs
rm -f "${LogFile}"
rm -f "${logFileRotate1}"
while true
do
lines=0
while [ ${lines} -lt ${linesPerFile} ]
do
read line
lines=$(( ${lines} + 1 ))
#echo $lines
echo ${line} >> ${logFile}
done
mv -f "${logFile}" "${logFileRotate1}"
done
exit_0
Change this:
read line
to this:
read line || exit
so that if read-ing fails (because you've reached EOF), you exit.
Better yet, change it to this:
IFS= read -r line || exit
so that you don't discard leading whitespace, and don't treat backslashes as special.
And while you're at it, be sure to change this:
echo ${line} >> ${logFile}
to this:
printf '%s\n' "$line" >> "$logFile"
so that you don't run into problems if $line has a leading -, or contains * or ?, or whatnot.
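Put together, the inner loop of vpn-log-rotate.sh would then read roughly as follows:
while [ "${lines}" -lt "${linesPerFile}" ]
do
    IFS= read -r line || exit          # EOF: the writer (openvpn) has gone away, so stop
    lines=$(( lines + 1 ))
    printf '%s\n' "$line" >> "${logFile}"
done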

Bash curl returns 0 whether the copy has finished or not

I'm calling curl from bash to copy a file from a mounted SD card, with the option to resume the copy later if the device gets unmounted. I receive the same exit status 0 when I interrupt the copy by unmounting the volume and when the file actually gets copied. Any suggestions on how to catch the case where the file has not been copied?
I'm copying only one file at a time.
This is the command:
curl -C - -O file:///mnt/sdcard/DCIM/100/0044.MP4
I came to a solution which is not as clean as I'd like, but it works. I execute the command twice, one after the other: when the first command returns 0 because of the unmount, the second one then tries to copy the file and returns error code 37 because the source is unreachable. If the second command also returns 0, I consider the file copied.
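In its simplest form that idea looks something like this, using the file from the question (curl exits with 37 when it cannot open the file:// source):
curl -C - -O file:///mnt/sdcard/DCIM/100/0044.MP4 &&
curl -C - -O file:///mnt/sdcard/DCIM/100/0044.MP4 &&
echo "copy verified"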
Following your concept you could have a script like this:
#!/bin/bash
# Copies files persistently.
#
# Usage: pc <filepath> [<filepath2>] ...
#
function pc {
    local FILE
    for FILE; do
        echo "Copying $FILE."
        until curl -C - -O "file://${FILE}" && curl -C - -O "file://${FILE}"; do
            if [[ -e $FILE ]]; then
                echo "File $FILE can't be copied."
                break
            else
                echo "Waiting for $FILE."
                until
                    sleep 5
                    [[ -e $FILE ]]
                do
                    continue
                done
            fi
        done
    done
}
pc "$@"
You could also just embed the function in a bash startup script if you like.
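A call for the file from the question would then be:
pc /mnt/sdcard/DCIM/100/0044.MP4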

Script won't recognize the file / directory

For class we have to work on a remote server that the school hosts. So far I have made a lot of files on the server and I would like to back them up, in case I want to transfer them to my laptop or in case I accidentally delete a directory or make a silly error. I found a tutorial and a script to back up files, and I decided to modify it so that it would determine what directory it's in (which will be the main user's) and then cd to Documents. It also creates the directory Backups if it doesn't exist. I am still pretty new to this sort of scripting, and any additional advice or links to posts would be greatly appreciated.
Code:
#!/bin/bash
#######################################################
## Simple backup script..
## Created by Matthew Brunt: (openblue555#gmail.com)
## Licensed under GNU GPL v3 or later, at your option.
## http://www.gnu.org/licenses/gpl.html
##
## Further edited by Michael Garrison to backup the
## directory it is located in and print the contents.
#######################################################
mkdir -p Backup
#Defines our output file
OUTPUT= $( cd Backup && pwd )/backup_$(date +%Y%m%d).tar.gz
#Defines our directory to backup
BUDIR=$( cd Desktop && pwd )
#Display message about starting the backup
echo "Starting backup of directory $BUDIR to file $OUTPUT"
#Start the backup
tar -cZf $OUTPUT $BUDIR
#Checking the status of the last process:
if [ $? == 0 ]; then
#Display confirmation message
echo "The file:"
echo $OUTPUT
echo "was created as a backup for:"
echo $BUDIR
echo ""
echo "Items that were backed up include:"
for i in $BUDIR; do
echo $i
done
echo ""
else
#Display error message message
echo "There was a problem creating:"
echo $OUTPUT
echo "as a backup for:"
echo $BUDIR
fi
I know that the original script works and it worked until I changed the $OUTPUT variable. I currently get the following result:
./backup.sh
./backup.sh: line 15: /Users/mgarrison93/Backup/backup_20121004.tar.gz: No such file or directory
Starting backup of directory /Users/mgarrison93/Desktop to file
tar: no files or directories specified
There was a problem creating:
as a backup for:
/Users/mgarrison93/Desktop
I can see that it is not accepting the file name, but I don't know how to correct this.
I just tried changing $OUTPUT to /Backups/file-name.tar.gz which I originally had and it works fine. The problem seems to be $( cd Backup && pwd )/backup_$(date +%Y%m%d).tar.gz. Just not sure what is wrong.
Consider these two entirely different pieces of bash syntax: first, you have the syntax for setting a variable to a value permanently (in the current script),
<variable>=<value>
and then there is the syntax for running a command with a variable temporarily set to a value,
<variable>=<value> <command> <argument> ...
The difference between these two is the space. After the =, once bash runs into an unquoted space, it takes that to mean that the <value> has ended, and anything after it is interpreted as the <command>.
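For example (TZ is just an illustrative variable here, not something from your script):
TZ=UTC            # permanent: TZ stays set for the rest of the script
TZ=UTC date       # temporary: TZ is set only for this one date command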
In this line of your script,
OUTPUT= $( cd Backup && pwd )/backup_$(date +%Y%m%d).tar.gz
you have a space after OUTPUT=. bash interprets that to mean that OUTPUT is to be (temporarily) set to the empty string, and the rest of the line, i.e. the result of $( cd Backup && pwd )/backup_$(date +%Y%m%d).tar.gz, is a command and arguments to be run while OUTPUT is equal to the empty string.
The solution is to remove the space. That way bash will know that you're trying to assign the rest of the line as a value to the variable.
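With the space removed (and, while you're at it, the value quoted), the line would be:
OUTPUT="$( cd Backup && pwd )/backup_$(date +%Y%m%d).tar.gz"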

Quick bash script to run a script in a specified folder?

I am attempting to write a bash script that changes directory and then runs an existing script in the new working directory.
This is what I have so far:
#!/bin/bash
cd /path/to/a/folder
./scriptname
scriptname is an executable file that exists in /path/to/a/folder - and (needless to say), I do have permission to run that script.
However, when I run this mind-numbingly simple script (above), I get the response:
scriptname: No such file or directory
What am I missing?! The commands work as expected when entered at the CLI, so I am at a loss to explain the error message. How do I fix this?
Looking at your script makes me think that the script you want to launch is located in the initial directory. Since you change directory before executing it, it won't work.
I suggest the following modified script:
#!/bin/bash
SCRIPT_DIR=$PWD
cd /path/to/a/folder
$SCRIPT_DIR/scriptname
Another thing to try is adding some debugging output:
cd /path/to/a/folder
pwd
ls
./scriptname
which'll show you what it thinks it's doing.
I usually have something like this in my useful script directory:
#!/bin/bash
# Provide usage information if no arguments were supplied
if [[ "$#" -le 0 ]]; then
    echo "Usage: $0 <executable> [<argument>...]" >&2
    exit 1
fi
# Get the executable by removing the last slash and anything before it
X="${1##*/}"
# Get the directory by removing the executable name
D="${1%$X}"
# Check if the directory exists
if [[ -d "$D" ]]; then
    # If it does, cd into it
    cd "$D"
else
    if [[ "$D" ]]; then
        # Complain if a directory was specified, but does not exist
        echo "Directory '$D' does not exist" >&2
        exit 1
    fi
fi
# Check if the executable is, well, executable
if [[ -x "$X" ]]; then
    # Run the executable in its directory with the supplied arguments
    exec ./"$X" "${@:2}"
else
    # Complain if the executable does not exist or is not executable
    echo "Executable '$X' does not exist in '$D'" >&2
    exit 1
fi
Usage:
$ cdexec
Usage: /home/archon/bin/cdexec <executable> [<argument>...]
$ cdexec /bin/ls ls
ls
$ cdexec /bin/xxx/ls ls
Directory '/bin/xxx/' does not exist
$ cdexec /ls ls
Executable 'ls' does not exist in '/'
One source of such error messages under those conditions is a broken symlink.
However, you say the script works when run from the command line. I would also check to see whether the directory is a symlink that's doing something other than what you expect.
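Two quick ways to check for that, using the path from the question:
ls -l /path/to/a/folder/scriptname    # shows whether it is a symlink and where it points
file /path/to/a/folder/scriptname     # reports a broken symbolic link if the target is missing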
Does it work if you call it in your script with the full path instead of using cd?
#!/bin/bash
/path/to/a/folder/scriptname
What about when called that way from the command line?
