rdiff-backup errors: script keeps quitting with error - bash

I've recently been introduced to bash scripting... So, I've used my advanced theft course to throw together the attached script. It runs... and exits with "/xxx/ not mounted. You are not root!" I have rdiff-backup and sshfs installed and working. The commands work fine on their own on the command line, but in the script, well... Can you guys take a look and let me know? PS: I copied a LOT of this from scripts I found here and a few other places.
#!/bin/bash
# Version 1.5
# Prior to running this make sure you have ssh-keygen -t rsa to generate a key, then
# ssh username@target "mkdir .ssh/;chmod 700 .ssh"
# scp .ssh/id_rsa.pub username@target:.ssh/authorized_keys
#
# then check you can login and accept the ssh key
# ssh username@target "ls -la"
#
# Key things to remember, no spaces in pathnames, and try to use full paths (beginning with / )
#
# variables determine backup criteria
DATESTAMP=`date +%d%m%y`
USERNAME=username #remote site username here
TARGET=remote.ip.here #add the ip v4 address of the target
INCLUDES=/path/to/file/includes.txt #this is a txt file containing a list of directories you want backed up
EXCLUDES="**" #this is a list of files etc you want to skip
BACKUPLOG=/path/to/logfile/in/home/backuplog${DATESTAMP}.txt
OLDERTHAN=20W #change 20 to reflect how far back you want backups to exist
# to activate old backup expiry, uncomment the line below
#RMARGS=" --force --remove-older-than ${OLDERTHAN}"
TARGETMAIL="yourmailaddress@your.domain"
HOSTNAME=`hostname` #Don't change this!
TMPDIR=/backups # Change this to the source folder
TARGETFOLDER=/backups # Change this to the TARGET folder
ARGS=" -v0 --terminal-verbosity 0 --exclude-special-files --exclude-other-filesystems --no-compression -v6"
# detecting distro and setting the correct path
if [ -e /etc/debian_version ];then
NICE=/usr/bin/nice
elif [ -e /etc/redhat-release ];then
NICE=/bin/nice
fi
if [ -e /tmp/backup.lock ];then
exit 0
fi
touch /tmp/backup.lock
touch -a ${BACKUPLOG}
cd /
/bin/mkdir -p ${TMPDIR}
/usr/bin/sshfs -o idmap=user -o ${USERNAME}@${TARGET}:/${TARGETFOLDER} ${TMPDIR} &>${BACKUPLOG}
# if you get errors mounting this then try
# mknod /dev/fuse -m 0666 c 10 229
for ITEMI in ${INCLUDES} ; do
ARGS="${ARGS} --include ${ITEMI} "
done
for ITEME in ${EXCLUDES} ; do
ARGS="${ARGS} --exclude-regexp '${ITEME}' "
done
# the --exclude ** / is a hack because it wont by default do multiple dirs, so use --include for all dirs then exclude everything else and sync / - if you dont understand dont worry
# ref: http://www.mail-archive.com/rdiff-backup-users@nongnu.org/msg00311.html
#echo /usr/bin/rdiff-backup ${ARGS} --exclude \'**\' / ${TMPDIR}/ &&
cat ${INCLUDES} | while read DIR; do
${NICE} -19 /usr/bin/rdiff-backup --exclude '**' ${DIR} ${TMPDIR}/ &>${BACKUPLOG}
if [ $? != 0 ]; then
echo "System Backup Failed" | mutt -s "Backup Log: System Backup Failed, Log attached!" -a ${BACKUPLOG} ${TARGETMAIL}
exit 1;
fi
done
#${NICE} -19 /usr/bin/rdiff-backup ${ARGS} --exclude '**' / ${TMPDIR}/ &>${BACKUPLOG} &&
echo Removing backups older than ${RMARGS}
${NICE} -19 /usr/bin/rdiff-backup -v0 --terminal-verbosity 0 ${RMARGS} ${TMPDIR}/ &>${BACKUPLOG}
/bin/umount ${TMPDIR} && /bin/rm -rf ${TMPDIR}/ &>${BACKUPLOG}
echo "System Backup Run" | mutt -s "Backup Log: System Backup Done!" -a ${BACKUPLOG} ${TARGETMAIL}
rm /tmp/backup.lock
rm ${BACKUPLOG}
Sorry, cannot paste, couldn't attach... BLIKSEM!
Thanks for ANY input... One HELL of a learning curve!!!
Regards,
B.
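One likely culprit in the script above: the stray -o before ${USERNAME}@${TARGET} makes sshfs swallow the remote spec as an option argument, so the mount quietly never happens and rdiff-backup then runs against an empty local directory. Beyond dropping that -o, it is worth checking that the mount actually succeeded before backing up. A minimal guard to put after the sshfs line (a sketch, assuming util-linux's mountpoint utility is present):

/usr/bin/sshfs -o idmap=user ${USERNAME}@${TARGET}:/${TARGETFOLDER} ${TMPDIR} &>${BACKUPLOG}
# bail out early if the sshfs mount did not actually happen
if ! mountpoint -q ${TMPDIR}; then
  echo "sshfs mount of ${TMPDIR} failed" | mutt -s "Backup Log: mount failed!" -a ${BACKUPLOG} ${TARGETMAIL}
  rm /tmp/backup.lock
  exit 1
fi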

Related

Error in Bash script to change folder/file permissions

I don't know a lot about scripting, but I was attempting to write my own.
Context:
I have 2 servers. When server 1 (ubuntu server) automatically adds files to server 2 (synology) (through docker container) permissions of those files are wrong, so some applications I'm running can't access them.
I wanted to write a script that checks for permissions periodically and then changes them to what I want.
I've been messing with it for some days and this is what I've got for now:
#!/bin/bash
shopt -s nullglob
FOLDERS=(/volume1/files/videos/* /volume1/files/photos/*)
for folder in "${FOLDERS[@]}"; do
# [[ -d "$folder" ]]
if [ "$(stat -c '%a' "$folder")" != "755" ] || [ "$(stat -c '%U' "$folder")" != "my_user" ]
then
# echo "Change user permissions of $folder"
chown -R my_user:users "$folder" && chmod 755 -R "$folder"
fi
done
shopt -u nullglob
The problem with this script is that when files (and not folders) are added, it won't detect those files.
So I changed the script to this (find files in the directories, then change the top directory's permissions recursively):
#!/bin/bash
shopt -s nullglob
FOLDERS=(/volume1/files/videos/* /volume1/files/photos/*)
FILES=(/volume1/files/videos/*/*.mp4 /volume1/files/photos/*/*/*.jpg)
for file in "${FILES[@]}"; do
if [ "$(stat -c '%a' "$file")" != "755" ] || [ "$(stat -c '%U' "$file")" != "my_user" ]
then
# echo "$file" | (cut -d "/" -f5) # --> WORKS PERFECTLY
rootfolder=$(("$file") | (cut -d "/" -f5))
# echo "$rootfolder" # --> ERROR: cannot execute binary file: Exec format error
# chown -R my_user:users "$rootfolder" && chmod 755 -R "$rootfolder"
echo "chown -R my_user:users "$rootfolder" && chmod 755 -R "$rootfolder" "
fi
done
When I add "$file" | (cut -d "/" -f5) into a variable "rootfolder" ("/volume1/files/videos" and "/volume1/files/photos"), I'm getting this error when executing the script:
./modify_media_permissions-test.sh: line 12: /volume1/files/videos/my_video.mp4: cannot execute binary file: Exec format error
I tried different things, but I can't resolve it.
Could someone help me further?
I know it is probably not the best and most efficient script, but I'm learning :)
Thanks a lot!
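The failing line is a bash parsing problem rather than a permissions one: $(("$file") | (cut -d "/" -f5)) is read as a command substitution whose first element is the subshell ("$file"), so bash tries to execute the video file itself, hence "cannot execute binary file: Exec format error". The usual form is plain command substitution around an echo | cut pipeline, e.g. (a sketch, untested on Synology but standard bash):

# feed the filename to cut via echo inside $(...), not $(( ))
rootfolder=$(echo "$file" | cut -d "/" -f5)
echo "$rootfolder"
chown -R my_user:users "$rootfolder" && chmod -R 755 "$rootfolder"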

SSH into remote computer and compile/run code

I made a script (below) that goes into a remote computer and runs C code on it. This script works perfectly but asks for the password multiple times. How can I make it only ask for the password once?
#!/bin/bash
USER=myusername
COMP=remote_computer_name
OUTPUT=$1
ARGS=${@:2}
CODE_DIR="Dir_$$"
SCRIPT_NAME=$(basename $0)
LAST_CREATED_DIR=$(ls -td -- */ | head -n 1)
#Check if we are on local computer. If so, we copy the
#current directory to the remote run this script on the remote
if [ "${HOSTNAME}" != "${COMP}" ]; then
if [ "$#" -lt 1 ]; then
echo "Incorrect usage."
echo "Usage: ./${SCRIPT_NAME} <compiled_c_output_name> <arg1> <arg2> ... <argN>"
exit
fi
# Check if there is no makefile in the current directory
if [ ! -e [Mm]akefile ]; then
echo "There is no makefile in this directory"
exit
fi
echo "On local. Copying current directory to remote..."
scp -r ./ ${USER}@${COMP}:/ilab/users/${USER}/${CODE_DIR}
ssh ${USER}@${COMP} "bash -s" < ./${SCRIPT_NAME} ${OUTPUT} ${ARGS}
else
echo "On remote. Compiling code..."
cd $LAST_CREATED_DIR
make clean
make all
if [ -e $OUTPUT ]; then
echo "EXECUTING \"./${OUTPUT} ${ARGS}\" ON REMOTE ($COMP)"
./${OUTPUT} ${ARGS}
fi
fi
You can use SSH key authentication for passwordless login.
Here are the steps:
Generate an RSA key:
ssh-keygen -t rsa
This generates two files under /home/<user>/.ssh/: id_rsa
(private) and id_rsa.pub (public)
The second file is your public key. You have to copy the contents of
this file over to the remote computer you want to log into and append
it to /home/<user>/.ssh/authorized_keys or use ssh-copy-id
utility if available (ssh-copy-id username@remote_host)
After this, the authentication is done by the public-private key pair
and you may not require a password henceforth.
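Condensed into commands, using the placeholders from the question (myusername and remote_computer_name):

# one-time setup; assumes OpenSSH on both machines
ssh-keygen -t rsa
ssh-copy-id myusername@remote_computer_name
# verification: this should no longer prompt for a password
ssh myusername@remote_computer_name "echo key auth works"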
You can use sshpass. Here is an example:
sshpass -pfoobar ssh -o StrictHostKeyChecking=no user@host command_to_run
More info, here:
https://askubuntu.com/questions/282319/how-to-use-sshpass
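Applied to the script in the question it would look something like this (hypothetical, and with the usual sshpass caveat that the password sits in plain text in the script):

# both the scp and the ssh call prompt, so wrap each of them
sshpass -p 'mypassword' scp -r ./ ${USER}@${COMP}:/ilab/users/${USER}/${CODE_DIR}
sshpass -p 'mypassword' ssh ${USER}@${COMP} "bash -s" < ./${SCRIPT_NAME} ${OUTPUT} ${ARGS}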

Download a fixed number of directories from an FTP server

I have an FTP server with thousands of directories. What I want to do is download a specific number of them (for example, 500 directories) using a shell script. How can I do that? I tried wget with the -Q option; for example, "wget -Q25MB" gives me 25MB of data. The problem is that each folder has a different size, so using this command will stop the download in the middle of a folder.
Assuming wget returns an error when the download gets interrupted:
#!/bin/bash
to_del=() # initialize to_del as an empty array, in case you want to copy-paste this to a terminal instead of using a file
username=blablabla
password=blablabla
server=blablabla
printf -v today '%(%Y_%m_%d)T'
# Get the first 500 directory names to download
ftp -n "$server" << EOF | grep -v '^\.\.\?$' | head -n 502 > "to_download_$today.txt"
user $username $password
ls
bye
EOF
# Then, you can download each folder one by one:
while read -r dir; do
if [[ -e $dir ]]; then
echo >&2 "WARNING: '$dir' already exists!"
continue # We don't download or remove it. Manual action needed
fi
if wget "$username:$password@$server/$dir"; then
to_del+=("$dir")
else
# A directory was not successfully downloaded, we delete the temporary files
echo >&2 "WARNING: '$dir' download failed, skipping..."
rm -rf "$dir"
fi
done < "to_download_$today.txt"
# Now, delete the successfully downloaded folders using a single FTP connection
{
printf 'user %s %s\n' "$username" "$password"
for dir in "${to_del[@]}"; do
printf 'del %s\n' "$dir"
done
printf 'bye\n'
} | ftp -i -n "$server"

Run wget and other commands in shell script

I'm trying to create a shell script that will download the latest Atomic gotroot rules to my server, unpack them, copy them to the correct folder, and so on.
I've been reading shell tutorials and forum posts for most of the day, and the syntax escapes me for some of these. I have run all these commands manually and I know they work.
I know I need to develop some error checking, but I'm just trying to get the commands to run correctly. The main problem at the moment is the syntax of the wget commands; I've gotten errors about missing semicolons, divide by zero, and unsupported schemes. I've tried various quoting (single and double) and escaping of - / " characters in various combinations.
Thanks for any help.
The raw wget command is
wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"
#!/bin/sh
update_modsec_rules(){
wget=/usr/bin/wget
tar=/bin/tar
apachectl=/usr/bin/apache2ctl
TXT="Script Run Finished"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES="/var/asl/updates/modsec/*"
EXISTING_ARCH="/var/asl/updates/modsec-*"
WGET_OPTS='--user=jim --password=xxx-yyy-zzz'
URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
# change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
cd $WORKING_DIR
rm -f $EXISTING_ARCH
rm -f $EXISTING_FILES
rm -f VERSION*
# wget to download VERSION file
$wget ${WGET_OPTS} "${URL_BASE}/VERSION"
# get current MODSEC_VERSION from VERSION file and save as variable
source VERSION
TARGET_DATE=$MODSEC_VERSION
echo $TARGET_DATE
# wget to download current archive
$wget ${WGET_OPTS} "${URL_BASE}/modsec-${TARGET_DATE}.tar.gz"
# extract archive
echo "extracting files . . . "
tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz
echo "copying files . . . "
cp -uv $EXISTING_FILES $TARGET_DIR
echo $TXT
}
update_modsec_rules $@ 2>&1 | tee -a /var/asl/modsec_update.log
RESTART_APACHE="/usr/local/cpanel/scripts/restartsrv httpd"
$RESTART_APACHE
Here are some guidelines to use when writing shell scripts.
Always quote variables when you use them. This helps avoid the possibility of misinterpretation. (What if a filename contains a space?)
Don't trust file globbing on commands like rm. Use for loops instead, as sketched after this list. (What if a filename starts with a hyphen?)
Avoid subshells when possible. Your lines with backquotes make me itchy.
Don't exec if you can help it. And especially don't expect any parts of your script after your exec to actually get run.
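A quick illustration of the first two points (a sketch; the directory name is made up):

# quote every expansion, loop instead of handing rm a raw glob,
# and use "--" so a name like "-rf" cannot be parsed as options
for file in /some/dir/*; do
  rm -f -- "$file"
done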
I should point out that while your shell may be bash, you've specified /bin/sh for execution of this script, so it is NOT a bash script.
Here's a rewrite with some error checking. Add salt to taste.
#!/bin/sh
# Linux
wget=/usr/bin/wget
tar=/bin/tar
apachectl=/usr/sbin/apache2ctl
# FreeBSD
#wget=/usr/local/bin/wget
#tar=/usr/bin/tar
#apachectl=/usr/local/sbin/apachectl
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES_DIR="/var/asl/updates/modsec/"
EXISTING_ARCH_DIR="/var/asl/updates/"
URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
WGET_OPTS='--user=jim --password=xxx-yyy-zzz'
if [ ! -x "$wget" ]; then
echo "ERROR: No wget." >&2
exit 1
elif [ ! -x "$apachectl" ]; then
echo "ERROR: No apachectl." >&2
exit 1
elif [ ! -x "$tar" ]; then
echo "ERROR: Not in Kansas anymore, Toto." >&2
exit 1
fi
# change to working directory and cleanup any downloaded files
# and extracted rules in modsec/ directory
if ! cd "$WORKING_DIR"; then
echo "ERROR: can't access working directory ($WORKING_DIR)" >&2
exit 1
fi
# Delete each file in a loop.
for file in "$EXISTING_FILES_DIR"/* "$EXISTING_ARCH_DIR"/modsec-*; do
rm -f "$file"
done
# Move old VERSION out of the way.
mv VERSION VERSION-$$
# wget1 to download VERSION file (replaces WGET1)
if ! $wget ${WGET_OPTS} "${URL_BASE}/VERSION"; then
echo "ERROR: can't get VERSION" >&2
mv VERSION-$$ VERSION
exit 1
fi
# get current MODSEC_VERSION from VERSION file and save as variable,
# but DON'T blindly trust and run scripts from an external source.
if grep -q '^MODSEC_VERSION=' VERSION; then
TARGET_DATE="`sed -ne '/^MODSEC_VERSION=/{s/^[^=]*=//p;q;}' VERSION`"
echo "Target date: $TARGET_DATE"
fi
# Download current archive (replaces WGET2)
if ! $wget ${WGET_OPTS} "${URL_BASE}/modsec-$TARGET_DATE.tar.gz"; then
echo "ERROR: can't get archive" >&2
mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
exit 1
fi
# extract archive
if [ ! -f "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz" ]; then
echo "ERROR: I'm confused, where's my archive?" >&2
mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
exit 1
fi
tar zxvf "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz"
for file in "$EXISTING_FILES_DIR"/*; do
cp "$file" "$TARGET_DIR/"
done
# So far so good, so let's restart apache.
if $apachectl configtest; then
if $apachectl restart; then
# Success!
rm -f VERSION-$$
echo "$TXT"
else
echo "ERROR: PANIC! Apache didn't restart. Notify the authorities!" >&2
exit 3
fi
else
echo "ERROR: Apache configs are broken. We're still running, but you'd better fix this ASAP." >&2
exit 2
fi
Note that while I've rewritten this to be more sensible, there is certainly still a lot of room for improvement.
You have two options:
1- Changing this to
WGET1='--user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
then running
wget $WGET1
(and the same for WGET2). Or
2- Encapsulating $WGET1 in backquotes ``, e.g.:
`$WGET1`
This applies to any command you're executing out of a variable.
Suggested changes:
#!/bin/sh
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES="/var/asl/updates/modsec/*"
EXISTING_ARCH="/var/asl/updates/modsec-*"
WGET1='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
WGET2='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/modsec-$TARGET_DATE.tar.gz"'
## change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
cd $WORKING_DIR
rm -f $EXISTING_ARCH
rm -f $EXISTING_FILES
## wget1 to download VERSION file
`$WGET1`
## get current MODSEC_VERSION from VERSION file and save as variable
source VERSION
TARGET_DATE=`echo $MODSEC_VERSION`
## WGET2 command to download current archive
`$WGET2`
## extract archive
tar zxvf $WORKING_DIR/modsec-$TARGET_DATE.tar.gz
cp $EXISTING_FILES $TARGET_DIR
## restart server
exec '/usr/local/cpanel/scripts/restartsrv_httpd' $*;
Pro Tip: If you need string substitution, using ${VAR} is much better to eliminate ambiguity, e.g.:
tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz

Getting continue behavior (not redownloading files already present) using lftp.

So, I have a script which downloads stuff from a seedbox. It works great for new files which appear on the remote server and are then mirrored on my local server. The problem is that when I want, for example, to remove unnecessary files, running the script again re-downloads the same file(s). I tried going through the man page for mirror, but it wasn't helpful. Here is the script which mirrors the files:
#!/bin/bash
login=XXXX
pass=XXXXXX
host=XXXXX
remote_dir=/files/
local_dir=/home/XXX/XXX
trap "rm -f /tmp/seedroots.lock" SIGINT SIGTERM
if [ -e /tmp/seedroots.lock ]; then
echo "Synctorrent is running already."
exit 1
else
touch /tmp/seedroots.lock
lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mirror -c -P5 --log=synctorrents.log $remote_dir $local_dir
EOF
rm -f /tmp/seedroots.lock
exit 0
fi
Is there an option for mirror which I am missing that doesn't re-download the locally deleted file(s) again?
The mirror command in lftp has a --continue flag which will result in the behavior you want.
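In the script above that is the -c you are already passing (it is the short form of --continue), spelled out (untested):

mirror --continue -P5 --log=synctorrents.log $remote_dir $local_dir

Note that --continue only resumes partially transferred files; it does not stop mirror from fetching a file again once the local copy has been deleted.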
You could give my version of your script a try (not tested):
#!/bin/bash
login=XXXX
pass=XXXXXX
host=XXXXX
remote_dir=/files/
local_dir=/home/XXX/XXX
files=$local_dir/*
trap "rmdir /tmp/seedroots.lock" 0 1 2 3 15
if [[ -d /tmp/seedroots.lock ]]; then
echo "Synctorrent is running already."
exit 1
else
mkdir /tmp/seedroots.lock
lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mget $files
EOF
fi
What it does:
I build a local list of files and, subsequently, mget all these files on the FTP server with the variable $files.
I replaced the lock file with a directory: creating a file is not an atomic test-and-set, whereas mkdir is, since it fails if the directory already exists (search the web about atomicity).
The trap runs on normal exit and on the other listed signals.
If you are using bash, [[ ]] tests are more powerful.
Indentation is not just an option ;)
If you are just leeching files (not seeding), you can use lftp's mirror with the --Remove-source-files option to remove files at the source after transfer (so no duplicates or re-downloads).
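In the heredoc of the original script, that is a one-line change to the mirror command (sketch, untested):

mirror -c -P5 --Remove-source-files --log=synctorrents.log $remote_dir $local_dir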
