I have a problem with one of my bash scripts. Here is the code.
#!/bin/bash
ENVIRONMENT=$1
STORAGE_ACCOUNT_NAME=$2
TOKEN_AZURE=$3
TOKEN_GITHUB=$4
DATA_GOVERNANCE_NAME=$5
CURRENT_WORKING_DIR=$6
GIT_TIME_PERIOD=${7:-60 days}
if [ -z $2 ]; then echo "Environment name and the storage account name must be passed as a parameter" && exit 1; fi
echo "Filtering the repos to find out which should be deployed to blob containers workspace..."
less $CURRENT_WORKING_DIR/$DATA_GOVERNANCE_NAME/$ENVIRONMENT/data_ingestion_framework/Repos/Deployable_repos.json | jq -r '[.Repos[] | select(.LOCATION[]=="STORAGE")]' >$CURRENT_WORKING_DIR/repos_filtered_blob.json
REPOS_COUNT=$(jq length $CURRENT_WORKING_DIR/repos_filtered_blob.json)
for containers in $(jq -r '[.Repos[] | select(.LOCATION[]=="STORAGE") | .PATH ] | unique | .[]' $CURRENT_WORKING_DIR/$DATA_GOVERNANCE_NAME/$ENVIRONMENT/data_ingestion_framework/Repos/Deployable_repos.json);
do
echo "Clearing content of $containers blob"
STRIPED_CONTAINER=$(echo $containers | sed -r "s|/?$||")
if [ $STRIPED_CONTAINER == 'datagovernance' ];
then
az storage blob delete-batch --auth-mode login --account-name $STORAGE_ACCOUNT_NAME --source $STRIPED_CONTAINER --query '[::-1]' --delete-snapshots include
else
az storage blob delete-batch --auth-mode login --account-name $STORAGE_ACCOUNT_NAME --source $STRIPED_CONTAINER --query '[::-1]'
fi
done
if [ $REPOS_COUNT -eq "0" ]; then echo "No repos meant for blob containers found" && exit 1; fi
echo "Repos meant for blob containers found: $REPOS_COUNT"
echo ""
REGEX="^https://(.*#)?"
declare -i i=0
while [ $REPOS_COUNT -gt 0 ]; do
echo "Getting the repo name to be deployed..."
REPO_NAME=$(jq -r .[$i].Name $CURRENT_WORKING_DIR/repos_filtered_blob.json)
echo "Repository name: $REPO_NAME"
echo "Checking if the repo should be deployed to $ENVIRONMENT environment..."
echo "$ENVIRONMENT environment detected! Extracting the source branch name..."
BRANCH_NAME=$(less $CURRENT_WORKING_DIR/repos_filtered_blob.json | jq -r '.['$i'].BRANCH')
echo "Source branch name: $BRANCH_NAME"
REPO_LINK=$(jq -r .[$i].Link $CURRENT_WORKING_DIR/repos_filtered_blob.json)
echo "Repository LINK: $REPO_LINK"
if $(echo $REPO_LINK | grep -q github.dxc.com);
then
echo "GitHub link detected!"
ADJUSTED_LINK=$(echo $REPO_LINK | sed -r "s|$REGEX|https://$TOKEN_GITHUB#|g")
else
echo "Azure DevOps repository link detected!"
ADJUSTED_LINK=$(echo $REPO_LINK | sed -r "s|$REGEX|https://$TOKEN_AZURE#|g")
fi
echo "Adjusted link: $ADJUSTED_LINK"
REPO_SSH=$(jq -r .[$i].SSH $CURRENT_WORKING_DIR/repos_filtered_blob.json)
echo ""
CONTAINER_NAME=$(jq -r .[$i].PATH $CURRENT_WORKING_DIR/repos_filtered_blob.json | sed -r "s|/?$||")
CLEAN_COPY=$(jq -r .[$i].CLEAN_COPY $CURRENT_WORKING_DIR/repos_filtered_blob.json)
echo "Container name: $CONTAINER_NAME"
echo "Cloning the source branch..."
rm -rf $CURRENT_WORKING_DIR/Repos/$REPO_NAME
git clone --branch $BRANCH_NAME --single-branch $ADJUSTED_LINK $CURRENT_WORKING_DIR/Repos/$REPO_NAME
echo "Checking if the repository had recent commits..."
cd $CURRENT_WORKING_DIR/Repos/$REPO_NAME
if git log --date=relative --since "$GIT_TIME_PERIOD" | grep -q commit;
then
echo "Repository edited in last $GIT_TIME_PERIOD, will proceed with copying..."
cd ..
echo "Saving the list of the contents..."
if [ "$CLEAN_COPY" = "true" ];
then
if [ "$REPO_NAME" = "$DATA_GOVERNANCE_NAME" ];
then
find $REPO_NAME/$ENVIRONMENT/ | sed -r 's,^([a-zA-Z0-9_-]*\/){2},,' >contents.txt
else
find $REPO_NAME/ | sed 's,^([a-zA-Z0-9_-]*\/){1},,' >contents.txt
fi
# cat contents.txt
echo "Deleting the pre-existing files from the container..."
az storage blob delete-batch --auth-mode login --account-name $STORAGE_ACCOUNT_NAME --source $CONTAINER_NAME --query '[::-1]'
else
echo "Skiping deleting the pre-existing files from the container.. "
fi
if [ "$REPO_NAME" = "$DATA_GOVERNANCE_NAME" ];
then
echo "Copying $ENVIRONMENT to datagovernance"
az storage blob upload-batch --auth-mode login --account-name $STORAGE_ACCOUNT_NAME --destination $CONTAINER_NAME --source "$CURRENT_WORKING_DIR/Repos/$REPO_NAME/$ENVIRONMENT" --overwrite --no-progress
else
echo ""
echo "Copying the files to the container..."
az storage blob upload-batch --auth-mode login --account-name $STORAGE_ACCOUNT_NAME --destination $CONTAINER_NAME --source "$CURRENT_WORKING_DIR/Repos/$REPO_NAME" --overwrite --no-progress
fi
else
echo "Repository not edited recently, will not copy..."
echo ""
cd ..
fi
((i = i + 1))
((REPOS_COUNT = REPOS_COUNT - 1))
done
The problem with this script is that whenever I run it, the whole run goes smoothly and it performs everything that is required. However, at the end it always throws the error below.
##[error]The process '/bin/bash' failed with exit code 1
##[error]Bash failed with error: The process '/bin/bash' failed with exit code 1
I have spent quite a few hours trying to debug this, but I don't know what the cause of that error is. Can the error be caused by the while loop itself?
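One behavior worth checking (a minimal sketch of the suspected mechanism, not a diagnosis of your pipeline): in bash, (( expression )) returns exit status 1 when the expression evaluates to 0, so if a decrement like ((REPOS_COUNT = REPOS_COUNT - 1)) that brings the counter to 0 is the last command the script runs, the script exits 1 even though nothing went wrong.

```shell
#!/bin/bash
# (( expr )) exits with status 1 when expr evaluates to 0. If such a
# decrement is the last command a script executes, the whole script
# exits 1 even though no real error occurred.
i=1
if ((i = i - 1)); then   # i becomes 0, so (( )) reports "failure"
    status=0
else
    status=1
fi
echo "i=$i, arithmetic exit status=$status"
```

If that is the cause, ending the script with an explicit exit 0, or writing the decrement as i=$((i - 1)) (a plain assignment, which always returns 0), avoids it.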
Related
I am new to bash scripting. I am writing a script that will deploy a new artifact to AWS Elastic Beanstalk, rather than developers going to the AWS UI and taking a long time. Please see below and let me know if I am doing anything wrong. I am worried about this part:
if [ "$1" = "help" ] HELP <<EOF
then
read -r -d ''
Usage:
\t$(basename $0) list - list all applications
\t$(basename $0) deploy - deploy artifact to an environment
EOF
die "$HELP"
exit 0
fi
Running this command to run the script:
AWS_PROFILE=default ARTIFACT_BUCKET=myawsstudybucket ARTIFACT_NAME=artifact1.zip ./deploy.sh deploy demo-app Demoapp-env artifact.zip
#!/bin/bash
PROFILE=$(aws sts get-caller-identity --query Account)
RED='\033[0;31m'
COLOR_OFF='\033[0m'
if [ -z "$PROFILE" ]
then
echo "Credentials missing"
else
region=$(aws configure get region)
fi
if [ "$1" = "list" ]
then
echo $(aws elasticbeanstalk describe-applications --query "Applications[].ApplicationName")
exit 0
fi
if [ "$1" = "help" ] HELP <<EOF
then
read -r -d ''
Usage:
\t$(basename $0) list - list all applications
\t$(basename $0) deploy <app-name> <environment-name> <local-artifact-path> - deploy artifact to an environment
EOF
die "$HELP"
exit 0
fi
die() { echo -e "$*" >&2; exit 1; }
[[ -z $EB_APP ]] && die "ERROR: Missing application name"
[[ -z $EB_ENV ]] && die "ERROR: Missing application environment"
[[ -z $EB_ARTIFACT ]] && die "ERROR: Missing application artifact location"
s3path="s3://$ARTIFACT_LOCATION/$ARTIFACT_NAME"
aws s3 cp $artifactpath $s3path
versionlabel=$(date +%s%N)
aws elasticbeanstalk create-application-version --application-name "$ebapp" --version-label $versionlabel --source-bundle S3Bucket=$ARTIFACT_LOCATION,S3Key=$ARTIFACT_NAME
aws elasticbeanstalk update-environment --environment-name $ebenv --version-label $versionlabel
echo "Deployment in progress"
while [[ "$STATUS" != OK ]] && [[ "$STATUS" != Severe ]];
do
echo "Checking environment status"
status=$(aws elasticbeanstalk describe-environment-health --environment-name $ebenv --attribute-names HealthStatus --query "HealthStatus"| tr -d '"')
echo "Current status: $status."
sleep 5
done
echo "Deployed successfully"
That part is definitely strange. What is the word HELP supposed to do after the ]?
You probably wanted something like
if [ "$1" = "help" ]
then
echo Press Enter to display the help...
read
cat <<-EOF
Usage:
$(basename $0) list - list all applications
$(basename $0) deploy <app-name> <environment-name> <local-artifact-path> - deploy artifact to an environment
EOF
fi
Note that you need the real Tab before the closing EOF.
Or maybe you wanted this?
die() { echo -e "$*" >&2; exit 1; }
if [ "$1" = "help" ]
then
HELP="
Usage:
$(basename $0) list - list all applications
$(basename $0) deploy <app-name> <environment-name> <local-artifact-path> - deploy artifact to an environment
"
die "$HELP"
fi
Strings in quotes can be multiline.
Note that die must be declared before you can call it.
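Combining both notes, a minimal runnable version of the two snippets above (die declared before any call site, help text as an ordinary multiline quoted string):

```shell
#!/bin/bash
# die is defined before it is ever called; the help text is a plain
# multiline quoted string, expanded once at assignment time.
die() { echo -e "$*" >&2; exit 1; }

HELP="
Usage:
  $(basename "$0") list - list all applications
  $(basename "$0") deploy <app-name> <environment-name> <local-artifact-path> - deploy artifact to an environment
"

if [ "$1" = "help" ]; then
    die "$HELP"
fi
echo "no help requested, continuing"
```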
I have the following configuration file for a project.
It does not run on Windows per se.
I have PowerShell installed, the Linux subsystem, Docker running, etc.
What steps should I follow to make the project run on Windows? I am a bit lost.
Can I run it without Cygwin?
#!/usr/bin/env bash
CYAN='\033[0;36m'
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
# Path to your hosts file
hostsFile="/etc/hosts"
# Default IP address for host
ip="127.0.0.1"
hostnames="api.foodmeup.local"
removeHost() {
if [ -n "$(grep -p "[[:space:]]$1" /etc/hosts)" ]; then
echo "$1 found in $hostsFile. Removing now...";
try sudo sed -ie "/[[:space:]]$1/d" "$hostsFile";
else
yell "$1 was not found in $hostsFile";
fi
}
addHost() {
if [ -n "$(grep -p "[[:space:]]$1" /etc/hosts)" ]; then
yell "$1, already exists: $(grep $1 $hostsFile)";
else
echo "Adding $1 to $hostsFile...";
try printf "%s\t%s\n" "$ip" "$1" | sudo tee -a "$hostsFile" > /dev/null;
if [ -n "$(grep $1 /etc/hosts)" ]; then
echo "$1 was added succesfully:";
echo "$(grep $1 /etc/hosts)";
else
die "Failed to add $1";
fi
fi
}
addLinuxSSL() {
sudo mkdir -p /usr/local/share/ca-certificates/foodmeup.local
sudo cp ./.docker/nginx/ssl/foodmeup-ca.cert.pem /usr/local/share/ca-certificates/foodmeup.local
sudo update-ca-certificates
}
addMacSSL() {
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./.docker/nginx/ssl/foodmeup-ca.cert.pem
}
addCygwinSSL() {
echo "Unable to add SSL for CygWin platform";
exit;
}
while true; do
echo "Do you wish to init the FoodMeUp API project?";
read -p "(Y/N) " yn
case $yn in
[Yy]* )
echo "";
echo -e "${BLUE}Setting local host names...${NC}";
IFS=', '; array=($hostnames)
for host in ${array[@]}; do addHost $host; done
echo -e "${GREEN}Host names set!${NC}";
echo "";
echo "";
echo -e "${BLUE}Prepping environment, please wait...${NC}";
aws s3 cp s3://fmu-dev/ssl ./.docker/nginx/ssl/ --recursive --profile fmu
unameOut="$(uname -s)"
case "${unameOut}" in
Linux*) addLinuxSSL;;
Darwin*) addMacSSL;;
CYGWIN*) addCygwinSSL;;
*) exit;;
esac
rm ./.docker/nginx/ssl/foodmeup-ca.cert.pem
aws s3 cp s3://fmu-dev/env-api-devel ./.env --profile fmu
mkdir -p ./var/cache ./var/logs ./var/jwt ./var/cloud ./public/uploads
aws s3 cp s3://fmu-dev/fmu-google-cloud.json ./var/cloud/FoodMeUp-dc2389a0a0cd.json --profile fmu
JWT_PASSPHRASE=$(grep JWT_PASSPHRASE .env | cut -d '=' -f 2-)
openssl genrsa -passout pass:${JWT_PASSPHRASE} -out ./var/jwt/private.pem -aes256 4096
openssl rsa -passin pass:${JWT_PASSPHRASE} -pubout -in ./var/jwt/private.pem -out ./var/jwt/public.pem
echo -e "${GREEN}Environment all set!${NC}";
echo "";
echo -e "${BLUE}Building Docker containers, please wait...${NC}";
docker-compose up -d nginx;
docker-compose up -d postgres;
docker-compose up -d rabbitmq;
echo -e "${GREEN}Docker containers built!${NC}";
echo "";
echo -e "${BLUE}Installing Composer dependencies, please wait...${NC}";
docker exec -ti fmu_backend-php composer install --no-ansi --no-interaction --no-progress --no-suggest --optimize-autoloader;
echo -e "${GREEN}Composer dependencies installed!!${NC}";
echo "";
echo -e "${BLUE}Generating assets, please wait...${NC}";
docker exec -ti fmu_backend-php bin/console assets:install
echo -e "${GREEN}Assets generated!${NC}";
echo "";
echo -e "${BLUE}Initializing application, please wait...${NC}";
docker exec -ti fmu_backend-php /var/www/bin/phing init
echo -e "${GREEN}Application initialized!${NC}";
echo "";
echo -e "${GREEN}[ALL DONE]${NC}";
break;;
[Nn]* ) exit;;
* ) echo "Please answer yes or no.";;
esac
done
echo -e "${GREEN}Project successfully installed${NC}";
Well, I took each command one by one and tried to make it work; here is the result:
function Green {
process { Write-Host $_ -ForegroundColor Green }
}
function Red {
process { Write-Host $_ -ForegroundColor Red }
}
Write-output "Adding hosts...";
Set-HostsEntry -IPAddress 127.0.0.1 -HostName 'api.foodmeup.local' -Description "FoodMeUp local API"
Write-output "Host added" | Green;
Write-output "Adding certificates...";
aws s3 cp s3://fmu-dev/ssl ./.docker/nginx/ssl/ --recursive --profile fmu
Import-Certificate -FilePath "./.docker/nginx/ssl/foodmeup-ca.cert.pem" -CertStoreLocation cert:\CurrentUser\Root
Remove-Item ./.docker/nginx/ssl/foodmeup-ca.cert.pem
aws s3 cp s3://fmu-dev/env-api-devel ./.env --profile fmu
'./var/cache','./var/logs','./var/jwt','./var/cloud','./public/uploads','./vendor' | % {New-Item -Name "$_" -ItemType 'Directory'}
aws s3 cp s3://fmu-dev/fmu-google-cloud.json ./var/cloud/FoodMeUp-dc2389a0a0cd.json --profile fmu
$file = ".env"
$pattern = "(?<=JWT_PASSPHRASE=).*"
$values=Select-String -Path $file -Pattern $pattern |
Select-Object -Expand matches |
Select-Object value
$JWT_PASSPHRASE=$values.value
openssl genrsa -passout pass:$JWT_PASSPHRASE -out ./var/jwt/private.pem -aes256 4096
openssl rsa -passin pass:$JWT_PASSPHRASE -pubout -in ./var/jwt/private.pem -out ./var/jwt/public.pem
Write-output "Certificates added successfully" | Green
Write-output "Granting rights on all folders...";
takeown /r /d y /f "var"
icacls "./var/" /grant:r Users:F /t
Write-output "Rights granted" | Green;
Write-output "Building and uping all containers..."
Write-output "Make sure to accept prompts asking to share project local drive or share it manually...";
docker-compose up -d
Write-output "All containers up!" | Green
Write-output "Installing dependencies, please wait..."
docker exec -ti fmu_backend-php php composer install
Write-output "Dependencies installed!" | Green
Write-output "Generating assets, please wait..."
docker exec -ti fmu_backend-php bin/console assets:install
Write-output "Assets generated!" | Green
Write-output "Initializing application, please wait..."
docker exec -ti fmu_backend-php /var/www/bin/phing init
Write-output "Application initialized!" | Green
Write-output "All done!" | Green
I want to run a shell script on a remote machine after logging in through ssh.
Here is my code.
#!/bin/bash
USERNAME=user
HOSTS="172.20.16.120"
for HOSTNAME in ${HOSTS} ; do
sshpass -p password ssh -t -t ${USERNAME}@${HOSTNAME}
echo [QACOHORT-INFO] Space Before clean up
df -h
callworkspace()
{
if [ "$?" = "0" ];
then
for i in `ls`; do
if [ "$1" = "workspace" ] && echo "$i" | grep -q "$VERSION_WS" && [ "$VERSION_WS" != "" ];
then
echo [QACOHORT-INFO] Removing files-in:
pwd
rm -rf $i
echo [QACOHORT-INFO] Removed: $i
fi
if echo "$i" | grep -q "wasabi$VERSION_HUDSON" && [ "$VERSION_HUDSON" != "" ];
then
echo [QACOHORT-INFO] Removing files-in $i
rm -rf $i/*
elif echo "$i" | grep -q "wasabiSDK$VERSION_HUDSON" && [ "$VERSION_HUDSON" != "" ];
then
echo [QACOHORT-INFO] Removing files-in $i
#rm -rf $i/*
fi
done
fi
}
unamestr=`uname`
if [ "$unamestr" = "Linux" ];
then
cd /home/jenkin/workspace/Hudson/
callworkspace
cd /home/jenkin/workspace/Hudson/workspace
callworkspace workspace
echo [QACOHORT-INFO] Removing temp files
rm -rf /tmp/tmp*
rm -rf ~/.local/share/Trash/*
else [ "$unamestr" = "Darwin" ];
cd /Users/ITRU/ws/Hudson/
callworkspace
cd /Users/ITRU/ws/Hudson/workspace
callworkspace workspace
echo [QACOHORT-INFO] Removing temp files
rm -rf /tmp/tmp*
rm -rf ~/.Trash/*
fi
unamestr=`uname -o`
if [ "$unamestr" = "Cygwin" ];
then
cd D:/work/Hudson
callworkspace
cd D:/work/Hudson/workspace
callworkspace workspace
fi
echo [QACOHORT-INFO] Space after clean up
df -h
done
exit 0
After logging in through ssh, I need to run the lines above as a shell script. I don't want to keep those lines in a .sh file and run that; I need to run it in Jenkins. Can anyone help?
I suggest you follow these steps:
Configure your remote machine as a slave node.
Jenkins provides Node properties.
Go to Node Properties > Environment Variables; there you can give the name and value for the configuration. I have the following in my setup:
name: remotemachine1
value: 172.20.16.120
name:USERNAME
value: user
After you are done with the Jenkins node configuration, you can create a Jenkins job and configure it. The Build step of the job configuration provides "Execute Windows batch command", and you can run your shell script there. Please do not forget to specify your remote machine in the "Restrict where this project can be run" step inside the Jenkins job configuration.
Please let me know if you have any specific further questions.
I stumbled upon a similar problem just now, and you could try to solve it this way:
for HOSTNAME in ${HOSTS} ; do
sshpass -p password ssh -t -t ${USERNAME}@${HOSTNAME} '(
pwd
ls -l
<put your script here>
)'
done
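Quoting matters in this pattern: with single quotes (or a quoted heredoc delimiter), the command block is not expanded locally, so variables are evaluated on the remote side. A sketch of the same stdin-feeding idea, with bash -s standing in for ssh user@host so it can run anywhere (the variable names are made up for illustration):

```shell
#!/bin/bash
# Feed a command block over stdin, the same way
# ssh user@host <<'EOF' ... EOF would. Quoting EOF prevents local
# expansion, so $REMOTE_VAR is evaluated by the receiving shell.
REMOTE_OUT=$(bash -s <<'EOF'
REMOTE_VAR="evaluated on the remote side"
echo "$REMOTE_VAR"
EOF
)
echo "$REMOTE_OUT"
```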
An array holds the files accessed, and the archive files are split into smaller sizes in preparation for online backup. I am attempting to retrieve the exit code of the split command on each iteration through the loop. However, it returns exit code 1, yet it says the operation was successful. Why?
#!/bin/bash
declare -a SplitDirs
declare -a CFiles
CDIR=/mnt/Net_Pics/Working/Compressed/
SDIR=/mnt/Net_Pics/Working/Split/
Err=/mnt/Net_Pics/Working
SplitDirs=(`ls -l "$CDIR" --time-style="long-iso" | egrep '^d' | awk '{print $8}'`)
for dir in "${SplitDirs[@]}"
do
if [ ! -d "$SDIR""$dir" ]; then
mkdir "$SDIR""$dir"
else continue
fi
CFiles=(`ls -l "$CDIR$dir" --time-style="long-iso" | awk '{print $8}'`)
for f in "${CFiles[@]}"
do
if [ ! -e "$SDIR""$dir"/"$f" ]; then
split -d -a 4 -b 1992295 "$CDIR""$dir"/"$f" "$SDIR""$dir"/"$f" --verbose
if [[ "$?" == 1 ]]
then
rm -rf "$SDIR""$dir" && echo "$SDIR""$dir" "Removed due to Error code" "$?""." "Testing Archives and Retrying..." 2>&1 | tee "$Err"/Split_Err.log
7z t "$CDIR""$dir"/"$f" >> tee stdout.log 2>> "$Err"/"$dir"/7z_Err.log >&2
mkdir "$SDIR""$dir" && split -d -a 4 -b 1992295 "$CDIR""$dir"/"$f" "$SDIR""$dir"/"$f" --verbose
if [[ "$?" == 1 ]]
then
rm -rf "$SDIR""$dir" && echo "$SDIR""$dir" "Removed a second time due to Error code "$?". Skipping..." 2>&1 | tee "$Err"/Split_Err.log
continue
else
echo "Split Success:" "$SDIR""$dir"/"$f" "ended with Exit status" "$?" && continue
fi
else
echo "Split Success:" "$SDIR""$dir" "ended with Exit status" "$?" && continue
fi
else
echo "$SDIR""$dir"/"$f" "Exists... Skipping Operation" 2>&1 | tee "$Err"/"$dir"/Split_Err.log
continue
fi
done
done
(The echo piping in a previous revision of the question was misplaced code, and thank you for pointing that out. The exit code remains the same, though. Overall, the script does what I want it to except for the exit code portion.)
Remove | echo $?. You are processing the return code of the echo command (the last command), not of split.
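To act on split's status reliably, test the command itself in the if condition; by the time you read $? after other commands (echo, rm, mkdir), the status belongs to the last of those, not to split. A minimal sketch using a throwaway temp directory rather than your archive layout:

```shell
#!/bin/bash
# Test split's exit status directly in the if condition instead of
# echoing $? after other commands have already overwritten it.
tmp=$(mktemp -d)
printf 'some data to split\n' > "$tmp/in"
if split -d -a 4 -b 8 "$tmp/in" "$tmp/out"; then
    result="ok"
else
    result="failed"
fi
echo "split result: $result"
rm -rf "$tmp"
```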
I want to setup a cron job to rsync a remote system to a backup partition, something like:
bash -c 'rsync -avz --delete --exclude=proc --exclude=sys root#remote1:/ /mnt/remote1/'
I would like to be able to "set it and forget it" but what if /mnt/remote1 becomes unmounted? (After a reboot or something) I'd like to error out if /mnt/remote1 isn't mounted, rather than filling up the local filesystem.
Edit:
Here is what I came up with for a script; cleanup improvements are appreciated (especially for the empty then ... else branches; I couldn't leave them empty or bash would error out):
#!/bin/bash
DATA=data
ERROR="0"
if cut -d' ' -f2 /proc/mounts | grep -q "^/mnt/$1\$"; then
ERROR=0
else
if mount /dev/vg/$1 /mnt/$1; then
ERROR=0
else
ERROR=$?
echo "Can't backup $1, /mnt/$1 could not be mounted: $ERROR"
fi
fi
if [ "$ERROR" = "0" ]; then
if cut -d' ' -f2 /proc/mounts | grep -q "^/mnt/$1/$DATA\$"; then
ERROR=0
else
if mount /dev/vg/$1$DATA /mnt/$1/data; then
ERROR=0
else
ERROR=$?
echo "Can't backup $1, /mnt/$1/data could not be mounted."
fi
fi
fi
if [ "$ERROR" = "0" ]; then
rsync -aqz --delete --numeric-ids --exclude=proc --exclude=sys \
root#$1.domain:/ /mnt/$1/
RETVAL=$?
echo "Backup of $1 completed, return value of rsync: $RETVAL"
fi
mountpoint seems to be the best solution to this: it returns 0 if a path is a mount point:
#!/bin/bash
if mountpoint -q /path; then
echo "filesystem mounted"
else
echo "filesystem not mounted"
fi
Found at LinuxQuestions.
if cut -d' ' -f2 /proc/mounts | grep '^/mnt/remote1$' >/dev/null; then
rsync -avz ...
fi
Get the list of mounted partitions from /proc/mounts, only match /mnt/remote1 (and if it is mounted, send grep's output to /dev/null), then run your rsync job.
Recent greps have a -q option that you can use instead of sending the output to /dev/null.
A quick google led me to this bash script that can check if a filesystem is mounted. It seems that grepping the output of df or mount is the way to go:
if df |grep -q '/mnt/mountpoint$'
then
echo "Found mount point, running task"
# Do some stuff
else
echo "Aborted because the disk is not mounted"
# Do some error correcting stuff
exit -1
fi
Copy and paste the script below to a file (e.g. backup.sh).
Make the script executable (e.g. chmod +x backup.sh)
Call the script as root with the format backup.sh [username (for rsync)] [backup source device] [backup source location] [backup target device] [backup target location]
!!!ATTENTION!!! Don't execute the script as root user without understanding the code!
I think there's nothing to explain. The code is straightforward and well documented.
#!/bin/bash
##
## COMMAND USAGE: backup.sh [username] [backup source device] [backup source location] [backup target device] [backup target location]
##
## for example: sudo /home/manu/bin/backup.sh "manu" "/media/disk1" "/media/disk1/." "/media/disk2" "/media/disk2"
##
##
## VARIABLES
##
# execute as user
USER="$1"
# Set source location
BACKUP_SOURCE_DEV="$2"
BACKUP_SOURCE="$3"
# Set target location
BACKUP_TARGET_DEV="$4"
BACKUP_TARGET="$5"
# Log file
LOG_FILE="/var/log/backup_script.log"
##
## SCRIPT
##
function end() {
echo -e "###########################################################################\
#########################################################################\n\n" >> "$LOG_FILE"
exit $1
}
# Check that the log file exists
if [ ! -e "$LOG_FILE" ]; then
touch "$LOG_FILE"
chown $USER "$LOG_FILE"
fi
# Check if backup source device is mounted
if ! mountpoint "$BACKUP_SOURCE_DEV"; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Backup source device is not mounted!" >> "$LOG_FILE"
end 1
fi
# Check that source dir exists and is readable.
if [ ! -r "$BACKUP_SOURCE" ]; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to read source dir." >> "$LOG_FILE"
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
end 1
fi
# Check that target dir exists and is writable.
if [ ! -w "$BACKUP_TARGET" ]; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to write to target dir." >> "$LOG_FILE"
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
end 1
fi
# Check if the drive is mounted
if ! mountpoint "$BACKUP_TARGET_DEV"; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - WARNING: Backup device needs mounting!" >> "$LOG_FILE"
# If not, mount the drive
if mount "$BACKUP_TARGET_DEV" > /dev/null 2>&1 || /bin/false; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device mounted." >> "$LOG_FILE"
else
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to mount backup device." >> "$LOG_FILE"
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
end 1
fi
fi
# Start entry in the log
echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync started." >> "$LOG_FILE"
# Start sync
su -c "rsync -ayhEAX --progress --delete-after --inplace --compress-level=0 --log-file=\"$LOG_FILE\" \"$BACKUP_SOURCE\" \"$BACKUP_TARGET\"" $USER
echo "" >> "$LOG_FILE"
# Unmount the drive so it does not accidentally get damaged or wiped
if umount "$BACKUP_TARGET_DEV" > /dev/null 2>&1 || /bin/false; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device unmounted." >> "$LOG_FILE"
else
echo "$(date "+%Y-%m-%d %k:%M:%S") - WARNING: Backup device could not be unmounted." >> "$LOG_FILE"
fi
# Exit successfully
end 0
I am skimming this, but I would think you would rather use rsync -e ssh and set up keys to accept the account.
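A sketch of that combination, key-based ssh plus a mount-point guard, suitable for cron. The host name, key path, and mount point are placeholders, not values from the question:

```shell
#!/bin/bash
# Cron-safe pattern: refuse to run rsync unless the target is actually
# a mount point, and authenticate with a key file instead of an
# interactive password. remote1, the key path, and /mnt/remote1 are
# placeholders.
backup() {
    local mount="$1"
    if ! mountpoint -q "$mount"; then
        echo "$mount is not mounted, skipping backup" >&2
        return 1
    fi
    rsync -avz --delete --exclude=proc --exclude=sys \
        -e "ssh -i /root/.ssh/backup_key" root@remote1:/ "$mount/"
}

backup /mnt/remote1 || echo "backup skipped"
```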