Postgres database backup not working locally (Crontab + Shell script using expect) - shell

I am having issues on my Ubuntu server: I have two scripts which perform a pg_dump of two databases (a remote one and a local one). However, the backup file for the local one always ends up empty.
When I run the script manually, there is no problem.
The issue only occurs when the script is run via crontab while I am NOT logged into the machine. If I'm in an SSH session it works fine with crontab, but when I'm not connected it does not work.
Check out my full scripts/setup below, and feel free to suggest any improvements. For now I just want it to work, but if my method is insecure/inefficient I would gladly hear about alternatives :)
So far I've tried:
Using the postgres user for the local database (instead of another user I use to access the DB with my applications)
Switching pg_dump to /usr/bin/pg_dump
Here's my setup:
Crontab entry:
0 2 * * * path/to/my/script/local_databasesBackup.sh ; path/to/my/script/remote_databasesBackup.sh
scriptInitialization.sh
set LOCAL_PWD "password_goes_here"
set REMOTE_PWD "password_goes_here"
Expect script, called by crontab (local/remote_databasesBackup.sh):
#!/usr/bin/expect -f
source path/to/my/script/scriptInitialization.sh
spawn path/to/my/script/localBackup.sh
expect "Password: "
send "${LOCAL_PWD}\r"
expect eof
exit
Actual backup script (local/remoteBackup.sh):
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
delete_yesterday_backup_and_perform_backup () {
    /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
    YESTERDAY_2_AM=$(date --date="02:00 yesterday" +"%Y-%m-%d_%H%M")
    YESTERDAY_BACKUP_FILE=/path/to/local/backup/folder/$YESTERDAY_2_AM.tar
    if [ -f "$YESTERDAY_BACKUP_FILE" ]; then
        echo "$YESTERDAY_BACKUP_FILE exists. Deleting"
        rm "$YESTERDAY_BACKUP_FILE"
    else
        echo "$YESTERDAY_BACKUP_FILE does not exist."
    fi
}
CURRENT_DAY_NUMBER=$(date +"%d")
FIRST_DAY_OF_THE_MONTH="01"
if [ "$CURRENT_DAY_NUMBER" = "$FIRST_DAY_OF_THE_MONTH" ]; then
    echo "First day of the month: Backup without deleting the previous backup"
    /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
else
    echo "Not the first day of the month: Delete backup from yesterday and backup"
    delete_yesterday_backup_and_perform_backup
fi
The only difference between my local and remote script is the pg_dump parameters:
Local looks like this /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
Remote looks like this: pg_dump -U remote_account -p 5432 -h remote.address.com -W -F t remoteDatabase > /path/to/local/backup/folder/$DATE.tar
I ended up making two separate scripts because I thought combining them might have been the cause of the issue. However, I'm now pretty sure it's not.
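For what it's worth, a commonly suggested alternative to wrapping pg_dump in expect is a ~/.pgpass file for the user the cron job runs as, which lets pg_dump authenticate without any prompt. A minimal sketch using the placeholder names from the scripts above:
# ~/.pgpass, owned by the cron user and chmod 600
# format: hostname:port:database:username:password
cat >> ~/.pgpass <<'EOF'
localhost:5432:localDatabaseName:postgres:password_goes_here
remote.address.com:5432:remoteDatabase:remote_account:password_goes_here
EOF
chmod 600 ~/.pgpass
With that in place, the -W flag (which forces a password prompt) can be dropped from both pg_dump calls, and the expect wrappers are no longer needed: crontab can call the backup scripts directly.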

Related

Running sudo via ssh on remote server

I am trying to write a deployment script which, after copying the new release to the server, should perform a few sudo commands on the remote machine.
#!/bin/bash
app=$1
echo "Deploying application $app"
echo "Copy file to server"
scp -pr $app-0.1-SNAPSHOT-jar-with-dependencies.jar nuc:/tmp/
echo "Execute deployment script"
ssh -tt stefan@nuc ARG1=$app 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo Hello world
echo $ARG1
sudo ifconfig
exit
ENDSSH
The file gets copied correctly and the passed argument is printed as well. But the password prompt shows for two seconds, then it says "Sorry, try again", and the second prompt shows the text I enter in plain text (i.e. not masked) and does not work even if I enter the password correctly.
stefan@X220:~$ ./deploy.sh photos
Deploying application photos
Copy file to server
photos-0.1-SNAPSHOT-jar-with-dependencies.jar 100% 14MB 75.0MB/s 00:00
Execute deployment script
# commands to run on remote host
echo Hello world
echo $ARG1
sudo ifconfig
exit
stefan@nuc:~$ # commands to run on remote host
stefan@nuc:~$ echo Hello world
Hello world
stefan@nuc:~$ echo $ARG1
photos
stefan@nuc:~$ sudo ifconfig
[sudo] password for stefan:
Sorry, try again.
[sudo] password for stefan: ksdlgfdkgdfg
I tried leaving out the -t flags for ssh as well as using -S for sudo, which did not help. Any help is highly appreciated.
What I would do:
ssh stefan@nuc bash -s foobar <<'EOF'
echo "arg1 is $1"
echo "$HOSTNAME"
ifconfig
exit
EOF
Tested, works well.
Notes:
for the trick to work, use an ssh key pair instead of a password; it's even more secure (a minimal sketch of the setup follows below)
mind where the bash -s argument goes; check how I pass it
no need for -tt at all
no need for sudo to run ifconfig, and better to use ip a anyway
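A minimal sketch of that key pair setup, assuming OpenSSH defaults (run once on the workstation; ssh-copy-id installs the public key on the server):
# generate a key pair (accept the default path; a passphrase is optional)
ssh-keygen -t ed25519
# install the public key on the server so ssh stefan@nuc stops asking for a password
ssh-copy-id stefan@nuc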
I came up with another solution: create another file with the script to execute on the remote server, copy it over using scp, and in the calling script do a
ssh -t remoteserver sudo /tmp/deploy_remote.sh parameter1
This works as expected. Of course the separate file is not the most elegant solution, but -t and -tt did not work when inlining the script to execute on the remote machine.
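Put together, the calling side of that two-file approach could look roughly like this (deploy_remote.sh is the hypothetical file holding the commands to run on the server, and it must be executable):
#!/bin/bash
app=$1
# copy the artifact and the remote script to the server
scp -p "$app"-0.1-SNAPSHOT-jar-with-dependencies.jar nuc:/tmp/
scp -p deploy_remote.sh nuc:/tmp/
# -t allocates a tty so sudo can prompt for (and mask) the password
ssh -t stefan@nuc sudo /tmp/deploy_remote.sh "$app"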

Bash - ncftpls is not working

So, I'm trying to get the list of files and folders in the uppermost directory of my server and set it as a variable.
I'm using ncftpls to get the list of files and folders. It runs, but it doesn't display any files or folders.
LIST=$(ncftpls -u $USER -p $PASSWORD -P $PORT ftp://$HOST/)
echo $LIST
I tried not setting it as a variable and just running the command ncftpls, but it still won't display any of the files or folders.
The strange thing is, though, when I run this script
ncftp -u $USER -p $PASSWORD -P $PORT ftp://$HOST/ <<EOF
ls
EOF
it outputs all the files and folders just fine. Although, then I can't set it as a variable (I don't think).
If anyone has any ideas on what is going on, that'd be much appreciated!
The only acceptable time to use FTP was the 1970s, when you could trust a host by the fact that it was allowed onto the internet.
Do not try to use it today. Use sftp, rsync, ssh or another suitable alternative.
You can capture output of any command with $(..):
In your case,
list=$(
ncftp -u $USER -p $PASSWORD -P $PORT ftp://$HOST/ <<EOF
ls
EOF
)
This happens to be equivalent to
list=$(ncftp -u $USER -p $PASSWORD -P $PORT ftp://$HOST/ <<< "ls")
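If you do move off plain FTP as suggested above, roughly the same capture works over SFTP. This is only a sketch: it assumes key-based authentication is already set up and that the server accepts SFTP on $PORT (-b - makes sftp read batch commands from stdin):
list=$(sftp -P "$PORT" -b - "$USER@$HOST" <<EOF
ls
EOF
)
echo "$list"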

Jenkins - Can't see my shell script's logs

EDIT:
Ok, so after the answer from Vasanta Koli, I've looked deeper into the builds.
And actually, I have found the full console output.
It's a bit weird at first, because you need to go into Build History or use the little arrow after your build's name to access it... instead of the "basic" console output you get when you click directly on your build's name.
Anyway, I can finally access my full logs!
Original question:
This question may just look dumb, but in my Jenkins configuration, I can't see all the logs from my build's shell script.
I've looked for an option to activate it, but I can't find it.
In my script, I'm just restoring a database, with an echo before each command, like this:
#!/usr/bin/env bash
timestamp=$(date +%T)
echo $timestamp "- Delete"
dropdb -h localhost -U user database
echo $timestamp "- Creation"
createdb -h localhost -E unicode -U user database
echo $timestamp "- Restore"
pg_restore -h localhost -U user -O -d database database.tar
The whole script is executed, but there are no logs for my build in the web UI (Console Output).
I'm obviously missing something here.
Can someone help me? Thank you!
Don't put the timestamp in a variable if you want the actual time at which each task (command) executes.
Also, if you want the logs to be redirected somewhere, you have to do it explicitly.
It should be something like below:
#!/usr/bin/env bash
logfile=/var/log/script.log
{
echo $(date +%T) "- Delete"
dropdb -h localhost -U user database
echo $(date +%T) "- Creation"
createdb -h localhost -E unicode -U user database
echo $(date +%T) "- Restore"
pg_restore -h localhost -U user -O -d database database.tar
} >> $logfile
Please check and update
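If the goal is still to see those lines in the Jenkins Console Output, one minimal sketch (with a hypothetical /path/to/restore_script.sh standing in for the script above) is to print the log back out at the end of the Execute shell build step:
#!/usr/bin/env bash
# Execute shell build step: run the restore script, then echo its
# redirected log so it also shows up in the Console Output
bash /path/to/restore_script.sh
cat /var/log/script.log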

Permission Issue: Creating postgres backup as postgres user

I have the following postgres backup script; it's a shell script written to run as the postgres user.
But the problem is that the postgres user doesn't have permission to write to these directories. I as a user don't have sudo on these machines, but I have changed the directory to 755 and added myself to one of the groups that has read-write-execute permission. Since the postgres user isn't part of that unix group, I guess that's why I am running into this issue.
My goal is to put this in the crontab, but prior to that I need to get the script running with the proper permissions:
#!/bin/bash
# location to store backups
backup_dir="/location/to/dir"
# name of the backup file has the date
backup_date=`date +%d-%m-%Y`
# only keep the backup for 30 days (maintain low storage)
number_of_days=30
databases=`psql -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`
for i in $databases; do
if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
echo Dumping $i to $backup_dir$i\_$backup_date
pg_dump -Fc $i > $backup_dir$i\_$backup_date
fi
done
find $backup_dir -type f -prune -mtime +$number_of_days -exec rm -f {} \;
Before doing this, be sure to log in as a super user (sudo su) and try executing these:
usermod -aG unix postgres (add the existing postgres user to the unix group; useradd would fail here since the user already exists)
su postgres (Login as postgres user)
mkdir folder (Go to the directory where postgres needs to write files)
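If the backup directory itself still rejects writes after that, a minimal sketch of handing it over to the group (assuming the group really is called unix and using the directory from backup_dir; this needs to be run by someone with sudo):
sudo chgrp unix /location/to/dir    # group-own the backup directory
sudo chmod 775 /location/to/dir     # let group members create files in it
# quick sanity check that the postgres user can actually write there
sudo -u postgres touch /location/to/dir/.write_test && sudo -u postgres rm /location/to/dir/.write_test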
From this line down is my answer to @find-missing-semicolon's question.
Just to illustrate with a shell script example: you can capture the password using the read command and put it into a variable. Here I stored the password in password and echoed it afterwards. I hope this helps.
#!/bin/bash
read -s -p "Password: " password
echo $password

How can I use bash to handle sql backups?

I have a database that I used to back up manually every day, like so:
mysqldump -uroot -ppassword forum > 4.25.2011.sql
However, I wanted to use a script to do the job instead of running mysqldump by hand.
If 3 existing .sql files exist in the backup directory, how can I delete the oldest one?
So far all I have is:
#!/bin/bash
today = `eval date +%m.%d.%Y` #how do I add this to the backup?
mysqldump -uroot -ppassword forum > /root/backups/4.25.2011.sql
I still can't figure out how to pass a variable to save my sql file as. How would I do that too?
My VPS is limited to 10gb and disk size is a concern or else I wouldn't delete any files.
For your first part, instead of
today = `eval date +%m.%d.%Y` #how do I add this to the backup?
mysqldump -uroot -ppassword forum > /root/backups/4.25.2011.sql
do
today=$(date +%m.%d.%Y)
mysqldump -uroot -ppassword forum > /root/backups/$today.sql
In particular:
there must not be any spaces around the equals sign
running eval is not what you want
The simplest way to only keep three files would be:
rm -f /root/backups/forum-3.sql
mv /root/backups/forum-2.sql /root/backups/forum-3.sql
mv /root/backups/forum-1.sql /root/backups/forum-2.sql
mysqldump -uroot -ppassword forum > /root/backups/forum-1.sql
Tools like ls -l or the content of the file should tell you the date if you need it.
If you really need the date in the file name, the easiest tool to help you is GNU date:
dateformat="%m.%d.%Y"
rm -f /root/backups/forum-$(date -d "-3 days" +$dateformat).sql
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +$dateformat).sql
Or use find, e.g.
find . -name "forum*.sql" -mtime +3 -delete
mysqldump -uroot -ppassword forum > /root/backups/forum-$(date +$dateformat).sql
After that, you could look at logrotate.
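For reference, a minimal logrotate sketch (a hypothetical /etc/logrotate.d/forum-backup entry, written as root; it assumes the dump goes to a fixed name like /root/backups/forum.sql, which logrotate then keeps three rotations of):
cat > /etc/logrotate.d/forum-backup <<'EOF'
/root/backups/forum.sql {
    daily
    rotate 3
    missingok
    nocompress
}
EOF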
Since you're rotating the backups daily, you could just delete all of the ones older than 3 days. Here's also how to write to a filename based on the date:
#!/bin/bash
mysqldump -uroot -ppassword forum > /root/backups/`date +%m.%d.%Y`.sql
find /root/backups -type f -ctime +3 -exec rm {} \;
If you just want to delete the oldest file (by modification time) you can add the following to your script:
ls -1t *.sql | tail -n 1 | xargs rm
I have a dedicated SQL server that holds all the databases for my remote Apache servers on AWS.
What I did is make one script per database, so I have total control over each one without messing things up.
For the backup of a database, use this script, named demosql.sh:
#!/bin/bash
# Database credentials
user="database user"
password="db password"
host="localhost"
db_name="database name"
# Other options
backup_path="/location/of/folder/for/sql"
date=$(date +"%d-%b-%Y")
time2=$(date +"%R")
# Set default file permissions
umask 177
# Dump database into SQL file
mysqldump --user=$user --password=$password --host=$host $db_name > $backup_path/$db_name-$date-$time2.sql
This is how I made a few small scripts, one per database I want to control:
#!/bin/bash
#project name db
source $(dirname $0)/demosql.sh
Then, immediately after the backup, I sync the folder holding all of my SQL dumps to S3 storage, like this:
#!/bin/bash
s3cmd sync --skip-existing /location/ s3://folder/
You can also drive scripts like this from a cronjob against remote servers, using ssh with bash.
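A sketch of that cron-over-ssh idea (hypothetical host name and script path; it assumes key-based ssh authentication so no password prompt gets in the way):
# crontab entry on a control host: run the per-database backup on the
# SQL server every night at 01:30 and keep a local log of the run
30 1 * * * ssh backupuser@sql-server '/path/to/scripts/demosql.sh' >> /var/log/remote_db_backup.log 2>&1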
