Permission Issue: Creating a Postgres backup as the postgres user - shell

I have the following Postgres backup script; it's a shell script written to run as the postgres user.
But the problem is that the postgres user doesn't have permission to write to these directories. I don't have sudo on these machines, but I have changed the directory to 755 and been added to one of the groups that has read-write-execute permission. Since the postgres user isn't part of that Unix group, I guess that's why I'm running into this issue.
My goal is to put this in the crontab, but before that I need to get the script running with the proper permissions:
#!/bin/bash
# location to store backups
backup_dir="/location/to/dir"
# name of the backup file includes the date
backup_date=$(date +%d-%m-%Y)
# only keep backups for 30 days (to keep storage usage low)
number_of_days=30
databases=$(psql -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d')
for i in $databases; do
    if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
        echo "Dumping $i to $backup_dir/${i}_$backup_date"
        pg_dump -Fc "$i" > "$backup_dir/${i}_$backup_date"
    fi
done
# remove backups older than $number_of_days days
find "$backup_dir" -type f -prune -mtime +"$number_of_days" -exec rm -f {} \;
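For context, once the permissions are sorted out, the eventual crontab entry under the postgres user could look something like this (the script path, schedule, and log path are placeholders):
# edit the postgres user's crontab with crontab -e while running as postgres,
# then run the backup script nightly at 03:00 and keep its output for troubleshooting
0 3 * * * /path/to/postgres_backup.sh >> /tmp/postgres_backup.log 2>&1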

Before doing this, be sure to log in as a superuser (sudo su) and try executing these:
usermod -a -G unix postgres (add the existing postgres user to the unix group)
su postgres (switch to the postgres user)
mkdir folder (create the directory where postgres needs to write the backup files)
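To verify the fix before wiring up cron, a quick write test as the postgres user (the directory path is a placeholder) should succeed:
# still as the postgres user: confirm the backup directory is writable
touch /location/to/dir/test_file && echo "writable" && rm /location/to/dir/test_file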
*** From this line down is my answer to the find-missing-semicolon question:
Just to illustrate with a shell script example: you can capture the password using the read command and store it in a variable. Here I store the password in password and echo it afterwards. I hope this helps.
#!/bin/bash
read -s -p "Password: " password
echo $password
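If the goal is to feed the captured password to a Postgres tool non-interactively, one option (my assumption, not something from the original answer) is the PGPASSWORD environment variable, which libpq clients such as pg_dump honor:
read -s -p "Password: " password
export PGPASSWORD="$password"   # database and user names below are placeholders
pg_dump -U backup_user -Fc some_database > some_database.dump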

Related

Postgres database backup not working locally (Crontab + Shell script using expect)

I am having issues on my Ubuntu server: I have two scripts which perform a pg_dump of two databases (a remote one and a local one). However, the backup file for the local one always ends up empty.
When I run the script manually, there is no problem.
The issue only occurs when the script is run via crontab while I am NOT logged into the machine. If I'm in an SSH session there is no problem; it works with crontab. But when I'm not connected, it does not work.
My full scripts/setup are below; feel free to suggest any improvements. For now I just want it to work, but if my method is insecure or inefficient I would gladly hear about alternatives :)
So far I've tried:
Using the postgres user for the local database (instead of the other user my applications use to access the DB)
Switching pg_dump to /usr/bin/pg_dump
Here's my setup:
Crontab entry:
0 2 * * * path/to/my/script/local_databasesBackup.sh ; path/to/my/script/remote_databasesBackup.sh
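One way to see what actually happens when cron runs them (a debugging aid, not part of the original setup; the log path is illustrative) is to redirect the output of both scripts:
# capture stdout and stderr from both backup scripts so failures under cron are visible
0 2 * * * path/to/my/script/local_databasesBackup.sh >> /tmp/databasesBackup.log 2>&1 ; path/to/my/script/remote_databasesBackup.sh >> /tmp/databasesBackup.log 2>&1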
scriptInitialization.sh
set LOCAL_PWD "password_goes_here"
set REMOTE_PWD "password_goes_here"
Expect script, called by crontab (local/remote_databaseBackup.sh):
#!/usr/bin/expect -f
source path/to/my/script/scriptInitialization.sh
spawn path/to/my/script/localBackup.sh
expect "Password: "
send "${LOCAL_PWD}\r"
expect eof
exit
Actual backup script (local/remoteBackup.sh):
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
delete_yesterday_backup_and_perform_backup () {
    /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
    YESTERDAY_2_AM=$(date --date="02:00 yesterday" +"%Y-%m-%d_%H%M")
    YESTERDAY_BACKUP_FILE=/path/to/local/backup/folder/$YESTERDAY_2_AM.tar
    if [ -f "$YESTERDAY_BACKUP_FILE" ]; then
        echo "$YESTERDAY_BACKUP_FILE exists. Deleting"
        rm $YESTERDAY_BACKUP_FILE
    else
        echo "$YESTERDAY_BACKUP_FILE does not exist."
    fi
}
CURRENT_DAY_NUMBER=$(date +"%d")
FIRST_DAY_OF_THE_MONTH="01"
if [ "$CURRENT_DAY_NUMBER" = "$FIRST_DAY_OF_THE_MONTH" ]; then
    echo "First day of the month: Backup without deleting the previous backup"
    /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
else
    echo "Not the first day of the month: Delete backup from yesterday and backup"
    delete_yesterday_backup_and_perform_backup
fi
The only difference between my local and remote scripts is the pg_dump parameters:
Local looks like this /usr/bin/pg_dump -U postgres -W -F t localDatabaseName > /path/to/local/backup/folder/$DATE.tar
Remote looks like this: pg_dump -U remote_account -p 5432 -h remote.address.com -W -F t remoteDatabase > /path/to/local/backup/folder/$DATE.tar
I ended up making two separate scripts because I thought having a single one might have been the cause of the issue. However, I'm now pretty sure it's not.

Bash cron job on hpanel not locating directory

I have the following code in a cron job. It runs, but the code does not really do what it is supposed to: it does not create the directory and it does not do anything else in the code. Please help me check whether the way I pointed to the directory is wrong.
#!/bin/bash
NAMEDATE=`date +%F_%H-%M`_`whoami`
NAMEDATE2=`date `
mkdir ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"*******" u3811*****_data | gzip ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" |
mailx -a ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail#gmail.com
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
exit 0
Your NAMEDATE variable needs to be modified a bit, as shown below; see the bash documentation on variables for more information.
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
When you issue the mkdir command you will need to pass the -p option to create the complete directory structure if it doesn't exist.
mkdir -p ~/home/u3811numbers/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
Also, the ~ character on Linux-based distributions is a shortcut for the home directory of the user that invokes it, so in the line below the path expands to something like /home/u3811*****/home/u3811*****/domains/website.com/public_html/cron/backup/files/2020-09-04_23-13_ (the home directory prefix ends up duplicated).
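A quick illustration of that expansion, assuming the account's home directory is /home/u3811*****:
echo ~/home/u3811*****/domains/website.com   # expands to /home/u3811*****/home/u3811*****/domains/website.com
echo ~/domains/website.com                   # expands to /home/u3811*****/domains/website.com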
In your last command before the exit, you need the trailing wildcard (*) so you don't remove the execute bit from the directory itself; see below:
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
The final version of your script will look something like this.
#!/bin/bash
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
NAMEDATE2=$(date)
mkdir -p ~/home/u3811******/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"******" u3811*****_data | gzip > ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" | mailx -a ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail#gmail.com
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
To debug a bash script you can always pass the -x flag, which prints each command as it is executed.
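For example (the script name is a placeholder):
bash -x ./backup.sh        # prints each expanded command as it runs
# or add "set -x" near the top of the script itself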

Trouble with a script I am writing for Mac/Apple management

I am a technician managing 10 Mac computers. I do not have an MDM to manage them, so I manage them manually, one by one. On some of my Macs, even after I remove administrator rights, the managed account comes back as an administrator.
I am at the point where I will write a script to prevent them from becoming administrators again.
This is my script :
PASSWORD=$(echo U2FsdGVkX1+6JWRG1T9hsA/DIOfb2OZdXBf9uVcYTxY= | openssl enc -aes-128-cbc -a -d -salt -pass pass:wtf)
echo $PASSWORD | sudo -u administrateur adminUsers=$(dscl . -read Groups/admin GroupMembership | cut -c 18-)
for user in $adminUsers
do
if [ "$user" != "root" ] && [ "$user" != "administrateur" ]
then
dseditgroup -o edit -d $user -t user admin
if [ $? = 0 ]; then echo "Removed user $user from admin group"; fi
else
echo "Admin user $user left alone"
fi
done
The encryption command works, but my second command (line 2) can't take my variable $PASSWORD; I get this:
sudo: administrateur: command not found
The script gets stuck at "administrateur" on line 2.
There are several problems with the line
echo $PASSWORD | sudo -u administrateur adminUsers=$(dscl . -read Groups/admin GroupMembership | cut -c 18-)
First, $PASSWORD isn't in double-quotes, so several special characters might cause trouble. Actually, echo has its own problems with special characters, so printf '%s\n' "$PASSWORD" would be much more reliable.
Except that sudo doesn't accept passwords over standard input, so the pipe won't work anyway.
Also, you can't do a variable assignment in a sudo command. Well, you can, but it's useless because it would make a subprocess as the other user, set the variable in that subprocess... and then exit the subprocess so the variable vanishes along with it.
And the order of evaluation is all wrong. The shell expands the $( ) part before running any of the commands (and as the current user). So it expands to something like:
echo pwgoeshere | sudo -u administrateur adminUsers=root administrateur
... which will tell sudo to run the command administrateur with the variable adminUsers set to "root". Not what you want at all.
But there's good news: dscl can read the group membership from any user account, so you don't need sudo or any of that. Just use:
adminUsers=$(dscl . -read Groups/admin GroupMembership | cut -c 18-)
On the other hand, dseditgroup does need special access to change group membership. What user is this script running as? If it's already running as root, it'll just work. If not, you could use sudo (with the complications of passing the password to that), or, much simpler, pass the admin credentials as arguments with the -u and -P options:
dseditgroup -o edit -u administrateur -P "$PASSWORD" -d "$user" -t user admin
Two more suggestions: use lowercase variable names (e.g. password instead of PASSWORD) to avoid conflicts with the various all-caps names that have special meanings, and run your scripts through shellcheck.net and correct the things it points out.
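Putting those pieces together, a sketch of how the whole script could look (assuming the same administrateur account and openssl command from the question, and that the script runs as a regular user):
#!/bin/bash
# decrypt the stored password (same command as in the question)
password=$(echo U2FsdGVkX1+6JWRG1T9hsA/DIOfb2OZdXBf9uVcYTxY= | openssl enc -aes-128-cbc -a -d -salt -pass pass:wtf)
# reading group membership needs no special privileges
admin_users=$(dscl . -read Groups/admin GroupMembership | cut -c 18-)
for user in $admin_users; do
    if [ "$user" != "root" ] && [ "$user" != "administrateur" ]; then
        # authenticate dseditgroup with the admin account instead of sudo
        if dseditgroup -o edit -u administrateur -P "$password" -d "$user" -t user admin; then
            echo "Removed user $user from admin group"
        fi
    else
        echo "Admin user $user left alone"
    fi
done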

Shell script to remove backups that are older than 2 Weeks

I have a shell script that takes a backup of the Mongo DB on a daily basis, and it's working as expected. Now I need to remove backups that are older than 2 weeks. Would that be achievable with the current naming convention? Can anyone shed some light? I'm fairly new to shell scripting.
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/dbBackups/$DIR
mkdir $DEST
mongodump --authenticationDatabase admin -h 127.0.0.1 -d pipe -u <username> -p <password>
Finally got it with the below script
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/dbBackups/$DIR
mkdir $DEST
mongodump --authenticationDatabase admin -h 127.0.0.1 -d pipe -u <username> -p <password>
find /dbBackups/* -type d -ctime +14 -exec rm -rf {} +
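If you want to see what would be deleted before letting the -exec rm -rf run, you can first run the same find without the action:
find /dbBackups/* -type d -ctime +14    # lists the directories that would be removed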
Thanks to Shell script to delete directories older than n days

TAR doesn't work properly with the crontab

First of all, I'm saying that it doesn't work properly with the crontab because when I run the script manually it works fine.
The problem is that when I run the backup script from the cronjob and it gets to tarring up the MySQL dumps, the tar archive is only 16 bytes (and it's empty, so it looks like there were no files to pack into the archive). The strange thing is that when I run the script manually, it runs for about 5 minutes and the resulting archive is ~1.8 GB.
Here is my bash code:
#!/usr/local/bin/bash
# Configuration
BACKUPD="/backup/mysql"
MySQLuser='root'
MySQLpass='xxxx'
# End configuration
ROK=`date +%Y`
MIESIAC=`date +%m`
DZIEN=`date +%d`
GIM=`date +%H-%M`
if [ -d $BACKUPD/$ROK/$MIESIAC/$DZIEN ]
then
echo
else
mkdir -p $BACKUPD/$ROK/$MIESIAC/$DZIEN
fi
for db in $(echo "SHOW DATABASES;" | mysql --user=$MySQLuser --password=$MySQLpass | grep -v -e "Database" -e "information_schema")
do
mysqldump --skip-lock-tables --ignore-table=log.log --user="$MySQLuser" --password="$MySQLpass" $db >$BACKUPD/$ROK/$MIESIAC/$DZIEN/$db.sql
done
cd $BACKUPD/$ROK/$MIESIAC/$DZIEN && tar jcPf $BACKUPD/$ROK/$MIESIAC/$DZIEN/mysql-$GIM.tar.bz2 *.sql && rm -rf *.sql
Where is the problem? Has anyone experienced a problem like this before?
Regards.
Can you try with the full path names for mysqldump and mysql inside your script?
So:
if which mysql returns /usr/local/mysql/bin/mysql
and
if which mysqldump returns /usr/local/mysql/bin/mysqldump
then modify your script to:
for db in $(echo "SHOW DATABASES;" | /usr/local/mysql/bin/mysql --user=$MySQLuser --password=$MySQLpass | grep -v -e "Database" -e "information_schema")
do
/usr/local/mysql/bin/mysqldump --skip-lock-tables --ignore-table=log.log --user="$MySQLuser" --password="$MySQLpass" $db >$BACKUPD/$ROK/$MIESIAC/$DZIEN/$db.sql
done
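The underlying issue is usually that cron runs with a very minimal PATH, so an alternative (my suggestion, not part of the original answer) is to set PATH once near the top of the script instead of hard-coding every binary:
# adjust the directories to wherever mysql and mysqldump actually live on your system
PATH=/usr/local/mysql/bin:/usr/local/bin:/usr/bin:/bin
export PATH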
My guess is that the last line is also part of the problem. In:
cd $BACKUPD/$ROK/$MIESIAC/$DZIEN && tar jcPf $BACKUPD/$ROK/$MIESIAC/$DZIEN/mysql-$GIM.tar.bz2 *.sql && rm -rf *.sql
if the mysqldump loop produced no .sql files (or the cd fails because a variable is empty in cron's environment), the *.sql glob matches nothing and tar creates an essentially empty archive. Try the following instead; it is safer and easier to debug.
old_dir=`pwd`
cd "$BACKUPD/$ROK/$MIESIAC/$DZIEN"
tar jcPf mysql-$GIM.tar.bz2 *.sql
rm -fr *.sql
cd "$old_dir"
There still might not be any .sql files to tar up. I don't have mysql installed, but I suspect that the for loop is messed up as well. Try something like the following instead:
mysqlshow | \
xargs mysqldump --databases | \
bzip2 > $BACKUPD/$ROK/$MIESIAC/$DZIEN/mysql-$GIM.bz2
You will probably need to insert other arguments for the mysqlshow and mysqldump commands. Of course this won't create a tarball, but it will give you a compressed backup.
