I can run commands like vacuumdb, pg_dump, and psql just fine in a script if I preface them like so:
/usr/bin/sudo -u postgres /usr/bin/pg_dump -Fc mydatabase > /opt/postgresql/prevac.gz
/usr/bin/sudo -u postgres /usr/bin/vacuumdb --analyze mydatabase
/usr/bin/sudo -u postgres /usr/bin/pg_dump -Fc mydatabase > /opt/postgresql/postvac.gz
SCHEMA_BACKUP="/opt/postgresql/$(date +%w).db.schema"
sudo -u postgres /usr/bin/pg_dump -C -s mydatabase > $SCHEMA_BACKUP
These run fine at the command line on Red Hat when I'm sudo'd to root and then, as shown in the commands above, sudo -u to postgres.
But when I try to kick this off from cron, I get zero bytes in all the files, meaning it didn't run properly. And the logs don't give me any clue that I can see.
My /etc/crontab file has this entry at the bottom
00 23 * * * root /etc/db_backup.cron
And yes, /etc/db_backup.cron is chmod ug+x, owned by root, and the top of the file says "#!/bin/bash" (minus doublequotes).
Anyone know what gives?
Since you seem to have superuser rights anyway, you could put those commands into the crontab of the postgres user like so:
sudo su postgres
crontab -e
and then put the pg_dump/vacuumdb commands there.
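For example, a minimal sketch of such an entry (the backup path is an assumption; use a directory the postgres user can write to):
# nightly at 23:00, run from the postgres user's crontab (no user field here)
00 23 * * * /usr/bin/vacuumdb --analyze mydatabase && /usr/bin/pg_dump -Fc mydatabase > /var/lib/pgsql/backups/mydatabase.dump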
I have a dynamic bash script that backs up all the databases on the server. It gets a list of all the databases and then vacuums each DB before performing a backup. All logs are written to a file and then that log is emailed to me. This is something you could use if you want.
Copy the code below into a file and add the file to your crontab. I have set up my pg_hba.conf to trust local connections.
#!/bin/bash
logfile="/backup/pgsql.log"
backup_dir="/backup"
touch $logfile
databases=`psql -h localhost -U postgres -q -c "\l" | sed -n 4,/\eof/p | grep -v rows\) | grep -v template0 | grep -v template1 | awk {'print $1'}`
echo "Starting backup of databases " >> $logfile
for i in $databases; do
dateinfo=`date '+%Y-%m-%d %H:%M:%S'`
timeslot=`date '+%Y%m%d%H%M'`
/usr/bin/vacuumdb -z -h localhost -U postgres $i >/dev/null 2>&1
/usr/bin/pg_dump -U postgres -i -F c -b $i -h 127.0.0.1 -f $backup_dir/$i-database-$timeslot.backup
echo "Backup and Vacuum complete on $dateinfo for database: $i " >> $logfile
done
echo "Done backup of databases " >> $logfile
tail -15 /backup/pgsql.log | mailx youremail@domain.com
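To schedule it, a crontab entry along these lines should do (the script path /backup/pg_backup.sh is an assumption):
# nightly at 23:00, as a user allowed to connect to PostgreSQL
00 23 * * * /backup/pg_backup.sh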
I have set up my cron like this; it runs at minutes 0 and 59 of every hour, Monday to Friday:
*/59 * * * 1-5 sh /home/my_user/scripts/back_my_bd.sh
The script that runs the backup is in the back_my_bd.sh file, and its content is:
pg_dump -U USERDATABASE DATABASENAME > /home/my_user/sql/mybackup.sql
And I created the .pgpass file in my home directory to allow the backup without specifying the user and password:
localhost:5432:DATABASENAME:USER:PASSWORD
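Note that libpq only reads .pgpass if the file is not accessible to group or others, so it should be locked down:
chmod 600 ~/.pgpass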
Sorry, my English is not good!
Your environment variables may not be set in cron.
In your normal session, you probably have defined these variables:
PGPORT
PGHOST
PGDATABASE
PGUSER
PGPASSWORD
Add an "env" into yout script.
you probably have "ident" authentication in your pg_hba.conf for your postgres user.
The option "-u postgres" fails when that is the case.
either change user to postgres in your backup script or configure a different authentication method.
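For the second option, the relevant pg_hba.conf lines look roughly like this (a sketch; adapt to your setup):
# TYPE  DATABASE  USER      METHOD
local   all       postgres  peer
local   all       all       md5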
Instead of the following command:
databases=`psql -h localhost -U postgres -q -c "\l" | sed -n 4,/\eof/p | grep -v rows\) | grep -v template0 | grep -v template1 | awk {'print $1'}`
You can use this instead:
databases=`psql -t -c "select datname from pg_database where datname not like 'template%';" | grep -v '^$'`
The first one returns '|' for the template databases and an empty line.
The second one is cleaner.
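A further variant along the same lines (a suggestion, using the datistemplate flag instead of a name pattern):
databases=`psql -At -h localhost -U postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate;"`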
databases=`psql -h localhost -U postgres -q -x -t -c "\l" | grep 'Name' | sed 's/ //g' | sed 's/Name|//g'`
Another version to get the list of databases:
psql -lqt | grep -vE '^ +(template[0-9]+|postgres)? *\|' | cut -d'|' -f1| sed -e 's/ //g' -e '/^$/d'
As my psql -lqt output is:
abcdefghij | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
abc | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
Related
I don't get why this doesn't work:
filesToInclude="$(ssh -t $host ls -t /var/log/*.LOG | sort | egrep -A6 "$LastBootUp" | tr '\n' '[:space:]' | tr -s [:space:] ' ')"
allALL="$( ssh $host grep -Ev "$excludeSearch" $filesToInclude )"
On another server, which is capable of ag, this works totally fine.
If I copy the output of the filesToInclude command into $filesToInclude manually, it works.
This is the output:
grep: o such file or directory
bash: 0m/var/log/A-MINI_23311_H007164M49_220419_1906_XX.LOG: No such file or directory
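The -t flag forces a pseudo-terminal, so the remote output likely comes back with carriage returns and ANSI escape codes (that is where the stray 0m and the mangled "No such file" come from). A sketch of a workaround, dropping -t and quoting the remote command so the glob expands on the remote side:
filesToInclude="$(ssh "$host" 'ls -t /var/log/*.LOG' | sort | grep -E -A6 "$LastBootUp" | tr '\n' ' ')"
allALL="$(ssh "$host" grep -Ev "$excludeSearch" $filesToInclude)"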
I'm working on a script that should find certain disks and add the hostname to them.
I'm using this for 40 servers, with a for loop in bash:
#!/bin/bash
for i in myservers{1..40}
do ssh user@$i findmnt -o SIZE,TARGET -n -l |
grep '1.8T\|1.6T\|1.7T' |
sed 's/^[ \t]*//' |
cut -d ' ' -f 2 |
awk -v HOSTNAME=$HOSTNAME '{print HOSTNAME ":" $0}'; done |
tee sorted.log
Can you help out with the quoting here? It looks like awk gets the hostname piped from localhost, not the remote server.
Everything after the first pipe is running locally, not on the remote server.
Try quoting the entire pipeline to have it run on the remote server:
#!/bin/bash
for i in myservers{1..40}
do ssh user#$i "findmnt -o SIZE,TARGET -n -l |
sed 's/^[ \t]*//' |
cut -d ' ' -f 2 |
awk -v HOSTNAME=\$HOSTNAME '{print HOSTNAME \":\" \$0}'" ;
done | tee sorted.log
This is a shorter version of your pipeline:
findmnt -o SIZE,TARGET -n -l |
awk -v HOSTNAME=$HOSTNAME '/M/{print HOSTNAME ":" $2}'
Applied to the above:
for i in myservers{1..40}
do ssh user@$i bash -c '
findmnt -o SIZE,TARGET -n -l |
awk -v HOSTNAME=$HOSTNAME '"'"'/M/{print HOSTNAME ":" $2}'"'"' '
done |
tee sorted.log
See: How to escape the single quote character in an ssh / remote bash command?
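An alternative that sidesteps the quote escaping entirely is to feed the remote script on stdin through a quoted heredoc (a sketch, keeping the /M/ pattern from above):
for i in myservers{1..40}; do
  ssh user@"$i" bash -s <<'EOF'
findmnt -o SIZE,TARGET -n -l |
awk -v HOSTNAME="$HOSTNAME" '/M/{print HOSTNAME ":" $2}'
EOF
done | tee sorted.log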
I'm trying to add predefined text around a sorted output and save it to a new file.
I'm using a curl command to gather my info.
$ curl --user XXX:1234!## "http://......"
Then I'm using grep to find IP addresses and sorting them so each appears only once.
$ curl --user XXX:1234!## "http://......" | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u
I need to add <my_text_predefined> before and after each address matched by ([0-9]{1,3}[\.]){3}[0-9]{1,3} and then save the result to a new file.
The script below only gets me the IP addresses:
$ curl --user XXX:1234!## "http://......" | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u
123.12.0.12
123.56.98.76
$ curl --user some_user:password "http://...." | grep -E -o -m1 '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u | sed 's/.*/<prefix> -s & <suffix>/'
So if we need to print some text for each IP, try xargs:
for i in {1..100}; do echo $i; done | xargs -n1 echo "Values are:"
If you need to make a decision based on each IP, put it in a loop:
for file in $(curl ...); do ...; done
and check $file or do something with it ...
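Putting it together, a sketch of that loop (new_file.txt is just a placeholder name):
curl --user some_user:password "http://...." |
  grep -E -o '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort -u |
  while read -r ip; do
    echo "<my_text_predefined> $ip <my_text_predefined>"
  done > new_file.txt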
I'm trying to create a script, to be run by cron, that creates multiple folders with subfolders.
DATE=`date +%Y-%m-%d`
IP_ADDR=`ifconfig | grep -v '127.0.0.1' | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p'`
/bin/mkdir -p /mnt/db-backup/12/$DATE/$IP_ADDR/
If I run this script manually, everything is created as expected. When the script is run by cron, the $IP_ADDR subdirectory is not created and there are no errors.
I suspect that /sbin is not part of the PATH for the environment that the cron job runs under. You should specify the full path for the ifconfig command:
IP_ADDR=$(/sbin/ifconfig | grep -v '127.0.0.1' | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p')
It's also better practice (in general) to use $() for command substitution.
Try using debug mode:
set -x
DATE=`date +%Y-%m-%d`
IP_ADDR=`ifconfig | grep -v '127.0.0.1' | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p'`
/bin/mkdir -p /mnt/db-backup/12/$DATE/$IP_ADDR/
set +x
Then redirect the output of your cron job to a file and have a look; you should find useful information in it.
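For example, a sketch of such a crontab entry (the script path and schedule are assumptions):
*/15 * * * * /bin/bash /path/to/make_backup_dirs.sh > /tmp/make_backup_dirs.log 2>&1
With set -x enabled, the trace goes to stderr, which the 2>&1 captures in the log file.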
You are not far off, but there are several caveats that could cause problems. Many systems have a different format for the ifconfig output line: some use inet xxx.xxx.xxx.xxx, others inet addr:xxx.xxx.xxx.xxx (those are the two most common). You may also need to handle the case where there are multiple wired inet interfaces (2+ NICs in the box). However, if you have only one NIC, you could try the following to handle the common ifconfig formats:
DATE=`date +%Y-%m-%d`
IP_ADDR=$(ifconfig |
grep -v '127.0.0.1' |
grep -E 'inet[ ](addr:)*[0-9]{1,3}([.][0-9]{1,3}){3}' |
sed -e 's/^.*inet \(addr:\)*//' -e 's/ .*$//')
/bin/mkdir -p /mnt/db-backup/12/$DATE/$IP_ADDR/
or with IP_ADDR written as one line:
IP_ADDR=$(ifconfig | grep -v '127.0.0.1' | grep -E 'inet[ ](addr:)*[0-9]{1,3}([.][0-9]{1,3}){3}' | sed -e 's/^.*inet \(addr:\)*//' -e 's/ .*$//')
I have a script that logs in to a remote host to pull a directory listing to later present options to the user. It was all working perfectly, until some of the directories started having spaces in them. I have tried several syntaxes and googled the life out of this and I am now at the end of my tether. The original command was this:
SERVERDIRS=($(sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -l --time-style="long-iso" $FROMFOLDER | egrep '^d' | awk '{print $8}'))
I first off changed this code to be able to read the spaces like this:
SERVERDIRS=($(sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -l --time-style="long-iso" $FROMFOLDER | egrep '^d' | cut -d' ' -f8-))
However, this resulted in each word being treated as a separate element. I have tried many ways to solve this, two of which were:
SERVERDIRS=($(sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -d $FROMFOLDER* |rev| cut -d'/' -f1|rev|sed s/^/\"/g|sed s/$/\"/g))
SERVERDIRS=($(sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -d $FROMFOLDER* |rev| cut -d'/' -f1|rev|sed 's/ /\\ /g'))
SERVERDIRS=(`sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -d $FROMFOLDER* |rev| cut -d'/' -f1|rev|sed 's/ /\\ /g'`)
How can I resolve these directories in to separate elements correctly?
If you're trying to read one array value per line instead of space-separated, then $() syntax won't help. Try readarray (Bash 4):
readarray SERVERDIRS < <(sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -l --time-style="long-iso" $FROMFOLDER | egrep '^d' | cut -d' ' -f8-)
or assign IFS and read with -d, -r, and -a set:
IFS=$'\n' read -d '' -r -a SERVERDIRS < <(sshpass -p $PASS ssh -oStrictHostKeyChecking=no $USER@$SERVER ls -l --time-style="long-iso" $FROMFOLDER | egrep '^d' | cut -d' ' -f8-)
or, really, any other answer to this SO question.
If you're unfamiliar with <() syntax, it's known as process substitution and will allow your variable to be set in your current environment rather than the instantly-discarded subshell that a pipe would create.
Bear in mind that this process is a little dangerous; filenames can also contain newlines, so it's usually much preferred to use find ... -print0.
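A sketch of that safer route over ssh (assuming the remote side has find, and that only the directory names are wanted):
SERVERDIRS=()
while IFS= read -r -d '' dir; do
  SERVERDIRS+=("$(basename "$dir")")
done < <(sshpass -p "$PASS" ssh -oStrictHostKeyChecking=no "$USER@$SERVER" \
  find "$FROMFOLDER" -mindepth 1 -maxdepth 1 -type d -print0)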
If you only need to list directories, try this:
ls -d /usr/local/src/*/
or
ls -d /path/to/your/directory/*/
You can then loop through all of them:
#!/bin/bash
# assign the glob directly to an array (rather than parsing ls) so that
# names containing spaces stay intact as single elements
aa=( /usr/local/src/*/ )
for dir in "${aa[@]}"
do
echo "$dir"
done
This works even if the directory names contain spaces.