I have the following script:
mongodump --gzip -d foobar \
--excludeCollection=foo1 \
--excludeCollection=foo2 \
--excludeCollection=foo3 \
--excludeCollection=foo4 -o ./
But the dump is too large for the server it's on; it takes up literally all the disk space. Is there any way to make it dump to another host? Maybe using scp?
The easiest thing to do is to run mongodump from another computer, if the database is accessible remotely, using the --host parameter and any credentials you may need. It's quite similar to using the mongo shell to connect to a remote instance.
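For example, run your existing command on a machine with enough free disk space (hostname, port, and credentials below are placeholders):
mongodump --host db.example.com --port 27017 \
-u backupUser -p 'secret' --authenticationDatabase admin \
--gzip -d foobar \
--excludeCollection=foo1 \
--excludeCollection=foo2 \
--excludeCollection=foo3 \
--excludeCollection=foo4 -o ./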
I need to automate my Postgres database backup. As instructed by my software vendor, I am trying to use pg_dump.exe (see below) to take a backup, but it prompts me for a password.
.\pg_dump.exe -h localhost -p 4432 -U postgres -v -b -F t -f "C:\Backup\Backup.tar" Repo
I googled and found that, per https://www.postgresql.org/docs/9.6/libpq-pgpass.html, I can create a pgpass.conf file at "C:\Users\User1\AppData\Roaming\postgresql\pgpass.conf", which I did.
Then I pointed an environment variable at the pgpass.conf file before executing my pg_dump command, but it is not working; I am still getting prompted for a password. This is the content of the pgpass.conf file: *:*:*:postgres:password
Below is the code I am trying in PowerShell:
$Env:PGPASSFILE="C:\Users\User1\AppData\Roaming\postgresql\pgpass.conf"
cd "C:\Program Files\Qlik\Sense\Repository\PostgreSQL\9.6\bin"
.\pg_dump.exe -h localhost -p 4432 -U postgres -v -b -F t -f "C:\Backup\Backup.tar" Repo
Why am I still being asked for a password?
When I type $Env:AppData, I get "C:\Users\User1\AppData\Roaming" back, so the path should be correct.
All the guidance I can find covers UNIX or the Windows command prompt, not PowerShell. Any help is appreciated. It would also be great if you could point me to a way to secure this password file.
With the password prompt I cannot automate this with Windows Task Scheduler.
I suspect you have found a suitable solution by now; however, as a quick (and not secure) workaround via the command prompt, you can use the PGPASSWORD environment variable to hold the password and then run the backup script.
A sample might be something like:
SET PGPASSWORD=password
cd "C:\Program Files\Qlik\Sense\Repository\PostgreSQL\9.6\bin"
pg_dump.exe -h localhost -p 4432 -U postgres -b -F t -f "d:\qs_backup\QSR_backup.tar" QSR
I have yet to get the damned thing to work, but I did find this in the pg_dump documentation:
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
I don't see a -w parameter in your call to pg_dump
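With your pgpass.conf in place, that would look like:
.\pg_dump.exe -w -h localhost -p 4432 -U postgres -v -b -F t -f "C:\Backup\Backup.tar" Repo
That way the run fails fast instead of hanging on a prompt if the password file isn't picked up.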
I used the pg_hba.conf file to allow "trust" connections. This is a riskier method, but I had to get things done ASAP. Thank you for your time and effort.
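For anyone taking the same shortcut, the pg_hba.conf entry looks something like this (the values here are an example; keep the scope as narrow as possible, since trust skips authentication entirely):
host    all    postgres    127.0.0.1/32    trust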
I'm trying to install PostgreSQL from source and script it for automatic installation.
Installing dependencies, downloading, and compiling PostgreSQL all work fine. But there are three commands that I need to run as the postgres user:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
/usr/local/pgsql/bin/createdb test
I saw this link, but it doesn't work in my script. Here is the output:
Success. You can now start the database server using:
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data/
or
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ -l logfile start
server starting
createdb: could not connect to database template1: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
admin@ip-172-31-27-106:~$ LOG: database system was shut down at 2015-03-27 10:09:54 UTC
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
And the script :
sudo su postgres <<-'EOF'
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ start
/usr/local/pgsql/bin/createdb pumgrana
EOF
After that, I need to press Enter, and then the server is running, but my database is not created. It seems like the script tries to create the database before the server has finished starting, but I'm not sure. Can someone help me?
There are a few things wrong with that script:
pg_ctl should get a -w argument, making sure it waits until PostgreSQL has started before exiting.
You don't have any error checking, so it'll just keep going if something doesn't work. At minimum you should use set -e at the start.
I also suggest using sudo rather than su, which is kind of obsolete these days. You never need sudo su; that's what sudo -u is for. Using sudo also makes it easier to pass environment variables in. So I'd write something like (untested):
sudo -u postgres env PATH="/usr/local/pgsql/bin:$PATH" sh <<-'EOF'
set -e
initdb -D /usr/local/pgsql/data/
pg_ctl -D /usr/local/pgsql/data/ -w start
createdb pumgrana
EOF
You might want to pass PGPORT or some other relevant env vars into the script too.
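For example, only the first line changes (5433 is just an illustrative port):
sudo -u postgres env PATH="/usr/local/pgsql/bin:$PATH" PGPORT=5433 sh <<-'EOF'
Both the server startup and createdb will then pick the port up from the environment.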
Completely separately to this ... why? Why do this? If you're automating an install from source, why not just build a .deb or .rpm automatically instead, then install that?
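As a sketch, a tool like fpm can wrap the installed tree into a package in one line (the package name and version here are made up):
fpm -s dir -t deb -n postgresql-custom -v 9.4.1 /usr/local/pgsql
Then your automation becomes "install the .deb", and upgrades and clean removal come for free from the package manager.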
I'm setting up a development environment on Heroku for my app and I'm having an issue copying over the DB. My current DB is ClearDB and I usually connect to it via Workbench. However, if I try to export the DB and import it into my staging environment, I get a credential issue.
I found this post on SO with regards to this issue:
Moving/copying one remote database to another remote database
And the solution is here:
mysqldump --single-transaction -u (old_database_username) -p -h (old_database_host) (database_name) | mysql -h (new_host) -u (new_user) -p -D (new_database)
But even when I run this, I'm still running into an issue with credentials. The command wants both passwords at the same time, one for the old DB and one for the new DB, so it keeps failing.
I tried to inline the passwords with -p, but it still asks for a password. What am I missing?
Okay, that was a silly mistake. The reason I was having issues is that after options such as -u or -h there is a space, while for the password option there is no space between -p and the password itself. I.e.:
mysqldump --single-transaction -u old_database_username -pPasswordOld -h old_database_host database_name | mysql -h new_host -u new_user -pPasswordNew -D new_database
Once corrected, everything worked.
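A note on this: putting the password right after -p exposes it to anyone who can see the process list. An alternative with the same one-liner shape is the MYSQL_PWD environment variable (still not great, but it keeps the password out of ps output):
MYSQL_PWD=PasswordOld mysqldump --single-transaction -u old_database_username -h old_database_host database_name | MYSQL_PWD=PasswordNew mysql -h new_host -u new_user -D new_database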
I'm trying to troubleshoot the pg_restore command on my system. I've installed Postgres.app, and I've included its binaries on my PATH. Commands such as psql and pg_dump appear to work fine, and running which pg_restore gives the expected result.
$ which pg_restore
/Applications/Postgres.app/Contents/MacOS/bin/pg_restore
The problem is that pg_restore doesn't seem to do anything. When I run it in the terminal, no output is printed, either to the console or to the logs. This is true no matter what arguments I pass in, including the --verbose switch. Running it does cause a pg_restore process to appear in Activity Monitor, but this process doesn't use any CPU. Apart from that, nothing happens at all.
Has anyone else seen this issue? Do you have any suggestions for getting pg_restore to work?
I think I figured it out.
The command I was running included an extra line break after the user name.
As in, I was trying to execute this
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myusername
-d mydb latest.dump
instead of this
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myusername -d mydb latest.dump
For some reason that extra line break was gumming things up; presumably the shell treated the first line as the whole command, so pg_restore ran without the -d and file arguments and sat waiting for an archive on stdin. Once I removed it, pg_restore worked properly.
I got a hint from an answer to another question: the -f option doesn't do what you might think it does (or what I thought it did, anyway 😅), even if you're using a "custom" format dump (i.e. a file you pass directly, rather than one you provide with shell redirection like | or >).
Incorrect ❌: pg_restore -f filename.dump – here -f names an output file, so pg_restore waits for an archive on STDIN and would overwrite filename.dump with the generated SQL script.
Correct ✅: pg_restore -d database filename.dump – restores filename.dump into database.
I thought for some reason that a custom dump included the database name so you didn't need to provide it at all.
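Interestingly, a custom-format dump does record the database name; you can see it by listing the archive's table of contents, though pg_restore still won't use it as the target:
pg_restore -l latest.dump
The header of the listing shows a "dbname:" entry, yet you still have to pass -d yourself.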
I was also stuck, with no apparent reason. I changed from pg_restore -U seed -h localhost -p 5432 -f dump.backup -C --verbose to pg_restore -d seed -U seed -h localhost -p 5432 < dump.backup and it worked.
In case this helps someone: if you have long-running queries, you may need to stop them so pg_restore does not get stuck.
Run this script:
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE state = 'active' and pid <> pg_backend_pid();
https://www.sqlprostudio.com/blog/8-killing-cancelling-a-long-running-postgres-query#:~:text=Terminate%20all%20queries,be%20used%20in%20special%20situations.
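The same article also covers the heavier option, pg_terminate_backend, which kills the backend process instead of just cancelling its current query:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'active' and pid <> pg_backend_pid();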
I have a shell script that periodically checks the ADSL external IP address and sends it to my email if it has changed.
#! /bin/sh
NEWIP=`/usr/bin/curl ifconfig.me`
OLDIP=`cat ./current`
logger "$NEWIP ... $OLDIP"
if [ "$NEWIP" != "$OLDIP" ]; then
TIME=`/bin/date`
/usr/bin/sendEmail -v -f ip_watcher@xxxoo.com \
-s smtp.gmail.com:587 -xu ip_watcher@xxxoo.com -xp xxxxxx \
-t xxx@xxxxx.com \
-o tls=yes \
-u "$NEWIP" \
-m "$NEWIP $TIME" -a
/bin/echo "$NEWIP" > ./current
logger "IP of bjserver1 has changed ..."
else
logger "New IP is the SAME with old. not sending ..."
fi
This works perfectly when I run it from the command line, but after I put it into cron, NEWIP and OLDIP are always the same. I don't know why; can anybody help?
What is ./current?
You are not using an absolute path in the script, so the file will be looked for in whatever directory the script happens to be run from. You should use an absolute path.
The only other significant difference between cron and a command-line run is the user under whose account the script is executed. Make sure the account (if it's not root) has sufficient privileges to do what you're asking it to do.
Or, better yet, use an established dynamic DNS client so you don't need to be concerned with external hostnames. You do realize you're relying on that web site to be both honest and up, right?
At the start of the script you should change directory to the correct one (as a guess). Or use an absolute path.
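For example (the directory below is a placeholder for wherever the script and its current file actually live):
cd /home/youruser/ipwatch || exit 1
Or make the references absolute and skip the cd entirely:
OLDIP=`cat /home/youruser/ipwatch/current`
/bin/echo "$NEWIP" > /home/youruser/ipwatch/current
Cron also starts with a minimal PATH, which is another common source of "works in my shell, not in cron" problems.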