I'm trying to install PostgreSQL from source and script it for automatic installation.
Installing dependencies, downloading and compiling PostgreSQL all work fine. But there are 3 commands that I need to run as the postgres user:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
/usr/local/pgsql/bin/createdb test
I saw this link, but it doesn't work in my script. Here is the output:
Success. You can now start the database server using:
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data/
or
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ -l logfile start
server starting
createdb: could not connect to database template1: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
admin@ip-172-31-27-106:~$ LOG: database system was shut down at 2015-03-27 10:09:54 UTC
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
And the script :
sudo su postgres <<-'EOF'
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ start
/usr/local/pgsql/bin/createdb pumgrana
EOF
After that, I need to press Enter and the server is running, but my database is not created. It seems like the script tries to create the database before the server is actually running, but I'm not sure. Can someone help me?
There are a few things wrong with that script:
pg_ctl should get a -w argument, making sure it waits until PostgreSQL has started before exiting.
You don't have any error checking, so it'll just keep going if something doesn't work. At minimum you should use set -e at the start.
I also suggest using sudo rather than su, which is kind of obsolete these days. You never need sudo su, that's what sudo -u is for. Using sudo also makes it easier to pass environment variables in. So I'd write something like (untested):
sudo -u postgres PATH="/usr/local/pgsql/bin:$PATH" bash <<-'EOF'
set -e
initdb -D /usr/local/pgsql/data/
pg_ctl -D /usr/local/pgsql/data/ -w start
createdb pumgrana
EOF
You might want to pass PGPORT or some other relevant env vars into the script too.
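For example (untested, and 5433 is purely an illustrative port), passing PGPORT the same way as PATH would look like:
sudo -u postgres PGPORT=5433 PATH="/usr/local/pgsql/bin:$PATH" bash <<-'EOF'
set -e
initdb -D /usr/local/pgsql/data/
pg_ctl -D /usr/local/pgsql/data/ -w start
createdb pumgrana
EOF
Both the postgres server and createdb read the port from the environment, so you only set it once.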
Completely separately to this ... why? Why do this? If you're automating an install from source, why not just build a .deb or .rpm automatically instead, then install that?
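(If you do go that way, one rough and untested sketch on a Debian-ish box is checkinstall, which wraps the "make install" step into a package; the package name and version here are made up:
sudo checkinstall --pkgname=postgresql-custom --pkgversion=9.4.1 --default make install
fpm is another common choice. Either way you end up with an artifact your package manager can track, upgrade and remove.)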
I had to automate my Postgres database backup. As instructed by my software vendor, I am trying to use pg_dump.exe (see below) to take a backup, but it prompts me for a password.
.\pg_dump.exe -h localhost -p 4432 -U postgres -v -b -F t -f "C:\Backup\Backup.tar" Repo
So I googled and found that, as per https://www.postgresql.org/docs/9.6/libpq-pgpass.html, I can create a pgpass.conf file at "C:\Users\User1\AppData\Roaming\postgresql\pgpass.conf", which I did.
Then I tried to point an environment variable at the pgpass.conf file before executing my pg_dump command, but it is not working; I am still being prompted for the password. This is the content of the pgpass.conf file: *:*:*:postgres:password
Below is the code I am trying in PowerShell:
$Env:PGPASSFILE="C:\Users\User1\AppData\Roaming\postgresql\pgpass.conf"
cd "C:\Program Files\Qlik\Sense\Repository\PostgreSQL\9.6\bin"
.\pg_dump.exe -h localhost -p 4432 -U postgres -v -b -F t -f "C:\Backup\Backup.tar" Repo
Why am I still being asked for password?
When I type $Env:AppData I get the response "C:\Users\User1\AppData\Roaming".
Everywhere there is guidance on how to use it on UNIX or at the command prompt, but not in PowerShell. Any help is appreciated. Also, if you could show me how to secure this password file, that would be great.
With the password prompt I cannot automate it with Windows Task Scheduler.
I suspect you already have a suitable solution; however, as a quick (and not secure) workaround via the command prompt, you can use the PGPASSWORD environment variable to hold the password and then run the backup script.
A sample might be something like:
SET PGPASSWORD=password
cd "C:\Program Files\Qlik\Sense\Repository\PostgreSQL\9.6\bin" pg_dump.exe -h localhost -p 4432 -U postgres -b -F t -f "d:\qs_backup\QSR_backup.tar" QSR
Rod
I have yet to get the damned thing to work, but I did find this:
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
I don't see a -w parameter in your call to pg_dump.
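For reference, with the .pgpass file in place, adding -w to the call from the question would look like this (untested); it makes the job fail outright instead of hanging on a prompt if no password can be found:
.\pg_dump.exe -w -h localhost -p 4432 -U postgres -v -b -F t -f "C:\Backup\Backup.tar" Repo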
I used the pg_hba.conf file to allow "trust" connections. This is a riskier method, but I had to get things done ASAP. Thank you for your time and effort.
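For anyone taking the same shortcut, the kind of pg_hba.conf entry meant here is something like the following (an assumption about the exact line; it trusts every local TCP connection for the postgres user, so treat it as a stopgap and reload the server after editing):
host    all    postgres    127.0.0.1/32    trust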
I'm a bit of a novice at bash scripting, so bear with me. I'm trying to write a script to execute a SQL file using psql. From my terminal, it works fine:
psql -f /path/to/file.sql "$URI"
However, in my script I have something like this:
dbURI="postgres://some.connection.string"
psql -f /path/to/file.sql $dbURI
But I keep getting output like this:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
I cannot seem to get this to work at all. I've tried wrapping the variable in quotes, using $(command), etc, with no luck.
Try quoting the variable in your script to prevent word splitting and globbing:
psql -f /path/to/file.sql "$dbURI"
I have just had the same problem, with the exact same error message. The problem was that PostgreSQL takes a few seconds to start. So, if you start PostgreSQL and run a psql command in the same script, chances are PostgreSQL has not yet started when you call it.
The solution was to include:
sleep 5
before the psql command. In your case, this would be:
sleep 5
psql -f /path/to/file.sql "$URI"
This gives PostgreSQL some time to start before you use it.
I see the topic is 2 years old, but I'm posting this in case anyone else faces the same problem.
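A fixed sleep is a guess, though; if your build ships pg_isready (PostgreSQL 9.3 and later), polling it until the server answers is a bit more robust. A small untested sketch:
until pg_isready -q; do
    sleep 1
done
psql -f /path/to/file.sql "$URI"
pg_isready exits non-zero until the server accepts connections, so the loop ends as soon as psql can actually connect.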
Whenever I try to run psql locally, i.e. $ psql -d template1 -U postgres, I receive -bash: psql: command not found
I've installed the Postgres Mac app, but when I click open psql I get...
Last login: Tue Jan 6 16:30:17 on ttys001
admins-MacBook-Air:~ surajkapoor$ /Applications/Postgres.app/Contents/MacOS/bin/psql ; exit;
psql: FATAL: database "surajkapoor" does not exist
logout
[Process completed]
This doesn't make sense to me. I assumed installing the Mac app installs Postgres locally.
You need to add the directory where the psql binary lives to your Mac's PATH variable, which would be
/Applications/Postgres.app/Contents/MacOS/bin/
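For example, you could append a line like this to your shell profile (~/.bash_profile for the default bash shell of that macOS era) and open a new terminal:
export PATH="/Applications/Postgres.app/Contents/MacOS/bin:$PATH"
After that, a plain psql -d template1 -U postgres should be found.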
Opening psql without parameters will attempt to connect to a database named after the current user. If you don't want to add the directory to the PATH variable, you can just use the full path on the console and it should work:
/Applications/Postgres.app/Contents/MacOS/bin/psql -d template1 -U postgres
I'm setting up a development environment on Heroku for my app and I'm having an issue copying over the DB. My current DB is ClearDB and I usually connect to it via Workbench. However, if I try to export the DB and import it into my staging environment, I get a credential issue.
I found this post on SO with regards to this issue:
Moving/copying one remote database to another remote database
And the solution is here:
mysqldump --single-transaction -u (old_database_username) -p -h (old_database_host) (database_name) | mysql -h (new_host) -u (new_user) -p -D (new_database)
But even when I run this, I'm still running into an issue with credentials. The execution wants both passwords at the same time, for the old DB and the new DB, so it keeps failing.
I tried to inline the password with -p, but it still asks for a password. What am I missing?
Okay, that was a silly mistake. The reason I was having issues is that after options such as -u or -h there is a space, while for the password option there is no space, i.e.:
mysqldump --single-transaction -u old_database_username -pPasswordOld -h old_database_host database_name | mysql -h new_host -u new_user -pPasswordNew -D new_database
Once corrected, everything was done.
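One caveat worth noting: inlining passwords with -pPassword makes them visible in the process list and your shell history. If that matters, option files are a common alternative (the file names and layout below are illustrative, not from the original setup):
# old.cnf / new.cnf, each chmod 600, each containing:
# [client]
# password=the_password_for_that_server
mysqldump --defaults-extra-file=old.cnf --single-transaction -u old_database_username -h old_database_host database_name | mysql --defaults-extra-file=new.cnf -h new_host -u new_user -D new_database
Note that --defaults-extra-file has to be the first option on each command.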
I have an AIX 6.1 server where I want to uninstall an RPM.
This uninstallation can be done directly on the server:
[user@server]$ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This uninstallation works.
I have a script launching this uninstallation:
Uninstall.sh
#!/usr/bin/bash
set -x
sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
I can run this script on the server without any problem:
[user@server]$ cd /where/is/the/script;./Uninstall.sh
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
_MyRPM-1.0.0 has been uninstalled successfully
But when I run this script remotely, the rpm command hangs:
[user@client]$ ssh user@server "cd /where/is/the/script;./Uninstall.sh"
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This command hangs, and I need to kill it in order to end the ssh session.
PS: I have exactly the same behaviour for installation and uninstallation.
EDIT:
The problem seems to come from sudo. The hang also appears when I do anything with sudo.
For example, with a new script:
test.sh
#!/usr/bin/bash
set -x
sudo env
Sudo normally requires a user to authenticate as themselves, and if I recall correctly it can act differently under remote execution because of the way the terminal is handled.
I don't have a system to test this on at the moment, but you could try ssh's -t or -T switches:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
I suspect you could get this to work by adding the script you're remotely executing into /etc/sudoers:
{user} ALL=NOPASSWD:/where/is/the/script/Uninstall.sh
Then try:
"ssh -t user#server /where/is/the/script/Uninstall.sh"
EDIT:
Found some details to help explain why sudo is behaving differently when executed remotely:
http://www.sudo.ws/sudoers.man.html
The sudoers security policy requires that most users authenticate themselves before they can use sudo. A password is not required if the invoking user is root, if the target user is the same as the invoking user, or if the policy has disabled authentication for the user or command.
Perhaps it's hanging because it's trying to authenticate, whereas locally it wouldn't need to do so.
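If editing sudoers isn't an option, the quickest way to test that theory is to force a tty so the password prompt can actually be shown (standard ssh behaviour, though I haven't tried it on AIX):
ssh -t user@server "cd /where/is/the/script;./Uninstall.sh"
If sudo then prompts for a password and the command completes, the hang was just an invisible prompt.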