I get an error while executing the following psql statement inside a bash script:
execlog "psql -h $HOST -p $PORT -U $USER -d $DB -q -c 'CREATE EXTENSION hstore;'"
The raised error is:
ERROR: unterminated quoted string at or near "'CREATE"
LINE 1: 'CREATE
^
Thus, the terminating single quote is not recognized as it should be.
When using escaped double quotes instead of single quotes (...\"CREATE EXTENSION hstore;\") I get the same error.
When executing the command directly from the command line, everything works fine.
Does someone know what's going wrong?
To give some additional info:
OS: Ubuntu 11.10, PostgreSQL version: 9.1
Thanks in advance,
Richard
Solved: it was the execlog function that caused the error. Now I am calling
log "exec psql -h $HOST -p $PORT -U $USER -d $DB -q -c 'CREATE EXTENSION hstore;'"
which works fine!
Thanks for your help!
The problem is that after quote removal, the single quotes are no longer treated as shell syntax that protects the space, but as literal characters that are part of the string. Try
execlog psql -h $HOST -p $PORT -U $USER -d $DB -q -c 'CREATE EXTENSION hstore;'
(I can't find any information on execlog, so I don't know if the above will work.)
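If execlog is a homegrown wrapper that logs a command and then runs it, one robust pattern is to pass the arguments straight through with "$@" instead of re-parsing a single string. A minimal sketch, assuming a hypothetical wrapper (the function body and log path are mine, not from your script):
execlog() {
    # Log the command line with a timestamp, then run it with each argument preserved as one word
    echo "$(date '+%F %T') $*" >> ./command.log
    "$@"
}
execlog psql -h "$HOST" -p "$PORT" -U "$USER" -d "$DB" -q -c 'CREATE EXTENSION hstore;'
Because "$@" keeps the SQL string as a single argument, psql receives the whole CREATE EXTENSION statement intact and no quote goes missing.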
Related
I'm running several psql commands inside a bash shell script. One of the commands imports a CSV file into a table. The problem is that the CSV file is occasionally corrupt: it has invalid characters at the end and the import fails. When that happens, and I have the ON_ERROR_STOP=on flag set, my entire shell script stops at that point as well.
Here are the relevant bits of my bash script:
$(psql \
-X \
$POSTGRES_CONNECTION_STRING \
-w \
-b \
-L ./output.txt
-A \
-q \
--set ON_ERROR_STOP=on \
-t \
-c "\copy mytable(...) from '$input_file' csv HEADER"\
)
echo "import is done"
The above works fine as long as the CSV file isn't corrupt. If it is, however, psql spits out a message to the console that begins ERROR: invalid byte sequence for encoding "UTF8": 0xb1, and my bash script apparently stops cold at that point: my echo statement above doesn't execute, and neither do any other subsequent commands.
Per the psql documentation, a hard stop in psql should return an error code of 3:
psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurs (e.g. out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable ON_ERROR_STOP was set
That's fine and good, but is there a reason returning a value of 3 should terminate my calling bash script? And can I prevent that? I'd like to keep ON_ERROR_STOP set to on because I actually have other commands I'd like to run in that psql statement if the initial import succeeds, but not if it doesn't.
ON_ERROR_STOP will not work with the -c option.
Also, the $(...) surrounding the psql call looks wrong: do you want to execute its output as a command?
Finally, you forgot a backslash after the -L option.
Try using a “here document”:
psql \
-X \
$POSTGRES_CONNECTION_STRING \
-w \
-b \
-L ./output.txt \
-A \
-q \
--set ON_ERROR_STOP=on \
-t <<EOF
\copy mytable(...) from '$input_file' csv HEADER
EOF
echo "import is done"
I can successfully run SQL (Postgres) files from command line following instructions here:
Run a PostgreSQL .sql file using command line arguments
In particular, I use something like
psql -d DBPASSWORD -a -f FILENAME
Problem is that this (and specifically, I believe the -a) prints the SQL code out to the terminal. This is annoying because I am running a lot of files in sequence within a Python script using subprocess, and I would rather not have the SQL code printed to the terminal. Is there a way to not print the SQL code to the terminal?
EDIT: I've tried adding the -q option like people said, but the code in the SQL file is still being printed to the terminal.
What I tried was
psql -q -d DBPASSWORD -a -f FILENAME
psql -d DBPASSWORD -q -a -f FILENAME
psql -d DBPASSWORD -a -q -f FILENAME
psql -d DBPASSWORD -a -f FILENAME -q
And in each of those cases, the code in FILENAME is still printed to the terminal.
You may want to redirect STDOUT, STDERR or both to a log file.
Something like one of these
psql ... > out.log
psql ... 2> err.log
psql ... &> out_and_err.log
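If you do not need the output at all (for example when driving psql from subprocess), you can also discard stdout entirely; error messages still reach the terminal because psql writes them to stderr:
psql -d DBPASSWORD -f FILENAME > /dev/null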
What should I do to make it work?
#!/bin/bash
TABLENAMES="user_stats"
ssh -t railsapps@xxx.xxx.xxx.xx -p xxx bash -c "'
for TABLENAME in $TABLENAMES
do
psql -d mydb -P format=unaligned -P tuples_only -P fieldsep=\, -c "SELECT * FROM $TABLENAME" > /tmp/$TABLENAME
done
'"
General problem: how do I periodically dump database tables from a remote PostgreSQL database to a local machine, in a single bash script run on Mac OS X?
Firstly, you should test your SQL and bash scripts on the remote machine (SSH in interactively).
I think your problem is caused by a bad mix of single and double quotes. I think the star (*) and $TABLENAME are expanded before the SSH call, which is too early. Try putting a backslash before the $ sign.
You should also use the verbose or debug option to help understand what is really executed:
ssh -t railsapps@xxx.xxx.xxx.xx -p xxx bash -vxc "'
for TABLENAME in \$TABLENAMES; do
psql -d mydb -P format=unaligned -P tuples_only -P fieldsep=\, -c "SELECT \* FROM \$TABLENAME" > /tmp/\$TABLENAME
done
'"
I'm getting a 7: Syntax error: "(" unexpected error while running the code below on Ubuntu, but it runs on CentOS without any issues.
#!/bin/sh
#
TODATE=`date '+%Y-%b-%d'`
#
# Backup Creation for Databases
#
databases=(`echo 'show databases;' | mysql -u root -ppaSSword | grep -v ^Database$`)
for DB in "${databases[@]}"; do
mysqldump --force --opt --user=root --password=paSSword $DB | gzip > /mnt/Backup/DB/${DB}_${TODATE}.sql.gz
done
#
Please help me to solve this.
I can't figure out the problem, but I'm using the code below for backups and it works fine on Ubuntu:
#!/bin/bash
#
TODATE=`date '+%Y-%b-%d'`
databases="$(mysql -u root -ppaSSword -Bse 'show databases')"
for DB in $databases
do
mysqldump -u root -psqlMYadmin $DB | gzip > /mnt/Backup/DB/${DB}_${TODATE}.sql.gz
done
You can redirect the 'show databases' output to a dump.txt file; once that's done, try:
#!/bin/bash
da=$(date +"%d-%m-%y")
for db in `cat dump.txt` ; do mysqldump --force --opt --user=root --password=paSSword $db | gzip > /path/to/backup/${db}_"$da".sql.gz ; done
You need to escape the last '$' on the line databases= ...
There's only one ( in the script and you have the shebang line #!/bin/sh. My best guess is that the program /bin/sh does not recognize array assignment, whereas /bin/bash would.
Change your shebang to #!/bin/bash.
You'd probably do better to use $(...) in place of the backticks. Also, as Sami Laine points out in his answer, it would be better if you quoted the regex passed to the grep command (though it is not the cause of your problem):
databases=( $(echo 'show databases;' | mysql -u root -ppaSSword | grep -v '^Database$') )
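Putting those pieces together, a sketch of the corrected script under bash (same credentials and paths as in the question) could look like this:
#!/bin/bash
TODATE=$(date '+%Y-%b-%d')
# Array assignment needs bash, hence the shebang change; the grep pattern is quoted as suggested above
databases=( $(echo 'show databases;' | mysql -u root -ppaSSword | grep -v '^Database$') )
for DB in "${databases[@]}"; do
    mysqldump --force --opt --user=root --password=paSSword "$DB" | gzip > /mnt/Backup/DB/${DB}_${TODATE}.sql.gz
done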
I'm going crazy while trying to insert bash variables into a psql command, both as connection parameters and as variables in the command itself. The following example works properly:
psql -U postgres -h localhost -p 5432 -c "CREATE DATABASE testdb WITH ENCODING='UTF8' OWNER=postgres TABLESPACE=pg_default TEMPLATE=template_postgis CONNECTION LIMIT=-1;"
Now I'm trying to replace each parameter with a variable held in a separate config file.
Non-working example:
dbserver=localhost
dbport=5432
dbowner=postgres
dbname=testdb
dbtemplate=template_postgis
dbtablespace=pg_default
psql -U '$dbowner' -h '$dbserver' -p '$dbport' -c "CREATE DATABASE '$dbname' WITH ENCODING='UTF8' OWNER='§dbowner' TABLESPACE='$dbtablespace' TEMPLATE='$dbtemplate'
CONNECTION LIMIT=-1;"
I've already tried several quotings, backquotes and escaping backslashes, but somehow it still won't work.
Thanks in advance, knutella
Use double quotes ("). Single quotes (') do not expand shell variables inside them.
Try it:
echo '$USER' "$USER"
See man bash.
This works... most of the quotes are not needed:
psql -U $dbowner -h $dbserver -p $dbport -c "CREATE DATABASE $dbname WITH ENCODING='UTF8' OWNER=$dbowner TABLESPACE=$dbtablespace TEMPLATE=$dbtemplate CONNECTION LIMIT=-1;"
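If any of those variables could ever be empty or contain spaces, it is a bit safer to double-quote the expansions as well, for example:
psql -U "$dbowner" -h "$dbserver" -p "$dbport" -c "CREATE DATABASE $dbname WITH ENCODING='UTF8' OWNER=$dbowner TABLESPACE=$dbtablespace TEMPLATE=$dbtemplate CONNECTION LIMIT=-1;"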