PSQL: How can I prevent any output on the command line? - Windows

My problem: I'm trying to run a database generation script at the command line via a batch file as part of a TFS build process to enable nightly testing on a known dataset.
The scripts we run are outputting Notices, Warnings and some Errors on the command line. I would like to suppress at least the Notices and Warnings, and if possible the Errors as they don't seem to have an impact on the overall success of the scripts. This output seems to be affecting the success or failure of the process as far as the TFS build process is concerned. It's highlighting every line of output from the scripts as errors and failing the build.
As our systems are running on Windows, most of the potential solutions I've found online don't work as they seem to target Linux.
I've changed the client_min_messages to error in the postgresql.conf file, but when looking at the same configuration from pgAdmin (tools > server configuration) it shows the value as Error but the current value as Notice.
All of the lines in the batch file that call psql use the -q flag as well, but that only seems to suppress the basic command confirmations such as CREATE TABLE and ALTER TABLE.
An example line from the batch file is:
psql -d database -q < C:\Database\scripts\script.sql
Example output line from this command:
WARNING: column "identity" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
Specifying the file with the -f flag makes no difference.
I can manually run the batch file on my development machine and it produces the expected database regardless of what errors or messages show on the command prompt.
So ultimately I need all psql commands in my batch files to run silently.

Redirect both standard output and standard error to a file. Note that &> is a bash shorthand; in a Windows batch file you need the explicit form:
psql COMMAND > output.txt 2>&1
Or, using your example command:
psql -d database -q < C:\Database\scripts\script.sql > output.txt 2>&1

Use psql's -o flag to send the query output to a file of your choice, or to /dev/null (NUL on Windows) if you don't care about it.
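For example, adapting the command from the question (a sketch: NUL is the Windows null device, and -o captures only query output; notices and warnings still go to stderr):
psql -d database -q -f C:\Database\scripts\script.sql -o NUL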

The -q option will not prevent the query output.
-q, --quiet run quietly (no messages, only query output)
To avoid the output, you have to send the query result to a file:
psql -U username -d db_name -pXXXX -c "SELECT * FROM table_name LIMIT 5;" > C:\test.csv
Use > to create a new file each time; use >> to create the file if needed and keep appending to it.
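Putting these pieces together for the original batch file, a minimal sketch (assuming the same script path; PGOPTIONS passes client_min_messages to the server for this session only, and the final redirection discards anything that still reaches the console):
set PGOPTIONS=--client-min-messages=error
psql -d database -q -f C:\Database\scripts\script.sql > NUL 2>&1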

Related

Issuing Redshift COPY command via psql never acknowledges completion

I have a shell script that issues a command similar to this:
$PGSQL_BIN/psql $RSCONNECTION -c "COPY property.history from 's3://my-bucket/data.txt.gz' CREDENTIALS 'aws_access_key_id=XXXXX;aws_secret_access_key=XXXXX' CSV DELIMITER AS ',' ACCEPTINVCHARS TRUNCATECOLUMNS GZIP TRIMBLANKS BLANKSASNULL EMPTYASNULL DATEFORMAT 'auto' ACCEPTANYDATE COMPUPDATE ON MAXERROR 100;"
The command is successful, but the completion is never acknowledged, so the shell script does not move on to the next command.
Is there something I'm missing that will make this behave?
psql is probably losing touch with the session. Make sure you've followed the "Change TCP/IP Timeout Settings" instructions from the Redshift Docs. http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html#connecting-firewall-guidance.change-tcpip-settings
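Those instructions boil down to raising the TCP keepalive frequency on the client so an idle-looking connection is not silently dropped by a firewall while the long COPY runs. On Linux that looks roughly like this (a sketch; the exact values are illustrative and should be taken from the linked doc):
sudo sysctl -w net.ipv4.tcp_keepalive_time=200
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=200
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5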

Sending echo and error to both terminal and file log

I am trying to modify a shell script someone created for Unix. The script mostly runs on backend servers with no human interaction, but I needed to make another version that lets users input information, so I am modifying the old version to take user input. The biggest issue I am running into is getting both the error output and the echoes saved to a log file. The script has a lot of them, and I want them shown on the terminal as well as sent to the specified log file, to be looked into later.
What I have is this:
exec 1> ${LOG} 2>&1
This line pretty much sends everything to the log file. That is all good, but users also enter information into the script, and everything goes to the log file, including the echo output needed for the prompts. This line also sits at the beginning of the script. After reading more about stderr and stdout, I tried:
exec 2>&1 1>>${LOG}
exec 1 | tee ${LOG}
but I only get this error when running it: ./bash_pam.sh: line 39: exec: 1: not found
I have gone over sites such as this one to solve the issue, but I am not understanding why it does not print to both. However I insert it, either everything goes to the log location and not to the terminal, or everything goes to the terminal and nothing is preserved in the log.
EDIT: Some of the solutions, for this have mentioned that certain fixes will work in bash, but not in /bin/sh.
If you would like all output to be printed to the console while also being written to logfile.txt, run this command on your script:
bash your_script.sh 2>&1 | tee -a logfile.txt
Or calling it within the file:
<bash_command> 2>&1 | tee -a logfile.txt
The -a option makes tee append to logfile.txt instead of overwriting it.
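If the redirection has to be set up inside the script itself (as with the original exec line) rather than at the call site, bash process substitution does the same job; note this is a bash feature and will not work under plain /bin/sh, which matches the caveat in the question's edit (a minimal sketch):
exec > >(tee -a "${LOG}") 2>&1
From that point on, everything written to stdout or stderr goes to both the terminal and ${LOG}.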

Bash commands as variables failing when joining to form a single command

ssh="ssh user@host"
dumpstructure="mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database"
mysql=$ssh "$dumpstructure"
$mysql | gzip -c9 | cat > db_structure.sql.gz
This is failing on the third line with:
mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database: command not found
I've simplified my actual script for the purpose of debugging this specific error. $ssh and $dumpstructure aren't always joined together in the real script.
Variables are meant to hold data, not commands. Use a function.
mysql () {
ssh user@host mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database
}
mysql | gzip -c9 > db_structure.sql.gz
Arguments to a command can be stored in an array.
# Although mysqldump is the name of a command, it is used here as an
# argument to ssh, indicating the command to run on a remote host
args=(mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database)
ssh user@host "${args[@]}" | gzip -c9 > db_structure.sql.gz
Chepner's answer is correct about the best way to do things like this, but the reason you're getting that error is actually even more basic. The line:
mysql=$ssh "$dumpstructure"
doesn't do anything like what you want. Because of the space between $ssh and "$dumpstructure", it'll parse this as environmentvar=value command, which means it should execute the "mysqldump..." part with the environment variable mysql set to ssh user#host. But it's worse than that, since the double-quotes around "$dumpstructure" mean that it won't be split into words, and so the entire string gets treated as the command name (rather than mysqldump being the command name, and the rest being arguments to it).
If this had been the right way to go about building the command, the right way to stick the parts together would be:
mysql="$ssh $dumpstructure"
...so that the whole combined string gets treated as part of the value to assign to mysql. But as I said, you really should use Chepner's approach instead.
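A tiny demonstration of that parsing rule (hypothetical values):
foo=echo
bar="hello world"
baz=$foo "$bar"
The last line runs a single command named hello world (which is not found) with the environment variable baz set to echo, which is exactly the shape of the error in the question.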
Actually, commands in variables can also work, invoked as $var or $($var). If it says command not found, it could be because the command is not in your PATH; alternatively, give the full path to the command.
So let's set the downvotes aside and talk about this question.
The real problem is mysql=$ssh "$dumpstructure". This means you'll execute $dumpstructure with the additional environment variable mysql=$ssh, so we get a command-not-found error. It's actually because mysqldump is located on the remote server, not on this host, so it's reasonable that the command is not found.
From this point, let's see how to fix it.
The OP wants to dump MySQL data from a remote server, which means $dumpstructure should be executed remotely. Now we know the third line, mysql=$ssh "$dumpstructure", is where the problem comes from. So what should the correct command be? The simplest fix is mysql="$ssh $dumpstructure", which joins $ssh and $dumpstructure into a single command line held in the variable $mysql.
Finally, about the last command line: I do not agree that variables are meant to hold data, not commands. A command is also a kind of data; the real problem is how to use it correctly.
The OP's approach is also supported; at least it works on bash 4.2.46.
So the real problem is how to use a variable to hold commands, not introducing a new mechanism, such as wrapping them in a bash function, to do it.
So can anyone tell me why this answer went unnoticed and was voted down?
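A quick way to see the behavior this answer describes (a sketch with a hypothetical command):
var="ls -l"
$var
The unquoted $var splits into two words and runs ls -l, whereas "$var" would look up a command literally named "ls -l" and fail, which is the same quoting trap as in the question.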

Executing PSQL in R on Windows

I have been trying to execute psql via system() within R in RStudio. I have psql set up in my PATH and can execute it from the command line, but I cannot for the life of me figure out the correct method for executing psql from within R on Windows. The code I was supplied comes from an Ubuntu environment. I had not used system() before this, and researching this specific issue has been unsuccessful.
The hardest part is not receiving any output after executing system in R. I have tried a few different settings from looking at ?system, with no luck.
This code should execute a simple SQL statement and pass the output to a local file. Ultimately this will become more robust, including dynamic elements, in an application; just having the basics working seems like the hardest part.
system(paste("export PGPASSWORD=db_password;psql -h db_host -d db_name -c 'copy(select * from large_table limit 1000) to stdout csv' > C:/temp_data/db_test.dat", sep=""))
I am curious whether anyone has a working Windows environment using psql in R. My Greenplum server is not local.
My echo %PATH% includes C:\Program Files (x86)\pgAdmin III\1.12, included in both system and user variables.
There are a few problems with your command.
system cannot be used with redirects; you must use shell.
You cannot use single quotes to quote commands in Windows; you must use double quotes.
To concatenate commands, you use the & operator, not ; as in Unix.
So your command would look like (it appears to be necessary to include this in one line):
cmd<-'set PGPASSWORD=db_password& psql -h db_host -d db_name -c "copy(select * from large_table limit 1000) TO STDOUT CSV;" > C:/temp_data/db_test.dat'
shell(cmd)
But have you considered using the RPostgreSQL driver, which is a much simpler, platform-independent way to do your task?
# Load up the driver
library(RPostgreSQL)
drv <- dbDriver("PostgreSQL")
# Create a connection
con <- dbConnect(drv, dbname="db_name", host='db_host',password='db_password',user='db_user')
# Query the database
db_test=dbGetQuery(con, 'select * from large_table limit 1000')
# Write your file
write.csv(db_test,'C:/temp_data/db_test.dat')

Mysqldump creates empty file when run via cron on Linux

I have a bash script mysql_cron.sh that runs mysqldump
#!/bin/bash
/usr/local/mysql/bin/mysqldump -ujoe -ppassword > /tmp/somefile
This works fine. I then call it from cron:
20 * * * * /home/joe/mysql_cron.sh
and this creates the file /tmp/somefile, but the file is always empty. I have tried adding a
source /home/joe/.bash_profile
to the script to make sure cron has the right environment variables, but that doesn't help. I see many other people having this problem but have found no solution. I've also tried the > operator in the crontab to capture any cron errors to a file, but that doesn't seem to produce any errors. Any troubleshooting ideas welcome. Thanks!
Add redirection of the error information to a file (as Damp has said) so that you can check whether there is any error. Send stderr to a separate file so any error text does not end up mixed into the dump itself:
#!/bin/bash
/usr/local/mysql/bin/mysqldump -ujoe -ppassword > /tmp/somefile 2> /tmp/somefile.err
You can also take a look at MySQL's log files at /var/log in case there is some hint there.
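Relatedly, since the question mentions redirecting in the crontab: for that to catch anything, both streams have to be redirected in the crontab entry itself (a sketch; the log path is illustrative):
20 * * * * /home/joe/mysql_cron.sh >> /tmp/mysql_cron.log 2>&1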
Add this line to your script and compare the result between running it from cron versus running it directly:
env > /tmp/env.$$.out
The $$ will be replaced in the resulting filename by the PID of the parent process (cron or the shell). You should be able to diff the two files and see if anything significant is different between the two environments.
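For example, if the two runs produce env.1234.out and env.5678.out (hypothetical PIDs), a plain diff shows what cron's environment is missing:
diff /tmp/env.1234.out /tmp/env.5678.out
Typical culprits are PATH and HOME, which are far more minimal under cron than in a login shell.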
