Executing PSQL in R on Windows

I have been trying to execute psql via system() from within R in RStudio. I have psql set up in my PATH and can execute it from the command line, but I cannot for the life of me figure out the correct way to invoke it from within R on Windows. The code I have was supplied from an Ubuntu environment, I had not used system() before this, and researching this specific issue has been unsuccessful.
The hardest part is that I receive no output at all after executing system() in R. I have tried a few different settings from looking at ?system, with no luck.
This code should execute a simple SQL statement and write the output to a local file. Ultimately this will be made more robust, with dynamic elements, as part of an application; just getting the basics working seems to be the hardest part.
system(paste("export PGPASSWORD=db_password;psql -h db_host -d db_name -c 'copy(select * from large_table limit 1000) to stdout csv' > C:/temp_data/db_test.dat", sep=""))
I am curious whether anyone has a working Windows environment using psql from R. My Greenplum server is not local.
My echo %PATH% includes C:\Program Files (x86)\pgAdmin III\1.12, in both the system and user variables.

There are a few problems with your command.
system() cannot be used with redirects; you must use shell().
You cannot use single quotes to quote commands on Windows; you must use double quotes.
To concatenate commands, use the & operator, not ; as on Unix.
So your command would look like this (it appears to be necessary to keep it on one line):
cmd<-'set PGPASSWORD=db_password& psql -h db_host -d db_name -c "copy(select * from large_table limit 1000) TO STDOUT CSV;" > C:/temp_data/db_test.dat'
shell(cmd)
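As an aside, a hedged sketch of a variant: child processes inherit the R session's environment, so you could set PGPASSWORD once with Sys.setenv() and keep the shell command itself simpler (host, database, and path are the question's placeholders):
Sys.setenv(PGPASSWORD = "db_password")
shell('psql -h db_host -d db_name -c "copy(select * from large_table limit 1000) TO STDOUT CSV;" > C:/temp_data/db_test.dat')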
But have you considered using the RPostgreSQL driver, which is a much simpler, platform-independent way to do your task?
# Load up the driver
library(RPostgreSQL)
drv <- dbDriver("PostgreSQL")
# Create a connection
con <- dbConnect(drv, dbname="db_name", host='db_host',password='db_password',user='db_user')
# Query the database
db_test=dbGetQuery(con, 'select * from large_table limit 1000')
# Write your file
write.csv(db_test,'C:/temp_data/db_test.dat')
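When you are finished, it is good practice to release the connection; dbDisconnect() is part of the same DBI interface:
dbDisconnect(con)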

Related

How can I execute a Docker sqlplus query inside a bash script?

On Ubuntu 21.04 I installed an Oracle database from Docker; it works fine and I am using it with SQL Developer, but now I need to use it in a shell script. I am not sure if it can work this way, or if there is a better way. I am trying to run a simple script:
#!/bin/bash
SSN_NUMBER="${HOME}/bin/TESTS/sql_log.txt"
select_ssn () {
sudo docker exec -it oracle bash -c "source /home/oracle/.bashrc; sqlplus username/password@ORCLCDB;" <<EOF > $SSN_NUMBER
select SSN from employee
where fname = 'James';
quit
EOF
}
select_ssn
After I run this, nothing happens and I need to kill the session, or the error
the input device is not a TTY
is displayed.
Specifying a here document from outside Docker is problematic. Try inlining the query commands into the Bash command line instead. Remember that the string argument to bash -c can be arbitrarily complex.
docker exec -it oracle bash -c "
source /home/oracle/.bashrc
printf '%s\n' \\
\"select SSN from employee\" \\
\"where fname = \'James\'\;\" |
sqlplus -s username/password@ORCLCDB" > "$SSN_NUMBER"
I took out the sudo but perhaps you really do need it. I added the -s option to sqlplus based on a quick skim of the manual page.
The quoting here is complex and I'm not entirely sure it doesn't require additional tweaking. I went with double quotes around the shell script, which means some things inside those quotes will be processed by the invoking shell before being passed to the container.
In the worst case, if the query itself is static, storing it in a file inside the image when you build it will be the route that offers the least astonishment.
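For instance, a hypothetical sketch of that route (query.sql and its path are illustrative, not from the original):
# In the Dockerfile, bake the static query into the image:
#   COPY query.sql /home/oracle/query.sql
# Then no here document or inline quoting is needed at run time:
docker exec oracle bash -c "source /home/oracle/.bashrc; sqlplus -s username/password@ORCLCDB @/home/oracle/query.sql" > "$SSN_NUMBER"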

Bash commands as variables failing when joining to form a single command

ssh="ssh user#host"
dumpstructure="mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database"
mysql=$ssh "$dumpstructure"
$mysql | gzip -c9 | cat > db_structure.sql.gz
This is failing on the third line with:
mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database: command not found
I've simplified my actual script for the purpose of debugging this specific error. $ssh and $dumpstructure aren't always being joined together in the real script.
Variables are meant to hold data, not commands. Use a function.
mysql () {
ssh user@host mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database
}
mysql | gzip -c9 > db_structure.sql.gz
Arguments to a command can be stored in an array.
# Although mysqldump is the name of a command, it is used here as an
# argument to ssh, indicating the command to run on a remote host
args=(mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database)
ssh user@host "${args[@]}" | gzip -c9 > db_structure.sql.gz
Chepner's answer is correct about the best way to do things like this, but the reason you're getting that error is actually even more basic. The line:
mysql=$ssh "$dumpstructure"
doesn't do anything like what you want. Because of the space between $ssh and "$dumpstructure", it'll parse this as environmentvar=value command, which means it should execute the "mysqldump..." part with the environment variable mysql set to ssh user#host. But it's worse than that, since the double-quotes around "$dumpstructure" mean that it won't be split into words, and so the entire string gets treated as the command name (rather than mysqldump being the command name, and the rest being arguments to it).
If this had been the right way to go about building the command, the right way to stick the parts together would be:
mysql="$ssh $dumpstructure"
...so that the whole combined string gets treated as part of the value to assign to mysql. But as I said, you really should use Chepner's approach instead.
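A minimal demonstration of that parsing rule:
foo=bar bash -c 'echo "$foo"'   # prints "bar": foo is set only in this one command's environment
echo "$foo"                     # prints an empty line: foo was never assigned in this shell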
Actually, commands in variables can also work, in the form `$var` or $($var). If it says "command not found", that may be because the command is not in your PATH, or you may need to give the full path of your command.
So let's set the downvotes aside and talk about the question itself.
The real problem is mysql=$ssh "$dumpstructure". This executes $dumpstructure with the additional environment variable mysql=$ssh, which is why we get the "command not found" error: mysqldump is located on the remote server, not on this host, so it is entirely reasonable that the command is not found locally.
From this point, let's see how to fix it. The OP wants to dump MySQL data from a remote server, which means $dumpstructure should be executed remotely. The simplest correct form is mysql="$ssh $dumpstructure", which joins both $ssh and $dumpstructure into a single command line held in the variable $mysql.
Finally, about the last command line: I do not agree that variables are meant to hold data, not commands, because a command is also a kind of data. The real problem is how to use such a variable correctly. The OP's construction is supported, at least on bash 4.2.46, so the real question is how to use a variable that holds a command, not whether to replace it with some other mechanism, such as wrapping the command in a bash function.
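A minimal illustration of that claim; it only holds for simple commands that involve no quoting, pipes, or redirection:
cmd="echo hello world"
$cmd     # unquoted: word splitting yields the command echo with two arguments
"$cmd"   # quoted: fails, because the entire string is looked up as a single command name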

PSQL: How can I prevent any output on the command line?

My problem: I'm trying to run a database generation script at the command line via a batch file as part of a TFS build process to enable nightly testing on a known dataset.
The scripts we run are outputting Notices, Warnings and some Errors on the command line. I would like to suppress at least the Notices and Warnings, and if possible the Errors as they don't seem to have an impact on the overall success of the scripts. This output seems to be affecting the success or failure of the process as far as the TFS build process is concerned. It's highlighting every line of output from the scripts as errors and failing the build.
As our systems are running on Windows, most of the potential solutions I've found online don't work as they seem to target Linux.
I've changed the client_min_messages to error in the postgresql.conf file, but when looking at the same configuration from pgAdmin (tools > server configuration) it shows the value as Error but the current value as Notice.
All of the lines in the batch file that call psql use the -q flag as well, but that only seems to suppress basic messages such as CREATE TABLE and ALTER TABLE.
An example line from the batch file is:
psql -d database -q < C:\Database\scripts\script.sql
Example output line from this command:
WARNING: column "identity" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
Specifying the file with the -f flag makes no difference.
I can manually run the batch file on my development machine and it produces the expected database regardless of what errors or messages show on the command prompt.
So ultimately I need all psql commands in my batch files to run silently.
psql COMMAND &> output.txt
Or, using your example command:
psql -d database -q < C:\Database\scripts\script.sql &> output.txt
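Note that &> is bash syntax; in a Windows batch file, the equivalent redirection of both stdout and stderr would be:
psql -d database -q < C:\Database\scripts\script.sql > output.txt 2>&1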
Use the psql -o flag to send the query output to the file of your choice, or to the null device if you don't care about it.
The -q option will not prevent the query output:
-q, --quiet run quietly (no messages, only query output)
To avoid the output you have to send the query result to a file, for example:
psql -U username -d db_name -p XXXX -c "SELECT * FROM table_name LIMIT 5;" > C:\test.csv
Use > to create a new file each time; use >> to create the file and keep appending to it.
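A hedged sketch of the -o variant against the question's batch setup (paths are illustrative):
psql -d database -q -o C:\temp_data\output.txt -f C:\Database\scripts\script.sql
REM or discard the query output entirely by writing to the null device:
psql -d database -q -o NUL -f C:\Database\scripts\script.sql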

Bash script with psql command tells nothing, but doesn't work

I am really confused with this piece of code:
...
COM="psql -d $DBNAME -p $PGPORT -c 'COPY (SELECT * FROM $TABLE_NAME s WHERE cast(s.$COLUMN_NAME as DATE) < DATE '$DATE_STOP' ) TO '$SCRIPTPATH/$ARCHIVE_NAME--$DBNAME' WITH CSV HEADER;'"
su postgres -c '$COM' &> pg_a.log
...
In the psql shell this SQL code works fine, but in the script it does not create the archive and tells me nothing about mistakes or failures.
Thanks in advance!
You'll get one hint if you replace your "su" command with this:
echo '$COM'
What you'll see is that it prints out $COM -- not an expansion, but the string itself.
You'll probably find a number of other problems with this approach. You're going to have to escape a bunch of characters so the shell doesn't interpret them for you. It's going to be a real pain.
I would put the sql into a file and use the -f option to psql.
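A hedged sketch of that approach, reusing the question's variables (the temporary file name is illustrative); the here document is left unquoted so the shell expands the variables before psql reads the file:
cat > /tmp/archive.sql <<EOF
COPY (SELECT * FROM $TABLE_NAME s WHERE cast(s.$COLUMN_NAME as DATE) < DATE '$DATE_STOP') TO '$SCRIPTPATH/$ARCHIVE_NAME--$DBNAME' WITH CSV HEADER;
EOF
su postgres -c "psql -d $DBNAME -p $PGPORT -f /tmp/archive.sql" &> pg_a.log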

Using open database connection to PostgreSQL in BASH?

I have to use BASH to connect to our PostgreSQL 9.1 database server to execute various SQL statements.
We have a performance issue caused by repeatedly opening/closing too many database connections (right now, we send each statement to a psql command).
I am looking at the possibility of maintaining an open database connection for a block of SQL statements using named pipes.
The problem I have is that once I open a connection and execute a SQL statement, I don't know when to stop reading from psql. I've thought about parsing the output to look for a prompt, although I don't know if that is safe given that the prompt characters could also appear embedded in SELECT output.
Does anyone have a suggestion?
Here's a simplified example of what I have thus far...
#!/bin/bash
PIPE_IN=/tmp/pipe.in
PIPE_OUT=/tmp/pipe.out
mkfifo $PIPE_IN $PIPE_OUT
psql -A -t jkim_edr_md_xxx_db < $PIPE_IN > $PIPE_OUT &
exec 5> $PIPE_IN; rm -f $PIPE_IN
exec 4< $PIPE_OUT; rm -f $PIPE_OUT
echo 'SELECT * FROM some_table' >&5
# unfortunately, this loop blocks
while read -u 4 LINE
do
echo LINE=$LINE
done
Use --file=filename for a batch execution.
Depending on your need for flow control you may want to use another language with a more flexible DB API (Python would be my choice here but use whatever works).
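For example (the statements file name is illustrative):
psql -A -t --file=statements.sql jkim_edr_md_xxx_db > results.out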
echo >&5 "SELECT * FROM some_table"
should read
echo 'SELECT * FROM some_table' >&5
The redirection operator >& conventionally comes after the arguments to echo; and also, if you use "" quotes, some punctuation may be treated specially by the shell, causing foul and mysterious bugs later. On the other hand, escaping ' inside single quotes gets ugly: SELECT * FROM some_table WHERE foo=\'Can\'\'t read\' …
You probably also want to create these pipes someplace safer than /tmp. There's a big security-hole race condition where someone else on the host could hijack your connection. Try creating a directory like /var/run/yournamehere/ with 0700 permissions, and create the pipes there, ideally with names like PIPE_IN=/var/run/jinkimsqltool/sql.pipe.in.$$ ($$ is your process ID, so simultaneously executed scripts won't clobber one another). To avoid exacerbating the security hole, note that rm -rf should not be needed for a pipe, and a clever cracker could abuse that -r in an escalation of privileges; plain rm -f is sufficient.
In psql you can use
\o YOUR_PIPE
SELECT whatever;
\o
which will open, write to, and close the pipe. Your bash-fu seems quite a lot stronger than mine, so I'll let you work out the details :)
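Untested, but combined with the question's file descriptor 5 it might look like:
echo '\o /var/run/jinkimsqltool/sql.pipe.out' >&5
echo 'SELECT * FROM some_table;' >&5
echo '\o' >&5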
