Bash script with psql command tells nothing, but doesn't work - bash

I am really confused with this piece of code:
...
COM="psql -d $DBNAME -p $PGPORT -c 'COPY (SELECT * FROM $TABLE_NAME s WHERE cast(s.$COLUMN_NAME as DATE) < DATE '$DATE_STOP' ) TO '$SCRIPTPATH/$ARCHIVE_NAME--$DBNAME' WITH CSV HEADER;'"
su postgres -c '$COM' &> pg_a.log
...
In the psql shell this SQL works fine, but in the script it does not create the archive and tells me nothing about mistakes or failures.
Thanks in advance!

You'll get one hint if you replace your "su" command with this:
echo '$COM'
What you'll see is that it prints out $COM -- not an expansion, but the string itself.
You'll probably find a number of other problems with this approach. You're going to have to escape a bunch of characters so the shell doesn't interpret them for you. It's going to be a real pain.
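The difference is easy to see without touching the database; a minimal sketch, with `sh -c` standing in for `su postgres -c`:

```shell
# The stages of expansion, demonstrated without a database:
COM="echo hello"

lit=$(echo '$COM')     # single quotes: the literal string $COM
exp=$(echo "$COM")     # double quotes: the variable's value

out1=$(sh -c '$COM')   # the child shell expands $COM itself; COM is
                       # not exported, so it runs an empty command
out2=$(sh -c "$COM")   # expanded here first: the child runs "echo hello"

echo "$lit / $exp / ${out1:-<nothing>} / $out2"
```

So the single-quoted form never runs your psql command at all, which is why the log stays silent.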
I would put the sql into a file and use the -f option to psql.
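Concretely, a sketch of the -f route. All names and values below are hypothetical stand-ins for the variables in the original script, and the su line is shown commented out:

```shell
# Hypothetical stand-ins for the original script's variables:
DBNAME=mydb PGPORT=5432 TABLE_NAME=events COLUMN_NAME=created_at
DATE_STOP=2015-01-01 SCRIPTPATH=/var/backups ARCHIVE_NAME=archive.csv

# Write the SQL to a file; the heredoc expands the variables, and the
# quotes in the SQL no longer fight the shell.
SQLFILE=$(mktemp)
cat > "$SQLFILE" <<EOF
COPY (SELECT * FROM $TABLE_NAME s
      WHERE cast(s.$COLUMN_NAME AS DATE) < DATE '$DATE_STOP')
TO '$SCRIPTPATH/$ARCHIVE_NAME--$DBNAME' WITH CSV HEADER;
EOF
cat "$SQLFILE"

# Running it is then just (shown, not executed in this sketch):
#   su postgres -c "psql -d $DBNAME -p $PGPORT -f $SQLFILE" &> pg_a.log
```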

Related

how to execute docker sqlplus query inside bash script?

On Ubuntu 21.04, I installed the Oracle database from Docker; it works fine and I use it for SQL Developer, but now I need to use it in a shell script. I am not sure if it can work this way, or if there is some better way. I am trying to run a simple script:
#!/bin/bash
SSN_NUMBER="${HOME}/bin/TESTS/sql_log.txt"
select_ssn () {
sudo docker exec -it oracle bash -c "source /home/oracle/.bashrc; sqlplus username/password@ORCLCDB;" <<EOF > $SSN_NUMBER
select SSN from employee
where fname = 'James';
quit
EOF
}
select_ssn
After I run this, nothing happens and I need to kill the session, or
the input device is not a TTY
is displayed
Specifying a here document from outside Docker is problematic. Try inlining the query commands into the Bash command line instead. Remember that the string argument to bash -c can be arbitrarily complex.
docker exec -it oracle bash -c "
source /home/oracle/.bashrc
printf '%s\n' \\
\"select SSN from employee\" \\
\"where fname = 'James';\" |
sqlplus -s username/password@ORCLCDB" > "$SSN_NUMBER"
I took out the sudo but perhaps you really do need it. I added the -s option to sqlplus based on a quick skim of the manual page.
The quoting here is complex and I'm not entirely sure it doesn't require additional tweaking. I went with double quotes around the shell script, which means some things inside those quotes will be processed by the invoking shell before being passed to the container.
In the worst case, if the query itself is static, storing it inside the image in a file when you build it will be the route which offers the least astonishment.
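The layering can be exercised locally by letting `cat` stand in for `sqlplus` (the container and credentials above stay hypothetical; only the consumer differs):

```shell
# Two layers of shell: the outer double quotes are consumed here; the
# quotes that survive are interpreted by the bash -c child.
out=$(bash -c "
printf '%s\n' \\
  'select SSN from employee' \\
  \"where fname = 'James';\" |
cat")
echo "$out"
```

The embedded single quote around James reaches the consumer intact, which is the part that is hardest to get right by eye.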

Bash commands as variables failing when joining to form a single command

ssh="ssh user@host"
dumpstructure="mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database"
mysql=$ssh "$dumpstructure"
$mysql | gzip -c9 | cat > db_structure.sql.gz
This is failing on the third line with:
mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database: command not found
I've simplified my actual script for the purpose of debugging this specific error. $ssh and $dumpstructure aren't always being joined together in the real script.
Variables are meant to hold data, not commands. Use a function.
mysql () {
ssh user@host mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database
}
mysql | gzip -c9 > db_structure.sql.gz
Arguments to a command can be stored in an array.
# Although mysqldump is the name of a command, it is used here as an
# argument to ssh, indicating the command to run on a remote host
args=(mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database)
ssh user@host "${args[@]}" | gzip -c9 > db_structure.sql.gz
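Why the array form works can be checked locally; here `printf` stands in for `ssh`, and each element arrives as its own word even when it contains spaces:

```shell
# "${args[@]}" expands to one word per element, not one big string.
args=(mysqldump --no-data "a value with spaces")
out=$(printf '[%s]\n' "${args[@]}")   # printf stands in for ssh
echo "$out"
```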
Chepner's answer is correct about the best way to do things like this, but the reason you're getting that error is actually even more basic. The line:
mysql=$ssh "$dumpstructure"
doesn't do anything like what you want. Because of the space between $ssh and "$dumpstructure", it'll parse this as environmentvar=value command, which means it should execute the "mysqldump..." part with the environment variable mysql set to ssh user@host. But it's worse than that, since the double-quotes around "$dumpstructure" mean that it won't be split into words, and so the entire string gets treated as the command name (rather than mysqldump being the command name, and the rest being arguments to it).
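That environmentvar=value prefix form is easy to observe in isolation (a sketch; `sh -c` is just a stand-in for the real command):

```shell
# The assignment before the command only sets an environment variable
# for that one command; it does not run anything itself.
out=$(greet=hello sh -c 'echo "$greet"')
echo "$out"   # hello
```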
If this had been the right way to go about building the command, the right way to stick the parts together would be:
mysql="$ssh $dumpstructure"
...so that the whole combined string gets treated as part of the value to assign to mysql. But as I said, you really should use Chepner's approach instead.
Actually, commands in variables can also work, in the form `$var` or $($var). If it says "command not found", it could be because the command is not in your PATH, or you need to give the full path of the command.
So let's put the downvotes aside and talk about this question.
The real problem is mysql=$ssh "$dumpstructure". This means you'll execute $dumpstructure with the additional environment variable mysql=$ssh, hence the "command not found" error. mysqldump is located on the remote server, not on this host, so it is reasonable that the command is not found locally.
From this point, let's see how to fix the question.
The OP wants to dump MySQL data from a remote server, which means $dumpstructure should be executed remotely. Looking at the third line, mysql=$ssh "$dumpstructure", we have now figured out why it causes a problem. The simplest correct command would be mysql="$ssh $dumpstructure", which joins $ssh and $dumpstructure into a single command line held in the variable $mysql.
Finally, let's talk about the last command line. I do not agree that "variables are meant to hold data, not commands", because a command is also a kind of data. The real problem is how to use it correctly. The OP's approach is supported, at least on bash 4.2.46.
So the real question is how to use a variable to hold commands, not how to avoid it by introducing a new method, such as wrapping the commands in a bash function.
So who can tell me why this answer escapes readers' notice and gets voted down instead?

Send complex command via ssh

I'm trying to send this command via ssh:
ssh <user1>@<ip1> ssh <user2>@<ip2> /opt/user/bin -f /opt/user/slap.conf -l /home/admin/`date +%Y%m%d`_Export_file$nr.gz -s "ou=multi" -a "(& (entry=$nr)(serv=PS))" -o wrap=no
This command is customized, so don't be confused by the specifics...
But it's not executed; I get something like: unexpected '('
If I log in to the server and give this command there, it executes correctly. So I think it must be something to do with bracket and parenthesis quoting rules.
Please can someone help me?
thank you in advance.
You will need to escape the quotes, possibly twice, since each invocation of ssh will involve stripping a layer off. Put escaped single quotes round the entire command, and then nested unescaped single quotes round the inner command:
ssh <user1>@<ip1> \'ssh <user2>@<ip2> '/opt/user/bin -f /opt/user/slap.conf -l /home/admin/`date +%Y%m%d`_Export_file$nr.gz -s "ou=multi" -a "(& (entry=$nr)(serv=PS))" -o wrap=no'\'
This assumes, by the way, that you want the backticks to be unpacked and the command executed on ip2, rather than beforehand on your source machine, and similarly with the decoding of the $nr variable. It's not clear how you want them interpreted.
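The layer-stripping itself can be simulated locally, with nested `bash -c` calls standing in for the two ssh hops:

```shell
# Each bash -c (like each ssh hop) strips one layer of quoting before
# handing the remainder to the next shell.
out=$(bash -c "bash -c 'echo \"two layers deep\"'")
echo "$out"
```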

Bash scripting with make

I have a makefile which I use to run simple bash scripts inside of a repo I am working on. I am attempting to make a new make call which will log me into my mySQL database automatically. I have my username and password stored in a .zinfo file at a known location (such as "/u/r/z/myid/.zinfo"). I am attempting to read the lines of this file to get my password, which is of the format:
user = csj483
database = cs495z_csj483
password = fjlID9dD923
Here is the code I am trying to run from the makefile, but it is not working. If I run the same code directly in the shell, it seems to work ok. Note that I had to use double $$s because make doesn't like single ones.
login:
for line in $$(cat /u/z/users/cs327e/$$USER/.zinfo); \
do \
PASSWORD=$$line; \
echo $$PASSWORD; \
done
echo $$PASSWORD
At this point, I am just trying to get the password, which should be the last value that PASSWORD is set to in the loop. If anyone can suggest an easier way to retrieve the password, or point out my error in my code, I would appreciate it. Eventually I will also want to retrieve the database name as well, and any help with that too would be appreciated as well. I am new to bash, but experienced in numerous other languages.
You didn't specify what you meant by "not working"; when asking questions please always be very clear about the command you typed, the result you got (cut and paste is best), and why that's not what you expected.
Anyway, most likely the behavior you're seeing is that the first echo shows the output you expect, but the second doesn't. That's because make will invoke each logical line in the recipe in a separate shell. So, the for loop is run in one shell and it sets the environment variable PASSWORD, then that shell exits and the last echo is run in a new shell... where PASSWORD is no longer set.
You need to put the entirety of the command line in a single logical line in the recipe:
login:
for line in $$(cat /u/z/users/cs327e/$$USER/.zinfo); do \
PASSWORD=$$line; \
echo $$PASSWORD; \
done \
&& echo $$PASSWORD
One last thing to remember: you say you're running bash scripts, but make does not run bash. It runs /bin/sh, regardless of what shell you personally use (imagine the havoc if makefiles used whatever shell the user happened to be using!). Your best option is to write recipes in portable shell syntax. If you really can't do that, be sure to set SHELL := /bin/bash in your Makefile to force make to use bash.
ETA:
Regarding your larger question, you have a lot of options. If you have control over the format of the zinfo file at all, then I urge you to define it to use the same syntax as the shell for defining variables. In the example above if you removed the whitespace around the = sign, like this:
user=csj483
database=cs495z_csj483
password=fjlID9dD923
Then you have a valid shell script with variable assignments. Now you can source this script in your makefile and your life is VERY easy:
login:
. /u/z/users/cs327e/$$USER/.zinfo \
&& echo user is $$user \
&& echo database is $$database \
&& echo password is $$password
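The sourcing mechanism can be verified in isolation outside make (the file name and values below are hypothetical):

```shell
# Write a zinfo-style file with no spaces around "=" ...
zinfo=$(mktemp)
printf '%s\n' 'user=csj483' 'database=cs495z_csj483' 'password=fjlID9dD923' > "$zinfo"

# ... source it: each line is then an ordinary shell assignment.
. "$zinfo"
echo "user is $user, database is $database"
```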
If you don't have control over the syntax of the zinfo file then life is harder. You could use eval, something like this:
login:
while read var eq value; do \
eval $$var="$$value"; \
done < /u/z/users/cs327e/$$USER/.zinfo \
&& echo user is $$user \
&& echo database is $$database \
&& echo password is $$password
This version will only work if there ARE spaces around the "=". If you want to support both that's do-able as well.
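Supporting both forms mostly means trimming whitespace before the eval; a hypothetical helper, using only bash parameter expansion:

```shell
parse_line() {
  # Split on the first "=", then trim spaces on both sides (bash only).
  local var value
  var=${1%%=*}; value=${1#*=}
  var=${var%"${var##*[![:space:]]}"}        # drop trailing spaces
  value=${value#"${value%%[![:space:]]*}"}  # drop leading spaces
  printf '%s=%s\n' "$var" "$value"
}
parse_line 'password = fjlID9dD923'   # password=fjlID9dD923
parse_line 'user=csj483'              # user=csj483
```

In the makefile recipe you would feed each read line through a helper like this before the eval, remembering to double the $s.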

Using open database connection to PostgreSQL in BASH?

I have to use BASH to connect to our PostgreSQL 9.1 database server to execute various SQL statements.
We have a performance issue caused by repeatedly opening/closing too many database connections (right now, we send each statement to a psql command).
I am looking at the possibility of maintaining an open database connection for a block of SQL statements using named pipes.
The problem I have is that once I open a connection and execute a SQL statement, I don't know when to stop reading from psql. I've thought about parsing the output to look for a prompt, though I don't know if that is safe, given the possibility that the prompt character may also appear in SELECT output.
Does anyone have a suggestion?
Here's a simplified example of what I have thus far...
#!/bin/bash
PIPE_IN=/tmp/pipe.in
PIPE_OUT=/tmp/pipe.out
mkfifo $PIPE_IN $PIPE_OUT
psql -A -t jkim_edr_md_xxx_db < $PIPE_IN > $PIPE_OUT &
exec 5> $PIPE_IN; rm -f $PIPE_IN
exec 4< $PIPE_OUT; rm -f $PIPE_OUT
echo 'SELECT * FROM some_table' >&5
# unfortunately, this loop blocks
while read -u 4 LINE
do
echo LINE=$LINE
done
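One common way out of the blocking read is a sentinel line: after each statement, ask the backend to echo a marker and read until it appears (with psql, `\echo END_OF_QUERY` sent after each statement). A sketch with `cat` standing in for psql so it runs anywhere:

```shell
# Persistent backend over two fifos; cat is a stand-in for
# "psql -A -t dbname" so the sketch needs no database.
dir=$(mktemp -d)
mkfifo "$dir/in" "$dir/out"
cat < "$dir/in" > "$dir/out" &
exec 5> "$dir/in" 4< "$dir/out"
rm -rf "$dir"                      # the open fds keep the pipes alive

echo 'row 1' >&5
echo 'row 2' >&5
echo 'END_OF_QUERY' >&5            # with psql: echo '\echo END_OF_QUERY' >&5

out=$(while read -u 4 LINE; do
  [ "$LINE" = END_OF_QUERY ] && break   # stop reading without blocking
  echo "LINE=$LINE"
done)
echo "$out"
exec 5>&- 4<&-
wait
```

The marker must be a string that cannot occur in real result rows, which is the same caveat as prompt-sniffing but under your control.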
Use --file=filename for a batch execution.
Depending on your need for flow control you may want to use another language with a more flexible DB API (Python would be my choice here but use whatever works).
echo >&5 "SELECT * FROM some_table"
should read
echo 'SELECT * FROM some_table' >&5
The redirection operator >& is conventionally placed after the arguments to echo (it is legal elsewhere in a simple command, but trailing placement reads more clearly); and also, if you use "" quotes, some punctuation may be treated specially by the shell, causing foul and mysterious bugs later. On the other hand, quoting with '' will be … ugly once the SQL itself contains apostrophes: echo 'SELECT * FROM some_table WHERE foo = '\''Can'\'\''t read'\' >&5 …
You probably also want to create these pipes somewhere safer than /tmp. There's a big security-hole race condition where someone else on the host could hijack your connection. Try creating a directory like /var/run/yournamehere/ with 0700 permissions, and create the pipes there, ideally with names like PIPE_IN=/var/run/jinkimsqltool/sql.pipe.in.$$ ($$ is your process ID, so simultaneously-executed scripts won't clobber one another). As an aside, rm -rf should not be needed for a pipe, and a clever cracker could abuse the -r in an escalation of privileges; just rm -f is sufficient.
In psql you can use
\o YOUR_PIPE
SELECT whatever;
\o
which will open, write to, and close the pipe. Your bash-fu seems quite a lot stronger than mine, so I'll let you work out the details :)
