In a Linux script, is it possible to execute multiple commands in the same process? - bash

I have a script that contains:
db2 connect to user01
db2 describe indexes for table table_desc
What I figure is happening is that the process executing the first line is different from the process running the second line. The first process gets the connection, while the second has no connection at all. This is confirmed by the error I get on the second line, saying that no database connection exists.
Is it possible to have the same process run both commands? Or at least a way to "join" the first process to the second?

If you want both instructions to run in the same process you need to write them to a script:
$ cat foo.db2
connect to user01
describe indexes for table table_desc
and run that script in the db2 interpreter:
db2 -f foo.db2
A Here Document might work as well:
db2 <<EOF
connect to user01
describe indexes for table table_desc
EOF
I can't test that, though, since I currently don't have a DB2 on Linux at hand.
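The point of both approaches is that every line reaches the standard input of a single process. A minimal sketch of that mechanism, with `cat -n` standing in for the db2 interpreter (so it runs anywhere):

```shell
#!/bin/sh
# Both lines below are read from stdin by ONE process -- the same reason
# a here document (or `db2 -f script`) keeps the db2 connection alive.
# `cat -n` stands in for the db2 interpreter here.
out=$(cat -n <<'EOF'
connect to user01
describe indexes for table table_desc
EOF
)
echo "$out"
```

Both commands appear, numbered by the same `cat` process, which is exactly what you want the db2 interpreter to see.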

Related

How to avoid series of commands running under sql plus using shell script

I have created a script that runs a series of jobs in the shell using a while loop. The problem is that one of the jobs connects to SQL*Plus, and the remaining commands then run inside sqlplus.
For example, my input file job.txt:
job1
job2
job3
Now, using the script below, I call the jobs one by one, so the next job won't start until the current one finishes. The catch comes when a job connects to SQL*Plus: once it does, the current job's process completes, and the remaining jobs are executed as SQL statements instead of in the Unix environment.
while read -r line
do
    $line    # run the job named on this line
done < job.txt
This is the error message I get in sqlplus after the current instance exits:
SP2-0042: unknown command
How can I avoid having the jobs run under sqlplus?
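One way to keep the client from swallowing the rest of job.txt is to redirect its stdin from a here document inside the loop, so it never reads from the loop's input. A minimal sketch, with `cat` standing in for `sqlplus -s user/pass@db` (the connect string and SQL are placeholders for your environment):

```shell
#!/bin/bash
printf 'job1\njob2\njob3\n' > job.txt    # the sample input file

started=""
# Redirect the client's stdin from a here document so it cannot consume
# the remaining lines of job.txt; `cat` stands in for `sqlplus -s user/pass@db`.
while read -r job; do
    echo "starting $job"
    started="$started $job"
    cat <<'EOF' >/dev/null
select 1 from dual;
exit;
EOF
done < job.txt
rm -f job.txt
```

Because the here document feeds the client, `read` still sees all three lines of job.txt, so every job starts.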

How to get ORACLE sqlplus command response time on multiple server using shell script?

I need the response time of a sqlplus command to check DB connectivity in Oracle 12c using a shell script, in order to check connectivity from multiple servers; the sqlplus command should respond within 4-5 seconds.
I don't know about SQL*Plus, but I'd rather use TNSPING (especially as you're about to call it from the operating system command prompt).
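Whichever probe you choose, the shell can both time it and enforce the 4-5 second budget. A sketch of the harness, where `sleep 1` stands in for the real probe (e.g. `tnsping mydb` or a short sqlplus ping, both assumptions about your environment):

```shell
#!/bin/sh
# Time a connectivity probe and fail it after 5 seconds.
# `sleep 1` stands in for the real probe command, e.g. `tnsping mydb`.
start=$(date +%s)
if timeout 5 sleep 1; then status=ok; else status=timeout; fi
end=$(date +%s)
elapsed=$((end - start))
echo "status=$status elapsed=${elapsed}s"
```

`timeout` (GNU coreutils) kills the probe if it exceeds the budget, and the `date` arithmetic gives you the response time to log per server.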

Establishing a simple connection to postgres server for load test in bash

I am currently trying to load test a server hosting a postgres instance from a bash script. The idea is to spawn a bunch of open connections (without running any queries) and then check the memory.
To spawn a bunch of connections I do:
export PGPASSWORD="$password"
for i in $(seq 1 $maxConnections);
do
sleep 0.2
psql -h "$serverAddress" -U postgres >/dev/null &
done
However, it seems that the connections don't stay open, as when I check for active connections, I get 0 from the ip of the instance I'm running it from. However, if I do
psql -h "$serverAddress" -U postgres &
manually from the shell, it keeps the connection open. How would I open and maintain open connections within a bash script? I've checked the password is correct, and if I exclude the ampersand from within the script, then I do enter the psql console with an open connection as expected. It's just when I background it in the script that it causes problems.
You can start your psql sessions in a sub-shell while you loop by using the sub-shell parentheses syntax like below. However, if you do this I recommend you write code to manage your jobs and clean them up when you are done.
(psql -h "$serverAddress" -U postgres)&
I tested this and I was able to maintain connections to a postgres instance this way. However, if you are checking for active connection via a select statement like select * from pg_stat_activity; you will see these connections as open and idle to the instance not active as they are not executing any task or query.
If you put this code in a script and execute it, you will need to make sure that the script does not terminate before you are ready for all the sessions to die.
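A sketch of that job management: record each background PID so the script can hold the sessions open and clean them up deliberately. Here `sleep 30` stands in for `(psql -h "$serverAddress" -U postgres) &`:

```shell
#!/bin/bash
# Track the backgrounded sessions so they can be cleaned up when done.
# `sleep 30` stands in for `(psql -h "$serverAddress" -U postgres) &`.
pids=()
for i in $(seq 1 3); do
    sleep 30 &
    pids+=($!)
done
echo "spawned ${#pids[@]} sessions"
# ... inspect memory / pg_stat_activity here ...
kill "${pids[@]}"
wait 2>/dev/null
echo "cleaned up"
```

The final `wait` reaps the killed sessions, so the script only exits once every connection is actually gone.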

running db2 in bash from git's mingw on windows

I have a shell script that runs a few db2 commands which I want to use on windows.
When running this in bash from msysgit 2.5.3 64bit I get an error from db2:
SQL1024N The connection to the database was lost. SQLSTATE=08003
For instance:
start db2 with db2cmd, then
start bash from the db2cmd window,
then run
db2 connect to <db> user <user>
db2 select * from syscat.tables
The db2 select will produce the same error.
This happens because bash starts another subshell to execute each db2 command, and db2 connect launches another process, db2bp, which actually holds the connection.
When db2 connect returns, the subshell is closed and the connection is lost.
This also happens when I concatenate the commands with ; or &&.
Is there a way to make bash not execute a subshell or at least not for every command?
The usual method for preventing a new shell from being spawned is to prefix each command with a dot (some references, for example, here). You may also look at the shell built-in exec command. However, I'm afraid that running a shell on Windows has its own oddities, at least judging from my own experience, so you may want to experiment with different shell flavours before you get the solution right. Hope it helps anyway!
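The dot-prefix works because sourcing runs a script inside the current process, while executing it spawns a child shell whose state vanishes on exit. A small, environment-independent demonstration of the difference:

```shell
#!/bin/bash
# Executing a script spawns a child shell; sourcing (the dot) runs it in
# the current process, which is why sourced state survives.
tmp=$(mktemp)
echo 'MYVAR=from_script' > "$tmp"

bash "$tmp"                       # child process: MYVAR does not survive
echo "after execute: MYVAR='$MYVAR'"

. "$tmp"                          # sourced: same process, MYVAR persists
echo "after source:  MYVAR='$MYVAR'"
rm -f "$tmp"
```

The same logic applies to db2: any state held by a child process (like the connection in db2bp) is tied to that process's lifetime.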
For scripting in Bash you should add this little bugger after the connect string:
export DB2DBDFT=
That will ensure that all further subshells will use your db2 connection.
Hope this solves your problem.

Chain dependent bash commands

I'm trying to chain together two commands:
The first in which I start up postgres
The second in which I run a command meant for postgres(a benchmark, in this case)
As far as I know, the '||', ';', and '&/&&' operators all require that the first command terminate or exit somehow. This isn't the case with a server that's been started, so I'm not sure how to proceed. I can't run the two completely in parallel, as the server has to be started.
Thanks for the help!
I would recommend something along the lines of the following in a single bash script:
Start the Postgres server via a command like /etc/init.d/postgresql start or similar
Sleep for a period of time to give the server time to start up; perhaps a minute or two
Then run a psql command that connects to the server to test its up-ness
Tie that command together with your benchmark via &&, so the benchmark runs only if the psql command succeeds (depending on psql's exact return codes, you may need to inspect the command's output instead of the return code). The command run via psql is best a simple query that connects to the server and returns a simple value that can be cross-checked.
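Rather than a fixed sleep, a polling loop is usually more robust. A skeleton of the whole sequence, where for Postgres the probe would typically be `pg_isready -q -h localhost` (an assumption about your setup); here a flag file simulates the server coming up so the skeleton runs anywhere:

```shell
#!/bin/bash
# Skeleton for "start server, wait until it answers, then benchmark".
# A flag file simulates the server coming up; for Postgres the probe
# would be something like `pg_isready -q -h localhost` instead.
flag=$(mktemp -u)
probe()     { [ -e "$flag" ]; }
benchmark() { echo "running benchmark"; result=ran; }

( sleep 1; touch "$flag" ) &      # simulates the server finishing startup

result=skipped
for i in $(seq 1 20); do
    probe && break                # stop polling as soon as the probe succeeds
    sleep 0.5
done
if probe; then benchmark; else echo "server never came up" >&2; fi
rm -f "$flag"
echo "result=$result"
```

Swapping in the real start command, probe, and benchmark gives you the && chaining described above without guessing how long startup takes.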
Edit in response to comment from OP:
It depends on what you want to benchmark. If you just want to benchmark a command after the server has started, and don't want to restart the server every time, then I would tweak the code to run the psql up-ness test in a separate block, starting the server if not up, and then afterward, run the benchmark test command unconditionally.
If you do want to start the server up fresh each time (to test cold-start performance, or similar), then I would add another command after the benchmarked command to shutdown the server, and then sleep, re-running the test command to check for up-ness (where this time no up-ness is expected).
In either case you should be able to run the script multiple times.
A slight aside: if your test is destructive (that is, it writes to the DB), you may want to dump a "clean" copy of the DB, i.e. the DB in its pre-test state, and then, on each run of the script, create a fresh DB with a different name from the original using that dump, dropping the previous copy beforehand.
