We are trying to schedule hanganalyze automatically when the number of sessions is greater than 300, as it is too difficult to monitor the database continuously and this issue lasts only a couple of minutes. Could you help us create such a procedure? Also, could you confirm whether running this procedure has any impact on performance?
A hang analysis is done via oradebug, so you'd be looking at using SQL*Plus to check the session count and then run it if needed (a level 3 hanganalyze is generally lightweight; higher levels can be expensive on a busy system). For example, store this in a script:
set heading off feedback off verify off trimspool on
spool /tmp/checker.sql
select
  case when count(*) > 300 then
    'oradebug setorapname reco' || chr(10) || 'oradebug -g all hanganalyze 3'
  else
    'REM do nothing'
  end
from v$session;
spool off
@/tmp/checker.sql
and then have a SQL*Plus session that does:
conn / as sysdba
@script.sql
host sleep 60
@script.sql
host sleep 60
...
...
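The repeated run-then-sleep pairs can also be driven by a small bash wrapper instead of being typed out by hand. A minimal sketch, assuming the checker above is saved as /tmp/script.sql and sqlplus can connect with OS authentication; run_every is a hypothetical helper name:

```shell
# run_every: hypothetical helper that runs a command N times with a
# fixed delay between runs.
run_every() {
  iterations=$1 delay=$2
  shift 2
  i=1
  while [ "$i" -le "$iterations" ]; do
    "$@"
    sleep "$delay"
    i=$((i + 1))
  done
}

# e.g. poll once a minute for an hour:
# run_every 60 60 sqlplus -s / as sysdba @/tmp/script.sql
```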
For example, let's say we have the following script to execute in prod.
drop index idx_test1;
drop index idx_test2;
But after executing the first instruction we get the error ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired. From there I'd like to stop all subsequent instructions. Is there any way to do this in SQL*Plus and SQL Developer / PL/SQL Developer?
I would recommend running scripts using SQL*Plus. You would use
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
This can be part of your glogin.sql or just as a header of your production script.
This will also pass the error code to the calling process for your own logging, so if you have some orchestrator it can know what happened.
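From a shell wrapper, that exit code can be checked directly via $?. A minimal sketch, assuming sqlplus is on PATH and the production script begins with the WHENEVER SQLERROR line; run_with_abort and drop_indexes.sql are hypothetical names:

```shell
# run_with_abort: hypothetical helper; runs the given command and stops
# with its exit code on failure, so an orchestrator sees the value that
# WHENEVER SQLERROR EXIT SQL.SQLCODE put into the sqlplus exit status.
run_with_abort() {
  "$@"
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "SQL script failed with exit code $rc, aborting" >&2
    return "$rc"
  fi
  echo "SQL script succeeded"
}

# a real call would look like:
# run_with_abort sqlplus -s user/pass@db @drop_indexes.sql
```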
I need the sqlplus command response time to check DB connectivity in Oracle 12c using a shell script, in order to get the DB connectivity time from multiple servers, with the sqlplus command responding within 4-5 seconds.
I don't know about SQL*Plus, but I'd rather use TNSPING (especially as you're about to call it from an operating system command prompt).
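If you do want to time the probe from a shell script, here is one sketch, assuming coreutils timeout is available; probe is a hypothetical helper and MYDB a placeholder TNS alias:

```shell
# probe: hypothetical helper; runs a connectivity check under a
# 5-second budget (coreutils timeout) and reports elapsed time and
# exit status.
probe() {
  start=$(date +%s)
  timeout 5 "$@" >/dev/null 2>&1
  rc=$?
  end=$(date +%s)
  echo "elapsed=$((end - start))s rc=$rc"
  return "$rc"
}

# real calls would look like:
# probe tnsping MYDB
# probe sqlplus -s -L user/pass@MYDB </dev/null
```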
I am currently trying to load-test an instance hosting a PostgreSQL server from a bash script. The idea is to spawn a bunch of open connections (without running any queries) and then check the memory.
To spawn a bunch of connections I do:
export PGPASSWORD="$password"
for i in $(seq 1 $maxConnections);
do
sleep 0.2
psql -h "$serverAddress" -U postgres >/dev/null &
done
However, it seems that the connections don't stay open: when I check for active connections, I get 0 from the IP of the instance I'm running the script from. However, if I do
psql -h "$serverAddress" -U postgres &
manually from the shell, it keeps the connection open. How would I open and maintain open connections within a bash script? I've checked that the password is correct, and if I remove the ampersand inside the script I do enter the psql console with an open connection as expected. It's only when I background it in the script that it causes problems.
You can start your psql sessions in a sub-shell inside your loop by using the sub-shell parentheses syntax shown below. However, if you do this, I recommend you write code to manage your jobs and clean them up when you are done.
(psql -h "$serverAddress" -U postgres)&
I tested this and was able to maintain connections to a Postgres instance this way. Note that if you check for active connections via a statement like select * from pg_stat_activity; you will see these connections as open and idle, not active, since they are not executing any task or query.
If you put this code in a script and execute it, you will need to make sure the script does not terminate before you are ready for all the sessions to die.
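Putting the pieces together, here is a sketch of the spawn loop with job tracking and cleanup, using the variable names from the question (the defaults are only for illustration):

```shell
# Spawn idle connections in sub-shells, remember their PIDs, and kill
# them when finished. serverAddress / maxConnections / PGPASSWORD are
# as in the question; the defaults here are only for illustration.
serverAddress=${serverAddress:-localhost}
maxConnections=${maxConnections:-5}
pids=""
for i in $(seq 1 "$maxConnections"); do
  sleep 0.2
  (psql -h "$serverAddress" -U postgres >/dev/null 2>&1) &
  pids="$pids $!"
done
echo "spawned $(echo $pids | wc -w) sessions"

# ... check memory / pg_stat_activity here ...

# cleanup when done:
kill $pids 2>/dev/null || true
wait
```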
I am getting the Oracle maximum processes exceeded error. To increase the maximum allowed processes, I log in as sysdba:
$ sqlplus / as sysdba
I try to see processes parameter:
SQL> show parameter processes
ERROR:
ORA-01012: not logged on
I presume that sqlplus is not able to log in due to the process limit being exceeded.
I try to shutdown the instance so that processes will be closed:
SQL> shutdown immediate
ORA-24324: service handle not initialized
ORA-24323: value not allowed
ORA-00020: maximum number of processes (%s) exceeded
I try to kill the Oracle service, that should kill the processes:
$ sudo service oracle-xe restart
[sudo] password for kshitiz:
Shutting down Oracle Database 10g Express Edition Instance.
Stopping Oracle Net Listener.
Starting Oracle Database 10g Express Edition Instance.
I login as sysdba again but it gives the same error.
How do I kill the processes so that the database is manageable again? And how should I diagnose why this error is occurring (i.e. which application is responsible for hogging the database)?
I am on Oracle 10g express and Ubuntu 13.10.
The solution is to increase the maximum number of processes allowed. But the problem is that in this state there is no way to do that, since the DB won't take any commands: even the command to increase the processes parameter gives the maximum-processes-exceeded error.
Kill all processes belonging to oracle user:
$ sudo su
$ su oracle
$ kill -9 -1
Check that all processes are killed:
$ ps -ef | grep oracle
If not, kill them individually using kill -9 PID. Now connect to the database as sysdba:
$ su oracle
$ sqlplus / as sysdba
Shut down the database. (Note: 1. shutdown immediate would probably not work and would give the same ORA-01012 error. 2. Since the processes are killed, isn't the database already shut down? That's what I thought, but it seems Oracle keeps a record of the last state, which isn't cleared until you run the following command.)
SQL> shutdown abort
Now bring it up again:
SQL> startup
Check the current usage and raise the processes parameter to prevent this problem from recurring:
SQL> select count(*) from v$process;
SQL> alter system set processes=300 scope=spfile;
Restart database:
SQL> shutdown immediate
SQL> startup
Check that max processes have increased:
SQL> show parameter processes
Edit:
After all of this, for some strange reason, my application wasn't connecting to the DB, though SQL*Plus was working just fine. So as a last step I restarted the oracle-xe service:
$ sudo service oracle-xe restart
This got everything working for me.
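On the diagnosis question: once the instance is back up, v$session shows which client programs hold the sessions. A minimal sketch, assuming sqlplus is on PATH with OS authentication; HOG_QUERY is a hypothetical name, kept in a variable so the same query can be fired from cron or a monitoring script:

```shell
# v$session summary to find the connection hog: rank connected
# programs/machines by how many sessions each one holds.
HOG_QUERY='select program, machine, count(*) as sessions
from v$session
group by program, machine
order by sessions desc;'

# a real call would look like:
# printf '%s\n' "$HOG_QUERY" | sqlplus -s / as sysdba
printf '%s\n' "$HOG_QUERY"
```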
I have a script that contains:
db2 connect to user01
db2 describe indexes for table table_desc
What I figure is happening is that the process that executes the first line is different from the process that runs the second line, so the process that executes the first line gets the connection while the one that runs the second line has no connection at all. This is confirmed by the error I get at the second line saying that no database connection exists.
Is it possible to have the same process run both commands? Or at least a way to "join" the first process to the second?
If you want both instructions to run in the same process you need to write them to a script:
$ cat foo.db2
connect to user01
describe indexes for table table_desc
and run that script in the db2 interpreter:
db2 -f foo.db2
A Here Document might work as well:
db2 <<EOF
connect to user01
describe indexes for table table_desc
EOF
I can't test that, though, since I don't currently have DB2 on Linux at hand.