Execute Long Stored Procedure via Web Page - Oracle

I have a web application (using Struts 1.x) running on WebLogic 10.3.3 that reads files and loads the data into a database (Oracle 11g). Through this web application I can also run a stored procedure in the database. The problem is that, after a few changes to the stored procedure, it now takes a long time to finish executing (25-40 minutes).
The page that runs the stored procedure now loads indefinitely: even after the stored procedure has finished (verified through a session browser), the page remains in a loading state, and after some time it displays a timeout error.
Is there any way for a web page to run a stored procedure that takes a long time to finish (up to 60 minutes)? Should I make changes to the application code or to the WebLogic settings?
Thanks for the responses.

You should never ever ever ever leave the user waiting. When I last dealt with that, the flow was (sketched in code after the list):
Client (user/browser) tells the server "Start procedure".
Server spawns a thread which starts the procedure.
Server tells Client(user/browser) "I is done now. come back later. Mabee I emails yoo"
Server completes procedure.
Server emails user that procedure is done.
Secondary thread terminates.
Server eats Sir Robin's minstrels and there is much rejoicing.
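A minimal sketch of that flow, assuming a plain JDBC connection; the URL, credentials, the procedure name LONG_PROC and the notifyUser helper are all placeholders, and in a container like WebLogic you would normally use a managed DataSource/WorkManager rather than raw threads:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class LongProcRunner {
        // Single background worker so only one long run executes at a time
        private static final ExecutorService worker = Executors.newSingleThreadExecutor();

        /** Called from the web action: submits the work and returns immediately. */
        public static void startProcedure(final String userEmail) {
            worker.submit(new Runnable() {
                public void run() {
                    try {
                        Connection con = DriverManager.getConnection(
                                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "appuser", "secret");
                        try {
                            CallableStatement cs = con.prepareCall("{call LONG_PROC}");
                            cs.setQueryTimeout(0); // no client-side timeout
                            cs.execute();          // may take 25-60 minutes
                            cs.close();
                            notifyUser(userEmail, "Procedure finished OK");
                        } finally {
                            con.close();
                        }
                    } catch (Exception e) {
                        notifyUser(userEmail, "Procedure failed: " + e.getMessage());
                    }
                }
            });
            // The action then forwards straight to a "we'll email you" page.
        }

        private static void notifyUser(String email, String msg) {
            // Placeholder: send mail via JavaMail, write a status row, etc.
            System.out.println("To " + email + ": " + msg);
        }
    }

Since the app is Struts 1.x era, the sketch avoids lambdas and try-with-resources; the key point is only that the HTTP request returns before the procedure does.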

Related

Payara Docker with Oracle DB connection

I have a Java application running in a Payara Docker container which connects to an Oracle database with a connection pool. When a particular button in the UI is clicked, it calls a database stored procedure to display some rows as the result. When this button is triggered multiple times in a single user session, it returns duplicate results: the first time it returns 2 rows (expected), the second time 4 (2 expected results and 2 apparently from the previous run), then 6, and so on. I checked the database stored procedure, and it runs fine without duplication when run directly in the DB. Can you please help with this?
This looks to be a flaw in the UI or data layer. You can do the following:
Perform a client SQL*Net trace and check how many times the rows got shipped.
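For reference, a client-side SQL*Net trace is switched on in the client's sqlnet.ora; the directory below is a placeholder:

    # sqlnet.ora on the client machine
    TRACE_LEVEL_CLIENT = 16          # SUPPORT level, maximum detail
    TRACE_DIRECTORY_CLIENT = /tmp/net_trace
    TRACE_UNIQUE_CLIENT = ON         # one trace file per process

Each round trip that ships rows shows up in the trace, so repeated fetches coming from the pool or UI layer become visible there.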

How to trace error logs of a stored procedure in a PROD environment?

I am not an expert in Oracle DB, but I am curious to know how we can check the logs of a particular stored procedure when it gets executed.
I checked the trace folder, but I don't know how, or which file, I have to analyse.
When I checked the UNIX logs, they show a timeout error. It seems it did not get the response from one of the procedures. After 2-3 hours it gets processed, and sometimes it doesn't. It should have done that job in 30 minutes at most. I am not sure if the DB is the culprit or the web server (WAS).
In extreme cases I ask for a DB restart and a WAS restart, and this solves our problem.
Is it possible to trace the problem? I am in a PROD environment, and the behavior is not the same in the UAT or SIT environments.
Could this be a problem on the WAS or DB side? Please throw some light on this.
Thanks
I think what you want is DBMS_TRACE. You'll have to enable tracing in your session and execute the procedure manually.
If, by chance, this procedure is being executed by the Oracle scheduler, you may find some info in the alert log. I'd suggest checking that anyway.
If the procedure used to run in 30 minutes and now takes 2 hours to complete, and there were no changes to it, then the problem is not in the procedure.
I'd suggest you check for unusable indexes, redo log switches, blocking sessions, table locks, etc.; it's hard to say exactly without knowing the procedure. You say it's a prod environment, so the DBA must surely have some performance monitoring in place. If, by chance, you have Oracle Enterprise Manager, go and take a look at what is happening while the procedure is being executed.
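A minimal sketch of enabling DBMS_TRACE from a JDBC session, assuming the trace tables have been created (tracetab.sql) and using placeholder connection details and procedure name (MY_PROC):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PlsqlTraceDemo {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "appuser", "secret");
            Statement st = con.createStatement();
            // Trace every call and exception in this session
            st.execute("BEGIN DBMS_TRACE.SET_PLSQL_TRACE("
                    + "DBMS_TRACE.trace_all_calls + DBMS_TRACE.trace_all_exceptions); END;");

            CallableStatement cs = con.prepareCall("{call MY_PROC}");
            cs.execute(); // run the procedure under trace
            cs.close();

            st.execute("BEGIN DBMS_TRACE.CLEAR_PLSQL_TRACE; END;");
            st.close();
            con.close();
            // The results end up in the PLSQL_TRACE_RUNS / PLSQL_TRACE_EVENTS tables
        }
    }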

Stored Procedures Overwhelming Oracle.EXE On Oracle 11g On Windows

Until very recently we ran a 3rd party HR database on an Oracle Unix environment. I have additionally set up various web services that hit stored procedures to carry out a few bespoke processes for our users, and all ran well for years.
However, now that we have moved to Oracle on a Windows environment there is suddenly a big problem.
The best example I have is a VB.Net solution that reads a 2000-row CSV of employees into a DataTable, runs a couple of stored procedures to bring back Post Id etc., populates a database table with the results, then feeds it all back out into a new CSV. This process used to take 1-2 minutes to complete on Unix. It now takes well over 2 hours and kills the server!
The problem manifests by overwhelming the CPU on the database server. Any stored procedure call sends Oracle.EXE into overdrive, completely maxing out the CPU core that it's using, such that no other stored procedures can be run and everything grinds to a halt.
We have run Oracle Enterprise Manager, which suggested the creation of some indexes etc, but nothing will improve the issue. Like I say, the SQL ran fine and swiftly for years, and it hasn't changed at all.
Does anybody know what could be causing this? I am completely at a loss.
The way I see it, it must be either:
1. A CPU/hardware issue (but we have investigated, added extra cores, etc., to no avail);
2. An Oracle configuration issue; or
3. An issue with the 3rd-party database (which is supposedly identical to what it was on Unix).
Thanks to anyone who read this far.
P.S. I've had a Stack Overflow user account for years but can't get logged into it any more. Back to noobie status for me!

Database stopped when running 500 queries per second

I built a chat application in which the chat page is reloaded every second through AJAX,
and I used the DB2 Express-C database for storing messages.
One day 500 users used this app at the same time, and at that point the database stopped working.
Does running 500 queries at a time in one second have any effect on the database?
Please tell me how to run queries every second without affecting the database's functionality.
The red mark on the DB2 icon means that the instance stopped working. This issue could be related to a memory problem or something else.
You have to check the db2diag.log file and look for messages. It is highly probable that you have information from the time the instance stopped. The first failure data capture (FFDC) feature collects all of that information in the diag directory when a crash occurs.
In order to fix the problem, you just need to restart DB2. You can create a task that checks whether the instance is up and, if not, tries to restart it. However, this is the wrong way to keep DB2 up.
You should look at what happened at the time DB2 crashed. Probably the memory for the 500 agents was too high, and DB2 could not reserve more memory.
Are you running other processes on the same DB2 server? Probably one of them corrupted the DB2 memory.
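For what it's worth, a minimal sketch of such a watchdog task in Java; the JDBC URL, credentials and restart command are placeholders, and, as said above, this is a band-aid rather than a fix:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class Db2Watchdog {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    try {
                        Connection con = DriverManager.getConnection(
                                "jdbc:db2://dbhost:50000/CHATDB", "db2inst1", "secret");
                        boolean up = con.isValid(5); // 5-second probe
                        con.close();
                        if (up) return;              // instance is alive, nothing to do
                    } catch (Exception e) {
                        // connection failed: fall through to the restart attempt
                    }
                    try {
                        // Placeholder restart command for a Linux instance owner
                        new ProcessBuilder("su", "-", "db2inst1", "-c", "db2start")
                                .inheritIO().start().waitFor();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }, 0, 60, TimeUnit.SECONDS); // probe once a minute
        }
    }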

Oracle ALTER SESSION ADVISE COMMIT?

My app recovers automatically from failures. I test it as follows:
Start app
In the middle of processing, kill the application server host (shutdown -r -f)
On host reboot, application server restarts (as a windows service)
Application restarts
Application tries to process, but is blocked by an incomplete two-phase-commit transaction in the Oracle DB from the previous session.
Somewhere between 10 and 30 minutes later, the DB resolves the prior transaction and processing continues OK.
I need it to resume processing faster than this. My DBA advises that I should prefix my statement with
ALTER SESSION ADVISE COMMIT;
But he can't give me guarantees or details about the potential for data loss in doing this.
Luckily, the statement in question simply updates a datetime value to SYSDATE every second or so, so if there were some data corruption it would last less than a second before being overwritten.
But, to my question: what exactly does the statement above do? How does Oracle resolve data synchronisation issues when it is used?
Can you clarify the role of the 'local' and 'remote' databases in your scenario?
Generally a multi-db transaction does the following
Starts the transaction
Makes a change on one database
Makes a change on the other database
Gets the other database to 'promise to commit'
Commits locally
Gets the remote db to commit
In-doubt transactions happen if step 4 has completed and then something fails. The general practice is to get the remote database back up and confirm whether it committed. If so, step 5 goes ahead. If the remote component of the transaction can't be committed, the local component is rolled back.
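As an illustration, a single COMMIT over an Oracle database link drives exactly this sequence; the tables, the remote_db link and the connection details below are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DbLinkTxnDemo {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/LOCALDB", "appuser", "secret");
            con.setAutoCommit(false);
            Statement st = con.createStatement();
            // Change on the local database
            st.executeUpdate("UPDATE orders SET status = 'SHIPPED' WHERE id = 42");
            // Change on the remote database, reached via a database link
            st.executeUpdate("UPDATE inventory@remote_db SET qty = qty - 1 WHERE item_id = 42");
            // Oracle coordinates the prepare ('promise to commit') and the commit
            // on both databases; a crash between the two phases leaves an in-doubt
            // transaction, visible in DBA_2PC_PENDING
            con.commit();
            st.close();
            con.close();
        }
    }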
Your description seems to refer to an app-server failure, which is a different kettle of fish. In your case, I think the scenario is as follows:
App server takes a connection and starts a transaction
App server dies without committing
App server restarts and makes a new database connection
App server starts a new transaction on the new connection
New transaction gets 'stuck' waiting for a lock held by the old connection/transaction
After 20 minutes, the dead connection is terminated and its transaction rolled back
The new transaction then continues
In which case the solution is to kill off the old connection quicker, either with a shorter timeout (e.g. SQLNET.EXPIRE_TIME in the server's sqlnet.ora) or with a manual ALTER SYSTEM KILL SESSION.
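For reference, a sketch of both options; the 10-minute interval is just an example, and the sid/serial# values are placeholders looked up in V$SESSION:

    # Server-side sqlnet.ora: probe clients every 10 minutes so that dead
    # connections are detected and cleaned up
    SQLNET.EXPIRE_TIME = 10

    -- Identify the stale session, then kill it manually
    SELECT sid, serial# FROM v$session WHERE username = 'APP_USER';
    ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;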
