I have a job which picks records from a cursor one at a time and calls a stored procedure to process each record.
The stored procedure runs multiple queries to process a record. In all, the procedure takes about 0.3 seconds per record, but since the cursor contains more than 100k records, the job takes hours to complete.
The queries in the stored procedure are all already optimized.
I was thinking of making the procedure run in a multi-threaded way, as in Java and other programming languages.
Can this be done in Oracle? Or is there some other way I can reduce the run time of my job?
I agree with the comments regarding processing cursors in a loop. As Tom Kyte often said, "Row at a time [processing] is slow at a time"; Oracle performs best with set-based operations, and row-at-a-time operations usually have scalability issues (i.e. they are very susceptible to poor performance when things change on the DB, such as CPU capacity, workload, number of records that need processing, size of the underlying tables, ...).
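For illustration only, a minimal sketch of the difference (records_to_process, process_record and target_table are invented names; the real fix is to fold the procedure's per-row logic into set-based statements like the second form):

    -- Row-at-a-time: one PL/SQL call plus several queries per record
    BEGIN
      FOR r IN (SELECT id FROM records_to_process) LOOP
        process_record(r.id);  -- ~0.3 seconds each, 100k+ times over
      END LOOP;
    END;
    /

    -- Set-based: one statement touches every qualifying row in a single pass
    UPDATE target_table t
       SET t.status = 'PROCESSED'
     WHERE t.id IN (SELECT id FROM records_to_process);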
You probably already know that Oracle since 8i has had a Java VM built into the DB engine, so you might be able to have Java code wrapped as PL/SQL, but this is not for the faint of heart [not saying that you are, just sayin'].
Before going to the trouble of rewriting your application, I would recommend the following tuning approach, as it may yield some actionable tunings [it assumes Diagnostics and Tuning Pack licenses; it won't remove the scalability issues, but it may lessen their impact]:
In Oracle 11g and above:
Find the top-level SQL id recorded in gv$active_session_history and dba_hist_active_sess_history for the call to the PL/SQL procedure (see the sketch after this list).
Examine the wait events for the sql_ids under that top_level_sql_id; they tell you what the SQL is waiting on.
Run the tuning advisor on those sql_ids and check for any tuning recommendations. Even if a SQL statement is already sub-second, getting it from hundredths of a second to thousandths of a second can have a big impact when it is called many times.
Run the ADDM report for the period when the procedure is running. Often you will find that heavy PL/SQL processes require an increase in PGA. Further, ADDM may advise other relevant actions (e.g. increase SGA, session cached cursors, DB writer processes, log buffer; run the segment tuning advisor; ...).
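A rough sketch of the first three steps (Diagnostics and Tuning Pack licenses assumed; the time window, the sql_id and the task name are placeholders):

    -- Steps 1 and 2: top-level SQL id for the PL/SQL call, with wait events
    -- of the statements running under it (last hour of ASH as an example).
    SELECT top_level_sql_id,
           sql_id,
           NVL(event, 'ON CPU') AS event,
           COUNT(*)             AS ash_samples
      FROM gv$active_session_history
     WHERE sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR
     GROUP BY top_level_sql_id, sql_id, NVL(event, 'ON CPU')
     ORDER BY ash_samples DESC;

    -- Step 3: run the tuning advisor on an expensive child sql_id
    -- ('abc123xyz' is a placeholder).
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abc123xyz');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
      DBMS_OUTPUT.PUT_LINE('Task: ' || l_task);
    END;
    /

    -- Read the recommendations (substitute the task name printed above).
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_NAME_HERE') FROM dual;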
We have a daily batch job executing an Oracle PL/SQL function. Actually the Quartz scheduler invokes a Java program which makes a call to the PL/SQL function. This function deletes data older than 6 months from 4 tables and then commits the transaction.
This batch job was running successfully in the test environment but started failing when new data was loaded into the tables 2 weeks ago (the code is supposed to go into production this week). Earlier the number of rows in each table was not more than 0.1 million; now it is 1 million in 3 tables and 2.4 million in the other.
After running for 3 hours, we get an error in Java (written in the log file): "...Connection reset; nested exception is java.sql.SQLException: Io exception: Connection reset....". When the row counts on the tables were checked, it was clear that no record had been deleted from any of the tables.
Is it possible in an Oracle database for the PL/SQL procedure/function to be automatically terminated/killed when the connection times out and the invoking session is no longer active?
Thanks in advance,
Pradeep.
The PL/SQL won't terminate because it is inactive, since by definition it isn't - it is still doing something. It won't be generating any network traffic back to your client though.
It appears something at the network level is causing the connection to be terminated. This could be a listener timeout, a firewall timeout, or something else. If it's consistently after three hours then it will almost certainly be a timeout configured somewhere rather than a network glitch, which would be more random (and possibly recoverable).
When the network connection is interrupted, Oracle will notice at some point and terminate the session. That will cause the PL/SQL call to be terminated, and that will cause any work it has done to be rolled back, which may take a while.
3 hours seems a long time for your deletes though, even for a few million records. Perhaps you're deleting inefficiently, with individual row-by-row deletes inside your procedure. Which doesn't really help you with the disconnects, of course. It might be worth pointing out that your production environment might not have whatever setting is killing your connection, or might have a shorter timeout, so even reducing the runtime might not make it bullet-proof in live. You probably need to find the source of the timeout and check the equivalent in the live environment to try to pre-empt similar problems there.
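For what it's worth, the efficient shape is usually one set-based DELETE per table rather than a row-by-row loop; a sketch with invented table and column names:

    DELETE FROM audit_history
     WHERE created_date < ADD_MONTHS(TRUNC(SYSDATE), -6);  -- everything older than 6 months

And if this purge will recur at this scale, range-partitioning the tables by date and dropping old partitions is usually far faster than any DELETE.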
I have a stored procedure that returns about 50,000 records in 10 seconds, using at most 2 cores, in SSMS. The SSRS report using the stored procedure was taking 20 minutes and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs or calculations). The report did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display the data in a few seconds.

I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?
I'd start with a SQL Profiler trace of both the stored procedure when you execute it normally, and then the same SP when it's called by SSRS. Make sure you include the execution plans involved, so you can see if it's making some bad decisions (though that seems unlikely - the SQL Server should execute an optimal - or at least consistent - plan regardless of the query's source).
We used to have cases where Business Objects would execute stored procs dozens of times for no apparent reason, and it led to occasionally horrible performance, though I've never seen that same behavior with SSRS. It may be somewhere to start, though. You'll also see the execution begin/end times - that will make it clear whether it's the database layer that's hanging up, or whether SQL Server hands back the data in 10 seconds and then it's the SSRS service that's choking somewhere.
The primary solution to speeding up SSRS reports is to cache the reports. If one does this (either by preloading the cache at 7:30 am, for instance, or by caching the reports on first hit), one will find massive gains in load speed.
You may also find that monthly restarts of the SSRS application domain resolve your issue.
Please note that I do this daily and professionally and am not simply waxing poetic on SSRS.
Caching in SSRS
http://msdn.microsoft.com/en-us/library/ms155927.aspx
Pre-loading the Cache
http://msdn.microsoft.com/en-us/library/ms155876.aspx
If you do not like initial reports taking long and your data is relatively static over the day (a daily general ledger or the like), you may increase the cache lifespan.
Finally, you may also opt for business managers to instead receive these reports via email subscriptions, which will send them a point-in-time Excel report which they may find easier and more systematic.
You can also use parameters in SSRS to allow for easy filtering by the user and faster queries. In the query builder, type IN(@SSN) under the Filter column for the field you wish to parameterize; you will then find the parameter created in the Parameters folder just above Data Sources in the upper left of the BIDS GUI.
(If you do not see the Data Sources section in SSRS, hit CTRL+ALT+D.)
See a nearly identical question here: Performance Issues with SSRS
We use getMetaData() on every cursor returned from the Oracle stored procedure call.
With ojdbc5 we don't see a spike in the number of metadata SQLs executed or in their average time, but with ojdbc6 we see a spike in the number of metadata SQLs executed and an increase in average SQL execution time.
Is anyone aware of this issue with ojdbc6? I wish they had made it open source.
Did anyone at least try decompiling the ojdbc6 jar?
The problem is with the way Spring's SimpleJdbcCall works: it fetches the metadata of the procedure and its arguments on every call. Even if it is not going to cache that metadata by default, there should at least have been a setting to enable or disable caching of the metadata when using SimpleJdbcCall.
When using SimpleJdbcCall, beware of the metadata contention this causes. If your app makes many PL/SQL procedure invocations, Oracle can hit latch contention and the overall app slows down behind this bottleneck; servers can even crash, rendering the app non-responsive. We added a small metadata cache by diving into the Spring code, with a flag to enable/disable it, and it now works faster than ever.
I have an SSRS report that calls out to a stored procedure. If I run the stored procedure directly from a query window, it returns in under 2 seconds. However, the same query run from an SSRS 2005 report takes up to 5 minutes to complete. This is not just happening on the first run; it happens every time. Additionally, I don't see this same problem in other environments.
Any ideas on why the SSRS report would run so slow in this particular environment?
Thanks for the suggestions provided here. We have found a solution, and it did turn out to be related to the parameters. SQL Server was producing a convoluted execution plan when the procedure was executed from the SSRS report, due to 'parameter sniffing'. The workaround was to declare variables inside the stored procedure and assign the incoming parameters to them; the query then used the variables rather than the parameters. This made the query perform consistently whether called from SQL Server Management Studio or through the SSRS report.
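For anyone who wants the shape of that workaround, here is a minimal sketch (the procedure, table and column names are invented):

    CREATE PROCEDURE dbo.GetReportData
        @StartDate datetime
    AS
    BEGIN
        -- Copy the incoming parameter into a local variable so the optimizer
        -- compiles against the variable (using average density) instead of
        -- sniffing the specific value passed on first execution.
        DECLARE @LocalStartDate datetime;
        SET @LocalStartDate = @StartDate;

        SELECT *
          FROM dbo.ReportData
         WHERE CreatedOn >= @LocalStartDate;
    END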
I will add that I had the same problem with a non-stored procedure query - just a plain select statement. To fix it, I declared a variable within the dataset SQL statement and set it equal to the SSRS parameter.
What an annoying workaround! Still, thank you all for getting me close to the answer!
Add this to the end of the query in your proc: OPTION (RECOMPILE)
This will make the report run almost as fast as the stored procedure.
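To be precise, OPTION (RECOMPILE) is a statement-level hint, so it goes at the end of the query inside the proc; a sketch with invented names:

    SELECT *
      FROM dbo.ReportData
     WHERE CreatedOn >= @StartDate
    OPTION (RECOMPILE);  -- compile a fresh plan each execution, so a plan
                         -- sniffed for one parameter value is never reused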
I had the same problem; here is my description of it:
"I created a stored procedure which would generate 2,200 rows and would execute in almost 2 seconds; however, after calling the stored procedure from SSRS 2008 and running the report, it never finished and ultimately I had to kill BIDS (Business Intelligence Development Studio) from Task Manager."
What I tried: I ran the SP under the reportuser login, but the SP ran normally for that user as well; I checked Profiler, but nothing worked out.
Solution:
Actually the problem is that even though the SP generates the result quickly, the SSRS engine takes time to read that many rows and render them back.
So I added the WITH RECOMPILE option to the SP and ran the report; that is when the miracle happened and my problem got resolved.
I had the same scenario occurring. A very basic report: the SP (which only takes 1 param) was taking 5 seconds to bring back 10K records, yet the report would take 6 minutes to run. According to Profiler and the RS ExecutionLogStorage table, the report was spending all its time on the query. Brian S.'s comment led me to the solution: I simply added WITH RECOMPILE before the AS statement in the SP, and now the report time pretty much matches the SP execution time.
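For reference, the option sits between the parameter list and AS (a sketch; names are invented):

    CREATE PROCEDURE dbo.MyReportProc
        @Param1 int
    WITH RECOMPILE   -- recompile the whole procedure on every call
    AS
    BEGIN
        SELECT *
          FROM dbo.ReportData
         WHERE SomeColumn = @Param1;
    END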
I simply deselected 'Repeat header columns on each page' within the Tablix Properties.
If your stored procedure uses linked servers or openquery, they may run quickly by themselves but take a long time to render in SSRS. Some general suggestions:
Retrieve the data directly from the server where the data is stored by using a different data source instead of using the linked server to retrieve the data.
Load the data from the remote server to a local table prior to executing the report, keeping the report query simple.
Use a table variable to first retrieve the data from the remote server and then join with your local tables, instead of directly returning a join with a linked server (see the sketch below).
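A sketch of that last approach (the linked server REMOTESRV and all table/column names are invented):

    -- Pull the remote rows into a local table variable once...
    DECLARE @Remote TABLE (CustomerID int, CustomerName nvarchar(100));

    INSERT INTO @Remote (CustomerID, CustomerName)
    SELECT CustomerID, CustomerName
      FROM REMOTESRV.RemoteDb.dbo.Customers;

    -- ...then join locally, so the report never waits on a distributed join
    SELECT o.OrderID, r.CustomerName
      FROM dbo.Orders o
      JOIN @Remote r ON r.CustomerID = o.CustomerID;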
I see that the question has been answered; I'm just adding this in case someone else has the same issue.
I had trouble with HTML output on a report retrieving 32,000 rows. The query ran fast, but the output into the web browser was very slow. In my case I had to activate “Interactive Paging” to let the user see the first page and be able to generate an Excel file. The pro of this solution is that the first page appears quickly and the user can export to Excel or PDF; the con is that the user can scroll only the current page. If the user wants to see more content, he or she must use the navigation buttons above the grid. In my case users accepted this behavior because the export to Excel was more important.
To activate “Interactive Paging” you must click on the free area in the report pane and change the “InteractiveSize”\“Height” property at the report level in the Properties pane. Set this property to something other than 0; I set it to 8.5 inches in my case. Also ensure that you uncheck the “Keep together on one page if possible” property at the Tablix level (right-click on the Tablix, then “Tablix Properties”, then “General”\“Page Break Options”).
I came across a similar issue where my stored procedure executed quickly from Management Studio but very slowly from SSRS. After a long struggle I solved the issue by dropping the stored procedure and recreating it. I am not sure of the logic behind it, but I assume it was because of a change in the structure of a table used by the stored procedure.
I faced the same issue. For me the fix was just to uncheck the option:
Tablix Properties => Page Break Options => Keep together on one page if possible
in the SSRS report. It was trying to put all records on the same page instead of creating many pages.
Aside from the parameter-sniffing issue, I've found that SSRS is generally slower at client-side processing than (in my case) Crystal Reports. The SSRS engine just doesn't seem as capable when it has a lot of rows to locally filter or aggregate. Granted, these are result-set design problems which can frequently be addressed (though not always, if the details are required for drilldown), but the more, um, mature reporting engine is more forgiving.
In my case, I just had to disconnect and reconnect in SSMS. I profiled the query and the duration of execution showed 1 minute, even though the query itself runs in under 2 seconds. I restarted the connection and ran it again; this time the duration showed the correct execution time.
I was able to solve this by removing the [&TotalPages] built-in field from the bottom of the report. The time went down from minutes to less than a second.
Something odd that I could not pin down was having an impact on the calculation of total pages.
I was using SSRS 2012.
A couple of things you can do: without executing the actual report, just run the sproc from within the Data tab of Reporting Services. Does it still take time?
Another option is to use SQL Profiler and determine what is coming in and out of the database system.
Another thing you can do to test it is to recreate a simple report without any parameters. Run the report and see if it makes a difference. It could be that your RS report is corrupted or badly formed, which may cause the rendering to be really slow.
Had the same problem, and fixed it by giving the shared dataset a default parameter and updating that dataset in the reporting server.
Do you use "group by" in the SSRS table?
I had a report with 3 grouped-by fields and I noticed that the report ran very slowly despite having a light query, to the point where I couldn't even enter values in the search field.
Then I removed the groupings, and now the report comes up in seconds and everything works in an instant.
In our case, no code was required.
Note from our Help Desk: "Clearing out your Internet Setting will fix this problem."
Maybe that means "clear cache."