In my team, we need to connect to Oracle, Sybase and MSSQL very frequently... We use Oracle's SQLDeveloper 3.3.2 to connect to all three (using third-party libs). This tool often has a problem where SELECT queries never end... Even after we get the results, the query keeps on running... And because of this we receive database alerts for long-running queries...
E.g.
Select * from products
If products has a million records, then SQLDeveloper will show the top records, but in the background the query will keep on running.
How can this problem be solved?
Or is there a better product that can fulfill our need?
Your query - select * from products - is asking the database engine to send millions of records to your client application (SQLDeveloper in this case).
While SQLDeveloper (and many other GUIs of a similar design) will show you the first 30 (or 50, or 100, etc.) rows, as far as the database engine is concerned you're still asking to see millions of rows, hence your query continues to 'run' in the database engine.
For example, in Sybase ASE the query will show up with a status of 'send sleep' meaning the database engine is waiting for the client application to request the next batch of records to send down the connection.
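For example, ASE's sp_who makes these connections easy to spot:

-- Sybase ASE: list current connections; a query waiting on the client
-- to fetch more rows shows a status of 'send sleep'
exec sp_who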
To 'solve' this issue you have a few options:
- Using SQLDeveloper, scroll through (i.e., display on your monitor) the rest of the multi-million-row result set [likely not what you want to do; you probably don't have the time/desire to hit the 'Next' button hundreds of thousands of times]
- Kill off your query after you've received/viewed the first set of records [not recommended, as there will likely be times when you 'forget' to kill off your query, thus earning the wrath of your DBA]
- Write your query to pull back only the records you REALLY want/need to see (e.g., add a WHERE clause or a row limit; see the sketch after this list)
- See if SQLDeveloper has any sort of configuration option to auto-kill any 'long running' queries [I have no idea if this is even doable in a client application]
- See if the DBA can configure your login with a resource limit (e.g., auto-kill queries if they run for more than XX seconds)
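For the third option, note that each of your three engines has its own row-limiting syntax; a quick sketch against the products table from the question:

-- Oracle: limit via ROWNUM (or FETCH FIRST 100 ROWS ONLY on 12c+)
SELECT * FROM products WHERE ROWNUM <= 100;

-- Sybase ASE: cap rows returned for the session, then reset the cap
SET ROWCOUNT 100
SELECT * FROM products
SET ROWCOUNT 0

-- MSSQL: TOP
SELECT TOP 100 * FROM products;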
Related
I have an interactive report (IR) in one of my APEX applications. The SQL query used in the IR runs fine when executed in SQL Developer.
But at times in the application it gets stuck and takes more time than usual to load the IR (usually it takes less than 5 seconds to load, but at times more than 50 seconds).
What might be the possible reasons for it to load slowly?
The query is well tuned and the IR has default settings with no modifications. I have also checked the stats on the tables and they are fresh.
The SQL query used in the IR fetches 10k records.
If you go into Component View and then click the Interactive Report under Regions, there is a setting near the bottom, under the Performance heading, called Maximum Rows To Process. Limiting the number of rows to display also sped things up for me.
Sorry, but I can't write comments. Is there any database view in your query?
I had a similar situation where a query against a database view with 6 million records took around 3 minutes to complete in an Oracle APEX IR but only 10-15 seconds in SQL Developer. So after some research I tried putting the SQL from the view directly into the IR, as sketched below, and the result was almost the same as in SQL Developer.
You can also remove pagination from the IR, or change it from "x to y from z" to just "x to y".
I hope this can help you.
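To make that concrete, here is a sketch of inlining a view's SQL into the IR region source, with hypothetical table and view names:

-- IR region source using the view (slow in this case):
SELECT id, name, account_no FROM clients_v;

-- IR region source with the view's defining query pasted in directly:
SELECT c.id, c.name, a.account_no
FROM clients c
JOIN accounts a ON a.id = c.account_id;

A plausible explanation is that APEX wraps the region source in its own outer query for filtering and pagination, and a complex view can keep the optimizer from merging those predicates into the underlying query.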
Query response time in SQL Developer versus any Web browser cannot be compared directly. Some of the reasons for the sluggishness could be related to server setup, server load, current user traffic, page load processes, page and region rendering, the number of regions, components and plugins, the navigation menu query, the report query, the number of columns and rows being displayed, row content length, APEX items (especially LOVs with SQL queries), etc.
From your question, it looks like the performance issue is not consistent, so I think it may be related to server setup or traffic. Check whether you see any difference in load time after bouncing the server, if that's an option. Try to isolate the problem: if the issue is specific to the interactive report, build a classic report and compare load times.
Another thing that has helped me in the past is to compare and verify compute times using the APEX Debugger.
Also look at the Network and Timeline tabs in Chrome's developer tools.
Implement indexes on your tables (a sketch follows below)
Verify with your DBA whether you have database locks
Verify the amount of logging in the database
Switch to classic reports
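A minimal sketch of the first two items, using hypothetical table and column names; the lock check is Oracle-specific and needs access to V$SESSION:

-- Index the columns your report filters and joins on
CREATE INDEX orders_customer_ix ON orders (customer_id);

-- Find sessions currently blocked by another session's lock
SELECT sid, serial#, blocking_session, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;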
Regards
I have a query CREATE TABLE foobar AS SELECT ... that runs successfully in Hue (the returned status is Inserted 986571 row(s)) and takes a couple seconds to complete. However, in Cloudera Manager its status - after more than 10 minutes - still says Executing.
Is it a bug in Cloudera Manager or is this query actually still running?
When Hue executes a query, it leaves the query open so that users can page through results at their own pace. (Of course, this behavior isn't very useful for DDL statements.) That means Impala still considers the query to be executing, even if it is not actively using CPU cycles (keep in mind it is still holding memory!). Hue will close the query when explicitly told to or when the page/session is closed, e.g. via the hue command:
> build/env/bin/hue close_queries --help
Note that Impala has a query option to automatically 'timeout' queries after a period of time, see query_timeout_s. Hue sets this to 10 minutes by default, but you can override it in the hue.ini settings.
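For reference, a sketch of the hue.ini override; I'm assuming the option lives in the [impala] section, so verify against your Hue version:

# hue.ini -- idle query timeout for Impala, in seconds (0 disables it)
[impala]
query_timeout_s=600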
One thing to note is that when queries 'time out', they are cancelled but not closed, i.e. the query will remain "in flight" with a CANCELLED status. The reason for this is so that users (or tools) can continue to observe the query metadata (e.g. query profile, status, etc.), which would not be available if the query is fully closed and thus deregistered from the impalad. Unfortunately these cancelled queries may still hold some non-negligible resources, but this will be fixed with IMPALA-1575.
More information: Hive and Impala queries life cycle
In an application I have 2000 Accounts. The first Account contains 10000 Clients, which is the maximum limit for an Account. Users can select Clients from the first Account and then select some Accounts to copy the selected Clients to. So the possible maximums are 1999 Accounts and 10000 Clients.
Currently I'm looping over the Account list and calling a stored procedure in each iteration from the client application. With each iteration, an Account Id and a table-valued parameter containing the list of Client Ids are passed to the SP. While testing with 500 Accounts and 10000 Clients, it takes 25 minutes, 34 seconds and 543 milliseconds. At some point within the SP I'm using the following code -
INSERT INTO Client
SELECT AccountId, CId, Code, Name, Email FROM Client
WHERE Client.Id IN(SELECT Id FROM #clientIdList)
where #clientIdList is the table-valued parameter that contains the 10000 Client Ids.
The thing is, after each iteration 10000 new Client rows have been added to the Client table. So with each iteration, this INSERT operation takes longer than in the one before. Googling for SP performance tips, I came to learn that the IN clause is considered somewhat evil, and most people suggest using an INNER JOIN instead. So I changed the above code to -
INSERT INTO Client
SELECT c.AccountId, c.CId, c.Code, c.Name, c.Email FROM Client AS c
INNER JOIN #clientIdList AS cil
ON c.Id = cil.Id
Now it takes 25 minutes, 17 seconds and 407 milliseconds. Nothing exciting, really!
I'm new to the stored procedure arena. So, with this amount of data, is it supposed to take this long? And which one should I prefer for the given scenario, IN or INNER JOIN? Suggestions and performance tips are welcome. Thanks.
It's hard to tell exactly what is going on without knowing more about your stored procedure.
What I would recommend is checking the execution plan. To do this, open up SQL Server Management Studio. In a new query window, make a call to your stored procedure, passing in any relevant parameters.
Before you execute this, go up to the Query menu and choose the Include Client Statistics and Include Actual Execution Plan menu items.
Now run your query.
Come back in 25 minutes when it's all done and there should be 3 or 4 tabs at the bottom (depending on whether it returns data or not): one tab for results, one for messages, one for the client statistics and one for the execution plan.
The client stats tab is useful for seeing if the changes you make affect performance (it keeps several of your last runs to show you how it performed over those.)
The more interesting tab is the execution plan tab. In my case, the plan showed that the query was able to use the primary key index on all my tables. You want to look out for whole-table scans, because that means the query is not using an index. Also, if my query hadn't been so simple, had taken a long time, and had not used an index, then below "Query 1" there would be green text stating "Missing Index" or something along those lines, telling you the index you need to create to improve performance.
Also notice it tells you how long each query took, as a percentage. I had one query, so obviously it took 100% of the time. But if you had 5 queries in your sproc and one took 80% of the time, you might want to check that one out first.
It also tells you how much time was spent on each part of the query, again in percentages. That can be helpful for understanding what your query is doing.
Run through this and I'd guess you'll find other problems with your sproc, and you can ask follow-up questions from there.
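If you prefer numbers to pictures, the client statistics also have a server-side counterpart; a sketch (the procedure name and parameter are made up):

-- Print per-statement I/O and timing details to the Messages tab
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.CopyClientsToAccount @AccountId = 42;  -- hypothetical call

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;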
I have a JEE application searching a large Oracle database for data. The application uses JDBC to query the database.
The issue I am having is that the results page is unable to be displayed. I get the following error:
The connection to the server was reset while the page was loading.
This happens after 60 seconds. When I run the sql query manually using a SQL client, the results return in 3 seconds.
I have checked the logs and there are no exceptions that I can see.
Does anyone know the best way to find out what is causing the connection to be reset? If I break my search date range in two and search each range individually, both return results. So it seems that it's the larger result set causing the issue.
Any help is welcome.
You are probably right about the larger result set. Often when running a query from a SQL client, you'll get the first set of records right away; if you page down to force a pull of all the records, then it bogs down. Perhaps you're hitting the same issue with the JDBC client, where it takes more than 60 seconds to get all the rows. I've not done JDBC in a while, but can you get it to stream the result set?
Regards,
Roger
I am trying to understand the performance of a query that I've written in Oracle. At this time I only have access to SQLDeveloper and its execution timer. I can run SHOW PLAN but cannot use the auto trace function.
The query that I've written runs in about 1.8 seconds when I press "execute query" (F9) in SQLDeveloper. I know that this is only fetching the first fifty rows by default, but can I at least be certain that the 1.8 seconds encompasses the total execution time plus the time to deliver the first 50 rows to my client?
When I wrap this query in a stored procedure (returning the results via an OUT REF CURSOR) and try to use it from an external application (SQL Server Reporting Services), the query takes over one minute to run. I get similar performance when I press "run script" (F5) in SQLDeveloper. It seems that the difference is that in these two scenarios Oracle has to transmit all of the rows back rather than just the first 50. This leads me to believe that there are network connectivity issues between the client PC and the Oracle instance.
My query only returns about 8000 rows so this performance is surprising. To try to prove my theory above about the latency, I ran some code like this in SQLDeveloper:
declare
tmp sys_refcursor;
begin
my_proc(null, null, null, tmp);
end;
...And this runs in about two seconds. Again, does SQLDeveloper's execution clock accurately indicate the execution time of the query? Or am I missing something and is it possible that it is in fact my query which needs tuning?
Can anybody please offer me any insight on this based on the limited tools I have available? Or should I try to involve the DBA to do some further analysis?
"I know that this is only fetching the
first fifty rows by default, but can I
at least be certain that the 1.8
seconds encompasses the total
execution time plus the time to
deliver the first 50 rows to my
client?"
No, it is the time to return the first 50 rows. It doesn't necessarily mean that the database has determined the entire result set.
Think about the table as an encyclopedia. If you want a list of animals with names beginning with 'A' or 'Z', you'll probably get Aardvarks and Alligators pretty quickly. It will take much longer to get Zebras as you'd have to read the entire book. If your query is doing a full table scan, it won't complete until it has read the entire table (or book), even if there is nothing to be picked up in anything after the first chapter (because it doesn't know there isn't anything important in there until it has read it).
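The same idea in SQL, with hypothetical names: a predicate that can use an index starts returning rows almost immediately, while one that forces a full scan cannot finish until the whole table has been read:

CREATE INDEX animals_name_ix ON animals (name);

-- Index range scan: the first matches stream back right away
SELECT * FROM animals WHERE name LIKE 'A%';

-- The function call defeats the plain index, forcing a full table scan,
-- so the last row can't be returned until the whole 'book' is read
SELECT * FROM animals WHERE UPPER(name) LIKE 'Z%';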
declare
tmp sys_refcursor;
begin
my_proc(null, null, null, tmp);
end;
This piece of code does nothing. More specifically, it will parse the query to determine that the necessary tables, columns and privileges are in place. It will not actually execute the query or determine whether any rows meet the filter criteria.
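To time the full execution from SQLDeveloper, you would have to actually fetch every row from the cursor; a sketch, assuming (hypothetically) that the cursor's columns match a table called my_table:

declare
  tmp sys_refcursor;
  rec my_table%rowtype;  -- hypothetical record matching the cursor's columns
begin
  my_proc(null, null, null, tmp);
  loop
    fetch tmp into rec;  -- forces the database to produce each row
    exit when tmp%notfound;
  end loop;
  close tmp;
end;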
If the query only returns 8000 rows it is unlikely that the network is a significant problem (unless they are very big rows).
Ask your DBA for a quick tutorial in performance tuning.