I have a query that retrieves a large amount of data.
<cfsetting requesttimeout="9999999" >
<cfquery name="randomething" datasource="ds" timeout="9999999" >
SELECT
col1,
col2
FROM
table
</cfquery>
<cfdump var="#randomething.recordCount#" /> <!---should be about 5 million rows --->
I can successfully retrieve the data with Python's cx_Oracle; calling sys.getsizeof on the resulting Python list returns 22621060, so about 21 megabytes.
ColdFusion does not return an error on the page, and I can't find anything in any of the logs. Why is cfdump not showing the number of rows?
Additional Information
The reason for doing it this way is that I have about 8000 smaller queries to run against the randomething query. In other words, when I run those 8000 queries against the database it takes hours for that process to complete. I suspect this is because I am competing with several other database users, and the database is getting bogged down.
The 8000 smaller queries are getting counts of col1 over a period of col2.
SELECT
count(col1) AS cnt
FROM
table
WHERE
col2 < 20121109
AND
col2 > 20121108
Following Adam Cameron's suggestions:
cflog is suggesting that the query isn't finishing.
I tried changing the query's timeout both in the code and in the CFIDE administrator; apparently CF9 no longer respects the timeout attribute, and regardless of what I tried I couldn't get the query to time out.
I also started playing around with the maxrows attribute to see if I could discern any information that way.
When maxrows is set to 1300000, everything works fine.
When maxrows is 1400000 or greater, I get this error.
When maxrows is 2000000, I observe my original problem.
Update
So this isn't a limit of cfquery. By using QueryNew and then looping over it to add data, I can get well past the 2 million mark without any problems.
I also created a ThinClient datasource using the information in this question, I didn't observe any change in behavior.
The messages on the database end are
SQL*Net message from client
and
SQL*Net more data to client
I just discovered that by using the thin client along with blockfactor="100" I can retrieve more rows (approx. 3,000,000).
Is there anything logged on the DB end of things?
I wonder if the timeout is not being respected, and JDBC is "hanging up" on the DB whilst it's working. That's a wild guess. What if you set a very low timeout - eg: 5sec - does it error after 5sec, or what?
The browser could be timing out too. What say you write something to a log before and after the <cfquery> block, with <cflog>, to see if the query is eventually finishing.
I have to wonder what it is you intend to do with these 22M records once you get them back to CF. Whatever it is, it sounds to me like CF is the wrong place to be doing whatever it is: CF ain't for heavy data processing, it's for making web pages. If you need to process 22M records, I suspect you should be doing it on the database. That said, I'm second-guessing what you're doing with no info to go on, so I presume there's probably a good reason to be doing it.
Have you tried wrapping your cfquery within cftry tags to see if that reports anything?
<cfsetting requesttimeout="600" >
<cftry>
<cfquery name="randomething" datasource="ds" timeout="590" >
SELECT
col1,
col2
FROM
table
</cfquery>
<cfdump var="#randomething.recordCount#" /> <!--- should be about 5 million rows --->
<cfcatch type="any">
<cfdump var="#cfcatch#">
</cfcatch>
</cftry>
This is just an idea, but you could give it a go:
You mention that using QueryNew you can successfully add the more-than-two-million records you need.
Also that when your maxRows is less than 1,300,000 things work as expected.
So why not first do a query to count(*) the total number of records in the table, divide by a million and round up, then cfloop over that number, executing a query with maxRows=1000000 and startRow=(((i - 1) * 1000000) + 1) on each iteration...
ArrayAppend each query from within the loop to an array then when it's all done, loop over your array pushing the records into a new Query object. That way you end up with a query at the end containing all the records you were trying to retrieve.
You might hit memory issues, and it will not perform all that well, but hey - this is ColdFusion; those are par for the course, and sometimes crazy things happen / work.
(You could always append the results of each query to the one you're building up from QueryNew as you go, rather than pushing each query onto an array, but it'll be easier to debug, and to see how far you got if it doesn't work, if you build an array as you go.)
(Also, by using multiple queries each within the size that CF can handle, you may then be able to execute the process you need by looping over the array and then over each query, rather than building up one massive query - that would save processing time and memory, but it depends on whether you need the full result set in a single Query object or not.)
If your date ranges are consistent, I would suggest some aggregate functions in SQL instead of having CF process it. Something like:
select col1, count(col1), year(col2), month(col2)
from table
group by col1, year(col2), month(col2)
order by col1, year(col2), month(col2)
Add day() if you need that detail level, too. You can get really creative with date parts.
This should greatly speed up the entire run time and reduce the main query size.
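Since the earlier WHERE clause compares col2 against values like 20121109, col2 may well be a numeric yyyymmdd value rather than a DATE; assuming that (adjust if col2 is a real DATE column), a rough Oracle-flavoured sketch of the same aggregate idea would be:
SELECT
col1,
TRUNC(col2 / 100) AS yyyymm,   -- year+month portion of a numeric yyyymmdd value
COUNT(col1) AS cnt
FROM
table
GROUP BY
col1,
TRUNC(col2 / 100)
ORDER BY
col1,
yyyymm
One row per col1 and month comes back instead of millions of raw rows, so the 8000 per-range count queries collapse into a single pass over the table.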
Your problem here is that ColdFusion cannot time out SQL. This has been an issue since CF6, I believe. So basically what is happening is that the cfquery is running longer than the request timeout allows, but CF cannot time out JDBC, so it waits until the query finishes and then tries to run cfdump (which internally uses cfoutput), and that is what gets reported as timing out, because the request is now considered to have run too long.
As Adam pointed out, whatever you are trying to do is too large for CF to realistically handle and will either need to be chopped up into smaller jobs or entirely handled in the DB.
So, as it turns out, the server was running out of memory; apparently a cfquery result takes up quite a bit more memory than a Python list.
It was Barry's comment that got me going in the right direction; up until this point I didn't know much about the Server Monitor other than the fact that it existed.
As it turns out, I am also not very good at reading: the errors that were getting logged in the application.log file were
GC overhead limit exceeded The specific sequence of files included or processed is: \path\to\index.cfm, line: 10
and
Java heap space The specific sequence of files included or processed is: \path\to\index.cfm
I'll end up going with Adam's suggestion and let the database do the processing. At least now I'll be able to explain why things are slow instead of just saying, "I don't know".
Related
I am writing some data loading code that pulls data from a large, slow table in an Oracle database. I have read-only access to the data, and do not have the ability to change indexes or affect the speed of the query in any way.
My select statement takes 5 minutes to execute and returns around 300,000 rows. The system is inserting large batches of new records constantly, and I need to make sure I get every last one, so I need to save a timestamp for the last time I downloaded the data.
My question is: If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?
My gut tells me that the answer is 'no', especially since a large portion of those 5 minutes is just the time spent on the data transfer from the database to the local environment, but I can't find any direct documentation on the scenario.
"If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?"
No. Oracle enforces strict isolation levels and does not permit dirty reads.
The default isolation level is Read Committed. This means the result set you get after five minutes will be identical to the one you would have got if Oracle could have delivered you all the records in 0.0000001 seconds. Anything committed after you query started running will not be included in the results. That includes updates to the records as well as inserts.
Oracle does this by tracking changes to the table in the UNDO tablespace. Provided it can reconstruct the original image from that data, your query will run to completion; if for any reason the undo information is overwritten, your query will fail with the dreaded ORA-01555: snapshot too old. That's right: Oracle would rather hurl an exception than provide us with an inconsistent result set.
Note that this consistency applies at the statement level. If we run the same query twice within the one transaction we may see two different result sets. If that is a problem (I think not in your case) we need to switch from Read Committed to Serializable isolation.
The Concepts Manual covers Concurrency and Consistency in great depth. Find out more.
So to answer your question, take the timestamp from the time you start the select. Specifically, take the max(created_ts) from the table before you kick off the query. This should protect you from the gap Alex mentions (if records are not committed the moment they are inserted, there is the potential to lose records if you base the select on comparing with the system timestamp). Although doing this means you're issuing two queries in the same transaction, which means you do need Serializable isolation after all!
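A minimal sketch of that approach; my_table and the :last_run_ts bind are placeholder names, and created_ts is the creation-timestamp column mentioned above:
-- must be the first statement of the transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- remember the high-water mark before kicking off the big select
SELECT MAX(created_ts) AS high_water_mark
FROM my_table;

-- pull everything committed since the previous run
SELECT *
FROM my_table
WHERE created_ts > :last_run_ts;

COMMIT;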
I run a complex query against an Oracle DB 11g based eBS R12 schema.
The first run takes 4 seconds. If I run it again, it takes 9, the next 30, and so on.
If I add "and 1=1" it takes 4 seconds again, then 9, then 30, and so on.
A quick workaround is that we added a randomly generated "and somestring = somestring" predicate, and now the results always come back in 4 seconds.
I have never encountered a query that behaves this way (it should be the opposite, or no significant change between executions). We tested it on 2 copies of the same DB; same behaviour.
How to debug it? And what internal mechanics could be getting confused?
UPDATE 1:
EXPLAIN PLAN FOR
(my query);
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
The plan is exactly the same before the first run as it is for subsequent ones. See http://pastebin.com/dMsXmhtG
Check DBMS_XPLAN.DISPLAY_CURSOR. The reason could be cardinality feedback or other adaptive techniques Oracle uses. You should see multiple child cursors related to the SQL_ID of your query, and you can compare their plans.
Does your query have bind variables, and do the columns used for filtering have histograms? That could be another reason.
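A minimal sketch of that comparison; the LIKE filter is a placeholder, so substitute a distinctive fragment of your own statement:
-- find the SQL_ID and its child cursors
SELECT sql_id, child_number, plan_hash_value, executions
FROM v$sql
WHERE sql_text LIKE '%distinctive fragment of the query%';

-- show the plan actually used by a given child cursor
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_number));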
Sounds like you might be suffering from adaptive cursor sharing or cardinality feedback. Here is an article showing how to turn them off - perhaps you could do that and see if the issue stops happening, as well as using #OldProgrammer's suggestion of tracing what is happening.
If one of these is found to be the problem, you can then take the necessary steps to ensure that the root cause (eg. incorrect statistics, unnecessary histograms, etc.) is corrected.
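If you want to test that theory at the session level first, these are the hidden parameters commonly cited for switching those 11g features off; treat this as a diagnostic experiment only and confirm with your DBA or Oracle Support before relying on it:
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;             -- cardinality feedback
ALTER SESSION SET "_optimizer_adaptive_cursor_sharing" = FALSE;  -- adaptive cursor sharing
-- then re-run the query a few times and see whether it stays at ~4 seconds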
In an application I have 2000 Accounts. The first Account contains 10000 Clients, which is the maximum limit for an Account. Users can select Clients from the first Account and then select some Accounts to copy the selected Clients to the selected Accounts. So the possible maximums are 1999 Accounts and 10000 Clients.
Currently I’m looping over the Account list and calling a Stored Procedure in each iteration from the client application. With each iteration, an Account Id and a table-valued parameter that contains the list of ids of the Clients are passed to the SP. While testing with 500 Accounts and 10000 Clients, it takes 25 minutes, 34 seconds and 543 milliseconds. At some point within the SP I’m using the following code –
INSERT INTO Client
SELECT AccountId, CId, Code, Name, Email FROM Client
WHERE Client.Id IN(SELECT Id FROM #clientIdList)
where #clientIdList is the table-type variable that contains the 10000 Client's Id.
The thing is, after each iteration 10000 new Client rows are added to the Client table, so this INSERT operation is going to take longer with every iteration. Googling for some SP performance tips, I came to know that the IN clause is considered somewhat evil, and most people suggest using INNER JOIN instead. So I changed the above code to –
INSERT INTO Client
SELECT c.AccountId, c.CId, c.Code, c.Name, c.Email FROM Client AS c
INNER JOIN #clientIdList AS cil
ON c.Id = cil.Id
Now it takes 25 minutes, 17 seconds and 407 milliseconds. Nothing exciting, really!
I’m new to the stored procedure arena. So, with this amount of data, is it supposed to take this long? And which one should I consider for the given scenario, IN or INNER JOIN? Suggestions and performance tips are welcome. Thanks.
It's hard to tell exactly what is going on without knowing more about your stored procedure.
What I would recommend is checking the execution plan. To do this, open up SQL Server Management Studio. In a new query window, make a call to your stored procedure, passing in any relevant parameters.
Before you execute this, go up to the Query menu and choose the Include Client Statistics and Include Actual Execution Plan menu items.
Now run your query.
Come back in 25 minutes when it's all done, and there should be 3 or 4 tabs at the bottom (depending on whether it returns data or not): one tab for results, one for messages, one for the client stats and one for the execution plan.
The client stats tab is useful for seeing if the changes you make affect performance (it keeps several of your last runs to show you how it performed over those.)
The more interesting tab is the execution plan tab. Look at this one, it should look something like this:
Here it tells me that my query was able to use the primary key index on all my tables. You want to look out for whole table scans (because that means it's not using an index). Also, if my query hadn't been so simple, had taken a long time, and had not used an index, then below "Query 1" there would be green text stating "Missing Index" or something along those lines. It will tell you the index you need to create to improve performance.
Also notice it tells you how much each query took, in percentage. I had one query so obviously it took 100% of the time. But if you had 5 queries in your sproc and one took 80% of the time, you might want to check that one out first.
It also tells you how much time was spent on each part of the query, again in percentages. That can be helpful for trying to understand what it is that your query is doing.
Run through this and I'd guess you have other problems with your sproc, and you can ask follow-up questions from that.
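If you prefer scripting this rather than the menu items, roughly the same timing and IO information can be had with SET STATISTICS; the procedure call below is a placeholder for however you normally invoke yours:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

EXEC dbo.YourProcedure;  -- placeholder: pass your usual parameters

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;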
I have a quite complex multi-join T-SQL SELECT query that runs for about 8 seconds and returns about 300K records, which is currently acceptable. But I need to reuse the results of that query several times later, so I am inserting them into a temp table. The table is created in advance with columns that match the output of the SELECT query. But as soon as I do INSERT INTO ... SELECT, execution time more than doubles to over 20 seconds! The execution plan shows that 46% of the query cost goes to "Table Insert" and 38% to Table Spool (Eager Spool).
Any idea why this is happening and how to speed it up?
Thanks!
The "Why" of it hard to say, we'd need a lot more information. (though my SWAG would be that it has to do with logging...)
However, the solution, 9 times out of 10, is to use SELECT INTO to make your temp table.
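A minimal sketch of that, with placeholder table and column names; SELECT ... INTO creates the temp table on the fly, which is usually why it beats a pre-created table plus INSERT ... SELECT:
-- creates #results with columns matching the select list
SELECT t.Col1, t.Col2, o.Col3
INTO #results
FROM dbo.SomeTable AS t
INNER JOIN dbo.OtherTable AS o ON o.Id = t.OtherId
WHERE t.SomeFilter = 1;

-- reuse #results as many times as needed afterwards
SELECT COUNT(*) FROM #results;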
I would start by looking at standard tuning items. Is the disk performing? Are there sufficient resources (IOs, RAM, CPU, etc.)? Is there a bottleneck in the RDBMS? It doesn't sound like the issue, but what is happening with locking? Does other code give similar results? Is other code performant?
A few things I can suggest based on the information you have provided. If you don't care about dirty reads, you could always change the transaction isolation level (if you're using MS T-SQL):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select ...
This may speed things up on your initial query as locks will not need to be done on the data you are querying from. If you're not using SQL server, do a google search for how to do the same thing with the technology you are using.
For the insert portion, you said you are inserting into a temp table. Does your database support adding primary keys or indexes on your temp table? If it does, have a dummy column in there that is an indexed column. Also, have you tried using a regular database table for this? Depending on your setup, it is possible that using that will speed up your insert times.
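A rough sketch of the indexed temp table idea, with placeholder column names; SQL Server temp tables do support primary keys and indexes:
CREATE TABLE #results
(
Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,  -- the "dummy" indexed column
Col1 INT,
Col2 VARCHAR(50)
);

-- optional extra index on a column you filter or join on later
CREATE NONCLUSTERED INDEX IX_results_Col1 ON #results (Col1);

INSERT INTO #results (Col1, Col2)
SELECT col1, col2
FROM dbo.SomeTable;
Whether that helps depends on the workload; an index maintained during the insert can also slow the insert itself down, so it is worth timing it both ways.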
I am trying to understand the performance of a query that I've written in Oracle. At this time I only have access to SQLDeveloper and its execution timer. I can run SHOW PLAN but cannot use the auto trace function.
The query that I've written runs in about 1.8 seconds when I press "execute query" (F9) in SQLDeveloper. I know that this is only fetching the first fifty rows by default, but can I at least be certain that the 1.8 seconds encompasses the total execution time plus the time to deliver the first 50 rows to my client?
When I wrap this query in a stored procedure (returning the results via an OUT REF CURSOR) and try to use it from an external application (SQL Server Reporting Services), the query takes over one minute to run. I get similar performance when I press "run script" (F5) in SQLDeveloper. It seems that the difference here is that in these two scenarios, Oracle has to transmit all of the rows back rather than the first 50. This leads me to believe that there is some network connectivity issue between the client PC and the Oracle instance.
My query only returns about 8000 rows so this performance is surprising. To try to prove my theory above about the latency, I ran some code like this in SQLDeveloper:
declare
tmp sys_refcursor;
begin
my_proc(null, null, null, tmp);
end;
...And this runs in about two seconds. Again, does SQLDeveloper's execution clock accurately indicate the execution time of the query? Or am I missing something and is it possible that it is in fact my query which needs tuning?
Can anybody please offer me any insight on this based on the limited tools I have available? Or should I try to involve the DBA to do some further analysis?
"I know that this is only fetching the
first fifty rows by default, but can I
at least be certain that the 1.8
seconds encompasses the total
execution time plus the time to
deliver the first 50 rows to my
client?"
No, it is the time to return the first 50 rows. It doesn't necessarily require that the database has determined the entire result set.
Think about the table as an encyclopedia. If you want a list of animals with names beginning with 'A' or 'Z', you'll probably get Aardvarks and Alligators pretty quickly. It will take much longer to get Zebras as you'd have to read the entire book. If your query is doing a full table scan, it won't complete until it has read the entire table (or book), even if there is nothing to be picked up in anything after the first chapter (because it doesn't know there isn't anything important in there until it has read it).
declare
tmp sys_refcursor;
begin
my_proc(null, null, null, tmp);
end;
This piece of code does nothing. More specifically, it will parse the query to determine that the necessary tables, columns and privileges are in place. It will not actually execute the query or determine whether any rows meet the filter criteria.
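To make it do real work you have to actually fetch the rows; a hedged sketch, where my_table%ROWTYPE is a placeholder for a record that matches your procedure's select list:
declare
tmp sys_refcursor;
l_row my_table%ROWTYPE;   -- placeholder: must match the cursor's select list
l_cnt pls_integer := 0;
begin
my_proc(null, null, null, tmp);
loop
fetch tmp into l_row;
exit when tmp%notfound;
l_cnt := l_cnt + 1;
end loop;
close tmp;
dbms_output.put_line('rows fetched: ' || l_cnt);
end;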
If the query only returns 8000 rows it is unlikely that the network is a significant problem (unless they are very big rows).
Ask your DBA for a quick tutorial in performance tuning.