I have a query CREATE TABLE foobar AS SELECT ... that runs successfully in Hue (the returned status is Inserted 986571 row(s)) and takes a couple of seconds to complete. However, in Cloudera Manager its status, after more than 10 minutes, still says Executing.
Is it a bug in Cloudera Manager or is this query actually still running?
When Hue executes a query, it leaves the query open so that users can page through results at their own pace. (Of course, this behavior isn't very useful for DDL statements.) That means that Impala still considers the query to be executing, even if it is not actively using CPU cycles (keep in mind it is still holding memory!). Hue will close the query when the page/session is closed, or when explicitly told to, e.g. using the hue command:
> build/env/bin/hue close_queries --help
Note that Impala has a query option to automatically time out queries after a period of time; see query_timeout_s. Hue sets this to 10 minutes by default, but you can override it in the hue.ini settings.
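For instance, the option can be set per session in impala-shell (a minimal sketch; the value is in seconds, and the hue.ini key that controls Hue's default may differ between versions):

set query_timeout_s=600;   -- cancel the query once it has been idle for 10 minutes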
One thing to note is that when queries 'time out', they are cancelled but not closed, i.e. the query will remain "in flight" with a CANCELLED status. The reason for this is so that users (or tools) can continue to observe the query metadata (e.g. query profile, status, etc.), which would not be available if the query is fully closed and thus deregistered from the impalad. Unfortunately these cancelled queries may still hold some non-negligible resources, but this will be fixed with IMPALA-1575.
More information: Hive and Impala queries life cycle
Starting in Oracle 11g, GATHER_STATS_JOB is no longer valid and has been replaced by "auto optimizer stats collection".
This job supposedly runs during the "maintenance windows" and gathers statistics for objects which have changed 10% or more, or have stale stats. If this is true, then why, when I run a query checking stale_stats='YES', do I still get objects?
Maybe I'm not understanding how the job executes.
Two broad possibilities:
Oracle updates stale_statistics to "YES" in dba_tab_statistics periodically throughout the day as tables undergo changes. It is entirely possible that a table had just under the threshold amount of changes when stats were gathered this morning and that stale_stats flipped to "YES" later in the day when a few more changes were made.
Depending on how many objects had stale stats when the job ran, how much data those tables contained, how large your maintenance window is, and how powerful your server is, it is possible that the statistics job had to be aborted before it could re-gather all the stale statistics. If the job was aborted, that abort would be logged in the job history. If this happened because there happened to be a large number of changes one day (say you ran an annual purge process that deleted a large amount of data from almost every table in the database), the stale statistics would be updated over the course of several days' worth of statistics job runs until the job caught up.
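One way to tell these cases apart is to compare the stale flag with the change counts Oracle tracks for each table (a sketch; MY_SCHEMA is a placeholder for your schema, and the DBA_ views may require extra privileges):

-- flush the in-memory DML monitoring info so the counts below are current
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

select s.table_name, s.last_analyzed, s.stale_stats,
       m.inserts, m.updates, m.deletes
from   dba_tab_statistics s
       left join dba_tab_modifications m
              on  m.table_owner = s.owner
              and m.table_name  = s.table_name
where  s.owner = 'MY_SCHEMA'
and    s.stale_stats = 'YES';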
In my team, we need to connect to Oracle, Sybase and MSSQL very frequently. We use Oracle's SQLDeveloper 3.3.2 to connect to all three (using third-party libs). This tool often has a problem where select queries never end: even after we get the results, the query keeps running, and because of this we receive database alerts for long-running queries.
E.g.
Select * from products
If products has a million records, SQLDeveloper will show the top records, but in the background the query will keep on running.
How can this problem be solved?
Or
Is there a better product which can fulfill our needs?
Your query - select * from products - is asking the database engine to send millions of records to your client application (SQLDeveloper in this case).
While SQLDeveloper (and many other GUIs of a similar design) will show you the first 30 (or 50, or 100, etc.) rows, as far as the database engine is concerned you're still asking to see millions of rows, hence your query continues to 'run' in the database engine.
For example, in Sybase ASE the query will show up with a status of 'send sleep' meaning the database engine is waiting for the client application to request the next batch of records to send down the connection.
To 'solve' this issue you have a few options:
- using SQLDeveloper: scroll through (i.e. display on your monitor) the rest of the multi-million row result set [likely not what you want to do; you probably don't have the time/desire to hit the 'Next' button hundreds of thousands of times]
- kill off your query after you've received/viewed the first set of records [not recommended, as there will likely be times when you 'forget' to kill off your query, thus earning the wrath of your DBA]
- write your query to pull back only the records you REALLY want/need to see, e.g. add a WHERE clause to limit the set of rows (see the example after this list)
- see if SQLDeveloper has any sort of configuration option to auto-kill any 'long running' queries [I have no idea if this is even doable in a client application]
- see if the DBA can configure your login with a resource limit (e.g. auto-kill queries if they run for more than XX seconds)
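For the WHERE-clause option, the idea is to push the filtering (or row capping) into the query itself; the exact syntax differs per engine (a sketch, assuming a hypothetical product_id column):

Select * from products where product_id between 1000 and 2000   -- only the rows you actually need
Select * from products where rownum <= 100                      -- Oracle: cap the result set
Select top 100 * from products                                  -- MSSQL / Sybase ASE: cap the result set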
I am writing some data loading code that pulls data from a large, slow table in an Oracle database. I have read-only access to the data, and do not have the ability to change indexes or affect the speed of the query in any way.
My select statement takes 5 minutes to execute and returns around 300,000 rows. The system is inserting large batches of new records constantly, and I need to make sure I get every last one, so I need to save a timestamp for the last time I downloaded the data.
My question is: If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?
My gut tells me that the answer is 'no', especially since a large portion of those 5 minutes is just the time spent on the data transfer from the database to the local environment, but I can't find any direct documentation on the scenario.
"If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?"
No. Oracle enforces strict isolation levels and does not permit dirty reads.
The default isolation level is Read Committed. This means the result set you get after five minutes will be identical to the one you would have got if Oracle could have delivered all the records to you in 0.0000001 seconds. Anything committed after your query started running will not be included in the results. That includes updates to the records as well as inserts.
Oracle does this by tracking changes to the table in the UNDO tablespace. Provided it can reconstruct the original image from that data, your query will run to completion; if for any reason the undo information is overwritten, your query will fail with the dreaded ORA-1555: Snapshot too old. That's right: Oracle would rather hurl an exception than provide us with an inconsistent result set.
Note that this consistency applies at the statement level. If we run the same query twice within the one transaction we may see two different result sets. If that is a problem (I think not in your case) we need to switch from Read Committed to Serializable isolation.
The Concepts Manual covers Concurrency and Consistency in great depth. Find out more.
So to answer your question: take the timestamp from the time you start the select. Specifically, take the max(created_ts) from the table before you kick off the query. This should protect you from the gap Alex mentions (if records are not committed the moment they are inserted, there is the potential to lose records if you base the select on a comparison with the system timestamp). Although doing this means you're issuing two queries in the same transaction, which means you do need Serializable isolation after all!
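Put together, the incremental load could look like this (a sketch; big_slow_table and created_ts are hypothetical names for your table and timestamp column, and :last_watermark is the value saved from the previous run):

set transaction isolation level serializable;

select max(created_ts) from big_slow_table;    -- save this as the watermark for the next run

select *
from   big_slow_table
where  created_ts > :last_watermark;           -- everything committed since the previous run

commit;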
I have a Java job that runs a query on Teradata and pushes the results to a local database. It's a large query (>80M records) and can take hours to finish (the slowness is not due to Teradata but to the local DB). Because it takes so long, there is a chance that it gets interrupted by a network error or something. When that happens I get this exception:
org.skife.jdbi.v2.exceptions.ResultSetException: Unable to advance result set
If the failure occurs a few hours into the query, there is no time to rerun it from scratch, because the job needs to deliver the result before a specific time every day. Is there a way to resume the query after such a failure? I'm not sure if pagination is a good option because the query involves joining a few tables and the tables are updated frequently.
The situation is simple: there is a table in Oracle used as a "shared table" for data exchange. The table structure and the number of records remain unchanged. In the normal case, I continuously update data in this table and other processes read it for the current data.
The strange thing is, when my process starts, each update statement takes approximately 2 ms to execute. After a certain period of time (like 8 hours), this increases to 10-20 ms per statement, which makes the procedure quite slow.
(image: the structure of the table)
The update statement is like:
anaNum = anaList.size();
// one prepared statement, then one exec() (and implicit commit) per record
qry.prepare(tr("update YC set MEAVAL=:MEAVAL, QUALITY=:QUALITY, LASTUPDATE=:LASTUPDATE where YCID=:YCID"));
foreach (STbl_ANA ana, anaList)
{
    qry.bindValue(":MEAVAL", ana.meaVal);
    qry.bindValue(":QUALITY", ana.quality);
    qry.bindValue(":LASTUPDATE", QDateTime::fromTime_t(ana.lastUpdate));
    qry.bindValue(":YCID", ana.ycId);
    if (!qry.exec())
    {
        // remember records whose update failed so they can be retried
        qWarning() << QObject::tr("update yc failed, ")
                   << qry.lastError().databaseText() << qry.lastError().driverText();
        failedAnaList.append(ana);
    }
}
(the update statement, using the Qt interface)
There are many reasons that can cause Oracle operations to slow down, but I cannot find a clue to explain this.
I never start a transaction manually in the Qt code, which means a commit is executed after every update statement.
The update frequency is about 200 records per second, but the number changes dynamically over time. It may increase to 1000 at one moment and drop to 10 the next.
Once the time consumption goes up to 10-20 ms per statement, it never drops back down. It can be restored to 2 ms only by restarting the Oracle service (it's useless to shut down or restart any user process that accesses Oracle).
Please tell me how to solve this, or at least what should be examined.
A good starting point is to check the AWR and ASH reports.
By comparing the reports from "good" and "bad" periods you can spot the cause of the change. This can be, for example, a change of execution plan or an increase in wait events. One possible outcome is that the only change you see is that the database is spending more time waiting on the client (i.e. the problem is not in the DB).
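If you are licensed for the Diagnostics Pack, the standard scripts shipped in ORACLE_HOME can generate both reports from SQL*Plus (they prompt for the snapshot range or time window):

-- AWR report for a chosen snapshot range
@?/rdbms/admin/awrrpt.sql
-- ASH report for a chosen time window
@?/rdbms/admin/ashrpt.sql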
Anyway, as diagnosed in the other answer, the root cause of the problem seems to be the update in a loop. If your update lists are long (say, more than 10-100 entries) you can profit by updating the whole list in a single statement using MERGE:
- build a collection from your list
- cast the collection as TABLE
- use this table in a MERGE statement to update the rows.
See here for details.
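A sketch of what that can look like (the type names yc_upd_t / yc_upd_tab_t are hypothetical, and how you bind the :ana_list collection from the client depends on your driver):

-- one-off DDL: a SQL type matching the columns being updated
create type yc_upd_t as object (ycid number, meaval number, quality number, lastupdate date);
/
create type yc_upd_tab_t as table of yc_upd_t;
/

-- then the whole list is applied in a single statement
merge into YC t
using (select ycid, meaval, quality, lastupdate
       from   table(cast(:ana_list as yc_upd_tab_t))) s
on    (t.YCID = s.ycid)
when matched then update
   set t.MEAVAL = s.meaval, t.QUALITY = s.quality, t.LASTUPDATE = s.lastupdate;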
You can trace the session while it is running quickly and again later when it is running slowly. Use the SQL trace functionality and tkprof to get a breakdown of where the update is spending its time in each case, and see what has changed.
https://docs.oracle.com/cd/E25178_01/server.1111/e16638/sqltrace.htm#i4640
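One way to do that (the sid/serial# values are placeholders for the session you want to trace, and DBMS_MONITOR requires the appropriate privileges):

-- enable tracing for the target session, once during a fast period and once during a slow period
exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678, waits => TRUE, binds => TRUE);
-- ... let a few hundred updates run, then:
exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);
-- format the resulting trace file on the server, e.g. tkprof <tracefile>.trc slow.txt sort=exeela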
If you need help interpreting the results you can update your question or ask a new one.
Secondly, as a rule, single-record updates are not the best way to do updates in Oracle. Since you already have many records prepared before you prepare the query, look at execBatch.
https://doc.qt.io/qt-4.8/qsqlquery.html#execBatch
This will both execute the update faster and only issue a single commit.