Measuring processing time in MonetDB

I need to measure the time it takes for MonetDB to return all relevant data in response to a query. Based on my research, I have found a few potential solutions:
Measure wall-clock time when sending the query to mclient via the command line (e.g., "time mclient -s [query]"). But this includes the overhead of loading and starting mclient.
Use the timer in interactive mode. But, from my understanding, this only gives me the time from when the server receives the query until it returns the first block of data.
Use the TRACE command. But measurements using TRACE apparently have high overhead.
Is there a way to measure the time from when the query arrives at the server until the "last" block of relevant data is returned?
Thanks!

Use
select now();
YOUR_SQL_QUERY;
select now();
and now you know the start and end times.
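For example, you can put all three statements in a script and run it in a single mclient session, so the client start-up cost stays out of the measured interval (the database, file and table names below are hypothetical):
-- timing.sql
select now();
select count(*) from mytable;  -- the query you want to measure
select now();
$ mclient -d mydb timing.sql
Subtracting the first timestamp from the second gives the elapsed time, including delivery of the full result set to the client.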

Related

elapsed_time_delta in dba_hist_sqlstats is not actual query run time?

I've been trying to explore the DBA_HIST_SQLSTAT table. I encountered ambiguity with a column (a lot more of them, in fact): ELAPSED_TIME_DELTA. I ran a simple delete query and noted the time taken. But when I query DBA_HIST_SQLSTAT and look at the ELAPSED_TIME_DELTA column (I know the units are ms), it shows a different time than what I captured manually. What all comes under ELAPSED_TIME_DELTA in the DBA_HIST_SQLSTAT table? Any explanation with an example is much appreciated.
(Assuming you mean the ELAPSED_TIME columns. There is no EXECUTION_TIME column in DBA_HIST_SQLSTAT.)
The elapsed_time_delta is the difference between the elapsed_time_total of the prior snap and that of the current snap.
The elapsed_time_total is the total time spent executing that query since it was brought into the library cache. That will not necessarily equal the "wall-clock" time of any single execution of that query, except possibly for the very first execution of the query by the first user -- and that only if you grabbed the snap_id after that first execution and before any subsequent executions.
That's hard to do and not always possible. Generally speaking, you cannot use DBA_HIST_SQLSTAT to tell how long Oracle spent running a particular execution of a particular query.
What you can tell is how long Oracle spent running that query on average -- by finding the latest snap_id of interest and dividing elapsed_time_total by nullif(executions_total,0).
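A minimal sketch of that calculation (the sql_id value is hypothetical; note that elapsed_time_total is recorded in microseconds):
select snap_id,
       round(elapsed_time_total / nullif(executions_total, 0) / 1e6, 3)
         as avg_elapsed_seconds
from   dba_hist_sqlstat
where  sql_id = 'abcd123xyz9fk'
order  by snap_id;
The last row of the output gives the average over all executions recorded so far.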

MonetDB Query Plan

I have a few queries that I am running and I would like to view some sort of query plan for a given query. When I add "explain" before the query, I get a long (~4,000 line) result that is impossible to interpret.
The MAL plan exposes all parallel activity needed to solve the query. Each line is a relational algebra operator or catalog action.
You might also use PLAN to get an idea of the output of the SQL optimizer.
Each part of the physical execution plan that will be executed in parallel is repeated in the EXPLAIN output as many times as you have cores. That's why EXPLAIN can sometimes produce a huge MAL plan.
If you just want to get an idea of how a query is handled, you can force MonetDB to generate a sequential MAL plan; then, at least, you get rid of the repetitions. For this, you can change the default optimizer pipeline to, e.g., 'sequential_pipe'. This can be done either in a client (it then works only for that client session) or in a server (it then works for the whole server session). For more information: https://www.monetdb.org/Documentation/Cookbooks/SQLrecipes/OptimizerPipelines
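A quick sketch of the client-session variant in mclient (the table name is hypothetical):
sql> set optimizer = 'sequential_pipe';
sql> explain select count(*) from mytable;  -- sequential, much shorter MAL plan
sql> plan select count(*) from mytable;     -- relational plan after the SQL optimizer
sql> set optimizer = 'default_pipe';        -- back to the parallel default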

Oracle slow down unexpected and rapidly when using sql "update" continuously

The situation is simple: there is a table in Oracle used as a "shared table" for data exchange. The table structure and the number of records remain unchanged. In the normal case, I continuously update data in this table while another process reads it for the current data.
The strange thing is: when my process starts, each update statement takes approximately 2 ms to execute. After a certain period of time (about 8 hours), this increases to 10 ~ 20 ms per statement. It makes the procedure quite slow.
the structure of the table (shown as an image, not reproduced here)
and the update statement is like:
anaNum = anaList.size();
// one prepared statement, executed (and auto-committed) once per record
qry.prepare(tr("update YC set MEAVAL=:MEAVAL, QUALITY=:QUALITY, LASTUPDATE=:LASTUPDATE where YCID=:YCID"));
foreach (STbl_ANA ana, anaList)
{
    qry.bindValue(":MEAVAL",     ana.meaVal);
    qry.bindValue(":QUALITY",    ana.quality);
    qry.bindValue(":LASTUPDATE", QDateTime::fromTime_t(ana.lastUpdate));
    qry.bindValue(":YCID",       ana.ycId);
    if (!qry.exec())
    {
        qWarning() << QObject::tr("update yc failed, ")
                   << qry.lastError().databaseText() << qry.lastError().driverText();
        failedAnaList.append(ana);
    }
}
The update statement uses the Qt SQL interface.
There are many reasons why Oracle operations can slow down, but I cannot find a clue that explains this.
I never start a transaction manually in the Qt code, which means a commit is executed after every update statement.
The update frequency is about 200 records per second, but the number changes dynamically over time. It may increase to 1000 at one moment and drop to 10 the next.
Once the time consumption goes up to 10 ~ 20 ms per statement, it never drops back down. It can be restored to 2 ms only by restarting the Oracle service (it is useless to shut down or restart any client process that accesses Oracle).
Please tell me how to solve this, or at least what to examine.
A good starting point is to check the AWR and ASH reports.
By comparing the reports from "good" and "bad" times you can spot the cause of the change. This can be, for example, a change of execution plan or an increase in wait events. One possible outcome is that the only change you see is that the database spends more time waiting on the client (i.e., the problem is not in the DB).
Anyway, as diagnosed in the other answer, the root cause of the problem seems to be the update in a loop. If your update lists are long (say, more than 10-100 entries) you can profit from updating the whole list in a single statement using MERGE (see the sketch after this list):
build a collection from your list
cast the collection as TABLE
use this table in a MERGE statement to update the rows.
See here for details.
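A sketch of that approach, assuming SQL-level types created once up front (all names are hypothetical, and the client binds the whole list as the single collection parameter :yc_list):
-- create type t_yc_rec as object (ycid number, meaval number, quality number, lastupdate date);
-- create type t_yc_tab as table of t_yc_rec;
merge into yc t
using (select * from table(cast(:yc_list as t_yc_tab))) s
on (t.ycid = s.ycid)
when matched then update set
  t.meaval     = s.meaval,
  t.quality    = s.quality,
  t.lastupdate = s.lastupdate;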
You can trace the session while it is running quickly and again later when it is running slowly. Use the SQL trace functionality and tkprof to get a breakdown of where the update is spending its time in each case, and see what has changed.
https://docs.oracle.com/cd/E25178_01/server.1111/e16638/sqltrace.htm#i4640
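A sketch of tracing another session with DBMS_MONITOR (the SID, SERIAL# and trace file name are hypothetical; look them up in V$SESSION):
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => true);
-- ... let the application run in both the "fast" and the "slow" state ...
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
Then format the resulting trace file on the database server:
$ tkprof mydb_ora_12345.trc mydb_trace.txt sys=no sort=exeela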
If you need help interpreting the results you can update your question or ask a new one.
Secondly, as a rule, single-record updates are not the best way to do updates in Oracle. Since you already have many records prepared before you prepare the query, look at execBatch.
https://doc.qt.io/qt-4.8/qsqlquery.html#execBatch
This will both execute the update faster and only issue a single commit.
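A sketch of the batched version of the loop from the question (same table and bind names; only the QVariantList batching and the single execBatch() call are new):
QVariantList meaVals, qualities, lastUpdates, ycIds;
foreach (STbl_ANA ana, anaList) {
    meaVals     << ana.meaVal;
    qualities   << ana.quality;
    lastUpdates << QDateTime::fromTime_t(ana.lastUpdate);
    ycIds       << ana.ycId;
}
qry.prepare("update YC set MEAVAL=:MEAVAL, QUALITY=:QUALITY, "
            "LASTUPDATE=:LASTUPDATE where YCID=:YCID");
qry.bindValue(":MEAVAL",     meaVals);
qry.bindValue(":QUALITY",    qualities);
qry.bindValue(":LASTUPDATE", lastUpdates);
qry.bindValue(":YCID",       ycIds);
if (!qry.execBatch())  // one round trip for the whole list
    qWarning() << qry.lastError().text();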

1st access to Oracle SP is very slow, subsequent access seem fine

Not sure if this question has already been asked. I face a problem where the first hit from the website to an Oracle SP takes a long time; subsequent accesses work just fine.
The SP I'm talking about here is a dynamic SP used for search functionality (with different search criteria selection options available).
1st access time: ~200 seconds.
Subsequent access time: ~20 to 30 seconds.
Stored procedure logic, at a high level:
Conditional JOINs are appended based on some logic.
Dynamic SQL and a cursor are used to retrieve data.
Any help getting started on tackling these kinds of issues is much appreciated.
Thanks,
Adarsh
The reason why it takes only a few seconds to execute the query after the first run is caching: Oracle keeps the parsed statement in the library cache and the data blocks it has already read in the buffer cache. If you change the SQL text, Oracle considers it a different statement and has to parse and optimize it again (even reformatting the code or adding a space in between makes it a different statement).
How to speed up the first execution is a harder question. You'll need to post your query and its explain plan, and you'll probably have to answer further questions, if you want help with that.
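If you are not sure how to produce the plan, a minimal sketch (the query and names below are hypothetical; substitute your own):
explain plan for
  select * from mytable where mycol = :x;
select * from table(dbms_xplan.display);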

Oracle performance via SQLDeveloper vs application

I am trying to understand the performance of a query that I've written in Oracle. At this time I only have access to SQLDeveloper and its execution timer. I can run SHOW PLAN but cannot use the autotrace function.
The query that I've written runs in about 1.8 seconds when I press "execute query" (F9) in SQLDeveloper. I know that this is only fetching the first fifty rows by default, but can I at least be certain that the 1.8 seconds encompasses the total execution time plus the time to deliver the first 50 rows to my client?
When I wrap this query in a stored procedure (returning the results via an OUT REF CURSOR) and try to use it from an external application (SQL Server Reporting Services), the query takes over one minute to run. I get similar performance when I press "run script" (F5) in SQLDeveloper. It seems that the difference here is that in these two scenarios, Oracle has to transmit all of the rows back rather than just the first 50. This leads me to believe that there are network connectivity issues between the client PC and the Oracle instance.
My query only returns about 8000 rows so this performance is surprising. To try to prove my theory above about the latency, I ran some code like this in SQLDeveloper:
declare
  tmp sys_refcursor;
begin
  my_proc(null, null, null, tmp);
end;
...And this runs in about two seconds. Again, does SQLDeveloper's execution clock accurately indicate the execution time of the query? Or am I missing something and is it possible that it is in fact my query which needs tuning?
Can anybody please offer me any insight on this based on the limited tools I have available? Or should I try to involve the DBA to do some further analysis?
"I know that this is only fetching the
first fifty rows by default, but can I
at least be certain that the 1.8
seconds encompasses the total
execution time plus the time to
deliver the first 50 rows to my
client?"
No, it is the time to return the first 50 rows. It doesn't necessarily require that the database has determined the entire result set.
Think about the table as an encyclopedia. If you want a list of animals with names beginning with 'A' or 'Z', you'll probably get Aardvarks and Alligators pretty quickly. It will take much longer to get Zebras as you'd have to read the entire book. If your query is doing a full table scan, it won't complete until it has read the entire table (or book), even if there is nothing to be picked up in anything after the first chapter (because it doesn't know there isn't anything important in there until it has read it).
declare
  tmp sys_refcursor;
begin
  my_proc(null, null, null, tmp);
end;
This piece of code does almost nothing. More specifically, it will parse the query to determine that the necessary tables, columns and privileges are in place. It will not actually execute the query or determine whether any rows meet the filter criteria.
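To time the query end to end from SQLDeveloper, fetch every row as well; a sketch (the record layout is hypothetical and must match the cursor's select list):
declare
  tmp sys_refcursor;
  rec my_view%rowtype;  -- hypothetical: must match the columns the cursor returns
  n   pls_integer := 0;
begin
  my_proc(null, null, null, tmp);
  loop
    fetch tmp into rec;
    exit when tmp%notfound;
    n := n + 1;
  end loop;
  close tmp;
  dbms_output.put_line(n || ' rows fetched');
end;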
If the query only returns 8000 rows it is unlikely that the network is a significant problem (unless they are very big rows).
Ask your DBA for a quick tutorial in performance tuning.
