Measure LINQ to SQL performance and stats

I have a web app that creates a DataContext at the beginning of each request and releases it at the end.
I would like to have some handy stats for each page, like:
- number of inserts and time spent
- number of deletes and time spent
- number of updates and time spent
- number of selects and time spent
I have it all set for inserts/updates/deletes by implementing the partial methods InsertXXX/UpdateXXX/DeleteXXX and keeping track of counts and time spent.
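Roughly, each hook looks like this (MyDataContext, Customer and RequestStats are placeholders for my generated context, one of my entities and my per-request stats collector):

public partial class MyDataContext
{
    // Implementing the generated partial method replaces the default insert
    // behavior, so ExecuteDynamicInsert must be called to actually perform it.
    partial void InsertCustomer(Customer instance)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        this.ExecuteDynamicInsert(instance);
        sw.Stop();
        RequestStats.Current.RecordInsert(sw.Elapsed); // illustrative collector
    }
}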
But how do I count and time the SELECTs?
I am not sure there is a hook anywhere in LINQ to SQL where I could insert some measuring.
Thanks

To get an idea of how long each of your queries takes, you can run SQL Profiler against the database you are working with, and use the query execution plan to narrow down any performance issues.
If you need to integrate this more closely with your repository/data access code, you could use the Stopwatch class to time the execution of your LINQ to SQL queries:
http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
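For example, a minimal sketch (db, Orders and CustomerID are placeholders for your own DataContext and schema; note that a LINQ to SQL query only hits the database when it is enumerated, so the timing has to wrap the enumeration):

var sw = System.Diagnostics.Stopwatch.StartNew();
// The SELECT actually runs here: ToList() forces enumeration of the query.
var orders = db.Orders.Where(o => o.CustomerID == customerId).ToList();
sw.Stop();
Console.WriteLine("SELECT took {0} ms for {1} rows", sw.ElapsedMilliseconds, orders.Count);

You could wrap this pattern in a helper in your repository and accumulate the counts and times alongside the insert/update/delete stats you already collect.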

Related

elapsed_time_delta in dba_hist_sqlstat is not actual query run time?

I've been trying to explore the DBA_HIST_SQLSTAT view and ran into ambiguity with one column (a lot more, in fact): ELAPSED_TIME_DELTA. I ran a simple delete query and noted the time it took. But when I query DBA_HIST_SQLSTAT, the ELAPSED_TIME_DELTA column (the units are microseconds) shows a different time than what I captured manually. What exactly does ELAPSED_TIME_DELTA in DBA_HIST_SQLSTAT cover? Any explanation with an example is much appreciated.
(Assuming you mean the ELAPSED_TIME columns; there is no EXECUTION_TIME column in DBA_HIST_SQLSTAT.)
The elapsed_time_delta is the difference between the elapsed_time_total of the prior snap and that of the current snap.
The elapsed_time_total is the total time spent executing that query since it was brought into the library cache. That will not necessarily equal the "wall-clock" time of any single execution of the query, except possibly for the very first execution by the first user, and then only if you grabbed the snap_id after that first execution and before any subsequent ones.
That's hard to do and not always possible. Generally speaking, you cannot use DBA_HIST_SQLSTAT to tell how long Oracle spent running a particular execution of a particular query.
What you can tell is how long Oracle spent running that query on average: find the latest snap_id of interest and divide elapsed_time_total by nullif(executions_total, 0).
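For illustration, a minimal sketch of that calculation driven from C#; the ODP.NET managed driver (Oracle.ManagedDataAccess) is an assumption here, the sql_id comes from you, and querying the DBA_HIST views requires the Diagnostics Pack license:

using System;
using Oracle.ManagedDataAccess.Client;

static decimal? AvgElapsedSeconds(string connectionString, string sqlId)
{
    // elapsed_time_total is in microseconds; dividing by executions_total
    // (guarded by nullif) gives the average seconds per execution.
    const string sql =
        "select elapsed_time_total / nullif(executions_total, 0) / 1e6 " +
        "from dba_hist_sqlstat " +
        "where sql_id = :sql_id " +
        "and snap_id = (select max(snap_id) from dba_hist_sqlstat " +
        "               where sql_id = :sql_id)";

    using (var conn = new OracleConnection(connectionString))
    using (var cmd = new OracleCommand(sql, conn))
    {
        conn.Open();
        cmd.BindByName = true; // :sql_id appears twice in the statement
        cmd.Parameters.Add("sql_id", sqlId);
        object result = cmd.ExecuteScalar();
        return result == null || result == DBNull.Value
            ? (decimal?)null
            : Convert.ToDecimal(result);
    }
}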

Oracle slows down unexpectedly and rapidly when running SQL "update" continuously

The situation is simple: there is a table in Oracle used as a "shared table" for data exchange. The table structure and number of records remain unchanged. In the normal case, I continuously update data in this table while other processes read it for the current data.
The strange thing is, when my process starts, each update statement takes approximately 2 ms to execute. After a certain period of time (like 8 hours), this increases to 10-20 ms per statement, which makes the procedure quite slow.
[Image in the original post: the structure of the table]
The update statement, issued through the Qt SQL interface, looks like this:
anaNum = anaList.size();
qry.prepare(tr("update YC set MEAVAL=:MEAVAL, QUALITY=:QUALITY, LASTUPDATE=:LASTUPDATE where YCID=:YCID"));
foreach (STbl_ANA ana, anaList)
{
    // Bind this record's values and execute; each exec() is one round trip
    // to Oracle and, with no explicit transaction, one implicit commit.
    qry.bindValue(":MEAVAL", ana.meaVal);
    qry.bindValue(":QUALITY", ana.quality);
    qry.bindValue(":LASTUPDATE", QDateTime::fromTime_t(ana.lastUpdate));
    qry.bindValue(":YCID", ana.ycId);
    if (!qry.exec())
    {
        qWarning() << QObject::tr("update yc failed, ")
                   << qry.lastError().databaseText() << qry.lastError().driverText();
        failedAnaList.append(ana);
    }
}
There are many reasons why Oracle operations can slow down, but I cannot find a clue to explain this one.
I never start a transaction manually in the Qt code, which means a commit is executed after every update statement.
The update frequency is about 200 records per second, but the number changes dynamically over time. It may increase to 1000 at one moment and drop to 10 the next.
Once the time consumption goes up to 10-20 ms per statement, it never drops back down. It can only be restored to 2 ms by restarting the Oracle service (shutting down or restarting the client processes that access Oracle has no effect).
Please tell me how to solve this, or at least what should be examined.
A good starting point is to check the AWR and ASH reports.
By comparing the reports from the "good" and "bad" periods you can spot the cause of the change, for example a changed execution plan or an increase in wait events. One possible outcome is that the only change you see is the database waiting more on the client (i.e. the problem is not in the DB).
Anyway, as diagnosed in the other answer, the root cause of the problem seems to be the update in a loop. If your update lists are long (say, more than 10-100 entries) you can profit from updating the whole list in a single statement using MERGE:
- build a collection from your list
- cast the collection as TABLE
- use this table in a MERGE statement to update the rows
See here for details.
You can trace the session while it is running quickly and again later when it is running slowly. Use the SQL trace functionality and tkprof to get a breakdown of where the update spends its time in each case, and see what has changed.
https://docs.oracle.com/cd/E25178_01/server.1111/e16638/sqltrace.htm#i4640
If you need help interpreting the results you can update your question or ask a new one.
Secondly, as a rule, single-record updates are not the best way to do updates in Oracle. Since you already have many records prepared before you execute the query, look at execBatch:
https://doc.qt.io/qt-4.8/qsqlquery.html#execBatch
This will both execute the update faster and issue only a single commit.
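For readers not on Qt, the same one-statement, one-round-trip idea can be sketched in C# with ODP.NET array binding; Oracle.ManagedDataAccess is an assumption here, the table and column names are taken from the question, and the rest is illustrative:

using System;
using System.Data;
using Oracle.ManagedDataAccess.Client;

// All rows are updated with one prepared statement and one round trip,
// analogous to QSqlQuery::execBatch. The arrays must be the same length.
static void UpdateYcBatch(string connectionString, double[] meaVals,
                          int[] qualities, DateTime[] lastUpdates, int[] ycIds)
{
    using (var conn = new OracleConnection(connectionString))
    using (var cmd = new OracleCommand(
        "update YC set MEAVAL = :meaval, QUALITY = :quality, " +
        "LASTUPDATE = :lastupdate where YCID = :ycid", conn))
    {
        conn.Open();
        cmd.ArrayBindCount = ycIds.Length; // N parameter sets for one statement
        cmd.Parameters.Add(":meaval", OracleDbType.Double, meaVals, ParameterDirection.Input);
        cmd.Parameters.Add(":quality", OracleDbType.Int32, qualities, ParameterDirection.Input);
        cmd.Parameters.Add(":lastupdate", OracleDbType.Date, lastUpdates, ParameterDirection.Input);
        cmd.Parameters.Add(":ycid", OracleDbType.Int32, ycIds, ParameterDirection.Input);
        cmd.ExecuteNonQuery(); // single round trip, single implicit commit
    }
}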

1st access to Oracle SP is very slow, subsequent accesses seem fine

Not sure if this question has already been asked. I'm facing a problem where the first hit from the website to an Oracle stored procedure (SP) takes a long time; subsequent accesses work just fine.
The SP I'm talking about here is a dynamic SP used for search functionality (with different search criteria selection options available).
- 1st access time: ~200 seconds
- subsequent access time: ~20 to 30 seconds
Stored procedure logic at a high level:
- conditional JOINs are appended based on some logic
- dynamic SQL and a cursor are used to retrieve the data
Any help getting started on tackling these kinds of issues is much appreciated.
Thanks,
Adarsh
The reason the query takes only a few seconds after the first run is caching: Oracle keeps the parsed statement and its execution plan in the library cache, and recently read data blocks in the buffer cache. If you change the SQL text, Oracle considers it a different statement and won't reuse the cached cursor; it parses and optimizes the new statement from scratch (even reformatting the code or adding a space in between makes it different).
How to speed up the first execution is a hard question. You'll need to post your query and its explain plan, and you'll probably have to answer further questions, if you want to get help on that.

Entity Framework query performance

I have a problem with a quite complex query executed through Entity Framework that takes a very long time, almost 50 seconds. The query is executed through an ad-hoc call to a web service, which creates a new ObjectContext, executes the query and returns the result.
The problem is that if I capture the T-SQL with SQL Server Profiler and execute it from SQL Server Management Studio, it takes about 2 seconds... what could it be?
Thank you,
Marco
For every ObjectContext type that touches the database, Entity Framework does a lot of startup work building an internal representation of the database schema. This can take a long time (about 30 seconds in our project), and it is rolled into the cost of the first query made against the database. Subsequent queries are plenty fast, until the process is restarted. Does that apply to you?

Why is LINQ to Entities so slow the first time it's referenced

Using Entity Framework 4.0, it seems that the first time an operation (read or write) is done against an Entity Framework ObjectContext, it takes orders of magnitude longer than the second time. For example, a query may take 10 seconds (yes, seconds) the first time and 0.1 seconds the second time.
I'm guessing that the first time the ObjectContext is constructed it has to build some sort of behind-the-scenes data structures? Is it parsing the EDMX file (which I thought would have been done at compile time)?
It is building views, which get cached on subsequent calls.
You can pre-generate the views to avoid the first-time performance hit:
http://www.dotnetspark.com/kb/3706-optimizing-performance.aspx
EF has a start-up expense of loading the Entity Data Model (EDM) metadata into memory, pre-compiling views and performing other one-time operations; you could try using a warm-up query to get past that.
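A hedged sketch of such a warm-up in Global.asax.cs; MyEntities and Customers are placeholders for your generated ObjectContext and one of its entity sets, and the query needs using System.Linq:

protected void Application_Start()
{
    // Any cheap query forces the one-time EDM metadata load and view
    // generation at startup instead of on the first user request.
    using (var ctx = new MyEntities())
    {
        ctx.Customers.FirstOrDefault();
    }
}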
Maybe you have an issue with the DB table that you are running your query against. The first time you run the query the database compiles it, creates an execution plan, and so on; when you run it the second time the database uses the cached version. Try adding indexes to your table and see if this helps.
