What effect does the number of records in a particular table (in SQL Server) have on LINQ query response time in C# MVC - ajax

I did some googling on my subject title but didn't find a useful answer. Most questions were about the effect of the number of a table's columns on query performance, while I need to know the effect of the number of a table's rows on LINQ query response time in a C# MVC project. I have a web MVC project in which I fetch a set of alarms from the server using an ajax call and show them in a web grid on the client side. The ajax call is repeated every 60 seconds in a loop created with the setTimeout method. The number of rows in the alarm table (in the SQL Server database) grows steadily, and after a week it reaches thousands of rows. When first launching the project, I can see in the browser's DevTools (Chrome) that each ajax call takes about 1 second. But this time increases every day, and after a week each successful ajax call takes more than 2 minutes, which leaves about 5 ajax calls permanently stuck in the pending queue. I am sure there is no memory leak in either the client-side (jQuery) or server-side (C#) code, so the only culprit I suspect is the response time of the SELECT query performed on the alarm table. I appreciate any advice.

Check the query that is actually executed in the database. There are two options:
1) Your LINQ query fetches all the data from the database and processes it locally. In this case the number of rows in the table matters a great deal. You need to fix your LINQ query so that it fetches only the relevant rows from the database; after that you may still hit option 2.
2) Your LINQ query fetches only the relevant rows from the database, but there are no suitable indexes on your table, so each query scans all the data in it.
However, with only a few thousand rows in your table I doubt a full scan would take 2 minutes, so option 1 is the more likely reason for the slowdown.
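A minimal sketch of the difference, assuming an Entity Framework-style IQueryable source; the entity and property names below are illustrative, not taken from the question:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity standing in for a row of the alarm table.
public class Alarm
{
    public int Id { get; set; }
    public DateTime RaisedAt { get; set; }
}

public static class AlarmQueries
{
    // Option 1 (the problem): ToList() materializes the entire table first,
    // so the Where filter runs in memory over every row ever inserted.
    public static List<Alarm> Slow(IQueryable<Alarm> alarms, DateTime since) =>
        alarms.ToList()                        // SELECT * FROM Alarms -- whole table
              .Where(a => a.RaisedAt > since)  // filtered in C#, after the fact
              .ToList();

    // The fix: keep the predicate on the IQueryable so the provider translates
    // it into the SQL WHERE clause; only the matching rows cross the wire.
    public static List<Alarm> Fast(IQueryable<Alarm> alarms, DateTime since) =>
        alarms.Where(a => a.RaisedAt > since)  // becomes WHERE RaisedAt > @since
              .ToList();
}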

Related

Fetch 200k records in a single JPA select query within 5 seconds

I want to fetch 200k records in a single JPA select query within 5 seconds. I am selecting one column, which is already indexed. Currently it takes more than 5 minutes. Is it possible to select over 100k records in 5 seconds?
This is not possible with Hibernate or a plain native query, since hundreds of thousands of objects have to be created on the Java side and the results need to be sent over the network (serialization and deserialization).
You could try the following steps for fine-tuning:
1) On the DB side, you could change the index method: the default is a binary tree; set it to the "HASH" method instead.
2) Use parallel threads to retrieve the results in paginated mode (use native SQL); see the sketch at the end of this answer.
Hope this gives you some input for further fine-tuning.
Use this property to retrieve hundreds of thousands of records:
query.setHint("org.hibernate.fetchSize", 5000);
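Step 2 (parallel paginated retrieval) is language-agnostic; here is a minimal sketch of the pattern in C#, matching the main question on this page. Everything below is illustrative: the fetchPage delegate stands in for whatever native, indexed, paginated query you actually run.

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

public static class ParallelPagedFetch
{
    // fetchPage(offset, size) runs one small, index-friendly native query
    // (e.g. "... ORDER BY id OFFSET :offset ROWS FETCH NEXT :size ROWS ONLY")
    // and returns the values for that page.
    public static long[] FetchAll(Func<int, int, long[]> fetchPage,
                                  int totalRows, int pageSize)
    {
        int pageCount = (totalRows + pageSize - 1) / pageSize;
        var results = new ConcurrentBag<long[]>(); // page order is not preserved

        // Each page is fetched on its own worker thread: the database answers
        // many small queries instead of streaming one 200k-row result set.
        Parallel.For(0, pageCount, page =>
            results.Add(fetchPage(page * pageSize, pageSize)));

        return results.SelectMany(p => p).ToArray();
    }
}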

Querying from Azure DB using LINQ takes a lot of time

I am using an Azure DB and my table has over 120,000 records.
With paging of 10 records per page, I am fetching data into an IQueryable, but fetching just 10 records takes around 2 minutes. The query has no joins and just 2 filters. Using Azure Search I can get all the records within 3 seconds.
Please suggest how to minimise my LINQ search time, as Azure Search is costly.
Based on the query in the comment to your question, it looks like you are reading the entire db table into the memory of your app. The data is potentially transferred from one side of the data centre to the other, causing the performance issue.
unitofwork.Repository<EntityName>().GetQueriable().ToList().Skip(1).Take(10);
Without seeing the rest of the code, I'm just guessing, but your LINQ query should be something like this:
unitofwork.Repository<EntityName>().GetQueriable().Skip(1).Take(10).ToList();
Skip and Take should be executed on the db server, while .ToList() at the end will materialize the entities.
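As a minimal sketch (assuming Entity Framework over IQueryable; the helper below is hypothetical, not part of the question's repository), note that EF also needs an explicit ordering before Skip/Take can be translated into SQL paging:

using System;
using System.Linq;
using System.Linq.Expressions;

public static class PagingExtensions
{
    // Hypothetical paging helper; pageNumber is 1-based. Without the OrderBy,
    // LINQ to Entities refuses to translate Skip/Take (SQL paging needs a
    // stable sort order).
    public static IQueryable<T> Page<T, TKey>(this IQueryable<T> source,
        Expression<Func<T, TKey>> orderBy, int pageNumber, int pageSize)
    {
        return source.OrderBy(orderBy)
                     .Skip((pageNumber - 1) * pageSize)
                     .Take(pageSize);
    }
}

// Usage: only one page of rows crosses the wire.
// var page = unitofwork.Repository<EntityName>().GetQueriable()
//                      .Page(e => e.Id, pageNumber: 1, pageSize: 10)
//                      .ToList();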

What will happen when inserting a row during a long-running query

I am writing some data loading code that pulls data from a large, slow table in an Oracle database. I have read-only access to the data, and do not have the ability to change indexes or affect the speed of the query in any way.
My select statement takes 5 minutes to execute and returns around 300,000 rows. The system is constantly inserting large batches of new records, and I need to make sure I get every last one, so I need to save a timestamp for the last time I downloaded the data.
My question is: If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?
My gut tells me that the answer is 'no', especially since a large portion of those 5 minutes is just the data transfer from the database to the local environment, but I can't find any direct documentation on the scenario.
"If my select statement is running for 5 minutes, and new rows get inserted while the select is running, will I receive the new rows or not in the query result?"
No. Oracle enforces strict isolation levels and does not permit dirty reads.
The default isolation level is Read Committed. This means the result set you get after five minutes will be identical to the one you would have got if Oracle could have delivered all the records in 0.0000001 seconds. Anything committed after your query started running will not be included in the results. That includes updates to the records as well as inserts.
Oracle does this by tracking changes to the table in the UNDO tablespace. Provided it can reconstruct the original image of the data from the undo information, your query will run to completion; if for any reason the undo information is overwritten, your query will fail with the dreaded ORA-01555: snapshot too old. That's right: Oracle would rather hurl an exception than provide us with an inconsistent result set.
Note that this consistency applies at the statement level. If we run the same query twice within the one transaction we may see two different result sets. If that is a problem (I think not in your case) we need to switch from Read Committed to Serializable isolation.
The Oracle Concepts Manual covers concurrency and consistency in great depth.
So, to answer your question: take the timestamp from the moment you start the select. Specifically, take the max(created_ts) from the table before you kick off the query. This protects you from the gap Alex mentions (if records are not committed the moment they are inserted, there is the potential to lose records if you base the select on the system timestamp). Although doing this means you're issuing two queries in the same transaction, which means you do need Serializable isolation after all!
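A minimal sketch of that watermark pattern, assuming the Oracle managed ADO.NET provider (Oracle.ManagedDataAccess); the table and column names are illustrative, not from the original posts:

using System;
using System.Data;
using Oracle.ManagedDataAccess.Client; // assumed provider; any ADO.NET provider is similar

public static class IncrementalLoader
{
    // Returns the new watermark to persist for the next run.
    public static DateTime LoadNewRows(OracleConnection conn, DateTime lastWatermark,
                                       Action<IDataRecord> handleRow)
    {
        // Both statements run in one SERIALIZABLE transaction, as noted above,
        // so they see the same consistent snapshot of the table.
        using (var tx = conn.BeginTransaction(IsolationLevel.Serializable))
        {
            // 1) Capture the new watermark BEFORE the long-running select;
            //    rows committed while we read are picked up on the next run.
            DateTime newWatermark;
            using (var max = new OracleCommand("SELECT MAX(created_ts) FROM alarm_log", conn))
            {
                max.Transaction = tx;
                newWatermark = Convert.ToDateTime(max.ExecuteScalar());
            }

            // 2) Fetch only the rows between the old and new watermarks.
            using (var cmd = new OracleCommand(
                "SELECT * FROM alarm_log WHERE created_ts > :lo AND created_ts <= :hi", conn))
            {
                cmd.Transaction = tx;
                cmd.BindByName = true; // ODP.NET binds by position unless told otherwise
                cmd.Parameters.Add(new OracleParameter("lo", lastWatermark));
                cmd.Parameters.Add(new OracleParameter("hi", newWatermark));
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        handleRow(reader);
            }

            tx.Commit();
            return newWatermark;
        }
    }
}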

The query time of a view increases after having fetched the last page from a view in Oracle PL/SQL

I'm using Oracle PL/SQL Developer on an Oracle Database 11g. I have recently written a view with some weird behaviour. When I run the simple query below without fetching the last page of the result, the query time is about 0.5 sec (0.2 when cached):
select * from covenant.v_status_covenant_tuning where bankkode = '4210';
However, if I fetch the last page in PL/SQL Developer, or if I run the query from Java code (i.e. a query that retrieves all the rows), something happens to the view and the query time increases to about 20-30 secs.
The view does not start working properly again until I recompile it. The explain plan is exactly the same before and after. All indexes and tables are analyzed. I don't know if it's relevant, but the view uses a few analytic expressions like rank() over (partition by .....), lag(), lead() and so on.
As I'm new here I can't post a picture of the explain plan (that needs a reputation of 10), but in general the optimizer uses indexes efficiently, and it does a few sorts because of the analytic functions.
If the plan involves a full scan of some sort, the query will not complete until the very last block in the table has been read.
Imagine a table that has lots of matching rows in the very first few blocks in the table, and no matching rows in the rest of it. If there is a large volume of blocks to check, the query might return the first few pages of results very quickly, as it finds them all in the first few blocks of the table. But before it can return the final "no more results" to the client, it must check every last block of the table - it doesn't know if there might be one more result in the very last block of the table, so it has to wait until it has read that last block.
If you'd like more help, please post your query plan.

How to optimize data fetching in SQL Developer?

I am working in Oracle SQL Developer and have created tables with Wikipedia data, so the amount of data is huge, spread across 7 tables. I have created a search engine which fetches and displays data using JSP, but the problem is that for each query the application has to access 4 tables, making it very time-consuming.
I have added indexes to all the tables but it still takes too long, so any suggestions on how to optimize my app and reduce the time it takes to display results would be welcome.
There are several approaches you can take to tune your application, either at the database end, the front end, or a combination of the two.
At the database end you could look at, say, a materialized view to summarize the more commonly searched data. This could be for your search purposes only, or to reduce the size and complexity of the result set. You might also look at tuning the query itself - perhaps placing indexes on the columns used in your search's WHERE clauses, or denormalizing your tables.
At the application end, retrieval of vast record sets can always cause problems when a single record is large (many columns) and the records in the result set are numerous.
What you are probably looking for is a rapid response time from your application so your user doesn't feel they are waiting ... and waiting.
A technique I have seen and used is to retrieve the result set either as:
1) a record set of ROWIDs, paging through those ROWIDs for display, or
2) a simulated "paged" record set, retrieving the records in chunks; a sketch of this approach follows.
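A minimal sketch of the chunked approach 2), written in C# to match the rest of this page; the ROWNUM windowing is the classic pre-12c Oracle pagination idiom, and all names are illustrative:

using System.Collections.Generic;
using System.Data;
using Oracle.ManagedDataAccess.Client; // assumed provider

public static class ChunkedReader
{
    // An ordered inner query wrapped in two ROWNUM filters -- the standard
    // Oracle 11g pagination pattern. Table and column names are illustrative.
    const string PageSql = @"
        SELECT * FROM (
            SELECT t.*, ROWNUM rn FROM (
                SELECT * FROM articles ORDER BY id
            ) t WHERE ROWNUM <= :hi
        ) WHERE rn > :lo";

    public static IEnumerable<IDataRecord> ReadInChunks(OracleConnection conn, int chunkSize)
    {
        for (int lo = 0; ; lo += chunkSize)
        {
            int rowsInChunk = 0;
            using (var cmd = new OracleCommand(PageSql, conn))
            {
                cmd.BindByName = true;
                cmd.Parameters.Add(new OracleParameter("hi", lo + chunkSize));
                cmd.Parameters.Add(new OracleParameter("lo", lo));
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                    {
                        rowsInChunk++;
                        yield return reader; // consume immediately; the reader is live
                    }
            }
            if (rowsInChunk < chunkSize) yield break; // final (partial) chunk reached
        }
    }
}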
