Please explain this behaviour (is LINQ cached)? - asp.net-mvc-3

I recently added the wonderful MiniProfiler package to my project and it has helped me a lot in improving page render speed.
Now I notice the following: the first request to a page spends significantly longer in SQL than subsequent visits do.
Here's an example (MiniProfiler screenshots): the first visit shows far more time spent in SQL than the second and later visits.
Is this caused by some sort of caching in LINQ or on SQL Server? I'm using .NET 4 and LINQ-to-SQL with default settings in my dbml file.

There are a lot of things that can affect the performance of a first hit. The JIT compiler might have to do some work, and various levels of caching might come into play.
That said, SQL Server has very advanced caching features. It's not at all unusual for repeat queries against the server to be much faster than the initial query.
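If you want to take that first-hit cost off your users, one option is to warm things up yourself at startup. Here is a minimal sketch, assuming a LINQ-to-SQL DataContext called MyDataContext with a Products table (both hypothetical names); it fires a throwaway query when the application starts so that JIT compilation, the LINQ-to-SQL model build, connection-pool creation, and SQL Server's plan and buffer caches are all primed before the first real visitor arrives.

// In Global.asax.cs - a warm-up sketch; MyDataContext and Products are placeholder names.
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    // Run a cheap query in the background so the first visitor doesn't pay
    // for JIT compilation, model building, and SQL Server cache warm-up.
    System.Threading.ThreadPool.QueueUserWorkItem(_ =>
    {
        try
        {
            using (var db = new MyDataContext())
            {
                var touch = db.Products.FirstOrDefault(); // result discarded on purpose
            }
        }
        catch
        {
            // Warm-up is best-effort; ignore failures here.
        }
    });
}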

Related

Tabular Model Not Caching Results

Can anyone point me in the right direction for troubleshooting why a Tabular model that I have built does not seem to want to cache query results?
It is my understanding that MDX queries against a Tabular model will be cached; however, with our model they never seem to be, and I can't figure out why.
My best guess is that it's memory pressure and the system is clearing down the RAM, but even that is a guess.
Are there any counters, DMVs, or other perfmon stats, etc., that I can use to actually see what is going on and check?
Thanks.
Plenty of places to look, but I'd recommend starting with a Profiler/xEvent trace. Below is an example of 2 runs of the same MDX query.
The first run is on a cold cache; the second run is on a warm cache, where you can see the query being resolved from the cache.
This is much easier to see if you can isolate the query on a non-production server (e.g. a test/dev environment). There are quite a few reasons why a particular query may not be taking advantage of the cache, but first you need to confirm that it is not using the cache.
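If you would rather measure this than eyeball trace output, here is a rough sketch using ADOMD.NET (Microsoft.AnalysisServices.AdomdClient): clear the cache, then run the same MDX twice and compare timings. On a healthy model the warm run should be dramatically faster. The server name, catalog, and query text are placeholders, and I'm assuming your ADOMD.NET version accepts an XMLA ClearCache command through ExecuteNonQuery - verify that against your client library.

using System;
using System.Diagnostics;
using Microsoft.AnalysisServices.AdomdClient;

class CacheCheck
{
    static void Main()
    {
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=MyTabularModel"))
        {
            conn.Open();

            // Make the first run genuinely cold: XMLA ClearCache for the whole database.
            new AdomdCommand(
                "<ClearCache xmlns=\"http://schemas.microsoft.com/analysisservices/2003/engine\">" +
                "<Object><DatabaseID>MyTabularModel</DatabaseID></Object></ClearCache>",
                conn).ExecuteNonQuery();

            // Swap in the MDX query you are investigating.
            var mdx = new AdomdCommand("SELECT {} ON 0 FROM [Model]", conn);

            for (int run = 1; run <= 2; run++)
            {
                var sw = Stopwatch.StartNew();
                using (var reader = mdx.ExecuteReader())
                {
                    while (reader.Read()) { } // drain the result set
                }
                sw.Stop();
                Console.WriteLine("Run {0}: {1} ms", run, sw.ElapsedMilliseconds);
            }
        }
    }
}

If run 2 is no faster than run 1, that's consistent with the cache being evicted (e.g. memory pressure) or the query taking a non-cacheable path, and the Profiler/xEvent trace is the place to confirm which.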

XPages performance - 2 apps on same server, 1 runs and 1 doesn't

We have been having a bit of a nightmare this last week with a business-critical XPages application: all of a sudden it has started crawling really badly, to the point where I have to reboot the server daily, and even then some pages can take 30 seconds to open.
The server has 12 GB of RAM and 2 CPUs; I am waiting for another 2 to be added to see if this helps.
The database has around 100,000 documents in it, with no more than 50,000 displayed in any one view.
The same database, set up as a training application with far fewer documents on the same server, always responds, even when the main copy is crawling.
There are a number of view panels in this application - I have read these are really slow. Should I get rid of them and replace them with Repeat controls?
There are also Readers fields on the documents containing roles, and Authors fields, as it's a workflow application.
I removed quite a few unnecessary views from the back end over the weekend to help speed it up but that has done very little.
Any ideas where I can check to see what's causing this massive performance hit? It has only really become unworkable in the last week, but as far as I know nothing in the design has changed, apart from my deleting some old views.
Try to get more information about the state of your server and application.
Hardware troubleshooting is summarized here: http://www-10.lotus.com/ldd/dominowiki.nsf/dx/Domino_Server_performance_troubleshooting_best_practices
Given your experience - only one of the two applications has slowed down - it is more likely a code problem. The best thing is to profile your code: http://www.openntf.org/main.nsf/blog.xsp?permaLink=NHEF-84X8MU
To go deeper, you can start looking for semaphore locks (http://www-01.ibm.com/support/docview.wss?uid=swg21094630), or look at Java dumps (http://lazynotesguy.net/blog/2013/10/04/peeking-inside-jvms-heap-part-2-usage/), NSDs (http://www-10.lotus.com/ldd/dominowiki.nsf/dx/Using_NSD_A_Practical_Guide/$file/HND202%20-%20LAB.pdf), and the garbage collector (see "Best setting for HTTPJVMMaxHeapSize in Domino 8.5.3 64 Bit").
This presentation gives a good overview of Domino troubleshooting (among many others on the web).
OK, so we resolved the performance issues by doing a number of things. I'll list the changes we made in order of the improvement gained, starting with the simple tweaks that weren't really noticeable.
Defragged the Domino drive - it was showing as 32% fragmented and I thought I was on to a winner, but it was really no better after the defrag, even though IBM docs say even 1% fragmentation can cause performance issues.
Reviewed all the main code in the application and took out a number of needless lookups that could be replaced with applicationScope variables. For instance, on the search page one of the drop-downs gets its choices by doing an @Unique lookup across all documents in the database; I changed it to a keyword and put that in the applicationScope.
Removed multiple checks on database.queryAccessRole and put the user's roles in a sessionScope.
The DB had 103,000 documents - 70,000 of them were tiny little docs with about 5 fields on them. They don't need to be indexed by the FT index, so we moved them into a separate database and pointed the data source at that DB when those docs were needed. The FT index went from 500 MB to 200 MB = faster indexing and searches, but the overall performance of the app was still rubbish.
The big one - I finally got around to checking the application properties, Advanced tab. I set the following options:
Optimize document table map (ran a copy-style compact)
Don't overwrite free space
Don't support specialized response hierarchy
Use LZ1 compression (ran a copy-style compact with the -ZU option to convert existing attachments)
Don't allow headline monitoring
Limit entries in $UpdatedBy and $Revisions to 10 (as per the Domino documentation)
And also don't allow the use of stored forms.
Now I don't know which one of these options was the biggest gain, and not all of them will be applicable to your own apps, but after doing this the application flies! It's running like there are no documents in there at all; views load super fast, documents open like they should - quickly - and everyone is happy.
Until the HTTP threads get locked out - that's another question of mine that I am about to post, so please take a look if you have any idea of what's going on :-)
Thanks to all who have suggested things to try.

How to decrease the startup time of this LINQ-EF query? Precompiled EF Queries?

I am using MVC 3, .NET 4.5, C#, EF 5.0, and MSSQL 2008 R2.
My web application can take between 30 and 60 seconds to warm up, i.e. to get through the first page load. Subsequent page loads are very quick.
I have done a little more analysis, using DotTrace.
I have discovered that some of my LINQ queries, particularly the .Count() and .Any() ones, take a long time to execute the first time, e.g.:
if (!this.db.Class.OfType<RPRC_Product>().Any(c => c.ReportId == (int?) myReportId))
takes around 12 seconds on first use.
Can you provide pointers on how I can get these times down? I have heard about precompilation for EF queries.
I have a feeling the answer lies in using precompilation rather than altering this specific query.
Many thanks in advance
EDIT
Just read up on EF5's auto-compile feature, so the second time round compiled queries are in the cache. So first time round, compilation to the intermediate EF language is still required. I have also read up on pre-generation of views, which may help generally as well?
It is exactly as you said - you do not have to worry about compiling queries; they are cached automatically by EF. Pre-generated views might help you, or they might not: the automatic tools for scaffolding them do not generate all the required views, and the missing ones still need to be generated at run time.
When I developed my solution and ran into the same problem you describe, the only thing that helped was to simplify the query. It turns out that if the first query to run is very complex and involves complicated joins, EF needs to generate many views to execute it. So I simplified the queries - instead of loading whole joined entities, I loaded only ids (grouped, filtered, or whatever), and then when I needed to load single entities I loaded them by id, one by one. This allowed me to avoid the long execution time of my first query.
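Here is a sketch of that simplification, with a hypothetical db context and Product entity standing in for the real model; the point is that the id-only projection is cheap for EF to translate, and the per-id loads are trivial key lookups.

// Instead of one complex joined query (which forces EF to generate many views):
//   var products = db.Products.Include("Supplier").Include("Category")
//                    .Where(p => p.ReportId == reportId).ToList();

// 1. Load only the ids - a simple projection that is cheap to compile.
var ids = db.Products
            .Where(p => p.ReportId == reportId)
            .Select(p => p.Id)
            .ToList();

// 2. Load the full entities one by one, by primary key.
foreach (var id in ids)
{
    var product = db.Products.Find(id); // DbSet.Find: a simple lookup by key
    // ... work with the entity ...
}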

RavenDB - slow write/save performance?

I started porting a simple ASP.NET MVC web app from SQL to RavenDB. I noticed that the pages were faster on SQL than on RavenDB.
Drilling down with MiniProfiler, it seems the culprit is the time it takes to run session.SaveChanges (150-220 ms). The code for saving in RavenDB looks like:
var btime = new TimeData() { Time1 = DateTime.Now, TheDay = new DateTime(2012, 4, 3), UserId = 76 };
session.Store(btime);
session.SaveChanges();
Authentication Mode: When RavenDB is running as a service, I assume it is using Windows Authentication. When deployed as an IIS application I just used the defaults, which was also Windows Authentication.
Background: The database machine is separate from my development machine, which acts as the web server. Both databases are running on the same database machine. The test data is quite small - say 100 rows. The queries are simple, returning an object with 12 properties, 48 bytes in size. Using Fiddler to run a WCAT test against RavenDB generated higher utilization on the database machine (vs. SQL) and far fewer pages. I tried running Raven as a service and as an IIS application, but didn't see a noticeable difference.
Edit
I wanted to ensure it wasn't a problem with a) one of my machines or b) the solution I created. So I decided to try testing it on AppHarbor using another solution, created by Michael Friis: the RavenDB sample app, simply adding MiniProfiler to that solution. Michael is one of the awesome guys at AppHarbor, and you can download the code here if you want to look at it.
Results from Appharbor
You can try it here (for now):
Read: (7-12ms with a few outliers at 100+ms).
Write/Save: (197-312 ms) - wow, that's a long time to save. To test the save, just create a new "thingy" and save it. You might want to do it at least twice, since the first one usually takes longer as the application warms up.
Unless we're both doing something wrong, RavenDB is very slow to save - around 10-20x slower to save than to read. Given that it re-indexes asynchronously, this seems very slow.
Are there ways to speed it up or is this to be expected?
First - Ayende is "the man" behind RavenDB (he wrote it). I have no idea why he's not addressing the question; even in the Google Groups thread he seems to chime in once to ask some pointed questions, but rarely comes back to provide a complete answer. Maybe he's working hard to get RavenHQ off the ground?!?
Second - we experienced a similar problem. Below is a link to a discussion on Google Groups that may explain the cause:
RavenDB Authentication and 401 Response.
A reasonable question might be: "If these recommendations fix the problem, why doesn't RavenDB work that way out of the box?" or at least provide documentation about how to get decent write performance.
We played for a while with the suggestions made in the thread above, and the response time did improve. In the end, though, we switched back to MySQL because it's well-tested, because we ran into this problem early (luckily), which raised concern that we might hit more problems, and finally because we did not have the time to:
fully test whether it fixed the performance problems we saw on the RavenDB Server
investigate and test the implications of using UnsafeAuthenticatedConnectionSharing & pre-authentication.
To summarize Ayende's response: you're actually testing the sum of network latency and authentication chatter. As Joe pointed out, there are ways you can optimize the authentication to be less chatty. However, this arguably reduces security; clearly Microsoft built its security to be secure first and performant second. You, as the user of RavenDB, can choose whether the default security model is too robust, as it arguably is for protected server-to-server communication.
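For reference, the less-chatty-authentication tweak discussed in that thread looks roughly like this on the client side. This is a sketch only - the exact hook (JsonRequestFactory.ConfigureRequest here) depends on your RavenDB client version, so treat the names as assumptions:

var store = new DocumentStore { Url = "http://dbserver:8080" };
store.Initialize();

// Reuse authenticated connections and send credentials up front,
// instead of paying a 401-challenge round-trip on every request.
store.JsonRequestFactory.ConfigureRequest += (sender, args) =>
{
    var request = (System.Net.HttpWebRequest)args.Request;
    request.PreAuthenticate = true;
    request.UnsafeAuthenticatedConnectionSharing = true;
};

As the thread notes, UnsafeAuthenticatedConnectionSharing trades some security for speed, so it is really only appropriate for trusted server-to-server links.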
RavenDB is clearly designed to be read-oriented. 10-20x slower for writes than reads is entirely acceptable because writes are fully ACID and transactional.
If write speed is your limiting factor with RavenDB, you've likely not modeled your transaction boundaries properly - you are saving documents that are too similar to RDBMS table rows rather than actual well-modeled documents.
Edit: Reading your question again and looking at the background section, you explicitly define your test conditions to be an optimal scenario for SQL Server while being one of the least efficient methods for RavenDB. For data of that size, it would almost certainly be a single document in real-world usage.
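To make that modeling point concrete, here is a sketch (DayTimesheet is a made-up aggregate): rather than storing each tiny TimeData "row" as its own document, group a natural unit of work - say one user's day - into a single document and pay the SaveChanges round-trip once.

// A made-up aggregate: one document per user per day instead of one per entry.
public class DayTimesheet
{
    public string Id { get; set; }            // e.g. "timesheets/76/2012-04-03"
    public int UserId { get; set; }
    public DateTime TheDay { get; set; }
    public List<DateTime> Times { get; set; }
}

var sheet = new DayTimesheet
{
    Id = "timesheets/76/2012-04-03",
    UserId = 76,
    TheDay = new DateTime(2012, 4, 3),
    Times = new List<DateTime> { DateTime.Now }
};
session.Store(sheet);
session.SaveChanges(); // one round-trip, one ACID write, one well-modeled document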

Performance problem website

We have a website based on Drupal. There are ~30 modules, which is not a huge amount for our VPS. We have no high traffic, so traffic isn't what's overloading the site.
On the same VPS we have other sites, which load properly.
Site: http://jnews.am
Where should I start? How can I check which part of my server/website is causing the performance issues? What investigation methods can you suggest?
You need to dig in to find the bottleneck. Here is a good article about it: http://www.mindyourcode.com/php/drupal-profiling-performance-analysis-for-optimization-finding-the-performance-bottlenecks-in-your-application-at-core-level/
Good question. Here are some answers:
1) Download the YSlow extension for Firefox and install it. This will let you test a number of different items that can suggest why your site may be slow. It doesn't look to be your big problem right now, though.
2) Install the Firebug extension for Firefox. The 'Net' tab of Firebug tells you how long every document took to download. Your core page is taking 5 seconds, and for some reason system.css is taking almost as long, which is unusual given that it's a static file.
3) Check whether you've got any slow queries, and why. Assuming you're using MySQL, this page http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html tells you how to set up the slow query log, which will collect and report the queries that take a long time to complete.
Beyond that, some suggestions: you're almost certainly not using some of Drupal's performance options, such as caching, and I would suggest using memcache to speed the site up as well (see http://drupal.org/project/memcache).
It really looks like you've got a query that's taking too long, and the slow query log is going to be the most useful tool - it'll tell you where you need to optimize your site. Do note that MySQL tends to use the first index on a table that it finds in the WHERE clause rather than the fastest one, so a query like "WHERE type='story' AND status = 1" is faster than "WHERE status = 1 AND type='story'" because the type index filters the data better than the status one. (And Views tends to put the items in the WHERE clause in the same order they appear in the Filter section.)
