I have an MVC3 C# .NET web app. It is running on IIS 7 on Windows Server 2008 R2. We are noticing significant performance issues when initially loading a page. We are using NHibernate and have found its performance to be slow in some instances, but all pages, even simple ones, behave similarly. I'm not really an IIS expert, so.....
Am I missing something in IIS... a setting or action that I can tweak to improve performance?
I had a similar issue running a site on a shared host that only allocated 100 MB of RAM to an application pool; when you exceeded that, IIS was set to recycle the pool. The app generally ran at about 120 MB, so it was constantly recycling, and each page loaded painfully slowly as the whole thing started up again. Increasing the RAM available to the app pool fixed it.
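If you want to check or adjust that limit yourself rather than ask the host, the app pool recycling thresholds are scriptable. A minimal sketch using the IIS 7 Microsoft.Web.Administration API, assuming a pool named "MyAppPool" (a placeholder) and admin rights:

using System;
using Microsoft.Web.Administration;

class AppPoolMemoryLimit
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

            // Private memory limit, in KB, at which IIS recycles the worker process.
            // 0 means no limit; here we raise it to roughly 500 MB.
            pool.Recycling.PeriodicRestart.PrivateMemory = 500 * 1024;

            serverManager.CommitChanges();
            Console.WriteLine("Private memory recycle limit: {0} KB",
                pool.Recycling.PeriodicRestart.PrivateMemory);
        }
    }
}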
Another thing I'd try is setting up SQL Profiler and watching the queries being sent to the database. You can configure it to report the duration column in a smaller increment than the default (microseconds, perhaps?), which makes the painful ones stand out. You can then pick out the suspect ones, run them through Query Analyzer with "display execution plan" switched on and examine the subtree costs. Perhaps NHibernate is generating nasty queries, or perhaps too many of them?
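Alternatively, to see exactly what NHibernate is sending without firing up Profiler, you can switch on its own SQL logging. A minimal sketch, assuming the session factory is built in code; the property names are the standard NHibernate ones:

using NHibernate.Cfg;

// Build the configuration as usual (hibernate.cfg.xml, app.config, etc.),
// then echo every generated SQL statement to the console/configured logger.
var cfg = new Configuration().Configure();
cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");    // print each SQL statement
cfg.SetProperty(NHibernate.Cfg.Environment.FormatSql, "true");  // pretty-print it
var sessionFactory = cfg.BuildSessionFactory();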
I have a 9 MB PBIX containing small tables and one table with 250k rows, with data imported from various xlsx and JSON sources. The machine is Windows 10 Pro, 2.6 GHz, 64-bit, 16 GB RAM.
On the Power BI service online the performance is OK, but on desktop it's practically unworkable. With Task Manager I can see that it is using 7 MB of memory but almost 100% CPU half an hour after opening, while on a blank tab with no visualisations.
I don't understand what it is doing in the background and how I can improve the situation.
There is the 'Allow data preview to download in the background' setting, but I think this is only relevant to the query editor? Would clearing the cache or changing cache settings help?
I am aware of Performance Analyzer and the query diagnostics tools, but neither seems relevant, since the queries are not refreshing and there are no visualisations loading.
I'm at a bit of a loss - any help greatly appreciated.
Thanks
Update: Having disabled parallel load and background refresh in the Data load settings, I noticed that the issue finally seemed to go away (though not immediately). Eventually, when reopening the pbix, the mashup containers did not appear and CPU and memory were no longer being hammered. Then at some point Power BI got stuck and had to be closed, and the problem reappeared even though the data load settings were still disabled. Restarting the machine seemed to clear the problem once again.
It seems, then, that some zombie processes can persist through closing and reopening the application. Has anyone else noticed this, can confirm or refute it, or can suggest what is going on or any steps to avoid/prevent it? It's very annoying!
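For what it's worth, a quick way to confirm whether containers really are surviving a close is to list them after shutting Power BI down. A rough sketch; the "Microsoft.Mashup.Container" process-name prefix is an assumption, so check Task Manager for the exact name on your machine:

using System;
using System.Diagnostics;
using System.Linq;

class FindStrayContainers
{
    static void Main()
    {
        // List any mashup container processes still alive after Power BI Desktop was closed.
        var leftovers = Process.GetProcesses()
            .Where(p => p.ProcessName.StartsWith("Microsoft.Mashup.Container",
                                                 StringComparison.OrdinalIgnoreCase))
            .ToList();

        foreach (var p in leftovers)
        {
            Console.WriteLine("{0} (PID {1})", p.ProcessName, p.Id);
            // p.Kill();  // uncomment to terminate the stray container
        }
    }
}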
Thanks
I have also noticed the same issue: when opening a 5 MB pbix file, Power BI eats 12 GB of memory and over 90% CPU. Power BI Desktop is a poorly managed product by Microsoft.
I've recently had my laptop replaced and I've had to install Visual Studio 2015 and SQL Server 2014 Express with Management Studio.
My previous environment was Visual Studio 2015 with SQL Server 2008 R2 Express with Management Studio.
I restored the 2008 R2 databases into SQL Server 2014 Express, same database names, logins etc.
Now when I run any of my ASP.NET MVC 5 applications (using Entity Framework 6) on my laptop using Visual Studio, I'm getting sporadic timeout errors. Please see below.
Occasionally the application database calls will perform as expected, but mostly they are either very slow or timeout.
I'm finding it difficult to understand why this is, as on my previous laptop with SQL Server 2008 R2 Express I never had any of these issues.
Also, these applications are on a live web server and being used by 1000s of users each day without any of these problems. This makes me think there is something possibly wrong with the installation of SQL Server 2014 Express on my laptop.
I have seen others comment on extending the command timeout on my DbContext:
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class MyDatabase : DbContext
{
    public MyDatabase()
        : base(ContextHelper.CreateConnection("Connection string"), true)
    {
        // Raise the command timeout to 180 seconds for all queries issued by this context.
        ((IObjectContextAdapter)this).ObjectContext.CommandTimeout = 180;
    }
}
But I don't see this as a solution, as I didn't need it with my previous laptop/environment, and the current live applications don't need it either.
I'm stumped here and would really appreciate any help or guidance.
Thanks.
Update
Thanks to the suggestions from Steve Py, I decided to check the memory usage on my new laptop when running Visual Studio 2015 and SQL Server 2014 Express concurrently. I've included a screenshot below which shows that 90% of the available memory is used (3.5 GB out of 3.9 GB). I'm far from an expert in tuning a device for software development; however, this may be a reason why my applications are timing out when I run them locally.
Is there anyone on Stack Overflow who can tell me whether this looks like the possible problem?
Thanks.
Firstly, I'd look at hooking up a profiler to capture the queries coming from EF. For SQL Server you can use ExpressProfiler. This will give you the actual SQL EF is trying to run, the number of row reads and writes, and the execution time. Copy the SQL queries, paste them into a new query window on the database and execute them, and have a look at the execution plan. Does the execution time correlate with EF? (Change parameters and re-run in SQL to ensure you aren't getting cached results.)
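If attaching a profiler isn't convenient, EF6 can also log the SQL it generates, along with parameters and execution time, via the Database.Log hook; that is often enough to spot the slow statements. A minimal sketch (the entity and query are placeholders):

using System.Linq;

using (var db = new MyDatabase())
{
    // EF6 writes every generated statement, its parameters and its duration
    // to whatever delegate is assigned here (the Output window in this case).
    db.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);

    // Run a suspect query; the SQL and timing appear in the log.
    var rows = db.Set<SomeEntity>().Take(10).ToList();   // SomeEntity is a placeholder
}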
Other factors are the hardware in the two laptops. You'd hope the new laptop would have more grunt than the old one (more cores, better cores, more RAM), but how do they compare? How much memory is free when nothing is running? What kind of drive is in each machine? For instance, dropping from an i7 with an SSD down to an i3 with a 5400 rpm HDD and half the RAM will be extremely noticeable, even if the clock speed is higher.
When it comes to databases there are a number of factors that can impact performance, even when backing up and restoring. For instance, the isolation level and recovery model settings for the database can play a part, especially for larger databases. I'd also look at server settings such as how much RAM the database server is allowed to use. Feel free to paste some results from the profiler for slow queries.
Edit: Based on the screenshot of the resource use, my guess is your new laptop is potentially underpowered. 4 GB of RAM is bare-bones with Windows 10, especially when running Visual Studio and SQL Server, even for just a development database. The history graphs for CPU and disk also show heavy activity. If that's all you've got to work with, then the next step is to look at what is using the memory. SQL Server by default will attempt to use whatever is available, and it can be quite greedy, so it's generally a good idea to set boundaries on the server. From SQL Server Management Studio you can bring up the properties of the server and select "Memory" to set a minimum and maximum memory size. For 4 GB I'd set the minimum to 500 MB and the maximum to 2000 MB. For processors you can use "Boost SQL Server Priority".
Next, on the database side of things, look into the file and recovery options. What is the size of the database MDF file and of the transaction log (LDF file)? From the database properties window, under "General" you should see the "Size", which is the MDF size; for the LDF you will probably need to check the file system. A large LDF can be bad for performance and indicates your database should be backed up and the log shrunk/truncated. Log files default to growing by a percentage, so they can grow fast and churn the disk. In the "Options" tab you can check the "Recovery Model" and set it to "Simple" for a dev database to significantly cut down on log file churn/growth. Production databases will use "Full".
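If you'd rather script those changes than click through SSMS, both the memory cap and the recovery model are plain T-SQL and can be run from a throwaway C# snippet (or directly in a query window). A rough sketch; the connection string, database name and the 500/2000 MB figures are placeholders to adjust:

using System.Data.SqlClient;

class DevSqlTuning
{
    static void Main()
    {
        // Cap SQL Server's memory use and switch the dev database to the
        // SIMPLE recovery model (dev box only; production stays FULL).
        const string sql = @"
            EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
            EXEC sp_configure 'min server memory (MB)', 500;  RECONFIGURE;
            EXEC sp_configure 'max server memory (MB)', 2000; RECONFIGURE;
            ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;";

        using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Integrated Security=true"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}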
For development purposes it helps to have a bit more grunt in a laptop. While things like ultrabooks look like good options and are nice and lightweight, they rarely have the grunt and resources needed for a dev environment (plus generally poor keyboards and displays to boot! :) ). There is also a significant gap in price between ultrabooks and workstation-replacement laptops. What I've found fits in that gap, and serves as an exceptional development PC replacement, is a gaming laptop. They are tuned for performance and usually come with 8 GB minimum, with expansion available. They also happen to come with exceptional keyboards and displays, and they're typically a fair bit cheaper than workstation replacements, which seem to price in a premium. I use an MSI GE65 series which came with 16 GB, SSD+HDD and a great keyboard, and was over $1000 cheaper than the closest "workstation" laptop. It does draw a couple of stares coming into a client site with a gaming laptop, with its LED keyboard and lid badge; not a single game on it though! :)
Our web app is running on AWS with Ubuntu. We developed it on top of the Play Framework. Right after the app is deployed it is pretty quick; however, after a day or so it slows down significantly. I checked the resource usage of the OS and it seems normal, and the machine is responsive; it's just the web service that is slow to respond. I suspect there is a memory, thread pool or some other resource leak. Any suggestion about how to investigate it? I used the 'top' and 'ps' commands to look at current resource usage, but they all seem normal.
You may want to create a core dump and then take that to your dev computer and examine it. This is not the easiest way, but if you have limited access to the box it may be required.
Create a core dump
Analyze Core Dump File?
So the times you see are an example of typical development. You fire up your server and MySQL database, then log in to the backend and try to add a simple thing like a menu item.
The times shown are only for the server to start responding, not for the page to actually finish loading. So this is time spent on the server, in the code, executing queries etc. The JS and CSS files are not part of this measurement.
I can keep going. Clicking on "New Menu Item" and hitting "Save" will take just as long.
So for a simple thing like adding a menu item, the user spends roughly a minute looking at a blank screen (assuming the user knows Joomla by heart and makes no wrong clicks, and thus never has to go back).
Caching
So I read about caching. If I enable Page Caching I cannot keep developing, because my changes no longer seem to get refreshed, and you really want to see them when you develop.
The View Caching actually speeds up the backend and the frontend a lot, but you still have to access the page once, slowly, before it gets cached, and you have to access it again while the cache entry still exists to profit from it. So for me this means the backend is basically always slow, unless I try to do something like adding 10 menu items within 15 minutes.
By the way, I run this on a brand-new notebook, which really should not be the problem.
Is there something I am missing out on?
Is this actually normal?
EDIT
I could improve my times to around 2 seconds. The profile still shows a lot of red, though; does anyone have an idea? The picture is of the menu manager view, main menu menu items.
My times are all below 2 seconds, usually approximately 1 second, both on my development server (a VM running CentOS 6 in VirtualBox hosted on Win7, i7 / 6 GB RAM / SSD disk) and my production server (dual 2 GHz Xeon / 4 GB / 10,000 rpm SATA disks).
Enable debug for your site and check, at the bottom of the page, the time each module / component / event takes to run. This will let you determine whether a single extension / piece of Joomla is eating up all the time, or whether it's just your machine.
I don't have a particularly good local machine (just a cheap W8 box using EasyPHP) and my times are all much faster than either yours or those other people are reporting. One of the things you can do is turn on debug and look at the profiling data. When I load the admin login page, even with debug on, I can see that onAfterDispatch is the slowest part of the process.
A lot of times upgrading MySQL will give massive speed improvements.
When working on a localhost, the loading time usually depends on the PC's performance. I spend a hellish amount of time using WampServer (localhost), both at work and on my computer at home.
When installing a fresh copy of Joomla 3.2 on WAMP at home, the step that creates the database and inserts the default content takes around 7-9 seconds, whereas at work it literally takes under 2 seconds. The reason? My work computer's performance is much better than my personal computer's.
It's the same concept for loading pages in the backend.
Hope this bit of info helped you.
Our ASP.Net 2.0 web app was running happily along on Windows Server 2003. We were starting to see some of the limits of the environment approaching, such as memory and CPU usage spikes, and as we're getting ready to scale we decided it was time for a larger server with higher availability.
We decided to move to Windows Server 2008 to take advantage of IIS 7's shared configuration. In our development and integration environments we reproduced the OS and app on 2008/IIS 7 and everything seemed fine. But truth be told, we don't have a good way of simulating production-like loads yet, nor can we reproduce our prod environment accurately (we're small, with limited resources). So once we rolled out to production, we were surprised to find performance significantly worse on 2008 than it was on 2003.
We've also moved from a 32-bit environment to 64-bit in the process, and we've incorporated ASP.NET 3.5 DLLs into the project.
Memory usage is through the roof, but I'm not as worried about that; we believe part of it is Server 2008's memory overhead, so throwing more RAM at it may solve that issue. The troubling thing is that we're seeing processor spikes to 99% CPU utilization, which we never saw in the 2003/IIS 6 environment.
Has anyone encountered these issues before and are there any suggestions for a solution/places to look? Right now we're doing the following:
1) Buying time by adding memory.
2) Buying time by setting app pool limits: shut down w3wp.exe when CPU hits 99% load. Since there is no option to recycle the app pool on the CPU limit (only to kill it), I have a scheduled task running that restarts any stopped app pools (see the sketch after this list).
3) Profiling the app pools under Classic and Integrated modes to see which may perform better.
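Regarding point 2, one way such a task can be implemented is as a small console program using the Microsoft.Web.Administration API; a simplified sketch (illustrative, not necessarily how ours is set up):

using System;
using Microsoft.Web.Administration;

class RestartStoppedPools
{
    static void Main()
    {
        // Scan all application pools and start any that IIS has shut down
        // (e.g. after the 99% CPU limit killed the worker process).
        using (var serverManager = new ServerManager())
        {
            foreach (ApplicationPool pool in serverManager.ApplicationPools)
            {
                if (pool.State == ObjectState.Stopped)
                {
                    Console.WriteLine("Starting stopped pool: " + pool.Name);
                    pool.Start();
                }
            }
        }
    }
}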
Any other ideas are completely welcome.
Our experience is that code runs much faster on 64-bit Windows Server 2008 than on 32-bit Windows Server 2003.
I am wondering if something else is also running on the machine. For example, is SQL Server installed with a maintenance plan that could cause the CPU spikes?
I would check the following:
Which process is using the CPU?
Is there a change in the code? Try installing the new code on the old machine.
Is it something to do with the compile options? Is the CPU usage caused by a recompile?
Are there any errors in the event log?
In our case, since we have 4 processors, we increased the number of worker processes to 4; it has been working well so far compared to before.
Here is a snapshot: http://pic.gd/c3661a
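For anyone wanting to script that change rather than set it in IIS Manager, the web garden size is exposed through Microsoft.Web.Administration; a minimal sketch with a placeholder pool name:

using Microsoft.Web.Administration;

class SetWebGardenSize
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Allow up to 4 worker processes (a "web garden") for this pool.
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
            pool.ProcessModel.MaxProcesses = 4;
            serverManager.CommitChanges();
        }
    }
}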
You can use the application pool "Recycling" options in IIS 7+ to configure private and virtual memory limits for application pools; once these are reached, the process recycles and the resources are released. Unfortunately the option to recycle based on CPU usage has been removed from IIS 7+ (someone correct me if I'm wrong). If you have other apps on the server and want to avoid them competing for resources when this condition happens, you can implement Windows System Resource Manager and its IIS policy (here is a good tutorial: http://learn.iis.net/page.aspx/449/using-wsrm-to-manage-iis-70-apppool-cpu-utilization/).
Note that WSRM is only available on the Enterprise and Datacenter editions.