High CPU Usage by Resource Monitor in SQL Server 2012 - performance

Has anyone encountered this before? I have a two-node cluster with SQL Server 2012 SP1 Enterprise Edition installed on Windows Server 2012. These are VMs running on VMware 5.1. I have noticed that occasionally the CPU spikes all the way up to 100% and stays there for a while. When I checked what was using that much CPU, it turned out to be the Resource Monitor. I know there was an issue in SQL Server 2008 with high CPU usage when virtual memory is low (KB 968722), but it was fixed in a service pack.
Is anyone seeing the same thing with SQL Server 2012 SP1? It's the exact same situation as mentioned in KB 968722 but instead of SQL Server 2008, it's happening on SQL Server 2012.

I just faced a similar issue. Our Windows team reported to me (the SQL DBA) that we had one server with high CPU on only 2 cores (the server has 10 cores). This server is part of a 2-node cluster and has 3 SQL instances installed. One of those instances was causing the CPU issue, and the surprising part was that it was the instance doing nothing: it was installed but not in use yet, and it was already causing CPU problems on only 2 cores. Using the Thread object performance counters I identified the OS thread IDs, which I then used to query the sys.sysprocesses view (the KPID column):
SELECT * FROM sys.sysprocesses
WHERE kpid IN (<Thread IDs>)
That query gave me the session IDs on the SQL Server side; they were background processes. Using sp_who, the cmd for one session was "RESOURCE MONITOR" and for the other it was "LAZY WRITER".
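On SQL Server 2005 and later the same thread-to-session mapping can also be done through the DMVs instead of the old sysprocesses view. A sketch (the thread ID list is again a placeholder for the KPID values collected from Perfmon):
SELECT th.os_thread_id, ta.session_id
FROM sys.dm_os_threads AS th
JOIN sys.dm_os_workers AS w ON w.thread_address = th.thread_address
JOIN sys.dm_os_tasks AS ta ON ta.worker_address = w.worker_address
WHERE th.os_thread_id IN (<Thread IDs>)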
I checked memory, and since this instance was new, it was configured with min server memory of 1024 MB and max server memory of 1024 MB. I increased the max server memory setting to 2048 MB and the problem went away instantly.
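If you'd rather script the change than click through SSMS, a minimal sketch ('max server memory (MB)' is an advanced option, and 2048 is simply the value from my case):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;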
I know this is not a universal solution; it worked in my environment because of my particular context, but I hope it helps anyone else reading this question.

Quick answer (if you're on a version newer than SQL Server 2008): RESOURCE MONITOR is probably taking high CPU time because SQL Server is short on RAM.
Check Windows Task Manager / Resource Monitor for unnecessary RAM-intensive processes and clean them up.
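You can also confirm memory pressure from inside SQL Server. A quick sketch (sys.dm_os_sys_memory is available on SQL Server 2008 and later):
SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       system_memory_state_desc  -- e.g. 'Available physical memory is low'
FROM sys.dm_os_sys_memory;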

Related

Timeout Issues with SQL Server 2014 Express and Entity Framework

I've recently had my laptop replaced and I've had to install Visual Studio 2015 and SQL Server 2014 Express with Management Studio.
My previous environment was Visual Studio 2015 with SQL Server 2008 R2 Express with Management Studio.
I restored the 2008 R2 databases into SQL Server 2014 Express, same database names, logins etc.
Now when I run any of my ASP.NET MVC 5 applications (using Entity Framework 6) on my laptop using Visual Studio, I'm getting sporadic timeout errors. Please see below.
Occasionally the application database calls perform as expected, but mostly they are either very slow or time out.
I'm finding it difficult to understand why this is, as on my previous laptop using SQL Server 2008 R2 Express I never had any of these issues.
Also, these applications are on a live web server and being used by 1000s of users each day without any of these problems. This makes me think there is something possibly wrong with the installation of SQL Server 2014 Express on my laptop.
I have seen others comment on extending the Command Timeout on my DbContext
public class MyDatabase : DbContext
{
    public MyDatabase()
        : base(ContextHelper.CreateConnection("Connection string"), true)
    {
        // command timeout in seconds
        ((IObjectContextAdapter)this).ObjectContext.CommandTimeout = 180;
    }
}
But I don't see this as a solution, as I didn't need it with my previous laptop/environment, and the current live applications don't need it either.
I'm stumped here and would really appreciate any help or guidance.
Thanks.
Update
Thanks to the suggestions from Steve Py I decided to check the memory performance of my new laptop when running Visual Studio 2015 and SQL Server 2014 Express concurrently. I've included a screenshot below which shows that 90% of the available memory is used (3.5 GB out of 3.9 GB). I'm far from an expert in tuning a device for software development; however, this may be a reason why my applications are timing out when I run them locally.
Is there anyone on Stack Overflow who can tell me if this looks like the possible problem?
Thanks.
Firstly I'd look at hooking up a profiler to capture the queries coming from EF. For SQL Server you can use ExpressProfiler. This will give you the actual SQL EF is trying to run, the number of row reads and writes, and the execution time. Copy the SQL queries, paste them into a new query window against the database, execute them, and have a look at the execution plan. Does the execution time correlate with what EF sees? (Change the parameters and re-run in SQL to ensure you aren't getting cached results.)
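If you'd rather not install a separate tool, an Extended Events session can capture the same information server-side. A minimal sketch (the session name is arbitrary; EF's parameterized queries show up as rpc_completed):
CREATE EVENT SESSION ef_capture ON SERVER
ADD EVENT sqlserver.rpc_completed,
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION ef_capture ON SERVER STATE = START;
-- watch the output via "Watch Live Data" in SSMS, then:
-- ALTER EVENT SESSION ef_capture ON SERVER STATE = STOP;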
Other factors are the hardware in the two laptops. You'd hope the new laptop has more grunt than the old one (more cores, better cores, more RAM), but how do they compare? How much memory is free when nothing is running? What kind of drive is in each machine? For instance, dropping from an i7 with an SSD to an i3 with a 5400 rpm HDD and half the RAM will be extremely noticeable, even if the clock speed is higher.
When it comes to databases there are a number of factors that can impact performance, even when backing up and restoring. For instance the Isolation Level and recovery model settings for the database can play a part, especially for larger databases. I'd also look at server settings such as how much RAM the database server is allocated to be able to use. Feel free to paste some results from the profiler for slow queries.
Edit: Based on the screenshot of the resource use, my guess is your new laptop is underpowered. 4 GB of RAM is bare-bones with Windows 10, especially for running Visual Studio and SQL Server together, even with just a development database. The history graphs for CPU and disk also show heavy activity. If that's all you've got to work with, then the next step is to look at what is using the memory. SQL Server by default will attempt to use whatever is available, and it can be quite greedy, so it's generally a good idea to set boundaries on the server. From SQL Server Management Studio you can bring up the properties of the server and select "Memory" to set a minimum and maximum memory size. For 4 GB I'd set the minimum to 500 MB and the max to 2000 MB. For processors there is also the "Boost SQL Server Priority" option.
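The same limits can be scripted; a sketch using the values above:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 500;
EXEC sp_configure 'max server memory (MB)', 2000;
RECONFIGURE;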
Next, on the database side of things, look into the file and recovery options. What is the size of the database MDF file and of the transaction log (LDF file)? From the database properties window under "General" you should see "Size", which is the MDF size. For the LDF you will probably need to check the file system. A large LDF can be bad for performance and can indicate that your database should be backed up and the log shrunk/truncated. Log files default to growing by percent, so they can grow fast and churn the disk. In the "Options" tab you can check the "Recovery Model" and set that to "Simple" for a dev database to significantly cut log file churn and growth. Production databases will use "Full".
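A sketch of the equivalent T-SQL, with hypothetical names (MyDatabase and MyDatabase_log) and an arbitrary 256 MB target size:
USE MyDatabase;
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDatabase_log, 256);  -- target size in MB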
For development purposes it helps to have a bit more grunt in a laptop. While things like "ultrabooks" look like good options and are nice and lightweight, they rarely have the grunt and resources needed for a dev environment (plus generally poor keyboards and displays to boot! :) ). There is also a significant gap in price between ultrabooks and workstation-replacement laptops. What I've found fits in that gap and serves as an exceptional development PC replacement is a gaming laptop. They are tuned for performance and usually come with 8 GB minimum, with expansion available. They also happen to come with exceptional keyboards and displays, and they're typically a fair bit cheaper than workstation replacements, which seem to price in a premium. I use an MSI GE65 series which came with 16 GB, SSD+HDD, and a great keyboard, and was over $1000 cheaper than the closest "workstation" laptop. It does draw a couple of stares coming into a client site with a gaming laptop with its LED keyboard and lid badge; not a single game on it though! :)

Good idea? MS SQL Limit CPU Affinity to prevent system lock down?

Recently I had to run a few heavy one-time queries on our MS SQL 2008 R2 64-bit server and hit a problem: executing them made SQL Server consume 100% CPU, which eventually (in about 20 seconds) made the server absolutely unresponsive.
Thus I was forced to reboot it, or wait until execution completed, which could take a long time depending on the query.
What I noticed is that setting the CPU affinity for SQL Server to 7 of the 8 available cores in Task Manager kept the server responsive, so I could cancel my query if it took too long (and proceed with query optimization without having to reboot).
But is it a good idea to limit the CPU affinity of SQL Server?
Please share your thoughts. The server is used for web applications.
It turns out to be a Bad Idea.
After a few days with CPU affinity set to 7 of 8 cores, I noticed that SQL Server would continuously load 1-2 cores up to 100% while other cores sat idle.
It is probably true that the SQL scheduler cannot distribute the workload correctly when CPU affinity is limited.
It's years later, but in case anyone finds this in a search: your assumption is correct that work schedulers become locked to a core. However, there is a trace flag you can turn on to put this back: 8002.
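For anyone scripting this, a sketch for an 8-core box (ALTER SERVER CONFIGURATION is available from SQL Server 2008 R2; the trace flag is normally set as a -T8002 startup parameter, but can be enabled globally for a quick test):
-- pin SQL Server to cores 0-6, leaving core 7 for the OS
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;
-- enable trace flag 8002 so schedulers aren't locked to individual cores
DBCC TRACEON (8002, -1);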

SQL Server process priority

I made a CLR plugin for SQL Server 2008 R2 Developer Edition that runs a lot of floating-point computation on multiple threads. To test it, I used my laptop (Core 2 Duo 6670), and those calculations ran on 2 threads. This caused the CPU to sit at 100% usage.
The question is this: when the SQL process occupied 100% of the CPU (for 2-3 minutes), my computer stopped responding (the cursor didn't move, the clock wasn't updated, the entire UI was dead). It never happened with other programs, so the question is: "Does SQL Server run with a higher priority than other services?"
Thanks
Taken from http://msdn.microsoft.com/en-us/library/ms188709%28v=sql.100%29.aspx
Use the priority boost option to specify whether Microsoft SQL Server should run at a higher Microsoft Windows 2000 or Windows 2003 scheduling priority than other processes on the same computer. If you set this option to 1, SQL Server runs at a priority base of 13 in the Windows 2000 or Windows Server 2003 scheduler. The default is 0, which is a priority base of 7.
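So by default it does not run boosted. To check how an instance is configured, a sketch using the sys.configurations catalog view (which shows advanced options without any extra setup):
SELECT name, value_in_use, description
FROM sys.configurations
WHERE name = 'priority boost';  -- value_in_use 0 = default priority base of 7
Note that changing the option (via sp_configure) requires a service restart to take effect.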

How can I force SQL Server to use more CPU

I have a data transformation query which takes a long time to run on my development machine (a Core i7 920 running at 3.9 GHz, with 12 GB of RAM, under Windows Server 2003 x86, and with two 300 GB VelociRaptors in RAID 0).
When I look at the task manager, the CPU stays around 26%, with the third (out of 4) core being the most active.
As this is not a production environment, is there any way to tell SQL Server 2008 that I am alright with it using more of my CPU, or is it that my query cannot be parallelized for some reason?
If it can be parallelized, shouldn't SQL Server be smart enough to cut the query into smaller chunks and run it across several threads so that each core gets a piece?
Thanks.
Optimize your query. Chances are that the issue is with it and not SQL Server.
It already knows that it's okay to use all the CPUs unless you specifically limited it, either through the server configuration or through the MAXDOP query hint.
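A quick way to check the instance-wide setting, plus the per-query override; a sketch (dbo.BigTable stands in for your own query):
SELECT name, value_in_use FROM sys.configurations
WHERE name = 'max degree of parallelism';  -- 0 = use all available CPUs

SELECT COUNT(*) FROM dbo.BigTable
OPTION (MAXDOP 4);  -- allow up to 4 parallel threads for this query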
It sounds like you may be constrained by your hard drives or memory more than anything.
Note that because you are running an x86 version of Windows (and, by extension, of SQL Server), you may be RAM-limited to around 3 GB. And even with PAE (Physical Address Extension) turned on, it's going to be a world of difference slower than having an x64 OS and SQL Server to begin with.
In other words, you might consider reinstalling the machine from the ground up to take advantage of all the x64 goodness you have.

Significant Performance Decrease when moving from Windows Server 2003 to 2008 (IIS 6 to IIS 7)

Our ASP.Net 2.0 web app was running happily along on Windows Server 2003. We were starting to see some of the limits of the environment approaching, such as memory and CPU usage spikes, and as we're getting ready to scale we decided it was time for a larger server with higher availability.
We decided to move to Windows Server 2008 to take advantage of IIS 7's shared configuration. In our development and integration environments, we reproduced the OS and app on 2008/IIS 7 and everything seemed fine. But truth be told, we don't have a good way of simulating production-like loads yet, nor can we reproduce our prod environment accurately (we're small, with limited resources). So once we rolled out to production, we were surprised to find performance significantly worse on 2008 than it was on 2003.
We've also moved from a 32-bit environment to 64-bit in the process, and we've incorporated the ASP.NET 3.5 DLLs into the project.
Memory usage is through the roof, but I'm not as worried about that. We believe this is partly due to Server 2008's memory overhead, so throwing more RAM at it may solve that issue. The troubling thing is we're seeing processor spikes to 99% CPU utilization, which we never saw in the 2003/IIS 6 environment.
Has anyone encountered these issues before and are there any suggestions for a solution/places to look? Right now we're doing the following:
1) Buying time by adding memory.
2) Buying time by setting app pool limits: shutting down w3wp.exe when CPU hits 99% load. Since you don't get the option to recycle the app pool on a CPU limit, I have a scheduled task running that recycles any stopped app pools.
3) Profiling the app pools under Classic and Integrated modes to see which may perform better.
Any other ideas are completely welcome.
Our experience is that code runs much faster on 64-bit Windows 2008 than on a 32-bit Windows 2003 server.
I am wondering if something else is also running on the machine. For example, is SQL Server installed with a maintenance plan that could cause the CPU spikes?
I would check the following:
Which process is using the CPU?
Is there a change in the code? Try installing the new code on the old machine.
Is it something to do with the compile options? Is the CPU usage caused by a recompile?
Are there any errors in the event log?
In our case, since we have 4 processors, we increased the number of worker processes to 4. It is currently working well compared to before.
Here is a snapshot:
http://pic.gd/c3661a
You can use the application pool "Recycle" option in IIS 7+ to configure physical and virtual memory limits for application pools. Once these are reached the process will recycle and the resources will be released. Unfortunately the option to recycle based on CPU usage has been removed from IIS 7+ (someone correct me if I'm wrong). If you have other apps on the server and want to avoid them competing for resources when this condition happens, you can implement Windows System Resource Manager (WSRM) and its IIS policy (here is a good tutorial: http://learn.iis.net/page.aspx/449/using-wsrm-to-manage-iis-70-apppool-cpu-utilization/).
Note WSRM is only available on the Enterprise and Datacenter editions.
