SQL CPU trigger on machine memory load - windows

Does anyone know of a setting or trigger in MS SQL Server 2014 Standard Edition that would cause SQL Server to start consuming CPU when system memory climbs above 90% usage? I have a small application that does not access SQL Server but does use large amounts of memory. When memory utilization on the system goes above 90%, SQL Server starts showing continual CPU usage until the toolset is stopped.
I've run multiple analyses, such as Profiler traces, queries to see the last statements processed, the connections to the database, etc. Nothing stands out as the trigger for SQL Server to spike. What is even more odd is that this particular machine has both SQL Server 2014 Standard and SQL Server Express installed, and both versions will trigger the spike (this is not my server but one that I am having to work on).

Related

Timeout Issues with SQL Server 2014 Express and Entity Framework

I've recently had my laptop replaced and I've had to install Visual Studio 2015 and SQL Server 2014 Express with Management Studio.
My previous environment was Visual Studio 2015 with SQL Server 2008 R2 Express with Management Studio.
I restored the 2008 R2 databases into SQL Server 2014 Express, same database names, logins etc.
Now when I run any of my ASP.NET MVC 5 applications (using Entity Framework 6) on my laptop using Visual Studio, I'm getting sporadic timeout errors. Please see below.
Occasionally the application database calls will perform as expected, but mostly they are either very slow or timeout.
I'm finding it difficult to understand why this is as on my previous laptop using SQL Server 2008 R2 Express I never had any of these issues.
Also, these applications are on a live web server and being used by 1000s of users each day without any of these problems. This makes me think there is something possibly wrong with the installation of SQL Server 2014 Express on my laptop.
I have seen others comment on extending the Command Timeout on my DbContext
public class MyDatabase : DbContext
{
    public MyDatabase()
        : base(ContextHelper.CreateConnection("Connection string"), true)
    {
        // Allow every command issued through this context up to 180 seconds before timing out.
        ((IObjectContextAdapter)this).ObjectContext.CommandTimeout = 180;
    }
}
But I don't see this as a solution, as I didn't need it with my previous laptop/environment, and the current live applications don't need it either.
I'm stumped here and would really appreciate any help or guidance.
Thanks.
Update
Thanks to the suggestions from Steve Py I decided to check the memory usage on my new laptop when running Visual Studio 2015 and SQL Server 2014 Express concurrently. I've included a screenshot below which shows that 90% of the available memory is in use (3.5 GB out of 3.9 GB). I'm far from an expert in tuning a device for software development; however, this seems like it may be a reason why my applications time out when I run them locally.
Is there anyone on Stack Overflow who can tell me whether this looks like the likely problem?
Thanks.
Firstly I'd look at hooking up a profiler to capture the queries coming from EF. For SQL Server you can use ExpressProfiler. This will give you the actual SQL that EF is trying to run, the number of row reads and writes, and the execution time. Copy the SQL queries, paste them into a new query window on the DB, and execute them, plus have a look at the execution plan. Does the execution time correlate with EF? (Change the parameters and re-run in SQL to ensure you aren't getting cached results.)
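As a concrete example of re-running a captured query, the sketch below turns on IO and time statistics before executing it; the table and parameter here are placeholders for whatever EF actually generated. Combined with "Include Actual Execution Plan" in Management Studio, this gives you reads, CPU time, and the plan to compare against what EF reports.
-- Re-run a query captured from the profiler with IO/time statistics so its
-- cost can be compared with what EF reports (placeholder table and value).
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT *
FROM dbo.Orders              -- placeholder table
WHERE CustomerId = 42;       -- placeholder parameter value

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;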
Other factors are the hardware on the two laptops. You'd hope that the new laptop would have more grunt than the old one, more cores, better cores, more RAM, but how do they compare? How much memory is free when nothing is running? What kind of HDD was in the two machines? For instance dropping from an i7 with an SSD down to an i3 with 5400rpm HDD, and half the RAM will be extremely noticeable, even if the clock speed is higher.
When it comes to databases there are a number of factors that can impact performance, even when backing up and restoring. For instance the isolation level and recovery model settings for the database can play a part, especially for larger databases. I'd also look at server settings such as how much RAM the database server is allowed to use. Feel free to paste some results from the profiler for slow queries.
Edit: Based on the screenshot of the resource use, my guess is your new laptop is potentially underpowered. 4GB of RAM is bare-bones for Windows 10, especially when running Visual Studio and SQL Server, even for just a development database. The history graphs for CPU and disk also show heavy activity. If that's all you've got to work with, then the next step would be to look at what is using the memory. SQL Server by default will attempt to use whatever is available, and it can be quite greedy, but it's generally a good idea to set boundaries on the server. From SQL Server Management Studio you can bring up the properties of the server and select "Memory" to set a minimum and maximum memory size. For 4GB I'd set the minimum to 500MB and the max to 2000MB. For processors you can also use the "Boost SQL Server Priority" option.
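If you prefer to script it, the same memory caps can be applied with sp_configure; a minimal sketch using the 500 MB / 2000 MB values suggested above:
-- Cap SQL Server's memory use; min/max server memory are advanced options.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 500;
EXEC sp_configure 'max server memory (MB)', 2000;
RECONFIGURE;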
Next, on the database side of things, look into the file and recovery options. What is the size of the database MDF file, and of the transaction log (LDF file)? From the database properties window, under "General" you should see the "Size", which is the MDF size. For the LDF you will probably need to check on the file system. A large LDF can be bad for performance and indicates your database should be backed up and the log shrunk/truncated. Log files default to growing by a percentage, so they can grow fast and churn the disk. In the "Options" tab you can check the "Recovery Model" and set it to "Simple" for a dev database to significantly cut down on log file churn/growth. Production databases will use "Full".
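For reference, a rough T-SQL sketch of those steps for a development database; the database name and logical log file name are placeholders:
-- Check data/log file sizes (size is reported in 8 KB pages).
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('MyDevDatabase');

-- Switch a dev database to SIMPLE recovery to curb log growth.
ALTER DATABASE MyDevDatabase SET RECOVERY SIMPLE;

-- Shrink the log file back to roughly 100 MB (logical log name is a guess).
USE MyDevDatabase;
DBCC SHRINKFILE (MyDevDatabase_log, 100);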
For development purposes it helps to have a bit more grunt from a laptop. While things like "ultrabooks" look like good options and are nice and lightweight, they rarely have the grunt and resources needed for a dev environment (plus generally poor keyboards and displays to boot! :) ). There is also a significant gap in price between ultrabooks and workstation-replacement laptops. What I've found fits in that gap and serves as an exceptional development PC replacement is a gaming laptop. They are tuned for performance and usually come with 8GB minimum, with expansion available. They also happen to come with exceptional keyboards and displays, and they're typically a fair bit cheaper than workstation replacements, which seem to price in a premium. I use an MSI GE65 series which came with 16GB, SSD+HDD, a great keyboard, and was over $1000 cheaper than the closest "workstation" laptop. It does draw a couple of stares coming into a client site with a gaming laptop with its LED keyboard and lid badge; not a single game on it, though! :)

Good idea? MS SQL Limit CPU Affinity to prevent system lock down?

Recently I had to run a few heavy one-time queries on our MSSQL 2008 R2 64-bit server and faced a problem: executing them made SQL Server consume 100% CPU, which eventually (in about 20 seconds) made the server absolutely unresponsive.
Thus I was forced to either reboot it or wait until execution completed, which could take a long time depending on the query.
What I noticed is that setting the CPU affinity for SQL Server in Task Manager to 7 cores instead of the 8 available would keep the server responsive, so I could cancel my query if it took too long (and proceed with query optimizations without having to reboot).
But is it a good idea to limit CPU Affinity of SQL server?
Please share your thoughts. Server is being used for web-applications.
It turns out to be a Bad Idea.
After a few days with CPU affinity set to 7 of 8 cores, I noticed that SQL Server would continuously load 1-2 cores up to 100% while the other cores sat idle.
It is probably true that the SQL scheduler cannot distribute the workload correctly when CPU affinity is limited.
It's years later, but in case anyone finds this in a search: your assumption is correct that the work schedulers become locked to a core. However, there is a trace flag you can turn on to put this back: 8002.
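For anyone who prefers to script it, here is a sketch of setting process affinity from T-SQL (available in SQL Server 2008 R2 and later) instead of using Task Manager; the CPU numbers are just an example. Trace flag 8002 itself is normally added as a -T8002 startup parameter through SQL Server Configuration Manager.
-- Restrict SQL Server to CPUs 0-6, leaving one core for the OS (example numbering).
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;

-- Revert to letting SQL Server schedule across all CPUs.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;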

High CPU Usage by Resource Monitor in SQL Server 2012

Has anyone encountered this before? I have a two-node cluster with SQL Server 2012 SP1 Enterprise Edition installed on Windows Server 2012. These are VMs running on VMware 5.1. I have noticed that occasionally the CPU will spike all the way up to 100% and stay there for a while. When I checked to see what was utilizing that much CPU, it turned out to be the Resource Monitor. I know there was an issue with SQL Server 2008 with high CPU usage when virtual memory is low (KB 968722), but it was fixed in a service pack.
Is anyone seeing the same thing with SQL Server 2012 SP1? It's the exact same situation as mentioned in KB 968722 but instead of SQL Server 2008, it's happening on SQL Server 2012.
I just faced a similar issue. Our Windows team reported to me (the SQL DBA) that we have one server with high CPU on only 2 cores (the server has 10 cores). This server is part of a 2-node cluster and has 3 SQL Server instances installed. One of those instances was causing the CPU issue, and the big surprise was that it was the instance doing nothing: it was installed but not yet in use, and it was already causing CPU pressure on only 2 cores. Using the Thread object performance counters I identified the thread IDs, which I then used to query sys.sysprocesses to find those thread IDs (KPID).
SELECT * FROM sysprocesses
WHERE kpid IN (<Thread IDs>)
With that query I identified the session IDs in SQL Server; they were background processes. Using sp_who, the cmd for one session was "RESOURCE MONITOR" and for the other it was "LAZY WRITER".
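As an aside, on SQL Server 2005 and later the same KPID-to-session mapping can also be done through the scheduling DMVs rather than sysprocesses; a sketch, with the thread IDs as placeholders:
-- Map OS thread IDs (as seen in Performance Monitor) to SQL Server sessions.
SELECT t.os_thread_id, tk.session_id, s.status, s.last_request_start_time
FROM sys.dm_os_threads AS t
JOIN sys.dm_os_workers AS w ON w.thread_address = t.thread_address
JOIN sys.dm_os_tasks AS tk ON tk.worker_address = w.worker_address
LEFT JOIN sys.dm_exec_sessions AS s ON s.session_id = tk.session_id
WHERE t.os_thread_id IN (1234, 5678);   -- placeholder thread IDs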
I checked memory, and since this instance was new it was configured with min server memory of 1024 MB and max server memory of 1024 MB. I increased the max server memory setting to 2048 MB and the problem went away instantly.
I know this is not a universal solution, it was for my environment due to my context but hope it helps if somebody else is reading this question as well.
Quick answer (on SQL Server 2008 or later): RESOURCE MONITOR is probably taking high CPU time because SQL Server does not have enough RAM.
Check your Windows Task Manager / Resource Monitor for unnecessary RAM-intensive processes and clean them up.
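One way to check whether that is what's happening is to look at the resource monitor notifications SQL Server keeps in its ring buffers; a quick diagnostic sketch using the sys.dm_os_ring_buffers DMV:
-- Recent resource monitor notifications; the XML in the record column shows
-- whether the notifications were driven by low system (external) memory.
SELECT TOP (50) [timestamp], record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
ORDER BY [timestamp] DESC;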

IIS Web App performance issues....MVC3

I have an MVC3 C#.NET web app. It is running on IIS 7 on Windows Server 2008 R2. We are noticing significant performance issues when initially loading a page. We are using NHibernate and have found its performance to be slow in some instances, but all pages, even simple ones, are behaving similarly. I'm not really an IIS expert, so...
Am I missing something in IIS, a setting or action that I can tweak to improve performance?
I had a similar issue running a site on a shared host that only allocated 100 MB of RAM to an application pool; when you exceeded that, IIS was set to recycle it. The app generally ran at about 120 MB, so it was constantly recycling, and each page loaded painfully slowly as the whole thing started up again. Increasing the RAM available to the app pool fixed it.
Another thing that I'd try would be to set up SQL profiler and watch the queries being sent to the db. You can configure it to report the duration column in a smaller increment than the default (microseconds perhaps?) which makes the painful ones stand out. You can then pick up ones that are suspect, run them through query analyser with "display execution plan" switched on and examine the subtree costs. Perhaps NHibernate is generating nasty queries or perhaps too many?

How can I force SQL Server to use more CPU

I have a data transformation query which takes a long time to run on my development machine (a Core i7 920 running at 3.9 GHz, with 12 GB of RAM, under Windows Server 2003 x86, and with two 300 GB VelociRaptors in RAID 0).
When I look at the task manager, the CPU stays around 26%, with the third (out of 4) core being the most active.
As this is not a production environment, is there any way to tell SQL Server 2008 that I am alright with it using more of my CPU, or is it because my query cannot be parallelized for some reason?
If it can be, shouldn't SQL Server be smart enough to cut the query into smaller chunks and run it across several threads so each core gets a piece?
Thanks.
Optimize your query. Chances are that the issue is with it and not SQL Server.
It already knows that it's okay unless you specifically limited it to use only a certain number of CPUs either through configuration or through setting the MAXDOP parameter.
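As a rough illustration of what to check, the sketch below inspects the server-wide parallelism cap and shows what a per-query MAXDOP hint looks like; the query itself is a placeholder.
-- Server-wide setting: 0 means SQL Server may parallelize across all available CPUs.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';

-- Per-query hint on a placeholder query: allow up to 4 parallel threads.
SELECT COUNT(*)
FROM dbo.SourceTable   -- placeholder table
OPTION (MAXDOP 4);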
It sounds like you may be constrained by your hard drives or memory more than anything.
Note that because you are running an x86 version of Windows (and, by extension, SQL Server), you may be limited to around 3 GB of usable RAM. Even with PAE (Physical Address Extension) turned on, it is going to be far slower than running an x64 OS and SQL Server in the first place.
In other words, you might consider reinstalling the machine from the ground up to take advantage of all the x64 goodness you have.
