We are running a Windows service that checks a folder for files every 5 seconds and, if any are found, logs some information about them using NLog.
I already tried the suggestions from "ASP.NET: High CPU usage under no load", without success.
Just after the service starts there is hardly any CPU usage. After a few hours we see CPU peaks of 100%, and after some more waiting the CPU graph looks like:
I tried the steps described in http://blogs.technet.com/b/sooraj-sec/archive/2011/09/14/collecting-data-using-xperf-for-high-cpu-utilization-of-a-process.aspx to produce information on what is going on:
I don't know where to continue. Any help is appreciated.
Who wrote this Windows service? Was it you, or a third party?
To me, checking a folder for changes every 5 seconds sounds really suspicious, and it may be the primary reason you are seeing this massive slowdown.
If you do it right, you can get directory changes immediately as they happen, yet spend almost no CPU time doing it.
This Microsoft article explains exactly how to do that: Obtaining Directory Change Notifications, using the functions FindFirstChangeNotification, FindNextChangeNotification, ReadDirectoryChangesW and WaitForMultipleObjects.
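In .NET, the same mechanism is wrapped by the FileSystemWatcher class, which uses ReadDirectoryChangesW under the hood. A minimal sketch (the folder path and the logging call are placeholders for whatever the service actually does):

```csharp
using System;
using System.IO;

class FolderWatcher
{
    static void Main()
    {
        // Placeholder path; point this at the folder the service monitors.
        var watcher = new FileSystemWatcher(@"C:\inbox")
        {
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
        };

        // Fires as soon as a file appears, instead of polling every 5 seconds.
        watcher.Created += (sender, e) =>
            Console.WriteLine($"New file: {e.FullPath}"); // replace with NLog call

        watcher.EnableRaisingEvents = true;

        Console.ReadLine(); // keep the process alive while waiting for events
    }
}
```

While the watcher waits, the thread consumes no CPU at all; the callback only runs when the OS reports a change.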
After a lot of digging, it turned out to be this:
The service had a private object X with a property Y.
Every time the timer fired, X was passed to the business logic, where Y was used and disposed at the end. The garbage collector then waits until X is disposed, which never happens until the service is restarted. This created an extra waiting GC thread every time the timer fired.
Related
I have a .NET Core 3.1 console application running on an Ubuntu 20.04 x64 server, and I randomly experience high-CPU episodes (100% across 4 cores).
Following the dotnet diagnostics tooling, I start a trace during the peak time for my app, like:
dotnet-trace collect -p 1039 --providers Microsoft-DotNETCore-SampleProfiler
From the resulting .nettrace file, opened in Visual Studio, I can see the function list with CPU time for each function.
But as I understand it, the "CPU time" here is actually wall time: it only measures how long a function call stayed on a thread's stack, regardless of whether it consumed real CPU resources or not.
The hottest spots in this .nettrace point to these lines of code (pseudo code):
while (true)
{
    Thread.Sleep(1000); // <--------- hottest spot
    socket.Send(bytes);
}
and
while (true)
{
    manualResetEvent.WaitOne(); // <--------- hottest spot
    httpClient.Post(data);
}
Obviously, the two hottest spots above don't consume real CPU resources; they are just idle waiting. Is there any way to trace the functions that used real CPU time, like JetBrains dotTrace provides:
You might want to use an external tool like top. It can help identify the process (and its threads) actually consuming CPU.
If your profiler identifies Thread.Sleep() as the hottest spot, chances are that your application is waiting for some external process outside the scope of the profiler.
I would suggest refactoring this code to use async/await and using `await Task.Delay(xxx)` instead of blocking at the thread level.
I'm making this suggestion based on a partially similar problem described here:
Why Thread.Sleep() is so CPU intensive?
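A sketch of that refactoring, assuming the send loop in the question can be made async (the `socket` and `bytes` fields stand in for whatever the original code uses):

```csharp
using System.Net.Sockets;
using System.Threading.Tasks;

class Sender
{
    // Instead of blocking a thread pool thread with Thread.Sleep,
    // await Task.Delay: the thread is released back to the pool while waiting,
    // so the profiler no longer shows a long-lived "hot" stack frame.
    static async Task SendLoopAsync(Socket socket, byte[] bytes)
    {
        while (true)
        {
            await Task.Delay(1000); // yields the thread; no busy thread stack
            await socket.SendAsync(new ArraySegment<byte>(bytes),
                                   SocketFlags.None);
        }
    }
}
```

The wall-time-vs-CPU-time ambiguity doesn't disappear in the trace, but with async waits there are far fewer parked threads cluttering the sample profile.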
I am writing an application that maps a file into memory to make some information resilient to failures (crash, power outage, etc). I know that the idea is to flush as infrequently as allowable, but to Do Things Right, and considering the goal, it seems to me that I should essentially flush to disk whenever the data has changed.
All the mapped data fits into a single page. I have a bursty usage pattern (nothing happens for a looong time, then all of a sudden the information is modified ~20 times in a row). For this reason I'm hesitant about FlushViewOfFile, since it appears to be synchronous, and flushing on every hit of a burst seems inefficient.
Is there a way I can tell Windows to flush the pages the next time it has an idle cycle, without making me wait until it does?
I do not believe there is a function in Windows for that; FlushViewOfFile is what you have to work with. You're going to have to build a small 'scheduler' in your program that matches your use-case/profile. For example: start a short timer after each hit, reset it on every further hit, and flush the page when it expires; plus one long timer that, when it expires, flushes the page even if you're still mid-burst. In any case, you'll need to profile the actual usage and have the program act accordingly.
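A sketch of that two-timer scheme in C# (the names and intervals are illustrative; MemoryMappedViewAccessor.Flush is the managed wrapper over FlushViewOfFile):

```csharp
using System.IO.MemoryMappedFiles;
using System.Threading;

class DebouncedFlusher
{
    readonly MemoryMappedViewAccessor view;
    readonly Timer shortTimer;  // flushes once a burst goes quiet
    readonly Timer longTimer;   // upper bound: flushes even mid-burst
    bool longArmed;

    public DebouncedFlusher(MemoryMappedViewAccessor view)
    {
        this.view = view;
        shortTimer = new Timer(_ => Flush(), null, Timeout.Infinite, Timeout.Infinite);
        longTimer  = new Timer(_ => Flush(), null, Timeout.Infinite, Timeout.Infinite);
    }

    // Call after each modification of the mapped page.
    public void OnHit()
    {
        shortTimer.Change(100, Timeout.Infinite);  // reset on every hit
        if (!longArmed)
        {
            longArmed = true;
            longTimer.Change(1000, Timeout.Infinite); // armed once per burst
        }
    }

    void Flush()
    {
        shortTimer.Change(Timeout.Infinite, Timeout.Infinite);
        longTimer.Change(Timeout.Infinite, Timeout.Infinite);
        longArmed = false;
        view.Flush(); // still synchronous, but now off the hot path
    }
}
```

This sketch is not thread-safe as written; if hits can come from multiple threads, wrap OnHit and Flush in a lock.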
What does REALTIME_PRIORITY_CLASS (with THREAD_PRIORITY_TIME_CRITICAL) actually do?
Does it:
Prevent interrupts from firing
Prevent context switching from happening
on the processor (unless the thread sleeps)?
If it does prevents the above from happening:
How come when I run a program on a processor with this flag, I still get inconsistent timing results? Shouldn't the program take the same amount of time every time, if there's nothing interrupting it?
If it does NOT prevent the above from happening:
Why does my system (mouse, keyboard, etc.) lock up if I use it incorrectly? Shouldn't drivers still get some processor time?
It basically tells the system scheduler to allot time only to your thread until it gives it up (via Sleep or SwitchToThread) or dies. As for the timing not being the same: the OS still runs in between each run, which can change RAM layout, caching, etc. Secondly, most timing is inaccurate, so it will fluctuate (especially quantum-based timing like GetTickCount). The OS may also have other things going on, like power saving/dynamic frequency adjustment, so your best check would be RDTSC, though even with that you might notice other stuff running (especially if the CPU can run more than one hardware thread).
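For reference, a sketch of setting those priorities from C# and observing that timings still vary. ProcessPriorityClass.RealTime maps to REALTIME_PRIORITY_CLASS; THREAD_PRIORITY_TIME_CRITICAL needs a P/Invoke, since .NET's ThreadPriority enum tops out at Highest:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class RealtimeTiming
{
    [DllImport("kernel32.dll")] static extern IntPtr GetCurrentThread();
    [DllImport("kernel32.dll")] static extern bool SetThreadPriority(IntPtr hThread, int nPriority);
    const int THREAD_PRIORITY_TIME_CRITICAL = 15;

    static void Main()
    {
        // Requires elevated rights; otherwise Windows silently downgrades to High.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

        // Even at this priority, repeated timings of identical work still vary:
        // caches, frequency scaling, and interrupts are all outside the scheduler.
        for (int run = 0; run < 5; run++)
        {
            var sw = Stopwatch.StartNew();
            long sum = 0;
            for (int i = 0; i < 10_000_000; i++) sum += i;
            sw.Stop();
            Console.WriteLine($"run {run}: {sw.Elapsed.TotalMilliseconds:F2} ms (sum={sum})");
        }
    }
}
```

Running this a few times shows the fluctuation the answer describes, even with nothing else visibly competing for the core.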
I have been reading Stack Overflow for a long time but never really got the chance to ask my first question, so here it is:
I am developing a Mac OS X app and using NSOperations to keep the app responsive.
I also set maxConcurrentOperationCount to 3; however, the app is still somewhat unresponsive while doing its work. If I try to move the window around, I can see it starts to lag and behave erratically.
Can someone provide any clue or pointer to a solution?
(No, I'm not asking for sample code ;)
There are a number of reasons why an app might be unresponsive in such a situation:
you are straight up blocking the main event loop or flooding it with events
you have complex drawing operations on the main thread
your app is using so much memory that it is causing the system to page. Doesn't really matter if you have 10 threads or 1 thread, as soon as you start paging, your performance goes down the tubes
you have lock contention between the main thread and the background thread(s)/queue(s)
Instruments offers a series of tools for profiling CPU usage. The first thing I'd do is figure out if the main thread is using a lot of CPU (and, if so, for what?) or if it is blocked waiting on locks or the like.
If the app becomes unresponsive, you are blocking the main thread somewhere in your code. Take a sample using Activity Monitor or Instruments (recommended) to find out where.
Just using NSOperations won't make the app responsive. The key to responsiveness is not blocking the main thread. If your app is laggy, it's usually (see bbum's answer) because you're doing something that blocks the main thread.
The way to find out what is to use Instruments. Use the Time Profiler instrument, and then look at what is running on the main thread. Make those things smaller, move them to operations, delayed-perform them, or some combination thereof. If you need to refactor, do it.
One possibility is that you are running your operations on the main queue. Don't do that; they will run serially (regardless of maxConcurrentOperationCount) on the main thread. Create your own queue and use that instead.
I have 15 BackgroundWorkers that run all the time; each one works for about half a second (making a web request), and none of them is ever stopped.
I've noticed that my program takes about 80% of my computer's processing resources and about 15 MB of memory (Core 2 Duo, 4 GB DDR2 memory).
Is that normal? The web requests are not heavy duty; each just sends and awaits a server response. And yes, running 15 of them is really not a pro-performance move (speed was needed), but I didn't think it would be this intense.
I am new to programming, and I hardly ever (like any new programmer, I assume) care about performance, but this time it is ridiculous: 80% CPU for a Windows Forms application with two list boxes and BackgroundWorkers making web requests isn't really what I expected.
info:
I use exception handling as part of my routine, which I've read isn't great for performance
I have 15 background workers
My code assures none of them is ever idle
Windows Forms, Visual Studio, C#.
------[edit - questions in answers]------
What exactly do you mean by "My code assures none of them is ever idle"?
The program waits with:
while (bgw1.IsBusy || bgw2.IsBusy ... ... ...) { Application.DoWork(); }
then when any of them is free, it gets put back to work.
Could you give more details about the workload you're putting this under?
I create an HTTP web request object, open it, and wait for the server response. It really is only a couple of lines and does no heavy processing; the half second is due to waiting on the server.
In what way, and how many exceptions are being thrown?
When the page doesn't exist, there is a System.Net.WebException; when it works, it returns "OK". About 99% of the pages I check don't exist, so I'd say about 300 exceptions per minute (putting it like this makes it sound creepy, I know, but it works).
If you're running in the debugger, then exceptions are much more expensive than they would be when not debugging
I'm not talking about running it in the debugger; I run the resulting EXE directly.
while (bgw1.IsBusy || bgw2.IsBusy ... ... ...) { Application.DoWork(); }
What's Application.DoWork() doing? If it does something quickly and returns, this loop alone will consume 100% of a core, since it never stops doing something. You can put a short sleep (e.g. Thread.Sleep(100)) inside the loop, so you only check the worker threads every so often instead of continuously.
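A sketch of that change, assuming the loop's only job is to wait until a worker frees up (bgw1/bgw2 are the asker's BackgroundWorker fields, and the DoEvents call only matters if this loop runs on the UI thread):

```csharp
using System.Threading;
using System.Windows.Forms;

// Check the workers every 100 ms instead of spinning at full speed.
while (bgw1.IsBusy || bgw2.IsBusy /* || ... the other 13 workers */)
{
    Thread.Sleep(100);       // yield the CPU between checks
    Application.DoEvents();  // keep the UI pumping while waiting
}
```

A cleaner design would be to have each worker's RunWorkerCompleted event kick off the next piece of work, which removes the polling loop entirely.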
This bit concerns me:
My code assures none of them is ever idle
What exactly do you mean by that?
If you're making thousands and thousands of web requests, and if those requests are returning very quickly, then that could eat some CPU.
Taking 15 MB of memory isn't unexpected, but the CPU usage is the more worrying bit. Could you give more details about the workload you're putting this under? What do you mean by "each one of them works for about half a second"?
What do you mean by "I use exception handling as part of my routine"? In what way, and how many exceptions are being thrown? If you're running in the debugger, then exceptions are much more expensive than they would be when not debugging - if you're throwing and catching a lot of exceptions, that could be responsible for it...
Run the program in the debugger, pause it ten times, and look at the stack traces. Then you will know what it is actually doing when it's busy.
From your text I read that you have a Core 2 Duo. Is that a 2-thread or a 4-thread model?
If it has 2 hardware threads, you should only use 2 BackgroundWorkers simultaneously.
If it has 4 hardware threads, then use 4 BGWs simultaneously. If you have more BGWs, then frequently call:
System.Threading.Thread.Sleep(1)
Also use Application.DoEvents().
My general advice is: start simple and slowly make your application more complex.
Have a look at: Visual Basic 2010 Parallel Programming techniques.