This question already has answers here:
Windows OSes and Memory Management-- What happens when an application is minimized?
(2 answers)
Closed 8 years ago.
I've noticed something odd when running resource-intensive programs under Windows, such as games. If you run a game in windowed mode and watch its memory usage, you can see it reach hundreds of megabytes even for 2D games. But if you minimize the game, I've seen the memory usage drop as low as a few megabytes, even less than ten.
What exactly is happening? Who's doing this, the games or the OS? Surely, the resources can't actually be unloaded from memory (that would be awful), so what's with the drop?
Windows trims the working set of a process when its main window is minimized. The working set isn't necessarily the best indicator of how much system resources a process is using.
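The gap between what a process has allocated and what the OS currently reports can be illustrated with a rough, cross-platform analogy in Go (the 64 MiB figure is arbitrary): the runtime's "live heap" counter drops sharply once an allocation is released, while the address space obtained from the OS stays put, much as a trimmed working set says little about what the process actually holds.

```go
package main

import (
	"fmt"
	"runtime"
)

// heapStats returns live-heap and OS-reserved byte counts before and
// after dropping a ~64 MiB allocation and forcing a garbage collection.
func heapStats() (liveBefore, sysBefore, liveAfter, sysAfter uint64) {
	buf := make([]byte, 64<<20) // ~64 MiB, an arbitrary demo size
	buf[0] = 1

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	liveBefore, sysBefore = m.HeapAlloc, m.Sys

	runtime.KeepAlive(buf) // buf is collectible after this point
	runtime.GC()

	runtime.ReadMemStats(&m)
	liveAfter, sysAfter = m.HeapAlloc, m.Sys
	return
}

func main() {
	lb, sb, la, sa := heapStats()
	fmt.Printf("before GC: live %d MiB, obtained from OS %d MiB\n", lb>>20, sb>>20)
	fmt.Printf("after  GC: live %d MiB, obtained from OS %d MiB\n", la>>20, sa>>20)
}
```

The "live" number collapses while the "obtained from OS" number barely moves; which counter a task manager shows you determines what story you see.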
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I understand that goroutines are very lightweight and we can spawn thousands of them, but I want to know if there is some scenario where we should spawn a process instead of a goroutine (like hitting some kind of process boundary in terms of resources, or something else). Can spawning a new process be beneficial in some scenario, in terms of resource utilization or some other dimension?
To get things started, here are three reasons. I'm sure there are more.
Reason #1
In a perfect world, CPUs would be busy doing the most important work they can (and not wasted doing the less important work while more important work waits).
To do this, whatever controls what work a CPU does (the scheduler) has to know how important each piece of work is. This is normally done with, e.g., thread priorities. When there are two or more processes that are isolated from each other, whatever controls what work a CPU does can't be part of either process; otherwise you get a situation where one process consumes CPU time on unimportant work because it can't know that a different process wants the CPU for more important work.
This is why things like "goroutines" are broken (inferior to plain old threads). They simply can't do the right thing (unless there's never more than one process that wants CPU time).
Processes (combined with "process priorities") can fix that problem (while adding multiple other problems).
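A small Go sketch of the visibility problem described above (the goroutine count is arbitrary): thousands of goroutines are multiplexed onto at most GOMAXPROCS OS threads, so the kernel scheduler never sees the individual goroutines and has nothing per-goroutine to apply a priority to.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// spawnParked starts n goroutines that block on a channel, and returns
// how many goroutines the Go runtime now tracks plus a release function.
func spawnParked(n int) (count int, release func()) {
	ch := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-ch // park until released
		}()
	}
	count = runtime.NumGoroutine()
	release = func() {
		close(ch)
		wg.Wait()
	}
	return
}

func main() {
	count, release := spawnParked(1000)
	defer release()
	// The kernel schedules only the OS threads; it cannot raise or lower
	// the priority of any single goroutine among these thousand.
	fmt.Println("goroutines:", count,
		"| OS threads running Go code at most:", runtime.GOMAXPROCS(0))
}
```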
Reason #2
In a perfect world, software would never crash. The reality is that sometimes processes do crash (and sometimes the reason has nothing to do with software - e.g. a hardware flaw). Specifically, when one process crashes often there's no sane way to tell how much damage was done within that process, so the entire process typically gets terminated. To deal with this problem people use some form of redundancy (multiple redundant processes).
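One way to see the isolation benefit in Go (this sketch assumes a Unix-like system with `sh` on the PATH; the exit code is made up): an unrecovered panic in any goroutine takes down the whole process, but a crashed child process leaves its supervisor running and able to react.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runIsolated runs a command in a separate process. If that process
// dies, only it is lost; the parent observes the exit code and survives.
func runIsolated(name string, args ...string) (exitCode int, err error) {
	cmd := exec.Command(name, args...)
	err = cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode(), nil // child failed; we are still here
	}
	if err != nil {
		return 0, err // the command could not be started at all
	}
	return 0, nil
}

func main() {
	// Simulate a crashing worker. Had this been a panicking goroutine,
	// the whole process would be gone; the child's death is contained.
	code, err := runIsolated("sh", "-c", "exit 3")
	if err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("worker exited with code", code, "- supervisor still running")
}
```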
Reason #3
In a perfect world, all CPUs and all memory would be equal. In reality things don't scale up like that, so you get things like ccNUMA, where a CPU can access memory in the same NUMA domain quickly but can't access memory in a different NUMA domain as quickly. To cope with that, ideally (when allocating memory) you'd want to tell the OS "this memory needs low latency more than bandwidth" (and the OS would allocate memory from the fastest/closest NUMA domain only), or you'd tell the OS "this memory needs high bandwidth more than low latency" (and the OS would allocate memory from all NUMA domains). Sadly, every language I've ever seen has "retro joke memory management" (without any kind of "bandwidth vs. latency vs. security" hints), which means the only control you get is the choice between "one process spread across all NUMA domains" vs. "one process for each NUMA domain".
I have a windowed WinAPI/OpenGL app. The scene is drawn rarely (compared to games) in WM_PAINT, mostly triggered by user input - WM_MOUSEMOVE, clicks, etc.
I noticed that when the scene has not been moved by the mouse for a while (the application is "idle") and the user then starts some mouse action, the first frame is drawn with an unpleasant delay - around 300 ms. The following frames are fast again.
I implemented a 100 ms timer which only calls InvalidateRect, which is later followed by WM_PAINT and a scene draw. This "fixed" the problem, but I don't like this solution.
I'd like to know why this is happening, and also some tips on how to tackle it.
Does the OpenGL render context free resources when not in use? Or could this be caused by some system behaviour, like processor underclocking/energy saving? (Although I noticed that the processor runs underclocked even when the app is under "load".)
This sounds like the Windows virtual memory system at work. The sum of the memory used by all active programs is usually greater than the amount of physical memory installed in your system, so Windows swaps idle processes out to disk according to whatever rules it follows, such as the relative priority of each process and how long it has been idle.
You are preventing the swap-out (and the delay) by artificially making the program active every 100 ms.
If a swapped-out process is reactivated, it takes a little time to retrieve the memory contents from disk and resume the process.
It's unlikely that OpenGL is responsible for this delay.
You can improve the situation by starting your program with a higher priority.
https://superuser.com/questions/699651/start-process-in-high-priority
You can also use the VirtualLock function to prevent Windows from swapping out part of the memory, but this is not advisable unless you REALLY know what you are doing!
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
EDIT: You can certainly improve things by adding more memory - 4 GB sounds low for a modern PC, especially if you run Chrome with multiple tabs open.
If you want to be scientific before spending any hard-earned cash :-), open Performance Monitor and look at Cache Faults/sec. This will show the swap activity on your machine. (I have 16 GB on my PC, so this number is mostly very low.) To make sure you learn something, check Cache Faults/sec before and after the memory upgrade, so you can quantify the difference!
Finally, there is nothing wrong with the solution you already found - kick-starting the graphics app every 100 ms or so.
The problem was in the NVIDIA driver's global 3D setting "Power management mode".
The options "Optimal Power" and "Adaptive" save power and cause the problem.
Only "Prefer Maximum Performance" does the right thing.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
We are trying to understand how the Windows CPU scheduler works in order to optimize our applications to achieve the best possible infrastructure/real-work ratio. There are some things in xperf that we don't understand, and we would like to ask the community to shed some light on what's really happening.
We initially started to investigate these issues when we got reports that some servers were "slow" or "unresponsive".
Background information
We have a Windows 2012 R2 Server that runs our middleware infrastructure with the following specs.
We found it concerning that 30% of CPU time is wasted in the kernel, so we started to dig deeper.
The server above runs ~500 "host" processes (as Windows services). Each of these "host" processes has an inner while loop with a ~250 ms delay (yuck!), and each may have ~1-2 "child" processes that execute the actual work.
While the host loops forever with a 250 ms delay between iterations, actual useful work for the "host" application may appear only every 10-15 seconds. So a lot of cycles are wasted on unnecessary looping.
We are aware that design of the "host" application is sub-optimal, to say the least, as applied to our scenario. The application is getting changed to an event-based model which will not require the loop and therefore we expect a significant reduction of "kernel" time in CPU utilization graph.
However, while we were investigating this problem, we've done some xperf analysis which raised several general questions about Windows CPU Scheduler for which we were unable to find any clear/concise explanation.
What we don't understand
Below is the screenshot from one of xperf sessions.
You can see from the "CPU Usage (Precise)" that
There are 15 ms time slices, the majority of which are under-utilized. The utilization of those slices is ~35-40%, so I assume this means the CPU is utilized about 35-40% of the time; yet the system's performance (as observed through casual tinkering around the system) is really sluggish.
On top of this, we have the "mysterious" 30% kernel time cost, judging by the Task Manager CPU utilization graph.
Some CPUs are obviously utilized for the whole 15 ms slice and beyond.
Questions
As far as Windows CPU Scheduling on multiprocessor systems is concerned:
What causes the 30% kernel cost? Context switching? Something else? What considerations should be made when writing applications to reduce this cost - or even to achieve perfect utilization with minimal infrastructure cost (on multiprocessor systems where the number of processes is higher than the number of cores)?
What are these 15 ms slices?
Why does CPU utilization have gaps in these slices?
To diagnose CPU usage issues, you should use Event Tracing for Windows (ETW) to capture CPU sampling data (not the precise data, which is useful for detecting hangs).
To capture the data, install the Windows Performance Toolkit, which is part of the Windows SDK.
Now run WPRUI.exe, select First Level, under Resource select CPU usage, and click Start.
Now capture 1 minute of CPU usage. After 1 minute, click Save.
Now analyze the generated ETL file with Windows Performance Analyzer by dragging and dropping the CPU Usage (Sampled) graph onto the analysis pane, and order the columns as you see in the picture:
Inside WPA, load the debug symbols and expand the stack of the SYSTEM process. In this demo, the CPU usage comes from the NVIDIA driver.
This question already has answers here:
What's the maximum number of threads in Windows Server 2003?
(8 answers)
Closed 9 years ago.
What is the maximum number of threads in a 32-bit or 64-bit application developed in Delphi?
I need to know the limit on threads running simultaneously in a 32-bit application, because I'm doing performance analysis and I want to let the OS manage the execution order of the waiting threads.
You might want to read this answer: https://stackoverflow.com/a/481919/1560865
Still, what I wrote in my comment above remains partially true (but please also note Martin James' objection to it below).
Notice that - generally speaking - if you create way more threads than processor cores (or virtual equivalents), you will not gain any performance advantage. If you create too many, you may even end up with pretty bad results like these: thedailywtf.com/Articles/Less-is-More.aspx So are you completely sure that you'll need the theoretically possible maximum number of threads?
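The usual upper bounds come down to simple address-space arithmetic, sketched here in Go (the 2 GiB user space and 1 MiB stack reservation are the common Windows 32-bit defaults; real limits are lower because code, heap, and DLLs also consume address space):

```go
package main

import "fmt"

// maxThreadsEstimate gives a rough ceiling on thread count: the user-mode
// address space divided by the stack reservation charged to each thread.
func maxThreadsEstimate(addressSpaceBytes, stackReserveBytes uint64) uint64 {
	return addressSpaceBytes / stackReserveBytes
}

func main() {
	const mib = 1 << 20
	const gib = 1 << 30

	// 32-bit process: 2 GiB user space, 1 MiB default stack reservation.
	fmt.Println("32-bit, default stack:", maxThreadsEstimate(2*gib, 1*mib)) // 2048

	// Shrinking the per-thread stack reservation raises the ceiling.
	fmt.Println("32-bit, 64 KiB stack:", maxThreadsEstimate(2*gib, 64<<10)) // 32768
}
```

On 64-bit the address space stops being the binding constraint long before you run out of it, and kernel resources dominate instead.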
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Typically in a working environment, I have many windows open: Outlook, two or three Word documents, a few browser windows, Notepad++, a VPN client, Excel, etc.
That said, chances are that about 40% of these apps are not frequently used and are referred to only sparingly. They occupy memory nonetheless.
Now, how does a typical OS deal with that kind of memory consumption? Does it suspend those apps to the hard disk (the pagefile, the Linux swap area, etc.), thereby freeing up memory, or does it leave the memory occupied as it is?
Can this suspension be a practical, doable solution? Are there any downsides, such as response time?
Is there some study material I can refer to on this topic? I would appreciate the help.
The detailed answer depends on your OS and how it implements its memory management, but here is a generality:
The OS doesn't look at memory in terms of how many processes are in RAM; it looks at it in terms of discrete units called pages. Most processes have several pages of RAM. The least-referenced pages can be swapped out of RAM onto the hard disk when physical RAM becomes scarce. Rarely, therefore, is an entire process swapped out of RAM - usually only certain parts of it. It could be, for example, that some aspect of your currently running program is idle (i.e., its pages are rarely accessed). In that case, those pages could be swapped out even though the process is in the foreground.
Try the Wikipedia article on paging for starters on how this process works and the many methods used to implement it.