Is it possible to keep some applications cached in RAM after closing them on Windows? What I'm asking about is something like cached RAM, but application-specific: keeping particular applications cached so that they start faster.
Windows does that automatically with its SuperFetch subsystem. It monitors which applications are used frequently and at what time of day and makes sure to have them cached at the right time.
And generally, when closing an application its files should still be cached, so a subsequent startup should be fast.
I'm developing applications (services) for Windows and sometimes have problems with performance and resources (especially with MS SQL). I need to know which service, application, or OS component, whether developed by me or by someone else, was loading the CPU or the disk at some moment in the past.
I want to be able to do this from some kind of stored data (a log), ideally with graphs.
Is there any way to do that?
Perfmon will be your built-in friend!
You can either log the current performance counters in a user session, or let a background service track your preselected counters and check the results afterwards.
You will find tons of explanations of how to use Perfmon. It has been part of every Windows since NT 4.
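As a concrete sketch (the counter paths, the collector name, and the SQL Server instance name here are illustrative; adjust them for your own services), a background data collector can be created from the command line with logman, which writes the same counter logs that Perfmon graphs:

```bat
:: Run from an elevated command prompt. List available counters with
:: "typeperf -q" if unsure of the exact paths.
logman create counter AppLoadTrace ^
  -c "\Processor(_Total)\% Processor Time" ^
     "\PhysicalDisk(_Total)\% Disk Time" ^
     "\Process(sqlservr)\% Processor Time" ^
  -si 00:00:15 -o C:\PerfLogs\AppLoadTrace

logman start AppLoadTrace
:: ...reproduce the load, then:
logman stop AppLoadTrace
:: Open the resulting .blg file in Perfmon to browse the graphs.
```

Collectors created this way also appear under Perfmon's "Data Collector Sets" node, where the same thing can be configured in the GUI.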
I have a memory leak that only occurs in production (a web app, ASP.NET MVC).
I would like to take a memory snapshot with dotMemory (or a similar tool) to see what is going on.
However, I am not sure whether that will pause production and disrupt any in-flight requests.
FWIW, I have 32 GB of RAM on the machine.
So my question is:
Can I get a memory snapshot without adversely blocking / affecting requests?
Yes, dotMemory and any other memory profiler working via the Microsoft Profiling API will pause an app for some time, from milliseconds to minutes depending on how much data is in memory.
I would recommend taking a standard Windows memory dump instead. Normally that also takes some time, but there is a technique that helps avoid the long suspension.
You can then analyze the dump in dotMemory or any other tool that supports Windows memory dumps.
https://blogs.msdn.microsoft.com/granth/2012/07/02/how-to-take-a-memory-dump-of-an-asp-net-application-pool-quickly/
When I was running the internal Team Foundation Servers (TFS) at
Microsoft, we sometimes encountered issues that could only be
understood by analysing a memory dump. This was especially true on the
Pioneer and Dogfood servers that were running pre-Beta builds. If the
problem was serious enough (crashing, memory leaks, etc) that it
needed a memory dump, it probably meant that it needed it quickly so
that we could recycle the application pool and get the server healthy
again.
The problem with dumping an ASP.NET Application Pool is that all the
application pools use the w3wp.exe process name. So, before you can
take the dump, you need to work out which process corresponds to the
application pool that you are targeting, if you can't tell by looking
at the process owner (e.g. service account/app pool identity). The
easy (but slow) way of doing that is to open Task Manager and add the
'Command Line' column to the display. You will then see each of the
application pool names in the command line of the w3wp.exe processes.
The other problem with app pools that are consuming a large amount of
memory is that the process will be suspended for a long time while
the memory is dumped to disk. If this takes longer than the configured
ASP.NET process 'ping time', then IIS will terminate your process (and
start a new one) halfway through the dump and you'll lose your repro.
To solve that problem, there is the ‘-r’ flag available in the
Sysinternals Procdump.exe. It leverages a feature of Windows 7/Windows
2008 R2 that “clones” a process to take the dump and unsuspends the
original process faster than normal.
-r Reflect (clone) the process for the dump to minimize the time
the process is suspended (Windows 7 and higher only).
We can then use the IIS management tools to look up the process ID for
a particular app pool and now we have a simple batch file that we can
put on the desktop of our TFS server for quick access.
DumpTfsAppPool.cmd
Create the following batch file and put it in the same directory as
Procdump.exe. Don't forget to create/update the path to the dump
location.
%windir%\system32\inetsrv\appcmd list wps /apppool.name:"Microsoft Team Foundation Server Application Pool" /text:WP.NAME > "%temp%\tfspid.txt"
:: ProcDump.exe (faster, reflects/clones the process for the dump to minimize the time the process is suspended (Windows 7 and higher only))
for /F %%a in (%temp%\tfspid.txt) do "%~dp0\procdump.exe" -accepteula -64 -ma -r %%a f:\dumps
pause
Our web app is running on AWS with Ubuntu. We developed it on top of the Play Framework. Right after the app is deployed, it is pretty quick. However, after a day or so, it slows down significantly. I checked the resource usage of the OS; it seems normal and the machine is responsive. Only the web service is slow to respond to requests. I suspect there is a memory, thread pool, or some other resource leak. Any suggestion about how to investigate it? I used the 'top' and 'ps' commands to look at current resource usage, but everything seems normal.
You may want to create a core dump, then take it to your dev computer and examine it there. This is not the easiest way, but if you have limited access to the box it may be required.
Create a core dump
Analyze Core Dump File?
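As a sketch of that route (paths and the process-matching pattern are illustrative; gcore ships with gdb, jmap with the JDK, and the Play app is assumed to run on a JVM):

```shell
# Make sure the shell that launches the app allows core files at all.
ulimit -c unlimited

# Find the app's JVM process id (the match pattern is a guess; adjust it).
pid=$(pgrep -f play | head -n1)

# gcore snapshots the process to a core file without killing it; the
# process is only paused while the dump is written.
sudo gcore -o /tmp/play-core "$pid"

# Since the app runs on the JVM, a heap dump is often more useful than
# a raw core:
sudo jmap -dump:live,format=b,file=/tmp/play-heap.hprof "$pid"
```

Copy the resulting file to a dev machine and open the core in gdb, or the .hprof heap dump in VisualVM or Eclipse MAT.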
My question may seem weird, but I was thinking about Windows hibernation and wondering whether there is a way to hibernate a specific process or application.
I.e.: when Windows starts up after a normal shutdown/restart, it loads all startup programs normally, but in addition it would load a specific program with its previous state from before the computer was shut down.
I have thought about saving the process's memory and restoring it when the computer starts up, but is there any application that does that in a Windows environment?
That cannot work. The state of a process is almost never contained in just the process itself. A GUI app creates user32 and GDI objects that are stored in a heap associated with the desktop. It makes calls to Windows that affect the window-manager state. It makes I/O calls that cause code inside drivers to run, which in turn affects allocations inside the kernel pools. Multiply the trouble by every pipe or RPC channel it opens to talk to other processes, and by shared resources like the clipboard.
Only making a snapshot of the entire operating system state works.
There are now multiple solutions for this on Linux: CRIU, CryoPID2, BLCR.
I think Docker can be used (on both Windows and Linux), but it requires pre-packaging your app in a container, which carries some overhead.
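For the CRIU route, a minimal checkpoint/restore looks roughly like this (Linux only; it needs root and a CRIU-capable kernel, and the pid and directory shown are placeholders):

```shell
# Freeze the process tree rooted at PID 1234 and write its state
# (memory, file descriptors, etc.) into /tmp/ckpt:
sudo criu dump -t 1234 -D /tmp/ckpt --shell-job

# Later, bring it back from the saved images:
sudo criu restore -D /tmp/ckpt --shell-job
```

--shell-job is needed for processes started from a terminal, and restore fails if resources the process held (sockets, open files, IPC) cannot be recreated, which is exactly the difficulty described above.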
Our ASP.Net 2.0 web app was running happily along on Windows Server 2003. We were starting to see some of the limits of the environment approaching, such as memory and CPU usage spikes, and as we're getting ready to scale we decided it was time for a larger server with higher availability.
We decided to move to Windows Server 2008 to take advantage of IIS 7's shared configuration. In our development and integration environments, we reproduced the OS and app on 2008/IIS 7 and everything seemed fine. But truth be told, we don't yet have a good way of simulating production-like loads, nor can we reproduce our prod environment accurately (we're small, with limited resources). So once we rolled out to production, we were surprised to find performance significantly worse on 2008 than it was on 2003.
We've also moved from a 32-bit environment to 64-bit in the process, and we've incorporated ASP.NET 3.5 DLLs into the project.
Memory usage is through the roof, but I'm not as worried about that. We believe this is partly due to Server 2008's higher memory overhead, so throwing more RAM at it may solve that issue. The troubling thing is that we're seeing processor spikes to 99% CPU utilization, which we never saw in the 2003/IIS 6 environment.
Has anyone encountered these issues before and are there any suggestions for a solution/places to look? Right now we're doing the following:
1) Buying time by adding memory.
2) Buying time by setting app pool limits: shut down w3wp.exe when CPU hits 99% load. Since IIS doesn't offer the option to recycle (rather than shut down) the app pools on CPU load, I have a scheduled task running that recycles any stopped app pools.
3) Profiling the app pools under Classic and Integrated modes to see which may perform better.
Any other ideas are completely welcome.
Our experience is that code runs much faster on 64-bit Windows Server 2008 than on a 32-bit Windows Server 2003 server.
I am wondering if something else is also running on the machine. For example, is SQL Server installed with a maintenance plan that could cause the CPU spikes?
I would check the following:
Which process is using the CPU?
Is there a change in the code? Try installing the new code on the old machine.
Is it something to do with the compile options? Could the CPU spikes be caused by recompilation?
Are there any errors in the event log?
In our case, since we have 4 processors, we increased the number of worker processes to 4; so far it is working well compared to before.
Here is a snapshot:
http://pic.gd/c3661a
You can use the application pool "Recycle" options in IIS 7+ to configure physical and virtual memory limits for application pools. Once these are reached, the process will recycle and the resources will be released. Unfortunately, the option to recycle based on CPU usage has been removed from IIS 7+ (someone correct me if I'm wrong). If you have other apps on the server and want to avoid them competing for resources when this happens, you can implement Windows System Resource Manager and its IIS policy (here is a good tutorial: http://learn.iis.net/page.aspx/449/using-wsrm-to-manage-iis-70-apppool-cpu-utilization/)
Note WSRM is only available on the Enterprise and Datacenter editions.
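For reference, those memory-based recycling limits can also be scripted with appcmd (the pool name and thresholds below are illustrative; the values are in kilobytes):

```bat
:: Recycle the pool once private bytes or virtual memory exceed 1 GB.
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" ^
  /recycling.periodicRestart.privateMemory:1048576 ^
  /recycling.periodicRestart.memory:1048576
```

This edits the same recycling settings exposed in the IIS Manager GUI, so it is handy for keeping pools consistent across servers with shared configuration.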