Windows Server 2008. How can I quickly use up RAM so as to induce GC in my app? If there is a way to do it without needing Visual Studio or installing a language runtime, that would be good.
EDIT: I don't want to have to write an app and then copy it over to the server. I'm looking for a way to do it quickly without writing an app that requires an IDE or installation of a runtime/compiler.
Perhaps a PowerShell or batch script?
I don't think using up RAM outside your process is going to necessarily trigger GC.
If I understand your question correctly, you have a program Foo.exe that is written in some unknown language, running on some unknown runtime (are you not allowed to post the details for some reason, or do you just not know?), and you want to get that program's runtime to trigger a garbage collection. However, you want to do this by using up RAM outside of Foo.exe.
You could do this by creating a simple batch file that just starts up a hundred copies of IE or Word or whatever program you want (a one-liner in a batch file such as for /L %%i in (1,1,100) do start iexplore would do it). However, I don't think that will do what you want it to do. If your process has already allocated a certain amount of memory, it won't necessarily give that memory up or trigger GC just because other processes are being started. It may page to disk, or may force other programs to page to disk. But not all garbage collectors are alike, so we can't really help without more details. I'm pretty sure some VMs never give back memory once they've allocated it, even after GC.
You could run your program inside a virtual machine such as VirtualBox, where you specify the memory ceiling of the guest operating system.
I'm having trouble imagining a scenario where this would be necessary though. Could you provide more information about the problem?
If you are using Java, you can specify the maximum heap size with the -Xmx flag (e.g. java -Xmx512m MyApp). Search for "JVM memory settings".
I have a console application that runs for less than 500ms, but that according to BenchmarkDotNet allocates more than 100 MB.
I am trying to figure out what those 100 MB are, because it does not add up. However, I cannot find a tool to do so on Linux or Mac. Once the method the app calls is over, the GC can clean all that memory without problems, so it is not a leak I can see in a dump, unless I take the dump at the very moment before exiting the method. It is not clear to me at which moment the algorithm peaks in memory usage.
I can take CPU traces using dotnet-trace and show them in the browser with Speedscope, but I cannot show a trace in Speedscope when using gc-verbose or gc-collect as the provider.
Is there a way with dotnet-trace to print in the console the stats of the created objects or anything like that?
Try out dotnet-dump and check out this article by Tess Ferrandez.
And maybe you can share a little bit more information, please.
I am writing a server app which I want to efficiently use ALL available physical RAM of the machine when possible. The plan is that it will allocate physical pages using AWE until it detects that 99% of physical memory is in use, stop while 1% is free, and free physical pages it doesn't need any time free physical memory drops below 1%.
However when I put this plan into practice, Windows seems to think any time it has 99% of RAM in use it would be a good idea to free up more physical memory, and so it starts paging all sorts of stuff to disk, and my system crashes.
How can I tell Windows it is OK to have 99% of RAM in use, and that it doesn't need to page stuff out to disk until it reaches whatever its default perceived ideal level of usage is (I guess something like 90%)?
Note: Raymond says 'Unless you are designing a system where you are the only program running on the computer, this is a bad idea.'
In this server scenario this is basically intended to be the only app running on the computer. But unfortunately there are some OS/background tasks that need to run...
But certainly I don't expect there is any other process on the computer indulging in this 'use all but 1% of RAM' behavior...?
Update: I've done more experimentation and started to wonder if I'm somewhat asking the wrong question. My assumption that Windows is being overeager may be wrong. Perhaps the question should instead have been 'how can I determine how much physical RAM my process can safely use without compromising overall responsiveness on the machine?'
You can't. The Windows memory manager runs at a lower level than your program and knows nothing about your program (and even if it did, it has no reason to assume your program is the good citizen you claim it to be. What if your program crashes, or has an off-by-one error in a loop that mallocs? What about other programs that need memory while yours is running? What about the thousand other scenarios that the guys who wrote the Windows MM encountered when they were writing it?)
Don't try to be cleverer than Windows. A more productive use of your time would be to consider if your application really needs to allocate 99% physical memory up front.
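If the updated question is the real one, a starting point is simply to ask the OS how much physical memory is available and keep a healthy headroom. A minimal Win32/C++ sketch (the 10% headroom below is my own assumption, not a documented Windows threshold):

    #include <windows.h>
    #include <stdio.h>

    // Query physical memory and compute a conservative budget for our own
    // allocations, leaving some RAM to the OS and background tasks.
    int main()
    {
        MEMORYSTATUSEX ms = {};
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatusEx(&ms);

        ULONGLONG headroom = ms.ullTotalPhys / 10; // keep 10% free (a guess)
        ULONGLONG budget = (ms.ullAvailPhys > headroom)
                               ? ms.ullAvailPhys - headroom : 0;

        printf("load %lu%%, available %llu MB, our budget %llu MB\n",
               ms.dwMemoryLoad, ms.ullAvailPhys >> 20, budget >> 20);
        return 0;
    }

Re-run the query periodically and release pages when dwMemoryLoad climbs, rather than trying to hold a fixed 99%.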
I am working with Visual Studio 2010, and this is probably the most common error.
In my code I am calling a script to load data from a database table which comprises over 1,765,700 rows and is 777,826 KB in size.
I keep running into a System.OutOfMemoryException error.
Is there any way I can increase the memory allocated to my program, or change the settings? I had done it while running my programs in Eclipse before. Can it be done in Visual Studio 2010 as well?
Thank you
The first step would be, if at all possible, to not load all of the data into memory at once unless this is truly a requirement. If there is any way to load the data in stages, you would avoid coming close to the memory limitations.
That said, changing the build to target x64 and running on a 64-bit platform should give you plenty of address space to load this amount of data without issues. That would probably be the simplest option.
Anyone know likely avenues of investigation for kernel launch failures that disappear when run under cuda-gdb? Memory assignments are within spec, launches fail on the same run of the same kernel every time, and (so far) it hasn't failed within the debugger.
Oh Great SO Gurus, What now?
cuda-gdb spills all shared memory and registers to local memory. So when something runs OK when built for debugging and fails otherwise, it usually means an out-of-bounds shared memory access. cuda-memcheck might help, depending on what sort of card you are using; Fermi is better than older cards in that respect.
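To make that failure mode concrete, here is a hypothetical kernel (the names and sizes are made up) with the kind of out-of-bounds shared memory write that tends to crash a normal build but pass under cuda-gdb; running the app under cuda-memcheck should report the invalid __shared__ write:

    __global__ void scale(float *out, const float *in, int n)
    {
        __shared__ float tile[256];               // one element per thread
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // BUG: for threadIdx.x == 255 this writes tile[256], one element
        // past the end of the shared array. Under cuda-gdb the spill to
        // local memory can mask the corruption entirely.
        tile[threadIdx.x + 1] = tile[threadIdx.x] * 2.0f;
        __syncthreads();

        if (i < n) out[i] = tile[threadIdx.x];
    }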
EDIT:
Casting my mind back to the bad old days, I remember having an ornery GT9500 which used to throw similar NV13 errors and have random code failures when running very memory intensive kernels with a lot of shared memory activity. Never when debugging. I put it down to bad hardware and moved on to a GT200, never to see a similar error since. One possibility might be bad hardware. Is this a G92 (9800GT or similar)?
cuda-gdb can make some of the CUDA operations synchronous.
Are you reading from memory after it has been initialized?
Are you using streams?
Are you launching more than one kernel?
Where and how does it fail? (One way to narrow that down is sketched below.)
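On that last point, a sketch of the usual technique (the macro and kernel names are mine, not from the question): check the runtime error state after every launch and synchronize immediately, so an asynchronous failure surfaces at the launch that caused it rather than at some later, unrelated call.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Check the result of a CUDA runtime call and bail out with location info.
    #define CUDA_CHECK(call)                                           \
        do {                                                           \
            cudaError_t err_ = (call);                                 \
            if (err_ != cudaSuccess) {                                 \
                fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,     \
                        cudaGetErrorString(err_));                     \
                exit(EXIT_FAILURE);                                    \
            }                                                          \
        } while (0)

    __global__ void myKernel(float *data) { /* ... */ }

    int main()
    {
        float *d_data;
        CUDA_CHECK(cudaMalloc(&d_data, 1024 * sizeof(float)));

        myKernel<<<4, 256>>>(d_data);
        CUDA_CHECK(cudaGetLastError());      // launch-configuration errors
        CUDA_CHECK(cudaDeviceSynchronize()); // execution errors surface here

        CUDA_CHECK(cudaFree(d_data));
        return 0;
    }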
We are working on a Vista/Windows 7 application that will be running in 64-bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this.
I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that imply memory-mapped files work this way. Is there an expert who can clarify this for me?
I'm aware there are other programs that we could adapt to do this, for example, splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity.
Suggestions?
You can use VirtualLock. However, you'll surely hit the quota with the amount you're talking about. Given that you should never run any other code on this machine, you'll be better off just disabling the paging file. Control Panel + System + Advanced.
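A minimal sketch of the VirtualLock route (assuming it is paired with SetProcessWorkingSetSize, since the lock quota is tied to the working-set minimum; the sizes below are placeholders):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        // VirtualLock is limited by the working-set minimum, so raise it
        // first. 1.5 GB / 2 GB are placeholder values for this sketch.
        SIZE_T minWs = (SIZE_T)1536 * 1024 * 1024;
        SIZE_T maxWs = (SIZE_T)2048 * 1024 * 1024;
        if (!SetProcessWorkingSetSize(GetCurrentProcess(), minWs, maxWs)) {
            printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
            return 1;
        }

        // Commit one blob-sized region and pin it in physical memory.
        SIZE_T blobSize = 3 * 1024 * 1024; // one 3 MB blob
        void *blob = VirtualAlloc(NULL, blobSize,
                                  MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!blob || !VirtualLock(blob, blobSize)) {
            printf("lock failed: %lu\n", GetLastError());
            return 1;
        }
        // 'blob' now stays resident until VirtualUnlock or process exit.
        return 0;
    }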
From user mode, you can't (EDIT: at least for the sizes you're talking about). User-mode allocations all come down to either the VirtualAlloc API (on top of which GlobalAlloc/LocalAlloc/the C runtime's functions are written) or the memory-mapped file API. Neither API supports this, and therefore it's impossible to obtain on Win32. It is possible from within kernel mode, but somehow I suspect this is a user-mode application :)
Note that the memory manager is not going to decide to page your RAM without good reason to do so.
Now, you could of course, if you control the machine completely (this is for internal use or something), disable the pagefile on the machine in question, but that does not seem to solve your problem.
It's possible! You can force pages to be locked in memory from a user-mode app by allocating them using AWE (Address Windowing Extensions): VirtualAlloc + AllocateUserPhysicalPages + MapUserPhysicalPages.
Note: I have read that you can use the AWE APIs from either a 32-bit or a 64-bit app, but I've only tried with a 32-bit app. (Of course, since it's AWE, you can manually remap memory to access more than 2 GB of RAM.)
Note: You first have to hold SeLockMemoryPrivilege. (Which seems to require the app to run as Administrator in my testing so far.)
Note: Using AWE implies some limitations on what you can do with those particular pages of memory, e.g. no VirtualProtect().
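For reference, a minimal sketch of that sequence in Win32/C++ (error handling abbreviated; 64 MB is a placeholder size, and the privilege step is where the run-as-Administrator requirement usually bites):

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    // AWE sketch: lock physical pages in RAM and map them into our
    // address space. Needs SeLockMemoryPrivilege.
    int main()
    {
        // 1. Enable SeLockMemoryPrivilege for this process.
        HANDLE token;
        OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token);
        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid);
        AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
        if (GetLastError() != ERROR_SUCCESS)
            return 1; // privilege not held; try running elevated

        // 2. Allocate physical pages; these stay resident and are never paged.
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        ULONG_PTR pageCount = (64 * 1024 * 1024) / si.dwPageSize; // 64 MB
        ULONG_PTR *pfns = new ULONG_PTR[pageCount];
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns))
            return 1;

        // 3. Reserve an address window and map the physical pages into it.
        void *window = VirtualAlloc(NULL, pageCount * si.dwPageSize,
                                    MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
        if (!window || !MapUserPhysicalPages(window, pageCount, pfns))
            return 1;

        // The region at 'window' is now usable, pinned physical RAM.
        memset(window, 0, pageCount * si.dwPageSize);
        printf("mapped %Iu pages at %p\n", pageCount, window);
        return 0;
    }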
Perhaps the answer? (From a VMware tutorial.)
To edit the Registry and disable paging kernel-mode stacks
Click Start > Run and type regedit.
In the left pane of the Registry Editor, navigate to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager.
In the right pane, right-click GlobalFlag and select Modify.
With Base set to Hexadecimal, enter the value 80000, which corresponds to FLG_DISABLE_PAGE_KERNEL_STACKS.
Click OK and exit the Registry Editor.
Reboot the guest system for this change to take effect.
hope it helps