clangd uses a lot of memory (up to 2.5 GB on my system). On my 8 GB system I frequently run into OOM situations.
Currently the only solution I have found is to kill clangd once it uses a lot of memory.
clangd has some command-line options that look like they may reduce memory usage, but I have not found a way to configure them in CLion.
CLion doesn't cope well with having its memory limited via ulimit, but maybe it would be possible to limit memory only for the clangd processes?
The default memory limit for clangd in CLion is 8 GB, but you can easily customize it.
CLion offers you a bunch of settings via "registry".
Open Help | Find Action... and search for Registry.... In the Registry, find clion.clangd.max.memory and reduce the value to 1000. CLion will then automatically kill the clangd process if it eats more memory than you specified.
You can also add the Clangd Memory Indicator widget to the bottom bar. In the latest CLion version you need to right-click the bottom bar and enable it.
BTW, clangd works only with opened files, so the fewer files you have open in the editor, the less memory clangd eats. If that's not your case, better to submit a ticket to the CLion tracker, because 2.5 GB is too much IMHO.
Related
How can I make a program use virtual memory in Windows?
I have a long Perl script which is using 6GB+ of memory and increasing. My machine only has 8GB of RAM. It is probably caused by a memory leak in a module, but there is nothing I can do about that now.
Is it possible to make it use virtual memory, or is this something controlled by Windows only?
The OS will provide virtual memory automatically if needed and if it's configured to have swap space. You cannot control that from a Perl program.
If your Perl program has a memory leak eventually it will start being swapped to the page file. When its memory consumption causes total memory to exceed the sum of your physical RAM plus page file, things will slow to a crawl and processes may become unresponsive and/or crash.
In any case, the size of the page file cannot be changed dynamically; a reboot is required. The only long-term fix is to find and fix the leak.
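The numbers involved are easy to inspect on Windows via the Win32 GlobalMemoryStatusEx call, which reports physical RAM and the commit limit (RAM plus page file). A minimal C++ sketch, purely for illustration since the script itself is Perl:

    #include <windows.h>
    #include <stdio.h>

    int main() {
        MEMORYSTATUSEX ms = {};
        ms.dwLength = sizeof(ms);               // must be set before the call
        if (!GlobalMemoryStatusEx(&ms)) return 1;

        printf("Memory load:      %lu%%\n", ms.dwMemoryLoad);
        printf("Physical RAM:     %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
        printf("Commit limit:     %llu MB\n", ms.ullTotalPageFile / (1024 * 1024)); // RAM + page file
        printf("Commit available: %llu MB\n", ms.ullAvailPageFile / (1024 * 1024));
        return 0;
    }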
Create a shortcut of the program that you want to run in virtual RAM.
Right-click on the shortcut and click Properties.
In Properties, locate the Target field.
Copy and paste this at the end of the Target field: --profile-directory="Profile 1" --disk-cache-dir=C:\
Restart your PC.
I'd like to launch CPU and GPU intensive process on some machines, but these processes must not interfere with user's tasks. So I need to limit or at least detect GPU usage by my processes. These processes are closed-source, so I can't watch GPU usage from inside.
The answer to your subject-line question is: yes (on Windows Vista and up), use Process Explorer from Microsoft to monitor per-process GPU usage. NVIDIA's Parallel Nsight can do this also. Now, the body of your question sounds like you want to do this remotely. Unfortunately I'm not aware of a way to do this remotely. Still, hopefully this will be of some use to you.
Edit to add: if you fire up Process Explorer, I don't think it shows the GPU stats by default; to get them, right-click on the list of columns and add them.
The GPU is a resource that can only be used by one program at a time. If another process is using the GPU, then you can't get access to it.
A program may run multiple GPU kernels at the same time, but it's up to that program how those get run. There's no real concept of scheduling like there is with the OS and CPU processes.
Some vendors may have a way for you to check on the status of the device, like # cores in use, heat, fan speed, etc, but that won't let you change what's happening on it, and it will be specific to each vendor/device.
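As an example of that vendor-specific route: on NVIDIA hardware this kind of read-only status is exposed through the NVML library. A rough C++ sketch, assuming an NVIDIA GPU and the NVML SDK (link against nvml.lib or -lnvidia-ml); other vendors have their own equivalents:

    #include <nvml.h>
    #include <stdio.h>

    int main() {
        if (nvmlInit() != NVML_SUCCESS) {
            printf("NVML init failed\n");
            return 1;
        }

        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
            nvmlUtilization_t util;          // overall GPU / memory-controller load
            unsigned int temp = 0, fan = 0;
            if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS)
                printf("GPU %u%%, memory %u%%\n", util.gpu, util.memory);
            if (nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp) == NVML_SUCCESS)
                printf("temperature %u C\n", temp);
            if (nvmlDeviceGetFanSpeed(dev, &fan) == NVML_SUCCESS)
                printf("fan %u%%\n", fan);
        }

        nvmlShutdown();
        return 0;
    }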
We are working on a Vista/Windows 7 application that will be running in 64-bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this.
I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that imply memory-mapped files work this way. Is there an expert who can clarify this for me?
I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity.
Suggestions?
You can use VirtualLock. However, you'll surely hit the quota with the amount you're talking about. Given that you should never run any other code on this machine, you'll be better off just disabling the paging file. Control Panel + System + Advanced.
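For illustration, a rough C++ sketch of that VirtualLock route (the sizes are made up; the SetProcessWorkingSetSize call is there precisely because of the quota mentioned above):

    #include <windows.h>
    #include <stdio.h>

    int main() {
        const SIZE_T cacheBytes = 512 * 1024 * 1024;   // example: 512 MB cache

        // Raise the working-set limits first, otherwise VirtualLock will hit
        // the (small) default quota and fail with ERROR_WORKING_SET_QUOTA.
        if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                      cacheBytes + (16 << 20),    // minimum
                                      cacheBytes + (64 << 20))) { // maximum
            printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
            return 1;
        }

        void *cache = VirtualAlloc(NULL, cacheBytes,
                                   MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (!cache || !VirtualLock(cache, cacheBytes)) {
            printf("allocation/lock failed: %lu\n", GetLastError());
            return 1;
        }

        // ... fill 'cache' with the blobs; the locked pages stay resident ...

        VirtualUnlock(cache, cacheBytes);
        VirtualFree(cache, 0, MEM_RELEASE);
        return 0;
    }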
From user mode, you can't (EDIT: at least for the sizes you're talking about). User-mode allocations all come down to either the VirtualAlloc API (on top of which GlobalAlloc/LocalAlloc/the C runtime's functions are written) or the memory-mapped file API. Neither API supports this, and therefore it's impossible to obtain on Win32. It is possible from within kernel mode, but somehow I suspect this is a user-mode application :)
Note that the memory manager is not going to decide to page your RAM without good reason to do so.
Now, you could of course, if you control the machine completely (this is for internal use or something) disable the pagefile on the machine in question, but that does not seem to solve your problem.
It's possible! You can force pages to be locked in memory from a user-mode app by allocating them using AWE (Address Windowing Extensions): VirtualAlloc + AllocateUserPhysicalPages + MapUserPhysicalPages.
Note: I have read that you can use the AWE APIs from either a 32-bit or a 64-bit app, but I've only tried with a 32-bit app. (Of course, since it's AWE you can manually remap memory to access > 2GB RAM.)
Note: You have to first have SeLockMemoryPrivilege. (Which seems to require the app to run as Administrator in my testing so far.)
Note: Using AWE implies some limitations on what you can do with those particular pages of memory, e.g. no VirtualProtect().
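To make the above concrete, here is a minimal, untested C++ sketch of the AWE route, assuming the account holds SeLockMemoryPrivilege and the process runs elevated (error handling abbreviated):

    #include <windows.h>
    #include <string.h>
    #include <stdio.h>
    #pragma comment(lib, "advapi32.lib")

    // Enable SeLockMemoryPrivilege on the current process token.
    static bool EnableLockMemoryPrivilege() {
        HANDLE token;
        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return false;
        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid)) {
            CloseHandle(token);
            return false;
        }
        BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
        bool granted = ok && GetLastError() == ERROR_SUCCESS;
        CloseHandle(token);
        return granted;
    }

    int main() {
        if (!EnableLockMemoryPrivilege()) {
            printf("SeLockMemoryPrivilege not available\n");
            return 1;
        }

        SYSTEM_INFO si;
        GetSystemInfo(&si);
        const SIZE_T bytes = 64 * 1024 * 1024;          // 64 MB of non-pageable RAM
        ULONG_PTR pages = bytes / si.dwPageSize;
        const ULONG_PTR requested = pages;
        ULONG_PTR *pfns = new ULONG_PTR[pages];

        // 1. Grab physical page frames; these are never written to the page file.
        if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pages, pfns) ||
            pages != requested) {
            printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }

        // 2. Reserve a virtual address range flagged for AWE mappings.
        void *view = VirtualAlloc(NULL, bytes, MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

        // 3. Map the physical pages into that range; 'view' now behaves like
        //    ordinary read/write memory, backed by locked RAM.
        if (!view || !MapUserPhysicalPages(view, pages, pfns)) {
            printf("mapping failed: %lu\n", GetLastError());
            return 1;
        }

        memset(view, 0, bytes);   // use it as the blob cache...

        // Cleanup: unmap, free the physical pages, release the address range.
        MapUserPhysicalPages(view, pages, NULL);
        FreeUserPhysicalPages(GetCurrentProcess(), &pages, pfns);
        VirtualFree(view, 0, MEM_RELEASE);
        delete[] pfns;
        return 0;
    }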
Perhaps the answer? (from a VMware tutorial)
To edit the Registry and disable paging kernel-mode stacks
Click Start > Run and type regedit.
In the left pane of the Registry Editor, navigate to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager.
In the right pane, right-click GlobalFlag and select Modify.
With Base set to Hexadecimal, type the value 80000, which corresponds to FLG_DISABLE_PAGE_KERNEL_STACKS.
Click OK and exit the Registry Editor.
Reboot the guest system for this change to take effect.
hope it helps
Windows Server 2008. How can I quickly use up RAM so as to induce GC in my app? If there is a way to do it without needing Visual Studio or installing a language runtime, that would be good.
EDIT: I don't want to have to write an app and then copy it over to the server. I'm looking for a way to do it quickly without writing an app that requires an IDE or installation of a runtime/compiler.
Perhaps a powershell or batch script?...
I don't think using up RAM outside your process is going to necessarily trigger GC.
If I understand your question correctly, you have a program Foo.exe that is written in some unknown language, running on some unknown runtime (are you not allowed to post the details for some reason, or do you just not know?), and you want to try to get that program's runtime to trigger a garbage collection. However, you want to do this by using up RAM outside of foo.exe.
You could do this by creating a simple batch file that just starts up a hundred copies of IE or Word or whatever program you want. However, I don't think that will do what you want it to do. If your process has already allocated a certain amount of memory, it won't necessarily give that memory up or trigger GC just because other processes are being started. It may page to disk, or may force other programs to page to disk. But not all garbage collectors are alike, so we can't really help without more details. I'm pretty sure some VMs never give back memory once they've allocated it, even after GC.
You could run your program inside a virtual machine such as Virtual Box, where you specify the memory ceiling of the guest operating system.
I'm having trouble imagining a scenario where this would be necessary though. Could you provide more information about the problem?
If you are using Java you can specify the max amount of memory using -Xmx. Search for JVM memory settings.
I'm compiling a VC8 C++ project in a WinXP VMware session. It's a hell of a lot slower than gcc 3.2 in a RedHat VMware session, so I'm looking at Task Manager. It says a very large percentage of my compile process is spent in the kernel. That doesn't sound right to me.
Is there an equivalent of strace for Win32? At least something which will give me an overview of which kernel functions are being called. There might be something that stands out as being the culprit.
The Windows Resource Kit contains a tool called kernrate. It's a sampling profiler. It can profile the entire system or a particular process. By default its resolution is at the module level, but it can be tuned down to several bytes. You should be fine with the default resolution, as you'll see which modules/drivers are consuming most of the time.
Here is some info regarding its use.
Not exactly strace, but there is a way of getting visibility into the kernel call stack, and by sampling it at times of high CPU usage, you can usually estimate what's using up all the time.
Install Process Explorer and make sure you configure it with symbol server support. You can do this by:
Installing WinDbg (Debugging Tools for Windows) to get an updated dbghelp.dll
Set Process Explorer to use this version of dbghelp.dll by setting the path in the Options | Configure Symbols menu of Process Explorer.
Also in the same dialog, set the symbols path such that it includes the MS symbol server and a local cache.
Here's an example value for the symbol path:
SRV*C:\symbolcache*http://msdl.microsoft.com/download/symbols
(You can set _NT_SYMBOL_PATH environment variable to the same value to have the debugging tools use the same symbol server and cache path.) This path will cause dbghelp.dll to download symbols to local disk when asked for symbols for a module that doesn't have symbols locally.
After having set up Process Explorer like this, you can then open a process's properties, go to the Threads tab, and double-click on the busiest thread. This will cause Process Explorer to temporarily hook into the process and scan the thread's stack, and then go and look up the symbols for the various return addresses on the stack. The return addresses' symbols, and the module names (for non-MS third-party drivers), should give you a strong clue as to where your CPU time is being spent.
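If you prefer to script that lookup rather than click through Process Explorer, the same dbghelp machinery can be driven from a small C++ program. This sketch only symbolizes its own thread's stack (walking another process's stack takes more work), but it shows the SymInitialize/SymFromAddr calls and reuses the symbol-server path above:

    #include <windows.h>
    #include <dbghelp.h>
    #include <stdio.h>
    #pragma comment(lib, "dbghelp.lib")

    int main() {
        HANDLE proc = GetCurrentProcess();
        SymSetOptions(SYMOPT_UNDNAME | SYMOPT_DEFERRED_LOADS);
        // Same symbol server + local cache layout as the Process Explorer setup above.
        SymInitialize(proc, "SRV*C:\\symbolcache*http://msdl.microsoft.com/download/symbols", TRUE);

        void *frames[32];
        USHORT count = CaptureStackBackTrace(0, 32, frames, NULL);

        char buffer[sizeof(SYMBOL_INFO) + MAX_SYM_NAME] = {};
        SYMBOL_INFO *sym = (SYMBOL_INFO *)buffer;
        sym->SizeOfStruct = sizeof(SYMBOL_INFO);
        sym->MaxNameLen = MAX_SYM_NAME;

        for (USHORT i = 0; i < count; ++i) {
            DWORD64 displacement = 0;
            if (SymFromAddr(proc, (DWORD64)(ULONG_PTR)frames[i], &displacement, sym))
                printf("%2u: %s + 0x%llx\n", i, sym->Name, displacement);
            else
                printf("%2u: %p (no symbol)\n", i, frames[i]);
        }

        SymCleanup(proc);
        return 0;
    }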
VMware support should be able to address that question. It's probably somewhere in the VMware implementation.
You can use, for example, IrpTracker, which gives you an idea of what is going on in the kernel.
Another option is using a kernel debugger, i.e. WinDbg. If the CPU load is very high, just randomly breaking into the debugger and looking at the call stack can give you an idea of which driver is behind the CPU load. But as I stated, my guess is that it will be some VMware component. It is worth checking whether the problem persists on the same computer running WinXP natively, without emulation.