Increase memory for shared memory - macOS

When trying to get shared memory, shmget() often fails because it is unable to allocate memory. The physical amount of RAM really shouldn't be the problem (4GB should be enough, I think).
Rather, there is probably a limit on shared memory allocation set somewhere in the system properties. Does anyone know where I can find this setting?
I'm using Mac OS X Version 10.6

Depends on the OS. The PostgreSQL documentation has tips for changing the shared memory limits on various platforms; on Mac OS X the SysV limits are exposed as sysctl variables (kern.sysv.shmmax, kern.sysv.shmall, and friends).
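As a quick check, here is a minimal C sketch (assuming a macOS system that exposes the SysV limits via sysctl, which 10.6 does) that reads the current segment-size ceiling:

#include <stdio.h>
#include <stdint.h>
#include <sys/sysctl.h>

int main(void) {
    uint64_t shmmax = 0;
    size_t len = sizeof(shmmax);
    /* kern.sysv.shmmax is the maximum size in bytes of a single SysV
       shared memory segment; shmget() fails once a request exceeds it. */
    if (sysctlbyname("kern.sysv.shmmax", &shmmax, &len, NULL, 0) == 0)
        printf("kern.sysv.shmmax = %llu bytes\n", (unsigned long long)shmmax);
    else
        perror("sysctlbyname");
    return 0;
}

The same values can be raised persistently via /etc/sysctl.conf, which is what the PostgreSQL notes for Mac OS X describe.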

Related

When does the OS use the hard disk as additional memory?

Is it correct that the operating system will use my hard drive space if the active programs use up all of the RAM? Won't this lead to a performance issue (all programs will work slower, because reading from disk is slower than reading from RAM)?
Is it correct that the operating system will use my hard drive space if the active programs use up all of the RAM?
No. The operating system uses one or more paging files to support virtual memory. In a virtual memory system, all process memory is backed by secondary storage, so the OS uses hard drive space even when there is available RAM.
Won't this lead to a performance issue (all programs will work slower, because reading from disk is slower than reading from RAM)?
If a page of memory has been paged out and has to be retrieved from disk (a page fault), it is a slow process.
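To make the mechanism concrete, here is a small C sketch (the 256 MB buffer size is arbitrary) that uses getrusage() to count the page faults taken when memory is touched for the first time versus when it is already resident:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

static long faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt + ru.ru_majflt;  /* minor + major page faults */
}

int main(void) {
    size_t sz = 256 * 1024 * 1024;
    char *p = malloc(sz);
    if (!p) { perror("malloc"); return 1; }
    long before = faults();
    memset(p, 1, sz);            /* first touch: kernel demand-pages it in */
    long after_first = faults();
    memset(p, 2, sz);            /* already resident: few or no new faults */
    long after_second = faults();
    printf("faults on first touch: %ld, on second touch: %ld\n",
           after_first - before, after_second - after_first);
    free(p);
    return 0;
}

On the first pass nearly every page is faulted in on demand; the second pass touches pages that are already resident, so the fault count is near zero.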

How to split RAM between the Windows OS and my application?

I want to split the RAM in my PC into two parts: half for the Windows OS and the other half for an image buffer for my application. For example, my desktop has 32GB of memory, and I want to assign 16GB to Windows and reserve the other 16GB for my application's access only. Windows shouldn't touch the other 16GB, but my application should use it as an image buffer. I know how to do this in Linux, but I need to do it on Windows. I think I have to configure the BIOS and implement a Windows driver that remaps the image buffer's pages for my application to access.
Is there any good way to do this?
You can do this with the Address Windowing Extensions API. Although this was originally designed for 32-bit applications, it is still available to 64-bit applications, and memory allocated this way is not available to the virtual memory management system.
However, you should note that in most cases allowing the virtual memory manager to do its job will result in better overall performance than explicitly locking down memory will.
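Here is a minimal sketch of the AWE calls involved (a 64 MB demo size stands in for the real 16GB buffer; the account needs the "Lock pages in memory" privilege, and error handling is abbreviated):

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Demo size: 64 MB of physical pages. A real image buffer would
       request far more, subject to the "Lock pages in memory" right. */
    ULONG_PTR pageCount = (64ull * 1024 * 1024) / si.dwPageSize;
    ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                             pageCount * sizeof(ULONG_PTR));

    /* Pin physical pages; AWE pages are excluded from paging entirely. */
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns)) {
        fprintf(stderr, "AllocateUserPhysicalPages: %lu\n", GetLastError());
        return 1;
    }

    /* Reserve a virtual range that can only be backed by AWE pages. */
    void *view = VirtualAlloc(NULL, pageCount * si.dwPageSize,
                              MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

    /* Map the pinned physical pages into the reserved range. */
    if (!view || !MapUserPhysicalPages(view, pageCount, pfns)) {
        fprintf(stderr, "mapping failed: %lu\n", GetLastError());
        return 1;
    }

    memset(view, 0, pageCount * si.dwPageSize);  /* buffer is now usable */
    return 0;
}

Pages mapped this way never hit the page file, which approximates the "Windows doesn't touch it" requirement without a custom driver.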

How can a 4GB process run on only 2GB of RAM?

Given a 32-bit/64-bit processor, can a 4GB process run on 2GB of RAM? Will it use virtual memory, or will it not run at all?
This is HIGHLY platform dependent. On many 32-bit OSes, no single process can ever use more than 2GB of memory, regardless of the physical memory installed or virtual memory allocated.
For example, my work computers run 32-bit Linux with PAE (Physical Address Extensions), which allows the machine to have 16GB of RAM installed. The 2GB-per-process limit still applies, however; having the extra RAM simply allows me to run more individual processes. 32-bit Windows works the same way.
64-bit OSes are more of a mixed bag. 64-bit Linux will allow individual processes to map memory well in excess of 32GB (though again, this varies from kernel to kernel), limited only by the amount of swap (Linux virtual memory) you have. 64-bit Windows is a complete crap shoot: certain versions allow only 2GB per process, but most allow more than 32GB, limited only by the size of the page file the user has allocated.
Microsoft provides a useful table breaking down the various memory limits on various OS versions/editions. Unfortunately there is no such table that I can find with cursory searching for Linux since it is so fragmented.
Short answer: Depends on the system.
Most 32-bit systems have a limit of 2GB per process. If your system allows more than 2GB per process, then we can move on to the next part of your question.
Most modern systems use virtual memory. Yet there are some constrained (and various old) systems that would just run out of space and make you cry. I believe uClinux supports both MMU and MMU-less architectures. Most 32-bit processors have an MMU (a few don't; see the ARM Cortex-M0), and a handful of 16-bit or 8-bit parts have one as well (see the Atmel ATtiny13A-MMU and the Atari MMU).
Any process that needs more memory than is physically available will require some form of memory swap (e.g., a swap partition or file).
Virtual memory is divided into pages. At any point, a page resides either in RAM or in swap. Any attempt to access a memory page that isn't loaded in RAM triggers an interrupt called a page fault, which is handled by the kernel.
A 64-bit process needing 4GB on a 64-bit OS can generally run in 2GB of physical RAM, by using virtual memory, assuming disk swap space is available, but performance will be severely impacted if all of that memory is frequently accessed.
A 32-bit process can't actually address a full 4GB of memory in practice (the operating system reserves part of the address space), so it won't run. Depending on the OS, it can probably run a process that needs more than 2GB but less than 3-4GB.
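As an illustration of the 64-bit case, here is a small C sketch (assuming a 64-bit build and enough swap space configured) that allocates 4GB and touches every page, forcing the kernel to demand-page and eventually swap on a 2GB machine:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t total = 4ull * 1024 * 1024 * 1024;  /* 4 GB */
    size_t page  = 4096;                       /* typical page size */
    char *buf = malloc(total);   /* on 64-bit this reserves address space,
                                    not physical RAM, so it can succeed */
    if (!buf) { perror("malloc"); return 1; }
    /* Writing one byte per page forces the OS to commit real pages;
       with only 2 GB of RAM, older pages are evicted to swap as we go. */
    for (size_t off = 0; off < total; off += page)
        buf[off] = 1;
    puts("touched all 4 GB");
    free(buf);
    return 0;
}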

Mac OS X occupied memory increases quickly

I noticed that when I run Xcode, especially when I start Interface Builder, the memory occupied by Mac OS X increases quickly.
It's not only Xcode; some other apps also end up occupying too much memory after running for a while.
Even though my Mac has 4GB of memory, sometimes I have to use a tool to free memory.
What is the reason, and how do I avoid this happening in the Mac app I'm developing?
Any comments are welcome.
I just experienced something similar (but probably not the same) in my Qt application.
I was reading and checksumming lots of files, and free memory kept dropping, though my application's "real memory" stayed at a steady 50-ish MB. However, the amount of "inactive memory" kept climbing.
What was happening was that every file I read was being added to the disk cache. The memory consumed by the disk cache is apparently marked as "inactive", which should be just as available as "free" memory according to Apple (http://support.apple.com/kb/HT1342), but that didn't stop OS X from starting to swap when "free" dropped below 50-ish MB.
In C (f here is the QFile being read):
#include <fcntl.h>
/* Bypass the unified buffer cache for this descriptor, so reads
   are not added to the disk cache. */
fcntl(f.handle(), F_GLOBAL_NOCACHE, 1);
This seemed to fix it by bypassing disk caching for that file descriptor.
Freeing up inactive memory (if that is indeed your problem) can also be done from the command line using the "purge" command.
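For reference, here is a self-contained plain-C sketch of the same idea using the per-descriptor F_NOCACHE flag (the 64KB buffer size is arbitrary; F_GLOBAL_NOCACHE, used above, additionally evicts pages that other descriptors may have cached for the same file):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    /* Disable caching for this descriptor only; reads bypass the
       disk cache instead of inflating "inactive" memory. */
    if (fcntl(fd, F_NOCACHE, 1) == -1)
        perror("fcntl(F_NOCACHE)");
    char buf[65536];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;  /* checksum or otherwise process buf[0..n) here */
    close(fd);
    return 0;
}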

Automatic Recovery of Virtual Memory Allocation

My system uses a third-party kernel built on native libraries (C++) with a J2EE upper layer running on Tomcat 6. The vendor stipulates a 32-bit JDK, and overall the application is very memory hungry. We are presently running on Windows x64 with a 32-bit JVM. Essentially, the JVM will hang once the virtual size gets close to the 2GB 32-bit addressing limit.
Question: From time to time, the third-party frameworks make large requests for memory, and this pushes up the virtual size allocated on the server. The allocated virtual size never recovers, even though it appears that the kernel is reducing its memory needs. In a typical Tomcat deployment, does the virtual size ever recover automatically, or does it always act as a high-water mark that keeps on rising? Is there a way to tell the JVM to try to lower the virtual size dynamically?
I suspect that the third-party native kernel is to blame here, but I need to investigate all our options.
FYI - AWE on Windows is not a clear option, as the vendor does not officially support any JVMs that have AWE support. Migration to Linux is also not an easy path but is being considered.
