Limit Memory allocation to a process in Mac-OS X 10.8 - macos

I would like to control the maximum memory a process can use on Mac OS X 10.8. I thought that setting ulimit -v would achieve this, but that doesn't seem to be the case. I tried the following simple commands:
ulimit -m 512
java -Xms1024m -Xmx2048m SomeJavaProgram
I assumed that the second command would fail, as the Java process starts by reserving 1024 MB of memory for itself, but it passes peacefully. Inside my sample program, I try to allocate more than 1024 MB using the following code snippet:
System.out.println("Allocating 1 GB of Memory");
List<byte[]> list = new LinkedList<byte[]>();
list.add(new byte[1073741824]); //1024 MB
System.out.println("Done....");
Both of these run without any issues. How can we control the maximum memory allocation for a program on Mac OS X?

I'm not sure if you still need the question answered, but here is the answer in case anyone else happens to have the same question.
ulimit -m strictly limits resident memory, and not the amount of memory a process can request from the operating system.
ulimit -v will limit the amount of virtual memory a process can request from the operating system.
For example:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int size = 1 << 20;    /* request 1 MiB */
    void* memory = NULL;

    memory = malloc(size);
    printf("allocated %i bytes...\n", size);

    return 0;
}
$ ulimit -m 512
$ ./memory
allocated 1048576 bytes...
$ ulimit -v 512
$ ./memory
Segmentation fault
If you execute ulimit -a it should provide a summary of all the current limits for child processes.
As mentioned in the comments below by @bikram990, the Java process may not observe soft limits. To enforce Java memory restrictions, you can pass arguments to the process (-Xmx, -Xss, etc.).
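For instance, a rough sketch of constraining the JVM directly on the command line (SomeJavaProgram is the class from the question above; the sizes are purely illustrative):
java -Xms256m -Xmx512m -Xss512k SomeJavaProgram
This caps the heap at 512 MB and each thread's stack at 512 KB.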
Warning!
You can also set hard limits via the ulimit -H command, which cannot be modified by sub-processes. However, those limits also cannot be raised again once lowered, without elevated permissions (root).
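As a sketch, a hard cap on virtual memory could look like this (the value is in kilobytes and purely illustrative; once lowered, it cannot be raised again in that shell without root):
ulimit -Hv 1048576
./memory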

Related

Errno::ENOMEM: Cannot allocate memory - cat

I have a job running in production which processes XML files.
There are around 4k XML files, 8 to 9 GB in size altogether.
After processing we get CSV files as output. I have a cat command which merges all the CSV files into a single file, and I'm getting:
Errno::ENOMEM: Cannot allocate memory
on the cat (backtick) command.
Below are a few details:
System Memory - 4 GB
Swap - 2 GB
Ruby : 1.9.3p286
Files are processed using nokogiri and saxbuilder-0.0.8.
There is a block of code which processes the 4,000 XML files, and the output is saved as CSV (one per XML file); sorry, I'm not supposed to share it because of company policy.
Below is the code which merges the output files into a single file:
Dir["#{processing_directory}/*.csv"].sort_by {|file| [file.count("/"), file]}.each {|file|
`cat #{file} >> #{final_output_file}`
}
I've taken memory consumption snapshots during processing. It consumes almost all of the memory, but it doesn't fail there.
It always fails on the cat command.
I guess that the backtick tries to fork a new process, which doesn't get enough memory, so it fails.
Please let me know your opinion and any alternative to this.
So it seems that your system is running pretty low on memory, and spawning a shell plus calling cat is too much for the little memory left.
If you don't mind losing some speed, you can merge the files in Ruby with small buffers.
This avoids spawning a shell, and you can control the buffer size.
This is untested, but you get the idea:
buffer_size = 4096
output_file = File.open(final_output_file, 'w')

Dir["#{processing_directory}/*.csv"].sort_by { |file| [file.count("/"), file] }.each do |file|
  File.open(file) do |f|
    # copy in small chunks so only buffer_size bytes are held in memory at a time
    while buffer = f.read(buffer_size)
      output_file.write(buffer)
    end
  end
end

output_file.close
You are probably out of physical memory, so double-check that and verify your swap (free -m). In case you don't have swap space, create one.
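For example, on a Linux host a swap file could be added roughly like this (path and size are illustrative, and root is required):
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile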
Otherwise, if your memory is fine, the error is most likely caused by shell resource limits. You can check them with ulimit -a.
They can be changed with ulimit (see: help ulimit), e.g.
ulimit -Sn unlimited && ulimit -Sl unlimited
To make these limits persistent, you can create a per-user limits file with the following shell command:
sudo tee /etc/security/limits.d/01-${USER}.conf <<EOF
${USER} soft core unlimited
${USER} soft fsize unlimited
${USER} soft nofile 4096
${USER} soft nproc 30654
EOF
Or use /etc/sysctl.conf to change the limit globally (man sysctl.conf), e.g.
kern.maxprocperuid=1000
kern.maxproc=2000
kern.maxfilesperproc=20000
kern.maxfiles=50000
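The same keys can usually be applied at runtime with sysctl -w (a sketch; root is required and the kernel may cap some values):
sudo sysctl -w kern.maxproc=2000
sudo sysctl -w kern.maxprocperuid=1000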
I had the same problem, but instead of cat it was sendmail (the mail gem).
I found the problem and a solution here: installing the posix-spawn gem, e.g.
gem install posix-spawn
and here is the example:
a = (1..500_000_000).to_a
require 'posix/spawn'
POSIX::Spawn::spawn('ls')
This time, creating the child process should succeed.
See also: Minimizing Memory Usage for Creating Application Subprocesses at Oracle.

Ruby profiler stack level too deep error

It seems like I always get this error on one of my scripts:
/Users/amosng/.rvm/gems/ruby-1.9.3-p194/gems/ruby-prof-0.11.2/lib/ruby-prof/profile.rb:25: stack level too deep (SystemStackError)
Has anyone encountered this error before? What could be causing it, and what can I be doing to prevent it from happening?
I run my ruby-prof scripts using the command
ruby-prof --printer=graph --file=profile.txt scraper.rb -- "fall 2012"
Edit: I'm on Mac OS X, if that matters. Doing ulimit -s 64000 doesn't seem to help much, unfortunately. Here is what ulimit -a gives:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 64000
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
Edit 2:
Andrew Grimm's solution worked just fine to prevent ruby-prof from crashing, but the profiler seems to have problems of its own, because I see percentages like 679.50% of total time taken for a process...
One workaround would be to turn tail call optimization on.
The following is an example of something that works with TCO on, but doesn't work when TCO is off.
RubyVM::InstructionSequence.compile_option = {
  :tailcall_optimization => true,
  :trace_instruction => false
}

def countUpTo(current, final)
  puts current
  return nil if current == final
  countUpTo(current + 1, final)
end

countUpTo(1, 10_000)
Stack level too deep usually means an infinite loop. If you look at the ruby-prof code where the error happens you will see that it's a method that detects recursion in the call stack.
Try looking into the code where you are using recursion (how many places in your code can you be using recursion?) and see if there is a condition that would cause it to never bottom out.
It could also mean that your system stack just isn't big enough to handle what you are trying to do. Maybe you are processing a large data set recursively? You can check your stack size (unixy systems):
$ ulimit -a
and increase the stack size:
$ ulimit -s 16384
You can also consider adjusting your algorithm. See this Stack Overflow question.
I hope I'm not just re-hashing an existing question...
Having percentages go over 100% in Ruby-prof has been a known bug, but should be fixed now.

Profiling the memory used by linux kernel

I have linux kernel 2.6.30 on an ARM based embedded device.
I have to do some kernel memory usage profiling on the device.
I am thinking of monitoring the ps output on various kernel threads and modules while I carry out actions like wifi on/off etc.
Can you suggest:
Which threads do I need to monitor? How can I monitor kernel module memory usage?
Sometimes it is useful to get the real info straight from the kernel. I have used this little C program, which I threw together to get real system info in an output format suited for the shell (it compiles down to a pretty small binary, if that matters):
#include <stdio.h>
#include <sys/sysinfo.h>

int main(int argc, char **argv) {
    struct sysinfo info;
    sysinfo(&info);

    /* load averages are raw fixed-point values (scaled by 2^16);
       RAM/swap figures are in units of info.mem_unit bytes */
    printf("UPTIME_SECONDS=%ld\n"
           "LOAD_1MIN=%lu\n"
           "LOAD_5MIN=%lu\n"
           "LOAD_15MIN=%lu\n"
           "RAM_TOT=%lu\n"
           "RAM_FREE=%lu\n"
           "MEMUSEDKB=%lu\n"
           "RAM_SHARED=%lu\n"
           "RAM_BUFFERS=%lu\n"
           "SWAP_TOT=%lu\n"
           "SWAP_FREE=%lu\n"
           "PROCESSES=%u\n",
           info.uptime,
           info.loads[0],
           info.loads[1],
           info.loads[2],
           info.totalram,
           info.freeram,
           (info.totalram - info.freeram) * info.mem_unit / 1024,
           info.sharedram,
           info.bufferram,
           info.totalswap,
           info.freeswap,
           info.procs);
    return 0;
}
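Assuming the source is saved as sysinfo.c (the file name is just an example), it can be built natively on the device or with your ARM cross-toolchain, roughly like this:
gcc -Os -o sysinfo sysinfo.c
# or with a cross toolchain (prefix is just an example):
arm-linux-gnueabi-gcc -Os -o sysinfo sysinfo.c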
I use it in the shell like this:
eval `sysinfo`
BEFORERAM=$MEMUSEDKB
command &
sleep .1 #sleep value may need to be adjusted depending on command's run time
eval `sysinfo`
AFTERRAM=$MEMUSEDKB
echo RAMDELTA is $(($AFTERRAM - BEFORERAM ))

Why does Mac OS X move process memory to swap even though RAM is available?

I'm using a Mac with 8 GB RAM running Mac OS X 10.7.2. I wrote this small program to allocate about 6GB of RAM:
#include <iostream>
#include <vector>

int main() {
    std::vector<char> vec;
    vec.resize(6442450944);                       // 6 GiB of zero-initialized chars
    std::cerr << "finished memory allocation\n";
    char c;
    std::cin >> c;                                // block so memory usage can be observed
}
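For reference, a sketch of how this could be compiled into the memtest binary used below (the source file name memtest.cpp is just an example):
clang++ -o memtest memtest.cpp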
When I run the program, at some point it no longer gets real memory; instead, swap space is allocated. Even content already written to RAM is moved to swap. When I run
./memtest & FOO=$! && while true; do ps -orss -p $FOO | tail -n1; sleep 0.2s; done
I get the following output:
4
524296
1052740
1574188
2024272
2493552
2988396
3481148
3981252
4484076
4980016
5420580 <= from here on, RSS goes down
5407772
5301416
5211060
5163212
5118716
5081816
5039548
4981152
4897772
4822260
4771412
4715036
4666308
4598596
4542028
4521976
4479732
4399104
4312240
4225252
finished memory allocation
When I start a second process, even more memory of the first process is moved from RAM to SWAP. Is there any way to control this behaviour and make Mac OS X use the available RAM?

Difference between memory_get_peak_usage and actual php process' memory usage

Why does the result of PHP's memory_get_peak_usage differ so much from the memory size shown as allocated to the process by the 'top' or 'ps' commands in Linux?
I've set memory_limit to 2 MB in php.ini.
My one-line PHP script with
echo memory_get_peak_usage(true);
says that it is using 786432 bytes (768 KB).
If I ask the system about the current PHP process with
echo shell_exec('ps -p '.getmypid().' -Fl');
it gives me
F S UID PID PPID C PRI NI ADDR SZ WCHAN RSS PSR STIME TTY TIME CMD
5 S www-data 14599 14593 0 80 0 - 51322 pipe_w 6976 2 18:53 ? 00:00:00 php-fpm: pool www
RSS param is 6976, so memory usage is 6976 * 4096 = 28573696 = ~28 Mb
Where does that 28 MB come from, and is there any way to decrease the memory used by the php-fpm process?
The memory is mostly used by the PHP process itself; memory_get_peak_usage() returns only the memory used by your specific script. Ways to reduce the overhead are to reduce the number of extensions, statically compile PHP, and so on. But don't forget that php-fpm (should) fork, and a lot of the memory that doesn't differ between PHP processes is in fact shared (until it changes).
PHP itself may only be set to a 2 MB limit, but it's running WITHIN an Apache child process, and that process will have a much higher memory footprint.
If you were running the script from the command line, you'd get the memory usage of PHP by itself, as it's not wrapped within Apache and is running on its own.
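For instance, a quick way to see PHP's own footprint without the FPM/Apache wrapper is to run it from the CLI (a sketch, assuming the php binary is on your PATH):
php -d memory_limit=2M -r 'echo memory_get_peak_usage(true), PHP_EOL;'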
The peak memory usage is for the current script only.
