Why does Mac OS X move process memory to swap even though RAM is available? - macos

I'm using a Mac with 8 GB RAM running Mac OS X 10.7.2. I wrote this small program to allocate about 6 GB of RAM:
#include <iostream>
#include <vector>

int main() {
    std::vector<char> vec;
    vec.resize(6442450944);  // ~6 GiB
    std::cerr << "finished memory allocation\n";
    // block on stdin so the process stays alive for observation
    char c;
    std::cin >> c;
}
When I run the program, at some point it stops getting real memory; swap space is allocated instead, and even content already written to RAM is moved to swap. When I run
./memtest & FOO=$! && while true; do ps -orss -p $FOO | tail -n1; sleep 0.2; done
I get the following output:
4
524296
1052740
1574188
2024272
2493552
2988396
3481148
3981252
4484076
4980016
5420580 <= from here on, RSS goes down
5407772
5301416
5211060
5163212
5118716
5081816
5039548
4981152
4897772
4822260
4771412
4715036
4666308
4598596
4542028
4521976
4479732
4399104
4312240
4225252
finished memory allocation
When I start a second process, even more memory of the first process is moved from RAM to SWAP. Is there any way to control this behaviour and make Mac OS X use the available RAM?
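For experimentation: there is no user-facing knob to stop the pager wholesale, but a process can ask the kernel to wire its own pages with mlock(2), which keeps them resident. A minimal sketch, assuming the wired-memory limit (RLIMIT_MEMLOCK) and free RAM allow it; the 1 GiB size is an arbitrary example:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main() {
    size_t size = 1UL << 30;  /* 1 GiB, arbitrary example size */
    char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    /* Wire the pages so the pager cannot evict them; this can fail
       against RLIMIT_MEMLOCK or when RAM is genuinely short. */
    if (mlock(buf, size) != 0) {
        perror("mlock");
        return 1;
    }
    memset(buf, 1, size);  /* touch every page */
    puts("1 GiB wired and touched");
    munlock(buf, size);
    free(buf);
    return 0;
}
Wiring large regions starves the rest of the system, so this is a diagnostic experiment rather than a fix.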

Related

How many cores can I use when using **make -j**?

I'm using a MacBook with an M1 chip (10 CPU cores: 8 P cores and 2 E cores, plus 24 GPU cores) and want to compile a program. How do I find out how many cores I can use for the compile?
Simply put: what should x be in make -jx?
You can enter:
$ make -j `sysctl -n hw.ncpu`
so the answer is:
x = `sysctl -n hw.ncpu`
where the backticks perform command substitution, i.e. the expression is replaced by the output of the command inside them (in modern shells, $(sysctl -n hw.ncpu) is equivalent).
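If the core count is needed from a program rather than the shell, the same value is available through sysctlbyname(3). A minimal sketch in C:
#include <stdio.h>
#include <sys/sysctl.h>

int main() {
    int ncpu = 0;
    size_t len = sizeof(ncpu);
    /* Reads the same value that `sysctl -n hw.ncpu` prints. */
    if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) != 0) {
        perror("sysctlbyname");
        return 1;
    }
    printf("%d\n", ncpu);
    return 0;
}
On Apple Silicon, related keys such as hw.perflevel0.logicalcpu (performance cores) and hw.perflevel1.logicalcpu (efficiency cores) also exist, though availability varies with the macOS version.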

Allocating a larger u-boot image

I am compiling my own kernel and bootloader (U-Boot). I added a bunch of new environment variables, but U-Boot doesn't load anymore (it simply doesn't load anything from memory). I am using a PocketBeagle and booting from an SD card, so I am editing the file "am335x_evm.h" found in include/configs/.
I am trying to lay U-Boot out so that there is more room for the environment variables and it still loads successfully from memory, but I have been unable to do so. As far as I understand, 128 KiB of memory is allocated to the U-Boot environment by default; since I added a number of variables, I am trying to increase that from 128 KiB to 512 KiB.
I have changed the following line (from 128 KiB to 512 KiB):
#define CONFIG_ENV_SIZE (512 << 10)
(By the way, does anyone know why it is shifted left by 10 bits? Presumably because x << 10 is x * 2^10 = x * 1024, i.e. a size in KiB expressed in bytes.)
I have also changed CONFIG_ENV_OFFSET and CONFIG_ENV_OFFSET_REDUND to:
#define CONFIG_ENV_OFFSET (1152 << 10) /* Originally 768 KiB in */
#define CONFIG_ENV_OFFSET_REDUND (1664 << 10) /* Originally 896 KiB in */
Then, after compiling the new U-Boot, I format the SD card and copy the new kernel and U-Boot onto it.
I start by erasing partitions:
export DISK=/dev/mmcblk0
sudo dd if=/dev/zero of=${DISK} bs=1M count=10
Then I transfer U-boot by doing:
sudo dd if=./u-boot/MLO of=${DISK} count=1 seek=1 bs=512k
sudo dd if=./u-boot/u-boot.img of=${DISK} count=2 seek=1 bs=576k
I then create the partition layout by doing:
sudo sfdisk ${DISK} <<-__EOF__
4M,,L,*
__EOF__
Then I add the kernel, device tree binaries, kernel modules, etc. When I boot and watch the serial port, I get nothing at all; U-Boot is not able to load anything from the SD card. What am I doing wrong? I'd appreciate it if you could point out what my problem is and what exactly I should be doing to increase the environment size and lay everything out correctly.

Find process where a particular system call returns a particular error

On OS X El Capitan, my log file system.log at times fills with hundreds of lines like
03/07/2016 11:52:17.000 kernel[0]: hfs_clonefile: cluster_read failed - 34
but there is no indication of the process in which this happens. Apart from that, Disk Utility could not find any fault with the file system. Still, I would like to know what is going on, and it seems to me that dtrace should be perfectly suited to finding the faulty process, but I am stuck. I know of the function return probe, but it seems to require the PID, e.g.
dtrace -n 'pidXXXX::hfs_clonefile:return { printf("ret: %d", arg1); }'
Is there a way to tell dtrace to probe all processes? And then how would I print the process name?
You can try something like this (I don't have access to an OS X machine to test it):
#!/usr/sbin/dtrace -s
#pragma D option quiet

fbt::hfs_clonefile:return
/ args[1] != 0 /
{
    printf("\n========\nprocess: %s, pid: %d, ret value: %d\n", execname, pid, args[1]);
    /* get kernel and user-space stacks */
    stack(20);
    ustack(20);
}
For fbt probes, args[1] is the value returned by the function.
The DTrace script prints the process name, pid, and return value from hfs_clonefile() whenever that value is non-zero, and it also captures the kernel and user-space stack traces. That should be more than enough data to find the source of the errors.
Assuming it works on OS X, anyway.
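Before relying on the script, you can check that the probe actually exists on your system: sudo dtrace -l -n 'fbt::hfs_clonefile:return' lists the matching probe if the kernel exports that function to the fbt provider.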
You can use the syscall provider rather than the pid provider to do this sort of thing. Something like:
sudo dtrace -n 'syscall::hfs_clonefile*:return /errno != 0/ { printf("ret: %d\n", errno); }'
The above command is a minor variant of what's used within the built-in DTrace-based errinfo utility. You can view /usr/bin/errinfo in any editor to see how it works.
However, there's no hfs_clonefile syscall, at least as far as DTrace is concerned, on my El Capitan (10.11.5) system:
$ sudo dtrace -l -n 'syscall::hfs*:'
ID PROVIDER MODULE FUNCTION NAME
dtrace: failed to match syscall::hfs*:: No probe matches description
Also, unfortunately, the syscall provider is prevented from tracing system processes by the System Integrity Protection feature introduced with El Capitan (macOS 10.11), so you would have to disable SIP, which makes your system less secure.
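If disabling SIP entirely is unappealing, csrutil status shows the current configuration, and booting into the Recovery OS reportedly allows csrutil enable --without dtrace, which relaxes only the DTrace restriction.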

Limit memory allocation to a process in Mac OS X 10.8

I would like to control the maximum memory a process can use on Mac OS X 10.8. I expected that setting ulimit -v would achieve this, but that doesn't seem to be the case. I tried the following simple commands:
ulimit -m 512
java -Xms1024m -Xmx2048m SomeJavaProgram
I assumed the second command should fail, since the Java process starts by reserving 1024 MB for itself, but it runs without complaint. Inside my sample program, I try to allocate more than 1024 MB using the following code snippet:
System.out.println("Allocating 1 GB of Memory");
List<byte[]> list = new LinkedList<byte[]>();
list.add(new byte[1073741824]); //1024 MB
System.out.println("Done....");
Both programs run without any issues. How can one control the maximum memory allocation for a program on Mac OS X?
I'm not sure if you still need the question answered, but here is the answer in case anyone else happens to have the same question.
ulimit -m strictly limits resident memory, and not the amount of memory a process can request from the operating system.
ulimit -v will limit the amount of virtual memory a process can request from the operating system.
For example:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int size = 1 << 20;  /* 1 MiB */
    void* memory = malloc(size);
    if (memory == NULL) {  /* malloc returns NULL once the limit bites */
        perror("malloc");
        return 1;
    }
    printf("allocated %i bytes...\n", size);
    free(memory);
    return 0;
}
ulimit -m 512
./memory
allocated 1048576 bytes...
ulimit -v 512
./memory
Segmentation fault
If you execute ulimit -a it should provide a summary of all the current limits for child processes.
As mentioned in the comments below by @bikram990, the Java process may not observe soft limits. To enforce memory restrictions on a Java process, you can pass arguments to it (-Xmx, -Xss, etc.).
Warning!
You can also set hard limits via the ulimit -H command, which cannot be modified by sub-processes. However, once lowered, those limits cannot be raised again without elevated permissions (root).
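For completeness, ulimit is the shell front end to setrlimit(2), so a process can also lower its own limits programmatically. A minimal sketch; the 512 MiB cap is an arbitrary example, and macOS has historically enforced RLIMIT_AS only inconsistently, which fits the behaviour observed above:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) { perror("getrlimit"); return 1; }
    rl.rlim_cur = 512UL << 20;  /* 512 MiB soft cap, arbitrary example */
    if (setrlimit(RLIMIT_AS, &rl) != 0) { perror("setrlimit"); return 1; }

    /* A request above the cap should now fail (where the limit is enforced). */
    void *p = malloc(1UL << 30);  /* 1 GiB */
    printf("1 GiB malloc %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}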

Profiling the memory used by the Linux kernel

I have Linux kernel 2.6.30 on an ARM-based embedded device.
I have to do some kernel memory usage profiling on the device.
I am thinking of monitoring the ps output for various kernel threads and modules while I carry out actions like switching Wi-Fi on and off.
Can you suggest:
Which threads do I need to monitor? How do I monitor kernel module memory usage?
Sometimes it is useful to get the real information straight from the kernel. I have used this little C program, thrown together to print real system info in an output format suited to the shell (it compiles down to a pretty small binary, if that matters):
#include <stdio.h>
#include <sys/sysinfo.h>

int main(int argc, char **argv) {
    struct sysinfo info;
    sysinfo(&info);
    /* loads[] are fixed-point values scaled by 2^16; the RAM/swap fields
       count blocks of mem_unit bytes (only MEMUSEDKB is converted here). */
    printf( "UPTIME_SECONDS=%ld\n"
            "LOAD_1MIN=%lu\n"
            "LOAD_5MIN=%lu\n"
            "LOAD_15MIN=%lu\n"
            "RAM_TOT=%lu\n"
            "RAM_FREE=%lu\n"
            "MEMUSEDKB=%lu\n"
            "RAM_SHARED=%lu\n"
            "RAM_BUFFERS=%lu\n"
            "SWAP_TOT=%lu\n"
            "SWAP_FREE=%lu\n"
            "PROCESSES=%u\n",
            info.uptime,
            info.loads[0],
            info.loads[1],
            info.loads[2],
            info.totalram,
            info.freeram,
            (info.totalram - info.freeram) * info.mem_unit / 1024,
            info.sharedram,
            info.bufferram,
            info.totalswap,
            info.freeswap,
            info.procs);
    return 0;
}
I use it in the shell like this:
eval `sysinfo`
BEFORERAM=$MEMUSEDKB
command &
sleep .1 #sleep value may need to be adjusted depending on command's run time
eval `sysinfo`
AFTERRAM=$MEMUSEDKB
echo RAMDELTA is $(($AFTERRAM - BEFORERAM ))
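As for per-module numbers: on Linux, /proc/modules (which lsmod pretty-prints) includes each loaded module's memory footprint, and /proc/meminfo and /proc/slabinfo give kernel-wide breakdowns to correlate the per-thread ps figures against.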
