Making use of all available RAM in a Haskell program? - Windows

I have 8 GB of RAM, but Haskell programs seemingly can only use 1.3 GB.
I'm using this simple program to determine how much memory a GHC program can allocate:
import System.Environment
import Data.Set as Set

main = do
    args <- getArgs
    let n = (read $ args !! 0) :: Int
        s = Set.fromList [0..n]
    putStrLn $ "min: " ++ (show $ findMin s)
    putStrLn $ "max: " ++ (show $ findMax s)
Here's what I'm finding:
running ./mem.exe 40000000 +RTS -s succeeds and reports 1113 MB total memory in use
running ./mem.exe 42000000 +RTS -s fails with out of memory error
running ./mem.exe 42000000 +RTS -s -M4G errors out with -M4G: size outside allowed range
running ./mem.exe 42000000 +RTS -s -M3.9G fails with out of memory error
Monitoring the process via the Windows Task Manager shows that the max memory usage is about 1.2 GB.
My system: Win7, 8 GB RAM, Haskell Platform 2011.04.0.0, ghc 7.0.4.
I'm compiling with: ghc -O2 mem.hs -rtsopts
How can I make use of all of my available RAM? Am I missing something obvious?

Currently, GHC on Windows is a 32-bit program; I think a 64-bit GHC for Windows is supposed to become available when 7.6 comes.
One consequence of that is that on Windows you can't use more than 4G - 1 block of memory, since the maximum allowed as a size parameter is HS_WORD_MAX:
decodeSize(rts_argv[arg], 2, BLOCK_SIZE, HS_WORD_MAX) / BLOCK_SIZE;
With 32-bit Words, HS_WORD_MAX = 2^32-1.
That explains
running ./mem.exe 42000000 +RTS -s -M4G errors out with -M4G: size outside allowed range
since decodeSize() decodes 4G as 2^32.
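Spelled out: 4G parses as 4 × 2^30 = 4,294,967,296 = 2^32, one more than HS_WORD_MAX = 2^32 - 1 = 4,294,967,295, so the size check rejects it.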
This limitation will remain even after you upgrade your GHC, until a 64-bit GHC for Windows is finally released.
Since it runs as a 32-bit process, its user-mode virtual address space is limited to 2 or 4 GB (depending on the status of the IMAGE_FILE_LARGE_ADDRESS_AWARE flag); cf. Memory Limits for Windows Releases.
Now, you are trying to construct a Set containing 42 million 4-byte Ints. A Data.Set.Set has five words of overhead per element (constructor, size, left and right subtree pointers, pointer to the element), so each element costs 5 × 4 + 4 = 24 bytes, and the Set will take up about 42,000,000 × 24 bytes ≈ 1.008 'metric' GB, or roughly 0.94 GiB. But the process uses about twice that or more, since it also needs space for garbage collection, at least the size of the live heap.
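As a back-of-envelope check, here is a minimal sketch of that estimate, assuming the six-words-per-element model just described (estimateSetBytes is only an illustrative helper, not something from the libraries):

-- Back-of-envelope size estimate for the Set: five overhead words per
-- element plus one word for the Int itself, i.e. six words per element.
estimateSetBytes :: Integer -> Integer -> Integer
estimateSetBytes wordSize n = n * 6 * wordSize

main :: IO ()
main = do
    print (estimateSetBytes 4 42000000)  -- 32-bit: 1008000000 bytes, about 0.94 GiB
    print (estimateSetBytes 8 21000000)  -- 64-bit: the same footprint, hence the input used below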
Running the programme on my 64-bit Linux system, with input 21000000 (to make up for the twice-as-large Ints and pointers), I get
$ ./mem +RTS -s -RTS 21000000
min: 0
max: 21000000
31,330,814,200 bytes allocated in the heap
4,708,535,032 bytes copied during GC
1,157,426,280 bytes maximum residency (12 sample(s))
13,669,312 bytes maximum slop
2261 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 59971 colls, 0 par 2.73s 2.73s 0.0000s 0.0003s
Gen 1 12 colls, 0 par 3.31s 10.38s 0.8654s 8.8131s
INIT time 0.00s ( 0.00s elapsed)
MUT time 12.12s ( 13.33s elapsed)
GC time 6.03s ( 13.12s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 18.15s ( 26.45s elapsed)
%GC time 33.2% (49.6% elapsed)
Alloc rate 2,584,429,494 bytes per MUT second
Productivity 66.8% of total user, 45.8% of total elapsed
but top reports only 1.1g of memory use - top, and presumably the Task Manager, reports only live heap.
So it seems IMAGE_FILE_LARGE_ADDRESS_AWARE is not set: your process is limited to a 2 GB address space, and the 42-million-element Set needs more than that, unless you specify a maximum or suggested heap size that is smaller:
$ ./mem +RTS -s -M1800M -RTS 21000000
min: 0
max: 21000000
31,330,814,200 bytes allocated in the heap
3,551,201,872 bytes copied during GC
1,157,426,280 bytes maximum residency (12 sample(s))
13,669,312 bytes maximum slop
1154 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 59971 colls, 0 par 2.70s 2.70s 0.0000s 0.0002s
Gen 1 12 colls, 0 par 4.23s 4.85s 0.4043s 3.3144s
INIT time 0.00s ( 0.00s elapsed)
MUT time 11.99s ( 12.00s elapsed)
GC time 6.93s ( 7.55s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 18.93s ( 19.56s elapsed)
%GC time 36.6% (38.6% elapsed)
Alloc rate 2,611,793,025 bytes per MUT second
Productivity 63.4% of total user, 61.3% of total elapsed
Setting the maximal heap size below what it would use naturally actually lets it fit in hardly more than the space needed for the Set, at the price of slightly longer GC time; and suggesting a heap size of -H1800M lets it finish using only
1831 MB total memory in use (0 MB lost due to fragmentation)
So if you specify a maximal heap size below 2GB (but large enough for the Set to fit), it should work.
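For the Windows case in the question, the analogous invocation (untested here, and assuming the same mem.exe built above) would be along the lines of:
./mem.exe 42000000 +RTS -s -M1800M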

The default heap size is unlimited.
Using GHC 7.2 on a 64-bit Windows XP machine, I can allocate higher values by explicitly setting the heap size larger:
$ ./A 42000000 +RTS -s -H1.6G
min: 0
max: 42000000
32,590,763,756 bytes allocated in the heap
3,347,044,008 bytes copied during GC
714,186,476 bytes maximum residency (4 sample(s))
3,285,676 bytes maximum slop
1651 MB total memory in use (0 MB lost due to fragmentation)
and
$ ./A 42000000 +RTS -s -H1.7G
min: 0
max: 42000000
32,590,763,756 bytes allocated in the heap
3,399,477,240 bytes copied during GC
757,603,572 bytes maximum residency (4 sample(s))
3,281,580 bytes maximum slop
1754 MB total memory in use (0 MB lost due to fragmentation)
even:
$ ./A 42000000 +RTS -s -H1.85G
min: 0
max: 42000000
32,590,763,784 bytes allocated in the heap
3,492,115,128 bytes copied during GC
821,240,344 bytes maximum residency (4 sample(s))
3,285,676 bytes maximum slop
1909 MB total memory in use (0 MB lost due to fragmentation)
That is, I can allocate up to the Windows XP 2G process limit. I imagine on Win 7 you won't have such a low limit -- this table suggests either 4G or 192G -- just ask for as much as you need (and use a more recent GHC).

Related

cPanel on AWS EC2 Instance: How to Resize Disk

I have a question about my disk partition; here is the result of the fdisk -l command:
Disk /dev/loop0: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvda: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00050d75
Device Boot Start End Blocks Id System
/dev/xvda1 * 1 26109 209714176 83 Linux
As you can see, I have 500GB of space (/dev/xvda) and our cPanel is using only 200GB (/dev/xvda1).
Here is the result of the lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 4G 0 loop /tmp
xvda 202:0 0 500G 0 disk
└─xvda1 202:1 0 200G 0 part /
Here you can see I have a 500GB disk. My question is: how can I resize xvda1 so it can use the available space, or how can I create new disk space for our cPanel to use? My aim is to increase the disk space in cPanel, but I don't know how this is possible.
Thanks for your help!
You can use "growpart" to resize the partition and then resize the file system.
Install "cloud-guest-utils" if it is not installed already:
apt install cloud-guest-utils
Resize the partition:
growpart /dev/xvda 1
Check the result:
lsblk
Resize the filesystem:
resize2fs /dev/xvda1
Check again after resizing:
df -h
Take a snapshot of your volume before trying this.
Run the following:
sudo yum install cloud-guest-utils
growpart /dev/xvda 1
then reboot

Puppet agent hangs and eventually gives a memory allocation error

I'm using puppet as a provisioner for Vagrant, and am coming across an issue where Puppet will hang for an extremely long time when I do a "vagrant provision". Building the box from scratch using "vagrant up" doesn't seem to be a problem, only subsequent provisions.
If I turn Puppet debug on and watch where it hangs, it seems to stop at various, seemingly arbitrary points, the first of which is:
Info: Applying configuration version '1401868442'
Debug: Prefetching yum resources for package
Debug: Executing '/bin/rpm --version'
Debug: Executing '/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n''
Executing this command on the server myself returns immediately.
Eventually, it gets past this and continues. Using the summary option, I get the following, after waiting for a very long time for it to complete:
Debug: Finishing transaction 70191217833880
Debug: Storing state
Debug: Stored state in 9.39 seconds
Notice: Finished catalog run in 1493.99 seconds
Changes:
  Total: 2
Events:
  Failure: 2
  Success: 2
  Total: 4
Resources:
  Total: 18375
  Changed: 2
  Failed: 2
  Skipped: 35
  Out of sync: 4
Time:
  User: 0.00
  Anchor: 0.01
  Schedule: 0.01
  Yumrepo: 0.07
  Augeas: 0.12
  Package: 0.18
  Exec: 0.96
  Service: 1.07
  Total: 108.93
  Last run: 1401869964
  Config retrieval: 16.49
  Mongodb database: 3.99
  File: 76.60
  Mongodb user: 9.43
Version:
  Config: 1401868442
  Puppet: 3.4.3
This doesn't seem very helpful to me, as the times listed total only about 108 seconds, so where have the other 1385 seconds gone?
Throughout, Puppet seems to be hammering the box, using up a lot of CPU, but still doesn't seem to advance. The memory it uses seems to continually increase. When I kick off the command, top looks like this:
Cpu(s): 10.2%us, 2.2%sy, 0.0%ni, 85.5%id, 2.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 4956928k total, 2849296k used, 2107632k free, 63464k buffers
Swap: 950264k total, 26688k used, 923576k free, 445692k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28486 root 20 0 439m 334m 3808 R 97.5 6.9 2:02.92 puppet
22 root 20 0 0 0 0 S 1.3 0.0 0:07.55 kblockd/0
18276 mongod 20 0 788m 31m 3040 S 1.3 0.6 2:31.82 mongod
20756 jboss-as 20 0 3081m 1.5g 21m S 1.3 31.4 7:13.15 java
20930 elastics 20 0 2340m 236m 6580 S 1.0 4.9 1:44.80 java
266 root 20 0 0 0 0 S 0.3 0.0 0:03.85 jbd2/dm-0-8
22717 vagrant 20 0 98.0m 2252 1276 S 0.3 0.0 0:01.81 sshd
28762 vagrant 20 0 15036 1228 932 R 0.3 0.0 0:00.10 top
1 root 20 0 19364 1180 964 S 0.0 0.0 0:00.86 init
To me, this seems fine: there's over 2GB of available memory and plenty of available swap. I have a max open files limit of 1024.
About 10-15 minutes later, still no advance in the console output, but top looks like this:
Cpu(s): 11.2%us, 1.6%sy, 0.0%ni, 86.9%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%s
Mem: 4956928k total, 3834376k used, 1122552k free, 64248k buffers
Swap: 950264k total, 24408k used, 925856k free, 445728k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28486 root 20 0 1397m 1.3g 3808 R 99.6 26.7 15:16.19 puppet
18276 mongod 20 0 788m 31m 3040 R 1.7 0.6 2:45.03 mongod
20756 jboss-as 20 0 3081m 1.5g 21m S 1.3 31.4 7:25.93 java
20930 elastics 20 0 2340m 238m 6580 S 0.7 4.9 1:52.03 java
8486 root 20 0 308m 952 764 S 0.3 0.0 0:06.03 VBoxService
As you can see, puppet is now using a lot more of the memory, and it seems to continue in this fashion. The box it's building has 5GB of RAM, so I wouldn't have expected it to have memory issues. However, further down the line, after a long wait, I do get "Cannot allocate memory - fork(2)"
Running ulimit -a, I get:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 38566
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Which, again looks fine to me...
To be honest, I'm completely at a loss as to how to go about solving this, or what is causing it.
Any help or insight would be greatly appreciated!
EDIT:
So I managed to fix this eventually... It came down to using recurse with a file directive for a large directory. The target directory in question contained around 2GB worth of files, and Puppet took a huge amount of time loading this into memory and doing its hashes and comparisons. The first time I stood the server up, the directory was relatively empty so the check was quick, but then other resources were placed in it that increased its size massively, meaning subsequent runs took much longer.
The memory error that eventually was thrown was because, I can only assume, Puppet was loading the whole thing into memory in order to do its stuff...
I found a way around using the recurse function, and am now trying to avoid it like the plague...
Yeah, the problem with the recurse parameter on the file type is that it checks every single file's checksum, which on a massive directory adds up real quick.
As Felix suggests, using checksum => none is one way to fix it, another is to accomplish the task you're trying to do (say chmod or chown a whole directory) with an exec performing the native task, with an unless to check if it's already been done.
Something like:
define check_mode($mode) {
  exec { "/bin/chmod $mode $name":
    unless => "/bin/sh -c '[ $(/usr/bin/stat -c %a $name) == $mode ]'",
  }
}
Taken from http://projects.puppetlabs.com/projects/1/wiki/File_Permission_Check_Patterns

Unable to resize root partition on EC2 CentOS [closed]

I created my EC2 machine using a Community Image of CentOS 6.3 x64 and added a 35 GB disk. Now when I do # df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.2G 6.4G 16% /
tmpfs 7.3G 0 7.3G 0% /dev/shm
My disk is 35GB, but it's showing 8 GB for root and 7 GB as tmpfs.
I tried to use resize2fs but it didn't work on CentOS; the disk has an ext4 partition.
# resize2fs /dev/xvda
resize2fs 1.41.12 (17-May-2010)
resize2fs: Device or resource busy while trying to open /dev/xvda
Couldn't find valid filesystem superblock.
Even if I try resize2fs /dev/xvda1, it says the device has nothing to do.
Any idea or other way? It's my root disk (/), so I can't unmount it.
I found a way to do that. resize2fs wasn't working in my case (not sure why, it kept saying "device or resource busy"), but I found a very good article on resizing the disk using fdisk: we can grow the partition by deleting and recreating it and making it bootable again. All it requires is a reboot, and it won't affect your data if you use the same start cylinder.
# df -h <<1>>
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 6.0G 2.0G 3.7G 35% /
tmpfs 15G 0 15G 0% /dev/shm
# fdisk -l <<2>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders
Units = cylinders of 1649 * 512 = 844288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 * 2 7632 6291456 83 Linux
# fdisk /dev/xvda <<3>>
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): u <<4>>
Changing display/entry units to sectors
Command (m for help): p <<5>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 12584959 6291456 83 Linux
Command (m for help): d <<6>>
Selected partition 1
Command (m for help): n <<7>>
Command action
e extended
p primary partition (1-4)
p <<8>>
Partition number (1-4): 1 <<9>>
First sector (17-41943039, default 17): 2048 <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): <<11>>
Using default value 41943039
Command (m for help): p <<12>>
Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587
Device Boot Start End Blocks Id System
/dev/xvda1 2048 41943039 20970496 83 Linux
Command (m for help): a <<13>>
Partition number (1-4): 1 <<14>>
Command (m for help): w <<15>>
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
# reboot <<16>>
<wait>
# df -h <<17>>
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 2.0G 17G 11% /
tmpfs 15G 0 15G 0% /dev/shm
# resize2fs /dev/xvda1 <<18>>
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 5242624 blocks long. Nothing to do!
The following very simple steps worked very well for me:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 8G 0 part /
Perform the following command as root:
# yum install cloud-utils-growpart
# growpart /dev/xvda 1
# reboot
After the reboot:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
I had the same problem. All I needed to do was:
reboot the instance
run the command
sudo resize2fs -f /dev/xxxx
and it worked well for me.
An Addition to Adeel Ahmad's Answer:
If you are attempting to start an instance from an AMI with a swap partition, then additional steps will have to be performed.
For example, if the AMI contains the following:
# fdisk -l
Disk /dev/xvde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe211223f
Device Boot Start End Blocks Id System
/dev/xvde1 * 1 1291 10369926 83 Linux
/dev/xvde2 1292 1305 112455 82 Linux swap / Solaris
If I have to upgrade my capacity to 20GB, I will create an AMI and try to launch another instance with 20GB of space. After this, if I try the above steps, the disk space won't increase, as the xvde2 partition sits between xvde1 and the new space.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 9.8G 7.5G 1.8G 81% /
$ fdisk -l
Disk /dev/xvde: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe211223f
Device Boot Start End Blocks Id System
/dev/xvde1 * 1 1291 10369926 83 Linux
/dev/xvde2 1292 1305 112455 82 Linux swap / Solaris
$ resize2fs /dev/xvde1
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 2592481 blocks long. Nothing to do!
In this case, do the following:
Delete both partitions
Create a new primary partition with the new required size minus the size needed for swap
Add the bootable flag to this partition
Create a second partition
Mark it as swap
Write the changes and reboot
Extend partition 1
Set up swap
OR
Deleting partition 1
Selected partition 1
Command (m for help): d <<6>>
Partition number (1-4): 1 <<6.0.1>>
Deleting partition 2
Selected partition 2
Command (m for help): d <<6.2>>
Creating resized primary partition 1
Command (m for help): n <<7>>
Command action
e extended
p primary partition (1-4)
p <<8>>
Partition number (1-4): 1 <<9>>
First sector (17-41943039, default 17): 2048 <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):<<NEW_UPPER_LIMIT>> <<11>>
TAKE CARE: 2048 should be replaced by your original starting sector, or the system won't boot. NEW_UPPER_LIMIT will be the new upper-limit sector number, and the rest will be left for swap. To keep the same swap space, subtract the original swap partition's start sector from its end sector, and then subtract that result from 41943039 (or your own upper limit).
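To make that concrete with the example listing above (an estimate on my part): /dev/xvde2 occupies 112455 1-KiB blocks, i.e. roughly 224910 512-byte sectors, so NEW_UPPER_LIMIT would come out near 41943039 - 224910 = 41718129, leaving the last 224910 sectors for the recreated swap partition.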
Creating swap partition
Command (m for help): n <<12>>
Command action
e extended
p primary partition (1-4)
p <<13>>
Partition number (1-4): 2 <<14>>
First sector (<<NEW_UPPER_LIMIT+1>>-41943039, default <<NEW_UPPER_LIMIT+1>>): <<USE_DEFAULT>> <<15>>
Last sector, +sectors or +size{K,M,G}(<<NEW_UPPER_LIMIT+1>>-41943039,default 41943039):<<USE_DEFAULT>> <<16>>
Using default value 41943039
Adding bootable bit for partition 1
Command (m for help): a <<17>>
Partition number (1-4): 1 <<18>>
Marking partition 2 as swap
Command (m for help): l <<19>>
Now you will see a list of filesystems. Note the one corresponding to Linux swap (say 82)
Command (m for help): t <<20>>
Partition number (1-4): 2 <<21>>
Hex Code (type l to list codes) : 82 <<22>>
Write changes and reboot
Command (m for help): w <<23>>
The partition table has been altered!
....
$ sudo reboot
After reboot run
resize2fs /dev/xvde1
This will resize your fs
Now to use the second partition as swap
$ mkswap /dev/<<SECOND SWAP PARTITION(run fdisk -l to get the name)>>
$ swapon /dev/<<SECOND SWAP PARTITION(run fdisk -l to get the name)>>
You can check the /proc/swaps file to verify
$ cat /proc/swaps
Now add the following to the /etc/fstab for these changes to be persistent
At the end of /etc/fstab (open with nano or vi etc)
/dev/<<SECOND SWAP PARTITION>> swap swap defaults 0 0
Save and Exit
Reboot and check
I faced the same issue with my Debian 8 EC2 instance and was getting the error below:
FAILED: failed to get CHS from /dev/xvda
Solution:
$ sudo parted /dev/xvda resizepart 1
Warning: Partition /dev/xvda1 is being used. Are you sure you want to continue?
Yes/No? yes
End? [8588MB]? 100%
$ sudo resize2fs /dev/xvda1
$ lsblk
$ df -h
You will see that the EBS volume has increased now.

Determine whether CPU, RAM or hard drive is bottleneck for Ruby script

I'm planning on purchasing a new Mac desktop soon, and I want to know whether CPU, RAM or my hard drive is my bottleneck for my script.
I ran my main unit tests with Ruby 1.9.3 on Ubuntu 12.04 and got the following information:
$ date; /usr/bin/time --verbose ruby1.9.1 test/test_all.rb ; date
Mon May 7 15:04:38 EST 2012
Run options:
# Running tests:
[snip 705 dots]
Finished tests in 50.672999s, 13.9127 tests/s, 49.1781 assertions/s.
705 tests, 2492 assertions, 0 failures, 0 errors, 0 skips
Command being timed: "ruby1.9.1 test/test_all.rb"
User time (seconds): 29.25
System time (seconds): 5.26
Percent of CPU this job got: 67%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:51.01
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 238592
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 4180160
Voluntary context switches: 31187
Involuntary context switches: 12397
Swaps: 0
File system inputs: 0
File system outputs: 224
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Mon May 7 15:05:29 EST 2012
As the time taken by user plus system is less than the wall time, I assume CPU isn't the sole bottleneck. How can I work out what else is the bottleneck?
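Working that out from the numbers above: 29.25 s user + 5.26 s system ≈ 34.5 s of CPU time against 51.0 s of wall-clock time (the 67% figure reported), leaving roughly 16.5 s spent off the CPU, waiting on I/O or the scheduler.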
You can analyze your program's memory performance (i.e. how well it utilizes the cache) using valgrind's cachegrind tool.
$ valgrind --tool=cachegrind ruby ./hello.rb
==7082== Cachegrind, a cache and branch-prediction profiler.
==7082== Copyright (C) 2002-2008, and GNU GPL'd, by Nicholas Nethercote et al.
==7082== Using LibVEX rev 1884, a library for dynamic binary translation.
==7082== Copyright (C) 2004-2008, and GNU GPL'd, by OpenWorks LLP.
==7082== Using valgrind-3.4.1-Debian, a dynamic binary instrumentation framework.
==7082== Copyright (C) 2000-2008, and GNU GPL'd, by Julian Seward et al.
==7082== For more details, rerun with: -v
==7082==
hello world
==7082==
==7082== I refs: 14,529,000
==7082== I1 misses: 24,856
==7082== L2i misses: 6,707
==7082== I1 miss rate: 0.17%
==7082== L2i miss rate: 0.04%
==7082==
==7082== D refs: 7,110,663 (4,572,482 rd + 2,538,181 wr)
==7082== D1 misses: 48,207 ( 33,427 rd + 14,780 wr)
==7082== L2d misses: 16,350 ( 3,821 rd + 12,529 wr)
==7082== D1 miss rate: 0.6% ( 0.7% + 0.5% )
==7082== L2d miss rate: 0.2% ( 0.0% + 0.4% )
==7082==
==7082== L2 refs: 73,063 ( 58,283 rd + 14,780 wr)
==7082== L2 misses: 23,057 ( 10,528 rd + 12,529 wr)
==7082== L2 miss rate: 0.1% ( 0.0% + 0.4% )
Concerning disk performance: a program with no disk I/O would run almost entirely in user time, which leads me to believe that your hard drive might be at least one of your bottlenecks. Perhaps someone out there can recommend a good tool for profiling a program's disk usage?

Querying for memory details in shell

Is there a shell command to find out how much memory is being used at a particular moment, with details of how much each process is using, how much virtual memory is left, etc.?
For "each process", how about top:
PhysMem: 238M wired, 865M active, 549M inactive, 1652M used, 395M free.
VM: 162G vsize, 1039M framework vsize, 124775(0) pageins, 9149(0) pageouts.
PID COMMAND %CPU TIME #TH #WQ #POR #MREG RPRVT RSHRD RSIZE VPRVT VSIZE PGRP PPID STATE UID
7233 top 5.7 00:00.53 1/1 0 24 33 1328K 264K 1904K 17M 2378M 7233 3766 running 0
e.g.:
rprvt Resident private address space size.
rshrd Resident shared address space size.
rsize Resident memory size.
vsize Total memory size.
vprvt Private address space size.
Let's also hear it for the old classic, vmstat.
$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 30160 15884 418680 281936 0 0 406 22 6 3 1 1 93 5
Depends on your operating system. In Linux, free answers two out of your three questions.
~> free
total used free shared buffers cached
Mem: 904580 895128 9452 0 63700 777728
-/+ buffers/cache: 53700 850880
Swap: 506036 0 506036
"Swap" refers to virtual memory.
If you're on Linux, give ps_mem.py a try.
If you are on an up-to-date Linux, cat /proc/$pid/smaps is the business.
If you are on OSX, check https://superuser.com/questions/97235/how-much-swap-is-a-given-mac-application-using.
