macOS size command shows a really large number?

> size /bin/ls
__TEXT  __DATA  __OBJC  others      dec         hex
20480   4096    0       4294983680  4295008256  10000a000
How could it be that ls is 4 GB? Is size not meant to be used on executables? I have 4 GB of RAM, so is it just showing me the amount of memory it can use?

On macOS, 64-bit apps have a 4 GB page zero by default. Page zero is a chunk of the address space starting at address 0 which allows no access. This is what causes access violations when a program dereferences a null pointer.
64-bit Mac programs use a 4GB page zero so that, should any valid pointer get accidentally truncated to 32 bits by a program bug (e.g. cast to int and back to a pointer), it will be invalid and cause a crash as soon as possible. That helps to find and fix such bugs.
The page zero segment in the Mach-O executable file doesn't actually use 4GB on disk. It's just a bit of metadata that tells the kernel and dynamic loader how much address space to reserve for it. It seems that size is including the virtual size of all segments, regardless of whether they take up space on disk or not.
Also, the page zero doesn't consume actual RAM when the program is loaded, either. Again, there's just some bookkeeping data to track the fact that the lower 4GB of the address space is reserved.
The size being reported for "others", 4294983680 bytes, is 0x100004000 in hex. That's the 4GB page zero (0x100000000) plus another 4 pages for some other segments.
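The arithmetic is easy to check in the shell (plain integer math, nothing macOS-specific):

```shell
# "others" minus the 4 GiB page zero (0x100000000 = 4294967296) leaves
# 0x4000 bytes, i.e. four 4 KiB pages for the remaining segments.
printf '%x\n' $(( 4294983680 - 4294967296 ))   # 4000 (hex)
echo $(( (4294983680 - 4294967296) / 4096 ))   # 4 pages
```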
You can use the -m option to size to get more detail:
$ size -m /bin/ls
Segment __PAGEZERO: 4294967296
Segment __TEXT: 20480
    Section __text: 13599
    Section __stubs: 456
    Section __stub_helper: 776
    Section __const: 504
    Section __cstring: 1150
    Section __unwind_info: 148
    total 16633
Segment __DATA: 4096
    Section __got: 40
    Section __nl_symbol_ptr: 16
    Section __la_symbol_ptr: 608
    Section __const: 552
    Section __data: 40
    Section __bss: 224
    Section __common: 140
    total 1620
Segment __LINKEDIT: 16384
total 4295008256
You can also use the command otool -lV /bin/ls to see the loader commands of the executable, including the one establishing the __PAGEZERO segment.

The size command outputs information about a binary executable and how it runs; it is not about the file on disk. The 4 GB number is (that is just my guess) related to the virtual address space needed to run it.
I don't have a Mac OS X system (because it is proprietary and tied to hardware that I dislike and find too expensive). But on Linux (which is mostly POSIX, like Mac OS X), size /bin/ls gives:
   text    data     bss     dec     hex filename
 124847    4672    4824  134343   20cc7 /bin/ls
while ls -l /bin/ls shows
-rwxr-xr-x 1 root root 138856 Feb 28 16:30 /bin/ls
Of course, when ls is running, it has some data (notably bss) which does not correspond to any part of the executable file.
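As a cross-check, the dec and hex columns in the Linux size output above are simply text + data + bss, in decimal and hexadecimal:

```shell
echo $(( 124847 + 4672 + 4824 ))            # 134343
printf '%x\n' $(( 124847 + 4672 + 4824 ))   # 20cc7
```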
Try man size on your system to get an explanation. For Linux, see size(1) (it gives info about sections of an ELF executable) and ls(1) (it gives the file size).
On Mac OS X, executables follow the Mach-O format.
On Linux, if you try size on a non-executable file such as /etc/passwd, you get
size: /etc/passwd: file format not recognized
and I guess that you should have some error message on MacOSX if you try that.
Think of size as giving executable size information. The name is historical and a bit misleading.

Related

Allocating a larger u-boot image

I am compiling my own kernel and bootloader (U-Boot). I added a bunch of new environment variables, but U-Boot doesn't load anymore (it just doesn't load anything from memory). I am using a PocketBeagle and booting from an SD card, so I am editing the file "am335x_evm.h" found in /include/configs/.
I am trying to lay out U-Boot so that it has more space for the environment variables and can still load successfully, but I have been unable to do so. As far as I understand, by default 128 KiB is allocated for the U-Boot environment variables. Since I added a bunch of them, I am trying to increase that from 128 KiB to 512 KiB.
I have changed the following line (from 128 KiB to 512 KiB):
#define CONFIG_ENV_SIZE (512 << 10)
(By the way, does anyone know why it is shifted left by 10 bits?)
I have also changed CONFIG_ENV_OFFSET and CONFIG_ENV_OFFSET_REDUND to:
#define CONFIG_ENV_OFFSET (1152 << 10) /* Originally 768 KiB in */
#define CONFIG_ENV_OFFSET_REDUND (1664 << 10) /* Originally 896 KiB in */
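On the `<< 10` aside: shifting left by 10 bits multiplies by 2^10 = 1024, so `(N << 10)` is just N KiB expressed in bytes. A quick shell check:

```shell
echo $(( 512 << 10 ))    # 524288  bytes = 512 KiB
echo $(( 1152 << 10 ))   # 1179648 bytes = 1152 KiB
```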
Then after compiling the new U-boot, I format the SD card and insert the new kernel and U-boot.
I start by erasing partitions:
export DISK=/dev/mmcblk0
sudo dd if=/dev/zero of=${DISK} bs=1M count=10
Then I transfer U-boot by doing:
sudo dd if=./u-boot/MLO of=${DISK} count=1 seek=1 bs=512k
sudo dd if=./u-boot/u-boot.img of=${DISK} count=2 seek=1 bs=576k
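Note that dd's seek is counted in units of bs, so the two commands above write at different byte offsets. This is plain arithmetic, not a claim about what the AM335x boot ROM expects:

```shell
# Write offset = seek * bs
echo $(( 1 * 512 * 1024 ))   # 524288: MLO lands at 512 KiB
echo $(( 1 * 576 * 1024 ))   # 589824: u-boot.img lands at 576 KiB
```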
I then create the partition layout by doing:
sudo sfdisk ${DISK} <<-__EOF__
4M,,L,*
__EOF__
Then I add the kernel, device tree binaries, kernel modules, etc. When trying to boot and reading the serial port, I get nothing at all. U-Boot is not able to load anything from the SD card. What am I doing wrong? I'd appreciate it if you could point out what my problem is and exactly what I should be doing to increase the size and lay everything out correctly.

Otool - Get file size only

I'm using Otool to look into a compiled library (.a) and I want to see what the file size of each component in the binary is. I see that
otool -l [lib.a]
will show me this information but there is also a LOT of other information I do not need. Is there a way I can just see the file size and not everything else? I can't seem to find it if there is.
The size command does that, e.g.,
size lib.a
will show the size of each object stored in the lib.a archive. For example:
$ size libasprintf.a
   text    data     bss     dec     hex filename
      0       0       0       0       0 lib-asprintf.o (ex libasprintf.a)
    639       8       1     648     288 autosprintf.o (ex libasprintf.a)
on most systems. OS X format is a little different:
$ size libl.a
__TEXT  __DATA  __OBJC  others  dec  hex
86      0       0       32      118  76   libl.a(libmain.o)
75      0       0       32      107  6b   libl.a(libyywrap.o)
Oddly (though "everyone" implements it), I do not see size on the POSIX site. OS X has a manual page for it.

How can two 100% identical files have different sizes?

I have two 100% identical .sh shell script files on my Mac:
encrypt.sh: 299 bytes
decrypt.sh: 13 bytes (this size is correct, since I have 13 bytes: 11 characters + two newlines)
[Screenshots omitted: the contents and hexdump of each file, and the Finder info window for each. The two hexdumps are identical.]
They have the exact same hexdump, then how is it possible that they have different sizes?
The Mac OS X file system implements forks, so the larger file most likely has something stored in its resource fork.
Use ls -l@ to get more details (the @ flag lists extended attributes, which is where the resource fork shows up).
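On the "identical contents" point, byte-for-byte equality is easy to verify. A minimal sketch (the 13-byte content shown is my guess at what decrypt.sh holds; any byte-identical pair behaves the same, and only the xattr/resource-fork inspection is macOS-specific):

```shell
# "#!/bin/bash" (11 characters) plus two newlines = 13 bytes.
printf '#!/bin/bash\n\n' > a.sh
printf '#!/bin/bash\n\n' > b.sh
cmp a.sh b.sh && echo "contents identical"
wc -c < a.sh   # 13
# On macOS, `xattr -l encrypt.sh` would list any extended attributes
# (including the resource fork) that inflate the size Finder reports.
```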

Couch has apparent limit of attachment sizes on Mac OS X

I have plain vanilla CouchDB from Apache, running as an app on Mac OS X 10.9. If I try to attach an attachment larger than 1 MB to a document, it just hangs and does nothing.
I have tried CouchDB on Linux, and there the sky is the limit.
I first thought it had to do with low limits on the Mac, but it doesn't seem so:
➜ ~ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 709
-n: file descriptors 256
What is causing this? Why? And how do I fix it?
Check the config files given by couchdb -c. You probably have this somewhere in them (for some unknown reason):
[couchdb]
max_attachment_size = 1048576 ; bytes
Remove or comment the line and you should be fine.
Or maybe it was compiled with this hardcoded, in which case you could add this line to one of the config files and increase the value.
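Concretely, commenting the setting out would look like this (local.ini as the file name is my assumption; couchdb -c lists the actual config chain, and ; starts a comment in CouchDB's ini format):

```ini
[couchdb]
; max_attachment_size = 1048576  ; commented out to remove the 1 MiB cap
```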
Update
max_attachment_size is undocumented, so it is probably not safe to rely on. I leave the original answer in place since it seems to have solved the OP's problem, but according to the docs the attachment size should be unlimited. Also, attachment_stream_buffer_size is the config key controlling the chunk size of attachments, which might be relevant.

mkfs.ext2 in cygwin not working

I'm attempting to create a filesystem within a file.
Under Linux it's very simple.
Create a blank (sparse) file of size 8 GB:
dd of=fsFile bs=1 count=0 seek=8G
"Format" the drive:
mkfs.ext2 fsFile
works great.
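As an aside, that dd invocation creates a sparse file: count=0 writes nothing, and seek=8G just sets the length. The same can be done with truncate, and the apparent vs. allocated sizes compared (a sketch using GNU coreutils):

```shell
# Create an 8 GiB sparse file: the length is set, but no data is written.
truncate -s 8G fsFile2
stat -c '%s' fsFile2   # 8589934592 bytes apparent size
stat -c '%b' fsFile2   # allocated blocks: typically 0 for a fresh sparse file
```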
However, under Cygwin, running /usr/sbin/mkfs.ext2 gives all kinds of weird errors (I assume because of some abstraction layer):
mkfs.ext2: Device size reported to be zero. Invalid partition specified, or
partition table wasn't reread after running fdisk, due to
a modified partition being busy and in use. You may need to reboot
to re-read your partition table.
or even worse (if I try to access the file through /cygdrive/...):
mkfs.ext2: Bad file descriptor while trying to determine filesystem size
:(
please help,
Thanks
Well, it seems that the way to solve it is to not use any path on the file you wish to modify; doing that solved it.
It also seems that my 8 GiB file has a size that is simply too big; it looks like it resets the size variable, i.e.:
$ /usr/sbin/fsck.ext2 -f testFile8GiG
e2fsck 1.41.12 (17-May-2010)
The filesystem size (according to the superblock) is 2097152 blocks
The physical size of the device is 0 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? no
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
testFile8GiG: 122/524288 files (61.5% non-contiguous), 253313/2097152 blocks
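The superblock figure is consistent with the 8 GiB file: assuming the 4 KiB block size mkfs.ext2 picks for an image this large, 8 GiB is exactly 2097152 blocks, so only the reported physical device size (0 blocks) is wrong:

```shell
echo $(( 8 * 1024 * 1024 * 1024 / 4096 ))   # 2097152
```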
Thanks anyway