Large binary file in IAR release configuration

I am debugging firmware and need the output .bin file to program the hardware. In the debug configuration the binary file is 158 KB, and in the release configuration it goes down to 120 KB after applying the optimization settings in IAR Embedded Workbench.
I know the file size can go below 50 KB, because there are some old .bin files that the previous developer was able to generate from this software. But I can't find a way to reduce the file size further.
Does anyone have any idea how the binary file size could be reduced in the release configuration in IAR Embedded Workbench?
Here are the final lines of my map file:
38 674 bytes of readonly code memory
4 721 bytes of readonly data memory
17 351 bytes of readwrite data memory
Errors: none
Warnings: none
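
For what it's worth, the read-only figures quoted above add up to only about 42 KB, well under the 120 KB the release .bin occupies. Below is a minimal C++ sketch of that arithmetic (the 120 KB value is just the rounded figure from the question). If the placed content really is this much smaller than the image, the remainder is most likely fill bytes: a raw .bin has to cover every address between the lowest and highest placed byte, so the gap would point at the linker configuration (placement of regions) rather than at compiler optimization.

    #include <iostream>

    int main() {
        // Figures copied from the end of the map file above.
        const long readonly_code = 38674;   // bytes of readonly code memory
        const long readonly_data = 4721;    // bytes of readonly data memory
        // Rounded size of the release .bin reported in the question.
        const long bin_size = 120L * 1024;

        const long placed = readonly_code + readonly_data;
        std::cout << "read-only content in map: " << placed << " bytes\n";
        std::cout << "release .bin on disk:     " << bin_size << " bytes\n";
        std::cout << "difference (likely fill): " << bin_size - placed << " bytes\n";
        return 0;
    }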

Related

Arduino Verify Issue - storage

I am trying to practice a bit using FreeRTOS on my Arduino. I believe I installed the libraries correctly and that I am using the code correctly. When I try to Verify in the Arduino IDE, I get the following error. At first I thought I needed to update the IDE on my macOS, and all the libraries, to help with storage, but I am still getting the error.
"text section exceeds available space in board
Sketch uses 46768 bytes (144%) of program storage space. Maximum is 32256 bytes.
Global variables use 1572 bytes (76%) of dynamic memory, leaving 476 bytes for local variables. Maximum is 2048 bytes.
Sketch too big; see https://support.arduino.cc/hc/en-us/articles/360013825179 for tips on reducing it.
Error compiling for board Arduino Uno."
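
The 144% line means the compiled code no longer fits in the Uno's 32256 bytes of flash, so code or libraries have to be trimmed; the linked article covers that. The 76% of dynamic memory is a separate concern and, when string literals are involved, it can often be reduced with the F() macro, which keeps the literal in flash instead of copying it into SRAM at startup. A minimal sketch of that idea (illustrative only, not the FreeRTOS example itself):

    void setup() {
      Serial.begin(9600);

      // Plain literal: a copy of the string is placed in SRAM at startup
      // and counts against the 2048 bytes of dynamic memory.
      Serial.println("status: boot");

      // F() keeps the literal in program memory (flash) only, so it does
      // not consume SRAM; it does not shrink the flash footprint, though.
      Serial.println(F("status: boot"));
    }

    void loop() {
    }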

Double the RAM consumed compared to the file size on disk

I am opening a huge .txt file (~350 MB) in Notepad++.
When I monitor the Private Bytes consumed by Notepad++ before and after opening the file, I see roughly 730 MB consumed, whereas the file being opened is only around 350 MB.
I am not trying to point out a problem with Notepad++, as I see the same memory consumption when I write the data of this file to my MFC CEditCtrl too. I want to know why this happens.
PS: I monitor Private Bytes using Process Explorer.
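
One likely contributor, given that an MFC CEditCtrl shows the same behaviour: a Unicode build of a Windows edit control stores its text as UTF-16, which is two bytes per character for plain ASCII, so an 8-bit file roughly doubles in memory once it is converted for the control (730 MB is close to 2 x 350 MB plus overhead). A small C++ sketch of that conversion, with a scaled-down, hypothetical buffer standing in for the file contents:

    #include <windows.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // Stand-in for the 8-bit text read from disk (10 MB instead of 350 MB).
        std::string narrow(10u * 1024 * 1024, 'x');

        // First call: ask how many UTF-16 code units the conversion produces.
        int wideLen = MultiByteToWideChar(CP_ACP, 0, narrow.data(),
                                          static_cast<int>(narrow.size()),
                                          nullptr, 0);
        std::vector<wchar_t> wide(wideLen);
        MultiByteToWideChar(CP_ACP, 0, narrow.data(),
                            static_cast<int>(narrow.size()),
                            wide.data(), wideLen);

        std::cout << "bytes as read from disk:     " << narrow.size() << '\n';
        std::cout << "bytes once stored as UTF-16: " << wide.size() * sizeof(wchar_t) << '\n';
        return 0;
    }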

Coredump size different than process virtual memory space

I'm working on OS X 10.11 and generated a dump file in the following manner:
1. ulimit -c unlimited
2. kill -10 5228 (process pid)
and got a dump file with the following attributes: 642M Jun 26 15:00 core.5228
Right before that, I checked the process's total memory space using the vmmap command to try to estimate the expected dump size.
However, the estimate (238.7 MB) was much smaller than the actual size (642 MB).
Can this gap be explained?
REGION TYPE            VIRTUAL SIZE   REGION COUNT (non-coalesced)
===========            ============   ============
Activity Tracing 2048K 2
Kernel Alloc Once 4K 2
MALLOC guard page 16K 4
MALLOC metadata 180K 6
MALLOC_SMALL 56.0M 4 see MALLOC ZONE table below
MALLOC_SMALL (empty) 8192K 2 see MALLOC ZONE table below
MALLOC_TINY 8192K 3 see MALLOC ZONE table below
STACK GUARD 56.0M 2
Stack 8192K 2
__DATA 1512K 44
__LINKEDIT 90.9M 4
__TEXT 8336K 44
shared memory 12K 4
=========== ======= =======
TOTAL 238.7M 110
MALLOC ZONE                     VIRTUAL SIZE   ALLOCATION COUNT   BYTES ALLOCATED   % FULL   REGION COUNT
===========                     ============   ================   ===============   ======   ============
DefaultMallocZone_0x100e42000 72.0M 7096 427K 0% 6
The core dump can, and does, filter the process memory. See the core(5) man page:
Controlling which mappings are written to the core dump
Since kernel 2.6.23, the Linux-specific /proc/PID/coredump_filter file can be used to control which memory segments are written to the core dump file in the event that a core dump is performed for the process with the corresponding process ID.
The value in the file is a bit mask of memory mapping types (see mmap(2)). If a bit is set in the mask, then memory mappings of the corresponding type are dumped; otherwise they are not dumped. The bits in this file have the following meanings:
bit 0 Dump anonymous private mappings.
bit 1 Dump anonymous shared mappings.
bit 2 Dump file-backed private mappings.
bit 3 Dump file-backed shared mappings.
bit 4 (since Linux 2.6.24)
Dump ELF headers.
bit 5 (since Linux 2.6.28)
Dump private huge pages.
bit 6 (since Linux 2.6.28)
Dump shared huge pages.
bit 7 (since Linux 4.4)
Dump private DAX pages.
bit 8 (since Linux 4.4)
Dump shared DAX pages.
By default, the following bits are set: 0, 1, 4 (if the CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS kernel configuration option is enabled), and 5. This default can be modified at boot time using the coredump_filter boot option.
I assume OS X behaves similarly.
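
For completeness, on Linux the filter quoted above can be read and changed per process from code as well as from the shell. A small C++ sketch (Linux-only, since OS X has no /proc, so this only demonstrates the mechanism the man page describes; the value written is just an example):

    #include <cstdio>

    int main() {
        // Read and adjust the core-dump filter for this process (see core(5)).
        std::FILE* f = std::fopen("/proc/self/coredump_filter", "r+");
        if (!f) {
            std::perror("coredump_filter");
            return 1;
        }

        unsigned mask = 0;
        std::fscanf(f, "%x", &mask);
        std::printf("current filter: 0x%02x\n", mask);  // default is usually 0x33

        // Example: dump only anonymous private and anonymous shared mappings.
        std::rewind(f);
        std::fprintf(f, "0x3");
        std::fclose(f);
        return 0;
    }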

Size of exe file vs available memory

I have gone through How does a PE file get mapped into memory?, but that is not what I am asking.
I want to know which sections (data, text, code, ...) of a PE file are always loaded completely into memory by the loader, no matter what the conditions are.
As per my understanding, none of the sections (code, data, resources, text, ...) are always loaded completely; they are loaded as and when needed, page by page. If a few pages of code (in the middle or at the end) are not required to process the user's request, then those pages will not necessarily be loaded.
I have tried making exe files with lots of code, with and without resources, none of which are actually used, but every time the exe loads into memory it takes more memory than the file size. (I might have been looking at the wrong Memory column in Task Manager.)
Matt Pietrek writes here
It's important to note that PE files are not just mapped into memory
as a single memory-mapped file. Instead, the Windows loader looks at
the PE file and decides what portions of the file to map in.
and
A module in memory represents all the code, data, and resources from
an executable file that is needed by a process. Other parts of a PE
file may be read, but not mapped in (for instance, relocations). Some
parts may not be mapped in at all, for example, when debug information
is placed at the end of the file.
In a nutshell,
1- There is an exe of size 1 MB and the available memory (physical + virtual) is less than 1 MB; is it guaranteed that the loader will always refuse to load it because the available memory is less than the size of the file?
2- If an exe of size 1 MB takes 2 MB of memory when loaded (reaching the first line of user code) while the available memory (physical + virtual) is 1.5 MB, is it guaranteed that the loader will always refuse to load it because there is not enough memory?
3- There is an exe of size 50 MB (lots of code, data and resources) but it requires only 500 KB to run the first line of user code; is it guaranteed that this exe will always run its first line of code if the available memory (physical + virtual) is at least 500 KB?
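
Not a direct answer to 1-3, but the gap between file size and memory footprint is visible in the headers themselves: the loader reserves SizeOfImage bytes of address space, with each section rounded up to SectionAlignment (normally a 4 KB page) and uninitialized data zero-filled, while the file on disk is packed to the usually smaller FileAlignment. A sketch that prints those fields for a given exe (error handling kept minimal; SizeOfImage, SectionAlignment and FileAlignment sit at the same offsets in the 32- and 64-bit optional headers, so IMAGE_NT_HEADERS works for either):

    #include <windows.h>
    #include <iostream>

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: peinfo <path-to-exe>\n"; return 1; }

        HANDLE file = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, nullptr,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) return 1;

        LARGE_INTEGER fileSize = {};
        GetFileSizeEx(file, &fileSize);

        HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
        BYTE* base = static_cast<BYTE*>(MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));

        IMAGE_DOS_HEADER* dos = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
        IMAGE_NT_HEADERS* nt  = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);

        std::cout << "size on disk:     " << fileSize.QuadPart << " bytes\n";
        std::cout << "SizeOfImage:      " << nt->OptionalHeader.SizeOfImage << " bytes\n";
        std::cout << "SectionAlignment: " << nt->OptionalHeader.SectionAlignment << "\n";
        std::cout << "FileAlignment:    " << nt->OptionalHeader.FileAlignment << "\n";

        UnmapViewOfFile(base);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }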

How is the Page File available calculated in Windows Task Manager?

In Vista Task Manager, I understand the available page file is listed like this:
Page File inUse M / available M
In XP it's listed as the Commit Charge Limit.
I had thought that:
Available Virtual Memory = Physical Memory Total + Sum of Page Files
But on my machine I've got Physical Memory = 2038 M, Page Files = 4096 M, Page File Available = 6051 M. There are 83 M unaccounted for here. What is that used for? I thought it might have something to do with the kernel memory, but the numbers don't seem to match up.
Info I've found so far:
See http://msdn.microsoft.com/en-us/library/aa965225(VS.85).aspx for more info.
Page file size can be found here: Computer Properties, advanced, performance settings, advanced.
I think you are correct in your guess that it has something to do with the kernel - the kernel memory needs some physical backing as well.
However, I have to admit that when trying to verify this, the numbers still do not match well, and there is a significant amount of memory not accounted for by this.
I have:
Available Virtual Memory = 4 033 552 KB
Physical Memory Total = 2 096 148 KB
Sum of Page Files = 2048 MB
Kernel Non-Paged Memory = 28 264 KB
Kernel Paged Memory = 63 668 KB
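
For comparison, the commit limit Windows itself reports can be read directly rather than reconstructed from the individual settings; a short C++ sketch using GlobalMemoryStatusEx (ullTotalPageFile is the current commit limit, i.e. physical memory plus all paging files less whatever the system holds back, not the paging-file size on its own):

    #include <windows.h>
    #include <iostream>

    int main() {
        MEMORYSTATUSEX ms = {};
        ms.dwLength = sizeof(ms);
        if (!GlobalMemoryStatusEx(&ms)) return 1;

        std::cout << "Total physical memory:      " << ms.ullTotalPhys / 1024 << " KB\n";
        // Despite the name, this is the commit limit, not just the page files.
        std::cout << "Commit limit:               " << ms.ullTotalPageFile / 1024 << " KB\n";
        std::cout << "Commit currently available: " << ms.ullAvailPageFile / 1024 << " KB\n";
        return 0;
    }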
