I am opening a huge txt file (~350 MB) in Notepad++.
When I monitor the Private Bytes consumed by Notepad++ before and after opening the file, I see approx. 730 MB consumed, whereas the file being opened is only around 350 MB.
I am not trying to point out a problem with Notepad++, as I see the same memory consumption when I write this file's data to my MFC CEditCtrl too. I need to know why this happens.
PS: I monitor Private Bytes using Process Explorer software.
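One plausible contributor, assuming a Unicode build (an inference, not something the question establishes): Windows edit controls store text as UTF-16 internally, so a 350 MB single-byte-encoded file roughly doubles in size when converted, before counting any per-line bookkeeping the editor keeps. A sketch of that conversion cost (the input buffer is made up):

#include <windows.h>
#include <vector>
#include <cstdio>

int main() {
    // Stand-in for a ~350 MB single-byte-encoded text file.
    std::vector<char> ansi(350u * 1024 * 1024, 'x');

    // Ask how many UTF-16 code units the converted text needs.
    int wideLen = MultiByteToWideChar(CP_ACP, 0, ansi.data(),
                                      (int)ansi.size(), nullptr, 0);
    std::vector<wchar_t> wide(wideLen);
    MultiByteToWideChar(CP_ACP, 0, ansi.data(), (int)ansi.size(),
                        wide.data(), wideLen);

    // 350 MB of char input becomes ~700 MB of wchar_t output, which is
    // in the ballpark of the Private Bytes figure reported above.
    std::printf("input: %zu bytes, converted: %zu bytes\n",
                ansi.size(), wide.size() * sizeof(wchar_t));
    return 0;
}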
I am debugging firmware code and need to use the output .bin file for programming the hardware. In the debug configuration, the binary file size is 158 KB; in the release configuration, it goes down to 120 KB with the optimization settings applied in IAR Embedded Workbench.
I know that the file size can go below 50 KB, as there are some old .bin files that the previous developer was able to produce from the same software. But I can't find a way to reduce the file size further.
Does anyone have an idea how the binary file size could be reduced in the release configuration in IAR Embedded Workbench?
Here are the ending lines of my map file:
38 674 bytes of readonly code memory
4 721 bytes of readonly data memory
17 351 bytes of readwrite data memory
Errors: none
Warnings: none
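For reference, the map totals above only account for about 42 KB of ROM content, which lines up with the sub-50 KB binaries mentioned. A sketch of the arithmetic (the conclusion about gap fill is an inference, not something the map excerpt states directly):

#include <cstdio>

int main() {
    // Totals taken from the map file excerpt above.
    const unsigned ro_code = 38674; // readonly code memory
    const unsigned ro_data = 4721;  // readonly data memory
    // readwrite data (17 351 bytes) lives in RAM; normally only its
    // initializers reach the image, already counted under readonly data.
    std::printf("ROM content: %u bytes (~%.1f KB)\n",
                ro_code + ro_data, (ro_code + ro_data) / 1024.0);
    // ~43 KB of content vs. a 120 KB .bin suggests the image spans
    // address gaps that the raw .bin format must fill with padding.
    return 0;
}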
I have an application that reads from a serial port on a PC. When I read using my standalone application, all the expected bytes are received. But when I incorporate the application into HWUT (Hello World Unit Testing), the .exe output generated in the OUT folder contains only a portion of the received data and fills the rest with NULL. I use the same receive buffer size in both cases. What could be the reason?
When you run the application on the command line, is the output correct?
Does 'fflush(stdout)' help?
How large is the output? Note that HWUT has built-in oversize detection. If you need larger output, respond to "--hwut-info" with
... printf("SIZE-LIMIT: 4711MB;\n"); ...
Change MB to KB for kilobytes or GB for gigabytes; 4711 is your size limit.
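Putting the comment above together, a sketch of how the test executable might answer the "--hwut-info" query (the handler structure and the title line are assumptions; only the SIZE-LIMIT response line comes from the convention quoted above):

#include <cstdio>
#include <cstring>

int main(int argc, char** argv) {
    if (argc > 1 && std::strcmp(argv[1], "--hwut-info") == 0) {
        std::printf("Serial port receive test;\n"); // hypothetical title line
        std::printf("SIZE-LIMIT: 4711MB;\n");       // raise the oversize limit
        return 0;
    }
    // ... normal run: read from the serial port and print the received
    // bytes so HWUT can compare them against the expected output ...
    return 0;
}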
I'm trying to understand the way bytes go from write() to the physical disk platter so I can tune my picture server's performance.
What I don't understand is the difference between these two: the commit= mount option and dirty_writeback_centisecs. They look like they are about the same process of writing changes to the storage device, but they are still different.
It is not clear to me which one fires first on my bytes' way to the disk.
Yeah, I just ran into this investigating mount options for an SDCard Ubuntu install on an ARM Chromebook. Here's what I can tell you...
Here's how to see the dirty and writeback amounts:
user#chrubuntu:~$ cat /proc/meminfo | grep "Dirty" -A1
Dirty: 14232 kB
Writeback: 4608 kB
(edit: These Dirty and Writeback figures are rather high; I had a compile running when I took this snapshot.)
So data to be written out is dirty. Dirty data can still be eliminated (if, say, a temporary file is created, used, and deleted before it goes to writeback, it'll never have to be written out). As dirty data is moved into writeback, the kernel tries to combine smaller requests sitting in dirty into single larger I/O requests; this is one reason why vm.dirty_expire_centisecs is usually not set too low. Dirty data is usually put into writeback when a) enough data is cached to get up to vm.dirty_background_ratio, or b) the data gets to be vm.dirty_expire_centisecs centiseconds old (the default of 3000 is 30 seconds). Per vm.dirty_writeback_centisecs, a writeback daemon runs by default every 500 centiseconds (5 seconds) to actually flush out anything in writeback.
fsync will flush out an individual file (force it from dirty into writeback and wait until it's flushed out of writeback), and sync does that for everything. As far as I know, it does this ASAP, bypassing any attempt to balance disk reads and writes; it stalls the device doing 100% writes until the sync completes.
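To make that path concrete, a minimal POSIX sketch (the filename and payload are placeholders): write() only makes pages dirty; fsync() forces them through writeback and blocks until the device has them:

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/tmp/example.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { std::perror("open"); return 1; }

    const char buf[] = "picture data...";
    // After write() returns, the bytes are only dirty page-cache pages.
    if (write(fd, buf, sizeof buf - 1) < 0) { std::perror("write"); return 1; }

    // fsync() pushes them into writeback and waits for the device,
    // bypassing the dirty_* timers discussed above.
    if (fsync(fd) < 0) { std::perror("fsync"); return 1; }

    close(fd);
    return 0;
}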
The default ext4 mount option commit=5 actually forces a sync every 5 seconds on that filesystem. This is intended to ensure that writes are not unduly delayed if there's heavy read activity (ideally losing a maximum of 5 seconds of data if power is cut or whatever). What I found with an Ubuntu install on an SDCard (in a Chromebook) is that this actually just leads to massive filesystem stalls every 5 seconds if you're writing much to the card. ChromeOS uses commit=600, and I applied that on the Ubuntu side to good effect.
dirty_writeback_centisecs configures the Linux kernel daemons related to virtual memory (that's why the vm. prefix), which are in charge of writing back from RAM to all the storage devices. So if you configure dirty_writeback_centisecs and you have 25 different storage devices mounted on your system, they will all share the same writeback interval.
commit, on the other hand, is done per storage device (actually per filesystem) and is related to the sync process rather than to the virtual-memory daemons.
So you can see it as:
dirty_writeback_centisecs: writing from RAM to all filesystems
commit: each filesystem fetches its changes from RAM
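If you want to check what a given box is actually using, the knobs discussed above can be read straight from procfs (Linux; these are the standard sysctl paths):

#include <fstream>
#include <iostream>
#include <string>

static void show(const char* path) {
    std::ifstream f(path);
    std::string value;
    std::getline(f, value);
    std::cout << path << " = " << value << '\n';
}

int main() {
    show("/proc/sys/vm/dirty_writeback_centisecs"); // flusher wakeup interval
    show("/proc/sys/vm/dirty_expire_centisecs");    // age at which dirty data must be written
    // commit= is per-filesystem: the options each filesystem was mounted
    // with, including commit= where set, are listed in /proc/mounts.
    return 0;
}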
I have gone through How does a PE file get mapped into memory?; that is not what I am asking.
I want to know which sections (data, text, code, ...) of a PE file are always completely loaded into memory by the loader, no matter what the conditions are.
As per my understanding, none of the sections (code, data, resources, text, ...) is ever loaded completely; they are loaded as and when needed, page by page. If a few pages of code (in the middle or at the end) are not required to process the user's request, then those pages will never be loaded.
I have tried making exe files with lots of code, with and without resources, none of which are used at all; but every time the exe loads into memory, it takes more memory than the file size. (I might have been looking at the wrong Memory column in Task Manager.)
Matt Pietrek writes here:
It's important to note that PE files are not just mapped into memory as a single memory-mapped file. Instead, the Windows loader looks at the PE file and decides what portions of the file to map in.
and
A module in memory represents all the code, data, and resources from an executable file that is needed by a process. Other parts of a PE file may be read, but not mapped in (for instance, relocations). Some parts may not be mapped in at all, for example, when debug information is placed at the end of the file.
In a nutshell,
1- There is an exe of size 1 MB and the available memory (physical + virtual) is less than 1 MB. Is it guaranteed that the loader will always refuse to load it, because the available memory is less than the size of the file?
2- An exe of size 1 MB takes 2 MB of memory once loaded (when it starts running the first line of user code), while the available memory (physical + virtual) is 1.5 MB. Is it guaranteed that the loader will always refuse to load it, because there is not enough memory?
3- There is an exe of size 50 MB (lots of code, data, and resources), but it requires only 500 KB to run the first line of user code. Is it guaranteed that this exe will always run its first line of code if the available memory (physical + virtual) is at least 500 KB?
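For what it's worth, part of why a loaded exe occupies more memory than its file size is visible in the PE headers themselves: in memory, each section is padded out to SectionAlignment (typically 4 KB) rather than FileAlignment (often 512 bytes), and zero-initialized data has a virtual size but little or no raw data in the file. A minimal Win32 sketch that dumps this for the running exe (not an answer to 1-3, just a way to see the file-size vs. image-size gap):

#include <windows.h>
#include <stdio.h>

int main() {
    // The exe is already mapped; its module handle is its base address.
    HMODULE mod = GetModuleHandle(NULL);
    IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)mod;
    IMAGE_NT_HEADERS* nt = (IMAGE_NT_HEADERS*)((BYTE*)mod + dos->e_lfanew);

    // SizeOfImage is what the loader reserves, independent of file size.
    printf("SizeOfImage: %lu bytes\n",
           (unsigned long)nt->OptionalHeader.SizeOfImage);

    IMAGE_SECTION_HEADER* sec = IMAGE_FIRST_SECTION(nt);
    for (WORD i = 0; i < nt->FileHeader.NumberOfSections; ++i, ++sec) {
        // Raw size = bytes in the file; virtual size = bytes in memory.
        printf("%-8.8s raw %8lu -> virtual %8lu bytes\n",
               (const char*)sec->Name,
               (unsigned long)sec->SizeOfRawData,
               (unsigned long)sec->Misc.VirtualSize);
    }
    return 0;
}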
For example, I have a Lumia 920; its total space is 32 GB, and the available free space is 24 GB.
Now I want to create some files to fill the disk completely. How can I create 24 GB of files as quickly as possible? I tried, but it is very slow. :-(
As far as I know, one app (see the link below) can do that, but I really can't understand how it does it; writing to isolated storage is very slow. Could you give me some advice?
http://www.windowsphone.com/en-us/store/app/%E7%BC%93%E5%AD%98%E6%B8%85%E7%90%86/b790919d-8ec8-40d8-b97a-10c466cedca8
You just need to create the file like this and write a single byte:
using System.IO;

FileStream fs = new FileStream(@"c:\tmp\yourfilename", FileMode.CreateNew);
fs.Seek(2048L * 1024 * 1024 - 1, SeekOrigin.Begin); // jump to the last byte of a 2 GB file
fs.WriteByte(0);                                    // writing it extends the file to 2 GB
fs.Close();
This example creates a 2 GB file while physically writing only a single byte of content!
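For comparison, the same idea through the raw Win32 API on desktop Windows (a sketch; a Windows Phone app would stay with the stream approach above): moving the file pointer past EOF and calling SetEndOfFile extends the file without writing the intervening bytes:

#include <windows.h>
#include <cstdio>

int main() {
    // Hypothetical path; CREATE_NEW mirrors FileMode.CreateNew above.
    HANDLE h = CreateFileW(L"c:\\tmp\\yourfilename", GENERIC_WRITE, 0,
                           nullptr, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) { std::printf("CreateFile failed\n"); return 1; }

    LARGE_INTEGER size;
    size.QuadPart = 2048LL * 1024 * 1024;  // 2 GB
    SetFilePointerEx(h, size, nullptr, FILE_BEGIN);
    SetEndOfFile(h);                       // extend the file without writing data

    CloseHandle(h);
    return 0;
}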