In my LLDB session, memory read 0x00000003 produces an error message.
IMHO, the message error: memory read failed for 0x0 should end with 0x3.
In case this is not a bug but intended behaviour, could anybody explain where the offset/rounding comes from?
Further details: x86_64
The memory address will be floored (rounded down) to the nearest multiple of 256 (0x100).
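In C terms, the rounding looks like this (a minimal sketch, assuming the 256-byte granularity described above):

#include <stdint.h>

// Floor an address to the start of its 0x100-byte cache line:
uintptr_t cache_line_base(uintptr_t addr) {
    return addr & ~(uintptr_t)0xFF; // 0x3 & ~0xFF == 0x0
}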
You don't say what system you are on, but it is very common for 64-bit systems to leave the first 32 bits' worth of the address space (the low 4 GB) unmapped. That was originally done to catch 32-bit -> 64-bit transition errors. A common error in 32-bit code was to pass a pointer somewhere as a 32-bit integer, which in a 64-bit world truncates it to 32 bits. Making every read/write through a pointer that fits in 32 bits fail makes it much easier to trap this error.
It's also generally handy to map out the pages at 0x0 to catch a dereference of a nullptr immediately, so many systems map out some pages above zero, even if not the full 32 bits, for that reason as well.
So most likely lldb is right, the memory at 0x0 and some region above that is not mapped, and we can't read it.
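To illustrate the kind of bug this layout catches, here is a hypothetical C sketch, assuming a 64-bit build on a system that leaves the low 4 GB unmapped:

#include <stdint.h>

void use(void *real_ptr) {
    // Classic 32-bit -> 64-bit porting bug: the pointer is squeezed
    // through a 32-bit integer, losing the high 32 bits.
    uint32_t squeezed = (uint32_t)(uintptr_t)real_ptr;
    char *back = (char *)(uintptr_t)squeezed;
    // With the low 4 GB unmapped, this dereference faults immediately
    // instead of silently reading the wrong memory.
    char c = *back;
    (void)c;
}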
Semnodime is right about why the access is at 0x0. lldb uses a "memory cache" internally: if you read one piece of memory you're very likely to read some memory around it too, so the cache speeds lldb up, particularly when doing remote debugging. So by default lldb reads some amount of memory around the address it actually needs.
You can control the amount it reads if you want, using:
settings set target.process.memory-cache-line-size <SomeValue>
And:
settings set target.process.disable-memory-cache true
turns the cache off altogether. If you did that, lldb would then try to read starting at 0x3, but I'm guessing that's still going to fail.
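For example, to retry the read with the cache disabled (the reported address should then match the requested one, even though the read itself will most likely still fail):

(lldb) settings set target.process.disable-memory-cache true
(lldb) memory read 0x00000003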
Under Visual Studio masm:
mov ecx,270   ; number of pops to attempt
l1: pop eax   ; pop a value this program never pushed
loop l1       ; decrement ecx and repeat while ecx != 0
push eax      ; a single push; faults once ESP has left the mapped stack
The point of this code is to find out whether ESP has an initial value and, if so, what it is. I try to pop immediately after the program starts and experiment with how many pops it takes before a push creates some memory-access error. The result of the experiment is somewhat unstable, even with exactly the same number in ecx. Generally, greater than 512 always (in my limited number of experiments) creates an error, less than 128 is always "safe", and values around 250 to 400 sometimes create an error. It seems that there is no fixed initial value for ESP; if there were, my experiment should produce a stable result.
OK, I ran 127 another 10 times and now it starts to crash. I am trying to experiment with more numbers for this.
Let's just say we are using Windows x86, at an average moment of starting a program like my experiment's program. How does Windows determine what the initial value of ESP will be? Is this difficult to determine (because I could imagine simply putting the last address of the stack segment in ESP)? Is there a common practice for how to do this?
The initial value is wherever the OS put the stack in the process's virtual address space. In modern operating systems it's random.
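You can watch the randomization directly (a minimal C sketch; run it several times and compare the printed addresses):

#include <stdio.h>

int main(void) {
    int local;
    // With address-space layout randomization, the stack is placed at a
    // different virtual address on each run, so this address varies.
    printf("stack variable at %p\n", (void *)&local);
    return 0;
}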
What is above the top of the stack at _start is architecture-dependent. On Windows, you get an actual return address to something that will exit the current thread. On Linux, you get the command line and the environment variables. In any case, popping stuff from the stack that you didn't push is not going to be ABI compliant and will get you into trouble. The only rules that remain at that point are the security rules.
I am reading the book Windows via C/C++, in Chapter 13 - Windows Memory Architecture -
Getting a Larger User-Mode Partition in x86 Windows
and I came across this:
In early versions of Windows, Microsoft didn't allow applications to access their address space above 2 GB. So some creative developers decided to leverage this and, in their code, they would use the high bit in a pointer as a flag that had meaning only to their applications. Then when the application accessed the memory address, code executed that cleared the high bit of the pointer before the memory address was used. Well, as you can imagine, when an application runs in a user-mode environment greater than 2 GB, the application fails in a blaze of fire.
I can't understand that. Can someone give an example to explain it for me? Thanks.
To address ~2 GB of memory, you only need a 31-bit address. However, on 32-bit systems addresses are 32 bits long and hence pointers are 32 bits long.
As the book describes, in early versions of Windows developers could only use 2 GB of memory; therefore, the high bit in each 32-bit pointer could be used for other purposes, as it was ALWAYS zero. However, before using the address, this extra bit had to be cleared again, presumably so the program didn't crash by trying to access an address above 2 GB.
The code probably looked something like this:
#include <stdint.h>

int val = 1;
int* p = &val;
// ...
// Use the high bit of p to set a flag, for some purpose.
// Bitwise operators aren't defined on pointers, so go through uintptr_t:
p = (int*)((uintptr_t)p | ((uintptr_t)1 << 31));
// ...
// Then, before using the address in any way, the bit has to be cleared again:
p = (int*)((uintptr_t)p & ~((uintptr_t)1 << 31));
*p = 3;
Now, if you can be certain that your pointers will only ever point to an address where the most significant bit (MSB) is zero, i.e. in a ~2GB address space, this is fine. However, if the address space is increased, some pointers will have a 1 in their MSB and by clearing it, you set your pointer to an incorrect address in memory. If you then try to read from or write to that address, you will have undefined behavior and your program will most likely fail in a blaze of fire.
I've been reading Windows via C/C++ by Jeffrey Richter and came across the following snippet in the chapter about Windows' memory architecture related to porting 32 bit applications to a 64 bit environment.
If the system could somehow guarantee that no memory allocations would ever be made above 0x00000000'7FFFFFFF, the application would work fine. Truncating a 64 bit address to a 32 bit address when the high 33 bits are 0 causes no problem whatsoever.
I'm having some trouble understanding why the system needs to guarantee that no memory allocations are made above 0x00000000'7FFFFFFF and not 0x00000000'FFFFFFFF. Shouldn't it be okay to truncate the address so long as the high 32 bits are 0? I'm probably missing something and would really appreciate it if someone with more knowledge about Windows than I have could explain why this is the case.
Not all 32-bit systems/languages use unsigned values for memory addresses, so the 32nd bit (the sign bit of a signed 32-bit integer) might have a different meaning in some contexts. By limiting the address space to 31 bits, you don't run into that problem. Also, Windows prevents a 32-bit app from accessing addresses higher than 2 GB unless special extensions are used to raise that limit, so most apps would not need the 32nd bit anyway.
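The sign bit is exactly what goes wrong when such an address is squeezed through a signed 32-bit type and widened again; a minimal C sketch (the address value is hypothetical):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t addr = 0x00000000FFFFFFF0ULL; // high 32 bits zero, bit 31 set

    int32_t narrowed = (int32_t)addr;               // 0xFFFFFFF0: negative as signed
    uint64_t widened = (uint64_t)(int64_t)narrowed; // widening sign-extends

    // Prints 0xfffffffffffffff0, not the original 0xfffffff0:
    printf("%#llx\n", (unsigned long long)widened);
    return 0;
}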
On an x86 system a memory location can hold 4 bytes (32 / 8) of data; therefore a single memory address on a 64-bit system can hold 8 bytes. When examining the stack in GDB, though, this doesn't appear to be the case. Example:
0x7fff5fbffa20: 0x00007fff5fbffa48 0x0000000000000000
0x7fff5fbffa30: 0x00007fff5fbffa48 0x00007fff857917e1
If I have this right, then each hexadecimal pair (e.g. 48) is a byte; thus the first memory address, 0x7fff5fbffa20, is actually holding 16 bytes of data and not 8.
This has had me really confused and has for a while, so absolutely any input is vastly appreciated.
Short answer: on both x86 and x64 the minimum addressable entity is a byte: each "memory location" contains one byte, in each case. What you are seeing from GDB is only formatting: it is dumping 16 contiguous bytes per line, as the addresses on the left, increasing from ...20 to ...30, indicate.
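You can verify this from within GDB by dumping the same 16 bytes with different format specifiers (commands only; the values printed will vary):

(gdb) x/2gx 0x7fff5fbffa20
(gdb) x/16bx 0x7fff5fbffa20

The first prints two 8-byte "giant words" in hex (the layout shown above); the second prints the same 16 bytes one per address.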
Long answer: 32-bit or 64-bit is used to indicate many things in an architecture. Almost always it is the address size (how many bits are in an address, and hence how much memory you can directly address; again, bytes of memory). It usually also indicates the size of the registers, and also (but not always) the native word size.
That means that usually, even if you can address a single byte, the machine works "better" when using data of a different (longer) size. What "better" means is beyond the scope of the question; a little background, however, helps in understanding some misconceptions about word size in the question.
I've been poring over this Erlang crash dump where the VM has run out of heap memory. The problem is that there is no obvious culprit allocating all that memory.
Using some serious black awk magic I've summed up the fields Stack+heap, OldHeap, Heap unused and OldHeap unused for each process and ranked them by memory usage. The problem is that this sum doesn't come even close to processes_used, the number that represents the total memory for all processes according to the Erlang crash dump guide.
I've already tried the Crashdump Viewer and either I'm missing something or there isn't much help there for my kind of problem.
The number I get is 525 MB whereas the processes_used value is at 1348 MB. Where can I find the rest of the memory?
Edit: Heap unused and OldHeap unused shouldn't have been included, since they are sub-parts of Stack+heap and OldHeap. That, plus the fact that the numbers displayed for Stack+heap and OldHeap are listed as numbers of words, not bytes, was the problem.
There is a module called crashdump_viewer which is great for these kinds of analysis.
Another thing to keep in mind is that Stack+heap is, as far as I know, in words, not bytes, which means that you have to multiply Stack+heap by 4 on 32-bit and by 8 on 64-bit systems. I can't find a reference in the manual for this, but Processes talks about it a bit.
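For example, converting a reported size (a minimal C sketch; the field value is hypothetical):

#include <stdio.h>

// Crash-dump process sizes such as Stack+heap and OldHeap are listed in
// words; convert using the emulator word size (4 on 32-bit, 8 on 64-bit).
static unsigned long long words_to_bytes(unsigned long long words,
                                         unsigned word_size_bytes) {
    return words * word_size_bytes;
}

int main(void) {
    // e.g. a process reporting Stack+heap: 233 on a 64-bit emulator:
    printf("%llu bytes\n", words_to_bytes(233, 8)); // prints "1864 bytes"
    return 0;
}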