Overflow: what actually happens to the data?

Everyone knows about overflow in programming languages: if it happens, the program may crash. However, it is not clear to me what actually happens to the data that goes out of bounds. Could you explain, giving an example in C++ or Java? For example, an Integer can hold at most 4 bytes; what will happen if one puts more than 4 bytes of data into an Integer? How will the compiler identify this undefined behaviour?

what will happen if one puts more than 4 bytes of data into an Integer?
Typically the value will roll over¹, meaning it will jump from one end of its range to the other.
This can be seen even in the Windows calculator. Start with the highest possible signed 32-bit value, 2147483647. Now add one to it: the display wraps around to -2147483648. We overflowed the maximum value of a signed Dword (2^31 - 1).
¹ This is a typical result. Some architectures might actually generate an exception on integer overflow, so you shouldn't count on this behavior.
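To make the roll-over concrete, here is a small C++ sketch. Note the asymmetry: unsigned wrap-around is well defined, while signed overflow is undefined behaviour that merely happens to wrap on typical two's-complement hardware.

#include <climits>
#include <cstdio>

int main() {
    // Unsigned arithmetic is defined to wrap modulo 2^N.
    unsigned int u = UINT_MAX;
    std::printf("UINT_MAX + 1 = %u\n", u + 1);          // prints 0

    // Signed overflow is undefined behaviour: on typical two's-complement
    // machines it rolls over from INT_MAX to INT_MIN, but the compiler
    // is allowed to assume it never happens.
    int s = INT_MAX;
    std::printf("INT_MAX + 1 = %d (undefined!)\n", s + 1);
}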
How will the compiler identify this undefined behaviour?
The compiler won't identify it. That's the problem. C# can mitigate this with the checked keyword, which checks to make sure that any arithmetic done on an integer will not cause overflow/underflow.
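C and C++ have no equivalent of checked built into the language, but GCC and Clang expose overflow-checking builtins that give a similar effect. A minimal sketch, assuming one of those compilers (this is a compiler extension, not standard C++):

#include <climits>
#include <cstdio>

int main() {
    int a = INT_MAX, b = 1, sum;
    // __builtin_add_overflow (GCC/Clang extension) returns true when the
    // mathematically correct result does not fit in the destination.
    if (__builtin_add_overflow(a, b, &sum))
        std::puts("overflow detected");
    else
        std::printf("sum = %d\n", sum);
}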

Related

Why is non-zeroed memory only a problem with big data usage?

I was doing a graded programming assignment: an implementation of the Rope data structure. The grader fed it an initial string and a series of edit operations. I did my development in C++ on a Linux machine. After testing my solution locally with small inputs (a string of about 10 characters), I posted it to the grader, but got a Segmentation Fault on one of the test cases.
I generated random input data of the maximum size given in the assignment specs (a string of 300k characters) and got the Segmentation Fault locally as well. After a short debugging session I found that the leaves of my tree had random left and right pointers instead of NULL. After replacing the new Vertex calls with new Vertex() (the latter value-initialises the members, unlike the former, which leaves the memory as-is), the code worked fine and was accepted by the grader.
This however makes me wonder — why did my code work correctly with a small input, both locally and on the grader’s machine? Is some amount of heap guaranteed to be zeroed when I run a process? Is this an artifact of some previously run program? What exactly is happening here?
Uninitialised objects can have any value. Uninitialised pointers can contain null, they can contain valid pointers by coincidence, or contain invalid pointers. It is completely undefined. Your program will behave accordingly. And it’s quite possible that memory is filled with some amount of zeroes followed by some amount of rubbish.
There may be a compiler option that will fill uninitialised variables with data that is likely to lead to a crash. More likely, there may be compiler options warning you when you use an uninitialised variable.
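The distinction between new Vertex and new Vertex() can be reduced to a few lines. Vertex here is a stand-in with the same shape as the asker's node, not their actual class:

#include <cstdio>

struct Vertex {
    Vertex* left;
    Vertex* right;
    // No user-provided constructor, so a plain "new Vertex" leaves
    // these pointers with indeterminate values.
};

int main() {
    Vertex* a = new Vertex;    // default-initialised: left/right are garbage
    Vertex* b = new Vertex();  // value-initialised: left/right are nullptr
    std::printf("b->left = %p, b->right = %p\n",
                static_cast<void*>(b->left), static_cast<void*>(b->right));
    // Reading a->left or a->right here would be undefined behaviour; with
    // small inputs the memory may happen to be zero, with larger inputs
    // it is likely to be leftover heap garbage.
    delete a;
    delete b;
}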

MIPS input string "2147483648", how to return "error: the input number is too big."

Checking MIPS algorithm overflow
error: .asciiz "error: the input number is too big."
buffer: .space 256
For QtSpim, using syscall #5 (read integer), there appears to be no way to detect when user input exceeds the 32-bit signed integer range.  No success/failure error code is returned, and the value returned by the syscall, given whatever the user entered, is simply truncated to 32 bits¹.
If you want to do overflow checking, you'll have to accept string data (instead of integer data) from the console and parse the string to an integer, checking for overflow during that parse.
Such a parse involves doing a multiplication by 10 and an addition for each digit, so with one approach both of those arithmetic operations would need to involve overflow checking.
Importantly, the input to the parse would be a string, and the output would be both a status and a numeric value, unlike syscall #5, so your program can customize behavior.  The status could indicate success, or failures of various kinds, such as non-digit characters or overflow.
Other approaches to detecting overflow while parsing a string to an integer are possible, since we know certain facts about decimal numbers: at most 9 is added for each digit, and 11 or more total digits (not counting leading zeros, if present) must overflow a 32-bit value.
For example, an alternative approach is to compute the intermediate parse result in 64 bits, and only when converting to 32 bits check whether the 32-bit value matches the original 64-bit value (a match is ok, a mismatch is overflow).  Since 64 bits can still overflow too, a count of total digits (ignoring leading zeros) should also be kept with this approach, where anything over 11 indicates overflow.
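Since MIPS assembly is verbose, here is the first approach (a checked multiply and add per digit) sketched in C++; the same structure translates almost directly into MIPS, and the name parse_i32 is purely illustrative:

#include <climits>
#include <cstdio>

// Parses a non-negative decimal string. Returns true on success, false on
// a non-digit character or on 32-bit overflow. This is the per-digit
// multiply-by-10-and-add loop described above, with the overflow check
// folded into a comparison before each step.
bool parse_i32(const char* s, int& out) {
    int value = 0;
    for (; *s != '\0' && *s != '\n'; ++s) {
        if (*s < '0' || *s > '9')
            return false;                       // non-digit character
        int digit = *s - '0';
        if (value > (INT_MAX - digit) / 10)
            return false;                       // value * 10 + digit would overflow
        value = value * 10 + digit;
    }
    out = value;
    return true;
}

int main() {
    int n;
    if (!parse_i32("2147483648", n))
        std::puts("error: the input number is too big.");
}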
The other popular MIPS simulator, MARS, presents a different challenge/opportunity for overflow with syscall #5: it takes an exception on overflow, but this is not like a Java or C# exception that is easily caught and handled.  Instead it is a fault that transfers control to the kernel exception handler in supervisor mode.
Doing something useful with that would be a topic for advanced operating-systems coursework.  For one, once you write your own kernel exception handler, you have to be prepared to handle all possible processor exceptions, not just overflow.  For another, you have to differentiate the overflow condition from the others so you can handle them separately.  And lastly, there's no predefined interface for telling the overflowing user-mode program that used syscall #5 that this has occurred, or for deciding what the user-mode program wants to do about it, so you'd have to provide such a mechanism.
¹ These simulators implement an effectively useful but irregular set of system calls; there appears to be no attempt to make a complete or well-rounded set of syscall operations.  For example, you can read or print an integer or a string from/to the console, but you can only read or print strings from files.  If you want to use files instead of the console with integers, you have to write your own int-to-string and string-to-int conversions.

Why can't ruby use most of the 2^X numbers as object ids?

ObjectSpace._id2ref gives us an object from Ruby's ObjectSpace; objects are assigned IDs in a sequence starting from 0. However, if we try to look up the object with ID 4, it gives an error:
2.6.3 :121 > ObjectSpace._id2ref(4)
Traceback (most recent call last):
        2: from (irb):121
        1: from (irb):121:in `_id2ref'
RangeError (0x0000000000000004 is not id value)
Also, I found that it's the same behaviour for 2^x values (except 1, 2, and 8):
(0..10).each do |exp|
  object_id = 2**exp
  begin
    puts "Number: #{object_id} : #{ObjectSpace._id2ref(object_id)}"
  rescue Exception => e
    puts "Number: #{object_id} : #{e.message}"
  end
end
Number: 1 : 0
Number: 2 : 2.0
Number: 4 : 0x0000000000000004 is not id value
Number: 8 : nil
Number: 16 : 0x0000000000000010 is not id value
Number: 32 : 0x0000000000000020 is not id value
Number: 64 : 0x0000000000000040 is not id value
Number: 128 : 0x0000000000000080 is not symbol id value
Number: 256 : 0x0000000000000100 is not id value
Number: 512 : 0x0000000000000200 is not id value
Number: 1024 : 0x0000000000000400 is not id value
Why can't Ruby use these specific numbers as object IDs?
Also, what's different about 1, 2, and 8? And why is the error different for 128?
First, it is very important to make a couple of things crystal clear:
There are exactly two guarantees Ruby makes about object IDs. These two guarantees are the only thing you are allowed to rely on. You must not make any assumptions about object IDs other than these two guarantees:
An object has the same ID for its entire lifetime.
No two objects have the same ID at the same time.
[Note: this means in particular that different objects can have the same ID at different times, i.e. that IDs can be recycled.]
An object ID is an opaque identifier. You must not make any assumptions about its structure or about any particular value.
Any particular implementation of object IDs is a private internal implementation detail of a specific version of a specific implementation running in a specific environment at a specific moment. There is no guarantee that the results will be the same with a different implementation. There is no guarantee that the results will be the same with a different version of the same implementation. There is no guarantee that the results will be the same with the same version of the same implementation running in a different environment. In fact, there is not even a guarantee that the results will be the same between two runs of the same code on the same version of the same implementation in the same environment.
ObjectSpace::_id2ref is an abomination. It should not even exist. It most certainly should not be used. It breaks object-orientation, it breaks encapsulation, it breaks safety.
Just as an example: unfortunately, you don't say which version of which implementation you are running in which environment. However, it looks like you are running YARV 2.6.3 in a 64-bit environment.
If you were to run that exact same code on the exact same version of YARV in a 32-bit environment, you would get different results. If you were to run that exact same code on an older version of YARV (pre-2.0) in the exact same environment, you would get different results.
Let's address the first, implicit, assumption which I think I see in your question. You seem to think that any ID should resolve to an object. It's easy to see that this cannot be true: there are infinitely many IDs, but for every run of a program, there are only finitely many objects, so there will always be infinitely many IDs which don't resolve to an object.
This already explains most of your results, namely the ones for 4, 16, 32, 64, 256, 512, and 1024.
So, with that out of the way, here's a high-level explanation of why there seems to be some sort of structure to the IDs, and what that structure is. (But let me remind you again that this explanation only applies to 64-bit systems, not to 32-bit; it only applies to YARV; it only applies to YARV 2.0 or newer; and it is quite possible that it will no longer apply to YARV 3.0.)
In YARV, the developers made the decision that the object ID is the same thing as the memory address of the object header. This makes it easy to ensure the "rules" of object IDs: you can't have multiple objects at the same memory address at the same time, and an object will not change its memory address.
(Actually, it turns out that the second one is already a quite severe restriction: many modern high-performance garbage collectors depend on being able to move objects around in memory. This is not possible if you assume that object ID == memory address. Which means you will not be able to use any of those high-performance algorithms.)
On pretty much all modern machines, memory access is word-aligned. While it is possible to address individual bytes, that is generally slower or more awkward. So we can basically assume that if we allocate memory, it will be aligned on a word boundary. This means that all memory addresses will be divisible by 8 on 64-bit systems and by 4 on 32-bit systems, or in other words, that all memory addresses will end in 3 (64-bit) or 2 (32-bit) zero bits. Or, in other words: 87.5% (75%) of the address space is unused.
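You can observe this yourself with a few allocations. A sketch in C++ (the exact alignment guarantee depends on the platform's allocator, but on a typical 64-bit system the low 3 bits come back zero):

#include <cstdint>
#include <cstdio>

int main() {
    long long* ptrs[4];
    for (auto& p : ptrs)
        p = new long long;          // heap allocations come back aligned
    for (auto* p : ptrs)
        // The masked-out low 3 bits print as 0, which is exactly the
        // space a tagged-pointer scheme is free to reuse.
        std::printf("%p  low 3 bits: %ju\n", static_cast<void*>(p),
                    static_cast<std::uintmax_t>(
                        reinterpret_cast<std::uintptr_t>(p) & 7));
    for (auto* p : ptrs)
        delete p;
}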
On the other hand, it would be quite a waste to represent Integers as a full-blown Ruby object:
They are immutable, which means we don't have to store any state.
They can't have instance variables, which means we don't have to store an instance variable table.
They can't have a singleton class, which means we don't have to store a __klass__ pointer.
They can't be extended.
And so on …
What this means is that we can optimize the representation of Integers by not storing them as objects at all. All we need is a special case in the engine, so that if someone asks for the class of, say, 42, instead of trying to look at 42's __klass__ pointer, the engine "magically" knows to just return the Integer class.
Once we have that in place, we can do a really cool trick, which is actually as old as the very first LISP and Smalltalk VMs, and it is called a tagged pointer representation. Normally, the value of a variable is a pointer to the object (header), but with a tagged pointer representation we can store the value of the object inside the pointer to the object itself!
All we need to do is to have some sort of tag on the pointer that tells the engine that this is actually not a pointer but a value disguised as a pointer. In some older machines, especially those specifically designed for running high-level languages, pointers did have a tag field specifically for holding, e.g. type information or access control. Modern machines don't have that, but we have those unused bits we can (ab)use as tag bits.
And that is what YARV is doing: when the last bit of a pointer is 1, it's not actually a pointer, it's an Integer. In particular, an Integer is encoded in YARV by shifting it one bit to the left and setting the last bit to 1. This allows us to encode a 63-bit Integer in a 64-bit pointer, and to do native integer arithmetic on it with no object overhead and only a little bit-shifting overhead.
And if you think about what this encoding means:
shifting one bit to the left is equivalent to multiplying by two
setting the last bit to 1 is equivalent to incrementing by 1
Then you can explain the first pattern: a small Integer with value n is encoded as the "quasi-pointer" 2n + 1, and since "memory address" and object ID are the same in YARV (even though this is not actually a memory address, because there is no object which could have an address), it will have the object ID 2n + 1.
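That encoding is easy to sketch outside of Ruby. The following C++ fragment mimics the 64-bit Fixnum scheme described above; the function names are made up for illustration and are not YARV's internal macros:

#include <cstdint>
#include <cstdio>

// Mimics YARV's 64-bit Fixnum scheme: value n lives in the
// "quasi-pointer" 2n + 1.
std::uintptr_t fixnum_encode(std::intptr_t n) {
    return (static_cast<std::uintptr_t>(n) << 1) | 1;
}

std::intptr_t fixnum_decode(std::uintptr_t p) {
    // Arithmetic right shift drops the tag bit and restores the sign.
    return static_cast<std::intptr_t>(p) >> 1;
}

bool is_fixnum(std::uintptr_t p) {
    return (p & 1) != 0;   // last bit 1 means "not a real pointer"
}

int main() {
    for (std::intptr_t n : {0, 1, 2, 42}) {
        std::uintptr_t id = fixnum_encode(n);
        // Matches the pattern in the question's output:
        // _id2ref(1) -> 0, _id2ref(3) -> 1, _id2ref(5) -> 2, ...
        std::printf("n = %2jd  quasi-pointer (object ID) = %ju  decoded = %jd\n",
                    static_cast<std::intmax_t>(n),
                    static_cast<std::uintmax_t>(id),
                    static_cast<std::intmax_t>(fixnum_decode(id)));
    }
}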
Integers that fit into 63 bits (31 bits) are encoded this way. In different engines, these have different names: e.g. in the Smalltalk-80 VM they are called SmallInts, and in YARV they are called Fixnums (the ones that don't fit into a Fixnum are allocated as objects like any other object and are called Bignums). Fixnum and Bignum actually used to be different subclasses of a fully-abstract Integer class in older versions of YARV, but this was considered a mistake. (It's really an internal optimization and should not be visible to the programmer.) In current versions of YARV, Fixnum and Bignum are aliases for Integer, and using them gives a deprecation warning.
This explains your result for 1. If you had tried ObjectSpace._id2ref(3), the result would have been 1; ObjectSpace._id2ref(5) would be 2, and so on.
And we are still using only 62.5% of the address space (on a 64-bit system)!
So, let's think about what else we might want to represent in this way.
YARV has a very similar optimization for Floats. Floating-point numbers that fit into 62 bits are called flonums and are represented similarly, with a tag of 10 at the end. (YARV does not use flonums on 32-bit platforms.)
This explains your result for ObjectSpace._id2ref(2). If you had tried ObjectSpace._id2ref(6), the result would have been -2.0.
And a similar trick is also played for Symbols. I won't explain it here in detail, because a) I don't actually fully know how it works, and b) it is slightly more complex, because the value being encoded isn't directly the Symbol value, rather it is an index into the Symbol table. However, that explains your result for 128.
Now, lastly, there is a completely different part of the address space that is also unused: the low addresses. On most modern operating systems a portion of the address space is reserved for the kernel, and the very low addresses are kept free for another reason as well: in C, it is illegal to dereference a NULL pointer. One way of implementing this would be for the runtime to track all pointer dereferences and check whether they are dereferencing the NULL pointer. But there is an easier way: just give the NULL pointer an actual memory address, but one that is never allocated. That way, you don't have to do anything: if the code tries to dereference the pointer, the address doesn't exist, and the MMU will take care of raising an error. So, most C compilers compile the NULL pointer to the actual memory address 0, and in order to make sure that there is never any real data allocated at that address, they keep a whole area around address 0 free.
This means that the low addresses are never used, and we can (ab)use them to represent even more "interesting" objects. Now, YARV uses the very low addresses to represent the following objects:
false at address 0, which has the additional advantage that 0 is considered false in C.
nil at address 8 (4 in 32-bit).
true at address 20 (2 in 32-bit).
Qundef (a special internal value inside the engine that denotes an undefined value) at address 52 (6 in 32-bit).
And that explains your number 8.
This also means that your 4, 16, 32, 64, 256, 512, and 1024 will probably never resolve to an object, because they are in the low address range where the C library will simply never allocate memory.
As a closing remark, I want to repeat one last time that all of this is a private internal implementation detail of a specific version of YARV running in a specific environment. You must not rely on any of this, ever.
When flonums were introduced in YARV, and on some platforms nil no longer had object ID 4, this did break some code, and it did cause some confusion (as evidenced e.g. by questions on Stack Overflow), even though the YARV developers are allowed to change object IDs at will, because there are no guarantees being made about any particular ID values or the structure of IDs. Please, do not make the same mistake.

Bit Shift Operator '<<' creates Extra 0xffff?

I am currently stuck on this simple bit-shifting problem. When I assign a short variable a value and shift it with << 8, I get 0xffff (2 extra bytes) when I print the result stored in the short variable. For long, however, it is OK. I am wondering why this happens.
I mean, a short isn't supposed to hold more than 2 bytes, but the output clearly shows my short values carrying 2 extra bytes with the value 0xffff.
I'm seeking your wisdom. :)
When the sign bit (bit 15) of the short is set to 1 after the shift, the 2 bytes ahead of it turn into 0xffff. 127 (0x7f) passes the test, but 129 (0x81) does not: when shifted, its top bit lands in bit 15 (the sign bit) and sets it to 1. 257 (0x101) doesn't set bit 15 after shifting, so it turns out OK.
There are several problems with your code.
First, you are doing bit-shift operations on signed variables; this may have unexpected results. Use unsigned short instead of short for bit shifting, unless you are sure of what you are doing.
You are explicitly casting a short to unsigned short and then storing the result back into a variable of type short. It's not clear what you expect that to accomplish; it is pointless and prevents nothing.
The issue is related to that. 129 << 8 is 33024, a value too big to fit in a signed short. You are accidentally setting the sign bit, causing the number to become negative. You would see that if you printed it with %d instead of %x.
Because short is implicitly promoted to int when passed as a parameter to printf(), you see the 32-bit version of this negative number, which has its upper 16 bits set accordingly by sign extension. This is where the leading ffff comes from.
You don't have this problem with long because, even though it is signed, long is still large enough to store 33024 without touching the sign bit.
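A minimal C++ reproduction of what the asker describes (their code isn't shown, so the variables here are invented):

#include <cstdio>

int main() {
    // On a two's-complement machine, 33024 stored into a short
    // becomes 0x8100 with the sign bit (bit 15) set, i.e. -32512.
    short s = static_cast<short>(129 << 8);
    unsigned short u = 129u << 8;   // same bits, but unsigned

    // printf promotes short to int. The signed short is sign-extended,
    // producing the leading ffff; the unsigned short is zero-extended.
    std::printf("signed:   %x (%d)\n", s, s);   // ffff8100 (-32512)
    std::printf("unsigned: %x (%d)\n", u, u);   // 8100 (33024)
}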

Win32 EXCEPTION_INT_OVERFLOW vs EXCEPTION_INT_DIVIDE_BY_ZERO

I have a question about the EXCEPTION_INT_OVERFLOW and EXCEPTION_INT_DIVIDE_BY_ZERO exceptions.
Windows will trap the #DE errors generated by the IDIV instruction and will end up generating an SEH exception with one of those two codes.
The question I have is how it differentiates between the two conditions. The information about IDIV in the Intel manual indicates that it generates #DE in both the divide-by-zero and overflow cases.
I took a quick look at the section on the #DE error in Volume 3 of the Intel manual, and the best I could gather is that the OS must be decoding the instruction, loading the divisor operand, and then comparing it to zero.
That seems a little crazy to me, though. Why would the chip designers not use a flag of some sort to differentiate between the two causes of the error? I feel like I must be missing something.
Does anyone know for sure how the OS differentiates between the 2 different causes of failure?
Your assumptions appear to be correct. The only information available on #DE is CS and EIP, which give the faulting instruction. Since the two status codes are different, the OS must be decoding the instruction to determine which one to raise.
I'd also suggest that the chip makers don't really need two separate interrupts for this case, since anything divided by zero is infinity, which is too big to fit into your destination register.
As for "knowing for sure" how it differentiates, all of those who do know are probably not allowed to reveal it, either to prevent people exploiting it (not entirely sure how, but jumping into kernel mode is a good place to start looking to exploit) or making assumptions based on an implementation detail that may change without notice.
Edit: Having played with kd I can at least say that on the particular version of Windows XP (32-bit) I had access to (and the processor it was running on) the nt!Ki386CheckDivideByZeroTrap interrupt handler appears to decode the ModRM value of the instruction to determine whether to return STATUS_INTEGER_DIVIDE_BY_ZERO or STATUS_INTEGER_OVERFLOW.
(Obviously this is original research, is not guaranteed by anyone anywhere, and also happens to match the deductions that can be made based on Intel's manuals.)
Zooba's answer sums it up: Windows parses the instruction to find out what to raise.
But you cannot rely on the routine choosing the correct code.
I observed the following on 64-bit Windows 7 with 64-bit DIV instructions:
If the operand (divisor) is a memory operand, it always raises EXCEPTION_INT_DIVIDE_BY_ZERO, regardless of the operand's value.
If the operand is a register and the lower dword is zero, it raises EXCEPTION_INT_DIVIDE_BY_ZERO even if the upper half is non-zero.
Took me a day to find this out... Hope this helps.
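For completeness, here is how the two status codes can be observed from user mode with MSVC's structured exception handling. A sketch: __try/__except is an MSVC extension, and the volatile operands keep the optimizer from folding the division away.

#include <windows.h>
#include <climits>
#include <cstdio>

volatile int num = 1, den = 0;   // anything / 0 raises #DE

int main() {
    __try {
        // With num = INT_MIN and den = -1 the quotient overflows and
        // Windows reports EXCEPTION_INT_OVERFLOW instead.
        std::printf("%d\n", num / den);
    }
    __except (GetExceptionCode() == EXCEPTION_INT_DIVIDE_BY_ZERO ||
              GetExceptionCode() == EXCEPTION_INT_OVERFLOW
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        std::printf("caught exception code 0x%lx\n", GetExceptionCode());
    }
}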
