What determines the result of overflowed operations? [closed]

Example:
int max = a > b ? a : b;
int min = a + b - max;
What determines whether this will work? The processor? The hardware? The language? Help me understand this at as deep a level as possible.

The processor IS the hardware (at least for the purposes of this question).
The language is purely a way for you to express things so that they can be converted into what the processor itself expects. The role of the language here is to define what "int" means, what the arithmetic operators are and do, and what their exceptional behavior is. Low-level languages (like C/C++) leave several things "implementation defined" or even undefined, such as the overflow behavior of signed integers. Other languages (like Python) may define "int" as an abstract (not a hardware) concept and thereby change some of the rules (such as detecting overflows and applying custom behavior).
If the language leaves something implementation defined and the implementation offloads that decision to the hardware, then the hardware is what defines the behavior of your code.
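To make that concrete, here is a minimal C sketch (the values are just for illustration) of the split: the language defines unsigned wraparound, leaves signed overflow undefined, and whatever you observe for the latter comes from the implementation and, ultimately, the hardware:

#include <limits.h>
#include <stdio.h>

int main(void) {
    int a = 3, b = 7;
    int max = a > b ? a : b;
    int min = a + b - max;   /* fine here: a + b stays in range */
    printf("min = %d, max = %d\n", min, max);

    /* Unsigned overflow is defined by the language: it wraps modulo 2^N. */
    unsigned int u = UINT_MAX;
    printf("%u\n", u + 2u);  /* prints 1, guaranteed */

    /* Signed overflow (e.g. INT_MAX + 1) is undefined in C/C++: the
       wraparound you usually see is the hardware showing through, not
       a promise made by the language. */
    return 0;
}

Note that the min/max trick from the question breaks when a + b itself overflows, which is exactly where these layers of definition start to matter.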

The high-level programming language provides a way for humans to describe what they want to happen. A compiler reduces that down into a language the processor understands, (ultimately) machine code. The instruction set for a particular processor is designed to be useful for doing tasks: general-purpose processors for general-purpose tasks, including the ones you have described. Unlike pencil-and-paper math, where if we need another column we just use another power of ten (99 + 1 = 100, for example: two digits wide going in, three digits coming out), processors have a fixed width for their registers. That doesn't mean you can't get creative, but the language and the resources (memory, disk space, etc.) have limits. And the processor, either directly in the logic or via the compiler emitting the right sequence of instructions, can and will detect an overflow if you ask it to, in general. Some processors make this harder than others, and some are not general purpose enough, but I don't think we need to worry about those; the one you are reading this web page on can definitely handle this.

Computers (hardware) represent numbers in two's complement. Check this for details of two's complement, and why computers use it.
In two's complement, signed numbers (not floating-point ones for now, for the sake of simplicity) have a sign bit as the most significant bit. For example:
01111111
Represents 127 in two's complement. And
10000000
represents -128. In both examples, the first bit is the sign bit: if it is 0, the number is positive; otherwise it is negative.
8-bit signed numbers can represent numbers between -128 and 127, so if you add 127 and 3, you won't get 130, you will get -126 because of overflow. Let's see why:
  01111111   (127)
+ 00000011   (  3)
  --------
  10000010
which is a negative number: -126 in two's complement.
How does the hardware know an overflow occurred? In addition, for example, if you add two positive numbers and the result comes out negative, that means overflow. And if you add two negative numbers and the result comes out positive, that means overflow again.
I hope that is a nice example of how these things happen at the hardware level.
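As a rough illustration of that rule in C (assuming the usual wrapping conversion back to int8_t, which the standard only loosely pins down), the overflow check can be written directly from the signs:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t a = 127, b = 3;
    /* The addition happens in int; converting 130 back to int8_t wraps
       to -126 on essentially all two's complement machines. */
    int8_t sum = (int8_t)(a + b);
    /* The rule from above: same-sign operands, different-sign result. */
    int overflow = ((a >= 0) == (b >= 0)) && ((sum >= 0) != (a >= 0));
    printf("%d + %d = %d, overflow = %d\n", a, b, sum, overflow);
    return 0;
}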

Related

Can overflow occur when a positive number is subtracted from a positive number resulting in a negative number?

I am working with the MIPS assembly language but am confused about the overflow aspect of arithmetic here.
Say I am subtracting 25 from 20 and end up with -5. Would this result in an overflow?
I understand that with addition, if you add 2 positive numbers or 2 negative numbers and the output is the opposite sign, then there is overflow, but I am lost when it comes to subtraction.
Look for the examples at the extremes. Let's do it in 8-bit signed to keep it simple, but the same principles hold in 32 bits.
minuend: the smallest possible positive (non-negative) number, 0, and
subtrahend: the largest possible number, 127
When we do 0 - 127, the answer is -127, and that indeed fits in 8-bit signed. There is a borrow. In any processor the effect of the borrow is to propagate 1's throughout the upper bits, making it a negative number of the proper magnitude.
Different processors set flags differently based on this borrow: MIPS doesn't have flags at all, x86 will set a flag to indicate borrow, and other processors will set the flag to indicate carry.
In 8-bit signed numbers:
minuend: the smallest possible positive (non-negative) number, 0, and
subtrahend: the most negative possible number, -128
When we do 0 - -128, the answer should be 128, but that cannot be represented in 8-bit signed format, so this is the example of overflow. 0 - -127 = 127, and that can be represented, so no overflow there.
If we do it in 8-bit unsigned, your example of 20 - 25 = -5, but -5 cannot be represented in an unsigned format, so this is indeed overflow (or modular arithmetic, if you like that).
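To make the extremes concrete, here is a small C sketch (the helper name is made up) of the same rule hardware uses for its overflow flag on a - b: overflow occurs when the operands have different signs and the result's sign differs from the minuend's:

#include <stdint.h>
#include <stdio.h>

static int sub_overflows(int8_t a, int8_t b) {
    int8_t r = (int8_t)(a - b);      /* wraps on typical hardware */
    return ((a ^ b) & (a ^ r)) < 0;  /* inspect the sign bit */
}

int main(void) {
    printf("%d\n", sub_overflows(20, 25));    /* 0: 20 - 25 = -5 fits */
    printf("%d\n", sub_overflows(0, -128));   /* 1: 128 does not fit  */
    printf("%d\n", sub_overflows(0, -127));   /* 0: 127 fits          */
    return 0;
}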
Short answer: Yes, as the 32-bit representation of -5 is FFFFFFFB.
Long answer: Depends on what you mean by "overflow".
There's signed overflow, which is when you cross the 7FFFFFFF-80000000 boundary.
And there's unsigned overflow where you cross the FFFFFFFF-00000000 boundary.
For signed arithmetic, signed overflow is undeniably a bad thing (and is considered undefined behavior in C and other languages). However, unsigned overflow is not necessarily a problem. Usually it is treated as one, but many procedures rely on it to work.
For example, imagine you have a "frame timer" variable, i.e. a 32-bit counter variable that increments by 1 during an interrupt service routine. This interrupt is tied to a real-time clock running at 60 hertz, so every 1/60th of a second the variable's value increases by 1.
Now, this variable will overflow eventually. But do we really care? No. It just wraps around back to zero again. For our purposes it's fine, since we really don't need to know exactly how long our program has been running since it began. We probably have events that occur every n ticks of the timer, but we can just use a bitmask for that. Effectively, in this case we're using unsigned overflow to say "if this value equals FFFFFFFF and we're about to add 1 to it, reset it to zero instead", which, thanks to overflow, we get without any additional condition checking.
The reason I bring this up is so that you understand that overflow is not always a bad thing, if it's the unsigned variety. It depends entirely on what your data is intended to represent (which is something you can't explain even to a C compiler.)
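Here is a sketch of that timer idea in C (the 60 Hz interrupt handler is assumed; we just force the counter to the boundary to show the wrap):

#include <stdint.h>
#include <stdio.h>

static volatile uint32_t ticks;   /* would be incremented by the ISR */

int main(void) {
    ticks = 0xFFFFFFFFu;
    uint32_t before = ticks;
    ticks++;   /* defined behavior for unsigned: wraps to 0 */
    /* Elapsed ticks still come out right thanks to modular arithmetic: */
    printf("before=%08X after=%08X elapsed=%u\n",
           (unsigned)before, (unsigned)ticks, (unsigned)(ticks - before));
    return 0;
}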

Implementing signed adder in HDL [closed]

Hey Everyone,
I'm working on a project which requires me to use signed adders.
How does one implement this adder in HDL without using the arithmetic operators (test bench not required)? Is there a way to modify an existing adder circuit to work as a signed adder? If so, your inputs will be of great help.
Thank you in advance.
To answer the first part of your question, you need to implement (for example) a ripple carry adder, which is a series of full adders connected in a certain way. There will be tons of stuff about this on the internet, because it is standard stuff. For example: http://eda-playground.readthedocs.io/en/latest/code-examples/full-adder.html
To answer the second part of your question: an adder which can handle signed numbers is identical to one that can only handle unsigned numbers as long as the inputs and outputs use two's complement representation.
To negate a number using two's complement representation, one inverts all the bits and adds one. For example, using an 8-bit two's complement representation
23 is 00010111 and
-23 is 11101001 which was generated by inverting all the bits (11101000) and then adding 1
Now let's add the two together using a standard adder, i.e. by doing long addition as you would have learnt at school:
  23 is  00010111
 -23 is  11101001 +
         --------
      (1)00000000
Look: it works! The (1) is the carry output, which you can ignore. The result of adding two numbers needs to be one bit wider than the wider of those two numbers, so really we should have widened the two inputs to 9 bits first. So, in a 9-bit two's complement representation:
  23 is  000010111
 -23 is  111101001 +
         ---------
      (1)000000000
You widen numbers which use a two's complement representation (i.e., signed numbers) by sign-extending: that is, you take the left hand bit (the MSB, the sign bit) and repeat it leftwards (in this case once, because we are widening by just 1 bit).
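HDL is the natural home for this, but as a language-neutral sketch, the same ripple-carry idea can be simulated in C: chain full adders bit by bit, sign-extend the inputs to 9 bits first, and the identical circuit handles both operands signed or unsigned:

#include <stdio.h>

static int add9(int a, int b) {
    unsigned ua = (unsigned)a & 0x1FF;   /* low 9 bits of the sign-extended int */
    unsigned ub = (unsigned)b & 0x1FF;
    unsigned sum = 0, carry = 0;
    for (int i = 0; i < 9; i++) {        /* one full adder per bit */
        unsigned x = (ua >> i) & 1, y = (ub >> i) & 1;
        sum  |= (x ^ y ^ carry) << i;              /* sum bit   */
        carry = (x & y) | (carry & (x ^ y));       /* carry out */
    }
    /* Reinterpret the 9-bit result as signed: */
    return (sum & 0x100) ? (int)sum - 0x200 : (int)sum;
}

int main(void) {
    printf("%d\n", add9(23, -23));   /* 0   */
    printf("%d\n", add9(100, 50));   /* 150 */
    return 0;
}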

How many primitive operations does a 16-, 32- or 64-bit processor execute to perform a logical right shift of an N-bit binary number? [closed]

Recently, I have been trying to understand how the binary extended Euclidean algorithm works at the processor level. This question is all about finding an inverse element in GF(2^m) with a polynomial basis.
Generally, I came across the extended Euclidean algorithm for evaluating an inverse element, but the fact is that it involves too many addition and multiplication operations. The binary EEA requires just bit-shifting operations (equivalent to division by 2, i.e. logical shift right). The algorithm is in this link, page number 8.
In steps 3 and 5 of this algorithm, every iteration shifts the parameters u and b by 1 bit to the right, adding a zero at the MSB at the same time. The loop ends when u == 1 and returns b. My question is: how many primitive operations does a processor (say a 32-bit processor, for example) perform in step 3 or step 5 of every iteration?
I came across the barrel shifter and I am quite confused about how fast the shifting takes place. Should I really count these as primitive operations, or should I ignore them because the shifting may be faster?
It would really help me a lot if someone would show the primitive operations for the case where the size of u is 194 bits.
In case you are wondering about the denominator x in steps 3 and 5 of the algorithm, it's the polynomial representation: x is nothing but 10 in binary, and the parameter u is an N-bit binary number.
There is no generic answer to this question: you can use portable code that will be tedious to optimize, or highly machine-specific code that will be even more complicated to optimize without breaking.
If you want real performance, you have to use MMX/AVX registers of the maximum width you can get your hands on. Intel provides lightweight wrappers on the low-level instructions as macros and inline functions.
Always use unsigned types for your shifting operations to avoid unnecessary steps.
Usually there is a "right shift" assembly opcode which can shift a register right by a given number of bits. Such an operation takes one cycle.
This assumes that your value is already loaded into a register, however.
The best answer anyway: Implement this algorithm in a low level language (C, C++) and look at the assembly code produced by the compiler.
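As a rough sketch of what those primitive operations look like for a wide value, here is a 1-bit logical right shift of a 194-bit u stored as seven 32-bit limbs in C (the layout and the operation counts are illustrative, not a statement about any particular CPU):

#include <stdint.h>
#include <stdio.h>

#define LIMBS 7   /* ceil(194 / 32) words, least significant limb first */

/* Per limb: two shifts and an OR, plus loop overhead, so on the order
   of 20-30 primitive operations per 1-bit shift on a 32-bit processor,
   versus a single (often one-cycle, barrel-shifted) instruction for a
   value that fits in one register. */
static void shr1(uint32_t u[LIMBS]) {
    for (int i = 0; i < LIMBS - 1; i++)
        u[i] = (u[i] >> 1) | (u[i + 1] << 31);  /* pull in next limb's LSB */
    u[LIMBS - 1] >>= 1;                         /* MSB gets the zero */
}

int main(void) {
    uint32_t u[LIMBS] = { 3, 0, 0, 0, 0, 0, 1 };  /* arbitrary test value */
    shr1(u);
    printf("%08X %08X ... %08X\n", u[6], u[5], u[0]);
    /* prints 00000000 80000000 ... 00000001 */
    return 0;
}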

How to write an 8086 assembly sorting program that works with 6-digit numbers? [closed]

I'm new to assembly language, although I know some code already.
However, I'm working with an 8086 emulator which only works with 16-bit numbers.
This is a homework assignment that I'm really stuck on:
How can I write assembly code which does the following:
1. Get 20 decimal numbers of at most 6 digits each and store them in an array.
2. Sort the array in ascending order.
It's really hard for me to understand how to manage the registers and the stack for these long numbers.
Any help will be appreciated in advance.
In order to sort 32-bit numbers (or wider) with 16-bit registers, you have to compare the upper part of each number separately.
Assume we have these two random 32-bit numbers (shown in hex): 4567afdf and 321abc09.
Now when you look at them as 16-bit values, they look like this:
4567 afdf
321a bc09
As you can easily see, you can compare the upper 16 bits on their own.
If the upper 16 bits are higher or lower, then you know that the lower part doesn't matter any more, and you sort the numbers accordingly.
If the upper 16 bits are equal, then you compare the lower 16 bits: if they are also equal, both numbers are equal, so no swap is needed; otherwise you swap the lower words accordingly. Since the upper 16 bits are equal, you don't even need to swap those.
If the upper 16 bits are different, you still have to swap the lower 16 bits along with the upper ones, as they might differ.
The basics of this approach can be used for an arbitrary number of bits, not just 32. Generally, when you have a seemingly hard problem, you should try to think of easy examples and how you would solve them; then you can extend the solution to more complicated cases, as in the sketch below.
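A compact way to see the comparison logic (in C rather than 8086 assembly, purely as a sketch of the idea) is to compare the high words first and fall back to the low words only on a tie:

#include <stdint.h>
#include <stdio.h>

/* Returns <0, 0 or >0, strcmp-style, using only 16-bit pieces the way
   a 16-bit CPU would. */
static int cmp32_via16(uint16_t a_hi, uint16_t a_lo,
                       uint16_t b_hi, uint16_t b_lo) {
    if (a_hi != b_hi)
        return a_hi < b_hi ? -1 : 1;   /* high words decide           */
    if (a_lo != b_lo)
        return a_lo < b_lo ? -1 : 1;   /* tie above: low words decide */
    return 0;                          /* fully equal: no swap needed */
}

int main(void) {
    /* 4567afdf vs 321abc09 from the example above: */
    printf("%d\n", cmp32_via16(0x4567, 0xAFDF, 0x321A, 0xBC09));  /* 1 */
    return 0;
}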
EDIT:
An alternative approach applies if you have strings of decimal numbers and you want to sort them based on the string representation instead of as numbers.
In this case, you can do it as follows:
If the lengths of the two number strings are different, the shorter one is the lower number.
If the lengths are equal, then you look at each digit individually (starting with the first digit) until you hit a non-equal digit or the end of the string. If you reached the end of the string, the numbers are the same; otherwise you know which one is higher or lower.
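In C (again just as a sketch of the comparison rule; a real solution would live in 8086 assembly), assuming plain digit strings without signs or leading zeros:

#include <stdio.h>
#include <string.h>

static int cmp_decimal(const char *a, const char *b) {
    size_t la = strlen(a), lb = strlen(b);
    if (la != lb)
        return la < lb ? -1 : 1;  /* shorter string = smaller number */
    return strcmp(a, b);          /* same length: first differing digit wins */
}

int main(void) {
    printf("%d\n", cmp_decimal("999", "123456"));    /* negative */
    printf("%d\n", cmp_decimal("123457", "123456")); /* positive */
    return 0;
}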

I have a few questions about ALUs...

So, I've been trying to learn about computers for the last few months and really learn in detail how they work. I was learning about subtractors recently and I was wondering...
First of all, to my understanding, a subtractor uses two's complement to get a result. But why does it subtract? For example, the two's complement of 5 (0101) is 1011. But that is also a positive eleven. Even though the number gets negated, what makes the subtractor take that as a negative number instead of another positive number? If the problem was 8 - 5, what stops it from doing 8 + 11?
What makes it recognize a signed bit from an unsigned bit? I've heard the running program decides, but then the question would be what gives the program the ability to decide whether to add or subtract, and how that is conveyed to the CPU and ALU.
Also, I've learned that ALUs use one circuit that switches back and forth between addition and subtraction. How does this circuit work? What makes it decide whether to add or subtract?
Lastly, how does this circuit switch from addition to subtraction? The only subtractor I've been shown is an adder with NOT gates attached to it. How does the circuitry differ in something that can change functions?
Subtraction is nothing else than addition with the second operand being a negative number. Two's complement is constructed so that adding with a negative number simply works as expected (when ignoring overflow).
The ALU does not need to know whether a number is positive or negative, but in two's complement any number with the most significant bit set to 1 is negative. The ALU doesn't care, because two's complement is designed so that it all just works.
Now, subtraction is just addition, so we can use the same circuits to carry out both functions. What the switch you are talking about does is negate the second operand (negation is pretty easy in two's complement; there are a few ways to do it) and then add the numbers.
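That switch is simple to sketch in C: XOR the second operand with the mode bit to conditionally invert it, and feed the same mode bit in as the adder's initial carry. Invert-plus-carry-in is exactly "flip all the bits and add one", i.e. two's complement negation. (An illustrative model, not any particular ALU's netlist.)

#include <stdint.h>
#include <stdio.h>

static uint8_t addsub(uint8_t a, uint8_t b, int subtract) {
    uint8_t mask = subtract ? 0xFF : 0x00;  /* the NOT gates, enabled or not */
    /* Same adder both times; 'subtract' doubles as the carry-in. */
    return (uint8_t)(a + (uint8_t)(b ^ mask) + (uint8_t)subtract);
}

int main(void) {
    printf("%u\n", addsub(8, 5, 1));  /* 3:  8 - 5 through the adder */
    printf("%u\n", addsub(8, 5, 0));  /* 13: plain addition          */
    return 0;
}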
