How does divide overflow occur when the sign of the dividend is the same as that of the divisor?

As I know, The "Divide Overflow" is an exception that occurs when you try to perform division by zero.
I was learning a tutorial about computer architecture and design and I got confused by a statement
Divide overflow occurs when the sign of the dividend is same as that of the divisor.
Can anyone please enlighten me about this?

Basically, it happens because the quotient of the division cannot be represented in the destination register. For example, dividing a 32-bit dividend by a 16-bit divisor must produce a 16-bit quotient, and that quotient will not fit whenever the upper 16 bits of the dividend are greater than or equal to the divisor.
Take a look here:
http://faculty.kfupm.edu.sa/COE/aimane/assembly/pagegen-71.aspx.htm
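To make that concrete, here is a minimal C sketch of the x86-style rule described on that page: a 32-bit dividend divided by a 16-bit divisor must produce a 16-bit quotient, and the divide overflows exactly when the upper half of the dividend is greater than or equal to the divisor. (div_overflows is an illustrative helper, not a real API.)

#include <stdint.h>
#include <stdio.h>

/* Sketch: a 32-bit dividend divided by a 16-bit divisor must yield a
   16-bit quotient, as with the x86 DIV instruction. The quotient fails
   to fit exactly when the high 16 bits of the dividend >= the divisor. */
int div_overflows(uint32_t dividend, uint16_t divisor)
{
    if (divisor == 0)
        return 1;                        /* divide by zero also traps */
    return (dividend >> 16) >= divisor;  /* quotient would exceed 16 bits */
}

int main(void)
{
    printf("%d\n", div_overflows(0x00020000u, 0x0001u)); /* 1: quotient 0x20000 */
    printf("%d\n", div_overflows(100u, 7u));             /* 0: quotient 14 fits */
    return 0;
}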

Related

Problem size, input size and asymptotic behavior for an algorithm (re post)

I am re-posting my question because I accidentally said that another thread (with a similar topic) answered my question, which wasn't the case. I am sorry for any inconvenience.
I am trying to understand how the input size, the problem size, and the asymptotic behavior of an arbitrary algorithm given in pseudocode differ from each other. While I fully understand the input size and the asymptotic behavior, I have problems understanding the problem size. To me it looks as if problem size = space complexity for a given problem, but I am not sure. I'd like to illustrate my confusion with the following example:
We have the following pseudo code:
ALGONE(x, y)
    if x = 0 or x = y then
        return 1
    end
    return ALGONE(x-1, y-1) + ALGONE(x, y-1)
So let's say we pass two inputs $x$ and $y$, and $n$ represents the number of digits.
Since addition is our main operation, and addition is an elementary operation that takes n steps for two n-digit numbers, the asymptotic behavior is of the form O(n).
But what about the problem size in this case? I don't understand what I am supposed to say. The term "problem size" is so vague. It depends on the algorithm, but even if one is able to understand the algorithm, what do you give as an answer?
I'd assume that in this particular case the problem size might be the number of bits we need to represent the input. But this is a guess of mine, grounded in nothing.
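For reference, here is the pseudocode above as runnable C (a minimal sketch, assuming 0 <= x <= y so the recursion terminates). The recurrence with both base cases equal to 1 is Pascal's rule, so it computes the binomial coefficient C(y, x):

#include <stdio.h>

/* Direct C translation of the ALGONE pseudocode above. */
unsigned long algone(unsigned x, unsigned y)
{
    if (x == 0 || x == y)
        return 1;
    return algone(x - 1, y - 1) + algone(x, y - 1);
}

int main(void)
{
    printf("%lu\n", algone(2, 5));   /* prints 10, i.e. C(5, 2) */
    return 0;
}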

How to find accuracy of matrix multiplication with floating-point numbers?

I am trying to analyze how floating-point computation becomes less accurate as the precision of the representation decreases. In order to do that, I wanted to perform simple matrix operations on different variations of floating-point representation, such as float64, float32, and float16. Since float64 computation will give the most precise and accurate result of the three, I assume all float64 computations give the expected result (i.e., error = 0).
The issue is that when I compare the calculated result with the expected result, I don't have an exact idea of how to quantify all the individual errors that I get into a single metric. I know about certain ways to go about it, such as finding the error mean, or the sum of square of errors (SSE), but I just wanted to know if there was a standard way of calculating the overall error of a given matrix computation.
Perhaps a variant of the condition number can be helpful? See here: https://en.wikipedia.org/wiki/Condition_number#Matrices
"...if there was a standard way of calculating the overall error of a given matrix computation."
Consider the case when the matrix has size 1. Then we are in the familiar one-dimensional domain.
How should y_computed_as_float be compared to y_expected? Even in this case, there is no standard for how these should be compared as floating-point numbers. Subtract? Divide? It is often context-sensitive. So "no" to the OP's question.
Yet there are common practices. So a potential "yes" to the OP's question for select cases.
Floating-point computations are often assessed by the difference between the computed and mathematically expected values, scaled by the unit in the last place (ULP)*.
error = (y_computed_as_float - y_expected) / ulpf((float) y_expected);
For an N-by-N matrix, the overall matrix error could use a root mean square of the N^2 element errors.
* Scaling by ULP has some issues near each power of 2, and more near 0.0. There are ways to mitigate that, but we are getting into the weeds.
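A rough sketch of that metric in C, assuming the reference results are kept in double; ulpf here is a hypothetical helper built on nextafterf from <math.h>:

#include <math.h>

/* Hypothetical ulpf(): the ULP of a float via nextafterf. Near powers
   of 2 and near 0.0 it has the issues noted in the footnote above. */
static float ulpf(float x)
{
    return nextafterf(fabsf(x), INFINITY) - fabsf(x);
}

/* Root mean square of the per-element ULP-scaled errors of an n-by-n
   matrix, comparing a float result against a double reference. */
double matrix_ulp_rms(int n, const float *computed, const double *expected)
{
    double sum = 0.0;
    for (int i = 0; i < n * n; i++) {
        double e = (computed[i] - expected[i]) / ulpf((float) expected[i]);
        sum += e * e;
    }
    return sqrt(sum / (n * n));
}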

Issue with time-analysis

I'm teaching myself data structures, and am on a section giving a brief outline of time analysis. The following problem was given:
"Each of the following are formulas for the number of operations in some
algorithm. Express each formula in big-O notation."
The problem then goes on to give multiple scenarios. One was:
g.) The number of times that n can be divided by 10 before dropping below 1.0.
(Note: it doesn't state what n is exactly, so I'm assuming it's just some input size. But I don't think it matters in terms of how the problem is stated.)
I reasoned that, as this relates to the order of magnitude of n, it should just be log n. However, the text says that it should be quadratic. Is there something I am missing?
Any help in correcting my thinking would be greatly appreciated.
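For what it's worth, a quick experiment supports the logarithmic reading (which suggests the book's "quadratic" may be an erratum): the count below is floor(log10(n)) + 1 for n >= 1, i.e. O(log n).

#include <stdio.h>

/* Count how many times n can be divided by 10 before dropping below 1.0. */
int divisions_by_ten(double n)
{
    int count = 0;
    while (n >= 1.0) {
        n /= 10.0;
        count++;
    }
    return count;
}

int main(void)
{
    /* The count grows by 1 each time n grows by a factor of 10. */
    printf("%d %d %d\n", divisions_by_ten(9.0),
           divisions_by_ten(1e3), divisions_by_ten(1e6)); /* 1 4 7 */
    return 0;
}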

Non-restoring division algorithm on 128bit/64bit

I am trying to implement non-restoring division of a 128-bit number by a 64-bit number, using 32-bit registers. I am using the algorithm explanation from this link: Non-restoring division algorithm. But I have some problems with the initialization:
1. Since I have 128-bit/64-bit, what must I set the count to, 128?
2. How many bits do I need for register A, 128?
All in 128 bits.
Yes
The algorithm presented in that link is for unsigned numbers. The use of the expression "A < 0?" can be confusing. I would use "MSB(A) = 1?" or similar instead.
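To make the register layout concrete, here is a scaled-down sketch in C: a 64-bit dividend divided by a 32-bit divisor, producing a 32-bit quotient in 32 iterations. The partial remainder A sits in a signed type one word wider than the divisor, so "A < 0?" (equivalently "MSB(A) = 1?") is just its sign bit. It assumes no overflow, i.e. the high half of the dividend is smaller than the divisor; the 128/64 case follows the same shape with the widths scaled up.

#include <stdint.h>
#include <stdio.h>

/* Non-restoring division sketch: 64-bit dividend / 32-bit divisor. */
void nonrestoring_div(uint64_t dividend, uint32_t divisor,
                      uint32_t *quotient, uint32_t *remainder)
{
    int64_t  a = (int64_t)(dividend >> 32);   /* high half of dividend */
    uint32_t q = (uint32_t)dividend;          /* low half of dividend  */

    for (int i = 0; i < 32; i++) {
        a = 2 * a + (q >> 31);                /* shift (A,Q) left one bit */
        q <<= 1;
        if (a >= 0)
            a -= divisor;                     /* A >= 0: subtract divisor */
        else
            a += divisor;                     /* A < 0: add divisor back  */
        if (a >= 0)
            q |= 1;                           /* quotient bit = NOT MSB(A) */
    }
    if (a < 0)
        a += divisor;                         /* final remainder fix-up */

    *quotient  = q;
    *remainder = (uint32_t)a;
}

int main(void)
{
    uint32_t q, r;
    nonrestoring_div(1000000007ull * 3 + 2, 1000000007u, &q, &r);
    printf("%u %u\n", q, r);                  /* prints 3 2 */
    return 0;
}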

Programming Logic: Finding the smallest equation to a large number

I do not know a whole lot about math, so I don't know how to begin to Google what I am looking for, and I rely on the intelligence of experts to help me understand what I am after...
I am trying to find the smallest string of equations for a particular large number. For example given the number
"39402006196394479212279040100143613805079739270465446667948293404245721771497210611414266254884915640806627990306816"
The smallest equation is 64^64 (that I know of). It contains only 5 bytes.
Basically the program would reverse the math, instead of taking an expression and finding an answer, it takes an answer and finds the most simplistic expression. Simplistic is this case means smallest string, not really simple math.
Has this already been created? If so, where can I find it? I am looking to take extremely HUGE numbers (10^10000000) and break them down into expressions that will hopefully be around 100 characters in length. Is this even possible? Are modern CPUs/GPUs not capable of doing such big calculations?
Edit:
Ok. So finding the smallest equation takes WAY too much time, judging by the answers. Is there any way to brute-force this and get the smallest expression found so far?
For example, given a number that is super large, sometimes taking the square root of the number will result in an expression smaller than the number itself.
As for which expressions it would start with, it would naturally try expressions that make the representation the smallest. I am sure there are tons of math things I don't know, but one of the ways to make a number a lot smaller is powers.
Just to throw another keyword in your Google hopper, see Kolmogorov Complexity. The Kolmogorov complexity of a string is the size of the smallest Turing machine that outputs the string, given an empty input. This is one way to formalize what you seem to be after. However, calculating the Kolmogorov complexity of a given string is known to be an undecidable problem :)
Hope this helps,
TJ
There's a good program to do that here:
http://mrob.com/pub/ries/index.html
I asked the question "what's the point of doing this", as I don't know if you're looking at this question from a mathematics point of view or a large-number-factoring point of view.
As other answers have considered the factoring point of view, I'll look at the maths angle. In particular, the problem you are describing is a compressibility problem. This is where you have a number and want to describe it with the smallest algorithm. Highly random numbers have very poor compressibility: to describe them you either have to write out all of the digits, or describe a deterministic algorithm that is only slightly smaller than the number itself.
There is currently no general mathematical theorem which can determine whether a representation of a number is the smallest possible for that number (although a lower bound can be derived from Shannon's information theory). (I said general theorem, as special cases do exist.)
As you said you don't know a whole lot of math, this is perhaps not a useful answer for you...
You're doing a form of lossless compression, and lossless compression doesn't work on random data. Suppose, to the contrary, that you had a way of compressing N-bit numbers into (N-1)-bit numbers. In that case, you'd have 2^N values to compress into 2^(N-1) designations, which is an average of 2 values per designation, so your average designation couldn't be decompressed unambiguously. Lossless compression works well on relatively structured data, where the data we're likely to get compresses small, and the data we aren't going to get actually grows some.
It's a little more complicated than that, since you're compressing partly by allowing more information per character. (There are a greater number of N-character sequences involving digits and operators than digits alone.) Still, you're not going to get lossless compression that, on the average, is better than just writing the whole numbers in binary.
It looks like you're basically wanting to do factoring on an arbitrarily large number. That is such a difficult problem that it actually serves as the cornerstone of modern-day cryptography.
This really appears to be a mathematics problem, and not programming or computer science problem. You should ask this on https://math.stackexchange.com/
While your question remains unclear, perhaps integer relation finding is what you are after.
EDIT:
There is some speculation that finding a "short" form is somehow related to the factoring problem. I don't believe that is true unless your definition requires a product as the answer. Consider the following pseudo-algorithm, which is just a sketch and for which no optimization is attempted.
If "shortest" is a well-defined concept, then in general you get "short" expressions by using small integers raised to large powers. If N is my integer, then I can find an integer nearby that is 0 mod 4. How close? Within +/- 2. I can find an integer within +/- 4 that is 0 mod 8. And so on. Now that's just the powers of 2. I can perform the same exercise with 3, 5, 7, etc. We can, for example, easily find the nearest integer that is simultaneously a product of powers of 2, 3, 5, 7, 11, 13, and 17; call it N_1. Now compute N - N_1, call it d_1. Maybe d_1 is "short". If so, then N_1 (expressed as powers of the primes) + d_1 is the answer. If not, recurse to find a "short" expression for d_1.
We can also pick integers that are maybe farther away than our first choice; even though the difference d_1 is larger, it might have a shorter form.
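A crude illustration of that sketch in C, restricted to uint64_t and to single powers b^e rather than products of prime powers (the real problem involves arbitrary-precision integers, so every bound and name here is a placeholder):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Return the largest power b^e <= n over small bases b, choosing the
   base whose power lands closest to n. */
static uint64_t nearest_power(uint64_t n, unsigned *base, unsigned *exp)
{
    uint64_t best = 1;
    *base = 2;
    *exp = 0;
    for (unsigned b = 2; b <= 16; b++) {
        uint64_t p = 1;
        unsigned e = 0;
        while (p <= n / b) { p *= b; e++; }   /* largest b^e <= n */
        if (n - p < n - best) { best = p; *base = b; *exp = e; }
    }
    return best;
}

/* Greedily peel off the nearest power and recurse on the difference
   d_1 = N - N_1, printing something like "10^9 + 7". */
void express(uint64_t n)
{
    if (n < 100) {                      /* small leftovers: print as-is */
        printf("%" PRIu64, n);
        return;
    }
    unsigned b, e;
    uint64_t p = nearest_power(n, &b, &e);
    printf("%u^%u", b, e);
    if (n > p) {
        printf(" + ");
        express(n - p);
    }
}

int main(void)
{
    express(1000000007u);               /* prints 10^9 + 7 */
    printf("\n");
    return 0;
}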
The existence of an infinite number of primes means that there will always be numbers that cannot be simplified by factoring. What you're asking for is not possible, sorry.
