When I search on Google, -1 % 2 gives (-1) mod 2 = 1, but in Xcode -1 % 2 = -1. Can anyone tell me why? Thanks for your help!
http://www.google.com.hk/#hl=zh-TW&source=hp&q=-1%252&oq=-1%252&aq=f&aqi=&aql=1&gs_sm=e&gs_upl=2572l7521l0l7993l10l10l0l9l0l0l154l154l0.1l1l0&bav=on.2,or.r_gc.r_pw.,cf.osb&fp=1573ccabb4b5821b&biw=929&bih=825
This text was taken from this post.
Objective-C is a superset of C99, and C99 defines a % b to be negative when a is negative. See also the Wikipedia entry on the modulo operation and this Stack Overflow question.
Something like (a >= 0) ? (a % b) : ((a % b) + b) (untested, and probably with redundant parentheses) should give you the result you want.
Often, the modulo operator is calculated using:
number % modulus := number - (number / modulus) * modulus
In your case, you get (-1) - (-1/2)*2 = (-1) - (0) = -1. Note that -1/2 evaluates to 0 since we are using integer math.
Whether all machines behave this way used to vary: C89 left the rounding direction of integer division implementation-defined, but C99 requires truncation toward zero, so any conforming modern compiler gives -1 here. I remember this coming up in a class years ago.
I'm confused as to the expected output of the following algorithm (this is all the information provided):
Consider the following small algorithm X:
If Algorithm X is called with input n=8, that is X(8), what will be the output?
How many iterations occur of the loop for the general value of n?
Algorithm X(n)
// Input n is an integer
Start
Initialise i to 1
While (i ≤ n) do
Report i
Increment i by 2
End while
End
I interpreted this to be a pre-test loop, where (i ≤ n) is checked before each iteration, and not a generator-type algorithm.
With my answers being:
An output of 1
Once when i ≤ n (assuming Report i terminates the function), and zero times otherwise
And the actual answers being:
1, 3, 5, 7, 9
floor(n / 2) + 1
I can see my mistake in guessing that the function terminates at Report i (it clearly doesn't); it must be some sort of generator.
But I don't understand how 9 is returned?
This obviously violates the condition (9 ≤ 8). I was told "this is because the loop is post-test", but that doesn't seem intuitive given the while (cond) do {block} syntax, and it doesn't seem correct in practice: (9 ≤ 8) evaluates to false whether it is checked before or after the loop body.
Shouldn't it be structured as do {block} while (cond) to be considered post-test? And shouldn't it stop yielding values after 7?
So:
How is 9 reported? I.e., how does the loop continue past the condition (9 ≤ 8)?
Why is while (cond) do {block} syntax considered pre-test?
Edit:
Wouldn't answer 2 equal floor((n + 1)/2) for n ≥ 0, and 0 otherwise?
I wrote a Haskell function to produce prime factorizations for numbers up to a certain threshold, built from a given set of prime factors. A minimal working example can be found here:
http://lpaste.net/117263
The problem: it works well for threshould <= 10^9 on my computer, but beginning with threshould = 10^10 the program produces no results at all; I never even see the first list element on my screen. The critical function is exponentSets: for every prime in the list 'factors', it computes the possible exponents (with respect to the exponents already chosen for other primes). Further comments are in the code. If 10^10 works on your machine, try a higher exponent (10^11, ...).
My question: what is responsible for this, and how can I improve the function exponentSets? (I'm still not very experienced in Haskell, so someone more experienced might have an idea.)
Even though you are using 64-bit integers, you still do not have enough capacity to store a temporary integer which is created in intLog:
intLog base num =
    let searchExtend lower@(e, n) =
            let upper@(e', n') = (2 * e, n ^ 2)  -- this line is what causes the problems
            -- some code
    in (some if) searchExtend (1, base)
rawLists is defined like this:
rawLists = recCall 1 threshould
Which in turn sets remaining_threshould in recCall to
threshould `quot` 1 -- same as threshould
Now intLog gets called by recCall like this:
intLog p remaining_threshould
which is the same as
intLog p threshould
Now comes the interesting part: since your base p is smaller than the threshold num, you call searchExtend (1, base), which then in turn does this:
searchExtend (e, n) =
let (e', n') = (2 * e, n ^ 2)
Since n is remaining_threshould, which is the same as threshould, you essentially square 2^32 + 1 and store this in an Int, which overflows and causes rawLists to give bogus results.
(2 ^ 32 + 1) ^ 2 :: Int is 8589934593
(2 ^ 32 + 1) ^ 2 :: Integer is 18446744082299486209
I need to write an algorithm that takes a positive integer x. If integer x is 0, the algorithm returns 0. If it's any other number, the algorithm returns 1.
Here's the catch. I need to condense the algorithm into one equation. i.e. no conditionals. Basically, I need a single equation that equates to 0 if x is zero and 1 if x > 0.
EDIT: As per my comment below, I realize that I wasn't clear enough. I am entering the formula into a system that I don't have control over, hence the strange restrictions.
However, I learned a couple tricks that could be useful in the future!
In C and C++, you can use this trick:
!!x
In those languages, !x evaluates to 1 if x is zero and 0 otherwise. Therefore, !!x evaluates to 1 if x is nonzero and 0 otherwise.
Hope this helps!
Try return (int)(x > 0)
In every programming language I know, (int)(TRUE) == 1 and (int)(FALSE) == 0
Assuming 32-bit integers:
unsigned negX = -(unsigned)x;
return negX >> 31;
Negating x puts a 1 in the highest bit for any x > 0. Shifting right by 31 places moves that 1 to the lowest bit and, because the value is unsigned, fills with 0s. This does nothing to a 0 but converts all positive integers to 1. (The shift must be done on an unsigned value: right-shifting a negative signed int is implementation-defined in C.)
This is basically the sign function, but since you specified a positive integer input, you can drop the part that converts negative numbers to -1.
Since virtually every system I know of uses IEEE-754 representation for floating-point numbers, you could just rely on its behavior (namely, that 0.0 / 0.0 is NaN, and NaN != NaN). Pseudo-C (-Java, ...) follows:
float oneOrNAN = (float)(x) / (float)(x);
return oneOrNAN == oneOrNAN;
Like I said, I wasn't clear enough in my problem description. When I said equation, I meant a purely algebraic equation.
I did find an acceptable solution: Y = X/(X - .001)
If X is zero you get 0 / -.001, which is just 0. For any other number, e.g. X = 5, you get 5/4.999, which is close enough to 1 for my particular situation.
However, this is interesting:
!!x
Thanks for the tip!
For the problem statement in google codejam 2008: Round 1A Question 3
In this problem, you have to find the last three digits before the decimal point for the number (3 + √5)^n.
For example, when n = 5, (3 + √5)^5 = 3935.73982... The answer is 935.
For n = 2, (3 + √5)^2 = 27.4164079... The answer is 027.
My solution is based on the idea that T(i) = 6*T(i-1) - 4*T(i-2) + 1, where T(i) is the integer part of (3 + √5)^i, and is as below:
#include <stdio.h>

int a[5000];

int main(void) {
    unsigned long n;
    int i, t;
    a[0] = 1;
    a[1] = 5;
    freopen("C-small-practice.in", "r", stdin);
    scanf("%d", &t);
    for (i = 2; i < 5000; i++)
        a[i] = (6*a[i-1] - 4*a[i-2] + 10001) % 1000;
    for (i = 1; i <= t; i++) {
        scanf("%ld", &n);
        printf("Case #%d: %.3d\n", i, a[(int)n]);
    }
    fclose(stdin);
    return 0;
}
In the line a[i]=(6*a[i-1]-4*a[i-2]+10001)%1000; I expected integer overflow, but I don't know why adding 10,000 gives the right answer.
I am using the GCC compiler, where sizeof(int) == 4.
Can anyone explain what is happening?
First off, the line
a[i]=(6*a[i-1]-4*a[i-2]+10001)%1000;
shouldn't actually cause any overflow, since you're keeping all previous values below 1000.
Second, did you consider what happens if 6*a[i-1]-4*a[i-2]+1 is negative? The modulus operator doesn't have to always return a positive value; it can also return negative values as well (if the thing you are dividing is itself negative).
By adding 10000, you've ensured that no matter what the previous values were, the value of that expression is positive, and hence the mod will give a positive integer result.
Expanding on that second point, here's §6.5.5 ¶6 of the C99 specification:
When integers are divided, the result of the / operator is the algebraic
quotient with any fractional part discarded. If the quotient a/b is
representable, the expression (a/b)*b + a%b shall equal a.
A note beside the word "discarded" states that / "truncates toward zero". Hence, for the second sentence to be true, the result of a % b when a is negative must itself be negative.
Possible Duplicate:
The most efficient way to implement an integer based power function pow(int, int)
How can I calculate powers with better runtime?
E.g. 2^13.
I remember seeing somewhere that it has something to do with the following calculation:
2^13 = 2^8 * 2^4 * 2^1
But I can't see how calculating each component of the right side of the equation and then multiplying them would help me.
Any ideas?
Edit: I did mean with any base. How do the algorithms you've mentioned below, in particular "exponentiation by squaring", improve the runtime / complexity?
There is a generalized algorithm for this, but in languages that have bit-shifting, there's a much faster way to compute powers of 2. You just put in 1 << exp (assuming your bit shift operator is << as it is in most languages that support the operation).
I assume you're looking for the generalized algorithm and just chose an unfortunate base as an example. I will give this algorithm in Python.
def intpow(base, exp):
    if exp == 0:
        return 1
    elif exp == 1:
        return base
    elif (exp & 1) != 0:
        return base * intpow(base * base, exp // 2)
    else:
        return intpow(base * base, exp // 2)
This lets exponents be computed in O(log exp) multiplications. It's a divide-and-conquer algorithm. :-) As someone else said, exponentiation by squaring.
If you plug your example into this, you can see how it works and is related to the equation you give:
intpow(2, 13)
2 * intpow(4, 6)
2 * intpow(16, 3)
2 * 16 * intpow(256, 1)
2 * 16 * 256 == 2^1 * 2^4 * 2^8
Use bitwise shifting. Ex. 1 << 11 returns 2^11.
Powers of two are the easy ones. In binary 2^13 is a one followed by 13 zeros.
You'd use bit shifting, which is a built in operator in many languages.
You can use exponentiation by squaring. This is also known as "square-and-multiply" and works for bases != 2, too.
If you're not limiting yourself to powers of two, then:
k^(2n) = (k^n)^2
The fastest free algorithm I know of is by Phillip S. Pang, Ph.D., and the source code can be found here.
It uses table-driven decomposition, which makes it possible to write an exp() function that is 2-10 times faster than the native exp() of a Pentium(R) processor.