Write an algorithm which will evaluate P_N(x) = (N+1)x^N + N x^(N-1) + ... + 2x + 1 - pseudocode

Write an algorithm which will evaluate:
P_N(x) = (N+1)x^N + N x^(N-1) + ... + 2x + 1
I am trying to write pseudocode to evaluate the above, using a while loop and without using arrays.
So far I have something like this:
P := 0
R := 0
N := 9
SUM := 0
WHILE (N >= 0)
BEGIN
    R := N MOD 10
    BEGIN
        P := P * X
        SUM := SUM + R
        N := N - R
        N := N / 10
But it is not evaluating correctly.
Any guidance would be great!

If I understand correctly, you need something like this:
SUM := 0
POWER := 1
I := 0
WHILE I <= N
    SUM := SUM + (I + 1) * POWER
    POWER := POWER * X
    I := I + 1
END WHILE
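For anyone who wants to sanity-check it, here is the same loop as runnable Python (the function name evaluate_p is mine, not from the original answer):

def evaluate_p(n, x):
    """Evaluate P_N(x) = (N+1)*x^N + N*x^(N-1) + ... + 2*x + 1, i.e. the sum of (i+1)*x^i."""
    total = 0
    power = 1  # holds x^i; updated incrementally instead of recomputed
    i = 0
    while i <= n:
        total += (i + 1) * power  # the coefficient of x^i is i + 1
        power *= x
        i += 1
    return total

print(evaluate_p(2, 3))  # 3*3^2 + 2*3 + 1 = 34

Keeping a running power avoids both arrays and a nested loop, so the whole evaluation takes O(N) multiplications.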

The sum from 1 to n in Theta(log n)

Is there any way to calculate the sum of 1 to n in Theta(log n)?
Of course, the obvious way to do it is sum = n*(n+1)/2.
However, for practicing, I want to calculate in Theta(log n).
For example,
sum = 0; for (int i = 1; i <= n; i++) { sum += i; }
this code will calculate in Theta(n).
A fair way (without using math formulas) requires directly summing all n values, so there is no way to avoid O(n) behavior.
If you want some artificial approach that gives exactly O(log(n)) time, consider, for example, using powers of two, knowing that Sum(1..2^k) = 2^(k-1) + 2^(2*k-1); for example, Sum(8) = 4 + 32. Pseudocode:
function Sum(n)
    if n < 2
        return n
    p = 1   // 2^(k-1)
    p2 = 2  // 2^(2*k-1)
    while p * 4 < n:
        p = p * 2
        p2 = p2 * 4
    return p + p2 +                // sum of 1..2^k
           2 * p * (n - 2 * p) +   // the (n - 2*p) summands above 2^k each include a 2^k
           Sum(n - 2 * p)          // sum of the rest above 2^k
Here 2*p = 2^k is the largest power of two strictly less than n (so each recursive call gets a strictly smaller argument). Example:
Sum(7) = Sum(4) + 5 + 6 + 7 =
Sum(4) + (4 + 1) + (4 + 2) + (4 + 3) =
Sum(4) + 3 * 4 + Sum(3) =
Sum(4) + 3 * 4 + Sum(2) + 1 * 2 + Sum(1) =
2 + 8 + 12 + 1 + 2 + 2 + 1 = 28
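For completeness, here is a direct Python translation of the pseudocode, checked against the closed-form formula (the function name sum_log is mine):

def sum_log(n):
    """Sum 1..n via the powers-of-two decomposition above."""
    if n < 2:
        return n
    p, p2 = 1, 2  # p = 2^(k-1), p2 = 2^(2*k-1)
    while p * 4 < n:
        p *= 2
        p2 *= 4
    # 2*p = 2^k; split the sum at 2^k exactly as in the worked example
    return (p + p2                   # Sum(1..2^k) = 2^(k-1) + 2^(2*k-1)
            + 2 * p * (n - 2 * p)    # each of the n - 2^k summands above 2^k contains a 2^k
            + sum_log(n - 2 * p))    # plus the remainders above 2^k

assert all(sum_log(n) == n * (n + 1) // 2 for n in range(1, 1000))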

Why is the runtime of this algorithm (n^2)*(n^2 + 1)/2 (i.e. O(n^4))?

So, a question that my instructor gave us had us find the runtime of this algorithm:
{n > 0}
i := 1;
while i ≤ n^2
j := 1;
while j ≤ i
j := j + 1;
endwhile
i := i + 1;
endwhile
In the solution, the runtime here is
n^2 * (n^2 + 1)/2, which is Θ(n^4).
I get that the outer while loop runs n^2 times, but why does the inner loop contribute the factor (n^2 + 1)/2?
Thanks in advance for any help.
The second loop is executed i times.
So if i varies from 1 to n^2 the runtime is
1 + 2 + 3 + ... + n^2
Note now that
1 + 2 + ... + k = k*(k+1)/2
So if you replace k by n^2, you obtain a total runtime of n^2*(n^2+1)/2.
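A throwaway Python snippet (mine, not part of the original answer) that counts the inner-loop executions confirms the formula:

def count_inner_steps(n):
    """Count how many times the innermost statement runs."""
    steps, i = 0, 1
    while i <= n ** 2:
        j = 1
        while j <= i:
            steps += 1  # one execution of the inner loop body
            j += 1
        i += 1
    return steps

n = 10
assert count_inner_steps(n) == n**2 * (n**2 + 1) // 2  # 5050 when n = 10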
I suppose the proper indentation was:
{n > 0}
i := 1;
while i ≤ n*n
    j := 1;
    while j ≤ i
        j := j + 1;
    endwhile
    i := i + 1;
endwhile

Number of ways to divide a number

Given a number N, print in how many ways it can be represented as
N = a + b + c + d
with
1 <= a <= b <= c <= d; 1 <= N <= M
My observation:
For N = 4: Only 1 way - 1 + 1 + 1 + 1
For N = 5: Only 1 way - 1 + 1 + 1 + 2
For N = 6: 2 ways - 1 + 1 + 1 + 3
                    1 + 1 + 2 + 2
For N = 7: 3 ways - 1 + 1 + 1 + 4
                    1 + 1 + 2 + 3
                    1 + 2 + 2 + 2
For N = 8: 5 ways - 1 + 1 + 1 + 5
                    1 + 1 + 2 + 4
                    1 + 1 + 3 + 3
                    1 + 2 + 2 + 3
                    2 + 2 + 2 + 2
So I have reduced it to a DP solution as follows:
DP[4] = 1, DP[5] = 1;
for(int i = 6; i <= M; i++)
    DP[i] = DP[i-1] + DP[i-2];
Is my observation correct or am I missing anything? I don't have any test cases to run, so please let me know if the approach is correct or wrong.
It's not correct. Here is the correct one:
Let DP[n,k] be the number of ways to represent n as a sum of k numbers.
Then you are looking for DP[n,4].
DP[n,1] = 1
DP[n,2] = DP[n-2, 2] + DP[n-1,1] = floor(n / 2)
DP[n,3] = DP[n-3, 3] + DP[n-1,2]
DP[n,4] = DP[n-4, 4] + DP[n-1,3]
I will only explain the last line and you can see right away, why others are true.
Let's take one case of n=a+b+c+d.
If a > 1, then n-4 = (a-1)+(b-1)+(c-1)+(d-1) is a valid sum for DP[n-4,4].
If a = 1, then n-1 = b+c+d is a valid sum for DP[n-1,3].
Also in reverse:
For each valid n-4 = x+y+z+t we have a valid n=(x+1)+(y+1)+(z+1)+(t+1).
For each valid n-1 = x+y+z we have a valid n=1+x+y+z.
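As a quick sketch, these recurrences tabulate directly in Python (my own code, not the answerer's; out-of-range DP entries are treated as 0):

def ways4(m):
    """dp[k][n] holds DP[n,k]: ways to write n as a sum of k non-decreasing positive terms."""
    dp = [[0] * (m + 1) for _ in range(5)]
    for n in range(1, m + 1):
        dp[1][n] = 1                      # DP[n,1] = 1
    for k in range(2, 5):
        for n in range(k, m + 1):         # n < k has no representation
            dp[k][n] = dp[k][n - k] + dp[k - 1][n - 1]
    return dp[4]

assert ways4(9)[4:] == [1, 1, 2, 3, 5, 6]  # matches the observations above; DP[9,4] = 6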
Unfortunately, your recurrence is wrong, because for n = 9, the solution is 6, not 8.
If p(n,k) is the number of ways to partition n into k non-zero integer parts, then we have
p(0,0) = 1
p(n,k) = 0 if k > n or (n > 0 and k = 0)
p(n,k) = p(n-k, k) + p(n-1, k-1)
Because there is either a part of value 1 (in which case taking this part away yields a partition of n-1 into k-1 parts), or all parts are at least 2, in which case you can subtract 1 from each part, yielding a partition of n-k into k parts. It's easy to show that this process is a bijection, hence the recurrence.
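The recurrence translates directly into memoized Python (a sketch; functools.lru_cache does the memoization):

from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    """Number of partitions of n into exactly k non-zero parts."""
    if n == 0 and k == 0:
        return 1
    if k > n or (n > 0 and k == 0):
        return 0
    return p(n - k, k) + p(n - 1, k - 1)

assert [p(n, 4) for n in range(4, 10)] == [1, 1, 2, 3, 5, 6]  # p(9,4) = 6, not 8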
UPDATE:
For the specific case k = 4, OEIS tells us that there is another linear recurrence that depends only on n:
a(n) = 1 + a(n-2) + a(n-3) + a(n-4) - a(n-5) - a(n-6) - a(n-7) + a(n-9)
This recurrence can be solved via standard methods to get an explicit formula. I wrote a small SAGE script to solve it and got the following formula:
a(n) = 1/144*n^3 + 1/32*(-1)^n*n + 1/48*n^2 - 1/54*(1/2*I*sqrt(3) - 1/2)^n*(I*sqrt(3) + 3) - 1/54*(-1/2*I*sqrt(3) - 1/2)^n*(-I*sqrt(3) + 3) + 1/16*I^n + 1/16*(-I)^n + 1/32*(-1)^n - 1/32*n - 13/288
OEIS also gives the following simplification:
a(n) = round((n^3 + 3*n^2 -9*n*(n % 2))/144)
Which I have not verified.
#include <iostream>
using namespace std;

// func_count(n, m): number of ways to write n as a sum of m non-decreasing
// positive integers, using the recurrences above.
int func_count(int n, int m)
{
    if (n == m)
        return 1;
    if (n < m)
        return 0;
    if (m == 1)
        return 1;
    if (m == 2)
        return func_count(n - 2, 2) + func_count(n - 1, 1);
    if (m == 3)
        return func_count(n - 3, 3) + func_count(n - 1, 2);
    return func_count(n - 1, 3) + func_count(n - 4, 4);
}

int main()
{
    int t;
    cin >> t;
    cout << func_count(t, 4);
    return 0;
}
I think that the definition of a function f(N,m,n), where N is the sum we want to produce, m is the maximum value for each term in the sum, and n is the number of terms, should work.
f(N,m,n) is defined for n=1 to be 1 if 1 <= N <= m, and 0 otherwise.
For n > 1, f(N,m,n) = the sum, for t from 1 to min(m, N), of f(N-t, t, n-1).
This represents choosing the terms from right (largest) to left.
You can then solve the problem using this relationship, probably using memoization.
For maximum n=4 and N=5000 (and implementing it cleverly to quickly work out when there are 0 possibilities), I think this is probably computable quickly enough for most purposes.
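Here is a memoized sketch of that f in Python (mine; it assumes the base case and the min(m, N) cap as stated above):

from functools import lru_cache

@lru_cache(maxsize=None)
def f(N, m, n):
    """Ways to write N as n positive terms, the largest at most m, chosen largest-first."""
    if n == 1:
        return 1 if 1 <= N <= m else 0
    # pick the largest (rightmost) term t first; the remaining terms are capped at t
    return sum(f(N - t, t, n - 1) for t in range(1, min(m, N) + 1))

assert f(9, 9, 4) == 6  # agrees with p(9, 4) above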

Finding Big O of a nest for loop

for (int i = 0; i < n; i++)
    for (j = 0; j < i*i; j++)
        x++;
Would the big O be O(n^3)? I'm just confused about how the i's relate to the n.
The required math (summation) is:
0 + 1 + 4 + 9 + ... + (n-1)**2 = n**3/3 - n**2/2 + n/6 = O(n**3)
So, you're right: it's O(n**3); moreover
0 + 1**k + 2**k + ... + n**k = O(n**(k+1))
Methodically, proceeding with Sigma notation gets you where you need to go:
sum_{i=0}^{n-1} sum_{j=0}^{i*i-1} 1 = sum_{i=0}^{n-1} i**2 = (n-1)*n*(2*n-1)/6 = Θ(n**3)
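A quick empirical check in Python (my own snippet, not from the answer) of the final value of x:

def count_x(n):
    """Run the nested loops and return how often x is incremented."""
    x = 0
    for i in range(n):
        for j in range(i * i):
            x += 1
    return x

n = 50
assert count_x(n) == sum(i * i for i in range(n))  # = n**3/3 - n**2/2 + n/6 = 40425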

Is there any easy way to do modulus of 2^32 - 1 operation?

I just heard that x mod (2^32-1) and x / (2^32-1) would be easy, but how?
I need to calculate the formula:
x_n = (x_{n-1} + x_{n-1} / b) mod b.
For b = 2^32 it's easy: x % (2^32) == x & (2^32-1), and x / (2^32) == x >> 32 (the ^ here is exponentiation, not XOR). How to do that when b = 2^32 - 1?
On the page https://en.wikipedia.org/wiki/Multiply-with-carry they say "arithmetic for modulus 2^32 − 1 requires only a simple adjustment from that for 2^32". So what is the "simple adjustment"?
(This answer only handles the mod case.)
I'll assume that the datatype of x is wider than 32 bits (this answer will actually work with any positive integer) and that x is positive (the negative case is just -(-x mod (2^32-1))). If x fits in at most 32 bits, the question can be answered by
x mod (2^32-1) = 0 if x == 2^32-1, x otherwise
x / (2^32 - 1) = 1 if x == 2^32-1, 0 otherwise
We can write x in base 2^32, with digits x_0, x_1, ..., x_n. So
x = x_0 + 2^32 * x_1 + (2^32)^2 * x_2 + ... + (2^32)^n * x_n
This makes the answer clearer when we do the modulus, since 2^32 == 1 mod 2^32-1. That is
x == x_0 + 1 * x_1 + 1^2 * x_2 + ... + 1^n * x_n (mod 2^32-1)
  == x_0 + x_1 + ... + x_n (mod 2^32-1)
So x mod 2^32-1 is the same as the sum of the base-2^32 digits (we can't drop the mod 2^32-1 yet). There are two cases: either the sum is between 0 and 2^32-1, or it is greater. In the former case we are done; in the latter we can just recurse until we get a value between 0 and 2^32-1. Getting the digits in base 2^32 is fast, since we can use bitwise operations. In Python (this doesn't handle negative numbers):
def mod_2to32sub1(x):
    s = 0  # the sum
    while x > 0:  # get the digits
        s += x & (2**32 - 1)
        x >>= 32
    if s > 2**32 - 1:
        return mod_2to32sub1(s)
    elif s == 2**32 - 1:
        return 0
    else:
        return s
(This is extremely easy to generalise to x mod 2^n-1: just replace every occurrence of 32 with n in this answer.)
(EDIT: added the elif clause to avoid an infinite loop on mod_2to32sub1(2**32-1). EDIT2: replaced ^ with **... oops.)
So you compute with the "rule" 2^32 = 1. In general, 2^(32+x) = 2^x, so you can simplify 2^a by taking the exponent modulo 32. Example: 2^66 = 2^2.
You can express any number in binary and then lower the exponents. Example: the number 2^40 + 2^38 + 2^20 + 2 + 1 can be simplified to 2^8 + 2^6 + 2^20 + 2 + 1.
In general, you can group the exponents every 32 powers of 2 and "downgrade" all exponents modulo 32.
For 64-bit words, the number can be expressed as
2^32 * A + B
where 0 <= A, B <= 2^32 - 1. Getting A and B is easy with bitwise operations.
So you can simplify that to A + B, which is much smaller: at most 2^33. Then check whether this number is at least 2^32 - 1, and subtract 2^32 - 1 in that case.
This avoids expensive direct division.
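As a minimal sketch of that fold (the helper name mod_2e32m1 is mine; written in Python to keep it easily checkable, though the same works with 64-bit integers in C):

def mod_2e32m1(x):
    """Reduce a 64-bit x modulo 2^32 - 1 by adding its high and low 32-bit halves."""
    M = 2**32 - 1
    s = (x & M) + (x >> 32)  # x = 2^32 * A + B == A + B (mod 2^32 - 1)
    while s >= M:            # s <= 2^33 - 2, so at most two subtractions are needed
        s -= M
    return s

import random
for _ in range(1000):
    x = random.getrandbits(64)
    assert mod_2e32m1(x) == x % (2**32 - 1)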
The modulus has already been explained, nevertheless, let's recapitulate.
To find the remainder of k modulo 2^n-1, write
k = a + 2^n*b, 0 <= a < 2^n
Then
k = a + ((2^n-1) + 1) * b
= (a + b) + (2^n-1)*b
≡ (a + b) (mod 2^n-1)
If a + b >= 2^n, repeat until the remainder is less than 2^n, and if that leads you to a + b = 2^n-1, replace that with 0.
Each "shift right by n and add to the last n bits" moves the first set bit right by n or n-1 places (unless k < 2^(2*n-1), when the first set bit after the shift-and-add may be the 2^n bit). So if the width of the type is large compared to n, this will need many shifts: consider a 128-bit type and n = 3, where for large k you will need over 40 shifts. To reduce the number of shifts required, you can exploit the fact that
2^(m*n) - 1 = (2^n - 1) * (2^((m-1)*n) + 2^((m-2)*n) + ... + 2^(2*n) + 2^n + 1),
of which we will only use that 2^n - 1 divides 2^(m*n) - 1 for all m > 0. Then you shift by multiples of n that are roughly half the maximal bit-length the value can have at that step. For the above example of a 128-bit type and the remainder modulo 7 (2^3 - 1), the closest multiples of 3 to 128/2 are 63 and 66, first shift by 63 bits
r_1 = (k & (2^63 - 1)) + (k >> 63) // r_1 < 2^63 + 2^(128-63) < 2^66
to get a number with at most 66 bits, then shift by 66/2 = 33 bits
r_2 = (r_1 & (2^33 - 1)) + (r_1 >> 33) // r_2 < 2^33 + 2^(66-33) = 2^34
to reach at most 34 bits. Next shift by 18 bits, then 9, 6, 3
r_3 = (r_2 & (2^18 - 1)) + (r_2 >> 18) // r_3 < 2^18 + 2^(34-18) < 2^19
r_4 = (r_3 & (2^9 - 1)) + (r_3 >> 9) // r_4 < 2^9 + 2^(19-9) < 2^11
r_5 = (r_4 & (2^6 - 1)) + (r_4 >> 6) // r_5 < 2^6 + 2^(11-6) < 2^7
r_6 = (r_5 & (2^3 - 1)) + (r_5 >> 3) // r_6 < 2^3 + 2^(7-3) < 2^5
r_7 = (r_6 & (2^3 - 1)) + (r_6 >> 3) // r_7 < 2^3 + 2^(5-3) < 2^4
Now a single subtraction if r_7 >= 2^3 - 1 suffices. To calculate k % (2^n -1) in a b-bit type, O(log2 (b/n)) shifts are needed.
The quotient is obtained similarly, again we write
k = a + 2^n*b, 0 <= a < 2^n
= a + ((2^n-1) + 1)*b
= (2^n-1)*b + (a+b),
so k/(2^n-1) = b + (a+b)/(2^n-1), and we continue while a+b > 2^n-1. Here we unfortunately cannot reduce the work by shifting and masking about half the width, so the method is only efficient when n is not much smaller than the width of the type.
Code for the fast cases where n is not too small:
unsigned long long modulus_2n1(unsigned n, unsigned long long k) {
    unsigned long long mask = (1ULL << n) - 1ULL;
    while (k > mask) {
        k = (k & mask) + (k >> n);
    }
    return k == mask ? 0 : k;
}

unsigned long long quotient_2n1(unsigned n, unsigned long long k) {
    unsigned long long mask = (1ULL << n) - 1ULL, quotient = 0;
    while (k > mask) {
        quotient += k >> n;
        k = (k & mask) + (k >> n);
    }
    return k == mask ? quotient + 1 : quotient;
}
For the special case where n is half the width of the type, the loop runs at most twice, so if branches are expensive, it may be better to unroll the loop and unconditionally execute the loop body twice.
It is not. What you must have heard is that x mod 2^n and x / 2^n are easy: x / 2^n can be performed as x >> n, and x mod 2^n as x & ((1 << n) - 1).
