Good evening. I was wondering if someone could please provide me with a simple pseudocode example of a deterministic algorithm. I would greatly appreciate it and will surely award points! Thanks.
To me, "deterministic" could mean many things:
Given the same input, produces the same output every time.
Given the same input, takes the same amount of time/memory/resources every time it is run.
Problems of complexity class P that can be solved in polynomial time by a deterministic computer, as opposed to problems of complexity class NP, which (as far as we know) can only be solved in polynomial time by a non-deterministic computer.
Which of these do you mean?
The simplest deterministic algorithm is this random number generator.
def random():
    return 4  # chosen by fair dice roll, guaranteed to be random
It gives the same output every time, exhibits known O(1) time and resource usage, and executes in PTIME on any computer.
Do you really mean deterministic and not non-deterministic? Pretty much anything you see in any tutorial, guide, or beginner's book is deterministic, e.g.
for i from 1 to 9
    print i
will always print 123456789
A deterministic algorithm is simply an algorithm that has a predefined output. For instance, if you are sorting elements that are strictly ordered (no equal elements), the output is well defined, and so the algorithm is deterministic.
In fact, most computer algorithms are deterministic. Non-determinism usually appears when you have some parallelization, or elements that compare equal under a criterion that is not a total order.
Here's pseudocode for a deterministic algorithm that checks whether a given number is odd:
function is_odd(n):
    if n mod 2 = 1
        then return true
        else return false
A deterministic algorithm is an algorithm which, in informal terms, behaves predictably: given a particular input, it will always produce the same output. For example, this C# hash function is deterministic:
public struct Point {
    public int x;
    public int y;
    // other methods
    public override int GetHashCode() {
        return x ^ y; // depends only on the fields, so equal points always hash alike
    }
}

Point p = new Point();
p.x = 6;
p.y = 3;
int res = p.GetHashCode(); // always 5 for (6, 3), on every run
How can I find the largest prime number less than n, where n ≤ 10¹⁸?
Please help me find an efficient algorithm.
for (unsigned long long j = n; j >= 2; j--) {
    if (j % 2 == 1) {
        /* integer test for j = 6k+1 or 6k+5; the original double-based
           check loses precision once j exceeds 2^53, long before 10^18 */
        if (j % 6 == 1 || j % 6 == 5) {
            /* note: this only finds the largest j not divisible by 2 or 3;
               it does not prove that j is prime */
            printf("%llu\n", j);
            break;
        }
    }
}
I used this approach, but it is not efficient for n > 10^10.
How can I optimize this?
Edit:
Solution: run a primality test on each candidate j, e.g., Miller-Rabin or Fermat's test.
I don't think this question should be so quickly dismissed, as efficiency is not so easy to determine for numbers in this range. First of all, given that the average prime gap near p is ln(p), it makes sense to work down from the given n: ln(10^18) ≈ 41.4, so you would expect around 41 candidates on average working down from n, e.g., testing n, n - 2, n - 4, ...
Given this average gap, the next step is to decide whether you wish to use a naive test - check for divisibility by primes <= floor(sqrt(n)). With n <= (10^18), you would need to test against primes <= (10^9). There are ~ 50 million primes in this range. If you are willing to precompute and tabulate these values (all of which fit in 32 bits), you have a reasonable test for 64-bit values n <= 10^18. In this case, is a 200MB prime table an acceptable approach? 20 years ago, it might not have been. Today, it's not an issue.
Combining a prime table with sieving and/or Pocklington's test might improve efficiency. Alternatively, if memory is more constrained, you can use a deterministic variant of the Miller-Rabin test with the bases 2, 325, 9375, 28178, 450775, 9780504, 1795265022 (an SPRP base set sufficient for all 64-bit n). Most composites are rejected immediately by the SPRP-2 test alone.
The point is: you have a decision to make, balancing algorithmic complexity, both theoretical and in terms of implementation difficulty, against space/time trade-offs.
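For concreteness, here is a minimal Python sketch of that deterministic Miller-Rabin variant, using the base set quoted above; the trial divisions up front are just a fast path, and the function names are mine:

def is_prime(n):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 325, 9375, 28178, 450775, 9780504, 1795265022):
        if a % n == 0:
            continue  # base is a multiple of n itself; skip it
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def largest_prime_below(n):
    # the question asks for the largest prime strictly less than n
    if n <= 2:
        return None
    if n == 3:
        return 2
    j = n - 1 if n % 2 == 0 else n - 2  # largest odd number below n
    while j >= 3:
        if is_prime(j):
            return j
        j -= 2  # skip evens; the average gap near 10^18 is only ~41
    return 2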
I have come across a problem about determining whether a triangle can be built; it says:
Given a sorted integer array (length n), determine whether you could
build a triangle by choosing three integers from the array; the
answer is "yes" or "no".
A naive solution is to scan all the possibilities, but that turns out to be O(n^3), since there are C(n, 3) triples to check.
Assuming that the integers represent side lengths and array[0] > 0:
bool IsTriangle(int[] array, int start) {
    if (array.length - start <= 2) return false;
    return (array[start + 2] < array[start + 1] + array[start])
        || IsTriangle(array, start + 1);
}
This works because the list of integers is sorted: replacing either of the two smaller sides with an earlier (smaller) element only shrinks the right-hand side, and replacing the largest side with a later (larger) element only grows the left-hand side, so if any triple satisfies the triangle inequality then some three consecutive elements do. This is of course O(n) and can easily be converted to an iterative solution (less elegant but more performant), as sketched below.
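A minimal iterative sketch of the same check, in Python, assuming the array is sorted ascending and holds positive side lengths (the function name is illustrative):

def is_triangle(arr):
    # in a sorted array, if any triple forms a triangle then some
    # three consecutive elements do, so checking windows of 3 suffices
    for i in range(len(arr) - 2):
        if arr[i] + arr[i + 1] > arr[i + 2]:
            return True
    return False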
I want to generate a very large pseudorandom permutation p : [0,n-1] -> [0,n-1], and then compute m specific values p[i], where m << n. Is it possible to do this in O(m) time? The motivation is a large parallel computation where each processor only needs to see a small piece of the permutation, but the permutation must be consistent between processors.
Note that in order to help in the parallel case, different processes computing disjoint sets of i values shouldn't accidentally produce p[i] == p[j] for i != j.
EDIT: There is a much more clever algorithm based on block ciphers that I think Geoff will write up.
There are two common algorithms for generating permutations. Knuth's shuffle is inherently sequential, so it is not a nice choice for parallelism. The other is random selection, with a retry any time a repetition is encountered. Random selection is clearly equivalent when applied in any order, so I propose the following simple algorithm:
Randomly sample a candidate p[i] in [0,n-1] for each i in Needed (in parallel).
Remove all non-collided entries from Needed, as well as (optionally) one deterministic winner from each collision (e.g., keep p[i] if i < j for every other j with p[j] = p[i]).
Repeat from step 1 with the new (smaller) set Needed.
Since we haven't lost entropy in this process, the result is essentially equivalent to sequential random sampling in some different order, starting with the locations i that did not collide (we just didn't know that order in advance). Note that if we used the computed value in a comparison, for example, we would have introduced bias.
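A rough sequential Python sketch of this sample-and-retry loop (the parallel version would run step 1 across processors on disjoint pieces of Needed; the function name, shared seed, and dict bookkeeping here are all illustrative):

import random

def sample_permutation_values(needed, n, seed=12345):
    # assigns a distinct pseudorandom value p[i] in [0, n-1] to each i in needed
    rng = random.Random(seed)   # stands in for a PRNG stream shared by all processes
    p = {}
    taken = set()               # values claimed in earlier rounds
    pending = set(needed)
    while pending:
        # step 1: propose a candidate value for every unresolved index
        proposal = {i: rng.randrange(n) for i in sorted(pending)}
        # step 2: group proposals by value to detect collisions
        by_value = {}
        for i, v in proposal.items():
            by_value.setdefault(v, []).append(i)
        for v, idxs in by_value.items():
            if v in taken:
                continue        # clashed with an already-assigned value; all retry
            winner = min(idxs)  # deterministic tie-break: smallest index wins
            p[winner] = v
            taken.add(v)
            pending.discard(winner)
        # step 3: repeat with the smaller pending set
    return p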
An example of a very low-strength version:
Generate 2k = O(1) random integers a_i, b_i in [0,n-1], with each a_i relatively prime to n.
Pick a weak permutation wp : [0,n-1] -> [0,n-1], say wp(i) = i with all but the high bit flipped.
p[i] = (b_0 + a_0 * wp(b_1 + a_1 * wp(... i ...))) mod n
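A toy Python instance of this construction with k = 2 rounds, assuming for the sketch that n is a power of two so the bit-flip map really is a permutation of [0, n-1] (all names are illustrative):

import math
import random

def make_rounds(n, k=2, seed=42):
    # draw the a_i, b_i; gcd(a_i, n) = 1 makes x -> (a_i*x + b_i) mod n a bijection
    rng = random.Random(seed)
    rounds = []
    for _ in range(k):
        a = rng.randrange(1, n)
        while math.gcd(a, n) != 1:
            a = rng.randrange(1, n)
        rounds.append((a, rng.randrange(n)))
    return rounds

def wp(x, n):
    # weak permutation: flip every bit below the high bit (needs n a power of two)
    return x ^ ((n >> 1) - 1)

def p(i, n, rounds):
    # evaluates p[i] = (b_0 + a_0 * wp(b_1 + a_1 * wp(i))) mod n in O(k) = O(1)
    x = i
    for a, b in reversed(rounds):
        x = (b + a * wp(x, n)) % n
    return x

# usage: n = 1 << 20; rounds = make_rounds(n); each processor evaluates
# p(i, n, rounds) independently, so m values cost O(m) total.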
I have a list of projects. Each project takes exactly two days to complete and has a due date. Let P[i].id, P[i].duedate, and P[i].value be the id of the project, its due date, and the value you get if you complete the project on time (on or before the due date).
Write an algorithm that takes the array A as input and returns a schedule of which projects you will do and when, so as to maximize the total value you get.
The output of the algorithm is an array B such that B[i] is the id of the project you will work on during day i, for i >= 1.
You can work on at most one project on any given day, and you don't get a project's value unless you complete it by its due date. Today is day 0 and you start working on day 1. Due dates are integers, and the two work days need not be consecutive; e.g., if a project's due date is 5, you can choose to work on it on days 3 and 5.
1- Write the algorithm.
2- Prove that the algorithm is optimal.
3- What is the time complexity of the algorithm?
If all the values are the same, it's simple: a greedy approach that always picks the task with the earliest feasible due date works well.
When the values differ, you can use a similar approach, but this time with dynamic programming (I'll assume your due dates are discrete).
Create an array V of size max{due date}; V[i] holds the maximum value that can be earned by the end of day i. Keep a second array alongside V to record which tasks were selected to achieve each V[i]. Now you have this DP choice:
V[0] = 0, V[1] = 0 (no two-day project can finish by the end of day 1), V[i] = max{V[i-2] + value_xi, V[i-1]}
Here value_xi means the maximum-value task whose due date is at least i (so it is still on time when finished on day i); this task must also not already be in the V[i-2] selection. After choosing, update the selection recorded for V[i].
Finally, I'll leave it to you to finish your homework by working out the algorithm's running time and proving its correctness; you can also improve the memory usage. A rough sketch of the recurrence follows.
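A rough Python transcription of the recurrence above (the names are mine; it follows the DP directly and leaves the optimality proof and memory improvements as the homework they are):

def max_value(projects, horizon):
    # projects: list of (pid, duedate, value); horizon: the largest due date
    V = [0] * (horizon + 1)                  # V[i]: best value using days 1..i
    chosen = [frozenset()] * (horizon + 1)   # project ids behind each V[i]
    for i in range(2, horizon + 1):
        V[i], chosen[i] = V[i - 1], chosen[i - 1]
        # value_xi: best project still on time if finished on day i,
        # and not already used in the V[i-2] selection
        candidates = [(value, pid) for (pid, due, value) in projects
                      if due >= i and pid not in chosen[i - 2]]
        if candidates:
            value, pid = max(candidates)
            if V[i - 2] + value > V[i]:
                V[i] = V[i - 2] + value
                chosen[i] = chosen[i - 2] | {pid}
    return V[horizon]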
Say you want to find which input causes a function to output the value y, and you know the (finite) range of possible inputs.
The input and output are both numbers, and positively correlated.
What would be the best way to optimize that?
I'm currently just looping through all of the possible inputs.
Thanks.
One solution would be a binary search over the possible inputs.
Flow:
find the median input x
get the output from function(x)
if the output equals the desired y, you're done
if the output is less than the desired y
    start over using the larger half of the possible inputs
else
    start over using the smaller half of the possible inputs
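A minimal Python sketch of that flow, assuming (per the question's "positively correlated") that the function is nondecreasing over the integer inputs lo..hi; the names are illustrative:

def find_input(f, y, lo, hi):
    while lo <= hi:
        mid = (lo + hi) // 2
        out = f(mid)
        if out == y:
            return mid
        elif out < y:
            lo = mid + 1   # output too small, so the answer lies in the larger half
        else:
            hi = mid - 1   # output too large, so the answer lies in the smaller half
    return None            # no input in the range produces y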
A binary search algorithm, perhaps?
http://en.wikipedia.org/wiki/Binary_search_algorithm
If the range is finite and small, a precomputed lookup table might be the fastest way.
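A tiny sketch of that idea, inverting a hypothetical f over its whole range once so each later query is O(1):

def build_inverse(f, lo, hi):
    # O(hi - lo) precomputation; every lookup afterwards is a dict hit
    return {f(x): x for x in range(lo, hi + 1)}

# usage: inverse = build_inverse(f, 0, 1000); x = inverse.get(y)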
If you have sets of known x data that yield y, you can divide them into training and test sets and use a neural network.