I've been struggling with a dynamic programming problem for a couple of days now. It goes like this:
John's working day is divided into N time slots, and each slot i has an associated gain G[i], which he receives if he works in that slot. If he decides to work in the time interval [i, j], his total reward is R[i,j] = G[i+1] + ... + G[j], since the first slot of an interval is spent warming up. Every day he has to work exactly T slots; he can choose a subset of T slots from the N available ones. He wants to maximize his profit by choosing a set of disjoint intervals [a1,b1], [a2,b2], ..., [ak,bk] with 1 <= a1 <= b1 < a2 <= b2 < ... < ak <= bk and Sum[i=1..k](bi - ai + 1) = T.
Example: N=7, T=5 and the gain vector {3,9,1,1,7,5,4}. The optimal solution is selecting the intervals [1,2] and [4,6] with a total profit of 9+12=21.
DP solution:
int f[i][j][0..1];
Let f[i][j][0] denote the maximal gain for the first i time slots, using j of them, where the i-th slot is not used.
Let f[i][j][1] denote the maximal gain for the first i time slots, using j of them, where the i-th slot is used.
Obviously, each f[i][j][k] can be used to update f[i+1][j][0] (slot i+1 left idle) or f[i+1][j+1][1] (slot i+1 worked). Details below:
f[i+1][j+1][1]=max(f[i+1][j+1][1],f[i][j][0],f[i][j][1]+G[i+1]);
f[i+1][j][0]=max(f[i+1][j][0],f[i][j][0],f[i][j][1]);
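Putting those recurrences together, here is a minimal C++ sketch (1-indexed gains, a large negative value marking unreachable states); for the example above it prints 21.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Example from the question: N = 7, T = 5, gains {3, 9, 1, 1, 7, 5, 4}.
    int N = 7, T = 5;
    vector<int> G = {0, 3, 9, 1, 1, 7, 5, 4};          // 1-indexed for convenience

    const int NEG = INT_MIN / 2;                       // marks unreachable states
    // f[i][j][0]: best gain using j of the first i slots, slot i not worked
    // f[i][j][1]: best gain using j of the first i slots, slot i worked
    vector<vector<array<int, 2>>> f(N + 1, vector<array<int, 2>>(T + 1, {NEG, NEG}));
    f[0][0][0] = 0;

    for (int i = 0; i < N; ++i)
        for (int j = 0; j <= T; ++j)
            for (int k = 0; k <= 1; ++k) {
                if (f[i][j][k] == NEG) continue;
                // slot i+1 left idle
                f[i + 1][j][0] = max(f[i + 1][j][0], f[i][j][k]);
                if (j < T) {
                    // slot i+1 worked: it pays G[i+1] only if slot i was worked too,
                    // otherwise it is the warm-up slot of a new interval
                    int gain = (k == 1) ? G[i + 1] : 0;
                    f[i + 1][j + 1][1] = max(f[i + 1][j + 1][1], f[i][j][k] + gain);
                }
            }

    cout << max(f[N][T][0], f[N][T][1]) << "\n";        // prints 21 for the example
    return 0;
}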
I'm trying to find an algorithm that uses a linear amount of memory for the following problem:
Given two strings x and y over an arbitrary alphabet, determine their longest common subsequence.
Note that when you're calculating the next row of the table in the dynamic programming solution to the LCS problem, you only need the previous row and the current row. So you can modify the dynamic programming solution to keep track of only those two rows instead of the whole m x n table. Every time you reach the end of the current row, it becomes the previous row and you start again at the beginning of a new row. You do this m times, where m is the number of rows in your table. This uses space linear in the number of columns.
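For reference, a minimal C++ sketch of that two-row scheme. Note it only computes the length of the LCS; recovering the subsequence itself in linear space additionally needs Hirschberg's divide-and-conquer trick.

#include <bits/stdc++.h>
using namespace std;

// Length of the longest common subsequence of x and y, keeping only two rows,
// so the extra space is O(|y|) instead of O(|x| * |y|).
int lcsLength(const string& x, const string& y) {
    vector<int> prev(y.size() + 1, 0), cur(y.size() + 1, 0);
    for (size_t i = 1; i <= x.size(); ++i) {
        for (size_t j = 1; j <= y.size(); ++j) {
            if (x[i - 1] == y[j - 1])
                cur[j] = prev[j - 1] + 1;
            else
                cur[j] = max(prev[j], cur[j - 1]);
        }
        swap(prev, cur);                     // the finished row becomes "previous"
    }
    return prev[y.size()];
}

int main() {
    cout << lcsLength("ABCBDAB", "BDCABA") << "\n";   // prints 4 (e.g. "BCBA")
}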
How can I find the largest prime number that is less than n, where n <= 10^18?
Please help me find an efficient algorithm.
unsigned long long j;                       /* n is the given bound, n <= 10^18 */
for (j = n; j >= 2; j--) {
    if (j % 2 == 1) {                       /* only look at odd numbers */
        double m = double(j);
        double a = (m - 1) / 6.0;
        double b = (m - 5) / 6.0;
        /* accept j if it has the form 6k+1 or 6k+5; this only filters
           candidates, it does not actually test primality */
        if (a - (int)a == 0 || b - (int)b == 0) {
            printf("%llu\n", j);
            break;
        }
    }
}
I used this approach, but it is not efficient for n > 10^10. How can I optimize it?
Edit:
Solution: use a primality test on each candidate j, e.g. Miller-Rabin or Fermat's test.
I don't think this question should be so quickly dismissed, as efficiency is not so easy to determine for numbers in this range. First of all, given that the average prime gap near p is ln(p), it makes sense to work down from the given n: ln(10^18) is approximately 41.4, so you would expect around 41 candidates on average working down from n, i.e., testing n, n - 2, n - 4, ...
Given this average gap, the next step is to decide whether you wish to use a naive test - check for divisibility by primes <= floor(sqrt(n)). With n <= (10^18), you would need to test against primes <= (10^9). There are ~ 50 million primes in this range. If you are willing to precompute and tabulate these values (all of which fit in 32 bits), you have a reasonable test for 64-bit values n <= 10^18. In this case, is a 200MB prime table an acceptable approach? 20 years ago, it might not have been. Today, it's not an issue.
Combining a prime table with sieving and/or Pocklington's test might improve efficiency. Alternatively, if memory is more constrained, use a deterministic variant of the Miller-Rabin test with the bases 2, 325, 9375, 28178, 450775, 9780504, 1795265022 (an SPRP base set that is deterministic for all 64-bit n). Most composites fail immediately with an SPRP-2 test.
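For what it's worth, here is a sketch of that deterministic Miller-Rabin variant, assuming a compiler with unsigned __int128 (gcc/clang) so the modular multiplications cannot overflow for n up to 10^18:

#include <cstdint>
#include <cstdio>

using u64 = std::uint64_t;
using u128 = unsigned __int128;

// x^e mod m, with 128-bit intermediates so it is safe for m up to ~1e18
static u64 pow_mod(u64 x, u64 e, u64 m) {
    u64 r = 1;
    x %= m;
    while (e) {
        if (e & 1) r = (u128)r * x % m;
        x = (u128)x * x % m;
        e >>= 1;
    }
    return r;
}

// Deterministic Miller-Rabin for 64-bit n using the SPRP base set
// {2, 325, 9375, 28178, 450775, 9780504, 1795265022}
static bool is_prime(u64 n) {
    if (n < 2) return false;
    for (u64 p : {2ULL, 3ULL, 5ULL, 7ULL, 11ULL, 13ULL})
        if (n % p == 0) return n == p;
    u64 d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; ++s; }
    for (u64 a : {2ULL, 325ULL, 9375ULL, 28178ULL, 450775ULL, 9780504ULL, 1795265022ULL}) {
        u64 x = pow_mod(a % n, d, n);
        if (x == 0 || x == 1 || x == n - 1) continue;
        bool composite = true;
        for (int i = 1; i < s; ++i) {
            x = (u128)x * x % n;
            if (x == n - 1) { composite = false; break; }
        }
        if (composite) return false;
    }
    return true;
}

int main() {
    u64 n = 1000000000000000000ULL;        // n = 10^18
    for (u64 j = n; j >= 2; --j)           // walk down; the gap near 1e18 averages ~41
        if (is_prime(j)) { std::printf("%llu\n", (unsigned long long)j); break; }
}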
The point is that you have to balance algorithmic complexity, both theoretical and in terms of implementation difficulty, against space/time trade-offs.
I want to generate a very large pseudorandom permutation p : [0,n-1] -> [0,n-1], and then compute m specific values p[i], where m << n. Is it possible to do this in O(m) time? The motivation is a large parallel computation where each processor only needs to see a small piece of the permutation, but the permutation must be consistent between processors.
Note that in order to help in the parallel case, different processes computing disjoint sets of i values shouldn't accidentally produce p[i] == p[j] for i != j.
EDIT: There is a much more clever algorithm based on block ciphers that I think Geoff will write up.
There are two common algorithms for generating permutations. Knuth's shuffle is inherently sequential so not a nice choice for parallelism. The other is random selection with retry any time repetition is encountered. Random selection is clearly equivalent when applied in any order, thus I propose the following simple algorithm:
1. Randomly sample a candidate p[i] in [0, n-1] for each i in Needed (in parallel).
2. Remove all non-collided entries from Needed, as well as (optionally) one deterministic choice from each collision (e.g., keep p[i] if i = min{j | p[j] = p[i]}).
3. Repeat from step 1 with the new (smaller) set Needed.
Since we haven't lost entropy in this process, the result is essentially equivalent to sequential random sampling in some different order, starting with the locations i that did not collide (we just didn't know that order in advance). Note that if we used the computed value in a comparison, for example, we would have introduced bias.
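Here is a sequential sketch of this retry scheme (the function name sampleDistinct and the demo parameters are mine). In the parallel setting each round's sampling would be distributed across processors; the optional "keep one entry per collision" refinement is omitted, and m << n is assumed so the loop terminates quickly.

#include <bits/stdc++.h>
using namespace std;

// Assign a distinct random value in [0, n-1] to every index in `needed`
// by repeated sampling with retry on collision.
map<long long, long long> sampleDistinct(const vector<long long>& needed,
                                         long long n, mt19937_64& rng) {
    map<long long, long long> p;            // accepted values p[i]
    unordered_set<long long> taken;          // values fixed in earlier rounds
    vector<long long> pending(needed);
    uniform_int_distribution<long long> dist(0, n - 1);

    while (!pending.empty()) {
        // Step 1: sample a candidate for every index still needed.
        map<long long, long long> cand;
        map<long long, int> freq;
        for (long long i : pending) {
            cand[i] = dist(rng);
            ++freq[cand[i]];
        }
        // Step 2: keep the candidates that collide with nothing; the rest retry.
        vector<long long> retry;
        for (long long i : pending) {
            long long v = cand[i];
            if (freq[v] == 1 && !taken.count(v)) {
                p[i] = v;
                taken.insert(v);
            } else {
                retry.push_back(i);
            }
        }
        pending = move(retry);               // Step 3: repeat with the smaller set
    }
    return p;
}

int main() {
    mt19937_64 rng(12345);
    vector<long long> needed = {3, 17, 42, 1000000};
    for (auto [i, v] : sampleDistinct(needed, 1LL << 40, rng))
        cout << "p[" << i << "] = " << v << "\n";
}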
An example very low strength version:
Generate 2k = O(1) random integers a_i, b_i in [0, n-1], with each a_i relatively prime to n.
Pick a weak permutation wp : [0,n-1] -> [0,n-1], say wp(i) = i with all but the high bit flipped.
p[i] = b_0 + a_0 * wp(b_1 + a_1 * wp(... i ...))
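A toy sketch of this construction, assuming for simplicity that n is a power of two (so "relatively prime to n" just means the a_i are odd, and the bit-flip step stays inside [0, n-1]); the struct name and the demo constants are my own placeholders.

#include <cstdint>
#include <cstdio>
#include <vector>

using u64 = std::uint64_t;
using u128 = unsigned __int128;

struct WeakPermutation {
    u64 n;                                  // domain size (a power of two here)
    std::vector<u64> a, b;                  // k affine rounds, every a[r] odd

    // wp(x): flip every bit of the index except the high one
    u64 wp(u64 x) const { return x ^ (n / 2 - 1); }

    // p[i] = b[0] + a[0] * wp(b[1] + a[1] * wp( ... wp(i) ... ))   (mod n)
    u64 operator()(u64 i) const {
        u64 x = i;
        for (int r = (int)a.size() - 1; r >= 0; --r)
            x = (u64)((b[r] + (u128)a[r] * wp(x)) % n);
        return x;
    }
};

int main() {
    // Tiny demo: n = 2^20, two rounds with fixed odd multipliers standing in
    // for the random a_i, b_i.
    WeakPermutation p{1u << 20, {0x9E371u, 0x85EB5u}, {123456u, 654321u}};
    // Evaluate a handful of positions independently, as a single processor would.
    for (u64 i : {0ull, 1ull, 2ull, 1000000ull})
        std::printf("p[%llu] = %llu\n", (unsigned long long)i, (unsigned long long)p(i));
}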
We have N books ( N<=200 ). All of them must be translated by K people (K<=100).
Every man can translate D books starting from index S to index S+D-1, 0<=D<=N.
Every man is paid c_1 dollars per page for the first book he translates, c_2 for the second, ..., c_i for his i-th book.
0<=c_i<10000
The books must be translated in the order they are given.
input:
first row: 2 numbers N and K
second row: N numbers - the number of pages of each book (<= 10 000)
third row: N numbers - c_1, c_2, ... c_N; c_i is the price for translating a book by a man who has translated i-1 books;
output:
minimum price which must be paid for the translation of all the books.
Example:
Input:
6 3
50 100 60 5 6 30
1 2 3 4 5 6
Output: 339
(The first man translated the first book: +50*1.
The second man translated the second, third, fourth, and fifth books: +100*1 + 60*2 + 5*3 + 6*4.
The third man translated the last book: +30*1.
Total: 339.)
Can someone help me with this homework? I know I must use dynamic programming to solve it.
Some clues: make a function F(BookNumber, ManNumber, NumOfBooksHeHaveTranslated).
Start with F(1, 1, 0). It is clear that
F(1, 1, 0) = Pages[1] * C[1] + Min(F(2, 1, 1), F(2, 2, 0))
i.e., we have to choose the better variant: continue with the same translator, or switch to the next one. Work this function out for the general case F(B, M, N), write a recursive solution, check it on small inputs, then transform the recursion into DP (the methods are described in algorithm books).
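For illustration only, and since this is homework, treat the following as a sketch of the recursion in the clue rather than a finished answer: a memoized version of F in C++ (1-based indices as in the clue), which prints 339 on the example above.

#include <bits/stdc++.h>
using namespace std;

// F(B, M, t): cheapest way to translate books B..N given that book B goes to
// man M, who has already translated t books.
int N, K;
vector<int> pages, c;                           // pages[1..N], c[1..N]
vector<vector<vector<long long>>> memo;         // -1 = not computed yet

long long F(int B, int M, int t) {
    long long& res = memo[B][M][t];
    if (res != -1) return res;
    long long here = (long long)pages[B] * c[t + 1];    // man M's (t+1)-th book
    if (B == N) return res = here;                      // last book: nothing follows
    long long best = F(B + 1, M, t + 1);                // same man continues
    if (M < K) best = min(best, F(B + 1, M + 1, 0));    // hand over to the next man
    return res = here + best;
}

int main() {
    // Example from the question: 6 books, 3 men, answer 339.
    N = 6; K = 3;
    pages = {0, 50, 100, 60, 5, 6, 30};
    c     = {0, 1, 2, 3, 4, 5, 6};
    memo.assign(N + 1, vector<vector<long long>>(K + 1, vector<long long>(N + 1, -1)));
    cout << F(1, 1, 0) << "\n";                          // prints 339
}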
The actual choice your algorithm needs to make is at which points in the sequence of books you change from one translator to the next. That's the only decision point in the problem, and everything else follows from it. You can turn this into a recursive / dynamic programming formulation by observing, e.g., that the minimum cost of translating N books with K translators is the minimum over x of the cost of translating the first x books with one translator plus the cost of translating the remaining N - x books with the other K - 1 translators; or, more generally, the first x books with n translators and the remaining N - x books with K - n translators. This subdivision / recursion step is what you can use in a dynamic programming solution.
I hope no-one will post actual code to do this; it is homework.
I have a list of projects. Each project takes exactly two days to complete and has a due date. Let P[i].id, P[i].duedate, and P[i].value be the id of project i, its due date, and the value you get if you complete it on time (on or before the due date).
Write an algorithm that takes as input the array A and returns a schedule of which projects you will do and when, so as to maximize the value you get.
The output of the algorithm is an array B such that B[i] is the id of the project you will work on during day i, for i >= 1.
You can work on at most one project on any given day, and you don't get a project's value unless you complete it by its due date. Today is day 0 and you start working on day 1 (due dates are integers); e.g., if a project's due date is 5, you can choose to work on it on days 3 and 5.
1- write the algorithm.
2- prove that the algorithm is optimal?
3- what is the time complexity for the algorithm?
If all the values are the same it's simple: a greedy approach that always selects the smallest possible due date works well.
When the values are different, you can use a similar approach, but this time with dynamic programming (I'll assume your due dates are discrete).
Create an array V of size max{due date}; V[i] holds the maximum possible value that can be earned by time i. Also keep, for each V[i], a record of which tasks were selected. Now you have this DP choice:
V[0] = 0, V[1] = 0, and for i >= 2: V[i] = max{V[i-2] + value_xi, V[i-1]}
Here value_xi is the value of the maximum-value task whose due date allows it to be finished on day i (i.e., due date at least i) and which is not already in the V[i-2] selection; after making the choice, update the selection stored for V[i].
Finally, I'll leave it to you to finish your homework by working out the running time of this algorithm and its correctness; you can also improve the memory usage.