Time Complexity with increasing queue size - algorithm

I've searched Google and StackOverflow for the past half hour or so, and, while I've found a lot of interesting information, I have yet to find the solution to my problem. I think this should be fairly simple to answer; I just don't know how to answer it.
I'm using the following loop in a program I'm working on (this is pseudocode of my algorithm, obviously):
while Q is not empty
    x = Q.dequeue()
    for(i = 1 to N)
        if x.s[i] = 0
            y = Combine(x, i)
            Q.add(G[y])
In this loop:
Q is a queue
x is an object that contains an integer array s
N is an integer representing the problem size
y is a new instance of the same type of object as x
Combine is a method that returns a new object of the same type as x
Combine only contains a For loop, so it has a worst-case time complexity of O(N)
My question is how I should go about calculating the time complexity of this loop. Because each iteration can add up to N-1 new items to the queue, I assume the complexity goes beyond the simple O(N) of a normal loop.
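For concreteness, here is roughly how the pseudocode might look in Java. Node, combine, and the handling of G[y] are placeholders for types the question does not define, so treat this as a sketch of the loop's shape rather than the actual program:

import java.util.Queue;

class Node {
    int[] s;                      // the integer array from the question
    Node(int[] s) { this.s = s; }
}

public class QueueLoop {
    // Stands in for Combine: a single for-loop's worth of work (here, a copy), so O(N).
    static Node combine(Node x, int i) {
        int[] t = x.s.clone();
        t[i] = 1;                 // assumed marking step
        return new Node(t);
    }

    static void process(Queue<Node> Q, int N) {
        while (!Q.isEmpty()) {
            Node x = Q.poll();
            for (int i = 0; i < N; i++) {      // up to N children per dequeued node
                if (x.s[i] == 0) {
                    Node y = combine(x, i);    // O(N) work per child
                    Q.add(y);                  // the question enqueues G[y]; G is unknown here
                }
            }
        }
    }
}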
If you need any more information to help me with this, please let me know and I'll get you what I can when I can.


How many subproblems are there in this Activity Selection recursive breakdown?

Activity Selection: Given a set of activities A with start and end times, find a maximum subset of mutually compatible activities.
My problem
The two approaches seem to be the same, but numSubproblems in firstApproach is exponential, while in secondApproach it is O(n^2). If I were to memoize the results, how would I memoize firstApproach?
The naive firstApproach
ActivitySelection(Activities):
    let max = 0
    for (a: Activities):
        let B = {Activities - allIncompatibleWith(a)}
        let maxOfSubproblem = ActivitySelection(B)
        max = max(max, maxOfSubproblem + 1)
    return max
1. Assume a particular activity `a` is part of the optimal solution.
2. Find the set of activities incompatible with `a`: `allIncompatibleWith(a)`.
3. Solve Activity Selection for the remaining set of activities: `{Activities - allIncompatibleWith(a)}`.
4. Loop over all activities `a in Activities` and choose the maximum.
The CLRS Section 16.1 based secondApproach
Solve for S(0, n+1), where S(i, j) is computed as:
    let S(i,j) = 0
    for (k: 0 to n):
        let a = Activities(k)
        let S(i,k) = solution for the set of activities that start after activity-i finishes and end before activity-k starts
        let S(k,j) = solution for the set of activities that start after activity-k finishes and end before activity-j starts
        S(i,j) = max(S(i,j), S(i,k) + S(k,j) + 1)
    return S(i,j)
1. Assume a particular activity `a` is part of the optimal solution.
2. Solve the subproblems for:
(1) the activities that finish before `a` starts;
(2) the activities that start after `a` finishes.
Let S(i, j) refer to the activities that lie between activities i and j (start after i finishes and end before j starts).
Then S(i,j) characterises the subproblems that need to be solved above:
S(i,j) = max over k of (S(i,k) + S(k,j) + 1), with k looped over the j-i indices between i and j.
My analysis
firstApproach:
numSubproblems = number of subsets of the set of all activities = 2^n.
secondApproach:
numSubproblems = number of ways to choose two indices from n indices, with repetition = n*n = O(n^2).
The two approaches seem to be the same, but the numSubproblems in firstApproach is exponential, while in secondApproach it is O(n^2). What's the catch? Why are they different, even though the two approaches seem to be the same?
The two approaches seem to be the same
The two solutions are not the same. The difference is in the number of possible states in the search space. Both solutions exhibit overlapping sub-problems and optimal substructure, and without memoization both explore the entire search space.
Solution 1
This is a backtracking solution in which all subsets compatible with an activity are tried; each time an activity is selected, the candidate solution is incremented by 1 and compared with the currently stored maximum. It uses no insight about the start and end times of the activities. The key point is that the state of the recurrence is the entire subset of activities (the compatible activities) for which the solution needs to be determined, regardless of their start and finish times. If you were to memoize this solution, you would have to use a bitmask (or std::bitset in C++) as the key under which the solution for a subset of activities is stored. You could also use std::set or another Set data structure.
Solution 2
The number of states for the sub-problems in the second solution is greatly reduced, because the recurrence solves only for the activities that finish before the current activity starts and the activities that start after the current activity finishes. Notice that the number of states in such a solution is determined by the number of possible values of the tuple (start time, end time). Since there are n activities, the number of states is at most n^2. If we memoize this solution, we simply need to store the solution for a given start time and end time, which automatically gives the solution for the subset of activities that fall in this range, regardless of whether they are compatible among themselves.
Memoization does not always lead to polynomial asymptotic time complexity. You can apply memoization in the first approach, but that will not reduce the time complexity to polynomial time.
What is memoization?
In simple words, memoization is nothing but a recursive (top-down) solution that stores the result of each sub-problem it computes. If the same sub-problem comes up again, you return the stored solution instead of recomputing it.
Memoization in your first recursive solution
In your case each sub-problem is finding the optimal selection of activities for a subset, so memoization will end up storing the optimal solution for every subset.
No doubt memoization will improve performance by avoiding recomputation of the solution for a subset of activities that has been "seen" before, but it can't (in this case) reduce the time complexity to polynomial, because in the worst case you end up storing sub-solutions for every subset.
Where does memoization give a real benefit?
On the other hand, if you look at the classic example where memoization is applied to the Fibonacci series, the total number of sub-solutions you have to store is linear in the size of the input, and thus it drops the exponential complexity to linear.
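For comparison, a minimal memoized Fibonacci in Java; the point is only that there are O(n) distinct sub-problems to store:

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                     // base cases
        Long cached = memo.get(n);
        if (cached != null) return cached;        // reuse a stored sub-solution
        long result = fib(n - 1) + fib(n - 2);    // computed at most once per n
        memo.put(n, result);
        return result;
    }
}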
How can you memoize the first solution
To apply memoization in the first approach, you need to store the sub-solutions. A data structure you can use is Map<Set<Activity>, Integer>, which stores the maximum number of compatible activities for a given Set<Activity>. In Java, equals() on a java.util.Set is defined by set contents across all implementations, so sets can be used as map keys.
Your first approach will be modified like this:
// this structure memoizes the sub-solutions
Map<Set<Activity>, Integer> map;

ActivitySelection(Set<Activity> activities) {
    if (map contains activities)
        return map.getValueFor(activities);
    let max = 0
    for (a: activities):
        let B = {activities - allIncompatibleWith(a)}
        let maxOfSubproblem = ActivitySelection(B)
        max = max(max, maxOfSubproblem + 1)
    map.put(activities, max)
    return max
}
On a lighter note:
The time complexity of the second solution (CLRS 16.1) will be O(n^3), not O(n^2): you need three loops, over i, j and k. The space complexity of this solution is O(n^2).
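For reference, a rough Java sketch of that O(n^3) DP over intervals, assuming the activities come as parallel start[]/finish[] arrays (the array names, sentinel handling, and sorting details are my own, not from the answer):

import java.util.Arrays;
import java.util.Comparator;

public class ActivitySelectionDP {
    // dp[i][j] = max number of mutually compatible activities that start
    // after activity i finishes and finish before activity j starts.
    static int maxCompatible(int[] start, int[] finish) {
        int n = start.length;
        // Sort activity indices by finish time.
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingInt(i -> finish[i]));
        // Sentinels: activity 0 "finishes" at -infinity, activity n+1 "starts" at +infinity.
        long[] s = new long[n + 2], f = new long[n + 2];
        s[0] = Long.MIN_VALUE; f[0] = Long.MIN_VALUE;
        s[n + 1] = Long.MAX_VALUE; f[n + 1] = Long.MAX_VALUE;
        for (int i = 1; i <= n; i++) {
            s[i] = start[order[i - 1]];
            f[i] = finish[order[i - 1]];
        }
        int[][] dp = new int[n + 2][n + 2];
        for (int len = 2; len <= n + 1; len++) {          // three nested loops -> O(n^3)
            for (int i = 0; i + len <= n + 1; i++) {
                int j = i + len;
                for (int k = i + 1; k < j; k++) {
                    // activity k must fit between the end of i and the start of j
                    if (s[k] >= f[i] && f[k] <= s[j]) {
                        dp[i][j] = Math.max(dp[i][j], dp[i][k] + dp[k][j] + 1);
                    }
                }
            }
        }
        return dp[0][n + 1];
    }
}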

Interview Scheduling Algorithm

I am trying to think of an algorithm that always produces the optimum solution in the best possible time to this problem:
There are n candidates for a job, and k rooms in which they have scheduled interviews at various times of the day. Interviews have a specific schedule in each room, with each interview having a specified start time (si), finish time (fi), and interview room (ri). All time units are always integers. In addition, we need to schedule pictures with the people currently being interviewed throughout the day. The pictures effectively take no time, but at some point in the day each interviewee must be in a picture. If we schedule a picture at time t, all people currently being interviewed will be in that picture. Taking a picture has no effect on the start and end times of the interviews. So the problem is this: given an unordered list of interviews, each with variables (si, fi, ri), how do you make sure every interview candidate is in a picture, while taking as few pictures as possible?
So ideally we would take pictures when as many people as possible are present, to minimize the number of pictures taken. My original idea for this was sort of a brute force, but it would have a really bad big-O runtime. It is very important to minimize the runtime of this algorithm while still returning the fewest possible photographs. That being said, if you can think of a fast greedy algorithm that doesn't perfectly solve the problem, I would like to hear that too.
I'm sure my description here was far from flawless, so if you would like me to clarify anything, feel free to leave a comment and I'll get back to you.
Start with the following observations:
At least one picture must be taken during each interview, since we cannot photograph that interviewee before they arrive or after they leave.
The set of people available to photograph changes only at the times si and fi.
After an arrival event si, if the next event j is an arrival, there is no need to take a picture between si and sj, since everyone available at si is still available at sj.
Therefore, you can let the set of available interviewees "build up" through arrival events (up to k of them) and wait to take a picture until someone is about to leave.
Thus I think the following algorithm should work:
Put the arrival and departure times into a list and sort it (times should remain tagged with "arrival" or "departure" and the interviewee's index).
Create a boolean array A of size n to keep track of whether each interviewee is available (interview is in progress).
Create a boolean array P of size n to keep track of whether each interviewee has been photographed.
Loop over the sorted time list:
a. If an arrival of interviewee j is encountered, set A[j] to true.
b. If a departure of interviewee j is encountered, check P[j] to see if the person leaving has been photographed already. If not, take a picture now and record its effects (for every k with A[k] = true, set P[k] = true). Finally, set A[j] to false.
The sort is O(n log n), the loop has 2n iterations, and checking the arrays is O(1). But since on each picture-taking event you may need to loop over A, the overall runtime is O(n^2) in the worst case (which would happen if no interviews overlapped in time).
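A Java sketch of this sweep, for concreteness (the array names and the tie-breaking at equal times are my own assumptions, not part of the answer):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PhotoSweep {
    // Interview i runs from s[i] to f[i]; the room is irrelevant to the picture count.
    static List<Integer> photoTimes(int[] s, int[] f) {
        int n = s.length;
        List<int[]> events = new ArrayList<>();   // {time, type (0 = arrival, 1 = departure), interviewee}
        for (int i = 0; i < n; i++) {
            events.add(new int[] {s[i], 0, i});
            events.add(new int[] {f[i], 1, i});
        }
        // Process arrivals before departures at equal times, so someone arriving
        // exactly when another leaves can share the picture (an assumed tie-break).
        events.sort(Comparator.<int[]>comparingInt(e -> e[0]).thenComparingInt(e -> e[1]));
        boolean[] available = new boolean[n];     // A in the description
        boolean[] photographed = new boolean[n];  // P in the description
        List<Integer> photos = new ArrayList<>();
        for (int[] e : events) {
            int who = e[2];
            if (e[1] == 0) {                      // arrival
                available[who] = true;
            } else {                              // departure
                if (!photographed[who]) {
                    photos.add(e[0]);             // take a picture now
                    for (int k = 0; k < n; k++)   // O(n) per picture -> O(n^2) worst case
                        if (available[k]) photographed[k] = true;
                }
                available[who] = false;
            }
        }
        return photos;
    }
}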
Here's an O(n log n) solution:
Step 1: Separately sort the starting and finishing time of all interviews, but at the same time keep track of the places they are sorted to (i.e. the original indices and the indices after sort). This results in 4 arrays below
sst[] (sst = sorted starting time)
sft[] (sft = sorted finishing time)
sst2orig[] (sst index to original index)
sft2orig[] (sft index to original index)
Note: by the definitions of the above 4 arrays,
"sst2orig[j] = i & sft2orig[k] = i" means that
interview [i] has starting time sst[j] and finishing time sft[k]
Step 2: Define a boolean array p_taken[] to represent whether the candidate of an interview has already been photographed. All elements in the array are initially set to false.
Step 3: The loop
std::vector<int> photo_time;
int last_p_not_taken_sst_index = 0;
for (int i = 0; i < sft.size(); i++) {
    // ignore the candidate already photographed
    if (p_taken[sft2orig[i]]) continue;
    // Now we have found the first leaving candidate not photographed, so we
    // must take a photo now.
    photo_time.push_back(sft[i]);
    // We can now mark all candidates having an earlier sst[] time as
    // already photographed. So we search for the first element in
    // sst[] that is greater than sft[i], and return its index.
    // If all elements in sst[] are smaller than sft[i], we return sst.size().
    // This can be done via a binary search (e.g. std::upper_bound).
    int k = upper_inequal_bound_index(sst, sft[i]);
    // Now we can mark all candidates with starting time earlier than sst[k]
    // as "photographed". This will include the one corresponding to
    // sft[i].
    for (int j = last_p_not_taken_sst_index; j < k; j++)
        p_taken[sst2orig[j]] = true;
    last_p_not_taken_sst_index = k;
}
The final answer is saved in photo_time, and the number of photos is photo_time.size().
Time Complexity:
Step 1: Sorts: O(n log n)
Step 2: initialize p_taken[]: O(n)
Step 3: We loop n times, and in each loop
3-1 check p_taken: O(1)
3-2 binary search: O(log n)
3-3 mark candidates: aggregated O(n), since we mark each candidate once only.
So, overall for step 3: O(n x (1 + log n) + n) = O(n log n)
Step 1 ~ 3, total: O(n log n)
Note that step 3 can be further optimized: we can shrink the binary-search range to exclude the indices already covered by previous searches. But the worst case is still O(log n) per iteration, so the total is still O(n log n).

Greedy Algorithm Optimization

I have the following problem:
Let there be n projects.
Let Fi(x) equal the number of points you will obtain if you spend x units of time working on project i.
You have T units of time to use and work on any project you would like.
The goal is to maximize the number of points you will earn and the F functions are non-decreasing.
The F functions have diminishing marginal returns; in other words, the (x+1)-th unit of time spent on a particular project yields a smaller increase in total points earned from that project than the x-th unit did.
I have come up with the following O(n log n + T log n) algorithm, but I am supposed to find an algorithm running in O(n + T log n):
sum = 0
schedule[]
gain[] = sort(Fi(1))
for sum < T
    getMax(gain)    // assume that the max gain corresponds to project "P"
    schedule[P]++
    sum++
    gain.sortedInsert(Fp(schedule[P] + 1) - gain[P])
    gain[P].sortedDelete()
return schedule
That is, it takes O(n log n) to sort the initial gain array and O(T log n) to run through the loop. I have thought through this problem more than I care to admit and cannot come up with an algorithm that would run in O(n + T log n).
For the first case, use a heap: constructing the heap will take O(n) time, and each ExtractMin and DecreaseKey call will take O(log n) time.
For the second case, construct an n x T table where the i-th column denotes the solution for the case T = i. The (i+1)-th column depends only on the values in the i-th column and the function F, and hence is computable in O(nT) time. I did not think through all the cases thoroughly, but this should give you a good start.
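For the heap-based suggestion in the first paragraph, a rough Java sketch might look like the following. The F accessor and the gain bookkeeping are assumptions; also note that building the heap with repeated offer() calls is O(n log n), so a linear-time heapify would be needed to actually reach O(n + T log n):

import java.util.PriorityQueue;
import java.util.function.IntBinaryOperator;

public class GreedyAllocation {
    // F.applyAsInt(p, x) stands for F_p(x): points for spending x units on project p.
    static int[] allocate(int n, int T, IntBinaryOperator F) {
        int[] schedule = new int[n];
        // Max-heap of {marginal gain of the next unit, project index}.
        PriorityQueue<long[]> heap =
                new PriorityQueue<>((a, b) -> Long.compare(b[0], a[0]));
        for (int p = 0; p < n; p++) {
            long firstGain = F.applyAsInt(p, 1) - F.applyAsInt(p, 0);
            heap.offer(new long[] {firstGain, p});   // O(n log n) built this way;
        }                                            // a true heapify would be O(n)
        for (int t = 0; t < T; t++) {
            long[] top = heap.poll();                // project with the best marginal gain
            int p = (int) top[1];
            schedule[p]++;
            long nextGain = F.applyAsInt(p, schedule[p] + 1)
                          - F.applyAsInt(p, schedule[p]);
            heap.offer(new long[] {nextGain, p});    // O(log n) per unit of time
        }
        return schedule;
    }
}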

Is this searching algorithm optimal?

I have two lists, L and M, each containing thousands of 64-bit unsigned integers. I need to find out whether the sum of any two members of L is itself a member of M.
Is it possible to improve upon the performance of the following algorithm?
Sort(M)
for i = 0 to Length(L)
    for j = i + 1 to Length(L)
        BinarySearch(M, L[i] + L[j])
(I'm assuming your goal is to find all pairs in L that sum to something in M)
Forget hashtables!
Sort both lists.
Then do the outer loop of your algorithm: walk over every element i in L, then every larger element j in L. As you go, form the sum and check to see if it's in M.
But don't look using a binary search: simply do a linear scan from the last place you looked. Let's say you're working on some value i, and you have some value j, followed by some value j'. When searching for (i+j), you would have got to the point in M where that value is found, or to the first larger value. You're now looking for (i+j'); since j' > j, you know that (i+j') > (i+j), and so it cannot be any earlier in M than the last place you got to. If L and M are both smoothly distributed, there is an excellent chance that the point in M where you would find (i+j') is only a little way off.
If the arrays are not smoothly distributed, then better than a linear scan might be some sort of jumping scan - look forward N elements at a time, halving N if the jump goes too far.
I believe this algorithm is O(n^2), which is as fast as any proposed hash algorithm (they have an O(1) primitive operation, but still have to do O(n^2) of them). It also means that you don't have to worry about the O(n log n) for sorting. It has much better data locality than the hash algorithms - it basically consists of paired streamed reads over the arrays, repeated n times.
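A minimal Java sketch of this linear-scan idea (illustrative names; it assumes the values are small enough that pair sums do not overflow a signed long):

import java.util.Arrays;

public class SumSearch {
    // Prints every pair (L[i], L[j]) whose sum occurs in M.
    static void findPairs(long[] L, long[] M) {
        Arrays.sort(L);
        Arrays.sort(M);
        for (int i = 0; i < L.length; i++) {
            int m = 0;                          // only ever moves forward for a fixed i
            for (int j = i + 1; j < L.length; j++) {
                long sum = L[i] + L[j];         // sums only grow as j grows
                while (m < M.length && M[m] < sum) m++;
                if (m < M.length && M[m] == sum)
                    System.out.println(L[i] + " + " + L[j] + " = " + sum);
            }
        }
    }
}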
EDIT: I have written implementations of Paul Baker's original algorithm, Nick Larsen's hashtable algorithm, and my algorithm, and a simple benchmarking framework. The implementations are simple (linear probing in the hashtable, no skipping in my linear search), and I had to make guesses at various sizing parameters. See http://urchin.earth.li/~twic/Code/SumTest/ for the code. I welcome corrections or suggestions about any of the implementations, the framework, and the parameters.
For L and M containing 3438 items each, with values ranging from 1 to 34380, and with Larsen's hashtable having a load factor of 0.75, the median times for a run are:
Baker (binary search): 423 716 646 ns
Larsen (hashtable): 733 479 121 ns
Anderson (linear search): 62 077 597 ns
The difference is much bigger than I had expected (and, I admit, not in the direction I had expected). I suspect I have made one or more major mistakes in the implementation. If anyone spots one, I really would like to hear about it!
One thing is that I have allocated Larsen's hashtable inside the timed method. It is thus paying the cost of allocation and (some) garbage collection. I think this is fair, because it's a temporary structure only needed by the algorithm. If you think it's something that could be reused, it would be simple enough to move it into an instance field and allocate it only once (and Arrays.fill it with zero inside the timed method), and see how that affects performance.
The complexity of the example code in the question is O(m log m + l^2 log m), where l = |L| and m = |M|, as it runs a binary search (O(log m)) for every pair of elements in L (O(l^2)), and M is sorted first.
Replacing the binary search with a hash table reduces the complexity to O(l^2), assuming that hash table insert and lookup are O(1) operations.
This is asymptotically optimal as long as you assume that you need to process every pair of numbers in the list L, as there are O(l^2) such pairs. If there are a couple of thousand numbers in L, and they are random 64-bit integers, then you definitely need to process all the pairs.
Instead of sorting M at a cost of n * log(n), you could create a hash set at the cost of n.
You could also store all sums in another hash set while iterating and add a check to make sure you don't perform the same search twice.
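A small Java sketch of that hash-set variant (again assuming sums fit in a signed long; names are illustrative):

import java.util.HashSet;
import java.util.Set;

public class HashSumSearch {
    // Returns true if any pair of distinct positions in L sums to a member of M.
    static boolean anyPairSumInM(long[] L, long[] M) {
        Set<Long> mSet = new HashSet<>();
        for (long m : M) mSet.add(m);            // O(|M|) expected, instead of sorting
        Set<Long> seenSums = new HashSet<>();    // avoid testing the same sum twice
        for (int i = 0; i < L.length; i++) {
            for (int j = i + 1; j < L.length; j++) {
                long sum = L[i] + L[j];
                if (seenSums.add(sum) && mSet.contains(sum)) return true;
            }
        }
        return false;
    }
}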
You can avoid the binary search by using a hashtable for M instead of a sorted array.
Alternatively, add all of the members of L to a hashset lSet, then iterate over M, performing these steps for each m in M:
add m to hashset mSet - if m is already in mSet, skip this iteration; if m is in hashset dSet, also skip this iteration.
subtract each member l of L less than m from m to give d, and test whether d is also in lSet;
if so, add (l, d) to some collection rSet; add d to hashset dSet.
This will require fewer iterations, at the cost of more memory. You will want to pre-allocate the memory for the structures, if this is to give you a speed increase.

Algorithm to pick values from set to match target value?

I have a fixed array of constant integer values about 300 items long (Set A). The goal of the algorithm is to pick two numbers (X and Y) from this array that fit several criteria based on input R.
Formal requirement:
Pick values X and Y from set A such that the expression X*Y/(X+Y) is as close as possible to R.
That's all there is to it. I need a simple algorithm that will do that.
Additional info:
The Set A can be ordered or stored in any way; it will be hard-coded eventually. Also, with a little bit of math, it can be shown that the best Y for a given X is the closest value in Set A to the expression X*R/(X-R). Also, X and Y will always be greater than R.
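That closed form follows from rewriting the target in reciprocal form (a short derivation using the notation above):

\[
\frac{XY}{X+Y} = R
\;\Longleftrightarrow\;
\frac{1}{X} + \frac{1}{Y} = \frac{1}{R}
\;\Longrightarrow\;
\frac{1}{Y} = \frac{X-R}{XR}
\;\Longrightarrow\;
Y = \frac{XR}{X-R} \qquad (X > R).
\]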
From this, I get a simple iterative algorithm that works ok:
int minX = 100000000;
int minY = 100000000;
foreach X in A
    if (X <= R)
        continue;
    else
        Y = X*R/(X-R)
        Y = FindNearestIn(A, Y);  // search for the closest usable Y value in A
        // keep the pair whose X*Y/(X+Y) is closest to R
        if ( |X*Y/(X+Y) - R| < |minX*minY/(minX+minY) - R| )
        then
            minX = X;
            minY = Y;
        end
    end
end
I'm looking for a slightly more elegant approach than this brute force method. Suggestions?
For a possibly 'more elegant' solution see Solution 2.
Solution 1)
Why don't you create all the 300*300/2 (or 300*299/2) possible exact values of X*Y/(X+Y), sort them into an array B, say, and then, given an R, find the closest value to R in B using binary search, and pick the corresponding X and Y.
I presume that having array B (with the X&Y info) won't be a big memory hog and can easily be hardcoded (using code to write code! :-)).
This will be reasonably fast: worst case ~ 17 comparisons.
Solution 2)
You can possibly also do the following (didn't try proving it, but seems correct):
Maintain an array of the 1/X values, sorted.
Now given an R, you try and find the closest sum to 1/R with two numbers in the array of 1/Xs.
For this you maintain two pointers to the 1/X array, one at the smallest and one at the largest, and keep incrementing one and decrementing the other to find the one closest to 1/R. (This is a classic interview question: Find if a sorted array has two numbers which sum to X)
This will be O(n) comparisons and additions in the worst case. This is also prone to precision issues. You could avoid some of the precision issues by maintaining a reverse sorted array of X's, though.
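A rough Java sketch of this two-pointer idea over the reciprocals (sample values and names are made up for illustration; the precision caveat above applies):

import java.util.Arrays;

public class ClosestPair {
    // Returns indices {i, j} into the sorted reciprocal array whose sum is closest to target.
    static int[] closestTo(double[] inv, double target) {
        int lo = 0, hi = inv.length - 1;
        int bestLo = lo, bestHi = hi;
        double bestErr = Double.POSITIVE_INFINITY;
        while (lo < hi) {
            double sum = inv[lo] + inv[hi];
            double err = Math.abs(sum - target);
            if (err < bestErr) { bestErr = err; bestLo = lo; bestHi = hi; }
            if (sum < target) lo++;       // need a bigger sum
            else hi--;                    // need a smaller sum
        }
        return new int[] {bestLo, bestHi};
    }

    public static void main(String[] args) {
        int[] A = {150, 220, 330, 470, 680};   // made-up sample values
        double R = 120;
        double[] inv = new double[A.length];
        for (int i = 0; i < A.length; i++) inv[i] = 1.0 / A[i];
        Arrays.sort(inv);
        int[] best = closestTo(inv, 1.0 / R);
        System.out.println("X = " + Math.round(1 / inv[best[0]])
                         + ", Y = " + Math.round(1 / inv[best[1]]));
    }
}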
Two ideas come to my mind:
1) Since the set A is constant, some pre-processing can be helpful. Assuming the value span of A is not too large, you can create an array of size N = max(A). For each index i you can store the closest value in A to i. This way you can improve your algorithm by finding the closest value in constant time, instead of using a binary search. (A sketch of building such a table is shown after this list.)
2) I see that you omit X <= R, and this is correct. If you define that X <= Y, you can restrict the search range even further, since X > 2R will yield no solutions either. So the range to be scanned is R < X <= 2R, and this guarantees no symmetric solutions and that X <= Y.
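Here is the kind of pre-processing table described in point 1 (a sketch with illustrative names; it assumes max(A) is small enough to afford a table of that size):

import java.util.Arrays;

public class ClosestTable {
    // closest[i] = the member of A nearest to i, for 0 <= i <= max(A).
    static int[] build(int[] A) {
        int max = Arrays.stream(A).max().getAsInt();
        int[] below = new int[max + 1];          // nearest member <= i, or -1 if none
        Arrays.fill(below, -1);
        for (int v : A) below[v] = v;
        for (int i = 1; i <= max; i++)
            if (below[i] == -1) below[i] = below[i - 1];
        int[] closest = new int[max + 1];
        int above = -1;                          // nearest member >= i during the downward sweep
        for (int i = max; i >= 0; i--) {
            if (below[i] == i) above = i;        // i itself is a member of A
            closest[i] = nearer(i, below[i], above);
        }
        return closest;
    }

    // Pick whichever of lo/hi (either may be -1, meaning "absent") is closer to i.
    private static int nearer(int i, int lo, int hi) {
        if (lo == -1) return hi;
        if (hi == -1) return lo;
        return (i - lo <= hi - i) ? lo : hi;
    }
}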
When the size of the input is (roughly) constant, an O(n*log(n)) solution might run faster than a particular O(n) solution.
I would start with the solution that you understand the best, and optimize from there if needed.

Resources