Imagine you are given a matrix of positive integers (at most 25*15; no value exceeds 3,000,000). When you take the column sums and pick the smallest and the largest, the difference between them must be as small as possible.
You can swap numbers within each row (i.e., permute each row independently), but not within a column, as many times as you want.
How would you solve this task?
I'm not asking for your code but your ideas.
Thanks in advance
I would make an attempt to solve the problem using Simulated Annealing. Here is a sketch of the plan:
1. Let the distance to optimize be the difference between the max and min column sums.
2. Set the goal to 0 (i.e., try to get as close as possible to a matrix with no difference between column sums).
3. Initialize the problem by computing the array of current column sums.
4. Let a neighbor of the current matrix be the matrix that results from swapping two entries in the same row.
5. Represent a neighbor by its row index and the two swapped column indexes.
6. When accepting a neighbor, do not recompute all the sums. Just adjust the sums of the two swapped columns by the difference of the swapped values (which you can read off from the row and column indexes of the move).
Step 6 is essential for the sake of performance (large matrices).
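A minimal sketch of this plan in Python (the step count, starting temperature, and cooling rate are illustrative guesses, not tuned values):

```python
import math
import random

def anneal(matrix, steps=200_000, t0=1_000.0, cooling=0.99995):
    rows, cols = len(matrix), len(matrix[0])
    sums = [sum(matrix[r][c] for r in range(rows)) for c in range(cols)]
    energy = max(sums) - min(sums)  # the distance we optimize (step 1)
    t = t0
    for _ in range(steps):
        if energy == 0:             # the goal (step 2)
            break
        # neighbor: swap two entries in one row (steps 4-5)
        r = random.randrange(rows)
        c1, c2 = random.sample(range(cols), 2)
        delta = matrix[r][c1] - matrix[r][c2]
        # adjust only the two affected column sums (step 6)
        sums[c1] -= delta
        sums[c2] += delta
        candidate = max(sums) - min(sums)
        if candidate <= energy or random.random() < math.exp((energy - candidate) / t):
            matrix[r][c1], matrix[r][c2] = matrix[r][c2], matrix[r][c1]
            energy = candidate
        else:
            sums[c1] += delta       # rejected: undo the sum adjustment
            sums[c2] -= delta
        t *= cooling
    return matrix, energy
```

Recomputing max(sums) - min(sums) is O(cols) per step, which is cheap at these sizes; for much larger matrices you would track the extremes incrementally as well.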
The bad news is that this problem without the size limits is NP-hard, and exact dynamic programming at this scale seems out of the question. I think that my first approach would be large-neighborhood local search: repeatedly choose a random submatrix (a subset of rows and columns) small enough to be amenable to brute force, and choose the optimal permutations while leaving the rest of the matrix undisturbed.
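Here is a sketch of one such move in Python, assuming a 3-row by 4-column submatrix counts as "small enough" ((4!)^3, about 14k, candidate permutation tuples). Since only the chosen columns' sums change, the global max-min spread can be evaluated exactly for each candidate:

```python
import itertools
import random

def lns_move(matrix, row_count=3, col_count=4):
    n_rows, n_cols = len(matrix), len(matrix[0])
    rows = random.sample(range(n_rows), row_count)
    cols = random.sample(range(n_cols), col_count)
    full = [sum(matrix[r][c] for r in range(n_rows)) for c in range(n_cols)]
    fixed = [full[c] for c in range(n_cols) if c not in cols]          # sums of untouched columns
    base = [full[c] - sum(matrix[r][c] for r in rows) for c in cols]   # chosen columns minus chosen rows
    best_perms, best_spread = None, float("inf")
    # brute force: one permutation of the chosen entries per chosen row
    for perms in itertools.product(itertools.permutations(range(col_count)), repeat=row_count):
        trial = base[:]
        for r, perm in zip(rows, perms):
            for j in range(col_count):
                trial[j] += matrix[r][cols[perm[j]]]
        spread = max(trial + fixed) - min(trial + fixed)               # exact global objective
        if spread < best_spread:
            best_spread, best_perms = spread, perms
    for r, perm in zip(rows, best_perms):                              # write the best move back
        vals = [matrix[r][cols[perm[j]]] for j in range(col_count)]
        for j in range(col_count):
            matrix[r][cols[j]] = vals[j]
    return best_spread
```

Since the identity permutations are among the candidates, a move never makes the spread worse.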
I have n (about 10^5) points on a hypersphere of dimension m (between 10^4 and 10^6).
I am going to make a bunch of queries of the form "given a point p, find the closest of the n points to p". I'll make about n of these queries.
(Not sure if the hypersphere fact helps at all.)
The simple naive algorithm is, for each query, to compare p to all n points. Doing this for n queries ends up with a runtime of O(n^2 m), which is far too big for me to be able to compute.
Is there a more efficient algorithm I can use? If I could get it to O(nm) with some log factors that'd be great.
Probably not. Having many dimensions makes efficient indexing extremely hard. That is why people look for opportunities to reduce the number of dimensions to something manageable.
See https://en.wikipedia.org/wiki/Curse_of_dimensionality and https://en.wikipedia.org/wiki/Dimensionality_reduction for more.
Divide your space up into hypercubes -- call these cells -- with the edge size chosen so that on average you'll have one point per cell. You'll want a map from cells to the set of points they contain.
Then, given a point, check its cell for other points. If it is empty, look at the adjacent cells (I'd recommend a literal hypercube of cells for simplicity, rather than some approximation to a hypersphere built out of cells) and check those for points. Keep expanding until you find a point. Assuming your points are randomly distributed, odds are high that you'll find one within 1-2 expansions.
Once you find a point, check all cells that could possibly contain a closer one. This is necessary because the point you found may be in a corner of its cell, and there may be a closer point just outside the hypercube of cells you've inspected so far.
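For what it's worth, here is a low-dimensional Python sketch of this scheme (it assumes the grid is non-empty; the ring enumeration below touches up to 3^m cells, which is exactly the curse of dimensionality the other answer warns about):

```python
import itertools
import math
from collections import defaultdict

def build_grid(points, cell):
    grid = defaultdict(list)
    for p in points:
        grid[tuple(math.floor(x / cell) for x in p)].append(p)
    return grid

def nearest(grid, cell, q):
    key = tuple(math.floor(x / cell) for x in q)
    best, best_d = None, float("inf")
    radius = 0
    # expand ring by ring; any unscanned cell at ring r is at least
    # (r - 1) * cell away from q, so we can stop once that exceeds best_d
    while best is None or (radius - 1) * cell <= best_d:
        for off in itertools.product(range(-radius, radius + 1), repeat=len(q)):
            if max(map(abs, off)) != radius:
                continue  # interior cells were scanned on earlier rings
            for p in grid.get(tuple(k + o for k, o in zip(key, off)), []):
                d = math.dist(p, q)
                if d < best_d:
                    best, best_d = p, d
        radius += 1
    return best
```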
How many unique arrays of m elements exist such that they contain numbers in the range [1, n] and there exists at least one subsequence {1, 2, 3, ..., n}?
Constraints: m > n
I thought of a combinations approach, but there will be repetitions.
In my approach, I first lay out all the numbers from 1 to n.
For example, if m = n+1, the answer is n^2 (n spots available, each taking a number in the range [1, n]).
Now, I think there might be a DP relation for further calculation, but I am not able to figure it out.
Here's the idea for n=3 and m=5. Mark the canonical subsequence: it consists of the first 1 in the array, the first 2 that's after that 1, and so on. Squares that aren't part of the subsequence can take n values if they come after the end of the subsequence, or n-1 values otherwise (anything except the next value the canonical subsequence is still waiting for, which would contradict its minimality).
So the answer for this example is 1*9 + 3*6 + 6*4 = 51, which is easily verified by brute force. The coefficients 1, 3, 6 come from Pascal's triangle: they are C(2,2), C(3,2), C(4,2), counting the ways to place the subsequence. The rest is left to the reader.
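Making the pattern explicit (this closed form is my reading of the argument above, not part of the original answer, but it reproduces both the 51 above and the n^2 result for m = n+1): if the canonical subsequence ends at position k, there are C(k-1, n-1) ways to place its first n-1 elements among the preceding positions, (n-1)^(k-n) fillings for the free squares before position k, and n^(m-k) for those after it.

```python
from math import comb

def count_arrays(n, m):
    # k = 1-based position where the canonical subsequence 1..n ends
    return sum(comb(k - 1, n - 1) * (n - 1) ** (k - n) * n ** (m - k)
               for k in range(n, m + 1))

print(count_arrays(3, 5))  # 51, matching the brute-force check above
print(count_arrays(4, 5))  # 16 = n^2, the m = n+1 case
```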
I have an array of elements A1,A2,...,An.
The probabilities of a user searching for each element are P1,P2,...,Pn.
If the elements are rearranged, will the average case of the algorithm change?
Edit: I have posted the question as it appeared in my exam.
For a sequential (linear) search, the expected number of comparisons is sum_{i=1..n} (i * p_i).
Re-ordering the elements in descending order of probability reduces the expectation. That's intuitive: looking at the most probable choices first reduces, on average, the number of elements examined before you find a particular choice.
As an example, suppose there are three items k1, k2, k3 with match probabilities 10%, 30% and 60%.
Then in the order k1, k2, k3, the expected number of comparisons is 1*0.1 + 2*0.3 + 3*0.6 = 2.5
With the most likely first: k3, k2, k1, the expected number of comparisons is 1*0.6 + 2*0.3 + 3*0.1 = 1.5
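The same arithmetic in a few lines of Python:

```python
def expected_comparisons(probs):
    # the element at 1-based position i costs i comparisons to reach
    return sum(i * p for i, p in enumerate(probs, start=1))

print(expected_comparisons([0.1, 0.3, 0.6]))  # k1, k2, k3 -> ~2.5
print(expected_comparisons([0.6, 0.3, 0.1]))  # k3, k2, k1 -> ~1.5
```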
No, because it takes O(1) time to access an element in an array, and that cost does not depend on the position of the element in the array. So accessing arr[0] and arr[10000] takes the same amount of time.
If you have something like a linked list or a binary tree instead, then it makes sense to put the elements that are accessed with higher probability closer to the beginning.
Wondering if anyone has worked on a similar problem before? My question is: why are {8, 6} peaks? I think 8 is a peak, but since 6 is smaller than 8, it should not be a peak? Thanks.
In an array of integers, a "peak" is an element which is greater than or equal to the adjacent integers and a "valley" is an element which is less than or equal to the adjacent integers. For example, in the array {5,8,6,2,3,4,6}, {8,6} are peaks and {5,2} are valleys. Given an array of integers, sort the array into an alternating sequence of peaks and valleys.
Example,
Input: {5,3,1,2,3}
Output: {5,1,3,2,3}
thanks in advance,
Lin
The 6 referred to is the second 6, at the end of the sequence. This fits the definition (even if it isn't made very clear) and is backed up by 5 being listed as a valley: boundary elements are compared only with their single neighbor.
An alternating sequence of peaks and valleys is a sequence such that either all of the elements in odd-numbered positions are peaks and those in even-numbered positions are valleys, or vice-versa. The output sequence in the example demonstrates the elements of the input sequence sorted in such a way.
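For the sorting part, here is a sketch of one common in-place pass (it relies on the >= / <= in the definition allowing ties): swap the element at each odd index with a neighbor whenever that neighbor is smaller, which forces odd positions to be valleys and even positions to be peaks.

```python
def peaks_and_valleys(arr):
    # make every odd index a valley; even indices then become peaks
    for i in range(1, len(arr), 2):
        if arr[i] > arr[i - 1]:
            arr[i], arr[i - 1] = arr[i - 1], arr[i]
        if i + 1 < len(arr) and arr[i] > arr[i + 1]:
            arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

print(peaks_and_valleys([5, 3, 1, 2, 3]))  # [5, 1, 3, 2, 3], as in the example
```

Swapping at an odd index can only raise the values at the neighboring even indexes, so positions already processed stay valid as the pass moves right.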
Call every subunitary ratio whose denominator is a power of 2 a perplex.
Number 1 can be written in many ways as a sum of perplexes.
Call every sum of perplexes a zeta.
Two zetas are distinct if and only if one of them has at least one perplex that the other does not have; in other words, the order of the perplexes in the sum does not matter.
Find the number of ways 1 can be written as a zeta with N perplexes. Because this number can be big, calculate it modulo 100003.
Please don't post the code, but rather the algorithm. Be as precise as you can.
This problem was given at a contest, and the official solution, written in Romanian, has been uploaded as a docx file at https://www.dropbox.com/s/ulvp9of5b3bfgm0/1112_descr_P2_fractii2.docx?dl=0 (you can use Google Translate).
I do not understand what the author of the solution meant to say there.
Well, this reminds me of BFS (breadth-first search) algorithms, where you radiate out from a single point to find multiple solutions with different permutations.
Here you can use recursion, with the base case being when N perplexes have been reached in the current chain of recursive calls.
So you can say:
function(int N /* splits remaining */, ArrayList<Double> currentNumbers, double dividedNum)
if N == 0, then you're done - sort currentNumbers and enter it into a hashtable (sorting makes permuted duplicates collide)
clone the currentNumbers ArrayList as cloneNumbers
remove dividedNum from cloneNumbers and add two copies of dividedNum/2
for every number x in cloneNumbers, call function(N - 1, cloneNumbers, x)
This is a rough, very inefficient but short way to do it. There are obviously a lot of ways to prune the algorithm (reduce the number of duplicates going into the hashtable, avoid cloning where possible, etc.), but because this generates every permutation of every sequence and then enters each one into a hashtable, the hashtable's equals() comparison will see that a sequence already exists (such as your last two zetas) and reject the duplicate. That way, you'll be left with the answer you want.
The efficiency of the current algorithm is O(|E|^N), where |E| is the number of distinct values that can appear in the array by the end of all insertions, and N is the number of insertions (or, as you said, the number of perplexes). Obviously this isn't the most optimal speed, but it definitely works.
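For reference, a runnable Python translation of the sketch (my rendering, not the only way): exact Fractions avoid floating-point concerns, and sorting before hashing is what makes permuted duplicates collide, as described above.

```python
from fractions import Fraction

def count_zetas(n_perplexes):
    seen = set()

    def expand(splits_left, numbers):
        if splits_left == 0:
            seen.add(tuple(sorted(numbers)))  # canonical form: duplicates collide
            return
        for x in set(numbers):                # split each distinct perplex in turn
            clone = list(numbers)
            clone.remove(x)
            clone += [x / 2, x / 2]           # replace x with two copies of x/2
            expand(splits_left - 1, clone)

    expand(n_perplexes - 1, [Fraction(1)])    # seed: the whole unit, split N-1 times
    return len(seen)

print([count_zetas(n) for n in range(2, 8)])  # [1, 1, 2, 3, 5, 9]
```

Every multiset of perplexes summing to 1 corresponds to the leaves of a full binary tree (by the Kraft equality), and every such tree can be built by repeatedly splitting leaves starting from the root, so the splitting search does reach every zeta.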
Hope this helps!