I have a number of categories, and each category has a number of elements. I'm looking for an algorithm to distribute these categories across a predefined number of columns without breaking up a category, keeping the category order, and keeping the number of elements in each column as close to optimal as possible.
For example:
Distributing 5 categories across 3 columns
Data:
category A, 7 elements
category B, 7 elements
category C, 3 elements
category D, 2 elements
category E, 8 elements
Outcome:
Column 1: category A, 7 elements
Column 2: category B and C, 10 elements
Column 3: category D and E, 10 elements
You have the total number of elements, so you can divide that number by the number of columns to get the expected number of elements in each column. Your job is then to minimize the sum of the squares of the differences (so, if you have to store 8 elements and you store 10, you have a squared difference of 2² = 4 for that column).
You can then write a recursive function that, for every category, decides whether to move that category to the next column or keep it in the current one. This is a boolean decision, so you can explore first the branch that creates the smaller difference, then the other. The function keeps track of the best solution found so far and stops immediately whenever the current sum of squared differences already exceeds that of the best solution.
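Here is a minimal Python sketch of that branch-and-bound idea (the function and variable names are my own, and the branching order is simplified to "keep first, then split"):

def distribute(sizes, num_columns):
    """Split sizes (elements per category, in order) into num_columns
    contiguous groups, minimizing the sum of squared deviations from
    the ideal column size."""
    target = sum(sizes) / num_columns
    best = {"cost": float("inf"), "splits": None}

    def search(index, column, column_sum, cost, splits):
        if cost >= best["cost"]:          # prune: cannot beat the best solution
            return
        if index == len(sizes):
            # charge the last column plus any columns left empty
            total = cost + (column_sum - target) ** 2
            total += (num_columns - column - 1) * target ** 2
            if total < best["cost"]:
                best["cost"], best["splits"] = total, splits
            return
        # keep this category in the current column
        search(index + 1, column, column_sum + sizes[index], cost, splits)
        # or close the current column and start the next one with it
        if column + 1 < num_columns:
            search(index + 1, column + 1, sizes[index],
                   cost + (column_sum - target) ** 2, splits + [index])

    search(0, 0, 0, 0.0, [])
    return best["splits"], best["cost"]

# The example above: new columns start at categories B and D.
print(distribute([7, 7, 3, 2, 8], 3))    # ([1, 3], 6.0)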
Please recommend the optimal algorithm or solution for such a task:
There are several arrays with fractional numbers
a = [1.5, 2, 3, 4.5, 7, 10, ...(up to 100 numbers)]
b = [5, 6, 8, 14, ...]
c = [1, 2, 4, 6.25, 8.15 ...] (up to 7 arrays)
Arrays are of arbitrary length and may contain different counts of numbers.
It is required to select one number from each array in such a way that their product falls within a given range.
For example, the required product should be between 40 and 50.
Solutions can be:
a[2] * b[2] * c[1] = 3 * 8 * 2 = 48
a[0] * b[3] * c[1] = 1.5 * 14 * 2 = 42
If there can be several solutions (different combinations), then how can you find them all in the optimal way?
This is doable, but barely. This will require combining pairs of things over and over again using a variety of strategies.
First of all, if you have 2 arrays of no more than 100 things each, you can create an array of all pairs, sorted by product either ascending or descending, and it only has 10,000 things in it.
Next, we can use a heap to implement a priority queue.
With a priority queue, we can combine 2 ordered arrays of size at most 10,000 to stream out the products in either ascending or descending order while never keeping track of more than 10,000 things. How? First we create a data structure like this:
Create a priority queue
For every entry a of array A:
    Put (a, B[0], 0) into the queue, using the product as the priority
Return a data structure that contains B and the priority queue
And now we can get values out like this:
If the priority queue is empty:
    We're done
else:
    Pop the first element (a, b, index) off the queue
    if index is not at the end of B:
        insert (a, B[index + 1], index + 1) into the queue
    return the popped element
And we can peek at them by just looking at the first element of the queue without touching the data structure.
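In Python this streaming merge might look like the following sketch (it assumes all values are positive and B is sorted ascending; for brevity it yields only the products, not the pairs):

import heapq

def stream_products(A, B, descending=False):
    """Yield a * b over all pairs of A x B in ascending (or descending)
    order of product, never holding more than len(A) heap entries."""
    Bs = B[::-1] if descending else B   # walk B from the large end when descending
    sign = -1 if descending else 1      # negate priorities to simulate a max-heap
    heap = [(sign * a * Bs[0], a, 0) for a in A]
    heapq.heapify(heap)
    while heap:
        priority, a, j = heapq.heappop(heap)
        if j + 1 < len(Bs):             # advance this entry to the next b
            heapq.heappush(heap, (sign * a * Bs[j + 1], a, j + 1))
        yield sign * priority

print(list(stream_products([1, 3], [2, 5])))  # [2, 5, 6, 15]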
This strategy can stream through 2 arrays of size 10,000 with total work of just a few billion operations (10^8 pairs, each costing a logarithmic-time heap operation).
OK, so now we can arrange to always have 7 arrays. (Some may simply be a trivial [1].) We can start as follows with the brute force strategy.
Combine the first 2 ascending.
Combine the second 2 ascending.
Combine the third 2 descending.
Arrange the last descending.
Next we can use the priority queue merge strategy as follows:
Combine (first 2) with (second 2) ascending
Combine (third 2) with last descending
We just need the generators at the moment.
Now our strategy will look like this:
For each combination (in ascending order) from the first 4:
    For each combination from the last 3 that lands in the window:
        emit the final combination
But how do we do the window? Well, as the combination from the first 4 goes up, the window that the last 3 has to fall in goes down. So adjusting the window looks like this:
while there is a next value and the next value is large enough to fit in the window:
    extract the next value
    add it to the end of the window
while the first value is too large for the window:
    remove the first value from the window
(A double-ended queue, such as Python's collections.deque, can do both of these operations in amortized O(1) each; a plain array is O(n) for removal from the front.)
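For instance, in Python (a sketch; the stream is assumed to yield its values in descending order, and the bounds passed to adjust must be non-increasing over successive calls, matching the loop above):

from collections import deque

def make_window(stream):
    """Return an adjust(lo, hi) function maintaining a deque of the
    stream's values that currently lie inside [lo, hi]."""
    window = deque()
    nxt = next(stream, None)

    def adjust(lo, hi):
        nonlocal nxt
        # pull values that are still large enough to fit in the window
        while nxt is not None and nxt >= lo:
            window.append(nxt)
            nxt = next(stream, None)
        # drop leading values that are now too large for the window
        while window and window[0] > hi:
            window.popleft()
        return window

    return adjust

adjust = make_window(iter([9, 7, 5, 3, 1]))
print(list(adjust(4, 8)))   # [7, 5]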
So our actual way to finish is:
For each combination (in ascending order) from the first 4:
    Adjust the entries in the window over the last 3
    For each entry in the window over the last 3:
        emit the final combination
This has a fixed overhead of a few billion operations plus O(number of answers) to actually emit the combinations. This includes a number of data structures with around 10k items, plus a window whose maximum size is 1 million items for a maximum memory usage of a few hundred MB.
Given a sparse matrix, how can the rows and columns be reordered via row and column permutations so that it is in block-diagonal-like form?
The row and column permutations are not necessarily coupled, as they are in reverse Cuthill-McKee ordering (http://www.mathworks.com/help/matlab/ref/symrcm.html?refresh=true). In short, you can independently perform any row or column permutation.
The overall goal is to cluster all the non-zero elements toward the diagonal.
Here is one approach.
First make a graph whose vertices are the rows and the columns. Every non-zero value is an edge between its row and its column.
You can then use a standard graph-theory algorithm to detect the connected components of this graph. The single-element components correspond to all-zero rows and columns. Number the others. Those components may have unequal numbers of rows and columns; you can distribute some of the zero rows and columns to them to make them square.
Your square components will be your blocks, and from the numbering of those components you know what order to put them in. Now just reorder rows and columns to achieve this structure and, voila! (The remaining zero rows/columns will result in a bunch of 0 blocks at the bottom right of the diagonal.)
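A Python sketch of this approach (plain BFS over the bipartite row/column graph; it skips the step of padding components into square blocks and simply sends leftover all-zero rows and columns to the end):

from collections import defaultdict, deque

def block_diagonal_order(nonzeros, n_rows, n_cols):
    """Given (row, col) positions of non-zero entries, return row and
    column orders that make each connected component a contiguous block."""
    cols_of = defaultdict(list)   # row -> columns with a non-zero there
    rows_of = defaultdict(list)   # column -> rows with a non-zero there
    for r, c in nonzeros:
        cols_of[r].append(c)
        rows_of[c].append(r)

    row_order, col_order = [], []
    seen_rows, seen_cols = set(), set()
    for start in range(n_rows):
        if start in seen_rows or start not in cols_of:
            continue
        queue = deque([("row", start)])    # BFS alternates rows and columns
        seen_rows.add(start)
        while queue:
            kind, i = queue.popleft()
            if kind == "row":
                row_order.append(i)
                for c in cols_of[i]:
                    if c not in seen_cols:
                        seen_cols.add(c)
                        queue.append(("col", c))
            else:
                col_order.append(i)
                for r in rows_of[i]:
                    if r not in seen_rows:
                        seen_rows.add(r)
                        queue.append(("row", r))
    # all-zero rows/columns become zero blocks at the bottom right
    row_order += [r for r in range(n_rows) if r not in seen_rows]
    col_order += [c for c in range(n_cols) if c not in seen_cols]
    return row_order, col_order

# A = [B 0 0; 0 0 C; 0 D 0] with scalar blocks: non-zeros at (0,0), (1,2), (2,1)
print(block_diagonal_order([(0, 0), (1, 2), (2, 1)], 3, 3))
# ([0, 1, 2], [0, 2, 1])  ->  column order [0, 2, 1] yields diag(B, C, D)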
Just an idea, but you could make a new matrix Ab from the original block matrix A that contains the block-sparsity structure of A. E.g.:
A = [B 0 0; 0 0 C; 0 D 0]; % with matrices 0 (zero elements), B,C and D
Ab = [1 0 0; 0 0 2; 0 3 0]; % with identifiers 1, 2 and 3 (1-->B, 2-->C, 3-->D)
Then Ab is a simple sparse matrix (size 3x3 in the example). You can then use the reverse Cuthill-McKee ordering to get the permutations you want, and apply these permutations to Ab.
p = symrcm(Ab);
Abperm = Ab(p,p);
Then use the identifiers to create the ordered block matrix Aperm from Abperm and you'll have the desired result, I believe.
You'll need to be clever in assigning the identifiers to the individual blocks and so on, but this should be possible.
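For what it's worth, a rough Python/SciPy analogue of the MATLAB snippet above (scipy.sparse.csgraph.reverse_cuthill_mckee plays the role of symrcm here; this is a sketch, not a drop-in replacement):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# block-sparsity structure of A, with identifiers 1 --> B, 2 --> C, 3 --> D
Ab = csr_matrix(np.array([[1, 0, 0],
                          [0, 0, 2],
                          [0, 3, 0]]))
p = reverse_cuthill_mckee(Ab, symmetric_mode=False)  # orders on Ab + Ab.T
Abperm = Ab[p, :][:, p]
print(Abperm.toarray())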
This is not a real-life question, it is just theory-crafting.
I have a big array whose elements are vectors like [1,140,245,123443], all integers or floats with low selectivity: the number of unique values is ten times smaller than the size of the array. B*-tree indexing is not good in this case.
I also tried to implement bitmap indexing, but in Ruby binary operations are not very fast.
Are there any good algorithms for searching two-dimensional arrays of fixed-size vectors?
And the main question is: how do I convert a vector into a value, where the conversion function has to be monotonic, so that I can apply range queries such as:
(v[0]<10, v[2]>100, v[3]=32, 0.67*10^-8<v[4]<1.2154241410*10^-6)
The only idea I have is to create a separate sorted index for each component of the vector, binary search each of them, and merge the results. But that is a bad idea, because in the worst case it will require O(N*N) operations...
Assuming that each "column" is vaguely evenly distributed over a known range, you could keep track of a series of buckets for each column, and for each bucket a list of the rows that fall into it. The number of buckets for each column can be the same or different; it's totally arbitrary. More buckets is faster but takes slightly more memory.
my table:
range:  {1 to 10}  {1 to 4b}      {-2b to 2b}
row1:   {7          3427438335     420645075}
row2:   {5          3862506151    -1555396554}
row3:   {1          2793453667    -1743457796}
buckets for column 1:
bucket{1-3}  : row3
bucket{4-6}  : row2
bucket{7-10} : row1
buckets for column 2:
bucket{1b-2b} :
bucket{2b-4b} : row1, row2, row3
buckets for column 3:
bucket{-2b--1b} : row2, row3
bucket{-1b-0}   :
bucket{0-1b}    : row1
bucket{1b-2b}   :
Then, given a series of criteria, say {v[0]<=5, v[1]>3*10^9}, we pull out the buckets that match those criteria:
column 1:
v[0]<=5 matches buckets {1-3} and {4-6}, which is rows 2 and 3.
column 2:
v[1]>3*10^9 matches bucket {2b-4b}, which is rows 1, 2 and 3.
column 3:
no criterion, so all buckets match, which is rows 1, 2 and 3.
Now we know that the row(s) we're looking for must appear in the matching buckets of every criterion, so we intersect those lists; in this case, rows 2 and 3 remain. At this point the number of rows remaining will be small even for massive amounts of data, depending on the granularity of your buckets. You simply check each of the rows that is left to see whether it really matches. In this sample we see that row 2 matches, but row 3 doesn't (its column-2 value, 2793453667, is below 3*10^9).
This algorithm is technically O(n), but in practice, if you have large numbers of small buckets, this algorithm can be very fast.
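A Python sketch of this bucketing scheme (the bucket edges and the interval-style criteria are my own framing; an inclusive (lo, hi) pair stands in for each predicate):

import bisect
from collections import defaultdict

class BucketIndex:
    def __init__(self, rows, boundaries):
        """rows: fixed-size numeric vectors; boundaries[c]: sorted bucket
        edges for column c (the bucket count per column is arbitrary)."""
        self.rows = rows
        self.boundaries = boundaries
        self.buckets = [defaultdict(list) for _ in boundaries]
        for rid, row in enumerate(rows):
            for c, value in enumerate(row):
                b = bisect.bisect_right(boundaries[c], value)
                self.buckets[c][b].append(rid)

    def query(self, criteria):
        """criteria[c] is None or an inclusive (lo, hi) interval."""
        candidates = None
        for c, interval in enumerate(criteria):
            if interval is None:
                continue
            lo, hi = interval
            first = bisect.bisect_right(self.boundaries[c], lo)
            last = bisect.bisect_right(self.boundaries[c], hi)
            matched = set()
            for b in range(first, last + 1):   # buckets overlapping [lo, hi]
                matched.update(self.buckets[c].get(b, ()))
            candidates = matched if candidates is None else candidates & matched
        if candidates is None:
            candidates = range(len(self.rows))
        # exact check on the (hopefully few) remaining rows
        return sorted(rid for rid in candidates
                      if all(iv is None or iv[0] <= self.rows[rid][c] <= iv[1]
                             for c, iv in enumerate(criteria)))

index = BucketIndex(
    [[7, 3427438335, 420645075],
     [5, 3862506151, -1555396554],
     [1, 2793453667, -1743457796]],
    [[3, 6], [2 * 10**9], [-10**9, 0, 10**9]])
print(index.query([(float("-inf"), 5), (3 * 10**9, float("inf")), None]))  # [1]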
Using an index :)
The basic idea is to turn the 2-dimensional array into a 1-dimensional sorted array (while keeping the original positions) and apply binary search on the latter.
This method works for any n-dimensional array and is widely used by databases, which can be seen as n-dimensional arrays with variable lengths.
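A toy version of that idea for one vector component (sorted (value, position) pairs plus binary search; intersect the per-component results for compound queries):

import bisect

def build_index(rows, column):
    """Sorted (value, row_id) pairs for one vector component."""
    return sorted((row[column], rid) for rid, row in enumerate(rows))

def range_query(index, lo, hi):
    """Row ids whose indexed value lies in [lo, hi]."""
    left = bisect.bisect_left(index, (lo, -1))
    right = bisect.bisect_right(index, (hi, float("inf")))
    return {rid for _, rid in index[left:right]}

rows = [[7, 3], [5, 9], [1, 4]]
idx0, idx1 = build_index(rows, 0), build_index(rows, 1)
print(range_query(idx0, 1, 5) & range_query(idx1, 4, 9))  # {1, 2}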
Given two m x n matrices A and B whose elements belong to a set S.
Problem: Can the rows and columns of A be permuted to give B?
What is the complexity of algorithms to solve this problem?
Determinants partially help (when m=n): a necessary condition is that det(A) = +/- det(B).
Also allow A to contain "don't cares" that match any element of B.
Also, if S is finite allow permutations of elements of A.
This is not homework - it is related to the solved 17x17 puzzle.
See the example below of permuting the rows and columns of a matrix:
Observe the start matrix and the end matrix. All elements in a given row or column are retained; it's just that their order has changed, and the change in relative positions is uniform across rows and columns.
E.g., follow the 1 in the start matrix and the end matrix. Its row contains the elements 12, 3 and 14 along with it, and its column contains 5, 9 and 2. This is maintained across the transformation.
Based on this fact, I am putting forward this basic algorithm to decide whether, for given matrices A and B, the rows and columns of A can be permuted to give B (a sketch in code follows the steps):
1. For each row in A, sort the elements within the row. Do the same for B.
2. Sort the rows of A (and B) lexicographically, i.e. if row1 is {5,7,16,18} and row2 is {2,4,13,15}, put row2 above row1.
3. Compare the resulting matrices A' and B'.
4. If they are equal, do (1) and (2) again, but on the columns of the ORIGINAL matrices A and B instead of the rows.
5. Now compare the resulting matrices A'' and B''.
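A Python sketch of those steps (note that this gives a necessary condition only; matrices can agree on all these sorted forms without being row/column permutations of each other):

def canonical_rows(M):
    # steps 1-2: sort within each row, then sort the rows lexicographically
    return sorted(sorted(row) for row in M)

def maybe_permutable(A, B):
    """True is required (but not sufficient) for B to be obtainable from A
    by row and column permutations."""
    if canonical_rows(A) != canonical_rows(B):     # steps 1-3
        return False
    At = [list(col) for col in zip(*A)]            # steps 4-5 on the columns
    Bt = [list(col) for col in zip(*B)]
    return canonical_rows(At) == canonical_rows(Bt)

print(maybe_permutable([[1, 2], [3, 4]], [[4, 3], [2, 1]]))  # True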
I need to find out a method to determine how many items should appear per column in a multiple column list to achieve the most visual balance. Here are my criteria:
The list should only be split into multiple columns if the item count is greater than 10.
If multiple columns are required, they should contain no less than 5 (except for the last column in case of a remainder) and no more than 10 items.
If all columns cannot contain an equal number of items, all but the last column should be equal in number.
The number of items in each column should be optimized to achieve the smallest difference between the last column and the other column(s).
Well, your requirements and your examples appear a bit contradictory. For instance, your second example could be divided into two columns with 11 items in each and still satisfy your criteria. Let's assume that for rule #2 you meant that there should be <= 10 items per column.
In addition, I think you need to add another rule to make the requirements sensible:
The number of columns must not be greater than what is required to accommodate overflow.
Otherwise, you will often end up with degenerate solutions where you have far more columns than you need. For example, in the case of 26 items you probably don't want 13 columns of 2 items each.
If that's the case, here's a simple calculation that should work well and is easy to understand:
int numberOfColumns = CEILING(numberOfItems / 10);
int numberOfItemsPerColumn = CEILING(numberOfItems / numberOfColumns);
Now you'll create N-1 columns of numberOfItemsPerColumn items each, and the overflow will go in the last column. By this definition, the overflow in the last column is minimized.
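In code (note that CEILING must act on real division; the sketch below uses the integer ceiling trick instead):

def columns_for(number_of_items, max_per_column=10):
    # integer ceiling division: ceil(a / b) == (a + b - 1) // b
    number_of_columns = (number_of_items + max_per_column - 1) // max_per_column
    items_per_column = (number_of_items + number_of_columns - 1) // number_of_columns
    return number_of_columns, items_per_column

print(columns_for(26))  # (3, 9): two full columns of 9 and a last column of 8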
If you want to automatically determine the appropriate number of columns, and have no restrictions on its limits, I would suggest the following:
Calculate the square root of the total number of items; that would give a square layout.
Divide that number by 1.618, and assign that to the total number of rows.
Multiply that same number by 1.618, and assign that to the total number of columns.
All columns but the rightmost one will have the same number of items.
By the way, the constant 1.618 is the Golden Ratio, which will give a more pleasant layout than a square one.
Divide and multiply the other way round for vertical displays.
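As a small Python sketch (rounding up is one reasonable choice; the exact rounding is up to you):

import math

GOLDEN_RATIO = 1.618

def golden_layout(number_of_items, vertical=False):
    side = math.sqrt(number_of_items)   # the square layout
    rows, cols = side / GOLDEN_RATIO, side * GOLDEN_RATIO
    if vertical:                        # swap for vertical displays
        rows, cols = cols, rows
    return math.ceil(rows), math.ceil(cols)

print(golden_layout(40))  # (4, 11)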
Hope this algorithm helps anyone with a similar problem.
Here's what you're trying to solve:
minimize y - z where n = xy + z and 5 <= y <= 10 and 0 <= z <= y
where you have n items split into x full columns of y items and one remainder column of z items.
There is almost certainly a smart way of doing this, but given these constraints a brute force implementation exploring all 6 + 7 + 8 + 9 + 10 + 11 = 51 possible combinations for y and z would take no time at all (only assignments where (n - z) mod y = 0 are solutions).
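A brute-force sketch of exactly that search:

def best_split(n):
    """Minimize y - z subject to n = x*y + z, 5 <= y <= 10, 0 <= z <= y."""
    best = None
    for y in range(5, 11):
        for z in range(y + 1):
            if (n - z) % y == 0 and (best is None or y - z < best[0]):
                best = (y - z, y, z)
    return best   # (difference, items per full column, items in last column)

print(best_split(26))  # (1, 9, 8): columns of 9, 9 and a last column of 8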
I think a brute-force solution is easy, given the constraint on the number of items per column: let v be the number of items per column (except the last one); then v belongs to [5,10] and can thus take a whopping 6 different values.
Evaluating 6 values is easy enough. A Python one-liner (well, nearly) proves it:
# compute the difference between the number of items in the normal columns
# and in the last column; smaller is better
def helper(n, v):
    modulo = n % v
    if modulo == 0: return 0
    else: return v - modulo

# values can only be in [5,10]
# we compute the difference with the last column for each
# build a list of tuples (difference, -number_of_items)
# (negated because the greater the value, the better: it means fewer columns)
# extract the min automatically (in case of a tie, more items per column wins)
# and then pick the number of items from the tuple and re-invert it
def compute(n): return - min([(helper(n, v), -v) for v in [5, 6, 7, 8, 9, 10]])[1]
For 77 this yields 7, meaning 7 items per column.
For 22 this yields 8, meaning 8 items per column.