I searched for Knapsack algorithm on the net, and in all the implementations, I saw that the 2D array is of the form:
int K[n+1][W+1];
where n is the number of elements and W is the maximum weight which can be accommodated in the Knapsack.
This array was filled in a bottom up manner, in a row major format. Can it even be done in a column major format?
Roughly, the only requirement on the order in which the array is filled is that if a <= b and c <= d, then cell (a, c) is filled no later than cell (b, d). This follows from the data dependencies of the dynamic program: K[i][w] reads only K[i-1][w] and K[i-1][w - w_i], both of which sit in the previous row at no larger a capacity. Row-major, column-major, and many other fill orders satisfy this.
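As a concrete illustration, here is a column-major fill in Python (a sketch of the standard 0/1 knapsack recurrence; the function and variable names are just for illustration):

```python
def knapsack_column_major(weights, values, W):
    """0/1 knapsack, filling the DP table column by column (capacity-major).

    K[i][w] = best value using the first i items with capacity w.
    Each cell reads only row i-1 (same or smaller capacity), so any order
    that fills (i-1, *) cells before (i, w) -- including this column-major
    order -- is valid.
    """
    n = len(weights)
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for w in range(W + 1):            # outer loop over columns (capacities)
        for i in range(1, n + 1):     # inner loop over rows (items)
            K[i][w] = K[i - 1][w]     # skip item i
            if weights[i - 1] <= w:   # or take item i, if it fits
                K[i][w] = max(K[i][w],
                              K[i - 1][w - weights[i - 1]] + values[i - 1])
    return K[n][W]
```

The inner loop only ever looks at row i-1, in the current column (already filled earlier in this inner loop) or in an earlier column (filled completely), so the fill order is safe.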
I have an array A (size <= 10^5) of numbers (each <= 10^8), and I need to answer up to 50,000 queries: for a given range [L, R], how many subsets of the elements in [L, R] have an XOR with 0 or 1 bit set (i.e. zero or a power of 2)? Point modifications to the array are made in between the queries, so I can't do offline processing or use techniques like square-root decomposition.
I have an approach where I use DP to calculate for a given range, something on the lines of this:
https://www.geeksforgeeks.org/count-number-of-subsets-having-a-particular-xor-value/
But this is clearly too slow. This feels like a classical segment tree problem, but I can't work out what data to store at each node so that the answer for a range can be computed from the left and right children.
Yeah, that DP won't be fast enough.
What will be fast enough is applying some linear algebra over GF(2), the Galois field with two elements. Each number can be interpreted as a bit-vector; adding/subtracting vectors is XOR; scalar multiplication isn't really relevant.
The data you need for each segment is (1) how many numbers the segment contains and (2) a basis for the subspace generated by the numbers in the segment, which will consist of at most 27 numbers because all values are less than 2^27. The basis for a one-element segment is just that number if it's nonzero, else the empty set. To find a basis for the span of the union of two bases, use Gaussian elimination and discard the zero vectors.
Given the length of an interval and a basis for it, you can count the number of good subsets using the rank-nullity theorem. Basically, for each target number, use your Gaussian elimination routine to test whether the target number belongs to the subspace. If so, there are 2^(length of interval minus size of basis) subsets. If not, the answer is zero.
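Here is a Python sketch of the per-segment data and the counting step (it assumes all values fit in 27 bits, as in the question; the helper names are mine):

```python
BITS = 27  # values are at most 10^8 < 2^27

def insert(basis, x):
    """Insert x into a GF(2) basis stored as basis[b] = vector whose leading
    bit is b (or 0 if no such vector). Returns True if x was independent."""
    for b in range(BITS - 1, -1, -1):
        if not (x >> b) & 1:
            continue
        if basis[b] == 0:
            basis[b] = x
            return True
        x ^= basis[b]        # eliminate the leading bit and keep reducing
    return False             # x reduced to 0: it was already in the span

def merge(b1, b2):
    """Basis for the span of the union of two bases
    (this is the combine step for two segment-tree children)."""
    out = b1[:]
    for v in b2:
        if v:
            insert(out, v)
    return out

def count_subsets(length, basis, target):
    """Subsets of a segment of `length` numbers whose XOR equals `target`:
    by rank-nullity, 2^(length - rank) if target lies in the span, else 0.
    Note that for target 0 this count includes the empty subset."""
    rank = sum(1 for v in basis if v)
    for b in range(BITS - 1, -1, -1):
        if (target >> b) & 1 and basis[b]:
            target ^= basis[b]
    if target:
        return 0
    return 1 << (length - rank)
```

For the original question, sum count_subsets over the targets 0, 1, 2, 4, ..., 2^26.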
Given:
number of rows
number of columns
maximum value matrix can take
Two matrices are considered equivalent if one can be obtained from the other by permuting rows and permuting columns. Equivalent matrices can be grouped together. How do I find the number of such groups?
This is a typical Pólya/Burnside counting problem. You should first learn it and the related concepts (permutations, cycles, groups) from Wikipedia.
Say the number of rows is N, the number of columns is M, and the maximum value is V. The group then has N! * M! elements (each a pair of a row permutation and a column permutation), and there are V colors we can use.
The easy solution is to enumerate all pairs of a row permutation and a column permutation. For each pair g, a row cycle of length a and a column cycle of length b induce gcd(a, b) cycles on the cells, so c(g) is the sum of gcd(a, b) over all pairs of cycles, and g fixes V^c(g) matrices. This gives an O(N! * M!) algorithm (times a small polynomial factor).
The advanced solution needs a trick: V^c(g) depends only on the cycle types (the multisets of cycle lengths) of the row and column permutations. So enumerate the integer partitions of N and of M, count the permutations of each cycle type, and sum the contributions weighted by those counts. Since the number of partitions grows far more slowly than the factorial, this is much faster with a proper implementation.
In both cases you will need big-integer arithmetic, such as Java's BigInteger class, as the answer can be very large.
If I get more time and you need some code for demonstration, I'll write it later.
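For demonstration, here is the brute-force version in Python (Python integers are arbitrary precision, so no BigInteger class is needed; the fixed matrices of a pair of permutations are counted via gcd of cycle lengths, as in Burnside's lemma):

```python
from itertools import permutations
from math import factorial, gcd

def cycle_lengths(perm):
    """Cycle lengths of a permutation given in one-line notation."""
    seen = [False] * len(perm)
    lengths = []
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        lengths.append(length)
    return lengths

def count_matrix_classes(N, M, V):
    """Number of N x M matrices with entries in {1..V}, up to row and
    column permutations: average, over all N! * M! group elements,
    the number of matrices each element fixes (Burnside's lemma)."""
    total = 0
    for rp in permutations(range(N)):
        row_cycles = cycle_lengths(rp)
        for cp in permutations(range(M)):
            col_cycles = cycle_lengths(cp)
            # a row cycle of length a and a column cycle of length b
            # together induce gcd(a, b) cycles on the cells
            cells = sum(gcd(a, b) for a in row_cycles for b in col_cycles)
            total += V ** cells
    return total // (factorial(N) * factorial(M))
```

Only practical for small N and M, but it makes the counting argument concrete and serves as a reference for the partition-based version.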
Hi fellows
I have a question in regards to an algorithm problem I am trying to solve.
The input data to the algorithm is a matrix that can be described as follows:
The matrix has N dimensions, like the one in the attached picture (which has 2 dimensions for example).
There are a number of elements (denoted as c1, .. c5 in the attached sample matrix) that are mapped into the cells of the matrix.
An element can occur in one or more cells in the matrix.
The algorithm should work as follows:
Pick (and then remove) elements from the matrix and group them into buckets of E elements, so that:
the elements in each bucket don't have any common values on any of the dimensions
each element is assigned only into one bucket
For example, referring to the sample matrix attached, if we selected E to be 2, then a possible grouping could be: {c1, c2}; {c3, c4}; and {c5} by itself. Check: c1 and c2 (and likewise c3 and c4) don't share any column or row values in the matrix.
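Since the attached picture may not come through, here is the bucket constraint in code, with made-up coordinates for the elements (the `cells` mapping below is hypothetical; only the check itself is the point):

```python
def bucket_is_valid(bucket, cells):
    """True iff no two elements of `bucket` share a coordinate value on any
    dimension. `cells` maps each element to its set of cell tuples."""
    seen = {}  # dimension index -> coordinate values used so far
    for elem in bucket:
        used = {}  # values this element occupies, per dimension
        for cell in cells[elem]:
            for d, v in enumerate(cell):
                used.setdefault(d, set()).add(v)
        for d, vals in used.items():
            if seen.get(d, set()) & vals:
                return False  # an earlier element already uses this row/column
            seen.setdefault(d, set()).update(vals)
    return True

# Hypothetical 2-D placement (the real one is in the attached picture):
cells = {"c1": {(0, 0)}, "c2": {(1, 1)}, "c3": {(0, 2)}, "c4": {(2, 1)}}
```

With these coordinates, {c1, c2} and {c3, c4} are valid buckets, while {c1, c3} is not (c1 and c3 share row 0).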
The problem has been doing my head in for some time now and I thought it may be worth asking if some whiz here has a solution.
Thanks
Given an m x n matrix G, we first define two real-valued functions on matrices, and from them a value m(X) for each matrix X, computed by comparing the entries of X. Now we have r regions of G, denoted G_1, ..., G_r. Here, a region of G is a submatrix formed from some chosen rows and columns of G. Our problem is to compute m(G_1), ..., m(G_r) with as few operations as possible. Are there any methods, like building a hash table or sorting, to get the results faster? Thanks!
========================
For example, if G={{1,2,3},{4,5,6},{7,8,9}}, then
G_1 could be {{1,2},{7,8}}
G_2 could be {{1,3},{4,6},{7,9}}
G_3 could be {{5,6},{8,9}}
=======================
Currently, for each G_i we need m*n comparisons to compute m(G_i), so computing m(G_1), ..., m(G_r) takes r*m*n comparisons. However, the regions G_i and G_j may overlap, so there might be a more effective approach. Any suggestion would be highly appreciated!
Depending on how many times the min/max type data is needed, you could consider a structure that holds the min/max information between pairs of matrix values. Thus, for your example G={{1,2,3},{4,5,6},{7,8,9}}, we would define a relationship matrix R sized (m*n) x (m*n), with values from the set C = {-1 = less than, 0 = equals, 1 = greater than}.
For each value n, R holds the nine relationship entries (n,1), (n,2), ..., (n,9), each a member of C (note that (n,n) is defined and equals 0). Thus R[4,·] = (1,1,1,0,-1,-1,-1,-1,-1). Now consider any of your subsets G_1, ...: knowing the positions of a subset's members gives you offsets into R, which resolve to indexes into the rows R[n,·] and return the desired relationship information directly, without comparisons.
You will, of course, have to decide whether the overhead in space and computation to build R exceeds the cost of just computing what you need each time it's needed. Certain optimizations are available: R is reflected along the major diagonal, and you could declare "equals" to count as, say, less than (so that C has only two values). Depending on the original matrix G, other optimizations can be had if it is known that a row or column is sorted.
And since some systems (mainframes, supercomputers, etc.) store data in RAM in column-major order, store your dataset with the rows and columns transposed, so that column-to-column operations (vector calculations) actually favor the columns. Check your architecture.
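A minimal Python sketch of the idea (the names are illustrative; R is the (m*n) x (m*n) table of pairwise relationships, and a region minimum is then found by table lookups instead of fresh value comparisons):

```python
def build_relationships(G):
    """Flatten G row by row and precompute R[p][q] in {-1, 0, 1}
    according to whether flat[p] is <, ==, or > flat[q]."""
    flat = [v for row in G for v in row]
    k = len(flat)
    R = [[(flat[p] > flat[q]) - (flat[p] < flat[q]) for q in range(k)]
         for p in range(k)]
    return flat, R

def region_min(flat, R, indices):
    """Minimum of the region given by flat indices, using only lookups in R."""
    best = indices[0]
    for p in indices[1:]:
        if R[p][best] < 0:   # flat[p] < flat[best], per the precomputed table
            best = p
    return flat[best]
```

Building R costs O((m*n)^2) space and comparisons up front; after that, each region query touches R only.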
I have been pulling my hair out on one problem... The overall problem is complicated... but let me try my best to explain the part that really matters...
I have a graph where each edge represents the correlation between the two connected nodes. Each node is a time course (TC) (i.e., 400 time points), where events occur at different time points. The correlation between two nodes is defined as the percentage of overlapped events. For the simplicity of this example, let us assume that the total number of events happening on each node is the same, $tn$. If two TCs (nodes) have $on$ overlapped events (i.e., events that happened at exactly the same time point), then the correlation can be defined simply as $on$/$tn$.
Now, I have a network of 11 nodes, and I know the correlation between every two nodes. How do I generate the TCs for all 11 nodes so that they meet the correlation constraints?
It is easy to do this for two nodes when you know the correlation between the two. Assume TC_1 and TC_2 have a correlation value of 0.6, which means there are 60 percent overlapped events in two TCs. Also, assume that the total number of events are the same for both TC_1 and TC_2 as $tn$. A simple algorithm to place the events in the two TCs is first randomly pick 0.6*$tn$ time points, and consider those as time slots where overlapped events happened in both TCs. Next, randomly pick (1-0.6)*$tn$ time points in TC_1 to place the rest of the events for TC_1. Finally, randomly pick (1-0.6)*$tn$ time points in TC_2 where no events happened in correspondent time points in TC_1.
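The two-node algorithm just described can be sketched in Python like this (a sketch under the stated assumptions: equal event counts $tn$, time courses of length T, and T large enough to hold all the non-overlapping events):

```python
import random

def generate_two_tcs(T, tn, corr, rng=random):
    """Place tn events in each of two time courses of length T so that
    round(corr * tn) events overlap (occur at the same time point) and
    the remaining events do not.

    Assumes T >= 2*tn - round(corr*tn), so every event gets its own slot.
    Returns the two sorted lists of event time points.
    """
    on = round(corr * tn)
    slots = list(range(T))
    rng.shuffle(slots)
    shared = slots[:on]                          # overlapped events, in both TCs
    rest = slots[on:]
    tc1 = shared + rest[:tn - on]                # remaining events of TC_1
    tc2 = shared + rest[tn - on:2 * (tn - on)]   # TC_2's extras avoid TC_1's
    return sorted(tc1), sorted(tc2)
```

Because the non-shared slots are drawn from disjoint parts of the shuffled slot list, the overlap is exactly round(corr * tn), which is the whole point of the construction.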
However, it gets harder with a 3-node network, where the three generated TCs need to meet all three correlation constraints (i.e., 3 edges)... It seems hardly possible to do this by hand for an 11-node network...
Does this make any sense to you? Please let me know if it's not...
I was thinking that this is just a tricky computer science programming issue... but the more I think about it, the more it looks like a linear programming problem, doesn't it?
Does anyone have a reasonable solution? I am doing this in R, but any code is OK...
I think there is a simpleminded linear programming approach. Represent a solution as a matrix, where each column is a node, and each row is an event. The cells are either 0 or 1 to say that an event is or is not associated with a given node. Your correlation constraint is then a constraint fixing the number of 11s in a pair of columns, relative to the number of 1s in each of those columns, which you have in fact fixed ahead of time.
Given this framework, if you treat each possible row as a particular item occurring X_i times, then you will have constraints of the form SUM_i X_i * P_ij = K_j, where P_ij is 0 or 1 depending on whether possible row i has 11 in the pair of columns counted by j. Of course this is a bit of a disaster for large numbers of nodes, but with 11 nodes there are 2048 possible rows, which is not completely unmanageable. The X_i may not come out integral, but I guess they should be rational, so if you are prepared to use astounding numbers of rows/events you should be OK.
Unfortunately, you may also have to try different total row counts, because there are inequalities lurking around: if there are N rows and two columns have m and n 1s in them, there must be at least m + n - N 11s in that column pair. You could in fact make the common number of 1s in each column come out as a solution variable as well - this would give you a new set of constraints of the same form, in which Q_ij is 0 or 1 depending on whether possible row i has a 1 in column j.
There may be a better answer lurking out there. In particular, generating normally distributed random variables with particular correlations is easy (when feasible) - http://en.wikipedia.org/wiki/Cholesky_decomposition#Monte_Carlo_simulation and (according to Google) R's mvrnorm. Consider a matrix with 2^N rows and 2^N - 1 columns filled with entries which are +/-1. Label the rows with all combinations of N bits and the columns with all non-zero combinations of N bits. Fill each cell with (-1)^(parity of row label AND column label). Each column has equal numbers of +1 and -1 entries. If you multiply two columns together element by element you get a different column, which also has equal numbers of +1 and -1 entries - so the columns are mutually uncorrelated. If your Cholesky decomposition provides you with matrices whose elements are in the range [-1, 1], you may be able to use it to combine columns, where you combine them by picking at random from a column or its negation according to a particular probability.
This also suggests that you might possibly get by in the original linear programming approach with for example 15 columns by choosing from amongst not the 2^15 different rows that are all possibilities, but from amongst the 16 different rows that have the same pattern as a matrix with 2^4 rows and 2^4-1 columns as described above.
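The +/-1 construction described above can be written down and checked directly (a short sketch; `pm_one_matrix` is my name for it):

```python
def pm_one_matrix(N):
    """2^N rows by 2^N - 1 columns. Rows are labelled by all N-bit strings,
    columns by the nonzero N-bit strings, and each entry is
    (-1) ** (parity of row_label AND column_label)."""
    rows = range(2 ** N)
    cols = range(1, 2 ** N)
    return [[(-1) ** bin(r & c).count("1") for c in cols] for r in rows]
```

Each column is balanced (equal numbers of +1 and -1), and the element-by-element product of columns labelled c1 and c2 is the column labelled c1 XOR c2, which is again balanced - so distinct columns have dot product 0, i.e. they are mutually uncorrelated.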
If a solution exists (the problem may have none), you can represent this as a system of linear equations.
x1/x2 = b =>
x1 - b*x2 = 0
or just
a*x1 + b*x2 = 0
You should be able to transform this into solving a system of linear equations, or more precisely a homogeneous system of linear equations, since b in Ax = b equals 0.
The problem is that with n nodes you have n*(n-1)/2 relations (equations), which is too many, and there may be no exact solution.
You might represent this problem as
Minimize ||Ax|| where x > 0 and x·x == constant
You can represent this as a mixed integer program.
Suppose we have N nodes, and that each node has T total time slots. You want to find an assignment of events to these time slots. Each node has tn <= T events. There are M total edges in your graph. Between any pair of nodes i and j that share an edge, you have a coefficient
c_ij = overlap_ij/tn
where overlap_ij is the number of overlapping events.
Let x[i,t] be a binary variable defined as
x[i,t] = { 1 if an event occurs at time t in node i
= { 0 otherwise.
Then the constraint that tn events occur at node i can be written as:
sum_{t=1}^T x[i,t] == tn
Let e[i,j,t] be a binary variable defined as
e[i,j,t] = { 1 if node i and node j share an event a time t
= { 0 otherwise
Let N(i) denote the neighbors of node i. Then we have that at each time t
sum_{j in N(i)} e[i,j,t] <= x[i,t]
This says that if a shared event occurs in a neighbor of node i at time t, then node i must have an event at time t. Furthermore, if node i has two neighbors u and v, we can't have e[i,u,t] + e[i,v,t] > 1 (meaning that two shared events would occupy the same time slot), because the sum over all neighbors is at most x[i,t] <= 1.
We also know that there must be overlap_ij = tn*c_ij overlapping events between node i and node j. This means that we have
sum_{t=1}^T e[i,j,t] == overlap_ij
Putting this all together you get the following MIP
minimize 0
e, x
subject to sum_{t=1}^T x[i,t] == tn, for all nodes i=1,...,N
sum_{j in N(i)} e[i,j,t] <= x[i,t],
for all nodes i=1,...,N and all time t=1,...,T
sum_{t=1}^T e[i,j,t] == overlap_ij for all edges (i,j)
between nodes
x[i,t] binary for i=1,...,N and t=1,...,T
e[i,j,t] binary for all edges (i,j) and t=1,...,T
Here the objective is zero, since your model is a pure feasibility problem. This model has a total of T*N + M*T variables and N + N*T + M constraints.
A MIP solver like Gurobi can solve the above problem, or prove that it is infeasible (i.e. no solution exists). Gurobi has an interface to R.
You can extract the final time series of events for the ith node by looking at the solution vector x[i,1], x[i,2], ..., x[i,T].
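For a toy instance, the same feasibility question can be checked without a MIP solver by brute force over event placements (the instance in the test is made up; here overlap_ij is taken directly as the number of time slots where both nodes have an event, as in the original problem statement, so the e variables never have to be enumerated):

```python
from itertools import combinations

def find_event_times(N, T, tn, targets):
    """Backtracking search for sets x[0..N-1] of tn time slots each, such
    that |x[i] & x[j]| == targets[(i, j)] for every constrained pair i < j.

    Only for tiny instances: explores up to C(T, tn)^N placements.
    Returns sorted slot lists per node, or None if infeasible.
    """
    chosen = []

    def feasible(i, s):
        for j in range(i):
            t = targets.get((j, i))
            if t is not None and len(s & chosen[j]) != t:
                return False
        return True

    def rec(i):
        if i == N:
            return True
        for slots in combinations(range(T), tn):
            s = set(slots)
            if feasible(i, s):
                chosen.append(s)
                if rec(i + 1):
                    return True
                chosen.pop()
        return False

    return [sorted(s) for s in chosen] if rec(0) else None
```

This is hopeless at the scale of 11 nodes and 400 time points, where the MIP formulation above is the right tool, but it is handy for sanity-checking the constraints on small examples.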