I have two tables in KDB.
One is a timeseries with a datetime and a sym column (spanning multiple dates, e.g. 1 or 2 million rows). Each timepoint has the same number of syms, plus a few other standard columns such as price.
Let's call this t1:
`date`datetime`sym`price
The other table is of this structure:
`date`sym`factors`weights
where, for each sym, factors is a list and weights is a list of the same length.
Let's call this t2.
I'm doing a left join on these two tables and then an ungroup.
The factors and weights lists are not the same length for every sym (the length varies from sym to sym).
I'm doing the following:
select sum (weights*price) by date, factors from ungroup t1 lj `date`sym xkey t2
However, this is very slow: it can take 5-6 seconds when t1 has a million rows or more.
Calling all kdb experts for some advice!
EDIT:
here's a full example:
(apologies for the roundabout way of defining t1 and t2)
interval: `long$`time$00:01:00;
hops: til 1+ `int$((`long$(et:`time$17:00)-st:`time$07:00))%interval;
times: st + `long$interval*hops;
dates: .z.D - til .z.D-.z.D-10;
timepoints: ([] date: dates) cross ([] time:times);
syms: ([] sym: 300?`5);
universe: timepoints cross syms;
t1: update datetime: date+time, price:count[universe]?100.0 from universe;
t2: ([] date:dates) cross syms;
/ note: in my real-life t2 each sym does not have exactly 10 weights/factors; the count varies by sym.
t2: `date`sym xkey update factors: count[t2]#enlist 10?`5, weights: count[t2]#enlist 10?10 from t2;
/ what is slow is the ungroup
select sum weights*price by date, datetime, factors from ungroup t1 lj t2
One approach to avoid the ungroup is to work with matrices (aka lists of lists) and take advantage of the optimised matrix-multiply $ seen here: https://code.kx.com/q/ref/mmu/
In my approach below, instead of joining t2 to t1 to ungroup, I group t1 and join to t2 (thus keeping everything as lists of lists) and then use some matrix manipulation (with a final ungroup at the end on a much smaller set)
q)\ts res:select sum weights*price by date, factors from ungroup t1 lj t2
4100 3035628112
q)\ts resT:ungroup exec first factors,sum each flip["f"$weights]$price by date:date from t2 lj select price by date,sym from t1;
76 83892800
q)(0!res)~`date`factors xasc `date`factors`weights xcol resT
1b
As you can see it's much quicker (at least on my machine) and the result is identical save for ordering and column names.
You may still need to modify this solution somewhat to fit your actual use case (with variable-length weights etc.); in that case perhaps enforce a uniform number of weights across each sym, padding with zeros where necessary, as sketched below.
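To illustrate that padding idea outside of q, here is a rough NumPy sketch (Python rather than q; the data and names are made up, and it assumes the factor lists share a common ordering so that padding aligns positions). For one date it pads every sym's weights to a common length, stacks them into a syms x factors matrix, and then a single matrix-vector product replaces the per-row ungroup-and-sum.

import numpy as np

# Toy stand-ins for one date's slice of t2 and t1 (all names are illustrative).
weights_by_sym = {"aaa": [0.5, 0.3], "bbb": [0.2, 0.4, 0.4]}   # variable-length weights
price_by_sym   = {"aaa": 101.0, "bbb": 98.5}

syms  = sorted(weights_by_sym)
width = max(len(w) for w in weights_by_sym.values())

# Pad each weight list with zeros to a uniform width, giving a syms x width matrix.
W = np.array([weights_by_sym[s] + [0.0] * (width - len(weights_by_sym[s])) for s in syms])
p = np.array([price_by_sym[s] for s in syms])

# Entry i of the product is the sum over syms of weights[:, i] * price.
factor_sums = W.T @ p
print(factor_sums)   # approximately [70.2, 69.7, 39.4]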
I would like to generate a random matrix with constraints on both rows and columns in MATLAB. The problem is that I have two parameter vectors for these constraints which are not fixed for each element. For explanation, consider the m-by-n matrix P = [P1; P2; ...; Pm], and two other vectors lambda and Mu with m and n elements, respectively.
Consider lambda as [lambda(1), lambda(2), ..., lambda(m)] and Mu as [Mu(1), Mu(2), ..., Mu(n)].
lambda and Mu satisfy this constraint:
1. sum of lambda(s) < sum of Mu(s).
Now for the random matrix P:
2. each element of the matrix (P[j,i]) should be greater than or equal to zero.
3. the sum of the elements of each row is equal to one, i.e. for row j: sigma_i P[j,i] = 1.
4. for each column i, the sum of the products of the elements with the corresponding lambda(j) is less than the corresponding element of the Mu vector, i.e. for column i: sigma_j P[j,i]*lambda(j) < Mu(i).
I have tried coding all these constraints, but because of the lambda and Mu vectors only one of constraints 3 or 4 ends up being satisfied. Could you please help me with coding this matrix?
Thanks in advance
There could be values of Mu and Lambda that do not allow any value of P[i,j].
For each row-vector v:
Constraint 3 means the values are constrained to the hyper-plane v.1 = 1 (A)
Constraint 4 means the values are constrained to the half-space v.Lambda < m (H), where m is the element of Mu corresponding to the current row.
Constraint 1 does not guarantee that these two constraints generate a non-empty solution space.
To verify that the solution space is non-empty, the easiest method is to check each corner of hyper-plane A (<1,0,0,...>, <0,1,0,...>, ...). If at least one of the corners qualifies for constraint 4, the solution space is non-empty.
Having said that, assuming the solution space is non-empty, you could generate values matching those constraints by the following steps (a code sketch follows below):
Generate a random vector with elements 0 ≤ vi ≤ 1.
Scale by dividing by the sum of the elements.
If this vector does not qualify for constraint 4, repeat from step 1.
Once you have n such vectors, combine them as rows into a matrix.
The speed of this algorithm depends on how large a fraction of hyper-plane A's volume is contained inside the half-space H. If only 1% is contained, it would be expected to require about 100 iterations for that row.
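For concreteness, here is a rough Python/NumPy sketch of that rejection loop (the question asks for MATLAB, but the idea translates directly; it follows the per-row framing above with one Mu bound per row, and all names and numbers below are illustrative). It also folds in the corner check described earlier: on the simplex the corners are the unit vectors, so the check reduces to comparing the smallest Lambda entry against the bound.

import numpy as np

def feasible_row(lam, bound):
    # Corner check: at corner e_i of the simplex, v . lam equals lam[i],
    # so a feasible point exists iff min(lam) < bound.
    return np.min(lam) < bound

def sample_row(lam, bound, rng, max_tries=10000):
    # Rejection sampling: draw a nonnegative vector, scale it onto the simplex
    # (constraint 3), accept it if it also lies in the half-space v . lam < bound
    # (constraint 4).
    for _ in range(max_tries):
        v = rng.random(len(lam))
        v /= v.sum()
        if v @ lam < bound:
            return v
    raise RuntimeError("no sample found; the feasible region may be tiny or empty")

rng = np.random.default_rng(0)
lam = np.array([0.5, 1.0, 2.0])
mu  = np.array([1.2, 1.5, 1.8])   # one bound per row in this toy setup

rows = []
for bound in mu:
    if not feasible_row(lam, bound):
        raise ValueError("constraints are infeasible for this row")
    rows.append(sample_row(lam, bound, rng))
P = np.vstack(rows)
print(P)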
I have an n-by-n singular matrix. I want to add k rows (which must be from the standard basis e1, e2, ..., en) to this matrix such that the new (n+k)-by-n matrix is full column rank. The number of added rows k must be minimal, and they can be added in any order (not just e1, e2, ..., it can be e4, e10, e1, ...) as long as k is minimal.
Does anybody know a simple way to do this? Any help is appreciated.
You can achieve this by doing a QR decomposition with column pivoting, then taking the transpose of the last n-rank(A) columns of the permutation matrix.
In MATLAB, this is achieved by the qr function (see the MATLAB documentation for qr):
r = rank(A);
[Q,R,E] = qr(A);
newA = [A; transpose(E(:,r+1:end))];
Each row of transpose(E(:,r+1:end)) will be a member of the standard basis, the rank of newA will be n, and n - rank(A) is also the minimal number of standard basis vectors you will need to add.
Here is how this works:
QR decomposition with column pivoting is a standard procedure to decompose a matrix A into products:
A*E==Q*R
where Q is an orthogonal matrix if A is real, or a unitary matrix if A is complex; R is an upper triangular matrix, and E is a permutation matrix.
In short, the permutations are chosen so that the diagonal elements are larger than the off-diagonals in the same row, and the sizes of the diagonal elements are non-increasing. A more detailed description can be found on the netlib QR factorization page.
Since Q and E are both orthogonal (or unitary) matrices, the rank of R is the same as the rank of A. To bring up the rank of A, we just need to find ways to increase the rank of R; and this is much more straightforward thanks to the structure of R as the result of pivoting and the fact that it is upper triangular.
Now, with the requirement placed on the pivoting procedure, if any diagonal element of R is 0, the entire row has to be 0. The n-rank(A) rows of 0s at the bottom of R are responsible for the nullity. If we replaced the lower right corner with an identity matrix, that new matrix would be full rank. Well, we cannot really do the replacement, but we can append those rows to the bottom of R and form a new matrix that has the same rank:
B==[ 0 I ] => newR=[ R ; B ]
Here the dimension of I is the nullity of A, which is also the nullity of R.
It is readily seen that rank(newR)=n. Then we can also define a new unitary Q matrix by expanding its dimensionality in a trivial manner:
newQ=[Q 0 ; 0 I]
With that, our new rank n matrix can be obtained as
newA = newQ*newR*transpose(E) = [Q*R ; B]*transpose(E) = [A ; B*transpose(E)]
Note that B is [0 I] and E is a permutation matrix, so B*transpose(E) is simply the transpose of the last n-rank(A) columns of E, and thus a set of rows made of standard basis vectors, and that's just what you wanted!
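For reference, here is the same construction sketched in Python with SciPy's column-pivoted QR, which returns a permutation vector rather than a permutation matrix (the function name and the tolerance below are just illustrative choices):

import numpy as np
from scipy.linalg import qr

def rank_completing_rows(A, tol=1e-10):
    # Column-pivoted QR: A[:, perm] = Q @ R with |diag(R)| non-increasing.
    _, R, perm = qr(A, pivoting=True)
    r = int(np.sum(np.abs(np.diag(R)) > tol))   # numerical rank of A
    return perm[r:]                             # columns pivoted last mark the deficiency

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])                    # rank 2
idx = rank_completing_rows(A)
newA = np.vstack([A, np.eye(A.shape[1])[idx]])  # append the standard basis rows e_idx
print(idx, np.linalg.matrix_rank(newA))         # one index, and the new rank is 3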
Is n very large? The simplest solution without using any math would be to try adding e_i and seeing if the rank increases. If it does, keep e_i; proceed until finished.
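A rough Python sketch of that greedy check (illustrative only; it recomputes the rank for every candidate row, so it is simple but not cheap for large n):

import numpy as np

def greedy_complete(A):
    # Try each standard basis row in turn; keep it only if it raises the rank.
    n = A.shape[1]
    M = A.copy()
    kept = []
    for i in range(n):
        if np.linalg.matrix_rank(M) == n:
            break
        candidate = np.vstack([M, np.eye(n)[i]])
        if np.linalg.matrix_rank(candidate) > np.linalg.matrix_rank(M):
            M = candidate
            kept.append(i)
    return kept, M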
I like Xiaolei Zhu's solution because it's elegant, but another way to go (that's even more computationally efficient) is:
Determine if any rows, indexed by i, of your matrix A are all zero. If so, then the corresponding e_i must be concatenated.
After that process, you can simply concatenate any subset of the n - rank(A) columns of the identity matrix that you didn't add in step 1.
Rows/columns from the identity matrix can be added in any order; they do not need to be added in the usual order e1, e2, ... to make the matrix full rank.
I have these two tables:
TableA
ID Opt1 Opt2 Type
1 A Z 10
2 B Y 20
3 C Z 30
4 C K 40
and
TableB
ID Opt1 Type
1 Z 57
2 Z 99
3 X 3000
4 Z 3000
What would be a good algorithm to find arbitrary relations between these two tables? In this example, I'd like it to find the apparent relation between records containing Opt1 = C in TableA and Type = 3000 in TableB.
I could think of Apriori in some way, but it doesn't seem too practical. What do you guys say?
Thanks.
It sounds like a relational data mining problem. I would suggest trying Ross Quinlan's FOIL: http://www.rulequest.com/Personal/
In pseudocode, a naive implementation might look like:
for each column c1 in table1
    for each column c2 in table2
        if approximately_isomorphic(c1, c2) then
            emit (c1, c2)

approximately_isomorphic(c1, c2)
    pairs = empty hash set
    for i = 1 to min(|c1|, |c2|) do
        pairs.add( (c1[i], c2[i]) )
    if |pairs| - unique_count(c1) < error_margin then return true
    else return false
The idea is this: do a pairwise comparison of the elements of each column with each other column. For each pair of columns, construct a hash map linking corresponding elements of the two columns. If the hash map contains the same number of linkings as unique elements of the first column, then you have a perfect isomorphism; if you have a few more, you have a near isomorphism; if you have many more, up to the number of elements in the first column, you have what probably doesn't represent any correlation.
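A small Python sketch of this idea, using columns from the example tables (the error margin here is made up):

def approximately_isomorphic(c1, c2, error_margin=1):
    # Count distinct (c1 value, c2 value) pairings; a c1 value that maps to
    # several different c2 values adds pairings beyond len(set(c1)).
    pairs = {(a, b) for a, b in zip(c1, c2)}
    return len(pairs) - len(set(c1)) < error_margin

opt1_a = ["A", "B", "C", "C"]       # TableA.Opt1
type_b = [57, 99, 3000, 3000]       # TableB.Type
id_b   = [1, 2, 3, 4]               # TableB.ID

print(approximately_isomorphic(opt1_a, type_b))  # True: 3 pairings, 3 unique values
print(approximately_isomorphic(opt1_a, id_b))    # False: 4 pairings, 3 unique values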
Example on your input:
ID & anything : perfect isomorphism since all of ID are unique
Opt1 & ID : 4 mappings and 3 unique values; not a perfect
isomorphism, but not too far away.
Opt1 & Opt1 : ditto above
Opt1 & Type : 3 mappings & 3 unique values, perfect isomorphism
Opt2 & ID : 4 mappings & 3 unique values, not a perfect
isomorphism, but not too far away
Opt2 & Opt2 : ditto above
Opt2 & Type : ditto above
Type & anything: perfect isomorphism since all of Type are unique
For best results, you might do this procedure both ways - that is, comparing table1 to table2 and then comparing table2 to table1 - to look for bijective mappings. Otherwise, you can be thrown off by trivial cases... all values in the first are different (perfect isomorphism) or all values in the second are the same (perfect isomorphism). Note also that this technique provides a way of ranking, or measuring, how similar or dissimilar columns are.
Is this going in the right direction? By the way, this is O(ijk) where table1 has i columns, table 2 has j columns and each column has k elements. In theory, the best you could do for a method would be O(ik + jk), if you can find correlations without doing pairwise comparisons.
I have got a square matrix consisting of elements either 1 or 0. An ith row toggle toggles all the ith row elements (1 becomes 0 and vice versa) and a jth column toggle toggles all the jth column elements. I have got another square matrix of similar size. I want to change the initial matrix to the final matrix using the minimum number of toggles. For example
|0 0 1|
|1 1 1|
|1 0 1|
to
|1 1 1|
|1 1 0|
|1 0 0|
would require a toggle of the first row and of the last column.
What will be the correct algorithm for this?
In general, the problem will not have a solution. To see this, note that transforming matrix A to matrix B is equivalent to transforming the matrix A - B (computed using binary arithmetic, so that 0 - 1 = 1) to the zero matrix. Look at the matrix A - B, and apply column toggles (if necessary) so that the first row becomes all 0's or all 1's. At this point, you're done with column toggles -- if you toggle one column, you have to toggle them all to get the first row correct. If even one row is a mixture of 0's and 1's at this point, the problem cannot be solved. If each row is now all 0's or all 1's, the problem is solvable by toggling the appropriate rows to reach the zero matrix.
To get the minimum, compare the number of toggles needed when the first row is turned to 0's vs. 1's. In the OP's example, the candidates would be toggling column 3 and row 1, or toggling columns 1 and 2 and rows 2 and 3. In fact, you can simplify this by looking at the first solution and seeing if the number of toggles is smaller or larger than N -- if larger than N, then toggle the opposite rows and columns.
It's not always possible. If you start with a 2x2 matrix with an even number of 1s you can never arrive at a final matrix with an odd number of 1s.
Algorithm
Simplify the problem from "Try to transform A into B" into "Try to transform M into 0", where M = A xor B. Now all the positions which must be toggled have a 1 in them.
Consider an arbitrary position in M. It is affected by exactly one column toggle and exactly one row toggle. If its initial value is V, the presence of the column toggle is C, and the presence of the row toggle is R, then the final value F is V xor C xor R. That's a very simple relationship, and it makes the problem trivial to solve.
Notice that, for each position, R = F xor V xor C = 0 xor V xor C = V xor C. If we set C then we force the value of R, and vice versa. That's awesome, because it means if I set the value of any row toggle then I will force all of the column toggles. Any one of those column toggles will force all of the row toggles. If the result is the 0 matrix, then we have a solution. We only need to try two cases!
Pseudo-code
function solve(Matrix M) as bool possible, bool[] rowToggles, bool[] colToggles:
    For var b in {true, false}
        colToggles = array from c in M.colRange select b xor M(0, c)
        rowToggles = array from r in M.rowRange select colToggles[0] xor M(r, 0)
        if none from c in M.colRange, r in M.rowRange
                where colToggles[c] xor rowToggles[r] xor M(r, c) != 0 then
            return true, rowToggles, colToggles
        end if
    next var
    return false, null, null
end function
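A runnable Python sketch of the pseudo-code above (illustrative, not the answerer's code; it additionally keeps whichever of the two candidate assignments uses fewer toggles, since the question asks for the minimum):

def solve(M):
    # Try both values for the first column toggle; everything else is forced.
    rows, cols = len(M), len(M[0])
    best = None
    for b in (0, 1):
        col_toggles = [b ^ M[0][c] for c in range(cols)]
        row_toggles = [col_toggles[0] ^ M[r][0] for r in range(rows)]
        if all(col_toggles[c] ^ row_toggles[r] ^ M[r][c] == 0
               for r in range(rows) for c in range(cols)):
            cost = sum(row_toggles) + sum(col_toggles)
            if best is None or cost < best[0]:
                best = (cost, row_toggles, col_toggles)
    return best   # None if there is no solution

# M = A xor B for the matrices in the question
M = [[1, 1, 0],
     [0, 0, 1],
     [0, 0, 1]]
print(solve(M))   # (2, [1, 0, 0], [0, 0, 1]): toggle row 0 and column 2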
Analysis
The analysis is trivial. We try two cases, within which we run along a row, then a column, then all cells. Therefore if there are r rows and c columns, meaning the matrix has size n = c * r, then the time complexity is O(2 * (c + r + c * r)) = O(c * r) = O(n). The only space we use is what is required for storing the outputs = O(c + r).
Therefore the algorithm takes time linear in the size of the matrix, and uses space linear in the size of the output. It is asymptotically optimal for obvious reasons.
I came up with a brute force algorithm.
The algorithm is based on 2 conjectures:
(so it may not work for all matrices - I'll verify them later)
The minimum (number of toggles) solution will contain a specific row or column only once.
In whatever order we apply the steps to convert the matrix, we get the same result.
The algorithm:
Let's say we have the matrix m = [ [1,0], [0,1] ].
m: 1 0
0 1
We generate a list of all row and column numbers,
like this: ['r0', 'r1', 'c0', 'c1']
Now we brute force, aka examine, every possible step combination.
For example, we start with 1-step solutions,
ksubsets = [['r0'], ['r1'], ['c0'], ['c1']]
if no element is a solution then proceed with 2-step solutions,
ksubsets = [['r0', 'r1'], ['r0', 'c0'], ['r0', 'c1'], ['r1', 'c0'], ['r1', 'c1'], ['c0', 'c1']]
etc...
A ksubsets element (combo) is a list of toggle steps to apply in a matrix.
Python implementation (tested on version 2.5)
# Recursive definition (+ is the join of sets)
# S = {a1, a2, a3, ..., aN}
#
# ksubsets(S, k) = {
# {{a1}+ksubsets({a2,...,aN}, k-1)} +
# {{a2}+ksubsets({a3,...,aN}, k-1)} +
# {{a3}+ksubsets({a4,...,aN}, k-1)} +
# ... }
# example: ksubsets([1,2,3], 2) = [[1, 2], [1, 3], [2, 3]]
def ksubsets(s, k):
    if k == 1: return [[e] for e in s]
    ksubs = []
    ss = s[:]
    for e in s:
        if len(ss) < k: break
        ss.remove(e)
        for x in ksubsets(ss, k-1):
            l = [e]
            l.extend(x)
            ksubs.append(l)
    return ksubs

def toggle_row(m, r):
    for i in range(len(m[r])):
        m[r][i] = m[r][i] ^ 1

def toggle_col(m, i):
    for row in m:
        row[i] = row[i] ^ 1

def toggle_matrix(m, combos):
    # example of combos, ['r0', 'r1', 'c3', 'c4']
    # 'r0' toggle row 0, 'c3' toggle column 3, etc.
    import copy
    k = copy.deepcopy(m)
    for combo in combos:
        if combo[0] == 'r':
            toggle_row(k, int(combo[1:]))
        else:
            toggle_col(k, int(combo[1:]))
    return k

def conversion_steps(sM, tM):
    # Brute force algorithm.
    # Returns the minimum list of steps to convert sM into tM.
    rows = len(sM)
    cols = len(sM[0])
    combos = ['r'+str(i) for i in range(rows)] + \
             ['c'+str(i) for i in range(cols)]
    for n in range(0, rows + cols - 1):
        for combo in ksubsets(combos, n + 1):
            if toggle_matrix(sM, combo) == tM:
                return combo
    return []
Example:
m: 0 0 0
0 0 0
0 0 0
k: 1 1 0
1 1 0
0 0 1
>>> m = [[0,0,0],[0,0,0],[0,0,0]]
>>> k = [[1,1,0],[1,1,0],[0,0,1]]
>>> conversion_steps(m, k)
['r0', 'r1', 'c2']
>>>
If you can only toggle the rows, and not the columns, then there will only be a subset of matrices that you can convert into the final result. If this is the case, then it would be very simple:
for every row, i:
    if matrix1[i] == matrix2[i]
        continue;
    else
        toggle matrix1[i];
        if matrix1[i] == matrix2[i]
            continue
        else
            die("cannot make similar");
This is a state space search problem. You are searching for the optimum path from a starting state to a destination state. In this particular case, "optimum" is defined as "minimum number of operations".
The state space is the set of binary matrices generatable from the starting position by row and column toggle operations.
ASSUMING that the destination is in the state space (NOT a valid assumption in some cases: see Henrik's answer), I'd try throwing a classic heuristic search (probably A*, since it is about the best of the breed) algorithm at the problem and see what happened.
The first, most obvious heuristic is "number of correct elements".
Any decent Artificial Intelligence textbook will discuss search and the A* algorithm.
You can represent your matrix as a nonnegative integer, with each cell in the matrix corresponding to exactly one bit in the integer. On a system that supports 64-bit long long unsigned ints, this lets you play with anything up to 8x8. You can then use exclusive-OR operations on the number to implement the row and column toggle operations.
CAUTION: the raw total state space size is 2^(N^2), where N is the number of rows (or columns). For a 4x4 matrix, that's 2^16 = 65536 possible states.
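A small Python sketch of that bit-packed representation (illustrative; rows are packed most-significant-bit first, and row/column toggles become XORs with precomputed masks):

N = 3   # side length of the square matrix

def pack(matrix):
    # Pack an N x N 0/1 matrix into one integer, row-major, MSB first.
    bits = 0
    for row in matrix:
        for cell in row:
            bits = (bits << 1) | cell
    return bits

def row_mask(i):
    # N consecutive one-bits covering row i.
    return ((1 << N) - 1) << (N * (N - 1 - i))

def col_mask(j):
    # One bit per row, all in column j.
    return sum(1 << (N * r + (N - 1 - j)) for r in range(N))

state = pack([[0, 0, 1], [1, 1, 1], [1, 0, 1]])
state ^= row_mask(0)   # toggle the first row
state ^= col_mask(2)   # toggle the last column
print(bin(state))      # 0b111110100, the packed form of the target matrix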
Rather than look at this as a matrix problem, take the 9 bits from each array, load each of them into 2-byte size types (16 bits, which is probably the source of the arrays in the first place), then do a single XOR between the two.
(the bit order would be different depending on your type of CPU)
The first array would become: 0000000001111101
The second array would become: 0000000111110100
A single XOR would produce the output. No loops required. All you'd have to do is 'unpack' the result back into an array, if you still wanted to. You can read the bits without resorting to that, though.
I think brute force is not necessary.
The problem can be rephrased in terms of a group. The matrices over the field with 2 elements constitute a commutative group with respect to addition.
As pointed out before, the question whether A can be toggled into B is equivalent to asking whether A-B can be toggled into 0. Note that toggling of row i is done by adding a matrix with only ones in row i and zeros otherwise, while the toggling of column j is done by adding a matrix with only ones in column j and zeros otherwise.
This means that A-B can be toggled to the zero matrix if and only if A-B is contained in the subgroup generated by the toggling matrices.
Since addition is commutative, we may assume the toggling of columns takes place first, and we can apply the approach of Marius first to the columns and then to the rows.
In particular, the toggling of the columns must make every row either all ones or all zeros. There are two possibilities:
Toggle columns such that every 1 in the first row becomes zero. If after this there is a row in which both ones and zeros occur, there is no solution. Otherwise apply the same approach for the rows (see below).
Toggle columns such that every 0 in the first row becomes 1. If after this there is a row in which both ones and zeros occur, there is no solution. Otherwise apply the same approach for the rows (see below).
Since the columns have been toggled successfully, in the sense that each row contains only ones or zeros, there are two possibilities:
Toggle rows such that every 1 in the first column becomes zero.
Toggle rows such that every 0 in the first column becomes one.
Of course in the step for the rows, we take the possibility which results in fewer toggles, i.e. we count the ones in the first column and then decide how to toggle.
In total, only 2 cases have to be considered, namely how the columns are toggled; for the row step, the toggling can be decided by counting, to minimize the number of toggles in the second step.