Creating a co-occurrence matrix in SAS

All, thanks to the amazing help and camaraderie at Stack Exchange, I can now build and analyse a co-occurrence matrix using the R code discussed in my original thread: Creating Co-Occurrence Matrix.
I am now dealing with a massive data set that can only be processed on a server, and I am analysing it in SAS Studio, so I will have to do the co-occurrence analysis in SAS. I would really appreciate any help from SAS experts out there, as my SAS programming skills are limited.
So, essentially: I have a massive SAS data file (.sav) of households and items, and I want a matrix of the number of households in which each pair of items appears together. Taking the same example from my earlier thread, I have a table containing the following:
HHID Items Quant
HH1 A 3
HH1 B 1
HH1 C 1
HH2 E 3
HH2 B 1
HH3 B 1
HH3 C 4
HH4 D 1
HH4 E 1
HH4 A 1
HH5 F 5
HH5 B 3
HH5 C 2
HH5 D 1, etc.
The output needed is something like this:
  A B C D E F
A 0 1 1 1 1 0
B 1 0 3 1 1 1
C 1 3 0 1 0 1
D 1 1 1 0 1 1
E 1 1 0 1 0 0
F 0 1 1 1 0 0
I see that there is an existing macro out there for market basket analysis, and although its output is not in this format, I could work with that as well. Unfortunately the website that hosted it no longer exists, so any help is much appreciated.
Thank you.
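In case it helps to see the logic spelled out before translating it into SAS: the computation is simply "for each household, take the distinct items and count every unordered pair of items once". A minimal sketch of that logic in Python (not SAS; in SAS itself this would most likely become a PROC SQL self-join on HHID), using the toy data above:
from itertools import combinations
from collections import Counter

# Toy version of the household/item table from the question.
rows = [
    ("HH1", "A"), ("HH1", "B"), ("HH1", "C"),
    ("HH2", "E"), ("HH2", "B"),
    ("HH3", "B"), ("HH3", "C"),
    ("HH4", "D"), ("HH4", "E"), ("HH4", "A"),
    ("HH5", "F"), ("HH5", "B"), ("HH5", "C"), ("HH5", "D"),
]

# Collect the distinct items per household, then count each unordered pair once.
items_by_hh = {}
for hh, item in rows:
    items_by_hh.setdefault(hh, set()).add(item)

pair_counts = Counter()
for items in items_by_hh.values():
    for a, b in combinations(sorted(items), 2):
        pair_counts[(a, b)] += 1

print(pair_counts[("B", "C")])   # 3: households HH1, HH3 and HH5
The square matrix in the question is just these pair counts written out symmetrically, with zeros on the diagonal.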

Related

diagonal value in co-occurrence matrix

I am quite new to this, and thank you so much in advance for any advice.
I want to make a co-occurrence matrix, and I followed the link below:
How to use R to create a word co-occurrence matrix
but I cannot understand why the value of A-A is 10 in the matrix below.
Shouldn't it be 4, since there are four A's?
dat <- read.table(text='film tag1 tag2 tag3
1 A A A
2 A C F
3 B D C ', header=T)
library(qdapTools)   # mtabulate() comes from the qdapTools package
crossprod(as.matrix(mtabulate(as.data.frame(t(dat[, -1])))))
   A  C  F  B  D
A 10  1  1  0  0
C  1  2  1  1  1
F  1  1  1  0  0
B  0  1  0  1  1
D  0  1  0  1  1
The solution you are using presumes each tag appears only once per film, which matches the usual definition of a co-occurrence matrix as far as I can tell. Because film 1 lists A three times, each of those A's is counted as co-occurring with itself and with the other two A's (3 x 3 = 9), and the single A on film 2 co-occurs with itself once more, giving a total of ten co-occurrences.
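You can see the same arithmetic by building the film-by-tag count matrix and multiplying its transpose by itself, which is what crossprod() does. A small numpy sketch, with the tag columns ordered A, C, F, B, D as in the output above:
import numpy as np

# Rows = films 1..3, columns = counts of tags A, C, F, B, D
counts = np.array([
    [3, 0, 0, 0, 0],   # film 1: A A A
    [1, 1, 1, 0, 0],   # film 2: A C F
    [0, 1, 0, 1, 1],   # film 3: B D C
])

cooc = counts.T @ counts   # same as crossprod() in R
print(cooc[0, 0])          # 3*3 + 1*1 = 10, the A-A entry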

Store values from a variable and reuse them

This is a question that could help me solve another, still unsolved question I posted. Basically, I need to condition a dataset in Stata, and I thought of a procedure that would first store certain values of a variable in a sort of matrix and then compare the values of another variable with the stored values. A simple example could be the following:
obs id act1 act2 year act1year
1 1 0 1 2000 0
2 1 1 0 2001 2001
3 1 0 1 2004 0
4 2 1 0 2001 2001
5 2 1 0 2002 2002
6 2 0 1 2004 0
The code should save in the matrix, by(id), the values of act1year that differ from 0 (in this case 2001 for group 1) and then check whether, for observations where act2 is 1, that value falls in the range [year(i)-2, year(i)]; for obs i = 1, 3 the range does not contain the stored value, so the observation would be dropped. For group id 2 the code should store [2001, 2002] and then check whether the range [year(6)-2, year(6)] contains any of the stored values.
I hope my question is clear enough! Apologies for not posting any attempt, but this is something I really have no idea how to do.
Both this question and the previous discussion are difficult for me to understand, so let me suggest the following as a starting point for a solution that identifies observations for which either (a) act1 occurs or (b) act2 occurs no more than 2 years after the most recent act1 occurrence.
clear
input id act1 act2 year
1 0 1 2000
1 1 0 2001
1 0 1 2004
2 1 0 2001
2 1 0 2002
2 0 1 2004
end
* record the year of each act1 occurrence, 0 otherwise
generate a1yr = 0
replace a1yr = year if act1==1
* carry the most recent act1 year forward within each id
generate act1r = -act1
bysort id (year act1r): replace a1yr = a1yr[_n-1] if a1yr==0 & _n>1
* keep act1 rows, and act2 rows within 2 years of the latest act1
generate tokeep = 0
replace tokeep = 1 if act1==1
replace tokeep = 1 if act2==1 & year-a1yr<=2
list, clean noobs
Looking at the previous discussion as it now stands, I would suggest substituting the following data into the code above and seeing whether the code then meets the needs of that discussion.
input obsno id act1 act2 year
1 1 1 0 2000
2 1 0 1 2001
3 1 0 1 2002
4 1 0 1 2002
5 1 0 1 2003
6 2 1 0 2000
7 2 1 0 2001
8 2 0 1 2002
9 2 0 1 2002
10 2 0 1 2003
end
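For comparison, the same keep/drop rule can be sketched outside Stata in plain Python (illustrative only; it assumes the records are (id, act1, act2, year) tuples already sorted by id and year):
# Keep rows where act1 occurs, or where act2 occurs no more than
# 2 years after the most recent act1 within the same id.
rows = [
    (1, 0, 1, 2000), (1, 1, 0, 2001), (1, 0, 1, 2004),
    (2, 1, 0, 2001), (2, 1, 0, 2002), (2, 0, 1, 2004),
]

kept = []
last_act1_year = {}                       # most recent act1 year seen per id
for rid, act1, act2, year in rows:
    if act1 == 1:
        last_act1_year[rid] = year
        kept.append((rid, act1, act2, year))
    elif act2 == 1 and rid in last_act1_year and year - last_act1_year[rid] <= 2:
        kept.append((rid, act1, act2, year))

print(kept)   # the rows the Stata code above flags with tokeep == 1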

Dominance Matrices when teams play twice

I'm familiar with finding two-step dominances when the players involved have only played each other once: you create a matrix of results filled with 1's (for wins) and 0's (for losses/ties). To find the power of each team you square the matrix and then add the squared matrix to the original.
So, how does the process change when you have teams involved that have played each other more than once and there are 2's introduced into the matrix? I'm working on this in Matlab (Octave, actually), and when I enter the matrix, which is actually a 31x31 matrix showing the results from the 2001-2002 NFL season, and then square it, I get results showing that teams had dominance over themselves, like this:
Original Matrix (abbreviated):
Buf Ind Mia NE NYJ
Buf 0 0 0 0 1
Ind 2 0 0 0 1
Mia 2 2 0 1 0
NE 2 2 1 0 1
NYJ 1 1 2 1 0
Squared Matrix (abbreviated):
Buf Ind Mia NE NYJ
Buf 1 1 2 1 0
Ind 2 1 2 2 2
Mia 8 3 1 1 5
NE 10 4 2 2 4
NYJ 9 8 1 3 3
So how do I address the issue of the results showing a team having dominance over itself and get to my final power numbers like I would in a "played only once" scenario?
Thanks in advance.
I've had this same problem with soccer games (2 points for a win, 1 for a draw and 0 for a loss), but I believe it is possible for a team to have dominance over itself, because they have beaten a team that beat them (or, for soccer, drew with them). Therefore, I would say that you can just continue on as is. (P.S. I am a Year 11 Maths C student, so there may be other explanations for this.)
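For reference, the power computation the question describes (square the results matrix and add it to the original) is a one-liner in numpy as well as in Octave. A sketch using the abbreviated 5x5 block above; zeroing the diagonal and summing each row are just one possible way of ignoring self-dominance, not an established rule:
import numpy as np

# Abbreviated 5x5 block of the results matrix (Buf, Ind, Mia, NE, NYJ)
M = np.array([
    [0, 0, 0, 0, 1],
    [2, 0, 0, 0, 1],
    [2, 2, 0, 1, 0],
    [2, 2, 1, 0, 1],
    [1, 1, 2, 1, 0],
])

power = M + M @ M            # one-step plus two-step dominances
np.fill_diagonal(power, 0)   # optional: discard dominance over oneself
print(power.sum(axis=1))     # one possible power score per team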

Form a Matrix From a Large Text File Quickly

Hi, I am struggling to read data from a file quickly enough (I left it running for 4 hrs, then it crashed); there must be a simpler way.
The text file looks similar to this:
From To
1 5
3 2
2 1
4 3
From this I want to form a matrix with a 1 at each corresponding [m,n] position.
The current code is:
function [z] = reed(A)
[m, n] = size(A);
i = 1;
while (i <= n)
    z(A(1,i), A(2,i)) = 1;
    i = i + 1;
end
This outputs the following matrix, z:
z =
0 0 0 0 1
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
My actual file has 280,000,000 from/to links, and this code is too slow for a file of that size. Does anybody know a much faster way to do this in Matlab?
Thanks
You can do something along the lines of the following:
>> A = zeros(4,5);
>> B = importdata('testcase.txt');
>> A(sub2ind(size(A),B.data(:,1),B.data(:,2))) = 1;
My test case, 'testcase.txt' contains your sample data:
From To
1 5
3 2
2 1
4 3
The result would be:
>> A
A =
0 0 0 0 1
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
EDIT - 1
After taking a look at your data, it seems that even if you modify this code appropriately, you may not have enough memory to execute it as the matrix A would become too large.
As such, you can use sparse matrices to achieve the same as given below:
>> B = importdata('web-Stanford.txt');
>> A = sparse(B.data(:,1),B.data(:,2),1,max(max(B.data)),max(max(B.data)));
This would be the approach I'd recommend as your A matrix will have a size of [281903,281903] which would usually be too large to handle due to memory constraints. A sparse matrix on the other hand, maintains only those matrix entries which are non-zero, thus saving on a lot of space. In most cases, you can use sparse matrices more-or-less as you use normal matrices.
More information about the sparse command is given here.
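For comparison, the same sparse-construction idea in Python/SciPy (a sketch only; it assumes a whitespace-delimited edge list with a one-line header and 1-based node numbers, like the test file above):
import numpy as np
from scipy import sparse

# Load the "From To" pairs, skipping the header line.
edges = np.loadtxt("testcase.txt", skiprows=1, dtype=np.int64)

n = int(edges.max())                 # nodes assumed to be numbered 1..n
A = sparse.coo_matrix(
    (np.ones(len(edges)), (edges[:, 0] - 1, edges[:, 1] - 1)),  # shift to 0-based
    shape=(n, n),
).tocsr()                            # CSR is convenient for later arithmetic
print(A.toarray())                   # only sensible for the tiny test case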
EDIT - 2
I'm not sure why it isn't working for you. Here's a screenshot of how I did it in case that helps:
EDIT - 3
It seems that you're getting a double matrix in B while I'm getting a struct. I'm not sure why this is happening; I can only speculate that you deleted the header lines from the input file before you used importdata.
Basically it's just that my B.data is the same as your B. As such, you should be able to use the following instead:
>> A = sparse(B(:,1),B(:,2),1,max(max(B)),max(max(B)));

Sorting a binary 2D matrix?

I'm looking for some pointers here as I don't quite know where to start researching this one.
I have a 2D matrix with 0 or 1 in each cell, such as:
1 2 3 4
A 0 1 1 0
B 1 1 1 0
C 0 1 0 0
D 1 1 0 0
And I'd like to sort it so it is as "upper triangular" as possible, like so:
4 3 1 2
B 0 1 1 1
A 0 1 0 1
D 0 0 1 1
C 0 0 0 1
The rows and columns must remain intact, i.e. elements can't be moved individually and can only be swapped "whole".
I understand that there'll probably be pathological cases where a matrix has multiple possible sorted results (i.e. same shape, but differ in the identity of the "original" rows/columns.)
So, can anyone suggest where I might find some starting points for this? An existing library/algorithm would be great, but I'll settle for knowing the name of the problem I'm trying to solve!
I doubt it's a linear algebra problem as such, and maybe there's some kind of image processing technique that's applicable.
Any other ideas aside, my initial guess is just to write a simple insertion sort on the rows, then the columns and iterate that until it stabilises (and hope that detecting the pathological cases isn't too hard.)
More details: Some more information on what I'm trying to do may help clarify. Each row represents a competitor, each column represents a challenge. Each 1 or 0 represents "success" for the competitor on a particular challenge.
By sorting the matrix so all 1s are in the top-right, I hope to then provide a ranking of the intrinsic difficulty of each challenge and a ranking of the competitors (which will take into account the difficulty of the challenges they succeeded at, not just the number of successes.)
Note on accepted answer: I've accepted Simulated Annealing as "the answer" with the caveat that this question doesn't have a right answer. It seems like a good approach, though I haven't actually managed to come up with a scoring function that works for my problem.
An algorithm based upon simulated annealing can handle this sort of thing without too much trouble. It's not great if you have small matrices, which most likely have a fixed solution, but it works well as your matrices get larger and the problem becomes more difficult.
(However, it also fails your desire that insertions can be done incrementally.)
Preliminaries
Devise a performance function that "scores" a matrix: matrices that are closer to your desired triangular form should get a better score than those that are less triangle-y.
Devise a set of operations that are allowed on the matrix. Your description was a little ambiguous, but if you can swap rows then one op would be SwapRows(a, b). Another could be SwapCols(a, b).
The Annealing loop
I won't give a full exposition here, but the idea is simple. You perform random transformations on the matrix using your operations. You measure how much "better" the matrix is after the operation (using the performance function before and after the operation). Then you decide whether to commit that transformation. You repeat this process a lot.
Deciding whether to commit the transform is the fun part: you need to decide whether to perform that operation or not. Toward the end of the annealing process, you only accept transformations that improved the score of the matrix. But earlier on, in a more chaotic time, you allow transformations that don't improve the score. In the beginning, the algorithm is "hot" and anything goes. Eventually, the algorithm cools and only good transforms are allowed. If you linearly cool the algorithm, then the choice of whether to accept a transformation is:
public bool ShouldAccept(double cost, double temperature, Random random) {
    return Math.Exp(-cost / temperature) > random.NextDouble();
}
You should read the excellent information contained in Numerical Recipes for more information on this algorithm.
Long story short, you should learn some of these general purpose algorithms. Doing so will allow you to solve large classes of problems that are hard to solve analytically.
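To make the loop concrete, here is a minimal sketch of the whole procedure in Python (illustrative only; the swap operations, the linear cooling schedule and the step count are assumptions, and score can be any scorer along the lines of the one described below):
import math
import random

def swap_rows(m, a, b):
    m[a], m[b] = m[b], m[a]

def swap_cols(m, a, b):
    for row in m:
        row[a], row[b] = row[b], row[a]

def anneal(m, score, steps=100000, start_temp=10.0):
    """Random row/column swaps, kept or undone according to the temperature."""
    current = score(m)
    for step in range(steps):
        temp = start_temp * (1.0 - step / steps) + 1e-9   # linear cooling
        if random.random() < 0.5:
            op = swap_rows
            a, b = random.sample(range(len(m)), 2)
        else:
            op = swap_cols
            a, b = random.sample(range(len(m[0])), 2)
        op(m, a, b)
        new = score(m)
        cost = current - new          # positive if the swap made the score worse
        if cost <= 0 or math.exp(-cost / temp) > random.random():
            current = new             # accept (improvements are always accepted)
        else:
            op(m, a, b)               # reject: swapping again undoes the change
    return m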
Scoring algorithm
This is probably the trickiest part. You will want to devise a scorer that guides the annealing process toward your goal. The scorer should be a continuous function that results in larger numbers as the matrix approaches the ideal solution.
How do you measure the "ideal solution" - triangleness? Here is a naive and easy scorer: For every point, you know whether it should be 1 or 0. Add +1 to the score if the matrix is right, -1 if it's wrong. Here's some code so I can be explicit (not tested! please review!)
int Score(Matrix m) {
    var score = 0;
    for (var r = 0; r < m.NumRows; r++) {
        for (var c = 0; c < m.NumCols; c++) {
            var val = m.At(r, c);
            var shouldBe = (c >= r) ? 1 : 0;
            if (val == shouldBe) {
                score++;
            } else {
                score--;
            }
        }
    }
    return score;
}
With this scoring algorithm, a random field of 1s and 0s will give a score of 0. An "opposite" triangle will give the most negative score, and the correct solution will give the most positive score. Diffing two scores will give you the cost.
If this scorer doesn't work for you, then you will need to "tune" it until it produces the matrices you want.
This algorithm is based on the premise that tuning this scorer is much simpler than devising the optimal algorithm for sorting the matrix.
I came up with the algorithm below, and it seems to work correctly.
Phase 1: move the rows with the most 1s up and the columns with the most 1s to the right.
First the rows: sort the rows by counting their 1s. We don't care if 2 rows have the same number of 1s.
Now the columns: sort the columns by counting their 1s. We don't care if 2 columns have the same number of 1s.
Phase 2: repeat phase 1 but with extra criteria, so that we satisfy the triangular matrix morph.
Criterion for rows: if 2 rows have the same number of 1s, we move up the row that begins with fewer 0s.
Criterion for columns: if 2 columns have the same number of 1s, we move right the column that has fewer 0s at the bottom.
(A Python sketch of this sorting follows the worked example below.)
Example:
Phase 1
1 2 3 4 1 2 3 4 4 1 3 2
A 0 1 1 0 B 1 1 1 0 B 0 1 1 1
B 1 1 1 0 - sort rows-> A 0 1 1 0 - sort cols-> A 0 0 1 1
C 0 1 0 0 D 1 1 0 0 D 0 1 0 1
D 1 1 0 0 C 0 1 0 0 C 0 0 0 1
Phase 2
4 1 3 2 4 1 3 2
B 0 1 1 1 B 0 1 1 1
A 0 0 1 1 - sort rows-> D 0 1 0 1 - sort cols-> "completed"
D 0 1 0 1 A 0 0 1 1
C 0 0 0 1 C 0 0 0 1
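Under my reading of the criteria above, the sorting can be sketched in Python as a single pass that folds the phase-2 tie-breaks into the sort keys (row/column labels omitted); on the example above it reproduces the "completed" matrix:
def leading_zeros(seq):
    count = 0
    for v in seq:
        if v != 0:
            break
        count += 1
    return count

def sort_pass(m):
    """One row-and-column sort with the phase-2 tie-breaks folded in."""
    # Rows: most 1s at the top; ties broken by fewer leading 0s.
    m = sorted(m, key=lambda row: (-sum(row), leading_zeros(row)))
    # Columns: most 1s at the right; ties broken by fewer 0s at the bottom.
    cols = sorted(zip(*m),
                  key=lambda col: (sum(col), -leading_zeros(list(reversed(col)))))
    return [list(row) for row in zip(*cols)]

m = [[0, 1, 1, 0],   # A
     [1, 1, 1, 0],   # B
     [0, 1, 0, 0],   # C
     [1, 1, 0, 0]]   # D
print(sort_pass(m))  # rows end up in the order B, D, A, C, as in the example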
Edit: it turns out that my algorithm doesn't always give proper triangular matrices.
For example:
Phase 1
1 2 3 4 1 2 3 4
A 1 0 0 0 B 0 1 1 1
B 0 1 1 1 - sort rows-> C 0 0 1 1 - sort cols-> "completed"
C 0 0 1 1 A 1 0 0 0
D 0 0 0 1 D 0 0 0 1
Phase 2
1 2 3 4 1 2 3 4 2 1 3 4
B 0 1 1 1 B 0 1 1 1 B 1 0 1 1
C 0 0 1 1 - sort rows-> C 0 0 1 1 - sort cols-> C 0 0 1 1
A 1 0 0 0 A 1 0 0 0 A 0 1 0 0
D 0 0 0 1 D 0 0 0 1 D 0 0 0 1
(no change)
(*) Perhaps a phase 3 would improve the results. In that phase we would place the rows that start with fewer 0s at the top.
Look for a 1987 paper by Anna Lubiw on "Doubly Lexical Orderings of Matrices".
There is a citation below. The ordering is not identical to what you are looking for, but is pretty close. If nothing else, you should be able to get a pretty good idea from there.
http://dl.acm.org/citation.cfm?id=33385
Here's a starting point:
Convert each row from binary bits into a number
Sort the numbers in descending order.
Then convert each row back to binary.
Basic algorithm:
Determine the row sums and store the values. Determine the column sums and store the values.
Sort the rows by their sums in ascending order. Sort the columns by their sums in ascending order.
Hopefully, you should end up with a matrix that is as close as possible to having an upper-right triangular region.
Treat the rows as binary numbers, with the leftmost column as the most significant bit, and sort them in descending order, top to bottom.
Treat the columns as binary numbers, with the bottommost row as the most significant bit, and sort them in ascending order, left to right.
Repeat until you reach a fixed point. Proof that the algorithm terminates is left as an exercise for the reader.
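A direct Python sketch of this procedure (the iteration cap is only there because termination is left as an exercise above):
def sort_rows_cols(m, max_iters=100):
    """Alternate the row and column binary sorts until nothing changes."""
    for _ in range(max_iters):
        prev = [row[:] for row in m]
        # Rows as binary numbers (leftmost column = most significant bit),
        # sorted in descending order, top to bottom.
        m = sorted(m, key=lambda row: int("".join(map(str, row)), 2), reverse=True)
        # Columns as binary numbers (bottom row = most significant bit),
        # sorted in ascending order, left to right.
        cols = sorted(zip(*m),
                      key=lambda col: int("".join(map(str, reversed(col))), 2))
        m = [list(row) for row in zip(*cols)]
        if m == prev:
            break
    return m

m = [[0, 1, 1, 0],
     [1, 1, 1, 0],
     [0, 1, 0, 0],
     [1, 1, 0, 0]]
print(sort_rows_cols(m))   # the 1s end up pushed toward the upper right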
