Converting SFrames into an input dataset SFrame - performance

I currently have a pretty inefficient way of converting my input logs to the input dataset.
I have an SFrame sf with the following schema:
user_id       int
timestamp     datetime.datetime
action        int
reasoncode    str
The action column takes 9 values, ranging from 1 to 9.
Every user_id can perform more than one action, each more than once.
I am trying to obtain all unique user_ids from sf and create an op_sf in the following manner:
import datetime

y = 225
dte = datetime.datetime.now()  # reference date used to age the timestamps

def calc_class(a, x):
    diffd = a['timestamp'].apply(lambda t: (dte - t).days)
    g = 0
    b = 0
    for i in diffd:
        if i > y:
            g += 1
        else:
            b += 1
    if b >= x:
        return 4
    elif b != 0:
        return 3
    elif g >= 0:
        return 2
    else:
        return 1
l1 = []
ids = sf['user_id'].unique()
for idd in ids:
    temp = sf[sf['user_id'] == idd]
    zero1 = temp[temp['action'] == 1]
    zero2 = temp[temp['action'] == 2]
    zero3 = temp[temp['action'] == 3]
    zero4 = temp[temp['action'] == 4]
    zero5 = temp[temp['action'] == 5]
    zero6 = temp[temp['action'] == 6]
    zero7 = temp[temp['action'] == 7]
    zero8 = temp[temp['reasoncode'] == 'xyz']
    zero9 = temp[temp['reasoncode'] == 'abc']
    # clas1 to clas9 come from calc_class for each action;
    # each is an integer ranging from 1 to 4
    clas1 = calc_class(zero1, 2)
    clas2 = calc_class(zero2, 2)
    clas3 = calc_class(zero3, 2)
    clas4 = calc_class(zero4, 2)
    clas5 = calc_class(zero5, 2)
    clas6 = calc_class(zero6, 2)
    clas7 = calc_class(zero7, 2)
    clas8 = calc_class(zero8, 2)
    clas9 = calc_class(zero9, 2)
    l1.append([idd, clas1, clas2, clas3, clas4, clas5 * (-1),
               clas6 * (-1), clas7 * (-1), clas8 * (-1), clas9])
I wanted to know if this is the fastest way of doing this, and specifically whether it is possible to do the same thing without generating the zero1 to zero9 SFrames.
An example sf:
user_id  timestamp       action  reasoncode
574      23/09/15 12:43  1       None
574      23/09/15 11:15  2       None
574      06/10/15 11:20  2       None
574      06/10/15 11:21  3       None
588      04/11/15 10:00  1       None
588      05/11/15 10:00  1       None
555      15/12/15 13:00  1       None
585      22/12/15 17:30  1       None
585      15/01/16 07:44  7       xyz
588      06/01/16 08:10  7       abc
l1 corresponding to the above sf:
574  1  2  2  0  0  0  0  0  0
588  3  0  0  0  0  0  0  0  3
555  3  0  0  0  0  0  0  0  0
585  3  0  0  0  0  0  0  3  0

I think your logic is relatively complex, but it's still more efficient to use column-wise operations on the whole dataset, rather than extracting the subset of rows for each user. The key tools are SFrame.groupby, SFrame.apply, SFrame.unstack, and SFrame.unpack. API docs here:
https://dato.com/products/create/docs/generated/graphlab.SFrame.html
Here's a solution that uses slightly simpler data than your example and slightly simpler logic to code the old vs. new actions.
# Set up and make the data
import graphlab as gl
import datetime as dt

sf = gl.SFrame({'user': [574, 574, 574, 588, 588, 588],
                'timestamp': [dt.datetime(2015, 9, 23), dt.datetime(2015, 9, 23),
                              dt.datetime(2015, 10, 6), dt.datetime(2015, 11, 4),
                              dt.datetime(2015, 11, 5), dt.datetime(2016, 1, 6)],
                'action': [1, 2, 3, 1, 1, 7]})

# Count old vs. new actions. Subtracting datetime columns yields seconds,
# so divide by 3600 * 24 to get days.
sf['days_elapsed'] = (dt.datetime.today() - sf['timestamp']) / (3600 * 24)
sf['old_threshold'] = sf['days_elapsed'] > 225

aggregator = {'total_count': gl.aggregate.COUNT('user'),
              'old_count': gl.aggregate.SUM('old_threshold')}
grp = sf.groupby(['user', 'action'], aggregator)

# Code the actions according to old vs. new. Use your own logic here.
grp['action_code'] = grp.apply(
    lambda x: 2 if x['total_count'] > x['old_count'] else 1)
grp = grp[['user', 'action', 'action_code']]

# Reshape the results into columns.
sf_new = (grp.unstack(['action', 'action_code'], new_column_name='action_code')
             .unpack('action_code'))

# Fill in zeros for entries with no actions.
for c in sf_new.column_names():
    sf_new[c] = sf_new[c].fillna(0)

print sf_new
+------+---------------+---------------+---------------+---------------+
| user | action_code.1 | action_code.2 | action_code.3 | action_code.7 |
+------+---------------+---------------+---------------+---------------+
| 588  |       2       |       0       |       0       |       2       |
| 574  |       1       |       1       |       1       |       0       |
+------+---------------+---------------+---------------+---------------+
[2 rows x 5 columns]
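GraphLab Create is no longer distributed, so as a rough sketch (not part of the original answer), here is the same groupby-then-pivot pattern in pandas; the column names are assumptions carried over from the example above:
import pandas as pd
import datetime as dt

df = pd.DataFrame({'user': [574, 574, 574, 588, 588, 588],
                   'timestamp': [dt.datetime(2015, 9, 23), dt.datetime(2015, 9, 23),
                                 dt.datetime(2015, 10, 6), dt.datetime(2015, 11, 4),
                                 dt.datetime(2015, 11, 5), dt.datetime(2016, 1, 6)],
                   'action': [1, 2, 3, 1, 1, 7]})

# Flag actions older than 225 days, then count totals per (user, action) pair
df['old'] = (dt.datetime.today() - df['timestamp']).dt.days > 225
grp = (df.groupby(['user', 'action'])
         .agg(total_count=('action', 'size'), old_count=('old', 'sum'))
         .reset_index())

# Code the actions; same placeholder logic as the answer above
grp['action_code'] = (grp['total_count'] > grp['old_count']).map({True: 2, False: 1})

# Pivot to one column per action, filling missing entries with zero
wide = (grp.pivot(index='user', columns='action', values='action_code')
           .fillna(0).astype(int))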

Related

Convert cuDF data frame column to 1 or 0 for “true”/“false” values

I am using RAPIDS (0.9 release) docker container. How can I do the following with RAPIDS cuDF?
df['new_column'] = df['column_name'] > condition
df[['new_column']] *= 1
You can do this in the same way as with pandas.
import cudf

df = cudf.DataFrame({'a': [0, 1, 2, 3, 4]})
df['new'] = df['a'] >= 3
df['new'] = df['new'].astype('int')  # could use int8, int32, or int64
# could also do (df['a'] >= 3).astype('int')
df
   a  new
0  0    0
1  1    0
2  2    0
3  3    1
4  4    1
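Since cuDF mirrors the pandas API here, the same pattern can be sanity-checked on CPU first; a minimal pandas sketch of the identical operation:
import pandas as pd

df = pd.DataFrame({'a': [0, 1, 2, 3, 4]})
df['new'] = (df['a'] >= 3).astype('int8')  # same comparison-then-cast pattern
print(df)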

Quick way of finding complementary vectors in MATLAB

I have a matrix of N rows of binary vectors, i.e.
mymatrix = [ 1 0 0 1 0;
             1 1 0 0 1;
             0 1 1 0 1;
             0 1 0 0 1;
             0 0 1 0 0;
             0 0 1 1 0;
             .... ]
where I'd like to find the combinations of rows that, when added together, gets me exactly:
[1 1 1 1 1]
So in the above example, the combinations that would work are 1/3, 1/4/5, and 2/6.
The code I have for this right now is:
i = 1;
for j = 1:5
    C = combnk(1:N, j); % Get every possible combination of rows
    for c = 1:size(C,1)
        if isequal(ones(1,5), sum(mymatrix(C(c,:),:)))
            combis{i} = C(c,:);
            i = i + 1;
        end
    end
end
But as you would imagine, this takes a while, especially because of that combnk in there.
What might be a useful algorithm/function that can help me speed this up?
M = [ 1 0 0 1 0;
      1 1 0 0 1;
      0 1 1 0 1;
      0 1 0 0 1;
      0 0 1 0 0;
      0 0 1 1 0;
      1 1 1 1 1 ];
% Enumerate every non-empty combination of rows...
S = (dec2bin(1:2^size(M,1)-1) == '1');
% Find the matching combinations...
matches = cell(0,1);
for i = 1:size(S,1)
    S_curr = S(i,:);
    rows = M(S_curr,:);
    rows_sum = sum(rows,1);
    if (all(rows_sum == 1))
        matches = [matches; {find(S_curr)}];
    end
end
To display your matches in a nicely formatted way:
for i = 1:numel(matches)
    match = matches{i};
    if (numel(match) == 1)
        disp(['Match found for row: ' mat2str(match) '.']);
    else
        disp(['Match found for rows: ' mat2str(match) '.']);
    end
end
This will produce:
Match found for row: 7.
Match found for rows: [2 6].
Match found for rows: [1 4 5].
Match found for rows: [1 3].
In terms of efficiency, on my machine this algorithm completes the detection of matches in about 2 milliseconds.
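For readers outside MATLAB, here is a sketch of the same brute-force subset enumeration in Python/NumPy (an illustration under the same small-N assumption, with row indices reported 1-based to match the MATLAB output):
import numpy as np
from itertools import combinations

M = np.array([[1, 0, 0, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [1, 1, 1, 1, 1]])

matches = []
for r in range(1, len(M) + 1):                      # subset sizes 1..N
    for rows in combinations(range(len(M)), r):     # every subset of rows
        if (M[list(rows)].sum(axis=0) == 1).all():  # sums to all-ones?
            matches.append([i + 1 for i in rows])   # 1-based, as in MATLAB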

How can I create an incidence matrix in Julia?

I would like to create an incidence matrix.
I have a file with 3 columns, like:
id  x   y
A   22  2
B   4   21
C   21  360
D   26  2
E   22  58
F   2   347
And I want a matrix like (without col and row names):
    2  4  21  22  26  58  347  360
A   1  0   0   1   0   0    0    0
B   0  1   1   0   0   0    0    0
C   0  0   1   0   0   0    0    1
D   1  0   0   0   1   0    0    0
E   0  0   0   1   0   1    0    0
F   1  0   0   0   0   0    1    0
I have started the code like:
haps = readdlm("File.txt", header=true)
hap1_2 = map(Int64, haps[1][:,2:end])
ID = haps[1][:,1]
dic1 = Dict()
for i in 1:21
    dic1[ID[i]] = hap1_2[i,:]
end
X = zeros(21,22); # the original file has 21 rows and 22 columns
X1 = hcat(ID, X)
The problem now is that I don't know how to fill the matrix with 1s in the specific columns, as in the example above.
I'm also not sure if I'm on the right track.
Any suggestions that could help me?
Thanks!
NamedArrays is a neat package which allows naming both rows and columns, and it seems to fit the bill for this problem. Suppose the data is in data.csv; here is one method to go about it (install NamedArrays with Pkg.add("NamedArrays")):
data, header = readcsv("data.csv", header=true);
# get the column names by looking at unique values in columns
cols = unique(vec([(header[j+1], data[i,j+1]) for i in 1:size(data,1), j = 1:2]))
# row names from ID column
rows = data[:,1]
using NamedArrays
narr = NamedArray(zeros(Int, length(rows), length(cols)), (rows, cols), ("id","attr"));
# now stamp in the 1s in the right places
for r = 1:size(data,1), c = 2:size(data,2)
    narr[data[r,1], (header[c], data[r,c])] = 1
end
Now we have (note I transposed narr for better printout):
julia> narr'
10x6 NamedArray{Int64,2}:
attr ╲ id │ A  B  C  D  E  F
──────────┼─────────────────
("x",22)  │ 1  0  0  0  1  0
("x",4)   │ 0  1  0  0  0  0
("x",21)  │ 0  0  1  0  0  0
("x",26)  │ 0  0  0  1  0  0
("x",2)   │ 0  0  0  0  0  1
("y",2)   │ 1  0  0  1  0  0
("y",21)  │ 0  1  0  0  0  0
("y",360) │ 0  0  1  0  0  0
("y",58)  │ 0  0  0  0  1  0
("y",347) │ 0  0  0  0  0  1
But, if DataFrames are necessary, similar tricks should apply.
---------- UPDATE ----------
In case the column of a value should be ignored i.e. x=2 and y=2 should both set a 1 on column for value 2, then the code becomes:
using NamedArrays
data, header = readcsv("data.csv", header=true);
rows = data[:,1]
cols = map(string, sort(unique(vec(data[:,2:end]))))
narr = NamedArray(zeros(Int, length(rows), length(cols)), (rows, cols), ("id","attr"));
for r = 1:size(data,1), c = 2:size(data,2)
    narr[data[r,1], string(data[r,c])] = 1
end
giving:
julia> narr
6x8 NamedArray{Int64,2}:
id ╲ attr │ 2  4  21  22  26  58  347  360
──────────┼───────────────────────────────
A         │ 1  0   0   1   0   0    0    0
B         │ 0  1   1   0   0   0    0    0
C         │ 0  0   1   0   0   0    0    1
D         │ 1  0   0   0   1   0    0    0
E         │ 0  0   0   1   0   1    0    0
F         │ 1  0   0   0   0   0    1    0
Here is a slight variation on something that I use for creating sparse matrices out of categorical variables for regression analyses. The function includes a variety of comments and options to suit it to your needs. Note: as written, it treats the appearances of "2" and "21" in x and y as separate. It is far less elegant in naming and appearance than the nice response from Dan Getz. The main advantage here is that it works with sparse matrices so if your data is huge, this will be helpful in reducing storage space and computation time.
function OneHot(x::Array, header::Bool)
    UniqueVals = unique(x)
    ## map each unique value in the input array to a column position in the new sparse matrix
    Val_to_Idx = [Val => Idx for (Idx, Val) in enumerate(UniqueVals)]
    ColIdx = convert(Array{Int64}, [Val_to_Idx[Val] for Val in x])
    MySparse = sparse(collect(1:length(x)), ColIdx, ones(Int32, length(x)))
    if header
        return [UniqueVals' ; MySparse] ## note: this won't be sparse
        ## alternatively, return (MySparse, UniqueVals) to get a tuple whose second element
        ## is the header, which you can then use to name the columns or do whatever else with
    else
        return MySparse ## use MySparse[:, 2:end] to drop a value (as you would for categorical variables in a regression)
    end
end
x = [22, 4, 21, 26, 22, 2];
y = [2, 21, 360, 2, 58, 347];
Incidence = [OneHot(x, true) OneHot(y, true)]
7x10 Array{Int64,2}:
 22  4  21  26  2  2  21  360  58  347
  1  0   0   0  0  1   0    0   0    0
  0  1   0   0  0  0   1    0   0    0
  0  0   1   0  0  0   0    1   0    0
  0  0   0   1  0  1   0    0   0    0
  1  0   0   0  0  0   0    0   1    0
  0  0   0   0  1  0   0    0   0    1
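For comparison only (not part of either Julia answer), the shared-column variant maps naturally onto pandas get_dummies, which one-hot encodes each column; add() then aligns the indicator columns by value:
import pandas as pd

df = pd.DataFrame({'id': list('ABCDEF'),
                   'x':  [22, 4, 21, 26, 22, 2],
                   'y':  [2, 21, 360, 2, 58, 347]})

xd = pd.get_dummies(df['x'], dtype=int)
yd = pd.get_dummies(df['y'], dtype=int)
# Align columns by value, then clip so a value hit by both x and y stays 1
inc = xd.add(yd, fill_value=0).clip(upper=1).astype(int)
inc.index = df['id']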

Count the frequency of matrix values including 0

I have a vector
A = [ 1 1 1 2 2 3 6 8 9 9 ]
I would like to write a loop that counts the frequencies of values in my vector within a range I choose; this should include values that have 0 frequencies.
For example, if I chose the range of 1:9 my results would be
3 2 1 0 0 1 0 1 2
If I picked 1:11 the result would be
3 2 1 0 0 1 0 1 2 0 0
Is this possible? Ideally I need to do this for giant matrices and vectors, so the fastest way to calculate this would be appreciated.
Here's an alternative suggestion to histcounts, which appears to be ~8x faster on MATLAB R2015b:
A = [ 1 1 1 2 2 3 6 8 9 9 ];
maxRange = 11;
N = accumarray(A(:), 1, [maxRange,1])';
N =
3 2 1 0 0 1 0 1 2 0 0
Comparing the speed:
K>> tic; for i = 1:100000, N1 = accumarray(A(:), 1, [maxRange,1])'; end; toc;
Elapsed time is 0.537597 seconds.
K>> tic; for i = 1:100000, N2 = histcounts(A,1:maxRange+1); end; toc;
Elapsed time is 4.333394 seconds.
K>> isequal(N1, N2)
ans =
1
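For what it's worth, accumarray's closest NumPy relative is np.bincount, which counts occurrences of each non-negative integer; a quick sketch of the same computation:
import numpy as np

A = np.array([1, 1, 1, 2, 2, 3, 6, 8, 9, 9])
maxRange = 11
N = np.bincount(A, minlength=maxRange + 1)[1:]  # drop the count for value 0
# N -> [3 2 1 0 0 1 0 1 2 0 0]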
As per the loop request, here's a looped version, which should not be too slow since the latest engine overhaul:
A = [ 1 1 1 2 2 3 6 8 9 9 ];
maxRange = 11; %// your range
output = zeros(1,maxRange); %// initialise output
for ii = 1:maxRange
    tmp = A==ii; %// temporary storage
    output(ii) = sum(tmp(:)); %// count the number of occurrences
end
which would result in
output =
3 2 1 0 0 1 0 1 2 0 0
Faster and non-looping would be @beaker's suggestion to use histcounts:
[N,edges] = histcounts(A,1:maxRange+1);
N =
3 2 1 0 0 1 0 1 2 0 0
where the +1 makes sure the last entry is included as well.
Assuming the input A to be a sorted array and the range starts from 1 and goes until some value greater than or equal to the largest element in A, here's an approach using diff and find -
%// Inputs
A = [2 4 4 4 8 9 11 11 11 12]; %// Modified for variety
maxN = 13;
idx = [0 find(diff(A)>0) numel(A)]+1;
out = zeros(1,maxN); %// OR for better performance : out(maxN) = 0;
out(A(idx(1:end-1))) = diff(idx);
Output -
out =
0 1 0 3 0 0 0 1 1 0 3 1 0
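Assuming the same sorted-input precondition, the diff/find idea translates to NumPy as follows (a sketch for illustration; indices are 0-based internally but the output layout matches):
import numpy as np

A = np.array([2, 4, 4, 4, 8, 9, 11, 11, 11, 12])  # sorted input
maxN = 13

# Start index of each run of equal values, plus an end sentinel
idx = np.concatenate(([0], np.flatnonzero(np.diff(A) > 0) + 1, [len(A)]))
out = np.zeros(maxN, dtype=int)
out[A[idx[:-1]] - 1] = np.diff(idx)  # run value -> run length
# out -> [0 1 0 3 0 0 0 1 1 0 3 1 0]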
This can be done very easily with bsxfun.
Let the data be
A = [ 1 1 1 2 2 3 6 8 9 9 ]; %// data
B = 1:9; %// possible values
Then
result = sum(bsxfun(@eq, A(:), B(:).'), 1);
gives
result =
3 2 1 0 0 1 0 1 2
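The bsxfun trick is plain broadcasting, so for reference it translates to NumPy one-for-one:
import numpy as np

A = np.array([1, 1, 1, 2, 2, 3, 6, 8, 9, 9])
B = np.arange(1, 10)  # possible values 1..9
result = (A[:, None] == B[None, :]).sum(axis=0)
# result -> [3 2 1 0 0 1 0 1 2]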

Count the number of rows between each instance of a value in a matrix

Assume the following matrix:
myMatrix = [
    1 0 1
    1 0 0
    1 1 1
    1 1 1
    0 1 1
    0 0 0
    0 0 0
    0 1 0
    1 0 0
    0 0 0
    0 0 0
    0 0 1
    0 0 1
    0 0 1
];
Given the above (and treating each column independently), I'm trying to create a matrix that will contain the number of rows since the last value of 1 has "shown up". For example, in the first column, the first four values would become 0 since there are 0 rows between each of those rows and the previous value of 1.
Row 5 would become 1, row 6 = 2, row 7 = 3, row 8 = 4. Since row 9 contains a 1, it would become 0 and the count starts again with row 10. The final matrix should look like this:
FinalMatrix = [
    0 1 0
    0 2 1
    0 0 0
    0 0 0
    1 0 0
    2 1 1
    3 2 2
    4 0 3
    0 1 4
    1 2 5
    2 3 6
    3 4 0
    4 5 0
    5 6 0
];
What is a good way of accomplishing something like this?
EDIT: I'm currently using the following code:
[numRow,numCol] = size(myMatrix);
oneColumn = 1:numRow;
FinalMatrix = repmat(oneColumn',1,numCol);
toSubtract = zeros(numRow,numCol);
for m = 1:numCol
    rowsWithOnes = find(myMatrix(:,m));
    for mm = 1:length(rowsWithOnes)
        toSubtract(rowsWithOnes(mm):end,m) = rowsWithOnes(mm);
    end
end
FinalMatrix = FinalMatrix - toSubtract;
which runs about 5 times faster than the posted bsxfun solution over many trials and data sets (about 1500 x 2500 in size). Can the code above be optimized?
For a single column you could do this:
col = 1; %// desired column
vals = bsxfun(@minus, 1:size(myMatrix,1), find(myMatrix(:,col)));
vals(vals<0) = inf;
result = min(vals, [], 1).';
Result for first column:
result =
0
0
0
0
1
2
3
4
0
1
2
3
4
5
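As a language-neutral statement of the counting rule itself, here is a simple per-column Python sketch (unoptimized and not part of the MATLAB answers; it just makes the reset-on-1 logic explicit):
import numpy as np

def rows_since_last_one(col):
    # Count rows since the most recent 1; reset the counter on each 1.
    out = np.zeros(len(col), dtype=int)
    count = 0
    for i, v in enumerate(col):
        count = 0 if v == 1 else count + 1
        out[i] = count
    return out

myMatrix = np.array([[1, 0, 1], [1, 0, 0], [1, 1, 1], [1, 1, 1],
                     [0, 1, 1], [0, 0, 0], [0, 0, 0], [0, 1, 0],
                     [1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 1],
                     [0, 0, 1], [0, 0, 1]])
final = np.column_stack([rows_since_last_one(c) for c in myMatrix.T])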
find + diff + cumsum based approach -
offset_array = zeros(size(myMatrix));
for k1 = 1:size(myMatrix,2)
    a = myMatrix(:,k1);
    widths = diff(find(diff([1 ; a])~=0));
    idx = find(diff(a)==1)+1;
    offset_array(idx(idx<=numel(a)),k1) = widths(1:2:end);
end
FinalMatrix1 = cumsum(double(myMatrix==0) - offset_array);
Benchmarking
The benchmarking code comparing the above-mentioned approach against the one in the question is listed here:
clear all
myMatrix = round(rand(1500,2500)); %// create random input array
for k = 1:50000
    tic(); elapsed = toc(); %// warm up tic/toc
end

disp('------------- With FIND+DIFF+CUMSUM based approach')
tic
offset_array = zeros(size(myMatrix));
for k1 = 1:size(myMatrix,2)
    a = myMatrix(:,k1);
    widths = diff(find(diff([1 ; a])~=0));
    idx = find(diff(a)==1)+1;
    offset_array(idx(idx<=numel(a)),k1) = widths(1:2:end);
end
FinalMatrix1 = cumsum(double(myMatrix==0) - offset_array);
toc
clear FinalMatrix1 offset_array idx widths a

disp('------------- With original approach')
tic
[numRow,numCol] = size(myMatrix);
oneColumn = 1:numRow;
FinalMatrix = repmat(oneColumn',1,numCol);
toSubtract = zeros(numRow,numCol);
for m = 1:numCol
    rowsWithOnes = find(myMatrix(:,m));
    for mm = 1:length(rowsWithOnes)
        toSubtract(rowsWithOnes(mm):end,m) = rowsWithOnes(mm);
    end
end
FinalMatrix = FinalMatrix - toSubtract;
toc
The results I got were -
------------- With FIND+DIFF+CUMSUM based approach
Elapsed time is 0.311115 seconds.
------------- With original approach
Elapsed time is 7.587798 seconds.
