Aggregate rows using data.table - data.table

My data looks like the table below, and I would like to group it by zip. Along with the grouping I would like to keep the lower, avg, upper, and avgDiff columns, plus two calculated fields: the total count by zip and the sum of 'idk' (shown in the second table). I would specifically like to use data.table for this. Thank you.
     zip     lower avg    upper RISK idk diff  avgDiff total
1: 12007 -170.3723 592 1354.372  676   0   84 137.2903   123
2: 12007 -170.3723 592 1354.372  828   1  236 137.2903   123
3: 12007 -170.3723 592 1354.372  627   1   35 137.2903   123
4: 12009 -150.3723 300 1200.372  770   1  178 125.2903   456
5: 12007 -170.3723 592 1354.372  770   1  178 137.2903   123
6: 12010 -100.3723 200 1100.372  893   1  301 300.2903   890
desired result
     zip     lower avg    upper zipCount  avgDiff sumidk
1: 12007 -170.3723 592 1354.372        4 137.2903      3
2: 12009 -150.3723 300 1200.372        1 125.2903      1
3: 12010 -100.3723 200 1100.372        1 300.2903      1
The lower, avg, upper, and avgDiff will be the same within the zip.
So far I have DT[, .(zipcount = .N), by = zip], which groups by zip and gives me the total rows per zip, but I'm getting stuck at this point.
thank you
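The question asks for a data.table solution, but the arithmetic being requested can be sketched in plain Python as a language-neutral check (rows hard-coded from the table above; the dict-based grouping is only illustrative, not the data.table idiom):

```python
# Plain-Python sketch of the aggregation described above: group by zip,
# keep the per-zip constants (lower/avg/upper/avgDiff), count rows, sum idk.
rows = [
    # (zip, lower, avg, upper, idk, avgDiff)
    (12007, -170.3723, 592, 1354.372, 0, 137.2903),
    (12007, -170.3723, 592, 1354.372, 1, 137.2903),
    (12007, -170.3723, 592, 1354.372, 1, 137.2903),
    (12009, -150.3723, 300, 1200.372, 1, 125.2903),
    (12007, -170.3723, 592, 1354.372, 1, 137.2903),
    (12010, -100.3723, 200, 1100.372, 1, 300.2903),
]

result = {}
for z, lower, avg, upper, idk, avg_diff in rows:
    if z not in result:
        # lower/avg/upper/avgDiff are constant within a zip, so keep the first
        result[z] = {"lower": lower, "avg": avg, "upper": upper,
                     "avgDiff": avg_diff, "zipCount": 0, "sumidk": 0}
    result[z]["zipCount"] += 1
    result[z]["sumidk"] += idk
```

With the sample data this reproduces the desired table: zip 12007 gets zipCount 4 and sumidk 3, while 12009 and 12010 each get a count of 1.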

Related

Reformulating a for loop with vectorization or another approach - octave

Is there any way to vectorize (or reformulate) each body of the loop in this code:
col = load('col-deau'); % load data
h = col(:,8);           % corresponding water column
dates = col(:,3);       % and its dates
% removing out-of-bound data
days = days(h~=9999.000);
h = h(h~=9999.000);
dates = sort(dates(h~=9999.000));
[k,hcat] = hist(h,nbin); % make classes (k) and class boundaries (hcat) of the water column automatically
dcat = 1:15;             % make boundaries for dates
for k = 1:length(dcat)-1   % loop over each date class
  ii = find(dates>=dcat(k) & dates<dcat(k+1)); % dates falling inside date class k
  for j = 1:length(hcat)-1 % loop over each water-column class
    ij = find(h>=hcat(j) & h<hcat(j+1)); % water-column values falling inside class j
    obs(k,j) = length(intersect(ii,ij)); % size of the intersection of the two index sets
  end
end
I've tried using vectorization, for example, to change this part:
for k = 1:length(dcat)-1
  ii = find(dates>=dcat(k) & dates<dcat(k+1))
endfor
with this:
nk=1:length(dcat)-1;
ii2=find(dates>=dcat(nk)&dates<dcat(nk+1));
and also using bsxfun:
ii2 = find(bsxfun(@and, bsxfun(@ge,dates,nk), bsxfun(@lt,dates,nk+1)));
but to no avail. The two vectorized attempts produce output identical to each other, but it does not match the output of the for loop (in either elements or vector size).
For information, h is a vector containing the water column in meters, and dates is a vector (two-digit integers) containing the dates on which the measurement for the corresponding water column was taken.
The input file can be found here: https://drive.google.com/open?id=1EomLGYleaNtiGG2iV_9LRt425blxdIsm
As for the output, I want to have ii like this:
ii =
  1177
  1178
  1179
  ...
  1272
(a 96-element column vector of the consecutive values 1177 through 1272)
With the first approach, instead, I get an ii2 that is very different in both values and vector size (too large to post here).
Can someone help a desperate newbie here? I just need to reformulate the loop part into a better, more concise version.
If more details need to be added, please feel free to ask me.
You can use hist3:
pkg load statistics
[obs, ~] = hist3([dates(:) h(:)], 'Edges', {dcat, hcat});
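For reference, what the nested loop (and the hist3 call) computes is a 2-D histogram: the count of points falling in each (date class, water-column class) cell. A minimal Python sketch of that counting, with made-up toy data rather than the linked file (variable names mirror the Octave ones):

```python
# 2-D binning: obs[k][j] counts points with dcat[k] <= date < dcat[k+1]
# and hcat[j] <= h < hcat[j+1]  (toy data, not the data from the question)
dates = [1, 1, 2, 3, 3, 3]
h     = [0.5, 1.5, 0.5, 0.5, 2.5, 2.5]
dcat  = [1, 2, 3, 4]          # date class edges
hcat  = [0.0, 1.0, 2.0, 3.0]  # water-column class edges

obs = [[0] * (len(hcat) - 1) for _ in range(len(dcat) - 1)]
for d, w in zip(dates, h):
    for k in range(len(dcat) - 1):
        if dcat[k] <= d < dcat[k + 1]:
            for j in range(len(hcat) - 1):
                if hcat[j] <= w < hcat[j + 1]:
                    obs[k][j] += 1
```

Each data point lands in exactly one (k, j) cell, which is exactly what the intersect() in the original loop tallies.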

Joining two matrices, one with numbers and the other percentages

I have two matrices, cases and percent. I want to combine them with the columns alternating between the two, i.e. cases[c1], percent[c1], cases[c2], percent[c2], ...
tab year region if sex==1, matcell(cases)
tab year region, matcell(total)
mata:st_matrix("percent", 100 * st_matrix("cases"):/st_matrix("total"))
matrix list cases
c1 c2 c3 c4 c5 c6 c7 c8 c9 c10
r1 1313 1289 1121 1176 1176 1150 1190 1184 1042 940
r2 340 359 357 366 383 332 406 367 352 272
r3 260 246 266 265 270 259 309 306 266 283
r4 271 267 293 277 317 312 296 285 265 253
r5 218 249 246 213 264 255 247 221 229 220
r6 215 202 157 202 200 204 220 183 176 180
r7 178 193 218 199 194 195 201 187 172 159
r8 127 111 107 130 133 99 142 143 131 114
r9 64 68 85 74 70 60 59 70 76 61
. matrix list percent, format(%2.1f)
percent[9,10]
c1 c2 c3 c4 c5 c6 c7 c8 c9 c10
r1 70.1 71.2 67.3 67.2 66.9 71.5 72.6 72.5 74.9 73.2
r2 65.3 65.2 69.1 64.4 68.0 70.5 72.0 64.8 66.4 64.9
r3 74.7 73.7 74.7 69.2 68.9 67.6 70.5 72.3 79.4 80.9
r4 66.3 72.6 72.9 74.9 72.7 73.8 72.2 73.3 74.9 71.7
r5 68.8 67.1 66.0 63.6 67.2 67.1 65.2 67.4 68.6 73.8
r6 73.1 72.9 69.2 63.7 67.6 68.0 72.4 68.8 74.9 78.9
r7 64.5 60.3 69.9 70.6 69.3 78.3 72.3 65.8 71.4 71.3
r8 66.1 64.2 63.3 74.7 69.3 56.9 70.6 70.1 63.9 57.9
r9 77.1 73.9 70.2 74.0 71.4 73.2 81.9 72.9 87.4 74.4
How do I combine both the matrices?
Currently I have tried matrix final = cases, percent but that just puts one matrix beside the other. I want the columns to alternate between cases and percent.
I will then use putexcel command to put them into an already formatted table with columns of cases and percentages.
Let me start by supporting Nick Cox's comments.
The problem is that there is no simple solution for combining matrices in the way you describe. Nevertheless, it is simple to achieve the results you want by taking a rather different path from the one you outlined. It's easier to demonstrate the technique with code than to describe it in natural language, as I do below, and as I expect Nick might have been inclined to do.
By not providing a Minimal, Complete, and Verifiable example, as described in the link Nick provided to you, you've discouraged others from showing you where you've gone off the tracks.
// create a minimal amount of sample data hopefully similar to actual data
clear
input year region sex
2001 1 1
2001 1 2
2001 1 2
2002 1 1
2002 1 2
2001 2 1
2002 2 1
2002 2 2
end
list, clean noobs
// use collapse to generate summaries equivalent to two tabs
generate male = sex==1
collapse (count) total=male (sum) cases=male, by(year region)
list, clean noobs
generate percent = 100*cases/total
keep year region total percent
// flatten and interleave the columns
reshape wide total percent, i(year) j(region)
drop year
list, clean noobs
// now use export excel to output,
// or use mkmat to load into a matrix and use putexcel to output
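The collapse/reshape route above sidesteps matrix surgery entirely, but the interleaving itself is simple to express in any language. A generic sketch, with plain Python lists standing in for the Stata matrices (values copied from the first rows of the tables above):

```python
# Interleave the columns of two equal-sized matrices so the result reads
# cases[c1], percent[c1], cases[c2], percent[c2], ...
cases   = [[1313, 1289], [340, 359]]
percent = [[70.1, 71.2], [65.3, 65.2]]

final = []
for crow, prow in zip(cases, percent):
    row = []
    for c, p in zip(crow, prow):
        row.extend([c, p])  # alternate one cases entry with one percent entry
    final.append(row)
```

Here `final` comes out as [[1313, 70.1, 1289, 71.2], [340, 65.3, 359, 65.2]], i.e. the alternating layout the question asks for.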

How to calculate Total average response time

Below are the results
sampler_label  count  average  median  90%_line    min    max
Transaction1       2    61774   61627     61921  61627  61921
Transaction2       4       82      61       190     15    190
Transaction3       4     1862    1317      3612   1141   3612
Transaction4       4     1242     915      1602    911   1602
Transaction5       4      692     608       906    423    906
Transaction6       4     2764    2122      4748   1182   4748
Transaction7       4     9369    9029     11337   7198  11337
Transaction8       4     1245     890      2168    834   2168
Transaction9       4     3475    2678      4586   2520   4586
TOTAL             34     6073    1381      9913     15  61921
My question is: how is the total average response time (6073 here) being calculated?
I want to exclude Transaction1's response time and then calculate the total average response time of the rest.
How can I do that?
Total Avg Response time = ((s1*t1) + (s2*t2)...)/s
s1 = No of times transaction 1 was executed
t1 = Avg response time for transaction 1
s2 = No of times transaction 2 was executed
t2 = Avg response time for transaction 2
s = Total no of samples (s1+s2..)
In your case, every transaction except Transaction1 was executed 4 times, so a simple average of the remaining response times (82, 1862, 1242, ...) gives the result you want.
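To check the formula against the numbers in the table (counts and averages copied from above):

```python
# (count, average) pairs taken from the results table
samples = {
    "Transaction1": (2, 61774),
    "Transaction2": (4, 82),
    "Transaction3": (4, 1862),
    "Transaction4": (4, 1242),
    "Transaction5": (4, 692),
    "Transaction6": (4, 2764),
    "Transaction7": (4, 9369),
    "Transaction8": (4, 1245),
    "Transaction9": (4, 3475),
}

def total_average(data):
    # weighted mean: (s1*t1 + s2*t2 + ...) / (s1 + s2 + ...)
    total_time = sum(s * t for s, t in data.values())
    total_count = sum(s for s, _ in data.values())
    return total_time / total_count

overall = total_average(samples)
without_t1 = total_average(
    {k: v for k, v in samples.items() if k != "Transaction1"})
```

`overall` comes to about 6072.7, which rounds to the 6073 in the TOTAL row; `without_t1` is 2591.375, the total average with Transaction1 excluded.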

How to calculate classification error rate

Alright, this question is pretty hard. I am going to give you an example.
The left numbers are my algorithm's classifications and the right numbers are the original class numbers:
177 86
177 86
177 86
177 86
177 86
177 86
177 86
177 86
177 86
177 89
177 89
177 89
177 89
177 89
177 89
177 89
So here my algorithm merged two different classes, 86 and 89, into one. What would the error be in the above example?
Or here another example
203 7
203 7
203 7
203 7
16 7
203 7
17 7
16 7
203 7
In the above example the left numbers are my algorithm's classifications and the right numbers are the original class ids. As can be seen, it misclassified 3 products (I am classifying commercial products of the same kind). So what would the error rate be in this example, and how would you calculate it?
This question is hard and complex. We have finished the classification but we cannot find the right way to calculate the success rate :D
Here's a longish example, a real confusion matrix with 10 input classes "0" - "9"
(handwritten digits),
and 10 output clusters labelled A - J.
Confusion matrix for 5620 optdigits:
True 0 - 9 down, clusters A - J across
-----------------------------------------------------
A B C D E F G H I J
-----------------------------------------------------
0: 2 4 1 546 1
1: 71 249 11 1 6 228 5
2: 13 5 64 1 13 1 460
3: 29 2 507 20 5 9
4: 33 483 4 38 5 3 2
5: 1 1 2 58 3 480 13
6: 2 1 2 294 1 1 257
7: 1 5 1 546 6 7
8: 415 15 2 5 3 12 13 87 2
9: 46 72 2 357 35 1 47 2
----------------------------------------------------
580 383 496 1002 307 670 549 557 810 266 estimates in each cluster
y class sizes: [554 571 557 572 568 558 558 566 554 562]
kmeans cluster sizes: [ 580 383 496 1002 307 670 549 557 810 266]
For example, cluster A has 580 data points, 415 of which are "8"s;
cluster B has 383 data points, 249 of which are "1"s; and so on.
The problem is that the output classes are scrambled, permuted;
they correspond in this order, with counts:
A B C D E F G H I J
8 1 4 3 6 7 0 5 2 6
415 249 483 507 294 546 546 480 460 257
One could say that the "success rate" is
75 % = (415 + 249 + 483 + 507 + 294 + 546 + 546 + 480 + 460 + 257) / 5620
but this throws away useful information —
here, that E and J both say "6", and no cluster says "9".
So, add up the biggest numbers in each column of the confusion matrix
and divide by the total.
But, how to count overlapping / missing clusters,
like the 2 "6"s, no "9"s here ?
I don't know of a commonly agreed-upon way
(doubt that the Hungarian algorithm
is used in practice).
Bottom line: don't throw away information; look at the whole confusion matrix.
NB such a "success rate" will be optimistic for new data !
It's customary to split the data into say 2/3 "training set" and 1/3 "test set",
train e.g. k-means on the 2/3 alone,
then measure confusion / success rate on the test set — generally worse than on the training set alone.
Much more can be said; see e.g.
Cross-validation.
You have to define the error criteria if you want to evaluate the performance of an algorithm, so I'm not sure exactly what you're asking. In some clustering and machine-learning algorithms you define the error metric and the algorithm minimizes it.
Take a look at this
https://en.wikipedia.org/wiki/Confusion_matrix
to get some ideas
You have to define an error metric to measure with. In your case, a simple method would be to define a properties mapping for your products:
p = properties(id)
where id is the product id, and p is likely a vector with one entry per property. Then you can define the error function e (or distance) between two products as
e = d(p1, p2)
Of course, each property must be evaluated to a number in this function. This error function can then be used in the classification algorithm and in learning.
In your second example, it seems that you treat the pair (203, 7) as a successful classification, so I think you already have a metric of your own. Being more specific may get you a better answer.
Classification Error Rate (CER) is 1 - Purity (http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html)
ClusterPurity <- function(clusters, classes) {
  sum(apply(table(classes, clusters), 2, max)) / length(clusters)
}
Code from @john-colby
Or
CER <- function(clusters, classes) {
  1 - sum(apply(table(classes, clusters), 2, max)) / length(clusters)
}
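The same purity / CER computation sketched in plain Python, for anyone not using R (the manual counting below stands in for R's table(classes, clusters)):

```python
from collections import Counter

def cluster_purity(clusters, classes):
    # For each cluster, count the most common true class among its members,
    # sum those maxima, and normalise by the total number of points.
    per_cluster = {}
    for cl, tru in zip(clusters, classes):
        per_cluster.setdefault(cl, Counter())[tru] += 1
    return sum(max(c.values()) for c in per_cluster.values()) / len(clusters)

def cer(clusters, classes):
    # Classification Error Rate = 1 - purity
    return 1 - cluster_purity(clusters, classes)
```

For example, cluster_purity(["A", "A", "B", "B"], ["x", "y", "y", "y"]) is 0.75: cluster A's majority class covers 1 point, cluster B's covers 2, and (1 + 2) / 4 = 0.75, so the CER is 0.25.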

Suggest optimal algorithm to find min number of days to purchase all toys

Note: I am still looking for a fast solution. Two of the solutions below are wrong and the third one is terribly slow.
I have N toys numbered 1...N, each with an associated cost. You go on a shopping spree such that, on a particular day, if you buy toy i, the next toy you buy on the same day must have index i+1 or greater. Moreover, the absolute cost difference between any two consecutively bought toys must be greater than or equal to k. What is the minimum number of days in which I can buy all the toys?
I tried a greedy approach: start with toy 1 and see how many toys can be bought on day 1; then find the smallest i not yet bought and start again from there.
Example:
Toys : 1 2 3 4
Cost : 5 4 10 15
let k be 5
On day 1, buy toys 1, 3, and 4;
on day 2, buy toy 2.
Thus, I can buy all toys in 2 days.
Note that greedy does not work for the example below: N = 151 and k = 42.
the costs of the toys 1...N in that order are :
383 453 942 43 27 308 252 721 926 116 607 200 195 898 568 426 185 604 739 476 354 533 515 244 484 38 734 706 608 136 99 991 589 392 33 615 700 636 687 625 104 293 176 298 542 743 75 726 698 813 201 403 345 715 646 180 105 732 237 712 867 335 54 455 727 439 421 778 426 107 402 529 751 929 178 292 24 253 369 721 65 570 124 762 636 121 941 92 852 178 156 719 864 209 525 942 999 298 719 425 756 472 953 507 401 131 150 424 383 519 496 799 440 971 560 427 92 853 519 295 382 674 365 245 234 890 187 233 539 257 9 294 729 313 152 481 443 302 256 177 820 751 328 611 722 887 37 165 739 555 811
You can find the optimal solution by solving the asymmetric Travelling Salesman problem.
Consider each toy as a node and build the complete directed graph (that is, add an edge between each pair of nodes). An edge has cost 1 (the buyer has to continue on the next day) if the target's index is smaller or the target's cost is less than 5 (the example's k) plus the source's cost, and 0 otherwise. Now find the shortest path covering this graph without visiting a node twice - i.e., solve the Travelling Salesman problem.
This idea is not very fast (TSP is NP-hard), but it should quickly give you a reference implementation.
This is not as difficult as ATSP. All you need to do is look for increasing subsequences.
Being a mathematician, the way I would solve the problem is to apply RSK to get a pair of Young tableaux; the answer for how many days is then the height of the tableau, and the rows of the second tableau tell you what to purchase on which day.
The idea is to do Schensted insertion on the cost sequence c. For the example you gave, c = (5, 4, 10, 15), the insertion goes like this:
Step 1: Insert c[1] = 5
P = 5
Step 2: Insert c[2] = 4
5
P = 4
Step 3: Insert c[3] = 10
5
P = 4 10
Step 4: Insert c[4] = 15
5
P = 4 10 15
The idea is that you insert the entries of c into P one at a time. When inserting c[i] into row j:
if c[i] is bigger than the largest element in the row, add it to the end of the row;
otherwise, find the leftmost entry in row j that is larger than c[i], call it y, replace y with c[i], and insert y into row j+1.
P is an array in which the lengths of the rows are weakly decreasing and the entries in each row of P (these are the costs) weakly increase. The number of rows is the number of days it will take.
For a more elaborate example (made by generating 9 random numbers)
1 2 3 4 5 6 7 8 9
c = [ 5 4 16 7 11 4 13 6 5]
16
7
5 6 11
P = 4 4 5 13
So the best possible solution takes 4 days, buying 4 items on day 1, 3 on day 2, 1 on day 3, and 1 on day 4.
To handle the additional constraint that consecutive costs must differ by at least k, redefine the (partial) order on costs: say that c[i] <k< c[j] if and only if c[j] - c[i] >= k in the usual ordering on numbers. The above algorithm works for partial orders as well as total orders.
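As a sketch, here is the row insertion just described in Python, using the ordinary ordering (the <k< variant would change only the two comparisons). It reproduces both worked examples above:

```python
import bisect

def schensted_rows(costs):
    """Schensted row insertion: returns the rows of the tableau P.

    Each row is weakly increasing; the number of rows is the number of days.
    """
    rows = []
    for c in costs:
        x = c
        for row in rows:
            if x >= row[-1]:
                row.append(x)        # bigger than the largest entry: append
                x = None
                break
            # bump the leftmost entry strictly larger than x into the next row
            i = bisect.bisect_right(row, x)
            row[i], x = x, row[i]
        if x is not None:
            rows.append([x])         # nothing left to bump into: new row
    return rows
```

For the question's example, schensted_rows([5, 4, 10, 15]) gives [[4, 10, 15], [5]] (2 days), and for the 9-number example it gives [[4, 4, 5, 13], [5, 6, 11], [7], [16]], matching the tableau shown above. Note the question's preamble warns that some of the posted approaches may not be optimal under the k constraint, so treat this as an illustration of the insertion procedure, not a verified solver.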
I somewhat feel that a greedy approach would give a fairly good result.
I think your approach is not optimal simply because you always start with toy 1, whereas you should start with the least expensive toy; doing so gives you the most room to move to the next toy.
With each move being the least expensive available one, it becomes a DFS-style problem where you always follow the least expensive path subject to the constraint k.
