What's the fastest way to unroll a matrix in MATLAB?

How do I turn a matrix:
[ 0.12 0.23 0.34 ;
0.45 0.56 0.67 ;
0.78 0.89 0.90 ]
into a 'coordinate' matrix with a bunch of rows?
[ 1 1 0.12 ;
1 2 0.23 ;
1 3 0.34 ;
2 1 0.45 ;
2 2 0.56 ;
2 3 0.67 ;
3 1 0.78 ;
3 2 0.89 ;
3 3 0.90 ]
(permutation of the rows is irrelevant, it only matters that the data is in this structure)
Right now I'm using a for loop but that takes a long time.

Here is an option using ind2sub:
mat= [ 0.12 0.23 0.34 ;
0.45 0.56 0.67 ;
0.78 0.89 0.90 ] ;
[I,J] = ind2sub(size(mat), 1:numel(mat));
r=[I', J', mat(:)]
r =
1.0000 1.0000 0.1200
2.0000 1.0000 0.4500
3.0000 1.0000 0.7800
1.0000 2.0000 0.2300
2.0000 2.0000 0.5600
3.0000 2.0000 0.8900
1.0000 3.0000 0.3400
2.0000 3.0000 0.6700
3.0000 3.0000 0.9000
Note that the rows come out in column-major order, so the row ordering differs from your example (which, per your note, doesn't matter).
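If you do want the exact row-major ordering from the question, a small addition (not part of the original answer) is to sort the result:
r = sortrows(r); % sorts by row index first, then by column index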

A = [ .12 .23 .34 ;
.45 .56 .67 ;
.78 .89 .90 ];
[ii jj] = meshgrid(1:size(A,1),1:size(A,2));
B = A.';
R = [ii(:) jj(:) B(:)];
If you don't mind a different order (according to your edit), you can do it more easily:
[ii jj] = ndgrid(1:size(A,1),1:size(A,2));
R = [ii(:) jj(:) A(:)];

In addition to generating the row/col indexes with meshgrid, you can use all three outputs of find as follows:
[II,JJ,AA]= find(A.'); %' note the transpose since you want to read across
M = [JJ II AA]
M =
1 1 0.12
1 2 0.23
1 3 0.34
2 1 0.45
2 2 0.56
2 3 0.67
3 1 0.78
3 2 0.89
3 3 0.9
Limited application because zeros get lost. Nasty, but correct workaround (thanks user664303):
B = A.'; v = B == 0; %' transpose to read across, otherwise work directly with A
[II, JJ, AA] = find(B + v);
M = [JJ II AA-v(:)];
Needless to say, I would recommend one of the other solutions. :) In particular, ndgrid is the most natural way to obtain the row/column indices.

I find ndgrid to be the most natural solution, but here's a fun way to do it manually with the odd couple of kron and repmat:
M = [kron(1:size(A,2),ones(1,size(A,1))).' ... %' row indexes
repmat((1:size(A,1))',size(A,2),1) ... %' col indexes
reshape(A.',[],1)] %' matrix values, read across
Simple adjustment to read down, as is natural in MATLAB:
M = [repmat((1:size(A,1))',size(A,2),1) ... %' row indexes (still)
kron(1:size(A,2),ones(1,size(A,1))).' ... %' column indexes
A(:)] % matrix values, read down
(Also since my first answer was obscenely hackish.)
I also find kron to be a nice tool to replicate each element at a time, rather than the entire array at a time as repmat does. For example:
>> 1:size(A,2)
ans =
1 2 3
>> kron(1:size(A,2),ones(1,size(A,1)))
ans =
1 1 1 2 2 2 3 3 3
Taking this a bit further, we can generate a new function called repel to replicate elements of an array as opposed to the whole array:
>> repel = @(x,m,n) kron(x,ones(m,n));
>> repel(1:4,1,2)
ans =
1 1 2 2 3 3 4 4
>> repel(1:3,2,2)
ans =
1 1 2 2 3 3
1 1 2 2 3 3
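As an aside, if your MATLAB is R2015a or newer (an assumption about your setup), the built-in repelem does this element-wise replication directly:
>> repelem(1:4, 2) % built-in equivalent of repel(1:4,1,2)
ans =
1 1 2 2 3 3 4 4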

Related

How to convert from long to wide format when the column numbers per row are variable? (MATLAB)

I have a time series dataset of accelerometry values with many sub-second measurements, but the actual number of sub-seconds recorded per second is variable.
So I would be starting with something that looks like this:
Date time   Dec sec   Acc X
1           .00       0.5
1           .25       0.5
1           .50       0.6
1           .75       0.5
2           .00       0.6
2           .40       0.5
2           .80       0.5
3           .00       0.5
3           .50       0.5
4           .00       0.6
4           .25       0.5
4           .50       0.5
4           .75       0.5
And trying to convert it to wide format where each row is a second, and the columns are the decimal seconds corresponding to each second.
sub1   sub2   sub3   sub4
.5     .5     .6     .5
.6     .5     .5     NaN
.5     .5     NaN    NaN
.6     .5     .5     .5
In code this would look like:
%Preallocate some space
Dpts_observations = NaN(13,3);
%These are the "seconds" number
Dpts_observations(:,1)=[1 1 1 1...
2 2 2...
3 3...
4 4 4 4];
%These are the "decimal seconds"
Dpts_observations(:,2) = [0.00 0.25 0.50 0.75...
0.00 0.33 0.66...
0.00 0.50 ...
0.00 0.25 0.50 0.75]
%Here's actual acceleration values
Dpts_observations(:,3) = [0.5 0.5 0.5 0.5...
0.6 0.5 0.5...
0.4 0.5...
0.5 0.5 0.6 0.4]
%I have summary data in a separate file that helps me determine the row
%indexes corresponding to sub-seconds that belong to the same second, and I
%use them to manually extract from long form to wide form.
%Create table to hold indexing information
Seconds = [1 2 3 4];
Obs_per_sec = [4 3 2 4];
Start_index = [1 5 8 10];
End_index = [4 7 9 13];
Dpts_attributes = table(Seconds, Obs_per_sec, Start_index, End_index);
%Preallocate new array
Acc_X = NaN(4,4);
%Loop through seconds
for i=1:max(size(Dpts_attributes))
Acc_X(i, 1:Dpts_attributes.Obs_per_sec(i))=Dpts_observations(Dpts_attributes.Start_index(i):Dpts_attributes.End_index(i),3);
end
Now this is working, but it's very slow. In reality I have a huge dataset consisting of millions of seconds, and I'm hoping there might be a better solution than the one I currently have. My data is all numeric to make everything as fast as possible.
Thank you!
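No answer is recorded here, but as a sketch (my own outline, not from the original thread): if the observations are sorted by the seconds column, as in the example, the whole extraction can be vectorized with sub2ind, with no loop and no index table. This assumes repelem is available (R2015a or newer).
% Sketch: compute each observation's within-second position, then place
% all values at once. Assumes Dpts_observations is sorted by column 1.
sec = Dpts_observations(:, 1);
n = numel(sec);
newsec = [true; diff(sec) ~= 0];             % true at the first row of each second
starts = find(newsec);                       % row where each second begins
counts = diff([starts; n + 1]);              % observations per second
pos = (1:n)' - repelem(starts, counts) + 1;  % within-second position: 1, 2, 3, ...
grp = cumsum(newsec);                        % output row for each observation
Acc_X = NaN(numel(starts), max(pos));
Acc_X(sub2ind(size(Acc_X), grp, pos)) = Dpts_observations(:, 3);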

R: summing specific elements in a symmetrical matrix

If I have a correlation matrix, I know I can use upper.tri or lower.tri to sum all values, but is there a way to sum just specific parts of the matrix?
For example, a correlation matrix of 5 variables:
> Matrix
[,1] [,2] [,3] [,4] [,5]
[1,] 0 4 3 1 2
[2,] 4 0 3 2 1
[3,] 3 3 0 2 1
[4,] 1 2 2 0 1
[5,] 2 1 1 1 0
If the first 2 variables belong to one group, while 3-5 belong to another, is there a way to just ask for the sum of the inter-group values? e.g., 3+3+1+2+2+1 = 12.
A long-winded answer, but hopefully a generic one to help you!
matrix <- matrix (c(0,4,3,1,2,4,0,3,2,1,3,3,0,2,1,1,2,2,0,1,2,1,1,1,0), nrow=5, ncol=5)
group <- list(group1=c(1,2), group2=c(3,4,5))
sum_matrix <- matrix(data = NA, nrow = length(group), ncol = length(group))
for (i in 1:length(group))
{
for(j in 1:length(group))
{
ifelse(i==j, sum_matrix[i,j]<- NA, sum_matrix[i,j] <- sum(matrix[group[[i]], group[[j]] ]) )
}
}
sum_matrix
sum(matrix[group$group2, group$group1])
sum(matrix[group$group1, group$group2])
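For reference, the same table can be built without explicit loops by nesting sapply over the group list (a compact sketch of the loop above, not from the original answer):
# gi and gj run over the index vectors in `group`; diagonal cells stay NA
sum_matrix <- sapply(group, function(gi)
  sapply(group, function(gj)
    if (identical(gi, gj)) NA else sum(matrix[gi, gj])))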

Subtracting row 2 from row 1 repeatedly

I want to create in R a column in my data set where I subtract row 2 from row 1, row 4 from row 3, and so forth. Moreover, I want the subtraction result repeated for each row (e.g. if the result of row2 - row1 is -0.294803, I want this value present in both row 1 and row 2, hence repeated twice for both rows of the subtraction, and so forth for all subtractions).
Here is my data set.
I tried with the function aggregate but I didn't succeed.
Any hint?
Another possible solution can be:
x <- read.table("mydata.csv",header=T,sep=";")
x$diff <- rep(x$log[seq(2,nrow(x),by=2)] - x$log[seq(1,nrow(x),by=2)], each=2)
By using the function seq(), you can generate the sequences of row positions:
1, 3, 5, ... 9
2, 4, 6, ... 10
Afterwards, the code subtracts the odd rows (1, 3, ..., 9) from the even rows (2, 4, ..., 10). Each result is replicated using rep() and assigned to the new column diff.
solution 1
One way to do that is with a simple loop:
(download mydata.csv)
a = read.table("mydata.csv",header=T,sep=";")
a$delta= NA
for(i in seq(1, nrow(a), by=2 )){
a[i,"delta"] = a[i+1,"delta"] = a[i+1,"log"] - a[i,"log"]
}
What is going on here is that the for loop iterates over every odd number (that's what seq(..., by=2) does). So for the first, third, fifth, etc. row we assign to that row AND the following one the computed difference.
which returns:
> a
su match log delta
1 1 match 5.80 0.30
2 1 mismatch 6.10 0.30
3 2 match 6.09 -0.04
4 2 mismatch 6.05 -0.04
5 3 match 6.42 -0.12
6 3 mismatch 6.30 -0.12
7 4 match 6.20 -0.20
8 4 mismatch 6.00 -0.20
9 5 match 5.90 0.19
10 5 mismatch 6.09 0.19
solution 2
If you have a lot of data this approach can be slow, and generally R works better with the apply family of iterative functions.
The same code of above can be optimized like this:
a$delta = rep(
sapply(seq(1, nrow(a), by=2 ),
function(i){ a[i+1,"log"] - a[i,"log"] }
),
each=2)
This gives the very same result as the first solution and should be faster, but it is also somewhat less intuitive.
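A fully vectorized variant of the same idea (a sketch, not part of the original answer) picks the odd-position elements of diff():
# diff(a$log)[c(1,3,5,...)] is exactly row2-row1, row4-row3, ...
a$delta <- rep(diff(a$log)[seq(1, nrow(a) - 1, by = 2)], each = 2)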
solution 3
Finally, it looks to me that the long dataframe format makes this more convoluted than it needs to be, given your kind of data.
I'd reshape it to wide and then operate more naturally on separate columns, without the need for duplicated data.
Like this:
a = read.table("mydata.csv",header=T,sep=";")
a = reshape(a, idvar = "su", timevar = "match", direction = "wide")
#now creating what you want became a very simple thing:
a$delta = a[[3]]-a[[2]]
Which returns:
>a
su log.match log.mismatch delta
1 1 5.80 6.10 0.30
3 2 6.09 6.05 -0.04
5 3 6.42 6.30 -0.12
7 4 6.20 6.00 -0.20
9 5 5.90 6.09 0.19
The delta column contains the values you need. If you really need the long format for further analysis you can always go back with:
a= reshape(a, idvar = "su", timevar = "match", direction = "long")
#sort to original order:
a = a[with(a, order(su)), ]

Filter Data In a Cleaner/More Efficient Way

I have a set of data with a bunch of columns. Something like the following (in reality my data has about half a million rows):
big = [
1 1 0.93 0.58;
1 2 0.40 0.34;
1 3 0.26 0.31;
1 4 0.40 0.26;
2 1 0.60 0.04;
2 2 0.84 0.55;
2 3 0.53 0.72;
2 4 0.00 0.39;
3 1 0.27 0.51;
3 2 0.46 0.18;
3 3 0.61 0.01;
3 4 0.07 0.04;
4 1 0.26 0.43;
4 2 0.77 0.91;
4 3 0.49 0.80;
4 4 0.40 0.55;
5 1 0.77 0.40;
5 2 0.91 0.28;
5 3 0.80 0.65;
5 4 0.05 0.06;
6 1 0.41 0.37;
6 2 0.11 0.87;
6 3 0.78 0.61;
6 4 0.87 0.51
];
Now, let's say I want to get rid of the rows where the first column is a 3 or a 6.
I'm doing that like so:
filterRows = [3 6];
for i = filterRows
big = big(~ismember(1:size(big,1), find(big(:,1) == i)), :);
end
Which works, but the loop makes me think I'm missing a more efficient trick. Is there a better way to do this?
Originally I tried:
big(find(big(:,1) == filterRows ),:) = [];
but of course that doesn't work.
Use logical indexing:
rows = (big(:, 1) == 3 | big(:, 1) == 6);
big(rows, :) = [];
In the general case, where the values of the first column are stored in filterRows, you can generate the logical vector rows with ismember:
rows = ismember(big(:, 1), filterRows);
or with bsxfun:
rows = any(bsxfun(@eq, big(:, 1), filterRows(:).'), 2);
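In R2016b and later, implicit expansion lets you drop bsxfun entirely (a small addition, assuming a newer MATLAB):
rows = any(big(:, 1) == filterRows(:).', 2); % == expands across the row vector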

Permute all unique enumerations of a vector in R

I'm trying to find a function that will generate all the unique permutations of a vector, while not counting juxtapositions within subsets of the same element type. For example:
dat <- c(1,0,3,4,1,0,0,3,0,4)
has
factorial(10)
> 3628800
possible permutations, but only 10!/(2!*2!*4!*2!)
factorial(10)/(factorial(2)*factorial(2)*factorial(2)*factorial(4))
> 18900
unique permutations when ignoring juxtapositions within subsets of the same element type.
I can get this by using unique() and the permn() function from the package combinat
unique( permn(dat) )
but this is computationally very expensive, since it involves enumerating n!, which can be an order of magnitude more permutations than I need. Is there a way to do this without first computing n!?
EDIT: Here's a faster answer; again based on the ideas of Louisa Grey and Bryce Wagner, but with faster R code thanks to better use of matrix indexing. It's quite a bit faster than my original:
> d <- c(1,0,3,4,1,0,0,3,0,4)
> system.time(up1 <- uniqueperm(d))
user system elapsed
0.183 0.000 0.186
> system.time(up2 <- uniqueperm2(d))
user system elapsed
0.037 0.000 0.038
And the code:
uniqueperm2 <- function(d) {
dat <- factor(d)
N <- length(dat)
n <- tabulate(dat)
ng <- length(n)
if(ng==1) return(d)
a <- N-c(0,cumsum(n))[-(ng+1)]
foo <- lapply(1:ng, function(i) matrix(combn(a[i],n[i]),nrow=n[i]))
out <- matrix(NA, nrow=N, ncol=prod(sapply(foo, ncol)))
xxx <- c(0,cumsum(sapply(foo, nrow)))
xxx <- cbind(xxx[-length(xxx)]+1, xxx[-1])
miss <- matrix(1:N,ncol=1)
for(i in seq_len(length(foo)-1)) {
l1 <- foo[[i]]
nn <- ncol(miss)
miss <- matrix(rep(miss, ncol(l1)), nrow=nrow(miss))
k <- (rep(0:(ncol(miss)-1), each=nrow(l1)))*nrow(miss) +
l1[,rep(1:ncol(l1), each=nn)]
out[xxx[i,1]:xxx[i,2],] <- matrix(miss[k], ncol=ncol(miss))
miss <- matrix(miss[-k], ncol=ncol(miss))
}
k <- length(foo)
out[xxx[k,1]:xxx[k,2],] <- miss
out <- out[rank(as.numeric(dat), ties="first"),]
foo <- cbind(as.vector(out), as.vector(col(out)))
out[foo] <- d
t(out)
}
It doesn't return the same order, but after sorting, the results are identical.
up1a <- up1[do.call(order, as.data.frame(up1)),]
up2a <- up2[do.call(order, as.data.frame(up2)),]
identical(up1a, up2a)
For my first attempt, see the edit history.
The following function (which implements the classic formula for repeated permutations just like you did manually in your question) seems quite fast to me:
upermn <- function(x) {
n <- length(x)
duplicates <- as.numeric(table(x))
factorial(n) / prod(factorial(duplicates))
}
It does compute n!, but unlike permn it does not generate all the permutations first.
See it in action:
> dat <- c(1,0,3,4,1,0,0,3,0,4)
> upermn(dat)
[1] 18900
> system.time(upermn(dat))
user system elapsed
0.000 0.000 0.001
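A side note, not from the original answer: factorial(n) overflows to Inf for n > 170, so for longer vectors the same count is safer on the log scale:
upermn_log <- function(x) {
  duplicates <- as.numeric(table(x))
  # log-scale version of n! / prod(k_i!), exponentiated at the end
  exp(lgamma(length(x) + 1) - sum(lgamma(duplicates + 1)))
}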
UPDATE: I have just realized that the question was about generating all unique permutations not just specifying the number of them - sorry for that!
You could improve the unique(permn(...)) part by generating the unique permutations for one less element and then adding the unique elements in front of them. Well, my explanation may fail, so let the source speak:
uperm <- function(x) {
u <- unique(x) # unique values of the vector
result <- x # let's start the result matrix with the vector
for (i in 1:length(u)) {
v <- x[-which(x==u[i])[1]] # leave out the first occurrence of value u[i]
result <- rbind(result, cbind(u[i], do.call(rbind, unique(permn(v)))))
}
return(result)
}
This way you can gain some speed. I was too lazy to run the code on the vector you provided (it takes so much time); here is a small comparison on a smaller vector:
> dat <- c(1,0,3,4,1,0,0)
> system.time(unique(permn(dat)))
user system elapsed
0.264 0.000 0.268
> system.time(uperm(dat))
user system elapsed
0.147 0.000 0.150
I think you could gain a lot more by rewriting this function to be recursive!
UPDATE (again): I have tried to make up a recursive function with my limited knowledge:
uperm <- function(x) {
u <- sort(unique(x))
l <- length(u)
if (l == length(x)) {
return(do.call(rbind,permn(x)))
}
if (l == 1) return(x)
result <- matrix(NA, upermn(x), length(x))
index <- 1
for (i in 1:l) {
v <- x[-which(x==u[i])[1]]
newindex <- upermn(v)
if (table(x)[i] == 1) {
result[index:(index+newindex-1),] <- cbind(u[i], do.call(rbind, unique(permn(v))))
} else {
result[index:(index+newindex-1),] <- cbind(u[i], uperm(v))
}
index <- index+newindex
}
return(result)
}
Which has a great gain:
> system.time(unique(permn(c(1,0,3,4,1,0,0,3,0))))
user system elapsed
22.808 0.103 23.241
> system.time(uperm(c(1,0,3,4,1,0,0,3,0)))
user system elapsed
4.613 0.003 4.645
Please report back if this would work for you!
One option that hasn't been mentioned here is the allPerm function from the multicool package. It can be used pretty easily to get all the unique permutations:
library(multicool)
perms <- allPerm(initMC(dat))
dim(perms)
# [1] 18900 10
head(perms)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
# [1,] 4 4 3 3 1 1 0 0 0 0
# [2,] 0 4 4 3 3 1 1 0 0 0
# [3,] 4 0 4 3 3 1 1 0 0 0
# [4,] 4 4 0 3 3 1 1 0 0 0
# [5,] 3 4 4 0 3 1 1 0 0 0
# [6,] 4 3 4 0 3 1 1 0 0 0
In benchmarking I found it to be faster on dat than the solutions from the OP and daroczig but slower than the solution from Aaron.
I don't actually know R, but here's how I'd approach the problem:
Find how many of each element type, i.e.
4 X 0
2 X 1
2 X 3
2 X 4
Sort by frequency (which the above already is).
Start with the most frequent value, which takes up 4 of the 10 spots. Determine the unique combinations of 4 values within the 10 available spots.
(0,1,2,3),(0,1,2,4),(0,1,2,5),(0,1,2,6)
... (0,1,2,9),(0,1,3,4),(0,1,3,5)
... (6,7,8,9)
Go to the second most frequent value; it takes up 2 of the 6 available spots, so determine its unique combinations of 2 of 6.
(0,1),(0,2),(0,3),(0,4),(0,5),(1,2),(1,3) ... (4,6),(5,6)
Then 2 of 4:
(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)
And the remaining values, 2 of 2:
(0,1)
Then you need to combine them into each possible combination. Here's some pseudocode (I'm convinced there's a more efficient algorithm for this, but this shouldn't be too bad):
lookup = (0,1,3,4)
For each of the above sets of combinations, example: input = ((0,2,4,6),(0,2),(2,3),(0,1))
newPermutation = (-1,-1,-1,-1,-1,-1,-1,-1,-1,-1)
for i = 0 to 3
index = 0
for j = 0 to 9
if newPermutation(j) = -1
if index = input(i)(j)
newPermutation(j) = lookup(i)
break
else
index = index + 1
Another option is the iterpc package; I believe it is the fastest of the existing methods. More importantly, the result is in dictionary order (which may be preferable).
dat <- c(1, 0, 3, 4, 1, 0, 0, 3, 0, 4)
library(iterpc)
getall(iterpc(table(dat), order=TRUE))
The benchmark indicates that iterpc is significantly faster than all the other methods described here:
library(multicool)
library(microbenchmark)
microbenchmark(uniqueperm2(dat),
allPerm(initMC(dat)),
getall(iterpc(table(dat), order=TRUE))
)
Unit: milliseconds
                                     expr         min          lq        mean      median          uq        max neval
                         uniqueperm2(dat)   23.011864    25.33241   40.141907   27.143952   64.147399   74.66312   100
                     allPerm(initMC(dat)) 1713.549069  1771.83972 1814.434743 1810.331342 1855.869670 1937.48088   100
 getall(iterpc(table(dat), order = TRUE))    4.332674     5.18348    7.656063    5.989448    6.705741   49.98038   100
As this question is old and continues to attract many views, this post is solely meant to inform R users of the current state of the language with regard to performing the popular task outlined by the OP. As @RandyLai alludes to, there are packages developed with this task in mind. They are: arrangements and RcppAlgos*.
Efficiency
They are very efficient and quite easy to use for generating permutations of a multiset.
dat <- c(1, 0, 3, 4, 1, 0, 0, 3, 0, 4)
dim(RcppAlgos::permuteGeneral(sort(unique(dat)), freqs = table(dat)))
[1] 18900 10
microbenchmark(algos = RcppAlgos::permuteGeneral(sort(unique(dat)), freqs = table(dat)),
arngmnt = arrangements::permutations(sort(unique(dat)), freq = table(dat)),
curaccptd = uniqueperm2(dat), unit = "relative")
Unit: relative
expr min lq mean median uq max neval
algos 1.000000 1.000000 1.0000000 1.000000 1.000000 1.0000000 100
arngmnt 1.501262 1.093072 0.8783185 1.089927 1.133112 0.3238829 100
curaccptd 19.847457 12.573657 10.2272080 11.705090 11.872955 3.9007364 100
With RcppAlgos we can utilize parallel processing for even better efficiency on larger examples.
hugeDat <- rep(dat, 2)[-(1:5)]
RcppAlgos::permuteCount(sort(unique(hugeDat)), freqs = table(hugeDat))
[1] 3603600
microbenchmark(algospar = RcppAlgos::permuteGeneral(sort(unique(hugeDat)),
freqs = table(hugeDat), nThreads = 4),
arngmnt = arrangements::permutations(sort(unique(hugeDat)), freq = table(hugeDat)),
curaccptd = uniqueperm2(hugeDat), unit = "relative", times = 10)
Unit: relative
expr min lq mean median uq max neval
algospar 1.00000 1.000000 1.000000 1.000000 1.00000 1.00000 10
arngmnt 3.23193 3.109092 2.427836 2.598058 2.15965 1.79889 10
curaccptd 49.46989 45.910901 34.533521 39.399481 28.87192 22.95247 10
Lexicographical Order
A nice benefit of these packages is that the output is in lexicographical order:
head(RcppAlgos::permuteGeneral(sort(unique(dat)), freqs = table(dat)))
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 0 0 0 1 1 3 3 4 4
[2,] 0 0 0 0 1 1 3 4 3 4
[3,] 0 0 0 0 1 1 3 4 4 3
[4,] 0 0 0 0 1 1 4 3 3 4
[5,] 0 0 0 0 1 1 4 3 4 3
[6,] 0 0 0 0 1 1 4 4 3 3
tail(RcppAlgos::permuteGeneral(sort(unique(dat)), freqs = table(dat)))
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[18895,] 4 4 3 3 0 1 1 0 0 0
[18896,] 4 4 3 3 1 0 0 0 0 1
[18897,] 4 4 3 3 1 0 0 0 1 0
[18898,] 4 4 3 3 1 0 0 1 0 0
[18899,] 4 4 3 3 1 0 1 0 0 0
[18900,] 4 4 3 3 1 1 0 0 0 0
identical(RcppAlgos::permuteGeneral(sort(unique(dat)), freqs = table(dat)),
arrangements::permutations(sort(unique(dat)), freq = table(dat)))
[1] TRUE
Iterators
Additionally, both packages offer iterators that allow for memory-efficient generation of permutations, one by one:
algosIter <- RcppAlgos::permuteIter(sort(unique(dat)), freqs = table(dat))
algosIter$nextIter()
[1] 0 0 0 0 1 1 3 3 4 4
algosIter$nextNIter(5)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 0 0 0 1 1 3 4 3 4
[2,] 0 0 0 0 1 1 3 4 4 3
[3,] 0 0 0 0 1 1 4 3 3 4
[4,] 0 0 0 0 1 1 4 3 4 3
[5,] 0 0 0 0 1 1 4 4 3 3
## last permutation
algosIter$back()
[1] 4 4 3 3 1 1 0 0 0 0
## use reverse iterator methods
algosIter$prevNIter(5)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 4 4 3 3 1 0 1 0 0 0
[2,] 4 4 3 3 1 0 0 1 0 0
[3,] 4 4 3 3 1 0 0 0 1 0
[4,] 4 4 3 3 1 0 0 0 0 1
[5,] 4 4 3 3 0 1 1 0 0 0
* I am the author of RcppAlgos
Another option is by using the Rcpp package. The difference is that it returns a list.
#include <Rcpp.h>
#include <algorithm>
#include <vector>
// [[Rcpp::export]]
std::vector< std::vector<int> > UniqueP(std::vector<int> v) {
    std::vector< std::vector<int> > out;
    std::sort(v.begin(), v.end());    // start from the sorted (lexicographically smallest) arrangement
    do {
        out.push_back(v);             // record the current permutation
    } while (std::next_permutation(v.begin(), v.end()));
    return out;
}
Unit: milliseconds
expr min lq mean median uq max neval cld
uniqueperm2(dat) 10.753426 13.5283 15.61438 13.751179 16.16061 34.03334 100 b
UniqueP(dat) 9.090222 9.6371 10.30185 9.838324 10.20819 24.50451 100 a
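For completeness, a usage sketch (the file name is hypothetical): save the function in a .cpp file and compile it with Rcpp::sourceCpp.
library(Rcpp)
sourceCpp("UniqueP.cpp") # hypothetical file holding the C++ function above
res <- UniqueP(dat)      # list of integer vectors, in lexicographic order
length(res)              # 18900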
