I have a big performance problem in R. I wrote a function that iterates over a data.frame object, adds a new column, and accumulates values in it (a simple operation). The data.frame has roughly 850K rows. My PC has been running for about 10 hours now and I have no idea how long it will take.
dayloop2 <- function(temp){
  for (i in 1:nrow(temp)){
    temp[i,10] <- i
    if (i > 1) {
      if ((temp[i,6] == temp[i-1,6]) & (temp[i,3] == temp[i-1,3])) {
        temp[i,10] <- temp[i,9] + temp[i-1,10]
      } else {
        temp[i,10] <- temp[i,9]
      }
    } else {
      temp[i,10] <- temp[i,9]
    }
  }
  names(temp)[names(temp) == "V10"] <- "Kumm."
  return(temp)
}
Any ideas how to speed up this operation?
The biggest problem and the root of the inefficiency is indexing the data.frame, i.e. all the lines where you use temp[,].
Try to avoid this as much as possible. I took your function, changed the indexing, and here is version_A:
dayloop2_A <- function(temp){
  res <- numeric(nrow(temp))
  for (i in 1:nrow(temp)){
    res[i] <- i
    if (i > 1) {
      if ((temp[i,6] == temp[i-1,6]) & (temp[i,3] == temp[i-1,3])) {
        res[i] <- temp[i,9] + res[i-1]
      } else {
        res[i] <- temp[i,9]
      }
    } else {
      res[i] <- temp[i,9]
    }
  }
  temp$`Kumm.` <- res
  return(temp)
}
As you can see, I create a vector res which gathers the results, and at the end I add it to the data.frame, so I don't need to mess with names.
So how much better is it?
I ran each function for a data.frame with nrow from 1,000 to 10,000 by 1,000 and measured the time with system.time:
# n is the number of rows (1,000, 2,000, ..., 10,000 in the benchmark)
X <- as.data.frame(matrix(sample(1:10, n*9, TRUE), n, 9))
system.time(dayloop2(X))
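A possible shape for the full benchmark loop described above (a sketch, not the original measurement code; timings will vary by machine):
# Sketch: time both versions for each n and collect the elapsed times
ns <- seq(1000, 10000, by = 1000)
timings <- sapply(ns, function(n) {
  X <- as.data.frame(matrix(sample(1:10, n*9, TRUE), n, 9))
  c(original  = system.time(dayloop2(X))["elapsed"],
    version_A = system.time(dayloop2_A(X))["elapsed"])
})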
The result (plot not reproduced here) is that the runtime of your version grows much faster than linearly with nrow(X), while the modified version scales linearly; a simple lm model predicts that for 850,000 rows the computation would take 6 minutes and 10 seconds.
Power of vectorization
As Shane and Calimo state in their answers, vectorization is the key to better performance.
From your code you could move outside of the loop:
the conditioning
the initialization of the results (which is temp[i,9])
This leads to the following code:
dayloop2_B <- function(temp){
  cond <- c(FALSE, (temp[-nrow(temp),6] == temp[-1,6]) & (temp[-nrow(temp),3] == temp[-1,3]))
  res <- temp[,9]
  for (i in 1:nrow(temp)) {
    if (cond[i]) res[i] <- temp[i,9] + res[i-1]
  }
  temp$`Kumm.` <- res
  return(temp)
}
Compare the results for these functions, this time for nrow from 10,000 to 100,000 by 10,000.
Tuning the tuned
Another tweak is to change the indexing in the loop from temp[i,9] to res[i] (which are exactly the same in the i-th iteration).
It's again the difference between indexing a vector and indexing a data.frame.
Second thing: when you look at the loop you can see that there is no need to loop over all i, but only over the ones that satisfy the condition.
So here we go
dayloop2_D <- function(temp){
  cond <- c(FALSE, (temp[-nrow(temp),6] == temp[-1,6]) & (temp[-nrow(temp),3] == temp[-1,3]))
  res <- temp[,9]
  for (i in (1:nrow(temp))[cond]) {
    res[i] <- res[i] + res[i-1]
  }
  temp$`Kumm.` <- res
  return(temp)
}
The performance you gain depends heavily on the data structure, precisely on the percentage of TRUE values in the condition.
For my simulated data the computation for 850,000 rows takes less than one second.
If you want, you can go further; I see at least two things which could be done:
write C code to do the conditional cumsum
if you know that the maximum run length in your data isn't large, then you can change the loop into a vectorized while, something like
n <- nrow(temp)  # cond and res as defined above
while (any(cond)) {
  indx <- c(FALSE, cond[-1] & !cond[-n])
  res[indx] <- res[indx] + res[which(indx)-1]
  cond[indx] <- FALSE
}
Code used for simulations and figures is available on GitHub.
General strategies for speeding up R code
First, figure out where the slow part really is. There's no need to optimize code that isn't running slowly. For small amounts of code, simply thinking through it can work. If that fails, Rprof() and similar profiling tools can be helpful.
Once you figure out the bottleneck, think about more efficient algorithms for doing what you want. Calculations should only be run once if possible, so:
Store the results and access them rather than repeatedly recalculating (see the sketch after this list)
Take loop-invariant calculations out of loops
Avoid calculations which aren't necessary (e.g. don't use regular expressions where fixed string searches will do)
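A minimal sketch of the first two points, using a hypothetical helper expensive_summary() and made-up data (not from the original answer):
expensive_summary <- function(x) { Sys.sleep(0.01); mean(x) }  # hypothetical stand-in for a costly computation
x <- rnorm(1e4)
# Slow: recomputes the same loop-invariant value on every iteration
out1 <- numeric(1e3)
for (i in 1:1e3) out1[i] <- i * expensive_summary(x)
# Better: compute it once outside the loop and reuse the stored result
s <- expensive_summary(x)
out2 <- numeric(1e3)
for (i in 1:1e3) out2[i] <- i * s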
Using more efficient functions can produce moderate or large speed gains. For instance, paste0 produces a small efficiency gain but .colSums() and its relatives produce somewhat more pronounced gains. mean is particularly slow.
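As a rough illustration (made-up data; timings are machine-dependent), compare a function that loops at the R level with its internal-C counterpart, and mean() with the bare arithmetic it wraps:
m <- matrix(rnorm(1e6), ncol = 100)
x <- rnorm(1e6)
system.time(apply(m, 2, sum))           # R-level loop over columns
system.time(colSums(m))                 # internal C loop, much faster
system.time(for (k in 1:200) mean(x))   # S3 dispatch plus extra checks on each call
system.time(for (k in 1:200) sum(x) / length(x))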
Then you can avoid some particularly common troubles:
cbind will slow you down really quickly.
Initialize your data structures, then fill them in, rather than expanding them each time.
Even with pre-allocation, you could switch to a pass-by-reference approach rather than a pass-by-value approach, but it may not be worth the hassle.
Take a look at the R Inferno for more pitfalls to avoid.
Try for better vectorization, which can often but not always help. In this regard, inherently vectorized commands like ifelse, diff, and the like will provide more improvement than the apply family of commands (which provide little to no speed boost over a well-written loop).
You can also try to provide more information to R functions. For instance, use vapply rather than sapply, and specify colClasses when reading in text-based data. Speed gains will be variable depending on how much guessing you eliminate.
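A quick sketch of both suggestions; the CSV file name and its column types below are hypothetical:
x <- replicate(1e5, rnorm(5), simplify = FALSE)
system.time(sapply(x, mean))                          # has to guess the result type
system.time(vapply(x, mean, FUN.VALUE = numeric(1)))  # result type declared up front
# dat <- read.csv("data.csv", colClasses = c("integer", "numeric", "character"))  # hypothetical file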
Next, consider optimized packages: The data.table package can produce massive speed gains where its use is possible, in data manipulation and in reading large amounts of data (fread).
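For example (the file and column names here are hypothetical):
# library(data.table)
# DT <- fread("big_file.csv")    # typically many times faster than read.csv
# DT[, sum(value), by = group]   # grouped aggregation without extra copying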
Next, try for speed gains through more efficient means of calling R:
Byte-compile your R code (the compiler package). Or use the Ra and jit packages in concert for just-in-time compilation (Dirk has an example in this presentation).
Make sure you're using an optimized BLAS. These provide across-the-board speed gains. Honestly, it's a shame that R doesn't automatically use the most efficient library on install. Hopefully Revolution R will contribute the work that they've done here back to the overall community.
Radford Neal has done a bunch of optimizations, some of which were adopted into R Core, and many others which were forked off into pqR.
And lastly, if all of the above still doesn't get you quite as fast as you need, you may need to move to a faster language for the slow code snippet. The combination of Rcpp and inline makes replacing only the slowest part of the algorithm with C++ code particularly easy. Here, for instance, is my first attempt at doing so, and it blows away even highly optimized R solutions.
If you're still left with troubles after all this, you just need more computing power. Look into parallelization (http://cran.r-project.org/web/views/HighPerformanceComputing.html) or even GPU-based solutions (gpu-tools).
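A minimal sketch with the base parallel package (mclapply() forks and is Unix-only; on Windows use parLapply() with a cluster; heavy() is a hypothetical stand-in for real work):
library(parallel)
heavy <- function(i) { Sys.sleep(0.01); i^2 }
res <- mclapply(1:100, heavy, mc.cores = 2)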
Links to other guidance
http://www.noamross.net/blog/2013/4/25/faster-talk.html
If you are using for loops, you are most likely coding R as if it were C or Java or something else. R code that is properly vectorised is extremely fast.
Take for example these simple bits of code to generate a vector of 100,000 integers in sequence:
The first code example shows how one would code a loop using a traditional coding paradigm. It takes 28 seconds to complete:
system.time({
  a <- NULL
  for (i in 1:1e5) a[i] <- i
})
   user  system elapsed
  28.36    0.07   28.61
You can get an almost 100-times improvement by the simple action of pre-allocating memory:
system.time({
  a <- rep(1, 1e5)
  for (i in 1:1e5) a[i] <- i
})
   user  system elapsed
   0.30    0.00    0.29
But using base R's vectorized colon operator :, this operation is virtually instantaneous:
system.time(a <- 1:1e5)
user system elapsed
0 0 0
This could be made much faster by skipping the loops entirely and using indexing or nested ifelse() statements.
idx <- 1:nrow(temp)
temp[,10] <- idx
idx1 <- c(FALSE, (temp[-nrow(temp),6] == temp[-1,6]) & (temp[-nrow(temp),3] == temp[-1,3]))
temp[idx1,10] <- temp[idx1,9] + temp[which(idx1)-1,10]
temp[!idx1,10] <- temp[!idx1,9]
temp[1,10] <- temp[1,9]
names(temp)[names(temp) == "V10"] <- "Kumm."
As Ari mentioned at the end of his answer, the Rcpp and inline packages make it incredibly easy to make things fast. As an example, try this inline code (warning: not tested):
body <- 'Rcpp::NumericMatrix nm(temp);
int nrtemp = Rcpp::as<int>(nrt);
for (int i = 0; i < nrtemp; ++i) {
  nm(i, 9) = i;
  if (i > 0) {
    if (nm(i, 5) == nm(i - 1, 5) && nm(i, 2) == nm(i - 1, 2)) {
      nm(i, 9) = nm(i, 8) + nm(i - 1, 9);
    } else {
      nm(i, 9) = nm(i, 8);
    }
  } else {
    nm(i, 9) = nm(i, 8);
  }
}
return Rcpp::wrap(nm);
'
library(inline)
library(Rcpp)
settings <- getPlugin("Rcpp")
# settings$env$PKG_CXXFLAGS <- paste("-I", getwd(), sep="")  # if you want to include files in the working directory
dayloop <- cxxfunction(signature(nrt="numeric", temp="numeric"), body=body,
                       plugin="Rcpp", settings=settings, cppargs="-I/usr/include")
dayloop2 <- function(temp) {
  # pass the row count and a numeric-matrix version of temp to the compiled loop
  # (temp is assumed to already contain the result column V10)
  nr <- nrow(temp)
  temp <- as.data.frame(dayloop(nr, as.matrix(temp)))
  names(temp)[names(temp) == "V10"] <- "Kumm."
  return(temp)
}
There's a similar procedure for #include-ing things, where you just pass a parameter
inc <- '#include <header.h>'
to cxxfunction, as include=inc. What's really cool about this is that it does all of the linking and compilation for you, so prototyping is really fast.
Disclaimer: I'm not totally sure that the class of temp in the signature should be "numeric" and not a numeric matrix or something else. But I'm mostly sure.
Edit: if you still need more speed after this, OpenMP is a parallelization facility good for C++. I haven't tried using it from inline, but it should work. The idea would be, in the case of n cores, to have loop iteration k carried out by core k % n. A suitable introduction is found in Matloff's The Art of R Programming, available here, in chapter 16, Resorting to C.
I dislike rewriting code... Of course ifelse and lapply are better options, but sometimes it is difficult to make them fit.
Frequently I use data.frames as one would use lists, i.e. df$var[i].
Here is a made-up example:
nrow=function(x){ ## required as I use nrow at times
  if(class(x)=='list') {
    length(x[[names(x)[1]]])
  } else {
    base::nrow(x)
  }
}
system.time({
  d=data.frame(seq=1:10000,r=rnorm(10000))
  d$foo=d$r
  d$seq=1:5
  mark=NA
  for(i in 1:nrow(d)){
    if(d$seq[i]==1) mark=d$r[i]
    d$foo[i]=mark
  }
})

system.time({
  d=data.frame(seq=1:10000,r=rnorm(10000))
  d$foo=d$r
  d$seq=1:5
  d=as.list(d) #become a list
  mark=NA
  for(i in 1:nrow(d)){
    if(d$seq[i]==1) mark=d$r[i]
    d$foo[i]=mark
  }
  d=as.data.frame(d) #revert back to data.frame
})
data.frame version:
user system elapsed
0.53 0.00 0.53
list version:
user system elapsed
0.04 0.00 0.03
It is 17 times faster to use a list of vectors than a data.frame.
Any comments on why internally data.frames are so slow in this regard? One would think they operate like lists...
For even faster code, do class(d)='list' instead of d=as.list(d), and class(d)='data.frame' to convert back.
system.time({
  d=data.frame(seq=1:10000,r=rnorm(10000))
  d$foo=d$r
  d$seq=1:5
  class(d)='list'
  mark=NA
  for(i in 1:nrow(d)){
    if(d$seq[i]==1) mark=d$r[i]
    d$foo[i]=mark
  }
  class(d)='data.frame'
})
head(d)
The answers here are great. One minor aspect not covered is that the question states "My PC is still working (about 10h now) and I have no idea about the runtime". I always put the following code into loops when developing, to get a feel for how changes affect the speed and to monitor how long it will take to complete.
dayloop2 <- function(temp){
  for (i in 1:nrow(temp)){
    cat(round(i/nrow(temp)*100,2),"% \r") # prints the percentage complete in realtime
    # do stuff
  }
  return(blah)
}
Works with lapply as well.
dayloop2 <- function(temp){
  temp <- lapply(1:nrow(temp), function(i) {
    cat(round(i/nrow(temp)*100,2),"% \r")
    # do stuff
  })
  return(temp)
}
If the function within the loop is quite fast but the number of loops is large then consider just printing every so often as printing to the console itself has an overhead. e.g.
dayloop2 <- function(temp){
  for (i in 1:nrow(temp)){
    if(i %% 100 == 0) cat(round(i/nrow(temp)*100,2),"% \r") # prints every 100 times through the loop
    # do stuff
  }
  return(temp)
}
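Base R's txtProgressBar() offers a similar facility with a little less console noise (a sketch, not part of the original answer):
dayloop2 <- function(temp){
  pb <- txtProgressBar(min = 0, max = nrow(temp), style = 3)
  for (i in 1:nrow(temp)){
    setTxtProgressBar(pb, i)
    # do stuff
  }
  close(pb)
  return(temp)
}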
In R, you can often speed up loop processing by using the apply family of functions (in your case, it would probably be replicate). Have a look at the plyr package, which provides progress bars.
Another option is to avoid loops altogether and replace them with vectorized arithmetics. I'm not sure exactly what you are doing, but you can probably apply your function to all rows at once:
temp[2:nrow(temp), 10] <- temp[2:nrow(temp), 9] + temp[1:(nrow(temp)-1), 10]
This will be much, much faster, and then you can restrict it to the rows that satisfy your condition:
cond <- c(FALSE, (temp[-nrow(temp), 6] == temp[-1, 6]) & (temp[-nrow(temp), 3] == temp[-1, 3]))
temp[!cond, 10] <- temp[!cond, 9]
Vectorized arithmetic requires more time spent thinking about the problem, but you can sometimes save several orders of magnitude in execution time.
Take a look at the accumulate() function from {purrr}:
library(dplyr)  # for %>%, as_tibble(), mutate(), lag() and select()

dayloop_accumulate <- function(temp) {
  temp %>%
    as_tibble() %>%
    mutate(cond = c(FALSE, (V6 == lag(V6) & V3 == lag(V3))[-1])) %>%
    mutate(V10 = V9 %>%
             purrr::accumulate2(.y = cond[-1], .f = function(.i_1, .i, .y) {
               if(.y) {
                 .i_1 + .i
               } else {
                 .i
               }
             }) %>% unlist()) %>%
    select(-cond)
}
Processing with data.table is a viable option:
n <- 1000000
df <- as.data.frame(matrix(sample(1:10, n*9, TRUE), n, 9))
colnames(df) <- paste("col", 1:9, sep = "")

library(data.table)
dayloop2.dt <- function(df) {
  dt <- data.table(df)
  dt[, Kumm. := {
    res <- .I;
    ifelse (res > 1,
      ifelse ((col6 == shift(col6, fill = 0)) & (col3 == shift(col3, fill = 0)),
        res <- col9 + shift(res)
        , # else
        res <- col9
      )
      , # else
      res <- col9
    )
  }
  ,]
  res <- data.frame(dt)
  return (res)
}

res <- dayloop2.dt(df)
res <- dayloop2.dt(df)
library(microbenchmark)
m <- microbenchmark(dayloop2.dt(df), times = 100)
#Unit: milliseconds
# expr min lq mean median uq max neval
#dayloop2.dt(df) 436.4467 441.02076 578.7126 503.9874 575.9534 966.1042 10
Even if you ignore the possible gains from filtering on the condition, it is very fast. Obviously, if you can do the calculation on a subset of the data, it helps.
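For this particular conditional cumulative sum there is also a shorter data.table idiom worth sketching (not part of the original answer): rows where col6 and col3 repeat form runs, so rleid() can define the groups and cumsum() can run within each one.
dayloop2.rleid <- function(df) {
  dt <- data.table(df)
  dt[, Kumm. := cumsum(col9), by = rleid(col6, col3)]
  as.data.frame(dt)
}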
Related
Link to problem: https://projecteuler.net/problem=14
So I've solved this problem using a fairly 'trivial' implementation of memoization in R. Basically, I just count up from 1 to 1,000,000 and record the number of Collatz applications it takes to get to 1. If I encounter a number less than the current iteration, I just add that number's 'chain' to the current sequence.
R Code:
collatz <- function(n) {
  if(n %% 2 == 0) return(n / 2)
  else return(3 * n + 1)
}

chains <- rep(0, 1e6)
for(i in 1:length(chains)) {
  n <- i
  iter <- 0
  while(n != 1) {
    n <- collatz(n)
    iter <- iter + 1
    if(n < i) {
      iter <- iter + chains[n]
      break
    }
  }
  chains[i] <- iter
}
which.max(chains)
Now this code runs relatively fast, even for R, but the more I think about this problem, the more interesting I find it.
It seems there's a ton of different ways to approach it for efficiency in terms of space and runtime. Maybe loop backwards? Maybe run through the odd numbers or even numbers first, and then do the other half? Maybe keep intermediate results and not just the terminal chain length when incrementing? There may also be some ideas that are more 'mathematical' in nature rather than directly related to dynamic programming. Has anybody given thought to this problem/algorithm and can provide some other potentially more efficient implementations?
You are doing your memoizing in strict 1:1000000 order. There is no reason, however, to not memoize the first time you see a value. For example, starting with 3 gives the sequence 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1. You can memoize 10, 5, 16, 8, 4 in addition to just memoizing 3.
This will reduce the number of operations, perhaps substantially. Continuing the example above, memoizing 4 the first time you saw it saved the 2 steps it would have required to memoize it later, and memoizing 5 saved another 3 steps. It seems like these saved steps should snowball quite quickly.
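A possible R sketch of this idea, reusing collatz() and chains from above (an illustration, not the original poster's code):
N <- length(chains)
chains[] <- 0            # reset so the table can be refilled with full-path memoization
for (i in 2:N) {
  if (chains[i] > 0) next                       # already known from an earlier chain
  path <- integer(0)
  n <- i
  while (n != 1 && !(n <= N && chains[n] > 0)) {
    path <- c(path, n)                          # remember every value passed through
    n <- collatz(n)
  }
  tail_len <- if (n == 1) 0 else chains[n]
  steps <- tail_len + rev(seq_along(path))      # steps-to-1 for each value on the path
  keep <- path <= N                             # only store values that fit in the table
  chains[path[keep]] <- steps[keep]
}
which.max(chains)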
I recently read a post about for loops over a range of integers being slower than the corresponding while loops, which is true, but wanted to see if the same held up for iterating over existing sequences and was surprised to find the complete opposite by a large margin.
First and foremost, I'm using the following function for timing:
def time[A](f: => A) = {
  val s = System.nanoTime
  val ret = f
  println("time: " + (System.nanoTime - s) / 1e6 + "ms")
  ret
}
and I'm using a simple sequence of Integers:
val seq = List.range(0, 10000)
(I also tried creating this sequence a few other ways in case the way this sequence was accessed affected the run time. Using the Range type certainly did. This should ensure that each item in the sequence is an independent object.)
I ran the following:
time {
  for(item <- seq) {
    println(item)
  }
}
and
time {
  var i = 0
  while(i < seq.size) {
    println(seq(i))
    i += 1
  }
}
I printed the results to ensure that we're actually accessing the values in both loops. The first code snippet runs in an average of 33 ms on my machine. The second takes an average of 305 ms.
I tried adding the mutable variable i to the for loop, but it only adds a few milliseconds. The map function gets similar performance to a for loop, as expected. For whatever reason, this doesn't seem to occur if I use an array (converting the above defined seq with seq.toArray). In such a case, the for loop takes 90 ms and the while loop takes 40 ms.
What is the reason for this major performance difference?
The reason is: complexity. seq(i) is Θ(i) for List, which means your whole loop takes quadratic time. The foreach method, however, is linear.
If you compile with -optimize, the for loop version will likely be even faster, because List.foreach should be inlined, eliminating the cost of the lambda.
I know that loops are slow in R and that I should try to do things in a vectorised manner instead.
But, why? Why are loops slow and apply is fast? apply calls several sub-functions -- that doesn't seem fast.
Update: I'm sorry, the question was ill-posed. I was confusing vectorisation with apply. My question should have been,
"Why is vectorisation faster?"
It's not always the case that loops are slow and apply is fast. There's a nice discussion of this in the May, 2008, issue of R News:
Uwe Ligges and John Fox. R Help Desk: How can I avoid this loop or
make it faster? R News, 8(1):46-50, May 2008.
In the section "Loops!" (starting on pg 48), they say:
Many comments about R state that using loops is a particularly bad idea. This is not necessarily true. In certain cases, it is difficult to write vectorized code, or vectorized code may consume a huge amount of memory.
They further suggest:
Initialize new objects to full length before the loop, rather
than increasing their size within the loop. Do not do things in a
loop that can be done outside the loop. Do not avoid loops simply
for the sake of avoiding loops.
They have a simple example where a for loop takes 1.3 sec but apply runs out of memory.
Loops in R are slow for the same reason any interpreted language is slow: every
operation carries around a lot of extra baggage.
Look at R_execClosure in eval.c (this is the function called to call a
user-defined function). It's nearly 100 lines long and performs all sorts of
operations -- creating an environment for execution, assigning arguments into
the environment, etc.
Think how much less happens when you call a function in C (push args on to
stack, jump, pop args).
So that is why you get timings like these (as joran pointed out in the comment,
it's not actually apply that's being fast; it's the internal C loop in mean
that's being fast. apply is just regular old R code):
A = matrix(as.numeric(1:100000))
Using a loop: 0.342 seconds:
system.time({
  Sum = 0
  for (i in seq_along(A)) {
    Sum = Sum + A[[i]]
  }
  Sum
})
Using sum: unmeasurably small:
sum(A)
It's a little disconcerting because, asymptotically, the loop is just as good
as sum; there's no practical reason it should be slow; it's just doing more
extra work each iteration.
So consider:
# 0.370 seconds
system.time({
  I = 0
  while (I < 100000) {
    10
    I = I + 1
  }
})

# 0.743 seconds -- double the time just adding parentheses
system.time({
  I = 0
  while (I < 100000) {
    ((((((((((10))))))))))
    I = I + 1
  }
})
(That example was discovered by Radford Neal)
Because ( in R is an operator, and actually requires a name lookup every time you use it:
> `(` = function(x) 2
> (3)
[1] 2
Or, in general, interpreted operations (in any language) have more steps. Of course, those steps provide benefits as well: you couldn't do that ( trick in C.
The only answer to the question posed is: loops are not slow if what you need to do is iterate over a set of data performing some function, and that function or the operation is not vectorized. A for() loop will, in general, be as quick as apply(), but possibly a little bit slower than an lapply() call. The last point is well covered on SO, for example in this Answer, and applies if the code involved in setting up and operating the loop is a significant part of the overall computational burden of the loop.
Why many people think for() loops are slow is because they, the users, are writing bad code. In general (though there are several exceptions), if you need to expand/grow an object, that too will involve copying, so you have both the overhead of copying and of growing the object. This is not just restricted to loops, but if you copy/grow at each iteration of a loop, of course the loop is going to be slow because you are incurring many copy/grow operations.
The general idiom for using for() loops in R is that you allocate the storage you require before the loop starts, and then fill in the object thus allocated. If you follow that idiom, loops will not be slow. This is what apply() manages for you, but it is just hidden from view.
Of course, if a vectorised function exists for the operation you are implementing with the for() loop, don't do that. Likewise, don't use apply() etc if a vectorised function exists (e.g. apply(foo, 2, mean) is better performed via colMeans(foo)).
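A quick illustration of the last point (made-up data; timings are machine-dependent):
foo <- matrix(rnorm(1e6), ncol = 100)
system.time(apply(foo, 2, mean))   # loops over the columns at the R level
system.time(colMeans(foo))         # a single internal C routine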
Just as a comparison (don't read too much into it!): I ran a (very) simple for loop in R and in JavaScript in Chrome and IE 8.
Note that Chrome does compilation to native code, and R with the compiler package compiles to bytecode.
# In R 2.13.1, this took 500 ms
f <- function() { sum<-0.5; for(i in 1:1000000) sum<-sum+i; sum }
system.time( f() )
# And the compiled version took 130 ms
library(compiler)
g <- cmpfun(f)
system.time( g() )
#Gavin Simpson: Btw, it took 1162 ms in S-Plus...
And the "same" code as JavaScript:
// In IE8, this took 282 ms
// In Chrome 14.0, this took 4 ms
function f() {
  var sum = 0.5;
  for(i=1; i<=1000000; ++i) sum = sum + i;
  return sum;
}
var start = new Date().getTime();
f();
time = new Date().getTime() - start;
I have a list of dataframes for which I am certain that they all contain at least one row (in fact, some contain only one row, and others contain a given number of rows), and that they all have the same columns (names and types). In case it matters, I am also certain that there are no NA's anywhere in the rows.
The situation can be simulated like this:
# create one row
onerowdfr <- do.call(data.frame, c(list(), rnorm(100),
                     lapply(sample(letters[1:2], 100, replace=TRUE),
                            function(x){factor(x, levels=letters[1:2])})))
colnames(onerowdfr) <- c(paste("cnt", 1:100, sep=""), paste("cat", 1:100, sep=""))
# reuse it in a list
someParts <- lapply(rbinom(200, 1, 14/200)*6+1,
                    function(reps){onerowdfr[rep(1, reps),]})
I've set the parameters (of the randomization) so that they approximate my true situation.
Now, I want to unite all these dataframes in one dataframe. I thought using rbind would do the trick, like this:
system.time(
result<-do.call(rbind, someParts)
)
Now, on my system (which is not particularly slow), and with the settings above, this is the output of system.time:
user system elapsed
5.61 0.00 5.62
Nearly 6 seconds for rbind-ing 254 (in my case) rows of 200 variables? Surely there has to be a way to improve the performance here? In my code, I have to do similar things very often (it is a form of multiple imputation), so I need this to be as fast as possible.
Can you build your matrices with numeric variables only and convert to a factor at the end? rbind is a lot faster on numeric matrices.
On my system, using data frames:
> system.time(result<-do.call(rbind, someParts))
user system elapsed
2.628 0.000 2.636
Building the list with all numeric matrices instead:
onerowdfr2 <- matrix(as.numeric(onerowdfr), nrow=1)
someParts2<-lapply(rbinom(200, 1, 14/200)*6+1,
function(reps){onerowdfr2[rep(1, reps),]})
results in a lot faster rbind.
> system.time(result2<-do.call(rbind, someParts2))
user system elapsed
0.001 0.000 0.001
EDIT: Here's another possibility; it just combines each column in turn.
> system.time({
+ n <- 1:ncol(someParts[[1]])
+ names(n) <- names(someParts[[1]])
+ result <- as.data.frame(lapply(n, function(i)
+ unlist(lapply(someParts, `[[`, i))))
+ })
user system elapsed
0.810 0.000 0.813
Still not nearly as fast as using matrices though.
EDIT 2:
If you only have numerics and factors, it's not that hard to convert everything to numeric, rbind them, and convert the necessary columns back to factors. This assumes all factors have exactly the same levels. Converting to a factor from an integer is also faster than from a numeric so I force to integer first.
someParts2 <- lapply(someParts, function(x)
  matrix(unlist(x), ncol=ncol(x)))
result <- as.data.frame(do.call(rbind, someParts2))
a <- someParts[[1]]
f <- which(sapply(a, class)=="factor")
for(i in f) {
  lev <- levels(a[[i]])
  result[[i]] <- factor(as.integer(result[[i]]), levels=seq_along(lev), labels=lev)
}
The timing on my system is:
user system elapsed
0.090 0.00 0.091
Not a huge boost, but swapping rbind for rbind.fill from the plyr package knocks about 10% off the running time (with the sample dataset, on my machine).
If you really want to manipulate your data.frames faster, I would suggest to use the package data.table and the function rbindlist(). I did not perform extensive tests but for my dataset (3000 dataframes, 1000 rows x 40 columns each) rbindlist() takes only 20 seconds.
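A sketch of that suggestion using the someParts list from the question (converting back to data.frame is optional):
library(data.table)
system.time(result_dt <- rbindlist(someParts))
# result_df <- as.data.frame(result_dt)  # if a plain data.frame is required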
This is ~25% faster, but there has to be a better way...
system.time({
  N <- do.call(sum, lapply(someParts, nrow))
  SP <- as.data.frame(lapply(someParts[[1]], function(x) rep(x,N)))
  k <- 0
  for(i in 1:length(someParts)) {
    j <- k+1
    k <- k + nrow(someParts[[i]])
    SP[j:k,] <- someParts[[i]]
  }
})
Make sure you're binding data.frame to data.frame; I ran into a huge performance degradation when binding a list to a data.frame.
I'm looking for an algorithm to compute pow() that's tail-recursive and uses memoization to speed up repeated calculations.
Performance isn't an issue; this is mostly an intellectual exercise - I spent a train ride coming up with all the different pow() implementations I could, but was unable to come up with one that I was happy with that had these two properties.
My best shot was the following:
def calc_tailrec_mem(base, exp, cache_line={}, acc=1, ctr=0):
    if exp == 0:
        return 1
    elif exp == 1:
        return acc * base
    elif exp in cache_line:
        val = acc * cache_line[exp]
        cache_line[exp + ctr] = val
        return val
    else:
        cache_line[ctr] = acc
        return calc_tailrec_mem(base, exp-1, cache_line, acc * base, ctr + 1)
It works, but it doesn't memoize the results of all calculations - only those with exponents 1..exp/2 and exp.
You'll get better performance if you use the successive squaring technique described in SICP section 1.2.4 Exponentiation. It doesn't use memoization, but the general approach is O(log n) instead of O(n), so you should still see an improvement.
I talk about the solution to the iterative process from exercise 1.16 here.
I don't think you're recording the correct thing in your cache, the mapping changed when you call it with different arguments.
I think you need to have a cache of (base,exp) -> pow(base,exp).
I understand what ctr is for, and why only half of what you expect is recorded.
Consider calc_tailrec_mem(2,4): First level, pow(2,1) is recorded as 2, the next level = calc_tailrec_mem(2,3,...), and pow(2,2) is recorded. The next level is calc_tailrec_mem(2,2,...), but that is already saved in the cache, so the recursion stops.
The function is very confusing because it's caching something completely different from what it's supposed to be calculating, due to the accumulator and ctr.
This is way too late, but anyone out there looking for the answer, here it is:
#include <map>

int powMem(int base, int exp){
    // initialized once and for all (note: keyed by exp only, so it assumes a single base)
    static std::map<int,int> memo;
    // base cases to stop the recursion
    if(exp == 0) return 1;
    if(exp == 1) return base;
    // check if the value was already calculated before; if yes, just return it
    if(memo.find(exp) != memo.end())
        return memo[exp];
    // else compute it from the half power (times base when exp is odd) and store it in memo for further use
    int x = powMem(base, exp/2);
    memo[exp] = (exp % 2 == 0) ? x*x : x*x*base;
    // return the answer
    return memo[exp];
}
This uses memo, a map, to store the already calculated values.