MRTG CPU and Memory together - snmp

So, I have an Adtran router and I'd like to monitor both CPU and memory utilization in a single graph. Unfortunately, Adtran doesn't offer a percentage gauge for memory utilization the way it does for CPU utilization. It does offer two OIDs: one that gives you the free memory in bytes and the other that gives you the total memory in bytes.
I would like to create a cpu_memory target in my MRTG configuration that does the necessary math but I can't see a way to do it. Ideally it would work something like this:
# CPU Utilization OID: .1.3.6.1.4.1.664.5.53.1.4.1.0
# Total Memory OID: .1.3.6.1.4.1.664.5.53.1.4.7.0 (adGenAOSHeapSize)
# Free Memory OID: .1.3.6.1.4.1.664.5.53.1.4.8.0 (adGenAOSHeapFree)
Target[rtr-cpu_mem]: .1.3.6.1.4.1.664.5.53.1.4.1.0&( 100 - ( .1.3.6.1.4.1.664.5.53.1.4.8.0 / .1.3.6.1.4.1.664.5.53.1.4.7.0 ) ):public@router.local
# ... rest of config
Is this even possible? Or, will I have to have a separate graph for the memory?

This is not really possible to do in a single native Target, since any calculation you append applies to both returned values. While you can use PseudoZero and PseudoOne to get around this in part, it is awkward to manage this way.
I would advise that you have one Target for CPU, and a separate Target for the Memory calculation, which makes it much simpler. You can then use the 'dorelpercent' option on the Memory Target and have it fetch the used and total memory as its two values.
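For example, an untested sketch of that two-Target layout using the OIDs from the question (the memory Target computes used = total - free with target arithmetic, and dorelpercent adds a used/total percentage curve):
# CPU utilization is already a percentage gauge
Target[rtr-cpu]: .1.3.6.1.4.1.664.5.53.1.4.1.0&.1.3.6.1.4.1.664.5.53.1.4.1.0:public@router.local
Options[rtr-cpu]: gauge
# Memory: In = used (total - free), Out = total
Target[rtr-mem]: ( .1.3.6.1.4.1.664.5.53.1.4.7.0&.1.3.6.1.4.1.664.5.53.1.4.7.0:public@router.local - .1.3.6.1.4.1.664.5.53.1.4.8.0&PseudoZero:public@router.local )
Options[rtr-mem]: gauge, dorelpercent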
However, if you really, really, have to have them in the same target, there is an awkward way to kludge it -- custom data conversion functions.
You can define a custom Perl function to multiply the second item by 100 if it is less than 1, and store it in a file 'conversion.pl':
sub topercent {
    my $value = shift;
    # scale a fractional ratio (0..1) up to a percentage
    return ($1 * 100) if ( $value =~ /([01]\.\d*)/ and ($1 <= 1) );
    return $value;
}
Then, define your Target like this (replace cpuoid, totalmemoid and freememoid appropriately):
ConversionCode: /path/to/conversion.pl
Target[cpumem]: ( cpuoid&totalmemoid:comm@rtr - pseudoZero&freememoid:comm@rtr ) / ( pseudoOne&totalmemoid:comm@rtr ) |topercent
This results in In=cpupercent, Out=memusedpercent
I wouldn't advise doing it this way, though; best to stick to separate Targets for Memory and CPU. You can always combine these two targets into a single graph for display if you're using MRTG/RRD with Routers2 anyway.
Another alternative is to write a custom collection script that does the retrieval and processing, and define it like this:
Target[cpumem]: `myscript.sh community router`
and make myscript.sh output four lines: CPU percent, memory percent, and two blank lines (the uptime and name fields MRTG expects, which can be left blank).
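A minimal sketch of such a script, assuming net-snmp's snmpget is available (the integer rounding is illustrative):
#!/bin/sh
# usage: myscript.sh <community> <router>
COMMUNITY="$1"; HOST="$2"
CPU=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$HOST" .1.3.6.1.4.1.664.5.53.1.4.1.0)
TOTAL=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$HOST" .1.3.6.1.4.1.664.5.53.1.4.7.0)
FREE=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$HOST" .1.3.6.1.4.1.664.5.53.1.4.8.0)
MEM=$(( 100 - ( 100 * FREE / TOTAL ) ))
echo "$CPU"    # first value: CPU percent
echo "$MEM"    # second value: memory used percent
echo ""        # uptime (optional)
echo ""        # target name (optional)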

You can do separate computations on each value using PseudoOne and PseudoZero, e.g.:
( PseudoZero&PseudoOne:community@host * 100 - memUsed&cpuIdle:community@host )
* ( PseudoOne&PseudoZero:community@host * 99 + PseudoOne&PseudoOne:community@host )
* ( PseudoZero&PseudoOne:community@host - PseudoOne&PseudoZero:community@host )
/ memTotal&PseudoOne:community@host
Since the arithmetic is applied element-wise to each target's (In, Out) pair, the Pseudo targets act as 0/1 masks selecting which side each term affects. This computes the following:
memPercent = 100 * memUsed / memTotal
cpuPercent = 100 - cpuIdle

Related

SAS: What's the optimal way to find the sum of a column by another column?

I want to find out the best way to perform a group-by in SAS so I can perform some benchmarks. The simplest two ways I can think of are PROC SQL and PROC MEANS. Here is the example in PROC SQL:
proc sql noprint; /* took 6 mins */
create table summ as select
id,
sum(val)
from
randint
group by
id
;
quit;
I think there are ways to make this run faster:
use sasfile command to load the data into memory first
create an index on id
Are there any other options I can use? Any SAS options I should turn on to make this run as fast as possible? I am not tied to proc sql nor proc means, so if there are faster ways then I would love to know about it!!!
My setup code is as below:
options macrogen;
options obs=max sortsize=max source2 FULLSTIMER;
options minoperator SASTRACE=',,,d' SASTRACELOC=SASLOG;
options compress = binary NOSTSUFFIX;
options noxwait noxsync;
options LRECL=32767;
proc fcmp outlib=work.myfunc.sample;
function RandBetween(min, max);
return (min + floor((1 + max - min) * rand("uniform")));
endsub;
run;
options cmplib=work.myfunc;
data RandInt;
do i = 1 to 250000000;
id = RandBetween(1, 2500000);
val = rand("uniform");
output;
end;
drop i;
run;
My SAS comparison macros are as below:
%macro sasbench(dosql = N); %macro _; %mend;
%if &dosql. = Y %then %do;
proc sql noprint; /* took 6 mins */
create table summ as select
id,
sum(val)
from
randint
group by
id
;
quit;
%end;
proc means data=randint sum noprint;
var val ;
class id;
output out = summmeans(drop=_type_ _freq_) sum = /autoname;
run;
%mend;
%sasbench();
/**/
/*sasfile randint load;*/
/*%sasbench();*/
/*sasfile randint close;*/
proc datasets lib=work;
modify randint;
INDEX CREATE id / nomiss;
run;
%sasbench();
sasfile is only a benefit if the entire data set can fit into the session's RAM limits and if the data set is going to be used more than once. I suppose this would make sense if your benchmark includes multiple runs / different techniques on the same sasfile.
An index on id would help if the data were unsorted by id. When the data set is presorted by id, the id column metadata will have its sortedby flag set, which a procedure can use for its own internal optimization; however, there is no guarantee. As for indexes, use option msglevel=i to get informational messages in the log about index selection during processing.
The fastest way is direct addressing, but it requires enough RAM to handle the largest id value as an array index:
array ids(2500000) _temporary_;   /* sized to the largest possible id, not the row count */
ids(id) + val;
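A rough sketch of what that DATA step might look like against the randint data from the question (the output dataset and variable names here are made up):
data summ_direct(keep=id val_sum);
  array ids(2500000) _temporary_;               /* one slot per possible id */
  do until (done);
    set randint end=done;
    ids(id) = sum(ids(id), val);                /* accumulate by direct addressing */
  end;
  do id = 1 to dim(ids);
    if not missing(ids(id)) then do;            /* skip ids that never occurred */
      val_sum = ids(id);
      output;
    end;
  end;
  stop;
run;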
The next fastest way is probably hand coded array based hashing:
search SAS conference proceedings for papers by Paul Dorfman
The next fastest hash way is probably the hash component object with key suminc.
The DATA step below (edited to align with the comments) demonstrates the hash object approach:
data demo_data;
do rownum = 1 to 1000;
id = ceil(100*ranuni(123)); * NOTE: 100 different groups, disordered;
value = ceil(1000*ranuni(123)); * NOTE: want to sum value over group, for demonstration individual values integers from 1..1000;
output;
end;
run;
data _null_;
if 0 then set demo_data(keep=id value); %* prep pdv ;
length total 8; %* prep keysum variable ;
call missing (total); %* prevent warnings ;
declare hash ids (ordered:'a', suminc:'value', keysum:'total'); %* ordered ensures keys will be sorted ascending upon output ;
ids.defineKey('id');
*ids.defineData('id'); %* omitting defineData implicitly uses only the keys as data -- only the data and keysum variables are written by .output() ;
ids.defineDone();
* read all records and touch each hash key in order to perform tacit total+value summation;
do until (end);
set demo_data end=end;
if ids.find() ne 0 then ids.add();
end;
ids.output(dataset:'sum_value_over_id'); * save the summation of each key combination;
stop;
run;
Note: There can be only one keysum variable.
If the suminc variable was set to be always 1 instead of value, then the keysum would be the count instead of the total.
Obtaining both sum and count over group via hash would require an explicit defineData for a count and sum variable and slightly different statements, such as:
declare hash ids (ordered:'a');
...
ids.defineData('id', 'count', 'total');
...
if ids.find() ne 0 then do; count=0; total=0; end;
count+1;
total+value;
ids.replace();
...
However, if value is known to always be a natural number, and the group size is known to be < 10^limit, you could numerically encode the count by using a suminc variable equal to value + 10^-limit and numerically decode the count by processing the output data with count = (total - int(total)) * 10^limit.
For sorted data the fastest way is most likely a DOW loop with accumulation.
proc sort data=foo;
by id;
data sum_value_over_id_v2(keep=id total);
do until (last.id);
set foo;
by id;
total = sum(total, value);
end;
run;
You will likely find that I/O is the largest component of performance.
The best answer varies dramatically by the application. In your example, PROC SQL at least on my machine significantly outperforms PROC MEANS, but there are plenty of cases where it will not do so. It's able to in this case because it's building hash tables behind the scenes, more than likely, which are quite fast - a single pass through the data is all that's needed.
You certainly could speed things up by putting your full dataset into memory with SASFILE, if you have room to store the whole thing. You would have to have it in memory to begin with, though, more than likely; just reading it into memory for this purpose alone wouldn't really help since you're doing that read anyway.
As Richard notes, there are a bunch of ways to do this. I think PROC SQL will often be the fastest or similar to the fastest in simple cases, both because it's multithreaded (as opposed to data step being single threaded) and because it's got a fast hash table backend.
PROC MEANS is also usually going to be competitive; the case you show in the example is almost a worst case for it since it's got a huge number of class variable levels, so I think it may be creating a temporary table on disk. It's also multithreaded. Reduce the class variable levels to 2,500 instead of 2,500,000 and PROC MEANS comes out a bit faster than PROC SQL (but within the margin of error).
Data step accumulation, either in a hash table or a DoW loop, will sometimes outperform both of the above, and sometimes not, again depending on the data. Here it does outperform slightly. The code for data step accumulation tends to be a bit more complex, which is why I'd usually discourage it unless the savings are substantial (having more code to maintain is worse, typically). PROC MEANS and PROC SQL require less maintenance and less to understand. But in applications where performance is critical and these solutions happen to be superior, it may be worth it to go this route, especially if a data step is needed anyway. Of course, the hash table method is limited to fitting the results in memory, though usually that's manageable.
Ultimately, I would encourage you to use whatever method is easiest to maintain but still gives sufficient performance; and when possible try to be self consistent with other code. If most of your code is in SQL, that is probably fine. SASFILE and indexes probably won't be needed, unless you're doing more complicated things than you present above. Summation is actually more work than I/O in many cases. Don't overcomplicate it, ultimately: programmer hours and difficulty of QA is something that should trump basic performance, unless you're talking several hours' difference. And if you are, then just run tests on your actual use case and see what works best.
If you assume the data is sorted, then this is another solution:
data sum_value_over_id_v2(keep=id total);
set a.randint(keep=id val);
by id;
total + val;
if last.id then do;
output;
total = 0;
end;
drop val;
run;

Julia: Parallel code slower than Sequential code, are there alternatives to remotecall()?

When I run a simple function in parallel (with different inputs per worker), the time the parallelized version of the code takes is consistently longer than the sequential version.
The simple function is
addprocs(2)
@everywhere function simple(i,Q,vt_0,ct_0)
a=sum((rand(1:i),vt_0,ct_0))*2
return a
end
The Parallelized version of my code is
#In Parallel
tic()
N=10000;
v=zeros(N);
c=zeros(N);
vt_0=0;
ct_0=0;
for i=1:N
v[i]=fetch(remotecall(simple,2,i,1,vt_0,ct_0))
c[i]=fetch(remotecall(simple,3,i,2,vt_0,ct_0))
vt_0=v[i]
ct_0=c[i]
end
toc()
While the sequential code would be just calling the function in each iteration of the loop (no remotecall() nor fetch()), I am thinking the way in which I call the workers in parallel in Julia with remotecall() is the root of the problem, but I am not sure.
Can anybody help me figure out why this parallel code is so slow compared to just calling the function? Or is this simple calculation not worth parallelizing?
EDIT:
Here is the Sequential code:
#Sequential
tic()
N=10000;
v=zeros(N,2);
c=zeros(N,2);
vt_0=0;
ct_0=0;
for i=1:N
v[i]=simple(i,1,vt_0,ct_0)
c[i]=simple(i,2,vt_0,ct_0)
vt_0=v[i]
ct_0=c[i]
end
toc()
Even for the case of the very-"shallow"-computing inside the simple(), Julia documentation recommends to rather use remotecall_fetch() instead of fetch( remotecall( ... ) ):
The function remotecall_fetch() exists for this purpose. It is equivalent to fetch(remotecall(...)) but is more efficient.
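For instance, the timed loop could be rewritten as below, keeping the argument order used in the question (function first, then worker id; older Julia versions put the worker id first):
tic()
for i = 1:N
    v[i] = remotecall_fetch(simple, 2, i, 1, vt_0, ct_0)   # one round-trip instead of remotecall() + fetch()
    c[i] = remotecall_fetch(simple, 3, i, 2, vt_0, ct_0)
    vt_0 = v[i]
    ct_0 = c[i]
end
toc()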
The [PARALLEL]-process @spawn / fetch() overheads
This is a very often overlooked topic, so it is great to actively recognise it and try to reason about it. Actually this is a common root-cause of adverse impact on the final performance of [PARALLEL]-process scheduling. If indeed interested in details, getting both the mathematical model + an interactive UI-tool to simulate / evaluate the net-effects of the overhead-strict speedup formula on [PARALLEL] code-executions for as much as 2 .. 8000+ CPU-cores, feel free to read more on this here
The main "Suspect" here are the value-inter-process-dependencies:
One might be hidden inside the system function rand(). In case a cryptographically strong implementation is used, each, indeed EACH, call to rand() must update the central source-of-randomness state. That means that, for this particular reason, all, indeed ALL, the spawned processes have to establish and maintain a queue to this central shared service. That queue is not present at all in a simple [SEQ]-code-execution ( where it can cause zero source-of-randomness state-update troubles ), but it requires some additional hidden overheads ( MUTEX / LOCKS / value-update atomics et al ) in [PAR]-code-execution units that actually "share" this central resource ( hidden to be strictly transactional, with a concurrency-capacity of 1, operating a 1:N-blocking-logic signalling, with no rollback ... ) and must execute and enforce an atomically safe update of the source-of-randomness state before any next call to the source-of-randomness service may get served.
The other is the {prior|next}-loop-step dependency, all the more so since it is mutually crossed between the pair of calls to the simple()-process instances. As illustrated below, this mutual inter-dependence actually forces all the potentially spawned calls to be arranged as a pure [SEQ]-process-schedule, and the "remaining" sum() is indeed not much meat to chew in a [PARALLEL]-process schedule.
ACTUAL MUTUAL INTER-PROCESS DEPENDENCY ILLUSTRATION DEPICTS THE IMPOSSIBILITY TO AT LEAST INJECT A LOOP SO AS TO MAKE [PARALLEL]-PROCESSING MORE EFFICIENT:
// +-------------<---- a way to "inject" a for-loop
// | +----------<---- NOT CONSUMED AT ALL
// | |
// | | +--******* BUT ********* DEPENDENCY
// | | |
// | | v
// | | +-------<-->- v[]-{prior|next}-step DEPENDENCY
// | | | +-<-->- c[]-{prior|next}-step DEPENDENCY
// | | | |
// function simple( i, Q, vt_0, ct_0 )
@everywhere function simple( N, Q, vt_0, ct_0, aVEC )
for i = 1:N
aVEC[i] = sum( ( rand(1:i), vt_0, ct_0 ) )*2
// a = sum( ( rand(1:i), vt_0, ct_0 ) )*2
// ret a
end
While adding CSP-channels with explicit inter-process communication via { put!() | take!() } Channel-methods may solve this dependency being communicated, guess what, the coroutines-scheduling will only add additional overheads, so expect to pay even more to get less.
A minor note on raw-profiling:
In all cases, it is advisable to place a tic() .. toc()-bracket tight to the code-section under test, and avoid and exclude any and all memory-allocations and similar extremely long and noisy parts from the actual measured-execution:
// ------------------------------------------------------------<SuT>-START
tic()
for i=1:N
v[i]=fetch(remotecall(simple,2,i,1,vt_0,ct_0))
c[i]=fetch(remotecall(simple,3,i,2,vt_0,ct_0))
vt_0=v[i]
ct_0=c[i]
end
toc()
// ------------------------------------------------------------<SuT>-FINISH

Operating in parallel on a large constant datastructure in Julia

I have a large vector of vectors of strings:
There are around 50,000 vectors of strings,
each of which contains 2-15 strings of length 1-20 characters.
MyScoringOperation is a function which operates on a vector of strings (the datum) and returns an array of 10100 scores (as Float64s). It takes about 0.01 seconds to run MyScoringOperation (depending on the length of the datum)
function MyScoringOperation(state::State, datum::Vector{String})
...
score::Vector{Float64} #Size of score = 10000
I have what amounts to a nested loop.
The outer loop typically runs for 500 iterations:
data::Vector{Vector{String}} = loaddata()
for ii in 1:500
score_total = zeros(10100)
for datum in data
score_total+=MyScoringOperation(datum)
end
end
On one computer, on a small test case of 3000 (rather than 50,000) this takes 100-300 seconds per outer loop.
I have 3 powerful servers with Julia 0.3.9 installed (and can get 3 more easily, and then can get hundreds more at the next scale).
I have basic experience with @parallel, however it seems like it is spending a lot of time copying the constant (it more or less hangs on the smaller testing case).
That looks like:
data::Vector{Vector{String}} = loaddata()
state = init_state()
for ii in 1:500
score_total = @parallel(+) for datum in data
MyScoringOperation(state, datum)
end
state = update(state, score_total)
end
My understanding of the way this implementation works with @parallel is that it:
For each ii:
partitions data into a chunk for each worker
sends that chunk to each worker
each worker processes its chunk
the main procedure sums the results as they arrive.
I would like to remove step 2,
so that instead of sending a chunk of data to each worker,
I just send a range of indexes to each worker, and they look it up from their own copy of data. Or even better, give each worker only its own chunk and have it reuse that chunk each time (saving a lot of RAM).
Profiling backs up my belief about the functioning of @parallel.
For a similarly scoped problem (with even smaller data),
the non-parallel version runs in 0.09 seconds,
and the parallel version runs in 185 seconds.
The profiler shows almost 100% of that time is spent interacting with network IO.
This should get you started:
function get_chunks(data::Vector, nchunks::Int)
base_len, remainder = divrem(length(data),nchunks)
chunk_len = fill(base_len,nchunks)
chunk_len[1:remainder]+=1 #remainder will always be less than nchunks
function _it()
for ii in 1:nchunks
chunk_start = sum(chunk_len[1:ii-1])+1
chunk_end = chunk_start + chunk_len[ii] -1
chunk = data[chunk_start: chunk_end]
produce(chunk)
end
end
Task(_it)
end
function r_chunk_data(data::Vector)
all_chuncks = get_chunks(data, nworkers()) |> collect;
remote_chunks = [put!(RemoteRef(pid)::RemoteRef, all_chuncks[ii]) for (ii,pid) in enumerate(workers())]
#Have to add the type annotation as otherwise it thinks that RemoteRef(pid) might return a RemoteValue
end
function fetch_reduce(red_acc::Function, rem_results::Vector{RemoteRef})
total = nothing
#TODO: consider strongly wrapping total in a lock, when in 0.4, so that it is guaranteed safe
@sync for rr in rem_results
function gather(rr)
res=fetch(rr)
if total===nothing
total=res
else
total=red_acc(total,res)
end
end
@async gather(rr)
end
total
end
function prechunked_mapreduce(r_chunks::Vector{RemoteRef}, map_fun::Function, red_acc::Function)
rem_results = map(r_chunks) do rchunk
function do_mapred()
@assert rchunk.where==myid()
@pipe rchunk |> fetch |> map(map_fun,_) |> reduce(red_acc, _)
end
remotecall(rchunk.where,do_mapred)
end
@pipe rem_results |> convert(Vector{RemoteRef},_) |> fetch_reduce(red_acc, _)
end
r_chunk_data breaks the data into chunks (as defined by the get_chunks method) and sends each of those chunks to a different worker, where they are stored in RemoteRefs.
The RemoteRefs are references to memory on your other processes (and potentially computers).
prechunked_mapreduce does a variation on a kind of map-reduce: each worker first runs map_fun on each of its chunk's elements, then reduces over all the elements in its chunk using red_acc (a reduction accumulator function). Finally each worker returns its result, which is then combined by reducing them all together using red_acc, this time via fetch_reduce so that we can add the first ones completed first.
fetch_reduce is a nonblocking fetch and reduce operation. I believe it has no race conditions, though this may be because of an implementation detail in @async and @sync. When Julia 0.4 comes out, it will be easy enough to put a lock in to make it obviously free of race conditions.
This code isn't really battle-hardened.
You might also want to look at making the chunk size tunable, so that you can send more data to faster workers (if some have better network or faster CPUs).
You need to reexpress your code as a map-reduce problem, which doesn't look too hard.
Testing that with:
data = [float([eye(100),eye(100)])[:] for _ in 1:3000] #480Mb
chunk_data(:data, data)
@time prechunked_mapreduce(:data, mean, (+))
Took ~0.03 seconds, when distributed across 8 workers (none of them on the same machine as the launcher)
vs running just locally:
@time reduce(+,map(mean,data))
took ~0.06 seconds.

OOP much slower than structural programming: why, and how can it be fixed?

As I mentioned in the subject of this post, I found out the hard way that OOP is slower than structural programming (spaghetti code).
I wrote a simulated annealing program with OOP, then removed one class and rewrote it structurally in the main form. Suddenly it got much faster. I was calling my removed class in every iteration of the OOP program.
I also checked it with tabu search. Same result.
Can anyone tell me why this is happening and how I can fix it in other OOP programs?
Are there any tricks? For example, caching my classes or something like that?
(The programs were written in C#.)
If you have a high-frequency loop, and inside that loop you create new objects and don't call other functions very much, then, yes, you will see that if you can avoid those new allocations, say by re-using one copy of the object, you can save a large fraction of total time.
Between new, constructors, destructors, and garbage collection, very little code can waste a whole lot of time.
Use them sparingly.
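For instance, a small hypothetical C# sketch of that difference (the SolutionState and Evaluate names are invented for illustration, not taken from the question): the first loop allocates a fresh object on every pass, the second allocates once and resets it in place.
using System;
using System.Diagnostics;

// Illustrative type standing in for whatever per-iteration state the annealing loop needs.
class SolutionState
{
    public double[] Values;
    public SolutionState(int size) { Values = new double[size]; }
    public void Reset() { Array.Clear(Values, 0, Values.Length); }
}

class Program
{
    static double Evaluate(SolutionState s)
    {
        double sum = 0;
        for (int i = 0; i < s.Values.Length; i++) sum += s.Values[i];
        return sum;
    }

    static void Main()
    {
        const int iterations = 1000000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var s = new SolutionState(256);     // new object every pass: allocation + GC pressure
            Evaluate(s);
        }
        Console.WriteLine("fresh allocations: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        var reusable = new SolutionState(256);  // allocated once
        for (int i = 0; i < iterations; i++)
        {
            reusable.Reset();                   // re-initialise in place instead of calling new
            Evaluate(reusable);
        }
        Console.WriteLine("reused object:     " + sw.ElapsedMilliseconds + " ms");
    }
}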
Memory access is often overlooked. The way o.o. tends to lay out data in memory is not conducive to efficient memory access in practice in loops. Consider the following pseudocode:
adult_clients = 0
for client in list_of_all_clients:
if client.age >= AGE_OF_MAJORITY:
adult_clients++
It so happens that the way this is accessed from memory is quite inefficient on modern architectures, because they like accessing large contiguous rows of memory, but we only care about client.age for each of the clients we have, and those fields will not be laid out in contiguous memory.
Focusing on objects that have fields results in data being laid out in memory in such a way that fields that hold the same type of information will not be laid out in consecutive memory. Performance-heavy code tends to involve loops that often look at data with the same conceptual meaning. It is conducive to performance that such data be laid out in contiguous memory.
Consider these two examples in Rust:
// struct that contains an id, and an optional value recording whether the id is divisible by three
struct Foo {
id : u32,
divbythree : Option<bool>,
}
fn main () {
// create a pretty big vector of these structs with increasing ids, and divbythree initialized as None
let mut vec_of_foos : Vec<Foo> = (0..100000000).map(|i| Foo{ id : i, divbythree : None }).collect();
// loop over all these structs, determine if the id is divisible by three
// and set divbythree accordingly
let mut divbythrees = 0;
for foo in vec_of_foos.iter_mut() {
if foo.id % 3 == 0 {
foo.divbythree = Some(true);
divbythrees += 1;
} else {
foo.divbythree = Some(false);
}
}
// print the number of times it was divisible by three
println!("{}", divbythrees);
}
On my system, the real time with rustc -O is 0m0.436s; now let us consider this example:
fn main () {
// this time we create two vectors rather than a vector of structs
let vec_of_ids : Vec<u32> = (0..100000000).collect();
let mut vec_of_divbythrees : Vec<Option<bool>> = vec![None; vec_of_ids.len()];
// but we basically do the same thing
let mut divbythrees = 0;
for i in 0..vec_of_ids.len(){
if vec_of_ids[i] % 3 == 0 {
vec_of_divbythrees[i] = Some(true);
divbythrees += 1;
} else {
vec_of_divbythrees[i] = Some(false);
}
}
println!("{}", divbythrees);
}
This runs in 0m0.254s at the same optimization level, close to half the time needed.
Despite having to allocate two vectors instead of one, storing similar values in contiguous memory has almost halved the execution time. Though obviously the o.o. approach provides for much nicer and more maintainable code.
P.s.: it occurs to me that I should probably explain why this matters so much given that the code itself in both cases still indexes memory one field at a time, rather than, say, putting a large swath on the stack. The reason is c.p.u. caches: when the program asks for the memory at a certain address, it actually obtains, and caches, a significant chunk of memory around that address, and if memory next to it is asked for again quickly, then it can be served from the cache, rather than from actual physical working memory. Of course, compilers will also vectorize the bottom code more efficiently as a consequence.

Tricks to manage the available memory in an R session

What tricks do people use to manage the available memory of an interactive R session? I use the functions below [based on postings by Petr Pikal and David Hinds to the r-help list in 2004] to list (and/or sort) the largest objects and to occasionally rm() some of them. But by far the most effective solution was ... to run under 64-bit Linux with ample memory.
Any other nice tricks folks want to share? One per post, please.
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.size <- napply(names, object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.dim)
names(out) <- c("Type", "Size", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
out
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
Ensure you record your work in a reproducible script. From time-to-time, reopen R, then source() your script. You'll clean out anything you're no longer using, and as an added benefit will have tested your code.
I use the data.table package. With its := operator you can:
Add columns by reference
Modify subsets of existing columns by reference, and by group by reference
Delete columns by reference
None of these operations copy the (potentially large) data.table at all, not even once.
Aggregation is also particularly fast because data.table uses much less working memory.
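For example, a minimal sketch of those by-reference operations (toy data):
library(data.table)
DT <- data.table(id = rep(1:3, each = 2), x = rnorm(6))
DT[, y := x * 2]                      # add a column by reference
DT[id == 1, y := 0]                   # modify a subset by reference
DT[, grp_mean := mean(x), by = id]    # add a grouped summary column by reference
DT[, y := NULL]                       # delete a column by reference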
Related links :
News from data.table, London R presentation, 2012
When should I use the := operator in data.table?
Saw this on a twitter post and think it's an awesome function by Dirk! Following on from JD Long's answer, I would do this for user-friendly reading:
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.prettysize <- napply(names, function(x) {
format(utils::object.size(x), units = "auto") })
obj.size <- napply(names, object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Length/Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
out
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()
Which results in something like the following:
Type Size PrettySize Length/Rows Columns
pca.res PCA 790128 771.6 Kb 7 NA
DF data.frame 271040 264.7 Kb 669 50
factor.AgeGender factanal 12888 12.6 Kb 12 NA
dates data.frame 9016 8.8 Kb 669 2
sd. numeric 3808 3.7 Kb 51 NA
napply function 2256 2.2 Kb NA NA
lsos function 1944 1.9 Kb NA NA
load loadings 1768 1.7 Kb 12 2
ind.sup integer 448 448 bytes 102 NA
x character 96 96 bytes 1 NA
NOTE: The main part I added was (again, adapted from JD's answer) :
obj.prettysize <- napply(names, function(x) {
print(object.size(x), units = "auto") })
I make aggressive use of the subset parameter with selection of only the required variables when passing dataframes to the data= argument of regression functions. It does result in some errors if I forget to add variables to both the formula and the select= vector, but it still saves a lot of time due to decreased copying of objects and reduces the memory footprint significantly. Say I have 4 million records with 110 variables (and I do.) Example:
# library(rms); library(Hmisc) for the cph,and rcs functions
Mayo.PrCr.rbc.mdl <-
cph(formula = Surv(surv.yr, death) ~ age + Sex + nsmkr + rcs(Mayo, 4) +
rcs(PrCr.rat, 3) + rbc.cat * Sex,
data = subset(set1HLI, gdlab2 & HIVfinal == "Negative",
select = c("surv.yr", "death", "PrCr.rat", "Mayo",
"age", "Sex", "nsmkr", "rbc.cat")
) )
By way of setting context and the strategy: the gdlab2 variable is a logical vector that was constructed for subjects in a dataset that had all normal or almost normal values for a bunch of laboratory tests and HIVfinal was a character vector that summarized preliminary and confirmatory testing for HIV.
I love Dirk's .ls.objects() script but I kept squinting to count characters in the size column. So I did some ugly hacks to make it present with pretty formatting for the size:
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.size <- napply(names, object.size)
obj.prettysize <- sapply(obj.size, function(r) prettyNum(r, big.mark = ",") )
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size,obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
out <- out[c("Type", "PrettySize", "Rows", "Columns")]
names(out) <- c("Type", "Size", "Rows", "Columns")
if (head)
out <- head(out, n)
out
}
That's a good trick.
One other suggestion is to use memory efficient objects wherever possible: for instance, use a matrix instead of a data.frame.
This doesn't really address memory management, but one important function that isn't widely known is memory.limit(). You can increase the default using this command, memory.limit(size=2500), where the size is in MB. As Dirk mentioned, you need to be using 64-bit in order to take real advantage of this.
I quite like the improved objects function developed by Dirk. Much of the time though, a more basic output with the object name and size is sufficient for me. Here's a simpler function with a similar objective. Memory use can be ordered alphabetically or by size, can be limited to a certain number of objects, and can be ordered ascending or descending. Also, I often work with data that are 1GB+, so the function changes units accordingly.
showMemoryUse <- function(sort="size", decreasing=FALSE, limit) {
objectList <- ls(parent.frame())
oneKB <- 1024
oneMB <- 1048576
oneGB <- 1073741824
memoryUse <- sapply(objectList, function(x) as.numeric(object.size(eval(parse(text=x)))))
memListing <- sapply(memoryUse, function(size) {
if (size >= oneGB) return(paste(round(size/oneGB,2), "GB"))
else if (size >= oneMB) return(paste(round(size/oneMB,2), "MB"))
else if (size >= oneKB) return(paste(round(size/oneKB,2), "kB"))
else return(paste(size, "bytes"))
})
memListing <- data.frame(objectName=names(memListing),memorySize=memListing,row.names=NULL)
if (sort=="alphabetical") memListing <- memListing[order(memListing$objectName,decreasing=decreasing),]
else memListing <- memListing[order(memoryUse,decreasing=decreasing),] #will run if sort not specified or "size"
if(!missing(limit)) memListing <- memListing[1:limit,]
print(memListing, row.names=FALSE)
return(invisible(memListing))
}
And here is some example output:
> showMemoryUse(decreasing=TRUE, limit=5)
objectName memorySize
coherData 713.75 MB
spec.pgram_mine 149.63 kB
stoch.reg 145.88 kB
describeBy 82.5 kB
lmBandpass 68.41 kB
I never save an R workspace. I use import scripts and data scripts and output any especially large data objects that I don't want to recreate often to files. This way I always start with a fresh workspace and don't need to clean out large objects. That is a very nice function though.
Unfortunately I did not have time to test it extensively but here is a memory tip that I have not seen before. For me the required memory was reduced with more than 50%.
When you read things into R with, for example, read.csv, they require a certain amount of memory.
After this you can save them with save(list=ls(), file="Destinationfile.RData").
The next time you open R you can use load("Destinationfile.RData").
Now the memory usage might have decreased.
It would be nice if anyone could confirm whether this produces similar results with a different dataset.
To further illustrate the common strategy of frequent restarts, we can use littler which allows us to run simple expressions directly from the command-line. Here is an example I sometimes use to time different BLAS for a simple crossprod.
r -e'N<-3*10^3; M<-matrix(rnorm(N*N),ncol=N); print(system.time(crossprod(M)))'
Likewise,
r -lMatrix -e'example(spMatrix)'
loads the Matrix package (via the --packages | -l switch) and runs the examples of the spMatrix function. As r always starts 'fresh', this method is also a good test during package development.
Last but not least, r also works great for automated batch mode in scripts using the '#!/usr/bin/r' shebang header. Rscript is an alternative where littler is unavailable (e.g. on Windows).
For both speed and memory purposes, when building a large data frame via some complex series of steps, I'll periodically flush it (the in-progress data set being built) to disk, appending to anything that came before, and then restart it. This way the intermediate steps are only working on smallish data frames (which is good as, e.g., rbind slows down considerably with larger objects). The entire data set can be read back in at the end of the process, when all the intermediate objects have been removed.
dfinal <- NULL
first <- TRUE
tempfile <- "dfinal_temp.csv"
for( i in bigloop ) {
if( !i %% 10000 ) {
print( i, "; flushing to disk..." )
write.table( dfinal, file=tempfile, append=!first, col.names=first )
first <- FALSE
dfinal <- NULL # nuke it
}
# ... complex operations here that add data to 'dfinal' data frame
}
print( "Loop done; flushing to disk and re-reading entire data set..." )
write.table( dfinal, file=tempfile, append=TRUE, col.names=FALSE )
dfinal <- read.table( tempfile )
Just to note that data.table package's tables() seems to be a pretty good replacement for Dirk's .ls.objects() custom function (detailed in earlier answers), although just for data.frames/tables and not e.g. matrices, arrays, lists.
I'm fortunate and my large data sets are saved by the instrument in "chunks" (subsets) of roughly 100 MB (32bit binary). Thus I can do pre-processing steps (deleting uninformative parts, downsampling) sequentially before fusing the data set.
Calling gc () "by hand" can help if the size of the data get close to available memory.
Sometimes a different algorithm needs much less memory.
Sometimes there's a trade off between vectorization and memory use.
compare: split & lapply vs. a for loop.
For the sake of fast & easy data analysis, I often work first with a small random subset (sample()) of the data. Once the data analysis script/.Rnw is finished, the data analysis code and the complete data go to the calculation server for overnight / over-the-weekend / ... calculation.
The use of environments instead of lists to handle collections of objects which occupy a significant amount of working memory.
The reason: each time an element of a list structure is modified, the whole list is temporarily duplicated. This becomes an issue if the storage requirement of the list is about half the available working memory, because then data has to be swapped to the slow hard disk. Environments, on the other hand, aren't subject to this behaviour and they can be treated similar to lists.
Here is an example:
get.data <- function(x)
{
# get some data based on x
return(paste("data from",x))
}
collect.data <- function(i,x,env)
{
# get some data
data <- get.data(x[[i]])
# store data into environment
element.name <- paste("V",i,sep="")
env[[element.name]] <- data
return(NULL)
}
better.list <- new.env()
filenames <- c("file1","file2","file3")
lapply(seq_along(filenames),collect.data,x=filenames,env=better.list)
# read/write access
print(better.list[["V1"]])
better.list[["V2"]] <- "testdata"
# number of list elements
length(ls(better.list))
In conjunction with structures such as big.matrix or data.table which allow for altering their content in-place, very efficient memory usage can be achieved.
The ll() function in the gdata package can show the memory usage of each object as well.
gdata::ll(unit='MB')
If you really want to avoid the leaks, you should avoid creating any big objects in the global environment.
What I usually do is to have a function that does the job and returns NULL — all data is read and manipulated in this function or others that it calls.
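A minimal sketch of that pattern (the file names and column names are made up):
process_file <- function(path) {
  big <- read.csv(path)                            # the large object exists only inside this call
  result <- aggregate(val ~ id, data = big, FUN = sum)
  write.csv(result, "summary.csv", row.names = FALSE)
  invisible(NULL)                                  # return NULL so nothing big lands in the global environment
}
process_file("big_input.csv")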
With only 4GB of RAM (running Windows 10, so make that about 2GB, or more realistically 1GB) I've had to be really careful with the allocation.
I use data.table almost exclusively.
The 'fread' function allows you to subset information by field names on import; only import the fields that are actually needed to begin with. If you're using base R read, null the spurious columns immediately after import.
As 42- suggests, wherever possible I will then subset within the columns immediately after importing the information.
I frequently rm() objects from the environment as soon as they're no longer needed, e.g. on the next line after using them to subset something else, and call gc().
'fread' and 'fwrite' from data.table can be very fast by comparison with base R reads and writes.
As kpierce8 suggests, I almost always fwrite everything out of the environment and fread it back in, even with thousand / hundreds of thousands of tiny files to get through. This not only keeps the environment 'clean' and keeps the memory allocation low but, possibly due to the severe lack of RAM available, R has a propensity for frequently crashing on my computer; really frequently. Having the information backed up on the drive itself as the code progresses through various stages means I don't have to start right from the beginning if it crashes.
As of 2017, I think the fastest SSDs are running around a few GB per second through the M2 port. I have a really basic 50GB Kingston V300 (550MB/s) SSD that I use as my primary disk (has Windows and R on it). I keep all the bulk information on a cheap 500GB WD platter. I move the data sets to the SSD when I start working on them. This, combined with 'fread'ing and 'fwrite'ing everything has been working out great. I've tried using 'ff' but prefer the former. 4K read/write speeds can create issues with this though; backing up a quarter of a million 1k files (250MBs worth) from the SSD to the platter can take hours. As far as I'm aware, there isn't any R package available yet that can automatically optimise the 'chunkification' process; e.g. look at how much RAM a user has, test the read/write speeds of the RAM / all the drives connected and then suggest an optimal 'chunkification' protocol. This could produce some significant workflow improvements / resource optimisations; e.g. split it to ... MB for the ram -> split it to ... MB for the SSD -> split it to ... MB on the platter -> split it to ... MB on the tape. It could sample data sets beforehand to give it a more realistic gauge stick to work from.
A lot of the problems I've worked on in R involve forming combination and permutation pairs, triples etc, which only makes having limited RAM more of a limitation as they will often at least exponentially expand at some point. This has made me focus a lot of attention on the quality as opposed to quantity of information going into them to begin with, rather than trying to clean it up afterwards, and on the sequence of operations in preparing the information to begin with (starting with the simplest operation and increasing the complexity); e.g. subset, then merge / join, then form combinations / permutations etc.
There do seem to be some benefits to using base R read and write in some instances. For instance, the error detection within 'fread' is so good it can be difficult trying to get really messy information into R to begin with to clean it up. Base R also seems to be a lot easier if you're using Linux. Base R seems to work fine in Linux; Windows 10 uses ~20GB of disc space whereas Ubuntu only needs a few GB, and the RAM needed with Ubuntu is slightly lower. But I've noticed large quantities of warnings and errors when installing third party packages in (L)Ubuntu. I wouldn't recommend drifting too far away from (L)Ubuntu or other stock distributions with Linux as you can lose so much overall compatibility it renders the process almost pointless (I think 'unity' is due to be cancelled in Ubuntu as of 2017). I realise this won't go down well with some Linux users but some of the custom distributions are borderline pointless beyond novelty (I've spent years using Linux alone).
Hopefully some of that might help others out.
This is a newer answer to this excellent old question. From Hadley's Advanced R:
install.packages("pryr")
library(pryr)
object_size(1:10)
## 88 B
object_size(mean)
## 832 B
object_size(mtcars)
## 6.74 kB
(http://adv-r.had.co.nz/memory.html)
This adds nothing to the above, but is written in the simple and heavily commented style that I like. It yields a table with the objects ordered by size, but without some of the detail given in the examples above:
#Find the objects
MemoryObjects = ls()
#Create an array
MemoryAssessmentTable=array(NA,dim=c(length(MemoryObjects),2))
#Name the columns
colnames(MemoryAssessmentTable)=c("object","bytes")
#Define the first column as the objects
MemoryAssessmentTable[,1]=MemoryObjects
#Define a function to determine size
MemoryAssessmentFunction=function(x){object.size(get(x))}
#Apply the function to the objects
MemoryAssessmentTable[,2]=t(t(sapply(MemoryAssessmentTable[,1],MemoryAssessmentFunction)))
#Produce a table with the largest objects first
noquote(MemoryAssessmentTable[rev(order(as.numeric(MemoryAssessmentTable[,2]))),])
As well as the more general memory management techniques given in the answers above, I always try to reduce the size of my objects as far as possible. For example, I work with very large but very sparse matrices, in other words matrices where most values are zero. Using the 'Matrix' package (capitalisation important) I was able to reduce my average object sizes from ~2GB to ~200MB as simply as:
my.matrix <- Matrix(my.matrix)
The Matrix package includes data formats that can be used exactly like a regular matrix (no need to change your other code) but are able to store sparse data much more efficiently, whether loaded into memory or saved to disk.
Additionally, the raw files I receive are in 'long' format where each data point has variables x, y, z, i. Much more efficient to transform the data into an x * y * z dimension array with only variable i.
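A small sketch of that reshaping with made-up data; matrix indexing with cbind() fills the array in one step:
long <- expand.grid(x = 1:10, y = 1:10, z = 1:10)   # long format: one row per (x, y, z) point
long$i <- rnorm(nrow(long))
arr <- array(NA_real_, dim = c(max(long$x), max(long$y), max(long$z)))
arr[cbind(long$x, long$y, long$z)] <- long$i        # keep only i; the coordinates become the array indices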
Know your data and use a bit of common sense.
If you are working on Linux and want to use several processes and only have to do read operations on one or more large objects use makeForkCluster instead of a makePSOCKcluster. This also saves you the time sending the large object to the other processes.
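A minimal sketch (Unix only; the object and cluster size are made up): workers forked from the current session inherit its memory copy-on-write, so bigobj is not serialised and sent to each worker as it would be with a PSOCK cluster.
library(parallel)
bigobj <- rnorm(1e7)                        # create the large object before forking
cl <- makeForkCluster(4)
res <- parLapply(cl, 1:4, function(k) mean(bigobj[seq(k, length(bigobj), by = 4)]))
stopCluster(cl)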
I really appreciate some of the answers above. Following @hadley and @Dirk, who suggest closing R, issuing source(), and using the command line, I came up with a solution that worked very well for me. I had to deal with hundreds of mass spectra, each occupying around 20 MB of memory, so I used two R scripts, as follows:
First a wrapper:
#!/usr/bin/Rscript --vanilla --default-packages=utils
for(l in 1:length(fdir)) {
for(k in 1:length(fds)) {
system(paste("Rscript runConsensus.r", l, k))
}
}
With this script I basically control what my main script (runConsensus.r) does, and I write the answer data to the output. This way, each time the wrapper calls the script, it seems R is reopened and the memory is freed.
Hope it helps.
Tip for dealing with objects requiring heavy intermediate calculation: When using objects that require a lot of heavy calculation and intermediate steps to create, I often find it useful to write a chunk of code with the function to create the object, and then a separate chunk of code that gives me the option either to generate and save the object as an rmd file, or load it externally from an rmd file I have already previously saved. This is especially easy to do in R Markdown using the following code-chunk structure.
```{r Create OBJECT}
COMPLICATED.FUNCTION <- function(...) { Do heavy calculations needing lots of memory;
Output OBJECT; }
```
```{r Generate or load OBJECT}
LOAD <- TRUE
SAVE <- TRUE
#NOTE: Set LOAD to TRUE if you want to load saved file
#NOTE: Set LOAD to FALSE if you want to generate the object from scratch
#NOTE: Set SAVE to TRUE if you want to save the object externally
if(LOAD) {
OBJECT <- readRDS(file = 'MySavedObject.rds')
} else {
OBJECT <- COMPLICATED.FUNCTION(x, y, z)
if (SAVE) { saveRDS(file = 'MySavedObject.rds', object = OBJECT) } }
```
With this code structure, all I need to do is to change LOAD depending on whether I want to generate the object, or load it directly from an existing saved file. (Of course, I have to generate it and save it the first time, but after this I have the option of loading it.) Setting LOAD <- TRUE bypasses use of my complicated function and avoids all of the heavy computation therein. This method still requires enough memory to store the object of interest, but it saves you from having to calculate it each time you run your code. For objects that require a lot of heavy calculation of intermediate steps (e.g., for calculations involving loops over large arrays) this can save a substantial amount of time and computation.
Running
for (i in 1:10)
gc(reset = T)
from time to time also helps R to free unused but still not released memory.
You can also get some benefit using knitr and putting your script in Rmd chunks.
I usually divide the code into different chunks and select which one will save a checkpoint to cache or to an RDS file.
There you can set a chunk to be saved to "cache", or you can decide whether or not to run a particular chunk. This way, in a first run you can process only "part 1", and in another execution you can select only "part 2", etc.
Example:
part1
```{r corpus, warning=FALSE, cache=TRUE, message=FALSE, eval=TRUE}
corpusTw <- corpus(twitter) # build the corpus
```
part2
```{r trigrams, warning=FALSE, cache=TRUE, message=FALSE, eval=FALSE}
dfmTw <- dfm(corpusTw, verbose=TRUE, removeTwitter=TRUE, ngrams=3)
```
As a side effect, this also could save you some headaches in terms of reproducibility :)
Based on @Dirk's and @Tony's answers I have made a slight update. The result was outputting [1] before the pretty size values, so I took out the capture.output which solved the problem:
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.prettysize <- napply(names, function(x) {
format(utils::object.size(x), units = "auto") })
obj.size <- napply(names, utils::object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
return(out)
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()
I try to keep the number of objects small when working in a larger project with a lot of intermediate steps. So instead of creating many unique objects called
dataframe-> step1 -> step2 -> step3 -> result
raster-> multipliedRast -> meanRastF -> sqrtRast -> resultRast
I work with temporary objects that I call temp.
dataframe -> temp -> temp -> temp -> result
This leaves me with fewer intermediate files and a better overview.
raster <- raster('file.tif')
temp <- raster * 10
temp <- mean(temp)
resultRast <- sqrt(temp)
To save more memory I can simply remove temp when no longer needed.
rm(temp)
If I need several intermediate files, I use temp1, temp2, temp3.
For testing I use test, test2, ...
rm(list=ls()) is a great way to keep you honest and keep things reproducible.
