How do I store module-level write-once state?

I have some module-level state I want to write once and then never modify.
Specifically, I have a set of strings I want to use to look things up in later. What is an efficient and ordinary way of doing this?
I could make a function that always returns the same set:
my_set() -> sets:from_list(["a", "b", "c"]).
Would the VM optimize this, or would the code for constructing the set be re-run every time? I suspect the set would just get GCd.
In that case, should I cache the set in the process dictionary, keyed on something unique like the module md5?
Key = proplists:get_value(md5, module_info()), put(Key, my_set())
Another solution would be to have the caller call an init function that returns an opaque chunk of state, and then pass that state into each function in the module.

A compile-time constant, like your example list ["a", "b", "c"], will be stored in a constant pool on the side when the module is loaded, and not rebuilt each time you run the expression. (In the old days, the list would have been reconstructed from its elements for each new call.) This goes for all constants no matter how complicated (like lists of lists of tuples). But when you call out to a function like sets:from_list/1, the compiler cannot assume anything about the representation used by the sets module, and the set will be constructed dynamically from that constant list.
While an ETS table would work, it is less efficient for larger constants (like, say, a set or map containing many entries), because an ETS table has the same memory model as a process - data is written and read by copying, as if by sending messages. If the constants are small, the difference between copying them and recreating them locally would be negligible, and if the constants are large, you waste time copying them.
What you want instead is a fairly new feature called Persistent Term Storage: https://erlang.org/doc/man/persistent_term.html (since Erlang/OTP 21). It is similar to the way compile time constants are handled, so there is no copying when looking up a value. (The key could be the name of your module.) Persistent Term is intended as pretty much a write-once-read-many storage - you can update the stored entry, but that's a more expensive operation which may trigger a global GC.
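For illustration, a minimal sketch of how that could look; the init/0 wrapper and using the module name as the key are my own choices, not part of the question:
%% Assumes OTP 21.2+ where persistent_term is available.
init() ->
    persistent_term:put(?MODULE, sets:from_list(["a", "b", "c"])).

my_set() ->
    %% No copy is made on lookup; the stored term is shared, much like a literal.
    persistent_term:get(?MODULE).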


Is there a linux header for hashtable with spinlock-protected buckets?

I'm writing code that rarely creates/removes objects (up to several thousand) but very frequently modifies them in soft IRQ context. These objects are also rarely read (and will probably also be rarely modified) from task context (via procfs: one file per object). Currently my code contains global per-CPU data blocks, each guarded by a spinlock. Such a block contains a fixed-size hashtable for object storage.
Obviously the current design is not optimal, especially under very high object-update loads: reading objects from procfs will cause data losses in the updating soft IRQs. I need to rewrite the synchronisation scheme to get rid of the global locks. The most obvious choice is to have a spinlock for each hashtable bucket; that should scale well. The problem is that I'll probably need to use my own hashtable implementation, or at least reimplement several top-level macros (I didn't find any in linux/hashtable.h for spinlock-protected buckets). Should I also look towards an RCU-enabled hashtable (though I don't yet have a solid understanding of that synchronisation approach)?
Buckets with lock protection are declared in the header linux/list_bl.h. They use the lowest bit of the head pointer as a lock bit.
RCU-protected access to the bucket is defined with other hash table functions in the header linux/hashtable.h (they have _rcu suffix).
Choosing between locks and RCU is up to you. Note that RCU by itself cannot resolve modify-modify conflicts, and it helps mostly with frequently read data, which does not seem to be your case.
Since only one locking function, hlist_bl_lock, is declared for struct hlist_bl_head, and this function is not IRQ-aware, additional actions are needed when the hash table can be used from IRQ or bottom-half context:
spin_lock_irqsave:
    local_irq_save(flags);
    hlist_bl_lock(...);

spin_unlock_irqrestore:
    hlist_bl_unlock(...);
    local_irq_restore(flags);

spin_lock_bh:
    local_bh_disable();
    hlist_bl_lock(...);

spin_unlock_bh:
    hlist_bl_unlock(...);
    local_bh_enable();
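To make that concrete, here is a rough, untested sketch of the bucket-level bit-lock approach; the my_obj type, the table size and the use of hash_32 are illustrative assumptions rather than anything from the question:

#include <linux/list_bl.h>
#include <linux/hash.h>
#include <linux/bottom_half.h>
#include <linux/types.h>

#define MY_HASH_BITS 10

/* One bit-locked list head per bucket; the lock is bit 0 of the head pointer. */
static struct hlist_bl_head my_table[1 << MY_HASH_BITS];

struct my_obj {
	u32 key;
	struct hlist_bl_node node;
};

static void my_table_init(void)
{
	int i;

	for (i = 0; i < (1 << MY_HASH_BITS); i++)
		INIT_HLIST_BL_HEAD(&my_table[i]);
}

/* Called from soft IRQ context: only the one bucket is locked. */
static void my_obj_insert(struct my_obj *obj)
{
	struct hlist_bl_head *head = &my_table[hash_32(obj->key, MY_HASH_BITS)];

	hlist_bl_lock(head);
	hlist_bl_add_head(&obj->node, head);
	hlist_bl_unlock(head);
}

/* Called from task context (e.g. procfs): disable bottom halves first so a
 * softirq updater on this CPU cannot deadlock on the same bucket lock. */
static struct my_obj *my_obj_find(u32 key)
{
	struct hlist_bl_head *head = &my_table[hash_32(key, MY_HASH_BITS)];
	struct hlist_bl_node *pos;
	struct my_obj *obj, *found = NULL;

	local_bh_disable();
	hlist_bl_lock(head);
	hlist_bl_for_each_entry(obj, pos, head, node) {
		if (obj->key == key) {
			found = obj;
			break;
		}
	}
	hlist_bl_unlock(head);
	local_bh_enable();

	return found;
}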

What is the most efficient Ruby data structure to track progress?

I am working on a small project which progressively grows a list of links and then processes them through a queue. There exists the likelihood that a link may be entered into the queue twice and I would like to track my progress so I can skip anything that has already been processed. I'm estimating around 10k unique links at most.
For larger projects I would use a database but that seems overkill for the amount of data I am working with and would prefer some form of in-memory solution that can potentially be serialized if I want to save progress across runs.
What data structure would best fit this need?
Update: I am already using a hash to track which links I have completed processing. Is this the most efficient way of doing it?
def process_link(link)
  return if @processed_links[link]
  # ... processing logic
  @processed_links[link] = Time.now # or other state
end
If you aren't concerned about memory, then just use a Hash to check inclusion; insert and lookup times are O(1) average case. Serialization is straightforward (Ruby's Marshal class should take care of that for you, or you could use a format like JSON). Ruby's Set is an array-like object that is backed with a Hash, so you could just use that if you're so inclined.
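A minimal sketch of that approach using only the standard library (the file name is just an example):
require 'set'

processed = Set.new
processed << "https://example.com/page-1"
processed.include?("https://example.com/page-1") # => true

# Persist progress between runs.
File.binwrite("processed.dump", Marshal.dump(processed))
processed = Marshal.load(File.binread("processed.dump"))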
However, if memory is a concern, then this is a great problem for a Bloom filter! You can achieve set-inclusion testing in constant time, and the filter uses substantially less memory than a hash would. The tradeoff is that Bloom filters are probabilistic: you can get false positives on inclusion tests. You can drive the false-positive rate very low with the right Bloom filter parameters, but if duplicates are the exception rather than the rule, you could implement something like:
Check for set inclusion in the Bloom filter [O(1)]
If the bloom filter reports that the entry is found, perform an O(n) check of the input data, to see if this item has been found in the array of input data prior to now.
That would get you very fast and memory-efficient lookups for the common case, and you could either accept the small possibility of false positives (i.e., occasionally skipping a link that was never actually processed) to keep the whole thing small and fast, or verify set inclusion whenever a duplicate is reported, so you only do the expensive work when you absolutely have to.
https://github.com/igrigorik/bloomfilter-rb is a Bloom filter implementation I've used in the past; it works nicely. There are also redis-backed Bloom filters, if you need something that can perform set membership tracking and testing across multiple app instances.
How about using a Set and converting your links to value objects (rather than reference objects), such as Structs? With value objects, the Set can detect uniqueness by value. Alternatively, you could use a Hash and store links by their primary key.
The data structure could be a hash:
current_status = { links: [link3, link4, link5], processed: [link1, link2, link3] }
To track your progress (in percent):
links_count = current_status[:links].length + current_status[:processed].length
progress = (current_status[:processed].length * 100) / links_count # Will give you percent of progress
To process your links:
Push any new link you need to process onto current_status[:links].
Use shift to take the next link to be processed from current_status[:links].
After processing a link, push it onto current_status[:processed].
EDIT
As I see it (and understand your question), the logic to process your links would be:
# current_status is assumed to be accessible here (e.g. as an instance variable)

# Add any new link to the queue unless it has already been processed
def add_link_to_queue(link)
  current_status[:links].push(link) unless current_status[:processed].include?(link)
end

# Process the next link in the queue
def process_next_link
  link = current_status[:links].shift # returns the first link in the queue
  # ... logic to process the link
  current_status[:processed].push(link)
end

# shift not only returns the first link but also removes it from the array,
# which avoids processing the same link twice

What is the design rationale behind HandleScope?

V8 requires a HandleScope to be declared in order to clean up any Local handles that were created within scope. I understand that HandleScope will dereference these handles for garbage collection, but I'm interested in why each Local class doesn't do the dereferencing themselves like most internal ref_ptr type helpers.
My thought is that HandleScope can do it more efficiently by dumping a large number of handles all at once rather than one by one as they would in a ref_ptr type scoped class.
Here is how I understand the documentation and the handles-inl.h source code. I, too, might be completely wrong since I'm not a V8 developer and documentation is scarce.
The garbage collector will, at times, move stuff from one memory location to another and, during one such sweep, also check which objects are still reachable and which are not. In contrast to reference-counting types like std::shared_ptr, this is able to detect and collect cyclic data structures. For all of this to work, V8 has to have a good idea about what objects are reachable.
On the other hand, objects are created and deleted quite a lot during the internals of some computation. You don't want too much overhead for each such operation. The way to achieve this is by creating a stack of handles. Each object listed in that stack is available from some handle in some C++ computation. In addition to this, there are persistent handles, which presumably take more work to set up and which can survive beyond C++ computations.
Having a stack of references requires that you use it in a stack-like way. There is no "invalid" mark in that stack; all the objects from the bottom to the top of the stack are valid object references. The way to ensure this is the HandleScope. It keeps things hierarchical. With reference-counted pointers you can do something like this:
#include <memory>

struct Object { explicit Object(int) {} };  // minimal stand-in so the example compiles

std::shared_ptr<Object>* f() {
  std::shared_ptr<Object> a(new Object(1));              // object 1: destroyed when f() returns
  auto* b = new std::shared_ptr<Object>(new Object(2));  // object 2: outlives f()
  return b;
}

void g() {
  std::shared_ptr<Object>* p = f();  // by now object 1 is already gone
  std::shared_ptr<Object> c = *p;
  delete p;                          // object 2 dies when c goes out of scope at the end of g()
}
Here object 1 is created first, then object 2 is created; when the function returns, object 1 is destroyed, and object 2 is destroyed only later (at the end of g()). The key point is that there is a span of time during which object 1 is already invalid but object 2 is still valid. That's what HandleScope aims to avoid.
Some other GC implementations examine the C stack and look for pointers they find there. This has a good chance of false positives, since stuff which is in fact data could be misinterpreted as a pointer. For reachability this might seem rather harmless, but when rewriting pointers since you're moving objects, this can be fatal. It has a number of other drawbacks, and relies a lot on how the low level implementation of the language actually works. V8 avoids that by keeping the handle stack separate from the function call stack, while at the same time ensuring that they are sufficiently aligned to guarantee the mentioned hierarchy requirements.
To offer yet another comparison: an object referenced by just one shared_ptr becomes collectible (and actually will be collected) once its C++ block scope ends. An object referenced by a v8::Handle will become collectible when leaving the nearest enclosing scope which did contain a HandleScope object. So programmers have more control over the granularity of stack operations. In a tight loop where performance is important, it might be useful to maintain just a single HandleScope for the whole computation, so that you won't have to access the handle stack data structure so often. On the other hand, doing so will keep all the objects around for the whole duration of the computation, which would be very bad indeed if this were a loop iterating over many values, since all of them would be kept around till the end. But the programmer has full control, and can arrange things in the most appropriate way.
Personally, I'd make sure to construct a HandleScope
At the beginning of every function which might be called from outside your code. This ensures that your code will clean up after itself.
In the body of every loop which might see more than three or so iterations, so that you only keep variables from the current iteration.
Around every block of code which is followed by some callback invocation, since this ensures that your stuff can get cleaned if the callback requires more memory.
Whenever I feel that something might produce considerable amounts of intermediate data which should get cleaned (or at least become collectible) as soon as possible.
In general I'd not create a HandleScope for every internal function if I can be sure that every other function calling this will already have set up a HandleScope. But that's probably a matter of taste.
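As a hedged illustration of the per-iteration pattern suggested above (V8's API has shifted between versions; the String::NewFromUtf8(...).ToLocalChecked() call assumes a reasonably recent API and is not taken from the answer itself):

#include <v8.h>

// Sketch only: assumes `isolate` was created and entered elsewhere.
void ProcessMany(v8::Isolate* isolate, int count) {
  v8::HandleScope outer(isolate);      // catches anything created outside the loop

  for (int i = 0; i < count; ++i) {
    v8::HandleScope inner(isolate);    // handles from this iteration are released here
    v8::Local<v8::String> tmp =
        v8::String::NewFromUtf8(isolate, "intermediate value").ToLocalChecked();
    (void)tmp;                         // ... work with per-iteration handles ...
  }
}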
Disclaimer: this may not be an official answer, more of a conjecture on my part, since the v8 documentation is hardly useful on this topic. So I may be proven wrong.
From my understanding, gained while developing various v8-based backend applications, it is a means of handling the mismatch between the C++ and JavaScript environments.
Imagine the following sequence, in which a self-dereferencing pointer could break the system:
JavaScript calls a C++-wrapped v8 function, say helloWorld()
The C++ function creates a v8::Handle with the value "hello world =x"
C++ returns the value to the v8 virtual machine
The C++ function does its usual cleanup of resources, including dereferencing of handles
Another C++ function / process overwrites the freed memory space
V8 reads the handle, and the data is no longer the same: "hell!#(#..."
And that's just the surface of the complicated inconsistencies between the two. Hence, to tackle the various issues of connecting the JavaScript VM (virtual machine) to the C++ interfacing code, I believe the development team decided to simplify the issue via the following:
All variable handles are to be stored in "buckets", aka HandleScopes, to be built / compiled / run / destroyed by their respective C++ code when needed.
Additionally, all function handles are to refer only to C++ static functions (I know this is irritating), which ensures the "existence" of the function call regardless of constructors / destructors.
Think of it from a development point of view: it marks a very strong distinction between the JavaScript VM development team and the C++ integration team (the Chrome dev team?), allowing both sides to work without interfering with one another.
Lastly, it could also be for the sake of simplicity in emulating multiple VMs, since v8 was originally meant for Google Chrome. A simple HandleScope creation and destruction whenever we open / close a tab makes for much easier GC management, especially when many VMs are running (one per tab in Chrome).

Why does loading cached objects increase the memory consumption drastically when computing them will not?

Relevant background info
I've built a little software that can be customized via a config file. The config file is parsed and translated into a nested environment structure (e.g. .HIVE$db = an environment, .HIVE$db$user = "Horst", .HIVE$db$pw = "my password", .HIVE$regex$date = some regex for dates etc.)
I've built routines that can handle those nested environments (e.g. look up the value for "db/user" or "regex/date", change it, etc.). The thing is that the initial parsing of the config files takes a long time and results in quite big objects (actually three to four of them, between 4 and 16 MB). So I thought "No problem, let's just cache them by saving the object(s) to .Rdata files". This works, but "loading" the cached objects makes my Rterm process go through the roof with respect to RAM consumption (over 1 GB!!) and I still don't really understand why (this doesn't happen when I "compute" the objects all anew, but that's exactly what I'm trying to avoid since it takes too long).
I already thought about maybe serializing it, but I haven't tested it as I would need to refactor my code a bit. Plus I'm not sure if it would affect the "loading back into R" part in just the same way as loading .Rdata files.
Question
Can anyone tell me why loading a previously computed object has such effects on memory consumption of my Rterm process (compared to computing it in every new process I start) and how best to avoid this?
If desired, I will also try to come up with an example, but it's a bit tricky to reproduce my exact scenario. Yet I'll try.
It's likely because the environments you are creating are carrying their ancestors around with them. If you don't need the ancestor information, then set the parents of such environments to emptyenv() (or just don't use environments if you don't need them).
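A small sketch of that idea, reusing names from the question (new.env(parent = emptyenv()) for freshly built environments, and a parent.env() reset for ones that already exist):

# Build the nested environments with empty parents so that saving them
# does not drag their enclosing environments (and everything visible from
# them) into the .Rdata file.
.HIVE <- new.env(parent = emptyenv())
.HIVE$db <- new.env(parent = emptyenv())
.HIVE$db$user <- "Horst"

# For environments that were already created the usual way, the parent
# can be reset before saving:
parent.env(.HIVE$db) <- emptyenv()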
Also note that formulas (and, of course, functions) have environments so watch out for those too.
If it's not reproducible by others, it will be hard to answer. However, I do something quite similar to what you're doing, yet I use JSON files to store all of my values. Rather than parse the text, I use RJSONIO to convert everything to a list, and getting stuff from a list is very easy. (You could, if you want, convert to a hash, but it's nice to have layers of nested parameters.)
See this answer for an example of how I've done this kind of thing. If that works out for you, then you can forego the expensive translation step and the memory ballooning.
(Taking a stab at the original question...) I wonder if your issue is that you are using an environment rather than a list. Saving environments might be tricky in some contexts. Saving lists is no problem. Try using a list or try converting to/from an environment. You can use the as.list() and as.environment() functions for this.
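A hedged sketch of that round trip, with saveRDS/readRDS used here just to keep the example to a single object (the answer's as.list()/as.environment() pair does the actual conversion):

# Convert the environment to a plain list before caching ...
saveRDS(as.list(.HIVE$db), file = "db_config.rds")

# ... and back to an environment after loading
.HIVE$db <- as.environment(readRDS("db_config.rds"))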

Condition, Block, Module - which way is the most memory and computationally efficient?

There are always several ways to do the same thing in Mathematica. For example, when adapting WReach's solution for my recent problem I used Condition:
ClearAll[ff];
SetAttributes[ff, HoldAllComplete];
ff[expr_] /; (Unset[done]; True) :=
Internal`WithLocalSettings[Null, done = f[expr],
AbortProtect[If[! ValueQ[done], Print["Interrupt!"]]; Unset[done]]]
However, we can do the same thing with Block:
ClearAll[ff];
SetAttributes[ff, HoldAllComplete];
ff[expr_] :=
Block[{done},
Internal`WithLocalSettings[Null, done = f[expr],
AbortProtect[If[! ValueQ[done], Print["Interrupt!"]]]]]
Or with Module:
ClearAll[ff];
SetAttributes[ff, HoldAllComplete];
ff[expr_] :=
Module[{done},
Internal`WithLocalSettings[Null, done = f[expr],
AbortProtect[If[! ValueQ[done], Print["Interrupt!"]]]]]
Probably there are several other ways to do the same. Which way is the most efficient in terms of memory and CPU use (f may return very large arrays of data, or very small ones)?
Both Module and Block are quite efficient, so the overhead induced by them is only noticeable when the body of a function whose variables you localize does very little. There are two major reasons for the overhead: scoping construct overhead (scoping constructs must analyze the code they enclose to resolve possible name conflicts and bind variables - this takes place for both Module and Block), and the overhead of creation and destruction of new symbols in a symbol table (only for Module). For this reason, Block is somewhat faster. To see how much faster, you can do a simple experiment:
In[14]:=
Clear[f,fm,fb,fmp];
f[x_]:=x;
fm[x_]:=Module[{xl = x},xl];
fb[x_]:=Block[{xl = x},xl];
Module[{xl},fmp[x_]:= xl=x]
We defined here 4 functions, with the simplest body possible - just return the argument, possibly assigned to a local variable. We can expect the effect to be most pronounced here, since the body does very little.
In[19]:= f /@ Range[100000]; // Timing
Out[19]= {0.063,Null}
In[20]:= fm /@ Range[100000]; // Timing
Out[20]= {0.343,Null}
In[21]:= fb /@ Range[100000]; // Timing
Out[21]= {0.172,Null}
In[22]:= fmp /@ Range[100000]; // Timing
Out[22]= {0.109,Null}
From these timings we see that Block is about twice as fast as Module, but that the last version, which uses a persistent variable created by Module only once, is about twice as fast as Block and almost as fast as a plain function invocation (because the persistent variable is created only once, and there is no scoping overhead when applying the function).
For real functions, most of the time the overhead of either Module or Block should not matter, so I'd use whatever is safer (usually Module). If it does matter, one option is to use persistent local variables created by Module only once. If even this overhead is significant, I'd reconsider the design, since then your function obviously does too little. There are cases when Block is more beneficial, for example when you want to be sure that all the memory used by local variables will be automatically released (this is particularly relevant for local variables with DownValues, since they are not always garbage-collected when created by Module). Another reason to use Block is when you expect the possibility of interrupts such as exceptions or aborts and want the local variables to be reset automatically (which Block does). By using Block, however, you risk name collisions, since it binds variables dynamically rather than lexically.
So, to summarize: in most cases my suggestion is this: if you feel that your function has serious memory or run-time inefficiency, look elsewhere - it is very rare for scoping constructs to be the major bottleneck. Exceptions would include Module variables that are not garbage-collected and accumulate data, very lightweight functions used very frequently, and functions operating on very efficient low-level structures such as packed arrays and sparse arrays, where the symbolic scoping overhead may be comparable to the time it takes the function to process its data, since the body is very efficient and uses fast functions that bypass the main evaluator.
EDIT
By combining Block and Module in the fashion suggested here:
Module[{xl}, fmbp[x_] := Block[{xl = x}, xl]]
you can have the best of both worlds: a function as fast as the Block-scoped one and as safe as the one that uses Module.
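If you want to verify this on your own setup, a quick check along the lines of the timings above would be (results omitted, since they depend on hardware and Mathematica version):
ClearAll[fmbp];
Module[{xl}, fmbp[x_] := Block[{xl = x}, xl]];
fmbp /@ Range[100000]; // Timing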
