Purely functional equivalent of weakhashmap? - data-structures

Weak hash tables like Java's WeakHashMap use weak references so that, when the garbage collector collects an unreachable key, the binding for that key is removed from the table. Weak hash tables are typically used to implement indirections from one vertex or edge in a graph to another, because they allow the garbage collector to collect unreachable portions of the graph.
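For concreteness, the Java behaviour I have in mind looks roughly like this (an illustrative sketch only; the class and variable names are made up):
import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) {
        Map<Object, String> edges = new WeakHashMap<>();
        Object vertex = new Object();
        edges.put(vertex, "edge data");

        vertex = null;   // drop the only strong reference to the key
        System.gc();     // only a hint; collection timing is up to the JVM
        System.out.println(edges.size()); // may now print 0: the binding silently disappears
    }
}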
Is there a purely functional equivalent of this data structure? If not, how might one be created?
This seems like an interesting challenge. The internal implementation cannot be pure because it must collect (i.e. mutate) the data structure in order to remove unreachable parts but I believe it could present a pure interface to the user, who could never observe the impurities because they only affect portions of the data structure that the user can, by definition, no longer reach.

That's an interesting concept. One major complication in a "purely functional" setting is that object identity is not normally observable in a "purely functional" sense. That is, if I copy an object or create a new identical one, in Java it's expected that the clone is not the original. But in a functional setting, the new one is expected to be semantically identical to the old one, even though the garbage collector will treat them differently.
So, if we allow object identity to be a part of the semantics, it would be sound, otherwise probably not. In the latter case, even if a hack could be found (I thought of one, described below), you're likely to have the language implementation fighting you all over the place because it's going to do all sorts of things to exploit the fact that object identity is not supposed to be observable.
One 'hack' that popped into my mind would be to use unique-by-construction values as keys, so that for the most part value equality will coincide with reference equality. For example, I have a library I use personally in Haskell with the following in its interface:
data Uniq s
getUniq :: IO (Uniq RealWorld)
instance Eq (Uniq s)
instance Ord (Uniq s)
A hash map like you describe would probably mostly work with these as keys, but even here I can think of a way it might break: suppose a user stores a key in a strict field of some data structure, with the compiler's "unbox-strict-fields" optimization enabled. If 'Uniq' is just a newtype wrapper around a machine integer, there may no longer be any object the GC can point to and say "that's the key"; so when the user goes and unpacks his key to use it, the map may have forgotten about it already. (Edit: this particular example can obviously be worked around: make Uniq's implementation something that can't be unboxed like that; the point is just that it's tricky, precisely because the compiler is trying to be helpful in a lot of ways we might not expect.)
TL;DR: I wouldn't say it can't be done, but I suspect that in many cases "optimizations" will either break or be broken by a weak hash map implementation, unless object identity is given first-class observable status.

Purely functional data structures can't change from the user's perspective. So, if I look up a key in a hash map, wait, and then look up the same key again, I have to get the same value. And since I can hold onto keys, bindings can't just disappear.
The only way it could work is if the API gives me the next generation and the values aren't collected until all references to the past versions of the container are released. Users of the data-structure are expected to periodically ask for new generations to release weakly held values.
EDIT (based on comment): I understand the behavior you want, but you can't pass this test with a map that releases objects:
FunctionalWeakHashMap map = new FunctionalWeakHashMap();
{ // make a scope so that o has no references afterwards
    Object o = new SomeObject();
    map["key"] = o;
} // at this point I lose all references to o, and the reference in the map is weak
// wait as much time as you think it takes for that weak reference to be collected,
// force it, etc.
Assert.isNotNull(map["key"]); // this must be true or the map is not persistent
I am suggesting that this test could pass:
FunctionalWeakHashMap map = new FunctionalWeakHashMap();
{ // make a scope so that o has no references afterwards
    Object o = new SomeObject();
    map["key"] = o;
} // at this point I lose all references to o, and the only reference in the map is weak
// wait as much time as you think it takes for that weak reference to be collected,
// force it, etc.
map = map.nextGen();
Assert.isNull(map["key"]);
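To make the intent concrete, here is a rough, untested Java sketch of such a map (all names are invented; the assumption is that each generation holds its values strongly and that nextGen() demotes the survivors to weak references, dropping anything the collector has already cleared):
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

final class FunctionalWeakHashMap<K, V> {
    // An entry is held strongly in the generation it was inserted into
    // and becomes weak once a later generation is derived from it.
    private static final class Entry<V> {
        final V strong;               // null once demoted
        final WeakReference<V> weak;
        Entry(V strong, WeakReference<V> weak) { this.strong = strong; this.weak = weak; }
    }

    private final Map<K, Entry<V>> entries;

    FunctionalWeakHashMap() { this(new HashMap<>()); }
    private FunctionalWeakHashMap(Map<K, Entry<V>> entries) { this.entries = entries; }

    // put never mutates this generation; it returns a new one.
    FunctionalWeakHashMap<K, V> put(K key, V value) {
        Map<K, Entry<V>> copy = new HashMap<>(entries);
        copy.put(key, new Entry<>(value, new WeakReference<>(value)));
        return new FunctionalWeakHashMap<>(copy);
    }

    V get(K key) {
        Entry<V> e = entries.get(key);
        if (e == null) return null;
        return e.strong != null ? e.strong : e.weak.get();
    }

    // Demote surviving entries to weak references; drop entries whose
    // referents the collector has already cleared.
    FunctionalWeakHashMap<K, V> nextGen() {
        Map<K, Entry<V>> next = new HashMap<>();
        entries.forEach((k, e) -> {
            V v = e.strong != null ? e.strong : e.weak.get();
            if (v != null) next.put(k, new Entry<>(null, e.weak));
        });
        return new FunctionalWeakHashMap<>(next);
    }
}
A value stays strongly referenced by the generations created between its insertion and the first nextGen() call that follows; once those generations are unreachable and the collector has run, lookups on newer generations start returning null. So this approximates, rather than guarantees, the timing in the test above.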

Related

Destroying a Structure or Class in Common Lisp

I am wondering what the options are for destroying a structure created via defstruct in Common Lisp. It appears that a constructor is automatically provided; however, that is not the case for a destructor.
What are the ways one can "clear" a structure from memory? Things like (setq my-struct NIL) come to mind, but I am unsure whether this is the cleanest way to do it.
Edit:
The question arose when I was trying to test what happens if I keyed something into a hashtable using a structure as the key and then destroyed that structure. What would happen to the key in the hashtable? I guess this is more of a question on how hashtables are implemented.
Since Common Lisp is a garbage-collected language, the garbage collector will remove the structure from memory once it is no longer in use (i.e. no longer referenced from anywhere). So yes, when you bind a structure to a name (e.g. my-struct) and then assign nil to that name, the structure becomes eligible to be removed from memory.
When you also use that structure as a hash-table key, it has one more reference, so even after you assign nil to my-struct, the structure will remain in memory until you remove it from the hash table.
It's worth noting that make-hash-table also takes an optional :test argument:
test---a designator for one of the functions eq, eql, equal, or equalp. The default is eql.
http://www.lispworks.com/documentation/HyperSpec/Body/f_mk_has.htm
When you use structures as keys, you should set :test to equalp.
There is no way for a user to 'clear' an object from memory; freeing the memory is precisely the garbage collector's job.
One may be able to use the non-standard mechanism of finalizers, which lets you schedule an action to run when the garbage collector is about to destroy an object.
See finalize in a library such as trivial-garbage.

Move Semantics in Golang

This is from Bjarne Stroustrup's The C++ Programming Language, Fourth Edition, section 3.3.2.
We didn’t really want a copy; we just wanted to get the result out of
a function: we wanted to move a Vector rather than to copy it.
Fortunately, we can state that intent:
class Vector {
// ...
Vector(const Vector& a); // copy constructor
Vector& operator=(const Vector& a); // copy assignment
Vector(Vector&& a); // move constructor
Vector& operator=(Vector&& a); // move assignment
};
Given that definition, the compiler will choose the move constructor
to implement the transfer of the return value out of the function.
This means that r=x+y+z will involve no copying of Vectors. Instead,
Vectors are just moved. As is typical, Vector’s move constructor is
trivial to define...
I know Golang supports traditional passing by value and passing by reference using Go style pointers.
Does Go support "move semantics" the way C++11 does, as described by Stroustrup above, to avoid the useless copying back and forth? If so, is this automatic, or does it require us to do something in our code to make it happen?
Note: A few answers have been posted - I have to digest them a bit, so I haven't accepted one yet - thanks.
The breakdown is as follows:
Everything in Go is passed by value.
But there are five built-in "reference types" which are also passed by value, yet internally hold references to separately maintained data structures: maps, slices, channels, strings and function values (there is no way to mutate the data the latter two reference).
Your own answer, @Vector, is incorrect in that nothing in Go is passed by reference. Rather, there are types with reference semantics; values of those types are still passed by value (sic!).
Your confusion presumably stems from the fact that you are currently thinking in C++/Java terms, while these things in Go are done mostly "as in C".
Take arrays and slices for instance. An array is passed by value in Go, but a slice is a packed struct containing a pointer (to an underlying array) and two platform-sized integers (the length and the capacity of the slice), and it's the value of this structure which is copied — a pointer and two integers — when it's assigned or returned etc. Should you copy a "bare" array, it would be copied literally — with all its elements.
The same applies to channels and maps. You can think of the types behind maps, slices and channels as being declared something like this:
type Map struct {
impl *mapImplementation
}
type Slice struct {
impl *sliceImplementation
}
(By the way, if you know C++, you should be aware that some C++ code uses this trick to lower exposure of the details into header files.)
So when you later have
m := make(map[int]string)
you could think of it as m having the type Map and so when you later do
x := m
the value of m gets copied, but it contains just a single pointer, and so both x and m now reference the same underlying data structure. Was m copied by reference ("move semantics")? Surely not! Do values of map, slice and channel types have reference semantics? Yes!
Note that these built-in types are not at all special in this regard: implementing your own type by embedding in it a pointer to some complicated data structure is a rather common pattern.
In other words, Go allows the programmer to decide what semantics they want for their types. And Go happens to have five built-in types which already have reference semantics (while all the other built-in types have value semantics). Picking one semantics over the other does not affect the rule of copying everything by value in any way. For instance, it's fine to have pointers to values of any kind of type in Go, and assign them (so long as they have compatible types) — these pointers will be copied by value.
Another angle to look at this is that many Go packages (standard and 3rd-party) prefer to work with pointers to (complex) values. One example is os.Open() (which opens a file on a filesystem) returning a value of the type *os.File. That is, it returns a pointer and expects the calling code to pass this pointer around. Surely, the Go authors might have declared os.File to be a struct containing a single pointer, essentially giving those values reference semantics, but they did not do that. I think the reason for this is that there's no special syntax to work with values of this type, so there's no reason to make them behave like maps, channels and slices. KISS, in other words.
Recommended reading:
"Go Data Structures"
"Go Slices: Usage and Internals"
"Arrays, slices (and strings): The mechanics of 'append'"
A thread on golang-nuts — pay close attention to the reply by Rob Pike.
The Go Programming Language Specification
Calls
In a function call, the function value and arguments are evaluated in
the usual order. After they are evaluated, the parameters of the call
are passed by value to the function and the called function begins
execution. The return parameters of the function are passed by value
back to the calling function when the function returns.
In Go, everything is passed by value.
Rob Pike
In Go, everything is passed by value. Everything.
There are some types (pointers, channels, maps, slices) that have
reference-like properties, but in those cases the relevant data
structure (pointer, channel pointer, map header, slice header) holds a
pointer to an underlying, shared object (pointed-to thing, channel
descriptor, hash table, array); the data structure itself is passed by
value. Always.
Always.
-rob
It is my understanding that Go, like Java and C#, never had the excessive copying costs of C++, but it also does not solve ownership transfer into containers, so there is still copying involved. As C++ becomes more of a value-semantics language, with references/pointers being relegated to i) smart-pointer-managed objects inside classes and ii) dependence references, move semantics solves the problem of excessive copying. Note that this has nothing to do with "pass by value"; nowadays everyone passes objects by reference (&) or const reference (const &) in C++.
Let's look at this (1):
BigObject BO(big,stuff,inside);
vector<BigObject> vo;
vo.reserve(1000000);
vo.push_back(BO);
Or (2)
vector<BigObject> vo;
vo.reserve(1000000);
vo.push_back(BigObject(big,stuff,inside));
Although BO is passed to push_back by (const) reference, in C++03 a copy was still made inside the vector code.
In the second case, there is a temporary object that has to be constructed and then is copied inside the vector. Since it can only be accessed by the vector, that is a wasteful copy.
However, in the first case, our intent could be just to hand control of BO over to the vector itself. C++11 allows this:
(1, C++11)
vector<BigObject> vo;
vo.reserve(1000000);
vo.emplace_back(big,stuff,inside);
Or (2, C++11)
vector<BigObject> vo;
vo.reserve(1000000);
vo.push_back(BigObject(big,stuff,inside));
From what I've read, it is not clear that Java, C# or Go are exempt from the same copy duplication that C++03 suffered from in the case of containers.
The old-fashioned COW (copy-on-write) technique also had the same problem, since the resources would be copied as soon as the object inside the vector was duplicated.
Stroustrup is talking about C++, which allows you to pass containers, etc by value - so the excessive copying becomes an issue.
In Go (as in Delphi, Java, etc.), when you pass a container type, it is always a reference, so it's a non-issue. Regardless, you don't have to deal with it or worry about it in Go - the compiler just does what it needs to do, and from what I've seen so far, it does it right.
Thanks to @KerrekSB for putting me on the right track.
@KerrekSB - I hope this is the right answer. If it's wrong, you bear no responsibility. :)

ToLookup vs. ReadOnlyCollection

I've been using LINQ-to-objects for quite a while, but I just now noticed the Enumerable.ToLookup extension method and read its documentation. I came across it while looking for the quickest way to get a read-only interface to an IEnumerable<T>. It seems to me that appending .ToLookup( o => o ) onto the enumerable results in a System.Linq.Lookup object that can serve the same purpose as a ReadOnlyCollection<T>.
So why would I ever create a direct instance of ReadOnlyCollection<T> again?
A lookup is not, conceptually, the same as a read-only enumerable. It's more like a dictionary where each key has multiple values, and is used to look up matching values by key. Calling ToLookup enumerates the input fully and builds the lookup.
A ReadOnlyCollection<T> would be far less expensive as it merely wraps any IList<T>, as well as matching the semantic meaning of a read only interface to an IEnumerable<T>.

Return concrete or abstract datatypes?

I'm in the middle of reading Code Complete, and towards the end of the book, in the chapter about refactoring, the author lists a bunch of things you should do to improve the quality of your code while refactoring.
One of his points was to always return as specific a type as possible, especially when returning collections, iterators, etc. So, as I've understood it, instead of returning, say, Collection<String>, you should return HashSet<String> if you use that data type inside the method.
This confuses me, because it sounds like he's encouraging people to break the rule of information hiding. Now, I understand this when talking about accessors, that's a clear cut case. But, when calculating and mangling data, and the level of abstraction of the method implies no direct data structure, I find it best to return as abstract a datatype as possible, as long as the data doesn't fall apart (I wouldn't return Object instead of Iterable<String>, for example).
So, my question is: is there a deeper philosophy behind Code Complete's advice of always returning as specific a data type as possible, and allow downcasting, instead of maintaining a need-to-know-basis, that I've just not understood?
I think it is simply wrong in most cases. It should be:
be as lenient as possible, be as specific as needed
In my opinion, you should always return List rather than LinkedList or ArrayList, because the difference is more an implementation detail than a semantic one. The authors of the Google collections API for Java take this one step further: they return (and accept) iterators where that's enough. But they also recommend returning ImmutableList, -Set, -Map, etc. where possible, to show the caller that he doesn't have to make a defensive copy.
Beside that, I think the performance of the different list implementations isn't the bottleneck for most applications.
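To illustrate the guideline with a small, made-up Java example: the method is lenient about its input, and its declared return type is the semantic one (List), so the concrete implementation behind it can change freely:
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class NameFilter {
    // Lenient on input: any Collection will do.
    // Strict only about the contract on output: a List, not a particular List class.
    public static List<String> nonEmptyNames(Collection<String> input) {
        List<String> result = new ArrayList<>();   // could become a LinkedList later
        for (String name : input) {
            if (name != null && !name.isEmpty()) {
                result.add(name);
            }
        }
        // Wrapping the result signals "no defensive copy needed", in the spirit of ImmutableList.
        return Collections.unmodifiableList(result);
    }

    public static void main(String[] args) {
        System.out.println(nonEmptyNames(List.of("a", "", "b"))); // prints [a, b]
    }
}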
Most of the time one should return an interface or perhaps an abstract type that represents the value being returned. If you are returning a list of X, then use List. This ultimately provides maximum flexibility if the need arises to change the list type being returned.
Maybe later you realise that you want to return a linked list or a read-only list, etc. If you commit to a concrete type you're stuck, and it's a pain to change. Using the interface solves this problem.
@Gishu
If your API requires that clients cast straight away most of the time, your design is flawed. Why bother returning X if clients need to cast it to Y?
Can't find any evidence to substantiate my claim but the idea/guideline seems to be:
Be as lenient as possible when accepting input. Choose a generalized type over a specialized type. This means clients can use your method with different specialized types. So an IEnumerable or an IList as an input parameter would mean that the method can run off an ArrayList or a ListItemCollection. It maximizes the chance that your method is useful.
Be as strict as possible when returning values. Prefer a specialized type if possible. This means clients do not have to second-guess or jump through hoops to process the return value. Also, specialized types have greater functionality. If you choose to return an IList or an IEnumerable, the number of things the caller can do with your return value drops drastically - e.g. if you return an IEnumerable instead of an ArrayList, the client must downcast just to get the number of elements via the Count property. But such downcasting defeats the purpose - it works today and breaks tomorrow (if the type of the returned object changes). So for all practical purposes the client can't get a count of elements easily, which leads him to write mundane boilerplate code (in multiple places or as a helper method).
The summary here is that it depends on the context (there are exceptions to most rules). E.g. if the most probable use of your return value is that clients will search the returned list for some element, it makes sense to return a list implementation (type) that supports some kind of search method. Make it as easy as possible for the client to consume the return value.
I could see how, in some cases, having a more specific data type returned could be useful. For example knowing that the return value is a LinkedList rather than just List would allow you to do a delete from the list knowing that it will be efficient.
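As a hypothetical Java illustration of that point (the producer method is invented for the example): because the declared type is LinkedList rather than plain List, the caller knows that removing from the front is cheap.
import java.util.LinkedList;

public class EvictionDemo {
    // Invented producer; the concrete LinkedList return type is the point here.
    static LinkedList<String> recentEvents() {
        LinkedList<String> events = new LinkedList<>();
        for (int i = 0; i < 150; i++) events.add("event-" + i);
        return events;
    }

    public static void main(String[] args) {
        LinkedList<String> events = recentEvents();
        // Knowing it is a LinkedList, evicting from the front is O(1) per element,
        // rather than the O(n) shift an ArrayList-backed List could imply.
        while (events.size() > 100) {
            events.removeFirst();
        }
        System.out.println(events.size()); // prints 100
    }
}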
I think, while designing interfaces, you should design a method to return as abstract a data type as possible. Returning a specific type makes the purpose of the method clearer about what it returns.
Also, I would understand it in this way:
Return as abstract a data type as possible = return as specific a data type as possible
i.e. when your method is supposed to return some collection data type, return Collection rather than Object.
Tell me if I'm wrong.
A specific return type is much more valuable because it:
reduces possible performance issues from discovering functionality through casting or reflection
increases code readability
does NOT, in fact, expose more than is necessary.
The return type of a function is specifically chosen to cater to ALL of its callers. It is the calling function that should USE the return variable as abstractly as possible, since the calling function knows how the data will be used.
Is it only necessary to traverse the structure? Is it necessary to sort it? Transform it? Clone it? These are questions only the caller can answer, and thus the caller can use an abstracted type. The called function MUST provide for all of these cases.
If, in fact, the most specific use case you have right now is Iterable<string>, then that's fine. But more often than not your callers will eventually need more details, so start with a specific return type - it doesn't cost anything.

What is an elegant way to track the size of a set of objects without a single authoritative collection to reference?

Update: Please read this question in the context of design principles, elegance, expression of intent, and especially the "signals" sent to other programmers by design choices.
I have two "views" of a set of objects. One is a dictionary/map indexing the objects by a string value. The other is a dictionary/map indexing the objects by an ordinal (ordering integer). There is no "master" collection of the objects by themselves that can serve as the authoritative source for the number of objects, but the two dictionaries should always both contain references to all the objects.
When a new item is added to the set a reference is added to both dictionaries, and then some processing needs to be done which is affected by the new total number of objects.
What should I use as the authoritative source to reference for the current size of the set of objects? It seems that all my options are flawed in one dimension or another. I can just consistently reference one of the dictionaries, but that would codify an implication of that dictionary's superiority over the other. I could add a 3rd collection, a simple list of the objects to serve as the authoritative list, but that increases redundancy. Storing a running count seems simplest, but also increases redundancy and is more brittle than referencing a collection's self-tracked count on the fly.
Is there another option that will allow me to avoid choosing the lesser evil, or will I have to accept a compromise on elegance?
I would create a class that has (at least) two collections.
A version of the collection that is sorted by string
A version of the collection that is sorted by ordinal
(Optional) A master collection
The class would handle the nitty gritty management:
The syncing of the contents for the collections
Standard collection actions (e.g. allow users to get the size, add or retrieve items)
Let users get by string or ordinal
That way you can use the same collection wherever you need either behavior, but still abstract away the "indexing" behavior you are going for.
The separate class gives you a single interface with which to explain your intent regarding how this class is to be used.
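A minimal Java sketch of what such a class might look like (all names are invented; removal and error handling are omitted):
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

final class IndexedSet<T> {
    private final Map<String, T> byName = new HashMap<>();
    private final TreeMap<Integer, T> byOrdinal = new TreeMap<>();

    // Adding goes through one place, so the two views can never drift apart.
    void add(String name, int ordinal, T item) {
        byName.put(name, item);
        byOrdinal.put(ordinal, item);
        onSizeChanged(size());   // the follow-up processing the question mentions
    }

    T getByName(String name)    { return byName.get(name); }
    T getByOrdinal(int ordinal) { return byOrdinal.get(ordinal); }

    // Either view's size would do; which one is consulted is an internal detail.
    int size() { return byName.size(); }

    private void onSizeChanged(int newSize) { /* placeholder hook */ }
}
Callers only ever see add, size and the two lookups, so the question of which map is "authoritative" disappears behind the class boundary.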
I'd suggest encapsulation: create a class that hides the "management" details (such as the current count) and use it to expose immutable "views" of the two collections.
Clients will ask the "manglement" object for an appropriate reference to one of the collections.
Clients adding a "term" (for lack of a better word) to the collections will do so through the "manglement" object.
This way your assumptions and implementation choices are "hidden" from clients of the service and you can document that the choice of collection for size/count was arbitrary. Future maintainers can change how the count is managed without breaking clients.
BTW, yes, I meant "manglement" - my favorite malapropism for management (in any context!)
If both dictionaries contain references to every object, the count should be the same for both of them, correct? If so, just pick one and be consistent.
I don't think it is a big deal at all. Just reference the sets in the same order each time you need to get access to them.
If you really are concerned about it you could encapsulate the collections with a wrapper that exposes the public interfaces - like
Add(item)
Count()
This way it will always be consistent and atomic - or at least you could implement it that way.
But, I don't think it is a big deal.
