NavigableMap support in Chronicle-Map?

Is there any way to create a Java NavigableMap in Chronicle-Map?
Thanks in advance for your support.

No, because Chronicle Map is a hash-based, shared-nothing data structure, i.e. it's more like HashMap than TreeMap in Java. For a fast persistent data store that keeps keys in order, I recommend LMDB (check out https://github.com/lmdbjava/lmdbjava); however, I'm not sure it provides the NavigableMap interface out of the box.
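For illustration, here is a minimal sketch of typical Chronicle Map usage, assuming the standard builder API in net.openhft.chronicle.map; the file name, sizing hints, and sample entries are placeholders. The map behaves like a persisted ConcurrentMap: there are no ordered views such as firstKey(), ceilingKey() or subMap().

    import net.openhft.chronicle.map.ChronicleMap;

    import java.io.File;
    import java.io.IOException;

    public class ChronicleMapExample {
        public static void main(String[] args) throws IOException {
            // Chronicle Map is built via a builder and persisted off-heap to a file.
            try (ChronicleMap<CharSequence, CharSequence> phoneBook = ChronicleMap
                    .of(CharSequence.class, CharSequence.class)
                    .name("phone-book")
                    .entries(1_000_000)                // expected number of entries
                    .averageKey("alice@example.com")   // sizing hints for variable-length types
                    .averageValue("+1-555-0100")
                    .createPersistedTo(new File("phone-book.dat"))) {

                phoneBook.put("alice@example.com", "+1-555-0100");

                // Iteration order over keySet()/entrySet() follows the hash layout,
                // not key order, which is why a NavigableMap view cannot be offered.
                phoneBook.forEach((k, v) -> System.out.println(k + " -> " + v));
            }
        }
    }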

Related

Custom Serialization to allow Maps in the ngrx store?

We are receiving a substantial blob of data from a server and storing it in the ngrx store. Some of it should be organized as a map from keys to values, so we wrote:
Immutable.Map<string, string>()
as the data type. (There are other places where the types would be <string, some_serializable_class>.) We enabled all of the runtime checks, and ngrx reminded us that these maps are not serializable. It would not take very much code to safely serialize and deserialize them, of course.
Is there a place in the ngrx architecture to put our own serialization? If we do so, will the runtime checks be aware of it?
@ngrx/entity turns out to be the intended solution to this problem.

Is there any embeddable key-value store for Ruby?

I need a fast and reliable key-value store for Ruby. Is there anything like that already?
The requirement is for it to run wholly inside the Ruby process, without needing any outside processes.
It might be in-memory with explicit disk flushes.
It needs to have minimal value-for-key retrieval times; write times may be less critical.
The amount of data stored won't be huge: a few hundred thousand keys, each with a ~1 kB text value.
It turns out that the best option for me was to use a plain Hash along with Marshal to serialize it to disk.
YAML is definitely too slow for that number of objects.
Thanks to @ian-armit for reinforcing my trust in the core Ruby libraries.
You could also try Moneta, which allows you to build your own key/value store embedded in a Ruby process.
Like DBM? http://www.ruby-doc.org/stdlib-1.9.3/libdoc/dbm/rdoc/DBM.html
The DBM class provides a wrapper to a Unix-style dbm or Database Manager library.
Dbm databases do not have tables or columns; they are simple key-value data stores, like a Ruby Hash except not resident in RAM. Keys and values must be strings.
You could try Oria: https://github.com/intridea/oria
Oria (oh-rye-uh) is an in-memory, Ruby-based, zero-configuration Key-Value Store. It's designed to handle moderate amounts of data quickly and easily without causing deployment issues or server headaches. It uses EventMachine to provide a networked interface to a semi-persistent store and asynchronously writes the in-memory data to YAML files.
Check out PStore. Not sure if it's fast enough though.
Daybreak is a nice new option. Data is stored in a table in memory, so Ruby niceties are available (each, filter, map, reduce, etc.), and it appears to be faster than PStore or DBM.
See this blog post for more info.
There's LevelDB; here are the Ruby bindings.

Cocoa: what's the best way of designing a persistent cache?

I have to download some info from the Internet, such as a person's phone number, and I want to save it to disk so I can load it when my application starts. Is Core Data the best choice for this? I mean, is it fast enough? I also want to load the info into an NSCache object; is that a good class to use?
One option is plist-based caching: key->value, strings only. Easy to code; for small amounts of data I would recommend this (described here).
The other option is NSArchiver->NSData: binary storage, any type of data, but you have to serialize and deserialize it yourself. More coding, but no limits (well, you are doing the transformation). I prefer this one, because later in development I may need to cache data other than text; usually you need to cache images too. (Presented here; the good answer there actually has a downvote!)
If you are storing anything that will be used between launches of the application, then Core Data is the way to go unless you have really, really basic requirements. NSCache is better as a temporary cache used while the application is running, for data that can be recalculated if it does not already exist.

Infinispan + Kryo/Google Protocol Buffers to achieve more space- and time-efficient serialization?

If I understand correctly, Infinispan/JBoss Cache uses Java's own serialization mechanism, which can be slow and take relatively more storage space. I have been looking for alternatives that can achieve the following:
1. Automatic cache management, in other words objects that are used more frequently are automatically loaded into memory
2. More efficient serialization (perhaps object -> compact binary storage); the primary goal is less disk/memory space consumption without sacrificing too much performance
Is there a framework or library that achieves both?
JBoss Cache did use Java Serialization, but Infinispan does not. Instead it uses JBoss Marshalling to produce tiny payloads and to cache streams. If you enable storeAsBinary in Infinispan, it will store Java objects in their marshalled form.
Re 1: not in either product yet.
Re 2: supported in Infinispan using storeAsBinary. More info at https://docs.jboss.org/author/display/ISPN/Marshalling
Btw, if this does not convince you, you can always let Protobuf generate the byte[] that you need and stick it inside Infinispan.
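If you go the Protobuf route, the pattern is simply to serialize to a byte[] outside the cache and store the bytes. Below is a minimal sketch, assuming an embedded DefaultCacheManager and a hypothetical protoc-generated UserProto.User message; any serializer that yields a byte[] (Protobuf, Kryo, ...) fits the same shape.

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    public class ProtobufInInfinispan {
        public static void main(String[] args) throws Exception {
            // Embedded cache manager with a simple local cache (configuration kept minimal).
            EmbeddedCacheManager cacheManager = new DefaultCacheManager();
            cacheManager.defineConfiguration("binary-cache", new ConfigurationBuilder().build());
            Cache<String, byte[]> cache = cacheManager.getCache("binary-cache");

            // UserProto.User is a hypothetical protoc-generated class.
            byte[] payload = UserProto.User.newBuilder()
                    .setId(42)
                    .setName("Alice")
                    .build()
                    .toByteArray();

            // The cache only ever sees compact binary values.
            cache.put("user:42", payload);

            // Deserialize on the way out.
            UserProto.User user = UserProto.User.parseFrom(cache.get("user:42"));
            System.out.println(user.getName());

            cacheManager.stop();
        }
    }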

Usage of RemoteCache with the DeltaAware and Delta interfaces in Infinispan

I need some guidance related to the following scenario in Infinispan. Here is my scenario:
1) I created two nodes and started them successfully in Infinispan using client-server mode.
2) In the Hot Rod client I created a RemoteCacheManager and then obtained a RemoteCache.
3) In the remote cache I put a value like this: cache.put(key, new HashMap()); it was added successfully.
4) Now when I go to clear this value using cache.remove(key), I see that it is not getting removed and the HashMap is still there every time I try to remove it.
How can I clear the value so that it is removed from all nodes of the cluster?
How can I also propagate changes to the HashMap value above, such as adding or removing entries?
Does it have anything to do with implementing the DeltaAware and Delta interfaces?
Please point me to this concept or some resources where I can learn more.
Thank you
Removal of the HashMap should work as long as you use the same key and have equals() and hashCode() correctly implemented on the key. I assume you're using distributed or replicated mode.
EDIT: I've realized that equals() and hashCode() are not that important for RemoteCache, since the key is serialized anyway and all the comparison will be executed on the underlying byte[].
RemoteCache does not directly support DeltaAware. Generally, using these is quite tricky even in library mode.
If you want to use the cache with maps, I suggest using a composite key like cache-key#map-key rather than storing a complex HashMap, as sketched below.
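For illustration, a minimal sketch of that composite-key approach with the Hot Rod client, assuming a server at 127.0.0.1:11222 and a cache named "myCache" (both placeholders):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class CompositeKeyExample {
        public static void main(String[] args) {
            // Connect to the Hot Rod server (host, port and cache name are assumptions).
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222);
            RemoteCacheManager manager = new RemoteCacheManager(builder.build());
            RemoteCache<String, String> cache = manager.getCache("myCache");

            // Instead of cache.put("order:1", new HashMap<...>()), flatten the map:
            // each logical (cache-key, map-key) pair becomes one composite cache key.
            cache.put("order:1#status", "NEW");
            cache.put("order:1#customer", "alice");

            // Updating or removing a single "map entry" is now an ordinary cache
            // operation that replicates across the cluster; no DeltaAware needed.
            cache.put("order:1#status", "SHIPPED");
            cache.remove("order:1#customer");

            manager.stop();
        }
    }

The trade-off is that you can no longer fetch the whole map with a single get; if you need that, keep a small index entry (for example, a list of the map-keys) alongside the flattened entries.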
