I am developing a Windows application using C# .NET. It is in fact a plug-in that is installed into a DBMS. The purpose of this plug-in is to read all the records (a record is an object) in the DBMS that match the provided criteria and transfer them to my local file system as XML files. My problem is related to memory usage. Everything works fine, but each record I read occupies memory, and after a certain limit the plug-in stops working because it runs out of memory.
I am dealing with around 10k-20k records (objects). Are there any memory-related methods in C# to clear the memory used by each record as soon as it has been written to the XML file? I tried all the basic memory-handling methods like clear(), flush(), gc(), and finalize(), but to no avail.
Please consider the following:
A record is an object; I cannot change this and use other, more efficient data structures.
Each time I read a record I write it to XML, and I repeat this again and again.
C# is a garbage collected language. Therefore, to reclaim memory used by an object, you need to make sure all references to that object are removed so that it is eligible for collection. Specifically, this means you should remove the objects from any data structures that are holding references to them after you're done doing whatever you need to do with them.
If you get a little more specific about what type of data structures you're using we can probably give a more specific answer.
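To illustrate the pattern, here is a minimal sketch (written in Java; the C# equivalent with e.g. XmlWriter looks essentially the same). RecordCursor, Record and toXml() are hypothetical stand-ins for whatever your plug-in's API provides; the point is simply that each record is written out immediately and nothing keeps a reference to it afterwards, so the garbage collector can reclaim it:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;

public class RecordExporter {

    // Hypothetical stand-ins for the plug-in API: a cursor that streams
    // matching records one at a time, and a record that can render itself as XML.
    interface RecordCursor { boolean hasNext(); Record next(); }
    interface Record { String toXml(); }

    public void export(RecordCursor cursor, Path outputFile) throws IOException {
        try (Writer out = Files.newBufferedWriter(outputFile)) {
            out.write("<records>\n");
            while (cursor.hasNext()) {
                Record record = cursor.next(); // only the current record is referenced
                out.write(record.toXml());
                out.write('\n');
                // No list.add(record): after this iteration nothing references
                // the record any more, so it becomes eligible for collection.
            }
            out.write("</records>\n");
        }
    }
}
```

If the plug-in API forces you to keep the records in a collection, clear that collection (or null out the slots) as soon as each record has been written, so the references are gone.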
I just started learning MicroStream. After going through the examples published to the MicroStream GitHub repository, I wanted to test its performance with an application that deals with more data.
Application source code is available here.
Instructions to run the application and the problems I faced are available here.
To summarize, below are my observations:
While loading a file with 2.8+ million records, processing takes 5 minutes
While calculating statistics based on loaded data, application fails with an OutOfMemoryError
Why is microstream trying to load all data (4 GB) into memory? Am I doing something wrong?
MicroStream is not like a traditional database; it starts from the concept that all data live in memory, and an object graph can be stored to disk (or other media) when you store it through the StorageManager.
In your case, all the data are in one list, so accessing that list reads every record from disk. The Lazy reference isn't useful the way you have used it, since it only guards access to that single list holding all the data.
Some optimizations you can introduce:
Split the data based on vendorId, or on day, using a Map<String, Lazy<List>>.
When a Map value is 'processed', remove it from memory again by clearing the lazy reference (see the sketch after this list): https://docs.microstream.one/manual/5.0/storage/loading-data/lazy-loading/clearing-lazy-references.html
Increase the number of channels to optimize reading and writing the data; see https://docs.microstream.one/manual/5.0/storage/configuration/using-channels.html
Don't store the object graph every 10,000 lines; store it just once at the end of the load.
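To make the first two points concrete, here is a rough sketch of the idea. DataRoot and SalesRecord are made-up names, and while Lazy, EmbeddedStorage and EmbeddedStorageManager are the usual MicroStream types, the exact package names and calls may differ between versions, so treat this as an outline rather than a drop-in fix:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import one.microstream.reference.Lazy;
import one.microstream.storage.embedded.types.EmbeddedStorage;
import one.microstream.storage.embedded.types.EmbeddedStorageManager;

public class LazyPartitionSketch {

    // Hypothetical root: one lazily loaded partition per vendorId
    // instead of one huge eagerly loaded list.
    static class DataRoot {
        final Map<String, Lazy<List<SalesRecord>>> byVendor = new HashMap<>();
    }

    static class SalesRecord { /* fields omitted */ }

    public static void main(String[] args) {
        DataRoot root = new DataRoot();
        EmbeddedStorageManager storage = EmbeddedStorage.start(root);

        // Loading: register each vendor's records behind their own Lazy reference.
        root.byVendor.put("vendor-42", Lazy.Reference(new ArrayList<SalesRecord>()));
        storage.store(root.byVendor); // store the changed map once, not every 10,000 lines

        // Processing: load one partition, use it, then release it again.
        Lazy<List<SalesRecord>> partition = root.byVendor.get("vendor-42");
        List<SalesRecord> records = partition.get(); // loads only this partition from disk
        // ... calculate statistics on 'records' ...
        partition.clear(); // drops the loaded list so the GC can reclaim the memory

        storage.shutdown();
    }
}
```

With the data split like this, the statistics step only ever has one vendor's (or one day's) records in memory at a time instead of the whole 4 GB.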
Hope this helps you solve the issues you have at the moment
I'm learning about Redis/Memcache, and Redis is clearly the more popular option. My question is about supported data types. At my company we use the memcashier library, which is built on Memcached. We store temporary user data in Memcache while they're making a purchase, and we can easily update this object as things are added to the cart or as more information about the user is given. This appears to be the same functionality as a hash in Redis. I don't understand how this is only a basic string data type, or why it's less powerful than a hash.
If you are using strings, that's fine - but any change involves loading the data to your application, parsing it, modifying it, and serializing it back to Redis/Memcache.
This has two problems: it's slow and non-atomic. Two servers modifying the same object can arrive at an inconsistent state, such as doubled or missing items in a shopping cart. And again, it's slow.
With a Redis hash key, you can atomically modify specific fields of the object without loading the entire object into memory. Instead of read, parse, modify, save - you just update.
Besides, Redis has many, many data structures that let you create very flexible data stores with different properties, whereas Memcache can only store strings.
BTW, Redis has a module that allows you to store JSON objects just as you would a string, and manipulate them directly and atomically without pulling them back to the client. See Rejson.io for details.
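To make the difference concrete, here is a small sketch using the Jedis client for Java; the key and field names are invented for the example, not taken from your application:

```java
import redis.clients.jedis.Jedis;

public class CartHashSketch {

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "cart:1001"; // made-up key for one user's in-progress purchase

            // Hash approach: each field is set or incremented atomically on the
            // server, without reading and rewriting the whole cart.
            jedis.hset(key, "user", "alice");
            jedis.hset(key, "item:42", "1");
            jedis.hincrBy(key, "item:42", 1); // two app servers doing this never lose an update

            System.out.println(jedis.hgetAll(key)); // {user=alice, item:42=2}

            // String approach (what a Memcached-style value forces on you):
            // GET the serialized cart, deserialize, modify, serialize, SET it back.
            // Another client can overwrite your change between the GET and the SET.
        }
    }
}
```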
Memcached doesn't support complex data structures.
In Redis you have Lists, Sets, Sorted Sets, Hashes, and more.
Each data structure mentioned above supports atomic mutation of one or more of its elements without replacing the entire structure/value.
Memcached, on the other hand, is a simple key-value store; every operation involving an attribute change within a complex object is a read-modify-write. If you just go around blindly replacing fields in objects, you risk race conditions and atomicity issues (which you can work around by using CAS).
If the library abstracts that complexity away, great, but it's still less efficient than mutating only the relevant field(s).
This answer only relates to your use case. Redis holds many other virtues over Memcached that are not relevant to this question.
Let's say I am sending log entries to ElasticSearch. We are considering adding the calling method, calling class, and line of code to our log entries. Since these fields will contain similar values, would ElasticSearch attempt to preserve disk space by not copying this data for every occurrence of the same value?
EDIT - Additional clarification: I did not read anywhere that Elastic does this. I know that some data storage systems, like columnar databases, write their data to disk in a way that preserves disk storage by not writing duplicated data over and over again. So I am wondering if ElasticSearch implements similar techniques.
As far as I know: no, it doesn't. It would make several key features quite hard I believe, and I have not seen any reference to this practice.
It's tricky to 'prove' the non-existence of some mechanism unless you look at all the source code, but I would expect this page about disk usage tuning to mention such a practice if it existed.
Did you read anywhere about this, or does it just seem practical to you?
I would like to ask a very general question about a technical concept; I do not know whether it exists or whether it is feasible at all.
The idea is the following:
I have an object in a garbage-collected language (e.g. C# or Java). The object may itself contain several objects, but there is no reference to any object that is not a sub-element of the object (or the object itself).
Theoretically it would be possible to get the memory used by this object, which is most likely not one contiguous piece. Because I have some knowledge about the objects, I can find all reference variables/properties and pointers that in the end point to another piece of memory (probably indirectly, depending on the implementation of the programming language and virtual machine). I can take these pieces of memory and combine them into one bigger piece, correcting the references/pointers so that they are still intact. This piece of memory, basically bytes, could be written to storage, for example a database or a Redis cache.
On another machine I could theoretically load this object again and put it into the memory of that virtual machine (maybe again correcting the references/pointers if they are absolute rather than relative). Then I should have the same object on the other VM. The object can be as complicated as I want, may also contain events or whatever, and I would be able to get the state of the object transferred to another VM (running on another computer). The only condition is that it must not contain references to anything outside the object. And of course I have to know the class type of the object on the other VM.
I ask this question because I want to share the state of an object, and I think all this serialization work is just overhead; it would be very simple if I could just freeze the memory and transport it to another VM.
Is something like this possible? I'd say yes, though it might be complicated, and maybe it is not possible with some VMs due to their architecture. Does something like this exist in any programming language? Maybe even in non-garbage-collected languages?
NOTE: I am not sure what tags should be added to this question other than programming-language, and I am not sure whether there might be a better place for such a question. So please forgive me.
EDIT:
Maybe the concept can be compared to the initrd on Linux or hibernation in general.
You will have to collect all references to other objects, including graphs of objects (cycles), without duplication. This would require some kind of 'stop the world', at least for the serializing thread. It's complicated to do efficiently, but possible: the native serialization mechanisms of many languages (e.g. Java) do it for the developer.
You will need some kind of VM to abstract away the byte order of different hardware architectures.
You will have to detach the object from any kind of environment. You can't pass objects representing threads, file handles, sockets, etc. How will you detect those?
In modern systems memory is virtual, so it is impossible to simply copy addresses from one machine to another; you will have to translate them.
Objects are not only the data visible to the developer; there is also structure, sandboxing information, permissions, superclasses, which methods/types have already been loaded and which haven't yet because of optimizations and lazy loading, garbage collector metadata, etc.
Versioning of your object/class: on one machine class A may have been created from source version 1, while on another machine there may already be objects of class A built from source version 2.
Take performance into consideration: will it be faster than old-school serialization, and what benefits will it have?
And probably many more things none of us has thought about.
So: I've never heard of such a solution. It seems theoretically doable, but for some reason no one has ever done it; everyone offers plain old programmatic serialization. Maybe you will discover a new, better way, but keep in mind you'll be going against the crowd.
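For comparison, here is roughly what that "plain old programmatic serialization" looks like in Java. The Node class is just a toy example, but it shows that the built-in mechanism already walks the object graph, handles cycles and duplicates, and produces bytes you could ship to another JVM:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class GraphSerializationExample {

    static class Node implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        Node next;               // may form a cycle
        Node(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        Node a = new Node("a");
        Node b = new Node("b");
        a.next = b;
        b.next = a;              // cycle: a -> b -> a

        // Serialize the whole graph starting from 'a'.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(a);  // the cycle is detected and encoded once, not infinitely
        }

        // "Transfer" the bytes (here just in memory) and rebuild the graph.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Node copy = (Node) in.readObject();
            System.out.println(copy.next.next == copy); // true: the cycle survived
        }
    }
}
```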
I have a Mac (not document) app, that uses CoreData.
When launching the app, it reads the data stored on the filesystem.
I have to set up some things in -(void)applicationDidFinishLaunching based on the information stored using CoreData.
So it would be nice to know when my app read everything from disk.
If I do my setup in -(void)applicationDidFinishLaunching it doesn't work. If I do it a few seconds later, it works!
Thx!
If you are using object controllers that automatically prepare their own content, you can observe arrangedObjects to find out when they have fetched their content. This does not guarantee that the actual objects are not faults. In fact, that's one of the main strengths of Core Data: objects are lazily loaded from disk.
If you for some reason want to make sure that most disk activity has taken place in applicationDidFinishLaunching, you can perform a custom fetch that specifically does not return objects as faults. Look up "prefetching" in the Core Data documentation. However, there is no guarantee that Core Data won't fault these objects at a later time due to memory constraints, thereby incurring another disk read when those objects are loaded again.
You can of course also use the NSBinaryStoreType, in which case the entire store is loaded into memory synchronously when it is added to the persistent store coordinator.