I am creating a site using MVC and using beans to transfer the data from the Model to the Controller/Views.
I plan to implement some basic and very simple caching that will store the beans in a simple struct if they have not changed (as usage grows, we will implement a better caching system around ver 1.3).
So the question is what goes in our bean.
One type of bean would only hold the basic data and would rely on some outside service to do the rest of the work (contacting the DAO to get the query, parsing the query to load the bean values). This is the "anemic bean" model as I have been told repeatedly by a co-worker :-).
Another type of bean would be more self-contained. It would know where the DAO is, so it would call the DAO directly to get the data query. It would contain the functions necessary to parse out the query and set the properties. It would basically combine much of the "service" layer with the bean, leaving the direct database access in the DAO layer.
Of course, to the controller/views, both beans would look and act the same.
But the question is memory and how ColdFusion/Java deals with it.
With the anemic model, the bean would just have enough memory to hold the property variables with just a touch more to let it point to the Service when it needs to.
With the heavier functions in the second type of bean, would it take up more memory in the cache? Would each copy of the bean have a full copy of the methods?
I tend to think that the second model would not have much more memory since they would "share" the methods and would only need memory for the property variables.
IMHO the second model would simplify the codebase, since the code the bean needs would be closer to the bean rather than scattered between the DAO and Services. And it would eliminate simple functions in the Service that merely pass along calls to the DAO, since the bean could go directly to the DAO when it needed to...
Does the question make sense?? Or at least how I am asking it?
All the memory management is handled at the Java level, so it follows the same rules. In Java the only "new" memory that's allocated when an object instance is created is for its member variables; there is no memory footprint for the methods of the component/class itself: that code is stored in memory only once, with each instance holding a reference back to it.
One possible consideration is that each method of a CFC is compiled as its own discrete class (why, I don't know), so each method is its own class. This perhaps means a slightly larger memory footprint for CFC usage compared to plain Java class usage, but it is still not something that scales with object instantiation: each instance of an object still consumes memory only for its member variables, not for the methods of the CFC that defines it.
All CFM pages are compiled to memory by default. A CFC needs to be explicitly stored in memory (in the application scope, for example) in order to avoid instantiating it each time; however you do it, the same component requires the same memory. Any additional usage will depend on the data you are storing within your component or bean.
Have you had a look at ColdSpring?
What happens to the methods inside the objects of the heap?
So I have been reading about stack and heap memory management.
Methods and variables(inside methods) are stored in the stack.
Objects and Instance variables are stored inside the heap.
When an object is used inside a method, the stack holds a pointer to the object on the heap.
I would assume these methods are stored on the stack too, since "methods are stored on the stack", but I am unable to find confirmation of this. What happens to, for example, the constructor?
Articles or tutorial videos I have seen only give examples of methods in the main class.
Anyone able to answer this question?
I will explain based on how it works in Java.
Methods and variables(inside methods) are stored in the stack.
Local variables (variables inside methods) are stored in the stack. But not the method itself.
By "method", we refer to the behaviour, i.e. the list of instructions that needs to be executed. This does not vary per method call, nor per object instance created; the behaviour remains the same at the class level.
The behaviour is stored in a region called the method area. You can refer to the Java Virtual Machine Specification for more details.
As per spec,
The method area is created on virtual machine start-up. Although the method area is logically part of the heap, simple implementations may choose not to either garbage collect or compact it. This version of the Java Virtual Machine specification does not mandate the location of the method area or the policies used to manage compiled code.
It is left to the JVM implementation on where the method area is located.
Implementations like the HotSpot VM, until Java 7, used to store the method area as part of the heap (the permanent generation). From Java 8 onwards, it is moved out of the heap (into Metaspace, allocated in native memory), so the space allocated for the heap is not consumed by the method area.
What happens to for example the constructor?
Constructors are methods with a special name, <init>.¹ They are stored in the same way as other methods.
As a side note, there is also a class initialization method, called <clinit>, which handles static blocks in a class.²
OK, so Realm (.NET) doesn't support async queries in its current version.
In case the underlying table for a certain RealmObject contains a lot of records, say in the hundreds of thousands or millions, what is the preferred approach (given the current no async limitation)?
My current options (none tested thus far):
On the UI thread use Realm.GetInstance().All<T> and filter it (and then enumerate the IEnumerable). My assumption is that the UI thread will block waiting for this possibly lengthy operation.
Do the previous on a worker thread. The downside would be that all RealmObjects need to be mapped to some auxiliary domain model (or even the same model, but disconnected from Realm), because Realm objects cannot be shared/marshaled between threads.
Is there any recommended approach (by the Realm creators, of course)? I'm aware this doesn't completely fit the question model for this site, but so be it.
Realm enumerators are truly lazy and the All<T> is a further special case, so it is certainly fast enough to do on the UI thread.
Even queries are so fast that, most of the time, we recommend people do them on the UI thread.
To enlarge on my comment on the question, RealmObject subclasses are woven at compile time with the property getters and setters being mapped to call directly through to the C++ core, getting memory-mapped data.
That keeps updates between threads lightning fast, as well as delivering our incredible column-scanning speed. Most cases do not require indexes nor do they need running on separate threads.
If you create a standalone RealmObject subclass eg: new Dog() it has a flag IsManaged==false which means the getter and setter methods still use the backing field, as generated by the compiler.
If you create an object with CreateObject, or you take a standalone object into the Realm with Realm.Manage, then IsManaged==true and the backing field is ignored.
I am trying to create a custom caching mechanism where I return a weak_ptr to the cached object. Internally, I hold a shared_ptr to control the lifetime of the object.
When the pre-set maximum cache size is consumed, the disposer looks for cache objects that have not been accessed for a long time and cleans them up.
Unfortunately this may not be ideal. If it were possible to check how many weak_ptrs still refer to a cached object, that could be a criterion for deciding whether to clean it up or not.
It turns out there is no way to check how many weak_ptr(s) hold a handle to the resource.
But when I look at the shared_ptr documentation and implementation notes, the number of weak_ptrs that refer to the managed object is part of the implementation (it is tracked in the control block). Why is this not exposed through an API?
Within VB6 is it possible to implement the Singleton design pattern?
Currently the legacy system I work on has a large amount of IO performed by multiple instances of a particular class. It is desirable to clean up all these instances and have the IO performed by one instance only. This would allow us to add meaningful logging and monitoring to the IO routines.
There are so many ways to do this, and it depends if this is a multi-project application with different dlls or a single project.
If it is a single project and there is a large amount of code that you are worried about changing/breaking, then I suggest the following:
Given a class clsIOProvider that is instantiated all over the place, create a module modIOProvider in the same project.
For each method / property defined in clsIOProvider, create the same set of methods in modIOProvider.
The implementation of those methods, as well as the instance data of the class, should be cloned from clsIOProvider to modIOProvider.
All methods and properties in clsIOProvider should be changed to forward to the implementation in modIOProvider. The class should no longer have any instance data.
(Optional) If the class requires the use of the constructor and destructor (Initialize/Terminate), forward those to modIOProvider as well. Add a single instance counter within modIOProvider to track the number of instances. Run your initialization code when the instance counter goes from 0 to 1, and your termination code when it goes from 1 to 0.
The advantage of this is that you do not have to change the code in the scores of places that are utilizing the clsIOProvider class. They are happily unaware that the object is now effectively a singleton.
If coding a project from scratch I would do this a bit differently, but as a refactoring approach what I've outlined should work well.
It's easy enough to create and use only one instance of an object. How you do it depends on your code, what it does, and where it's called from.
In one process, you can just have a single variable in a global module with an instance, maybe with a factory function that creates it on first use.
If it's shared by multiple process, it complicates things, but can be done with an ActiveX EXE and the Running Object Table.
We're working on a large windows forms .NET application with a very large database. Currently we're reaching 400 tables and business objects but that's maybe 1/4 of the whole application.
My question now is, how to handle this large amount of mapping files with NHibernate with performance and memory usage in mind?
The business objects and their mapping files are already separated into different assemblies. But I believe that an NH SessionFactory with all assemblies will use a lot of memory and performance will suffer. Yet if I build different factories with only a subset of assemblies (maybe something like a domain context, which separates the assemblies into logical parts), I can't exchange objects between them easily, and only have access to a subset of objects.
Our current approach is to separate the business objects with the help of a context attribute. A business object can be part of multiple contexts. When a SessionFactory is created all mapping files of a given context (one or more) are merged into one large mapping file and compiled to a DLL at runtime. The Session itself is then created with this new mapping DLL.
But this approach has some serious drawbacks:
The developer has to take care of the assembly references between the business object assemblies;
The developer has to take care of the contexts, or NHibernate will not find the mapping for a class;
The creation of the new mapping file is slow;
The developer can only access business objects within the current context - any other access will result in an exception at runtime.
Maybe there is a completely different approach? I'd be glad to hear any new thoughts on this.
The first thing you need to know is that you do not need to map everything. I have a similar case at work where I mapped the main subset of objects/tables I was to work against; for the others I either used ad-hoc mapping or did simple SQL queries through NHibernate (session.createSqlQuery). Of the ones I mapped, for a few I used Automapper, and for the peskier ones, regular Fluent mapping (heck, I even have NHibernate calls which span different databases, like human resources, finances, etc.).
As far as performance, I use only one session factory, and I personally haven't seen any drawbacks to this approach. Sure, Application_Start takes longer than in your regular ADO.NET application, but after that it's smooth sailing through and through. It would be even slower to open and close session factories on demand, since they do take a while to freshen up.
Since SessionFactory should be a singleton in your application, the memory cost shouldn't be that important in the application.
Now, if SessionFactory is not a singleton, there's your problem.