Flex big data volume performance (ADEP/LCDS dataservice)

Since we found a solution for Hibernate, the server side now loads the data very fast: less than a second for thousands of records or more. Now the problem is transporting the data from the server to the browser. There are two issues:
1. The DataGrid always waits until the ArrayCollection is fully loaded. We worked around this by specifying:
<destination id="someAssemble">
    <properties>
        <use-transactions>true</use-transactions>
        <source>com.assembler.SomeAssembler</source>
        <scope>application</scope>
        <item-class>vo.SomeVo</item-class>
        <network>
            <paging enabled="true" pageSize="50" />
        </network>
    </properties>
</destination>
==> With this, the DataGrid starts displaying quickly. The problem is that the server stops loading data from row 51 onward. Is there a way to force Flex to keep loading the data in the background (by configuration or by code)?
2. If I try to load a big ArrayCollection (for example, more than 20K records), it locks up the whole browser. Is it possible to load it smoothly behind the scenes?
Please help! Thank you

Related

Drupal 9 - custom module caching issue

Longtime D7 user, first time with D9. I am writing my first custom module and having a devil of a time. My routing calls a controller that simply does this:
\Drupal::service('page_cache_kill_switch')->trigger();
die("hello A - ". rand());
I can refresh the page over and over and get a new random number each
time. But, when I change the code to:
\Drupal::service('page_cache_kill_switch')->trigger();
die("hello B - ". rand());
I still get "hello A 34234234" for several minutes. Clearing the cache doesn't help; all I can do is wait, normally about two minutes. I am at my wit's end.
I thought it might be an issue with my Docker instance, so I generated a simple HTML file, but if I edit and then reload that file, the changes are reflected immediately.
In my settings.local.php I have disabled the render cache, caching for migrations, Internal Page Cache, and Dynamic Page Cache.
In my mymod.routing.yml I have:
options:
  _admin_route: TRUE
  no_cache: TRUE
Any hint on what I am missing would be deeply appreciated.
thanks,
summer

Draw a graph using D3 (v3) in a WebWorker

The goal is to draw a graph using D3 (v3) in a WebWorker (Rickshaw would be even better).
Requirement #1:
The storage space for the entire project should not exceed 1 MB.
Requirement #2:
Internet Explorer 10 should be supported
I already tried passing the DOM element to the Web Worker.
This produced the following error message:
DOMException: Failed to execute 'postMessage' on 'Worker': HTMLDivElement object could not be cloned.
var worker = new Worker( 'worker.js' );
worker.postMessage( {
    'chart' : document.querySelector('#chart').cloneNode(true)
} );
The GitHub user chrisahardie has made a small proof of concept showing how to generate a D3 SVG chart in a web worker and pass it back to the main UI thread to be injected into a webpage:
https://github.com/chrisahardie/d3-svg-chart-in-web-worker
He integrated jsdom into the browser with Browserify.
The problem:
The script is almost 5 MB, which is far too large for the application.
So my question:
Does anyone have experience solving this problem, or any idea how it could be solved while still meeting the requirements?
Web Workers don't have access to the following JavaScript objects: the window object, the document object, and the parent object. So all we could do on that side would be to build something that could be used to create the DOM quickly. The worker(s) could, for example, process the datasets and do all the heavy computations, then pass the result back as a set of arrays. For more details, you could check this article and this sample.
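A minimal sketch of that split, in the same spirit as the snippet above; the worker.js file name, the message shape, and the toy computation are illustrative, and all D3 (v3) DOM work stays on the main thread:

// worker.js -- runs off the main thread; no window/document access here
self.onmessage = function (e) {
    var points = e.data.points;            // plain objects clone fine across postMessage
    var series = points.map(function (d) {
        return { x: d.t, y: d.v * 2 };     // stand-in for the heavy computation
    });
    self.postMessage({ series: series });
};

// main.js -- DOM and D3 work stay on the UI thread
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
    var svg = d3.select('#chart').append('svg');
    svg.selectAll('circle')
        .data(e.data.series)
        .enter().append('circle')
        .attr('cx', function (d) { return d.x; })
        .attr('cy', function (d) { return d.y; })
        .attr('r', 2);
};
worker.postMessage({ points: [{ t: 10, v: 20 }, { t: 30, v: 40 }] });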

Hazelcast IMap.values() giving OutOfMemory on Tomcat

I'm still trying to get to know Hazelcast and have to make a decision on whether to use it or not.
I wrote a simple application in which I start up the cache on (single-node) server startup and load the map at the same time with about 400 entries. The object itself has two String fields. I have a service class that looks up the cache and tries to get all the values from the map.
However, I'm getting an OutOfMemoryError on Java heap space while trying to get the values out of the Hazelcast map. Eventually we plan to move to a 5-node cluster to start with.
Following is the cache spring config:
<hz:hazelcast id="instance">
<hz:config>
<hz:group name="dev" password=""/>
<hz:properties>
<hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
<hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
</hz:properties>
<hz:network port="5701" port-auto-increment="false">
<hz:join>
<hz:multicast enabled="true" />
</hz:join>
</hz:network>
</hz:config>
</hz:hazelcast>
<hz:map instance-ref="instance" id="statusMap" name="statuses" />
Following is where the error occurs:
map = instance.getMap("statuses");
Set<Status> statuses = (Set<Status>) map.values();
return statuses;
Any other method of IMap works fine. I tried getting the keySet and the size, and both worked fine. It is only when I try to get the values that the OutOfMemoryError shows up.
java.lang.OutOfMemoryError: Java heap space
I've tried the above with a standalone Java application and it works fine. I've also monitored with VisualVM and don't see any spike in used heap memory when the error occurs, which is all the more confusing. The available heap is 1G and the used heap was about 70MB when the error occurred.
However, when I take the cache implementation out of the application, it works fine going to the database and getting the data.
I've also tried playing around with the Tomcat VM args, to no success. It is the same OutOfMemoryError when I access IMap.values(), with or without a SqlPredicate. Any help or direction in this matter will be greatly appreciated.
Thanks.
As the exception says, you're running out of heap space, since the values() method tries to return all deserialized values at once. If they don't fit into memory, you're likely to get an OOME.
You can use paging to prevent this from happening: http://hazelcast.org/docs/latest/manual/html-single/hazelcast-documentation.html#paging-predicate-order-limit-
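A minimal sketch of that approach against the map from the question, assuming the Hazelcast 3.x Java API; the page size and the process() handler are illustrative:

import java.util.Collection;
import com.hazelcast.core.IMap;
import com.hazelcast.query.PagingPredicate;

// Pull the values page by page instead of deserializing all of them at once.
IMap<Object, Status> map = instance.getMap("statuses");
PagingPredicate pagingPredicate = new PagingPredicate(50);   // 50 entries per page

Collection<Status> page = map.values(pagingPredicate);
while (!page.isEmpty()) {
    process(page);                   // hypothetical handler for one page of results
    pagingPredicate.nextPage();
    page = map.values(pagingPredicate);
}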
How big are your 400 entries?
And as Chris said, the whole data set is being pulled into memory.
In the future we'll replace this with an iteration-based approach, where we'll only pull small chunks into memory instead of the whole thing.
I figured out the issue. The Status object was implementing com.hazelcast.nio.serialization.Portable for serialization, but I had not configured the corresponding serialization factory. After I configured the factory as follows, it worked fine:
<hz:serialization>
<hz:portable-factories>
<hz:portable-factory factory-id="1" class-name="ApplicationPortableFactory" />
</hz:portable-factories>
</hz:serialization>
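For context, a minimal sketch of what the ApplicationPortableFactory referenced above might look like; the class ID constant and the no-arg Status constructor are assumptions, not from the original post:

import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableFactory;

public class ApplicationPortableFactory implements PortableFactory {

    public static final int FACTORY_ID = 1;       // must match factory-id in the XML above
    public static final int STATUS_CLASS_ID = 1;  // assumed class ID used by Status

    @Override
    public Portable create(int classId) {
        if (classId == STATUS_CLASS_ID) {
            return new Status();                  // assumes Status has a public no-arg constructor
        }
        return null;
    }
}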
Apologies for not giving the complete background initially; I only noticed it later myself. Thanks for replying though. I wasn't aware of the PagingPredicate, and now I'm using it for sorting and paging results. Thanks again.

Coherence Flush Delay Setting

I want a cache that checks its own items to see whether they are expired or not. My cache config is below:
<?xml version="1.0" encoding="UTF-8"?>
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>subscriberinfo</cache-name>
<scheme-name>distributed-scheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>distributed-scheme</scheme-name>
<lease-granularity>member</lease-granularity>
<service-name>DistributedCache</service-name>
<serializer>
<instance>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>String</param-type>
<param-value>rbm-shovel-pof-config.xml</param-value>
</init-param>
</init-params>
</instance>
</serializer>
<backing-map-scheme>
<local-scheme>
<unit-calculator>BINARY</unit-calculator>
<expiry-delay>24h</expiry-delay>
<flush-delay>180</flush-delay>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>
</cache-config>
But the thing is, flush-delay cannot be set. Any ideas?
Thanks
Which version of Coherence do you use?
In Coherence 3.7, flush-delay has been removed from the DTD, as it has been deprecated since version 3.5.
Flushing now only happens when inserting new objects (have a look at eviction-policy) or when accessing expired objects (look at expiry-delay).
Coherence deprecated the FlushDelay and related settings in 3.5. All of that work is done automatically now:
Expired items are automatically removed, and the eviction / expiry events are raised accordingly
You will never see expired data in the cache; even if you try to access it just as it expires, the expiry will occur as an event before the data access occurs
Eviction (for memory limits) is now done asynchronously, so that the "sharp edges" of the side-effects of eviction (such as event storms) are spread out across natural cycles (with the cycle length calculated based on the estimated rate of eviction)
I hope that helps.
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
But the thing is, flush-delay cannot be set. Any ideas?
What do you mean by this? Does the system throw errors, or are the expired items not getting removed from the cache? Based on the configuration you have, an entry should be removed from the cache after 24 hours and 180 seconds since the last update to that entry.

Delay in initial retrieving of xml data using XRM

I am using this XML query
<fetch mapping='logical'>
  <entity name='de_municipality'>
    <order attribute='de_name' ascending='true' />
    <attribute name='de_municipalityid'/>
    <attribute name='de_name'/>
  </entity>
</fetch>
which is passed to the Fetch method of XRM. It takes around 10 seconds to get the result (though there is only a limited number of entities).
ResultsXml = dc.UsingService(service => (string)service.Fetch(oFetchXml.InnerXml));
This line of code is causing the delay.
Subsequent executions return results in 120 milliseconds.
I have tried this query with CRM 4, and the delay is only 200 milliseconds for all attempts.
Any tips, tricks, or analysis methods will be appreciated.
I got a reply from MSDN like this...
Hi Vinu,
We made a call to Microsoft about this issue and it was confirmed to be a design problem. This shouldn't be an issue anymore with CRM 2011.
Our current workaround is to keep the web application which consumes the DataContext alive for as long as possible, because once the metadata is cached the call no longer occurs.
Be careful - the DataContext not only caches the Metadata but also the content itself - such as attributes and relationships. If you want to refresh these you can clear the cache partially for specific entities as described here: Empty CRM Client DataContext cache
Kind regards
Markus
Markus Wolff
Senior Software Developer CRM Systems
Gruner & Jahr & Co. KG Hamburg, Germany
