Using large tables doesn't seem to work in Blazor WASM - itext7

In a Blazor WASM application (.NET 5), I have a Razor component in which I'm creating a table with roughly 4k rows, following the large-table tutorial from the iText website found here:
var dataTable = new Table(5, true)
    .UseAllAvailableWidth()
    .SetBorder(Border.NO_BORDER);

dataTable.AddColumnHeader("Header 1");
dataTable.AddColumnHeader("Header 2");
dataTable.AddColumnHeader("Header 3");
dataTable.AddColumnHeader("Header 4");
dataTable.AddColumnHeader("Header 5");

document.Add(dataTable);

for (int i = 0; i < FilteredItems.Count; i++)
{
    var item = FilteredItems.ElementAt(i);
    dataTable.AddColumnData(item.Item1);
    dataTable.AddColumnData(item.Item2);
    dataTable.AddColumnData(item.Item3);
    dataTable.AddColumnData(item.Item4);
    dataTable.AddColumnData(item.Item5);

    if (i % 50 == 0)
    {
        dataTable.Flush();
    }
}

dataTable.Complete();
document.Close();
I get the following error for what looks like every flush: "Error: Garbage collector could not allocate 16384u bytes of memory for major heap section."
I suspect this may be a Blazor limitation. Has anyone else run into this problem?

Low latency requires a lot of RAM; without low latency the GC can get by with less RAM.
Therefore: turn off low latency.
https://learn.microsoft.com/en-us/dotnet/standard/garbage-collection/latency?redirectedfrom=MSDN
In app.config:
<configuration>
    <runtime>
        <gcServer enabled="false" />
    </runtime>
</configuration>
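A Blazor WASM project typically has no app.config, so as an alternative (a sketch of my own, not part of the original answer) the latency mode described in the linked article can also be switched in code via GCSettings. Whether the Mono/WASM garbage collector actually honors this setting is an assumption worth verifying:
using System.Runtime;

// Assumption: the Blazor WASM runtime honors GCSettings at all.
// Prefer throughput over short pauses while building the large PDF.
GCLatencyMode previous = GCSettings.LatencyMode;
try
{
    GCSettings.LatencyMode = GCLatencyMode.Batch;  // "low latency off": batch mode
    BuildPdfTable();                               // hypothetical placeholder for the table-building code above
}
finally
{
    GCSettings.LatencyMode = previous;             // restore whatever mode was active before
}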

Related

Apache Ignite Cache eviction still in memory

During a stability test of our Apache Ignite cluster we ran into a memory problem: the used heap space climbed to 100% and didn't go down as we expected. This is what we did:
Created a cache with the eviction policy FifoEvictionPolicy(max: 10000, batchSize: 100)
20 simultaneous threads that executed the following scenario over and over again for a couple of hours:
Added a unique entry to the cache and then fetched the value to verify that it was added.
This scenario created about 2.3 million entries during the test.
Our expectation was that, because of our quite restrictive eviction policy of at most 10,000 entries, memory usage should have stabilized. However, memory just kept rising until it reached the max heap size (see the attached memory graph).
Our question is:
Why is the memory used by the entries still allocated, even though eviction is done?
One thing to add: we executed the same test but deleted each entry right after adding it, and the memory was then stable.
Update with test case and comment.
Below you will find a simple JUnit test to demonstrate the memory leak. @a_gura seems to be correct: if we disable the ExpiryPolicy, things work as expected. But if we enable the ExpiryPolicy, the heap seems to fill up within the ExpiryPolicy duration. Test case:
public class IgniteTest {

    String cacheName = "my_cache";

    @Test
    public void test() throws InterruptedException {
        IgniteConfiguration configuration = new IgniteConfiguration();
        Ignite ignite = Ignition.start(configuration);

        // Create a large string to use as the test value.
        StringBuilder testValue = new StringBuilder();
        for (int i = 0; i < 10 * 1024; i++) {
            testValue.append("a");
        }

        CacheConfiguration cacheCfg = new CacheConfiguration();
        cacheCfg.setName(cacheName);
        cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
        Duration duration = new Duration(TimeUnit.HOURS, 12);
        cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(duration));
        cacheCfg.setCacheMode(CacheMode.LOCAL);
        cacheCfg.setBackups(0);
        Cache<Object, Object> cache = ignite.getOrCreateCache(cacheCfg);

        String lastKey = "";
        for (int i = 0; i < 10_000_101; i++) {
            String key = "key#" + i;
            String value = testValue + "value#" + i;
            log.trace("storing {} {}", key, value);
            if (i % 1_000 == 0) {
                log.debug("storing {}", key);
            }
            cache.put(key, value);
            lastKey = key;
            Thread.sleep(1);
        }

        String verifyKey = "key#1";
        Assert.assertThat("first key should be evicted", cache.containsKey(verifyKey), CoreMatchers.is(false));
        Assert.assertThat("last key should NOT be evicted", cache.containsKey(lastKey), CoreMatchers.is(true));

        ignite.destroyCache(cacheName);
    }
}
This has been fixed in Ignite 1.8: https://issues.apache.org/jira/browse/IGNITE-3948.
Credit to @a_gura who filed the bug, and to the development team.

EhCache eternal is not behaving as expected

My requirement is a disk-based cache. If the in-memory cache is full, I want the LRU element to be pushed to disk, and if the file on disk is full I want the LRU element on disk to be evicted. This is a pretty simple requirement, but I could not achieve it using EhCache.
I am using EhCache (2.10.1) with the following config:
<defaultCache name="default"
              maxBytesLocalHeap="50m"
              maxBytesLocalDisk="100g"
              eternal="true"
              timeToIdleSeconds="0"
              timeToLiveSeconds="0"
              diskExpiryThreadIntervalSeconds="120"
              memoryStoreEvictionPolicy="LRU">
    <persistence strategy="localTempSwap"/>
</defaultCache>
My expectation is that when the cache fills up (i.e. the cache size exceeds 50 MB), the LRU element(s) are pushed to the file, freeing some space for new elements in memory.
However, this is not how EhCache works. I ran a sample test to check the element count in the cache:
public static void main(String[] args) throws Exception {
    ArrayList<String> keys = new ArrayList<String>();
    CacheManager cacheManager;
    FileInputStream fis = null;
    try {
        fis = new FileInputStream(
                new File("src/config/ehcache.xml").getAbsolutePath());
        cacheManager = CacheManager.newInstance(fis);
    }
    finally {
        fis.close();
    }

    java.lang.Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            try {
                System.out.println("Shutting down Eh Cache manager !!");
                cacheManager.clearAll();
                cacheManager.shutdown();
                System.out.println("done !!");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });

    System.out.println("starting ...");
    System.out.println(System.getProperty("java.io.tmpdir"));

    cacheManager.addCache("work_item_111");
    Cache ehCache = cacheManager.getCache("work_item_111");

    long start_outer = System.currentTimeMillis();
    for (int i = 0; i < 30; i++) {
        long start = System.currentTimeMillis();
        String key = UUID.randomUUID().toString();
        ehCache.put(new Element(key, getNextRandomString()));
        keys.add(key);
        //System.out.println("time taken : " + (System.currentTimeMillis() - start));
        System.out.println((System.currentTimeMillis() - start) + " - "
                + (ehCache.getStatistics().getLocalDiskSizeInBytes() / 1024 / 1024) + " - "
                + (ehCache.getStatistics().getLocalHeapSizeInBytes() / 1024 / 1024));
    }
    System.out.println("time taken-total : " + (System.currentTimeMillis() - start_outer));
    System.out.println(ehCache.getSize());
    System.out.println("disk size : " + ehCache.getStatistics().getLocalDiskSizeInBytes() / 1024 / 1024);
    System.out.println("memory size : " + ehCache.getStatistics().getLocalHeapSizeInBytes() / 1024 / 1024);

    Iterator<String> itr = keys.iterator();
    int count = 0;
    while (itr.hasNext()) {
        count++;
        String key = itr.next();
        if (ehCache.get(key) == null) {
            System.out.println("missing key : " + key);
        }
    }
    System.out.println("checked for count :" + count);
}
The outcome is quite disappointing: after putting 30 elements into the cache (each element roughly 4 MB in size), I can see only 7 elements in the cache (ehCache.getSize() returns 7), and I also don't see the file on disk growing.
Can any EhCache expert help me out here in case I am missing something? Thanks.
First, regarding the Ehcache topology:
Since at least version 2.6, the tiering model behind Ehcache requires that all mappings be present in the lowest tier (disk, in your case) at all times.
This means it is no longer an overflow model.
Now as to why your test behaves differently than you expect:
The Ehcache 2.x disk tier keeps the keys in memory, and when your heap tier is sized in bytes, the space occupied by those keys is subtracted from that capacity.
Ehcache 2.x writes asynchronously to disk and has bounded queues for doing so. While this is not a problem in most use cases, in a tight loop like your test you may hit these limits and cause the cache to drop puts by evicting inline.
So in order to better understand what's happening, have a look at the Ehcache logs in debug and see what is really going on.
If you see evictions, simply loop more slowly or increase some of the settings for the disk write queue, such as the diskSpoolBufferSizeMB attribute on the cache element, as illustrated below.
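For instance, a variant of the question's configuration with a larger disk spool buffer might look like the following sketch. The 256 MB value is only an illustrative guess, not a recommendation from the answer above:
<!-- Sketch: same cache as the question, with diskSpoolBufferSizeMB added; 256 is an illustrative value -->
<defaultCache name="default"
              maxBytesLocalHeap="50m"
              maxBytesLocalDisk="100g"
              eternal="true"
              diskSpoolBufferSizeMB="256"
              memoryStoreEvictionPolicy="LRU">
    <persistence strategy="localTempSwap"/>
</defaultCache>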
I'm facing the same problem. In a loop I am adding 10k elements with this cache configuration:
cacheConfiguration.maxBytesLocalHeap(1, MemoryUnit.MEGABYTES);
cacheConfiguration.diskSpoolBufferSizeMB(10);
// cacheConfiguration.maxEntriesLocalHeap(1);
cacheConfiguration.name("smallcache");
config.addCache(cacheConfiguration);
cacheConfiguration.setDiskPersistent(true);
cacheConfiguration.overflowToDisk(true);
cacheConfiguration.eternal(true);
When increasing maxBytesLocalHeap to 10 MB it is fine, and when using maxEntriesLocalHeap instead of maxBytes, it works without problems even when set to a value of 1 (item).
Version 2.6.
This is answered in: Ehcache set to eternal but forgets elements anyway?

What is the maximum number of HttpSession attributes

How many attributes can we save in an HttpSession using
session.setAttribute("someName", "abc");
Is there any limit? Can we save any number ('n') of attributes in a session?
I think there is no limit; it only depends on your machine's memory.
The documentation of this method says nothing about a limit.
I have 4 GB of RAM and I am running the application on Tomcat 7.
I have also set the -Xms512M -Xmx1524M arguments.
I am able to set and get 10,000,000 (1,00,00,000) attributes in the HttpSession.
// WORKING CODE
for (Long i = 1L; i <= 10000000L; i++) {
    request.getSession().setAttribute("TXN_" + i, i);
}
for (Long i = 1L; i <= 10000000L; i++) {
    logger.info(request.getSession().getAttribute("TXN_" + i).toString());
}

// Below code causes an OutOfMemoryError (heap space)
for (Long i = 1L; i <= 100000000L; i++) {
    request.getSession().setAttribute("TXN_" + i, i);
}
for (Long i = 1L; i <= 100000000L; i++) {
    logger.info(request.getSession().getAttribute("TXN_" + i).toString());
}
Saving 10,000,000 attributes in the HttpSession is more than enough for my application.
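As a rough, order-of-magnitude sanity check (assumed figures of my own, not measurements): each attribute here is a key string such as "TXN_1234567" (very roughly 60-70 bytes as a Java object), a boxed Long (about 16 bytes), and a few tens of bytes of per-entry map overhead. That puts 10,000,000 attributes somewhere around a gigabyte, which can squeeze under -Xmx1524M, while 100,000,000 attributes would need roughly ten times that and inevitably hit an OutOfMemoryError.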

IBM Lotus Notes Domino DLL

The Domino interop API included with Lotus Notes causes an out-of-memory exception in .NET when a NotesDXLExporter-based export fails on the 390th record, which is a large document, after successfully exporting 389 smaller documents.
Here is a code snippet:
I declare the NotesDXLExporter object:
NotesDXLExporter dxl1 = null;
I then configure the NotesDXLExporter object as shown below:
dxl1 = notesSession.CreateDXLExporter();
dxl1.ExitOnFirstFatalError = false;
dxl1.ConvertNotesbitmapsToGIF = true;
dxl1.OutputDOCTYPE = false;
I then run the for loop shown below to read the documents using the dxl1 object (the line where the exception occurs is indicated):
NotesView vincr = database.GetView(@"(AllIssuesView)"); // view from an NSF file
NotesDocument vincrdoc;
for (int i = 1; i < vincr.EntryCount; i++)
{
    try
    {
        vincrdoc = vincr.GetNthDocument(i);
        // OUT OF MEMORY EXCEPTION HAPPENS HERE WHEN READING A BIG DOCUMENT:
        System.IO.File.WriteAllText(@"C:\Temp\" + i + @".txt", dxl1.Export(vincrdoc));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
I have tried using a different version of the Interop Domino DLL but have had no success.
As I understand it, this looks like an API issue, but I don't know if I am missing something.
Can you please shed some light on this?
Thanks in advance.
Subbu
You haven't said what version of Lotus Notes you are working with. Given the history of DXL, I would say you should try your code on the latest version of Notes that you possibly can.
But also, I don't see any calls to recycle(). Failing to recycle Domino objects causes memory to leak from the Domino back-end classes, and since you are running out of memory this could be contributing to your problem. You should also not use a for loop with getNthDocument; use getFirstDocument and a while loop with getNextDocument instead, which gives much better performance. Putting these two things together leads to the common pattern of using a temporary document to hold the result of getNextDocument, allowing you to recycle the current document and then assign the temporary document to the current one. That would look something like this (not error-checked!):
NotesView vincr = database.GetView(@"(AllIssuesView)"); // view from an NSF file
int i = 0; // output-file counter, since the while loop no longer has a loop index
vincrdoc = vincr.getFirstDocument();
while (vincrdoc != null)
{
    i++;
    try
    {
        System.IO.File.WriteAllText(@"C:\Temp\" + i + @".txt", dxl1.Export(vincrdoc));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }

    Document nextDoc = vincr.getNextDocument(vincrdoc);
    vincrdoc.recycle();
    vincrdoc = nextDoc;
}

WebClient and HttpWebRequest crashes while downloading big file on Windows Phone 7

I've tried both WebClient and HttpWebRequest to download a 381 MB file over a Wi-Fi or tethered connection. The app keeps crashing (no error or exception). It works for a 194 MB file. Is there any way to download big files, or is there a limit on the file size that can be downloaded on Windows Phone 7? Thanks.
With HttpWebRequest, the callback passed to Request.BeginGetResponse() is never invoked.
With WebClient, DownloadProgressChanged fires as expected, but the app crashes before OpenReadCompleted.
The same code works fine when the file is smaller, for example 194 MB.
Here is the code for WebClient:
WebClient wc = new WebClient();
wc.DownloadProgressChanged += ((s, e) =>
{
    UpdateProgress(e.BytesReceived, e.TotalBytesToReceive);
});
wc.OpenReadCompleted += delegate(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error == null)
    {
        using (var storeIso = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (e.Result.Length < storeIso.AvailableFreeSpace)
            {
                if (storeIso.FileExists(LocalFilePath))
                    storeIso.DeleteFile(LocalFilePath);

                using (var fs = new IsolatedStorageFileStream(LocalFilePath,
                    FileMode.Create, storeIso))
                {
                    int bytesRead;
                    byte[] bytes = new byte[1024 * 1024 * 1]; // 1 MB buffer
                    while ((bytesRead = e.Result.Read(bytes, 0, bytes.Length)) != 0)
                    {
                        fs.Write(bytes, 0, bytesRead);
                    }
                    fs.Flush();
                }
            }
        }
    }
};
wc.OpenReadAsync(
    new System.Uri(DownloadFilePath, System.UriKind.RelativeOrAbsolute));
where UpdateProgress calculates the percentage.
When I tried the 381 MB file, the app crashed before OpenReadCompleted was called.
It is similar with HttpWebRequest: the callback passed to Request.BeginGetResponse() is never called for the 381 MB file.
For the smaller file it works just fine with either WebClient or HttpWebRequest. It seems to me there is a memory limitation in handing the downloaded file to the app?
For large files (in my estimation anything over 3 MB), be sure to set your HttpWebRequest.AllowReadStreamBuffering = false. This will get data moving.
Yes, there are memory restrictions on the platform. Are you monitoring these? (See http://blogs.msdn.com/b/mikeormond/archive/2010/12/16/monitoring-memory-usage-on-windows-phone-7.aspx for details on how to do so.)
You will want to consider using multiple requests (with the Range header) to download large files. In addition to avoiding memory restrictions this will also allow your users to stop your app part way through the download and then restart it later without having to restart the download.
I've used this technique to download files up to 2.5 GB on the phone.
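To make the two suggestions concrete, a rough sketch of downloading one slice with read-stream buffering disabled might look like the following. The DownloadChunk name, the chunk bounds, the 64 KB buffer, and the assumption that the phone's HTTP stack accepts a Range header set this way are all illustrative additions of mine, not something stated in the answer; LocalFilePath is the path from the question's code.
// Download one byte range of the file and append it to isolated storage.
// Error handling and retry logic are omitted for brevity.
void DownloadChunk(string url, long chunkStart, long chunkEnd)
{
    var request = (HttpWebRequest)WebRequest.Create(new Uri(url));
    request.AllowReadStreamBuffering = false;          // stream the response instead of buffering it in memory
    request.Headers[HttpRequestHeader.Range] =          // assumption: this stack lets you set Range directly
        string.Format("bytes={0}-{1}", chunkStart, chunkEnd);

    request.BeginGetResponse(ar =>
    {
        var response = (HttpWebResponse)request.EndGetResponse(ar);
        using (var responseStream = response.GetResponseStream())
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var file = new IsolatedStorageFileStream(LocalFilePath, FileMode.Append, store))
        {
            var buffer = new byte[64 * 1024];            // small buffer, written out as data arrives
            int read;
            while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, read);
            }
        }
        response.Close();
        // Kick off the next chunk (or report completion) from here.
    }, null);
}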
