What is the maximum number of HttpSession attributes?

How many attributes can we save in an HttpSession using
session.setAttribute("someName", "abc");
Is there any limit, or can we save any number ('n') of attributes in a session?

I think there is no limit; it depends on the memory available to the JVM. The documentation of this method says nothing about a limit.

I have 4 GB of RAM and run the application on Tomcat 7 with the JVM arguments -Xms512M -Xmx1524M.
With that setup I am able to set and get 10,000,000 (1,00,00,000) attributes in the HttpSession.
// WORKING CODE
for (Long i = 1L; i <= 10000000L; i++) {
    request.getSession().setAttribute("TXN_" + i, i);
}
for (Long i = 1L; i <= 10000000L; i++) {
    logger.info(request.getSession().getAttribute("TXN_" + i).toString());
}

// The code below causes an OutOfMemoryError (heap space)
for (Long i = 1L; i <= 100000000L; i++) {
    request.getSession().setAttribute("TXN_" + i, i);
}
for (Long i = 1L; i <= 100000000L; i++) {
    logger.info(request.getSession().getAttribute("TXN_" + i).toString());
}
Saving 10,000,000 attributes in the HttpSession is more than enough for my application.
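In other words, the ceiling is the heap, not the Servlet API. If you want to stay on the safe side, a rough sketch like the one below (plain java.lang.Runtime calls; the 10% free-heap threshold is an arbitrary example value, not a recommendation) can stop adding attributes before the heap is exhausted:

// Sketch only: stop storing session attributes when free heap gets low.
Runtime rt = Runtime.getRuntime();
for (long i = 1L; i <= 10000000L; i++) {
    long usedHeap = rt.totalMemory() - rt.freeMemory();
    long freeHeap = rt.maxMemory() - usedHeap;
    if (freeHeap < rt.maxMemory() / 10) {
        logger.info("Stopping at attribute " + i + ": heap nearly exhausted");
        break;
    }
    request.getSession().setAttribute("TXN_" + i, i);
}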

Related

Hazelcast Near Cache: Evict if changed on different node

I am using Spring + Hazelcast 3.8.2 and have configured a map like this using the Spring configuration:
<hz:map name="test.*" backup-count="1"
        max-size="0" eviction-percentage="30" read-backup-data="true"
        time-to-live-seconds="900"
        eviction-policy="NONE" merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy">
    <hz:near-cache max-idle-seconds="300"
                   time-to-live-seconds="0"
                   max-size="0" />
</hz:map>
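For reference, the programmatic equivalent of the relevant attributes would look roughly like this (a sketch only, assuming the Hazelcast 3.8 Config API; the Spring XML above is what is actually deployed, and only the attributes relevant here are shown):

// Sketch: rough programmatic counterpart of the Spring XML map configuration above.
MapConfig mapConfig = new MapConfig("test.*");
mapConfig.setBackupCount(1);
mapConfig.setTimeToLiveSeconds(900);
mapConfig.setReadBackupData(true);
NearCacheConfig nearCacheConfig = new NearCacheConfig();
nearCacheConfig.setMaxIdleSeconds(300);
nearCacheConfig.setTimeToLiveSeconds(0);
mapConfig.setNearCacheConfig(nearCacheConfig);
config.addMapConfig(mapConfig); // 'config' being the com.hazelcast.config.Config in use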
I've got two clients connected (both on the same machine [test env], using different ports).
When I change a value in the map on one client, the other client still sees the old value until it is evicted from the near cache because the idle time has expired.
I found a similar issue here: Hazelcast near-cache eviction doesn't work
But I'm not sure this is really the same issue; at least it is mentioned that it was a bug in version 3.7, and we are using 3.8.2.
Is this correct behaviour, or am I doing something wrong? I know there is a property invalidate-on-change, but its default is true, so I don't expect to have to set it.
I also tried setting read-backup-data to false; that doesn't help.
Thanks for your support
Christian
I found the solution myself.
The issue is that Hazelcast sends invalidations in batches by default, so it waits a few seconds before the invalidations are sent out to the other nodes.
You can find more information about this here: http://docs.hazelcast.org/docs/3.8/manual/html-single/index.html#near-cache-invalidation
So I had to set the property hazelcast.map.invalidation.batch.enabled to false, which sends invalidations to all nodes immediately. But, as the documentation mentions, this should only be used when not too many put/remove/... operations are expected, because it makes the event system very busy.
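If you prefer not to set a JVM-wide system property, the same flag can also be applied per instance on the Hazelcast Config object (a minimal sketch, assuming Hazelcast 3.8.x; otherwise identical to the test setup below):

// Sketch: disable batched near-cache invalidations for one instance only.
Config config = new Config();
config.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);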
Nevertheless, even with this property set there is no guarantee that all nodes invalidate their near-cache entries immediately. When accessing the values directly on the other node, they were sometimes up to date and sometimes not.
Here is the JUnit test I built up for this:
@Test
public void testWithInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "true");
    doTest();
}

@Test
public void testWithoutInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
    doTest();
}

@After
public void shutdownNodes() {
    Hazelcast.shutdownAll();
}

protected void doTest() throws Exception {
    // first config for normal cluster member
    Config c1 = new Config();
    c1.getNetworkConfig().setPort(5709);
    // second config for super client
    Config c2 = new Config();
    c2.getNetworkConfig().setPort(5710);
    // map config is the same for both nodes
    MapConfig testMapCfg = new MapConfig("test");
    NearCacheConfig ncc = new NearCacheConfig();
    ncc.setTimeToLiveSeconds(10);
    testMapCfg.setNearCacheConfig(ncc);
    c1.addMapConfig(testMapCfg);
    c2.addMapConfig(testMapCfg);
    // start instances
    HazelcastInstance h1 = Hazelcast.newHazelcastInstance(c1);
    HazelcastInstance h2 = Hazelcast.newHazelcastInstance(c2);
    IMap<Object, Object> mapH1 = h1.getMap("test");
    IMap<Object, Object> mapH2 = h2.getMap("test");
    // initial filling
    mapH1.put("a", -1);
    assertEquals(mapH1.get("a"), -1);
    assertEquals(mapH2.get("a"), -1);
    int updatedH1 = 0, updatedH2 = 0, runs = 0;
    for (int i = 0; i < 5; i++) {
        mapH1.put("a", i);
        // without this short sleep sometimes the near cache is updated in time, sometimes not
        Thread.sleep(100);
        runs++;
        if (mapH1.get("a").equals(i)) {
            updatedH1++;
        }
        if (mapH2.get("a").equals(i)) {
            updatedH2++;
        }
    }
    assertEquals(runs, updatedH1);
    assertEquals(runs, updatedH2);
}
testWithInvalidationBatchEnabled only finishes successfully some of the time; testWithoutInvalidationBatchEnabled always finishes successfully.

Apache Ignite Cache eviction still in memory

During a stability test of our Apache Ignite cluster we ran into a memory problem: the used heap grew to 100% and did not go down as we expected. This is what we did:
Created a cache with the eviction policy set to FifoEvictionPolicy(max: 10000, batchSize: 100).
Ran 20 simultaneous threads that executed the following scenario over and over again for a couple of hours: add a unique entry to the cache, then fetch the value to verify that it was added.
This scenario created about 2.3 million entries during the test.
Our expectation was that, given the quite restrictive eviction policy of at most 10000 entries, memory usage would stabilize. However, it just kept rising until it reached the maximum heap size. See the attached memory graph.
Our question is: why is the memory used by those entries still allocated, even though they have been evicted?
One thing to add: we ran the same test but deleted each entry right after adding it, and the memory then stayed stable.
Update with test case and comment:
Below is a simple JUnit test that demonstrates the memory leak. @a_gura seems to be correct: if we disable the ExpiryPolicy, things work as expected, but with the ExpiryPolicy enabled the heap fills up within the ExpiryPolicy duration. Test case:
public class IgniteTest {

    // Logger used below; an SLF4J-style logger is assumed (not shown in the original post)
    private static final Logger log = LoggerFactory.getLogger(IgniteTest.class);

    String cacheName = "my_cache";

    @Test
    public void test() throws InterruptedException {
        IgniteConfiguration configuration = new IgniteConfiguration();
        Ignite ignite = Ignition.start(configuration);

        // create a large string to use as the test value
        StringBuilder testValue = new StringBuilder();
        for (int i = 0; i < 10 * 1024; i++) {
            testValue.append("a");
        }

        CacheConfiguration cacheCfg = new CacheConfiguration();
        cacheCfg.setName(cacheName);
        cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
        Duration duration = new Duration(TimeUnit.HOURS, 12);
        cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(duration));
        cacheCfg.setCacheMode(CacheMode.LOCAL);
        cacheCfg.setBackups(0);
        Cache<Object, Object> cache = ignite.getOrCreateCache(cacheCfg);

        String lastKey = "";
        for (int i = 0; i < 10_000_101; i++) {
            String key = "key#" + i;
            String value = testValue + "value#" + i;
            log.trace("storing {} {}", key, value);
            if (i % 1_000 == 0) {
                log.debug("storing {}", key);
            }
            cache.put(key, value);
            lastKey = key;
            Thread.sleep(1);
        }

        String verifyKey = "key#1";
        Assert.assertThat("first key should be evicted", cache.containsKey(verifyKey), CoreMatchers.is(false));
        Assert.assertThat("last key should NOT be evicted", cache.containsKey(lastKey), CoreMatchers.is(true));

        ignite.destroyCache(cacheName);
    }
}
This has been fixed in Ignite 1.8: https://issues.apache.org/jira/browse/IGNITE-3948.
Credit to @a_gura, who filed the bug, and to the development team.
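For anyone stuck on a version before 1.8, the observation in the question (the leak only shows up with an ExpiryPolicy configured) suggests a possible workaround: configure the cache with eviction only and omit the expiry policy factory. A minimal sketch, based on the test case above:

// Workaround sketch for pre-1.8 versions: eviction only, no ExpiryPolicy.
CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("my_cache");
cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setBackups(0);
// no setExpiryPolicyFactory(...) call, so entries are only removed by eviction
IgniteCache<Object, Object> cache = ignite.getOrCreateCache(cacheCfg);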

EhCache eternal is not behaving as expected

My requirement is a disk-backed cache: if the in-memory cache is full, I want the LRU element to be pushed to disk, and if the file on disk is full, I want the LRU element on disk to be evicted. This is a pretty simple requirement, but I could not achieve it with Ehcache.
I used Ehcache (2.10.1) with the following config:
<defaultCache name="default"
              maxBytesLocalHeap="50m"
              maxBytesLocalDisk="100g"
              eternal="true"
              timeToIdleSeconds="0"
              timeToLiveSeconds="0"
              diskExpiryThreadIntervalSeconds="120"
              memoryStoreEvictionPolicy="LRU">
    <persistence strategy="localTempSwap"/>
</defaultCache>
My expectation here is that when the cache fills up (i.e. its size exceeds 50 MB), the LRU element(s) are pushed to the file, freeing some space for new elements in memory.
However, that is not how Ehcache works. I wrote a sample test to check the element count in the cache:
public static void main(String[] args) throws Exception {
    ArrayList<String> keys = new ArrayList<String>();
    CacheManager cacheManager;
    FileInputStream fis = null;
    try {
        fis = new FileInputStream(new File("src/config/ehcache.xml").getAbsolutePath());
        cacheManager = CacheManager.newInstance(fis);
    } finally {
        fis.close();
    }
    java.lang.Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            try {
                System.out.println("Shutting down Eh Cache manager !!");
                cacheManager.clearAll();
                cacheManager.shutdown();
                System.out.println("done !!");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
    System.out.println("starting ...");
    System.out.println(System.getProperty("java.io.tmpdir"));
    cacheManager.addCache("work_item_111");
    Cache ehCache = cacheManager.getCache("work_item_111");
    long start_outer = System.currentTimeMillis();
    for (int i = 0; i < 30; i++) {
        long start = System.currentTimeMillis();
        String key = UUID.randomUUID().toString();
        ehCache.put(new Element(key, getNextRandomString()));
        keys.add(key);
        //System.out.println("time taken : " + (System.currentTimeMillis() - start));
        System.out.println((System.currentTimeMillis() - start) + " - "
                + (ehCache.getStatistics().getLocalDiskSizeInBytes() / 1024 / 1024) + " - "
                + (ehCache.getStatistics().getLocalHeapSizeInBytes() / 1024 / 1024));
    }
    System.out.println("time taken-total : " + (System.currentTimeMillis() - start_outer));
    System.out.println(ehCache.getSize());
    System.out.println("disk size : " + ehCache.getStatistics().getLocalDiskSizeInBytes() / 1024 / 1024);
    System.out.println("memory size : " + ehCache.getStatistics().getLocalHeapSizeInBytes() / 1024 / 1024);
    Iterator<String> itr = keys.iterator();
    int count = 0;
    while (itr.hasNext()) {
        count++;
        String key = itr.next();
        if (ehCache.get(key) == null) {
            System.out.println("missing key : " + key);
        }
    }
    System.out.println("checked for count :" + count);
}
The outcome is quite disappointing: after putting 30 elements into the cache (each element roughly 4 MB), I can see only 7 elements in the cache (ehCache.getSize() returns 7), and I also don't see the file on disk growing.
Can any Ehcache expert help me out if I am missing something here? Thanks.
First, regarding the Ehcache topology: since at least version 2.6, the tiering model behind Ehcache requires that all mappings be present in the lowest tier (disk, in your case) at all times. This means it is no longer an overflow model.
Now as to why your test behaves differently from what you expect:
The Ehcache 2.x disk tier keeps the keys in memory, and when the heap tier is sized in bytes, the space occupied by those keys is subtracted from that capacity.
Ehcache 2.x writes to disk asynchronously and uses bounded queues for that. While this is not a problem in most use cases, a tight loop like your test can hit those limits and cause the cache to drop puts by evicting inline.
So, to better understand what is happening, look at the Ehcache logs at debug level and see what is really going on.
If you see evictions, simply loop more slowly or increase the disk write queue settings, such as the diskSpoolBufferSizeMB attribute on the cache element.
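For instance, the spool buffer can be raised programmatically on the CacheConfiguration instead of via the XML attribute (a sketch only, using the same setter style as the configuration snippet further below; 30 MB is an arbitrary example value):

// Sketch: programmatic equivalent of a larger diskSpoolBufferSizeMB setting.
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.name("work_item_111");
cacheConfiguration.maxBytesLocalHeap(50, MemoryUnit.MEGABYTES);
cacheConfiguration.diskSpoolBufferSizeMB(30); // arbitrary example value
config.addCache(cacheConfiguration); // 'config' is the net.sf.ehcache.config.Configuration in use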
I am facing the same problem. In a loop I am adding 10k elements with this cache configuration:
cacheConfiguration.maxBytesLocalHeap(1, MemoryUnit.MEGABYTES);
cacheConfiguration.diskSpoolBufferSizeMB(10);
// cacheConfiguration.maxEntriesLocalHeap(1);
cacheConfiguration.name("smallcache");
config.addCache(cacheConfiguration);
cacheConfiguration.setDiskPersistent(true);
cacheConfiguration.overflowToDisk(true);
cacheConfiguration.eternal(true);
When I increase maxBytesLocalHeap to 10 MB it is fine, and when I use maxEntriesLocalHeap instead of maxBytesLocalHeap it works without problems even when it is set to 1 (a single item).
Version 2.6
This is answered in: Ehcache set to eternal but forgets elements anyway?

IBM Lotus Notes Domino DLL

The Domino interop API included with Lotus Notes causes an out of memory exception in .NET when the NotesDXLExporter-based object fails to export the 390th record, which is a big document, after exporting 389 smaller documents.
Here is a code snippet:
I initialize the NotesDXLExporter class.
NotesDXLExporter dxl1 = null;
I then configure the NotesDXLExporter object as shown below:
dxl1 = notesSession.CreateDXLExporter();
dxl1.ExitOnFirstFatalError = false;
dxl1.ConvertNotesbitmapsToGIF = true;
dxl1.OutputDOCTYPE = false;
I then run the for loop shown below to read documents using the dxl1 object (the line where the exception occurs is indicated):
NotesView vincr = database.GetView(@"(AllIssuesView)"); // view from an NSF file
for (int i = 1; i < vincr.EntryCount; i++)
{
    try
    {
        vincrdoc = vincr.GetNthDocument(i);
        // OUT OF MEMORY EXCEPTION HAPPENS HERE WHEN READING A BIG DOCUMENT:
        System.IO.File.WriteAllText(@"C:\Temp\" + i + @".txt", dxl1.Export(vincrdoc));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
I have tried using a different version of the Interop Domino DLL and have had no success.
As I understand it, this looks like an API issue, but I don't know if I am missing something.
Can you please shed some light on this?
Thanks in advance.
Subbu
You haven't said which version of Lotus Notes you are working with. Given the history of DXL, I would say you should try your code on the latest version of Notes that you possibly can.
But also, I don't see any calls to recycle(). Failing to call recycle() on Domino objects leaks memory in the Domino back-end classes, and since you are running out of memory this could be contributing to your problem. You should also not use a for loop with GetNthDocument; use GetFirstDocument and a while loop with GetNextDocument instead, which performs much better. Putting these two things together leads to the common pattern of using a temporary document to hold the result of GetNextDocument, recycling the current document, and then assigning the temporary document to the current one. It would look something like this (not error-checked!):
NotesView vincr = database.GetView(@"(AllIssuesView)"); // view from an NSF file
int i = 0; // counter used only to build unique file names
vincrdoc = vincr.GetFirstDocument();
while (vincrdoc != null)
{
    i++;
    try
    {
        System.IO.File.WriteAllText(@"C:\Temp\" + i + @".txt", dxl1.Export(vincrdoc));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
    NotesDocument nextDoc = vincr.GetNextDocument(vincrdoc);
    vincrdoc.Recycle();
    vincrdoc = nextDoc;
}

Oracle sessions stay open after closing connection

While testing a new application, we came across an issue where a stored procedure sometimes takes over a minute to execute and causes a timeout. It was not one stored procedure in particular; it could be any of them.
Trying to reproduce the issue, I created a small (local) test app that calls the same stored procedure in different threads (code below).
Now it seems that the Oracle sessions are still there, inactive, and the CPU of the Oracle server hits 100%.
I use System.Data.OracleClient.
I'm not sure whether one is related to the other, but it slows down the time needed to get an answer from the database.
for (int index = 0; index < 1000; ++index)
{
    ThreadPool.QueueUserWorkItem(GetStreet, index);
    _runningThreads++;
    WriteThreadnumber(_runningThreads);
}

private void GetStreet(object nr)
{
    const string procName = "SPCK_ISU.GETPREMISESBYSTREET";
    DataTable dataTable = null;
    var connectionstring = ConfigurationManager.ConnectionStrings["CupolaDB"].ToString();
    try
    {
        using (var connection = new OracleConnection(connectionstring))
        {
            connection.Open();
            using (var command = new OracleCommand(procName, connection))
            {
                //Fill parameters
                using (var oracleDataAdapter = new OracleDataAdapter(command))
                {
                    //Fill datatable
                }
            }
        }
    }
    finally
    {
        if (dataTable != null)
            dataTable.Dispose();
    }
}
EDIT:
I just had the DBA count the open sessions: there are 105 sessions that stay open and inactive. After closing my application, the sessions are removed.
The problem is solved.
We hired an Oracle expert to take a look at this, and the problem was caused by some underlying stored procedures that took a while to execute and consumed a lot of CPU.
After the necessary tuning, everything runs smoothly.
