Error when trying to start an Infinispan cache after stopping it programmatically - caching

I need to start and stop a local infinispan cache programmatically. To start the cache initially, all I have to do is:
defaultcachemanager.getCache("local");
This happens when the system (karaf in this case) is coming up and works perfectly. To stop the cache, I do:
defaultcachemanager.stop();
Then when I try to start the same cache using:
defaultcachemanager.getCache("local");
it fails. I tried to do:
defaultcachemanager.startCache("local");
This fails with an exception
"Cache container has been stopped and cannot be reused. Recreate the cache container."
I guess the cache container is not started by then. But isn't
defaultcachemanager.startCache("local");
supposed to create and start the cache as well? I am not sure what I am missing. Do I need to create a new instance of
defaultcachemanager
again? I looked at the code for DefaultCacheManager; I see only the cache entries being stopped, not the instance itself being destroyed.
Pardon my ignorance as I started working on Infinispan just last week. Any pointers are greatly appreciated.
thanks,
Asha

By calling defaultcachemanager.stop() you stopped the whole cache manager, so at that point there is no longer a running cache manager instance.
All you need to do is stop the cache itself instead of stopping the whole cache manager.
defaultcachemanager.getCache(cacheName).stop();
This stops the cache with the given name.
defaultcachemanager.getCache(cacheName).start();
This restarts your local cache after it has been stopped.
defaultcachemanager.startCache(cacheName);
This creates and starts another cache with the given name, using the default configuration set by the configuration builders during cache manager instantiation.
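To make the difference concrete, here is a minimal sketch (assuming a cache named "local" created with the manager's default configuration, as in the question; not a definitive recipe):

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class RestartCacheExample {
    public static void main(String[] args) {
        DefaultCacheManager manager = new DefaultCacheManager();

        // getCache() creates and starts the cache on first use
        Cache<String, String> cache = manager.getCache("local");
        cache.put("k", "v");

        // Stop only the cache, not the whole manager
        cache.stop();

        // The same named cache can then be started again
        manager.getCache("local").start();

        // Once you call manager.stop(), however, the manager cannot be reused;
        // the only way forward is to build a new instance
        manager.stop();
        manager = new DefaultCacheManager();
        manager.getCache("local");
        manager.stop();
    }
}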

Related

Issue with DRBD not being able to reattach disk

After patching I'm having an issue with the disk reattaching to sdb1. I used "drbdadm attach r1" and get this error.
I tried using drbdadm create-md r1 and then bringing it down and back up, since the metadata is internal. That doesn't seem to work. This is the secondary node; the primary is up and running without issue.

Why would GlobalKTable with in-memory key-value store not be filled out at restart?

I am trying to figure out how GlobalKTable works and noticed that my in-memory key-value store is not populated after a restart. However, the documentation suggests it should be, since the whole topic's data is duplicated on every client.
When I debug my application I see a file at /tmp/kafka-streams/category-client-1/global/.checkpoint that contains an offset for my topic. This may be needed for stores that persist their data, to speed up restarts, but since there is an offset in this file my application skips restoring its state.
How can I be sure that each restart or fresh start loads the whole data of my topic?
Because you are using an in-memory store, I assume you are hitting this bug: https://issues.apache.org/jira/browse/KAFKA-6711
As a workaround, you can delete the local checkpoint file for the global store -- this will trigger bootstrapping on restart. Or you can switch back to the default RocksDB store.
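If it helps, a minimal sketch of that workaround (using the state directory and application id from the question, which are assumptions about your setup) is to delete the checkpoint file before starting the topology:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteGlobalCheckpoint {
    public static void main(String[] args) throws Exception {
        // <state.dir>/<application.id>/global/.checkpoint -- values taken from the question
        Path checkpoint = Paths.get("/tmp/kafka-streams", "category-client-1", "global", ".checkpoint");
        if (Files.deleteIfExists(checkpoint)) {
            System.out.println("Deleted " + checkpoint + "; the global store will be re-bootstrapped on start.");
        }
        // ...then build and start the KafkaStreams instance as usual
    }
}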

How to change Infinispan cache settings after it is created?

In my application I'm using Infinispan 5.3 and I want to change settings after the cache is initialized. Default settings are loaded from an XML file, and some of them (e.g. eviction maxEntries, lifespan, etc.) should be changeable at any time while the application is running (this is done by a sysadmin). Is there a way to change the settings of an already created cache?
I tried EmbeddedCacheManager.defineConfiguration(String cacheName, Configuration configurationOverride); but this has no effect on an already created cache.
Please take into account that in Infinispan 5.3 there is no way to change a cache configuration "on the fly". You need to restart your service with the new configuration for any change you want to make.
This is something the community might want to work on in the future. However, such a task is not easy because you need to figure out how to correctly deal with affected data immediately after the configuration change.
Feel free to raise a new feature request: https://issues.jboss.org/browse/ISPN/
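Until such a feature exists, the restart route is the only option. A rough sketch (cache name and values are placeholders; in practice you would reload them from your updated XML) of rebuilding the manager with the new settings:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class RestartWithNewSettings {

    // Stops the old manager and builds a fresh one whose named cache
    // uses the latest eviction/expiration values (placeholders below).
    static EmbeddedCacheManager restart(EmbeddedCacheManager old, String cacheName,
                                        int maxEntries, long lifespanMillis) {
        if (old != null) {
            old.stop();
        }
        Configuration cfg = new ConfigurationBuilder()
                .eviction().maxEntries(maxEntries)
                .expiration().lifespan(lifespanMillis)
                .build();
        EmbeddedCacheManager manager = new DefaultCacheManager();
        manager.defineConfiguration(cacheName, cfg);
        manager.getCache(cacheName); // starts the cache with the new settings
        return manager;
    }
}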

EC2 creates new instance when I terminate previous

I can't seem to find the configuration setting but when I terminate a free tier instance a new one is created a few minutes later. I want to terminate it permanently and not have it restart. I created the instance using the eclipse tools originally if this makes any difference. I have tried stop and terminate and both will create a new instance and leave my other instances in the "terminated" or "stopped" state. Is there a setting that I can configure to leave it turned off?
I figured it out: the instance belongs to an application configured in Elastic Beanstalk. I had to delete the application in Elastic Beanstalk, and that terminated the instance.
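For reference, the same cleanup can be scripted; here is a small sketch with the AWS SDK for Java (the application name is a placeholder, and terminateEnvByForce asks Beanstalk to terminate any running environments as part of the delete):

import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
import com.amazonaws.services.elasticbeanstalk.model.DeleteApplicationRequest;

public class DeleteEbApplication {
    public static void main(String[] args) {
        AWSElasticBeanstalk eb = AWSElasticBeanstalkClientBuilder.defaultClient();

        // Deleting the Beanstalk application stops it from re-launching instances
        eb.deleteApplication(new DeleteApplicationRequest()
                .withApplicationName("my-application")   // placeholder
                .withTerminateEnvByForce(true));
    }
}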

Local mongo server with mongolab mirror & fallback

How do I set up a local MongoDB with a mirror on MongoLab (propagating all writes from local to MongoLab so they stay synchronized; I don't care about atomicity, just that it syncs within a reasonable time frame)?
How do I use MongoLab as a fallback if the local server stops working (Ruby/Rails, mongo driver and Mongoid)?
Background: I used to have a local mongo server, but it kept crashing occasionally, all my apps stopped working, and I had to "repair" the DB to restart it. Then I switched to MongoLab, which I am very satisfied with, but it generates a lot of traffic that I'd like to reduce by having a local "cache", without having to worry about the local cache crashing and causing all my apps to stop working. The DBs are relatively small, so size is not an issue. I'm not trying to eliminate the traffic overhead of communicating with MongoLab, just lower it a bit.
I'm assuming you don't want to have the mongolab instance just be part of a replica set (or perhaps that is not offered). The easiest way would be to add the remote mongod instance as a hidden member (priority 0) and just have it replicate data from your local instance.
An alternative immediate solution you could use is mongooplog which can be used to poll the oplog on one server and then apply it to another. Essentially replication on demand (you would need to seed one instance appropriately etc. and would need to manage any failures). More information here:
http://docs.mongodb.org/manual/reference/mongooplog/
The last option would be to write something yourself using a tailable cursor in your language of choice to feed the oplog data into the remote instance.
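As a very rough sketch of that last option (shown with the MongoDB Java driver rather than Ruby; host names are placeholders and actually applying each entry to the remote instance is only stubbed out):

import com.mongodb.CursorType;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.BsonTimestamp;
import org.bson.Document;

public class OplogTail {
    public static void main(String[] args) {
        // The oplog only exists when the local mongod runs as a replica-set member
        try (MongoClient local = MongoClients.create("mongodb://localhost:27017");
             MongoClient remote = MongoClients.create("mongodb://<mongolab-host>:<port>")) {

            MongoCollection<Document> oplog =
                    local.getDatabase("local").getCollection("oplog.rs");

            BsonTimestamp lastApplied = new BsonTimestamp(0, 0); // persist this between runs

            // Tailable cursor that blocks waiting for new oplog entries
            for (Document entry : oplog.find(Filters.gt("ts", lastApplied))
                                       .cursorType(CursorType.TailableAwait)) {
                lastApplied = entry.get("ts", BsonTimestamp.class);
                applyToRemote(remote, entry);
            }
        }
    }

    // Placeholder: translate the oplog entry ("op", "ns", "o", "o2" fields)
    // into the corresponding insert/update/delete on the remote instance
    static void applyToRemote(MongoClient remote, Document entry) {
        System.out.println(entry.toJson());
    }
}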
