Why are connections to Azure Redis Cache so high? - caching

I am using Azure Redis Cache under high load from a single machine querying the cache. This machine performs roughly 20 gets and sets per second; the rate rises during the day and drops at night.
So far, things have been working fine. Today I realized that the "Connected Clients" metric is extremely high, even though I only have one client that just constantly gets and sets items. Here is a screenshot of the metric I mean:
My code looks like this:
public class RedisCache<TValue> : ICache<TValue>
{
    private IDatabase cache;
    private ConnectionMultiplexer connectionMultiplexer;

    public RedisCache()
    {
        ConfigurationOptions config = new ConfigurationOptions();
        config.EndPoints.Add(GlobalConfig.Instance.GetConfig("RedisCacheUrl"));
        config.Password = GlobalConfig.Instance.GetConfig("RedisCachePassword");
        config.ConnectRetry = int.MaxValue; // retry connection if broken
        config.KeepAlive = 60;              // keep connection alive (ping every minute)
        config.Ssl = true;
        config.SyncTimeout = 8000;          // 8 second timeout for each get/set/remove operation
        config.ConnectTimeout = 20000;      // 20 seconds to connect to the cache

        connectionMultiplexer = ConnectionMultiplexer.Connect(config);
        cache = connectionMultiplexer.GetDatabase();
    }

    public virtual bool Add(string key, TValue item)
    {
        return cache.StringSet(key, RawSerializationHelper.Serialize(item));
    }
}
I am not creating more than one instance of this class, so that is not the problem. Maybe I misunderstand the connections metric and it actually counts the number of times I access the cache, but that would not really make sense in my opinion. Any ideas, or has anyone had a similar problem?

StackExchange.Redis had a race condition that could lead to leaked connections under some conditions. This was fixed in build 1.0.333, so make sure you are using that build or newer.
If you want to confirm this is the issue you are hitting, get a crash dump of your client application and look at the objects on the heap in a debugger. Look for a large number of StackExchange.Redis.ServerEndPoint objects.
Also, several users have had bugs in their own code that resulted in leaked connection objects. This is often because their code tries to re-create the ConnectionMultiplexer if it sees failures or a disconnected state. There is really no need to re-create the ConnectionMultiplexer, as it has logic internally to recreate the connection as necessary. Just make sure to set abortConnect to false in your connection string.
If you do decide to re-create the connection object, make sure to dispose the old object before releasing all references to it.
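If you do go down that path, a minimal sketch of swapping in a new multiplexer safely might look like this (my illustration only, not code from this answer; the field and method names are placeholders):

private static ConnectionMultiplexer connection;
private static readonly object reconnectLock = new object();

private static void ForceReconnect(string connectionString)
{
    lock (reconnectLock)
    {
        ConnectionMultiplexer old = connection;

        // Create the replacement first, then dispose the old multiplexer so
        // its sockets are actually released instead of leaking connections.
        connection = ConnectionMultiplexer.Connect(connectionString);

        if (old != null)
        {
            old.Dispose();
        }
    }
}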
The following is the pattern we are recommending:
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});

public static ConnectionMultiplexer Connection
{
    get
    {
        return lazyConnection.Value;
    }
}
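Applied to the RedisCache<TValue> class from the question, that could look roughly like this (a sketch only; it reuses the question's GlobalConfig and RawSerializationHelper helpers, and RedisConnection is a holder class I made up):

internal static class RedisConnection
{
    // One shared multiplexer for the whole process; abortConnect=false lets
    // StackExchange.Redis keep retrying in the background instead of throwing.
    private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            GlobalConfig.Instance.GetConfig("RedisCacheUrl") +
            ",ssl=true,abortConnect=false,password=" +
            GlobalConfig.Instance.GetConfig("RedisCachePassword")));

    public static ConnectionMultiplexer Connection
    {
        get { return lazyConnection.Value; }
    }
}

public class RedisCache<TValue> : ICache<TValue>
{
    private readonly IDatabase cache = RedisConnection.Connection.GetDatabase();

    public virtual bool Add(string key, TValue item)
    {
        return cache.StringSet(key, RawSerializationHelper.Serialize(item));
    }
}

Keeping the Lazy<ConnectionMultiplexer> in a non-generic holder also avoids ending up with one multiplexer per closed generic type of RedisCache<TValue>.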

Related

Parse Platform Online mode and fallback for offline if timeout/network issue

Is there a way in Parse Platform to fall back to the local data store if there is no connection?
I understand that there is pin/pinInBackground, so I can pin any object to the local datastore.
Then I can query the local datastore to get that info.
However, I always want to try to get the server data first, and only if that fails, get the local data.
Is there a way to do this automatically?
(Or do I have to pin everything locally, then query the remote and, if it fails, query locally?)
Great question.
Parse has the concept of cached queries. https://docs.parseplatform.org/ios/guide/#caching-queries
The interesting feature of cached queries is that you can specify "if no network". However, this only works if you have previously cached the query results. I've also found that the delay between losing network connectivity and the cached query recognising that it has lost network makes the whole capability a bit rubbish.
How I have resolved this issue is by using a combination of the Alamofire library and pinning objects. The reason I chose the Alamofire library is that it's extremely well supported and it spots drops in network connectivity almost immediately. I only have a few hundred records, so I'm not worried about pinning objects, and performance definitely does not seem to be affected. So how I work this is:
Define some properties at the top of the class
// Network management
private var reachability: NetworkReachabilityManager!
private var hasInternet: Bool = false
Call a method as the view awakes
// View lifecycle
override func awakeFromNib() {
    super.awakeFromNib()
    self.monitorReachability()
}
Update object when network availability changes. I know this method could be improved.
private func monitorReachability() {
    NetworkReachabilityManager.default?.startListening { status in
        if "\(status)" == "notReachable" {
            self.hasInternet = false
        } else {
            self.hasInternet = true
        }
        print("hasInternet = \(self.hasInternet)")
    }
}
Then when I call a query I have a switch as I set up the query object.
// Start setup of query
let query = PFQuery(className: "mySecretClass")
if self.hasInternet == false {
    query.fromLocalDatastore()
}
// Complete rest of query configuration
Of course I pin all the results I ever return from the server.

Spring data Page/Pageable returns duplicates on large data sets?

When operating on large data sets, Spring Data presents two abstractions: Stream and Page. We've been using Stream for a while and had no issues, but recently I wanted to try a paginated approach and ran into a reliability issue.
Consider the following:
@Entity
public class MyData {
}

public interface MyDataRepository extends JpaRepository<MyData, UUID> {
}

@Component
public class MyDataService {
    private MyDataRepository repository;

    // Bridge between a Reactive service and a transactional / non-reactive database call
    @Transactional
    public void getAllMyData(final FluxSink<MyData> sink) {
        final Pageable firstPage = PageRequest.of(0, 500);
        Page<MyData> page = repository.findAll(firstPage);
        while (page != null && page.hasContent()) {
            page.getContent().forEach(sink::next);
            if (page.hasNext()) {
                page = repository.findAll(page.nextPageable());
            } else {
                page = null;
            }
        }
        sink.complete();
    }
}
Using two Postgres 9.5 databases, the source database had close to 100,000 rows while the destination was empty. The example code above was then used to copy from the source to the destination. At the end I would find that my destination database had a far smaller row count than the source.
- Run as a Spring Boot app
- The Flux doing the copy was using 4-6 threads in parallel (for speed)
- Total run time of at least an hour (max was 2 hours)
As it turns out, I was eventually processing the same rows multiple times (and missing other rows as a result). This led me to a fix that others had already run into: you should provide an explicit Sort.by(...) argument.
After changing the service to use:
// Make our pages sorted by the PKEY
final Pageable firstPage = PageRequest.of(0, 500, Sort.by("id"));
I found that while it GREATLY helped, I would still process some rows multiple times (going from losing about half the rows to only seeing ~12 duplicates). When I use a Stream instead, I have no issues.
Does anyone have any explanation for what is going on? I don't seem to have any duplicates come through until the test has been running for at least 10-15min, which almost leads me to believe that there is some kind of session or other timeout happening (either in the client, or on the database) that causes the hiccups. But I'm really far out of my knowledge area for troubleshooting it further heh.

Why do static members lose their value in Xamarin.Forms

I'm having issues with a static member of my App class losing its value and I'm not quite sure I understand why. In my App constructor I check whether the user is logged in and, if not, redirect to a login page where I set the static App class member.
I understand that if the app is forced to close to free up resources, these values are not retained, so a new app instance would start and go back to the login screen. However, what I'm seeing is the static member losing its value during an application session. I can check whether it is null on resume and redirect to the login page, but I don't understand why this happens.
My understanding was that the only way you would lose these values would be if the app was killed in the background, but this problem suggests it can happen when resuming too.
In a normal C# application static members will typically survive forever, but unfortunately your observations are entirely correct; in Xamarin.Forms static members are not guaranteed to persist for the length of the application's life.
In Android's case, if the underlying platform indicates a low memory state (or increased demands on memory from multiple running applications), then static members are considered collectable by the GC, which is often triggered when you pause the application (i.e. switching to a different app). They will be reduced to their default value, e.g. null, zero, etc.
I've wrestled with this curio for years, and the most performant workaround is to implement a re-population pattern on those static members, e.g.:
internal static List<MyCustomType> _AListOfStuff;

internal static List<MyCustomType> AListOfStuff
{
    get
    {
        if (_AListOfStuff == null)
        {
            PopulateAListOfStuff(); // If this occurs then the static member has been garbage collected: reload it
        }
        return _AListOfStuff;
    }
}
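For completeness, PopulateAListOfStuff is just whatever reload logic suits your data; a rough sketch (MyStuffRepository is a made-up name for illustration, not part of the original answer):

internal static void PopulateAListOfStuff()
{
    // Reload from whatever durable source the data originally came from
    // (local database, file, web service, ...).
    _AListOfStuff = MyStuffRepository.LoadAll();
}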
From what you've said, I appreciate that your particular usage of static members probably doesn't fit with this solution, however all I can offer is that you're not crazy; it is a documented quirk, and not considered a bug (don't even bother shaking that tree; I've been down that route with the devs and was told in no uncertain terms that the behaviour is here to stay, and is necessary to ensure overall device stability).
A static member should not lose its value on its own; if we see the code we can assist further. Another approach would be to use the singleton pattern, which creates a new instance only if its instance is null. Sample below:
public sealed class SingletonSample
{
    private static SingletonSample instance = null;
    private static readonly object padlock = new object();

    public static SingletonSample Instance
    {
        get
        {
            lock (padlock)
            {
                if (instance == null)
                {
                    instance = new SingletonSample();
                }
                return instance;
            }
        }
    }

    public string FirstName { get; set; }
}
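Usage is then just a property access on the shared instance, for example (the value "Jane" is only a placeholder):

// Set once, e.g. after login...
SingletonSample.Instance.FirstName = "Jane";

// ...and read it later from anywhere in the app.
var name = SingletonSample.Instance.FirstName;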

Thread safe caching

I am trying to analyze what problems I might have with unsafe threading in my code.
In my MVC 3 web application I do the following:
// Caching code
public static class CacheExtensions
{
    // Lock object used when writing to the cache
    private static readonly object sync = new object();

    public static T GetOrStore<T>(this Cache cache, string key, Func<T> generator)
    {
        var result = cache[key];
        if (result == null)
        {
            result = generator();
            lock (sync)
            {
                cache[key] = result;
            }
        }
        return (T)result;
    }
}
Using the caching like this:
// Using the cached stuff
public class SectionViewData
{
    public IEnumerable<Product> Products { get; set; }
    public IEnumerable<SomethingElse> SomethingElse { get; set; }
}

private void Testing()
{
    var cachedSection = HttpContext.Current.Cache.GetOrStore("Some Key", () => GetSectionViewData());
    // Threading problem?
    foreach (var product in cachedSection.Products)
    {
        DosomestuffwithProduct...
    }
}

private SectionViewData GetSectionViewData()
{
    SectionViewData viewData = new SectionViewData();
    viewData.Products = CreateProductList();
    viewData.SomethingElse = CreateSomethingElse();
    return viewData;
}
Could I run into problems with the IEnumerable? I don't have much experience with threading problems. The cachedSection would not get touched if some other thread adds a new value to the cache, right? To me this looks like it would work.
Should I cache Products and SomethingElse individually? Would that be better than caching the whole SectionViewData?
Threading is hard.
In your GetOrStore method, the get/generator sequence is entirely unsynchronized, so any number of threads can get null from the cache and run the generator function at the same time. This may - or may not - be a problem.
Your lock statement only locks the setter of cache[string], which is already thread safe and doesn't need to be "extra locked".
The variation of double-checked locking in the cache is suspect; I'd try to get rid of it. Since a thread that never enters the lock() section can get the result without a memory barrier, the result may not be entirely constructed by the time that thread sees it.
Enumerating the cached IEnumerables is safe as long as nothing modifies them at the same time. If GetSectionViewData() returns an object with immutable (as in non-changing) collections, you're safe.
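If you want to close the gap where several threads all run the generator for the same key, one straightforward option (a sketch of mine, not the poster's code) is to take the lock around the whole check-generate-store sequence instead of just the write:

using System;
using System.Web.Caching;

public static class CacheExtensions
{
    private static readonly object sync = new object();

    public static T GetOrStore<T>(this Cache cache, string key, Func<T> generator)
    {
        lock (sync)
        {
            // Check, generate and store under a single lock so the generator
            // runs at most once per key.
            var result = cache[key];
            if (result == null)
            {
                result = generator();
                cache[key] = result;
            }
            return (T)result;
        }
    }
}

The obvious trade-off is that one global lock serializes every caller, even for unrelated keys.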
Your code is missing some parts, such as how Products would be populated. Only in GetSectionViewData?
If so, then I don't see a major problem with your code.
There is, however, a chance that two threads generate the same data (the cached SectionViewData) for the same key. That shouldn't create a threading problem beyond doing the work twice, so if this is an expensive operation I would change the code so it only generates the data once per key; if it is not expensive, it works fine as is.
The IEnumerable for Products is not touched (assuming you create it separately per thread), but the enumerator over the cache itself is modified by each insert operation, so it is not thread safe. If you are enumerating the cache itself, I would be careful about that.
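If generating the data is expensive and you only want to pay that cost once per key, a per-key lock avoids both the duplicate work and a single global bottleneck (again my sketch, not the original code; the ConcurrentDictionary only hands out one lock object per key):

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public static class PerKeyCacheExtensions
{
    // One lock object per cache key, so different keys never block each other.
    private static readonly ConcurrentDictionary<string, object> keyLocks =
        new ConcurrentDictionary<string, object>();

    public static T GetOrStore<T>(this Cache cache, string key, Func<T> generator)
    {
        var result = cache[key];
        if (result == null)
        {
            lock (keyLocks.GetOrAdd(key, k => new object()))
            {
                // Re-check inside the lock: another thread may have stored
                // the value while we were waiting.
                result = cache[key];
                if (result == null)
                {
                    result = generator();
                    cache[key] = result;
                }
            }
        }
        return (T)result;
    }
}

The lock dictionary grows with the number of distinct keys, which is usually fine for a bounded key set.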

.Net 4/Mvc Runtime Cache strangeness

Update: I have dropped the cache system in favor of a database solution - a pity.
I have a backend MVC controller where I need data caching. I use MemoryCache.Default to store key/value pairs, nothing big. Never mind policies and expire times, I've got that. The thing that mystifies me is why my cache gets cleaned out after I've accessed a key (retrieved the value) the first time. If I don't access the cached item, eventually the item will expire and my remove handler is called - it's all good. But when I retrieve the item the first time, my remove handler is called after a short while. The CacheEntryRemovedReason is set to:
CacheSpecificEviction // A cache entry was evicted for a reason that is defined by a particular cache implementation.
I can't find any explanation of what this means.
The mystifying thing here is that when I inspect the cache object while debugging in the handler (and on subsequent controller calls), the cache enumeration is empty. If I "set" (add) a new CacheItem to the cache, I can yet again access the key once, and history repeats.
The behavior is like a one-off caching mechanism, which I totally don't need.
Any help or comments would be much appreciated!
Some simplified code just for the fun of it:
private static ObjectCache cache = MemoryCache.Default;

internal void insertInCache(string key, int value) {
    CacheItemPolicy policy = new CacheItemPolicy() {
        AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration,
        Priority = CacheItemPriority.NotRemovable,
        SlidingExpiration = TimeSpan.FromMinutes(ITEM_EXPIRE_TIME),
        RemovedCallback = new CacheEntryRemovedCallback(RemovedHandler)
    };
    cache.Set(key, value, policy);
}

static void RemovedHandler(CacheEntryRemovedArguments args) {
    if (args.RemovedReason == CacheEntryRemovedReason.Expired) {
        // do something - or I actually want it to disappear when expired
    } else {
        cache.Set(args.CacheItem, somepolicy); // reinsert to keep in cache
    }
}

// Apparently this triggers some cache mong mode
internal void getSome(string key) {
    int thisIsWhatIWanted = (int)cache.GetCacheItem(key).Value;
}
This is just example code so please don't nag me about my skillz.
My own best guess is that it may have to do with the cache not being set up properly, MVC witchery, or the fact that I'm running my application on a debug IIS (Visual Studio).
