RedisClient in StackExchange.Redis

@MarcGravell I noticed that RedisClient (from ServiceStack.Redis) doesn't exist in StackExchange.Redis for Azure. Any plan to make it available?
In particular, I'm using RedisClient.AcquireLock for distributed locking.
Note: I was able to connect to the Redis instance in Azure with ServiceStack.Redis as well, but it would be nice if the same functionality were available in StackExchange.Redis.

ServiceStack != StackExchange. They are entirely unrelated... However, SE.Redis does have methods for working with distributed locks - see the methods starting with the word Lock on IDatabase etc.
If there is some specific feature you are after, perhaps raise an issue on GitHub. But distributed / centralised locking is definitely provided - specifically LockTake, LockRelease, LockQuery and LockExtend.
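For illustration, here is a minimal C# sketch of that API (the connection string, key name and expiry below are placeholders, not taken from the question): take the lock with LockTake, do the protected work, then release it with LockRelease.

    using System;
    using StackExchange.Redis;

    class DistributedLockSketch
    {
        static void Main()
        {
            // Replace with your Azure Redis connection string.
            var redis = ConnectionMultiplexer.Connect("localhost:6379");
            IDatabase db = redis.GetDatabase();

            string lockKey = "locks:my-resource";      // the key that represents the lock
            string token = Guid.NewGuid().ToString();  // unique token identifying this holder
            TimeSpan expiry = TimeSpan.FromSeconds(30);

            // LockTake succeeds only if the key is free; the expiry guards against a crashed holder.
            if (db.LockTake(lockKey, token, expiry))
            {
                try
                {
                    // ... work that must not run concurrently across machines ...
                }
                finally
                {
                    // Only the matching token can release the lock.
                    db.LockRelease(lockKey, token);
                }
            }
        }
    }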

Related

Couchbase connection pool

I am building an application using couchbase as my primary db.
I want to make the application scalable enough to handle multiple requests concurrently.
How do you create connection pools for couchbase in Go?
Postgres has pgxpool.
I'll give a bit more detail about how gocb works. Under the hood of gocb is another SDK called gocbcore (direct usage of gocbcore is not supported) which is a fully asynchronous API. Gocb provides a synchronous API over gocbcore as well as making the API quite a lot more user friendly.
What this means is that if you issue requests across multiple goroutines then you can get multiple requests written to the network at a time. This is effectively how the gocb bulk API works - https://github.com/couchbase/gocb/blob/master/collection_bulk.go. Both of these approaches are documented at https://docs.couchbase.com/go-sdk/current/howtos/concurrent-async-apis.html.
If you still don't get enough throughput then you can look at using one of these approaches alongside increasing the number of connections that the SDK makes to each node, via the kv_pool_size query string option in your connection string, i.e. couchbases://10.112.212.101?kv_pool_size=2. However, I'd recommend only changing this if the above approaches are not providing the throughput that you need. The SDK is designed to be highly performant anyway.
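As a rough sketch of the goroutine approach (the bucket name, credentials and document contents below are placeholders, not from the question): each goroutine makes its own synchronous gocb call, so several requests are on the wire at once, and kv_pool_size can be raised in the connection string if that still isn't enough.

    package main

    import (
        "fmt"
        "sync"
        "time"

        "github.com/couchbase/gocb/v2"
    )

    func main() {
        // kv_pool_size=2 doubles the connections per node; usually not needed.
        cluster, err := gocb.Connect("couchbases://10.112.212.101?kv_pool_size=2", gocb.ClusterOptions{
            Username: "Administrator",
            Password: "password",
        })
        if err != nil {
            panic(err)
        }
        defer cluster.Close(nil)

        bucket := cluster.Bucket("travel-sample")
        if err := bucket.WaitUntilReady(5*time.Second, nil); err != nil {
            panic(err)
        }
        collection := bucket.DefaultCollection()

        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                // Each goroutine blocks on its own call; gocbcore multiplexes them onto
                // the underlying connections so they are written to the network concurrently.
                if _, err := collection.Upsert(fmt.Sprintf("doc-%d", i), map[string]string{"n": fmt.Sprint(i)}, nil); err != nil {
                    fmt.Println("upsert failed:", err)
                }
            }(i)
        }
        wg.Wait()
    }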
go-couchbase already has a connection pool mechanism: conn_pool.go (even though there are a few issues linked to it, like issue 91).
You can see it tested in conn_pool_test.go, and in pools.go itself.
dnault points out in the comments the more recent couchbase/gocb, which uses a Cluster rather than a pool of connections.

Using org.springframework.cache.support.SimpleCacheManager in the cloud

I noticed that the Spring reference application (Sagan) uses the SimpleCacheManager implementation. See here for the source code of Sagan.
I was surprised by this choice because I thought that all but small applications running on a single node would use something like a Redis cache manager and not the simple cache manager.
How can a large application like Sagan (which I assume runs on Cloud Foundry) use this simple implementation?
Any comment welcome.
Well, the SimpleCacheManager choice was made because it was the simplest solution that could possibly work. Note that Sagan is, at least for now, not storing a lot of data in that cache and merely using it to respect various APIs' rate limits and to get better performance on some parts of the application.
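For context, a minimal sketch of what that choice amounts to (the cache names here are invented, not Sagan's): SimpleCacheManager just wraps a fixed list of in-process caches that are local to each JVM.

    import java.util.Arrays;

    import org.springframework.cache.CacheManager;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.cache.concurrent.ConcurrentMapCache;
    import org.springframework.cache.support.SimpleCacheManager;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableCaching
    public class CacheConfig {

        @Bean
        public CacheManager cacheManager() {
            SimpleCacheManager cacheManager = new SimpleCacheManager();
            // Each cache lives only in this JVM; other instances hold their own copies.
            cacheManager.setCaches(Arrays.asList(
                    new ConcurrentMapCache("github.requests"),
                    new ConcurrentMapCache("rendered.posts")));
            return cacheManager;
        }
    }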
Yes, Sagan is running on CloudFoundry (see this presentation) and is using CF marketplace services.
Even if cache consistency between instances is not a constraint for now, we could definitely add another marketplace service, here a Redis Cloud instance, and use this as a central cache repository.
Now that we're considering using that cache for more features, it even makes sense to at least consider that use case, since it could lower our monthly bill (pay a small fee for a redis service and use less memory for our CF instances).
In any case, thanks a lot balteo for this insightful question; we've created a GitHub issue for it.

MemoryCache object and load balancing

I'm writing a web application using ASP.NET MVC 3. I want to use the MemoryCache object, but I'm worried about it causing issues with load-balanced web servers. When I google it, it looks like that problem is solved on the server side, i.e. using AppFabric. If a company has load-balanced servers, is it on them to make sure they have AppFabric or something similar running? Or is there anything I can or should do as a developer about this?
First of all, for ASP.NET you should look at the ASP.NET Cache instead of MemoryCache. MemoryCache is a generic caching API that was introduced in .NET 4.0 to provide an equivalent of the ASP.NET Cache in non-web applications.
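To illustrate the distinction (key names and expirations below are arbitrary), here is a rough sketch of both APIs; note that each one is a per-server, in-process cache:

    using System;
    using System.Runtime.Caching;
    using System.Web;
    using System.Web.Caching;

    public static class CacheSamples
    {
        public static void UseAspNetCache(string key, object value)
        {
            // ASP.NET Cache: available to any web application via HttpRuntime.Cache.
            HttpRuntime.Cache.Insert(key, value, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        }

        public static void UseMemoryCache(string key, object value)
        {
            // MemoryCache (System.Runtime.Caching): the .NET 4.0 equivalent for non-web applications.
            MemoryCache.Default.Set(key, value,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(10) });
        }
    }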
You're correct to say that AppFabric resolves the issue of multiple servers having their own instances of cached data, in that it provides a single logical cache accessible from all your web servers. Before you leap on it as the solution to your problem, there are a couple of things to consider:
It does not ship as part of Windows Server - it is, as you say, on you to install it on your servers if you want to use it. When AppFabric was released, there was a suggestion that it would ship as part of the next release of Windows Server, but I haven't seen anything about Windows Server 2012 that confirms that to be the case.
You need extra servers for it, or at least you're advised to have them. Microsoft's recommendation for AppFabric is that you run it on dedicated servers. Which means that whilst AppFabric itself is a free download, you may be incurring additional Windows Server licence costs. Speaking of which...
You might need Enterprise Edition licences. If you want to use the High Availability features of AppFabric, you can only do this with servers running Enterprise Edition, which is a more expensive licence than Standard Edition.
You might not need it after all. Some of this will depend on your application and why you want to use a shared caching layer. If your concern is that caches on multiple servers could get out of sync with the database (or indeed each other), some judicious use of SqlCacheDependency objects might get you past the issue.
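A hedged sketch of that idea (it assumes a polling-based <sqlCacheDependency> database entry named "MyDatabase" is configured in web.config and that the Products table has been enabled for notifications, e.g. with aspnet_regsql; all names are placeholders): the cached entry is evicted automatically when the table changes, so each server's local cache converges with the database.

    using System.Web;
    using System.Web.Caching;

    public static class ProductCache
    {
        public static void CacheProducts(object products)
        {
            // "MyDatabase" refers to a <sqlCacheDependency><databases> entry in web.config.
            var dependency = new SqlCacheDependency("MyDatabase", "Products");
            HttpContext.Current.Cache.Insert("products", products, dependency);
        }
    }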
This CodeProject article Implementing Local MemoryCache Invalidation with Redis suggests an approach for handling the scenario you describe.
You didn't mention the flavor of load balancing that you are using: "sticky" or "stateless". By far the easiest solution is to use sticky sessions.
If you want to use local memory caches and stateless load balancing, you can end up with race conditions when the cross-server invalidation messages arrive late. This can be particularly problematic if you use the Post-Redirect-Get pattern so common in ASP.NET MVC. This can be overcome by using cookies to supplement the cache invalidation broadcasts. I detail this in a blog post here.

Appfabric Caching: Configuration Provider as single point of failure

After doing some initial research into using Appfabric for caching, my understanding is that the configuration provider for the cluster is a single point of failure as mentioned here:
MSDN
I want to use appfabric just for distributed caching, particularly for the tagging features. What are the options to avoid having the configuration provider as this failure point? I thought of two but not sure if one is better or if there are any other options.
(1) Create my own caching service configuration provider. I'm guessing this is possible (?) but I'm not sure how to go about it. I'd probably make a provider that fetched the xml file from S3 since I'm already using AWS.
(2) Configure each cache as a single node cluster and then create a proxy client that uses the individual nodes as a distributed cache, a la a memcached type client.
Thoughts or recommendations, or anything else I should consider in making this decision?
Yes, it is a single point of failure.
Microsoft's recommended solutions seem to be:
(SQL Server provider) Use SQL Server clustering. In my limited experience of it, using SQL Server clustering for this is probably a case of 'the cure is worse than the disease' i.e. it brings a lot of pain. Unless you've already got a SQL Server cluster available, avoid!
(XML provider) Use Windows Server clustering. I have even less knowledge of this than SQL clustering, so I can't say how well (or otherwise) this might work. It doesn't strike me as a trivial thing to do, though.
You can create your own configuration provider by implementing the ICustomProvider interface and making some registry entries. Using AWS seems like a really good idea to make the config provider resilient, I'd be interested to see how you got on with this.
Creating a proxy client seems to me like you'd be making a lot of work for yourself, at that point it feels like you'd be more fighting against AppFabric rather than working with it.
We also tried AppFabric, but it gave us a fair few headaches; for one, there's no API access, which made it very difficult to use our current unit testing strategy. We have now moved to NCache, which is a better option for us than AppFabric. NCache provides a tagging feature and it is not a single point of failure.

Move application to Websphere clusters

What should we take care of before moving an application from a single WebSphere Application Server to a WebSphere cluster?
This is my list from experience. It is not complete but should cover the most common problem areas:
Plan ahead the distributed session management configuration (i.e. will you use memory-to-memory or database-based replication). Note that if you are still on a 32-bit platform, the resource overhead from clustering might cause instability issues if your application already uses a lot of memory.
Make sure that everything you put into user sessions can be serialized with the default serializer (i.e. implements Serializable); otherwise you might run into problems with distributed sessions (see the sketch after this list).
The same goes for everything you put into DynaCache. Make sure everything serializes properly.
Specify and make sure all the resource definitions (JDBC providers etc.) are made at the proper scope. I would usually recommend using the Cluster scope for everything used by the applications installed to the cluster. That ensures the test-connection features work from the proper points, and that you don't create conflicting definitions.
Make sure your application uses relative paths for resources in web interfaces. Once you start load balancing, you can run into serious problems if a lot of paths have been hard-coded.
If you have any sort of timers, make sure they work well with clusters. With Quartz that probably means you should use the JDBC store for timer tasks. With EJB Timers, make sure you register the timers only once (it is possible to corrupt the WAS timer database if several nodes attempt the registration at exactly the same time) and make sure you install them at the Cluster scope.
Make sure you use the SSO mechanisms provided by WAS. If you have a custom implementation, make sure it handles moving the user between servers in the cluster well.
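A small sketch of the session-serialization point above (class and field names are invented): every attribute placed in the HttpSession, and every object reachable from it, must be serializable, or replication of that attribute will fail.

    import java.io.Serializable;

    public class CartItem implements Serializable {
        private static final long serialVersionUID = 1L;

        private final String sku;
        private final int quantity;
        // transient fields are skipped during serialization; re-derive them after deserialization.
        private transient Object renderingCache;

        public CartItem(String sku, int quantity) {
            this.sku = sku;
            this.quantity = quantity;
        }
    }

    // Usage in a servlet or controller:
    //   request.getSession().setAttribute("cartItem", new CartItem("A-1", 2));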
Keep it simple: depending on your requirements, try configuring your load balancer to use sticky sessions and avoid holding state in your HTTP session. That way you don't need resource-hungry in-memory session replication.
Single Sign On isn't an issue for a single cluster as your HTTP clients will not be moving off the same http://server.acme.com/... host domain name.
Most of your testing should focus on database contention. If you have a highly transactional application (i.e. many writes to the same table), make sure you look at your database isolation levels so that locks are not held unnecessarily. The same goes for your transaction demarcation: keep transactions as brief as possible. If you don't have database skills yourself, make sure you get a database analyst to help you monitor the database while you test.
It is also good advice to raise a PMR with IBM Support ahead of any major changes, such as this one or upgrading to new versions. Raise it as a "Software Usage Question" and they can provide you with feedback from their knowledge database based on other customers' input. The same applies to any product you have a support agreement for: ask support before problems occur.
