Transition from JedisPool to JedisCluster - jedis

My application uses ElastiCache on AWS for caching. Our current setup uses a basic Redis cluster with no sharding or failover. We now need to move to a clustered Redis ElastiCache deployment with sharding, failover, etc. enabled. Creating the new cluster on AWS was the easy part, but we are a bit lost on how to modify our Java code to read from and write to the cluster.
Current Implementation -
Initialize a JedisPool.
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(100);
jedisPoolConfig.setMaxIdle(10);
jedisPoolConfig.setMaxWaitMillis(50);
jedisPoolConfig.setTestOnBorrow(true);
String host = "mycache.db8e1v.0001.usw2.cache.amazonaws.com";
int port = 6379;
int timeout = 50;
JedisPool jedisPool = new JedisPool(jedisPoolConfig, host, port, timeout);
A Jedis object is borrowed from the pool every time we need to perform an operation:
Jedis jedis = jedisPool.getResource();
The new implementation would be
JedisPoolConfig jedisPoolConfig = ...
HostAndPort hostAndPort = new HostAndPort(host, port);
jedisCluster = new JedisCluster(Collections.singleton(hostAndPort), jedisPoolConfig);
Question:
The documentation says JedisCluster is to be used in place of Jedis (not JedisPool). Does this mean I need to create and destroy a JedisCluster object in each thread, or can I reuse the same object and have it handle thread safety? And when exactly do I close the JedisCluster? At the end of the application?

The JedisCluster holds internal JedisPools for each node in the cluster.
Does this mean I need to create and destroy a JedisCluster object in each thread? Or can I re-use the same object and it will handle the thread safety?
You can reuse the same object.
When do I exactly close the JedisCluster then? At the end of the application?
Yes.
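To make the lifecycle concrete, here is a minimal sketch of that pattern (assuming the jedis client is on the classpath; the class name, host, and port are placeholders of my own, not from the question): build one JedisCluster at startup, share it across all threads, and close it exactly once on shutdown.

```java
import java.util.Collections;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

public final class CacheClient {
    // One shared, thread-safe JedisCluster for the whole application.
    // It manages an internal JedisPool per cluster node.
    private static final JedisCluster CLUSTER = new JedisCluster(
            Collections.singleton(new HostAndPort("mycache.example.amazonaws.com", 6379)),
            new JedisPoolConfig());

    private CacheClient() {}

    public static JedisCluster get() {
        return CLUSTER;
    }

    // Call once when the application exits, e.g. from a JVM shutdown hook.
    public static void shutdown() {
        try {
            CLUSTER.close();
        } catch (Exception e) {
            // older jedis versions declare a checked IOException on close()
        }
    }
}
```

Callers then just use CacheClient.get().set(key, value) from any thread; no per-thread construction or pooling of the JedisCluster itself is needed.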

Replacing all Jedis calls with JedisCluster calls is the best way to migrate.
But I wanted pipeline support, which JedisCluster currently lacks. So another idea is to extend JedisCluster to return the Jedis for a particular key:
protected Jedis getJedis(String key) {
    int slot = JedisClusterCRC16.getSlot(key);
    return connectionHandler.getConnectionFromSlot(slot);
}
The extended class has to be in the redis.clients.jedis package to access getConnectionFromSlot.
Now a pipeline can be executed on the Jedis.
And you need a different Jedis for each key you want to operate on, which makes sense: in cluster mode, each key can live on a different node.
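The slot routing used above is just CRC16 modulo 16384. As an illustration of what JedisClusterCRC16.getSlot computes (this is a standalone re-implementation for clarity, not the jedis code itself), the key-to-slot mapping, including Redis hash tags, looks roughly like this:

```java
import java.nio.charset.StandardCharsets;

public final class SlotDemo {
    // CRC-16/XMODEM (poly 0x1021, init 0x0000), the checksum Redis Cluster uses.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Keys sharing a non-empty {hash tag} hash only the tag,
    // so they are guaranteed to land in the same slot (and node).
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slot("foo"));                                  // a slot in [0, 16384)
        System.out.println(slot("{user1}.name") == slot("{user1}.age"));  // true: same tag, same slot
    }
}
```

This is why multi-key operations (and pipelines) in cluster mode only work cleanly when the keys share a hash tag: otherwise each key may need a connection to a different node.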

Related

How to migrate from deprecated EmbeddedJMS to recommended EmbeddedActiveMQ

I have a Spring Boot app with an embedded queue. I'm using the spring-boot-starter-artemis dependency and trying to upgrade my Java app.
I could not find any guide, and a few things are not clear, e.g.:
Checking if a queue exists:
EmbeddedJMS:
jmsServer.getJMSServerManager().getBindingsOnQueue(queueName).length > 0
Can it be:
jmsServer.getActiveMQServer().isAddressBound(queueName)
or maybe:
jmsServer.getActiveMQServer().bindingQuery(SimpleString.toSimpleString(queueName)).isExists()
Creation of queue:
jmsServer.getJMSServerManager().createQueue(true, queueName, null, true, queueName)
with params (boolean storeConfig, String queueName, String selectorString, boolean durable, String... bindings)
Is it the same as:
QueueConfiguration queueConfiguration = new QueueConfiguration(queueName);
queueConfiguration.setDurable(true);
return jmsServer.getActiveMQServer().createQueue(queueConfiguration, true).isEnabled();
Getting address:
jmsServer.getJMSServerManager().getAddressSettings(address);
Is it the same as:
jmsServer.getActiveMQServer().getAddressSettingsRepository().getMatch(address);
I'm not sure how to migrate the connection factory settings:
final Configuration jmsConfiguration = new ConfigurationImpl();
jmsConfiguration.getConnectionFactoryConfigurations()
    .add(new ConnectionFactoryConfigurationImpl()
        .setName("cf")
        .setConnectorNames(Collections.singletonList("connector"))
        .setBindings("cf"));
embeddedActiveMQ.setConfiguration(jmsConfiguration);
To check if a queue exists I think the simplest equivalent solution would be:
server.locateQueue("myQueue") != null
To create a queue the simplest equivalent solution would be:
server.createQueue(new QueueConfiguration("myQueue")) != null
The queue will be durable by default, so there's no reason to use setDurable(true).
To get address settings you use this (as you suspect):
server.getAddressSettingsRepository().getMatch(address);
Regarding connection factories, you don't actually need to configure connection factories on the broker. You simply need to configure the properties for the InitialContext for your JNDI lookup. See this documentation for more details on that.

How to set the RequestQueue cache when creating RequestCache using Volley class?

I'm creating a RequestQueue in a singleton class. The example provided by Google creates it by calling newRequestQueue on the Volley class, which takes just the Context of the current application as a parameter:
mRequestQueue = Volley.newRequestQueue(mCtx.getApplicationContext());
Previously I was not using a singleton class, and I created a RequestQueue in the activity's onCreate with this code:
Cache cache = new DiskBasedCache(getCacheDir(), 1024 * 1024); // 1MB cap
// Set up the network to use HttpURLConnection as the HTTP client.
Network network = new BasicNetwork(new HurlStack());
// Instantiate the RequestQueue with the cache and network.
mRequestQueue = new RequestQueue(cache, network);
Because this code sets both the network and the cache, I searched the Volley class for an appropriate newRequestQueue method, but I noticed there are just two overloads of newRequestQueue: one takes an HttpStack parameter that I can use to set the network, but neither takes a Cache parameter.
So the question is: is there a way to create a RequestQueue with the Volley class while customizing its cache?
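One hedged sketch of a workaround, built from the same Volley toolbox classes as the earlier onCreate snippet (the class and method names here are my own, not from Volley): since Volley.newRequestQueue has no Cache overload, construct the RequestQueue yourself and remember to call start(), which newRequestQueue would otherwise do for you.

```java
import android.content.Context;

import com.android.volley.Cache;
import com.android.volley.Network;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;

public final class QueueFactory {
    private QueueFactory() {}

    // Builds a RequestQueue with a custom disk cache size, mirroring what
    // Volley.newRequestQueue does internally, but with the cache under our control.
    public static RequestQueue newQueueWithCache(Context context, int cacheBytes) {
        Cache cache = new DiskBasedCache(context.getCacheDir(), cacheBytes);
        Network network = new BasicNetwork(new HurlStack());
        RequestQueue queue = new RequestQueue(cache, network);
        queue.start(); // Volley.newRequestQueue calls this for you; here we must do it ourselves
        return queue;
    }
}
```

In the singleton, the mRequestQueue field can then be initialized with QueueFactory.newQueueWithCache(mCtx.getApplicationContext(), 1024 * 1024) instead of Volley.newRequestQueue.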

Check Session with Cassandra Datastax Java Driver

Is there any direct way to check if a Cluster/Session is connected/valid/ok?
I mean, I have a com.datastax.driver.core.Session created in a never-ending thread, and I'd like to ensure the session is ok every time it is needed. I use the following cluster initialization, but I'm not sure this is enough...
Cluster cluster = Cluster.builder().addContactPoint(url)
    .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
    .withReconnectionPolicy(new ConstantReconnectionPolicy(1000L))
    .build();
In fact when using the DataStax Java Driver, you have a hidden/magic capability embedded:
The driver is aware of the full network topology (nodes topology across datacenters and nodes availabilities).
Thus, the only thing you have to do is to initialise your cluster with a few nodes(1) and then you can be sure at every moment that if there is at least one available node your request will be performed correctly. Because the driver is topology aware, if one node (even initialisation nodes) goes out of availability, the driver will automagically route your request to another available node.
In summary, your code is good(1).
(1): You should provide a few nodes in order to be fault tolerant in the cluster initialisation phase. Indeed, if one initialisation node is down, the driver has then the possibility to contact another one to discover the full topology.
I have a local development environment setup where I am starting up my java application and Cassandra (Docker) container at the same time, so Cassandra will normally not be in a ready state when the java application first attempts to connect.
When this is starting up the application will throw a NoHostAvailableException when the Cluster instance attempts to create a Session. Subsequent attempts to create a Session from the Cluster will then throw an IllegalStateException because the cluster instance was closed after the first exception.
What I did to remedy this was to create a check method that attempts to create a Cluster and Session and then immediately closes them. See this:
private void waitForCassandraToBeReady(String keyspace, Cluster.Builder builder) {
    RuntimeException exception = null;
    int retries = 0;
    while (retries++ < 40) {
        Session session = null;
        Cluster cluster = null;
        try {
            cluster = builder.build();
            session = cluster.connect(keyspace);
            log.info("Cassandra is available");
            return;
        } catch (RuntimeException e) {
            log.warn("Cassandra not available, try {}", retries);
            exception = e;
        } finally {
            if (session != null && !session.isClosed()) session.close();
            if (cluster != null && !cluster.isClosed()) cluster.close();
        }
        sleep();
    }
    log.error("Retries exceeded waiting for Cassandra to be available");
    if (exception != null) throw exception;
    else throw new RuntimeException("Cassandra not available");
}
After this method returns, I then create a Cluster and Session independent of this check method.
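The wait-and-retry pattern above is generic. Stripped of the Cassandra specifics, it can be sketched as a small reusable helper (the class and method names are my own, not from the driver):

```java
import java.util.function.Supplier;

public final class Retry {
    private Retry() {}

    // Runs the action up to maxAttempts times, sleeping between failures.
    // Returns the first successful result, or rethrows the last exception.
    public static <T> T withRetries(Supplier<T> action, int maxAttempts, long sleepMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break; // give up early if interrupted
            }
        }
        throw (last != null) ? last : new RuntimeException("no attempts made");
    }
}
```

With such a helper, waitForCassandraToBeReady reduces to a call whose action builds a Cluster, connects a Session in a try-with-resources block (Cluster and Session are Closeable in the 3.x driver), and returns on success.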

Using AWS API/SDK to Register new EC2 Instances with Existing Elastic Load Balancer - is it possible?

I'm working on using the .Net SDK to help automate the deployment of an application into Windows EC2 instances. The process I want to achieve is:
1. Create a new EC2 instance - this "bootstraps" itself by loading in the new application version using a service.
2. Ensure the new instance is in the 'running' state.
3. Run some simple acceptance tests on the new instance.
4. Register the new instance with an existing Elastic Load Balancer that has an instance running the old version of the application.
5. When the new instance is registered with the load balancer, de-register the old instance.
6. Stop the old EC2 instance.
I've managed to get steps 1 and 2 working, and I'm pretty confident about 3 and 6.
To do this I've been writing a simple C# console app that uses the AWS .Net SDK v1.3.2 to make the various API calls.
However, when I get to step 4 I cannot get the new instance registered with the load balancer. Here is my code:
public IList<Instance> PointToNewInstance(string newInstanceId)
{
    var allInstances = new List<Instance>();
    using (var elbClient = ClientUtilities.GetElbClient())
    {
        try
        {
            var newInstances = new List<Instance> { new Instance(newInstanceId) };
            var registInstancesRequest = new RegisterInstancesWithLoadBalancerRequest
            {
                LoadBalancerName = LoadBalancerName,
                Instances = newInstances
            };
            var registerReponse = elbClient.RegisterInstancesWithLoadBalancer(registInstancesRequest);
            allInstances = registerReponse.RegisterInstancesWithLoadBalancerResult.Instances;
            var describeInstanceHealthRequest = new DescribeInstanceHealthRequest
            {
                Instances = newInstances
            };
            DescribeInstanceHealthResponse describeInstanceHealthResponse;
            do
            {
                describeInstanceHealthResponse = elbClient.DescribeInstanceHealth(describeInstanceHealthRequest);
            } while (describeInstanceHealthResponse.DescribeInstanceHealthResult.InstanceStates[0].State == "OutOfService");
            _log.DebugFormat("New instance [{0}] now in service - about to stop remove old instance", newInstanceId);
            if (allInstances.Any(i => i.InstanceId != newInstanceId))
            {
                elbClient.DeregisterInstancesFromLoadBalancer(new DeregisterInstancesFromLoadBalancerRequest
                {
                    Instances = allInstances.Where(i => i.InstanceId != newInstanceId).ToList(),
                    LoadBalancerName = LoadBalancerName
                });
                foreach (var instance in allInstances.Where(i => i.InstanceId != newInstanceId).ToList())
                {
                    _log.DebugFormat("Instance [{0}] has now been de-registered from load-balancer [{1}]", instance.InstanceId, LoadBalancerName);
                }
            }
        }
        catch (Exception exception)
        {
            _log.Error(exception);
        }
    }
    return allInstances.Where(i => i.InstanceId != newInstanceId).ToList();
}
The code just freezes at this line:
var registerReponse = elbClient.RegisterInstancesWithLoadBalancer(registInstancesRequest);
When I looked in more detail at the documentation (relevant documentation here) I noticed this line:
NOTE: In order for this call to be successful, the client must have created the LoadBalancer. The client must provide the same account credentials as those that were used to create the LoadBalancer.
Is it actually possible to use the API to register new instances with an existing load balancer?
All of that is easy to implement. Use Auto Scaling. Use API.
As Roman mentions, it sounds like Auto Scaling is a good way for you to go, it may not solve all of your problems but its certainly a good starting point:
- an auto scaling group can be tied to a load balancer, e.g. "I'll have x healthy instances"
- new instances are automatically added to the load balancer (no traffic will be sent until they pass the health check)
- you can define custom health checks, such as pinging http://hostname/isalive - just have your instance respond to these requests once it passes step 3
- you can define scaling policies, but by default if you're over capacity the oldest instances will be killed
- you don't mention the app's use case, but if you don't want a public-facing address you can use an internal load balancer that doesn't take any traffic and just looks after the health check
- where possible you should always use least-privilege principles for security; with your method you're going to have to give every instance a lot of power to control other instances, and whether through mistake or abuse this can go wrong very easily

WCF ChannelFactory and Channel caching in ASP.NET client application

I'm building a series of WCF Services that are going to be used by more than one application. Because of that I'm trying to define a common library to access WCF services.
Knowing that each service request made by different users should use a different Channel, I'm thinking of caching the Channel per request (HttpContext.Current.Items) and caching the ChannelFactory used to create the channels per application (HttpApplication.Items), since I can create more than one channel with the same ChannelFactory.
However, I have a question regarding this cache mechanism when it comes to closing the ChannelFactory and Channel.
Do I need to close the Channel after it's used, at the end of the request, or is it ok to leave it there to be closed (?) when the context of that request dies?
What about ChannelFactory? Since each channel is associated with the ChannelFactory that created it, is it safe to keep the same ChannelFactory during the life of the application process (AppDomain)?
This is the code I'm using to manage this:
public class ServiceFactory
{
    private static Dictionary<string, object> ListOfOpenedChannels
    {
        get
        {
            if (null == HttpContext.Current.Items[HttpContext.Current.Session.SessionID + "_ListOfOpenedChannels"])
            {
                HttpContext.Current.Items[HttpContext.Current.Session.SessionID + "_ListOfOpenedChannels"] = new Dictionary<string, object>();
            }
            return (Dictionary<string, object>)HttpContext.Current.Items[HttpContext.Current.Session.SessionID + "_ListOfOpenedChannels"];
        }
        set
        {
            HttpContext.Current.Items[HttpContext.Current.Session.SessionID + "_ListOfOpenedChannels"] = value;
        }
    }

    public static T CreateServiceChannel<T>()
    {
        string key = typeof(T).Name;
        if (ListOfOpenedChannels.ContainsKey(key))
        {
            return (T)ListOfOpenedChannels[key];
        }
        else
        {
            ChannelFactory<T> channelF = new ChannelFactory<T>("IUsuarioService");
            T channel = channelF.CreateChannel();
            ListOfOpenedChannels.Add(key, channel);
            return channel;
        }
    }
}
Thanks!
Ideally close the channel as soon as you are done with it. This will place it back into the channel pool so it can be used by another worker thread.
Yes, the channel factory (the expensive bit) can remain for the lifetime of the application.
Update
As of .NET 4.5 there are built-in caching options for factories: see ChannelFactory Caching in .NET 4.5.
This is an aside: why are you using SessionID as part of the context key? HttpContext.Current.Items is already unique per request. That is:
HttpContext.Current.Items[HttpContext.Current.Session.SessionID +"_ListOfOpenedChannels"]
should be functionally equivalent to:
HttpContext.Current.Items["ListOfOpenedChannels"]
