Is there any direct way to check if a Cluster/Session is connected/valid/ok?
I mean, I have a com.datastax.driver.core.Session created in a never-ending thread, and I'd like to make sure the session is OK every time it is needed. I use the following cluster initialization, but I'm not sure this is enough...
Cluster.builder().addContactPoint(url)
       .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
       .withReconnectionPolicy(new ConstantReconnectionPolicy(1000L))
       .build();
In fact, when using the DataStax Java Driver, you have a hidden/magic capability built in:
The driver is aware of the full network topology (node topology across datacenters and node availability).
Thus, the only thing you have to do is initialise your cluster with a few nodes(1), and you can then be sure at every moment that, as long as at least one node is available, your request will be performed correctly. Because the driver is topology aware, if any node (even an initialisation node) becomes unavailable, the driver will automagically route your request to another available node.
In summary, your code is good(1).
(1): You should provide a few nodes in order to be fault tolerant during the cluster initialisation phase. Indeed, if one initialisation node is down, the driver can then contact another one to discover the full topology.
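For example, a minimal sketch of such an initialisation with several contact points (the addresses below are placeholders):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.ConstantReconnectionPolicy;
import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;

// Several contact points: if one is down at startup, the driver can still
// discover the full topology through the others.
Cluster cluster = Cluster.builder()
        .addContactPoints("10.0.0.1", "10.0.0.2", "10.0.0.3")
        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(1000L))
        .build();
Session session = cluster.connect();
// ... the driver keeps routing requests to whichever nodes are available ...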
I have a local development setup where I start my Java application and a Cassandra (Docker) container at the same time, so Cassandra will normally not be in a ready state when the Java application first attempts to connect.
During startup the application will throw a NoHostAvailableException when the Cluster instance attempts to create a Session. Subsequent attempts to create a Session from the same Cluster will then throw an IllegalStateException, because the Cluster instance was closed after the first exception.
What I did to remedy this was to create a check method that attempts to create a Cluster and Session and then immediately closes them. See this:
private void waitForCassandraToBeReady(String keyspace, Cluster.Builder builder) {
    RuntimeException exception = null;
    int retries = 0;
    while (retries++ < 40) {
        Session session = null;
        Cluster cluster = null;
        try {
            cluster = builder.build();
            session = cluster.connect(keyspace);
            log.info("Cassandra is available");
            return;
        } catch (RuntimeException e) {
            log.warn("Cassandra not available, try {}", retries);
            exception = e;
        } finally {
            if (session != null && !session.isClosed()) session.close();
            if (cluster != null && !cluster.isClosed()) cluster.close();
        }
        sleep();
    }
    log.error("Retries exceeded waiting for Cassandra to be available");
    if (exception != null) throw exception;
    else throw new RuntimeException("Cassandra not available");
}
After this method returns, I then create a Cluster and Session independent of this check method.
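The sleep() helper is not shown above; a minimal sketch of it, and of the subsequent independent Cluster/Session creation (contact point and keyspace are placeholders), could look like this:
// Hypothetical helper; the original post does not show it.
private void sleep() {
    try {
        Thread.sleep(3000L); // assumed 3-second pause between retries
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

// Typical usage once Cassandra is reachable (placeholder contact point/keyspace):
private Session connectWhenReady() {
    Cluster.Builder builder = Cluster.builder().addContactPoint("127.0.0.1");
    waitForCassandraToBeReady("my_keyspace", builder);
    Cluster cluster = builder.build();
    return cluster.connect("my_keyspace");
}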
My application uses ElastiCache on AWS for caching purposes. Our current setup uses a basic Redis cluster with no sharding or failover. We now need to move to a clustered Redis ElastiCache with sharding, failover, etc. enabled. Creating the new cluster on AWS was the easy bit, but we are a bit lost on how to modify our Java code to read and write from the cluster.
Current Implementation -
Initialize a JedisPool.
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(100);
jedisPoolConfig.setMaxIdle(10);
jedisPoolConfig.setMaxWaitMillis(50);
jedisPoolConfig.setTestOnBorrow(true);
String host = "mycache.db8e1v.0001.usw2.cache.amazonaws.com";
int port = 6379;
int timeout = 50;
JedisPool jedisPool = new JedisPool(jedisPoolConfig, host, port, timeout);
A Jedis object is borrowed from the pool every time we need to perform an operation:
Jedis jedis = jedisPool.getResource();
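(Equivalently, since Jedis is Closeable, the borrow-and-return can be written with try-with-resources; the key and value below are just placeholders.)
// Borrow a connection, use it, and return it to the pool automatically.
try (Jedis jedis = jedisPool.getResource()) {
    jedis.set("some-key", "some-value");
    String value = jedis.get("some-key");
}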
The new implementation would be
JedisPoolConfig jedisPoolConfig = ...
HostAndPort hostAndPort = new HostAndPort(host, port);
JedisCluster jedisCluster = new JedisCluster(Collections.singleton(hostAndPort), jedisPoolConfig);
Question:
The documentation says JedisCluster is to be used in place of Jedis (not JedisPool). Does this mean I need to create and destroy a JedisCluster object in each thread? Or can I re-use the same object, and it will handle the thread safety? When do I exactly close the JedisCluster then? At the end of the application?
The JedisCluster holds internal JedisPools for each node in the cluster.
Does this mean I need to create and destroy a JedisCluster object in each thread? Or can I re-use the same object, and it will handle the thread safety?
You can reuse the same object.
When do I exactly close the JedisCluster then? At the end of the application?
Yes.
Replacing all Jedis-calls with JedisCluster-calls is the best way to migrate.
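For instance, a minimal sketch of that migration (the endpoint below is a placeholder): one application-wide JedisCluster, used from any thread and closed once at shutdown.
import java.io.IOException;
import java.util.Collections;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

public class CacheClient {
    // One JedisCluster for the whole application; it is thread-safe and
    // keeps an internal JedisPool per cluster node.
    private static final JedisCluster CLUSTER = new JedisCluster(
            Collections.singleton(new HostAndPort("mycache.example.com", 6379)), // placeholder endpoint
            new JedisPoolConfig());

    public static String read(String key) {
        return CLUSTER.get(key);      // safe to call concurrently from many threads
    }

    public static void write(String key, String value) {
        CLUSTER.set(key, value);
    }

    public static void shutdown() throws IOException {
        CLUSTER.close();              // close once, when the application stops
    }
}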
But I wanted pipeline support, which JedisCluster currently lacks. So one other idea is to extend JedisCluster to return the underlying Jedis (from the node's JedisPool) for a particular key:
protected Jedis getJedis(String key) {
    int slot = JedisClusterCRC16.getSlot(key);
    return connectionHandler.getConnectionFromSlot(slot);
}
The extended class has to be in the redis.clients.jedis package to access getConnectionFromSlot.
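A sketch of what that subclass might look like; the class name is mine, and I have widened getJedis to public so it can be called from application code:
package redis.clients.jedis;

import java.util.Set;
import redis.clients.util.JedisClusterCRC16; // redis.clients.jedis.util.JedisClusterCRC16 in Jedis 3.x

// Hypothetical subclass; it must sit in the redis.clients.jedis package so that
// the protected connectionHandler and getConnectionFromSlot(...) are accessible.
public class PipelineAwareJedisCluster extends JedisCluster {

    public PipelineAwareJedisCluster(Set<HostAndPort> nodes, JedisPoolConfig poolConfig) {
        super(nodes, poolConfig);
    }

    // Returns the connection of the node that currently owns the given key's slot.
    public Jedis getJedis(String key) {
        int slot = JedisClusterCRC16.getSlot(key);
        return connectionHandler.getConnectionFromSlot(slot);
    }
}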
Now a pipeline can be executed on the Jedis.
And you need a different Jedis for each key you want to operate on. Which makes sense - in cluster mode, each key can be on a different node.
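Usage for a pipeline might then look like this (key and commands are placeholders; cluster is an instance of the subclass sketched above):
// All keys in a single pipeline must live on the node we picked, so we
// derive the connection from one key and only touch that key here.
Jedis jedis = cluster.getJedis("user:42");
try {
    Pipeline pipeline = jedis.pipelined();
    pipeline.hset("user:42", "name", "Alice");
    pipeline.expire("user:42", 3600);
    pipeline.sync();                 // flush the commands and wait for the replies
} finally {
    jedis.close();                   // return the connection to its node's pool
}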
I'm new to JMS and am trying to set up Apache ActiveMQ for a messaging application as an alternative to Azure Service Bus, which I'm very familiar with. I would like to set up topics and durable subscribers as an administrative task, and would like the runtime process to consume messages from those existing durable subscribers based only on their names and, possibly, the client id.
How do I retrieve an existing durable subscriber, without knowing the selector?
All the documentation and the samples I've read show that the only way to consume a message is to call the session.createDurableSubscriber() method.
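(For reference, a minimal JMS-side sketch of that call; broker URL, client id, topic and subscription names are placeholders.)
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.setClientID("my-client-id");        // must match the registered subscription
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("my.topic");
        // Re-attaches to the existing durable subscription if client id + name match;
        // passing no selector here means "no selector", which is exactly the problem
        // described above when the original selector is unknown.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "my-subscription");

        System.out.println(subscriber.receive(5000));  // read one message (or null)
        connection.close();
    }
}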
Additionally, I prefer to use the AMQP abstraction over JMS. So I found the following code to retrieve an existing subscriber:
public static ReceiverLink RecoverDurableSource(Session session, string topicPath, string subscriptionName)
{
    Source recovered = null;
    using (var attached = new ManualResetEvent(false))
    {
        void OnAttached(ILink link, Attach attach)
        {
            recovered = (Source)attach.Source;
            attached.Set();
        }

        ReceiverLink receiver = null;
        try
        {
            receiver = new ReceiverLink(session, subscriptionName, (Source)null, OnAttached);
            if (!attached.WaitOne(TimeSpan.FromSeconds(5)))
                return null;
            CloseReceiverLink(receiver);
            return recovered != null
                ? new ReceiverLink(session, subscriptionName, recovered, null)
                : null;
        }
        finally
        {
            if (recovered == null)
                CloseReceiverLink(receiver);
        }
    }
}

private static void CloseReceiverLink(ReceiverLink receiver)
{
    if (receiver == null)
        return;
    if (receiver.Error == null || Equals(receiver.Error.Condition, new Symbol("amqp:not-found")))
        receiver.Close();
}
However, this code has the nasty side effect of first creating a default durable subscriber (manifested in this code by the ReceiverLink object) with the same name, and then, if the subscription already exists, re-creating it with the correct Source object.
But this may disrupt the reception of messages at the time this method is called.
I have to connect CloudHub to HBase. I tried the Community Edition HBase connector but did not succeed. Then I tried with Java code and failed again. The HBase team has given me only the master IP (10.99.X.X), the port (2181), and the username (hadoop).
I have tried the following options:
Through Java Code:
public Object transformMessage(MuleMessage message, String outputEncoding) throws TransformerException {
    try {
        Configuration conf = HBaseConfiguration.create();
        //conf.set("hbase.rotdir", "/hbase");
        conf.set("hbase.zookeeper.quorum", "10.99.X.X");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.client.retries.number", "3");
        logger.info("############# Config Created ##########");

        // Create a get api for consignment table
        logger.info("############# Starting Consignment Test ##########");
        // read from table
        // Creating a HTable instance
        HTable table = new HTable(conf, "consignment");
        logger.info("############# HTable instance Created ##########");

        // Create a Get object
        Get get = new Get(Bytes.toBytes("6910358750"));
        logger.info("############# RowKey Created ##########");

        // Set column family to be queried
        get.addFamily(Bytes.toBytes("consignment_detail"));
        logger.info("############# CF Created ##########");

        // Perform get and capture result in an iterable
        Result result = table.get(get);
        logger.info("############# Result Created ##########");

        // Print consignment data
        logger.info(result);
        logger.info(" #### Ending Consignment Test ###");

        // Beginning Consignment Item Scanner api
        logger.info("############# Starting Consignmentitem test ##########");
        HTable table1 = new HTable(conf, "consignmentitem");
        logger.info("############# HTable instance Created ##########");

        // Create a scan object with start rowkey and end rowkey (partial row key scan)
        // actual rowkey design: <consignment_id>-<trn>-<orderline>
        Scan scan = new Scan(Bytes.toBytes("6910358750"), Bytes.toBytes("6910358751"));
        logger.info("############# Partial RowKeys Created ##########");

        // Perform a scan using start and stop rowkeys
        ResultScanner scanner = table1.getScanner(scan);
        // Iterate over results and print them
        for (Result result1 = scanner.next(); result1 != null; result1 = scanner.next()) {
            logger.info("Printing Records\n");
            logger.info(result1);
        }
        return scanner;
    } catch (MasterNotRunningException e) {
        logger.error("HBase connection failed! --> MasterNotRunningException");
        logger.error(e);
    } catch (ZooKeeperConnectionException e) {
        logger.error("Zookeeper connection failed! --> ZooKeeperConnectionException");
        logger.error(e);
    } catch (Exception e) {
        logger.error("Main Exception Found! -- Exception");
        logger.error(e);
    }
    return "Not Connected";
}
The above code gives the following error:
java.net.UnknownHostException: unknown host: ip-10-99-X-X.ap-southeast-2.compute.internal
It seems that CloudHub is not able to resolve the host name because CloudHub is not configured with that DNS.
When I tried the Community Edition HBase connector, it gave the following exception:
org.apache.hadoop.hbase.MasterNotRunningException: Retried 3 times
Please suggest a way forward.
It appears that you are configuring your client to try to connect to the zookeeper quorum at a private IP address (10.99.X.X). I'll assume you've already set up a VPC, which is required for your CloudHub worker to connect to your private network.
Your UnknownHostException implies that the HBase server you are connecting to is hosted on AWS, which defines private domain names similar to the one in the error message.
So what might be happening is this:
Mule connects to Zookeeper, asks what HBase nodes there are, and gets back ip-10-99-X-X.ap-southeast-2.compute.internal.
Mule tries to connect to that to find the HTable "consignment", but can't resolve an IP address for that name.
Unfortunately, if this is what's going on, it will take some networking changes to fix it. The FAQ in the VPC discovery form says this about private DNS:
Currently we don't have the ability to relay DNS queries to internal DNS servers. You would need to either use IP addresses or public DNS entries. Beware of connecting to systems which may redirect to a Virtual IP endpoint by using an internal DNS entry.
You could use public DNS and possibly an Elastic IP to get around this problem, but that would require you to expose your HBase cluster to the internet.
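If you go the public-DNS route, the only client-side change is to point the quorum setting at a publicly resolvable name instead of the private IP (the host name below is a placeholder, and the ZooKeeper/HBase nodes must advertise names that resolve publicly as well):
Configuration conf = HBaseConfiguration.create();
// Publicly resolvable name (e.g. backed by an Elastic IP) instead of the private 10.99.X.X address
conf.set("hbase.zookeeper.quorum", "hbase.example.com");
conf.set("hbase.zookeeper.property.clientPort", "2181");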
I believe the answer to your question is covered in the CloudHub networking guide:
https://developer.mulesoft.com/docs/display/current/CloudHub+Networking+Guide
I have a singleton Azure Redis database client used in our application. However, the Azure portal shows 4.99K connected clients. I am not sure who those clients are and why it shows 4.99K connected clients when I have a singleton instance.
Sample Code:
using StackExchange.Redis;
if (instance == null)
{
    lock (syncRoot)
    {
        if (instance == null)
        {
            try
            {
                _cacheService = GetConnectionMultiplexer();
                instance = _cacheService.GetDatabase();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message);
            }
        }
    }
}
Per our discussion in comments, this is the same issue as described here:
Why are connections to Azure Redis Cache so high?
Under WebLogic 10, I am using Hibernate to store data into several tables with BLOBs. It has always worked fine, but the customer found specific circumstances where 15% of the BLOBs have the correct size but contain only null characters. I can't figure out what determines whether a BLOB is written correctly or ends up full of nulls.
The custom BLOB type I am using does the following:
public void nullSafeSet(PreparedStatement st, Object value, int index) throws HibernateException, SQLException {
    if (value == null) {
        st.setNull(index, sqlTypes()[0]);
        return;
    }
    try {
        Connection conn = st.getConnection();
        if (conn instanceof org.apache.commons.dbcp.DelegatingConnection) {
            log.debug("Delegating connection, digging for actual driver");
            conn = ((org.apache.commons.dbcp.DelegatingConnection) st.getConnection()).getInnermostDelegate();
        }
        OutputStream tempBlobWriter = null;
        BLOB tempBlob = BLOB.createTemporary(conn, true, BLOB.DURATION_SESSION);
        try {
            tempBlob.open(BLOB.MODE_READWRITE);
            tempBlobWriter = tempBlob.setBinaryStream(1L);
            tempBlobWriter.write((byte[]) value);
            tempBlobWriter.flush();
        } finally {
            if (tempBlobWriter != null)
                tempBlobWriter.close();
            tempBlob.close();
        }
        st.setBlob(index, (Blob) tempBlob);
    } catch (IOException e) {
        throw new HibernateException(e);
    }
}
I put a log in there and can confirm that the value (byte[]) is good. I tried to change the createTemporary parameters, no success.
I am running this under Weblogic 10.0 (can't upgrade that) with the bundled Oracle Thin driver.
A clue is that the working calls come from the standard web service deployed and managed by WLS, but the problematic calls are made from a thread started along with the component that interfaces with a legacy system via JNI. This thread works like a charm for everything except these BLOBs. I am getting a new Session just before inserting the data and closing it a bit after. (The Session does NOT remain open for the lifetime of the thread.)
I have set the Hibernate log level to DEBUG but it does not give me any clue. I'm starting to run out of ideas...
Problem solved.
In fact, I was doing:
open session
open transaction
get first item from legacy system
write first item to database (blob)
close transaction
open transaction
get second item from legacy system
write second item to database (blob)
close transaction
... until the legacy system has nothing more to process
close session
This would typically process between 1 and 5 items per round.
But because the Oracle driver does not use the standard way of handling BLOBs in JDBC, our custom type has to create a temporary BLOB that is stored in the session. And apparently, when you insert BLOBs in different transactions within the same session, they tend to interfere, which caused my problem.
I solved it by closing the session after each commit. I do not like it, but I consider it to be the Oracle driver's fault.
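In outline, the fixed loop looks something like this (a sketch of my own; legacySystem, Item and the surrounding names are placeholders, Session/Transaction are the org.hibernate types):
// Open a fresh Hibernate Session per item so temporary BLOBs from one
// transaction cannot interfere with the next one.
while (legacySystem.hasMoreItems()) {             // placeholder legacy-system API
    Item item = legacySystem.nextItem();          // placeholder
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        session.save(item);                       // writes the BLOB through the custom type
        tx.commit();
    } catch (RuntimeException e) {
        tx.rollback();
        throw e;
    } finally {
        session.close();                          // close right after the commit
    }
}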