Error trying to connect using Astyanax to Cassandra hosted on an EC2 instance - amazon-ec2

I am getting the following error, "astyanax.connectionpool.exceptions.PoolTimeoutException", when trying to use the Astyanax client to connect to Cassandra on an EC2 instance. I need help.
Following is my code snippet.
import org.mortbay.jetty.servlet.Context;
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.OperationResult;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolType;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.model.Column;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.model.ColumnList;
import com.netflix.astyanax.model.CqlResult;
import com.netflix.astyanax.serializers.StringSerializer;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;
public class MetadataRS {

    public static void main(String args[]) {
        AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
                .forCluster("ClusterName")
                .forKeyspace("KeyspaceName")
                .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                        .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
                        .setConnectionPoolType(ConnectionPoolType.ROUND_ROBIN)
                )
                .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
                        .setPort(9042)
                        .setMaxConnsPerHost(40)
                        .setSeeds("<EC2-IP>:9042")
                        .setConnectTimeout(5000)
                )
                .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
                .buildKeyspace(ThriftFamilyFactory.getInstance());

        context.start();
        Keyspace keyspace = context.getEntity();
        System.out.println(keyspace);

        ColumnFamily<String, String> CF_USER_INFO =
                new ColumnFamily<String, String>(
                        "Standard1",             // Column Family Name
                        StringSerializer.get(),  // Key Serializer
                        StringSerializer.get()); // Column Serializer

        OperationResult<ColumnList<String>> result = null;
        try {
            result = keyspace.prepareQuery(CF_USER_INFO)
                    .getKey("user_id_hash")
                    .execute();
        } catch (ConnectionException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

        ColumnList<String> columns = result.getResult();
        // Lookup columns in response by name
        String uid = columns.getColumnByName("user_id_hash").getStringValue();
        System.out.println(uid);
        // Or, iterate through the columns
        for (Column<String> c : result.getResult()) {
            System.out.println(c.getName());
        }
    }
}
Error
com.netflix.astyanax.thrift.ThriftKeyspaceImpl@1961f4
com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=():9042, latency=5001(5001), attempts=1] Timed out waiting for connection
at com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.waitForConnection(SimpleHostConnectionPool.java:201)
at com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.borrowConnection(SimpleHostConnectionPool.java:158)
at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.borrowConnection(RoundRobinExecuteWithFailover.java:60)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:50)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1.execute(ThriftColumnFamilyQueryImpl.java:180)
at com.rjil.jiodrive.rs.MetadataRS.main(MetadataRS.java:57)
Exception in thread "main" java.lang.NullPointerException
at com.rjil.jiodrive.rs.MetadataRS.main(MetadataRS.java:62)

Since you are running Cassandra on an EC2 instance, check that the Cassandra port you have chosen (9042) is in the allowed list of the EC2 security group and that you have access to it. If not, add the port to the inbound rules of the EC2 security group and set the IP range to 0.0.0.0/0.
Also check that the firewall on the EC2 instance is turned off. It is off by default, but it's good to check anyway.
If you have done this, then your client might be behind a firewall that prevents outbound traffic to your chosen port (9042).
Lastly, if you have not used an Elastic IP, it's better to use the EC2 instance's DNS name, both in your setSeeds section and in the rpc_address of cassandra.yaml.
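For example, a minimal sketch of the seed setting using the instance's public DNS name instead of an IP (the host name below is a placeholder, not a real instance):
.withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
        .setPort(9042)
        .setMaxConnsPerHost(40)
        // placeholder EC2 public DNS name; substitute your instance's name
        .setSeeds("ec2-xx-xx-xx-xx.ap-southeast-2.compute.amazonaws.com:9042")
        .setConnectTimeout(5000)
)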

Your issue is not in your code. You have a connectivity problem to the node you specified as your seed. So either that node isn't running, or you can't reach it from the machine running your client.

I finally upgraded libthrift to 0.9 and changed my code to the following, and it is working fine now.
public Keyspace getDBConnection() {
    if (poolConfig == null) {
        poolConfig = new ConnectionPoolConfigurationImpl("CassandraPool")
                .setPort(port)
                .setMaxConnsPerHost(1)
                .setSeeds(new StringBuilder(seedHost).append(":").append(port).toString())
                // Will resort hosts per token partition every 10 seconds
                .setLatencyAwareUpdateInterval(latencyAwareUpdateInterval)
                // Will clear the latency every 10 seconds. In practice I set this to 0,
                // which is the default. It's better to be 0.
                .setLatencyAwareResetInterval(latencyAwareResetInterval)
                // Will sort hosts if a host is more than 100% slower than the best and always
                // assign connections to the fastest host, otherwise will use round robin
                .setLatencyAwareBadnessThreshold(latencyAwareBadnessThreshold)
                // Uses last 100 latency samples. These samples are in a FIFO queue and
                // will just cycle themselves.
                .setLatencyAwareWindowSize(latencyAwareWindowSize)
                .setTimeoutWindow(60000);
    }
    AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
            .forCluster(clusterName)
            .forKeyspace(keyspaceName)
            .withAstyanaxConfiguration(
                    new AstyanaxConfigurationImpl()
                            .setDiscoveryType(NodeDiscoveryType.NONE)
                            .setConnectionPoolType(ConnectionPoolType.ROUND_ROBIN)
                            .setCqlVersion("3.0.0")
                            .setTargetCassandraVersion("2.0"))
            .withConnectionPoolConfiguration(poolConfig)
            .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
            .buildKeyspace(ThriftFamilyFactory.getInstance());
    context.start();
    return context.getClient();
}
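For reference, a minimal sketch of how this helper might be called; the column family name and row key are placeholders, not values from the original post:
Keyspace keyspace = getDBConnection();
ColumnFamily<String, String> cf = new ColumnFamily<String, String>(
        "Standard1", StringSerializer.get(), StringSerializer.get());
try {
    ColumnList<String> row = keyspace.prepareQuery(cf)
            .getKey("some_row_key")   // placeholder row key
            .execute()
            .getResult();
    System.out.println(row.size());
} catch (ConnectionException e) {
    e.printStackTrace();
}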

Related

Get dynamically assigned server port?

When I call
import org.apache.http.impl.nio.bootstrap.*;
import java.net.InetSocketAddress;
HttpServer server = ServerBootstrap.bootstrap()
.setListenerPort(0)
// ...
.create();
server.start();
How do I get the actual port number assigned to the server?
I tried
int port = ((InetSocketAddress) server.getEndpoint().getAddress()).getPort();
But that just returned 0
HttpComponents hides a lot of the internals, especially what you are looking for. A workaround would be to pick a free port outside the library and use it:
int port;
try (ServerSocket socket = new ServerSocket()) {
    socket.setReuseAddress(false);
    socket.bind(new InetSocketAddress(0));
    port = socket.getLocalPort();
}
HttpServer server = ServerBootstrap.bootstrap()
        .setListenerPort(port)
        // ...
        .create();
server.start();
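Note that this leaves a small window between closing the probe socket and the server binding the port, so in rare cases another process could grab the port in between; for test setups this is usually an acceptable trade-off.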

Hazelcast Near Cache: Evict if changed on different node

I am using Spring + Hazelcast 3.8.2 and have configured a map like this using the Spring configuration:
<hz:map name="test.*" backup-count="1"
max-size="0" eviction-percentage="30" read-backup-data="true"
time-to-live-seconds="900"
eviction-policy="NONE" merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy">
<hz:near-cache max-idle-seconds="300"
time-to-live-seconds="0"
max-size="0" />
</hz:map>
I've got two clients connected (both on same machine [test env], using different ports).
When I change a value in the map on one client, the other client still has the old value until it gets evicted from the near cache due to the expired idle time.
I found a similar issue like this here: Hazelcast near-cache eviction doesn't work
But I'm unsure if this is really the same issue, at least it is mentioned that this was a bug in version 3.7 and we are using 3.8.2.
Is this a correct behaviour or am I doing something wrong? I know that there is a property invalidate-on-change, but as a default this is true, so I don't expect I have to set this one.
I also tried setting read-backup-data to false, but that doesn't help.
Thanks for your support
Christian
I found the solution myself.
The issue is that Hazelcast sends the invalidations in batches by default, and thus waits a few seconds before the invalidations are sent out to all other nodes.
You can find more information about this here: http://docs.hazelcast.org/docs/3.8/manual/html-single/index.html#near-cache-invalidation
So I had to set the property hazelcast.map.invalidation.batch.enabled to false, which will immediately send out invalidations to all nodes. But as mentioned in the documentation, this should only be used when there aren't too many put/remove/... operations expected, as it would otherwise make the event system very busy.
Nevertheless, even though this property is set it will not guarantee that all nodes will directly invalidate the near cache entries. I noticed that after directly accessing the values on the different node sometimes it's fine, sometimes not.
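Besides the system property used in the JUnit test below, the same setting can also be applied programmatically when building the node configuration (a minimal sketch; the rest of the config is omitted):
Config config = new Config();
// send near-cache invalidations immediately instead of batching them
config.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);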
Here is the JUnit test I built up for this:
@Test
public void testWithInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "true");
    doTest();
}

@Test
public void testWithoutInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
    doTest();
}

@After
public void shutdownNodes() {
    Hazelcast.shutdownAll();
}

protected void doTest() throws Exception {
    // first config for normal cluster member
    Config c1 = new Config();
    c1.getNetworkConfig().setPort(5709);
    // second config for super client
    Config c2 = new Config();
    c2.getNetworkConfig().setPort(5710);
    // map config is the same for both nodes
    MapConfig testMapCfg = new MapConfig("test");
    NearCacheConfig ncc = new NearCacheConfig();
    ncc.setTimeToLiveSeconds(10);
    testMapCfg.setNearCacheConfig(ncc);
    c1.addMapConfig(testMapCfg);
    c2.addMapConfig(testMapCfg);
    // start instances
    HazelcastInstance h1 = Hazelcast.newHazelcastInstance(c1);
    HazelcastInstance h2 = Hazelcast.newHazelcastInstance(c2);
    IMap<Object, Object> mapH1 = h1.getMap("test");
    IMap<Object, Object> mapH2 = h2.getMap("test");
    // initial filling
    mapH1.put("a", -1);
    assertEquals(mapH1.get("a"), -1);
    assertEquals(mapH2.get("a"), -1);
    int updatedH1 = 0, updatedH2 = 0, runs = 0;
    for (int i = 0; i < 5; i++) {
        mapH1.put("a", i);
        // without this short sleep sometimes the near cache is updated in time, sometimes not
        Thread.sleep(100);
        runs++;
        if (mapH1.get("a").equals(i)) {
            updatedH1++;
        }
        if (mapH2.get("a").equals(i)) {
            updatedH2++;
        }
    }
    assertEquals(runs, updatedH1);
    assertEquals(runs, updatedH2);
}
testWithInvalidationBatchEnabled finishes successfully only sometimes; testWithoutInvalidationBatchEnabled always finishes successfully.

JDBC statement pooling with DB2 does not show a significant time difference

I'm using the DB2 JDBC driver, a.k.a. JT400, to connect to a DB2 server on Application System/400, a midrange computer system.
My goal is to insert into three tables from outside the IBM machine, which would be a cloud instance (e.g. Amazon AWS).
To make the performance better:
1) I am using already-established connections to connect to DB2
(using org.apache.commons.dbcp.BasicDataSource or com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource; both are working fine.)
public class AS400JDBCManagedConnectionPoolDataSource extends AS400JDBCManagedDataSource implements ConnectionPoolDataSource, Referenceable, Serializable {
}
public class AS400JDBCManagedDataSource extends ToolboxWrapper implements DataSource, Referenceable, Serializable, Cloneable {
}
2) I want to cache the INSERT statements for all three tables, so that I don't have to send the query and compile it every time, which is expensive; I would instead just pass the parameters. (Obviously I am doing this using JDBC prepared statements.)
Based on an official IBM document, Optimize Access to DB2 for i5/OS from Java and WebSphere (pages 17-20, Enabling Extended Dynamic Support), it should be possible to cache the statements with AS400JDBCManagedConnectionPoolDataSource.
BUT, the problem is that the INSERT queries are being compiled each time, which takes 200 ms * 3 queries = 600 ms each time.
Example I'm using,
public class CustomerOrderEventHandler extends MultiEventHandler {

    private static Logger logger = LogManager.getLogger(CustomerOrderEventHandler.class);

    //private BasicDataSource establishedConnections = new BasicDataSource();
    //private DB2SimpleDataSource nativeEstablishedConnections = new DB2SimpleDataSource();
    private AS400JDBCManagedConnectionPoolDataSource dynamicEstablishedConnections =
            new AS400JDBCManagedConnectionPoolDataSource();

    private State3 state3;
    private State2 state2;
    private State1 state1;

    public CustomerOrderEventHandler() throws SQLException {
        dynamicEstablishedConnections.setServerName(State.server);
        dynamicEstablishedConnections.setDatabaseName(State.DATABASE);
        dynamicEstablishedConnections.setUser(State.user);
        dynamicEstablishedConnections.setPassword(State.password);
        dynamicEstablishedConnections.setSavePasswordWhenSerialized(true);
        dynamicEstablishedConnections.setPrompt(false);
        dynamicEstablishedConnections.setMinPoolSize(3);
        dynamicEstablishedConnections.setInitialPoolSize(5);
        dynamicEstablishedConnections.setMaxPoolSize(50);
        dynamicEstablishedConnections.setExtendedDynamic(true);
        Connection connection = dynamicEstablishedConnections.getConnection();
        connection.close();
    }

    public void onEvent(CustomerOrder orderEvent) {
        long start = System.currentTimeMillis();
        Connection dbConnection = null;
        try {
            dbConnection = dynamicEstablishedConnections.getConnection();
            long connectionSetupTime = System.currentTimeMillis() - start;
            state3 = new State3(dbConnection);
            state2 = new State2(dbConnection);
            state1 = new State1(dbConnection);
            long initialisation = System.currentTimeMillis() - start - connectionSetupTime;
            int[] state3Result = state3.apply(orderEvent);
            int[] state2Result = state2.apply(orderEvent);
            long state1Result = state1.apply(orderEvent);
            dbConnection.commit();
            logger.info("eventId=" + getEventId(orderEvent) +
                    ",connectionSetupTime=" + connectionSetupTime +
                    ",queryPreCompilation=" + initialisation +
                    ",insertionOnlyTimeTaken=" +
                    (System.currentTimeMillis() - (start + connectionSetupTime + initialisation)) +
                    ",insertionTotalTimeTaken=" + (System.currentTimeMillis() - start));
        } catch (SQLException e) {
            logger.error("Error updating the order states.", e);
            if (dbConnection != null) {
                try {
                    dbConnection.rollback();
                } catch (SQLException e1) {
                    logger.error("Error rolling back the state.", e1);
                }
            }
            throw new CustomerOrderEventHandlerRuntimeException("Error updating the customer order states.", e);
        }
    }

    private Long getEventId(CustomerOrder order) {
        return Long.valueOf(order.getMessageHeader().getCorrelationId());
    }
}
And the State classes with the insert commands look like the one below:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class State2 extends State {

    private static Logger logger = LogManager.getLogger(DetailState.class);

    Connection connection;
    PreparedStatement preparedStatement;

    String detailsCompiledQuery = "INSERT INTO " + DATABASE + "." + getStateName() +
            "(" + DetailState.EVENT_ID + ", " +
            State2.ORDER_NUMBER + ", " +
            State2.SKU_ID + ", " +
            State2.SKU_ORDERED_QTY + ") VALUES(?, ?, ?, ?)";

    public State2(Connection connection) throws SQLException {
        this.connection = connection;
        this.preparedStatement = this.connection.prepareStatement(detailsCompiledQuery); // this is taking ~200ms each time
        this.preparedStatement.setPoolable(true); // might not be required, not sure
    }

    public int[] apply(CustomerOrder event) throws StateException {
        event.getMessageBody().getDetails().forEach(detail -> {
            try {
                preparedStatement.setLong(1, getEventId(event));
                preparedStatement.setString(2, getOrderNo(event));
                preparedStatement.setInt(3, detail.getSkuId());
                preparedStatement.setInt(4, detail.getQty());
                preparedStatement.addBatch();
            } catch (SQLException e) {
                logger.error(e);
                throw new StateException("Error setting up data", e);
            }
        });
        long startedTime = System.currentTimeMillis();
        int[] inserted = new int[0];
        try {
            inserted = preparedStatement.executeBatch();
        } catch (SQLException e) {
            throw new StateException("Error updating allocations data", e);
        }
        logger.info("eventId=" + getEventId(event) +
                ",state=details,insertionTimeTaken=" + (System.currentTimeMillis() - startedTime));
        return inserted;
    }

    @Override
    protected String getStateName() {
        return properties.getProperty("state.order.details.name");
    }
}
So the flow is: each time an event is received (e.g. CustomerOrder), it gets the established connection and then asks the states to initialise their statements.
The timing metrics look as below.
For the first event, it takes 580 ms to create the prepared statements for the 3 tables.
{"timeMillis":1489982655836,"thread":"ScalaTest-run-running-CustomerOrderEventHandlerSpecs","level":"INFO","loggerName":"com.xyz.customerorder.events.handler.CustomerOrderEventHandler",
"message":"eventId=1489982654314,connectionSetupTime=1,queryPreCompilation=580,insertionOnlyTimeTaken=938,insertionTotalTimeTaken=1519","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","threadId":1,"threadPriority":5}
For the second event, it takes 470 ms to prepare the statements for the 3 tables, which is less than for the first event, but only by less than 100 ms. I expected it to be drastically less, as the statements should not even get to compilation.
{"timeMillis":1489982667243,"thread":"ScalaTest-run-running-PurchaseOrderEventHandlerSpecs","level":"INFO","loggerName":"com.xyz.customerorder.events.handler.CustomerOrderEventHandler",
"message":"eventId=1489982665456,connectionSetupTime=0,queryPreCompilation=417,insertionOnlyTimeTaken=1363,insertionTotalTimeTaken=1780","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","threadId":1,"threadPriority":5}
What I'm thinking is: since I'm closing the prepared statement for that particular connection, it does not even exist for a new connection. If that's the case, what's the point of having statement caching at all in a multi-threaded environment?
The documentation has a similar example, where it makes transactions inside the same connection, which is not the case for me, as I need to have multiple connections at the same time.
Questions
Primary
Q1) Is the DB2 JDBC driver caching the statements at all, between multiple connections? I don't see much difference while preparing the statements (see the example: the first one takes ~600 ms, the second ~500 ms).
References
ODP = Open Data Path
SQL packages
SQL packages are permanent objects used to store information related to prepared SQL statements. They can be used by the IBM iSeries Access for Windows ODBC driver and the IBM Toolbox for Java JDBC driver. They are also used by applications which use the QSQPRCED (SQL Process Extended Dynamic) API interface.
In the case of JDBC, the existence of the SQL package is checked when the client application issues the first prepare of an SQL statement. If the package does not exist, it is created at that time (even though it may not yet contain any SQL statements).
Tomcat jdbc connection pool configuration - DB2 on iSeries(AS400)
IBM Data Server Driver for JDBC and SQLJ statement caching
A couple of important things to note regarding statement caching:
Because Statement objects are child objects of a given Connection, once the Connection is closed all child objects (e.g. all Statement objects) must also be closed.
It is not possible to associate a statement from one connection with a different connection.
Statement pooling may or may not be done by a given JDBC driver. Statement pooling may also be performed by a connection management layer (i.e. an application server).
Per the JDBC spec, the default value for Statement.isPoolable() == false and PreparedStatement.isPoolable() == true; however, this flag is only a hint to the JDBC driver. There is no guarantee from the spec that statement pooling will occur.
First off, I am not sure if the JT400 driver does statement caching. The document you referenced in your question comment, Optimize Access to DB2 for i5/OS from Java and WebSphere, is specific to using the JT400 JDBC driver with WebSphere application server, and on slide #3 it indicates that statement caching comes from the WebSphere connection management layer, not the native JDBC driver layer. Given that, I'm going to assume that the JT400 JDBC driver doesn't support statement caching on its own.
So at this point you are probably going to want to plug into some sort of app server (unless you want to implement statement caching on your own, which is sort of re-inventing the wheel). I know for sure that both WebSphere Application Server products (traditional and Liberty) support statement caching for any JDBC driver.
For WebSphere Liberty (the newer product), the data source config is the following:
<dataSource jndiName="jdbc/myDS" statementCacheSize="10">
    <jdbcDriver libraryRef="DB2iToolboxLib"/>
    <properties.db2.i.toolbox databaseName="YOURDB" serverName="localhost"/>
</dataSource>

<library id="DB2iToolboxLib">
    <fileset dir="/path/to/jdbc/driver/dir" includes="jt400.jar"/>
</library>
The key bit is the statementCacheSize attribute of <dataSource>, which has a default value of 10.
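For completeness, a minimal sketch of how application code might then pick up that data source via JNDI; the DAO class, table, and columns below are hypothetical, and only the jdbc/myDS name comes from the config above:
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class OrderDetailDao {
    public void insertDetail(long eventId, String orderNo) throws Exception {
        DataSource ds = InitialContext.doLookup("jdbc/myDS");
        try (Connection con = ds.getConnection();
             // preparing the same SQL text on a pooled connection lets the app server
             // hand back a cached PreparedStatement instead of re-preparing it
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO MYLIB.ORDER_DETAIL (EVENT_ID, ORDER_NO) VALUES (?, ?)")) {
            ps.setLong(1, eventId);
            ps.setString(2, orderNo);
            ps.executeUpdate();
        }
    }
}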
(Disclaimer, I'm a WebSphere dev, so I'm going to talk about what I know)
First off, the IBM i Java documentation is here: IBM Toolbox for Java
Secondly, I don't see where you are setting the "extended dynamic" property to true, which provides
a mechanism for caching dynamic SQL statements on the server. The first time a particular SQL statement is prepared, it is stored in a SQL package on the server. If the package does not exist, it is automatically created. On subsequent prepares of the same SQL statement, the server can skip a significant part of the processing by using information stored in the SQL package. If this is set to "true", then a package name must be set using the "package" property.
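As a rough sketch of what that might look like on the data source used in the question (assuming the managed pool data source exposes the same package-related setters as AS400JDBCDataSource; the host and package name are placeholders):
AS400JDBCManagedConnectionPoolDataSource ds = new AS400JDBCManagedConnectionPoolDataSource();
ds.setServerName("your-ibmi-host");   // placeholder
ds.setUser("user");                   // placeholder
ds.setPassword("password");           // placeholder
ds.setExtendedDynamic(true);          // cache dynamic SQL statements in a server-side SQL package
ds.setPackage("ORDERPKG");            // a package name must be supplied when extended dynamic is true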
I think you're missing some steps in using the managed pool. Here's the first example in the IBM docs:
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource;
import com.ibm.as400.access.AS400JDBCManagedDataSource;

public class TestJDBCConnPoolSnippet {

    void test() {
        AS400JDBCManagedConnectionPoolDataSource cpds0 = new AS400JDBCManagedConnectionPoolDataSource();

        // Set general datasource properties. Note that both connection pool datasource (CPDS) and managed
        // datasource (MDS) have these properties, and they might have different values.
        cpds0.setServerName(host);
        cpds0.setDatabaseName(host); // iasp can be here
        cpds0.setUser(userid);
        cpds0.setPassword(password);
        cpds0.setSavePasswordWhenSerialized(true);

        // Set connection pooling-specific properties.
        cpds0.setInitialPoolSize(initialPoolSize_);
        cpds0.setMinPoolSize(minPoolSize_);
        cpds0.setMaxPoolSize(maxPoolSize_);
        cpds0.setMaxLifetime((int) (maxLifetime_ / 1000));     // convert to seconds
        cpds0.setMaxIdleTime((int) (maxIdleTime_ / 1000));     // convert to seconds
        cpds0.setPropertyCycle((int) (propertyCycle_ / 1000)); // convert to seconds
        //cpds0.setReuseConnections(false); // do not re-use connections

        // Set the initial context factory to use.
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");

        // Get the JNDI Initial Context.
        Context ctx = new InitialContext();

        // Note: The following is an alternative way to set context properties locally:
        //   Properties env = new Properties();
        //   env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        //   Context ctx = new InitialContext(env);

        ctx.rebind("mydatasource", cpds0); // We can now do lookups on cpds, by the name "mydatasource".

        // Create a standard DataSource object that references it.
        AS400JDBCManagedDataSource mds0 = new AS400JDBCManagedDataSource();
        mds0.setDescription("DataSource supporting connection pooling");
        mds0.setDataSourceName("mydatasource");
        ctx.rebind("ConnectionPoolingDataSource", mds0);

        DataSource dataSource_ = (DataSource) ctx.lookup("ConnectionPoolingDataSource");
        AS400JDBCManagedDataSource mds_ = (AS400JDBCManagedDataSource) dataSource_;

        boolean isHealthy = mds_.checkPoolHealth(false); // check pool health

        Connection c = dataSource_.getConnection();
    }
}

Jmeter does not see the variables when trying to connect to the database

I have the following problem.
When I try to create the connection, the variables cannot be found.
The test has the following set of actions:
Load the local settings file (put it into props)
Create a database connection (this all happens in different thread groups; I also tried in the same one)
I use the following code to load the local property file (BeanShell, Thread Group 1):
FileInputStream is = new FileInputStream(new File("d:/somefolder/somefile.properties"));
props.load(is);
is.close();
Before creating the connection I checked the availability of the variables (BeanShell, Thread Group 2):
System.out.println(props.get("db.url"));
System.out.println(${__P("db.url")});
${__setProperty("db.url", props.get("db.url"))};
System.out.println(${__P("db.url")});
Output:
correct connection url
1 (because the __P function returns the default value if the variable is undefined; default value = 1)
correct connection url
Then I create the JDBC connection with the following parameter (Thread Group 2):
url: ${__P("db.url")}
The test fails because ${__P("db.url")} returns 1.
If I use ${__BeanShell(props.get(db.url))}, the test fails because props.get(db.url) returns nothing.
If I use ${__javaScript(props.get(db.url))}, the test fails because props.get(db.url) returns nothing.
"Component jdbc connection configuration" initialized before started first Thread Group, so component don't see variables because he don't initialize.
Create Script for connecting to db
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.ResultSetMetaData;

ResultSet clientConfigs = null;
ResultSet gameIds = null;
Connection connect = null;
Statement statement = null;

try {
    Class.forName(props.get("configdb.driverClassName"));
    String connectionUrl = props.get("configdb.url") + "?user=" + props.get("configdb.username")
            + "&password=" + props.get("configdb.password");
    connect = DriverManager.getConnection(connectionUrl);
    statement = connect.createStatement();

    clientConfigs = statement.executeQuery("Select keyname,valuestr from client_config WHERE valuestr Like '%/_ah/api/%'");
    ResultSetMetaData ccMetaData = clientConfigs.getMetaData();
    int clientConfigsColumns = ccMetaData.getColumnCount();
    ArrayList clientConfigsList = new ArrayList(20);
    while (clientConfigs.next()) {
        HashMap clientConfigsRow = new HashMap(clientConfigsColumns);
        for (int i = 1; i <= clientConfigsColumns; ++i) {
            clientConfigsRow.put(ccMetaData.getColumnName(i), clientConfigs.getObject(i));
        }
        clientConfigsList.add(clientConfigsRow);
    }
    vars.putObject("clientConfigs", clientConfigsList);

    gameIds = statement.executeQuery("Select gameId from game_config");
    ResultSetMetaData giMetaData = gameIds.getMetaData();
    int gameIdsColumns = giMetaData.getColumnCount();
    ArrayList gameIdsList = new ArrayList(50);
    while (gameIds.next()) {
        HashMap gameIdsRow = new HashMap(gameIdsColumns);
        for (int i = 1; i <= gameIdsColumns; ++i) {
            gameIdsRow.put(giMetaData.getColumnName(i), gameIds.getObject(i));
        }
        gameIdsList.add(gameIdsRow);
    }
    vars.putObject("gameIds", gameIdsList);
} catch (Exception e) {
    throw e;
} finally {
    try {
        if (clientConfigs != null) {
            clientConfigs.close();
        }
        if (gameIds != null) {
            gameIds.close();
        }
        if (statement != null) {
            statement.close();
        }
        if (connect != null) {
            connect.close();
        }
    } catch (Exception e) {
    }
}
Apparently JMeter loads JDBC configuration parameters really early, likely at UI load. See the discussion here:
The JDBC Config element is only processed at test startup, so it is not possible to change the values once a test has started - i.e. you could not change the values for different loops of the test plan. The test plan has to know the JDBC settings near the start.
This means that if you change the value of a property in Thread Group 1 at runtime, the change will not take effect in the JDBC config.
One possible way around this is to set the property from the command line while starting JMeter.
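For example, a property can be defined at startup with JMeter's -J flag, e.g. jmeter -n -t plan.jmx -Jdb.url=<your-jdbc-url> (the plan name and URL are placeholders); the property then already exists when the JDBC Connection Configuration element initializes, so ${__P("db.url")} resolves to the real value.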
Alternatively drop JDBC sampler entirely and use one of the custom samplers to interact with your JDBC data source.

Hbase MuleSoft Cloudhub Connectivity

I have to connect CloudHub to HBase. I tried the community edition HBase connector but did not succeed. Then I tried with Java code and failed again. The HBase team has given me only the master IP (10.99.X.X), the port (2181), and the username (hadoop).
I have tried the following options:
Through Java Code:
public Object transformMessage(MuleMessage message, String outputEncoding) throws TransformerException {
    try {
        Configuration conf = HBaseConfiguration.create();
        //conf.set("hbase.rotdir", "/hbase");
        conf.set("hbase.zookeeper.quorum", "10.99.X.X");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.client.retries.number", "3");
        logger.info("############# Config Created ##########");

        // Create a get api for consignment table
        logger.info("############# Starting Consignment Test ##########");
        // read from table
        // Creating a HTable instance
        HTable table = new HTable(conf, "consignment");
        logger.info("############# HTable instance Created ##########");
        // Create a Get object
        Get get = new Get(Bytes.toBytes("6910358750"));
        logger.info("############# RowKey Created ##########");
        // Set column family to be queried
        get.addFamily(Bytes.toBytes("consignment_detail"));
        logger.info("############# CF Created ##########");
        // Perform get and capture result in a iterable
        Result result = table.get(get);
        logger.info("############# Result Created ##########");
        // Print consignment data
        logger.info(result);
        logger.info(" #### Ending Consignment Test ###");

        // Begining Consignment Item Scanner api
        logger.info("############# Starting Consignmentitem test ##########");
        HTable table1 = new HTable(conf, "consignmentitem");
        logger.info("############# HTable instance Created ##########");
        // Create a scan object with start rowkey and end rowkey (partial row key scan)
        // actual rowkey design: <consignment_id>-<trn>-<orderline>
        Scan scan = new Scan(Bytes.toBytes("6910358750"), Bytes.toBytes("6910358751"));
        logger.info("############# Partial RowKeys Created ##########");
        // Perform a scan using start and stop rowkeys
        ResultScanner scanner = table1.getScanner(scan);
        // Iterate over result and print them
        for (Result result1 = scanner.next(); result1 != null; result1 = scanner.next()) {
            logger.info("Printing Records\n");
            logger.info(result1);
        }
        return scanner;
    } catch (MasterNotRunningException e) {
        logger.error("HBase connection failed! --> MasterNotRunningException");
        logger.error(e);
    } catch (ZooKeeperConnectionException e) {
        logger.error("Zookeeper connection failed! --> ZooKeeperConnectionException");
        logger.error(e);
    } catch (Exception e) {
        logger.error("Main Exception Found! -- Exception");
        logger.error(e);
    }
    return "Not Connected";
}
The above code gives the error below:
java.net.UnknownHostException: unknown host: ip-10-99-X-X.ap-southeast-2.compute.internal
It seems that CloudHub is not able to resolve the host name, because CloudHub is not configured with that DNS.
When I tried with the Community Edition HBase connector, it gives the following exception:
org.apache.hadoop.hbase.MasterNotRunningException: Retried 3 times
Please suggest some way forward.
Regards,
Nilesh
It appears that you are configuring your client to try to connect to the zookeeper quorum at a private IP address (10.99.X.X). I'll assume you've already set up a VPC, which is required for your CloudHub worker to connect to your private network.
Your UnknownHostException implies that the HBase server you are connecting to is hosted on AWS, which defines private domain names similar to the one in the error message.
So what might be happening is this:
Mule connects to Zookeeper, asks what HBase nodes there are, and gets back ip-10-99-X-X.ap-southeast-2.compute.internal.
Mule tries to connect to that to find the HTable "consignment", but can't resolve an IP address for that name.
Unfortunately, if this is what's going on, it will take some networking changes to fix it. The FAQ in the VPC discovery form says this about private DNS:
Currently we don't have the ability to relay DNS queries to internal DNS servers. You would need to either use IP addresses or public DNS entries. Beware of connecting to systems which may redirect to a Virtual IP endpoint by using an internal DNS entry.
You could use public DNS and possibly an Elastic IP to get around this problem, but that would require you to expose your HBase cluster to the internet.
I believe the answer to your question is covered in the CloudHub networking guide:
https://developer.mulesoft.com/docs/display/current/CloudHub+Networking+Guide
