Test Hector Spring Cassandra connection

In my Java-Spring based web app I'm connecting to Cassandra DB using Hector and Spring.
The connection works just fine but I would like to be able to test the connection.
So if I intentionally provide a wrong host to CassandraHostConfigurator I get an error:
ERROR connection.HConnectionManager: Could not start connection pool for host <myhost:myport>
Which is ok of course. But how can I test this connection?
If I define the connection programmatically (and not via the Spring context) it is clear how to do this, but via the Spring context it is not. Can you think of an idea?

Since I could not come up with nor find a satisfying answer, I decided to define my connection programmatically and to use a simple query:
private ColumnFamilyResult<String, String> readFromDb(Keyspace keyspace) {
    ColumnFamilyTemplate<String, String> template = new ThriftColumnFamilyTemplate<String, String>(
            keyspace, tableName, StringSerializer.get(), StringSerializer.get());
    // It doesn't matter if the column actually exists or not, since we only check the
    // connection. In case of connection failure an exception is thrown;
    // otherwise something comes back.
    return template.queryColumns("some_column");
}
And my test checks that the returned object is not null.
Another way that works fine:
public boolean isConnected() {
    List<KeyspaceDefinition> keyspaces = null;
    try {
        keyspaces = cluster.describeKeyspaces();
    } catch (HectorException e) {
        return false;
    }
    return !CollectionUtils.isEmpty(keyspaces);
}
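If you do want to exercise the connection through the Spring context rather than programmatically, a minimal sketch could look like the one below. JUnit 4 and spring-test are assumptions, and the context file name and cluster bean are illustrative, not taken from the post above:
import static org.junit.Assert.assertFalse;

import java.util.List;

import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.CollectionUtils;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:cassandra-context.xml") // illustrative file name
public class CassandraConnectionTest {

    @Autowired
    private Cluster cluster; // the Hector Cluster bean defined in the Spring context

    @Test
    public void clusterIsReachable() {
        // describeKeyspaces() goes over the wire, so it fails fast on a bad host/port.
        List<KeyspaceDefinition> keyspaces = cluster.describeKeyspaces();
        assertFalse(CollectionUtils.isEmpty(keyspaces));
    }
}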

Related

How to prevent data loss from Redis when the server is stopped forcefully, which results in RedisCommandInterruptedException

@Autowired
private StringRedisTemplate stringRedisTemplate;

public List<Object> getDataFromRedis(String redisKey) {
    try {
        long numberOfEntriesToRead = 60000;
        return stringRedisTemplate.executePipelined(
                (RedisConnection connection) -> {
                    StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
                    for (int index = 0; index < numberOfEntriesToRead; index++) {
                        stringRedisConn.lPop(redisKey);
                    }
                    return null;
                });
    } catch (RedisCommandInterruptedException e) {
        LOGGER.error("Interrupted EXCEPTION :::", e);
        return Collections.emptyList(); // every path must return a value
    }
}
I have a method which reads Redis content for a given key. The problem is that when my application server is stopped while this method is fetching data from Redis, I get a RedisCommandInterruptedException, which results in the loss of some data from Redis. How can I overcome this problem? Any suggestions are appreciated.
Pipelines are not atomic operations, therefore there is no guarantee that all (or none) of the commands are executed when an exception happens.
You can use Lua scripts or the MULTI command to run the operations in a single transaction.
You can read more about using MULTI in Spring Boot Data Redis in this SO thread and this site.
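As an illustration, here is a minimal sketch of wrapping the pops in a MULTI/EXEC transaction via Spring Data Redis's SessionCallback (the popAtomically method is hypothetical, not part of the question's code):
import java.util.List;

import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.SessionCallback;
import org.springframework.data.redis.core.StringRedisTemplate;

public List<Object> popAtomically(StringRedisTemplate template, String redisKey, long count) {
    return template.execute(new SessionCallback<List<Object>>() {
        @Override
        @SuppressWarnings({ "unchecked", "rawtypes" })
        public List<Object> execute(RedisOperations operations) throws DataAccessException {
            operations.multi(); // start queueing commands in a transaction
            for (int i = 0; i < count; i++) {
                operations.opsForList().leftPop(redisKey);
            }
            // EXEC runs all queued LPOPs atomically and returns their results;
            // if the client dies before EXEC, nothing is executed and no data is lost.
            return operations.exec();
        }
    });
}
Note that commands queued inside MULTI only produce their results when EXEC runs, so the popped values arrive in the list returned by exec().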

Kafka state store not available in distributed environment

I have a business application with the following versions
spring-boot (2.2.0.RELEASE)
spring-kafka (2.3.1-RELEASE)
spring-cloud-stream-binder-kafka (2.2.1-RELEASE)
spring-cloud-stream-binder-kafka-core (3.0.3-RELEASE)
spring-cloud-stream-binder-kafka-streams (3.0.3-RELEASE)
We have around 20 batches. Each batch uses 6-7 topics to handle the business. Each service has its own state store to maintain the status of the batch, i.e. whether it is running or idle.
We use the code below to query the store:
@Autowired
private InteractiveQueryService interactiveQueryService;

public ReadOnlyKeyValueStore<String, String> fetchKeyValueStoreBy(String storeName) {
    while (true) {
        try {
            log.info("Waiting for state store");
            return new ReadOnlyKeyValueStoreWrapper<>(interactiveQueryService.getQueryableStore(storeName,
                    QueryableStoreTypes.<String, String> keyValueStore()));
        } catch (final IllegalStateException e) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
        }
    }
}
When deploying the application on one instance (a Linux machine), everything works fine. When deploying the application on two instances, we make the following observations:
The state store is available on one instance and the other doesn't have it.
When a request is processed by the instance which has the state store, everything is fine.
If a request falls to the instance which does not have the state store, the application waits indefinitely in the while loop (above code snippet).
While the instance without the store is waiting indefinitely, if we kill the other instance, the above code returns the store and processing works perfectly.
No clue what we are missing.
When you have multiple Kafka Streams processors running with interactive queries, the code that you showed above will not work the way you expect. It only returns results if the keys that you are querying are on the same server. In order to fix this, you need to add the property spring.cloud.stream.kafka.streams.binder.configuration.application.server: <server>:<port> on each instance. Make sure to change the server and port to the correct ones on each server. Then you have to write code similar to the following:
org.apache.kafka.streams.state.HostInfo hostInfo = interactiveQueryService.getHostInfo("store-name",
        key, keySerializer);

if (interactiveQueryService.getCurrentHostInfo().equals(hostInfo)) {
    // query from the store that is locally available
} else {
    // query from the remote host
}
Please see the reference docs for more information.
Here is a sample code that demonstrates that.

Spring Web-Flux: How to return a Flux to a web client on request?

We are working with Spring Boot 2.0.0.BUILD-SNAPSHOT and Spring WebFlux 5.0.0, and currently we can't transfer a Flux to a client on request.
Currently I am creating the flux from an iterator:
public Flux<ItemIgnite> getAllFlux() {
    Iterator<Cache.Entry<String, ItemIgnite>> iterator = this.getAllIterator();
    return Flux.create(flux -> {
        while (iterator.hasNext()) {
            flux.next(iterator.next().getValue());
        }
    });
}
And on request I am simply doing:
@RequestMapping(value = "/all", method = RequestMethod.GET, produces = "application/json")
public Flux<ItemIgnite> getAllFlux() {
    return this.provider.getAllFlux();
}
When I now locally call localhost:8080/all, I get a 503 status code after 10 seconds. The same happens when I request /all as a client using the WebClient:
public Flux<ItemIgnite> getAllProducts() {
    WebClient webClient = WebClient.create("http://localhost:8080");
    Flux<ItemIgnite> f = webClient.get().uri("/all").accept(MediaType.ALL).exchange()
            .flatMapMany(cr -> cr.bodyToFlux(ItemIgnite.class));
    f.subscribe(System.out::println);
    return f;
}
Nothing happens. No data is transferred.
When I do the following instead:
public Flux<List<ItemIgnite>> getAllFluxMono() {
    return Flux.just(this.getAllList());
}
and
@RequestMapping(value = "/allMono", method = RequestMethod.GET, produces = "application/json")
public Flux<List<ItemIgnite>> getAllFluxMono() {
    return this.provider.getAllFluxMono();
}
It is working. I guess it's because all the data has already finished loading and is transferred to the client just as it usually would be without using a Flux.
What do I have to change to get the Flux to stream the data to the web client which requests it?
EDIT
I have data inside an Ignite cache, so my getAllIterator loads the data from the Ignite cache:
public Iterator<Cache.Entry<String, ItemIgnite>> getAllIterator() {
    return this.igniteCache.iterator();
}
EDIT 2
Adding flux.complete() like @Simon Baslé suggested:
public Flux<ItemIgnite> getAllFlux() {
    Iterator<Cache.Entry<String, ItemIgnite>> iterator = this.getAllIterator();
    return Flux.create(flux -> {
        while (iterator.hasNext()) {
            flux.next(iterator.next().getValue());
        }
        flux.complete(); // see here
    });
}
This solves the 503 problem in the browser, but it does not solve the problem with the WebClient: there is still no data transferred.
EDIT 3
Using publishOn with Schedulers.parallel():
public Flux<ItemIgnite> getAllFlux() {
    Iterator<Cache.Entry<String, ItemIgnite>> iterator = this.getAllIterator();
    return Flux.<ItemIgnite>create(flux -> {
        while (iterator.hasNext()) {
            flux.next(iterator.next().getValue());
        }
        flux.complete();
    }).publishOn(Schedulers.parallel());
}
Does not change the result.
Here is what the WebClient receives:
value :[Item ID: null, Product Name: null, Product Group: null]
complete
So it seems like it receives one item (out of over 35,000), the values are null, and it completes afterwards.
One thing that jumps out is that you never call flux.complete() in your create.
But there's actually a factory operator that is tailored to transform an Iterable to a Flux, so you could just do Flux.fromIterable(this)
Edit: in case your Iterator is hiding complexity like a DB request (or any blocking I/O), be advised this spells trouble: anything blocking in a reactive chain, if not isolated on a dedicated execution context using publishOn, has the potential to block not only the entire chain but other reactive processes as well (as threads can and will be used by multiple reactive processes).
Neither create nor fromIterable do anything in particular to protect from blocking sources. I think you are facing that kind of issue, judging from the hang you get with the WebClient.
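Putting both suggestions together, a minimal sketch might look like this (Schedulers.elastic() stands in for whatever dedicated execution context fits; adapting the cache iterator to an Iterable with a lambda is an assumption about the surrounding code):
public Flux<ItemIgnite> getAllFlux() {
    // Iterable's single abstract method is iterator(), so a lambda adapts the cache.
    Iterable<Cache.Entry<String, ItemIgnite>> entries = () -> this.getAllIterator();
    return Flux.fromIterable(entries)
            .map(Cache.Entry::getValue)
            // Isolate the potentially blocking Ignite iteration on its own scheduler,
            // so it cannot stall the event loop or other reactive chains.
            .subscribeOn(Schedulers.elastic());
}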
The problem was the ItemIgnite object which I transfer; the Flux pipeline seems unable to handle it. If I change my original code to the following:
public Flux<String> getAllFlux() {
    Iterator<Cache.Entry<String, ItemIgnite>> iterator = this.getAllIterator();
    return Flux.create(flux -> {
        while (iterator.hasNext()) {
            flux.next(iterator.next().getValue().toString());
        }
    });
}
Everything works fine, without publishOn and without flux.complete(). Maybe someone has an idea why this works.

Can I configure LLBLGen to include generated SQL in exceptions?

I'm using LLBLGen and I have some code like so:
if (onlyRecentMessages)
{
    messageBucket.PredicateExpression.Add(MessageFields.DateEffective >= DateTime.Today.AddDays(-30));
}

var messageEntities = new EntityCollection<MessageEntity>();
using (var myAdapter = PersistenceLayer.GetDataAccessAdapter())
{
    myAdapter.FetchEntityCollection(messageEntities, messageBucket);
}
I'm currently getting a SqlException on the FetchEntityCollection line. The error is:
System.Data.SqlClient.SqlException: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100.
but that's a side note. What I actually want to be able to do is include the generated SQL in a custom exception in my code. So for instance something like this:
using (var myAdapter = PersistenceLayer.GetDataAccessAdapter())
{
    try
    {
        myAdapter.FetchEntityCollection(messageEntities, messageBucket);
    }
    catch (SqlException ex)
    {
        throw new CustomSqlException(ex, myAdapter.GeneratedSqlFromLastOperation);
    }
}
Of course, there is no such property as GeneratedSqlFromLastOperation. I'm aware that I can configure logging, but I would prefer to have the information directly in my stack trace / exception so that my existing exception logging infrastructure can provide me with more information when these kinds of errors occur.
Thanks!
Steve
You should get an ORMQueryExecutionException, which contains the full query in the description. The query's execute method wraps all exceptions in an ORMQueryExecutionException and stores the query in the description.
PS: please, if possible, ask LLBLGen Pro related questions on our forums, as we don't monitor Stack Overflow frequently. Thanks. :)

Unable to close JDBC resources!

We are running a WebSphere Commerce site with an Oracle DB and are facing an issue where we run out of DB connections.
We are using a JDBCHelper singleton for getting the prepared statements and closing the connections.
public static JDBCHelper getJDBCHelper() {
    if (theObject == null) {
        theObject = new JDBCHelper();
    }
    return theObject;
}
public void closeResources(Connection con, PreparedStatement pstmt, ResultSet rs) {
    try {
        if (rs != null) { rs.close(); }
    } catch (SQLException e) {
        logger.info("Exception closing the resultset");
    }
    try {
        if (pstmt != null) { pstmt.close(); }
    } catch (SQLException e) {
        logger.info("Exception closing the preparedstatement");
    }
    try {
        if (con != null) { con.close(); }
    } catch (SQLException e) {
        logger.info("Exception closing the connection");
    }
}
However, when we try getting the connection using prepStmt.getConnection() to pass it to closeResources after execution, an SQLException is thrown. Any idea why? Does the connection get closed immediately after execution? And is there something wrong in our use of the singleton JDBCHelper?
EDIT
Part of the code which creates the prepared statement, executes it, and closes the connection:
PreparedStatement pstmt = jdbcHelper.getPreparedStatement(query);
try {
    // rest of the code
    int brs = pstmt.executeUpdate();
} finally {
    try {
        jdbcHelper.closeResources(pstmt.getConnection(), pstmt, null); // no ResultSet here
    } catch (SQLException e1) {
        logger.logp(Level.SEVERE, CLASS_NAME, methodName, "In the finally block - Could not close connection", e1);
    }
}
Your connection will most likely come from a pool, and closing it actually will return the connection to the pool (under the covers). I think posting the code which gets the connection, uses it and closes it via JDBCHelper will be of more use.
Re. your singleton, I'm not sure why you're using this, since it doesn't appear to have anything to warrant it being a singleton. Check out Apache Commons DbUtils which does this sort of stuff and more besides.
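For illustration, here is a rough sketch of how DbUtils collapses that cleanup boilerplate (assuming commons-dbutils is on the classpath; the surrounding method is hypothetical):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.commons.dbutils.DbUtils;

public class CleanupExample {
    public void runQuery(Connection con, String query) throws Exception {
        PreparedStatement pstmt = null;
        ResultSet rs = null;
        try {
            pstmt = con.prepareStatement(query);
            rs = pstmt.executeQuery();
            // ... process the result set ...
        } finally {
            // Closes all three quietly, ignoring nulls and swallowing SQLExceptions,
            // which replaces the hand-rolled closeResources method above.
            DbUtils.closeQuietly(con, pstmt, rs);
        }
    }
}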
This code seems to be written for single-threaded operation only, as it lacks any synchronisation code. The getJDBCHelper() method, for instance, is likely to create two JDBCHelpers. If I'm not mistaken, there's not even a guarantee that a second thread will see theObject long after a primary thread has created it, although they usually will, by virtue of the architecture the JVM runs on.
If you're running this inside a web server you're likely to be running into race issues, where two threads are modifying your connection at the same time. Unless you rolled your own connection pool or something.
Brian is right, use one of the freely available libraries that solve this (hard) problem for you.
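If you do keep the singleton, a minimal thread-safe sketch using the initialization-on-demand holder idiom (an illustration, not the poster's actual class) would be:
public final class JDBCHelper {

    private JDBCHelper() {
    }

    private static final class Holder {
        // The JVM guarantees this field is initialized exactly once, on first
        // access, and safely published to all threads.
        private static final JDBCHelper INSTANCE = new JDBCHelper();
    }

    public static JDBCHelper getJDBCHelper() {
        return Holder.INSTANCE;
    }
}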
