How to print detailed status when a RestTemplate request fails? - spring

I use RestTemplate to call a URL such as http://example/jsonObject about 4000 times a minute. Most of the time this is fine, but sometimes RestTemplate throws more than 60 of the following exceptions in a minute:
restTemplate error org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://example/jsonObject": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:751)
I use this code:
try {
    // startMs and couponReqFullUri are defined earlier in the surrounding method
    CouponV2ResultVO couponV2ResultVO = restTemplate.getForObject("http://example/jsonObject", CouponV2ResultVO.class);
    long expense = System.currentTimeMillis() - startMs;
    log.info("takes {} ms", expense);
    return couponV2ResultVO;
} catch (Exception e) {
    long expense = System.currentTimeMillis() - startMs;
    // one placeholder per argument; passing the exception as the last argument
    // makes SLF4J log the full stack trace without ExceptionUtils
    log.error("takes {} ms, uri {}", expense, couponReqFullUri, e);
    throw e;
}
When the RestTemplate call fails, I want to print more complete details, such as the TCP communication of the whole request, or any other diagnostic information, instead of just a timeout exception. Is that possible?

You could use log.error(e) or e.printStackTrace() instead of throw e when you want more details about an exception.
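If you need more than the stack trace, one common approach is to register a ClientHttpRequestInterceptor on the RestTemplate and log the method, URI, status, and elapsed time of every call. A minimal sketch, assuming SLF4J logging (the LoggingInterceptor class name is mine; the interceptor API is standard Spring):

import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

public class LoggingInterceptor implements ClientHttpRequestInterceptor {

    private static final Logger log = LoggerFactory.getLogger(LoggingInterceptor.class);

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        long start = System.currentTimeMillis();
        try {
            ClientHttpResponse response = execution.execute(request, body);
            log.info("{} {} -> {} in {} ms", request.getMethod(), request.getURI(),
                    response.getStatusCode(), System.currentTimeMillis() - start);
            return response;
        } catch (IOException e) {
            // timeouts surface here, before Spring wraps them in ResourceAccessException
            log.error("{} {} failed after {} ms", request.getMethod(), request.getURI(),
                    System.currentTimeMillis() - start, e);
            throw e;
        }
    }
}

Register it once, e.g. restTemplate.getInterceptors().add(new LoggingInterceptor()). If you also want to log response bodies, wrap the request factory in a BufferingClientHttpRequestFactory so the stream can be read twice. For actual TCP-level traces you would instead enable wire logging in the underlying HTTP client (for example the org.apache.http.wire logger when RestTemplate is backed by Apache HttpClient); an interceptor only sees the HTTP layer.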

Related

Redis java.net.SocketTimeoutException: Read timed out on setting value during high memory utilization on the Redis cluster

I am using the Jedis client for Redis in my Spring service.
Below is code similar to what I use for setting values on a hashed key.
Jedis jedis = null;
try {
    1-- jedis = redisJedisPool.getResource();
    2-- jedis.hset(key, "data", dataValue);
    for (Entry<String, String> entry : hmap.entrySet()) {
        if (some condition)
            3-- jedis.hset(key, entry.getKey(), entry.getValue());
    }
    4-- jedis.expire(key, ttl);
} catch (Exception e) {
    logger.error("Error for key {}, Reason: {}", key, e.getMessage());
} finally {
    RedisConnectionManager.closeJedisResource(jedis);
}
There was heavy load on this service, and high memory utilization was noticed on the Redis cluster.
I faced a lot of SocketTimeoutExceptions from the above function (all of them were caught in the catch block).
Exception:
Error for key: {key}, Reason: java.net.SocketTimeoutException: Read timed out
Main issue: after this intermittent issue I see a lot of keys with a TTL of -1 (infinite expiry). Almost all of these keys were logged in the above catch block.
I need the community's thoughts on what the possible issue here could be.
Action I took to verify: I checked the data for a few of these keys (with TTL -1) and saw that the data that was to be set at line 3 (inside the for loop) was not fully set. But I also came across a couple of keys with a TTL of -1 where not all the data was set, yet the key was never logged in the above catch block. And this is the only function in my service where data is written to the cache.
After this, I am unable to confirm the above hypothesis.
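For what it's worth, the code above can leave a key without a TTL whenever a timeout strikes between step 2 and step 4: the earlier hset calls have already created the key, but expire never runs. A hedged sketch of one way to close that window, using Jedis's MULTI/EXEC transaction support so the writes and the expiry apply together or not at all (variable names reuse the question's; the optional condition inside the loop is omitted):

Jedis jedis = null;
try {
    jedis = redisJedisPool.getResource();
    // queue every write plus the expiry in one transaction; EXEC applies them
    // as a unit, so a key can no longer be created without its TTL
    Transaction t = jedis.multi();
    t.hset(key, "data", dataValue);
    for (Entry<String, String> entry : hmap.entrySet()) {
        t.hset(key, entry.getKey(), entry.getValue());
    }
    t.expire(key, ttl);
    t.exec();
} catch (Exception e) {
    logger.error("Error for key {}", key, e);
} finally {
    RedisConnectionManager.closeJedisResource(jedis);
}

The usual caveat applies: if the read times out after EXEC has already been sent, the server may still have applied the batch, but it will at least have applied the expire with it.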

SSE connection keeps failing every 5 minutes

I'm exposing a simple SSE endpoint using the Spring SseEmitter API, keeping all the emitters in a ConcurrentHashMap. The timeout for each emitter is set to 24 hours. Every 10 seconds I send a message to all the clients. Clients subscribe with the native EventSource implementation, listening for events of a particular name.
Unfortunately, I've noticed that every 5 minutes the connection is lost and re-established, even though the emitter's timeout was explicitly set to 24 hours. The client does log it as an error, but on the server side there's nothing. The issue occurs on both Tomcat and Jetty. I'd like to keep the session open without any interruptions, so resetting the connection every 5 minutes is unacceptable. Any ideas why this could be happening?
@RestController
@RequestMapping("api/v1/sse")
class SseController {

    private val emitters = ConcurrentHashMap<String, SseEmitter>()

    @GetMapping
    fun initConnection(@RequestParam token: String): SseEmitter {
        logger.info { "Init connection from $token" }
        val emitter = SseEmitter(24L * 60 * 60 * 1000) // the timeout argument is a Long, in milliseconds
        emitter.onCompletion {
            logger.info { "Completion" }
            emitters.remove(token)
        }
        emitter.onTimeout { logger.info { "Timeout" } }
        emitter.onError { logger.error(it) { "Error" } }
        emitters[token] = emitter
        return emitter
    }

    @Scheduled(fixedRate = 10000)
    fun send() {
        emitters.forEach { (k, v) ->
            logger.info { "Sending message to $k" }
            v.send(
                SseEmitter.event()
                    .id(UUID.randomUUID().toString())
                    .name("randomEvent")
                    .data("some data")
            )
        }
    }
}
const eventSource = new EventSource(url);
eventSource.addEventListener('randomEvent', (e) =>
console.log(e.data)
);
eventSource.onerror = (e) => console.log(e);
Alright, it seems it was an issue with Stackblitz's service worker. I've just implemented the same client-side solution in Chrome's plain console and the disconnecting no longer happens.

How to detect if an RSocket connection is successful?

I have the following program, through which I can detect connection failure via doBeforeRetry.
Can someone tell me how to detect a successful connection or reconnection? I want to integrate a health-check program that monitors this connection, but I am unable to capture the event that indicates the connection is successful.
Thanks
requester = RSocketRequester.builder()
        .rsocketConnector(connector -> {
            connector.reconnect(Retry
                    .fixedDelay(Integer.MAX_VALUE, Duration.ofSeconds(1))
                    .doBeforeRetry(e -> System.out.println("doBeforeRetry===>" + e))
                    .doAfterRetry(e -> System.out.println("doAfterRetry===>" + e))
            );
            connector.payloadDecoder(PayloadDecoder.ZERO_COPY);
        })
        .dataMimeType(MediaType.APPLICATION_CBOR)
        .rsocketStrategies(strategies)
        .tcp("localhost", 7999);
I achieved the detection of successful connection or reconnection with the following approach.
Client Side (Connection initialization)
Mono<RSocketRequester> requester = Mono.just(RSocketRequester.builder()
        .rsocketConnector(
                // connector configuration goes here
        )
        .dataMimeType(MediaType.APPLICATION_CBOR)
        .setupRoute("client-handshake")
        .setupData("caller-name")
        .tcp("localhost", 7999));
On the server side:
@ConnectMapping("client-handshake")
public void connect(RSocketRequester requester, @Payload String callerName) {
    LOG.info("Client Connection Handshake: [{}]", callerName);
    requester
            .route("server-handshake")
            .data("I am server")
            .retrieveMono(Void.class)
            .subscribe();
}
On the client side, when I receive the callback on the method below, I detect that the connection is successful.
@MessageMapping("server-handshake")
public Mono<ConsumerPreference> handshake(final String response) {
    if (response != null) { // guard reconstructed; the original snippet lost the `if` branch
        LOG.info("Server Connection Handshake received : Server message [{}]", response);
        connectionSuccess.set(true);
        return Mono.empty();
    } else {
        throw new InitializationException("Invalid response message received from Server");
    }
}
Additionally, I created an application-level heartbeat to ensure the liveness of the connection.
If you want to know whether it's actually healthy, you should probably have a side task that polls the health of the RSocket by sending something like a custom ping to your backend. You could time that to confirm you have a healthy connection, and record latencies and successes/failures.
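A minimal sketch of such a side task, assuming an established requester field and a "health.ping" route that the server echoes back (both the route name and the server-side handler are assumptions, not part of the answer above):

@Scheduled(fixedRate = 5000)
public void pingBackend() {
    long start = System.currentTimeMillis();
    requester.route("health.ping")   // hypothetical route; the server must map it
            .data("ping")
            .retrieveMono(String.class)
            .timeout(Duration.ofSeconds(2))
            .subscribe(
                    reply -> LOG.info("connection healthy, round trip {} ms",
                            System.currentTimeMillis() - start),
                    err -> LOG.warn("connection unhealthy: {}", err.toString()));
}

Feeding the outcome into an AtomicBoolean (or a health indicator) then gives the health-check program a current view of the connection.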

How to check how many total Redis connections a Redis server can give to clients?

We are using a Redis cache via the Spring-Redis module. We set maxActiveConnections to 10 in the application configuration, but sometimes I see the errors below in my applications:
Exception occurred while querying cache : org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Is it because the Redis server has no more connections to give to my applications, or is there another reason? Can anyone please advise?
Note: there are 15 applications using this same Redis server to store data, i.e. all 15 applications need connections from this single Redis server. For now we set maxActiveConnections to 10 for each of the 15 applications.
To check how many clients are connected to Redis, you can use redis-cli and type the INFO command, more specifically INFO clients:
192.168.8.176:8023> info Clients
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
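As for how many connections the server can hand out in total: that is governed by the server's maxclients setting (10000 by default in recent Redis versions), which you can also inspect from redis-cli. The output below is a sketch assuming the default value:

192.168.8.176:8023> config get maxclients
1) "maxclients"
2) "10000"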
From the Jedis source code, it seems the exception happens for one of two reasons: either the pool was exhausted, or the exception was caused by the implemented activateObject() or validateObject().
Here is the code snippet of the Jedis getResource method:
public T getResource() {
    try {
        return internalPool.borrowObject();
    } catch (NoSuchElementException nse) {
        if (null == nse.getCause()) { // The exception was caused by an exhausted pool
            throw new JedisExhaustedPoolException(
                    "Could not get a resource since the pool is exhausted", nse);
        }
        // Otherwise, the exception was caused by the implemented activateObject() or validateObject()
        throw new JedisException("Could not get a resource from the pool", nse);
    } catch (Exception e) {
        throw new JedisConnectionException("Could not get a resource from the pool", e);
    }
}

HBase - Connection Reset by peer Exception

I am trying to use HBase to build some real-time APIs, so my use case is to support ~10000 concurrent requests per second. I am trying to do some connection pooling to achieve multi-threaded access. I followed this documentation to create the connection: https://hbase.apache.org/1.1/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
But I keep getting this error when I make concurrent requests to my API:
WARN [http-nio-34000-exec-93-SendThread(d-3zjyk02.target.com:2181)] 19 Apr 2017 04:48:13:872 (ClientCnxn.java:1102) - Session 0x0 for server d-3zjyk02.target.com/10.66.241.30:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Here is how I am creating the connection:
// Connection to the cluster. A single connection shared by all application threads
private Connection connection = null;

public Connection getHBaseConnection() throws Exception {
    if (connection == null) {
        try {
            Configuration configuration = HBaseConfiguration.create();
            configuration.addResource("core-site.xml");
            configuration.addResource("hbase-site.xml");
            configuration.addResource("hdfs-site.xml");
            connection = ConnectionFactory.createConnection(configuration);
        } catch (Exception ex) {
            LOG.error("Exception in creating the HBase connection object: " + ex.getMessage());
            throw new Exception("Exception in creating the HBase connection: " + ex.getMessage());
        }
    }
    return connection;
}
And here is how I use the getHBaseConnection method for some scan operations:
try {
    connection = getHBaseConnection();
    afterConnectionStartTime = System.currentTimeMillis();
    LOG.info("[" + (System.currentTimeMillis() - startTime) + "]ms" + " ...TIME TAKEN to get the HBase connection object");
    if (connection != null) {
        table = connection.getTable(TableName.valueOf(TABLE_NAME));
        Scan scan = new Scan(Bytes.toBytes(rowKeyStartDate), Bytes.toBytes(rowKeyEndDate));
        scan.addColumn(COLUMN_FAMILY, ITEM);
    }
This code works fine for any number of sequential requests, but when I make concurrent requests I keep getting this error.
Some observations from my research on this issue (a sketch of a possible cause follows this question):
1) This error is related to ZooKeeper closing the socket after a certain number of requests (which I assume happens when it exceeds the max client connections (40) set in my zoo.cfg file). But what I don't understand is why the concurrent requests go to ZooKeeper in the first place. The first request should open the connection object, and all subsequent requests should use that pre-existing connection to talk to the region servers directly.
2) I am assuming this is the right way to do connection pooling (at least per the official HBase docs). If not, what is the right way to do it?
3) I don't want to increase the max client connections in the ZooKeeper cfg file, though it might be a temporary hack that gets my work done.
Any help / suggestions are much appreciated.
Thanks!
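For what it's worth, the snippet above may itself explain the ZooKeeper traffic: getHBaseConnection() is not thread-safe, so several concurrent first requests can all observe connection == null and each open their own connection, and therefore their own ZooKeeper session. A hedged sketch of the same lazy initialization done safely with double-checked locking, reusing the question's names:

// Hedged sketch: synchronize the lazy initialization so only one thread ever
// creates the shared Connection; later callers reuse it and no longer touch ZooKeeper
private volatile Connection connection = null;

public Connection getHBaseConnection() throws Exception {
    if (connection == null) {                  // fast path, no locking once initialized
        synchronized (this) {
            if (connection == null) {          // re-check under the lock
                Configuration configuration = HBaseConfiguration.create();
                configuration.addResource("core-site.xml");
                configuration.addResource("hbase-site.xml");
                configuration.addResource("hdfs-site.xml");
                connection = ConnectionFactory.createConnection(configuration);
            }
        }
    }
    return connection;
}

The volatile modifier matters here: without it, another thread could observe a partially constructed Connection through the unsynchronized fast-path read.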
