OkHttpClient - Socket Read timed out issue - okhttp

I am getting a lot of Socket Read Timeout exceptions when I try to read the response from my
POST request. I am using OkHttpClient version 3.11.0 with the following configuration:
@Bean
OkHttpClient okHttpClient() {
    def loggingInterceptor = new HttpLoggingInterceptor()
    loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY)
    Dispatcher dispatcher = new Dispatcher()
    dispatcher.setMaxRequests(200)
    dispatcher.setMaxRequestsPerHost(200)
    return new OkHttpClient()
            .newBuilder()
            .eventListenerFactory(PrintingEventListener.FACTORY)
            .retryOnConnectionFailure(true)
            .connectTimeout(30000, TimeUnit.MILLISECONDS)
            .readTimeout(30000, TimeUnit.MILLISECONDS)
            .dispatcher(dispatcher)
            .connectionPool(new ConnectionPool(200, 30, TimeUnit.SECONDS))
            .addNetworkInterceptor(loggingInterceptor)
            .addInterceptor(loggingInterceptor)
            .build()
}
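As an aside, if only a few endpoints are slow, one option is to derive a per-call client with a longer read timeout rather than raising the global value; the derived client shares the connection pool and dispatcher of the original. A minimal sketch (shown as plain Java against the OkHttp 3.x builder API; the 60-second value and the helper name are only illustrative):

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

class SlowEndpointCall {
    // Hypothetical helper: derives a client that shares the base client's
    // connection pool and dispatcher but waits longer for the response body.
    static String callWithLongerReadTimeout(OkHttpClient baseClient, Request request) throws IOException {
        OkHttpClient slowClient = baseClient.newBuilder()
                .readTimeout(60_000, TimeUnit.MILLISECONDS) // illustrative value
                .build();
        try (Response response = slowClient.newCall(request).execute()) {
            return response.body().string();
        }
    }
}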
The code to process the request and response is given below:
Response r = client.newCall(request).execute()
r.withCloseable { response ->
    def returnable
    if (response.header('Content-Type')?.contains('application/json')) {
        def body = response.body().string()
        if (body.trim().isEmpty()) {
            returnable = [:]
        } else {
            try {
                def result = new JsonSlurper().parseText(body)
                return result
            } catch (any) {
                log.error('Failed to parse json response: ' + body, any.message)
                throw any
            }
        }
    } else if (response.header('Content-Type')?.contains('image/jpeg')) {
        returnable = response.body().bytes()
    } else {
        returnable = response.body().string()
    }
    return returnable
}
I added an event listener, and I can see that the call reaches the responseBodyStart event and hangs there; when the timeout is reached, the call fails and throws a Socket read timed out exception. Is there anything missing in my configuration?
The event listener shows responseHeadersStart or responseBodyStart followed by connectionReleased only after the specified timeout (30 s) is reached.
Please find the event trace and exception trace below:
INFO 11097 : 2.1701E-5 -- callStart
INFO 11097 : 1.44572E-4 -- connectionAcquired
INFO 11097 : 0.001036047 -- requestHeadersStart
INFO 11097 : 0.001064492 -- requestHeadersEnd
INFO 11097 : 0.001084433 -- requestBodyStart
INFO 11097 : 0.001103787 -- requestBodyEnd
INFO 11097 : 0.001279736 -- responseHeadersStart
INFO 11097 : 1.007175496 -- responseHeadersEnd
INFO 11097 : 1.007247928 -- responseBodyStart
INFO 11097 : 31.082725087 -- connectionReleased
INFO 11097 : 31.083717147 -- callFailed
INFO 11097 : 31.092341876 --
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at okio.Okio$2.read(Okio.java:140)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:237)
at okio.RealBufferedSource.request(RealBufferedSource.java:68)
at okio.RealBufferedSource.require(RealBufferedSource.java:61)
at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:304)
at okhttp3.internal.http1.Http1Codec$ChunkedSource.readChunkSize(Http1Codec.java:469)
at okhttp3.internal.http1.Http1Codec$ChunkedSource.read(Http1Codec.java:449)
at okio.RealBufferedSource.request(RealBufferedSource.java:68)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:241)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:213)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
at okhttp3.RealCall.execute(RealCall.java:77)
at okhttp3.Call$execute.call(Unknown Source)
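One detail visible in the trace above: the read that times out happens inside HttpLoggingInterceptor.intercept, because a BODY-level logging interceptor reads and buffers the entire response body before it is handed back to the caller (and here the same interceptor is registered both as an application and as a network interceptor, so the body would be logged twice). To rule logging out as a factor, the logging setup in the okHttpClient() bean could be changed to something like the following (a minimal sketch in plain Java; HEADERS is only one possible level):

import okhttp3.logging.HttpLoggingInterceptor;

// Level.HEADERS (or BASIC) logs request/response metadata without reading
// and buffering the full response body inside the interceptor chain,
// unlike Level.BODY.
HttpLoggingInterceptor loggingInterceptor = new HttpLoggingInterceptor();
loggingInterceptor.setLevel(HttpLoggingInterceptor.Level.HEADERS);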

Related

Issue with FlatFileItemWriter in multithreaded step [duplicate]

I have the following FlatFileItemWriter defined in a multi-threaded step.
public FlatFileItemWriter<School> writer() throws Exception {
    FlatFileItemWriter<School> flatFileWriter = new FlatFileItemWriter<School>();
    flatFileWriter.setResource(new FileSystemResource("C:\\u01\\SchoolDetails.txt"));
    flatFileWriter.setName("School-File-Writer");
    flatFileWriter.setAppendAllowed(true);
    flatFileWriter.setLineSeparator("\n");
    flatFileWriter.setHeaderCallback(writer -> writer.write(columnHeaders()));
    flatFileWriter.setLineAggregator(new DelimitedLineAggregator<School>() {
        {
            setDelimiter("^");
            setFieldExtractor((FieldExtractor<School>) schoolFieldExtractor());
        }
    });
    return flatFileWriter;
}

private BeanWrapperFieldExtractor<School> schoolFieldExtractor() {
    return new BeanWrapperFieldExtractor<School>() {
        {
            String[] columnValuesMapper = new String[] {
                "schoolName", "schoolAddress"
            };
            setNames(columnValuesMapper);
        }
    };
}
The ItemWriter generates the files on most days, but once in a while it throws the following error:
2022-02-14 22:07:46.652 [SimpleAsyncTaskExecutor-25] INFO SpringBatchConfiguration:703 - Item Reader
2022-02-14 22:07:46.653 [SimpleAsyncTaskExecutor-25] INFO PagingItemReader:80 - reading records 1 to 10
2022-02-14 22:07:46.657 [SimpleAsyncTaskExecutor-28] INFO PagingItemReader:80 - reading records 11 to 20
2022-02-14 22:07:46.661 [SimpleAsyncTaskExecutor-27] INFO PagingItemReader:80 - reading records 21 to 30
2022-02-14 22:07:46.665 [SimpleAsyncTaskExecutor-26] INFO PagingItemReader:80 - reading records 31 to 40
2022-02-14 22:07:46.998 [SimpleAsyncTaskExecutor-25] INFO o.s.batch.core.step.AbstractStep:272 - Step: [childStep:partition1] executed in 350ms
2022-02-14 22:07:47.005 [SimpleAsyncTaskExecutor-28] INFO o.s.batch.core.step.AbstractStep:272 - Step: [childStep:partition3] executed in 357ms
2022-02-14 22:07:47.033 [SimpleAsyncTaskExecutor-27] ERROR o.s.batch.core.step.AbstractStep:237 - Encountered an error executing step childStep in School-Job-Process
org.springframework.batch.item.ItemStreamException: Output file was not created: [/u01/TotalRecordsFound-20220214.txt]
at org.springframework.batch.item.util.FileUtils.setUpOutputFile(FileUtils.java:76)
at org.springframework.batch.item.support.AbstractFileItemWriter$OutputState.initializeBufferedWriter(AbstractFileItemWriter.java:553)
at org.springframework.batch.item.support.AbstractFileItemWriter$OutputState.access$000(AbstractFileItemWriter.java:385)
at org.springframework.batch.item.support.AbstractFileItemWriter.doOpen(AbstractFileItemWriter.java:319)
at org.springframework.batch.item.support.AbstractFileItemWriter.open(AbstractFileItemWriter.java:309)
at org.springframework.batch.item.support.AbstractFileItemWriter$$FastClassBySpringCGLIB$$f2d35c3.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:136)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:124)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)
at org.springframework.batch.item.file.FlatFileItemWriter$$EnhancerBySpringCGLIB$$294bdfee.open(<generated>)
at org.springframework.batch.item.support.CompositeItemStream.open(CompositeItemStream.java:103)
at org.springframework.batch.core.step.tasklet.TaskletStep.open(TaskletStep.java:311)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:205)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:138)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:135)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
The error occurs intermittently, when two or more threads collide while creating and writing data to the file. I can avoid it by delegating my FlatFileItemWriter to a SynchronizedItemStreamWriter, but the Spring docs suggest otherwise: they say that using a FlatFileItemWriter in a multi-threaded step does NOT require synchronizing writes.
So I am not sure how to avoid these errors. Also, according to the logs, the first two partitions completed successfully, which means the file was created and data was written to it (if any exists). So how can the third partition report that the file was not created when it was already created by the first two partitions?
Any help would be appreciated. Thanks in advance.
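For reference, a minimal sketch of the SynchronizedItemStreamWriter delegation mentioned above (assuming the writer() method from the question is available in the same configuration class; whether this is the right trade-off versus the docs' guidance is exactly the open question here):

import org.springframework.batch.item.support.SynchronizedItemStreamWriter;

public SynchronizedItemStreamWriter<School> synchronizedWriter() throws Exception {
    // Wraps the FlatFileItemWriter so that write() calls coming from the
    // partitioned step's worker threads are serialized by the wrapper.
    SynchronizedItemStreamWriter<School> synchronizedWriter = new SynchronizedItemStreamWriter<>();
    synchronizedWriter.setDelegate(writer());
    return synchronizedWriter;
}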

Spark Kafka Receiver is not picking data from all partitions

I have created a Kafka topic with 5 partitions, and I am using the createStream receiver API as follows. But somehow only one receiver is getting the input data; the rest of the receivers are not processing anything. Can you please help?
JavaPairDStream<String, String> messages = null;
if (sparkStreamCount > 0) {
    // We create an input DStream for each partition of the topic, unify those streams, and then repartition the unified stream.
    List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(sparkStreamCount);
    for (int i = 0; i < sparkStreamCount; i++) {
        kafkaStreams.add(KafkaUtils.createStream(jssc, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap));
    }
    messages = jssc.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));
} else {
    messages = KafkaUtils.createStream(jssc, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap);
}
After adding the changes, I am getting the following exceptions:
INFO : org.apache.spark.streaming.kafka.KafkaReceiver - Connected to localhost:2181
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Stopping receiver with message: Error starting receiver 0: java.lang.AssertionError: assertion failed
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Called receiver onStop
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Deregistering receiver 0
ERROR: org.apache.spark.streaming.scheduler.ReceiverTracker - Deregistered receiver for stream 0: Error starting receiver 0 - java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:165)
at kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:36)
at kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:34)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.consumer.TopicCount$class.makeConsumerThreadIdsPerTopic(TopicCount.scala:34)
at kafka.consumer.StaticTopicCount.makeConsumerThreadIdsPerTopic(TopicCount.scala:100)
at kafka.consumer.StaticTopicCount.getConsumerThreadIdsPerTopic(TopicCount.scala:104)
at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:198)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:138)
at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:111)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:148)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:130)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:542)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:532)
at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1986)
at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1986)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Stopped receiver 0
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Stopping BlockGenerator
INFO : org.apache.spark.streaming.util.RecurringTimer - Stopped timer for BlockGenerator after time 1473964037200
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Waiting for block pushing thread to terminate
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Pushing out the last 0 blocks
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Stopped block pushing thread
INFO : org.apache.spark.streaming.receiver.BlockGenerator - Stopped BlockGenerator
INFO : org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Waiting for receiver to be stopped
ERROR: org.apache.spark.streaming.receiver.ReceiverSupervisorImpl - Stopped receiver with error: java.lang.AssertionError: assertion failed
ERROR: org.apache.spark.executor.Executor - Exception in task 0.0 in stage 29.0
There is one issue with the above code. The kafkaTopicMap parameter of the KafkaUtils.createStream method specifies a map of (topic_name -> numPartitions) to consume, and each partition is consumed in its own thread.
Try the code below:
JavaPairDStream<String, String> messages = null;
int sparkStreamCount = 5;
Map<String, Integer> kafkaTopicMap = new HashMap<String, Integer>();
if (sparkStreamCount > 0) {
    List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(sparkStreamCount);
    for (int i = 0; i < sparkStreamCount; i++) {
        kafkaTopicMap.put(topic, i + 1);
        kafkaStreams.add(KafkaUtils.createStream(streamingContext, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap));
    }
    messages = streamingContext.union(kafkaStreams.get(0), kafkaStreams.subList(1, kafkaStreams.size()));
} else {
    messages = KafkaUtils.createStream(streamingContext, contextVal.getString(KAFKA_ZOOKEEPER), contextVal.getString(KAFKA_GROUP_ID), kafkaTopicMap);
}
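As a follow-up to the snippet above, the comment in the original code also mentions repartitioning the unified stream before further processing; a minimal continuation sketch (the partition count is only illustrative):

// Redistribute the unioned records across the cluster before downstream work;
// 'messages' and 'sparkStreamCount' refer to the snippet above.
JavaPairDStream<String, String> repartitioned = messages.repartition(sparkStreamCount);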

keeping connection alive to websocket when using ServerWebSocketContainer

I was trying to create a WebSocket-based application where the server needs to keep the connection to the clients alive using heartbeats.
I checked the ServerWebSocketContainer.SockJsServiceOptions class for this, but could not use it. I am using the code from the spring-integration sample:
@Bean
ServerWebSocketContainer serverWebSocketContainer() {
    return new ServerWebSocketContainer("/messages").withSockJs();
}

@Bean
MessageHandler webSocketOutboundAdapter() {
    return new WebSocketOutboundMessageHandler(serverWebSocketContainer());
}

@Bean(name = "webSocketFlow.input")
MessageChannel requestChannel() {
    return new DirectChannel();
}

@Bean
IntegrationFlow webSocketFlow() {
    return f -> {
        Function<Message, Object> splitter = m -> serverWebSocketContainer()
                .getSessions()
                .keySet()
                .stream()
                .map(s -> MessageBuilder.fromMessage(m)
                        .setHeader(SimpMessageHeaderAccessor.SESSION_ID_HEADER, s)
                        .build())
                .collect(Collectors.toList());
        f.split(Message.class, splitter)
         .channel(c -> c.executor(Executors.newCachedThreadPool()))
         .handle(webSocketOutboundAdapter());
    };
}

@RequestMapping("/hi/{name}")
public void send(@PathVariable String name) {
    requestChannel().send(MessageBuilder.withPayload(name).build());
}
Please let me know how I can set the heartbeat options to ensure the connection is kept alive unless the client de-registers itself.
Thanks
Actually you got it right, but missed a bit of convenience :-).
You can configure it like this:
@Bean
ServerWebSocketContainer serverWebSocketContainer() {
    return new ServerWebSocketContainer("/messages")
            .withSockJs(new ServerWebSocketContainer.SockJsServiceOptions()
                    .setHeartbeatTime(60_000));
}
Although it isn't clear to me why you need to configure it at all, given this:
/**
 * The amount of time in milliseconds when the server has not sent any
 * messages and after which the server should send a heartbeat frame to the
 * client in order to keep the connection from breaking.
 * <p>The default value is 25,000 (25 seconds).
 */
public SockJsServiceRegistration setHeartbeatTime(long heartbeatTime) {
    this.heartbeatTime = heartbeatTime;
    return this;
}
UPDATE
In the Spring Integration Samples we have something like the stomp-chat application.
There I have added something like this to the stomp-server.xml:
<int-websocket:server-container id="serverWebSocketContainer" path="/chat">
    <int-websocket:sockjs heartbeat-time="10000"/>
</int-websocket:server-container>
Added this to the application.properties:
logging.level.org.springframework.web.socket.sockjs.transport.session=trace
And this to the index.html:
sock.onheartbeat = function() {
    console.log('heartbeat');
};
After connecting the client I see this in the server log:
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Writing SockJsFrame content='h'
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Cancelling heartbeat in session sogfe2dn
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Scheduled heartbeat in session sogfe2dn
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Preparing to write SockJsFrame content='h'
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Writing SockJsFrame content='h'
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Cancelling heartbeat in session sogfe2dn
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Scheduled heartbeat in session sogfe2dn
In the browser's console I then see the 'heartbeat' messages being logged.
So, it looks like the heart-beat feature works well...

HbaseTestingUtility: could not start my mini-cluster

I was trying to test my HBase code using HBaseTestingUtility. Every time I started my mini-cluster using the code snippet below, I got an exception.
public void startCluster()
{
    File workingDirectory = new File("./");
    Configuration conf = new Configuration();
    System.setProperty("test.build.data", workingDirectory.getAbsolutePath());
    conf.set("test.build.data", new File(workingDirectory, "zookeeper").getAbsolutePath());
    conf.set("fs.default.name", "file:///");
    conf.set("zookeeper.session.timeout", "180000");
    conf.set("hbase.zookeeper.peerport", "2888");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.addResource(new Path("conf/hbase-site1.xml"));
    try
    {
        masterDir = new File(workingDirectory, "hbase");
        conf.set(HConstants.HBASE_DIR, masterDir.toURI().toURL().toString());
    }
    catch (MalformedURLException e1)
    {
        logger.error(e1.getMessage());
    }
    Configuration hbaseConf = HBaseConfiguration.create(conf);
    utility = new HBaseTestingUtility(hbaseConf);
    // Change permission for dfs.data.dir, please refer
    // https://issues.apache.org/jira/browse/HBASE-5711 for more details.
    try
    {
        Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
        BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
        int rc = process.waitFor();
        if (rc == 0)
        {
            String umask = br.readLine();
            int umaskBits = Integer.parseInt(umask, 8);
            int permBits = 0777 & ~umaskBits;
            String perms = Integer.toString(permBits, 8);
            logger.info("Setting dfs.datanode.data.dir.perm to " + perms);
            utility.getConfiguration().set("dfs.datanode.data.dir.perm", perms);
        }
        else
        {
            logger.warn("Failed running umask command in a shell, nonzero return value");
        }
    }
    catch (Exception e)
    {
        // ignore errors, we might not be running on POSIX, or "sh" might
        // not be on the path
        logger.warn("Couldn't get umask", e);
    }
    if (!checkIfServerRunning())
    {
        hTablePool = new HTablePool(conf, 1);
        try
        {
            zkCluster = new MiniZooKeeperCluster(conf);
            zkCluster.setDefaultClientPort(2181);
            zkCluster.setTickTime(18000);
            zkDir = new File(utility.getClusterTestDir().toString());
            zkCluster.startup(zkDir);
            utility.setZkCluster(zkCluster);
            utility.startMiniCluster();
            utility.getHBaseCluster().startMaster();
        }
        catch (Exception e)
        {
            e.printStackTrace();
            logger.error(e.getMessage());
            throw new RuntimeException(e);
        }
    }
}
I got the following exception:
2013-09-10 15:26:26 INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
2013-09-10 15:26:26 INFO ZooKeeperServer:839 - Client attempting to establish new session at /127.0.0.1:45934
2013-09-10 15:26:26 INFO ZooKeeperServer:595 - Established session 0x141074cd6150002 with negotiated timeout 180000 for client /127.0.0.1:45934
2013-09-10 15:26:26 INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x141074cd6150002, negotiated timeout = 180000
2013-09-10 15:26:26 INFO HBaseRPC:289 - Server at localhost/127.0.0.1:42926 could not be reached after 1 tries, giving up.
2013-09-10 15:26:26 WARN AssignmentManager:1714 - Failed assignment of -ROOT-,,0.70236052 to localhost,42926,1378806982623, trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:42926 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.handleConnectionException(HBaseRPC.java:291)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:259)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1261)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1248)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:550)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:483)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1664)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1387)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1362)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1357)
at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2236)
at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:654)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:551)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:362)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:525)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:416)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:462)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1150)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1000)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
at com.sun.proxy.$Proxy20.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:335)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:312)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:364)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
... 14 more
2013-09-10 15:26:26 WARN AssignmentManager:1736 - Unable to find a viable location to assign region -ROOT-,,0.70236052
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:29:42 ERROR MiniHBaseCluster:201 - Error starting cluster
java.lang.RuntimeException: Master not initialized after 200 seconds
at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:206)
at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:196)
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:635)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:609)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:557)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:526)
at HBaseTesting.startCluster(HBaseTesting.java:131)
at HBaseTesting.main(HBaseTesting.java:62)
2013-09-10 15:29:42 INFO HMaster:1635 - Cluster shutdown requested
2013-09-10 15:29:42 INFO HRegionServer:1666 - STOPPED: Shutdown requested
2013-09-10 15:29:42 INFO HBaseServer:1651 - Stopping server on 42926
Could anyone help me with the solution?
Here's my solution.
I had to update my /etc/hosts file.
The two entries of interest are:
127.0.0.1 localhost
127.0.1.1 myhostname
I had to change the IP that myhostname points to so that it also points to 127.0.0.1.
Once my /etc/hosts was updated to look like this:
127.0.0.1 localhost
127.0.0.1 myhostname
The code started working.
(This assumes a Linux server; replace /etc/hosts with the equivalent file for your operating system.)
http://en.wikipedia.org/wiki/Hosts_(file)
Adding the Guava dependency to the Gradle file worked for me:
compile group: 'com.google.guava', name: 'guava', version: '14.0'
