Spring Kafka asynchronous send calls block

I'm using Spring-Kafka version 1.2.1 and, when the Kafka server is down/unreachable, the asynchronous send calls block for a time. It seems to be the TCP timeout. The code is something like this:
ListenableFuture<SendResult<K, V>> future = kafkaTemplate.send(topic, key, message);
future.addCallback(new ListenableFutureCallback<SendResult<K, V>>() {

    @Override
    public void onSuccess(SendResult<K, V> result) {
        ...
    }

    @Override
    public void onFailure(Throwable ex) {
        ...
    }
});
I've taken a really quick look at the Spring-Kafka code and it seems to just pass the task along to the kafka client library, translating a callback interaction to a future object interaction. Looking at the kafka client library, the code gets more complex and I didn't take the time to understand it all, but I guess it may be making remote calls (metadata, at least?) in the same thread.
As a user, I expected the Spring-Kafka methods that return a future to return immediately, even if the remote kafka server is unreachable.
Any confirmation if my understanding is wrong or if this is a bug would be welcome. I ended up making it asynchronous on my end for now.
Another problem is that the Spring-Kafka documentation says, at the beginning, that it provides synchronous and asynchronous send methods. I couldn't find any methods that do not return futures, so maybe the documentation needs updating.
I'm happy to provide any further details if needed. Thanks.
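
For illustration, a minimal sketch of that workaround (assuming a kafkaTemplate field and an SLF4J log field; this is not the actual code from the question): hand the potentially blocking send() to a dedicated executor so the caller returns immediately even when the broker is unreachable.

private final ExecutorService sendExecutor = Executors.newSingleThreadExecutor();

public void sendAsync(String topic, String key, String message) {
    // submit() returns at once; any blocking (metadata fetch, full buffer) happens on the executor thread
    sendExecutor.submit(() -> kafkaTemplate.send(topic, key, message).addCallback(
            result -> log.info("sent, offset=" + result.getRecordMetadata().offset()),
            ex -> log.error("send failed", ex)));
}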

In addition to the @EnableAsync annotation on a configuration class, the @Async annotation needs to be used on the method where you invoke this code.
http://www.baeldung.com/spring-async
Here are some code fragments. Kafka producer config:
@EnableAsync
@Configuration
public class KafkaProducerConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducerConfig.class);

    @Value("${kafka.brokers}")
    private String servers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // must be a serializer, not a deserializer, on the producer side
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, GenericMessage> producerFactory(ObjectMapper objectMapper) {
        return new DefaultKafkaProducerFactory<>(producerConfigs(), new StringSerializer(), new JsonSerializer(objectMapper));
    }

    @Bean
    public KafkaTemplate<String, GenericMessage> kafkaTemplate(ObjectMapper objectMapper) {
        return new KafkaTemplate<>(producerFactory(objectMapper));
    }

    @Bean
    public Producer producer() {
        return new Producer();
    }
}
And the producer itself:
public class Producer {

    public static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, GenericMessage> kafkaTemplate;

    @Async
    public void send(String topic, GenericMessage message) {
        ListenableFuture<SendResult<String, GenericMessage>> future = kafkaTemplate.send(topic, message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, GenericMessage>>() {

            @Override
            public void onSuccess(final SendResult<String, GenericMessage> message) {
                LOGGER.info("sent message= " + message + " with offset= " + message.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(final Throwable throwable) {
                LOGGER.error("unable to send message= " + message, throwable);
            }
        });
    }
}
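
One caveat: @Async is proxy-based, so send() must be invoked from outside the Producer bean; self-invocation bypasses the proxy and runs synchronously. A hypothetical caller (names are illustrative, not from the original answer):

@Service
public class OrderService {

    @Autowired
    private Producer producer; // Spring injects the @Async-capable proxy

    public void onOrderCreated(GenericMessage message) {
        producer.send("orders", message); // returns immediately; the body runs on the async executor
    }
}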

If I look at the KafkaProducer itself, there are two parts to sending a message:
Storing the message into the internal buffer.
Uploading the message from the buffer to Kafka.
KafkaProducer is asynchronous for the second part, not the first part.
The send() method can still block on the first part and eventually throw a TimeoutException, e.g.:
The metadata for the topic is not cached or is stale, so the producer tries to get the metadata from the server to know whether the topic still exists and how many partitions it has.
The buffer is full (32MB by default).
If the server is completely unresponsive, you will probably encounter both issues.
Update:
I tested and confirmed this in Kafka 2.2.1. It looks like this behaviour might be different in 2.4 and/or 2.6: KAFKA-3720
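If bounded blocking is acceptable, the producer's max.block.ms setting caps how long send() may block on both of those parts. A minimal sketch (broker address and timeout are illustrative):

Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative broker
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// fail with a TimeoutException after 1s instead of the 60s default
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 1000);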

The best solution is to add a callback ProducerListener at the level of the producer:
@Bean
public KafkaTemplate<String, WebUserOperation> operationKafkaTemplate() {
    KafkaTemplate<String, WebUserOperation> kt = new KafkaTemplate<>(operationProducerFactory());
    kt.setProducerListener(new ProducerListener<String, WebUserOperation>() {

        @Override
        public void onSuccess(ProducerRecord<String, WebUserOperation> record, RecordMetadata recordMetadata) {
            System.out.println("### Callback :: " + recordMetadata.topic() + " ; partition = "
                    + recordMetadata.partition() + " with offset= " + recordMetadata.offset()
                    + " ; Timestamp : " + recordMetadata.timestamp() + " ; Message Size = " + recordMetadata.serializedValueSize());
        }

        @Override
        public void onError(ProducerRecord<String, WebUserOperation> producerRecord, Exception exception) {
            System.out.println("### Topic = " + producerRecord.topic() + " ; Message = " + producerRecord.value().getOperation());
            exception.printStackTrace();
        }
    });
    return kt;
}

Just to be sure: do you have the @EnableAsync annotation applied? That could be the key to getting the Future<> behavior you expect.

The code below works for me to get the response asynchronously:
ProducerRecord<UUID, Person> producerRecord = new ProducerRecord<>(kafkaTemplate.getDefaultTopic(), messageKey, person);
Runnable runnable = () -> kafkaTemplate.send(producerRecord).addCallback(new MessageAckHandler());
new Thread(runnable).start();

public class MessageAckHandler implements ListenableFutureCallback<SendResult<UUID, Person>> {

    @Override
    public void onFailure(Throwable exception) {
        log.error("unable to send message: " + exception.getMessage());
    }

    @Override
    public void onSuccess(SendResult<UUID, Person> result) {
        log.debug("sent message with offset={} messageID={}", result.getRecordMetadata().offset(), result.getProducerRecord().key());
    }
}
For reference, SendResult is spring-kafka's own class (org.springframework.kafka.support.SendResult), so there is no need to define it yourself:
public class SendResult<K, V> {

    private final ProducerRecord<K, V> producerRecord;
    private final RecordMetadata recordMetadata;

    public SendResult(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        this.producerRecord = producerRecord;
        this.recordMetadata = recordMetadata;
    }

    public ProducerRecord<K, V> getProducerRecord() {
        return this.producerRecord;
    }

    public RecordMetadata getRecordMetadata() {
        return this.recordMetadata;
    }

    @Override
    public String toString() {
        return "SendResult [producerRecord=" + this.producerRecord + ", recordMetadata=" + this.recordMetadata + "]";
    }
}


How to properly configure multiple DMLCs to listen to a single sqs queue?

We have an order management system in which, after every order state update, we make an API call to our client to keep them updated. We do this by first sending a message to an SQS queue, and inside a consumer we hit our client's API. The processing on the consumer side usually takes about 300-350 ms, but the approximate age of the oldest message in the SQS dashboard shows spikes of up to 50-60 seconds.
Seeing this, I thought that maybe one consumer is not enough for our load, so I created multiple DMLC beans and multiple copies of our consumer class. I attached these consumer classes as listeners in these DMLCs. But I have not seen any improvement in the approximate age of the oldest message.
I am guessing that maybe only one of the DMLCs is processing these messages and the others are just sitting idle.
I added multiple DMLCs because the same approach is used in other places in our codebase, but now I am not sure if this is the correct way to solve the problem.
My Consumer class looks like this:
@Component
@Slf4j
@RequiredArgsConstructor
public class HOAEventsOMSConsumer extends ConsumerCommon implements MessageListener {

    private static final int MAX_RETRY_LIMIT = 3;
    private final OMSEventsWrapper omsEventsWrapper;

    @Override
    public void onMessage(Message message) {
        try {
            TextMessage textMessage = (TextMessage) message;
            String jmsMessageId = textMessage.getJMSMessageID();
            ConsumerLogging.logStart(jmsMessageId);
            String text = textMessage.getText();
            log.info("Inside HOA Events consumer Request jmsMessageId:- " + jmsMessageId + " Text:- " + text);
            processAndAcknowledge(message, text, textMessage);
        } catch (JMSException e) {
            log.error("JMS Exception while processing surge message", e);
        }
    }

    private void processAndAcknowledge(Message message, String text, TextMessage textMessage) throws JMSException {
        try {
            TrimmedHOAEvent hoaEvent = JsonHelper.convertFromJsonPro(text, TrimmedHOAEvent.class);
            if (hoaEvent == null) {
                throw new OMSValidationException("Empty message in hoa events queue");
            }
            EventType event = EventType.fromString(textMessage.getStringProperty("eventType"));
            omsEventsWrapper.handleOmsEvent(event, hoaEvent);
            acknowledgeMessage(message);
        } catch (Exception e) {
            int retryCount = message.getIntProperty("JMSXDeliveryCount");
            log.info("Retrying... retryCount: {}, HOAEventsOMSConsumer: {}", retryCount, text);
            if (retryCount > MAX_RETRY_LIMIT) {
                log.info("about to acknowledge the message since it has exceeded maximum retry limit");
                acknowledgeMessage(message);
            }
        }
    }
}
And my DMLC configuration class looks like this:
@Configuration
@SuppressWarnings("unused")
public class HOAEventsOMSJMSConfig extends JMSConfigCommon {

    private Boolean isSQSQueueEnabled;

    @Autowired
    private HOAEventsOMSConsumer hoaEventsOMSConsumer;
    @Autowired
    private HOAEventsOMSConsumer2 hoaEventsOMSConsumer2;
    @Autowired
    private HOAEventsOMSConsumer3 hoaEventsOMSConsumer3;
    @Autowired
    private HOAEventsOMSConsumer4 hoaEventsOMSConsumer4;
    @Autowired
    private HOAEventsOMSConsumer5 hoaEventsOMSConsumer5;
    @Autowired
    private HOAEventsOMSConsumer6 hoaEventsOMSConsumer6;
    @Autowired
    private HOAEventsOMSConsumer7 hoaEventsOMSConsumer7;
    @Autowired
    private HOAEventsOMSConsumer8 hoaEventsOMSConsumer8;
    @Autowired
    private HOAEventsOMSConsumer9 hoaEventsOMSConsumer9;
    @Autowired
    private HOAEventsOMSConsumer10 hoaEventsOMSConsumer10;

    public HOAEventsOMSJMSConfig(IPropertyService propertyService, Environment env) {
        queueName = env.getProperty("aws.sqs.queue.oms.hoa.events.queue");
        endpoint = env.getProperty("aws.sqs.queue.endpoint") + queueName;
        JMSConfigCommon.accessId = env.getProperty("aws.sqs.access.id");
        JMSConfigCommon.accessKey = env.getProperty("aws.sqs.access.key");
        try {
            ServerNameCache serverNameCache = CacheManager.getInstance().getCache(ServerNameCache.class);
            if (serverNameCache == null) {
                serverNameCache = new ServerNameCache();
                serverNameCache.set(InetAddress.getLocalHost().getHostName());
                CacheManager.getInstance().setCache(serverNameCache);
            }
            this.isSQSQueueEnabled = propertyService.isConsumerEnabled(serverNameCache.get(), false);
        } catch (Exception e) {
            this.isSQSQueueEnabled = false;
        }
    }

    @Bean
    public JmsTemplate omsHOAEventsJMSTemplate() {
        SQSConnectionFactory sqsConnectionFactory;
        if (endpoint.toLowerCase().contains("localhost")) {
            sqsConnectionFactory =
                    SQSConnectionFactory.builder().withEndpoint(getEndpoint("sqs")).build();
        } else {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withAWSCredentialsProvider(awsCredentialsProvider)
                    .withNumberOfMessagesToPrefetch(10)
                    .withEndpoint(endpoint)
                    .build();
        }
        CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(sqsConnectionFactory);
        JmsTemplate jmsTemplate = new JmsTemplate(cachingConnectionFactory);
        jmsTemplate.setDefaultDestinationName(queueName);
        jmsTemplate.setDeliveryPersistent(false);
        jmsTemplate.setSessionTransacted(false);
        jmsTemplate.setSessionAcknowledgeMode(SQSSession.UNORDERED_ACKNOWLEDGE);
        return jmsTemplate;
    }

    @Bean
    public DefaultMessageListenerContainer jmsListenerHOAEventsListenerContainer() {
        SQSConnectionFactory sqsConnectionFactory;
        if (endpoint.toLowerCase().contains("localhost")) {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withEndpoint(getEndpoint("sqs"))
                    .build();
        } else {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withAWSCredentialsProvider(awsCredentialsProvider)
                    .withNumberOfMessagesToPrefetch(10)
                    .withEndpoint(endpoint)
                    .build();
        }
        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(sqsConnectionFactory);
        dmlc.setDestinationName(queueName);
        dmlc.setAutoStartup(isSQSQueueEnabled);
        dmlc.setMessageListener(hoaEventsOMSConsumer);
        dmlc.setSessionTransacted(false);
        dmlc.setSessionAcknowledgeMode(SQSSession.UNORDERED_ACKNOWLEDGE);
        return dmlc;
    }

    @Bean
    public DefaultMessageListenerContainer jmsListenerHOAEventsListenerContainerNo2() {
        SQSConnectionFactory sqsConnectionFactory;
        if (endpoint.toLowerCase().contains("localhost")) {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withEndpoint(getEndpoint("sqs"))
                    .build();
        } else {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withAWSCredentialsProvider(awsCredentialsProvider)
                    .withNumberOfMessagesToPrefetch(10)
                    .withEndpoint(endpoint)
                    .build();
        }
        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(sqsConnectionFactory);
        dmlc.setDestinationName(queueName);
        dmlc.setAutoStartup(isSQSQueueEnabled);
        dmlc.setMessageListener(hoaEventsOMSConsumer2);
        dmlc.setSessionTransacted(false);
        dmlc.setSessionAcknowledgeMode(SQSSession.UNORDERED_ACKNOWLEDGE);
        return dmlc;
    }

    @Bean
    public DefaultMessageListenerContainer jmsListenerHOAEventsListenerContainerNo3() {
        SQSConnectionFactory sqsConnectionFactory;
        if (endpoint.toLowerCase().contains("localhost")) {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withEndpoint(getEndpoint("sqs"))
                    .build();
        } else {
            sqsConnectionFactory = SQSConnectionFactory.builder()
                    .withAWSCredentialsProvider(awsCredentialsProvider)
                    .withNumberOfMessagesToPrefetch(10)
                    .withEndpoint(endpoint)
                    .build();
        }
        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(sqsConnectionFactory);
        dmlc.setDestinationName(queueName);
        dmlc.setAutoStartup(isSQSQueueEnabled);
        dmlc.setMessageListener(hoaEventsOMSConsumer3);
        dmlc.setSessionTransacted(false);
        dmlc.setSessionAcknowledgeMode(SQSSession.UNORDERED_ACKNOWLEDGE);
        return dmlc;
    }
}
If this question is already answered somewhere else, then please point me towards that.
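
One thing worth trying (a sketch, not a confirmed fix): a single DefaultMessageListenerContainer can run several concurrent consumers on the same queue via setConcurrentConsumers()/setMaxConcurrentConsumers(), instead of ten container beans. Field and method names below are assumed from the config above:

@Bean
public DefaultMessageListenerContainer hoaEventsConcurrentListenerContainer() {
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(sqsConnectionFactory()); // assumes the SQS factory extracted into its own method
    dmlc.setDestinationName(queueName);
    dmlc.setAutoStartup(isSQSQueueEnabled);
    dmlc.setMessageListener(hoaEventsOMSConsumer);
    dmlc.setConcurrentConsumers(5);                    // start with five consumer threads
    dmlc.setMaxConcurrentConsumers(10);                // scale up to ten under load
    dmlc.setSessionTransacted(false);
    dmlc.setSessionAcknowledgeMode(SQSSession.UNORDERED_ACKNOWLEDGE);
    return dmlc;
}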

Test onFailure of spring-kafka sending message

I'm trying to test the onFailure case when I send a Kafka message with the producer, but the onFailure method never fires.
Here is my code where I send a message:
@Component
public class MessageSending {

    private static final Logger log = LoggerFactory.getLogger(MessageSending.class);

    @Autowired
    Map<String, KafkaTemplate<String, String>> producerByCountry;

    String topicName = "countryTopic";

    public void sendMessage(String data) {
        producerByCountry.get("countryName").send(topicName, data).addCallback(
                onSuccess -> {},
                onFailure -> log.error("failed")
        );
    }
}
Here is the test class, but it still hits the success case, and I have no idea how to test the failure case (I want to add some processing inside the onFailure block, but I would first like to know how I can trigger onFailure from a test).
@EmbeddedKafka
@SpringBootTest
public class MessageSendingTest {

    @MockBean
    Map<Country, KafkaTemplate<String, String>> producerByCountry;

    @Autowired
    EmbeddedKafkaBroker embeddedKafka;

    @Autowired
    MessageSending messageSending;

    @Test
    void failTest(CapturedOutput capturedOutput) {
        var props = KafkaTestUtils.producerProps(embeddedKafka);
        var producerFactory = new DefaultKafkaProducerFactory<String, String>(props);
        var template = new KafkaTemplate<>(producerFactory);
        given(producerByCountry.get("USA")).willReturn(template);
        messageSending.sendMessage("data");
        assertThat(capturedOutput).contains("failed");
    }
}
I also tried the idea from this topic (How to test Kafka OnFailure callback with Junit?) by doing:
doAnswer(invocationOnMock -> {
    ListenableFutureCallback<SendResult<String, String>> listenableFutureCallback = invocationOnMock.getArgument(0);
    KafkaProducerException value = new KafkaProducerException(new ProducerRecord<String, String>("myTopic", "myMessage"), "error", ex);
    listenableFutureCallback.onFailure(value);
    return null;
}).when(mock(ListenableFuture.class)).addCallback(any(ListenableFutureCallback.class));
But I got the Mockito exception org.mockito.exceptions.misusing.UnnecessaryStubbingException, caused by the when().addCallback stubbing.
Can anyone help?
Thanks.
You can use a mock template; see this answer for an example:
How to mock result from KafkaTemplate
EDIT
You can also mock the underlying Producer object - here is an example that is closer to your use case...
@SpringBootApplication
public class So75074961Application {

    public static void main(String[] args) {
        SpringApplication.run(So75074961Application.class, args);
    }

    @Bean
    KafkaTemplate<String, String> france(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf, Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "france:9092"));
    }

    @Bean
    KafkaTemplate<String, String> germany(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf, Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "germany:9092"));
    }
}

@Component
class MessageSending {

    private static final Logger log = LoggerFactory.getLogger(MessageSending.class);

    @Autowired
    Map<String, KafkaTemplate<String, String>> producerByCountry;

    String topicName = "countryTopic";

    public void sendMessage(String country, String data) {
        producerByCountry.get(country).send(topicName, data).addCallback(
                onSuccess -> log.info(onSuccess.getRecordMetadata().toString()),
                onFailure -> log.error("failed: " + onFailure.getMessage()));
    }
}
@SpringBootTest
@ExtendWith(OutputCaptureExtension.class)
class So75074961ApplicationTests {

    @Test
    void test(@Autowired MessageSending sending, CapturedOutput capture) {
        ProducerFactory<String, String> pf = mock(ProducerFactory.class);
        Producer<String, String> prod = mock(Producer.class);
        given(pf.createProducer()).willReturn(prod);
        willAnswer(inv -> {
            Callback callback = inv.getArgument(1);
            callback.onCompletion(null, new RuntimeException("test"));
            return mock(Future.class);
        }).given(prod).send(any(), any());
        // inject the mock pf into the "france" template
        Map<?, ?> producers = KafkaTestUtils.getPropertyValue(sending, "producerByCountry", Map.class);
        new DirectFieldAccessor(producers.get("france")).setPropertyValue("producerFactory", pf);
        sending.sendMessage("france", "foo");
        assertThat(capture)
                .contains("failed: Failed to send; nested exception is java.lang.RuntimeException: test");
    }
}
Use CompletableFuture instead of ListenableFuture for spring-kafka versions 3.0 or later:
public void sendMessage(String country, String data) {
    producerByCountry.get(country).send(topicName, data).whenComplete(
            (res, ex) -> {
                if (ex == null) {
                    log.info(res.getRecordMetadata().toString());
                }
                else {
                    log.error("failed: " + ex.getMessage());
                }
            });
}
and
assertThat(capture)
.contains("failed: Failed to send");
(the latter because Spring Framework 6.0+ no longer merges nested exception messages; the top level exception is a KafkaProducerException, with the actual exception as its cause).

Stop RabbitMQ-Connection in Spring-Boot

I have a spring-boot application that pulls all the messages from a RabbitMQ queue and then terminates. I use the RabbitTemplate from spring-boot-starter-amqp (version 2.4.0), namely receiveAndConvert(). Somehow, I cannot get my application to terminate: once the rabbitConnectionFactory is created, it never stops.
According to Google and other Stack Overflow questions, calling stop() or destroy() on the rabbitTemplate should do the job, but that doesn't work.
The rabbitTemplate is injected in the constructor.
Here is some code:
rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
Object msg = getMessage();
while (msg != null) {
    try {
        String name = ((LinkedHashMap) msg).get(propertyName).toString();
        //business logic
        logger.debug("added_" + name);
    } catch (Exception e) {
        logger.error("" + e.getMessage());
    }
    msg = getMessage();
}
rabbitTemplate.stop();

private Object getMessage() {
    try {
        return rabbitTemplate.receiveAndConvert(queueName);
    } catch (Exception e) {
        logger.error("" + e.getMessage());
        return null;
    }
}
So, how do you terminate the connection to RabbitMQ properly?
Thanks for your inquiry.
You can call resetConnection() on the CachingConnectionFactory to close the connection.
Or close() the application context.
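A minimal sketch of that first option (assuming the CachingConnectionFactory can be injected):

@Autowired
private CachingConnectionFactory connectionFactory;

public void shutdownRabbit() {
    rabbitTemplate.stop();               // stop the template's consumers
    connectionFactory.resetConnection(); // close and discard the cached connection so the JVM can exit
}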
If I were to do it, I would use @RabbitListener to receive the messages and a RabbitListenerEndpointRegistry to start and stop the listener. Sample code is given below:
@EnableScheduling
@SpringBootApplication
public class Application implements ApplicationRunner {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    public static final String queueName = "Hello";

    @Bean
    public Queue hello() {
        return new Queue(queueName);
    }

    @Autowired
    private RabbitTemplate template;

    @Scheduled(fixedDelay = 1000, initialDelay = 500)
    public void send() {
        String message = "Hello World!";
        this.template.convertAndSend(queueName, message);
        System.out.println(" [x] Sent '" + message + "'");
    }

    @Autowired
    RabbitListenerEndpointRegistry registry;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        registry.getListenerContainer(Application.queueName).start();
        Thread.sleep(10000L);
        registry.getListenerContainer(Application.queueName).stop();
    }
}

@Component
class Receiver {

    @RabbitListener(id = Application.queueName, queues = Application.queueName)
    public void receive(String in) {
        System.out.println(" [x] Received '" + in + "'");
    }
}

spring boot rabbitmq dead letter queue config not working

I configured a Spring Boot RabbitMQ dead letter queue, but the ErrorHandler never receives any message. I've searched all the questions about dead letter queues, but could not figure it out. Can anyone help me?
RabbitConfig.java, configuring the dead letter queue/exchange:
@Configuration
public class RabbitConfig {

    public final static String MAIL_QUEUE = "mail_queue";
    public final static String DEAD_LETTER_EXCHANGE = "dead_letter_exchange";
    public final static String DEAD_LETTER_QUEUE = "dead_letter_queue";

    public static Map<String, Object> args = new HashMap<String, Object>();

    static {
        args.put("x-dead-letter-exchange", DEAD_LETTER_EXCHANGE);
        //args.put("x-dead-letter-routing-key", DEAD_LETTER_QUEUE);
        args.put("x-message-ttl", 5000);
    }

    @Bean
    public Queue mailQueue() {
        return new Queue(MAIL_QUEUE, true, false, false, args);
    }

    @Bean
    public Queue deadLetterQueue() {
        return new Queue(DEAD_LETTER_QUEUE, true);
    }

    @Bean
    public FanoutExchange deadLetterExchange() {
        return new FanoutExchange(DEAD_LETTER_EXCHANGE);
    }

    @Bean
    public Binding deadLetterBinding() {
        return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange());
    }
}
ErrorHandler.java to process DEAD LETTER QUEUE:
@Component
@RabbitListener(queues = RabbitConfig.DEAD_LETTER_QUEUE)
public class ErrorHandler {

    @RabbitHandler
    public void handleError(Object message) {
        System.out.println("xxxxxxxxxxxxxxxxxx" + message);
    }
}
MailServiceImpl.java to process MAIL_QUEUE:
@Service
@RabbitListener(queues = RabbitConfig.MAIL_QUEUE)
@ConditionalOnProperty("spring.mail.host")
public class MailServiceImpl implements MailService {

    private static final Logger logger = LoggerFactory.getLogger(MailServiceImpl.class);

    @Autowired
    private JavaMailSender mailSender;

    @RabbitHandler
    @Override
    public void sendMail(TMessageMail form) {
        //......
        try {
            mailSender.save(form);
        } catch (Exception e) {
            logger.error("error in sending mail: {}", e.getMessage());
            throw new AmqpRejectAndDontRequeueException(e.getMessage());
        }
    }
}
Thank god, I finally found the answer!
All the configuration is correct; the problem is that all the queues, like mail_queue, were created before I configured the dead letter queue. Setting x-dead-letter-exchange on a queue after the queue has already been created does not take effect.
(In Chinese: after changing a queue's arguments, you must delete and recreate the queue!!! Such a simple tip cost me several hours...)
To delete the queue, I followed this answer:
Deleting queues in RabbitMQ
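A minimal sketch of the delete-and-recreate done programmatically (assuming a Spring AMQP AmqpAdmin bean; names mirror the config above):

@Autowired
private AmqpAdmin amqpAdmin;

public void recreateMailQueue() {
    // queue arguments are immutable once declared, so delete and redeclare
    amqpAdmin.deleteQueue(RabbitConfig.MAIL_QUEUE);
    amqpAdmin.declareQueue(new Queue(RabbitConfig.MAIL_QUEUE, true, false, false, RabbitConfig.args));
}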

Integrating Java WebSockets (JSR-356) with SpringBoot

I'm having an issue getting a websocket deployed in SpringBoot. I've tried quite a few approaches based on https://spring.io/blog/2013/05/23/spring-framework-4-0-m1-websocket-support, Using Java API for WebSocket (JSR-356) with Spring Boot, etc., without any luck.
Here is what I'm trying:
web socket:
#ServerEndpoint(value="/socket/{name}", configurator = SpringConfigurator.class)
public class TestSocket {
public ApiSocket(){}
#OnOpen
public void onOpen(
Session session,
#PathParam("name") String name) throws IOException {
session.getBasicRemote().sendText("Hi " + name);
}
}
applications.properties:
server.contextPath=/api
Main class:
@SpringBootApplication
public class Main {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Main.class, args);
    }
}
According to the blog post above, this should be all that's required. I've also tried the second approach described, which involves a bean, with no luck:
@Bean
public ServerEndpointExporter endpointExporter() {
    return new ServerEndpointExporter();
}
I am trying to open a connection to ws://localhost:8080/api/socket/John and expecting to receive a response back with the path name:
var socket = new WebSocket('ws://localhost:8080/api/socket/John');
The result is a 404 during the handshake.
You also have to register TestSocket as a bean in your Spring configuration and remove configurator = SpringConfigurator.class from your TestSocket.
Generally, Spring overrides the plain Java JSR-356 WebSocket support with its STOMP protocol, which runs over WebSocket; it also does not fully support binary messages the way a plain WebSocket does.
You should add a ServerEndpointExporter in a configuration class:
@Configuration
public class EndpointConfig {

    @Bean
    public ChatEndpointNew chatEndpointNew() {
        return new ChatEndpointNew();
    }

    @Bean
    public ServerEndpointExporter endpointExporter() {
        return new ServerEndpointExporter();
    }
}
Let's see a complete chat endpoint, with the room to which the client gets connected:
#ServerEndpoint(value="/chatMessage/{room}")
public class ChatEndpointNew
{
private final Logger log = Logger.getLogger(getClass().getName());
#OnOpen
public void open(final Session session, #PathParam("room")final String room)
{
log.info("session openend and bound to room: " + room);
session.getUserProperties().put("room", room);
System.out.println("session openend and bound to room: " + room);
}
#OnMessage
public void onMessage(final Session session, final String message) {
String room = (String)session.getUserProperties().get("room");
try{
for (Session s : session.getOpenSessions()){
if(s.isOpen()
&& room.equals(s.getUserProperties().get("room"))){
String username = (String) session.getUserProperties().get("username");
if(username == null){
s.getUserProperties().put("username", message);
s.getBasicRemote().sendText(buildJsonData("System", "You are now connected as:"+message));
}else{
s.getBasicRemote().sendText(buildJsonData(username, message));
}
}
}
}catch(IOException e) {
log.log(Level.WARNING, "on Text Transfer failed", e);
}
}
#OnClose
public void onClose(final Session session){
String room = (String)session.getUserProperties().get("room");
session.getUserProperties().remove("room",room);
log.info("session close and removed from room: " + room);
}
private String buildJsonData(String username, String message) {
JsonObject jsonObject = Json.createObjectBuilder().add("message", "<tr><td class='user label label-info'style='font-size:20px;'>"+username+"</td>"+"<td class='message badge' style='font-size:15px;'> "+message+"</td></tr>").build();
StringWriter stringWriter = new StringWriter();
try(JsonWriter jsonWriter = Json.createWriter(stringWriter)){
jsonWriter.write(jsonObject);
}
return stringWriter.toString();
}
}
Note that you should add ChatEndpointNew and ServerEndpointExporter separately from the main Spring configuration of your application.
If any bug appears, try this dependency:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-websocket</artifactId>
    <version>4.0.0.RELEASE</version>
</dependency>
You can also go through this Spring documentation.
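For a quick end-to-end check, a minimal JSR-356 client could look like this (a sketch: the URI and room name are assumed, and a client implementation such as Tyrus must be on the classpath):

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class ChatTestClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try (Session session = container.connectToServer(ChatTestClient.class,
                URI.create("ws://localhost:8080/chatMessage/room1"))) {
            session.getBasicRemote().sendText("Alice"); // first message sets the username
            Thread.sleep(2000);                         // wait briefly for the server's replies
        }
    }
}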
