Spring Integration - Multicast UDP message not updating - spring-boot

I'm trying to listen for a periodic UDP message on a multicast IP with Spring Integration, but my code gets the same UDP message every time, even after the message has been updated. When I stop my program and restart it, the message updates.
Here is my configuration (config.java):
@Bean
public IntegrationFlow udpIn() {
    return IntegrationFlows.from(Udp.inboundMulticastAdapter(16343, "239.0.12.1"))
            .channel("inboundChannel")
            .get();
}
and here is the method that handles the messages (service.java):
@ServiceActivator(inputChannel = "inboundChannel")
public void handleMessage(Message<?> message) {
    log.info("{}", message.getPayload());
    byte[] values = (byte[]) message.getPayload();
    //some irrelevant code
}
Where is the code wrong?
Thanks...

Related

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My question is how I can add a producer callback to receive an ack/confirmation that a message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, new Callback() { ... }) (keeping the producer asynchronous). Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        Message<?> response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
    }
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing a ProducerListener a solution, and is it possible? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have now implemented the following, and it seems to work as expected:
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
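As a rough sketch of that approach (the binding name event-out-0 and the channel bean name sendResults below are assumptions based on the event() supplier, not taken from the question):

// application property (illustrative binding name):
// spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sendResultMsg) {
    // The header carries the RecordMetadata returned by the Kafka client.
    RecordMetadata meta = sendResultMsg.getHeaders()
            .get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    log.info("Published to partition {} at offset {}", meta.partition(), meta.offset());
}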

Multiple #RabbitListeners sending reply to same queue when using sendAndReceive() in producer

I am using Spring Boot with Spring AMQP, and I want to use the RPC pattern with the synchronous sendAndReceive method in the producer. My configuration assumes one exchange with two distinct bindings (one for each operation on the same resource). I want to send two messages with two different routing keys and receive the responses on distinct reply-to queues.
The problem is that, as far as I know, sendAndReceive will wait for the reply on a queue with the name ".replies", so both replies will be sent to the products.replies queue (at least that is my understanding).
My publisher config:
@Bean
public DirectExchange productsExchange() {
    return new DirectExchange("products");
}

@Bean
public OrderService orderService() {
    return new MqOrderService();
}

@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
    return rabbitTemplate;
}

@Bean
public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
    return new Jackson2JsonMessageConverter();
}
and the 2 senders:
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.get", message);
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.stock.update", message);
...
consumer config:
@Bean
public Queue getProductQueue() {
    return new Queue("getProductBySku");
}

@Bean
public Queue updateStockQueue() {
    return new Queue("updateProductStock");
}

@Bean
public DirectExchange exchange() {
    return new DirectExchange("products");
}

@Bean
public Binding getProductBinding(DirectExchange exchange) {
    return BindingBuilder.bind(getProductQueue())
            .to(exchange)
            .with("products.get");
}

@Bean
public Binding modifyStockBinding(DirectExchange exchange) {
    return BindingBuilder.bind(updateStockQueue())
            .to(exchange)
            .with("products.stock.update");
}
and @RabbitListeners with the following signatures:
@RabbitListener(queues = "getProductBySku")
public Message getProduct(GetProductResource getProductResource) {...}

@RabbitListener(queues = "updateProductStock")
public Message updateStock(UpdateStockResource updateStockResource) {...}
I noticed that the second sender receives two responses, one of which is of an invalid type (it comes from the first receiver). Is there any way I can make these connections distinct? Or is using a separate exchange for each operation the only reasonable solution?
as far as I know, sendAndReceive will wait for reply on a queue with name ".replies"
Where did you get that idea?
Depending on which version you are using, either a temporary reply queue will be created for each request or RabbitMQ's "direct reply-to" mechanism is used, which again means each request is replied to on a dedicated pseudo queue called amq.rabbitmq.reply-to.
I don't see any way for one producer to get another's reply; even if you use an explicit reply container (which is generally not necessary any more), the template will correlate the replies to the requests.
Try enabling DEBUG logging to see if it provides any hints.
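For illustration, a minimal sketch of a publisher template that relies on that correlation, reusing the converter bean from the question (the reply timeout value is arbitrary):

@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
    // No reply queue is declared here; sendAndReceive() uses RabbitMQ's direct
    // reply-to pseudo queue (amq.rabbitmq.reply-to) and the template correlates
    // each reply with its own request.
    rabbitTemplate.setReplyTimeout(10_000); // optional: wait up to 10 seconds for a reply
    return rabbitTemplate;
}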

Spring Integration + SpringBoot JUnit tries to connect to DB unexpectedly

Please refer to system diagram attached.
ISSUE: When I try to post a message to the input channel, the code tries to connect to the DB and throws an exception that it is unable to connect.
Code inside 5 -> reads from a channel, applies the business logic (empty for now), and sends the response to another channel.
@Bean
public IntegrationFlow sendToBusinessLogictoNotifyExternalSystem() {
    return IntegrationFlows
            .from("CommonChannelName")
            .handle("Business Logic Class name") // Business Logic empty for now
            .channel("QueuetoAnotherSystem")
            .get();
}
I have written the JUnit test for 5 as given below:
@Autowired
PublishSubscribeChannel CommonChannelName;

@Autowired
PollableChannel QueuetoAnotherSystem;

@Test
public void sendToBusinessLogictoNotifyExternalSystem() {
    Message<?> message = MessageBuilder.withPayload("World")
            .setHeader(MessageHeaders.REPLY_CHANNEL, QueuetoAnotherSystem).build();
    this.CommonChannelName.send(message);
    Message<?> receive = QueuetoAnotherSystem.receive(5000);
    assertNotNull(receive);
    assertEquals("World", receive.getPayload());
}
ISSUE: As you can see from the system diagram, my code also has a DB connection on a different flow.
When I try to post a message to the producer channel, the code tries to connect to the DB and throws an exception that it is unable to connect.
I do not want this to happen, because the JUnit test should never depend on the DB and should run anywhere, anytime.
How do I fix this exception?
NOTE: Not sure if it matters, but the application is a Spring Boot application. I have used Spring Integration in the code to read from and write to queues.
Since the common channel is a publish/subscribe channel, the message goes to both flows.
If this is a follow-up to this question/answer, you can prevent the DB flow from being invoked by calling stop() on the sendToDb flow (as long as you set ignoreFailures to true on the pub/sub channel, as I suggested there).
((Lifecycle) sendToDb).stop();
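For reference, a minimal sketch of such a pub/sub channel bean with ignoreFailures enabled (assuming the channel is defined in Java config under the bean name used in the question):

@Bean
public PublishSubscribeChannel CommonChannelName() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    // Keep delivering to the remaining subscribers even if one subscriber fails.
    channel.setIgnoreFailures(true);
    return channel;
}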
JUNIT TEST CASE - UPDATED:
@Autowired
PublishSubscribeChannel CommonChannelName;

@Autowired
PollableChannel QueuetoAnotherSystem;

@Autowired
SendResponsetoDBConfig sendResponsetoDBConfig;

@Test
public void sendToBusinessLogictoNotifyExternalSystem() {
    Lifecycle flowToDB = (Lifecycle) sendResponsetoDBConfig.sendToDb();
    flowToDB.stop();
    Message<?> message = MessageBuilder.withPayload("World")
            .setHeader(MessageHeaders.REPLY_CHANNEL, QueuetoAnotherSystem).build();
    this.CommonChannelName.send(message);
    Message<?> receive = QueuetoAnotherSystem.receive(5000);
    assertNotNull(receive);
    assertEquals("World", receive.getPayload());
}
CODE FOR 4: the flow that handles messages to the DB
public class SendResponsetoDBConfig {

    @Bean
    public IntegrationFlow sendToDb() {
        System.out.println("******************* Inside SendResponsetoDBConfig.sendToDb ***********");
        return IntegrationFlows
                .from("Common Channel Name")
                .handle("DAO Impl to store into DB")
                .get();
    }
}
NOTE: ******************* Inside SendResponsetoDBConfig.sendToDb *********** never gets printed.

How to subscribe to STOMP messages from an application itself

Is there any way to subscribe to a topic and forward the messages to another layer of the application (i.e. have a new listener for a given topic) using Spring?
Consider the following message handler, which sends messages to the topic /topic/chat/{conversationId}:
public class ConversationController {

    @MessageMapping("/chat/{conversationId}")
    @SendTo("/topic/chat/{conversationId}")
    public ConversationMessage createMesage(
            @Payload CreateMessage message,
            @DestinationVariable String conversationId) {
        log.info("handleMessage {}", message);
        return conversationService.create(message);
    }
}
I would like to listen on this topic and do an action on some messages.
public class Bot {

    @SubscribeMapping("/topic/chat/{conversationId}")
    public void subscribeUserMessages(
            @Payload ConversationMessage message,
            @DestinationVariable String conversationId) {
        // doesn't work
    }
}
I've also tried using SimpMessagingTemplate.convertAndSend(..), but that doesn't work either. Maybe I am doing something wrong.
My application doesn't use a full-fledged message broker, just the default in-memory broker.
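For reference, the convertAndSend attempt looks roughly like this (the field wiring and method name are illustrative, not taken from the original code):

@Autowired
private SimpMessagingTemplate messagingTemplate;

public void forwardToTopic(String conversationId, ConversationMessage message) {
    // Publishes the payload to the same destination that @SendTo targets above.
    messagingTemplate.convertAndSend("/topic/chat/" + conversationId, message);
}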

Spring Integration TcpInboundGateway Read exception resulting in SocketException:Connection reset

I am using Spring Boot, following the examples for TcpInboundGateway. Different devices send data to this gateway and things work fine, but every now and then the logs show the following exception:
2015-12-29 18:42:19.455 ERROR 3465 --- [ool-3-thread-47] o.s.i.i.tcp.connection.TcpNetConnection : Read exception 106.221.159.216:38170:8765:934c050d-c4b5-4466-98ab-ee87714c3d00 SocketException:Connection reset
If this exception is resetting the connection, how do I avoid the reset? What is the cause of this error?
My code is as follows:
@SpringBootApplication
@IntegrationComponentScan
public class SpringIntegrationApplication extends SpringBootServletInitializer {

    public static void main(String[] args) throws IOException {
        ConfigurableApplicationContext ctx = SpringApplication.run(SpringIntegrationApplication.class, args);
        System.in.read();
        ctx.close();
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(SpringIntegrationApplication.class);
    }

    private static Class<SpringIntegrationApplication> applicationClass = SpringIntegrationApplication.class;

    @Bean
    TcpNetServerConnectionFactory cf() {
        TcpNetServerConnectionFactory connectionFactory = new TcpNetServerConnectionFactory(8765);
        return connectionFactory;
    }

    @Bean
    TcpInboundGateway tcpGate() {
        TcpInboundGateway gateway = new TcpInboundGateway();
        gateway.setConnectionFactory(cf());
        gateway.setRequestChannel(requestChannel());
        return gateway;
    }

    @Bean
    public MessageChannel requestChannel() {
        return new DirectChannel();
    }

    @MessageEndpoint
    public class Echo {

        @ServiceActivator(inputChannel = "requestChannel")
        public byte[] echo(byte[] in, @SuppressWarnings("deprecation") @Header("ip_address") String ip) {
            byte[] rawbytes = gosDataSerivce.byteArrayToHex(in, ip); // Process bytes and return the result
            return rawbytes;
        }
    }
}
After setting singleUse to true, the exception message has changed only slightly:
2015-12-31 06:09:00.481 ERROR 16450 --- [ool-3-thread-10] o.s.i.i.tcp.connection.TcpNetConnection : Read exception 106.221.146.40:9195:8765:1b4755e8-5b0c-44b9-b4e6-b3aacc25e228 SocketException:Connection reset
Use Case:
I have several clients that establish a GPRS connection to the TcpInboundGateway and send a login packet; our server replies to this login packet. If a client receives the server's reply to the login packet, it sends data packets at regular intervals. The server needs to reply to these packets as well; if it fails to reply, the client's GPRS connection is terminated and the client tries to establish the connection again. Let me know whether this use case can be handled with TcpInboundGateway.
Network Trace Analysis
The general flow of communication between client and server is as follows: the client sends a login packet from an IP, say 106.221.148.165, so a connection named 106.221.148.165:63430:8765:cc105da2-dae4-494b-af9c-d1ba268f34f1 is created on the server, and that client sends its subsequent packets from that IP only. Everything works fine, but after some time the same client sends its login packet from another IP, say 106.221.142.204, and its subsequent packets from the new IP. Then the following error appears in the logs, showing that an exception occurred on the previous connection:
2016-01-05 05:16:14.871 ERROR 6819 --- [pool-3-thread-5] o.s.i.i.tcp.connection.TcpNetConnection : Read exception 106.221.148.165:63430:8765:cc105da2-dae4-494b-af9c-d1ba268f34f1 SocketException:Connection reset
I have set singleUse to true and I am using Spring Integration 4.2.1.
This message is emitted when the client closes the socket - if your client only sends one message then closes the socket, you can set singleUse to true and it will suppress this message (as long as the socket is closed normally - between messages).
With Spring Integration version 4.2 and later, the message is not emitted on a normal close, even if singleUse is false.
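For illustration, setting singleUse on the connection factory from the question would look roughly like this:

@Bean
TcpNetServerConnectionFactory cf() {
    TcpNetServerConnectionFactory connectionFactory = new TcpNetServerConnectionFactory(8765);
    // Close the connection after each request/reply; per the answer above, a normal
    // close by the client between messages is then not reported as a read exception.
    connectionFactory.setSingleUse(true);
    return connectionFactory;
}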
