WebSocket message not broadcast when sent by Spring Integration method

I have a method in a Spring component which receives messages from a Spring Integration channel. When a message is received, it is sent to a WebSocket destination, but the message is never broadcast:
this.messagingTemplate.convertAndSend("/topic/update", dto);
However, when I put the same code inside a web controller, add a @RequestMapping to it, and call that endpoint, it works: the message is broadcast.
What might be causing it not to work when it is called by the Spring Integration task executor?
When it works:

14:01:19.939 [http-nio-8080-exec-4] DEBUG o.s.m.s.b.SimpleBrokerMessageHandler - Processing MESSAGE destination=/topic/update session=null payload={XXX}
14:01:19.939 [http-nio-8080-exec-4] DEBUG o.s.m.s.b.SimpleBrokerMessageHandler - Broadcasting to 1 sessions.

When it doesn't work, the second log line is missing (and the thread is taskExecutor-1 instead of http-nio-...).
Controller code:

@RequestMapping("/testreq")
public void updateDelta() {
    SummaryDTO dto = new SummaryDTO();
    dto.setValue(-5000.0);
    dto.setName("G");
    this.messagingTemplate.convertAndSend("/topic/update", dto);
}

// This method is called by Spring Integration; it is registered via
// serviceActivator = new ServiceActivatingHandler(webcontroller, "update");
public void updateDelta(SummaryDTO dto) {
    this.messagingTemplate.convertAndSend("/topic/update", dto);
}
Message send:

synchronized (this) {
    ...
    this.updatedcontrollerchannel.send(MessageBuilder.withPayload(summarydto).build());
}
Channel creation:

updatedchannel = new DirectChannel();
updatedchannel.setBeanName("updatedcontroller");
serviceActivator = new ServiceActivatingHandler(detailService, "update");
handlerlist.add(serviceActivator);
updatedchannel.subscribe(serviceActivator);
beanFactory.registerSingleton("updatedcontroller", updatedchannel);
UPDATE
I added the Spring Messaging source code to my environment and realized the following: there are two instances of the SimpleBrokerMessageHandler class at runtime. For the working one the subscription registry has one entry, and for the non-working one it has 0 subscriptions. Does this give a clue to the root cause of the problem? There is only one MessageSendingOperations variable defined, and it is on the controller.

I found the cause of the problem. The class carrying the @EnableWebSocketMessageBroker annotation was loaded twice, which caused two instances of SimpleBrokerMessageHandler to be created. @Artem Bilan: thanks for your time.
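For reference, a minimal sketch of a consolidated broker configuration, assuming Spring 4.x-style Java config (the class name and endpoint path are illustrative, not taken from the question); the point is that exactly one class carries @EnableWebSocketMessageBroker and it is registered in exactly one application context, so only a single SimpleBrokerMessageHandler is created:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // one simple broker handling /topic destinations
        registry.enableSimpleBroker("/topic");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // the endpoint path is an assumption for the sketch
        registry.addEndpoint("/ws").withSockJS();
    }
}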

The problem is likely an improperly injected SimpMessageSendingOperations.
That bean is populated by the AbstractMessageBrokerConfiguration.brokerMessagingTemplate() @Bean.
However, I would suggest taking a look at the WebSocketOutboundMessageHandler from Spring Integration: https://docs.spring.io/spring-integration/docs/4.3.12.RELEASE/reference/html/web-sockets.html
UPDATE
This works for me in a test case:

@Bean
@InboundChannelAdapter(channel = "nullChannel", poller = @Poller(fixedDelay = "1000"))
public Supplier<?> webSocketPublisher(SimpMessagingTemplate brokerMessagingTemplate) {
    return () -> {
        brokerMessagingTemplate.convertAndSend("/topic/foo", "foo");
        return "foo";
    };
}
And I have these DEBUG logs:
12:57:27.606 DEBUG [task-scheduler-1][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Processing MESSAGE destination=/topic/foo session=null payload=foo
12:57:27.897 DEBUG [clientInboundChannel-2][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Processing SUBSCRIBE /topic/foo id=subs1 session=941a940bf07c47a1ac786c1adfdb6299
12:57:40.797 DEBUG [task-scheduler-1][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Processing MESSAGE destination=/topic/foo session=null payload=foo
12:57:40.798 DEBUG [task-scheduler-1][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Broadcasting to 1 sessions.
Everything works well from Spring Integration.
That's why I asked for your whole Spring Boot app, so we can play with it on our side.
UPDATE 2
When you develop a web application, be sure to merge all the configuration contexts into a single application context, the WebApplicationContext:
If an application context hierarchy is not required, applications may return all configuration via getRootConfigClasses() and null from getServletConfigClasses().
See more info in the Spring Framework Reference Manual.
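A minimal sketch of that single-context bootstrap, assuming an AbstractAnnotationConfigDispatcherServletInitializer; RootConfig, WebSocketConfig and IntegrationConfig are placeholder names for your own @Configuration classes:

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // all configuration lives in the root context, so @EnableWebSocketMessageBroker
        // is processed exactly once
        return new Class<?>[] { RootConfig.class, WebSocketConfig.class, IntegrationConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return null; // no separate servlet-level context
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}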

Related

Spring Kafka embedded broker - my actual listener is never triggered

I'm using the Kafka embedded broker with Spring Boot and JUnit 5. I have been able to wire it up successfully and can see that the embedded broker is running.
In my setup method I pump a few messages into the topic that my actual code listens on:
@BeforeAll
public void setup() {
    // code to publish some messages to topic X
}
My consumer/listener is never triggered, despite there being no errors in the setup method.
My consumer is set up like this:

public class Consumer {

    @KafkaListener(topics = "X",
            groupId = "...",
            containerFactory = "my-container-factory")
    public void consume(ConsumerRecord<String, byte[]> rec) {
        // logic to handle the record
        logger.info("Print rec : " + rec);
    }
}
Elsewhere I've set up my listener container factory with the matching name:

@Bean(name = "my-container-factory")
public ConcurrentKafkaListenerContainerFactory<String, byte[]> factory() {
    // ...
}
What could be wrong with this? My assertions in the test case fail, and additionally I don't see the log statements that should be printed if my consume method were ever called.
I have a feeling that auto-configuration due to @SpringBootTest and @EmbeddedKafka is setting up some other listener container factory, and so maybe my @KafkaListener annotation is wrong.
I know it's a bit vague, but could you please tell me what/where to look at? If I run it as a @SpringBootApplication, my consumer pulls in messages from the actual queue, so there are no problems with my actual app. It's the test that's not executing as expected.
Please help.
Edit 1:
I have spring.kafka.consumer.auto-offset-reset=earliest set in my yml file.
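For reference, a hedged sketch of the kind of test setup described above; the class name is illustrative and the property wiring is an assumption, the key detail being that the application's consumer has to be pointed at the embedded broker (here via spring.embedded.kafka.brokers):

@SpringBootTest(properties = "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
@EmbeddedKafka(partitions = 1, topics = "X")
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class ConsumerIntegrationTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafka;

    @BeforeAll
    public void setup() {
        // produce a few records to topic X against the embedded broker
        Map<String, Object> props = KafkaTestUtils.producerProps(embeddedKafka);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        try (Producer<String, byte[]> producer =
                new DefaultKafkaProducerFactory<String, byte[]>(props).createProducer()) {
            producer.send(new ProducerRecord<>("X", "key", "payload".getBytes()));
            producer.flush();
        }
    }

    // assertions against the listener's side effects go in the test methods
}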

Spring Cloud Stream - notification when the Kafka binder is initialized

I have a simple Kafka producer in my Spring Cloud Stream application. As my Spring application starts, I have a @PostConstruct method which performs some reconciliation and tries sending events to the Kafka producer.
The issue is that my Kafka producer is not yet ready when the reconciliation starts sending events to it, leading to the exception below:
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'orderbook-service-1.orderbook'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage ..
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:445)
Is there a way to get a notification during my application's startup that the Kafka channel is initialized, so that I only kick off the reconciliation job after that?
Here are my code snippets:
public interface OrderEventChannel {

    String TOPIC_BINDING = "orderbook";

    @Output(TOPIC_BINDING)
    SubscribableChannel outboundEvent();
}

@Configuration
@EnableBinding({OrderEventChannel.class})
@ConditionalOnExpression("${aix.core.stream.outgoing.kafka.enabled:false}")
public class OutgoingKafkaConfiguration {
}

@Service
public class OutgoingOrderKafkaProducer {

    @Autowired
    private OrderEventChannel orderEventChannel;

    public void onOrderEvent(ClientEvent clientEvent) {
        try {
            Message<KafkaEvent> kafkaMsg = mapToKafkaMessage(clientEvent);
            SubscribableChannel subscribableChannel = orderEventChannel.outboundEvent();
            subscribableChannel.send(kafkaMsg);
        } catch (RuntimeException rte) {
            log.error("Error while publishing Kafka event [{}]", clientEvent, rte);
        }
    }

    // ...
}
@PostConstruct is MUCH too early in the context lifecycle to start using beans; they are still being created, configured and wired together.
You can use an ApplicationListener (or @EventListener) to listen for an ApplicationReadyEvent (be sure to compare the event's applicationContext to the main application context, because you may get other such events).
You can also implement SmartLifecycle and put your code in start(); put your bean in a late phase so it is started after everything is wired up.
Output bindings are started in phase Integer.MIN_VALUE + 1000; input bindings are started in phase Integer.MAX_VALUE - 1000.
So if you want to do something before messages start flowing, use a phase in between these (e.g. 0, which is the default), as in the sketch below.
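A rough sketch of the SmartLifecycle option, assuming constructor injection; ReconciliationStarter and runReconciliation() are illustrative names, and OutgoingOrderKafkaProducer is the service from the question:

@Component
public class ReconciliationStarter implements SmartLifecycle {

    private final OutgoingOrderKafkaProducer producer;

    private volatile boolean running;

    public ReconciliationStarter(OutgoingOrderKafkaProducer producer) {
        this.producer = producer;
    }

    @Override
    public void start() {
        // Output bindings (phase Integer.MIN_VALUE + 1000) have already been started here,
        // so publishing through the producer no longer hits "Dispatcher has no subscribers".
        runReconciliation();
        this.running = true;
    }

    private void runReconciliation() {
        // kick off the reconciliation that publishes events via the producer
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public int getPhase() {
        return 0; // between the output and input binding phases mentioned above
    }
}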

Activiti Escalation Listener Configuration

I am using Activiti 5.18.
Behind the scenes: there are a few tasks which are routed through a workflow. Some of these tasks are eligible for escalation. I have written my escalation listener as follows.
@Component
public class EscalationTimerListener implements ExecutionListener {

    @Autowired
    ExceptionWorkflowService exceptionWorkflowService;

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // Process the escalated tasks here
        this.exceptionWorkflowService.escalateWorkflowTask(execution);
    }
}
Now, when I start my Tomcat server, the Activiti framework internally calls the listener even before my entire Spring context is loaded. Hence exceptionWorkflowService is null (since Spring hasn't injected it yet!) and my code breaks.
Note: this scenario only occurs if my server isn't running at the escalation time of the tasks and I start/restart the server after that time. If my server is already running at escalation time, the process runs smoothly, because the service was injected when the server started and the listener is triggered later.
I have tried delaying the Activiti configuration using the @DependsOn annotation, so that it loads after ExceptionWorkflowService is initialized, as below.
@Bean
@DependsOn({ "dataSource", "transactionManager", "exceptionWorkflowService" })
public SpringProcessEngineConfiguration getConfiguration() {
    final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
    config.setAsyncExecutorActivate(true);
    config.setJobExecutorActivate(true);
    config.setDataSource(this.dataSource);
    config.setTransactionManager(this.transactionManager);
    config.setDatabaseSchemaUpdate(this.schemaUpdate);
    config.setHistory(this.history);
    config.setTransactionsExternallyManaged(this.transactionsExternallyManaged);
    config.setDatabaseType(this.dbType);

    // Async job executor
    final DefaultAsyncJobExecutor asyncExecutor = new DefaultAsyncJobExecutor();
    asyncExecutor.setCorePoolSize(2);
    asyncExecutor.setMaxPoolSize(50);
    asyncExecutor.setQueueSize(100);
    config.setAsyncExecutor(asyncExecutor);

    return config;
}
But this gives a circular reference error.
I have also tried registering the bean with the SpringProcessEngineConfiguration, as below,
Map<Object, Object> beanObjectMap = new HashMap<>();
beanObjectMap.put("exceptionWorkflowService", new ExceptionWorkflowServiceImpl());
config.setBeans(beanObjectMap);
and then accessing it in my listener as:
Map<Object, Object> registeredBeans = Context.getProcessEngineConfiguration().getBeans();
ExceptionWorkflowService exceptionWorkflowService = (ExceptionWorkflowService) registeredBeans.get("exceptionWorkflowService");
exceptionWorkflowService.escalateWorkflowTask(execution);
This works, but my repository is autowired into my service, and that hasn't been initialized yet! So it again throws an error in the service layer :)
So is there a way to trigger the escalation listeners only after my entire Spring context is loaded?
Have you tried binding the class to ApplicationListener?
Not sure if it will work, but equally I'm not sure why your listener code is actually being executed on startup.
Try setting the implementation type of the listener to a Java class or a delegate expression, and then have the class implement JavaDelegate instead of ExecutionListener.
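A minimal sketch of the delegate-expression variant, under the assumption that the listener is declared in the BPMN with activiti:delegateExpression="${escalationTimerListener}" (that attribute and the bean name are assumptions, and a JavaDelegate can sit behind a delegate expression the same way). Activiti then resolves the fully wired Spring bean instead of instantiating the class itself, so the injected service is available:

@Component("escalationTimerListener")
public class EscalationTimerListener implements ExecutionListener {

    private final ExceptionWorkflowService exceptionWorkflowService;

    @Autowired
    public EscalationTimerListener(ExceptionWorkflowService exceptionWorkflowService) {
        this.exceptionWorkflowService = exceptionWorkflowService;
    }

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // the service is guaranteed to be injected because Spring built this bean
        this.exceptionWorkflowService.escalateWorkflowTask(execution);
    }
}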

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the documentation:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" some processing functionality to an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided MessageHandler (from my API) to the incoming messages for the specified queue name.
In the configuration of that example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and I thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am going about it in an unusual way: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
In the example below I iterate over the queues defined in my properties and consume from each one.
Client bean:

@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:

// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);

                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());

                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one for each tenant, with the queue URL for each one passed in a properties file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which holds the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation); a sketch follows below.
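A rough sketch (under the assumption that the amazon-sqs-java-messaging-lib is on the classpath) of how the register(queueName, handler) API from the question could be backed by Spring JMS, with one DefaultMessageListenerContainer per registered handler; MyAwsSqsReceiver and MessageHandler are the hypothetical types from the question:

public class MyAwsSqsReceiver {

    // JMS ConnectionFactory backed by SQS (amazon-sqs-java-messaging-lib)
    private final ConnectionFactory connectionFactory = new SQSConnectionFactory(
            new ProviderConfiguration(),
            AmazonSQSClientBuilder.standard().withRegion(Regions.EU_WEST_1).build());

    public void register(String queueName, MessageHandler handler) {
        // one listener container per registered handler/queue pair
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName(queueName);
        container.setMessageListener((MessageListener) message -> {
            try {
                handler.handle(((TextMessage) message).getText());
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        });
        container.afterPropertiesSet();
        container.start();
    }
}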
When we do Spring and SQS we use spring-cloud-starter-aws-messaging.
Then just create a listener class:

@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}

How can I use Spring Integration to only send a message if my transaction finishes successfully?

I am in the process of learning Spring Integration and using it to implement a basic email service in Grails. What I want is to call my email service but only have the email sent if the transaction trying to send it completes successfully. Although this is being done in Grails, it really shouldn't be different from a regular Spring app, except for using the BeanBuilder DSL instead of the XML configuration.
Anyway, here is my configuration for the channel:
beans = {
    xmlns integration: 'http://www.springframework.org/schema/integration'
    integration.channel(id: 'email')
}

Here is my service:

class MailService {

    @ServiceActivator(inputChannel = "email")
    MailMessage sendMail(Closure callable) {
        // sending mail code
    }
}
Now, what I expect to happen is that when I inject this MailService into another service and call sendMail, a message is placed on the email channel, and it only gets published if my transaction completes. What leads me to believe this is the section on user processes here: http://docs.spring.io/spring-integration/reference/html/transactions.html, which states that a user-started process will have all the transactional properties that Spring provides.
I am attempting to test this with an integration test:
void "test transactionality"() {
when:
assert DomainObject.all.size() == 0
DomainObject.withNewTransaction { status ->
DomainObject object = buildAndSaveNewObject()
objectNotificationService.sendEmails(object) //This service injects emailService and calls sendMail
throw new Exception()
}
then:
thrown(Exception) // This is true
DomainObject.all.size() == 0 // This is true
greenMail.receivedMessages.length == 0 // This fails
}
What this does is create and save an object and send emails all within the same transaction. I then throw an exception to cause that transaction to fail. As expected, none of my domain objects are persisted. However, I still receive emails.
I am quite new to Spring Integration and Spring in general, so it's possible I'm misunderstanding how this is all supposed to work, but I would expect the sendMail message to never be placed on the email channel.
It turns out that I don't think Spring Integration is the best way to achieve "only perform on commit" functionality (although if you do want to use it, Gary Russell's answer is the way to go). You should instead use the TransactionSynchronizationManager provided as part of the Spring transaction management framework.
As an example, I created a TransactionService in Grails:
class TransactionService {

    def executeAfterCommit(Closure c) {
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
            @Override
            void afterCommit() {
                c.call()
            }
        })
    }
}
You can then inject this anywhere you need it and use it like so:
def transactionService

transactionService.executeAfterCommit {
    sendConfirmationEmail()
}
I don't know how this would be done in Grails, but in Java you could use a transaction synchronization factory, whereby you can take different actions depending on success/failure:
<int:transaction-synchronization-factory id="syncFactory">
    <int:after-commit expression="payload.renameTo('/success/' + payload.name)" channel="committedChannel" />
    <int:after-rollback expression="payload.renameTo('/failed/' + payload.name)" channel="rolledBackChannel" />
</int:transaction-synchronization-factory>
The result of the expression evaluation is sent to the channel, where you can have your outbound mail adapter.
