Spring Integration errors after migration to Spring 5

I have been using Spring Integration (different types of channels) with Spring 4 for some time. After I tried to upgrade my environment to Spring 5, they stopped working with errors such as the following:
org.springframework.messaging.MessageHandlingException: error occurred during processing message in 'MethodInvokingMessageProcessor' [org.springframework.integration.handler.MethodInvokingMessageProcessor#c192373]; nested exception is java.lang.IllegalArgumentException: BeanFactory must not be null, failedMessage=GenericMessage
Sample channel creation/registration is as follows:
deltaupdatedchannel = new DirectChannel();
deltaupdatedchannel.setBeanName("deltaupdatedcontroller");
serviceActivator = new ServiceActivatingHandler(deltaSummaryController, "updateDelta2");
handlerlist.add(serviceActivator);
beanFactory.registerSingleton("deltaupdatedcontroller", deltaupdatedchannel);
beanFactory.initializeBean(deltaupdatedchannel, "deltaupdatedcontroller");
deltaupdatedchannel.subscribe(serviceActivator);
Channels are used to make the following call:
this.deltaupdatedcontrollerchannel.send(MessageBuilder.withPayload(summarydto).build());
And the channel calls the following code:
public void updateDelta2(DeltaSummaryDTO dto) {
    this.messagingTemplate.convertAndSend("/topic/updatedelta", dto);
}
Here messagingTemplate is org.springframework.messaging.core.MessageSendingOperations.
How can I make them work again?

Please share with us the reason for doing that registerSingleton(). Why is plain bean registration not enough for you?
To fix that problem you also need to call initializeBean(Object existingBean, String beanName) after that registerSingleton().
However, there is no guarantee that this will be the end of errors. I would suggest revising the design in favor of normal bean definitions instead of that manual approach...
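For reference, a minimal sketch of such a "normal" definition (assuming Java config with @EnableIntegration; the channel name, the deltaSummaryController bean, and the updateDelta2 method are taken from the question):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.handler.ServiceActivatingHandler;
import org.springframework.messaging.MessageChannel;

@Configuration
@EnableIntegration
public class DeltaIntegrationConfig {

    // Declared as a regular bean, so the BeanFactory is wired in by the container
    @Bean
    public MessageChannel deltaupdatedcontroller() {
        return new DirectChannel();
    }

    // @ServiceActivator subscribes this handler to the channel above
    @Bean
    @ServiceActivator(inputChannel = "deltaupdatedcontroller")
    public ServiceActivatingHandler deltaHandler(DeltaSummaryController deltaSummaryController) {
        return new ServiceActivatingHandler(deltaSummaryController, "updateDelta2");
    }
}
```

With this in place, the channel can simply be @Autowired where the send() happens, and the manual registerSingleton()/initializeBean()/subscribe() calls go away.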

Related

Catching exception in KafkaListener integration test

So I need to create an integration test for my @KafkaListener method, where the test expects a ListenerExecutionFailedException to be thrown because the message failed during consumption due to another service being inactive.
Below is the test code, where I use an EmbeddedKafkaBroker for the producer and consumer:
@Test(expected = ListenerExecutionFailedException.class)
public void shouldThrowException() {
    RecordHeaders recordHeaders = new RecordHeaders();
    recordHeaders.add(new RecordHeader("messageType", "bootstrap".getBytes()));
    recordHeaders.add(new RecordHeader("userId", "123".getBytes()));
    recordHeaders.add(new RecordHeader("applicationId", "1234".getBytes()));
    recordHeaders.add(new RecordHeader("correlationId", UUID.randomUUID().toString().getBytes()));
    ProducerRecord<String, String> producerRecord = new ProducerRecord<>(
            "TEST_TOPIC",
            1,
            null,
            "message",
            "",
            recordHeaders);
    producer.send(producerRecord);
    consumer.subscribe(Collections.singleton("TEST_TOPIC"));
    consumer.poll(Duration.ofSeconds(2));
}
What I'm wondering is that the exception is considered not thrown and the test fails, even though I know the message is indeed received by the listener and the exception was thrown, since I saw both in the log.
Even when I changed the expected exception to Throwable, no exception was detected.
What should I do to make the exception detectable by JUnit?
Also, another interesting thing: I tried to mock the service class which is called in the listener and return some dummy value, but according to Mockito.verify the service is never called.
You seem to have some misunderstanding.
producer.send
consumer.poll
You are calling the kafka-clients directly and are not using Spring at all in this test.
ListenerExecutionFailedException is the exception in which Spring's listener container wraps user exceptions thrown by message listeners.
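The practical consequence is that the listener exception is thrown on the container's consumer thread, so `@Test(expected = ...)` on the JUnit thread can never observe it. A common pattern, sketched here in plain Java without any Kafka dependency, is to hand the failure back to the test thread through a latch and an AtomicReference (with Spring Kafka you would do the equivalent inside an error handler):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class ListenerErrorCapture {

    // Runs the "listener" on a separate thread (like a listener container's
    // consumer thread) and records any exception instead of losing it there.
    public static Throwable captureListenerFailure(Runnable listener) throws InterruptedException {
        AtomicReference<Throwable> failure = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        Thread consumerThread = new Thread(() -> {
            try {
                listener.run();
            } catch (Throwable t) {
                failure.set(t);   // record the failure for the test thread
            } finally {
                latch.countDown();
            }
        });
        consumerThread.start();
        latch.await(2, TimeUnit.SECONDS);
        return failure.get();
    }

    public static void main(String[] args) throws InterruptedException {
        Throwable t = captureListenerFailure(() -> {
            throw new IllegalStateException("service inactive");
        });
        System.out.println(t == null ? "no failure" : t.getMessage());
    }
}
```

The test thread then asserts on the captured Throwable after the latch releases, rather than expecting the exception to propagate into the test method.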

WebSocket message not broadcast when sent by spring integration method

I have method in a Spring component which receives messages from a Spring Integration channel. When a message is received, it is sent to a WebSocket endpoint. This doesn't work. The message is not broadcast.
this.messagingTemplate.convertAndSend("/topic/update", dto);
However when I put the same code inside a Web Controller and put a RequestMapping on it, and call that endpoint, it works. The message is broadcast.
What might be causing it not to work when it is called by the Spring Integration executor?
When it works:
14:01:19.939 [http-nio-8080-exec-4] DEBUG o.s.m.s.b.SimpleBrokerMessageHandler - Processing MESSAGE destination=/topic/update session=null payload={XXX}
14:01:19.939 [http-nio-8080-exec-4] DEBUG o.s.m.s.b.SimpleBrokerMessageHandler - Broadcasting to 1 sessions.
When it doesn't work, the second message is not there (and the thread is taskExecutor-1 instead of http-nio-8080-exec-4).
Controller code:
@RequestMapping("/testreq")
public void updateDelta() {
    SummaryDTO dto = new SummaryDTO();
    dto.setValue(-5000.0);
    dto.setName("G");
    this.messagingTemplate.convertAndSend("/topic/update", dto);
}

// this method is called by Spring Integration
// created by serviceActivator = new ServiceActivatingHandler(webcontroller, "update");
public void updateDelta(SummaryDTO dto) {
    this.messagingTemplate.convertAndSend("/topic/update", dto);
}
message send:
synchronized (this) {
    ...
    this.updatedcontrollerchannel.send(MessageBuilder.withPayload(summarydto).build());
}
channel creation:
updatedchannel = new DirectChannel();
updatedchannel.setBeanName("updatedcontroller");
serviceActivator = new ServiceActivatingHandler(detailService, "update");
handlerlist.add(serviceActivator);
updatedchannel.subscribe(serviceActivator);
beanFactory.registerSingleton("updatedcontroller", updatedchannel);
UPDATE
I added the Spring messaging source code to my environment and realized the following: there are 2 instances of the SimpleBrokerMessageHandler class at runtime. For the working copy the subscription registry has one entry, and for the non-working one it has 0 subscriptions. Does this give a clue to the root cause of the problem? There is only one MessageSendingOperations variable defined, and it is on the controller.
I found the cause of the problem. The class which has the @EnableWebSocketMessageBroker annotation was loaded twice, and it caused two instances of SimpleBrokerMessageHandler to be created. @Artem Bilan: thanks for your time.
This should be a problem with an improperly injected SimpMessageSendingOperations.
This one is populated by the AbstractMessageBrokerConfiguration.brokerMessagingTemplate() @Bean.
However, I would suggest taking a look at the WebSocketOutboundMessageHandler from Spring Integration: https://docs.spring.io/spring-integration/docs/4.3.12.RELEASE/reference/html/web-sockets.html
UPDATE
This works for me in the test-case:
@Bean
@InboundChannelAdapter(channel = "nullChannel", poller = @Poller(fixedDelay = "1000"))
public Supplier<?> webSocketPublisher(SimpMessagingTemplate brokerMessagingTemplate) {
    return () -> {
        brokerMessagingTemplate.convertAndSend("/topic/foo", "foo");
        return "foo";
    };
}
And I have this DEBUG logs:
12:57:27.606 DEBUG [task-scheduler-1][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Processing MESSAGE destination=/topic/foo session=null payload=foo
12:57:27.897 DEBUG [clientInboundChannel-2][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Processing SUBSCRIBE /topic/foo id=subs1 session=941a940bf07c47a1ac786c1adfdb6299
12:57:40.797 DEBUG [task-scheduler-1][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Processing MESSAGE destination=/topic/foo session=null payload=foo
12:57:40.798 DEBUG [task-scheduler-1][org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler] Broadcasting to 1 sessions.
Everything works well from Spring Integration.
That's why I asked for your whole Spring Boot app, so we could play with it on our side.
UPDATE 2
When you develop a web application, be sure to merge all the config contexts into a single application context, the WebApplicationContext:
If an application context hierarchy is not required, applications may return all configuration via getRootConfigClasses() and null from getServletConfigClasses().
See more info in the Spring Framework Reference Manual.
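As an illustrative sketch of that advice (the config class names here are hypothetical), a servlet initializer that keeps everything in the root context, so the class carrying @EnableWebSocketMessageBroker is loaded exactly once:

```java
import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // all configuration, including the @EnableWebSocketMessageBroker class,
        // goes into the single root context
        return new Class<?>[] { WebSocketConfig.class, IntegrationConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return null; // no separate servlet context, so no duplicate broker beans
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}
```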

Spring Cloud Contract verification at deployment

I have extensively gone through Spring Cloud Contract. It is very effective for TDD. I want to verify the contract during actual deployment. I have n micro-services (Spring stream: Source/Processor/Sink) and want to allow the user to link them when they define a (Kafka) stream in the Data Flow server dashboard. I am passing certain objects in the stream which act as input/output for the micro-services. I want to check compatibility between micro-services and warn the user accordingly. Spring Cloud Contract facilitates verifying the contract at development time, not at run time.
Kindly help.
I am new to Spring Cloud Contract, but I have found a way to start the StubRunner; however, when it triggers the contract I get the following:
2017-04-26 16:14:10,373 INFO main c.s.s.ContractTester:36 - ContractTester : consumerMessageListener >>>>>>>>>>>>>>>>>>>>>>>>>>>>org.springframework.cloud.contract.stubrunner.BatchStubRunner#5e13f156
2017-04-26 16:14:10,503 ERROR main o.s.c.c.v.m.s.StreamStubMessages:63 - Exception occurred while trying to send a message [GenericMessage [payload={"name":"First","description":"Valid","value":1}, headers={id=49c6cc5c-93c8-2498-934a-175f60f42c03, timestamp=1493203450482}]] to a channel with name [verifications]
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.input'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload={"name":"First","description":"Valid","value":1}, headers={id=49c6cc5c-93c8-2498-934a-175f60f42c03, timestamp=1493203450482}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:93)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.cloud.contract.verifier.messaging.stream.StreamStubMessages.send(StreamStubMessages.java:60)
at org.springframework.cloud.contract.verifier.messaging.stream.StreamStubMessages.send(StreamStubMessages.java:
The same works fine with mvn install, but not when run from the main class.
...
@RunWith(SpringRunner.class)
@AutoConfigureMessageVerifier
@EnableAutoConfiguration
@EnableIntegration
@Component
@DirtiesContext
public class ContractTester {

    private static Logger logger = LoggerFactory.getLogger(ContractTester.class);

    @Autowired StubTrigger stubTrigger;
    @Autowired ConsumerMessageListener consumerMessageListener;

    @Bean
    public boolean validSimpleObject() throws Exception {
        logger.info("ContractTester : consumerMessageListener >>>>>>>>>>>>>>>>>>>>>>>>>>>>" + stubTrigger);
        stubTrigger.trigger("accepted_message");
        if (consumerMessageListener == null) {
            logger.info("ContractTester : consumerMessageListener >>>>>>>>>>>>>>>>>>>>>>>>>>>>");
        }
        logger.info("ContractTester >>>>>>>>>>>>>>>>>>>>>>>>>>>>" + consumerMessageListener.toString());
        SimpleObject simpleObject = (SimpleObject) consumerMessageListener.getSimpleObject();
        logger.info("simpleObject >>>>>>>>>>>>>>>>>>>>>>>>>>>>" + simpleObject.toString());
        assertEquals(1, simpleObject.getValue());
        //then(listener.eligibleCounter.get()).isGreaterThan(initialCounter);
        return true;
    }
}

Correct use of Hazelcast Transactional Map in a Spring Boot app

I am working on a proof of concept of Hazelcast Transactional Map. To accomplish this I am writing a Spring Boot app and using Atomikos as my JTA/XA implementation.
This app must update a transactional map and also update a database table by inserting a new row, all within the same transaction.
I am using JPA / Spring Data / Hibernate to work with the database.
So the app has a component (a Java class annotated with @Component) that has a method called agregar() ("add" in Spanish). This method is annotated with @Transactional (org.springframework.transaction.annotation.Transactional).
The method must perform two tasks as a unit: first it must update a TransactionalMap retrieved from the Hazelcast instance and, second, it must update a database table using a repository extending JpaRepository (org.springframework.data.jpa.repository.JpaRepository).
This is the code I have written:
@Transactional
public void agregar() throws NotSupportedException, SystemException, IllegalStateException,
        RollbackException, SecurityException, HeuristicMixedException,
        HeuristicRollbackException, SQLException {
    logger.info("AGRENADO AL MAPA ...");
    HazelcastXAResource xaResource = hazelcastInstance.getXAResource();
    UserTransactionManager tm = new UserTransactionManager();
    tm.begin();
    Transaction transaction = tm.getTransaction();
    transaction.enlistResource(xaResource);
    TransactionContext context = xaResource.getTransactionContext();
    TransactionalMap<TaskKey, TaskQueue> mapTareasDiferidas = context.getMap("TAREAS-DIFERIDAS");
    TaskKey taskKey = new TaskKey(1L);
    TaskQueue taskQueue = mapTareasDiferidas.get(taskKey);
    Integer numero = 4;
    Task<Integer> taskFactorial = new TaskImplFactorial(numero);
    taskQueue = new TaskQueue();
    taskQueue.getQueue().add(taskFactorial);
    mapTareasDiferidas.put(taskKey, taskQueue);
    transaction.delistResource(xaResource, XAResource.TMSUCCESS);
    tm.commit();
    logger.info("AGRENADO A LA TABLA ...");
    PaisEntity paisEntity = new PaisEntity(100, "ARGENTINA", 10);
    paisRepository.save(paisEntity);
}
This code is working: if one of the tasks throw an exception then both are rolled back.
My questions are:
Is this code actually correct?
Why is @Transactional not taking care of committing the changes in the map, so that I must explicitly do it on my own?
The complete code of the project is available on GitHub: https://github.com/diegocairone/hazelcast-maps-poc
Thanks in advance
Finally I realized that I must inject the UserTransactionManager object and take the transaction from it.
It is also necessary to use a JTA/XA implementation. I have chosen Atomikos, and XA transactions must be enabled in MS SQL Server.
The working example is available on GitHub at https://github.com/diegocairone/hazelcast-maps-poc on the branch atomikos-datasource-mssql
Starting with Hazelcast 3.7, you can get rid of the boilerplate code to begin, commit or rollback transactions by using HazelcastTransactionManager, which is a PlatformTransactionManager implementation to be used with the Spring Transaction API.
You can find an example here.
Also, Hazelcast can participate in XA transactions with Atomikos. Here's a doc.
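As a rough sketch of that approach (assuming Hazelcast 3.7+ with the hazelcast-spring module; the map, entity, and repository names are taken from the question), a service method could then look like this. Note that with HazelcastTransactionManager alone, @Transactional drives only the Hazelcast transaction; coordinating it with the JPA write still requires a JTA manager such as Atomikos:

```java
import com.hazelcast.core.TransactionalMap;
import com.hazelcast.spring.transaction.ManagedTransactionalTaskContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TareasService {

    private final ManagedTransactionalTaskContext transactionalContext;
    private final PaisRepository paisRepository; // repository from the question

    public TareasService(ManagedTransactionalTaskContext transactionalContext,
                         PaisRepository paisRepository) {
        this.transactionalContext = transactionalContext;
        this.paisRepository = paisRepository;
    }

    // No manual begin()/commit(): the HazelcastTransactionManager opens and
    // commits the Hazelcast transaction around this method.
    @Transactional
    public void agregar(TaskKey key, TaskQueue queue) {
        TransactionalMap<TaskKey, TaskQueue> map =
                transactionalContext.getMap("TAREAS-DIFERIDAS");
        map.put(key, queue);
        paisRepository.save(new PaisEntity(100, "ARGENTINA", 10));
    }
}
```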
Thank you
I have updated to Hazelcast 3.7.5 and added the following code to the HazelcastConfig class.
@Configuration
public class HazelcastConfig {

    ...

    @Bean
    public HazelcastInstance getHazelcastInstance() {
        ....
    }

    @Bean
    public HazelcastTransactionManager getTransactionManager() {
        HazelcastTransactionManager transactionManager = new HazelcastTransactionManager(getHazelcastInstance());
        return transactionManager;
    }

    @Bean
    public ManagedTransactionalTaskContext getTransactionalContext() {
        ManagedTransactionalTaskContext transactionalContext = new ManagedTransactionalTaskContext(getTransactionManager());
        return transactionalContext;
    }
}
When I run the app I get this exception:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'transactionManager' available: No matching PlatformTransactionManager bean found for qualifier 'transactionManager' - neither qualifier match nor bean name match!
The code is available at Github on a new branch: atomikos-datasource-mssql-hz37
Thanks in advance

GWT RequestFactory STRANGE behavior : No error reporting

I have a problem, and I can't figure out why it happens...
I am a GWT beginner, working on a personal project.
Environment:
maven project with two modules
one module is the 'model', and has Hibernate, HSQLDB and Spring dependencies. HSQLDB runs embedded, in memory, configured from spring applicationContext.xml
the other module is the 'web' and has all GWT dependencies
The application is built using some Spring Roo generated code as basis, later modified and extended.
The issue is that, when editing some entity fields and pressing save, nothing happens. There is no problem when creating a new entity instance, only on edit: changing some field and pressing 'save' simply does not persist the new values.
So I started to thoroughly debug the client code and enabled detailed Hibernate and Spring logging, but still ... nothing.
Then I made a surprising (for me) discovery.
Inspecting the GWT response payload, I have seen this:
{"S":[false],"O": [{"T":"663_uruC_g7F5h5IXBGvTP3BBKM=","V":"MS4w","S":"IjMi","O":"UPDATE"}],"I":[{"F":true,"M":"Server Error: org.hibernate.PersistentObjectException: detached entity passed to persist: com.myvdm.server.domain.Document; nested exception is javax.persistence.PersistenceException: org.hibernate.PersistentObjectException: detached entity passed to persist: com.myvdm.server.domain.Document"}]}
Aha, detached entity passed to persist !!!
Please note that the gwt client code uses this snippet to call the service:
requestContext.persist().using(proxy);
Arguably this could trigger the exception, and calling merge() could solve the problem, however, read on, to question 3...
Three question arise now:
Why isn't this somehow sent to the client as an error/exception?
Why isn't this logged by Hibernate?
How come the Spring Roo generated code (as I said, used as basis) works without manifesting this problem?
Thanks a lot. Awaiting some opinions/suggestions.
EDITED AFTER T. BROYER'S RESPONSE:
Hi Thomas, thanks for the response.
I have a custom class that implements RequestTransport and its send() method. This is how I collected the response payload. The implementation follows:
public void send(String payload, final TransportReceiver receiver) {
    TransportReceiver myReceiver = new TransportReceiver() {
        @Override
        public void onTransportSuccess(String payload) {
            try {
                receiver.onTransportSuccess(payload);
            } finally {
                eventBus.fireEvent(new RequestEvent(RequestEvent.State.RECEIVED));
            }
        }

        @Override
        public void onTransportFailure(ServerFailure failure) {
            try {
                receiver.onTransportFailure(failure);
            } finally {
                eventBus.fireEvent(new RequestEvent(RequestEvent.State.RECEIVED));
            }
        }
    };
    try {
        wrapped.send(payload, myReceiver);
    } finally {
        eventBus.fireEvent(new RequestEvent(RequestEvent.State.SENT));
    }
}
Here's the code that is executed when 'save' button is clicked in edit mode:
RequestContext requestContext = editorDriver.flush();
if (editorDriver.hasErrors()) {
    return;
}
requestContext.fire(new Receiver<Void>() {
    @Override
    public void onFailure(ServerFailure error) {
        if (editorDriver != null) {
            setWaiting(false);
            super.onFailure(error);
        }
    }

    @Override
    public void onSuccess(Void ignore) {
        if (editorDriver != null) {
            editorDriver = null;
            exit(true);
        }
    }

    @Override
    public void onConstraintViolation(Set<ConstraintViolation<?>> errors) {
        if (editorDriver != null) {
            setWaiting(false);
            editorDriver.setConstraintViolations(errors);
        }
    }
});
Based on what you said, onSuccess() should be called, and it is called.
So how do I isolate exactly the code that creates the problem? I have this method that creates a fresh request context in order to persist the object:
@Override
protected RequestContext createSaveRequestContextFor(DocumentProxy proxy) {
    DocumentRequestContext request = requests.documentRequestContext();
    request.persist().using(proxy);
    return request;
}
and this is how it is called:
editorDriver.edit(getProxy(), createSaveRequestContextFor(getProxy()));
As for the Spring problem, you are saying that, between the two subsequent requests, find() and persist(), the JPA EntityManager should not be closed. I am still investigating this, but after I press the edit button I see the message 'org.springframework.orm.jpa.EntityManagerFactoryUtils - Closing JPA EntityManager', and that is not right; maybe the @Transactional annotation is not applied...
Why isn't this somehow sent to the client as an error/exception?
It is. The "S": [false] indicates the first (and only) method invocation (remember, a RequestContext is a batch!) has failed. The onFailure method of the invocation's Receiver will be called.
The "F": true of the ServerFailure then says it's a fatal error, so the default implementation of Receiver#onFailure would throw a RuntimeException. However, as you do not use a Receiver at all, nothing happens and the error is silently ignored.
Note that the batch request in itself has succeeded, so the global Receiver (the one you'd pass to RequestContext#fire) would have its onSuccess method called.
Also note that Request#fire(Receiver) is a shorthand for Request#to(Receiver) followed by RequestContext#fire() (with no argument).
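To make that concrete, here is a sketch (reusing the proxy and request-context names from the question) of attaching a Receiver to the persist() invocation itself, so the "detached entity passed to persist" failure is no longer silently ignored:

```java
DocumentRequestContext request = requests.documentRequestContext();
request.persist().using(proxy).to(new Receiver<Void>() {
    @Override
    public void onSuccess(Void response) {
        // the persist() invocation itself succeeded
    }

    @Override
    public void onFailure(ServerFailure error) {
        // the server-side exception now lands here instead of
        // being swallowed by the default Receiver behavior
        Window.alert(error.getMessage());
    }
});
request.fire();
```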
Why isn't this logged by Hibernate?
This I don't know, sorry.
How come the Spring Roo generated code (as I said, used as basis) works without manifesting this problem?
OK, let's explore the underlying reason of the exception: the entity is loaded by your Locator (or the entity class's findXxx static method) and then the persist method is called on the instance. If you do not use the same JPA EntityManager / Hibernate session in the find and persist methods, then you'll have the issue.
Request Factory expects you to use the open session in view pattern to overcome this. I unfortunately do not know what kind of code Spring Roo generates.
Regarding the open session in view pattern Thomas mentioned, just add this filter definition to your web.xml to turn on the pattern in your Spring application:
<filter>
    <filter-name>Spring OpenEntityManagerInViewFilter</filter-name>
    <filter-class>org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>Spring OpenEntityManagerInViewFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
