Spring Integration + Spring Boot JUnit tries to connect to DB unexpectedly

Please refer to the attached system diagram.
[system diagram]
ISSUE: When I try to post a message to the input channel, the code tries to connect to the DB and throws an exception saying it is unable to connect.
Code inside 5 -> reads from a channel, applies the business logic (empty for now) and sends the response to another channel.
@Bean
public IntegrationFlow sendToBusinessLogictoNotifyExternalSystem() {
    return IntegrationFlows
            .from("CommonChannelName")
            .handle("Business Logic Class name") // business logic is empty for now
            .channel("QueuetoAnotherSystem")
            .get();
}
I have written the JUnit test for 5 as given below:
@Autowired
PublishSubscribeChannel CommonChannelName;

@Autowired
PollableChannel QueuetoAnotherSystem; // PollableChannel, so receive(timeout) compiles

@Test
public void sendToBusinessLogictoNotifyExternalSystem() {
    Message<?> message = MessageBuilder.withPayload("World")
            .setHeader(MessageHeaders.REPLY_CHANNEL, QueuetoAnotherSystem)
            .build();
    this.CommonChannelName.send(message);
    Message<?> receive = QueuetoAnotherSystem.receive(5000);
    assertNotNull(receive);
    assertEquals("World", receive.getPayload());
}
ISSUE: As you can see from the system diagram, my code also has a DB connection on a different flow.
When I try to post a message to the producer channel, the code tries to connect to the DB and throws an exception saying it is unable to connect.
I do not want this to happen, because the JUnit test should never touch the DB, and should run anywhere, anytime.
How do I fix this exception?
NOTE: Not sure if it matters: the application is a Spring Boot application. I have used Spring Integration inside the code to read from and write to queues.

Since the common channel is a publish/subscribe channel, the message goes to both flows.
If this is a follow-up to this question/answer, you can prevent the DB flow from being invoked by calling stop() on the sendToDb flow (as long as you set ignoreFailures to true on the pub/sub channel, as I suggested there).
((Lifecycle) sendToDb).stop();
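For reference, the suggested ignoreFailures setting could look roughly like this (a sketch; it assumes the common channel is declared as a bean, with the name matching the question):
@Bean
public PublishSubscribeChannel CommonChannelName() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    // a failure in one subscriber does not prevent the others from receiving the message
    channel.setIgnoreFailures(true);
    return channel;
}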

JUNIT TEST CASE - UPDATED:
@Autowired
PublishSubscribeChannel CommonChannelName;

@Autowired
PollableChannel QueuetoAnotherSystem; // PollableChannel, so receive(timeout) compiles

@Autowired
SendResponsetoDBConfig sendResponsetoDBConfig;

@Test
public void sendToBusinessLogictoNotifyExternalSystem() {
    Lifecycle flowToDB = (Lifecycle) sendResponsetoDBConfig.sendToDb();
    flowToDB.stop();
    Message<?> message = MessageBuilder.withPayload("World")
            .setHeader(MessageHeaders.REPLY_CHANNEL, QueuetoAnotherSystem)
            .build();
    this.CommonChannelName.send(message);
    Message<?> receive = QueuetoAnotherSystem.receive(5000);
    assertNotNull(receive);
    assertEquals("World", receive.getPayload());
}
CODE FOR 4: the flow that handles the message to the DB
public class SendResponsetoDBConfig {

    @Bean
    public IntegrationFlow sendToDb() {
        System.out.println("******************* Inside SendResponsetoDBConfig.sendToDb ***********");
        return IntegrationFlows
                .from("Common Channel Name")
                .handle("DAO Impl to store into DB")
                .get();
    }
}
NOTE: ******************* Inside SendResponsetoDBConfig.sendToDb *********** never gets printed.
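A hedged observation (an assumption, not confirmed in the thread): if that line never prints, the sendToDb() bean method may never be invoked at all, which would be the case if SendResponsetoDBConfig is not registered as a configuration class, e.g. it is missing an annotation such as:
@Configuration
public class SendResponsetoDBConfig { ... }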


Spring Integration: Manual channel handling

What I want: build a configurable library that
uses another library that has internal routing and a subscribe method like clientInstance.subscribe(endpoint, (endpoint, message) -> <handler>), e.g. the Paho MQTT library
later in my code I want to access the messages in a Flux.
My idea:
create MessageChannels like so:
integrationFlowContext
    .registration(IntegrationFlows.from("message-channel:" + endpoint).bridge().get())
    .register();
forward to reactive publishers:
applicationContext.registerBean(
    "publisher:" + endpoint,
    Publisher.class,
    () -> IntegrationFlows.from("message-channel:" + endpoint).toReactivePublisher()
);
keep the message channels in a map (or similar) and implement the above handler: (endpoint, message) -> messageChannels.get(endpoint).send(<converter>(message))
later use (in a @PostConstruct method):
Flux
    .from((Publisher<Message<?>>) applicationContext.getBean("publisher:" + endpoint))
    .map(...)
    .subscribe();
I doubt this is the best way to do what I want; it feels like abusing Spring Integration. Any suggestions are welcome at this point.
In general, however (at least in my tests), this seemed to be working. But when I run my application, I get errors like: "Caused by: org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available".
This is especially bad, since after this exception the publishers claim to not have a subscriber anymore. Thus, in a real application, no messages are processed anymore.
I am not sure what this message means, but I can kind of reproduce it (but don't understand why):
@Test
public void channelTest() {
    integrationFlowContext
            .registration(
                    IntegrationFlows.from("any-channel").bridge().get()
            )
            .register();
    registryUtil.registerBean(
            "any-publisher",
            Publisher.class,
            () -> IntegrationFlows.from("any-channel").toReactivePublisher()
    );
    Flux
            .from((Publisher<Message<?>>) applicationContext.getBean("any-publisher"))
            .subscribe(System.out::println);
    MessageChannel messageChannel = applicationContext.getBean("any-channel", MessageChannel.class);
    try {
        messageChannel.send(MessageBuilder.withPayload("test").build());
    } catch (Throwable t) {
        log.error("Error: ", t);
    }
}
I have, of course, read parts of the Spring Integration documentation, but I don't quite get what happens behind the scenes, so I feel like I'm guessing at possible error causes.
EDIT:
This, however, works:
@TestConfiguration
static class Config {

    GenericApplicationContext applicationContext;

    Config(
            GenericApplicationContext applicationContext,
            IntegrationFlowContext integrationFlowContext
    ) {
        this.applicationContext = applicationContext;
        // optional here, but needed for some reason in my library,
        // since I can't find the channel beans like I will do here,
        // if I didn't register them like so:
        //integrationFlowContext
        //        .registration(
        //                IntegrationFlows.from("any-channel").bridge().get())
        //        .register();
        applicationContext.registerBean(
                "any-publisher",
                Publisher.class,
                () -> IntegrationFlows.from("any-channel").toReactivePublisher()
        );
    }

    @PostConstruct
    void connect() {
        Flux
                .from((Publisher<Message<?>>) applicationContext.getBean("any-publisher"))
                .subscribe(System.out::println);
    }
}

@Autowired
ApplicationContext applicationContext;

@Autowired
IntegrationFlowContext integrationFlowContext;

@Test
@SneakyThrows
public void channel2Test() {
    MessageChannel messageChannel = applicationContext.getBean("any-channel", MessageChannel.class);
    try {
        messageChannel.send(MessageBuilder.withPayload("test").build());
    } catch (Throwable t) {
        log.error("Error: ", t);
    }
}
Thus apparently my issue above is related to messages arriving "too early"... I guess?!
No, your issue is related to the round-robin dispatcher on the DirectChannel for the any-channel bean name.
You define two IntegrationFlow instances starting with that channel, and each declares its own subscriber, but at runtime both of them are subscribed to the same any-channel instance. And that one comes with the round-robin balancer by default. So, one message goes to your Flux.from() subscriber, but the next goes to that bridge(), which doesn't know what to do with your message, so it tries to resolve a replyChannel header.
Therefore your solution with just the single IntegrationFlows.from("any-channel").toReactivePublisher() is correct. Alternatively, you could register a FluxMessageChannel and use it from one side for regular message sending and from the other side as a reactive source for Flux.from().
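A minimal sketch of that FluxMessageChannel alternative (bean and variable names are illustrative, not from the thread):
@Bean
public FluxMessageChannel anyChannel() {
    // implements both MessageChannel (for plain send()) and Publisher<Message<?>> (for Flux.from())
    return new FluxMessageChannel();
}
Used from both sides, for example:
Flux.from(anyChannel).subscribe(System.out::println); // subscribe first
anyChannel.send(MessageBuilder.withPayload("test").build()); // then send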

Is 1 phase commit (ChainedTransactionManager) really necessary in this scenario vs no transaction management?

I have a Spring Boot application with a @JmsListener that receives a message from a queue, stores it in the database and sends it to another queue.
I wanted a minimal transactional guarantee, so 1-phase commit works for me. After a lot of reading I found I could use the ChainedTransactionManager to coordinate the DataSource and JMS resources:
@Configuration
public class TransactionConfiguration {

    @Bean
    public ChainedTransactionManager transactionManager(JpaTransactionManager jpaTm, JmsTransactionManager jmsTm) {
        return new ChainedTransactionManager(jmsTm, jpaTm);
    }

    @Bean
    public JpaTransactionManager jpaTransactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }

    @Bean
    public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
        return new JmsTransactionManager(connectionFactory);
    }
}
Queue listener:
@Transactional(transactionManager = "transactionManager")
@JmsListener(...)
public void process(@Payload String message) {
    // Write to db
    // Send to output queue
}
START MESSAGING TX
START DB TX
READ MESSAGE
WRITE DB
SEND MESSAGE
COMMIT DB TX
COMMIT MESSAGING TX
If the DB commit fails, the message will be reprocessed again.
If the DB commit succeeds but the messaging commit fails, the message will be reprocessed. This is not a problem, since I can guarantee the idempotency of the DB write operation.
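For illustration, a hypothetical sketch of such duplicate detection, keyed on the JMS message ID (the repository and entity are invented for this example, not from the post):
@Transactional
public void store(String messageId, String payload) {
    // a redelivered message carries the same ID and is simply skipped
    if (processedMessageRepository.existsById(messageId)) {
        return;
    }
    processedMessageRepository.save(new ProcessedMessage(messageId, payload));
}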
Now my doubt is, let's suppose I hadn't configured the ChainedTransactionManager and the listener were like this (no @Transactional):
@JmsListener(...)
public void process(@Payload String message) {
    // Write to db
    // Send to output queue
}
Doesn't this behave the same as the other example despite not coordinating the commits? (I've verified that on SQL exceptions the message is redelivered)
RECEIVE MESSAGE
WRITE DB + COMMIT
SEND MESSAGE + COMMIT
If the DB commit failed the message would be reprocessed
If it succeeded and the send message operation failed it would be reprocessed again.
So is the ChainedTransactionManager really necessary in this case?
UPDATE: Debugging the Spring Boot autoconfiguration (JmsAnnotationDrivenConfiguration)...
@Bean
@ConditionalOnMissingBean(name = "jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
        DefaultJmsListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    return factory;
}
... the DefaultJmsListenerContainerFactoryConfigurer is configuring the factory with factory.setSessionTransacted(true); because there is no JtaTransactionManager defined:
if (this.transactionManager != null) {
    factory.setTransactionManager(this.transactionManager);
}
else {
    factory.setSessionTransacted(true);
}
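In other words, the autoconfigured behaviour is roughly equivalent to declaring the factory yourself with transacted sessions (a sketch, not the exact Boot code):
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // each listener invocation runs in a local JMS transaction; reception is
    // rolled back (and the message redelivered) if the listener throws
    factory.setSessionTransacted(true);
    return factory;
}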
With setSessionTransacted(true), according to the Spring doc I would get the message rollback and redelivery behaviour I need on DB (or any) exceptions:
Local resource transactions can simply be activated through the
sessionTransacted flag on the listener container definition. Each
message listener invocation will then operate within an active JMS
transaction, with message reception rolled back in case of listener
execution failure. Sending a response message (via
SessionAwareMessageListener) will be part of the same local
transaction, but any other resource operations (such as database
access) will operate independently. This usually requires duplicate
message detection in the listener implementation, covering the case
where database processing has committed but message processing failed
to commit.
That explains why I'm getting the behaviour I expect without needing to configure the ChainedTransactionManager.
After all this, could you tell me whether it makes sense (i.e. whether it adds some guarantee I'm missing) to use the ChainedTransactionManager in this case?

Send and receive files from FTP in Spring Boot

I'm new to the Spring Framework and, indeed, I'm learning and using Spring Boot. Recently, in the app I'm developing, I made the Quartz Scheduler work, and now I want to make Spring Integration work there: an FTP connection to a server to write and read files from.
What I want is really simple (I've been able to do it in a previous Java application). I've got two Quartz jobs scheduled to fire at different times daily: one of them reads a file from an FTP server and the other writes a file to an FTP server.
I'll detail what I've developed so far.
@SpringBootApplication
@ImportResource("classpath:ws-config.xml")
@EnableIntegration
@EnableScheduling
public class MyApp extends SpringBootServletInitializer {

    @Autowired
    private Configuration configuration;

    //...

    @Bean
    public DefaultFtpsSessionFactory myFtpsSessionFactory() {
        DefaultFtpsSessionFactory sess = new DefaultFtpsSessionFactory();
        Ftp ftp = configuration.getFtp();
        sess.setHost(ftp.getServer());
        sess.setPort(ftp.getPort());
        sess.setUsername(ftp.getUsername());
        sess.setPassword(ftp.getPassword());
        return sess;
    }
}
I've named the following class FtpGateway:
@Component
public class FtpGateway {

    @Autowired
    private DefaultFtpsSessionFactory sess;

    public void sendFile() {
        // todo
    }

    public void readFile() {
        // todo
    }
}
I'm reading this documentation to learn how to do so. Spring Integration's FTP support seems to be event driven, so I don't know how I can execute either sendFile() or readFile() from my Jobs when the trigger is fired at an exact time.
The documentation tells me something about using an Inbound Channel Adapter (to read files from an FTP server?), an Outbound Channel Adapter (to write files to an FTP server?) and an Outbound Gateway (to do what?):
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.
So, it's not clear to me how to proceed.
Please, could anybody give me a hint?
Thank you!
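One way to run transfers on demand from a scheduled job, rather than by polling, is Spring Integration's RemoteFileTemplate; the following is only a sketch built on the myFtpsSessionFactory() bean above, with an invented remote path:
public void sendFile(File localFile) {
    RemoteFileTemplate<FTPFile> template = new RemoteFileTemplate<>(myFtpsSessionFactory());
    template.execute(session -> {
        // stream the local file to the remote directory ("/Entrada" is illustrative)
        try (InputStream in = new FileInputStream(localFile)) {
            session.write(in, "/Entrada/" + localFile.getName());
        }
        return null;
    });
}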
EDIT:
Thank you @M. Deinum. First, I'll try a simple task: read a file from the FTP server; the poller will run every 5 seconds. This is what I've added:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
    FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(myFtpsSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/Entrada");
    fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.csv"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(configuration.getDirFicherosDescargados()));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            Object payload = message.getPayload();
            if (payload instanceof File) {
                File f = (File) payload;
                System.out.println(f.getName());
            } else {
                System.out.println(message.getPayload());
            }
        }
    };
}
Then, when the app is running, I put a new CSV file into the remote folder "Entrada", but the handler() method isn't run after 5 seconds... Am I doing something wrong?
Please add @Scheduled(fixedDelay = 5000) over your poller method.
You should use Spring Batch with a tasklet. It is far easier to configure beans, cron timing and input sources with the existing interfaces provided by Spring.
https://www.baeldung.com/introduction-to-spring-batch
The example above covers both annotation-based and XML-based configuration; you can use either.
Another benefit: you can make use of listeners and parallel steps. This framework can also be used in a Reader - Processor - Writer manner.
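A minimal sketch of the tasklet approach (step and bean names are illustrative; assumes Spring Batch 4.x's StepBuilderFactory):
@Bean
public Step ftpDownloadStep(StepBuilderFactory steps) {
    return steps.get("ftpDownloadStep")
            .tasklet((contribution, chunkContext) -> {
                // open an FTP session and fetch the file here
                return RepeatStatus.FINISHED;
            })
            .build();
}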

How to send and receive from the same topic within Spring Cloud Stream and Kafka

I have a spring-cloud-stream application with a Kafka binding. I would like to send and receive a message from the same topic from within the same executable (jar). My channel definitions are as follows:
public interface ChannelDefinition {

    @Input("forum")
    public SubscribableChannel readMessage();

    @Output("forum")
    public MessageChannel postMessage();
}
I use @StreamListener to receive messages. I get all sorts of unexpected errors. At times, I receive
No dispatcher found for unknown.message.channel for every other message.
If I attach a command-line Kafka subscriber to the above forum topic, it receives every other message.
My application receives every other message, a set disjoint from what the command-line subscriber receives. I have made sure that my application subscribes under a specific group name.
Is there a working example of the above use case?
This is the wrong way to define bindable channels (because the forum name is used for both). We should be more thorough and fail fast on it, but you're binding both the input and the output to the same channel and creating a competing consumer within your application. That also explains your other issue with alternate messages.
What you should do is:
public interface ChannelDefinition {

    @Input
    public MessageChannel readMessage();

    @Output
    public MessageChannel postMessage();
}
And then use application properties to bind your channels to the same queue:
spring.cloud.stream.bindings.readMessage.destination=forum
spring.cloud.stream.bindings.postMessage.destination=forum
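As an aside, on Kafka you would typically also give the input binding a consumer group, so the subscription is durable rather than anonymous (the group name below is illustrative):
spring.cloud.stream.bindings.readMessage.group=forumGroup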
Along with the answer above by Marius Bogoevici, here's an example of how to listen to that Input.
@StreamListener
public void handleNewOrder(@Input("input") SubscribableChannel input) {
    logger.info("Subscribing...");
    input.subscribe((message) -> {
        logger.info("Received new message: {}", message);
    });
}
For me, consuming from "input" didn't work. I needed to use the method name on @StreamListener and to add @EnableBinding, like below:
@Slf4j
@RequiredArgsConstructor
@EnableBinding(value = Channels.class)
public class Consumer {

    @StreamListener("readMessage")
    public void retrieve(Something req) {
        log.info("Received {{}}", req);
    }
}
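For completeness, a guess at what the Channels bindable interface might look like, based on the binding name used above (an assumption, not shown in the thread):
public interface Channels {

    @Input("readMessage")
    SubscribableChannel readMessage();
}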

Spring Integration DSL non-blocking queue configuration

I am trying to set up an integration flow where the input channel is a queue that acts as a non-blocking queue. What I see now is that if I trigger message processing from a Spring MVC controller and the message processing takes time, the controller won't return and will wait for the message handler (service activator) to complete.
Here's an integration configuration
#Bean(name = "createIssue.input")
MessageChannel queueInput() {
return MessageChannels.queue(10)
.get();
}
#Bean(name = PollerMetadata.DEFAULT_POLLER)
PollerMetadata poller() {
return Pollers.fixedRate(100)
.maxMessagesPerPoll(1)
.get();
}
#Bean
IntegrationFlow createIssue() {
return IntegrationFlows.from(queueInput())
.split()
.transform(mytransformer, "convert")
.handle(myservice, "createIssue")
.get();
}
myservice and mytransformer are just regular Spring beans.
I have a Spring MVC REST controller that writes to the createIssue.input queue using a gateway in one of its GET handlers.
If I set a breakpoint in the myservice.createIssue() method, I can see that the controller does not return from its method, so the external service that triggered the controller has to wait for my service to complete.
What I am trying to achieve is an async processing queue where the gateway just writes a message into the queue and returns immediately. How can I achieve that?
You should use a void gateway there, which really acts as a "just send" component:
public interface Cafe {

    @Gateway(requestChannel = "orders")
    void placeOrder(Order order);
}
And the .handle() at the end of the IntegrationFlow should send its result to the nullChannel, or the createIssue() method should simply not return anything (void).
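Applied to the question's flow, a hedged sketch (the gateway interface and payload type are invented for illustration):
@MessagingGateway
public interface IssueGateway {

    @Gateway(requestChannel = "createIssue.input")
    void createIssues(List<IssueRequest> requests); // void return: send-and-forget, returns immediately
}
And the flow ending in the nullChannel, so any result is discarded instead of being waited on:
@Bean
IntegrationFlow createIssue() {
    return IntegrationFlows.from(queueInput())
            .split()
            .transform(mytransformer, "convert")
            .handle(myservice, "createIssue")
            .channel("nullChannel") // discard replies
            .get();
}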
