Dynamic to() in Apache Camel Route - spring-boot

I am writing a demo program using Apache Camel. Our Camel route is called from a Spring Boot scheduler and transfers a file from the source directory C:\CamelDemo\inputFolder to the destination directory C:\CamelDemo\outputFolder.
The Spring Boot scheduler is as follows:
@Component
public class Scheduler {

    @Autowired
    private ProducerTemplate producerTemplate;

    @Scheduled(cron = "#{@getCronValue}")
    public void scheduleJob() {
        System.out.println("Scheduler executing");
        String inputEndpoint = "file:C:\\CamelDemo\\inputFolder?noop=true&sendEmptyMessageWhenIdle=true";
        String outputEndpoint = "file:C:\\CamelDemo\\outputFolder?autoCreate=false";
        Map<String, Object> headerMap = new HashMap<String, Object>();
        headerMap.put("inputEndpoint", inputEndpoint);
        headerMap.put("outputEndpoint", outputEndpoint);
        producerTemplate.sendBodyAndHeaders("direct:transferFile", null, headerMap);
        System.out.println("Scheduler complete");
    }
}
The Apache Camel route is as follows:
@Component
public class FileTransferRoute extends RouteBuilder {

    @Override
    public void configure() {
        errorHandler(defaultErrorHandler()
            .maximumRedeliveries(3)
            .redeliverDelay(1000)
            .retryAttemptedLogLevel(LoggingLevel.WARN));

        from("direct:transferFile")
            .log("Route reached")
            .log("Input Endpoint: ${in.headers.inputEndpoint}")
            .log("Output Endpoint: ${in.headers.outputEndpoint}")
            .pollEnrich().simple("${in.headers.inputEndpoint}")
            .recipientList(header("outputEndpoint"));
            //.to("file:C:\\CamelDemo\\outputFolder?autoCreate=false")
    }
}
When I comment out the recipientList() line and uncomment the to(), i.e. give a static endpoint in to(), the flow works. But when I comment out the to() and uncomment the recipientList(), it does not work. Please help: how do I route the message to the dynamic endpoint (outputEndpoint)?

You are using pollEnrich without specifying an AggregationStrategy. In this case, Camel creates a new OUT message from the retrieved resource without combining it with the original IN message; this means you lose the headers previously set on the IN message.
See the documentation: https://camel.apache.org/manual/latest/enrich-eip.html#_a_little_enrich_example_using_java
strategyRef: Refers to an AggregationStrategy to be used to merge the reply from the external service into a single outgoing message. By default Camel will use the reply from the external service as the outgoing message.
A simple solution is to define a simple AggregationStrategy on your pollEnrich which just copies the headers from the IN message onto the new OUT message (note that the original IN message body is then replaced by the polled content, but in your case that is not a problem, since the body was null anyway):
from("direct:transferFile")
.log("Route reached")
.log("Input Endpoint: ${in.headers.inputEndpoint}")
.log("Output Endpoint: ${in.headers.outputEndpoint}")
.pollEnrich().simple("${in.headers.inputEndpoint}")
.aggregationStrategy((oldExchange, newExchange) -> {
// Copy all headers from IN message to the new OUT Message
newExchange.getIn().getHeaders().putAll(oldExchange.getIn().getHeaders());
return newExchange;
})
.log("Output Endpoint (after pollEnrich): ${in.headers.outputEndpoint}")
.recipientList(header("outputEndpoint"));
//.to("file:C:\\var\\CamelDemo\\outputFolder?autoCreate=false");

Related

Spring 6: Spring Cloud Stream Kafka - Replacement for @EnableBinding

I was reading "Spring Microservices in Action (2021)" because I wanted to brush up on microservices.
Now, with Spring Boot 3, a few things have changed. The book presents an easy example of how to push messages to a topic and how to consume messages from a topic.
The problem is: the examples presented just do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source) {
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3, config-wise and code-wise?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
What this is supposed to do: whenever a message arrives on orgChangeTopic, the method loggerSink should fire.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used if you are using Boot 3), a few things were removed, such as EnableBinding, StreamListener, etc. We deprecated them in 3.x and finally removed them in the 4.0.0 version. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as a java.util.function.Function, Consumer, or Supplier for a processor, sink, or source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
Your example #1 can be re-written like this:
@Component
public class SimpleSourceBean {

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Note that you no longer need the zkNodes property, nor the content-type property, since the framework auto-converts for you.
StreamBridge's send method takes a binding name and the payload. The binding name can be anything, but for consistency reasons we used output-out-0 here. Please read the reference docs for more context on the reasoning behind this binding name.
If you have a simple source that runs on a timer, you can express it as a Supplier, as below (instead of using a StreamBridge):
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
Example #2
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return model -> {
        log.info("Received an {} event for organization id {}",
                model.getAction(), model.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than names ending in in-0, out-0, etc., there are ways to make that happen; details are in the reference docs. A sketch is shown below.
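For example, a sketch assuming the loggerSink consumer above: the generated binding name can be aliased through the spring.cloud.stream.function.bindings property, and the alias is then used in the rest of the configuration:

    spring.cloud.stream.function.bindings.loggerSink-in-0=input
    spring.cloud.stream.bindings.input.destination=orgChangeTopic
    spring.cloud.stream.bindings.input.group=licensingGroup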

Validating Spring Kafka payloads

I am trying to set up a service that has both a REST (POST) endpoint and a Kafka endpoint, both of which should take a JSON representation of the request object (let's call it Foo). I would want to make sure that the Foo object is valid (via JSR-303 or whatever). So Foo might look like:
public class Foo {

    @Max(10)
    private int bar;

    // Getter and setter boilerplate
}
Setting up the REST endpoint is easy:
@PostMapping(value = "/", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> restEndpoint(@Valid @RequestBody Foo foo) {
    // Do stuff here
}
and if I POST { "bar": 9 }, it processes the request, but if I POST { "bar": 99 }, I get a BAD REQUEST. All good so far!
The Kafka endpoint is also easy to create (along with adding a StringJsonMessageConverter() to my KafkaListenerContainerFactory so that I get JSON-to-object conversion):
@KafkaListener(topics = "fooTopic")
public void kafkaEndpoint(@Valid @Payload Foo foo) {
    // I shouldn't get here with an invalid object!!!
    logger.debug("Successfully processed the object" + foo);

    // But just to make sure, let's see if hand-validating it works
    Validator validator = localValidatorFactoryBean.getValidator();
    Set<ConstraintViolation<Foo>> errors = validator.validate(foo);
    if (errors.size() > 0) {
        logger.debug("But there were validation errors!" + errors);
    }
}
But no matter what I try, I can still pass invalid requests in and they process without error.
I've tried both #Valid and #Validated. I've tried adding a MethodValidationPostProcessor bean. I've tried adding a Validator to the KafkaListenerEndpointRegistrar (a la the EnableKafka javadoc):
@Configuration
public class MiscellaneousConfiguration implements KafkaListenerConfigurer {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    LocalValidatorFactoryBean validatorFactory;

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        logger.debug("Configuring " + registrar);
        registrar.setMessageHandlerMethodFactory(kafkaHandlerMethodFactory());
    }

    @Bean
    public MessageHandlerMethodFactory kafkaHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setValidator(validatorFactory);
        return factory;
    }
}
I've now spent a few days on this, and I'm running out of ideas. Is this even possible (without writing validation into every one of my Kafka endpoints)?
Sorry for the delay; we are at SpringOne Platform this week.
The infrastructure currently does not pass a Validator into the payload argument resolver. Please open an issue on GitHub.
Spring Kafka listeners by default do not scan for @Valid on classes that are not REST controllers. For more details, please refer to this answer:
https://stackoverflow.com/a/71859991/13898185
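For reference, a minimal sketch of the approach from that answer, assuming a spring-kafka version in which KafkaListenerEndpointRegistrar exposes setValidator() (2.2 and later); the class name is illustrative:

    @Configuration
    public class KafkaValidationConfig implements KafkaListenerConfigurer {

        @Autowired
        private LocalValidatorFactoryBean validator;

        @Override
        public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
            // makes @Valid / @Validated on @KafkaListener payloads take effect
            registrar.setValidator(this.validator);
        }
    }

With this in place, an invalid payload fails with a MethodArgumentNotValidException instead of reaching the listener body.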

Not able to filter messages received using condition attribute in Spring Cloud Stream @StreamListener annotation

I am trying to create an event-based system for communication between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my Receiver class methods as below.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method is specifically meant to catch messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method is specifically meant to catch messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called; I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in the source app as below:
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below:
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter the messages received by the sink/receiver application and handle them accordingly.
For this I have used the @StreamListener annotation with the condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream Chelsea.SR2.
Can someone help me resolve this issue?
It seems like the headers are not propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers; see http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties. A sketch follows.
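For example, a sketch for the eventType header used above (in the Chelsea release line, the Kafka binder only embeds headers that are explicitly listed):

    spring.cloud.stream.kafka.binder.headers=eventType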

Send and receive files from FTP in Spring Boot

I'm new to the Spring Framework and, indeed, I'm learning and using Spring Boot. Recently, in the app I'm developing, I got Quartz Scheduler working, and now I want to get Spring Integration working there: an FTP connection to a server to write and read files.
What I want is really simple (I've been able to do it in a previous Java application). I've got two Quartz jobs scheduled to fire at different times daily: one reads a file from an FTP server and the other writes a file to an FTP server.
I'll detail what I've developed so far.
@SpringBootApplication
@ImportResource("classpath:ws-config.xml")
@EnableIntegration
@EnableScheduling
public class MyApp extends SpringBootServletInitializer {

    @Autowired
    private Configuration configuration;

    //...

    @Bean
    public DefaultFtpsSessionFactory myFtpsSessionFactory() {
        DefaultFtpsSessionFactory sess = new DefaultFtpsSessionFactory();
        Ftp ftp = configuration.getFtp();
        sess.setHost(ftp.getServer());
        sess.setPort(ftp.getPort());
        sess.setUsername(ftp.getUsername());
        sess.setPassword(ftp.getPassword());
        return sess;
    }
}
I've named the following class FtpGateway:
@Component
public class FtpGateway {

    @Autowired
    private DefaultFtpsSessionFactory sess;

    public void sendFile() {
        // todo
    }

    public void readFile() {
        // todo
    }
}
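(For illustration only, not from the original post: a sketch of how these stubs could be filled in for on-demand, job-driven transfers using Spring Integration's FtpRemoteFileTemplate; the remote paths and method signatures are assumptions.)

    @Component
    public class FtpGateway {

        private final FtpRemoteFileTemplate template;

        public FtpGateway(DefaultFtpsSessionFactory sess) {
            this.template = new FtpRemoteFileTemplate(sess);
        }

        public void sendFile(File file) {
            template.execute(session -> {
                try (InputStream in = new FileInputStream(file)) {
                    // write the local file into the (assumed) remote directory
                    session.write(in, "/Entrada/" + file.getName());
                }
                return null;
            });
        }

        public void readFile(String remotePath, OutputStream out) {
            template.execute(session -> {
                // stream the remote file's contents into the given OutputStream
                session.read(remotePath, out);
                return null;
            });
        }
    }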
I'm reading this documentation to learn how to do it. Spring Integration's FTP support seems to be event-driven, so I don't know how I can execute either of sendFile() and readFile() from my jobs when the trigger is fired at an exact time.
The documentation tells me something about using an Inbound Channel Adapter (to read files from an FTP server?), an Outbound Channel Adapter (to write files to an FTP server?) and an Outbound Gateway (to do what?):
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.
So I'm not clear on how to proceed.
Please, could anybody give me a hint?
Thank you!
EDIT:
Thank you @M. Deinum. First, I'll try a simple task: reading a file from the FTP server, with the poller running every 5 seconds. This is what I've added:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
    FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(myFtpsSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/Entrada");
    fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.csv"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source = new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(configuration.getDirFicherosDescargados()));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
    return new MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            Object payload = message.getPayload();
            if (payload instanceof File) {
                File f = (File) payload;
                System.out.println(f.getName());
            } else {
                System.out.println(message.getPayload());
            }
        }
    };
}
Then, when the app is running, I put a new CSV file into the "Entrada" remote folder, but the handler() method isn't run after 5 seconds... Am I doing something wrong?
Please add @Scheduled(fixedDelay = 5000) over your poller method.
You should use Spring Batch with a tasklet. It is far easier to configure beans, cron schedules, and input sources with the existing interfaces provided by Spring.
https://www.baeldung.com/introduction-to-spring-batch
The example above is both annotation- and XML-based; you can use either.
Another benefit is that you can make use of listeners and parallel steps. The framework can also be used in a reader-processor-writer manner.
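To make the Spring Batch suggestion concrete, a minimal sketch of a single-step tasklet job, assuming Spring Batch 4.x with spring-boot-starter-batch (all names are illustrative):

    @Configuration
    @EnableBatchProcessing
    public class FtpJobConfig {

        @Bean
        public Step ftpStep(StepBuilderFactory steps) {
            return steps.get("ftpStep")
                    .tasklet((contribution, chunkContext) -> {
                        // perform the FTP transfer here
                        return RepeatStatus.FINISHED;
                    })
                    .build();
        }

        @Bean
        public Job ftpJob(JobBuilderFactory jobs, Step ftpStep) {
            return jobs.get("ftpJob").start(ftpStep).build();
        }
    }

The job can then be launched from a scheduled method through a JobLauncher.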

How can I send a message on connect event (SockJS, STOMP, Spring)?

I am connecting through SockJS over STOMP to my Spring backend. Everything works fine; the configuration works well for all browsers, etc. However, I cannot find a way to send an initial message. The scenario is as follows.
The client connects to the topic:
function connect() {
    var socket = new SockJS('http://localhost:8080/myEndpoint');
    stompClient = Stomp.over(socket);
    stompClient.connect({}, function(frame) {
        setConnected(true);
        console.log('Connected: ' + frame);
        stompClient.subscribe('/topic/notify', function(message) {
            showMessage(JSON.parse(message.body).content);
        });
    });
}
and the backend config looks more or less like this:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketAppConfig extends AbstractWebSocketMessageBrokerConfigurer {

    ...

    @Override
    public void registerStompEndpoints(final StompEndpointRegistry registry) {
        registry.addEndpoint("/myEndpoint").withSockJS();
    }
}
I want to send the client an automatic reply from the backend (on the connect event) so that I can already provide him with some dataset (e.g. something read from the DB) without the client needing to send a GET request (or any other). So to sum up, I just want to send him a message on the topic with the SimpMessagingTemplate object just after he connects.
Usually I do it the following way, e.g. in a REST controller, where the template is already autowired:
@Autowired
private SimpMessagingTemplate template;
...
template.convertAndSend(TOPIC, new Message("it works!"));
How can I achieve this on the connect event?
UPDATE
I have managed to make it work. However, I am still a bit confused by the configuration. I will show two configurations through which the initial message can be sent:
1) First solution
JS part
stompClient.subscribe('/app/pending', function(message) {
    showMessage(JSON.parse(message.body).content);
});
stompClient.subscribe('/topic/incoming', function(message) {
    showMessage(JSON.parse(message.body).content);
});
Java part
@Controller
public class WebSocketBusController {

    @SubscribeMapping("/pending")
    ...
}

Configuration:

@Override
public void configureMessageBroker(final MessageBrokerRegistry config) {
    config.enableSimpleBroker("/topic");
    config.setApplicationDestinationPrefixes("/app");
}

... and other calls:

template.convertAndSend("/topic/incoming", outgoingMessage);
2) Second solution
JS part
stompClient.subscribe('/topic/incoming', function(message) {
    showMessage(JSON.parse(message.body).content);
});
Java part
@Controller
public class WebSocketBusController {

    @SubscribeMapping("/topic/incoming")
    ...
}

Configuration:

@Override
public void configureMessageBroker(final MessageBrokerRegistry config) {
    config.enableSimpleBroker("/topic");
    // NO APPLICATION PREFIX HERE
}

... and other calls:

template.convertAndSend("/topic/incoming", outgoingMessage);
SUMMARY:
The first case uses two subscriptions; this is what I wanted to avoid, thinking it could be managed with only one.
The second one, however, has no application prefix. But at least I can have a single subscription that listens on the provided topic and also receives the initial message.
If you just want to send a message to the client upon connection, use an appropriate ApplicationListener:
@Component
public class StompConnectedEvent implements ApplicationListener<SessionConnectedEvent> {

    private static final Logger log = Logger.getLogger(StompConnectedEvent.class);

    @Autowired
    private Controller controller;

    @Override
    public void onApplicationEvent(SessionConnectedEvent event) {
        log.debug("Client connected.");
        // you can use a controller to send your msg here
    }
}
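To make that concrete, a sketch of what the listener body might look like, reusing the /topic/notify destination and the Message class from the question (both are assumptions about your code):

    @Autowired
    private SimpMessagingTemplate template;

    @Override
    public void onApplicationEvent(SessionConnectedEvent event) {
        log.debug("Client connected.");
        // push the initial dataset as soon as the session is established
        template.convertAndSend("/topic/notify", new Message("initial data"));
    }

Note that convertAndSend on a topic broadcasts to every subscriber; to target only the newly connected user you would need a user destination (convertAndSendToUser).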
You can't do that on connect; however, @SubscribeMapping does the job in that case.
You just need to mark the service method with that annotation, and it returns a result to the subscribe function.
From Spring Reference Manual:
An @SubscribeMapping annotation can also be used to map subscription requests to @Controller methods. It is supported on the method level, but can also be combined with a type-level @MessageMapping annotation that expresses shared mappings across all message handling methods within the same controller.
By default the return value from an @SubscribeMapping method is sent as a message directly back to the connected client and does not pass through the broker. This is useful for implementing request-reply message interactions; for example, to fetch application data when the application UI is being initialized. Or alternatively an @SubscribeMapping method can be annotated with @SendTo, in which case the resulting message is sent to the "brokerChannel" using the specified target destination.
UPDATE
Referring to this example: https://github.com/revelfire/spring4Test, how would it be possible to send anything when line 24 of index.html is invoked (stompClient.subscribe('/user/queue/socket/responses' ...) from the Spring controllers?
Well, it looks like this:
@SubscribeMapping("/queue/socket/responses")
public List<Employee> list() {
    return getEmployees();
}
The Stomp client part remains the same.
