Annotation configuration for uploading files to a server using the Spring Integration FTP adapter

I'm unable to upload files to a server using annotation based configuration for Spring Integration FTP Adapter. The code that I have used is:
@SuppressWarnings({ "unchecked", "rawtypes" })
@Bean
public IntegrationFlow ftpOut()
{
    DefaultFtpSessionFactory defSession = new DefaultFtpSessionFactory();
    defSession.setUsername("chh7kor");
    defSession.setPassword("Geetansh71!!");
    defSession.setPort(21);
    defSession.setHost("10.47.116.158");
    String remoteDirectory = DefaultFtpSessionFactory.DEFAULT_REMOTE_WORKING_DIRECTORY;
    File localDirectory = new File("C:\\FTP_Default");
    return IntegrationFlows.from(Ftp.outboundAdapter(defSession, FileExistsMode.REPLACE).remoteDirectory(remoteDirectory)).get();
}
@Bean
public MessageChannel outputChannel()
{
    File f = new File(PATH_FOR_FILES_FROM_SERVER);
    File[] allSubFiles = f.listFiles();
    DirectChannel dC = new DirectChannel();
    for (File iterateFiles : allSubFiles)
    {
        final Message<File> messageFile = MessageBuilder.withPayload(iterateFiles).build();
        dC.send(messageFile);
    }
    return dC;
}
I'm trying to read the files from a local folder and push them into a channel, but the IntegrationFlow doesn't allow me to attach a channel to it. Please advise how to achieve this, as the snippet above is not working.

You seem to have completely misunderstood Spring Java configuration. @Bean methods are for defining beans; you should not be sending messages the way you are doing in the for loop. The application context is not ready to accept messages yet; it is only defining beans at this point.
You should also configure the session factory as a @Bean, not declare it within the integration flow @Bean.
Finally, starting a flow with an outbound adapter makes no sense; you need...
@Bean
public IntegrationFlow ftpOut() {
    String remoteDirectory = DefaultFtpSessionFactory.DEFAULT_REMOTE_WORKING_DIRECTORY;
    return IntegrationFlows.from(outputChannel())
            .handle(Ftp.outboundAdapter(sessionFactory(), FileExistsMode.REPLACE).remoteDirectory(remoteDirectory))
            .get();
}
Then, after you create the context, send messages to the output channel.
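For completeness, a minimal sketch of what the channel bean and the sending side might look like (the class name FtpApp, the local path, and a Spring Boot bootstrap are assumptions for illustration):
@Bean
public MessageChannel outputChannel() {
    return new DirectChannel();
}

public static void main(String[] args) {
    ConfigurableApplicationContext ctx = SpringApplication.run(FtpApp.class, args);
    MessageChannel out = ctx.getBean("outputChannel", MessageChannel.class);
    for (File file : new File("C:\\FTP_Default").listFiles()) {
        // each message payload is a File; the flow writes it to the remote directory
        out.send(MessageBuilder.withPayload(file).build());
    }
}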

A working example, for reference, based on the answer to the question above:
@Bean
public DefaultFtpSessionFactory sessionFactory()
{
    DefaultFtpSessionFactory defSession = new DefaultFtpSessionFactory();
    defSession.setUsername("chh7kor");
    defSession.setPassword("Geetansh71!!");
    defSession.setPort(21);
    defSession.setHost("10.47.116.158");
    return defSession;
}

@Bean
public IntegrationFlow ftpOut()
{
    String remoteDirectory = DefaultFtpSessionFactory.DEFAULT_REMOTE_WORKING_DIRECTORY;
    return IntegrationFlows.from(pollableChannel())
            .handle(Ftp.outboundAdapter(sessionFactory(), FileExistsMode.REPLACE).remoteDirectory(remoteDirectory + "/F").autoCreateDirectory(true))
            .get();
}
public static void main(String[] args) throws InterruptedException
{
    // ctx is the ApplicationContext created beforehand, e.g. by SpringApplication.run(...)
    File f = new File(PATH_FOR_FILES_FROM_SERVER);
    File[] allSubFiles = f.listFiles();
    for (File file : allSubFiles) {
        if (file.isDirectory())
        {
            System.out.println(file.getAbsolutePath() + " is directory");
            // Steps for directory
        }
        else
        {
            System.out.println(file.getAbsolutePath() + " is file");
            // steps for files
        }
    }
    PollableChannel pC = ctx.getBean("pollableChannel", PollableChannel.class);
    for (File iterateFiles : allSubFiles)
    {
        final Message<File> messageFile = MessageBuilder.withPayload(iterateFiles).build();
        pC.send(messageFile);
        Thread.sleep(2000);
    }
}
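The example above assumes a pollable channel bean named pollableChannel and a default poller for the flow to read from it; a minimal sketch of those beans (the queue capacity and polling interval are illustrative) could be:
@Bean
public PollableChannel pollableChannel() {
    return new QueueChannel(50); // files are buffered here until the flow polls them
}

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    return Pollers.fixedDelay(500).get(); // needed because the flow consumes from a pollable channel
}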

Related

How can I test that I have configured ChainedKafkaTransactionManager correctly in my spring boot service

My Spring Boot service needs to consume Kafka events off one topic, do some processing (including writing to the DB with JPA) and then produce some events on a new topic. No matter what happens, I cannot have a situation where I have published events without updating the database, and if anything goes wrong I want the next poll of the consumer to retry the event. My processing logic, including the DB update, is idempotent, so retrying it is fine.
I think I have achieved exactly once semantics as described on https://docs.spring.io/spring-kafka/reference/html/#exactly-once by using a ChainedKafkaTransactionManager like so:
@Bean
public ChainedKafkaTransactionManager chainedTransactionManager(JpaTransactionManager jpa, KafkaTransactionManager<?, ?> kafka) {
    kafka.setTransactionSynchronization(SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return new ChainedKafkaTransactionManager(kafka, jpa);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        ChainedKafkaTransactionManager chainedTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setTransactionManager(chainedTransactionManager);
    return factory;
}
The relevant kafka config in my application.yaml file looks like:
kafka:
  ...
  consumer:
    group-id: myGroupId
    auto-offset-reset: earliest
    properties:
      isolation.level: read_committed
    ...
  producer:
    transaction-id-prefix: ${random.uuid}
    ...
Because the commit order is critical to my application, I would like to write an integration test to prove that the commits happen in the desired order and that, if an error occurs during the commit to Kafka, the original event is consumed again. However, I am struggling to find a good way of causing a failure between the DB commit and the Kafka commit.
Any suggestions or alternative ways I could do this?
Thanks
You could use a custom ProducerFactory to return a MockProducer (provided by kafka-clients).
Set the commitTransactionException so that it is thrown when the KTM tries to commit the transaction.
EDIT
Here is an example; it doesn't use the chained TM, but that shouldn't make a difference.
@SpringBootApplication
public class So66018178Application {

    public static void main(String[] args) {
        SpringApplication.run(So66018178Application.class, args);
    }

    @KafkaListener(id = "so66018178", topics = "so66018178")
    public void listen(String in) {
        System.out.println(in);
    }

}
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.consumer.auto-offset-reset=earliest
@SpringBootTest(classes = { So66018178Application.class, So66018178ApplicationTests.Config.class })
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class So66018178ApplicationTests {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Test
    void kafkaCommitFails(@Autowired KafkaListenerEndpointRegistry registry, @Autowired Config config)
            throws InterruptedException {

        registry.getListenerContainer("so66018178").stop();
        AtomicReference<Exception> listenerException = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        ((ConcurrentMessageListenerContainer<String, String>) registry.getListenerContainer("so66018178"))
                .setAfterRollbackProcessor(new AfterRollbackProcessor<>() {

                    @Override
                    public void process(List<ConsumerRecord<String, String>> records, Consumer<String, String> consumer,
                            Exception exception, boolean recoverable) {

                        listenerException.set(exception);
                        latch.countDown();
                    }

                });
        registry.getListenerContainer("so66018178").start();

        Map<String, Object> props = KafkaTestUtils.producerProps(this.broker);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        template.send("so66018178", "test");
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(listenerException.get()).isInstanceOf(ListenerExecutionFailedException.class)
                .hasCause(config.exception);
    }

    @Configuration
    public static class Config {

        RuntimeException exception = new RuntimeException("test");

        @Bean
        public ProducerFactory<Object, Object> pf() {
            return new ProducerFactory<>() {

                @Override
                public Producer<Object, Object> createProducer() {
                    MockProducer<Object, Object> mockProducer = new MockProducer<>();
                    mockProducer.commitTransactionException = Config.this.exception;
                    return mockProducer;
                }

                @Override
                public Producer<Object, Object> createProducer(String txIdPrefix) {
                    Producer<Object, Object> producer = createProducer();
                    producer.initTransactions();
                    return producer;
                }

                @Override
                public boolean transactionCapable() {
                    return true;
                }

            };
        }

    }

}
Do not use ChainedKafkaTransactionManager anymore; it is deprecated.
According to the docs:
https://docs.spring.io/spring-kafka/reference/html/#container-transaction-manager
"The ChainedKafkaTransactionManager is now deprecated, since version 2.7; see the javadocs for its super class ChainedTransactionManager for more information. Instead, use a KafkaTransactionManager in the container to start the Kafka transaction and annotate the listener method with #Transactional to start the other transaction."
In my tests, where I tried to simulate an exception in the producer after the DB transaction had committed, I simply left a mandatory field empty in the Kafka event (an Avro schema was used), and in a second test I deleted the topic being produced to, with the help of the Kafka Admin. Then I wrote some asserts to verify that the Kafka listener was called again on retry.
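A minimal sketch of the approach the docs describe, assuming Spring Boot auto-configures the JPA transaction manager and (because transaction-id-prefix is set) a KafkaTransactionManager; the topic name and the transactionManager qualifier are illustrative:
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        KafkaTransactionManager<Object, Object> kafkaTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    // the container starts (and eventually commits) the Kafka transaction
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}

// the listener starts the JPA transaction; it commits when the method returns,
// before the container commits the Kafka transaction
@KafkaListener(topics = "myTopic")
@Transactional("transactionManager")
public void listen(String in) {
    // JPA writes and KafkaTemplate sends go here
}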

Spring integration: discardChannel doesn't work for filter of integration flow

I'm facing a problem when I create an IntegrationFlow dynamically using the DSL.
If the discardChannel is defined as a message channel object and the filter returns false, nothing happens (the message is not sent to the specified discard channel).
The source is:
@Autowired
@Qualifier("SIMPLE_CHANNEL")
private MessageChannel simpleChannel;

IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
        .filter(simpleMessageSelectorImpl, e -> e.discardChannel(simpleChannel))
        .get();
...
@Autowired
@Qualifier("SIMPLE_CHANNEL")
private MessageChannel simpleChannel;

@Bean
public IntegrationFlow simpleFlow() {
    return IntegrationFlows.from(simpleChannel)
            .handle(m -> System.out.println("Hello world"))
            .get();
}

@Bean(name = "SIMPLE_CHANNEL")
public MessageChannel simpleChannel() {
    return new DirectChannel();
}
But if the discard channel is defined as the name of the channel, everything works.
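That is, a variant along these lines, passing the bean name rather than the injected channel, does deliver discarded messages:
IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
        .filter(simpleMessageSelectorImpl, e -> e.discardChannel("SIMPLE_CHANNEL"))
        .get();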
Debugging, I found that the part of the code mentioned above:
IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
        .filter(simpleMessageSelectorImpl, e -> e.discardChannel(simpleChannel))
        .get();
returns a flow object that has a map of integrationComponents; one of those components, a FilterEndpointSpec, has a "handler" field of type MessageFilter with discardChannel = null and discardChannelName = null.
But if the discard channel is defined by its name, that same "handler" field has discardChannel = null but discardChannelName = "SIMPLE_CHANNEL", and as a result everything works.
This is the behavior of my running application. I also wrote a test, and in the test both cases work (the test doesn't start the whole Spring context, so maybe it is related to some conflict there).
Maybe someone has an idea what it could be.
The Spring Boot version is 2.1.8.RELEASE; Spring Integration is 5.1.7.RELEASE.
Thanks
The behaviour you describe is indeed incorrect and made me wonder, but after testing it out I can't seem to reproduce it, so perhaps there is something missing from the information you provided. In any event, here is a complete app that I've modeled after yours, which works as expected, so perhaps you can compare and see if something jumps out:
@SpringBootApplication
public class IntegrationBootApp {

    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(IntegrationBootApp.class, args);
        MessageChannel channel = context.getBean("channelName", MessageChannel.class);
        PollableChannel resultChannel = context.getBean("resultChannel", PollableChannel.class);
        PollableChannel discardChannel = context.getBean("SIMPLE_CHANNEL", PollableChannel.class);
        channel.send(MessageBuilder.withPayload("foo").build());
        System.out.println("SUCCESS: " + resultChannel.receive());
        channel.send(MessageBuilder.withPayload("bar").build());
        System.out.println("DISCARD: " + discardChannel.receive());
    }

    @Autowired
    @Qualifier("SIMPLE_CHANNEL")
    private PollableChannel simpleChannel;

    @Bean
    public IntegrationFlow integrationFlow() {
        IntegrationFlow integrationFlow = IntegrationFlows.from("channelName")
                .filter(v -> v.equals("foo"), e -> e.discardChannel(simpleChannel))
                .channel("resultChannel")
                .get();
        return integrationFlow;
    }

    @Bean(name = "SIMPLE_CHANNEL")
    public PollableChannel simpleChannel() {
        return new QueueChannel();
    }

    @Bean
    public PollableChannel resultChannel() {
        return new QueueChannel(10);
    }

}
with output
SUCCESS: GenericMessage [payload=foo, headers={id=cf7e2ef1-e49d-1ecb-9c92-45224d0d91c1, timestamp=1576219339077}]
DISCARD: GenericMessage [payload=bar, headers={id=bf209500-c3cd-9a7c-0216-7d6f51cd5f40, timestamp=1576219339078}]

AWS SQS (queue) with Spring Boot - performance issues

I have a service that reads all messages from AWS SQS.
@Slf4j
@Configuration
@EnableJms
public class JmsConfig {

    private SQSConnectionFactory connectionFactory;

    public JmsConfig(
            @Value("${amazon.sqs.accessKey}") String awsAccessKey,
            @Value("${amazon.sqs.secretKey}") String awsSecretKey,
            @Value("${amazon.sqs.region}") String awsRegion,
            @Value("${amazon.sqs.endpoint}") String awsEndpoint) {
        connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.standard()
                        .withCredentials(new AWSStaticCredentialsProvider(
                                new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
                        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(awsEndpoint, awsRegion))
                        .build());
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory =
                new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(this.connectionFactory);
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setConcurrency("3-10");
        factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
        factory.setReceiveTimeout(2000L); // ??????????
        return factory;
    }

    @Bean
    public JmsTemplate defaultJmsTemplate() {
        return new JmsTemplate(this.connectionFactory);
    }
}
I've heard about long polling, so I wonder how I could use it in my case. I also wonder how this listener works; I do not want to create unnecessary calls to AWS SQS.
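For reference, long polling is a queue-level SQS setting (ReceiveMessageWaitTimeSeconds, 0-20 seconds); a rough sketch of enabling it with the same v1 SDK style used above (the queue name is illustrative, and how this interacts with the JMS listener container's own receive loop is a separate question):
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(awsEndpoint, awsRegion))
        .build();
String queueUrl = sqs.getQueueUrl("my-queue").getQueueUrl();
// with a wait time > 0, ReceiveMessage blocks until a message arrives or the wait expires
sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(queueUrl)
        .addAttributesEntry("ReceiveMessageWaitTimeSeconds", "20"));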
My listener that reads messages and converts them to the Object and saves on Redis db:
@JmsListener(destination = "${amazon.sqs.destination}")
public void receive(String requestJSON) throws JMSException {
    log.info("Received");
    try {
        Trace trace = Trace.fromJSON(requestJSON);
        traceRepository.save(trace);
        (...)
I'd like to know your opinions: what is the best approach to minimize unnecessary calls to SQS to get messages?
Maybe I should use, for example:
factory.setReceiveTimeout(2000L);
Unfortunately there is too little information on the Internet about it.
Thanks,
Matthew

Spring boot stream bind queue with multiple routing keys

I need to bind a single queue with multiple routing keys.
I have this configuration in application.properties:
spring.cloud.stream.bindings.some-channel1.destination=exch
spring.cloud.stream.bindings.some-channel1.group=a-queue
spring.cloud.stream.rabbit.bindings.some-channel1.consumer.binding-routing-key=event.domain1
spring.cloud.stream.bindings.some-channel2.destination=exch
spring.cloud.stream.bindings.some-channel2.group=a-queue
spring.cloud.stream.rabbit.bindings.some-channel2.consumer.binding-routing-key=event.domain2
This creates the queue and bindings properly in Rabbit, but after running the application I got:
org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
The configuration above still isn't right for me, because I need a single channel, but a queue bound with a list of routing keys.
Any ideas how to configure it?
You can't do it with stream properties, but you can always add extra bindings with normal Spring AMQP declarations...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So50526298Application {

    public static void main(String[] args) {
        SpringApplication.run(So50526298Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
    }

    // extra bindings...

    @Bean
    public TopicExchange exch() {
        return new TopicExchange("exch");
    }

    @Bean
    public Queue queue() {
        return new Queue("exch.a-queue");
    }

    @Bean
    public Binding extraBinding1() {
        return BindingBuilder.bind(queue()).to(exch()).with("event-domain2");
    }

}
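If more routing keys are needed, the same pattern extends to one Binding bean per key, for example (the key name here is illustrative):
@Bean
public Binding extraBinding2() {
    return BindingBuilder.bind(queue()).to(exch()).with("event.domain3");
}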
There is also a third-party "advanced" Boot starter that allows you to add declarations in a YAML file. I haven't tried it, but it looks interesting.

RabbitListener annotation queue name by ConfigurationProperties

I have configured my Rabbit properties via application.yaml and Spring configuration properties.
Thus, when I configure exchanges, queues and bindings, I can use the getters of my properties:
@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(properties.getQueue());
}

@Bean
Queue queue() {
    return new Queue(properties.getQueue(), true);
}

@Bean
TopicExchange exchange() {
    return new TopicExchange(properties.getExchange());
}
However, when I configure a @RabbitListener to log the messages from the queue, I have to use the full property name, like:
@RabbitListener(queues = "${some.long.path.to.the.queue.name}")
public void onMessage(
        final Message message, final Channel channel) throws Exception {
    log.info("receiving message: {}#{}", message, channel);
}
I want to avoid this error-prone hard-coded String and refer to the configuration properties bean, like:
@RabbitListener(queues = "${properties.getQueue()}")
I had a similar issue once with @EventListener, where using a bean reference "#bean.method()" helped, but it does not work here; the bean expression is just interpreted as the queue name, which fails because a queue named "#bean...." does not exist.
Is it possible to use ConfigurationProperty-Beans for RabbitListener queue configuration?
Something like this worked for me, where I just used the bean and SpEL:
@Autowired
Queue queue;

@RabbitListener(queues = "#{queue.getName()}")
I was finally able to accomplish what we both wanted by taking what @David Diehl suggested, using the bean and SpEL, but with MyRabbitProperties itself instead. I removed the @EnableConfigurationProperties(MyRabbitProperties.class) in the config class and registered the bean the standard way:
@Configuration
//@EnableConfigurationProperties(RabbitProperties.class)
@EnableRabbit
public class RabbitConfig {

    //private final MyRabbitProperties myRabbitProperties;

    //@Autowired
    //public RabbitConfig(MyRabbitProperties myRabbitProperties) {
    //    this.myRabbitProperties = myRabbitProperties;
    //}

    @Bean
    public TopicExchange myExchange(MyRabbitProperties myRabbitProperties) {
        return new TopicExchange(myRabbitProperties.getExchange());
    }

    @Bean
    public Queue myQueueBean(MyRabbitProperties myRabbitProperties) {
        return new Queue(myRabbitProperties.getQueue(), true);
    }

    @Bean
    public Binding binding(Queue myQueueBean, TopicExchange myExchange, MyRabbitProperties myRabbitProperties) {
        return BindingBuilder.bind(myQueueBean).to(myExchange).with(myRabbitProperties.getRoutingKey());
    }

    @Bean
    public MyRabbitProperties myRabbitProperties() {
        return new MyRabbitProperties();
    }

}
From there, you can access the get method for that field:
@Component
public class RabbitQueueListenerClass {

    @RabbitListener(queues = "#{myRabbitProperties.getQueue()}")
    public void processMessage(Message message) {
    }

}
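For context, a minimal sketch of what the MyRabbitProperties class referenced above might look like (the property prefix and field names are assumptions for illustration):
@ConfigurationProperties(prefix = "my.rabbit")
public class MyRabbitProperties {

    private String exchange;   // my.rabbit.exchange
    private String queue;      // my.rabbit.queue
    private String routingKey; // my.rabbit.routing-key

    public String getExchange() { return exchange; }
    public void setExchange(String exchange) { this.exchange = exchange; }

    public String getQueue() { return queue; }
    public void setQueue(String queue) { this.queue = queue; }

    public String getRoutingKey() { return routingKey; }
    public void setRoutingKey(String routingKey) { this.routingKey = routingKey; }
}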
@RabbitListener(queues = "#{myQueue.name}")
Listener:
@RabbitListener(queues = "${queueName}")
application.properties:
queueName=myQueue
