With Spring Boot and Integration DSL, getting ClassNotFoundException: integration.history.TrackableComponent

I'm trying a very basic JMS receiver using Spring Boot, Integration and the Java DSL. I have worked with XML-based Spring Integration, but am new to Spring Boot and the DSL.
This is the code sample I have so far:
@SpringBootApplication
@IntegrationComponentScan
@EnableJms
public class JmsReceiver {

    static String mailboxDestination = "RETRY.QUEUE";

    @Configuration
    @EnableJms
    @IntegrationComponentScan
    @EnableIntegration
    public class MessageReceiver {

        @Bean
        public IntegrationFlow jmsMessageDrivenFlow() {
            return IntegrationFlows
                    .from(Jms.messageDriverChannelAdapter(this.connectionFactory())
                            .destination(mailboxDestination))
                    .transform((String s) -> s.toUpperCase())
                    .get();
        }

        // for sending messages
        @Bean
        ConnectionFactory connectionFactory() {
            ActiveMQConnectionFactory acFac = new ActiveMQConnectionFactory();
            acFac.setBrokerURL("tcp://crsvcdevlnx01:61616");
            acFac.setUserName("admin");
            acFac.setPassword("admin");
            return new CachingConnectionFactory(acFac);
        }
    }

    // Message send code
    public static void main(String args[]) throws Throwable {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(JmsReceiver.class);
        JmsTemplate jmsTemplate = context.getBean(JmsTemplate.class);
        System.out.println("Sending a new message.");
        MessageCreator messageCreator = new MessageCreator() {

            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage("ping!");
            }

        };
        jmsTemplate.send(mailboxDestination, messageCreator);
        context.close();
    }
}
And I get this error when running with Gradle:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.integration.dsl.IntegrationFlow]: Factory method 'inboundFlow' threw exception; nested exception is java.lang.NoClassDefFoundError: org/springframework/integration/history/TrackableComponent
reflect.NativeMethodAccessorImpl.invoke0(Native Method)
.
.
.
Caused by: java.lang.ClassNotFoundException: org.springframework.integration.history.TrackableComponent
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
My gradle dependencies:
compile "org.springframework.boot:spring-boot-starter-jersey",
"org.springframework.boot:spring-boot-starter-actuator",
"org.springframework.boot:spring-boot-configuration-processor",
"org.springframework.boot:spring-boot-starter-integration",
"org.springframework.integration:spring-integration-jms",
"org.springframework.integration:spring-integration-java-dsl:1.1.1.RELEASE",
"org.springframework.integration:spring-integration-flow:1.0.0.RELEASE",
"org.springframework.integration:spring-integration-core:4.2.2.RELEASE",
"org.springframework.integration:spring-integration-java-dsl:1.1.0.RELEASE",
"org.springframework.integration:spring-integration-flow:1.0.0.RELEASE",
"org.apache.activemq:activemq-spring:5.11.2",
UPDATE, SOLVED: Thanks much. I changed two things:
1. Cleaned up the Gradle dependencies based on your advice. The new ones look like this:
compile "org.springframework.boot:spring-boot-starter-jersey",
"org.springframework.boot:spring-boot-starter-actuator",
"org.springframework.boot:spring-boot-configuration-processor",
"org.springframework.boot:spring-boot-starter-integration",
"org.springframework.integration:spring-integration-jms",
"org.springframework.integration:spring-integration-java-dsl:1.1.0.RELEASE",
"org.apache.activemq:activemq-spring:5.11.2"
2. The code was throwing a constructor error about not being able to instantiate <init> in the inner class; Spring cannot construct a non-static inner @Configuration class on its own because it needs an instance of the enclosing class, so I changed the inner class to static. New code:
@SpringBootApplication
@IntegrationComponentScan
@EnableJms
public class JmsReceiver {

    static String lsamsErrorQueue = "Queue.LSAMS.retryMessage";
    static String fatalErrorsQueue = "Queue.LSAMS.ManualCheck";

    // receiver
    @EnableJms
    @EnableIntegration
    @Configuration
    public static class MessageReceiver {

        @Bean
        public IntegrationFlow jmsMessageDrivenFlow() {
            return IntegrationFlows
                    .from(Jms.messageDriverChannelAdapter(this.connectionFactory())
                            .destination(lsamsErrorQueue))
                    // call LSAMS REST service with the payload received
                    .transform((String s) -> s.toUpperCase())
                    .handle(Jms.outboundGateway(this.connectionFactory())
                            .requestDestination(fatalErrorsQueue))
                    .get();
        }

        @Bean
        ConnectionFactory connectionFactory() {
            ActiveMQConnectionFactory acFac = new ActiveMQConnectionFactory();
            acFac.setBrokerURL("tcp://crsvcdevlnx01:61616");
            acFac.setUserName("admin");
            acFac.setPassword("admin");
            return new CachingConnectionFactory(acFac);
        }
    }

    // Message send code
    public static void main(String args[]) throws Throwable {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(JmsReceiver.class);
        JmsTemplate jmsTemplate = context.getBean(JmsTemplate.class);
        System.out.println("Sending a new message.");
        MessageCreator messageCreator = new MessageCreator() {

            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage("ping!");
            }

        };
        jmsTemplate.send(lsamsErrorQueue, messageCreator);
        context.close();
    }
}

Well, that looks like you have a version mess in your classpath.
First of all, you shouldn't mix versions of the same artifact manually, as you have with spring-integration-java-dsl and spring-integration-flow. By the way, do you really need the latter? I mean, is there some reason to keep spring-integration-flow? That project is about Modular Flows.
On the other hand, you don't need to specify spring-integration-core if you rely on Spring Boot (spring-boot-starter-integration in your case).
And yes: TrackableComponent was moved to org.springframework.integration.support.management in Spring Integration 4.2 (https://jira.spring.io/browse/INT-3799).
From here it looks like you are somehow pulling in an older Spring Integration version:
- either from Spring Boot 1.2.x
- or it really is a side effect of a transitive dependency from spring-integration-flow...
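To see which Spring Integration version actually lands on the classpath (and what spring-integration-flow pulls in transitively), the Gradle dependency report can help, e.g.:
./gradlew dependencies --configuration compile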

Related

How can I test that I have configured ChainedKafkaTransactionManager correctly in my spring boot service

My Spring Boot service needs to consume Kafka events off one topic, do some processing (including writing to the db with JPA) and then produce some events on a new topic. No matter what happens, I cannot have a situation where I have published events without updating the database, and if anything goes wrong I want the next poll of the consumer to retry the event. My processing logic, including the db update, is idempotent, so retrying that is fine.
I think I have achieved exactly-once semantics as described at https://docs.spring.io/spring-kafka/reference/html/#exactly-once by using a ChainedKafkaTransactionManager like so:
@Bean
public ChainedKafkaTransactionManager chainedTransactionManager(JpaTransactionManager jpa, KafkaTransactionManager<?, ?> kafka) {
    kafka.setTransactionSynchronization(SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return new ChainedKafkaTransactionManager(kafka, jpa);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        ChainedKafkaTransactionManager chainedTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setTransactionManager(chainedTransactionManager);
    return factory;
}
The relevant Kafka config in my application.yaml file looks like:
kafka:
  ...
  consumer:
    group-id: myGroupId
    auto-offset-reset: earliest
    properties:
      isolation.level: read_committed
  ...
  producer:
    transaction-id-prefix: ${random.uuid}
  ...
Because the commit order is critical to my application, I would like to write an integration test to prove that the commits happen in the desired order and that, if an error occurs during the commit to Kafka, the original event is consumed again. However, I am struggling to find a good way of causing a failure between the db commit and the Kafka commit.
Any suggestions or alternative ways I could do this?
Thanks
You could use a custom ProducerFactory to return a MockProducer (provided by kafka-clients).
Set the commitTransactionException so that it is thrown when the KTM tries to commit the transaction.
EDIT
Here is an example; it doesn't use the chained TM, but that shouldn't make a difference.
@SpringBootApplication
public class So66018178Application {

    public static void main(String[] args) {
        SpringApplication.run(So66018178Application.class, args);
    }

    @KafkaListener(id = "so66018178", topics = "so66018178")
    public void listen(String in) {
        System.out.println(in);
    }
}
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.consumer.auto-offset-reset=earliest
@SpringBootTest(classes = { So66018178Application.class, So66018178ApplicationTests.Config.class })
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class So66018178ApplicationTests {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Test
    void kafkaCommitFails(@Autowired KafkaListenerEndpointRegistry registry, @Autowired Config config)
            throws InterruptedException {

        registry.getListenerContainer("so66018178").stop();
        AtomicReference<Exception> listenerException = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        ((ConcurrentMessageListenerContainer<String, String>) registry.getListenerContainer("so66018178"))
                .setAfterRollbackProcessor(new AfterRollbackProcessor<>() {

                    @Override
                    public void process(List<ConsumerRecord<String, String>> records, Consumer<String, String> consumer,
                            Exception exception, boolean recoverable) {

                        listenerException.set(exception);
                        latch.countDown();
                    }
                });
        registry.getListenerContainer("so66018178").start();

        Map<String, Object> props = KafkaTestUtils.producerProps(this.broker);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        template.send("so66018178", "test");
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(listenerException.get()).isInstanceOf(ListenerExecutionFailedException.class)
                .hasCause(config.exception);
    }

    @Configuration
    public static class Config {

        RuntimeException exception = new RuntimeException("test");

        @Bean
        public ProducerFactory<Object, Object> pf() {
            return new ProducerFactory<>() {

                @Override
                public Producer<Object, Object> createProducer() {
                    MockProducer<Object, Object> mockProducer = new MockProducer<>();
                    mockProducer.commitTransactionException = Config.this.exception;
                    return mockProducer;
                }

                @Override
                public Producer<Object, Object> createProducer(String txIdPrefix) {
                    Producer<Object, Object> producer = createProducer();
                    producer.initTransactions();
                    return producer;
                }

                @Override
                public boolean transactionCapable() {
                    return true;
                }
            };
        }
    }
}
Do not use ChainedKafkaTransactionManager anymore; it is deprecated.
According to the docs (https://docs.spring.io/spring-kafka/reference/html/#container-transaction-manager):
"The ChainedKafkaTransactionManager is now deprecated, since version 2.7; see the javadocs for its super class ChainedTransactionManager for more information. Instead, use a KafkaTransactionManager in the container to start the Kafka transaction and annotate the listener method with @Transactional to start the other transaction."
In my tests, where I tried to simulate an exception in the producer after the DB transaction committed, I simply left a mandatory field empty in the Kafka event (we used an Avro schema), and in a second test I deleted the topic being produced to with the help of the Kafka Admin client. Then I wrote some asserts to verify that the Kafka listener was called again on retry.
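For the delete-the-topic approach, something along these lines should work in a test against the embedded broker shown above (the topic name is a placeholder):
// force the send/commit to fail by removing the output topic mid-test
try (AdminClient admin = AdminClient.create(
        Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, this.broker.getBrokersAsString()))) {
    admin.deleteTopics(List.of("output-topic")).all().get();
}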

RabbitListener annotation queue name by ConfigurationProperties

I have configured my Rabbit properties via application.yaml and Spring configuration properties.
Thus, when I configure exchanges, queues and bindings, I can use the getters of my properties:
@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(properties.getQueue());
}

@Bean
Queue queue() {
    return new Queue(properties.getQueue(), true);
}

@Bean
TopicExchange exchange() {
    return new TopicExchange(properties.getExchange());
}
However, when I configure a @RabbitListener to log the messages from the queue, I have to use the full property name, like:
@RabbitListener(queues = "${some.long.path.to.the.queue.name}")
public void onMessage(
        final Message message, final Channel channel) throws Exception {
    log.info("receiving message: {}@{}", message, channel);
}
I want to avoid this error-prone hard-coded String and refer to the configuration properties bean instead, like:
@RabbitListener(queues = "${properties.getQueue()}")
I had a similar issue once with @EventListener, where using a bean reference "@bean.method()" helped, but it does not work here: the bean expression is just interpreted as the queue name, which fails because a queue named "@bean...." does not exist.
Is it possible to use ConfigurationProperty beans for the RabbitListener queue configuration?
Something like this worked for me, where I just used the bean and SpEL:
@Autowired
Queue queue;

@RabbitListener(queues = "#{queue.getName()}")
I was finally able to accomplish what we both wanted by taking what @David Diehl suggested, using the bean and SpEL, but with MyRabbitProperties itself instead. I removed the @EnableConfigurationProperties(MyRabbitProperties.class) in the config class and registered the bean the standard way:
@Configuration
//@EnableConfigurationProperties(RabbitProperties.class)
@EnableRabbit
public class RabbitConfig {

    //private final MyRabbitProperties myRabbitProperties;

    //@Autowired
    //public RabbitConfig(MyRabbitProperties myRabbitProperties) {
    //    this.myRabbitProperties = myRabbitProperties;
    //}

    @Bean
    public TopicExchange myExchange(MyRabbitProperties myRabbitProperties) {
        return new TopicExchange(myRabbitProperties.getExchange());
    }

    @Bean
    public Queue myQueueBean(MyRabbitProperties myRabbitProperties) {
        return new Queue(myRabbitProperties.getQueue(), true);
    }

    @Bean
    public Binding binding(Queue myQueueBean, TopicExchange myExchange, MyRabbitProperties myRabbitProperties) {
        return BindingBuilder.bind(myQueueBean).to(myExchange).with(myRabbitProperties.getRoutingKey());
    }

    @Bean
    public MyRabbitProperties myRabbitProperties() {
        return new MyRabbitProperties();
    }
}
From there, you can access the getter for that field:
@Component
public class RabbitQueueListenerClass {

    @RabbitListener(queues = "#{myRabbitProperties.getQueue()}")
    public void processMessage(Message message) {
    }
}
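For reference, the question never shows MyRabbitProperties itself; a minimal sketch of what it might look like (the prefix and field names here are assumptions) is:
@ConfigurationProperties(prefix = "some.long.path.to.the")
public class MyRabbitProperties {

    private String queue;    // bound from some.long.path.to.the.queue
    private String exchange;
    private String routingKey;

    public String getQueue() { return queue; }
    public void setQueue(String queue) { this.queue = queue; }

    public String getExchange() { return exchange; }
    public void setExchange(String exchange) { this.exchange = exchange; }

    public String getRoutingKey() { return routingKey; }
    public void setRoutingKey(String routingKey) { this.routingKey = routingKey; }
}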
@RabbitListener(queues = "#{myQueue.name}")
Listener:
@RabbitListener(queues = "${queueName}")
application.properties:
queueName=myQueue

Invocation of Spring Cloud AWS Messaging package causes dependent beans to be null

I have a Spring Boot project that I'm using to receive events from an Amazon SQS queue. I've been using the Spring Cloud AWS project to make this easier.
The problem is this: the Spring Boot application starts up just fine and appears to instantiate all the necessary beans. However, when the method annotated with @SqsListener is invoked, all the event handler's dependent beans are null.
Another thing that's important to note: I have two methods of propagating the event: 1) through a POST web service call, and 2) through the Amazon SQS queue. If I run the event as a POST call with the same data in the POST body, it works just fine. The injected dependencies are only ever null when the @SqsListener method is invoked by the SimpleMessageListenerContainer.
Classes:
@Service("systemEventsHandler")
public class SystemEventsHandler {

    // A service that this handler depends on
    private CustomService customService;
    private ObjectMapper objectMapper;

    @Autowired
    public SystemEventsHandler(CustomService customService, ObjectMapper objectMapper) {
        this.customService = customService;
        this.objectMapper = objectMapper;
    }

    public void handleEventFromHttpCall(CustomEventObject event) {
        // Whenever this method is called, the customService is
        // present and the method call completes just fine.
        Assert.notNull(objectMapper, "The objectMapper that was injected was null");
        customService.handleEvent(event);
    }

    @SqsListener(value = "sqsName", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
    private void handleEventFromSQSQueue(@NotificationMessage String body) throws IOException {
        // Whenever this method is called, both the objectMapper and
        // the customService are null, causing the invocation to
        // fail with a NullPointerException
        CustomEventObject event = objectMapper.readValue(body, CustomEventObject.class);
        customService.handleEvent(event);
    }
}
The controller (for when I choose to run the event as a POST). As stated above, it works just fine when I run it as a POST call:
@RestController
@RequestMapping("/events")
public class SystemEventsController {

    private final SystemEventsHandler sysEventSvc;

    @Autowired
    public SystemEventsController(SystemEventsHandler sysEventSvc) {
        this.sysEventSvc = sysEventSvc;
    }

    @RequestMapping(value = "", method = RequestMethod.POST)
    public void handleCustomEvent(@RequestBody CustomEventObject event) {
        sysEventSvc.handleEventFromHttpCall(event);
    }
}
Pertinent config:
@Configuration
public class AWSSQSConfig {

    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer(AmazonSQSAsync amazonSQS) {
        SimpleMessageListenerContainer msgListenerContainer = simpleMessageListenerContainerFactory(amazonSQS).createSimpleMessageListenerContainer();
        msgListenerContainer.setMessageHandler(queueMessageHandler(amazonSQS));
        return msgListenerContainer;
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSQS) {
        SimpleMessageListenerContainerFactory msgListenerContainerFactory = new SimpleMessageListenerContainerFactory();
        msgListenerContainerFactory.setAmazonSqs(amazonSQS);
        msgListenerContainerFactory.setMaxNumberOfMessages(10);
        msgListenerContainerFactory.setWaitTimeOut(1);
        return msgListenerContainerFactory;
    }

    @Bean
    public QueueMessageHandler queueMessageHandler(AmazonSQSAsync amazonSQS) {
        QueueMessageHandlerFactory queueMsgHandlerFactory = new QueueMessageHandlerFactory();
        queueMsgHandlerFactory.setAmazonSqs(amazonSQS);
        QueueMessageHandler queueMessageHandler = queueMsgHandlerFactory.createQueueMessageHandler();
        return queueMessageHandler;
    }

    @Bean(name = "amazonSQS", destroyMethod = "shutdown")
    public AmazonSQSAsync amazonSQSClient() {
        AmazonSQSAsyncClient awsSQSAsyncClient = new AmazonSQSAsyncClient(new DefaultAWSCredentialsProviderChain());
        return awsSQSAsyncClient;
    }
}
Other info:
- Spring Boot version: Dalston.RELEASE
- Spring Cloud AWS version: 1.2.1.RELEASE
- Both the spring-cloud-aws-autoconfigure and spring-cloud-aws-messaging packages are on the classpath
Any thoughts?
As spencergibb suggested in his comment above, changing the method's visibility from private to public worked.
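In other words, with the listener from the question, the only change needed was the access modifier:
// was: private void handleEventFromSQSQueue(...)
@SqsListener(value = "sqsName", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void handleEventFromSQSQueue(@NotificationMessage String body) throws IOException {
    // body unchanged
}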

spring-boot-starter-jta-atomikos and spring-boot-starter-batch

Is it possible to use both these starters in a single application?
I want to load records from a CSV file into a database table. The Spring Batch tables are stored in a different database, so I assume I need to use JTA to handle the transaction.
Whenever I add @EnableBatchProcessing to my @Configuration class it configures a PlatformTransactionManager, which stops this being auto-configured by Atomikos.
Are there any spring boot + batch + jta samples out there that show how to do this?
Many Thanks,
James
I just went through this and found something that seems to work. As you note, @EnableBatchProcessing causes a DataSourceTransactionManager to be created, which messes up everything. I'm using modular=true in @EnableBatchProcessing, so the ModularBatchConfiguration class is activated.
What I did was stop using @EnableBatchProcessing and instead copy the entire ModularBatchConfiguration class into my project. Then I commented out the transactionManager() method, since the Atomikos configuration creates the JtaTransactionManager. I also had to override the jobRepository() method, because that was hardcoded to use the DataSourceTransactionManager created inside DefaultBatchConfiguration.
I also had to explicitly import the JtaAutoConfiguration class. This wires everything up correctly (according to the Actuator's "beans" endpoint, thank god for that). But when you run it, the transaction manager throws an exception because something somewhere sets an explicit transaction isolation level. So I also wrote a BeanPostProcessor to find the transaction manager and call txnMgr.setAllowCustomIsolationLevels(true).
Now everything works, but while the job is running I cannot fetch the current data from the batch_step_execution table using JdbcTemplate, even though I can see the data in SQLYog. This must have something to do with transaction isolation, but I haven't been able to understand it yet.
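A rough sketch of such a post-processor (the class name is mine, not from the original answer):
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;
import org.springframework.transaction.jta.JtaTransactionManager;

@Component
public class AllowCustomIsolationLevelsPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        // relax the strict isolation-level check on the JTA transaction manager
        if (bean instanceof JtaTransactionManager) {
            ((JtaTransactionManager) bean).setAllowCustomIsolationLevels(true);
        }
        return bean;
    }
}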
Here is what I have for my configuration class, copied from Spring and modified as noted above. PS: I have my DataSource that points to the database with the batch tables annotated as @Primary. Also, I changed my DataSource beans to be instances of org.apache.tomcat.jdbc.pool.XADataSource; I'm not sure if that's necessary.
@Configuration
@Import(ScopeConfiguration.class)
public class ModularJtaBatchConfiguration implements ImportAware
{
    @Autowired(required = false)
    private Collection<DataSource> dataSources;

    private BatchConfigurer configurer;

    @Autowired
    private ApplicationContext context;

    @Autowired(required = false)
    private Collection<BatchConfigurer> configurers;

    private AutomaticJobRegistrar registrar = new AutomaticJobRegistrar();

    @Bean
    public JobRepository jobRepository(DataSource batchDataSource, JtaTransactionManager jtaTransactionManager) throws Exception
    {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.setTransactionManager(jtaTransactionManager);
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Bean
    public JobLauncher jobLauncher() throws Exception {
        return getConfigurer(configurers).getJobLauncher();
    }

//    @Bean
//    public PlatformTransactionManager transactionManager() throws Exception {
//        return getConfigurer(configurers).getTransactionManager();
//    }

    @Bean
    public JobExplorer jobExplorer() throws Exception {
        return getConfigurer(configurers).getJobExplorer();
    }

    @Bean
    public AutomaticJobRegistrar jobRegistrar() throws Exception {
        registrar.setJobLoader(new DefaultJobLoader(jobRegistry()));
        for (ApplicationContextFactory factory : context.getBeansOfType(ApplicationContextFactory.class).values()) {
            registrar.addApplicationContextFactory(factory);
        }
        return registrar;
    }

    @Bean
    public JobBuilderFactory jobBuilders(JobRepository jobRepository) throws Exception {
        return new JobBuilderFactory(jobRepository);
    }

    @Bean
    // hopefully this will autowire the Atomikos JTA txn manager
    public StepBuilderFactory stepBuilders(JobRepository jobRepository, JtaTransactionManager ptm) throws Exception {
        return new StepBuilderFactory(jobRepository, ptm);
    }

    @Bean
    public JobRegistry jobRegistry() throws Exception {
        return new MapJobRegistry();
    }

    @Override
    public void setImportMetadata(AnnotationMetadata importMetadata) {
        AnnotationAttributes enabled = AnnotationAttributes.fromMap(importMetadata.getAnnotationAttributes(
                EnableBatchProcessing.class.getName(), false));
        Assert.notNull(enabled,
                "@EnableBatchProcessing is not present on importing class " + importMetadata.getClassName());
    }

    protected BatchConfigurer getConfigurer(Collection<BatchConfigurer> configurers) throws Exception {
        if (this.configurer != null) {
            return this.configurer;
        }
        if (configurers == null || configurers.isEmpty()) {
            if (dataSources == null || dataSources.isEmpty()) {
                throw new UnsupportedOperationException("You are screwed");
            } else if (dataSources != null && dataSources.size() == 1) {
                DataSource dataSource = dataSources.iterator().next();
                DefaultBatchConfigurer configurer = new DefaultBatchConfigurer(dataSource);
                configurer.initialize();
                this.configurer = configurer;
                return configurer;
            } else {
                throw new IllegalStateException("To use the default BatchConfigurer the context must contain no more than" +
                        "one DataSource, found " + dataSources.size());
            }
        }
        if (configurers.size() > 1) {
            throw new IllegalStateException(
                    "To use a custom BatchConfigurer the context must contain precisely one, found "
                            + configurers.size());
        }
        this.configurer = configurers.iterator().next();
        return this.configurer;
    }
}

@Configuration
class ScopeConfiguration {

    private StepScope stepScope = new StepScope();

    private JobScope jobScope = new JobScope();

    @Bean
    public StepScope stepScope() {
        stepScope.setAutoProxy(false);
        return stepScope;
    }

    @Bean
    public JobScope jobScope() {
        jobScope.setAutoProxy(false);
        return jobScope;
    }
}
I found a solution where I was able to keep @EnableBatchProcessing but had to implement BatchConfigurer and the Atomikos beans; see my full answer in this SO answer.
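For anyone following that route, a bare-bones sketch of a BatchConfigurer that hands the JTA transaction manager to Batch might look like this (it assumes Atomikos already provides a JtaTransactionManager bean; a real implementation would cache the repository rather than rebuild it):
@Bean
public BatchConfigurer batchConfigurer(DataSource batchDataSource, JtaTransactionManager jta) {
    return new BatchConfigurer() {

        @Override
        public JobRepository getJobRepository() throws Exception {
            JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
            factory.setDataSource(batchDataSource);
            factory.setTransactionManager(jta); // JTA instead of DataSourceTransactionManager
            factory.afterPropertiesSet();
            return factory.getObject();
        }

        @Override
        public PlatformTransactionManager getTransactionManager() {
            return jta;
        }

        @Override
        public JobLauncher getJobLauncher() throws Exception {
            SimpleJobLauncher launcher = new SimpleJobLauncher();
            launcher.setJobRepository(getJobRepository());
            launcher.afterPropertiesSet();
            return launcher;
        }

        @Override
        public JobExplorer getJobExplorer() throws Exception {
            JobExplorerFactoryBean factory = new JobExplorerFactoryBean();
            factory.setDataSource(batchDataSource);
            factory.afterPropertiesSet();
            return factory.getObject();
        }
    };
}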

Spring Boot could not autowire and run

I am unable to run the attached Spring Boot sample application. It has an AMQP starter, requiring RabbitMQ. Fundamentally, it is a simple application that just sends a message to a RabbitMQ exchange with a queue bound to it. I get the following error:
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: com.company.messaging.MessageDeliveryManager com.company.exec.Application.messageDeliveryManager; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.company.messaging.MessageDeliveryManager] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations:
{#org.springframework.beans.factory.annotation.Autowired(required=true)}
Application.java
package com.company.exec;

@SpringBootApplication
public class Application implements CommandLineRunner {

    @Autowired
    MessageDeliveryManager messageDeliveryManager;

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    public void run(String... args) throws Exception {
        messageDeliveryManager.sendMessage("test message");
    }
}
MessageDeliveryManager.java
package com.company.messaging;

public interface MessageDeliveryManager {
    void sendMessage(String message);
}
MessageDeliveryManagerImpl.java
package com.company.messaging;

public class MessageDeliveryManagerImpl implements MessageDeliveryManager {

    @Value("${app.exchangeName}")
    String exchangeName;

    @Value("${app.queueName}")
    String queueName;

    @Autowired
    RabbitTemplate rabbitTemplate;

    @Bean
    Queue queue() {
        return new Queue(queueName, false);
    }

    @Bean
    DirectExchange exchange() {
        return new DirectExchange(exchangeName);
    }

    @Bean
    Binding binding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(queueName);
    }

    public void sendMessage(String message) {
        rabbitTemplate.convertAndSend(queueName, message);
    }
}
I would really appreciate if someone can review and provide a suggestion on what I am doing wrong.
Since you have a package tree like this:
com.company.exec
com.company.messaging
and just use a default #SpringBootApplication, it just doesn't see your MessageDeliveryManager and its implementation. That's because #ComponentScan (the meta-annotation on the #SpringBootApplication) does a scan only for the current package and its subpackages.
To make it worked you should add this:
#SpringBootApplication
#ComponentScan("com.company")
Or just move your Application to the root package - com.company.
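Note that component scanning only picks up classes carrying a stereotype annotation; assuming MessageDeliveryManagerImpl is not registered some other way, it would also need one, e.g.:
@Service
public class MessageDeliveryManagerImpl implements MessageDeliveryManager {
    // ...
}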
