I'm trying to use the Spring Cloud Kafka Streams binder (2.4.3) to consume and produce Avro messages. I'm able to consume messages and produce records with a single function, but I'm looking for a producer that I can use multiple times in the application. StreamBridge seems like an option for this, so I tried the approach below, but it does not work. Did I miss anything?
@Autowired
StreamBridge streamBridge;

@Bean
public Consumer<KStream<EventKey, Event>> process() {
    // Consume each record and forward its value through StreamBridge
    return input -> input.peek((k, v) -> sendToValue(v));
}

private Event sendToValue(Event event) {
    System.out.println(event);
    streamBridge.send("process-out-0", event);
    System.out.println("Message sent");
    return event;
}
Binder:
spring:
  application:
    name: ${applicaton-name}
  cloud:
    stream:
      function:
        definition: process
      bindings:
        process-in-0:
          destination: ${input-topic-name}
          contentType: application/Avro
        process-out-0:
          destination: ${enriched-topic-name}
          contentType: application/Avro
      binding-retry-interval: 30
      kafka:
        streams:
          binder:
            brokers: ${kafka-broker}
            application-id: ${consumer-group-name}
            auto-create-topics: false
            auto-add-partitions: false
            configuration:
              processing.guarantee: at_least_once
              auto.offset.reset: earliest
              schema.registry.url: ${kafka-schema-registry}
              auto-register-schema: false
              security.protocol: SSL
              useNativeEncoding: true
              specific.avro.reader: true
Error:
[2021-03-16 21:51:52,219] [INFO] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.c.s.b.DefaultBinderFactory DefaultBinderFactory.java:243] Creating binder: ktable
[2021-03-16 21:51:52,312] [INFO] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.c.s.b.DefaultBinderFactory DefaultBinderFactory.java:343] Caching the binder: ktable
[2021-03-16 21:51:52,313] [INFO] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.c.s.b.DefaultBinderFactory DefaultBinderFactory.java:347] Retrieving cached binder: ktable
[2021-03-16 21:51:52,313] [INFO] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.c.s.b.DefaultBinderFactory DefaultBinderFactory.java:243] Creating binder: kafka
[2021-03-16 21:51:52,372] [WARN] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.c.a.AnnotationConfigApplicationContext AbstractApplicationContext.java:559] Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaMessageChannelBinder' defined in org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration: Unexpected exception during bean creation; nested exception is java.lang.NoClassDefFoundError: org/springframework/integration/support/management/ManageableLifecycle
[2021-03-16 21:51:52,374] [INFO] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.b.a.l.ConditionEvaluationReportLoggingListener ConditionEvaluationReportLoggingListener.java:136]
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
[2021-03-16 21:51:52,379] [ERROR] [latest-c8b6dd0c-b376-4cdd-a72d-17e97701d1d5-StreamThread-1] [o.s.b.SpringApplication SpringApplication.java:837] Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaMessageChannelBinder' defined in org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration: Unexpected exception during bean creation; nested exception is java.lang.NoClassDefFoundError: org/springframework/integration/support/management/ManageableLifecycle
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:529)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
Update with a version change:
[2021-03-17 09:56:28,043] [INFO] [latest-0ea7c0a5-21a4-464b-bf42-a5c35b6687dc-StreamThread-1] [o.a.k.c.u.AppInfoParser AppInfoParser.java:117] Kafka version: 6.0.1-ccs
[2021-03-17 09:56:28,043] [INFO] [latest-0ea7c0a5-21a4-464b-bf42-a5c35b6687dc-StreamThread-1] [o.a.k.c.u.AppInfoParser AppInfoParser.java:118] Kafka commitId: 9c1fbb3db1e0d69d
[2021-03-17 09:56:28,043] [INFO] [latest-0ea7c0a5-21a4-464b-bf42-a5c35b6687dc-StreamThread-1] [o.a.k.c.u.AppInfoParser AppInfoParser.java:119] Kafka startTimeMs: 1615992988043
[2021-03-17 09:56:28,047] [WARN] [kafka-admin-client-thread | adminclient-2] [o.a.k.c.NetworkClient NetworkClient.java:757] [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[2021-03-17 09:56:28,151] [WARN] [kafka-admin-client-thread | adminclient-2] [o.a.k.c.NetworkClient NetworkClient.java:757] [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[2021-03-17 09:56:28,357] [WARN] [kafka-admin-client-thread | adminclient-2] [o.a.k.c.NetworkClient NetworkClient.java:757] [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[2021-03-17 09:56:28,568] [WARN] [kafka-admin-client-thread | adminclient-2] [o.a.k.c.NetworkClient NetworkClient.java:757] [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[2021-03-17 09:56:28,987] [WARN] [kafka-admin-client-thread | adminclient-2] [o.a.k.c.NetworkClient NetworkClient.java:757] [AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
java.lang.NoClassDefFoundError: org/springframework/integration/support/management/ManageableLifecycle
You have an older version of spring-integration-core on the class path; that class was added in version 5.4.
When using Boot, you should never declare dependency versions yourself; let Boot bring in the right version using its dependency management feature.
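For example, with Maven you would declare the binder dependencies without explicit versions and let the Boot and Spring Cloud BOMs resolve them (a sketch; the artifact IDs assume the standard Spring Cloud Stream Kafka binders):

<!-- No <version> tags: the spring-boot-starter-parent and spring-cloud-dependencies BOMs pick compatible versions -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>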
Related
I want to run integration tests that test my Kafka listener and Avro serialization. This requires a Kafka broker and a Schema Registry (and transitively also a Zookeeper).
When testing I currently have to use a docker-compose.yml, but I want to reduce user error by building the required containers via Testcontainers. The Kafka and Zookeeper instances start up neatly and seem to work just fine: my application can create the required topics, the listener is subscribed, and I can even send messages via the Kafka console producer.
What does not work is the Schema Registry. The container starts and apparently connects to the ZK, but cannot establish a connection to the broker. It retries connecting for some time until it times out, and subsequently the container is stopped. I therefore cannot register and read my Avro schemas for (de)serialization in my tests, which fail because of this.
I can't find the reason why the SR can apparently connect to the ZK but can't find my broker.
Did someone run into this problem as well? Did you manage to get this running? If so, how?
I need the Kafka and Schema Registry testcontainers to be fully available for my tests, so omitting either of them is not an option.
I could also keep using the docker-compose.yml, but I would really like to set up my test environment fully programmatically.
The schema registry container logs the following:
2023-02-08 16:56:09 [2023-02-08 15:56:09,556] INFO Session establishment complete on server zookeeper/192.168.144.2:2181, session id = 0x1000085b81e0003, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
2023-02-08 16:56:09 [2023-02-08 15:56:09,696] INFO Session: 0x1000085b81e0003 closed (org.apache.zookeeper.ZooKeeper)
2023-02-08 16:56:09 [2023-02-08 15:56:09,696] INFO EventThread shut down for session: 0x1000085b81e0003 (org.apache.zookeeper.ClientCnxn)
2023-02-08 16:56:09 [2023-02-08 15:56:09,787] INFO AdminClientConfig values:
/* Omitted for brevity */
(org.apache.kafka.clients.admin.AdminClientConfig)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka version: 7.3.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka commitId: 8628b0341c3c4676 (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka startTimeMs: 1675871770281 (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,308] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:10 [2023-02-08 15:56:10,313] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
/* These lines repeat a few times until the container times out and exits. */
2023-02-08 16:56:50 [2023-02-08 15:56:50,144] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:50 [2023-02-08 15:56:50,144] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:50 [2023-02-08 15:56:50,298] ERROR Error while getting broker list. (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:50 java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2023-02-08 16:56:50 at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
2023-02-08 16:56:50 at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
2023-02-08 16:56:50 at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
2023-02-08 16:56:50 at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:147)
2023-02-08 16:56:50 at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
2023-02-08 16:56:50 Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2023-02-08 16:56:51 [2023-02-08 15:56:51,103] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:51 [2023-02-08 15:56:51,103] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:51 [2023-02-08 15:56:51,300] INFO Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ... (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:51 [2023-02-08 15:56:51,300] ERROR Expected 1 brokers but found only 0. Brokers found []. (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:51 Using log4j config /etc/schema-registry/log4j.properties
My base test class; ITs that need Kafka extend this class:
@Testcontainers
@SpringBootTest
@Slf4j
public class AbstractIT {

    private static final Network network = Network.newNetwork();

    protected static GenericContainer ZOOKEEPER = new GenericContainer<>(
            DockerImageName.parse("confluentinc/cp-zookeeper:7.2.0"))
            .withNetwork(network)
            .withNetworkAliases("zookeeper")
            .withEnv(Map.of(
                    "ZOOKEEPER_CLIENT_PORT", "2181",
                    "ZOOKEEPER_TICK_TIME", "2000"));

    protected static final KafkaContainer KAFKA = new KafkaContainer(
            DockerImageName.parse("confluentinc/cp-kafka"))
            .withExternalZookeeper("zookeeper:2181")
            .dependsOn(ZOOKEEPER)
            .withNetwork(network)
            .withNetworkAliases("broker");

    protected static final GenericContainer SCHEMAREGSISTRY = new GenericContainer<>(
            DockerImageName.parse("confluentinc/cp-schema-registry"))
            .dependsOn(ZOOKEEPER, KAFKA)
            .withEnv(Map.of(
                    "SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL", "zookeeper:2181",
                    "SCHEMA_REGISTRY_HOST_NAME", "schemaregistry",
                    "SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8085",
                    "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "broker:9092"))
            .withNetwork(network)
            .withNetworkAliases("schemaregistry");

    @DynamicPropertySource
    static void registerPgProperties(DynamicPropertyRegistry registry) {
        registry.add("bootstrap.servers", KAFKA::getBootstrapServers);
        registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
        registry.add("spring.kafka.consumer.auto-offset-reset", () -> "earliest");
        registry.add("spring.data.mongodb.uri", MONGODB::getConnectionString);
        registry.add("spring.data.mongodb.database", () -> "test");
    }

    // container startup, shutdown as well as topic creation omitted for brevity
}
My docker-compose.yml that I want to replicate with testcontainers
version: "3.5"
services:
zookeeper:
image: confluentinc/cp-zookeeper:7.2.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-kafka:7.2.0
hostname: broker
container_name: broker
restart: always
depends_on:
- zookeeper
ports:
- "29092:29092"
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_SCHEMA_REGISTRY_URL: "schemaregistry:8085"
schemaregistry:
container_name: schemaregistry
hostname: schemaregistry
image: confluentinc/cp-schema-registry:5.1.2
restart: always
depends_on:
- zookeeper
environment:
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
SCHEMA_REGISTRY_HOST_NAME: schemaregistry
SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8085"
ports:
- "8085:8085"
volumes:
- "./src/main/avro/:/etc/schema"
Here is a working setup of Kafka & Schema Registry for Testcontainers. It could help you find the issue in your setup.
Schema Registry
public class SchemaRegistryContainer extends GenericContainer<SchemaRegistryContainer> {

    public static final String SCHEMA_REGISTRY_IMAGE = "confluentinc/cp-schema-registry";
    public static final int SCHEMA_REGISTRY_PORT = 8081;

    public SchemaRegistryContainer() {
        this(CONFLUENT_PLATFORM_VERSION);
    }

    public SchemaRegistryContainer(String version) {
        super(SCHEMA_REGISTRY_IMAGE + ":" + version);
        waitingFor(Wait.forHttp("/subjects").forStatusCode(200));
        withExposedPorts(SCHEMA_REGISTRY_PORT);
    }

    public SchemaRegistryContainer withKafka(KafkaContainer kafka) {
        return withKafka(kafka.getNetwork(), kafka.getNetworkAliases().get(0) + ":9092");
    }

    public SchemaRegistryContainer withKafka(Network network, String bootstrapServers) {
        withNetwork(network);
        withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry");
        withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081");
        withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "PLAINTEXT://" + bootstrapServers);
        return self();
    }
}
Kafka + Schema Registry
public static final String CONFLUENT_PLATFORM_VERSION = "5.5.1";

private static final Network KAFKA_NETWORK = Network.newNetwork();
private static final DockerImageName KAFKA_IMAGE = DockerImageName.parse("confluentinc/cp-kafka")
        .withTag(CONFLUENT_PLATFORM_VERSION);
private static final KafkaContainer KAFKA = new KafkaContainer(KAFKA_IMAGE)
        .withNetwork(KAFKA_NETWORK)
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_MIN_ISR", "1")
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR", "1");
private static final SchemaRegistryContainer SCHEMA_REGISTRY =
        new SchemaRegistryContainer(CONFLUENT_PLATFORM_VERSION);

@BeforeAll
static void startKafkaContainer() {
    KAFKA.start();
    SCHEMA_REGISTRY.withKafka(KAFKA).start();
    // init kafka properties for consumer or producer
    // ...
    kafkaProperties.setBootstrapServers(KAFKA.getBootstrapServers());
    kafkaProperties.setSchemaRegistryUrl("http://" + SCHEMA_REGISTRY.getHost() + ":" + SCHEMA_REGISTRY.getFirstMappedPort());
}
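If your application reads the registry location from Spring configuration instead, you can expose the mapped URL the same way as the bootstrap servers. A minimal sketch, assuming a test base class like the one above; the property key schema-registry.url is a hypothetical name, so use whatever key your application actually binds:

// Lives in the test base class; the suppliers are resolved lazily, after the containers have started.
@DynamicPropertySource
static void registerKafkaProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
    // Hypothetical property key; adjust to your own configuration.
    registry.add("schema-registry.url",
            () -> "http://" + SCHEMA_REGISTRY.getHost() + ":" + SCHEMA_REGISTRY.getFirstMappedPort());
}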
As the title says, I'm trying to switch to CloudAMQP for deployment purposes.
My application.properties looks as follows:
#spring.rabbitmq.host = rabbitmq
#spring.rabbitmq.host = localhost
spring.rabbitmq.host=cow.rmq2.cloudamqp.com
spring.rabbitmq.username=vvecyvwz
spring.rabbitmq.password=mypassword
The error logs:
2022-05-15 13:48:54.656 INFO 105060 --- [ntContainer#0-2] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [cow.rmq2.cloudamqp.com:5672]
2022-05-15 13:48:54.748 WARN 105060 --- [.93.32.234:5672] c.r.c.impl.ForgivingExceptionHandler : An unexpected connection driver error occurred (Exception message: Socket closed)
2022-05-15 13:48:59.841 INFO 105060 --- [ntContainer#0-2] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer#665f577: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
2022-05-15 13:48:59.842 INFO 105060 --- [ntContainer#0-3] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [cow.rmq2.cloudamqp.com:5672]
2022-05-15 13:48:59.941 WARN 105060 --- [.93.32.234:5672] c.r.c.impl.ForgivingExceptionHandler : An unexpected connection driver error occurred (Exception message: Socket closed)
2022-05-15 13:48:59.941 ERROR 105060 --- [ntContainer#0-3] o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s).
org.springframework.amqp.AmqpIOException: java.io.IOException
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:70) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:602) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:724) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.createConnection(ConnectionFactoryUtils.java:252) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:2175) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2148) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2128) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueInfo(RabbitAdmin.java:463) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueProperties(RabbitAdmin.java:447) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.attemptDeclarations(AbstractMessageListenerContainer.java:1925) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.redeclareElementsIfNecessary(AbstractMessageListenerContainer.java:1906) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.initialize(SimpleMessageListenerContainer.java:1349) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1195) ~[spring-rabbit-2.4.2.jar:2.4.2]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: java.io.IOException: null
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:129) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:125) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:147) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:439) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1225) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1173) ~[amqp-client-5.13.1.jar:5.13.1]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connectAddresses(AbstractConnectionFactory.java:640) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connect(AbstractConnectionFactory.java:615) ~[spring-rabbit-2.4.2.jar:2.4.2]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:565) ~[spring-rabbit-2.4.2.jar:2.4.2]
... 12 common frames omitted
Caused by: com.rabbitmq.client.ShutdownSignalException: connection error; protocol method: #method<connection.close>(reply-code=530, reply-text=NOT_ALLOWED - access to vhost '/' refused for user 'vvecyvwz', class-id=10, method-id=40)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:36) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:502) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:293) ~[amqp-client-5.13.1.jar:5.13.1]
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:141) ~[amqp-client-5.13.1.jar:5.13.1]
... 18 common frames omitted
I genuinely have no idea what's going wrong with my application. The host, username and password are 100% correct; I'm not sure where the problem could be.
I had the same error when I switched to Cloud AMQP. As you mentioned, the virtual host was missing from the properties:
spring.rabbitmq.virtual-host=vvecyvwz
spring.rabbitmq.host=cow.rmq2.cloudamqp.com
spring.rabbitmq.username=vvecyvwz
spring.rabbitmq.password=mypassword
spring.rabbitmq.port=5672
or you can do it like this:
spring.rabbitmq.addresses=amqps://vvecyvwz:mypassword@cow.rmq2.cloudamqp.com/vvecyvwz
OK, so the problem was that I had to add a virtual host, which I had definitely already tried programmatically like this:
@Value("${spring.rabbitmq.host}")
private String host;

@Value("${spring.rabbitmq.username}")
private String username;

@Value("${spring.rabbitmq.password}")
private String password;

@Bean
public AmqpTemplate template() {
    CachingConnectionFactory cf = new CachingConnectionFactory(host);
    cf.setUsername(username);
    cf.setPassword(password);
    cf.setVirtualHost(username);
    final RabbitTemplate rabbitTemplate = new RabbitTemplate(cf);
    rabbitTemplate.setMessageConverter(converter());
    return rabbitTemplate;
}
But apparently this doesn't work, and I had to specify it in application.properties instead.
In my case, it was due to a missing leading / in the vhost value in the config.
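For example (a hypothetical vhost name; the point is the leading slash):

spring.rabbitmq.virtual-host=/myvhost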
My Spring Boot 2.3.1 app with SCS Horsham.SR6 was using the Kafka Streams binder. I needed to add a Kafka producer that would be used in another part of the application, so I added the Kafka binder. The problem is that the producer is not working, throwing the exception below:
19:49:40.082 [scheduling-1] [900cdeb11106e199] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: Provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:332)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:148)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:79)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:222)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:90)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:152)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleProducerBinding$4(BindingService.java:336)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:368)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:342)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:319)
This is my configuration:
spring:
  cloud:
    function:
      definition: myProducer
    stream:
      bindings:
        myKStream-in-0:
          destination: my-kstream-topic
          producer:
            useNativeEncoding: true
        myProducer-out-0:
          destination: producer-topic
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${kafka.brokers:localhost}
          min-partition-count: 3
          replication-factor: 3
          producerProperties:
            enable:
              idempotence: true
            retries: 0x7fffffff
            acks: all
            key:
              serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            value:
              serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            schema:
              registry:
                url: ${schema-registry.url:http://localhost:8081}
            request:
              timeout:
                ms: 5000
        streams:
          binder:
            brokers: ${kafka.brokers:localhost}
            configuration:
              application:
                id: ${spring.application.name}
                server: ${POD_IP:localhost}:${local.server.port:8080}
              schema:
                registry:
                  url: ${schema-registry.url}
              key:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              value:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              processing:
                guarantee: exactly_once
              replication:
                factor: 3
              group:
                id: kpi
            deserialization-exception-handler: logandcontinue
            min-partition-count: 3
            replication-factor: 3
            state-store-retry:
              max-attempts: 20
              backoff-period: 1500
What could be the problem here?
UPDATE
I tweaked the config as follows:
spring:
  cloud:
    function:
      definition: myProducer
    stream:
      function:
        definition: myKStream
Now I can't see any exception, but the messages don't get to the topic.
In another application that only uses the kafka binder it works perfectly:
@Configuration
class KafkaProducerConfiguration {

    @Bean
    fun myProducerProcessor(): EmitterProcessor<Message<XXX>> {
        return EmitterProcessor.create()
    }

    @Bean
    fun myProducer(): Supplier<Flux<Message<XXX>>> {
        return Supplier { myProducerProcessor() }
    }
}
...
@Component
class XXXProducer(@Qualifier("myProducerProcessor") private val myProducerProcessor: EmitterProcessor<Message<XXX>>) {

    fun send(...): Mono<Void> {
        return Mono.defer {
            myProducerProcessor.onNext(message)
            Mono.empty()
        }
    }
}
UPDATE 2
I set logging.level.org.springframework.cloud.stream: debug
In the logs the following trace shows up:
o.s.c.s.binder.DefaultBinderFactory - Creating binder: kstream
However, there is nothing about 'Creating binder: kafka'.
I was missing the multi-binder config for Kafka and Kafka Streams (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.0.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#_multi_binders_with_kafka_streams_based_binders_and_regular_kafka_binder).
Thus, I had to set: spring.cloud.stream.function.definition=myProducer;myKStream
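In YAML form, the fix amounts to listing both functions in one stream-level definition, so that both the message-channel Kafka binder and the Kafka Streams binder get created (a sketch reusing the function names from the question):

spring:
  cloud:
    stream:
      function:
        definition: myProducer;myKStream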
I was trying to connect to a Kafka instance on AWS through a local Spring Boot API.
I am able to connect, but while listening to the topic it throws the exception below, even though the new topics were created successfully by the Spring Boot API.
I am unable to publish any messages as well.
java.io.IOException: Can't resolve address: ip-xxx-xx-xx-xx.ec2.internal:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.1.jar:na]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_192]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_192]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.1.jar:na]
... 30 common frames omitted
2019-07-17 15:36:13.581 WARN 3709 --- [ main] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=group_id] Error connecting to node ip-172-31-80-50.ec2.internal:9092 (id: 0 rack: null)
I allowed these ports as well (security group inbound rules):
Custom TCP Rule | TCP | 2181 | 0.0.0.0/0
Custom TCP Rule | TCP | 9092 | 0.0.0.0/0
server:
  port: 8081
spring:
  kafka:
    consumer:
      bootstrap-servers: xx.xx.xx.xx:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: xx.xx.xx.xx:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
@KafkaListener(topics = "ConsumerTest", groupId = "group_id")
public void consume(String message) throws IOException {
    logger.info(String.format("#### -> Consumed message -> %s", message));
}
java.io.IOException: Can't resolve address: ip-xxx-xx-xx-xx.ec2.internal:9092
Error connecting to node ip-172-31-80-50.ec2.internal:9092
When consumers connect to the broker, they get back the metadata of the broker for the partition from which they're reading data. What your client is getting back here is the advertised.listener of the Kafka broker. So whilst you connect to the broker on its public address, it returns to your client the internal address of the machine.
To fix this, you need to set up your listeners correctly on your brokers. See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details.
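For illustration, a broker-side fix usually looks something like this in server.properties (a sketch for a single broker; the EXTERNAL host and port are placeholders you would replace with your EC2 public DNS and an open port):

# server.properties
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
advertised.listeners=INTERNAL://ip-172-31-80-50.ec2.internal:9092,EXTERNAL://<ec2-public-dns>:19092
inter.broker.listener.name=INTERNAL

Clients inside the VPC then connect on 9092 and are given the internal hostname, while external clients connect on 19092 and are given the resolvable public address.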
I am trying to implement Spring Integration with MQTT. I am using Mosquitto as the MQTT broker, with reference to the docs provided in the following link. I have created a project and added all the required jar files. Below is the MqttJavaApplication I execute:
public class MqttJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(MqttJavaApplication.class)
                .web(false)
                .run(args);
    }

    @Bean
    public MessageChannel mqttInputChannel() {
        return new DirectChannel();
    }

    @Bean
    public MqttPahoMessageDrivenChannelAdapter inbound() {
        MqttPahoMessageDrivenChannelAdapter adapter =
                new MqttPahoMessageDrivenChannelAdapter("tcp://localhost:1883", "test", "sample");
        adapter.setCompletionTimeout(5000);
        adapter.setConverter(new DefaultPahoMessageConverter());
        adapter.setQos(1);
        adapter.setOutputChannel(mqttInputChannel());
        return adapter;
    }

    @Bean
    @ServiceActivator(inputChannel = "mqttInputChannel")
    public MessageHandler handler() {
        return new MessageHandler() {
            @Override
            public void handleMessage(Message<?> message) {
                System.out.println("Test##########" + message.getPayload());
            }
        };
    }
}
I am getting the following error when I publish a message via the MQTT broker.
[2015-11-23 10:08:19.545] boot - 8100 INFO [main] --- AnnotationConfigApplicationContext: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#2d0e1c: startup date [Mon Nov 23 10:08:19 IST 2015]; root of context hierarchy
[2015-11-23 10:08:19.831] boot - 8100 INFO [main] --- PropertiesFactoryBean: Loading properties file from URL [jar:file:/D:/IoTWorkspace/MQTTTest/WebContent/WEB-INF/lib/spring-integration-core-4.2.1.RELEASE.jar!/META-INF/spring.integration.default.properties]
[2015-11-23 10:08:20.001] boot - 8100 INFO [main] --- DefaultLifecycleProcessor: Starting beans in phase 1073741823
[2015-11-23 10:08:20.104] boot - 8100 INFO [main] --- MqttPahoMessageDrivenChannelAdapter: started inbound
[2015-11-23 10:08:20.112] boot - 8100 INFO [main] --- MqttJavaApplication: Started MqttJavaApplication in 1.602 seconds (JVM running for 2.155)
[2015-11-23 10:13:04.564] boot - 8100 ERROR [MQTT Call: test] --- MqttPahoMessageDrivenChannelAdapter: Unhandled exception for GenericMessage [payload=Testing Subscription, headers={timestamp=1448253784563, id=cd5be974-3b19-8317-47eb-1c139725be24, mqtt_qos=0, mqtt_topic=sample, mqtt_retained=false, mqtt_duplicate=false}]
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'unknown.channel.name'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:81)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:442)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:392)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:105)
at org.springframework.integration.mqtt.inbound.MqttPahoMessageDrivenChannelAdapter.messageArrived(MqttPahoMessageDrivenChannelAdapter.java:262)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.handleMessage(CommsCallback.java:336)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.run(CommsCallback.java:148)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:153)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:120)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
... 10 more
[2015-11-23 10:13:04.613] boot - 8100 ERROR [MQTT Call: test] --- MqttPahoMessageDrivenChannelAdapter: Lost connection:MqttException; retrying...
[2015-11-23 10:13:04.614] boot - 8100 ERROR [MQTT Call: test] --- MqttPahoMessageDrivenChannelAdapter: Failed to schedule reconnect
java.lang.NullPointerException
at org.springframework.integration.mqtt.inbound.MqttPahoMessageDrivenChannelAdapter.scheduleReconnect(MqttPahoMessageDrivenChannelAdapter.java:228)
at org.springframework.integration.mqtt.inbound.MqttPahoMessageDrivenChannelAdapter.connectionLost(MqttPahoMessageDrivenChannelAdapter.java:255)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.connectionLost(CommsCallback.java:229)
at org.eclipse.paho.client.mqttv3.internal.ClientComms.shutdownConnection(ClientComms.java:339)
at org.eclipse.paho.client.mqttv3.internal.CommsCallback.run(CommsCallback.java:171)
at java.lang.Thread.run(Thread.java:722)
[2015-11-23 10:13:04.617] boot - 8100 INFO [Thread-0] --- AnnotationConfigApplicationContext: Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#2d0e1c: startup date [Mon Nov 23 10:08:19 IST 2015]; root of context hierarchy
[2015-11-23 10:13:04.619] boot - 8100 INFO [Thread-0] --- DefaultLifecycleProcessor: Stopping beans in phase 1073741823
[2015-11-23 10:13:04.622] boot - 8100 ERROR [Thread-0] --- MqttPahoMessageDrivenChannelAdapter: Exception while unsubscribing
Client is not connected (32104)
at org.eclipse.paho.client.mqttv3.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:27)
at org.eclipse.paho.client.mqttv3.internal.ClientComms.sendNoWait(ClientComms.java:132)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.unsubscribe(MqttAsyncClient.java:707)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.unsubscribe(MqttAsyncClient.java:682)
at org.springframework.integration.mqtt.inbound.MqttPahoMessageDrivenChannelAdapter.doStop(MqttPahoMessageDrivenChannelAdapter.java:124)
at org.springframework.integration.endpoint.AbstractEndpoint.doStop(AbstractEndpoint.java:145)
at org.springframework.integration.endpoint.AbstractEndpoint.stop(AbstractEndpoint.java:128)
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:229)
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:363)
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:202)
at org.springframework.context.support.DefaultLifecycleProcessor.onClose(DefaultLifecycleProcessor.java:118)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:966)
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:893)
[2015-11-23 10:13:04.623] boot - 8100 ERROR [Thread-0] --- MqttPahoMessageDrivenChannelAdapter: Exception while disconnecting
Client is disconnected (32101)
at org.eclipse.paho.client.mqttv3.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:27)
at org.eclipse.paho.client.mqttv3.internal.ClientComms.disconnect(ClientComms.java:405)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.disconnect(MqttAsyncClient.java:524)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.disconnect(MqttAsyncClient.java:493)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.disconnect(MqttAsyncClient.java:500)
at org.springframework.integration.mqtt.inbound.MqttPahoMessageDrivenChannelAdapter.doStop(MqttPahoMessageDrivenChannelAdapter.java:131)
at org.springframework.integration.endpoint.AbstractEndpoint.doStop(AbstractEndpoint.java:145)
at org.springframework.integration.endpoint.AbstractEndpoint.stop(AbstractEndpoint.java:128)
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:229)
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:363)
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:202)
at org.springframework.context.support.DefaultLifecycleProcessor.onClose(DefaultLifecycleProcessor.java:118)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:966)
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:893)
[2015-11-23 10:13:04.624] boot - 8100 INFO [Thread-0] --- MqttPahoMessageDrivenChannelAdapter: stopped inbound
Please find below the stack trace after adding the @SpringBootApplication annotation:
[2015-11-26 10:55:46.571] boot - 5524 INFO [main] --- AnnotationConfigApplicationContext: Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#562d4b: startup date [Thu Nov 26 10:55:46 IST 2015]; root of context hierarchy
[2015-11-26 10:55:46.572] boot - 5524 WARN [main] --- AnnotationConfigApplicationContext: Exception thrown from ApplicationListener handling ContextClosedEvent
java.lang.IllegalStateException: ApplicationEventMulticaster not initialized - call 'refresh' before multicasting events via the context: org.springframework.context.annotation.AnnotationConfigApplicationContext#562d4b: startup date [Thu Nov 26 10:55:46 IST 2015]; root of context hierarchy
at org.springframework.context.support.AbstractApplicationContext.getApplicationEventMulticaster(AbstractApplicationContext.java:337)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:324)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1025)
at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:988)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:329)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:130)
at com.igate.MqttJavaApplication.main(MqttJavaApplication.java:32)
[2015-11-26 10:55:46.594] boot - 5524 WARN [main] --- AnnotationConfigApplicationContext: Exception thrown from LifecycleProcessor on context close
java.lang.IllegalStateException: LifecycleProcessor not initialized - call 'refresh' before invoking lifecycle methods via the context: org.springframework.context.annotation.AnnotationConfigApplicationContext#562d4b: startup date [Thu Nov 26 10:55:46 IST 2015]; root of context hierarchy
at org.springframework.context.support.AbstractApplicationContext.getLifecycleProcessor(AbstractApplicationContext.java:350)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1033)
at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:988)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:329)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:130)
at com.igate.MqttJavaApplication.main(MqttJavaApplication.java:32)
Exception in thread "main" java.lang.IllegalStateException: At least one base package must be specified
at org.springframework.context.annotation.ComponentScanAnnotationParser.parse(ComponentScanAnnotationParser.java:121)
at org.springframework.context.annotation.ConfigurationClassParser.doProcessConfigurationClass(ConfigurationClassParser.java:214)
at org.springframework.context.annotation.ConfigurationClassParser.processConfigurationClass(ConfigurationClassParser.java:149)
at org.springframework.context.annotation.ConfigurationClassParser.parse(ConfigurationClassParser.java:135)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:260)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:203)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:617)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:446)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:648)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:311)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:130)
at com.igate.MqttJavaApplication.main(MqttJavaApplication.java:32)
Yep! That's true. Without @SpringBootApplication I get the same stack trace.
The Spring Boot machinery includes IntegrationAutoConfiguration, which is responsible for populating the subscribers and the TaskScheduler bean.
Probably too late to answer, but I hope it might help some beginner.
A few things:
Declare @SpringBootApplication on your main class to let the project know it's a Spring Boot application.
For the second error, define the base package of the project where Spring Boot will look for configuration files etc. To do so, add @ComponentScan({"package-name"}) at the class level.
Sample example:
@SpringBootApplication
@ComponentScan({"package-name"})
public class Main {
    ...
}