I'm performing multiple scans in HBase from a Java/Spring Boot API. However, I have doubts about the performance of the way hbase-client manages connections (I use hbase-client 1.3.0).
Here is the code I use for scans:
@Repository
public abstract class AbstractHBaseRepository implements InitializingBean {

    @Autowired
    private HbaseTemplate hbaseTemplate;

    private Connection connection;

    @Override
    public void afterPropertiesSet() throws Exception {
        connection = ConnectionFactory.createConnection(hbaseTemplate.getConfiguration(), (ExecutorService) null, (User) null);
    }

    protected ResultScanner scan(String tableName, FilterList filters) {
        Scan scan = new Scan();
        // If list is not empty, apply filters
        if (null != filters && !filters.getFilters().isEmpty()) {
            scan.setFilter(filters);
        }
        try {
            return connection.getTable(TableName.valueOf(tableName)).getScanner(scan);
        } catch (IOException e) {
            throw new RuntimeException("Unexpected HBase scan error", e);
        }
    }
}
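For context, a caller of this scan(...) helper would typically iterate the returned scanner and close it when done. A minimal usage sketch (the table and column names below are made up for illustration):

// Hypothetical caller of the scan(...) helper above
try (ResultScanner scanner = scan("some_table", new FilterList())) {
    for (Result result : scanner) {
        // e.g. read one column value per row
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qualifier"));
    }
}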
What I see when debugging and performing multiple API requests is:
- The HBase connection (Connection object) is kept alive and shared by all threads => good
- For each scan performed, a new ZooKeeper session is created => not good?
For the second point, I get the following logs each time:
INFO o.a.h.h.zookeeper.RecoverableZooKeeper : Process identifier=hconnection-0x2634f5f3 connecting to ZooKeeper ensemble=zoo.node:2181
INFO org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=zoo.node:2181,noeyyet7.noe.edf.fr:2181 sessionTimeout=120000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@3467699e
INFO org.apache.zookeeper.ClientCnxn : Opening socket connection to server zoo.node/localhost:2181
INFO org.apache.zookeeper.ClientCnxn : Socket connection established to zoo.node/localhost:2181, initiating session
INFO org.apache.zookeeper.ClientCnxn : Session establishment complete on server zoo.node/localhost:2181, sessionid = 0x17f9803c1b2212c, negotiated timeout = 120000
INFO nectionManager$HConnectionImplementation : Closing zookeeper sessionid=0x17f9803c1b2212c
INFO org.apache.zookeeper.ZooKeeper : Session: 0x17f9803c1b2212c closed
INFO org.apache.zookeeper.ClientCnxn : EventThread shut down for session: 0x17f9803c1b2212c
Is it normal that a new ZooKeeper session is created for each HBase query? Is there a way to keep it alive?
I am developing a system which reads and processes files from a directory. Once all the files have been processed, it calls a method which in turn generates a file. It should also route/process the files based on the file name, for which I have used a Spring Integration router. Below is the code snippet of the integration configuration. My question is: this does not work if I remove either of the lines .channel(aggregatorOutputChannel()) or .channel(confirmChannel()), and I also have to keep the same channel .channel(aggregatorOutputChannel()) both before and after the aggregator. Why do I need all three channel declarations? If this is wrong, how do I correct it?
I am using JDK 8, Spring 5, and Spring Boot 2.0.4.
@Configuration
@EnableIntegration
public class IntegrationConfig {

    @Value("${agent.demographic.input.directory}")
    private String inputDir;

    @Value("${agent.demographic.output.directory}")
    private String outputDir;

    @Value("${confirmationfile.directory}")
    private String confirmDir;

    @Value("${input.scan.frequency: 2}")
    private long scanFrequency;

    @Value("${processing.waittime: 6000}")
    private long messageGroupWaiting;

    @Value("${thread.corepoolsize: 10}")
    private int corepoolsize;

    @Value("${thread.maxpoolsize: 20}")
    private int maxpoolsize;

    @Value("${thread.queuecapacity: 1000}")
    private int queuedepth;

    @Bean
    public MessageSource<File> inputFileSource() {
        FileReadingMessageSource src = new FileReadingMessageSource();
        src.setDirectory(new File(inputDir));
        src.setAutoCreateDirectory(true);
        ChainFileListFilter<File> chainFileListFilter = new ChainFileListFilter<>();
        chainFileListFilter.addFilter(new AcceptOnceFileListFilter<>());
        chainFileListFilter.addFilter(new RegexPatternFileListFilter("(?i)^.+\\.xml$"));
        src.setFilter(chainFileListFilter);
        return src;
    }

    @Bean
    public UnZipTransformer unZipTransformer() {
        UnZipTransformer unZipTransformer = new UnZipTransformer();
        unZipTransformer.setExpectSingleResult(false);
        unZipTransformer.setZipResultType(ZipResultType.FILE);
        unZipTransformer.setDeleteFiles(true);
        return unZipTransformer;
    }

    @Bean("agentdemographicsplitter")
    public UnZipResultSplitter splitter() {
        UnZipResultSplitter splitter = new UnZipResultSplitter();
        return splitter;
    }

    @Bean
    public DirectChannel outputChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel aggregatorOutputChannel() {
        return new DirectChannel();
    }

    @Bean("confirmChannel")
    public DirectChannel confirmChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageHandler fileOutboundChannelAdapter() {
        FileWritingMessageHandler adapter = new FileWritingMessageHandler(new File(outputDir));
        adapter.setDeleteSourceFiles(true);
        adapter.setAutoCreateDirectory(true);
        adapter.setExpectReply(true);
        adapter.setLoggingEnabled(true);
        return adapter;
    }

    @Bean
    public MessageHandler confirmationfileOutboundChannelAdapter() {
        FileWritingMessageHandler adapter = new FileWritingMessageHandler(new File(confirmDir));
        adapter.setDeleteSourceFiles(true);
        adapter.setAutoCreateDirectory(true);
        adapter.setExpectReply(false);
        adapter.setFileNameGenerator(defaultFileNameGenerator());
        return adapter;
    }

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(corepoolsize);
        executor.setMaxPoolSize(maxpoolsize);
        executor.setQueueCapacity(queuedepth);
        return executor;
    }

    @Bean
    public DefaultFileNameGenerator defaultFileNameGenerator() {
        DefaultFileNameGenerator defaultFileNameGenerator = new DefaultFileNameGenerator();
        defaultFileNameGenerator.setExpression("payload.name");
        return defaultFileNameGenerator;
    }

    @Bean
    public IntegrationFlow confirmGeneration() {
        return IntegrationFlows
                .from("confirmChannel")
                .handle(confirmationfileOutboundChannelAdapter())
                .get();
    }

    @Bean
    public IntegrationFlow individualProcessor() {
        return flow -> flow.handle("thirdpartyIndividualAgentProcessor", "processfile").channel(outputChannel()).handle(fileOutboundChannelAdapter());
    }

    @Bean
    public IntegrationFlow firmProcessor() {
        return flow -> flow.handle("thirdpartyFirmAgentProcessor", "processfile").channel(outputChannel()).handle(fileOutboundChannelAdapter());
    }

    @Bean
    public IntegrationFlow thirdpartyAgentDemographicFlow() {
        return IntegrationFlows
                .from(inputFileSource(), spec -> spec.poller(Pollers.fixedDelay(scanFrequency, TimeUnit.SECONDS)))
                .channel(MessageChannels.executor(taskExecutor()))
                .<File, Boolean>route(f -> f.getName().contains("individual"), m -> m
                        .subFlowMapping(true, sf -> sf.gateway(individualProcessor()))
                        .subFlowMapping(false, sf -> sf.gateway(firmProcessor()))
                )
                .channel(aggregatorOutputChannel())
                .aggregate(aggregator -> aggregator.groupTimeout(messageGroupWaiting).correlationStrategy(new CorrelationStrategy() {
                    @Override
                    public Object getCorrelationKey(Message<?> message) {
                        return "xyz";
                    }
                }))
                .channel(aggregatorOutputChannel())
                .handle("agentDemograpicOutput", "generateAgentDemographicFile")
                .channel(confirmChannel())
                .get();
    }
}
Below is the log
2018-09-07 17:29:20.003 DEBUG 10060 --- [ taskExecutor-2] o.s.integration.channel.DirectChannel : preSend on channel 'outputChannel', message: GenericMessage [payload=C:\thirdpartyintg\input\18237232_firm.xml, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, file_name=18237232_firm.xml, file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=dd70999a-8b8d-93d2-1a43-a961ac2c339f, file_relativePath=18237232_firm.xml, timestamp=1536366560003}]
2018-09-07 17:29:20.003 DEBUG 10060 --- [ taskExecutor-2] o.s.i.file.FileWritingMessageHandler : fileOutboundChannelAdapter received message: GenericMessage [payload=C:\thirdpartyintg\input\18237232_firm.xml, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, file_name=18237232_firm.xml, file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=dd70999a-8b8d-93d2-1a43-a961ac2c339f, file_relativePath=18237232_firm.xml, timestamp=1536366560003}]
2018-09-07 17:29:20.006 DEBUG 10060 --- [ taskExecutor-2] o.s.integration.channel.DirectChannel : postSend (sent=true) on channel 'outputChannel', message: GenericMessage [payload=C:\thirdpartyintg\input\18237232_firm.xml, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, file_name=18237232_firm.xml, file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=dd70999a-8b8d-93d2-1a43-a961ac2c339f, file_relativePath=18237232_firm.xml, timestamp=1536366560003}]
2018-09-07 17:29:20.006 DEBUG 10060 --- [ taskExecutor-2] o.s.integration.channel.DirectChannel : postSend (sent=true) on channel 'firmProcessor.input', message: GenericMessage [payload=C:\thirdpartyintg\input\18237232_firm.xml, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@1a867ae7, file_name=18237232_firm.xml, file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=0e6dcb75-db99-1740-7b58-e9b42bfbf603, file_relativePath=18237232_firm.xml, timestamp=1536366559761}]
2018-09-07 17:29:20.007 DEBUG 10060 --- [ taskExecutor-2] o.s.integration.channel.DirectChannel : preSend on channel 'thirdpartyintgAgentDemographicFlow.channel#2', message: GenericMessage [payload=C:\thirdpartyintg\output\18237232_firm.xml, headers={file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=e6e2a30a-60b9-7cdd-84cc-4977d4c21c97, file_name=18237232_firm.xml, file_relativePath=18237232_firm.xml, timestamp=1536366560007}]
2018-09-07 17:29:20.008 DEBUG 10060 --- [ taskExecutor-2] o.s.integration.channel.DirectChannel : postSend (sent=true) on channel 'thirdpartyintgAgentDemographicFlow.channel#2', message: GenericMessage [payload=C:\thirdpartyintg\output\18237232_firm.xml, headers={file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=e6e2a30a-60b9-7cdd-84cc-4977d4c21c97, file_name=18237232_firm.xml, file_relativePath=18237232_firm.xml, timestamp=1536366560007}]
2018-09-07 17:29:20.009 DEBUG 10060 --- [ taskExecutor-2] o.s.integration.channel.DirectChannel : postSend (sent=true) on channel 'thirdpartyintgAgentDemographicFlow.subFlow#1.channel#0', message: GenericMessage [payload=C:\thirdpartyintg\input\18237232_firm.xml, headers={file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=13713de8-91ce-b1fa-f52d-450d3038cf9c, file_name=18237232_firm.xml, file_relativePath=18237232_firm.xml, timestamp=1536366559757}]
2018-09-07 17:29:26.009 INFO 10060 --- [ask-scheduler-9] o.s.i.a.AggregatingMessageHandler : Expiring MessageGroup with correlationKey[processdate]
2018-09-07 17:29:26.011 DEBUG 10060 --- [ask-scheduler-9] o.s.integration.channel.NullChannel : message sent to null channel: GenericMessage [payload=C:\thirdpartyintg\output\17019222_individual.xml, headers={file_originalFile=C:\thirdpartyintg\input\17019222_individual.xml, id=c654076b-696f-25d4-bded-0a43d1a8ca97, file_name=17019222_individual.xml, file_relativePath=17019222_individual.xml, timestamp=1536366559927}]
2018-09-07 17:29:26.011 DEBUG 10060 --- [ask-scheduler-9] o.s.integration.channel.NullChannel : message sent to null channel: GenericMessage [payload=C:\thirdpartyintg\output\18237232_firm.xml, headers={file_originalFile=C:\thirdpartyintg\input\18237232_firm.xml, id=e6e2a30a-60b9-7cdd-84cc-4977d4c21c97, file_name=18237232_firm.xml, file_relativePath=18237232_firm.xml, timestamp=1536366560007}]
First of all, the RegexPatternFileListFilter should come before the AcceptOnceFileListFilter in the ChainFileListFilter. That way you don't waste memory in the AcceptOnceFileListFilter on files you are not interested in.
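For example (a sketch of your inputFileSource() bean with only the filter order changed):

@Bean
public MessageSource<File> inputFileSource() {
    FileReadingMessageSource src = new FileReadingMessageSource();
    src.setDirectory(new File(inputDir));
    src.setAutoCreateDirectory(true);
    ChainFileListFilter<File> chainFileListFilter = new ChainFileListFilter<>();
    // Pattern filter first: cheaply discards non-XML files before anything is remembered
    chainFileListFilter.addFilter(new RegexPatternFileListFilter("(?i)^.+\\.xml$"));
    // AcceptOnce filter second: only tracks files that passed the pattern filter
    chainFileListFilter.addFilter(new AcceptOnceFileListFilter<>());
    src.setFilter(chainFileListFilter);
    return src;
}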
You need .channel(confirmChannel()) at the end of the thirdpartyAgentDemographicFlow because that channel is the input to your confirmGeneration flow.
I don't think you need .channel(aggregatorOutputChannel()) at all; it can be implicit.
You also don't need the .channel(outputChannel()) in the sub-flows.
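Putting those points together, the main flow could be reduced to something like this (a sketch based on your beans, untested):

@Bean
public IntegrationFlow thirdpartyAgentDemographicFlow() {
    return IntegrationFlows
            .from(inputFileSource(), spec -> spec.poller(Pollers.fixedDelay(scanFrequency, TimeUnit.SECONDS)))
            .channel(MessageChannels.executor(taskExecutor()))
            .<File, Boolean>route(f -> f.getName().contains("individual"), m -> m
                    .subFlowMapping(true, sf -> sf.gateway(individualProcessor()))
                    .subFlowMapping(false, sf -> sf.gateway(firmProcessor()))
            )
            // no explicit channel needed here; the aggregator is wired in implicitly
            .aggregate(aggregator -> aggregator
                    .groupTimeout(messageGroupWaiting)
                    .correlationStrategy(message -> "xyz"))
            .handle("agentDemograpicOutput", "generateAgentDemographicFile")
            // confirmChannel stays: it is the input of the confirmGeneration flow
            .channel(confirmChannel())
            .get();
}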
this is not working
Please elaborate: what error do you get, how does it behave instead, etc.
You can also share some DEBUG logs for org.springframework.integration to determine how your messages travel.
It would also help a lot if you could share a simple Spring Boot project on GitHub so we can play with it and reproduce the issue according to your instructions.
UPDATE
Also, I've noticed that your aggregator is based on groupTimeout(). To make it send the aggregated message downstream, you also need to configure this:
/**
 * @param sendPartialResultOnExpiry the sendPartialResultOnExpiry.
 * @return the handler spec.
 * @see AbstractCorrelatingMessageHandler#setSendPartialResultOnExpiry(boolean)
 */
public S sendPartialResultOnExpiry(boolean sendPartialResultOnExpiry) {
It is false by default, so your messages are indeed sent to the NullChannel.
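In the Java DSL, the aggregator part of the flow shown above would then look something like this (fragment only, untested):

.aggregate(aggregator -> aggregator
        .groupTimeout(messageGroupWaiting)
        .sendPartialResultOnExpiry(true) // emit the (possibly partial) group downstream instead of discarding it
        .correlationStrategy(message -> "xyz"))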
See more info in the Docs: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-routing-chapter.html#agg-and-group-to
I am trying to remove a file from the remote server using the SFTP streaming inbound channel adapter, but the connection is closed before the adviceChain runs.
CODE:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    factory.setPassword(sftpPwd);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<LsEntry>(factory);
}

@Bean
@InboundChannelAdapter(channel = "stream", poller = @Poller(cron = "2 * * * * ?"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template());
    messageSource.setRemoteDirectory(remoteDirecotry);
    messageSource.setFilter(new AcceptAllFileListFilter<>());
    return messageSource;
}

@Bean
public SftpRemoteFileTemplate template() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@Transformer(inputChannel = "stream", outputChannel = "data")
public org.springframework.integration.transformer.Transformer transformer() {
    return new StreamTransformer("UTF-8");
}
@ServiceActivator(inputChannel = "data", adviceChain = "afterChain")
@Bean
public MessageHandler handler() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            String fileName = message.getHeaders().get("file_remoteFile").toString();
            if (!StringUtils.isEmpty(message.toString())) {
                // ... process the streamed file content (body omitted in the original post)
            }
            else {
                log.info("No file found in the Remote location");
            }
        }

    };
}
@Bean
public ExpressionEvaluatingRequestHandlerAdvice afterChain() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpression(
            "@template.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    // advice.setOnSuccessExpressionString("@template.remove(headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(true);
    return advice;
}
Wherever I search, everyone suggests implementing ExpressionEvaluatingRequestHandlerAdvice, but it throws the error below.
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=starsBatchJob]] completed with the following parameters: [{JobID=1522168322277}] and the following status: [COMPLETED]
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] c.f.u.config.ParentBatchConfiguration : Job Status Completed
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] c.f.u.config.ParentBatchConfiguration : Total time tokk for Stars Batch execution: 0 seconds.
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] c.f.u.config.ParentBatchConfiguration : Batch Job lock is released
2018-03-27 12:32:02.633 INFO 23216 --- [ask-scheduler-1] com.jcraft.jsch : Disconnecting from hpchd1e.hpc.ford.com port 22
2018-03-27 12:32:02.633 ERROR 23216 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Dispatcher failed to deliver Message; nested exception is org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is org.springframework.core.NestedIOException: Failed to remove file: 2: No such file; nested exception is 2
I had this problem. My path to the remote file was incorrect: I needed a trailing /. It is a little difficult to see, since the path is built inside a SpEL expression. You can see the path components using the following in the handleMessage() method:
String remoteDirectory = (String) message.getHeaders().get("file_remoteDirectory");
String remoteFile = (String) message.getHeaders().get("file_remoteFile");
I did have to use advice.setOnSuccessExpressionString("@template.remove(headers['file_remoteFile'])") (the one commented out above) instead of advice.setOnSuccessExpression("@template.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])").
It is incorrect in the documentation (https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-streaming), which I believe is why people who struggle with this lose faith in the docs. But this seems to be the only error.
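Building on the two header lookups above, logging the concatenation makes the missing separator easy to spot (a debugging sketch to drop into handleMessage(); use whatever logger you already have):

String remoteDirectory = (String) message.getHeaders().get("file_remoteDirectory");
String remoteFile = (String) message.getHeaders().get("file_remoteFile");
// If remoteDirectory does not end with '/', this concatenation is not a valid remote path,
// which is exactly the "No such file" failure the advice reports.
log.info("onSuccess expression would remove: {}{}", remoteDirectory, remoteFile);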
I am using Spring Integration's TcpNetServerConnectionFactory and TcpInboundGateway to receive TCP messages. Everything is working as expected, but I was wondering if there is any way to implement address whitelisting? (Basically I want to allow a specified address and reject connections from others.) Maybe there is a way to add a callback to accept/reject when a connection is made, I couldn't find any mention in the docs or samples.
Create a custom TcpNetConnectionSupport (subclass DefaultTcpNetConnectionSupport and override createNewConnection()).
I think you should be able to close the socket there.
Inject it into the server connection factory.
See Advanced Techniques.
EDIT
It was added in Spring Integration 5...
@SpringBootApplication
public class So48951046Application {

    public static void main(String[] args) {
        SpringApplication.run(So48951046Application.class, args).close();
    }

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
            Thread.sleep(10_000);
            socket = SocketFactory.getDefault().createSocket("localhost", 1234);
            Thread.sleep(10_000);
        };
    }

    @Bean
    public TcpNetServerConnectionFactory server() {
        TcpNetServerConnectionFactory server = new TcpNetServerConnectionFactory(1234);
        server.setTcpNetConnectionSupport(new DefaultTcpNetConnectionSupport() {

            @Override
            public TcpNetConnection createNewConnection(Socket socket, boolean server, boolean lookupHost,
                    ApplicationEventPublisher applicationEventPublisher, String connectionFactoryName)
                    throws Exception {
                TcpNetConnection conn = super.createNewConnection(socket, server, lookupHost, applicationEventPublisher, connectionFactoryName);
                if (conn.getHostAddress().contains("127")) {
                    conn.close();
                }
                return conn;
            }

        });
        return server;
    }

    @Bean
    public TcpReceivingChannelAdapter adapter() {
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(server());
        adapter.setOutputChannel(new NullChannel());
        return adapter;
    }

}
and
: server, port=1234 Listening
: Started So48951046Application in 0.907 seconds (JVM running for 1.354)
: Accepted connection from 127.0.0.1
: New connection localhost:63624:1234:b558c7ca-f209-41b1-b958-7d9844f4d478
: server: Added new connection: localhost:63624:1234:b558c7ca-f209-41b1-b958-7d9844f4d478
: localhost:63624:1234:b558c7ca-f209-41b1-b958-7d9844f4d478 Reading...
: server: Removed closed connection: localhost:63624:1234:b558c7ca-f209-41b1-b958-7d9844f4d478
: Read exception localhost:63624:1234:b558c7ca-f209-41b1-b958-7d9844f4d478 SocketException:Socket is closed
: Accepted connection from 127.0.0.1
: New connection localhost:63625:1234:50c7b774-522a-4c43-b111-555e76611a33
: server: Added new connection: localhost:63625:1234:50c7b774-522a-4c43-b111-555e76611a33
: server: Removed closed connection: localhost:63625:1234:50c7b774-522a-4c43-b111-555e76611a33
: localhost:63625:1234:50c7b774-522a-4c43-b111-555e76611a33 Reading...
: Read exception localhost:63625:1234:50c7b774-522a-4c43-b111-555e76611a33 SocketException:Socket is closed
Sometimes I find the following entry in my log file. I have no idea what the problem is. My guess is to set a lower requested heartbeat. Any other ideas?
Additionally, I had a situation where, after a RabbitMQ restart, my server was not able to re-establish the connection once RabbitMQ was back up. I had to restart my server before a reconnection was possible.
[AMQP Connection xxx:5672] [ERROR] org.springframework.amqp.rabbit.connection.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'xxxx' in vhost 'aaa', class-id=60, method-id=40)
The exchange and queue are not auto-delete?
public class AmqpConfiguration {

    @Autowired
    private ConnectionFactory connectionFactory;

    @Bean
    public Queue receiverQueue() {
        return new Queue("receiverQueue", true, false, false, getDeadLetterExchangeArgs());
    }

    @Bean
    public FanoutExchange senderExchange() {
        return new FanoutExchange("xxxx");
    }

    @Bean
    public Queue deadLetterQueue() {
        return new Queue("deadLetterQueue");
    }

    @Bean
    public FanoutExchange exchangeDeadLetter() {
        return new FanoutExchange("deadLetter.exchange");
    }

    @Bean
    public Binding bindDeadLetterQueueToExchange() {
        return BindingBuilder.bind(deadLetterQueue()).to(exchangeDeadLetter());
    }

    @Bean
    public Binding bindSenderExchangeToQueue() {
        return BindingBuilder.bind(receiverQueue()).to(senderExchange());
    }

    @Bean(name = { "listenerContainerFactory" })
    public SimpleRabbitListenerContainerFactory listenerContainerFactory() {
        final SimpleRabbitListenerContainerFactory containerFactory = new SimpleRabbitListenerContainerFactory();
        containerFactory.setDefaultRequeueRejected(false);
        containerFactory.setConnectionFactory(connectionFactory);
        // TODO: set heartbeat
        return containerFactory;
    }

    private Map<String, Object> getDeadLetterExchangeArgs() {
        final Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-dead-letter-exchange", amqpProperties.getDeadLetterExchange());
        return args;
    }

}
Cheers,
Dennis
no exchange 'xxxx' in vhost 'aaa'
I don't see an exchange xxxx in the configuration you showed.
Perhaps you have some bogus code sending to that exchange?
EDIT
If it's a Boot app and you are using the AMQP starter, the Rabbit auto-configuration will create a RabbitAdmin for you. After restarting the server, you should see messages like these (if you enable DEBUG logging):
09:43:03.450 [SimpleAsyncTaskExecutor-9] INFO o.s.a.r.c.CachingConnectionFactory - Created new connection: SimpleConnection@b6e2e2c [delegate=amqp://guest@127.0.0.1:5672/]
09:43:03.451 [SimpleAsyncTaskExecutor-9] DEBUG o.s.amqp.rabbit.core.RabbitAdmin - Initializing declarations
09:43:03.451 [SimpleAsyncTaskExecutor-9] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Returning cached instance of singleton bean 'senderExchange'
09:43:03.451 [SimpleAsyncTaskExecutor-9] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Returning cached instance of singleton bean 'receiverQueue'
09:43:03.451 [SimpleAsyncTaskExecutor-9] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Returning cached instance of singleton bean 'bindSenderExchangeToQueue'
09:43:03.451 [SimpleAsyncTaskExecutor-9] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Returning cached instance of singleton bean 'org.springframework.context.annotation.ConfigurationClassPostProcessor.importRegistry'
09:43:03.452 [SimpleAsyncTaskExecutor-9] DEBUG o.s.a.r.c.CachingConnectionFactory - Creating cached Rabbit Channel from AMQChannel(amqp://guest@127.0.0.1:5672/,1)
09:43:03.452 [SimpleAsyncTaskExecutor-9] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,1)
09:43:03.452 [SimpleAsyncTaskExecutor-9] DEBUG o.s.amqp.rabbit.core.RabbitAdmin - declaring Exchange 'xxxx'
09:43:03.452 [SimpleAsyncTaskExecutor-9] DEBUG o.s.amqp.rabbit.core.RabbitAdmin - declaring Queue 'receiverQueue'
09:43:03.453 [SimpleAsyncTaskExecutor-9] DEBUG o.s.amqp.rabbit.core.RabbitAdmin - Binding destination [receiverQueue (QUEUE)] to exchange [xxxx] with routing key []
09:43:03.453 [SimpleAsyncTaskExecutor-9] DEBUG o.s.amqp.rabbit.core.RabbitAdmin - Declarations finished
The admin is registered as a listener to the connection factory and always declares the queues/exchanges/bindings when the connection is established.
Do you have multiple connection factories/vhosts? If so, you need an admin for each - see the section on conditional declaration.
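A rough sketch of that per-connection-factory admin setup (all bean names, host, and vhost values here are illustrative, not taken from your configuration):

@Configuration
public class MultiVhostAmqpConfiguration {

    // Hypothetical connection factory for the 'aaa' vhost
    @Bean
    public CachingConnectionFactory connectionFactoryVhostA() {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        cf.setVirtualHost("aaa");
        return cf;
    }

    @Bean
    public RabbitAdmin adminVhostA() {
        return new RabbitAdmin(connectionFactoryVhostA());
    }

    @Bean
    public FanoutExchange senderExchange() {
        FanoutExchange exchange = new FanoutExchange("xxxx");
        // Conditional declaration: only adminVhostA (re)declares this exchange when its connection opens
        exchange.setAdminsThatShouldDeclare(adminVhostA());
        return exchange;
    }

}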