I am using a spring-ws client to invoke a web service in two-way SSL mode. I need to make sure that I don't end up creating a new connection every time; instead, I want to re-use connections.
I did some research and found that by default HTTP/1.1 always makes persistent HTTP(S) connections. Is that true?
Do I need any code in my client to ensure connections are persistent? How can I check whether the connections are persistent or whether a new connection is being created every time I send a new request?
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:oxm="http://www.springframework.org/schema/oxm"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd">
<bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"/>
<bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate">
<constructor-arg ref="messageFactory"/>
<property name="marshaller" ref="jaxb2Marshaller" />
<property name="unmarshaller" ref="jaxb2Marshaller" />
<property name="messageSender" ref="httpSender" />
</bean>
<bean id="httpSender" class="org.springframework.ws.transport.http.CommonsHttpMessageSender">
</bean>
<oxm:jaxb2-marshaller id="jaxb2Marshaller" contextPath="com.nordstrom.direct.oms.webservices.addressval" />
<bean id="Client" class="test.ClientStub">
<property name="webServiceTemplate" ref="webServiceTemplate" />
</bean>
</beans>
HTTP/1.1 supports connection keep-alive, but the server can decide to close the connection after a number of requests, or may not support keep-alive at all.
The easiest way to check is to use tcpdump or wireshark and look for SYN [S] flags; those are new connections.
I am using the following, and on the J2SE 1.6 JDK it did not create any new sockets across multiple requests. The server did not send the Connection: Keep-Alive header, but connections were kept alive anyway.
Bean configuration:
<bean id="wsHttpClient" factory-bean="customHttpClient"
factory-method="getHttpClient" />
<bean id="messageSender"
class="org.springframework.ws.transport.http.CommonsHttpMessageSender"
p:httpClient-ref="wsHttpClient" />
Java Code:
import org.apache.commons.httpclient.*;
import org.springframework.stereotype.*;
@Component
public class CustomHttpClient {
private final HttpClient httpClient;
public CustomHttpClient() {
httpClient = new HttpClient(new MultiThreadedHttpConnectionManager());
httpClient.getParams().setParameter("http.protocol.content-charset",
"UTF-8");
httpClient.getHttpConnectionManager().getParams()
.setConnectionTimeout(60000);
httpClient.getHttpConnectionManager().getParams().setSoTimeout(60000);
httpClient.setHostConfiguration(new HostConfiguration());
}
public HttpClient getHttpClient() {
return httpClient;
}
}
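As a rough sanity check from the Java side (in addition to watching for new SYN packets with tcpdump or wireshark), you can keep a reference to the pooled connection manager and print its pool size between calls. This is only a sketch, assuming the webServiceTemplate bean above is wired to this HttpClient through the CommonsHttpMessageSender; the request payload is a placeholder:
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.springframework.ws.client.core.WebServiceTemplate;

public class ConnectionReuseCheck {

    // webServiceTemplate uses the CommonsHttpMessageSender configured above;
    // httpClient is the same instance returned by CustomHttpClient.getHttpClient().
    public void sendTwice(WebServiceTemplate webServiceTemplate, HttpClient httpClient, Object request) {
        webServiceTemplate.marshalSendAndReceive(request);
        webServiceTemplate.marshalSendAndReceive(request);

        MultiThreadedHttpConnectionManager manager =
                (MultiThreadedHttpConnectionManager) httpClient.getHttpConnectionManager();
        // Prints how many pooled connections the manager currently holds
        System.out.println("Connections in pool: " + manager.getConnectionsInPool());
    }
}
Note that the pool count is only a hint; the definitive check is still the absence of new SYN packets in a packet capture while you send repeated requests.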
My SpringConfig.xml says:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:mongo="http://www.springframework.org/schema/data/mongo"
xsi:schemaLocation="http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/data/mongo
http://www.springframework.org/schema/data/mongo/spring-mongo-1.10.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<mongo:mongo-client id="mongo" host="127.0.0.1" port="27017" >
<mongo:client-options
connections-per-host="8"
threads-allowed-to-block-for-connection-multiplier="4"
connect-timeout="1000"
max-wait-time="1500"
socket-keep-alive="true"
socket-timeout="1500"
/>
</mongo:mongo-client>
<mongo:db-factory dbname="test" mongo-ref="mongo" />
<bean id="mappingContext"
class="org.springframework.data.mongodb.core.mapping.MongoMappingContext" />
<bean id="defaultMongoTypeMapper"
class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper">
<constructor-arg name="typeKey"><null/></constructor-arg>
</bean>
<bean id="mappingMongoConverter"
class="org.springframework.data.mongodb.core.convert.MappingMongoConverter">
<constructor-arg name="mongoDbFactory" ref="mongoDbFactory" />
<constructor-arg name="mappingContext" ref="mappingContext" />
<property name="typeMapper" ref="defaultMongoTypeMapper" />
</bean>
<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
<constructor-arg name="mongoDbFactory" ref="mongoDbFactory" />
<constructor-arg name="mongoConverter" ref="mappingMongoConverter" />
</bean>
</beans>
I am calling this only once in constructor of my service:
ApplicationContext ctx;
public DoctorService() {
ctx = new GenericXmlApplicationContext("SpringConfig.xml");
}
And then using ctx like:
MongoOperations mongoOperation = (MongoOperations) ctx.getBean("mongoTemplate");
Query searchUserQuery = new Query(Criteria.where("_id").is( new ObjectId(userId)));
Keys k = new Keys();
Doctor doctor = mongoOperation.findOne(searchUserQuery, Doctor.class);
k.setSessionId(doctor.getSessionid());
k.setToken(doctor.getToken());
List<Keys> keys = new ArrayList<Keys>();
keys.add(k);
return keys;
However, the connection is not getting closed, even after the timeout period.
Any idea?
Thanks
There is a connection pool of up to 100 connections per Mongo client by default, and these connections are reused again and again for speed, so they will not be closed. Your connections-per-host="8" in the mongo client options limits this: you will have at most 8 connections, and they will stay open as long as your app runs. To have idle connections closed automatically, see the edit below.
EDIT: To kill connections, you can set the maxConnectionIdleTime property (for example 1000) so they are closed by a timer: any connection that has been idle for more than 1000 ms will be killed. This does work; I've tested it, and the change in connection pool size can be seen below.
The spike after the server restart is my load test; it did not reach the full connection pool limit, and after the peak the idle connections were killed. That said, connection creation is really costly (my response time increased more than a hundredfold), so not having a connection pool at all is not the best solution. It is better to use a fairly long maxConnectionIdleTime, so connections are only killed after a long period of inactivity, together with a minConnectionsPerHost value so a decent pool is always ready; it grows when needed, and the excess is killed when the server is less active. Rinse and repeat!
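If you want these pool settings in one place, here is a minimal sketch of the same idea expressed directly against the MongoDB Java driver's MongoClientOptions builder (the host, port and values are illustrative placeholders; also check whether your <mongo:client-options> XSD version exposes a matching max-connection-idle-time attribute before relying on the XML route):
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class MongoPoolConfig {

    public static MongoClient createClient() {
        MongoClientOptions options = MongoClientOptions.builder()
                .connectionsPerHost(8)        // pool size per host, as in the question
                .minConnectionsPerHost(2)     // keep a small warm pool ready
                .maxConnectionIdleTime(30000) // close connections idle for more than 30 s
                .connectTimeout(1000)
                .socketTimeout(1500)
                .build();
        // The resulting client can back a mongoDbFactory / MongoTemplate
        return new MongoClient(new ServerAddress("127.0.0.1", 27017), options);
    }
}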
We are using Spring Integration adapters for file FTP in our project. The problem we are facing is that the adapters are not closing the open socket connections.
As a result, other modules in the same managed server are failing with a "Too many open files" socket connection exception. Is there a way to close the unused open socket connections from the channel adapters, or can we get the underlying JSch connections and close the sockets from the SFTP channel adapters?
We have tried a caching session factory and it did not close the open sockets; the file handles kept piling up. Thanks in advance for the inputs.
We have two XMLs, one with an outbound adapter and the other with an inbound adapter. They are in separate XMLs because they are different jobs run using Spring Batch. We are expected to send files to a location.
We are using Spring Batch 2.2.0 and Spring Integration 2.1.6.
Here is the configuration:
We have one session factory and it is wrapped by cachingSession factory:
<beans:bean id="sftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<beans:property name="host" value="hostname"/>
<beans:property name="privateKey" value="somepath"/>
<beans:property name="port" value="22"/>
</beans:bean>
<bean id="cachingSessionFactory"
class="org.springframework.integration.file.remote.session.CachingSessionFactory">
<constructor-arg ref="sftpSessionFactory"/>
<constructor-arg value="10"/>
<property name="sessionWaitTimeout" value="1000"/>
</bean>
**and then we have a channel**
<int:channel id="ftpChannel" />
**and then we have the following outbound Channel adapter**
<int-sftp:outbound-channel-adapter id="sftpOutboundAdapter"
session-factory="cachingSessionFactory"
channel="inputChannel"
charset="UTF-8"
use-temporary-filename="false"/>
**With the above configuration we are using the ftpChannel to send the files by constructing a payload like this:**
message = MessageBuilder.withPayload(f).build(); // MessageBuilder is org.springframework.integration.support.MessageBuilder and f is the file
ftpChannel.send(message);
**In another inbound job, the following is the configuration of adapters:
Session factory:**
<beans:bean id="sftpSessionFactory2"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<beans:property name="host" value="hostname"/>
<beans:property name="privateKey" value="somepath"/>
<beans:property name="port" value="22"/>
</beans:bean>
**Caching session factory:**
<bean id="cachingSessionFactory2"
class="org.springframework.integration.file.remote.session.CachingSessionFactory">
<constructor-arg ref="sftpSessionFactory2"/>
<constructor-arg value="10"/>
<property name="sessionWaitTimeout" value="1000"/>
</bean>
**and another channel:**
<int:channel id="ftpChannel2" />
**Now we have the following adapter in this xml:**
<int-sftp:outbound-channel-adapter id="sftpInboundAdapter"
session-factory="cachingSessionFactory2"
channel="inputChannel"
charset="UTF-8"
use-temporary-filename="false"/>
With this configuration in the above XML, we are trying to get a session from the cachingSessionFactory configured in the first XML, get a list of files, and then send some files with ftpChannel2.send(), calling session.close() in the finally block. When I call session.isOpen() after session.close(), I see true being returned.
With these two jobs, I can see a lot of open file handles, which are socket connections, and I am absolutely clueless as to how I can close those open sockets.
The session will be closed when the operation is complete as long as you don't use the caching session factory - that is intended to keep the session open for the next use.
If you turn on DEBUG logging, you should get some insight into what is wrong.
EDIT
Just ran this with no problems:
@Test
public void test() throws Exception {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("10.0.0.3");
sf.setUsername("ftptest");
sf.setPassword("ftptest");
FtpSession session = sf.getSession();
Thread.sleep(10000);
session.close();
assertFalse(session.isOpen());
System.out.println("closed");
Thread.sleep(10000);
}
During the first sleep netstat -ntp shows the socket open; socket is gone after the close.
The session is the socket...
public void disconnect() throws IOException
{
closeQuietly(_socket_);
...
}
EDIT2
I had forgotten that with 2.1.x there was the cache-sessions attribute (2.1.x is very old).
I just tested with this (and 2.1.6) ...
<bean id="sftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="10.0.0.3" />
<property name="privateKey" value="file:/somPathTo/.ssh/id_rsa" />
<property name="port" value="22" />
<property name="user" value="ftptest" />
</bean>
<int:channel id="inputChannel" />
<int-sftp:outbound-channel-adapter id="sftpOutboundAdapter"
session-factory="sftpSessionFactory"
channel="inputChannel"
charset="UTF-8"
cache-sessions="false"
use-temporary-file-name="false"
remote-directory="." />
public class Main {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = new ClassPathXmlApplicationContext("context.xml");
File f = new File("foo.txt");
FileOutputStream fos = new FileOutputStream(f);
fos.write("bar".getBytes());
fos.close();
context.getBean("inputChannel", MessageChannel.class).send(MessageBuilder.withPayload(f).build());
System.out.println("Sleeping - check socket");
Thread.sleep(60000); // check socket
context.close();
System.exit(0);
}
}
With no problems (the socket is closed); if I set the cache-sessions to true, the socket remains open as expected.
I do notice you don't have a remote-directory attribute - that's illegal:
exactly one of 'remote-directory' or 'remote-directory-expression' is required on a remote file outbound adapter
I am trying to configure Spring Boot, Apache Camel and ActiveMQ all together. This is what I did so far:
I run ActiveMQ using activemq.bat
I log into the console to monitor messages
I start backend service (sources below)
I start frontend service (Backend and Frontend are different spring boot projects)
I successfully send a message to the queue from the frontend, but after 20s I get a timeout. The message appears in the ActiveMQ console but it is not consumed by the backend.
Here's how I configured backend:
build.gradle:
dependencies {
compile("org.apache.camel:camel-spring-boot:2.16.0")
compile("org.springframework.boot:spring-boot-starter-web")
compile("org.springframework.boot:spring-boot-starter-websocket")
compile("org.springframework:spring-messaging")
compile("org.springframework:spring-jms")
compile("org.springframework.security:spring-security-web")
compile("org.springframework.security:spring-security-config")
compile("org.springframework.boot:spring-boot-starter-data-jpa")
compile('org.apache.camel:camel-jms:2.16.0')
compile("org.hibernate:hibernate-core:4.0.1.Final")
compile("mysql:mysql-connector-java:5.1.37")
compile("log4j:log4j:1.2.16")
compile("junit:junit:4.12")
compile("org.mockito:mockito-all:1.8.4")
compile('org.apache.activemq:activemq-core:5.7.0')
compile('com.epam.training.auction:auction_common:1.0')
testCompile("junit:junit")
}
Route config (I use UsersServiceImpl for testing; neither way of defining it works):
import org.apache.camel.RoutesBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.epam.training.auction_backend.services.UsersServiceImpl;
@Configuration
public class MyRouterConfiguration {
@Bean
public RoutesBuilder myRouter() {
return new RouteBuilder() {
@Override
public void configure() throws Exception {
from("jms:queue:auctions").to("bean:auctionsServiceImpl");
from("jms:queue:users").bean(UsersServiceImpl.class);
from("jms:queue:bidding").to("bean:biddingServiceImpl");
}
};
}
}
Client side, invoking the method:
@Override
public void registerUser(String username, String password) {
AbstractApplicationContext context = new ClassPathXmlApplicationContext("camel-client-remoting.xml");
UsersService usersService = context.getBean("usersServiceImpl", UsersService.class);
System.out.println("Invoking the logging");
UserTransferObject userTransferObject = new UserTransferObject("user", "pass");
usersService.addUser(userTransferObject);
System.out.println("User is logged");
IOHelper.close(context);
}
Client xml camel config
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:camel="http://camel.apache.org/schema/spring"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
<camel:camelContext id="camel-client">
<camel:template id="camelTemplate"/>
<camel:proxy
id="auctionsServiceImpl"
serviceInterface="com.epam.training.auction.common.AuctionsService"
serviceUrl="jms:queue:auctions"/>
<camel:proxy
id="usersServiceImpl"
serviceInterface="com.epam.training.auction.common.UsersService"
serviceUrl="jms:queue:users"/>
<camel:proxy
id="biddingServiceImpl"
serviceInterface="com.epam.training.auction.common.BiddingService"
serviceUrl="jms:queue:bidding"/>
</camel:camelContext>
<bean id="jmsConnectionFactory"
class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="pooledConnectionFactory"
class="org.apache.activemq.pool.PooledConnectionFactory"
init-method="start" destroy-method="stop">
<property name="maxConnections" value="8"/>
<property name="connectionFactory" ref="jmsConnectionFactory"/>
</bean>
<bean id="jmsConfig"
class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory"/>
<property name="concurrentConsumers" value="10"/>
</bean>
<bean id="jms"
class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="jmsConfig"/>
<property name="transacted" value="true"/>
<property name="cacheLevelName" value="CACHE_CONSUMER"/>
</bean>
</beans>
During sending I also get warning:
2015-11-02 11:56:21.547 WARN 16328 --- [nio-8181-exec-5] o.s.b.f.s.DefaultListableBeanFactory : Bean creation exception on FactoryBean type check: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'usersServiceImpl': Invocation of init method failed; nested exception is org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: jms://queue:users due to: Cannot auto create component: jms
Common interfaces and transfer objects are defined in a third project, which is a dependency of both the backend and frontend projects.
I feel that there is one missing piece in this configuration. Please tell me what it could be.
Thanks in advance.
You need to change
<bean id="activemq"
class="org.apache.activemq.camel.component.ActiveMQComponent">
to
<bean id="jms"
class="org.apache.activemq.camel.component.ActiveMQComponent">
or change your endpoint urls to .to("activemq:queue:users").
The id of your ActiveMQComponent bean is the component name used in the endpoint URI (the part before the colon in .to() or from()) to tell Camel which component definition to use.
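On the backend, where the routes use from("jms:queue:..."), the same rule applies: Camel looks up a component bean named after the URI scheme. Below is a minimal sketch of registering such a bean in the Spring Boot backend; the class name, bean name and broker URL are assumptions, not taken from your project:
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsComponentConfiguration {

    // The bean name "jms" must match the scheme in the route URIs,
    // e.g. from("jms:queue:users"), so Camel can resolve the component.
    @Bean(name = "jms")
    public ActiveMQComponent jmsComponent() {
        ActiveMQComponent component = new ActiveMQComponent();
        component.setBrokerURL("tcp://localhost:61616"); // assumed broker URL
        return component;
    }
}
Alternatively, keep the component id as activemq and write the endpoints as activemq:queue:users; the only requirement is that the URI scheme and the bean name agree.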
I am learning Spring Integration JMS, with ActiveMQ as the message broker. I am referring to the project given here: http://www.javaworld.com/article/2142107/spring-framework/open-source-java-projects-spring-integration.html?page=2#
But I want to know how I can persist messages in ActiveMQ. I start ActiveMQ, then send a request using a REST client. I call publishService.send( message ); in a for loop 50 times, and on the receiver end I have a sleep timer of 10 seconds, so that 50 messages get queued and are processed at 10-second intervals.
EDIT:
Look at the screen shot below:
It says 50 messages enqueued and 5 of them have been dequeued.
But then I stopped the ActiveMQ server after it had consumed 5 of the 50 messages, and then restarted it.
I was expecting it to show the remaining 45 in the Messages Enqueued column, but I see 0 there (see screenshot below); they all vanished after the server restart, without the remaining 45 messages being persisted. How can I overcome this problem?
Please have a look at the configuration below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
xmlns:oxm="http://www.springframework.org/schema/oxm"
xmlns:int-jme="http://www.springframework.org/schema/integration"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
http://www.springframework.org/schema/integration/jms http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd">
<!-- Component scan to find all Spring components -->
<context:component-scan base-package="com.geekcap.springintegrationexample" />
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
<property name="order" value="1" />
<property name="messageConverters">
<list>
<!-- Default converters -->
<bean class="org.springframework.http.converter.StringHttpMessageConverter"/>
<bean class="org.springframework.http.converter.FormHttpMessageConverter"/>
<bean class="org.springframework.http.converter.ByteArrayHttpMessageConverter" />
<bean class="org.springframework.http.converter.xml.SourceHttpMessageConverter"/>
<bean class="org.springframework.http.converter.BufferedImageHttpMessageConverter"/>
<bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
</list>
</property>
</bean>
<!-- Define a channel to communicate out to a JMS Destination -->
<int:channel id="topicChannel"/>
<!-- Define the ActiveMQ connection factory -->
<bean id="connectionFactory" class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<!--
Define an adapter that routes topicChannel messages to the myTopic topic; the outbound-channel-adapter
automagically finds the configured connectionFactory bean (by naming convention)
-->
<int-jms:outbound-channel-adapter channel="topicChannel"
destination-name="topic.myTopic"
pub-sub-domain="true" />
<!-- Create a channel for a listener that will consume messages-->
<int:channel id="listenerChannel" />
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
channel="getPayloadChannel"
destination-name="topic.myTopic"
pub-sub-domain="true" />
<int:service-activator input-channel="listenerChannel" ref="messageListenerImpl" method="processMessage" />
<int:channel id="getPayloadChannel" />
<int:service-activator input-channel="getPayloadChannel" output-channel="listenerChannel" ref="retrievePayloadServiceImpl" method="getPayload" />
</beans>
Also please see the code:
Controller from where I am sending message in for loop at once:
@Controller
public class MessageController
{
@Autowired
private PublishService publishService;
@RequestMapping( value = "/message", method = RequestMethod.POST )
@ResponseBody
public void postMessage( @RequestBody com.geekcap.springintegrationexample.model.Message message, HttpServletResponse response )
{
for(int i = 0; i < 50; i++){
// Publish the message
publishService.send( message );
// Set the status to 201 because we created a new message
response.setStatus( HttpStatus.CREATED.value() );
}
}
}
Consumer code to which I have applied timer:
@Service
public class MessageListenerImpl
{
private static final Logger logger = Logger.getLogger( MessageListenerImpl.class );
public void processMessage( String message )
{
try {
Thread.sleep(10000);
logger.info( "Received message: " + message );
System.out.println( "MessageListener::::::Received message: " + message );
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Further searching turned up that, as per the JMS specification, the default delivery mode is persistent. But in my case it does not seem to work.
Please help me get the right configuration in place so that messages are persisted across a broker failure.
This is typically not a problem; it is simply the way ActiveMQ was built.
You can find the following explanation in the book 'ActiveMQ in Action':
Once a message has been consumed and acknowledged by a message
consumer, it’s typically deleted from the broker’s message store.
So when you restart your server, it only shows you the messages that are still in the broker's message store. In most cases you will never need to look at already-processed messages.
Hope this helps!
Good luck!
I am trying to test and benchmark spring-amqp for RabbitMQ with multiple queues, so I was creating a RabbitTemplate for each queue and using it to send messages. The send is successful and I can see a message published to the exchange, but I don't see anything in the queue. I am guessing it's a very minor setting, but I can't figure it out.
This is my applicationContext.xml
<bean id="banchmarkConnectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<constructor-arg ref="benchmarkAmqpHost"/>
<property name="username" ref="benchmarkAmqpUser"/>
<property name="password" ref="benchmarkAmqpPass"/>
<property name="virtualHost" ref="benchmarkAmqpVHost"/>
<property name="channelCacheSize" value="10"/>
</bean>
<rabbit:template id="benchmarkAmqpTemplate"
connection-factory="banchmarkConnectionFactory"
exchange="my_exchange"
queue="BenchmarkQueue"
routing-key="BenchmarkQueue" />
<rabbit:admin connection-factory="banchmarkConnectionFactory"/>
<rabbit:queue name="BenchmarkQueue" auto-delete="true" durable="false" auto-declare="true"/>
This is my code which uses the benchmarkAmqpTemplate to publish to the queue.
public class publishMessage {
@Autowired
private RabbitTemplate benchmarkAmqpTemplate;
protected void publish(String payload) {
benchmarkAmqpTemplate.setQueue("BenchmarkQueue");
benchmarkAmqpTemplate.convertAndSend("my_exchange", "BenchmarkQueue", payload);
}
}
When I used the HelloWorld example it did publish a message to the queue, so I was wondering if I am doing something wrong.
UPDATE
I was able to solve this by adding a direct-exchange tag in my context XML. My full XML looks like this:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rabbit="http://www.springframework.org/schema/rabbit"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsdhttp://www.springframework.org/schema/rabbit http://www.springframework.org/schema/rabbit/spring-rabbit.xsd">
<bean id="banchmarkConnectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<constructor-arg ref="benchmarkAmqpHost"/>
<property name="username" ref="benchmarkAmqpUser"/>
<property name="password" ref="benchmarkAmqpPass"/>
<property name="virtualHost" ref="benchmarkAmqpVHost"/>
<property name="channelCacheSize" value="10"/>
</bean>
<rabbit:template id="benchmarkAmqpTemplate"
connection-factory="banchmarkConnectionFactory"
exchange="my_exchange"
queue="BenchmarkQueue"
routing-key="BenchmarkQueue" />
<rabbit:admin connection-factory="banchmarkConnectionFactory"/>
<rabbit:queue name="BenchmarkQueue" auto-delete="true" durable="false" auto-declare="true"/>
<rabbit:direct-exchange name="my_exchange">
<rabbit:bindings>
<rabbit:binding queue="BenchmarkQueue" key="BenchmarkQueue" />
</rabbit:bindings>
</rabbit:direct-exchange>
</beans>
Sorry, but it looks like you have misunderstood the AMQP protocol a bit.
The message is published to the Exchange with the proper routingKey.
The publisher (RabbitTemplate) doesn't need to know about queues at all.
The queue is part of the receiving side; consumers subscribe to the queue.
There is one more feature in between: the binding. A queue is bound to an exchange under an appropriate routing key, and one queue can be bound to several exchanges with different routing keys. By default, every queue is bound to the default exchange ("") with a routing key equal to its name.
Please refer to the RabbitMQ site for more info.
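For completeness, the binding added in the question's UPDATE can also be declared programmatically with Spring AMQP's admin support. A minimal sketch reusing the names from the question (the connection factory is assumed to point at the same broker):
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class BenchmarkTopologySetup {

    public static void declareTopology(CachingConnectionFactory connectionFactory) {
        RabbitAdmin admin = new RabbitAdmin(connectionFactory);

        // Non-durable, non-exclusive, auto-delete queue, as in the XML above
        Queue queue = new Queue("BenchmarkQueue", false, false, true);
        DirectExchange exchange = new DirectExchange("my_exchange");

        admin.declareQueue(queue);
        admin.declareExchange(exchange);
        // Bind the queue to the exchange with the routing key the template uses
        admin.declareBinding(BindingBuilder.bind(queue).to(exchange).with("BenchmarkQueue"));
    }
}
Without such a binding, a message published to my_exchange with the routing key BenchmarkQueue has nowhere to go, which is why it showed up on the exchange but never in the queue.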