I have an HTTP outbound gateway that connects to one URL; the code snippet is below. I am dropping around 100 files into the folder. The URL points to localhost:8080/index.jsp, and in the JSP I have added Thread.sleep(60000).
When I run the code I see that only one call is made to the JSP every 60 seconds, even though my pool manager is configured with 25 connections per route.
I am not sure why it is not working. Has anyone faced a similar problem?
<int:poller default="true" fixed-delay="50"/>
<int:channel id="inputChannel">
<int:queue capacity="5"/>
</int:channel>
<int:channel id="httpInputChannel">
<int:queue capacity="5"/>
</int:channel>
<int-http:outbound-gateway id="simpleHttpGateway"
request-channel="httpInputChannel"
url="${app.webservice.url}"
http-method="GET"
extract-request-payload="false"
expected-response-type="java.lang.String"
charset="UTF-8"
reply-timeout="1234"
request-factory="requestFactory"
reply-channel="wsResponseChannel">
</int-http:outbound-gateway>
<bean id="requestFactory"
class="org.springframework.http.client.HttpComponentsClientHttpRequestFactory">
<constructor-arg ref="httpClient"/>
</bean>
<bean id="httpClient" class="org.apache.http.impl.client.DefaultHttpClient">
<constructor-arg ref="poolManager"/>
</bean>
<bean id="poolManager" class="org.apache.http.impl.conn.PoolingClientConnectionManager">
<property name="defaultMaxPerRoute" value="25"/>
<property name="maxTotal" value="250"/>
</bean>
<int:channel id="wsResponseChannel">
<int:queue capacity="5"/>
</int:channel>
<int:service-activator ref="clientServiceActivator" method="handleServiceResult" input-channel="wsResponseChannel" />
<bean id="clientServiceActivator" class="com.spijb.serviceactivator.ClientServiceActivator"/>
<int-file:inbound-channel-adapter id="producer-file-adapter" channel="inputChannel" directory="file:c://Temp//throttling" prevent-duplicates="true">
<int:poller fixed-rate="100" />
</int-file:inbound-channel-adapter>
<int-file:file-to-string-transformer
id="file-2-string-transformer" input-channel="inputChannel"
output-channel="httpInputChannel" charset="UTF-8" />
You have a single poller thread on your file inbound channel adapter. You need to add a task-executor to the poller, with a pool size set to the number of concurrent requests you want to handle.
You also need to set max-messages-per-poll, which defaults to 1.
I changed the configuration to add an executor, as below.
<int-file:inbound-channel-adapter id="producer-file-adapter" channel="inputChannel" directory="file:c://Temp//throttling" prevent-duplicates="true">
<int:poller fixed-rate="100" task-executor="executor" max-messages-per-poll="25"/>
</int-file:inbound-channel-adapter>
<task:executor id="executor" pool-size="25"/>
It was still sending only one request to my Tomcat server listening for index.jsp. My understanding was that if there are multiple messages in the channel queue, which is now the case on httpInputChannel, the HTTP outbound gateway would process multiple requests. However, this is not happening. I further changed my default poller as below.
<int:poller default="true" fixed-delay="50" task-executor="executor"/>
After the above change, the HTTP outbound gateway started sending multiple requests to the URL. Now I am confused: do we need to explicitly assign an executor for the outbound gateway to process multiple messages at the same time? Can someone please point me to the documentation for this?
Thank you.
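As an aside on the mechanics here: httpInputChannel is a queue channel, so the outbound gateway is only invoked by the poller that reads from that queue, and a poller without a task-executor processes its messages one at a time. A possible alternative (an untested sketch, reusing the executor bean already defined above) is to make it an executor-backed dispatcher channel, so each message is handed straight to a pool thread without involving the default poller:

<!-- Sketch: ExecutorChannel variant of httpInputChannel; no queue, no poller needed -->
<int:channel id="httpInputChannel">
    <int:dispatcher task-executor="executor"/>
</int:channel>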
How can I handle a message that failed to produce to Kafka in Spring Integration?
I don't see that 'error-channel' is an option on 'int-kafka:outbound-channel-adapter', so I am wondering where I should add the error-channel information so that my ErrorHandler can receive "failed to produce to Kafka" type errors (including all types of failure: configuration, network, etc.).
Also, inputToKafka is a queue channel; where should I add an error-channel to handle a potential queue-full error?
<int:gateway id="myGateway"
service-interface="someGateway"
default-request-channel="transformChannel"
error-channel="errorChannel"
default-reply-channel="replyChannel"
async-executor="MyThreadPoolTaskExecutor"/>
<int:transformer id="transformer" input-channel="transformChannel" method="transform" output-channel="inputToKafka">
<bean class="Transformer"/>
</int:transformer>
<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
kafka-template="template"
auto-startup="false"
channel="inputToKafka"
topic="foo"
message-key-expression="'bar'"
partition-id-expression="2">
<int:poller fixed-delay="200" time-unit="MILLISECONDS" receive-timeout="0"
task-executor="kafkaExecutor"/>
</int-kafka:outbound-channel-adapter>
<bean id="kafkaExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
....
</bean>
<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg>
<bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="localhost:9092" />
...
</map>
</constructor-arg>
</bean>
</constructor-arg>
</bean>
<int:service-activator input-channel='errorChannel' output-channel="replyChannel" method='process'>
<bean class="ErrorHandler"/>
</int:service-activator>
Edit: I added a ProducerListener property to the KafkaTemplate bean:
<property name="producerListener">
<bean id="producerListener" class="org.springframework.kafka.support.ProducerListenerAdapter"/>
</property>
Any errors on the downstream flow will be sent to the error-channel on your gateway. However, since Kafka is async by default, you won't get any errors that way. You can set sync=true on the outbound adapter, and then an exception will be thrown if there is a problem.
Bear in mind, though, it will be much slower.
You can get async exceptions by adding a ProducerListener to your KafkaTemplate.
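For illustration, a rough sketch of both suggestions. Whether the sync attribute and the producerListener property are available depends on your spring-integration-kafka / spring-kafka versions, and producerFactory below stands in for the inline DefaultKafkaProducerFactory shown above:

<!-- Sketch: synchronous sends, so failures surface as exceptions in the flow -->
<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
    kafka-template="template"
    channel="inputToKafka"
    topic="foo"
    sync="true"/>

<!-- Sketch: a listener on the template receives async send results and errors -->
<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
    <constructor-arg ref="producerFactory"/>
    <property name="producerListener">
        <bean class="org.springframework.kafka.support.ProducerListenerAdapter"/>
    </property>
</bean>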
Below is an HTTP outbound gateway configured with headers, but it does not get triggered continuously when I add a poller. It just gets triggered once and stops.
<int:inbound-channel-adapter channel="fooinfotrigger.channel" expression="''">
<int:poller fixed-delay="5000"></int:poller>
</int:inbound-channel-adapter>
<int:channel id="fooinfo.channel">
<int:queue capacity="10"/>
</int:channel>
<int:channel id="fooinfotrigger.channel"></int:channel>
<int:chain input-channel="fooinfotrigger.channel" output-channel="fooinfo.channel">
    <int:header-enricher>
        <int:header name="Authorization" value="...." />
        <int:header name="Content-Type" value="...." />
    </int:header-enricher>
    <int-http:outbound-gateway id="fooHttpGateway"
        url="https://foo.com/v1/services/foo?status=active"
        http-method="GET"
        expected-response-type="java.lang.String"
        charset="UTF-8"
        reply-timeout="5000">
    </int-http:outbound-gateway>
    <int:transformer method="transform" ref="fooResourcesTransformer"/>
</int:chain>
<bean id="fooResourcesTransformer" class="com.foo.FooTransformer" />
The fixed-delay option determines the delay after the previous task, in your case the poll, has finished.
Since you say that it "gets triggered once and stops", it looks like you never finish your work somewhere on the fooinfo.channel side, so control is not returned to the TaskScheduler and the thread is never freed for anything else.
We would really need to see and understand your logic after that <chain>, together with the subscribers on that fooinfo.channel.
Or... maybe your REST service simply never returns a response, independently of that reply-timeout="5000".
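To make the queue-channel point concrete: fooinfo.channel is a queue channel (capacity 10) with no subscriber shown, so nothing ever drains it; whatever the immediate cause of the blockage, something needs to poll and consume from that channel. A minimal hypothetical consumer (the handler bean and method name are made up for illustration) could look like:

<!-- Sketch: a polling consumer that drains the queue channel -->
<int:service-activator input-channel="fooinfo.channel" ref="fooInfoHandler" method="handle">
    <int:poller fixed-delay="1000" max-messages-per-poll="10"/>
</int:service-activator>
<bean id="fooInfoHandler" class="com.foo.FooInfoHandler"/>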
I found that when I make a REST call using Spring Integration, it automatically prepends 'X-' when the header is a custom header.
For example, when sending a custom request header such as API-KEY, the actual request header name in the API call becomes X-API-KEY, and so the call fails.
It seems that Spring is standardizing custom request headers by forcing them to start with 'X-'. Is there a workaround?
<int:channel id="requestChannel"/>
<int:channel id="httpHeaderEnricherChannel"/>
<int-http:outbound-gateway request-channel="requestChannel"
url="http://localhost:9090/balance"
http-method="GET"
mapped-request-headers="Api-Key"
expected-response-type="java.lang.String"/>
<int:header-enricher input-channel="httpHeaderEnricherChannel"
output-channel="requestChannel">
<int:header name="Api-Key" value="pass"/>
</int:header-enricher>
You should declare a DefaultHttpHeaderMapper.outboundMapper() bean with setUserDefinedHeaderPrefix(null) (or an empty string) and include your custom Api-Key header in its mappings. Then replace the mapped-request-headers attribute with a header-mapper reference.
We have revised the feature and decided to remove the "X-" default prefix in the next version.
For more info, please see Custom HTTP headers: naming conventions and https://jira.spring.io/browse/INT-3903.
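A minimal sketch of the bean described above (the header list is illustrative; compare it with the working configuration the poster shows below):

<bean id="headerMapper"
      class="org.springframework.integration.http.support.DefaultHttpHeaderMapper"
      factory-method="outboundMapper">
    <property name="userDefinedHeaderPrefix" value=""/>
    <property name="outboundHeaderNames" value="HTTP_REQUEST_HEADERS, Api-Key"/>
</bean>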
Thanks to @Artem for clarifying, and to Gary's post here, Spring Integration Http Outbound Gateway Header Mapper, I was able to solve the issue:
<int:channel id="requestChannel"/>
<int:gateway id="requestGateway"
service-interface="org.springframework.integration.samples.http.RequestGateway"
default-request-channel="requestChannel">
<int:default-header name="Api-Key" value="pass" />
</int:gateway>
<int-http:outbound-gateway request-channel="requestChannel"
header-mapper="headerMapper"
url="http://localhost:9090/balance"
http-method="GET"
expected-response-type="java.lang.String"/>
<beans:bean id="headerBean"
class="org.springframework.integration.samples.http.HeaderBean" />
<bean id="headerMapper"
class="org.springframework.integration.http.support.DefaultHttpHeaderMapper">
<property name="inboundHeaderNames" value="*" />
<property name="outboundHeaderNames" value="HTTP_REQUEST_HEADERS, Api-Key" />
<property name="userDefinedHeaderPrefix" value="" />
</bean>
The scenario is: inbound JMS adapter -> service activators (DB search, business logic, inserts or updates).
<int-jms:message-driven-channel-adapter id="swiftAdapterInput" channel="mt950"
connection-factory="connectionFactory" destination-name="${integration.swift.jms.queue.from}" pub-sub-domain="false"
auto-startup="false" error-channel="errorChannel" transaction-manager="transactionManager" acknowledge="transacted" />
<int:service-activator input-channel="errorChannel" ref="errorHandler" />
<bean id="errorHandler" class="nest.integration.utils.error.ErrorHandler" />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
When an exception is thrown inside my service activators, my errorHandler works fine. But when the exception occurs at the end (after the commit), for example a DB unique constraint exception, the message does not go to the error-channel; it is only rolled back to the JMS queue.
I need my errorHandler in that case as well, because I need to send an exception email, etc.
Thanks in advance.
Best wishes, Tamas
I think your issue is that you must roll back the TX in your errorHandler, not commit it.
That's why you get the uniqueness-constraint violation error from the DefaultMessageListenerContainer: that DB error is raised on commit.
See here: Exceptions in Spring Integration: How to log but not intercept
I am currently building a Mule ESB server application that uses a request-response JMS connector. Since it is used in a highly concurrent environment, we enabled Spring JMS caching in our MQ config.
<spring:beans>
    <mule>
        <!-- MQ Factory -->
        <spring:bean id="testMsgMqFactoryBean1" name="testMsgMqFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
            <spring:property name="channel" value="${test.msg.mq.channel.1}" />
            <spring:property name="queueManager" value="${test.msg.mq.queueManager.1}" />
            <spring:property name="hostName" value="${test.msg.mq.hostName.1}" />
            <spring:property name="port" value="${test.msg.mq.port.1}" />
            <spring:property name="transportType" value="${mq.jms.transportType}" />
        </spring:bean>
        <spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
            <spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
            <spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
            <spring:property name="cacheConsumers" value="false" />
            <!-- <spring:property name="cacheProducers" value="false" /> -->
        </spring:bean>
        <!-- MQ Connector 1 -->
        <jms:custom-connector name="testMsgMqConnector.1" class="org.mule.transport.jms.websphere.WebsphereJmsConnector" doc:name="Custom JMS">
            <spring:property name="specification" value="1.1" />
            <spring:property name="connectionFactory" ref="testMsgMqFactoryBeanCache1" />
            <spring:property name="persistentDelivery" value="false" />
            <spring:property name="disableTemporaryReplyToDestinations" value="true" />
            <spring:property name="numberOfConsumers" value="${test.threading.profile.maxThreadsActive}" />
            <spring:property name="maxRedelivery" value="-1" />
            <receiver-threading-profile maxThreadsActive="${test.threading.profile.maxThreadsActive}" maxBufferSize="${test.threading.profile.maxBufferSize}" maxThreadsIdle="${test.threading.profile.maxThreadsIdle}"/>
            <reconnect frequency="${mq.jms.reconnection.frequency}" count="${mq.jms.reconnection.count}" blocking="false" />
        </jms:custom-connector>
        <!-- msgworks inbound and outbound MQ setup -->
        <!-- Rewards -->
        <jms:endpoint exchange-pattern="request-response" queue="${test.msg.mq.inbound.account.queue}" name="testQueue1" connector-ref="testMsgMqConnector.1" doc:name="JMS" />
    </mule>
</spring:beans>
This configuration runs fine when the client uses a static replyTo queue. However, some of our customers use dynamic/temporary replyTo queues. Since org.springframework.jms.connection.CachingConnectionFactory caches producers, a producer object is cached and never closed for every temporary replyTo queue. After processing hundreds of requests, the application started throwing exceptions:
********************************************************************************
Message : Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage
Code : MULE_ERROR-42999
--------------------------------------------------------------------------------
Exception stack is:
1. MQJE001: Completion Code 2, Reason 2017 (com.ibm.mq.MQException)
com.ibm.mq.MQQueueManager:2808 (null)
2. MQJMS2008: failed to open MQ queue TESTret5a975v53AF980F2006BE02(JMS Code: MQJMS2008) (javax.jms.ResourceAllocationException)
com.ibm.mq.jms.MQQueueServices:398 (http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/jms/ResourceAllocationException.html)
3. Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage (org.mule.api.transport.DispatchException)
org.mule.transport.jms.JmsReplyToHandler:173 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
After investigating the MQ error code (MQJE001: Completion Code 2, Reason 2017), I found that the cause is that we never close producers, and the producers exhaust MQ handles on the queue manager. The quick and easy fix is to uncomment the line in the Spring JMS cache config so that producers are closed every time:
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<spring:property name="cacheProducers" value="false" />
</spring:bean>
Now I am no longer seeing the MQ issue, but I have run into a performance problem instead: because no producers are cached, a new producer is created every single time.
My question is: how do we deal with this scenario? Since the client won't change the way they receive reply messages from temporary queues, how can we avoid exhausting MQ handles without hurting performance?
Thank you very much
- Lei
This is a very interesting use case. However, I'm afraid there is nothing out of the box to fix this. There are the more obvious solutions: disable the caching, or extend the Spring cache provider.
Temporary queues and performance are definitely not two things you can have at the same time, so I would suggest another possibility:
If you are using the temporary queues to have the responses come back to a given consumer only (probably in addition to having old messages discarded on reconnection), you could use a well-known queue for the replies instead: add a header carrying the hostname that should receive the message, give each consumer node a different selector, and set a TTL on the messages sent so they disappear after a while.
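As a rough illustration of that idea in plain Spring JMS terms rather than Mule config (the queue name, header name, TTL, and the replyHandler bean are all hypothetical): each node consumes from one shared reply queue using a selector that matches only its own messages, and the replying side sets that property plus a time-to-live so unclaimed replies expire on their own.

<!-- Sketch: per-node selector on a shared reply queue instead of temporary queues -->
<bean id="replyListenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="testMsgMqFactoryBeanCache1"/>
    <property name="destinationName" value="SHARED.REPLY.QUEUE"/>
    <property name="messageSelector" value="targetHost = 'node-1'"/>
    <property name="messageListener" ref="replyHandler"/>
</bean>

<!-- Sketch: the replying side stamps targetHost and a TTL on each reply -->
<bean id="replyJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="testMsgMqFactoryBeanCache1"/>
    <property name="explicitQosEnabled" value="true"/>
    <property name="timeToLive" value="60000"/>
</bean>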