Fixing okhttp 2.7.5 without going to okhttp3 - spring-boot

I got this vulnerability report:
okhttp-2.7.5.jar | Reference: CVE-2021-0341 | CVSS Score: 7.5 | Category: CWE-295 | In verifyHostName of OkHostnameVerifier.java, there is a possible way to accept a certificate for the wrong domain due to improperly used crypto. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-8.1, Android-9, Android-10, Android-11. Android ID: A-171980069
Can this be fixed without going to okhttp3?
If I change my code to use the okhttp3 library everywhere, everything is OK except migrating com.squareup.okhttp.MultipartBuilder to okhttp3.MultipartBody.Builder, which gives me this error:
java.lang.NullPointerException: trustManager.acceptedIssuers must not be null
at okhttp3.internal.platform.Platform.buildTrustRootIndex(Platform.kt:163)
at okhttp3.internal.platform.Platform.buildCertificateChainCleaner(Platform.kt:160)
at okhttp3.OkHttpClient$Builder.sslSocketFactory(OkHttpClient.kt:754)
at org.apache.camel.support.processor.DelegateSyncProcessor.process(DelegateSyncProcessor.java:65)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:471)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:193)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:398)
at org.apache.camel.component.jetty.CamelContinuationServlet.doService(CamelContinuationServlet.java:245)
at org.apache.camel.http.common.CamelServlet.service(CamelServlet.java:130)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
at java.lang.Thread.run(Thread.java:748)
Also, I'm limited to Java 8.
Thanks.

It's a pretty bogus CVE in that you need to use the HostnameVerifier API directly with untrusted input to exploit it. You're probably not doing that; the interface is designed for users to plug in a custom implementation, and it is rarely called directly. (OkHttp sanitizes input before passing it to this API.)
If you confirm you're not using HostnameVerifier directly, your vulnerability is mitigated.
But you should upgrade anyway, because OkHttp 2.x is long obsolete.
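For illustration only, here is a hypothetical sketch of what "using the HostnameVerifier API directly with untrusted input" means. The class and method names below are made up; only the javax.net.ssl types are real. If nothing in your codebase resembles this pattern, you fall under the mitigated case above.
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

public class HostnameVerifierUsageCheck {
    // The risky pattern CVE-2021-0341 describes: handing an unsanitized,
    // attacker-controlled hostname straight to a HostnameVerifier yourself.
    // Normal OkHttp usage (client.newCall(request).execute()) never does this;
    // the client sanitizes the hostname before the verifier sees it.
    static boolean riskyDirectCall(HostnameVerifier verifier,
                                   String untrustedHostname,
                                   SSLSession session) {
        return verifier.verify(untrustedHostname, session);
    }
}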

The vulnerability got fixed with this, and I'm now using okhttp3:
TrustManager[] certs = new TrustManager[] { new X509TrustManager() {
    private X509TrustManager standardTrustManager = null;

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[]{};
    }

    @Override
    public void checkServerTrusted(X509Certificate[] certificates, String authType)
            throws CertificateException {
        if ((certificates != null) && (certificates.length == 1)) {
            certificates[0].checkValidity();
        } else {
            standardTrustManager.checkServerTrusted(certificates, authType);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] certificates, String authType)
            throws CertificateException {
        standardTrustManager.checkClientTrusted(certificates, authType);
    }
} };
But I got a new error:
com.example.HTTPS$HTTPSError: javax.net.ssl.SSLException: java.lang.NullPointerException
at org.apache.camel.support.processor.DelegateSyncProcessor.process(DelegateSyncProcessor.java:65)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:471)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:193)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:398)
at org.apache.camel.component.jetty.CamelContinuationServlet.doService(CamelContinuationServlet.java:245)
at org.apache.camel.http.common.CamelServlet.service(CamelServlet.java:130)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
Caused by: javax.net.ssl.SSLException: java.lang.NullPointerException
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1903)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1886)
Caused by: java.lang.NullPointerException
at sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:1091)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223)
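For what it's worth, both errors point in the same direction: the custom trust manager never initializes its delegate (standardTrustManager stays null), so okhttp3 and the JDK handshake code end up working with incomplete trust material. A minimal sketch (my own, not from the thread) of obtaining the JVM's default X509TrustManager and handing it to okhttp3, assuming the standard trust store is what you want, could look roughly like this:
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import okhttp3.OkHttpClient;

public class DefaultTrustClient {
    public static OkHttpClient build() throws Exception {
        // Load the platform's default trust managers (a null KeyStore means the default JVM trust store).
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        X509TrustManager trustManager = null;
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                trustManager = (X509TrustManager) tm;
                break;
            }
        }
        if (trustManager == null) {
            throw new IllegalStateException("No X509TrustManager found in the default TrustManagerFactory");
        }
        // okhttp3 wants the trust manager alongside the socket factory so it can
        // build its certificate chain cleaner (the code that threw the NPE in the first trace).
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, new TrustManager[] { trustManager }, null);
        return new OkHttpClient.Builder()
                .sslSocketFactory(sslContext.getSocketFactory(), trustManager)
                .build();
    }
}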

Related

How to propagate an exception from a TCP outbound gateway to upstream SI components

I have been searching the internet and trying different ways to make this work, but I couldn't get it working.
My system needs to connect to multiple TCP server endpoints (which we call payer endpoints) as a TCP client application. The client components include throttling and queuing components, and they are initialized dynamically, as a separate application context, based on the payer configurations stored in the DB. I have created a TCP outbound gateway for this purpose, which is initialized in a dynamic context based on the ideas given in
https://github.com/spring-projects/spring-integration-samples/tree/master/advanced
What I want is for exceptions to reach the calling code when the payersGateway component is invoked and an error occurs, but I am unable to receive the exceptions back.
My context file for TCP client connectivity is as follows:
dynamic-tcp-gateway-context file
<context:property-placeholder/>
<int-ip:tcp-connection-factory id="client"
type="client"
host="${host}"
port="${port}"
single-use="true"
so-timeout="${soTimeout}"
serializer="byteArrayLfSerializer"
deserializer="ediTcpSerializer"
/>
<bean id="ediTcpSerializer" class="com.abc.throttling.EdiTcpSerializer">
<property name="maxMessageSize" value="20480000"/>
</bean>
<bean id="byteArrayLfSerializer" class="org.springframework.integration.ip.tcp.serializer.ByteArrayLfSerializer">
<property name="maxMessageSize" value="20480000"/>
</bean>
<int:channel id="toPayerChannel">
<int:dispatcher task-executor="producerThreadExecutor"/>
</int:channel>
<task:executor id="producerThreadExecutor" pool-size="10" queue-capacity="50" rejection-policy="ABORT"/>
<int:channel id="throttlerChannel">
<int:priority-queue capacity="${queueSize:1000}"/> <!-- example queue size; you can increase this capacity based on your requirement -->
</int:channel>
<int:bridge input-channel="toPayerChannel" output-channel="throttlerChannel" />
<int-ip:tcp-outbound-gateway id="outGateway"
request-channel="throttlerChannel"
connection-factory="client"
request-timeout="10000"
reply-timeout="${replyTimeout}">
<int:poller id="tcpPoller" error-channel="errorChannel" fixed-rate="1000" max-messages-per-poll="${messageRate:20}"/>
</int-ip:tcp-outbound-gateway>
<int:transformer input-channel="errorChannel"
ref="exceptionTransformer" method="createErrorResponse"/>
<bean id="exceptionTransformer" class="com.stella.healthenet.throttling.TcpGatewayExceptionTransformer"/>
<int:object-to-string-transformer input-channel="errorChannel"/>
The application context for the main payers connectivity is as follows:
main-payers-gateway-context file
<context:component-scan base-package="com.abc.throttling" />
class="com.abc.throttling.DynamicTcpChannelResolverTest" id="dynamicTcpChannelResolverTest"/>-->
<int:gateway id="payersGateway"
service-interface="com.abc.gateway.PayersGateway"
default-request-channel="toDynRouter"
default-request-timeout="1000000"
default-reply-timeout="${payersGateway.defaultReplyTimout}"
/>
<int:converter ref="converter"/>
<bean class="com.abc.manager.edi.core.core4.ByteArrayToStringConverter" id="converter"/>
<int:converter ref="converter2"/>
<bean class="com.abc.manager.edi.core.core2.Core2RealTimeResponseToString" id="converter2"/>
<int:converter ref="converter3"/>
<bean class="com.abc.manager.edi.core.core4.Core4RealTimeResponseToString" id="converter3"/>
<int:channel id="toDynRouter" />
<int:router resolution-required="true" input-channel="toDynRouter"
expression="#dynamicPayerChannelResolver.resolve(headers['payer'])" >
</int:router>
The code for the dynamic channel resolver is as follows:
@Component("dynamicPayerChannelResolver")
public class DynamicPayerChannelResolver {
@Value("${throttling.payersChannelQueueSize}")
private String payersChannelQueueSize;
@Value("${throttling.tcpSoTimeout}")
private String tcpSoTimeout;
@Value("${tcpOutboundGateway.replyTimeout}")
private String tcpReplyTimeout;
@Autowired
private ApplicationContext applicationContext;
public static final int MAX_CACHE_SIZE = 30;
private static final org.apache.logging.log4j.Logger LOG = LogManager.getLogger();
@Autowired
PayerRepositoryService payerRepositoryService;
private final Map<UUID, MessageChannel> channels = Collections.synchronizedMap(new LinkedHashMap<UUID,
MessageChannel>(){
private static final long serialVersionUID = 1L;
@Override
protected boolean removeEldestEntry(
Entry<UUID, MessageChannel> eldest) {
//This returning true means the least recently used
//channel and its application context will be closed and removed
boolean remove = size() > MAX_CACHE_SIZE;
if(remove) {
MessageChannel channel = eldest.getValue();
ConfigurableApplicationContext ctx = contexts.get(channel);
if(ctx != null) { //shouldn't be null ideally
ctx.close();
contexts.remove(channel);
}
}
return remove;
}
});
private final Map<MessageChannel, ConfigurableApplicationContext> contexts =
new HashMap<MessageChannel, ConfigurableApplicationContext>();
/**
* Resolve a payer to a channel, where each payer gets a private
* application context and the channel is the inbound channel to that
* application context.
*
* @param payer
* @return a channel
*/
public MessageChannel resolve(UUID payer) {
MessageChannel channel = this.channels.get(payer);
if (channel == null) {
channel = createNewPayerChannel(payerRepositoryService.findPayerByPayerId(payer));
}
return channel;
}
private synchronized MessageChannel createNewPayerChannel(PayerEntity payer) {
MessageChannel channel = this.channels.get(payer.getPayerId());
if (channel == null) {
ConfigurableApplicationContext ctx;
if(TransportType.SOAP_WSDL.getValue().equals(payer.getConnTransportType())){
ctx = new ClassPathXmlApplicationContext(
new String[] { "dynamic-soap-gateway-context.xml" },
false,applicationContext);
}
else{
ctx = new ClassPathXmlApplicationContext(
new String[] { "dynamic-tcp-gateway-context.xml" },
false,applicationContext);
}
this.setEnvironmentForPayer(ctx, payer);
ctx.refresh();
channel = ctx.getBean("toPayerChannel", MessageChannel.class);
this.channels.put(payer.getPayerId(), channel);
//Works because the same channel reference is always presented
this.contexts.put(channel, ctx);
}
return channel;
}
public void updatePollerConfigs() {
Iterator it = this.channels.entrySet().iterator();
while(it.hasNext()){
Map.Entry pair = (Map.Entry)it.next();
PayerEntity payer = this.payerRepositoryService.findPayerByPayerId((UUID)pair.getKey());
final String transportType = payer.getConnTransportType();
if(TransportType.SOAP_WSDL.getValue().equals(transportType)){
MessageChannel channel= this.channels.get(payer);
ConfigurableApplicationContext ctx;
ctx=this.contexts.get(channel);
PollingConsumer pc = ctx.getBean("soapPoller",PollingConsumer.class);
//TODO: this value needs to be set from payer configs in db.
pc.setMaxMessagesPerPoll(10);
}
else if (TransportType.X12_SOCKET.getValue().equals(transportType)) {
MessageChannel channel= this.channels.get(payer);
ConfigurableApplicationContext ctx;
ctx=this.contexts.get(channel);
PollingConsumer pc = ctx.getBean("outGateway",PollingConsumer.class);
//TODO: this value needs to be set from payer configs in db.
pc.setMaxMessagesPerPoll(10);
}
}
}
private void setEnvironmentForPayer(ConfigurableApplicationContext ctx,
PayerEntity payer) {
final String transportType = payer.getConnTransportType();
final String endPoint = payer.getConnEndPoint();
if(payer.getMessageRate()==null){
throw new NullPointerException("MessageRate value is NULL in DB. Please specify a MessageRate value in " +
"Payers DB for PayerID: " + payer.getPayerId());
}
final String messageRate = payer.getMessageRate();
StandardEnvironment env = new StandardEnvironment();
Properties props = new Properties();
// populate properties for payer
if(TransportType.SOAP_WSDL.getValue().equals(transportType)){
props.setProperty("endpoint", endPoint);
props.setProperty("messageRate",messageRate);
props.setProperty("queueSize",payersChannelQueueSize);
}
else if (TransportType.X12_SOCKET.getValue().equals(transportType)) {
if (!endPoint.contains(":")) {
throw new Exception(ErrorType.UNEXPECTED, "Payer X12 Socket endpoint must include a port.");
}
final String ip;
if (endPoint.contains("/")) {
ip = endPoint.substring(endPoint.lastIndexOf('/') + 1, endPoint.lastIndexOf(":"));
} else {
ip = endPoint.substring(0, endPoint.lastIndexOf(":"));
}
final String port = endPoint.substring(endPoint.lastIndexOf(':') + 1);
props.setProperty("host", ip);
props.setProperty("port", port);
props.setProperty("messageRate",messageRate);
props.setProperty("queueSize",payersChannelQueueSize);
props.setProperty("soTimeout", tcpSoTimeout);
props.setProperty("replyTimeout",tcpReplyTimeout);
}
PropertiesPropertySource pps = new PropertiesPropertySource("payerChannelProps", props);
env.getPropertySources().addLast(pps);
ctx.setEnvironment(env);
}
I want the exceptions generated in the downstream components to surface at the payersGateway interface's send method call, shown below:
String response = payersGateway.send(
MessageBuilder.withPayload(requestMessage)
.setHeader("endpointUrl", endPoint)
.setHeader("senderId", coreSenderId)
.setHeader("receiverId", payerIdCode)
.setHeader("username", username)
.setHeader("password", password)
.setHeader("payloadType", payloadType)
.setHeader("nginxUrl", nginxUrl)
.setHeader("payer", payerId)
.setHeader("transType", transType)
.setHeader("priority", 1)
.build());
where payersGateway is a simple interface backed by the Gateway component in the main context file:
public interface PayersGateway {
public String send(Message<String> message);
}
Please see the stack trace of the produced exception below:
[task-scheduler-1] ERROR org.springframework.integration.ip.tcp.TcpOutboundGateway - Tcp Gateway exception
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at javax.net.DefaultSocketFactory.createSocket(SocketFactory.java:271)
at org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory.createSocket(TcpNetClientConnectionFactory.java:76)
at org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory.buildNewConnection(TcpNetClientConnectionFactory.java:49)
at org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory.obtainNewConnection(AbstractClientConnectionFactory.java:116)
at org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory.obtainConnection(AbstractClientConnectionFactory.java:79)
at org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory.getConnection(AbstractClientConnectionFactory.java:69)
at org.springframework.integration.ip.tcp.TcpOutboundGateway.handleRequestMessage(TcpOutboundGateway.java:130)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.endpoint.PollingConsumer.handleMessage(PollingConsumer.java:129)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:272)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:344)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[task-scheduler-1] ERROR org.springframework.integration.handler.LoggingHandler - org.springframework.messaging.MessagingException: Failed to send or receive; nested exception is java.net.ConnectException: Connection refused (Connection refused)
at org.springframework.integration.ip.tcp.TcpOutboundGateway.handleRequestMessage(TcpOutboundGateway.java:158)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.endpoint.PollingConsumer.handleMessage(PollingConsumer.java:129)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:272)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:344)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at javax.net.DefaultSocketFactory.createSocket(SocketFactory.java:271)
at org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory.createSocket(TcpNetClientConnectionFactory.java:76)
at org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory.buildNewConnection(TcpNetClientConnectionFactory.java:49)
at org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory.obtainNewConnection(AbstractClientConnectionFactory.java:116)
at org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory.obtainConnection(AbstractClientConnectionFactory.java:79)
at org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory.getConnection(AbstractClientConnectionFactory.java:69)
at org.springframework.integration.ip.tcp.TcpOutboundGateway.handleRequestMessage(TcpOutboundGateway.java:130)
... 21 more
It should work as you need; exceptions before the throttlerChannel will be thrown directly to the caller; exceptions after that channel will be sent to the gateway by the poller's error handler.
Perhaps your gateway timeout is too short?
Turning on DEBUG logging and following the message flow should help you debug it.
EDIT
You can fix the missing failedMessage by adding a bean of this type to the TCP outbound gateway's <int-ip:request-handler-advice-chain />...
public class FailedMessagePopulator extends AbstractRequestHandlerAdvice {
@Override
protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) throws Exception {
try {
return callback.execute();
}
catch (MessagingException e) {
if (e.getFailedMessage() == null) {
throw new MessageHandlingException(message, e.getMessage(), e);
}
else {
throw e;
}
}
catch (Exception e) {
throw new MessageHandlingException(message, e.getMessage(), e);
}
}
}
Note that you will need to add the error channel to the <gateway/> since the poller will now see the failed message and send the exception back to the gateway for error handling.
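To make that concrete, here is a rough, hypothetical caller-side sketch (the class and method names are mine, not from the question) of what the calling code sees once the advice above is in place: the ConnectException surfaces from payersGateway.send() wrapped in a MessagingException whose getFailedMessage() is now non-null.
import java.util.UUID;
import com.abc.gateway.PayersGateway;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.support.MessageBuilder;

public class PayerCaller {
    private final PayersGateway payersGateway;

    public PayerCaller(PayersGateway payersGateway) {
        this.payersGateway = payersGateway;
    }

    public String sendToPayer(String requestMessage, UUID payerId) {
        Message<String> message = MessageBuilder.withPayload(requestMessage)
                .setHeader("payer", payerId)
                .build();
        try {
            return payersGateway.send(message);
        } catch (MessagingException e) {
            // With the advice in place, the message that failed is available here.
            Message<?> failed = e.getFailedMessage();
            // Log it or translate it into an application-level error as needed.
            throw e;
        }
    }
}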

Getting connection timed out error in microservice

I have built a microservice using Java 8 and Spring Boot 2. From this microservice, I'm trying to consume another REST API. However, I'm getting the following error in Chrome. I have already disabled the Windows firewall and the McAfee antivirus firewall but am still getting the same error. I can call the REST API directly using Postman but not through my microservice.
Error:
java.lang.IllegalStateException: The underlying HTTP client completed without emitting a response.
2018-06-12 15:21:29.300 ERROR 17996 --- [ctor-http-nio-3] .a.w.r.e.DefaultErrorWebExceptionHandler : Failed to handle request [GET http://localhost:8080/category/search]
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection timed out: no further information: test.usdemo.xyz.com/92.54.41.24:443
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_171]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_171]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:325) ~[netty-transport-4.1.24.Final.jar:4.1.24.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[netty-transport-4.1.24.Final.jar:4.1.24.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[netty-transport-4.1.24.Final.jar:4.1.24.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) ~[netty-transport-4.1.24.Final.jar:4.1.24.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) ~[netty-transport-4.1.24.Final.jar:4.1.24.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[netty-transport-4.1.24.Final.jar:4.1.24.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) ~[netty-common-4.1.24.Final.jar:4.1.24.Final]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_171]
Caused by: java.net.ConnectException: Connection timed out: no further information
... 10 common frames omitted
Controller class:
@RestController
public class CategorySearchController {

    private final CategorySearchService categorySearchService;

    @Autowired
    public CategorySearchController(CategorySearchService categorySearchService) {
        this.categorySearchService = categorySearchService;
    }

    @GetMapping(path = "/search-category")
    public Mono<CategoryResponse> searchCategories(SearchRequest categorySearchRequest) {
        return categorySearchService
                .searchCategories()
                .switchIfEmpty(Mono.error(
                        new EntityNotFoundException("No category matching " + categorySearchRequest.getSearchTerm() + " was found")));
    }
}
Service class:
@Service
public class CategorySearchServiceImpl implements CategorySearchService {

    private String baseUrl = "https://demo0954903.mockable.io";

    @Override
    public Mono<CategoryResponse> searchCategories() {
        WebClient webClient = WebClient.create(baseUrl);
        return webClient
                .get()
                .uri("/category-search")
                .retrieve()
                .bodyToMono(CategoryResponse.class);
    }
}
I found the solution for this issue. I needed to add the proxy to the WebClient as follows:
private final WebClient webClient;
ReactorClientHttpConnector connector = new ReactorClientHttpConnector(
options -> options.httpProxy(addressSpec -> {
return addressSpec.host(PROXY_HOST).port(PROXY_PORT);
}));
this.webClient = webClientBuilder.baseUrl(BASE_URL).clientConnector(connector).build();
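For anyone on a newer Spring Boot 2.x / Reactor Netty version, where the options-based ReactorClientHttpConnector constructor used above no longer exists, the equivalent wiring looks roughly like the sketch below. This is an assumption-laden sketch rather than part of the original answer: the class name ProxiedWebClientFactory is made up, and the ProxyProvider import path differs between Reactor Netty versions (reactor.netty.tcp in older 0.8.x releases, reactor.netty.transport in 1.x).
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.transport.ProxyProvider;

public class ProxiedWebClientFactory {

    // baseUrl, proxyHost and proxyPort are placeholders for your own values.
    public static WebClient create(String baseUrl, String proxyHost, int proxyPort) {
        // Configure an HTTP proxy on the underlying Reactor Netty client.
        HttpClient httpClient = HttpClient.create()
                .proxy(spec -> spec.type(ProxyProvider.Proxy.HTTP)
                        .host(proxyHost)
                        .port(proxyPort));
        return WebClient.builder()
                .baseUrl(baseUrl)
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}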

InvocationTargetException in Yarn task with Hadoop

While running Kafka -> Apache Apex -> HBase, I am getting the following exception in YARN tasks:
com.datatorrent.stram.StreamingAppMasterService: Application master, appId=4, clustertimestamp=1479188884109, attemptId=2
2016-11-15 11:59:51,068 INFO org.apache.hadoop.service.AbstractService: Service com.datatorrent.stram.StreamingAppMasterService failed in state INITED; cause: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:130)
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:156)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:241)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:330)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:330)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:444)
And my DataTorrent log shows the following exception. I am running an app that implements a Kafka -> Apex -> HBase streaming pipeline.
Connecting to ResourceManager at hduser1/127.0.0.1:8032
16/11/15 17:47:38 WARN client.EventsAgent: Cannot read events for application_1479208737206_0008: java.io.FileNotFoundException: File does not exist: /user/hduser1/datatorrent/apps/application_1479208737206_0008/events/index.txt
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1893)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1834)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1814)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1786)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:552)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
Adding the code:
public void populateDAG(DAG dag, Configuration conf) {
    KafkaSinglePortInputOperator in =
            dag.addOperator("kafkaIn", new KafkaSinglePortInputOperator());
    in.setInitialOffset(AbstractKafkaInputOperator.InitialOffset.EARLIEST.name());
    LineOutputOperator out = dag.addOperator("fileOut", new LineOutputOperator());
    dag.addStream("data", in.outputPort, out.input);
}
LineOutputOperator extends AbstractFileOutputOperator:
public class LineOutputOperator extends AbstractFileOutputOperator<byte[]> {
    private static final String NL = System.lineSeparator();
    private static final Charset CS = StandardCharsets.UTF_8;

    @NotNull
    private String baseName;

    @Override
    public byte[] getBytesForTuple(byte[] t) {
        String result = new String(t, CS) + NL;
        return result.getBytes(CS);
    }

    @Override
    protected String getFileName(byte[] tuple) {
        return baseName;
    }

    public String getBaseName() { return baseName; }
    public void setBaseName(String v) { baseName = v; }
}
How to resolve this problem?
Thanks.
Can you share some details about your environment, such as which versions of Hadoop and Apex you are using? Also, which log does this exception appear in?
Just as a simple sanity check, can you run the simple Maven archetype-generated application described at: http://docs.datatorrent.com/beginner/
If that works, try running the fileIO and kafka applications at:
https://github.com/DataTorrent/examples/tree/master/tutorials
If those work ok we can look at the details of your code.
I got the solution.
The problem was related to the expiry of my license. I reinstalled a new one, and the actual code works fine.

Connection closed when saving blob

I'm having a problem when uploading files in a WebLogic application.
Once the file uploader has finished saving the file, the following exception appears ("Conexión cerrada" is Spanish for "connection closed"):
java.sql.SQLException: Conexión cerrada
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:271)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:445)
at oracle.sql.BLOB.getDBAccess(BLOB.java:882)
at oracle.sql.BLOB.freeTemporary(BLOB.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.jdbc.support.lob.OracleLobHandler$OracleLobCreator.close(OracleLobHandler.java:412)
at org.springframework.jdbc.support.lob.SpringLobCreatorSynchronization.afterCompletion(SpringLobCreatorSynchronization.java:76)
at org.springframework.transaction.support.TransactionSynchronizationUtils.invokeAfterCompletion(TransactionSynchronizationUtils.java:133)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.invokeAfterCompletion(AbstractPlatformTransactionManager.java:951)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerAfterCompletion(AbstractPlatformTransactionManager.java:926)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:678)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:319)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:116)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy74.guardarFichero(Unknown Source)
In other environments this problem doesn't appear, so I think it must be some configuration of the JDBC driver, but I'm not sure which one I have to modify.
The invoking Struts code:
private void adjuntarFichero()throws Exception {
logger.debug("uploadFileName = "+uploadFileName);
if (uploadFileName!=null && !uploadFileName.equals("")) {
HttpSession session = request.getSession();
HashMap<Long,String> ficheros = (HashMap<Long,String>)session.getAttribute("ficheros");
if (ficheros == null) ficheros = new HashMap<Long,String>();
Fichero file = new FicheroImpl();
file.setNombre(uploadFileName);
file.setFichero(getBytesFromFile(upload));
file = gestorFichero.guardarFichero(file);
ficheros.put(file.getId(), uploadFileName);
session.setAttribute("ficheros", ficheros);
}
}
The error is produced inside the guardarFichero method, which invokes a DAO class that takes care of the save:
public abstract class gestorFichero
extends org.springframework.orm.hibernate3.support.HibernateDaoSupport{
public Fichero guardarFichero (Fichero fichero){
Debug.prec(fichero, "El fichero no puede ser nulo");
return (Fichero)create(TRANSFORM_NONE, fichero);
}
public java.lang.Object create(final int transform, final com.aena.sgc.Fichero fichero)
{
if (fichero == null){
throw new IllegalArgumentException("Fichero.create - 'fichero' can not be null");
}
this.getHibernateTemplate().save(fichero);
return this.transformEntity(transform, fichero);
}
Finally I discovered what the problem was.
It was related to the driver class. Originally we had the datasource configured to use the XA driver oracle.jdbc.xa.client.OracleXADataSource, and whenever we finished uploading a file, the exception was thrown.
Once I changed the driver to oracle.jdbc.OracleDriver, these errors disappeared.

Jooq, Spring, And BoneCP connection closed twice error

I am using Spring 4.0.0, along with jOOQ 3.2.0 and BoneCP 0.8.0, for a web application.
I have the PersistenceContext configured the same as in this guide (please skim it; it's a little too much code to paste here):
http://www.petrikainulainen.net/programming/jooq/using-jooq-with-spring-configuration/
but with a smaller number of max connections and closeConnectionWatch = true for error checking.
From what I can deduce, this guide is a non-XML version of the jOOQ website's own guide, seen here:
http://www.jooq.org/doc/3.2/manual/getting-started/tutorials/jooq-with-spring/
My problem probably comes from not knowing how to use the jOOQ-generated DAOs or the @Transactional annotation. I am getting loads of "connection closed twice" exceptions, which makes my application fundamentally broken. The following stack trace doesn't actually say the connection was closed twice, but the output of closeConnectionWatch says something along the lines of
bonecp connection closed twice detected: first location connection was closed in thread[blah]
closed again in thread[blah2]
Stack trace of SQL Exception after the connection watch does its thing:
Jan 28, 2014 10:51:51 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [appServlet] in context with path [/application] threw exception [Request processing failed; nested exception is org.springframework.jdbc.UncategorizedSQLException: jOOQ; uncategorized SQLException for SQL
<snip> error code [0]; Connection is closed!; nested exception is java.sql.SQLException: Connection is closed!] with root cause java.sql.SQLException: Connection is closed!
at com.jolbox.bonecp.ConnectionHandle.checkClosed(ConnectionHandle.java:459)
at com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:1011)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.invoke(LazyConnectionDataSourceProxy.java:376)
at com.sun.proxy.$Proxy73.prepareStatement(Unknown Source)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy$TransactionAwareInvocationHandler.invoke(TransactionAwareDataSourceProxy.java:240)
at com.sun.proxy.$Proxy73.prepareStatement(Unknown Source)
at org.jooq.impl.ProviderEnabledConnection.prepareStatement(ProviderEnabledConnection.java:112)
at org.jooq.impl.SettingsEnabledConnection.prepareStatement(SettingsEnabledConnection.java:76)
at org.jooq.impl.AbstractResultQuery.prepare(AbstractResultQuery.java:224)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:295)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:324)
at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:1034)
at org.jooq.impl.DAOImpl.fetch(DAOImpl.java:249)
----> at com.myapplication.spring.services.UserService.authenticate(UserService.java:32)
at com.myapplication.spring.controllers.LoginController.doLogin(LoginController.java:35)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:214)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:748)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:931)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:833)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:807)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at com.github.dandelion.datatables.core.web.filter.DatatablesFilter.doFilter(DatatablesFilter.java:73)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:931)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
The arrowed line is the line in the service that makes the call to the database. I have @Autowired DAO objects in the service, as below:
#Service("UserService")
public class UserService implements UserServiceInterface{
#Autowired UsersDao userDao;
#Autowired PasswordServiceInterface passwordService;
#Override
public Users authenticate(String user,String password) {
boolean allowedIn = false;
List<Users> users = userDao.fetch(USERS.USERNAME, user);
//do something here
Other functions I use in similar services make calls using the DSLContext object, like
DSL.select(SOMETHING).from(SOMETABLE).fetch()
The DAO and DSLContext are stored as beans in PersistenceContext like so.
I am autowiring a DSLContext and not a DefaultDSLContext, as the jOOQ guide has a test method at the bottom showing only a DSLContext.
@Bean
public DefaultDSLContext dsl() {
    return new DefaultDSLContext(configuration());
}

@Bean
public UsersDao userDao() { //bad because UsersDao isn't an interface???
    return new UsersDao(configuration());
}
And this is the controller
@Controller
public class LoginController {

    @Autowired UserServiceInterface userService;

    @RequestMapping(value = "/login", method = RequestMethod.GET)
    public String login() {
        return "login";
    }

    @RequestMapping(value = "/login/doLogin", method = RequestMethod.POST)
    public String doLogin(@RequestParam("username") String username, @RequestParam("password") String password, HttpSession session) {
        Users u = userService.authenticate(username, password);
        if (u == null)
            return "redirect:/error";
        else {
            session.setAttribute("user", u.getUserid());
            session.setAttribute("role", u.getRoleid());
            session.setAttribute("retailgroup", u.getGroupid());
            return "redirect:/dashboard";
        }
    }
UserService is not the only service I get errors in -- all of my services are similar in that they contain one or both of a DAO and a DSLContext autowired object, with the same DAO configuration constructor as userDao(), for example productsDao(). The Products service has such a DAO and a DSLContext object, and much the same as the UserService it makes calls to the database.
Sometimes I'll get this connection issue logging in, other times it will be fine and I can browse the site and look at products for a little while, but then randomly I'll get a 'connection is closed!' error coming from another service (there are about 5 that are written in the same way).
So my questions are:
Where do I use the @Transactional annotation, and what does it actually do? Does my omission of the @Transactional annotation mean I am causing myself problems? I have previously added it at all locations that use the DB, but I can't be sure it was actually helping, as I was still getting the same errors.
Is it an issue with my scope for something? I know beans default to singleton scope - I've written my controllers in such a way that they use session-stored attributes to pass to the services (which are all left as default singletons), so that they may only select data that a certain user is allowed to see.
Since the connection pool is closing a connection twice, does this mean that thread A and thread B go for a connection at the same time, do something with it, and then both close it? Why is this happening with the configuration from the above guide? How do I ensure thread safety, or is that not the problem?
Are the DAO beans supposed to be interfaces? From my brief history with Spring I am led to believe many @Autowired beans should be. Am I supposed to be using org.jooq.impl.DAOImpl, which all the jOOQ-generated DAOs seem to extend?
@Bean
public org.jooq.impl.DAOImpl usersDao() {
    return new UsersDao(configuration());
}
Apologies for the long question, any help would be greatly appreciated. Thanks.
Edit: this is my configuration in the PersistenceContext class:
@Configuration
@PropertySource("classpath:config.properties")
public class PersistenceContext {
@Autowired
private Environment env;
@Bean(destroyMethod = "close")
public DataSource dataSource() {
BoneCPDataSource dataSource = new BoneCPDataSource();
dataSource.setDriverClass(env.getRequiredProperty("db.driver"));
dataSource.setJdbcUrl(env.getRequiredProperty("db.url"));
dataSource.setUsername(env.getRequiredProperty("db.username"));
dataSource.setPassword(env.getRequiredProperty("db.password"));
dataSource.setMaxConnectionsPerPartition(20);
dataSource.setPartitionCount(2);
dataSource.setCloseConnectionWatch(true);
return dataSource;
}
@Bean
public LazyConnectionDataSourceProxy lazyConnectionDataSource() {
return new LazyConnectionDataSourceProxy(dataSource());
}
@Bean
public TransactionAwareDataSourceProxy transactionAwareDataSource() {
return new TransactionAwareDataSourceProxy(lazyConnectionDataSource());
}
@Bean
public DataSourceTransactionManager transactionManager() {
return new DataSourceTransactionManager(lazyConnectionDataSource());
}
@Bean
public DataSourceConnectionProvider connectionProvider() {
return new DataSourceConnectionProvider(transactionAwareDataSource());
}
@Bean
public JOOQToSpringExceptionTransformer jooqToSpringExceptionTransformer() {
return new JOOQToSpringExceptionTransformer();
}
@Bean
public DefaultConfiguration configuration() {
DefaultConfiguration jooqConfiguration = new DefaultConfiguration();
jooqConfiguration.set(connectionProvider());
jooqConfiguration.set(new DefaultExecuteListenerProvider(
jooqToSpringExceptionTransformer()
));
String sqlDialectName = env.getRequiredProperty("jooq.sql.dialect");
SQLDialect dialect = SQLDialect.valueOf(sqlDialectName);
jooqConfiguration.set(dialect);
return jooqConfiguration;
}
@Bean
public DefaultDSLContext dsl() {
return new DefaultDSLContext(configuration());
}
@Bean
public UsersDao userDao() {
return new UsersDao(configuration());
}
}
After re-reading your question and your chat, I can tell that it is most likely due to your using version 3.2.0, which had this rather severe bug:
https://github.com/jOOQ/jOOQ/issues/2863
That bug was fixed in 3.2.2, so you should upgrade to that version or later.
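Separately from the upgrade, and only as a hedged sketch of the @Transactional placement asked about above (it is not the root cause of the closed connections): with @EnableTransactionManagement added to the PersistenceContext class, a common arrangement is to annotate the service methods, so each database-touching call runs in one transaction bound to the DataSourceTransactionManager bean already defined. The snippet below assumes the generated jOOQ classes (Users, UsersDao, the USERS table) and the application's own interfaces are on the classpath.
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
// static import of the generated Tables.USERS assumed, as in the original snippet

@Service("UserService")
public class UserService implements UserServiceInterface {

    @Autowired UsersDao userDao;

    @Override
    @Transactional(readOnly = true) // one transaction, one pooled connection per call
    public Users authenticate(String user, String password) {
        List<Users> users = userDao.fetch(USERS.USERNAME, user);
        // password check omitted here, as in the original snippet
        return users.isEmpty() ? null : users.get(0);
    }
}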
