How to correctly instantiate RestTemplate without leaking resources - spring

I have the following bean definitions:
@Bean
public RestTemplate produceRestTemplate(ClientHttpRequestFactory requestFactory) {
    RestTemplate restTemplate = new RestTemplate(requestFactory);
    restTemplate.setErrorHandler(restTemplateErrorHandler);
    return restTemplate;
}

@Bean
public ClientHttpRequestFactory createRequestFactory() {
    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
    connectionManager.setMaxTotal(maxTotalConn);
    connectionManager.setDefaultMaxPerRoute(maxPerChannel);
    RequestConfig config = RequestConfig.custom().setConnectTimeout(100000).build();
    CloseableHttpClient httpClient = HttpClients.createDefault();
    return new HttpComponentsClientHttpRequestFactory(httpClient);
}
The code works well, but the problem is that Fortify flags the code above as potentially problematic with the following:
"The function createRequestFactory() sometimes
fails to release a socket allocated by createDefault() on line 141."
Does anyone have ideas on how to do this correctly without Fortify raising alarms?
Thanks in advance

I am pretty sure that you don't need to do anything. It looks like a Fortify issue: it may not be updated for this usage scenario. Code analyzers usually provide a mechanism for granting exceptions; these tools are not always correct.
A Bit of Discussion
Imagine you were using CloseableHttpClient in a scenario with no @Bean or HttpComponentsClientHttpRequestFactory; then I would say Fortify is correct, because that is the very intention of using a java.io.Closeable.
Spring beans are usually singletons intended for instance reuse, so Fortify should know that you are not creating multiple instances and that close() on the AutoCloseable will be called when the factory is destroyed at shutdown.
If you look at the code of org.springframework.http.client.HttpComponentsClientHttpRequestFactory, this is there:
/**
 * Shutdown hook that closes the underlying
 * {@link org.apache.http.conn.HttpClientConnectionManager ClientConnectionManager}'s
 * connection pool, if any.
 */
@Override
public void destroy() throws Exception {
    if (this.httpClient instanceof Closeable) {
        ((Closeable) this.httpClient).close();
    }
}
Fortify is looking at the code in isolation rather than in an integrated way, so it flags it.
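If you would rather make the lifecycle explicit anyway (and also wire in the connection manager and request config that the posted createRequestFactory() builds but never uses), here is a hedged sketch along those lines; the values are the ones from the question, and Spring closes the client bean at shutdown just as HttpComponentsClientHttpRequestFactory.destroy() would:

@Bean(destroyMethod = "close") // Spring closes the client (and its connection pool) at shutdown
public CloseableHttpClient httpClient() {
    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
    connectionManager.setMaxTotal(maxTotalConn);
    connectionManager.setDefaultMaxPerRoute(maxPerChannel);

    RequestConfig config = RequestConfig.custom()
            .setConnectTimeout(100000)
            .build();

    // Build the client from the pool and request config instead of createDefault()
    return HttpClients.custom()
            .setConnectionManager(connectionManager)
            .setDefaultRequestConfig(config)
            .build();
}

@Bean
public ClientHttpRequestFactory createRequestFactory(CloseableHttpClient httpClient) {
    return new HttpComponentsClientHttpRequestFactory(httpClient);
}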

Check these two points to solve the problem:
1. If you never call httpClient.close(), you can eventually run out of sockets.
2. If your code calls this method automatically somewhere, there is no vulnerability and no problem.
Either way, this could be a false positive, depending on the versions of Java and the libraries you use.

Related

Spring Integration Flow with @RestController timing issue

A simple @RestController is connected with a @MessagingGateway to an IntegrationFlow.
After a load test we saw within the tracing that we lose "a lot of time" before even starting the processing within the flow:
[Tracing result screenshot]
In this example we can see that over 90 ms are spent before the message is sent to the flow.
Does anyone have an idea what leads to this behavior?
As far as I understood the documentation, everything is handled in the sender thread, and therefore no special worker threads are created.
We use the @RestController since we need to create the documentation with springdoc-openapi-ui.
Example code:
RestController
@RestController
public class DescriptionEndpoint {

    HttpMessageGateway httpMessageGateway;

    public Result findData(@Valid dataRequest dataRequest) {
        final Map<String, Object> headerParams = new HashMap<>();
        return httpMessageGateway.basicDataDescriptionFlow(dataRequest, headerParams);
    }
}
Gateway
@MessagingGateway
public interface HttpMessageGateway {

    @Gateway(requestChannel = "startDataFlow.input")
    Result basicDataDescriptionFlow(@Payload dataRequest prDataRequest, @Headers Map<String, Object> map);
}
IntegrationFlow
public class ExampleFlow {

    @Bean
    public IntegrationFlow startDataFlow() {
        return new FlowExtension()
                .handle(someHandler1)
                .handle(someHandler2)
                .handle(someHandler3)
                .get();
    }
}
After adding some more traces I realized that this timing issue is caused by my Spring Security configuration.
Unfortunately, I had thought the span only represented the time after the start of findData(..), but it seems the tracing already starts in the proxy methods and the security filter chain.
After improving the implementation of our JWT token filter, the times spent in these endpoints are OK.

jOOQ configuration per request

I'm struggling to find a way to define some settings in DSLContext per request.
What I want to achieve is the following:
I've got a Spring Boot API and a database with multiple schemas that share the same structure.
Depending on some parameters of each request I want to connect to one specific schema; if no parameter is set, I want to connect to no schema and fail.
To avoid connecting to any schema, I wrote the following:
@Autowired
public DefaultConfiguration defaultConfiguration;

@PostConstruct
public void init() {
    Settings currentSettings = defaultConfiguration.settings();
    Settings newSettings = currentSettings.withRenderSchema(false);
    defaultConfiguration.setSettings(newSettings);
}
Which I think works fine.
Now I need a way to set the schema in DSLContext per request, so that every time I use DSLContext during a request I automatically get a connection to that schema, without affecting other requests.
My idea is to intercept the request, get the parameters, and do something like DSLContext.setSchema(), but in a way that applies to all usage of DSLContext during the current request.
I tried to define a request-scoped bean of a custom ConnectionProvider as follows:
@Component
@RequestScope
public class ScopeConnectionProvider implements ConnectionProvider {

    @Override
    public Connection acquire() throws DataAccessException {
        try {
            Connection connection = dataSource.getConnection();
            String schemaName = getSchemaFromRequestContext();
            connection.setSchema(schemaName);
            return connection;
        } catch (SQLException e) {
            throw new DataAccessException("Error getting connection from data source " + dataSource, e);
        }
    }

    @Override
    public void release(Connection connection) throws DataAccessException {
        try {
            connection.setSchema(null);
            connection.close();
        } catch (SQLException e) {
            throw new DataAccessException("Error closing connection " + connection, e);
        }
    }
}
But this code only executes on the first request; subsequent requests don't execute it and hence use the schema of the first request.
Any tips on how this can be done?
Thank you
Seems like your request-scope bean is getting injected into a singleton.
You're already using @RequestScope, which is good, but you may have forgotten to add @EnableAspectJAutoProxy on your Spring configuration class.
@Configuration
@EnableAspectJAutoProxy
class Config {
}
This will make your bean run behind a proxy inside the singleton and therefore change per request.
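For completeness, a rough sketch of how the request-scoped provider might be handed to jOOQ, assuming the DefaultConfiguration bean from the question (the dialect is a placeholder, not taken from the original post):

// Sketch only: register the scoped ConnectionProvider proxy with jOOQ so that
// every acquire() resolves the schema from the current request.
@Autowired
public void wireJooq(DefaultConfiguration defaultConfiguration,
                     ConnectionProvider scopeConnectionProvider) {
    defaultConfiguration.set(scopeConnectionProvider);
    defaultConfiguration.set(SQLDialect.POSTGRES); // placeholder dialect
}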
Never mind, it seems the problem I was having was caused by unexpected behaviour of a cacheable function I defined. The function returns a value from the cache although the input is different, which is why no new connection is acquired. I still need to figure out what causes this unexpected behaviour, though.
For now, I'll stick with this approach since it seems fine at a conceptual level, although I expect there is a better way to do this.
*** UPDATE ***
I found out that this was the problem I had with the cache: Does java spring caching break reflection?
*** UPDATE 2 ***
It seems that setting the schema on the underlying datasource connection is ignored. I'm currently trying this other approach I just found: https://github.com/LinkedList/spring-jooq-multitenancy
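For reference, jOOQ also supports mapping schemas at render time via Settings, which is the general idea behind such multitenancy setups; a hedged sketch, where the input schema name is a placeholder for whatever your generated code uses:

// Sketch only: derive a per-request Configuration whose render mapping replaces
// the schema known to the generated code with the schema resolved from the request.
Settings settings = new Settings()
        .withRenderMapping(new RenderMapping()
                .withSchemata(new MappedSchema()
                        .withInput("COMMON_SCHEMA")                   // placeholder
                        .withOutput(getSchemaFromRequestContext()))); // helper from the question

DSLContext perRequestCtx = DSL.using(defaultConfiguration.derive(settings));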

Spring Boot JMS listener with ActiveMQ is very slow

I have a Spring Boot application which consumes my custom serializable messages from an ActiveMQ queue. So far it has worked; however, the consumption rate is very poor, only 1-20 msg/sec.
@JmsListener(destination = "${channel.consumer.destination}", concurrency = "${channel.consumer.maxConcurrency}")
public void receive(IMessage message) {
    processor.process(message);
}
The above is a snippet of my channel consumer class. It has a processor instance (injected/autowired; inside it I have an @Async service, so I can assume the listener thread is released as soon as the message enters the @Async method), and it uses the Spring Boot ActiveMQ default connection factory, which I set from the application properties:
# ACTIVEMQ (ActiveMQProperties)
spring.activemq.broker-url= tcp://localhost:61616?keepAlive=true
spring.activemq.in-memory=true
spring.activemq.pool.enabled=true
spring.activemq.pool.expiry-timeout=1
spring.activemq.pool.idle-timeout=30000
spring.activemq.pool.max-connections=50
A few things worth mentioning:
1. I run everything (Eclipse, ActiveMQ, MySQL) on my local laptop.
2. Before this, I also tried using a custom connection factory (default AMQ, pooling, and caching) with a custom thread pool task executor, but got the same result. I took a performance capture, updated every second, while testing.
3. I also notice in JVM Monitor that the used heap keeps growing.
I want to know:
1. Is there something wrong or missing in my setup? I can't even reach hundreds in my message rate.
2. Will a @JmsListener-annotated method process messages asynchronously or synchronously?
3. If possible and supported, how do I use the traditional synchronous receive() with Spring Boot properly and elegantly?
Thank You
I'm just checking something similar. I have defined DefaultJmsListenerContainerFactory in my JMSConfiguration class (Spring configuration) like this:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(CachingConnectionFactory connectionFactory) {
    // settings made based on https://bsnyderblog.blogspot.sk/2010/05/tuning-jms-message-consumption-in.html
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {
        @Override
        protected void initializeContainer(DefaultMessageListenerContainer container) {
            super.initializeContainer(container);
            container.setIdleConsumerLimit(5);
            container.setIdleTaskExecutionLimit(10);
        }
    };
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("10-50");
    factory.setCacheLevel(CACHE_CONSUMER);
    factory.setReceiveTimeout(5000L);
    factory.setDestinationResolver(new BeanFactoryDestinationResolver(beanFactory));
    return factory;
}
As you can see, I took those values from https://bsnyderblog.blogspot.sk/2010/05/tuning-jms-message-consumption-in.html. It's from 2010 but I could not find anything newer / better so far.
I have also defined Spring's CachingConnectionFactory Bean as a ConnectionFactory:
@Bean
public CachingConnectionFactory buildCachingConnectionFactory(@Value("${activemq.url}") String brokerUrl) {
    // settings based on https://bsnyderblog.blogspot.sk/2010/02/using-spring-jmstemplate-to-send-jms.html
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(activeMQConnectionFactory);
    cachingConnectionFactory.setSessionCacheSize(10);
    return cachingConnectionFactory;
}
This setting will help JmsTemplate with sending.
So my answer to you is: set the values of your connection pool as described in the link. Also, I guess you can delete spring.activemq.in-memory=true because (based on the documentation) the in-memory property is ignored when you specify a custom broker URL.
Let me know if this helped.
G.
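As for question 3 in the original post, a minimal sketch of a plain synchronous receive using Spring's JmsTemplate; the queue name and timeout are placeholders, and IMessage and processor are taken from the question:

// Sketch only: poll the queue synchronously; receiveAndConvert() blocks until a
// message arrives or the receive timeout elapses, returning null on timeout.
jmsTemplate.setReceiveTimeout(5000L);
IMessage message = (IMessage) jmsTemplate.receiveAndConvert("my.queue"); // placeholder destination
if (message != null) {
    processor.process(message);
}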

Spring Boot Undertow add RequestLimitingHandler to DeploymentInfo

I am using Spring Boot with Undertow and trying to implement some limits on the number of requests Undertow will accept so as not to become overloaded under stress.
I've seen the answer to the question at Spring Boot Undertow add both blocking handler and NIO handler in the same application, and it appears promising, but I'm not clear what HttpHandler should be passed as the argument to the RequestLimitingHandler constructor.
Is there an easy way to add a RequestLimitingHandler to the UndertowEmbeddedServletContainerFactory bean, perhaps using the addDeploymentInfoCustomizers method?
Alternatively, if I look deeper and get into the Xnio code on which Undertow is based, it looks like there is an option to set Options.WORKER_TASK_LIMIT, but upon further investigation, it looks like the XnioWorker class ignores this setting after the 3.0.10.GA release and simply sets taskQueue to an unbounded LinkedBlockingQueue. Am I mistaken and could this also be an option?
Answering my own question in case it helps others in the future. The solution is to create a new Undertow HandlerWrapper and instantiate the RequestLimitingHandler within its wrap() method, like so:
@Bean
public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory(RootHandler rootHandler) {
    UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
    factory.addDeploymentInfoCustomizers(deploymentInfo -> deploymentInfo.addInitialHandlerChainWrapper(new HandlerWrapper() {
        @Override
        public HttpHandler wrap(HttpHandler handler) {
            return new RequestLimitingHandler(maxConcurrentRequests, queueSize, handler);
        }
    }));
    return factory;
}
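The maxConcurrentRequests and queueSize fields used above are assumed to be configured elsewhere in the class; a minimal sketch of hypothetical fields, with property names invented purely for illustration:

// Hypothetical configuration fields assumed by the bean above; the property
// names are illustrative, not standard Spring Boot properties.
@Value("${undertow.max-concurrent-requests:200}")
private int maxConcurrentRequests;

@Value("${undertow.request-queue-size:1000}")
private int queueSize;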

Throttling RestTemplate invocations

Using Spring RestTemplate to invoke client REST calls, would it be possible to throttle these calls?
E.g. max 10 concurrent calls.
RestTemplate does not seem to provide this itself, so I wonder what the options are.
It would be best to have a generic solution to e.g. also throttle SOAP calls.
From the docs:
To create an instance of RestTemplate you can simply call the default no-arg constructor. This will use standard Java classes from the java.net package as the underlying implementation to create HTTP requests. This can be overridden by specifying an implementation of ClientHttpRequestFactory. Spring provides the implementation HttpComponentsClientHttpRequestFactory that uses the Apache HttpComponents HttpClient to create requests. HttpComponentsClientHttpRequestFactory is configured using an instance of org.apache.http.client.HttpClient which can in turn be configured with credentials information or connection pooling functionality.
I'd look into configuring RestTemplate to use HTTP Components and play with setMaxPerRoute and setMaxTotal. If your SOAP client also happens to be using HTTP Components there may be a way to share the Commons HTTP Components settings between the two.
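For example, a minimal sketch of that idea (assuming Apache HttpClient 4.x on the classpath); once the pool limits are reached, further requests wait for a free connection instead of opening new sockets:

// Cap concurrency by limiting the connection pool behind the RestTemplate.
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(10);           // at most 10 connections overall
connectionManager.setDefaultMaxPerRoute(10); // at most 10 connections per route/host

CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(connectionManager)
        .build();

RestTemplate restTemplate =
        new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));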
The other option is to roll your own. You could create a Proxy that uses a Semaphore to block until another request is finished. Something along these lines (note that this code is totally untested and is only to communicate the general idea of how you'd implement this):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.Semaphore;

public class GenericCounterProxy implements InvocationHandler
{
    private final Object target;
    private final int maxConcurrent;
    private final Semaphore sem;

    GenericCounterProxy(Object target, int maxConcurrent)
    {
        this.target = target;
        this.maxConcurrent = maxConcurrent;
        this.sem = new Semaphore(maxConcurrent, true);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable
    {
        // block until acquire succeeds
        sem.acquire();
        try
        {
            return method.invoke(target, args);
        }
        catch (InvocationTargetException e)
        {
            // rethrow the target's own exception rather than the reflection wrapper
            throw e.getCause();
        }
        finally
        {
            // release the Semaphore no matter what.
            sem.release();
        }
    }

    @SuppressWarnings("unchecked")
    public static <T> T proxy(T target, int maxConcurrent)
    {
        InvocationHandler handler = new GenericCounterProxy(target, maxConcurrent);
        return (T) Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                target.getClass().getInterfaces(),
                handler);
    }
}
If you wanted to go with this type of approach:
You should probably refine the methods for which the proxy acquires the Semaphore since not every method on the target would be subject to throttling (for example, getters for settings).
You need to program against RestOperations, which is the interface RestTemplate implements, instead of RestTemplate, or change the proxying mechanism to use class-based proxying.
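For illustration, a possible usage of the proxy above against the RestOperations interface (the limit of 10 and the URL are arbitrary):

// Callers work against RestOperations; the JDK proxy throttles every call.
RestOperations throttled = GenericCounterProxy.proxy((RestOperations) new RestTemplate(), 10);

// At most 10 of these calls run concurrently; the rest block on the Semaphore.
String body = throttled.getForObject("https://example.org/resource", String.class);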
