Performance monitoring of an ASP.NET Core application

I would like to have a special performance log with information about each HTTP request on one line:
requested URL
elapsed time
username
returned status code
activity ID (in case a request is internally redirected to another action)
I am using Serilog now for logging unhandled exceptions. Where is the ideal place to add this kind of log entry, and what is best practice? Is it good practice to store logs in the database?

The middleware approach seems to work.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

public class PerformanceMiddleware
{
    private readonly RequestDelegate next;
    private readonly IConfiguration _configuration;
    private readonly ILogger _logger;

    public PerformanceMiddleware(RequestDelegate next, IConfiguration configuration, ILogger<PerformanceMiddleware> logger)
    {
        _configuration = configuration;
        _logger = logger;
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        await next.Invoke(context);
        stopwatch.Stop();
        try
        {
            using (var conn = new SqlConnection(_configuration.GetConnectionString("DefaultConnection")))
            using (var command = new SqlCommand("dbo.usp_insertPerformance", conn) { CommandType = CommandType.StoredProcedure })
            {
                conn.Open();
                // set parameters
                command.ExecuteNonQuery();
            }
        }
        // We don't want to show this error to the user.
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error in PerformanceMiddleware database operation.");
        }
    }
}

If you want to keep it simple and use your own solution, you can write a middleware for the ASP.NET Core pipeline to track the required data.
I would not recommend using Serilog to persist the collected information. Serilog is a logging framework and should not be used to track application metrics.
Use a database directly (SQL, Mongo, etc.) to store and analyse your data. You have already defined the object model in your question, so it should be easy for you to create and persist an instance of that model in the database.
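For completeness, here is a minimal sketch of how such a middleware is registered in the pipeline, assuming the PerformanceMiddleware class from the snippet above (register it early so the stopwatch covers the whole downstream pipeline):
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Register the performance middleware before routing/endpoints so it
        // measures the entire request, including MVC execution.
        app.UseMiddleware<PerformanceMiddleware>();

        // ...the rest of the pipeline (routing, auth, endpoints)...
    }
}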

Why not look at an APM or tracing tool to do this, rather than just creating logs with data that don't actually allow you to identify and solve problems? APM tools provide a lot more value beyond just logging performance data; they enable you to solve problems. The leaders in this area are AppDynamics, New Relic, and Dynatrace. There are many open-source tools which can help here too, such as Zipkin, Jaeger, and SkyWalking. You may want to explain the architecture and language of your app too :)

Related

How to mask sensitive information while logging in the Spring Integration framework

I have a requirement to mask sensitive information while logging. We are using the wire-tap provided by the integration framework for logging, and we have many interfaces already designed which log using the wire-tap. We are currently using Spring Boot 2.1 and Spring Integration.
I hope that all your integration flows log via the mentioned single global wire-tap.
That wire-tap is really just the start of another integration flow anyway: it is not limited to a channel and a logger on it. You can build a wire-tapped flow of any complexity.
My point is that you can add a transformer before the logging-channel-adapter and mask the payload and/or headers in any required way. The logger will then receive already-masked data.
Another way is to use some masking functionality in the log-expression. You can call a bean or a static utility for the masking there: https://docs.spring.io/spring-integration/reference/html/#logging-channel-adapter
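For example, a static masking utility could be referenced from the log-expression via SpEL's T() operator. A minimal sketch follows; the class name, the regex, and the expression itself are only assumptions for illustration:
import java.util.regex.Pattern;

// Referenced from the adapter, e.g.
// log-expression="T(com.example.MaskUtil).mask(payload.toString())"
public final class MaskUtil {

    // mask long digit runs such as card or account numbers
    private static final Pattern DIGIT_RUN = Pattern.compile("\\d{12,19}");

    private MaskUtil() {
    }

    public static String mask(String text) {
        return text == null ? null : DIGIT_RUN.matcher(text).replaceAll("****");
    }
}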
I don't know if this is a fancy approach, but I ended up implementing some sort of "error message filter" to mask headers in case the sensitive one is present (this can be extended to multiple header names, but it gives the idea):
import org.springframework.messaging.Message;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.support.MessageHeaderAccessor;
import org.springframework.stereotype.Component;

import static org.springframework.messaging.support.MessageBuilder.withPayload;

@Component
public class ErrorMessageFilter {

    private static final String SENSITIVE_HEADER_NAME = "sensitive_header";

    public Throwable filterErrorMessage(Throwable payload) {
        if (payload instanceof MessagingException) {
            Message<?> failedMessage = ((MessagingException) payload).getFailedMessage();
            if (failedMessage != null && failedMessage.getHeaders().containsKey(SENSITIVE_HEADER_NAME)) {
                MessageHeaderAccessor headerAccessor = new MessageHeaderAccessor(failedMessage);
                headerAccessor.setHeader(SENSITIVE_HEADER_NAME, "XXX");
                return new MessagingException(withPayload(failedMessage.getPayload())
                        .setHeaders(headerAccessor)
                        .build());
            }
        }
        return payload;
    }
}
Then, in the @Configuration class, I added a way to wire my filter into Spring Integration's LoggingHandler:
@Autowired
public void setLoggingHandlerLogExpression(LoggingHandler loggingHandler, ErrorMessageFilter messageFilter) {
    loggingHandler.setLogExpression(new FunctionExpression<Message<?>>((m) -> {
        if (m instanceof ErrorMessage) {
            return messageFilter.filterErrorMessage(((ErrorMessage) m).getPayload());
        }
        return m.getPayload();
    }));
}
This also gave me the flexibility to reuse my filter in other components where I handle error messages (e.g.: send error notifications to Zabbix, etc.).
P.S.: sorry about all the instanceof and ifs, but at a certain layer the dirty code has to start.

How to open a database connection in a BackgroundJob in an ABP application

Issue
For testing, I created a new job that just uses IRepository to read data from the database. The code is as below:
public class TestJob : BackgroundJob<string>, ITransientDependency
{
    private readonly IRepository<Product, long> _productRepository;
    private readonly IUnitOfWorkManager _unitOfWorkManager;

    public TestJob(IRepository<Product, long> productRepository,
        IUnitOfWorkManager unitOfWorkManager)
    {
        _productRepository = productRepository;
        _unitOfWorkManager = unitOfWorkManager;
    }

    public override void Execute(string args)
    {
        var task = _productRepository.GetAll().ToListAsync();
        var items = task.Result;
        Debug.WriteLine("test db connection");
    }
}
Then I created a new application service to trigger the job. The code snippet is as below:
public async Task UowInJobTest()
{
    await _backgroundJobManager.EnqueueAsync<TestJob, string>("aaaa");
}
When I test the job, it throws the following exception when executing var task = _productRepository.GetAll().ToListAsync():
Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances. Object name: 'AbpExampleDbContext'.
Solution
S1: Add the UnitOfWork attribute on the Execute method. It addresses the issue, but it is not good for my actual scenario: the job is a long-running task with many DB operations, and enabling UnitOfWork on the Execute method would lock DB resources for a long time. So this is not a solution for my scenario.
[UnitOfWork]
public override void Execute(string args)
{
    var task = _productRepository.GetAll().ToListAsync();
    var items = task.Result;
    Debug.WriteLine("test db connection");
}
S2: Execute the DB operation in a UnitOfWork explicitly. This also addresses the issue, but I don't think it is best practice: my example just reads data from the database, so no transaction is required. Even though the issue is addressed, I don't think it is the correct way.
public override void Execute(string args)
{
    using (var unitOfWork = _unitOfWorkManager.Begin())
    {
        var task = _productRepository.GetAll().ToListAsync();
        var items = task.Result;
        unitOfWork.Complete();
    }
    Debug.WriteLine("test db connection");
}
Question
My question is: what is the correct and best way to execute a DB operation in a BackgroundJob?
There is an additional question: I created a new application service and disabled UnitOfWork, but it works fine. Please see the code below. Why does it work fine in an application service, but not in a BackgroundJob?
[UnitOfWork(IsDisabled = true)]
public async Task<GetAllProductsOutput> GetAllProducts()
{
    var result = await _productRepository.GetAllListAsync();
    var itemDtos = ObjectMapper.Map<List<ProductDto>>(result);
    return new GetAllProductsOutput()
    {
        Items = itemDtos
    };
}
The documentation on Background Jobs And Workers uses the [UnitOfWork] attribute.
S1: Add the UnitOfWork attribute on the Execute method. It addresses the issue, but it is not good for my actual scenario: the job is a long-running task with many DB operations, and enabling UnitOfWork on the Execute method would lock DB resources for a long time. So this is not a solution for my scenario.
Background jobs are run synchronously on a background thread, so this concern is unfounded.
S2: Execute the DB operation in a UnitOfWork explicitly. This also addresses the issue, but I don't think it is best practice: my example just reads data from the database, so no transaction is required. Even though the issue is addressed, I don't think it is the correct way.
You can use a Non-Transactional Unit Of Work:
[UnitOfWork(isTransactional: false)]
public override void Execute(string args)
{
    var task = _productRepository.GetAll().ToListAsync();
    var items = task.Result;
}
You can use IUnitOfWorkManager:
public override void Execute(string args)
{
    using (var unitOfWork = _unitOfWorkManager.Begin(TransactionScopeOption.Suppress))
    {
        var task = _productRepository.GetAll().ToListAsync();
        var items = task.Result;
        unitOfWork.Complete();
    }
}
You can also use AsyncHelper:
[UnitOfWork(isTransactional: false)]
public override void Execute(string args)
{
    var items = AsyncHelper.RunSync(() => _productRepository.GetAll().ToListAsync());
}
Conventional Unit Of Work Methods
I created a new application service and disabled UnitOfWork, but it works fine.
Why does it work fine in an application service, but not in a BackgroundJob?
[UnitOfWork(IsDisabled = true)]
public async Task<GetAllProductsOutput> GetAllProducts()
{
    var result = await _productRepository.GetAllListAsync();
    var itemDtos = ObjectMapper.Map<List<ProductDto>>(result);
    return new GetAllProductsOutput
    {
        Items = itemDtos
    };
}
You are using different methods: GetAllListAsync() vs GetAll().ToListAsync()
Repository methods are Conventional Unit Of Work Methods, but ToListAsync() isn't one.
From the documentation section "About IQueryable<T>":
When you call GetAll() outside of a repository method, there must be an open database connection. This is because of the deferred execution of IQueryable<T>. It does not perform a database query unless you call the ToList() method or use the IQueryable<T> in a foreach loop (or somehow access the queried items). So when you call the ToList() method, the database connection must be alive.
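To make the deferred-execution point concrete, here is a minimal sketch (it assumes the same _productRepository as in the question; the Where predicate is purely illustrative):
// GetAll() only builds an IQueryable; nothing hits the database yet.
var query = _productRepository.GetAll().Where(p => p.Price > 100);

// The database query actually runs here, so the unit of work
// (and its connection) must still be open at this point.
var items = query.ToList();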

JMeter WebSockets Publish/Subscribe - scripting asynchronous responses

We have built a publish/subscribe model into our application via WebSockets so users can receive "dynamic updates" when data changes. I'm now looking to load test this using JMeter.
Is there a way to configure a JMeter test to react to receipt of a WebSocket "published" message and then run further samplers i.e. make further web requests?
I have looked at plugin samples, but they appear focused on the request/reply model (e.g. https://bitbucket.org/pjtr/jmeter-websocket-samplers) rather than publish/subscribe.
Edit:
I have since built a solution for this using the WebSocket Sampler. An example JMX file can be found on Bitbucket; it uses STOMP over WebSockets and includes Connect, Subscribe, Handle Published Message, and Initiate JMeter Samplers from that.
It is a misunderstanding that the https://bitbucket.org/pjtr/jmeter-websocket-samplers/overview plugin only supports request-response conversations.
Since version 0.7, the plugin offers "single read" and "single write" samplers. Of course, it depends on your exact protocol, but the idea is that you could use a "single write" sampler to send a WebSocket message that creates the subscription and then have a (standard JMeter) While loop in combination with the "single read" sampler to read any number of messages that are being published.
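To make this concrete, a test-plan shape along these lines should work (sampler names follow the plugin's terminology; the loop condition and the extraction logic are assumptions that depend on your protocol):
WebSocket Open Connection
WebSocket Single Write Sampler      -- send the frame that creates the subscription
While Controller                    -- loop while more published messages are expected
    WebSocket Single Read Sampler   -- read one published message
    JSR223 PostProcessor            -- extract data and set flag variables
    If Controller                   -- when the flag is set, run the further samplers / web requests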
If this does not satisfy your needs, let me know and I'll see what I can do for you (I'm the author of this plugin).
I had a system with STOMP: the clients executed HTTP requests and got the current state via asynchronous WebSocket messages with this subscribe model. To emulate this behaviour I wrote a class which could exchange data with JMeter threads via the JMeterContext variable (the imports are left out; you can find them yourself, they come from org.springframework.*):
public class StompWebSocketLoadTestClient {

    public static JMeterContext ctx;
    public static StompSession session;

    public static void start(JMeterContext ctx, String wsURL, String SESSION) throws InterruptedException {
        WebSocketClient transport = new StandardWebSocketClient();
        WebSocketStompClient stompClient = new WebSocketStompClient(transport);
        ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();
        threadPoolTaskScheduler.initialize();
        stompClient.setTaskScheduler(threadPoolTaskScheduler);
        stompClient.setDefaultHeartbeat(new long[]{10000, 10000});
        stompClient.setMessageConverter(new ByteArrayMessageConverter());
        StompSessionHandler handler = new MySessionHandler(ctx);
        WebSocketHttpHeaders handshakeHeaders = new WebSocketHttpHeaders();
        handshakeHeaders.add("Cookie", "SESSION=" + SESSION);
        stompClient.connect(wsURL, handshakeHeaders, handler);
        sleep(1000); // give the connection time to establish (static import of Thread.sleep)
    }
The messages were handled in this class:
private static class MySessionHandler extends StompSessionHandlerAdapter implements TestStateListener {

    private String Login = "";
    private final JMeterContext ctx_;

    private MySessionHandler(JMeterContext ctx) {
        this.ctx_ = ctx;
    }

    @Override
    public void afterConnected(StompSession session, StompHeaders connectedHeaders) {
        session.setAutoReceipt(true);
        this.Login = ctx_.getVariables().get("LOGIN");
        //System.out.println("CONNECTED:" + connectedHeaders.getSession() + ":" + session.getSessionId() + ":" + Login);
        //System.out.println(session.isConnected());
        // HERE IS THE SUBSCRIPTION:
        session.subscribe("/user/notification", new StompFrameHandler() {
            @Override
            public Type getPayloadType(StompHeaders headers) {
                //System.out.println("getPayloadType:");
                Iterator it = headers.keySet().iterator();
                while (it.hasNext()) {
                    String header = it.next().toString();
                    //System.out.println(header + ":" + headers.get(header));
                }
                //System.out.println("=================");
                return byte[].class;
            }

            @Override
            public void handleFrame(StompHeaders headers, Object payload) {
                //System.out.println("receivedMessage");
                NotificationList nlist = null;
                try {
                    nlist = NotificationList.parseFrom((byte[]) payload);
                    JMeterVariables vars = ctx_.getVariables();
                    Iterator it = nlist.getNotificationList().iterator();
                    while (it.hasNext()) {
                        Notification n = (Notification) it.next();
                        String className = n.getType();
                        //System.out.println("CLASS NAME:" + className);
                        if (className.contains("response.Resource")) {
                            // After getting some message you can work with JMeter variables:
                            vars.putObject("var1", var1);
                            vars.put("var2", String.valueOf(var2));
                        }
                        // Here the variables are "sent" back to the JMeter thread context
                        // so you can use the data during the test:
                        ctx_.setVariables(vars);
                        n = null;
                    }
                } catch (InvalidProtocolBufferException ex) {
                    Logger.getLogger(StompWebSocketLoadTestClient.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        });
    }
In the JMeter test plan, after the login stage I just added a BeanShell sampler with the login/password and session strings and the JMeter thread context:
import jmeterstopm.StompWebSocketLoadTestClient;

StompWebSocketLoadTestClient ssltc = new StompWebSocketLoadTestClient();
String SERVER_NAME = vars.get("SERVER_NAME");
String SESSION = vars.get("SESSION");
String ws_pref = vars.get("ws_pref");
ssltc.start(ctx, ws_pref + "://" + SERVER_NAME + "/endpoint/notification-ws/websocket", SESSION);
Later it is possible to use all the data that came in via WebSockets with the simple vars variable:
Object var1= (Object) vars.getObject("var1");
Basically, JMeter is not well suited for asynchronous interaction with the system under test.
Though (virtually) everything is possible with scripting components (post-processors, timers, assertions, and perhaps samplers, which seem most useful in your case) and JMeter logic controllers.
For example, you may line up your "further samplers" wrapped in If blocks, analyse the receipt of a WebSocket published message, and set flag variables/other parameters for the If blocks.
And you may even sync threads, if you need it; check this answer.
But let's be honest: that pretty much looks like a lot of handwritten stuff to be done.
So it makes sense to consider a fully custom handwritten test harness too.

Does CompletableFuture have a corresponding Local context?

In the olden days, we had ThreadLocal for programs to carry data along with the request path, since all request processing was done on one thread, and stuff like Logback used this with MDC.put("requestId", getNewRequestId());
Then Scala and functional programming came along, and with Futures came Local.scala (at least I know the Twitter Futures have this class). Future.scala knows about Local.scala and transfers the context through all the map/flatMap, etc. functionality, such that I can still do Local.set("requestId", getNewRequestId()); and then downstream, after it has travelled over many threads, I can still access it with Local.get(...)
Soooo, my question is: in Java, can I do the same thing with the new CompletableFuture via some LocalContext or similar object (not sure of the name)? That way, I could modify the Logback MDC context to store the id in that context instead of a ThreadLocal, such that I don't lose the request id and all my logs across thenApply, thenAccept, etc. still work just fine with logging and the %X{requestId} pattern in the Logback configuration.
EDIT:
As an example: if you have a request come in and you are using Log4j or Logback, in a filter you will set MDC.put("requestId", requestId), and then in your app you will log many log statements like this:
log.info("request came in for url=" + url);
log.info("request is complete");
Now, in the log output it will show this:
INFO {time}: requestId425 request came in for url=/mypath
INFO {time}: requestId425 request is complete
This uses a ThreadLocal trick to achieve it. At Twitter, we use Scala and Twitter Futures in Scala along with a Local.scala class. Local.scala and Future.scala are tied together such that we can still achieve the above scenario, which is very nice: all our log statements can log the request id, so the developer never has to remember to log it, and you can trace through a single customer's request/response cycle with that id.
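For reference, the filter side of that ThreadLocal trick is just a few lines. A minimal sketch, assuming a Servlet 4.0+ javax.servlet API (where init/destroy have default implementations); the filter name and the way the id is generated are only illustrative:
import org.slf4j.MDC;

import javax.servlet.*;
import java.io.IOException;
import java.util.UUID;

public class RequestIdFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Every log statement on this thread now sees the id via %X{requestId}.
        MDC.put("requestId", UUID.randomUUID().toString());
        try {
            chain.doFilter(req, res);
        } finally {
            // Clear so the id does not leak to the next request served by this pooled thread.
            MDC.clear();
        }
    }
}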
I don't see this in Java :( which is very unfortunate as there are many use cases for that. Perhaps there is something I am not seeing though?
If you come across this, just poke the thread here
http://mail.openjdk.java.net/pipermail/core-libs-dev/2017-May/047867.html
to implement something like Twitter Futures, which transfer Locals (much like ThreadLocal, but the state is transferred across threads).
See the def respond() method in here and how it calls Locals.save() and Locals.restore():
https://github.com/simonratner/twitter-util/blob/master/util-core/src/main/scala/com/twitter/util/Future.scala
If the Java authors fixed this, then the MDC in Logback would work across all third-party libraries. Until then, IT WILL NOT WORK unless you can change the third-party library (and it's doubtful you can do that).
My solution theme would be as follows (it works with JDK 9+, as a couple of overridable methods are exposed since that version):
Make the complete ecosystem aware of the MDC.
For that, we need to address the following scenarios:
Where do we get new instances of CompletableFuture from within this class? → We need to return an MDC-aware version instead.
Where do we get new instances of CompletableFuture from outside this class? → We need to return an MDC-aware version instead.
Which executor is used where in the CompletableFuture class? → In all circumstances, we need to make sure that all executors are MDC-aware.
For that, let's create an MDC-aware version of CompletableFuture by extending it. My version of it would look like below:
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Function;
import java.util.function.Supplier;

public class MDCAwareCompletableFuture<T> extends CompletableFuture<T> {

    public static final ExecutorService MDC_AWARE_ASYNC_POOL = new MDCAwareForkJoinPool();

    @Override
    public <U> CompletableFuture<U> newIncompleteFuture() {
        return new MDCAwareCompletableFuture<>();
    }

    @Override
    public Executor defaultExecutor() {
        return MDC_AWARE_ASYNC_POOL;
    }

    public static <T> CompletionStage<T> getMDCAwareCompletionStage(CompletableFuture<T> future) {
        return new MDCAwareCompletableFuture<>()
                .completeAsync(() -> null)
                .thenCombineAsync(future, (aVoid, value) -> value);
    }

    public static <T> CompletionStage<T> getMDCHandledCompletionStage(CompletableFuture<T> future,
                                                                      Function<Throwable, T> throwableFunction) {
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return getMDCAwareCompletionStage(future)
                .handle((value, throwable) -> {
                    // setMDCContext is the utility method shown further below
                    setMDCContext(contextMap);
                    if (throwable != null) {
                        return throwableFunction.apply(throwable);
                    }
                    return value;
                });
    }
}
The MDCAwareForkJoinPool class would look like this (I have skipped the methods with ForkJoinTask parameters for simplicity):
public class MDCAwareForkJoinPool extends ForkJoinPool {

    // Override the constructors you need.
    // Assumes a static import of MDCUtility.wrapWithMdcContext (shown below).

    @Override
    public <T> ForkJoinTask<T> submit(Callable<T> task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public <T> ForkJoinTask<T> submit(Runnable task, T result) {
        return super.submit(wrapWithMdcContext(task), result);
    }

    @Override
    public ForkJoinTask<?> submit(Runnable task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public void execute(Runnable task) {
        super.execute(wrapWithMdcContext(task));
    }
}
The utility methods that do the wrapping would look like this:
public static <T> Callable<T> wrapWithMdcContext(Callable<T> task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            return task.call();
        } finally {
            // once the task is complete, clear the MDC
            MDC.clear();
        }
    };
}

public static Runnable wrapWithMdcContext(Runnable task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            task.run();
        } finally {
            // once the task is complete, clear the MDC
            MDC.clear();
        }
    };
}

public static void setMDCContext(Map<String, String> contextMap) {
    MDC.clear();
    if (contextMap != null) {
        MDC.setContextMap(contextMap);
    }
}
Below are some guidelines for usage:
Use the class MDCAwareCompletableFuture rather than the class CompletableFuture.
A couple of methods in the class CompletableFuture instantiate new CompletableFuture instances internally (most of the public static methods). For such methods, use an alternative way to get an instance of MDCAwareCompletableFuture: for example, rather than using CompletableFuture.supplyAsync(...), you can choose new MDCAwareCompletableFuture<>().completeAsync(...) (see the sketch after this list).
Convert an instance of CompletableFuture to MDCAwareCompletableFuture using the method getMDCAwareCompletionStage when you get stuck with one, say because some external library returns you an instance of CompletableFuture. Obviously you can't retain the context within that library, but this method will still retain the context once execution returns to your application code.
While supplying an executor as a parameter, make sure that it is MDC-aware, such as MDCAwareForkJoinPool. You could also create an MDCAwareThreadPoolExecutor by overriding its execute method to serve your use case. You get the idea!
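Here is a minimal usage sketch, assuming the MDCAwareCompletableFuture and MDCAwareForkJoinPool classes defined above (the request-id value is purely illustrative):
MDC.put("requestId", "requestId425");

new MDCAwareCompletableFuture<String>()
        .completeAsync(() -> "request came in")   // runs on the MDC-aware default executor
        .thenApplyAsync(msg -> {
            // The MDC context captured when this stage was submitted is restored here,
            // so log statements in this stage still carry requestId425.
            return msg + " for requestId=" + MDC.get("requestId");
        })
        .join();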
You can find a detailed explanation of all of the above here, in a post about the same topic.

How to close a database connection opened by an IBackingMap implementation within a Storm Trident topology?

I'm implementing an IBackingMap for my Trident topology to store tuples in ElasticSearch (I know there are several implementations of Trident/ElasticSearch integration already on GitHub, but I've decided to implement a custom one which suits my task better).
So my implementation is a classic one with a factory:
public class ElasticSearchBackingMap implements IBackingMap<OpaqueValue<BatchAggregationResult>> {

    // omitting here some other cool stuff...
    private final Client client;

    public static StateFactory getFactoryFor(final String host, final int port, final String clusterName) {
        return new StateFactory() {
            @Override
            public State makeState(Map conf, IMetricsContext metrics, int partitionIndex, int numPartitions) {
                ElasticSearchBackingMap esbm = new ElasticSearchBackingMap(host, port, clusterName);
                CachedMap cm = new CachedMap(esbm, LOCAL_CACHE_SIZE);
                MapState ms = OpaqueMap.build(cm);
                return new SnapshottableMap(ms, new Values(GLOBAL_KEY));
            }
        };
    }

    public ElasticSearchBackingMap(String host, int port, String clusterName) {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName).build();
        // TODO add a possibility to close the client
        client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(host, port));
    }

    // the actual implementation is left out
}
You see it gets host/port/cluster name as input params and creates an ElasticSearch client as a member of the class BUT IT NEVER CLOSES THE CLIENT.
It is then used from within a topology in a pretty familiar way:
tridentTopology.newStream("spout", spout)
        // ...some processing steps here...
        .groupBy(aggregationFields)
        .persistentAggregate(
                ElasticSearchBackingMap.getFactoryFor(
                        ElasticSearchConfig.ES_HOST,
                        ElasticSearchConfig.ES_PORT,
                        ElasticSearchConfig.ES_CLUSTER_NAME
                ),
                new Fields(FieldNames.OUTCOME),
                new BatchAggregator(),
                new Fields(FieldNames.AGGREGATED));
This topology is wrapped into some public static void main, packed in a jar and sent to Storm for execution.
The question is, should I worry about closing the ElasticSearch connection or it is Storm's own business? If it is not done by Storm, how and when in the topology's lifecycle I should do that?
Thanks in advance!
Okay, answering my own question.
First of all, thanks again @dedek for the suggestions and for reviving the ticket in Storm's Jira.
Finally, since there's no official way to do this, I've decided to go with the cleanup() method of Trident's Filter. So far I've verified the following (for Storm v0.9.4):
With LocalCluster
cleanup() gets called on cluster's shutdown
cleanup() DOESN'T get called when killing the topology; this shouldn't be a tragedy, as very likely one won't use LocalCluster for real deployments anyway
With a real cluster
it gets called when the topology is killed as well as when the worker is stopped using pkill -TERM -u storm -f 'backtype.storm.daemon.worker'
it doesn't get called if the worker is killed with kill -9 or when it crashes or - sadly - when the worker dies due to an exception
Overall, that gives a more or less decent guarantee that cleanup() will get called, provided you are careful with exception handling (I tend to add "thundercatches" to every one of my Trident primitives anyway).
My code:
public class CloseFilter implements Filter {

    private static final Logger LOG = LoggerFactory.getLogger(CloseFilter.class);

    private final Closeable[] closeables;

    public CloseFilter(Closeable... closeables) {
        this.closeables = closeables;
    }

    @Override
    public boolean isKeep(TridentTuple tuple) {
        return true;
    }

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
    }

    @Override
    public void cleanup() {
        for (Closeable c : closeables) {
            try {
                c.close();
            } catch (Exception e) {
                LOG.warn("Failed to close an instance of {}", c.getClass(), e);
            }
        }
    }
}
However, it would be nice if some day hooks for closing connections became part of the API.
