I'm writing a small wrapper library that'll allow me to monitor the internals of an OkHttpClient using Dropwizard Metrics: https://github.com/raskasa/metrics-okhttp.
I'm having some trouble properly instrumenting the ConnectionPool - specifically, periodically calling getConnectionCount() to monitor the number of open TCP connections.
When an instance of OkHttpClient is initially created, getConnectionPool() is null - which I'm expecting. But subsequent attempts to access the pool still return null, even during and after executing some network requests.
I'm assuming there is a proper way to monitor the ConnectionPool, since it is part of the public API, but I'm just not seeing it clearly at the moment.
So:
Is there a way to access the ConnectionPool at a point where OkHttpClient.getConnectionPool() is not null?
If this isn't the best approach, any advice on a better way to go about it?
Try this:
OkHttpClient client = ...
client.setConnectionPool(ConnectionPool.getDefault());
That'll give you the same connection pool you'll get anyway, but it'll give it to you sooner.
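For the gauge use case in the question, a minimal sketch could look like the following. It assumes the OkHttp 2.x coordinates (com.squareup.okhttp) and a Dropwizard MetricRegistry; the class and metric names are made up.

import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import com.squareup.okhttp.ConnectionPool;
import com.squareup.okhttp.OkHttpClient;

public final class ConnectionPoolInstrumentation {

    public static OkHttpClient instrument(OkHttpClient client, MetricRegistry registry) {
        // Assign the default pool eagerly so getConnectionPool() is non-null
        // before the first request is ever executed.
        client.setConnectionPool(ConnectionPool.getDefault());

        // A Gauge is sampled lazily by the reporter, so getConnectionCount()
        // is only called when the metric is actually read.
        registry.register("okhttp.connection-pool.total-count",
                (Gauge<Integer>) () -> client.getConnectionPool().getConnectionCount());

        return client;
    }
}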
I have a Spring Boot app that uses HikariCP for Postgres connection pooling.
Recently I've set up tracing to collect some data on how time is spent when handling a request to a specific endpoint.
My assumptions are that when using HikariCP:
The first connection to the database while handling the request might be a bit slower
Subsequent connections to the database should be fast (< 10 ms)
However, as the trace shows, the first connection is fast (< 10 ms). And while some subsequent connections during the same request handling are also fast (< 10 ms), I frequently see subsequent connections taking 50-100 ms, which seems quite slow to me, although I'm not sure whether that is to be expected.
Is there anything I can configure to improve this behavior?
Maybe good to know:
The backend in question doesn't really see any other traffic right now, so it's only handling traffic when I manually send requests to it
I've changed maximumPoolSize to 1 to rule out that the slowdown is caused by different connections being used within the same request. The same behavior is still seen.
Other than that, I use the default Hikari settings; I don't change them.
I do think something is wrong with your pool configuration or your usage of the pool if it takes roughly 10 ms to get an already initialized connection from your pool. I would expect it to be sub-millisecond... Are you sure you are using the pool correctly?
Make sure you are using the newest versions of the pool and driver that you can, and make sure that connectionTestQuery is not set, as that would execute a query every time a connection is obtained from the pool. The defaults should be good enough for the rest of the settings.
Debug logs are one thing that could help figure out what is happening; metrics on the pool are another. Have a look at Spring Boot Actuator, it will help you with that...
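If it helps, the pool numbers can also be inspected directly in code. This is just a sketch, assuming the injected DataSource is the auto-configured HikariDataSource; the class and method names around it are made up.

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;

public class PoolStats {

    // Logs the current pool state; calling this right before and after the slow
    // spot in the trace shows whether threads are waiting for a connection.
    public static void log(DataSource dataSource) {
        HikariPoolMXBean pool = ((HikariDataSource) dataSource).getHikariPoolMXBean();
        System.out.printf("active=%d idle=%d total=%d waiting=%d%n",
                pool.getActiveConnections(),
                pool.getIdleConnections(),
                pool.getTotalConnections(),
                pool.getThreadsAwaitingConnection());
    }
}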
To answer your actual question on how you can improve the situation given it actually takes roughly 10 ms to obtain a connection: Do not obtain and return the connection to the pool for every query... If you do not want to pass the connection around in your code, and if it suits your use case, you can make this happen easily by making sure your whole request is wrapped in a transaction. See the Spring guide on managing transactions.
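As a rough sketch of that transaction approach with Spring, assuming made-up repository and model types (CustomerRepository, OrderRepository, Customer, Order, Report):

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    private final CustomerRepository customers; // hypothetical repository
    private final OrderRepository orders;       // hypothetical repository

    public ReportService(CustomerRepository customers, OrderRepository orders) {
        this.customers = customers;
        this.orders = orders;
    }

    // With @Transactional, Spring checks one connection out of Hikari when the
    // transaction starts and reuses it for every query below, instead of
    // acquiring and returning a connection from the pool per query.
    @Transactional(readOnly = true)
    public Report buildReport(long customerId) {
        Customer customer = customers.findById(customerId).orElseThrow();
        List<Order> recentOrders = orders.findByCustomerId(customerId);
        return new Report(customer, recentOrders);
    }
}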
I am currently redirecting the socket.io traffic through a custom proxy. The server it actually gets sent to changes from time to time. The "new" server is notified that a client will connect/swap to it before it connects/swaps. The only issue is that this leads to the client timing out and reconnecting, which works, but takes 2 seconds that I don't want the client to wait on when switching. I do not want the client to know that the server changed, so somehow I have to make the server add the upcoming client/socket id to its internal list.
How could I achieve this?
I've looked at the socket.io-adapter, but I wasn't sure if that is only for rooms, or if there is an easier way to do it.
It appears using the adapter would fix it. Rather than adding my own, I just ended up accessing the default namespace and calling addAll to add the client.
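// Register the incoming socket id with the default namespace's adapter,
// joining it to a room named after the id, so the server already tracks it.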
io.nsps["/"].adapter.addAll(socketId, new Set<Room>([socketId]));
When using the .NET Elasticsearch NEST client, I'm trying to figure out how to minimize the number of pings the client library does to our nodes. I know there are settings to disable the pings, but if we had a node down I think we would see a big negative performance impact without them. So what I'm really trying to figure out is if there is a way to use a singleton pattern around the ElasticClient object, connection state information or some other object to help achieve this.
Basically we need a shared object that has all the nodes and their up/down state that multiple ElasticClients can use without having each new client created having to figure it out. Another option would be using the ElasticClient as a singleton itself.
I am using the client in a multithreaded ASP.NET app and an Azure worker role, so ensuring it works across threads is important.
I'm using nginx in front of ES to monitor its traffic, and you can see there are a ton of "/" hits, which must be the client library pings. (This report snippet below is via Stackify from parsing our nginx logs.)
Has anyone had any success using ElasticClient as a singleton or have any suggestions?
The client itself is stateless, so you should be able to use it as a singleton. You can also instantiate a new client every time, but if you're using an IConnectionPool, you need to make sure each of the client instances receives the same instance of the IConnectionPool.
Everything with my co-located cache works fine as long as there is one request at a time. But when I hit my service with several concurrent requests, my cache doesn't seem to work.
Preliminary analysis led me to this - https://azure.microsoft.com/en-us/documentation/articles/cache-dotnet-how-to-use-service/
Apparently, I would have to use maxConnectionsToServer to allow multiple concurrent connections to cache. But the document also talks about a useLegacyProtocol parameter which has to be set to false to enable connection pooling.
I have the following questions:
My service would be getting a few hundred concurrent requests. Would this be a good setting for such a scenario:
<dataCacheClient name="default" maxConnectionsToServer="100" useLegacyProtocol="false">
This is my understanding of the behavior I would get with this configuration - Each time a request comes in, an attempt would be made to retrieve a connection from the pool. If there is no available connection, a new connection would be created if there are less than 100 connections currently, else the request would fail. Please confirm if this is correct.
The above documentation says that one connection would be used per instance of DataCacheFactory. I have a cache manager class which manages all interactions with the cache. This is a singleton class. It creates a DataCacheFactory object and uses it to get a handle to the cache during its instantiation. My service would have 2 instances. Looks like I would need only 2 connections to the server. Is this correct? Do I even need connection pooling?
What is the maximum value maxConnectionsToServer can accept and what would be an ideal value for the given scenario?
I also see a boolean parameter named "ConnectionPool". This looks complementary to "useLegacyProtocol". Is this not redundant? How is setting useLegacyProtocol="false" different from connectionPool="true"? I am confused as to whether and how to use this parameter.
Are maxConnectionsToServer and ConnectionPool parameters related in any way? What does it mean when I have maxConnectionsToServer set to 5 and ConnectionPool=true?
We have a cluster with some stateless EJB session beans deployed on it. Currently we only cache the InitialContext object in the client code, and I have several questions:
In the current case, if we call lookup() to get a replica-aware stub, which server will return the stub object: the same server we got the InitialContext from, or will the lookup be load-balanced to other servers every time we call the lookup method?
Should we just cache the stub? Is it thread-safe? If it is, how does the stub handle concurrent requests from the client threads: in parallel or in sequence?
One more question: when we call new InitialContext(), it takes a long time before it returns a timeout exception if the servers are not reachable. How can we set a timeout for this case?
The best way to know is to code a little test client.
You can check which server your stub came from using its toString() method, which should print a kind of URL (at least it did on WebLogic 10). It's possible you'll get surprised with the results.
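To make that test concrete, here is a minimal sketch of such a client. The t3 URLs and the JNDI name ejb/MyServiceRemote are placeholders, and it assumes WebLogic's WLInitialContextFactory.

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;

public class StubOriginTest {

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Cluster address; lookups may be load-balanced across these servers.
        env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7001");

        Context ctx = new InitialContext(env);

        // Placeholder JNDI name for one of the deployed stateless session beans.
        Object stub = ctx.lookup("ejb/MyServiceRemote");

        // The replica-aware stub's toString() should reveal which server it is
        // bound to, so running this repeatedly shows how lookups are distributed.
        System.out.println(stub);
    }
}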