Connections to Azure Redis Cache are high - stackexchange.redis

I am using StackExchange.Redis with Azure Redis version 3.2.7, and I see the same issue of high client connection counts mentioned in the URLs below. Is there any way to pass some parameters when connecting to Redis from the client to ensure a connection is closed automatically after a certain idle time? Or should Azure Redis itself automatically close the connection once it sees the client has been idle?
I have already followed the best practices mentioned in the following posts, but still no luck in reducing the connection count.
Why are connections to Azure Redis Cache so high?
Azure Redis Cache max connections reached
https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f
I came across the article https://pranavprakash.net/2016/02/05/kill-idle-connections-to-redis/ about killing idle connections in bulk. I was looking for a property or setting that would do this automatically, so it seems I am still missing something.
Thanks in advance
Gangadhar

Implement IDisposable so the connection object is disposed after use:
public void Dispose()
{
    try
    {
        // 'connection' is the Lazy<ConnectionMultiplexer> field on the containing class;
        // only dispose the multiplexer if it was actually created.
        if (connection.IsValueCreated)
        {
            connection.Value.Dispose();
        }
    }
    catch { /* ignore errors during shutdown */ }
}
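For reference, here is a minimal sketch of the shared Lazy<ConnectionMultiplexer> pattern recommended in the gist linked above; the class name, endpoint, and connection string below are placeholders for illustration, not code copied from the gist:

using System;
using StackExchange.Redis;

// A single multiplexer is created lazily and shared by the whole application
// instead of connecting per request, which keeps the client connection count low.
public static class RedisStore
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            // Placeholder endpoint and key - replace with your own cache settings.
            ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;

    // Optional: call once at application shutdown to release the connection explicitly.
    public static void Close()
    {
        if (LazyConnection.IsValueCreated)
        {
            LazyConnection.Value.Dispose();
        }
    }
}

Callers then go through RedisStore.Connection.GetDatabase() everywhere, so only one multiplexer (and a small, stable set of connections) exists per application instance.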

Related

Does RestHighLevelClient keep connections open?

I want to use RestHighLevelClient on different clusters with commands which are not supported by the cross-cluster mechanism (for example, close and open index).
My question is: if I use a separate instance of RestHighLevelClient for every cluster, will it keep connections open to every cluster? (I want to be sure I don't choke the application.)
Looking at various resources, it seems RestHighLevelClient keeps the connection open unless you explicitly call client.close(); on it.
From the official RestHighLevelClient initialization documentation:
The high-level client will internally create the low-level client used
to perform requests based on the provided builder. That low-level
client maintains a pool of connections and starts some threads so you
should close the high-level client when you are well and truly done
with it and it will in turn close the internal low-level client to
free those resources. This can be done through the close method:
In your case, if you have a lot of ES clusters and create multiple RestHighLevelClient instances, then, as you are guessing, it might choke your application because of the threads and resources each client holds. So you should explicitly call close(); recreating a client later will take more time, but it will not choke your application in most cases.
I would suggest you do some resource benchmarking on your application and, based on your trade-off, choose the best approach:
Create multiple clients and don't close them, but allocate more resources so that the application is fast and doesn't choke.
Close clients frequently; this does not require over-allocating resources, but latency will be higher whenever you create a new client for a request.

How do you use go-sql-driver when you have a sharded MySQL database solution?

Reading this article: http://go-database-sql.org/accessing.html
It says that the sql.DB object is designed to be long-lived and that we should not Open() and Close() databases frequently. But what should I do if I have 10 different MySQL servers and I have sharded them so that I have 511 databases on each server, for example the way Pinterest shards their data with MySQL?
https://medium.com/@Pinterest_Engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f
Then would I not need to constantly access new nodes with new databases all the time? As I understand it, I would have to Open and Close the database connection all the time, depending on which node and database I have to access.
It also says that:
If you don’t treat the sql.DB as a long-lived object, you could
experience problems such as poor reuse and sharing of connections,
running out of available network resources, or sporadic failures due
to a lot of TCP connections remaining in TIME_WAIT status. Such
problems are signs that you’re not using database/sql as it was
designed.
Will this be a problem? How should I solve this issue then?
I am also interested in this question. I guess a solution could look like this:
Minimize the number of idle connections in the pool: db.SetMaxIdleConns(N).
Keep a map[serverID]*sql.DB. When you have no *sql.DB for a server yet, add it to the map.
Make data more local, so backends usually go to “their” databases. However, Pinterest seems not to do this.
Increase the number of sockets and file descriptors on the backend machines so they can keep more connections open.
Set some reasonable idle timeout so that very old unused connections get closed.

ORA-02396: exceeded maximum idle time, please connect again error

I have an ASP.NET MVC application connecting to an Oracle DB. I am using LINQ in my controller to pull data from the Oracle DB.
If that page is loaded and then sits idle for several minutes, it gives the above error.
Now I can't ask my DBA to increase the idle time. In my research I saw mentions of pooling in the Web.config file. My understanding is that, because of pooling, some of these connections are still active. I have removed this portion:
Min Pool Size=1;Max Pool Size=20;Pooling=true
Do I have to explicitly say in my Web.config:
Pooling=false
I also have a Dispose function in my controller, as below, but that doesn't help:
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        db.Dispose();
    }
    base.Dispose(disposing);
}
Please help.
If the DBA sets an idle timeout on the server for connections, then configure your connection pool with this option:
Min Pool Size=0;
The default is 1. Setting it to 0 keeps the ODP.NET client from holding any open idle connections in the pool while the application is idle. It will still grow the pool when connection requests come in, and it will likely be slightly less efficient at satisfying initial requests since it has to create those connections, but it will work and will not run into the idle-timeout issue.
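For example, the pool settings could be applied in Web.config like this (the connection string name, data source, credentials, and provider name are placeholders for illustration):

<connectionStrings>
  <!-- Min Pool Size=0 lets the pool drain completely while the app is idle,
       so no pooled connection sits around long enough to hit the server's idle timeout. -->
  <add name="OracleDb"
       connectionString="Data Source=MyOracleTns;User Id=app_user;Password=...;Min Pool Size=0;Max Pool Size=20;Pooling=true"
       providerName="Oracle.ManagedDataAccess.Client" />
</connectionStrings>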
I agree with some of the comments that there shouldn't be an idle timeout set on the server for these cases, but I've found that some organizations insist on doing this for security reasons.
Here's an approach. When you resume using your connection after a potential idle delay, such as waiting for an incoming request, do this:
Run some cheap no-op query like SELECT 1 FROM DUAL;
If you get the error you mentioned, make sure the connection is completely closed, then open a new one.
Use the connection as normal.
This is a bit of a hack compared to organizing your connection pool properly, but it's better than opening up a new connection every time you need one.
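As a rough sketch of that idea with the managed ODP.NET driver (the helper name, the "OracleDb" connection string name, and the choice to clear the pool on failure are assumptions for illustration, not a drop-in fix):

using System.Configuration;
using Oracle.ManagedDataAccess.Client;

public static class OracleConnectionHelper
{
    // Returns a connection that has just been verified with a cheap no-op query.
    // If the pooled connection was killed by the idle timeout, it is discarded
    // and a fresh physical connection is opened.
    public static OracleConnection GetVerifiedConnection()
    {
        string connStr = ConfigurationManager.ConnectionStrings["OracleDb"].ConnectionString;
        var conn = new OracleConnection(connStr);
        conn.Open();
        try
        {
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT 1 FROM DUAL";
                cmd.ExecuteScalar();
            }
        }
        catch (OracleException)
        {
            // The pooled connection is dead: clear the pool, discard it, and reconnect.
            OracleConnection.ClearPool(conn);
            conn.Dispose();
            conn = new OracleConnection(connStr);
            conn.Open();
        }
        return conn;
    }
}

Whether to validate on every checkout or only after known idle periods is a trade-off between added latency and robustness.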

com.ibm.websphere.ce.cm.ConnectionWaitTimeoutException: Connection not available from a DBA

I am an Oracle DBA, not a Java developer or WebSphere expert. We recently started using WebSphere in our environment, so the developers are still learning it and I may not word my question properly. I did search the forums and saw two other questions like this. My question is more about how to troubleshoot this.
Websphere 8.5.0.2
Oracle 11.2.0.3
I see 20 open connections in the database, and all of them are inactive, so they are not processing. In Oracle this comes from v$session; inactive means the session is open and not doing anything - basically idle.
If they are inactive and not processing, they should be available for the connection pool to hand to a new requester, assuming the DAO the Java developer is using is closed when done (this includes the try/catch block). We confirmed that he is closing his connections.
Checks so far:
1. We reviewed the developer's code. He is using standard Java DAOs. He is closing his connection. He has a try/catch block, and the first thing he does in the catch is close the connection.
2. My assumption is that this should cover the code paths.
We don't see any errors raised in a log about 'closing' a connection.
My understanding of how a connection pool works:
1. The pool manager opens a configurable set of connections to the database. In our case it is 20.
2. When an application requests a connection, the connection manager looks up the next available connection in the pool and passes a reference to that connection to the requesting function.
Possibilities:
1. A really slow server. We are using VMs for development/test and have no visibility into the hosts to see if they are busy, so another VM could be using up CPU or disk.
Though lookups for available connections are lightweight, it is possible that the server is pegged at 100% CPU and we time out. The problem is that I don't have a way to check this - no privileges and no access to someone who has them.
2. Not closing connections: we checked pretty thoroughly and don't see any code paths (including exceptions) where he is not closing connections. The first thing he does in a catch is close the connection.
Any suggestions on where to look? I think it's an issue with a slow server, but I want to rule other things out. I would like to state again that I am not a Java developer or a WebSphere expert, so my question may be worded poorly.
the first thing he does in the catch is close the connection
Get the developer to introduce a finally block after the catch block and close the connection in the finally block instead of the catch block. Flow moves into the catch only when there is an error, so on the normal flow the connection is never released - which leaks connections.
try {
    // do the JDBC work here
}
catch (Exception ex) {
    // log the error
}
finally {
    // close the connection here - this runs on both the normal and the error path
}
The symptoms you've described indicate a connection leak. Leaks are easy to solve (see ad-inf's response), but it can be hard to locate the source of the leak. Luckily, WAS comes with the ConnLeakLogic mechanism. After enabling it, you'll find entries in trace.log for connections that have been retrieved from the pool by the application and not returned for a longer period of time. That connection information also includes a stack trace of the Java thread from the time the connection was obtained. Looking at those stack traces, the Java developer should be able to identify the offending code.

How do I scale my Azure application without having a temporary outage?

I'm toying with the Windows Azure Management API for scaling my Azure web role. At some point I have one instance and decide that I want to go from one instance to two. I send an HTTP POST request to
https://management.core.windows.net:443/<my-subscription-id>/services/hostedservices/<my-service-name>/deployments/<my-deployment-name>/?comp=config
with an XML body specifying the same configuration the deployment currently has, but with the instance count set to two. The call succeeds and the change starts. Now, for about 30 seconds, the web role will not accept HTTP calls - the caller gets
10061 connection refused
in the browser, which means the role is not serving client requests. That's a problem.
How do I scale the web role in such way that it serves client requests at all times?
As per the SLA (Service Level Agreement - Compute):
We guarantee that when you deploy two or more role instances in
different fault and upgrade domains your Internet facing roles will
have external connectivity at least 99.95% of the time.
This means that having only one instance is not a case covered by the SLA, so you may (or will) have downtime. If you scale from 2 instances to more, or from more back down to 2, there should not be any outage.
This blog post gives a good explanation of fault and upgrade domains. First of all, scaling means "upgrade": you are changing the configuration, and this configuration change needs to be propagated through all roles and instances. The only way to do that without downtime (currently) is to have at least two instances, each of which lives in a separate domain.
Please note that, when you have 2 instances or more, you might still experience an outage when you modify the service configuration (like changing the number of instances using the Service Management API). Any configuration change will trigger a reboot of your instances.
To prevent this you'll need to implement the following code in your WebRole.cs/WorkerRole.cs (and as a result you won't have an outage when you change the number of instances):
public override bool OnStart()
{
    // Subscribe to configuration-change notifications before the role starts.
    RoleEnvironment.Changing += RoleEnvironmentChanging;
    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        // e.Cancel = false tells the runtime the role can apply the configuration
        // change without being restarted.
        e.Cancel = false;
    }
}
