I am getting the following error trying to read from a socket. I'm doing a readInt() on that InputStream, and I am getting this error. Perusing the documentation, this suggests that the client side of the connection closed the connection. In this scenario, I am the server.
I have access to the client log files and it is not closing the connection, and in fact its log files suggest I am closing the connection. So does anybody have an idea why this is happening? What else to check for? Does this arise when there are local resources that are perhaps reaching thresholds?
I do note that I have the following line:
socket.setSoTimeout(10000);
just prior to the readInt(). There is a reason for this (long story), but just curious, are there circumstances under which this might lead to the indicated error? I have the server running in my IDE, I happened to leave it stuck on a breakpoint, and I then noticed the exact same errors begin appearing in my own logs.
Anyway, just mentioning it, hopefully not a red herring. :-(
There are several possible causes.
The other end has deliberately reset the connection, in a way which I will not document here. It is rare, and generally incorrect, for application software to do this, but it is not unknown for commercial software.
More commonly, it is caused by writing to a connection that the other end has already closed normally. In other words an application protocol error.
It can also be caused by closing a socket when there is unread data in the socket receive buffer.
In Windows, 'software caused connection abort', which is not the same as 'connection reset', is caused by network problems sending from your end. There's a Microsoft knowledge base article about this.
Connection reset simply means that a TCP RST was received. This happens when your peer receives data that it can't process, and there can be various reasons for that.
The simplest is when you close the socket, and then write more data on the output stream. By closing the socket, you told your peer that you are done talking, and it can forget about your connection. When you send more data on that stream anyway, the peer rejects it with an RST to let you know it isn't listening.
In other cases, an intervening firewall or even the remote host itself might "forget" about your TCP connection. This could happen if you don't send any data for a long time (2 hours is a common time-out), or because the peer was rebooted and lost its information about active connections. Sending data on one of these defunct connections will cause a RST too.
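To make the write-after-close case concrete, here is a minimal, self-contained sketch (the class name, port choice and timings are mine, not from the question): one end accepts and immediately closes, while the other keeps writing. Depending on OS buffering and timing, one of the later writes fails with a connection reset.

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // listen on any free port
        new Thread(() -> {
            try (Socket s = server.accept()) {
                // Close immediately without reading: any data the client sends
                // after this point is answered with a TCP RST.
            } catch (Exception ignored) {
            }
        }).start();

        try (Socket client = new Socket("localhost", server.getLocalPort())) {
            OutputStream out = client.getOutputStream();
            for (int i = 0; i < 10; i++) {
                out.write(new byte[1024]);
                out.flush();
                Thread.sleep(200); // give the RST time to arrive
            }
        }
        // Typically one of the later writes throws
        // java.net.SocketException: Connection reset (or "Broken pipe").
    }
}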
Update in response to additional information:
Take a close look at your handling of the SocketTimeoutException. This exception is raised if the configured timeout is exceeded while blocked on a socket operation. The state of the socket itself is not changed when this exception is thrown, but if your exception handler closes the socket, and then tries to write to it, you'll be in a connection reset condition. setSoTimeout() is meant to give you a clean way to break out of a read() operation that might otherwise block forever, without doing dirty things like closing the socket from another thread.
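For example, a read loop along these lines (a sketch with invented names, not the asker's actual code) treats the timeout as "no data yet" and leaves the socket alone:

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

class TimeoutAwareReader {
    // Keep reading ints; a timeout just means nothing has arrived yet.
    static void readLoop(Socket socket) throws IOException {
        socket.setSoTimeout(10000); // 10-second read timeout
        DataInputStream in = new DataInputStream(socket.getInputStream());
        while (!socket.isClosed()) {
            try {
                int value = in.readInt(); // may block for up to 10 s
                System.out.println("got " + value);
            } catch (SocketTimeoutException e) {
                // The socket is still usable: simply loop and try again (or
                // check a shutdown flag). Do NOT close the socket here and
                // then write to it later -- that is the reset scenario above.
            }
        }
    }
}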
Whenever I have had odd issues like this, I usually sit down with a tool like Wireshark and look at the raw data being passed back and forth. You might be surprised where things are being disconnected, and you are only being notified when you try to read.
You should inspect the full stack trace very carefully.
I had a server socket application and fixed a java.net.SocketException: Connection reset case.
In my case it happened while reading from a clientSocket Socket object whose connection had been closed for some reason (network loss, a firewall, an application crash, or an intentional close).
I re-established the connection whenever I got an error while reading from this Socket object.
Socket clientSocket = serverSocket.accept(); // serverSocket is my java.net.ServerSocket instance
is = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
int readed = is.read(); // WHERE ERROR STARTS !!!
The interesting thing is that if a client connects to my ServerSocket and closes its connection without sending anything, is.read() still gets called repeatedly. Because the read sits in an infinite while loop, you keep trying to read from a closed connection.
If you use something like the following for the read operation:
while(true)
{
Receive();
}
Then you get a stack trace something like the one below, over and over:
java.net.SocketException: Socket is closed
at java.net.ServerSocket.accept(ServerSocket.java:494)
What I did was close the ServerSocket, renew my connection, and wait for further incoming client connections:
String Receive() throws Exception
{
    try {
        int readed = is.read();
        // ....
    } catch (Exception e) {
        tryReConnect();
        logit(); // etc.
    }
    //...
}
This re-establishes my connection after unexpected client socket losses:
private void tryReConnect()
{
    try
    {
        serverSocket.close();
        // drop my old lost connection and let it be garbage collected immediately
        clientSocket = null;
        System.gc();
        // re-open the listening socket (SERVER_PORT: the port the server originally listened on)
        serverSocket = new ServerSocket(SERVER_PORT);
        // wait for a new client Socket connection and assign it to my field
        clientSocket = serverSocket.accept(); // waiting for another connection
        // recreate the reader so it wraps the new socket, not the dead one
        is = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        System.out.println("Connection established...");
    } catch (Exception e) {
        String message = "ReConnect not successful " + e.getMessage();
        logit(); // etc...
    }
}
I couldn't find another way to handle this, because you can't tell whether the connection has been lost without a try and catch; everything looks normal right up until the read fails. I was getting Connection reset continuously in exactly this situation.
Embarrassing to say it, but when I had this problem, it was simply a mistake: I was closing the connection before I had read all the data. In cases where small strings were returned it worked, but that was probably because the whole response was buffered before I closed it.
In cases where longer amounts of text were returned, the exception was thrown, since more than one buffer's worth was coming back.
You might check for this oversight. Remember that opening a URL is like opening a file: be sure to close it (release the connection) only once it has been fully read.
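As an illustration of "read everything, then release", here is a hedged sketch using HttpURLConnection (the class and method names are mine):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class FullyReadExample {
    static String fetch(String address) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) { // read until EOF...
                body.append(line).append('\n');
            }
        } finally {
            conn.disconnect(); // ...and only then release the connection
        }
        return body.toString();
    }
}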
I had the same error, and I have found the solution to the problem now: the client program was finishing before the server had read the streams.
I had this problem with a SOA system written in Java. I was running both the client and the server on different physical machines and they worked fine for a long time, then those nasty connection resets appeared in the client log and there wasn't anything strange in the server log. Restarting both client and server didn't solve the problem. Finally we discovered that the heap on the server side was rather full so we increased the memory available to the JVM: problem solved! Note that there was no OutOfMemoryError in the log: memory was just scarce, not exhausted.
Check your server's Java version. This happened to me because my WebLogic 10.3.6 was on JDK 1.7.0_75, which was limited to TLSv1. The REST endpoint I was trying to consume was rejecting anything below TLSv1.2.
By default Weblogic was trying to negotiate the strongest shared protocol. See details here: Issues with setting https.protocols System Property for HTTPS connections.
I added verbose SSL logging to identify the supported TLS. This indicated TLSv1 was being used for the handshake.
-Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager -Djava.security.debug=access:stack
I resolved this by pushing the feature out to our JDK 8-compatible product, since JDK 8 defaults to TLSv1.2. For those restricted to JDK 7, I also successfully tested a workaround by upgrading to TLSv1.2. I used this answer: How to enable TLS 1.2 in Java 7
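For reference, a minimal sketch of the Java 7 workaround, assuming an HttpsURLConnection-based client (https://example.com/ is just a placeholder endpoint):

import java.net.HttpURLConnection;
import java.net.URL;
import javax.net.ssl.SSLContext;

public class Tls12OnJava7 {
    public static void main(String[] args) throws Exception {
        // Only needed on Java 7; Java 8 already negotiates TLSv1.2 by default.
        System.setProperty("https.protocols", "TLSv1.2"); // affects HttpsURLConnection

        // Also make a TLSv1.2-capable SSLContext the default, for libraries
        // that build their sockets from the default context.
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null);
        SSLContext.setDefault(ctx);

        URL url = new URL("https://example.com/"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}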
I also had this problem with a Java program trying to send a command on a server via SSH. The problem was with the machine executing the Java code: it didn't have permission to connect to the remote server. The write() method was doing all right, but the read() method was throwing java.net.SocketException: Connection reset. I fixed this problem by adding the client's SSH key to the remote server's known keys.
In my case it was a DNS problem.
I put the resolved IP in the hosts file and everything worked fine.
Of course it is not a permanent solution, but it gave me time to fix the DNS problem.
In my experience, I often encounter the following situations:
If you work in a corporate environment, contact the network and security team, because requests to external services may need the relevant endpoint to be explicitly allowed.
Another possibility is that the SSL certificate has expired on the server where your application is running.
I've seen this problem. In my case, the error was caused by reusing the same ClientRequest object in a specific Java class. That project was using JBoss RESTEasy.
Initially only one method was using/invoking the ClientRequest object (kept as a global variable in the class) to make a request to a specific URL.
After that, another method was created to get data from another URL, but it reused the same ClientRequest object.
The solution: a second ClientRequest object was created in the same class, exclusively for that request, so the first one was not reused.
In my case it was a problem with the TLS version. I was using Retrofit with an OkHttp client, and after an ALB update on the server side I had to delete my connectionSpecs config:
OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();
List<ConnectionSpec> connectionSpecs = new ArrayList<>();
connectionSpecs.add(ConnectionSpec.COMPATIBLE_TLS); // allows fallback to older TLS versions
// clientBuilder.connectionSpecs(connectionSpecs);  // left commented out, so OkHttp's defaults are used
So try removing or adding this config to use a different TLS configuration.
I used to get the 'NotifyUtil::java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:...' message in the Apache console of my NetBeans 7.4 setup.
I tried many solutions to get rid of it; what worked for me was enabling TLS on Tomcat.
Here is how to:
Create a keystore file to store the server's private key and
self-signed certificate by executing the following command:
Windows:
"%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA
Unix:
$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
and specify a password value of "changeit".
As per https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html
(This will create a .keystore file in your localuser dir)
Then edit the server.xml file (%CATALINA_HOME%apache-tomcat-7.0.41.0_base\conf\server.xml), uncommenting and editing the relevant lines, to enable SSL and the TLS protocol:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" keystorePass="changeit" />
I hope this helps
I have used the http://github.com/streadway/amqp package in my application to handle connections to a remote RabbitMQ server. Everything is OK and works fine, but when a connection is idle for a long period of time, e.g. 6 hours, it gets closed. I check NotifyClose(make(chan *amqp.Error)) all the time in my goroutine and it returns:
Exception (501) Reason: "write tcp
192.168.133.53:55424->192.168.134.34:5672: write: broken pipe"
Why does this error happen? (Is there any problem in my code?)
How long can a connection be idle?
How can I prevent this problem?
As Cosmic Ossifrage says, the error is saying your RabbitMQ client has disconnected.
There are so many things that could sit between your client and server that can/will drop dormant connections that it's not worth focusing on how long your connection can be dormant for. You want to set the requested heartbeat interval in your connection manager.
https://www.rabbitmq.com/heartbeats.html
I'm not familiar with the framework you're using, but I see it has a defaultHeartbeat field in connection.go. You may need to experiment with the value to find the best balance: frequent enough to stop the connection from being killed, but not so frequent that you hit the server with too much keep-alive traffic.
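I can't speak to the Go API in detail, but to illustrate the same idea in Java, this is roughly how you request a heartbeat with the official RabbitMQ Java client (the host name and 30-second interval are made-up values); the Go library exposes an equivalent option on its connection config:

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.local"); // placeholder broker address
        factory.setRequestedHeartbeat(30);         // send/expect heartbeats every 30 s

        Connection connection = factory.newConnection();
        try {
            // An idle connection now exchanges heartbeat frames, so firewalls
            // and load balancers are far less likely to drop it silently.
        } finally {
            connection.close();
        }
    }
}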
We have a web application with the stack Spring, Hibernate, C3P0, and the Oracle DB driver (with an Oracle DB behind it).
From time to time we experience blocking locks over a longer period of time, which then get killed on the DB end. (We know this is caused by bad application design and we will fix it, but that's not the point of this question.)
After the DB session was killed by the DB, it seems that the connection pool reuses the now-broken connection, which results in the error:
A PooledConnection that has already signalled a Connection error is still in use!
Another error has occurred [ java.sql.SQLRecoverableException: Closed Connection ] which will not be reported to listeners!
On the DataSource we configured
dataSource.setTestConnectionOnCheckin(true);
dataSource.setTestConnectionOnCheckout(true);
But it did not help. We expected that the connections would fail these tests and then get renewed, but this does not happen.
Any hints on how we can get the broken connections recreated?
This warning is given when a Connection that is already checked out experiences an Exception that causes c3p0 to treat it as invalid (so it will not be reincorporated back into the pool on close()), but the Connection continues to be used and experiences an Exception again. These are not broken Connections in the pool. They are broken Connections in-use by the application. So testing them on checkout (or checkin) doesn't do anything about them.
To get rid of this, you need to examine the Exception handling within your application code. Are there circumstances where an invalid Connection might have thrown an Exception, but that Exception was caught and the Connection reused?
The warning itself is harmless. It's just saying that c3p0 already knows the Connection is bad, so it won't emit an event to signal that again.
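For example, this deliberately simplified, hypothetical handler shows the pattern being described: the first Exception is swallowed and the same, now-invalid Connection is used again, which is what produces the warning.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class BrokenConnectionReuse {
    // Anti-pattern: swallow the error and keep using the same Connection.
    static void runTwice(Connection conn, String sql) {
        try (Statement st = conn.createStatement()) {
            st.execute(sql);
        } catch (SQLException e) {
            // The Connection may be invalid now, but we carry on with it...
        }
        try (Statement st = conn.createStatement()) {
            st.execute(sql); // ...and this reuse triggers the c3p0 warning
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    // Better: close() the Connection in the first catch block (c3p0 will
    // discard it) and check a fresh one out of the DataSource for the retry.
}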
Suppose that after 30 s (the default client-failure-check-period) the client has not received any packets from the server as a result of network connection problems.
Will the client now be disconnected from the session/connection?
Suppose now I add this configuration:
<retry-interval>1000</retry-interval>
<retry-interval-multiplier>1.5</retry-interval-multiplier>
<max-retry-interval>60000</max-retry-interval>
<reconnect-attempts>1000</reconnect-attempts>
What will happen now?
Will the client still get disconnected from the session/connection, but only after trying to reconnect 1000 times (i.e. until the network is available again)? Or will it skip the disconnect entirely?
Regarding your first question, and according to HornetQ documentation, that can be found under 17.2. Detecting failure from the client side:
As long as the client is receiving data from the server it will consider the connection to be still alive.
If the client does not receive any packets for client-failure-check-period milliseconds then it will consider the connection failed and will either initiate failover, or call any FailureListener instances (or ExceptionListener instances if you are using JMS) depending on how it has been configured.
Therefore the client will assume that the connection was in fact lost and start its failure processes.
For your second question, also according to the HornetQ documentation, that can be found under 34.3. Configuring reconnection/reattachment attributes:
reconnect-attempts. This optional parameter determines the total number of reconnect attempts to make before giving up and shutting down. A value of -1 signifies an unlimited number of attempts. The default value is 0.
So, yes, the connection will be dropped after 1000 attempts.
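If you configure the client programmatically rather than via XML, the rough core-API equivalent looks like the sketch below (this assumes HornetQ's ServerLocator setters, which mirror the XML attribute names; check them against your HornetQ version):

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class ReconnectSettings {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        locator.setClientFailureCheckPeriod(30000); // 30 s, the default discussed above
        locator.setRetryInterval(1000);
        locator.setRetryIntervalMultiplier(1.5);
        locator.setMaxRetryInterval(60000);
        locator.setReconnectAttempts(1000);         // -1 would mean retry forever
        // Sessions created from this locator will keep trying to reconnect
        // before the failure is finally reported to any FailureListeners.
    }
}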
I am working in an environment where we get production issues from time to time related to Oracle connections. We use ODP.NET from ASP.NET applications, and we suspect the firewall closes connections that have been in the connection pool too long.
Sometimes we get an "ORA-12571: TNS packet writer failure" error, and sometimes we get "ORA-03135: connection lost contact."
I was wondering if someone has run into this and/or has an understanding of the difference between the 2 errors.
Using a mobile phone analogy:
ORA-12571 (Failure) Means call is dropped.
ORA-03135 (Connection Lost) Other party hung up.
My understanding is that 3135 occurs when a connection is lost. This doesn't tell you why the connection was lost, though. It may have been terminated by the server because the server failed to receive a response to a probe for a certain amount of time and assumed that the connection was dead. Or (I'm not sure about this) the exact reverse of that: the client failed to receive a probe response from the server for a certain amount of time, so it assumed the connection was lost. The "certain amount of time" is controlled by SQLNET.EXPIRE_TIME=[minutes] in sqlnet.ora.
As for 12571, my (again vague) understanding is that there was a sudden failure to send a packet during communication with the server, and that this is typically caused by some software or hardware interfering with the connection (either by design, or by error). For instance, if you pull out your ethernet cable and then try to execute a query, you'll probably get this. Or if a firewall or anti-malware application decides to block the traffic.