I am getting the following error trying to read from a socket. I'm doing a readInt() on the InputStream, and I am getting this error. Perusing the documentation suggests that the client part of the connection closed the connection. In this scenario, I am the server.
I have access to the client log files, and it is not closing the connection; in fact, its log files suggest I am closing the connection. So does anybody have an idea why this is happening? What else should I check for? Does this arise when local resources are perhaps reaching thresholds?
I do note that I have the following line:
socket.setSoTimeout(10000);
just prior to the readInt(). There is a reason for this (long story), but just curious: are there circumstances under which this might lead to the indicated error? I have the server running in my IDE, and when I happened to leave the IDE stuck on a breakpoint, I noticed the exact same errors begin appearing in my own logs in the IDE.
Anyway, just mentioning it, hopefully not a red herring. :-(
There are several possible causes.
The other end has deliberately reset the connection, in a way which I will not document here. It is rare, and generally incorrect, for application software to do this, but it is not unknown for commercial software.
More commonly, it is caused by writing to a connection that the other end has already closed normally. In other words an application protocol error.
It can also be caused by closing a socket when there is unread data in the socket receive buffer.
In Windows, 'software caused connection abort', which is not the same as 'connection reset', is caused by network problems sending from your end. There's a Microsoft knowledge base article about this.
Connection reset simply means that a TCP RST was received. This happens when your peer receives data that it can't process, and there can be various reasons for that.
The simplest is when you close the socket, and then write more data on the output stream. By closing the socket, you told your peer that you are done talking, and it can forget about your connection. When you send more data on that stream anyway, the peer rejects it with an RST to let you know it isn't listening.
In other cases, an intervening firewall or even the remote host itself might "forget" about your TCP connection. This could happen if you don't send any data for a long time (2 hours is a common time-out), or because the peer was rebooted and lost its information about active connections. Sending data on one of these defunct connections will cause a RST too.
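To make the first case concrete, here is a rough, self-contained sketch that usually reproduces it locally: the server accepts a connection and closes it immediately, the client keeps writing, and the second write (or a following read) fails with a connection reset. It is timing-dependent and the exact exception text varies by platform, so treat it as an illustration rather than a reliable test; all names in it are placeholders, not from the question's code.

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // any free port

        // Server side: accept one connection and close it immediately without reading.
        new Thread(() -> {
            try (Socket s = server.accept()) {
                // closed on exit from try-with-resources
            } catch (Exception ignored) {
            }
        }).start();

        Socket client = new Socket("localhost", server.getLocalPort());
        OutputStream out = client.getOutputStream();

        out.write(1);       // usually succeeds; the peer has already closed and answers with an RST
        out.flush();
        Thread.sleep(500);  // give the RST time to arrive
        out.write(2);       // typically fails here with a SocketException
        out.flush();        // ("Connection reset" / "Broken pipe", depending on platform)

        client.close();
        server.close();
    }
}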
Update in response to additional information:
Take a close look at your handling of the SocketTimeoutException. This exception is raised if the configured timeout is exceeded while blocked on a socket operation. The state of the socket itself is not changed when this exception is thrown, but if your exception handler closes the socket, and then tries to write to it, you'll be in a connection reset condition. setSoTimeout() is meant to give you a clean way to break out of a read() operation that might otherwise block forever, without doing dirty things like closing the socket from another thread.
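For example, a read loop that treats the timeout as "no data yet" rather than a fatal error might look like the following sketch (the class, handle() method, and 10-second timeout are placeholders, not taken from the question's code):

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutTolerantReader {

    // Read loop that survives timeouts without closing the socket.
    static void readLoop(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        socket.setSoTimeout(10000);              // readInt() blocks for at most 10 seconds
        while (!socket.isClosed()) {
            try {
                int value = in.readInt();
                handle(value);
            } catch (SocketTimeoutException e) {
                // No data within the timeout. The socket is still usable, so do NOT
                // close it here; just loop and try the read again (or check a
                // shutdown flag, do housekeeping, etc.).
            } catch (IOException e) {
                // EOF or a real I/O failure: now it is appropriate to clean up.
                socket.close();
                throw e;
            }
        }
    }

    static void handle(int value) {
        // placeholder for the application's processing of each value
    }
}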
Whenever I have had odd issues like this, I usually sit down with a tool like Wireshark and look at the raw data being passed back and forth. You might be surprised where things are being disconnected, and you are only being notified when you try to read.
You should inspect the full stack trace very carefully.
I have a server socket application, and I fixed a java.net.SocketException: Connection reset case.
In my case it happened while reading from a clientSocket Socket object whose connection had been closed for some reason (network loss, firewall, application crash, or an intentional close).
I was actually re-establishing the connection whenever I got an error while reading from this Socket object.
Socket clientSocket = serverSocket.accept(); // serverSocket is the listening ServerSocket instance
is = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
int readed = is.read(); // WHERE THE ERROR STARTS !!!
The interesting thing is that, for my Java socket, if a client connects to my ServerSocket and closes its connection without sending anything, is.read() is called repeatedly. It seems that, because of the infinite while loop used for reading from this socket, you keep trying to read from a closed connection.
If you use something like the following for the read operation:
while (true)
{
    Receive();
}
Then you get a stack trace something like the one below, over and over:
java.net.SocketException: Socket is closed
at java.net.ServerSocket.accept(ServerSocket.java:494)
What I did was simply close the ServerSocket, renew my connection, and wait for further incoming client connections:
String Receive() throws Exception
{
    try {
        int readed = is.read();
        // ... process the data that was read ...
    } catch (Exception e) {
        // Reading failed, most likely because the client side is gone,
        // so re-establish the connection
        tryReConnect();
        logit(); // etc.
    }
    // ...
}
This re-establishes my connection after unexpected client socket losses:
private void tryReConnect()
{
    try
    {
        serverSocket.close();
        // Drop my old, lost connection and let it be garbage collected immediately
        clientSocket = null;
        System.gc();
        // Wait for a new client socket connection and assign it to my clientSocket variable
        clientSocket = serverSocket.accept(); // Waiting for another connection
        System.out.println("Connection established...");
    } catch (Exception e) {
        String message = "ReConnect not successful " + e.getMessage();
        logit(); // etc...
    }
}
I couldn't find another way because, as the debugger snapshot showed (the image is not reproduced here), you can't tell whether the connection is lost without a try/catch: everything about the socket looks fine. I took that snapshot while I was getting Connection reset continuously.
Embarrassing to say, but when I had this problem, it was simply a mistake that I was closing the connection before I had read all the data. In cases where small strings were being returned it worked, but that was probably because the whole response was buffered before I closed it.
In cases where longer amounts of text were being returned, the exception was thrown, since more than one buffer's worth was coming back.
You might check for this oversight. Remember that opening a URL is like opening a file: be sure to close it (release the connection) only once it has been fully read.
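A minimal sketch of that "read everything, then release" pattern, assuming a plain HttpURLConnection (the class name and URL here are illustrative, not the answerer's actual code):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadFully {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/data").openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {   // drain the whole response...
                body.append(line).append('\n');
            }
        }                                              // ...before the stream is closed
        conn.disconnect();                             // now it is safe to release the connection
        System.out.println(body.length() + " characters read");
    }
}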
I had the same error, and I have now found the solution to the problem. The problem was that the client program was finishing before the server had read the streams.
I had this problem with a SOA system written in Java. I was running both the client and the server on different physical machines and they worked fine for a long time, then those nasty connection resets appeared in the client log and there wasn't anything strange in the server log. Restarting both client and server didn't solve the problem. Finally we discovered that the heap on the server side was rather full so we increased the memory available to the JVM: problem solved! Note that there was no OutOfMemoryError in the log: memory was just scarce, not exhausted.
Check your server's Java version. This happened to me because my WebLogic 10.3.6 was on JDK 1.7.0_75, which was limited to TLSv1. The REST endpoint I was trying to consume was rejecting anything below TLSv1.2.
By default WebLogic was trying to negotiate the strongest shared protocol. See details here: Issues with setting https.protocols System Property for HTTPS connections.
I added verbose SSL logging to identify the supported TLS version; this indicated that TLSv1 was being used for the handshake:
-Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager -Djava.security.debug=access:stack
I resolved this by pushing the feature out to our JDK8-compatible product, since JDK8 defaults to TLSv1.2. For those restricted to JDK7, I also successfully tested a workaround by enabling TLSv1.2 on Java 7. I used this answer: How to enable TLS 1.2 in Java 7
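For reference, that Java 7 workaround boils down to requesting TLSv1.2 explicitly. A minimal sketch of the two usual approaches (the endpoint URL is a placeholder; this is not the answerer's exact code):

import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import java.net.URL;

public class Tls12Client {
    public static void main(String[] args) throws Exception {
        // Option 1: ask HttpsURLConnection to offer TLSv1.2
        // (equivalently, pass -Dhttps.protocols=TLSv1.2 on the command line)
        System.setProperty("https.protocols", "TLSv1.2");

        // Option 2: build an SSLContext for TLSv1.2 and use its socket factory explicitly
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null);   // default key managers / trust managers

        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/api").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}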
I also had this problem with a Java program trying to send a command to a server via SSH. The problem was with the machine executing the Java code: it didn't have permission to connect to the remote server. The write() method was doing alright, but the read() method was throwing a java.net.SocketException: Connection reset. I fixed this problem by adding the client's SSH key to the remote server's known keys.
In my case it was a DNS problem.
I put the resolved IP in the hosts file and everything worked fine.
Of course this is not a permanent solution, but it gave me time to fix the DNS problem.
In my experience, I often encounter the following situations:
If you work at a corporate company, contact the network and security team, because for requests made to external services it may be necessary to grant permission for the relevant endpoint.
Another issue is that the SSL certificate may have expired on the server where your application is running.
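If you want to rule out an expired certificate quickly, a small sketch like the following prints the expiry dates of the certificates presented by whichever server you point it at (the URL is a placeholder):

import javax.net.ssl.HttpsURLConnection;
import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

public class CertExpiryCheck {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/").openConnection();
        conn.connect();
        for (Certificate cert : conn.getServerCertificates()) {
            if (cert instanceof X509Certificate) {
                X509Certificate x509 = (X509Certificate) cert;
                // getNotAfter() is the expiry date of the certificate
                System.out.println(x509.getSubjectX500Principal() + " expires " + x509.getNotAfter());
            }
        }
        conn.disconnect();
    }
}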
I've seen this problem. In my case, the error was caused by reusing the same ClientRequest object in a specific Java class. That project was using JBoss RESTEasy.
Initially only one method was using/invoking the ClientRequest object (kept as a global variable in the class) to make a request to a specific URL.
After that, another method was created to get data from another URL, but it reused the same ClientRequest object.
The solution: another ClientRequest object was created in the same class, exclusively for that call, so the first one was not reused.
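A minimal sketch of that fix, assuming the legacy RESTEasy 2.x ClientRequest API mentioned above (the class name and URLs are placeholders, not the project's actual code):

import org.jboss.resteasy.client.ClientRequest;
import org.jboss.resteasy.client.ClientResponse;

public class UsersClient {

    // A fresh ClientRequest per call, instead of one shared global instance.
    public String fetchUsers() throws Exception {
        ClientRequest request = new ClientRequest("http://example.com/api/users");
        ClientResponse<String> response = request.get(String.class);
        return response.getEntity();
    }

    public String fetchOrders() throws Exception {
        // A second, independent request object for the second URL.
        ClientRequest request = new ClientRequest("http://example.com/api/orders");
        ClientResponse<String> response = request.get(String.class);
        return response.getEntity();
    }
}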
In my case it was a problem with the TLS version. I was using Retrofit with an OkHttp client, and after an ALB update on the server side I had to delete my connectionSpecs config:
OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();
List<ConnectionSpec> connectionSpecs = new ArrayList<>();
connectionSpecs.add(ConnectionSpec.COMPATIBLE_TLS);
// clientBuilder.connectionSpecs(connectionSpecs); // removed so OkHttp falls back to its default specs
So try removing (or adding) this config to use a different TLS configuration.
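For completeness, a minimal sketch of wiring the builder, without the connectionSpecs override, into Retrofit (this assumes Retrofit 2 and uses a placeholder base URL, not the poster's actual setup):

import okhttp3.OkHttpClient;
import retrofit2.Retrofit;

public class ApiClientFactory {
    public static Retrofit create() {
        OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();
        // No connectionSpecs(...) call: OkHttp keeps its built-in defaults,
        // which allow current TLS versions to be negotiated with the updated ALB.
        OkHttpClient client = clientBuilder.build();

        return new Retrofit.Builder()
                .baseUrl("https://example.com/")   // placeholder base URL
                .client(client)
                .build();
    }
}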
I used to get the 'NotifyUtil::java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:...' message in the Apache console of my NetBeans 7.4 setup.
I tried many solutions to get rid of it; what worked for me was enabling TLS on Tomcat.
Here is how to:
Create a keystore file to store the server's private key and
self-signed certificate by executing the following command:
Windows:
"%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA
Unix:
$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
and specify a password value of "changeit".
As per https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html
(This will create a .keystore file in your local user's home directory)
Then edit the server.xml file (%CATALINA_HOME%\apache-tomcat-7.0.41.0_base\conf\server.xml), uncommenting and editing the relevant lines, to enable SSL and the TLS protocol:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" keystorePass="changeit" />
I hope this helps
I was trying to enable an AMQP 1.0 connection with Eclipse Ditto running on my local Ubuntu virtual machine, following the instructions from the website. So I created the twin on my instance, verified it exists, and the following step was to create a connection with the endpoint.
First my question: is it mandatory to use Hono to create the AMQP connection? I would prefer to use a simple Mosquitto client. So I tried to execute the PUT cURL request:
{
  "targetActorSelection": "/system/sharding/connection",
  "headers": { "aggregate": false },
  "piggybackCommand": {
    "type": "connectivity.commands:createConnection",
    "connection": {}
  }
}
to the address where my instance of Eclipse Ditto is running, http://localhost/devops/piggyback/connectivity, but I'm getting a 401 Authorization error.
I tried using the basic authentication from the example, devops:devopsPw1!, but it fails as well.
Meanwhile, sending the same command to the Ditto sandbox instance works fine. What did I miss in my configuration?
Thanks a lot in advance, Mila
Regarding the first question: no, it is not mandatory to use Hono to create an AMQP connection. You can establish an AMQP connection to whatever URI you define in your connection.
This leads me to the next point. The JSON you provided in your question is missing the description of the actual connection.
I see that we should clarify this in the documentation more explicitly like we did for the testConnection command.
You could have a look at the connection model to see how to configure the connection.
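For illustration, the "connection" object needs roughly the following shape. The field names and values below are my recollection of the Ditto connectivity model and are only placeholders (broker URI, IDs, addresses, and authorization subjects are all assumptions); verify everything against the connection model documentation linked above:

{
  "targetActorSelection": "/system/sharding/connection",
  "headers": { "aggregate": false },
  "piggybackCommand": {
    "type": "connectivity.commands:createConnection",
    "connection": {
      "id": "my-amqp10-connection",
      "connectionType": "amqp-10",
      "connectionStatus": "open",
      "uri": "amqp://user:password@my-amqp-host:5672",
      "failoverEnabled": true,
      "sources": [
        { "addresses": [ "telemetry" ], "authorizationContext": [ "nginx:ditto" ] }
      ],
      "targets": [
        { "address": "events", "topics": [ "_/_/things/twin/events" ], "authorizationContext": [ "nginx:ditto" ] }
      ]
    }
  }
}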
Regarding your second question (401 response) the problem is that the default devops password is "foobar". You can configure it to a password you like by setting the environment variable DEVOPS_PASSWORD of the gateway container.
I hope I could help you.
Hi good people of StackOverflow.
I am having issues with EventMachine.connect: I am able to specify the host name and port, but not to send additional params in my connection string.
Here is my code:
EM.connect 'localhost', 61613, StompListener
Now I am experiencing issues with the STOMP connection to the ActiveMQ server: basically, the connection keeps dying from time to time and I need to reconnect. I have found information where Apache ActiveMQ suggests
using transport.keepAlive=true for these cases, but I am not able to figure out how to add this URL param to EM.connect. Do you know how?
I have a network of brokers with the following configuration
<transportConnectors>
<transportConnector name="tomer-amq-test2" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true" updateClusterClientsOnRemove="true"/>
</transportConnectors>
I expect that when I connect using the URL
failover:(tcp://tomer-amq-test2:61616)?backup=true
the broker will update the client with the full list of brokers it is connected to, and the client will connect to one of them at random, and to a second as backup.
However, this is not happening and the client connects only to the broker specified in the URL.
Can anyone help?
Tx Tomer
I figured out the problem (at least in my environment).
When a broker notifies another broker that it is up, it identifies itself by its server name.
Once the server names of all brokers were added to /etc/hosts on the client side, all was well.
:)
I guess this is bad practice, and the broker should identify itself by IP rather than by hostname.
I was running ActiveMQ 5.5.1 on Ubuntu 10.04.
Your client will only be updated with the full broker list if some or all of the following properties are true: updateClusterClients, rebalanceClusterClients and updateClusterClientsOnRemove.
You have to set them explicitly as they are false by default.
See: http://activemq.apache.org/failover-transport-reference.html
I am creating a JMS connection using Java. The command I am using to establish the connection is:
QueueConnectionFactory factory =
new com.tibco.tibjms.TibjmsQueueConnectionFactory(JMSserverUrl);
Where JMSserverUrl is the variable which stores my JMS URL.
Now the problem is that I need to add fault tolerance, i.e. two different URLs. So can anyone tell me how I can specify two URLs together in the above code sample, such that if the first URL is not accessible it tries connecting to the other URL?
Put all URLs in a single string with a comma between them.
new TibjmsQueueConnectionFactory("ssl://host01:20302,ssl://host02:20302");
Caution: I am a TIBCO EMS newbie, but this seems to work, as evidenced by the error I can get ...
javax.jms.JMSSecurityException: Failed to connect to any server at:
ssl://host01:20302,ssl://host02:20302
[Error: Can not initialize SSL client: no trusted certificates are set:
url that returned this exception = SSL://host01:20302 ]
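Putting that together with the question's code, a minimal sketch of the fault-tolerant factory (host names, port, and credentials are placeholders; this mirrors the question's TibjmsQueueConnectionFactory usage rather than any official example):

import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

public class EmsFailoverExample {
    public static void main(String[] args) throws Exception {
        // Comma-separated URLs: the client tries the first server, then falls back to the second.
        String JMSserverUrl = "tcp://host01:7222,tcp://host02:7222";
        QueueConnectionFactory factory =
                new com.tibco.tibjms.TibjmsQueueConnectionFactory(JMSserverUrl);
        QueueConnection connection = factory.createQueueConnection("user", "password");
        connection.start();
        // ... create sessions, senders, and receivers here ...
        connection.close();
    }
}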
The .NET documentation for TIBCO (I know you're using Java) suggests that you can provide a comma-delimited list of server URLs for messaging connections. Bear in mind that I don't have any real TIBCO experience, but this is a common way to handle initial connection fault tolerance (i.e. prior to establishing a connection and receiving information about the cluster, after which failover is typically handled by the connection), so it may be worth a try. Another solution I have seen for this problem is creating a virtual IP and handling fault tolerance at the network level.