I am getting a java.net.SocketException: Connection reset while trying to read from a socket. I'm doing a readInt() on that InputStream when the error occurs. Perusing the documentation suggests that the client side of the connection closed it. In this scenario, I am the server.
I have access to the client log files, and it is not closing the connection; in fact, its log files suggest that I am the one closing the connection. So does anybody have an idea why this is happening? What else should I check for? Does this arise when local resources are perhaps reaching thresholds?
I do note that I have the following line:
socket.setSoTimeout(10000);
just prior to the readInt(). There is a reason for this (long story), but just curious, are there circumstances under which this might lead to the indicated error? I have the server running in my IDE, and I happened to leave my IDE stuck on a breakpoint, and I then noticed the exact same errors begin appearing in my own logs in my IDE.
Anyway, just mentioning it, hopefully not a red herring. :-(
There are several possible causes.
The other end has deliberately reset the connection, in a way which I will not document here. It is rare, and generally incorrect, for application software to do this, but it is not unknown for commercial software.
More commonly, it is caused by writing to a connection that the other end has already closed normally. In other words, an application protocol error.
It can also be caused by closing a socket when there is unread data in the socket receive buffer.
In Windows, 'software caused connection abort', which is not the same as 'connection reset', is caused by network problems sending from your end. There's a Microsoft knowledge base article about this.
Connection reset simply means that a TCP RST was received. This happens when your peer receives data that it can't process, and there can be various reasons for that.
The simplest is when you close the socket, and then write more data on the output stream. By closing the socket, you told your peer that you are done talking, and it can forget about your connection. When you send more data on that stream anyway, the peer rejects it with an RST to let you know it isn't listening.
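As a concrete illustration of that first case, here is a minimal, self-contained sketch (not from the original posts; the exact exception text and timing are platform-dependent): one end accepts and immediately closes, the other keeps writing and eventually sees the reset.

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);                  // ephemeral port
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {

            accepted.close();                                            // the peer is done with the conversation

            OutputStream out = client.getOutputStream();
            try {
                out.write(new byte[1024]);                               // usually "succeeds": data sits in the send buffer
                out.flush();
                Thread.sleep(200);                                       // give the peer's RST time to arrive
                out.write(new byte[1024]);                               // this write (or the next read) now fails
                out.flush();
            } catch (IOException e) {
                System.out.println("Caught: " + e);                      // typically "Connection reset" or "Broken pipe"
            }
        }
    }
}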
In other cases, an intervening firewall or even the remote host itself might "forget" about your TCP connection. This could happen if you don't send any data for a long time (2 hours is a common time-out), or because the peer was rebooted and lost its information about active connections. Sending data on one of these defunct connections will cause an RST too.
Update in response to additional information:
Take a close look at your handling of the SocketTimeoutException. This exception is raised if the configured timeout is exceeded while blocked on a socket operation. The state of the socket itself is not changed when this exception is thrown, but if your exception handler closes the socket, and then tries to write to it, you'll be in a connection reset condition. setSoTimeout() is meant to give you a clean way to break out of a read() operation that might otherwise block forever, without doing dirty things like closing the socket from another thread.
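As a minimal sketch of what that handler can look like, assuming a readInt()-based protocol like the one in the question; the running flag and handle() method are hypothetical stand-ins for your own logic:

socket.setSoTimeout(10000);                         // 10-second read timeout
DataInputStream in = new DataInputStream(socket.getInputStream());
while (running) {                                   // 'running' is a hypothetical shutdown flag
    try {
        int value = in.readInt();
        handle(value);                              // 'handle' is a hypothetical application method
    } catch (SocketTimeoutException e) {
        // Nothing arrived within 10 seconds. The socket is still perfectly usable:
        // do any housekeeping here and loop back into the read. Do NOT close the
        // socket in this handler, or later traffic from the peer will be met with an RST.
    }
}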
Whenever I have had odd issues like this, I usually sit down with a tool like Wireshark and look at the raw data being passed back and forth. You might be surprised where things are being disconnected; you are only being notified when you try to read.
You should inspect the full stack trace very carefully.
I have a server socket application, and I fixed a java.net.SocketException: Connection reset case.
In my case it happened while reading from a clientSocket Socket object whose connection had been closed for some reason (network loss, a firewall, an application crash, or an intentional close).
Actually, I re-establish the connection whenever I get an error while reading from this Socket object.
Socket clientSocket = serverSocket.accept();
BufferedReader is = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
int readed = is.read(); // WHERE ERROR STARTS !!!
The interesting thing is that, for my Java Socket, if a client connects to my ServerSocket and closes its connection without sending anything, is.read() is called repeatedly. It seems that, because you are in an infinite while loop reading from this socket, you keep trying to read from a closed connection.
If you use something like the following for the read operation:
while (true)
{
    Receive();
}
then you get a stack trace something like the one below, over and over:
java.net.SocketException: Socket is closed
at java.net.ServerSocket.accept(ServerSocket.java:494)
What I did was simply close the ServerSocket, renew my connection, and wait for further incoming client connections:
String Receive() throws Exception
{
    try {
        int readed = is.read();
        ....
    } catch (Exception e) {
        tryReConnect();
        logit(); // etc.
    }
    //...
}
This re-establishes my connection after unexpected client socket losses:
private void tryReConnect()
{
    try
    {
        serverSocket.close();
        // Drop my old lost connection and let it be garbage collected immediately
        clientSocket = null;
        System.gc();
        // Wait for a new client Socket connection and assign it to my local variable
        clientSocket = serverSocket.accept(); // Waiting for another connection
        System.out.println("Connection established...");
    } catch (Exception e) {
        String message = "ReConnect not successful " + e.getMessage();
        logit(); // etc...
    }
}
I couldn't find another way, because you can't tell whether the connection has been lost without a try/catch: when I inspected the socket in the debugger while I was getting Connection reset continuously, everything looked right.
Embarrassing to say, but when I had this problem, the mistake was simply that I was closing the connection before I had read all the data. In cases where small strings were returned it worked, but that was probably because the whole response was buffered before I closed it.
In cases where longer amounts of text were returned, the exception was thrown, since more than a buffer's worth was coming back.
You might check for this oversight. Remember that opening a URL is like opening a file: be sure to close it (release the connection) once it has been fully read.
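A hedged sketch of that pattern with HttpURLConnection (the URL is hypothetical): drain the stream completely, then release the connection.

URL url = new URL("https://example.com/data");                      // hypothetical endpoint
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
    StringBuilder body = new StringBuilder();
    String line;
    while ((line = in.readLine()) != null) {                        // read the whole response...
        body.append(line).append('\n');
    }
    System.out.println(body);
} finally {
    conn.disconnect();                                              // ...and only then release the connection
}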
I had the same error and found the solution: the client program was finishing before the server had read the streams.
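A minimal sketch of one way to avoid that race, assuming a simple DataInput/DataOutput protocol: the client flushes, signals end-of-stream, and waits for an acknowledgement before exiting (host, port and the ack convention are hypothetical).

try (Socket socket = new Socket(host, port)) {                  // host/port are placeholders
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(42);
    out.flush();                                                // make sure everything is on the wire
    socket.shutdownOutput();                                    // send FIN, but keep the read side open

    DataInputStream in = new DataInputStream(socket.getInputStream());
    int ack = in.readInt();                                     // don't exit before the server has answered
    System.out.println("Server acknowledged: " + ack);
}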
I had this problem with a SOA system written in Java. I was running both the client and the server on different physical machines and they worked fine for a long time, then those nasty connection resets appeared in the client log and there wasn't anything strange in the server log. Restarting both client and server didn't solve the problem. Finally we discovered that the heap on the server side was rather full so we increased the memory available to the JVM: problem solved! Note that there was no OutOfMemoryError in the log: memory was just scarce, not exhausted.
Check your server's Java version. This happened to me because my WebLogic 10.3.6 was on JDK 1.7.0_75, which defaulted to TLSv1. The REST endpoint I was trying to consume was shutting down anything below TLSv1.2.
By default WebLogic was trying to negotiate the strongest shared protocol. See details here: Issues with setting https.protocols System Property for HTTPS connections.
I added verbose SSL logging to identify the TLS version in use; this indicated that TLSv1 was being used for the handshake.
-Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager -Djava.security.debug=access:stack
I resolved this by pushing the feature out to our JDK 8-compatible product, since JDK 8 defaults to TLSv1.2. For those restricted to JDK 7, I also successfully tested a workaround for Java 7 by upgrading to TLSv1.2. I used this answer: How to enable TLS 1.2 in Java 7
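For reference, a sketch of the two commonly suggested Java 7 approaches; note that the system property only affects HttpsURLConnection-based clients, and other stacks need the SSLContext route:

// Option 1: JVM-wide for HttpsURLConnection (equivalent to -Dhttps.protocols=TLSv1.2)
System.setProperty("https.protocols", "TLSv1.2");

// Option 2: build a TLSv1.2 SSLContext explicitly and install its socket factory
SSLContext ctx = SSLContext.getInstance("TLSv1.2");
ctx.init(null, null, null);                                   // default key and trust managers
HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());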
I also had this problem with a Java program trying to send a command to a server via SSH. The problem was with the machine executing the Java code: it didn't have permission to connect to the remote server. The write() method was doing fine, but the read() method was throwing a java.net.SocketException: Connection reset. I fixed the problem by adding the client's SSH key to the remote server's known keys.
In my case it was a DNS problem.
I put the resolved IP in the hosts file and everything worked fine.
Of course this is not a permanent solution, but it gave me time to fix the DNS problem.
In my experience, I often encounter the following situations:
If you work at a corporation, contact the network and security team, because requests to external services may require permission to be granted for the relevant endpoint.
Another issue is that the SSL certificate may have expired on the server where your application is running.
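If you suspect an expired certificate, here is a quick hedged check from the JVM (the host name is hypothetical); note that with the default trust manager an expired certificate will usually make the handshake itself fail, which already answers the question:

SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
try (SSLSocket socket = (SSLSocket) factory.createSocket("api.example.com", 443)) {   // hypothetical host
    socket.startHandshake();                                   // throws if the certificate is rejected
    X509Certificate cert =
            (X509Certificate) socket.getSession().getPeerCertificates()[0];
    System.out.println("Server cert valid from " + cert.getNotBefore()
            + " until " + cert.getNotAfter());
}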
I've seen this problem. In my case, the error was caused by reusing the same ClientRequest object in a specific Java class. That project was using JBoss RESTEasy.
Initially, only one method used the ClientRequest object (kept as a global variable in the class) to make a request to a specific URL.
After that, another method was created to get data from another URL, reusing the same ClientRequest object.
The solution: another ClientRequest object was created in the same class, exclusively for that call, so the first one was no longer reused.
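A rough sketch of the fix, under the assumption that the legacy RESTEasy 2.x ClientRequest API was in use (the URLs and method names here are illustrative, not the original project's):

// Before: a single ClientRequest field shared by several methods (problematic).
// After: each method builds its own request, so state is never reused.
public String fetchOrders() throws Exception {
    ClientRequest request = new ClientRequest("http://example.com/api/orders");    // illustrative URL
    ClientResponse<String> response = request.get(String.class);
    return response.getEntity();
}

public String fetchCustomers() throws Exception {
    ClientRequest request = new ClientRequest("http://example.com/api/customers"); // separate request object
    ClientResponse<String> response = request.get(String.class);
    return response.getEntity();
}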
In my case it was problem with TSL version. I was using Retrofit with OkHttp client and after update ALB on server side I should have to delete my config with connectionSpecs:
OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();
List<ConnectionSpec> connectionSpecs = new ArrayList<>();
connectionSpecs.add(ConnectionSpec.COMPATIBLE_TLS);
// clientBuilder.connectionSpecs(connectionSpecs);
So try removing or adding this config to use different TLS configurations.
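If, instead of dropping the config entirely, you want to be explicit, here is a hedged sketch with OkHttp 3.x pinning the client to TLS 1.2:

// Restrict the client to modern TLS rather than the permissive COMPATIBLE_TLS list
ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
        .tlsVersions(TlsVersion.TLS_1_2)
        .build();

OkHttpClient client = new OkHttpClient.Builder()
        .connectionSpecs(Collections.singletonList(spec))
        .build();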
I used to get the 'NotifyUtil::java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:...' message in the Apache console of my NetBeans 7.4 setup.
I tried many solutions to get rid of it; what worked for me was enabling TLS on Tomcat.
Here is how:
Create a keystore file to store the server's private key and
self-signed certificate by executing the following command:
Windows:
"%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA
Unix:
$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
and specify a password value of "changeit".
As per https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html
(This will create a .keystore file in your localuser dir)
Then edit the server.xml file (%CATALINA_HOME%apache-tomcat-7.0.41.0_base\conf\server.xml), uncommenting and editing the relevant lines, to enable SSL and the TLS protocol:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" keystorePass="changeit" />
I hope this helps
I am trying to send a message to the Qpid broker over the AMQP 1.0 protocol. The queue is named queue2 and it has already been created under the default virtualhost. However, producer.send(message) gets stuck forever. The same code works for connecting to Azure Service Bus. I'm using qpid-jms-client 0.58. The producer code is:
Hashtable<String, String> hashtable = new Hashtable<>();
hashtable.put("connectionfactory.myFactoryLookup", protocol + "://" + url + "?amqp.idleTimeout=120000&amqp.traceFrames=true");
hashtable.put("queue.myQueueLookup", queueName);
hashtable.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
Context context = new InitialContext(hashtable);
ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
queue = (Destination) context.lookup("myQueueLookup");
Connection connection = factory.createConnection(username, password);
connection.setExceptionListener(new AmqpConnectionFactory.MyExceptionListener());
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// session.createQueue("queue3");
Queue queue = new JmsQueue("queue2");
MessageProducer messageProducer = session.createProducer(queue);
TextMessage textMessage = session.createTextMessage("new message");
messageProducer.send(textMessage);
I can see on the Qpid broker dashboard that the connection and session are successfully established.
Thread dump for the application at the time of producing:
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000078327c550> (a org.apache.qpid.jms.provider.ProgressiveProviderFuture)
at java.lang.Object.wait(Object.java:502)
at org.apache.qpid.jms.provider.ProgressiveProviderFuture.sync(ProgressiveProviderFuture.java:154)
- locked <0x000000078327c550> (a org.apache.qpid.jms.provider.ProgressiveProviderFuture)
at org.apache.qpid.jms.JmsConnection.send(JmsConnection.java:773)
at org.apache.qpid.jms.JmsNoTxTransactionContext.send(JmsNoTxTransactionContext.java:37)
at org.apache.qpid.jms.JmsSession.send(JmsSession.java:964)
at org.apache.qpid.jms.JmsSession.send(JmsSession.java:843)
at org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:252)
at org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:182)
I have tried to run this example, which gave the same result.
In general, if the client is not sending, it is because the remote has not granted it credit to do so. You can debug the client state using the protocol trace feature (just set PN_TRACE_FRM=true and run the client).
Likely you have misconfigured the Broker-J somehow, and the destination you've created doesn't allow any messages to be sent, or you've sent enough that you've tripped the write limit. You should consult the configuration guide and review what you've already set up.
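Separately, if you want the producer to fail fast instead of blocking forever while you investigate, qpid-jms exposes a send timeout as a connection URI option; a sketch against the question's own factory lookup (the 30000 ms value is arbitrary):

// jms.sendTimeout (milliseconds) makes a blocked synchronous send fail instead of waiting forever
hashtable.put("connectionfactory.myFactoryLookup",
        protocol + "://" + url + "?amqp.idleTimeout=120000&jms.sendTimeout=30000");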
Okay, I finally found the issue: the filesystem was over 90 percent full, which enforces flow control. I deleted files from my machine and it started working.
https://qpid.apache.org/releases/qpid-broker-j-7.0.7/book/Java-Broker-Runtime-Disk-Space-Management.html
I'm using IBM MQ version 8.0.0.0 in a .NET application written in C#. I'm trying to read messages from a queue using the code below.
.....
Hashtable props = new Hashtable();
props.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
props.Add(MQC.CONNECT_OPTIONS_PROPERTY, MQC.MQCNO_RECONNECT_Q_MGR); // Reconnect option
openOptions = MQC.MQOO_INPUT_SHARED | MQC.MQOO_FAIL_IF_QUIESCING;
queueManager = new MQQueueManager(queueManagerName, props);
this.queue = queueManager.AccessQueue(queueName, openOptions);
....
MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_FAIL_IF_QUIESCING
| MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT;
gmo.MatchOptions = MQC.MQMO_NONE;
gmo.WaitInterval = 5000; // I'm specifying this
var message = new MQMessage();
this.queue.Get(message, gmo); // Waits here forever in case connection is lost to IBM MQ.
.........
.........
Now, if there is a loss of connectivity to the MQ server AFTER the connection is established but BEFORE a queue.Get() call is issued, I'm seeing that the Get() call waits forever and doesn't return, despite the specified wait interval.
Also, I observed that as soon as connectivity is restored, the .Get() call returns immediately with the message that it has read from the queue.
Am I doing something wrong?
Edit:
Added the queueManager creation code with the properties, one of which instructs the client to reconnect, if possible, to the same queue manager.
From this observation:
Also, I observed that as soon as connectivity is restored, the .Get() call returns immediately with the message that it has read from the queue.
The connection to the queue manager was lost while the Get call was in progress, so the MQ .NET client is attempting to reconnect. While reconnection attempts are going on, the application will see the method call as 'hanging'. This is normal. So the question is: have you enabled automatic reconnection in your application? Please show the complete code.
Update
It's expected behavior, because the Get call is internally attempting to reconnect to the queue manager. You can:
1) Reduce the reconnection timeout in the mqclient.ini file. An example below:
Channels:
MQReconnectTimeout = 100
2) Check why the queue manager is down and bring it up.
I've been tasked with evaluating activemq-artemis for JMS clients. I have RabbitMQ experience, but none with activemq-artemis/JMS.
I installed Artemis on my local machine, created a new broker per the instructions, and set it up as a Windows service. The Windows service starts and stops just fine. I've made no changes to the broker.xml file.
For my first test I'm trying to perform a JMS queue produce/consume from a standalone Java program. I'm using the code from the Artemis User Manual in the Using JMS section (without using JNDI):
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName());
ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF,transportConfiguration);
Queue orderQueue = ActiveMQJMSClient.createQueue("OrderQueue");
Connection connection = cf.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(orderQueue);
MessageConsumer consumer = session.createConsumer(orderQueue);
connection.start();
TextMessage message = session.createTextMessage("This is an order");
producer.send(message);
TextMessage receivedMessage = (TextMessage)consumer.receive();
System.out.println("Got order: " + receivedMessage.getText());
When I run this code, I get the following error:
WARN: AMQ212054: Destination address=jms.queue.OrderQueue is blocked. If the system is configured to block make sure you consume messages on this configuration.
My research hasn't been conclusive on whether this is a server-side setting or a matter of having the producer send without blocking. I haven't been able to find a producer send method that takes a blocking boolean, only persistence. Any ideas on where to focus? Thanks.
Edit: a new address-setting element was added to broker.xml, dedicated to this queue:
<address-setting match="jms.queue.OrderQueue">
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
</address-setting>
I found this on further research in the user manual:
max-disk-usage The max percentage of data we should use from disks.
The System will block while the disk is full. Default=100
and in the log after service startup with no messages published yet:
WARN [org.apache.activemq.artemis.core.server] AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers.
So I think that no matter what my address settings were, it would start to block. Looking at the max-disk-usage setting in broker.xml, it was set to 90. The documentation says the default is 100; I set it to that, got no startup log warnings, and my test pub/sub code now works.
This warning message appears when the address-full policy is set to BLOCK and the memory limit has been reached. Check the address-full policy set in broker.xml. If it is set to BLOCK, change it to PAGE, or consume the pending messages from OrderQueue.
By default the max-disk-usage value is set to 90 (%), and if the remaining free space is less than 10%, this warning will be shown and no messages will be accepted until you adjust the parameter or free up space beyond that threshold.
This is a follow-up to: Can create Websphere Queue Manager but not connect
I'm trying to set up MQ on a development machine, but if I try to connect to it using my domain account it's unable to authenticate (AMQ4999). Digging a little further I find this in the error logs:
AMQ8079: Access was denied when attempting to retrieve group membership
information for user 'xxx#domain'.
Now I'm well aware of the known issue with MQ where it fails to authenticate domain accounts since it's unable to access their member information, and have confirmed from the logs that this is definitely what's happening here, so I tried overriding this using the following script gleaned from the previous post:
DEFINE CHL('DOTNET.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('MUSR_MQADMIN#hostname')
SET CHLAUTH('DOTNET.SVRCONN') TYPE(BLOCKUSER) USERLIST('nobody')
SET CHLAUTH('DOTNET.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) ACTION(ADD)
However, even with this channel in place I still cannot connect to the queue manager while logged into my domain account. I'm still plagued with the exact same error I was getting previously. One thing I did notice was that MQ Explorer reports the channel as inactive even though I started it (although judging by my reading from IBM's website this is normal).
I'm still very new to MQ so I think I'm either missing something or did something wrong, but ideally I would like to be able to set up a dev environment where I can hit the service without having to rely on the 'runas' command. I should also emphasize that this is strictly for dev/learning so obviously I'm not concerned about security.
Update:
I found out what I was doing wrong -- sure enough, I was missing a step. A little more background: upon creating the QM, I was trying to connect to it using a simple C# client. Originally I wrote code that looked like this:
var queueManager = new MQQueueManager("MyQueueManager", MQC.MQCNO_STANDARD_BINDING);
Both this and connecting via MQ Explorer appear to use my domain credentials to authenticate. However, when I explicitly created a properties object and specified the channel, like so:
var props = new Hashtable() {
[MQC.HOST_NAME_PROPERTY] = "localhost",
[MQC.PORT_PROPERTY] = 1414,
[MQC.CHANNEL_PROPERTY] = "DOTNET.SVRCONN",
[MQC.USER_ID_PROPERTY] = "DevMQUser",
[MQC.PASSWORD_PROPERTY] = "p#$$w0rd"
};
var queueManager = new MQQueueManager("MyQueueManager", props);
Then everything worked correctly. I still need to run MQExplorer.exe as a local user (even explicitly setting credentials in Connection Details > Properties doesn't seem to work), but this isn't a big deal.
Thanks for the suggestions.
Try changing...
SET CHLAUTH('DOTNET.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL)
To...
SET CHLAUTH('DOTNET.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(MAP) MCAUSER(MUSR_MQADMIN)
The USERSRC(CHANNEL) option says to take the ID that is presented to the channel, in this case the ID of your locally logged-in account's process, and use it to override the MCAUSER.
MQ Security diagnostics
For connectivity issues over channels, grab SupportPac MS0P and install it into MQ Explorer. Then turn on Authorization Events and Channel Events and recreate the problem. If the connection is blocked by a CHLAUTH record, this shows up in the Channel Event queue. If it is blocked by the OAM, it shows up in the QMgr Event queue. From Explorer with MS0P installed, right-clicking on the queue name in the Queues panel opens a context dialog that includes "Format event messages" as an option. Select it and MS0P will parse the PCF message into human-readable values that show all the parameters that were presented to MQ and why it blocked the connection.
IBM MQ v8
If this is v8 of MQ, you also have ID and password checking to configure. If the QMgr points to an AUTHINFO record that specifies ID and password checking (IDPWOS), the password can't be blank if the ID is set. Even if password authentication is set to OPTIONAL, the check will be made if an ID is present on the channel, which the client code will ensure is true unless specifically overridden.