Should UdpClient be a singleton when using it for logging?

I'm using Graphite.NET for logging to statsd. Under the hood, it uses UdpClient to write to the statsd server (Source). I think it makes sense to create this as a singleton, because I will be logging frequently and there seems to be a lot of overhead in creating this client and connecting every time I want to log. Is there any downside to doing this? What happens if the connection gets interrupted: will an exception be thrown? Will my logger be recreated by StructureMap the next time I try to use it? Here's what my StructureMap configuration looks like:
x.For<IStatsDClientAdapter>()
.Singleton()
.Use<StatsDClientAdapter>()
.Ctor<string>("hostname").EqualToAppSetting("GraphiteHostname")
.Ctor<int>("port").EqualToAppSetting("GraphitePort")
.Ctor<string>("keyPrefix").EqualToAppSetting("GraphiteKeyPrefix");

Because the Graphite.NET StatsDClient instantiates UdpClient and calls Connect on it in its constructor, your ability to recover from initial connection exceptions is limited if you only make this call once (e.g. at application startup via a dependency-injected singleton, as you've done).
Using a singleton with this StatsDClient means you'd need to catch the exception and re-instantiate the StatsDClient if a connection issue occurred, in order to ensure your application was initialized correctly (i.e. with a working StatsDClient)... because, again, Connect is run in the constructor.
That said, if the StatsDClient initializes successfully (i.e. Connect doesn't throw an exception), you should be OK even if the server goes down afterward, because UDP is connectionless and StatsDClient handles/catches any exception that occurs on Send(). The client will just keep firing Sends at the IP and port that were established in the default connection, with no knowledge of whether the server is good or bad.
Too bad the Graphite.NET StatsDClient doesn't pass the IP and port to UdpClient.Send() (http://msdn.microsoft.com/en-us/library/acf44a1a.aspx) instead of using a default connection via the constructor, as this would make using a static member possible (you'd be able to construct usable StatsDClients under any conditions).
Long story short, in order to avoid getting your application into a bad state, I'd instantiate at usage time, as follows:
using (var statsdclient = new StatsDClient("my.statsd.host", 8125, "whatever.blah"))
{
    statsdclient.Increment("asdf");
}
Or, alternatively, fork StatsDClient and modify it to pass the IP and Port on the Send().

Related

Oracle JDBC intermittent connection reset SQLRecoverableException [duplicate]

I am getting a java.net.SocketException: Connection reset when trying to read from a socket. I'm doing a readInt() on that InputStream, and I am getting this error. Perusing the documentation suggests that the client side of the connection closed the connection. In this scenario, I am the server.
I have access to the client log files and it is not closing the connection, and in fact its log files suggest I am closing the connection. So does anybody have an idea why this is happening? What else to check for? Does this arise when there are local resources that are perhaps reaching thresholds?
I do note that I have the following line:
socket.setSoTimeout(10000);
just prior to the readInt(). There is a reason for this (long story), but I'm just curious: are there circumstances under which this might lead to the indicated error? I have the server running in my IDE, and I happened to leave the IDE stuck on a breakpoint; I then noticed the exact same errors begin appearing in my own logs in the IDE.
Anyway, just mentioning it, hopefully not a red herring. :-(
There are several possible causes.
The other end has deliberately reset the connection, in a way which I will not document here. It is rare, and generally incorrect, for application software to do this, but it is not unknown for commercial software.
More commonly, it is caused by writing to a connection that the other end has already closed normally. In other words an application protocol error.
It can also be caused by closing a socket when there is unread data in the socket receive buffer.
In Windows, 'software caused connection abort', which is not the same as 'connection reset', is caused by network problems sending from your end. There's a Microsoft knowledge base article about this.
Connection reset simply means that a TCP RST was received. This happens when your peer receives data that it can't process, and there can be various reasons for that.
The simplest is when you close the socket, and then write more data on the output stream. By closing the socket, you told your peer that you are done talking, and it can forget about your connection. When you send more data on that stream anyway, the peer rejects it with an RST to let you know it isn't listening.
In other cases, an intervening firewall or even the remote host itself might "forget" about your TCP connection. This could happen if you don't send any data for a long time (2 hours is a common time-out), or because the peer was rebooted and lost its information about active connections. Sending data on one of these defunct connections will cause a RST too.
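To make the second case concrete, here is a minimal, hypothetical Java sketch (loopback only, all names invented) that usually reproduces the reset: the accepting side closes immediately while the client keeps writing, and once the peer's RST arrives a later write fails.
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical demo: write to a peer that has already closed its end.
// A later write typically fails with java.net.SocketException: Connection reset
// (the exact message varies by platform).
public class ConnectionResetDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);          // ephemeral port
        new Thread(() -> {
            try {
                server.accept().close();                    // peer closes straight away
            } catch (IOException ignored) { }
        }).start();

        try (Socket client = new Socket("localhost", server.getLocalPort())) {
            OutputStream out = client.getOutputStream();
            for (int i = 0; i < 50; i++) {
                out.write(new byte[8192]);                  // keep writing after the peer is gone
                out.flush();
                Thread.sleep(100);                          // give the RST time to come back
            }
        } catch (IOException expected) {
            System.out.println("Got: " + expected);
        } finally {
            server.close();
        }
    }
}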
Update in response to additional information:
Take a close look at your handling of the SocketTimeoutException. This exception is raised if the configured timeout is exceeded while blocked on a socket operation. The state of the socket itself is not changed when this exception is thrown, but if your exception handler closes the socket, and then tries to write to it, you'll be in a connection reset condition. setSoTimeout() is meant to give you a clean way to break out of a read() operation that might otherwise block forever, without doing dirty things like closing the socket from another thread.
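As a sketch of that advice (the method and variable names here are assumed, not taken from the question): handle the timeout without closing the socket and then reusing it.
import java.io.DataInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch only: a read timeout leaves the socket in a usable state, so the
// handler should retry or give up cleanly rather than close the socket and
// keep writing to it afterwards.
static int readWithTimeout(Socket socket, DataInputStream in) throws Exception {
    socket.setSoTimeout(10000);             // as in the question
    try {
        return in.readInt();
    } catch (SocketTimeoutException e) {
        // No data arrived within 10 seconds; the socket itself is still valid.
        // Retry, back off, or abandon the exchange here, but don't close the
        // socket and then continue using it, or the peer will see a reset.
        return -1;                          // illustrative sentinel value
    }
}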
Whenever I have had odd issues like this, I usually sit down with a tool like Wireshark and look at the raw data being passed back and forth. You might be surprised where things are actually being disconnected, since you are only notified when you try to read.
You should inspect the full stack trace very carefully.
I have a server socket application and fixed a java.net.SocketException: Connection reset case.
In my case it happens while reading from a clientSocket Socket object that has closed its connection for some reason (network loss, a firewall, an application crash, or an intentional close).
Actually, I was re-establishing the connection when I got an error while reading from this Socket object.
Socket clientSocket = serverSocket.accept();
is = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
int readed = is.read(); // WHERE ERROR STARTS !!!
The interesting thing is that, for my Java socket, if a client connects to my ServerSocket and closes its connection without sending anything, is.read() is called repeatedly. Because the read sits in an infinite while loop, you end up trying to read from a closed connection.
If you use something like the following for the read operation:
while (true)
{
    Receive();
}
Then you get a stack trace like the one below, over and over:
java.net.SocketException: Socket is closed
    at java.net.ServerSocket.accept(ServerSocket.java:494)
What I did was simply close the ServerSocket, renew my connection, and wait for further incoming client connections. The Receive() method below triggers the reconnect when the read fails:
String Receive() throws Exception
{
    try {
        int readed = is.read();
        // ...
    } catch (Exception e) {
        tryReConnect();
        logit(); // etc.
    }
    // ...
}
This re-establishes my connection after unexpected client socket losses:
private void tryReConnect()
{
    try
    {
        int port = serverSocket.getLocalPort(); // remember the listening port before closing
        serverSocket.close();
        // drop my old, lost connection and let it be garbage collected immediately
        clientSocket = null;
        System.gc();
        // re-open the listener (accept() cannot be called on a closed ServerSocket)
        // and wait for another client connection
        serverSocket = new ServerSocket(port);
        clientSocket = serverSocket.accept(); // waiting for another connection
        System.out.println("Connection established...");
    } catch (Exception e) {
        String message = "ReConnect not successful " + e.getMessage();
        logit(); // etc.
    }
}
I couldn't find another way, because you can't tell whether the connection has been lost without a try/catch: everything looks fine right up until the read fails. I observed this while I was getting Connection reset continuously.
Embarrassing to say it, but when I had this problem, it was simply a mistake: I was closing the connection before I had read all the data. In cases where small strings were returned it worked, but that was probably because the whole response was buffered before I closed it.
In cases where longer amounts of text were returned, the exception was thrown, since more than one buffer's worth was coming back.
You might check for this oversight. Remember that opening a URL is like opening a file: be sure to close it (release the connection) only once it has been fully read.
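As a hedged illustration of that check (HttpURLConnection and the names here are assumed, not from the original post): drain the response completely before releasing the connection.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: read the whole response body first, then disconnect.
static String fetch(String url) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    StringBuilder body = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream()))) {
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line).append('\n');   // consume everything before closing
        }
    } finally {
        conn.disconnect();                    // release only after a full read
    }
    return body.toString();
}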
I had the same error. I have found the solution to the problem now: the client program was finishing before the server had read the streams.
I had this problem with a SOA system written in Java. I was running both the client and the server on different physical machines and they worked fine for a long time, then those nasty connection resets appeared in the client log and there wasn't anything strange in the server log. Restarting both client and server didn't solve the problem. Finally we discovered that the heap on the server side was rather full so we increased the memory available to the JVM: problem solved! Note that there was no OutOfMemoryError in the log: memory was just scarce, not exhausted.
Check your server's Java version. This happened to me because my WebLogic 10.3.6 was on JDK 1.7.0_75, which defaults to TLSv1. The REST endpoint I was trying to consume was shutting down anything below TLSv1.2.
By default WebLogic was trying to negotiate the strongest shared protocol. See details here: Issues with setting https.protocols System Property for HTTPS connections.
I added verbose SSL logging to identify the TLS version in use; it showed TLSv1 being used for the handshake:
-Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager -Djava.security.debug=access:stack
I resolved this by pushing the feature out to our JDK 8-compatible product; JDK 8 defaults to TLSv1.2. For those restricted to JDK 7, I also successfully tested a workaround of upgrading to TLSv1.2. I used this answer: How to enable TLS 1.2 in Java 7.
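One common form of that Java 7 workaround (assumed here; it applies to clients that go through HttpsURLConnection) is to opt in to TLSv1.2 explicitly before the first HTTPS call:
// Java 7 only (Java 8 negotiates TLSv1.2 by default).
// Equivalent JVM flag: -Dhttps.protocols=TLSv1.2
System.setProperty("https.protocols", "TLSv1.2");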
I also had this problem with a Java program trying to send a command to a server via SSH. The problem was with the machine executing the Java code: it didn't have permission to connect to the remote server. The write() method was doing all right, but the read() method was throwing a java.net.SocketException: Connection reset. I fixed this problem by adding the client's SSH key to the remote server's known keys.
In my case it was a DNS problem.
I put the resolved IP in the hosts file and everything worked fine.
Of course that is not a permanent solution, but it gave me time to fix the DNS problem.
In my experience, I often encounter the following situations:
If you work at a corporate company, contact the network and security team, because requests to external services may need the relevant endpoint to be explicitly allowed.
Another issue is that the SSL certificate may have expired on the server where your application is running.
I've seen this problem. In my case, the error was caused by reusing the same ClientRequest object in a specific Java class. That project was using JBoss RESTEasy.
Initially only one method was using/invoking the ClientRequest object (kept as a global variable in the class) to make a request to a specific URL.
After that, another method was created to get data from another URL, but it reused the same ClientRequest object.
The solution: another ClientRequest object was created in the same class, used exclusively for that call and never reused.
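A hedged sketch against the old RESTEasy 2.x client API described above (the URL and method name are invented for illustration): each call builds its own ClientRequest instead of sharing a global one.
import org.jboss.resteasy.client.ClientRequest;
import org.jboss.resteasy.client.ClientResponse;

// Sketch only: one fresh ClientRequest per call, never shared across methods.
static String fetchUsers() throws Exception {
    ClientRequest request = new ClientRequest("http://example.com/api/users");
    ClientResponse<String> response = request.get(String.class);
    try {
        return response.getEntity();
    } finally {
        response.releaseConnection();   // free the underlying connection
    }
}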
In my case it was a problem with the TLS version. I was using Retrofit with an OkHttp client, and after an ALB update on the server side I had to delete my connectionSpecs config:
OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();
List<ConnectionSpec> connectionSpecs = new ArrayList<>();
connectionSpecs.add(ConnectionSpec.COMPATIBLE_TLS);
// clientBuilder.connectionSpecs(connectionSpecs);
So try removing or adding this config to use different TLS configurations.
I used to get the 'NotifyUtil::java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:...' message in the Apache console of my NetBeans 7.4 setup.
I tried many solutions to get rid of it; what worked for me was enabling TLS on Tomcat.
Here is how:
Create a keystore file to store the server's private key and self-signed certificate by executing the following command:
Windows:
"%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA
Unix:
$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
and specify a password value of "changeit".
As per https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html
(This will create a .keystore file in your local user's home directory.)
Then edit the server.xml file (%CATALINA_HOME%apache-tomcat-7.0.41.0_base\conf\server.xml), uncommenting and editing the relevant lines, to enable SSL and the TLS protocol:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" keystorePass="changeit" />
I hope this helps

Vertx - Closing connections - JDBC and others

I have a verticle which consumes a message from the event bus and processes it. I have a question as to when the JDBC connection should be closed. There are two approaches:
1. Close the connection once the message is processed. But this will be very expensive, because I will open/close a connection every time.
2. Trust that Vert.x will close the connection when the verticle is stopped/undeployed (which is literally never) and that there won't be any memory leaks as long as the connection is open. I would open the connection in the start() method, so that it is available whenever there is a message.
On the other hand, if I have an Elasticsearch backend and I am using the Elasticsearch SDK, which has a specific method to close the client, when should that connection really be closed?
Use a connection pool; that will take away most of the cost of closing/opening connections. When using a connection pool, closing the connection returns it to the pool for re-use.
The basic usage pattern is:
try (Connection connection = dataSource.getConnection()) {
// use connection
}
At the end of the block the connection is closed, which - if dataSource has a connection pool - will make it available for re-use.
You can always put your clean-up code in the stop() method of the Verticle interface. It will be called when the verticle starts its undeploy procedure.
See Vert.x Docs
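Putting the two answers together, here is a hedged Vert.x 3.x sketch (vertx-jdbc-client assumed; the config values and event-bus address are invented): create the pooled client once in start(), borrow and return connections per message, and release the client in stop().
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.jdbc.JDBCClient;

public class DbVerticle extends AbstractVerticle {
    private JDBCClient client;

    @Override
    public void start() {
        // Pooled client, created once per verticle (config is illustrative).
        client = JDBCClient.createShared(vertx, new JsonObject()
                .put("url", "jdbc:h2:mem:test")
                .put("max_pool_size", 10));

        vertx.eventBus().consumer("db.work", msg ->
                client.getConnection(res -> {
                    if (res.succeeded()) {
                        // ... run queries with res.result() ...
                        res.result().close();   // returns the connection to the pool
                    }
                }));
    }

    @Override
    public void stop() {
        if (client != null) {
            client.close();   // clean-up when the verticle is undeployed
        }
    }
}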

Connection timeout setting using resttemplate using closeableHttpclient

So I read this article, https://www.baeldung.com/httpclient-timeout, and it says that the connection timeout penalty is incurred once per IP if the DNS name of the underlying service that HttpClient connects to has multiple IPs configured for it.
So if I have a connection timeout set to 100ms and the called service's DNS name has 5 IPs mapped to it, then I am looking at a maximum connection timeout of 500ms, assuming the IP that works is the last one.
Is there a way to put a cap on this connection timeout regardless of the underlying service topology? As the client, I will always be agnostic to it.
As far as I understand, you don't have a concrete code path that runs into the 5-or-more-IPs situation; it is more curiosity. So here is my experience.
You're using RestTemplate, which by default uses SimpleClientHttpRequestFactory.
And as the definition of connection timeout goes:
The connection timeout is the timeout in making the initial connection, i.e. completing the TCP connection handshake and getting connected to the requested server.
So, as far as the theory goes: regardless of the underlying service topology, RestTemplate will try to make the connection within the connection timeout value.
To figure out the near-exact timeout in your case, you would have to run some latency tests and print the time RestTemplate takes to get a 200 OK.
Also, SimpleClientHttpRequestFactory internally uses HttpURLConnection, which has a default timeout of infinite (0/-1).
In rare cases the connection keeps trying until Thread.interrupt() is explicitly called to end it.
Thus it becomes vital to set your read-timeout and connect-timeout values; that way you cap the connection to the limits you defined.
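A minimal sketch of that advice (the timeout values are illustrative, not recommendations): set both timeouts on the default SimpleClientHttpRequestFactory and hand it to RestTemplate.
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

// Sketch: explicit connect/read timeouts instead of the infinite defaults.
SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(100);   // ms allowed for the TCP handshake
factory.setReadTimeout(1000);     // ms allowed waiting for data once connected
RestTemplate restTemplate = new RestTemplate(factory);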
Hope this helps.

Connection.close() on C3P0NativeJdbcExtractor closes the connection and removes it from the Pool

I am using C3P0NativeJdbcExtractor to extract the native JDBC connection as below.
public Connection getNativeConnection() throws SQLException {
    C3P0NativeJdbcExtractor nativeJdbc = new C3P0NativeJdbcExtractor();
    return nativeJdbc.getNativeConnection(dataSource.getConnection());
}
Note that the data source here is backed by a C3P0 connection pool. When I call close() on the Connection returned by this method, it actually closes the connection instead of returning it to the pool.
However, if we close the connection as handed out by the pool (without extracting the native one), it is returned to the pool.
Is there a reason why closing the extracted native connection fails to return the connection to the pool?
A connection pool like c3p0 holds a collection of physical ('native') connections created by a JDBC driver. When you ask it for a connection, it wraps a physical connection in a proxy, also known as the logical connection.
That proxy intercepts certain methods, such as Connection.close(). For close(), instead of closing the physical connection, it invalidates the logical connection so that it behaves as a closed connection, and it returns the physical connection to the connection pool.
Your code extracts the physical connection from the logical connection and returns that instead, so if you call close() on it, you actually close the connection to the database instead of returning it to the pool.
You should almost never have a reason to extract the native connection like that. The only reason is when you need access to driver-specific features. Use standard JDBC as much as possible, and only unwrap when you really need a driver-specific feature.
When you call close(), make sure you call it on the logical connection that you received from the connection pool, not on the unwrapped physical connection.
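A short hedged sketch of that rule (the helper method and the commented-out vendor class are hypothetical): do all work through, and close, the logical connection; unwrap only for driver-specific calls, and never close what you unwrapped.
import java.sql.Connection;
import javax.sql.DataSource;

// Sketch: close the pooled (logical) connection you were handed, never the
// physical one you unwrapped.
static void doWork(DataSource pooledDataSource) throws Exception {
    try (Connection logical = pooledDataSource.getConnection()) {
        // ordinary JDBC work goes through the logical (proxy) connection
        // Connection physical = logical.unwrap(oracle.jdbc.OracleConnection.class); // driver-specific features only
    }   // close() on the proxy returns the physical connection to the pool
}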

SFTP connection issue with DefaultSftpSessionFactory

I am trying to connect to one particular SFTP server with org.springframework.integration.sftp.session.DefaultSftpSessionFactory.
The connection is lost right after it is established.
Caused by: com.jcraft.jsch.JSchException: failed to send channel request
at com.jcraft.jsch.Request.write(Request.java:65)
at com.jcraft.jsch.RequestSftp.request(RequestSftp.java:47)
at com.jcraft.jsch.ChannelSftp.start(ChannelSftp.java:217)
at com.jcraft.jsch.Channel.connect(Channel.java:208)
at com.jcraft.jsch.Channel.connect(Channel.java:145)
at
With the same library I connect to several different SFTP servers without any issues.
Then I tried to connect from the command line with the command below, but it failed:
sftp -oIdentityFile=sftp_user_rsa -oUser=sftp_user sftp.zzzz.com
After quite a few tries with different parameters, I managed to connect by specifying a subsystem:
sftp -oIdentityFile=sftp_user_rsa -oUser=sftp_user
-s/usr/libexec/sftp-server sftp.zzzz.com
FileZilla also connects without any issues.
Under the hood, DefaultSftpSessionFactory uses an sftp channel and sets the subsystem to "sftp".
That part is hard-coded.
Is there any way to use a different subsystem with this library?
Many thanks.
After extending the library (Spring Integration SFTP) and the library it uses (JSch), it is still not working, even after changing RequestSftp in the source as follows and replacing the hard-coded subsystem:
public class RequestSftp extends Request {
    RequestSftp() {
        setReply(true);
    }

    public void request(Session session, Channel channel) throws Exception {
        super.request(session, channel);
        Buffer buf = new Buffer();
        Packet packet = new Packet(buf);
        packet.reset();
        buf.putByte((byte) Session.SSH_MSG_CHANNEL_REQUEST);
        buf.putInt(channel.getRecipient());
        buf.putString(Util.str2byte("subsystem"));
        buf.putByte((byte) (waitForReply() ? 1 : 0));
        buf.putString(Util.str2byte("the-new-subsystem"));
        write(packet);
    }
}
Please open an 'Improvement' JIRA issue. Currently, you will have to subclass (or clone) DefaultSftpSessionFactory and DefaultSftpSession, overriding the getSession() method in the factory. Unfortunately, connect() in the session is package-private, so you can't simply override that.
It looks like you would just have to change these two lines:
SftpSession sftpSession = new SftpSession(jschSession);
sftpSession.connect();
to:
MySftpSession sftpSession = new MySftpSession(jschSession);
sftpSession.connect(subsystem);
Here, subsystem would be a property on MySftpSessionFactory, and connect(String subsystem) would be similar to connect() on SftpSession.
However, I notice that getSession() uses some other private methods (such as initJschSession), so it might be easier to just copy the factory and give it a new name. I hate to suggest that, but it's not very friendly for subclassing right now. Please make a note of it in the JIRA issue (in addition to adding the subsystem, we should make the factory easier to subclass).
