SonarLint multiple closes - Java 6

With this code:
Connection connection = null;
PreparedStatement req = null;
try {
connection = DriverManager.getConnection(url, user, password);
req = connection.prepareStatement(SQL);
} finally {
if (connection != null) {
connection.close();
}
if (req != null) {
req.close();
}
}
SonarLint says:
Close this "PreparedStatement" in a "finally" clause on line 5 (req = ...)
And when I close req first:
Close this "Connection" in a "finally" clause on line 4 (connection = ...)
How can I make SonarLint happy?

Assuming you are using java.sql.Connection, your code can still end up with resources left unclosed at the end of the execution.
If you look at the Connection.close() method signature in the Java 6 javadoc, you will see that it can throw a SQLException. Consequently, since you are already in the finally block, if an exception occurs while closing the connection, your code exits the method without ever closing the statement.
Now, if you invert the close order and start with the statement, the same thing can happen: calling close() can fail, and then the connection is never closed, because from the finally block you once again jump directly out of the method.
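The skipped close is easy to reproduce without a database. In this sketch (the names and the RuntimeException standing in for SQLException are illustrative only), the first close failure makes the finally block exit before the second close ever runs:

```java
public class FinallySkipDemo {

    // Stand-in for a JDBC close() that fails (RuntimeException instead of SQLException)
    static void close(String name) {
        System.out.println("closing " + name);
        throw new RuntimeException(name + ".close() failed");
    }

    public static void main(String[] args) {
        try {
            try {
                System.out.println("work");
            } finally {
                close("connection"); // throws, so...
                close("req");        // ...this line is never executed
            }
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```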
In order to close both resources properly, I would recommend dealing with it like this:
Connection connection = null;
try {
connection = DriverManager.getConnection(url, user, password);
PreparedStatement req = null;
try {
req = connection.prepareStatement(sql);
} finally {
if (req != null) {
req.close();
}
}
} finally {
if (connection != null) {
connection.close();
}
}
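For reference (this requires Java 7+, so it does not help on Java 6 itself): try-with-resources generates exactly this kind of nesting for you, closes the resources in reverse declaration order, and attaches close() failures as suppressed exceptions instead of losing them. A sketch, with the URL and SQL as placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TryWithResourcesSketch {

    // Both resources are closed automatically, in reverse declaration order,
    // even if the body or an earlier close() throws.
    static void run(String url, String user, String password, String sql) throws SQLException {
        try (Connection connection = DriverManager.getConnection(url, user, password);
             PreparedStatement req = connection.prepareStatement(sql)) {
            req.execute();
        }
    }

    public static void main(String[] args) {
        try {
            run("jdbc:nosuchdriver:demo", "user", "password", "SELECT 1");
        } catch (SQLException e) {
            // Expected here: no driver is registered for the placeholder URL
            System.out.println("SQLException as expected");
        }
    }
}
```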

Related

Oracle connection timeout inside an Azure Function that cancels at 5 mins

I have the following lines of code inside a Java function:
try{
context.getLogger().info("Paso001");
Class.forName("oracle.jdbc.driver.OracleDriver");
context.getLogger().info("Paso002");
Connection conn = DriverManager.getConnection(
params.get().getConnection(), params.get().getUser(), params.get().getPassword());
if (conn != null) {
context.getLogger().info("Connected to the database!");
} else {
context.getLogger().log(Level.SEVERE, "No connection to the database!");
return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR).body("Error").build();
}
context.getLogger().info("Paso003");
PreparedStatement sentencia = conn.prepareStatement(params.get().getSentence());
int index = 0;
for (Param param : params.get().getParams()) {
index++;
if (param.getType().equals("String")) {
sentencia.setString(index, param.getValue());
} else {
sentencia.setInt(index, Integer.parseInt(param.getValue()));
}
}
ResultSet rs=sentencia.executeQuery();
JSONArray result = JsonHelper.recordList2Json(rs);
context.getLogger().info(result.toString());
return request.createResponseBuilder(HttpStatus.OK).body(result.toString()).build();
} catch(Exception e)
{
context.getLogger().info("Paso00-err");
context.getLogger().log(Level.SEVERE, e.toString());
}
Logging only shows "Paso001" and "Paso002", and the connection fails at 300000 ms (5 minutes); since no "Paso00-err" is shown in the logs, I assume that the Azure Function is reaching its maximum time.
Azure Function is inside a VNET integration and DATABASE is inside another local NET behind an ExpressRoute.
I have assumed that the firewall is correct because opening a socket to Host:Port inside the function seems OK:
InetAddress IPv4 = null;
try {
IPv4 = InetAddress.getByName(connect.get().getHost());
} catch (UnknownHostException e) {
result = e.toString();
e.printStackTrace();
return request.createResponseBuilder(HttpStatus.OK).body(result.toString()).build();
}
try {
Socket s = new Socket(IPv4, Integer.parseInt(connect.get().getPort()));
result = "Server is listening on port " + connect.get().getPort()+ " of " + connect.get().getHost();
context.getLogger().info(result);
s.close();
}
catch (IOException ex) {
// The remote host is not listening on this port
result = "Server is not listening on port " + connect.get().getPort()+ " of " + connect.get().getHost();
context.getLogger().info(result);
}
Result gets: "Server is listening on port port of host host"
Note: I get the same error pointing to a public database installed locally.
Is there anything else that needs to be opened? Any ideas?
Edit: I have rewritten code with .NET CORE 3.11...
using (OracleCommand cmd = con.CreateCommand())
{
try
{
log.LogInformation("step001");
con.Open();
log.LogInformation("step002");
cmd.BindByName = true;
cmd.CommandText = sentence;
OracleDataReader reader = cmd.ExecuteReader();
log.LogInformation("step003");
return new OkObjectResult(reader2Json(reader));
}
catch (Exception ex)
{
return new OkObjectResult("Error: "+ex.ToString());
}
}
and similar results, but this time an exception is thrown:
Error: Oracle.ManagedDataAccess.Client.OracleException (0x80004005): Connection request timed out
at OracleInternal.ConnectionPool.PoolManager`3.Get(ConnectionString csWithDiffOrNewPwd, Boolean bGetForApp, OracleConnection connRefForCriteria, String affinityInstanceName, Boolean bForceMatch)
at OracleInternal.ConnectionPool.OraclePoolManager.Get(ConnectionString csWithNewPassword, Boolean bGetForApp, OracleConnection connRefForCriteria, String affinityInstanceName, Boolean bForceMatch)
at OracleInternal.ConnectionPool.OracleConnectionDispenser`3.Get(ConnectionString cs, PM conPM, ConnectionString pmCS, SecureString securedPassword, SecureString securedProxyPassword, OracleConnection connRefForCriteria)
at Oracle.ManagedDataAccess.Client.OracleConnection.Open()
at Oracli2.Function1.Run(HttpRequest req, ILogger log) in C:\proy\vscode\dot2\Oracli2\Oracli2\Function1.cs:line 50
You can increase the function timeout in the host.json file, just so you are aware of that, but I don't think increasing it will fix your issue; 5 minutes is a generous time, unless the query you are running does in fact take longer than 5 minutes to return!
Can you set retry_count and retry_delay in your connection string to something small (e.g. 3 tries), so you know the timeout is not caused by doing 100 retries that hide the actual underlying error?
Other issues could be connectivity-related. Your best bet would be to go into the Kudu console for the function app, open up SSH, and see if you can connect to your Oracle DB and run a test query from there; if it all works from there, then connectivity is not the issue.
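For example, with the ODP.NET managed driver the retry settings go into the TNS-style descriptor used as the Data Source. A sketch (host, port, service name and credentials below are placeholders, not values from the question):

```
Data Source=(DESCRIPTION=
    (CONNECT_TIMEOUT=10)(RETRY_COUNT=3)(RETRY_DELAY=2)
    (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dbservice)));User Id=myuser;Password=mypassword;
```

With a small retry budget like this, a genuine connectivity problem surfaces after roughly CONNECT_TIMEOUT x (RETRY_COUNT + 1) seconds instead of eating the whole function timeout.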

Implement `Process.waitFor(long timeout, TimeUnit unit)` in Java 6

I am working on a legacy (Java 6/7) project that uses ProcessBuilder to request a UUID from the machine in an OS-agnostic way. I would like to use the Process.waitFor(long timeout, TimeUnit unit) method from Java 8, but this isn't implemented in Java 6. Instead, I can use waitFor(), which blocks until completion or an error.
I would like to avoid upgrading the version of Java used to 8 if possible as this necessitates a lot of other changes (migrating code away from removed internal APIs and upgrading a production Tomcat server, for example).
How can I best implement the code for executing the process, with a timeout? I was thinking of somehow implementing a schedule that checks if the process is still running and cancelling/destroying it if it is and the timeout has been reached.
My current (Java 8) code looks like this:
/** USE WMIC on Windows */
private static String getSystemProductUUID() {
String uuid = null;
String line;
List<String> cmd = new ArrayList<String>() {{
add("WMIC.exe"); add("csproduct"); add("get"); add("UUID");
}};
BufferedReader br = null;
Process p = null;
SimpleLogger.debug("Attempting to retrieve Windows System UUID through WMIC ...");
try {
ProcessBuilder pb = new ProcessBuilder().directory(getExecDir());
p = pb.command(cmd).start();
if (!p.waitFor(TIMEOUT, SECONDS)) { // No timeout in Java 6
throw new IOException("Timeout reached while waiting for UUID from WMIC!");
}
br = new BufferedReader(new InputStreamReader(p.getInputStream()));
while ((line = br.readLine()) != null) {
if (null != line) {
line = line.replace("\t", "").replace(" ", "");
if (!line.isEmpty() && !line.equalsIgnoreCase("UUID")) {
uuid = line.replace("-", "");
}
}
}
} catch (IOException | InterruptedException ex) {
uuid = null;
SimpleLogger.error(
"Failed to retrieve machine UUID from WMIC!" + SimpleLogger.getPrependedStackTrace(ex)
);
// ex.printStackTrace(System.err);
} finally {
if (null != br) {
try {
br.close();
} catch (IOException ex) {
SimpleLogger.warn(
"Failed to close buffered reader while retrieving machine UUID!"
);
}
}
if (null != p) {
p.destroy(); // destroy the process even if the reader was never opened
}
}
return uuid;
}
You can use the following code which only uses features available under Java 6:
public static boolean waitFor(Process p, long t, TimeUnit u) {
ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
final AtomicReference<Thread> me = new AtomicReference<Thread>(Thread.currentThread());
ScheduledFuture<?> f = ses.schedule(new Runnable() {
@Override public void run() {
Thread t = me.getAndSet(null);
if(t != null) {
t.interrupt();
me.set(t);
}
}
}, t, u);
try {
p.waitFor();
return true;
}
catch(InterruptedException ex) {
return false;
}
finally {
f.cancel(true);
ses.shutdown();
// ensure that the caller doesn't get a spurious interrupt in case of bad timing
while(!me.compareAndSet(Thread.currentThread(), null)) Thread.yield();
Thread.interrupted();
}
}
Note that unlike other solutions you can find elsewhere, this performs the Process.waitFor() call within the caller's thread, which is what you would expect when looking at the application with a monitoring tool. It also helps performance for short-running sub-processes, as the caller thread does not need to do much more than the Process.waitFor() itself, i.e. it does not have to wait for the completion of background threads. Instead, all that happens in the background thread is the interruption of the initiating thread once the timeout elapses.
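A minimal way to exercise the helper (a sketch assuming a POSIX sleep command on the PATH; the 1-second timeout and 5-second child process are arbitrary choices, and the helper is reproduced so the demo is self-contained):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class WaitForDemo {

    // Same helper as above: interrupts the waiting thread when the timeout elapses
    public static boolean waitFor(Process p, final long timeout, final TimeUnit unit) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        final AtomicReference<Thread> me = new AtomicReference<Thread>(Thread.currentThread());
        ScheduledFuture<?> f = ses.schedule(new Runnable() {
            @Override public void run() {
                Thread waiter = me.getAndSet(null);
                if (waiter != null) {
                    waiter.interrupt();
                    me.set(waiter);
                }
            }
        }, timeout, unit);
        try {
            p.waitFor();
            return true;
        } catch (InterruptedException ex) {
            return false;
        } finally {
            f.cancel(true);
            ses.shutdown();
            while (!me.compareAndSet(Thread.currentThread(), null)) Thread.yield();
            Thread.interrupted(); // clear a possible late interrupt
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "5").start();
        boolean finished = waitFor(p, 1, TimeUnit.SECONDS);
        if (!finished) {
            p.destroy(); // the helper only waits; killing the child is the caller's job
        }
        System.out.println("finished=" + finished);
    }
}
```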

Android BluetoothLE Disconnection issue

I'm rebuilding a BLE app that will communicate with a Bluetooth device.
The code I found had this odd method, called after closing the connection with
bluetoothGatt.disconnect();
which will call the onStateChangeCallback.
The method is this:
private void refreshDeviceCache(final BluetoothGatt gatt) {
int cnt = 0;
boolean success = false;
try {
if (gatt != null) {
final Method refresh = gatt.getClass().getMethod("refresh");
if (refresh != null) {
success = (Boolean) refresh.invoke(gatt);
while (!success && cnt < 100) {
success = (Boolean) refresh.invoke(gatt);
cnt++;
}
Log.e(TAG, "retry refresh : " + cnt + " " + success);
}
}
} catch (Exception e) {
Log.e(TAG, "5", e);
}
}
I can't fully understand what this code does, but in effect it slows down the next connection after a disconnection. It does not slow down the disconnection itself.
I really can't understand this, because after I get BluetoothProfile.STATE_DISCONNECTED, I close the bluetoothGatt, and in the BroadcastReceiver I unbind the service and close the service itself.
On the connection phase, the service is recreated.
Which line of that code, run on disconnection, may slow down the connection? Please help me out with this.

How to close the connection properly?

This question has been asked multiple times and there are a lot of resources that talk about it. But it still makes me worry, because I think close() is not working properly.
PreparedStatement pstmt = null;
try {
pstmt = conn.prepareStatement(query);
...
pstmt.close();
conn.close();
} catch(...) {
...
} finally {
if(pstmt != null) {
try {
pstmt.close();
} catch (SQLException e) {
pstmt = null;
}
}
if(conn != null) {
try {
conn.close();
} catch (SQLException e) {
conn = null;
}
}
System.out.println("PreparedStatement: " + pstmt);
System.out.println("Connection: " + conn);
}
So I expected that it would print out null, but it keeps printing out the query string and the connection path to the database.
Your code here:
try {
PreparedStatement pstmt = null;
creates a pstmt that is not visible in your finally block, as it is a local variable in another scope.
You probably have another pstmt somewhere outside of your try/catch block, which is messing up your check within the finally block.
Try commenting out PreparedStatement pstmt = null; in your try block to see if your code still builds; that will help you identify where exactly you have the overriding declaration for pstmt.
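The effect described here can be reproduced in isolation (a sketch with plain Strings instead of JDBC objects; all names are made up): an outer pstmt declared as a field is shadowed by a local one, and the two are easy to confuse:

```java
public class ShadowDemo {

    static String pstmt = "field value"; // the "other pstmt somewhere outside"

    public static void main(String[] args) {
        String pstmt = "local value";    // shadows the field within main
        System.out.println(pstmt);            // the local wins here
        System.out.println(ShadowDemo.pstmt); // the field is untouched
    }
}
```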
EDIT:
Closing a prepared statement or connection does not mean that its values will be reset. Your connection and prepared statement are indeed closed now; since you are not setting them to null, the data stored in the objects is still printed. That is not a problem.
Although it is not a big deal, you can also set conn and pstmt to null after closing them, which makes the objects eligible for garbage collection sooner. But remember: the connection is already closed by the close() call, whether or not you null out the reference.
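The same behaviour can be seen with any closeable resource, no database required. In this sketch, close() releases the resource, but the variable still references the (now closed) object until you assign null yourself:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseDemo {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        in.close();
        // close() does not null the reference; the object is simply closed
        System.out.println("after close, in == null ? " + (in == null));
        in = null; // only an explicit assignment clears the reference
        System.out.println("after assignment, in == null ? " + (in == null));
    }
}
```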

DD anomaly, and cleaning up database resources: is there a clean solution?

Here's a piece of code we've all written:
public CustomerTO getCustomerByCustDel(final String cust, final int del)
throws SQLException {
final PreparedStatement query = getFetchByCustDel();
ResultSet records = null;
try {
query.setString(1, cust);
query.setInt(2, del);
records = query.executeQuery();
return this.getCustomer(records);
} finally {
if (records != null) {
records.close();
}
query.close();
}
}
If you omit the finally block, then you leave database resources dangling, which is obviously a potential problem. However, if you do what I've done here - set the ResultSet to null outside the try block, and then set it to the desired value inside the block - PMD reports a 'DD anomaly'. In the documentation, a DD anomaly is described as follows:
DataflowAnomalyAnalysis: The dataflow analysis tracks local definitions, undefinitions and references to variables on different paths on the data flow. From those informations there can be found various problems. [...] DD - Anomaly: A recently defined variable is redefined. This is ominous but don't have to be a bug.
If you declare the ResultSet outside the block without setting a value, you rightly get a 'variable might not have been initialised' error when you do the if (records != null) test.
Now, in my opinion, my use here isn't a bug. But is there a way of rewriting this cleanly that would not trigger the PMD warning? I don't particularly want to disable PMD's DataflowAnomalyAnalysis rule, as identifying UR and DU anomalies would actually be useful; but these DD anomalies make me suspect I could be doing something better - and if there's no better way of doing this, they amount to clutter (and I should perhaps look at whether I can rewrite the PMD rule).
I think this is clearer:
PreparedStatement query = getFetchByCustDel();
try {
query.setString(1, cust);
query.setInt(2, del);
ResultSet records = query.executeQuery();
try {
return this.getCustomer(records);
} finally {
records.close();
}
} finally {
query.close();
}
Also, in your version the query doesn't get closed if records.close() throws an exception.
I think that DD anomaly warning is more a bug than a feature.
Also, the way you free resources is a bit incomplete; for example:
PreparedStatement pstmt = null;
Statement st = null;
try {
...
} catch (final Exception e) {
...
} finally {
try{
if (pstmt != null) {
pstmt.close();
}
} catch (final Exception e) {
e.printStackTrace(System.err);
} finally {
try {
if (st != null) {
st.close();
}
} catch (final Exception e) {
e.printStackTrace(System.err);
}
}
}
Moreover, even that is not quite right, because you should close resources like this:
PreparedStatement pstmt = null;
Throwable th = null;
try {
...
} catch (final Throwable e) {
<something here>
th = e;
throw e;
} finally {
if (th == null) {
if (pstmt != null) {
pstmt.close();
}
} else {
try {
if (pstmt != null) {
pstmt.close();
}
} catch (Throwable u) {
// suppressed: don't let a close() failure mask the original exception
}
}
}
