We use Coverity to identify potential security and quality flaws in our Java code. In one of our unit tests, we have some testing that involves database connection code:
Connection connection = mock(Connection.class);
Statement statement = mock(Statement.class);
when(connection.createStatement()).thenReturn(statement);
Coverity complains about a potential resource leak:
CID 21920: Resource leak (RESOURCE_LEAK)
4. leaked_resource: Failing to save or close resource created by connection.createStatement()
My understanding of how Mockito works is that connection.createStatement() is never actually called, and so no statement is being created that needs to be closed later. (This is in contrast to the typical case in databases, where a JDBC connection would need to be closed.)
Is my understanding correct? Is it fair to say that this is a false report from Coverity, caused by the atypical behavior of createStatement() in the context of mocking? If not, please straighten me out.
I'd say your understanding isn't quite correct.
In your code, connection.createStatement() does get called, but it doesn't get called on a 'real' connection that will create a resource on a database somewhere. The mock Connection implementation that Mockito creates just tracks that the method has been called, and returns null. Later, when the thenReturn() method is called, Mockito can link the call to createStatement() with the value passed to thenReturn() so that the mock Connection can return the mock Statement when the createStatement() method is called on it.
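To make that concrete, here is a minimal, self-contained sketch (the class and variable names are mine, purely illustrative):

import static org.mockito.Mockito.*;

import java.sql.Connection;
import java.sql.Statement;

public class MockWiringSketch {
    public static void main(String[] args) throws Exception {
        // createStatement() really is invoked here, but only on the mock:
        // Mockito records the call and pairs it with the stubbed value.
        Connection connection = mock(Connection.class);
        Statement statement = mock(Statement.class);
        when(connection.createStatement()).thenReturn(statement);

        // From now on the mock hands back the mock Statement; no database
        // resource is ever allocated, so there is nothing to close.
        Statement fromMock = connection.createStatement();
        System.out.println(fromMock == statement); // prints "true"
    }
}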
Ultimately, this is a false positive report from Coverity: there is no resource leak issue here. However, there is the question of the value of running a scanner such as Coverity on test code. In particular, I'm not sure how you could have a security flaw in test code, given that it isn't interactive and isn't something that you either ship to customers or upload to a server somewhere.
When I write tests that involve subscribing to events on the EventStream or watching actors and listening for "Terminated", the tests work fine when run one by one, but when I run the whole test suite those tests fail.
The tests also work if each of them is in a separate test class with xUnit.
How come?
A repo with those kind of tests: https://github.com/Lejdholt/AkkaTestError
I took a look at your repository and can reproduce the problems you are describing.
It feels like a bug in the TestKit, some timing issue somewhere. But it's hard to pin down.
Also, not all unit test frameworks are created equal. The TestKit uses its own TaskDispatcher to make it possible to test operations that are normally inherently asynchronous.
This sometimes causes conflicts with the test framework being used, which is incidentally also why Akka.NET moved to xUnit for its own CI process.
I have managed to fix your problem by not using the TestProbe. Although I'm not sure if the problem lies with the TestProbe per se, or with the fact that you were using a global reference (your 'process' variable).
I suspect that the test framework, while running tests in parallel, might be causing some weird things to happen with your TestProbe reference.
Here is an example of how I changed one of your tests:
[Test]
public void GivenAnyTime_WhenProcessTerminates_ShouldLogStartRemovingProcess()
{
    IProcessFactory factory = Substitute.For<IProcessFactory>();
    var testactor = Sys.ActorOf<FakeActor>("test2");
    processId = Guid.NewGuid();
    factory.Create(Arg.Any<IActorRefFactory>(), Arg.Any<SupervisorStrategy>()).Returns(testactor);

    manager = Sys.ActorOf(Props.Create(() => new Manager(factory)));
    manager.Tell(new StartProcessCommand(processId));

    EventFilter.Info("Removing process.")
               .ExpectOne(() => Sys.Stop(testactor));
}
It should be fairly self-explanatory how to change your other tests.
The FakeActor is nothing more than an empty ReceiveActor implementation.
I have some very simple test cases (ScalaTest, but it doesn't matter), and I provide two implementations for accessing some resources; these methods return either a Try or some case class instance.
Test cases:
"ResourceLoader" must
"successfully initialize resource" in {
/async code test
noException should be thrownBy Await.result(ResourceLoader.initializeRemoteResourceAsync(credentials, networkConfig), Duration.Inf)
}
"ResourceLoader" must
"successfully sync initialize remote resources" in {
noException should be thrownBy ResourceLoader.initializeRemoteResource(credentials, networkConfig)
}
These tests exercise different code paths that access some remote resources.
Sync version
def initializeRemoteResource(credentials: Credentials, absolutePathToNetworkConfig: String): Resource = {
//some code accessing remote server
}
Async version
def initializeRemoteResourceAsync(credentials: Credentials, absolutePathToNetworkConfig: String): Future[Try[Resource]] = {
Future {
//the same code as in sync version
}
}
In the IDEA test tab I see that the future-based version is twice as slow as the sync version. My question: is there overhead in calling Await.result explicitly? If not, why does it slow down the execution? I'd appreciate any help, thanks.
Note: I know this is not the best way to measure the performance of a production system, but it at least says how much time was spent on each test case.
Yes, there will be a small overhead for Await.result, but in practice it probably doesn't amount to much. Future {} requires an ExecutionContext (thread pool or thread creator) in implicit scope so you won't be able to successfully use it without importing the default execution context (which will simply spawn a thread) or some other context. If you're using the default execution context, for example, you will have two threads instead of one which will involve some overhead for context switching. It shouldn't be much though. If 'twice as slow' means 40ms instead of 20 then perhaps it's not worth worrying about.
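For intuition, here is a rough Java analogue (a sketch of the general effect, not your Scala code): handing work to another thread and then blocking for the result, as Await.result does, pays a fixed scheduling cost that only matters when the work itself is cheap.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HandoffOverheadSketch {
    // Stand-in for the resource initialization being timed.
    static int loadResource() {
        return 42;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Sync: the work runs on the calling thread.
        long t0 = System.nanoTime();
        loadResource();
        long syncNanos = System.nanoTime() - t0;

        // Async plus blocking wait: the work is handed to another thread
        // and the caller blocks for the result.
        long t1 = System.nanoTime();
        Future<Integer> future = pool.submit(HandoffOverheadSketch::loadResource);
        future.get();
        long asyncNanos = System.nanoTime() - t1;

        System.out.printf("sync: %d ns, async+wait: %d ns%n", syncNanos, asyncNanos);
        pool.shutdown();
    }
}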
Let's see how simple of a question I can ask. I have:
void TCPClient::test(const boost::system::error_code& ErrorCode)
{
// Anything can be here
}
and I would like to call it from another class. I have a global boost::thread_group that creates a thread
clientThreadGroup->create_thread(boost::bind(&TCPClient::test,client, /* this is where I need help */));
but I am uncertain how to call test, or whether this is even the correct way.
As an explanation of the overall project: I am creating a TCP connection between a client and a server, and I have a method send (in another class) that will be called when data needs to be sent. My current goal is to be able to call test (which currently has async_send in it) and send the information through the socket that is already set up. However, I am open to other ideas on how to implement this, and will probably work on creating a producer/consumer model if it proves too difficult.
I can use either approach for this project, but I will later have to implement listen to be able to receive control packets from the server, so if there is any advice on which method to use, I would greatly appreciate it.
boost::system::error_code err;
clientThreadGroup->create_thread(boost::bind(&TCPClient::test,client, err));
This works for me. I don't know whether it will actually report an error if something goes wrong, so if someone wants to correct me there, I would appreciate it (if only for experience's sake).
Is it possible to execute COMMIT WRITE BATCH NOWAIT in Hibernate?
I didn't search extensively, but I couldn't find any evidence that you can access this functionality at the JDBC driver level.
That leaves you with the option of specifying the COMMIT_WRITE parameter at the instance or session level, if that makes sense for you.
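If you go the session-level route, a minimal sketch of what that might look like over plain JDBC (untested, and assuming you can get at the underlying Connection):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: switch the current Oracle session to asynchronous commits.
// COMMIT_WRITE is the Oracle 10.2 parameter discussed below.
static void enableAsyncCommit(Connection connection) throws SQLException {
    try (Statement stmt = connection.createStatement()) {
        stmt.execute("ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT'");
    }
}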
Just in case, let me quote this blog post (I'm pasting the content for reference because the original site is either unavailable or dead and I had to use Google Cache):
Using "Commit Write Batch Nowait" from within JDBC
Anyone who has used the new asynchronous commit feature of Oracle 10.2 will be aware that it's very useful for transaction processing systems that would traditionally be bound by log_file_sync wait events. COMMIT WRITE BATCH NOWAIT is faster because it doesn't wait for a message assuring it that the transaction is safely in the redo log - instead it assumes it will make it. This nearly eliminates log_file_sync events. It also arguably undermines the whole purpose of commit, but there are many situations where the loss of a particular transaction (say, to delete a completed session) is perfectly survivable and far preferable to being unable to serve incoming requests because all your connections are busy with log_file_sync wait events.

The problem anyone using Oracle's JDBC driver faces is that neither the 10.2 nor the 11.1 drivers have any extensions which allow you to access this functionality easily - while Oracle has lots of vendor-specific extensions for all sorts of things, support for async commit is missing.
This means you can:
1) Turn on async commit at the instance level by messing with the COMMIT_WRITE init.ora parameter. There's a really good chance this will get you fired, as throughout the entire system COMMIT will be asynchronous. While we think this is insane for production systems, there are times when setting it on a development box makes sense: if you are 80% log-file-sync bound, setting COMMIT_WRITE to COMMIT WRITE BATCH NOWAIT will let you see what problems you would face if you could somehow fix your current ones.
2) Change COMMIT_WRITE at the session level. This isn't as dangerous as doing it system-wide, but it's hard to see it being viable for a real-world system with transactions people care about.
3) Prepare and use a PL/SQL block that goes "BEGIN COMMIT WRITE BATCH NOWAIT; END". This is safer than the first two ideas but still involves a network round trip.
4) Wrap your statement in an anonymous block with an asynchronous commit. This is the best approach we've seen. Your code will look something like this:
BEGIN
--
insert into generic_table
(a_col, another_col, yet_another_col)
values
(?,?,?);
--
COMMIT WRITE BATCH NOWAIT;
--
END;
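To tie that back to Java, here is a minimal sketch (untested; the table and columns are the blog's placeholders) of executing such a block through JDBC:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: run the insert and the asynchronous commit in one round trip
// by wrapping both in an anonymous PL/SQL block.
static void insertWithAsyncCommit(Connection connection,
                                  String a, String b, String c) throws SQLException {
    String block = "BEGIN"
            + " INSERT INTO generic_table (a_col, another_col, yet_another_col)"
            + " VALUES (?, ?, ?);"
            + " COMMIT WRITE BATCH NOWAIT;"
            + " END;";
    try (PreparedStatement ps = connection.prepareStatement(block)) {
        ps.setString(1, a);
        ps.setString(2, b);
        ps.setString(3, c);
        ps.execute();
    }
}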
I was looking for a way to do this but couldn't get it working in a test. The reason for my hold up was that I was expecting the wrong results from my test. I was testing by manually acquiring a shared table lock to simulate adding an index - but in this case, the insert query acquires the lock, not the commit. So it doesn't actually solve the problem I was looking to solve. I got round my problem by moving these insertions into a background queue, so that they don't hold up the main web request.
Anyway, I think you can still do asynchronous commits in Hibernate. Basically you can use the Session.doWork() method to get access to the native Connection object (or, in older versions of Hibernate, the Session.connection() method). I also moved the commit SQL into a strategy interface, so that we can run our HSQLDB-based tests, which wouldn't understand the Oracle-specific SQL.
In fact, it may be fine to use Session.createSQLQuery and give that the SQL, avoiding having to use Connection directly; try it and see how it works (see the sketch after the example below).
private NativeStrategy nativeStrategy = new OracleStrategy();

interface NativeStrategy {
    String commit();
}

public static final class OracleStrategy implements NativeStrategy {
    public String commit() {
        return "COMMIT WRITE BATCH NOWAIT";
    }
}

public void saveAsynchronously(MyItem item) {
    session.save(item);
    session.flush();
    // Try to issue an asynchronous commit where supported.
    session.doWork(new Work() {
        public void execute(Connection connection) throws SQLException {
            Statement commit = connection.createStatement();
            try {
                commit.execute(nativeStrategy.commit());
            } finally {
                commit.close();
            }
        }
    });
}
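And for the createSQLQuery variation mentioned above, a minimal, untested sketch (assuming the same session and nativeStrategy fields as in the example):

public void saveAsynchronously(MyItem item) {
    session.save(item);
    session.flush();
    // Hypothetical variation: issue the native commit through the Session
    // rather than the raw Connection. Verify that your Hibernate version
    // accepts a bare COMMIT statement here.
    session.createSQLQuery(nativeStrategy.commit()).executeUpdate();
}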
I have the following code in Java:
if(!conn.isClosed())
{
conn.close();
}
Instead of working, I am rewarded with:
java.sql.SQLException: Connection already closed
My connection object is a Snaq.db.CacheConnection
I checked the JavaDocs for isClosed, and they state that:
This method generally cannot be called to determine whether a connection to a database is valid or invalid. A typical client can determine that a connection is invalid by catching any exceptions that might be thrown when an operation is attempted.
So my questions are:
1) What good is JDBC's isClosed() anyway? Since when do we use exceptions in Java to check for validity?
2) What is the correct pattern for closing a database connection? Should I just close it and swallow any exceptions?
3) Any idea why would SnaqDB be closing the connection? (My backend is a Postgres 8.3)
I'll answer your questions with corresponding numbers:
1) I agree with you; it seems strange that isClosed provides the closed state only on a best-effort basis, and that your code still has to be prepared to catch the exception when closing the connection. I think the reason is that the connection may be closed at any time by the database, so any status returned by a state-query method like isClosed is intrinsically stale information: the state may change between checking isClosed and calling close on the Connection.
2) Calling close has no effect on your data or on previous queries. JDBC operations execute with synchronous results, so all useful work has either succeeded or failed by the time isClosed is called. (This is true with either autoCommit or explicit transaction boundaries.) If your application is a single user accessing a local database, then perhaps showing the error to the user might help them diagnose problems. In other environments, logging the exception and swallowing it is probably the best course of action. Either way, swallowing the exception is safe, as it has no bearing on the state of the database.
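As a minimal sketch of that "log and swallow" pattern (the helper name is mine):

import java.sql.Connection;
import java.sql.SQLException;

// Sketch: attempt the close unconditionally and treat failure as non-fatal,
// since close() has no bearing on already-completed work.
static void closeQuietly(Connection conn) {
    if (conn == null) {
        return;
    }
    try {
        conn.close();
    } catch (SQLException e) {
        // The pool or the database may already have closed the connection;
        // log and move on rather than checking isClosed() first.
        System.err.println("Ignoring exception on close: " + e);
    }
}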
3) Looking at the source for SnaqDB's CacheConnection, the isClosed method delegates to the underlying connection. So the problem is not there; it lies with the defined contract for isClosed() and with Connection.close() throwing an exception.