Which class do I need to instrument with byte code to time getting a connection? - derby

I want to insert byte code to collect the elapsed time taken to get a database connection. But I don't know which class obtains the connection in Derby. I tried adding the byte code to the following classes:
AsmUtil.add(reserved, "org/apache/tomcat/dbcp/dbcp/BasicDataSource", "getConnection");
AsmUtil.add(reserved, "org/apache/derby/jdbc/ClientBaseDataSourceRoot", "getConnection");
AsmUtil.add(reserved, "org/apache/derby/jdbc/BasicClientDataSource40", "getConnection");
But I can't catch the place where Derby actually gets the database connection.
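For reference, the effect I'm after is what a plain timing wrapper around the DataSource would give; a minimal Java sketch (the TimingDataSourceWrapper class and its logging are illustrative, not part of Derby or Tomcat):
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical wrapper: delegates to the real DataSource and records
// how long each getConnection() call takes.
public class TimingDataSourceWrapper {
    private final DataSource delegate;

    public TimingDataSourceWrapper(DataSource delegate) {
        this.delegate = delegate;
    }

    public Connection getConnection() throws SQLException {
        long start = System.nanoTime();
        try {
            return delegate.getConnection();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("getConnection took " + elapsedMs + " ms");
        }
    }
}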

Related

How to deal with FATAL: terminating connection due to idle-in-transaction timeout

I have one method where some DB operations happen along with an API call. This is a scenario with Spring and a Postgres DB. We also have this property set for idle transactions in the Postgres DB:
idle_in_transaction_session_timeout = 10min
Now the issue is that I sometimes get this exception:
org.postgresql.util.PSQLException: This connection has been closed. Root Cause is FATAL: terminating connection due to idle-in-transaction timeout
For example, my code looks like this:
@Transactional(value = "transactionManagerDC")
public void execute()
{
// 1. select from DB - takes 2 min
// 2. call another API - takes 10 min. <- this is when Postgres closes my connection
// 3. select from DB - throws the exception.
}
What could be the correct design for this? We use the output of the select in step 1 in the API call, and the output of that API call is used in the select in step 3, so these three steps are interdependent.
Ten minutes is a very long time to hold a transaction open. Your RDBMS server automatically disconnects your session, rolling back the transaction, because it cannot tell whether the transaction was started by an interactive (command-line) user who then went out to lunch without committing it, or by a long-running task like yours.
Open transactions can block other users of your RDBMS, so it's best to COMMIT them quickly.
Your best solution is to refactor your application code so it can begin, and then quickly commit, the transaction after the ten-minute response from that other API call.
It's hard to give you specific advice without knowing more. But you could set some sort of status = "API call in progress" column on a row before you call that slow API, and clear that status within your transaction after the API call completes.
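A minimal sketch of that refactoring, assuming Spring with two short transactions around the slow call (all class and method names here are hypothetical, not from the question):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ExecuteService {

    @Autowired
    private StepRepository steps;

    public void execute() {
        String input = steps.selectStepOne();  // 1. short transaction, committed on return
        String apiResult = callSlowApi(input); // 2. ~10 min, no transaction or connection held
        steps.selectStepThree(apiResult);      // 3. second short transaction
    }

    private String callSlowApi(String input) {
        return input; // placeholder for the real 10-minute API call
    }
}

@Repository
class StepRepository {

    @Transactional(value = "transactionManagerDC")
    public String selectStepOne() {
        return "step 1 result"; // placeholder for the real select
    }

    @Transactional(value = "transactionManagerDC")
    public void selectStepThree(String apiResult) {
        // placeholder for the real select that uses apiResult
    }
}
Because the transactional methods live in a separate bean, the calls go through Spring's transaction proxy; invoking them from within the same class would bypass it. The 10-minute API call now runs with no transaction open and no connection held.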
Alternatively, you can set the transaction timeout for just that connection with something like this, to reduce the out-to-lunch risk on your system.
SET idle_in_transaction_session_timeout = '15m';

Starting a transaction with JDBC in Clojure without a block/function

Is it possible to start a transaction in Clojure using JDBC without having to encase the code in a block? Obviously I'd have to call another function to end the transaction later on.
clojure.java.jdbc is a wrapper around various Java implementations of DB connectors. If you don't want to wrap the code in with-db-transaction, you can get a connection with get-connection, save it in your state (e.g. in an atom), then do:
(.setAutoCommit conn false)
then perform all the operations you want, and then
(.commit conn)
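If any of the operations throws, call (.rollback conn) instead of committing (rollback, like commit, is a standard java.sql.Connection method), and remember to restore auto-commit with (.setAutoCommit conn true) or close the connection when you're done.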

How to keep a persistent connection to SQL Server using Ruby Sequel and Tiny_TDS while in a loop

I have a Ruby script that needs to run continually on the server. I've daemonized it using the daemon gem, and in my script I have it running in an infinite loop, since the daemon gem handles starting and stopping of the process that kicks off my script. In my script, I start out by setting up my DB instance using the Sequel gem and tiny_tds, like so:
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
Then I have a loop do block that is my infinite loop. Inside it, I test whether I have a connection using DB.test_connection, and then I query the DB every second or so to check whether there is new content, using a query such as:
DB['SELECT * FROM dbo.[MyTable]'].all do |row|
# MY logic here
# As part of my logic I test to see if I need to delete this row in the table and if so I use
DB.run('DELETE FROM dbo.[MyTable] WHERE some condition')
end
Then at the end of my logic, just before I loop again, I do:
sleep 1
DB.disconnect
All of this works great for about an hour to an hour and a half, with everything checking the table, doing the logic, deleting rows, etc. Then it dies and gives me this error message: TinyTds::Error: Adaptive Server connection timed out
My question: why is this happening? Do I need to restructure my code in some way? Why doesn't DB.test_connection do what it is advertised to do? The documentation says it checks for a connection in the connection pool, uses it if it finds one, and creates a new one otherwise.
Any help would be much appreciated
DB.test_connection just acquires a connection from the connection pool; it doesn't check that the connection is still valid (it must have been valid at one point or it wouldn't be in the pool). There's no way to know that a connection is still valid without actually sending a query. You can use the connection_validator extension that ships with Sequel if you want to do that automatically.
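For reference, per the Sequel docs: loading the extension with DB.extension(:connection_validator) makes the pool validate connections on checkout, but by default only those that have been idle for more than an hour; setting DB.pool.connection_validation_timeout = -1 validates on every checkout, at the cost of an extra round trip to the server.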
If you are loading Sequel before forking, you need to make sure you call DB.disconnect before forking, otherwise you can end up with multiple forked processes sharing the same connection, which can cause many different issues.
I finally ended up just putting a rescue statement in there that catches this and re-runs my line of code to create the DB instance. Yes, it puts a warning in my log about already setting that constant, but I guess I could just make it not a constant and that would go away. Anyway, it appears to be working now, and the times it does time out, I'm recovering gracefully. I just wish I could have figured out why it was/is disconnecting like it is.

Releasing Grails database connection early

I have a controller action that does the following things:
Gets a domain object from the database
Uses info on that object to find a data file (on disk) and writes the contents of that file to the response output stream.
My problem is that the database connection is reserved for the duration of the action, including the (long) time required to stream the data. This results in a lot of unnecessarily held database connections when several users are streaming data at the same time.
def stream() {
StreamDetails sd = StreamDetails.get(params.id)
// Extract info needed to read the stream
String filename = sd.filename
// The database connection is no longer needed, how to properly release it?
// Start writing the data stream to response output
// This may take a long time and does not use a db connection
streamService.writeToOutput(filename,response.getOutputStream())
}
I have tried:
Injecting the sessionFactory bean into the controller and calling sessionFactory.currentSession.close() before calling the service. However, this causes a SessionException on the line calling the service, i.e. before entering the writeToOutput() method (and nothing in that method needs a database connection). Also, I don't think the session should really be closed, just released to the pool.
Copy-pasting the code from streamService.writeToOutput(...) to the controller to avoid the service call. In this case all the code gets executed but a SessionException is still thrown after the action is complete.
How to properly release the connection early?
Have you tried injecting the dataSource? You could use DataSourceUtils to get a connection that you can then use to look up the filename, and close() it manually afterwards.
I don't know if you can use this connection in combination with GORM, so you might have to write a custom SQL query as well.
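A rough sketch of that approach (assuming Spring's org.springframework.jdbc.datasource.DataSourceUtils; the table and column names are hypothetical):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.springframework.jdbc.datasource.DataSourceUtils;

// Inside the controller action; dataSource is the injected bean.
Connection conn = DataSourceUtils.getConnection(dataSource);
String filename = null;
try {
    PreparedStatement ps = conn.prepareStatement(
        "SELECT filename FROM stream_details WHERE id = ?"); // hypothetical SQL
    ps.setLong(1, id);
    ResultSet rs = ps.executeQuery();
    if (rs.next()) {
        filename = rs.getString("filename");
    }
    rs.close();
    ps.close();
} finally {
    // Returns the connection to the pool rather than physically closing it
    DataSourceUtils.releaseConnection(conn, dataSource);
}
// Stream the file without holding a database connection
streamService.writeToOutput(filename, response.getOutputStream());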

Behaviour of Callable and Prepared Statements in an app server

CallableStatements and PreparedStatements are precompiled. Are they compiled with respect to a connection? I mean, let's assume there are some 100 connection objects residing in the connection pool of an app server. There's a class that uses CallableStatements and PreparedStatements. Let's say the method that uses them is:
public void invokePreparedAndCallableStatements(){
//Fetches connection from pool
Connection con = getConnectionFromPool();
CallableStatement cs = con.prepareCall(.....);
cs.register...(...);
cs.execute();
...
...
PreparedStatement st = con.prepareStatement(...);
st.setXXX(..);
st.executeUpdate();
...
}
Now when the method is called for the first time, a connection is fetched from the pool and the request is processed; the Callable and Prepared Statements are compiled. When the method is called another 99 times, and each time a different connection is fetched from the pool, will the statements be compiled for each connection?
What would be the most efficient way to use statements in this context? I can't make them (con.prepareCall() or con.prepareStatement()) static because the connection isn't static.
The code is actually compiled and stored in the shared pool of the database. Any number of connections using that same code will benefit from the cache. The compiled code is kept as long as the memory limits allow.
The statements will be precompiled. Statement pooling behavior depends on the parameters you configure for the pool.
Note: If you are using JDBC 3.0, you can also pool your PreparedStatements. Reference: What's new in JDBC 3.0
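A sketch of the usual pattern under those assumptions (getConnectionFromPool() is the helper from the question; the SQL is hypothetical): always prepare from the same SQL text, and let a statement-caching pool or driver hand back an already-prepared statement for that connection instead of compiling again.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public void updateName(long id, String name) throws SQLException {
    // Identical SQL text is the cache key, so reuse it verbatim.
    final String sql = "UPDATE users SET name = ? WHERE id = ?";
    try (Connection con = getConnectionFromPool();
         PreparedStatement st = con.prepareStatement(sql)) {
        st.setString(1, name);
        st.setLong(2, id);
        st.executeUpdate();
    } // with statement pooling, close() returns st to the cache rather than destroying it
}
Each physical connection still prepares the statement once, but across 100 calls on 100 pooled connections you pay at most one prepare per connection (and the database may also reuse its own compiled plan for the identical SQL text).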
