When does Castle ActiveRecord close a connection? (Oracle)

I started using NHibernate at the same time I started using Castle ActiveRecord, so I sometimes mix the two up.
I was wondering when ActiveRecord (or NHibernate) closes a connection when the following code runs:
Dim entity = TABLE_Y.TryFind(id)
If entity IsNot Nothing Then
    ' long-running process here
    Response.Redirect("...")
End If
I'm asking because this long-running process takes more than an hour to finish, and whenever the code redirects to another page I get an ORA-03135 (connection lost contact) telling me the connection was lost. That other page runs the following ActiveRecord query:
Dim entity = TABLE_X.FindAll(Order.Desc(...), _
                             Expression.Eq(...) And _
                             Expression.Eq(...)).FirstOrDefault
which is what throws the ORA-03135.
So I was wondering whether some connection opened by ActiveRecord was left open across the long-running process. That process is literally a separate process started by the application, and the page waits for it to end before redirecting, so even if the other process used ActiveRecord it wouldn't share the same connection string or anything like that.
The odd part is that the failing query is against a completely different table. Is ActiveRecord trying to reuse an existing connection that has timed out? I tried adding "Pooling=false" and "Validate Connection=true" to the connection string, with no luck.
Thanks in advance.

The connection will be closed when the session is disposed; where that happens depends on your application. If you're using the HTTP module that comes with ActiveRecord, it happens when the Application.EndRequest event fires (i.e. at the end of the request). If you aren't, you need to find where a SessionScope or TransactionScope is created and disposed; the dispose is where the connection is closed.
If you want to start a long-running task and redirect before it completes, you will need to start it on another thread (e.g. using the ThreadPool or Tasks). You will also need to configure ActiveRecord to use the HybridWebThreadScopeInfo so that it stores the session in a thread-local when the HttpContext is unavailable (which is what happens on your background thread):
<activerecord threadinfotype="Castle.ActiveRecord.Framework.Scopes.HybridWebThreadScopeInfo, Castle.ActiveRecord">
Then in your task, wrap the work in a TransactionScope or SessionScope (I prefer the former):
using (var trans = new TransactionScope())
{
    // do your stuff here...
    trans.VoteCommit();
}
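Putting those pieces together, a minimal sketch of what the calling page might look like. This is only an illustration: Task.Factory.StartNew is one way to get a background thread, and LongRunningProcess is a hypothetical placeholder for the hour-long work described in the question.

Task.Factory.StartNew(() =>
{
    // Runs on a thread-pool thread with no HttpContext, so ActiveRecord
    // falls back to the thread-local session from HybridWebThreadScopeInfo.
    using (var trans = new TransactionScope())
    {
        LongRunningProcess(entity); // hypothetical: the long-running work
        trans.VoteCommit();
    }
});
Response.Redirect("..."); // redirect immediately; the task keeps running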

Related

How to deal with FATAL: terminating connection due to idle-in-transaction timeout

I have one method in which some DB operations happen around an API call. This is a Spring application with a Postgres DB. We also have the idle-transaction property set on the Postgres DB:
idle_in_transaction_session_timeout = 10min
Now the issue is that I sometimes get this exception:
org.postgresql.util.PSQLException: This connection has been closed. Root Cause is FATAL: terminating connection due to idle-in-transaction timeout
For example, my code looks like this:
@Transactional(value = "transactionManagerDC")
public void Execute()
{
    // 1. select from DB - takes 2 min
    // 2. call another API - takes 10 min <- here is when Postgres closes my connection
    // 3. select from DB - throws the exception
}
What would be the correct design for this? The output of the select in step 1 is used in the API call, and the output of the API call is used in the select in step 3, so the three steps are interdependent.
Ten minutes is a very long time to hold a transaction open. Your RDBMS server automatically disconnects your session, rolling back the transaction, because it cannot tell whether the transaction was started by an interactive (command-line) user who then went out to lunch without committing it, or by a long-running task like yours.
Open transactions can block other users of your RDBMS, so it's best to COMMIT them quickly.
Your best solution is to refactor your application code so it can begin, and then quickly commit, the transaction after the ten-minute response from that other API call arrives.
It's hard to give you specific advice without knowing more, but you could set some sort of status = "API call in progress" column on the row before you call the slow API, and clear that status within your (now much shorter) transaction after the API call completes.
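A minimal sketch of that refactoring in Spring terms, with all class, method, and type names hypothetical: each @Transactional method opens and commits quickly, and the slow API call sits between them with no transaction held open.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical placeholder types for the data flowing between the steps.
record Input(long id) {}
record Result(String payload) {}

// Hypothetical wrapper around the slow external API.
interface ApiClient {
    Result call(Input input);
}

@Service
class StepService {
    @Transactional(value = "transactionManagerDC")
    public Input selectStep1() {
        // step 1: select from DB; the transaction commits as soon as this returns
        return new Input(42L); // placeholder
    }

    @Transactional(value = "transactionManagerDC")
    public void selectStep3(Result result) {
        // step 3: select using the API result; again commits on return
    }
}

@Service
class ExecuteFlow {
    private final StepService steps;
    private final ApiClient api;

    ExecuteFlow(StepService steps, ApiClient api) {
        this.steps = steps;
        this.api = api;
    }

    public void execute() {
        Input input = steps.selectStep1(); // transaction 1: opens and commits quickly
        Result result = api.call(input);   // ~10 minutes, but no transaction is open
        steps.selectStep3(result);         // transaction 2: opens and commits quickly
    }
}

Note the two @Transactional methods live on a separate bean, so the calls go through Spring's proxy and each actually gets its own transaction.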
Alternatively, you can raise the idle-in-transaction timeout for just that connection with something like this, keeping the shorter server-wide default to limit the out-to-lunch risk on the rest of your system:
SET idle_in_transaction_session_timeout = '15m';

How to keep a persistent connection to SQL Server using Ruby Sequel and Tiny_TDS while in a loop

I have a Ruby script that needs to run continually on the server. I've daemonized it using the daemon gem, and in my script it runs in an infinite loop, since the daemon gem handles starting and stopping of the process that kicks off my script. In the script, I start by setting up my DB instance using the Sequel gem and tiny_tds, like so:
DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase, user: MSSQLUser, password: MSSQLPassword)
Then I have a loop do that is my infinite loop. Inside it, I test whether I have a connection using DB.test_connection, and then I query the DB every second or so to check whether there is new content, using a query such as:
DB['SELECT * FROM dbo.[MyTable]'].all do |row|
  # my logic here
  # as part of my logic I test whether I need to delete this row, and if so I use
  DB.run('DELETE FROM dbo.[MyTable] WHERE some condition')
end
Then at the end of my logic, just before I loop again, I do:
sleep 1
DB.disconnect
All of this works great for about an hour to an hour and a half, with everything checking the table, running the logic, deleting rows, etc.; then it dies and gives me this error message: TinyTds::Error: Adaptive Server connection timed out
My question is: why is this happening? Do I need to restructure my code in some way? Why doesn't DB.test_connection do what it is advertised to do? The documentation says it checks for a connection in the connection pool, uses one if it finds it, and creates a new one otherwise.
Any help would be much appreciated.
DB.test_connection just acquires a connection from the connection pool; it doesn't check that the connection is still valid (it must have been valid at one point or it wouldn't be in the pool). There's no way to know whether a connection is still valid without actually sending a query. You can use the connection_validator extension that ships with Sequel if you want to do that automatically.
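A minimal sketch of enabling that extension, reusing the connection settings from the question:

DB = Sequel.connect(adapter: 'tinytds', host: MSSQLHost, database: MSSQLDatabase,
                    user: MSSQLUser, password: MSSQLPassword)

# Validate pooled connections before handing them out; -1 checks on every
# checkout (safest for long gaps between queries), while a positive value
# only checks connections that have been idle at least that many seconds.
DB.extension(:connection_validator)
DB.pool.connection_validation_timeout = -1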
If you are loading Sequel before forking, you need to make sure you call DB.disconnect before forking, otherwise you can end up with multiple forked processes sharing the same connection, which can cause many different issues.
I finally ended up just putting a rescue statement in there that catches this and re-runs the line of code that creates the DB instance. Yes, it puts a warning in my log about that constant already being set, but I guess I could just make it not a constant and that would go away. Anyway, it appears to be working now, and the times it does time out I recover from gracefully. I just wish I could have figured out why it was/is disconnecting like it is.

TransactionScope disposal failure

I'm using the TransactionScope class in a project based on Silverlight and RIA Services. Each time I need to save some data, I create a TransactionScope object, save my data using Oracle ODP, then call the Complete method on my TransactionScope object and dispose of the object itself:
public override bool Submit(ChangeSet changeSet)
{
    TransactionOptions txopt = new TransactionOptions();
    txopt.IsolationLevel = IsolationLevel.ReadCommitted;

    bool result;
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required, txopt))
    {
        // Here I open an Oracle connection and fetch some data
        GetSomeData();
        // This is where I persist my data
        result = base.Submit(changeSet);
        tx.Complete();
    }
    return result;
}
My problem is that the first time the Submit method is called, everything is fine, but if I call it a second time, execution gets stuck for a couple of minutes after the call to Complete (so, while disposing tx), and then I get the Oracle error ORA-12154. Of course, I have already checked that my persistence code completes without errors. Any ideas?
Edit: today I repeated the test and, for some reason, I'm getting a different error instead of the Oracle exception:
System.InvalidOperationException: Operation is not valid due to the current state of the object.
at System.Transactions.TransactionState.ChangeStatePromotedAborted(InternalTransaction tx)
at System.Transactions.InternalTransaction.DistributedTransactionOutcome(InternalTransaction tx, TransactionStatus status)
at System.Transactions.Oletx.RealOletxTransaction.FireOutcome(TransactionStatus statusArg)
at System.Transactions.Oletx.OutcomeEnlistment.InvokeOutcomeFunction(TransactionStatus status)
at System.Transactions.Oletx.OletxTransactionManager.ShimNotificationCallback(Object state, Boolean timeout)
at System.Threading._ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Object state, Boolean timedOut)
I somehow managed to solve this problem, although I still can't figure out why it showed up in the first place: I just moved the call to GetSomeData outside the scope of the distributed transaction. Since the call to Submit may open many connections and perform any kind of operation on the DB, I can't tell why GetSomeData was causing this problem (it just opens a connection, calls a very simple stored function, and returns a boolean). I can only guess that it has something to do with the implementation of the Submit method and/or with the instantiation of multiple Oracle connections within the same transaction scope.
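For reference, a minimal sketch of that workaround, restructuring the method from the question so the extra Oracle connection is opened before the transaction scope exists and therefore never enlists in the distributed transaction:

public override bool Submit(ChangeSet changeSet)
{
    // Fetch the data before the TransactionScope is created, so this
    // connection never gets promoted into the distributed transaction.
    GetSomeData();

    TransactionOptions txopt = new TransactionOptions();
    txopt.IsolationLevel = IsolationLevel.ReadCommitted;

    bool result;
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required, txopt))
    {
        result = base.Submit(changeSet);
        tx.Complete();
    }
    return result;
}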

Releasing Grails database connection early

I have a controller action that does the following things:
Gets a domain object from the database
Uses info on that object to find a data file (on disk) and writes the contents of that file to the response output stream.
My problem is that the database connection is reserved for the duration of the action, including the (long) time required to stream the data. This results in a lot of unnecessarily held database connections when several users are streaming data at the same time.
def stream() {
    StreamDetails sd = StreamDetails.get(params.id)
    // Extract the info needed to read the stream
    String filename = sd.filename
    // The database connection is no longer needed; how to properly release it?

    // Start writing the data stream to the response output.
    // This may take a long time and does not use a db connection.
    streamService.writeToOutput(filename, response.getOutputStream())
}
I have tried:
Injecting the sessionFactory bean into the controller and calling sessionFactory.currentSession.close() before calling the service. However, this causes a SessionException on the line calling the service, i.e. before entering the writeToOutput() method (and nothing in that method needs a database connection). AND I don't think the session should really be closed, just released to the pool.
Copy-pasting the code from streamService.writeToOutput(...) into the controller to avoid the service call. In this case all the code executes, but a SessionException is still thrown after the action completes.
How to properly release the connection early?
Have you tried injecting the dataSource? You could use DataSourceUtils to obtain a connection that you then use to get the filename, and close() it manually yourself.
I don't know whether you can use this connection in combination with GORM, so you might have to write a plain SQL query as well.
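A minimal sketch of that idea in a Grails controller, using groovy.sql.Sql over the injected dataSource (a convenient alternative to calling DataSourceUtils directly) instead of GORM. The table and column names are hypothetical; the point is that the connection goes back to the pool before the long streaming phase begins.

import groovy.sql.Sql

class StreamController {

    def dataSource      // injected by Grails
    def streamService

    def stream() {
        String filename
        def sql = new Sql(dataSource)
        try {
            // Borrow a connection just long enough to read the filename
            def row = sql.firstRow('select filename from stream_details where id = ?',
                                   [params.long('id')])
            filename = row?.filename
        } finally {
            sql.close() // release the connection back to the pool
        }

        // Long-running streaming phase; no db connection is held here
        streamService.writeToOutput(filename, response.outputStream)
    }
}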

Elegant way to stop socket read operation from outside

I implemented a small client-server application in Ruby, and I have the following problem: the server starts a new client session in a new thread for each connecting client, but it should be possible to shut down the server and stop all the client sessions in a 'polite' way from outside, without just killing a thread while I don't know which state it is in.
So I decided that the client session object gets a 'stop' flag which can be set from outside and is checked before each action. The problem is that the session should not keep waiting on the client if it is just blocked waiting for a request. I have the following temporary solution:
def read_client
  loop do
    begin
      timeout(1) { return @client.gets }
    rescue Timeout::Error
      if @stop
        stop # notifies the client and closes the connection
        return nil
      end
    end
  end
end
But that sucks, it looks terrible, and intuitively this should be such a normal thing that there has to be a 'normal' solution to it. I don't even know whether it is safe, or whether it could happen that the gets operation reads part of the client request but not all of it.
Another side question is whether setting/getting a boolean flag is an atomic operation in Ruby (or whether I need an additional Mutex for the flag).
A thread-per-client approach is usually a disaster for server design, and blocking I/O is difficult to interrupt without OS-specific tricks. Check out non-blocking sockets; see, for example, the answers to this question.
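One common shape for the 'polite interrupt' part, sketched under the assumption of a plain TCP socket per client, is the self-pipe trick with IO.select: instead of blocking in gets, wait on both the client socket and an internal pipe, and write to the pipe from the shutdown code to wake the session up. The class and method names here are illustrative.

require 'socket'

class ClientSession
  def initialize(client)
    @client = client
    @stop_r, @stop_w = IO.pipe # self-pipe, used only to signal shutdown
  end

  # Called from outside (e.g. the server's shutdown code); wakes up
  # any read_client currently blocked in IO.select.
  def request_stop
    @stop_w.write('x')
  end

  def read_client
    ready, = IO.select([@client, @stop_r])
    if ready.include?(@stop_r)
      stop # notify the client and close the connection, as before
      return nil
    end
    @client.gets
  end

  def stop
    @client.close
  end
end

This avoids both the 1-second polling latency and the Timeout::Error churn, since the select returns as soon as either the client sends data or request_stop is called.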
