COMMIT WRITE BATCH NOWAIT in Hibernate - oracle

Is it possible to execute COMMIT WRITE BATCH NOWAIT in Hibernate?

I didn't search extensively but I couldn't find any evidence that you can access this functionality at the JDBC driver level.
That leaves you with the option of specifying the COMMIT_WRITE parameter at the instance or session level, if that makes sense for you.
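For the session-level variant, a minimal sketch of what that could look like from plain JDBC (the 'BATCH,NOWAIT' value follows the Oracle 10.2 COMMIT_WRITE syntax; double-check it against your database version before relying on it):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public final class AsyncCommitSession {
    // Sketch: switch the current Oracle session to asynchronous commits.
    // Affects only this session, not the whole instance.
    public static void enable(Connection connection) throws SQLException {
        try (Statement st = connection.createStatement()) {
            st.execute("ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT'");
        }
    }
}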
Just in case, let me quote this blog post (I'm pasting the content for reference because the original site is either unavailable or dead and I had to use Google Cache):
Using "Commit Write Batch Nowait" from within JDBC
Anyone who has used the new asynchronous commit feature of Oracle 10.2 will be aware that it's very useful for transaction processing systems that would traditionally be bound by log_file_sync wait events. COMMIT WRITE BATCH NOWAIT is faster because it doesn't wait for a message assuring it that the transaction is safely in the redo log - instead it assumes it will make it. This nearly eliminates log_file_sync events. It also arguably undermines the whole purpose of commit, but there are many situations where the loss of a particular transaction (say, one that deletes a completed session) is perfectly survivable and far preferable to being unable to serve incoming requests because all your connections are busy with log_file_sync wait events.
The problem for anyone using Oracle's JDBC driver is that neither the 10.2 nor the 11.1 driver has any extensions that let you access this functionality easily - while Oracle has lots of vendor-specific extensions for all sorts of things, support for async commit is missing.
This means you can:
1. Turn on async commit at the instance level by messing with the COMMIT_WRITE init.ora parameter. There's a really good chance this will get you fired, as COMMIT will then be asynchronous throughout the entire system. While we think this is insane for production systems, there are times where setting it on a development box makes sense: if you are 80% log file sync bound, setting COMMIT_WRITE to COMMIT WRITE BATCH NOWAIT will let you see what problems you would face if you could somehow fix your current ones.
2. Change COMMIT_WRITE at the session level. This isn't as dangerous as doing it system-wide, but it's hard to see it being viable for a real-world system with transactions people care about.
3. Prepare and use a PL/SQL block that goes "BEGIN COMMIT WRITE BATCH NOWAIT; END;". This is safer than the first two ideas but still involves a network round trip.
4. Wrap your statement in an anonymous block with an asynchronous commit. This is the best approach we've seen. Your code will look something like this:
BEGIN
  --
  insert into generic_table
    (a_col, another_col, yet_another_col)
  values
    (?,?,?);
  --
  COMMIT WRITE BATCH NOWAIT;
  --
END;
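To translate that last block into plain JDBC, here is a minimal sketch (not from the quoted post; generic_table and its columns are just the quote's placeholders, and you would want to confirm that your driver version accepts bind parameters inside an anonymous block):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class AsyncCommitInsert {
    private static final String BLOCK =
        "BEGIN "
        + "  INSERT INTO generic_table (a_col, another_col, yet_another_col) VALUES (?, ?, ?); "
        + "  COMMIT WRITE BATCH NOWAIT; "
        + "END;";

    // One round trip: the insert and the asynchronous commit travel together.
    public static void insert(Connection con, String a, String b, String c) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(BLOCK)) {
            ps.setString(1, a);
            ps.setString(2, b);
            ps.setString(3, c);
            ps.execute();
        }
    }
}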

I was looking for a way to do this but couldn't get it working in a test. The reason for my hold-up was that I was expecting the wrong results from my test: I was testing by manually acquiring a shared table lock to simulate adding an index, but in that case the insert query acquires the lock, not the commit. So this doesn't actually solve the problem I was trying to solve. I got round it by moving these insertions into a background queue, so that they don't hold up the main web request.
Anyway, I think you can still do asynchronous commits in Hibernate. Basically you can use the Session.doWork() method to get access to the native Connection object (or, in older versions of Hibernate, the Session.connection() method). I also moved the commit SQL into a strategy interface, so that we can run our HSQLDB-based tests, which wouldn't understand the Oracle-specific SQL.
In fact, it may be fine to use Session.createSQLQuery and give that the SQL, avoiding having to use Connection directly. Try it and see how it works.
private NativeStrategy nativeStrategy = new OracleStrategy();

interface NativeStrategy {
    String commit();
}

public static final class OracleStrategy implements NativeStrategy {
    public String commit() {
        return "COMMIT WRITE BATCH NOWAIT";
    }
}

public void saveAsynchronously(MyItem item) {
    session.save(item);
    session.flush();
    // Try to issue an asynchronous commit where supported.
    session.doWork(new Work() {
        public void execute(Connection connection) throws SQLException {
            Statement commit = connection.createStatement();
            try {
                commit.execute(nativeStrategy.commit());
            } finally {
                commit.close();
            }
        }
    });
}
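If the Session.createSQLQuery route mentioned above turns out to work with your driver, the same strategy object could be used without dropping to Connection at all - a sketch I have not verified against every Hibernate version:
// Sketch: let Hibernate issue the native commit statement directly.
session.createSQLQuery(nativeStrategy.commit()).executeUpdate();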

Related

How to deal with FATAL: terminating connection due to idle-in-transaction timeout

I have one method where some DB operations happen together with an API call. This is a scenario with Spring and a Postgres DB. We also have a property set for idle transactions in the Postgres DB:
idle_in_transaction_session_timeout = 10min
Now the issue is that I sometimes get an exception:
org.postgresql.util.PSQLException: This connection has been closed. Root Cause is FATAL: terminating connection due to idle-in-transaction timeout
For example, my code looks like this:
@Transactional(value = "transactionManagerDC")
public void Execute()
{
    // 1. select from DB - took 2 min
    // 2. call another API - took 10 min. <- here is when Postgres closes my connection
    // 3. select from DB - throws exception.
}
What could be the correct design for this? We use the output of the select in step 1 in the API call, and the output of that API call is used in step 3 for the select, so these three steps are interdependent.
Ten minutes is a very long time to hold a transaction open. Your RDBMS server automatically disconnects your session, rolling back the transaction, because it cannot tell whether the transaction was started by an interactive (command-line) user who then went out to lunch without committing it, or by a long-running task like yours.
Open transactions can block other users of your RDBMS, so it's best to COMMIT them quickly.
Your best solution is to refactor your application code so it can begin, and then quickly commit, the transaction after the ten-minute response from that other API call.
It's hard to give you specific advice without knowing more. But you could set some sort of status = "API call in progress" column on a row before you call that slow API, and clear that status within your transaction after the API call completes.
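To sketch the shape of that refactoring in Spring (all the names here - WidgetRepository, ExternalApi, Widget, ApiResult - are placeholders, not anything from the question; the point is simply two short transactions with the slow API call in between):
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class ExecuteFlow {

    private final WidgetRepository repo;   // hypothetical data-access bean
    private final ExternalApi api;         // hypothetical client for the slow API
    private final TransactionTemplate tx;

    public ExecuteFlow(WidgetRepository repo, ExternalApi api,
                       PlatformTransactionManager transactionManagerDC) {
        this.repo = repo;
        this.api = api;
        this.tx = new TransactionTemplate(transactionManagerDC);
    }

    public void execute() {
        // Transaction 1: read the input and mark the row so others can see work is in flight.
        Widget input = tx.execute(status -> {
            Widget w = repo.loadPending();
            w.setStatus("API_CALL_IN_PROGRESS");
            return w;
        });

        // No transaction (and no idle connection) is held during the ten-minute call.
        ApiResult result = api.call(input);

        // Transaction 2: persist the result and clear the status flag.
        tx.execute(status -> {
            repo.saveResult(input.getId(), result);
            return null;
        });
    }
}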
Alternatively, you can set the transaction timeout for just that connection with something like this, to reduce the out-to-lunch risk on your system.
SET idle_in_transaction_session_timeout = '15min';

Write call/transaction is dropped in TransactionalEventListener

I am using Spring Boot (1.4.1) with Hibernate (5.0.1.Final). I noticed that when I try to write to the db from within a @TransactionalEventListener handler, the call is simply ignored. A read call works just fine.
When I say ignored, I mean there is no write in the db and there are no logs. I even enabled log4jdbc and there are still no logs, which means no Hibernate session was created. From this I reckon that somewhere in Spring Boot it is identified that this is a transaction event handler and the write call is ignored.
Here is an example.
// This function is defined in a class marked with @Service
@TransactionalEventListener
open fun handleEnqueue(event: EnqueueEvent) {
    // some code to obtain encodeJobId
    this.uploadService.saveUploadEntity(uploadEntity, encodeJobId)
}
@Service
@Transactional
class UploadService {
    //.....code
    open fun saveUploadEntity(uploadEntity: UploadEntity, encodeJobId: String): UploadEntity {
        // some code
        return this.save(uploadEntity)
    }
}
Now if I force a new transaction by annotating saveUploadEntity with
@Transactional(propagation = Propagation.REQUIRES_NEW)
a new transaction with a connection is made and everything works fine.
I don't like that there is complete silence in the logs when this write is dropped (again, reads succeed). Is this a known bug?
How can I enable the handler to start a new transaction? If I put Propagation.REQUIRES_NEW on my handleEnqueue handler, it does not work.
Besides enabling log4jdbc, which successfully logs reads/writes, I have the following settings in Spring.
Thanks
I ran into the same problem. This behavior is actually mentioned in the documentation of TransactionSynchronization#afterCompletion(int), which is referred to by TransactionPhase.AFTER_COMMIT (the default TransactionPhase attribute of @TransactionalEventListener):
The transaction will have been committed or rolled back already, but the transactional resources might still be active and accessible. As a consequence, any data access code triggered at this point will still "participate" in the original transaction, allowing to perform some cleanup (with no commit following anymore!), unless it explicitly declares that it needs to run in a separate transaction. Hence: Use PROPAGATION_REQUIRES_NEW for any transactional operation that is called from here.
Unfortunately this seems to leave no other option than to enforce a new transaction via Propagation.REQUIRES_NEW. The problem is that the transactionalEventListeners are implemented as transaction synchronizations and hence bound to the transaction. When the transaction is closed and its resources cleaned up, so are the listeners. There might be a way to use a customized EntityManager which stores events and then publishes them after its close() was called.
Note that you can use TransactionPhase.BEFORE_COMMIT on your @TransactionalEventListener, which will take place before the commit of the transaction. This will write your changes to the database, but you won't know whether the transaction you're listening on was actually committed or is about to be rolled back.
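For reference, the REQUIRES_NEW arrangement looks roughly like this (sketched in Java rather than Kotlin; the accessors on EnqueueEvent are assumptions, and UploadService is the service from the question):
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
public class EnqueueEventHandler {

    private final UploadService uploadService;

    public EnqueueEventHandler(UploadService uploadService) {
        this.uploadService = uploadService;
    }

    // Runs after the publishing transaction commits; REQUIRES_NEW opens a fresh
    // transaction so the write is not silently dropped.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void handleEnqueue(EnqueueEvent event) {
        uploadService.saveUploadEntity(event.getUploadEntity(), event.getEncodeJobId());
    }
}
Since Spring invokes the listener through the bean's proxy, the REQUIRES_NEW attribute should be honored; treat this as a sketch of the shape rather than a drop-in replacement for the Kotlin handler.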

SQLite-net and SQLite-WinRT at the same time in Windows runtime apps

Previously I used the SQLite-net library for all my SQLite database tasks, and it worked well.
But my app has a huge amount of data to insert, and that took a lot of time. So I decided to use the SQLite-WinRT wrapper only where bulk insert is needed, as the SQLite-WinRT wrapper provides features like preparing statements, then binding data and executing them, which gives faster processing and increases performance.
In my app there are lots of CRUD operations that use SQLite-net methods, and I left those as they are, since it is hard to completely switch from SQLite-net to the SQLite-WinRT wrapper.
My app has a background task that runs and processes some web-service calls and a lot of CRUD operations using only the SQLite-net library.
Whenever I try to bulk insert using the SQLite-WinRT wrapper with prepared statements while the background task is running, it always throws a Busy exception in the SQLite-net library. I know the reason: the background service does a lot of CRUD operations using SQLite-net, so while bulk inserting with the SQLite-WinRT wrapper it throws a Busy exception because the SQLite database is already busy with the background work done through SQLite-net.
So my question is how to handle this situation. Please suggest some ideas for handling such cases. I thought of two:
Stopping the background service while bulk inserting (in the background there is a series of long tasks like calling web services and working with the SQLite db, so stopping the background service at once might not be a good idea).
Closing all SQLite-net connections (didn't work as expected, though).
Any help would be appreciated. Thanks in advance.
While bulk inserting, I started like this:
string dbPath = "collection.sqlite";
var file = await ApplicationData.Current.LocalFolder.GetFileAsync(dbPath);
var db = new SQLiteWinRT.Database(file);
await db.OpenAsync(SqliteOpenMode.OpenReadWrite);

using (var statement = await db.PrepareStatementAsync(
    "INSERT INTO Forms(ServerFormId,FormFileName,FormStatusId,PriorityId) VALUES(?,?,?,?)"))
{
    await db.ExecuteStatementAsync("BEGIN TRANSACTION");
    statement.Reset();
    statement.BindTextParameterAt(1, "0");
    statement.BindTextParameterAt(2, formName);
    statement.BindTextParameterAt(3, formStatusId);
    statement.BindTextParameterAt(4, priorityId);
    await statement.StepAsync().AsTask().ConfigureAwait(false);
}
await db.ExecuteStatementAsync("COMMIT TRANSACTION");
SQLite-WinRT: https://blogs.msdn.microsoft.com/andy_wigley/2013/11/21/how-to-massively-improve-sqlite-performance-using-sqlwinrt/
SQLite-net: http://www.codeproject.com/Articles/826602/Using-SQLite-as-local-database-with-Universal-Apps
I'm afraid the only option is to use a lock or a semaphore before accessing the database.
The lock mechanism guarantees that only one thread executes the inner code block; other threads wait synchronously.
readonly object sync = new object();

void MyMethod() {
    lock (sync) {
        ...
    }
}
A semaphore is similar, but the inner code block can be executed by at most n threads at a time.
Please see MSDN for more info about SemaphoreSlim.

NHibernate ArgumentOutOfRangeException

I recently ran into an instance where I wanted to hit the database from a Task I have running periodically within a web application. I refactored the code to use the ThreadStaticSessionContext so that I could get a session without an HttpContext. This works fine for reads, but when I try to flush an update from the Task, I get the "Index was out of range. Must be non-negative and less than the size of the collection." error. Normally what I see for this error has to do with using a column name twice in the mapping, but that doesn't seem to be the issue here, as I'm able to update that table if the session is associated with a request (and I looked and I'm not seeing any duplicates). It's only when the Task tries to flush that I get the exception.
Does anyone know why it would work fine from a request, but not from a call from a Task?
Could it be because the Task is asynchronous?
Call Stack:
at System.ThrowHelper.ThrowArgumentOutOfRangeException()
at System.Collections.Generic.List`1.System.Collections.IList.get_Item(Int32 index)
at NHibernate.Engine.ActionQueue.ExecuteActions(IList list)
at NHibernate.Engine.ActionQueue.ExecuteActions()
at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session)
at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event)
at NHibernate.Impl.SessionImpl.Flush()
Session Generation:
internal static ISession CurrentSession {
    get {
        if (HasSession) return Initializer.SessionFactory.GetCurrentSession();
        ISession session = Initializer.SessionFactory.OpenSession();
        session.BeginTransaction();
        CurrentSessionContext.Bind(session);
        return session;
    }
}

private static bool HasSession {
    get { return CurrentSessionContext.HasBind(Initializer.SessionFactory); }
}
Task that I want to access the database from:
_maid = Task.Factory.StartNew(async () => {
    while (true) {
        if (CleaningSession != null)
            CleaningSession(Instance, new CleaningSessionEventArgs { Session = UnitOfWorkProvider.CurrentSession });
        UnitOfWorkProvider.TransactionManager.Commit();
        await Task.Delay(AppSettings.TempPollingInterval, _paycheck.Token);
    }
    // I know this function never returns; I'm using the cancellation token for that.
    // ReSharper disable once FunctionNeverReturns
}, _paycheck.Token);
_maid.GetAwaiter().OnCompleted(() => _maid.Dispose());
Edit: Quick clarification about some of the types above. CleaningSession is an event that is fired to run the various things that need to be done, and _paycheck is the CancellationTokenSource for the Task.
Edit 2: Oh yeah, and this is using NHibernate version 4.0.0.4000
Edit 3: I have since attempted this using a Timer, with the same results.
Edit 4: From what I can see of the source, it's doing a foreach loop on an IList. Questions pertaining to an IndexOutOfRangeException in a foreach loop tend to suggest a concurrency issue. I still don't see how that would be an issue, unless I misunderstand the purpose of ThreadStaticSessionContext.
Edit 5: I thought it might be because of requests bouncing around between threads, so I tried creating a new SessionContext that combines the logic of the WebSessionContext and ThreadStaticSessionContext. Still getting the issue, though...
Edit 6: It seems this has something to do with a listener I have set up to update some audit fields on entities just before they're saved. If I don't run it, the commit occurs properly. Would it be better to do this through an event other than OnPreInsert, or to use an interceptor instead?
After muddling through, I found out exactly where the problem was. Basically, there was a query to load the current user record being run from inside the PreUpdate event in my listener.
I came across two solutions to this. I could cache the user in memory, avoiding the query, but having possibly stale data (not that anything other than the id matters here). Alternatively, I could open a temporary stateless session and use that to look up the user in question.

jdbc batch performance

I'm batching updates with JDBC:
ps = con.prepareStatement("");
ps.addBatch();
ps.executeBatch();
but in the background it seems that the Postgres driver sends the queries one by one to the database.
org.postgresql.core.v3.QueryExecutorImpl:398
for (int i = 0; i < queries.length; ++i)
{
    V3Query query = (V3Query)queries[i];
    V3ParameterList parameters = (V3ParameterList)parameterLists[i];
    if (parameters == null)
        parameters = SimpleQuery.NO_PARAMETERS;

    sendQuery(query, parameters, maxRows, fetchSize, flags, trackingHandler);

    if (trackingHandler.hasErrors())
        break;
}
Is there a way to make it send, say, 1000 at a time to speed things up?
AFAIK there is no server-side batching in the FE/BE protocol, so PgJDBC can't use it. Update: well, I was wrong. PgJDBC (accurate as of 9.3) does send batches of queries to the server if it doesn't need to fetch generated keys. It just queues a bunch of queries up in the send buffer without syncing up with the server after each individual query.
See:
Issue #15: Enable batching when returning generated keys
Issue #195: PgJDBC does not pipeline batches that return generated keys
Even when generated keys are requested the extended query protocol is used to ensure that the query text doesn't need to be sent every time, just the parameters.
Frankly, JDBC batching isn't a great solution in any case. It's easy to use for the app developer, but pretty sub-optimal for performance as the server still has to execute every statement individually - though not parse and plan them individually so long as you use prepared statements.
If autocommit is on, performance will be absolutely pathetic because each statement triggers a commit. Even with autocommit off, running lots of little statements won't be particularly fast, even if you could eliminate the round-trip delays.
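For illustration, the baseline being described here - autocommit off, one reused prepared statement, one batch - looks something like this (my_table and its columns, and the values map, are made up for the example):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public final class BatchUpdateBaseline {
    // Sketch: autocommit off, one reused prepared statement, one batch.
    // "values" maps a hypothetical id column to its new col value.
    public static void run(Connection con, Map<Long, String> values) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE my_table SET col = ? WHERE id = ?")) {
            for (Map.Entry<Long, String> e : values.entrySet()) {
                ps.setString(1, e.getValue());
                ps.setLong(2, e.getKey());
                ps.addBatch();
            }
            ps.executeBatch();  // the server still executes each UPDATE individually
        }
        con.commit();
    }
}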
A better solution for lots of simple UPDATEs can be to:
COPY new data into a TEMPORARY or UNLOGGED table; and
Use UPDATE ... FROM to UPDATE with a JOIN against the copied table
For COPY, see the PgJDBC docs and the COPY documentation in the server docs.
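A sketch of that COPY-then-UPDATE pattern using PgJDBC's CopyManager (the table and column names are invented for the example):
import java.io.StringReader;
import java.sql.Connection;
import java.sql.Statement;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public final class CopyThenUpdate {
    // Sketch: stage the new values with COPY, then apply them in one joined UPDATE.
    // my_table / my_table_staging / id / col are hypothetical names.
    public static void run(Connection con, String csv) throws Exception {
        con.setAutoCommit(false);

        try (Statement st = con.createStatement()) {
            st.execute("CREATE TEMPORARY TABLE my_table_staging (id bigint, col text) ON COMMIT DROP");
        }

        // One streamed round trip for all the new data.
        CopyManager copy = con.unwrap(PGConnection.class).getCopyAPI();
        copy.copyIn("COPY my_table_staging (id, col) FROM STDIN WITH (FORMAT csv)",
                    new StringReader(csv));

        try (Statement st = con.createStatement()) {
            st.executeUpdate("UPDATE my_table t SET col = s.col "
                           + "FROM my_table_staging s WHERE t.id = s.id");
        }
        con.commit();
    }
}
The ON COMMIT DROP staging table disappears when the transaction commits, so there is nothing to clean up afterwards.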
You'll often find it's possible to tweak things so your app doesn't have to send all those individual UPDATEs at all.
