Jdbi select with insanely long list as parameter to bindList - Quarkus

I have a query where I'd like to retrieve a huge number of rows based on their pk ids, using select .... where id in (<ids>) via the fluent API in JDBI, like this:
jdbi(db).withHandle(handle -> handle.createQuery(SQL).bindList("ids", listOfIds).mapToMap().list());
This works as long as the number of ids doesn't exceed what the database (DB2) can handle in an in-clause. Obviously, in my case, the list of ids gets longer than DB2 can handle. So I split it into many lists in a List<List<Integer>> listOfIdLists and create a List<Map<String, Object>> result.
Now I have to somehow iterate over listOfIdLists and for each iteration add the rows to result. Here is one of many tested variants:
List<Map<String, Object>> result = new ArrayList<>();
List<List<Integer>> listOfIdLists = chopListToLists(ids, 10);
Iterator<List<Integer>> oneChopIterator = listOfIdLists.iterator();
while (oneChopIterator.hasNext()) {
    result.addAll(jdbi(db).withHandle(handle -> handle.createQuery(SQL).bindList("ids", oneChopIterator.next()).mapToMap().list()));
}
Obviously, variants with listOfIdLists.forEach and try (Handle h = jdbi(db).open()) { /* iterate and addAll */ } have been tried as well.
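(For reference, the chopListToLists helper used above isn't shown in the question; here is a minimal sketch of such a helper. Only the name and call site come from the snippet above, the body is an assumption:)

static List<List<Integer>> chopListToLists(List<Integer> ids, int chunkSize) {
    // Split ids into consecutive sublists of at most chunkSize elements.
    List<List<Integer>> chunks = new ArrayList<>();
    for (int i = 0; i < ids.size(); i += chunkSize) {
        chunks.add(ids.subList(i, Math.min(i + chunkSize, ids.size())));
    }
    return chunks;
}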
All of this runs in a Quarkus app, and I get exceptions from Arjuna when iterating. For testing, I can add to result without errors when there is no iteration and I instead just pick the first list in listOfIdLists.
The exception is:
2022-11-02 00:13:15,219 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check processing TX 0:ffff7f000101:925d:6361a7cf:0 in state RUN
2022-11-02 00:13:15,239 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012095: Abort of action id 0:ffff7f000101:925d:6361a7cf:0 invoked while multiple threads active within it.
2022-11-02 00:13:15,242 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012381: Action id 0:ffff7f000101:925d:6361a7cf:0 completed with multiple threads - thread Quarkus Main Thread was in progress with java.base#17.0.4/sun.nio.ch.SocketDispatcher.read0(Native Method)
java.base#17.0.4/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47)
java.base#17.0.4/sun.nio.ch.NioSocketImpl.tryRead(NioSocketImpl.java:261)
...
2022-11-02 00:13:15,244 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012108: CheckedAction::check - atomic action 0:ffff7f000101:925d:6361a7cf:0 aborting with 1 threads active!
2022-11-02 00:13:15,315 WARN [io.agr.pool] (Transaction Reaper Worker 0) Datasource 'fs': JDBC resources leaked: 0 ResultSet(s) and 1 Statement(s)
2022-11-02 00:13:15,340 ERROR [no.cen.bat.dbs.Runner] (Quarkus Main Thread) dbsync 2 failed with throwable Unable to advance result set [statement:"select ...
...
2022-11-02 00:13:15,360 ERROR [no.cen.bat.dbs.Runner] (Quarkus Main Thread) org.jdbi.v3.core.result.ResultSetException: Unable to advance result set [statement:"select ...
...
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.31.10] Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.b7.a(b7.java:794)
...
2022-11-02 00:13:15,739 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check processing TX 0:ffff7f000101:925d:6361a7cf:0 in state CANCEL
2022-11-02 00:13:15,741 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012378: ReaperElement appears to be wedged: java.base#17.0.4/sun.nio.ch.SocketDispatcher.read0(Native Method)
java.base#17.0.4/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:47)
...
2022-11-02 00:13:16,242 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check processing TX 0:ffff7f000101:925d:6361a7cf:0 in state CANCEL_INTERRUPTED
2022-11-02 00:13:16,244 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012120: TransactionReaper::check worker Thread[Transaction Reaper Worker 0,5,main] not responding to interrupt when cancelling TX 0:ffff7f000101:925d:6361a7cf:0 -- worker marked as zombie and TX scheduled for mark-as-rollback
2022-11-02 00:13:21,478 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012110: TransactionReaper::check successfuly marked TX 0:ffff7f000101:925d:6361a7cf:0 as rollback only
2022-11-02 00:13:21,479 WARN [com.arj.ats.arjuna] (Quarkus Main Thread) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000101:925d:6361a7cf:0
2022-11-02 00:13:21,480 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012113: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] missed interrupt when cancelling TX 0:ffff7f000101:925d:6361a7cf:0 -- exiting as zombie (zombie count decremented to 0)
This runs on Java 17 with the latest Quarkus (2.13.3) and the IBM drivers that come with quarkus-jdbc-db2. Jdbi version is 3.34.0. Not running a native image.
The reason for the parameterized jdbi(db) is that the application has two datasources. It is replacing a DB job where two databases are linked and data is copied from one database to the other with statements like insert into db1.mytable(a,b,c...) select a,b,c ... from db2.mytable where not exists (...);
The source DB is on Linux and the target on z/OS. The app runs on Ubuntu 20.04.
So basically what the code does is retrieve all pk ids from each table in each database, use CollectionUtils.subtract(list1, list2) to identify ids missing from the target table, and then use the resulting list of ids to retrieve the full rows via a select ... from ... where id in (<ids>) query as described above. The list of Map<String, Object>, which holds the result rows, would then be inserted into the other table where they are missing.
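(As an illustration of the diff step, a hedged sketch using Apache Commons Collections; the variable names are made up, and the lists would come from the pk queries described above:)

// import org.apache.commons.collections4.CollectionUtils;
List<Integer> sourceIds = List.of(1, 2, 3, 4);
List<Integer> targetIds = List.of(1, 3);
// ids present in the source table but missing from the target table
Collection<Integer> missingInTarget = CollectionUtils.subtract(sourceIds, targetIds); // [2, 4]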
The question is: how do I get this working without exceptions? I can brute-force this by deleting and inserting all rows, but I'd rather not.

The stacktrace element:
2022-11-02 00:13:15,244 WARN [com.arj.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012108: CheckedAction::check - atomic action 0:ffff7f000101:925d:6361a7cf:0 aborting with 1 threads active!
shows the Narayana Transaction Reaper thread, which is responsible for timing out transactions, attempting to roll back the transaction. The text "atomic action 0:ffff7f000101:925d:6361a7cf:0 aborting with 1 threads active!" says that an application thread is still running with the transaction context bound to it. This is normal application behavior, and the usual fix for this problem is to extend the timeout, do less work inside the transaction, or add more compute/networking resources to the setup.
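In Quarkus, extending the timeout usually means raising the JTA transaction timeout. A minimal sketch, assuming the standard quarkus-narayana-jta configuration (the class name and value are illustrative):

import io.quarkus.narayana.jta.runtime.TransactionConfiguration;
import javax.transaction.Transactional;

public class TableSync {

    // Raises the transaction timeout for this method only. Alternatively,
    // set it globally in application.properties:
    //   quarkus.transaction-manager.default-transaction-timeout=600s
    @Transactional
    @TransactionConfiguration(timeout = 600) // seconds
    public void syncTable() {
        // iterate over the id chunks and run the per-chunk selects/inserts here
    }
}

Splitting the work so each chunk runs in its own, shorter transaction is the other direction the same advice points to.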

Related

TransientDataAccessResourceException - R2DBC pgdb connection remains in idle in transaction

I have a spring-boot application using WebFlux and r2dbc-postgres. I have discovered a strange issue when trying to do some db operations in a flatMap().
Code example:
@Transactional
public Mono<Void> insertDummyFooBars() {
    return Flux.fromIterable(IntStream.rangeClosed(1, 260).boxed().collect(Collectors.toList()))
            .log()
            .flatMap(i -> this.repository.save(FooBar.builder().foo("test-" + i).build()))
            .log()
            .concatMap(i -> this.repository.findAll())
            .then();
}
It seems like flatMap can process at most 256 elements in a batch (Queues.SMALL_BUFFER_SIZE defaults to 256). So when I ran the code above (with 260 elements) I got a TransientDataAccessResourceException with the following message:
Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException
There is no "Releasing R2DBC Connection" after this exception. The pgdb connection/session remains in the idle in transaction state, and the app is not able to run properly once the pool max size is reached and all of the connections are idle in transaction. I think the connection should be released whether an exception happened or not.
If I use concatMap instead of flatMap, it works as expected - no exception, connection released! It's also fine with flatMap when there are 256 elements or fewer.
Is it possible to force pgdb connection closure? What should I do if I have a lot of db operations in flatMap like this? Should I replace all of them with concatMap? Is there a global solution for this?
Versions:
Postgres: 12.6, Spring-boot: 2.7.6
Demo project
LOG:
2022-12-08 16:32:13.092 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onNext(256)
2022-12-08 16:32:13.092 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [INSERT INTO foo_bar (foo) VALUES ($1)]
2022-12-08 16:32:13.114 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 : onNext(FooBar(id=258, foo=test-1))
2022-12-08 16:32:13.143 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [SELECT foo_bar.* FROM foo_bar]
2022-12-08 16:32:13.143 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | request(1)
2022-12-08 16:32:13.143 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onNext(257)
2022-12-08 16:32:13.144 DEBUG 17932 --- [actor-tcp-nio-1] o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [INSERT INTO foo_bar (foo) VALUES ($1)]
2022-12-08 16:32:13.149 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | onComplete()
2022-12-08 16:32:13.149 INFO 17932 --- [actor-tcp-nio-1] reactor.Flux.Iterable.1 : | cancel()
2022-12-08 16:32:13.160 ERROR 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 : onError(org.springframework.dao.TransientDataAccessResourceException: executeMany; SQL [INSERT INTO foo_bar (foo) VALUES ($1)]; Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException: [08006] Cannot exchange messages because the request queue limit is exceeded)
2022-12-08 16:32:13.167 ERROR 17932 --- [actor-tcp-nio-1] reactor.Flux.FlatMap.2 :
org.springframework.dao.TransientDataAccessResourceException: executeMany; SQL [INSERT INTO foo_bar (foo) VALUES ($1)]; Cannot exchange messages because the request queue limit is exceeded; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$RequestQueueException: [08006] Cannot exchange messages because the request queue limit is exceeded
at org.springframework.r2dbc.connection.ConnectionFactoryUtils.convertR2dbcException(ConnectionFactoryUtils.java:215) ~[spring-r2dbc-5.3.24.jar:5.3.24]
at org.springframework.r2dbc.core.DefaultDatabaseClient.lambda$inConnectionMany$8(DefaultDatabaseClient.java:147) ~[spring-r2dbc-5.3.24.jar:5.3.24]
at reactor.core.publisher.Flux.lambda$onErrorMap$29(Flux.java:7105) ~[reactor-core-3.4.25.jar:3.4.25]
at reactor.core.publisher.Flux.lambda$onErrorResume$30(Flux.java:7158) ~[reactor-core-3.4.25.jar:3.4.25]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94) ~[reactor-core-3.4.25.jar:3.4.25]
I have tried changing Queues.SMALL_BUFFER_SIZE, and also tried adding a concurrency value to the flatMap. It works when I reduce the value to 255, but I don't think that is a good solution.
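Rather than changing the global Queues.SMALL_BUFFER_SIZE, Reactor's flatMap overloads accept an explicit concurrency argument. A hedged sketch of the insert step from the question with bounded concurrency (the value 16 is illustrative, and whether this avoids the request-queue limit for your driver needs to be verified):

@Transactional
public Mono<Void> insertDummyFooBars() {
    return Flux.fromIterable(IntStream.rangeClosed(1, 260).boxed().collect(Collectors.toList()))
            // at most 16 in-flight saves instead of the default prefetch of 256
            .flatMap(i -> this.repository.save(FooBar.builder().foo("test-" + i).build()), 16)
            .then();
}

concatMap is effectively the concurrency-1 case, which lines up with the observation that it works.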

Losing event when reactor is switching scheduler

Context: loading and streaming an Excel file as a Flux, processing the Flux records, and inserting them into a database using R2DBC.
implementation("org.apache.poi:poi:4.1.2") - apache lib which has excel domain of workbooks/sheets/rows/cells
implementation("org.apache.poi:poi-ooxml:4.1.2")
implementation("com.monitorjbl:xlsx-streamer:2.1.0") - streamer wrapper which avoids loading entire excel file into the memory and parses chunks of a file
Converting the file to a Flux (extract the header as the first row, then glue it to every subsequent event/row coming from the Flux):
override fun extract(inputStream: InputStream): Flux<Map<String, String>> {
    val workbook: Workbook = StreamingReader.builder()
        .rowCacheSize(10)  // number of rows to keep in memory (defaults to 10)
        .bufferSize(4096)  // buffer size to use when reading InputStream to file (defaults to 1024)
        .open(inputStream)
    val xsltRows = Flux.fromIterable(workbook).flatMap { Flux.fromIterable(it) }
    return xsltRows.next()
        .map { rowToHeader(it) }
        .flatMapMany { header -> xsltRows.map { combineToMap(header, it) } }
}
Subsequently I process this Flux into domain models for Spring R2DBC repositories and insert the entries into MySQL database.
The problem: I am missing a single Excel row (out of roughly 2k). It is always the same row, and there is nothing special about the data in this row.
Recall the combineToMap method, which associates the header name with every cell value; it also logs the row's logical sequence number as it appears in the file:
private fun combineToMap(header: Map<Int, String>, row: Row): Map<String, String> {
    val mapRow: MutableMap<String, String> = mutableMapOf()
    val logicalRowNum = row.rowNum + 1
    logger.info("Processing row: $logicalRowNum")
    for (cell in row) {
        if (cell.columnIndex >= header.keys.size) {
            continue
        }
        val headerName = header[cell.columnIndex].takeUnless { it.isNullOrBlank() }
            ?: throw IllegalStateException("No header name for ${cell.columnIndex} column index for header " +
                "$header and cell ${cell.stringCellValue} row index ${row.rowNum}")
        mapRow[headerName] = cell.stringCellValue
        mapRow["row"] = logicalRowNum.toString()
    }
    return mapRow
}
When I added the log line I noticed the following:
2020-11-22 15:49:56.684 INFO 20034 --- [ Test worker] c.b.XSLXFileRecordsExtractor : Processing row: 255
2020-11-22 15:49:56.687 INFO 20034 --- [ Test worker] c.b.XSLXFileRecordsExtractor : Processing row: 256
2020-11-22 15:49:56.689 INFO 20034 --- [ Test worker] c.b.XSLXFileRecordsExtractor : Processing row: 257
2020-11-22 15:50:02.458 INFO 20034 --- [tor-tcp-epoll-1] c.b.XSLXFileRecordsExtractor : Processing row: 259
2020-11-22 15:50:02.534 INFO 20034 --- [tor-tcp-epoll-1] c.b.XSLXFileRecordsExtractor : Processing row: 260
2020-11-22 15:50:02.608 INFO 20034 --- [tor-tcp-epoll-1] c.b.XSLXFileRecordsExtractor : Processing row: 261
Note that the scheduler is switched after row 257, and during the switch I lost row 258. The tor-tcp-epoll-1 pool is, as I understand it, the Spring R2DBC internal pool.
In my downstream, if instead of doing repository.save I return a static Mono.just(entity), I get my row 258 back; notice that the scheduler isn't switched either.
2020-11-22 16:01:14.000 INFO 21959 --- [ Test worker] c.b.XSLXFileRecordsExtractor : Processing row: 257
2020-11-22 16:01:14.006 INFO 21959 --- [ Test worker] c.b.XSLXFileRecordsExtractor : Processing row: 258
2020-11-22 16:01:14.009 INFO 21959 --- [ Test worker] c.b.XSLXFileRecordsExtractor : Processing row: 259
Is this a problem with the Excel libraries or my implementation? Why am I losing the record when the thread pool is switched?
P.S. I am not specifying any schedulers or parallel blocks or anything to mess with threads anywhere in my flow apart from calling Spring R2DBC repositories.
I will attempt to rewrite this using implementation("org.apache.commons:commons-csv:1.8") and observe whether the same happens, but if anyone can spot anything obvious or has experienced something similar elsewhere, I would be infinitely grateful.
In the end I switched to commons-csv which does not have the same problem:
2020-11-22 18:34:03.719 INFO 15733 --- [ Test worker] c.b.CSVFileRecordsExtractor : Processing row: 256
2020-11-22 18:34:09.062 INFO 15733 --- [tor-tcp-epoll-1] c.b.CSVFileRecordsExtractor : Processing row: 257
2020-11-22 18:34:09.088 INFO 15733 --- [tor-tcp-epoll-1] c.b.CSVFileRecordsExtractor : Processing row: 258
For the original approach, I tried publishing everything on one scheduler for xlsx-streamer and poi, and even forced Spring R2DBC to publish on the same single-threaded scheduler, and it still skipped the record.
What I could observe is that the moment database callbacks start to arrive, regardless of thread pool, is exactly when the record gets lost; it seems the iterator context gets broken.
I mean, the xlsx library never claimed to be reactive, so no expectations there.
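If anyone needs to stay on the xlsx streamer: since xsltRows above is subscribed twice (once via next() for the header, once via flatMapMany for the body), one workaround to test is pulling the header off the iterator before entering the reactive pipeline, so the workbook is only iterated once. A hedged Java sketch, where rowToHeader and combineToMap stand in for the Kotlin helpers above and a single sheet is assumed; this is a guess at the failure mode, not a confirmed diagnosis:

// Consume the header row imperatively, then wrap the SAME iterator in a
// single Flux so the streaming workbook is only iterated once.
Iterator<Row> rows = workbook.getSheetAt(0).iterator();
Map<Integer, String> header = rowToHeader(rows.next());
Flux<Map<String, String>> records =
        Flux.fromIterable(() -> rows) // single-use Iterable over the remaining rows
            .map(row -> combineToMap(header, row));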

Tuning @JmsListener

Does @JmsListener use a poller under the hood, or is it message-driven? When testing with concurrency=1, it seems to read one message per second:
2016-06-23 09:09:46.117 INFO 13044 --- [enerContainer-1] c.s.s.core.service.PolicyChangedHandler : Received: 1: This is a test
2016-06-23 09:09:46.922 INFO 13044 --- [enerContainer-1] c.s.s.core.service.PolicyChangedHandler : Received: 2: This is a test
2016-06-23 09:09:47.730 INFO 13044 --- [enerContainer-1] c.s.s.core.service.PolicyChangedHandler : Received: 3: This is a test
2016-06-23 09:09:48.535 INFO 13044 --- [enerContainer-1] c.s.s.core.service.PolicyChangedHandler : Received: 4: This is a test
2016-06-23 09:09:49.338 INFO 13044 --- [enerContainer-1] c.s.s.core.service.PolicyChangedHandler : Received: 5: This is a test
2016-06-23 09:09:50.155 INFO 13044 --- [enerContainer-1] c.s.s.core.service.PolicyChangedHandler : Received: 6: This is a test
If it is polling, how do I adjust the polling rate or increase the number of messages read per poll?
If it is message-driven, I don't understand why it is so slow.
Yes, Spring's @JmsListener uses polling under the hood by default.
See DefaultMessageListenerContainer
See also the default receiveTimeout which is 1s.
The receive timeout for each attempt can be configured through the "receiveTimeout" property. From the setReceiveTimeout javadoc:
Set the timeout to use for receive calls, in milliseconds. The default is 1000 ms, that is, 1 second.
NOTE: This value needs to be smaller than the transaction timeout used by the transaction manager (in the appropriate unit, of course). 0 indicates no timeout at all; however, this is only feasible if not running within a transaction manager and generally discouraged since such a listener container cannot cleanly shut down. A negative value such as -1 indicates a no-wait receive operation.
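Both knobs live on the listener container factory; a minimal sketch (bean name and values are illustrative):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("3-10");  // min-max concurrent consumers
    factory.setReceiveTimeout(500L); // ms per receive() attempt; the default is 1000
    return factory;
}

Keep the note above in mind: the receive timeout must stay smaller than any transaction timeout in play.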

Spring-Batch - MultiFileResourcePartitioner - Caused by: java.net.SocketException: Connection reset

I am using FlatFileItemReader and have extended AbstractResource to return a stream from an Amazon S3 object:
S3Object amazonS3Object = s3client.getObject(new GetObjectRequest(bucket, file));
InputStream stream = amazonS3Object.getObjectContent();
return stream;
In my batch job I have also implemented a MultiFileResourcePartitioner, to which I gave the bucket so it partitions across all the files. I am able to read only part of a few files, after which I get a socket reset error. See the error excerpts below:
.ResourcelessTransactionManager$ResourcelessTransaction#122ba881]
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=9
2015-08-24 23:24:03 DEBUG StepContextRepeatCallback:68 - Preparing chunk execution for StepContext: org.springframework.batch.core.scope.context.StepContext#252ce07a
2015-08-24 23:24:03 DEBUG StepContextRepeatCallback:76 - Chunk execution starting: queue size=0
2015-08-24 23:24:03 DEBUG ResourcelessTransactionManager:367 - Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2015-08-24 23:24:03 DEBUG RepeatTemplate:464 - Starting repeat context.
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=1
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=2
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=3
2015-08-24 23:24:03 DEBUG RepeatTemplate:366 - Repeat operation about to start at count=4
2015-08-24 23:24:03 DEBUG DefaultClientConnection:160 - Connection 0.0.0.0:58171<->10.37.135.39:8099 shut down
2015-08-24 23:24:03 DEBUG DefaultClientConnection:176 - Connection 0.0.0.0:58171<->10.37.135.39:8099 closed
Caused by: org.springframework.batch.item.file.NonTransientFlatFileException: Unable to read from resource: [null]
at org.springframework.batch.item.file.FlatFileItemReader.readLine(FlatFileItemReader.java:220)
at org.springframework.batch.item.file.FlatFileItemReader.doRead(FlatFileItemReader.java:173)
at org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader.read(AbstractItemCountingItemStreamItemReader.java:83)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:133)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:121)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy17.read(Unknown Source)
at org.springframework.batch.core.step.item.SimpleChunkProvider.doRead(SimpleChunkProvider.java:91)
at org.springframework.batch.core.step.item.FaultTolerantChunkProvider.read(FaultTolerantChunkProvider.java:87)
... 22 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
My requirement is to process files with millions of records out of an S3 bucket, and the application runs on AWS. I have passed the S3 client configurations with retries and open connections, which didn't help much.
As @Michael Minella mentioned, it might be an option for you to use the Spring Cloud AWS project to get the resources:
@Autowired
private ResourcePatternResolver resourcePatternResolver;

public void resolveAndLoad() throws IOException {
    Resource[] allTxtFilesInFolder = this.resourcePatternResolver.getResources("s3://bucket/name/*.txt");
    Resource[] allTxtFilesInBucket = this.resourcePatternResolver.getResources("s3://bucket/**/*.txt");
    Resource[] allTxtFilesGlobally = this.resourcePatternResolver.getResources("s3://**/*.txt");
}
Then pass the resources to your MultiFileResourcePartitioner and see whether the exception is resolved.
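For instance, wiring those resources into a partitioner could look like the sketch below. It is written against Spring Batch's stock MultiResourcePartitioner; the custom MultiFileResourcePartitioner from the question may differ:

import java.io.IOException;
import org.springframework.batch.core.partition.support.MultiResourcePartitioner;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.ResourcePatternResolver;

// Hand each resolved S3 resource to its own partition so every
// FlatFileItemReader works on exactly one file.
public MultiResourcePartitioner partitioner(ResourcePatternResolver resolver) throws IOException {
    Resource[] resources = resolver.getResources("s3://bucket/name/*.txt");
    MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
    partitioner.setResources(resources);
    partitioner.setKeyName("fileName"); // readers can bind #{stepExecutionContext['fileName']}
    return partitioner;
}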

Transaction deadlock TX

I have a batch process and a regular application that update the same table. My batch has multiple threads that run in multiple sessions. I got the following errors in my batch Tomcat:
2012-09-10 11:30:17,043 [SyncDataThread567] ERROR org.springframework.batch.core.step.AbstractStep - Encountered an error executing the step
aaa.bbb.ccc.framework.orm.DAOException:
--- The error occurred in abc.xml.
--- The error occurred while applying a parameter map.
--- Check the ear.updateServiceTimeParamMap.
--- Check the statement (update procedure failed).
--- Cause: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_UPDATESERVICETIME", line 154
ORA-06512: at line 1
at aaa.bbb.ccc.ddd.eee.Sss.updateServiceTimes(ServiceOrderDAOImpl.java:76)
at sun.reflect.GeneratedMethodAccessor352.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy6.updateServiceTimes(Unknown Source)
at aaa.bbb.ccc.ddd.eeee.Inbddd.updateServiceTimes(InbDataWriter.java:144)
at aaa.bbb.ccc.ddd.eeee.Inbddd.write(InbDataWriter.java:74)
at sun.reflect.GeneratedMethodAccessor270.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy7.write(Unknown Source)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:171)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:150)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:268)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:194)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:74)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:386)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:128)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:264)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:76)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:367)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:214)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:143)
at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:250)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:195)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:109)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:107)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:192)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in ael.xml.
--- The error occurred while applying a parameter map.
--- Check the eraa.updateServiceTimeParamMap.
--- Check the statement (update procedure failed).
--- Cause: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_UPDATESERVICETIME", line 154
ORA-06512: at line 1
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:201)
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:120)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
at com.iit.integration.erl.orm.ServiceOrderDAOImpl.updateServiceTimes(ServiceOrderDAOImpl.java:71)
... 44 more
Caused by: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_IIT_UPDATESERVICETIME", line 154
ORA-06512: at line 1
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:215)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:954)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3390)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4223)
at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:169)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:278)
at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:39)
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:189)
... 49 more
This is my Oracle trace file:
Redo thread mounted by this instance: 1
Oracle process number: 63
Windows thread id: 2464, image: ORACLE.EXE (SHAD)
*** 2012-09-10 11:30:12.384
*** SERVICE NAME:(SYS$USERS) 2012-09-10 11:30:12.244
*** SESSION ID:(411.3766) 2012-09-10 11:30:12.244
DEADLOCK DETECTED
[Transaction Deadlock]
Current SQL statement for this session:
UPDATE SP SET SRVC_TM = :B4 , MODIFICATION_DTM=SYSDATE WHERE OPERATION_AREA_CD = :B3 AND ROUTE_TYP = :B2 AND OBJECTID = :B1
----- PL/SQL Call Stack -----
object line object
handle number name
000000057D9B52E8 134 function ER.FUNC_UPDATESERVICETIME
000000057C3A5848 1 anonymous block
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-00040020-0017465b 63 411 X 94 364 X
TX-00020020-00166804 94 364 X 63 411 X
session 411: DID 0001-003F-00000033 session 364: DID 0001-005E-00000016
session 364: DID 0001-005E-00000016 session 411: DID 0001-003F-00000033
Rows waited on:
Session 364: obj - rowid = 0000CC64 - AAAMxkAA2AAA1q2AAY
(dictionary objn - 52324, file - 54, block - 219830, slot - 24)
Session 411: obj - rowid = 0000CC64 - AAAMxkAA2AAA1q2AAR
(dictionary objn - 52324, file - 54, block - 219830, slot - 17)
Information on the OTHER waiting sessions:
Session 364:
pid=94 serial=6104 audsid=693767 user: 57/ER
O/S info: user: , term: , ospid: 1234, machine: abc
program:
Current SQL Statement:
UPDATE SP SET ORIG_NO='751' ,ORIG_SEQ_NO=0,SP_ROUTING_STATUS='A', USER_ID='XXXX', MODIFICATION_DTM=SYSDATE WHERE OBJECTID IN ('104883389','104883404','104883407','104883440','104883443','104883455','104883467','104883509','104883545','104883764','104883788','104883806','104883812','104883821','104883836','104883854','104883863','104883893','104883899','104883931','104883937','104883964','104884084','104884117','104884120','104884138','104884141','104885439','104883386','104883422','104883560','104883587','104883767','104883785','104883809','104883824','104883845','104883851','104883884','104883890','104883955','104883958','104884012','104884093','104884114','104885412','104885436','104885442','104885445','104883383','104883395','104883413','104883419','104883464','104883494','104883524','104883773','104883842','104883917','104883920','104883943','104883949','104883967','104883997','104884051','104884105','104884108','104885451','104883437','104883461','104883476','104883497','104883500','104883503','104883566','104883584','104883614','104883794','104883800','104883815','104883830','104883857','104883869','104883923','104883952','104884048','104884057','104884063','104884066','104884081','104884087','104884102','104884111','104884135','104885415','104885424','104885427','104886297','104886308','104883398','104883410','104883458','104883473','104883512','104883515','104883527','104883530','104883536','104883554','104883596','104883770','104883782','104883803','104883827','104883833','104883839','104883848','104883866','104883875','104883878','104883896','104883902','104883914','104883970','104883976','104884060','104884069','104884072','104884123','104884132','104885409','104885430','104883425','104883431','104883446','104883449','104883452','104883482','104883506','104883518','104883539','104883548','104883569','104883575','104883578','104883623','104883779','104883797','104883818','104883860','104883925','104883934','104883940','104883946','104883973','104883979','104883982','104884078','104884090','104884096','104885421','104885448','104885454','104883392','104883416','104883428','104883479','104883491','104883521','104883542','104883551','104883557','104883563','104883872','104883911','104883928','104883961','104883994','104884018','104884054','104884099','104884129','104886299','104883401','104883434','104883470','104883485','104883533','104883572','104883581','104883776','104883791','104883881','104883887','104883905','104883908','104884075','104884126','104885418','104885433')
End of information on OTHER waiting sessions.
===================================================
PROCESS STATE
-------------
Process global information:
process: 000000057B3343D8, call: 0000000574FCBF78, xact: 0000000576A07F60, curses: 000000057E48D858, usrses: 000000057E48D858
----------------------------------------
SO: 000000057B3343D8, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=63, calls cur/top: 0000000574FCBF78/0000000574FD4C48, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 108 0 4
last post received-location: aaa
last process to post me: 7e31d890 1 6
last post sent: 0 0 112
last post sent-location: bbb
last process posted by me: 7b334c00 3 0
(latch info) wait_event=0 bits=10
holding (efd=19) 4745310 Parent+children enqueue hash chains level=4
Location from where latch is held: cmi: gpl:
Context saved from call: 0
state=busy, wlstate=free
recovery area:
Dump of memory from 0x000000057E300810 to 0x000000057E300830
57E300810 00000000 00000000 00000000 00000000 [...............
I have been researching this issue for the past few days. From what I've seen, some say it's an indexing issue, some say it's INITRANS... I am not sure. This deadlock happens very rarely, but whenever it happens it's a big issue.
So please help me: what should I look for, and how can I solve this issue?
Look at your two UPDATE statements and try to understand why they would request the same row, but in a different order. That's how almost all deadlocks happen.
There are several possible ways to avoid this error:
1) Update rows in the same order. You may be able to do this with a hint to force a full table scan or index. (I'm not 100% certain that using the same access method will always avoid this issue, but in practice it does seem to fix it. See my old question for a painful discussion about deadlocks when the access method is the same.)
2) Do not run your two processes at the same time.
3) Handle exceptions. For example, something like this:
declare
    deadlock exception;
    pragma exception_init(deadlock, -00060);
begin
    <code>
exception when deadlock then
    <do something about it here, such as re-try>
end;
/
You need to add the exception handling to both blocks of code. And a deadlock will still generate a trace file, which will probably slow things down and take up a lot of space where no one expects it.
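On the Java side, option 3 can be sketched by checking the vendor error code on the SQLException; the Oracle JDBC driver surfaces ORA-00060 as error code 60. The helper below is illustrative, and updateServiceTimes stands in for a hypothetical plain-JDBC version of the DAO call from the stack trace:

// Retry the update a few times when Oracle reports a deadlock.
static void updateServiceTimesWithRetry(Connection conn) throws SQLException, InterruptedException {
    final int maxAttempts = 3;
    for (int attempt = 1; ; attempt++) {
        try {
            updateServiceTimes(conn); // hypothetical plain-JDBC version of the update
            return;
        } catch (SQLException e) {
            if (e.getErrorCode() != 60 || attempt == maxAttempts) {
                throw e; // not a deadlock, or out of retries
            }
            Thread.sleep(100L * attempt); // brief backoff before retrying
        }
    }
}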
