Transaction deadlock TX - Oracle

I have a batch process and a regular application that update the same table. My batch has multiple threads that run in multiple sessions. I got the following error in my batch's Tomcat log:
2012-09-10 11:30:17,043 [SyncDataThread567] ERROR org.springframework.batch.core.step.AbstractStep - Encountered an error executing the step
aaa.bbb.ccc.framework.orm.DAOException:
--- The error occurred in abc.xml.
--- The error occurred while applying a parameter map.
--- Check the ear.updateServiceTimeParamMap.
--- Check the statement (update procedure failed).
--- Cause: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_UPDATESERVICETIME", line 154
ORA-06512: at line 1
at aaa.bbb.ccc.ddd.eee.Sss.updateServiceTimes(ServiceOrderDAOImpl.java:76)
at sun.reflect.GeneratedMethodAccessor352.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy6.updateServiceTimes(Unknown Source)
at aaa.bbb.ccc.ddd.eeee.Inbddd.updateServiceTimes(InbDataWriter.java:144)
at aaa.bbb.ccc.ddd.eeee.Inbddd.write(InbDataWriter.java:74)
at sun.reflect.GeneratedMethodAccessor270.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy7.write(Unknown Source)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:171)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:150)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:268)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:194)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:74)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:386)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:128)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:264)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:76)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:367)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:214)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:143)
at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:250)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:195)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:109)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:107)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:192)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in ael.xml.
--- The error occurred while applying a parameter map.
--- Check the eraa.updateServiceTimeParamMap.
--- Check the statement (update procedure failed).
--- Cause: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_UPDATESERVICETIME", line 154
ORA-06512: at line 1
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:201)
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:120)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
at com.iit.integration.erl.orm.ServiceOrderDAOImpl.updateServiceTimes(ServiceOrderDAOImpl.java:71)
... 44 more
Caused by: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_IIT_UPDATESERVICETIME", line 154
ORA-06512: at line 1
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:215)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:954)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3390)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4223)
at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:169)
at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:278)
at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:39)
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:189)
... 49 more
This is my Oracle trace file:
Redo thread mounted by this instance: 1
Oracle process number: 63
Windows thread id: 2464, image: ORACLE.EXE (SHAD)
*** 2012-09-10 11:30:12.384
*** SERVICE NAME:(SYS$USERS) 2012-09-10 11:30:12.244
*** SESSION ID:(411.3766) 2012-09-10 11:30:12.244
DEADLOCK DETECTED
[Transaction Deadlock]
Current SQL statement for this session:
UPDATE SP SET SRVC_TM = :B4 , MODIFICATION_DTM=SYSDATE WHERE OPERATION_AREA_CD = :B3 AND ROUTE_TYP = :B2 AND OBJECTID = :B1
----- PL/SQL Call Stack -----
object line object
handle number name
000000057D9B52E8 134 function ER.FUNC_UPDATESERVICETIME
000000057C3A5848 1 anonymous block
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-00040020-0017465b 63 411 X 94 364 X
TX-00020020-00166804 94 364 X 63 411 X
session 411: DID 0001-003F-00000033 session 364: DID 0001-005E-00000016
session 364: DID 0001-005E-00000016 session 411: DID 0001-003F-00000033
Rows waited on:
Session 364: obj - rowid = 0000CC64 - AAAMxkAA2AAA1q2AAY
(dictionary objn - 52324, file - 54, block - 219830, slot - 24)
Session 411: obj - rowid = 0000CC64 - AAAMxkAA2AAA1q2AAR
(dictionary objn - 52324, file - 54, block - 219830, slot - 17)
Information on the OTHER waiting sessions:
Session 364:
pid=94 serial=6104 audsid=693767 user: 57/ER
O/S info: user: , term: , ospid: 1234, machine: abc
program:
Current SQL Statement:
UPDATE SP SET ORIG_NO='751' ,ORIG_SEQ_NO=0,SP_ROUTING_STATUS='A', USER_ID='XXXX', MODIFICATION_DTM=SYSDATE WHERE OBJECTID IN ('104883389','104883404','104883407','104883440','104883443','104883455','104883467','104883509','104883545','104883764','104883788','104883806','104883812','104883821','104883836','104883854','104883863','104883893','104883899','104883931','104883937','104883964','104884084','104884117','104884120','104884138','104884141','104885439','104883386','104883422','104883560','104883587','104883767','104883785','104883809','104883824','104883845','104883851','104883884','104883890','104883955','104883958','104884012','104884093','104884114','104885412','104885436','104885442','104885445','104883383','104883395','104883413','104883419','104883464','104883494','104883524','104883773','104883842','104883917','104883920','104883943','104883949','104883967','104883997','104884051','104884105','104884108','104885451','104883437','104883461','104883476','104883497','104883500','104883503','104883566','104883584','104883614','104883794','104883800','104883815','104883830','104883857','104883869','104883923','104883952','104884048','104884057','104884063','104884066','104884081','104884087','104884102','104884111','104884135','104885415','104885424','104885427','104886297','104886308','104883398','104883410','104883458','104883473','104883512','104883515','104883527','104883530','104883536','104883554','104883596','104883770','104883782','104883803','104883827','104883833','104883839','104883848','104883866','104883875','104883878','104883896','104883902','104883914','104883970','104883976','104884060','104884069','104884072','104884123','104884132','104885409','104885430','104883425','104883431','104883446','104883449','104883452','104883482','104883506','104883518','104883539','104883548','104883569','104883575','104883578','104883623','104883779','104883797','104883818','104883860','104883925','104883934','104883940','104883946','104883973','104883979','104883982','104884078','104884090','104884096','104885421','104885448','104885454','104883392','104883416','104883428','104883479','104883491','104883521','104883542','104883551','104883557','104883563','104883872','104883911','104883928','104883961','104883994','104884018','104884054','104884099','104884129','104886299','104883401','104883434','104883470','104883485','104883533','104883572','104883581','104883776','104883791','104883881','104883887','104883905','104883908','104884075','104884126','104885418','104885433')
End of information on OTHER waiting sessions.
===================================================
PROCESS STATE
-------------
Process global information:
process: 000000057B3343D8, call: 0000000574FCBF78, xact: 0000000576A07F60, curses: 000000057E48D858, usrses: 000000057E48D858
----------------------------------------
SO: 000000057B3343D8, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=63, calls cur/top: 0000000574FCBF78/0000000574FD4C48, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 108 0 4
last post received-location: aaa
last process to post me: 7e31d890 1 6
last post sent: 0 0 112
last post sent-location: bbb
last process posted by me: 7b334c00 3 0
(latch info) wait_event=0 bits=10
holding (efd=19) 4745310 Parent+children enqueue hash chains level=4
Location from where latch is held: cmi: gpl:
Context saved from call: 0
state=busy, wlstate=free
recovery area:
Dump of memory from 0x000000057E300810 to 0x000000057E300830
57E300810 00000000 00000000 00000000 00000000 [...............
I have been researching this issue for the past few days. From what I have seen, some say it is an indexing issue, others say it is INITRANS... I am not sure. This deadlock happens very rarely, but whenever it does, it is a big issue.
So please help me: what should I look for, and how can I solve this issue?

Look at your two UPDATE statements and try to understand why they would request the same rows, but in a different order. That's how almost all
deadlocks happen.
There are several possible ways to avoid this error:
1) Update rows in the same order. You may be able to do this with a hint to force a full table scan or index. (I'm not 100% certain that using the same access method will always avoid this issue, but in practice it does seem to fix it. See my old question for a painful discussion about deadlocks when the access method is the same.)
2) Do not run your two processes at the same time.
3) Handle exceptions. For example, something like this:
declare
deadlock exception;
pragma exception_init(deadlock, -00060);
begin
<code>
exception when deadlock then
<do something about it here, such as re-try>
end;
/
You need to add the exception handling to both blocks of code. And a deadlock will still generate a trace file, which will probably
slow things down and take up a lot of space where no one expects it. A Java-side sketch of the retry approach follows below.
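For the retry option on the Java side, here is a minimal sketch (a hypothetical helper, not from the original application; it assumes a plain JDBC DataSource and that ORA-00060 surfaces as SQLException error code 60). Updating keys in a consistent order, as in option 1, remains the better fix; the retry is a safety net:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class DeadlockRetry {

    private static final int ORA_DEADLOCK = 60; // ORA-00060
    private static final int MAX_ATTEMPTS = 3;

    // Runs a single-statement transaction, retrying a few times if Oracle
    // reports a deadlock. ORA-00060 rolls back only the losing statement,
    // so we roll back the rest of the transaction ourselves before retrying.
    public static void updateWithRetry(DataSource ds, String sql, Object... params)
            throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection con = ds.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < params.length; i++) {
                        ps.setObject(i + 1, params[i]);
                    }
                    ps.executeUpdate();
                    con.commit();
                    return;
                } catch (SQLException e) {
                    con.rollback();
                    if (e.getErrorCode() != ORA_DEADLOCK || attempt >= MAX_ATTEMPTS) {
                        throw e;
                    }
                    // else: loop and retry the whole transaction
                }
            }
        }
    }
}

In a Spring Batch step like the one in the stack trace, a similar effect can also be achieved declaratively by marking the exception as retryable on the fault-tolerant chunk configuration.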

Related

Exception 'Cannot get a connection, pool error Timeout waiting for idle object' when using 'DBCPConnectionPoolLookup' service in NiFi

I'm trying to use the 'DBCPConnectionPoolLookup' service in 'ExecuteGroovyScript' to dynamically query the required database based on the 'database.name' parameter in the input flow file.
The processor is successfully able to get the corresponding 'DBCPConnectionPool' service for querying, but I'm getting the exception java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object. By contrast, if I directly use the 'DBCPConnectionPool' service without the 'Lookup' service, it works fine without changing any configuration.
I access the service as follows:
def clientDb = CTL.SQLLookupService.getConnection(flowFile.getAttributes())
Then use the 'clientDb' object to query as:
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
I have tried increasing Max Wait Time and Max Total Connections to higher values in the 'DBCPConnectionPool' service; it does not help.
[Image links in the original post: exception; configuration of 'ExecuteGroovyScript'; configuration of 'DBCPConnectionPool' service; configuration of 'DBCPConnectionPoolLookup' service]
Script Code
import org.apache.nifi.distributed.cache.client.Deserializer
import org.apache.nifi.distributed.cache.client.Serializer
import org.apache.nifi.distributed.cache.client.exception.DeserializationException
import org.apache.nifi.distributed.cache.client.exception.SerializationException
import groovy.sql.Sql
import java.time.*
try {
    def flowFile = session.get()
    def isBootstrap = flowFile."isBootstrap"
    def timseriesSqlQuery = 'SELECT id FROM [dbo].[Points] where ([MappedToEquipment] = \'Mapped\' or PointStatus = \'Mapped\')'
    def timseriesSqlCountQuery = 'SELECT count(id) as c FROM [dbo].[Points] where ([MappedToEquipment] = \'Mapped\' or PointStatus = \'Mapped\')'
    def spaceSqlQuery = 'select id from (select id from dbo.organization union select id from dbo.facility union select id from dbo.building union select id from dbo.floor union select id from dbo.wing union select id from dbo.room union select id from dbo.systems) tmp'
    def spaceSqlCountQuery = 'select count(id) as c from (select id from dbo.organization union select id from dbo.facility union select id from dbo.building union select id from dbo.floor union select id from dbo.wing union select id from dbo.room union select id from dbo.systems) tmp'
    def cache = CTL.lastIngestTimeMap
    def clientDb = CTL.SQLLookupService.getConnection(flowFile.getAttributes()) //SQL.staticService
    int numRowsTimeSeries = 0
    int numRowsSpace = 0
    clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx -> numRowsTimeSeries = row.c }
    clientDb.rows(spaceSqlCountQuery).eachWithIndex { row, idx -> numRowsSpace = row.c }
}
Exception from NiFi logs:
2019-09-12 06:18:33,629 ERROR [Timer-Driven Process Thread-3] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] failed to process session due to java.lang.ClassCastException; Processor Administratively Yielded for 1 sec: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 WARN [Timer-Driven Process Thread-3] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] due to uncaught Exception: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 ERROR [Timer-Driven Process Thread-9] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] failed to process session due to java.lang.ClassCastException; Processor Administratively Yielded for 1 sec: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 WARN [Timer-Driven Process Thread-9] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] due to uncaught Exception: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,708 ERROR [Timer-Driven Process Thread-10] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=a1ec4496-dca3-38ab-a47b-43d7ff95e40f] org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy89.getConnection(Unknown Source)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:142)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:451)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
... 21 common frames omitted
2019-09-12 06:18:33,708 ERROR [Timer-Driven Process Thread-2] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=54d1e251-88f2-33f3-0489-722879a802bd] org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy89.getConnection(Unknown Source)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:142)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:451)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
... 21 common frames omitted
Finally, after bringing down NiFi twice, I found the solution. The problem was in the code I was using: I used the object returned by CTL.index.getConnection(flowFile.getAttributes()) to query the SQL table, but that object is actually a connection. Because of this, NiFi used up all available connections to SQL, so even when I reverted to the 'DBCPConnectionPool' service instead of the 'Lookup' service I was getting the above error. Whenever I restarted NiFi it would work fine again for a while.
The correct code to use in your script with the 'Lookup' service is:
def connectionObj = CTL.index.getConnection(flowFile.getAttributes())
def clientDb = new Sql(connectionObj)
Now use the 'clientDb' object to query your table
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
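Given that the root cause was pool exhaustion, one caveat worth adding (a sketch under assumptions, not from the original post): the borrowed connection should be closed when the work is done so it returns to the pool. In plain Java terms, with a DataSource as an illustrative stand-in for the NiFi lookup service:

import groovy.sql.Sql;
import java.sql.Connection;
import javax.sql.DataSource;

public class PooledQueryExample {
    // Illustrative only: in the NiFi script the connection comes from
    // CTL.<service>.getConnection(flowFile.getAttributes()) instead.
    static Object countRows(DataSource pool, String countQuery) throws Exception {
        Connection connectionObj = pool.getConnection();
        Sql clientDb = new Sql(connectionObj);
        try {
            return clientDb.firstRow(countQuery).get("c");
        } finally {
            clientDb.close(); // returns the borrowed connection to the DBCP pool
        }
    }
}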

Spring ldaptemplate update group with large membership issue

I have issues updating groups in Active Directory with more than 1500 members; it's only trying to modify the member attribute.
I have no issues updating groups with fewer members, and I can also add a new group with many members.
However, if the group is too large, the update fails. Even if I try to update the large group down to just one member, it still fails with the same error.
Code fails on the modifyAttributes line:
ModificationItem[] modList =
    nameContext.getDirContextAdapter().getModificationItems();
writeADTemplate.modifyAttributes(nameContext.getName(), modList);
Stack trace below:
org.springframework.ldap.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A122A, problem 6005 (ENTRY_EXISTS), data 0
nested exception is javax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A122A, problem 6005 (ENTRY_EXISTS), data 0
remaining name 'cn=Atlassian Users,ou=Groups'
at org.springframework.ldap.support.LdapUtils.convertLdapException(LdapUtils.java:169)
at org.springframework.ldap.core.LdapTemplate.executeWithContext(LdapTemplate.java:810)
at org.springframework.ldap.core.LdapTemplate.executeReadWrite(LdapTemplate.java:802)
at org.springframework.ldap.core.LdapTemplate.modifyAttributes(LdapTemplate.java:967)
more ...
Caused by: javax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A122A, problem 6005 (ENTRY_EXISTS), data 0
remaining name 'cn=Atlassian Users,ou=Groups'
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(Unknown Source)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(Unknown Source)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(Unknown Source)
at com.sun.jndi.ldap.LdapCtx.c_modifyAttributes(Unknown Source)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_modifyAttributes(Unknown Source)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(Unknown Source)
at javax.naming.directory.InitialDirContext.modifyAttributes(Unknown Source)
at org.springframework.ldap.core.LdapTemplate$19.executeWithContext(LdapTemplate.java:969)
at org.springframework.ldap.core.LdapTemplate.executeWithContext(LdapTemplate.java:807)
... 88 more
OK, my real issue is that Active Directory will not return a multi-valued attribute like member if it has more than 1500 values.
When I was fetching the current group members it returned 0 values, so my code was trying to add all the members back to the group.
Looks like I'll have to figure out how to use
DefaultIncrementalAttributesMapper to get all the members.
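For reference, a minimal sketch of that incremental lookup (assuming Spring LDAP 2.x, where DefaultIncrementalAttributesMapper provides a static convenience method; the group DN below is the one from the stack trace):

import java.util.List;
import javax.naming.ldap.LdapName;

import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.DefaultIncrementalAttributesMapper;
import org.springframework.ldap.support.LdapNameBuilder;

public class GroupMembers {
    // Fetches every value of the multi-valued 'member' attribute, following
    // AD's ranged retrieval (member;range=0-1499, member;range=1500-2999, ...).
    static List<Object> allMembers(LdapTemplate ldapTemplate) {
        LdapName groupDn = LdapNameBuilder.newInstance()
                .add("ou", "Groups")
                .add("cn", "Atlassian Users")
                .build();
        return DefaultIncrementalAttributesMapper
                .lookupAttributeValues(ldapTemplate, groupDn, "member");
    }
}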

org.apache.http.nio.reactor.IOReactorException: I/O dispatch worker terminated abnormally

I have a service which uses Apache HttpAsyncClient (versions: httpasyncclient-4.0.2.jar, httpcore-4.4.3.jar, httpcore-nio-4.3.3.jar).
All requests start failing some time after the async client starts, with the following being the initial exception:
[#|2016-03-16T22:31:59.376-0700|SEVERE|glassfish3.1.2|org.apache.http.impl.nio.client.InternalHttpAsyncClient|_ThreadID=564;_ThreadName=Thread-6;|I/O reactor terminated abnormally
org.apache.http.nio.reactor.IOReactorException: I/O dispatch worker terminated abnormally
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:357)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:189)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase.doExecute(CloseableHttpAsyncClientBase.java:67)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase.access$000(CloseableHttpAsyncClientBase.java:38)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:57)
at java.lang.Thread.run(Unknown Source)
Caused by: RestException(statusCode=500, code=null, message=I/O operation failed, developerMessage=RestException(statusCode=500, code=null, message=I/O operation failed, developerMessage=null)
at com.notificationservice.analytics.client.AsyncResponse$2.failed(AsyncResponse.java:178)
at org.apache.http.concurrent.BasicFuture.failed(BasicFuture.java:134)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.failed(DefaultClientExchangeHandlerImpl.java:258)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.exception(HttpAsyncRequestExecutor.java:127)
at org.apache.http.impl.nio.client.InternalIODispatch.onException(InternalIODispatch.java:68)
at org.apache.http.impl.nio.client.InternalIODispatch.onException(InternalIODispatch.java:37)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.outputReady(AbstractIODispatch.java:154)
at org.apache.http.impl.nio.reactor.BaseIOReactor.writable(BaseIOReactor.java:180)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:342)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
at java.lang.Thread.run(Unknown Source)
)
at com.notificationservice.client.AsyncResponse$2.failed(AsyncResponse.java:178)
at org.apache.http.concurrent.BasicFuture.failed(BasicFuture.java:134)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.failed(DefaultClientExchangeHandlerImpl.java:258)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.exception(HttpAsyncRequestExecutor.java:127)
at org.apache.http.impl.nio.client.InternalIODispatch.onException(InternalIODispatch.java:68)
at org.apache.http.impl.nio.client.InternalIODispatch.onException(InternalIODispatch.java:37)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.outputReady(AbstractIODispatch.java:154)
at org.apache.http.impl.nio.reactor.BaseIOReactor.writable(BaseIOReactor.java:180)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:342)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
... 1 more
The same problem happens with newer versions: httpasyncclient-4.1.1.jar, httpcore-4.4.4.jar, httpcore-nio-4.4.4.jar.
Any insight would be highly appreciated. Is there some IOReactorConfig parameter which needs to be changed?
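For what it's worth, this is how the IOReactorConfig parameters the question mentions are set on the 4.1 client (a sketch with illustrative values; tuning them is not guaranteed to address the underlying failure):

import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.impl.nio.reactor.IOReactorConfig;

public class AsyncClientConfig {
    public static CloseableHttpAsyncClient build() {
        // Values are illustrative, not a recommendation.
        IOReactorConfig reactorConfig = IOReactorConfig.custom()
                .setIoThreadCount(Runtime.getRuntime().availableProcessors())
                .setConnectTimeout(30000) // ms
                .setSoTimeout(30000)      // ms
                .build();
        CloseableHttpAsyncClient client = HttpAsyncClients.custom()
                .setDefaultIOReactorConfig(reactorConfig)
                .build();
        client.start();
        return client;
    }
}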
I would say something is wrong with your REST parameters. Status code 500 comes from the server, so your requests are reaching it.
Caused by: RestException(statusCode=500, code=null, message=I/O operation failed, developerMessage=RestException(statusCode=500, code=null, message=I/O operation failed, developerMessage=null

SonarQube 5.2 MySQLTransactionRollbackException: Deadlock found when trying to get lock

Using SonarQube 5.2 I’m seeing the following Deadlock issue:
05:48:22 ERROR: Error during Sonar runner execution
05:48:22 java.lang.IllegalStateException: Fail to execute request
[code=500, url=http://192.168.109.6/api/ce/submit?projectKey=CoprHD&projectName=CoprHD-controller&projectBranch=bugfix-COP-19001-hotfix]:
{"errors":[{"msg":"\n### Error updating database.
Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException:
Deadlock found when trying to get lock; try restarting transaction\n
### The error may involve org.sonar.db.user.RoleMapper.insertGroupRole-Inline\n### The error occurred while setting parameters\n
### SQL: INSERT INTO group_roles (group_id, resource_id, role) VALUES (?, ?, ?)\n
### Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction"}]}
05:48:22 at org.sonar.batch.report.ReportPublisher.uploadMultiPartReport(ReportPublisher.java:182)
05:48:22 at org.sonar.batch.report.ReportPublisher.sendOrDumpReport(ReportPublisher.java:151)
05:48:22 at org.sonar.batch.report.ReportPublisher.execute(ReportPublisher.java:115)
05:48:22 at org.sonar.batch.phases.PhaseExecutor.publishReportJob(PhaseExecutor.java:116)
05:48:22 at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:106)
05:48:22 at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:192)
05:48:22 at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:100)
05:48:22 at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:85)
05:48:22 at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:258)
05:48:22 at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:253)
05:48:22 at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:243)
05:48:22 at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:100)
05:48:22 at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:85)
05:48:22 at org.sonar.batch.bootstrap.GlobalContainer.executeAnalysis(GlobalContainer.java:153)
05:48:22 at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:110)
05:48:22 at org.sonar.runner.batch.BatchIsolatedLauncher.execute(BatchIsolatedLauncher.java:55)
05:48:22 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
05:48:22 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
05:48:22 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
05:48:22 at java.lang.reflect.Method.invoke(Method.java:606)
05:48:22 at org.sonar.runner.impl.IsolatedLauncherProxy.invoke(IsolatedLauncherProxy.java:61)
05:48:22 at com.sun.proxy.$Proxy0.execute(Unknown Source)
05:48:22 at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:275)
05:48:22 at org.sonar.runner.api.EmbeddedRunner.runAnalysis(EmbeddedRunner.java:166)
05:48:22 at org.sonar.runner.api.EmbeddedRunner.runAnalysis(EmbeddedRunner.java:153)
05:48:22 at org.sonar.runner.cli.Main.runAnalysis(Main.java:118)
05:48:22 at org.sonar.runner.cli.Main.execute(Main.java:80)
05:48:22 at org.sonar.runner.cli.Main.main(Main.java:66)
Searching for similar reports, I found this reference which says the issue was resolved: https://jira.sonarsource.com/browse/SONAR-1945
I also found a reference saying that transaction-isolation should be changed from REPEATABLE-READ to READ-COMMITTED. Is this a reasonable thing to do with MySQL for Sonar?
mysql> show variables like '%wait_timeout%';
+--------------------------+----------+
| Variable_name | Value |
+--------------------------+----------+
| innodb_lock_wait_timeout | 500 |
| lock_wait_timeout | 31536000 |
| wait_timeout | 28800 |
+--------------------------+----------+
3 rows in set (0.25 sec)
mysql> show variables like '%tx_isolation%';
+---------------+-----------------+
| Variable_name | Value |
+---------------+-----------------+
| tx_isolation | REPEATABLE-READ |
+---------------+-----------------+
1 row in set (0.00 sec)
mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation;
+-----------------------+-----------------+
| @@GLOBAL.tx_isolation | @@tx_isolation |
+-----------------------+-----------------+
| REPEATABLE-READ | REPEATABLE-READ |
+-----------------------+-----------------+
For further info about the deadlock issue, here is some data.
Does anyone know whether this is something that should be tweaked in MySQL, or an issue that needs to be fixed in the SonarQube app?
mysql> show engine innodb status
=====================================
2015-12-18 07:42:25 7f61f03cd700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 31 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 44635 srv_active, 0 srv_shutdown, 1284536 srv_idle
srv_master_thread log flush and writes: 1329157
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 224853
OS WAIT ARRAY INFO: signal count 1727534
Mutex spin waits 1578113, rounds 7231747, OS waits 74673
RW-shared spins 483413, rounds 5257332, OS waits 110301
RW-excl spins 197945, rounds 3737144, OS waits 35005
Spin rounds per wait: 4.58 mutex, 10.88 RW-shared, 18.88 RW-excl
------------------------
LATEST DETECTED DEADLOCK
------------------------
2015-12-17 05:46:47 7f61f0594700
*** (1) TRANSACTION:
TRANSACTION 17641507, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 8 lock struct(s), heap size 1184, 7 row lock(s), undo log entries 9
MySQL thread id 5021, OS thread handle 0x7f61f071a700, query id 33269201 localhost 127.0.0.1 sonar update
INSERT INTO group_roles (group_id, resource_id, role)
VALUES (null, 1515106, 'codeviewer')
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 310 page no 6 n bits 472 index `group_roles_resource` of table `sonar`.`group_roles` trx id 17641507 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) TRANSACTION:
TRANSACTION 17641509, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
7 lock struct(s), heap size 1184, 4 row lock(s), undo log entries 3
MySQL thread id 5005, OS thread handle 0x7f61f0594700, query id 33269204 localhost 127.0.0.1 sonar update
INSERT INTO group_roles (group_id, resource_id, role)
VALUES (1, 1515107, 'admin')
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 310 page no 6 n bits 472 index `group_roles_resource` of table `sonar`.`group_roles` trx id 17641509 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 310 page no 6 n bits 472 index `group_roles_resource` of table `sonar`.`group_roles` trx id 17641509 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** WE ROLL BACK TRANSACTION (2)
------------
TRANSACTIONS
------------
Trx id counter 18864174
Purge done for trx's n:o
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 7482, OS thread handle 0x7f61f03cd700, query id 38116433 localhost sonar init
show engine innodb status
---TRANSACTION 18864038, not started
MySQL thread id 7478, OS thread handle 0x7f61f3349700, query id 38115903 localhost 127.0.0.1 sonar cleaning up
---TRANSACTION 18864173, not started
MySQL thread id 7475, OS thread handle 0x7f61f040e700, query id 38116432 localhost 127.0.0.1 sonar cleaning up
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
7146308 OS file reads, 6478063 OS file writes, 1783568 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 3077, seg size 3079, 22965 merges
merged operations:
insert 45672, delete mark 7198683, delete 214896
discarded operations:
insert 0, delete mark 0, delete 0
Hash table size 6374777, node heap has 11107 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 219765124434
Log flushed up to 219765124434
Pages flushed up to 219765124434
Last checkpoint at 219765124434
0 pending log writes, 0 pending chkp writes
1189792 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 3296722944; in additional pool allocated 0
Dictionary memory allocated 359878
Buffer pool size 196600
Free buffers 8192
Database pages 177301
Old database pages 65285
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 1567756, not young 296705943
0.00 youngs/s, 0.00 non-youngs/s
Pages read 7146255, created 1592527, written 5004155
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 177301, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

ERROR 2103: doing work on Longs

I have data
store trn_date dept_id sale_amt
1 2014-12-15 101 10007655
1 2014-12-15 101 10007654
1 2014-12-15 101 10007544
6 2014-12-15 104 100086544
8 2014-12-14 101 1000000
8 2014-12-15 101 100865761
I'm trying to aggregate the data using the code below.
Loading the data (I tried both ways, using HCatLoader() and using PigStorage()):
data = LOAD 'data' USING org.apache.hcatalog.pig.HCatLoader();
group_table = GROUP data BY (store, trn_date, dept_id);
group_gen = FOREACH group_table GENERATE
    FLATTEN(group) AS (store, trn_date, dept_id),
    SUM(data.sale_amt) AS total_sale_amt;
Below is the error stack trace I get while running the job:
================================================================================
Pig Stack Trace
---------------
ERROR 2103: Problem doing work on Longs
org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: grouped_all: Local Rearrange[tuple]{tuple}(false) - scope-1317 Operator Key: scope-1317): org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:289)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:263)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.processOnePackageOutput(PigCombiner.java:183)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.reduce(PigCombiner.java:161)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.reduce(PigCombiner.java:51)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1645)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:84)
at org.apache.pig.builtin.AlgebraicLongMathBase$Intermediate.exec(AlgebraicLongMathBase.java:108)
at org.apache.pig.builtin.AlgebraicLongMathBase$Intermediate.exec(AlgebraicLongMathBase.java:102)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:330)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNextTuple(POUserFunc.java:369)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:333)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:378)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:298)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:281)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Number
at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:77)
================================================================================
While looking for a solution, many said it was due to loading the data using HCatLoader, so I tried loading the data using PigStorage().
I am still getting the same error.
This may be because of the way you are storing data in Hive. If any aggregation is going to happen on a column, make sure its data type is declared as integer or numeric.
Basically, each aggregation function returns data in its default data type,
like:
AVG returns DOUBLE
SUM returns DOUBLE
COUNT returns LONG
I don't think the issue is how the data is stored in Hive, because you already tried PigStorage(); it's a data type issue when the column is passed to the aggregation.
Try casting the column to a numeric type before passing it to the aggregation, for example generating (long)sale_amt in a FOREACH before the GROUP, and try again.
