Spring LdapTemplate: updating an Active Directory group with large membership fails
I'm having issues updating groups in Active Directory that have more than 1500 members; the update only tries to modify the member attribute.
I have no issues updating groups with fewer members, and I can also add a new group with many members.
However, once the group is too large, the update fails: even updating the large group down to just one member fails with the same error.
The code fails on the modifyAttributes line:

// Compute the modifications Spring LDAP detected on the context...
ModificationItem[] modList = nameContext.getDirContextAdapter().getModificationItems();
// ...and apply them; this is the line that throws.
writeADTemplate.modifyAttributes(nameContext.getName(), modList);
Stack trace below:

org.springframework.ldap.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A122A, problem 6005 (ENTRY_EXISTS), data 0]; nested exception is javax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A122A, problem 6005 (ENTRY_EXISTS), data 0]; remaining name 'cn=Atlassian Users,ou=Groups'
    at org.springframework.ldap.support.LdapUtils.convertLdapException(LdapUtils.java:169)
    at org.springframework.ldap.core.LdapTemplate.executeWithContext(LdapTemplate.java:810)
    at org.springframework.ldap.core.LdapTemplate.executeReadWrite(LdapTemplate.java:802)
    at org.springframework.ldap.core.LdapTemplate.modifyAttributes(LdapTemplate.java:967)
    more ...
Caused by: javax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A122A, problem 6005 (ENTRY_EXISTS), data 0]; remaining name 'cn=Atlassian Users,ou=Groups'
    at com.sun.jndi.ldap.LdapCtx.mapErrorCode(Unknown Source)
    at com.sun.jndi.ldap.LdapCtx.processReturnCode(Unknown Source)
    at com.sun.jndi.ldap.LdapCtx.processReturnCode(Unknown Source)
    at com.sun.jndi.ldap.LdapCtx.c_modifyAttributes(Unknown Source)
    at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_modifyAttributes(Unknown Source)
    at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(Unknown Source)
    at javax.naming.directory.InitialDirContext.modifyAttributes(Unknown Source)
    at org.springframework.ldap.core.LdapTemplate$19.executeWithContext(LdapTemplate.java:969)
    at org.springframework.ldap.core.LdapTemplate.executeWithContext(LdapTemplate.java:807)
    ... 88 more
OK, my real issue is that Active Directory will not return a multi-valued attribute like member once it holds more than 1500 values.
When I fetched the current group members, the lookup returned 0 values, so my code tried to add all the members back to the group; that is why the server rejected the modify with error 68 (ENTRY_EXISTS).
Looks like I'll have to figure out how to use
DefaultIncrementalAttributesMapper to get all the members.
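
For anyone who hits this later, here is a minimal sketch of the incremental lookup, modeled on the DefaultIncrementalAttributesMapper example in the Spring LDAP reference documentation (the class exists from Spring LDAP 1.3.2 on); the group DN comes from the stack trace above, and getAllMembers is a hypothetical helper name:

import java.util.List;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.DefaultIncrementalAttributesMapper;

public List<Object> getAllMembers(LdapTemplate writeADTemplate) {
    // AD returns large multi-valued attributes in ranges (member;range=0-1499,
    // member;range=1500-2999, ...); the mapper keeps requesting ranges until
    // the server marks the final one.
    DefaultIncrementalAttributesMapper mapper =
            new DefaultIncrementalAttributesMapper("member");
    while (mapper.hasMore()) {
        // Each lookup asks only for the attribute ranges still outstanding.
        writeADTemplate.lookup("cn=Atlassian Users,ou=Groups",
                mapper.getAttributesForLookup(), mapper);
    }
    return mapper.getValues("member");
}

Once the real member list is retrieved this way, the DirContextAdapter diff no longer tries to re-add members that are already in the group, so the ENTRY_EXISTS error goes away.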
Related
Delayed execution for 'Explain' queries on Prestosql clusters
I have two types of Prestosql clusters, one on AWS instances and one on Kubernetes. Prestosql on K8s has a weird issue with EXPLAIN queries: they take a long time, ~2-3 minutes, compared to 2-3 seconds on the instance-based cluster. The query stays in "WAITING_FOR_RESOURCES" for about 2 minutes and then executes very quickly. There is also an exception in the server logs:

2020-12-23T05:25:01.930Z ERROR Query-20201223_052431_00004_pxqak-276 io.prestosql.cost.CachingStatsProvider Error occurred when computing stats for query 20201223_052431_00004_pxqak
io.prestosql.spi.PrestoException: HIVE_METASTORE_ERROR
    at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getMetastorePartitionColumnStatistics(ThriftHiveMetastore.java:461)
    at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getPartitionColumnStatistics(ThriftHiveMetastore.java:438)
    at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getPartitionStatistics(ThriftHiveMetastore.java:389)
    at io.prestosql.plugin.hive.metastore.thrift.BridgingHiveMetastore.getPartitionStatistics(BridgingHiveMetastore.java:110)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.lambda$loadPartitionColumnStatistics$6(CachingHiveMetastore.java:360)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.loadPartitionColumnStatistics(CachingHiveMetastore.java:353)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.access$100(CachingHiveMetastore.java:89)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:179)
    at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:207)
    at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:81)
    at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:48)
    at io.prestosql.cost.SimpleStatsRule.calculate(SimpleStatsRule.java:39)
    at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:82)
    at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:70)
    at io.prestosql.cost.CachingStatsProvider.getGroupStats(CachingStatsProvider.java:103)
    at io.prestosql.cost.CachingStatsProvider.getStats(CachingStatsProvider.java:72)
    at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:81)
    at io.prestosql.cost.JoinStatsRule.doCalculate(JoinStatsRule.java:48)
    at io.prestosql.cost.SimpleStatsRule.calculate(SimpleStatsRule.java:39)
    at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:82)
    at io.prestosql.cost.ComposableStatsCalculator.calculateStats(ComposableStatsCalculator.java:70)
    at io.prestosql.cost.CachingStatsProvider.getGroupStats(CachingStatsProvider.java:103)
    at io.prestosql.cost.CachingStatsProvider.getStats(CachingStatsProvider.java:72)
    at io.prestosql.cost.CostCalculatorWithEstimatedExchanges.calculateJoinExchangeCost(CostCalculatorWithEstimatedExchanges.java:233)
    at io.prestosql.cost.CostCalculatorWithEstimatedExchanges.calculateJoinCostWithoutOutput(CostCalculatorWithEstimatedExchanges.java:208)
    at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.getJoinNodeWithCost(DetermineJoinDistributionType.java:180)
    at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.addJoinsWithDifferentDistributions(DetermineJoinDistributionType.java:116)
    at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.getCostBasedJoin(DetermineJoinDistributionType.java:98)
    at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.apply(DetermineJoinDistributionType.java:74)
    at io.prestosql.sql.planner.iterative.rule.DetermineJoinDistributionType.apply(DetermineJoinDistributionType.java:49)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.transform(IterativeOptimizer.java:165)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreNode(IterativeOptimizer.java:140)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:105)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreChildren(IterativeOptimizer.java:190)
    at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4058)
    at com.google.common.cache.LocalCache.getAll(LocalCache.java:4021)
    at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4972)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getAll(CachingHiveMetastore.java:255)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:330)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.lambda$loadPartitionColumnStatistics$6(CachingHiveMetastore.java:360)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.loadPartitionColumnStatistics(CachingHiveMetastore.java:353)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.access$100(CachingHiveMetastore.java:89)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore$1.loadAll(CachingHiveMetastore.java:179)
    at com.google.common.cache.CacheLoader$1.loadAll(CacheLoader.java:207)
    at com.google.common.cache.LocalCache.loadAll(LocalCache.java:4058)
    at com.google.common.cache.LocalCache.getAll(LocalCache.java:4021)
    at com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:4972)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getAll(CachingHiveMetastore.java:255)
    at io.prestosql.plugin.hive.metastore.cache.CachingHiveMetastore.getPartitionStatistics(CachingHiveMetastore.java:330)
    at io.prestosql.plugin.hive.HiveMetastoreClosure.getPartitionStatistics(HiveMetastoreClosure.java:88)
    at io.prestosql.plugin.hive.metastore.SemiTransactionalHiveMetastore.getPartitionStatistics(SemiTransactionalHiveMetastore.java:256)
    at io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider.getPartitionsStatistics(MetastoreHiveStatisticsProvider.java:126)
    at io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider.lambda$new$0(MetastoreHiveStatisticsProvider.java:104)
    at io.prestosql.plugin.hive.statistics.MetastoreHiveStatisticsProvider.getTableStatistics(MetastoreHiveStatisticsProvider.java:146)
    at io.prestosql.plugin.hive.HiveMetadata.getTableStatistics(HiveMetadata.java:695)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:107)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreChildren(IterativeOptimizer.java:190)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:107)
    at io.prestosql.sql.planner.iterative.IterativeOptimizer.optimize(IterativeOptimizer.java:96)
    at io.prestosql.sql.planner.LogicalPlanner.plan(LogicalPlanner.java:196)
    at io.prestosql.sql.analyzer.QueryExplainer.getLogicalPlan(QueryExplainer.java:182)
    at io.prestosql.sql.analyzer.QueryExplainer.getPlan(QueryExplainer.java:121)
    at io.prestosql.sql.rewrite.ExplainRewrite$Visitor.getQueryPlan(ExplainRewrite.java:137)
    at io.prestosql.sql.rewrite.ExplainRewrite$Visitor.visitExplain(ExplainRewrite.java:115)
    at io.prestosql.sql.rewrite.ExplainRewrite$Visitor.visitExplain(ExplainRewrite.java:65)
    at io.prestosql.sql.tree.Explain.accept(Explain.java:80)
    at io.prestosql.sql.tree.AstVisitor.process(AstVisitor.java:27)
    at io.prestosql.sql.rewrite.ExplainRewrite.rewrite(ExplainRewrite.java:62)
    at io.prestosql.sql.rewrite.StatementRewrite.rewrite(StatementRewrite.java:57)
    at io.prestosql.sql.analyzer.Analyzer.analyze(Analyzer.java:80)
    at io.prestosql.sql.analyzer.Analyzer.analyze(Analyzer.java:75)
    at io.prestosql.execution.SqlQueryExecution.analyze(SqlQueryExecution.java:221)
    at io.prestosql.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:180)
    at io.prestosql.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:97)
    at io.prestosql.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:732)
    at io.prestosql.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:119)
    at io.prestosql.$gen.Presto_330____20201223_050837_2.call(Unknown Source)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: MetaException(message:null)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_statistics_req_result$get_partitions_statistics_req_resultStandardScheme.read(ThriftHiveMetastore.java)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_statistics_req_result$get_partitions_statistics_req_resultStandardScheme.read(ThriftHiveMetastore.java)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_statistics_req_result.read(ThriftHiveMetastore.java)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions_statistics_req(ThriftHiveMetastore.java:4013)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions_statistics_req(ThriftHiveMetastore.java:4000)
    at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastoreClient.getPartitionColumnStatistics(ThriftHiveMetastoreClient.java:227)
    at io.prestosql.plugin.hive.metastore.thrift.FailureAwareThriftMetastoreClient.lambda$getPartitionColumnStatistics$16(FailureAwareThriftMetastoreClient.java:191)
    at io.prestosql.plugin.hive.metastore.thrift.FailureAwareThriftMetastoreClient.runWithHandle(FailureAwareThriftMetastoreClient.java:394)
    at io.prestosql.plugin.hive.metastore.thrift.FailureAwareThriftMetastoreClient.getPartitionColumnStatistics(FailureAwareThriftMetastoreClient.java:191)
    at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.lambda$getMetastorePartitionColumnStatistics$15(ThriftHiveMetastore.java:453)
    at io.prestosql.plugin.hive.metastore.thrift.ThriftMetastoreApiStats.lambda$wrap$0(ThriftMetastoreApiStats.java:42)
    at io.prestosql.plugin.hive.util.RetryDriver.run(RetryDriver.java:130)
    at io.prestosql.plugin.hive.metastore.thrift.ThriftHiveMetastore.getMetastorePartitionColumnStatistics(ThriftHiveMetastore.java:451)
    ... 156 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more
    Suppressed: MetaException(message:null)
        ... 170 more

I tried changing the values of hive.metastore.partition-batch-size.max and hive.metastore-cache-ttl.
It seems that in your "slow" deployment the metastore call get_partitions_statistics_req fails for some reason and is retried; the retries likely consume all the "waiting" time. Since Presto by default ignores stats-calculation failures like this one, the query eventually works. The failure is on the Hive side, so you need to check the metastore logs to understand its cause; it is not propagated on the Presto side. On the Presto side you can still apply some configuration changes as a workaround (sketched below):
- disable statistics for the Hive connector with the hive.table-statistics-enabled configuration property
- reduce the time spent retrying metastore calls with the hive.metastore.thrift.client.max-retry-time configuration property
- make your queries fail loudly with the global config property optimizer.ignore-stats-calculator-failures=false (unlikely to be what you want)
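
A hedged sketch of those three knobs; the file locations assume a standard Presto layout with the Hive catalog defined in etc/catalog/hive.properties, and the 30s value is only an example:

# etc/catalog/hive.properties (Hive connector)
# Workaround 1: skip table statistics entirely
hive.table-statistics-enabled=false
# Workaround 2: cap the time spent retrying failed metastore calls
hive.metastore.thrift.client.max-retry-time=30s

# etc/config.properties (coordinator)
# Workaround 3: fail the query loudly instead of silently retrying
optimizer.ignore-stats-calculator-failures=false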
How to generate Dashboard report for existing JMeter csv/jtl files with tab delimiter
I am trying to generate the JMeter Dashboard report for existing results, i.e., csv/jtl files. The following is the (tab-delimited) csv file content (temp1.csv):

timeStamp elapsed label responseCode responseMessage threadName dataType success failureMessage bytes grpThreads allThreads Latency
1475842232895 1158 HTTP Request 200 OK Thread Group 1-1 text true  22175 1 1 911
1475842234094 529 HTTP Request 200 OK Thread Group 2-1 text true  682 1 1 529

The following is the command I ran:

jmeter -g J:\temp_ws\temp1.csv -o J:\temp_ws\temp1

with the delimiter set to , in user.properties:

jmeter.save.saveservice.default_delimiter=,

It gives the following error (from the jmeter.log file):

FATAL - jmeter.JMeter: An error occurred: org.apache.jmeter.report.dashboard.GenerationException: Error while processing samples: Consumer failed with message: No column <timeStamp> found in sample metadata <timeStamp elapsed label responseCode responseMessage threadName dataType success failureMessage bytes grpThreads allThreads Latency>, check #jmeter.save.saveservice.* properties to add the missing column
    at org.apache.jmeter.report.dashboard.ReportGenerator.generate(ReportGenerator.java:245)
    at org.apache.jmeter.JMeter.start(JMeter.java:478)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.jmeter.NewDriver.main(NewDriver.java:259)

I then changed the delimiter to \t in user.properties and ran the command again; this time I got the following error:

2016/10/07 17:59:32 FATAL - jmeter.JMeter: An error occurred: java.lang.ExceptionInInitializerError
    at org.apache.jmeter.JMeter.start(JMeter.java:477)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.jmeter.NewDriver.main(NewDriver.java:259)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: 0
    at java.lang.String.charAt(Unknown Source)
    at org.apache.jmeter.report.dashboard.ReportGenerator.<clinit>(ReportGenerator.java:79)
    ... 6 more

Please help me generate the Dashboard report for tab-delimited JMeter results (either csv or jtl). Note: for the comma delimiter, Dashboard reports are generated fine.
You're facing a bug in JMeter 3.0: https://bz.apache.org/bugzilla/show_bug.cgi?id=60125 It is fixed in the nightly build and will be available in 3.1, coming soon. Meanwhile you can use the nightly builds: http://jmeter.apache.org/nightly.html
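
For reference, the tab delimiter itself is configured exactly as in the question; a sketch of the relevant user.properties entry (on 3.0 this value is what triggers the StringIndexOutOfBoundsException above, so it only works on builds containing the fix):

# user.properties
# The results file is tab-delimited.
# Requires the fix for bug 60125 (nightly build / 3.1+).
jmeter.save.saveservice.default_delimiter=\t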
ERROR 2103: doing work on Longs
I have this data (tab-separated; the first column is store):

store trn_date dept_id sale_amt
1 2014-12-15 101 10007655
1 2014-12-15 101 10007654
1 2014-12-15 101 10007544
6 2014-12-15 104 100086544
8 2014-12-14 101 1000000
8 2014-12-15 101 100865761

I'm trying to aggregate the data using the code below (I tried loading both ways, using HCatLoader() and using PigStorage()):

data = LOAD 'data' USING org.apache.hcatalog.pig.HCatLoader();
group_table = GROUP data BY (store, tran_date, dept_id);
group_gen = FOREACH group_table GENERATE
    FLATTEN(group) AS (store, tran_date, dept_id),
    SUM(data.sale_amt) AS total_sale_amt;

Below is the error stack trace I get while running the job:

================================================================================
Pig Stack Trace
---------------
ERROR 2103: Problem doing work on Longs

org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: grouped_all: Local Rearrange[tuple]{tuple}(false) - scope-1317 Operator Key: scope-1317): org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:289)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:263)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.processOnePackageOutput(PigCombiner.java:183)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.reduce(PigCombiner.java:161)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.reduce(PigCombiner.java:51)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
    at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1645)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
    at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:84)
    at org.apache.pig.builtin.AlgebraicLongMathBase$Intermediate.exec(AlgebraicLongMathBase.java:108)
    at org.apache.pig.builtin.AlgebraicLongMathBase$Intermediate.exec(AlgebraicLongMathBase.java:102)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:330)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNextTuple(POUserFunc.java:369)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:333)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:378)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:298)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:281)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Number
    at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:77)
================================================================================

While looking for a solution, many said it is caused by loading the data with the HCatalog loader, so I tried loading the data using PigStorage(); I still get the same error.
This may be because of the way you are storing the data in Hive. If any aggregation is going to happen on a column, declare its data type as numeric (int, bigint, etc.).
Basically, each aggregation function returns data with its default data type: AVG returns DOUBLE, SUM returns DOUBLE (LONG for integer input), COUNT returns LONG. I don't think the issue is how it is stored in Hive, because you already tried PigStorage(); it's a data-type problem when the value reaches the aggregation. The Caused by line (java.lang.String cannot be cast to java.lang.Number) shows sale_amt is arriving as a string. Change the data type before passing it to the aggregation (see the sketch below) and try again.
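
A sketch of that fix in Pig Latin, using the columns from the question (the tab delimiter and load path are assumptions; with HCatLoader the equivalent fix is declaring sale_amt as bigint in the Hive table):

-- Give sale_amt a numeric type up front instead of the default chararray/bytearray
data = LOAD 'data' USING PigStorage('\t')
       AS (store:int, tran_date:chararray, dept_id:int, sale_amt:long);
group_table = GROUP data BY (store, tran_date, dept_id);
group_gen = FOREACH group_table GENERATE
    FLATTEN(group) AS (store, tran_date, dept_id),
    SUM(data.sale_amt) AS total_sale_amt;

-- Alternatively, cast just before aggregating:
-- casted = FOREACH data GENERATE store, tran_date, dept_id, (long)sale_amt AS sale_amt;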
WebSphere CORBA.BAD_OPERATION error
I am getting the error below when I start WebSphere. I really cannot figure out what my problem is here. Any help and advice would be appreciated. Thanks.

Index  Count  Time of last Occurrence      Exception SourceId ProbeId
------+------+---------------------------+--------------------------
1      1      24.10.2013 06:35:41:080 GMT  org.omg.CORBA.BAD_OPERATION com.ibm.ws.naming.jndicos.CNContextImpl.isLocal 3510
------+------+---------------------------+--------------------------
+ 2    1      24.10.2013 06:35:42:284 GMT  java.io.IOException com.ibm.ws.management.discovery.DiscoveryService.sendQuery 165

The log below is from dmgr_exception.log:

------Start of DE processing------ = [24.10.2013 11:24:44:982 GMT], key = org.omg.CORBA.BAD_OPERATION com.ibm.ws.naming.jndicos.CNContextImpl.isLocal 3510
Exception = org.omg.CORBA.BAD_OPERATION
Source = com.ibm.ws.naming.jndicos.CNContextImpl.isLocal
probeid = 3510
Stack Dump = org.omg.CORBA.BAD_OPERATION: The delegate has not been set! vmcid: 0x0 minor code: 0 completed: No
    at org.omg.CORBA.portable.ObjectImpl._get_delegate(ObjectImpl.java:80)
    at org.omg.CORBA.portable.ObjectImpl._is_local(ObjectImpl.java:381)
    at com.ibm.ws.naming.jndicos.CNContextImpl.isLocal(CNContextImpl.java:4901)
    at com.ibm.ws.naming.jndicos.CNContextImpl.<init>(CNContextImpl.java:365)
    at com.ibm.ws.naming.util.WsnInitCtxFactory.getCosRootContext(WsnInitCtxFactory.java:1274)
    at com.ibm.ws.naming.util.WsnInitCtxFactory.getRootContextFromServer(WsnInitCtxFactory.java:934)
    at com.ibm.ws.naming.util.WsnInitCtxFactory.getRootJndiContext(WsnInitCtxFactory.java:824)
    at com.ibm.ws.naming.util.WsnInitCtxFactory.getInitialContextInternal(WsnInitCtxFactory.java:533)
    at com.ibm.ws.naming.util.WsnInitCtx.getContext(WsnInitCtx.java:117)
    at com.ibm.ws.naming.util.WsnInitCtx.getContextIfNull(WsnInitCtx.java:712)
    at com.ibm.ws.naming.util.WsnInitCtx.rebind(WsnInitCtx.java:247)
    at javax.naming.InitialContext.rebind(InitialContext.java:379)
    at com.ibm.ws.management.connector.rmi.RMIConnectorController.start(RMIConnectorController.java:88)
    at com.ibm.ws.management.component.JMXConnectors.startRMIConnector(JMXConnectors.java:664)
    at com.ibm.ws.management.component.JMXConnectors.started(JMXConnectors.java:1653)
    at com.ibm.ws.runtime.workloadcontroller.WorkloadController.startedWorkloads(WorkloadController.java:649)
    at com.ibm.ws.runtime.workloadcontroller.WorkloadController.started(WorkloadController.java:595)
    at com.ibm.ws.runtime.component.WLCImpl.start(WLCImpl.java:94)
    at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977)
    at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673)
    at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:526)
    at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:192)
    at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:140)
    at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:461)
    at com.ibm.ws.runtime.WsServer.main(WsServer.java:59)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183)
    at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90)
    at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72)
    at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at org.eclipse.core.launcher.Main.invokeFramework(Main.java:340)
    at org.eclipse.core.launcher.Main.basicRun(Main.java:282)
    at org.eclipse.core.launcher.Main.run(Main.java:981)
    at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:339)
    at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:94)
You should have a log file with the incident report in (or at least somewhere close to) /WebSphere/AppServer/profiles/default/logs/ffdc/<your server name>. It should contain more information (and a stack trace). EDIT: WebSphere 6.1 is quite old already; in fact, IBM has not supported it since 30 September 2013. Since the problem seems to be known and fixed in a later release, I suggest you update to a more recent version, or at least to a recent fixpack of 6.1. You can learn more about the process on this page. The 6.1.0.47 fixpack is the last one. You simply need to download the PAK file and run the installer.
Transaction deadlock TX
I have a batch process and a regular application that update the same table. My batch has multiple threads that run in multiple sessions. I got the following errors in my batch Tomcat:

2012-09-10 11:30:17,043 [SyncDataThread567] ERROR org.springframework.batch.core.step.AbstractStep - Encountered an error executing the step
aaa.bbb.ccc.framework.orm.DAOException:
--- The error occurred in abc.xml.
--- The error occurred while applying a parameter map.
--- Check the ear.updateServiceTimeParamMap.
--- Check the statement (update procedure failed).
--- Cause: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_UPDATESERVICETIME", line 154
ORA-06512: at line 1
    at aaa.bbb.ccc.ddd.eee.Sss.updateServiceTimes(ServiceOrderDAOImpl.java:76)
    at sun.reflect.GeneratedMethodAccessor352.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy6.updateServiceTimes(Unknown Source)
    at aaa.bbb.ccc.ddd.eeee.Inbddd.updateServiceTimes(InbDataWriter.java:144)
    at aaa.bbb.ccc.ddd.eeee.Inbddd.write(InbDataWriter.java:74)
    at sun.reflect.GeneratedMethodAccessor270.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy7.write(Unknown Source)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:171)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:150)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:268)
    at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:194)
    at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:74)
    at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:386)
    at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:128)
    at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:264)
    at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:76)
    at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:367)
    at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:214)
    at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:143)
    at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:250)
    at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:195)
    at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:109)
    at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:107)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:192)
    at java.lang.Thread.run(Thread.java:619)
Caused by: com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in ael.xml.
--- The error occurred while applying a parameter map.
--- Check the eraa.updateServiceTimeParamMap.
--- Check the statement (update procedure failed).
--- Cause: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_UPDATESERVICETIME", line 154
ORA-06512: at line 1
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:201)
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:120)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
    at com.iit.integration.erl.orm.ServiceOrderDAOImpl.updateServiceTimes(ServiceOrderDAOImpl.java:71)
    ... 44 more
Caused by: java.sql.SQLException: ORA-20011: FUNC_UPDATESERVICETIME : Error occured
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ER.FUNC_IIT_UPDATESERVICETIME", line 154
ORA-06512: at line 1
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:215)
    at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:954)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3390)
    at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4223)
    at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:169)
    at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:278)
    at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:39)
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:189)
    ... 49 more

This is my Oracle trace file:

Redo thread mounted by this instance: 1
Oracle process number: 63
Windows thread id: 2464, image: ORACLE.EXE (SHAD)

*** 2012-09-10 11:30:12.384
*** SERVICE NAME:(SYS$USERS) 2012-09-10 11:30:12.244
*** SESSION ID:(411.3766) 2012-09-10 11:30:12.244
DEADLOCK DETECTED
[Transaction Deadlock]
Current SQL statement for this session:
UPDATE SP SET SRVC_TM = :B4 , MODIFICATION_DTM=SYSDATE WHERE OPERATION_AREA_CD = :B3 AND ROUTE_TYP = :B2 AND OBJECTID = :B1
----- PL/SQL Call Stack -----
  object            line  object
  handle          number  name
000000057D9B52E8     134  function ER.FUNC_UPDATESERVICETIME
000000057C3A5848       1  anonymous block
The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL.
The following information may aid in determining the deadlock:
Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-00040020-0017465b        63     411     X             94     364           X
TX-00020020-00166804        94     364     X             63     411           X
session 411: DID 0001-003F-00000033     session 364: DID 0001-005E-00000016
session 364: DID 0001-005E-00000016     session 411: DID 0001-003F-00000033
Rows waited on:
Session 364: obj - rowid = 0000CC64 - AAAMxkAA2AAA1q2AAY
  (dictionary objn - 52324, file - 54, block - 219830, slot - 24)
Session 411: obj - rowid = 0000CC64 - AAAMxkAA2AAA1q2AAR
  (dictionary objn - 52324, file - 54, block - 219830, slot - 17)
Information on the OTHER waiting sessions:
Session 364:
  pid=94 serial=6104 audsid=693767 user: 57/ER
  O/S info: user: , term: , ospid: 1234, machine: abc
  program:
  Current SQL Statement:
  UPDATE SP SET ORIG_NO='751' ,ORIG_SEQ_NO=0,SP_ROUTING_STATUS='A', USER_ID='XXXX', MODIFICATION_DTM=SYSDATE WHERE OBJECTID IN ('104883389','104883404','104883407','104883440','104883443','104883455','104883467','104883509','104883545','104883764','104883788','104883806','104883812','104883821','104883836','104883854','104883863','104883893','104883899','104883931','104883937','104883964','104884084','104884117','104884120','104884138','104884141','104885439','104883386','104883422','104883560','104883587','104883767','104883785','104883809','104883824','104883845','104883851','104883884','104883890','104883955','104883958','104884012','104884093','104884114','104885412','104885436','104885442','104885445','104883383','104883395','104883413','104883419','104883464','104883494','104883524','104883773','104883842','104883917','104883920','104883943','104883949','104883967','104883997','104884051','104884105','104884108','104885451','104883437','104883461','104883476','104883497','104883500','104883503','104883566','104883584','104883614','104883794','104883800','104883815','104883830','104883857','104883869','104883923','104883952','104884048','104884057','104884063','104884066','104884081','104884087','104884102','104884111','104884135','104885415','104885424','104885427','104886297','104886308','104883398','104883410','104883458','104883473','104883512','104883515','104883527','104883530','104883536','104883554','104883596','104883770','104883782','104883803','104883827','104883833','104883839','104883848','104883866','104883875','104883878','104883896','104883902','104883914','104883970','104883976','104884060','104884069','104884072','104884123','104884132','104885409','104885430','104883425','104883431','104883446','104883449','104883452','104883482','104883506','104883518','104883539','104883548','104883569','104883575','104883578','104883623','104883779','104883797','104883818','104883860','104883925','104883934','104883940','104883946','104883973','104883979','104883982','104884078','104884090','104884096','104885421','104885448','104885454','104883392','104883416','104883428','104883479','104883491','104883521','104883542','104883551','104883557','104883563','104883872','104883911','104883928','104883961','104883994','104884018','104884054','104884099','104884129','104886299','104883401','104883434','104883470','104883485','104883533','104883572','104883581','104883776','104883791','104883881','104883887','104883905','104883908','104884075','104884126','104885418','104885433')
End of information on OTHER waiting sessions.
===================================================
PROCESS STATE
-------------
Process global information:
process: 000000057B3343D8, call: 0000000574FCBF78, xact: 0000000576A07F60, curses: 000000057E48D858, usrses: 000000057E48D858
----------------------------------------
SO: 000000057B3343D8, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=63, calls cur/top: 0000000574FCBF78/0000000574FD4C48, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 108 0 4
last post received-location: aaa
last process to post me: 7e31d890 1 6
last post sent: 0 0 112
last post sent-location: bbb
last process posted by me: 7b334c00 3 0
(latch info) wait_event=0 bits=10
holding (efd=19) 4745310 Parent+children enqueue hash chains level=4
Location from where latch is held: cmi: gpl:
Context saved from call: 0
state=busy, wlstate=free
recovery area:
Dump of memory from 0x000000057E300810 to 0x000000057E300830
57E300810 00000000 00000000 00000000 00000000 [...............

I have been researching this issue for the past few days. Some say it is an indexing issue, some say it is INITRANS... I am not sure. This deadlock happens very rarely, but whenever it happens it is a big issue. So please help me: what should I look for, and how can I solve this issue?
Look at your two UPDATE statements and try to understand why they would request the same rows, but in a different order. That's how almost all deadlocks happen. There are several possible ways to avoid this error:

1) Update rows in the same order. You may be able to do this with a hint to force a full table scan or index. (I'm not 100% certain that using the same access method will always avoid this issue, but in practice it does seem to fix it. See my old question for a painful discussion about deadlocks when the access method is the same.)

2) Do not run your two processes at the same time.

3) Handle exceptions. For example, something like this:

declare
    deadlock exception;
    pragma exception_init(deadlock, -00060);
begin
    <code>
exception
    when deadlock then
        <do something about it here, such as re-try>
end;
/

You need to add the exception handling to both blocks of code. And a deadlock will still generate a trace file, which will probably slow things down and take up a lot of space where no one expects it.
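
As a concrete sketch of option 3 with a retry (table and column names are taken from the trace file above; the bind values, retry count, and sleep are placeholders, and dbms_lock.sleep needs an explicit execute grant):

declare
    deadlock exception;
    pragma exception_init(deadlock, -60);
    l_attempts pls_integer := 0;
begin
    loop
        begin
            -- the deadlock-prone statement from the trace, with example values
            update sp
               set srvc_tm = 10,
                   modification_dtm = sysdate
             where operation_area_cd = 'X'
               and route_typ = 'Y'
               and objectid = 104883389;
            commit;
            exit;                     -- success: leave the loop
        exception
            when deadlock then
                rollback;             -- release our locks before retrying
                l_attempts := l_attempts + 1;
                if l_attempts >= 3 then
                    raise;            -- give up after a few attempts
                end if;
                dbms_lock.sleep(1);   -- short pause so the other session can finish
        end;
    end loop;
end;
/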