ISPN000217 while adding data to infinispan cache - spring-boot

I have an application running on 4 nodes across 2 clusters. The application has caching configured using Infinispan and SpringEmbeddedCacheManager. I am getting an intermittent issue when adding data to the cache; note that I add data as key/value pairs, where the value is always an instance of a custom class.
I tried changing the cache type to replicated, local, and invalidation, and observed that the issue does not occur with a local or invalidation cache. Can anyone confirm whether large objects in a distributed cache can cause this issue?
Infinispan Config
<distributed-cache name="apigw-access-cache" owners="1" segments="20" mode="SYNC" statistics="false">
    <eviction max-entries="10" strategy="LIRS"/>
    <expiration max-idle="360000" lifespan="3600000"/>
</distributed-cache>
Infinispan Version
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-spring4</artifactId>
    <version>7.0.3.Final</version><!--$NO-MVN-MAN-VER$ -->
</dependency>
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-cli-server</artifactId>
    <version>7.0.0.CR1</version>
</dependency>
Errors
2019-12-04 09:44:23.361 [qtp1933072581-15447] ERROR o.i.i.InvocationContextInterceptor - ISPN000136: Execution error
org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node-10097-32028, see cause for remote stack trace
at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44) ~[infinispan-core-7.0.3.Final.jar!/:7.0.3.Final]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:381) ~[infinispan-core-7.0.3.Final.jar!/:7.0.3.Final]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) ~[infinispan-core-7.0.3.Final.jar!/:7.0.3.Final]
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:560) ~[infinispan-core-7.0.3.Final.jar!/:7.0.3.Final]
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:290) ~[infinispan-core-7.0.3.Final.jar!/:7.0.3.Final]
Caused by: java.lang.IllegalArgumentException: Can not set java.util.Set field Class.field to java.lang.String
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167) ~[na:1.8.0_121]
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171) ~[na:1.8.0_121]
at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81) ~[na:1.8.0_121]
at java.lang.reflect.Field.set(Field.java:764) ~[na:1.8.0_121]
Caused by: org.infinispan.commons.CacheException: Problems invoking command.
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:221)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377)
Caused by: org.infinispan.commons.CacheException: Problems invoking command.
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:221) ~[infinispan-core-7.0.3.Final.jar!/:7.0.3.Final]
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) ~[jgroups-3.6.1.Final.jar!/:3.6.1.Final]
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) ~[jgroups-3.6.1.Final.jar!/:3.6.1.Final]

First off, you should not be using such an old version of Infinispan; you should upgrade to 9.4.17.Final.
The stack trace fragments don't appear to be in the right order, but Can not set java.util.Set field Class.field to java.lang.String happens because two of your nodes have different versions of the same class.
The biggest difference between distributed and invalidation caches is that distributed caches replicate values to other nodes, while invalidation caches send an invalidation message that includes only the key. If an invalidation cache works, then the problem is almost certainly that one of your value classes has changed and one of the nodes still has the old version.
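For comparison, here is a minimal sketch of the same cache declared in invalidation mode using Infinispan's programmatic API (illustrative only; it sidesteps the symptom rather than fixing the class-version mismatch between nodes):
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Invalidation mode broadcasts only keys on writes, so other nodes never
// deserialize the value class remotely; a stale class version on another
// node therefore never gets hit during replication.
Configuration invalidationCfg = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.INVALIDATION_SYNC)
    .build();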

Related

Envers NullPointerException when creating test data

I am brand new to Envers (started today). I am extending an existing Spring Boot application with audit support using Envers. I annotated all @Entity classes and made some changes, as described in Envers + MYSQL + List<String> = SQLSyntaxErrorException: Specified key was too long.
All database tables are created perfectly, but when I use a CommandLineRunner to generate test data in the database, I get the error below.
ERROR 17:50 o.s.b.SpringApplication.reportFailure:821: Application run failed
java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:782)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:763)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:318)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1213)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1202)
at com.agiletunes.productmanager.ProdMgrApp.init(ProdMgrApp.java:59)
at com.sun.javafx.application.LauncherImpl.launchApplication1(LauncherImpl.java:841)
at com.sun.javafx.application.LauncherImpl.lambda$launchApplication$159(LauncherImpl.java:182)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Error while committing the transaction
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:541)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:746)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:714)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:534)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:305)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at com.agiletunes.productmanager.services.DatabaseTestInitializationService$$EnhancerBySpringCGLIB$$62d47d4b.createTestData(<generated>)
at com.agiletunes.productmanager.ProdMgrApp.lambda$2(ProdMgrApp.java:129)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:779)
... 8 common frames omitted
Caused by: javax.persistence.RollbackException: Error while committing the transaction
at org.hibernate.internal.ExceptionConverterImpl.convertCommitException(ExceptionConverterImpl.java:81)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:107)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:532)
... 18 common frames omitted
Caused by: java.lang.NullPointerException: null
at org.hibernate.envers.event.spi.BaseEnversEventListener.addCollectionChangeWorkUnit(BaseEnversEventListener.java:109)
at org.hibernate.envers.event.spi.BaseEnversEventListener.generateBidirectionalCollectionChangeWorkUnits(BaseEnversEventListener.java:76)
at org.hibernate.envers.event.spi.EnversPostInsertEventListenerImpl.onPostInsert(EnversPostInsertEventListenerImpl.java:49)
at org.hibernate.action.internal.EntityInsertAction.postInsert(EntityInsertAction.java:168)
at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:135)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:604)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:478)
at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:356)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:39)
at org.hibernate.internal.SessionImpl.doFlush(SessionImpl.java:1454)
at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:511)
at org.hibernate.internal.SessionImpl.flushBeforeTransactionCompletion(SessionImpl.java:3290)
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:2486)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.beforeTransactionCompletion(JdbcCoordinatorImpl.java:473)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.beforeCompletionCallback(JdbcResourceLocalTransactionCoordinatorImpl.java:178)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.access$300(JdbcResourceLocalTransactionCoordinatorImpl.java:39)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:271)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:104)
... 19 common frames omitted
INFO 17:50 o.s.o.j.AbstractEntityManagerFactoryBean.destroy:597: Closing JPA EntityManagerFactory for persistence unit 'default'
INFO 17:50 c.z.h.HikariDataSource.close:350: HikariPool-1 - Shutdown initiated...
INFO 17:50 c.z.h.HikariDataSource.close:352: HikariPool-1 - Shutdown completed.
Before I added Envers, and before I shortened the strings as explained in the other Stack Overflow entry above, I could generate the test data perfectly.
The trouble is that I don't see any helpful information in the entire stack trace about where the issue is coming from. How do I find out what the root cause is?
I tried a number of things, and while changing this and that, I stumbled over a superclass that was lacking the @Audited annotation. Now it is working perfectly.
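For anyone hitting the same NPE, the shape of the fix looks roughly like this (class and field names are hypothetical; the essential part is @Audited on the mapped superclass):
import javax.persistence.MappedSuperclass;
import org.hibernate.envers.Audited;

// The superclass that was missing @Audited. With the inheritance chain only
// partially audited, Envers NPE'd in BaseEnversEventListener while generating
// collection-change work units; annotating the superclass fixed it.
@Audited
@MappedSuperclass
public abstract class BaseEntity {
    // shared id/version/timestamp fields live here
}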
Interesting question:
How do I find out what the root cause is?
The last part of the stack trace, which represents the original exception thrown, reads:
Caused by: java.lang.NullPointerException: null
at org.hibernate.envers.event.spi.BaseEnversEventListener.addCollectionChangeWorkUnit(BaseEnversEventListener.java:109)
A first wild guess is therefore that it might have to do with a collection being null.
Such a guess is useful if your entity model is small and you can easily check that.
If that doesn't lead to a resolution, the next step is to limit the scope.
Remove about half of your entities and check if the problem still exists.
If it does, repeat.
If it doesn't, try the other half.
This way you reduce your model to something really small.
Now remove attributes in the same way until you have a tiny model that reproduces the issue.
Now repeat with the things you do with your model, until you have only a few interactions left.
Once your program is that simple, the problem often becomes obvious.
If it isn't, it is a great basis for asking a question here.
Or for reporting a bug with Envers.

Getting Slow operation detected: com.hazelcast.map.impl.operation.DeleteOperation while calling ITopic.publish()

[Spring Boot application] I am trying to delete a key from a Hazelcast map from an async method. In the delete method of my MapStore class, I put the key on a topic and call publish(). However, sometimes I get this message:
Slow operation detected: com.hazelcast.map.impl.operation.DeleteOperation. I am adding the stack trace below.
2019-03-27 11:52:08.041 WARN [] 24586 --- [trace=,span=] [hz._hzInstance_1_gaian.SlowOperationDetectorThread] c.h.s.i.o.s.SlowOperationDetector : [localhost]:5701 [gaian] [3.10] Slow operation detected: com.hazelcast.map.impl.operation.DeleteOperation
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:160)
com.hazelcast.client.spi.ClientProxy.invokeOnPartition(ClientProxy.java:225)
com.hazelcast.client.proxy.PartitionSpecificClientProxy.invokeOnPartition(PartitionSpecificClientProxy.java:49)
com.hazelcast.client.proxy.ClientTopicProxy.publish(ClientTopicProxy.java:52)
com.gaian.adwize.cache.mapstore.CampaignMapStore.delete(CampaignMapStore.java:95)
com.gaian.adwize.cache.mapstore.CampaignMapStore.delete(CampaignMapStore.java:36)
com.hazelcast.map.impl.MapStoreWrapper.delete(MapStoreWrapper.java:115)
com.hazelcast.map.impl.mapstore.writethrough.WriteThroughStore.remove(WriteThroughStore.java:56)
com.hazelcast.map.impl.mapstore.writethrough.WriteThroughStore.remove(WriteThroughStore.java:28)
com.hazelcast.map.impl.recordstore.DefaultRecordStore.delete(DefaultRecordStore.java:565)
com.hazelcast.map.impl.operation.DeleteOperation.run(DeleteOperation.java:38)
com.hazelcast.spi.Operation.call(Operation.java:148)
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:202)
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:191)
com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:406)
com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:433)
com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:581)
com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:566)
com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:525)
com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:215)
com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:60)
com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:67)
com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:130)
com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:110)
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:155)
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125)
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100)
Any help from the community will be really helpful. Thanks.
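For reference, the call pattern the question and stack trace describe amounts to roughly the following (CampaignMapStore and the topic publish come from the trace; the value type, field, and constructor are hypothetical):
import com.hazelcast.core.ITopic;
import com.hazelcast.core.MapStoreAdapter;

// MapStore.delete() runs on a Hazelcast partition thread, and ITopic.publish()
// is a synchronous distributed call, so any latency in the publish is
// attributed to the surrounding DeleteOperation and can trip the
// slow-operation detector.
public class CampaignMapStore extends MapStoreAdapter<String, Object> {

    private final ITopic<String> deletedKeys;

    public CampaignMapStore(ITopic<String> deletedKeys) {
        this.deletedKeys = deletedKeys;
    }

    @Override
    public void delete(String key) {
        deletedKeys.publish(key); // blocks until the publish operation completes
        // ... remove the entry from the backing store ...
    }
}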
A slow operation is not necessarily an error. Newer versions of Hazelcast can calculate what a normal response time is and detect when something is out of the ordinary. It's usually more of a network latency issue, or garbage collection introducing latency.
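If the warnings are expected and just noisy, the detector itself is tunable via Hazelcast system properties; a hedged sketch (property names per the Hazelcast 3.x documentation, verify for your version):
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Raise the slow-operation threshold (default 10000 ms) or disable the detector.
Config config = new Config();
config.setProperty("hazelcast.slow.operation.detector.threshold.millis", "30000");
// config.setProperty("hazelcast.slow.operation.detector.enabled", "false");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);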
This is not an error. It is visible in Hazelcast version 4.0.x as well.

JanusGraph can not connect to ElasticSearch after Cluster Name change

I'm trying to instantiate JanusGraph with the following configuration, using Cassandra as storage backend and ElasticSearch as indexing backend:
JanusGraph graph = JanusGraphFactory.build()
        .set("storage.backend", "cassandra")
        .set("storage.hostname", "localhost")
        .set("cache.db-cache", true)
        .set("schema.default", "none")
        .set("index.search.backend", "elasticsearch")
        .set("index.search.elasticsearch.client-only", "false")
        .set("index.search.elasticsearch.local-mode", "true")
        .open();
The above code works if Cassandra's cluster is named Test Cluster. If I rename it to something else, an exception is thrown:
java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:464)
at org.janusgraph.diskstorage.Backend.<init>(Backend.java:149)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1850)
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:134)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:107)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:97)
at org.janusgraph.core.JanusGraphFactory$Builder.open(JanusGraphFactory.java:152)
at engineering.divine.core.GraphFactory.cassandraGraph(GraphFactory.java:403)
at engineering.divine.core.GraphFactory.graph(GraphFactory.java:298)
at engineering.divine.core.GraphFactory.getDefault(GraphFactory.java:99)
at engineering.divine.repository.Repository.listRepositoriesToUpdate(Repository.java:130)
at engineering.divine.daemon.RepositoryAnalysisDaemon.run(RepositoryAnalysisDaemon.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
... 20 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:215)
... 25 more
How can I make Elasticsearch work with my new cluster name?
I'm using Mac OS X 10.11.6; any pointers are highly appreciated.
If it is for testing purposes, reset your data:
Clear all your data from the storage backend (Cassandra).
Restart all the JanusGraph nodes.
In JanusGraph
Each configuration option has a certain mutability level that governs whether and how it can be modified after the database is opened for the first time. The following listing describes the mutability levels.
FIXED
Once the database has been opened, these configuration options cannot be changed for the entire life of the database
GLOBAL_OFFLINE
These options can only be changed for the entire database cluster at once when all instances are shut down
GLOBAL
These options can only be changed globally across the entire database cluster
MASKABLE
These options are global but can be overwritten by a local configuration file
LOCAL
These options can only be provided through a local configuration file
You can get the mutability level of any configuration option from the link below.
Source: http://docs.janusgraph.org/latest/config-ref.html
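For GLOBAL_OFFLINE options, changes are made through JanusGraph's management API while all other instances are shut down; a hedged sketch (the property and value here are illustrative, not specific to the cluster-name problem):
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;

// Change a stored configuration value, then reopen the graph for it to apply.
JanusGraph graph = JanusGraphFactory.open("janusgraph.properties");
JanusGraphManagement mgmt = graph.openManagement();
mgmt.set("index.search.hostname", "127.0.0.1");
mgmt.commit();
graph.close(); // reopen for the new value to take effect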

Can't connect to Bigtable to scan HTable data due to hardcoded managed=true in hbase client jars

I'm working on a custom load function to load data from Bigtable using Pig on Dataproc. I compile my Java code using the following list of jar files I grabbed from Dataproc. When I run the following Pig script, it fails when it tries to establish a connection with Bigtable.
Error message is:
Bigtable does not support managed connections.
Questions:
Is there a work around for this problem?
Is this a known issue and is there a plan to fix or adjust?
Is there a different way of implementing multi scans as a load function for Pig that will work with Bigtable?
Details:
Jar files:
hadoop-common-2.7.3.jar
hbase-client-1.2.2.jar
hbase-common-1.2.2.jar
hbase-protocol-1.2.2.jar
hbase-server-1.2.2.jar
pig-0.16.0-core-h2.jar
Here's a simple Pig script using my custom load function:
%default gte '2017-03-23T18:00Z'
%default lt '2017-03-23T18:05Z'
%default SHARD_FIRST '00'
%default SHARD_LAST '25'
%default GTE_SHARD '$gte\_$SHARD_FIRST'
%default LT_SHARD '$lt\_$SHARD_LAST'
raw = LOAD 'hbase://events_sessions'
USING com.eduboom.pig.load.HBaseMultiScanLoader('$GTE_SHARD', '$LT_SHARD', 'event:*')
AS (es_key:chararray, event_array);
DUMP raw;
My custom load function HBaseMultiScanLoader creates a list of Scan objects to perform multiple scans on different ranges of data in the table events_sessions determined by the time range between gte and lt and sharded by SHARD_FIRST through SHARD_LAST.
HBaseMultiScanLoader extends org.apache.pig.LoadFunc so it can be used in the Pig script as load function.
When Pig runs my script, it calls LoadFunc.getInputFormat().
My implementation of getInputFormat() returns an instance of my custom class MultiScanTableInputFormat which extends org.apache.hadoop.mapreduce.InputFormat.
MultiScanTableInputFormat initializes org.apache.hadoop.hbase.client.HTable object to initialize the connection to the table.
Digging into the hbase-client source code, I see that org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal() calls org.apache.hadoop.hbase.client.ConnectionManager.createConnection() with the attribute “managed” hardcoded to “true”.
You can see from the stack trace below that my code (MultiScanTableInputFormat) tries to initialize an HTable object, which invokes getConnectionInternal(), which does not provide an option to set managed to false.
Going down the stack trace, you will get to AbstractBigtableConnection, which will not accept managed=true and therefore causes the connection to Bigtable to fail.
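In code form, the construction path described above boils down to the deprecated pattern below (a hedged sketch; the table name is taken from the Pig script):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

// Deprecated construction path: new HTable(...) goes through
// ConnectionManager.getConnectionInternal(), which hardcodes managed=true,
// and AbstractBigtableConnection rejects managed connections.
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "events_sessions"); // fails against Bigtable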
Here’s the stack trace showing the error:
2017-03-24 23:06:44,890 [JobControl] ERROR com.turner.hbase.mapreduce.MultiScanTableInputFormat - java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:431)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:302)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:185)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:151)
at com.eduboom.hbase.mapreduce.MultiScanTableInputFormat.setConf(Unknown Source)
at com.eduboom.pig.load.HBaseMultiScanLoader.getInputFormat(Unknown Source)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
... 26 more
Caused by: java.lang.IllegalArgumentException: Bigtable does not support managed connections.
at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:123)
at com.google.cloud.bigtable.hbase1_2.BigtableConnection.<init>(BigtableConnection.java:55)
... 31 more
The original problem was caused by the use of outdated and deprecated HBase client jars and classes.
I updated my code to use the newest HBase client jars provided by Google, and the original problem was fixed.
I am still stuck on a ZooKeeper issue that I haven't figured out yet, but that's a conversation for a different question.
This one is answered!
I have confronted the same error message:
Bigtable does not support managed connections.
However, according to my research, the root cause is that the HTable class cannot be constructed explicitly. After changing the code to obtain the table via connection.getTable, the problem was resolved.
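A hedged sketch of that change, using the standard HBase 1.x client API (table name taken from the question):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

// Obtain the table from an unmanaged Connection instead of constructing
// HTable directly; this avoids the hardcoded managed=true path.
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Table table = connection.getTable(TableName.valueOf("events_sessions"));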

com.ibm.websphere.jtaextensions.NotSupportedException thrown under load

I have an application containing 4 MDBs, each of which receives SOAP messages over JMS from MQ. Once the messages have been received, we process the XML into an object model and process it accordingly, which always involves either loading or saving messages to an Oracle database via Hibernate.
Additionally, we have a Quartz process which fires every minute and may or may not trigger some actions that could also read or write to the database using Hibernate.
When the system is under high load, i.e. processing large numbers of messages (1k+) and potentially performing some database reads/writes triggered by our Quartz process, we keep seeing the following exception thrown in our logs.
===============================================================================
at com.integrasp.iatrade.logic.MessageContextRouting.lookup(MessageContextRouting.java:150)
at com.integrasp.iatrade.logic.RequestResponseManager.findRequestDestination(RequestResponseManager.java:153)
at com.integrasp.iatrade.logic.RequestResponseManager.findRequestDestination(RequestResponseManager.java:174)
at com.integrasp.iatrade.logic.IOLogic.processResponse(IOLogic.java:411)
at com.integrasp.iatrade.logic.FxOrderQuoteManager.requestQuote(FxOrderQuoteManager.java:119)
at com.integrasp.iatrade.logic.FxOrderQuoteManager.processRequest(FxOrderQuoteManager.java:682)
at com.integrasp.iatrade.logic.FxOrderSubmissionManager.processRequest(FxOrderSubmissionManager.java:408)
at com.integrasp.iatrade.eo.SubmitOrderRequest.process(SubmitOrderRequest.java:60)
at com.integrasp.iatrade.ejb.BusinessLogicRegister.perform(BusinessLogicRegister.java:85)
at com.integrasp.iatrade.ejb.mdb.OrderSubmissionBean.onMessage(OrderSubmissionBean.java:147)
at com.ibm.ejs.jms.listener.MDBWrapper$PriviledgedOnMessage.run(MDBWrapper.java:302)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:63)
at com.ibm.ejs.jms.listener.MDBWrapper.callOnMessage(MDBWrapper.java:271)
at com.ibm.ejs.jms.listener.MDBWrapper.onMessage(MDBWrapper.java:240)
at com.ibm.mq.jms.MQSession.run(MQSession.java:1593)
at com.ibm.ejs.jms.JMSSessionHandle.run(JMSSessionHandle.java:970)
at com.ibm.ejs.jms.listener.ServerSession.connectionConsumerOnMessage(ServerSession.java:891)
at com.ibm.ejs.jms.listener.ServerSession.onMessage(ServerSession.java:656)
at com.ibm.ejs.jms.listener.ServerSession.dispatch(ServerSession.java:623)
at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:615)
at com.ibm.ejs.jms.listener.ServerSessionDispatcher.dispatch(ServerSessionDispatcher.java:37)
at com.ibm.ejs.container.MDBWrapper.onMessage(MDBWrapper.java:96)
at com.ibm.ejs.container.MDBWrapper.onMessage(MDBWrapper.java:132)
at com.ibm.ejs.jms.listener.ServerSession.run(ServerSession.java:481)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1473)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:615)
at org.hibernate.transaction.WebSphereExtendedJTATransactionLookup$TransactionManagerAdapter$TransactionAdapter.registerSynchronization(WebSphereExtendedJTATransactionLookup.java:225)
... 30 more
Caused by: com.ibm.websphere.jtaextensions.NotSupportedException
at com.ibm.ws.jtaextensions.ExtendedJTATransactionImpl.registerSynchronizationCallbackForCurrentTran(ExtendedJTATransactionImpl.java:247)
... 34 more
Could anybody help shed some light on what com.ibm.websphere.jtaextensions.NotSupportedException means? The IBM documentation says:
"The exception is thrown by the transaction manager if an attempt is made to register a SynchronizationCallback in an environment or at a time when this function is not available. "
Which to me sounds like the container is rejecting Hibernate's call to start a transaction. If anybody has any idea why the container could be throwing this exception, please let me know.
Thanks in advance
Karl
If you really need high load, I would remove the Hibernate layer between your app and the database. Without Hibernate you have fewer moving parts and more control.
That is the only advice I can give you.
In case anyone is interested: it was a thread that was trying to sync the transaction after the transaction had timed out.
I had assumed that if the transaction timed out, the thread would have been killed; however, this was not the case.
karl
