Getting OutOfMemoryError from Nifi encryptconfig command - apache-nifi

I am getting the following exception while executing this command:
./encrypt-config.sh -n $NIFI_HOME/conf/nifi.properties -f /opt/nifi/data/flow.xml.gz -s password -x
ERROR [main] org.apache.nifi.toolkit.encryptconfig.EncryptConfigMain: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
at java.lang.StringBuilder.append(StringBuilder.java:190)
at org.apache.commons.io.output.StringBuilderWriter.write(StringBuilderWriter.java:142)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2538)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2516)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2493)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2441)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:1084)
at org.apache.commons.io.IOUtils$toString.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at org.apache.nifi.properties.ConfigEncryptionTool$_loadFlowXml_closure3$_closure29.doCall(ConfigEncryptionTool.groovy:666)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at groovy.lang.Closure.call(Closure.java:414)
at groovy.lang.Closure.call(Closure.java:430)
at org.codehaus.groovy.runtime.IOGroovyMethods.withCloseable(IOGroovyMethods.java:1622)
at org.codehaus.groovy.runtime.NioGroovyMethods.withCloseable(NioGroovyMethods.java:1759)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:54)
Java heap space
I tried increasing the memory after seeing the following forum post:
https://community.cloudera.com/t5/Support-Questions/Getting-OutOfMemoryError-from-Nifi-encryptconfig/td-p/284446
But the issue is still not resolved. Can anyone help?
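For reference, this is how I tried raising the heap. In many nifi-toolkit releases the launcher script reads the JAVA_OPTS environment variable (falling back to a small default heap), so exporting it before the run should work; if your copy of encrypt-config.sh hard-codes the -Xms/-Xmx flags, you would need to edit the java invocation inside the script instead:

# Assumption: encrypt-config.sh honours JAVA_OPTS; verify against your script.
export JAVA_OPTS="-Xms1g -Xmx4g"
./encrypt-config.sh -n $NIFI_HOME/conf/nifi.properties -f /opt/nifi/data/flow.xml.gz -s password -x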

I think this is an issue reported against older versions, and it appears to be fixed in NiFi 1.13.0.
You can refer to the following JIRA ticket.
This issue happens if your flow file contains large data. You have to either clean up the flow file or upgrade to version 1.13.0 before upgrading to the latest version.
https://issues.apache.org/jira/browse/NIFI-6999
Note: If you are not upgrading, you can manually delete some of the NiFi templates in the NiFi UI to reduce the size of flow.xml.gz, if applicable. Then you can upgrade to a higher version.
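If you prefer to script that cleanup instead of clicking through the UI, the same templates can be removed via NiFi's REST API. A sketch, assuming an unsecured instance on localhost:8080 (a secured instance needs authentication, and the template id placeholder is yours to fill in):

# List templates to find their ids, then delete the large ones.
curl -s http://localhost:8080/nifi-api/flow/templates
curl -X DELETE http://localhost:8080/nifi-api/templates/<template-id>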

Related

HDFS delete command results in: ArrayIndexOutOfBoundsException and "RemoteException in offerService"

I notice that HDFS delete commands will randomly fail. For example, in a MapReduce job I delete a directory at startup. Occasionally it fails with the error below, but will succeed on the second try.
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): java.lang.ArrayIndexOutOfBoundsException
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.delete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:513)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1862)
at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:599)
at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:595)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Digging into the datanode logs I see this exception:
RemoteException in offerService
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy17.blockReport(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:175)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:493)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:716)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:851)
at java.lang.Thread.run(Thread.java:745)
I can't find any help on that, and I'm not sure where else to look. Does anyone know what that issue is?
I'm running Ubuntu 14.04.1 LTS and hadoop version prints out:
Hadoop 2.5.0-cdh5.3.1
Subversion http://github.com/cloudera/hadoop -r 4cda8416c73034b59cc8baafbe3666b074472846
Compiled by jenkins on 2015-01-28T00:41Z
Compiled with protoc 2.5.0
From source with checksum 6a018149a764de4b8992755df9a2a1b
This command was run using /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/jars/hadoop-common-2.5.0-cdh5.3.1.jar
Thanks for your help!
I haven't tried it yet, but Cloudera suggested upgrading to 5.3.3:
http://community.cloudera.com/t5/Storage-Random-Access-HDFS/HDFS-delete-command-results-in-ArrayIndexOutOfBoundsException/m-p/31817#U31817
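Until that upgrade is in place, the fact that the delete reliably succeeds on the second try suggests a retry wrapper as a stopgap. A minimal sketch, assuming nothing from the original job beyond FileSystem.delete (the path argument and retry counts are illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RetryingDelete {
    // Retries fs.delete(), since the failure is transient and reportedly
    // succeeds on the next attempt.
    static boolean deleteWithRetry(FileSystem fs, Path path, int attempts)
            throws IOException, InterruptedException {
        for (int i = 1; i <= attempts; i++) {
            try {
                return fs.delete(path, true); // recursive delete
            } catch (IOException e) {          // covers RemoteException
                if (i == attempts) throw e;    // out of retries, rethrow
                Thread.sleep(1000L * i);       // simple linear backoff
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        deleteWithRetry(fs, new Path(args[0]), 3);
    }
}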

NoSuchMethodError using Guava 15 on Hadoop (2.3.0)

I have a compiled jar for Hadoop including this library:
com.google.guava:guava:jar:15.0:compile
When I submit it to my Hadoop CDH 5.0.1 cluster I get this error:
java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.createStarted()Lcom/google/common/base/Stopwatch;
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:152)
at com.trovit.pipeline.AdsPipelineDriver.main(AdsPipelineDriver.java:17)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
The main thing is that Hadoop has an older version of Guava on its classpath; it is loaded before mine, and the job crashes because the method used doesn't exist in that version.
I've tried configuration parameters such as mapreduce.task.classpath.user.precedence, mapreduce.task.classpath.first and mapreduce.job.user.classpath.first, but none of them worked.
Any ideas for solving this problem?
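One technique that usually works when the classpath-precedence switches fail is to shade and relocate Guava inside the job jar, so your code never resolves Hadoop's copy at all. A minimal maven-shade-plugin sketch (the plugin version and shaded package name are illustrative, not from the original build):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Rewrite Guava packages in the job jar so they cannot
               collide with the older Guava on the cluster classpath. -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>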

Sonar Eclipse plugin: Failed to download batch_bootstrap/db

I'm having issues with the Eclipse plugin for Sonar. When executing it, I'm getting the following exception in the Eclipse console:
Exception in thread "main" org.sonar.runner.impl.RunnerException: Unable to execute Sonar
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91)
at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
at org.sonar.runner.impl.BatchLauncherMain.execute(BatchLauncherMain.java:41)
at org.sonar.runner.impl.BatchLauncherMain.main(BatchLauncherMain.java:59)
Caused by: org.sonar.api.utils.HttpDownloader$HttpException: Fail to download [http://<server>/batch_bootstrap/db?project=<project>]. Response code: 500
at org.sonar.api.utils.HttpDownloader$BaseHttpDownloader$HttpInputSupplier.getInput(HttpDownloader.java:281)
at org.sonar.api.utils.HttpDownloader$BaseHttpDownloader$HttpInputSupplier.getInput(HttpDownloader.java:235)
at com.google.common.io.ByteStreams.copy(ByteStreams.java:116)
at com.google.common.io.Files.copy(Files.java:231)
at org.sonar.batch.bootstrap.ServerClient.download(ServerClient.java:69)
at org.sonar.batch.bootstrap.PreviewDatabase.downloadDatabase(PreviewDatabase.java:101)
at org.sonar.batch.bootstrap.PreviewDatabase.start(PreviewDatabase.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1015)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1008)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:766)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:91)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:92)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:74)
at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
... 6 more
We're running SonarQube Server 4.1.2; the plugin version is 3.3.0.
Any ideas?
Update
Could this be related to poor DB performance (even though it's not a timeout exception)? Sonar runs on an Oracle DB, and we just updated Sonar, which resolved issues with excessively long query times (though, weirdly, only for logged-in users). I added login information in the plugin settings, though, so I assumed it should work.
When opening a source file in Eclipse, I'm getting messages like
Retrieve issues of resource ResultUtil.java...
Done in 43625ms
which sounds like the query is running way longer than desirable.
Update 2:
We realized that even though the project existed in SonarQube and had been analyzed, it was not provisioned (the project was created with a pre-4.0 version of SonarQube). To rule out that this caused the issue, we provisioned a new project in SonarQube, ran the analysis on that, and configured the Eclipse plugin to use this new project.
This did not resolve the issue; we're either getting the same exception or no further message at all. Calling the URL in a web browser yields the same result (i.e. it endlessly says "waiting for <server>").
The "SonarQube Web Browser" view of the Eclipse plugin correctly shows issues, which I take as a sign that the plugin is configured correctly. Also, for some reason querying the issues from SonarQube is now a lot faster, so I would rule out that this caused the issue.
Any more suggestions on what we could check?
Update 3
We tried restarting SonarQube and then tried to analyze the project using the Eclipse plugin. This failed with the above exception. Checking the SonarQube logs revealed these issues:
2014.03.19 11:17:35 ERROR [o.s.c.p.DbTemplate] Fail to copy table rules
org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: "RULES_PLUGIN_KEY_AND_NAME ON PUBLIC.RULES(PLUGIN_RULE_KEY, PLUGIN_NAME)"; SQL statement:
INSERT INTO rules(ID,PLUGIN_RULE_KEY,PLUGIN_NAME,DESCRIPTION,PRIORITY,CARDINALITY,PARENT_ID,PLUGIN_CONFIG_KEY,NAME,STATUS,LANGUAGE,CREATED_AT,UPDATED_AT) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?) [23505-172]
at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1167) ~[h2-1.3.172.jar:1.3.172]
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297) ~[commons-dbcp-1.4.jar:1.4]
at org.sonar.core.persistence.DbTemplate.copyTableColumns(DbTemplate.java:100) [sonar-core-4.1.2.jar:na]
at org.sonar.core.persistence.DbTemplate.copyTableColumns(DbTemplate.java:54) [sonar-core-4.1.2.jar:na]
at org.sonar.core.persistence.DbTemplate.copyTable(DbTemplate.java:49) [sonar-core-4.1.2.jar:na]
at org.sonar.core.persistence.PreviewDatabaseFactory.copy(PreviewDatabaseFactory.java:87) [sonar-core-4.1.2.jar:na]
at org.sonar.core.persistence.PreviewDatabaseFactory.createNewDatabaseForDryRun(PreviewDatabaseFactory.java:63) [sonar-core-4.1.2.jar:na]
at org.sonar.core.preview.PreviewCache.generateNewDB(PreviewCache.java:121) [sonar-core-4.1.2.jar:na]
at org.sonar.core.preview.PreviewCache.getDatabaseForPreview(PreviewCache.java:81) [sonar-core-4.1.2.jar:na]
at org.sonar.server.ui.JRubyFacade.createDatabaseForPreview(JRubyFacade.java:471) [JRubyFacade.class:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
(... snip ...)
followed by
2014.03.19 11:17:35 ERROR [o.s.s.ui.JRubyFacade] Fail to render: http://hulk:9000/batch_bootstrap/db?project=E2PR-12.0
attempt to unlock read lock, not locked by current thread
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:447)
java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:431)
java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1340)
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:883)
org.sonar.core.preview.PreviewCache.getDatabaseForPreview(PreviewCache.java:92)
org.sonar.server.ui.JRubyFacade.createDatabaseForPreview(JRubyFacade.java:471)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
(... snip ...)
Subsequent analysis runs triggered by the Eclipse plugin seem to hang at:
11:28:46.317 DEBUG - Download: http://hulk:9000/batch_bootstrap/db?project=E2PR-12.0 (no proxy)
I can provide the logfile if that would help you to find out what's wrong.
It looks like your project has not been analyzed in SonarQube prior to running preview analyses in Eclipse. See http://docs.codehaus.org/display/SONAR/Configuring+SonarQube+in+Eclipse.
So, after a lot of digging I was able to resolve this issue. What pointed me in the right direction was this discussion: http://comments.gmane.org/gmane.comp.java.sonar.general/33747
In the end this was caused by duplicate manual rules in the RULES table. I have no idea how they ended up there, but they were not visible from the UI, which made them rather hard to spot.
After deleting these rules in the DB, I'm now able to trigger the analysis from Eclipse.
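For anyone needing to hunt down the same duplicates: since the failing copy violates the unique index on (PLUGIN_RULE_KEY, PLUGIN_NAME) shown in the log, a grouping query along these lines should surface them (column names are taken from the log; adjust casing and quoting to your database):

-- Find rule rows that collide on the unique index the preview copy trips over.
SELECT plugin_rule_key, plugin_name, COUNT(*) AS duplicates
FROM rules
GROUP BY plugin_rule_key, plugin_name
HAVING COUNT(*) > 1;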

Hive and Cassandra integration using CqlStorageHandler

I referred to this Git project for integrating Cassandra data using a Hive table. I copied the appropriate Cassandra jars into the Hive lib folder, but while running the query against Cassandra I'm getting the following error. Please help me resolve it.
https://github.com/milliondreams/hive/tree/cas-support-cql/cassandra-handler
hive> CREATE EXTERNAL TABLE messages(row_key string, col1 string, col2 string)
STORED BY 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH SERDEPROPERTIES("cql.primarykey" = "row_key")
TBLPROPERTIES ("cassandra.ks.name" = "mycqlks", "cassandra.ks.stratOptions"="'DC':1, 'DC2':1",
"cassandra.ks.strategy"="NetworkTopologyStrategy");
java.lang.NoSuchMethodError: org.apache.hadoop.hive.metastore.MetaStoreUtils.getSchema(Lorg/apache/hadoop/hive/metastore/api/Table;)Ljava/util/Properties;
at org.apache.hadoop.hive.cassandra.cql.CqlManager.createColumnFamily(CqlManager.java:238)
at org.apache.hadoop.hive.cassandra.cql.CqlManager.createCFIfNotFound(CqlManager.java:189)
at org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler.preCreateTable(CqlStorageHandler.java:247)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:462)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:455)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at com.sun.proxy.$Proxy11.createTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:596)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3776)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:256)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:144)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1355)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1139)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:945)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.DDLTask
Which version of Hive are you using?
It has to be Hive 0.9, as per https://github.com/milliondreams/hive/tree/cas-support-cql/cassandra-handler
I think you are using a version >= 0.11.0.
Version 0.9.0: http://svn.apache.org/repos/asf/hive/tags/release-0.9.0/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
Version 0.10.0: http://svn.apache.org/repos/asf/hive/tags/release-0.10.0/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
Version 0.11.0: http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
The single-argument method org.apache.hadoop.hive.metastore.MetaStoreUtils.getSchema(Table) is missing in 0.11.0.
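If you want to confirm which overloads your installation actually ships, a small reflection probe run with the Hive jars on the classpath will list them (the class name comes from the stack trace; nothing else is assumed):

import java.lang.reflect.Method;

public class CheckGetSchema {
    // Prints every getSchema overload exposed by the MetaStoreUtils class
    // actually on the classpath, to confirm the version mismatch.
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName(
                "org.apache.hadoop.hive.metastore.MetaStoreUtils");
        for (Method m : c.getMethods()) {
            if (m.getName().equals("getSchema")) {
                System.out.println(m);
            }
        }
    }
}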

Grails' save(flush:true) hangs thread in socketRead0

There is a transactional=true service in Grails 2.2 where I do just the following:
...
client.save(flush: true) // it hangs here
...
If I remove flush:true, the execution passes further although it hangs calling a withTransaction later.
I have never met such a problem with my current large project before.
I have an idea to throw out some of the flushes and withTransactions, but that's undesirable because there are a bunch of them in the code currently; it would be a painful change even if it fixes the problem.
Any idea how to cure / investigate that?
Grails 2.2 / Java 1.6 / Windows 7 x64 / Oracle XE
Hung thread's stack trace:
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.read(SocketInputStream.java:150)
java.net.SocketInputStream.read(SocketInputStream.java:121)
oracle.net.ns.Packet.receive(Unknown Source)
oracle.net.ns.DataPacket.receive(Unknown Source)
oracle.net.ns.NetInputStream.getNextPacket(Unknown Source)
oracle.net.ns.NetInputStream.read(Unknown Source)
oracle.net.ns.NetInputStream.read(Unknown Source)
oracle.net.ns.NetInputStream.read(Unknown Source)
oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:971)
oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:941)
oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:432)
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:181)
oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStatement.java:543)
oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:8674)
- locked oracle.jdbc.driver.T4CPreparedStatement#36df36e5
- locked oracle.jdbc.driver.T4CConnection#788d1087
org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:114)
org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:109)
org.hibernate.jdbc.AbstractBatcher.prepareBatchStatement(AbstractBatcher.java:244)
org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2412)
org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2875)
org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273)
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265)
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
org.codehaus.groovy.grails.orm.hibernate.events.PatchedDefaultFlushEventListener.performExecutions(PatchedDefaultFlushEventListener.java:46)
org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216)
org.codehaus.groovy.grails.orm.hibernate.metaclass.SavePersistentMethod.flushSession(SavePersistentMethod.java:87)
org.codehaus.groovy.grails.orm.hibernate.metaclass.SavePersistentMethod$1.doInHibernate(SavePersistentMethod.java:60)
org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:406)
org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:339)
org.codehaus.groovy.grails.orm.hibernate.metaclass.SavePersistentMethod.performSave(SavePersistentMethod.java:56)
org.codehaus.groovy.grails.orm.hibernate.metaclass.AbstractSavePersistentMethod.doInvokeInternal(AbstractSavePersistentMethod.java:215)
org.codehaus.groovy.grails.orm.hibernate.metaclass.AbstractDynamicPersistentMethod.invoke(AbstractDynamicPersistentMethod.java:63)
sun.reflect.GeneratedMethodAccessor662.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1243)
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:189)
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:55)
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
org.codehaus.groovy.grails.orm.hibernate.HibernateGormInstanceApi.save(HibernateGormEnhancer.groovy:911)
xxx.domain.Consumer.save(Consumer.groovy)
xxx.domain.Consumer$save.call(Unknown Source)
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
xxx.UserService.createUser(UserService.groovy:350)
............
Your stacktrace points at the likely problem:
oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:8674)
- locked oracle.jdbc.driver.T4CPreparedStatement#36df36e5
- locked oracle.jdbc.driver.T4CConnection#788d1087
You have a stuck thread (or threads) on your database, and the app is hanging waiting for them. You need to check your database status to see if you can determine which queries are hanging. It's also possible that updating to the latest JDBC driver will help.
The reason this happens when you use flush: true is that it forces your Hibernate session to flush to the database, where it encounters the stuck thread.
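To check the database side, a query against Oracle's v$session along these lines usually shows which sessions are blocked and which session is blocking them (requires appropriate privileges; a sketch, not specific to XE):

-- Sessions currently blocked, with the id of the blocking session.
SELECT sid, serial#, username, event, blocking_session, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;

A common culprit is another session holding row locks in an uncommitted transaction; the INSERT issued by the flush then waits on that lock indefinitely.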
