SonarQube 6.7 upgrade failure: "Unrecoverable indexation failures"

We are attempting to upgrade from SonarQube 5.6.7 to SonarQube 6.7.2. I followed the steps outlined here: https://docs.sonarqube.org/display/SONAR/Upgrading.
I have more than 300 GB available on the partition that Elasticsearch is using, so disk space does not appear to be the cause of this problem.
The exception:
2018.03.21 11:13:10 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
java.lang.IllegalStateException: Unrecoverable indexation failures
at org.sonar.server.es.IndexingListener$1.onFinish(IndexingListener.java:39)
at org.sonar.server.es.BulkIndexer.stop(BulkIndexer.java:117)
at org.sonar.server.issue.index.IssueIndexer.doIndex(IssueIndexer.java:247)
at org.sonar.server.issue.index.IssueIndexer.indexOnStartup(IssueIndexer.java:95)
at org.sonar.server.es.IndexerStartupTask.indexUninitializedTypes(IndexerStartupTask.java:68)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.sonar.server.es.IndexerStartupTask.execute(IndexerStartupTask.java:55)
at java.util.Optional.ifPresent(Optional.java:159)
at org.sonar.server.platform.platformlevel.PlatformLevelStartup$1.doPrivileged(PlatformLevelStartup.java:84)
at org.sonar.server.user.DoPrivileged.execute(DoPrivileged.java:45)
at org.sonar.server.platform.platformlevel.PlatformLevelStartup.start(PlatformLevelStartup.java:80)
at org.sonar.server.platform.Platform.executeStartupTasks(Platform.java:196)
at org.sonar.server.platform.Platform.access$400(Platform.java:46)
at org.sonar.server.platform.Platform$1.lambda$doRun$1(Platform.java:121)
at org.sonar.server.platform.Platform$AutoStarterRunnable.runIfNotAborted(Platform.java:371)
at org.sonar.server.platform.Platform$1.doRun(Platform.java:121)
at org.sonar.server.platform.Platform$AutoStarterRunnable.run(Platform.java:355)
at java.lang.Thread.run(Thread.java:745)
Partition configuration:
[dssc100[DEV]#omhqp13890 bin]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
... Other volumes omitted ...
/dev/mapper/Volume00-upapps
464422672 127511888 313323572 29% /upapps
At one point I attempted to run the upgrade with logging set to debug. This generated 6 GB of log files, and I was unable to find anything that seemed out of the ordinary.
We have around 6,000 projects in this installation, some of which have several years of history. I would like to preserve that rich history. What can I do, or what should I look for, as a possible solution?

You seem to have hit SONAR-10502, which will be fixed in 6.7.3 and 7.1.

Related

GC overhead limit exceeded running background task in version 5.5

I am running SonarQube 5.5 with the following wrapper config settings.
wrapper.java.initmemory=3
wrapper.java.maxmemory=4096
I am still getting the following stack trace; this project ran successfully with SonarQube 5.3.
2016.05.09 11:14:09 INFO [o.s.s.c.s.ComputationStepExecutor] Compute coverage measures | time=105ms
2016.05.09 11:14:09 INFO [o.s.s.c.s.ComputationStepExecutor] Compute comment measures | time=120ms
2016.05.09 11:14:14 INFO [o.s.s.c.s.ComputationStepExecutor] Copy custom measures | time=5667ms
2016.05.09 11:14:15 INFO [o.s.s.c.s.ComputationStepExecutor] Compute duplication measures | time=424ms
2016.05.09 11:14:26 ERROR [o.s.s.c.c.ComputeEngineContainerImpl] Cleanup of container failed
java.lang.OutOfMemoryError: GC overhead limit exceeded
2016.05.09 11:14:26 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AVSWNiXkOySW07vtMalp
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:3664) ~[na:1.8.0_45]
at java.lang.StringBuffer.toString(StringBuffer.java:671) ~[na:1.8.0_45]
at java.io.StringWriter.toString(StringWriter.java:210) ~[na:1.8.0_45]
at org.apache.commons.lang.Entities.escape(Entities.java:838) ~[commons-lang-2.6.jar:2.6]
at org.apache.commons.lang.StringEscapeUtils.escapeXml(StringEscapeUtils.java:620) ~[commons-lang-2.6.jar:2.6]
at org.sonar.server.computation.step.DuplicationDataMeasuresStep$DuplicationVisitor.appendDuplication(DuplicationDataMeasuresStep.java:129) ~[sonar-server-5.5.jar:na]
Memory adjustments must be made in sonar.properties:
sonar.web.javaOpts (for the Web Server JVM)
sonar.ce.javaOpts (for the Compute Engine JVM)
sonar.search.javaOpts (for the JVM running Elasticsearch)
In your case the memory exception occurs in a background task, so it relates to the Compute Engine (see the SonarQube architecture documentation for more insight).
Settings in wrapper.conf are not relevant here and should be left untouched (hence the # DO NOT EDIT THE FOLLOWING SECTIONS warning in that file).
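As an illustration only (the heap size below is an assumption, not part of the original answer), raising the Compute Engine memory in sonar.properties looks something like this:
# sonar.properties excerpt: raise the Compute Engine heap; size the value for your own server
sonar.ce.javaOpts=-Xmx2048m -Xms512m -XX:+HeapDumpOnOutOfMemoryError
A restart of SonarQube is required for the change to take effect.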

Getting java.lang.NoSuchFieldError: INT_8 error while running spark job through oozie

I am getting a java.lang.NoSuchFieldError: INT_8 error when I try to execute a Spark job using Oozie on Cloudera 5.5.1.
Any help on this will be appreciated.
Please find the error stack trace below.
16/01/28 11:21:17 WARN TaskSetManager: Lost task 0.2 in stage 20.0 (TID 40, Zlab-physrv1): java.lang.NoSuchFieldError: INT_8
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:327)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:312)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convertField$1.apply(CatalystSchemaConverter.scala:517)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convertField$1.apply(CatalystSchemaConverter.scala:516)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:108)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:516)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:312)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:521)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convertField(CatalystSchemaConverter.scala:312)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convert$1.apply(CatalystSchemaConverter.scala:305)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$$anonfun$convert$1.apply(CatalystSchemaConverter.scala:305)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.types.StructType.foreach(StructType.scala:92)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at org.apache.spark.sql.types.StructType.map(StructType.scala:92)
at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter.convert(CatalystSchemaConverter.scala:305)
at org.apache.spark.sql.execution.datasources.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetTypesConverter.scala:58)
at org.apache.spark.sql.execution.datasources.parquet.RowWriteSupport.init(ParquetTableSupport.scala:55)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:277)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:251)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetRelation.scala:94)
at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anon$3.newInstance(ParquetRelation.scala:272)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:233)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
In my experience, this error normally appears whenever there is a mismatch between the jars used to build the code and the jars actually on the classpath at runtime.
Note: when I submit the same job using the spark-submit command, it runs fine.
Regards
Nisith
I was finally able to debug and fix the issue. The problem was with the installation: one of the data nodes had an older version of the Parquet jars (from the CDH 5.2 distribution). After replacing them with the current-version jars, everything worked fine.
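If you need to find which node carries the stale jars, a quick inventory along these lines can help. This is only a sketch: hosts.txt and the search paths are placeholders that will differ per cluster.
# list the Parquet jar versions present on each node (hosts.txt is a placeholder list of hostnames)
for host in $(cat hosts.txt); do
  echo "== $host =="
  ssh "$host" 'find /opt/cloudera /usr/lib -name "parquet-*.jar" 2>/dev/null | xargs -r -n1 basename | sort -u'
done
Any node whose jar versions differ from the rest of the cluster is the likely culprit.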

Apache Spark EC2 job not running. No space left on device

I have been running my program multiple times on a 20-node cluster. All of a sudden, every time I run the program I get the following error:
15/04/19 16:52:35 WARN scheduler.TaskSetManager: Lost task 35.0 in stage 9.0 (TID 384, ip-XXX.XXX.compute.internal): java.io.FileNotFoundException: /mnt/spark/spark-local-XXX-ebd3/18/shuffle_2_35_64 (No space left on device)
java.io.FileOutputStream.open(Native Method)
java.io.FileOutputStream.<init>(FileOutputStream.java:221)
org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:123)
org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:192)
org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:67)
org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:65)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:65)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
Checking the UI, it says there's absolutely nothing on the nodes. I have run the program maybe 15 times, and this has only just started happening. Why has this occurred out of the blue, and how do I fix it?
"No space left on device" is a quite clear exception: That node has no space left on the mount where the spark local files are being written: /mnt/spark/
Solution: go to the node (or nodes) and clean that up. rm -rf FTW.
If jobs are breaking before they terminate, due to manual intervention or failure, they will often leave temp data behind.
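A minimal cleanup sketch, assuming the /mnt/spark path from the error above; adjust it if your spark.local.dir points elsewhere, and only run it while no job is active:
# check how full the mount holding Spark's local/shuffle files is
df -h /mnt/spark
# remove leftover per-application temp directories from jobs that died without cleaning up after themselves
rm -rf /mnt/spark/spark-local-*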

Accumulo on Cloudera CDH4 - Access denied when starting components

I have a small cluster up and running with Cloudera CDH4 Hadoop and MapReduce v1. The Namenode, Secondary Namenode, and Jobtracker are all on different machines. My three servers are also acting as ZooKeeper servers.
I'm trying to install Accumulo 1.4.4 on top of this cluster. I get the same behavior with Accumulo 1.5.0. I am able to run bin/accumulo init and initialize Accumulo, but starting the individual components fails. I'm trying to make my Namenode the Accumulo master.
bin/start-server.sh localhost monitor spits out a very encouraging Starting monitor on localhost, but nothing gets started. If I examine logs/monitor_localhost.err I find a stack trace:
-bash-4.1$ cat logs/monitor_localhost.err
Thread "monitor" died null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.accumulo.start.Main$1.run(Main.java:91)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.ExceptionInInitializerError
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2464)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2456)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2323)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
at org.apache.accumulo.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(ZooKeeperInstance.java:258)
at org.apache.accumulo.server.conf.ZooConfiguration.getInstance(ZooConfiguration.java:65)
at org.apache.accumulo.server.conf.ServerConfiguration.getZooConfiguration(ServerConfiguration.java:49)
at org.apache.accumulo.server.conf.ServerConfiguration.getSystemConfiguration(ServerConfiguration.java:58)
at org.apache.accumulo.server.monitor.Monitor.run(Monitor.java:440)
at org.apache.accumulo.server.monitor.Monitor.main(Monitor.java:433)
... 6 more
Caused by: java.security.AccessControlException: access denied (java.lang.RuntimePermission accessDeclaredMembers)
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:399)
at java.security.AccessController.checkPermission(AccessController.java:557)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.Class.checkMemberAccess(Class.java:2237)
at java.lang.Class.getDeclaredFields(Class.java:1805)
at org.apache.hadoop.util.ReflectionUtils.getDeclaredFieldsIncludingInherited(ReflectionUtils.java:315)
at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.initRegistry(MetricsSourceBuilder.java:92)
at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.<init>(MetricsSourceBuilder.java:56)
at org.apache.hadoop.metrics2.lib.MetricsAnnotations.newSourceBuilder(MetricsAnnotations.java:42)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:212)
at org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:54)
at org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:97)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:190)
... 18 more
The AccessControlException: access denied looks like the important line to me, but I can't imagine what access is being restricted. I'm running everything as the hdfs user, which owns the entire /opt/accumulo-1.4.4/ directory where Accumulo is un-tarred. The /accumulo directory in HDFS is also owned by the hdfs user. SELinux is permissive. Searching online has proved fruitless; has anyone dealt with this error before?
Much thanks.
I started browsing the Apache accumulo-users mailing list archive and came across the solution.
http://mail-archives.apache.org/mod_mbox/accumulo-user/201312.mbox/%3CB9CB2B2BF27F0F46B8ECF781831E00E710970A9F%400015-its-exmb10.us.saic.com%3E
I had copied accumulo.policy.example to accumulo.policy because I thought I needed it in my configuration. Once I deleted the accumulo.policy file, my issues went away and I have been able to stand up Accumulo (1.5.0 at least; 1.4.4 still has some issues for me).
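In other words, roughly the following, assuming the copied policy file lives in the conf directory of the install mentioned above (adjust the path to your layout):
# remove the copied Java security policy file, then start the monitor again
rm /opt/accumulo-1.4.4/conf/accumulo.policy
bin/start-server.sh localhost monitor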

Hbase 0.20.6 Can not start master exception

I'm using HBase 0.20.6 with Hadoop 0.21.0 on Ubuntu 10.04 LTS, and I get a "can not start master" error. (The error is attached at the end of the post, taken from the hbase-root-master-ubuntu.log file.)
Does HBase 0.20.6 work with Hadoop 0.21.0? If it does not, is there a workaround?
What is the source of the problem?
Thanks for your time and consideration.
The Log :
java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:195)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
Fri Dec 24 14:02:12 EET 2010 Starting master on ubuntu
ulimit -n 1024
2010-12-24 14:02:13,267 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Client VM, vmVendor=Sun Microsystems Inc., vmVersion=17.1-b03
2010-12-24 14:02:13,268 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/usr/lib/hbase/bin/../logs, -Dhbase.log.file=hbase-root-master-ubuntu.log, -Dhbase.home.dir=/usr/lib/hbase/bin/.., -Dhbase.id.str=root, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/usr/lib/hbase/bin/../lib/native/Linux-i386-32]
2010-12-24 14:02:13,353 INFO org.apache.hadoop.hbase.master.HMaster: My address is ubuntu.ubuntu-domain:60000
2010-12-24 14:02:13,593 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
There has been a discussion about this on the HBase users mailing list recently; I would suggest reading it.
http://mail-archives.apache.org/mod_mbox/hbase-user/201012.mbox/%3CAANLkTimA7UQZAiG0810mtHtGk30x8ejGs+n5+CF8GxQ1#mail.gmail.com%3E
As a summary, I would quote what Ryan Rawson of StumbleUpon mentioned on the list:
HBase 0.20.6 is likely to run well on hadoop 21. We have many patches that help bolster durability on top of branch-20-append, and also some may apply to hadoop 21.
What you are possibly running in to is using hadoop 20 jars in hbase 0.90 on top of hadoop 21. Try deleting the hadoop 20 jars and copying in your hadoop 21.
Also consider running cdh3b2+, hadoop 21 is a panned release and no one runs it nor expects it to be run in a production setting.
We are using the HBase 0.90 RCs with Cloudera's CDH3b3 via Debian packages. In case you want to consider it, please refer to its installation page for details. I would also recommend this page for installation on a cluster. Download the latest HBase 0.90 RC from here.
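If you try the jar swap from the quote above, the general shape is sketched below. The paths and jar-name patterns are assumptions (Hadoop 0.21 split its jars into common/hdfs/mapred pieces), so verify them against your actual install before deleting anything:
# list the Hadoop jars bundled with HBase, remove the 0.20 ones, and copy in the 0.21 jars from your Hadoop install
ls /usr/lib/hbase/lib/hadoop-*.jar
rm /usr/lib/hbase/lib/hadoop-*0.20*.jar
cp /path/to/hadoop-0.21.0/hadoop-*-0.21.0.jar /usr/lib/hbase/lib/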
