GC overhead limit exceeded running background task in version 5.5 - SonarQube

I am running SonarQube 5.5 with the following wrapper config settings.
wrapper.java.initmemory=3
wrapper.java.maxmemory=4096
I am still getting the following stack trace; this project ran successfully with SonarQube 5.3.
2016.05.09 11:14:09 INFO [o.s.s.c.s.ComputationStepExecutor] Compute coverage measures | time=105ms
2016.05.09 11:14:09 INFO [o.s.s.c.s.ComputationStepExecutor] Compute comment measures | time=120ms
2016.05.09 11:14:14 INFO [o.s.s.c.s.ComputationStepExecutor] Copy custom measures | time=5667ms
2016.05.09 11:14:15 INFO [o.s.s.c.s.ComputationStepExecutor] Compute duplication measures | time=424ms
2016.05.09 11:14:26 ERROR [o.s.s.c.c.ComputeEngineContainerImpl] Cleanup of container failed
java.lang.OutOfMemoryError: GC overhead limit exceeded
2016.05.09 11:14:26 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AVSWNiXkOySW07vtMalp
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:3664) ~[na:1.8.0_45]
at java.lang.StringBuffer.toString(StringBuffer.java:671) ~[na:1.8.0_45]
at java.io.StringWriter.toString(StringWriter.java:210) ~[na:1.8.0_45]
at org.apache.commons.lang.Entities.escape(Entities.java:838) ~[commons-lang-2.6.jar:2.6]
at org.apache.commons.lang.StringEscapeUtils.escapeXml(StringEscapeUtils.java:620) ~[commons-lang-2.6.jar:2.6]
at org.sonar.server.computation.step.DuplicationDataMeasuresStep$DuplicationVisitor.appendDuplication(DuplicationDataMeasuresStep.java:129) ~[sonar-server-5.5.jar:na]

Memory adjustments must be made in sonar.properties:
sonar.web.javaOpts (for the Web Server JVM)
sonar.ce.javaOpts (for the Compute Engine JVM)
sonar.search.javaOpts (for the JVM running Elasticsearch)
In your case the memory exception occurs in a background task, so it relates to the Compute Engine (see the SonarQube architecture documentation for more insight).
Settings in wrapper.conf are not relevant here and should be left untouched (hence the # DO NOT EDIT THE FOLLOWING SECTIONS warning in the file).
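For reference, a minimal sonar.properties sketch that raises the Compute Engine heap (the 4 GB figure is an assumption; size it to your server and workload):
# sonar.properties: Compute Engine JVM options (heap values are examples)
sonar.ce.javaOpts=-Xmx4g -Xms512m -XX:+HeapDumpOnOutOfMemoryError
Restart SonarQube after editing so the Compute Engine picks up the new options.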

Related

Java Heap Space Error | OutOfMemory while deploying EAR in WebSphere

I am getting an OutOfMemory error while deploying an EAR file (size: 230 MB) in a WebSphere server.
Sometimes the deployment succeeds after increasing the heap size.
I have analyzed the heap dump and found leak suspects, but I am not sure how to proceed from here.
Leak suspect : 217,295,824 bytes (87.23 %) of Java heap is used by 105
instances of java/util/WeakHashMap$Entry
Contains 3 instances of the following leak suspects:
- array of java/lang/Object holding 16,235,440 bytes at 0x6a696c8
- array of java/lang/Object holding 101,373,968 bytes at 0x1125c240
- array of java/lang/Object holding 13,602,688 bytes at 0x5290818
Total size : 217,295,824 bytes
Size : 1,040 bytes
Name : array of java/util/WeakHashMap$Entry
Number of children : 105
Number of parents : 1
Owner address : 0x2e41fd0
Owner object : java/util/WeakHashMap
Address : 0xb4c2dc0
First single ancestor : org/eclipse/jst/j2ee/internal/archive/JavaEEArchiveUtilities at 0xb4c2dc0
and I am getting the below error in the WAS logs:
[main] INFO deploylib - Installing application...
ADMA5016I: Installation of Kijkglas-ear-1905.01.35 started.
ADMA5058I: Application and module versions are validated with versions of deployment targets.
ADMA5018I: The EJBDeploy program is running on file /tmp/app6232412827642995266.ear.
Starting workbench.
EJB Deploy configuration directory: /var/was/profiles/AdminAgent01/ejbdeploy/configuration/
framework search path: /opt/IBM/WebSphere/8.5/deploytool/itp/plugins
build:RADWEJB95-I20150829_0214
Creating the project.
JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2019/06/07 10:42:59 - please wait.
JVMDUMP032I JVM requested System dump using '/var/was/profiles/AdminAgent01/core.20190610.104259.30244.0001.dmp' in response to an event
JVMDUMP010I System dump written to /var/was/profiles/AdminAgent01/core.20190610.104259.30244.0001.dmp
JVMDUMP032I JVM requested Heap dump using '/var/was/profiles/AdminAgent01/heapdump.20190610.104259.30244.0002.phd' in response to an event
JVMDUMP010I Heap dump written to /var/was/profiles/AdminAgent01/heapdump.20190610.104259.30244.0002.phd
JVMDUMP032I JVM requested Java dump using '/var/was/profiles/AdminAgent01/javacore.20190610.104259.30244.0003.txt' in response to an event
JVMDUMP010I Java dump written to /var/was/profiles/AdminAgent01/javacore.20190610.104259.30244.0003.txt
JVMDUMP032I JVM requested Snap dump using '/var/was/profiles/AdminAgent01/Snap.20190610.104259.30244.0004.trc' in response to an event
JVMDUMP010I Snap dump written to /var/was/profiles/AdminAgent01/Snap.20190610.104259.30244.0004.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
An unexpected exception was thrown. Halting execution. Shutting down workbench.
Error executing deployment: java.lang.OutOfMemoryError. Error is Java heap space.
java.lang.OutOfMemoryError: Java heap space
at java.lang.Throwable.fillInStackTrace(Native Method)
at java.lang.Throwable.<init>(Throwable.java:67)
at java.lang.Throwable.<init>(Throwable.java:78)
at java.lang.Error.<init>(Error.java:82)
at java.lang.VirtualMachineError.<init>(VirtualMachineError.java:64)
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:69)
at java.lang.String.<init>(String.java:207)
at java.util.jar.Attributes.read(Attributes.java:424)
at java.util.jar.Manifest.read(Manifest.java:264)
at java.util.jar.Manifest.<init>(Manifest.java:82)
at java.util.jar.JarFile.getManifestFromReference(JarFile.java:200)
at java.util.jar.JarFile.getManifest(JarFile.java:182)
at sun.net.www.protocol.jar.URLJarFile.isSuperMan(URLJarFile.java:187)
at sun.net.www.protocol.jar.URLJarFile.getManifest(URLJarFile.java:155)
at java.util.jar.JarFile.maybeInstantiateVerifier(JarFile.java:387)
at java.util.jar.JarFile.getInputStream(JarFile.java:488)
at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:178)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.impl.xs.opti.SchemaParsingConfig.parse(Unknown Source)
at org.apache.xerces.impl.xs.opti.SchemaParsingConfig.parse(Unknown Source)
at org.apache.xerces.impl.xs.opti.SchemaDOMParser.parse(Unknown Source)
at org.apache.xerces.impl.xs.traversers.XSDHandler.getSchemaDocument(Unknown Source)
at org.apache.xerces.impl.xs.traversers.XSDHandler.resolveSchema(Unknown Source)
at org.apache.xerces.impl.xs.traversers.XSDHandler.constructTrees(Unknown Source)
at org.apache.xerces.impl.xs.traversers.XSDHandler.constructTrees(Unknown Source)
at org.apache.xerces.impl.xs.traversers.XSDHandler.parseSchema(Unknown Source)
at org.apache.xerces.impl.xs.XMLSchemaLoader.loadSchema(Unknown Source)
at org.apache.xerces.impl.xs.XMLSchemaValidator.findSchemaGrammar(Unknown Source)
at org.apache.xerces.impl.xs.XMLSchemaValidator.handleStartElement(Unknown Source)
at org.apache.xerces.impl.xs.XMLSchemaValidator.startElement(Unknown Source)
at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
at org.apache.xerces.impl.XMLNSDocumentScannerImpl$NSContentDispatcher.scanRootElementHook(Unknown Source)
EJBDeploy level: #build#
ADMA5008E: The EJBDeploy program failed on file /tmp/app6232412827642995266.ear. Exception: com.ibm.etools.ejbdeploy.EJBDeploymentException: Error executing EJBDeploy
ADMA0063E: An error occurred during Enterprise JavaBeans (EJB) deployment. Exception: com.ibm.etools.ejbdeploy.EJBDeploymentException: Error executing EJBDeploy
ADMA5011I: The cleanup of the temp directory for application Kijkglas-ear-1905.01.35 is complete.
ADMA5014E: The installation of application Kijkglas-ear-1905.01.35 failed.
2019-06-10 10:43:05,625 [main] FATAL deploylib - Jython Exception in deploy.py :
2019-06-10 10:43:05,630 [main] FATAL deploylib - Traceback (most recent call last):
2019-06-10 10:43:05,630 [main] FATAL deploylib - File "/opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/deploylib/cfgfiles/gMyAppWA-assembled.cfg", line 606, in ? application.installApplication()
2019-06-10 10:43:05,630 [main] FATAL deploylib - File "<string>", line 779, in installApplication
2019-06-10 10:43:05,630 [main] FATAL deploylib - com.ibm.ws.scripting.ScriptingException: com.ibm.ws.scripting.ScriptingException: WASX7132E: Application install for /opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/Kijkglas-ear-1905.01.35.ear failed: see previous messages for details.
[2019-06-10 10:43:05] [/opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/deploylib/deploy.ksh] [ERROR] Command /var/was/profiles/AppSrv01/bin/wsadmin.sh -javaoption -Duser.timezone=CET -f deploy.py /opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/deploylib/cfgfiles/gMyAppWA-assembled.cfg /opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/deploylib/cfgfiles/gMyAppWA.TST failed.
[2019-06-10 10:43:05] [/opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/deploylib/deploy.ksh] [INFO ] See also deploy.log and wsadmin.log in deploylib-8.1.4 directory.
See /opt/Nolio/work/WAS/log/gMyAppWA/all/stdout.log.2019-06-10_10_37_57_285 and /opt/Nolio/work/WAS/gMyAppWA/all/1905.01.35/deploylib/deploy.log for more information
Is there any rogue process or something blocking in the background?
You didn't mention your version, your topology (single server or network deployment), or how you deploy your app (console, wsadmin, or other).
As you can see in the log, there is an OutOfMemoryError during the EJB deploy call.
You need to increase the memory for EJB deploy; you can set it either in a file or at the OS level. Check this post: Getting OutofMemory condition while deploying a large application in WebSphere Application Server
1) Set it in the install-root/deploytool/itp/ejbdeploy.sh file: add EJBDEPLOY_JVM_HEAP="-Xms1024m -Xmx1024m" at the beginning of the ejbdeploy.sh file.
2) Set it in the operating system environment: set EJBDEPLOY_JVM_HEAP='-Xms1024m -Xmx1024m' as an OS environment variable, as sketched below.
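For option 2, a minimal sketch, assuming a bash or ksh shell on the machine that runs the deployment (the heap values are examples):
# export the EJBDeploy heap settings before starting the deployment
export EJBDEPLOY_JVM_HEAP="-Xms1024m -Xmx1024m"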
I'd also increase the memory for your admin server in the AdminAgent01 profile, as it looks like you are using an admin agent.
I recommend not updating ejbdeploy.sh: when you update WebSphere to a new fix pack, ejbdeploy.sh will be restored.
Increasing the heap size through the admin console:
Log in to the admin console.
Go to "System administration" > "Deployment manager" > "Configuration" tab > "Server Infrastructure" section on the right > "Java and Process Management" > "Process definition".
Open the "Additional Properties" section on the right > "Environment Entries".
Create a "New" entry with the name EJBDEPLOY_JVM_HEAP and the value "-Xms256m -Xmx1024m".
Save and synchronize.
Restart the DM server.

How to solve SonarQube java.lang.OutOfMemoryError: Java heap space

I'm using the SonarQube Community Edition and I'm getting the following error:
Exception in thread "LOG_FLUSHER" Exception in thread "CHECKPOINT_WRITER" java.lang.OutOfMemoryError: Java heap space
at java.util.ArrayList.iterator(ArrayList.java:840)
at java.util.Collections$SynchronizedCollection.iterator(Collections.java:2031)
at com.persistit.Persistit.pollAlertMonitors(Persistit.java:2285)
at com.persistit.Persistit$LogFlusher.run(Persistit.java:192)
java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap$Values.iterator(HashMap.java:968)
at com.persistit.Persistit.earliestDirtyTimestamp(Persistit.java:1439)
at com.persistit.CheckpointManager.pollFlushCheckpoint(CheckpointManager.java:271)
at com.persistit.CheckpointManager.runTask(CheckpointManager.java:301)
at com.persistit.IOTaskRunnable.run(IOTaskRunnable.java:144)
at java.lang.Thread.run(Thread.java:748)
WARNING: WARN: [JOURNAL_FLUSHER] WARNING Journal flush operation took 7,078ms last 8 cycles average is 884ms
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 1:17.852s
ERROR: Error during SonarQube Scanner execution
ERROR: Java heap space
ERROR:
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "CLEANUP_MANAGER"
INFO: Final Memory: 40M/989M
INFO: ------------------------------------------------------------------------
The SonarQube Scanner did not complete successfully
I have changed the sizes in sonar.properties, but I'm still facing the same problem. How do I solve this?
sonar.web.javaOpts=-Xmx4G -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx4G -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xmx4G -Xms2048m -XX:+HeapDumpOnOutOfMemoryError
What you've changed are the settings that allocate memory to SonarQube itself.
What you need to change is the setting that allocates memory to the analysis process. You haven't said which analyzer you're using, so the details will vary a little, but:
for the SonarQube Scanner: export SONAR_SCANNER_OPTS="-Xmx512m"
for the SonarQube Scanner for Maven: export MAVEN_OPTS="-Xmx512m"
Large files in the project can also cause this problem. For me, a 50 MB XML file produced this error, and the file was not important to the analysis. I excluded the file in the configuration file (SonarQube.Analysis.xml) and the problem was solved.
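For instance, a minimal SonarQube.Analysis.xml sketch (the exclusion pattern is hypothetical; point it at your own oversized file):
<?xml version="1.0" encoding="utf-8"?>
<SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <!-- exclude the oversized file from analysis; the pattern below is an example -->
  <Property Name="sonar.exclusions">**/large-generated-file.xml</Property>
</SonarQubeAnalysisProperties>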

SonarQube 6.7 upgrade failure "Unrecoverable indexation failures"

We are attempting to upgrade from SonarQube 5.6.7 to SonarQube 6.7.2. I followed the steps outlined here https://docs.sonarqube.org/display/SONAR/Upgrading.
I have more than 300 GB available on the partition that Elasticsearch is using, so disk space doesn't seem to be related to this problem.
The exception:
2018.03.21 11:13:10 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
java.lang.IllegalStateException: Unrecoverable indexation failures
at org.sonar.server.es.IndexingListener$1.onFinish(IndexingListener.java:39)
at org.sonar.server.es.BulkIndexer.stop(BulkIndexer.java:117)
at org.sonar.server.issue.index.IssueIndexer.doIndex(IssueIndexer.java:247)
at org.sonar.server.issue.index.IssueIndexer.indexOnStartup(IssueIndexer.java:95)
at org.sonar.server.es.IndexerStartupTask.indexUninitializedTypes(IndexerStartupTask.java:68)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.sonar.server.es.IndexerStartupTask.execute(IndexerStartupTask.java:55)
at java.util.Optional.ifPresent(Optional.java:159)
at org.sonar.server.platform.platformlevel.PlatformLevelStartup$1.doPrivileged(PlatformLevelStartup.java:84)
at org.sonar.server.user.DoPrivileged.execute(DoPrivileged.java:45)
at org.sonar.server.platform.platformlevel.PlatformLevelStartup.start(PlatformLevelStartup.java:80)
at org.sonar.server.platform.Platform.executeStartupTasks(Platform.java:196)
at org.sonar.server.platform.Platform.access$400(Platform.java:46)
at org.sonar.server.platform.Platform$1.lambda$doRun$1(Platform.java:121)
at org.sonar.server.platform.Platform$AutoStarterRunnable.runIfNotAborted(Platform.java:371)
at org.sonar.server.platform.Platform$1.doRun(Platform.java:121)
at org.sonar.server.platform.Platform$AutoStarterRunnable.run(Platform.java:355)
at java.lang.Thread.run(Thread.java:745)
Partition configuration:
[dssc100[DEV]#omhqp13890 bin]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
... Other volumes omitted ...
/dev/mapper/Volume00-upapps
464422672 127511888 313323572 29% /upapps
At one point I did attempt to run the upgrade with the logging set to debug. This generated 6 GB of log files, and I was unable to find anything that seemed out of the ordinary.
We've got around 6k projects in this installation, some of which have several years of history. I would like to maintain that rich history. What can I do or look for as a possible solution?
You seem to have hit SONAR-10502, which is (will be) fixed in 6.7.3 and 7.1.

sonar-runner getting java.lang.ClassNotFoundException: org.picocontainer.Startable

I upgraded my test Sonar server to 5.2, am using sonar-runner-2.5-RC1, and haven't had any issues running sonar-runner to analyze my code. I then upgraded my production Sonar server to 5.2 and ran a production build using the same command-line settings and sonar-runner.properties file, and I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/picocontainer/Startable
I then ran the build on my test build machine against the production Sonar server and it ran correctly. So it appears that there must be some difference on the production build machine that is impacting sonar-runner, but I can't figure out what the issue might be.
All I have in my sonar runner properties file is:
sonar.sourceEncoding=UTF-8
sonar.sources=src
sonar.modules=svc1, \
svc2
svc1.sonar.java.binaries=../build/gradle/svc1/classes/
svc1.sonar.projectName=SVC1
svc2.sonar.java.binaries=../build/gradle/svc2/classes/
svc2.sonar.projectName=SVC2
cli.sonar.language=py
cli.sonar.projectName=CLI
SONAR_RUNNER_OPTS='-Xmx2048m -XX:MaxPermSize=512m' sonar-runner-2.5-RC1/bin/sonar-runner
-e
-Dproject.settings=/workspace/build/workspace/sonar-runner.properties
-Dsonar.host.url=http://192.XXX.XXX.X -Dsonar.projectKey=TEST
-Dsonar.projectName=TEST-driver -Dsonar.branch=master
-Dsonar.projectVersion=2.0.0.0
-Dsonar.java.libraries=/workspace/build/workspace/jars/*.jar,/workspace/build/workspace/build/gradle/portal/compile/lib/*.jar,/usr/lib64/jvm/java/lib/*.jar
Logs:
INFO: Runner configuration file: NONE
INFO: Project configuration file: /workspace/build/workspace/CH-coprhd-controller-master-sonar/coprhd-controller-sonar-runner.properties
INFO: SonarQube Runner 2.5-RC1
INFO: Java 1.7.0_71 Oracle Corporation (64-bit)
INFO: Linux 3.16.6-2-desktop amd64
INFO: SONAR_RUNNER_OPTS=-Xmx2048m -XX:MaxPermSize=512m
INFO: Error stacktraces are turned on.
INFO: User cache: /workspace/build/workspace/CH-coprhd-controller-master-sonar/.sonar/cache
INFO: Load global repositories
INFO: Load global repositories (done) | time=166ms
INFO: User cache: /workspace/build/workspace/CH-coprhd-controller-master-sonar/.sonar/cache
INFO: Load plugins index
INFO: Load plugins index (done) | time=3ms
INFO: Download sonar-issues-density-plugin-1.0.jar
INFO: Download sonar-javascript-plugin-2.8.jar
INFO: Download sonar-findbugs-plugin-3.3.jar
INFO: Download sonar-groovy-plugin-1.3.jar
INFO: Download sonar-build-stability-plugin-1.3.jar
INFO: Download sonar-xml-plugin-1.3.jar
INFO: Download sonar-web-plugin-2.4.jar
INFO: Download sonar-clover-plugin-3.0.jar
INFO: Download sonar-sonargraph-plugin-3.4.2.jar
INFO: Download sonar-python-plugin-1.5.jar
INFO: Download sonar-scm-git-plugin-1.1.jar
INFO: Download sonar-scm-svn-plugin-1.2.jar
INFO: Download sonar-checkstyle-plugin-2.4.jar
INFO: Download sonar-pmd-plugin-2.5.jar
INFO: Download sonar-java-plugin-3.7.1.jar
INFO: Download sonar-generic-coverage-plugin-1.1.jar
INFO: Download sonar-css-plugin-1.5.jar
INFO: Default locale: "en_US", source code encoding: "UTF-8"
INFO: Process project properties
Exception in thread "main" java.lang.NoClassDefFoundError: org/picocontainer/Startable
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at org.sonar.classloader.ClassRealm.loadClassFromSelf(ClassRealm.java:125)
at org.sonar.classloader.ParentFirstStrategy.loadClass(ParentFirstStrategy.java:37)
at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:87)
at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:76)
at org.sonar.plugins.issuesdensity.IssuesDensityPlugin.getExtensions(IssuesDensityPlugin.java:37)
at org.sonar.batch.bootstrap.ExtensionInstaller.install(ExtensionInstaller.java:51)
at org.sonar.batch.scan.ProjectScanContainer.addBatchExtensions(ProjectScanContainer.java:234)
at org.sonar.batch.scan.ProjectScanContainer.doBeforeStart(ProjectScanContainer.java:119)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:98)
at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:85)
at org.sonar.batch.bootstrap.GlobalContainer.executeAnalysis(GlobalContainer.java:153)
at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:110)
at org.sonar.runner.batch.BatchIsolatedLauncher.execute(BatchIsolatedLauncher.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.sonar.runner.impl.IsolatedLauncherProxy.invoke(IsolatedLauncherProxy.java:61)
at com.sun.proxy.$Proxy0.execute(Unknown Source)
at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:275)
at org.sonar.runner.api.EmbeddedRunner.runAnalysis(EmbeddedRunner.java:166)
at org.sonar.runner.api.EmbeddedRunner.runAnalysis(EmbeddedRunner.java:153)
at org.sonar.runner.cli.Main.runAnalysis(Main.java:118)
at org.sonar.runner.cli.Main.execute(Main.java:80)
at org.sonar.runner.cli.Main.main(Main.java:66)
Caused by: java.lang.ClassNotFoundException: org.picocontainer.Startable
at org.sonar.classloader.ParentFirstStrategy.loadClass(ParentFirstStrategy.java:39)
at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:87)
at org.sonar.classloader.ClassRealm.loadClass(ClassRealm.java:76)
... 34 more
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 9,469ms last 8 cycles average is 1,183ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 28,431ms last 8 cycles average is 3,553ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 33,431ms last 8 cycles average is 4,178ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 2,661ms last 8 cycles average is 332ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 10,554ms last 8 cycles average is 1,319ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 9,480ms last 8 cycles average is 1,185ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 8,480ms last 8 cycles average is 1,060ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 11,104ms last 8 cycles average is 1,388ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 39,183ms last 8 cycles average is 4,897ms
INFO: [JOURNAL_FLUSHER] WARNING Journal flush operation took 4,995ms last 8 cycles average is 624ms
Build timed out (after 180 minutes). Marking the build as aborted.
The Issues Density Plugin is not compatible with SonarQube 5.2 and is no longer maintained. See http://docs.sonarqube.org/display/PLUG/Issues+Density+Plugin. Moreover, I recommend opening the page Administration > System > Update Center before upgrading SonarQube; it displays the list of incompatible plugins.
The issue turned out to be that SONAR_USER_HOME pointed to the same folder in which sonar-runner was executed: the SonarQube plugins were first downloaded to $SONAR_USER_HOME, and then, as the analysis started, that location was wiped out so the analysis files for each module could be placed in the same folder. Ensuring SONAR_USER_HOME pointed to a different location resolved the issue, as sketched below.
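A minimal sketch of the fix in a CI shell step (the cache path is hypothetical; anything outside the analysis working directory works):
# keep the scanner home/cache outside the directory the analysis runs in
export SONAR_USER_HOME=/var/cache/sonar
cd /workspace/build/workspace
sonar-runner-2.5-RC1/bin/sonar-runner -e -Dproject.settings=sonar-runner.properties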

Unable to initialize any output collector in CDH5.3

15/05/24 06:11:40 INFO mapreduce.Job: Task Id : attempt_1432456238397_0004_m_000000_0, Status : FAILED
Error: java.io.IOException: Unable to initialize any output collector
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:439)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
I am using the CDH 5.3 Cloudera QuickStart VM, and I wrote a MapReduce program. When I run it from the shell I get the above exception.
Can anyone please help me resolve this?
The error "Unable to initialize any output collector" indicates that the job failed to start the container's, there can be multiple reasons for the same. However, one must review the container logs at hdfs to identify the cause the error.
In this specific instance, the value of mapreduce.task.io.sort.mb value was entered greater than 2047 MB, however the maximum value which it allows is 2047 MB, thus anything above its causes the jobs to fail marking the value provided as Invalid.
Solution:
Set the value of mapreduce.task.io.sort.mb to less than 2048 MB, for example as in the snippet below.
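For example, in mapred-site.xml (1024 MB is an arbitrary value below the limit; tune it to your workload):
<property>
  <!-- sort buffer must stay below the 2047 MB maximum -->
  <name>mapreduce.task.io.sort.mb</name>
  <value>1024</value>
</property>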
Reference:
https://support.pivotal.io/hc/en-us/articles/205649987-Map-Reduce-job-failed-with-Unable-to-initialize-any-output-collector-
CDH5.2: MR, Unable to initialize any output collector
https://community.cloudera.com/t5/Storage-Random-Access-HDFS/HBase-MapReduce-Job-Error-java-io-IOException-Unable-to/td-p/23786
