Fail to process request http://localhost:9000/api/ce/submit?projectKey - sonarqube

I am facing an issue with the SonarQube scan of a few of my projects.
2017.11.20 20:26:36 ERROR web[AV/cUNmR+tiFEjO/AAAg][o.s.s.w.WebServiceEngine] Fail to process request http://localhost:9000/api/ce/submit?projectKey=com.org.app:app&projectName=ReactorProject
java.lang.IllegalStateException: Can't read file part
at org.sonar.server.ws.ServletRequest.readPart(ServletRequest.java:102)
at org.sonar.server.ws.ServletRequest.readInputStreamParam(ServletRequest.java:85)
at org.sonar.api.server.ws.internal.ValidatingRequest.paramAsInputStream(ValidatingRequest.java:83)
at org.sonar.server.ce.ws.SubmitAction.handle(SubmitAction.java:10
.......................
Caused by: java.io.IOException: org.apache.tomcat.util.http.fileupload.FileUploadException: Unexpected EOF read on the socket
at org.apache.catalina.connector.Request.parseParts(Request.java:2919)
at org.apache.catalina.connector.Request.parseParameters(Request.java:3215)
at org.apache.catalina.connector.Request.getParameter(Request.java:1145)
Caused by: org.apache.tomcat.util.http.fileupload.FileUploadException: Unexpected EOF read on the socket
at org.apache.tomcat.util.http.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:308)
at org.apache.catalina.connector.Request.parseParts(Request.java:2871)
... 52 common frames omitted
Caused by: java.io.EOFException: Unexpected EOF read on the socket
at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:718)
at org.apache.coyote.http11.Http11InputBuffer.access$300(Http11InputBuffer.java:40)
at org.apache.coyote.http11.Http11InputBuffer$SocketInputBuffer.doRead(Http11InputBuffer.java:1063)
at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:140)
at org.apache.coyote.http11.Http11InputBuffer.doRead(Http11InputBuffer.java:257)
at org.apache.coyote.Request.doRead(Request.java:574)
at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:326)
at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:642)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:349)
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:183)
at org.apache.tomcat.util.http.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:977)
at org.apache.tomcat.util.http.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:881)
at java.io.InputStream.read(Unknown Source)
at org.apache.tomcat.util.http.fileupload.util.Streams.copy(Streams.java:98)
at org.apache.tomcat.util.http.fileupload.util.Streams.copy(Streams.java:68)
at org.apache.tomcat.util.http.fileupload.MultipartStream.readBodyData(MultipartStream.java:571)
at org.apache.tomcat.util.http.fileupload.MultipartStream.discardBodyData(MultipartStream.java:595)
at org.apache.tomcat.util.http.fileupload.MultipartStream.skipPreamble(MultipartStream.java:613)
at org.apache.tomcat.util.http.fileupload.FileUploadBase$FileItemIteratorImpl.findNextItem(FileUploadBase.java:874)
at org.apache.tomcat.util.http.fileupload.FileUploadBase$FileItemIteratorImpl.<init>(FileUploadBase.java:854)
at org.apache.tomcat.util.http.fileupload.FileUploadBase.getItemIterator(FileUploadBase.java:256)
at org.apache.tomcat.util.http.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:280)
... 53 common frames omitted
I am using the embedded H2 database.
SonarQube version is 6.5, installed on Windows 7.
The upload completes successfully when the compressed report is smaller than about 50 KB (my guess, based on what has worked for me so far).
For example, this report always fails:
[INFO] Analysis report generated in 1022ms, dir size=384 KB
[INFO] Analysis reports compressed in 3019ms, zip size=190 KB
Anything beyond that size fails with the exception above.
Are you aware of any setting that limits the size of the file being uploaded to the SonarQube server?
Any help will be highly appreciated.
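From the stack trace, the failing request is the scanner POSTing the zipped analysis report to api/ce/submit as a multipart form part. A manual upload for testing would look roughly like this; the credentials, file path and part name ("report") are my guesses, not something I have confirmed:
curl -u <user>:<password> -X POST --form "report=@analysis-report.zip" "http://localhost:9000/api/ce/submit?projectKey=com.org.app:app&projectName=ReactorProject"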

Related

Sonarqube analysis failing for huge code base - None of the configured nodes were available

SonarQube analysis completes, but report processing fails for a project with a huge code base. The issue shows up for a project with 220K lines of code.
SonarQube version: 7.9.1, running on Kubernetes.
I tried deleting the data/es folder to force fresh indices. Still no luck.
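What I deleted, roughly (paths assume a default layout; on Kubernetes the folder sits on the data volume, and SonarQube was stopped first so the indices get rebuilt on restart):
rm -rf "$SONARQUBE_HOME"/data/es*    # es6 in SonarQube 7.9; the exact folder name varies by version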
Error Log
Caused by: java.lang.IllegalStateException: Fail to execute ES refresh request on indices 'components'
at org.sonar.server.es.request.ProxyRefreshRequestBuilder.get(ProxyRefreshRequestBuilder.java:44)
at org.sonar.server.es.request.ProxyRefreshRequestBuilder.get(ProxyRefreshRequestBuilder.java:32)
at org.sonar.server.es.BulkIndexer.stop(BulkIndexer.java:120)
at org.sonar.server.component.index.ComponentIndexer.delete(ComponentIndexer.java:165)
at org.sonar.ce.task.projectanalysis.purge.IndexPurgeListener.onComponentsDisabling(IndexPurgeListener.java:41)
at org.sonar.db.purge.PurgeDao.purgeDisabledComponents(PurgeDao.java:107)
at org.sonar.db.purge.PurgeDao.purge(PurgeDao.java:71)
at org.sonar.ce.task.projectanalysis.purge.ProjectCleaner.purge(ProjectCleaner.java:63)
at org.sonar.ce.task.projectanalysis.purge.PurgeDatastoresStep.execute(PurgeDatastoresStep.java:76)
at org.sonar.ce.task.projectanalysis.purge.PurgeDatastoresStep.access$000(PurgeDatastoresStep.java:38)
at org.sonar.ce.task.projectanalysis.purge.PurgeDatastoresStep$1.visitProject(PurgeDatastoresStep.java:63)
at org.sonar.ce.task.projectanalysis.component.DepthTraversalTypeAwareCrawler.visitNode(DepthTraversalTypeAwareCrawler.java:70)
at org.sonar.ce.task.projectanalysis.component.DepthTraversalTypeAwareCrawler.visitImpl(DepthTraversalTypeAwareCrawler.java:51)
at org.sonar.ce.task.projectanalysis.component.DepthTraversalTypeAwareCrawler.visit(DepthTraversalTypeAwareCrawler.java:39)
... 18 common frames omitted
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available: [{sonarqube}{}{}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}]
at org.elasticsearch.client.transport.TransportClientNodesService$RetryListener.onFailure(TransportClientNodesService.java:294)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:533)
at org.elasticsearch.action.TransportActionNodeProxy.execute(TransportActionNodeProxy.java:48)
at org.elasticsearch.client.transport.TransportProxyClient.lambda$execute$0(TransportProxyClient.java:60)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:253)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:388)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:391)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1262)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
at org.sonar.server.es.request.ProxyRefreshRequestBuilder.get(ProxyRefreshRequestBuilder.java:42)
... 31 common frames omitted
Caused by: org.elasticsearch.transport.NodeNotConnectedException: [sonarqube][127.0.0.1:9001] Node not connected
at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151)
at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:557)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:529)
I got past this issue by increasing the memory limits for the SonarQube Kubernetes pod. Thanks to Przemek Nowak's answer on this question.
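The change was along these lines; the deployment name, namespace and memory values are illustrative, so pick limits that match your cluster:
kubectl -n <namespace> set resources deployment/sonarqube --limits=memory=4Gi --requests=memory=2Gi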

Sonarqube scanner error DirectoryNotEmptyException

I am using the SonarQube scanner on a Citrix remote machine. During the scan, the code analysis completes and the report gets zipped, but while uploading this zipped file I get java.nio.file.DirectoryNotEmptyException and nothing gets uploaded to the SonarQube server.
Note that the scanner works fine for small projects; it fails only for large projects. My SonarQube version is 7.2.1. The same thing happened in version 6.7 LTS, but it worked perfectly fine in version 5.6 LTS.
I have tried setting sonar.ws.timeout=9000 in the sonar-scanner.properties file to increase the timeout. This did not work.
The error message is:
INFO: 7 files had no CPD blocks
INFO: Calculating CPD for 521 files
INFO: CPD calculation finished
INFO: Analysis report generated in 4886ms, dir size=15 MB
INFO: Analysis reports compressed in 3769ms, zip size=4 MB
ERROR: Failed to delete temp folder
java.nio.file.DirectoryNotEmptyException: C:\codeAnalyzer\bhatk\.scannerwork\.sonartmp
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at org.sonar.api.utils.internal.DefaultTempFolder$DeleteRecursivelyFileVisitor.postVisitDirectory(DefaultTempFolder.java:121)
at org.sonar.api.utils.internal.DefaultTempFolder$DeleteRecursivelyFileVisitor.postVisitDirectory(DefaultTempFolder.java:110)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.sonar.api.utils.internal.DefaultTempFolder.clean(DefaultTempFolder.java:97)
at org.sonar.api.utils.internal.DefaultTempFolder.stop(DefaultTempFolder.java:106)
at org.sonar.scanner.analysis.AnalysisTempFolderProvider.stop(AnalysisTempFolderProvider.java:61)
at org.picocontainer.DefaultPicoContainer.stopAdapters(DefaultPicoContainer.java:1048)
at org.picocontainer.DefaultPicoContainer.stop(DefaultPicoContainer.java:803)
at org.sonar.core.platform.ComponentContainer.stopComponents(ComponentContainer.java:165)
at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:124)
at org.sonar.scanner.task.ScanTask.execute(ScanTask.java:48)
at org.sonar.scanner.task.TaskContainer.doAfterStart(TaskContainer.java:81)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:136)
at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:122)
at org.sonar.scanner.bootstrap.GlobalContainer.executeTask(GlobalContainer.java:132)
at org.sonar.batch.bootstrapper.Batch.doExecuteTask(Batch.java:116)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:71)
at org.sonarsource.scanner.api.internal.batch.BatchIsolatedLauncher.execute(BatchIsolatedLauncher.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.sonarsource.scanner.api.internal.IsolatedLauncherProxy.invoke(IsolatedLauncherProxy.java:60)
at com.sun.proxy.$Proxy0.execute(Unknown Source)
at org.sonarsource.scanner.api.EmbeddedScanner.doExecute(EmbeddedScanner.java:171)
at org.sonarsource.scanner.api.EmbeddedScanner.execute(EmbeddedScanner.java:128)
at org.sonarsource.scanner.cli.Main.execute(Main.java:111)
at org.sonarsource.scanner.cli.Main.execute(Main.java:75)
at org.sonarsource.scanner.cli.Main.main(Main.java:61)
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 3:27.575s
INFO: Final Memory: 60M/2882M
INFO: -----------------------------------------------------------------------
ERROR: Error during SonarQube Scanner execution
ERROR: Fail to request http://localhost:9000/api/ce/submit?projectKey=my:project&projectName=My%20project
ERROR: Caused by: timeout
ERROR: Caused by: Socket closed
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
The error is caused by the default 10s writeTimeout inherited from the OkHttp library. While the readTimeout is configurable via the sonar.ws.timeout parameter, the writeTimeout is not currently configurable. In some cases, depending on the analysis report size and/or your infrastructure setup, 10s is not enough to upload the analysis report, which causes the socket timeout error and the DirectoryNotEmptyException. I have implemented a fix and submitted it to the SonarSource GitHub repository: https://github.com/SonarSource/sonarqube/pull/3252/files
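For completeness, the read timeout can be raised either in sonar-scanner.properties or on the command line, for example (value in seconds, purely illustrative); as explained above, this does not help with the hard-coded 10s write timeout:
sonar-scanner -Dsonar.ws.timeout=300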

Spring Boot Start Up error java.util.zip.ZipException: invalid stored block lengths

I created a new Spring Starter project from the online Spring Boot application generator and am getting the Spring Boot startup error below, caused by java.util.zip.ZipException: invalid stored block lengths:
org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:94) ~[paymentbatch-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_111]
... 40 common frames omitted
Caused by: java.util.zip.ZipException: invalid stored block lengths
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) ~
The problem was that some of the Maven dependencies in the local repository were corrupted. Deleting the local Maven repository and rebuilding the app (mvn clean install) fixed the issue.
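A minimal sketch of that fix, assuming the default local repository location (~/.m2/repository on Linux/macOS, %USERPROFILE%\.m2\repository on Windows):
rm -rf ~/.m2/repository    # forces Maven to re-download all dependencies
mvn clean install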

HORTONWORKS - Hbase/Phoenix - WALEditCodec - missing

I am receiving the following error while trying to run Phoenix on top of Hbase:
EXCEPTION #1:
2017-11-07 12:40:12,620 WARN [RS_LOG_REPLAY_OPS-XXX:16020-0] regionserver.SplitLogWorker: log splitting of WALs/XXX.XXX.XXX.XXX,16020,1507179047656-splitting/XXX.XXX.XXX.XXX%2C16020%2C1507179047656.default.1507179049782 failed, returning error
java.io.IOException: Cannot get log reader
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:355)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
at org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:839)
at org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:763)
at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:297)
at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:355)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
at org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:839)
at org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:763)
at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:297)
at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException: Unable to find org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36)
at org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:297)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:307)
at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:164)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:303)
... 11 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:32)
... 17 more
APPLIED PATCHES #1:
I have applied the following settings through the Ambari web UI for the advanced HBase configs, as specified by the Hortonworks document:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_command-line-upgrade/content/configure-phoenix-25.html
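If I read that document correctly, the key setting it asks for is the Phoenix WAL edit codec in hbase-site.xml (set here via Ambari); the exact property list may differ by HDP version, but the class it names is the one the exception cannot find:
hbase.regionserver.wal.codec = org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec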
EXCEPTION #2
FATAL [RS_LOG_REPLAY_OPS-XXX:16020-1] conf.Configuration: error parsing conf core-site.xml
java.io.FileNotFoundException: /etc/hadoop/2.6.1.0-129/0/core-site.xml (Too many open files)
APPLIED PATCHES #2
I checked each core-site.xml file on every server that hosts an HBase region server and made sure it ended with </configuration>, including the file at the path named in the exception, /etc/hadoop/2.6.1.0-129/0/core-site.xml.
I haven't been able to find any other information regarding this issue.
I went into the HDFS and deleted all WAL split logs using the following command:
hdfs dfs -rm -r /apps/hbase/data/WALs/*splitting*
This resolved exception #1. Keep in mind that, from what I've read, this can incur data loss.
For exception #2, I went back and checked the open-file limits on each server (ulimit -n) and updated them where applicable, per the Hortonworks doc:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/kerb-config-limits.html
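The check itself was roughly this; the limit values are only examples, and the doc above gives the numbers recommended for your HDP version:
ulimit -n    # current open-files limit for the service user
ulimit -u    # current max user processes
# raised persistently via /etc/security/limits.conf, e.g.:
# hbase  -  nofile  32768
# hbase  -  nproc   16000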

ERROR NioEndpoint Socket accept failed java.io.IOException: Too many open files

After updating SonarQube to v5.3 (configuration adopted from the previous version we used, v5.1), we're getting the following error, which stops SonarQube from running:
2016.02.16 00:26:11 ERROR web[o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=<my-project-id> | id=AVLnP-hq9AOM7J73mzYa | time=13ms
2016.02.16 00:26:14 ERROR web[o.a.t.u.n.NioEndpoint] Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[na:1.8.0_51]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) ~[na:1.8.0_51]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) ~[na:1.8.0_51]
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:688) ~[tomcat-embed-core-8.0.18.jar:8.0.18]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
This error appears every 1-2 days.
Thanks in advance for your help.
The operating system treats network connections as files; every connection is a file descriptor. So I suggest you check the open-files limit of the system.
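A quick way to check, assuming SonarQube runs on Linux under a dedicated user (the commands are generic examples, not taken from this setup):
ulimit -n                            # open-files limit for the current user
lsof -p <sonarqube_pid> | wc -l      # descriptors currently held by the SonarQube process
If the limit is low, it can be raised persistently in /etc/security/limits.conf for the user that runs SonarQube (for example, nofile 65536).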
