SonarQube background task fails (Too many open files)

We have a large project (~35,000 Java files) that currently has about 10,000 issues. About three weeks ago, our nightly scan started failing on SonarQube 6.5. I upgraded to 6.6 and hit the same problem. At first it failed due to heap space; I gave the machine more memory, which seems to have allowed the analysis itself to finish. Now the scan fails every night during the background task that posts the analysis, with "Too many open files". We have raised the open-file limits for the sonar user, but that doesn't seem to have any effect. Our other, smaller projects all finish without trouble; it's only this very large project that constantly fails. Has anyone seen this before?
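For anyone hitting the same heap failure first: the Compute Engine's heap is configured in conf/sonar.properties via sonar.ce.javaOpts. A sketch with illustrative values (tune -Xmx to your machine; this is not a recommendation):
# conf/sonar.properties - Compute Engine JVM options (illustrative values)
sonar.ce.javaOpts=-Xmx4G -Xms1G -XX:+HeapDumpOnOutOfMemoryError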
This morning I installed SonarQube 6.7 to see if that fixes it. I'm currently running an analysis, but it takes about three hours to finish and fail.
We increased the open-file limit for the sonar user:
-sh-4.1$ whoami
sonar
-sh-4.1$ ulimit -Hs
unlimited
-sh-4.1$ ulimit -Hn
1048576
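To make the raised limit persist across logins, we also set it in /etc/security/limits.conf (the values shown are just the ones we picked; a SonarQube started by systemd would instead need LimitNOFILE in its unit file):
# /etc/security/limits.conf - open-file limits for the sonar user
sonar  soft  nofile  65536
sonar  hard  nofile  1048576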
Here is the error we are seeing:
org.sonar.server.computation.task.projectanalysis.component.VisitException: Visit of Component {key=applications:sonar:src/main/java/com/MyFileName.java,type=FILE} failed
at org.sonar.server.computation.task.projectanalysis.component.VisitException.rethrowOrWrap(VisitException.java:44)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:74)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitChildren(VisitorsCrawler.java:110)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitImpl(VisitorsCrawler.java:97)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:72)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitChildren(VisitorsCrawler.java:110)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitImpl(VisitorsCrawler.java:97)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:72)
at org.sonar.server.computation.task.projectanalysis.step.ExecuteVisitorsStep.execute(ExecuteVisitorsStep.java:51)
at org.sonar.server.computation.task.step.ComputationStepExecutor.executeSteps(ComputationStepExecutor.java:64)
at org.sonar.server.computation.task.step.ComputationStepExecutor.execute(ComputationStepExecutor.java:52)
at org.sonar.server.computation.task.projectanalysis.taskprocessor.ReportTaskProcessor.process(ReportTaskProcessor.java:75)
at org.sonar.ce.taskprocessor.CeWorkerImpl.executeTask(CeWorkerImpl.java:92)
at org.sonar.ce.taskprocessor.CeWorkerImpl.call(CeWorkerImpl.java:59)
at org.sonar.ce.taskprocessor.CeWorkerImpl.call(CeWorkerImpl.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Fail to process issues of component 'applications:sonar:src/main/java/com/MyFileName.java'
at org.sonar.server.computation.task.projectanalysis.issue.IntegrateIssuesVisitor.processIssues(IntegrateIssuesVisitor.java:83)
at org.sonar.server.computation.task.projectanalysis.issue.IntegrateIssuesVisitor.visitAny(IntegrateIssuesVisitor.java:63)
at org.sonar.server.computation.task.projectanalysis.component.TypeAwareVisitorWrapper.visitAny(TypeAwareVisitorWrapper.java:82)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitNode(VisitorsCrawler.java:117)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitImpl(VisitorsCrawler.java:100)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:72)
... 21 more
Caused by: java.lang.IllegalStateException: Fail to traverse file: /opt/sonarqube-6.5/temp/ce/6622607282221286408/1421069522563365386/source-11991.txt
at org.sonar.server.computation.task.projectanalysis.batch.BatchReportReaderImpl.readFileSource(BatchReportReaderImpl.java:153)
at org.sonar.server.computation.task.projectanalysis.source.SourceLinesRepositoryImpl.readLines(SourceLinesRepositoryImpl.java:45)
at org.sonar.server.computation.task.projectanalysis.issue.TrackerRawInputFactory$RawLazyInput.loadLineHashSequence(TrackerRawInputFactory.java:80)
at org.sonar.core.issue.tracking.LazyInput.getLineHashSequence(LazyInput.java:34)
at org.sonar.server.computation.task.projectanalysis.issue.TrackerRawInputFactory$RawLazyInput.loadIssues(TrackerRawInputFactory.java:105)
at org.sonar.core.issue.tracking.LazyInput.getIssues(LazyInput.java:50)
at org.sonar.core.issue.tracking.Tracking.<init>(Tracking.java:46)
at org.sonar.core.issue.tracking.Tracker.track(Tracker.java:37)
at org.sonar.server.computation.task.projectanalysis.issue.TrackerExecution.track(TrackerExecution.java:41)
at org.sonar.server.computation.task.projectanalysis.issue.IntegrateIssuesVisitor.processIssues(IntegrateIssuesVisitor.java:76)
... 26 more
Caused by: java.io.FileNotFoundException: /opt/sonarqube-6.5/temp/ce/6622607282221286408/1421069522563365386/source-11991.txt (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:301)
at org.sonar.server.computation.task.projectanalysis.batch.BatchReportReaderImpl.readFileSource(BatchReportReaderImpl.java:151)
... 35 more

Upgrading to SonarQube 6.7 fixed this "Too many open files" error.

Related

SonarQube analysis failing for huge code base - None of the configured nodes were available

SonarQube analysis completes, but report processing fails for a project with a huge code base (about 220K lines of code).
SonarQube version: 7.9.1, running on Kubernetes.
I tried deleting the data/es folder to get fresh indices. Still no luck.
Error log:
Caused by: java.lang.IllegalStateException: Fail to execute ES refresh request on indices 'components'
at org.sonar.server.es.request.ProxyRefreshRequestBuilder.get(ProxyRefreshRequestBuilder.java:44)
at org.sonar.server.es.request.ProxyRefreshRequestBuilder.get(ProxyRefreshRequestBuilder.java:32)
at org.sonar.server.es.BulkIndexer.stop(BulkIndexer.java:120)
at org.sonar.server.component.index.ComponentIndexer.delete(ComponentIndexer.java:165)
at org.sonar.ce.task.projectanalysis.purge.IndexPurgeListener.onComponentsDisabling(IndexPurgeListener.java:41)
at org.sonar.db.purge.PurgeDao.purgeDisabledComponents(PurgeDao.java:107)
at org.sonar.db.purge.PurgeDao.purge(PurgeDao.java:71)
at org.sonar.ce.task.projectanalysis.purge.ProjectCleaner.purge(ProjectCleaner.java:63)
at org.sonar.ce.task.projectanalysis.purge.PurgeDatastoresStep.execute(PurgeDatastoresStep.java:76)
at org.sonar.ce.task.projectanalysis.purge.PurgeDatastoresStep.access$000(PurgeDatastoresStep.java:38)
at org.sonar.ce.task.projectanalysis.purge.PurgeDatastoresStep$1.visitProject(PurgeDatastoresStep.java:63)
at org.sonar.ce.task.projectanalysis.component.DepthTraversalTypeAwareCrawler.visitNode(DepthTraversalTypeAwareCrawler.java:70)
at org.sonar.ce.task.projectanalysis.component.DepthTraversalTypeAwareCrawler.visitImpl(DepthTraversalTypeAwareCrawler.java:51)
at org.sonar.ce.task.projectanalysis.component.DepthTraversalTypeAwareCrawler.visit(DepthTraversalTypeAwareCrawler.java:39)
... 18 common frames omitted
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available: [{sonarqube}{}{}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube}]
at org.elasticsearch.client.transport.TransportClientNodesService$RetryListener.onFailure(TransportClientNodesService.java:294)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:533)
at org.elasticsearch.action.TransportActionNodeProxy.execute(TransportActionNodeProxy.java:48)
at org.elasticsearch.client.transport.TransportProxyClient.lambda$execute$0(TransportProxyClient.java:60)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:253)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:388)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:391)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1262)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
at org.sonar.server.es.request.ProxyRefreshRequestBuilder.get(ProxyRefreshRequestBuilder.java:42)
... 31 common frames omitted
Caused by: org.elasticsearch.transport.NodeNotConnectedException: [sonarqube][127.0.0.1:9001] Node not connected
at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151)
at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:557)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:529)
I got past this issue by increasing the memory limits for the SonarQube Kubernetes pod. Thanks to Przemek Nowak's answer on this question.
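For reference, a minimal sketch of what that looks like with kubectl (the namespace and deployment name sonarqube are assumptions; use whatever your manifests define, and pick limits that fit your cluster):
# raise the memory request/limit on the (assumed) sonarqube deployment
kubectl -n sonarqube set resources deployment/sonarqube \
  --requests=memory=2Gi --limits=memory=4Gi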

SonarQube 6.2 Compute Engine NullPointerException

I have a single project on my SonarQube server that has started failing in the past few days (probably due to using the Update Center to upgrade a few plugins and/or playing with the Python profile?). I have turned off the Python profile and restarted the server, but the ce.log file contains the following when this project's scan is uploaded to the server (other projects still work fine):
2017.12.06 10:24:02 INFO ce[AWAtEf8ZpSfG1LhTt-eQ][o.s.s.c.t.CeWorkerCallableImpl] Execute task | project=mypackage:myproject | type=REPORT | id=AWAtEf8ZpSfG1LhTt-eQ
2017.12.06 10:24:04 ERROR ce[AWAtEf8ZpSfG1LhTt-eQ][o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AWAtEf8ZpSfG1LhTt-eQ
java.lang.NullPointerException: null
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:210)
at com.google.common.base.Splitter.splitToList(Splitter.java:416)
at org.sonar.server.computation.task.projectanalysis.filemove.FileMoveDetectionStep.getFile(FileMoveDetectionStep.java:239)
at org.sonar.server.computation.task.projectanalysis.filemove.FileMoveDetectionStep.computeScoreMatrix(FileMoveDetectionStep.java:208)
at org.sonar.server.computation.task.projectanalysis.filemove.FileMoveDetectionStep.execute(FileMoveDetectionStep.java:127)
at org.sonar.server.computation.task.step.ComputationStepExecutor.executeSteps(ComputationStepExecutor.java:64)
at org.sonar.server.computation.task.step.ComputationStepExecutor.execute(ComputationStepExecutor.java:52)
at org.sonar.server.computation.task.projectanalysis.taskprocessor.ReportTaskProcessor.process(ReportTaskProcessor.java:75)
at org.sonar.server.computation.taskprocessor.CeWorkerCallableImpl.executeTask(CeWorkerCallableImpl.java:84)
at org.sonar.server.computation.taskprocessor.CeWorkerCallableImpl.call(CeWorkerCallableImpl.java:57)
at org.sonar.server.computation.taskprocessor.CeWorkerCallableImpl.call(CeWorkerCallableImpl.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017.12.06 10:24:04 ERROR ce[AWAtEf8ZpSfG1LhTt-eQ][o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=oracle.weblogic.lifecycle:wls-deploy | type=REPORT | id=AWAtEf8ZpSfG1LhTt-eQ | time=1423ms
Any clues?
This bug is fixed in version 6.3 (see https://jira.sonarsource.com/browse/SONAR-8835 and https://groups.google.com/forum/#!topic/sonarqube/MKlBDdMZJRk).
Please note that it's highly recommended to use the Long-Term Support version, which is 6.7 at the time of writing; it spares you annoying bugs like the one you're facing.

HORTONWORKS - HBase/Phoenix - WALEditCodec - missing

I am receiving the following error while trying to run Phoenix on top of HBase:
EXCEPTION #1:
2017-11-07 12:40:12,620 WARN [RS_LOG_REPLAY_OPS-XXX:16020-0]
regionserver.SplitLogWorker: log splitting of
WALs/XXX.XXX.XXX.XXX,16020,1507179047656-
splitting/XXX.XXX.XXX.XXX%2C16020%2C1507179047656.default.1507179049782 failed, returning error
java.io.IOException: Cannot get log reader
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:355)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
at org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:839)
at org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:763)
at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:297)
at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException: Unable to find org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36)
at org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:297)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:307)
at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:164)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:303)
... 11 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:32)
... 17 more
APPLIED PATCHES #1:
I have applied the following settings through the Ambari web UI for the advanced HBase configs, as specified by the Hortonworks document:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_command-line-upgrade/content/configure-phoenix-25.html
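For anyone comparing configs: the setting from that doc which supplies the missing class is the WAL codec property, which must point at the Phoenix indexed codec on every region server (shown here as the key/value pair you would enter in Ambari under the HBase configs):
hbase.regionserver.wal.codec = org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec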
EXCEPTION #2
FATAL [RS_LOG_REPLAY_OPS-XXX:16020-1] conf.Configuration: error parsing conf core-site.xml
java.io.FileNotFoundException: /etc/hadoop/2.6.1.0-129/0/core-site.xml (Too many open files)
APPLIED PATCHES #2:
I checked each core-site.xml file on every server hosting an HBase region server and made sure it ends with a closing </configuration> tag, as well as the core-site.xml at the path named in the error, /etc/hadoop/2.6.1.0-129/0/core-site.xml.
Haven't been able to find any other information regarding this issue.
I went into HDFS and deleted all the WAL split logs using the following command:
hdfs dfs -rm -r /apps/hbase/data/WALs/*splitting*
This resolved exception #1. Keep in mind that, from what I've read, this will incur data loss.
For exception #2, I went back and checked the open-file limits on each server (ulimit -n) and updated them where applicable, per the Hortonworks doc:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/kerb-config-limits.html
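To confirm which process is actually exhausting descriptors before raising limits, something like the following helps (the <PID> placeholder is whatever pgrep returns):
# find the region server PID, then inspect its descriptor usage and effective limit
pgrep -f HRegionServer
ls /proc/<PID>/fd | wc -l              # descriptors currently open
grep 'open files' /proc/<PID>/limits   # the limit the process actually got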

Unable to add class: org.apache.mahout.classifier.df.mapreduce.BuildForest in Mahout 0.13

1) When I run this Random Forest example:
$MAHOUT_HOME/bin/mahout org.apache.mahout.classifier.df.mapreduce.BuildForest -Dmapred.max.split.size=1874231 -d inputMahoutExamples/RandomForest/rfsplit/trainingSet/* -ds inputMahoutExamples/RandomForest/glass.info -sl 5 -p -t 10 -o inputMahoutExamples/RandomForest/rfmodel
I get this error:
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/local/hadoop-2.7.2/bin/hadoop and HADOOP_CONF_DIR=/usr/local/hadoop-2.7.2/etc/hadoop
MAHOUT-JOB: /usr/local/mahout/examples/target/mahout-examples-0.13.0-job.jar
17/08/02 16:55:29 WARN MahoutDriver: Unable to add class: org.apache.mahout.classifier.df.mapreduce.BuildForest
java.lang.ClassNotFoundException: org.apache.mahout.classifier.df.mapreduce.BuildForest
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.mahout.driver.MahoutDriver.addClass(MahoutDriver.java:237)
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:6
2) I am working with Mahout 0.13 and Hadoop 2.7.2:
$HADOOP_HOME/bin/hadoop jar $MAHOUT_HOME/examples/target/mahout-examples-0.13.0-job.jar org.apache.mahout.classifier.df.mapreduce.BuildForest -d inputMahoutExamples/RandomForest/rfsplit/trainingSet/* -ds inputMahoutExamples/RandomForest/glass.info -sl 5 -p -t 100 -o inputMahoutExamples/RandomForest/rfmodel
I get the same error:
Exception in thread "main" java.lang.ClassNotFoundException: org.apache.mahout.classifier.df.mapreduce.BuildForest
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I think this problem is specific to Mahout 0.13. What do you think?
I was facing exactly the same issue.
I think they did not include the random forest classifier in the latest release (not sure, though). It doesn't show up in the documentation, and even their documentation website is still in beta.
They mentioned two new classifiers:
org.apache.mahout.classifier.df.mapreduce.inmem
In-memory mapreduce implementation of Random Decision Forests
org.apache.mahout.classifier.df.mapreduce.partial
Partial-data mapreduce implementation of Random Decision Forests
In order to run the command, I had to download version 0.11.0 of Mahout and use that instead. However, I am now confused: why should I use it and trust its output when its developers have abandoned it?
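Concretely, I reran the same hadoop jar invocation from the question, just pointed at the 0.11.0 job jar (the path assumes the 0.11.0 tree mirrors the 0.13.0 layout):
$HADOOP_HOME/bin/hadoop jar $MAHOUT_HOME/examples/target/mahout-examples-0.11.0-job.jar \
  org.apache.mahout.classifier.df.mapreduce.BuildForest \
  -d inputMahoutExamples/RandomForest/rfsplit/trainingSet/* \
  -ds inputMahoutExamples/RandomForest/glass.info \
  -sl 5 -p -t 100 -o inputMahoutExamples/RandomForest/rfmodel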
In previous releases, they mentioned some bugs that are not related to the algorithms but more to managing performance:
For now, the training does not support multiple input files. The input dataset must be one single file (this support will be available with the upcoming release). Classifying new data does support multiple input files. The tree building is done when each mapper.close() method is called. Because the mappers don't refresh their state, the job can fail when the dataset is big and you try to build a large number of trees.
It worked though.
I also faced this problem. I think it happens in the latest version because Random Forest is no longer included among the classification CLI drivers listed here. I solved it by running an earlier version of Mahout, i.e. Mahout 0.9.

ERROR NioEndpoint Socket accept failed java.io.IOException: Too many open files

After updating SonarQube to v5.3 (configuration carried over from the previous version we used, v5.1), we're getting the following error, which stops SonarQube from running:
2016.02.16 00:26:11 ERROR web[o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=<my-project-id> | id=AVLnP-hq9AOM7J73mzYa | time=13ms
2016.02.16 00:26:14 ERROR web[o.a.t.u.n.NioEndpoint] Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[na:1.8.0_51]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) ~[na:1.8.0_51]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) ~[na:1.8.0_51]
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:688) ~[tomcat-embed-core-8.0.18.jar:8.0.18]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
This error appears every 1-2 days.
Thanks in advance for your help.
The operating system treats network connections as files; every connection consumes a file descriptor. So I suggest you check the system's open-file limit.
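A quick way to see how close you are to that limit (a sketch; it assumes SonarQube runs as the sonar user):
# descriptors currently held by everything running as sonar, vs. that user's limit
lsof -u sonar | wc -l
su - sonar -c 'ulimit -n'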
