I am using Apache JMeter 5.5 and JDK 1.8.0_352.
I have been running the .jmx file in non-GUI mode as shown below:
jmeter -n -t ./MockTest.jmx -l report.jtl
As expected, the .jtl file is generated. But when I try to create the HTML report with the command below:
jmeter -g ./report.jtl -o ./myStat01
It does not work as expected; instead, it generates only a JSON file.
Checking "jmeter_html_report.log", I found the error below:
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.graph.impl.ResponseTimePercentilesOverTimeGraphConsumer#stopProducing(): responseTimePercentilesOverTime produced 0 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.graph.impl.ResponseTimeOverTimeGraphConsumer#stopProducing(): responseTimesOverTime produced 0 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.graph.impl.ConnectTimeOverTimeGraphConsumer#stopProducing(): connectTimeOverTime produced 0 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.graph.impl.LatencyOverTimeGraphConsumer#stopProducing(): latenciesOverTime produced 0 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.FilterConsumer#stopProducing(): nameFilter produced 750 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.FilterConsumer#stopProducing(): dateRangeFilter produced 150 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.NormalizerSampleConsumer#stopProducing(): normalizer produced 50 samples
2022-12-19 17:00:09,488 INFO o.a.j.r.p.CsvFileSampleSource: produce(): 50 samples produced in 39ms on channel 0
2022-12-19 17:00:09,488 INFO o.a.j.r.d.ReportGenerator: Exporting data using exporter:'json' of className:'org.apache.jmeter.report.dashboard.JsonExporter'
2022-12-19 17:00:09,576 INFO o.a.j.r.d.JsonExporter: Found data for consumer statisticsSummary in context
2022-12-19 17:00:09,576 INFO o.a.j.r.d.JsonExporter: Creating statistics for overall
2022-12-19 17:00:09,577 INFO o.a.j.r.d.JsonExporter: Creating statistics for other transactions
2022-12-19 17:00:09,577 INFO o.a.j.r.d.JsonExporter: Checking output folder
2022-12-19 17:00:09,578 INFO o.a.j.r.d.JsonExporter: Writing statistics JSON to /home/apache-jmeter-5.5/bin/myStat01/statistics.json
2022-12-19 17:00:09,601 INFO o.a.j.r.d.ReportGenerator: Exporting data using exporter:'html' of className:'org.apache.jmeter.report.dashboard.HtmlTemplateExporter'
2022-12-19 17:00:09,602 ERROR o.a.j.r.d.HtmlTemplateExporter: "/home/apache-jmeter-5.5/bin/report-template" is not a valid template directory
2022-12-19 17:00:09,603 ERROR o.a.j.JMeter: An error occurred:
org.apache.jmeter.report.dashboard.GenerationException: Data exporter "html" is unable to export data.
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:388) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.report.dashboard.ReportGenerator.generate(ReportGenerator.java:260) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.JMeter.start(JMeter.java:462) ~[ApacheJMeter_core.jar:5.5]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.jmeter.NewDriver.main(NewDriver.java:259) ~[ApacheJMeter.jar:5.5]
Caused by: org.apache.jmeter.report.dashboard.ExportException: "/home/apache-jmeter-5.5/bin/report-template" is not a valid template directory
at org.apache.jmeter.report.dashboard.HtmlTemplateExporter.export(HtmlTemplateExporter.java:316) ~[ApacheJMeter_core.jar:5.5]
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:382) ~[ApacheJMeter_core.jar:5.5]
... 7 more
"/home/apache-jmeter-5.5/bin/report-template" is not a valid template directory
This means that your JMeter installation is somehow broken: JMeter expects to find the report-template folder under its "bin" folder, and that folder contains all the files needed for proper report generation.
So either get this directory from the JMeter GitHub repository or simply re-install JMeter; the installation bundle contains the "proper" report-template folder.
If you moved the report-template folder somewhere for a specific reason, you can make JMeter aware of the new location by setting the jmeter.reportgenerator.exporter.html.property.template_dir JMeter property to point to the new location.
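For example, a minimal sketch, assuming the template was moved to /opt/jmeter-templates (a purely illustrative path):
jmeter -g ./report.jtl -o ./myStat01 -Jjmeter.reportgenerator.exporter.html.property.template_dir=/opt/jmeter-templates/report-template
The -J flag sets a JMeter property for that run only; to make it permanent, put the same property (without -J) in bin/user.properties.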
I copied my complete JMeter folder from one machine to another and tried to run it. I am stuck with the error ArrayIndexOutOfBoundsException: 0. Please help.
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.assertions.BSFAssertion
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.extractor.BSFPostProcessor
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.modifiers.BSFPreProcessor
INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/html is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xhtml+xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/vnd.wap.wml is org.apache.jmeter.protocol.http.parser.RegexpHTMLParser
INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/css is org.apache.jmeter.protocol.http.parser.CssParser
INFO - jorphan.exec.KeyToolUtils: keytool found at 'keytool'
INFO - jmeter.protocol.http.proxy.ProxyControl: HTTP(S) Test Script Recorder SSL Proxy will use keys that support embedded 3rd party resources in file G:\official\JMeter\apache-jmeter-3.1\bin\proxyserver.jks
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.java.sampler.BSFSampler
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.config.MongoSourceElement
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.sampler.MongoScriptSampler
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.timers.BSFTimer
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.BSFListener
INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.MonitorHealthVisualizer
INFO - jmeter.samplers.SampleResult: Note: Sample TimeStamps are START times
INFO - jmeter.samplers.SampleResult: sampleresult.default.encoding is set to ISO-8859-1
INFO - jmeter.samplers.SampleResult: sampleresult.useNanoTime=true
INFO - jmeter.samplers.SampleResult: sampleresult.nanoThreadSleep=5000
INFO - jmeter.services.FileServer: Default base='G:\official\JMeter\apache-jmeter-3.1\bin'
INFO - jmeter.gui.action.Load: Loading file: G:\official\JMeter\apache-jmeter-3.1\bin\Cafyne_3.0.jmx
INFO - jmeter.services.FileServer: Set new base='G:\official\JMeter\apache-jmeter-3.1\bin'
INFO - jmeter.save.SaveService: Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2
INFO - jmeter.save.SaveService: Using SaveService properties file encoding UTF-8
INFO - jmeter.save.SaveService: Using SaveService properties version 3.1
INFO - jmeter.save.SaveService: All converter versions present and correct
INFO - jmeter.save.SaveService: Loading file: G:\official\JMeter\apache-jmeter-3.1\bin\Cafyne_3.0.jmx
INFO - jmeter.protocol.http.control.CookieManager: Settings: Delete null: true Check: true Allow variable: true Save: false Prefix: COOKIE_
INFO - jmeter.services.FileServer: Set new base='G:\official\JMeter\apache-jmeter-3.1\bin'
ERROR - jmeter.gui.action.ActionRouter: Error processing org.apache.jmeter.gui.action.Start#71687585
java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.jmeter.gui.action.Start.startEngine(Start.java:193)
at org.apache.jmeter.gui.action.Start.startEngine(Start.java:174)
at org.apache.jmeter.gui.action.Start.startEngine(Start.java:164)
at org.apache.jmeter.gui.action.Start.doAction(Start.java:108)
at org.apache.jmeter.gui.action.ActionRouter.performAction(ActionRouter.java:80)
at org.apache.jmeter.gui.action.ActionRouter.access$000(ActionRouter.java:40)
at org.apache.jmeter.gui.action.ActionRouter$1.run(ActionRouter.java:62)
at java.awt.event.InvocationEvent.dispatch(Unknown Source)
at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
at java.awt.EventQueue.access$500(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue.dispatchEvent(Unknown Source)
at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.run(Unknown Source)
I know the question is old, but I ran into the same problem and found no solutions, so I analyzed the JMeter source code and am leaving the solution here for other people who hit it.
In my case, when I created a test plan, it was disabled by default.
After I enabled the test plan (right click -> Enable), it started working!
The relevant code is below (it fails at testTree.getArray()[0]):
HashTree testTree = gui.getTreeModel().getTestPlan();
JMeter.convertSubTree(testTree);
if (threadGroupsToRun != null && threadGroupsToRun.length > 0) {
    removeThreadGroupsFromHashTree(testTree, threadGroupsToRun);
}
testTree.add(testTree.getArray()[0], gui.getMainFrame());
It seems that the test plan is empty.
Can you check G:\official\JMeter\apache-jmeter-3.1\bin\Cafyne_3.0.jmx? Maybe the file wasn't copied correctly.
Thanks to the two solutions/hints above: the reason is that the first item in the tree, which is the Test Plan element itself, was disabled and greyed out (which usually goes unnoticed at first glance).
Just enable it and run your test.
I'm trying to set up JanusGraph for development on my local machine. My goal is to have a setup similar to the Cassandra remote server mode. As the storage backend I want to use Cassandra, and as the index backend I planned to use Elasticsearch.
For both, I'm using Docker containers (Cassandra, Elasticsearch).
My janusgraph-server.properties file looks like this:
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=cassandra
storage.hostname=127.0.0.1
storage.cassandra.astyanax.cluster-name=cassandra_test_cluster
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.port=9300
index.search.elasticsearch.cluster-name=elasticsearch_test_cluster
Starting the Gremlin Server leads to these failures:
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
162 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/gremlin-server.yaml
256 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
263 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
343 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
345 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
800 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterJanusGraphConnectionPool,ServiceType=connectionpool
807 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
884 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceJanusGraphConnectionPool,ServiceType=connectionpool
884 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1070 [main] INFO org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=c0a8000424833-XXX-MacBook-Pro-local1
1078 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterJanusGraphConnectionPool,ServiceType=connectionpool
1079 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1082 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceJanusGraphConnectionPool,ServiceType=connectionpool
1082 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1099 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
1179 [main] INFO org.elasticsearch.plugins - [General Orwell Taylor] loaded [], sites []
1655 [main] INFO org.janusgraph.diskstorage.es.ElasticSearchIndex - Configured remote host: 127.0.0.1 : 9300
1738 [elasticsearch[General Orwell Taylor][generic][T#2]] INFO org.elasticsearch.client.transport - [General Orwell Taylor] failed to get local cluster state for [#transport#-1][XXX-MacBook-Pro.local][inet[/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[/127.0.0.1:9300]][cluster:monitor/state] disconnected
1743 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [conf/gremlin-server/janusgraph-server.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:82)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:70)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:104)
at org.apache.tinkerpop.gremlin.server.GraphManager.lambda$new$0(GraphManager.java:55)
at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
at org.apache.tinkerpop.gremlin.server.GraphManager.<init>(GraphManager.java:53)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:83)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:110)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:344)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:78)
... 8 more
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:464)
at org.janusgraph.diskstorage.Backend.<init>(Backend.java:149)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1850)
at org.janusgraph.graphdb.database.StandardJanusGraph.<init>(StandardJanusGraph.java:134)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:107)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:87)
... 13 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
... 20 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:215)
... 25 more
1745 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
2190 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
2836 [main] WARN org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Could not initialize gremlin-groovy ScriptEngine with scripts/empty-sample.groovy as script could not be evaluated - javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
Why are none of the configured nodes available ("None of the configured nodes are available: []")?
What can I do to make them available?
Have you verified that Elasticsearch and Cassandra are actually listening on those ports on localhost? If not, I would recommend checking that you're forwarding those ports when starting your containers.
I would also recommend checking the Cassandra and Elasticsearch logs to see if there are any errors in them.
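A rough sketch of what that could look like (container names and image tags are assumptions; match the versions to what your JanusGraph build expects):
docker run -d --name cassandra -p 9160:9160 -p 9042:9042 cassandra:2.2
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 elasticsearch:1.7
nc -z 127.0.0.1 9160 && echo "Cassandra Thrift port reachable"
nc -z 127.0.0.1 9300 && echo "Elasticsearch transport port reachable"
Note that storage.backend=cassandra talks to Cassandra over Thrift (port 9160), and index.search.port=9300 is Elasticsearch's transport port rather than the 9200 REST port, so those are the two ports worth checking here.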
I'm trying to install Nutch 1.12 on a Windows 2012 server using Cygwin64 2.874. Due to limited skills with Java and Linux, I followed the step-by-step introduction at https://wiki.apache.org/nutch/NutchTutorial#Step-by-Step:_Seeding_the_crawldb_with_a_list_of_URLs. The command
bin/nutch inject crawl/crawldb urls
throws an error because winutils.exe couldn't be found. Here is the Hadoop log:
2016-07-01 09:22:25,660 ERROR util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
at org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:432)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:478)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.nutch.crawl.Injector.main(Injector.java:441)
2016-07-01 09:22:25,832 INFO crawl.Injector - Injector: starting at 2016-07-01 09:22:25
2016-07-01 09:22:25,832 INFO crawl.Injector - Injector: crawlDb: crawl/crawldb
2016-07-01 09:22:25,832 INFO crawl.Injector - Injector: urlDir: urls
2016-07-01 09:22:25,832 INFO crawl.Injector - Injector: Converting injected urls to crawl db entries.
2016-07-01 09:22:26,050 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-07-01 09:22:26,394 ERROR crawl.Injector - Injector: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
at org.apache.nutch.util.LockUtil.createLockFile(LockUtil.java:58)
at org.apache.nutch.crawl.Injector.inject(Injector.java:357)
at org.apache.nutch.crawl.Injector.run(Injector.java:467)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:441)
I found several hints here and downloaded winutils.exe from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master. I unpacked the folder on the server and set the environment variable NUTCH_OPTS=-Dhadoop.home.dir=[winutils_folder]. Now the winutils error is gone, but the nutch call fails with a different error:
2016-07-01 13:24:09,697 INFO crawl.Injector - Injector: starting at 2016-07-01 13:24:09
2016-07-01 13:24:09,697 INFO crawl.Injector - Injector: crawlDb: crawl/crawldb
2016-07-01 13:24:09,697 INFO crawl.Injector - Injector: urlDir: urls
2016-07-01 13:24:09,697 INFO crawl.Injector - Injector: Converting injected urls to crawl db entries.
2016-07-01 13:24:09,885 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-07-01 13:24:10,307 ERROR crawl.Injector - Injector: org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
at org.apache.nutch.util.LockUtil.createLockFile(LockUtil.java:58)
at org.apache.nutch.crawl.Injector.inject(Injector.java:357)
at org.apache.nutch.crawl.Injector.run(Injector.java:467)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:441)
After updating .bashrc (adding the lines below), the Hadoop log only shows warnings.
export JAVA_HOME='/cygdrive/c/Program Files/Java/jre1.8.0_92'
export PATH=$PATH:$JAVA_HOME/bin
But nutch still throws an error:
Injector: starting at 2016-07-01 17:25:22
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:173)
at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:160)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:94)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at org.apache.nutch.crawl.Injector.inject(Injector.java:376)
at org.apache.nutch.crawl.Injector.run(Injector.java:467)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:441)
I need hints as to what may be misconfigured, or is it simply not possible to run Nutch with Windows/Cygwin?
I spent about a week finding out what causes the UnsatisfiedLinkError.
I think the main reason is that Java cannot find its dependencies.
I solved this problem by commenting out part of the nutch script (located in {nutch_folder}\runtime\local\bin).
To be more specific, I commented out the lines that set -Djava.library.path, so that the Java program uses the original java.library.path instead.
Use the command below to check the original java.library.path in your environment:
java -XshowSettings:properties -version
It will show many properties, including java.library.path = ...
Comment out the code below:
# setup 'java.library.path' for native-hadoop code if necessary
# used only in local mode
JAVA_LIBRARY_PATH=''
if [ -d "${NUTCH_HOME}/lib/native" ]; then
  JAVA_PLATFORM=`"${JAVA}" -classpath "$CLASSPATH" org.apache.hadoop.util.PlatformName | sed -e 's/ /_/g'`
  if [ -d "${NUTCH_HOME}/lib/native" ]; then
    if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
      JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:${NUTCH_HOME}/lib/native/${JAVA_PLATFORM}"
    else
      JAVA_LIBRARY_PATH="${NUTCH_HOME}/lib/native/${JAVA_PLATFORM}"
    fi
  fi
fi
if [ $cygwin = true -a "X${JAVA_LIBRARY_PATH}" != "X" ]; then
  JAVA_LIBRARY_PATH="`cygpath -p -w "$JAVA_LIBRARY_PATH"`"
fi
Also comment out the code below:
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
NUTCH_OPTS=("${NUTCH_OPTS[#]}" -Djava.library.path="$JAVA_LIBRARY_PATH")
fi
With this change, NUTCH_OPTS no longer contains -Djava.library.path, so Java uses its original setting.
The nutch command then runs as if in (pseudo-)distributed mode.
I don't know exactly why.
Hope this helps.
There is no need to comment out anything in the nutch script.
If you have placed hadoop.dll and winutils.exe under the bin folder of your HADOOP_HOME, make sure that the path to that bin folder is also appended to your Windows PATH environment variable.
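For a Cygwin shell, a minimal sketch (C:\hadoop is an assumed location; use wherever you unpacked the binaries):
# in ~/.bashrc, assuming winutils.exe and hadoop.dll live in C:\hadoop\bin
export HADOOP_HOME='/cygdrive/c/hadoop'
export PATH=$PATH:$HADOOP_HOME/bin
On the Windows side, the same bin folder can be appended to the system PATH via the environment variables dialog or setx.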
I've successfully run Nutch (v1.4) for a crawl using local mode on my Ubuntu 11.10 system. However, when switching over to "deploy" mode (all else being the same), I get an error during the fetch cycle.
I have Hadoop running successfully on the machine, in pseudo-distributed mode (the replication factor is 1, and I have just 1 map and 1 reduce job set up). "jps" shows that all Hadoop daemons are up and running.
18920 Jps
14799 DataNode
15127 JobTracker
14554 NameNode
15361 TaskTracker
15044 SecondaryNameNode
I have also added the HADOOP_HOME/bin path to my PATH variable.
PATH=$PATH:/home/jimb/hadoop/bin
Then I ran the crawl from the nutch/deploy directory, as below:
bin/nutch crawl /data/runs/ar/seedurls -dir /data/runs/ar/crawls
Here is the output I get:
12/01/25 13:55:49 INFO crawl.Crawl: crawl started in: /data/runs/ar/crawls
12/01/25 13:55:49 INFO crawl.Crawl: rootUrlDir = /data/runs/ar/seedurls
12/01/25 13:55:49 INFO crawl.Crawl: threads = 10
12/01/25 13:55:49 INFO crawl.Crawl: depth = 5
12/01/25 13:55:49 INFO crawl.Crawl: solrUrl=null
12/01/25 13:55:49 INFO crawl.Injector: Injector: starting at 2012-01-25 13:55:49
12/01/25 13:55:49 INFO crawl.Injector: Injector: crawlDb: /data/runs/ar/crawls/crawldb
12/01/25 13:55:49 INFO crawl.Injector: Injector: urlDir: /data/runs/ar/seedurls
12/01/25 13:55:49 INFO crawl.Injector: Injector: Converting injected urls to crawl db entries.
12/01/25 13:56:53 INFO mapred.FileInputFormat: Total input paths to process : 1
...
...
12/01/25 13:57:21 INFO crawl.Injector: Injector: Merging injected urls into crawl db.
...
12/01/25 13:57:48 INFO crawl.Injector: Injector: finished at 2012-01-25 13:57:48, elapsed: 00:01:59
12/01/25 13:57:48 INFO crawl.Generator: Generator: starting at 2012-01-25 13:57:48
12/01/25 13:57:48 INFO crawl.Generator: Generator: Selecting best-scoring urls due for fetch.
12/01/25 13:57:48 INFO crawl.Generator: Generator: filtering: true
12/01/25 13:57:48 INFO crawl.Generator: Generator: normalizing: true
12/01/25 13:57:48 INFO mapred.FileInputFormat: Total input paths to process : 2
...
12/01/25 13:58:15 INFO crawl.Generator: Generator: Partitioning selected urls for politeness.
12/01/25 13:58:16 INFO crawl.Generator: Generator: segment: /data/runs/ar/crawls/segments/20120125135816
...
12/01/25 13:58:42 INFO crawl.Generator: Generator: finished at 2012-01-25 13:58:42, elapsed: 00:00:54
12/01/25 13:58:42 ERROR fetcher.Fetcher: Fetcher: No agents listed in 'http.agent.name' property.
Exception in thread "main" java.lang.IllegalArgumentException: Fetcher: No agents listed in 'http.agent.name' property.
at org.apache.nutch.fetcher.Fetcher.checkConfiguration(Fetcher.java:1261)
at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:1166)
at org.apache.nutch.crawl.Crawl.run(Crawl.java:136)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Now, the configuration files for "local" mode are set up fine (since a crawl in local mode succeeded). For running in deploy mode, since the "deploy" folder did not have any "conf" subdirectory, I assumed that either:
a) the conf files need to be copied over under "deploy/conf", OR
b) the conf files need to be placed onto HDFS.
I have verified that option (a) above does not help. So I'm assuming that the Nutch configuration files need to exist in HDFS for the fetcher to run successfully? However, I don't know at what path within HDFS I should place these Nutch conf files, or perhaps I'm barking up the wrong tree?
If Nutch reads config files during "deploy" mode from the files under "local/conf", then why did the local crawl work fine while the deploy-mode crawl fails?
What am I missing here?
Thanks in advance!
Try this out:
In the Nutch source directory, modify the file conf/nutch-site.xml to set http.agent.name properly (a sketch follows these steps).
Re-build the code using ant.
Go to the runtime/deploy directory, set the required environment variables, and try crawling again.
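A minimal sketch of the property entry, with "MyNutchCrawler" as a placeholder agent name:
<!-- conf/nutch-site.xml -->
<property>
  <name>http.agent.name</name>
  <value>MyNutchCrawler</value>
</property>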
This is likely because you have not rebuilt yet. Can you run "ant" and see what happens? Obviously, you need to update the http.agent.name in nutch-site.xml if you have not done so yet.
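A rough sketch of the rebuild-and-retry cycle both answers describe (assuming a standard Nutch 1.x source tree where the "runtime" ant target exists):
# from the Nutch source root, rebuild runtime/local and runtime/deploy
ant clean runtime
# then retry the crawl from the deploy runtime
cd runtime/deploy
bin/nutch crawl /data/runs/ar/seedurls -dir /data/runs/ar/crawls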
I am setting up a test environment using Apache Accumulo 1.4.0, Hadoop 0.20.2 and ZooKeeper 3.3.3. Please see below for the issue.
Hadoop and ZooKeeper work great together, but when I start Accumulo using the procedures from Apache Incubator I get the following ZooKeeper stream of INFO messages and one WARN:
2011-12-08 20:13:56,601 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /home/hadoop/zookeeper-3.3.3/bin/../conf/zoo.cfg
2011-12-08 20:13:56,603 - WARN [main:QuorumPeerMain#105] - Either no config or no quorum defined in config, running in standalone mode
2011-12-08 20:13:56,616 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /home/hadoop/zookeeper-3.3.3/bin/../conf/zoo.cfg
2011-12-08 20:13:56,617 - INFO [main:ZooKeeperServerMain#94] - Starting server
2011-12-08 20:13:56,626 - INFO [main:Environment#97] - Server environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
2011-12-08 20:13:56,627 - INFO [main:Environment#97] - Server environment:host.name=paz
2011-12-08 20:13:56,627 - INFO [main:Environment#97] - Server environment:java.version=1.6.0_26
2011-12-08 20:13:56,628 - INFO [main:Environment#97] - Server environment:java.vendor=Sun Microsystems Inc.
2011-12-08 20:13:56,629 - INFO [main:Environment#97] - Server environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
2011-12-08 20:13:56,629 - INFO [main:Environment#97] - Server environment:java.class.path=/home/hadoop/zookeeper-3.3.3/bin/../build/classes:/home/hadoop/zookeeper-3.3.3/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.3.3/bin/../zookeeper-3.3.3.jar:/home/hadoop/zookeeper-3.3.3/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.3.3/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.3.3/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.3.3/bin/../conf:
2011-12-08 20:13:56,630 - INFO [main:Environment#97] - Server environment:java.library.path=/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/i386/server:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/i386:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
2011-12-08 20:13:56,630 - INFO [main:Environment#97] - Server environment:java.io.tmpdir=/tmp
2011-12-08 20:13:56,631 - INFO [main:Environment#97] - Server environment:java.compiler=<NA>
2011-12-08 20:13:56,631 - INFO [main:Environment#97] - Server environment:os.name=Linux
2011-12-08 20:13:56,632 - INFO [main:Environment#97] - Server environment:os.arch=i386
2011-12-08 20:13:56,633 - INFO [main:Environment#97] - Server environment:os.version=3.0.0-13-generic
2011-12-08 20:13:56,633 - INFO [main:Environment#97] - Server environment:user.name=hadoop
2011-12-08 20:13:56,634 - INFO [main:Environment#97] - Server environment:user.home=/home/hadoop
2011-12-08 20:13:56,634 - INFO [main:Environment#97] - Server environment:user.dir=/home/hadoop
2011-12-08 20:13:56,641 - INFO [main:ZooKeeperServer#663] - tickTime set to 2000
2011-12-08 20:13:56,641 - INFO [main:ZooKeeperServer#672] - minSessionTimeout set to -1
2011-12-08 20:13:56,642 - INFO [main:ZooKeeperServer#681] - maxSessionTimeout set to -1
2011-12-08 20:13:56,661 - INFO [main:NIOServerCnxn$Factory#143] - binding to port 0.0.0.0/0.0.0.0:2181
2011-12-08 20:13:56,691 - INFO [main:FileSnap#82] - Reading snapshot /home/hadoop/zoo/dataDir/version-2/snapshot.0
2011-12-08 20:13:56,708 - INFO [main:FileTxnSnapLog#208] - Snapshotting: 4e
2011-12-08 20:14:52,147 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#251] - Accepted socket connection from /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,153 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#777] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,154 - INFO [SyncThread:0:FileTxnLog#197] - Creating new log file: log.4f
2011-12-08 20:14:52,410 - INFO [SyncThread:0:NIOServerCnxn#1580] - Established session 0x13420623ee70000 with negotiated timeout 30000 for client /0:0:0:0:0:0:0:1:40694
2011-12-08 20:14:52,959 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#251] - Accepted socket connection from /127.0.0.1:38446
2011-12-08 20:14:52,962 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#777] - Client attempting to establish new session at /127.0.0.1:38446
2011-12-08 20:14:53,007 - INFO [SyncThread:0:NIOServerCnxn#1580] - Established session 0x13420623ee70001 with negotiated timeout 30000 for client /127.0.0.1:38446
2011-12-08 20:14:59,932 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#634] - EndOfStreamException: Unable to read additional data from client session id 0x13420623ee70000, likely client has closed socket
and when I start the Accumulo shell I get the following errors:
18 12:44:38,746 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:38,846 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:38,947 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,048 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,148 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,249 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,350 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
18 12:44:39,450 [impl.ServerClient] WARN : Failed to find an available server in the list of servers: []
The fix: I corrected the tserver memory settings so they do not exceed what the JVM allows. The tserver no longer crashes and the error is resolved.
The answer came from the accumulo-incubator user list and is reposted at the bottom.
Basically, when modifying the memory settings to run in pseudo-distributed mode on a laptop, I had made incorrect changes to the accumulo-site.xml and accumulo-env.sh files concerning the tablet server memory usage. The clue to the error was found in the /home/hadoop/accumulo/logs/tserver*.log file:
20 18:20:00,951 [tabletserver.NativeMap] ERROR: Failed to load native map library /home/hadoop/accumulo/lib/native/map/libNativeMap-Linux-i386-32.so
java.lang.UnsatisfiedLinkError: Can't load library: /home/hadoop/accumulo/lib/native/map/libNativeMap-Linux-i386-32.so
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1706)
at java.lang.Runtime.load0(Runtime.java:770)
at java.lang.System.load(System.java:1003)
at org.apache.accumulo.server.tabletserver.NativeMap.loadNativeLib(NativeMap.java:144)
at org.apache.accumulo.server.tabletserver.NativeMap.<clinit>(NativeMap.java:156)
at org.apache.accumulo.server.tabletserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:123)
at org.apache.accumulo.server.tabletserver.TabletServer.config(TabletServer.java:2959)
at org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3085)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.accumulo.start.Main$1.run(Main.java:89)
at java.lang.Thread.run(Thread.java:662)
20 18:20:00,999 [tabletserver.TabletServer] ERROR: Uncaught exception in TabletServer.main, exiting
java.lang.IllegalArgumentException: Maximum tablet server map memory 134,217,728 and block cache sizes 186,646,528 is too large for this JVM configuration 132,579,328
at org.apache.accumulo.server.tabletserver.TabletServerResourceManager.<init>(TabletServerResourceManager.java:134)
at org.apache.accumulo.server.tabletserver.TabletServer.config(TabletServer.java:2959)
at org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3085)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.accumulo.start.Main$1.run(Main.java:89)
at java.lang.Thread.run(Thread.java:662)
Specific help text:
"The native libraries are not loading, which is shifting in-memory
map into the java workspace. Add to that your block cache size, and your
specifications for memory use are higher than the JVM will be allowed to
allocate. The tablet servers complain, and exit. You should see these
complaints on the accumulo monitor web pages.
You may find a benefit to rebuilding the native map library, which will
move the allocation of the in-memory map to outside the JVM. This is
not required.
The size of any memory dedicated to cache must be smaller than the size
of the JVM, which must include substantial working space for RPC calls
and garbage collection over time."
If you run the ZooKeeper client (/home/hadoop/zookeeper-3.3.3/bin/zkCli.sh) and do an ls of /accumulo/<instance uuid>/tservers, I assume you won't see any servers listed. You should see one or more tablet servers listed if Accumulo was initialized properly. Are you sure you ran the Accumulo init script after setting your ZooKeeper servers in the accumulo-site.xml per the instructions?
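A quick sketch of that check (the instance UUID is specific to your installation, so it stays a placeholder here):
/home/hadoop/zookeeper-3.3.3/bin/zkCli.sh -server localhost:2181
# inside the ZooKeeper client shell:
ls /accumulo/<instance uuid>/tservers
# an empty result here means no tablet servers have registered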
Make sure you set "instance.zookeeper.host" to the location of your ZooKeeper node in the accumulo-site.xml file.
Additionally, check the logs for your tservers and loggers. If you have other configuration issues, they will not go live, which will cause the master to report having difficulties finding tservers.
Go into $ACCUMULO_HOME/conf. Edit the files masters, slaves and tracers to contain a single line that reads "localhost" (I assume you are doing single-node).
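For example, assuming a single-node setup:
cd $ACCUMULO_HOME/conf
echo localhost > masters
echo localhost > slaves
echo localhost > tracers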