The plan runs fine in GUI mode.
It also runs in non-GUI mode, but doesn't seem to send proper(?) HTTP requests...
This is how I launch it:
sh jmeter.sh -n -t thread-test.jmx -l thread-test.csv
When running in the GUI the request is correct:
GET http://www.bing.com/
[no cookies]
Request Headers:
Connection: keep-alive
Host: www.bing.com
User-Agent: Apache-HttpClient/4.5.2 (Java/1.8.0_73)
But when run in non-GUI mode, the request has no data to display:
No data to display
The sampler result looks like this:
Thread Name: Thread Group 1-1
Sample Start: 2016-11-01 15:24:04 CET
Load time: 141
Connect Time: 0
Latency: 141
Size in bytes: 85790
Headers size in bytes: 0
Body size in bytes: 0
Sample Count: 1
Error Count: 0
Data type ("text"|"bin"|""): text
Response code: 200
Response message: OK
Response headers:
SampleResult fields:
ContentType:
DataEncoding: null
Any idea? I'm stumped...
Here's the test plan:
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="2.9" jmeter="3.0 r1743807">
<hashTree>
<TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
<stringProp name="TestPlan.comments"></stringProp>
<boolProp name="TestPlan.functional_mode">false</boolProp>
<boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
<elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
<collectionProp name="Arguments.arguments"/>
</elementProp>
<stringProp name="TestPlan.user_define_classpath"></stringProp>
</TestPlan>
<hashTree>
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
<stringProp name="ThreadGroup.on_sample_error">stoptest</stringProp>
<elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
<boolProp name="LoopController.continue_forever">false</boolProp>
<stringProp name="LoopController.loops">1</stringProp>
</elementProp>
<stringProp name="ThreadGroup.num_threads">1</stringProp>
<stringProp name="ThreadGroup.ramp_time">1</stringProp>
<longProp name="ThreadGroup.start_time">1478003529000</longProp>
<longProp name="ThreadGroup.end_time">1478003529000</longProp>
<boolProp name="ThreadGroup.scheduler">false</boolProp>
<stringProp name="ThreadGroup.duration"></stringProp>
<stringProp name="ThreadGroup.delay"></stringProp>
</ThreadGroup>
<hashTree>
<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
<elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" enabled="true">
<collectionProp name="Arguments.arguments"/>
</elementProp>
<stringProp name="HTTPSampler.domain">www.bing.com</stringProp>
<stringProp name="HTTPSampler.port"></stringProp>
<stringProp name="HTTPSampler.connect_timeout"></stringProp>
<stringProp name="HTTPSampler.response_timeout"></stringProp>
<stringProp name="HTTPSampler.protocol"></stringProp>
<stringProp name="HTTPSampler.contentEncoding"></stringProp>
<stringProp name="HTTPSampler.path"></stringProp>
<stringProp name="HTTPSampler.method">GET</stringProp>
<boolProp name="HTTPSampler.follow_redirects">true</boolProp>
<boolProp name="HTTPSampler.auto_redirects">false</boolProp>
<boolProp name="HTTPSampler.use_keepalive">true</boolProp>
<boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
<boolProp name="HTTPSampler.BROWSER_COMPATIBLE_MULTIPART">true</boolProp>
<boolProp name="HTTPSampler.image_parser">true</boolProp>
<boolProp name="HTTPSampler.concurrentDwn">true</boolProp>
<boolProp name="HTTPSampler.monitor">false</boolProp>
<stringProp name="HTTPSampler.embedded_url_re"></stringProp>
</HTTPSamplerProxy>
<hashTree/>
<ResultCollector guiclass="ViewResultsFullVisualizer" testclass="ResultCollector" testname="View Results Tree" enabled="true">
<boolProp name="ResultCollector.error_logging">false</boolProp>
<objProp>
<name>saveConfig</name>
<value class="SampleSaveConfiguration">
<time>true</time>
<latency>true</latency>
<timestamp>true</timestamp>
<success>true</success>
<label>true</label>
<code>true</code>
<message>true</message>
<threadName>true</threadName>
<dataType>true</dataType>
<encoding>false</encoding>
<assertions>true</assertions>
<subresults>true</subresults>
<responseData>false</responseData>
<samplerData>false</samplerData>
<xml>false</xml>
<fieldNames>true</fieldNames>
<responseHeaders>false</responseHeaders>
<requestHeaders>false</requestHeaders>
<responseDataOnError>false</responseDataOnError>
<saveAssertionResultsFailureMessage>true</saveAssertionResultsFailureMessage>
<assertionsResultsToSave>0</assertionsResultsToSave>
<bytes>true</bytes>
<threadCounts>true</threadCounts>
<idleTime>true</idleTime>
</value>
</objProp>
<stringProp name="filename">/Users/jboive/Downloads/apache-jmeter-3.0/bin/thread-test.csv</stringProp>
</ResultCollector>
<hashTree/>
</hashTree>
</hashTree>
</hashTree>
</jmeterTestPlan>
Here's the log:
2016/11/01 15:24:31 INFO - jmeter.util.JMeterUtils: Setting Locale to en_US
2016/11/01 15:24:31 INFO - jmeter.JMeter: Loading user properties from: /Users/jboive/Downloads/apache-jmeter-3.0/bin/user.properties
2016/11/01 15:24:31 INFO - jmeter.JMeter: Loading system properties from: /Users/jboive/Downloads/apache-jmeter-3.0/bin/system.properties
2016/11/01 15:24:31 INFO - jmeter.JMeter: Copyright (c) 1998-2016 The Apache Software Foundation
2016/11/01 15:24:31 INFO - jmeter.JMeter: Version 3.0 r1743807
2016/11/01 15:24:31 INFO - jmeter.JMeter: java.version=1.8.0_73
2016/11/01 15:24:31 INFO - jmeter.JMeter: java.vm.name=Java HotSpot(TM) 64-Bit Server VM
2016/11/01 15:24:31 INFO - jmeter.JMeter: os.name=Mac OS X
2016/11/01 15:24:31 INFO - jmeter.JMeter: os.arch=x86_64
2016/11/01 15:24:31 INFO - jmeter.JMeter: os.version=10.12.1
2016/11/01 15:24:31 INFO - jmeter.JMeter: file.encoding=UTF-8
2016/11/01 15:24:31 INFO - jmeter.JMeter: Max memory =514850816
2016/11/01 15:24:31 INFO - jmeter.JMeter: Available Processors =8
2016/11/01 15:24:31 INFO - jmeter.JMeter: Default Locale=English (United States)
2016/11/01 15:24:31 INFO - jmeter.JMeter: JMeter Locale=English (United States)
2016/11/01 15:24:31 INFO - jmeter.JMeter: JMeterHome=/Users/jboive/Downloads/apache-jmeter-3.0
2016/11/01 15:24:31 INFO - jmeter.JMeter: user.dir =/Users/jboive/Downloads/apache-jmeter-3.0/bin
2016/11/01 15:24:31 INFO - jmeter.JMeter: PWD =/Users/jboive/Downloads/apache-jmeter-3.0/bin
2016/11/01 15:24:31 INFO - jmeter.JMeter: IP: 10.0.1.19 Name: iMac.local FullName: 10.0.1.19
2016/11/01 15:24:31 INFO - jmeter.gui.action.LookAndFeelCommand: Using look and feel: com.apple.laf.AquaLookAndFeel [Mac OS X, System]
2016/11/01 15:24:31 INFO - jmeter.JMeter: Loaded icon properties from org/apache/jmeter/images/icon.properties
2016/11/01 15:24:33 INFO - jmeter.engine.util.CompoundVariable: Note: Function class names must contain the string: '.functions.'
2016/11/01 15:24:33 INFO - jmeter.engine.util.CompoundVariable: Note: Function class names must not contain the string: '.gui.'
2016/11/01 15:24:35 INFO - org.jmeterplugins.repository.PluginManager: Plugins Status: [jpgc-graphs-basic=2.0, jpgc-graphs-additional=2.0, jpgc-autostop=0.1, blazemeter-debugger=0.3, jpgc-functions=2.0, jmeter-ftp=3.0, jpgc-filterresults=2.1, jmeter-http=3.0, jmeter-jdbc=3.0, jmeter-jms=3.0, jmeter-monitors=3.0, jmeter-core=3.0, jmeter-junit=3.0, jmeter-java=3.0, jmeter-ldap=3.0, jmeter-mail=3.0, jmeter-mongodb=3.0, jmeter-native=3.0, jpgc-plugins-manager=0.10, jpgc-synthesis=2.0, jmeter-tcp=3.0, jmeter-components=3.0]
2016/11/01 15:24:36 INFO - jmeter.util.BSFTestElement: Registering JMeter version of JavaScript engine as work-round for BSF-22
2016/11/01 15:24:36 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/html is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/11/01 15:24:36 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xhtml+xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/11/01 15:24:36 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/11/01 15:24:36 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/11/01 15:24:36 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/vnd.wap.wml is org.apache.jmeter.protocol.http.parser.RegexpHTMLParser
2016/11/01 15:24:36 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/css is org.apache.jmeter.protocol.http.parser.CssParser
2016/11/01 15:24:37 INFO - jorphan.exec.KeyToolUtils: keytool found at 'keytool'
2016/11/01 15:24:37 INFO - jmeter.protocol.http.proxy.ProxyControl: HTTP(S) Test Script Recorder SSL Proxy will use keys that support embedded 3rd party resources in file /Users/jboive/Downloads/apache-jmeter-3.0/bin/proxyserver.jks
2016/11/01 15:24:37 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.config.MongoSourceElement
2016/11/01 15:24:37 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.sampler.MongoScriptSampler
2016/11/01 15:24:37 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.DistributionGraphVisualizer
2016/11/01 15:24:37 INFO - jmeter.samplers.SampleResult: Note: Sample TimeStamps are START times
2016/11/01 15:24:37 INFO - jmeter.samplers.SampleResult: sampleresult.default.encoding is set to ISO-8859-1
2016/11/01 15:24:37 INFO - jmeter.samplers.SampleResult: sampleresult.useNanoTime=true
2016/11/01 15:24:37 INFO - jmeter.samplers.SampleResult: sampleresult.nanoThreadSleep=5000
2016/11/01 15:24:37 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.SplineVisualizer
2016/11/01 15:24:49 INFO - jmeter.services.FileServer: Default base='/Users/jboive/Downloads/apache-jmeter-3.0/bin'
2016/11/01 15:24:49 INFO - jmeter.gui.action.Load: Loading file: /Users/jboive/Downloads/apache-jmeter-3.0/bin/thread-test.jmx
2016/11/01 15:24:49 INFO - jmeter.services.FileServer: Set new base='/Users/jboive/Downloads/apache-jmeter-3.0/bin'
2016/11/01 15:24:49 INFO - jmeter.save.SaveService: Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2
2016/11/01 15:24:49 INFO - jmeter.save.SaveService: Using SaveService properties file encoding UTF-8
2016/11/01 15:24:49 INFO - jmeter.save.SaveService: Using SaveService properties version 2.9
2016/11/01 15:24:49 INFO - jmeter.save.SaveService: All converter versions present and correct
2016/11/01 15:24:49 INFO - jmeter.save.SaveService: Loading file: /Users/jboive/Downloads/apache-jmeter-3.0/bin/thread-test.jmx
2016/11/01 15:24:49 INFO - jmeter.services.FileServer: Set new base='/Users/jboive/Downloads/apache-jmeter-3.0/bin'
2016/11/01 15:24:53 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/11/01 15:29:59 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2016/11/01 15:29:59 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/11/01 15:29:59 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2016/11/01 15:29:59 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2016/11/01 15:29:59 INFO - jmeter.engine.StandardJMeterEngine: Starting 1 threads for group Thread Group.
2016/11/01 15:29:59 INFO - jmeter.engine.StandardJMeterEngine: Test will stop on error
2016/11/01 15:29:59 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 1 ramp-up 1 perThread 1000.0 delayedStart=false
2016/11/01 15:29:59 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2016/11/01 15:29:59 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2016/11/01 15:29:59 INFO - jmeter.threads.JMeterThread: Thread started: Thread Group 1-1
2016/11/01 15:29:59 INFO - jmeter.protocol.http.sampler.HTTPHCAbstractImpl: Local host = iMac.local
2016/11/01 15:29:59 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: HTTP request retry count = 0
2016/11/01 15:29:59 INFO - jmeter.protocol.http.parser.BaseParser: Created org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/11/01 15:29:59 INFO - jmeter.threads.JMeterThread: Thread is done: Thread Group 1-1
2016/11/01 15:29:59 INFO - jmeter.threads.JMeterThread: Thread finished: Thread Group 1-1
2016/11/01 15:29:59 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2016/11/01 15:29:59 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
There's nothing wrong here.
In GUI mode, View Results Tree shows all fields.
In non-GUI mode, the output you have configured for View Results Tree is CSV, which does not store all fields (response data, encoding, etc.).
That's why you don't have all the data.
To get what you want, click the "Configure" button in View Results Tree and select the XML fields. Rename your file to thread-test.xml.
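For example, with the same launch command as above, you would then run:
sh jmeter.sh -n -t thread-test.jmx -l thread-test.xml
After the run you can load thread-test.xml back into a View Results Tree listener in the GUI (via its "Filename" field) to inspect the full requests and responses.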
Actually your Test Plan does work; it just stores a limited subset of metrics in the .jtl results file, which is enough to populate listeners like the Aggregate Report or to build the HTML Reporting Dashboard.
The data is cut because saving requests, and especially responses, causes massive disk I/O overhead and consumes a lot of memory; that's why it is recommended to avoid saving the extra data, or to save it only when an error occurs.
If for some reason you need the full picture in non-GUI mode, this is controllable via JMeter properties. To get the same level of detail as in GUI mode, add the following lines to the user.properties file (it lives in JMeter's "bin" folder):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.responseHeaders=true
A JMeter restart will be required to pick up the properties. See the Apache JMeter Properties Customization Guide to learn more about JMeter properties and the ways of setting and/or overriding them.
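Alternatively, a sketch using JMeter's -J flag (which overrides a property for a single run), so you don't have to edit user.properties:
sh jmeter.sh -n -t thread-test.jmx -l thread-test.xml \
  -Jjmeter.save.saveservice.output_format=xml \
  -Jjmeter.save.saveservice.response_data=true \
  -Jjmeter.save.saveservice.samplerData=true \
  -Jjmeter.save.saveservice.requestHeaders=true \
  -Jjmeter.save.saveservice.responseHeaders=true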
Related
Can't create an instance of GremlinServer with HBase and Elasticsearch.
When I run the shell script bin/gremlin-server.sh config/gremlin.yaml, I get this exception:
Exception in thread "main" java.lang.IllegalStateException: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
Gremlin Server logs:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/user/janusgraph/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/user/janusgraph/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
135 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from config/gremlin.yaml
211 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
557 [main] INFO org.janusgraph.diskstorage.hbase.HBaseCompatLoader - Instantiated HBase compatibility layer supporting runtime HBase version 1.2.6: org.janusgraph.diskstorage.hbase.HBaseCompat1_0
835 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - HBase configuration: setting zookeeper.znode.parent=/hbase-unsecure
836 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied host list from root.storage.hostname to hbase.zookeeper.quorum: main.local,data1.local,data2.local
836 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied Zookeeper Port from root.storage.port to hbase.zookeeper.property.clientPort: 2181
866 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1214 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x1e44b638 connecting to ZooKeeper ensemble=main.local:2181,data1.local:2181,data2.local:2181
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:host.name=main.local
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
1221 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/user/janusgraph/conf/gremlin-server:/home/user/janusgraph/lib/slf4j-log4j12-
// (truncated: JanusGraph pulls in very many classpath dependencies here)
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-862.el7.x86_64
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.name=user
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/user
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/user/janusgraph
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=main.local:2181,data1.local:2181,data2.local:2181 sessionTimeout=90000 watcher=hconnection-0x1e44b6380x0, quorum=main.local:2181,data1.local:2181,data2.local:2181, baseZNode=/hbase-unsecure
1274 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Opening socket connection to server data2.local/xxx.xxx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
1394 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Socket connection established to data2.local/xxx.xxx.xxx.xxx, initiating session
1537 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Session establishment complete on server data2.local/xxx.xxx.xxx.xxx:2181, sessionid = 0x26b266353e50014, negotiated timeout = 60000
3996 [main] INFO org.janusgraph.core.util.ReflectiveConfigOptionLoader - Loaded and initialized config classes: 13 OK out of 13 attempts in PT0.631S
4103 [main] INFO org.reflections.Reflections - Reflections took 60 ms to scan 2 urls, producing 0 keys and 0 values
4400 [main] WARN org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Local setting cache.db-cache-time=180000 (Type: GLOBAL_OFFLINE) is overridden by globally managed value (10000). Use the ManagementSystem interface instead of the local configuration to control this setting.
4453 [main] WARN org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Local setting cache.db-cache-clean-wait=20 (Type: GLOBAL_OFFLINE) is overridden by globally managed value (50). Use the ManagementSystem interface instead of the local configuration to control this setting.
4473 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing master protocol: MasterService
4474 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x26b266353e50014
4485 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Session: 0x26b266353e50014 closed
4485 [main-EventThread] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - EventThread shut down
4500 [main] INFO org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=c0a8873843641-main-local1
4530 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - HBase configuration: setting zookeeper.znode.parent=/hbase-unsecure
4530 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied host list from root.storage.hostname to hbase.zookeeper.quorum: main.local,data1.local,data2.local
4531 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied Zookeeper Port from root.storage.port to hbase.zookeeper.property.clientPort: 2181
4532 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x5bb3d42d connecting to ZooKeeper ensemble=main.local:2181,data1.local:2181,data2.local:2181
4532 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=main.local:2181,data1.local:2181,data2.local:2181 sessionTimeout=90000 watcher=hconnection-0x5bb3d42d0x0, quorum=main.local:2181,data1.local:2181,data2.local:2181, baseZNode=/hbase-unsecure
4534 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Opening socket connection to server main.local/xxx.xxx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
4534 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Socket connection established to main.local/xxx.xxx.xxx.xxx:2181, initiating session
4611 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Session establishment complete on server main.local/xxx.xxx.xxx.xxx:2181, sessionid = 0x36b266353fd0021, negotiated timeout = 60000
4616 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
5781 [main] INFO org.janusgraph.diskstorage.Backend - Initiated backend operations thread pool of size 16
6322 [main] INFO org.janusgraph.diskstorage.Backend - Configuring total store cache size: 186687592
7555 [main] INFO org.janusgraph.graphdb.database.IndexSerializer - Hashing index keys
7925 [main] INFO org.janusgraph.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time 2019-06-13T09:54:08.929Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller#656d10a4
7927 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] was successfully configured via [config/db.properties].
7927 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
Exception in thread "main" java.lang.IllegalStateException: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:522)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:126)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:83)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor$Builder.create(GremlinExecutor.java:813)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:169)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:89)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:110)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:363)
Caused by: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
at java.lang.Class.getMethod(Class.java:1786)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:492)
... 7 more
Graph configuration:
storage.backend=hbase
storage.hostname=main.local,data1.local,data2.local
storage.port=2181
storage.hbase.ext.zookeeper.znode.parent=/hbase-unsecure
cache.db-cache=true
cache.db-cache-clean-wait=20
cache.db-cache-time=180000
cache.db-cache-size=0.5
index.search.backend=elasticsearch
index.search.hostname=xxx.xxx.xxx.xxx
index.search.port=9200
index.search.elasticsearch.client-only=false
gremlin.graph=org.janusgraph.core.JanusGraphFactory
host=0.0.0.0
Gremlin Server configuration:
host: localhost
port: 8182
channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
graphs: { graph: config/db.properties }
scriptEngines: {
gremlin-groovy: {
plugins: {
org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: { classImports: [java.lang.Math], methodImports: [java.lang.Math#*] },
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: { files: [scripts/janusgraph.groovy] }
}
}
}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true } }
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }
metrics: {
slf4jReporter: {enabled: true, interval: 180000}
}
What do I need to do to start the server without this error?
I have this upstream Publisher that emits a number every second:
private fun counter(emissionIntervalMillis: Long) =
Flux.interval(Duration.ofMillis(emissionIntervalMillis))
.map { it }.log()
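Note that Flux.interval emits on Reactor's parallel scheduler by default, which is why the onNext entries in the logs below come from a parallel-1 thread.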
Consider this implementation, in which a UnicastProcessor subscribes to the previous Flux. In addition, processor.publish() produces a ConnectableFlux, which autoConnect() turns back into an ordinary Flux. Finally, I subscribe to it:
val latch = CountDownLatch(15)
val numberGenerator: Flux<Long> = counter(1000)
val processor = UnicastProcessor.create<Long>()
numberGenerator.subscribeWith(processor)
val connectableFlux = processor.doOnSubscribe { println("subscribed!") }.publish().autoConnect()
Thread.sleep(5000)
connectableFlux.subscribe {
logger.info("Element [{}]", it)
latch.countDown()
}
latch.await()
Logs:
15:58:26.941 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
15:58:26.967 [main] INFO reactor.Flux.Map.1 - onSubscribe(FluxMap.MapSubscriber)
15:58:26.969 [main] INFO reactor.Flux.Map.1 - request(unbounded)
15:58:27.973 [parallel-1] INFO reactor.Flux.Map.1 - onNext(0)
15:58:28.973 [parallel-1] INFO reactor.Flux.Map.1 - onNext(1)
15:58:29.975 [parallel-1] INFO reactor.Flux.Map.1 - onNext(2)
15:58:30.974 [parallel-1] INFO reactor.Flux.Map.1 - onNext(3)
15:58:31.974 [parallel-1] INFO reactor.Flux.Map.1 - onNext(4)
subscribed!
15:58:31.979 [main] INFO com.codependent.processors.Tests - Element [0]
15:58:31.980 [main] INFO com.codependent.processors.Tests - Element [1]
15:58:31.980 [main] INFO com.codependent.processors.Tests - Element [2]
15:58:31.980 [main] INFO com.codependent.processors.Tests - Element [3]
15:58:31.980 [main] INFO com.codependent.processors.Tests - Element [4]
15:58:32.972 [parallel-1] INFO reactor.Flux.Map.1 - onNext(5)
15:58:32.972 [parallel-1] INFO com.codependent.processors.Tests - Element [5]
As you can see, when there is a subscriber to the connectableFlux, it gets the previously generated items, which were cached by the UnicastProcessor. I guess this is the expected behaviour:
if you push any amount of data through it while its Subscriber has not
yet requested data, it will buffer all of the data.
Now, instead of using autoConnect I use connect():
val latch = CountDownLatch(15)
val numberGenerator: Flux<Long> = counter(1000)
val processor = UnicastProcessor.create<Long>()
numberGenerator.subscribeWith(processor)
val connectableFlux = processor.doOnSubscribe { println("subscribed!") }.publish()
connectableFlux.connect()
Thread.sleep(5000)
connectableFlux.subscribe {
logger.info("Element [{}]", it)
latch.countDown()
}
The result is now quite different: the subscriber doesn't get the items that should have been cached by the UnicastProcessor. Can someone explain the difference?
16:08:44.299 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
16:08:44.324 [main] INFO reactor.Flux.Map.1 - onSubscribe(FluxMap.MapSubscriber)
16:08:44.326 [main] INFO reactor.Flux.Map.1 - request(unbounded)
subscribed!
16:08:45.330 [parallel-1] INFO reactor.Flux.Map.1 - onNext(0)
16:08:46.329 [parallel-1] INFO reactor.Flux.Map.1 - onNext(1)
16:08:47.329 [parallel-1] INFO reactor.Flux.Map.1 - onNext(2)
16:08:48.331 [parallel-1] INFO reactor.Flux.Map.1 - onNext(3)
16:08:49.330 [parallel-1] INFO reactor.Flux.Map.1 - onNext(4)
16:08:50.328 [parallel-1] INFO reactor.Flux.Map.1 - onNext(5)
16:08:50.328 [parallel-1] INFO com.codependent.processors.Tests - Element [5]
16:08:51.332 [parallel-1] INFO reactor.Flux.Map.1 - onNext(6)
16:08:51.332 [parallel-1] INFO com.codependent.processors.Tests - Element [6]
After rereading the docs I found that autoConnect() can take the minimum number of subscribers required before it connects to the upstream. Changing it to autoConnect(0) has the same effect as connect(): the previous items are not passed to the subscriber:
val latch = CountDownLatch(15)
val numberGenerator: Flux<Long> = counter(1000)
val processor = UnicastProcessor.create<Long>()
numberGenerator.subscribeWith(processor)
val connectableFlux = processor.doOnSubscribe { println("subscribed!") }.log().publish().autoConnect(0)
Thread.sleep(5000)
connectableFlux.subscribe {
logger.info("Element [{}]", it)
latch.countDown()
}
latch.await()
It seems that since the connectableFlux is ready (connected), the processor gets the onSubscribe signal, and as there aren't yet any actual subscribers to the connectableFlux, it discards the items.
Changing publish() to replay() makes the subscriber get the items from the beginning, as stated in the docs.
val connectableFlux = processor.doOnSubscribe { println("subscribed!") }.log().replay().autoConnect(0)
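Putting it together, a minimal sketch of the replay() variant (same setup as the snippets above, with counter() and logger as defined earlier):
val latch = CountDownLatch(15)
val numberGenerator: Flux<Long> = counter(1000)
val processor = UnicastProcessor.create<Long>()
numberGenerator.subscribeWith(processor)
// replay() caches the elements it receives, so even with autoConnect(0)
// a subscriber arriving five seconds later still sees items from the start
val connectableFlux = processor.doOnSubscribe { println("subscribed!") }.log().replay().autoConnect(0)
Thread.sleep(5000)
connectableFlux.subscribe {
    logger.info("Element [{}]", it)
    latch.countDown()
}
latch.await()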
I'd like to use Visual Studio Team Services to run a load-test plan written in JMeter in the cloud. In my test I have to upload a file.
I guess I should attach this file to the 'Supporting files' field, but I have no idea what the path to this file is. This is the error message:
HttpError Non HTTP response code: java.io.FileNotFoundException Agent000 | Thread Group | Upload | Non HTTP response message: test.xml (The system cannot find the file specified)
I've tried some paths which I found in the log, e.g. E:\approot\JMeterLoadTest\, but I get the error anyway.
What is the path to a file added to 'Supporting files'? Has anybody had a similar problem?
You can define "User Defined Variables" and use BeanShell:
<Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
<collectionProp name="Arguments.arguments">
<elementProp name="testURL" elementType="Argument">
<stringProp name="Argument.name">testURL</stringProp>
<stringProp name="Argument.value">www.datafilehost.com</stringProp>
<stringProp name="Argument.metadata">=</stringProp>
</elementProp>
<elementProp name="testFile" elementType="Argument">
<stringProp name="Argument.name">testFile</stringProp>
<stringProp name="Argument.value">upload.txt</stringProp>
<stringProp name="Argument.metadata">=</stringProp>
</elementProp>
<elementProp name="scriptPath" elementType="Argument">
<stringProp name="Argument.name">scriptPath</stringProp>
<stringProp name="Argument.value">${__BeanShell(import org.apache.jmeter.services.FileServer; FileServer.getFileServer().getBaseDir();)}${__BeanShell(File.separator,)}</stringProp>
<stringProp name="Argument.metadata">=</stringProp>
</elementProp>
</collectionProp>
</Arguments>
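The scriptPath variable above uses JMeter's FileServer API to resolve, at runtime, the directory the test plan was loaded from, and appends a platform-specific file separator; this way the path keeps working wherever the agent happens to extract the supporting files, without hard-coding agent paths.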
Then you can load the file from your base dir:
<elementProp name="HTTPsampler.Files" elementType="HTTPFileArgs">
<collectionProp name="HTTPFileArgs.files">
<elementProp name="${scriptPath}${testFile}" elementType="HTTPFileArg">
<stringProp name="File.path">${scriptPath}${testFile}</stringProp>
<stringProp name="File.paramname">upfile</stringProp>
<stringProp name="File.mimetype">text/plain</stringProp>
</elementProp>
</collectionProp>
</elementProp>
Full sample:
https://github.com/aliesbelik/jmx/blob/master/so/2015-04-15_file-upload-download.jmx
I'm trying to run an Oozie coordinator with a java-action workflow that runs a Camus mapper job. The coordinator seems to run and starts the workflow every 20 minutes, but the workflow just runs indefinitely, even though the job easily completes in a few minutes when run independently. I think the error has to do either with how I run the job or with how the arguments are passed, but I'm not sure how to debug this. Here is the code:
/coord/job.properties
oozie.coord.application.path=hdfs://10.0.2.15:8020/user/hue/app/coord/coordinator.xml
name=camus
frequency=20
start=2015-07-30T11:40Z
end=2016-07-30T11:40Z
timezone=GMT+0530
workflow=hdfs://10.0.2.15:8020/user/hue/app/workflow/workflow.xml
nameNode=hdfs://10.0.2.15:8020
jobTracker=10.0.2.15:8021
queueName=default
properties=${nameNode}/user/hue/app/workflows/lib/config.properties
/coord/coordinator.xml
<coordinator-app name="${name}" frequency="${frequency}" start="${start}" end="${end}" timezone="${timezone}" xmlns="uri:oozie:coordinator:0.1">
<action>
<workflow>
<app-path>${workflow}</app-path>
</workflow>
</action>
</coordinator-app>
/workflow/workflow.xml
<workflow-app xmlns='uri:oozie:workflow:0.4' name='camus-wf'>
<start to='camus_job' />
<action name='camus_job'>
<java>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<main-class>com.linkedin.camus.etl.kafka.CamusJob</main-class>
<arg>-P</arg>
<arg>${properties}</arg>
</java>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>Camus Job Failed</message>
</kill>
<end name='end' />
</workflow-app>
The SHADED jar and config.properties are located in /workflow/lib/
I'm running HDP 2.2
Coordinator Logs:
2015-08-03 06:43:43,820 INFO CoordSubmitXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[-] ENDED Coordinator Submit jobId=0000000-150803063131195-oozie-oozi-C
2015-08-03 06:43:43,935 INFO CoordMaterializeTransitionXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[-] materialize actions for tz=Coordinated Universal Time,
start=Thu Jul 30 11:40:00 UTC 2015, end=Thu Jul 30 15:40:00 UTC 2015,
timeUnit 12,
frequency :20:MINUTE,
lastActionNumber 0
2015-08-03 06:43:43,971 INFO CoordMaterializeTransitionXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[-] [0000000-150803063131195-oozie-oozi-C]: Update status from PREP to RUNNING
2015-08-03 06:43:44,113 INFO CoordActionInputCheckXCommand:543 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[0000000-150803063131195-oozie-oozi-C#1] [0000000-150803063131195-oozie-oozi-C#1]::CoordActionInputCheck:: Missing deps:
2015-08-03 06:43:44,209 INFO CoordActionNotificationXCommand:543 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[0000000-150803063131195-oozie-oozi-C#1] STARTED Coordinator Notification actionId=0000000-150803063131195-oozie-oozi-C#1 : WAITING
...
2015-08-03 06:43:44,267 INFO CoordActionNotificationXCommand:543 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[0000000-150803063131195-oozie-oozi-C#12] No Notification URL is defined. Therefore nothing to notify for job 0000000-150803063131195-oozie-oozi-C action ID 0000000-150803063131195-oozie-oozi-C#12
2015-08-03 06:43:44,268 INFO CoordActionNotificationXCommand:543 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[0000000-150803063131195-oozie-oozi-C#12] ENDED Coordinator Notification actionId=0000000-150803063131195-oozie-oozi-C#12
2015-08-03 06:43:44,433 WARN ParameterVerifier:546 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-150803063131195-oozie-oozi-C] ACTION[0000000-150803063131195-oozie-oozi-C#1] The application does not define formal parameters in its XML definition
...
Workflow Logs:
2015-08-03 06:43:44,672 INFO ActionStartXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus-wf] JOB[0000001-150803063131195-oozie-oozi-W] ACTION[0000001-150803063131195-oozie-oozi-W#:start:] Start action [0000001-150803063131195-oozie-oozi-W#:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2015-08-03 06:43:44,673 INFO ActionStartXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus-wf] JOB[0000001-150803063131195-oozie-oozi-W] ACTION[0000001-150803063131195-oozie-oozi-W#:start:] [***0000001-150803063131195-oozie-oozi-W#:start:***]Action status=DONE
2015-08-03 06:43:44,673 INFO ActionStartXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus-wf] JOB[0000001-150803063131195-oozie-oozi-W] ACTION[0000001-150803063131195-oozie-oozi-W#:start:] [***0000001-150803063131195-oozie-oozi-W#:start:***]Action updated in DB!
2015-08-03 06:43:45,104 INFO ActionStartXCommand:543 - SERVER[sandbox.hortonworks.com] USER[root] GROUP[-] TOKEN[] APP[camus-wf] JOB[0000001-150803063131195-oozie-oozi-W] ACTION[0000001-150803063131195-oozie-oozi-W#camus_job] Start action [0000001-150803063131195-oozie-oozi-W#camus_job] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
I want to build a web crawler using Nutch 1.9 and Solr 4.10.2.
The crawling works, but when it comes to indexing there is a problem. I've looked into it and tried many methods, but nothing seems to work. This is what I get:
Indexer: starting at 2015-03-13 20:51:08
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication
Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:114)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:176)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:186)
And when I see the log file this is what I get:
2015-03-13 20:51:08,768 INFO indexer.IndexingJob - Indexer: starting at 2015-03-13 20:51:08
2015-03-13 20:51:08,846 INFO indexer.IndexingJob - Indexer: deleting gone documents: false
2015-03-13 20:51:08,846 INFO indexer.IndexingJob - Indexer: URL filtering: false
2015-03-13 20:51:08,846 INFO indexer.IndexingJob - Indexer: URL normalizing: false
2015-03-13 20:51:09,117 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
2015-03-13 20:51:09,117 INFO indexer.IndexingJob - Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication
2015-03-13 20:51:09,121 INFO indexer.IndexerMapReduce - IndexerMapReduce: crawldb: testCrawl/crawldb
2015-03-13 20:51:09,122 INFO indexer.IndexerMapReduce - IndexerMapReduce: linkdb: testCrawl/linkdb
2015-03-13 20:51:09,122 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150311221258
2015-03-13 20:51:09,234 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150311222328
2015-03-13 20:51:09,235 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150311222727
2015-03-13 20:51:09,236 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150312085908
2015-03-13 20:51:09,282 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-03-13 20:51:09,747 INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
2015-03-13 20:51:20,904 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: content dest: content
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: title dest: title
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: host dest: host
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: segment dest: segment
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: boost dest: boost
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: digest dest: digest
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: tstamp dest: tstamp
2015-03-13 20:51:21,192 INFO solr.SolrIndexWriter - Indexing 250 documents
2015-03-13 20:51:21,192 INFO solr.SolrIndexWriter - Deleting 0 documents
2015-03-13 20:51:21,342 INFO solr.SolrIndexWriter - Indexing 250 documents
2015-03-13 20:51:21,437 WARN mapred.LocalJobRunner - job_local1194740690_0001
org.apache.solr.common.SolrException: Not Found
Not Found
request: http://127.0.0.1:8983/solr/update?wt=javabin&version=2
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430)
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.write(SolrIndexWriter.java:135)
at org.apache.nutch.indexer.IndexWriters.write(IndexWriters.java:88)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:50)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:41)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:458)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:500)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:323)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:53)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:522)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:398)
2015-03-13 20:51:21,607 ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:114)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:176)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:186)
Can anyone please help?