JSON parser exception from Logstash - Elasticsearch

I am trying to insert data into Elasticsearch from SQL using Logstash. When I run it with the config file below, I get the following exception and Logstash runs indefinitely; only by pressing Ctrl+C can I stop it.
D:\Logstash\Logstash\logstash-5.1.1\bin>logstash -f D:\Logstash\Logstash\logstash-5.1.1\bin\Crash_data.conf
Using JAVA_HOME=C:\Program Files\Java\jre1.8.0_111 retrieved from C:\ProgramData\Oracle\java\javapath\java.exe
Could not find log4j2 configuration at path /Logstash/Logstash/logstash-5.1.1/config/log4j2.properties. Using default config which logs to console
01:20:19.777 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://10.64.103.61:5601"]}}
01:20:19.783 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x1dccc4ca URL:http://10.64.103.61:5601>, :healthcheck_path=>"/"}
01:20:20.141 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0x1dccc4ca URL:http://10.64.103.61:5601>}
01:20:20.582 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
01:20:20.645 [[main]<jdbc] INFO logstash.inputs.jdbc - (1.141000s) SELECT * from [Device_crash_Reporting].[dbo].[Device_Crash_Data]
01:20:21.696 [[main]-pipeline-manager] ERROR logstash.outputs.elasticsearch - Failed to install template. {:message=>"Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')\n at [Source: [B#7c825cdd; line: 1, column: 2]", :class=>"LogStash::Json::ParserError"}
01:20:21.699 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["10.64.103.61:5601"]}
01:20:21.704 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
01:20:21.765 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
01:20:23.108 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
01:20:23.954 [[main]>worker1] ERROR logstash.outputs.elasticsearch - An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"can't convert nil into Array", :error_class=>"TypeError", :backtrace=>["org/jruby/RubyArray.java:1462:in `concat'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:94:in `join_bulk_responses'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:852:in `inject'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:92:in `join_bulk_responses'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:88:in `bulk'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:186:in `safe_bulk'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:109:in `submit'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:76:in `retrying_submit'", "D:/Logstash/Logstash/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:27:in `multi_receive'", "D:/Logstash/Logstash/logstash-5.1.1/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:12:in `multi_receive'", "D:/Logstash/Logstash/logstash-5.1.1/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'", "D:/Logstash/Logstash/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:331:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "D:/Logstash/Logstash/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:330:in `output_batch'", "D:/Logstash/Logstash/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:288:in `worker_loop'", "D:/Logstash/Logstash/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:258:in `start_workers'"]}
01:20:24.338 [[main]>worker0] ERROR logstash.outputs.elasticsearch - [same "can't convert nil into Array" error and backtrace as above]
01:20:26.503 [[main]>worker1] ERROR logstash.outputs.elasticsearch - [same error and backtrace as above]
01:20:26.847 [SIGINT handler] WARN logstash.runner - SIGINT received. Shutting down the agent.
^CTerminate batch job (Y/N)? 01:20:26.867 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
01:20:27.045 [[main]>worker0] ERROR logstash.outputs.elasticsearch - [same error and backtrace as above]
^C01:20:30.230 [SIGINT handler] FATAL logstash.runner - SIGINT received. Terminating immediately..
D:\Logstash\Logstash\logstash-5.1.1\bin>
Please help me with this issue. The config file is as follows:
input {
  jdbc {
    jdbc_driver_library => "C:\SQL JDBC Driver\sqljdbc_6.0\enu\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://10.14.23.56:54287;instance:dev01;DatabaseName:Crash_Report;integratedSecurity=true"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "Select * from [Device_Crash_Reporting].[dbo].[Device_crash_Data]"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["10.64.103.61:5601"]
    index => "Crash_Data1"
    document_type => "Data"
  }
  stdout { codec => json_lines }
}
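For reference: the '<' (code 60) in the "Failed to install template" error means the output plugin received HTML instead of JSON, and port 5601 is Kibana's default port, while Elasticsearch's REST API listens on 9200 by default. A minimal sketch of the output block under that assumption (note that Elasticsearch also requires lowercase index names, so Crash_Data1 would be rejected even once the port is fixed):

output {
  elasticsearch {
    # assumption: Elasticsearch serves its REST API on the default port 9200
    # (5601 is Kibana's default port and returns HTML, not JSON)
    hosts => ["10.64.103.61:9200"]
    # Elasticsearch index names must be lowercase
    index => "crash_data1"
    document_type => "data"
  }
  stdout { codec => json_lines }
}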

Related

Create regex pattern for fluentd

I'm using a Fluentd configuration with the regexp parser to parse logs.
These are my original log lines in Fluentd:
2022-09-22 18:15:09,633 [springHikariCP housekeeper ] DEBUG HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level.
2022-09-22 18:15:14,968 [ringHikariCP connection closer] DEBUG PoolBase - springHikariCP - Closing connection com.mysql.cj.jdbc.ConnectionImpl#7f535ea4: (connection has passed maxLifetime)
I want to turn the above logs into JSON like this, using a regexp:
{
  "logtime": "2022-09-22 18:15:09,633",
  "Logger Name": "[springHikariCP housekeeper ]",
  "Log level": "DEBUG",
  "message": "HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level"
}
{
  "logtime": "2022-09-22 18:15:14,968",
  "Logger Name": "[ringHikariCP connection closer]",
  "Log level": "DEBUG",
  "message": "PoolBase - springHikariCP - Closing connection com.mysql.cj.jdbc.ConnectionImpl#7f535ea4: (connection has passed maxLifetime)"
}
Can someone with Ruby experience help me create a Ruby regex for the logs above?
The regex should be straightforward, e.g.
(?'logtime'\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})\s(?'logname'\[[^\]]*\])\s(?'loglevel'\w+)\s(?'message'.*)
Online Test
Then reassemble the parts, either with a substitution (gsub, see the example below) or by building a new string from the captured parts (match):
re = /(?'logtime'\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})\s(?'logname'\[[^\]]*\])\s(?'loglevel'\w+)\s(?'message'.*?$)/m
str = '2022-09-22 18:15:09,633 [springHikariCP housekeeper ] DEBUG HikariPool - springHikariCP - Fill pool skipped, pool is at sufficient level.
2022-09-22 18:15:14,968 [ringHikariCP connection closer] DEBUG PoolBase - springHikariCP - Closing connection com.mysql.cj.jdbc.ConnectionImpl#7f535ea4: (connection has passed maxLifetime)'
str.gsub(re) do |match|
  puts "{
\"logtime\": \"#{$~[:logtime]}\",
\"Logger Name\": \"#{$~[:logname]}\",
\"Log level\": \"#{$~[:loglevel]}\",
\"message\": \"#{$~[:message]}\"
}"
end
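If valid JSON output matters, note that the hand-built string above won't escape quotes or backslashes inside the message. A small sketch using Ruby's stdlib json, assuming the same re and str as above:

require 'json'

str.scan(re) do
  m = Regexp.last_match  # scan sets the last-match data for each hit
  puts JSON.generate(
    "logtime"     => m[:logtime],
    "Logger Name" => m[:logname],
    "Log level"   => m[:loglevel],
    "message"     => m[:message]
  )
end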

Fatal error received while running the crawler

I want to index binary files (PDF, Word, text) into Elasticsearch. I used FSCrawler for that, and I'm getting the following error while running it.
I have followed this guide: https://fscrawler.readthedocs.io/en/latest/user/getting_started.html
Config File - YAML
---
name: "hello"
fs:
  url: "/home/gowtham/Documents"
  update_rate: "15m"
  excludes:
  - "*/~*"
  json_support: false
  filename_as_id: false
  add_filesize: true
  remove_deleted: true
  add_as_inner_object: false
  store_source: false
  index_content: true
  attributes_support: false
  raw_metadata: false
  xml_support: false
  index_folders: true
  lang_detect: false
  continue_on_error: false
  ocr:
    language: "eng"
    enabled: true
    pdf_strategy: "ocr_and_text"
elasticsearch:
  nodes:
  - url: "http://10.0.2.2:9200"
  bulk_size: 100
  flush_interval: "5s"
  byte_size: "10mb"
  index: "hello"
The location /home/gowtham/Documents contains a PDF file.
I got the following error:
12:46:22,477 WARN [f.p.e.c.f.c.v.ElasticsearchClientV6] failed to create index [hello], disabling crawler...
12:46:22,478 FATAL [f.p.e.c.f.c.FsCrawlerCli] Fatal error received while running the crawler: [Elasticsearch exception [type=illegal_argument_exception, reason=request [/hello] contains unrecognized parameter: [include_type_name]]]
12:46:22,478 DEBUG [f.p.e.c.f.c.FsCrawlerCli] error caught
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=illegal_argument_exception, reason=request [/hello] contains unrecognized parameter: [include_type_name]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2053) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:2030) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1777) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.IndicesClient.create(IndicesClient.java:191) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.createIndex(ElasticsearchClientV6.java:240) ~[fscrawler-elasticsearch-client-v6-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.createIndex(ElasticsearchClientV6.java:603) ~[fscrawler-elasticsearch-client-v6-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.createIndices(ElasticsearchClientV6.java:436) ~[fscrawler-elasticsearch-client-v6-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.FsCrawlerImpl.start(FsCrawlerImpl.java:161) ~[fscrawler-core-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.cli.FsCrawlerCli.main(FsCrawlerCli.java:270) [fscrawler-cli-2.7-SNAPSHOT.jar:?]
Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://10.0.2.2:9200], URI [/hello?master_timeout=30s&include_type_name=true&timeout=30s], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/hello] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/hello] contains unrecognized parameter: [include_type_name]"},"status":400}
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:936) ~[elasticsearch-rest-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233) ~[elasticsearch-rest-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.IndicesClient.create(IndicesClient.java:191) ~[elasticsearch-rest-high-level-client-6.7.1.jar:6.7.1]
at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.createIndex(ElasticsearchClientV6.java:240) ~[fscrawler-elasticsearch-client-v6-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.createIndex(ElasticsearchClientV6.java:603) ~[fscrawler-elasticsearch-client-v6-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.createIndices(ElasticsearchClientV6.java:436) ~[fscrawler-elasticsearch-client-v6-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.FsCrawlerImpl.start(FsCrawlerImpl.java:161) ~[fscrawler-core-2.7-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.cli.FsCrawlerCli.main(FsCrawlerCli.java:270) [fscrawler-cli-2.7-SNAPSHOT.jar:?]
Caused by: org.elasticsearch.client.ResponseException: method [PUT], host [http://10.0.2.2:9200], URI [/hello?master_timeout=30s&include_type_name=true&timeout=30s], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/hello] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/hello] contains unrecognized parameter: [include_type_name]"},"status":400}
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:552) ~[elasticsearch-rest-client-6.7.1.jar:6.7.1]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:537) ~[elasticsearch-rest-client-6.7.1.jar:6.7.1]
at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119) ~[httpcore-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177) ~[httpasyncclient-4.1.2.jar:4.1.2]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81) ~[httpasyncclient-4.1.2.jar:4.1.2]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39) ~[httpasyncclient-4.1.2.jar:4.1.2]
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) ~[httpcore-nio-4.4.5.jar:4.4.5]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588) ~[httpcore-nio-4.4.5.jar:4.4.5]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_201]
12:46:22,484 DEBUG [f.p.e.c.f.FsCrawlerImpl] Closing FS crawler [hello]
12:46:22,485 DEBUG [f.p.e.c.f.c.v.ElasticsearchClientV6] Closing Elasticsearch client manager
12:46:22,486 DEBUG [f.p.e.c.f.FsCrawlerImpl] ES Client Manager stopped
12:46:22,487 INFO [f.p.e.c.f.FsCrawlerImpl] FS crawler [hello] stopped
Kindly help me to solve this issue.
Thanks in advance.
I had been using Elasticsearch 6.4; switching to 6.7 solved this issue.
Credits to #dadoonet.
https://github.com/dadoonet/fscrawler/issues/713
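A quick way to spot this kind of client/server mismatch is to ask the node which version it runs (a sketch; it reuses the node URL from the config above). The root endpoint reports a version.number field, and the include_type_name parameter is only recognized from Elasticsearch 6.7 on:

curl http://10.0.2.2:9200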

Logstash: NameError: undefined local variable or method `dotfile' for #<AwesomePrint::Inspector:0x77011d93>>

I'm migrating a Logstash instance to EC2.
It's running Amazon Linux.
Via the command tail -f /var/log/logstash/logstash-plain.log
I'm seeing the following log cycling/repeating:
[2017-12-20T15:30:24,742][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2017-12-20T15:30:24,745][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2017-12-20T15:30:27,342][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://search-ivendas-sz2q3f573vro6xlncwjnvzbf2m.us-east-1.es.amazonaws.com:443/]}}
[2017-12-20T15:30:27,343][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://search-ivendas-sz2q3f573vro6xlncwjnvzbf2m.us-east-1.es.amazonaws.com:443/, :path=>"/"}
[2017-12-20T15:30:28,040][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://search-ivendas-sz2q3f573vro6xlncwjnvzbf2m.us-east-1.es.amazonaws.com:443/"}
[2017-12-20T15:30:28,175][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-12-20T15:30:28,185][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date", "include_in_all"=>false}, "#version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-12-20T15:30:28,201][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//search-ivendas-sz2q3f573vro6xlncwjnvzbf2m.us-east-1.es.amazonaws.com:443"]}
[2017-12-20T15:30:28,385][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-12-20T15:30:29,298][INFO ][logstash.pipeline ] Pipeline main started
[2017-12-20T15:30:29,502][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-12-20T15:30:29,979][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NameError: undefined local variable or method `dotfile' for #<AwesomePrint::Inspector:0x18bafa48>>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/awesome_print-1.8.0/lib/awesome_print/inspector.rb:163:in `merge_custom_defaults!'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/awesome_print-1.8.0/lib/awesome_print/inspector.rb:50:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/awesome_print-1.8.0/lib/awesome_print/core_ext/kernel.rb:9:in `ai'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-rubydebug-3.0.5/lib/logstash/codecs/rubydebug.rb:39:in `encode_default'", "org/jruby/RubyMethod.java:120:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-rubydebug-3.0.5/lib/logstash/codecs/rubydebug.rb:35:in `encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/base.rb:50:in `multi_encode'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/base.rb:50:in `multi_encode'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:90:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/single.rb:15:in `multi_receive'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/single.rb:14:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:434:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:433:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:381:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342:in `start_workers'"]}
I did install the missing plugins; before that I was getting other errors.
Is there some way to get more details about the problem?
What am I missing?
This is an issue with the awesome_print gem used by the rubydebug codec. Set the HOME environment variable (export HOME=<path_to_aprc_file>), which awesome_print uses to locate the .aprc configuration it needs. Refer to this to persist the env variable.
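For example, for a packaged Logstash started via init scripts on an RPM-based distro such as Amazon Linux, one place to persist it is the service's environment file (a sketch; the exact path and the chosen HOME value are assumptions, adjust them to your install):

# /etc/sysconfig/logstash (assumed env file for the Logstash service)
HOME=/usr/share/logstash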

Graylog2 - Startup fail. Address already in use

I am trying to install Graylog2. I have installed OpenJDK 7, and I have also installed Elasticsearch and MongoDB via apt on Ubuntu 14.04.
I am new to both Graylog and Elasticsearch; I just want a trial installation to try them out. I did search for similar questions and tried their suggestions, but none of them worked in my case.
I have followed the installation instructions on graylog.org, but when I try to start the Graylog2 server I get the following error:
2015-02-12 03:19:36,216 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2015-02-12 03:19:36,222 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2015-02-12 03:19:36,225 INFO : org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
2015-02-12 03:19:36,229 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThroughputCounterManagerThread] periodical in [0s], polling every [1s].
2015-02-12 03:19:36,280 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.DeadLetterThread] periodical, running forever.
2015-02-12 03:19:36,295 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [0s], polling every [20s].
2015-02-12 03:19:36,299 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.InputCacheWorkerThread] periodical, running forever.
2015-02-12 03:19:36,334 DEBUG: org.graylog2.periodical.ClusterHealthCheckThread - No input running in cluster!
2015-02-12 03:19:36,368 DEBUG: org.graylog2.caches.DiskJournalCache - Committing output-cache (entries 0)
2015-02-12 03:19:36,383 DEBUG: org.graylog2.caches.DiskJournalCache - Committing input-cache (entries 0)
2015-02-12 03:19:36,885 ERROR: com.google.common.util.concurrent.ServiceManager - Service IndexerSetupService [FAILED] has failed in the STARTING state.
org.elasticsearch.transport.BindTransportException: Failed to bind to [9300]
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:396)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:90)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.node.internal.InternalNode.start(InternalNode.java:242)
at org.graylog2.initializers.IndexerSetupService.startUp(IndexerSetupService.java:101)
at com.google.common.util.concurrent.AbstractIdleService$2$1.run(AbstractIdleService.java:54)
at com.google.common.util.concurrent.Callables$3.run(Callables.java:95)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.common.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:9300
at org.elasticsearch.common.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.elasticsearch.transport.netty.NettyTransport$3.onPortNumber(NettyTransport.java:387)
at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:58)
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:383)
... 8 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Elasticsearch is showing the following status:
{
  "cluster_name" : "graylog2",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
The following are the changes I made to elasticsearch.yml:
cluster.name: graylog2
network.bind_host: 127.0.0.1
network.host: 127.0.0.1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1", MYSYS IP]
and to graylog2.conf:
is_master = true
password_secret = changed
root_password_sha2 = changed
elasticsearch_max_docs_per_index = 20000000
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = IP_ARR:9300
mongodb_useauth = false
I tried killing the process on port 9300 and starting Graylog again, but then I got the following error:
2015-02-12 04:01:24,976 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
2015-02-12 04:01:25,227 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/LGkZJDz1SoeENKj6Rr0e8w
2015-02-12 04:01:25,252 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] processing [update local node]: execute
2015-02-12 04:01:25,253 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] cluster state updated, version [0], source [update local node]
2015-02-12 04:01:25,259 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] set local cluster state to version 0
2015-02-12 04:01:25,259 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] processing [update local node]: done applying updated cluster_state (version: 0)
2015-02-12 04:01:25,325 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x82f30fa7]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
.......
2015-02-12 04:01:28,536 DEBUG: org.elasticsearch.action.admin.cluster.health - [graylog2-server] no known master node, scheduling a retry
2015-02-12 04:01:28,564 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] disconnected from [[graylog2-server][LGkZJDz1SoeENKj6Rr0e8w][ubuntu-greylog-9945][inet[/127.0.0.1:9300]]{client=true, data=false, master=false}]
2015-02-12 04:01:28,573 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false]) {none}
2015-02-12 04:01:28,590 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0xe27feaff]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
Can you please point out what I am doing wrong here and what I am missing?
If ES and Graylog2 are running on the same server, try removing or commenting this in elasticsearch.yml:
#transport.tcp.port: 9300
and adding or uncommenting this in graylog2.conf:
elasticsearch_transport_tcp_port = 9350
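The underlying conflict: the stack trace shows Graylog2's IndexerSetupService starting an embedded Elasticsearch node, which by default tries to bind transport port 9300, the same port the standalone Elasticsearch node already holds. To see which process currently owns the port (a sketch; the flags assume Linux):

sudo netstat -tlnp | grep 9300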

JMeter assertion failure

I am new to JMeter and its assertion concepts. I am encountering this error message when executing a JMX file containing assertions in JMeter:
Assertion error: false
Assertion failure: true
Assertion Failure Message: Test Failed: Variable(search result) not to equal /
received: NOT FOUND [[[[]]]]
comparison: NOT FOUND [[[[]]]]
The script is executed this way:
$java -jar ./apache-jmeter-2.10/bin/ApacheJMeter.jar -t ./jmeter-master/test.jmx -Jhost=myhost.com -Joutput_suffix=localtest
I have attempted to drop the contents of the database table in MySQL, repopulate it, and re-execute the JMX file. However, it still fails with the same error message above.
The jmeter.log contains only the following:
2013/11/27 05:58:52 ERROR - jmeter.threads.JMeterThread: Test failed! java.lang.OutOfMemoryError
at java.lang.ClassLoader.defineClassImpl(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:284)
at org.mozilla.javascript.DefiningClassLoader.defineClass(DefiningClassLoader.java:27)
at org.mozilla.javascript.optimizer.Codegen.defineClass(Codegen.java:130)
at org.mozilla.javascript.optimizer.Codegen.createScriptObject(Codegen.java:85)
at org.mozilla.javascript.Context.compileImpl(Context.java:2394)
at org.mozilla.javascript.Context.compileString(Context.java:1335)
at org.mozilla.javascript.Context.compileString(Context.java:1324)
at org.mozilla.javascript.Context.evaluateString(Context.java:1076)
at org.apache.jmeter.control.IfController.evaluateCondition(IfController.java:110)
at org.apache.jmeter.control.IfController.next(IfController.java:167)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:214)
at org.apache.jmeter.control.GenericController.next(GenericController.java:174)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:223)
at org.apache.jmeter.control.GenericController.next(GenericController.java:174)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:214)
at org.apache.jmeter.control.GenericController.reInitializeSubController(GenericController.java:274)
at org.apache.jmeter.control.GenericController.reInitializeSubController(GenericController.java:275)
at org.apache.jmeter.control.IfController.next(IfController.java:178)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:214)
at org.apache.jmeter.control.GenericController.next(GenericController.java:174)
at org.apache.jmeter.control.LoopController.next(LoopController.java:118)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:223)
at org.apache.jmeter.control.GenericController.next(GenericController.java:174)
at org.apache.jmeter.control.LoopController.next(LoopController.java:118)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:223)
at org.apache.jmeter.control.GenericController.next(GenericController.java:174)
at org.apache.jmeter.control.LoopController.next(LoopController.java:118)
at org.apache.jmeter.control.GenericController.nextIsAController(GenericController.java:223)
at org.apache.jmeter.control.GenericController.next(GenericController.java:174)
at org.apache.jmeter.control.LoopController.next(LoopController.java:118)
at org.apache.jmeter.threads.AbstractThreadGroup.next(AbstractThreadGroup.java:88)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:255)
at java.lang.Thread.run(Thread.java:769)
This is the message logged in the JTL file:
/</failureMessage>
</assertionResult>
</httpSample>
<httpSample t="436" lt="406" ts="1385112442588" s="true" lb="Homepage:Home" rc="200" rm="OK" tn="Thread Group 1-4" dt="text" by="238401" sc="1" ec="0" ng="6" na="6"/>
<httpSample t="111" lt="34" ts="1385112445679" s="false" lb="Search:Leads" rc="200" rm="OK" tn="Thread Group 1-2" dt="text" by="15237" sc="1" ec="1" ng="6" na="6">
<assertionResult>
<name>Check for found lead</name>
<failure>true</failure>
<error>false</error>
<failureMessage>Test failed: variable(searchResult) expected not to equal /
****** received : NOT_FOUND[[[]]]
****** comparison: NOT_FOUND[[[]]]
/</failureMessage>
</assertionResult>
</httpSample>
<httpSample t="138" lt="124" ts="1385112448413" s="false" lb="Search:Leads" rc="200" rm="OK" tn="Thread Group 1-4" dt="text" by="182785" sc="1" ec="1" ng="6" na="6">
<assertionResult>
<name>Check for found lead</name>
<failure>true</failure>
<error>false</error>
<failureMessage>Test failed: variable(searchResult) expected not to equal /
****** received : NOT_FOUND[[[]]]
****** comparison: NOT_FOUND[[[]]]
Here's the segment of the JMX file that was produced and executed:
<ResultCollector guiclass="TableVisualizer" testclass="ResultCollector" testname="Result Table" enabled="true">
  <boolProp name="ResultCollector.error_logging">false</boolProp>
  <objProp>
    <name>saveConfig</name>
    <value class="SampleSaveConfiguration">
      <time>true</time>
      <latency>true</latency>
      <timestamp>true</timestamp>
      <success>true</success>
      <label>true</label>
      <code>true</code>
      <message>true</message>
      <threadName>true</threadName>
      <dataType>true</dataType>
      <encoding>false</encoding>
      <assertions>true</assertions>
      <subresults>false</subresults>
      <responseData>false</responseData>
      <samplerData>false</samplerData>
      <xml>true</xml>
      <fieldNames>false</fieldNames>
      <responseHeaders>false</responseHeaders>
      <requestHeaders>false</requestHeaders>
      <responseDataOnError>false</responseDataOnError>
      <saveAssertionResultsFailureMessage>false</saveAssertionResultsFailureMessage>
      <assertionsResultsToSave>0</assertionsResultsToSave>
      <bytes>true</bytes>
      <threadCounts>true</threadCounts>
      <sampleCount>true</sampleCount>
    </value>
  </objProp>
  <stringProp name="filename">jmeter_output_${__P(output_suffix,generic)}.xml</stringProp>
</ResultCollector>
Can anyone please provide pointers on troubleshooting this error?
Thank you so much,
Ari.
The log says OutOfMemoryError.
You could first try increasing the memory, e.g.:
$java -Xms256m -Xmx512m -jar ./apache-jmeter-2.10/bin/ApacheJMeter.jar -t ./jmeter-master/test.jmx -Jhost=myhost.com -Joutput_suffix=localtest
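If you start JMeter through its bundled startup script instead of the jar, the same heap settings can be passed via the JVM_ARGS environment variable, which bin/jmeter and bin/jmeter.bat honor (a sketch reusing the paths from the question; -n runs non-GUI):

JVM_ARGS="-Xms256m -Xmx512m" ./apache-jmeter-2.10/bin/jmeter -n -t ./jmeter-master/test.jmx -Jhost=myhost.com -Joutput_suffix=localtest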
