I am new to ELK.
I am using:
- elasticsearch-2.1.0
- logstash-2.1.1
- kibana-4.3.0-windows
I tried to configure ELK to monitor my application logs. I followed different tutorials and different Logstash configurations, but I get this error when I start Kibana and it sends the request to Elasticsearch:
[logstash-*] IndexNotFoundException[no such index]
This is my logstash config:
input {
  file {
    path => "/var/logs/*.log"
    type => "syslog"
  }
}
filter {
  grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
}
output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
I tried to delete all the folders, re-install everything, and follow this tutorial step by step:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
But I didn't get any index, and I got the index error again from Kibana to Elasticsearch.
Any help?
Regards
Debug logs:
C:\Users\xxx\Desktop\LOGS\logstash-2.1.1\bin>logstash -f first-pipeline.conf --debug
io/console not supported; tty will not be manipulated
Reading config file {:config_file=>"C:/Users/xxx/Desktop/LOGS/logstash-2.1.1/bin/first-pipeline.conf", :level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb", :line=>"325", :method=>"local_config"}
Compiled pipeline code:
@inputs = []
@filters = []
@outputs = []
@periodic_flushers = []
@shutdown_flushers = []
@input_file_1 = plugin("input", "file", LogStash::Util.hash_merge_many({ "path" => ("/var/logs/logstash-tutorial-dataset") }, { "start_position" => ("beginning") }))
@inputs << @input_file_1
@filter_grok_2 = plugin("filter", "grok", LogStash::Util.hash_merge_many({ "match" => {("message") => ("%{COMBINEDAPACHELOG}")} }))
@filters << @filter_grok_2
@filter_grok_2_flush = lambda do |options, &block|
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_grok_2)
events = @filter_grok_2.flush(options)
return if events.nil? || events.empty?
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_grok_2, :events => events)
events = @filter_geoip_3.multi_filter(events)
events.each{|e| block.call(e)}
end
if @filter_grok_2.respond_to?(:flush)
@periodic_flushers << @filter_grok_2_flush if @filter_grok_2.periodic_flush
@shutdown_flushers << @filter_grok_2_flush
end
@filter_geoip_3 = plugin("filter", "geoip", LogStash::Util.hash_merge_many({ "source" => ("clientip") }))
@filters << @filter_geoip_3
@filter_geoip_3_flush = lambda do |options, &block|
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_geoip_3)
events = @filter_geoip_3.flush(options)
return if events.nil? || events.empty?
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_geoip_3, :events => events)
events.each{|e| block.call(e)}
end
if @filter_geoip_3.respond_to?(:flush)
@periodic_flushers << @filter_geoip_3_flush if @filter_geoip_3.periodic_flush
@shutdown_flushers << @filter_geoip_3_flush
end
@output_elasticsearch_4 = plugin("output", "elasticsearch", LogStash::Util.hash_merge_many({ "hosts" => [("localhost")] }))
@outputs << @output_elasticsearch_4
def filter_func(event)
events = [event]
@logger.debug? && @logger.debug("filter received", :event => event.to_hash)
events = @filter_grok_2.multi_filter(events)
events = @filter_geoip_3.multi_filter(events)
events
end
def output_func(event)
@logger.debug? && @logger.debug("output received", :event => event.to_hash)
@output_elasticsearch_4.handle(event)
end {:level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb", :line=>"38", :method=>"initialize"}
Plugin not defined in namespace, checking for plugin file {:type=>"input", :name=>"file", :path=>"logstash/inputs/file", :level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/plugin.rb", :line=>"76", :method=>"lookup"}
[...]
Logstash startup completed
Flushing buffer at interval {:instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x75375e77 @stopping=#<Concurrent::AtomicBoolean:0x61b12c0>, @last_flush=2015-12-29 15:45:27 +0000, @flush_thread=#<Thread:0x7008acbf run>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0x4985690f>, @submit_proc=#<Proc:0x3c9b0727@C:/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:55>, @flush_interval=1, @logger=#<Cabin::Channel:0x65f2b086 @subscriber_lock=#<Mutex:0x202361b4>, @data={}, @metrics=#<Cabin::Metrics:0x72e380e7 @channel=#<Cabin::Channel:0x65f2b086 ...>, @metrics={}, @metrics_lock=#<Mutex:0x3623f89e>>, @subscribers={12592=>#<Cabin::Outputs::IO:0x316290ee @lock=#<Mutex:0x3e191296>, @io=#<IO:fd 1>>}, @level=:debug>, @buffer=[], @operations_mutex=#<Mutex:0x601355b3>>", :interval=>1, :level=>:info, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
_globbed_files: /var/logs/logstash-tutorial-dataset: glob is: ["/var/logs/logstash-tutorial-dataset"] {:level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/filewatch-0.6.7/lib/filewatch/watch.rb", :line=>"190", :method=>"_globbed_files"}
elasticsearch.log:
[2015-12-29 15:15:01,702][WARN ][bootstrap ] unable to install syscall filter: syscall filtering not supported for OS: 'Windows 8.1'
[2015-12-29 15:15:01,879][INFO ][node ] [Blue Marvel] version[2.1.1], pid[10152], build[40e2c53/2015-12-15T13:05:55Z]
[2015-12-29 15:15:01,880][INFO ][node ] [Blue Marvel] initializing ...
[2015-12-29 15:15:01,923][INFO ][plugins ] [Blue Marvel] loaded [], sites []
[2015-12-29 15:15:01,941][INFO ][env ] [Blue Marvel] using [1] data paths, mounts [[OS (C:)]], net usable_space [242.8gb], net total_space [458.4gb], spins? [unknown], types [NTFS]
[2015-12-29 15:15:03,135][INFO ][node ] [Blue Marvel] initialized
[2015-12-29 15:15:03,135][INFO ][node ] [Blue Marvel] starting ...
[2015-12-29 15:15:03,249][INFO ][transport ] [Blue Marvel] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2015-12-29 15:15:03,255][INFO ][discovery ] [Blue Marvel] elasticsearch/3DpYKTroSke4ruP21QefmA
[2015-12-29 15:15:07,287][INFO ][cluster.service ] [Blue Marvel] new_master {Blue Marvel}{3DpYKTroSke4ruP21QefmA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-12-29 15:15:07,377][INFO ][http ] [Blue Marvel] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-12-29 15:15:07,382][INFO ][node ] [Blue Marvel] started
[2015-12-29 15:15:07,399][INFO ][gateway ] [Blue Marvel] recovered [1] indices into cluster_state
[2015-12-29 16:33:00,715][INFO ][rest.suppressed ] /logstash-$DATE/_search Params: {index=logstash-$DATE, q=response=200}
[logstash-$DATE] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:566)
From my observation, it seems that you haven't provided a port number in the Logstash output config. The default port for Elasticsearch is 9200 (as used by most of the tutorials out there). Try changing the output part of your Logstash config to the following and let me know if it works:
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
I fixed the problem by adding this:
input {
  file {
    path => "/path/to/logstash-tutorial.log"
    start_position => beginning
    sincedb_path => "/dev/null"
  }
}
Now Logstash is creating the index in Elasticsearch.
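Note that on Windows (which the original setup uses) there is no /dev/null; the usual stand-in is the NUL device. A minimal sketch of the same idea there, with an example path only:
input {
  file {
    # example path only - point this at the actual dataset file
    path => "C:/Users/xxx/Desktop/LOGS/logstash-tutorial-dataset"
    start_position => "beginning"
    # forces the file to be re-read from the top on every start (NUL plays the role of /dev/null on Windows)
    sincedb_path => "NUL"
  }
}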
This issue can be fixed with the Logstash config file change below.
input {
  file {
    path => "/path/to/logfile.log"
    start_position => beginning
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
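Once Logstash has processed at least one event, you can verify that the daily index was actually created (assuming Elasticsearch is reachable on localhost:9200):
curl "http://localhost:9200/_cat/indices?v"
If a line starting with logstash- shows up, Kibana's logstash-* index pattern should resolve and the IndexNotFoundException from the question should go away.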
Related
I'm a newbie with Logstash and Elasticsearch. I wanted to sync my MongoDB data into Elasticsearch using the Logstash plugin logstash-input-mongodb.
My mongodata.conf is:
input {
  uri => 'mongodb://127.0.0.1:27017/final?ssl=true'
  placeholder_db_dir => '/opt/logstash-mongodb/'
  placeholder_db_name => 'logstash_sqlite.db'
  collection => 'twitter_stream'
  batch_size => 5000
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    index => "twitter_stream"
    hosts => ["localhost:9200"]
  }
}
When I run bin/logstash -f /etc/logstash/conf.d/mongodata.conf --path.settings /etc/logstash/, the following error is displayed:
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2020-02-28T08:48:20,246][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-02-28T08:48:20,331][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.0"}
[2020-02-28T08:48:20,883][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "{" at line 2, column 13 (byte 21) after input {\n uri ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:47:in compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:55:in compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:17:in block in compile_sources'", "org/jruby/RubyArray.java:2580:in map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:14:in compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:161:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326:in block in converge_state'"]}
[2020-02-28T08:48:21,114][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-02-28T08:48:25,969][INFO ][logstash.runner ] Logstash shut down.
Please help me, I don't have any idea about this.
Your configuration is wrong, you need to specify what type of input you are using.
Try to change your input to this one:
input {
  mongodb {
    uri => 'mongodb://127.0.0.1:27017/final?ssl=true'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'twitter_stream'
    batch_size => 5000
  }
}
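Also note that the mongodb input is a community plugin and is not bundled with Logstash, so if it is not installed yet you would additionally need something like (assuming a standard Logstash install layout):
bin/logstash-plugin install logstash-input-mongodb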
I am new to ELK and I need your help.
I would like to get some information about the CPU and memory. This information is generated every 30 minutes.
My XML file:
<?xml version="1.0" encoding="UTF-8"?>
<measData>
  <measInfo Id="SensorProcessingCounters">
    <measType p="1">SensorsProcessed</measType>
    <measValue xxxxxxxxx >
      <r p="1">81</r>
    </measValue>
  </measInfo>
</measData>
My Logstash file.conf:
input {
  file {
    path => "/home/test/Desktop/data/file.xml"
    start_position => beginning
    sincedb_path => "/dev/null"
    codec => multiline {
      pattern => "<measData>|</measData>"
      negate => true
      what => "previous"
    }
  }
}
filter {
  xml {
    store_xml => false
    source => "message"
    xpath =>
      ["//measInfo[@measInfoId="SensorProcessingCounters"]/measValue/r[@p='1']/text()", "SensorProcessingCounters"
      ]
  }
  mutate {
    convert => {
      "SensorProcessingCounters" => "float"
    }
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["localhost:9200"]
    index => "stock"
  }
  stdout {}
}
Error message:
[2018-07-12T11:16:19,253][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-07-12T11:16:19,973][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.1"}
[2018-07-12T11:16:20,649][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, ,, ] at line 20, column 27 (byte 432) after filter\r\n{\r\nxml {\r\nstore_xml => false\r\nsource => \"message\"\r\nxpath =>\r\n[\"//measInfo[@measInfoId=\"", :backtrace=>["/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/pipeline.rb:49:in `initialize'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/pipeline.rb:167:in `initialize'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/home/test/Desktop/logstash-6.3.1/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}
[2018-07-12T11:16:21,024][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Thank you
For this line:
["//measInfo[#measInfoId="SensorProcessingCounters"]/measValue/r[#p='1']/text()",
"SensorProcessingCounters"
I guess you should use single quotes:
["//measInfo[#measInfoId='SensorProcessingCounters']/measValue/r[#p='1']/text()",
"SensorProcessingCounters"
because the quotes are mismatched.
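Putting it together, the xpath setting might look like the sketch below (same field names as in the question, only the inner quotes changed; note also that the sample XML above uses an attribute called Id, so the predicate may need adjusting to match it):
filter {
  xml {
    store_xml => false
    source => "message"
    xpath => [
      "//measInfo[@measInfoId='SensorProcessingCounters']/measValue/r[@p='1']/text()", "SensorProcessingCounters"
    ]
  }
}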
I have configured an ELK server and all its components are on the same server. I'm trying to pick up my logstash-syslog.conf from a central point based on NFS, so I don't need to install Logstash on each client.
1) My logstash-syslog.conf file:
input {
  file {
    path => [ "/var/log/messages" ]
    type => "test"
  }
}
filter {
  if [type] == "test" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  if "automount" in [message] {
    elasticsearch {
      hosts => "oida-elk:9200"
      #index => "newmsglog-%{+YYYY.MM.dd}"
      index => "%{type}-%{+YYYY.MM.dd}"
      document_type => "msg"
    }
    stdout {}
  }
}
2) When I run it on the clients to get the data, it starts the thread and just gets stuck there.
[Myclient1 =~] # $ /home/data/logstash-6.0.0/bin/logstash -f /home/data/logstash-6.0.0/conf.d/ --path.data=/tmp/klm
3) After running the above command, it shows the logs below and then does not proceed anywhere...
[2018-03-05T21:20:51,014][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://oida-elk:9200/"}
[2018-03-05T21:20:51,078][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-03-05T21:20:51,085][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-03-05T21:20:51,101][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//oida-elk:9200"]}
[2018-03-05T21:20:51,297][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x2ea3b180#/home/data/logstash-6.0.0/logstash-core/lib/logstash/pipeline.rb:290 run>"}
[2018-03-05T21:20:51,746][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"main"}
[2018-03-05T21:20:51,800][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
Please help / suggest anything ..
Can you run the Logstash config validation using the -t flag?
Also, you have to pass the full path to the file when using the -f flag.
Can you tail the file to see if newer entries are coming in?
I would also add that for reading files into Logstash, Filebeat is a better choice.
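To illustrate the first two suggestions (the config file name is an assumption based on the question; -t only validates the configuration and exits without starting the pipeline):
/home/data/logstash-6.0.0/bin/logstash -f /home/data/logstash-6.0.0/conf.d/logstash-syslog.conf -t
tail -f /var/log/messages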
This config is used to send the slowlog of Elasticsearch 5.1.1 to Logstash 5.1.1 as an input:
input {
  file {
    path => "C:\Users\571952\Downloads\elasticsearch-5.1.1\elasticsearch-5.1.1\logs\elasticsearch_index_search_slowlog"
    start_position => "beginning"
  }
}
filter {
  grok { # parses the common bits
    match => [ "message", "[%{URIHOST}:%{ISO8601_SECOND}][%{LOGLEVEL:log_level}]
[%{DATA:es_slowquery_type}]\s*[%{DATA:es_host}]\s*[%{DATA:es_index}]\s*[%{DATA:es_shard}]\s*took[%{DATA:es_duration}],\s*took_millis[%{DATA:es_duration_ms:float}],\s*types[%{DATA:es_types}],\s*stats[%{DATA:es_stats}],\s*search_type[%{DATA:es_search_type}],\s*total_shards[%{DATA:es_total_shards:float}],\s*source[%{GREEDYDATA:es_source}],\s*extra_source[%{GREEDYDATA:es_extra_source}]"]
  }
  mutate {
    gsub => [
      "source_body", "], extra_source[$", ""
    ]
  }
}
output {
  file {
    path => "C:\Users\571952\Desktop\logstash-5.1.1\just_queries"
    codec => "json_lines"
  }
}
When I ran this config, it showed the following error in the command prompt:
[2017-01-04T18:30:32,032][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<RegexpError: premature end of char-class: /], extra_source[$/>, :backtrace=>["org/jruby/RubyRegexp.java:1424:in `initialize'", "C:/Users/571952/Desktop/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-filter-mutate-3.1.3/lib/logstash/filters/mutate.rb:196:in `register'", "org/jruby/RubyArray.java:1653:in `each_slice'", "C:/Users/571952/Desktop/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-filter-mutate-3.1.3/lib/logstash/filters/mutate.rb:184:in `register'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:230:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:230:in `start_workers'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:183:in `run'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/agent.rb:292:in `start_pipeline'"]}
[2017-01-04T18:30:32,141][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-01-04T18:30:35,036][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
Can anyone help me in solving this problem?
This is the content of my slowlog:
[2016-12-28T15:53:21,341][DEBUG][index.search.slowlog.query] [vVhZxH7] [sw][0] took[184.7micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{
"ext" : { }
}],
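For what it's worth, the "premature end of char-class" error points at the gsub pattern: mutate's gsub treats the pattern as a regular expression, so the literal square brackets most likely need to be escaped, for example (untested sketch, field name kept from the question):
mutate {
  gsub => [
    # escape [ and ] so they are matched literally instead of opening a character class
    "source_body", "\], extra_source\[$", ""
  ]
}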
I have installed Logstash + Elasticsearch + Kibana on one host and received the error from the title. I have googled all over the related topics, still no luck, and I'm stuck.
I will share the configs I have made:
elasticsearch.yml
cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25
Output from /var/log/elasticsearch/hive.log:
[2015-01-13 15:18:06,562][INFO ][node ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway ] [logstash-central] recovered [0] indices into cluster_state
Accessing logstash.example.com:9200 gives the ordinary output, like in the ES guide:
{
"status" : 200,
"name" : "logstash-central",
"cluster_name" : "hive",
"version" : {
"number" : "1.4.2",
"build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
"build_timestamp" : "2014-12-16T14:11:12Z",
"build_snapshot" : false,
"lucene_version" : "4.10.2"
},
"tagline" : "You Know, for Search"
}
Accessing http://logstash.example.com:9200/_status? gives the following:
{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
Kibana's config.js is the default:
elasticsearch: "http://"+window.location.hostname+":9200"
Kibana is used via nginx. Here is /etc/nginx/conf.d/nginx.conf:
server {
  listen *:80 ;
  server_name logstash.example.com;
  location / {
    root /usr/share/kibana3;
Logstash config file is /etc/logstash/conf.d/central.conf:
input {
  redis {
    host => "10.1.1.25"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "logstash.example.com"
  }
}
Redis is working and the traffic passes between the master and the slave (I've checked it via tcpdump).
15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504
netstat -apnt shows the following:
tcp 0 0 10.1.1.25:6379 10.1.1.50:41617 ESTABLISHED 21112/redis-server
tcp 0 0 10.1.1.25:9300 10.1.1.25:44011 ESTABLISHED 22598/java
tcp 0 0 10.1.1.25:9200 10.1.1.35:51145 ESTABLISHED 22598/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22379/nginx
Could you please tell me which way I should investigate the issue?
Thanks in advance.
The problem is likely due to the nginx setup and the fact that Kibana, while installed on your server, is running in your browser and trying to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js.
You have what appears to be a correct proxy set up for nginx for Kibana but you'll need some additional work to have kibana be able to access Elasticsearch.
Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/
And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ
And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
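A minimal version of that idea (the /es/ location name and the backend address are assumptions based on the Elasticsearch log above) is to let nginx forward a sub-path to Elasticsearch and point config.js at it:
location /es/ {
    proxy_pass http://10.1.1.25:9200/;
}
and in Kibana's config.js:
elasticsearch: "http://"+window.location.hostname+"/es"
That way the browser only ever talks to nginx on port 80, and nginx relays the Elasticsearch traffic.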
You'll have to specify the protocol for Elasticsearch in the output section:
elasticsearch {
  host => "logstash.example.com"
  protocol => 'http'
}