Yet another "Could not contact Elasticsearch at http://logstash.example.com:9200"

I have installed Logstash + Elasticsearch + Kibana on a single host and I am getting the error from the title. I have googled all the related topics, but still no luck; I'm stuck.
I will share the configs I have made:
elasticsearch.yml
cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25
Output from /var/log/elasticsearch/hive.log:
[2015-01-13 15:18:06,562][INFO ][node ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway ] [logstash-central] recovered [0] indices into cluster_state
Accessing logstash.example.com:9200 gives the ordinary output described in the ES guide:
{
  "status" : 200,
  "name" : "logstash-central",
  "cluster_name" : "hive",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
Accessing http://logstash.example.com:9200/_status? gives the following:
{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
Kibana's config.js is at its default:
elasticsearch: "http://"+window.location.hostname+":9200"
Kibana is served via nginx. Here is /etc/nginx/conf.d/nginx.conf:
server {
listen *:80 ;
server_name logstash.example.com;
location / {
root /usr/share/kibana3;
Logstash config file is /etc/logstash/conf.d/central.conf:
input {
  redis {
    host => "10.1.1.25"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "logstash.example.com"
  }
}
Redis is working and traffic passes between the master and the slave (I've checked it via tcpdump):
15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504
netstat -apnt shows the following:
tcp 0 0 10.1.1.25:6379 10.1.1.50:41617 ESTABLISHED 21112/redis-server
tcp 0 0 10.1.1.25:9300 10.1.1.25:44011 ESTABLISHED 22598/java
tcp 0 0 10.1.1.25:9200 10.1.1.35:51145 ESTABLISHED 22598/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22379/nginx
Could you please tell me how I should investigate this issue?
Thanks in advance.

The problem is likely due to the nginx setup and the fact that Kibana, while installed on your server, is running in your browser and trying to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js.
You appear to have nginx set up correctly in front of Kibana, but you'll need some additional work for Kibana to be able to access Elasticsearch.
Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/
And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ
And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
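For illustration, here is a minimal sketch of that approach. The /es prefix, the 10.1.1.25:9200 upstream and the exact lines are assumptions based on the configs quoted above, not something taken verbatim from the linked examples:

server {
  listen *:80;
  server_name logstash.example.com;

  location / {
    root /usr/share/kibana3;
  }

  # hypothetical proxy: the browser talks to nginx, nginx talks to Elasticsearch
  location /es/ {
    proxy_pass http://10.1.1.25:9200/;   # trailing slash strips the /es prefix
  }
}

and then point Kibana's config.js at that proxied path instead of port 9200:

elasticsearch: "http://"+window.location.hostname+"/es",

That way the browser only ever talks to nginx on port 80, and nginx forwards the Elasticsearch calls internally.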

You'll have to specify the protocol for Elasticsearch in the output section:
elasticsearch {
  host => "logstash.example.com"
  protocol => 'http'
}

Related

Logstash not ingesting content into Elasticsearch

I have installed elasticsearch-8.2.3, logstash-8.2.3 and kibana-8.2.3. I have configured the Logstash conf file to ingest content into Elasticsearch; Logstash runs without any error but it is not ingesting the content.
Below is the conf file:
input {
  #stdin { type => "stdin-type" }
  file {
    path => "D:/logstash-8.2.3/inspec/*.*"
    type => "file"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
filter {
  csv {
    columns => [
      "itemid","itemtitle","rlabel","ayear","rid","rsid","anotatedby","anotatetime","antype","astate","broaderlevel3","broaderlevel2","broaderlevel1","categorylabel","toppreferedlabel"
    ]
    separator => ","
    remove_field => ["type","host"]
  }
  mutate {
    split => { "antype" => ";" }
    split => { "broaderlevel3" => ";" }
    split => { "broaderlevel2" => ";" }
    split => { "broaderlevel1" => ";" }
    split => { "categorylabel" => ";" }
    split => { "toppreferedlabel" => ";" }
  }
}
output {
  stdout { }
  elasticsearch {
    hosts => ["localhost"]
    index => "iet-tv"
  }
}
I don't get any error message while running Logstash, but the content is not getting ingested into Elasticsearch.
Below is the log:
[2022-06-29T14:03:03,579][INFO ][logstash.runner ] Log4j configuration path used is: D:\logstash-8.2.3\config\log4j2.properties
[2022-06-29T14:03:03,595][WARN ][logstash.runner ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2022-06-29T14:03:03,598][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.2.3", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.15+10 on 11.0.15+10 +indy +jit [mswin32-x86_64]"}
[2022-06-29T14:03:03,600][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-06-29T14:03:03,736][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-06-29T14:03:11,340][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-06-29T14:03:12,628][INFO ][org.reflections.Reflections] Reflections took 153 ms to scan 1 urls, producing 120 keys and 395 values
[2022-06-29T14:03:15,580][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-06-29T14:03:15,662][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-06-29T14:03:16,210][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-06-29T14:03:16,532][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-06-29T14:03:16,549][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.2.3) {:es_version=>8}
[2022-06-29T14:03:16,553][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-06-29T14:03:16,627][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-06-29T14:03:16,627][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-06-29T14:03:16,632][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-06-29T14:03:16,652][INFO ][logstash.filters.csv ][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2022-06-29T14:03:16,694][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2022-06-29T14:03:16,762][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["D:/logstash-8.2.3/conf/inspec.conf"], :thread=>"#<Thread:0x48e38277 run>"}
[2022-06-29T14:03:18,017][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.25}
[2022-06-29T14:03:18,102][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-06-29T14:03:18,171][INFO ][filewatch.observingtail ][main][2c845ee5978dc5ed1bf8d0f617965d2013df9d31461210f0e7c2b799e02f6bb8] START, creating Discoverer, Watch with file and sincedb collections
[2022-06-29T14:03:18,220][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Any suggestions much appreciated.
Thanks
Dharmendra Kumar Singh
In Filebeat, ignore_older => 0 turns off age-based filtering. In a Logstash file input it tells the input to ignore any file more than zero seconds old, and since the file input sleeps between its periodic polls for new files, that can mean it ignores all files, even ones that are still being updated.
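If that is the culprit here, a minimal sketch of the input block from the question with ignore_older removed (everything else unchanged) would be:

input {
  file {
    path => "D:/logstash-8.2.3/inspec/*.*"
    type => "file"
    start_position => "beginning"
    sincedb_path => "NUL"
    # ignore_older removed: let the input pick up files regardless of age
  }
}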
In my case (Windows 10, Logstash 8.1.0), a file path with back-slashes (C:\path\to\csv\etc.CSV) caused the same issue; changing the back-slashes to forward-slashes fixed the problem.
Here is a working logstash config:
input {
  file {
    path => "C:/path/to/csv/file.csv"
    type => "file"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  csv {
    columns => [
      "WID","LID","IID","Product","QTY","TID"
    ]
    separator => ","
  }
  mutate {
    rename => {
      "WID" => "w_id"
      "LID" => "l_id"
      "IID" => "i_id"
      "Product" => "product"
      "QTY" => "quantity"
    }
    convert => {
      "w_id" => "integer"
      "l_id" => "integer"
      "i_id" => "integer"
      "quantity" => "float"
    }
    remove_field => [
      "@timestamp",
      "@version",
      "host",
      "message",
      "type",
      "path",
      "event",
      "log",
      "TID"
    ]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["https://127.0.0.1:9200"]
    index => "product_inline"
  }
  stdout { }
}

Find the Elasticsearch service endpoint

I'm on a trial of Elastic Cloud, but now I have a problem creating a pipeline from Logstash to Elastic Cloud. Here is my logstash.conf output section:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["https://<clusterid>.asia-southeast1.gcp.cloud.es.io:9243"]
    index => "testindex"
    user => "elasticdeploymentcredentials"
    password => "elasticdeploymentcredentials"
  }
}
But it always returns an error like this:
[WARN ] 2021-03-29 12:24:50.148 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
[WARN ] 2021-03-29 12:24:55.158 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io: Name or service not known"}
[WARN ] 2021-03-29 12:25:00.163 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
[WARN ] 2021-03-29 12:25:05.170 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io: Name or service not known"}
[WARN ] 2021-03-29 12:25:10.175 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
I can curl it with my credentials:
[root@localhost testconfig]# curl https://elasticdeploymentcredentials:elasticdeploymentcredentials@<clusterid>.asia-southeast1.gcp.elastic-cloud.com:9243
which returns:
{
  "name" : "name",
  "cluster_name" : "<clusterid>",
  "cluster_uuid" : "<clusteruuid>",
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "build_hash",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Am I missing something?
Instead of trying to connect to Elastic Cloud via the username/password from the deployment, try using the cloud_id/cloud_auth combination:
output {
  elasticsearch {
    hosts => ["https://<clusterid>.asia-southeast1.gcp.cloud.es.io:9243"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    cloud_id => "your cloudid from the console"
    cloud_auth => "elastic:password"
  }
}
The cloud_auth parameter is where you are actually going to use the username/password from the deployment. More information here:
https://www.elastic.co/guide/en/logstash/7.12/connecting-to-cloud.html

ElasticSearch + Logstash works, but does not display any data

I have an Oracle DB. Logstash retrieves data from Oracle and puts it into Elasticsearch. Everything looks fine, but nothing changes on the Logstash server, as if it doesn't know what to do.
logstash.conf:
input {
  jdbc {
    jdbc_driver_library => "C:\JBoss\wildfly\...\ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@3d-ztemtis-ora.iba:1521/ORCL"
    jdbc_user => "sample_user"
    jdbc_password => "12345"
    jdbc_validate_connection => true
    # once every 2 minutes
    schedule => "2 * * * *"
    statement => "SELECT * FROM table_one"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "tableone"
    document_id => "%{uid}"
  }
  stdout {
    codec => rubydebug
  }
}
Logstash logs
D:\Workspace3\ElasticLogstash\logstash-6.5.1>bin\logstash -f logstash.conf
Sending Logstash logs to D:/Workspace3/ElasticLogstash/logstash-6.5.1/logs which is now configured via log4j2.properties
[2018-11-28T00:49:30,296][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-28T00:49:30,308][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.1"}
[2018-11-28T00:49:33,174][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-28T00:49:33,455][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-28T00:49:33,471][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-28T00:49:33,625][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-11-28T00:49:33,674][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-28T00:49:33,674][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-28T00:49:33,699][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-11-28T00:49:33,718][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-28T00:49:33,745][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-28T00:49:33,940][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x64e24d22 run>"}
[2018-11-28T00:49:33,971][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-28T00:49:34,217][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
ElasticSearch log
[2018-11-28T00:36:06,492][DEBUG][o.e.a.ActionModule ] [px9stLj] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-11-28T00:36:06,683][INFO ][o.e.d.DiscoveryModule ] [px9stLj] using discovery type [zen] and host providers [settings]
[2018-11-28T00:36:07,188][INFO ][o.e.n.Node ] [px9stLj] initialized
[2018-11-28T00:36:07,188][INFO ][o.e.n.Node ] [px9stLj] starting ...
[2018-11-28T00:36:07,387][INFO ][o.e.t.TransportService ] [px9stLj] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-11-28T00:36:10,500][INFO ][o.e.c.s.MasterService ] [px9stLj] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2018-11-28T00:36:10,500][INFO ][o.e.c.s.ClusterApplierService] [px9stLj] new_master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-11-28T00:36:10,585][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [px9stLj] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-11-28T00:36:10,585][INFO ][o.e.n.Node ] [px9stLj] started
[2018-11-28T00:36:10,921][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [px9stLj] Failed to clear cache for realms [[]]
[2018-11-28T00:36:10,962][INFO ][o.e.l.LicenseService ] [px9stLj] license [852e276a-f99f-4ce3-a5d6-86c7769ae24e] mode [basic] - valid
[2018-11-28T00:36:10,970][INFO ][o.e.g.GatewayService ] [px9stLj] recovered [3] indices into cluster_state
[2018-11-28T00:36:12,366][INFO ][o.e.c.r.a.AllocationService] [px9stLj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[blog][0]] ...]).
As I said, the problem is that nothing happens and no errors are logged.
How can I tell whether it has successfully connected to Oracle?
Please see the schedule examples here:
https://discuss.elastic.co/t/how-to-run-the-schedule-every-five-minutes-in-logstash-5-0/66222
https://www.thegeekstuff.com/2011/07/cron-every-5-minutes/
Your current expression "2 * * * *" runs the query once an hour, at minute 2, not every two minutes. I think your schedule section should look like this:
# every 2 minutes
schedule => "*/2 * * * *"
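For context, here is a sketch of how that would sit in the jdbc input from the question (only the schedule line changes; the connection settings are elided here):

input {
  jdbc {
    # ... jdbc driver, connection string and credentials as in the original config ...
    jdbc_validate_connection => true
    # run the statement every 2 minutes
    schedule => "*/2 * * * *"
    statement => "SELECT * FROM table_one"
  }
}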

Logstash index error: [logstash-*] IndexNotFoundException[no such index]

I am new to ELK.
I am using:
- elasticsearch-2.1.0
- logstash-2.1.1
- kibana-4.3.0-windows
I tried to configure ELK to monitor my application logs. I followed different tutorials and different Logstash configurations, but I am getting this error when I open Kibana and it sends its request to Elasticsearch:
[logstash-*] IndexNotFoundException[no such index]
This is my logstash config:
input {
  file {
    path => "/var/logs/*.log"
    type => "syslog"
  }
}
filter {
  grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
}
output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
I tried deleting all the folders, re-installing everything, and following this tutorial step by step:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
But I still did not get any index, and I got the index error from Kibana to Elasticsearch again.
Any help?
Regards
Debug logs:
C:\Users\xxx\Desktop\LOGS\logstash-2.1.1\bin>logstash -f first-pipeline.conf --debug
io/console not supported; tty will not be manipulated
←[36mReading config file {:config_file=>"C:/Users/xxx/Desktop/LOGS/logstash-2.1.1/bin/first-pipeline.conf", :level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby
/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb", :line=>"325", :method=>"local_config"}←[0m
←[36mCompiled pipeline code:
#inputs = []
#filters = []
#outputs = []
#periodic_flushers = []
#shutdown_flushers = []
#input_file_1 = plugin("input", "file", LogStash::Util.hash_merge_many({ "path" => ("/var/logs/logstash-tutorial-dataset") }, { "start_position" => ("beginning") }))
#inputs << #input_file_1
#filter_grok_2 = plugin("filter", "grok", LogStash::Util.hash_merge_many({ "match" => {("message") => ("%{COMBINEDAPACHELOG}")} }))
#filters << #filter_grok_2
#filter_grok_2_flush = lambda do |options, &block|
#logger.debug? && #logger.debug("Flushing", :plugin => #filter_grok_2)
events = #filter_grok_2.flush(options)
return if events.nil? || events.empty?
#logger.debug? && #logger.debug("Flushing", :plugin => #filter_grok_2, :events => events)
events = #filter_geoip_3.multi_filter(events)
events.each{|e| block.call(e)}
end
if #filter_grok_2.respond_to?(:flush)
#periodic_flushers << #filter_grok_2_flush if #filter_grok_2.periodic_flush
#shutdown_flushers << #filter_grok_2_flush
end
#filter_geoip_3 = plugin("filter", "geoip", LogStash::Util.hash_merge_many({ "source" => ("clientip") }))
#filters << #filter_geoip_3
#filter_geoip_3_flush = lambda do |options, &block|
#logger.debug? && #logger.debug("Flushing", :plugin => #filter_geoip_3)
events = #filter_geoip_3.flush(options)
return if events.nil? || events.empty?
#logger.debug? && #logger.debug("Flushing", :plugin => #filter_geoip_3, :events => events)
events.each{|e| block.call(e)}
end
if #filter_geoip_3.respond_to?(:flush)
#periodic_flushers << #filter_geoip_3_flush if #filter_geoip_3.periodic_flush
#shutdown_flushers << #filter_geoip_3_flush
end
#output_elasticsearch_4 = plugin("output", "elasticsearch", LogStash::Util.hash_merge_many({ "hosts" => [("localhost")] }))
#outputs << #output_elasticsearch_4
def filter_func(event)
events = [event]
#logger.debug? && #logger.debug("filter received", :event => event.to_hash)
events = #filter_grok_2.multi_filter(events)
events = #filter_geoip_3.multi_filter(events)
events
end
def output_func(event)
#logger.debug? && #logger.debug("output received", :event => event.to_hash)
#output_elasticsearch_4.handle(event)
end {:level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb", :line=>"38", :method=>"initialize"}←[0m
←[36mPlugin not defined in namespace, checking for plugin file {:type=>"input", :name=>"file", :path=>"logstash/inputs/file", :level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/plugin.rb", :line=>"76", :method=>"lookup"}←[0m
[...]
Logstash startup completed
←[32mFlushing buffer at interval {:instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x75375e77#stopping=#<Concurrent::AtomicBoolean:0x61b12c0>, #last_flush=2015-12-29 15:45:27 +0000, #flush_thread=#<Thread:0x7008acbf run>, #max_size=500, #operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0x4985690f>, #submit_proc=#<Proc:0x3c9b0727#C:/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:55>, #flush_interval=1, #logger=#<Cabin::Channel:0x65f2b086 #subscriber_lock=#<Mutex:0x202361b4>, #data={}, #metrics=#<Cabin::Metrics:0x72e380e7 #channel=#<Cabin::Channel:0x65f2b086 ...>, #metrics={}, #metrics_lock=#<Mutex:0x3623f89e>>, #subscribers={12592=>#<Cabin::Outputs::IO:0x316290ee #lock=#<Mutex:0x3e191296>, #io=#<IO:fd 1>>}, #level=:debug>, #buffer=[], #operations_mutex=#<Mutex:0x601355b3>>", :interval=>1, :level=>:info, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsear
ch/buffer.rb", :line=>"90", :method=>"interval_flush"}←[0m
←[36m_globbed_files: /var/logs/logstash-tutorial-dataset: glob is: ["/var/logs/logstash-tutorial-dataset"] {:level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/filewatch-0.6.7/lib/filewatch/watch.rb", :line=>"190", :method=>"_globbed_files"}←[0m`
elasticsearch.log :
[2015-12-29 15:15:01,702][WARN ][bootstrap ] unable to install syscall filter: syscall filtering not supported for OS: 'Windows 8.1'
[2015-12-29 15:15:01,879][INFO ][node ] [Blue Marvel] version[2.1.1], pid[10152], build[40e2c53/2015-12-15T13:05:55Z]
[2015-12-29 15:15:01,880][INFO ][node ] [Blue Marvel] initializing ...
[2015-12-29 15:15:01,923][INFO ][plugins ] [Blue Marvel] loaded [], sites []
[2015-12-29 15:15:01,941][INFO ][env ] [Blue Marvel] using [1] data paths, mounts [[OS (C:)]], net usable_space [242.8gb], net total_space [458.4gb], spins? [unknown], types [NTFS]
[2015-12-29 15:15:03,135][INFO ][node ] [Blue Marvel] initialized
[2015-12-29 15:15:03,135][INFO ][node ] [Blue Marvel] starting ...
[2015-12-29 15:15:03,249][INFO ][transport ] [Blue Marvel] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2015-12-29 15:15:03,255][INFO ][discovery ] [Blue Marvel] elasticsearch/3DpYKTroSke4ruP21QefmA
[2015-12-29 15:15:07,287][INFO ][cluster.service ] [Blue Marvel] new_master {Blue Marvel}{3DpYKTroSke4ruP21QefmA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-12-29 15:15:07,377][INFO ][http ] [Blue Marvel] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-12-29 15:15:07,382][INFO ][node ] [Blue Marvel] started
[2015-12-29 15:15:07,399][INFO ][gateway ] [Blue Marvel] recovered [1] indices into cluster_state
[2015-12-29 16:33:00,715][INFO ][rest.suppressed ] /logstash-$DATE/_search Params: {index=logstash-$DATE, q=response=200}
[logstash-$DATE] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:566)
From my observation, it seems that you have not provided a port number in the Logstash output config. The default port for Elasticsearch is 9200 (as used by most of the tutorials out there). Try changing the output part of the Logstash config to the following and let me know if it works:
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
I fixed the problem by adding this:
input {
  file {
    path => "/path/to/logstash-tutorial.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
Now Logstash is sending the index to Elasticsearch.
This issue can be fixed with the Logstash config file change below.
input {
  file {
    path => "/path/to/logfile.log"
    start_position => "beginning"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

Can't start Elasticsearch anymore

After making a simple change to a query in Kibana, my Elasticsearch instance stopped working and I can't start it up again. I'm using ES 0.90.9 on OS X, installed via Homebrew.
Normally I would use this to start ES:
elasticsearch -f -D es.config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
This, however, throws a repeated error:
[2014-04-07 15:59:02,123][INFO ][node ] [Puck] version[0.90.9], pid[8758], build[a968646/2013-12-23T10:35:28Z]
[2014-04-07 15:59:02,128][INFO ][node ] [Puck] initializing ...
[2014-04-07 15:59:02,224][INFO ][plugins ] [Puck] loaded [mongodb-river, mapper-attachments, marvel], sites [river-mongodb, marvel]
[2014-04-07 15:59:04,553][INFO ][node ] [Puck] initialized
[2014-04-07 15:59:04,553][INFO ][node ] [Puck] starting ...
[2014-04-07 15:59:04,665][INFO ][transport ] [Puck] bound_address {inet[/127.0.0.1:9302]}, publish_address {inet[/127.0.0.1:9302]}
[2014-04-07 15:59:07,727][INFO ][cluster.service ] [Puck] new_master [Puck][gtub58OkR9SskDE0SfYobw][inet[/127.0.0.1:9302]], reason: zen-disco-join (elected_as_master)
[2014-04-07 15:59:07,778][INFO ][discovery ] [Puck] elasticsearch_dannyjoris/gtub58OkR9SskDE0SfYobw
[2014-04-07 15:59:07,795][INFO ][http ] [Puck] bound_address {inet[/127.0.0.1:9202]}, publish_address {inet[/127.0.0.1:9202]}
[2014-04-07 15:59:07,796][INFO ][node ] [Puck] started
[2014-04-07 15:59:07,813][INFO ][gateway ] [Puck] recovered [0] indices into cluster_state
[2014-04-07 15:59:09,589][ERROR][marvel.agent.exporter ] error connecting to [localhost:9200]
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:431)
at java.net.Socket.connect(Socket.java:527)
at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:424)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:538)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:214)
at sun.net.www.http.HttpClient.New(HttpClient.java:300)
at sun.net.www.http.HttpClient.New(HttpClient.java:319)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:987)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:923)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:841)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openConnection(ESExporter.java:313)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openConnection(ESExporter.java:293)
at org.elasticsearch.marvel.agent.exporter.ESExporter.checkAndUpload(ESExporter.java:428)
at org.elasticsearch.marvel.agent.exporter.ESExporter.checkAndUploadIndexTemplate(ESExporter.java:464)
at org.elasticsearch.marvel.agent.exporter.ESExporter.checkAndUploadAllResources(ESExporter.java:341)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openExportingConnection(ESExporter.java:190)
at org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:246)
at org.elasticsearch.marvel.agent.exporter.ESExporter.exportNodeStats(ESExporter.java:134)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportNodeStats(AgentService.java:274)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:174)
at java.lang.Thread.run(Thread.java:695)
[2014-04-07 15:59:09,591][ERROR][marvel.agent.exporter ] Could not connect to any configured elasticsearch instances: [localhost:9200]
Removing the plugins directory worked for me.
The location of the plugins directory can be found in your config file under the path.plugins setting. e.g.
# Path to where plugins are installed:
path.plugins: /usr/local/var/lib/elasticsearch/plugins
You can find default plugin paths here
This should not destroy your Marvel data, but I guarantee nothing.
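If you would rather disable the plugins than delete them outright, a cautious alternative (a sketch, assuming the Homebrew default path above; substitute your own path.plugins value) is to move the directory aside and restart:

# stop Elasticsearch first, then move the plugins out of the way
mv /usr/local/var/lib/elasticsearch/plugins /usr/local/var/lib/elasticsearch/plugins.disabled

# start it again; no plugins (mongodb-river, mapper-attachments, marvel) will be loaded
elasticsearch -f -D es.config=/usr/local/opt/elasticsearch/config/elasticsearch.yml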
