Can't start Elasticsearch anymore - macOS

After making a simple change to a query in Kibana, my Elasticsearch instance stopped working and I can't start it up again. I'm using ES 0.90.9 on OS X, installed via Homebrew.
Normally I would use this to start ES:
elasticsearch -f -D es.config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
This, however, throws a repeated error:
[2014-04-07 15:59:02,123][INFO ][node ] [Puck] version[0.90.9], pid[8758], build[a968646/2013-12-23T10:35:28Z]
[2014-04-07 15:59:02,128][INFO ][node ] [Puck] initializing ...
[2014-04-07 15:59:02,224][INFO ][plugins ] [Puck] loaded [mongodb-river, mapper-attachments, marvel], sites [river-mongodb, marvel]
[2014-04-07 15:59:04,553][INFO ][node ] [Puck] initialized
[2014-04-07 15:59:04,553][INFO ][node ] [Puck] starting ...
[2014-04-07 15:59:04,665][INFO ][transport ] [Puck] bound_address {inet[/127.0.0.1:9302]}, publish_address {inet[/127.0.0.1:9302]}
[2014-04-07 15:59:07,727][INFO ][cluster.service ] [Puck] new_master [Puck][gtub58OkR9SskDE0SfYobw][inet[/127.0.0.1:9302]], reason: zen-disco-join (elected_as_master)
[2014-04-07 15:59:07,778][INFO ][discovery ] [Puck] elasticsearch_dannyjoris/gtub58OkR9SskDE0SfYobw
[2014-04-07 15:59:07,795][INFO ][http ] [Puck] bound_address {inet[/127.0.0.1:9202]}, publish_address {inet[/127.0.0.1:9202]}
[2014-04-07 15:59:07,796][INFO ][node ] [Puck] started
[2014-04-07 15:59:07,813][INFO ][gateway ] [Puck] recovered [0] indices into cluster_state
[2014-04-07 15:59:09,589][ERROR][marvel.agent.exporter ] error connecting to [localhost:9200]
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:431)
at java.net.Socket.connect(Socket.java:527)
at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:424)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:538)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:214)
at sun.net.www.http.HttpClient.New(HttpClient.java:300)
at sun.net.www.http.HttpClient.New(HttpClient.java:319)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:987)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:923)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:841)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openConnection(ESExporter.java:313)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openConnection(ESExporter.java:293)
at org.elasticsearch.marvel.agent.exporter.ESExporter.checkAndUpload(ESExporter.java:428)
at org.elasticsearch.marvel.agent.exporter.ESExporter.checkAndUploadIndexTemplate(ESExporter.java:464)
at org.elasticsearch.marvel.agent.exporter.ESExporter.checkAndUploadAllResources(ESExporter.java:341)
at org.elasticsearch.marvel.agent.exporter.ESExporter.openExportingConnection(ESExporter.java:190)
at org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:246)
at org.elasticsearch.marvel.agent.exporter.ESExporter.exportNodeStats(ESExporter.java:134)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportNodeStats(AgentService.java:274)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:174)
at java.lang.Thread.run(Thread.java:695)
[2014-04-07 15:59:09,591][ERROR][marvel.agent.exporter ] Could not connect to any configured elasticsearch instances: [localhost:9200]

Removing the plugins directory worked for me.
The location of the plugins directory is set in your config file under the path.plugins setting, e.g.
# Path to where plugins are installed:
path.plugins: /usr/local/var/lib/elasticsearch/plugins
You can find the default plugin paths in the Elasticsearch directory layout documentation.
This should not destroy your Marvel data, but I guarantee nothing.
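If you would rather keep the old plugins around in case you want to reinstall them later, here is a minimal Python sketch of backing the directory up instead of deleting it outright; the source path is the Homebrew default shown above, and the backup location is just an example:
# Move the Elasticsearch plugins directory aside so the node starts without
# plugins; the plugin jars can be restored later by moving the directory back.
# Both paths are assumptions - adjust them to match your path.plugins setting.
import shutil

plugins_dir = "/usr/local/var/lib/elasticsearch/plugins"      # from path.plugins
backup_dir = "/usr/local/var/lib/elasticsearch/plugins.bak"   # example backup location

shutil.move(plugins_dir, backup_dir)
print("Moved", plugins_dir, "->", backup_dir, "- now restart Elasticsearch")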

Related

Azure blob storage not storing locally

I want to use an azure-blob-storage container to store data in a local directory. I have used upload_blob from another container for this purpose. The file gets uploaded to the cloud but is not stored in the local path. I have set the binds and the device-to-cloud upload properties, and I have also changed the permissions on the directory with "chmod 777". After doing all this, the file is still not being saved locally.
Python function:
with blob_service_client.get_blob_client(container=container_name, blob=local_file_name) as upload_client:
    with open(upload_file_path, "rb") as data:
        print("Uploading the file")
        upload_client.upload_blob(data, blob_type="BlockBlob", overwrite=True)
        print("Finished uploading")
Binds:
"HostConfig": {
"Binds": [
"/opt/localstorage/blob/:/blobroot"
]
Upload properties
"blobstorage": {
"properties.desired": {
"deviceToCloudUploadProperties": {
"uploadOn": true,
"uploadOrder": "NewestFirst",
"cloudStorageConnectionString": "xxxx"
"storageContainersForUpload": {
"bloboutput": {
"target": "bloboutput"
}
}
},
"deviceAutoDeleteProperties": {
"deleteOn": false,
"deleteAfterMinutes": 15
}
Logs:
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Successfully loaded {0}: {1}, p0="Nephos.MaskClientIPAddressesInLogs", p1="False"
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Loading config Param {0} ({1}) read: {2}, p0="NephosIncludeInternalDetailsInErrorResponses", p1="Include internal details in error responses", p2="true"
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Successfully loaded {0}: {1}, p0="NephosIncludeInternalDetailsInErrorResponses", p1="True"
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Loading config Param {0} ({1}) read: {2}, p0="StampName", p1="Stamp Name", p2="Default Stamp"
[2021-04-20 04:23:57.924] [info ] [tid 1] Microsoft.AzureStack.Services.Storage.EntryPoint.BlobService: BlobService - StartAsync completed
[2021-04-20 04:23:57.925] [info ] [tid 1] Microsoft.Azure.Devices.BlobStorage.Tiering.BlobTieringService: Starting service...
[2021-04-20 04:23:57.937] [info ] [tid 1] [BlobInterface.cc:1494] [ListBlobsInOrder] ListBlobsInOrder received. Container:bloboutput BlobNameStart:null MaxBlobNames:1 OrderType:1 Flags:1
[2021-04-20 04:23:57.937] [error ] [tid 1] [MetaStore.cc:1953] [ListBlobsInOrder] Container not found. Name:bloboutput
Since you mentioned storing the file 'locally', you should use the download_blob method instead.
See the Azure Blob Storage tutorials for details.
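For completeness, here is a minimal sketch (not the poster's code) of pulling a blob back down to a local file with download_blob from the azure-storage-blob v12 SDK; the connection string, blob name, and local path are placeholders, and the container name bloboutput is taken from the question:
# Download a blob and write its contents to a local path.
# The connection string, blob name, and local path below are placeholders.
from azure.storage.blob import BlobServiceClient

blob_service_client = BlobServiceClient.from_connection_string("<connection-string>")
blob_client = blob_service_client.get_blob_client(container="bloboutput", blob="myfile.bin")

with open("/opt/localstorage/blob/myfile.bin", "wb") as local_file:
    downloader = blob_client.download_blob()   # returns a StorageStreamDownloader
    local_file.write(downloader.readall())     # write the blob's bytes to disk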

ElasticSearch + Logstash works, but does not display any data

I have an Oracle DB. Logstash retrieves data from Oracle and puts it into Elasticsearch. Everything looks fine, but nothing happens on the Logstash server, as if it doesn't know what to do.
logstash.conf:
input {
  jdbc {
    jdbc_driver_library => "C:\JBoss\wildfly\...\ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@3d-ztemtis-ora.iba:1521/ORCL"
    jdbc_user => "sample_user"
    jdbc_password => "12345"
    jdbc_validate_connection => true
    # once every 2 minutes
    schedule => "2 * * * *"
    statement => "SELECT * FROM table_one"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "tableone"
    document_id => "%{uid}"
  }
  stdout {
    codec => rubydebug
  }
}
Logstash logs
D:\Workspace3\ElasticLogstash\logstash-6.5.1>bin\logstash -f logstash.conf
Sending Logstash logs to D:/Workspace3/ElasticLogstash/logstash-6.5.1/logs which is now configured via log4j2.properties
[2018-11-28T00:49:30,296][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-28T00:49:30,308][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.1"}
[2018-11-28T00:49:33,174][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-28T00:49:33,455][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-28T00:49:33,471][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-28T00:49:33,625][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-11-28T00:49:33,674][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-28T00:49:33,674][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-28T00:49:33,699][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-11-28T00:49:33,718][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-28T00:49:33,745][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-28T00:49:33,940][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x64e24d22 run>"}
[2018-11-28T00:49:33,971][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-28T00:49:34,217][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
ElasticSearch log
[2018-11-28T00:36:06,492][DEBUG][o.e.a.ActionModule ] [px9stLj] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-11-28T00:36:06,683][INFO ][o.e.d.DiscoveryModule ] [px9stLj] using discovery type [zen] and host providers [settings]
[2018-11-28T00:36:07,188][INFO ][o.e.n.Node ] [px9stLj] initialized
[2018-11-28T00:36:07,188][INFO ][o.e.n.Node ] [px9stLj] starting ...
[2018-11-28T00:36:07,387][INFO ][o.e.t.TransportService ] [px9stLj] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-11-28T00:36:10,500][INFO ][o.e.c.s.MasterService ] [px9stLj] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2018-11-28T00:36:10,500][INFO ][o.e.c.s.ClusterApplierService] [px9stLj] new_master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {px9stLj}{px9stLjKSkqdyzudpK1ZhA}{bkR2txqXTn-Eo1o7-2PqEA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=17058418688, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-11-28T00:36:10,585][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [px9stLj] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-11-28T00:36:10,585][INFO ][o.e.n.Node ] [px9stLj] started
[2018-11-28T00:36:10,921][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [px9stLj] Failed to clear cache for realms [[]]
[2018-11-28T00:36:10,962][INFO ][o.e.l.LicenseService ] [px9stLj] license [852e276a-f99f-4ce3-a5d6-86c7769ae24e] mode [basic] - valid
[2018-11-28T00:36:10,970][INFO ][o.e.g.GatewayService ] [px9stLj] recovered [3] indices into cluster_state
[2018-11-28T00:36:12,366][INFO ][o.e.c.r.a.AllocationService] [px9stLj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[blog][0]] ...]).
As I said, the problem is that nothing happens and no errors are logged.
How can I tell whether it has successfully connected to Oracle?
Please see the schedule examples here:
https://discuss.elastic.co/t/how-to-run-the-schedule-every-five-minutes-in-logstash-5-0/66222
https://www.thegeekstuff.com/2011/07/cron-every-5-minutes/
I think your schedule section should look like this:
Every 2 minutes
schedule => "*/2 * * * *"
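Once the corrected schedule has fired at least once, a quick way to confirm that the jdbc input actually pulled rows from Oracle and that documents reached Elasticsearch is to query the target index directly. A small sketch, assuming the default local endpoint; the index name tableone comes from your output config:
# Count the documents Logstash has written to the "tableone" index.
# An HTTP 404 here means the index was never created, i.e. no rows were fetched yet.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9200/tableone/_count") as resp:
    print(json.load(resp))   # e.g. {"count": 42, "_shards": {...}}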

Render issue while learning three.js from the examples

Description of the problem
After I cloned the three.js repository, I opened canvas_camera_orthographic.html and saw the scene described in the link:
three.js issue (sorry, I can't upload an image)
Three.js version
[x] 0.86.0
[ ] r85
[ ] ...
Browser
[ ] All of them
[x] Chrome
[ ] Firefox
[ ] Internet Explorer
OS
[ ] All of them
[ ] Windows
[x] macOS
[ ] Linux
[ ] Android
[ ] iOS
It has something to do with the CanvasRenderer. Try modifying the example to use the WebGLRenderer instead and you will see the example working just fine.

Logstash Index error : [logstash-*] IndexNotFoundException[no such index]

I am new to ELK.
I am using:
- elasticsearch-2.1.0
- logstash-2.1.1
- kibana-4.3.0-windows
I tried to configure ELK to monitor my application logs. I followed different tutorials and different Logstash configurations, but I am getting this error when I open Kibana and it sends its request to Elasticsearch:
[logstash-*] IndexNotFoundException[no such index]
This is my logstash config:
input {
  file {
    path => "/var/logs/*.log"
    type => "syslog"
  }
}
filter {
  grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
}
output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
I tried deleting all the folders, re-installing, and following this tutorial step by step:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
But I still didn't receive any kind of index, and I got the index error from Kibana to Elasticsearch again.
Any help?
Regards
Debug logs:
C:\Users\xxx\Desktop\LOGS\logstash-2.1.1\bin>logstash -f first-pipeline.conf --debug
io/console not supported; tty will not be manipulated
Reading config file {:config_file=>"C:/Users/xxx/Desktop/LOGS/logstash-2.1.1/bin/first-pipeline.conf", :level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb", :line=>"325", :method=>"local_config"}
Compiled pipeline code:
@inputs = []
@filters = []
@outputs = []
@periodic_flushers = []
@shutdown_flushers = []
@input_file_1 = plugin("input", "file", LogStash::Util.hash_merge_many({ "path" => ("/var/logs/logstash-tutorial-dataset") }, { "start_position" => ("beginning") }))
@inputs << @input_file_1
@filter_grok_2 = plugin("filter", "grok", LogStash::Util.hash_merge_many({ "match" => {("message") => ("%{COMBINEDAPACHELOG}")} }))
@filters << @filter_grok_2
@filter_grok_2_flush = lambda do |options, &block|
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_grok_2)
events = @filter_grok_2.flush(options)
return if events.nil? || events.empty?
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_grok_2, :events => events)
events = @filter_geoip_3.multi_filter(events)
events.each{|e| block.call(e)}
end
if @filter_grok_2.respond_to?(:flush)
@periodic_flushers << @filter_grok_2_flush if @filter_grok_2.periodic_flush
@shutdown_flushers << @filter_grok_2_flush
end
@filter_geoip_3 = plugin("filter", "geoip", LogStash::Util.hash_merge_many({ "source" => ("clientip") }))
@filters << @filter_geoip_3
@filter_geoip_3_flush = lambda do |options, &block|
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_geoip_3)
events = @filter_geoip_3.flush(options)
return if events.nil? || events.empty?
@logger.debug? && @logger.debug("Flushing", :plugin => @filter_geoip_3, :events => events)
events.each{|e| block.call(e)}
end
if @filter_geoip_3.respond_to?(:flush)
@periodic_flushers << @filter_geoip_3_flush if @filter_geoip_3.periodic_flush
@shutdown_flushers << @filter_geoip_3_flush
end
@output_elasticsearch_4 = plugin("output", "elasticsearch", LogStash::Util.hash_merge_many({ "hosts" => [("localhost")] }))
@outputs << @output_elasticsearch_4
def filter_func(event)
events = [event]
@logger.debug? && @logger.debug("filter received", :event => event.to_hash)
events = @filter_grok_2.multi_filter(events)
events = @filter_geoip_3.multi_filter(events)
events
end
def output_func(event)
@logger.debug? && @logger.debug("output received", :event => event.to_hash)
@output_elasticsearch_4.handle(event)
end {:level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb", :line=>"38", :method=>"initialize"}
Plugin not defined in namespace, checking for plugin file {:type=>"input", :name=>"file", :path=>"logstash/inputs/file", :level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/plugin.rb", :line=>"76", :method=>"lookup"}
[...]
Logstash startup completed
Flushing buffer at interval {:instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x75375e77 @stopping=#<Concurrent::AtomicBoolean:0x61b12c0>, @last_flush=2015-12-29 15:45:27 +0000, @flush_thread=#<Thread:0x7008acbf run>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0x4985690f>, @submit_proc=#<Proc:0x3c9b0727@C:/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:55>, @flush_interval=1, @logger=#<Cabin::Channel:0x65f2b086 @subscriber_lock=#<Mutex:0x202361b4>, @data={}, @metrics=#<Cabin::Metrics:0x72e380e7 @channel=#<Cabin::Channel:0x65f2b086 ...>, @metrics={}, @metrics_lock=#<Mutex:0x3623f89e>>, @subscribers={12592=>#<Cabin::Outputs::IO:0x316290ee @lock=#<Mutex:0x3e191296>, @io=#<IO:fd 1>>}, @level=:debug>, @buffer=[], @operations_mutex=#<Mutex:0x601355b3>>", :interval=>1, :level=>:info, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
_globbed_files: /var/logs/logstash-tutorial-dataset: glob is: ["/var/logs/logstash-tutorial-dataset"] {:level=>:debug, :file=>"/Users/xxx/Desktop/LOGS/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/filewatch-0.6.7/lib/filewatch/watch.rb", :line=>"190", :method=>"_globbed_files"}
elasticsearch.log :
[2015-12-29 15:15:01,702][WARN ][bootstrap ] unable to install syscall filter: syscall filtering not supported for OS: 'Windows 8.1'
[2015-12-29 15:15:01,879][INFO ][node ] [Blue Marvel] version[2.1.1], pid[10152], build[40e2c53/2015-12-15T13:05:55Z]
[2015-12-29 15:15:01,880][INFO ][node ] [Blue Marvel] initializing ...
[2015-12-29 15:15:01,923][INFO ][plugins ] [Blue Marvel] loaded [], sites []
[2015-12-29 15:15:01,941][INFO ][env ] [Blue Marvel] using [1] data paths, mounts [[OS (C:)]], net usable_space [242.8gb], net total_space [458.4gb], spins? [unknown], types [NTFS]
[2015-12-29 15:15:03,135][INFO ][node ] [Blue Marvel] initialized
[2015-12-29 15:15:03,135][INFO ][node ] [Blue Marvel] starting ...
[2015-12-29 15:15:03,249][INFO ][transport ] [Blue Marvel] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2015-12-29 15:15:03,255][INFO ][discovery ] [Blue Marvel] elasticsearch/3DpYKTroSke4ruP21QefmA
[2015-12-29 15:15:07,287][INFO ][cluster.service ] [Blue Marvel] new_master {Blue Marvel}{3DpYKTroSke4ruP21QefmA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-12-29 15:15:07,377][INFO ][http ] [Blue Marvel] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-12-29 15:15:07,382][INFO ][node ] [Blue Marvel] started
[2015-12-29 15:15:07,399][INFO ][gateway ] [Blue Marvel] recovered [1] indices into cluster_state
[2015-12-29 16:33:00,715][INFO ][rest.suppressed ] /logstash-$DATE/_search Params: {index=logstash-$DATE, q=response=200}
[logstash-$DATE] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:566)
From my observation, it seems you have not provided a port number in the Logstash output config file. Generally the port used for Elasticsearch is 9200 (the default, as instructed by most of the tutorials out there). Try changing the output part of your Logstash config to the following and let me know if it works:
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
I fixed the problem by adding this:
input {
  file {
    path => "/path/to/logstash-tutorial.log"
    start_position => beginning
    sincedb_path => "/dev/null"
  }
}
Now Logstash is sending the index to Elasticsearch.
This issue can be fixed with the Logstash config file change below.
input {
  file {
    path => "/path/to/logfile.log"
    start_position => beginning
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
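As a sanity check after applying either change, you can ask Elasticsearch which indices exist before pointing Kibana at the logstash-* pattern. A small sketch, assuming the default local endpoint:
# List the indices Elasticsearch currently knows about; once Logstash has
# shipped events you should see entries such as logstash-2015.12.29 here.
import urllib.request

with urllib.request.urlopen("http://localhost:9200/_cat/indices?v") as resp:
    print(resp.read().decode())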

yet another Could not contact Elasticsearch at http://logstash.example.com:9200

I have installed Logstash + Elasticsearch + Kibana on one host and received the error from the title. I have googled all over the related topics, still no luck, and I'm stuck.
I will share the configs I have made:
elasticsearch.yml
cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25
output from /var/log/elasticsearch/hive.log
[2015-01-13 15:18:06,562][INFO ][node ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway ] [logstash-central] recovered [0] indices into cluster_state
Accessing logstash.example.com:9200 gives the ordinary output, like in the ES guide:
{
  "status" : 200,
  "name" : "logstash-central",
  "cluster_name" : "hive",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
Accessing http://logstash.example.com:9200/_status? gives the following:
{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
Kibana's config.js is the default:
elasticsearch: "http://"+window.location.hostname+":9200"
Kibana is used via nginx. Here is /etc/nginx/conf.d/nginx.conf:
server {
  listen *:80;
  server_name logstash.example.com;
  location / {
    root /usr/share/kibana3;
Logstash config file is /etc/logstash/conf.d/central.conf:
input {
  redis {
    host => "10.1.1.25"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "logstash.example.com"
  }
}
Redis is working and the traffic passes between the master and the slave (I've checked it via tcpdump).
15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504
netstat -apnt shows the following:
tcp 0 0 10.1.1.25:6379 10.1.1.50:41617 ESTABLISHED 21112/redis-server
tcp 0 0 10.1.1.25:9300 10.1.1.25:44011 ESTABLISHED 22598/java
tcp 0 0 10.1.1.25:9200 10.1.1.35:51145 ESTABLISHED 22598/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22379/nginx
Could you please tell me which way I should investigate the issue?
Thanks in advance
The problem is likely due to the nginx setup and the fact that Kibana, while installed on your server, is running in your browser and trying to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js.
You have what appears to be a correct nginx proxy set up for Kibana, but you'll need some additional work for Kibana to be able to access Elasticsearch.
Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/
And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ
And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
You'll have to specify the protocol for Elasticsearch in the output section:
elasticsearch {
  host => "logstash.example.com"
  protocol => 'http'
}
