I am following this tutorial to learn Elasticsearch, running it on EC2:
http://exploringelasticsearch.com/book/an-overview/up-and-running.html
So I am able to run the server and, using the query tool provided (http://elastichammer.exploringelasticsearch.com/), I can get the status response. However, when I try creating an index with a PUT, I get back this error:
{
  "error": "IndexCreationException[[planet] failed to create index]; nested: NoClassDefFoundError[Could not initialize class org.elasticsearch.index.codec.postingsformat.PostingFormats]; ",
  "status": 500
}
The same thing happens when I use cURL. I can get the sample response with
curl -XGET http://ec2-54-218-36-27.us-west-2.compute.amazonaws.com:9200/
as well as the status with:
curl -XGET http://ec2-54-218-36-27.us-west-2.compute.amazonaws.com:9200/_status
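An index-creation PUT along these lines (the index name is taken from the error above) reproduces the failure:
curl -XPUT http://ec2-54-218-36-27.us-west-2.compute.amazonaws.com:9200/planet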
Has anybody experienced this?
Thank you!
I sent the following request in the Kibana console to try to import a local dashboard into Kibana version 7.2.0:
POST https://dcwidavcca0085.epg.nam.gm.com:5601/api/saved_objects/_import
{
"file"="C:\Users\Documents\dashboard-prod.ndjson"
}
However, I received this error response:
{
"error": "no handler found for uri [/https://dcwidavcca0085.epg.nam.gm.com:5601/api/saved_objects/_import?pretty] and method [POST]"
}
I'm new to the Kibana API; the doc I'm following is https://www.elastic.co/guide/en/kibana/7.2/saved-objects-api-import.html
Is there any syntax or format error in my request?
Tl;dr:
Kibana's console in 7.2 can only target the Elasticsearch API, not Kibana's own API.
You need to run the command in a terminal instead, using curl.
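Something along these lines should work from a terminal (hostname and file name taken from the question; the kbn-xsrf header is required by Kibana's API, and the file is sent as multipart form data, per the linked docs):
curl -X POST "https://dcwidavcca0085.epg.nam.gm.com:5601/api/saved_objects/_import" -H "kbn-xsrf: true" --form file=@dashboard-prod.ndjson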
I'm having trouble connecting to an Elasticsearch instance with a Telegraf output plugin.
I created an Elasticsearch setup via the Elasticsearch Service, and in Kibana I created a user and password for it (attached to a role).
Then I set up a Telegraf output for it:
[[outputs.elasticsearch]]
urls = [ "https://hostname:port" ] # required.
timeout = "5s"
enable_sniffer = false
health_check_interval = "10s"
## HTTP basic authentication details.
username = "my_username"
password = "my_password"
index_name = "device_logs" # required.
insecure_skip_verify = true
manage_template = true
template_name = "telegraf"
overwrite_template = false
But when I try to start Telegraf with this, it just gives this error:
[agent] Failed to connect to [outputs.elasticsearch], retrying in 15s, error was 'health check timeout: no Elasticsearch node available'
The connection failure seems to originate deep in the bowels of Go's net/http library, and I don't know how to get more useful output at this point.
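One thing that may surface more detail, assuming a standard Telegraf install, is running the agent with its debug flag, which turns on verbose per-plugin logging:
telegraf --config /etc/telegraf/telegraf.conf --debug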
Things I've tried:
Thing #1: I tested cURL:
curl -u my_username:my_password -X POST "https://hostname:port/device_logs/_doc" -H 'Content-Type: application/json' -d'
{
"name": "John Doe"
}'
This works fine.
Thing #2: I created a simple Go program to connect to Elasticsearch:
package main

import (
	"log"

	"gopkg.in/olivere/elastic.v3"
)

func main() {
	// Configure the connection to ES; NewClient contacts the cluster up front.
	client, err := elastic.NewClient(elastic.SetURL("https://hostname:port"))
	if err != nil {
		panic(err)
	}

	log.Printf("client.running? %v", client.IsRunning())
	if !client.IsRunning() {
		panic("Could not make connection, not running")
	}
}
...and it hits the first panic with the same "no Elasticsearch node available".
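Worth noting: the olivere/elastic client sniffs the cluster for node addresses by default, which can itself produce "no Elasticsearch node available" against a hosted cluster even when the URL is fine. A variant of the test program with sniffing disabled and basic auth set (both are documented options of the v3 client; credentials as in the Telegraf config) would be:
client, err := elastic.NewClient(
	elastic.SetURL("https://hostname:port"),
	elastic.SetSniff(false),                            // skip node discovery; use the URL as given
	elastic.SetBasicAuth("my_username", "my_password"), // same credentials as the Telegraf output
)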
Thing #3: I tried running gdb on that Go program to debug into it.
It jumps down to assembly as soon as I call NewClient, so I can't really learn what is happening in the bowels of net/http.
I've never used Go before, so I'm hoping to avoid hours of learning Go, spelunking, and debugging to get around what hopefully is a simple issue here.
Any ideas on how to get more info here or why this is failing? Are there build or runtime flags for Go that I can use? gdb-with-Go debugging tips so I can step down into the Go library code? Elasticsearch client know-how?
To answer my own question, the problem here turned out to be the role's permissions. The Telegraf output plugin for Elasticsearch needs both the monitor and the manage_index_templates cluster permissions to be enabled; otherwise it fails to connect to the Elasticsearch server without printing any information about why.
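For reference, a role with those two cluster privileges plus write access to the target index can also be created directly through the Elasticsearch security API. A minimal sketch, assuming a 7.x cluster, with the role name and the admin credentials as placeholders:
curl -u admin_user:admin_password -X PUT "https://hostname:port/_security/role/telegraf_writer" -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    {
      "names": ["device_logs*"],
      "privileges": ["create_index", "index"]
    }
  ]
}'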
BTW: to build Go code so that you can debug into the libraries it calls:
go build -gcflags=all="-N -l"
I'm using ES 1.4.4, LS 1.5, and Kibana 4 on Debian.
I start Logstash, and it works fine for a couple of minutes; then I get a fatal error.
In order to shut down Logstash I have to delete the recent data stored in ES; that's the only way I found.
One more relevant fact: Elasticsearch looks OK. I can see old data in Kibana, and the head plugin works fine.
My output config:
output { elasticsearch { port => 9200 protocol => http host => "127.0.0.1" } }
Any help will be appreciated :)
Here is the full error message:
Got error to send bulk of actions to elasticsearch server at 127.0.0.1 : Read timed out {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1362, :exception=>#, :backtrace=>[
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:35:in `initialize'",
"org/jruby/RubyProc.java:271:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:61:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:224:in `call_once'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:127:in `code'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:in `perform_request'",
"org/jruby/RubyProc.java:271:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:in `bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in `submit'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:in `submit'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in `flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:in `flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'",
"org/jruby/RubyHash.java:1341:in `each'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in `buffer_receive'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:402:in `receive'",
"/opt/logstash/lib/logstash/outputs/base.rb:88:in `handle'",
"(eval):1070:in `initialize'",
"org/jruby/RubyArray.java:1613:in `each'",
"org/jruby/RubyEnumerable.java:805:in `flat_map'",
"(eval):1067:in `initialize'",
"org/jruby/RubyProc.java:271:in `call'",
"/opt/logstash/lib/logstash/pipeline.rb:279:in `output'",
"/opt/logstash/lib/logstash/pipeline.rb:235:in `outputworker'",
"/opt/logstash/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
Your Elasticsearch node has run out of storage and is unable to write the new documents coming from Logstash. Try deleting old indices and then:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": false
  }
}
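On ES 1.x the same request can be sent with curl against the local node from the question:
curl -XPUT 'http://127.0.0.1:9200/your_index/_settings' -d '{ "index": { "blocks.read_only": false } }'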
I hope this will work for you. Thanks!!
When I try to connect to Elasticsearch (elasticsearch-0.90.3) installed on EC2 from a non-local machine using the play2-elasticsearch plugin, it throws the following exception (the plugin works fine when connecting locally):
[error] application - ElasticSearch : No ElasticSearch node is available. Please check that your configuration is correct, that you ES server is up and reachable from the network. Index has not been created and prepared.
org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:205) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.client.transport.support.InternalTransportIndicesAdminClient.execute(InternalTransportIndicesAdminClient.java:85) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.client.support.AbstractIndicesAdminClient.exists(AbstractIndicesAdminClient.java:147) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequestBuilder.doExecute(IndicesExistsRequestBuilder.java:43) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85) ~[elasticsearch-0.90.3.jar:na]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59) ~[elasticsearch-0.90.3.jar:na]
I have used different methods to test that the Elasticsearch server is up and running, for example:
curl -XGET '184.72.55.204:9300/_analyze?analyzer=standard' -d 'this is a test'
curl: (52) Empty reply from server
telnet 184.72.55.204 9300
Trying 184.72.55.204...
Connected to ec2-184-72-55-204.us-west-1.compute.amazonaws.com.
Escape character is '^]'.
In some Google groups I also saw other people having a similar problem; they seemed to fix it by turning sniffing off, so I have this in my application.conf:
elasticsearch.client="184.72.55.204:9300"
elasticsearch.sniff=false # I ADDED THIS BUT DID NOT HELP
elasticsearch.index.name="phonotags"
elasticsearch.index.settings="{ analysis: { analyzer: { my_analyzer: { type: \"custom\", tokenizer: \"standard\" } } } }"
elasticsearch.index.clazzs="indexing.*"
elasticsearch.index.show_request=true
My build.scala file contains these:
"com.clever-age" % "play2-elasticsearch" % "0.7-SNAPSHOT"
resolvers += Resolver.url("play-plugin-releases", new URL("http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/"))(Resolver.ivyStylePatterns),
resolvers += Resolver.url("play-plugin-snapshots", new URL("http://repo.scala-sbt.org/scalasbt/sbt-plugin-snapshots/"))(Resolver.ivyStylePatterns)
I appreciate your help. Thanks!
It seems your node is not reachable over HTTP. Note that port 9300 speaks Elasticsearch's binary transport protocol, not HTTP, so the empty reply from curl on 9300 is expected; the REST API listens on 9200. Can you check whether an indexing request on 9200 works?
curl -XPUT '184.72.55.204:9200/twitter/tweet/1' -d '{ "user": "kimchy", "post_date" : "2011-08-18T16:20:00", "message" : "trying out Elastic Search" }'
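A cluster health check is another quick probe of the HTTP port (the endpoint exists as far back as 0.90):
curl -XGET 'http://184.72.55.204:9200/_cluster/health?pretty'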
I am following the steps described in the AWS docs to run an interactive Hive session via SSH.
I used the following resources:
https://github.com/ucbtwitter/getting-started/wiki/Using-Elastic-Map-Reduce-via-Command-Line
http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/SignUp.html
I was initially getting the error "Error: Missing key access-id", and then I fixed my JSON file. The JSON file is in the same format as mentioned in the links above.
When I run this command
./elastic-mapreduce
I am getting the following error:
Error: Unable to parse credentials.json: can't convert String into Integer.
I checked the values required in the JSON against AWS as well.
Does anyone have an idea why I am getting this error?
The region value in credentials.json must be an integer:
{
  ......
  "region": 1
}