I have an ES 1.7.1 cluster and I want to "reindex" an index. Since I cannot use _reindex, I came across this tutorial that uses Logstash to copy an index onto a different cluster. However, in my case I want to copy it over to the same cluster. This is my logstash.conf:
input {
  # We read from the "old" cluster
  elasticsearch {
    hosts => "myhost"
    port => "9200"
    index => "my_index"
    size => 500
    scroll => "5m"
    docinfo => true
  }
}
output {
  # We write to the "new" cluster
  elasticsearch {
    host => "myhost"
    port => "9200"
    protocol => "http"
    index => "logstash_test"
    index_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
  # We print dots to see it in action
  stdout {
    codec => "dots"
  }
}
filter {
  mutate {
    remove_field => [ "@timestamp", "@version" ]
  }
}
This errors out with the following error message:
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Elasticsearch hosts=>[{:scheme=>"http", :user=>nil, :password=>nil, :host=>"myhost", :path=>"", :port=>"80", :protocol=>"http"}], index=>"my_index", scroll=>"5m", query=>"{\"query\": { \"match_all\": {} } }", docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"]>
Error: Connection refused - Connection refused {:level=>:error}
Failed to install template: http: nodename nor servname provided, or not known {:level=>:error}
Any help would be appreciated
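As a side note, the error dump shows the input resolving to :port=>"80", which suggests this version of the elasticsearch input is ignoring the separate port option. A minimal sketch of a workaround, assuming the cluster really listens on 9200, is to put the port directly into hosts:

input {
  elasticsearch {
    hosts => "myhost:9200"   # host and port combined, since the standalone port option appears to be ignored
    index => "my_index"
    size => 500
    scroll => "5m"
    docinfo => true
  }
}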
I have this same problem: Logstash cannot connect to Elasticsearch.
My /conf.d/logstash.conf:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}
filter {
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    host => "http://anotherip, that is not localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Error:
error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/]
Where does Logstash pick up this IP address, "localhost:9200"?
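For reference, "localhost:9200" is the default value of the elasticsearch output's hosts option, so it usually shows up when the running Logstash instance is not using the configuration you expect. A minimal sketch of the output block with the current option name (hosts rather than host in recent plugin versions; the address below is a placeholder):

output {
  elasticsearch {
    hosts => ["http://anotherip:9200"]   # placeholder address; use hosts, not host, on recent plugin versions
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}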
I solved my problem. I just killed the service with kill -9; after that, systemctl start brought it back up and it works well. Thanks.
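In other words, a minimal sketch of the same fix, assuming the systemd unit is called logstash:

sudo pkill -9 -f logstash        # force-kill the stale Logstash process
sudo systemctl start logstash    # start it again so the current pipeline config is loaded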
I am trying to send data from Filebeat --> Logstash --> Elasticsearch cluster --> Kibana.
I have a cluster with 3 nodes: 2 are master-eligible nodes and 1 is a client node.
I have checked the health of the cluster using the below command,
curl -XGET "http://132.186.102.61:9200/_cluster/state?pretty"
I can see the output properly, with an elected master.
When Filebeat pushes data to Logstash, I see the following error:
logstash.outputs.elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-2017.06.05][1] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-2017.06.05][1]] containing [3] requests]"})
This is my logstash.conf:
input {
  beats {
    port => "5043"
    #ssl => true
    #ssl_certificate_authorities => "D:/Softwares/ELK/ELK_SSL_Certificates/testca/cacert.pem"
    #ssl_certificate => "D:/Softwares/ELK/ELK_SSL_Certificates/server/cert.pem"
    #ssl_key => "D:/Softwares/ELK/ELK_SSL_Certificates/server/pkcs8.key"
    #ssl_key_passphrase => "MySecretPassword"
    #ssl_verify_mode => "force_peer"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{NUMBER:duration} %{GREEDYDATA:messageFromClient}" }
  }
}
#filter {
#  if "_grokparsefailure" in [tags] {
#    drop { }
#  }
#}
output {
  elasticsearch { hosts => ["132.186.189.127:9200","132.186.102.61:9200","132.186.102.43:9200"] }
  stdout { codec => rubydebug }
}
May I please know the reason for this issue?
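For what it's worth, a 503 with unavailable_shards_exception usually means the primary shard of that day's index could not be allocated, which is a cluster-side problem rather than a Logstash one. A sketch of the checks I would run first (assuming one of the nodes above answers on 9200; the allocation explain API needs Elasticsearch 5.x or later):

curl -XGET "http://132.186.102.61:9200/_cat/shards/logstash-2017.06.05?v"
curl -XGET "http://132.186.102.61:9200/_cluster/allocation/explain?pretty"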
I have Elasticsearch indices with the same name, logstash-2015.12.10, on different servers, with different data. Now I want to keep only one Elasticsearch, so I need to append the data from both servers into one index.
Is it possible to do it?
You could copy one index from one host to the same index on your other host using Logstash. Using the configuration below, make sure to replace the source and target hosts to match your host names.
File: copylogs.conf
input {
  elasticsearch {
    hosts => "server1:9200"      # the host you want to copy from
    index => "logstash-2015.12.10"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    host => "server2"            # the host you want to copy to
    port => 9200
    protocol => "http"
    manage_template => false
    index => "logstash-2015.12.10"
  }
}
And then you can simply launch it with
bin/logstash -f copylogs.conf
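Once the run finishes, a quick sanity check is to compare the document counts on both sides (a sketch, reusing the host names from the example above):

curl -XGET "http://server1:9200/logstash-2015.12.10/_count?pretty"
curl -XGET "http://server2:9200/logstash-2015.12.10/_count?pretty"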
I am new to Elasticsearch. How can I move data from one Elasticsearch index to another using the Bulk API?
I'd suggest using Logstash for this, i.e. you use one elasticsearch input plugin to retrieve the data from your index and another elasticsearch output plugin to push the data to your other index.
The Logstash config file would look like this:
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "source_index"          # the name of your source index
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
    manage_template => false
    index => "target_index"          # the name of your target index
    document_type => "your_doc_type" # make sure to set the appropriate type
    document_id => "%{id}"
    workers => 5
  }
}
After installing Logstash, you can run it like this:
bin/logstash -f logstash.conf
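One detail to be aware of: document_id => "%{id}" assumes each document has an id field in its source. If you instead want to preserve the original Elasticsearch _id, a common variation (a sketch, following the same pattern as the reindex config earlier in this thread) is to enable docinfo on the input and reference the metadata in the output:

input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "source_index"
    docinfo => true                            # expose _index, _type and _id under [@metadata]
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
    manage_template => false
    index => "target_index"
    document_type => "%{[@metadata][_type]}"   # keep the original type
    document_id => "%{[@metadata][_id]}"       # keep the original document ID
  }
}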
I followed the steps in this document and I was able to get some reports on the Shakespeare data.
I want to do the same thing with Elasticsearch installed remotely. I tried configuring the "host" in the config file, but the queries still run against the local host as opposed to the remote one. This is my config file:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "accessLog"
    path => [ "/Users/akushe/Downloads/requests.log" ]
  }
}
filter {
  grok {
    match => ["message","%{COMMONAPACHELOG} (?:%{INT:responseTime}|-)"]
  }
  kv {
    source => "request"
    field_split => "&?"
  }
  if [lng] {
    kv {
      add_field => [ "location", ["%{lng}","%{lat}"] ]
    }
  } else if [lon] {
    kv {
      add_field => [ "location", ["%{lon}","%{lat}"] ]
    }
  }
}
output {
  elasticsearch {
    host => "slc-places-qa-es3001.slc.where.com"
    port => 9200
  }
}
You need to add protocol => http to make it use the HTTP transport rather than joining the cluster using multicast.
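Applied to the output above, that would look roughly like this (same host and port as in the question):

output {
  elasticsearch {
    host => "slc-places-qa-es3001.slc.where.com"
    port => 9200
    protocol => "http"   # use the HTTP transport instead of joining the cluster as a node
  }
}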