How to allow cross domain access in elasticsearch 2.0.0?

I'm trying to enable cross-domain access in Elasticsearch 2.0.0.
The following is my output configuration in Logstash 2.0.0:
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost"
    header => {"Access-Control-Allow-Origin": "true"}
    index => "testindex"
  }
}
However, I am getting the following error:
Error: Expected one of #, => at line 28, column 42 (byte 500) after output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => "localhost"
header => {"Access-Control-Allow-Origin"
Could someone please tell me what I am doing wrong here?
Thanks.
PS: I think this is most likely a syntax error, because when I remove the header from the output everything else works fine.

You can try using * or the actual domain name from which you are making the request:
"Access-Control-Allow-Origin": "*" or
"Access-Control-Allow-Origin": "http://yourdomain.com"
Refer to:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
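On the syntax error itself: Logstash config hashes use => between key and value, not a colon, which is what the parser is complaining about. A minimal sketch of the corrected block follows; the header option name is copied from the question and is not verified to exist in this plugin version, and the value follows the suggestion above:
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost"
    index => "testindex"
    # Logstash hashes use =>, not ":"; "header" itself is taken from the question
    # and may not be a supported option of this output plugin version.
    header => { "Access-Control-Allow-Origin" => "*" }
  }
}
If the plugin rejects the option entirely, note that CORS for browser requests is normally enabled on the Elasticsearch side (http.cors.enabled and http.cors.allow-origin in elasticsearch.yml) rather than in a Logstash output.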

Related

Logstash with elasticsearch output: how to write to different indices?

I hope to find here an answer to the question I have been struggling with since yesterday:
I'm configuring Logstash 1.5.6 with a RabbitMQ input and an Elasticsearch output.
Messages are published to RabbitMQ in bulk format; my Logstash consumes them and writes them all to the default Elasticsearch index logstash-YYYY.MM.DD with this configuration:
input {
  rabbitmq {
    host => 'xxx'
    user => 'xxx'
    password => 'xxx'
    queue => 'xxx'
    exchange => "xxx"
    key => 'xxx'
    durable => true
  }
}
output {
  elasticsearch {
    host => "xxx"
    cluster => "elasticsearch"
    flush_size => 10
    bind_port => 9300
    codec => "json"
    protocol => "http"
  }
  stdout { codec => rubydebug }
}
Now what I'm trying to do is send the messages to different Elasticsearch indices.
The messages coming from the AMQP input already carry the index and type parameters (bulk format).
So after reading the documentation:
https://www.elastic.co/guide/en/logstash/1.5/event-dependent-configuration.html#logstash-config-field-references
I tried this:
input {
  rabbitmq {
    host => 'xxx'
    user => 'xxx'
    password => 'xxx'
    queue => 'xxx'
    exchange => "xxx"
    key => 'xxx'
    durable => true
  }
}
output {
  elasticsearch {
    host => "xxx"
    cluster => "elasticsearch"
    flush_size => 10
    bind_port => 9300
    codec => "json"
    protocol => "http"
    index => "%{[index][_index]}"
  }
  stdout { codec => rubydebug }
}
But what Logstash does is create an index literally named %{[index][_index]} and put all the docs there, instead of reading the _index parameter and sending the docs to that index!
I also tried the following:
index => %{index}
index => '%{index}'
index => "%{index}"
But none of these seems to work.
Any help?
To sum up, the main question here is: if the RabbitMQ messages have this format:
{"index":{"_index":"indexA","_type":"typeX","_ttl":2592000000}}
{"#timestamp":"2017-03-09T15:55:54.520Z","#version":"1","#fields":{DATA}}
How do I tell Logstash to send the output to the index named "indexA" with type "typeX"?
If your messages in RabbitMQ are already in bulk format then you don't need to use the elasticsearch output; a simple http output hitting the _bulk endpoint will do the trick:
output {
  http {
    http_method => "post"
    url => "http://localhost:9200/_bulk"
    format => "message"
    message => "%{message}"
  }
}
So everyone, with the help of Val, the solution was:
As he said, since the RabbitMQ messages were already in bulk format, there was no need to use the elasticsearch output; the http output to the _bulk API does the job (silly me).
So I replaced the output with this:
output {
  http {
    http_method => "post"
    url => "http://172.16.1.81:9200/_bulk"
    format => "message"
    message => "%{message}"
  }
  stdout { codec => json_lines }
}
But it still wasn't working. I was using Logstash 1.5.6 and after upgrading to Logstash 2.0.0 (https://www.elastic.co/guide/en/logstash/2.4/_upgrading_using_package_managers.html) it worked with the same configuration.
There it is :)
If you store a JSON message in RabbitMQ, this problem can be solved.
Use index and type as fields in the JSON message and assign those values in the Elasticsearch output plugin:
elasticsearch {
  index => "%{index}"              # INDEX from the JSON body received from the producer
  document_type => "%{type}"       # TYPE from the JSON body
}
With this approach, each message can have its own index and type.
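As an illustration only (the field names index and type are assumptions about the message layout, and the host values are placeholders), the whole pipeline could look roughly like this:
input {
  rabbitmq {
    host => "xxx"
    queue => "xxx"
    codec => "json"                 # parse each message body as JSON so its fields become event fields
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{index}"             # e.g. a message {"index":"indexA","type":"typeX",...} is routed to indexA
    document_type => "%{type}"
  }
}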
   

Unknown setting 'protocol' for elasticsearch 5.1.1

So I've been looking for a way to solve this issue all day long, but everything I've found applies to older versions of Elasticsearch.
FYI, I use the latest version of the ELK stack:
elasticsearch version: 5.1.1
kibana version: 5.1.1
logstash version: 5.1.1
This is my apache.conf:
input {
  file {
    path => '/Applications/XAMPP/xamppfiles/logs/access_log'
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { protocol => "http" }
}
That file is used to collect access log data from Apache.
But when I run Logstash with:
logstash -f apache.conf
I get an error message telling me that something is wrong with my configuration; I guess the http protocol setting doesn't exist anymore.
Can you tell me how to fix it?
Many thanks.
There is no protocol setting in the elasticsearch output anymore. Simply modify your output to this:
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}
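As a side note (a minimal sketch with placeholder host names), the hosts option also accepts a list if you have more than one node:
output {
  elasticsearch {
    hosts => ["node1:9200", "node2:9200"]   # placeholder host names
  }
}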

Sending elasticsearch 5.1.1 slowlog to logstash 5.1.1 as an input

This is the Logstash 5.1.1 config file that is used to parse the slowlog of Elasticsearch 5.1.1.
input {
  file {
    path => "C:\Users\571952\Downloads\elasticsearch-5.1.1\elasticsearch-5.1.1\logs\elasticsearch_index_search_slowlog"
    start_position => "beginning"
  }
}
filter {
  grok {
    # parses the common bits
    match => [ "message", "[%{TIMESTAMP_ISO8601:logtime}][%{LOGLEVEL:log_level}][%{DATA:es_slowquery_type}]\s*[%{DATA:es_host}]\s*[%{DATA:es_index}]\s*[%{DATA:es_shard}]\s*took[%{DATA:es_duration}],\s*took_millis[%{DATA:es_duration_ms:float}],\s*types[%{DATA:es_types}],\s*stats[%{DATA:es_stats}],\s*search_type[%{DATA:es_search_type}],\s*total_shards[%{DATA:es_total_shards:float}],\s*source[%{GREEDYDATA:es_source}],\s*extra_source[%{GREEDYDATA:es_extra_source}],"]
  }
  mutate {
    gsub => [
      "source_body", "], extra_source[$", ""
    ]
  }
}
output {
  file {
    path => "C:\Users\571952\Desktop\logstash-5.1.1\just_queries"
    codec => "json_lines"
    message_format => "%{source_body}"
  }
}
When I executed this in Logstash 5.1.1 I got an error like this:
[2017-01-03T11:45:20,419][FATAL][logstash.runner ] The given configuration is invalid. Reason: The setting `message_format` in plugin `file` is obsolete and is no longer available. You can achieve the same behavior with the 'line' codec If you have any questions about this, you are invited to visit https://discuss.elastic.co/c/logstash and ask.
Can anyone help me in solving this error?
message_format has been deprecated since Logstash 2.2 and was removed in Logstash 5.1.
Remove that line.
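If you still want the file output to emit only the source_body field, a minimal sketch using the line codec (as the error message suggests) might look like this, assuming the source_body field actually exists on your events:
output {
  file {
    path => "C:/Users/571952/Desktop/logstash-5.1.1/just_queries"
    # the line codec's format option plays the role the removed message_format used to
    codec => line { format => "%{source_body}" }
  }
}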

I want to delete documents via Logstash, but it throws an exception

I have run into a problem. My Logstash configuration file is as follows:
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    db => 10
    data_type => "list"
    key => "local_tag_del"
  }
}
filter {
}
output {
  elasticsearch {
    action => "delete"
    hosts => ["127.0.0.1:9200"]
    codec => "json"
    index => "mbd-data"
    document_type => "localtag"
    document_id => "%{album_id}"
  }
  file {
    path => "/data/elasticsearch/result.json"
  }
  stdout {}
}
I want to read IDs from Redis with Logstash and tell Elasticsearch to delete the corresponding documents.
Excuse me, my English is poor; I hope someone can help me.
Thx.
I can't help you in detail, because your problem is spelled out in your error message: Logstash couldn't connect to your Elasticsearch instance.
That usually means one of:
elasticsearch isn't running
elasticsearch isn't bound to localhost
That has nothing to do with your Logstash config. Using Logstash to delete documents is a bit unusual though, so I'm not entirely sure this isn't an XY problem.

Logstash configuration issue

I am just starting out with Logstash and Elasticsearch, and I would like to index .pdf or .doc files in Elasticsearch via Logstash.
I configured Logstash with the multiline codec to get each file into a single message in Elasticsearch. Below is my configuration file:
input {
  file {
    path => "D:/BaseCV/*"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => ""
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
  elasticsearch {
    hosts => "localhost"
    index => "cvindex"
    document_type => "file"
  }
}
When Logstash starts, the first file I add ends up in Elasticsearch as one message, but the following files are spread over several messages. I would like the correspondence to be: 1 file = 1 message.
Is this possible? What should I change in my setup to solve the problem?
Thank you for your feedback.
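For what it's worth, a sketch of one approach that is sometimes used for this, assuming the multiline codec's auto_flush_interval and max_lines options are available in your codec version:
input {
  file {
    path => "D:/BaseCV/*"
    codec => multiline {
      pattern => ""                  # as in the question: every line is joined to the previous one
      what => "previous"
      auto_flush_interval => 2       # assumed available: flush the buffered event after 2 s of inactivity
      max_lines => 10000             # raise the default line limit so a long file can stay in one event
    }
  }
}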
