Logstash seemingly changes the Elasticsearch output URL - elasticsearch

I have my Logstash configured with the following output:
output {
  elasticsearch {
    hosts => ["http://myhost/elasticsearch"]
  }
}
This is a valid URL, as I can run cURL commands against Elasticsearch with it; for example,
curl "http://myhost/elasticsearch/_cat/indices?v"
returns my created indices.
However, when Logstash attempts to create a template, it uses the following URL:
http://myhost/_template/logstash
when I would expect it to use
http://myhost/elasticsearch/_template/logstash
It appears that the /elasticsearch portion of my URL is being chopped off. What's going on here? Is "elasticsearch" a reserved word in the URL that is removed? As far as I can tell, when I issue http://myhost/elasticsearch/elasticsearch, it attempts to find an index named "elasticsearch" which leads me to believe it isn't reserved.
Upon changing the endpoint URL to be
http://myhost/myes
Logstash is still attempting to access
http://myhost/_template/logstash
What might be the problem?
EDIT
Both Logstash and Elasticsearch are v5.0.0

You have not specified which version of Logstash you are using. If you are using one of the 2.x versions, you need to use the path => '/myes/' parameter to specify that your ES instance is behind a proxy. In 2.x, the hosts parameter was just a list of hosts, not URIs.
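For illustration, on Logstash 2.x the proxy path goes into its own parameter rather than into the host URI. A minimal sketch, reusing the host name and path from the question:
output {
  elasticsearch {
    hosts => ["myhost"]
    path => "/myes/"
  }
}
If you are on 5.x, it may still be worth setting the path option explicitly instead of embedding it in the hosts URI.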

Related

Is it possible to implement stopwords without Elasticsearch configured in Magento 2.3 + how to implement stopwords in Magento 2.3

I want to add stopwords to my project, but I think Elasticsearch is not installed on my server; MySQL is selected as the search engine.
Will our stopwords work without Elasticsearch configured?
Also, I want to verify whether Elasticsearch is configured. For that I am using the command
curl -XGET 'http://localhost:9200'
and in response I am getting:
curl: (7) Failed to connect to localhost:9200; Connection refused.
Does this signify that Elasticsearch is not configured?
I got the proper solution to this question.
a) Install Elasticsearch 6.0
b) Then follow the steps at https://devdocs.magento.com/guides/v2.4/config-guide/elasticsearch/es-config-stopwords.html#to-change-directory-stopwords
But one thing needs to be kept in mind:
Don't override the stopwords.csv file.
Instead, override the stopwords_en_US.csv file, i.e. the one matching your locale.
Your module will then work perfectly.
The solution described elsewhere is fine; you just need to override the correct stopwords file for your locale.
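For reference, the locale-specific file is a plain CSV with one stopword per line; an illustrative stopwords_en_US.csv could look like this (the word list itself is only an example):
a
an
and
of
the
to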

Cannot delete indices with curator elasticsearch

I am trying to delete all the logs that were stored in Elasticsearch 14 days ago or earlier. I have installed Curator and created the config file and the action file, in this way:
curator.yml configuration file
My Elasticsearch is running on localhost:8080, and Kibana on localhost:80.
delete_indices action file
With both configuration files in place, I execute Curator and obtain this:
command execution
You can see my index name in Kibana in the following image:
filebeat index in kibana
I've already tried many things, but I didn't manage to make it work; it always says there is no index with this name. Does anyone know where the issue could be?
Edit 1:
With your help, I managed to get the exact index name; however, I still have the same problem:
modified delete_indices.yml file
That's what I get when I enter GET _cat/indices:
my indices
The problem was that Curator will not act on any index associated with an ILM policy without setting allow_ilm_indices to true.
The solution was to set allow_ilm_indices: True in the options of the delete_indices action.
More information: https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/option_allow_ilm.html
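Since the original action file is only visible as a screenshot, here is a minimal sketch of what a delete_indices action with allow_ilm_indices enabled could look like (the filebeat- prefix and the %Y.%m.%d timestring are assumptions based on the index shown in Kibana):
actions:
  1:
    action: delete_indices
    description: Delete indices older than 14 days, based on the date in the index name.
    options:
      ignore_empty_list: True
      allow_ilm_indices: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: filebeat-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14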

elasticsearch head not showing any data or getting connected

I have upgraded to Elasticsearch 2.0.0 on my system and installed the elasticsearch-head plugin. But it does not connect, and hence the indices residing on my ES server are not displayed.
I'm able to index documents and display them via cURL.
I have tried editing the elasticsearch.yml file like below:
http.cors.enabled : true
But this also does not seem to work.
Any idea why this is happening?
You need to set http.cors.allow-origin explicitly, since there is no default value anymore as of ES 2.0. Previously, that setting was set to * but that was considered bad practice from a security point of view.
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
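Putting both settings together, the relevant part of elasticsearch.yml would look roughly like this (restart the node after changing it; the regex only allows origins on localhost):
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/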

Logstash not creating index on Elasticsearch

I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything installs, and everything is working just fine, except for one thing.
Logstash is not creating an index on Elasticsearch. Whenever I try to access Kibana, it wants me to choose an index from Elasticsearch.
Logstash is on the ES node, but the index is missing. Here's the message I get:
"Unable to fetch mapping. Do you have indices matching the pattern?"
Am I missing something? I followed this tutorial: Digital Ocean
EDIT:
Here's the screenshot of the error I'm facing:
Yet another screenshot:
I got identical results on an Amazon AMI (CentOS/RHEL clone).
In fact exactly as per above… until I injected some data into Elastic - this creates the first day's index - then Kibana starts working. My simple .conf is:
input {
  stdin {
    type => "syslog"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
  }
}
then
cat /var/log/messages | logstash -f your.conf
Why stdin, you ask? Well, it's not super clear anywhere (I'm also a new Logstash user and found this very unclear) that Logstash never terminates when using, for example, the file plugin - it's designed to keep watching.
But using stdin, Logstash will run, send the data to Elastic (which creates the index), and then exit.
If I did the same thing with the file input plugin, it would never create the index - I don't know why that is.
I finally managed to identify the issue. For some reason, port 5000 was being used by another service, which kept Logstash from accepting any incoming connections. So all you have to do is edit the logstash.conf file and change the port from 5000 to 5001, or anything of your convenience.
Make sure all of your logstash-forwarders are sending the logs to the new port, and you should be good to go. If you have generated logstash-forwarder.crt using the FQDN method, then the logstash-forwarder should be pointing to the same FQDN and not an IP.
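For illustration, with the setup from the Digital Ocean tutorial that change would look roughly like this (a sketch only; the lumberjack input and the certificate paths are assumptions based on that tutorial):
input {
  lumberjack {
    port => 5001
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}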
Is this Kibana 3 or 4?
If it's Kibana 4, can you click on Settings in the top-line menu, choose Indices, make sure that the index name contains 'logstash-*', then click in the 'time-field' name and choose '@timestamp'?
I've added a screenshot of my settings below; be careful which options you tick.

Can't ship log from local server [duplicate]

This question already has answers here:
Old logs are not imported into ES by logstash
(2 answers)
Closed 8 years ago.
I have a problem related to Logstash and Elasticsearch.
When I try to ship logs via Logstash from a remote machine to my Elasticsearch server, there is no problem: indices are created.
But when I try to ship logs via Logstash from the server that is hosting Elasticsearch, no index is created; nothing happens.
Logstash's own logging shows that it sees the logs I'm trying to ship without any problem.
I can't figure out why this is happening.
Any idea?
Thanks a lot
ES version : 1.0.1
Logstash version : 1.4.0
logstash config file :
input {
  file {
    type => "dmprocess"
    path => "/logs/mysql.log*"
  }
}
filter {
  grok {
    type => "dmprocess"
    match => [ "message", "%{DATESTAMP:processTimeStamp} %{GREEDYDATA} Extraction done for %{WORD:alias} took %{NUMBER:milliseconds:int} ms for %{NUMBER:rows:int} rows",
               "message", "%{DATESTAMP:processTimeStamp} %{GREEDYDATA} : %{GREEDYDATA} %{WORD:alias} took %{NUMBER:milliseconds:int} ms" ]
  }
  date {
    match => [ "processTimeStamp", "YY/MM/dd HH:mm:ss" ]
  }
}
output {
  elasticsearch {
    host => "devmonxde"
    cluster => "devcluster"
  }
}
UPDATE:
It seems that I am not able to ship logs to an Elasticsearch instance (remote or local) via the file input from a Linux host.
I am, however, able to send data to ES via the stdin input, so it is not a connection/port problem.
It works like a charm if I run Logstash with the same config, but from a Windows host.
The default behaviour on Windows seems to be "beginning". This looks like a contradiction with the doc http://logstash.net/docs/1.4.0/inputs/file#start_position
It seems that Logstash does not import old logs, even with start_position => "beginning" on the file input.
The problem is that my old logs are not imported into ES.
I am creating another post for this.
Thanks
Old logs are not imported into ES by logstash
My first try would be to change the host to localhost, 127.0.0.1, or the external IP address, to make sure the host name is not the problem.
Another thing would be to add an output that logs everything to the console - an easy way to check whether the messages are coming in and are parsed the right way.
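A minimal sketch of such a debugging setup, reusing the file input from the question and adding a console output (the start_position and sincedb_path values are assumptions intended to force re-reading the whole file on each run):
input {
  file {
    type => "dmprocess"
    path => "/logs/mysql.log*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "devmonxde"
    cluster => "devcluster"
  }
}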