I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything installed fine and appears to be working, except for one thing.
Logstash is not creating an index in Elasticsearch. Whenever I try to access Kibana, it asks me to choose an index from Elasticsearch.
Logstash is on the ES node, but the index is missing. Here's the message I get:
"Unable to fetch mapping. Do you have indices matching the pattern?"
Am I missing something? I followed this tutorial: Digital Ocean
EDIT:
Here's the screenshot of the error I'm facing:
Yet another screenshot:
I got identical results on an Amazon AMI (a CentOS/RHEL clone).
In fact, exactly as above… until I injected some data into Elasticsearch - that creates the first day's index - and then Kibana starts working. My simple .conf is:
input {
  stdin {
    type => "syslog"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
  }
}
then
cat /var/log/messages | logstash -f your.conf
Why stdin, you ask? Well, it's not super clear anywhere (I'm also a new Logstash user and found this very unclear) that Logstash will never terminate when using the file plugin - it's designed to keep watching.
But using stdin, Logstash will run, send the data to Elasticsearch (which creates the index), and then exit.
If I did the same thing with the file input plugin, it would never create the index - I don't know why this is.
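My guess (an assumption, not something I verified) is that the file input starts tailing at the end of the file and records its read position in a sincedb file, so an existing log that never receives new lines produces no events and hence no index. A minimal sketch that forces a full read - the path is a placeholder and the null sincedb is for testing only:

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"   # read existing content, not just newly appended lines
    sincedb_path => "/dev/null"     # forget the read position between runs (testing only)
  }
}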
I finally managed to identify the issue. For some reason, port 5000 was already in use by another service, which prevented Logstash from accepting any incoming connections. So all you have to do is edit the logstash.conf file and change the port from 5000 to 5001, or anything convenient.
Make sure all of your logstash-forwarders are sending the logs to the new port, and you should be good to go. If you generated logstash-forwarder.crt using the FQDN method, then the logstash-forwarder should point to the same FQDN and not an IP.
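For reference, a sketch of what the changed input might look like, assuming the lumberjack input from the Digital Ocean tutorial (the certificate paths are that tutorial's defaults and may differ on your system):

input {
  lumberjack {
    port => 5001    # moved off the conflicting port 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}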
Is this Kibana 3 or 4?
If it's Kibana 4, click on Settings in the top-line menu, choose Indices, and make sure the index name contains 'logstash-*'; then click in the 'Time-field name' box and choose '@timestamp'.
I've added a screenshot of my settings below; be careful which options you tick.
I upload some logs into Elasticsearch via Filebeat, but extra information gets added to my original logs, like the host name, OS kernel, and other details about the host, and the main message ends up unformatted. I want to delete all the unnecessary fields and keep only my original message in its initial form.
I have tried removing add_host_metadata from filebeat.yml, but the problem still persists.
I'm working with ELK on Windows.
You could use the include_fields processor, or you could use drop_fields for the fields you don't need. Filebeat will sometimes add fields such as host or log, which can be dropped. There are some fields that can't be dropped, though.
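A minimal sketch of the processors section in filebeat.yml - the field names listed are the common metadata fields recent Filebeat versions add, so adjust them to whatever actually appears in your events, and note that @timestamp and type can never be dropped (ignore_missing also requires a reasonably recent Filebeat):

processors:
  - drop_fields:
      fields: ["agent", "ecs", "input", "log", "host"]   # metadata you don't want indexed
      ignore_missing: true                               # don't error if a field is absent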
I am trying to delete all the logs that were stored in Elasticsearch 14 days ago or earlier. I have installed Curator and created the config file and the action file, in this way:
curator.yml configuration file
My Elasticsearch is running on localhost:8080, and Kibana on localhost:80.
delete_indices action file
With both configuration files, I execute Curator with the config files and I obtain this:
command execution
You can see my index name in Kibana in the following image:
filebeat index in kibana
I've already tried many things, but I didn't manage to make it work; it always says there is no index with this name. Does someone know where the issue could be?
Edit 1:
With your help, I managed to get the exact index name; however, I still have the same problem:
modified delete_indices.yml file
That's what I get when I enter GET _cat/indices:
my indices
The problem was that Curator will not act on any index associated with an ILM policy without setting allow_ilm_indices to true.
The solution was:
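A minimal sketch of where that option sits in a delete_indices action file - the filebeat- prefix and the 14-day age filter are assumptions based on the question, not the original file:

actions:
  1:
    action: delete_indices
    description: "Delete filebeat indices older than 14 days"
    options:
      ignore_empty_list: True
      allow_ilm_indices: True    # let Curator act on ILM-managed indices
    filters:
      - filtertype: pattern
        kind: prefix
        value: filebeat-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 14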
More information: https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/option_allow_ilm.html
I have my Logstash configured with the following output:
output {
  elasticsearch {
    hosts => ["http://myhost/elasticsearch"]
  }
}
This is a valid URL, as I can run cURL commands against Elasticsearch with it, such as
curl "http://myhost/elasticsearch/_cat/indices?v"
which returns my created indices.
However, when Logstash attempts to create a template, it uses the following URL:
http://myhost/_template/logstash
when I would expect it to use
http://myhost/elasticsearch/_template/logstash
It appears that the /elasticsearch portion of my URL is being chopped off. What's going on here? Is "elasticsearch" a reserved word in the URL that is removed? As far as I can tell, when I issue http://myhost/elasticsearch/elasticsearch, it attempts to find an index named "elasticsearch" which leads me to believe it isn't reserved.
Upon changing the endpoint URL to be
http://myhost/myes
Logstash is still attempting to access
http://myhost/_template/logstash
What might be the problem?
EDIT
Both Logstash and Elasticsearch are v5.0.0
You have not specified which version of Logstash you are using. If you are using one of the 2.x versions, you need to use the path => '/myes/' parameter to specify that your ES instance is behind a proxy. In 2.x, the hosts parameter was just a list of hosts, not URIs.
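A sketch of that 2.x-style output with the proxy prefix split out - the '/myes/' value mirrors the question and should be adjusted to your own reverse-proxy path (the path option also exists in later versions of the elasticsearch output, so this form may be worth trying on 5.0 as well):

output {
  elasticsearch {
    hosts => ["myhost:80"]   # host only, no URL path
    path  => "/myes/"        # the proxy prefix goes here instead
  }
}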
I am investigating the Elastic stack for collecting log files. As I understand it, Elasticsearch is used for storage and indexing, and Logstash for parsing. There is also Filebeat, which can send the files to the Logstash server.
But it seems like this entire stack assumes that you have root access to the server that is producing the logs. In my case, I don't have root access, but I do have FTP access to the files. I looked at various input plugins for Logstash, but couldn't find something suitable.
Is there a component of the Elastic stack that can help with this setup, without requiring me to write (error-prone) custom code?
Maybe you can use the exec input plugin with curl. Something like:
exec {
  codec => plain { }
  command => "curl ftp://server/logs.log"
  interval => 3000
}
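For completeness, a sketch of how that might sit in a full pipeline that also ships the fetched lines to Elasticsearch - the FTP server name comes from the answer above and the Elasticsearch address is a placeholder (this uses the hosts option from newer Logstash versions):

input {
  exec {
    codec => plain { }
    command => "curl -s ftp://server/logs.log"   # pull the log file over FTP
    interval => 3000                             # seconds between runs
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]           # adjust to your ES endpoint
  }
}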
I have a problem related to Logstash and Elasticsearch.
When I try to ship logs via Logstash from a remote machine to my Elasticsearch server, no problem, indices are created.
But when I try to ship logs via Logstash from the server that is hosting Elasticsearch, no index is created; nothing happens.
Logging from Logstash shows that Logstash sees the logs I'm trying to ship without any problem.
I can't figure out why this is happening.
Any idea?
Thanks a lot
ES version: 1.0.1
Logstash version: 1.4.0
Logstash config file:
input {
  file {
    type => "dmprocess"
    path => "/logs/mysql.log*"
  }
}
filter {
  grok {
    type => "dmprocess"
    match => [ "message", "%{DATESTAMP:processTimeStamp} %{GREEDYDATA} Extraction done for %{WORD:alias} took %{NUMBER:milliseconds:int} ms for %{NUMBER:rows:int} rows",
               "message", "%{DATESTAMP:processTimeStamp} %{GREEDYDATA} : %{GREEDYDATA} %{WORD:alias} took %{NUMBER:milliseconds:int} ms" ]
  }
  date {
    match => [ "processTimeStamp", "YY/MM/dd HH:mm:ss" ]
  }
}
output {
  elasticsearch {
    host => "devmonxde"
    cluster => "devcluster"
  }
}
UPDATE:
It seems that I am not able to ship logs via input:file to an Elasticsearch instance (remote or local) from a Linux host.
Though I am able to send data to ES via input:stdin, so there is no connection/port problem.
It works like a charm if I run Logstash with the same config, but from a Windows host.
The default behaviour on Windows seems to be "beginning". This looks like a contradiction with the doc: http://logstash.net/docs/1.4.0/inputs/file#start_position
It seems that Logstash does not import old logs, even with start_position => "beginning" on the file input.
The problem is that my old logs are not imported into ES.
I'm creating another post for this.
Thanks
Old logs are not imported into ES by logstash
My first try would be to change the host to localhost, 127.0.0.1, or the external IP address to make sure the host name is not the problem.
Another thing would be to add an output to log everything to the console - an easy way to check whether the messages are coming in and are being parsed the right way.
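A sketch of those two suggestions combined, keeping the rest of the original config - the 127.0.0.1 value is just an example of what to try, not a known-good setting:

output {
  stdout { codec => rubydebug }   # print every event to the console for debugging
  elasticsearch {
    host => "127.0.0.1"           # explicit address instead of the hostname
    cluster => "devcluster"
  }
}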