Kibana and Elasticsearch error

I want to access Kibana at http://IP:80.
However, when I visit the page I get these errors:
Upgrade Required Your version of Elasticsearch is too old. Kibana
requires Elasticsearch 0.90.9 or above.
and
Error Could not reach http://localhost:80/_nodes. If you are using a
proxy, ensure it is configured correctly
I have looked these problems up on the internet and added the following lines, without success:
http.cors.enabled: true
http.cors.allow-origin: http://localhost:80
My Elasticsearch version is in fact 0.90.9.
What can I do?
Please help me.

In my scenario Logstash was using the node protocol by default. Check with this command:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
If the response shows "number_of_nodes" : 2, Logstash is using the node protocol and has joined the cluster as a client node, so Kibana picks it up as a second node running an older Elasticsearch version.
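For reference, the relevant part of the health response looks roughly like this (the values are illustrative, not taken from the asker's cluster):
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1
}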
Solution:
Set protocol => transport in the elasticsearch output of your Logstash config for shipping to ES, like this:
input { }
output {
  elasticsearch {
    action => ...             # string (optional), default: "index"
    embedded_http_port => ... # string (optional), default: "9200-9300"
    index => ...              # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    node_name => ...          # string (optional)
    port => ...               # string (optional)
    protocol => ...           # string, one of ["node", "transport", "http"]
  }
}
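A minimal concrete sketch of that output with the protocol forced to transport (the host and port values here are assumptions, adjust them to your ES node):
output {
  elasticsearch {
    host => "localhost"
    port => "9300"
    protocol => "transport"
  }
}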
If you want to access Kibana on port 80 you have to put a proxy in front of it; otherwise Kibana listens on port 5601 by default. If you are still facing the same issue, use the latest versions of Logstash, Kibana and Elasticsearch.
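For the port 80 part, one common approach is a reverse proxy in front of Kibana; a minimal sketch using nginx (nginx is not part of the original setup, and Kibana is assumed to be listening on its default port 5601):
server {
  listen 80;
  location / {
    proxy_pass http://localhost:5601;
  }
}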

Download a newer version of Elasticsearch, as the version you are using is not compatible with Kibana. Try the latest Elasticsearch release.

Related

Source of host Variable in Logstash

I'm using ELK (Kibana, Elasticsearch and Logstash running as Docker containers) and LogstashTcpSocketAppender in a Spring Boot app to forward data to Logstash.
The Logstash config is very simple:
input {
  tcp {
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}
The issue is that in Kibana I see a host field with the value "gateway" ("host: gateway").
What I don't understand is HOW this field is populated and added to the logstash-* index in Kibana, because:
I do not set any host variable in the logback config, and I can clearly see it is not coming from there.
It might be set by Logstash itself, but I couldn't find a concrete reference in the logback documentation for how this field is populated. And what does "gateway" actually mean?
This is very confusing to me.
Could anyone please explain?
Thanks in advance.

Shipping logs from a remote CentOS 7 machine to my local Windows ELK/Logstash using Filebeat

I have all three ELK components configured, up and running on my local Windows machine.
I have a log file on a remote CentOS 7 machine that I want to ship from there to my local Windows machine with the help of Filebeat. How can I achieve that?
I have installed Filebeat on CentOS via rpm.
In the configuration file I have made the following changes:
commented out output.elasticsearch
and uncommented output.logstash (which IP of my Windows machine shall I put here? How do I get that IP?)
AND
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - path to my log file
The flow of the data is:
Agent => Logstash => Elasticsearch
The agent can be any member of the Beats family; you are using Filebeat, which is fine.
You have to configure all of these:
on Filebeat you have to configure filebeat.yml
on Logstash you have to configure logstash.conf
on Elasticsearch you have to configure elasticsearch.yml
I personally would start with logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  # I assume you just want the log to run into elasticsearch
}
output {
  elasticsearch {
    hosts => "(YOUR_ELASTICSEARCH_IP):9200"
    index => "insertindexname"
  }
  stdout {
    codec => rubydebug
  }
}
This is a very minimal working configuration: Logstash will listen for input from Filebeat on port 5044. The filter section is needed when you want to parse the data. The output section is where you want to store the data; we are using the elasticsearch output plugin since you want to store it in Elasticsearch. stdout is super helpful for debugging your configuration file, add it and you will regret nothing: it prints every message that is sent to Elasticsearch.
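With the rubydebug codec, each event is printed to the Logstash console roughly like this (the field values are purely illustrative, and the exact set of fields depends on your Filebeat/Logstash versions):
{
       "message" => "a line from your log file",
    "@timestamp" => 2019-01-01T12:00:00.000Z,
      "@version" => "1",
          "host" => "centos7-host"
}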
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /directory/of/your/file/file.log
output.logstash:
  hosts: ["YOUR_LOGSTASH_IP:5044"]
This is a very minimal working filebeat.yml; paths is where you point Filebeat at the file you want harvested.
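Regarding the YOUR_LOGSTASH_IP placeholder: since Logstash runs on your Windows machine, use that machine's LAN address, and make sure port 5044 is allowed through the Windows firewall. You can find the address by running
ipconfig
on the Windows machine and taking the "IPv4 Address" of the adapter that the CentOS host can reach.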
When you are done configuring the files, start Elasticsearch, then Logstash, then Filebeat.
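A rough sketch of that start sequence, assuming default install locations (paths and service names may differ on your machines):
# on the Windows machine, from the respective install directories
bin\elasticsearch.bat
bin\logstash.bat -f logstash.conf
# on the CentOS machine (Filebeat installed via rpm)
sudo systemctl start filebeat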
Let me know if you run into any difficulties.

Failing to install ELK

I am trying to install ELK but I am getting the timed-out error below.
Check your logs. It might be that your computer's configuration is not good enough to run ELK (Elasticsearch alone can take around 1.2 GB of memory).
Use free -m and df -h to check whether the machine can support the ELK services.
On CentOS, the logs are located in /var/log/elasticsearch.
You can check your connection with curl localhost:9200.
If your Elasticsearch is deployed on another machine, you should set network.host: 0.0.0.0 in its configuration.
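If memory is tight, one option is to lower the Elasticsearch JVM heap; a minimal sketch, assuming a package install where the file lives under /etc/elasticsearch (the 512m value is only an example):
# /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m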
Update the Elasticsearch configuration
Add the line below to elasticsearch.yml
# vi /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0
If you have enabled authentication for Elasticsearch, update Logstash configuration file and provide user and password.
# vi /home/checkstyle.conf
output {
  elasticsearch {
    hosts => ['localhost:9200']
    user => 'admin'
    password => 'admin'
    sniffing => true
    manage_template => false
    index => "autocheckstylelogsyncfilebeat"
  }
}
Verify Elasticsearch is active
# curl -XGET 'localhost:9200'
If authentication is enabled
# curl -XGET '<user>:<pwd>@localhost:9200'

How to configure FileBeat and Logstash to add XML Files in Elasticsearch?

I'm a beginner here. My problem is to configure Filebeat and Logstash to index XML files into Elasticsearch on CentOS 7.
I have already installed the latest versions of Filebeat, Logstash, Elasticsearch and Kibana, with the "elasticsearch-head" plug-in (standalone) to look inside Elasticsearch. To test my installation, I successfully added a simple log file from the CentOS system (/var/log/messages) and can see it in the elasticsearch-head plug-in (6 indices and 26 shards):
[Screenshot of my elasticsearch-head plug-in]
Now the next step is to add logs from an XML file. After reading the documentation, I configured Filebeat and Logstash. All services are running, and I tried the command "touch /mes/AddOf.xml" to trigger a Filebeat event and forward the log to Logstash (AddOf.xml is my log file).
My XML data structure is like this for one log event:
<log4j:event logger="ServiceLogger" timestamp="1494973209812" level="INFO" thread="QueueWorker_1_38a0fec5-7c7f-46f5-a87a-9134fff1b493">
<log4j:message>Traitement du fichier \\ifs-app-01\Interfaces_MES\AddOf\ITF_MES_01_01_d2bef200-3a85-11e7-1ab5-9a50967946c3.xml</log4j:message>
<log4j:properties>
<log4j:data name="log4net:HostName" value="MES-01" />
<log4j:data name="log4jmachinename" value="MES-01" />
<log4j:data name="log4net:Identity" value="" />
<log4j:data name="log4net:UserName" value="SOFRADIR\svc_mes_sf" />
<log4j:data name="LogName" value="UpdateOperationOf" />
<log4j:data name="log4japp" value="MES_SynchroService.exe" />
</log4j:properties>
<log4j:locationInfo class="MES_SynchroService.Core.FileManager" method="TraiteFichier" file="C:\src\MES_PROD\MES_SynchroService\Core\FileManager.cs" line="47" />
</log4j:event>
My Filebeat configuration is like this (/etc/filebeat/filebeat.yml):
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /mes/*.xml
  document_type: message

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: ^<log4j:event

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  multiline.match: after

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
My Logstash input configuration (/etc/logstash/conf.d/01-beats-input.conf):
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
My Logstash filter configuration (/etc/logstash/conf.d/01-beats-filter.conf):
filter {
  xml {
    source => "message"
    xpath => [
      "/log4j:event/log4j:message/text()", "messageMES"
    ]
    store_xml => true
    target => "doc"
  }
}
My Logstash output configuration (/etc/logstash/conf.d/01-beats-output.conf):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "mes_log"
    document_type => "%{[@metadata][type]}"
  }
}
But when I try the command "touch /mes/AddOf.xml", or add a log event manually to AddOf.xml, I don't see a new index with the log events from the XML file in Elasticsearch.
I have seen the documentation for the XML plug-in for Logstash (here), but I don't know whether I need to install something, or maybe I'm not doing the right thing for Filebeat to send the logs to Logstash?
I'm very involved and motivated to learn about the ELK stack. Thank you in advance for your expertise and help, I would be grateful! :)
In your Filebeat config, the regex for multiline.pattern should probably be in single quotes:
multiline.pattern: '^<log4j:event'
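In context, the prospector section from the question would then look like this (the paths and other multiline settings are taken from the question itself):
- input_type: log
  paths:
    - /mes/*.xml
  document_type: message
  multiline.pattern: '^<log4j:event'
  multiline.negate: true
  multiline.match: after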

Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output part of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I have then tried setting the port to 9200 (which gives a connection refused error) and 9300, which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get logstash talking to my ES?
The way to point the Logstash output at ES is:
elasticsearch {
  protocol => "http"
  host => "EShostname:EsportNo"
}
In your case, it should be:
elasticsearch {
  protocol => "http"
  host => "192.168.248.4:9200"
}
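You can also check from the Logstash host that the HTTP endpoint is actually reachable (the IP is taken from the question; if the Apache proxy enforces basic auth, add -u user:password):
curl -XGET 'http://192.168.248.4:9200'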
If it's not working, then the problem is with the network address configuration. To make sure you have provided the correct configuration, check the following (see the snippet after this list):
Check the http.port property of ES
Check network.bind_host property of ES
Check network.publish_host property of ES
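These are all set in elasticsearch.yml; the values below are only an illustration, not your actual configuration:
# elasticsearch.yml (illustrative values)
http.port: 9200
network.bind_host: 0.0.0.0
network.publish_host: 192.168.248.4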
