Logstash not sending data to elastic search when ran as a service - elasticsearch

This is my config file stored at /etc/logstash/conf:
input {
  file {
    path => ["PATH_OF_FILE"]
  }
}
output {
  elasticsearch {
    host => "172.29.86.35"
    index => "new"
  }
}
and this is my elasticsearch.yml file content for the network and http sections:
# Set the bind address specifically (IPv4 or IPv6):
#network.bind_host: 172.29.86.35
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#network.publish_host: 192.168.0.1
# Set both 'bind_host' and 'publish_host':
network.host: 172.29.86.35
# Set a custom port for the node to node communication (9300 by default):
#transport.tcp.port: 9300
# Enable compression for all communication between nodes (disabled by default):
#transport.tcp.compress: true
# Set a custom port to listen for HTTP traffic:
#http.port: 9200
I am running Elasticsearch and Logstash as services. The problem is that when I start Logstash as a service, it does not send any data to Elasticsearch. However, if I use the same config in the Logstash conf file and run Logstash from the CLI, it works perfectly fine. The logs do not show any error either.
The versions I am running are 1.4.3 for ES and 1.4.2 for LS.
The system environment is RHEL 7.

I have also encountered the same issue...
When I run the command with the -f option, it works normally, but when I start the service, nothing happens and the log file under /etc/logstash is never updated.
As a temporary countermeasure, I run the command below (with the & option):
logstash -f conffile.conf &
With this, it keeps working even if I log out from the server.
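A slightly more robust form of that workaround (a sketch; nohup, the output redirection, and the log path are my additions, not from the answer):
# run Logstash in the background so it keeps running after logout
nohup logstash -f conffile.conf > /tmp/logstash-manual.log 2>&1 &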

Related

ElasticSearch Connection Timed Out in EC2 Instance

I am setting up an ELK Stack (which consists of ElasticSearch, LogStash, and Kibana) in a single AWS EC2 instance. I am following the documentation from the elastic.co site.
TL;DR: I cannot access my ElasticSearch interface hosted in an EC2 instance from the Web URL. How do I fix that?
Type: m4.large
vCPU: 2
Memory: 8 GB
Storage: 25 GB (EBS)
Note: I have provisioned the EC2 instance inside a VPC and with an Elastic IP.
I have installed all 3 components. ElasticSearch and LogStash are running as services, while Kibana is running via the command ./bin/kibana inside the kibana-7.10.1-linux-x86_64/ directory.
When I curl the ElasticSearch endpoint using
curl http://localhost:9200
I get this JSON output (which means the service is running and is accessible via port 9200).
However, when I try to access the same URL via my browser, I get an error saying
Connection Timed Out
Isn't this supposed to return the same JSON output as the one I've mentioned above?
I have attached the elasticsearch.yml file here (Hosted in gofile.io).
Here are the Inbound Rules for the EC2 instance.
EDIT: I tried changing network.host: 'localhost' to network.host: 0.0.0.0 and restarted the service, but this time I got an error while starting the service. I attached a screenshot of that.
EDIT 2: I have uploaded the updated elasticsearch.yml to Gofile.
The problem is the following line in your elasticsearch.yml configuration file:
node.name: node-1
network.host: 'localhost'
With that configuration, your ES cluster is only accessible from the same host and not from the outside. According to the official documentation, you need to either specify 0.0.0.0 or a specific publicly accessible IP address, otherwise that won't work.
Note that you also need to configure the following two lines in order for the cluster to properly form:
discovery.seed_hosts: ["node-1-ip-address"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
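Once the service is restarted, a quick way to verify from outside the instance (a sketch; ELASTIC_IP is a placeholder for the instance's Elastic IP, and port 9200 must be allowed in the security group's inbound rules):
# run from your own machine, not from inside the instance
curl http://ELASTIC_IP:9200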

Elasticsearch Error "bootstrap checks failed" (Binding non-loopback address)

Recently, after installing Elasticsearch 7.3.2, I found that the server works fine when bound to localhost or 127.0.0.1.
But when I made it available for external use, that is, bound to a particular IP or 0.0.0.0, it raised an error and stopped the server:
bound or publishing to a non-loopback address, enforcing
bootstrap checks
[2019-09-19T18:21:43,962][ERROR][o.e.b.Bootstrap ] [MARFEEN] node validation exception
[1] bootstrap checks failed
I could not find any answer for this; most solutions were related to max open file limits. But it was solved when I enabled the config property discovery.seed_hosts in the elasticsearch.yml file:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
After enabling the above property, it worked fine on a non-loopback host as well.
Most users don't know that setting network.host: 0.0.0.0 triggers the production bootstrap checks, and this is the cause of the failure, as mentioned in the following line of the error message:
[o.e.b.Bootstrap ] [MARFEEN] node validation exception [1] bootstrap
checks failed
To resolve the issue when you are running Elasticsearch in development mode or with a single node, add the config below (in elasticsearch.yml) to avoid the above-mentioned checks:
discovery.type: single-node --> in the case of a single-node Elasticsearch cluster
es.enforce.bootstrap.checks=false --> explicitly disables these checks in a non-production environment
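A minimal sketch of a development-mode elasticsearch.yml using the single-node setting (an illustration, not the exact file from the question):
# elasticsearch.yml — single-node development sketch
network.host: 0.0.0.0
discovery.type: single-node   # nodes in single-node discovery mode are exempt from the bootstrap checks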
Your answer is correct. It is set up this way so that the bootstrap checks force your configuration to present an external address before the node comes online.
The way you have configured it will work as long as you do not require any special cluster conditions; at that point, you will need to set network.host to an external IP/hostname.

Where are the Elasticsearch server logs on localhost?

My Elasticsearch server is hosted on port 9200.
My application server makes requests to the ES server.
I would like to see the request params and the request URL that hit the Elasticsearch server.
Where can I see these?
OS: macOS Mojave
If you are using a Unix based OS, you should be able to find the Elasticsearch logs in:
/var/log/elasticsearch
I'd also check the messages in /var/log/messages, tailing and filtering for Elasticsearch:
tail -f /var/log/messages | grep elasticsearch
If you are using a Windows system, open the elasticsearch.yml file under the config folder, uncomment the line below, and provide your local path:
path.logs: <local_path>
Save the elasticsearch.yml file and start the server.
Now you can see all the logs under your local path.
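For example (the path below is hypothetical; any writable local directory works):
# elasticsearch.yml — hypothetical log location on Windows
path.logs: C:\elastic\logs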

Can't start ElasticSearch on Mac

I installed Elasticsearch with brew install elasticsearch and started it with brew services start elasticsearch; however, curl http://127.0.0.1:9200 shows connection refused. I checked the port with netstat -a -n | grep tcp | grep 9200 and something is listening there on IPv4. So I opened /usr/local/etc/elasticsearch/elasticsearch.yml, changed the port to 9300, and also uncommented and changed network.host: 127.0.0.1. It still shows connection refused when I do curl http://127.0.0.1:9300. The OS is macOS High Sierra 10.13.4. If I open /usr/local/var/log/elasticsearch/elasticsearch_nikitavlasenko.log, the error seems to be:
Cluster name [elasticsearch_nikitavlasenko] subdirectory exists in data paths [/usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko]. All data under these paths must be moved up one directory to paths [/usr/local/var/lib/elasticsearch]
Did you have an older version (2.x or before) installed before? It sounds a lot like this PR, which checks that you're not using the old behavior where the node name was part of the path.
What I would do:
If you don't need the data any more, just remove /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko and start fresh.
If you need the data, you could either change path.data in your config or move the folder's contents one level up (just as the log message says); see the sketch below.
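A sketch of both options (paths taken from the log message above; stop the node before touching the data directory):
# Option 1: start fresh (deletes the old data)
rm -rf /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko
# Option 2: keep the data; move it up one directory, as the log message says
mv /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko/* /usr/local/var/lib/elasticsearch/
brew services restart elasticsearch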
PS: I wouldn't use port 9300 for HTTP, because that's generally the port used for node-to-node communication within the cluster.
This was the result of a bug in the Homebrew formula for Elasticsearch. It was creating a directory with the node name which is no longer allowed for Elasticsearch.
The formula has been updated to remove node name from path.data and no longer create the invalid directory which should resolve this problem.
I ran into this issue some time back. Please add a minimal Elastic config file; for me it looks like the one below:
http.port: 9200
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
path.data: /usr/local/var/elasticsearch/
path.logs: /usr/local/var/log/elasticsearch/
# Set both 'bind_host' and 'publish_host':
network.host: 127.0.0.1
# Disable multicast discovery (enabled by default):
discovery.zen.ping.multicast.enabled: false
script.engine.groovy.inline.aggs: on
I think the issue was caused by the config below being missing:
network.host: 127.0.0.1
Please check whether it's in your config. Also set your data and logs folder paths properly.
Let me know if you face any issues or have questions about these configs.
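After updating the config, restart and verify (a sketch, reusing the same Homebrew and curl commands from the question):
brew services restart elasticsearch
curl http://127.0.0.1:9200   # should return the cluster info JSON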

Multicast Enable for Logstash - Elasticsearch

I'm trying to configure Logstash to join my Elasticsearch cluster as a node, using multicast to avoid configuring a specific host in the Logstash configuration.
The configuration I have on elasticsearch is basically:
transport.tcp.port: 9300
http.port: 9200
cluster.name: myclustername
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 30s
discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.multicast.group: 239.193.200.01
discovery.zen.ping.multicast.port: 54328
On logstash side, I have this configuration:
output {
  elasticsearch {
    host => "239.193.200.01"
    cluster => "myclustername"
    protocol => "node"
  }
}
My elasticsearch cluster is being discovered successfully using multicast meaning the multicast IP is working as expected, but from that configuration I get the following log output:
log4j, [2014-06-05T05:51:44.001] WARN: org.elasticsearch.transport.netty: [logstash-aruba-30825-2014] exception caught on transport layer [[id: 0xe33ea7dd]], closing connection
java.net.SocketException: Network is unreachable
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
If I remove the host key from the configuration, I receive this log output:
log4j, [2014-06-05T06:07:45.500] WARN: org.elasticsearch.discovery: [logstash-aruba-31431-2014] waited for 30s and no initial state was set by the discovery
imeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
What am I doing wrong here? I suppose my Logstash configuration is wrong, but I'm not sure what.
As per the Logstash 1.4.1 documentation (http://logstash.net/docs/1.4.1/outputs/elasticsearch), you could create an elasticsearch.yml file in the $PWD dir of the Logstash process to ensure it is configured with the same multicast details.
I assume the Elasticsearch cluster nodes can see each other successfully using multicast and there isn't some network issue preventing that. Check http://your-es-host:9200/_cluster/health?pretty=true and make sure the number of nodes is what you expect.
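A sketch of such a file, with the multicast values copied from the Elasticsearch config in the question:
# elasticsearch.yml placed in the Logstash process's $PWD
cluster.name: myclustername
discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.multicast.group: 239.193.200.01
discovery.zen.ping.multicast.port: 54328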
Setting Elasticsearch variables on the JVM using $JAVA_OPTS is another possibility:
export JAVA_OPTS="-Des.discovery.zen.ping.multicast.group=224.2.2.4 \
-Des.discovery.zen.ping.multicast.port=54328 \
-Des.discovery.zen.ping.multicast.enabled=true"
Another option is to use the elasticsearch_http output; I had the same problem, and it is now working well.
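A sketch of that alternative (ES_HOST is a hypothetical placeholder for one of the cluster's HTTP endpoints; the elasticsearch_http output talks to the cluster over HTTP on port 9200 instead of joining it as a node):
output {
  elasticsearch_http {
    host => "ES_HOST"   # hypothetical placeholder, not from the question
    port => 9200
    index => "logstash-%{+YYYY.MM.dd}"
  }
}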
