I'm trying to configure Logstash to join my Elasticsearch cluster as a node using multicast, so that I don't have to configure a specific host in the Logstash configuration.
The configuration I have on elasticsearch is basically:
transport.tcp.port: 9300
http.port: 9200
cluster.name: myclustername
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 30s
discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.multicast.group: 239.193.200.01
discovery.zen.ping.multicast.port: 54328
On the Logstash side, I have this configuration:
output {
  elasticsearch {
    host => "239.193.200.01"
    cluster => "myclustername"
    protocol => "node"
  }
}
My Elasticsearch cluster nodes discover each other successfully over multicast, so the multicast IP is working as expected, but with that configuration I get the following log output:
log4j, [2014-06-05T05:51:44.001] WARN: org.elasticsearch.transport.netty: [logstash-aruba-30825-2014] exception caught on transport layer [[id: 0xe33ea7dd]], closing connection
java.net.SocketException: Network is unreachable
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
If I remove the host key from the configuration, I receive this log output:
log4j, [2014-06-05T06:07:45.500] WARN: org.elasticsearch.discovery: [logstash-aruba-31431-2014] waited for 30s and no initial state was set by the discovery
imeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
What am I doing wrong here? I suppose my Logstash configuration is wrong, but I'm not sure what.
As per the Logstash 1.4.1 documentation (http://logstash.net/docs/1.4.1/outputs/elasticsearch), you can create an elasticsearch.yml file in the $PWD of the Logstash process to ensure it is configured with the same multicast details.
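For example, a minimal elasticsearch.yml dropped into Logstash's working directory might look like the sketch below (the values are assumptions copied from the cluster settings in the question):

cluster.name: myclustername
discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.multicast.group: 239.193.200.01
discovery.zen.ping.multicast.port: 54328
discovery.zen.ping.timeout: 30s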
I assume the Elasticsearch nodes can see each other successfully using multicast and there isn't some network issue preventing that. Check http://your-es-host:9200/_cluster/health?pretty=true and make sure the number of nodes is what you expect.
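For example (the host name is a placeholder):

curl 'http://your-es-host:9200/_cluster/health?pretty=true'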
Setting the Elasticsearch discovery variables on the JVM via $JAVA_OPTS is another possibility:
export JAVA_OPTS="-Des.discovery.zen.ping.multicast.group=224.2.2.4 \
-Des.discovery.zen.ping.multicast.port=54328 \
-Des.discovery.zen.ping.multicast.enabled=true"
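These es.* system properties should be picked up by the embedded Elasticsearch node that Logstash starts, so export them in the same shell (or service environment) that launches Logstash.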
Another option is to use the elasticsearch_http output; I had the same problem and it is now working well.
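A minimal sketch of that output, assuming the HTTP API of one of your Elasticsearch nodes is reachable on port 9200 (the host is a placeholder):

output {
  elasticsearch_http {
    host => "your-es-host"
    port => 9200
  }
}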
Related
I am setting up an ELK Stack (which consists of ElasticSearch, LogStash and Kibana) on a single AWS EC2 instance. I am following the documentation from the elastic.co site.
TL;DR: I cannot access my ElasticSearch interface hosted on an EC2 instance from the web URL. How do I fix that?
Type : m4.large
vCPU : 2
Memory : 8 GB
Storage: 25 GB (EBS)
Note : I have provisioned the EC2 instance inside a VPC and with an Elastic IP.
I have installed all 3 components. ElasticSearch and LogStash are running as services while Kibana is running via the command ./bin/kibana inside kibana-7.10.1-linux-x86_64/ directory.
When I curl the ElasticSearch endpoint using
curl http://localhost:9200
I get this JSON output. (Which means the service is running and is accessible via Port 9200).
However, when I try to access the same URL via my browser, I get an error saying
Connection Timed Out
Isn't this supposed to return the same JSON output as the one I've mentioned above?
I have attached the elasticsearch.yml file here (Hosted in gofile.io).
Here are the Inbound Rules for the EC2 instance.
EDIT: I tried changing network.host: 'localhost' to network.host: 0.0.0.0 and restarted the service, but this time I got an error while starting the service. I have attached a screenshot of that.
EDIT 2: I have uploaded the updated elasticsearch.yml to Gofile.org.
The problem is the following line in your elasticsearch.yml configuration file:
node.name: node-1
network.host: 'localhost'
With that configuration, your ES cluster is only accessible from the same host and not from the outside. According to the official documentation, you need to either specify 0.0.0.0 or a specific publicly accessible IP address, otherwise that won't work.
Note that you also need to configure the following two lines in order for the cluster to properly form:
discovery.seed_hosts: ["node-1-ip-address"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
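Putting it together, a sketch of the relevant part of elasticsearch.yml for this single-node setup (the IP address is a placeholder for your instance's address):

node.name: node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["node-1-ip-address"]
cluster.initial_master_nodes: ["node-1"]

If the browser still times out after this change, also double-check that the instance's security group inbound rules allow TCP port 9200 from your client's IP.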
Elasticsearch works with ports 9200 and 9300, but if I start a JHipster microservice configured to work with Elasticsearch before starting the Elasticsearch service, Elasticsearch fails because something in the JHipster service has already bound port 9300.
I checked this situation by running the netstat -a command in Windows CMD.
If Elasticsearch uses port 9300 and the microservice will use Elasticsearch, why is the microservice occupying port 9300?
Do I need to change anything else for the service to use Elasticsearch in dev mode?
You can run Elasticsearch on different ports than the default ones. To do so, add/update the following in the elasticsearch.yml file:
http.port: 9201
transport.port: 9301
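After restarting Elasticsearch you can verify the new HTTP port, assuming a local install:

curl http://localhost:9201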
I installed elasticsearch by brew install elasticsearch and started it with brew services start elasticsearch, however, curl http://127.0.0.1:9200 shows connection refused. I checked the port: netstat -a -n | grep tcp | grep 9200 and some ipv4 is running there. Ok, so I opened /usr/local/etc/elasticsearch/elasticsearch.yml and changed the port to 9300 and also uncommented and changed: network.host: 127.0.0.1. Still shows connection refused when I do curl http://127.0.0.1:9300. The OS is MacOS High Sierra 10.13.4. If we open /usr/local/var/log/elasticsearch/elasticsearch_nikitavlasenko.log the error seems to be:
Cluster name [elasticsearch_nikitavlasenko] subdirectory exists in data paths [/usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko]. All data under these paths must be moved up one directory to paths [/usr/local/var/lib/elasticsearch]
Did you have an older version (2.x or before) installed previously? It sounds a lot like this PR, which added a check that you're not using the old behavior where the node name was part of the data path.
What I would do:
If you don't need the data any more, just remove /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko and start fresh.
If you need the data, you could either change path.data in your config or move the folder one level up (just like the log message says).
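A rough sketch of both options, assuming the default Homebrew paths from the log message (stop Elasticsearch and back up first):

# Option 1: start fresh by removing the node-name subdirectory
rm -rf /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko
# Option 2: keep the data by moving it up one directory, as the log message says
mv /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko/* /usr/local/var/lib/elasticsearch/
rmdir /usr/local/var/lib/elasticsearch/elasticsearch_nikitavlasenko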
PS: I wouldn't use port 9300 for HTTP, because that's generally the port used for communication of the nodes in a cluster itself.
This was the result of a bug in the Homebrew formula for Elasticsearch. It was creating a directory with the node name which is no longer allowed for Elasticsearch.
The formula has been updated to remove node name from path.data and no longer create the invalid directory which should resolve this problem.
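If you installed via Homebrew, picking up the fixed formula should just be a matter of the following (a sketch, not verified against your setup):

brew update
brew upgrade elasticsearch
brew services restart elasticsearch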
I ran into this issue some time back. Please add a minimal Elasticsearch config file; for me it looks like the one below:
http.port: 9200
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
path.data: /usr/local/var/elasticsearch/
path.logs: /usr/local/var/log/elasticsearch/
# Set both 'bind_host' and 'publish_host':
network.host: 127.0.0.1
# 1. Disable multicast discovery (enabled by default):
discovery.zen.ping.multicast.enabled: false
script.engine.groovy.inline.aggs: on
I think the issue was caused by not having the config below:
network.host: 127.0.0.1
Please check whether it's there in your config. Also set your data and logs folder paths properly.
Let me know if you face any issue and have questions on these configs.
Here is my setup:
Two instances of Ubuntu 16.04; the second one is a clone made from the first. ElasticSearch is installed only on the guest (Ubuntu) OSes. The configuration was adjusted after cloning the VM.
I am running with bridged network in VirtualBox - each instance got its IP from the router. Windows (host) firewall is configured appropriately. All machines can ping each other. Ping, Netstat and nmap testing shows that ports 9200 and 9300 are OPEN (tested "remote" hosts also).
ElasticSearch service is running appropriately. I can "curl -XGET" both locally and remotely and get the correct results.
The problem is that the ES from the second machine is not joining the cluster.
Here are the configuration files:
First one:
cluster.name: p4g4n_cluster
node.name: master
node.master: true
network.host: 192.168.0.12
discovery.zen.ping.unicast.hosts: ["192.168.0.12", "192.168.0.17"]
Second one:
cluster.name: p4g4n_cluster
node.name: node1
node.master: false
network.host: 192.168.0.17
discovery.zen.ping.unicast.hosts: ["192.168.0.12", "192.168.0.17"]
If I try curl -XGET 192.168.0.17:9200/_cluster/health I get master_not_discovered_exception, and if I try a basic GET request I see that node1 has _na_ for the cluster_uuid property, while on the first machine (master) a cluster_uuid is present.
The ElasticSearch version running is 5.4.0 and the Lucene version is 6.5.0.
Can anyone help me with what needs to happen in order for node1 to see and join the cluster?
I was able to solve this issue.
Digging through the logs showed that this was not a network configuration issue.
Since I first configured the entire ELK stack on one machine and then cloned it, ES and Logstash were already running and pumping syslog logs into Elasticsearch.
Because of this, the cloned machine had the same data folder as the existing one. As it turned out, the node UUID is embedded in the data folder and the solution was to delete the data folder on the cloned VM.
The error that I found in logs was: found existing node {xxx} with the same id but is a different node instance ... So there was an obvious conflict.
I found this github ES issue and this SO answer that dealt with the same issue.
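For reference, a sketch of the fix on the cloned VM, assuming the default Debian/Ubuntu package data path (this wipes that node's local data, so only run it on the clone):

sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*
sudo systemctl start elasticsearch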
You can try to add network.bind_host: 0.0.0.0 on both servers.
This is my config file stored at /etc/logstash/conf
input {
  file {
    path => ["PATH_OF_FILE"]
  }
}
output {
  elasticsearch {
    host => "172.29.86.35"
    index => "new"
  }
}
and this is my elasticsearch.yml file content for network and http:
# Set the bind address specifically (IPv4 or IPv6):
#network.bind_host: 172.29.86.35
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#network.publish_host: 192.168.0.1
# Set both 'bind_host' and 'publish_host':
network.host: 172.29.86.35
# Set a custom port for the node to node communication (9300 by default):
#transport.tcp.port: 9300
# Enable compression for all communication between nodes (disabled by default):
#transport.tcp.compress: true
# Set a custom port to listen for HTTP traffic:
#http.port: 9200
I am running Elasticsearch and Logstash as services. The problem is that when I start Logstash as a service it does not send any data to Elasticsearch. However, if I use the same config in the Logstash conf file and run Logstash from the CLI, it works perfectly fine. Even the logs do not show any error.
The version I am running is 1.4.3 for ES and 1.4.2 for LS.
The system env is RHEL 7
I have also encountered the same issue...
When I execute the command using the -f option it works normally, but when I start the service, nothing happens and the log file under /etc/logstash is never updated.
What I did as a temporary countermeasure was to execute the command below (with the & option):
logstash -f conffile.conf &
With this, it keeps working even if I log out from the server.
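A slightly more robust variant of the same workaround is to use nohup so the process also survives the shell closing (the config file path here is an assumption based on the question):

# run Logstash in the background, detached from the terminal
nohup logstash -f /etc/logstash/conf/conffile.conf &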