I was checking out a machine where I have Graylog and Elasticsearch installed (Elasticsearch is there for Graylog).
There's something I can't really understand: it seems that Elasticsearch is running with two nodes on the same machine, which I would like to avoid.
Here's the output:
me#server ~ # curl 'localhost:9200/_cat/nodes'
127.0.0.1 127.0.0.1 1 71 0.54 d * Candra
127.0.0.1 127.0.0.1 32 71 0.54 c - graylog-7d4bdfb9-23ac-45e9-a957-1f72b8848e2b
Is this normal? How can I set it up to use just one node?
The second node is a lightweight client node (see the c in the 6th column) that Graylog creates in order to connect to the cluster. It's perfectly normal, as their official documentation confirms:
Graylog hosts an embedded Elasticsearch node which is joining the Elasticsearch cluster as a client node.
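You can double-check this by asking the cat API for the node roles explicitly (the v and h parameters below should work on the ES versions Graylog embeds against, but verify against your version):
curl 'localhost:9200/_cat/nodes?v&h=name,node.role,master'
The d (data) node is your real Elasticsearch server; the c (client) node holds no data and leaves the cluster whenever Graylog shuts down.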
I am setting up an ELK Stack (which consists of ElasticSearch, LogStash and Kibana) on a single AWS EC2 instance. I am following the documentation from the elastic.co site.
TL;DR: I cannot access my ElasticSearch interface hosted on an EC2 instance from the web URL. How do I fix that?
Type : m4.large
vCPU : 2
Memory : 8 GB
Storage: 25 GB (EBS)
Note: I have provisioned the EC2 instance inside a VPC and with an Elastic IP.
I have installed all 3 components. ElasticSearch and LogStash are running as services, while Kibana is running via the command ./bin/kibana inside the kibana-7.10.1-linux-x86_64/ directory.
When I curl the ElasticSearch endpoint using
curl http://localhost:9200
I get the expected JSON output, which means the service is running and is accessible via port 9200.
However, when I try to access the same URL via my browser, I get an error saying
Connection Timed Out
Isn't this supposed to return the same JSON output as the one I've mentioned above?
I have attached the elasticsearch.yml file here (hosted on gofile.io).
Here are the Inbound Rules for the EC2 instance.
EDIT: I tried changing network.host: 'localhost' to network.host: 0.0.0.0 and restarted the service, but this time I got an error while starting the service. I attached a screenshot of that.
EDIT 2: I have uploaded the updated elasticsearch.yml to Gofile.
The problem is the following lines in your elasticsearch.yml configuration file:
node.name: node-1
network.host: 'localhost'
With that configuration, your ES cluster is only accessible from the same host and not from the outside. According to the official documentation, you need to specify either 0.0.0.0 or a specific publicly accessible IP address, otherwise it won't work.
Note that as soon as network.host is set to a non-loopback address, Elasticsearch enforces its production bootstrap checks, which is why the service failed to start after your edit. You also need to configure the following two lines in order for the cluster to properly form:
discovery.seed_hosts: ["node-1-ip-address"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
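Once those are set and the service restarts cleanly, the node should be reachable from outside, provided the instance's security group allows inbound traffic on port 9200 (the address below is a placeholder for your Elastic IP):
curl http://<your-elastic-ip>:9200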
I'm trying to follow the MonetDB docs on Cluster Management
to set up a 3-node cluster using 3 CentOS machines. I created the 3 dbfarms using monetdbd create /path/to/mydbfarm. From the first node, I run monetdb discover and it returns nothing, where it should discover the other nodes, and when I try to run monetdb -h [second node IP] -P mypassphrase status it returns the following error:
status: cannot connect: Connection refused
PS: I have a passwordless connection between these 3 nodes; ssh [any node IP] works just fine.
Thank you
By default, MonetDB listens only for local connections, for security reasons.
To also listen for remote connections, run
monetdbd set listenaddr=0.0.0.0 .../path/to/dbfarm
on each of the nodes and restart monetdbd.
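For example, a minimal sketch using the dbfarm path from the question (the listenaddr property only takes effect after monetdbd is restarted):
monetdbd set listenaddr=0.0.0.0 /path/to/mydbfarm
monetdbd stop /path/to/mydbfarm
monetdbd start /path/to/mydbfarm
After that, monetdb discover on the first node should start listing the databases on the other two nodes.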
Here is my setup:
Two instances of Ubuntu 16.04. The second one is a clone made from the first one. ElasticSearch is installed only on the guest (Ubuntu) OSes. The configuration has been adjusted after cloning the VM.
I am running with a bridged network in VirtualBox; each instance got its IP from the router. The Windows (host) firewall is configured appropriately. All machines can ping each other, and ping, netstat and nmap testing shows that ports 9200 and 9300 are OPEN (tested the "remote" hosts as well).
The ElasticSearch service is running appropriately. I can curl -XGET both locally and remotely and get the correct results.
The problem is that the ES from the second machine is not joining the cluster.
Here are the configuration files:
First one:
cluster.name: p4g4n_cluster
node.name: master
node.master: true
network.host: 192.168.0.12
discovery.zen.ping.unicast.hosts: ["192.168.0.12", "192.168.0.17"]
Second one:
cluster.name: p4g4n_cluster
node.name: node1
node.master: false
network.host: 192.168.0.17
discovery.zen.ping.unicast.hosts: ["192.168.0.12", "192.168.0.17"]
If I try curl -XGET 192.168.0.17:9200/_cluster/health I get master_not_discovered_exception, and if I try a basic GET request, I can see that node1 has _na_ for the cluster_uuid property, while on the first machine (master) the cluster_uuid is present.
The ElasticSearch version running is 5.4.0 and the Lucene version is 6.5.0.
Can anyone help me with what needs to happen in order for node1 to see and join the cluster?
I was able to solve this issue.
Digging through the logs showed that this was not a network configuration issue.
Since I first configured the entire ELK stack on one machine and then cloned it, ES and Logstash were already running and pumping syslog logs into Elasticsearch.
Because of this, the cloned machine had the same data folder as the existing one. As it turned out, the node UUID is embedded in the data folder, and the solution was to delete the data folder on the cloned VM.
The error that I found in logs was: found existing node {xxx} with the same id but is a different node instance ... So there was an obvious conflict.
I found this github ES issue and this SO answer that dealt with the same issue.
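For anyone hitting the same thing, the fix amounts to something like this on the cloned VM (the data path below is the default for a .deb/.rpm install; check path.data in your elasticsearch.yml before deleting anything):
sudo systemctl stop elasticsearch
# removes the copied node identity along with the copied indices
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch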
You can try adding network.bind_host: 0.0.0.0 on both servers.
I want to set up 3 nodes on a Windows machine for testing purposes. I already have the community version installed. I followed some YouTube tutorials as well as the docs to set up 3 nodes on 1 machine. All 3 nodes are up, but they are not connected; I can only see 1 node serving 100% load in "nodetool status".
Here is what I wanted: 3 instances connected as below
127.0.0.1 (seed)
127.0.0.2
127.0.0.3
Here is what I did:
Installed Datastax community edition 2.0.11
Copied apache-cassandra/conf -> conf2 & conf3
Modified cassandra.yaml for:
cluster_name
seed_address (127.0.0.1)
listen_address (seed ip)
rpc_address 0.0.0.0
endpoint_snitch: SimpleSnitch
The above settings were documented, but I also had to change the ports below, since everything runs on a single machine:
rpc_port: [if default is 9160 then node1 will be 9161]
native_transport_port:
storage_port:
Changed "JMX_PORT" in cassandra.bat file (created 2 copies of main file)
started all
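For reference, a sketch of what the second node's overrides would look like under this scheme (the cluster name is a placeholder and the ports are just the defaults shifted by one):
# conf2/cassandra.yaml (node 2)
cluster_name: 'TestCluster'
listen_address: 127.0.0.2
rpc_address: 0.0.0.0
rpc_port: 9161
native_transport_port: 9043
storage_port: 7001
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"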
I tried ccm, but it doesn't pick up the already-installed Cassandra; it tries to build from source and fails.
Am I missing something? It's been 2 days (4-5 hours) of trying to set this up.
Thanks,
Ninad
From my own tests on Windows 7, 127.0.0.1 and 127.0.0.2 point to the same interface, so you can't bind two nodes to the same port. Yet even when using different ports for each node, I had the same issue as you (nodes not communicating with each other). In the end I would recommend using Linux for this kind of test, even in a simple virtual machine, because on Linux 127.0.0.1 and 127.0.0.2 can be bound independently.
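If you do switch to a Linux VM, ccm (which you already tried) usually works out of the box there; a minimal sketch, assuming Python and a JDK are installed (ccm downloads the requested Cassandra version itself instead of using a local install):
pip install ccm
ccm create test -v 2.0.11 -n 3 -s
ccm status
The -n 3 flag creates three nodes on 127.0.0.1 through 127.0.0.3 and -s starts them, which is exactly the layout you were after.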
I set up a 3-node ZooKeeper CDH4 ensemble on RHEL 5.5 machines. I started the service by running zkServer.sh on each of the nodes. A ZooKeeper instance is running on all the nodes, but how do I know if they are part of an ensemble or are running as individual services?
I tried to start the service and check the ensemble as stated here, on Cloudera's site, but it throws a ClassNotFoundException.
You can use the stat four-letter word:
~$ echo stat | nc 127.0.0.1 <zkport>
which gives you output like:
Zookeeper version: 3.4.5-1392090, built on 09/30/2012 17:52 GMT
Clients:
/127.0.0.1:55829[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 3
Sent: 2
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: leader
Node count: 4
The Mode: line tells you what mode the server is running in: either leader or follower, or standalone if the node is not part of a cluster.
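To check the whole ensemble at once, you can loop over the members (the hostnames and the 2181 port below are placeholders; substitute your own):
for h in node1 node2 node3; do
  echo -n "$h: "; echo stat | nc $h 2181 | grep Mode
done
In a healthy 3-node ensemble, exactly one node reports leader and the other two report follower.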