I have a two-node cluster on one machine and one config file (elasticsearch.yml). Is it possible to create another .yml config file and start each instance with a different config file? For example, I want to run the cluster on two ports (localhost:9200 and localhost:9201) at the same time.
I can't find a command-line API for starting an Elasticsearch cluster (config file as an argument?).
You should be able to start your second ES instance with the -Epath.conf setting on the command line, pointing to another folder where you have your second elasticsearch.yml configuration file:
./bin/elasticsearch -Epath.conf=/path/to/my/second/config/
A newer approach, starting from the ES 6 release, for launching multiple instances based on the same ES installation is to have multiple config folders and declare the ES_PATH_CONF variable before each startup:
ES_PATH_CONF=/apps/my-es/conf/node-1 ./elasticsearch
ES_PATH_CONF=/apps/my-es/conf/node-2 ./elasticsearch
To launch as a daemon, include -d and use -p <pidfile> to record the process ID:
ES_PATH_CONF=/apps/my-es/conf/node-1 ./elasticsearch -d -p es_node1_pid
ES_PATH_CONF=/apps/my-es/conf/node-2 ./elasticsearch -d -p es_node2_pid
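Assuming the two config folders bind HTTP to 9200 and 9201 respectively (the ports are an assumption, matching the question), a quick way to verify both instances are up and have formed one cluster:
curl http://localhost:9200                 # first node
curl http://localhost:9201                 # second node
curl http://localhost:9200/_cat/nodes?v    # should list both nodes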
Here is a reference in the ES docs:
https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html
It has to be more than just the command line. You should look at installing two instances of the Elasticsearch service, configured to listen on two different ports.
If you are on Ubuntu, you can have two init.d scripts, one for each instance.
1) In each init.d script, set the name of the process that runs the service; for a two-node cluster, use something like elasticsearch_node_1 and elasticsearch_node_2.
2) In the same files, point the logs, data, and configuration paths to two separate locations, one set per init script.
Up to this point you will have two services running on the same machine.
Maybe you don't want to run the instances as OS services; in that case I recommend checking this link:
Two nodes on same machine
$ bin/elasticsearch -Des.config=$ES_HOME/config/elasticsearch.1.yml
$ bin/elasticsearch -Des.config=$ES_HOME/config/elasticsearch.2.yml
3) Now modify the elasticsearch.yml file for each instance pointed to by its init script.
Change http.port to whichever port you want that instance to listen on.
For discovery, host1 and host2 will be the same host; you only have to change the port so each instance points at the other node, and set path.data and path.logs separately for each instance:
http.port: 9200
discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]
path.data: /path/to/data
path.logs: /path/to/logs
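For illustration, the second instance's elasticsearch.yml would then differ only in the port and paths (the values below are assumptions, adjust to your setup), while the discovery line stays the same as on the first instance:
http.port: 9201
path.data: /path/to/data2
path.logs: /path/to/logs2
discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]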
I am setting up an ELK stack (Elasticsearch, Logstash, and Kibana) on a single AWS EC2 instance, following the documentation from the elastic.co site.
TL;DR: I cannot access my Elasticsearch endpoint, hosted on EC2, from a browser URL. How do I fix that?
Type: m4.large
vCPU: 2
Memory: 8 GB
Storage: 25 GB (EBS)
Note: I have provisioned the EC2 instance inside a VPC and with an Elastic IP.
I have installed all 3 components. Elasticsearch and Logstash are running as services, while Kibana is running via the command ./bin/kibana inside the kibana-7.10.1-linux-x86_64/ directory.
When I curl the ElasticSearch endpoint using
curl http://localhost:9200
I get this JSON output (which means the service is running and is accessible via port 9200).
However, when I try to access the same URL via my browser, I get an error saying
Connection Timed Out
Isn't this supposed to return the same JSON output as the one I've mentioned above?
I have attached the elasticsearch.yml file here (hosted on gofile.io).
Here are the Inbound Rules for the EC2 instance.
EDIT: I tried changing network.host: 'localhost' to network.host: 0.0.0.0 and restarted the service, but this time I got an error while starting the service. I have attached a screenshot of that.
EDIT 2: I have uploaded the updated elasticsearch.yml to gofile.io.
The problem is in the following lines of your elasticsearch.yml configuration file:
node.name: node-1
network.host: 'localhost'
With that configuration, your ES cluster is only accessible from the same host, not from the outside. According to the official documentation, you need to specify either 0.0.0.0 or a specific publicly reachable IP address; otherwise it won't work.
Note that you also need to configure the following two lines for the cluster to form properly:
discovery.seed_hosts: ["node-1-ip-address"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
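Putting it together, a minimal sketch of the relevant elasticsearch.yml block under these assumptions (replace node-1-ip-address with the instance's address):
node.name: node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["node-1-ip-address"]
cluster.initial_master_nodes: ["node-1"]
Port 9200 also has to be allowed in the EC2 security group's inbound rules, otherwise the browser request will still time out.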
Here is my setup:
Two instances of Ubuntu 16.04; the second one is a clone of the first. Elasticsearch is installed only on the guest (Ubuntu) OSes. The configuration was adjusted after cloning the VM.
I am running with a bridged network in VirtualBox; each instance got its IP from the router. The Windows (host) firewall is configured appropriately. All machines can ping each other, and ping, netstat, and nmap testing show that ports 9200 and 9300 are open (tested against the "remote" hosts as well).
The Elasticsearch service is running properly. I can curl -XGET both locally and remotely and get the correct results.
The problem is that the ES from the second machine is not joining the cluster.
Here are the configuration files:
First one:
cluster.name: p4g4n_cluster
node.name: master
node.master: true
network.host: 192.168.0.12
discovery.zen.ping.unicast.hosts: ["192.168.0.12", "192.168.0.17"]
Second one:
cluster.name: p4g4n_cluster
node.name: node1
node.master: false
network.host: 192.168.0.17
discovery.zen.ping.unicast.hosts: ["192.168.0.12", "192.168.0.17"]
If I try curl -XGET 192.168.0.17:9200/_cluster/health I get master_not_discovered_exception. And if I try a basic GET request, I see that node1 has _na_ for the cluster_uuid property, while on the first (master) machine a cluster_uuid is present.
The Elasticsearch version running is 5.4.0 and the Lucene version is 6.5.0.
Can anyone help me with what needs to happen in order for node1 to see and join the cluster?
I was able to solve this issue.
Digging through the logs showed that this was not a network configuration issue.
Since I first configured the entire ELK stack on one machine and then cloned it, Elasticsearch and Logstash were already running and pumping syslog data into Elasticsearch.
Because of this, the cloned machine had the same data folder as the original. As it turned out, the node UUID is stored in the data folder, and the solution was to delete the data folder on the cloned VM.
The error I found in the logs was: found existing node {xxx} with the same id but is a different node instance, so there was an obvious conflict.
I found this github ES issue and this SO answer that dealt with the same issue.
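For completeness, a sketch of that fix on the cloned VM, assuming the Debian/Ubuntu package layout where path.data points to /var/lib/elasticsearch (check path.data in your own elasticsearch.yml first):
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*    # wipes the cloned data, including the duplicated node UUID
sudo systemctl start elasticsearch
On the next start the node generates a fresh UUID and can join the cluster as a new node.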
You can try adding network.bind_host: 0.0.0.0 on both servers.
Background
CoreOS-Kubernetes has a project for multi-node on Vagrant:
https://github.com/coreos/coreos-kubernetes
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
They have a custom cloud-config for the etcd node, but none for the worker nodes. For those, the Vagrantfile references shell scripts, which contain some cloud-config but mostly Kubernetes YAML:
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh
Objective
I'm trying to mount an NFS directory onto the CoreOS worker nodes, for use in a Kubernetes pod. From what I've read about Kubernetes in docs and tutorials, I want to mount it on the node first as a persistent volume, like this on Docker:
http://www.emergingafrican.com/2015/02/enabling-docker-volumes-and-kubernetes.html
I saw some posts saying that mounting in the pod itself can be buggy, and I want to avoid that by mounting on the CoreOS worker node first:
Kubernetes NFS volume mount fail with exit status 32
If mounting right in the pod is the standard way, just let me know and I'll do that.
Question
Are there options for customizing the cloud config for the worker node? I'm about to start hacking on that shell script, but thought I should check first. I looked through the docs but couldn't find any.
This is the CoreOS cloud-config I'm trying to add to the Vagrantfile:
https://coreos.com/os/docs/latest/mounting-storage.html#mounting-nfs-exports
No NFS mount on CoreOS is needed. Kubernetes will do it for you right in the pod:
http://kubernetes.io/v1.1/examples/nfs/README.html
Check out the nfs-busybox replication controller:
http://kubernetes.io/v1.1/examples/nfs/nfs-busybox-rc.yaml
I ran this and got it to write files to the server, which helped me debug the application. Note that even though the NFS mounts do not show up when you SSH into the Kubernetes node and run a container manually with docker run -it <image> /bin/bash, they are mounted inside the Kubernetes pod. That's where most of my misunderstanding occurred; I guess you have to add the mount parameters to the command when doing it manually.
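For reference, here is a minimal sketch of an NFS volume declared directly in a pod spec; the pod and volume names, server address, and export path are placeholders rather than values from the linked example:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: nfs-data
      mountPath: /mnt                  # where the export appears inside the pod
  volumes:
  - name: nfs-data
    nfs:
      server: nfs-server.example.com   # placeholder NFS server address
      path: /exports                   # placeholder exported directory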
Additionally, my application, gogs, stored its config files in /data. To get it to work, I first mounted the NFS export at /mnt. Then, like in the Kubernetes nfs-busybox example, I created a command that would copy all folders in /data to /mnt. In the replication controller YAML, under the container node, I put a command:
command:
- sh
- -c
- 'sleep 300; cp -a /data /mnt'
This gave me enough time to run the initial config of my app. Then I just waited until the sleep time was up and the files were copied over.
I then changed my mount point to /data, and now the app starts right where it left off when the pod restarts. Coupled with an external MySQL server, so far it looks like it's stateless.
I'm running RethinkDB on the Amazon Linux AMI. I already have services running on port 8080, so I need to change the port for the web UI. How would I do that?
I happened to find the documentation here: https://www.rethinkdb.com/docs/cli-options/
$ rethinkdb --bind all --http-port 9090
Karthick here has the right answer if you are running your instance of RethinkDB from the command line and daemonizing it.
In case you are running the default system configuration of RethinkDB (after, say, sudo apt-get install rethinkdb) and want to change the port there, you have to edit the configuration file, following these steps:
Look under the directory /etc/rethinkdb, find the RethinkDB configuration file, and change the http-port value to the new port you'd like the web UI to be on, as sketched below.
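As a rough sketch, the relevant line in that file would look something like this (the instance file name and port are illustrative):
# /etc/rethinkdb/instances.d/instance1.conf
http-port=9090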
Then, if your system uses init.d, you should be able to restart with sudo service rethinkdb restart. If your system uses systemd, you'll do something like sudo systemctl restart rethinkdb.
These two links will be a good resource to you in this case:
https://www.rethinkdb.com/docs/config-file/
https://www.rethinkdb.com/docs/start-on-startup/
I installed Elasticsearch on my local machine and I want to configure it as the only node in the cluster (a standalone server). That means whenever I create a new index, it will only be available to my server and will not be accessible from other servers.
In my current scenario these indices are available to other servers (the servers have formed a cluster), and they can make any changes to my indices, which I don't want.
I went through some other blogs but didn't find a good solution. Can you please tell me the steps to achieve this?
I've got the answer from http://elasticsearch-users.115913.n3.nabble.com/How-to-isolate-elastic-search-node-from-other-nodes-td3977389.html.
Kimchy: You set the node to local(true); this means it will not discover other nodes over the network, only within the same JVM.
In the elasticsearch/config/elasticsearch.yml file:
node.local: true # disable network
Updated for ES 7.x
In elasticsearch.yml:
network.host: 0.0.0.0
discovery.type: single-node
and make sure cluster.initial_master_nodes is not set (leave it commented out):
# cluster.initial_master_nodes: ["node-1", "node-2"]
Credit to @Chandan.
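As a quick sanity check (assuming the default HTTP port), the cluster health should then report a single node:
curl http://localhost:9200/_cluster/health?pretty    # expect "number_of_nodes" : 1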
In elasticsearch.yml
# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
index.number_of_shards: 1
index.number_of_replicas: 0
Use the same configuration in your code.
Also, to isolate the node, use node.local: true or discovery.zen.ping.multicast.enabled: false.
Here's the relevant info for Elasticsearch 5:
According to the changelog, to enable local mode on ES 5 you need to add transport.type: local to your elasticsearch.yml instead of node.local: true.
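In other words, that one line in elasticsearch.yml:
transport.type: local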
If you intend to run Elasticsearch on a single node and still be able to bind it to a public IP, two important settings are:
network.host: <PRIVATE IP OF HOST>
discovery.type: single-node
If you're using a network transport in your code, this won't work, as node.local gives you a LocalTransport only:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-transport.html#_local_transport
The trick then is to set
discovery.zen.ping.multicast.enabled: false
in your elasticsearch.yml, which will stop your node looking for any other nodes.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html#multicast
I'm not sure whether this prevents other nodes from discovering yours, though; I only needed it to affect a group of nodes with the same settings on the same network.
I wanted to do this without having to write or overwrite an elasticsearch.yml in my container. Here it is without a config file.
Set an environment variable prior to starting elasticsearch:
discovery.type=single-node
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
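For example, a sketch of passing it on docker run (the image tag is a placeholder for whichever version you run):
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:<version>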
In the config file, add:
network.host: 0.0.0.0 [in Network settings]
discovery.type: single-node [in Discovery and Cluster formation settings]
This solves your problem:
PUT /_all/_settings
{"index.number_of_replicas":0}
Tested with ES version 5.
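The same request as a plain curl command, assuming the default endpoint on localhost:9200:
curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.number_of_replicas": 0}'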
None of these helped me (and I sadly didn't read bhdrkn's answer). The thing that worked for me was to change Elasticsearch's cluster name every time I need a separate instance, so that new nodes aren't added automatically via multicast.
Just change cluster.name: {{ elasticsearch.clustername }} in elasticsearch.yml, e.g. via Ansible. This is particularly helpful when building separate stages like dev, QA, and production (which is a standard use case in enterprise environments).
And if you're using Logstash to get your data into Elasticsearch, don't forget to put the same cluster name into the output section, like:
output {
elasticsearch {
cluster => "{{ elasticsearch.clustername }}"
}
}
Otherwise your "logstash-*" index will not be built correctly...