I have one VPS running two PHP applications (the production and the test/staging environment of one application, respectively) that use Elasticsearch. Until now I had only one cluster with one node, shared between both apps (on port 9200). I now need to separate the ES for each app, so I can use different data, indexes, mappings, etc. for each of them. And I would still like to run everything on a single VPS.
With Puppet I was able to set up two nodes listening on ports 9200 and 9201 (two services), but they still seem to be dependent on each other: if I update a mapping on one, the other app crashes (without logging anything, which is why it's so hard to debug). I also tried using a different cluster.name for each of them, but then the second cluster has UUID: _na_ and updating mappings and data doesn't work.
I'm new to ES, so I'll appreciate any beginner-friendly help, best practices, or a pointer in the right direction.
Edit
elasticsearch.yml configs:
Production node:
cluster.name: my-production-cluster
http.port: 9200
node.name: my-production-node
path.data: "/var/lib/elasticsearch/my-production"
path.logs: "/var/log/elasticsearch/my-production"
Test node:
cluster.name: my-test-cluster
http.port: 9201
node.name: my-test-node
path.data: "/var/lib/elasticsearch/my-test"
path.logs: "/var/log/elasticsearch/my-test"
I was able to debug a bit more: the second node is throwing a master_not_discovered_exception error with a 503 code.
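For reference, this is the kind of extra per-node separation I suspect is missing (a sketch based on my reading of the docs; exact setting names vary between ES versions):

# production node (in addition to the settings above)
transport.tcp.port: 9300      # each node also needs its own transport port
discovery.type: single-node   # if your ES version supports it: stop this node looking for other masters

# test node
transport.tcp.port: 9301
discovery.type: single-node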
Related
I'm trying to create a cluster of three servers with dotCMS 5.2.6 installed.
They have to interface with a second cluster of 3 elasticsearch nodes.
Despite my attempts to combine them, the best case I've obtained is with both dotCMS and Elasticsearch up and running, but in the dotCMS admin backend (Control Panel > Configuration > Network) I always see my three servers with red status due to a red index status.
I have tested the following combinations:
In plugins/com.dotcms.config/conf/dotcms-config-cluster-ext.properties
AUTOWIRE_CLUSTER_TRANSPORT=false
es.path.home=WEB-INF/elasticsearch
Using AUTOWIRE_CLUSTER_TRANSPORT=true does not seem to change the result.
In plugins/com.dotcms.config/ROOT/dotserver/tomcat-8.5.32/webapps/ROOT/WEB-INF/elasticsearch/config/elasticsearch-override.yml
transport.tcp.port: 9301
discovery.zen.ping.unicast.hosts: first_es_server:9300, second_es_server:9300, third_es_server:9300
Using transport.tcp.port: 9300 causes a dotCMS startup failure with this error:
ERROR cluster.ClusterFactory - Unable to rewire cluster:Failed to bind to [9300]
Caused by: com.dotmarketing.exception.DotRuntimeException: Failed to bind to [9300]
Of course, port 9300 is listening on the three Elasticsearch nodes: they are configured with transport.tcp.port: 9300 and have no problem starting and forming their cluster.
Using transport.tcp.port: 9301, dotCMS can start and join the Elasticsearch cluster, but the index status is always red, even though indexing seems to work and nothing is apparently affected.
Using transport.tcp.port: 9309 (as suggested in the dotCMS online reference) or any other port number leads to the same result as the 9301 case, but in the dotCMS admin backend (Control Panel > Configuration > Network) the index information for each machine still reports 9301 as the ES port.
Main Question
I would like to know where the ES port can be edited, considering that my Elasticsearch cluster is performing well (all indices are green) and that the elasticsearch-override.yml inside the dotCMS plugin doesn't affect the default 9301 reported by the backend.
Is the HTTP interface enabled on ES? If not, I would enable it and see what the cluster health is and what the index health is. It might be that you need to adjust your expected replicas.
https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-health.html
and
https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html
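For example (substitute your own host; these are the standard _cat endpoints):

curl 'http://localhost:9200/_cat/health?v'
curl 'http://localhost:9200/_cat/indices?v'

If indices are yellow/red only because replicas cannot be allocated, lowering the replica count may help; a sketch, with your-index as a placeholder:

curl -XPUT 'http://localhost:9200/your-index/_settings' -H 'Content-Type: application/json' -d '{"index":{"number_of_replicas":0}}'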
FWIW, the upcoming version of dotCMS (5.3.0) does not support embedded Elasticsearch and requires a vanilla external ES node/cluster to connect to.
Yesterday I set up a dedicated single monitoring node following this guide.
I managed to fire up the new monitoring node with the same ES 6.6.0 version as the cluster, then added these lines to the elasticsearch.yml file on all ES cluster nodes:
xpack.monitoring.exporters:
  id1:
    type: http
    host: ["http://monitoring-node-ip-here:9200"]
Then I restarted all nodes and Kibana (which actually runs on one of the nodes of the ES cluster).
Now I can see today's monitoring data indices being sent to the new external monitoring node, but Kibana shows a "You need to make some adjustments" page when accessing the "Monitoring" section.
We checked the `cluster defaults` settings for `xpack.monitoring.exporters` and found the reason:
`Remote exporters indicate a possible misconfiguration: id1`
Check that the intended exporters are enabled for sending statistics to the monitoring cluster,
and that the monitoring cluster host matches the `xpack.monitoring.elasticsearch` setting in
`kibana.yml` to see monitoring data in this instance of Kibana.
I already checked that all nodes can ping each other. Also, I don't have X-Pack security, so I haven't created any additional "remote_monitor" user.
I followed the error message and tried to add xpack.monitoring.elasticsearch to the kibana.yml file, but I ended up with the following error:
FATAL ValidationError: child "xpack" fails because [child "monitoring" fails because [child
"elasticsearch" fails because ["url" is not allowed]]]
Hope someone can help me figure out what's wrong.
EDIT #1
Solved: the problem was due to monitoring collection not being disabled on the monitoring cluster:
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}
Additionally, I made a mistake in the kibana.yml configuration: xpack.monitoring.elasticsearch should have been xpack.monitoring.elasticsearch.hosts.
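So the working snippet in kibana.yml ended up looking like this (the monitoring node address is a placeholder, as above):

xpack.monitoring.elasticsearch.hosts: ["http://monitoring-node-ip-here:9200"]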
I had exactly the same problem, but the root cause was something different. Here, have a look:
Okay, I used to have the same problem: my Kibana did not show monitoring graphs, even though I had the monitoring index .monitoring-es-* available.
The root of the problem in my case was that my master nodes did not have the :9200 HTTP socket available from the LAN. That is, my config on the master nodes was:
...
transport.host: [ "192.168.7.190" ]
transport.port: 9300
http.port: 9200
http.host: [ "127.0.0.1" ]
...
As you can see, the HTTP socket is available only from within the host.
I didn't want anyone making HTTP requests to the masters from the LAN, because there is no point in doing that.
However, as I understand it, Kibana does not only read data from the monitoring index .monitoring-es-*, but also makes some requests directly to the masters to get some information.
That was exactly why Kibana did not show anything about monitoring.
After I changed one line in the config on the master nodes to
http.host: [ "192.168.0.190", "127.0.0.1" ]
Kibana immediately started to show monitoring graphs.
I reproduced this experiment several times.
Now everything is working.
Also, I want to underline that although everything is fine now, my monitoring index .monitoring-es-* does NOT have "cluster_stats" documents.
So if your Kibana does not show monitoring graphs, I suggest you:
check whether the index .monitoring-es-* exists
check whether your master nodes can serve HTTP requests from the LAN
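For example, from another host on the LAN (using the master IP from my config):

curl 'http://192.168.0.190:9200/_cat/indices/.monitoring-es-*?v'   # does the monitoring index exist?
curl 'http://192.168.0.190:9200/_cluster/health?pretty'            # does the master answer HTTP from the LAN?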
I'm trying to run an Elasticsearch cluster with each es-node running in its own container. These containers are deployed using ECS across several machines that may be running other, unrelated containers. To avoid port conflicts, each port a container exposes is assigned a random value. These random ports are consistent across all running containers of the same type. In other words, all running es-node containers map port 9300 to the same random number.
Here's the config I'm using:
network:
  host: 0.0.0.0
plugin:
  mandatory: cloud-aws
cluster:
  name: ${ES_CLUSTER_NAME}
discovery:
  type: ec2
  ec2:
    groups: ${ES_SECURITY_GROUP}
    any_group: false
  zen.ping.multicast.enabled: false
transport:
  tcp.port: 9300
  publish_port: ${_INSTANCE_PORT_TRANSPORT}
cloud.aws:
  access_key: ${AWS_ACCESS_KEY}
  secret_key: ${AWS_SECRET_KEY}
  region: ${AWS_REGION}
In this case _INSTANCE_PORT_TRANSPORT is the port that 9300 is bound to on the host machine. I've confirmed that all the environment variables used above are set correctly. I'm also setting network.publish_host to the host machine's local IP via a command line arg.
When I forced _INSTANCE_PORT_TRANSPORT (and in turn transport.publish_port) to be 9300, everything worked great, but as soon as it's given a random value, nodes can no longer connect to each other. I see errors like this using logger.discovery=TRACE:
ConnectTransportException[[][10.0.xxx.xxx:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /10.0.xxx.xxx:9300];
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:952)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:916)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:888)
    at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:267)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:395)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
It seems like the port a node binds to is the same as the port it pings while trying to connect to other nodes. Is there any way to make them different? If not, what's the point of transport.publish_port?
The way the discovery-ec2 plugin works is that it collects a list of IP addresses using the AWS EC2 API and uses this list as the unicast list of nodes.
But it does not collect any information from the running cluster. Obviously, the node is not yet connected!
So it does not know anything about the publish_port of the other nodes.
It just adds an IP address, and that's all. Elasticsearch then uses the default port, which is 9300.
So there is nothing you can do, IMO, to fix that in the short term.
But we can imagine adding a new feature close to what has been implemented for Google Compute Engine, where we use a specific metadata entry to get this port from the GCE APIs.
We could do the same for Azure and EC2. Do you want to open an issue so we can track the effort?
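In the meantime, one possible workaround: zen unicast discovery accepts explicit host:port pairs, so if the mapped transport port is known when the config is rendered, a static list (bypassing ec2 discovery) can carry it. A sketch with placeholder addresses and ports:

discovery.zen.ping.unicast.hosts: ["10.0.0.11:32768", "10.0.0.12:32768"]   # host-mapped transport ports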
Starting from version 2.0 Elasticsearch binds only on the loopback interface by default (_local_ in terms of configuration).
The documentation says that there is a way to switch to another network, for example, _non_loopback_ binds to the first non-loopback interface. It works fine.
But I cannot figure out how to combine these settings so that Elasticsearch binds to both the loopback and a non-loopback interface simultaneously.
PS. My reason is that on each Elasticsearch instance I run Logstash, which connects to it via localhost, but I also want the Elasticsearch instances to see each other to form the cluster...
For 2.0 you would need to use
network.bind_host: 0
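A sketch of how that could look in elasticsearch.yml for the Logstash-plus-cluster setup described above (assuming the special _non_loopback_ value resolves to your LAN interface on your version):

network.bind_host: 0                    # listen on all interfaces, loopback included
network.publish_host: _non_loopback_    # advertise the LAN address to the other nodes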
As of Elasticsearch 7.x, this configuration has changed yet again. For a simple single-node cluster bound to the loopback, site-local, and global IPs, you essentially do this:
network.host: [_local_, _site_, _global_]
cluster.initial_master_nodes: node-1
The cluster.initial_master_nodes setting is explained here, while the network.host setting is in the documentation here, although the docs don't say how to assign multiple values to network.host.
Go to
'<path_to_elasticsearch>/elasticsearch-2.3.4/config'
Open elasticsearch.yml
Add
network.host: 0.0.0.0
Now check which port Elasticsearch is using (9200 is the default), go to your firewall's inbound rules, and add those ports.
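For example, on a Linux host with ufw (a sketch; use whatever firewall frontend your system has):

sudo ufw allow 9200/tcp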
I need to expose my ES cluster to the world, and am securing it via Nginx with a proxy *:9201 -> localhost:9200 (working).
However, in order to form a cluster, I am trying to use the private network on DigitalOcean to get the nodes to talk to each other.
How can I expose the node-node transport on the private network interface unsecured, while not exposing port 9200 to the world?
I am trying something like
network.publish_host: 10.128.97.184
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: 10.128.97.184,10.128.97.185
in elasticsearch.yml, but it's not working, probably because port 9300 might also be protected by Nginx?
my nginx file looks like
root#els-node-1:~# cat /etc/nginx/sites-enabled/elasticsearch
server {
    listen *:9201;
    access_log /var/log/nginx/elasticsearch.access.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass http://localhost:9200;
        proxy_read_timeout 90;
    }
}
And I am able to form the cluster, but I can't see how to secure the external 9200 (disable it for everything but 127.0.0.1) while keeping the internal interface open for addresses like 10.x.x.x.
Thanks for your help!
Even if you use the private network, your ES cluster is not secure as anybody within the same Digital Ocean private network can still access your nodes through the open ports 9200 and 9300 (and potentially other services).
Your best bet is to secure your boxes via iptables and only white list the IPs you know are your own servers.
Drop all incoming and forwarded packets and add explicit rules for the other nodes in the cluster only.
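A sketch of such a ruleset, using the node IPs from the question (adjust the SSH port and add one ACCEPT line per cluster node):

iptables -A INPUT -i lo -j ACCEPT                                       # loopback, e.g. the Nginx proxy_pass to localhost:9200
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # replies to outbound traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                           # SSH
iptables -A INPUT -p tcp --dport 9201 -j ACCEPT                         # the Nginx-protected public endpoint
iptables -A INPUT -p tcp -s 10.128.97.185 --dport 9300 -j ACCEPT        # ES transport, other cluster node only
iptables -P INPUT DROP
iptables -P FORWARD DROP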
Also, use network.bind_host instead of network.publish_host, and in addition set up ES to use the eth1 interface only; check out the ES network docs for details.
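On the Elasticsearch side, that could look roughly like this (a sketch, assuming eth1 is the DigitalOcean private interface and that your ES version supports the _ethX:ipv4_ special values and the http.host override):

network.bind_host: "_eth1:ipv4_"   # transport reachable on the private network only
http.host: "127.0.0.1"             # HTTP stays on loopback, behind the Nginx proxy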