Adding a node to a running Elasticsearch cluster causes master_not_discovered_exception - elasticsearch

Problem
I have a running cluster and I would like to add a data node to it. The running cluster is
x.x.x.246
and the data node is
x.x.x.99
The servers can ping each other.
Machine OS: CentOS 7
Elasticsearch: 7.6.1
configs:
here is elasticsearch.yml of x.x.x.246:
cluster.name: elasticsearch
node.master: true
node.name: Node_master
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: x.x.x.246
http.port: 9200
discovery.seed_hosts: ["x.x.x.99:9300"]
cluster.initial_master_nodes: ["x.x.x.246:9300"]
here is elasticsearch.yml of x.x.x.99
cluster.name: elasticsearch
node.name: Node_master
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: x.x.x.99
http.port: 9200
discovery.seed_hosts: ["x.x.x.245:9300"]
cluster.initial_master_nodes: ["x.x.x.246:9300"]
Testing Elasticsearch on each machine
When I run systemctl start elasticsearch on each machine, it works well.
Test run on x.x.x.246:
curl -X GET "x.x.x.246:9200/_cluster/health?pretty"
shows: the number of nodes does not change
curl -X GET "x.x.x.99:9200/_cluster/health?pretty"
shows:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
Edited:
here is elasticsearch.yml of x.x.x.246:
cluster.name: elasticsearch
node.name: master
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["x.x.x.99","x.x.x.246]
cluster.initial_master_nodes: ["x.x.x.246"]
logger.org.elasticsearch.discovery: TRACE
here is elasticsearch.yml of x.x.x.99
cluster.name: elasticsearch
node.name: node
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["x.x.x.246","x.x.x.99"]
cluster.initial_master_nodes: ["x.x.x.246"]
logger.org.elasticsearch.discovery: TRACE
log on x.x.x.99:
[root@dev ~]# tail -30 /var/log/elasticsearch/elasticsearch.log
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.1.jar:7.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:830) ~[?:?]
[2020-03-19T12:12:04,462][INFO ][o.e.c.c.JoinHelper ] [node-1] failed to join {master}{0UHYehfNQ2-WCadTC_VVkA}{1FNy5AJrTpKOCAejBLKR2w}{10.64.2.246}{10.64.2.246:9300}{dilm}{ml.machine_memory=1907810304, ml.max_open_jobs=20, xpack.installed=true} with JoinRequest{sourceNode={node-1}{jb_3lJq1R5-BZtxlPs_NyQ}{a4TYDhG7SWqL3CSG4tusEg}{10.64.2.99}{10.64.2.99:9300}{d}{xpack.installed=true}, optionalJoin=Optional[Join{term=178, lastAcceptedTerm=8, lastAcceptedVersion=100, sourceNode={node-1}{jb_3lJq1R5-BZtxlPs_NyQ}{a4TYDhG7SWqL3CSG4tusEg}{10.64.2.99}{10.64.2.99:9300}{d}{xpack.installed=true}, targetNode={master}{0UHYehfNQ2-WCadTC_VVkA}{1FNy5AJrTpKOCAejBLKR2w}{10.64.2.246}{10.64.2.246:9300}{dilm}{ml.machine_memory=1907810304, ml.max_open_jobs=20, xpack.installed=true}}]}
org.elasticsearch.transport.RemoteTransportException: [master][10.64.2.246:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalStateException: failure when sending a validation request to node
at org.elasticsearch.cluster.coordination.Coordinator$2.onFailure(Coordinator.java:514) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:244) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.6.1.jar:7.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [node-1][10.64.2.99:9300][internal:cluster/coordination/join/validate]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid P4QlwvuRRGSmlT77RroSjA than local cluster uuid oUoIe2-bSbS2UPg722ud9Q, rejecting
at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$4(JoinHelper.java:148) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) ~[?:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) ~[?:?]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:264) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.6.1.jar:7.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:830) ~[?:?]

For node x.x.x.99, the seed hosts entry is wrong. It should be as below:
discovery.seed_hosts: ["x.x.x.246:9300"]
The discovery.seed_hosts list is used to discover the master node: it contains the addresses of the master-eligible nodes, which also hold the information about the current master. Since the configuration of x.x.x.99 points to x.x.x.245 instead of x.x.x.246, node x.x.x.99 is unable to discover the master.
Following the discussion in the comments, the correct configuration should be:
Master node:
cluster.name: elasticsearch
node.name: master
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["x.x.x.246]
cluster.initial_master_nodes: ["master"]
Note that if you want the above node to be master-only and not hold data, then set
node.data: false
Data node:
cluster.name: elasticsearch
node.name: data-node-1
node.data: true
node.master: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["x.x.x.246"]
Also, since node x.x.x.99 could not join the cluster, it has a stale cluster state (note the cluster UUID mismatch in the log above). Delete the data folder on x.x.x.99 and restart that node.
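A minimal cleanup sketch for x.x.x.99, assuming the packaged install with path.data set to /var/lib/elasticsearch as in the configs above (this wipes all local data on that node):
# stop the node, remove its stale on-disk cluster state, then start it again
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*
sudo systemctl start elasticsearch
# once the join succeeds, both nodes should report the same cluster_uuid
curl -s x.x.x.99:9200/ | grep cluster_uuid
curl -s x.x.x.246:9200/ | grep cluster_uuid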

The reason it wasn't able to discover a master is the entry discovery.seed_hosts: ["x.x.x.245:9300"], which points to an address that is neither the current master node nor any master-eligible node. As mentioned in the official ES docs, this setting is used to discover and elect the master node.
You should read in detail the two important configs related to master election:
discovery.seed_hosts
cluster.initial_master_nodes
You can turn on DEBUG logging for the discovery module to better understand it, by adding the line below to your elasticsearch.yml:
logger.org.elasticsearch.discovery: DEBUG
You can make a few improvements in both elasticsearch.yml files:
node.name is the same in both nodes' elasticsearch.yml; give each node a unique name.
In discovery.seed_hosts it's enough to mention the IP without the port; the default transport port 9300 will be used.
It's better to set network.host: 0.0.0.0 instead of the node IP in both elasticsearch.yml files.
node.data: true is the default, so there is no need to mention it.
So a better and more concise version looks like below:
Master node elasticsearch.yml
cluster.name: elasticsearch
node.name: master
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["x.x.x.99", "x.x.x.246"] -->note this
cluster.initial_master_nodes: ["x.x.x.246"] :- note this
Another data node elasticsearch.yml
cluster.name: elasticsearch
node.name: data
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["x.x.x.99", "x.x.x.246"] --> you need to change this and include both nodes
cluster.initial_master_nodes: ["x.x.x.246"]
Verify the master node
You can hit <your-any-node-ip>:9200/_cat/master and this should return the elected master node, which in your case would be the node named master; see the cat master API docs for more info.
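For example, against either node (a sketch using the addresses from the question):
curl -X GET "x.x.x.246:9200/_cat/master?v"
curl -X GET "x.x.x.246:9200/_cat/nodes?v"
The second call should list both nodes once the data node has joined.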

I also had the same issue when I was trying to access Elasticsearch from outside an AWS Windows server. I added
network.host: aws_private_ip
and then needed to restart the Elasticsearch service, but it threw an error on restart. It finally worked for me after I added the line below:
cluster.initial_master_nodes: node-1
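Put together, the relevant part of elasticsearch.yml would look roughly like this (aws_private_ip is a placeholder for the instance's private IP, and node-1 is assumed to match the node's node.name):
network.host: aws_private_ip   # placeholder: the instance's private IP
node.name: node-1              # assumed; cluster.initial_master_nodes must reference this name
cluster.initial_master_nodes: node-1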

Related

Elasticsearch stays in initializing stage, without actually working or throwing an error

I am using Elasticsearch version 6.4.3, and it was working as expected until this morning.
When I run elasticsearch.exe, it stays in the initializing state without throwing any errors.
elasticsearch.log
[2022-06-07T12:17:24,504][INFO ][o.e.n.Node ] [JO01HQESKAVM02] initializing ...
[2022-06-07T12:17:24,770][INFO ][o.e.e.NodeEnvironment ] [JO01HQESKAVM02] using [1] data paths, mounts [[(C:)]], net usable_space [136.2gb], net total_space [499.4gb], types [NTFS]
[2022-06-07T12:17:24,770][INFO ][o.e.e.NodeEnvironment ] [JO01HQESKAVM02] heap size [15.9gb], compressed ordinary object pointers [true]
elasticsearch.yml
bootstrap.memory_lock: false
cluster.name: elasticsearch
http.port: 9200
network.host: 0.0.0.0
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: JO01HQESKAVM02
path.data: C:\ELK\Elasticsearch\data
path.logs: C:\ELK\Elasticsearch\logs
transport.tcp.port: 9300
path.repo: ["/mount/backups", "/mount/longterm_backups"]
xpack.license.self_generated.type: basic
xpack.security.enabled: false
any help?

Unable to Start ElasticSearch with Public IP instead of localhost

I am trying to start Elasticsearch with a private IP address, but it does not start; it shows some errors in the error log, which I have shared below.
elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#Also Tried with Private IP Address network.host: 52.50.122.93
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["52.50.122.93", "127.0.0.1", "[::1]"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
elasticsearch.log
[2019-05-21T17:22:28,068][ERROR][o.e.b.Bootstrap ] [WIN-CQKBIA6F350] Exception
java.lang.IllegalStateException: failed to obtain node locks, tried [[C:\ELKStack\elasticsearch-7.1.0\data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:297) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) [elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.1.0.jar:7.1.0]
[2019-05-21T17:22:28,085][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [WIN-CQKBIA6F350] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[C:\ELKStack\elasticsearch-7.1.0\data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.1.0.jar:7.1.0]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[C:\ELKStack\elasticsearch-7.1.0\data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:297) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.1.0.jar:7.1.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.1.0.jar:7.1.0]
... 6 more
You need to set at least one of these values:
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
For example:
discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com
The error is clear in the log file:
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
You have to set the cluster.initial_master_nodes or discovery.seed_hosts setting.
Also don't forget to set node.name and cluster.name. You can also start ES and set the initial master nodes from the command line:
bin/elasticsearch -Ecluster.initial_master_nodes=master-a,master-b,master-c
https://www.elastic.co/guide/en/elasticsearch/reference/master/important-settings.html
https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-bootstrap-cluster.html
https://www.elastic.co/guide/en/elasticsearch/reference/master/discovery-settings.html
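A minimal single-node sketch of those settings in elasticsearch.yml (the cluster name, node name, and seed address are placeholders to adapt):
cluster.name: my-cluster
node.name: node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]   # must list the node.name values of the master-eligible nodes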

Not able to set up an Elasticsearch cluster. Getting RemoteTransportException

I am trying to establish an Elasticsearch cluster with 3 nodes.
My node settings:
cluster.name: ControlElasticSearch
node.name: VinodNode
node.master: true
node.data: true
network.host: 132.186.102.84
discovery.zen.ping.unicast.hosts: ["132.186.102.49","132.186.189.127","132.186.102.84"]
discovery.zen.minimum_master_nodes: 2
When I restart Elasticsearch, I am getting RemoteTransportException.
[2017-05-29T17:33:58,374][INFO ][o.e.d.z.ZenDiscovery ] [VinodNode] failed to send join request to master
[{DivyaNode}{LKhUGAEXTzO-ZcglRYC_yQ}{YO0qeSh8QMWclmYcSCNyMg}{132.186.102.49}{132.186.102.49:9300}],
reason [RemoteTransportException[[DivyaNode][132.186.102.49:9300][internal:discovery/zen/join]];
nested: ConnectTransportException[[VinodNode][127.0.0.1:9300] connect_timeout[30s]];
nested: IOException[Connection refused: no further information: 127.0.0.1/127.0.0.1:9300];
nested: IOException[Connection refused: no further information]; ]
May I please know what the reason might be? If I work with a single node, it works; when I try to set it up as a cluster, I face this issue.
I am able to ping the machines and all of them are on the same network.

Elasticsearch with Spring Boot does not work

In a Spring Boot project.
gradle:
dependencies {
    compile('org.springframework.boot:spring-boot-starter-data-elasticsearch')
    compile('io.searchbox:jest:2.0.3')
    runtime('net.java.dev.jna:jna')
}
config.yml:
spring:
  data:
    elasticsearch:
      cluster-nodes: 10.19.132.207:9300
      cluster-name: es
  elasticsearch:
    jest:
      uris: http://10.19.132.207:9200
      read-timeout: 10000
And my ES config:
cluster.name: es
node.name: node-1
network.host: 0.0.0.0
transport.tcp.port: 9300
http.port: 9200
When I want to save data to ES, the console prints:
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{10.19.132.207}{10.19.132.207:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:326) ~[elasticsearch-2.4.4.jar:2.4.4]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:223) ~[elasticsearch-2.4.4.jar:2.4.4]
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55) ~[elasticsearch-2.4.4.jar:2.4.4]
And my ES log prints:
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1323) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.2.2.jar:5.2.2]
How can I resolve this problem?
At first glance, it appears that neither 'spring-boot-starter-data-elasticsearch' nor 'jest 2.0.3' supports Elasticsearch 5. I'd try downgrading your Elasticsearch instance to 2.4.4 and see if that works.
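A quick way to confirm the mismatch is to ask the server for its version (a sketch using the host from the question):
curl -s http://10.19.132.207:9200
# version.number in the response (5.2.2 per the server log) must be compatible with the
# transport client pulled in on the classpath (2.4.4 per the stack trace)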

Titan: Connect to Elasticsearch from another machine

I'm using Titan 1.0.0 with Elasticsearch. I have Titan (with DynamoDB backend) working on an EC2 machine.
My main goal is to connect to that Titan instance from another EC2 machine using Java.
Unfortunately, I cannot connect to that machine.
My Titan instance is configured using a properties file. Here is a snippet of the Elasticsearch configuration:
# elasticsearch config
index.search.backend=elasticsearch
index.search.directory=/path/to/elasticsearch
index.search.elasticsearch.interface=NODE
index.search.elasticsearch.ext.node.data=true
index.search.elasticsearch.ext.node.client=false
index.search.elasticsearch.ext.node.local=false
This starts a full node holding data.
Now I want to connect to this node's Elasticsearch from another machine. My configuration file for this is:
storage.backend= com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
storage.hostname=10.0.0.249
storage.port=8182
index.search.backend=elasticsearch
index.search.elasticsearch.interface=TRANSPORT_CLIENT
index.search.elasticsearch.ext.node.data=false
index.search.elasticsearch.ext.node.client=true
index.search.hostname=10.0.0.249:9200
storage.dynamodb.client.endpoint=https://dynamodb.us-east-1.amazonaws.com
## DynamoDB client configuration: credentials
storage.dynamodb.client.credentials.class-name=com.amazonaws.auth.DefaultAWSCredentialsProviderChain
storage.dynamodb.client.credentials.constructor-args=
When I attempt to connect using Java through this line:
graph=TitanFactory.open("conf/dynamodb_remote.properties")
I get an error saying:
java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:473)
at com.thinkaurelius.titan.diskstorage.Backend.getIndexes(Backend.java:460)
at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:147)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1805)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:123)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:62)
at com.thinkaurelius.titan.core.TitanFactory$open.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:110)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:122)
at groovysh_evaluate.run(groovysh_evaluate:3)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:69)
at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:185)
at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:119)
at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:94)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1207)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:130)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:150)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:123)
at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:58)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1207)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:130)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:150)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:82)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:144)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:303)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:44)
... 44 more
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:279)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:198)
at org.elasticsearch.client.transport.support.InternalTransportClusterAdminClient.execute(InternalTransportClusterAdminClient.java:86)
at org.elasticsearch.client.support.AbstractClusterAdminClient.health(AbstractClusterAdminClient.java:127)
at org.elasticsearch.action.admin.cluster.health.ClusterHealthRequestBuilder.doExecute(ClusterHealthRequestBuilder.java:92)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:201)
... 49 more
I checked using wget, and it seems that ports 9200 and 9201 are working but 9300 is not. That's probably why the issue exists.
Any help?
A couple of suggestions based on the Titan configuration documentation (see the sketch after this list):
index.search.hostname should just be the hostname or IP address. It should not contain the port.
index.search.port: if you decide to specify it, use 9300 or your Elasticsearch's value for the transport TCP port.
index.search.elasticsearch.cluster-name should match the cluster.name in the Elasticsearch config.
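For example, a sketch of the remote-connection properties based on the question's setup (the IP is taken from the question; the cluster name is a placeholder you must adapt):
index.search.backend=elasticsearch
index.search.elasticsearch.interface=TRANSPORT_CLIENT
# hostname or IP only, no port
index.search.hostname=10.0.0.249
# the Elasticsearch transport TCP port
index.search.port=9300
# placeholder: must match cluster.name in the Elasticsearch config
index.search.elasticsearch.cluster-name=my-es-cluster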
Updated: This seemed to work for me. In $TITAN_HOME/conf/mytitan.properties, I configured the indexing backend like this:
storage.backend=berkeleyje
storage.directory=../db/mytitan/berkeleyje
index.search.backend=elasticsearch
index.search.index-name=mytitan
index.search.elasticsearch.interface=NODE
index.search.conf-file=mytitan-elasticsearch.yml
And then $TITAN_HOME/conf/mytitan-elasticsearch.yml looks exactly like a regular ES configuration:
cluster.name: TitanElasticsearch
network.name: u1401
network.host: 192.168.14.101
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.14.101"]
discovery.zen.minimum_master_nodes: 1
node.name: u1401
node.master: true
node.data: true
http.port: 9200
transport.tcp.port: 9300
path.data: ./db/mytitan/elasticsearch
When I attempted to specify these properties with the prefix index.search.elasticsearch.ext..., the transport TCP port didn't start, as you noted earlier.
