I want to run Elasticsearch on two servers in the same cluster.
The problem is that I can't connect the two servers in Elasticsearch; I can't get the two nodes into the same cluster.
Can somebody tell me whether my configuration in elasticsearch.yml is right?
Server 1:
cluster.name: MyData
node.name: Node_1
network.host: '192.160.122.4'
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.160.122.4","192.160.122.3"]
Server 2:
cluster.name: MyData
node.name: Node_2
network.host: '192.160.122.3'
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.160.122.4","192.160.122.3"]
What do I need to change?
Thanks.
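For reference, once both nodes come up cleanly, a quick way to confirm they have formed a single cluster is the _cat/nodes API; run it against either node and both should appear:
# Run against either node; the output should list Node_1 and Node_2.
curl 'http://192.160.122.4:9200/_cat/nodes?v'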
As for the logs:
Log of Server 1:
[2017-03-21T10:38:18,859][INFO ][o.e.n.Node ] [Node_1] initialized
[2017-03-21T10:38:18,906][INFO ][o.e.n.Node ] [Node_1] starting ...
[2017-03-21T10:38:19,764][INFO ][o.e.t.TransportService ] [Node_1] publish_address {192.160.122.4:9300}, bound_addresses {192.160.122.4:9300}
[2017-03-21T10:38:19,764][INFO ][o.e.b.BootstrapChecks ] [Node_1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-03-21T10:38:22,899][INFO ][o.e.c.s.ClusterService ] [Node_1] new_master {Master-Node_1}{4ZEftg6TRCOJqE0kEv-Mrg}{MYrWUFABQ5OOT0US73j13w}{192.160.122.4}{192.160.122.4:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-03-21T10:38:22,950][INFO ][o.e.h.HttpServer ] [Node_1] publish_address {192.160.122.4:9200}, bound_addresses {192.160.122.4:9200}
[2017-03-21T10:38:22,997][INFO ][o.e.n.Node ] [Node_1] started
[2017-03-21T10:38:23,948][INFO ][o.e.g.GatewayService ] [Node_1] recovered [5] indices into cluster_state
[2017-03-21T10:38:32,357][INFO ][o.e.c.r.a.AllocationService] [Node_1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[node_1][4]] ...]).
Log of Server 2:
[2017-03-21T11:55:57,277][INFO ][o.e.n.Node ] [Node_2] initialized
[2017-03-21T11:55:57,293][INFO ][o.e.n.Node ] [Node_2] starting ...
[2017-03-21T11:56:01,099][INFO ][o.e.t.TransportService ] [Node_2] publish_address {192.160.122.3:9300}, bound_addresses {192.160.122.3:9300}
[2017-03-21T11:56:01,115][INFO ][o.e.b.BootstrapChecks ] [Node_2] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-03-21T11:56:01,115][ERROR][o.e.b.Bootstrap ] [Node_2] node validation exception
bootstrap checks failed
JVM is using the client VM [Java HotSpot(TM) Client VM] but should be using a server VM for the best performance
[2017-03-21T11:56:01,146][INFO ][o.e.n.Node ] [Node_2] stopping ...
[2017-03-21T11:56:01,193][INFO ][o.e.n.Node ] [Node_2] stopped
[2017-03-21T11:56:01,193][INFO ][o.e.n.Node ] [Node_2] closing ...
[2017-03-21T11:56:01,240][INFO ][o.e.n.Node ] [Node_2] closed
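For what it's worth, the failed bootstrap check is what stops Node_2: once a node binds to a non-loopback address, the client-VM warning becomes a hard error. A minimal sketch of the usual fix, assuming a Linux-style JDK path (adjust to your system):
# The check requires a 64-bit server VM; see what the JVM reports.
java -version          # should mention "64-Bit Server VM"
# Point Elasticsearch at a 64-bit server JDK (this path is an example).
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64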
Related
I got an error like the one below on both servers. I set up two servers to run Elasticsearch; the config files are attached below. I am using Ubuntu 18.04.
The error says it failed to bind transport port 9093. I changed the default value. Is there anything else I need to change?
I am using OpenJDK version 8u181 and Elasticsearch version 6.4.3.
[2018-11-11T12:38:21,155][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2018-11-11T12:38:21,157][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2018-11-11T12:38:21,158][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
[2018-11-11T12:38:21,158][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2018-11-11T12:38:21,158][WARN ][o.e.b.JNANatives ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2018-11-11T12:38:21,304][INFO ][o.e.n.Node ] [linux-1] initializing ...
[2018-11-11T12:38:21,401][INFO ][o.e.e.NodeEnvironment ] [linux-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [24.6gb], net total_space [28.9gb], types [ext4]
[2018-11-11T12:38:21,401][INFO ][o.e.e.NodeEnvironment ] [linux-1] heap size [1.9gb], compressed ordinary object pointers [true]
[2018-11-11T12:38:21,402][INFO ][o.e.n.Node ] [linux-1] node name [linux-1], node ID [h03oGLGESzqeHmeNJLl0LQ]
[2018-11-11T12:38:21,402][INFO ][o.e.n.Node ] [linux-1] version[6.4.3], pid[11401], build[oss/deb/fe40335/2018-10-30T23:17:19.084789Z], OS[Linux/4.15.0-1030-azure/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_181/25.181-b13]
[2018-11-11T12:38:21,403][INFO ][o.e.n.Node ] [linux-1] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=deb]
[2018-11-11T12:38:22,242][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [aggs-matrix-stats]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [analysis-common]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [lang-expression]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [rank-eval]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [reindex]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [repository-url]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [transport-netty4]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] loaded module [tribe]
[2018-11-11T12:38:22,243][INFO ][o.e.p.PluginsService ] [linux-1] no plugins loaded
[2018-11-11T12:38:25,636][INFO ][o.e.d.DiscoveryModule ] [linux-1] using discovery type [zen]
[2018-11-11T12:38:26,235][INFO ][o.e.n.Node ] [linux-1] initialized
[2018-11-11T12:38:26,235][INFO ][o.e.n.Node ] [linux-1] starting ...
[2018-11-11T12:38:26,561][ERROR][o.e.b.Bootstrap ] [linux-1] Exception
org.elasticsearch.transport.BindTransportException: Failed to bind to [9093]
at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:821) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:786) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:134) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:66)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
at sun.nio.ch.Net.bind(Net.java:433) ~[?:?]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
[2018-11-11T12:38:26,581][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [linux-1] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: BindTransportException[Failed to bind to [9093]]; nested: BindException[Cannot assign requested address];
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.4.3.jar:6.4.3]
Caused by: org.elasticsearch.transport.BindTransportException: Failed to bind to [9093]
at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:821) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:786) ~[elasticsearch-6.4.3.jar:6.4.3]
... 6 more
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
at sun.nio.ch.Net.bind(Net.java:433) ~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
[2018-11-11T12:38:27,284][INFO ][o.e.n.Node ] [linux-1] stopping ...
[2018-11-11T12:38:27,288][INFO ][o.e.n.Node ] [linux-1] stopped
[2018-11-11T12:38:27,288][INFO ][o.e.n.Node ] [linux-1] closing ...
[2018-11-11T12:38:27,321][INFO ][o.e.n.Node ] [linux-1] closed
My config files for the two servers are below.
Linux-1 server:
cluster.name: linux-elk
node.name: linux-1
node.data: true
node.ingest: true
node.master: false
node.max_local_storage_nodes: 2
indices.query.bool.max_clause_count: 10000
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 24.95.245.313
http.port: 9092
transport.tcp.port: 9093
discovery.zen.ping.unicast.hosts:
- 178.51.190.47
discovery.zen.minimum_master_nodes: 1
Linux-2 server:
cluster.name: linux-elk
node.name: linux-2
node.data: true
node.ingest: false
node.master: true
node.max_local_storage_nodes: 2
indices.query.bool.max_clause_count: 10000
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 178.51.190.47
http.port: 9092
transport.tcp.port: 9093
discovery.zen.ping.unicast.hosts:
- 24.95.245.313
discovery.zen.minimum_master_nodes: 1
Please help. What do I need to do now?
Another application may already be bound to port 9093; you need to check that first.
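A quick way to check that on Linux is sketched below; note that "Cannot assign requested address" can also mean the address in network.host does not exist on the machine, so it is worth confirming the interface addresses too:
# Is anything already listening on the transport port?
sudo ss -tlnp | grep 9093
# Does the address configured in network.host actually exist on this host?
ip addr | grep 'inet '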
I created a customised Document model in the Wagtail admin with:
from wagtail.wagtaildocs.models import Document  # Wagtail 1.x path; in 2.x+ it is wagtail.documents.models

class CustomizedDocument(Document):
    ...
And I have updated settings.WAGTAILDOCS_DOCUMENT_MODEL.
However, I realised that my search on tags fails. I suspect it has something to do with Elasticsearch, but I'm really new to that.
Here is the trace output from Elasticsearch:
[INFO ][o.e.n.Node ] initialized
[INFO ][o.e.n.Node ] [oceLPbj] starting ...
[INFO ][o.e.t.TransportService ] [oceLPbj] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[INFO ][o.e.c.s.ClusterService ] [oceLPbj] new_master {oceLPbj}{oceLPbjSQ7ib2pTjx9gpPg}{WDDcwdlISnu-EW8mjQcOxQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[INFO ][o.e.h.n.Netty4HttpServerTransport] [oceLPbj] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[INFO ][o.e.n.Node ] [oceLPbj] started
[INFO ][o.e.g.GatewayService ] [oceLPbj] recovered [4] indices into cluster_state
[INFO ][o.e.c.r.a.AllocationService] [oceLPbj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[wagtail__wagtaildocs_document][1], [wagtail__wagtaildocs_document][3]] ...]).
[INFO ][o.e.c.m.MetaDataCreateIndexService] [oceLPbj] [wagtail__wagtaildocs_document_awv9f81] creating index, cause [api], templates [], shards [5]/[1], mappings []
[INFO ][o.e.c.m.MetaDataMappingService] [oceLPbj] [wagtail__wagtaildocs_document_awv9f81/CzKReMnsQ9qziLnOQQ5K3g] create_mapping [wagtaildocs_abstractdocument_wagtaildocs_document_distributor_portal_customizeddocument]
[INFO ][o.e.c.m.MetaDataMappingService] [oceLPbj] [wagtail__wagtaildocs_document_awv9f81/CzKReMnsQ9qziLnOQQ5K3g] create_mapping [wagtaildocs_abstractdocument_wagtaildocs_document]
Could anyone help me find out what's going on here? Any help is appreciated; let me know what other information I should provide.
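Not an answer, but two checks that may help narrow this down, assuming a local node on the default port (the index name is taken from the log above, and update_index is Wagtail's standard reindexing command):
# Inspect the document index and its mapping.
curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/wagtail__wagtaildocs_document/_mapping?pretty'
# Rebuild Wagtail's search index after changing the document model.
python manage.py update_index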
I recently installed Elasticsearch 5.3 on my Windows 10 machine, and after following the installation process, ES does not respond when I access http://localhost:9200 in the browser or from Chrome's Sense.
However, when I use curl 'http://localhost:9200' on the command line, it returns this:
{
"name" : "test",
"cluster_name" : "my-application",
"cluster_uuid" : "9-wvh6UXTHSKmkdpblqyyA",
"version" : {
"number" : "5.3.0",
"build_hash" : "3adb13b",
"build_date" : "2017-03-23T03:31:50.652Z",
"build_snapshot" : false,
"lucene_version" : "6.4.1"
},
"tagline" : "You Know, for Search"
}
which indicates that ES is running and configured correctly(?).
I also tried setting network.host in elasticsearch.yml to 0.0.0.0, 127.0.0.1, and ::, still with no effect.
Here is the latest log info:
[2017-04-07T18:21:06,481][INFO ][o.e.n.Node ] [test] initializing ...
[2017-04-07T18:21:06,633][INFO ][o.e.e.NodeEnvironment ] [test] using [1] data paths, mounts [[OSDisk (C:)]], net usable_space [129.2gb], net total_space [232.3gb], spins? [unknown], types [NTFS]
[2017-04-07T18:21:06,634][INFO ][o.e.e.NodeEnvironment ] [test] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-04-07T18:21:06,681][INFO ][o.e.n.Node ] [test] node name [test], node ID [Ab5g3zN0S7qNOuP_asF-iQ]
[2017-04-07T18:21:06,682][INFO ][o.e.n.Node ] [test] version[5.3.0], pid[312], build[3adb13b/2017-03-23T03:31:50.652Z], OS[Windows 10/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-04-07T18:21:08,288][INFO ][o.e.p.PluginsService ] [test] loaded module [aggs-matrix-stats]
[2017-04-07T18:21:08,289][INFO ][o.e.p.PluginsService ] [test] loaded module [ingest-common]
[2017-04-07T18:21:08,291][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-expression]
[2017-04-07T18:21:08,291][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-groovy]
[2017-04-07T18:21:08,292][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-mustache]
[2017-04-07T18:21:08,294][INFO ][o.e.p.PluginsService ] [test] loaded module [lang-painless]
[2017-04-07T18:21:08,295][INFO ][o.e.p.PluginsService ] [test] loaded module [percolator]
[2017-04-07T18:21:08,296][INFO ][o.e.p.PluginsService ] [test] loaded module [reindex]
[2017-04-07T18:21:08,296][INFO ][o.e.p.PluginsService ] [test] loaded module [transport-netty3]
[2017-04-07T18:21:08,298][INFO ][o.e.p.PluginsService ] [test] loaded module [transport-netty4]
[2017-04-07T18:21:08,299][INFO ][o.e.p.PluginsService ] [test] loaded plugin [ltr-query]
[2017-04-07T18:21:11,330][INFO ][o.e.n.Node ] [test] initialized
[2017-04-07T18:21:11,330][INFO ][o.e.n.Node ] [test] starting ...
[2017-04-07T18:21:11,612][INFO ][o.e.t.TransportService ] [test] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-04-07T18:21:14,657][INFO ][o.e.c.s.ClusterService ] [test] new_master {test}{Ab5g3zN0S7qNOuP_asF-iQ}{XYHTUZ8AQN6kYrEPLChFNg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-07T18:21:14,826][INFO ][o.e.h.n.Netty4HttpServerTransport] [test] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2017-04-07T18:21:14,826][INFO ][o.e.n.Node ] [test] started
[2017-04-07T18:21:14,982][INFO ][o.e.g.GatewayService ] [test] recovered [1] indices into cluster_state
[2017-04-07T18:21:15,510][INFO ][o.e.c.r.a.AllocationService] [test] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[tmdb][0]] ...]).
Got it! Referring to the Network Settings documentation, I tried running it with elasticsearch.bat -E network.host=_local_, and it worked!
I also changed the configuration file (./config/elasticsearch.yml): just uncomment the network.host line and change it to network.host: _local_. After that, I simply ran elasticsearch.bat directly.
Or, to put it simply, just leave the network.host line commented out.
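One caveat worth adding: _local_ binds the node to loopback only, so it is reachable from this machine alone. If other machines on the LAN ever need to reach the node, the _site_ special value binds to the machine's site-local address instead:
elasticsearch.bat -E network.host=_local_
elasticsearch.bat -E network.host=_site_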
Hi, I'm running Elasticsearch 1.6.0 with the AWS plugin 2.6.0 on Windows 2008 in Amazon.
I have the AWS plugin set up and I don't get any exceptions in the logs, but the nodes can't seem to discover each other.
bootstrap.mlockall: true
cluster.name: my-cluster
node.name: "ES MASTER 01"
node.data: false
node.master: true
plugin.mandatory: "cloud-aws"
cloud.aws.access_key: "AK...Z7Q"
cloud.aws.secret_key: "gKW...nAO"
cloud.aws.region: "us-east"
discovery.zen.minimum_master_nodes: 1
discovery.type: "ec2"
discovery.ec2.groups: "Elastic Search"
discovery.ec2.ping_timeout: "30s"
discovery.ec2.availability_zones: "us-east-1a"
discovery.zen.ping.multicast.enabled: false
Logs:
[2015-07-13 15:02:19,346][INFO ][node ] [ES MASTER 01] version[1.6.0], pid[2532], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-07-13 15:02:19,346][INFO ][node ] [ES MASTER 01] initializing ...
[2015-07-13 15:02:19,378][INFO ][plugins ] [ES MASTER 01] loaded [cloud-aws], sites []
[2015-07-13 15:02:19,440][INFO ][env ] [ES MASTER 01] using [1] data paths, mounts [[(C:)]], net usable_space [6.8gb], net total_space [29.9gb], types [NTFS]
[2015-07-13 15:02:26,461][INFO ][node ] [ES MASTER 01] initialized
[2015-07-13 15:02:26,461][INFO ][node ] [ES MASTER 01] starting ...
[2015-07-13 15:02:26,851][INFO ][transport ] [ES MASTER 01] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.30.0.123:9300]}
[2015-07-13 15:02:26,866][INFO ][discovery ] [ES MASTER 01] my-cluster/SwhSDhiDQzq4pM8jkhIuzw
[2015-07-13 15:02:56,884][WARN ][discovery ] [ES MASTER 01] waited for 30s and no initial state was set by the discovery
[2015-07-13 15:02:56,962][INFO ][http ] [ES MASTER 01] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.30.0.123:9200]}
[2015-07-13 15:02:56,962][INFO ][node ] [ES MASTER 01] started
[2015-07-13 15:03:13,455][INFO ][cluster.service ] [ES MASTER 01] new_master [ES MASTER 01][SwhSDhiDQzq4pM8jkhIuzw][WIN-3Q4EH3B8H1O][inet[/172.30.0.123:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-07-13 15:03:13,517][INFO ][gateway ] [ES MASTER 01] recovered [0] indices into cluster_state
It can certainly work with private IPs, but only if your node instances are able to query EC2 instance information within the same VPC in order to find out about the cluster they should join.
You can grant this discovery permission as a policy like the following and apply it to your IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "whatever",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:DescribeSecurityGroups",
"ec2:DescribeTags"
],
"Resource": [
"*"
]
}
]
}
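If it helps, this is how you might attach that policy with the AWS CLI (the role and policy names here are placeholders; put-role-policy adds it as an inline policy on the role):
# Save the JSON above as es-discovery-policy.json, then:
aws iam put-role-policy \
  --role-name my-es-node-role \
  --policy-name es-ec2-discovery \
  --policy-document file://es-discovery-policy.json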
ES stopped responding to any requests after losing an index (for an unknown reason). After a server restart ES tries to recover the index, but as soon as it has read the entire index (only about 200 MB) ES stops responding. The last error I saw was SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed]. I'm running ES on a single-node virtual server. The index has only one shard with about 3 million documents (200 MB).
How can I recover this index?
Here's the ES log:
[2014-06-21 18:43:15,337][WARN ][bootstrap ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-06-21 18:43:15,554][WARN ][common.jna ] Unknown mlockall error 0
[2014-06-21 18:43:15,759][INFO ][node ] [Crimson Cowl] version[1.1.0], pid[1031], build[2181e11/2014-03-25T15:59:51Z]
[2014-06-21 18:43:15,759][INFO ][node ] [Crimson Cowl] initializing ...
[2014-06-21 18:43:15,881][INFO ][plugins ] [Crimson Cowl] loaded [], sites [head]
[2014-06-21 18:43:21,957][INFO ][node ] [Crimson Cowl] initialized
[2014-06-21 18:43:21,958][INFO ][node ] [Crimson Cowl] starting ...
[2014-06-21 18:43:22,275][INFO ][transport ] [Crimson Cowl] bound_address {inet[/10.0.0.13:9300]}, publish_address {inet[/10.0.0.13:9300]}
[2014-06-21 18:43:25,385][INFO ][cluster.service ] [Crimson Cowl] new_master [Crimson Cowl][UJNl8hGgRzeFo-DQ3vk2nA][esubuntu][inet[/10.0.0.13:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-21 18:43:25,438][INFO ][discovery ] [Crimson Cowl] elasticsearch/UJNl8hGgRzeFo-DQ3vk2nA
[2014-06-21 18:43:25,476][INFO ][http ] [Crimson Cowl] bound_address {inet[/10.0.0.13:9200]}, publish_address {inet[/10.0.0.13:9200]}
[2014-06-21 18:43:26,348][INFO ][gateway ] [Crimson Cowl] recovered [2] indices into cluster_state
[2014-06-21 18:43:26,349][INFO ][node ] [Crimson Cowl] started
After deleting another index on the same node, ES responds to requests but still fails to recover the index. Here's the log:
[2014-06-22 08:00:06,651][WARN ][bootstrap ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-06-22 08:00:06,699][WARN ][common.jna ] Unknown mlockall error 0
[2014-06-22 08:00:06,774][INFO ][node ] [Baron Macabre] version[1.1.0], pid[2035], build[2181e11/2014-03-25T15:59:51Z]
[2014-06-22 08:00:06,774][INFO ][node ] [Baron Macabre] initializing ...
[2014-06-22 08:00:06,779][INFO ][plugins ] [Baron Macabre] loaded [], sites [head]
[2014-06-22 08:00:08,766][INFO ][node ] [Baron Macabre] initialized
[2014-06-22 08:00:08,767][INFO ][node ] [Baron Macabre] starting ...
[2014-06-22 08:00:08,824][INFO ][transport ] [Baron Macabre] bound_address {inet[/10.0.0.3:9300]}, publish_address {inet[/10.0.0.3:9300]}
[2014-06-22 08:00:11,890][INFO ][cluster.service ] [Baron Macabre] new_master [Baron Macabre][eWDP4ZSXSGuASJLJ2an1nQ][esubuntu][inet[/10.0.0.3:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-22 08:00:11,975][INFO ][discovery ] [Baron Macabre] elasticsearch/eWDP4ZSXSGuASJLJ2an1nQ
[2014-06-22 08:00:12,000][INFO ][http ] [Baron Macabre] bound_address {inet[/10.0.0.3:9200]}, publish_address {inet[/10.0.0.3:9200]}
[2014-06-22 08:00:12,645][INFO ][gateway ] [Baron Macabre] recovered [1] indices into cluster_state
[2014-06-22 08:00:12,647][INFO ][node ] [Baron Macabre] started
[2014-06-22 08:05:01,284][WARN ][index.engine.internal ] [Baron Macabre] [wordstat][0] failed engine
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.index.ParallelPostingsArray.<init>(ParallelPostingsArray.java:35)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:254)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:279)
at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:307)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:324)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185)
at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:171)
at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:253)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:453)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1529)
at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:532)
at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:470)
at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:744)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:228)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2014-06-22 08:05:02,168][WARN ][cluster.action.shard ] [Baron Macabre] [wordstat][0] sending failed shard for [wordstat][0], node[eWDP4ZSXSGuASJLJ2an1nQ], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
[2014-06-22 08:05:02,169][WARN ][cluster.action.shard ] [Baron Macabre] [wordstat][0] received shard failed for [wordstat][0], node[eWDP4ZSXSGuASJLJ2an1nQ], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
[2014-06-22 08:53:22,253][INFO ][node ] [Baron Macabre] stopping ...
[2014-06-22 08:53:22,267][INFO ][node ] [Baron Macabre] stopped
[2014-06-22 08:53:22,267][INFO ][node ] [Baron Macabre] closing ...
[2014-06-22 08:53:22,272][INFO ][node ] [Baron Macabre] closed
[2014-06-22 08:53:23,667][WARN ][bootstrap ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-06-22 08:53:23,708][WARN ][common.jna ] Unknown mlockall error 0
[2014-06-22 08:53:23,777][INFO ][node ] [Living Totem] version[1.1.0], pid[2137], build[2181e11/2014-03-25T15:59:51Z]
[2014-06-22 08:53:23,777][INFO ][node ] [Living Totem] initializing ...
[2014-06-22 08:53:23,781][INFO ][plugins ] [Living Totem] loaded [], sites [head]
[2014-06-22 08:53:25,828][INFO ][node ] [Living Totem] initialized
[2014-06-22 08:53:25,828][INFO ][node ] [Living Totem] starting ...
[2014-06-22 08:53:25,885][INFO ][transport ] [Living Totem] bound_address {inet[/10.0.0.3:9300]}, publish_address {inet[/10.0.0.3:9300]}
[2014-06-22 08:53:28,913][INFO ][cluster.service ] [Living Totem] new_master [Living Totem][D-eoRm7fSrCU_dTw_NQipA][esubuntu][inet[/10.0.0.3:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-22 08:53:28,939][INFO ][discovery ] [Living Totem] elasticsearch/D-eoRm7fSrCU_dTw_NQipA
[2014-06-22 08:53:28,964][INFO ][http ] [Living Totem] bound_address {inet[/10.0.0.3:9200]}, publish_address {inet[/10.0.0.3:9200]}
[2014-06-22 08:53:29,433][INFO ][gateway ] [Living Totem] recovered [1] indices into cluster_state
[2014-06-22 08:53:29,433][INFO ][node ] [Living Totem] started
[2014-06-22 08:58:05,268][WARN ][index.engine.internal ] [Living Totem] [wordstat][0] failed engine
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:261)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:279)
at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:307)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:324)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185)
at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:171)
at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:253)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:453)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1529)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1199)
at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:523)
at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:470)
at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:744)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:228)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2014-06-22 08:58:06,046][WARN ][cluster.action.shard ] [Living Totem] [wordstat][0] sending failed shard for [wordstat][0], node[D-eoRm7fSrCU_dTw_NQipA], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
[2014-06-22 08:58:06,047][WARN ][cluster.action.shard ] [Living Totem] [wordstat][0] received shard failed for [wordstat][0], node[D-eoRm7fSrCU_dTw_NQipA], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
In order to recover your Elasticsearch cluster you will need to allocate more memory to the heap. As you are running on a fairly small instance this may be a bit challenging, but here is what you will need to do:
1. Change the configuration to allocate more memory to the heap. It's not clear what your current settings are, but there are several ways to boost this; the easiest is to set the environment variable ES_HEAP_SIZE (see the sketch after this list). I'd start with 1 GB, try that, and then boost it in small increments, as you are already near the limit of what you can do with a 1.6 GB memory instance. Alternatively, you can change the files used to launch Elasticsearch; where they live depends on how you have it installed, but they should be in the bin directory underneath the Elasticsearch home directory. For a Linux installation the files are elasticsearch and elasticsearch.in.sh.
2. Move to a larger instance. This would be much easier to recover from on a system with more memory, so if the above step does not work, you could copy all your files to another, larger instance and try the above steps again with a larger heap size.
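A minimal sketch of the first option (the 1g value is a starting point, not a recommendation):
# ES_HEAP_SIZE is read by the ES 1.x launch scripts and sets -Xms/-Xmx together.
export ES_HEAP_SIZE=1g
bin/elasticsearch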
What has happened here is that your server has become overloaded; possibly there is a bad sector. What you need to do is delete your existing indices and re-index them.
On Linux, Elasticsearch's index files are kept in /usr/local/var/elasticsearch/ (the exact path depends on how it was installed).
Delete this folder, then repopulate your index.
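A minimal sketch of that cleanup, assuming the path above; back up first, since this wipes all local index data:
# Copy the data aside before deleting anything (backup path is an example).
cp -a /usr/local/var/elasticsearch /tmp/elasticsearch-backup
rm -rf /usr/local/var/elasticsearch/*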