Elasticsearch (1.6) not starting on Windows 7 with jdk1.7.0_71 - elasticsearch

I am new to Elasticsearch, and when I start it I keep getting the warning below. Can anybody help? What needs to be changed?
[2015-06-25 13:04:15,143][WARN ][bootstrap ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2015-06-25 13:04:15,244][INFO ][node ] [Kubik] version[1.6.0], pid[20068], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-06-25 13:04:15,244][INFO ][node ] [Kubik] initializing ...
[2015-06-25 13:04:15,247][INFO ][plugins ] [Kubik] loaded [], sites []
[2015-06-25 13:04:15,277][INFO ][env ] [Kubik] using [1] data paths, mounts [[System (C:)]], net usable_space [109.3gb], net total_space [238.1gb], types [NTFS]
[2015-06-25 13:04:18,034][INFO ][node ] [Kubik] initialized
[2015-06-25 13:04:18,034][INFO ][node ] [Kubik] starting ...
[2015-06-25 13:04:18,317][INFO ][transport ] [Kubik] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.154.249.29:9300]}
[2015-06-25 13:04:18,669][INFO ][discovery ] [Kubik] elasticsearch/ZWZR28dARWqqEf8FOn0Hgw
[2015-06-25 13:04:18,688][WARN ][transport.netty ] [Kubik] exception caught on transport layer [[id: 0xdfa8e460]], closing connection
java.net.SocketException: Permission denied: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Resolved my issue. Posting this as an answer so that it will be helpful for others:
Go to the Elasticsearch installation directory, open the file "elasticsearch.yml" in the "config" folder, and add or modify the 'network.host' property, setting it to "127.0.0.1".
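For reference, a minimal sketch of that change in config/elasticsearch.yml (binding only to loopback):

```yaml
# config/elasticsearch.yml
# Bind and publish only on the loopback interface.
network.host: 127.0.0.1
```

Binding to 127.0.0.1 avoids permission and firewall issues on the machine's external interface, at the cost of making the node unreachable from other hosts.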

Related

Why do different containers of the same Elasticsearch image exit on Docker?

I am trying to run the same Elasticsearch image twice, but one container exits; only one Elasticsearch container keeps running. Any solution or suggestion would be helpful. I ran it with the following command:
docker run -d my_es:v3 elasticsearch
Below is the log for the container that exits.
root@ubuntu-512mb-nyc3-01:~/AnyElastic# docker logs e2cbd47927af
[2016-06-16 21:36:12,339][INFO ][node ] [Angela Del Toro] version[2.3.3], pid[1], build[218bdf1/2016-05-17T15:40:04Z]
[2016-06-16 21:36:12,343][INFO ][node ] [Angela Del Toro] initializing ...
[2016-06-16 21:36:14,014][INFO ][plugins ] [Angela Del Toro] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-06-16 21:36:14,053][INFO ][env ] [Angela Del Toro] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/vda1)]], net usable_space [13.9gb], net total_space [19.5gb], spins? [possibly], types [ext4]
[2016-06-16 21:36:14,053][INFO ][env ] [Angela Del Toro] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-06-16 21:36:20,241][INFO ][node ] [Angela Del Toro] initialized
[2016-06-16 21:36:20,241][INFO ][node ] [Angela Del Toro] starting ...
[2016-06-16 21:36:20,400][INFO ][transport ] [Angela Del Toro] publish_address {172.17.0.3:9300}, bound_addresses {[::]:9300}
[2016-06-16 21:36:20,407][INFO ][discovery ] [Angela Del Toro] elasticsearch/ketVVDMtQCeBwj-x64E5yQ
[2016-06-16 21:36:23,565][INFO ][cluster.service ] [Angela Del Toro] new_master {Angela Del Toro}{ketVVDMtQCeBwj-x64E5yQ}{172.17.0.3}{172.17.0.3:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-16 21:36:23,605][INFO ][http ] [Angela Del Toro] publish_address {172.17.0.3:9200}, bound_addresses {[::]:9200}
[2016-06-16 21:36:23,607][INFO ][node ] [Angela Del Toro] started
[2016-06-16 21:36:23,670][INFO ][gateway ] [Angela Del Toro] recovered [0] indices into cluster_state
Yes, looking at the logs, memory is the issue: there is only 512 MB of RAM on the Linux box, and many containers were running at the time, so the other Elasticsearch container would exit. This is something nobody seems to have encountered before. Conclusion: the port is not the issue; you can run the same image many times, provided you have sufficient RAM to run those Docker containers.
I think my_es:v3 is the problem. If you are trying to name your container, use the --name option. Also, you can't use ':' in the name.
docker run -d --name my_es elasticsearch
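As a quick sketch of why the colon is rejected: Docker container names must match the pattern `[a-zA-Z0-9][a-zA-Z0-9_.-]*` (first character alphanumeric, then letters, digits, underscore, period, or hyphen). The helper below is illustrative; the regex mirrors Docker's documented naming rule:

```python
import re

# Docker's documented container-name rule: first char alphanumeric,
# then letters, digits, underscore, period, or hyphen.
NAME_RE = re.compile(r"^[a-zA-Z0-9][a-zA-Z0-9_.-]*$")

def is_valid_container_name(name):
    """Return True if `name` would be accepted by `docker run --name`."""
    return bool(NAME_RE.match(name))

print(is_valid_container_name("my_es"))     # True
print(is_valid_container_name("my_es:v3"))  # False: ':' is not allowed
```

So `my_es` or `my-es-v3` would work as a name, while `my_es:v3` is only valid as an image tag, not a container name.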

index names on MS Windows not accepted, Linux OK

I have an elasticsearch cluster running on Linux machines without serious problems. I now want to extend it to MS Windows but hit an issue with the names of the indexes, which are not accepted. The log is quite explicit:
[2015-02-18 10:18:39,071][WARN ][common.jna ] unable to link C library. native methods (mlockall) will be disabled.
[2015-02-18 10:18:39,139][INFO ][node ] [lenov272dsy] version[1.4.3], pid[1276], build[36a29a7/2015-02-11T14:23:15Z]
[2015-02-18 10:18:39,139][INFO ][node ] [lenov272dsy] initializing ...
[2015-02-18 10:18:39,142][INFO ][plugins ] [lenov272dsy] loaded [], sites []
[2015-02-18 10:18:41,920][INFO ][node ] [lenov272dsy] initialized
[2015-02-18 10:18:41,920][INFO ][node ] [lenov272dsy] starting ...
[2015-02-18 10:18:42,104][INFO ][transport ] [lenov272dsy] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.233.85.45:9300]}
[2015-02-18 10:18:42,111][INFO ][discovery ] [lenov272dsy] security/6CeEuO01SeaL0kZuezwoSg
[2015-02-18 10:18:45,207][INFO ][cluster.service ] [lenov272dsy] detected_master [eu3][ZsJ2f1gcQpSOlWriWy19-g][eu3][inet[/10.81.163.112:9300]], added {[eu5][nEUNDAc0S4ytvtntjvgIXA][eu5.security.example.com][inet[/10.81.147.186:9300]],[eu4][--PlaWk9Tl2pF8XSHJulDA][eu4.security.example.com][inet[/10.81.163.129:9300]],[eu3][ZsJ2f1gcQpSOlWriWy19-g][eu3][inet[/10.81.163.112:9300]],}, reason: zen-disco-receive(from master [[eu3][ZsJ2f1gcQpSOlWriWy19-g][eu3][inet[/10.81.163.112:9300]]])
[2015-02-18 10:18:45,322][INFO ][http ] [lenov272dsy] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.233.85.45:9200]}
[2015-02-18 10:18:45,323][INFO ][node ] [lenov272dsy] started
[2015-02-18 10:18:53,009][WARN ][indices.cluster ] [lenov272dsy] [nessus_scan_recurrent-internet.2015-01-15t00:00:59+00:00.65731fa3-2635-a330-2a7b-00e3ea775493c5ddb3b88c869b73.getnessuscans.nessus][4] failed to create shard
org.elasticsearch.index.shard.IndexShardCreationException: [nessus_scan_recurrent-internet.2015-01-15t00:00:59+00:00.65731fa3-2635-a330-2a7b-00e3ea775493c5ddb3b88c869b73.getnessuscans.nessus][4] failed to create shard
at org.elasticsearch.index.service.InternalIndexService.createShard(InternalIndexService.java:360)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:678)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:579)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:431)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:184)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: The filename, directory name, or volume label syntax is incorrect
at java.io.WinNTFileSystem.canonicalize0(Native Method)
at java.io.WinNTFileSystem.canonicalize(WinNTFileSystem.java:428)
at java.io.File.getCanonicalPath(File.java:618)
at org.apache.lucene.store.FSDirectory.getCanonicalPath(FSDirectory.java:129)
at org.apache.lucene.store.FSDirectory.<init>(FSDirectory.java:143)
at org.apache.lucene.store.MMapDirectory.<init>(MMapDirectory.java:132)
at org.apache.lucene.store.MMapDirectory.<init>(MMapDirectory.java:99)
at org.elasticsearch.index.store.fs.MmapFsDirectoryService.newFSDirectory(MmapFsDirectoryService.java:45)
at org.elasticsearch.index.store.fs.FsDirectoryService.build(FsDirectoryService.java:129)
at org.elasticsearch.index.store.distributor.AbstractDistributor.<init>(AbstractDistributor.java:35)
at org.elasticsearch.index.store.distributor.LeastUsedDistributor.<init>(LeastUsedDistributor.java:36)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:131)
at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)
at org.elasticsearch.index.service.InternalIndexService.createShard(InternalIndexService.java:358)
... 9 more
This is repeated for other similar indexes; the key part is:
Caused by: java.io.IOException: The filename, directory name, or volume label syntax is incorrect
I had a look at how indexes are stored on the Linux boxes and there are indeed directories named after them.
Short of renaming the indexes, is there a way to make them compatible with a Windows install of elasticsearch? (I looked at the configuration but did not find anything -- my personal, uninformed and certainly naive opinion is that there should not be such OS dependency and something like a hash of the index name should be used instead)
It may sound silly, but double-check that Elasticsearch has sufficient permissions to create files in its installation folder.
After side discussions with other Elasticsearch users and further tests, the problem is indeed that the index name must use characters that are valid in a filename on the given OS (indexes are stored in files and folders named after the index name).
It is therefore better to use a safe set of characters that are universally accepted in filenames across OSes (letters, digits, underscore).
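One way to act on that advice is to pre-check index names against exactly that safe character set before creating them. The sketch below is illustrative (the function name and whitelist are mine, and deliberately stricter than what Elasticsearch itself accepts on Linux):

```python
import string

# Conservative whitelist: lowercase letters, digits, underscore.
# Stricter than Elasticsearch's own rules, but safe as a filename on
# Linux, Windows, and macOS alike.
SAFE_CHARS = set(string.ascii_lowercase + string.digits + "_")

def is_portable_index_name(name):
    """True if `name` uses only characters safe in filenames on any OS."""
    return bool(name) and all(c in SAFE_CHARS for c in name)

print(is_portable_index_name("nessus_scans_2015"))               # True
print(is_portable_index_name("scan:2015-01-15t00:00:59+00:00"))  # False
```

The second example mirrors the failing index name from the logs above: the ':' and '+' characters are legal on ext4 but rejected by Windows filenames.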

Elasticsearch fails to start

I'm trying to implement a 2-node ES cluster using Amazon EC2 instances. After everything is set up and I try to start ES, it fails to start. Below are the config files:
/etc/elasticsearch/elasticsearch.yml - http://pastebin.com/3Q1qNqmZ
/etc/init.d/elasticsearch - http://pastebin.com/f3aJyurR
Below is the content of /var/log/elasticsearch/es-cluster.log:
[2014-06-08 07:06:01,761][WARN ][common.jna ] Unknown mlockall error 0
[2014-06-08 07:06:02,095][INFO ][node ] [logstash] version[0.90.13], pid[29666], build[249c9c5/2014-03-25T15:27:12Z]
[2014-06-08 07:06:02,095][INFO ][node ] [logstash] initializing ...
[2014-06-08 07:06:02,108][INFO ][plugins ] [logstash] loaded [], sites []
[2014-06-08 07:06:07,504][INFO ][node ] [logstash] initialized
[2014-06-08 07:06:07,510][INFO ][node ] [logstash] starting ...
[2014-06-08 07:06:07,646][INFO ][transport ] [logstash] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.164.27.207:9300]}
[2014-06-08 07:06:12,177][INFO ][cluster.service ] [logstash] new_master [logstash][vCS_3LzESEKSN-thhGWeGA][inet[/<an_ip_is_here>:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-08 07:06:12,208][INFO ][discovery ] [logstash] es-cluster/vCS_3LzESEKSN-thhGWeGA
[2014-06-08 07:06:12,334][INFO ][http ] [logstash] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/<an_ip_is_here>:9200]}
[2014-06-08 07:06:12,335][INFO ][node ] [logstash] started
[2014-06-08 07:06:12,379][INFO ][gateway ] [logstash] recovered [0] indices into cluster_state
I see several things that you should correct in your configuration files.
1) You need different node names. You are using the same config file for both nodes, which you do not want to do if you are setting the node name explicitly (node.name: "logstash"). Either create separate configuration files with different node.name entries, or comment it out and let ES auto-assign the node name.
2) The mlockall setting is throwing an error. I would not set bootstrap.mlockall: true until you've first gotten ES to run without it and then spent a little time configuring Linux to support it. It can cause problems at startup:
Warning
mlockall might cause the JVM or shell session to exit if it tries to
allocate more memory than is available!
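A cautious starting point applying both suggestions might look like this in each node's elasticsearch.yml (node names below are illustrative):

```yaml
# node 1's elasticsearch.yml
cluster.name: es-cluster
node.name: "logstash-1"
# Leave memory locking off until the node starts cleanly without it:
# bootstrap.mlockall: true

# node 2's elasticsearch.yml would be identical except for:
# node.name: "logstash-2"
```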
I'd check out the documentation on the configuration variables and be careful about making too many adjustments right out of the gate.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-service.html
If you do want to make memory adjustments to ES, this previous Stack Overflow article should be helpful:
How to change Elasticsearch max memory size

Install elasticsearch on OpenShift

I installed a prebuilt Elasticsearch 1.0.0 by following this tutorial. When I start Elasticsearch I get the following error message. Should I try an older version of ES, or how can I fix this issue?
[elastic-dataportal.rhcloud.com elasticsearch-1.0.0]\> ./bin/elasticsearch
[2014-02-25 10:02:18,757][INFO ][node ] [Desmond Pitt] version[1.0.0], pid[203443], build[a46900e/2014-02-12T16:18:34Z]
[2014-02-25 10:02:18,764][INFO ][node ] [Desmond Pitt] initializing ...
[2014-02-25 10:02:18,780][INFO ][plugins ] [Desmond Pitt] loaded [], sites []
OpenJDK Server VM warning: You have loaded library /var/lib/openshift/430c93b1500446b03a00005c/app-root/data/elasticsearch-1.0.0/lib/sigar/libsigar-x86-linux.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
[2014-02-25 10:02:32,198][INFO ][node ] [Desmond Pitt] initialized
[2014-02-25 10:02:32,205][INFO ][node ] [Desmond Pitt] starting ...
[2014-02-25 10:02:32,813][INFO ][transport ] [Desmond Pitt] bound_address {inet[/127.8.212.129:3306]}, publish_address {inet[/127.8.212.129:3306]}
[2014-02-25 10:02:35,949][INFO ][cluster.service ] [Desmond Pitt] new_master [Desmond Pitt][_bWO_h9ETTWrMNr7x_yALg][ex-std-node134.prod.rhcloud.com][inet[/127.8.212.129:3306]], reason: zen-disco-join (elected_as_master)
[2014-02-25 10:02:36,167][INFO ][discovery ] [Desmond Pitt] elasticsearch/_bWO_h9ETTWrMNr7x_yALg
{1.0.0}: Startup Failed ...
- BindHttpException[Failed to bind to [8080]]
ChannelException[Failed to bind to: /127.8.212.129:8080]
BindException[Address already in use]
You first have to stop the running demo application, which is already bound to port 8080. This can be done with this command:
ctl_app stop
After running this command you will be able to start Elasticsearch on port 8080. However, this is not recommended for production environments.
I would recommend installing ElasticSearch with this cartridge: https://github.com/ncdc/openshift-elasticsearch-cartridge
It will save you the headaches of manual custom configurations.
You are trying to assign ES to port 8080, which is already taken. The culprit in the config is http.port: ${OPENSHIFT_DIY_PORT}. Just leave both port settings out of the config, or assign the environment variable some other port. The default ports for ES are 9200 for HTTP and 9300 for transport.
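Concretely, these are the two settings in question; either delete them from elasticsearch.yml to fall back to the defaults, or point them at ports that are actually free:

```yaml
# elasticsearch.yml
# Remove these lines to use the defaults shown, or change them
# to any free ports:
http.port: 9200
transport.tcp.port: 9300
```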

elasticsearch auto-discovery rackspace not working

I'm trying to use ElasticSearch for an application I'm building, and I am hosting it on Rackspace servers. However, the auto-discovery feature is not working. I thought that this was because auto-discovery uses broadcast and multicast to find the other nodes with the matching cluster name. I found this article saying that Rackspace now supports multicast and broadcast with their new Cloud Networks feature. Following the article's instructions, I created a network and added it to both of the servers the nodes were running on. I then tried restarting ElasticSearch on both nodes, but they didn't find each other, and each declared itself the master (here's the output from the logs):
[2013-04-03 22:14:03,516][INFO ][node ] [Nemesis] {0.20.6}[2752]: initializing ...
[2013-04-03 22:14:03,530][INFO ][plugins ] [Nemesis] loaded [], sites []
[2013-04-03 22:14:07,873][INFO ][node ] [Nemesis] {0.20.6}[2752]: initialized
[2013-04-03 22:14:07,873][INFO ][node ] [Nemesis] {0.20.6}[2752]: starting ...
[2013-04-03 22:14:08,052][INFO ][transport ] [Nemesis] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/166.78.177.149:9300]}
[2013-04-03 22:14:11,117][INFO ][cluster.service ] [Nemesis] new_master [Nemesis][3ih_VZsNQem5W4csDk-Ntg][inet[/166.78.177.149:9300]], reason: zen-disco-join (elected_as_master)
[2013-04-03 22:14:11,168][INFO ][discovery ] [Nemesis] elasticsearch/3ih_VZsNQem5W4csDk-Ntg
[2013-04-03 22:14:11,202][INFO ][http ] [Nemesis] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/166.78.177.149:9200]}
[2013-04-03 22:14:11,202][INFO ][node ] [Nemesis] {0.20.6}[2752]: started
[2013-04-03 22:14:11,275][INFO ][gateway ] [Nemesis] recovered [0] indices into cluster_state
The other node's log:
[2013-04-03 22:13:54,538][INFO ][node ] [Jaguar] {0.20.6}[3364]: initializing ...
[2013-04-03 22:13:54,546][INFO ][plugins ] [Jaguar] loaded [], sites []
[2013-04-03 22:13:58,825][INFO ][node ] [Jaguar] {0.20.6}[3364]: initialized
[2013-04-03 22:13:58,826][INFO ][node ] [Jaguar] {0.20.6}[3364]: starting ...
[2013-04-03 22:13:58,977][INFO ][transport ] [Jaguar] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/166.78.63.101:9300]}
[2013-04-03 22:14:02,041][INFO ][cluster.service ] [Jaguar] new_master [Jaguar][WXAO9WOoQDuYQo7Z2GeAOw][inet[/166.78.63.101:9300]], reason: zen-disco-join (elected_as_master)
[2013-04-03 22:14:02,094][INFO ][discovery ] [Jaguar] elasticsearch/WXAO9WOoQDuYQo7Z2GeAOw
[2013-04-03 22:14:02,129][INFO ][http ] [Jaguar] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/166.78.63.101:9200]}
[2013-04-03 22:14:02,129][INFO ][node ] [Jaguar] {0.20.6}[3364]: started
[2013-04-03 22:14:02,211][INFO ][gateway ] [Jaguar] recovered [0] indices into cluster_state
Is adding the network not enough (Rackspace also gave me an IP for this network)? Do I need to somehow specify in the conf file to check that network when using multicast to find other nodes?
I also found this article which offered a different approach. Per the article's instructions I put this into /config/elasticsearch.yml:
cloud:
  account: account #
  key: account key
  compute:
    type: rackspace
discovery:
  type: cloud
However, then when I tried to restart ElasticSearch I got this:
Stopping ElasticSearch...
Stopped ElasticSearch.
Starting ElasticSearch...
Waiting for ElasticSearch.......
WARNING: ElasticSearch may have failed to start.
And it did fail to start. I checked the log file for errors, but this was all that was there:
[2013-04-03 22:31:00,788][INFO ][node ] [Chamber] {0.20.6}[4354]: initializing ...
[2013-04-03 22:31:00,797][INFO ][plugins ] [Chamber] loaded [], sites []
And it stopped there without any errors and without continuing.
Has anyone successfully gotten ElasticSearch to work in the Rackspace cloud before? I know that the unicast option is also available, but I'd prefer to not have to specify each IP address individually, as I would like it to be easy to add other nodes later. Thanks!
UPDATE
I haven't solved the issue yet, but after some searching I found this post that says the "old" cloud plugin was discontinued and replaced with just an Ec2 plugin for Amazon's cloud, which explains why the changes I made to the config file do not work.
Multicast is disabled on the public cloud for security reasons (this can be verified with ifconfig). Here is an article that should get you what you need:
https://developer.rackspace.com/blog/elasticsearch-autodiscovery-on-the-rackspace-cloud/
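If multicast remains unavailable, the usual fallback is unicast discovery. A sketch for the two nodes above (IPs taken from the logs in the question):

```yaml
# elasticsearch.yml on both nodes
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["166.78.177.149", "166.78.63.101"]
```

Adding a node later then only requires extending this list, rather than multicast support from the network.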
