Unable to open Elasticsearch-head plugin – v2.1.1 - elasticsearch

I am trying to open the elasticsearch-head plugin at http://vmname:9200/_plugin/head, but no luck; I am getting 'This web page is not available'.
Need your help to fix this, not sure whether I am missing something here.
Please find the details of the environment below,
Elasticsearch version: 2.1.1
Plugins: elasticsearch-head
JAVA: java version "1.7.0_05"
OS: Linux 2.6.32-220.el6.x86_64
Plugin installation Logs:
[esearch@vmname bin]$ ./plugin install mobz/elasticsearch-head url http://github.com/mobz/elasticsearch-head/archive/master.zip
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ..........................................DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /opt/elasticsearch-2.1.1/plugins/head
Elasticsearch logs:
[esearch#vmname bin]$ ./elasticsearch
[2015-12-26 12:51:19,953][WARN ][bootstrap ] unable to install syscall filter: prctl(PR_GET_NO_NEW_PRIVS): Invalid argument
[2015-12-26 12:51:20,536][INFO ][node ] [Gatecrasher] version[2.1.1], pid[19215], build[40e2c53/2015-12-15T13:05:55Z]
[2015-12-26 12:51:20,536][INFO ][node ] [Gatecrasher] initializing ...
[2015-12-26 12:51:20,825][INFO ][plugins ] [Gatecrasher] loaded [], sites [head]
[2015-12-26 12:51:20,869][INFO ][env ] [Gatecrasher] using [1] data paths, mounts [[/ (/dev/sda3)]], net usable_space [4.8gb], net total_space [27.3gb], spins? [possibly], types [ext3]
[2015-12-26 12:51:24,514][INFO ][node ] [Gatecrasher] initialized
[2015-12-26 12:51:24,514][INFO ][node ] [Gatecrasher] starting ...
[2015-12-26 12:51:24,646][INFO ][transport ] [Gatecrasher] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2015-12-26 12:51:24,669][INFO ][discovery ] [Gatecrasher] elasticsearch/NhlBGx_kTiq7JeeEPihz2w
[2015-12-26 12:51:27,728][INFO ][cluster.service ] [Gatecrasher] new_master {Gatecrasher}{NhlBGx_kTiq7JeeEPihz2w}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-12-26 12:51:27,751][INFO ][http ] [Gatecrasher] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-12-26 12:51:27,751][INFO ][node ] [Gatecrasher] started
[2015-12-26 12:51:27,850][INFO ][gateway ] [Gatecrasher] recovered [0] indices into cluster_state
Please let me know if you need any additional information.
I am able to access the head plugin on Windows 7 without any issue, but I can't access the same on Linux.
EDIT:
Port 9200 is not reachable from outside the VM:
[root@newvm ~]# nc -vz vmname 9200
nc: connect to vmname port 9200 (tcp) failed: Connection refused
Whereas the port is LISTENING within the VM:
[root@vmname ~]# nc -vz vmname 9200
Connection to vmname 9200 port [tcp/wap-wsp] succeeded!
[root@vmname ~]# netstat -nat | grep :9200
tcp 0 0 ::1:9200 :::* LISTEN
tcp 0 0 ::ffff:127.0.0.1:9200 :::* LISTEN
I have disabled the firewall and tried again, but the issue still exists!
[root@vmname ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination

You currently have Elasticsearch configured to listen only on localhost.
You might want to try adding the following to your elasticsearch.yml:
network.host: [networkInterface]
NOTE: This WILL make your cluster listen on public interfaces, so you'll want to bring that firewall back up and restrict access.
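For example, to listen on all interfaces (a sketch; 0.0.0.0 is an assumption, substitute the VM's own address if you want to be more restrictive):

```yaml
# elasticsearch.yml (Elasticsearch 2.x)
# Binds both HTTP (9200) and transport (9300) to all interfaces.
network.host: 0.0.0.0
```

With the loopback-only binding shown in the netstat output above, connections arriving on any other address are refused, which matches the "Connection refused" from the other machine.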

Make sure the plugins directory has sufficient permissions for Elasticsearch to serve the plugin files:
sudo chown -R elasticsearch. /usr/share/elasticsearch

Related

Cannot retrieve cluster state due to : None of the configured nodes are available

I'm trying to implement search-guard-5-5.6.3- in ES 5.6.3 and I'm running into trouble.
While executing
./sgadmin.sh -ts truststore.jks -tspass 90f3cbdb3eabe04f815b -ks CN=sgadmin-keystore.jks -kspass a65d2a4fa62d7ed7a4d5 -cn cluster -h host -p 9200 -nhnv -cd ../sgconfig/
I get
Cannot retrieve cluster state due to: None of the configured nodes are
available: [{#transport#-1}{A1ZqEo4RSsqP3ZRSTXTUOg}{host}{host:9200}]. This is not an error, will keep on trying ...
Root cause: NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{A1ZqEo4RSsqP3ZRSTXTUOg}{host}{host:9200}]] (org.elasticsearch.client.transport.NoNodeAvailableException/org.elasticsearch.client.transport.NoNodeAvailableException)
* Try running sgadmin.sh with -icl (but no -cl) and -nhnv (If thats works you need to check your clustername as well as hostnames in your SSL certificates)
* Make also sure that your keystore or cert is a client certificate (not a node certificate) and configured properly in elasticsearch.yml
* If this is not working, try running sgadmin.sh with --diagnose and see diagnose trace log file)
* Add --accept-red-cluster to allow sgadmin to operate on a red cluster.
My cluster starts correctly; the ES log says:
[2017-11-08T15:54:55,354][INFO ][c.f.s.s.DefaultSearchGuardKeyStore] sslTransport protocols [TLSv1.2, TLSv1.1]
[2017-11-08T15:54:55,354][INFO ][c.f.s.s.DefaultSearchGuardKeyStore] sslHTTP protocols [TLSv1.2, TLSv1.1]
[2017-11-08T15:54:55,356][INFO ][o.e.p.PluginsService ] [node_1] loaded module [aggs-matrix-stats]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [ingest-common]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [lang-expression]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [lang-groovy]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [lang-mustache]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [lang-painless]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [parent-join]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [percolator]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [reindex]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [transport-netty3]
[2017-11-08T15:54:55,357][INFO ][o.e.p.PluginsService ] [node_1] loaded module [transport-netty4]
[2017-11-08T15:54:55,363][INFO ][o.e.p.PluginsService ] [node_1] loaded plugin [search-guard-5]
[2017-11-08T15:54:59,119][DEBUG][o.e.a.ActionModule ] Using REST wrapper from plugin com.floragunn.searchguard.SearchGuardPlugin
[2017-11-08T15:54:59,193][INFO ][c.f.s.SearchGuardPlugin ] FLS/DLS valve not bound (noop) due to java.lang.ClassNotFoundException: com.floragunn.searchguard.configuration.DlsFlsValveImpl
[2017-11-08T15:54:59,194][INFO ][c.f.s.SearchGuardPlugin ] Auditlog not available due to java.lang.ClassNotFoundException: com.floragunn.searchguard.auditlog.impl.AuditLogImpl
[2017-11-08T15:54:59,196][INFO ][c.f.s.SearchGuardPlugin ] Privileges interceptor not bound (noop) due to java.lang.ClassNotFoundException: com.floragunn.searchguard.configuration.PrivilegesInterceptorImpl
[2017-11-08T15:54:59,660][INFO ][o.e.d.DiscoveryModule ] [node_1] using discovery type [zen]
[2017-11-08T15:55:00,694][INFO ][o.e.n.Node ] [node_1] initialized
[2017-11-08T15:55:00,695][INFO ][o.e.n.Node ] [node_1] starting ...
[2017-11-08T15:55:01,017][INFO ][o.e.t.TransportService ] [node_1] publish_address {host:9300}, bound_addresses {host:9300}
[2017-11-08T15:55:01,038][INFO ][o.e.b.BootstrapChecks ] [node_1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-08T15:55:01,052][INFO ][c.f.s.c.IndexBaseConfigurationRepository] Check if searchguard index exists ...
[2017-11-08T15:55:01,058][DEBUG][o.e.a.a.i.e.i.TransportIndicesExistsAction] [node_1] no known master node, scheduling a retry
[2017-11-08T15:55:04,143][INFO ][o.e.c.s.ClusterService ] [node_1] new_master {node_1}{aN2lbPkJSHWWFTllDhVeNQ}{NYFK1tN7SjC_41uRabKqRw}{mongodb-rec3.ib.fr.cly}{host:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-08T15:55:04,250][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [node_1] publish_address {host:9200}, bound_addresses {host:9200}
[2017-11-08T15:55:04,251][INFO ][o.e.n.Node ] [node_1] started
[2017-11-08T15:55:04,542][INFO ][o.e.g.GatewayService ] [node_1] recovered [3] indices into cluster_state
[2017-11-08T15:55:05,353][INFO ][o.e.c.r.a.AllocationService] [node_1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[fs][4]] ...]).
[2017-11-08T15:55:05,465][INFO ][c.f.s.c.IndexBaseConfigurationRepository] Node 'node_1' initialized
But while trying to send a request to http://host:9200, I am getting the following error:
[2017-11-08T16:09:10,954][WARN ][c.f.s.h.SearchGuardHttpServerTransport] [node_1] Someone (/host:46422) speaks http plaintext instead of ssl, will close the channel
There are two different issues here.
First, you try to connect to the HTTP port with sgadmin, but sgadmin uses the transport port. So, instead of:
-p 9200
You need to use the transport port:
-p 9300
You can also omit this setting, since 9300 is the default.
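Putting that together with the command from the question (a sketch; the truststore, keystore, passwords, and host are the ones given above and are specific to that setup):

```
./sgadmin.sh -ts truststore.jks -tspass 90f3cbdb3eabe04f815b \
  -ks CN=sgadmin-keystore.jks -kspass a65d2a4fa62d7ed7a4d5 \
  -cn cluster -h host -p 9300 -nhnv -cd ../sgconfig/
```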
Then, you try to connect to Elasticsearch with http: http://host:9200
But most likely you have HTTPS configured in elasticsearch.yml; that's why the HTTP connection fails, and that's what the error message says:
Someone (/host:46422) speaks http plaintext instead of ssl, will close the channel
So either connect with HTTPS instead of HTTP, or disable HTTPS in elasticsearch.yml (not recommended, since it is insecure):
searchguard.ssl.http.enabled: false
You can also find a troubleshooting article in the docs: http://docs.search-guard.com/latest/troubleshooting-sgadmin

Port issues with Vagrant and Elasticsearch

I've been trying to install Elasticsearch in a brand new Ubuntu box (ubuntu/trusty64) using Vagrant.
This is what I get when I run curl localhost:9200 in my guest machine
{
"name" : "Base",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.3.4",
"build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
"build_timestamp" : "2016-06-30T11:24:31Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}
It seems good. But when I run the same command on my host, this is what I get:
curl: (52) Empty reply from server
Here are my port forwarding data (vagrant port):
22 (guest) => 2222 (host)
80 (guest) => 8080 (host)
9200 (guest) => 9200 (host)
9300 (guest) => 9300 (host)
So, the ports seem to be properly forwarded, and the Elasticsearch service in the guest VM is running fine.
Here's my firewall configuration in the guest VM (sudo ufw status)
To Action From
-- ------ ----
22 ALLOW Anywhere
9200/tcp ALLOW Anywhere
9300/tcp ALLOW Anywhere
22 (v6) ALLOW Anywhere (v6)
9200/tcp (v6) ALLOW Anywhere (v6)
9300/tcp (v6) ALLOW Anywhere (v6)
I also have an Apache server that runs with no problem when using localhost:8080 (forwarded to localhost:80).
Also, nothing weird in logs: cat /var/log/elasticsearch/elasticsearch.log:
[2016-07-12 18:49:37,838][INFO ][node ] [Raymond Sikorsky] version[2.3.4], pid[2138], build[e455fd0/2016-06-30T11:24:31Z]
[2016-07-12 18:49:37,839][INFO ][node ] [Raymond Sikorsky] initializing ...
[2016-07-12 18:49:38,439][INFO ][plugins ] [Raymond Sikorsky] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-07-12 18:49:38,464][INFO ][env ] [Raymond Sikorsky] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [35.8gb], net total_space [39.3gb], spins? [possibly], types [ext4]
[2016-07-12 18:49:38,464][INFO ][env ] [Raymond Sikorsky] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-07-12 18:49:38,464][WARN ][env ] [Raymond Sikorsky] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-12 18:49:40,455][INFO ][node ] [Raymond Sikorsky] initialized
[2016-07-12 18:49:40,455][INFO ][node ] [Raymond Sikorsky] starting ...
[2016-07-12 18:49:40,521][INFO ][transport ] [Raymond Sikorsky] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-07-12 18:49:40,527][INFO ][discovery ] [Raymond Sikorsky] elasticsearch/HqATev5kScKOXLXdl44ZLA
[2016-07-12 18:49:43,585][INFO ][cluster.service ] [Raymond Sikorsky] new_master {Raymond Sikorsky}{HqATev5kScKOXLXdl44ZLA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-12 18:49:43,622][INFO ][http ] [Raymond Sikorsky] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-07-12 18:49:43,622][INFO ][node ] [Raymond Sikorsky] started
[2016-07-12 18:49:43,625][INFO ][gateway ] [Raymond Sikorsky] recovered [0] indices into cluster_state
I know just the basics of servers. Am I missing something? Maybe the issue has nothing to do with ports?
I ran the following on an Ubuntu box and installed the latest version of Elasticsearch to try it out (there have been some breaking changes in 2.x). Here are the steps I took to make it work.
First, change the network binding.
That is one of the breaking changes:
Elasticsearch 2.x will only bind to localhost by default. It will try to bind to both 127.0.0.1 (IPv4) and [::1] (IPv6), but will work happily in environments where only IPv4 or IPv6 is available. This change prevents Elasticsearch from trying to connect to other nodes on your network unless you specifically tell it to do so
like it is mentioned here
I wanted to add an additional note that often this is caused by the
server within the VM because it binds to 127.0.0.1, which is loopback.
You'll want to make sure that the server is bound to 0.0.0.0 so that
all interfaces can access it.
This is the case here, and we need to change the host property. Open /etc/elasticsearch/elasticsearch.yml and add:
network.bind_host: 0
network.host: 0.0.0.0
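The underlying behavior can be demonstrated with a few lines of self-contained Python (a sketch, not Elasticsearch-specific): a server bound to 127.0.0.1 is invisible to connections arriving on any other address, while one bound to 0.0.0.0 answers on all of them. That is exactly why the forwarded port returned an empty reply.

```python
import socket

def can_connect(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A server bound to loopback only -- the situation in the question.
loopback_srv = socket.socket()
loopback_srv.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
loopback_srv.listen(1)
lb_port = loopback_srv.getsockname()[1]

# A server bound to all interfaces -- the effect of network.host: 0.0.0.0.
any_srv = socket.socket()
any_srv.bind(("0.0.0.0", 0))
any_srv.listen(1)
any_port = any_srv.getsockname()[1]

# 127.0.0.2 stands in for "an address other than 127.0.0.1"; on Linux the
# whole 127/8 block is routed to the loopback interface.
via_bound_addr = can_connect("127.0.0.1", lb_port)   # True: matches the bind address
via_other_addr = can_connect("127.0.0.2", lb_port)   # False: server not bound there
via_any_iface = can_connect("127.0.0.2", any_port)   # True: 0.0.0.0 answers everywhere

print(via_bound_addr, via_other_addr, via_any_iface)
loopback_srv.close()
any_srv.close()
```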

Elasticsearch fails to start

I'm trying to implement a 2-node ES cluster using Amazon EC2 instances. After everything is set up and I try to start ES, it fails to start. Below are the config files:
/etc/elasticsearch/elasticsearch.yml - http://pastebin.com/3Q1qNqmZ
/etc/init.d/elasticsearch - http://pastebin.com/f3aJyurR
Below are the /var/log/elasticsearch/es-cluster.log content -
[2014-06-08 07:06:01,761][WARN ][common.jna ] Unknown mlockall error 0
[2014-06-08 07:06:02,095][INFO ][node ] [logstash] version[0.90.13], pid[29666], build[249c9c5/2014-03-25T15:27:12Z]
[2014-06-08 07:06:02,095][INFO ][node ] [logstash] initializing ...
[2014-06-08 07:06:02,108][INFO ][plugins ] [logstash] loaded [], sites []
[2014-06-08 07:06:07,504][INFO ][node ] [logstash] initialized
[2014-06-08 07:06:07,510][INFO ][node ] [logstash] starting ...
[2014-06-08 07:06:07,646][INFO ][transport ] [logstash] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.164.27.207:9300]}
[2014-06-08 07:06:12,177][INFO ][cluster.service ] [logstash] new_master [logstash][vCS_3LzESEKSN-thhGWeGA][inet[/<an_ip_is_here>:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-08 07:06:12,208][INFO ][discovery ] [logstash] es-cluster/vCS_3LzESEKSN-thhGWeGA
[2014-06-08 07:06:12,334][INFO ][http ] [logstash] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/<an_ip_is_here>:9200]}
[2014-06-08 07:06:12,335][INFO ][node ] [logstash] started
[2014-06-08 07:06:12,379][INFO ][gateway ] [logstash] recovered [0] indices into cluster_state
I see several things that you should correct in your configuration files.
1) You need different node names. You are using the same config file for both nodes, which you do not want to do if you are setting the node name explicitly (node.name: "logstash"). Either create separate configuration files with different node.name entries, or comment it out and let ES auto-assign the node name.
2) The mlockall setting is throwing an error. I would not set bootstrap.mlockall: true until you've first gotten ES to run without it and then spent a little time configuring Linux to support it. It can cause problems at startup:
Warning
mlockall might cause the JVM or shell session to exit if it tries to
allocate more memory than is available!
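A sketch of what the two per-node config files might look like (the node names here are made up; cluster.name is inferred from the es-cluster.log file name above):

```yaml
# elasticsearch.yml on node 1
cluster.name: es-cluster
node.name: "logstash-1"
# bootstrap.mlockall: true   # re-enable only after the cluster starts cleanly

# elasticsearch.yml on node 2
cluster.name: es-cluster
node.name: "logstash-2"
```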
I'd check out the documentation on the configuration variables and be careful about making too many adjustments right out of the gate.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-service.html
If you do want to make memory adjustments to ES, this previous Stack Overflow article should be helpful:
How to change Elasticsearch max memory size

logstash - Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]

Logstash is 100% a disaster for me. I am using LS 1.4.1 and ES 1.0.2 on the same machine.
Here is how I start logstash indexer:
/usr/local/share/logstash-1.4.1/bin/logstash -f /usr/local/share/logstash.indexer.config
input {
redis {
host => "redis.queue.do.development.sf.test.com"
data_type => "list"
key => "logstash"
codec => json
}
}
output {
stdout { }
elasticsearch {
bind_host => "127.0.0.1"
port => "9300"
}
}
ES I set:
network.bind_host: 127.0.0.1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]
And wow... this is what I get:
/usr/local/share/logstash-1.4.1/bin/logstash -f /usr/local/share/logstash.indexer.config
Using milestone 2 input plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
log4j, [2014-05-29T12:02:29.545] WARN: org.elasticsearch.discovery: [logstash-do-logstash-sf-development-20140527082230-866-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
See http://logstash.net/docs/1.4.1/outputs/elasticsearch
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use any other version of Elasticsearch, you should set protocol => http in this plugin.
So your problem is that logstash doesn't support the older ES version you are using without using an http transport.
Setting protocol => "http" worked for me. I expected the EPEL repo to have complementary versions of logstash and elasticsearch, but ES is used for lots of things, and thus is not tightly coupled with the logstash rpms.
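Applied to the indexer config from the question, the output block becomes (a sketch; bind_host and port are dropped in favor of the http protocol setting, and the host value is an assumption):

```
output {
  stdout { }
  elasticsearch {
    protocol => "http"
    host => "127.0.0.1"
  }
}
```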
For me, the problem wasn't with the versions of elasticsearch or logstash. I had just installed them and I was using the latest version of each (1.5.0 & 1.4.2 respectively).
Running the following worked for me as well:
logstash -e 'input { stdin { } } output { elasticsearch { protocol => "http" } }'
But I wanted to get to the bottom of why I wasn't able to connect over the other protocols. Though the documentation doesn't say what the default protocol is, I was pretty sure I was using either transport or node on port 9300 by default, because of the following output I got when I started elasticsearch:
[2015-04-14 22:21:56,355][INFO ][node ] [Super-Nova] version[1.5.0], pid[10796], build[5448160/2015-03-23T14:30:58Z]
[2015-04-14 22:21:56,355][INFO ][node ] [Super-Nova] initializing ...
[2015-04-14 22:21:56,358][INFO ][plugins ] [Super-Nova] loaded [], sites []
[2015-04-14 22:21:58,186][INFO ][node ] [Super-Nova] initialized
[2015-04-14 22:21:58,187][INFO ][node ] [Super-Nova] starting ...
[2015-04-14 22:21:58,257][INFO ][transport ] [Super-Nova] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
[2015-04-14 22:21:58,273][INFO ][discovery ] [Super-Nova] elasticsearch/KPaTxb9vRnaNXBncN5KN7g
[2015-04-14 22:22:02,053][INFO ][cluster.service ] [Super-Nova] new_master [Super-Nova][KPaTxb9vRnaNXBncN5KN7g][Azads-MBP-2][inet[/127.0.0.1:9300]], reason: zen-disco-join (elected_as_master)
[2015-04-14 22:22:02,069][INFO ][http ] [Super-Nova] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/127.0.0.1:9200]}
[2015-04-14 22:22:02,069][INFO ][node ] [Super-Nova] started
At first, I tried opening up port 9300 by following these instructions. That didn't change a thing, so most likely that port wasn't blocked.
Then I stumbled upon this github issue. There wasn't really a solution there that helped, but I did double-check my elasticsearch cluster name by looking at elasticsearch.yml (this file is usually stored where elasticsearch is installed; run "which elasticsearch" to get an idea where to look). Lo and behold, my elasticsearch cluster.name had my name appended to it. Removing it so that the cluster name was just "elasticsearch" helped logstash discover my elasticsearch instance.
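In config terms, the fix was simply making the cluster name in elasticsearch.yml match what logstash expects by default (sketch):

```yaml
# elasticsearch.yml
cluster.name: elasticsearch   # previously had a name appended, which logstash could not discover
```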

Install elasticsearch on OpenShift

I installed a pre-built Elasticsearch 1.0.0 by following this tutorial. When I start elasticsearch, I get the following error message. Should I try an older version of ES, or how can I fix this issue?
[elastic-dataportal.rhcloud.com elasticsearch-1.0.0]> ./bin/elasticsearch
[2014-02-25 10:02:18,757][INFO ][node ] [Desmond Pitt] version[1.0.0], pid[203443], build[a46900e/2014-02-12T16:18:34Z]
[2014-02-25 10:02:18,764][INFO ][node ] [Desmond Pitt] initializing ...
[2014-02-25 10:02:18,780][INFO ][plugins ] [Desmond Pitt] loaded [], sites []
OpenJDK Server VM warning: You have loaded library /var/lib/openshift/430c93b1500446b03a00005c/app-root/data/elasticsearch-1.0.0/lib/sigar/libsigar-x86-linux.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
[2014-02-25 10:02:32,198][INFO ][node ] [Desmond Pitt] initialized
[2014-02-25 10:02:32,205][INFO ][node ] [Desmond Pitt] starting ...
[2014-02-25 10:02:32,813][INFO ][transport ] [Desmond Pitt] bound_address {inet[/127.8.212.129:3306]}, publish_address {inet[/127.8.212.129:3306]}
[2014-02-25 10:02:35,949][INFO ][cluster.service ] [Desmond Pitt] new_master [Desmond Pitt][_bWO_h9ETTWrMNr7x_yALg][ex-std-node134.prod.rhcloud.com][inet[/127.8.212.129:3306]], reason: zen-disco-join (elected_as_master)
[2014-02-25 10:02:36,167][INFO ][discovery ] [Desmond Pitt] elasticsearch/_bWO_h9ETTWrMNr7x_yALg
{1.0.0}: Startup Failed ...
- BindHttpException[Failed to bind to [8080]]
ChannelException[Failed to bind to: /127.8.212.129:8080]
BindException[Address already in use]
You first have to stop the running demo application, which is already bound to port 8080. This can be done with this command:
ctl_app stop
After running this command you will be able to start elasticsearch on port 8080. However, this is not recommended for production environments.
I would recommend installing ElasticSearch with this cartridge: https://github.com/ncdc/openshift-elasticsearch-cartridge
It will save you the headaches of manual custom configurations.
You are trying to assign ES to port 8080, which is already taken. The culprit in the config from there is http.port: ${OPENSHIFT_DIY_PORT}. Just leave both port configs out of the config, or assign the env var some other port. The default ports for ES are 9200 for HTTP and 9300 for transport.
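In other words, in the config from the question (a sketch; the env var name comes from the linked tutorial's setup):

```yaml
# elasticsearch.yml on OpenShift
# http.port: ${OPENSHIFT_DIY_PORT}   # remove or comment out this line (8080 is
#                                    # taken by the demo app), or point the env
#                                    # var at a free port; ES then falls back to
#                                    # its defaults: 9200 (http), 9300 (transport)
```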
